
Butterfly classification

1. Motivation

The dataset used in the earlier plant-classification experiment had some problems, and fixing it would have been cumbersome, so this time I use Kaggle's ButterflyMothsImageClassification dataset and classify 100 species of butterflies and moths.

2. The 100 butterfly and moth classes

‘ADONIS’,‘AFRICAN GIANT SWALLOWTAIL’,‘AMERICAN SNOOT’,‘AN 88’,‘APPOLLO’,‘ARCIGERA FLOWER MOTH’,‘ATALA’,‘ATLAS MOTH’,‘BANDED ORANGE HELICONIAN’,‘BANDED PEACOCK’,‘BANDED TIGER MOTH’,‘BECKERS WHITE’,‘BIRD CHERRY ERMINE MOTH’,‘BLACK HAIRSTREAK’,‘BLUE MORPHO’,‘BLUE SPOTTED CROW’,‘BROOKES BIRDWING’,‘BROWN ARGUS’,‘BROWN SIPROETA’,‘CABBAGE WHITE’,‘CAIRNS BIRDWING’,‘CHALK HILL BLUE’,‘CHECQUERED SKIPPER’,‘CHESTNUT’,‘CINNABAR MOTH’,‘CLEARWING MOTH’,‘CLEOPATRA’,‘CLODIUS PARNASSIAN’,‘CLOUDED SULPHUR’,‘COMET MOTH’,‘COMMON BANDED AWL’,‘COMMON WOOD-NYMPH’,‘COPPER TAIL’,‘CRECENT’,‘CRIMSON PATCH’,‘DANAID EGGFLY’,‘EASTERN COMA’,‘EASTERN DAPPLE WHITE’,‘EASTERN PINE ELFIN’,‘ELBOWED PIERROT’,‘EMPEROR GUM MOTH’,‘GARDEN TIGER MOTH’,‘GIANT LEOPARD MOTH’,‘GLITTERING SAPPHIRE’,‘GOLD BANDED’,‘GREAT EGGFLY’,‘GREAT JAY’,‘GREEN CELLED CATTLEHEART’,‘GREEN HAIRSTREAK’,‘GREY HAIRSTREAK’,‘HERCULES MOTH’,‘HUMMING BIRD HAWK MOTH’,‘INDRA SWALLOW’,‘IO MOTH’,‘Iphiclus sister’,‘JULIA’,‘LARGE MARBLE’,‘LUNA MOTH’,‘MADAGASCAN SUNSET MOTH’,‘MALACHITE’,‘MANGROVE SKIPPER’,‘MESTRA’,‘METALMARK’,‘MILBERTS TORTOISESHELL’,‘MONARCH’,‘MOURNING CLOAK’,‘OLEANDER HAWK MOTH’,‘ORANGE OAKLEAF’,‘ORANGE TIP’,‘ORCHARD SWALLOW’,‘PAINTED LADY’,‘PAPER KITE’,‘PEACOCK’,‘PINE WHITE’,‘PIPEVINE SWALLOW’,‘POLYPHEMUS MOTH’,‘POPINJAY’,‘PURPLE HAIRSTREAK’,‘PURPLISH COPPER’,‘QUESTION MARK’,‘RED ADMIRAL’,‘RED CRACKER’,‘RED POSTMAN’,‘RED SPOTTED PURPLE’,‘ROSY MAPLE MOTH’,‘SCARCE SWALLOW’,‘SILVER SPOT SKIPPER’,‘SIXSPOT BURNET MOTH’,‘SLEEPY ORANGE’,‘SOOTYWING’,‘SOUTHERN DOGFACE’,‘STRAITED QUEEN’,‘TROPICAL LEAFWING’,‘TWO BARRED FLASHER’,‘ULYSES’,‘VICEROY’,‘WHITE LINED SPHINX MOTH’,‘WOOD SATYR’,‘YELLOW SWALLOW TAIL’,‘ZEBRA LONG WING’
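The config in the next section reads the data through mmpretrain's ImageNet-style annotation lists (one "relative/path.jpg class_index" pair per line, referenced via ann_file='train.txt' and similar). Below is a minimal sketch of how such list files could be generated; the folder layout (train/, valid/, test/ with one subfolder per class), the output file names, and the helper function itself are assumptions based on the Kaggle archive and on the config, not something shown in the original post.

import os

# Hypothetical helper: build an ImageNet-style annotation file
# ("relative/path.jpg <class-index>" per line) from a split folder
# that contains one subfolder per class.
def write_ann_file(split_dir, out_file, class_to_idx):
    lines = []
    for cls_name in sorted(os.listdir(split_dir)):
        cls_dir = os.path.join(split_dir, cls_name)
        if not os.path.isdir(cls_dir):
            continue
        for fname in sorted(os.listdir(cls_dir)):
            # paths are stored relative to the split folder
            lines.append(f'{cls_name}/{fname} {class_to_idx[cls_name]}')
    with open(out_file, 'w') as f:
        f.write('\n'.join(lines) + '\n')

data_root = 'data/ButterflyMothsImageClassification'  # same value as in the config below
train_dir = os.path.join(data_root, 'train')
classes = sorted(d for d in os.listdir(train_dir) if os.path.isdir(os.path.join(train_dir, d)))
class_to_idx = {name: idx for idx, name in enumerate(classes)}  # 100 class folders assumed

write_ann_file(train_dir, os.path.join(data_root, 'train.txt'), class_to_idx)
write_ann_file(os.path.join(data_root, 'valid'), os.path.join(data_root, 'valid.txt'), class_to_idx)
write_ann_file(os.path.join(data_root, 'test'), os.path.join(data_root, 'test.txt'), class_to_idx)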

3. Configuration file

auto_scale_lr = dict(base_batch_size=256)
data_preprocessor = dict(mean=[123.675,116.28,103.53,],num_classes=100,std=[58.395,57.12,57.375,],to_rgb=True)
dataset_type = 'ImageNet'
data_root = 'data/ButterflyMothsImageClassification'
default_hooks = dict(
    checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=2, save_best='auto'),
    logger=dict(type='LoggerHook', interval=100),
    param_scheduler=dict(type='ParamSchedulerHook'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    timer=dict(type='IterTimerHook'),
    visualization=dict(type='VisualizationHook', enable=False))
default_scope = 'mmpretrain'
env_cfg = dict(cudnn_benchmark=False,dist_cfg=dict(backend='nccl'),mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = './work_dirs/resnet50_8xb32-coslr_in1k/resnet50_8xb32_in1k_20210831-ea4938fc.pth'
log_level = 'INFO'
model = dict(
    type='ImageClassifier',
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(3,), style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=100,
        in_channels=2048,
        # loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        loss=dict(type='LabelSmoothLoss', label_smooth_val=0.1, num_classes=100, reduction='mean', loss_weight=1.0),
        topk=(1, 5)),
    data_preprocessor=data_preprocessor)
train_cfg = dict(by_epoch=True, max_epochs=300, val_interval=1)
optim_wrapper = dict(optimizer=dict(lr=0.1, momentum=0.9, type='SGD', weight_decay=0.0001))
param_scheduler = dict(T_max=260, begin=20, by_epoch=True, end=300, type='CosineAnnealingLR')
randomness = dict(deterministic=False, seed=None)
resume = False
test_cfg = dict()
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=256, edge='short'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='PackInputs'),
]
test_dataloader = dict(
    batch_size=32,
    num_workers=1,
    collate_fn=dict(type='default_collate'),
    dataset=dict(type=dataset_type, data_root=data_root, ann_file='test.txt', split='test', pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
    pin_memory=True)
test_evaluator = dict(type='Accuracy', topk=(1, 5))
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]
train_dataloader = dict(
    batch_size=45,
    num_workers=1,
    collate_fn=dict(type='default_collate'),
    dataset=dict(type=dataset_type, data_root=data_root, ann_file='train.txt', split='train', pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),
    persistent_workers=True,
    pin_memory=True)
val_cfg = dict()
val_dataloader = dict(
    batch_size=45,
    num_workers=1,
    collate_fn=dict(type='default_collate'),
    dataset=dict(type=dataset_type, data_root=data_root, ann_file='valid.txt', split='val', pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
    pin_memory=True)
val_evaluator = test_evaluator
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(type='UniversalVisualizer', vis_backends=vis_backends)
work_dir = './work_dirs\\resnet50_8xb32-coslr_in1k'
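
With the config above saved to a file, training is normally started with mmpretrain's tools/train.py (python tools/train.py <config-path>). The short sketch below does roughly the same thing programmatically through mmengine's Runner; the config file name and the work_dir override are placeholders, not paths from the original post.

from mmengine.config import Config
from mmengine.runner import Runner

# Path to the config shown above; the file name is a placeholder.
cfg = Config.fromfile('configs/resnet50_butterfly100.py')
cfg.work_dir = './work_dirs/resnet50_butterfly100'  # where logs and checkpoints will be written

# Build model, dataloaders, hooks and schedulers from the config, then train.
runner = Runner.from_cfg(cfg)
runner.train()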

4. Training results

accuracy/top1: 97.0000 accuracy/top5: 99.0000
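
Single images can then be classified with the trained checkpoint, which is how prediction figures like those in the next section can be produced. Below is a minimal sketch using mmpretrain's ImageClassificationInferencer; the config path, checkpoint name and test image are placeholders, and the exact result keys may differ slightly between mmpretrain versions.

from mmpretrain import ImageClassificationInferencer

# Config and checkpoint paths are placeholders; adjust the checkpoint name
# to the file actually saved in work_dir (save_best='auto' keeps the best one).
inferencer = ImageClassificationInferencer(
    model='configs/resnet50_butterfly100.py',
    pretrained='work_dirs/resnet50_butterfly100/epoch_300.pth',
    device='cuda:0')

# One result dict per input image, with predicted label index, score,
# and (when class metainfo is available) the class name.
result = inferencer('demo_butterfly.jpg')[0]
print(result['pred_label'], result['pred_score'], result.get('pred_class'))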

5. Results

[Result screenshots from the original post omitted]
