
[Pytorch] Handwritten Digit Recognition: Truly Hand-Written!

Github repo: https://github.com/diaoquesang/pytorchTutorials/tree/main

This tutorial was created on 2023/7/31. Almost every line of code carries a comment, to help beginners understand how Dataset, DataLoader, and transform are wrapped, get a first taste of tuning hyperparameters, and pick up basic use of libraries such as opencv, pandas, and os. 😋 It is a handwritten digit recognition project written entirely by hand (some dataset-related operations are simplified to keep the code short; a sketch of the expected annotation format is given below). Enjoy running the whole Pytorch pipeline end to end! ❤️❤️❤️

Give me a ⭐ if you like this project!
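
To keep things small, the data side is just a numberImages/ folder plus a space-separated annotations.txt, which the Dataset class in train.py below reads with pd.read_csv(annotations_file, sep=" ", header=None): column 0 holds an image file name (resolved against img_dir with os.path.join) and column 1 holds the digit label. The real file ships with the Github repo; a hypothetical annotations.txt in that layout (these file names and labels are purely illustrative) could look like:

5.bmp 5
7.bmp 7
9.bmp 9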

1. train.py

import torch
import torchvision
from torch import nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import os
import cv2 as cv
import pandas as pd


class myDataset(Dataset):  # Define the dataset class
    def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
        # Arguments: label file path, image directory, image transform, label transform
        self.img_labels = pd.read_csv(annotations_file, sep=" ", header=None)
        # Read the labels from the label file; sep is the field delimiter, header is the row index of the column titles
        self.img_dir = img_dir  # Store the image directory
        self.transform = transform  # Store the image transform
        self.target_transform = target_transform  # Store the label transform

    def __len__(self):
        return len(self.img_labels)  # The number of labels is the length of the dataset

    def __getitem__(self, idx):  # Fetch one sample from the dataset
        img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
        # Take row idx, column 0 of the label table (column 0 holds the image location, e.g. numberImages\5.bmp) and join it with the image directory (numberImages)
        image = cv.imread(img_path)  # Read the image with OpenCV's imread
        label = self.img_labels.iloc[idx, 1]  # Take row idx, column 1 of the label table (column 1 holds the image label, e.g. 5)
        if self.transform:
            image = self.transform(image)  # Preprocess the image
        if self.target_transform:
            label = self.target_transform(label)  # Preprocess the label
        return image, label  # Return the image and its label


class myTransformMethod1():  # In Python 3, classes inherit from object by default
    def __call__(self, img):  # __call__ makes an instance callable, like a function
        img = cv.resize(img, (28, 28))  # Resize the image
        img = cv.cvtColor(img, cv.COLOR_BGR2RGB)  # Convert BGR (OpenCV's default reading order) to RGB
        return img  # Return the preprocessed image


# Test statements
# print(pd.read_csv("annotations.txt", sep=" ", header=None))
# print(os.path.join("numberImages", pd.read_csv("annotations.txt", sep=" ", header=None).iloc[5, 0]))
# print(pd.read_csv("annotations.txt", sep=" ", header=None).iloc[5, 1])
# cv.imshow("1",cv.imread(os.path.join("numberImages", pd.read_csv("annotations.txt", sep=" ", header=None).iloc[5, 0])))
# cv.waitKey(0)


class myNetwork(nn.Module):  # Define the neural network
    def __init__(self):
        super().__init__()  # Call nn.Module's constructor
        self.flatten = nn.Flatten(-3, -1)
        # nn.Flatten: during inference there is no batch dimension (the input is CHW), so the default (1, -1) would leave C unflattened, hence (-3, -1)
        self.linear_relu_stack = nn.Sequential(  # Define the forward-pass stack
            nn.Linear(3 * 28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):  # Define the forward pass
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


# Choose the device: cuda by default, fall back to mps if cuda is unavailable, and to cpu if mps is unavailable as well
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")  # Print the device in use

model = myNetwork().to(device)  # Create the model instance

# Hyperparameters
learning_rate = 1e-5  # Learning rate
batch_size = 8  # Number of samples per batch
epochs = 3000  # Total number of epochs

img_path = "./numberImages"  # Image directory
label_path = "./annotations.txt"  # Label file path

myTransform = transforms.Compose([myTransformMethod1(), transforms.ToTensor()])
# Image preprocessing pipeline; in ToTensor() Pytorch converts HWC (OpenCV reads height, width, channel) to CHW and divides the [0, 255] values by 255 to normalize them to [0, 1]

myDataset = myDataset(label_path, img_path, myTransform)  # Create the dataset instance

myDataLoader = DataLoader(myDataset, batch_size=batch_size, shuffle=True)
# Create the data loader (separate ones can be created for the training and test sets); batch_size is the number of samples per batch (usually a power of 2 for speed), shuffle randomly shuffles the data


def train():
    # Train for `epochs` epochs
    for epoch in range(epochs):
        totalLoss = 0
        # Read the data batch by batch
        for batch, (images, labels) in enumerate(myDataLoader):
            # Move the data to the chosen device
            images = images.to(device)
            labels = labels.to(device)
            pred = model(images)  # Forward pass
            myLoss = nn.CrossEntropyLoss()  # Define the loss function (cross entropy)
            optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # Define the optimizer
            loss = myLoss(pred, labels)  # Compute the loss
            totalLoss += loss  # Add it to the total loss
            loss.backward()  # Backward pass
            optimizer.step()  # Update the weights
            optimizer.zero_grad()  # Clear the gradients
            if batch % 1 == 0:  # Print the loss every batch
                loss, current = loss.item(), min((batch + 1) * batch_size, len(myDataset))
                print(f"epoch: {epoch:>5d} loss: {loss:>7f}  [{current:>5d}/{len(myDataset):>5d}]")
        if epoch == 0:
            minTotalLoss = totalLoss
        if totalLoss < minTotalLoss:
            print("······························Model saved······························")
            minTotalLoss = totalLoss
            torch.save(model, "./myModel.pth")  # Save the best model so far


if __name__ == "__main__":
    model.train()  # Set training mode
    train()
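
One note on the loop above: nn.CrossEntropyLoss() and the Adam optimizer are re-created inside the batch loop, which also resets Adam's internal moment estimates on every step, and totalLoss accumulates tensors that still carry the computation graph. The more common Pytorch pattern is to build the loss and optimizer once, before training starts; a minimal sketch of that variation (not the repo's code; it reuses model, myDataLoader, device and the hyperparameters defined above):

myLoss = nn.CrossEntropyLoss()  # built once
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # built once, so Adam keeps its moment estimates across steps


def train():
    for epoch in range(epochs):
        totalLoss = 0
        for batch, (images, labels) in enumerate(myDataLoader):
            images, labels = images.to(device), labels.to(device)
            pred = model(images)  # forward pass
            loss = myLoss(pred, labels)  # compute the loss
            totalLoss += loss.item()  # accumulate a plain float, not a graph-carrying tensor
            loss.backward()  # backward pass
            optimizer.step()  # update the weights
            optimizer.zero_grad()  # clear the gradients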

2. eval.py

import torch
import torchvision
from torch import nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import os
import cv2 as cv
import pandas as pd


class myTransformMethod1():  # In Python 3, classes inherit from object by default
    def __call__(self, img):  # __call__ makes an instance callable, like a function
        img = cv.resize(img, (28, 28))  # Resize the image
        img = cv.cvtColor(img, cv.COLOR_BGR2RGB)  # Convert BGR (OpenCV's default reading order) to RGB
        return img  # Return the preprocessed image


class myNetwork(nn.Module):  # Define the neural network
    def __init__(self):
        super().__init__()  # Call nn.Module's constructor
        self.flatten = nn.Flatten(-3, -1)
        # nn.Flatten: during inference there is no batch dimension (the input is CHW), so the default (1, -1) would leave C unflattened, hence (-3, -1)
        self.linear_relu_stack = nn.Sequential(  # Define the forward-pass stack
            nn.Linear(3 * 28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):  # Define the forward pass
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


if __name__ == "__main__":
    model = torch.load("./myModel.pth").to("cuda")  # Load the saved model
    model.eval()  # Set inference mode
    myTransform = transforms.Compose([myTransformMethod1(), transforms.ToTensor()])
    # Image preprocessing pipeline; in ToTensor() Pytorch converts HWC (OpenCV reads height, width, channel) to CHW and divides the [0, 255] values by 255 to normalize them to [0, 1]
    for i in range(10):
        img = cv.imread("./numberImages/" + str(i) + ".bmp")  # Read the image with OpenCV's imread
        img = myTransform(img).to("cuda")  # Preprocess the image
        print(torch.argmax(model(img)))
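
Gradients are not needed at inference time, so the loop above can be wrapped in torch.no_grad() to save memory. A minimal sketch of that variation (it reuses model and myTransform from eval.py and assumes, as the training comments suggest, that each i.bmp depicts the digit i):

correct = 0
with torch.no_grad():  # no gradients are tracked during inference
    for i in range(10):
        img = cv.imread("./numberImages/" + str(i) + ".bmp")
        img = myTransform(img).to("cuda")
        pred = torch.argmax(model(img)).item()  # index of the largest logit = predicted digit
        correct += (pred == i)  # assumes i.bmp shows the digit i
        print(f"{i}.bmp -> predicted {pred}")
print(f"accuracy: {correct}/10")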

3. See the Github repo for the remaining materials
