
Handwritten Digit Classification with a Capsule Network

Contents

  • Preface
  • 1. Complete Code
  • 2. Adapting to Your Own Dataset
  • Summary


Preface

A capsule network (CapsNet) groups neurons into vectors ("capsules") whose length encodes the probability that an entity is present and whose orientation encodes its pose; routing-by-agreement between capsule layers takes the place of pooling. For more background, search the literature on capsule networks first.


1. Complete Code

import torch
import torch.nn.functional as F
from torch import nn
from torchvision import transforms, datasets
from torch.optim import Adam
from torch.utils.data import DataLoader


# Capsule layer: convolutional primary capsules when num_route_nodes == -1,
# otherwise a fully connected capsule layer with dynamic routing.
class CapsuleLayer(nn.Module):
    def __init__(self, num_capsules, num_route_nodes, in_channels, out_channels,
                 kernel_size=None, stride=None, num_iterations=3):
        super(CapsuleLayer, self).__init__()
        self.num_route_nodes = num_route_nodes
        self.num_iterations = num_iterations
        self.num_capsules = num_capsules
        if num_route_nodes != -1:
            self.route_weights = nn.Parameter(
                torch.randn(num_capsules, num_route_nodes, in_channels, out_channels))
        else:
            self.capsules = nn.ModuleList([
                nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                          stride=stride, padding=0)
                for _ in range(num_capsules)])

    def squash(self, tensor, dim=-1):
        # Shrink vector lengths into [0, 1) while preserving direction.
        squared_norm = (tensor ** 2).sum(dim=dim, keepdim=True)
        scale = squared_norm / (1 + squared_norm)
        return scale * tensor / torch.sqrt(squared_norm + 1e-8)  # eps avoids division by zero

    def forward(self, x):
        if self.num_route_nodes != -1:
            # Dynamic routing between capsule layers.
            priors = x[None, :, :, None, :] @ self.route_weights[:, None, :, :, :]
            logits = torch.zeros_like(priors)
            for i in range(self.num_iterations):
                probs = F.softmax(logits, dim=2)
                outputs = self.squash((probs * priors).sum(dim=2, keepdim=True))
                if i != self.num_iterations - 1:
                    delta_logits = (priors * outputs).sum(dim=-1, keepdim=True)
                    logits = logits + delta_logits
        else:
            # Primary capsules: concatenate the conv outputs along the last
            # dimension so each spatial position yields one vector per capsule
            # (must be dim=-1; dim=-2 would produce the wrong shape for routing).
            outputs = [capsule(x).view(x.size(0), -1, 1) for capsule in self.capsules]
            outputs = torch.cat(outputs, dim=-1)
            outputs = self.squash(outputs)
        return outputs


# The full capsule network model.
class CapsuleNet(nn.Module):
    def __init__(self):
        super(CapsuleNet, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=256, kernel_size=9, stride=1)
        self.primary_capsules = CapsuleLayer(num_capsules=8, num_route_nodes=-1,
                                             in_channels=256, out_channels=32,
                                             kernel_size=9, stride=2)
        self.digit_capsules = CapsuleLayer(num_capsules=10, num_route_nodes=32 * 6 * 6,
                                           in_channels=8, out_channels=16)

    def forward(self, x):
        x = F.relu(self.conv1(x), inplace=True)
        x = self.primary_capsules(x)
        x = self.digit_capsules(x).squeeze().transpose(0, 1)
        # The length of each digit capsule's output vector is the class score.
        x = (x ** 2).sum(dim=-1) ** 0.5
        return x


# Training and evaluation.
def train(model, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test(model, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.cross_entropy(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


# Data loading and preprocessing.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=True)

# Device setup.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Initialize the model and optimizer.
model = CapsuleNet().to(device)
optimizer = Adam(model.parameters())

# Train and test the model.
num_epochs = 10
for epoch in range(num_epochs):
    train(model, train_loader, optimizer, epoch)
    test(model, test_loader)
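As a side note, the `squash` nonlinearity above maps every capsule vector to one of length in [0, 1) while preserving its direction, so vector length can be read as a probability. A minimal scalar sketch of how the length transforms (plain Python, not part of the code above; the helper name is mine):

```python
def squash_length(norm):
    """Post-squash length of a capsule vector whose pre-squash length is
    ``norm`` (scalar version of the tensor formula in the CapsuleLayer)."""
    n2 = norm * norm
    return n2 / (1.0 + n2)

# Short vectors shrink toward 0, long vectors saturate toward 1:
for n in (0.1, 1.0, 10.0):
    print(round(squash_length(n), 4))  # prints 0.0099, 0.5, 0.9901
```

This is why the forward pass can use the vector norm directly as the class score: it is always a bounded, monotone function of the capsule's activation strength.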

2. Adapting to Your Own Dataset

The following places need to be modified.


# Data loading and preprocessing.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=True)

Adjust these to match your dataset: the dataset class, the normalization statistics, and the number of input channels. In particular, if the input resolution changes, the layer definitions below must change with it.

self.conv1 = nn.Conv2d(in_channels=1, out_channels=256, kernel_size=9, stride=1)
self.primary_capsules = CapsuleLayer(num_capsules=8, num_route_nodes=-1, in_channels=256,
                                     out_channels=32, kernel_size=9, stride=2)
self.digit_capsules = CapsuleLayer(num_capsules=10, num_route_nodes=32 * 6 * 6,
                                   in_channels=8, out_channels=16)

These three lines are easy to break when editing: make sure you understand how the tensor shapes flow through them before changing anything.
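To see what actually has to change when the resolution changes, it helps to compute the shapes explicitly. The following stand-alone sketch (the helper names are mine, not from the code above) derives `num_route_nodes` from the input size using the standard valid-convolution output formula, plugging in the `kernel_size`/`stride` values used above:

```python
def conv_out(size, kernel, stride):
    """Output spatial size of a convolution with padding=0:
    out = (in - kernel) // stride + 1."""
    return (size - kernel) // stride + 1

def num_route_nodes(input_size, primary_channels=32):
    """Route-node count for the digit-capsule layer, assuming the
    conv1 (kernel 9, stride 1) and primary-capsule (kernel 9, stride 2)
    hyperparameters from the model above and a square input."""
    after_conv1 = conv_out(input_size, kernel=9, stride=1)
    after_primary = conv_out(after_conv1, kernel=9, stride=2)
    return primary_channels * after_primary * after_primary

# MNIST: 28 -> 20 -> 6 spatially, so 32 * 6 * 6 = 1152 route nodes.
print(num_route_nodes(28))  # prints 1152
```

Whatever this returns for your resolution is the value to pass as `num_route_nodes` in the `digit_capsules` line; `in_channels` of `conv1` must also match your image channels (1 for grayscale, 3 for RGB).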


Summary

Experiment with it yourself.
