
A2C from Scratch (FrozenLake Environment)

Success Screenshot

[screenshot of a successful training run, omitted]

Algorithm Components

The algorithm consists of three parts: a replay buffer (experience pool), an actor_model, and a critic_model.

  • The actor outputs, for each state, a probability over all actions --- a probability distribution.
  • The critic estimates the value of each state --- a scalar.

Both the actor and the critic are MLPs; experiments showed that a somewhat more complex actor network works noticeably better.

The torch.distributions.Categorical object that the actor ultimately returns is a distribution class for categorical (discrete) distributions. It provides the following (see the short sketch after this list):

  • It applies softmax internally, converting the input logits into a normalized probability distribution.
  • Sampling from the distribution is easy: the sample() method.
  • Computing the probability, log-probability, and entropy of a given sample is easy: the probs property and the log_prob() and entropy() methods.
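A minimal standalone sketch of these behaviors, using made-up logits rather than the actor's output:

import torch
from torch.distributions import Categorical

logits = torch.tensor([2.0, 0.5, 0.5, 0.0])  # unnormalized scores for 4 actions
dist = Categorical(logits=logits)            # softmax is applied internally
print(dist.probs)                            # normalized probabilities, sum to 1
action = dist.sample()                       # draw one action index
print(dist.log_prob(action))                 # log-probability of that action
print(dist.entropy())                        # entropy of the distribution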

Training Framework

  • Collect experience first, then optimize the model; here a model update runs once a full batch has been collected.
  • Because A2C is an on-policy algorithm, collected experience can only be used once, so the replay buffer is cleared after each model update (the pattern is sketched below).
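The collect-then-optimize pattern in miniature (a schematic sketch, not the real training loop; step_env and optimize here are placeholder stand-ins):

from typing import List, Tuple

BATCH_SIZE = 4  # tiny for illustration; the real code uses 256

def step_env(t: int) -> Tuple[int, int, float]:
    return (t % 16, t % 4, 0.0)  # placeholder (state, action, reward) transition

def optimize(batch: List[Tuple[int, int, float]]) -> None:
    print(f"one gradient step on {len(batch)} fresh on-policy transitions")

buffer: List[Tuple[int, int, float]] = []
for t in range(10):
    buffer.append(step_env(t))
    if len(buffer) == BATCH_SIZE:  # a full batch has been collected
        optimize(buffer)
        buffer.clear()             # on-policy: used experience is discarded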

Objective Functions

1. The actor is trained with the policy gradient theorem

where the advantage A is computed with generalized advantage estimation (GAE), which works better than a one-step estimate.
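Written out in LaTeX (an assumed reconstruction of the formula image lost from the original post; it corresponds to actor_fn in the code below):

\[
L_{\text{actor}} = -\,\mathbb{E}_t\!\left[ \log \pi_\theta(a_t \mid s_t)\,\hat{A}_t + c_{\text{ent}}\, H\big(\pi_\theta(\cdot \mid s_t)\big) \right]
\]

where \(\hat{A}_t\) is the GAE advantage and \(c_{\text{ent}}\) is entropy_coeff (0.01 in the code).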

2. The critic is trained with the MSE between the target value and the current value

target value = current value + generalized advantage estimate
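In symbols (again an assumed reconstruction, matching returns_to_go in the code):

\[
L_{\text{critic}} = \Big( V_\phi(s_t) - \big[\, \bar{V}(s_t) + \hat{A}_t^{\text{GAE}} \,\big] \Big)^2
\]

where \(\bar{V}(s_t)\) is the current value estimate computed under torch.no_grad(), so no gradient flows through the target.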

3. Computing the generalized advantage estimate
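The recursion (this matches compute_gae_and_returns in the code below):

\[
\delta_t = r_t + \gamma\,(1 - d_t)\, V(s_{t+1}) - V(s_t), \qquad
\hat{A}_t = \delta_t + \gamma \lambda\,(1 - d_t)\, \hat{A}_{t+1}
\]

swept backward over the batch, with the advantage past the last step taken as 0; \(d_t\) is the done flag, \(\gamma\) is discount_rate (0.99), and \(\lambda\) is gae_lambda (0.95).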

Complete Code

import torch
import torch.nn as nn
from torch.nn import functional as F
import gymnasium as gym
import tqdm
from torch.distributions import Categorical
from typing import Tuple


class PolicyNetwork(nn.Module):
    def __init__(self, n_observations: int, n_actions: int):
        super(PolicyNetwork, self).__init__()
        # Width experiments (with a fixed episode budget):
        # 16->32 32->16 16->4     poor at lr=0.0001, but still converges at lr=0.01
        # 16->64 64->32 32->4     barely works
        # 16->128 128->128 128->4 converges fast
        # 16->256 256->256 256->4 converges even faster
        self.layer1 = nn.Linear(n_observations, 32)
        self.layer2 = nn.Linear(32, 16)
        self.layer3 = nn.Linear(16, n_actions)

    def forward(self, x: torch.Tensor) -> Categorical:
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        action_logits = self.layer3(x)
        return Categorical(logits=action_logits)


class A2C:
    def __init__(self, env, total_episodes):
        ############# Hyperparameters #############
        self.actor_lr = 0.01
        self.critic_lr = 0.01
        self.batch_size = 256
        self.entropy_coeff = 0.01
        self.value_loss_coeff = 0.5
        self.gae_lambda = 0.95
        self.discount_rate = 0.99
        self.total_episodes = total_episodes
        ############# Core A2C components #############
        self.replay_buffer = []
        self.actor_model = PolicyNetwork(16, 4)
        self.critic_model = nn.Sequential(  # does not need to be as complex as the actor
            nn.Linear(16, 16),
            nn.ReLU(),
            nn.Linear(16, 1)
        )
        ############# Optimizers #############
        self.actor_optimizer = torch.optim.Adam(self.actor_model.parameters(), lr=self.actor_lr)
        self.critic_optimizer = torch.optim.Adam(self.critic_model.parameters(), lr=self.critic_lr)
        self.env = env
        self.count = 0
        self.success = 0

    def train(self):
        bar = tqdm.tqdm(range(self.total_episodes), desc=f"episode {0} {self.success / (self.count + 1e-8)}")
        for i in bar:
            state, info = self.env.reset()
            done = False
            truncated = False
            # Collect experience; forcing exactly batch_size steps
            # (for _ in range(self.batch_size): ...) would also work.
            # Note the parentheses: `not done or truncated` would keep looping after truncation.
            while not (done or truncated):
                action = self.choose_action(state)
                new_state, r, done, truncated, info = self.env.step(action)
                self.append_data(state, action, r, new_state, done)
                state = new_state
                if done or truncated:
                    self.count += 1
                    if new_state == 15:  # state 15 is the goal cell of the 4x4 map
                        self.success += 1
                # Optimize the model once a full batch has been collected
                if len(self.replay_buffer) == self.batch_size:
                    self.optimize_model()
                    self.replay_buffer.clear()
            if i % 100 == 0:
                # report the success rate over the last 100 episodes, then reset the counters
                bar.set_description(f"episode {i} {self.success / (self.count + 1e-8)}")
                self.count = 0
                self.success = 0

    def choose_action(self, state):
        with torch.no_grad():
            policy_dist = self.actor_model(self.state_to_input(state))
            action_tensor = policy_dist.sample()
            action = action_tensor.item()
        return action

    def optimize_model(self):
        batch = self.replay_buffer[-self.batch_size:]
        state = torch.stack([self.state_to_input(tup[0]) for tup in batch])
        action = torch.IntTensor([tup[1] for tup in batch])
        reward = torch.FloatTensor([tup[2] for tup in batch])
        new_state = torch.stack([self.state_to_input(tup[3]) for tup in batch])
        done = torch.FloatTensor([tup[4] for tup in batch])
        # state and new_state are 2-D (batch, 16); the rest are 1-D (batch,)
        with torch.no_grad():
            value = self.critic_model(state).squeeze()
            # V(s_{t+1}) equals value[t+1] for consecutive transitions;
            # only the final next state needs an extra critic evaluation
            last_value = self.critic_model(new_state[-1:]).squeeze(-1)
            next_value = torch.cat((value[1:], last_value))
        # GAE works dramatically better than a one-step TD error
        advantages, returns_to_go = self.compute_gae_and_returns(
            reward, value, next_value, done, self.discount_rate, self.gae_lambda)
        # Update the actor via the policy gradient theorem
        policy_dist = self.actor_model(state)
        logpi = policy_dist.log_prob(action)
        actor_fn = -(logpi * advantages + self.entropy_coeff * policy_dist.entropy())  # the entropy bonus has little effect here
        self.actor_optimizer.zero_grad()
        actor_fn.mean().backward(retain_graph=True)  # .mean(): torch requires a scalar loss for backward()
        self.actor_optimizer.step()
        # Update the critic
        v = self.critic_model(state).squeeze()
        critic_fn = F.mse_loss(v, returns_to_go)
        self.critic_optimizer.zero_grad()
        (self.value_loss_coeff * critic_fn).backward()
        self.critic_optimizer.step()

    def compute_gae_and_returns(self,
                                rewards: torch.Tensor,
                                values: torch.Tensor,
                                next_values: torch.Tensor,
                                dones: torch.Tensor,
                                discount_rate: float,
                                lambda_gae: float,
                                ) -> Tuple[torch.Tensor, torch.Tensor]:
        advantages = torch.zeros_like(rewards)
        last_advantage = 0.0
        n_steps = len(rewards)
        # Compute GAE backward through the batch
        for t in reversed(range(n_steps)):
            mask = 1.0 - dones[t]  # stop bootstrapping across episode boundaries
            delta = rewards[t] + discount_rate * next_values[t] * mask - values[t]
            advantages[t] = delta + discount_rate * lambda_gae * last_advantage * mask
            last_advantage = advantages[t]
        # Returned to the critic as its TD target
        returns_to_go = advantages + values
        return advantages, returns_to_go

    def append_data(self, state, action, r, new_state, done):
        self.replay_buffer.append((state, action, r, new_state, done))

    def state_to_input(self, state):
        # one-hot encode the discrete FrozenLake state (16 cells)
        input_dim = 16
        input = torch.zeros(input_dim, dtype=torch.float)
        input[int(state)] = 1
        return input


env = gym.make("FrozenLake-v1", is_slippery=False)
policy = A2C(env, 3000)
policy.train()

# Watch the trained policy
env = gym.make("FrozenLake-v1", is_slippery=False, render_mode="human")
state, info = env.reset()
done = False
truncated = False
while True:
    with torch.no_grad():
        action = policy.choose_action(state)
    new_state, reward, done, truncated, info = env.step(action)
    state = new_state
    if done or truncated:
        state, info = env.reset()

Tuning Takeaways

  • The learning rate still matters a lot.
  • The actor's model complexity also has an effect; this is where a more complex MLP clearly beat a simpler one.
  • The most effective change was replacing the one-step TD error with GAE.
