Example code for building LangChain applications: 55. How to implement a multi-agent simulation where a privileged agent decides who speaks. This follows the opposite selection scheme to multi-agent decentralized speaker selection.
This example shows how to implement a multi-agent simulation in which a privileged agent decides who speaks.
This follows the opposite selection scheme to multi-agent decentralized speaker selection.
We show an example of this approach in the context of a fictitious simulation of a news network. This example will demonstrate how we can implement agents that:
- think before speaking
- terminate the conversation
Import LangChain modules
```python
import functools
import random
from collections import OrderedDict
from typing import Callable, List

import tenacity
from langchain.output_parsers import RegexParser
from langchain.prompts import PromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

# Import the required Python modules and LangChain components:
# functools, random number generation, OrderedDict, and type hints,
# plus LangChain's output parser, prompt template, message schemas,
# and the OpenAI chat model.
```
The DialogueAgent and DialogueSimulator classes

We will use the same DialogueAgent and DialogueSimulator classes defined in our other examples, Multi-Player Dungeons & Dragons and Decentralized Speaker Selection.
```python
class DialogueAgent:
    def __init__(
        self,
        name: str,
        system_message: SystemMessage,
        model: ChatOpenAI,
    ) -> None:
        self.name = name
        self.system_message = system_message
        self.model = model
        self.prefix = f"{self.name}: "
        self.reset()

    def reset(self):
        self.message_history = ["Here is the conversation so far."]

    def send(self) -> str:
        """Apply the chat model to the message history and return the message string."""
        message = self.model.invoke(
            [
                self.system_message,
                HumanMessage(content="\n".join(self.message_history + [self.prefix])),
            ]
        )
        return message.content

    def receive(self, name: str, message: str) -> None:
        """Concatenate {message} spoken by {name} into the message history."""
        self.message_history.append(f"{name}: {message}")


class DialogueSimulator:
    def __init__(
        self,
        agents: List[DialogueAgent],
        selection_function: Callable[[int, List[DialogueAgent]], int],
    ) -> None:
        self.agents = agents
        self._step = 0
        self.select_next_speaker = selection_function

    def reset(self):
        for agent in self.agents:
            agent.reset()

    def inject(self, name: str, message: str):
        """Initiate the conversation with {message} from {name}."""
        for agent in self.agents:
            agent.receive(name, message)
        # increment the time step
        self._step += 1

    def step(self) -> tuple[str, str]:
        # 1. choose the next speaker
        speaker_idx = self.select_next_speaker(self._step, self.agents)
        speaker = self.agents[speaker_idx]

        # 2. the next speaker sends a message
        message = speaker.send()

        # 3. everyone receives the message
        for receiver in self.agents:
            receiver.receive(speaker.name, message)

        # 4. increment the time step
        self._step += 1

        return speaker.name, message

# DialogueAgent: methods to initialize, reset, send, and receive messages.
# DialogueSimulator: methods to initialize, reset, inject a message,
# and run one simulation step.
```
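To illustrate the `selection_function` contract that `DialogueSimulator` expects (a `Callable[[int, List[DialogueAgent]], int]`), here is a minimal, hypothetical round-robin selector. It is not part of this simulation, but shows the shape of a function the simulator could accept:

```python
# Hypothetical round-robin selection function matching DialogueSimulator's
# Callable[[int, List[DialogueAgent]], int] signature: it ignores agent
# state and simply cycles through the agents by step number.
def round_robin(step: int, agents: list) -> int:
    return step % len(agents)


# With three agents, six consecutive steps visit each agent in turn:
print([round_robin(step, [None, None, None]) for step in range(6)])  # → [0, 1, 2, 0, 1, 2]
```

The director-based selector defined later in this example follows the same signature, but delegates the choice to a privileged agent instead of a fixed rotation.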
The DirectorDialogueAgent class

DirectorDialogueAgent is a privileged agent that chooses which of the other agents speaks next. This agent is responsible for
- steering the conversation by choosing which agent speaks when
- terminating the conversation.
To implement such an agent, we need to solve several problems.

First, to steer the conversation, the DirectorDialogueAgent needs to (1) reflect on what has been said, (2) choose the next agent, and (3) prompt the next agent to speak, all in a single message. While it may be possible to prompt the LLM to perform all three steps in the same call, this requires writing custom code to parse the output message to extract which agent was chosen to speak next. This is less reliable, because the LLM can express how it chooses the next agent in different ways.

Instead, we can explicitly break steps (1-3) into three separate LLM calls. First we ask the DirectorDialogueAgent to reflect on the conversation so far and generate a response. Then we prompt the DirectorDialogueAgent to output the index of the next agent, which is easy to parse. Lastly, we pass the name of the selected agent back to the DirectorDialogueAgent and ask it to prompt the next agent to speak.

Second, simply prompting the DirectorDialogueAgent to decide when to terminate the conversation often results in it terminating the conversation immediately. To fix this, we randomly sample a Bernoulli variable to decide whether the conversation should terminate. Depending on the value of this variable, we inject a custom prompt telling the DirectorDialogueAgent either to continue the conversation or to terminate it.
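The effect of this Bernoulli draw can be sketched in isolation. The toy snippet below (not part of the simulation itself) samples the same way the director does on each of its turns; with a stopping probability of 0.2, the number of director turns before termination follows a geometric distribution with mean 1/0.2 = 5:

```python
import random

random.seed(0)  # fixed seed for reproducibility


def should_stop(stopping_probability: float) -> bool:
    # The same Bernoulli draw the director performs on each of its turns
    return random.uniform(0, 1) < stopping_probability


# Empirical stopping rate over many simulated director turns
samples = [should_stop(0.2) for _ in range(10_000)]
rate = sum(samples) / len(samples)
print(f"empirical stopping rate: {rate:.2f}")  # close to 0.2
```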
```python
class IntegerOutputParser(RegexParser):
    def get_format_instructions(self) -> str:
        return "Your response should be an integer delimited by angled brackets, like this: <int>."


# Integer output parser for extracting the integer from the model output
class DirectorDialogueAgent(DialogueAgent):
    def __init__(
        self,
        name,
        system_message: SystemMessage,
        model: ChatOpenAI,
        speakers: List[DialogueAgent],
        stopping_probability: float,
    ) -> None:
        super().__init__(name, system_message, model)
        self.speakers = speakers
        self.next_speaker = ""

        self.stop = False
        self.stopping_probability = stopping_probability
        self.termination_clause = "Finish the conversation by stating a concluding message and thanking everyone."
        self.continuation_clause = "Do not end the conversation. Keep the conversation going by adding your own ideas."

        # 1. have a prompt for generating a response to the previous speaker
        self.response_prompt_template = PromptTemplate(
            input_variables=["message_history", "termination_clause"],
            template=f"""{{message_history}}

Follow up with an insightful comment.
{{termination_clause}}
{self.prefix}
""",
        )

        # 2. have a prompt for deciding who speaks next
        self.choice_parser = IntegerOutputParser(
            regex=r"<(\d+)>", output_keys=["choice"], default_output_key="choice"
        )
        self.choose_next_speaker_prompt_template = PromptTemplate(
            input_variables=["message_history", "speaker_names"],
            template=f"""{{message_history}}

Given the above conversation, select the next speaker by choosing index next to their name:
{{speaker_names}}

{self.choice_parser.get_format_instructions()}

Do nothing else.
""",
        )

        # 3. have a prompt for prompting the next speaker to speak
        self.prompt_next_speaker_prompt_template = PromptTemplate(
            input_variables=["message_history", "next_speaker"],
            template=f"""{{message_history}}

The next speaker is {{next_speaker}}.
Prompt the next speaker to speak with an insightful question.
{self.prefix}
""",
        )

    def _generate_response(self):
        # if self.stop is True, we inject the prompt with the termination clause
        sample = random.uniform(0, 1)
        self.stop = sample < self.stopping_probability

        print(f"\tStop? {self.stop}\n")

        response_prompt = self.response_prompt_template.format(
            message_history="\n".join(self.message_history),
            termination_clause=self.termination_clause if self.stop else "",
        )

        self.response = self.model.invoke(
            [
                self.system_message,
                HumanMessage(content=response_prompt),
            ]
        ).content

        return self.response

    @tenacity.retry(
        stop=tenacity.stop_after_attempt(2),
        wait=tenacity.wait_none(),  # no waiting time between retries
        retry=tenacity.retry_if_exception_type(ValueError),
        before_sleep=lambda retry_state: print(
            f"ValueError occurred: {retry_state.outcome.exception()}, retrying..."
        ),
        retry_error_callback=lambda retry_state: 0,
    )  # default value when all retries are exhausted
    def _choose_next_speaker(self) -> int:
        speaker_names = "\n".join(
            [f"{idx}: {name}" for idx, name in enumerate(self.speakers)]
        )
        choice_prompt = self.choose_next_speaker_prompt_template.format(
            message_history="\n".join(
                self.message_history + [self.prefix] + [self.response]
            ),
            speaker_names=speaker_names,
        )

        choice_string = self.model.invoke(
            [
                self.system_message,
                HumanMessage(content=choice_prompt),
            ]
        ).content
        choice = int(self.choice_parser.parse(choice_string)["choice"])

        return choice

    def select_next_speaker(self):
        return self.chosen_speaker_id

    def send(self) -> str:
        """Apply the chat model to the message history and return the message string."""
        # 1. generate and save the response to the previous speaker
        self.response = self._generate_response()

        if self.stop:
            message = self.response
        else:
            # 2. decide who speaks next
            self.chosen_speaker_id = self._choose_next_speaker()
            self.next_speaker = self.speakers[self.chosen_speaker_id]
            print(f"\tNext speaker: {self.next_speaker}\n")

            # 3. prompt the next speaker to speak
            next_prompt = self.prompt_next_speaker_prompt_template.format(
                message_history="\n".join(
                    self.message_history + [self.prefix] + [self.response]
                ),
                next_speaker=self.next_speaker,
            )
            message = self.model.invoke(
                [
                    self.system_message,
                    HumanMessage(content=next_prompt),
                ]
            ).content
            message = " ".join([self.response, message])

        return message

# DirectorDialogueAgent inherits from DialogueAgent and adds methods for
# generating a response, choosing the next speaker, and sending a message;
# the tenacity library handles retries when the choice cannot be parsed.
```
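The angled-bracket parsing that IntegerOutputParser performs, together with the default-of-0 fallback that the tenacity decorator provides, can be mimicked without LangChain. This standalone sketch (a hypothetical helper, not from the original code) shows the same extract-or-default behavior:

```python
import re


def parse_choice(text: str, default: int = 0) -> int:
    # Extract the integer between angled brackets, e.g. "<2>" -> 2,
    # falling back to a default the way retry_error_callback does
    # once all retries are exhausted.
    match = re.search(r"<(\d+)>", text)
    return int(match.group(1)) if match else default


print(parse_choice("I pick <2> as the next speaker."))  # → 2
print(parse_choice("no valid choice here"))             # → 0
```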
Define participants and topic
```python
topic = "The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze"
director_name = "Jon Stewart"
agent_summaries = OrderedDict(
    {
        "Jon Stewart": ("Host of the Daily Show", "New York"),
        "Samantha Bee": ("Hollywood Correspondent", "Los Angeles"),
        "Aasif Mandvi": ("CIA Correspondent", "Washington D.C."),
        "Ronny Chieng": ("Average American Correspondent", "Cleveland, Ohio"),
    }
)
word_limit = 50
# Define the discussion topic, the host's name, participant info, and the word limit
```
Generate system messages
```python
agent_summary_string = "\n- ".join(
    [""]
    + [
        f"{name}: {role}, located in {location}"
        for name, (role, location) in agent_summaries.items()
    ]
)

conversation_description = f"""This is a Daily Show episode discussing the following topic: {topic}.

The episode features {agent_summary_string}."""

agent_descriptor_system_message = SystemMessage(
    content="You can add detail to the description of each person."
)


def generate_agent_description(agent_name, agent_role, agent_location):
    agent_specifier_prompt = [
        agent_descriptor_system_message,
        HumanMessage(
            content=f"""{conversation_description}
Please reply with a creative description of {agent_name}, who is a {agent_role} in {agent_location}, that emphasizes their particular role and location.
Speak directly to {agent_name} in {word_limit} words or less.
Do not add anything else."""
        ),
    ]
    agent_description = (
        ChatOpenAI(temperature=1.0).invoke(agent_specifier_prompt).content
    )
    return agent_description


def generate_agent_header(agent_name, agent_role, agent_location, agent_description):
    return f"""{conversation_description}

Your name is {agent_name}, your role is {agent_role}, and you are located in {agent_location}.

Your description is as follows: {agent_description}

You are discussing the topic: {topic}.

Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location.
"""


def generate_agent_system_message(agent_name, agent_header):
    return SystemMessage(
        content=(
            f"""{agent_header}
You will speak in the style of {agent_name}, and exaggerate your personality.
Do not say the same things over and over again.
Speak in the first person from the perspective of {agent_name}
For describing your own body movements, wrap your description in '*'.
Do not change roles!
Do not speak from the perspective of anyone else.
Speak only from the perspective of {agent_name}.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to {word_limit} words!
Do not add anything else."""
        )
    )


agent_descriptions = [
    generate_agent_description(name, role, location)
    for name, (role, location) in agent_summaries.items()
]
agent_headers = [
    generate_agent_header(name, role, location, description)
    for (name, (role, location)), description in zip(
        agent_summaries.items(), agent_descriptions
    )
]
agent_system_messages = [
    generate_agent_system_message(name, header)
    for name, header in zip(agent_summaries, agent_headers)
]
# Generate each agent's description, header, and system message:
# the OpenAI chat model produces a creative description, which is then
# combined into the agent's complete system message.

for name, description, header, system_message in zip(
    agent_summaries, agent_descriptions, agent_headers, agent_system_messages
):
    print(f"\n\n{name} Description:")
    print(f"\n{description}")
    print(f"\nHeader:\n{header}")
    print(f"\nSystem Message:\n{system_message.content}")
# Print each agent's description, header, and system message
```
Use an LLM to elaborate on the discussion topic
```python
topic_specifier_prompt = [
    SystemMessage(content="You can make a task more specific."),
    HumanMessage(
        content=f"""{conversation_description}

Please elaborate on the topic.
Frame the topic as a single question to be answered.
Be creative and imaginative.
Please reply with the specified topic in {word_limit} words or less.
Do not add anything else."""
    ),
]
specified_topic = ChatOpenAI(temperature=1.0).invoke(topic_specifier_prompt).content

print(f"Original topic:\n{topic}\n")
print(f"Detailed topic:\n{specified_topic}\n")
# Use the OpenAI chat model to generate a more detailed topic description,
# reframing the topic as a question to be answered
```
Define the speaker selection function

Lastly we will define a speaker selection function select_next_speaker that alternates between the director and the rest of the cast: on odd steps the director speaks, and on even steps the agent that the director has just chosen speaks.

Note that _choose_next_speaker above is decorated with tenacity so that, if the director's choice cannot be parsed correctly, the call is retried, and a default choice of 0 is produced once the maximum number of attempts is exhausted.
```python
def select_next_speaker(
    step: int, agents: List[DialogueAgent], director: DirectorDialogueAgent
) -> int:
    """The director speaks on odd steps;
    on even steps, the speaker chosen by the director speaks."""
    # the director speaks on odd steps
    if step % 2 == 1:
        idx = 0
    else:
        # here the director chooses the next speaker
        idx = director.select_next_speaker() + 1  # +1 because we excluded the director
    return idx
# Select the next speaker: the director on odd steps,
# otherwise whoever the director chose
```
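To see the turn order this produces, here is a toy trace of the index sequence (a hypothetical standalone sketch, with the director's choice fixed at 2 for illustration):

```python
def select_idx(step: int, director_choice: int) -> int:
    # Odd steps: the director (index 0 in the agent list) speaks
    if step % 2 == 1:
        return 0
    # Even steps: the director's chosen speaker; +1 skips past the director
    return director_choice + 1


# With the director always choosing speaker 2, steps 1-6 alternate as:
print([select_idx(step, 2) for step in range(1, 7)])  # → [0, 3, 0, 3, 0, 3]
```

Because inject() already advanced the simulator to step 1, the director always speaks first, then hands off to the agent it selected.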
Main loop
```python
director = DirectorDialogueAgent(
    name=director_name,
    system_message=agent_system_messages[0],
    model=ChatOpenAI(temperature=0.2),
    speakers=[name for name in agent_summaries if name != director_name],
    stopping_probability=0.2,
)

agents = [director]
for name, system_message in zip(
    list(agent_summaries.keys())[1:], agent_system_messages[1:]
):
    agents.append(
        DialogueAgent(
            name=name,
            system_message=system_message,
            model=ChatOpenAI(temperature=0.2),
        )
    )
# Create the director agent and the other dialogue agents

simulator = DialogueSimulator(
    agents=agents,
    selection_function=functools.partial(select_next_speaker, director=director),
)
simulator.reset()
simulator.inject("Audience member", specified_topic)
print(f"(Audience member): {specified_topic}")
print("\n")

while True:
    name, message = simulator.step()
    print(f"({name}): {message}")
    print("\n")
    if director.stop:
        break
# Create the dialogue simulator, inject the specified topic,
# and loop through simulation steps until the director decides to stop
```
Summary

This article showed how to implement a multi-agent simulation system in which a privileged agent decides who speaks, the opposite of the decentralized speaker selection scheme. Using a fictitious news network simulation as an example, it demonstrated how to implement agents that think before speaking and can terminate the conversation.
Further reading

- Multi-agent systems: an important research direction in AI and natural language processing, involving interaction and collaboration among multiple AI agents. Such systems can simulate complex social interactions and are used in applications such as virtual assistants, game AI, and social simulation.
- Dialogue management: in this example, the director agent manages the flow of conversation. This mirrors real-world dialogue management strategies, such as a host steering the discussion in a meeting or interview.
- Natural language generation (NLG): each agent in the system uses NLG techniques to generate responses, producing coherent, relevant, and natural-sounding output based on its context and role.
- Role-playing AI: each agent is given a specific role and personality. This technique can be used to create more realistic and engaging AI characters for entertainment, education, or training purposes.
- Probabilistic decision-making: using random sampling to decide whether to end the conversation is a simple but effective way to introduce unpredictability and variety. In more complex systems, this could involve more advanced probabilistic models and decision theory.