
LangChain Tutorial - Expression Language (LCEL): Building Smart Chains

Series Index
LangChain Tutorial - Series Articles

LangChain provides a flexible and powerful expression language, the LangChain Expression Language (LCEL), for building complex logic chains. By composing runnable objects, LCEL supports advanced patterns such as sequential chains, nested chains, parallel chains, routing, and dynamic construction, covering a wide range of scenarios. This article walks through each of these features and how to implement them.

Sequential Chains

The core of LCEL is composing runnables in sequence, where the output of one runnable is automatically passed as input to the next. Sequential chains can be built with the pipe operator (|) or the explicit .pipe() method.

Here is a simple example:

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

Output:

Here's a bear joke for you:

Why did the bear dissolve in water?
Because it was a polar bear!

In the example above, the prompt template formats the input for the chat model, the model generates a joke, and the output parser converts the result into a string.
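
As noted above, .pipe() is the explicit equivalent of the pipe operator. A minimal sketch of the same chain in that form, reusing the prompt and model defined above (chain_v2 is an illustrative name):

# Same chain built with .pipe() instead of the | operator
chain_v2 = prompt.pipe(model).pipe(StrOutputParser())

result = chain_v2.invoke({"topic": "bears"})
print(result)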

Nested Chains

Nested chains let you compose multiple chains into more complex logic. For example, the joke-generating chain can be combined with a second chain that analyzes how funny the joke is.

analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")

composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

Output:

Haha, that's a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing.
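
Incidentally, the dict literal {"joke": chain} in the composition is LCEL shorthand: dicts are coerced into a RunnableParallel (covered in the next section), which runs each value and collects the results under the corresponding keys. A minimal sketch of the explicit equivalent:

from langchain_core.runnables import RunnableParallel

# Explicit form of {"joke": chain}: run the joke chain and place its
# output in a dict under the "joke" key expected by analysis_prompt
composed_chain_v2 = (
    RunnableParallel(joke=chain) | analysis_prompt | model | StrOutputParser()
)

result = composed_chain_v2.invoke({"topic": "bears"})
print(result)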

Parallel Chains

RunnableParallel runs multiple chains in parallel and combines each chain's result into a single dict. This is useful when several tasks need to be processed on the same input at once.

from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

Output:

{
    'joke': "Why don't bears like fast food? Because they can't catch it!",
    'poem': "In the quiet of the forest, the bear roams free\nMajestic and wild, a sight to see."
}
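
Because the combined result is a plain dict, it can feed directly into a downstream prompt whose variables match the dict keys. A sketch of this hand-off (compare_prompt is illustrative and not part of the original example):

# The dict {"joke": ..., "poem": ...} maps onto the prompt variables
compare_prompt = ChatPromptTemplate.from_template(
    "Here is a joke:\n{joke}\n\nHere is a poem:\n{poem}\n\nWhich is better? Answer in one sentence."
)
compare_chain = parallel_chain | compare_prompt | model | StrOutputParser()

print(compare_chain.invoke({"topic": "bear"}))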

Routing

Routing dynamically selects which sub-chain to execute based on the input. LCEL provides two ways to implement routing:

Using a Custom Function

Dynamic routing with RunnableLambda:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")


def route(info):
    # Pick a sub-chain based on the classifier's one-word output
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain


full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

Using RunnableBranch

RunnableBranch selects a branch by matching conditions against the input:

from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

Dynamic Construction

Dynamic construction generates parts of a chain at runtime based on the input. It relies on the return-value mechanism of RunnableLambda (used below via its decorator form, @chain): when the wrapped function returns a new Runnable, that Runnable is invoked with the same input.

from operator import itemgetter

from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()


@chain
def contextualize_if_needed(input_: dict):
    # Rewrite the question only when there is chat history to resolve
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")


@chain
def fake_retriever(input_: dict):
    # Stand-in for a real retriever
    return "egypt's population in 2024 is about 111 million"


qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

Output:

According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.
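
What makes this work is the return-value mechanism described above: when a function wrapped with @chain (or RunnableLambda) returns another Runnable, LangChain invokes that Runnable with the original input rather than treating it as the output. A self-contained sketch of the pattern, with illustrative names:

from langchain_core.runnables import chain, RunnablePassthrough

@chain
def pick_path(input_: dict):
    # Returning a Runnable causes it to be invoked with the same input;
    # returning a plain value would be used as the output directly.
    if input_.get("shout"):
        return RunnablePassthrough() | (lambda x: x["text"].upper())
    return RunnablePassthrough() | (lambda x: x["text"])

print(pick_path.invoke({"text": "hello", "shout": True}))  # HELLO
print(pick_path.invoke({"text": "hello"}))                 # hello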

Complete Code Example

from operator import itemgetter

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

print("\n-----------------------------------\n")

# Simple demo
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Compose demo
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Parallel demo
from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

print("\n-----------------------------------\n")

# Route demo
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")


def route(info):
    # Pick a sub-chain based on the classifier's one-word output
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain


full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

print("\n-----------------------------------\n")

# Branch demo
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

print("\n-----------------------------------\n")

# Dynamic demo
from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()


@chain
def contextualize_if_needed(input_: dict):
    # Rewrite the question only when there is chat history to resolve
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")


@chain
def fake_retriever(input_: dict):
    # Stand-in for a real retriever
    return "egypt's population in 2024 is about 111 million"


qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

print("\n-----------------------------------\n")

J-LangChain Implementation of the Above Examples

J-LangChain - Building Smart Chains

Summary

LangChain's LCEL gives developers a powerful toolkit for building complex language tasks, with sequential chains, nested chains, parallel chains, routing, and dynamic construction. Whether the need is a simple linear flow or complex runtime decision-making, LCEL handles it efficiently. Used well, these features let developers quickly assemble efficient, flexible smart chains for applications across many scenarios.
