Quick Start¶
In this comprehensive quick start, we will build a support chatbot in LangGraph that can:
- Answer common questions by searching the web
- Maintain conversation state across calls
- Route complex queries to a human for review
- Use custom state to control its behavior
- Rewind and explore alternative conversation paths
We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way.
Setup¶
First, install the required packages:
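For example, in a notebook (a minimal sketch; the package list simply matches the imports used later in this tutorial):

%%capture --no-stderr
%pip install -U langgraph langsmith langchain_anthropic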
Next, set your API keys:
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.
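If you want tracing enabled while following along, a typical sketch (assuming the standard LangSmith environment variables, and reusing the _set_env helper defined above) is:

os.environ["LANGCHAIN_TRACING_V2"] = "true"
_set_env("LANGCHAIN_API_KEY")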
Part 1: Build a Basic Chatbot¶
We'll first create a simple chatbot using LangGraph. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.
Start by creating a StateGraph. A StateGraph object defines the structure of our chatbot as a "state machine". We'll add nodes to represent the llm and functions our chatbot can call and edges to specify how the bot should transition between these functions.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Note
The first thing you do when you define a graph is define the State of the graph. The State consists of the schema of the graph as well as reducer functions which specify how to apply updates to the state. In our example State is a TypedDict with a single key: messages. The messages key is annotated with the add_messages reducer function, which tells LangGraph to append new messages to the existing list, rather than overwriting it. State keys without an annotation will be overwritten by each update, storing the most recent value. Check out this conceptual guide to learn more about state, reducers and other low-level concepts.
So now our graph knows two things:
1. Every node we define will receive the current State as input and return a value that updates that state.
2. messages will be appended to the current list, rather than directly overwritten. This is communicated via the prebuilt add_messages function in the Annotated syntax.
Next, add a "chatbot" node. Nodes represent units of work. They are typically regular python functions.
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
API Reference: ChatAnthropic
Notice how the chatbot node function takes the current State as input and returns a dictionary containing an updated messages list under the key "messages". This is the basic pattern for all LangGraph node functions.
The add_messages function in our State will append the llm's response messages to whatever messages are already in the state.
Next, add an entry point. This tells our graph where to start its work each time we run it.
Similarly, set a finish point. This instructs the graph "any time this node is run, you can exit."
Finally, we'll want to be able to run our graph. To do so, call "compile()" on the graph builder. This creates a "CompiledGraph" we can invoke on our state.
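The call itself is a one-liner:

graph = graph_builder.compile()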
You can visualize the graph using the get_graph method and one of the "draw" methods, like draw_ascii or draw_png. The draw methods each require additional dependencies.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now let's run the chatbot!
Tip: You can exit the chat loop at any time by typing "quit", "exit", or "q".
def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)


while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, stateful AI applications that go beyond simple query-response interactions.
Goodbye!
Congratulations! You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses with an LLM. You can inspect a LangSmith Trace for the call above at the provided link.
However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable.
Below is the full code for this section for your reference:
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()
Part 2: Enhancing the Chatbot with Tools¶
To handle queries our chatbot can't answer "from memory", we'll integrate a web search tool. Our bot can use this tool to find relevant information and provide better responses.
Requirements¶
Before we start, make sure you have the necessary packages installed and API keys set up:
First, install the requirements to use the Tavily Search Engine, and set your TAVILY_API_KEY.
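For example (a sketch; the search tool used below lives in the langchain_community package, and we reuse the _set_env helper defined earlier):

%pip install -U tavily-python langchain_community

_set_env("TAVILY_API_KEY")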
Next, define the tool:
from langchain_community.tools.tavily_search import TavilySearchResults
tool = TavilySearchResults(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")
[{'url': 'https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141',
'content': 'Nodes: Nodes are the building blocks of your LangGraph. Each node represents a function or a computation step. You define nodes to perform specific tasks, such as processing input, making ...'},
{'url': 'https://saksheepatil05.medium.com/demystifying-langgraph-a-beginner-friendly-dive-into-langgraph-concepts-5ffe890ddac0',
'content': 'Nodes (Tasks): Nodes are like the workstations on the assembly line. Each node performs a specific task on the product. In LangGraph, nodes are Python functions that take the current state, do some work, and return an updated state. Next, we define the nodes, each representing a task in our sandwich-making process.'}]
API Reference: TavilySearchResults
The results are page summaries our chat bot can use to answer questions.
Next, we'll start defining our graph. The following is all the same as in Part 1, except we have added bind_tools on our LLM. This lets the LLM know the correct JSON format to use if it wants to use our search engine.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
API Reference: ChatAnthropic | StateGraph | START | END | add_messages
Next we need to create a function to actually run the tools if they are called. We'll do this by adding the tools to a new node.
Below, we implement a BasicToolNode that checks the most recent message in the state and calls tools if the message contains tool_calls. It relies on the LLM's tool_calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.
We will later replace this with LangGraph's prebuilt ToolNode to speed things up, but building it ourselves first is instructive.
import json

from langchain_core.messages import ToolMessage


class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
API Reference: ToolMessage
With the tool node added, we can define the conditional_edges.
Recall that edges route the control flow from one node to the next. Conditional edges usually contain "if" statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.
Below, define a router function called route_tools that checks for tool_calls in the chatbot's output. Provide this function to the graph by calling add_conditional_edges, which tells the graph that whenever the chatbot node completes, it should check this function to see where to go next.
The condition will route to tools if tool calls are present and END if not.
Later, we will replace this with the prebuilt tools_condition to be more concise, but implementing it ourselves first makes things more clear.
from typing import Literal


def route_tools(
    state: State,
):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END


# The `route_tools` function returns "tools" if the chatbot asks to use a tool, and "END" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # you can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    {"tools": "tools", END: END},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
Notice that conditional edges start from a single node. This tells the graph "any time the 'chatbot' node runs, either go to 'tools' if it calls a tool, or end the loop if it responds directly."
Like the prebuilt tools_condition, our function returns the END string if no tool calls are made. When the graph transitions to END, it has no more tasks to complete and ceases execution. Because the condition can return END, we don't need to explicitly set a finish_point this time. Our graph already has a way to finish!
Let's visualize the graph we've built. The following function has some additional dependencies to run that are unimportant for this tutorial.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now we can ask the bot questions outside its training data.
while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Assistant: [{'text': "To provide you with accurate and up-to-date information about LangGraph, I'll need to search for the latest details. Let me do that for you.", 'type': 'text'}, {'id': 'toolu_01Q588CszHaSvvP2MxRq9zRD', 'input': {'query': 'LangGraph AI tool information'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Assistant: [{"url": "https://www.langchain.com/langgraph", "content": "LangGraph sets the foundation for how we can build and scale AI workloads \u2014 from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "Overview. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures ..."}]
Assistant: Based on the search results, I can provide you with information about LangGraph:
1. Purpose:
LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows.
2. Developer:
LangGraph is developed by LangChain, a company known for its tools and frameworks in the AI and LLM space.
3. Key Features:
- Cycles: LangGraph allows the definition of flows that involve cycles, which is essential for most agentic architectures.
- Controllability: It offers enhanced control over the application flow.
- Persistence: The library provides ways to maintain state and persistence in LLM-based applications.
4. Use Cases:
LangGraph can be used for various applications, including:
- Conversational agents
- Complex task automation
- Custom LLM-backed experiences
5. Integration:
LangGraph works in conjunction with LangSmith, another tool by LangChain, to provide an out-of-the-box solution for building complex, production-ready features with LLMs.
6. Significance:
LangGraph is described as setting the foundation for building and scaling AI workloads. It's positioned as a key tool in the next chapter of LLM-based application development, particularly in the realm of agentic AI.
7. Availability:
LangGraph is open-source and available on GitHub, which suggests that developers can access and contribute to its codebase.
8. Comparison to Other Frameworks:
LangGraph is noted to offer unique benefits compared to other LLM frameworks, particularly in its ability to handle cycles, provide controllability, and maintain persistence.
LangGraph appears to be a significant tool in the evolving landscape of LLM-based application development, offering developers new ways to create more complex, stateful, and interactive AI systems.
Goodbye!
Congratulations! You've created a conversational agent in langgraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries. To inspect all the steps your agent just took, check out this LangSmith trace.
Our chatbot still can't remember past interactions on its own, limiting its ability to have coherent, multi-turn conversations. In the next part, we'll add memory to address this.
The full code for the graph we've created in this section is reproduced below, replacing our BasicToolNode with the prebuilt ToolNode, and our route_tools condition with the prebuilt tools_condition.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()
Part 3: Adding Memory to the Chatbot¶
Our chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.
LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.
We will see later that checkpointing is much more powerful than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more.
But before we get too ahead of ourselves, let's add checkpointing to enable multi-turn conversations.
To get started, create a MemorySaver checkpointer.
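The in-memory checkpointer ships with langgraph:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()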
API Reference: MemorySaver
Notice we're using an in-memory checkpointer. This is convenient for our tutorial (it saves it all in-memory). In a production application, you would likely change this to use SqliteSaver or PostgresSaver and connect to your own DB.
Next define the graph. Now that you've already built your own BasicToolNode, we'll replace it with LangGraph's prebuilt ToolNode and tools_condition, since these do some nice things like parallel API execution. Apart from that, the following is all copied from Part 2.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
API Reference: ChatAnthropic | TavilySearchResults | BaseMessage | StateGraph | START | END | add_messages | ToolNode | tools_condition
Finally, compile the graph with the provided checkpointer.
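This is the same compile call as before, now with the checkpointer passed in:

graph = graph_builder.compile(checkpointer=memory)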
Notice the connectivity of the graph hasn't changed since Part 2. All we are doing is checkpointing the State as the graph works through each node.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Now you can interact with your bot! First, pick a thread to use as the key for this conversation.
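The thread ID goes in the config's "configurable" field:

config = {"configurable": {"thread_id": "1"}}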
Next, call your chat bot.
user_input = "Hi there! My name is Will."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================
Hi there! My name is Will.
================================== Ai Message ==================================
Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?
Note: The config was provided as the second positional argument when calling our graph. It importantly is NOT nested within the graph inputs ({'messages': []}).
Let's ask a followup: see if it remembers your name.
user_input = "Remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.
Notice that we aren't using an external list for memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see what's going on.
Don't believe me? Try this using a different config.
# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
    {"messages": [("user", user_input)]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================
Remember my name?
================================== Ai Message ==================================
I apologize, but I don't have any previous context or memory of your name. As an AI assistant, I don't retain information from past conversations. Each interaction starts fresh. Could you please tell me your name so I can address you properly in this conversation?
Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.
By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
snapshot.next # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)
The snapshot above contains the current state values, corresponding config, and the next node to process. In our case, the graph has reached an END state, so next is empty.
Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrarily complex graph states, which is much more expressive and powerful than simple chat memory.
In the next part, we'll introduce human oversight to our bot to handle situations where it may need guidance or verification before proceeding.
Check out the code snippet below to review our graph from this section.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Part 4: Human-in-the-loop¶
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running to ensure that everything is running as intended.
LangGraph supports human-in-the-loop workflows in a number of ways. In this section, we will use LangGraph's interrupt_before functionality to always break before the tools node.
First, start from our existing code. The following is copied from Part 3.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
memory = MemorySaver()
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
API Reference: ChatAnthropic | TavilySearchResults | MemorySaver | StateGraph | START | add_messages | ToolNode | tools_condition
Now, compile the graph, specifying to interrupt_before the tools node.
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt __after__ tools, if desired.
    # interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and comprehensive information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01R4ZFcb5hohpiVZwr88Bxhc', 'input': {'query': 'LangGraph framework for building language model applications'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01R4ZFcb5hohpiVZwr88Bxhc)
Call ID: toolu_01R4ZFcb5hohpiVZwr88Bxhc
Args:
query: LangGraph framework for building language model applications
Let's inspect the graph state to confirm it worked.
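The snapshot's next field lists the node(s) queued to run; given the interrupt, it should contain the tools node:

snapshot = graph.get_state(config)
snapshot.next
('tools',)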
Notice that unlike last time, the "next" node is set to 'tools'. We've interrupted here! Let's check the tool invocation.
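The pending tool call lives on the last message in the snapshot:

existing_message = snapshot.values["messages"][-1]
existing_message.tool_calls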
[{'name': 'tavily_search_results_json',
'args': {'query': 'LangGraph framework for building language model applications'},
'id': 'toolu_01R4ZFcb5hohpiVZwr88Bxhc',
'type': 'tool_call'}]
This query seems reasonable. Nothing to filter here. The simplest thing the human can do is just let the graph continue executing. Let's do that below.
Next, continue the graph! Passing in None will just let the graph continue where it left off, without adding anything new to the state.
# `None` will append nothing new to the current state, letting it resume as if it had never been interrupted
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and comprehensive information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01R4ZFcb5hohpiVZwr88Bxhc', 'input': {'query': 'LangGraph framework for building language model applications'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01R4ZFcb5hohpiVZwr88Bxhc)
Call ID: toolu_01R4ZFcb5hohpiVZwr88Bxhc
Args:
query: LangGraph framework for building language model applications
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://towardsdatascience.com/from-basics-to-advanced-exploring-langgraph-e8c1cf4db787", "content": "LangChain is one of the leading frameworks for building applications powered by Lardge Language Models. With the LangChain Expression Language (LCEL), defining and executing step-by-step action sequences — also known as chains — becomes much simpler. In more technical terms, LangChain allows us to create DAGs (directed acyclic graphs). As LLM applications, particularly LLM agents, have ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "Overview. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures ..."}]
================================== Ai Message ==================================
Thank you for your patience. I've found some valuable information about LangGraph for you. Let me summarize the key points:
1. LangGraph is a library for building stateful, multi-actor applications with Large Language Models (LLMs).
2. It is particularly useful for creating agent and multi-agent workflows.
3. LangGraph is built on top of LangChain, which is one of the leading frameworks for building LLM-powered applications.
4. Key benefits of LangGraph compared to other LLM frameworks include:
a) Cycles: It allows you to define flows that involve cycles, which is essential for most agent architectures.
b) Controllability: Offers more control over the application flow.
c) Persistence: Provides ways to maintain state across interactions.
5. LangGraph works well with the LangChain Expression Language (LCEL), which simplifies the process of defining and executing step-by-step action sequences (chains).
6. In technical terms, LangGraph enables the creation of Directed Acyclic Graphs (DAGs) for LLM applications.
7. It's particularly useful for building more complex LLM agents and multi-agent systems.
LangGraph seems to be an advanced tool that builds upon LangChain to provide more sophisticated capabilities for creating stateful and multi-actor LLM applications. It's especially valuable if you're looking to create complex agent systems or applications that require maintaining state across interactions.
Is there any specific aspect of LangGraph you'd like to know more about? I'd be happy to dive deeper into any particular area of interest.
Review this call's LangSmith trace to see the exact work that was done in the call above. Note that the state is loaded in the first step so that your chatbot can continue where it left off.
Congrats! You've used an interrupt to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. This opens up the potential UIs you can create with your AI systems. Since we have already added a checkpointer, the graph can be paused indefinitely and resumed at any time as if nothing had happened.
Next, we'll explore how to further customize the bot's behavior using custom state updates.
Below is a copy of the code you used in this section. The only difference between this and the previous parts is the addition of the interrupt_before argument.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")

memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt __after__ actions, if desired.
    # interrupt_after=["tools"]
)
Part 5: Manually Updating the State¶
In the previous section, we showed how to interrupt a graph so that a human could inspect its actions. This lets the human read the state, but if they want to change their agent's course, they'll need to have write access.
Thankfully, LangGraph lets you manually update state! Updating the state lets you control the agent's trajectory by modifying its actions (even modifying the past!). This capability is particularly useful when you want to correct the agent's mistakes, explore alternative paths, or guide the agent towards a specific goal.
We'll show how to update a checkpointed state below. As before, first, define your graph. We'll reuse the exact same graph as before.
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    # This is new!
    interrupt_before=["tools"],
    # Note: can also interrupt **after** actions, if desired.
    # interrupt_after=["tools"]
)
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream({"messages": [("user", user_input)]}, config)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
API Reference: ChatAnthropic | TavilySearchResults | MemorySaver | StateGraph | START | add_messages | ToolNode | tools_condition
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
existing_message.pretty_print()
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and comprehensive information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_018YcbFR37CG8RRXnavH5fxZ', 'input': {'query': 'LangGraph: what is it, how is it used in AI development'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_018YcbFR37CG8RRXnavH5fxZ)
Call ID: toolu_018YcbFR37CG8RRXnavH5fxZ
Args:
query: LangGraph: what is it, how is it used in AI development
So far, all of this is an exact repeat of the previous section. The LLM just requested to use the search engine tool, and our graph was interrupted. If we proceed as before, the tool will be called to search the web.
But what if the user wants to intercede? What if we think the chat bot doesn't need to use the tool?
Let's directly provide the correct response!
from langchain_core.messages import AIMessage, ToolMessage
answer = (
    "LangGraph is a library for building stateful, multi-actor applications with LLMs."
)
new_messages = [
    # The LLM API expects some ToolMessage to match its tool call. We'll satisfy that here.
    ToolMessage(content=answer, tool_call_id=existing_message.tool_calls[0]["id"]),
    # And then directly "put words in the LLM's mouth" by populating its response.
    AIMessage(content=answer),
]

new_messages[-1].pretty_print()
graph.update_state(
    # Which state to update
    config,
    # The updated values to provide. The messages in our `State` are "append-only", meaning this will be appended
    # to the existing state. We will review how to update existing messages in the next section!
    {"messages": new_messages},
)

print("\n\nLast 2 messages;")
print(graph.get_state(config).values["messages"][-2:])
================================== Ai Message ==================================
LangGraph is a library for building stateful, multi-actor applications with LLMs.
Last 2 messages;
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='675f7618-367f-44b7-b80e-2834afb02ac5', tool_call_id='toolu_018YcbFR37CG8RRXnavH5fxZ'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', additional_kwargs={}, response_metadata={}, id='35fd5682-0c2a-4200-b192-71c59ac6d412')]
Now the graph is complete, since we've provided the final response message! Since state updates simulate a graph step, they even generate corresponding traces. Inspect the LangSmith trace of the update_state call above to see what's going on.
Notice that our new messages are appended to the messages already in the state. Remember how we defined the State type?
We annotated messages with the pre-built add_messages function. This instructs the graph to always append values to the existing list, rather than overwriting the list directly. The same logic is applied here, so the messages we passed to update_state were appended in the same way!
The update_state function operates as if it were one of the nodes in your graph! By default, the update operation uses the node that was last executed, but you can manually specify it below. Let's add an update and tell the graph to treat it as if it came from the "chatbot".
graph.update_state(
    config,
    {"messages": [AIMessage(content="I'm an AI expert!")]},
    # Which node for this function to act as. It will automatically continue
    # processing as if this node just ran.
    as_node="chatbot",
)
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1ef7d134-3958-6412-8002-3f4b4112062f'}}
Check out the LangSmith trace for this update call at the provided link. Notice from the trace that the graph continues into the tools_condition edge. We just told the graph to treat the update as_node="chatbot". If we follow the diagram below and start from the chatbot node, we naturally end up in the tools_condition edge and then __end__ since our updated message lacks tool calls.
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
Inspect the current state as before to confirm the checkpoint reflects our manual updates.
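A sketch of that inspection, printing the last few messages and the next node:

snapshot = graph.get_state(config)
print(snapshot.values["messages"][-3:])
print(snapshot.next)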
[ToolMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', id='675f7618-367f-44b7-b80e-2834afb02ac5', tool_call_id='toolu_018YcbFR37CG8RRXnavH5fxZ'), AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs.', additional_kwargs={}, response_metadata={}, id='35fd5682-0c2a-4200-b192-71c59ac6d412'), AIMessage(content="I'm an AI expert!", additional_kwargs={}, response_metadata={}, id='288e2f74-f1cb-4082-8c3c-af4695c83117')]
()
Note: We've continued to add AI messages to the state. Since we are acting as the chatbot and responding with an AIMessage that doesn't contain tool_calls, the graph knows that it has entered a finished state (next is empty).
What if you want to overwrite existing messages?¶
The add_messages function we used to annotate our graph's State above controls how updates are made to the messages key. This function looks at any message IDs in the new messages list. If the ID matches a message in the existing state, add_messages overwrites the existing message with the new content.
As an example, let's update the tool invocation to make sure we get good results from our search engine! First, start a new thread:
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "2"}} # we'll use thread_id = 2 here
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and accurate information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01TfAeisrpx4ddgJpoAxqrVh', 'input': {'query': 'LangGraph framework for language models'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01TfAeisrpx4ddgJpoAxqrVh)
Call ID: toolu_01TfAeisrpx4ddgJpoAxqrVh
Args:
query: LangGraph framework for language models
Next, let's update our agent's tool invocation. Maybe we want to search specifically for human-in-the-loop workflows.
from langchain_core.messages import AIMessage
snapshot = graph.get_state(config)
existing_message = snapshot.values["messages"][-1]
print("Original")
print("Message ID", existing_message.id)
print(existing_message.tool_calls[0])
new_tool_call = existing_message.tool_calls[0].copy()
new_tool_call["args"]["query"] = "LangGraph human-in-the-loop workflow"
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[new_tool_call],
    # Important! The ID is how LangGraph knows to REPLACE the message in the state rather than APPEND this message
    id=existing_message.id,
)
print("Updated")
print(new_message.tool_calls[0])
print("Message ID", new_message.id)
graph.update_state(config, {"messages": [new_message]})
print("\n\nTool calls")
graph.get_state(config).values["messages"][-1].tool_calls
Original
Message ID run-342f3f54-356b-4cc1-b747-573f6aa31054-0
{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph framework for language models'}, 'id': 'toolu_01TfAeisrpx4ddgJpoAxqrVh', 'type': 'tool_call'}
Updated
{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph human-in-the-loop workflow'}, 'id': 'toolu_01TfAeisrpx4ddgJpoAxqrVh', 'type': 'tool_call'}
Message ID run-342f3f54-356b-4cc1-b747-573f6aa31054-0
Tool calls
[{'name': 'tavily_search_results_json',
'args': {'query': 'LangGraph human-in-the-loop workflow'},
'id': 'toolu_01TfAeisrpx4ddgJpoAxqrVh',
'type': 'tool_call'}]
API Reference: AIMessage
Notice that we've modified the AI's tool invocation to search for "LangGraph human-in-the-loop workflow" instead of the simple "LangGraph".
Check out the LangSmith trace to see the state update call - you can see our new message has successfully updated the previous AI message.
Resume the graph by streaming with an input of None and the existing config.
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and accurate information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01TfAeisrpx4ddgJpoAxqrVh', 'input': {'query': 'LangGraph framework for language models'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01TfAeisrpx4ddgJpoAxqrVh)
Call ID: toolu_01TfAeisrpx4ddgJpoAxqrVh
Args:
query: LangGraph human-in-the-loop workflow
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://www.youtube.com/watch?v=9BPCV5TYPmg", "content": "In this video, I'll show you how to handle persistence with LangGraph, enabling a unique Human-in-the-Loop workflow. This approach allows a human to grant an..."}, {"url": "https://medium.com/@kbdhunga/implementing-human-in-the-loop-with-langgraph-ccfde023385c", "content": "Implementing a Human-in-the-Loop (HIL) framework in LangGraph with the Streamlit app provides a robust mechanism for user engagement and decision-making. By incorporating breakpoints and ..."}]
================================== Ai Message ==================================
Thank you for your patience. I've found some information about LangGraph, particularly focusing on its human-in-the-loop workflow capabilities. Let me summarize what I've learned for you:
1. LangGraph Overview:
LangGraph is a framework for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating complex, interactive AI systems.
2. Human-in-the-Loop (HIL) Workflow:
One of the key features of LangGraph is its support for human-in-the-loop workflows. This means that it allows for human intervention and decision-making within AI-driven processes.
3. Persistence Handling:
LangGraph offers capabilities for handling persistence, which is crucial for maintaining state across interactions in a workflow.
4. Implementation with Streamlit:
There are examples of implementing LangGraph's human-in-the-loop functionality using Streamlit, a popular Python library for creating web apps. This combination allows for the creation of interactive user interfaces for AI applications.
5. Breakpoints and User Engagement:
LangGraph allows the incorporation of breakpoints in the workflow. These breakpoints are points where the system can pause and wait for human input or decision-making, enhancing user engagement and control over the AI process.
6. Decision-Making Mechanism:
The human-in-the-loop framework in LangGraph provides a robust mechanism for integrating user decision-making into AI workflows. This is particularly useful in scenarios where human judgment or expertise is needed to guide or validate AI actions.
7. Flexibility and Customization:
From the information available, it seems that LangGraph offers flexibility in how human-in-the-loop processes are implemented, allowing developers to customize the interaction points and the nature of human involvement based on their specific use case.
LangGraph appears to be a powerful tool for developers looking to create more interactive and controllable AI applications, especially those that benefit from human oversight or input at crucial stages of the process.
Would you like me to research any specific aspect of LangGraph in more detail, or do you have any questions about what I've found so far?
Check out the trace to see the tool call and the later LLM response. Notice that the graph now queries the search engine using our updated query term: we were able to manually override the LLM's search here!
All of this is reflected in the graph's checkpointed memory, meaning if we continue the conversation, it will recall all the modified state.
events = graph.stream(
{
"messages": (
"user",
"Remember what I'm learning about?",
)
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message =================================
Remember what I'm learning about?
================================== Ai Message ==================================
I apologize for my oversight. You're absolutely right to remind me. You mentioned that you're learning LangGraph. Thank you for bringing that back into focus.
Since you're in the process of learning LangGraph, it would be helpful to know more about your current level of understanding and what specific aspects of LangGraph you're most interested in or finding challenging. This way, I can provide more targeted information or explanations that align with your learning journey.
Are there any particular areas of LangGraph you'd like to explore further? For example:
1. Basic concepts and architecture of LangGraph
2. Setting up and getting started with LangGraph
3. Implementing specific features like the human-in-the-loop workflow
4. Best practices for using LangGraph in projects
5. Comparisons with other similar frameworks
Or if you have any specific questions about what you've learned so far, I'd be happy to help clarify or expand on those topics. Please let me know what would be most useful for your learning process.
Congratulations! You've used interrupt_before and update_state to manually modify the state as a part of a human-in-the-loop workflow. Interruptions and state modifications let you control how the agent behaves. Combined with persistent checkpointing, it means you can pause an action and resume at any point. Your user doesn't have to be available when the graph interrupts!
The graph code for this section is identical to the previous ones. The key snippet to remember is to add .compile(..., interrupt_before=[...]) (or interrupt_after) if you want to explicitly pause the graph whenever it reaches a node. Then you can use update_state to modify the checkpoint and control how the graph should proceed.
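As a minimal sketch of that pattern (reusing the graph_builder, memory checkpointer, new_message, and thread config objects from earlier in this tutorial; "tools" is just the node name used above):
graph = graph_builder.compile(
    checkpointer=memory,
    # Pause execution just before the "tools" node runs
    interrupt_before=["tools"],
)
config = {"configurable": {"thread_id": "1"}}
# ... stream until the graph interrupts, then edit the checkpoint in place:
graph.update_state(config, {"messages": [new_message]})
# Passing None as the input resumes from the (now modified) checkpoint:
for event in graph.stream(None, config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()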
Part 6: Customizing State¶
So far, we've relied on a simple state (it's just a list of messages!). You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state.
In this section, we will extend our chat bot with a new node to illustrate this.
In the examples above, we involved a human deterministically: the graph always interrupted whenever a tool was invoked. Suppose we wanted our chat bot to have the choice of relying on a human.
One way to do this is to create a passthrough "human" node, before which the graph will always stop. We will only execute this node if the LLM invokes a "human" tool.
For our convenience, we will include an "ask_human" flag in our graph state that we will flip if the LLM calls this tool.
Below, define this new graph with an updated State:
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
# This flag is new
ask_human: bool
API Reference: ChatAnthropic | TavilySearchResults | MemorySaver | StateGraph | START | add_messages | ToolNode | tools_condition
Next, define a schema to show the model so that it can decide whether to request assistance.
Using Pydantic with LangChain
This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
from pydantic import BaseModel
class RequestAssistance(BaseModel):
"""Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.
To use this function, relay the user's 'request' so the expert can provide the right guidance.
"""
request: str
Next, define the chatbot node. The primary modification here is to flip the ask_human flag if we see that the chat bot has invoked the RequestAssistance tool.
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
response = llm_with_tools.invoke(state["messages"])
ask_human = False
if (
response.tool_calls
and response.tool_calls[0]["name"] == RequestAssistance.__name__
):
ask_human = True
return {"messages": [response], "ask_human": ask_human}
Next, create the graph builder and add the chatbot and tools nodes to the graph, same as before.
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
Next, create the "human" node
. This node
function is mostly a placeholder in our graph that will trigger an interrupt. If the human does not manually update the state during the interrupt
, it inserts a tool message so the LLM knows the user was requested but didn't respond. This node also unsets the ask_human
flag so the graph knows not to revisit the node unless further requests are made.
接下来,创建“人类” node
。这个 node
函数在我们的图中主要是一个占位符,用于触发中断。如果人类在 interrupt
期间没有手动更新状态,它会插入一条工具消息,以便LLM知道用户被请求但没有回应。该节点还会取消设置 ask_human
标志,以便图知道在没有进一步请求的情况下不再访问该节点。
from langchain_core.messages import AIMessage, ToolMessage
def create_response(response: str, ai_message: AIMessage):
return ToolMessage(
content=response,
tool_call_id=ai_message.tool_calls[0]["id"],
)
def human_node(state: State):
new_messages = []
if not isinstance(state["messages"][-1], ToolMessage):
# Typically, the user will have updated the state during the interrupt.
# If they choose not to, we will include a placeholder ToolMessage to
# let the LLM continue.
new_messages.append(
create_response("No response from human.", state["messages"][-1])
)
return {
# Append the new messages
"messages": new_messages,
# Unset the flag
"ask_human": False,
}
graph_builder.add_node("human", human_node)
Next, define the conditional logic. The select_next_node function will route to the human node if the flag is set. Otherwise, it lets the prebuilt tools_condition function choose the next node.
Recall that the tools_condition function simply checks to see if the chatbot has responded with any tool_calls in its response message. If so, it routes to the action node. Otherwise, it ends the graph.
def select_next_node(state: State):
if state["ask_human"]:
return "human"
# Otherwise, we can route as before
return tools_condition(state)
graph_builder.add_conditional_edges(
"chatbot",
select_next_node,
{"human": "human", "tools": "tools", END: END},
)
Finally, add the simple directed edges and compile the graph. These edges instruct the graph to always flow from node a -> b whenever a finishes executing.
# The rest is the same
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
checkpointer=memory,
# We interrupt before 'human' here instead.
interrupt_before=["human"],
)
If you have the visualization dependencies installed, you can see the graph structure below:
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
The chat bot can either request help from a human (chatbot->select->human), invoke the search engine tool (chatbot->select->action), or directly respond (chatbot->select->end). Once an action or request has been made, the graph will transition back to the chatbot node to continue operations.
Let's see this graph in action. We will request expert assistance to illustrate our graph.
user_input = "I need some expert guidance for building this AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message =================================
I need some expert guidance for building this AI agent. Could you request assistance for me?
================================== Ai Message ==================================
[{'text': "Certainly! I understand that you need expert guidance for building an AI agent. I'll use the RequestAssistance function to escalate your request to an expert who can provide you with the specialized knowledge and support you need. Let me do that for you right away.", 'type': 'text'}, {'id': 'toolu_01Mo3N2c1byuSZwT1vyJWRia', 'input': {'request': 'The user needs expert guidance for building an AI agent. They require specialized knowledge and support in AI development and implementation.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}]
Tool Calls:
RequestAssistance (toolu_01Mo3N2c1byuSZwT1vyJWRia)
Call ID: toolu_01Mo3N2c1byuSZwT1vyJWRia
Args:
request: The user needs expert guidance for building an AI agent. They require specialized knowledge and support in AI development and implementation.
Notice: the LLM has invoked the "RequestAssistance" tool we provided it, and the interrupt has been set. Let's inspect the graph state to confirm.
The graph state is indeed interrupted before the 'human' node. We can act as the "expert" in this scenario and manually update the state by adding a new ToolMessage with our input.
Next, respond to the chatbot's request by:
1. Creating a ToolMessage with our response. This will be passed back to the chatbot.
2. Calling update_state to manually update the graph state.
ai_message = snapshot.values["messages"][-1]
human_response = (
"We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
" It's much more reliable and extensible than simple autonomous agents."
)
tool_message = create_response(human_response, ai_message)
graph.update_state(config, {"messages": [tool_message]})
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1ef7d092-bb30-6bee-8002-015e7e1c56c0'}}
You can inspect the state to confirm our response was added.
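The accessor that produced the listing below isn't shown; a minimal version, mirroring the state reads used earlier, would be:
graph.get_state(config).values["messages"]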
[HumanMessage(content='I need some expert guidance for building this AI agent. Could you request assistance for me?', additional_kwargs={}, response_metadata={}, id='3f28f959-9ab7-489a-9c58-7ed1b49cedf3'),
AIMessage(content=[{'text': "Certainly! I understand that you need expert guidance for building an AI agent. I'll use the RequestAssistance function to escalate your request to an expert who can provide you with the specialized knowledge and support you need. Let me do that for you right away.", 'type': 'text'}, {'id': 'toolu_01Mo3N2c1byuSZwT1vyJWRia', 'input': {'request': 'The user needs expert guidance for building an AI agent. They require specialized knowledge and support in AI development and implementation.'}, 'name': 'RequestAssistance', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01VRnZvVbgsVRbQaQuvsziDx', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 516, 'output_tokens': 130}}, id='run-4e3f7906-5887-40d9-9267-5beefe7b3b76-0', tool_calls=[{'name': 'RequestAssistance', 'args': {'request': 'The user needs expert guidance for building an AI agent. They require specialized knowledge and support in AI development and implementation.'}, 'id': 'toolu_01Mo3N2c1byuSZwT1vyJWRia', 'type': 'tool_call'}], usage_metadata={'input_tokens': 516, 'output_tokens': 130, 'total_tokens': 646}),
ToolMessage(content="We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.", id='8583b899-d898-4051-9f36-f5e5d11e9a37', tool_call_id='toolu_01Mo3N2c1byuSZwT1vyJWRia')]
Next, resume the graph by invoking it with None as the inputs.
events = graph.stream(None, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================= Tool Message =================================
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.
================================= Tool Message =================================
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
Thank you for your patience. I've escalated your request to our expert team, and they have provided some initial guidance. Here's what they suggest:
The experts recommend that you check out LangGraph for building your AI agent. They mention that LangGraph is a more reliable and extensible option compared to simple autonomous agents.
LangGraph is likely a framework or tool designed specifically for creating complex AI agents. It seems to offer advantages in terms of reliability and extensibility, which are crucial factors when developing sophisticated AI systems.
To further assist you, I can provide some additional context and next steps:
1. Research LangGraph: Look up documentation, tutorials, and examples of LangGraph to understand its features and how it can help you build your AI agent.
2. Compare with other options: While the experts recommend LangGraph, it might be useful to understand how it compares to other AI agent development frameworks or tools you might have been considering.
3. Assess your requirements: Consider your specific needs for the AI agent you want to build. Think about the tasks it needs to perform, the level of complexity required, and how LangGraph's features align with these requirements.
4. Start with a small project: If you decide to use LangGraph, consider beginning with a small, manageable project to familiarize yourself with the framework.
5. Seek community support: Look for LangGraph user communities, forums, or discussion groups where you can ask questions and get additional support as you build your agent.
6. Consider additional training: Depending on your current skill level, you might want to look into courses or workshops that focus on AI agent development, particularly those that cover LangGraph.
Do you have any specific questions about LangGraph or AI agent development that you'd like me to try to answer? Or would you like me to search for more detailed information about LangGraph and its features?
Notice that the chat bot has incorporated the updated state in its final response. Since everything was checkpointed, the "expert" human in the loop could perform the update at any time without affecting the graph's execution.
Congratulations! You've now added an additional node to your assistant graph to let the chat bot decide for itself whether or not it needs to interrupt execution. You did so by updating the graph State with a new ask_human field and modifying the interruption logic when compiling the graph. This lets you dynamically include a human in the loop while maintaining full memory every time you execute the graph.
We're almost done with the tutorial, but there is one more concept we'd like to review before finishing that connects checkpointing and state updates.
This section's code is reproduced below for your reference.
Full Code
from typing import Annotated

from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, ToolMessage

# NOTE: you must use langchain-core >= 0.3 with Pydantic v2
from pydantic import BaseModel
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]
    # This flag is new
    ask_human: bool


class RequestAssistance(BaseModel):
    """Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.

    To use this function, relay the user's 'request' so the expert can provide the right guidance.
    """

    request: str


tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])


def chatbot(state: State):
    response = llm_with_tools.invoke(state["messages"])
    ask_human = False
    if (
        response.tool_calls
        and response.tool_calls[0]["name"] == RequestAssistance.__name__
    ):
        ask_human = True
    return {"messages": [response], "ask_human": ask_human}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))


def create_response(response: str, ai_message: AIMessage):
    return ToolMessage(
        content=response,
        tool_call_id=ai_message.tool_calls[0]["id"],
    )


def human_node(state: State):
    new_messages = []
    if not isinstance(state["messages"][-1], ToolMessage):
        # Typically, the user will have updated the state during the interrupt.
        # If they choose not to, we will include a placeholder ToolMessage to
        # let the LLM continue.
        new_messages.append(
            create_response("No response from human.", state["messages"][-1])
        )
    return {
        # Append the new messages
        "messages": new_messages,
        # Unset the flag
        "ask_human": False,
    }


graph_builder.add_node("human", human_node)


def select_next_node(state: State):
    if state["ask_human"]:
        return "human"
    # Otherwise, we can route as before
    return tools_condition(state)


graph_builder.add_conditional_edges(
    "chatbot",
    select_next_node,
    {"human": "human", "tools": "tools", "__end__": "__end__"},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["human"],
)
Part 7: Time Travel¶
In a typical chat bot workflow, the user interacts with the bot 1 or more times to accomplish a task. In the previous sections, we saw how to add memory and a human-in-the-loop to be able to checkpoint our graph state and manually override the state to control future responses.
But what if you want to let your user start from a previous response and "branch off" to explore a separate outcome?
Or what if you want users to be able to "rewind" your assistant's work to fix some mistakes or try a different strategy (common in applications like autonomous software engineers)?
You can create both of these experiences and more using LangGraph's built-in "time travel" functionality.
In this section, you will "rewind" your graph by fetching a checkpoint using the graph's get_state_history method. You can then resume execution at this previous point in time.
First, recall our chatbot graph. We don't need to make any changes from before:
from typing import Annotated, Literal
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import AIMessage, ToolMessage
# NOTE: you must use langchain-core >= 0.3 with Pydantic v2
from pydantic import BaseModel
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
class State(TypedDict):
messages: Annotated[list, add_messages]
# This flag is new
ask_human: bool
class RequestAssistance(BaseModel):
"""Escalate the conversation to an expert. Use this if you are unable to assist directly or if the user requires support beyond your permissions.
To use this function, relay the user's 'request' so the expert can provide the right guidance.
"""
request: str
tool = TavilySearchResults(max_results=2)
tools = [tool]
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# We can bind the llm to a tool definition, a pydantic model, or a json schema
llm_with_tools = llm.bind_tools(tools + [RequestAssistance])
def chatbot(state: State):
response = llm_with_tools.invoke(state["messages"])
ask_human = False
if (
response.tool_calls
and response.tool_calls[0]["name"] == RequestAssistance.__name__
):
ask_human = True
return {"messages": [response], "ask_human": ask_human}
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools=[tool]))
def create_response(response: str, ai_message: AIMessage):
return ToolMessage(
content=response,
tool_call_id=ai_message.tool_calls[0]["id"],
)
def human_node(state: State):
new_messages = []
if not isinstance(state["messages"][-1], ToolMessage):
# Typically, the user will have updated the state during the interrupt.
# If they choose not to, we will include a placeholder ToolMessage to
# let the LLM continue.
new_messages.append(
create_response("No response from human.", state["messages"][-1])
)
return {
# Append the new messages
"messages": new_messages,
# Unset the flag
"ask_human": False,
}
graph_builder.add_node("human", human_node)
def select_next_node(state: State):
if state["ask_human"]:
return "human"
# Otherwise, we can route as before
return tools_condition(state)
graph_builder.add_conditional_edges(
"chatbot",
select_next_node,
{"human": "human", "tools": "tools", END: END},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge("human", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(
checkpointer=memory,
interrupt_before=["human"],
)
API Reference: ChatAnthropic | TavilySearchResults | AIMessage | ToolMessage | MemorySaver | StateGraph | START | add_messages | ToolNode | tools_condition
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
Let's have our graph take a couple steps. Every step will be checkpointed in its state history:
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
{
"messages": [
("user", "I'm learning LangGraph. Could you do some research on it for me?")
]
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and accurate information, I'll use the Tavily search function to gather details about LangGraph. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_019HPZEw6v1eSLBXnwxk6MZm', 'input': {'query': 'LangGraph framework for language models'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_019HPZEw6v1eSLBXnwxk6MZm)
Call ID: toolu_019HPZEw6v1eSLBXnwxk6MZm
Args:
query: LangGraph framework for language models
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141", "content": "LangGraph is a powerful tool for building stateful, multi-actor applications with Large Language Models (LLMs). It extends the LangChain library, allowing you to coordinate multiple chains (or ..."}, {"url": "https://towardsdatascience.com/from-basics-to-advanced-exploring-langgraph-e8c1cf4db787", "content": "LangChain is one of the leading frameworks for building applications powered by Lardge Language Models. With the LangChain Expression Language (LCEL), defining and executing step-by-step action sequences — also known as chains — becomes much simpler. In more technical terms, LangChain allows us to create DAGs (directed acyclic graphs)."}]
================================== Ai Message ==================================
Thank you for your patience. I've gathered some information about LangGraph for you. Let me summarize the key points:
1. What is LangGraph?
LangGraph is a powerful tool designed for building stateful, multi-actor applications using Large Language Models (LLMs). It's an extension of the LangChain library, which is already a popular framework for developing LLM-powered applications.
2. Purpose and Functionality:
- LangGraph allows developers to coordinate multiple chains or actors within a single application.
- It enhances the capabilities of LangChain by introducing more complex, stateful workflows.
3. Relation to LangChain:
- LangGraph builds upon LangChain, which is one of the leading frameworks for creating LLM-powered applications.
- LangChain itself uses the LangChain Expression Language (LCEL) to define and execute step-by-step action sequences, also known as chains.
- LangChain allows the creation of DAGs (Directed Acyclic Graphs), which represent the flow of operations in an application.
4. Key Features:
- Stateful Applications: Unlike simple query-response models, LangGraph allows the creation of applications that maintain state across interactions.
- Multi-Actor Systems: It supports coordinating multiple AI "actors" or components within a single application, enabling more complex interactions and workflows.
5. Use Cases:
While not explicitly mentioned in the search results, LangGraph is typically used for creating more sophisticated AI applications such as:
- Multi-turn conversational agents
- Complex task-planning systems
- Applications requiring memory and context management across multiple steps or actors
Learning LangGraph can be a valuable skill, especially if you're interested in developing advanced applications with LLMs that go beyond simple question-answering or text generation tasks. It allows for the creation of more dynamic, interactive, and stateful AI systems.
Is there any specific aspect of LangGraph you'd like to know more about, or do you have any questions about how it compares to or works with LangChain?
events = graph.stream(
{
"messages": [
("user", "Ya that's helpful. Maybe I'll build an autonomous agent with it!")
]
},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message =================================
Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================
[{'text': "That's an excellent idea! Building an autonomous agent with LangGraph is a great way to explore its capabilities and learn about advanced AI application development. LangGraph's features make it well-suited for creating autonomous agents. Let me provide some additional insights and encouragement for your project.", 'type': 'text'}, {'id': 'toolu_017t6BS5rNCzFWcpxRizDKjE', 'input': {'query': 'building autonomous agents with LangGraph examples and tutorials'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_017t6BS5rNCzFWcpxRizDKjE)
Call ID: toolu_017t6BS5rNCzFWcpxRizDKjE
Args:
query: building autonomous agents with LangGraph examples and tutorials
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://medium.com/@lucas.dahan/hands-on-langgraph-building-a-multi-agent-assistant-06aa68ed942f", "content": "Building the Graph. With our agents defined, we'll create a graph.py file to orchestrate their interactions. The basic graph structure in LangGraph is really simple, here we are going to use ..."}, {"url": "https://medium.com/@cplog/building-tool-calling-conversational-ai-with-langchain-and-langgraph-a-beginners-guide-8d6986cc589e", "content": "Introduction to AI Agent with LangChain and LangGraph: A Beginner’s Guide Two powerful tools revolutionizing this field are LangChain and LangGraph. In this guide, we’ll explore how these technologies can be combined to build a sophisticated AI assistant capable of handling complex conversations and tasks. Tool calling is a standout feature in agentic design, allowing the LLM to interact with external systems or perform specific tasks via the @tool decorator. While the Assistant class presented here is one approach, the flexibility of tool calling and LangGraph allows for a wide range of designs. With LangChain and LangGraph, you can build a powerful, flexible AI assistant capable of handling complex tasks and conversations. Tool calling significantly enhances the AI’s capabilities by enabling interaction with external systems."}]
================================== Ai Message ==================================
Your enthusiasm for building an autonomous agent with LangGraph is fantastic! This project will not only help you learn more about LangGraph but also give you hands-on experience with cutting-edge AI development. Here are some insights and tips to get you started:
1. Multi-Agent Systems:
LangGraph excels at creating multi-agent systems. You could design your autonomous agent as a collection of specialized sub-agents, each handling different aspects of tasks or knowledge domains.
2. Graph Structure:
The basic graph structure in LangGraph is straightforward. You'll create a graph.py file to orchestrate the interactions between your agents or components.
3. Tool Calling:
A key feature you can incorporate is tool calling. This allows your LLM-based agent to interact with external systems or perform specific tasks. You can implement this using the @tool decorator in your code.
4. Flexibility in Design:
LangGraph offers great flexibility in designing your agent. While there are example structures like the Assistant class, you have the freedom to create a wide range of designs tailored to your specific needs.
5. Complex Conversations and Tasks:
Your autonomous agent can be designed to handle sophisticated conversations and complex tasks. This is where LangGraph's stateful nature really shines, allowing your agent to maintain context over extended interactions.
6. Integration with LangChain:
Since LangGraph builds upon LangChain, you can leverage features from both. This combination allows for powerful, flexible AI assistants capable of managing intricate workflows.
7. External System Interaction:
Consider incorporating external APIs or databases to enhance your agent's capabilities. This could include accessing real-time data, performing calculations, or interacting with other services.
8. Tutorial Resources:
There are tutorials available that walk through the process of building AI assistants with LangChain and LangGraph. These can be excellent starting points for your project.
To get started, you might want to:
1. Set up your development environment with LangChain and LangGraph.
2. Define the core functionalities you want your autonomous agent to have.
3. Design the overall structure of your agent, possibly as a multi-agent system.
4. Implement basic interactions and gradually add more complex features like tool calling and state management.
5. Test your agent thoroughly with various scenarios to ensure robust performance.
Remember, building an autonomous agent is an iterative process. Start with a basic version and progressively enhance its capabilities. This approach will help you understand the intricacies of LangGraph while creating a sophisticated AI application.
Do you have any specific ideas about what kind of tasks or domain you want your autonomous agent to specialize in? This could help guide the design and implementation process.
Now that we've had the agent take a couple of steps, we can replay the full state history to see everything that occurred.
to_replay = None
for state in graph.get_state_history(config):
print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
print("-" * 80)
if len(state.values["messages"]) == 6:
# We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
to_replay = state
Num Messages: 8 Next: ()
--------------------------------------------------------------------------------
Num Messages: 7 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 6 Next: ('tools',)
--------------------------------------------------------------------------------
Num Messages: 5 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ('__start__',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ()
--------------------------------------------------------------------------------
Num Messages: 3 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 2 Next: ('tools',)
--------------------------------------------------------------------------------
Num Messages: 1 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 0 Next: ('__start__',)
--------------------------------------------------------------------------------
Notice that checkpoints are saved for every step of the graph. This spans invocations so you can rewind across the full thread's history. We've picked out to_replay as a state to resume from. This is the state after the chatbot node in the second graph invocation above.
Resuming from this point should call the action node next.
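The cell producing the two lines below isn't shown; it is presumably just printing the selected snapshot's attributes:
print(to_replay.next)
print(to_replay.config)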
('tools',)
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d094-2634-687c-8006-49ddde5b2f1c'}}
Notice that the checkpoint's config (to_replay.config) contains a checkpoint_id timestamp. Providing this checkpoint_id value tells LangGraph's checkpointer to load the state from that moment in time. Let's try it below:
# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
if "messages" in event:
event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "That's an excellent idea! Building an autonomous agent with LangGraph is a great way to explore its capabilities and learn about advanced AI application development. LangGraph's features make it well-suited for creating autonomous agents. Let me provide some additional insights and encouragement for your project.", 'type': 'text'}, {'id': 'toolu_017t6BS5rNCzFWcpxRizDKjE', 'input': {'query': 'building autonomous agents with LangGraph examples and tutorials'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_017t6BS5rNCzFWcpxRizDKjE)
Call ID: toolu_017t6BS5rNCzFWcpxRizDKjE
Args:
query: building autonomous agents with LangGraph examples and tutorials
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://blog.langchain.dev/how-to-build-the-ultimate-ai-automation-with-multi-agent-collaboration/", "content": "Learn how to create an autonomous research assistant using LangGraph, an extension of LangChain for agent and multi-agent flows. Follow the steps to define the graph state, initialize the graph, and run the agents for planning, research, review, writing and publishing."}, {"url": "https://medium.com/@lucas.dahan/hands-on-langgraph-building-a-multi-agent-assistant-06aa68ed942f", "content": "Building the Graph. With our agents defined, we'll create a graph.py file to orchestrate their interactions. The basic graph structure in LangGraph is really simple, here we are going to use ..."}]
================================== Ai Message ==================================
Great choice! Building an autonomous agent with LangGraph is an excellent way to dive deep into its capabilities. Based on the additional information I've found, here are some insights and steps to help you get started:
1. LangGraph for Autonomous Agents:
LangGraph is particularly well-suited for creating autonomous agents, especially those involving multi-agent collaboration. It allows you to create complex, stateful workflows that can simulate autonomous behavior.
2. Example Project: Autonomous Research Assistant
One popular example is building an autonomous research assistant. This type of project can help you understand the core concepts of LangGraph while creating something useful.
3. Key Steps in Building an Autonomous Agent:
a. Define the Graph State: This involves setting up the structure that will hold the agent's state and context.
b. Initialize the Graph: Set up the initial conditions and parameters for your agent.
c. Create Multiple Agents: For a complex system, you might create several specialized agents, each with a specific role (e.g., planning, research, review, writing).
d. Orchestrate Interactions: Use LangGraph to manage how these agents interact and collaborate.
4. Components of an Autonomous Agent:
- Planning Agent: Determines the overall strategy and steps.
- Research Agent: Gathers necessary information.
- Review Agent: Evaluates and refines the work.
- Writing Agent: Produces the final output.
- Publishing Agent: Handles the final distribution or application of results.
5. Implementation Tips:
- Start with a simple graph structure in LangGraph.
- Define clear roles and responsibilities for each agent or component.
- Use LangGraph's features to manage state and context across the different stages of your agent's workflow.
6. Learning Resources:
- Look for tutorials and examples specifically on building multi-agent systems with LangGraph.
- The LangChain documentation and community forums can be valuable resources, as LangGraph builds upon LangChain.
7. Potential Applications:
- Autonomous research assistants
- Complex task automation systems
- Interactive storytelling agents
- Autonomous problem-solving systems
Building an autonomous agent with LangGraph is an exciting project that will give you hands-on experience with advanced concepts in AI application development. It's a great way to learn about state management, multi-agent coordination, and complex workflow design in AI systems.
As you embark on this project, remember to start small and gradually increase complexity. You might begin with a simple autonomous agent that performs a specific task, then expand its capabilities and add more agents or components as you become more comfortable with LangGraph.
Do you have a specific type of autonomous agent in mind, or would you like some suggestions for beginner-friendly autonomous agent projects to start with?
Notice that the graph resumed execution from the action node. You can tell this is the case since the first value printed above is the response from our search engine tool.
Congratulations! You've now used time-travel checkpoint traversal in LangGraph. Being able to rewind and explore alternative paths opens up a world of possibilities for debugging, experimentation, and interactive applications.
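As one illustration, instead of resuming with None you could branch off from the same checkpoint by streaming a brand-new input with to_replay.config. This is a sketch, not code from the tutorial, and the user message is hypothetical; it assumes that providing a past checkpoint_id with fresh input forks a new path in the thread's history, consistent with the checkpoint-loading behavior described above:
# Hypothetical branch: re-run from the chosen checkpoint with a different
# user message rather than resuming the original run.
events = graph.stream(
    {"messages": [("user", "On second thought, research LangGraph deployment options.")]},
    to_replay.config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()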
Conclusion¶
Congrats! You've completed the intro tutorial and built a chat bot in LangGraph that supports tool calling, persistent memory, human-in-the-loop interactivity, and even time-travel!
The LangGraph documentation is a great resource for diving deeper into the library's capabilities.