AI Agent Development, Mastered with This One Video. (LangChain + LangGraph)

2026.02.17
· YouTube · by 배레온 (Busan, Developer)
#AIAgent #Gemini #LangChain #LangGraph #Orchestration

Key Points

  • This comprehensive lecture series introduces LangGraph as a low-level orchestration framework for building advanced AI agents, emphasizing fine-grained control over complex business logic and enabling dynamic multi-agent systems.
  • The curriculum details a step-by-step agent development process, including defining models and tools, creating a shared "State" (data notebook), and defining "Nodes" (workers) as functions that process and return state.
  • It demonstrates how to construct a graph by connecting these nodes using "Edges" for direct transitions and "Conditional Edges" for routing based on agent reasoning, allowing for sophisticated agent workflows like LLM calls and tool execution.

This comprehensive video tutorial series by Blueme AI introduces LangGraph, a low-level orchestration framework and runtime for building, managing, and deploying advanced AI agents. It emphasizes LangGraph's ability to provide transparent control over agent logic, contrasting it with LangChain's more "black-box" approach. The tutorial aims to equip developers with the skills to design sophisticated, multi-agent systems capable of dynamic decision-making, "time travel" (revisiting past states), forking (exploring alternative paths), and dynamic human intervention. The series primarily uses Google's Gemini models (e.g., Gemini 1.5 Flash, Gemini 1.5 Pro) and compares them with GPT models.

The core methodology of LangGraph revolves around five key steps for agent construction, each built on a fundamental concept:

  1. Defining Models and Tools:
    • Language Models (LLMs): The tutorial leverages existing integrations from LangChain, specifically langchain-google-genai's ChatGoogleGenerativeAI. An LLM is initialized, e.g., ChatGoogleGenerativeAI(model="gemini-1.5-flash").
    • Tools: Standard LangChain tools are defined as Python functions decorated with @tool, including type hints for arguments and return values, and descriptive docstrings. Examples include multiply, add, and divide.
    • Binding: A crucial step involves binding the defined tools to the LLM using the model.bind_tools(tools) method. This creates a new LLM instance (model_with_tools) that is aware of and can call the specified tools.
  2. Defining the State:
    • The State serves as the shared memory or "notebook" where all agents/nodes write and read data. It is defined as a TypedDict subclass (TypedDict itself comes from Python's typing/typing_extensions module, not from LangChain).
    • Key fields are declared with specific types and an annotation for handling message accumulation:
      • messages: Annotated[List[AnyMessage], add_messages]: This field holds a list of all interaction messages (human, AI, tool). The Annotated type, combined with the add_messages reducer from langgraph.graph.message, ensures that new messages are automatically appended to the list when the state is updated.
      • llm_calls: int: An integer field to track the number of LLM invocations. This field's value can be incremented using a dictionary get method with a default of 0 for initial access, e.g., state.get("llm_calls", 0) + 1.
  3. Defining Nodes:
    • Nodes are the functional units that perform operations on the State. Each node is implemented as a Python function that accepts the current State as its sole argument and returns a dict representing updates to the State.
    • llm_call Node: This node represents the LLM's operation.
      • It invokes the model_with_tools with the current state["messages"].
      • It captures the LLM's response, which can be a direct answer (content) or a tool call (tool_calls).
      • It updates the messages field by appending the LLM's response and increments llm_calls.
    • tool_node Node: This node executes the tools.
      • It processes the last message in state["messages"], specifically looking for tool_calls in the AI's response.
      • It iterates through each tool call, identifies the tool by its name, retrieves the corresponding tool function (e.g., from a pre-built tools_by_name dictionary mapping tool names to tool objects), and invokes it with the provided arguments.
      • The results of tool execution are then wrapped in ToolMessage objects (which include content and tool_call_id to link back to the original tool call).
      • These ToolMessage objects are added to the messages list in the State.
  4. Creating the Graph:
    • An instance of StateGraph (from langgraph.graph) is initialized with the defined State class.
    • Nodes are added to the graph using graph.add_node(name: str, function: Callable).
    • Edges connecting nodes are defined using graph.add_edge(source_node: str, target_node: str). For example, after a tool executes (tool_node), its result is typically fed back to the LLM (llm_call) for further processing or final response generation.
    • Start and End Nodes: LangGraph provides predefined START and END nodes to mark the entry and exit points of the graph. The START node implicitly receives the initial user input and populates the State.
    • Conditional Edges: These allow dynamic routing based on the content of the State.
      • A "conditional function" is defined (e.g., should_continue) that inspects the State (specifically, the last message from the LLM).
      • This function determines the next node to execute (e.g., tool_node if a tool call is present, or END if a final answer is provided).
      • The conditional edge is added using graph.add_conditional_edges(source_node: str, conditional_function: Callable, mapping: Dict[str, str]). The mapping specifies which target node corresponds to each possible return value of the conditional function. For example, graph.add_conditional_edges("llm_call", should_continue, {"tool_node": "tool_node", "END": END}) routes from llm_call to either tool_node or END based on should_continue's output.
  5. Compiling and Running the Agent:
    • After defining all nodes and edges, the graph is compiled into a runnable agent (a compiled state graph) using graph.compile().
    • The compiled agent can then be invoked with an initial input, and its execution flow will follow the defined graph structure, leveraging the shared State for inter-node communication and decision-making.