GitHub - google-gemini/gemini-fullstack-langgraph-quickstart: Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
Key Points
- This project provides a fullstack application featuring a React frontend and a LangGraph-powered backend agent designed for comprehensive, iterative web research using Google Gemini models.
- The agent dynamically generates search terms, queries Google Search, reflects on results to identify knowledge gaps, and refines searches until it can provide a well-supported, cited answer.
- Serving as an example of research-augmented conversational AI, the application integrates LangGraph for orchestration, Gemini for AI tasks, and includes clear instructions for local development and Docker deployment.
This repository, "Gemini Fullstack LangGraph Quickstart," provides a fullstack application demonstrating a research-augmented conversational AI agent. The system pairs a React frontend with a LangGraph-powered backend, designed for comprehensive, iterative web research and answer synthesis using Google Gemini models and the Google Search API.
The core methodology revolves around a LangGraph agent, defined in backend/src/agent/graph.py, which orchestrates a dynamic, reflective research process through the following sequence of operations:
- Initial Query Generation: Upon receiving a user's natural language query, a Google Gemini Large Language Model (LLM) is employed to dynamically generate an initial set of precise search terms or questions. This leverages the LLM's understanding capabilities to translate high-level requests into actionable search queries.
- Web Research and Information Retrieval: For each generated search query, the system utilizes the Google Search API. The Gemini LLM then processes the search results (e.g., snippets, content from retrieved web pages) to extract relevant information, effectively acting as a retrieval mechanism.
- Reflective Reasoning and Knowledge Gap Analysis: A critical step involves a reflective process, also powered by a Gemini LLM. The agent analyzes the information gathered from web research. This analysis aims to identify whether the current knowledge base is sufficient to answer the original user query comprehensively, or if significant knowledge gaps (information deficiencies, inconsistencies, or lack of depth) exist. This meta-cognition allows the agent to assess its own understanding.
- Iterative Refinement and Query Generation: If the reflective analysis identifies knowledge gaps or deems the current information insufficient, the agent enters an iterative refinement loop. A Gemini LLM is again utilized to generate follow-up search queries, specifically designed to address the identified gaps. This process of web research, reflection, and new query generation repeats for a pre-configured maximum number of iterations, allowing the agent to progressively deepen its understanding.
- Answer Synthesis and Citation: Once the agent's internal reflection determines that sufficient information has been gathered, a final Google Gemini LLM is tasked with synthesizing all accumulated data into a coherent, well-structured answer. This synthesized response explicitly includes citations from the web sources from which the information was extracted, ensuring traceability and verifiability.
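The generate–search–reflect–refine–synthesize loop above can be sketched in plain Python. This is a minimal illustration, not the repository's implementation (which lives in backend/src/agent/graph.py): `generate_queries`, `web_search`, `reflect`, and `synthesize` are hypothetical stubs standing in for the Gemini and Google Search calls.

```python
# Sketch of the iterative research loop described above.
# All four helper functions are illustrative stubs, not the repo's code.

def generate_queries(question: str) -> list[str]:
    # In the real agent, a Gemini LLM turns the user question into search terms.
    return [f"background on {question}", f"recent developments in {question}"]

def web_search(query: str) -> str:
    # Stand-in for the Google Search API plus Gemini summarization of results.
    return f"summary of results for: {query}"

def reflect(question: str, findings: list[str]) -> tuple[bool, list[str]]:
    # The real agent asks a Gemini LLM whether the findings suffice and, if not,
    # what follow-up queries would close the gaps; here we stop after 4 summaries.
    sufficient = len(findings) >= 4
    follow_ups = [] if sufficient else [f"details missing about {question}"]
    return sufficient, follow_ups

def synthesize(question: str, findings: list[str]) -> str:
    # The real agent produces a cited answer; here we just count the sources.
    return f"Answer to '{question}' based on {len(findings)} sources."

def research_agent(question: str, max_iterations: int = 3) -> str:
    # Iterate: search, reflect, refine queries, until sufficient or out of budget.
    findings: list[str] = []
    queries = generate_queries(question)
    for _ in range(max_iterations):
        findings.extend(web_search(q) for q in queries)
        sufficient, queries = reflect(question, findings)
        if sufficient:
            break
    return synthesize(question, findings)

print(research_agent("LangGraph agents"))
```

The key structural point the sketch captures is that reflection both gates termination and produces the next round of queries, which is exactly the branching logic LangGraph's state machine expresses in the actual agent.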
The system's architecture comprises two primary components: a frontend/ directory housing a React application built with Vite, with Tailwind CSS and Shadcn UI for the user interface; and a backend/ directory containing the LangGraph/FastAPI application that encapsulates the research agent logic. LangGraph manages the stateful execution of the complex, multi-step agentic workflow, enabling the iterative and branching logic required for advanced reasoning.

Production deployments of the LangGraph backend require a Redis instance for pub-sub messaging (enabling real-time streaming of outputs) and a Postgres database for persistent storage of assistants, threads, and runs, and for managing the background task queue with exactly-once semantics. The project is licensed under Apache-2.0.
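LangGraph models a workflow like this as a graph of nodes that read and update a shared state object. As a rough illustration of that pattern, here is a hedged sketch of what the agent's state might look like, expressed with the standard-library typing.TypedDict; the field names are illustrative assumptions, not the actual schema in backend/src/agent/graph.py.

```python
from typing import TypedDict

class ResearchState(TypedDict):
    # Illustrative fields only; the repo's real state schema may differ.
    question: str              # the user's original query
    search_queries: list[str]  # queries pending execution
    findings: list[str]        # summaries extracted from web results
    iterations: int            # loop counter checked against the configured maximum
    answer: str                # final cited answer, filled in by the synthesis step

# Each node in the graph would receive a state like this and return updates to it.
state: ResearchState = {
    "question": "What is LangGraph?",
    "search_queries": [],
    "findings": [],
    "iterations": 0,
    "answer": "",
}
print(state["question"])
```

Keeping all intermediate results in one typed state object is what lets the graph's conditional edges decide, after each reflection step, whether to loop back into research or proceed to answer synthesis.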