Introducing the Developer Knowledge API and MCP Server - Google Developers Blog

Jess Kuras
2026.02.10
Web · by 이호민
#AI #API #Developer Tools #Documentation #LLM

Key Points

  • Google introduces the Developer Knowledge API, a programmatic source providing accurate and up-to-date access to official Google developer documentation as Markdown for AI-powered tools.
  • Complementing the API, an official Model Context Protocol (MCP) server allows AI assistants and IDEs to integrate and "read" this documentation, enabling reliable features like implementation guidance and troubleshooting.
  • Available now in public preview, these tools aim to ensure AI models have the latest context for Google technology, with future plans for structured content and broader documentation coverage.

The Developer Knowledge API and its associated Model Context Protocol (MCP) server introduce a canonical, machine-readable gateway to Google’s official developer documentation, addressing the critical challenge of providing AI-powered developer tools with accurate and up-to-date information. The core problem identified is that Large Language Models (LLMs) are limited by the recency and reliability of their training data and the fragility of web-scraping for dynamic content, leading to outdated or incorrect guidance when applied to evolving Google technologies like Firebase, Android, and Google Cloud.

The solution comprises two interconnected components. The Developer Knowledge API functions as the programmatic source of truth for Google's public documentation. It enables developers and AI tools to search and retrieve relevant documentation pages and snippets, with the content delivered in Markdown format. Key features of this API include comprehensive coverage across domains such as firebase.google.com, developer.android.com, and docs.cloud.google.com. A crucial aspect of its methodology is its commitment to freshness: during the public preview phase, documentation is re-indexed within 24 hours of an update. This rapid re-indexing process ensures that the retrieved information reflects the latest feature releases, API changes, and best practices, thereby providing a dynamic corpus for LLMs. The API's underlying mechanism involves a search and retrieval system operating on this frequently updated, vast corpus of official Markdown documentation.
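To make the search-and-retrieval flow concrete, the sketch below builds a request URL for a documentation search call. This is a minimal illustration only: the endpoint path, resource name, and parameter names are assumptions for the sake of the example and are not taken from the API's published reference.

```python
import urllib.parse

# Hypothetical base endpoint; the real Developer Knowledge API surface may
# differ in path, version, and parameter names.
BASE_URL = "https://developerknowledge.googleapis.com/v1/documents:search"

def build_search_request(query: str, api_key: str, page_size: int = 5) -> str:
    """Build a GET URL for a hypothetical documentation-search call.

    The response (per the article) would contain matching documentation
    pages or snippets rendered as Markdown.
    """
    params = {
        "query": query,        # natural-language or keyword search text
        "pageSize": page_size, # assumed paging parameter
        "key": api_key,        # API key generated for the preview
    }
    return BASE_URL + "?" + urllib.parse.urlencode(params)

url = build_search_request("Firebase push notifications setup", "YOUR_API_KEY")
```

An AI tool would issue such a request whenever it needs fresh context, relying on the sub-24-hour re-indexing rather than its static training data.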

Complementing the API, the Model Context Protocol (MCP) server serves as an intermediary, facilitating the integration of this live documentation into AI assistants and Integrated Development Environments (IDEs). MCP is presented as an open standard designed to enable AI agents to safely and efficiently access external data sources. In this context, the MCP server acts as a bridge: an AI assistant, configured to interact with the Developer Knowledge MCP server, can formulate natural language queries. The MCP server then translates these queries into structured API calls to the Developer Knowledge API, retrieves the relevant Markdown content, and injects this up-to-date information directly into the LLM's context. This real-time context injection enables AI tools to perform more reliably on tasks such as providing specific implementation guidance (e.g., for Firebase push notifications), assisting with troubleshooting (e.g., diagnosing API errors by referencing documentation), and conducting comparative analyses between Google Cloud services. The technical setup involves generating an API key for the Developer Knowledge API and enabling the MCP server via the gcloud CLI with the command gcloud beta services mcp enable developerknowledge.googleapis.com --project=PROJECT_ID, followed by configuring the specific AI tool to use this server.
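Under the hood, MCP clients talk to servers using JSON-RPC 2.0, and tool invocations use the protocol's tools/call method. The sketch below shows what such a message could look like; the tool name "search_documentation" and its argument schema are illustrative assumptions, not the server's published tool list.

```python
import json

def make_tool_call(request_id: int, query: str) -> str:
    """Serialize a hypothetical MCP tools/call request as JSON-RPC 2.0.

    "tools/call" is the standard MCP method for invoking a server-side
    tool; the tool name and arguments here are assumed for illustration.
    """
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "search_documentation",   # hypothetical tool name
            "arguments": {"query": query},    # hypothetical argument schema
        },
    }
    return json.dumps(msg)

payload = make_tool_call(1, "Diagnose a 403 error from the Cloud Storage API")
```

The server's response would carry the retrieved Markdown, which the assistant then injects into the model's context before answering.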

In essence, the methodology ensures that AI tools, instead of relying on potentially stale static training data or brittle web scraping, can programmatically query and receive current, authoritative Google developer documentation as contextual input. This approach directly addresses the LLM limitation articulated as "LLMs are only as good as the context they are given" by providing a high-fidelity, continuously updated external knowledge source. Future developments aim to enhance this by supporting structured content like code samples and API reference entities, expanding the corpus, and further reducing re-indexing latency.