Open Responses

2026.01.17
Web · by 이호민
#LLM · #OpenAI API · #Interoperability · #Specification · #Agent

Key Points

  • Open Responses is an open-source specification and ecosystem designed to create multi-provider, interoperable LLM interfaces based on the OpenAI Responses API.
  • It aims to standardize LLM interactions by providing a shared schema and tooling, enabling unified requests and outputs across diverse language model providers.
  • The specification is built for portability, supporting complex agentic workflows with consistent streaming and tool invocation patterns, while remaining extensible for provider-specific features.

Open Responses is an open-source specification and accompanying ecosystem designed to create multi-provider, interoperable interfaces for Large Language Models (LLMs). It takes the OpenAI Responses API as its foundation and extends it to achieve provider independence.

The core methodology of Open Responses centers on establishing a shared schema and a unified tooling layer. This approach aims to standardize the interaction with diverse LLM providers, enabling a consistent experience for tasks such as calling language models, streaming incremental results, and orchestrating complex agentic workflows, irrespective of the underlying LLM service.
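As a rough illustration of what a shared schema buys, the sketch below models a canonical request as plain Python dataclasses. The field names ("model", "input", "tools", "stream") echo the OpenAI Responses API; the class names and the example tool are my own illustrative assumptions, not part of the specification.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ToolSpec:
    """Description of one callable tool, in a provider-neutral shape."""
    name: str
    description: str
    parameters: dict  # JSON Schema for the tool's arguments

@dataclass
class ResponseRequest:
    """One canonical request body, serializable for any conforming provider."""
    model: str
    input: str
    tools: list = field(default_factory=list)
    stream: bool = False

# A single request object that any conforming backend could accept.
req = ResponseRequest(
    model="gpt-4.1-mini",
    input="What is the weather in Seoul?",
    tools=[ToolSpec(
        name="get_weather",  # hypothetical tool for illustration
        description="Look up current weather for a city",
        parameters={"type": "object",
                    "properties": {"city": {"type": "string"}}},
    )],
)
payload = asdict(req)  # plain dict, ready to serialize and POST anywhere
```

Because the schema is canonical, switching providers is a matter of pointing the same payload at a different endpoint rather than rewriting the request shape.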

The primary motivation for Open Responses stems from the current fragmentation within LLM APIs. While fundamental building blocks like messages, tool calls, streaming capabilities, and multimodal inputs are common across providers, their specific encodings and API interfaces vary significantly. Open Responses addresses this by defining a single, open specification that describes requests and outputs universally, minimizing the translation effort required to switch between or integrate multiple providers.

Its design adheres to three key principles:

  1. Multi-provider by default: The specification defines a single, canonical schema that is designed to map cleanly and efficiently to the proprietary APIs of various model providers. This abstraction layer allows developers to write code once and deploy it across different LLM backends.
  2. Friendly to real-world agentic workflows: The specification provides consistent patterns for critical agentic capabilities. This includes standardized formats for streaming events, ensuring uniform handling of progressive outputs. It also defines common patterns for tool invocation, detailing how external functions or services are described and called by the LLM. A foundational concept is "items," defined as the atomic unit of model output and tool use, providing a granular and consistent structure for processing LLM responses and integrating with external tools.
  3. Extensible without fragmentation: Open Responses maintains a stable core specification that ensures broad compatibility while simultaneously allowing for the integration of provider-specific features or extensions where they do not yet generalize across the ecosystem. This approach prevents the core standard from becoming overly prescriptive while accommodating innovation.
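The streaming and "items" concepts above can be sketched as a small event accumulator that folds incremental events into completed output items. The event names mimic the Responses-API streaming style (e.g. `response.output_text.delta`), but the exact event vocabulary and payload shapes here are assumptions for illustration; the specification's OpenAPI reference is the normative source.

```python
def accumulate(events):
    """Fold a stream of (event_type, payload) pairs into finished items."""
    items = []       # completed items, in output order
    text_parts = []  # text deltas for the item currently streaming
    for kind, payload in events:
        if kind == "response.output_item.added":
            text_parts = []                       # a new item begins
        elif kind == "response.output_text.delta":
            text_parts.append(payload)            # incremental text chunk
        elif kind == "response.output_item.done":
            items.append({"type": "message", "text": "".join(text_parts)})
        elif kind == "response.function_call":    # a tool-use item (assumed shape)
            items.append({"type": "function_call", **payload})
    return items

# A simulated stream: one message item streamed as deltas, then one tool call.
stream = [
    ("response.output_item.added", None),
    ("response.output_text.delta", "Hel"),
    ("response.output_text.delta", "lo"),
    ("response.output_item.done", None),
    ("response.function_call",
     {"name": "get_weather", "arguments": '{"city": "Seoul"}'}),
]
items = accumulate(stream)
```

Treating both model text and tool calls as items of the same stream gives agent loops one uniform structure to iterate over, regardless of provider.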

The technical contract for Open Responses is detailed via an OpenAPI reference, providing a machine-readable definition of its entire API surface area, including request/response types and data structures. Interoperability and adherence to the specification are validated through a suite of acceptance tests, which ensure implementations conform to the defined behaviors and data formats. The project is community-driven, inviting contributions across various technical domains including schema definition, streaming protocols, tooling development, test suite expansion, and documentation.
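In the same spirit as those acceptance tests, a minimal conformance check can be phrased as assertions over a response body. The required-field list below is a guess at the spec's minimal surface, not the actual test suite, which validates implementations against the full OpenAPI definition.

```python
REQUIRED_TOP_LEVEL = {"id", "model", "output"}  # assumed minimal surface

def check_conformance(response_body: dict) -> list:
    """Return human-readable violations; an empty list means the body passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_TOP_LEVEL - response_body.keys())]
    # Every output item needs a type discriminator so clients can dispatch on it.
    for i, item in enumerate(response_body.get("output", [])):
        if "type" not in item:
            problems.append(f"output[{i}] lacks a 'type' discriminator")
    return problems

good = {"id": "resp_1", "model": "m",
        "output": [{"type": "message", "text": "hi"}]}
bad = {"model": "m", "output": [{}]}
assert check_conformance(good) == []
```

Machine-readable checks like this are what let the ecosystem verify that independent implementations stay interoperable rather than drifting apart.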