Claude now offers code execution, MCP server connections, file storage, and extended prompt caching through the API, giving developers powerful tools to build agents that analyze data, connect to external systems, and maintain context over longer sessions.
Key Points
1. The Anthropic API now offers a code execution tool, allowing Claude to run Python for data analysis, and an MCP connector for seamless integration with remote Model Context Protocol servers.
2. A new Files API enables developers to store and access documents persistently across sessions, streamlining workflows and integrating with the code execution tool for direct data processing.
3. Extended prompt caching, with a one-hour time-to-live, significantly reduces costs and latency for long-running agent workflows by maintaining context more efficiently.
Anthropic has introduced four new capabilities on its API to empower developers in constructing more sophisticated AI agents: the code execution tool, the Model Context Protocol (MCP) connector, the Files API, and extended prompt caching. These features, in conjunction with Claude Opus 4 and Sonnet 4, aim to provide robust functionalities for data analysis, external system integration, persistent file management, and long-term context maintenance, thereby reducing the need for custom infrastructure.
The code execution tool enables Claude to execute Python code within a sandboxed environment, returning computational results and data visualizations. This transforms Claude from a code-writing assistant into a dynamic data analyst capable of iteratively loading datasets, generating exploratory charts, identifying patterns, and refining outputs based on real-time execution results. The capability supports end-to-end analytical tasks, encompassing use cases such as financial modeling (e.g., generating projections, analyzing portfolios), scientific computing (e.g., executing simulations, processing experimental data), business intelligence (e.g., automated reports, sales data analysis), document processing (e.g., data extraction and transformation), and statistical analysis (e.g., regression, hypothesis testing, predictive modeling). Users receive 50 free hours daily, with additional usage priced at $0.05 per hour per container.
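Enabling the tool amounts to listing it in a Messages API request. The sketch below shows one plausible payload shape; the versioned tool type string, the beta header, and the model ID are assumptions drawn from launch-era beta documentation and should be verified against the current API reference.

```python
import json

# Sketch of a Messages API request that turns on the code execution tool.
# The tool "type" string and the beta header noted below are assumptions
# based on the launch-era beta docs; confirm against current documentation.
payload = {
    "model": "claude-sonnet-4-20250514",  # assumed model ID
    "max_tokens": 4096,
    "messages": [
        {
            "role": "user",
            "content": "Load the attached sales data, compute monthly totals, "
                       "and plot a trend line.",
        }
    ],
    "tools": [
        {
            "type": "code_execution_20250522",  # versioned tool type (assumption)
            "name": "code_execution",
        }
    ],
}

# Sent as POST /v1/messages with the beta header
# "anthropic-beta: code-execution-2025-05-22".
print(json.dumps(payload, indent=2))
```

With the tool attached, Claude decides when to write and run Python itself; the response stream then interleaves generated code, execution output, and any files the run produces.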
The MCP connector streamlines the integration of Claude with remote Model Context Protocol servers. Previously, developers had to build custom client harnesses for MCP connections; the Anthropic API now automates connection management, tool discovery, and error handling. By simply providing a remote MCP server URL, developers can grant Claude access to powerful third-party tools. When configured, Claude autonomously connects to specified MCP servers, retrieves available tools, reasons agentically about tool selection and argument passing, executes tool calls iteratively until a satisfactory result is achieved, and manages authentication and error handling, returning an enhanced response. This leverages a growing ecosystem of remote MCP servers, including integrations with platforms like Zapier and Asana.
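In practice, attaching a server is a matter of adding an entry to the request. The sketch below is a minimal payload under assumptions: the `mcp_servers` field name and beta header follow launch-era docs, and the server URL and name are hypothetical examples.

```python
import json

# Sketch of a request that attaches a remote MCP server via the MCP connector.
# Field names ("mcp_servers", "authorization_token") are assumptions from the
# launch-era beta docs; the URL below is a hypothetical example server.
payload = {
    "model": "claude-sonnet-4-20250514",  # assumed model ID
    "max_tokens": 2048,
    "messages": [
        {"role": "user", "content": "Create a task for the Q3 launch checklist."}
    ],
    "mcp_servers": [
        {
            "type": "url",
            "url": "https://mcp.example.com/sse",   # hypothetical remote MCP server
            "name": "example-server",
            "authorization_token": "TOKEN",          # OAuth token, if the server requires one
        }
    ],
}

# Sent with the beta header "anthropic-beta: mcp-client-2025-04-04"; Claude
# then discovers the server's tools and calls them iteratively as needed.
print(json.dumps(payload, indent=2))
```

Note that tool discovery happens server-side on each request, so the client never needs to enumerate or hard-code the remote server's tool list.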
The Files API simplifies the storage and retrieval of documents within agent workflows. Instead of re-uploading documents with every request, developers can upload files once and reference them persistently across multiple conversations or sessions. This is particularly beneficial for applications involving large document sets such as knowledge bases, technical documentation, or datasets. The Files API integrates directly with the code execution tool, allowing Claude to access and process uploaded files during code execution, and even produce new files (e.g., charts, graphs) as part of the response, maintaining data persistence for analytical tasks.
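The upload-once, reference-many pattern can be sketched as follows. The file ID shown is hypothetical (a real one is returned by the Files API upload endpoint), and the `document`/`file` content-block field names are assumptions from launch-era beta docs.

```python
import json

# Sketch of referencing a previously uploaded file instead of re-sending bytes.
# "file_abc123" is a hypothetical ID returned by a one-time Files API upload;
# the content-block field names below are assumptions from launch-era docs.
file_id = "file_abc123"  # hypothetical ID from a prior upload

message_content = [
    # Point at the stored document by ID; no bytes travel with this request.
    {"type": "document", "source": {"type": "file", "file_id": file_id}},
    {"type": "text", "text": "Summarize the key findings in this report."},
]

payload = {
    "model": "claude-sonnet-4-20250514",  # assumed model ID
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": message_content}],
}

# Sent with the Files API beta header; the same file_id can be reused across
# many conversations, and code-execution runs can read it directly.
print(json.dumps(payload, indent=2))
```

The same ID works across sessions, which is what makes large, stable document sets (knowledge bases, datasets) cheap to reference repeatedly.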
Extended prompt caching offers an alternative to the standard 5-minute time-to-live (TTL) for prompt caching, providing an option for a 1-hour TTL at an additional cost. This 12x increase in cache duration significantly reduces expenses (up to 90%) and latency (up to 85%) for long prompts, especially within long-running agent workflows. It enables agents to maintain extensive contextual knowledge and examples over extended periods, making it feasible to build agents that handle multi-step workflows, analyze complex documents, or coordinate with other systems efficiently and cost-effectively, even at scale.
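Opting into the longer TTL is a per-block annotation on the cached prompt segment. In this sketch, the `"ttl": "1h"` field and the beta header mentioned in the comment are assumptions from launch-era docs; omitting `ttl` falls back to the standard five-minute cache.

```python
import json

# Sketch of marking a large, stable system prompt for one-hour caching.
# The "ttl": "1h" field and the beta header noted below are assumptions
# from the launch-era docs; without "ttl", the default TTL is 5 minutes.
payload = {
    "model": "claude-sonnet-4-20250514",  # assumed model ID
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "<long agent instructions and reference material>",
            "cache_control": {"type": "ephemeral", "ttl": "1h"},  # extended TTL
        }
    ],
    "messages": [{"role": "user", "content": "Continue the workflow."}],
}

# Sent with the beta header "anthropic-beta: extended-cache-ttl-2025-04-11";
# subsequent requests within the hour read the prefix from cache.
print(json.dumps(payload, indent=2))
```

Because the cached prefix must match exactly on later requests, the pattern favors putting stable instructions and reference material first and the changing user turns last.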
These capabilities collectively enhance the agentic methodology of Claude by providing the foundational infrastructure for more autonomous, context-aware, and externally connected AI systems. The core methodology involves augmenting the language model's reasoning capabilities with programmatic execution, external tool invocation, persistent state management, and efficient context retention, thereby enabling the deployment of complex, real-world AI applications without extensive custom backend development.