GitHub - badlogic/pi-mono: AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods
Key Points
- The `pi-mono` monorepo is an open-source AI agent toolkit for building AI agents and managing LLM deployments.
- It includes a unified multi-provider LLM API, an agent runtime, an interactive coding agent CLI, terminal and web UI libraries, a Slack bot, and utilities for managing vLLM deployments.
- Licensed under MIT, the project is currently on an "OSS Vacation" until February 16, 2026, during which all pull requests will be automatically closed.
The document describes the Pi Monorepo, an open-source project designed as a comprehensive toolkit for building AI agents and managing Large Language Model (LLM) deployments. Pi's design is modular: functionality is distributed across several specialized npm packages within a monorepo, allowing for flexible integration and development.
The project's key components and their respective methodologies include:
@mariozechner/pi-ai: This package implements a unified multi-provider LLM API. It abstracts away the specifics of individual LLM providers (e.g., OpenAI, Anthropic, Google), offering a consistent interface for interacting with diverse models, standardizing request and response formats, and simplifying the process of switching between or combining multiple LLM services. The goal is to reduce development overhead for applications that require multi-vendor LLM support.

@mariozechner/pi-agent-core: This package provides the fundamental agent runtime, enabling sophisticated AI agent behavior through:
- Tool Calling: It facilitates the dynamic invocation of external functions or APIs by the agent. This involves defining tool schemas, parsing tool requests from LLM outputs, and executing the associated code, extending the agent's capabilities beyond conversational responses to interaction with real-world systems and data.
- State Management: It manages the persistent and evolving internal state of an agent, including conversational history, context variables, and operational parameters. This is crucial for maintaining coherence and memory across multiple turns or complex workflows.
- Agent Orchestration: It defines the control flow and decision-making mechanisms within an agent, guiding its interactions, processing LLM outputs, and managing tool execution to achieve specific objectives.
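The unified provider abstraction and the tool-calling loop described above can be sketched in TypeScript. All names below (`LlmProvider`, `ToolDef`, `runAgentTurn`, the fake provider) are illustrative assumptions for this sketch, not the actual pi-ai or pi-agent-core API.

```typescript
// Hypothetical sketch of a unified LLM interface with tool calling.
// These types and names are illustrative, NOT the pi-ai/pi-agent-core API.

type Message = { role: "user" | "assistant" | "tool"; content: string };

// A tool the agent may call: a schema-like description plus an executor.
type ToolDef = {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>) => string;
};

// What a model turn can produce: plain text, or a request to call a tool.
type ModelOutput =
  | { kind: "text"; text: string }
  | { kind: "tool_call"; tool: string; args: Record<string, unknown> };

// The provider abstraction: every backend exposes the same complete() shape,
// so callers can swap providers without changing agent code.
interface LlmProvider {
  complete(messages: Message[], tools: ToolDef[]): ModelOutput;
}

// One agent turn: ask the model; if it requests a tool, run the tool, append
// the result to the history, and ask again until plain text comes back.
function runAgentTurn(provider: LlmProvider, tools: ToolDef[], history: Message[]): string {
  for (;;) {
    const out = provider.complete(history, tools);
    if (out.kind === "text") {
      history.push({ role: "assistant", content: out.text });
      return out.text;
    }
    const tool = tools.find((t) => t.name === out.tool);
    const result = tool ? tool.execute(out.args) : `unknown tool: ${out.tool}`;
    history.push({ role: "tool", content: result });
  }
}

// A fake provider for demonstration: it first requests the "add" tool, then
// echoes the tool result as its final answer.
const fakeProvider: LlmProvider = {
  complete(messages) {
    const last = messages[messages.length - 1];
    if (last.role === "tool") return { kind: "text", text: `The sum is ${last.content}.` };
    return { kind: "tool_call", tool: "add", args: { a: 2, b: 3 } };
  },
};

const addTool: ToolDef = {
  name: "add",
  description: "Add two numbers",
  execute: (args) => String((args.a as number) + (args.b as number)),
};

const history: Message[] = [{ role: "user", content: "What is 2 + 3?" }];
console.log(runAgentTurn(fakeProvider, [addTool], history)); // The sum is 5.
```

The mutable `history` array is a stand-in for the runtime's state management: each turn and tool result is appended, preserving memory across the loop.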
@mariozechner/pi-coding-agent: This package implements an interactive coding agent as a Command Line Interface (CLI). It combines the agent core's capabilities with a user-facing terminal interface, letting users interactively direct an AI agent for coding-related tasks and leveraging tool calling for code generation, execution, and debugging.

@mariozechner/pi-mom: This package is a Slack bot that delegates messages to the pi-coding-agent. It integrates the coding agent's capabilities into a team communication platform, routing user queries to the underlying agent so that AI-assisted coding and problem-solving are available directly within Slack.

@mariozechner/pi-tui: This package provides a Terminal User Interface (TUI) library based on differential rendering: only the changed parts of the terminal screen are updated on each frame. This enables responsive, dynamic interactive experiences within a command-line environment, suitable for complex agent interactions or status displays.

@mariozechner/pi-web-ui: This package offers web components designed for AI chat interfaces: reusable, encapsulated UI elements that can be integrated into web applications to build interactive chat experiences, handling message display, input fields, and interaction flows.

@mariozechner/pi-pods: This package provides a CLI for managing vLLM deployments on GPU pods. It addresses the operational side of LLMs, enabling the deployment, scaling, and management of vLLM (a high-throughput inference engine for LLMs) instances on cloud or on-premises GPU infrastructure, streamlining the provisioning of LLM serving endpoints.
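The differential rendering idea behind the TUI library can be illustrated with a minimal sketch: represent a frame as an array of text rows, diff the previous frame against the next, and redraw only the rows that changed. The function name and frame representation here are assumptions for illustration, not pi-tui's actual API.

```typescript
// Minimal sketch of differential terminal rendering (illustrative only).
// A frame is modeled as an array of text rows.

// Compare the previous and next frames and return only the rows that must
// be redrawn, as (rowIndex, newText) pairs.
function diffFrames(prev: string[], next: string[]): Array<[number, string]> {
  const updates: Array<[number, string]> = [];
  const rows = Math.max(prev.length, next.length);
  for (let i = 0; i < rows; i++) {
    const before = prev[i] ?? "";
    const after = next[i] ?? "";
    if (before !== after) updates.push([i, after]);
  }
  return updates;
}

// A renderer would then move the cursor to each changed row (e.g. with the
// ANSI sequence ESC[<row>;1H) and rewrite just that line, instead of
// clearing and repainting the entire screen.
const prevFrame = ["Agent: idle", "Tokens: 0", "Status: ok"];
const nextFrame = ["Agent: running", "Tokens: 0", "Status: ok"];
console.log(diffFrames(prevFrame, nextFrame)); // only row 0 changed
```

For a status display that updates many times per second, this keeps terminal writes proportional to what actually changed, which is what makes complex interactive views feasible in a terminal.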
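For context on what such a pods CLI would provision, the following is a sketch of launching vLLM's OpenAI-compatible server on a GPU host. The model name and flag values are examples only; these are vLLM's own commands, not pi-pods commands.

```shell
# Illustrative only: starting a vLLM OpenAI-compatible serving endpoint,
# the kind of process a GPU-pod management CLI would deploy and supervise.
pip install vllm

python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --tensor-parallel-size 1 \
  --port 8000
```

Once running, the endpoint speaks the OpenAI chat-completions wire format, which is exactly the kind of backend a unified multi-provider API such as pi-ai can target.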
The project is released under the MIT License, indicating permissive use and distribution. Development practices emphasize a monorepo approach with standardized build, test, and linting procedures.