GitHub - x1xhlol/system-prompts-and-models-of-ai-tools: FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, Dia & v0. (And other Open Sourced) System Prompts, Internal Tools & AI Models
x1xhlol
2025.04.27
· GitHub · by Anonymous
#AI #LLM #SystemPrompts #AITools #OpenSource

Key Points

  • This GitHub repository serves as a comprehensive, open-source collection detailing system prompts, internal tools, and AI models from numerous AI agents and applications.
  • It offers over 30,000 lines of insights into the structure and functionality of these diverse AI systems, promoting transparency and understanding.
  • The project also highlights critical security warnings for AI startups regarding vulnerabilities stemming from exposed system instructions and model configurations.

This repository serves as a comprehensive, open-source collection of system prompts, internal tool definitions, and AI model configurations gleaned from a wide array of commercial and open-source AI products and agents. It encompasses over 30,000 lines of detailed insights into the underlying engineering of these AI systems.

The core methodology involves the systematic acquisition and aggregation of prompt engineering artifacts and internal tool specifications. This appears to be achieved through observational analysis, and potentially through reverse engineering or careful examination of public information, to deduce the precise instructions and functional interfaces that govern the behavior of various AI agents. The collected data represents the implicit or explicit directives given to large language models (LLMs) to define their persona, constraints, operational procedures, and integration points with other software components.

The repository's structure is organized hierarchically, with top-level directories named after specific AI tools or companies (e.g., Anthropic, Augment Code, Cursor Prompts, Devin AI, NotionAi, Perplexity, VSCode Agent, Warp.dev). Within these directories, users can find the specific system prompts, API definitions, or descriptions of internal tooling that contribute to the unique functionalities of each AI product. This organized data provides a granular view into:

  • System Prompts: The initial, hidden instructions given to an LLM to establish its role, guidelines, and context for interaction. These can include persona definitions, safety constraints, response formatting rules, and meta-instructions for task execution.
  • Internal Tool Definitions: Specifications for how the AI agent interacts with external or internal functions and APIs. This often includes tool names, descriptions, and JSON schema definitions for their input arguments, enabling the AI to perform actions like code generation, web browsing, data retrieval, or file manipulation.
  • Model Configurations (Implied): While not explicitly detailed as model architectures, the collected prompts and tool definitions indirectly reveal aspects of how models are fine-tuned or instructed to utilize their capabilities for specific applications.
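To make the "internal tool definitions" category concrete, the sketch below shows the general shape such a definition commonly takes: a tool name, a description, and a JSON Schema for the input arguments. The `read_file` tool and its fields are hypothetical illustrations, not drawn from any specific product in the repository.

```python
import json

# Hypothetical tool definition in the common name/description/JSON-Schema shape.
# All names and fields below are illustrative assumptions.
read_file_tool = {
    "name": "read_file",
    "description": "Read the contents of a file in the user's workspace.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "Workspace-relative path of the file to read.",
            },
            "start_line": {"type": "integer", "minimum": 1},
            "end_line": {"type": "integer", "minimum": 1},
        },
        "required": ["path"],
    },
}

# An agent runtime typically serializes definitions like this into the model's
# context, so the LLM can emit structured calls that the host validates and runs.
print(json.dumps(read_file_tool, indent=2))
```

Definitions in this style let the host application validate the model's arguments against the schema before executing anything, which is one reason the repository's collected schemas are informative about each agent's capabilities.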

The project operates under the GPL-3.0 license and aims to serve as an educational resource for developers, researchers, and AI enthusiasts interested in the practical application of prompt engineering and AI system design. It also highlights a critical security concern for AI startups: exposed system instructions and internal tool definitions are vulnerable to exploitation and intellectual property loss, so the project advocates secure practices in AI system deployment.