GitHub - Lazydok/cc-skills-template
Blog

Lazydok
2026.03.16
· GitHub
#Agent #AI #Automation #LLM #Multi-agent

Key Points

  • The CC Skills Template enables Claude Code to leverage multi-agent teams, distributing tasks and facilitating parallel work and cross-verification for enhanced performance.
  • It integrates specialized subagents like Gemini CLI, Codex CLI, and Gemini Image, enforcing crucial "MUST rules" for ensemble AI combinations and artifact-based communication for structured collaboration.
  • This framework automates complex workflows, such as iterative design refinement with visual validation loops, comprehensive security audits, and robust architecture planning, yielding significantly more accurate and reliable results from a single prompt.

The provided document outlines a set of skill templates for Claude Code, designed to maximize its capabilities by orchestrating multiple AI agents into collaborative teams. The core premise is to leverage parallel processing and rigorous cross-verification among specialized AI agents to achieve faster, more accurate, and more reliable outcomes than a single agent could.

The system distinguishes between Subagents and Agent Teams. Subagents are independent workers returning individual results (e.g., gemini-cli, codex-cli). Agent Teams, conversely, are collaborative units that share a common task list and engage in bidirectional communication to achieve a collective goal.

Core Methodology and Key Components:

  1. Agent Teams Orchestration: The system enables the dynamic formation of "Agent Teams" that work in parallel. This is an experimental feature activated by setting the environment variable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1. Team members can be displayed in various modes, including tmux for separate panels or in-process within the main terminal.
  2. Specialized Subagents (CLI Tools):
    • gemini-cli (Google Gemini CLI Subagent): Used for tasks requiring web search, Vision-Language Model (VLM) capabilities, and frontend analysis. Configuration involves npm installation, version check (v0.32.1+), and OAuth authentication.
    • codex-cli (OpenAI Codex CLI Subagent): Employed for independent code analysis, logic verification, and security audits. Requires npm installation, version check (v0.112.0+), and ChatGPT account login.
    • gemini-image (Gemini Image Generation): Facilitates AI-powered image generation (UI mockups, icons, banners). Requires a paid Google AI Studio API key stored in a .env file.
  3. Cross-Verification (MUST Rules): A critical component ensuring result reliability. The template specifies "MUST Rules" for AI ensemble combinations based on task type, which are non-negotiable for high-stakes operations:
    • Complex Code Analysis/Algorithms: Claude + Codex (Codex excels in logical reasoning and code accuracy).
    • Web UI/Frontend/Image Analysis: Claude + Gemini (Gemini excels in VLM and web search).
    • Architecture Design/Proposals: Claude + Codex + Gemini (a 3-way gate where all three must approve for the plan to proceed).
    • Security Audits: Claude + Codex (requires at least two independent security perspectives).
    • Financial/Trading Logic: Claude + Codex (essential for mathematical accuracy cross-checks).
Reliability is judged by consensus: CRITICAL (unanimous agreement, mandatory correction), HIGH (2/3 agreement, investigation needed), MEDIUM (1/3 agreement, often false positive).
  4. Artifact-Based Communication: CLI agents (Gemini/Codex) do not communicate directly via messages with the Claude Code team. Instead, they interact by dropping structured artifact files into a shared temporary directory (/tmp/xv/{task-name}/). Claude Code team members then read these files.
    • Standard Artifact Format: All review/critique files adhere to a specific Markdown format, including agent name, date, status (PASS|FAIL|PASS_WITH_COMMENTS), Findings (CRITICAL/HIGH/MEDIUM with line numbers and descriptions), Summary (1-3 sentences), and Verdict (APPROVE|REQUEST_CHANGES). Examples of artifacts include plan_draft.md, claude_review.md, codex_review.md, gemini_research.md, and synthesis_report.md.
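The experimental flag from item 1 is an ordinary environment variable, so enabling it is a one-liner before launching Claude Code. The flag name comes from the template; as an experimental toggle, it may change between Claude Code releases.

```shell
# Enable the experimental Agent Teams feature for the current shell session.
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

# Confirm the flag is visible to child processes (Claude Code reads it at startup).
echo "agent teams flag: ${CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS}"
```

To make it permanent, the same `export` line would go in your shell profile (e.g., `.bashrc` or `.zshrc`).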
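The consensus rule in item 3 maps agreement counts to severity levels. A minimal sketch of that mapping, with the wording of the messages being illustrative rather than taken from the template:

```shell
# Derive finding severity from how many of the three reviewing agents
# (Claude, Codex, Gemini) flagged the same issue.
classify_finding() {
  case "$1" in   # $1 = number of agents in agreement (out of 3)
    3) echo "CRITICAL: unanimous - mandatory correction" ;;
    2) echo "HIGH: 2/3 agree - investigation needed" ;;
    1) echo "MEDIUM: 1/3 - often a false positive" ;;
    *) echo "UNKNOWN: unexpected agreement count" ;;
  esac
}

classify_finding 3
classify_finding 1
```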
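The artifact hand-off in item 4 can be sketched as a reviewer writing a Markdown file into the shared directory and a teammate reading it back. The task name `login-refactor`, the file paths, and the exact heading layout below are assumptions for illustration; only the field names (status, findings with line numbers, summary, verdict) follow the standard format described above.

```shell
# A codex-cli reviewer drops its verdict as a structured Markdown artifact
# into the shared per-task directory.
TASK_DIR=/tmp/xv/login-refactor
mkdir -p "$TASK_DIR"

cat > "$TASK_DIR/codex_review.md" <<'EOF'
# Review: codex-cli
Status: PASS_WITH_COMMENTS

## Findings
- HIGH (auth.py:42): token expiry is not checked before refresh
- MEDIUM (auth.py:88): broad exception handler may mask real errors

## Summary
Logic is sound overall; the HIGH finding should be investigated first.

## Verdict
REQUEST_CHANGES
EOF

# A Claude Code team member then reads the verdict from the file:
grep -A 1 '^## Verdict' "$TASK_DIR/codex_review.md"
```

Because the exchange is plain files rather than live messages, any agent can join the review simply by reading and writing in the same directory.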

Example Workflow: SaaS Landing Page with Automatic Verification Loop

This example demonstrates the power of the agent-teams skill by integrating image generation, UI development, E2E testing, VLM-based quality assessment, and iterative refinement.

  1. Team Composition: An 8-member team is spawned, including architect (Claude, designs wireframes), image-gen (gemini-image, generates visual assets), ui-dev (Claude, implements HTML/CSS/JS), e2e-runner (Claude, Playwright, captures screenshots), vlm-judge (gemini-cli, analyzes screenshots for design score), xv-gemini (gemini-cli, researches trends), xv-codex (codex-cli, audits code quality), and synthesizer (Claude, determines loop continuation based on score).
  2. Design-Verify-Refine Loop:
    • Phase 1 (Parallel Research & Design): architect designs the landing page structure, while xv-gemini researches design trends.
    • Phase 2 (Image Generation - 1st Iteration): image-gen creates initial images (hero banner, icons, illustration) using gemini-image based on the design and trends.
    • Phase 3 (Implementation): ui-dev implements the landing page, incorporating the generated images.
    • Phase 4 (E2E Screenshot Capture): e2e-runner captures screenshots across different devices (desktop, tablet, mobile).
    • Phase 5 (Visual Quality Assessment - Start of Loop): vlm-judge analyzes the screenshots using Gemini's VLM capabilities and assigns a comprehensive score (e.g., 76/100) based on visual harmony, color consistency, typography, responsiveness, etc.
    • Synthesizer Decision: If the score is below a threshold (e.g., 85 points), the synthesizer issues detailed feedback to image-gen (e.g., "hero banner gradient obscures text, needs dark overlay") and ui-dev (e.g., "feature icon style mismatch").
    • Loop Iteration (Round 2, 3...): image-gen regenerates images based on feedback (e.g., adding a dark gradient overlay to the banner, simplifying icons), ui-dev makes corresponding CSS/HTML adjustments, e2e-runner recaptures, and vlm-judge re-evaluates. This iterative process continues until the score meets or exceeds the target. The template demonstrates prompt evolution based on VLM feedback, e.g., the hero banner prompt evolving to include "dark gradient overlay at bottom 30% for text readability."
    • Final Cross-Verification: Once the score threshold is met, xv-codex performs an independent code audit (Lighthouse scores), and xv-gemini verifies adherence to latest design trends, providing comprehensive validation before final approval.
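The synthesizer's decision logic above reduces to a score-gated loop. A minimal sketch, where the 76-point starting score and 85-point threshold come from the example, while the round cap and the placeholder re-scoring step are assumptions standing in for the real regenerate/recapture/re-judge cycle:

```shell
# Design-verify-refine loop driven by the synthesizer's quality gate.
TARGET=85          # threshold from the example
MAX_ROUNDS=5       # safety cap (assumption; not specified in the template)

round=1
score=76           # first vlm-judge score from the example
while [ "$score" -lt "$TARGET" ] && [ "$round" -le "$MAX_ROUNDS" ]; do
  echo "Round $round: score $score < $TARGET - sending feedback to image-gen and ui-dev"
  # ...image-gen regenerates assets, ui-dev adjusts CSS/HTML,
  # ...e2e-runner recaptures screenshots, vlm-judge re-scores.
  score=$((score + 7))   # placeholder for the re-evaluated VLM score
  round=$((round + 1))
done
echo "Quality gate passed with score $score after $((round - 1)) refinement round(s)"
```

Only after this gate passes do the independent xv-codex and xv-gemini verifications run, so refinement cost is paid before the more expensive cross-checks.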

Customization:
The system is highly customizable. SKILL.md files define Claude Code's behavior through prompts. Users can either directly edit these Markdown files or instruct Claude Code via natural language (e.g., "optimize agent-teams skill for Django + React projects"). This allows for:

  • Tailored Team Compositions: Including specific roles (e.g., db-migration specialist, api-docs-generator).
  • Project-Specific Review Checklists: (e.g., Next.js Server Component vs. Client Component checks in gemini-cli reviews, Spark performance anti-patterns in codex-cli).
  • Workflow Integration: (e.g., adding Alembic migration validation, OpenAPI spec generation, or platform-specific mobile testers).
  • Reinforced MUST Rules: Enforcing critical checks for specific domains (e.g., mandatory Codex cross-verification for mathematical logic in financial systems).
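Since SKILL.md files are plain Markdown prompts, one way to reinforce a MUST rule is to append it directly. The skill path below follows the usual Claude Code layout (`.claude/skills/<name>/SKILL.md`), but the exact location and the rule's wording are assumptions; the template also accepts natural-language instructions instead of manual edits.

```shell
# Append a project-specific MUST rule to the agent-teams skill prompt.
SKILL=.claude/skills/agent-teams/SKILL.md   # hypothetical location
mkdir -p "$(dirname "$SKILL")"

cat >> "$SKILL" <<'EOF'

## Project MUST rule (added)
- Any change touching financial calculation modules MUST be
  cross-verified by codex-cli before the synthesizer may APPROVE.
EOF
```

The next time the skill is invoked, Claude Code reads the updated prompt, so the rule takes effect without any other configuration change.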

The ultimate value proposition: a simple, single-line prompt can trigger a sophisticated multi-agent workflow involving parallel execution, automated quality gates, and cross-validation. The result is superior output, because issues that a single agent or a less integrated system might miss are identified and addressed automatically.