GitHub - Lazydok/cc-skills-template
Key Points
- The CC Skills Template enables Claude Code to leverage multi-agent teams, distributing tasks and facilitating parallel work and cross-verification for enhanced performance.
- It integrates specialized subagents such as Gemini CLI, Codex CLI, and Gemini Image, enforcing "MUST rules" for ensemble AI combinations and artifact-based communication for structured collaboration.
- The framework automates complex workflows, such as iterative design refinement with visual validation loops, comprehensive security audits, and robust architecture planning, yielding significantly more accurate and reliable results from a single prompt.
The provided document outlines a set of skill templates for Claude Code, designed to maximize its capabilities by orchestrating multiple AI agents into collaborative teams. The core premise is to leverage parallel processing and rigorous cross-verification among specialized AI agents to achieve faster, more accurate, and more reliable outcomes than a single agent could.
The system distinguishes between Subagents and Agent Teams. Subagents are independent workers returning individual results (e.g., gemini-cli, codex-cli). Agent Teams, conversely, are collaborative units that share a common task list and engage in bidirectional communication to achieve a collective goal.
Core Methodology and Key Components:
- Agent Teams Orchestration: The system enables the dynamic formation of "Agent Teams" that work in parallel. This is an experimental feature activated by setting an environment variable. Team members can be displayed in various modes, including `tmux` for separate panels or `in-process` within the main terminal.
- Specialized Subagents (CLI Tools):
  - `gemini-cli` (Google Gemini CLI Subagent): Used for tasks requiring web search, Vision-Language Model (VLM) capabilities, and frontend analysis. Configuration involves `npm` installation, a version check (v0.32.1+), and OAuth authentication.
  - `codex-cli` (OpenAI Codex CLI Subagent): Employed for independent code analysis, logic verification, and security audits. Requires `npm` installation, a version check (v0.112.0+), and ChatGPT account login.
  - `gemini-image` (Gemini Image Generation): Facilitates AI-powered image generation (UI mockups, icons, banners). Requires a paid Google AI Studio API key stored in a `.env` file.
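The setup requirements above can be condensed into a preflight check. The following is an illustrative Python sketch, not part of the template: the command names (`gemini`, `codex`) and the exact `--version` output format are assumptions to verify against each tool's own documentation.

```python
# Preflight sketch: confirm the subagent CLIs are installed and meet the
# template's minimum versions. Command names are assumptions.
import re
import shutil
import subprocess

# Minimum versions stated in the template's setup notes.
MINIMUMS = {"gemini": (0, 32, 1), "codex": (0, 112, 0)}

def parse_version(text: str) -> tuple:
    """Extract the first dotted version number from CLI output, e.g. '0.32.1'."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if not match:
        raise ValueError(f"no version found in: {text!r}")
    return tuple(int(part) for part in match.groups())

def check_cli(name: str) -> bool:
    """Return True if `name --version` reports at least the required version."""
    if shutil.which(name) is None:
        return False  # not installed or not on PATH
    out = subprocess.run([name, "--version"], capture_output=True, text=True).stdout
    return parse_version(out) >= MINIMUMS[name]
```

Tuple comparison handles the version ordering, so "v0.112.0+" style checks need no extra logic.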
- Cross-Verification (MUST Rules): A critical component ensuring result reliability. The template specifies "MUST Rules" for AI ensemble combinations based on task type, which are non-negotiable for high-stakes operations:
- Complex Code Analysis/Algorithms: Claude + Codex (Codex excels in logical reasoning and code accuracy).
- Web UI/Frontend/Image Analysis: Claude + Gemini (Gemini excels in VLM and web search).
- Architecture Design/Proposals: Claude + Codex + Gemini (a 3-way gate where all three must approve for the plan to proceed).
- Security Audits: Claude + Codex (requires at least two independent security perspectives).
- Financial/Trading Logic: Claude + Codex (essential for mathematical accuracy cross-checks).
- Severity Grading: Findings are graded by how many reviewers agree: CRITICAL (unanimous agreement, mandatory correction), HIGH (2/3 agreement, investigation needed), MEDIUM (1/3 agreement, often a false positive).
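The ensemble rules and agreement-based grading amount to two small lookups. This is a sketch under the assumption of three reviewing agents; the task-type keys are illustrative labels of ours, not identifiers from the template.

```python
# Sketch of the MUST-rule lookup and agreement-based severity grading.
# Task-type keys are illustrative, not the template's exact identifiers.
MUST_RULES = {
    "code_analysis": {"claude", "codex"},
    "frontend": {"claude", "gemini"},
    "architecture": {"claude", "codex", "gemini"},  # 3-way approval gate
    "security_audit": {"claude", "codex"},
    "financial_logic": {"claude", "codex"},
}

def required_agents(task_type: str) -> set:
    """Return the non-negotiable ensemble for a given task type."""
    return MUST_RULES[task_type]

def severity(agreeing_agents: int) -> str:
    """Grade a finding by how many of three reviewers flagged it."""
    return {3: "CRITICAL", 2: "HIGH", 1: "MEDIUM"}[agreeing_agents]
```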
- Artifact-Based Communication: CLI agents (Gemini/Codex) do not communicate directly via messages with the Claude Code team. Instead, they interact by dropping structured artifact files into a shared temporary directory (`/tmp/xv/{task-name}/`). Claude Code team members then read these files.
- Standard Artifact Format: All review/critique files adhere to a specific Markdown format, including agent name, date, status (`PASS|FAIL|PASS_WITH_COMMENTS`), Findings (CRITICAL/HIGH/MEDIUM with line numbers and descriptions), Summary (1-3 sentences), and Verdict (`APPROVE|REQUEST_CHANGES`). Examples of artifacts include `plan_draft.md`, `claude_review.md`, `codex_review.md`, `gemini_research.md`, and `synthesis_report.md`.
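A minimal sketch of how an agent might emit a conforming review artifact into the shared directory. The helper names are ours; only the directory layout and field order follow the format described above.

```python
# Sketch: write a review artifact to the shared /tmp/xv/{task-name}/ directory.
# Helper names are illustrative; the field layout follows the standard format.
from datetime import date
from pathlib import Path

def artifact_dir(task_name: str) -> Path:
    """Shared drop directory for one cross-verification task."""
    d = Path("/tmp/xv") / task_name
    d.mkdir(parents=True, exist_ok=True)
    return d

def write_review(task: str, agent: str, status: str, verdict: str,
                 findings: list, summary: str) -> Path:
    assert status in {"PASS", "FAIL", "PASS_WITH_COMMENTS"}
    assert verdict in {"APPROVE", "REQUEST_CHANGES"}
    lines = [
        f"# Review by {agent}",
        f"Date: {date.today().isoformat()}",
        f"Status: {status}",
        "",
        "## Findings",
    ]
    # Each finding: (severity, line number, description).
    lines += [f"- [{sev}] L{line}: {desc}" for sev, line, desc in findings]
    lines += ["", "## Summary", summary, "", f"Verdict: {verdict}"]
    path = artifact_dir(task) / f"{agent}_review.md"
    path.write_text("\n".join(lines))
    return path
```

Because the artifacts are plain Markdown files, any team member can pick them up with a simple read, with no message-passing protocol between the CLIs and Claude Code.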
Example Workflow: SaaS Landing Page with Automatic Verification Loop
This example demonstrates the power of the agent-teams skill by integrating image generation, UI development, E2E testing, VLM-based quality assessment, and iterative refinement.
- Team Composition: An 8-member team is spawned, including `architect` (Claude, designs wireframes), `image-gen` (gemini-image, generates visual assets), `ui-dev` (Claude, implements HTML/CSS/JS), `e2e-runner` (Claude, Playwright, captures screenshots), `vlm-judge` (gemini-cli, analyzes screenshots for a design score), `xv-gemini` (gemini-cli, researches trends), `xv-codex` (codex-cli, audits code quality), and `synthesizer` (Claude, determines loop continuation based on score).
- Design-Verify-Refine Loop:
  - Phase 1 (Parallel Research & Design): `architect` designs the landing page structure, while `xv-gemini` researches design trends.
  - Phase 2 (Image Generation, 1st Iteration): `image-gen` creates initial images (hero banner, icons, illustration) using `gemini-image`, based on the design and trends.
  - Phase 3 (Implementation): `ui-dev` implements the landing page, incorporating the generated images.
  - Phase 4 (E2E Screenshot Capture): `e2e-runner` captures screenshots across different devices (desktop, tablet, mobile).
  - Phase 5 (Visual Quality Assessment, Start of Loop): `vlm-judge` analyzes the screenshots using Gemini's VLM capabilities and assigns a comprehensive score (e.g., 76/100) based on visual harmony, color consistency, typography, responsiveness, etc.
  - Synthesizer Decision: If the score is below a threshold (e.g., 85 points), the `synthesizer` issues detailed feedback to `image-gen` (e.g., "hero banner gradient obscures text, needs dark overlay") and `ui-dev` (e.g., "feature icon style mismatch").
  - Loop Iteration (Round 2, 3, ...): `image-gen` regenerates images based on the feedback (e.g., adding a dark gradient overlay to the banner, simplifying icons), `ui-dev` makes corresponding CSS/HTML adjustments, `e2e-runner` recaptures, and `vlm-judge` re-evaluates. This iterative process continues until the score meets or exceeds the target. The template demonstrates prompt evolution based on VLM feedback, e.g., the hero banner prompt evolving to include "dark gradient overlay at bottom 30% for text readability."
  - Final Cross-Verification: Once the score threshold is met, `xv-codex` performs an independent code audit (Lighthouse scores), and `xv-gemini` verifies adherence to the latest design trends, providing comprehensive validation before final approval.
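The synthesizer's control flow reduces to a simple threshold loop. In this sketch the callables stand in for real team-member dispatch; the 85-point threshold is the example value from the workflow, while the round cap is our own assumption to keep the loop bounded.

```python
# Sketch of the synthesizer's design-verify-refine loop: run one
# image-gen/ui-dev/e2e round, have the VLM judge score it, and iterate
# on the judge's feedback until the score clears the threshold.
SCORE_THRESHOLD = 85   # example target from the workflow
MAX_ROUNDS = 5         # our assumption: bound the loop

def refine_until_approved(run_phase, judge, max_rounds: int = MAX_ROUNDS):
    """run_phase(feedback) executes one generation/implementation/capture
    round; judge(artifacts) returns (score, feedback)."""
    feedback = None  # round 1 has no feedback yet
    for round_no in range(1, max_rounds + 1):
        artifacts = run_phase(feedback)
        score, feedback = judge(artifacts)
        if score >= SCORE_THRESHOLD:
            # Threshold met: hand off to xv-codex/xv-gemini for final checks.
            return round_no, score
    raise RuntimeError(f"score still below {SCORE_THRESHOLD} after {max_rounds} rounds")
```

Keeping the loop condition in one place makes the quality gate explicit: nothing reaches final cross-verification until the VLM judge's score clears the bar.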
Customization:
The system is highly customizable. SKILL.md files define Claude Code's behavior through prompts. Users can either directly edit these Markdown files or instruct Claude Code via natural language (e.g., "optimize agent-teams skill for Django + React projects"). This allows for:
- Tailored Team Compositions: Including specific roles (e.g., a `db-migration` specialist or an `api-docs-generator`).
- Project-Specific Review Checklists: (e.g., Next.js Server Component vs. Client Component checks in `gemini-cli` reviews, Spark performance anti-patterns in `codex-cli`).
- Workflow Integration: (e.g., adding Alembic migration validation, OpenAPI spec generation, or platform-specific mobile testers).
- Reinforced MUST Rules: Enforcing critical checks for specific domains (e.g., mandatory Codex cross-verification for mathematical logic in financial systems).
The ultimate value proposition is that a simple, single-line prompt can trigger a sophisticated, multi-agent workflow involving parallel execution, automated quality gates, and cross-validation, leading to superior results by automatically identifying and addressing issues that a single agent or a less integrated system might miss.