Building Your AI Second Brain with Karpathy's LLM Wiki: Just Watch This—Claude Code × Obsidian × Graphify

2026.04.12
· YouTube · by 배레온 (Busan / developer)
#AI · #Knowledge Management · #LLM · #Obsidian · #Second Brain

Key Points

  1. The video presents a powerful AI knowledge-management workflow that combines Claude Code with Obsidian to build a "second brain", addressing the difficulty typical AI agents have in accumulating reusable personal knowledge.
  2. It elaborates on Andrej Karpathy's LM Wiki concept, an AI-driven framework that organizes purpose-driven content into a dynamic, interconnected wiki, offering better knowledge compounding and simpler setup than traditional RAG systems.
  3. The system is further enhanced with Graphify, a tool that transforms Obsidian notes into a queryable knowledge graph, letting the AI draw on structured graph data for more accurate and contextually rich knowledge retrieval.

The video introduces a cutting-edge personal knowledge management (PKM) workflow that integrates an AI agent (Claude Code) with Obsidian, structures the vault according to the LM Wiki methodology, and adds Graphify for knowledge-graph capabilities. The motivation is to overcome a key limitation of conventional AI agent usage: conversational memory is transient, so personal knowledge is difficult to accumulate for AI reuse. The proposed system builds an AI-powered "second brain" in which knowledge stays persistent and reusable because it is structured in a form the AI can effectively process and leverage.

The core methodology revolves around a synergistic combination of tools and principles:

  1. Obsidian as the Knowledge Base: Obsidian is chosen for its native Markdown support, which is easily parsable by large language models (LLMs). It acts as the human-friendly front-end for knowledge input and retrieval, as highlighted by Andrej Karpathy. The emphasis is on "purposeful collection" – only data deemed valuable and collected with clear intent (dubbed "Gold Data") should be integrated. This ensures high-quality inputs ("Gold In, Gold Out") for better AI outputs.
  2. Claude Code (AI Agent) for Knowledge Processing: Claude Code, or similar AI agents (e.g., OpenAI Codex), serves as the central processing unit. It interacts with the Obsidian vault, reading and writing Markdown files, and performs tasks such as context understanding, information extraction, summarization, and knowledge organization.
  3. LM Wiki Methodology: Proposed by Andrej Karpathy, the LM Wiki provides a conceptual framework for structuring knowledge. It suggests a folder hierarchy and processing flow:
    • Raw Folder: Acts as an inbox for newly collected raw data (papers, videos, articles, images) from the internet.
    • LLM-based Agent (e.g., Claude Code): Ingests data from the raw folder. It processes, links, dissects, and transforms raw materials into reusable knowledge components.
    • Wiki Folder: Stores the processed, organized knowledge in a structured wiki format. This includes creating indexes and tables of contents, extracting concepts and entities according to a predefined schema, and continually refining the wiki in a loop as new inputs connect with existing material.
    • Schema and Principles: The LM Wiki emphasizes that knowledge organization is not random but follows specific processes, principles, and a schema, including the user's "core context" and "purposeful collection" criteria.
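The raw → wiki flow above is just folders of Markdown files, so it can be scaffolded in a few lines. The folder names (raw, wiki, output) and the per-folder claude.md rule files follow the video's setup; the placeholder contents are illustrative:

```python
# Sketch: scaffolding the LM Wiki layout inside an Obsidian vault.
# Folder names (raw/, wiki/, output/) and per-folder claude.md rule
# files follow the video's setup; everything else is illustrative.
from pathlib import Path

def scaffold_vault(vault: Path) -> None:
    """Create the inbox/wiki/output folders with placeholder rule files."""
    for name in ("raw", "wiki", "output"):
        folder = vault / name
        folder.mkdir(parents=True, exist_ok=True)
        rules = folder / "claude.md"
        if not rules.exists():
            rules.write_text(f"# Rules for the {name}/ folder\n", encoding="utf-8")

scaffold_vault(Path("MyVault"))
```

Because everything is plain files, the same layout works in any Markdown editor; Obsidian simply provides the friendliest front-end for it.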

A key comparison is made between LM Wiki and RAG (Retrieval Augmented Generation):

  • Setup Complexity: LM Wiki is simpler, relying on Markdown files in folders, whereas RAG systems typically require complex vector databases and embedding models.
  • Infrastructure: LM Wiki needs no special infrastructure beyond local file storage, unlike RAG's vector databases.
  • Search Reliability: LM Wiki potentially offers higher search reliability due to its structured indexing and AI-assisted ingestion process, which actively integrates new knowledge.
  • Knowledge Accumulation: LM Wiki allows for "compound interest" in knowledge accumulation, as new information integrates with existing structures, making knowledge increasingly interconnected and leverageable.
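The "no infrastructure" point is concrete: retrieval over an LM Wiki can be as simple as scanning Markdown files on disk, with no embedding model or vector store involved. A minimal sketch:

```python
# Sketch: text retrieval over an LM Wiki is a plain file scan --
# no vector database or embedding model required.
from pathlib import Path

def search_wiki(wiki: Path, term: str) -> list[Path]:
    """Return wiki notes that mention the term, case-insensitively."""
    needle = term.lower()
    return [
        note for note in sorted(wiki.rglob("*.md"))
        if needle in note.read_text(encoding="utf-8").lower()
    ]
```

In practice the agent reads the wiki's index notes first and follows links from there, but the storage layer underneath is nothing more than files.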
  4. Graphify for Knowledge Graph Enhancement: Graphify is introduced as a tool that transcends simple text-based knowledge retrieval by converting the Obsidian vault's Markdown files into a knowledge graph.
    • Functionality: Graphify builds a graph database from the linked Markdown notes in Obsidian, identifying nodes (concepts, entities) and edges (relationships).
    • AI Integration: This knowledge graph (graph.json) is then used by the AI agent (Claude Code) to provide graph-aware queries, offering more semantically rich and connected responses than traditional text-based search. It creates a "Graph-RAG"-like experience where queries are informed by relational data.
    • Output: Graphify generates graph.json for AI consumption, graph.html for human visualization of the knowledge graph, and a markdown report summarizing the graph's properties.
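To make the graph.json consumption concrete, here is a sketch of a graph-aware lookup. The schema assumed below (a "nodes" list plus an "edges" list of source/target pairs) is an illustrative assumption, not Graphify's documented format:

```python
# Sketch: consuming a graph.json for graph-aware lookups. The schema
# assumed here (nodes list, edges as source/target pairs) is
# illustrative -- the real Graphify output may differ.
import json
from collections import defaultdict

def neighbors(graph_path: str, node_id: str) -> set[str]:
    """Return every node directly linked to node_id, in either direction."""
    with open(graph_path, encoding="utf-8") as f:
        graph = json.load(f)
    adjacency = defaultdict(set)
    for edge in graph["edges"]:
        adjacency[edge["source"]].add(edge["target"])
        adjacency[edge["target"]].add(edge["source"])
    return adjacency[node_id]
```

This relational context is what a graph-aware query can hand to the agent alongside the matching note text, which is what gives the "Graph-RAG"-like depth.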

Demonstrated Workflow Steps:

  1. Obsidian Vault Setup: Create a new, clean Obsidian vault.
  2. Define "My Core Context": A Markdown file is created in the vault to establish the user's personal goals, values, and purpose for knowledge management (e.g., "Brain Trinity's Brian, helping people achieve goals with PKM and AI, through creating YouTube, Instagram, and LinkedIn content"). This context is crucial for the AI to understand the user's needs.
  3. AI Interview for Context Refinement: Claude Code is prompted to read "My Core Context" and conduct an interview to deepen its understanding of the user's role, motivation for recording, desired outputs, and core values.
  4. Generate claude.md: Based on the refined context from the interview, Claude Code generates a claude.md file within the vault. This file contains rules and guidelines for the AI agent, reflecting the user's specific context and expectations for knowledge handling.
  5. Integrate LM Wiki Structure: The user provides Claude Code with the LM Wiki concept (e.g., via a linked article's content). Claude Code then structures the Obsidian vault by creating the raw, wiki, and output folders, along with specific claude.md files within these subfolders to define rules pertinent to each section.
  6. Obsidian Web Clipper with Custom Templates: To efficiently ingest external data, Obsidian's Web Clipper is configured. Claude Code is prompted to generate custom JSON templates for the Web Clipper (e.g., for articles, YouTube videos, podcasts, books, research) based on the LM Wiki's raw folder requirements. These templates ensure incoming data is formatted correctly for ingestion.
  7. Knowledge Ingestion (/ingest Skill): New content is clipped into the raw folder using the custom Web Clipper templates. A Claude Code "skill" named /ingest is created (based on the previous manual process), which reads new files from raw, asks the user for their "purposeful collection" insight (why they saved it), summarizes the content, extracts entities, and then organizes/updates the wiki folder accordingly.
  8. Querying and Synthesizing Knowledge (/query, /lint Skills):
    • /query: Allows the user to ask questions, and Claude Code retrieves relevant information from the structured wiki knowledge base to provide answers, similar to RAG.
    • /lint: A maintenance skill that checks and updates the entire wiki, ensuring consistency and freshness of information, especially as the knowledge base grows.
    • /synthesize: A skill (implicitly mentioned) to integrate disparate knowledge from the wiki into coherent insights or reports.
  9. Graphify Integration:
    • Installation: Graphify (a Python tool) is installed (e.g., pip install graphify).
    • Graph Generation: Claude Code is prompted to run Graphify on the wiki folder (e.g., by executing graphify wiki). This command processes the Markdown files, their links, and content to build a knowledge graph.
    • Graph-Aware Querying: A new query method is introduced using Graphify (e.g., graphify query [question]). This enables Claude Code to leverage the generated graph's relational information when answering queries, providing deeper, interconnected insights. Graphify also generates an HTML visualization of the graph and a Markdown report.
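Claude Code custom slash commands are Markdown prompt files stored under .claude/commands/ in the project, so the /ingest skill from step 7 would live at a path like .claude/commands/ingest.md. A hedged sketch of what its body might contain, following the steps described in the video:

```markdown
Scan raw/ for notes not yet referenced from wiki/. For each new note:

1. Ask me why I saved it (my "purposeful collection" insight).
2. Summarize the content and extract entities per the schema in wiki/claude.md.
3. Create or update the relevant wiki/ notes and indexes, linking back to the raw file.
```

The /query and /lint skills follow the same pattern: each is just another prompt file describing a retrieval or maintenance routine over the wiki/ folder.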
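Step 6's clipper templates are plain JSON, so they can be generated programmatically, which is exactly what Claude Code is prompted to do. The field names below are assumptions about the Web Clipper's import format, not its documented schema:

```python
# Sketch: emitting a Web Clipper template that files clips into raw/.
# Field names here are illustrative assumptions; check the Web
# Clipper's own template export for the real schema.
import json

article_template = {
    "name": "Article -> raw",            # hypothetical template name
    "behavior": "create",                # one new note per clip
    "path": "raw",                       # land clips in the raw/ inbox
    "noteNameFormat": "{{title}}",
    "properties": [
        {"name": "source", "value": "{{url}}"},
        {"name": "clipped", "value": "{{date}}"},
    ],
}

with open("article-template.json", "w", encoding="utf-8") as f:
    json.dump(article_template, f, indent=2)
```

One such template per content type (article, YouTube video, podcast, book, research) keeps every clip arriving in raw/ in a shape the ingestion step can rely on.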

The speaker emphasizes that while AI automates much of the process, the user's "purposeful collection" and clearly defined "My Core Context" remain paramount. This ensures that the accumulated knowledge is relevant and valuable, rather than just a disorganized mass of data. This integrated system of Obsidian, Claude Code, LM Wiki, and Graphify represents a powerful future trend in personal knowledge management.