GitHub - Fosowl/agenticSeek: Fully Local Manus AI. No APIs, No $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and code for the sole cost of electricity. πŸ”” Official updates only via twitter @Martin993886460 (Beware of fake account)

Fosowl
2026.01.29
· GitHub · by 이호민
#AI #Autonomous Agent #Local LLM #Web Browsing #Coding Assistant

Key Points

  1. AgenticSeek is a fully local, open-source AI assistant designed for privacy and autonomy, running entirely on your hardware without cloud dependencies or recurring API costs.
  2. The agent can autonomously browse the internet, write and debug code in multiple languages, and plan and execute complex multi-step tasks.
  3. It prioritizes local LLM execution via Ollama or LM Studio, also supports external APIs, and uses Docker for bundled services such as SearxNG, offering flexible deployment.

AgenticSeek is an open-source, privacy-focused, and fully local AI assistant designed as an alternative to cloud-dependent agentic systems like Manus AI. Its primary goal is to provide autonomous capabilities for web browsing, code generation and execution, and complex task planning, all while ensuring user data remains on the local device, eliminating API costs and external data sharing.

The core methodology of AgenticSeek revolves around a highly configurable, multi-agent architecture orchestrated by a Large Language Model (LLM). Upon receiving a user query, the system's "smart agent selection" mechanism dynamically chooses the most appropriate internal agent to handle the task. This suggests an implicit routing or planning layer that directs queries to specialized components. Complex tasks are broken down into sub-tasks, which are then sequentially executed by these agents, allowing for multi-step reasoning and action.
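The "smart agent selection" step can be pictured as a single LLM call that routes each query to one specialized agent. The sketch below is a minimal illustration under assumed names (complete, AGENTS, and select_agent are hypothetical, not AgenticSeek's actual code), with a simple keyword heuristic standing in for the model:

```python
# Minimal sketch of LLM-driven agent routing. The agent names and
# prompt format are illustrative, not taken from AgenticSeek itself.

AGENTS = {
    "browser": "searches and reads web pages",
    "coder": "writes, debugs, and runs programs",
    "planner": "splits complex goals into ordered sub-tasks",
}

def complete(prompt: str) -> str:
    # Stand-in for a local LLM call (a real system would query the
    # model, e.g. over Ollama's HTTP endpoint). Keyword heuristic only.
    task = prompt.rsplit("Task:", 1)[-1].lower()
    if "search" in task or "find" in task:
        return "browser"
    if "write" in task or "debug" in task or "script" in task:
        return "coder"
    return "planner"

def select_agent(query: str) -> str:
    """Ask the (stand-in) LLM which specialized agent should handle the query."""
    menu = "\n".join(f"- {name}: {desc}" for name, desc in AGENTS.items())
    answer = complete(
        f"Choose exactly one agent for this task.\n{menu}\nTask: {query}\nAgent:"
    )
    choice = answer.strip().lower()
    return choice if choice in AGENTS else "planner"  # fall back to planning
```

In a real router the fallback matters: if the model returns anything outside the known agent set, defaulting to a planner keeps the task moving rather than failing outright.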

Technically, AgenticSeek is built upon Python and heavily leverages Docker for service orchestration, providing a self-contained environment. Key components and their roles include:

  1. LLM Integration: The system supports various LLM providers, configurable via the config.ini file.
    • Local LLMs: It deeply integrates with local inference engines like Ollama and LM Studio. Users configure is_local = True, provider_name (e.g., ollama, lm-studio), provider_model (e.g., deepseek-r1:14b), and provider_server_address (e.g., http://127.0.0.1:11434 for Ollama or http://127.0.0.1:1234 for LM Studio). It also supports local OpenAI-compatible servers (e.g., the llama.cpp server) by setting provider_name = openai and is_local = True.
    • Self-Hosted LLM Server: For users with a powerful remote server, AgenticSeek provides a custom llm_server.py script. The client machine connects to this remote server by setting is_local = False, provider_name = server, and provider_server_address to the server's IP and port (e.g., http://x.x.x.x:3333).
    • Cloud API LLMs: For less powerful hardware, it allows integration with commercial APIs like OpenAI, Google Gemini, Deepseek, Hugging Face, TogetherAI, and OpenRouter. This mode requires setting is_local = False, the specific provider_name (e.g., openai), and providing the respective API key as an environment variable.
  2. Web Browsing: Autonomous web interaction is facilitated by undetected_chromedriver, enabling the agent to search, read, extract information, and fill web forms. Search capabilities are powered by a local instance of SearxNG, an open-source metasearch engine, which is deployed via Docker Compose and configured with SEARXNG_BASE_URL (e.g., http://searxng:8080 within Docker or http://localhost:8080 for CLI host mode).
  3. Code Execution: The agent can autonomously write, debug, and run programs in multiple languages (Python, C, Go, Java, etc.). The specific mechanisms for sandboxing or environment management for code execution are not explicitly detailed but are implied by its capability to "run programs."
  4. Input/Output Modalities:
    • Voice-Enabled Interface: Integrates speech-to-text (listen = True in config.ini) and text-to-speech (speak = True) functionalities. Speech-to-text operates in CLI mode, activating upon a trigger keyword, which is the configured agent_name.
    • File System Interaction: The agent operates within a defined WORK_DIR, enabling it to read and write files on the user's local machine for task execution (e.g., saving search results, code files).
  5. Session Management: Supports saving and recovering previous session states (save_session = True, recover_last_session = True).
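Taken together, the options above live in config.ini. A sketch of such a file follows; the [MAIN] section name, the work_dir key, and all concrete values (including the agent_name "Friday" and the workspace path) are illustrative placeholders, not defaults confirmed by the project:

```ini
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:14b
provider_server_address = http://127.0.0.1:11434
agent_name = Friday          ; placeholder; spoken trigger word for speech-to-text
listen = False               ; enable speech-to-text in CLI mode
speak = False                ; enable text-to-speech replies
work_dir = /home/user/agentic_workspace  ; placeholder path the agent may read/write
save_session = True
recover_last_session = True
```

Switching to a cloud provider would flip is_local to False and change provider_name, with the API key supplied as an environment variable rather than stored in this file.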

Deployment is primarily via Docker Compose, initiating services like SearxNG, Redis, the frontend, and backend with start_services.sh full. A CLI mode is also available, requiring host package installation and specific SEARXNG_BASE_URL configuration. Hardware requirements for local LLM inference are significant, with a minimum of 8GB VRAM (e.g., 7B models) and recommendations for 24GB+ VRAM (e.g., 32B models) for effective performance, especially in complex web browsing and planning tasks. The project is under active development, relying on community contributions.
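The VRAM figures map roughly onto model size. As a back-of-the-envelope check (this rule of thumb is an approximation of common practice, not from the project docs): quantized weights take about bits/8 bytes per parameter, plus runtime overhead for the KV cache and activations:

```python
# Rough VRAM estimate for local LLM inference. The 20% overhead factor
# is an assumed ballpark for KV cache and activations, not a spec.

def vram_estimate_gb(params_billions: float, quant_bits: int = 4) -> float:
    weights_gb = params_billions * quant_bits / 8  # GB just for the weights
    return round(weights_gb * 1.2, 1)              # + ~20% runtime overhead

# A 7B model at 4-bit needs roughly 4.2 GB, consistent with the 8 GB
# minimum; a 32B model needs roughly 19.2 GB, hence the 24 GB+ advice.
```

This also explains why the recommendation is "24GB+" rather than exactly 24GB: longer contexts during web browsing and planning grow the KV cache beyond the fixed weight footprint.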