# 📖 Antigravity Brain - Developer Documentation
Complete guide for developers and AI agents working on this project.
## 📑 Table of Contents
- Architecture Overview
- Adding New Agents
- Adding New Crews
- Adding New Tools
- Configuration Reference
- Memory System
- AI Agent Guidelines
## 🏗️ Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                      Chainlit Web UI                        │
│                     (src/app.py:8000)                       │
└──────────────────────────┬──────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                       Smart Router                          │
│                      (src/router.py)                        │
│          Classifies user intent → Routes to Crew            │
└──────────────────────────┬──────────────────────────────────┘
                           │
           ┌───────────────┼───────────────┐
           ▼               ▼               ▼
    ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
    │ Infra Crew  │ │  Security   │ │ HR/Evolution│
    │             │ │    Crew     │ │    Crew     │
    └──────┬──────┘ └──────┬──────┘ └──────┬──────┘
           │               │               │
           ▼               ▼               ▼
┌─────────────────────────────────────────────────────────────┐
│                      Agent Factory                          │
│                 (src/agents/factory.py)                     │
│     Loads Persona → Injects Knowledge → Creates Agent       │
└──────────────────────────┬──────────────────────────────────┘
                           │
           ┌───────────────┼───────────────┐
           ▼               ▼               ▼
    ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
    │   Persona   │ │  Knowledge  │ │    Tools    │
    │    (.md)    │ │  Standards  │ │  (Python)   │
    └─────────────┘ └─────────────┘ └─────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                  Shared Memory (Mem0)                       │
│                 (src/memory/wrapper.py)                     │
│         Qdrant Vector DB + HuggingFace Embeddings           │
└─────────────────────────────────────────────────────────────┘
```
### Key Files

| File | Purpose |
|---|---|
| `src/app.py` | Chainlit entry point, handles chat |
| `src/router.py` | Routes requests to the appropriate crew |
| `src/config.py` | LLM & memory configuration |
| `src/agents/factory.py` | Creates agents from persona files |
| `src/crews/definitions.py` | Defines crew compositions |
| `src/memory/wrapper.py` | Mem0 integration with rate limiting |
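The request flow above can be sketched end to end as a toy program. This is a minimal sketch; `classify_intent` and `handle_message` are illustrative stand-ins, not the actual functions in `src/router.py` or `src/app.py`:

```python
# Toy stand-in for the UI → router → crew flow described above.
# The keyword heuristics are purely illustrative; the real router
# uses an LLM to classify intent.

def classify_intent(message: str) -> str:
    """Map a user message to one of the available crew names."""
    if "firewall" in message or "CVE" in message:
        return "Security Crew"
    if "deploy" in message or "server" in message:
        return "Infra Crew"
    return "HR/Evolution Crew"

def handle_message(message: str) -> str:
    crew_name = classify_intent(message)
    # The real app would build this crew via AgentFactory and call
    # crew.kickoff(); here we just report where the request went.
    return f"Routed to: {crew_name}"

print(handle_message("deploy the new server"))  # Routed to: Infra Crew
```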
## 🤖 Adding New Agents

### Step 1: Create Persona File
Create `src/agents/personas/persona-<name>.md`:

```markdown
---
description: Short description of the agent
llm_config:
  provider: default  # or: openai, gemini, ollama
---

# 👤 Persona: Agent Name

**Role:** The agent's job title
**Goal:** What the agent aims to achieve

## 🧠 Backstory

Detailed personality and background. This becomes the agent's
system prompt. Write in character.

## 📋 Protocol

1. Step one of how this agent works
2. Step two...
```
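A persona file like the one above splits into YAML frontmatter and a markdown body. The following is a hedged sketch of how that split *might* be done; the real parsing in `src/agents/factory.py` may differ, and `split_persona` is an illustrative name:

```python
# Sketch: split a '---'-delimited frontmatter block from the markdown
# body of a persona file. Naive key: value parsing only -- enough to
# illustrate the file layout, not a full YAML parser.

def split_persona(text: str) -> tuple[dict, str]:
    parts = text.split("---", 2)
    if len(parts) < 3:
        return {}, text  # no frontmatter found
    meta = {}
    for line in parts[1].strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Drop trailing '# ...' comments, keep the bare value
            meta[key.strip()] = value.split("#")[0].strip()
    return meta, parts[2].strip()

sample = """---
description: Short description of the agent
llm_config:
  provider: default
---
# Persona: Agent Name
"""
meta, body = split_persona(sample)
print(meta["description"])  # Short description of the agent
```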
### Step 2: Register in Factory (Optional)

If the agent needs special tools, update `src/crews/definitions.py`:
```python
from src.agents.factory import AgentFactory
from src.tools.your_tool import YourTool

agent = AgentFactory.create_agent(
    "your-agent-name",           # matches filename
    specific_tools=[YourTool()],
    model_tier="smart"           # or "fast"
)
```
### Naming Convention

- Filename: `persona-<lowercase-hyphenated>.md`
- Example: `persona-bob-builder.md`
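The convention above is mechanical, so a small helper can derive the filename from an agent's display name. This helper is illustrative only and not part of the codebase:

```python
# Sketch: derive a persona filename from an agent name, following the
# persona-<lowercase-hyphenated>.md convention (hypothetical helper).

def persona_filename(name: str) -> str:
    return f"persona-{name.lower().replace(' ', '-')}.md"

print(persona_filename("Bob Builder"))  # persona-bob-builder.md
```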
## 👥 Adding New Crews

### Step 1: Define Crew in definitions.py

Edit `src/crews/definitions.py`:
```python
elif crew_name == "Your New Crew":
    # Create agents
    agent1 = AgentFactory.create_agent("agent-one", model_tier="smart")
    agent2 = AgentFactory.create_agent("agent-two", model_tier="fast")

    # Define tasks
    task1 = Task(
        description=f"Do something with: '{inputs.get('topic')}'",
        expected_output="Expected result description",
        agent=agent1
    )
    task2 = Task(
        description="Review the previous work",
        expected_output="Approval or feedback",
        agent=agent2
    )

    # Return crew
    return Crew(
        agents=[agent1, agent2],
        tasks=[task1, task2],
        process=Process.sequential,
        verbose=True
    )
```
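With `Process.sequential`, each task runs in order and its output becomes context for the next task. The dependency-free sketch below mirrors that behaviour with plain Python; `ToyTask` and `run_sequential` are illustrative stand-ins, not CrewAI's `Task`/`Crew` classes:

```python
# Toy illustration of Process.sequential: each task consumes the
# previous task's output, like the two-task crew defined above.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ToyTask:
    description: str
    run: Callable[[str], str]  # previous output in, new output out

def run_sequential(tasks: list[ToyTask], topic: str) -> str:
    output = topic
    for task in tasks:
        output = task.run(output)
    return output

tasks = [
    ToyTask("Do something with topic", lambda ctx: f"draft({ctx})"),
    ToyTask("Review the previous work", lambda ctx: f"approved({ctx})"),
]
print(run_sequential(tasks, "backups"))  # approved(draft(backups))
```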
### Step 2: Register in Router

Edit the `src/router.py` prompt:
```python
prompt = f"""
AVAILABLE CREWS:
...
6. 'Your New Crew': Description of when to use this crew.
"""
```
### Step 3: Add to Crew List

In `src/crews/definitions.py`:
```python
@staticmethod
def get_available_crews():
    return [
        ...
        "Your New Crew",
    ]
```
## 🔧 Adding New Tools

### Step 1: Create Tool File

Create `src/tools/your_tools.py`:
```python
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class YourToolInput(BaseModel):
    """Input schema - MUST have docstring and Field descriptions."""
    param1: str = Field(..., description="What this parameter is for")
    param2: int = Field(default=10, description="Optional with default")

class YourTool(BaseTool):
    name: str = "Your Tool Name"
    description: str = (
        "Detailed description of what this tool does. "
        "The agent reads this to decide when to use it."
    )
    args_schema: type = YourToolInput

    def _run(self, param1: str, param2: int = 10) -> str:
        try:
            # Your logic here
            result = do_something(param1, param2)
            return f"Success: {result}"
        except Exception as e:
            # NEVER raise, always return error string
            return f"Error: {str(e)}"
```
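The "never raise" contract can be exercised without CrewAI installed. The standalone function below mimics `_run` with placeholder logic (string repetition as a stand-in for `do_something`), purely to show the success-string/error-string behaviour:

```python
# Sketch of the tool error-handling contract: the run function never
# raises, it returns an error string instead. Logic is a placeholder.

def run_tool(param1: str, param2: int = 10) -> str:
    try:
        if param2 <= 0:
            raise ValueError("param2 must be positive")
        return f"Success: {param1 * param2}"
    except Exception as e:
        return f"Error: {e}"

print(run_tool("x", 3))  # Success: xxx
print(run_tool("x", 0))  # Error: param2 must be positive
```

Because errors come back as strings, the agent can read the failure and decide how to recover, instead of the whole crew run crashing.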
### Tool Guidelines

- **Always catch exceptions**: return error strings, never raise
- **Descriptive docstrings**: agents use these to understand usage
- **Type hints required**: all parameters need types
- **Return strings**: narrative results, not raw JSON
## ⚙️ Configuration Reference

### .env Variables
```bash
# LLM Provider: gemini, openai, anthropic, ollama
LLM_PROVIDER=gemini

# Model names (used for both agents and memory)
LLM_MODEL_FAST=gemini-2.5-flash-lite-preview-06-17
LLM_MODEL_SMART=gemini-2.5-flash-lite-preview-06-17

# API Keys (only the one matching your provider)
GEMINI_API_KEY=your-key
OPENAI_API_KEY=your-key
ANTHROPIC_API_KEY=your-key

# Memory Configuration
MEMORY_PROVIDER=qdrant           # qdrant (local) or mem0 (cloud)
MEMORY_EMBEDDING_PROVIDER=local  # local, openai, or gemini
QDRANT_HOST=qdrant               # Docker service name
QDRANT_PORT=6333
MEMORY_PROJECT_ID=your_project   # Namespace for memories
```
### Model Tiers

- `smart`: Used for complex reasoning (strategy, architecture)
- `fast`: Used for quick tasks (classification, simple responses)
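Tier selection is just a lookup from tier name to the env-configured model. The sketch below shows one way that mapping *might* work; the function name and fallback default are illustrative, not the real `Config` API:

```python
# Hedged sketch: resolve a model tier ("smart"/"fast") to the model
# name configured in the environment, with an illustrative fallback.
import os

def model_for_tier(tier: str) -> str:
    env_key = {"smart": "LLM_MODEL_SMART", "fast": "LLM_MODEL_FAST"}[tier]
    return os.getenv(env_key, "gemini-2.5-flash-lite-preview-06-17")

os.environ["LLM_MODEL_FAST"] = "example-fast-model"
print(model_for_tier("fast"))  # example-fast-model
```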
## 🧠 Memory System

### How It Works

- All agents have access to `SearchMemoryTool` and `SaveMemoryTool`
- Memories are stored in a Qdrant vector database
- Mem0 uses an LLM to extract facts and embeddings to search
### Rate Limiting

The memory system has built-in protection:

- Max 50 calls/minute
- 3 retries with exponential backoff
- Graceful degradation (continues without memory if unavailable)
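The retry-and-degrade behaviour can be sketched in a few lines. This is an illustrative stand-in, not the actual code in `src/memory/wrapper.py`:

```python
# Sketch: retry a memory call with exponential backoff, then degrade
# gracefully (return None) instead of crashing the conversation.
import time

def call_with_backoff(fn, retries: int = 3, base_delay: float = 0.01):
    """Try fn up to `retries` times, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            time.sleep(base_delay * (2 ** attempt))
    return None  # continue without memory if it stays unavailable

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("memory backend unavailable")
    return "ok"

print(call_with_backoff(flaky))  # ok (succeeds on the third attempt)
```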
### Memory Scope

All memories are scoped to `MEMORY_PROJECT_ID`. Change this value to isolate different projects.
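One way to picture this scoping: every memory key lives under a project prefix, so two projects never see each other's entries. The helper below is hypothetical, shown only to make the namespacing idea concrete:

```python
# Sketch: namespace memory keys by MEMORY_PROJECT_ID (illustrative
# helper, not the actual Mem0/Qdrant scoping mechanism).
import os

def memory_namespace(key: str) -> str:
    project = os.getenv("MEMORY_PROJECT_ID", "your_project")
    return f"{project}:{key}"

os.environ["MEMORY_PROJECT_ID"] = "project_a"
print(memory_namespace("server-facts"))  # project_a:server-facts
```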
## 🤖 AI Agent Guidelines

### For AI Agents Working on This Codebase

**READ BEFORE MAKING CHANGES**

1. **Load Knowledge First**
   - Read `src/knowledge/standards/*.md` before writing code
   - These are THE LAW for code style and patterns

2. **Never Hardcode**
   - Use `Config.get_llm_config()` for LLM settings
   - Use `Config.get_mem0_config()` for memory settings
   - Use environment variables for secrets

3. **Error Handling**
   - Tools must NEVER raise exceptions
   - Always return descriptive error strings
   - Use rate limiting for external APIs

4. **Adding Agents**
   - Create the persona file first
   - Test that the agent loads: `AgentFactory.create_agent("name")`
   - Add it to a crew only after the persona works

5. **Testing Changes**

   ```bash
   docker-compose restart app
   docker logs antigravity_brain --tail 50
   ```

6. **Commit Convention**
   - `feat:` Add new feature
   - `fix:` Bug fix
   - `docs:` Documentation
   - `refactor:` Code cleanup
## 📁 Directory Reference
```
src/
├── agents/
│   ├── factory.py        # Agent creation logic
│   └── personas/         # Agent personality files
│       ├── persona-arthur-mendes.md
│       ├── persona-gus-fring.md
│       └── ... (26 agents)
├── crews/
│   └── definitions.py    # Crew compositions
├── knowledge/
│   └── standards/        # Corporate knowledge base
│       ├── docker_standards.md
│       ├── python_tool_standards.md
│       └── ... (16 standards)
├── memory/
│   └── wrapper.py        # Mem0 + rate limiting
├── tools/
│   ├── base.py           # File system tools
│   ├── evolution.py      # SpawnAgent, LearnPolicy
│   └── zabbix.py         # Zabbix validation tools
├── app.py                # Chainlit entry
├── config.py             # Configuration hub
└── router.py             # Smart routing
```
## 🚀 Quick Commands
```bash
# Start application
docker-compose up -d

# View logs
docker logs antigravity_brain -f

# Restart after changes
docker-compose restart app

# Rebuild container
docker-compose build --no-cache app

# Access Qdrant dashboard
open http://localhost:6333/dashboard
```
Built with ❤️ by ITGuys | Last Updated: 2026-01-07