# 🔧 API & Code Reference
Technical reference for the Antigravity Brain codebase.
## 📦 Core Modules
### src/config.py
Central configuration hub.
```python
from src.config import Config

# Get LLM configuration
llm_config = Config.get_llm_config(mode="smart")  # or "fast"
# Returns: {"model": "gemini/...", "temperature": 0.7, "api_key": "..."}

# Get memory configuration
mem_config = Config.get_mem0_config()
# Returns: {"llm": {...}, "embedder": {...}, "vector_store": {...}}

# Get Telegram token
token = Config.get_telegram_token()
```
### src/agents/factory.py
Agent creation from persona files.
```python
from src.agents.factory import AgentFactory
from src.tools.zabbix import ZabbixValidatorTool

# Create an agent with the default tools
agent = AgentFactory.create_agent("arthur-mendes", model_tier="smart")

# Create an agent with specific tools
agent = AgentFactory.create_agent(
    "arthur-mendes",
    specific_tools=[ZabbixValidatorTool()],
    model_tier="smart",
)

# List available personas
personas = AgentFactory.list_available_personas()
# Returns: ["persona-arthur-mendes", "persona-gus-fring", ...]

# Load corporate knowledge
knowledge = AgentFactory.load_knowledge_base()
# Returns a concatenated string of all standards/*.md files
```
### src/crews/definitions.py
Crew assembly and management.
```python
from src.crews.definitions import CrewDefinitions

# Get available crews
crews = CrewDefinitions.get_available_crews()
# Returns: ["Infra Engineering (Zabbix)", "Security Audit", ...]

# Assemble a crew
crew = CrewDefinitions.assemble_crew(
    crew_name="Infra Engineering (Zabbix)",
    inputs={"topic": "Validate this template"},
)

# Execute the crew
result = crew.kickoff(inputs={"topic": "Your task here"})
```
### src/router.py
Smart request routing using an LLM.
```python
from src.router import SmartRouter

# Route a user request to the appropriate crew
crew_name = SmartRouter.route("Check server health")
# Returns: "Infra Engineering (Zabbix)"

crew_name = SmartRouter.route("Create a new agent")
# Returns: "HR & Evolution"
```
### src/memory/wrapper.py
Memory tools with rate limiting.
```python
from src.memory.wrapper import SearchMemoryTool, SaveMemoryTool, MemoryWrapper

# Get the memory client
client = MemoryWrapper.get_client()

# Use as tools (typically assigned to agents)
search_tool = SearchMemoryTool()
result = search_tool._run(query="What do we know about Zabbix?")

save_tool = SaveMemoryTool()
result = save_tool._run(fact="The server runs on port 8000")
```
## 🔧 Available Tools

### Memory Tools (src/memory/wrapper.py)

| Tool | Input | Output |
|---|---|---|
| `SearchMemoryTool` | `query: str` | Found memories or `"No relevant information"` |
| `SaveMemoryTool` | `fact: str` | `"Successfully saved"` or an error |
### Evolution Tools (src/tools/evolution.py)

| Tool | Input | Output |
|---|---|---|
| `SpawnAgentTool` | `filename`, `name`, `role`, `goal`, `backstory`, `llm_preference` | Path to the created file |
| `LearnPolicyTool` | `title`, `content`, `category` | Path to the saved policy |
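The evolution tools expose the same `_run` interface as the memory tools shown earlier. A hedged sketch of both calls (the keyword names are taken from the input column above; the exact signatures and all field values here are illustrative, not from the codebase):

```python
from src.tools.evolution import SpawnAgentTool, LearnPolicyTool

# Spawn a new persona file; returns the path to the created file
persona_path = SpawnAgentTool()._run(
    filename="persona-new-analyst.md",
    name="New Analyst",
    role="Monitoring Data Analyst",
    goal="Analyze monitoring data and summarize anomalies",
    backstory="A meticulous analyst with a background in observability.",
    llm_preference="default",
)

# Record a new corporate policy; returns the path to the saved policy
policy_path = LearnPolicyTool()._run(
    title="Incident Response",
    content="All P1 incidents must be acknowledged within 15 minutes.",
    category="operations",
)
```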
### Zabbix Tools (src/tools/zabbix.py)

| Tool | Input | Output |
|---|---|---|
| `ZabbixValidatorTool` | `file_path: str` | Validation report |
| `UUIDFixerTool` | `file_path: str` | Fixed file path |
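Similarly for the Zabbix tools, a brief sketch (the `file_path` keyword comes from the input column above; the template path is a placeholder):

```python
from src.tools.zabbix import ZabbixValidatorTool, UUIDFixerTool

# Validate an exported Zabbix template and inspect the report
report = ZabbixValidatorTool()._run(file_path="templates/example_template.yaml")
print(report)

# Repair invalid UUIDs; returns the path of the fixed file
fixed_path = UUIDFixerTool()._run(file_path="templates/example_template.yaml")
```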
## 🎭 Persona File Format

```markdown
---
description: Short description
llm_config:
  provider: default   # openai, gemini, ollama, default
---

# 👤 Persona: Name

**Role:** Job title
**Goal:** Primary objective

## 🧠 Backstory
Personality and background text...
```
### Parsed Fields

| Field | Source | Fallback |
|---|---|---|
| `name` | First `#` heading | `"Unknown Agent"` |
| `role` | `**Role:**` line | `"Support Agent"` |
| `goal` | `**Goal:**` or `**Especialidade:**` line | `"Execute tasks related to {role}"` |
| `backstory` | Entire body content | - |
| `llm_config` | YAML frontmatter | `{}` |
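For illustration, the table above maps onto a small amount of parsing logic. A standalone sketch of these rules (this is not the actual `AgentFactory` code; it assumes `PyYAML` is installed and that persona files follow the format shown above):

```python
import re
import yaml  # PyYAML

def parse_persona(text: str) -> dict:
    """Sketch of the parsed-fields rules above (not the factory's implementation)."""
    frontmatter = {}
    body = text
    # YAML frontmatter sits between the two leading '---' markers
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if match:
        frontmatter = yaml.safe_load(match.group(1)) or {}
        body = match.group(2)

    # First '#' heading (ignore '##' and deeper); strip the persona prefix
    heading = re.search(r"^#(?!#)\s*(.+)$", body, re.MULTILINE)
    name = heading.group(1).strip() if heading else "Unknown Agent"
    name = re.sub(r"^👤\s*Persona:\s*", "", name)

    role_m = re.search(r"\*\*Role:\*\*\s*(.+)", body)
    role = role_m.group(1).strip() if role_m else "Support Agent"

    goal_m = re.search(r"\*\*(?:Goal|Especialidade):\*\*\s*(.+)", body)
    goal = goal_m.group(1).strip() if goal_m else f"Execute tasks related to {role}"

    return {
        "name": name,
        "role": role,
        "goal": goal,
        "backstory": body.strip(),  # entire body content
        "llm_config": frontmatter.get("llm_config", {}),
    }
```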
## 🌐 Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| `LLM_PROVIDER` | Yes | `openai` | One of `gemini`, `openai`, `anthropic`, `ollama` |
| `LLM_MODEL_FAST` | Yes | `gpt-3.5-turbo` | Model for quick tasks |
| `LLM_MODEL_SMART` | Yes | `gpt-4o` | Model for complex reasoning |
| `GEMINI_API_KEY` | If `gemini` | - | Google AI API key |
| `OPENAI_API_KEY` | If `openai` | - | OpenAI API key |
| `ANTHROPIC_API_KEY` | If `anthropic` | - | Anthropic API key |
| `OLLAMA_BASE_URL` | If `ollama` | `http://localhost:11434` | Ollama server URL |
| `MEMORY_PROVIDER` | No | `mem0` | `qdrant` (local) or `mem0` (cloud) |
| `MEMORY_EMBEDDING_PROVIDER` | No | `openai` | `local`, `openai`, or `gemini` |
| `QDRANT_HOST` | If `qdrant` | `localhost` | Qdrant host |
| `QDRANT_PORT` | If `qdrant` | `6333` | Qdrant port |
| `MEMORY_PROJECT_ID` | No | `default_project` | Memory namespace |
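As a starting point, a minimal `.env` for a mostly local setup (variable names come from the table above; the values, including the model names, are illustrative placeholders, not recommendations):

```env
LLM_PROVIDER=gemini
LLM_MODEL_FAST=gemini/gemini-1.5-flash
LLM_MODEL_SMART=gemini/gemini-1.5-pro
GEMINI_API_KEY=your-google-ai-key
MEMORY_PROVIDER=qdrant
MEMORY_EMBEDDING_PROVIDER=local
QDRANT_HOST=localhost
QDRANT_PORT=6333
MEMORY_PROJECT_ID=default_project
```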
## 🔄 Rate Limiting Constants

In `src/memory/wrapper.py`:
```python
MAX_RETRIES = 3            # max retry attempts on 429
RETRY_DELAY_SECONDS = 2.0  # initial delay (doubles each retry)
MAX_CALLS_PER_MINUTE = 50  # conservative API limit
```
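Together these constants describe an exponential backoff policy. A minimal sketch of the pattern (not the actual wrapper code; `RateLimitError` is a hypothetical stand-in for the backend's 429 exception):

```python
import time

MAX_RETRIES = 3            # max retry attempts on 429
RETRY_DELAY_SECONDS = 2.0  # initial delay (doubles each retry)

class RateLimitError(Exception):
    """Stand-in for the 429 error raised by the memory backend."""

def call_with_backoff(call):
    """Retry `call` on rate-limit errors, doubling the delay after each attempt."""
    delay = RETRY_DELAY_SECONDS
    for attempt in range(MAX_RETRIES):
        try:
            return call()
        except RateLimitError:
            if attempt == MAX_RETRIES - 1:
                raise  # retries exhausted; surface the error
            time.sleep(delay)
            delay *= 2
```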
## 🐳 Docker Services

| Service | Port | Purpose |
|---|---|---|
| `app` | 8000 | Chainlit web UI |
| `qdrant` | 6333 | Vector database |
| `telegram_listener` | - | Telegram bot (optional) |
## 📝 Logging

```python
import logging

logger = logging.getLogger("AntigravityMemory")  # memory module
logger = logging.getLogger("AntigravityConfig")  # config module
```
Logs appear in Docker: `docker logs antigravity_brain -f`
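If you need more detail from these loggers while debugging locally, raising their level is one minimal option (illustrative; the project's default logging configuration is not specified here):

```python
import logging

# Illustrative only: emit DEBUG output from the project loggers
logging.basicConfig(level=logging.INFO)
logging.getLogger("AntigravityMemory").setLevel(logging.DEBUG)
logging.getLogger("AntigravityConfig").setLevel(logging.DEBUG)
```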