Manual Tracing Setup
Set up agent-level tracing manually with decorators. This guide shows you how to add the new simplified tracing approach to your AI agents without using the CLI.
Prefer automatic setup? Use our CLI Setup for automatic code generation, or the Main Quickstart for complete autonomous engineer setup.
When to use manual setup
Choose manual setup when:
- You need custom tracing configuration
- You’re integrating into existing complex codebases
- You want full control over the tracing implementation
- You’re using frameworks or patterns the CLI doesn’t recognize
Otherwise, use the CLI - it’s faster and handles configuration automatically.
Installation and configuration
Step 1: Install the SDK
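For a Python project, installation typically looks like the following - note the package name here is an assumption, so check your Handit onboarding docs for the exact name used by your SDK version:

```shell
# Package name is an assumption -- confirm it in your Handit onboarding docs
pip install handit-sdk
```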
Step 2: Configure your API key
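A common pattern is to expose the key through an environment variable; the variable name `HANDIT_API_KEY` below is an assumption, so match whatever name your SDK expects:

```shell
# HANDIT_API_KEY is an assumed variable name -- use the key from your Handit dashboard
export HANDIT_API_KEY="your-api-key-here"
```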
Agent-level tracing (recommended)
Add the `@tracing` decorator only to your main agent entry points - the functions that start agent execution:
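As a self-contained sketch of the pattern, the snippet below uses a no-op stand-in for the SDK's `tracing` decorator (in real code you would import it from the Handit SDK; the import path may differ between versions):

```python
import asyncio
import functools

# Stand-in for the SDK's @tracing decorator, defined here only so the
# example runs on its own; import the real decorator from the Handit SDK.
def tracing(agent):
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            # The real decorator reports this run to Handit under `agent`
            return await func(*args, **kwargs)
        return wrapper
    return decorator

# One decorator on the entry point; everything it calls is captured automatically
@tracing(agent="customer-service")
async def process_customer_request(message):
    return f"handled: {message}"

print(asyncio.run(process_customer_request("refund order")))
```

The key point is that the decorator sits on the function that owns the whole workflow, so nested LLM calls, tool calls, and helpers inside it are traced without further annotation.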
Important: Only trace agent entry points, not individual helper functions. The `@tracing` decorator should go on functions that handle complete user requests or start agent workflows.
Multiple agents
For applications with multiple AI agents, use different agent names:
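For example, a support agent and a document agent would each get their own name (again using a no-op stand-in for the SDK decorator so the sketch is runnable):

```python
import asyncio
import functools

# Stand-in for the SDK decorator (import the real one from the Handit SDK)
def tracing(agent):
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            return await func(*args, **kwargs)
        return wrapper
    return decorator

# Each agent gets its own name, so traces stay separated in the dashboard
@tracing(agent="customer-service")
async def process_customer_request(message):
    return f"support: {message}"

@tracing(agent="document-analyzer")
async def analyze_document(document):
    return f"analysis of {document}"

print(asyncio.run(analyze_document("invoice.pdf")))
```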
Verify your setup
✅ Check your dashboard: Go to dashboard.handit.ai - you should see:
- Agent traces appearing when you run your AI
- Agent names matching what you specified in decorators
- Complete execution flows captured automatically
✅ Test your agent: Run a request through your AI and confirm:
- Trace appears in the dashboard within seconds
- All operations within your agent function are captured
- Performance metrics show timing and resource usage
Setup complete! Your autonomous engineer can now see your AI agent executions and will start monitoring for issues immediately.
What to trace vs what NOT to trace
✅ DO trace these (agent entry points):
```python
# Main agent functions that handle complete user requests
@tracing(agent="customer-service")
async def process_customer_request(message):
    return await handle_complete_workflow(message)

# API endpoints that start agent execution
@app.post("/api/chat")
@tracing(agent="chat-api")
async def chat_endpoint(request):
    return await process_customer_request(request.message)

# Different agents in your system
@tracing(agent="document-analyzer")
async def analyze_document(document):
    return await complete_document_analysis(document)
```
❌ DON’T trace these (helper functions):
```python
# Internal helper functions - these get captured automatically
async def classify_intent(message):  # Don't add @tracing here
    return await llm.classify(message)

async def search_knowledge_base(query):  # Don't add @tracing here
    return await vector_db.search(query)

async def generate_response(context):  # Don't add @tracing here
    return await llm.generate(context)
```
Key principle: Trace the agent entry points that start complete workflows, not the individual functions within those workflows. Everything inside the traced function gets captured automatically.
Best practices
One decorator per agent workflow: Each complete user interaction should have one agent trace that captures the entire workflow.
Use descriptive agent names: Choose names that clearly identify different parts of your AI system like “customer-service”, “document-analysis”, or “code-generation”.
Environment variables: Always use environment variables for your API key, never hardcode it in your source code.
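A minimal way to read the key at startup, assuming the variable name `HANDIT_API_KEY` (the name is an assumption; use whatever your deployment defines):

```python
import os

# HANDIT_API_KEY is an assumed variable name -- match your deployment's config
api_key = os.environ.get("HANDIT_API_KEY", "")
if not api_key:
    print("HANDIT_API_KEY is not set; configure it before starting the agent")
```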
Advanced configuration
For complex scenarios, you can customize tracing behavior:
Custom metadata: Add additional context to traces
Sampling: For high-volume applications, trace a percentage of requests
Error handling: Customize how errors are captured and reported
Performance tuning: Optimize tracing for your specific use case
For these advanced scenarios, see our Advanced Tracing Guide.
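As one illustration of the sampling idea, a hypothetical wrapper can decide per request whether to trace - this is a sketch only, since the real SDK may expose its own sampling option:

```python
import asyncio
import functools
import random

def sampled(rate):
    """Enable tracing for roughly `rate` of calls (illustrative sketch only)."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            traced = random.random() < rate  # trace only this fraction of requests
            # In real code, enable or skip the SDK trace context based on `traced`
            return await func(*args, **kwargs)
        return wrapper
    return decorator

@sampled(rate=0.1)  # trace roughly 10% of requests
async def chat_endpoint(message):
    return f"reply to {message}"

print(asyncio.run(chat_endpoint("hello")))
```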
Need help?
Tracing not working: Verify your API key is configured correctly and your agent function is being called.
Missing traces: Ensure the `@tracing` decorator is on your main agent function, not on internal helper functions.
Performance concerns: Agent-level tracing has minimal overhead, but contact support for high-volume optimization.
For more help, visit our Support page or join our Discord community.