Agent Tracing
The foundation of AI agent observability. Agent Tracing wraps your entire agent workflow, creating the root trace that connects all operations together and gives you complete visibility from start to finish.
Think of Agent Tracing as the master recorder for your AI system. When a user submits a request, Agent Tracing creates a comprehensive record of everything that happens to process that request—every function call, every decision point, every tool execution, and every language model interaction.
Why Agent Tracing is essential: Without this foundational wrapper, your autonomous engineer would see individual operations in isolation without understanding how they connect. Agent Tracing provides the context that links everything together, enabling pattern recognition and root cause analysis.
Golden Rule: Every request that enters your system should start with Agent Tracing. This creates the root context that allows all child operations to be properly linked and analyzed together.
How Agent Tracing Works
Agent Tracing creates a root context for your entire workflow, enabling comprehensive observability across your AI system. When you wrap your main agent function with Agent Tracing, it automatically captures the complete execution flow, correlates all operations within that request, monitors performance and errors across the entire workflow, and maintains a hierarchical view of how your system processes requests.
The result: Your autonomous engineer can see the big picture of how your AI operates, understand how different components interact, and identify issues that span multiple operations or components.
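For instance, a single wrapped entry point might fan out into an LLM call and a tool call, and both land under the same root trace. Here is a minimal sketch of that shape, assuming a handit_service module that exposes your configured tracker (see Implementation below); the helper functions are stand-ins for the LLM and tool operations you would instrument with the node-level tracing covered in Next Steps:
Python
from handit_service import tracker

async def classify_intent(text: str) -> str:
    return "billing"                  # stand-in for an LLM call

async def search_knowledge_base(intent: str) -> str:
    return "refund policy article"    # stand-in for a tool execution

@tracker.start_agent_tracing(key="support-agent")
async def handle_ticket(ticket_text: str) -> str:
    """Root trace: every operation called from here is linked to this request."""
    intent = await classify_intent(ticket_text)
    context = await search_knowledge_base(intent)
    return f"[{intent}] {context}"

# Conceptual hierarchy recorded for one request:
#
#   support-agent (root trace)
#   ├── classify_intent        (child operation, e.g. an LLM node)
#   └── search_knowledge_base  (child operation, e.g. a tool)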
Implementation
Python
from handit_service import tracker

# Async version
@tracker.start_agent_tracing(key="invoice-assistant")
async def process_invoice(user_input: str) -> str:
    """
    Main function of your agent.
    Everything that happens inside will be linked to the same trace.
    """
    # Your agent logic here
    answer = await generate_answer(user_input)
    return answer

# Sync version
@tracker.start_agent_tracing(key="invoice-assistant")
def process_invoice_sync(user_input: str) -> str:
    """
    Synchronous version of the agent.
    Same tracing capabilities, just without async/await.
    """
    # Your agent logic here
    answer = generate_answer_sync(user_input)
    return answer
Key Features
- Automatic Context Creation
  - Creates a unique trace ID for each request
  - Maintains context throughout the entire execution
  - Links all child operations together
- Error Handling (see the sketch after this list)
  - Automatically captures and reports errors
  - Includes stack traces and error context
  - Maintains error information in the trace
- Performance Monitoring
  - Tracks total execution time
  - Monitors individual operation durations
  - Identifies bottlenecks in the workflow
- Operation Correlation
  - Links all operations within a request
  - Maintains parent-child relationships
  - Provides end-to-end visibility
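To make the error handling concrete: an exception raised inside the traced function is captured for the trace (message, stack trace, error context) and, consistent with the Best Practices below, still propagates to the caller. A minimal sketch, with generate_answer standing in for your real agent logic:
Python
from handit_service import tracker

async def generate_answer(text: str) -> str:
    return f"Processed: {text}"   # stand-in for your real agent logic

@tracker.start_agent_tracing(key="invoice-assistant")
async def process_invoice(user_input: str) -> str:
    if not user_input.strip():
        # Recorded on the trace, then re-raised so your own
        # error handling still runs in the caller.
        raise ValueError("Empty invoice text")
    return await generate_answer(user_input)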
Real-World Examples
Agent Tracing is essential for any entry point that processes requests or executes workflows. Here are common scenarios:
Python
# Chat Agent - Entry point for conversational AI
@app.post("/api/v1/chat")
@tracker.start_agent_tracing(key="chat-agent-v1")
async def handle_chat(request: ChatRequest):
    """
    Main entry point for chat interactions.
    Tracks the entire conversation flow, including:
    - User messages and context
    - AI model responses
    - Tool executions
    - Conversation state
    """
    try:
        # Process the chat request
        response = await chat_agent.process(
            message=request.message,
            context=request.context,
            user_id=request.user_id
        )
        return response
    except Exception as e:
        logger.error(f"Chat processing error: {e}")
        raise HTTPException(status_code=500, detail=str(e))

# Agentic Scheduler - Entry point for AI-powered scheduling
@app.post("/api/v1/schedule")
@tracker.start_agent_tracing(key="scheduler-agent-v1")
async def handle_scheduling(request: ScheduleRequest):
    """
    Main entry point for AI scheduling operations.
    Tracks the entire scheduling workflow:
    - Calendar analysis
    - Conflict resolution
    - Meeting optimization
    - Notification handling
    """
    try:
        schedule = await scheduler_agent.process(
            participants=request.participants,
            preferences=request.preferences,
            constraints=request.constraints
        )
        return schedule
    except Exception as e:
        logger.error(f"Scheduling error: {e}")
        raise HTTPException(status_code=500, detail=str(e))

# Call Agent - Entry point for AI call handling
@app.post("/api/v1/call")
@tracker.start_agent_tracing(key="call-agent-v1")
async def handle_call(request: CallRequest):
    """
    Main entry point for AI call processing.
    Tracks the entire call workflow:
    - Call initialization
    - Real-time transcription
    - Intent recognition
    - Response generation
    - Call summary
    """
    try:
        call_result = await call_agent.process(
            audio_stream=request.audio,
            call_metadata=request.metadata,
            user_context=request.context
        )
        return call_result
    except Exception as e:
        logger.error(f"Call processing error: {e}")
        raise HTTPException(status_code=500, detail=str(e))

# Document Processing Agent - Entry point for document analysis
@app.post("/api/v1/process-document")
@tracker.start_agent_tracing(key="document-agent-v1")
async def handle_document(request: DocumentRequest):
    """
    Main entry point for document processing.
    Tracks the entire document workflow:
    - Document parsing
    - Content extraction
    - Analysis and classification
    - Summary generation
    """
    try:
        result = await document_agent.process(
            document=request.document,
            processing_type=request.type,
            options=request.options
        )
        return result
    except Exception as e:
        logger.error(f"Document processing error: {e}")
        raise HTTPException(status_code=500, detail=str(e))
Best Practices
Implementation Guidelines
- Always Start with Agent Tracing
  - Wrap your main entry points (API endpoints, webhooks, scheduled tasks)
  - Use meaningful keys that describe your agent’s purpose
  - Make keys consistent across environments
- Error Handling
  - Let errors propagate naturally through your code
  - Don’t suppress exceptions within the traced function
  - Use try/except blocks when you need specific error handling
- Keep Traces Clean
  - Avoid unnecessary nesting of agent traces
  - Use appropriate granularity for your workflow
  - Remove sensitive information from trace data (see the sketch after this list)
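Entry points other than HTTP handlers, such as the scheduled tasks mentioned above, benefit from the same wrapper. Below is a minimal sketch of a scheduled job that also redacts sensitive fields before they can reach trace data; the field names, agent key, and report logic are illustrative, not part of the SDK:
Python
from handit_service import tracker

SENSITIVE_FIELDS = {"ssn", "card_number", "password"}

def redact(record: dict) -> dict:
    """Drop fields that should never appear in trace data."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

@tracker.start_agent_tracing(key="nightly-report-agent")
async def run_nightly_report(customers: list[dict]) -> str:
    """Scheduled entry point: one root trace per nightly run."""
    safe_records = [redact(c) for c in customers]
    # Hand the sanitized records to your agent logic (stubbed here).
    return f"Generated report for {len(safe_records)} customers"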
With Agent Tracing in place, you’ll have complete visibility into your application’s behavior, making it easier to debug issues and optimize performance.
Next Steps
Ready to add more detailed tracing to your agent components?
- Learn about LLM Node Tracing for AI model interaction tracking
- Explore Tool Tracing for function and tool execution monitoring