
Tracing Features Overview

Transform your AI agents from black boxes to fully observable systems. Handit.ai’s tracing features give you complete visibility into every operation, from the highest-level agent workflow to individual LLM calls and tool executions.

Handit.ai provides three core tracing capabilities that work together to give you comprehensive observability:

  • 🤖 Agent Tracing - The orchestrator that tracks your entire agent workflow from start to finish
  • 🧠 LLM Node Tracing - The brain monitor that captures every AI model interaction with complete context
  • 🛠️ Tool Tracing - The action tracker that monitors every function and tool execution

The Three Pillars of AI Agent Tracing

🤖 Agent Tracing: The Complete Picture

What it does: Wraps your entire agent workflow, creating the root trace that connects all operations together.

Perfect for: Main agent functions, API endpoints, end-to-end monitoring, production workflows

Key capabilities:

  • Automatic operation linking - All child operations are automatically connected
  • Complete execution timeline - From first input to final output
  • Error propagation tracking - See how errors flow through your system
  • Performance overview - Total execution time and resource usage
process_customer_request.py
```python
@tracker.start_agent_tracing()
async def process_customer_request(request):
    # Everything inside here is automatically traced
    intent = await classify_intent(request)
    context = await search_knowledge_base(intent)
    response = await generate_response(context)
    return response
```

🧠 LLM Node Tracing: Understanding AI Decisions

What it does: Captures every interaction with language models, including prompts, responses, and performance metrics.

Perfect for: GPT/Claude calls, prompt engineering, token usage monitoring, model performance analysis

Key capabilities:

  • Complete prompt history - Every prompt and response is recorded
  • Performance metrics - Response time, token usage, and costs
  • Model parameters - Temperature, max tokens, and other settings
  • Error handling - Failed calls and retry attempts
classify_intent.py
```python
@tracker.trace_agent_node("intent-classifier")
async def classify_intent(user_message):
    # LLM call is automatically captured with full context
    response = await llm.generate(f"Classify intent: {user_message}")
    return response
```

🛠️ Tool Tracing: Monitoring Actions

What it does: Tracks every tool execution, from simple functions to complex API calls and database operations.

Perfect for: Custom functions, external API integrations, database queries, file processing

Key capabilities:

  • Function monitoring - Input parameters and return values
  • Execution timing - How long each operation takes
  • Error tracking - Detailed error context and stack traces
  • Resource usage - Memory, CPU, and network utilization
search_knowledge_base.py
```python
@tracker.trace_agent_node("knowledge-search")
async def search_knowledge_base(query):
    # Tool execution is captured with timing and results
    results = await vector_db.similarity_search(query)
    return results
```

How They Work Together

The three tracing types create a complete observability stack:

Agent Tracing Creates the Foundation

Your main agent function becomes the root trace that captures the entire workflow

LLM & Tool Tracing Add Detail

Individual operations are automatically linked as child traces under the agent

Complete Visibility Emerges

You get a hierarchical view showing exactly how your agent processes each request

Example trace hierarchy:

```
🤖 Customer Support Agent (2.3s)
├── 🧠 Intent Classification (0.8s)
├── 🛠️ Knowledge Search (0.9s)
├── 🧠 Response Generation (0.6s)
└── 🛠️ Response Validation (0.1s)
```

Automatic correlation: All operations are automatically linked together, so you can see the complete story of how your agent handled each request.
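To make the correlation mechanics concrete, here is a minimal sketch of how child spans nest under the root agent trace. The `SimpleTracker` class below is a hypothetical stand-in written only for illustration, not the handit SDK itself, and it uses synchronous functions for brevity where the examples above use `async`:

```python
import time
from functools import wraps

class SimpleTracker:
    """Hypothetical stand-in for the handit tracker (illustration only)."""
    def __init__(self):
        self.root = None    # the agent-level (root) trace
        self._stack = []    # currently open spans

    def _record(self, name, fn, *args, **kwargs):
        span = {"name": name, "children": []}
        if self._stack:
            self._stack[-1]["children"].append(span)  # link as a child trace
        else:
            self.root = span                          # first span becomes the root
        self._stack.append(span)
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            span["duration_s"] = time.perf_counter() - start
            self._stack.pop()

    def start_agent_tracing(self):
        def deco(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                return self._record(fn.__name__, fn, *args, **kwargs)
            return wrapper
        return deco

    # LLM and tool nodes use the same mechanics, just with an explicit name
    def trace_agent_node(self, name):
        def deco(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                return self._record(name, fn, *args, **kwargs)
            return wrapper
        return deco

tracker = SimpleTracker()

@tracker.trace_agent_node("intent-classifier")
def classify_intent(msg):
    return "billing"

@tracker.trace_agent_node("knowledge-search")
def search_knowledge_base(intent):
    return [f"doc about {intent}"]

@tracker.start_agent_tracing()
def process_customer_request(request):
    intent = classify_intent(request)
    return search_knowledge_base(intent)

process_customer_request("Why was I charged twice?")
print(tracker.root["name"])                           # process_customer_request
print([c["name"] for c in tracker.root["children"]])  # ['intent-classifier', 'knowledge-search']
```

Because every node span opens while the agent span is still on the stack, the hierarchy above falls out automatically: you never wire operations together by hand.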

Why This Matters

🔍 Complete Visibility - See every operation, understand execution flow, track data transformations

🐛 Faster Debugging - Pinpoint exact failure locations, access complete error context, reduce debugging time from hours to minutes

⚡ Performance Optimization - Identify bottlenecks, track resource usage, optimize prompts and model parameters

📊 Production Monitoring - Monitor agent health, track success rates, get alerts for anomalies

Getting Started

Choose Your Starting Point

Pick the tracing type that matches your immediate needs

Implement Tracing

Add decorators or wrappers to your agent functions

View Results

See complete traces in your Handit.ai dashboard

Optimize & Scale

Use insights to improve your agent’s performance

Start simple: Begin with Agent Tracing for your main workflow, then add LLM and Tool tracing for detailed insights. You can implement tracing incrementally without disrupting your existing code.

Explore Each Feature

Ready to dive deeper into each tracing capability?

Transform your AI agents from mysterious black boxes into fully observable, debuggable, and optimizable systems.
