CLI Tracing Setup

Set up comprehensive AI agent tracing in minutes using the Handit CLI. This guide covers the complete CLI workflow for generating tracing code and configuring observability for your AI agents.

Prerequisites: You need Node.js installed and a Handit.ai account. The CLI will handle all SDK installation and configuration automatically.

Quick Setup

Step 1: Install the Handit CLI

terminal
npm install -g @handit.ai/cli

Step 2: Run Setup Command

terminal
handit-cli setup

The CLI will guide you through an interactive setup process:

Account Connection

  • Log into your Handit.ai account (opens browser if needed)
  • Verify your integration token
  • Select your project/workspace

Project Analysis

  • Scan your codebase to identify AI agent patterns
  • Detect existing LLM calls and agent functions
  • Suggest optimal tracing integration points

Code Generation

  • Generate tracing configuration files
  • Create SDK initialization code
  • Add tracing wrappers to your agent functions
  • Set up environment variable templates
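
As a rough illustration of the environment variable template, a generated .env.example (shown later in the project layout) might contain entries along these lines; only HANDIT_API_KEY and ENVIRONMENT are referenced by the configuration examples in this guide, so treat anything else as project-specific:

.env.example
# Handit.ai integration token (required)
HANDIT_API_KEY=your-integration-token

# Deployment environment read by the generated configuration
ENVIRONMENT=development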

Validation

  • Test the generated configuration
  • Verify connection to Handit.ai platform
  • Confirm tracing is working correctly

Generated Code Structure

After running the setup, the CLI creates several files in your project:

Configuration Files
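
The exact contents depend on your project, but the generated handit_config.py typically initializes the tracker from environment variables, roughly like this sketch (the SDK import path is an assumption; the configuration parameters match the customization example further below):

handit_config.py
# Illustrative sketch of the CLI-generated configuration, not the literal file
import os

# Assumption: the handit-sdk package exposes a tracker object; the real
# import path in your generated file may differ
from handit import tracker

tracker.config(
    api_key=os.getenv("HANDIT_API_KEY"),
    project_name="my-ai-agent",
    environment=os.getenv("ENVIRONMENT", "development"),
)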

Agent Wrapper Code

The CLI automatically generates tracing wrappers for your detected agent functions:
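The exact wrapper depends on your code, but for a simple function-based agent it looks roughly like this sketch (the agent function is a hypothetical example; the decorator and tracker come from the generated handit_config.py):

# Original agent function (hypothetical example)
from handit_config import tracker

async def answer_question(question: str):
    return f"answer to: {question}"

# CLI-generated traced wrapper that calls the original logic unchanged
@tracker.start_agent_tracing()
async def traced_answer_question(question: str):
    return await answer_question(question)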

Customizing Generated Code

Modifying Tracing Configuration

You can customize the generated tracing configuration:

handit_config.py
# Add custom configuration after CLI generation
tracker.config(
    api_key=os.getenv("HANDIT_API_KEY"),
    project_name="my-ai-agent",
    environment=os.getenv("ENVIRONMENT", "development"),
    # Custom settings
    batch_size=50,       # Batch traces for better performance
    flush_interval=30,   # Send traces every 30 seconds
    debug_mode=True,     # Enable debug logging
    sampling_rate=1.0    # Trace 100% of requests (reduce for high volume)
)

Adding Custom Tracing Points

Extend the generated code with additional tracing:
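For example, you can apply the same decorator the CLI uses to any additional function you want traced; a minimal sketch, assuming your generated handit_config.py exports the tracker and using a hypothetical retrieval helper:

from handit_config import tracker

# Hypothetical helper that is not an agent entry point but is still worth tracing
@tracker.start_agent_tracing()
async def retrieve_documents(query: str):
    # ... your retrieval logic here ...
    return ["doc-1", "doc-2"]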

CLI Management Commands

Managing Your Setup

If you need to update your tracing configuration or fix issues:

# Re-run setup to update configuration
handit-cli setup

The CLI will:

  • Detect any changes in your codebase
  • Update tracing configuration as needed
  • Regenerate integration code if necessary
  • Test the connection to Handit.ai

When to re-run setup:

  • After adding new agent functions
  • When changing your project structure
  • If tracing stops working
  • To update to latest SDK version

Integration Patterns

Existing Codebase Integration

The CLI adapts to different codebase structures:

Class-based Agents:

# Original code
class CustomerAgent:
    def process_request(self, msg):
        pass

# CLI generates traced wrapper
class TracedCustomerAgent(CustomerAgent):
    @tracker.start_agent_tracing()
    def process_request(self, msg):
        return super().process_request(msg)

Function-based Agents:

# Original code
async def handle_customer_query(query):
    return process_query(query)

# CLI generates traced version
@tracker.start_agent_tracing()
async def traced_handle_customer_query(query):
    return await handle_customer_query(query)

Framework Integration:

# FastAPI integration
from fastapi import FastAPI
from handit_config import tracker

app = FastAPI()

@app.post("/chat")
@tracker.start_agent_tracing()
async def chat_endpoint(message: str):
    return await process_chat(message)

Multiple Agent Support

For projects with multiple agents:

agents/__init__.py
""" Auto-generated multi-agent tracing setup """ from .customer_service_traced import CustomerServiceAgent from .technical_support_traced import TechnicalSupportAgent from .sales_agent_traced import SalesAgent # Export traced versions __all__ = [ 'CustomerServiceAgent', 'TechnicalSupportAgent', 'SalesAgent' ]

Best Practices

Development Workflow

  1. Run CLI setup when starting a new project
  2. Use traced versions of your agents in development
  3. Validate tracing before deploying to production
  4. Update configuration when adding new agent functions

Production Considerations

✅ Recommended Practices

  • Use environment variables for API keys
  • Set appropriate sampling rates for high traffic
  • Monitor tracing overhead in production
  • Keep traced and original versions in sync

🔧 Performance Optimization

  • Batch traces for better performance
  • Use async tracing for high-throughput systems
  • Configure flush intervals appropriately
  • Monitor memory usage with large traces

Code Organization

my-ai-project/
├── agents/
│   ├── original/                       # Original agent implementations
│   │   ├── customer_service.py
│   │   └── technical_support.py
│   ├── traced/                         # CLI-generated traced versions
│   │   ├── customer_service_traced.py
│   │   └── technical_support_traced.py
│   └── __init__.py                     # Export traced versions
├── handit_config.py                    # CLI-generated configuration
├── .env.example                        # Environment template
└── requirements.txt                    # Updated with handit-sdk

Troubleshooting

CLI Setup Issues

Command not found:

# Reinstall CLI
npm uninstall -g @handit.ai/cli
npm install -g @handit.ai/cli

Setup fails during account connection:

  • Ensure you have a valid Handit.ai account
  • Check internet connection for browser authentication
  • Verify your account has API access enabled

Code generation errors:

  • Ensure your project structure follows standard patterns
  • Check that your agent functions are properly defined
  • Try running with --verbose flag for detailed error info
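
For example (assuming the flag is accepted by the setup command):

terminal
handit-cli setup --verbose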

Runtime Issues

Traces not appearing in dashboard:

  • Verify API key is correctly set in environment
  • Check network connectivity to Handit.ai
  • Ensure you’re using the traced versions of your agents

Performance issues:

  • Reduce sampling rate: sampling_rate=0.1 (10% of traces)
  • Increase batch size: batch_size=100
  • Adjust flush interval: flush_interval=60
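
Putting those knobs together, a lower-overhead production setup is a small variation of the generated tracker.config call; a sketch reusing the parameters shown earlier in this guide:

handit_config.py
# (os and tracker are already available in the generated handit_config.py)
tracker.config(
    api_key=os.getenv("HANDIT_API_KEY"),
    project_name="my-ai-agent",
    environment="production",
    sampling_rate=0.1,   # Trace 10% of requests
    batch_size=100,      # Send traces in larger batches
    flush_interval=60,   # Flush once per minute
)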

Next Steps

Ready to observe your AI! Your agents now have comprehensive tracing. Monitor performance, debug issues, and optimize behavior using the generated observability infrastructure.
