
Tool Tracing

Track every action your agent takes. Tool Tracing monitors all your custom functions, API calls, and utility operations, providing complete visibility into how your agent’s tools perform.

Perfect for tracking custom functions, RAG systems, database queries, API integrations, and any supporting operations that power your AI agent.

Tool Tracing automatically captures inputs, outputs, execution time, and error details for all your agent’s supporting functions and tools.

What Gets Tracked

Every tool execution is captured with complete context:

| Data Type | What's Captured | Why It Matters |
| --- | --- | --- |
| 📥 Function Inputs | All parameters and arguments passed to tools | Debug issues and validate data flow |
| 📤 Function Outputs | Return values and results from tool execution | Verify tool behavior and output quality |
| ⏱️ Performance | Execution time and resource usage | Identify slow tools and bottlenecks |
| ❌ Errors | Exception details, stack traces, and failure context | Quickly diagnose and fix tool issues |
| 🔗 Context | Tool relationships and execution order | Understand how tools work together |
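To make these categories concrete, the sketch below shows the general shape of a tracing wrapper: it records inputs, output, duration, and error context around a single call. This is an illustrative pattern only, not handit.ai's internal implementation, and the record fields (`tool_id`, `duration_ms`, etc.) are hypothetical.

```python
import time
import traceback
from typing import Any, Callable, Dict

def trace_tool(func: Callable, tool_id: str) -> Callable:
    """Illustrative wrapper: capture inputs, output, duration, and errors."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record: Dict[str, Any] = {
            "tool_id": tool_id,                          # which tool ran
            "inputs": {"args": args, "kwargs": kwargs},  # 📥 function inputs
        }
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            record["output"] = result                    # 📤 function outputs
            return result
        except Exception as exc:
            record["error"] = {                          # ❌ errors
                "type": type(exc).__name__,
                "message": str(exc),
                "stack_trace": traceback.format_exc(),
            }
            raise
        finally:
            # ⏱️ performance: elapsed wall-clock time in milliseconds
            record["duration_ms"] = (time.perf_counter() - start) * 1000
            print(record)  # a real tracer would ship this to a backend

    return wrapper

# Example: wrap a simple function and call it
def add(a: int, b: int) -> int:
    return a + b

traced_add = trace_tool(add, "adder")
traced_add(2, 3)  # prints a record with inputs, output, and duration_ms
```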

Implementation

tool_tracing.py
```python
from handit_service import tracker
import logging
from typing import Any, Dict

# Basic usage with a simple function
def temperature_converter(celsius: float) -> float:
    return celsius * 9/5 + 32

tracked_converter = tracker.track_tool(
    temperature_converter,
    "temp-converter"
)

# Usage with error handling
async def process_data(data: Dict[str, Any]) -> Dict[str, Any]:
    try:
        # Pass the numeric field, not the whole dict, to the converter
        fahrenheit = await tracked_converter(data["celsius"])
        return {"fahrenheit": fahrenheit}
    except Exception as e:
        logging.error(f"Error processing data: {str(e)}")
        raise

# Example with multiple tools
def text_processor(text: str) -> str:
    return text.strip().lower()

def data_validator(data: Dict[str, Any]) -> bool:
    return all(key in data for key in ["id", "value"])

# Create tracked versions
tracked_processor = tracker.track_tool(text_processor, "text-processor")
tracked_validator = tracker.track_tool(data_validator, "data-validator")
```
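Tracked tools are called with the same signatures as the originals. A minimal driver for the snippet above might look like this; it assumes the tracked wrapper is awaitable and that `process_data` receives a `celsius` field, as in the example:

```python
import asyncio

# Continues the snippet above
async def main() -> None:
    result = await process_data({"celsius": 100.0})
    print(result)  # expected: {"fahrenheit": 212.0}

if __name__ == "__main__":
    asyncio.run(main())
```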

Common Use Cases

Utility Function Tracking

utility_tracking.py
```python
from handit_service import tracker
from typing import List, Dict, Any
import logging

# Utility function tracking with validation
class DataProcessor:
    def __init__(self):
        self.cleaner = tracker.track_tool(
            self._clean_data, "data-cleaner"
        )
        self.validator = tracker.track_tool(
            self._validate_data, "data-validator"
        )
        self.transformer = tracker.track_tool(
            self._transform_data, "data-transformer"
        )

    def _clean_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """Clean input data by removing whitespace and normalizing values."""
        return {
            k: str(v).strip() if isinstance(v, str) else v
            for k, v in data.items()
        }

    def _validate_data(self, data: Dict[str, Any]) -> bool:
        """Validate data structure and content."""
        required_fields = ["id", "name", "value"]
        return all(field in data for field in required_fields)

    def _transform_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
        """Transform data into required format."""
        return {
            "id": str(data["id"]),
            "name": data["name"].lower(),
            "value": float(data["value"])
        }

    async def process(self, data: Dict[str, Any]) -> Dict[str, Any]:
        try:
            # Clean data
            cleaned_data = self.cleaner(data)

            # Validate data
            if not self.validator(cleaned_data):
                raise ValueError("Invalid data structure")

            # Transform data
            result = self.transformer(cleaned_data)
            return result
        except Exception as e:
            logging.error(f"Processing error: {str(e)}")
            raise

# Usage
async def main():
    processor = DataProcessor()
    data = {
        "id": 123,
        "name": "  Test Data  ",
        "value": "42.5"
    }
    result = await processor.process(data)
```
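A short driver for the example above; the expected value assumes the transformations shown:

```python
import asyncio

if __name__ == "__main__":
    # Runs the tracked clean -> validate -> transform pipeline
    asyncio.run(main())
    # inside main(), result == {"id": "123", "name": "test data", "value": 42.5}
```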

RAG Tool Tracking

rag_tracking.py
```python
from handit_service import tracker
from typing import List, Dict, Any
import logging

# RAG tool tracking with vector search
class RAGTool:
    def __init__(self, vector_store):
        self.vector_store = vector_store
        self.search = tracker.track_tool(
            self._search_documents, "vector-search"
        )
        self.rank = tracker.track_tool(
            self._rank_results, "result-ranker"
        )

    async def _search_documents(
        self, query: str, top_k: int = 5
    ) -> List[Dict[str, Any]]:
        """Search for relevant documents."""
        try:
            results = await self.vector_store.similarity_search(
                query, k=top_k
            )
            return [
                {
                    "content": doc.page_content,
                    "metadata": doc.metadata
                }
                for doc in results
            ]
        except Exception as e:
            logging.error(f"Search error: {str(e)}")
            raise

    def _rank_results(
        self, results: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Rank search results by relevance."""
        return sorted(
            results,
            key=lambda x: x.get("score", 0),
            reverse=True
        )

    async def query(
        self, query: str, top_k: int = 5
    ) -> List[Dict[str, Any]]:
        try:
            # Search for documents
            results = await self.search(query, top_k)

            # Rank results
            ranked_results = self.rank(results)
            return ranked_results
        except Exception as e:
            logging.error(f"Query error: {str(e)}")
            raise

# Usage
async def main():
    rag = RAGTool(vector_store)
    results = await rag.query("What is machine learning?")
```
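The example assumes `vector_store` exposes an async `similarity_search(query, k=...)` returning documents with `page_content` and `metadata` attributes (a LangChain-style interface). For local experimentation without a real vector database, a hypothetical in-memory stub like this is enough:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class FakeDocument:
    """Stand-in for a vector-store document (hypothetical)."""
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)

class FakeVectorStore:
    """In-memory stand-in for a real vector store (hypothetical)."""

    def __init__(self, docs: List[FakeDocument]):
        self.docs = docs

    async def similarity_search(self, query: str, k: int = 5) -> List[FakeDocument]:
        # Naive relevance: rank by the number of words shared with the query
        terms = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(terms & set(d.page_content.lower().split())),
            reverse=True,
        )
        return scored[:k]

# Usage with the RAGTool above
vector_store = FakeVectorStore([
    FakeDocument("Machine learning is a subfield of AI.", {"score": 0.9}),
    FakeDocument("Databases store structured data.", {"score": 0.4}),
])
```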

Best Practices

Implementation Guidelines

  1. Use Meaningful Tool IDs

    • Choose descriptive names that identify the tool’s purpose
    • Include context like “data-validator” or “api-client”
    • Make IDs consistent across environments
  2. Error Handling

    • Let errors propagate naturally through your code
    • Implement proper error boundaries for critical tools
    • Log detailed error information for debugging
  3. Performance Considerations

    • Track execution time for performance-critical tools
    • Monitor resource usage for expensive operations
    • Identify and optimize bottlenecks
  4. Data Management

    • Sanitize sensitive data before logging (see the sketch after this list)
    • Handle large inputs and outputs appropriately
    • Manage tool state and lifecycle properly
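For the data-management point, a simple sanitizer applied before data reaches your logs (and, by extension, your traces) might look like the sketch below. The key names are examples only, not a prescribed schema:

```python
import logging
from typing import Any, Dict

SENSITIVE_KEYS = {"password", "api_key", "ssn"}  # example keys only

def sanitize(data: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy with sensitive fields masked, safe for logs and traces."""
    return {
        k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else v
        for k, v in data.items()
    }

# Log the sanitized copy; pass secrets around only where strictly required
payload = {"email": "ada@example.com", "password": "hunter2"}
logging.info("Submitting payload: %s", sanitize(payload))
```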
⚠️ Always implement proper error handling and logging when tracking tools to ensure you can debug issues effectively.

Dashboard Insights

When you trace tools, you get comprehensive analytics in your Handit.ai dashboard:

📊 Performance Metrics

  • Function execution time and latency trends
  • Resource usage for each tool operation
  • Request volume and frequency patterns
  • Tool efficiency and throughput analysis

🔍 Complete Tracing Data

  • Full function input and output inspection
  • Tool execution order and call hierarchy
  • Data transformations and processing flow
  • Function parameter and return value tracking

🚨 Error Detection & Flagging

  • Automatic detection of tool failures and exceptions
  • Error pattern analysis and failure rate tracking
  • Stack trace capture and error context
  • Tool reliability monitoring and alerts

Tool Tracing provides the visibility you need to optimize your agent’s supporting functions, reduce errors, and improve overall performance.

Next Steps

Ready to complete your tracing setup?
