# SDK Overview

Comprehensive AI agent observability made simple. The Handit.ai SDKs provide tracking and monitoring for your AI applications, helping you understand performance, debug issues, and optimize your AI systems with minimal setup.

The SDKs automatically capture agent executions, LLM interactions, tool usage, and performance metrics, providing detailed insight into your AI application's behavior.
## Available SDKs

### Python SDK

For AI applications built with Python. Designed for seamless integration with popular AI frameworks such as LangChain and OpenAI, as well as custom Python applications.

**Requirements:** Python 3.7+ | **Support:** Sync & async operations

### JavaScript SDK

For Node.js AI applications. Built for modern JavaScript/TypeScript applications with comprehensive tracking capabilities.

**Requirements:** Node.js 14+ | **Support:** CommonJS & ES modules
## Core Tracing Methods

Both SDKs provide the same core tracing capabilities through different method signatures.

### Agent-Level Tracing

**Purpose:** Track complete AI agent workflows from start to finish.
| Method | Python | JavaScript | Use Case |
|---|---|---|---|
| Agent Wrapper | `@start_agent_tracing()` | `startAgentTracing()` | Wrap entire agent functions for automatic tracing |
| Manual Agent | `_send_tracked_data()` | `captureAgentNode()` | Custom control over agent execution tracking |
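To make the agent-wrapper pattern concrete, here is a minimal stand-in decorator that does what the table describes: it wraps an entire agent function, times the run, and records success or failure. This is an illustrative sketch only; the real `@start_agent_tracing()` decorator lives in the Handit.ai Python SDK and its exact signature and behavior may differ.

```python
import functools
import time

def start_agent_tracing(agent_name):
    """Stand-in for an agent-level tracing wrapper (not the real SDK)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            trace = {"agent": agent_name, "start": time.time()}
            try:
                result = func(*args, **kwargs)
                trace["status"] = "success"
                return result
            except Exception as exc:
                trace["status"] = "error"
                trace["error"] = repr(exc)
                raise
            finally:
                trace["duration_s"] = time.time() - trace.pop("start")
                print(trace)  # the real SDK would report this to Handit.ai

        return wrapper
    return decorator

@start_agent_tracing("support-agent")
def run_agent(question):
    return f"answer to: {question}"

run_agent("How do I reset my password?")
```

Note that errors are re-raised after being recorded, so tracing never changes the agent's observable behavior.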
### Function-Level Tracing

**Purpose:** Monitor individual components, tools, and LLM calls.
| Method | Python | JavaScript | Use Case |
|---|---|---|---|
| Node Decorator | `@trace_agent_node()` | `traceAgentNode()` | Automatic tracing of specific functions |
| Node Function | `trace_agent_node_func()` | `captureAgentNode()` | Programmatic function tracing |
| Model Tracking | `track_model()` | `captureModel()` | LLM interaction monitoring |
| Tool Tracking | `track_tool()` | `trackTool()` | Custom tool and API call tracing |
### Configuration & Setup

**Purpose:** Initialize and configure SDK behavior.
| Method | Python | JavaScript | Use Case |
|---|---|---|---|
| Configuration | `tracker.config()` | `config()` | Set API keys and SDK options |
| Context Management | `endAgentTracing()` | `endAgentTracing()` | Manual session management |
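A rough sketch of the configure-then-manage-sessions flow the table describes. The `Tracker` class, the `start_session` helper, and the parameter names are assumptions made for illustration; only `config()` and `end_agent_tracing`-style session closing come from the table above.

```python
import os

class Tracker:
    """Illustrative stand-in for the SDK's tracker object (not the real API)."""

    def __init__(self):
        self.api_key = None
        self.active_session = None

    def config(self, api_key):
        """Set the API key (the real config() may accept more options)."""
        self.api_key = api_key

    def start_session(self, agent_name):
        """Hypothetical helper: open a manual tracing session."""
        self.active_session = {"agent": agent_name, "events": []}

    def end_agent_tracing(self):
        """Close the current session and return its collected data."""
        session, self.active_session = self.active_session, None
        return session

tracker = Tracker()
tracker.config(api_key=os.environ.get("HANDIT_API_KEY", "demo-key"))
tracker.start_session("support-agent")
finished = tracker.end_agent_tracing()
```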
## What Gets Tracked

### Automatic Tracking
- Agent Executions - Complete workflow timing and status
- Function Calls - Input parameters and return values
- LLM Interactions - Prompts, responses, token usage, and performance
- Tool Usage - Custom function executions and API calls
- Error Handling - Exception details and stack traces
- Performance Metrics - Execution time and resource usage
### Custom Tracking
- Business Events - Domain-specific metrics and KPIs
- User Context - User IDs, session data, and custom metadata
- External Services - Third-party API calls and database queries
- Conditional Logic - Environment-based and user-tier tracking
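As a concrete example of the custom-tracking bullets above, here is a sketch of a business event that carries user context and domain metadata. The field names are illustrative assumptions, not the SDK's actual event schema.

```python
def business_event(name, user_id, session_id, **metadata):
    """Build a custom event record with user context and free-form metadata."""
    return {
        "event": name,
        "user_id": user_id,
        "session_id": session_id,
        "metadata": metadata,
    }

# Hypothetical KPI event: a resolved support ticket with a satisfaction score.
evt = business_event("ticket_resolved", user_id="u-42",
                     session_id="s-9", csat_score=5, channel="chat")
```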
## Key Capabilities by Use Case

### 🔍 Debugging & Troubleshooting
- Complete execution traces - See exactly what your agent did
- Error context capture - Full stack traces with input data
- Performance bottleneck identification - Find slow operations
- Data flow visualization - Track data transformations
### 📊 Performance Monitoring
- Response time tracking - Monitor agent and LLM latency
- Resource usage monitoring - Track memory and CPU usage
- Token usage analysis - Monitor LLM costs and efficiency
- Success rate monitoring - Track completion and failure rates
### 🎯 Optimization & Insights
- Prompt performance analysis - Compare different prompts
- Model comparison - Evaluate different LLM models
- Tool effectiveness tracking - Monitor tool success rates
- User experience metrics - Track user satisfaction indicators
### 🏗️ Development & Testing
- Gradual rollout support - Test new features safely
- A/B testing integration - Compare different approaches
- Environment-specific tracking - Different behavior per environment
- Custom event tracking - Monitor business-specific metrics
## Integration Patterns

### Automatic Integration
- Framework Detection - Automatic integration with LangChain, OpenAI
- HTTP Request Interception - Automatic API call tracking
- Error Boundary Creation - Automatic error capture and reporting
### Manual Integration
- Custom Workflows - Track proprietary agent architectures
- Third-Party Services - Monitor external API dependencies
- Legacy Systems - Add observability to existing applications
### Hybrid Approach
- Selective Tracing - Combine automatic and manual tracking
- Conditional Logic - Environment and user-based tracing decisions
- Performance Optimization - Balance detail with system performance
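The conditional-logic bullet above can be sketched as a simple predicate that decides whether to trace a given request based on environment and a sampling rate. The function name, thresholds, and environment variable are illustrative assumptions, not SDK settings.

```python
import os
import random

def should_trace(environment: str, sample_rate: float = 1.0) -> bool:
    """Trace everything outside production; sample traces in production."""
    if environment != "production":
        return True
    return random.random() < sample_rate

# Usage: trace 10% of production requests, everything elsewhere.
env = os.environ.get("APP_ENV", "development")
tracing_enabled = should_trace(env, sample_rate=0.1)
```

Sampling like this is one way to balance trace detail against overhead, which is the trade-off the "Performance Optimization" bullet refers to.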
## SDK-Specific Features

### Python SDK Advantages
- Deep Framework Integration - Native LangChain and OpenAI support
- Async/Await Support - Full asynchronous operation tracking
- Scientific Computing - Integration with NumPy, Pandas, and ML libraries
- Decorator Patterns - Pythonic function decoration for tracing
### JavaScript SDK Advantages
- Modern JavaScript Support - ES6+, TypeScript, and module systems
- HTTP Library Integration - Automatic Axios and Fetch tracking
- Event-Driven Architecture - WebSocket and event-based tracing
- Microservice Ready - Built for distributed Node.js applications
## Getting Started
Ready to add comprehensive observability to your AI applications?
Choose your SDK:
- Python SDK Documentation - Complete Python integration guide
- JavaScript SDK Documentation - Complete JavaScript integration guide
Or explore tracing approaches:
- Tracing Guide Overview - Compare different tracing methods
- Quick Start Guide - Get tracing working in 5 minutes
Both SDKs are designed to provide maximum insight with minimal performance impact. Start with basic tracing and gradually add more detailed monitoring as needed.