This guide walks you through how to manually integrate Handit.AI into your Node.js or Python application, giving you full control over your agent's setup, instrumentation, and monitoring.
Installation
Install the Handit.AI SDK in your project:
Node.js:
npm install @handit.ai/node

Python:
pip install "handit-sdk>=1.13.0"
Basic Configuration
Configure the Handit.AI client using your API key (stored in environment variables). In Python:

import os
from handit import HanditTracker

tracker = HanditTracker()
tracker.config(api_key=os.getenv("HANDIT_API_KEY"))
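In Node.js, configuration is analogous. Below is a minimal sketch that assumes the SDK exposes a config function for setting the API key; check the package documentation for the exact export:

import { config } from '@handit.ai/node';

// Configure the Handit.AI client with the key from your environment
config({
  apiKey: process.env.HANDIT_API_KEY
});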
Agent Configuration
Before instrumenting your agent, download the agent configuration file from the Agents tab in the Handit.AI dashboard. This file maps node names to their corresponding slugs and is used for tracing:
{
  "agent-name": {
    "node-name": "node-slug"
  }
}
This mapping file is essential for the traceAgentNode function as it provides the correct slugs for each node in your agent.
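If you prefer not to hard-code slugs, you can load the mapping file at startup and resolve slugs by node name. A minimal sketch; the file name handit-agent-config.json and the getNodeSlug helper are illustrative, not part of the SDK:

import { readFileSync } from 'fs';

// Load the agent configuration downloaded from the Agents tab
const agentConfig = JSON.parse(readFileSync('./handit-agent-config.json', 'utf8'));

// Resolve a node's slug from its agent and node name
function getNodeSlug(agentName, nodeName) {
  return agentConfig[agentName]?.[nodeName];
}

// e.g. getNodeSlug("agent-name", "node-name") -> "node-slug"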
The node wrapper (traceAgentNode) should wrap each individual function that performs a specific operation in your agent. This includes:
Model calls (e.g., GPT, Claude)
Tool executions
Data processing functions
Any other discrete operation
It's crucial to:
Pass the actual input data through the wrapper (not just metadata)
Return the complete output from the function
Use the correct slug from your agent config file
import { traceAgentNode } from '@handit.ai/node';
import { OpenAI } from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Example: Model node
async function gptNode(messages) {
  const response = await openai.chat.completions.create({
    messages,
    model: "your-model-name"
  });
  // Return the complete model response
  return response.choices[0].message.content;
}

// Example: Tool node
async function dataProcessor(input) {
  // Process the data
  const processed = await processData(input);
  // Return the complete processed result
  return processed;
}

// Wrap nodes with tracing
const tracedGpt = traceAgentNode({
  agentNodeId: "text-preprocessor", // Use the slug from your agent config
  callback: gptNode
});

const tracedProcessor = traceAgentNode({
  agentNodeId: "data-processor", // Use the slug from your agent config
  callback: dataProcessor
});

// Use in your agent
async function processAgentFlow(input) {
  // Pass the actual input data through the wrapper
  const processed = await tracedProcessor(input);
  const response = await tracedGpt(processed);
  return response;
}
import os
from handit import HanditTracker
from openai import AsyncOpenAI

# Tracker configured as in Basic Configuration
tracker = HanditTracker()
tracker.config(api_key=os.getenv("HANDIT_API_KEY"))

client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Example: Model node
@tracker.trace_agent_node(agent_node_id="text-preprocessor")
async def gpt_node(messages):
    response = await client.chat.completions.create(
        messages=messages,
        model="your-model-name"
    )
    # Return the complete model response
    return response.choices[0].message.content

# Example: Tool node
@tracker.trace_agent_node(agent_node_id="data-processor")
async def data_processor(input_data):
    # Process the data
    processed = await process_data(input_data)
    # Return the complete processed result
    return processed

# Use in your agent
@tracker.start_agent_tracing()
async def process_agent_flow(input_data):
    # Pass the actual input data through the wrapper
    processed = await gpt_node(input_data)
    response = await data_processor(processed)
    return response
Manual Send: Capture Node Activity Without Wrapping
Use the manual method (captureAgentNode) when you don’t want to or can’t wrap a function, but still want to trace its activity. This approach is ideal for:
Functions that are hard to isolate
Dynamically constructed logic
Legacy systems or edge cases
You’ll need to manually pass:
agentNodeSlug: The slug from your agent config file
requestBody: The actual input data (e.g., user prompt, system prompt, parameters)
responseBody: The full output or response from the node
It’s crucial to:
Provide the real input and output, not just metadata
Match the slug with your config file for accurate attribution
Call captureAgentNode() after the operation completes
import { captureAgentNode } from '@handit.ai/node';
import { OpenAI } from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Example: Model node without wrapper
async function gptNode(messages) {
  const requestBody = { systemPrompt: messages[0], userPrompt: messages[1] };
  const response = await openai.chat.completions.create({
    messages,
    model: "your-model-name"
  });
  const responseBody = response.choices[0].message.content;
  // Manually capture the trace
  await captureAgentNode({
    agentNodeSlug: "text-preprocessor", // Use the slug from your config
    requestBody,
    responseBody
  });
  return responseBody;
}
import os
from openai import AsyncOpenAI
from handit import HanditTracker

tracker = HanditTracker()
tracker.config(api_key=os.getenv("HANDIT_API_KEY"))

client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Example: Model node (manual tracking)
async def gpt_node(messages):
    request_body = {
        "system_prompt": messages[0]["content"] if messages and messages[0]["role"] == "system" else "",
        "user_prompt": messages[1]["content"] if len(messages) > 1 and messages[1]["role"] == "user" else ""
    }
    response = await client.chat.completions.create(
        messages=messages,
        model="your-model-name"
    )
    response_body = response.choices[0].message.content
    # Manually send tracking
    tracker._send_tracked_data_sync(
        model_id="text-preprocessor",  # Use the slug from your agent config
        request_body=request_body,
        response_body=response_body
    )
    return response_body
Best Practices
Environment Variables
Store API keys securely (e.g., in .env files; see the sketch below)
Never commit secrets to version control
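In Node.js, for example, you can keep keys in a local .env file (excluded via .gitignore) and load them with a package such as dotenv; the variable names below are the ones used throughout this guide:

// .env (never committed)
// HANDIT_API_KEY=your-handit-api-key
// OPENAI_API_KEY=your-openai-api-key

import 'dotenv/config'; // loads .env into process.env

const handitApiKey = process.env.HANDIT_API_KEY;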
Error Handling
Wrap model/tool calls in proper try/catch logic (see the sketch after this list)
Log exceptions with helpful context
Traced functions automatically propagate errors to Handit for visibility
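A minimal sketch of such a wrapper, reusing the tracedGpt node from the example above (the logging detail is illustrative):

async function safeGptCall(messages) {
  try {
    return await tracedGpt(messages);
  } catch (error) {
    // Log with enough context to identify the failing node
    console.error('gpt node failed', { message: error.message });
    // Rethrow so the agent flow can decide how to handle the failure
    throw error;
  }
}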
Security
Sanitize sensitive inputs before sending them to Handit (see the sketch after this list)
Validate incoming data
Implement access controls on endpoints using agents
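For instance, you might redact known-sensitive fields before passing a request body to captureAgentNode; the field list and sanitize helper below are illustrative only:

// Illustrative helper: redact sensitive fields before tracing
const SENSITIVE_KEYS = ['password', 'apiKey', 'ssn', 'email'];

function sanitize(payload) {
  const clean = { ...payload };
  for (const key of SENSITIVE_KEYS) {
    if (key in clean) clean[key] = '[REDACTED]';
  }
  return clean;
}

// Usage: await captureAgentNode({ agentNodeSlug, requestBody: sanitize(requestBody), responseBody });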