Manual Setup

This guide walks you through how to manually integrate Handit.AI into your Node.js or Python application, giving you full control over your agent's setup, instrumentation, and monitoring.

Installation

Install the Handit.AI SDK in your project:

# Node.js
npm install @handit.ai/node

# Python
pip install "handit-sdk>=1.13.0"

Basic Configuration

Configure the Handit.AI client using your API key (stored in environment variables).

// Node.js
const { config } = require('@handit.ai/node');
config({ apiKey: process.env.HANDIT_API_KEY });

# Python
import os
from handit import HanditTracker

tracker = HanditTracker()
tracker.config(api_key=os.getenv("HANDIT_API_KEY"))
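
For local development, a common pattern is to keep the key in a .env file that stays out of version control and load it before configuring the client. The snippet below is a minimal sketch; the dotenv package and the .env file name are assumptions, not requirements of the SDK.

# .env (keep this file out of version control)
HANDIT_API_KEY=your-handit-api-key

// Node.js: load the .env file before configuring the client (assumes the dotenv package)
require('dotenv').config();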

Agent Configuration

Before instrumenting your agent, download the agent configuration file from the Agents tab in the Handit.AI dashboard. This file maps node names to their corresponding slugs and is used for tracing:

{
  "agent-name": {
    "node-name": "node-slug"
  }
}

This mapping file is essential for the traceAgentNode function as it provides the correct slugs for each node in your agent.

// Node.js
const agentConfig = {
  "agent": {
    "name": "Text Processing Agent",
    "slug": "text-processing-agent",
    "description": "Processes and optimizes text through multiple stages"
  },
  "nodes": [
    {
      "name": "Text Preprocessor",
      "slug": "text-preprocessor",
      "description": "Preprocesses input text for analysis",
      "type": "tool",
      "problem_type": "text-preprocessing",
      "next_nodes": ["sentiment-analyzer"]
    },
    {
      "name": "Sentiment Analyzer",
      "slug": "sentiment-analyzer",
      "description": "Analyzes text sentiment",
      "type": "model",
      "problem_type": "sentiment-analysis",
      "next_nodes": ["response-formatter"]
    }
  ]
};

# Python
agent_config = {
    "agent_name": {
        "node_name": "slug",
    }
}
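
At runtime you can load the downloaded file once and look up slugs by node name instead of hard-coding them. This is a minimal sketch; the agent-config.json file name and the getNodeSlug helper are illustrative, not part of the SDK.

// Load the mapping file downloaded from the Agents tab
// (the file name agent-config.json is an assumption for this example)
const fs = require('fs');
const nodeMap = JSON.parse(fs.readFileSync('./agent-config.json', 'utf8'));

// Illustrative helper: resolve a node's slug from the name-to-slug mapping
function getNodeSlug(agentName, nodeName) {
  return nodeMap[agentName]?.[nodeName];
}

const slug = getNodeSlug('text-processing-agent', 'Text Preprocessor');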

Implementation

Agent Wrapper: Trace the Entire Workflow

The agent wrapper (startAgentTracing) should wrap the main function that executes your entire agent flow. This could be:

  • The main endpoint handler if your agent is exposed via an API

  • The primary function that orchestrates all agent operations

  • The top-level function that processes the complete agent workflow

The wrapper ensures that all operations within your agent are properly traced and connected.

// Node.js
import { startAgentTracing } from '@handit.ai/node';

// Example: API endpoint handler
async function handleAgentRequest(req, res) {
  const result = await processAgentFlow(req.body);
  res.json(result);
}

// Wrap the endpoint handler
const tracedEndpoint = startAgentTracing(handleAgentRequest);

// Use in your API
app.post('/agent', tracedEndpoint);

// Example: Main agent orchestrator
async function processAgentFlow(input) {
  // Your complete agent logic here
  const preprocessed = await preprocessData(input);
  const analyzed = await analyzeData(preprocessed);
  const formatted = await formatResponse(analyzed);
  return formatted;
}

// Wrap the main orchestrator
const tracedAgent = startAgentTracing(processAgentFlow);

# Python
# Example: Main agent orchestrator
@tracker.start_agent_tracing()
async def process_agent_flow(input_data):
    # Your complete agent logic here
    preprocessed = await tracker.trace_agent_node_func(preprocess_data, input_data, key="slug-1")
    analyzed = await tracker.trace_agent_node_func(analyze_data, preprocessed, key="slug-2")
    formatted = await tracker.trace_agent_node_func(format_response, analyzed, key="slug-3")
    return formatted

Node Wrapper: Trace Individual Functions

The node wrapper (traceAgentNode) should wrap each individual function that performs a specific operation in your agent. This includes:

  • Model calls (e.g., GPT, Claude)

  • Tool executions

  • Data processing functions

  • Any other discrete operation

It's crucial to:

  1. Pass the actual input data through the wrapper (not just metadata)

  2. Return the complete output from the function

  3. Use the correct slug from your agent config file

// Node.js
import { traceAgentNode } from '@handit.ai/node';
import { OpenAI } from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Example: Model node
async function gptNode(messages) {
  const response = await openai.chat.completions.create({
    messages,
    model: "your-model-name"
  });
  // Return the complete model response
  return response.choices[0].message.content;
}

// Example: Tool node
async function dataProcessor(input) {
  // Process the data
  const processed = await processData(input);
  // Return the complete processed result
  return processed;
}

// Wrap nodes with tracing
const tracedGpt = traceAgentNode({
  agentNodeId: "text-preprocessor", // Use the slug from your agent config
  callback: gptNode
});

const tracedProcessor = traceAgentNode({
  agentNodeId: "data-processor", // Use the slug from your agent config
  callback: dataProcessor
});

// Use in your agent
async function processAgentFlow(input) {
  // Pass the actual input data through the wrapper
  const processed = await tracedProcessor(input);
  const response = await tracedGpt(processed);
  return response;
}
# Python
import os
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Example: Model node
@tracker.trace_agent_node(agent_node_id="text-preprocessor")
async def gpt_node(messages):
    response = await client.chat.completions.create(
        messages=messages,
        model="your-model-name"
    )
    # Return the complete model response
    return response.choices[0].message.content

# Example: Tool node
@tracker.trace_agent_node(agent_node_id="data-processor")
async def data_processor(input_data):
    # Process the data
    processed = await process_data(input_data)
    # Return the complete processed result
    return processed


# Use in your agent
@tracker.start_agent_tracing()
async def process_agent_flow(input_data):
    # Pass the actual input data through the wrapper
    processed = await data_processor(input_data)
    response = await gpt_node(processed)
    return response

Manual Send: Capture Node Activity Without Wrapping

Use the manual method (captureAgentNode) when you don’t want to or can’t wrap a function, but still want to trace its activity. This approach is ideal for:

  • Functions that are hard to isolate

  • Dynamically constructed logic

  • Legacy systems or edge cases

You’ll need to manually pass:

  • agentNodeSlug: The slug from your agent config file

  • requestBody: The actual input data (e.g., user prompt, system prompt, parameters)

  • responseBody: The full output or response from the node

It’s crucial to:

  • Provide the real input and output, not just metadata

  • Match the slug with your config file for accurate attribution

  • Call captureAgentNode() after the operation completes

// Node.js
import { captureAgentNode } from '@handit.ai/node';
import { OpenAI } from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// Example: Model node without wrapper
async function gptNode(messages) {
  const requestBody = { systemPrompt: messages[0], userPrompt: messages[1] };

  const response = await openai.chat.completions.create({
    messages,
    model: "your-model-name"
  });

  const responseBody = response.choices[0].message.content;

  // Manually capture the trace
  await captureAgentNode({
    agentNodeSlug: "text-preprocessor", // Use the slug from your config
    requestBody,
    responseBody
  });

  return responseBody;
}

# Python
import os
from openai import AsyncOpenAI
from handit import HanditTracker

tracker = HanditTracker()
tracker.config(api_key=os.getenv("HANDIT_API_KEY"))

client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Example: Model node (manual tracking)
async def gpt_node(messages):
    request_body = {
        "system_prompt": messages[0]["content"] if messages and messages[0]["role"] == "system" else "",
        "user_prompt": messages[1]["content"] if len(messages) > 1 and messages[1]["role"] == "user" else ""
    }

    response = await client.chat.completions.create(
        messages=messages,
        model="your-model-name"
    )

    response_body = response.choices[0].message.content

    # Manually send tracking
    tracker._send_tracked_data_sync(
        model_id="text-preprocessor",  # Use the slug from your agent config
        request_body=request_body,
        response_body=response_body
    )

    return response_body

Best Practices

Environment Variables

  • Store API keys securely (e.g., .env files)

  • Never commit secrets to version control

Error Handling

  • Wrap model/tool calls with proper try/catch logic (see the sketch after this list)

  • Log exceptions with helpful context

  • Traced functions automatically propagate errors to Handit for visibility
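
A minimal sketch of that pattern is shown below, assuming the tracedGpt node from the Node Wrapper example above; since the traced wrapper already propagates errors to Handit, this is only about local logging and how you choose to fail or fall back.

// Minimal sketch: add local error handling around a traced node call
// (tracedGpt comes from the Node Wrapper example above; the logging is illustrative)
async function safeGptCall(messages) {
  try {
    return await tracedGpt(messages);
  } catch (error) {
    // Log with enough context to debug the failing node
    console.error('gpt node failed', { message: error.message });
    // The traced wrapper has already reported the error to Handit; rethrow or fall back here
    throw error;
  }
}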

Security

  • Sanitize sensitive inputs before sending to Handit (see the sketch after this list)

  • Validate incoming data

  • Implement access controls on endpoints using agents
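
As an example, sensitive fields can be redacted before a manual captureAgentNode call. The redactFields helper below is purely illustrative and not part of the SDK.

// Illustrative helper: redact sensitive fields before sending a manual trace
function redactFields(payload, fields = ['password', 'email', 'apiKey']) {
  const copy = { ...payload };
  for (const field of fields) {
    if (field in copy) copy[field] = '[REDACTED]';
  }
  return copy;
}

// Inside an async node function, capture the sanitized payload
await captureAgentNode({
  agentNodeSlug: "text-preprocessor", // Use the slug from your config
  requestBody: redactFields(requestBody),
  responseBody
});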

Next Steps

  1. Test your agent with sample data

  2. Monitor performance in the Handit.AI dashboard

  3. Set up alerts for important metrics

  4. Review the Best Practices Guide