Learn how Handit.AI automatically optimizes your AI agents through continuous learning and adaptation.
Automatic Prompt Optimization
Handit.AI analyzes performance metrics to automatically improve prompts:
```javascript
import { fetchOptimizedPrompt, traceAgentNode } from '@handit.ai/node';

async function optimizedAgent(input) {
  // Get the optimized prompt
  const optimizedPrompt = await fetchOptimizedPrompt({
    modelId: agentsTrackingConfig.mariaManager.documentCompressor
  });

  // Use the optimized prompt in your node
  const optimizedNode = traceAgentNode({
    agentNodeId: agentsTrackingConfig.mariaManager.documentCompressor,
    callback: async (data) => {
      return await llmCall({
        ...data,
        prompt: optimizedPrompt
      });
    }
  });

  return await optimizedNode(input);
}
```
```python
class OptimizedAgent:
    @start_agent_tracing
    async def process(self, input_data):
        # Get the optimized prompt
        optimized_prompt = await self.tracker.fetch_optimized_prompt(
            model_id=agent_config["mar_i_a_manager"]["document_compressor"]
        )

        # Use the optimized prompt in your node
        optimized_node = trace_agent_node({
            "agent_node_id": agent_config["mar_i_a_manager"]["document_compressor"],
            "callback": lambda data: self.llm_call(data, optimized_prompt)
        })

        return await optimized_node(input_data)
```
A/B Testing
Handit.AI automatically tests different configurations against each other and routes traffic to the best-performing version.
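Handit.AI runs these experiments for you server-side. As a rough, hedged illustration of the underlying idea only (the names `PromptVariant` and `ABTest` are hypothetical and not part of the Handit.AI SDK), an A/B test splits traffic between prompt variants, records outcomes, and surfaces the variant with the best success rate:

```python
import random
from dataclasses import dataclass


@dataclass
class PromptVariant:
    """One candidate prompt plus its observed outcomes."""
    prompt: str
    successes: int = 0
    trials: int = 0

    @property
    def success_rate(self) -> float:
        return self.successes / self.trials if self.trials else 0.0


class ABTest:
    """Randomly routes traffic between prompt variants and tracks outcomes."""

    def __init__(self, variants: list[PromptVariant]):
        self.variants = variants

    def choose(self) -> PromptVariant:
        # Uniform random assignment; a production system would use
        # weighted or bandit-style allocation.
        return random.choice(self.variants)

    def record(self, variant: PromptVariant, success: bool) -> None:
        variant.trials += 1
        if success:
            variant.successes += 1

    def best(self) -> PromptVariant:
        return max(self.variants, key=lambda v: v.success_rate)
```

With enough recorded outcomes, `best()` identifies the winning variant, which can then be promoted as the optimized prompt.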
Continuous Learning
Your agent improves automatically through:
- Dynamic configuration updates
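Dynamic updates work because the agent re-fetches its optimized configuration at runtime instead of baking it in at deploy time. As a minimal sketch of that pattern (the `PromptCache` class and its `fetch_fn` parameter are illustrative, not Handit.AI API), a small cache with a time-to-live lets long-running agents pick up new prompts without a redeploy:

```python
import time


class PromptCache:
    """Caches the latest optimized prompt and refreshes it after a TTL,
    so a running agent picks up new configurations without a redeploy."""

    def __init__(self, fetch_fn, ttl_seconds: float = 300.0):
        self._fetch = fetch_fn          # e.g. a call that returns the current prompt
        self._ttl = ttl_seconds
        self._prompt = None
        self._fetched_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._prompt is None or now - self._fetched_at >= self._ttl:
            self._prompt = self._fetch()  # refresh from the optimization service
            self._fetched_at = now
        return self._prompt
```

Each call to `get()` returns the cached prompt until the TTL expires, at which point the next call fetches a fresh one.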