
LLM Node Assignment

Assign custom evaluators to your LLM nodes to enable automated quality assessment of your AI responses.

This guide walks through the simple process of associating your custom evaluators with specific LLM nodes in your system.

Before assigning evaluators, ensure you have created custom evaluators and configured model tokens.

Assignment Methods

You can associate evaluators with LLM nodes in two ways:

From Evaluation Suite

Select your evaluator and associate it with LLM nodes

From Agent Performance Dashboard

Use the three-dot menu on LLM nodes to manage evaluator associations

Method 1: From Evaluation Suite

1. Navigate to Evaluation Suite

  • Go to Evaluation Suite
  • Find the evaluator you want to assign

2. Associate LLM Node

  • Click on your evaluator
  • Click Associate LLM Node
  • Select the model/node you want to evaluate

3. Confirm Association

  • Review your selection
  • Click Save to create the association

Method 2: From Agent Performance Dashboard

1. Navigate to Agent Performance

  • Go to Agent Performance dashboard
  • Locate the LLM node you want to evaluate

2. Manage Evaluators

  • Click the three-dot menu on the LLM node
  • Select Add Evaluators to associate new evaluators
  • Or select Remove Evaluators to remove existing associations

3. Select and Save

  • Choose which evaluators to associate with this node
  • Save your selections

Multiple Evaluator Management

Assign multiple evaluators to the same LLM node for comprehensive quality assessment:

Example: Customer Support Node

Node: Customer Support LLM
Associated Evaluators:
  ✅ Response Quality Check
  ✅ Safety Compliance
  ✅ Customer Satisfaction

Benefits:

  • Comprehensive Assessment: Evaluate different quality dimensions
  • Specialized Focus: Each evaluator targets specific criteria
  • Balanced Coverage: Combine accuracy, safety, and user experience metrics
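Conceptually, the multi-evaluator setup above is a mapping from each LLM node to a set of associated evaluators. The following is an illustrative sketch only; the function and data-structure names are hypothetical and are not part of the handit.ai API:

```python
# Hypothetical sketch (NOT the handit.ai API): model node→evaluator
# associations as a mapping from node name to a set of evaluator names.
associations: dict[str, set[str]] = {}

def associate(node: str, evaluator: str) -> None:
    """Associate an evaluator with an LLM node (idempotent)."""
    associations.setdefault(node, set()).add(evaluator)

# Mirror the "Customer Support Node" example above.
for name in ["Response Quality Check", "Safety Compliance", "Customer Satisfaction"]:
    associate("Customer Support LLM", name)

print(sorted(associations["Customer Support LLM"]))
# → ['Customer Satisfaction', 'Response Quality Check', 'Safety Compliance']
```

Using a set makes repeated associations harmless, which matches the dashboard behavior of an evaluator being either attached to a node or not.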

Best Practices

Start Simple

  • Begin with one evaluator per node
  • Add more evaluators as you understand the system
  • Focus on your most critical AI functions first

Choose Relevant Evaluators

  • Customer Support: Helpfulness and professionalism evaluators
  • Content Generation: Accuracy and engagement evaluators
  • Technical Support: Safety and completeness evaluators

Monitor and Optimize

  • Check evaluation results regularly in your dashboard
  • Adjust evaluator associations based on insights
  • Remove evaluators that aren’t providing value

Removing Associations

To remove an evaluator from an LLM node:

  • Click the three-dot menu on the LLM node in Agent Performance
  • Select Remove Evaluators
  • Choose which evaluators to remove
  • Confirm the removal
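In the same spirit as the assignment sketch, removal can be modeled as deleting names from a node's evaluator set. Again, this is a hypothetical illustration, not the handit.ai API:

```python
# Hypothetical sketch (NOT the handit.ai API): removing evaluator
# associations from an in-memory node→evaluators mapping.
associations = {
    "Customer Support LLM": {
        "Response Quality Check",
        "Safety Compliance",
        "Customer Satisfaction",
    }
}

def remove_evaluators(node: str, to_remove: list[str]) -> None:
    """Detach the named evaluators from a node; unattached names are ignored."""
    associations.get(node, set()).difference_update(to_remove)

remove_evaluators("Customer Support LLM", ["Safety Compliance"])
print(sorted(associations["Customer Support LLM"]))
# → ['Customer Satisfaction', 'Response Quality Check']
```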

Next Steps

Your evaluators are now associated with LLM nodes and will automatically assess AI responses:

  • Monitor evaluation results to track quality trends
  • Adjust evaluator associations based on performance insights
  • Scale evaluation across additional nodes as needed

Evaluation now runs automatically: quality insights will appear in your dashboard as responses are assessed.
