# AI Classify
Sends data to an LLM along with a list of labels, and routes execution to the matching label's outgoing branch. If no label matches confidently, the fallback branch fires.
Unlike AI Transform (which returns free-form text), AI Classify forces the LLM to pick one of N predefined outcomes, so downstream routing is deterministic.
## When to use
- Risk scoring: classify a customer as `low_risk`/`medium_risk`/`high_risk`.
- Intent detection: is this support ticket a `billing`, `technical`, or `complaint`?
- Sentiment routing: `positive`, `neutral`, `negative`.
- Any "categorize this and branch on the category" use case.
## Configuration
| Field | Required | What it does |
|---|---|---|
| `instruction` | No | Extra context for the LLM. Keep it short; the labels themselves should be self-describing. |
| `inputField` | No | The payload to classify. If blank, the upstream node's output is used. |
| `labels` | Yes | JSON array of `{key, description}` objects. Each becomes an output handle. |
| `provider` | No | `groq` (default), `gemini`, or `openrouter`. |
| `model` | No | Provider-specific model ID. |
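Putting the fields together, a node configuration might look like the fragment below. The overall shape is illustrative (the exact config schema isn't shown in this doc); field names follow the table above.

```json
{
  "instruction": "Classify the risk level of this customer record.",
  "inputField": "customer",
  "labels": [
    { "key": "low_risk", "description": "No prior issues, clean history" },
    { "key": "high_risk", "description": "Chargebacks, complaints, or fraud signals" }
  ],
  "provider": "groq",
  "model": "llama-3.3-70b-versatile"
}
```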
## Labels format
```json
[
  { "key": "low_risk", "description": "No prior issues, clean history" },
  { "key": "medium_risk", "description": "Some flags but nothing severe" },
  { "key": "high_risk", "description": "Chargebacks, complaints, or fraud signals" }
]
```

Each label's `key` becomes an outgoing handle on the node. The `description` is what the LLM reads when deciding, so make it specific.
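Since the descriptions are what the model reads, the node presumably renders them into its prompt. The actual prompt template is internal to the node; this hypothetical sketch (`buildClassifyPrompt` is my name, not the product's) just shows one way the labels could be presented so the model can only answer with a declared key:

```typescript
type Label = { key: string; description: string };

// Hypothetical prompt builder: lists each label's key and description,
// then constrains the model to answer with a key only.
function buildClassifyPrompt(labels: Label[], instruction?: string): string {
  const options = labels.map((l) => `- ${l.key}: ${l.description}`).join("\n");
  return [
    instruction ?? "",
    "Classify the input into exactly one of these labels:",
    options,
    "Respond with the label key only.",
  ]
    .filter(Boolean)
    .join("\n");
}
```

This is also why vague or overlapping descriptions hurt: the description is the model's only definition of the category.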
## Handles
- One handle per label (e.g. `low_risk`, `medium_risk`, `high_risk`).
- Fallback: fires when the LLM response doesn't match any declared key. Always wire this to something (even just an Emit Event for logging) so you don't silently drop traffic.
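The fallback rule amounts to an exact-match check on the model's reply. A minimal sketch of that routing decision (function and field names are hypothetical, not the node's actual internals; I assume replies are normalized before matching):

```typescript
type Route = { handle: string; matched: boolean };

// Normalize the raw LLM reply and route to the matching label key,
// or to the fallback handle when nothing matches.
function routeByLabel(rawResponse: string, labelKeys: string[]): Route {
  const normalized = rawResponse.trim().toLowerCase();
  return labelKeys.includes(normalized)
    ? { handle: normalized, matched: true }
    : { handle: "fallback", matched: false };
}
```

Under this model, any stray formatting in the reply ("Medium_Risk.", a full sentence, etc.) that normalization doesn't remove lands on fallback, which is why wiring it up matters.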
## What it outputs
```js
{
  label: "medium_risk",
  rawResponse: "medium_risk",
  confidence: null,
  model: "llama-3.3-70b-versatile",
  provider: "groq",
  tokensIn: 128,
  tokensOut: 4,
  durationMs: 640
}
```

## Gotchas
- Label keys must be JSON-safe identifiers. Lowercase with underscores is safest: no spaces, no dashes (they confuse some LLM response parsers).
- Overlapping descriptions: if two labels sound similar, the LLM becomes non-deterministic. Make each description clearly distinct.
- Cost: classification is cheap (low token count), but it runs on every execution, so high-volume workflows add up. Watch `ai_usage_logs` to track spend.
- Don't use it for branching on exact field values: if you already have a field like `customer.tier === 'pro'`, use Switch or If/Else; they're faster, free, and deterministic. AI Classify is for cases where the category is implicit in free-form data.
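The "JSON-safe identifier" rule can be checked before saving a workflow. A sketch of such a validator (the regex is my interpretation of the rule above, not a documented constraint of the product):

```typescript
// Accept lowercase letters, digits, and underscores, starting with a letter.
// Rejects spaces, dashes, and uppercase, per the gotcha above.
const LABEL_KEY = /^[a-z][a-z0-9_]*$/;

function isValidLabelKey(key: string): boolean {
  return LABEL_KEY.test(key);
}
```

Running every label key through a check like this at save time catches the problem long before an LLM reply fails to parse at runtime.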