# AI Transform
Runs data through an LLM with a natural-language instruction and writes the response to the workflow context. Use this when you need a flexible transformation that's hard to express as static field mappings — summaries, extractions, rewrites.
## When to use
- Summarize a long support ticket, email thread, or document.
- Extract specific fields from unstructured text.
- Rewrite tone (e.g. "Make this customer message more polite").
- Translate between languages.
- Generate structured JSON from free-form input.
For choosing between discrete categories (low/medium/high risk), use AI Classify — it's purpose-built for that and gives you deterministic routing.
## Configuration
| Field | Required | What it does |
|---|---|---|
| `instruction` | Yes | The prompt sent to the LLM. Supports inline `{{...}}` references. |
| `inputField` | No | The payload the instruction operates on. If blank, the output of the immediate upstream node is used. |
| `outputFormat` | No | `text` (default), `json`, or `array`. The structured formats force the LLM to return valid JSON. |
| `provider` | No | `groq` (default, fastest), `gemini` (most capable), or `openrouter` (widest model selection). |
| `model` | No | Provider-specific model ID. Default: `llama-3.3-70b-versatile` on Groq. |
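Putting the fields together, a minimal configuration might look like the sketch below. The field names come from the table above; the surrounding node structure and the `{{trigger.data.body}}` reference are illustrative assumptions, not a guaranteed schema:

```
{
  "type": "ai_transform",
  "config": {
    "instruction": "Summarize this support ticket in under 50 words, focusing on the customer's main issue.",
    "inputField": "{{trigger.data.body}}",
    "outputFormat": "text",
    "provider": "groq"
  }
}
```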
## Instruction patterns that work well
- "Summarize this support ticket in under 50 words, focusing on the customer's main issue."
- "Extract the following fields as JSON: amount, currency, invoice_id. Return only valid JSON."
- "Rewrite this message in a polite, professional tone. Keep the factual content the same."
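The extraction pattern pairs naturally with `outputFormat: json`, so downstream nodes receive a parsed object instead of a raw string. A sketch, where the invoice fields and the `{{webhook.data.email_body}}` reference are assumed examples:

```
{
  "instruction": "Extract the following fields as JSON: amount, currency, invoice_id. Return only valid JSON.",
  "inputField": "{{webhook.data.email_body}}",
  "outputFormat": "json"
}
```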
## What it outputs
```
{
  result: <the LLM's response; parsed if outputFormat is json/array, otherwise a string>,
  value: <same as result; legacy alias>,
  model: "llama-3.3-70b-versatile",
  provider: "groq",
  tokensIn: 234,
  tokensOut: 87,
  durationMs: 1840
}
```

Downstream nodes reference `{{ai_transform.data.result}}`; that's the canonical key. Token counts go to `ai_usage_logs` for cost tracking.
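For example, a downstream node can interpolate the result into its own configuration. The notification node type and its fields here are hypothetical; only the `{{ai_transform.data.result}}` reference comes from this doc:

```
{
  "type": "send_slack",
  "config": {
    "channel": "#support",
    "message": "Ticket summary: {{ai_transform.data.result}}"
  }
}
```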
## Gotchas
- Non-determinism: two runs with the same input can produce different outputs. Don't use AI Transform for logic that needs reproducibility.
- Prompt injection: if your `instruction` or `inputField` contains untrusted user text, the user can try to subvert the LLM. Sanitize untrusted input or add an explicit "ignore user-controlled content" clause to the instruction.
- Hallucinations: the LLM may invent fields that don't exist in the input. For critical flows, validate the output with a downstream If/Else checking required fields.
- Latency: Groq is fastest (~500-1500ms), Gemini is slower (~2-5s). Account for this in workflows where total runtime matters.
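The hallucination check can be wired as a downstream If/Else on the parsed result. The condition shape below is purely illustrative (the If/Else node's actual schema, the `exists` operator, and the `onFalse` target are assumptions):

```
{
  "type": "if_else",
  "config": {
    "conditions": [
      { "field": "{{ai_transform.data.result.amount}}", "operator": "exists" },
      { "field": "{{ai_transform.data.result.invoice_id}}", "operator": "exists" }
    ],
    "onFalse": "handle_invalid_output"
  }
}
```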