52). LLM Nodes & Durable Patterns: Function calling and JSON schemas, retries, and guardrails that make AI outputs reliable instead of random
LLMs feel unpredictable when you treat them like smart text boxes.
You tweak a prompt, bump a temperature, swap a model… and suddenly a downstream step starts behaving differently.
Nobody sees it until it hits production.
The fix isn’t “better prompts.” It’s treating every LLM step as a node in a system with contracts and control loops.
This is the mental model I use: every LLM call gets a contract on its inputs and outputs, retries that repair failures instead of repeating them, and guardrails around the whole thing.
Once you lock those in, you go from “prompt luck” to something you can actually ship and maintain.
What’s an “LLM node”?
Think of an LLM node exactly like you’d think of a service or component: it has defined inputs, defined outputs, and behavior you can test.
That means you can validate what goes in, validate what comes out, and reason about failures the same way you would for any other dependency.
In other words, “LLM node” is less about the model and more about how you wrap it: contracts, validation, retries, and guardrails.
The patterns below are just ways of making that concrete.
Pattern 1: Function calling is the backbone
Function calling turns the model from “writer” into “router + argument builder.”
Instead of free-form “do everything” output, the model picks a function from a fixed list and builds typed arguments for it.
You get a clean split: the model decides intent; your code executes it.
A few rules keep this sane: keep the tool list small and explicit, validate arguments before executing anything, and log every call the model makes.
That’s what lets you debug “why did the agent decide to do this?” later.
Function calling isn’t magic. It’s just a clean way to turn LLM intent into typed actions your system can trust.
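For concreteness, here’s a minimal sketch of that split in Python. The tool names (send_email, create_ticket) and the shape of model_output are illustrative assumptions, not any particular vendor’s API:

```python
# Sketch of the "router + argument builder" split. The registry is the
# contract: the model may only pick from these names, and your code
# decides what each name actually does.

def send_email(to: str, subject: str) -> str:
    return f"queued email to {to}: {subject}"

def create_ticket(title: str, priority: str) -> str:
    return f"created {priority} ticket: {title}"

TOOLS = {"send_email": send_email, "create_ticket": create_ticket}

def dispatch(call: dict) -> str:
    """Take the model's structured tool call and run the real function."""
    name = call.get("name")
    if name not in TOOLS:
        raise ValueError(f"model requested unknown tool: {name!r}")
    return TOOLS[name](**call.get("arguments", {}))

# A function-calling model returns intent as data, not prose:
model_output = {"name": "create_ticket",
                "arguments": {"title": "Fix login bug", "priority": "high"}}
print(dispatch(model_output))  # created high ticket: Fix login bug
```

Because the model’s output is just data, every decision it makes is loggable and replayable, which is exactly what the debugging point above relies on.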
Pattern 2: JSON schemas make outputs contract-first
Natural language is ambiguous. Contracts aren’t.
If you let a model answer in free text and then try to parse it, you’ll fight edge cases forever.
Better pattern: make the model return JSON and validate it against a schema before anything downstream touches it.
Example idea (simplified):
{
  "type": "object",
  "additionalProperties": false,
  "required": ["audience", "offer", "channels", "kpis"],
  "properties": {
    "audience": { "type": "string", "minLength": 3 },
    "offer": { "type": "string", "minLength": 3 },
    "channels": {
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "string",
        "enum": ["email", "linkedin", "x", "blog", "ads"]
      }
    },
    "tone": {
      "type": "string",
      "enum": ["direct", "casual", "formal"]
    },
    "kpis": {
      "type": "array",
      "minItems": 1,
      "items": { "type": "string" }
    },
    "constraints": {
      "type": "array",
      "items": { "type": "string" }
    }
  }
}
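As a sketch, here’s a hand-rolled check for a few of that schema’s rules; in practice you’d likely reach for a full JSON Schema validator library, and this only covers the constraints the schema above declares:

```python
# Minimal validation for the brief schema: required fields,
# additionalProperties: false, minLength, minItems, and the channels enum.

ALLOWED_CHANNELS = {"email", "linkedin", "x", "blog", "ads"}
ALLOWED_KEYS = {"audience", "offer", "channels", "tone", "kpis", "constraints"}

def validate_brief(obj: dict) -> list[str]:
    """Return a list of violations; an empty list means the object passes."""
    errors = []
    for key in ("audience", "offer", "channels", "kpis"):
        if key not in obj:
            errors.append(f"missing required field: {key}")
    for key in obj:
        if key not in ALLOWED_KEYS:  # additionalProperties: false
            errors.append(f"unexpected field: {key}")
    for key in ("audience", "offer"):
        if isinstance(obj.get(key), str) and len(obj[key]) < 3:
            errors.append(f"{key} is too short")
    channels = obj.get("channels", [])
    if isinstance(channels, list):
        if not channels:
            errors.append("channels must have at least one item")
        errors += [f"unknown channel: {c}" for c in channels
                   if c not in ALLOWED_CHANNELS]
    kpis = obj.get("kpis", [])
    if isinstance(kpis, list) and not kpis:
        errors.append("kpis must have at least one item")
    return errors

good = {"audience": "CTOs", "offer": "free audit",
        "channels": ["email"], "kpis": ["replies"]}
bad = {"audience": "CTOs", "offer": "free audit",
       "channels": ["fax"], "kpis": ["replies"], "vibe": "cool"}
print(validate_brief(good))  # []
print(validate_brief(bad))   # ['unexpected field: vibe', 'unknown channel: fax']
```

The error list, rather than a pass/fail boolean, is what the repair pattern below feeds back to the model.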
That schema says: four fields are required, extra fields are rejected, and channels must come from a fixed list.
Now you can validate every output mechanically, reject bad ones before they spread, and trust the shape of the data downstream.
The difference in practice: parsing bugs become validation errors caught at the node, not surprises in production.
Pattern 3: Retries that repair, not just repeat
“Just retry it” helps when the failure is transient: a timeout, a rate limit, a flaky connection.
It does nothing when the model keeps producing the same invalid output.
You want retries that repair. That usually looks like a small loop: call the model, validate the result, and on failure feed the errors back instead of resending the same prompt.
The repair prompt is simple:
“Here is a JSON schema and an object that failed validation. Fix the object so it passes validation. Return only valid JSON, nothing else.”
Pair that with a hard cap on attempts and a clear failure path when the cap is hit.
Now your node isn’t just “retrying and hoping,” it’s actively using the schema to fix its own output.
This pattern is especially powerful when the output feeds other systems that can’t tolerate malformed data.
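A minimal sketch of such a repair loop, assuming a hypothetical call_llm(prompt) wrapper and a stand-in validation step (a real node would validate against the full schema):

```python
import json

def repair_loop(call_llm, schema: str, max_attempts: int = 3) -> dict:
    """Validate-then-repair loop. `call_llm(prompt)` stands in for your
    actual model call and returns raw text that should be JSON."""
    prompt = f"Produce JSON matching this schema:\n{schema}"
    last_error = None
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            obj = json.loads(raw)          # step 1: does it even parse?
            if "audience" not in obj:      # step 2: stand-in for schema validation
                raise ValueError("missing required field: audience")
            return obj                     # passed: hand off downstream
        except (json.JSONDecodeError, ValueError) as err:
            last_error = err
            # Repair prompt: feed the schema, the failed object, and the
            # error back instead of blindly resending the same prompt.
            prompt = (f"Here is a JSON schema and an object that failed "
                      f"validation.\nSchema:\n{schema}\nObject:\n{raw}\n"
                      f"Error: {err}\nFix the object so it passes validation. "
                      f"Return only valid JSON, nothing else.")
    raise RuntimeError(f"gave up after {max_attempts} attempts: {last_error}")

# Fake model: fails validation once, then returns a valid object.
responses = iter(['{"offer": "x"}', '{"audience": "CTOs"}'])
print(repair_loop(lambda p: next(responses), "<schema>"))  # {'audience': 'CTOs'}
```

Note the loop fails loudly after max_attempts rather than looping forever, which is where the budget guardrails below take over.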
Pattern 4: Guardrails
Guardrails are the rules around the model, not inside the prompt.
They answer questions like: what inputs will this node accept, what is it allowed to emit, and how much can it spend trying?
Some practical guardrails to consider:
1. Input filters
2. Output filters
3. Cost and token budgets
Per node and per request, enforce maximum input tokens, maximum output tokens, and a cap on attempts.
If the node hits a budget: fail fast, degrade to a cheaper path, or escalate to a human, but never keep spending silently.
4. Environment constraints
5. Human-in-the-loop hooks
Guardrails don’t make the model perfect. They make the system predictable enough to trust.
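To make this concrete, here’s an illustrative sketch of an input filter, an output scrub, and a budget check. The regex patterns, limits, and the 4-characters-per-token heuristic are all assumptions for the example, not a complete policy:

```python
import re

# Guardrails live around the model call, not inside the prompt.
BLOCKED_INPUT = [re.compile(r"ignore (all )?previous instructions", re.I)]
SECRET_SHAPES = [re.compile(r"sk-[A-Za-z0-9]{20,}")]  # e.g. leaked API keys
MAX_INPUT_TOKENS = 2000
MAX_ATTEMPTS = 3

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def check_input(text: str, attempt: int = 1) -> None:
    """Refuse before the model call, not after the bill arrives."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("retry budget exhausted; escalate or fail fast")
    if approx_tokens(text) > MAX_INPUT_TOKENS:
        raise ValueError("input over token budget; truncate upstream")
    if any(p.search(text) for p in BLOCKED_INPUT):
        raise ValueError("input filter tripped")

def scrub_output(text: str) -> str:
    """Redact anything shaped like a secret before it leaves the node."""
    for p in SECRET_SHAPES:
        text = p.sub("[REDACTED]", text)
    return text

check_input("Summarize this ticket for the on-call engineer")  # passes
print(scrub_output("key is sk-abcdefghijklmnopqrstuv"))  # key is [REDACTED]
```

The key design choice: guardrails raise or redact deterministically in code, so they hold even when the model misbehaves.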
Putting It Together: How An “LLM Node” Actually Looks
Take a marketing brief generator node as an example.
Input contract
{
  "type": "object",
  "required": ["product", "audience", "goal"],
  "properties": {
    "product": { "type": "string" },
    "audience": { "type": "string" },
    "goal": { "type": "string" },
    "constraints": {
      "type": "array",
      "items": { "type": "string" }
    }
  }
}
LLM node behavior
Guardrails
Logging
For each run, record: the inputs, the raw model output, validation results and any repairs, retry counts, and token spend.
At that point, you don’t have “a prompt.” You have a component that honors its contracts, fails loudly when it can’t, and can be tested like any other code.
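Wired together, a node like this might look like the following sketch, where call_llm and the inline checks stand in for your real model call, schema validators, and guardrails:

```python
import json
import time

def run_brief_node(call_llm, inputs: dict) -> dict:
    """One LLM node: input contract -> model call -> output contract -> log."""
    if not all(k in inputs for k in ("product", "audience", "goal")):
        raise ValueError("input violates contract")       # input contract
    started = time.time()
    raw = call_llm(json.dumps(inputs))                    # the model call
    brief = json.loads(raw)                               # output contract (simplified)
    log = {"inputs": inputs, "output": brief,
           "latency_s": round(time.time() - started, 3)}  # per-run record
    print(json.dumps(log))
    return brief

# Fake model for illustration:
fake = lambda prompt: '{"audience": "CTOs", "offer": "free audit"}'
run_brief_node(fake, {"product": "X", "audience": "CTOs", "goal": "leads"})
```

Everything the node does is visible from the outside: it rejects bad inputs before spending tokens, and every run leaves a record you can replay.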
Why These Patterns Hold Up Over Time
Models will change.
Vendors will change.
Your stack will change.
If you rely on “prompt magic,” you’ll constantly chase regressions.
If you rely on contracts, schemas, repair loops, and guardrails…
…you can swap in new models, tools, and backends as long as one thing stays true:
Each LLM node keeps honoring its contract.
That’s what makes the behavior durable.
You’re not betting on one model. You’re betting on the discipline of treating LLM steps as real nodes in a real system.
And once you do that, AI stops feeling like a science experiment and starts feeling like the rest of your engineering work: defined, observable, and fixable when it breaks.
Dino Cajic