Guardrails intercept violations after the model decides. Governed inference means the model never has the authority to violate in the first place.
The pizza problem: an AI agent had a pizza ordering flow with good prompts, correct tools, and the right menu. The model decided, unprompted, that several items were sold out. Nothing was sold out. No signal told it that. The model inferred it. It did not crash or say anything offensive. It quietly, confidently lied about product availability.
The Air Canada problem: a chatbot hallucinated a bereavement fare policy. A customer relied on it. A tribunal forced the airline to honor a fare that never existed. Cost: the refund plus regulatory attention plus brand damage.
The PCI problem: a voice agent repeats a credit card number back to the caller "for confirmation." At scale, that is a PCI violation on every concurrent call: card numbers spoken aloud on recorded lines.
These failures share a root cause: the model had access to information and capabilities it should not have had. Guardrails attempt to detect and block damage after the model has already made the decision.
```python
from signalwire_agents import AgentBase
from signalwire_agents.core.function_result import SwaigFunctionResult

class SupportAgent(AgentBase):
    def __init__(self):
        super().__init__(name="Support Agent", route="/support")
        self.prompt_add_section(
            "Instructions",
            body="You are a customer support agent. "
                 "Greet the caller and resolve their issue.")
        self.add_language("English", "en-US", "rime.spore:mistv2")

    @AgentBase.tool(name="check_order")
    def check_order(self, order_id: str):
        """Check the status of a customer order.

        Args:
            order_id: The order ID to look up
        """
        return SwaigFunctionResult(f"Order {order_id}: shipped, ETA April 2nd")

agent = SupportAgent()
agent.run()
```
Describe role and behavior at each step. "You are a scheduling assistant. Do not discuss pricing." Useful for guidance, but unreliable as the only enforcement. This is all that "prompt and pray" provides.
At each step, the model sees only the tools registered for that step. The model in "collect_info" cannot book appointments, not because it was told not to, but because the booking tool does not exist in its reality.
Each step defines which steps it can transition to. The platform validates every transition against the allowed set. The conversation follows a deterministic path the model cannot override.
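The two mechanisms above can be sketched in plain Python. This is an illustrative model of the idea, not the SignalWire SDK API: the step names, tool names, and helper functions are invented for the example.

```python
# Each step declares the only tools the model can see and the only
# steps it may move to. Everything else does not exist in its reality.
STEPS = {
    "collect_info": {
        "tools": {"lookup_customer"},
        "transitions": {"book_appointment"},
    },
    "book_appointment": {
        "tools": {"check_slots", "book_slot"},
        "transitions": {"confirm"},
    },
    "confirm": {"tools": set(), "transitions": set()},  # terminal step
}

def visible_tools(step: str) -> set:
    """Only these tools are registered with the model at this step."""
    return STEPS[step]["tools"]

def transition(current: str, requested: str) -> str:
    """The platform, not the model, validates every transition."""
    if requested not in STEPS[current]["transitions"]:
        raise PermissionError(f"{current!r} cannot transition to {requested!r}")
    return requested
```

In this sketch, a jailbroken model in "collect_info" still cannot call `book_slot` or jump to "confirm": the tool is not registered and the transition is rejected in code, not in the prompt.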
When the model calls a tool, it is making a request. Your code handles the request, accesses authoritative data, applies business logic, and returns a structured result. The model does not execute actions. Your backend does.
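Applied to the pizza problem, a handler of this shape answers availability from the source of truth, so the model has nothing to infer. A hedged sketch; `MENU_DB` and `check_item` are hypothetical names, not part of any SDK.

```python
# Authoritative inventory lives in your backend, not in the prompt.
MENU_DB = {"margherita": {"in_stock": True, "price": 12.50}}

def check_item(item: str) -> dict:
    """Structured result built from authoritative data, not inference.

    The model relays this result; it never decides what is sold out.
    """
    record = MENU_DB.get(item.lower())
    if record is None:
        return {"available": False, "reason": "not on the menu"}
    return {"available": record["in_stock"], "price": record["price"]}
```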
SignalWire provides a hidden data layer that passes information to your tool handlers without it appearing in the AI model's context window. Your handlers receive this data on every call. The model never sees it. It cannot leak what it does not have.
Consider a payment flow. Without a hidden data layer, you either pass card details through the AI context (PCI violation, the model could repeat them) or build a separate out-of-band channel (engineering complexity). With the hidden data layer, the payment session ID travels to your tool handler without entering the model's context. Your handler charges the card. The model says "Your payment has been processed" without ever knowing the card number, amount, or payment method.
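A conceptual sketch of that split, with invented names (`charge`, `payment_session_id`): the hidden data travels only to the handler, while the model's context holds only what it is allowed to say.

```python
def charge(hidden: dict) -> str:
    """Runs in your PCI-compliant backend using the hidden session ID."""
    session_id = hidden["payment_session_id"]  # never enters the prompt
    # ... call your payment processor with session_id here ...
    return "Your payment has been processed."  # all the model gets back

# Two separate channels: what the model sees vs. what the handler gets.
model_context = {"step": "payment", "customer_name": "Dana"}
hidden_data = {"payment_session_id": "ps_abc123"}

reply = charge(hidden_data)
# The model cannot repeat the session ID because it was never given it.
assert "ps_abc123" not in str(model_context)
```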
| Concern | Prompt and Pray | Governed Inference |
|---|---|---|
| Tool access | Model sees all tools | Model sees only current step's tools |
| Data exposure | Everything in the prompt | Only what the current step requires |
| State transitions | Model decides flow | Platform enforces allowed transitions |
| Business logic | Prompt instructions | Deterministic code |
| Failure mode | Quiet, confident fabrication | Structural impossibility |
| Jailbreak blast radius | Full system access | One step's tools only |
| Audit trail | Hope the logs capture it | Every step, tool, and transition logged |
| Scaling risk | More calls = more hallucination surface | More calls = same governance |
With the hidden data layer, card data never enters the AI context. Tool handlers process payments in your PCI-compliant backend. The model cannot repeat what it does not have.
Patient identifiers scoped to specific steps via the hidden data layer. Step-level isolation prevents information leakage between conversation phases.
Consent tracking via deterministic tool handlers. Call disposition logging native to the platform. Opt-out handling enforced in code, not prompts.
One vendor, one data flow, one audit surface. Structured event stream for every interaction. Deterministic state machine provides reproducible behavior for auditors.
Single step, a few tools. Adequate for low-risk informational interactions where hallucination risk is limited.
Multi-step, scoped tools, enforced transitions. Appropriate for workflows that modify state in external systems.
Strict transitions, hidden data layer for card details, no-exit terminal steps, full audit trail. Required for operations involving sensitive data.
Guardrails intercept violations after the model has already made a decision. Governed inference means the model never encounters the information or tools it should not have. There is nothing to intercept because the violation is structurally impossible.
No. The step constraints are enforced by the platform, not by the prompt. Even if the model is manipulated into ignoring its instructions, it can only access the tools registered for the current step. It cannot discover or call tools from other steps.
Yes. The hidden data layer is a platform feature, not a model feature. It works the same way regardless of which LLM provider you use.
Each step definition declares its tools and transitions. You can diff two versions of an agent to see exactly what changed. Every tool call, every transition, and every state change is logged with a structured event stream.
Build voice AI where the model handles language and your code handles truth.