The Model Should Never Have Had Authority

Understanding Programmatically Governed Inference (PGI)

Anthony Minessale, CEO

Programmatically Governed Inference (PGI), also called System-Directed AI, is an architectural approach where software — not the AI model — holds all authority over state, decisions, and side effects. Instead of prompting models to behave correctly, the system controls what tools and context the model can see at each step, making it structurally impossible for the model to take actions outside its current scope. The model handles language and conversation; deterministic code handles correctness.

Language models are brilliant at conversation and unreliable as decision-makers. They sound confident while being wrong. They improvise in ways that are charming until they drift off course. They follow instructions most of the time, but not all of the time. In real systems — payments, healthcare, customer support, telecom — "most of the time" is not good enough.

The industry's response has been to prompt harder. Write a more detailed system message. Add more instructions. Tell the model what not to do. Hope it listens. This methodology has a name: prompt and pray. It treats the model as the brain of the system and everything else as a safety net underneath it. It works in the demo. It falls apart in production, because the model retains authority over decisions it should never have been making.

Here is a concrete example: A team built a pizza ordering agent. They gave the model the menu, the tools, and a solid prompt. It worked well — until the model decided, unprompted, that several menu items were sold out. Nothing was sold out. No inventory signal told it that. The model inferred it. It did not crash. It did not say anything offensive. It just confidently lied about availability, and unless someone was watching closely, it would have gone unnoticed.

That is the insidious failure mode of prompt and pray. Not catastrophic breakdown, but quiet, confident fabrication that erodes trust without announcing itself. Taco Bell and McDonald's experienced the same failure publicly when they deployed voice AI in production.

The fix is not making the model more obedient. The fix is changing what the model is responsible for.

The core idea: Programmatically Governed Inference (PGI)

The approach is called System-Directed AI (the formal name is Programmatically Governed Inference, or PGI). It starts from a simple premise: the model should never have authority in the first place. All authority over state, decisions, and side effects lives in ordinary software — the same kind of deterministic code that has powered reliable systems for decades. The model has a focused role: listen to people, interpret messy human language, call the right tools at the right time, and communicate results in natural conversation. It makes the system pleasant to interact with. It does not determine outcomes.

The core design rule is: do not tell the AI anything it does not need to know.

Do not show it tools that do not belong to the current step. Do not give it data it should not reason about. Do not explain the full system to it. Reduce the model's world until its only job is understanding what the human said and choosing which of its currently available tools to call. Everything else is handled by your code.
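This curation can be sketched as ordinary code. A minimal, hypothetical example, assuming a step object holds a prompt and a tool list (the function name and field names are illustrative, not a real API):

```python
# Hypothetical sketch of context curation: code, not the model, assembles
# the request. The model receives only the current step's prompt and tools.
# build_model_request and the step fields are illustrative names.

def build_model_request(step, conversation):
    """Assemble everything the model will see for this turn."""
    return {
        "system_prompt": step["prompt"],    # this step's instructions only
        "tools": step["tools"],             # this step's tools, nothing else
        "messages": conversation[-10:],     # a bounded window, not the full system
    }

# The model's entire reality during a betting step:
betting_step = {"prompt": "Take the customer's bet.", "tools": ["place_bet"]}
request = build_model_request(betting_step, [{"role": "user", "content": "Hi"}])
```

Nothing outside the returned dictionary exists as far as the model is concerned; widening or narrowing its world is a code change, not a prompt change.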

The insight that changes everything

Here is what makes System-Directed AI fundamentally different from guardrails, output filtering, or any other containment strategy: the model does not know it is being governed.

It does not know that other tools exist elsewhere in the system. It does not know that a state machine is managing the interaction. It does not know that its context is being curated by the platform underneath it. It sees a prompt, a set of tools, and a conversation. That is its entire reality. There is nothing to reason around, nothing to game, nothing to circumvent, because the model has no awareness that anything exists beyond what it can see.

This is not a subtle distinction. It is the whole game. Guardrails assume the model has authority and try to intercept the damage after the fact. They are reactive: catch bad outputs, filter harmful content, flag anomalous behavior. System-Directed AI is structural: the model never has authority to begin with. There is nothing to intercept because there is nothing to catch.

Think about it the way autonomous vehicles work. Nobody ships a self-driving car and says "we told it not to hit things." You build lane markers, speed governors, and collision avoidance systems around imperfect AI. The AI handles the dynamic parts. The infrastructure handles the constraints. System-Directed AI applies the same logic to AI applications. You do not wait for a model that never hallucinates. You build systems where hallucinations cannot reach the things that matter.

How it works in practice

Consider a blackjack game built as a voice AI agent.

During the betting step, the model can only call one tool: place a bet. It cannot deal cards, because that tool does not exist in its world yet. When the bet is placed, the system changes the step. Now the model can hit, stand, or double down. The betting tool has disappeared. The model's capabilities changed not because it was told to behave differently, but because the available operations were mechanically replaced.

There is also a step called "you_lost." It has zero tools and zero transitions. The game is over. A user can beg, negotiate, or try to sweet-talk the AI into dealing another hand. None of it works, because the mechanism for continuing does not exist. There is nothing for the model to comply with or resist. The interaction is structurally complete.

The model does not know it is in a terminal state. It does not know transitions exist. It just knows it is having a conversation where the user lost and the game is over.
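The steps above can be sketched as a small state machine. This is an illustrative reconstruction of the blackjack example, not a real implementation; the step names and event names are assumptions:

```python
# Illustrative state machine for the blackjack example. Capability changes
# are mechanical: a step change swaps the tool list. Transitions are fired
# by tool handlers in code, never by the model.

STEPS = {
    "betting":  {"tools": ["place_bet"], "transitions": {"bet_placed": "playing"}},
    "playing":  {"tools": ["hit", "stand", "double_down"],
                 "transitions": {"bust": "you_lost"}},
    "you_lost": {"tools": [], "transitions": {}},  # terminal: nothing to call
}

class Game:
    def __init__(self):
        self.step = "betting"

    def visible_tools(self):
        # The model's entire world: the tools of the current step.
        return STEPS[self.step]["tools"]

    def fire(self, event):
        # Unknown events are ignored; in "you_lost" no event goes anywhere.
        nxt = STEPS[self.step]["transitions"].get(event)
        if nxt is not None:
            self.step = nxt
        return self.step
```

In "you_lost", `visible_tools()` returns an empty list and `fire()` has no edges to follow, so begging for another hand has no mechanism to act on.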

This is what governance looks like when it is architectural instead of conversational. You do not tell the model "do not let them play again." You remove the ability to play again. There is nothing to negotiate with.

A surprising number of real business problems have a "you_lost" step. Payment declined. Account suspended. Compliance hold. Policy denied. The model should communicate clearly and empathetically. It should not be able to override the decision.

The model handles language. Your code handles truth.

Consider a billing agent. A customer calls to dispute a charge. During the identity verification step, the model knows the customer's name and has one tool: verify the account. It does not know the account balance, credit history, or dispute tools exist. Once identity is confirmed, the system silently transitions to the charge review step — the model's reality expands to include recent transactions and a dispute tool. Meanwhile, a structured data layer the model never sees carries the full account record, risk flags, and authentication tokens between tool handlers. The model cannot promise a credit it does not know exists. It cannot negotiate a threshold it has never seen.

When a user provides payment information, the system hands the interaction to a secure subsystem. The model does not hear the card number, does not process it, does not retain it. The subsystem returns only "payment approved" or "payment declined." The model reports it as fact.
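That hand-off can be sketched in a few lines. Here `process_card` is a stand-in for a real PCI-compliant processor; the only value that ever enters the model's context is the terse result string:

```python
# Sketch of payment isolation: a secure subsystem handles the card number,
# and only a result string reaches the model. process_card is a placeholder
# for a real payment processor call, not an actual API.

def process_card(card_number: str, amount: float) -> bool:
    # Placeholder check; a real subsystem would tokenize and charge the card.
    return card_number.isdigit() and len(card_number) == 16 and amount > 0

def payment_step(card_number: str, amount: float) -> str:
    """The only output the model ever sees: a fact it can report."""
    return "payment approved" if process_card(card_number, amount) else "payment declined"
```

The card number is an argument to code the model never observes; the model can only relay the outcome.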

System-Directed AI combines what neither pure software nor pure AI can produce alone. The AI provides flexibility at the edges: absorbing ambiguity, keeping the conversation natural. The software provides certainty at the core: enforcing rules, maintaining state in a structured data layer invisible to the model, ensuring every consequential action is correct. The result is applications that feel like talking to an intelligent agent but behave like well-tested software.

What this means for developers

The current AI narrative tells developers that prompting is the new programming. It is not. Prompting is negotiation. Programming is the answer.

System-Directed AI restores programming to the equation. The developer's traditional skills — defining state machines, writing business logic, managing data, enforcing rules in code — are exactly what makes AI applications safe and reliable. Tool handlers are ordinary functions. State transitions are declared in configuration. Business rules are enforced in code, not in natural language instructions the model might ignore.

A developer building an intake flow writes a gather sequence: each step presents the model with one question and one collection tool. The model does not know how many questions remain or what came before. Each step is authored fresh. When the answer arrives, the tool handler validates it, stores it in the structured data layer, and silently advances to the next step. The model never saw the full form and never had the chance to skip ahead. Ordinary backend validation still applies — if quantity > 20: quantity = 20 — but that is a single line of defensive code, not the architecture. The architecture is authoring what the model believes is real at each moment.
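A minimal sketch of such a gather sequence, assuming two illustrative fields (the quantity cap comes from the defensive-code example above; everything else is hypothetical):

```python
# Minimal sketch of a gather sequence: one field per step, validated and
# stored in a data layer the model never sees. Field names are illustrative;
# the quantity cap mirrors the defensive rule in the text.

QUESTIONS = [
    ("name", lambda v: bool(v.strip())),          # step 1: collect a name
    ("quantity", lambda v: v.strip().isdigit()),  # step 2: collect a quantity
]

def run_gather(answers):
    data = {}  # structured data layer, invisible to the model
    for (field, is_valid), raw in zip(QUESTIONS, answers):
        if not is_valid(raw):
            raise ValueError(f"re-ask {field}: {raw!r} failed validation")
        value = int(raw) if field == "quantity" else raw.strip()
        if field == "quantity" and value > 20:
            value = 20  # one line of defensive code, not the architecture
        data[field] = value
    return data
```

Each iteration corresponds to one step: the model is shown one question and one collection tool, and the handler, not the model, validates, stores, and advances.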

This discipline is not all-or-nothing. A casual FAQ agent might use a single step with a handful of tools. A payment flow might use strict transitions, data isolation, and no-exit terminal steps. The discipline scales with the risk. You can start simple and progressively adopt stricter patterns as your application demands it.

This matters because System-Directed AI is not a proprietary technique reserved for internal use. It is a set of design principles that any developer can employ. The goal is to give every development team the tools to build AI applications that work in production, using skills they already have.

Why PGI matters now

Every company building customer-facing AI is navigating the same problem: the model is impressive but unreliable, and the gap between "works in the demo" and "works in production" is where projects go to die. The industry average is sobering — roughly half of AI projects never reach production, and the ones that fail cost real money.

Prompt and pray is a treadmill. Every model improvement creates new failure modes that require new prompts that require newer models. You are not building on a foundation. You are chasing a moving target.

System-Directed AI is the alternative. When correctness lives in software instead of prompts, the model can improve without breaking the application. You can swap models, upgrade models, or run different models for different steps — your application stays correct because correctness was never the model's job. The model gets better at being natural. The software was already correct. Both improve independently, and each improvement lifts the application without destabilizing the other.

The companies that figure this out first will not just have better AI. They will have AI that compounds.

The model makes it natural. The software makes it correct. In a well-governed system, those are independent properties. That independence is the foundation everything else is built on.

SignalWire built the communications control plane that makes System-Directed AI mechanically enforceable. The principles described here are how every AI agent on the platform operates. To learn more about the architecture, visit signalwire.com, read more about our AI, or explore the dev docs.

Frequently asked questions

What is Programmatically Governed Inference (PGI)?

PGI is a design approach where all authority over decisions, state, and side effects lives in deterministic software rather than the AI model. The model's only job is to interpret language and call the tools available to it at that moment — everything else is governed by code.

How is System-Directed AI different from guardrails?

Guardrails assume the model has authority and try to intercept bad outputs after the fact. System-Directed AI is structural — the model never has authority to begin with, so there is nothing to intercept. The model cannot circumvent constraints it doesn't know exist.

What does governance look like in practice with PGI?

At each step in a workflow, the model only sees the tools relevant to that step. When the step changes, the available tools change mechanically. Terminal states have zero tools and zero transitions — the model cannot continue an interaction because the mechanism to do so simply doesn't exist.

Who can use System-Directed AI?

Any developer can apply these principles. Tool handlers are ordinary functions, state transitions are declared in configuration, and business rules are enforced in code. The discipline scales with risk — simple flows need minimal structure, high-stakes flows like payments or compliance can use strict transitions and data isolation.
