Programmable AI Agents: State Machines, Not Prompt Engineering | SignalWire
Voice AI Architecture

Stop Prompting. Start Programming.

Prompting is negotiation. State machines are software. Build voice AI agents with scoped prompts, scoped tools, and enforced transitions.

  • < 1.2s typical AI response latency
  • 2,000+ companies in production
  • $0.16 per minute, AI processing
  • 2.7B minutes processed
The Problem

Monolithic Prompts Break in Production

One Giant Prompt, Infinite Failure Modes

A 2,000-word system prompt forces the model to decide which instructions apply right now. It guesses wrong. It skips steps. It uses tools from the wrong phase.

Expensive Models Mask Bad Architecture

The task looks complex because the entire agent lives in one prompt. You pay for GPT-4 when a cheaper model would handle each step on its own.

No Way to Test, Diff, or Audit

You cannot unit test a blob of prose. You cannot diff two versions meaningfully. You cannot tell an auditor which instruction applied to a given call.

Tools Available When They Should Not Be

The greeting step can see the payment tools. The model decides whether to respect boundaries. In production, it sometimes does not.

Build a Voice AI Agent

from signalwire_agents import AgentBase
from signalwire_agents.core.function_result import SwaigFunctionResult

class SupportAgent(AgentBase):
    def __init__(self):
        super().__init__(name="Support Agent", route="/support")
        self.prompt_add_section("Instructions",
            body="You are a customer support agent. "
                 "Greet the caller and resolve their issue.")
        self.add_language("English", "en-US", "rime.spore:mistv2")

    @AgentBase.tool(name="check_order")
    def check_order(self, order_id: str):
        """Check the status of a customer order.

        Args:
            order_id: The order ID to look up
        """
        return SwaigFunctionResult(f"Order {order_id}: shipped, ETA April 2nd")

agent = SupportAgent()
agent.run()

Prompt and Pray vs. Programmable Agents

Prompt and Pray

  • One 2,000-word system prompt with every instruction crammed in
  • Model follows rules when it feels like it, ignores them when it does not
  • Cannot test, version, diff, or debug the behavior
  • Expensive models required because the task appears complex
  • No audit trail for which instruction applied

Programmable State Machines

  • Each conversation phase is a discrete step with its own prompt
  • Platform enforces transitions; the model cannot skip steps
  • Version-controlled config that diffs, tests, and deploys through CI/CD
  • Match model capability to step complexity (cheap models for greetings)
  • Structured audit log for every step, tool call, and transition
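The contrast above can be sketched in a few lines. This is a framework-agnostic illustration, not the SignalWire SDK: the `Step` dataclass and the `SUPPORT_FLOW` names are hypothetical, but the shape is the point — each phase is data carrying its own prompt, its own tools, and its allowed exits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    """One conversation phase: a scoped prompt, scoped tools, allowed exits."""
    prompt: str
    tools: tuple = ()        # tool names visible ONLY in this step
    transitions: tuple = ()  # step names the platform will accept next

# The agent is data, not prose: it diffs, tests, and versions like any config.
SUPPORT_FLOW = {
    "greeting": Step(
        prompt="Greet the caller and ask how you can help.",
        transitions=("authenticate",),
    ),
    "authenticate": Step(
        prompt="Verify the caller's identity before anything else.",
        tools=("verify_identity",),
        transitions=("lookup_order", "farewell"),
    ),
    "lookup_order": Step(
        prompt="Look up the caller's order and summarize its status.",
        tools=("check_order",),
        transitions=("farewell",),
    ),
    "farewell": Step(prompt="Thank the caller and end the call."),
}
```

Because the flow is a plain data structure, `git diff` between two versions shows exactly which prompt, tool binding, or transition changed.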

What Changes When You Program Instead of Prompt

Concern | Monolithic Prompt | Programmable Agent
Tool access | All tools visible to all phases | Scoped per step; greeting cannot see payment tools
Prompt length | 2,000+ words, conflicting instructions | Short, focused prompt per step
Model cost | Expensive model for every turn | Cheap model for greetings, capable model for reasoning
Testing | Not possible (probabilistic blob) | Step-level unit tests with defined inputs and outputs
Audit trail | Guessing which instruction applied | Structured log: step, tool call, transition, parameters
Rollback | Find the old prompt somewhere | git revert

Anatomy of a Programmable Agent

Scoped Prompts

Each step has its own prompt describing one task. The model sees only what it needs for the current phase. No conflicting instructions, no ambiguity.

Scoped Tools

Tools are bound per step. The model in the greeting step cannot access payment tools. Those tools do not exist in its context.
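One way to picture the binding — an illustrative sketch, not the SDK's internals: the tool registry is keyed by step, so the function-calling context built for a given turn simply never contains out-of-scope tools. The registry contents and step names here are hypothetical.

```python
# Hypothetical global registry of tool implementations.
TOOLS = {
    "check_order": lambda order_id: f"Order {order_id}: shipped",
    "charge_card": lambda amount: f"Charged ${amount:.2f}",
}

# Per-step bindings: the greeting context is assembled with no tools at all,
# so the model cannot call what it cannot see.
STEP_TOOLS = {
    "greeting": [],
    "lookup_order": ["check_order"],
    "collect_payment": ["charge_card"],
}

def tools_for(step: str) -> dict:
    """Return only the callables bound to this step."""
    return {name: TOOLS[name] for name in STEP_TOOLS.get(step, [])}
```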

Enforced Transitions

The platform validates and executes transitions between steps. The model chooses which allowed path to take; the platform enforces the boundary.
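The enforcement itself is a few lines of deterministic code, not a request the model can decline. A sketch using the healthcare flow's step names (the dict shape is illustrative):

```python
# Allowed exits per step, taken from the healthcare scheduling flow.
ALLOWED = {
    "greeting": {"identify_patient"},
    "identify_patient": {"check_availability", "farewell"},
    "check_availability": {"confirm_booking", "farewell"},
    "confirm_booking": {"farewell"},
    "farewell": set(),
}

def transition(current: str, requested: str) -> str:
    """Accept the move only if the flow definition allows it."""
    if requested not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current!r} -> {requested!r}")
    return requested
```

The model picks among the allowed exits; anything else never executes.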

Hidden Data Layer

Sensitive data is available to tool functions but invisible to the LLM context window. The model cannot leak what it cannot see.
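A sketch of the pattern with hypothetical names: the tool function reads the full record server-side, but only a sanitized summary string ever enters the model's context.

```python
# Server-side store the LLM never sees; only tool return values enter context.
ACCOUNTS = {
    "cust-42": {"card_number": "4111111111111111", "balance": 83.20},
}

def get_balance(customer_id: str) -> str:
    """Tool function: returns a safe summary, never the raw record."""
    record = ACCOUNTS[customer_id]
    last4 = record["card_number"][-4:]
    return f"Balance ${record['balance']:.2f} on card ending {last4}"
```

The raw card number stays in your code's scope; the model receives only the string the function chooses to return.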

Real-World Agent Patterns

1. Healthcare Scheduling

greeting > identify_patient > check_availability > confirm_booking > farewell

2. Customer Support

greeting > authenticate > lookup_order > resolve_issue > confirm_action > farewell (with branch to transfer_specialist)

3. Collections

greeting > verify_identity > present_balance > collect_payment > confirm > farewell (with branch to payment_failed > retry_or_transfer)

💡 State machines, input validation, data management, and structured testing: these are the skills that make AI applications work in production. The industry told you prompting is programming. It is not.

FAQ

Can I use any LLM provider?

Yes. The platform integrates with multiple LLM providers. You can assign different models to different steps based on complexity and cost.

How do scoped prompts reduce cost?

Each step sends a short, focused prompt instead of the entire agent definition. Fewer tokens per turn means lower inference cost. You can also use cheaper models for routine steps like greetings.
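Back-of-the-envelope arithmetic makes the point. All figures here are assumptions for illustration, not SignalWire pricing: roughly 1.3 tokens per word, a 20-turn call, and $0.01 per 1k input tokens, comparing a ~2,600-token monolithic prompt against a ~200-token step prompt.

```python
def turn_cost(prompt_tokens: int, turns: int, price_per_1k: float) -> float:
    """Input-token cost of resending the same prompt on every turn."""
    return prompt_tokens * turns * price_per_1k / 1000

# Assumed figures: 20 turns at $0.01 per 1k input tokens.
monolithic = turn_cost(2600, 20, 0.01)  # 2,000-word prompt, every turn
scoped = turn_cost(200, 20, 0.01)       # short per-step prompt
```

Under these assumptions the scoped agent spends about an order of magnitude less on prompt tokens per call, before any savings from assigning cheaper models to routine steps.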

What happens when the model tries to skip a step?

It cannot. The platform enforces transitions. The model can only move to steps defined as valid transitions from the current step.

How do I test a programmable agent?

Each step is an isolated unit with defined inputs, tools, and transitions. Write unit tests for individual steps and integration tests for the full conversation flow. Deploy through your existing CI/CD pipeline.
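What a step-level test can look like, assuming a hypothetical step handler that maps this turn's input to a requested transition (plain asserts here rather than any specific test framework):

```python
def authenticate_step(utterance: str, pin_on_file: str) -> str:
    """Hypothetical step handler: decide the next step from this turn alone."""
    digits = "".join(ch for ch in utterance if ch.isdigit())
    return "lookup_order" if digits == pin_on_file else "retry_auth"

# Step-level tests: defined inputs, defined outputs, no LLM in the loop.
assert authenticate_step("my PIN is 4 3 2 1", "4321") == "lookup_order"
assert authenticate_step("uh, 9 9 9 9", "4321") == "retry_auth"
```

Because the step is deterministic code with a defined contract, these tests run in CI exactly like any other unit tests.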

What is the hidden data layer?

Tool functions can access sensitive data (account numbers, balances, PII) without exposing it to the LLM context window. The model requests data through tools; your code decides what to return.

Trusted by 2,000+ companies

Build Voice AI as Software, Not Prompts

Define steps. Scope tools. Enforce transitions. Ship agents you can test and audit.