AI Agent API
Build on SignalWire's ultra-low-latency control plane with direct-media LLM pipelines, deterministic call control, and carrier-grade failover, without stitching together fragile ASR → NLU → LLM → TTS chains.
Voice AI on stateless infrastructure has no reliable picture of where the call is. When you build on the SignalWire platform, the AI model runs inside the interaction, where the control plane already owns state, context, and lifecycle. Your code controls what the model sees, which tools are available at each step, and what happens when a function returns.
Meet Sigmond. A live AI agent built on SignalWire. Ask him about our platform, AI architecture, or FreeSWITCH. He can search docs, answer questions, and even collect your info for a follow-up. This is real-time voice AI in action.

Build interactive voice response systems that use speech recognition and natural language understanding instead of rigid DTMF menus. Route calls dynamically based on intent, user profile, or external API lookups, all with sub-500ms latency.
Deploy conversational AI agents that can answer questions, execute workflows, and escalate to humans when needed. Combine real-time transcription, LLM-based reasoning, and API integrations for production-grade support automation.
Defines the “taking order” stage of an AI ordering workflow, including conversation rules, tool-calling logic, and state management to ensure the agent correctly adds, removes, or modifies items in a customer’s order.
Defines an AI taxi dispatcher agent, configuring its model settings, personality, and strict conversational rules so it can handle phone-based ride bookings using backend tools instead of guessing information.
Turn Claude into an expert SignalWire pair programmer. Generate working SWML, structure AI agent prompts, implement SWAIG functions, and follow production best practices without digging through docs.
Use the OSS Agents SDK to run real-time Python-based voice agents. Trigger API calls mid-session, route by intent, and respond with live data. Run everything from a single Python service.
Architect AI agents directly within the media and signaling layers. No external detours. Native integration ensures sub-second round trips.
Treat calls, messages, and sessions as stateful interactions. SignalWire manages lifecycle, routing, and context so your application can control communications directly without rebuilding telecom infrastructure.
The SignalWire difference
Everything you need is native to the platform, not assembled from external services. Telecom and AI run as one integrated system.
SIP, PSTN, WebRTC, and SMS on a single control plane; no third-party telephony bolted on. Built by the team behind FreeSWITCH. The same architecture at 10 calls and at 10,000.
Host multiple specialized agents on a single server. Transfer callers between humans and AI agents with full context intact.
Real-time bidirectional translation across thousands of languages. One line of SWML. No external translation service.
Ground your agent in your docs instead of letting it freestyle. Upload PDFs/web content into SignalWire Datasphere and retrieve the right chunks at runtime, with filtering and chunking controls so answers stay traceable and consistent.
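The retrieval flow above can be sketched in plain Python. This is a generic illustration of chunking and keyword-overlap retrieval, not the Datasphere API; the function names `chunk` and `retrieve` and the scoring rule are assumptions for illustration only.

```python
# Generic retrieval sketch (NOT the Datasphere API): split documents into
# fixed-size chunks, then score each chunk by term overlap with the query
# so the agent's answer is grounded in a traceable piece of your content.
def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def _tokens(text: str) -> set[str]:
    # Lowercase and strip basic punctuation so "functions." matches "functions".
    return set(text.lower().replace(".", " ").replace(",", " ").split())

def retrieve(chunks: list[str], query: str, top_k: int = 1) -> list[str]:
    """Return the top_k chunks sharing the most terms with the query."""
    terms = _tokens(query)
    scored = sorted(chunks, key=lambda c: -len(terms & _tokens(c)))
    return scored[:top_k]

doc = ("SignalWire SWML defines call flows. SWAIG lets the model call "
       "functions. Datasphere stores document chunks for retrieval.")
best = retrieve(chunk(doc, size=8), "how does the model call functions")
```

A production system would use embedding similarity rather than term overlap, but the contract is the same: the agent answers from retrieved chunks, not from whatever the model improvises.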
Workflows are declarative state machines, not prompt instructions. Each step specifies the prompt fragment for that step, the function schemas available, and the conditions required to transition. The model can't improvise or fall off task.
Platform-level logs of what the ASR heard, what the model decided, what the function returned, and how long each step took. Debug from real data instead of guessing based on transcripts and call duration.
SignalWire's AI Agent API integrates speech recognition, LLM orchestration, and text-to-speech natively into its media and signaling layers, eliminating fragile chains of external ASR → NLU → LLM → TTS services. Use a Python SDK or SignalWire Markup Language (SWML) to define agent logic, connect to APIs mid-call, and route to humans when needed.
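To make the SWML side concrete, here is a minimal sketch of an SWML document built as a Python dict (SWML accepts JSON as well as YAML). The field shape follows published SWML examples, but verify it against the current SWML reference before deploying; the prompt text is invented for illustration.

```python
# Sketch of a minimal SWML document with an `ai` verb, expressed as a
# Python dict and serialized to JSON. Shape assumed from SignalWire's
# published examples; check the live SWML reference for current fields.
import json

swml = {
    "version": "1.0.0",
    "sections": {
        "main": [
            {
                "ai": {
                    "prompt": {
                        "text": "You are a support agent. Answer product "
                                "questions and escalate when unsure."
                    }
                }
            }
        ]
    },
}

print(json.dumps(swml, indent=2))
```

Serving this document from your webhook endpoint is what puts the AI model inside the call rather than bolting it on externally.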
SignalWire uses a Contexts and Steps system where the agent progresses through explicitly defined steps with completion criteria. At each transition, the application passes updated state into the next step's prompt via variable expansion. The AI never carries the full conversation history. Instead it receives a fresh, accurate context window at every step, preventing context drift in long interactions.
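The variable-expansion idea can be shown with a toy renderer. The `%{variable}` syntax mirrors SWML's variable expansion, but the `expand` helper and the state keys below are illustrative assumptions, not platform code.

```python
# Hedged sketch of per-step prompt construction: the application owns the
# state and renders a fresh, complete context window for each step.
# The %{...} syntax mirrors SWML variable expansion; `expand` is a toy helper.
def expand(template: str, state: dict) -> str:
    for key, value in state.items():
        template = template.replace("%{" + key + "}", str(value))
    return template

state = {
    "caller_name": "Ada",            # collected in an earlier step
    "step": "confirm_order",
    "items": "2 x large pizza",
}

step_prompt = expand(
    "You are at the %{step} step. Confirm %{caller_name}'s order: %{items}.",
    state,
)
print(step_prompt)
```

The model sees only this rendered window at each step, never the raw transcript of everything said so far, which is what keeps a 20-minute call from drifting.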
The SignalWire AI control plane is the orchestration layer that sits between the telecom infrastructure and the language model. Rather than letting the LLM drive the entire call, SignalWire keeps call state, routing logic, and telephony actions in the infrastructure layer and uses the AI as a tool within that controlled environment. This prevents the LLM from losing context and corrupting the session.
SignalWire AI agents are stateless by default. The LLM doesn't persist memory between turns. Instead, the application layer holds all call state (caller data, step progress, variables) and injects it into the prompt dynamically at each step using variable expansion. This means the AI always sees exactly what it needs, without relying on fragile LLM memory.
Yes. SignalWire AI agents can interact with callers across PSTN phone numbers, SIP endpoints, and WebRTC browser or app sessions using the same logic and workflows.
SignalWire AI agents use SWAIG (SignalWire AI Gateway) to trigger programmable "functions" during a live conversation. This allows the AI to autonomously execute actions like sending an SMS confirmation, querying a CRM database for customer records, or inserting an appointment into a calendar without human intervention or ending the call session.
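The mechanism can be sketched as a function registry plus a dispatcher. This is an illustration of the SWAIG pattern, not the actual SignalWire wire format; the decorator name, request shape, and `send_sms` stub are all hypothetical.

```python
# Illustrative registry for SWAIG-style functions (NOT the SignalWire wire
# format). The model requests a registered function by name with JSON
# arguments; the platform executes it mid-call and returns the result.
import json

FUNCTIONS: dict[str, callable] = {}

def swaig_function(name: str):
    """Register a callable under a name the model can invoke."""
    def register(fn):
        FUNCTIONS[name] = fn
        return fn
    return register

@swaig_function("send_sms")
def send_sms(to: str, body: str) -> dict:
    # Production code would call the SMS API here; this stub just echoes.
    return {"status": "queued", "to": to}

def dispatch(request_json: str) -> dict:
    """Decode a model-issued function call and run the matching function."""
    req = json.loads(request_json)
    return FUNCTIONS[req["function"]](**req["arguments"])

result = dispatch(
    '{"function": "send_sms",'
    ' "arguments": {"to": "+15550100", "body": "Order confirmed."}}'
)
assert result == {"status": "queued", "to": "+15550100"}
```

The call stays live throughout: the function result flows back into the conversation so the agent can confirm the SMS was sent without hanging up or handing off.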
Build AI voicebots that automate higher-value tasks, with lower latency and less DevOps overhead.