prompt
Defines the AI agent’s personality, goals, behaviors, and instructions for handling conversations. The prompt establishes how the agent should interact with callers, what information it should gather, and how it should respond to various scenarios.
It is recommended to write prompts using markdown formatting, as LLMs better understand structured content. Additionally, it is recommended to read the Prompting Best Practices guide.
Prompt types
There are three ways to define prompt content, each suited for different use cases:
- **Text prompt** — A single string containing the full prompt. Best for simple agents where the entire personality, instructions, and rules fit naturally into one block of text.
- **POM (Prompt Object Model)** — A structured array of sections with titles, body text, and bullet points. Best for complex prompts that benefit from clear organization. SignalWire renders the POM into a markdown document before sending it to the LLM.
- **Contexts** — A system of named conversation flows, each with its own steps, memory settings, and transition logic. Best for multi-stage conversations where the agent needs to switch between distinct modes (e.g., greeting → support → billing). Requires a `default` context as the entry point. See Contexts below.
Text and POM are mutually exclusive — use one or the other. Contexts can be combined with either a text or POM prompt to add structured conversation flows on top of the base prompt.
Properties
ai.prompt
An object that contains the prompt parameters.
The prompt property accepts one of the following objects:
Regular Prompt
POM Prompts
prompt.text
The main identity prompt for the AI. This prompt will be used to outline the agent’s personality, role, and other characteristics.
prompt.temperature
Randomness setting. Float value between 0.0 and 1.5. Values closer to 0 produce less random output.
prompt.top_p
Randomness setting. Alternative to temperature. Float value between 0.0 and 1.0. Values closer to 0 produce less random output.
prompt.confidence
Threshold to fire a speech-detect event at the end of the utterance. Float value between 0.0 and 1.0. Decreasing this value will reduce the pause after the user speaks, but may introduce false positives.
prompt.presence_penalty
Aversion to staying on topic. Float value between -2.0 and 2.0. Positive values increase the model’s likelihood to talk about new topics.
prompt.frequency_penalty
Aversion to repeating lines. Float value between -2.0 and 2.0. Positive values decrease the model’s likelihood to repeat the same line verbatim.
prompt.max_tokens
Limits the number of tokens that the AI agent may generate when creating its response. Valid value range: 0 - 4096.
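Taken together, the tuning parameters above sit alongside the prompt text in the SWML document. A minimal sketch (the specific values are illustrative, not recommendations):

```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: "You are a concise, friendly support agent for Acme Corp."
          temperature: 0.6        # lower = less random output
          top_p: 0.9              # alternative randomness control
          confidence: 0.6         # end-of-utterance detection threshold
          presence_penalty: 0.1   # > 0 nudges the model toward new topics
          frequency_penalty: 0.3  # > 0 discourages verbatim repetition
          max_tokens: 1024        # cap on tokens generated per response
```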
prompt.contexts
An object that defines the available contexts for the AI. Each context represents a set of steps that guide the flow of the conversation. The object must include a default key, which specifies the initial context used at the start of the conversation. Additional contexts can be added as other keys within the object.
contexts.default
The default context used at the beginning of the conversation.
contexts.*
Additional contexts for specialized conversation flows. The key is user-defined (e.g., support, sales, billing).
*.steps
An array of step objects that define the conversation flow for this context. Steps execute sequentially unless otherwise specified. Each step contains either a text string or a pom array to provide prompt instructions.
*.isolated
When true, resets conversation history to only the system prompt when entering this context. Useful for focused tasks that shouldn’t be influenced by previous conversation.
*.enter_fillers
Language-specific filler phrases played when transitioning into this context.
An array of filler phrases for the specified language code. One phrase is randomly selected during transitions. Possible language codes:
- `default` - Default language set by the user in the `ai.languages` property
- `bg` - Bulgarian
- `ca` - Catalan
- `cs` - Czech
- `da` - Danish
- `da-DK` - Danish (Denmark)
- `de` - German
- `de-CH` - German (Switzerland)
- `el` - Greek
- `en` - English
- `en-AU` - English (Australia)
- `en-GB` - English (United Kingdom)
- `en-IN` - English (India)
- `en-NZ` - English (New Zealand)
- `en-US` - English (United States)
- `es` - Spanish
- `es-419` - Spanish (Latin America)
- `et` - Estonian
- `fi` - Finnish
- `fr` - French
- `fr-CA` - French (Canada)
- `hi` - Hindi
- `hu` - Hungarian
- `id` - Indonesian
- `it` - Italian
- `ja` - Japanese
- `ko` - Korean
- `ko-KR` - Korean (South Korea)
- `lt` - Lithuanian
- `lv` - Latvian
- `ms` - Malay
- `multi` - Multilingual (Spanish + English)
- `nl` - Dutch
- `nl-BE` - Flemish (Belgian Dutch)
- `no` - Norwegian
- `pl` - Polish
- `pt` - Portuguese
- `pt-BR` - Portuguese (Brazil)
- `pt-PT` - Portuguese (Portugal)
- `ro` - Romanian
- `ru` - Russian
- `sk` - Slovak
- `sv` - Swedish
- `sv-SE` - Swedish (Sweden)
- `th` - Thai
- `th-TH` - Thai (Thailand)
- `tr` - Turkish
- `uk` - Ukrainian
- `vi` - Vietnamese
- `zh` - Chinese (Simplified)
- `zh-CN` - Chinese (Simplified, China)
- `zh-Hans` - Chinese (Simplified Han)
- `zh-Hant` - Chinese (Traditional Han)
- `zh-HK` - Chinese (Traditional, Hong Kong)
- `zh-TW` - Chinese (Traditional, Taiwan)
*.exit_fillers
Language-specific filler phrases played when leaving this context. Same format as enter_fillers.
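As a sketch, a single context combining both filler types might look like this (the `support` context name, the step `name` field, and all phrases are illustrative):

```yaml
support:
  enter_fillers:
    en-US:
      - "One moment while I bring up support."
      - "Let me switch you over to support."
  exit_fillers:
    default:
      - "Thanks, switching back now."
  steps:
    - name: helpdesk
      text: "Assist the caller with technical issues."
```

One phrase from the matching language array is chosen at random each time the agent enters or leaves the context.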
Examples
Basic prompt
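A minimal text-prompt document might look like the following (the agent's identity and wording are illustrative):

```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: |
            You are Ava, a friendly virtual receptionist for Acme Dental.
            Greet the caller, ask for their name, and help them schedule
            an appointment.
```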
POM prompt
Rendered prompt
The above POM example will render the following markdown prompt:
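SignalWire renders section titles as markdown headings, body text as paragraphs, and bullets as list items. For a POM with `Personality` and `Goals` sections like the one sketched earlier, the rendered markdown would look roughly like this (exact heading levels are an assumption):

```markdown
## Personality

You are Ava, a friendly virtual receptionist for Acme Dental.

## Goals

Help callers schedule appointments.

- Collect the caller's name.
- Offer available appointment times.
```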
Basic context
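A single-context sketch with the required `default` entry point, assuming each step carries a `name` alongside its `text` (the step names and wording are illustrative):

```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: "You are a helpful assistant for Acme Corp."
          contexts:
            default:
              steps:
                - name: greeting
                  text: "Greet the caller and ask how you can help."
                - name: wrap_up
                  text: "Summarize the request and say goodbye."
```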
Advanced multi-context
This example demonstrates multiple contexts with different AI personalities, voice settings, and specialized knowledge domains:
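A rough sketch of such a document, using only the context fields described above (the context names, step `name` fields, and filler phrases are illustrative; per-context voice settings are omitted here):

```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: "You are the main assistant for Acme Corp."
          contexts:
            default:
              steps:
                - name: triage
                  text: "Determine whether the caller needs support or billing."
            support:
              isolated: true   # start support with a clean history
              enter_fillers:
                en-US:
                  - "Let me connect you with support."
              steps:
                - name: troubleshoot
                  text: "Help the caller troubleshoot their technical issue."
            billing:
              steps:
                - name: account_lookup
                  text: "Ask for the caller's account number and answer billing questions."
```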
Variable Expansion
Use the following syntax to expand variables into your prompt.
${call_direction}
Inbound or outbound.
${caller_id_number}
The caller ID number.
${local_date}
The local date.
${spoken_date}
The spoken date.
${local_time}
The local time.
${time_of_day}
The time of day.
${supported_languages}
A list of supported languages.
${default_language}
The default language.
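For instance, a prompt can embed several of these variables directly in its text; they are expanded before the prompt is sent to the LLM (the surrounding wording is illustrative):

```yaml
prompt:
  text: |
    You are handling an ${call_direction} call from ${caller_id_number}.
    Today is ${spoken_date} and the local time is ${local_time}.
    Respond in ${default_language} unless the caller requests one of:
    ${supported_languages}.
```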