amazon_bedrock
Create an Amazon Bedrock agent with a prompt. Since the text prompt is central to getting great results out of the AI, it is highly recommended that you also read the Prompting Best Practices guide.
Properties
amazon_bedrock
An object that accepts the following properties.
amazon_bedrock.global_data
A powerful and flexible environment variable that can accept arbitrary data, set initially in the SWML script or later via the SWML set_global_data action. This data can be referenced globally: all contained information can be accessed and expanded within the prompt, for example by using a template string.
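As a sketch, the snippet below sets global_data in the SWML script and expands one of its values inside the prompt. All values are illustrative, and the `${global_data.*}` template syntax is an assumption here; consult the template string documentation for the exact expansion rules.

```json
{
  "version": "1.0.0",
  "sections": {
    "main": [
      {
        "amazon_bedrock": {
          "global_data": {
            "customer_name": "Ada",
            "account_tier": "gold"
          },
          "prompt": {
            "text": "You are a support agent. Greet ${global_data.customer_name}, one of our ${global_data.account_tier}-tier customers."
          }
        }
      }
    ]
  }
}
```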
amazon_bedrock.params
A JSON object containing parameters as key-value pairs.
amazon_bedrock.post_prompt
The final set of instructions and configuration settings to send to the agent.
Accepts either a text string or a pom object array for structured prompts, plus optional tuning parameters.
See post_prompt details below.
amazon_bedrock.post_prompt_url
The URL to which to send status callbacks and reports. Authentication can also be set in the URL, in the format username:password@url.
See post_prompt_url callback below.
amazon_bedrock.prompt
Establishes the initial set of instructions and settings to configure the agent.
See prompt for additional details.
amazon_bedrock.SWAIG
An array of JSON objects to create user-defined functions/endpoints that can be executed during the dialogue.
See SWAIG for additional details.
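For illustration, a minimal SWAIG definition might declare one function the agent can call during the dialogue. It is shown here in object form with a functions array, per the SWAIG reference; the function name, argument schema, and webhook URL are placeholders rather than a verified configuration.

```json
{
  "amazon_bedrock": {
    "prompt": {
      "text": "You help callers check on their orders. Use get_order_status when asked."
    },
    "SWAIG": {
      "functions": [
        {
          "function": "get_order_status",
          "purpose": "Look up the current status of an order by its ID",
          "argument": {
            "type": "object",
            "properties": {
              "order_id": {
                "type": "string",
                "description": "The caller's order ID"
              }
            }
          },
          "web_hook_url": "https://example.com/swaig/order-status"
        }
      ]
    }
  }
}
```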
post_prompt
The post_prompt object accepts either a plain text prompt or a structured POM prompt, plus optional tuning parameters.
post_prompt.text
The instructions to send to the agent after the conversation has ended, for example to request a summary of the call or a structured report.
post_prompt.temperature
Controls the randomness of responses. Higher values (e.g., 0.8) make output more random and creative, while lower values (e.g., 0.2) make it more focused and deterministic. Range: 0.0 to 1.0.
post_prompt.top_p
Controls diversity via nucleus sampling. Only tokens with cumulative probability up to top_p are considered. Lower values make output more focused. Range: 0.0 to 1.0.
post_prompt.confidence
Minimum confidence threshold for AI responses. Responses below this threshold may be filtered or flagged. Range: 0.0 to 1.0.
post_prompt.presence_penalty
Penalizes tokens based on whether they appear in the text so far. Positive values encourage the model to talk about new topics.
post_prompt.frequency_penalty
Penalizes tokens based on their frequency in the text so far. Positive values decrease the likelihood of repeating the same line verbatim.
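Putting the tuning parameters together, a post_prompt configured for a focused, low-randomness summary might look like the sketch below. The values, the summary instructions, and the authenticated callback URL are all illustrative.

```json
{
  "amazon_bedrock": {
    "prompt": {
      "text": "You are a helpful receptionist."
    },
    "post_prompt": {
      "text": "Summarize the conversation as a JSON object with the fields summary and follow_up_required.",
      "temperature": 0.2,
      "top_p": 0.5,
      "frequency_penalty": 0.4
    },
    "post_prompt_url": "https://reports:secret@example.com/post-prompt"
  }
}
```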
post_prompt_url callback
SignalWire will make a request to the post_prompt_url with the following parameters:
action
Action that prompted this request. The value will be “post_conversation”.
ai_end_date
Timestamp indicating when the AI session ended.
ai_session_id
A unique identifier for the AI session.
ai_start_date
Timestamp indicating when the AI session started.
app_name
Name of the application that originated the request.
call_answer_date
Timestamp indicating when the call was answered.
call_end_date
Timestamp indicating when the call ended.
call_id
ID of the call.
call_log
The complete log of the call, as a JSON object.
call_log.content
Content of the call log entry.
call_log.role
Role associated with the call log entry (e.g., “system”, “assistant”, “user”).
call_start_date
Timestamp indicating when the call started.
caller_id_name
Name associated with the caller ID.
caller_id_num
Number associated with the caller ID.
content_disposition
Disposition of the content.
content_type
Type of content. The value will be text/swaig.
conversation_id
A unique identifier for the conversation thread, if configured via the AI parameters.
post_prompt_data
The answer from the AI agent to the post_prompt. The object contains the following three fields.
post_prompt_data.parsed
If a JSON object is detected within the answer, it is parsed and provided here.
post_prompt_data.raw
The raw data answer from the AI agent.
post_prompt_data.substituted
The answer from the AI agent, excluding any JSON.
project_id
ID of the Project.
space_id
ID of the Space.
SWMLVars
A collection of variables related to SWML.
swaig_log
A log related to SWAIG functions.
total_input_tokens
Represents the total number of input tokens.
total_output_tokens
Represents the total number of output tokens.
version
Version number.
Post prompt callback request example
Below is a JSON example of the callback request that is sent to the post_prompt_url:
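A representative payload, assembled from the parameters documented above, might look like the following. Every value is illustrative, and the exact timestamp and identifier formats may differ from what your account actually receives.

```json
{
  "action": "post_conversation",
  "app_name": "swml app",
  "version": "1.0",
  "content_type": "text/swaig",
  "content_disposition": "Conversation Log",
  "ai_session_id": "00000000-0000-0000-0000-000000000000",
  "conversation_id": "support-line",
  "call_id": "11111111-1111-1111-1111-111111111111",
  "caller_id_name": "Ada Lovelace",
  "caller_id_num": "+15550100",
  "call_start_date": 1700000000000000,
  "call_answer_date": 1700000001000000,
  "ai_start_date": 1700000002000000,
  "ai_end_date": 1700000120000000,
  "call_end_date": 1700000121000000,
  "call_log": [
    { "role": "system", "content": "You are a helpful receptionist." },
    { "role": "assistant", "content": "Hello! How can I help you today?" },
    { "role": "user", "content": "What are your hours?" }
  ],
  "post_prompt_data": {
    "raw": "{ \"summary\": \"Caller asked about business hours.\" }",
    "parsed": [ { "summary": "Caller asked about business hours." } ],
    "substituted": ""
  },
  "SWMLVars": {},
  "swaig_log": [],
  "total_input_tokens": 512,
  "total_output_tokens": 128,
  "project_id": "22222222-2222-2222-2222-222222222222",
  "space_id": "33333333-3333-3333-3333-333333333333"
}
```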
Responding to post prompt requests
The response to the callback request should be a JSON object with the following parameters:
Amazon Bedrock example
The following example selects Bedrock’s Tiffany voice using the voice_id parameter in the prompt. It includes scaffolding for a post_prompt_url as well as several remote and inline functions using SWAIG.
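A sketch of what such a document could look like is shown below, with the remote SWAIG functions pulled in via includes and one inline function answered via a static data_map. The URLs, function definitions, and tuning values are placeholders rather than a verified, runnable configuration.

```json
{
  "version": "1.0.0",
  "sections": {
    "main": [
      {
        "amazon_bedrock": {
          "prompt": {
            "text": "You are a friendly assistant for Example Co. Answer questions about store hours, and use the SWAIG functions when a caller asks about an account.",
            "voice_id": "tiffany",
            "temperature": 0.6
          },
          "post_prompt": {
            "text": "Summarize the call in one short paragraph."
          },
          "post_prompt_url": "https://example.com/post-prompt",
          "SWAIG": {
            "includes": [
              {
                "url": "https://example.com/swaig",
                "functions": ["lookup_account"]
              }
            ],
            "functions": [
              {
                "function": "get_store_hours",
                "purpose": "Return the store's opening hours",
                "data_map": {
                  "output": {
                    "response": "We are open 9am to 5pm, Monday through Friday."
                  }
                }
              }
            ]
          }
        }
      }
    ]
  }
}
```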