---
id: 3fb570ef-7b0e-4341-8a8e-1c270fd6bedc
slug: /reference/amazon-bedrock
title: amazon_bedrock
description: Create an Amazon Bedrock agent interaction.
max-toc-depth: 3
---

[params]: /docs/swml/reference/amazon-bedrock/params
[prompt]: /docs/swml/reference/amazon-bedrock/prompt
[SWAIG]: /docs/swml/reference/amazon-bedrock/swaig
[set_global_data action]: /docs/swml/reference/amazon-bedrock/swaig/functions/data-map#actions
[Prompting Best Practices]: /docs/platform/ai/best-practices#crafting-the-initial-prompt-for-the-ai

Create an Amazon Bedrock agent with a prompt. Since the text prompt is central to getting great results out of the AI, we highly recommend that you also read the [Prompting Best Practices][Prompting Best Practices] guide.

## **Properties**

An object that accepts the following properties.

- `global_data`: A powerful and flexible environment variable that can accept arbitrary data, set initially in the SWML script or from the SWML [`set_global_data` action][set_global_data action]. This data can be referenced **globally**. All contained information can be accessed and expanded within the prompt - for example, by using a template string.
- `params`: A JSON object containing [`parameters`][params] as key-value pairs.
- `post_prompt`: The final set of instructions and configuration settings to send to the agent. Accepts either a `text` string or a `pom` object array for structured prompts, plus optional tuning parameters. See [post_prompt details](#post_prompt) below.
- `post_prompt_url`: The URL to which status callbacks and reports are sent. Authentication can also be set in the URL, in the format `username:password@url`. See [post_prompt_url callback](#post_prompt_url-callback) below.
- `prompt`: Establishes the initial set of instructions and settings used to configure the agent. See [`prompt`][prompt] for additional details.
- `SWAIG`: An array of JSON objects that create user-defined functions/endpoints which can be executed during the dialogue. See [`SWAIG`][SWAIG] for additional details.
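For instance, `global_data` set in the script can later be expanded inside the prompt, and basic-auth credentials can be embedded directly in `post_prompt_url`. Below is a minimal sketch; the URL, credentials, and the `%{global_data.destination}` template-string form are illustrative assumptions - consult the [`prompt`][prompt] reference for the exact substitution syntax.

```yaml
version: 1.0.0
sections:
  main:
    - amazon_bedrock:
        # Arbitrary data made available globally; can also be updated later
        # via the SWAIG set_global_data action.
        global_data:
          destination: Paris
        # Basic-auth credentials embedded in the callback URL (username:password@url).
        post_prompt_url: https://user:secret@example.com/my-api
        prompt:
          # The %{global_data.destination} expansion below is an assumed
          # template-string form, shown for illustration only.
          text: You are a travel assistant helping callers plan a trip to %{global_data.destination}.
```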
## post_prompt

The `post_prompt` object accepts either a plain-text prompt or a structured POM prompt, plus optional tuning parameters.

- `text`: The main identity prompt for the AI. This prompt outlines the agent's personality, role, and other characteristics.
- `temperature`: Controls the randomness of responses. Higher values (e.g., 0.8) make output more random and creative, while lower values (e.g., 0.2) make it more focused and deterministic. Range: 0.0 to 1.0.
- `top_p`: Controls diversity via nucleus sampling. Only tokens with cumulative probability up to `top_p` are considered. Lower values make output more focused. Range: 0.0 to 1.0.
- `confidence`: Minimum confidence threshold for AI responses. Responses below this threshold may be filtered or flagged. Range: 0.0 to 1.0.
- `presence_penalty`: Penalizes tokens based on whether they appear in the text so far. Positive values encourage the model to talk about new topics.
- `frequency_penalty`: Penalizes tokens based on their frequency in the text so far. Positive values decrease the likelihood of repeating the same line verbatim.
- `pom`: An array of objects that defines the prompt object model (POM) for the AI. The POM is a structured data format for organizing and rendering a prompt for the AI agent, defining the AI's personality, role, and other characteristics. See the [POM technical reference](/docs/swml/reference/amazon-bedrock/prompt) for more information. When using `pom`, the same tuning parameters (`temperature`, `top_p`, `confidence`, `presence_penalty`, and `frequency_penalty`) are available and behave as described above.

## post_prompt_url callback

SignalWire will make a request to the `post_prompt_url` with the following parameters:

- `action`: Action that prompted this request. The value will be `post_conversation`.
- `ai_end_date`: Timestamp indicating when the AI session ended.
- `ai_session_id`: A unique identifier for the AI session.
- `ai_start_date`: Timestamp indicating when the AI session started.
- `app_name`: Name of the application that originated the request.
- `call_answer_date`: Timestamp indicating when the call was answered.
- `call_end_date`: Timestamp indicating when the call ended.
- `call_id`: ID of the call.
- `call_log`: The complete log of the call, as a JSON object.
  - `content`: Content of the call log entry.
  - `role`: Role associated with the call log entry (e.g., `system`, `assistant`, `user`).
- `call_start_date`: Timestamp indicating when the call started.
- `caller_id_name`: Name associated with the caller ID.
- `caller_id_num`: Number associated with the caller ID.
- `content_disposition`: Disposition of the content.
- `content_type`: Type of content. The value will be `text/swaig`.
- `conversation_id`: A unique identifier for the conversation thread, if configured via the AI parameters.
- `post_prompt_data`: The answer from the AI agent to the `post_prompt`. The object contains the three following fields.
  - `parsed`: If a JSON object is detected within the answer, it is parsed and provided here.
  - `raw`: The raw data answer from the AI agent.
  - `substituted`: The answer from the AI agent, excluding any JSON.
- `project_id`: ID of the Project.
- `space_id`: ID of the Space.
- `SWMLVars`: A collection of variables related to SWML.
- `swaig_log`: A log related to SWAIG functions.
- `total_input_tokens`: The total number of input tokens.
- `total_output_tokens`: The total number of output tokens.
- `version`: Version number.
### Post prompt callback request example

Below is a JSON example of the callback request that is sent to the `post_prompt_url`:

```json
{
  "total_output_tokens": 119,
  "caller_id_name": "[CALLER_NAME]",
  "SWMLVars": {
    "ai_result": "success",
    "answer_result": "success"
  },
  "call_start_date": 1694541295773508,
  "project_id": "[PROJECT_ID]",
  "call_log": [
    {
      "content": "[AI INITIAL PROMPT/INSTRUCTIONS]",
      "role": "system"
    },
    {
      "content": "[AI RESPONSE]",
      "role": "assistant"
    },
    {
      "content": "[USER RESPONSE]",
      "role": "user"
    }
  ],
  "ai_start_date": 1694541297950440,
  "call_answer_date": 1694541296799504,
  "version": "2.0",
  "content_disposition": "Conversation Log",
  "conversation_id": "[CONVERSATION_ID]",
  "space_id": "[SPACE_ID]",
  "app_name": "swml app",
  "swaig_log": [
    {
      "post_data": {
        "content_disposition": "SWAIG Function",
        "conversation_id": "[CONVERSATION_ID]",
        "space_id": "[SPACE_ID]",
        "meta_data_token": "[META_DATA_TOKEN]",
        "app_name": "swml app",
        "meta_data": {},
        "argument": {
          "raw": "{\n \"target\": \"[TRANSFER_TARGET]\"\n}",
          "substituted": "",
          "parsed": [
            {
              "target": "[TRANSFER_TARGET]"
            }
          ]
        },
        "call_id": "[CALL_ID]",
        "content_type": "text/swaig",
        "ai_session_id": "[AI_SESSION_ID]",
        "caller_id_num": "[CALLER_NUMBER]",
        "caller_id_name": "[CALLER_NAME]",
        "project_id": "[PROJECT_ID]",
        "purpose": "Use to transfer to a target",
        "argument_desc": {
          "type": "object",
          "properties": {
            "target": {
              "description": "the target to transfer to",
              "type": "string"
            }
          }
        },
        "function": "transfer",
        "version": "2.0"
      },
      "command_name": "transfer",
      "epoch_time": 1694541334,
      "command_arg": "{\n \"target\": \"[TRANSFER_TARGET]\"\n}",
      "url": "https://example.com/here",
      "post_response": {
        "action": [
          {
            "say": "This is a say message!"
          },
          {
            "SWML": {
              "sections": {
                "main": [
                  {
                    "connect": {
                      "to": "+1XXXXXXXXXX"
                    }
                  }
                ]
              },
              "version": "1.0.0"
            }
          },
          {
            "stop": true
          }
        ],
        "response": "transferred to [TRANSFER_TARGET], the call has ended"
      }
    }
  ],
  "total_input_tokens": 5627,
  "caller_id_num": "[CALLER_NUMBER]",
  "call_id": "[CALL_ID]",
  "call_end_date": 1694541335435503,
  "content_type": "text/swaig",
  "action": "post_conversation",
  "post_prompt_data": {
    "substituted": "[SUMMARY_MESSAGE_PLACEHOLDER]",
    "parsed": [],
    "raw": "[SUMMARY_MESSAGE_PLACEHOLDER]"
  },
  "ai_end_date": 1694541335425164,
  "ai_session_id": "[AI_SESSION_ID]"
}
```

### Responding to post prompt requests

The response to the callback request should be a JSON object with the following parameters:

```json
{
  "response": "ok"
}
```

## Amazon Bedrock example

The following example selects Bedrock's **Tiffany** voice using the [voice_id](/docs/swml/reference/amazon-bedrock/prompt) parameter in the prompt. It includes scaffolding for a [post_prompt_url](#properties) as well as several remote and inline functions using [SWAIG](/docs/swml/reference/amazon-bedrock/swaig).

```yaml
---
version: 1.0.0
sections:
  main:
    - amazon_bedrock:
        post_prompt_url: https://example.com/my-api
        prompt:
          voice_id: tiffany
          text: |
            You are a helpful assistant that can provide information to users about a destination.
            At the start of the conversation, always ask the user for their name.
            You can use the appropriate function to get the phone number, address,
            or weather information.
        post_prompt:
          text: Summarize the conversation.
        SWAIG:
          includes:
            - functions:
                - get_phone_number
                - get_address
              url: https://example.com/functions
          defaults:
            web_hook_url: https://example.com/my-webhook
          functions:
            - function: get_weather
              description: To determine what the current weather is in a provided location.
              parameters:
                properties:
                  location:
                    type: string
                    description: The name of the city to find the weather from.
                type: object
            - function: summarize_conversation
              description: Summarize the conversation.
              parameters:
                type: object
                properties:
                  name:
                    type: string
                    description: The name of the user.
```

```json
{
  "version": "1.0.0",
  "sections": {
    "main": [
      {
        "amazon_bedrock": {
          "post_prompt_url": "https://example.com/my-api",
          "prompt": {
            "voice_id": "tiffany",
            "text": "You are a helpful assistant that can provide information to users about a destination.\nAt the start of the conversation, always ask the user for their name.\nYou can use the appropriate function to get the phone number, address,\nor weather information.\n"
          },
          "post_prompt": {
            "text": "Summarize the conversation."
          },
          "SWAIG": {
            "includes": [
              {
                "functions": [
                  "get_phone_number",
                  "get_address"
                ],
                "url": "https://example.com/functions"
              }
            ],
            "defaults": {
              "web_hook_url": "https://example.com/my-webhook"
            },
            "functions": [
              {
                "function": "get_weather",
                "description": "To determine what the current weather is in a provided location.",
                "parameters": {
                  "properties": {
                    "location": {
                      "type": "string",
                      "description": "The name of the city to find the weather from."
                    }
                  },
                  "type": "object"
                }
              },
              {
                "function": "summarize_conversation",
                "description": "Summarize the conversation.",
                "parameters": {
                  "type": "object",
                  "properties": {
                    "name": {
                      "type": "string",
                      "description": "The name of the user."
                    }
                  }
                }
              }
            ]
          }
        }
      }
    ]
  }
}
```
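As a variation on the example above, the optional tuning parameters described in the [post_prompt](#post_prompt) section could be added alongside the summary prompt. A sketch - the numeric values here are illustrative, not recommendations:

```yaml
post_prompt:
  text: Summarize the conversation in two sentences.
  temperature: 0.2        # low randomness for a factual summary
  top_p: 0.9              # nucleus sampling cutoff
  confidence: 0.6         # minimum confidence threshold
  presence_penalty: 0.0   # no push toward new topics
  frequency_penalty: 0.3  # mildly discourage verbatim repetition
```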