Treating an AI voice agent like traditional software leads to prompts filled with rigid rules and negative exceptions that confuse the model. Instead, speaking to an AI agent the way you would explain a task to a person — clear, positive, and sequential — creates more natural, reliable conversational behavior. This article shows why a person-first mindset improves prompt design and how to write conversational prompts that get better results.
The fundamental mistake developers make when building AI voice agents is treating them like traditional software. You wouldn't write `if (customerNeedsHelp) { transferCall(); }` in a conversation with a human. Don't talk to an AI that way either.
Join one of our Friday SignalWire hangouts and it won’t be too long before you hear Brian West, our Director of Support, ask someone...
> How would you instruct your mother to do the task you want done?
This question transforms how you build AI agents.
## The core problem with prompting in a programming mindset
When you write a prompt using a traditional programming mindset, you get something like:
```
NEVER transfer the call until you validate their account. ALWAYS ask before doing anything. DON'T forget to verify the customer. Make ABSOLUTELY SURE you collect all information.
```
The strong negative language paralyzes the AI. It focuses on what not to do instead of what to do.
Compare that to how you'd explain the same task to a person:
```
Transfer the caller after you validate their account. Collect their name, phone number, and issue before transferring.
```
Clear. Sequential. Positive. No programming logic.
Wrong approach (programming mindset):

```yaml
prompt: |
  You are FoodBot v2.3.1

  RULES:
  1. NEVER accept orders without verifying menu items
  2. ALWAYS repeat the order back
  3. DON'T proceed until payment confirmed
  4. EXCEPTION HANDLING: If customer says "um" or "uh", ignore it
  5. ERROR RECOVERY: If order invalid, restart from step 1
```
Right approach (person mindset):

```yaml
prompt: |
  You're Maria, a friendly restaurant order-taker.
  Help customers order from our menu.
  Check if items are available using the check_menu function.
  When the order is complete, summarize it and ask for confirmation.
  You always have pizza in stock - never tell customers we're out of pizza.
```
Maria has a name and a personality. The instructions flow naturally. The one edge case—pizza stock—is stated as a fact, not an exception handler.
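For completeness, the check_menu function that Maria's prompt references would be declared alongside the prompt. Here's a minimal sketch; the purpose text and parameter name are assumptions, not an actual API:

```yaml
functions:
  - name: check_menu
    purpose: "Check whether a menu item is currently available"  # assumed wording
    parameters:
      - name: item_name   # assumed parameter; adjust to your menu schema
        type: string
```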
## Why does using “never” hurt AI prompts?
A customer came to us with an application that contained this prompt:
```yaml
prompt: "Never, ever transfer calls without permission."
```
It worked too well. The agent refused to transfer even when customers explicitly asked.
We turned it into a conversational instruction:
```yaml
prompt: "Transfer calls after the customer requests it and you've collected their information."
```
The result was smooth, natural transfers. Callers could break out of the AI flow when they needed to.
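Dropped into a full agent definition, the corrected instruction might look something like this. This is a sketch, not the customer's actual configuration; the function name and parameters are illustrative:

```yaml
sections:
  main:
    - ai:
        prompt: |
          You're a helpful receptionist.
          Transfer calls after the customer requests it and you've
          collected their name and callback number.
        functions:
          - name: transfer_call          # illustrative name
            purpose: "Transfer the caller to a live agent"
            parameters:
              - name: customer_name      # assumed parameters
                type: string
              - name: callback_number
                type: string
```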
## What does conversational style prompting look like?
Here's what this conversational style looks like in a full customer service agent:
```yaml
sections:
  main:
    - ai:
        prompt: |
          You work for Spacely Sprockets customer support.

          Step 1: Greet the caller warmly and ask how you can help.
          Step 2: Gather their name, company, and phone number.
          Step 3: Help with their question or transfer to the right department:
          - Sales: product inquiries, pricing, new orders
          - Support: technical issues, existing orders
          - Billing: payments, invoices, account questions

          If you've fully helped them, thank them and end the call.
        functions:
          - name: transfer_to_department
            purpose: "Transfer to sales, support, or billing"
            parameters:
              - name: department
                type: string
              - name: customer_name
                type: string
              - name: issue_summary
                type: string
```
## What works for designing AI prompts
Read your prompt out loud. If it sounds robotic, rewrite it. Ask:

- Would I say this to a person?
- Does it sound like instructions or code?
- Am I overusing "never," "always," or "don't"?
"Help customers order" works. "EXECUTE ORDER PROCESSING ROUTINE" doesn't.
"Do A, then B" beats "IF condition THEN A ELSE B". The AI understands sequence. Branching logic doesn't match how conversations flow.
"Transfer after validation" tells the AI what to do. "Never transfer until validation" tells it what to avoid. One creates action. The other creates paralysis.
Give the AI a role and a situation, not a rulebook. A person who understands their job doesn't need fifty rules. They need to know who they are and what they're trying to accomplish.
## Common prompting pitfalls
### Pitfall 1: Over-engineering
```yaml
# TOO COMPLEX
prompt: |
  INITIALIZATION: Set customer_verified = false
  LOOP: While customer_verified == false:
    REQUEST: customer_id
    VALIDATE: Run verification
  END LOOP
```
Simple version:
```yaml
# BETTER
prompt: |
  Ask for their customer ID and verify it using the verify_customer function.
  Once verified, help them with their request.
```
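Part of why the simple version works is that the retry loop the complex prompt tried to script belongs in a function declaration, not the prompt. A minimal sketch of what that declaration might look like; the purpose text and parameter name are assumptions:

```yaml
functions:
  - name: verify_customer
    purpose: "Look up and verify a customer by ID"  # assumed wording
    parameters:
      - name: customer_id   # assumed parameter name
        type: string
```

The AI keeps asking until the function succeeds because that's what the conversation naturally requires, not because a LOOP construct told it to.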
### Pitfall 2: Exception handling like code
```yaml
# PROGRAMMING MINDSET
prompt: |
  TRY: Process order
  EXCEPT InvalidItemError: Notify customer item unavailable
  EXCEPT PaymentError: Request alternative payment
```
Natural version:
```yaml
# CONVERSATIONAL
prompt: |
  If an item isn't available, let them know and suggest alternatives.
  If payment doesn't work, ask if they'd like to try a different method.
```
## Measuring voice AI agent success
Watch what happens when something goes wrong.
A well-prompted agent handles unexpected inputs gracefully. It asks clarifying questions. It adapts when the conversation goes sideways.
A poorly-prompted agent loops endlessly. It repeats the same phrase like a broken record. It falls apart the moment a customer deviates from the happy path.
The difference: did you write a program, or did you describe a person?
The most powerful feature of AI agents isn't their ability to follow complex logic. It's their ability to have natural conversations. Treat them like conversational partners, not programs.
This is part one of a three-part series on AI agent best practices:

1. Treat Your AI Agent Like a Person, Not a Program
2. The RISE-M Framework: Structure Your AI Agent Prompts for Success
In the next post, we'll explore why less is more when it comes to prompting—and how to avoid the over-prompting trap that leads to confused, ineffective agents.
In the meantime, sign up for a SignalWire space to work on your voice AI prompting skills, and join our community of developers on Discord.
## Frequently asked questions

### What mistake do developers make when prompting AI agents?

Developers often write prompts using a programming mindset filled with rules and exceptions, which leads to rigid and confusing instructions for AI agents.

### Why should you describe tasks conversationally instead of as rules?

Conversational task instructions are clearer and more sequential, helping the AI behave naturally and avoid focusing on what not to do.

### Can conversational prompts improve AI agent behavior?

Yes. Using clear, positive, and human-like instructions results in smoother responses and avoids paralysis caused by rigid negative rules.