Introduction: Why this guide exists
Most AI tools in sales generate content. Pod AI Deal Agents are designed to generate decisions.
They operate with full deal context — CRM data, transcripts, meetings, emails, stakeholder signals, historical patterns — and synthesize it into guidance that helps reps and managers move deals forward.
But the quality of an agent depends on how it’s built. This guide will show you how to create agents your team will actually use: focused, repeatable, and easy to scan — not bloated, generic, or ignored.
What makes a great AI Agent?
A strong Deal Agent is built around a real workflow moment. It’s not “an AI that analyzes everything.”
It’s:
“I’m prepping for a QBR.”
“I need to validate stage before commit.”
“This deal feels stuck — what’s actually wrong?”
The best agents share four characteristics:
Situation-based
Built around a moment in the workflow: “I’m prepping for a meeting” or “I’m reviewing this deal with my manager.”
Actionable
It ends with specific next steps, questions to ask, or risks to mitigate.
Simple to consume
Tables, short sections, and tight limits beat walls of text.
Easy to iterate
Start with a V0, run it on a real deal, tweak, repeat.
A helpful mental model: build agents that are fit for purpose. Several smaller, focused agents are usually adopted more often than one mega agent.
Where to access and launch your AI agent list
Agents live inside the Deal Agent experience:
Open Pod
Search or select a deal
You land in the Deal Agent experience where you can ask any question
Click Explore Agents
Click Create Agent
Fill out the builder fields and launch
Agents can also be accessed in the Chrome Extension, so reps can run them where they already work.
How to build an AI Agent
The builder has four inputs. Each plays a different role.
1) Agent Name (required)
Tie the name to a workflow moment.
Good examples:
Deal Health Check
Call Prep Brief
Discovery Gaps Finder
Stakeholder Map & Gaps
ROI Builder
Re-engage Stalled Deal
If the name feels abstract, it’s probably too broad.
2) Agent Description (required)
This is the simplest statement of the action you want the agent to perform. Keep it plain and specific.
Good description example:
“Assess deal health, identify the top risks and gaps, and recommend the next best actions to improve win probability.”
3) Agent Prompt (required)
This defines the specific instructions the agent should follow.
Strong prompts define the situation and objective, the task, the decision being made, and the expected output.
Weak prompts describe the product or repeat obvious context. Clarity beats creativity here.
Review the prompt best practices guide below.
4) Response format (optional but strongly recommended)
This defines how the output appears.
Response format is your control layer to force scannability with summary tables, word limits, ranked risks, clear section headers, or evidence requirements.
Pro tip: if you want tables, structure, or brevity, you’ll usually get better results by putting that instruction in Response format (instead of bloating the prompt).
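Some teams draft the four builder fields outside the tool first, so the wording can be reviewed and versioned before it goes into the builder. A minimal sketch in plain Python, assuming nothing about Pod's builder or API (the field names and example text below are only for drafting; you still paste the values into the Create Agent form by hand):

    # Draft the four builder fields as plain data so they can be reviewed
    # and versioned before being pasted into the Create Agent form.
    from dataclasses import dataclass

    @dataclass
    class AgentDraft:
        name: str             # tied to a workflow moment, e.g. "Call Prep Brief"
        description: str      # the simplest statement of the action
        prompt: str           # situation, decision, inputs, expected output
        response_format: str  # structure, limits, evidence requirements

    draft = AgentDraft(
        name="Call Prep Brief",
        description="Summarize what matters before the next call and list the questions to ask.",
        prompt="You are preparing the account executive for the next scheduled call on this opportunity...",
        response_format="Start with a 6-row summary table. Limit the brief to 150 words. List no more than 5 questions.",
    )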
The 4-Part Recipe for Writing High-Quality Agent Prompts
Most weak agents fail for one reason: the prompt is underspecified.
Strong prompts remove ambiguity. They clarify the context, define the decision, constrain the inputs, and structure the output. When those elements are tight, the agent becomes consistent and usable across deals.
Here’s the structure that works.
1. Define the Situation
Ground the agent in a real workflow moment. Be clear about where you are in the sales motion, who is using the output, and what is about to happen next. Context reduces generic advice.
Instead of:
Analyze this deal.
Write:
You are preparing for a late-stage forecast review of an enterprise opportunity.
2. Define the Decision
State exactly what the agent must determine. Use direct verbs: identify, rank, diagnose, validate, compare. Avoid stacking multiple objectives into one agent. If you try to solve everything at once, the output becomes diluted.
Instead of:
Review the opportunity.
Write:
Identify the top three risks reducing close probability and recommend corrective actions tied to each risk.
3. Define the Inputs
Tell the agent what signals matter. Explicit inputs increase reliability and reduce artificial confidence.
For example:
Use CRM stage history, activity velocity, transcripts, stakeholder engagement, sentiment signals, and email patterns.
If helpful, add:
If evidence is missing, state “Unknown” and list questions required to confirm.
4. Define the Output
Structure determines adoption. Specify what format to use (table, scorecard, ranked list), what to prioritize (risks, gaps, actions), and word limits/maximum items. Constraints drive clarity. Without them, verbosity creeps in.
For example:
Start with a 6-row summary table. Limit executive summary to 120 words. Provide no more than 5 next actions.
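If you write prompts for several agents, it can help to treat these four parts as a reusable template. A minimal sketch in plain Python (simple string assembly, not a Pod feature), using the example sentences from this section:

    def build_prompt(situation: str, decision: str, inputs: str, output_spec: str) -> str:
        """Assemble an agent prompt from the four recipe parts."""
        return "\n\n".join([
            situation,    # 1. the workflow moment and who the output is for
            decision,     # 2. what the agent must determine
            inputs,       # 3. which signals to use and how to handle missing evidence
            output_spec,  # 4. format, priorities, and limits
        ])

    prompt = build_prompt(
        situation="You are preparing for a late-stage forecast review of an enterprise opportunity.",
        decision="Identify the top three risks reducing close probability and recommend corrective actions tied to each risk.",
        inputs=("Use CRM stage history, activity velocity, transcripts, stakeholder engagement, "
                "sentiment signals, and email patterns. If evidence is missing, state 'Unknown' "
                "and list questions required to confirm."),
        output_spec="Start with a 6-row summary table. Limit the executive summary to 120 words. Provide no more than 5 next actions.",
    )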
5. Adding Company-Specific Playbooks and Standards
Generic analysis is helpful. Contextual analysis is powerful. You can significantly increase agent quality by embedding your company’s internal standards directly into the prompt.
This shifts the agent from general sales advice to enforcing your selling motion. It also creates consistency across reps and managers.
The more clearly you define “what good looks like,” the better your agents will reinforce it.
Examples:
Your stage exit criteria
MEDDPICC or custom qualification framework
ICP definition
POC entrance criteria
Procurement engagement rules
Definition of “Commit”
Instead of asking the agent to “assess stage accuracy,” instruct it to:
Validate the current stage against our defined Stage 3 exit criteria listed below.
Paste the criteria directly into the prompt.
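As a sketch of what pasting criteria in can look like (the criteria below are placeholders, not a recommended standard), you can keep the playbook as a plain list and append it to your prompt text:

    # Hypothetical Stage 3 exit criteria -- replace with your own playbook.
    stage_3_exit_criteria = [
        "Economic buyer identified and engaged",
        "Pain quantified and confirmed by the champion",
        "Success criteria agreed in writing",
        "Mutual action plan shared with the buying committee",
    ]

    criteria_block = "Validate the current stage against our Stage 3 exit criteria:\n" + "\n".join(
        f"- {c}" for c in stage_3_exit_criteria
    )

    # Append the criteria to whatever prompt text you have already drafted.
    agent_prompt = "Assess deal health and stage accuracy for this opportunity.\n\n" + criteria_block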
Prompt Tips & Tricks
Great agents aren’t complicated. They’re disciplined.
Here are the patterns that consistently improve output quality and adoption.
Keep Agents Narrow
If the output feels like a wall of text, the agent is trying to do too much.
Instead of building one “analyze everything” agent, split it by decision. When an agent has one job, it performs better — and reps are more likely to use it.
Ask for the Summary First
Reps skim. Design for the first 10 seconds.
In your prompt or response format, instruct the agent to start with a short summary table or clear health rating before going into detail. You can also tell it to expand only on areas marked Yellow or Red.
Good patterns to include:
• “Start with a 6-row summary table”
• “Then include detail only for items marked Yellow or Red”
If the most important insight isn’t visible immediately, it won’t be used.
Add Limits to Control Verbosity
Word limits dramatically improve clarity.
Explicitly constrain the output: limit executive summaries, cap the number of risks, restrict next steps. Without constraints, the model will often over-explain.
Tight limits force prioritization — which mirrors how good sellers think.
Force Evidence, Not Opinions
Trust increases when the agent shows its work.
Instruct it to reference observable signals such as buyer quotes, engagement patterns, sentiment shifts, or stage velocity. If evidence is missing, require it to state “Unknown” and propose questions to validate.
This reduces artificial confidence and increases credibility.
Make It Role-Aware
Different roles need different outputs. Specify the intended user in the prompt. The same deal analyzed through different lenses should produce different insights.
Example:
AE: deal strategy, stakeholders, close plan
SE: technical risks, validation plan, demo plan, requirements
Manager: coaching questions, stage accuracy, next actions, forecast risk
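One lightweight way to keep a shared template role-aware is to parameterize the situation line by role. A small sketch, with the focus areas taken from the list above:

    # Focus areas per role, taken from the list above.
    ROLE_FOCUS = {
        "AE": "deal strategy, stakeholders, and the close plan",
        "SE": "technical risks, validation plan, demo plan, and requirements",
        "Manager": "coaching questions, stage accuracy, next actions, and forecast risk",
    }

    def situation_for(role: str) -> str:
        """Build a role-aware situation line for the agent prompt."""
        return f"You are writing for the {role} on this deal. Focus on {ROLE_FOCUS[role]}."

    print(situation_for("Manager"))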
Example: Deal Health
Agent Name: Deal Health
Agent Description: Analyze deal health, execution risks, and highest-impact corrective actions.
Agent Prompt
Analyze the following sales opportunity using the available deal data (CRM stage, activity history, call transcripts, emails, stakeholder list, sentiment signals, and timeline). Produce a concise Deal Health Report focused on in-flight deal execution.
Structure the report as follows:
Overall Deal Health & Trajectory
Assign a clear health status (e.g., Healthy, At Risk, Critical).
Indicate whether the deal is Improving, Stable, or Deteriorating.
Briefly justify the assessment using concrete signals such as stage velocity, buyer engagement, sentiment trends, and recent activity.
Primary Risk Drivers (Ranked)
Identify the top 2–3 specific factors materially hurting the deal.
Rank them by impact on close probability.
Ground each risk in observable evidence (e.g., stalled stage progression, weak economic buyer coverage, declining sentiment in late-stage calls).
Stage & Momentum Anomalies
Highlight where observed buyer behavior (calls, emails, meetings, response times, decision signals) does not align with the current CRM stage or expected buying motion.
Call out any premature stage advancement or false-positive momentum.
Stakeholder & Sentiment Risk
Identify gaps or weaknesses in buying committee coverage (missing roles, low engagement from key personas).
Surface negative or weakening sentiment from high-influence stakeholders, including explicit or implied objections.
Highest-Leverage Corrective Actions
Recommend specific, concrete actions that would most reduce risk right now.
Tie each action directly to the risks identified above (avoid generic next steps).
Prioritize actions that unblock decision progress, strengthen stakeholder coverage, or reverse negative sentiment.
Response format:
Summary table with columns: Area, Status (Green / Yellow / Red), Why, Evidence
Top 3 risks (bullets, 1 sentence each)
Top 5 gaps or unknowns (bullets)
Recommended next steps (5 bullets max); include exact questions to ask on the next call
Limits: executive summary max 120 words
How to roll agents out to your team
A rollout plan that typically works well:
Start with 2-3 agents reps will use weekly
Test on 3 real deals and refine
Cut anything that feels too long, too generic, or not actionable.
Publish as org-wide agents
So every rep has the same starting point.
Coach reps to ask their own questions too
Pre-built agents create consistency. Rep questions create learning and better deal thinking. You want both.
Common pitfalls to avoid
One giant mega agent that tries to do everything
No summary, only long narrative
No evidence, only opinions
Prompts that describe the product instead of the decision
Outputs that don’t end in next actions
No limits (verbosity creeps in fast)
Quick checklist before you hit “Create Agent”
Is the name tied to a real workflow moment?
Can someone scan the output in 20 seconds?
Does it produce actions, questions, and risks?
Is there a structure and/or word limit to prevent verbosity?
If it is too broad, can you split it into two agents?

