
How Pod agents think and work

Learn what happens while a Pod agent is running — context loading, reasoning, tool calls, and web browsing — and how to read the Thoughts panel.

Written by Patrick Monnot
Updated today

When you ask a Pod agent to do something, it doesn't just type back an answer. It stops, looks around, reasons through the problem, reaches for the tools it needs, and only then writes. This article is a short tour of what that looks like on screen — and of the Thoughts panel where you can watch it happen.

The experience is identical in the Pod web app and the Chrome Extension.

How a Pod agent thinks 🧠

Under the hood, a Pod agent works a lot like a thoughtful teammate — not like a chatbot.

Picture what happens when you ask a strong teammate to help you prep for a meeting with a stalled account. They don't answer off the top of their head. They pull up the account in the CRM, skim the last few meeting notes, check recent email, maybe glance at the company's latest press release, then form a point of view. When they finally come back to you, their answer is grounded in real context — and if you push back on any part of it, they can explain where each piece came from.

A Pod agent follows the same loop:

  1. Gather context. Before doing anything else, the agent loads the relevant facts it has access to — deal record, activity timeline, recent meetings, CRM fields, connected knowledge.

  2. Think in steps. The agent reasons about what you asked in short, labeled steps instead of trying to answer in one shot. That step-by-step approach is often called a chain of thought, and it's one of the biggest reasons modern agents produce higher-quality, better-grounded answers than plain chat models.

  3. Use tools. When reasoning alone isn't enough, the agent calls tools — fetching more data (e.g. pulling a HubSpot contact) or taking an action on your behalf (e.g. drafting an email). Tools come from Pod's core integrations, from any MCP servers your workspace has connected, and, if the agent has it enabled, from the live web.

  4. Write the answer. Once the agent has what it needs, it writes a final response and, where appropriate, proposes actions for you to approve.
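The four steps above can be sketched in a few lines of Python. This is a hypothetical illustration of the loop, not Pod's actual implementation; every function and field name here is made up.

```python
def run_agent(request, context, tools):
    """Gather context -> think in steps -> call tools -> write the answer."""
    # 1. Gather context: keep only the facts relevant to this request.
    relevant = {k: v for k, v in context.items() if k in request["needs"]}

    # 2. Think in steps: each step is a short, labeled unit of reasoning.
    steps = [{"label": f"Checking {k}", "fact": v} for k, v in relevant.items()]

    # 3. Use tools when the loaded context alone isn't enough.
    tool_results = [tools[name]() for name in request.get("tools", [])]

    # 4. Write the answer, grounded in the steps and tool results.
    answer = f"Grounded in {len(steps)} fact(s) and {len(tool_results)} tool call(s)."
    return {"steps": steps, "tool_results": tool_results, "answer": answer}
```

Everything the function produces on the way to `answer` is what the Thoughts panel surfaces: the labeled steps and the tool results, not just the final text.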

This loop has two payoffs. Accuracy — forcing the agent to plan and consult real sources makes it far less likely to hallucinate or over-generalize. And trust — because the reasoning and the tool calls are visible as they happen, you can see how the agent arrived at an answer, not just what it concluded. If it pulled the wrong contact or leaned on the wrong fact, you'll spot it immediately, not after the email is sent.

The three stages of a run 🎬

Every agent run moves through three visible stages:

  1. Loading context — Pod is gathering the deal, activity, meeting, and CRM data the agent needs. You'll see a small Loading context indicator at the bottom of the conversation.

  2. Thinking — the agent is reasoning and calling tools. A Thinking… header with three bouncing dots appears above the in-progress response, and the Thoughts section automatically opens so you can follow along.

  3. Response — the final answer streams in below. Thoughts auto-collapses so the answer takes center stage. You can reopen Thoughts at any time.

What's in the Thoughts panel 🔍

The Thoughts section (brain-circuit icon, right under the agent's turn) is where the agent's working memory lives. It contains two kinds of steps.

Reasoning steps

Short labeled blocks of the agent's working thought β€” for example Planning the email or Checking the deal stage. They're written in plain language and help you sanity-check the agent's logic before you trust the answer.

Tool calls

Every time the agent calls a tool, it appears in the panel as a wrench-icon box. The title uses the format {server} -> {tool}, for example HubSpot MCP -> update_deal_stage. Click the box to expand it and see:

  • Arguments β€” the exact JSON the agent sent to the tool.

  • Failure β€” an error message if the tool call didn't work.

  • A status badge when relevant: Pending approval (yellow), Failed (red), or Incomplete (red).

Read tool calls to verify that the agent is reaching for the right information or the right action — especially when an agent is updating a CRM record or drafting an email on your behalf.

What "tool calls" actually are 🛠️

Tool calls are how your agent gets work done outside of pure reasoning. They come in two flavors:

  • Read tools β€” fetching information Pod doesn't already have in context (for example, pulling a contact from HubSpot, checking a recent email). These execute immediately.

  • Write tools β€” actions that change something in an external system (sending an email, updating a CRM field, creating a task). These wait for your approval. In the Thoughts panel the tool call is tagged Pending approval, and the agent message shows the same action as a pending action you can approve or reject.

Tools are provided by Pod's core integrations (like HubSpot) and by any MCP servers your workspace has connected.
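The read/write split amounts to a simple dispatch rule: reads run now, writes wait for a human. This is an illustrative sketch, not Pod's internals; the tool names and the approval hook are assumptions for the example.

```python
# Hypothetical tool registry -- names are illustrative only.
READ_TOOLS = {"fetch_contact", "check_recent_email"}
WRITE_TOOLS = {"send_email", "update_deal_stage", "create_task"}

def dispatch(tool_name, execute, request_approval):
    """Run read tools immediately; hold write tools for approval."""
    if tool_name in READ_TOOLS:
        return execute(tool_name)           # read: executes right away
    if tool_name in WRITE_TOOLS:
        return request_approval(tool_name)  # write: shows as "Pending approval"
    raise ValueError(f"unknown tool: {tool_name}")
```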

Web browsing 🌐

Agents can also reach outside Pod's connected systems to the open web, but only when you turn it on.

  • Where to enable it: in the agent builder, toggle Allow web browsing on. On an agent card in Settings β†’ Agents, the current state shows as Browser on or Browser off.

  • What it's for: live context from the outside world β€” company news, industry announcements, public product info.

  • What it's not for: facts Pod already knows about your deal. Deal-specific information should still come from the loaded deal context, not from browsing.

Most templates ship with browsing off. Turn it on for research-heavy agents (for example, an account-research agent), and leave it off for agents that only work with internal context.

Why the Thoughts panel is worth reading 👀

A few practical reasons to open it up:

  • Sanity-check the reasoning before you trust the answer.

  • Verify the right tool was called β€” especially for CRM writes.

  • Debug a disappointing response: if the agent never called a tool you expected, the prompt may need tightening, or the agent may need web browsing on. For help improving prompts, see the AI Agent Best Practices Guide.

  • Learn what your agent actually does so you can iterate on the setup.

What this article does not cover

This article sticks to what you see during a run; adjacent workflows are covered in their own articles.


💡 Need help? Send us a message via the in-app chat or email us at [email protected].

🤝 Want to talk to someone? Book a session with one of our specialists.
