A five-part launch series on the Agent Experience layer for B2B work: context and methodology, channels and persona, governed action, research orchestration, and the runtime that makes agents dependable.
You are on a deal team looking at a company before a partner meeting. There are twenty files in the data room, a founder podcast from last week, a customer reference that contradicts the growth story, two old conversations in the CRM, partner notes in Slack, and a Crustdata signal showing hiring velocity shifted in the last month.
Most of the evidence is not conveniently shaped for an agent. The operating plan is a PDF. The customer quote is in a call transcript. The founder claim is in a podcast. The partner concern is in a Slack thread. The hiring signal is external.
Or you are monitoring a portfolio of fifty companies. Each one needs what changed, what they shipped, who they hired, who churned, what showed up in support, and whether the external signal matches the internal story.
Or you are a CSM prepping a QBR: contract terms in the CRM, escalation history in Zendesk, Slack DMs, meeting transcripts, competitive pressure, usage trends, sentiment over time.
Real research is never one search, and it is never one source. The hard part is not finding more data. It is deciding which evidence universe matters for this question, in this moment, for this user.
This is day four of launch week. Monday was context and methodology. Tuesday was presence. Wednesday was action. Today is intelligence.
How agents actually research.
Research Has To Wake Up
Research does not always start with a user typing a question into a chat box.
Sometimes it does. You ask: what changed since our last partner review? The agent answers inline, with the same context graph and methodology it has used all week.
Sometimes research starts on a schedule. A Portfolio Monitor runs every weekday morning, walks the companies you care about, checks what changed, and posts only when there is something worth your attention.
Sometimes research starts because something happened. A new data room file lands. A customer sends an escalation. A CRM meeting moves to tomorrow. A support thread mentions legal, security, churn, or procurement. The agent wakes up because the work changed.
Sometimes research starts from another system. A webhook says a deal moved stages, a renewal entered risk, or a partner meeting got created. The agent turns that event into preparation: context gathered, questions drafted, briefing delivered.
Sometimes research is a heartbeat. Lightweight checks during working hours: did a portfolio signal appear, did a support anomaly cluster, did a customer mention a competitor, did the external story drift from the internal one?
That matters because serious B2B research is often most useful before anyone remembers to ask.
The best agent is not only responsive.
It is situationally awake.
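As a sketch, those five starting points map onto a small set of trigger shapes. The type and field names below are hypothetical, not Dossium's actual configuration:

```typescript
// The five ways research can wake up, expressed as trigger shapes.
// All names here are illustrative, not Dossium's actual API.
type ResearchTrigger =
  | { kind: "user_question"; question: string }             // the user asks inline
  | { kind: "schedule"; cron: string }                      // a monitor runs every weekday morning
  | { kind: "event"; source: string; match: string }        // a data room file lands, a meeting moves
  | { kind: "webhook"; system: string; eventType: string }  // a deal changes stages in the CRM
  | { kind: "heartbeat"; intervalMinutes: number };         // lightweight checks during working hours

// A Portfolio Monitor that runs at 7am on weekdays and posts only when something changed:
const portfolioMonitor: ResearchTrigger = { kind: "schedule", cron: "0 7 * * 1-5" };
```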
Before Search, The Agent Has To Choose A Path
Most research agents start by searching.
That is already too late.
Before search, the agent needs to understand the shape of the request.
Is this a customer-history question? A public-market question? A people-enrichment question? A contradiction check? A QBR prep workflow? A diligence pass across ten companies? A board-update synthesis?
Does it need internal context, external research, enrichment, prior agent conversations, a skill, or all of them?
Should it answer now, ask a clarifying question, or split the work into workers?
This is the first act of research: routing.
Not retrieval.
Routing.
The agent should not experience your company as one search box. It should experience it as a set of evidence surfaces, each useful for a different kind of question.
A renewal-risk question should lean into support history, QBR transcripts, account facts, open commitments, and sentiment. A diligence question should lean into the data room, founder interviews, CRM notes, external hiring, public market signal, and contradictions. A board-prep question should lean into investor threads, product decisions, customer signal, weak signals, and what changed since the last update.
The same graph.
Different path through it.
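A minimal sketch of routing before retrieval, assuming hypothetical names for the question shapes and evidence surfaces (these are not Dossium's actual identifiers):

```typescript
// Classify the shape of the request first, then pick a path through the
// evidence surfaces. All names here are illustrative.
type QuestionShape = "renewal_risk" | "diligence" | "board_prep";
type EvidenceSurface =
  | "support_history" | "qbr_transcripts" | "account_facts" | "open_commitments"
  | "data_room" | "crm_notes" | "external_hiring" | "public_market_signal"
  | "investor_threads" | "product_decisions" | "weak_signals";

// Same graph, different path through it.
const researchPaths: Record<QuestionShape, EvidenceSurface[]> = {
  renewal_risk: ["support_history", "qbr_transcripts", "account_facts", "open_commitments"],
  diligence:    ["data_room", "crm_notes", "external_hiring", "public_market_signal"],
  board_prep:   ["investor_threads", "product_decisions", "weak_signals"],
};
```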
Internal Context Is Not A Smaller Web Search
There is a common mistake in agent design: treating internal retrieval as a smaller cousin of web search, a document store, a plugin, a search box over "your data."
Internal context is not a smaller cousin. It is a peer.
Web search can answer what is public. Your internal graph can answer who talked to whom, what changed, what commitments remain open, what the customer said last quarter, what the support team escalated, what the agent already briefed you on last week, and which source was current when a decision was made.
Dossium gives agents multiple internal evidence surfaces.
Content: docs, emails, transcripts, pages, posts, attachments.
Communications: Slack threads, calls, messages, meeting notes.
Entities: people, companies, products, places, events.
Facts: commitments, decisions, escalations, changes, goals.
Conversations: prior chats and briefings with the agent.
Skills: methodology and playbooks.
Memories: notes the agent carries across runs.
Entity exploration: the canonical record and the surrounding graph.
All of that is RAG. But it is not one RAG call against one index.
Search retrieves fragments.
The graph gives the agent a model of the work.
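A rough sketch of the difference, with hypothetical names: rather than one RAG call against one index, the agent fans a query across surfaces and keeps each fragment attributed to where it came from.

```typescript
// Fan one query across the evidence surfaces instead of a single index.
// Surface names follow the list above; the search function is a stand-in.
const surfaces = ["content", "communications", "entities", "facts",
                  "conversations", "skills", "memories"] as const;

async function retrieve(
  query: string,
  search: (surface: string, q: string) => Promise<string[]>,
) {
  const results = await Promise.all(
    surfaces.map(async s => ({ surface: s, hits: await search(s, query) })),
  );
  // Fragments stay attributed to their surface, so downstream reasoning
  // knows whether a claim came from a contract, a Slack thread, or a memory.
  return results.filter(r => r.hits.length > 0);
}
```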
Real Context Means Handling Contradiction
Retrieving both sides of a contradiction is not enough.
The CRM says the renewal is on track; the last QBR transcript says the executive sponsor is frustrated. The signed contract says the customer bought the enterprise package; the support escalation says they still do not have SSO enabled. The strategy doc says healthcare is the target market; the last three sales calls are all financial services.
A retrieval-only system returns fragments. A research agent has to reason about source authority, recency, and conflict: which source is canonical for the claim, when it was last confirmed, whether the older artifact is stale or the newer Slack thread is just a one-off, and whether the contradiction itself matters more than either side.
This is why provenance and temporal modeling matter. Dossium does not flatten evidence into anonymous chunks. Facts carry source links, categories, timestamps, and validity windows. Entities have relationships. Prior conversations remain retrievable.
So the agent can say:
CRM shows green, but the last two support escalations and the QBR transcript point to renewal risk.
Not:
Here are five relevant documents.
Real context means surfacing the conflict, not smoothing it into a confident sentence.
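A minimal sketch of what that provenance buys, with illustrative field names rather than Dossium's actual schema:

```typescript
// A fact that carries provenance instead of arriving as an anonymous chunk.
interface Fact {
  claim: string;          // "renewal is on track"
  source: string;         // "crm", "qbr_transcript", "support_escalation", ...
  authority: number;      // how canonical this source is for this kind of claim
  lastConfirmed: Date;    // when the claim was last verified
  validUntil?: Date;      // validity window, if the claim can go stale
}

// Stand-in for what would be a model call in practice, not string logic.
declare function contradicts(a: string, b: string): boolean;

// Surface the conflict instead of smoothing it into a confident sentence.
function findConflicts(facts: Fact[]): [Fact, Fact][] {
  const conflicts: [Fact, Fact][] = [];
  for (let i = 0; i < facts.length; i++)
    for (let j = i + 1; j < facts.length; j++)
      if (contradicts(facts[i].claim, facts[j].claim))
        conflicts.push([facts[i], facts[j]]);
  return conflicts;
}
```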
External Signal Is Not One Thing
External research is not "the agent calls Google."
It is a set of signal surfaces, each useful for a different job.
Parallel Web Systems is the company-research workhorse: structured web search, company background, news, Reddit threads, and fast external context during an agent run.
Perplexity is useful when the agent needs synthesis, not just links.
Tavily and Exa cover general web retrieval from different angles: clean factual search and semantic web search.
Podscan watches podcast transcripts, which matter more than people expect. Executives say things in interviews and podcasts that they would never put into a press release or data room.
For data rooms, Reducto is our default PDF extraction partner. The agent should not have to treat a forty-page financial appendix as a blob. PDFs become Markdown with enough structure for Graphlit to index, extract, and connect.
And Crustdata powers enrichment: company firmographics, people profiles, headcount, roles, hiring velocity, funding events, LinkedIn activity, profile updates, and Signals that keep flowing after setup.
The important distinction is timing.
Some research happens when the user asks.
Some has been warming up all along.
A Crustdata Signal is not a lookup. It is a subscription. Once a company signal feed is running, job postings, funding rounds, news mentions, company LinkedIn posts, and profile updates flow into the same graph the agent retrieves from.
By the time your 7am Portfolio Monitor asks "what changed?", some of the answer may already be there.
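As a hedged sketch of that timing difference: a signal event lands in the graph on its own schedule, not when someone asks. The event fields and graph interface below are hypothetical, not Crustdata's or Dossium's actual APIs.

```typescript
// A subscription delivers events continuously; ingestion just files them
// into the same graph the agent retrieves from. All names are illustrative.
interface SignalEvent {
  company: string;
  kind: "job_posting" | "funding_round" | "news_mention"
      | "linkedin_post" | "profile_update";
  observedAt: Date;
  payload: unknown;
}

interface Graph {
  addFact(event: SignalEvent): Promise<void>;
}

async function onSignal(event: SignalEvent, graph: Graph): Promise<void> {
  // By the time the morning monitor runs, these facts are already retrievable.
  await graph.addFact(event);
}
```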
Workers Are How Deep Research Scales
Some questions are simple. Most useful B2B research is not.
"Classify these ten companies for the partner meeting" has phases. Enrich the companies. Identify key people. Enrich those people. Fill gaps with external research. Compare against internal context. Classify. Synthesize.
The agent can do that as one long chain, but that is rarely the best shape.
Deep research needs workers.
The easy mistake is to split workers by source: one for web, one for CRM, one for Slack.
That is usually wrong.
Useful workers split by analytical lens, not by data source.
A Reality Check worker asks what is actually true now, separated from stale claims and marketing copy.
A Traction & Signals worker asks what changed: hiring, funding, launches, churn, usage, support, public mentions.
A Customer Context worker asks what the account actually said, bought, escalated, renewed, delayed, or promised.
A Strategic Implications worker asks what the evidence means for the user's goal.
Each worker can pull from any source it needs. The Reality Check worker might use CRM, Slack, a data-room PDF, Parallel, and Crustdata in the same run. The lens constrains the reasoning. The sources supply evidence.
That is the difference between parallel search and parallel research.
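Concretely, a sketch of that split with hypothetical names: each worker is a lens with access to every source, run in parallel, then reconciled in a synthesis step.

```typescript
// Workers split by analytical lens, not by data source.
interface Worker {
  lens: string;       // the question that constrains the reasoning
  sources: string[];  // "*" = any evidence surface the lens needs
}

const workers: Worker[] = [
  { lens: "What is actually true now, separate from stale claims?", sources: ["*"] },
  { lens: "What changed: hiring, funding, launches, churn, usage, support?", sources: ["*"] },
  { lens: "What did the account say, buy, escalate, renew, or promise?", sources: ["*"] },
  { lens: "What does the evidence mean for the user's goal?", sources: ["*"] },
];

// Parallel research, not parallel search: lenses run concurrently,
// then a synthesis step reconciles what they found.
async function runDeepResearch(run: (w: Worker) => Promise<string>) {
  const findings = await Promise.all(workers.map(run));
  return findings;
}
```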
What Comes Back
The point of all this machinery is not a pile of citations. It is a briefing that is hard to produce any other way.
For a deal team, the final answer should look more like this:
- Current read: the company is credible, but the growth story is less clean than the memo implies.
- New signal: hiring velocity shifted toward enterprise sales and AI infrastructure roles in the last 30 days.
- Contradiction: the data-room model assumes healthcare expansion, while recent calls and public hiring point toward financial services.
- Source authority: the signed customer reference supports retention, but the latest support escalation weakens the renewal story.
- Open questions: why did the customer reference omit the SSO delay, and who owns the enterprise security roadmap?
- Recommended next step: ask the founder about vertical focus and bring the support escalation into partner discussion before voting.
For a QBR, it might look like this:
- Account state: usage is up, but sentiment is down.
- Risk: support volume clustered around the same integration issue.
- Commitment: the customer was promised an SSO timeline on the last call.
- Opportunity: three stakeholders engaged with the new workflow but only one is in the CRM.
- Recommended next step: lead with the unresolved commitment before talking about expansion.
That is the bar: a synthesized read that names what changed, what conflicts, what matters, and what to do next.
Not "here are ten relevant documents."
Not "here is a web summary."
The Context Compounds
Every briefing, conversation, agent run, and delivered output becomes future context.
Last week's diligence brief can inform this week's partner prep. Yesterday's QBR prep can inform today's renewal-risk watcher. A support escalation can become a fact that appears in the next account briefing. A portfolio monitor can notice that a company signal keeps recurring across weeks, not just once.
This is the compounding loop. The agent is not starting over every time. The graph gets warmer, the methodology gets reused, the research paths get more targeted.
That is what makes Dossium different from "ask a model to search the web."
Tomorrow
Everything above needs a runtime. Research can run for minutes or hours. Workers need to finish even if the process that started them disappears. A Slack thread that kicked off research needs the result delivered back to that same thread later.
Tomorrow is the technical deep dive: durable workflows, scheduling, queues, harness controls, context-window management, prompt caching, and side-effect gating.
Agents are not just model calls.
They are runtimes.
Getting Started
Give the agent ten companies and ask the question that usually eats an afternoon:
What changed, what conflicts, and what should we ask next?
You will see the difference between a search chain and a research plan in the first answer.
Sign up at dossium.ai.
