You have a call with Acme in an hour. You open Dossium, click the meeting on your calendar, and ask for a briefing. Thirty seconds later, you know who's on the call, what happened since the last QBR, which commitments are still open, and that Sarah Chen got promoted last month.
Every fact is sourced. Every claim is traceable. You didn't search for anything. You asked, and you're ready.
This post is about what happens in those thirty seconds. Not the UX — the intelligence layer underneath. Because the difference between Dossium and "AI-powered search" isn't the interface. It's what the system does before it ever writes a sentence.
In my previous post, I described the three-layer data model that makes relationship intelligence possible: content, entities, and facts. This post is about how those layers compose into something you can act on — briefings and dossiers that synthesize what your organization knows about an account into grounded, structured intelligence.
Three Entry Points, One Engine
Dossium produces intelligence from three starting points:
Briefings start from content. You have a meeting on your calendar, an email thread, a Slack conversation. The seed is a specific piece of content, and the question is: "What do I need to know about the people and accounts involved in this?"
Dossiers start from entities. You're looking at an account page, a person profile, your portfolio view. The seed is an account or a person, and the question is: "What does our organization know about them?"
Chats start from a question. You don't need a structured briefing — you just want to ask something. "What did Sarah say about the API migration?" "Which accounts mentioned budget concerns this quarter?" That's an open-ended conversation with your account context, powered by agentic RAG across the same knowledge graph.
Different entry points, same layers underneath. Briefings and dossiers run the full research pipeline — resolve, gather, evaluate, synthesize, verify. Chats are more fluid — agentic sessions using our MCP tools and any third-party MCPs you've connected, grounded in the same identity-resolved, fact-aware context.
This maps to how people actually work:
- Before a call: You have a calendar event. You want a briefing on the people and account involved. That's a briefing — content seed, relationship context.
- Preparing a QBR: You're looking at an account. You want the full picture — timeline, key people, open items, risks. That's a dossier — entity seed, comprehensive view.
- Monday morning: You want to know what happened across your portfolio since Friday. That's a portfolio briefing — multiple entity seeds, cross-account synthesis.
- After a meeting: You have a transcript. You want to capture decisions, action items, and what changed. That's a debrief — content seed, backward-looking synthesis.
- Between meetings: You want a quick answer. "When does Acme's contract renew?" That's a chat — open-ended, instant, grounded.
Multiple entry points. Same knowledge. The intelligence adapts to the question.
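To make that shape concrete, here's a minimal sketch of how the three entry points might dispatch into one shared engine. The names (`ResearchEngine`, `Seed`, the stage methods) are illustrative stand-ins, not Dossium's actual internals; the stages they stub out are the ones walked through in the next section.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical seed types -- each entry point produces a seed for the same engine.
SeedKind = Literal["content", "entity", "question"]

@dataclass
class Seed:
    kind: SeedKind
    ref: str  # a calendar event id, an account id, or a free-text question

class ResearchEngine:
    """Illustrative shared engine: briefings and dossiers run the full pipeline;
    chats take the lighter agentic path over the same knowledge graph."""

    def run(self, seed: Seed) -> dict:
        if seed.kind == "question":
            return self._agentic_chat(seed)           # agentic RAG over MCP tools
        context = self._resolve(seed)                 # identity-resolved entities
        context = self._gather(context)               # facts + content + relationships
        context = self._evaluate_and_deepen(context)  # fill coverage gaps
        draft = self._synthesize(context)             # structured, cited output
        return self._verify(draft, context)           # check claims against sources

    # Stubs standing in for the steps described below.
    def _resolve(self, seed): return {"seed": seed}
    def _gather(self, ctx): return ctx
    def _evaluate_and_deepen(self, ctx): return ctx
    def _synthesize(self, ctx): return {"sections": [], "citations": []}
    def _verify(self, draft, ctx): return draft
    def _agentic_chat(self, seed): return {"answer": None, "citations": []}

briefing = ResearchEngine().run(Seed(kind="content", ref="calendar-event-123"))
```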
What the Engine Does
Here's the conceptual flow.
1. Read the Seed
Every briefing starts with a seed: a calendar event, an email, an account, a person. By the time the engine touches it, the seed has already been ingested — content processed, markdown extracted, entities identified. The engine reads what's already there: who's mentioned, what organizations are involved, what time period is relevant, what the content says.
For a meeting, that's attendees, organizer, the agenda (if there is one), and the time window. For an email, it's the sender, recipients, CC list, and body. For an account, it's the entity itself and its known relationships.
The seed tells the engine where to look. Everything that follows is about filling in what the seed doesn't say.
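As a rough illustration, here's what an already-ingested calendar-event seed might carry by the time the engine reads it. The field names are assumptions for the sketch, not the actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a pre-processed meeting seed. Entities were identified
# during ingestion; nothing here is fetched or resolved at briefing time.
@dataclass
class MeetingSeed:
    title: str
    organizer: str            # reference to an identified person entity
    attendees: list[str]      # references to identified person entities
    organizations: list[str]  # organizations mentioned or inferred from attendees
    starts_at: datetime
    agenda: str | None = None  # present only if the invite had one
    lookback_days: int = 30    # the time window a briefing considers

seed = MeetingSeed(
    title="Acme quarterly sync",
    organizer="person:sarah-chen",
    attendees=["person:sarah-chen", "person:marcus-wong"],
    organizations=["org:acme-corp"],
    starts_at=datetime(2025, 6, 12, 15, 0),
)
```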
2. Look Up the Actors
The entities extracted from the seed — the people and organizations — are already resolved in the knowledge graph. Identity resolution happens continuously as content flows into the system, not at query time. By the time you ask for a briefing, "Sarah Chen" on a calendar invite is already a canonical person linked to every email, meeting, and Slack message she's appeared in.
The engine looks up those resolved entities and pulls their full profiles: title (SVP Engineering, since December), employer (Acme Corp), communication history (47 interactions across email, Slack, and meetings), relationships to other people and accounts.
This is the step most systems skip — and the one that makes the difference. Without resolved entities, you're working with text. With them, you're working with people who have history.
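A sketch of what that lookup might return, assuming a hypothetical graph client and made-up identifiers. The point is that this step is a lookup of already-resolved profiles, not resolution at query time.

```python
from dataclasses import dataclass

@dataclass
class PersonProfile:
    canonical_id: str
    name: str
    title: str                  # e.g. "SVP Engineering"; the change date lives as a fact
    employer: str
    interaction_count: int      # emails + meetings + Slack messages linked to this person
    related_accounts: list[str]

# Stand-in for the knowledge graph; a real query would hit the graph, not a dict.
GRAPH = {
    "person:sarah-chen": PersonProfile(
        canonical_id="person:sarah-chen",
        name="Sarah Chen",
        title="SVP Engineering",
        employer="Acme Corp",
        interaction_count=47,
        related_accounts=["org:acme-corp"],
    ),
}

def lookup_actors(attendee_refs: list[str]) -> list[PersonProfile]:
    """Fetch resolved profiles for the seed's attendees; no resolution happens here."""
    return [GRAPH[ref] for ref in attendee_refs if ref in GRAPH]

profiles = lookup_actors(["person:sarah-chen", "person:marcus-wong"])
```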
3. Gather Context Across All Three Layers
With resolved entities in hand, the engine retrieves context from all three layers simultaneously:
Facts: Temporal assertions linked to the resolved entities — commitments, decisions, escalations, changes, goals. Not keyword matches, but structured facts with categories, validity periods, and relevance scores. The system prioritizes high-signal categories: commitments that were made, decisions that were taken, escalations that haven't been resolved.
Content: Recent emails, meeting transcripts, messages, documents — the evidence trail for the account and its people. Ranked by relevance and recency, scoped to the right time window. Briefings look back 30 days. Dossiers look back 90.
Entity relationships: How the resolved people connect to each other, to the account, and to your team. Who's the primary contact? Who was the primary contact six months ago? Who on your team has the deepest relationship?
Three retrieval channels, running in parallel, drawing from three different layers of the same knowledge graph.
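Conceptually, the three channels can run concurrently, something like the sketch below. The fetchers are stubs with invented shapes; what matters is that facts, content, and relationships are separate retrievals over the same graph.

```python
import asyncio

# Three hypothetical retrieval channels, one per layer of the knowledge graph.
async def fetch_facts(entity_ids, lookback_days):
    # Temporal assertions: commitments, decisions, escalations, with validity periods.
    return [{"id": "fact:101", "category": "commitment", "entity": e, "status": "open"}
            for e in entity_ids]

async def fetch_content(entity_ids, lookback_days):
    # Recent emails, transcripts, messages -- ranked by relevance and recency.
    return [{"id": "email:442", "entity": e} for e in entity_ids]

async def fetch_relationships(entity_ids):
    # How resolved people connect to each other, the account, and your team.
    return [{"from": e, "to": "org:acme-corp", "role": "primary_contact"} for e in entity_ids]

async def gather_context(entity_ids, lookback_days=30):
    """Run the three channels concurrently; a dossier would pass lookback_days=90."""
    facts, content, relationships = await asyncio.gather(
        fetch_facts(entity_ids, lookback_days),
        fetch_content(entity_ids, lookback_days),
        fetch_relationships(entity_ids),
    )
    return {"facts": facts, "content": content, "relationships": relationships}

context = asyncio.run(gather_context(["person:sarah-chen"]))
```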
4. Evaluate and Deepen
Here's where the engine does something most systems don't: it looks at what it gathered and asks whether it's enough.
For each person involved, does the system have sufficient context? Name, title, organization, recent interactions, open commitments? Or are there gaps — a key attendee with no recent data, an account with no facts from the last quarter, an open commitment with no follow-up?
If gaps exist, the engine runs targeted follow-up queries. Not another broad search — specific retrievals designed to fill specific holes. Additional fact lookups, deeper entity resolution, even external enrichment for context the internal systems don't have.
This evaluate-and-deepen cycle is what separates synthesis from summarization. A summarizer takes whatever it finds and writes it up. A research engine assesses coverage, identifies what's missing, and goes looking before it writes anything.
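A minimal sketch of that cycle, under the assumption that coverage is judged per person against a small set of required context. The field names and retrievers are invented for the example.

```python
REQUIRED_CONTEXT = ("title", "employer", "recent_interactions", "open_commitments")

def find_gaps(person_ctx: dict) -> list[str]:
    """Which required pieces of context are missing for one person."""
    return [f for f in REQUIRED_CONTEXT if not person_ctx.get(f)]

def evaluate_and_deepen(context: dict, retrievers: dict) -> dict:
    """Assess coverage per person, then run targeted follow-up retrievals
    for the specific holes found -- not another broad search."""
    for person_id, person_ctx in context["people"].items():
        for gap in find_gaps(person_ctx):
            person_ctx[gap] = retrievers[gap](person_id)   # one retriever per gap type
            if not person_ctx[gap]:
                # An honest gap beats a confident guess: record it for synthesis.
                context.setdefault("known_gaps", []).append((person_id, gap))
    return context

# Toy usage: Marcus has no recent interactions or open commitments on file.
context = {"people": {"person:marcus-wong": {
    "title": "VP Product", "employer": "Acme Corp",
    "recent_interactions": [], "open_commitments": []}}}
retrievers = {f: (lambda person_id: []) for f in REQUIRED_CONTEXT}  # stand-in follow-ups
deepened = evaluate_and_deepen(context, retrievers)  # known_gaps now lists Marcus's holes
```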
5. Synthesize
All the context — resolved entities, gathered facts, supporting content, additional findings — is assembled and passed to a language model with a structured prompt tailored to the output type.
The synthesis isn't "summarize these documents." It's: "Given these resolved entities, these temporal facts, and this supporting evidence, produce a briefing that covers who's involved, what the relationship looks like, where we left off, what's still open, and what to watch for."
Every claim in the output is grounded in source material. Citations aren't optional — they're structural. When the briefing says "Sarah mentioned budget concerns," that links to the specific Slack thread. When it says "Acme renewed for 2 years at $50k," that links to the contract record.
And when the system doesn't know something, it says so. An honest gap — "No recent interaction data for Marcus Wong" — is more useful than a confident guess.
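To show what "structured prompt" means here, a sketch of how the instruction might be assembled. The section names and wording are assumptions; the shape is the point: named sections, mandatory citations, and explicit permission to state gaps.

```python
import json

def build_synthesis_prompt(output_type: str, context: dict) -> str:
    """Not 'summarize these documents': name the sections to produce and require
    every claim to cite a source id from the supplied context."""
    sections = {
        "meeting_prep": ["who_is_involved", "relationship_context",
                         "where_we_left_off", "open_items", "watch_for"],
    }[output_type]
    return (
        f"Produce a {output_type} briefing with these sections: {', '.join(sections)}.\n"
        "Ground every claim in the context below and cite the source id in brackets.\n"
        "If the context does not support a claim, state the gap instead of guessing.\n\n"
        f"Resolved entities:\n{json.dumps(context['entities'], indent=2)}\n\n"
        f"Temporal facts:\n{json.dumps(context['facts'], indent=2)}\n\n"
        f"Supporting content:\n{json.dumps(context['content'], indent=2)}\n"
    )

prompt = build_synthesis_prompt("meeting_prep", {
    "entities": [{"id": "person:sarah-chen", "title": "SVP Engineering"}],
    "facts": [{"id": "fact:101", "category": "commitment", "text": "Send revised SOW"}],
    "content": [{"id": "slack:thread-881", "snippet": "budget concerns raised"}],
})
```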
6. Verify
The final step: the output is checked against the source material it was built from.
Is the briefing complete relative to what the system gathered? Is every claim supported by evidence? Are there assertions that don't trace back to a source?
Unsupported claims are flagged. Gaps between what was gathered and what was synthesized are identified. The system doesn't ship output it can't verify.
This is what "grounded, not hallucinated" means in practice. Not a marketing claim — an architectural guarantee. Facts you can trace, not just trust.
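In spirit, the check is simple: every claim must cite a source that exists in the gathered context, and anything that doesn't gets flagged rather than shipped. A toy version, with invented ids:

```python
def verify(draft: dict, context: dict) -> dict:
    """Flag any claim whose citations don't trace back to gathered sources."""
    known_sources = ({item["id"] for item in context["content"]} |
                     {fact["id"] for fact in context["facts"]})
    flagged = [c for c in draft["claims"]
               if not c.get("source_ids") or not set(c["source_ids"]) <= known_sources]
    return {**draft, "unsupported_claims": flagged, "verified": not flagged}

draft = {"claims": [
    {"text": "Sarah mentioned budget concerns", "source_ids": ["slack:thread-881"]},
    {"text": "Acme renewed for 2 years at $50k", "source_ids": []},   # will be flagged
]}
context = {"content": [{"id": "slack:thread-881"}], "facts": []}
result = verify(draft, context)   # the second claim lands in unsupported_claims
```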
The Loop
One more thing about how the engine works: the output feeds back into the system.
A briefing gets published as content — the same kind of content as an email or a meeting transcript. It goes through entity and fact extraction, becomes searchable, and enters the knowledge graph. Next time you ask about Acme, last week's briefing is part of the evidence base. The system's understanding compounds over time.
Chats feed the loop a little differently. When you ask Dossium a question — open-ended, not a structured briefing — that's an agentic RAG session using our MCP tools and any third-party MCPs you've connected. The chat itself isn't content, but the entities and facts it surfaces feed back into the graph. Every interaction makes the next one richer.
This is the compounding effect: briefings produce knowledge that informs future briefings. Chats surface facts that ground future chats. The graph grows with use.
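A sketch of the publish-back step, with ingestion and extraction stubbed out as hard-coded values. The mechanics here are stand-ins; the loop itself, a briefing re-entering the graph as ordinary content, is the idea.

```python
class GraphStub:
    """Stand-in for the knowledge graph; the real store is Graphlit's, not this."""
    def __init__(self):
        self.links, self.facts = [], []
    def link(self, content_id, entity_id):
        self.links.append((content_id, entity_id))
    def assert_fact(self, fact, source):
        self.facts.append({**fact, "source": source})

def publish_briefing(markdown: str, graph: GraphStub) -> str:
    """A finished briefing is ingested like any other content, then its
    entities and facts join the graph as evidence for future questions."""
    content_id = "content:briefing-acme-2025-06-12"              # would come from ingestion
    extracted_entities = ["org:acme-corp", "person:sarah-chen"]  # would come from extraction
    extracted_facts = [{"category": "decision", "text": "Pilot extended through Q3"}]
    for entity_id in extracted_entities:
        graph.link(content_id, entity_id)
    for fact in extracted_facts:
        graph.assert_fact(fact, source=content_id)
    return content_id

graph = GraphStub()
publish_briefing("# Acme meeting prep\n...", graph)
```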
What You Get
The engine produces different output depending on what you asked for. Each output type has a different structure, a different emphasis, and a different set of sections — but they're all built from the same three layers.
Meeting Prep
You have a call in an hour. The briefing covers:
- Who's on the call — resolved identities with titles, roles, and relationship history. External attendees first, then your team.
- Relationship context — how long you've known them, last contact, interaction frequency, key milestones in the relationship.
- Where you left off — the 3-5 most relevant recent interactions, distilled to substance. Not a chronological email list — themes and outcomes.
- Open items — unfulfilled commitments, pending decisions, unresolved escalations. Who made each commitment, when it was made, and what the current status is.
- Watch for — risks, stale commitments, gaps in knowledge. Things that could come up that you'd rather not be surprised by.
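If you squint, that's a schema. A rough sketch of the shape, with invented field names mirroring the sections above:

```python
from dataclasses import dataclass, field

# Hypothetical meeting-prep structure; the real output schema is Dossium's, not this.
@dataclass
class MeetingPrepBriefing:
    attendees: list[dict]            # resolved identities, external attendees first
    relationship_context: str        # tenure, last contact, interaction frequency
    where_we_left_off: list[str]     # 3-5 recent interactions, distilled to substance
    open_items: list[dict]           # commitments/decisions/escalations with owner and status
    watch_for: list[str]             # risks, stale commitments, known gaps
    citations: dict[str, list[str]] = field(default_factory=dict)  # claim -> source ids
```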
Account Dossier
You're looking at a customer page and want the full picture:
- Overview — what this account is, the commercial relationship, current state.
- Key people — the stakeholder map. Who matters, what they care about, how their roles have evolved.
- Relationship timeline — how the relationship has developed over time. Not a list of events — a narrative arc.
- Recent activity — what happened in the last 90 days across all channels.
- Open items — same as meeting prep, but broader scope.
- Risks and gaps — where the relationship is thin, where context is missing, where attention is needed.
Portfolio Briefing
Monday morning. You want to know what happened across your accounts:
- Needs attention — accounts with open escalations, stale commitments, or negative signals.
- Coming up — upcoming meetings, renewals, QBRs across your portfolio.
- What happened — significant activity since your last briefing, grouped by account.
Post-Meeting Debrief
You just got off a call. The debrief captures:
- Decisions made — what was agreed, by whom.
- Action items — who committed to what, with context.
- What changed — new information that updates existing facts.
- Follow-up needed — threads that weren't resolved.
Effort as a Dial
Not every situation requires the same depth. Checking on an account between meetings is different from preparing a board-level QBR.
Dossium offers three effort levels:
Quick — seconds. The engine parses the seed, resolves entities, gathers the most relevant facts and content, and synthesizes immediately. No evaluation cycle, no follow-up queries. Good for a fast check, a Slack message context card, a glance before switching tabs.
Standard — tens of seconds. Full pipeline: resolve, gather, evaluate, deepen, synthesize, verify. The system assesses coverage, fills gaps, and quality-checks the output. This is the default for meeting prep and account briefings.
Deep — as long as it takes. Everything in standard, plus external enrichment — web search for company news, funding announcements, leadership changes. Deeper entity resolution. More comprehensive fact gathering. Recursive follow-up queries when the first pass reveals gaps that need filling. This is for QBR prep, board meeting preparation, or when you need the most complete picture available. The system decides how deep to go based on what it finds — and what it doesn't.
You choose the depth. The system adapts its thoroughness, its model selection, and its verification rigor to match.
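One way to picture the dial is as a small set of pipeline switches. The knobs below are assumptions for illustration; the real levers (model selection, time budgets, verification rigor) belong to the product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EffortProfile:
    evaluate_and_deepen: bool   # run the coverage assessment and gap-filling cycle?
    external_enrichment: bool   # web search for news, funding, leadership changes?
    verify_output: bool         # check the draft against its sources before shipping?
    max_follow_up_rounds: int   # how many recursive follow-up passes are allowed

EFFORT_LEVELS = {
    "quick":    EffortProfile(evaluate_and_deepen=False, external_enrichment=False,
                              verify_output=False, max_follow_up_rounds=0),
    "standard": EffortProfile(evaluate_and_deepen=True,  external_enrichment=False,
                              verify_output=True,  max_follow_up_rounds=1),
    "deep":     EffortProfile(evaluate_and_deepen=True,  external_enrichment=True,
                              verify_output=True,  max_follow_up_rounds=3),
}
```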
What This Isn't
It's worth being explicit about what this isn't, because the market is full of things that sound similar but work differently.
It's not RAG with a nice prompt. RAG retrieves text chunks by similarity and passes them to a language model. There's no identity resolution, no temporal awareness, no fact extraction, no coverage evaluation. The output is only as good as the chunks that happened to match. Dossium operates on structured data — resolved entities and temporal facts — not just similar text.
It's not a chatbot that searches your files. Chatbots are reactive — you ask a question, they search, they respond. Dossium's briefing engine is proactive — it knows the structure of what a briefing should contain, it evaluates whether it has enough context to fill that structure, and it goes looking for what's missing before it writes anything.
It's not a meeting summarizer. Meeting summarizers take a transcript and compress it. They don't know who the people are, what the account history looks like, or what commitments existed before the meeting started. A Dossium debrief knows all of that — so it can tell you not just what was said, but what changed.
It's not enterprise search. Enterprise search indexes everything and returns results. Dossium indexes what matters — your accounts, the people involved, the facts that govern the relationship — and returns intelligence. Not results you have to interpret. Answers you can act on.
What's Coming
Everything described above produces the same output regardless of who's reading. But as I described in the previous post, the next layer is personas — adapting the synthesis to the reader. The data model doesn't change. What changes is which facts surface first, which relationships get emphasis, and what the briefing leads with. More on this soon.
The Substrate
Everything described in this post — the three layers, the research engine, the output types, the effort levels — is built on Graphlit's context infrastructure. Multimodal ingestion, identity resolution, entity extraction, temporal fact modeling, knowledge graph construction. Three years of infrastructure, purpose-built for this.
The same context that powers Dossium's briefings is available through Graphlit's API and through MCP for any agent that needs relationship intelligence. The briefing engine is one way to consume it. Your agents are another.
We built the API first. Then we built the product. That order matters — it means the intelligence isn't trapped in our UI. It's available wherever you need it.
If your work is relationships, and your bottleneck is context, this is what thirty seconds of intelligence looks like.
This is the sixth in a series on context graphs. Previous posts: "The Context Layer AI Agents Actually Need", "Building the Event Clock", "Context Graphs: What the Ontology Debate Gets Wrong", "Introducing Dossium", and "From Enterprise Search to Relationship Intelligence".
