March 12, 2026
Entity Resolution Is the Core Problem in AI Agents
At Revo AI we're building an ambient AI agent on top of email. We wrote about why email solves the cold start problem for AI agents. This post is about the hardest engineering problem we hit while building it — and why we think it's the hardest problem in ambient AI generally.
The problem nobody talks about because it isn't glamorous
The AI conversation is about models. Reasoning quality, context windows, tool use, inference speed. All of that matters. None of it is where production agents actually break.
Agents break on entity resolution.
The same person appears as a first name in Slack, a full name in a contract, an email domain in a thread, a company name in a CRM record, and an account ID in a support ticket. An agent operating across a professional context sees all of these signals and has to decide, without hallucinating, whether they refer to the same entity — and what that entity's relationship is to everything else it knows.
Get this wrong and the agent doesn't fail quietly. It drafts a response using language from the wrong deal. It follows up with the wrong contact. It connects a sensitive conversation to someone who shouldn't see it. Entity resolution failures in an ambient agent aren't UX problems. They're relationship risks and potential privacy violations.
This is the problem we spent more time on than anything else at Revo AI. It's still not solved — it's managed. Here's what we learned.
Why entity resolution is structurally harder for agents than for search or CRM
Entity resolution isn't a new problem. Search engines, CRMs, and data warehouses have dealt with versions of it for years. But agents face a harder variant, for three reasons.
The error cost is asymmetric. A wrong autocomplete in a regular email client is annoying. A wrong entity match in an agent that drafts and sends on your behalf is a relationship risk. The tolerance for error is an order of magnitude lower than in passive tools.
The signal is noisier. Email is informal, inconsistent, and human. People refer to the same entity in dozens of different ways across thousands of messages. There's no schema. There's no enforced naming convention. The agent has to infer structure from unstructured signal at scale.
The graph is dynamic. Entities change. People change roles. Companies get acquired and domains change. The entity graph can't be built once and cached — it has to update continuously as new messages arrive and flag conflicts when the new signal contradicts what it thought it knew.
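To make the noisy-signal point concrete, here is a minimal sketch of the first step: collecting every entity a raw mention could plausibly refer to, whether it arrives as a first name, a full name, or an email address. The names and structure are illustrative, not our production code.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    canonical_name: str
    aliases: set[str] = field(default_factory=set)
    email_domains: set[str] = field(default_factory=set)

def candidate_entities(mention: str, entities: list[Entity]) -> list[Entity]:
    """Return every entity a raw mention could plausibly refer to.

    A mention might be a first name ("lisa"), a full name
    ("Lisa Chen"), or an email address ("lisa@meridian.com").
    There is no schema, so we match against every naming
    convention we've seen for each entity.
    """
    m = mention.strip().lower()
    domain = m.split("@")[-1] if "@" in m else None
    matches = []
    for e in entities:
        names = {e.canonical_name.lower()} | {a.lower() for a in e.aliases}
        first_names = {n.split()[0] for n in names}
        if m in names or m in first_names or (domain and domain in e.email_domains):
            matches.append(e)
    return matches
```

Note that this deliberately over-collects: returning several candidates is correct behavior here, because deciding between them (or refusing to) is a separate step.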
What we actually built at Revo AI
We maintain a living entity graph that updates with every message processed. The graph connects people, companies, deals, threads, and tools — resolving the same entity across different naming conventions and data sources in real time.
When the graph encounters conflicting signals — two contacts named James at the same company, a domain that changed after an acquisition, a name that could plausibly refer to two different people — it flags the ambiguity instead of guessing. We treat unresolved ambiguity as a first-class state, not an error to suppress.
That costs us automation coverage. Flagging ambiguity means some actions don't get taken automatically and surface for human review instead. We made that tradeoff deliberately. The alternative — guessing and being wrong — produces the kind of failure that makes users turn off an agent permanently.
Entity resolution also has to happen inside strict privacy boundaries. An agent reading a professional inbox has access to sensitive information across clients, deals, teams, and organizational layers. The entity graph needs to know not just how entities are connected but what the agent is allowed to connect. Domain-level access control isn't optional infrastructure — it's a core part of the resolution logic. Entity resolution without privacy controls is a liability, not a feature.
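One way to see access control as part of the resolution logic itself, rather than a wrapper around it: the check runs before an edge is ever added to the graph, so a forbidden connection is never materialized. A hedged sketch with hypothetical scope names:

```python
def may_link(agent_scopes: set[str], a_scope: str, b_scope: str) -> bool:
    """The agent may connect two entities only when both fall inside
    a scope it is allowed to read. Cross-scope links are refused
    outright, not deferred for review."""
    return a_scope in agent_scopes and b_scope in agent_scopes

class EntityGraph:
    def __init__(self, agent_scopes: set[str]):
        self.agent_scopes = agent_scopes
        self.edges: set[tuple[str, str]] = set()

    def link(self, a: str, a_scope: str, b: str, b_scope: str) -> bool:
        """Add an edge between two entities, or refuse."""
        if not may_link(self.agent_scopes, a_scope, b_scope):
            return False  # never connect across a privacy boundary
        self.edges.add((a, b))
        return True
```

Because the check lives inside `link`, there is no code path that builds the connection first and filters it later — which is the difference between a privacy control and a privacy liability.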
What ambient AI actually does when it works
Most ambient AI products died in the gap between "technically correct" and "trustworthy." The hard part isn't building an agent that can act — it's building one that knows when not to.
Here's what a Tuesday morning looks like when entity resolution is working:
8:14 AM. "Lisa Chen flagged the IP clause in the Meridian contract. Draft response ready using your standard language from the Acme deal."
8:22 AM. "No response from James Park on Q2 budget (5 days). Follow-up drafted."
9:01 AM. "Sarah mentioned the migration finished. Three client tickets reference that bug. Updates drafted."
Each one: accept, refuse, or give feedback. That's the entire interaction surface. The agent planned the work. The user governs the output.
The UX consequence: trust is earned incrementally
We designed the Revo AI agent to start narrow and earn its way to broader action. On day one it surfaces simple things: a forgotten follow-up, a draft for a pending response, a flagged conflict. Accept or dismiss. No configuration, no prompting, no manual.
This wasn't the original design. It was a response to watching early users turn off an agent that was technically correct but moved faster than they were ready to trust. Progressive disclosure isn't just a UX pattern — it's how you give users enough time to verify that entity resolution is working before the agent starts taking higher-stakes actions.
What this means for agent infrastructure
Entity resolution is infrastructure, not a feature. It's the layer that determines whether an AI agent operating on real professional context can be trusted to act — or whether it's a sophisticated demo that breaks the moment the data gets messy.
Most agent frameworks treat it as an edge case. It isn't. It's the core problem. The agents that earn long-term use will be the ones that solve it with the right combination of graph architecture, ambiguity handling, privacy controls, and trust pacing — not the ones with the best inference.
Email is the hardest environment to solve entity resolution in, because the data is the messiest and the error tolerance is the lowest. That's also why solving it on email generalizes. The infrastructure we built at Revo AI for resolving entities across a professional inbox is the same infrastructure any ambient agent needs to operate reliably on unstructured real-world data.
If this resonates, reach out: mehdi@revo.ai