The Cross-Channel Moment: One Correlation ID, Email + Voice, In a Single Agent Run
An inbound email triggers a voice callback. Both events tie back to one correlation ID, render in /audit as a single chain, and stream live through am tail. Here's the demo, the data model, and why this is the data moat.
The most boring-looking feature we shipped this month is the most strategically important thing in the platform.
We added a correlation ID to every cross-channel action: email send, SMS, voice call, vault read or write, phone provisioning. One workflow → one ID → one audit chain. Inbound email gets a synthetic correlation ID so reply chains link back to the trigger that started them. The dashboard /audit page renders the chain. The am tail CLI streams it live.
The reason this matters is not "better debugging" (though you get that too). The reason this matters is: every cross-channel interaction logged under an Anima identity makes the next one more valuable. That is the data moat. A competitor cannot recreate it retroactively. They have to start collecting on day one and wait.
This post walks through the demo workflow, the data model that makes it work, and why we pulled this forward to Week 1 of the launch sprint when the original plan put it in Week 4.
The demo: an inbound email triggers a voice callback
Here is the workflow, end to end. A customer emails the support inbox. The agent decides this needs a voice follow-up. The agent places the call. The customer answers. The agent gets a transcript and writes a summary back to the original email thread. Five steps. Three channels. One workflow.
```typescript
import { Anima } from "@anima/sdk";

const am = new Anima({ apiKey: process.env.ANIMA_API_KEY! });

// 1. Inbound email arrives. Anima auto-generates a correlation ID
//    and emits an audit_event with channel="email", direction="inbound".
am.on("email.received", async (msg) => {
  const correlationId = msg.correlationId; // synthetic, attached server-side

  // 2. Agent decides this is voice-worthy
  if (msg.subject.includes("urgent")) {
    // 3. Place the voice call. Same correlationId carries through.
    const call = await am.voice.placeCall({
      identityId: msg.identityId,
      correlationId,
      to: msg.from.phone,
      consentSource: "customer-initiated",
      greeting: `Hi, this is the support team calling about your email "${msg.subject}".`,
    });

    // 4. Wait for the call to finish, then fetch the transcript
    const transcript = await am.voice.getTranscript(call.callId);

    // 5. Reply to the original email with the transcript summary
    await am.email.send({
      identityId: msg.identityId,
      correlationId, // SAME ID
      threadId: msg.threadId,
      to: msg.from.email,
      subject: `Re: ${msg.subject}`,
      html: `<p>Following up on our call:</p><blockquote>${transcript.summary}</blockquote>`,
    });
  }
});
```

Five tool calls, three channels, one identity, one correlation ID. Now look at what the operator sees in the dashboard:
```text
/audit?correlation_id=corr_5x2c1k
Chain: 5 events • Started 14:22:01 PT • Duration 4m 18s

📨 email.received        14:22:01  from=alice@example.com → support@agent
📞 voice.call.placed     14:22:14  to=+14155550142, tier=basic, consent=customer-initiated
📞 voice.call.connected  14:22:31  duration=3m 41s
📞 voice.transcript      14:26:12  192 words, sentiment=positive
📨 email.sent            14:26:18  to=alice@example.com, threadId=thr_8b1...
```
That is one screen. One ID. Five events. The operator does not have to grep five log services, correlate timestamps by hand, or guess which email reply belongs to which inbound trigger. The platform threaded it.
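Conceptually, the chain view is just a group-by over the event stream: bucket by correlation ID, sort by timestamp, compute a duration. A minimal sketch of that shape (the `AuditEvent` type and `buildChains` helper here are illustrative, not the SDK's actual types):

```typescript
// Illustrative event shape; the real audit_events row carries more fields.
interface AuditEvent {
  correlationId: string;
  channel: "email" | "sms" | "voice" | "vault" | "phone";
  action: string;
  createdAt: number; // epoch millis
}

// Group a flat event stream into per-correlation chains,
// each ordered by time, with an end-to-end duration.
function buildChains(events: AuditEvent[]) {
  const byId = new Map<string, AuditEvent[]>();
  for (const ev of events) {
    const chain = byId.get(ev.correlationId) ?? [];
    chain.push(ev);
    byId.set(ev.correlationId, chain);
  }
  return [...byId.entries()].map(([correlationId, evs]) => {
    const sorted = [...evs].sort((a, b) => a.createdAt - b.createdAt);
    return {
      correlationId,
      events: sorted,
      durationMs: sorted[sorted.length - 1].createdAt - sorted[0].createdAt,
    };
  });
}
```

The dashboard gets this grouping for free from the database (the correlation-ID index does the bucketing); the sketch just shows why one ID per workflow makes the render trivial.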
The same chain in the CLI
For developers debugging in the terminal, am tail streams the same data live via Server-Sent Events. The CLI auto-reconnects on disconnect, supports filtering by channel and agent, and renders each event with the correlation ID prefix so you can group at a glance:
```shell
$ am tail --agent agent_8x1
[14:22:01] corr_5x2c1k 📨 email.received       alice@example.com → support
[14:22:14] corr_5x2c1k 📞 voice.call.placed    +14155550142 (basic, customer-initiated)
[14:22:31] corr_5x2c1k 📞 voice.call.connected duration started
[14:26:12] corr_5x2c1k 📞 voice.transcript     192 words
[14:26:18] corr_5x2c1k 📨 email.sent           re: original thread
```

That is a 4-minute, 5-event workflow rendered in roughly six lines of terminal output. The correlation ID is the index. Without it, the same workflow looks like five disconnected events that happen to involve the same email address.
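The auto-reconnect is ordinary SSE client hygiene: back off exponentially, cap the delay, reset once events flow again. A sketch of the backoff policy such a client might use (the base and cap values here are illustrative defaults, not the CLI's documented behavior):

```typescript
// Exponential backoff with a cap, for reconnecting a dropped SSE stream.
// baseMs/maxMs are made-up defaults for illustration.
function reconnectDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}
```

The important design detail is that `attempt` resets to zero once a healthy event arrives, so a briefly flaky connection does not get stuck waiting at the cap.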
The data model
Every action through the Anima API writes to one table:
```sql
CREATE TABLE audit_events (
  id             UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  correlation_id TEXT NOT NULL,
  org_id         TEXT NOT NULL,
  agent_id       TEXT NOT NULL,
  channel        TEXT NOT NULL,   -- email | sms | voice | vault | phone
  action         TEXT NOT NULL,   -- received | sent | placed | read | provisioned ...
  direction      TEXT,            -- inbound | outbound | null
  status         TEXT NOT NULL,   -- pending | success | failure
  metadata       JSONB NOT NULL,
  created_at     TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX audit_events_correlation_id_created_at
  ON audit_events (correlation_id, created_at);

CREATE INDEX audit_events_agent_id_created_at
  ON audit_events (agent_id, created_at DESC);
```

That is the whole schema. Six required text columns, one nullable direction, one JSONB for the channel-specific payload, one timestamp, two indexes. One table, indexed two ways: by correlation ID for chain queries, by agent ID for live tailing.
The reason this is one table and not five (one per channel) is that the question "what did this workflow do?" cuts across channels. Joining five tables every time someone hits /audit would be expensive and slow. Materializing the union view we want as a single table keeps the query path simple: one indexed scan, one ORDER BY, done.
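Concretely, the chain lookup behind /audit reduces to a single indexed query, something like this (a sketch of the shape, not the exact production statement):

```sql
-- Fetch one workflow's full chain: the multicolumn index on
-- (correlation_id, created_at) serves both the filter and the sort.
SELECT channel, action, direction, status, metadata, created_at
FROM audit_events
WHERE correlation_id = $1
ORDER BY created_at;
```

Because `created_at` is the second column of the index, the rows come back in chain order without a separate sort step.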
Why we pulled this forward to Week 1
The original launch plan had this feature in Week 4. We pulled it to Week 1 after a CEO review pointed out the obvious thing we were missing.
The argument was: every Anima feature is replicable. AgentMail can ship inbound email parsing in two weeks. AgentPhone can ship voice in two weeks. Composio can ship vault in two weeks. The thing that is not replicable is the dataset. Every cross-channel interaction logged under an Anima identity is a row in audit_events. After 12 months of customer usage, we have a corpus of correlated agent behavior across email, voice, vault, and phone. After 24 months, we have reputation signals, fraud patterns, and behavioral baselines that make every new inference more valuable.
A competitor that ships the same feature surface today gets to start collecting from row zero. They do not get to import our data. They have to wait. That is the moat.
But the moat only works if collection starts on day one. Building the capture infrastructure in Week 4 means burning three weeks of identity data we will never get back. So the schema migration shipped in Week 1, the synthetic correlation IDs on inbound email shipped in Week 1, and the dashboard /audit view shipped in Week 1. The "nice UX polish" of the chain renderer can iterate. The data capture cannot wait.
What this enables in the next 60 days
Three things ship on top of this data layer in the Day 31-60 window:
Reputation scoring. Once we have 4-6 weeks of correlation data, we can compute per-agent behavioral baselines: typical call volume, typical email volume, typical vault access patterns. Agents that deviate get flagged before the abuse damages the customer's account or the platform's deliverability. This is impossible without a multi-channel correlation dataset.
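As a sketch of what "deviate from baseline" can mean in practice, here is a plain z-score check over an agent's historical daily volume. The threshold and the formulation are illustrative assumptions, not the shipped scoring model:

```typescript
// Flag an agent whose volume on a channel today sits far outside
// its own historical mean. Plain z-score; a real model would be richer.
function isAnomalous(history: number[], today: number, zThreshold = 3): boolean {
  const mean = history.reduce((s, x) => s + x, 0) / history.length;
  const variance =
    history.reduce((s, x) => s + (x - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  if (std === 0) return today !== mean; // flat history: any change is a deviation
  return Math.abs(today - mean) / std > zThreshold;
}
```

The point is that `history` only exists if capture started early: a baseline needs weeks of rows behind it.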
Cross-channel fraud signals. A new agent that immediately tries to dial 200 numbers in 90 seconds is suspicious. A new agent that sends 50 emails to addresses none of which have ever responded is suspicious. The signal lives in the multi-channel pattern, not in any single channel. The fraud signal that catches "agent registers, places one voice call, immediately tries to read 50 vault credentials" requires correlation across email, voice, and vault. We get this for free now that the data layer is one table.
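The dial-200-numbers-in-90-seconds case is just a rate rule over that same table: count matching events in a sliding window and flag when the count crosses a threshold. A toy version (window and threshold values are made up for illustration):

```typescript
// Minimal event shape for the rate rule; illustrative, not the SDK's type.
interface Ev {
  channel: string;
  action: string;
  at: number; // epoch millis
}

// Slide a window over the sorted timestamps of matching events and
// report whether more than maxCount ever fall inside windowMs.
function burstDetected(
  events: Ev[],
  channel: string,
  action: string,
  windowMs: number,
  maxCount: number,
): boolean {
  const times = events
    .filter((e) => e.channel === channel && e.action === action)
    .map((e) => e.at)
    .sort((a, b) => a - b);
  let lo = 0;
  for (let hi = 0; hi < times.length; hi++) {
    while (times[hi] - times[lo] > windowMs) lo++;
    if (hi - lo + 1 > maxCount) return true;
  }
  return false;
}
```

The same function runs per channel, which is exactly why the single-table layout matters: the multi-channel "register, one call, fifty vault reads" pattern is three calls to the same detector over the same stream.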
Customer-facing analytics. "Which agents are using which channels most?" "Which workflows take the longest?" "Where are the failure points?" Today these are operator questions answered by SQL queries. By Day 60 they are dashboard widgets the customer answers themselves.
The honest catch
Two things to be direct about.
This is not magic distributed tracing. The correlation ID propagates through HTTP requests, through MCP stdio (we landed that on 2026-04-25), and through the SDK clients. It does not propagate through your application code unless you pass it along. If your agent receives a webhook and then calls Anima, you need to forward the x-correlation-id header or pass it as a parameter. We are working on auto-detection from the inbound webhook context, but it is not shipped yet.
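Forwarding can be as small as one helper at the top of your webhook handler. The `x-correlation-id` header name is from this post; the handler shape and fallback helper below are illustrative, not part of the SDK:

```typescript
import { randomUUID } from "node:crypto";

// Pull the correlation ID off an inbound webhook's headers, or mint a
// synthetic one so the chain still has a root if the caller sent none.
function correlationIdFrom(
  headers: Record<string, string | undefined>,
): string {
  return headers["x-correlation-id"] ?? `corr_${randomUUID().slice(0, 8)}`;
}
```

Then pass that value as `correlationId` on every Anima call the handler makes, the same way the demo workflow above threads it through `placeCall` and `email.send`.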
The /audit page is read-only and customer-scoped. You see your own org's audit chain. You do not see other customers'. The platform-admin view (separate page, separate auth) sees the cross-tenant aggregate, gated by the ANIMA_PLATFORM_ADMIN_EMAILS allowlist. Customers do not get cross-tenant analytics; they get their own data in a clear UI.
Try it
If you are an existing Anima customer, every action you take from today forward is in audit_events. Try it: send an email, place a voice call, then visit /audit?correlation_id=... in the dashboard. You will see the chain.
If you are not an Anima customer yet, run npx @anima-labs/cli init, pick "Create a fresh agent identity," and place a test call:
```shell
$ am voice place --to +14155550142 --consent-source customer-initiated
$ am tail --agent <your-agent-id>
```

Watch the events stream in. That is your data starting to accumulate. Six months from now, the row count is your moat.