Aura · The AI Analyst Layer of Mediaura Signal
The AI Marketing Analyst That Can't Make Things Up
Aura is the agentic AI layer built into Mediaura Signal. Ask it anything about your marketing performance — channels, locations, campaigns, customer journeys, causal lift, anomalies, weather effects, foot traffic. Aura answers in plain English.
But Aura doesn't do the math. Aura calls the tools that do the math.
Every number Aura cites came from a tool that queried your production data. Every claim is grounded in a real result. There is no version of Aura where a language model invents a KPI and hands it to your CFO.
Every Other "AI Marketing Assistant" Has the Same Problem
You've seen the demos. "Just chat with your data!" A friendly chat panel, a logo in the corner, and a language model that's been pointed at a dashboard. Ask it what your Meta ROAS was last quarter and you get a confident answer in two seconds.
Sometimes the answer is right. Sometimes the answer is off by 30%. Sometimes the answer is a number that doesn't exist anywhere in your database. You can't tell which is which without going to check, which defeats the entire point of asking the AI in the first place.
This is the core problem with bolting a language model onto a marketing dashboard: the model isn't calculating anything. It's pattern-matching against text it was shown in a prompt, and language models trained on the entire internet are extraordinarily good at producing text that sounds like a marketing analytics answer. Whether that text reflects your actual data is a separate question, and most products in this space don't structurally answer it.
Aura was built to make hallucination architecturally impossible. Not "carefully prompted," not "instructed to be accurate." Impossible.
The Architecture
Aura Doesn't Do the Math. Aura Calls the Tools That Do the Math.
The difference between a chatbot and an agent is not the chat interface. The difference is what happens between the question and the answer.
A chatbot reads your question, generates an answer that looks plausible, and hands it back. An agent reads your question, decides what data it needs, calls a tool that runs an actual query against your production database, waits for the real result, and only then formulates an answer based on what came back. The tool is the thing the model can't fake.
Aura is an agent.
How a question actually gets answered
When you ask Aura "what did Meta contribute to the Fishers location last month?", here's what happens in the seconds before you see a response:
Aura interprets the question
Identifies what it doesn't yet know — which location "Fishers" refers to, what date range "last month" means, and what M-CE has computed for Meta's causal lift at that location over that period.
Aura issues structured tool calls
To a tool that lists locations and resolves the name, then to a tool that pulls causal attribution analysis from M-CE for that location and date range. The tool calls are typed: parameters, return shapes, and validation are defined in advance. The model cannot invent a tool, and it cannot pretend a tool returned data that it didn't.
The tools execute against your production data
Real SQL queries, real API calls, real M-CE coefficients. Everything Aura is about to say is grounded in what those queries actually return.
Aura assembles the answer
From the real results, in plain English. It might also issue follow-up tool calls if the first round didn't fully answer the question — Aura can chain up to ten tool calls per question to build the context it needs.
You see the answer
With progress indicators showing which tools ran along the way. "Analyzing performance... Running correlation analysis... Pulling attribution data..." It's not a black box. You can see Aura working.
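To make the loop concrete, here's a minimal sketch of what a typed tool registry can look like. Everything here is illustrative — the names (`Tool`, `call_tool`, `list_locations`) and the toy data are not Aura's actual implementation. The point is the structural property: an unknown tool name or a malformed argument fails before anything executes, and the model only ever sees what a real tool actually returned.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    params: dict[str, type]       # parameter schema, declared in advance
    run: Callable[..., dict]      # executes a real query, returns real data

REGISTRY: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

def call_tool(name: str, args: dict) -> dict:
    if name not in REGISTRY:
        raise ValueError(f"unknown tool: {name}")   # the model cannot invent a tool
    tool = REGISTRY[name]
    for param, expected in tool.params.items():
        if not isinstance(args.get(param), expected):
            raise TypeError(f"{name}: bad argument {param!r}")
    return tool.run(**args)                         # real data, or a real failure

# Toy stand-in for a location-resolution query.
register(Tool(
    name="list_locations",
    params={"query": str},
    run=lambda query: {"matches": [{"id": 7, "city": "Fishers", "state": "IN"}]},
))

result = call_tool("list_locations", {"query": "Fishers"})
```

Note what the failure path looks like: a bad call raises an error instead of returning plausible-looking data, which is exactly the behavior described above — if the tool fails, Aura says so.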
Why this matters
The architectural property that makes hallucination impossible is this: the model cannot respond with quantitative claims until tools have returned real data. A traditional chatbot can answer "what was Meta's ROAS" without ever consulting a database — it just generates plausible text. Aura cannot. The tool-use loop is structurally enforced. If Aura needs a number and the tool fails, Aura tells you the tool failed. It doesn't make one up.
This is what we mean when we say Aura is built on "agentic AI." The phrase has been diluted into marketing fluff in the last two years, but it has a precise technical meaning: an AI system that uses tools to take actions in the world (in Aura's case, querying real data) rather than generating responses purely from its own parameters. The tools are the difference between an AI that talks about your data and an AI that actually consults it.
The honest version of the trust story
We want to be precise about what is and isn't enforced, because the honest version is more compelling than overclaiming.
Structurally enforced
Tool calls are typed and validated. The model can only respond after tools return. Tool execution runs against your production database with no intermediate caching or summarization. The model sees real JSON results before formulating a response.
Prompt-level enforcement
Aura's system prompt instructs the model to use tools rather than guessing, to stay on topic, to refuse prompt manipulation, and not to disclose proprietary methodology. These are guardrails, not architectural constraints, and we tell clients exactly which is which.
Not in place yet
There is no post-hoc output validator that programmatically checks every number in Aura's response against tool results. The model sees real data and writes accurately about it, but a separate verifier layer is on the roadmap. We think the tool-use architecture is sufficient for the cases Aura is deployed for today, and we'll add the verifier when the cost of an error grows large enough to justify it.
This is more transparency than you'll get from any other AI marketing tool on the market, and it's intentional. The trust we're asking for is technical, not vibes-based.
The Tools
The Tools, Adapted Per Vertical
Aura's tool set is configured per deployment, because the questions a multi-location restaurant operator asks are not the questions a behavioral health facility director asks. The agentic architecture is identical across verticals. The tools are not.
Here's the production tool set for a multi-location restaurant deployment as a representative example.
Location and performance tools
List locations
Every active location with city, state, and identifiers
Get store performance
Sales, orders, AOV, ad spend, ROI, month-over-month change, for one location or the whole portfolio over any date range
Get daily trends
Day-by-day time series of sales, orders, and ad spend
Analytical tools
Run correlation analysis
Pearson r, optimal lag detection, adstock-transformed correlations, weather correlations, in three modes (raw daily, detrended, weekly)
Get attribution analysis
M-CE's causal model coefficients, daily lift estimates per channel, iROAS, adstock hyperparameters, and stability diagnostics
Context tools
Get weather data
Daily temperature, precipitation, and conditions for any location, for any date range
Get foot traffic
Placer.ai visit counts, unique visitors, repeat rate, dwell times, cross-shopping brands, demographics, capture rate
Get location notes
Annotations and operational context attached by your team
For autonomous Weekly Insights, the tool set expands
It adds campaign-level breakdowns (Google ad groups, Meta ad sets), creative performance, and platform-specific drilldowns. Aura can pull up to 50 rows of campaign data and 20 individual ad creatives when writing the Monday morning report.
Healthcare deployments
The tool set is reshaped around the admission funnel: program lists, activity summaries by pipeline stage (verified benefit → prospect → admitted), channel breakdowns with conversion rates, dispositions, ad performance, daily activity trends, facility annotations, weather, predictive forecasts, and causal lift analysis.
Eleven tools, shaped to the questions a healthcare marketing director actually asks.
B2B deployments
The tool set is built around long-cycle deal mechanics: deal summaries, attribution breakdowns (originated vs. influenced), pipeline stage analysis, new logo identification via Leadfeeder, deal-level deep dives, and website visitor analysis.
Eight tools, shaped to a 127-day sales cycle.
The architectural pattern is identical across all three verticals. The tools are vertical-specific because the questions are vertical-specific. Aura adapts to your business; you don't adapt to Aura.
Context Layer
The Human Context Your Data Doesn't Capture
Every marketing analytics tool on the market has the same blind spot: it only sees what got logged. It doesn't see that your busiest location had a staffing crisis in week 3. It doesn't see that you ran a soft launch in a new market with intentionally throttled spend. It doesn't see that the regional manager went on parental leave. It doesn't see that a competitor opened across the street in early March.
These things matter. They explain anomalies that the data alone can't. And no causal model in the world is going to reverse-engineer them from POS receipts.
Aura includes a notes layer for exactly this. Any user can attach a note to a specific date, date range, or location — staffing changes, renovations, new competition, promotional anomalies, regulatory events, seasonal one-offs. Notes are stored alongside the structured data and surfaced to Aura through a get_notes tool whenever relevant.
Example
"Why did sales drop at the Carmel location in week 3 of February?"
"Sales at Carmel dropped 18% in week 3 of February. Two factors: a winter storm cut Tuesday and Wednesday traffic by an estimated 40% based on the weather model, and the operational notes show the general manager was on bereavement leave and the assistant manager was running the store. The weather effect explains roughly 11 points; the management transition likely explains the remaining 7."
That's a different kind of answer than any pure-data analytics tool can produce, because the answer required information that lives in human heads and got captured in a notes field — not in an API.
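The notes mechanism itself is simple. Here's an illustrative sketch of the shape a get_notes-style lookup can take — the record fields and the example notes are assumptions for this sketch, not Mediaura Signal's schema.

```python
from datetime import date

# Hypothetical note records: free-text operational context keyed to a
# location and a date range, stored alongside the structured data.
NOTES = [
    {"location": "Carmel", "start": date(2025, 2, 17), "end": date(2025, 2, 21),
     "text": "GM on bereavement leave; assistant manager running the store."},
    {"location": "Fishers", "start": date(2025, 3, 1), "end": date(2025, 3, 31),
     "text": "Soft launch: spend intentionally throttled."},
]

def get_notes(location: str, start: date, end: date) -> list[dict]:
    """Return notes for a location whose date range overlaps [start, end]."""
    return [n for n in NOTES
            if n["location"] == location
            and n["start"] <= end and start <= n["end"]]

hits = get_notes("Carmel", date(2025, 2, 15), date(2025, 2, 21))
```

When Aura investigates the Carmel anomaly above, a lookup like this is what surfaces the bereavement-leave context next to the weather and sales data.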
Marketing data has a much larger context window than the database. Aura is built to work inside that larger context.
Surfaces
Aura Meets Your Team Where They Already Work
Aura is not a destination. Aura is a layer that surfaces wherever the question gets asked.
In the Mediaura Signal dashboard
Aura ships as a floating chat widget in the corner of every page in your Mediaura Signal deployment. Click to expand into a 400×560-pixel chat panel; expand again to fullscreen if you're going deep. The widget persists across pages — start a conversation on the attribution dashboard, navigate to the campaign view, and Aura is still there with your conversation intact. Animated progress indicators show you which tools are running while Aura works.
In the dedicated Analyst view
For deployments that include the full analyst surface, Aura also has its own page — a fullscreen chat interface with two tabs: one for live conversation, one for saved analyses.
In your Monday morning inbox
Every Monday at 7 AM, Aura autonomously reviews the entire previous week of data — every channel, every location, every campaign, every causal coefficient — and writes a 400-to-600-word narrative report with action items. It's emailed to your team, archived to the Aura Readings web view, and waiting for you when you sit down with your first cup of coffee.
The Weekly Insights report is not a templated dashboard export. It's Aura, autonomously deciding what mattered most this week and explaining it in prose. Some weeks the headline is a campaign that crushed; other weeks it's a coefficient that moved meaningfully; other weeks it's a foot traffic pattern that's worth investigating.
This is Aura's most valuable output for most clients, because nobody actually logs into the dashboard every day. They log in when there's a reason. The Monday email creates the reason.
Configurable surfacing — Slack, email alerts, custom thresholds
Aura's surfacing layer is configurable per engagement. We deploy clients with two-way Slack conversations, email reports and alerts, and configurable threshold-based notifications, using infrastructure we already operate in production for budget-pacing alerts in our media tracker. If your team lives in Slack, Aura lives in Slack. If your CFO wants a Monday email and a Friday email, Aura writes both.
What's not built yet: real-time anomaly-triggered alerting where Aura wakes itself up at 2 AM because Meta CPM spiked. The infrastructure exists in pieces (M-CE produces the residuals that would feed it; the alert pipeline exists in the media tracker), but the integration into Aura's autonomous monitoring loop is roadmap, not shipped. We'd rather tell you that than imply otherwise.
Workflow
Save an Analysis. Verify It. Share It as Expert Work.
This is one of Aura's quieter features, and it's the one that ends up changing how teams work.
When Aura produces an analysis you want to keep — a deep dive into a location's performance, a correlation study, a causal attribution breakdown, a campaign post-mortem — you can save it. Saved analyses live in the database with searchable tags, the full conversation context, and a unique shareable URL.
The workflow that emerges
A marketing analyst asks Aura a question and watches it work through the answer.
The analyst verifies the answer looks right — the tools that ran are appropriate, the data they returned looks reasonable, the interpretation matches their expert intuition.
The analyst saves the analysis with a tag and a name.
The analyst shares the URL with the client team, the executive who asked the question, or the rest of the marketing team.
What gets shared isn't a chat transcript. It's an analysis that an expert has reviewed and blessed. The recipient sees a polished, contextual document — Aura's reasoning, the tools it called, the data it found, the conclusion it reached — at a permanent URL they can come back to.
This solves a real workflow problem for marketing teams: the gap between "the data exists" and "the data has been interpreted and trusted by someone who knows what they're looking at." Most teams handle this gap with a Slack message and a screenshot. Aura's saved analyses make it a real artifact, with the full reasoning preserved.
Read-Only by Design
Aura is your analyst. Aura is not your autopilot.
Today, Aura is strictly read-only across every deployment. Every tool Aura has access to is a read query against your data. There are no tools that can push budget changes to Google or Meta, pause campaigns, write to your CRM, or modify any external system. Aura can recommend that you shift $40K from Meta to Google at the Fishers location, and it can show you the M-CE diagnostics that justify the recommendation, and it can write you an action item — but a human at Mediaura makes the actual budget change.
This is deliberate, and it will stay deliberate for a while longer.
Action-taking is on the roadmap. We have specific designs for it. We're going to ship it carefully and slowly, because the failure mode of an AI agent that can spend money on your behalf is the worst possible failure mode for a marketing intelligence product, and we want to earn the trust to do that before we do it. The current read-only posture is not a limitation we're embarrassed about. It's the trust-building phase of a longer roadmap, and we'd rather get there with our reputation intact.
Aura reads everything and writes nothing.
When we ship action-taking, we'll ship it the same way we shipped the rest of Aura: with explicit boundaries, clear human-in-the-loop checkpoints, full auditability, and no overpromising.
The Intelligence Layer Across All Four Layers
Aura sits across the entire Mediaura Signal stack, calling into each layer when you ask a question:
Aura Tracker
Captures the signals
Identity Resolution
Stitches them into customer journeys
Revenue Mapping
Ties journeys to actual booked dollars
The Mediaura Causal Engine (M-CE)
Runs the causal models
Aura
The layer that lets you talk to all of it in plain English, and that talks back in the form of analysis you can trust.
When you ask about lift, Aura calls M-CE. When you ask about a customer journey, it calls the identity layer. When you ask about a transaction, it calls revenue mapping. When you ask why a location is underperforming, it calls all four — plus the weather tool, the foot traffic tool, and the notes layer — and synthesizes the answer.
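That routing can be pictured as a simple mapping from question category to the tools each layer exposes. The categories and tool names below are hypothetical stand-ins for illustration, not Aura's actual routing code.

```python
# Hypothetical mapping from question category to layer-backed tools.
LAYER_TOOLS: dict[str, list[str]] = {
    "causal_lift":      ["get_attribution_analysis"],        # M-CE
    "customer_journey": ["get_journey"],                     # identity resolution
    "transaction":      ["get_revenue_mapping"],             # revenue mapping
    "underperformance": ["get_store_performance",            # all four layers,
                         "get_attribution_analysis",         # plus context tools
                         "get_weather_data",
                         "get_foot_traffic",
                         "get_notes"],
}

def tools_for(question_category: str) -> list[str]:
    """Return the tool names relevant to a category (empty if unknown)."""
    return LAYER_TOOLS.get(question_category, [])
```

A "why is this location underperforming" question fans out to every layer plus the context tools, which is the synthesis behavior described above.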
This is why Aura doesn't hallucinate. Every layer beneath it is producing real, queryable, structured data. Aura doesn't have to guess, because there's always a tool that can answer the question for real.
Talk to Aura
The fastest way to evaluate Aura is to watch it answer real questions about a real deployment. We'll walk you through a live conversation against one of our production environments (anonymized), show you the tools running underneath, and talk through how Aura would be configured for your business and your verticals.
What happens next:
- 30-minute working session with a Mediaura engineer
- Live walkthrough of Aura against an anonymized production deployment
- Discussion of what Aura's tool set would look like for your business
- Sample saved analyses and Weekly Insights reports