
Your data team is already building a context layer. They just don't call it that.

By Jakub Jurovych

Updated on May 13, 2026

AI agents are failing because they lack business context. The industry is scrambling to build "context layers" as a fix. But if your team works in notebooks, the context layer already exists - inside the analytical work itself.


Agents fail without context

Over the past year, organizations have tried to put AI agents on top of their data stacks. Chat-with-your-data bots, automated analysts, agentic workflows. Most hit the same wall.

The models were capable enough. The data was accessible. What was missing was the layer between raw data and useful answers - the business definitions, tribal knowledge, governance rules, and analytical reasoning that an experienced analyst carries in their head but no system captures.

Ask an agent "what was revenue growth last quarter?" and it immediately needs answers to a dozen sub-questions: Is this ARR or run-rate? Which fiscal calendar? Are the new product lines included? Which table is canonical - fct_revenue or mv_revenue_monthly? Without this context, the agent either guesses (confidently wrong) or stalls.

Gartner's 2026 D&A Summit quantified the gap: only 1 in 5 AI investments currently shows measurable ROI. The organizations getting the most from AI invest nearly 2x more in foundations - data quality, governance, semantics - than in AI tools themselves. The bottleneck is not models. It is context.

What a context layer actually is

The term "context layer" has gained traction quickly - sometimes called a context engine, contextual data layer, or ontology. Strip away the naming, and the concept breaks into three pillars:

The three pillars of a context layer: what things mean, what's happening, and what happened. An effective context layer captures all three.

A traditional semantic layer is a subset of this - metric definitions in YAML or LookML. A real context layer is broader. It encodes how the business works, tracks what changed, records agent behavior and reasoning, and makes all of this consumable by both humans and machines.
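The difference can be made concrete with a sketch. The dictionaries below are illustrative, not real LookML or a Deepnote schema: a semantic-layer entry carries only the metric definition, while a context-layer entry also carries the reasoning, the canonical source, and freshness information.

```python
# A traditional semantic-layer entry: just the definition (illustrative sketch).
metric = {
    "name": "revenue_growth",
    "sql": "(current_q - previous_q) / previous_q",
}

# A context-layer entry adds the "how the business works" parts:
# reasoning, canonical source, and when it was last updated.
context_entry = {
    **metric,
    "why": "ARR-based per finance; excludes product lines launched mid-quarter.",
    "canonical_table": "fct_revenue",
    "last_updated": "2026-05-13",
}
```

An agent reading only `metric` can compute the number; an agent reading `context_entry` can also decide whether the number answers the question it was asked.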

For years, this knowledge lived in people's heads, scattered across Slack threads, stale wiki pages, and one-off SQL queries. That was fine when only humans needed it. It is not fine when agents do.

Why context that lives inside the work actually works

The history of standalone semantic layers and metrics layers offers a clear lesson for anyone evaluating context layer solutions: if context has to be built and maintained as a separate system, it will not get built and maintained.

This is not a vendor problem. It is an operational one. Data teams are already stretched. Adding a dedicated context-construction workflow - writing YAML, populating a knowledge graph, keeping definitions in sync with the actual analysis - means asking people to do the same work twice: once to get the answer, and again to document how they got it. In practice, the documentation step gets skipped, deferred, or done once and never updated. The context layer goes stale within weeks.

The alternative is an environment where context is a byproduct of the analytical work itself. When your team writes a revenue definition in a notebook, that notebook is the semantic record. When they document why they chose one data source over another, that is the tribal knowledge capture. When they set permissions and freeze a snapshot, that is governance and traceability. Nothing extra to maintain - the work and its context are the same artifact.

For teams deploying agents, this distinction is practical: your agent's context is only as good as the last time it was updated. If that update is a separate task, it will lag. If it happens automatically because the team's daily work produces it, it stays current. And with scheduled runs, notebooks re-execute on a cadence - so even context that depends on fresh data stays up to date without anyone manually triggering it.


Deepnote is the context layer

Deepnote is an AI workspace - notebooks, SQL, Python, visualizations - where data teams and their AI agents collaborate on analytical work. That work produces the context layer. When a team writes a revenue definition in a Deepnote notebook, documents a data source decision, or publishes a reusable module, they are constructing exactly the kind of governed, human-curated context that agents need - without a second workflow.

Deepnote connects to data sources and MCP servers, and serves governed context upward to agents, tools, teams, and workflows.


The Deepnote context layer, decomposed

Here are the features of the Deepnote context layer broken down by pillar. These are shipping capabilities, not roadmap items.

| Pillar | What agents need | Deepnote feature | How it works |
| --- | --- | --- | --- |
| Semantics | Metric definitions and business rules | Notebooks + Modules | Plain-English definitions live alongside executable code. Modules let teams publish reusable metric libraries that other projects import. |
| Semantics | Business logic with reasoning | Notebooks (Markdown + code) | Narrative context ("why we define churn this way") and computational logic live in the same document. An agent can read both. |
| Governance | Ownership and access policies | Permissions + secured connections | Role-based access at project, notebook, and connection level. Controls who can see, edit, execute, and share. |
| Operational | Canonical entities | Notebooks as versioned assets | Each notebook is an addressable, versioned, permissioned entity - a structured unit of analytical work. |
| Operational | Activity tracking | Version history, audit logs | Every edit, execution, and collaboration event is recorded. Full activity log for any asset. |
| Operational | Environment and connections | Machine types, MCP connections, Deepnote CLI | Compute environments, MCP server integrations, and CLI-based programmatic access are part of the context surface. Agents and automation connect through the same interfaces humans use. |
| Operational | Automated freshness | Scheduling | Notebooks run on a schedule - hourly, daily, weekly. Context stays current automatically, not just when someone remembers to re-run the analysis. |
| Traceability | Reproducibility and audit trail | Snapshots | Point-in-time captures of data + code + results together. Reproducible, auditable, shareable. |
| Traceability | Agent observability | Agent traces | Full record of what the agent did, which context it consumed, which tools it called, and what it returned. Traceable reasoning from input to output. |

Building your context layer in Deepnote

If your team already uses Deepnote, you are further along than you think. The steps below formalize what many teams already do informally:

1. Codify your canonical definitions

Create a module for your core business metrics - revenue, churn, active users, LTV, whatever your organization's key measures are. Write the definitions in Markdown (the "why") and the computation in code (the "how"). Other notebooks import the module; agents read it for context.
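A minimal sketch of what such a module might contain, assuming a hypothetical `revenue_growth` definition (the names and the exclusion rule are illustrative, not a Deepnote API):

```python
# Hypothetical metrics module published from a notebook.
# The Markdown cell above it would explain the "why":
# revenue = recognized revenue from fct_revenue, fiscal calendar,
# new product lines excluded per finance.

def revenue_growth(current: float, previous: float) -> float:
    """Quarter-over-quarter revenue growth as a fraction."""
    if previous == 0:
        raise ValueError("previous-quarter revenue must be non-zero")
    return (current - previous) / previous
```

Other notebooks import this one function instead of re-deriving the metric, so every consumer - human or agent - computes growth the same way.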

2. Use notebooks as systems of record

For recurring analyses - monthly reporting, quarterly business reviews, ad hoc investigations that become canonical - treat the notebook as the authoritative record. It includes the question, the reasoning, the code, the result, and the decision. Schedule it to re-run on a cadence so the context stays fresh automatically. Snapshots freeze point-in-time versions for audit and comparison.
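One way to make that record machine-readable is to have the notebook emit a small structured summary of itself on each scheduled run. This is a sketch; the field names are assumptions, not a Deepnote schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative: a recurring notebook records its own question, reasoning,
# canonical source, and result, so agents can consume the context directly.

@dataclass
class AnalysisRecord:
    question: str
    reasoning: str          # why this source / definition was chosen
    canonical_source: str   # e.g. "fct_revenue", not "mv_revenue_monthly"
    result: float
    run_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AnalysisRecord(
    question="What was revenue growth last quarter?",
    reasoning="ARR-based, fiscal calendar, new product lines excluded.",
    canonical_source="fct_revenue",
    result=0.12,
)
```

Because the record is produced by the same scheduled run that produces the answer, it can never lag behind the analysis.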

3. Govern access to where the work lives

Use team permissions to control who can see and modify what. When an agent queries Deepnote for context, it inherits these same access rules - no separate governance product needed.
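The inheritance idea reduces to a single check applied to humans and agents alike. A minimal sketch, assuming a simple ranked-role model (Deepnote's actual permission model is richer than this):

```python
# Illustrative role-based check: an agent querying for context passes
# through the same gate as a human opening the notebook.

ROLE_RANK = {"viewer": 0, "editor": 1, "admin": 2}

def can_access(actor_role: str, required_role: str) -> bool:
    """True if the actor's role meets or exceeds the asset's requirement."""
    return ROLE_RANK[actor_role] >= ROLE_RANK[required_role]
```

The point is architectural: because there is one gate, granting or revoking a person's access automatically governs the agents acting on their behalf.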

4. Expose context to agents and automation

Connect external agents and workflows via MCP or the Deepnote CLI. The same notebooks and modules your team writes become the context that agents programmatically consume. One asset, two audiences.
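On the consumption side, the pattern looks roughly like this: the agent fetches the governed notebook content and grounds its prompt in it before answering. The sketch below uses plain strings; no real Deepnote or MCP API is invoked:

```python
# Hypothetical consumption pattern: context retrieved from notebooks
# (via MCP or CLI) is prepended to the agent's question.

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Ground the question in curated context rather than raw tables."""
    context = "\n\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    "What was revenue growth last quarter?",
    ["Revenue = recognized revenue from fct_revenue (fiscal calendar)."],
)
```

With the definition in scope, the agent no longer has to guess which table is canonical or which calendar applies.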

The bottom line

Every organization deploying AI agents will need a context layer. The question is whether you build it as a separate system - expensive, fragile, prone to staleness - or recognize that the place where your team already defines metrics, documents reasoning, and governs analytical work is the context layer.

Deepnote is not a notebook with some context features. It is the context layer that happens to have a notebook interface. The notebook is how humans author context. The platform is how everyone - and everything - consumes it.

Jakub Jurovych

CEO @ Deepnote

