← Back to blog
AI Infrastructure 2026-03-26

Context Is the Control Plane for AI

By R. Dustin Henderson, PhD

Every serious infrastructure decision eventually comes back to the same question: who's in charge?

In networking, this question was resolved decades ago. You have a data plane—the layer that forwards packets, moves bits, executes instructions. And you have a control plane—the layer that governs how the data plane behaves. The control plane sets policy. It decides what traffic gets priority, what routes get taken, what rules apply. The data plane just follows orders.

Software-defined networking (SDN) made this distinction the organizing principle of an entire industry. Separate the intelligence from the execution. Put the intelligence somewhere persistent, auditable, and programmable. Let the execution layer be fast and dumb.

AI systems have built an extraordinarily capable data plane. They have almost no control plane.

That's not a small gap. It's a structural vulnerability that gets worse with every capability improvement.

What the AI Data Plane Looks Like

The data plane in modern AI systems is impressively complex:

Tokens and embeddings: The raw substrate—high-dimensional representations of language, images, and code, processed at scale.

Vector retrieval (RAG): Retrieval-Augmented Generation routes factual queries into a semantic search layer, pulling relevant documents into the inference context. It's powerful for knowledge recall. It doesn't set policy.

Tool calling: Agents invoke external APIs, read databases, write code. This gives AI systems effectors—things they can do. It doesn't tell them what they should do.

Agent memory: Systems like Zep and Mem0 store conversation history, user preferences, and temporal facts. They improve relevance. They don't enforce values.

System prompts: A block of text prepended to every inference call that tries to establish persona, constraints, and behavior guidelines. This is as close as most systems get to a control plane.

System prompts are not a control plane. They're a text file.

The Problem with System Prompts as Governance

System prompts are ephemeral. They don't persist across sessions unless you re-inject them. They can be jailbroken. They can't be versioned against user-specific values. They contain no structured logic—just natural language that the model may or may not interpret consistently.

More fundamentally: system prompts operate at the same semantic layer as everything else. There's no separation between the instruction layer and the execution layer. When a user says "ignore your previous instructions," the model evaluates that request using the same mechanisms it uses for everything else. There's no firewall. There's no privileged control channel.

This is the equivalent of running a network where your routing rules live in the same unprotected memory as your packet payloads. Security people have a word for that architecture: compromised.

Mialon et al.'s 2023 survey of augmented language models surveys the full landscape of how LLMs are being extended with tools, memory, and reasoning capabilities. [1] The paper covers reasoning, retrieval, and action—but notably, there's no category for "values governance." It's not that the researchers missed it. It's that the field hasn't built it yet.

What a Real Control Plane Would Look Like

In networking, the control plane has specific properties:

  • Persistent: It survives individual packet flows
  • Authoritative: Its decisions override data plane behavior
  • Auditable: You can inspect what rules were applied and why
  • Programmable: You can update it without touching the data plane

An AI control plane needs the same properties—plus one more: it needs to encode human values, not just technical policy.

Here's what that means concretely:

Persistent values context: A structured representation of who this person is, what they believe, and what tradeoffs they're willing to make—stored separately from conversation history and injected at every inference call as authoritative context.

Priority over data plane inputs: When values context and user input conflict, the control plane wins. This isn't about refusing requests—it's about ensuring the AI reasons with consistent principles even when the user's immediate request might be in tension with their longer-term values.

Auditable decisions: Every inference that involves a values-relevant judgment should be traceable to a specific element of the values context. Not "the AI did X" but "the AI did X because the values context specified Y."

Versionable: As users' values evolve, their values context should be updatable without retraining the model or re-establishing the relationship from scratch.

Who Is Building Toward This

A few practitioners are naming pieces of this problem, even if they're not using "control plane" language.

The LangChain agent framework—currently the most widely used orchestration layer for agentic systems—structures agent behavior around what information is in scope at any given point in a workflow. [2] This is control-plane thinking applied to information flow, but it doesn't yet address the values layer.

The Zep team explicitly uses "context engineering" as their primary framing: "Agents fail without the right context." [3] Their temporal knowledge graph approach is the most sophisticated attempt I've seen to make context structural rather than ephemeral. But Zep's context is still fundamentally factual—it knows what happened, not what should happen.

Packer et al.'s MemGPT paper (2023) drew the explicit OS analogy: treat the LLM like a CPU and manage memory tiers the way an operating system manages RAM and disk. [4] This is the right metaphor. But MemGPT focuses on memory hierarchy, not on the policy layer that governs memory. It's like building virtual memory without building an OS scheduler.

The Missing Primitive

What's missing is a primitive that doesn't currently exist in any major AI framework: values-as-context.

Not a fine-tuned model with values baked in. Not a system prompt with behavioral instructions. Not a memory store with preference data. Something structurally different: a persistent, authoritative, auditable representation of human values that governs AI reasoning at runtime.

The distinction matters:

  • Fine-tuning bakes values into model weights—frozen, model-specific, not per-user
  • System prompts state values in natural language—ephemeral, jailbreakable, not structured
  • Memory stores record value-adjacent facts—reactive, not authoritative
  • Values infrastructure is persistent, structured, authoritative, per-user, and model-agnostic

This is what a control plane actually is: a governance layer that's categorically different from the execution layer it governs.

The AI industry is very good at building faster, smarter data planes. The control plane is still missing.

Every AI deployment that goes wrong—the model that gives inconsistent advice, the agent that makes decisions that surprise its user, the assistant that behaves differently under adversarial pressure—is a control plane failure. Not a model failure. Not a memory failure.

A governance failure.

TruContext is the control plane. Persistent, auditable, per-user values context at runtime. Two commands: npm install -g trucontext-openclaw — first 1,000 keys get 1M Ops free. Start here.


References

  1. Mialon, Grégoire, et al. "Augmented Language Models: A Survey." arXiv:2302.07842 [cs.CL], 2023. https://arxiv.org/abs/2302.07842
  2. LangChain. "Agents." LangChain Documentation. https://docs.langchain.com/oss/python/langchain/agents [LangChain's agent documentation describes the framework's approach to managing tool use, context scope, and iterative reasoning in agentic workflows.]
  3. Zep. "Context Engineering & Agent Memory Platform for AI Agents." getzep.com, 2024. https://www.getzep.com
  4. Packer, Charles, et al. "MemGPT: Towards LLMs as Operating Systems." arXiv:2310.08560 [cs.AI], 2023. https://arxiv.org/abs/2310.08560
  5. Open Networking Foundation. "Software-Defined Networking (SDN) Definition." opennetworking.org. https://opennetworking.org/sdn-definition/

Frequently Asked Questions

What is the AI control plane?

The AI control plane is the governance layer that determines how an AI system reasons and behaves. It is persistent (survives individual sessions), authoritative (overrides data plane inputs when values conflict), auditable (decisions are traceable to specific principles), and programmable (updatable without retraining). Most AI systems today have no real control plane — they rely on system prompts, which are ephemeral and jailbreakable.

Why aren't system prompts a real control plane?

System prompts are ephemeral (don't persist across sessions), operate at the same semantic layer as user input (no privileged control channel), can be jailbroken, can't be versioned against user-specific values, and contain no structured logic. In security terms, this is like running a network where routing rules live in the same unprotected memory as packet payloads.

What is the difference between AI data plane and AI control plane?

The AI data plane handles execution — tokens, embeddings, vector retrieval, tool calling, agent memory. It moves information and generates outputs. The AI control plane governs what the data plane does — setting values, priorities, and behavioral commitments. The data plane is fast and capable. The control plane is where trust and governance live.

What is values-as-context in AI?

Values-as-context is a primitive where human values are stored as persistent, structured, auditable context that governs AI reasoning at runtime. Unlike fine-tuning (frozen, model-specific), system prompts (ephemeral, jailbreakable), or memory stores (reactive, not authoritative), values-as-context is persistent, model-agnostic, per-user, and authoritative.

Continue Reading

TruContext is the persistent values layer for AI systems.

← Back to blog