AI Values · 2026-03-26

The Decisions That Matter Most Aren't Logic Problems

By R. Dustin Henderson, PhD

She's an attorney, 38. Six years on partnership track. The offer just landed — her name on the door, significant money, the thing she's built toward since law school.

Her husband was just offered his dream job. In another city.

There's no spreadsheet that resolves this. She can model the income differential over ten years, research both cities, build the analysis. At the end of it, she's still staring at the thing no spreadsheet answers: what does she actually value?

Her professional identity? The partnership she has with her husband? Her theory of what a life well-lived looks like? The career regret she'll carry versus the relationship regret? None of that fits in a cell.

This is the class of decision that breaks AI — and most people have made peace with it. They use AI for the logic problems. They figure out the hard ones alone.

That shouldn't be the answer. It doesn't have to be.


Why Logic Fails Here

The rational choice model — the framework underlying most decision science — treats choices as optimization problems. Assess your options. Weight the probabilities. Calculate expected utility. Pick the highest number.

Useful for a class of decisions. For the ones that actually matter, it fails — not because of a bug, but because of an assumption it smuggles past you: that you already know what you value and by how much.

Psychologist Shalom Schwartz spent decades mapping human values across 80 countries. What he found: people carry roughly ten core values — achievement, benevolence, security, self-direction — and they're fundamentally in tension with each other. Achievement pulls against benevolence. Security pulls against self-direction. These don't operate on the same scale. You can't do the math. You can only decide which value wins.

Moral psychology adds another layer: moral and values-laden judgments are primarily intuitive, not analytical. People don't reason their way to a values answer and then feel it. They feel it first — and sometimes construct a justification afterward. If you've ever known something was wrong but struggled to articulate why, that's the mechanism. Values aren't outputs of reasoning. They're the substrate reasoning runs on.

The implication is direct: an AI optimized for analytical reasoning is systematically mismatched to the decisions where people need the most help.


What AI Actually Does

Ask an LLM a genuinely hard question — should I take this job, should I pursue more treatment, should I end this relationship — and you get one of three responses.

Utilitarian collapse. A pros and cons list weighted toward measurable outcomes. Salary. Career advancement. Survival rates. The quantifiable things get the most weight, because those are the things that can be modeled. The implicit assumption is that you want to optimize for measurable outcomes — which may not be true for you specifically.

Values-hedging neutrality. "This depends on your personal values and priorities — only you can decide." Technically accurate. Functionally useless. The user came to the AI to help reason through those values. A disclaimer isn't help.

Majority-values substitution. The most insidious. Large language models are trained on human feedback at scale — which means they've encoded the aggregate judgment of whoever was providing that feedback. That population has values. Those values are now baked into the model's reasoning. When you ask an AI what you should do, you're often getting what the statistically average person in its training data would do. Which may be the inverse of your actual values.

The concrete failure mode: an AI that doesn't know your values gives you the right answer for someone else.


What the Labs Are Building (and What They're Missing)

The major labs aren't ignoring this problem. They deserve credit for how seriously they're taking it.

OpenAI's memory system now references saved facts and conversation history across sessions. It knows your name, your job, your preferred format. Real progress — and not enough. Memory of facts is not a model of values. Knowing everything about your behavior pattern still doesn't tell an AI what you'd refuse to compromise on.

Anthropic's Constitutional AI encodes a set of ethical principles directly into the model's training. The model learns to critique its own outputs against those principles. Meaningful advance for reducing harm at population scale. But the constitution is Anthropic's, not yours. A deeply religious user and a committed secular humanist both get their hardest questions filtered through the same constitutional layer.

Google's medical AI work shows domain-specific models can approach expert-level clinical knowledge — and the published research explicitly notes these models "may produce text generations that are misaligned with clinical and societal values." The field's response is better training data and better prompting. Not individualized values infrastructure.

The pattern: every lab is solving for better population-level behavior. More accurate, less harmful, more culturally careful. Genuinely important work.

What none of them have built is the individual values layer — a structured, persistent representation of this person's values that can actually change how an AI reasons about their specific hard decisions.


Values Need to Be Infrastructure, Not a Feature

Here's why preference settings don't solve this.

A feature improves UX. A setting stores a preference. Infrastructure is the substrate everything else runs on.

Values aren't preferences. They have internal structure — organized in a hierarchy with inherent tensions. They have temporal dimension — how your values have evolved matters as much as what they are today. They have situational mapping — which value conflicts get activated by which types of decisions. When two values you hold conflict, which one wins? In what context?

This is not a text field. It's a knowledge graph: persistent across sessions, structured for queryability, rich enough to actually influence AI reasoning when it matters.
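
To make that concrete, here is a minimal sketch of what a queryable values graph could look like: value nodes captured in the user's own words, tension edges recording which value wins over which in a given kind of decision, and a timestamp for when each value was last affirmed. Every name and field below is an illustrative assumption for this post, not TruContext's actual schema.

    # Illustrative sketch only -- names, fields, and structure are assumptions,
    # not TruContext's schema.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Value:
        name: str                   # e.g. "family_cohesion"
        statement: str              # the value in the user's own words
        evidence: list[str] = field(default_factory=list)  # conversations where it surfaced
        last_affirmed: date | None = None                   # the temporal dimension

    @dataclass
    class Tension:
        higher: str     # the value that wins...
        lower: str      # ...over this one...
        context: str    # ...in this kind of decision
        note: str = ""  # how the user described the trade-off

    @dataclass
    class ValuesGraph:
        values: dict[str, Value] = field(default_factory=dict)
        tensions: list[Tension] = field(default_factory=list)

        def add(self, value: Value) -> None:
            self.values[value.name] = value

        def resolve(self, a: str, b: str, context: str) -> str | None:
            """Which of two conflicting values wins in this context, if the graph knows."""
            for t in self.tensions:
                if {t.higher, t.lower} == {a, b} and t.context == context:
                    return t.higher
            return None

    # What the attorney's graph might record, hypothetically:
    graph = ValuesGraph()
    graph.add(Value("partnership_with_spouse", "We make the big moves together."))
    graph.add(Value("career_achievement", "Partner has been the goal since law school."))
    graph.tensions.append(Tension("partnership_with_spouse", "career_achievement", "relocation"))
    graph.resolve("career_achievement", "partnership_with_spouse", "relocation")
    # -> "partnership_with_spouse"

The resolve method is the part no settings panel can replicate: when two values collide, the graph already knows, for this person and this class of decision, which one gives way.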

The CRM analogy is exact. A CRM doesn't have conversations with your customers. It doesn't replace the sales team. What it does is maintain a persistent, structured substrate of customer data so that every conversation draws on a rich understanding of who this person is and what they need. The CRM is infrastructure. The conversations are built on top of it.

TruContext is to values what a CRM is to customer data. Not the conversation — the substrate. A persistent, queryable values layer that travels with you across every AI interaction. The AI that knows you value family cohesion over career advancement. That knows your financial risk tolerance is shaped by watching your parents lose their savings. That knows you have a strong commitment to honesty and won't bend on it even when the utilitarian calculus says to.

Values can't be something you configure once in a settings panel. They have to be the layer beneath all other reasoning — extracted over time, structured for retrieval, injected at inference so the AI is reasoning from your actual values when you face a decision that activates them.
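
To sketch what "injected at inference" could mean in the simplest terms, the snippet below renders the graph from the previous sketch into a block of context and places it ahead of the user's question. The llm callable and the everything-included retrieval are placeholders, assumptions made for illustration rather than a description of any real pipeline.

    # Continues the ValuesGraph sketch above. For illustration, every value is
    # included; a real system would retrieve only the values and tensions the
    # decision actually activates.
    def values_context(graph: ValuesGraph) -> str:
        """Render the values layer as prompt context."""
        lines = ["The user's values, in their own words:"]
        lines += [f"- {v.name}: {v.statement}" for v in graph.values.values()]
        if graph.tensions:
            lines.append("When these values conflict:")
            lines += [
                f"- In {t.context} decisions, {t.higher} outweighs {t.lower}."
                for t in graph.tensions
            ]
        return "\n".join(lines)

    def ask_with_values(llm, graph: ValuesGraph, decision: str) -> str:
        """Inject the values layer at inference time, ahead of the user's question."""
        # llm stands in for any chat-completion callable: llm(system=..., user=...)
        return llm(system=values_context(graph), user=decision)

The design point is that this layer lives outside any single model or vendor: the same graph can shape the context for whichever AI the person happens to be using.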

That's not a feature. That's infrastructure. And it's the layer the entire AI stack is missing.


The World Needs Both

The race to build smarter AI is real and it matters. Better reasoning, better retrieval, better accuracy — all of it compounds. The labs are doing genuinely important work.

But intelligence without values context is a very precise tool for helping you optimize toward someone else's priorities. The attorney with the partnership decision doesn't need a smarter AI. She needs an AI that knows her — what she values, how she prioritizes when those values conflict, what kind of person she's trying to be.

Someone has to build the layer that makes AI genuinely yours. Not trained on your inputs. Not remembering your facts. Actually yours — reasoning from your values, on the decisions where it matters most.

That layer is what TruContext is building.

TruContext is the persistent values layer for AI systems.
