Why AI Analysis Gives You The Wrong Specific Answers
The case for orthogonal context
When people evaluate a company or a product, they almost never rely on a single number. Instead, they combine multiple signals to form a mental model of what’s really going on. The best data-informed product and business leaders do this instinctively. They triangulate.
AI works in a remarkably similar way. When given a set of facts, it assembles them into the most coherent explanation it can construct. But there’s a critical difference between how a seasoned analyst thinks and how AI thinks: when context is missing, a human draws on years of domain-specific intuition to fill the gaps. AI, on the other hand, defaults to the most statistically common pattern it has seen during training. It doesn’t flag what it doesn’t know. It just picks the most plausible story and moves forward with confidence.
This means that the quality of AI’s reasoning is strongly influenced by the quality of the context you give it.
The Metrics Trap
Consider a simple example. You’re told that a product has:
5 million monthly active users (MAU)
DAU/MAU of 80%
Most people would immediately assume this is a strong product. Five million users represents meaningful scale, and an 80% DAU/MAU ratio shows that four out of five monthly users come back every single day, signaling exceptional engagement. With only these two data points, both a human analyst and an AI system would reasonably conclude: this company is doing well.
But watch what happens when you add another piece of information.
Average daily session length: 30 seconds.
Now the picture shifts. For most products, especially consumer and social ones, a thirty-second session means users aren’t doing much. They might be opening the app out of habit, glancing at a notification, and leaving. The high DAU/MAU ratio suddenly looks less like deep engagement and more like shallow, reflexive behavior.
Except — what if the product is a payments app? Something like Venmo or Zelle or UPI? In that case, thirty seconds is perfectly natural. Users open the app, send money, confirm, and close it. A short session length isn’t a weakness; it’s a feature of the product category.
This is the metrics trap: any individual metric, taken in isolation, supports multiple contradictory interpretations. The number itself doesn’t tell you whether the company is thriving or struggling. Only context does.
Why AI Falls Into This Trap
LLM systems reason by pattern matching at enormous scale. When you present a set of facts, the model searches its learned representations for the most coherent explanation that fits those facts.
When context is rich and specific, this process works remarkably well. The model can narrow down to a single plausible interpretation and reason about it with precision.
But when context is thin, many different stories remain equally plausible and the model has no way to distinguish between them. In that situation, it does the only thing it can: it selects the interpretation that is most common in its training data and presents it as though it were the obvious conclusion.
This isn’t a bug. It’s the fundamental mechanism. And it means that vague inputs reliably produce generic outputs.
If you tell an AI “DAU/MAU is 80%” and nothing else, the model doesn’t know if the product has a hundred users or a hundred million. It doesn’t know if it’s a game, a banking app, or an enterprise tool. It doesn’t know if engagement is organic or subsidized. So it picks the most typical scenario, probably a consumer app with decent traction, and builds its analysis around that assumption, without ever telling you that it has made one.
The Concept of Orthogonal Context
The solution is what we can call orthogonal context: independent pieces of information that describe the situation from different, non-overlapping dimensions.
The word “orthogonal” comes from geometry. It means “at right angles,” or more broadly, independent. In this context, it means each new piece of information you provide should reduce ambiguity in a direction that the other pieces don’t already cover.
Here’s a practical example. Consider these four data points:
DAU/MAU = 80% → tells you about engagement frequency
MAU = 5 million → tells you about scale
Average session length = 10 seconds → tells you about engagement depth
Product category = payments app → tells you about expected user behavior
Each one describes a different dimension of the product. None of them is redundant with the others. Together, they paint a specific and coherent picture: a payments app at meaningful scale with high-frequency, low-duration usage, which is exactly what you’d expect from a well-functioning product in that category.
Now compare that with providing four data points that all describe the same dimension:
DAU/MAU = 80%
Weekly active users / MAU = 90%
D7 retention = 75%
D30 retention = 70%
These are all engagement metrics. They’re correlated with each other. Providing all four gives you more precision on one axis, but it doesn’t help the model understand the broader picture. You know engagement is high, but you still don’t know at what scale, in what product category, or whether the engagement is organic.
The principle is straightforward: when the goal is to construct a unique story, breadth of context matters more than depth on a single axis.
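To make the contrast concrete, here’s a minimal sketch in Python of how those two context sets might be packaged before being handed to a model. The field names and the build_prompt helper are illustrative assumptions, not part of any particular tool; the point is that only the first set constrains the story along more than one axis.

```python
# Illustrative sketch: two ways of packaging context for an AI analysis request.
# The dimension names and helper are hypothetical; what matters is the shape of the input.

orthogonal_context = {
    "scale": "MAU = 5 million",
    "engagement_frequency": "DAU/MAU = 80%",
    "engagement_depth": "average session length = 10 seconds",
    "product_category": "payments app (open, send money, confirm, close)",
}

redundant_context = {
    "engagement_frequency": "DAU/MAU = 80%",
    "weekly_stickiness": "WAU/MAU = 90%",
    "d7_retention": "75%",
    "d30_retention": "70%",
}

def build_prompt(context: dict[str, str], question: str) -> str:
    """Flatten labeled context into a single prompt string."""
    lines = [f"- {dimension}: {value}" for dimension, value in context.items()]
    return "Context:\n" + "\n".join(lines) + f"\n\nQuestion: {question}\n"

question = "How healthy is this product?"
print(build_prompt(orthogonal_context, question))  # pins down roughly one story
print(build_prompt(redundant_context, question))   # four numbers, still one axis
```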
AI Needs a Unique Story
Here’s a useful way to think about what happens inside the model when you give it information.
AI is implicitly trying to construct a single coherent narrative that explains all the data points simultaneously. The fewer data points you provide, the more narratives remain plausible. The more orthogonal context you add, the more candidate stories get eliminated, until ideally, only one remains.
Think of it like a detective solving a case. One clue (the suspect was in town that day) leaves hundreds of possibilities open. Two clues (they were in town and had a motive) narrows it down. Five independent clues might point to exactly one person.
The same narrowing happens with product data. Consider two versions of the same company.
Story A:
Ride-sharing app
8M MAU
DAU/MAU 70%
Average 4.5 rides per week per active user
This looks like a product with strong product-market fit. High frequency, solid scale, healthy engagement. AI would likely benchmark it against Uber’s early growth and project a promising trajectory.
Story B — same facts, plus one:
Average rider subsidy: $8 per ride
Now the original story crumbles. Users aren’t choosing the product, they’re choosing the discount. At 4.5 rides per week, the company is burning roughly $36 per user per week to maintain those engagement numbers. The DAU/MAU ratio isn’t measuring product love but rather price sensitivity. When the subsidy shrinks, so will every metric on this dashboard.
One additional orthogonal fact completely changed the story.
This is why context completeness matters more than the sophistication of the question you ask.
A brilliant question with sparse context will produce a mediocre answer. A simple question with rich, orthogonal context will produce a sharp one.
How to Read AI’s Output as a Diagnostic Tool
There’s an important corollary to all of this: the quality of AI’s output tells you something about the quality of your input.
If AI gives you a response that feels generic, confident, and unsurprising, that’s usually a signal. It doesn’t mean the model is failing; it means it likely didn’t have enough context to do anything other than default to the most common pattern.
Generic output is a symptom of ambiguous input.
When you see this happening, don’t try to fix it by asking a more clever follow-up question. Instead, go back and examine what context is missing. Ask yourself:
Does the AI know the scale of what I’m describing?
Does it know the category or domain?
Does it know about external factors — incentives, constraints, competitive dynamics?
Have I given it information that distinguishes my situation from the typical case?
If the answer to any of these is no, that’s where the gap is.
Conversely, when AI produces an insight that feels genuinely specific and non-obvious, it usually means you’ve provided enough orthogonal context for the model to converge on a single story. That’s the signal that the system is working well.
Practical Guidelines for Working with AI
If you want AI to produce high-quality analysis, focus less on crafting the perfect prompt and more on assembling the right context. Here’s how:
1. Provide multiple independent metrics
Don’t hand the model a single signal and expect it to work backwards to a full picture. Combine data points that cover different dimensions:
Scale: MAU, revenue, headcount
Engagement: DAU/MAU, session length, actions per session
Retention: D1, D7, D30 cohort retention
Economics: Unit economics, LTV/CAC, gross margin
Each category tells the model something the others don’t.
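If it helps to see the shape of this in practice, here’s a rough sketch of a context template that mirrors the four categories above. The AnalysisContext class and its fields are hypothetical placeholders, not a prescribed schema; fill in whatever you actually have and leave the rest out rather than guessing.

```python
# Hypothetical context template mirroring the four metric categories above.
from dataclasses import dataclass, field

@dataclass
class AnalysisContext:
    scale: dict = field(default_factory=dict)        # e.g. MAU, revenue, headcount
    engagement: dict = field(default_factory=dict)   # e.g. DAU/MAU, session length
    retention: dict = field(default_factory=dict)    # e.g. D1 / D7 / D30 cohorts
    economics: dict = field(default_factory=dict)    # e.g. LTV/CAC, gross margin

    def to_prompt_block(self) -> str:
        """Render only the dimensions that were actually filled in."""
        sections = []
        for name, metrics in vars(self).items():
            if metrics:
                rows = ", ".join(f"{k} = {v}" for k, v in metrics.items())
                sections.append(f"{name.title()}: {rows}")
        return "\n".join(sections)

ctx = AnalysisContext(
    scale={"MAU": "5 million"},
    engagement={"DAU/MAU": "80%", "avg session length": "10 seconds"},
)
print(ctx.to_prompt_block())
```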
2. Always specify the product category and use case
This is one of the highest-leverage pieces of context you can provide, because it sets the baseline for what “good” looks like.
Ten seconds of daily usage in a payments app is excellent. Ten seconds in a social network is a disaster. Ten seconds in a meditation app is confusing. The exact same number means completely different things depending on what the product is supposed to do.
If you don’t specify the category, the model will guess. And it will usually guess “generic consumer tech product,” which may be completely wrong for your situation.
3. Surface incentives, subsidies, and external drivers
Engagement metrics are easy to distort. Common drivers that change interpretation include:
Promotional offers and sign-up bonuses
Referral rewards
Forced usage
Advertising spend driving installs
Seasonal effects
If any of these factors exist, the model needs to know. Otherwise, it will interpret artificially inflated metrics as organic signals and build its analysis on a false foundation.
4. Name what makes your situation unusual
AI defaults to the typical case. If your situation is atypical in any important way, you need to say so explicitly. This might include:
Operating in a regulated industry
Serving a niche market
Having an unusual business model
Facing a specific competitive threat
Being at an unusual stage of growth
The model can reason well about unusual situations, but only if it knows they’re unusual.
5. Reduce ambiguity before asking for analysis
Before asking the AI to draw conclusions, check whether you’ve given it enough information to rule out alternative interpretations.
The goal: give AI enough independent facts that only one story makes sense. That’s when analysis becomes sharp.
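One way to make this check routine is a quick pre-flight pass over your context before you ask. The sketch below assumes a small set of dimension names drawn from the questions earlier in this piece; treat it as an illustration, not a fixed rule.

```python
# Rough pre-flight check: flag which context dimensions are still missing
# before asking the model for analysis. Dimension names are illustrative.

REQUIRED_DIMENSIONS = ["scale", "category", "engagement_depth", "external_drivers"]

def missing_context(context: dict[str, str]) -> list[str]:
    """Return the dimensions that are absent or empty."""
    return [d for d in REQUIRED_DIMENSIONS if not context.get(d)]

context = {
    "engagement_frequency": "DAU/MAU = 80%",
    "scale": "MAU = 5 million",
}

gaps = missing_context(context)
if gaps:
    print("Fill these in before asking for analysis:", ", ".join(gaps))
else:
    print("Context covers the basic dimensions; ask away.")
```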
In Summary
Most people try to get better output from AI by writing better prompts. They tweak the phrasing, add instructions, ask the model to “think step by step.” These things help at the margins, but they’re optimizing the wrong variable.
The far higher-leverage move is assembling better context before you ever type the question. Give the model enough independent facts (scale, category, engagement depth, external drivers) that only one story makes sense. When you do that, you don’t need a clever prompt. The analysis sharpens itself.
The next time AI gives you a generic answer, don’t ask a better question. Ask yourself what you forgot to tell it.

Love the 5 tips! My default now is basically: if AI gives me slop, it's not AI's fault, it's me who is accountable, and I need to iterate to make it useful for whatever I am doing.
-Paras
I’ve seen this play out a lot in teams. When the inputs are vague, AI fills in the gaps with something that looks right but isn’t really useful.
In real work the difference usually comes from context: constraints, data, and a clear sense of what the output is actually for. Without that, the output ends up generic no matter how good the model is.