Are You Treating Your Coding Agent Like a Mushroom?
If you’re starving your AI of product context, you’re guaranteed generic outcomes.
Over and over, teams try out coding agents and report back with predictable results: they're great at migrations and refactoring, but terrible at building new features or making architectural decisions.
This makes perfect sense when you think about what we're actually giving them.
What Works
For a migration, the agent just needs to understand the codebase. Transform this pattern to that pattern, update dependencies, move files around. The repository contains everything it needs to know.
But for a new feature?
The agent needs to understand users, business priorities, edge cases, and how this change fits into the broader product strategy. We're asking it to make product decisions based solely on technical artifacts.
How We Onboard Humans
What's wild is that we'd never onboard a junior engineer this way. You wouldn't hand a new hire access to the repo on day one and say "go build the user authentication system."
You'd introduce them to the rest of the organization, have them fix some bugs, maybe refactor a small module, and let them gradually build up their understanding of both the codebase and the product.
With AI agents, we skip all of that. We throw them directly at complex feature work with zero context about users, business constraints, or team conventions. Then we're surprised when they make poor decisions.
The Cost of Context Loss
When we were building Brief itself, our coding agent had an obsession with dashboards. Tell it we were building a B2B SaaS app and the first thing it would do was scaffold out a generic dashboard component. No consideration for what the product actually did. No understanding of what users needed first. Just: "B2B SaaS = dashboard."
The agent was pattern matching on the wrong patterns. It had learned that B2B apps have dashboards, but it had no context about our product, our users, or our priorities. So it optimized for building the most generic, least useful thing possible.
Once we started feeding it Brief's own product context (what we were building and why), it stopped suggesting dashboards. It finally had enough information to make decisions that aligned with our actual product strategy.
Or take one of our customers building legal tech for law firms. Professional tools for serious clients handling serious matters. Their agent kept injecting emojis into error messages, adding playful microcopy, and generally treating it like a consumer social app.
The agent wasn't trying to sabotage them. It was doing exactly what it had been trained to do: make engaging, friendly interfaces. But it had zero context about who these users were, what they were trying to accomplish, or what tone was appropriate.
After they gave Brief their ideal customer profile and company context (legal professionals working on high-stakes matters), the agent quit using emojis cold turkey. It understood not just what to build, but how it should feel.
These aren't edge cases. They're the predictable result of context loss.
The Telephone Game
The real diagnostic question isn't "is my agent good enough?" It's: "how much of a game of telephone am I playing?"
Think about the path information takes:
PM → Slack thread → Engineering ticket → Engineer's brain → Prompt → Agent
At every step, context gets compressed, simplified, or lost entirely.
The PM has a conversation with a customer about a painful workflow. They summarize it in Slack. Someone distills that into a ticket. An engineer reads the ticket and forms their mental model. Then they try to express that mental model in a prompt to an agent.
By the time the agent sees it, the original customer pain has been filtered through four layers of lossy compression. The agent is building from a photocopy of a photocopy of a photocopy.
The more steps between the original context and the agent, the more you're treating it like a mushroom. Feeding it degraded information, expecting clarity.
What Agents Actually Need
Product Context: Who are your users? What problems are they trying to solve? What does success look like for them? Your agent should understand the difference between building for lawyers and building for teenagers, between enterprise IT buyers and prosumer creators.
Business Context: What are you optimizing for right now? Is this a scrappy MVP where perfect is the enemy of shipped? Or a mature product where a bug could cost millions? What are the quality bars, the compliance requirements, the performance budgets?
Team Context: What are your conventions? Your agent should know that your team prefers composition over inheritance, that you always add error tracking to new features, and that the design system lives in Figma, not in the repo.
Decision Context: Why did you make past architectural choices? What alternatives did you consider? What constraints drove those decisions? Without this, agents will make the "textbook correct" choice that ignores your specific constraints.
A senior engineer joining your team can only make good decisions if they have sufficient context. Your agent needs the same.
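One lightweight way to close that gap is to write this context down where the agent can read it alongside the code. Here's a minimal sketch of what that might look like, assuming a hypothetical PRODUCT_CONTEXT.md checked into the repo; the file name, headings, and example entries are illustrative (borrowing the legal-tech scenario above), not a prescribed Brief format:

```md
# PRODUCT_CONTEXT.md — read before making product or architectural decisions

## Product context
- Users: attorneys and paralegals at mid-sized law firms (not consumers)
- Core job: drafting and reviewing documents on high-stakes matters, under deadline
- Success: fewer review cycles; zero tolerance for lost work

## Business context
- Stage: early but revenue-bearing; correctness beats feature velocity
- Constraints: compliance review in progress; no client data in third-party analytics

## Team context
- Prefer composition over inheritance; every new feature gets error tracking
- Design system lives in Figma; tone is professional — no emojis in UI copy

## Decision context
- Chose a relational database over a document store for auditability
- Deferred real-time collaboration: not a priority for current users
```

The exact format matters less than the principle: the agent should see the same reasoning a new senior hire would get in their first week, not just the code.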
The Real Difference
Successful coding agents have access to context. Disappointing ones don't.
The sophistication of the AI and the size of the context window matter less than whether you're treating it like a mushroom: feeding it shit and keeping it in the dark.
A Self-Assessment
Ask yourself:
1. The Dashboard Test: If you asked your agent to build a new feature for your product right now, would it suggest something generic or something specific to your users?
2. The Telephone Count: How many steps are between your original product decisions and what your agent sees? Can it access the PM discussion, or just the code?
3. The New Hire Test: If a senior engineer joined your team today with only the context your agent has, could they make good architectural decisions?
4. The Tone Test: Does your agent understand not just what to build, but how it should feel for your specific users?
If you're failing these tests, you don't have an agent problem. You have a context problem.
And the good news is: context problems are solvable.