I Almost Built the Wrong Thing

(Illustration: a product review meeting being replaced by structured context infrastructure)

The first version of Brief in my head was a product review bot. Before shipping, you would run work through simulated versions of your product leader, design lead, and engineering lead. An AI council would dunk on your feature so you could fix it before facing the real humans. It felt clever. It was wrong.

It was wrong because it copied the surface of the meeting instead of the substance. Product review was never about the performance. It was about transmitting context and judgment. I almost automated the meeting when I should have been automating the context.

What I learned from real product reviews

The best product leaders stay deeply involved in decisions long after most CEOs or execs would have delegated. Gates was famous for his "think weeks" and for brutal review sessions that left teams sweating but clear on what mattered. Bezos forced clarity through six-page memos and PR/FAQ drafts. At Yammer, David Sacks treated product review like a dojo. My first review with him was about a single button. He zoomed out to the history of computing, the place of the iPad, and zoomed back in to that button. The point was not the button. It was the stack of context behind it.

Those sessions were not about catching mistakes. They were about aligning on how to think. When you left, you had a mental model that guided hundreds of micro-decisions without Sacks or Bezos in the room.

Why "meeting bots" miss the point

A bot that says "this is off-brand" is a parlor trick unless it carries the why: the audience, the positioning, the constraints, the quality bar. A simulation of an exec's tone without their context is cosplay. The meeting is a proxy for context transmission. Automating the proxy and ignoring the payload recreates the bottleneck, just faster and shallower.

The new bottleneck

Agents collapsed build time. A solo engineer can ship something meaningful in hours. The product review cycle did not shrink with it. You cannot schedule a two-hour review for every three-hour build. Hiring more PMs or writing longer specs does not fix it. The bottleneck is getting the right context into the hands of the builder and the agent at the moment of decision.

If the context stays in heads and slides, you will get 10x output of the wrong thing. The fix is not more critique; it is turning context into infrastructure.

What context actually is

Context is everything behind a decision: the audience, the positioning, the constraints, the quality bar. In human-led teams, it spreads through reviews, docs, hallway chats, and scars from past incidents. With agents, those channels do not reach the code window. That is why the agent "hallucinates" choices. It fills gaps with patterns, not intent.

Context as infrastructure

Turning context into infrastructure means capturing decisions, intent, and constraints in a structured form and delivering them automatically to the people and agents doing the work.

The goal is to let builders move without waiting for a meeting while staying inside the rails of product intent.

A simple stack for context delivery

1) Decision register. A living list of the rules that govern current work, each paired with its rationale: tone, ICP, dependencies, performance, security, logging, testing, rollout rules, and the rest.

2) Briefs that bind. One-page briefs for current initiatives, each linked to the decisions it depends on.

3) Ingestion and distribution. Decisions and briefs flow into agent runs and builder workflows automatically, with no manual pasting.

4) Feedback and drift. When an agent or a PR violates a decision, it gets flagged as a drift event.

5) Lightweight rituals. A short, regular pass to add new decisions and retire stale ones.
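To make this stack concrete, here is a minimal sketch of a decision register and a binding brief as plain data, plus a helper that assembles the context block an agent run would receive. The schema, names, and sample entries are illustrative assumptions, not Brief's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry in the decision register: the rule plus the reason behind it."""
    id: str
    rule: str        # e.g. "All enterprise flows require SSO"
    rationale: str   # the context that makes the rule make sense
    tags: list[str] = field(default_factory=list)

@dataclass
class Brief:
    """A one-page brief that binds an initiative to the decisions it relies on."""
    title: str
    intent: str
    decision_ids: list[str]

def assemble_context(brief: Brief, register: dict[str, Decision]) -> str:
    """Build the context block that travels with every agent run for this brief."""
    lines = [f"# Brief: {brief.title}", brief.intent, "", "## Binding decisions"]
    for decision_id in brief.decision_ids:
        d = register[decision_id]
        lines.append(f"- [{d.id}] {d.rule} (why: {d.rationale})")
    return "\n".join(lines)

register = {
    "sec-01": Decision("sec-01", "All enterprise flows require SSO",
                       "Enterprise buyers will not pilot without it", ["security"]),
    "tone-01": Decision("tone-01", "Copy is plain and direct, no hype",
                        "The ICP is skeptical technical buyers", ["tone"]),
}
brief = Brief("Enterprise onboarding",
              "Get an enterprise admin to first value in one session",
              ["sec-01", "tone-01"])
print(assemble_context(brief, register))
```

The point of the rule-plus-rationale pairing is that the "why" travels with the "what," so an agent or a builder can generalize instead of pattern-matching.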

This stack replaces the need for a person to be in every review. The thinking travels without the meeting.

Levels of context to encode

You do not need to encode everything at once. Start with the decisions that affect current work. Expand as you see repeat misses.

What a modern product review can look like

The "review" becomes a fast loop of confirm and adjust, not a gate where context finally shows up.

A story of building without the meeting

A team needed a new onboarding path for enterprise buyers. Old world: write a spec, schedule reviews, wait for sign-off. Agents would have generated a generic flow and copy. Instead, the team attached the brief and its binding decisions to the work, so generation started from real intent rather than generic patterns.

No meeting theater. The context was live, structured, and available at generation and review.

Why this beats more documentation

Documentation is necessary. It is also brittle and slow. Context infrastructure differs because it is delivered at the moment of decision, bound to the work in flight rather than stored somewhere a busy builder may never look.

Docs explain. Context infrastructure guides.

Anti-patterns to avoid

Simulating an exec's tone instead of encoding their context. Automating the proxy instead of the payload. Writing longer specs and hiring more reviewers instead of delivering context at the moment of decision.

Metrics that show context is working

Watch rework, drift events, prompt size, and cycle time. If these move in the right direction, you are replacing meetings with context that sticks.
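A minimal way to start counting these signals, assuming drift and rework events are logged per initiative. The event log, initiative names, and numbers here are all illustrative:

```python
from collections import Counter
from statistics import mean

# Hypothetical event log: (initiative, event_type) pairs emitted by the pipeline.
events = [
    ("onboarding", "drift"), ("onboarding", "rework"),
    ("billing", "drift"), ("onboarding", "drift"),
]
# Hours from brief to shipped change, per initiative (illustrative numbers).
cycle_times_hours = {"onboarding": [6, 4, 3], "billing": [9, 8]}

drift = Counter(i for i, kind in events if kind == "drift")
rework = Counter(i for i, kind in events if kind == "rework")

for initiative, times in cycle_times_hours.items():
    print(f"{initiative}: drift={drift[initiative]} "
          f"rework={rework[initiative]} avg_cycle_h={mean(times):.1f}")
```

A spreadsheet works just as well to start; what matters is that the numbers are reviewed against the decision register, not collected for their own sake.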

Signals you are getting it right

Builders ship without waiting for a review, agents generate against real intent, and the thinking travels without the meeting.

Signals you are stuck in meeting land

Context still lives in heads and slides, and the review is still the gate where it finally shows up.

Stakeholders without the theater

Execs still need confidence that quality, risk, and strategy are being upheld. Context infrastructure gives them a live view of the decisions in force, the briefs in flight, and the drift events being caught.

They get visibility without becoming a bottleneck.

Risks to watch

Decisions that go stale and no longer reflect intent, and registers that bloat until prompt size crowds out the work. The hygiene ritual and the metrics review exist to catch both.

A rollout you can start this month

Week 1: Create a decision register. Ten items: tone, ICP, dependencies, performance, security, logging, testing, rollout rules, design system usage, analytics.

Week 2: Add briefs for current initiatives. Keep them to one page. Link decisions.

Week 3: Pipe decisions and briefs into agent runs automatically. Stop manual pasting.

Week 4: Add drift alerts. When an agent or PR violates a decision, flag it.

Week 5: Hold a short decision hygiene meeting daily. Add new decisions, retire stale ones.

Week 6: Review metrics: rework, drift events, prompt size, cycle time. Tune decisions and briefs.
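The drift alerts from week 4 can start as a simple rule pass over a PR diff. A sketch, assuming each decision in the register carries a forbidden-pattern check; the rules, regexes, and sample diff here are hypothetical:

```python
import re
from dataclasses import dataclass

@dataclass
class DecisionRule:
    id: str
    description: str
    forbidden: str  # regex whose match indicates a violation

RULES = [
    DecisionRule("log-01", "No print-based logging in services", r"\bprint\("),
    DecisionRule("sec-02", "No hard-coded credentials", r"(password|secret)\s*=\s*['\"]"),
]

def drift_events(diff_text: str) -> list[str]:
    """Flag decision violations in a PR diff; each hit is one drift event."""
    events = []
    for rule in RULES:
        if re.search(rule.forbidden, diff_text, flags=re.IGNORECASE):
            events.append(f"[{rule.id}] {rule.description}")
    return events

diff = 'def handler():\n    password = "hunter2"\n    print("starting")\n'
for event in drift_events(diff):
    print("DRIFT:", event)
```

Regex rules will miss intent-level drift, which is the point of keeping a human confirm-and-adjust loop; they are a cheap first net, not the whole system.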

None of this requires a new offsite. It requires treating context as a product surface.

What almost building the wrong thing taught me

The product review bot idea was appealing because it mirrored something familiar. It also would have kept the bottleneck in place. The value of those legendary reviews was not the sparring. It was the transfer of judgment. Judgment comes from context. Context can be captured, structured, and distributed. When you do that, you do not need a simulated panel. You need a reliable way to put the CEO's thinking, the design lead's taste, and the engineering guardrails into the system your agents and engineers use every day.

That is why Brief is context infrastructure, not a meeting bot.
