Remote Killed the Shoulder Tap. AI Is Breaking It Again.
How structured context became the new shoulder tap
Product work used to ride on tiny, informal moments. You heard a designer mutter about a confusing flow. You caught a sales rep on a call and realized a deal was at risk. A PM tapped an engineer to say, "That customer on the east coast is stuck on onboarding, can we swap the order of these fields?" Those shoulder taps moved context faster than any ticket ever could.
Remote work erased most of that layer. The Allen curve says communication plummets as distance grows. Microsoft researchers measured remote collaboration and saw cross-team ties fray and rich, ad hoc conversations shrink. Studies on relational communication found people felt more transactional and less connected after the abrupt shift. The evidence matches what teams felt: the casual backchannel that kept product intent aligned went quiet.
What changed day to day
- Simple clarifications turned into Slack threads, then meetings.
- A two-minute nudge became a 30-minute calendar slot.
- PMs bundled feedback into weekly reviews because interrupting felt expensive.
- Engineers optimized for uninterrupted blocks, which meant less context-sharing in flight.
The result was friction on every handoff. A story that used to get shaped in three hallway conversations now lived in a doc and a queue. By the time code shipped, the original nuance was gone. Teams compensated with ceremony: longer specs, more screenshots, Looms for every demo. Those efforts helped, but they never recreated the fast path that proximity gave.
The hidden cost showed up as rework. Tickets shipped with partial intent. UX tone mismatched the buyer. Edge cases got discovered after launch because the "oh, and make sure it works for resellers" reminder never happened. Late feedback arrived after code froze. Cycle time metrics looked fine; the real loss was in alignment and quality of the first pass.
Returning to the office did not fully fix it. Hybrid days clustered collaboration, but the informal layer stayed thin. People still defaulted to async. The habit of over-structuring every interaction stuck. The shoulder-tap muscle had atrophied.
Then AI coding agents arrived and multiplied the speed gap. An engineer can sit with Cursor or Claude and ship a feature before lunch. The agent will happily scaffold thousands of lines based on whatever prompt it gets. If the PM context is late or compressed, the agent builds the wrong thing quickly and confidently.
Failure mode in the wild
A team building compliance workflows asked their agent to add "client notifications." The repository contained a mix of consumer-style components and enterprise pages. The agent matched the patterns it saw and produced playful in-app toasts with emoji-rich copy. The missing pieces were obvious to any PM on the account: regulated clients, audit trails, tone constraints, delivery guarantees. None of that made it into the prompt. Rework meant ripping out the UI, wiring up email delivery with retention logging, and rewriting the copy. Engineering velocity looked great until you counted the do-over.
Other patterns keep repeating:
- Generic scaffolding: the agent adds dashboards, toast systems, and preference centers because the repo contains them, not because the user needs them now.
- Tone mismatch: consumer patterns leak into B2B flows. Legal tech gets smiley microcopy. Healthcare gets celebratory confetti on failure.
- Non-functional blind spots: performance budgets, logging, and audit trails vanish because the agent does not see them in the immediate prompt.
- Decision thrash: one task uses Jest, the next pulls in Vitest; one feature ships REST, the next quietly adds GraphQL because it seemed cleaner.
- Dependency creep: the agent installs new UI kits or HTTP clients because they solve the immediate task, ignoring the team's standards.
Why the usual patches fall short
- Longer specs: they age fast, they still miss the tacit "why," and agents rarely parse them end to end.
- More meetings: they slow the loop and still do not reach the agent inside the IDE.
- Looms and walkthroughs: great for demos, weak for structured, recallable decisions the agent can apply.
- Ad hoc prompting: engineers translate PM intent under time pressure, compressing nuance and omitting rationale.
- Knowledge bases: helpful for humans, but useless if the agent cannot query them in a structured way at decision time.
What fixed looks like
Context has to be accessible, structured, and alive. The modern shoulder tap is not a chat ping; it is a system that lets the agent consume constraints and intent before it generates code.
1) Structure the context the agent can read
- Product: ICP, user archetypes, tone rules, accessibility expectations, onboarding priorities, success definitions.
- Business: pricing model, regulated vs unregulated segments, SLAs, risk posture, compliance boundaries.
- Technical: stack decisions, API contracts, performance budgets, logging standards, data residency rules, privacy choices.
- Design: component library rules, brand voice, motion guidelines, layout constraints.
- Non-functional: observability expectations, error handling defaults, security controls, rollback plans.
Keep this in a format the agent can consume directly. Not buried in Notion pages or scattered docs. Short, structured artifacts with clear keys and values. Update them when reality changes, not once a quarter.
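One way to keep these artifacts machine-readable is a small, flat file of keys and values. A minimal sketch in Python; the sections and keys here are illustrative examples pulled from this article, not a standard schema:

```python
# Hypothetical structured-context artifact: short sections, explicit values.
# The exact layout does not matter; what matters is that an agent can parse it.
CONTEXT = {
    "product": {
        "icp": "mid-market compliance teams",
        "tone": "concise and direct; no playful copy for enterprise buyers",
    },
    "technical": {
        "testing": "Jest",
        "data_fetching": "React Query",
        "api_style": "REST",
    },
    "non_functional": {
        "list_endpoint_p95_ms": 300,
        "client_message_retention_days": 365,
    },
}

def render_preamble(context: dict) -> str:
    """Flatten the artifact into a short preamble an agent can consume."""
    lines = []
    for section, entries in context.items():
        for key, value in entries.items():
            lines.append(f"{section}.{key}: {value}")
    return "\n".join(lines)

print(render_preamble(CONTEXT))
```

Stored as JSON or YAML, the same structure is trivially diffable, which makes "update them when reality changes" a pull request instead of a doc edit.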
2) Turn decisions into first-class objects
Write decisions like you write code: small, explicit, versioned. Examples:
- We notify by email for regulated events; no in-app toasts for legal clients.
- We log every client-facing message with a retention policy of 365 days.
- We use Jest for testing and React Query for data fetching.
- We never ship playful copy to enterprise buyers; default tone is concise and direct.
- We do not add new dependencies without approval if an equivalent exists in the stack.
These constraints reduce the search space for the agent. They also remove the guesswork for a human pairing with it. When the agent tries to deviate, it should surface a question instead of pushing code.
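To make decisions checkable rather than just readable, each one can be a small versioned record that a wrapper tests agent output against. A naive sketch with hypothetical field names and a keyword check; real drift detection would be smarter, but even this turns a silent deviation into a question:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """A small, explicit, versioned constraint the agent must respect."""
    id: str
    version: int
    rule: str
    forbidden_terms: tuple  # tokens that signal a likely violation

# Illustrative decisions lifted from the examples above.
DECISIONS = [
    Decision("notify-regulated", 1,
             "Email only for regulated events; no in-app toasts for legal clients.",
             ("toast",)),
    Decision("test-framework", 2,
             "Use Jest for testing.",
             ("vitest",)),
]

def check_output(agent_output: str) -> list:
    """Return the ids of decisions a piece of agent output appears to violate."""
    text = agent_output.lower()
    return [d.id for d in DECISIONS
            if any(term in text for term in d.forbidden_terms)]

print(check_output("Add a success toast after submission"))  # → ['notify-regulated']
```

Versioning the records means a decision can be tightened later (as with the tone rule in the day-zero example below) without losing the history of why it changed.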
3) Add lightweight rituals to keep context fresh
- Ten-minute decision review at the start of a work block. Capture new choices as decisions, not meeting notes.
- After a customer call, record the two or three implications that affect current work. Add them to the decision set.
- When a feature ships, add the gotchas discovered during QA to the constraints list so the agent avoids them next time.
- Rotate a weekly audit of decisions to prune stale ones and highlight new ones to the team.
None of this needs a big meeting. It needs habit and a place to put the information that the agent and humans can both read.
4) Instrument drift and prompt bloat
- Drift detection: when the agent suggests patterns outside your decisions, flag and log it. Example: it proposes GraphQL when the standard is REST. Approve or reject once; the system learns.
- Prompt bloat control: track how much context you stuff into each request. If prompts are ballooning because you do not trust the agent to remember standards, you have a context distribution problem.
- Rework ratio: measure how often agent output is rewritten due to missing context. Track by feature, not by file.
- Time to clarity: time from "we need X" to "agent has the right constraints to start." Shrink this by improving decision availability, not by adding meetings.
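These metrics can start as a few counters over a simple event log. A sketch of the rework ratio, with a hypothetical event shape of `(feature, event_kind)` tuples; tracking by feature rather than file keeps the signal tied to intent:

```python
from collections import defaultdict

# Hypothetical event log. Kinds used here: "shipped" and
# "reworked_missing_context"; a real log would also carry drift flags.
events = [
    ("export-footer", "shipped"),
    ("notifications", "shipped"),
    ("notifications", "reworked_missing_context"),
]

def rework_ratio(events) -> dict:
    """Rework attributed to missing context, tracked per feature."""
    shipped = defaultdict(int)
    reworked = defaultdict(int)
    for feature, kind in events:
        if kind == "shipped":
            shipped[feature] += 1
        elif kind == "reworked_missing_context":
            reworked[feature] += 1
    return {f: reworked[f] / shipped[f] for f in shipped}

print(rework_ratio(events))  # → {'export-footer': 0.0, 'notifications': 1.0}
```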
5) Give PMs visibility into the IDE loop
The PM should be able to inject context into an agent session without waiting for a ticket cycle or a meeting. That can be a decision toggle, a constraint update, or a quick note tied to the task. The engineer stays unblocked, the agent stays aligned, and the PM regains influence on the first pass.
How this looks in practice
Day zero: a PM hears from a customer that export files must include a new compliance note. Instead of writing a paragraph in Slack, the PM adds a decision: "All exports include compliance footer text provided by legal, immutable after generation." The agent sees it before generating the export flow and includes the footer. QA catches tone issues; the PM tightens the tone rule. No rework cycle.
Week one: the team picks React Query and Jest as defaults. They add them as decisions. When the agent tries to introduce Axios or Vitest, it flags a suggestion instead of pulling the package. The team accepts or rejects. Dependency creep stops.
Week two: a customer call reveals that in-app toasts are acceptable for sandbox accounts but never for production tenants. The PM adds that nuance. The agent now branches behavior based on environment without being told in the prompt each time.
Week three: the team realizes performance budgets are missing. They add: "All list endpoints must return in under 300ms at P95 with pagination." The agent starts choosing pagination and indexing patterns that fit the budget without being reminded.
Before and after
Before: PM feedback is bundled into a weekly review. The engineer prompts the agent with partial context. The agent scaffolds a feature with playful copy and missing audit logs. QA flags issues. PM reopens the ticket. Engineer rewrites half the code. Two weeks of motion for a feature that should have shipped in two days.
After: PM adds decisions in line with customer calls. Engineer prompts the agent with the decision set available. The agent ships the feature with email notifications, audit logging, and the right tone. QA verifies minor details. The feature ships in two days with no rewrite.
A pragmatic rollout plan
Week 1: Create a decision register. Seed it with ten to twenty choices that actually affect current work. Keep it in plain text or JSON so agents can read it. No jargon.
Week 2: Wire the decision set into your agent workflow. Pass it as context automatically. Do not rely on humans to paste it.
Week 3: Add drift tracking. When the agent suggests something outside the decisions, capture it. Decide once. Update the decision set or block the change.
Week 4: Add a daily ten-minute decision review. Add or retire decisions. Keep it light.
Week 5: Add a short PM injection path: a way for PMs to add a constraint that is live within an hour, not a sprint.
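The Week 2 wiring can be a thin wrapper that prepends the register to every agent request, so nobody pastes anything. A sketch assuming a plain-JSON register file; the file name and record fields are hypothetical:

```python
import json

def load_register(path: str) -> list:
    """Read the decision register; plain JSON so agents and humans share it."""
    with open(path) as f:
        return json.load(f)

def build_prompt(task: str, register: list) -> str:
    """Prepend active decisions to the task so the agent sees them first."""
    header = "\n".join(f"- {d['rule']}" for d in register if d.get("active", True))
    return f"Team decisions (must follow):\n{header}\n\nTask: {task}"

# Illustrative register contents; in practice, load_register("decisions.json").
register = [
    {"id": "api-style", "rule": "REST only; do not add GraphQL.", "active": True},
    {"id": "old-rule", "rule": "Retired constraint.", "active": False},
]
print(build_prompt("Add client notifications", register))
```

The same `active` flag gives the Week 4 review a cheap way to retire decisions without deleting their history, and the Week 5 PM injection path is just an append to the same file.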
Signals you are improving
- Fewer rewrites attributed to missing context.
- Shorter prompts because standards live in the decision set.
- Consistency in dependencies and patterns across features.
- QA finds fewer tone and compliance issues.
- PMs spend less time re-explaining user intent and more time refining it.
Signals you are not there yet
- Engineers paste walls of text into prompts because they do not trust shared context.
- Agents introduce new dependencies every week.
- PMs learn about output only at sprint review.
- QA keeps catching the same category of errors: tone, logging, accessibility.
- Decision docs become stale and nobody can tell which ones matter.
The new shoulder tap
The old shoulder tap was a two-minute interruption that carried nuance without paperwork. The modern version is structured context that the agent and the human can both use. It is fast because it is pre-baked. It is reliable because it is explicit. It restores the PM voice inside the IDE loop without slowing down the engineer.
Engineers keep the speed they get from Cursor, Claude, or Copilot. PMs regain influence on the first pass. The organization ships features closer to the customer need on the first try. The informal glue that remote work dissolved comes back as a system, not a hope that people will talk more in hallways.