
Bloomberg ran a piece about how AI coding agents are creating a "productivity panic." Engineers burning out, executives tracking "interactions per day" with Claude Code, people waking up at 5 a.m. to vibe code. The conclusion was that maybe the real productivity hack is restraint, knowing what not to build.
It's a well-reported article. It also mistakes the symptom for the disease.
Everything described in that piece is familiar: the compulsive overwork, the surveillance metrics, the gap between what executives think is happening and what employees actually experience. None of it is new. AI didn't create these dynamics. It made them impossible to ignore.
You're measuring the wrong thing
One company in the article monitors engineers' "interactions per day" with Claude Code. Another CEO pulls up agent bills and calls out people who aren't spending enough. A third has Claude itself publish weekly reports on each engineer's unproductive loops.
This is lines-of-code thinking with a new coat of paint. It's the same managerial instinct that produced keystroke monitoring, commit frequency dashboards, and the conviction that physical presence equals output. These are the metrics of managers who can't evaluate the actual work, so they measure the activity around it instead.
Swap "Claude Code interactions" for "Jira tickets closed" or "Slack messages sent" and nothing about the picture changes. The tool is different. The management failure is identical.
Buried in the same article, Intuit's CTO mentions that engineers are 30% more productive, measured by the velocity of code they actually produce and ship. That's an outcome metric. That's the thing worth measuring. It got one sentence in a piece that spent paragraphs on interaction counts and agent billing.
The C-suite gap is an org design problem
A Section survey found that 40% of C-suite executives said AI saved them at least eight hours a week. Meanwhile, 67% of non-managers said it saved them fewer than two hours.
Bloomberg frames this as executives having inflated expectations. The real explanation is simpler. Executives control their own calendars. They choose which tasks to hand to an agent. Nobody is asking them to also do their regular job at the same pace. When the Intuit CTO wakes up early to code with Claude, that's his choice and his schedule.
Non-managers don't get that flexibility. One CEO in the article actually says it plainly: employees are "implicitly being asked to find time to explore and experiment, but their day-to-day work expectations aren't changed to make space for that."
That's the whole problem in one sentence. Not a gap in AI capability. A gap in organizational design. The tool works the same for everyone. The freedom to use it well doesn't.
If you hand a developer an AI coding agent but don't reduce their ticket load, don't adjust sprint commitments, and don't give them dedicated time to learn the tool, you haven't given them a productivity boost. You've given them extra homework.
"Task expansion" is just missing process
The Berkeley study in the article found that when non-technical colleagues start vibe coding prototypes, engineers end up cleaning the output. Bloomberg calls this "task expansion" and treats it as a consequence of AI.
It's not. It's a consequence of shipping prototypes without a handoff process.
Product managers have always produced artifacts that engineers need to translate. Before AI, it was Figma mockups and specs that didn't account for edge cases. Engineers rebuilt those artifacts. The dynamic is identical. The artifact changed from a design file to a vibe-coded prototype.
The management challenge is the same as it's always been: define the handoff, set expectations about what "done" means at the prototype stage, and stop pretending a PM's demo is production-ready code. At Intuit, product managers building prototypes with Claude is actually a good thing, because "I want something like this" backed by a working demo is a more precise specification than any PRD ever written.
Busyware isn't an AI problem
The most provocative claim in the article is that AI-fueled productivity produces "busyware": minor updates nobody asked for, dashboards for an audience of one, half-baked demos that engineering must maintain.
Companies have been building things nobody needs since before software existed. Feature bloat is as old as product management. The marketing demo that engineering has to make real is a tale as old as cross-functional teams.
What actually changed is that the cost of building a bad idea dropped dramatically. When prototyping takes twenty minutes instead of two weeks, more ideas get built. Some of those ideas are bad. That's not a crisis. That's the expected outcome of cheaper experimentation.
The answer isn't restraint for its own sake. It's getting better at evaluating what was built. Kill the prototypes that don't pass the bar. Use cheap experimentation to find better ideas faster, not to accumulate features nobody wanted.
Companies that had good product judgment before AI agents still have it. Companies that didn't are now producing more bad ideas at higher speed. AI didn't break their judgment. It amplified the absence of it.

The pattern we keep repeating
The Berkeley finding that people work longer hours even while offloading work to agents is presented as surprising. It isn't.
Email was supposed to save time. The smartphone was supposed to free us from the desk. Slack was supposed to reduce meetings. Each tool increased capacity, management responded by increasing expectations, and people worked more hours, not fewer.
AI coding agents are following the exact same script. The tool isn't the problem. The organizational response to the tool is the problem.
What productive teams actually look like
I work with organizations deploying AI tools. The ones getting results look nothing like what Bloomberg describes.
They reduced sprint commitments when introducing coding agents, then measured whether output quality held. It usually did, with fewer hours worked.
They gave engineers actual dedicated learning time. Not "find time to explore on top of your existing work," but real blocks on the calendar with no other expectations.
They measure outcomes (features shipped, bugs resolved, customer-reported issues) rather than inputs like interactions per day or agent bills.
They defined clear handoff processes for vibe-coded prototypes. A PM's demo is an input to the engineering process, not a shortcut around it.
Nobody is being called out for not spending enough on their Claude Code bill, because that would be absurd.
The difference between these organizations and the ones in the Bloomberg piece isn't the AI tool. It's the management layer above it. Good management makes AI agents a productivity multiplier. Bad management makes them a surveillance vector and a burnout accelerator.
The real scarcity
Bloomberg is right that knowing what not to build matters. But that was true before AI coding agents, and it'll be true after whatever comes next. Editorial judgment, looking at a prototype and deciding it doesn't justify the investment, has always been the most valuable and least common skill in product organizations.
When the cost of building a proof of concept drops from two weeks to twenty minutes, that judgment becomes more important, not less. The organizations that develop it will build better products faster. The ones that don't will drown in busyware.
But the drowning isn't the AI's fault. It's the same management failure it's always been, wearing new technology and a new panic as a disguise.