Briefs

My commentary on AI, enterprise tech, and the future of work — with source articles linked.

When you credit AI for layoffs, employees take notes — and draw conclusions

The WSJ declared this week "The Week the Dread AI Jobs Wipeout Got Real." The proximate cause: Block — Jack Dorsey's fintech company — laid off roughly 1,000 people, and Dorsey was explicit that AI productivity improvements were a driver. The market responded predictably: the stock went up. Employees responded predictably too: with anxiety, anger, and a lot of posts on X.

What's notable isn't the layoffs themselves — companies restructure. What's notable is the explicitness. Dorsey named AI as the reason. That clarity may have helped the stock, but it did something else: it gave workers everywhere a concrete data point to file next to their own uncertainty. As one person put it on X: "everyone will point the finger at everyone else." That's what distrust looks like before it becomes resistance.

I've written before about the 50% of workers quietly using AI and not telling their employers. This is the other side of that coin. When the implicit deal becomes visible — "we're rolling out AI so we can reduce headcount" — the rational response for any remaining employee is to slow-roll adoption, hide productivity gains, and protect themselves. Not because they're bad actors, but because that's what rational people do when they don't trust the incentives.

This tension can't be managed away with messaging. It has to be managed with actual design — and honesty. The companies that get this right will be the ones that can credibly answer: what's in it for the people being asked to change? In a growing company, the answer can be genuine: we're expanding capacity, not shrinking headcount, and your productivity gains translate into leverage, scope, and growth. That's a real win-win — and it's worth being creative about how you structure it.

In a cost-cutting context, it's harder to make that case honestly. But trying to make it dishonestly is worse. Employees will sense it. Trust will erode. And the productivity you're trying to unlock won't materialize — because people who feel set up will quietly undermine what they don't believe in.

The transition is real. The tension is real. You can't manage it by pretending the tension isn't there — you have to name it, and build toward outcomes people can actually believe in.

Source article: The Wall Street Journal · "The Week the Dread AI Jobs Wipeout Got Real" · Feb 28, 2026

The 2028 scenario everyone should read — and why Anthropic takes it seriously

A research piece by Citrini Research went viral this week — 1,700 likes, 351 restacks on Substack, enough to spook markets on Monday (Dow -1.7%). The WSJ picked it up. Here's what it actually says, and why it matters to me.

The piece is a thought exercise, explicitly not a prediction. It's written as a June 2028 macro memo looking back at how things unraveled. The scenario in brief: in late 2025, agentic coding tools — the paper specifically names Claude Code and Codex — took a step-function jump in capability. By mid-2026, enterprise procurement teams had seen enough demos to ask a dangerous question: "What if we just built this ourselves?" The long tail of SaaS started collapsing. Companies most threatened by AI became AI's most aggressive adopters — not out of vision, but survival. Margins expanded. Stocks rallied. Profits funneled back into AI compute.

The problem was what happened next. White-collar workers who lost jobs didn't go back to spending. The consumer economy — 70% of GDP — withered. What the paper calls "Ghost GDP" emerged: productivity gains that showed up in national accounts but never circulated through the real economy. A negative feedback loop with no natural brake.

The paper closes: "But you're not reading this in June 2028. You're reading it in February 2026. The S&P is near all-time highs. The negative feedback loops have not begun... As a society, we still have time to be proactive. The canary is still alive."

I believe this scenario is plausible. More importantly, I believe most people in a position to do something about it aren't saying so out loud. Dario Amodei has — openly, repeatedly. He's talked about AI potentially compressing decades of scientific progress into years, and in the same breath about the obligation that creates. That honesty is rare, and it's a big part of why I want to be at Anthropic.

The canary is still alive. That's the window.

Sources: Citrini Research · "The 2028 Global Intelligence Crisis" · Feb 22, 2026  |  The Wall Street Journal · Feb 24, 2026

Enterprise AI buying just got more complex — and that's actually good for Anthropic

The WSJ reported this week that selling specialized AI software has gotten harder. Companies are slowing down, bringing Finance and Legal into the room early, and scrutinizing ROI more carefully. The underlying worry: if frontier models keep advancing, will the specialized tool they buy today still be relevant in two years?

It's a reasonable fear — and it's actually clarifying. Companies aren't buying less AI; they're buying more carefully. The purchases getting scrutinized are the specialized, point-solution tools. Frontier models sit in a different category: increasingly, they're the very thing that threatens to make those point solutions obsolete.

This is the environment I know well. As a CRO, I ran enterprise sales cycles that routinely took three months to a year or more. That means building consensus across Legal (contracts, data governance, liability), Finance (ROI modeling, TCO, budget cycles), IT/Ops (security, integration, change management), and the domain experts who actually have to use the thing. That's not a sequence — it all has to move at once, or deals stall.

The conversations Anthropic is having now — especially with enterprise accounts weighing full Claude adoption vs. integrating Claude into existing systems vs. buying third-party AI tools — map directly to what I've navigated. I can help customers think through the macro question ("go all-in on Claude, integrate it selectively, or evaluate what to build vs. buy"), and I can run the room when Finance is asking about IRR and Legal is asking about data residency in the same meeting.

The buying cycle got more complex; navigating it well is increasingly what separates serious enterprise AI deployments from stalled ones.

Source article: The Wall Street Journal · Feb 22, 2026

AI mandates create "workslop" — fixing it requires system-level change

HBR introduced a useful term: workslop — the low-quality, AI-generated output that floods inboxes when organizations mandate AI use without thinking through the implications. The pressure is real: boards want leaner teams, execs feel the push to show AI ROI, and the implicit message is "do more with less."

The authors argue this isn't a people problem — it's a system problem. Their three-level fix:

  • Culture: Rebuild trust through actual collaboration — feedback, questions, dialogue. Not just AI outputs.
  • Practice: Create clear norms for when/how to use AI, with review processes that reinforce human judgment rather than offload it.
  • Accountability: Someone needs to own the AI-human integration. The authors suggest "forward deployed AI collaboration architects" who understand both the tech and the people.

That last point caught my attention — it's basically describing a role focused on making AI actually work inside organizations. The irony they point out: to make AI work at work, we need to get better at being human.

Source article: Harvard Business Review · Niederhoffer, Robichaux & Hancock · Jan 2026

Employees are hiding their AI productivity gains — it's human nature

Ethan Mollick (Wharton) shared a striking finding on the Prof G podcast: about 50% of American workers are using AI and reporting 3x productivity gains on tasks where they use it. But here's the catch — many of them aren't telling their employers. They're not using corporate AI tools. They're keeping it quiet.

Why? Because if you and your coworkers prove you're 3x more efficient, some of you may be part of the next RIF. Often the calculated move is to pocket the gains and stay under the radar.

This is a real problem, and it won't be solved with mandates or monitoring. It requires honesty: leaders need to make the case — credibly — that AI adoption benefits both the company and the employee. That's easier at growing companies ("we're expanding, let's make you more productive and reward you") than at struggling ones where a RIF feels inevitable.

The adoption gap isn't just a training problem; it's a trust problem.

Source article: Prof G Podcast · Ethan Mollick (Wharton) · Feb 12, 2026

Pro-worker AI isn't automatic

MIT's Daron Acemoglu made a point this week that stuck with me: AI that actually helps workers requires deliberate design. It won't happen by default.

Three things that resonated: build domain-specific systems aligned with how experts actually work, design for skill development (not just task completion), and add friction to prevent blind reliance on AI outputs.

This maps to how I think about working with AI: collaborate with the tools rather than rely on them blindly. The best tools I've worked with don't try to replace judgment — they surface context and let the human decide. The worst ones optimize for "AI did the thing" without asking whether the thing was right.

Source article: MIT Sloan