A growing number of ChatGPT users have switched to Claude, with a 1,487% increase in sessions in March alone. The article explores how this shift impacts the workplace and career strategies.
The surge reflects a broader realignment in the AI tools market as professionals discover that different models excel at different tasks, and that the "default" chatbot may not be the best fit for serious work.
A report analyzing Claude usage patterns in February 2026, focusing on learning curves and how experienced users achieve greater success with AI tools through developed habits and strategies.
Andrej Karpathy's vision for the future of software development, where AI agents handle coding through autonomous loops, research, and iterative problem-solving.
BlackRock CEO Larry Fink warns in his annual letter that the AI boom threatens to make wealthy companies and investors even richer while exacerbating inequality, unless more individuals can share in market gains.
Bloomberg Opinion argues the White House National AI Legislative Framework is a blueprint for AI companies to carry on with business as usual, targeting states enacting their own AI rules.
Meta acquires the team behind AI startup Dreamer, including co-founder Hugo Barra, to join Meta's Superintelligence Labs group focused on building AI agents.
"He said he was leaving me for someone smarter, but I did not expect it to be a chatbot."
A 1,487% surge in Claude sessions signals that the era of one-size-fits-all chatbots may be ending. Workers are choosing tools that match how they actually think and produce, not just the ones their company defaults to.
A technical walkthrough of building a custom voice AI agent for a mechanic shop, featuring RAG pipeline development, Vapi integration, and real-world voice tuning.
Simon Willison discusses recent breakthroughs in running massive Mixture-of-Experts language models on consumer hardware through streaming expert weights from storage.
Anthropic's own data shows experienced Claude users achieve 10% higher success rates than newcomers, and that the learning curve takes six months to climb. In the same week, Forbes reports a 1,487% surge in users switching from ChatGPT to Claude. Connect those two data points and you get the story beneath all of today's stories: AI tool mastery is becoming a career differentiator, the tools themselves are in violent flux, and nearly everyone switching platforms is starting that learning curve from zero.
Today's Headlines
The Great Migration
Claude's 1,487% session surge is real, but the numbers need context. Larridin's AI measurement platform shows Claude overtook ChatGPT in daily active users during the first week of March, with Claude users averaging 38 sessions weekly versus 18 for ChatGPT. The migration was catalyzed by a combination of political and ethical concerns about OpenAI and genuine performance advantages in coding and analysis tasks. In corporate settings, Claude now drives twice as many sessions as ChatGPT.
Anthropic's Economic Index reveals what happens after the switch. Analyzing 1 million conversations from February 2026, the report finds that 49% of all jobs have seen at least 25% of their tasks performed using Claude. But the headline number is the learning curve: users with 6+ months of experience achieve 3-4 percentage points higher success rates in controlled regressions. API power users respond to task complexity twice as strongly as casual users, with Opus usage rising 2.8 percentage points per $10 increase in hourly wage. The top 10 tasks declined from 24% to 19% of conversations since November, suggesting experienced users find increasingly diverse applications.
Fink sees the macro risk. BlackRock's CEO warns in his annual letter that since 1989, a dollar invested in US stock markets grew 15x faster than median wages. AI accelerates this divergence: workers with AI-related skills already command wage premiums as high as 43%, while one-third of BlackRock survey respondents lack $500 for emergencies. Fink's proposed solution, letting Social Security invest in diversified portfolios the way public pensions do, is his clearest policy intervention yet.
The Tooling Revolution
Karpathy went from writing 80% of his code to writing none. In a No Priors interview, the former Tesla AI lead describes December 2025 as the inflection point where he stopped coding entirely and began directing autonomous agent loops. His AutoResearch project discovered hyperparameter improvements overnight, including weight decay and Adam beta optimizations, that survived two decades of his own manual tuning. His new bottleneck: "your ability to direct agents," with token throughput replacing lines of code as the productivity metric.
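Karpathy's AutoResearch isn't open source, but the shape of an autonomous research loop is simple to sketch: propose a configuration, evaluate it, keep the best, repeat with no human in the loop. A toy stand-in (the search space and scoring function here are invented for illustration):

```python
import random

def autoresearch_loop(score, space, budget=50, seed=0):
    """Toy autonomous research loop: sample a config from the search
    space, score it, keep the best seen so far, repeat unattended."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        s = score(cfg)  # in real life this is an expensive training run
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Hypothetical objective with its optimum at weight_decay=0.1, beta2=0.95.
space = {"weight_decay": [0.0, 0.01, 0.1, 0.3],
         "adam_beta2": [0.9, 0.95, 0.999]}

def score(cfg):
    return -abs(cfg["weight_decay"] - 0.1) - abs(cfg["adam_beta2"] - 0.95)

best, _ = autoresearch_loop(score, space)
```

The real version swaps the scoring function for overnight training runs, which is why the human's job collapses to defining the space and reading the results in the morning.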
Anthropic replaced prompt engineering with skills architecture. Based on an insider post from an Anthropic builder, the new paradigm treats each AI capability as a structured environment the model navigates, not a monolithic system prompt. Four pillars: progressive disclosure (fetch data on demand instead of reading entire codebases), failure-driven design (document gotchas, not standard patterns), continuous calibration through persistent logs, and scoped guard rails that activate contextually instead of globally.
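The post describes the pillars abstractly, but progressive disclosure in particular has a concrete shape: give the model a cheap index up front and load file contents only on request. A minimal sketch (class and method names are hypothetical, not Anthropic's API):

```python
from pathlib import Path

class ProgressiveSkill:
    """Hypothetical skill wrapper: expose a lightweight index first,
    and fetch full file contents only when the model asks."""

    def __init__(self, root: str):
        self.root = Path(root)

    def index(self) -> list[str]:
        # Step 1: the model sees only file names, not contents,
        # keeping the context window small.
        return sorted(str(p.relative_to(self.root))
                      for p in self.root.rglob("*.py"))

    def fetch(self, rel_path: str) -> str:
        # Step 2: read a file only on explicit demand.
        return (self.root / rel_path).read_text()
```

The contrast with a monolithic system prompt is the point: instead of stuffing an entire codebase into context, the model navigates a structured environment one request at a time.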
Pencil turns sketches into working apps in 15 minutes. Matt Maher demonstrates a progressive narrowing pattern: 6 parallel agents create design concepts, you pick a direction, 3 agents refine it, 6 agents explore variations across states, then Claude Code builds the final Next.js app via MCP handoff. The key insight: separating UI from business logic makes AI-driven development dramatically easier. Two-way sync means code fixes update the design file automatically.
Neil Kakkar's productivity thesis: "It's the infrastructure, not the AI." After six weeks at Tano, Kakkar cut server restart time from 60 seconds to under 1 second, scaled from 2 branches to 5 simultaneous worktrees, and shifted from individual implementer to "manager of agents." His highest-leverage work was building enabling infrastructure, not features.
Agents in the Wild
Mozilla wants to build Stack Overflow for AI agents. Mozilla.ai's cq project addresses a stark statistic: Stack Overflow peaked at 200,000+ questions monthly in 2014 and collapsed to 3,862 in December 2025, matching its launch-month volume. LLMs trained on Stack Overflow subsequently hollowed out that community, a process Mozilla calls "matriphagy." cq proposes a shared knowledge commons where agents contribute and query collective learnings, with trust earned through demonstrated use across multiple codebases rather than authority.
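cq's trust mechanism isn't specified in detail here, but the core idea, trust earned through demonstrated use across distinct codebases rather than contributor authority, can be sketched as a toy model (this is an illustration, not cq's actual implementation):

```python
from collections import defaultdict

class KnowledgeCommons:
    """Toy agent knowledge commons: an entry's trust score is the
    number of *distinct* codebases in which agents applied it."""

    def __init__(self):
        self.entries = {}                # entry_id -> contributed answer
        self.used_in = defaultdict(set)  # entry_id -> codebases that used it

    def contribute(self, entry_id: str, answer: str):
        self.entries[entry_id] = answer

    def record_use(self, entry_id: str, codebase: str):
        # Trust accrues from use, not from who contributed;
        # repeat use in the same codebase doesn't inflate the score.
        self.used_in[entry_id].add(codebase)

    def query(self):
        # Rank entries by how widely they have been demonstrated to work.
        return sorted(((eid, len(cbs)) for eid, cbs in self.used_in.items()),
                      key=lambda pair: -pair[1])
```

Keying trust on distinct codebases is what distinguishes this from Stack Overflow's vote counts: an answer that works in many environments outranks one upvoted many times in a single context.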
A developer built an AI receptionist that handles 100+ missed calls per week. Kedasha Kerr's "Axle" uses a RAG pipeline with 21 structured documents, 1024-dimensional vectors via Voyage AI, and Claude Sonnet for natural conversation. The hardest part wasn't the code but getting the ElevenLabs voice to sound like "someone who works at a mechanic shop and not a Silicon Valley startup." Each missed call represents $50 to $2,000 in lost revenue.
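Kerr's exact pipeline isn't reproduced here, but the retrieval step of a RAG system like Axle reduces to cosine similarity between a query embedding and the stored document embeddings. A self-contained sketch, with toy 4-dimensional vectors standing in for the 1024-dimensional Voyage AI embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, k=2):
    """Return the k document texts whose embeddings best match the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

# Hypothetical shop documents; real embeddings come from an embedding model.
docs = [
    {"text": "oil change pricing",     "vec": [0.9, 0.1, 0.0, 0.0]},
    {"text": "brake pad replacement",  "vec": [0.1, 0.9, 0.0, 0.0]},
    {"text": "shop hours",             "vec": [0.0, 0.0, 1.0, 0.0]},
]
# A query about oil changes surfaces the pricing doc first.
print(retrieve([1.0, 0.0, 0.0, 0.0], docs, k=1))  # ['oil change pricing']
```

The retrieved texts are then stuffed into the LLM's prompt as grounding context, which is why Kerr's 21 structured documents matter more than any model choice: retrieval quality is bounded by what's in the store.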
Voice agents can complete tasks OR deliver good experiences, rarely both. ServiceNow and Hugging Face's EVA framework tested 20 voice systems across 50 airline scenarios and found a fundamental tradeoff invisible to existing benchmarks: high-accuracy agents tend to deliver poor experiences, and vice versa. The dominant failure mode is named-entity transcription, where a single misheard character cascades into complete authentication failure.
Policy, Power, and Infrastructure
The White House AI framework is regulation in name only. Bloomberg's Dave Lee argues the National Policy Framework unveiled March 20 is designed primarily to preempt state-level AI regulation, the only mechanism currently constraining AI companies. The framework amounts to an industry wish list.
Meta's AI acqui-hire spree continues. Meta acquired the entire Dreamer startup team, including former Stripe CTO David Singleton, for its Superintelligence Labs. Dreamer raised $56 million at a $500 million valuation in November 2024. This follows Meta's $2 billion Manus acquisition in December and the Moltbook deal in March.
MoE models now run on consumer hardware. Simon Willison highlights "streaming experts," a technique that runs 397-billion-parameter and 1-trillion-parameter models by streaming expert weights from SSD storage. Dan Woods ran Qwen3.5-397B on 48GB RAM; another user ran Kimi K2.5 (1T parameters) on a MacBook Pro at 1.7 tokens/second. The gap between datacenter and consumer AI continues to narrow.
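The production implementations are far more involved, but the streaming-experts idea itself is simple: keep all expert weights on disk in a known layout, and seek directly to the few experts the router selects for each token. A minimal sketch under an invented file layout (a flat array of fixed-size float32 expert blocks):

```python
import struct

EXPERT_FLOATS = 4  # tiny stand-in for the millions of weights per expert

def write_experts(path, experts):
    """Pack every expert's weight vector contiguously into one binary file."""
    with open(path, "wb") as f:
        for vec in experts:
            f.write(struct.pack(f"{EXPERT_FLOATS}f", *vec))

def load_expert(path, idx):
    """Seek straight to one expert's block and read only those weights.

    RAM holds just the router-selected experts for the current token,
    never the full parameter count."""
    offset = idx * EXPERT_FLOATS * 4  # 4 bytes per float32
    with open(path, "rb") as f:
        f.seek(offset)
        return list(struct.unpack(f"{EXPERT_FLOATS}f",
                                  f.read(EXPERT_FLOATS * 4)))
```

Because an MoE layer activates only a small fraction of its experts per token, the working set fits in laptop RAM while the full model stays on SSD, which is how a 1-trillion-parameter model produces tokens, slowly, on a MacBook.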
$2.7 trillion in US data center investment by 2030. After Trump urged tech companies to build their own power plants, executives signed a White House pledge to pay full energy costs. The Washington Post reports this "private grid" approach could protect ratepayers from cost-shifting as AI infrastructure scales.
Apple's WWDC 2026 promises an AI overhaul. iOS 27 and macOS 27 arrive June 8-12 with a standalone Siri app, "Ask Siri" button, and a new "Core AI" framework replacing Core ML.
Quick Hits
ProofShot lets AI coding agents capture video recordings and error logs as "visual proof" of their work, generating PR-ready artifacts.
ANEMLL runs LLMs on Apple's Neural Engine at 47-62 tok/s for 1B models and ~9 tok/s for 8B, using a fraction of GPU power draw.
Claude Code Cheat Sheet is a comprehensive, printable reference for v2.1.84 covering shortcuts, MCP, memory, and CLI flags.
Silicon Valley's DC Summit exposed the tension between Gulf state AI investment ($2 trillion pledged) and the Iran war's destabilizing effect on that funding.
The Throughline
The thread connecting today's stories is the emergence of AI expertise as the new professional class divide, and the paradox that the tools creating this divide won't sit still long enough for most people to master them.
Anthropic's Economic Index provides the hard evidence: experienced users don't just use AI more, they use it differently. High-tenure users tackle higher-complexity tasks, diversify their applications faster, and achieve measurably better outcomes. But that learning curve takes six months. Meanwhile, Forbes reports users are abandoning ChatGPT for Claude at record rates, resetting their accumulated expertise to zero. Karpathy, one of the most experienced practitioners alive, describes a categorical workflow shift as recently as December 2025. If even Karpathy is starting over, what hope does the average knowledge worker have of building durable AI skills?
This creates a troubling dynamic. Neil Kakkar's thesis, that the highest-leverage work is building infrastructure, not using AI directly, suggests the real winners won't be the best prompters or even the best coders. They'll be the people who build the systems that let AI agents operate at scale. That's a much smaller group. And it maps uncomfortably well to Fink's warning about wealth concentration: when AI multiplies the output of those who already have infrastructure and capital, the gap between the equipped and the unequipped widens faster than any learning curve can close.
Mozilla's cq project and the EVA framework both represent attempts to address this from different angles. cq asks: what if agents could share hard-won knowledge collectively instead of each one learning in isolation? EVA asks: how do we even know if these agents are working well? Both are early-stage answers to a problem that will only intensify as AI agents move from developer tools to business-critical infrastructure handling real phone calls, real airline rebookings, real financial decisions.
The Bigger Picture
We are watching the emergence of two parallel AI economies. In one, a small cohort of experienced practitioners, the Karpathys and Kakkars, architect autonomous loops that generate compounding returns. In the other, a much larger population cycles between AI tools that render last month's expertise partially obsolete. Anthropic's own data quantifies this: US state-level AI adoption is converging (the top 5 states' share dropped from 30% to 24%), but global inequality is increasing (top 20 countries rose from 45% to 48%). The projected US convergence timeline quietly extended from 2-5 years to 5-9 years.
The skills architecture that Anthropic is promoting, where AI capabilities are structured environments rather than monolithic prompts, is itself a bet on the experienced-user premium. It assumes practitioners who can build and maintain skill libraries, wire up MCP servers, and design progressive disclosure systems. That's powerful if you're already in the cohort. It's another barrier if you're not. When the Claude Code Cheat Sheet for a single tool runs to eight color-coded sections on landscape A4, the notion that AI is "making technology accessible to everyone" deserves serious scrutiny.
Fink's proposal to let Social Security invest in diversified portfolios is an acknowledgment that traditional employment income cannot keep pace with capital returns, and AI is about to make that disparity worse. The question is whether AI tool democratization, streaming trillion-parameter models on a MacBook, running agents on Apple's Neural Engine, building apps from pencil sketches, can close the gap faster than AI expertise concentration widens it. Today's evidence suggests the race is closer than the optimists claim and further from resolved than the pessimists fear.
What to Watch
The 6-month learning curve as hiring signal. Anthropic's data suggests AI tool tenure maps directly to productivity. Watch for whether employers start valuing platform-specific experience, and whether that creates lock-in effects that undermine the cross-platform "AI resilience" Forbes recommends.
Agent knowledge commons vs. walled gardens. Mozilla's cq and ProofShot both assume agents should share knowledge openly. But if AI expertise is the new professional edge, companies have strong incentives to keep their agent learnings proprietary. The contest between shared and siloed agent knowledge will shape whether AI productivity gains distribute broadly or concentrate.
Voice AI's accuracy-experience tradeoff at scale. EVA's finding that task completion and user experience are inversely correlated in voice agents has immediate implications as developers like Kedasha Kerr deploy real-world AI receptionists. The first high-profile voice AI failure, a misheard name cascading into a lost sale or worse, will test whether the technology is ready for customer-facing roles.
Go Deeper
The End of Coding: Andrej Karpathy on Agents, AutoResearch, and the Loopy Era of AI - Karpathy's AutoResearch project discovered hyperparameter improvements overnight that he missed in two decades of manual tuning. The study guide covers his framework for "claws" (persistent, looping agents), distributed research using blockchain principles, and why he believes digital transformation will vastly outpace physical robotics.
Anthropic Just Broke Prompt Engineering (And Replaced It With This) - A detailed breakdown of Anthropic's four-pillar skills architecture, including the novel CCD Paradigm (Continuous Calibration and Development) that transforms stateless chat into persistent agents through append-only logs and external verification tools.
I Designed a Full App in Pencil, Then Claude Code Built It - Matt Maher's progressive narrowing pattern (6 agents ideate, 3 refine, 6 explore states, then Claude Code builds) produced a working Next.js weather app in 15 minutes. The study guide details the two-way sync between design and code and why separating UI from business logic is the key enabler.