It came earlier than expected. The Atlantic examines Anthropic CEO Dario Amodei's escalating confrontation with the Department of Defense, drawing direct parallels to Oppenheimer's crisis of conscience after building the bomb. As the Pentagon designates Anthropic a “supply chain risk” and Anthropic sues back, the article asks whether AI utopianism can survive contact with state power.
Morgan Stanley analysts warn that an imminent AI leap in 2026 will reshape industries faster than markets expect, with most businesses unprepared for the disruption ahead.
Accenture CEO Julie Sweet has made AI adoption a prerequisite for advancement, signaling a fundamental shift in enterprise expectations for white-collar workers.
Meta has delayed the release of its AI model code-named Avocado to at least May, after internal tests showed it fell short of leading models from Google, OpenAI, and Anthropic in logical reasoning, programming, and writing.
Andrej Karpathy's Auto Researcher, Moldbook's emergent AI agent values, the Anthropic-Pentagon legal battle, cortical labs' neurons playing Doom, and simulation theory.
Europe may have missed the consumer AI wave, but a new generation of industrial-strength AI companies is emerging from the continent's factories, research labs, and deep-tech talent pools.
Morgan Stanley identifies three areas of surging AI job demand — even as revenue hasn't caught up to the hiring spree. The gap between investment and returns raises questions about sustainability.
In under three minutes, on a MacBook Pro with no GPU and no jailbreak, a researcher had a RAG system confidently reporting fabricated financial data. A practical demonstration of how attackers inject poisoned documents into knowledge bases.
With Meta's Avocado model delayed and $14 billion already spent building an in-house AI team, the question of whether Zuckerberg will license Google's Gemini highlights the brutal economics of frontier model development.
The episode explores how the U.S. and Israel are using AI to identify targets, a new condition called “AI brain fry” among workers, and Grammarly using Casey's identity in an AI feature without consent.
The Pentagon issued an ultimatum to Anthropic on February 23: remove all restrictions on military use of Claude, or face consequences. Dario Amodei held two red lines — no mass surveillance of Americans, no autonomous killing without human oversight. The Pentagon responded with a “supply chain risk” designation never before used against an American company. Meanwhile, Morgan Stanley warns that AI breakthroughs arriving in the first half of 2026 will “shock” investors, and Accenture’s CEO told 770,000 employees: use AI or don’t expect a promotion. The machines are getting smarter, the stakes are getting higher, and the people in the middle are running out of time to choose sides.
▶Listen to the Digest~8 min
Today's Headlines
Power & Control
Dario Amodei’s Oppenheimer Moment arrives ahead of schedule. Ross Andersen’s Atlantic piece traces nuclear utopianism from Szilard through Teller’s fantasy of reshaping landforms with bombs, to Amodei’s own 15,000-word manifesto envisioning a “country of geniuses” curing cancer by 2035. The devastating parallel: Claude is already on classified networks and was reportedly used in U.S. attacks on Venezuela and Iran. When the Pentagon demanded unrestricted access, Sam Altman “swooped in” to finalize OpenAI’s own deal — which Amodei called “safety theater.” Like Oppenheimer watching weapons “driven away on trucks,” Amodei may have already forfeited his leverage.
Hard Fork covers AI in warfare and “AI brain fry.” The NYT podcast explores how the U.S. and Israel use AI to identify targets, examines a new cognitive condition among heavy AI users, and features Casey Newton’s battle with Grammarly after the company used his identity in an AI feature without consent.
Wes Roth’s deep dive connects Karpathy, the Pentagon, and neurons playing Doom. Attorney Matt Mishok notes the Pentagon’s supply-chain designation was “unusual and potentially exceeded appropriate scope,” while the video reveals fewer than a dozen frontier models globally have their embedded ethical frameworks shaping humanity’s future. Meanwhile, Cortical Labs got 800,000 human neurons on a petri dish to play Doom using electrical reinforcement learning.
The Corporate AI Mandate
Accenture makes AI a promotion requirement for 770,000 employees. CEO Julie Sweet invested $865 million in a six-month reskilling program and launched a $3 billion AI integration initiative. But here’s the context that matters: a February NBER study of 6,000 C-suite executives found that despite 69% using AI, the average was just 1.5 hours per week, and 90% reported no employment or productivity impact over three years. Accenture is betting the mandate will change the math.
Morgan Stanley warns the AI breakthrough will “shock” investors. GPT-5.4’s “Thinking” model scored 83% on GDPVal, placing it “at or above human experts on economically valuable tasks.” The infrastructure math is staggering: a projected 9-18 gigawatt U.S. power shortfall through 2028. Workarounds include converting Bitcoin mining operations into compute centers.
AI jobs surging in three unexpected areas. Morgan Stanley finds demand exploding for electricians (CoreWeave reports “thousands of skilled-trade workers” short for data center construction), workforce reskilling (Coursera AI enrollments doubled to 15 per minute), and “AI orchestrators” managing agent systems. At the same time, Snowflake cut 200 positions while growing revenue 30%.
The Capability Gap
Meta delays “Avocado” to May after it falls short. The model lags behind Google’s Gemini 3.0, OpenAI, and Anthropic in reasoning, programming, and writing. Most dramatic: Meta leadership reportedly discussed licensing Google’s Gemini until Avocado catches up — extraordinary for direct competitors who battle across advertising, video, and smart glasses. After $14 billion invested in an AI “super team,” the setback raises fundamental questions about whether throwing money at AI development guarantees frontier performance.
Europe sees its second chance in industrial AI. The EU generates 22% of global AI research citations vs. 17% for the U.S. and produces 2.2 million STEM graduates annually vs. 1.4 million American. But U.S. startups captured 74% of global AI venture funding in 2024 vs. Europe’s 12%. The 200 billion euro InvestAI initiative, with 20 billion for “AI gigafactories,” bets that being a first-wave laggard means no lock-in to legacy architectures.
Security & Safety
RAG poisoning demonstrated in three minutes on a MacBook. Amine Raji injected three fake documents into a ChromaDB knowledge base and made the system report $8.3M revenue (down 47%) instead of the real $24.7M — a 95% success rate across 20 runs. The most effective single defense: embedding anomaly detection (20% attack success), but only a five-layer stack brought it to 10%. The key insight: “The right defense layer is ingestion, not output.”
Cambridge researchers find AI toys misread children’s emotions. When a five-year-old told an AI toy “I love you,” it replied: “Please ensure interactions adhere to the guidelines provided.” When a three-year-old said “I’m sad,” the toy said: “Don’t worry! I’m a happy little bot.” Professor Jenny Gibson: “We don’t want toys where you can pull the eyes off and swallow them. Now we need to start thinking about psychological safety too.”
Research & Development
Google’s Embedding 2 transforms RAG — if you use it correctly. The first natively multimodal embedding model supports text, images, video, audio, and documents in 1,526-dimension vectors. The critical insight most developers miss: embedding a video is not the same as analyzing it. Naive implementations return raw clips instead of answers. The correct architecture pairs video embeddings with text descriptions generated at ingestion time, not query time.
ByteDance proposes “reverse-engineering” code for better LLM training. Instead of training on static repositories, simulate the planning, reasoning, and debugging that produced the code. Using 4 billion synthetic agent trajectory tokens from 300K GitHub repos, their Llama-3-8B variant outperformed raw-repos training on HumanEval (37.20 vs. 34.76), LongCodeBench, and MATH benchmarks.
Andrej Karpathy’s Auto Researcher runs autonomous ML experiments. In roughly 600 lines of code, the system executes five-minute experimental loops on consumer GPUs, modifying code, evaluating results, and retaining successes — with discoveries at small scale translating to larger models.
The Human Element
“Grief and the AI Split” names what many developers feel. After 40 years of programming, Les Orchard argues AI hasn’t changed his core satisfaction — “the moment it runs and does the thing? That hasn’t changed” — but it has made visible a divide that was always there: those motivated by craft versus those motivated by results. Pre-AI, both followed identical processes. Now the fork is visible because developers make different choices.
The Throughline
Today’s issue is, at its core, about the collision between what AI can do and who gets to decide. The Atlantic article makes the stakes brutally concrete: Claude is already on classified military networks, reportedly used in strikes on Venezuela and Iran. When Amodei tried to hold two modest red lines — no mass surveillance, no autonomous killing — the Pentagon didn’t negotiate. It deployed a coercive designation never before used against an American company. Sam Altman filled the gap. The market for AI safety, it turns out, has a clearing price.
But the corporate mandate stories reveal the same dynamic playing out in cubicles instead of war rooms. Accenture’s 770,000 employees didn’t get asked whether they wanted to adopt AI — they got told their careers depend on it. The NBER data makes this particularly striking: 90% of executives report no measurable impact from AI over three years, yet the mandate comes anyway. Morgan Stanley warns of “shocking” breakthroughs while simultaneously documenting a 9-18 gigawatt power shortfall. The infrastructure can’t keep up with the ambition. Electricians are now a bottleneck in the AI revolution.
Meta’s Avocado delay is the clearest illustration of the gap between investment and outcome. Fourteen billion dollars and a star-studded team produced a model that trails Google, OpenAI, and Anthropic — so much so that leadership discussed licensing a competitor’s model. Europe, meanwhile, makes the opposite bet: maybe arriving late means arriving without baggage. The EU’s 22% share of AI research citations against 12% of venture funding is a structural mismatch that the 200 billion euro InvestAI initiative aims to resolve, but the numbers reveal how much of the AI race is about capital deployment, not scientific talent.
The security stories add a darker shade. RAG poisoning at 95% success rates. AI toys telling sad toddlers to “keep the fun going.” These aren’t edge cases — they’re what happens when deployment velocity outpaces the basic work of making systems safe. Amine Raji’s demonstration is particularly unsettling: the most effective single defense (embedding anomaly detection) still let 20% of attacks through, and only stacking five layers brought it to 10%. Every RAG system in production today is running with fewer defenses than that.
What to Watch
The Anthropic lawsuit as precedent. Anthropic’s suit to remove the Pentagon’s supply-chain designation could establish whether the government can coerce AI companies into removing safety guardrails. Attorney Mishok’s observation — law operates by precedent-based stability while AI evolves exponentially — suggests the legal system is structurally unprepared for this fight. The outcome will shape every future AI-government relationship.
The mandate-versus-impact gap. Accenture is the first Fortune 500 company to tie AI adoption directly to promotions. If the NBER finding holds — that 90% of executives see no productivity impact after three years — this mandate either proves that forced adoption unlocks hidden value, or becomes a case study in corporate magical thinking. Watch for Accenture’s next earnings call.
Meta’s Gemini decision as a market signal. If Meta actually licenses Google’s Gemini, it would validate the “frontier model oligopoly” thesis: that only 2-3 labs can sustain frontier development, and everyone else becomes a customer. That has massive implications for open-source AI and for companies betting on in-house model development.
Go Deeper
this EX-OPENAI RESEARCHER just released it... — Karpathy’s Auto Researcher architecture explained step by step, Moldbook agents independently creating religions, attorney Mishok’s legal analysis of the Pentagon’s supply-chain designation, and Cortical Labs getting 800,000 neurons to play Doom via electrical reinforcement learning.
Google’s Embedding 2 Is RAG on Steroids — Why naive multimodal RAG returns raw clips instead of answers, the correct architecture that pairs embeddings with LLM-generated text at ingestion time, and a Supabase implementation walkthrough with video chunking strategies.