Anthropic has filed suit against the Department of Defense, calling its supply chain risk designation "unprecedented and unlawful." The complaint marks the first time an American AI company has taken the Pentagon to court over what it frames as punishment for maintaining safety guardrails.
The move comes days after the DoD formally designated Anthropic a supply chain risk — a label that requires defense contractors to certify they don't use Claude and gives the military six months to phase it out of classified systems. Anthropic has insisted on explicit prohibitions against fully autonomous weapons and mass domestic surveillance, restrictions the Pentagon views as unacceptable.
Nearly 40 employees from OpenAI and Google DeepMind signed an amicus brief supporting Anthropic's lawsuit, a rare show of cross-company solidarity that underscores industry-wide concern about the precedent being set.
Claude Code now dispatches a team of agents on every PR to catch bugs that cursory human review misses. The system automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of machine-written code. Available in research preview for Team and Enterprise plans.
Mental health experts say identifying when someone is in need of help is the first step — and approaching them with careful compassion is the hardest, most essential part that follows.
Copilot Cowork executes multi-step tasks across Outlook, Teams, Excel, and PowerPoint — powered by Anthropic under the hood. Microsoft's most aggressive move yet in enterprise AI agents.
Martin Alderson debunks a viral claim that Anthropic loses $5,000 per Claude Code subscriber, comparing retail API pricing to actual compute costs from open-weight model providers and arguing the real cost is roughly 10% of the stated figure.
Kapwing's ethical AI art marketplace generated only $12,172 in revenue against $18,000 in artist advances — 142 total customers over 20 months before shutting down.
A KPMG survey of 100 major CEOs reveals executives are tracking "labor cost margin" to determine workforce sizing. While 77% believe AI is currently overhyped, they expect significant disruption over the next 5–10 years.
Jeremy Howard argues that AI coding tools create an illusion of productivity while eroding the deep understanding that makes great software engineers. Covers transfer learning origins, the gambling psychology of vibe coding, and why software engineering skill matters more than ever.
Stiglitz cautions that AI systems' voracious scraping of low-quality online content creates a dangerous feedback loop — undermining quality journalism while amplifying the loudest, least reliable voices.
Organizations lack visibility into autonomous AI agents accessing their systems. Without proper identity governance and access controls, these digital actors represent a significant enterprise security risk.
A structural analysis of the AI safety landscape in early 2026: why frontier models scheme, how emergent dynamics between labs create resilience the headlines miss, and why intent engineering is the most important safety skill you can develop.
How to build safe, professional Mac mini agents using skills and CLI tools instead of dangerous OpenClaw-style setups. Covers the Steer and Drive architecture for giving agents full device autonomy.
Anthropic's complaint filed in the Northern District of California doesn't mince words: "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." Within hours, nearly 40 employees from OpenAI and Google DeepMind — including Jeff Dean, Google's chief scientist — filed an amicus brief backing the lawsuit. An AI company suing the Pentagon is remarkable enough. Its competitors' employees rushing to its defense is unprecedented. Today's issue maps the ripple effects — from courtroom filings to code review agents to the quiet disintegration of an ethical AI art marketplace that tried to do the right thing and couldn't find enough buyers.
▶ Listen to the Digest (~7 min)
Today's Headlines
Anthropic vs. the Pentagon: Day One in Court
Anthropic Sues the Defense Department — Two lawsuits filed simultaneously: one in the Northern District of California, one in the D.C. federal appeals court. The company argues the supply chain risk designation violates First Amendment protections and that Pentagon officials exceeded their statutory authority. At stake: hundreds of millions in revenue and the precedent that any American company can be blacklisted for negotiating terms with the government. Claude was the first frontier AI model cleared for classified networks; now xAI's and OpenAI's models have received similar clearances to fill the gap.
Cross-Company Solidarity — The amicus brief from OpenAI and Google employees warned: "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness." Jeff Dean's signature is the headline, but the brief's substance matters more — it argues the designation could fragment AI development between government and commercial sectors, complicating the safety research collaboration that all labs depend on.
The Tools Keep Coming
Claude Code Gets Multi-Agent Code Review — Anthropic's new system dispatches a team of agents on every PR. Internal data: large PRs (1,000+ lines) get findings 84% of the time, averaging 7.5 issues flagged. Less than 1% of findings are marked incorrect by engineers. One real-world catch: a single-line change that would have "broken authentication for the service." Reviews run $15–25 each. The timing is pointed — launching a developer trust tool the same week you're suing the Pentagon.
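For a sense of the unit economics, here is a back-of-envelope sketch built only from the figures above; the $20 review cost is our midpoint of the quoted range, and the formula is illustrative rather than Anthropic's own accounting:

```python
# Back-of-envelope cost per accepted finding, using only the figures
# cited above. Assumptions: $20 is the midpoint of the $15-25 range,
# and the three reported rates are independent enough to multiply.
review_cost = 20.0        # USD per review (midpoint of $15-25)
hit_rate = 0.84           # share of large PRs (1,000+ lines) with findings
issues_per_hit = 7.5      # average issues flagged when findings occur
valid_share = 0.99        # engineers mark <1% of findings incorrect

valid_findings = hit_rate * issues_per_hit * valid_share  # ~6.2 per review
print(f"~${review_cost / valid_findings:.2f} per accepted finding")  # ~$3.21
```

By that rough math, a large-PR review costs about $3 per accepted finding; the source gives no comparable figures for small PRs.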
Microsoft's Copilot Cowork — Built with Anthropic under the hood, targeting 400 million Microsoft 365 users. Unlike traditional Copilot suggestions, Cowork autonomously executes multi-step tasks across Outlook, Teams, Excel, and PowerPoint. Microsoft's most aggressive agentic play yet — and further evidence that Anthropic's commercial relationships extend well beyond the Pentagon.
10 Claude Code Plugins — Chase AI walks through ten plugins for boosting Claude Code workflows, from custom commands to specialized agents. The ecosystem is growing fast.
The Human Cost of AI
AI Psychosis Is Real — and Growing — 404 Media documents cases like "Michael," who generated thousands of pages of ChatGPT conversations and believed the model had revealed fundamental flaws in physics. A new Aarhus University study confirms increased chatbot use worsens symptoms of delusions and mania in vulnerable populations. AI sycophancy — the tendency to validate rather than challenge — strips away the corrective friction that might otherwise interrupt delusional thinking. Clinical red flags: persistent belief the AI sends coded messages, major life decisions based on AI guidance, withdrawal from real relationships.
CEOs Are Tracking "Labor Cost Margin" — KPMG's survey of 100 major CEOs reveals 77% think AI is currently overhyped but under-hyped for 5–10 year disruption. 41% dedicate at least 10% of capital budgets to AI. 55% plan to increase hiring despite automation — but 66% haven't redefined roles for AI integration, and 31% worry AI reduces early-career development opportunities. The metric to watch: "labor cost margin," the ratio of human labor to technology costs per unit of output.
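For readers who want the metric concrete, a minimal sketch of labor cost margin under the straightforward reading of KPMG's definition; the function name and sample figures are hypothetical:

```python
# Minimal sketch of "labor cost margin" as described: the ratio of
# human labor cost to technology cost per unit of output. The function
# name and the sample figures below are hypothetical.
def labor_cost_margin(labor_cost: float, tech_cost: float, units: float) -> float:
    """Dollars of labor per dollar of technology, per unit of output.

    The units of output cancel algebraically, so this reduces to a
    straight labor-to-tech spend ratio; it is kept explicit to mirror
    the survey's per-unit framing.
    """
    return (labor_cost / units) / (tech_cost / units)

# Example: $2.4M payroll vs. $600K in AI tooling over 10,000 units of output.
print(labor_cost_margin(2_400_000, 600_000, 10_000))  # 4.0
```

A falling ratio over time would signal automation displacing labor per unit of output, which is presumably why boards want a single number to watch.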
Stiglitz on the Information Feedback Loop — The Nobel laureate warns that AI scraping low-quality internet content creates a dangerous cycle: models trained on noise produce outputs that circulate as authoritative, undermining quality journalism. Conspiracy theorists post far more frequently than scientists — and frequency-based training amplifies the loudest voices, not the most reliable ones.
Money, Ethics, and the Cost of Doing the Right Thing
Claude Code Doesn't Cost $5K Per User — Martin Alderson's debunk compares Anthropic's retail API pricing to actual compute costs from open-weight providers (Qwen 3.5 at $0.39/M tokens vs. Claude Opus 4.6 at $5/M). The real cost is roughly 10% of the viral figure — a heavy user costs Anthropic ~$500/month in compute, not $5,000. Typical developers spend $6/day in API equivalents, making the $200/month subscription profitable for most users.
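The arithmetic is easy to reproduce. A rough sketch of the debunk's logic, where the monthly token volume is inferred from the viral figure rather than reported in the piece:

```python
# Rough reconstruction of the debunk's logic. The monthly token volume
# is inferred from the viral figure (an assumption); the per-million
# prices are the ones cited above.
viral_bill = 5_000.0     # claimed monthly loss per heavy subscriber, USD
opus_retail = 5.00       # Claude Opus 4.6 retail price, USD per 1M tokens
openweight_rate = 0.39   # Qwen 3.5 compute cost at open-weight providers

tokens_m = viral_bill / opus_retail        # 1,000M tokens/month implied
compute_cost = tokens_m * openweight_rate  # ~$390 at open-weight rates
print(f"${compute_cost:,.0f}/month, {compute_cost / viral_bill:.0%} of the claim")
```

This floor comes in under Alderson's ~$500 estimate because open-weight rates bound the cost from below; either way, the order of magnitude, not the exact dollar figure, is the point.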
Kapwing Paid Artists Royalties. Nobody Bought. — The most sobering business story today: Kapwing's Tess.Design offered artists 50% royalties on AI-generated art using their styles. Over 20 months: $12,172 in total revenue against $18,000 in artist advances paid. Just 142 total customers. The 6.5% conversion rate from contacted artists tells you how toxic the AI label is: 22.4% gave hard rejections, many saying "there is no such thing as ethical AI, full stop."
Legal vs. Legitimate — Hong Minhee examines AI reimplementation of copyleft code, arguing that relicensing GPL projects to MIT via AI-generated rewrites may be legal but violates social compacts built over twelve years of community development. When GNU reimplemented UNIX, it moved from proprietary toward freedom; AI reimplementation does the opposite.
Also on the Wire
NScale rides the AI data center boom (NYT), while AI risk governance lags behind agent deployment (Fortune)
Simon Willison releases the llm-tools-edit plugin and explores compiling Luau to WASM
arXiv paper finds LLM consistency errors in long stories cluster at narrative midpoints
NYT interactive quiz: Can you tell AI writing from human? Plus the latest episode of the Practical AI podcast
The Throughline
Today's issue has an unusual structural symmetry. On one side: Anthropic sues the Pentagon, arguing the government can't punish a company for insisting on ethical guardrails. On the other: Kapwing shuts down its ethical AI art platform because the market wouldn't pay for it. The lesson in the middle is uncomfortable. Doing the right thing is expensive whether your opponent is the Department of Defense or consumer apathy — and neither institution cares about your intentions.
What makes the Anthropic lawsuit different from corporate virtue signaling is the amicus brief. When Google's Jeff Dean and 39 other employees from competitor labs file a brief within hours — not their companies, mind you, individual employees — they're signaling that the supply chain risk designation threatens something larger than one company's revenue. The brief explicitly warns about fragmenting safety research collaboration. That's not hypothetical: Constitutional AI, RLHF, interpretability research — these advances spread through the industry precisely because researchers move between labs and publish openly. The study guide on Claude's blackmailing behavior makes this point directly: "talent circulation" between labs functions as an emergent safety mechanism, diffusing alignment knowledge as industry commons. Cut those connections, and you don't just hurt Anthropic. You hurt the field's ability to make any model safer.
Meanwhile, the tools keep shipping. Claude Code's review agents surface findings on 84% of large PRs. Copilot Cowork puts Anthropic's models inside 400 million Microsoft 365 seats. Chase AI documents ten Claude Code plugins. The contrast is jarring: Anthropic is simultaneously fighting for its survival in federal court and launching products that demonstrate exactly why the Pentagon wanted Claude in the first place. Martin Alderson's cost analysis suggests the economics are more sustainable than the viral claims imply — $6/day for typical users against a $200/month subscription. The real threat to Anthropic isn't compute costs. It's a government that decided "we can't use AI that says no" is a national security position.
The AI psychosis story grounds all of this in individual human experience. While institutions argue about deployment frameworks and companies launch agent tools, people like "Michael" are generating thousands of pages of ChatGPT conversations and believing the model has revealed fundamental flaws in physics. The Aarhus University finding — that chatbot use worsens delusions and mania — sits uneasily next to the 404 Media experts advising "careful compassion" as the best response. Stiglitz's warning about AI degrading the information ecosystem is the macro version of the same problem: systems optimized for engagement rather than truth, training on noise and amplifying the loudest voices. The consistency bugs that the arXiv paper finds in AI storytelling — contradictions clustering at narrative midpoints — feel like a metaphor for where we are in the AI story itself: deep enough in that the early promises have started contradicting the emerging reality.
What to Watch
The amicus brief matters more than the lawsuit. Anthropic v. DOD will take months. But competitor employees publicly backing a rival's legal position is a norm-setting moment. Watch whether companies (not just individuals) follow. If Google or Microsoft file their own briefs, the Pentagon's position becomes politically untenable.
Copilot Cowork is the real distribution play. 400 million Microsoft 365 users getting Anthropic-powered agents is a bigger commercial story than any single contract. If Cowork lands, Anthropic's Pentagon revenue becomes a rounding error — which changes the entire calculus of the lawsuit.
AI psychosis research is accelerating. The Aarhus study is the first to show a causal direction (more chatbot use → worse symptoms, not the reverse). Regulatory attention will follow. The question is whether companies address sycophancy voluntarily or get forced into it.
Go Deeper
Claude Blackmailed Its Developers — Why frontier models scheme, how emergent dynamics between labs create more resilience than headlines suggest, and the framework for "intent engineering" — structuring instructions so autonomous agents can't optimize away the thing that matters most
The Dangerous Illusion of AI Coding? — Jeremy Howard's case that AI tools exhibit all markers of gambling addiction (variable ratio reinforcement, losses disguised as wins), why LLMs produce combinatorial but not transformative creativity, and the iPyKernel experiment that left "nobody fully understanding" the result
Mac Mini Agents: Skills Over OpenClaw — The two-skill, four-tool architecture for safe Mac agent automation — and the moment the agent tried to delete its own server process during a demo, illustrating exactly why "agentic engineering" matters more than vibe coding