Anthropic submitted sworn declarations to a California federal court pushing back against Pentagon claims that the AI company poses an "unacceptable risk to national security," arguing the government's case relies on misunderstandings of how AI safety works.
The filing reveals that just days before the Trump administration declared the relationship over, Pentagon officials told Anthropic the two sides were "nearly aligned" on safety protocols. The gap between those private assurances and the public confrontation raises pointed questions about whether the dispute is really about safety, or politics.
An exclusive conversation with OpenAI's chief scientist Jakub Pachocki about the company's grand challenge of building a fully automated AI research system and why he believes it's the next frontier.
The White House framework pushes federal preemption of state AI laws while shifting responsibility for child safety toward parents rather than AI developers.
Block's COO Amrita Ahuja explains how the company's internal AI agent drove gross profit per employee from $500,000 in 2019 to a projected $2 million, enabling the company to cut nearly half its workforce.
A Florida homeowner used ChatGPT to manage his entire home sale, from marketing and pricing to scheduling viewings, securing a price $100,000 above agent estimates and closing in just five days.
Women use AI at 25% lower rates than men despite their jobs being three times more likely to face automation. Harvard researcher Mara Bolis warns that excluding women from AI adoption could deepen economic inequality.
Zvi Mowshowitz dissects the newly released Federal AI Policy Framework, which preempts state AI laws while offering minimal federal regulation of AI development and frontier risks.
Researchers are developing three architectural approaches to world models, including JEPA and 3D Gaussian Splats, that allow AI to build internal simulators of physical reality.
Microsoft is reducing Copilot entry points on Windows, starting with Photos, Widgets, Notepad, and other apps. The move follows user complaints about AI features cluttering the OS.
The AI Puzzle
The AI Mini
Across
1. President pushing AI policy to preempt state laws
4. Block's "Goose," or OpenAI's automated researcher
6. What you do to a model before deployment
Down
2. What Anthropic says Pentagon court filings lack
3. Climbing spike, or a fixed point in scaling heights
The NYT's Hard Fork podcast discusses the new wave of tech layoffs at Atlassian and Block, and whether AI job loss has truly begun or other factors are at play.
Power has become a critical bottleneck in deploying AI data centers, with up to 50% of announced projects experiencing delays. The supply-demand imbalance is creating investment opportunities in the energy sector.
Senator Sanders spoke to Anthropic's AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate privacy rights.
A new state space model featuring improved recurrence formulas, complex-valued state tracking, and MIMO variants that outperform prior models on latency benchmarks.
A deep dive into Anthropic's official skills strategy for Claude Code, examining how internal teams are building reusable capabilities.
✦ The Big Picture
Three days before the Trump administration publicly declared Anthropic an unacceptable national security risk, Pentagon officials privately told the company they were "very close" to agreement on the two contested issues. That whiplash between private alignment and public confrontation runs through today's entire 27-story issue: the gap between what institutions say about AI and what they actually do with it has never been wider.
▶ Listen to the Digest (~8 min)
Today's Headlines
The Anthropic-Pentagon Standoff
Court filings reveal a timeline contradiction at the heart of the dispute. On March 4, the Pentagon finalized its supply-chain risk designation against Anthropic. One day earlier, Under Secretary Emil Michael emailed Dario Amodei saying they were "very close" on the two contested issues: autonomous weapons and mass surveillance. By March 6, Michael posted on X that there was "no active negotiation," and by March 13 told CNBC there was "no chance" of renewed talks. Anthropic's Head of Public Sector testified that "once Claude is deployed inside a government-secured, air-gapped system, Anthropic has no access to it; there is no remote kill switch, no backdoor." A hearing is set for March 24.
The Policy Vacuum
Trump's AI framework preempts state laws while offering almost nothing federal. The four-page outline urges Congress to prevent states from regulating AI while relying on "sector-specific regulation" through existing agencies. Child safety gets shifted to parents via "commercially reasonable" age assurance rather than strict verification. As Zvi Mowshowitz writes, it is "an unframework" that "effectively abandons serious policy-making." States would retain only narrow exceptions: zoning, procurement, and generally applicable laws. No mechanisms address existential AI risks.
Bloomberg, WaPo, and the companion "TRUMP AMERICA AI Act" fill in context. Sen. Marsha Blackburn's legislative proposal would replace the growing patchwork of state laws with a national standard while weaving protections for children, IP, and conservative speech. Notably, her framework does not preempt states from enacting child protection laws that exceed the federal standard, a carve-out the White House's own outline lacks.
AI's Productivity Promise, Tested
Block's internal AI agent "Goose" is driving gross profit per employee from $500K to $2M in four years. Developer productivity is up 40% per engineer since September. An underwriting model that previously took a full quarter to build was completed in a fraction of that time. COO Amrita Ahuja calls this a "two-year journey" made from organizational strength, though the company has cut nearly half its workforce. Hard Fork's coverage notes that Meta is considering 20% cuts and Atlassian is restructuring. The question of whether "AI-enabled productivity" is a euphemism for layoffs is becoming harder to avoid.
A Florida man let ChatGPT sell his house for $100K over agent estimates, closing in five days. Robert Levine used the chatbot for pricing analysis, marketing strategy, specific renovation recommendations (which walls to repaint), scheduling 15 viewings, and negotiation guidance. The $954,800 sale hit one of the highest per-square-foot prices in the market. But he still hired a lawyer, had to prompt the AI at every step, and notes it "couldn't host open houses." The story illustrates both the genuine utility and the clear limits of current AI tools.
Women use AI 25% less than men, yet hold jobs three times more likely to be automated. Of the 6.1 million workers most vulnerable to AI disruption, 86% are women. Harvard researcher Mara Bolis warns of a "two-tiered AI economy" and advocates "fierce ambivalence," holding divergent attitudes while demanding equitable implementation. A Brookings analysis shows women tend to exit the labor market entirely after displacement rather than transitioning to new roles.
The Technical Frontier
OpenAI's chief scientist laid out a two-phase roadmap to automated AI research. By September 2026: an autonomous "research intern" that tackles problems taking a human a few days. By early 2028: a fully automated multi-agent research system. Safety relies on "chain-of-thought monitoring" where models narrate reasoning in auditable scratchpads. Altman disclosed roughly $1.4 trillion in infrastructure spending across 30 gigawatts of compute. OpenAI is pursuing joint safety standards with Anthropic, Google DeepMind, and xAI.
Mamba-3 optimizes for inference, not training. The state space model from CMU, Princeton, Cartesia, and Together AI clocks 35.11s for prefill+decode at 4K tokens vs. Llama-3.2-1B's 58.64s. At 16K tokens the gap widens to 140.61s vs. 976.50s. The architecture replaces single-input/single-output (SISO) recurrences with multi-input/multi-output (MIMO) variants and uses complex-valued state tracking.
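To make the architecture concrete, here is a minimal sketch of a diagonal state-space recurrence with a complex transition and multiple input/output channels. This is an illustrative toy, not Mamba-3's actual implementation; the function name and shapes are assumptions for the example. The complex entries of `A` let the hidden state track rotations and oscillations rather than only exponential decay, and feeding `D > 1` channels through one shared recurrence is the MIMO idea (with `D == 1` reducing to the classic SISO case).

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy diagonal state-space recurrence:
        h_t = A * h_{t-1} + B @ x_t
        y_t = Re(C @ h_t)
    x: (T, D) input sequence; A: (N,) complex diagonal transition;
    B: (N, D) input projection; C: (D, N) output projection.
    """
    T, D = x.shape
    N = A.shape[0]
    h = np.zeros(N, dtype=complex)        # hidden state carried across time
    ys = np.empty((T, D))
    for t in range(T):
        h = A * h + B @ x[t]              # element-wise rotate/decay + mix inputs
        ys[t] = (C @ h).real              # project state back to D output channels
    return ys

# Example: a stable system (|A| < 1) with oscillatory state dynamics.
rng = np.random.default_rng(0)
T, D, N = 16, 4, 8
A = 0.9 * np.exp(1j * rng.uniform(0, np.pi, N))   # magnitude < 1 => stable
B = rng.standard_normal((N, D))
C = rng.standard_normal((D, N))
y = ssm_scan(rng.standard_normal((T, D)), A, B, C)
```

Because each timestep only updates an `N`-dimensional state, decoding cost stays constant per token regardless of sequence length, which is why this family of models wins on long-context inference latency.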
OpenCode hits 127K GitHub stars with 5 million monthly developers. The open-source coding agent supports 75+ LLM providers, multi-session agents, and shareable sessions. It represents the growing open-source alternative to proprietary coding tools.
Everything Else
Microsoft is rolling back Copilot entry points from Photos, Widgets, Notepad, and other Windows apps, acknowledging that saturating an OS with AI touchpoints may have diminishing returns.
WordPress.com now lets AI agents write and publish posts autonomously, making it one of the first major CMS platforms with end-to-end AI content management.
Energy is becoming the real AI bottleneck: 50% of announced data center projects face delays, only 5 GW of 190 GW tracked are under construction, and 36% of projects slipped timelines in 2025.
A North Carolina man pleaded guilty to $8 million in AI music streaming fraud using 10,000 bot accounts and hundreds of thousands of AI-generated tracks across Spotify, Apple Music, Amazon, and YouTube Music. Deezer now receives 60,000 fully AI-generated tracks daily.
Bernie Sanders interrogated Claude about AI privacy, framing it as "the dangers of AI as described by AI itself." Perplexity shipped impressive computer-use features, but Nate B. Jones asks whether multi-model orchestration is a moat when every provider is a competitor. Anthropic's Dispatch connects agents to your phone, and the company dropped its internal skills strategy for Claude Code.
The Throughline
The most telling number in today's stories is not a dollar amount or a metric. It is a date gap: March 3 to March 6. Three days between the Pentagon telling Anthropic they were "very close" and a senior official publicly declaring negotiations dead. That gap, between what institutions say privately and what they do publicly, is the thread running through everything today.
Block's COO describes cutting nearly half the workforce as a journey undertaken "from a position of organizational strength." The White House describes preempting all state AI laws while offering no federal alternatives as an "innovation-oriented regime." OpenAI's CEO discloses $1.4 trillion in infrastructure spending while his chief scientist promises the system that makes human researchers obsolete. In each case, the framing and the action point in different directions.
Robert Levine's ChatGPT house sale is perhaps the most honest data point. He saved money, beat agent estimates by $100K, and closed in five days. He also had to prompt the AI at every step, hire a lawyer separately, and couldn't delegate physical tasks. "ChatGPT is not coding," he said. "It is a conversation." That is a more accurate description of where AI stands in March 2026 than anything in OpenAI's roadmap or the White House's framework. The tool is genuinely useful when a human drives it. The question of what happens when the human steps away remains unanswered, and the institutions building and regulating AI seem disinclined to answer it honestly.
The Bigger Picture
Two parallel tracks are forming in AI development, and today's stories mark the point where they're becoming impossible to ignore. Track one is the infrastructure race: OpenAI's $1.4 trillion in compute spending, energy bottlenecks delaying half of all data center projects, Mamba-3 optimizing inference latency because the volume of AI processing is outstripping available hardware. This track is governed by physics and capital, and it is moving fast regardless of policy.
Track two is the governance vacuum. The White House framework explicitly refuses to create federal AI regulatory bodies, preempts the states that tried to fill the gap, and offers parents the responsibility for child safety that it won't impose on platforms. The Anthropic-Pentagon dispute shows what happens when there are no clear rules: a company and a government agency can be "very close" to agreement one day and in federal court the next, with no institutional framework to prevent the whiplash. Meanwhile, Block is using AI to cut half its workforce while 86% of the 6.1 million workers most at risk are women who use AI 25% less than men.
The infrastructure track will continue accelerating because it is driven by market forces. The governance track will continue stalling because it is driven by politics. The gap between them is where the real consequences accumulate: in displaced workers without retraining pathways, in music platforms flooded with 60,000 AI-generated tracks daily, in a Florida man who beat every real estate agent but still needed a human lawyer. The technology works. The institutions do not.
What to Watch
The March 24 hearing in Anthropic v. Pentagon. Judge Rita Lin will weigh sworn declarations against the government's national security claims. The outcome will set precedent for how AI safety commitments interact with defense procurement, and whether private assurances of alignment can be overridden by political decisions. Microsoft and retired military chiefs have intervened on Anthropic's side.
State responses to the preemption framework. California, Colorado, and other states with existing or pending AI legislation will have to decide whether to fight federal preemption or accept it. The Blackburn companion bill's child-safety carve-out suggests even within the GOP there is no consensus on how far preemption should go.
Block's Q2 numbers as an AI-productivity benchmark. If gross profit per employee actually hits $2 million, it becomes the most concrete public evidence that AI-driven workforce reduction improves financial metrics. Other companies will either follow the playbook or explain why they aren't.