A Brookings Institution study found that AI risks in education currently outweigh potential benefits, as digital technology adoption in schools has coincided with declining cognitive abilities among Gen Z students.
The full 1 million token context window is now available at standard pricing for both Claude models: standard per-token rates apply across the entire window, with no premium tier. Media processing expands to 600 images or PDF pages per request, up from 100. Competitors charge premium rates above 200K-272K tokens; Anthropic charges nothing extra.
Palantir demonstrated how AI chatbots could generate complete military battle plans, condensing weeks of operational planning into real-time sessions. The demos show Claude integrated into the Maven Smart System processing intelligence and suggesting targets with coordinates.
Karp addresses the Anthropic-DoD dispute, asserting there was "never a sense" AI products would be deployed domestically. He argues the DoD requires "wide license" for counterterrorism and operations against foreign adversaries.
Hard Fork covers how the U.S. and Israel use AI to identify targets in Iran, BCG's research on "AI brain fry" affecting 14% of workers, and Grammarly's fake expert review controversy. Claude is the only AI model currently integrated into classified military systems via Palantir's Maven Smart System, which compresses weeks of planning into real-time operations.
Yang argues the U.S. should eliminate income taxes on workers and instead tax AI companies, citing automation threats to employment. Individual income taxes currently represent $2.6 trillion annually, over half of U.S. government revenue.
ActivTrak study of 10,584 users: email time jumped 104%, messaging climbed 145%, and daily task time rose as much as 346% depending on role. Deep focus work sessions fell 9%. BCG identifies a "three-tool cliff" where efficiency drops after adopting more than three AI tools.
Bank of America forecasts 3 billion humanoid robots by 2060, surpassing the world's 1.5 billion cars. Investment surged from $700M in 2018 to $4.3B in 2025. MIT's Rodney Brooks calls the domestic robot vision "pure fantasy thinking."
The New York Times examines how AI coding assistants like Claude and ChatGPT are reshaping the programming profession and what comes next for developers whose core skill is being automated.
Pew survey of 8,512 adults: only 4% say datacenters benefit the environment, 6% think they create jobs, 6% improve quality of life. Senator Sanders calls for a construction moratorium.
Workers using AI tools saw email time jump 104% and messaging time surge 145%, while deep focus work fell 9%. In schools, 57% of teenagers now use AI for information searches, and neuroscientist Jared Cooney Horvath says Gen Z is the first generation less cognitively capable than their parents. The tools that were supposed to make us sharper are, by almost every measure available this week, making us duller.
Today’s Headlines
The Cognitive Decline Story
Brookings Institution: AI risks in education outweigh benefits. A new study finds 57% of teenagers use AI for information searches and 54% use it for assignments. The core problem is “cognitive offloading,” where students delegate thinking to machines rather than developing their own capacity. Neuroscientist Jared Cooney Horvath draws a parallel to 1920s “teaching machines” that promised to revolutionize education and instead created dependency. Google’s early-2010s push to put Chromebooks in every classroom now looks like the setup for this moment.
Microsoft study links AI use to diminished critical thinking. The finding reinforces Horvath’s argument: the more students offload cognitive tasks, the less they develop the neural pathways required for independent reasoning. This is not a prediction about what might happen; it is a measurement of what is already happening.
A lawyer warns AI chatbots are now appearing in mass casualty cases. After years of AI psychosis cases linked to individual suicides, the scale is shifting. The technology, the lawyer argues, is moving faster than any safeguard framework can track.
AI Is Straining, Not Saving, the Workplace
ActivTrak study of 10,584 users paints a grim productivity picture. Email time roughly doubled (+104%), messaging time more than doubled (+145%), and daily task time climbed between 27% and 346% depending on role. Deep focus work, the kind that produces actual insight, fell 9%. Workers are doing more communicating about work and less actual work.
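Percentage-change figures like these read more naturally as multipliers. A minimal sketch of the conversion, using the ActivTrak numbers as reported:

```python
# Convert the ActivTrak percentage changes into plain multipliers.
# The figures are the ones cited above; the helper is just arithmetic.

def as_multiplier(pct_change: float) -> float:
    """A +104% change means the new value is 2.04x the old one."""
    return 1 + pct_change / 100

email = as_multiplier(104)      # 2.04x: email time roughly doubled
messaging = as_multiplier(145)  # 2.45x: messaging well over doubled
focus = as_multiplier(-9)       # 0.91x: deep focus work shrank

print(f"email {email:.2f}x, messaging {messaging:.2f}x, focus {focus:.2f}x")
```

The multiplier form makes the headline claim concrete: a 145% increase is about 2.5x the baseline, not 3x.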
BCG identifies “AI brain fry” and the “three-tool cliff.” Workers using four or more AI tools are measurably less efficient than those using three or fewer. The returns don’t just diminish; they reverse. UC Berkeley researchers add that AI-augmented workers are burning out and taking fewer breaks, suggesting the tools create an illusion of momentum that masks exhaustion.
Military AI and the Surveillance Question
Palantir channels Anthropic technology to the Department of Defense. Claude is integrated into classified military systems via Palantir Maven, where it suggests hundreds of targets with coordinates. CEO Alex Karp insists there was “never a sense” the technology would be used for domestic surveillance, while simultaneously arguing the DoD needs a “wide license” for counterterrorism operations. Anthropic sued the Pentagon over its “supply chain risk” designation.
Wired reports on Palantir demos showing AI chatbots generating war plans. The Pentagon inquired about a Venezuelan operation. Meanwhile, Hard Fork’s study of AI in warfare documents a pattern of policy reversals: Google, OpenAI, Meta, and Anthropic have all softened their military restrictions over the past year. The historical parallel Hard Fork draws is 1970s “Lordstown Syndrome,” where autoworkers rebelled against automation they could not control.
Economics and Infrastructure
Andrew Yang proposes eliminating income taxes on labor, taxing AI instead. Income taxes currently generate $2.6 trillion, half of federal revenue. With business leaders predicting AI could raise unemployment to 20%, Yang argues the tax base must shift. Senator Booker suggests exempting the first $75,000; Vinod Khosla proposes $100,000. A “task tax” on humanoid robot activities is also on the table.
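The exemption thresholds above change who pays, not just how much. A minimal sketch of the mechanic, using the $75,000 (Booker) and $100,000 (Khosla) figures from the proposals; the 20% flat rate is a hypothetical chosen only for illustration, not part of any proposal:

```python
# Sketch of how an exemption threshold shifts the tax base.
# Thresholds are from the proposals cited above; the 20% flat rate
# is a hypothetical for illustration only.

def tax_owed(income: float, exemption: float, rate: float = 0.20) -> float:
    """Tax only the income above the exemption threshold."""
    return max(0.0, income - exemption) * rate

# A $90,000 earner under each threshold:
print(tax_owed(90_000, 75_000))   # taxed on $15,000 -> 3000.0
print(tax_owed(90_000, 100_000))  # fully exempt -> 0.0
```

Under the Khosla threshold the same earner owes nothing, which is why the revenue lost would have to come from somewhere else, such as Yang's proposed AI tax.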
Bank of America forecasts 3 billion humanoid robots by 2060. Investment has surged from $700 million in 2018 to $4.3 billion in 2025. The projection: 90,000 units shipping in 2026, growing to 1.2 million by 2030 at an 86% compound annual growth rate. Chinese units are priced at $35,000, dropping to $17,000. Tesla targets late 2027 for its entry. MIT’s Rodney Brooks calls the domestic robot vision “pure fantasy thinking.”
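The shipment projection can be sanity-checked by compounding the stated growth rate over the four years from 2026 to 2030, using only the figures cited above:

```python
# Cross-check the Bank of America projection: 90,000 units shipping
# in 2026, compounding at an 86% annual growth rate through 2030.

base_2026 = 90_000
cagr = 0.86

units_2030 = base_2026 * (1 + cagr) ** 4  # four years of compounding
print(f"{units_2030:,.0f}")  # ~1.08 million
```

Compounding 86% annually yields roughly 1.08 million units, consistent with the ~1.2 million figure in the forecast once rounding in the stated rate is allowed for.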
Only 4% of Americans say datacenters benefit the environment. Pew finds just 6% believe they create jobs and 6% say they improve quality of life. Bernie Sanders calls for a moratorium. Michigan researchers confirm: promised jobs don’t materialize. Examples include a $1.2 billion Michigan facility, a $2.4 billion South Carolina project, and the Mississippi xAI datacenter running on gas turbines.
Capability and Culture
Anthropic’s 1 million token context window goes GA at standard pricing. No premium tier. $5/$25 per million input/output tokens for Opus, $3/$15 for Sonnet. Up to 600 images or PDF pages per request (from 100). The model scores 78.3% on MRCR v2 accuracy. As Simon Willison notes, competitors charge premium rates above 200,000 to 272,000 tokens; Anthropic’s pricing is more predictable and substantially cheaper at scale.
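The flat-rate structure makes cost estimation a one-liner. A back-of-envelope sketch using the per-million-token rates reported above; the request shape (a full 1M-token input plus a 4K-token reply) is illustrative:

```python
# Cost of one max-length request at the flat rates cited above,
# in dollars per million tokens (input, output). Because there is
# no long-context surcharge, the same rate applies to token 999,999
# as to token 1. Token counts below are illustrative.

RATES = {"opus": (5.00, 25.00), "sonnet": (3.00, 15.00)}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Filling the full 1M-token window with input, plus a 4K-token reply:
print(f"${request_cost('opus', 1_000_000, 4_000):.2f}")    # $5.10
print(f"${request_cost('sonnet', 1_000_000, 4_000):.2f}")  # $3.06
```

With tiered competitors, the same request would be priced at a premium rate for every token past the 200K-272K threshold, which is the predictability point Willison is making.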
xAI is overhauling its AI coding assistant for the second time. Two Cursor executives have been hired, and internal sources describe the product as “not built right the first time.”
Spielberg draws the line at film. At SXSW, he declared he has never used AI in any of his films, supporting AI adoption in other industries but ruling it out for creative work.
The Throughline
The most important collision in today’s issue is between two datasets that should be read side by side. In schools, the Brookings study documents teenagers outsourcing their thinking to AI at alarming rates, with 57% using it for basic information searches. In workplaces, the ActivTrak study of over 10,000 users shows adults doing the same thing, except the consequences show up as email time doubling and deep focus work eroding. The mechanism is identical: cognitive offloading. The difference is that teenagers are losing capacities they haven’t yet fully developed, while workers are losing capacities they once had.
BCG’s “AI brain fry” research adds a crucial finding: there is a hard ceiling. Workers with four or more AI tools perform worse than those with three or fewer. This is not a story about needing better tools or more training. It is a story about human cognitive architecture hitting a wall. The “three-tool cliff” suggests that the productivity gains from AI are real but bounded, and that pushing past the boundary produces the opposite of what was intended. UC Berkeley’s finding that AI-augmented workers take fewer breaks reinforces this: the tools create a sense of velocity that masks the fact that the human doing the work is degrading.
Layer in the military dimension and the picture darkens further. Claude is on classified networks, suggesting hundreds of targets with coordinates. Palantir’s CEO assures us there was “never a sense” this would be used for domestic surveillance, while lobbying for a “wide license” on counterterrorism. Every major AI lab has softened its military restrictions in the past year. The technology that is diminishing critical thinking in classrooms and burning out workers in offices is simultaneously being deployed in systems where the stakes are measured in human lives. The common thread across all three domains is the same: the speed of deployment has outrun the capacity of the humans in the loop to meaningfully oversee what the tools are doing.
What to Watch
The cognitive offloading research pipeline. The Brookings and Microsoft studies both point toward a measurable decline in critical thinking linked to AI use. If additional longitudinal studies confirm the pattern, expect policy responses targeting AI in K-12 education by fall 2026. The 54% assignment-use figure among teenagers is the number that will drive that conversation.
The three-tool cliff in enterprise adoption. BCG’s finding that four-plus AI tools reduce efficiency has direct implications for every company stacking copilots, chatbots, and agent frameworks. Watch for enterprises consolidating their AI tool suites rather than expanding them, a reversal of the current trend.
Anthropic vs. the Pentagon. The lawsuit over the “supply chain risk” designation remains the highest-stakes AI governance case in progress. Whether AI companies can maintain safety guardrails against government pressure will be decided here, not in congressional hearings.
Go Deeper
How the U.S. Military Is Using A.I. to Wage War — Hard Fork’s investigation into Claude on classified networks via Palantir Maven, BCG’s “AI brain fry” research documenting the three-tool cliff where 14% of workers experience cognitive strain, and the policy reversal pattern across Google, OpenAI, Meta, and Anthropic on military contracts.
One Simple System Gave All My AI Tools a Memory — Nate B. Jones’s “Open Brain” architecture using Supabase and MCP, the “two-door” pattern of machine-readable and human-readable interfaces sharing a single database, and the “AI Flywheel Effect” where every model improvement enhances the entire system.
This Is the ONLY AI Skill You Need — Wes Roth argues “AI-Assisted Execution” is the defining skill, demonstrating the “UI Collapse” where many tools converge into one conversational interface. Includes an agent that scraped YouTube data via API through Telegram and a health coaching case that identified an MTHFR mutation.
WordPress 7 AI Revealed: Here’s What Actually Works — Centralized AI connectors in WordPress core, with alt text generation as the single most useful feature. Paul C’s assessment: the tools produce “useful starting points rather than finished tools,” and Review Notes flags issues but lacks suggested fixes.