Coby Adcock's defense startup Scout AI has closed a $100 million Series A to build Vision Language Action models that drive autonomous military vehicles. TechCrunch visited the company's off-road bootcamp, where the AI is trained to handle the messy, uncooperative terrain real war runs on.
The round is the loudest signal yet that the AI defense category has moved past slideware. The labs that refused this work created a vacuum, and well-funded startups are walking straight into it.
Anthropic is pushing Claude into the working creative stack with new connectors for Blender, Adobe, Autodesk, and Ableton. The pitch is not "AI replaces artists" but "AI clears the parts you hate" so the actual creative work expands.
NVIDIA's new Nemotron 3 Nano Omni handles text, images, video, and audio in a single long-context model, with best-in-class numbers on document intelligence, video understanding, and ASR. It is small enough to ship as the workhorse for document and video agents.
Google has handed the Department of Defense expanded access to its AI on classified networks after Anthropic refused to provide unrestricted use without guardrails on domestic surveillance and autonomous weapons. The frontier-lab values stack is now visibly forking.
Ben Thompson reads Intel's surprisingly strong earnings as a structural story, not a one-quarter blip: AI is reshaping CPU demand in Intel's favor. He also asks the harder question about Terafab and what Intel's manufacturing future actually looks like.
A day after OpenAI ended Microsoft's exclusive cloud rights, AWS announced OpenAI model offerings on Bedrock, including a new agent service. The hyperscaler chessboard re-opens.
Amazon shoppers can now ask questions about a product and get a conversational, spoken answer. Voice-first commerce, finally without the smart speaker as a chokepoint.
YouTube's new Premium-tier feature stitches text and video clips into step-by-step answers. The video index is becoming a queryable knowledge base, not just a list of links.
Elon Musk testified about the falling-out with Larry Page over AI safety that he says set the stage for OpenAI's founding. The trial is now as much a history lesson as a legal proceeding.
An automated trademark-enforcement service quietly took down activist posts critical of SXSW, a clean illustration of how AI moderation systems suppress protected speech by default.
OpenAI executives publicly counter the narrative that consumer growth is slowing, framing the company as performing across all business lines. The story to track is whether the numbers eventually match the talking points.
Scramble
Unscramble today's AI headlines. The red letters spell a bonus word.
D · E · C · U · L · A
Anthropic's flagship model.
CLAUDE
M · Z · A · N · O · A
Bezos's empire, now selling OpenAI on AWS.
AMAZON
E · O · G · O · L · G
Search giant expanding Pentagon AI access.
GOOGLE
S · N · E · G · A · T
Autonomous workers running loops while you sleep.
AGENTS
Bonus Word
E · A · L · T
Clue: Tardy, behind schedule.
LATE
✦ The Big Picture
Scout AI just raised $100 million to teach its Vision Language Action models how to drive a military ATV across a real California hillside. Two days earlier, Anthropic refused to give the Department of Defense unrestricted access to Claude. Yesterday, Google said yes. Today's issue is the moment the AI values stack visibly forked, and the labs that walked away from the war room learned that someone is always willing to walk in.
Today's Headlines
The Defense Fork
Scout AI raises $100M to train models for war (TechCrunch) - Coby Adcock's Scout AI closed a Series A led by Align Ventures and Draper Associates to scale "Fury," a Vision Language Action model for autonomous military vehicles, plus "Ox," a soldier-facing command layer. The company already has $11M in development contracts with DARPA, the Army Applications Lab, and DoD, and is field-testing with the 1st Cavalry at Fort Hood ahead of a 2027 deployment.
Google expands Pentagon's AI access after Anthropic refusal (TechCrunch) - Anthropic refused to grant the DoD unrestricted use of Claude, citing domestic surveillance and autonomous weapons concerns. Google stepped in with classified-network access for "all lawful uses," with non-binding language about not intending the AI for those same use cases. 950 Google employees signed a letter asking the company to follow Anthropic's lead. The company did not.
Models and Tools
Anthropic launches Claude for Creative Work (Anthropic) - Claude now ships with first-party connectors for Blender, Adobe Creative Cloud, Autodesk Fusion, Ableton, plus Affinity, SketchUp, Splice, and Resolume. The pitch is not "AI replaces artists" but "AI handles the Python script, the format conversion, the batch render." It is the most explicit product-level rejection yet of the generative-replacement frame.
NVIDIA's Nemotron 3 Nano Omni goes long-context multimodal (Hugging Face) - A 30B-parameter (A3B) hybrid Mamba-Transformer-MoE that handles text, image, video, and audio in one model with 5+ hour context, leading scores on OCRBenchV2, Video-MME, and VoiceBench, and 9x throughput gains for multimodal workloads. Small enough to be the workhorse for document and video agents, not just a research demo.
Markets and Infrastructure
Intel earnings, differentiation, whither Terafab (Stratechery) - Ben Thompson reads Intel's surprisingly strong quarter as structural rather than cyclical: AI is reshaping CPU demand in Intel's favor, even as the harder Terafab manufacturing question remains unresolved. The bull case for Intel is now an AI infrastructure story.
Amazon offers new OpenAI products on AWS (TechCrunch) - One day after OpenAI's Microsoft cloud exclusivity ended, AWS announced OpenAI model offerings on Bedrock with a new agent service. The hyperscaler chessboard is fully reopened.
Wire Briefs
Amazon adds AI audio Q&A to product pages (TechCrunch) - Shoppers can now ask spoken questions about a product and get a conversational reply, no smart speaker required. Voice commerce slips quietly into the listing page.
YouTube tests "Ask YouTube" guided AI search (TechCrunch) - A Premium-tier feature stitches text and video clips into step-by-step answers, turning the video index into a queryable knowledge base.
Musk relitigates an old friendship at his OpenAI trial (TechCrunch) - Musk testified about the Larry Page falling-out over AI safety that he says set the stage for OpenAI's founding. The trial is now as much origin story as legal proceeding.
SXSW used an AI trademark tool to censor Instagram dissent (404 Media) - An automated trademark-enforcement service quietly removed activist posts critical of SXSW, a clean illustration of how AI moderation systems suppress protected speech by default.
OpenAI hits back at growth fears: "firing on all cylinders" (Bloomberg) - Executives publicly counter the slowing-consumer-growth narrative, framing all business lines as healthy. The story to watch is whether the numbers eventually match the talking points.
The Throughline
The defense story is the story today. Scout AI's $100M is not a niche raise. It is a market signal that the venture class has decided the autonomous-military-vehicle category is investable, and that the labs declining to participate have created a vacuum well-funded startups are walking straight into. Read Scout's customer list (DARPA, the Army Applications Lab, the DoD, the 1st Cavalry) and you are reading the buyer list that Anthropic just walked away from.
What makes the Anthropic-Google split important is that both companies use roughly the same words. Google's contract includes language about not intending its AI for domestic surveillance or autonomous weapons. Anthropic refused because intent without enforceability is not a guardrail. The 950 Google employees who signed the letter asking their company to follow Anthropic's lead understood the distinction perfectly. Their leadership chose the contract anyway. That is the fork: not whether to write the values into the press release, but whether to let those values constrain the deal.
The consequence is already visible in the structure of the market. The Pentagon has reportedly designated Anthropic a "supply-chain risk" for refusing the deal. OpenAI and xAI have signed their own military contracts. Scout AI is the third path: a defense-native startup unencumbered by the values-stack debate entirely, because embracing the mission is the product positioning. Train the model on the hillside. When the labs holding the line on autonomous weapons get punished as procurement risks while the labs willing to ship get rewarded with classified-network access, the policy is not whatever any single company writes in its acceptable-use document. The policy is the equilibrium that emerges from who said yes.
Meanwhile, Anthropic's Claude for Creative Work and NVIDIA's Nemotron 3 Nano Omni point at the other axis the industry is moving along: depth of integration over leaderboard scores. Anthropic shipped first-party connectors into the tools artists actually use, framing Claude as the assistant that writes the Blender script and converts the file format, not the engine that replaces the artist. NVIDIA shipped a 30B omni-modal model with five-hour context and best-in-class document and video benchmarks, sized to be a workhorse, not a hero. Both moves treat capability as plumbing. Both refuse the headline-grabbing framing. The labs that are winning the next phase are the ones building products you can route work through, not demos you can post.
The Bigger Picture
The values question used to be abstract. "What should AI labs refuse to do?" is the kind of sentence that fits in a panel discussion. As of today it is concrete and procurement-shaped. Anthropic refused. Google accepted. Scout AI raised $100M. The Department of Defense designated Anthropic a supply-chain risk. The market sorted the labs into those willing to ship to combatant commands and those not, and the market's verdict on which posture pays better is now legible.
This is not the end of the values discussion. It is the beginning of the version of it where the costs are real. Every lab that drew a line will now have to defend why that line is worth the revenue, the access, and the political risk. Every lab that did not will have to defend why intent-language without enforcement is a guardrail rather than a slogan. And every other industry watching (healthcare, education, law enforcement) will take notes on which posture got rewarded. The fork in the AI defense road is also the fork in every other regulated-buyer market AI is about to enter.
What to Watch
Whether Anthropic's "supply-chain risk" designation gets formalized. If it sticks, it sets a precedent that refusing certain DoD work is a procurement disqualifier across the federal stack, not just within DoD.
Whether the Google employee letter becomes a Maven moment. 950 signatures is a number that scales. The Maven protest reshaped Google's defense posture in 2018; this letter tests whether the same lever still works inside a company that now sees AI as core revenue.
Whether Scout AI's 2027 field deployment actually ships. A VLA model driving an Army ATV in production would be the first real-world VLA defense deployment at scale, and would validate the bet that battlefield autonomy is a startup category, not a prime-contractor monopoly.