Nicholas Carlini, a research scientist at Anthropic, reported at the [un]prompted AI security conference that he used Claude Code to find multiple remotely exploitable security vulnerabilities in the Linux kernel, including one that sat undiscovered for 23 years.
The finding underscores a growing role for AI in offensive and defensive cybersecurity, where large language models can systematically scan massive codebases for subtle flaws that human reviewers have overlooked for decades.
Pascual Restrepo's new NBER paper argues the question isn't what AI can do but what AI will bother doing, and most human work doesn't make the cut.
Anthropic announced that Claude subscription holders can no longer use their token limits with third-party tools like OpenClaw, citing capacity constraints and outsized strain on their systems. The move comes days after a CVSS 8.6 privilege escalation vulnerability (CVE-2026-33579) was disclosed in OpenClaw's device pairing system, which allowed unprivileged callers to approve admin-level access requests.
Netflix released VOID, their first public AI model, on Hugging Face. The model handles video object and interaction deletion, generating major buzz in the local LLM community with over 1,400 upvotes. The release signals Netflix's first move into the open-source AI space.
Mintlify replaced expensive sandboxes with ChromaFs, a virtual filesystem over Chroma, to give their docs AI assistant the ability to explore documentation like a developer would.
Can a large language model improve at code generation using only its own raw outputs, without a verifier, a teacher model, or reinforcement learning? Researchers answer in the affirmative with simple self-distillation (SSD), a method that requires no external supervision.
Kyle Daigle, GitHub's COO, shared new platform growth statistics: one billion commits in 2025, with GitHub Actions expanding from 500 million to 2.1 billion minutes weekly. The numbers paint a picture of developer activity accelerating alongside AI-assisted coding tools.
Scramble
N I X U L
OS where a 23-year-old vulnerability was hiding
L N R E K E
Core of an operating system
D U L C E A
Anthropic's AI that found the bug
L E D O M
What Netflix just dropped on Hugging Face
Bonus Word
Storage array, or what hackers do to your data (4 letters)
✦ The Big Picture
An Anthropic researcher pointed Claude Code at the Linux kernel and it found a remotely exploitable heap buffer overflow that had been hiding in the NFS driver since March 2003. Five vulnerabilities total, with "hundreds of crashes" still awaiting validation. The same week, Anthropic cut off OpenClaw users from subscription billing after a CVSS 8.6 privilege escalation CVE surfaced in the tool's device pairing system, a Yale economist argued that AGI won't bother automating most human work because it's not economically worth the trouble, and Netflix quietly released its first open-source AI model. Today's issue is about who controls the tools that find the flaws, who decides which flaws get fixed, and the strange economics of a world where AI labor is simultaneously too powerful and too expensive to deploy everywhere.
▶ Listen to the Digest (~7 min)
Today's Headlines
AI Finds What Humans Missed
Claude Code Finds 23-Year-Old Linux Kernel Vulnerability - Nicholas Carlini used a simple automated script that pointed Claude Opus 4.6 at individual kernel source files, framing the task as a capture-the-flag competition. The primary finding: a heap buffer overflow in the NFSv4.0 LOCK replay cache where a 1,056-byte denial response overflows into a 112-byte buffer, leaking approximately 944 bytes of kernel heap memory over the network. Five total vulnerabilities were reported and fixed, including an io_uring out-of-bounds read and ksmbd use-after-free bugs. Carlini's assessment: "We now have a number of remotely exploitable heap buffer overflows in the Linux kernel. I have never found one of these in my life before. This is very, very, very hard to do." The bottleneck is no longer discovery but verification and responsible disclosure.
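The arithmetic of the leak can be illustrated with a toy heap model. This is a pure-Python simulation, not the kernel code: the 1,056- and 112-byte sizes come from the report, while the heap layout and contents are invented for illustration.

```python
# Toy model of the NFSv4.0 replay-cache out-of-bounds read described above.
# A 112-byte cached-reply buffer sits on a simulated heap next to unrelated
# data; replaying a 1,056-byte reply from it spills 944 bytes of neighboring
# heap memory into the response sent over the network.

CACHE_SLOT = 112    # size of the replay-cache buffer (from the report)
REPLY_LEN = 1056    # length of the oversized denial response (from the report)

# Simulated heap: the cache slot, then adjacent allocations holding "secrets".
heap = bytearray(b"A" * CACHE_SLOT) + bytearray(b"S" * 4096)

def build_replayed_response(heap: bytearray, reply_len: int) -> bytes:
    # The bug: trusting the recorded reply length instead of the slot size,
    # so the read runs past the 112-byte buffer into adjacent memory.
    return bytes(heap[:reply_len])

response = build_replayed_response(heap, REPLY_LEN)
leaked = response[CACHE_SLOT:]       # bytes beyond the legitimate buffer
print(len(leaked))                   # 944 bytes of adjacent heap leaked
assert len(leaked) == REPLY_LEN - CACHE_SLOT == 944
assert set(leaked) == {ord("S")}     # every leaked byte is "secret" data
```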
Self-Distillation Improves Code Generation - Researchers showed that Qwen3-30B-Instruct improved from 42.4% to 55.3% pass@1 on LiveCodeBench v6 using only its own outputs for training, with no verifier, no teacher model, and no reinforcement learning. The method resolves what the authors call a "precision-exploration conflict in LLM decoding," selectively suppressing unhelpful token distributions while maintaining beneficial diversity. The gains concentrated on harder problems, suggesting self-improvement scales where it matters most.
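The "precision-exploration conflict" can be pictured at the level of a single next-token distribution: greedy decoding collapses to one token (full precision, no exploration), while unfiltered sampling keeps noisy tail mass (exploration, low precision). A minimal sketch of suppressing only the unhelpful tail while renormalizing, with a thresholding rule invented for illustration rather than taken from the paper:

```python
# Illustrative sketch: zero out low-probability "noise" tokens in a
# next-token distribution while keeping plausible alternatives alive.
# The floor-based rule here is an invented stand-in, not SSD's actual method.

def suppress_tail(probs: dict[str, float], floor: float = 0.05) -> dict[str, float]:
    """Drop tokens below a probability floor, then renormalize the rest."""
    kept = {tok: p for tok, p in probs.items() if p >= floor}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# A toy next-token distribution: three plausible continuations plus noise.
probs = {"return": 0.50, "yield": 0.30, "pass": 0.15, "qux": 0.03, "zzz": 0.02}

sharpened = suppress_tail(probs)
# Noise tokens are gone, but all plausible continuations survive, unlike
# greedy decoding, which would have kept only "return".
assert "qux" not in sharpened and "zzz" not in sharpened
assert sharpened["return"] > sharpened["yield"] > sharpened["pass"]
assert abs(sum(sharpened.values()) - 1.0) < 1e-9
```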
Platform Control and Trust
Anthropic Blocks OpenClaw After Critical CVE - Anthropic emailed subscribers that third-party harnesses like OpenClaw can no longer use subscription token limits, citing "outsized strain" on systems. The timing is notable: CVE-2026-33579 (CVSS 8.6) disclosed days earlier revealed that OpenClaw's device pairing system allowed unprivileged users to approve admin-level access requests through missing scope validation. One HN commenter called OpenClaw "a walking attack surface." The competitive dimension is impossible to ignore: OpenAI owns OpenClaw. Anthropic offered a one-time credit and up to 30% discount on token bundles, but community sentiment ran negative, with critics noting that Claude Code's own /loop and scheduled tasks enable similar automated usage patterns. Anthropic's estimated revenue run rate: $19 billion, up from $1 billion at the start of 2025.
GitHub's Commit Volume Is Exploding - GitHub saw 1 billion commits in all of 2025, but the current pace is 275 million commits per week, projecting to roughly 14 billion commits annually if sustained. That's a 14x year-over-year increase. GitHub Actions usage quadrupled from 500 million minutes per week in 2023 to 2.1 billion minutes now. The numbers suggest AI-assisted coding isn't just changing how developers write code; it's changing how much code gets written.
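The projection behind those figures is straightforward to reproduce:

```python
# Reproducing the growth arithmetic in the GitHub figures above.
weekly_commits = 275_000_000                # current pace, commits per week
annualized = weekly_commits * 52            # sustained for a full year
print(f"{annualized / 1e9:.1f}B")           # 14.3B commits annually

yoy_multiple = annualized / 1_000_000_000   # vs. 1B commits in all of 2025
actions_multiple = 2_100_000_000 / 500_000_000  # Actions minutes/week growth

assert round(yoy_multiple) == 14            # the "14x" year-over-year claim
assert actions_multiple == 4.2              # "quadrupled" since 2023
```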
The Economics of Automation
Yale Economist: Most Jobs "Not Worth" Automating - Pascual Restrepo's NBER paper "We Won't Be Missed" divides work into "bottleneck work" (energy, infrastructure, science, defense) and "supplementary work" (hospitality, arts, customer support). AGI will automate bottleneck roles using compute resources but ignore supplementary work as economically inefficient to replace. The striking claim: total human brain computing capacity equals roughly 10^18 flops, while future computational resources could reach 10^54 flops, making human labor "economically marginal." In post-AGI economies, labor's share of GDP converges to zero, with most income accruing to compute owners. Surviving jobs persist not because humans are better, but because replicating them computationally is not justified. The paper cites current data showing construction electricians earning $81,800 annually (a 32% premium) on data center projects.
Building Smarter Infrastructure
Mintlify's ChromaFs: From 46 Seconds to 100 Milliseconds - Mintlify replaced expensive sandboxed environments for their AI assistant with ChromaFs, a virtual filesystem that intercepts UNIX commands and translates them into database queries. The results: P90 boot time dropped from 46 seconds to 100 milliseconds, annual infrastructure cost fell from $70,000 to effectively zero marginal cost, and the system now handles 30,000+ daily conversations across hundreds of thousands of users. The architecture uses Chroma for coarse file filtering, Redis for bulk prefetching, and lazy S3 pointers for large OpenAPI specs.
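The core idea, answering filesystem-style commands from a database instead of a real disk, can be sketched in a few lines. Class and method names here are hypothetical, and an in-memory dict stands in for the Chroma/Redis/S3 layers of the real system:

```python
# Minimal sketch of a ChromaFs-style virtual filesystem: "cat" and "ls"
# are served from an in-memory store standing in for database queries.
# Names are hypothetical; the production system layers Chroma (filtering),
# Redis (prefetch), and lazy S3 pointers underneath.

class VirtualFs:
    def __init__(self, store: dict[str, str]):
        self.store = store  # path -> file contents, in place of a real disk

    def cat(self, path: str) -> str:
        # A real implementation would issue a database query here.
        if path not in self.store:
            raise FileNotFoundError(path)
        return self.store[path]

    def ls(self, prefix: str) -> list[str]:
        # A directory listing becomes a key-prefix scan over the store.
        return sorted(p for p in self.store if p.startswith(prefix))

docs = VirtualFs({
    "docs/quickstart.md": "# Quickstart\nInstall, then run.",
    "docs/api/auth.md": "# Auth\nUse bearer tokens.",
})

print(docs.ls("docs/"))   # the whole tree, with no sandbox to boot
assert docs.ls("docs/api/") == ["docs/api/auth.md"]
assert docs.cat("docs/quickstart.md").startswith("# Quickstart")
```

The speedup comes from the boundary shift: there is no VM or container to start, so "boot" is just constructing an object over an already-warm store.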
Netflix Drops VOID on Hugging Face - Netflix released their first public AI model, VOID (Video Object and Interaction Deletion), generating over 1,400 upvotes on r/LocalLLaMA. The model handles removing objects and interactions from video, a practical tool for content production workflows. The release marks Netflix's first contribution to the open-source AI ecosystem.
The Throughline
The thread connecting every story in today's issue is the asymmetry between what AI can find and what institutions are prepared to handle. Carlini's Linux kernel work is the clearest example: Claude Code discovered five serious vulnerabilities including a remotely exploitable 23-year-old heap overflow, but the bottleneck isn't discovery anymore. It's the human process of verification, responsible disclosure, and patching. He has "hundreds of crashes" waiting in a queue. The tool is faster than the system built to receive its output.
The same asymmetry appears in the OpenClaw story, but running in the opposite direction. CVE-2026-33579 revealed a privilege escalation flaw with a CVSS 8.6 rating, and Anthropic responded by cutting off subscription access for all OpenClaw users. The security fix existed (upgrade to version 2026.3.28), but Anthropic's response went beyond the CVE, restricting an entire category of third-party tooling. The HN community noticed the competitive angle immediately: OpenAI owns OpenClaw. Whether the crackdown was driven by legitimate security concerns, capacity management, or competitive positioning, the effect is the same. Platform owners can redefine the terms of access unilaterally. One day your workflow runs on subscription billing; the next day you need pay-as-you-go.
Restrepo's paper reframes this asymmetry at the economic level. His "bottleneck versus supplementary" distinction implies that AI won't replace most workers, not because it can't, but because the economic return doesn't justify the compute. The surviving jobs (hospitality, arts, live performance) persist because automation costs exceed the value of the work. But his own numbers contain a darker implication: labor's share of GDP converging to zero means that even if jobs persist, their economic power doesn't. You keep your job, but the economy is structured around compute ownership, not labor contribution. The $81,800 construction electricians wiring data centers are building the infrastructure that makes their own labor economically marginal.
GitHub's 14x increase in annual commit volume and Mintlify's 460x improvement in boot time both point to the same conclusion: the tools are getting dramatically faster, cheaper, and more capable. But the institutions (security processes, labor markets, platform governance, licensing frameworks) are not scaling at the same rate. The gap between tool velocity and institutional readiness is today's real story.
The Bigger Picture
We're watching a shift from AI as an assistant to AI as an autonomous agent, and the infrastructure isn't ready. When Carlini points Claude Code at kernel source files and walks away, that's not assisted coding. That's autonomous security research at a scale no human team could match. When OpenClaw runs continuously on a user's behalf, consuming tokens at rates that break subscription economics, that's not a chatbot. That's an autonomous agent with its own resource footprint. Anthropic's response, cutting off subscription access, is a platform owner grappling with the reality that autonomous AI agents behave differently than human users, and pricing models built for human interaction patterns don't survive first contact with agents running 24/7.
The self-distillation paper adds a subtler dimension. If models can meaningfully improve at code generation using only their own outputs, with no external supervision or verifier, then the loop between AI capability and AI deployment tightens. Today's 30% improvement on hard coding problems becomes tomorrow's baseline. Combined with GitHub's commit explosion (14 billion projected annual commits, most presumably AI-assisted), the picture is of a system that's accelerating its own capability while the governance structures lag further behind. Restrepo's economic framework suggests this acceleration has a natural endpoint: compute becomes the primary factor of production, labor becomes supplementary, and the question isn't whether AI takes your job but whether your job's economic value justifies the compute cost of replacing you.
The uncomfortable pattern across all of today's stories is that the people and institutions most affected have the least visibility into the decisions being made. Linux users didn't know about a 23-year-old kernel vulnerability. OpenClaw users didn't know their access would be cut until the email arrived. And if Restrepo is right, most workers won't know their jobs have become "supplementary" until the GDP data confirms it. The tools move first. The understanding follows.
What to Watch
AI security research velocity versus disclosure infrastructure. Carlini's "hundreds of crashes" queue is a leading indicator. If AI tools can find vulnerabilities faster than the CVE/patch process can handle them, the security community will need fundamentally new disclosure frameworks. Watch whether other AI labs launch similar automated kernel auditing programs, and whether the Linux Foundation responds with AI-specific vulnerability triage processes.
Platform restrictions on autonomous agents. Anthropic's OpenClaw crackdown is likely the first of many. As AI agents consume resources at non-human rates, every SaaS platform will face the same pricing mismatch. Watch for similar restrictions from OpenAI, Google, and others, and whether a new pricing paradigm emerges for agent-versus-human usage patterns.
The labor share data. Restrepo's prediction that labor's share of GDP converges to zero is testable. Watch Q2 and Q3 productivity data alongside hiring numbers, particularly in the "bottleneck" categories he identifies (energy, infrastructure, defense). If productivity surges while employment stagnates in those sectors, his framework gains real predictive power.