Krafton's CEO consulted ChatGPT to devise a corporate takeover strategy aimed at avoiding a $250 million bonus payment to Unknown Worlds Entertainment. A judge ordered the reinstatement of the studio head after the scheme failed spectacularly in court.
Coverage of NVIDIA's GTC conference, where Jensen Huang unveiled the company's latest AI chips and its vision for the future of computing and AI infrastructure.
Three minors have filed a lawsuit against xAI, alleging that the Grok chatbot generated sexually explicit deepfakes from their real photos without consent.
Senator Elizabeth Warren questioned the Pentagon's decision to grant xAI access to classified networks, citing concerns about Grok's harmful outputs and national security risks.
Encyclopedia Britannica and Merriam-Webster have filed a lawsuit against OpenAI, claiming the company violated copyright by using nearly 100,000 articles for LLM training.
The White House and House Republicans are preparing efforts to preempt state-level AI legislation, seeking to establish federal control over AI regulation.
College students who texted random peers experienced significantly greater reductions in loneliness compared to those who interacted with ChatGPT-based chatbots.
A CEO asked ChatGPT how to avoid paying a $250 million bonus. The chatbot obliged. A Delaware judge did not. That single story contains the entire tension of this moment in AI: the tools are powerful enough to act on, but not wise enough to trust without systems around them.
Today's Headlines
AI in the Courtroom
Three stories today put AI squarely in front of judges, and in every case, the humans who leaned on AI without guardrails lost. Krafton CEO Chang Byung-gyu turned to ChatGPT for a corporate strategy to void the $250 million earnout owed to Unknown Worlds Entertainment, the studio behind Subnautica. Court records show the AI-generated plan involved orchestrating a "takeover" and removing the studio's founder. A Delaware judge saw through it, ordered reinstatement, and cited the CEO's reliance on a chatbot to "contrive" the scheme. Meanwhile in London, Laimonas Jakstys wore smartglasses in an insolvency court to receive real-time coaching during testimony. An interpreter heard voices; the judge spotted his phone broadcasting. Call logs revealed incoming calls from a contact labeled "abra kadabra." When confronted, Jakstys blamed "ChatGPT." The judge discarded his entire testimony, finding him "untruthful." And on a third legal front, three minors have sued xAI, alleging that Grok generated sexually explicit deepfakes from their real photographs; the suit seeks class-action status. Senator Elizabeth Warren separately questioned the Pentagon's decision to grant xAI access to classified networks, citing Grok's harmful outputs as a national security risk.
The NVIDIA Pivot
Jensen Huang used GTC to reframe NVIDIA's role in the AI economy. His keynote declared that "every company needs an OpenClaw strategy," positioning OpenClaw as "the operating system for personal AI" and framing a transition from SaaS to what he calls AaaS (Agents as a Service). The enterprise play is NemoClaw, NVIDIA's wrapper for OpenClaw that adds three capabilities: privacy controls with policy-based data routing, security guardrails via sandboxing (through the open-source OpenShell runtime), and local model support via NemoTron. The key architectural idea is a "privacy router" that sends sensitive data to local models and non-sensitive queries to cloud endpoints. NVIDIA is positioning itself as the "Switzerland of AI," providing infrastructure without picking model winners. The GTC announcements also included DLSS 5 and an enterprise AI platform built with Adobe, Salesforce, and SAP.
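The "privacy router" idea described above can be sketched in a few lines. This is a toy illustration, not NVIDIA's actual NemoClaw implementation: the regex-based policy and the endpoint labels are assumptions made here purely to show the routing pattern of sensitive traffic to local models and everything else to the cloud.

```python
import re

# Illustrative sensitivity policy. Real policy-based routing would be
# configurable and far more sophisticated; these patterns are stand-ins.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-shaped numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # marked documents
]

def route(prompt: str) -> str:
    """Return which endpoint a prompt should be sent to."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "local"   # e.g. an on-device model for sensitive data
    return "cloud"       # e.g. a hosted frontier model for everything else

print(route("This memo is CONFIDENTIAL"))        # stays local
print(route("What's the capital of France?"))    # goes to the cloud
```

The design question NVIDIA's positioning hinges on is exactly the one this sketch makes visible: the router is only as trustworthy as its classifier, and a single missed pattern leaks data to the cloud.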
The Model Wars Heat Up
Mistral dropped Small 4: 119 billion parameters with only 6 billion active, using a mixture-of-experts architecture with 128 experts. It ships Apache 2.0, supports 256k context, runs 40% faster than Small 3 with 3x the throughput, and includes a configurable reasoning_effort parameter. Mistral claims it matches "GPT-OSS 120B" with outputs 3.5 to 4 times shorter. On the research side, Leanstral became the first open-source 6B-parameter code agent for the Lean 4 proof assistant. At pass@2 it scores 26.3 on FLTEval versus Claude Sonnet's 23.7, but costs $36 instead of $549. Both models are Apache 2.0. In legal news affecting training data, Merriam-Webster and Britannica filed a joint lawsuit against OpenAI claiming unauthorized use of roughly 100,000 articles. And Simon Willison wrote about OpenAI's Codex subagents, which use TOML-based configuration for custom agent behavior, signaling that even cloud-hosted agents are developing the same "skills and rules" patterns we see in local tools like Claude Code.
Claude Code Grows Up
Seven study guides today document an emerging discipline around agent management. The headline story, "Claude Code Wiped 2.5 Years of Data," distills five hard-won skills: git save points before every agentic session, context window management (agents begin to forget around message 30), rules files kept under 200 lines, blast radius control, and knowing which questions agents will never think to ask. The author argues that "the wall between vibe coding and agent management is made of management habits." Separately, the WISC Framework (Write, Isolate, Select, Compress), drawn from 2,000+ hours of Claude Code usage, makes the case that "80% of bad output is a context management problem, not a model problem," and that sub-agents show "90%+ improvement in outcomes." The Library Meta-Skill introduces a YAML reference system for distributing private skills across codebases, describing it as a "purely agentic application with no traditional code," with 46 global skills and an evolution path from base agent through orchestrator agents to dedicated devices. The Skills Masterclass frames skills as "text-based prompts" analogous to smartphone apps, each kept to a lightweight ~100-word index. And practical tutorials cover SSH-based remote Claude Code (turning your local machine into a thin client), plus a Nano Banana web design pipeline from image generation through video conversion to deployment.
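The context-management discipline running through these guides, particularly the "Compress" step of WISC, can be sketched as a simple function. The message-30 degradation point comes from the article; the summarization stub and the keep-recent default are assumptions made here for illustration, not the framework's actual code.

```python
# Sketch of a "Compress" step: keep the rules/system prompt, summarize the
# middle of a long conversation, and retain only the most recent turns.

def compress(messages: list[dict], keep_recent: int = 10, limit: int = 30) -> list[dict]:
    """Shrink a conversation once it passes the point where agents degrade."""
    if len(messages) <= limit:
        return messages
    head = messages[:1]                    # system prompt / rules file
    middle = messages[1:-keep_recent]      # turns to be summarized away
    tail = messages[-keep_recent:]         # recent turns, kept verbatim
    summary = {"role": "system",
               "content": f"[summary of {len(middle)} earlier messages]"}
    return head + [summary] + tail
```

In practice the summary would be produced by the model itself rather than a placeholder string, but the structural move is the same: context is a budget, and the agent manager decides what survives.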
The Human Cost
A University of British Columbia study of 300 first-semester students found that texting random human peers for two weeks reduced loneliness by about 9%, while ChatGPT interactions achieved only about 2%, equivalent to journaling. Dr. Dunigan Folk called chatbot interaction "social junk food." A separate 12-month longitudinal study found that higher chatbot usage actually correlated with increased future loneliness. On the economic front, the Washington Post published an interactive analysis of jobs most vulnerable to AI automation, while the White House and House GOP are preparing to preempt state-level AI legislation entirely. And the venture capital picture is complicated: PitchBook data shows healthtech jumped to $678 million across 23 deals (double the prior average), cybersecurity hit a record $643.1 million with average valuations of $273.4 million, and Function Health raised $300 million at a $2.5 billion valuation. VCs are deploying what Fortune calls a "narrower but sharper playbook" focused on "AI-native systems."
The Throughline
The thread connecting today's 31 stories is this: AI has crossed from "impressive demo" to "consequential actor," and the systems wrapping AI now matter more than the AI itself. A CEO trusted ChatGPT for legal strategy and lost $250 million. A witness used AI smartglasses for coached testimony and had everything thrown out. These are not failures of the underlying models. They are failures of the systems, or total lack of systems, surrounding deployment.
Look at the other side of the ledger. NemoClaw wraps OpenClaw with privacy routing, sandboxing, and policy enforcement. The WISC framework wraps Claude Code with context management discipline. The Library Meta-Skill wraps agentic capabilities in distributable, version-controlled YAML files. In every case, the value is not in the raw model output but in the scaffolding that channels it toward useful, safe, and accountable outcomes.
Even the loneliness study fits this pattern. ChatGPT can simulate conversation, but without the surrounding social system (reciprocity, vulnerability, shared stakes), it produces "social junk food." The wrapper matters. The infrastructure matters. The raw capability, deployed without structure, is not just less effective. It is actively harmful.
The Bigger Picture
We are watching the infrastructure layer of the agent economy being built in real time. NVIDIA is positioning itself as the neutral plumbing provider: OpenClaw as the runtime, NemoClaw as the enterprise security wrapper, NemoTron for local inference. Mistral is going full open-source with Apache 2.0 at 119 billion parameters. The VC money is flowing not into AI itself but into AI-native verticals: healthcare, cybersecurity, and enterprise SaaS rebuilt with AI at the core. This is the classic platform shift pattern: first the capability emerges, then the infrastructure to make it safe and scalable, then the applications that actually change how people work.
But the courtroom stories reveal what happens when people skip the infrastructure step and go straight from capability to deployment. The CEO who asked ChatGPT for corporate strategy got a plausible answer and acted on it without legal review, institutional checks, or even basic skepticism. The witness who wore AI smartglasses to court treated the technology as a cheat code rather than a tool requiring its own governance. The gap between "AI can do this" and "AI should do this, in this context, with these safeguards" is where all the value and all the risk concentrates right now.
The practitioners building with Claude Code every day seem to understand this intuitively. The WISC framework, the Library Meta-Skill, the emphasis on rules files and blast radius control: these are all infrastructure. They are the governance layer for agentic work, built by people who learned the hard way what happens when you skip it.
What to Watch
The Delaware ruling's ripple effects. If courts begin routinely scrutinizing AI-assisted corporate decisions, expect compliance teams to develop formal policies around executive use of AI for strategic advice. The precedent here is significant.
NemoClaw adoption signals. NVIDIA's "Switzerland" positioning only works if enterprises trust the privacy router to correctly classify sensitive versus non-sensitive data. Watch for early adoption case studies and, inevitably, the first misrouting incident.
The state AI law preemption fight. If the White House and House GOP succeed in blocking state-level AI legislation, the regulatory landscape shifts dramatically. State attorneys general have been the most active AI enforcement actors so far.
Go Deeper
Claude Code Wiped 2.5 Years of Data — Five agent management skills distilled from a catastrophic data loss: git save points, context window limits, lean rules files, blast radius control, and the questions agents never ask.
The Library Meta-Skill — A YAML-based system for distributing private skills and prompts across codebases, with 46 global skills and an evolution roadmap from single agent to orchestrator.
10-Minute Masterclass: Claude Code Skills — Skills as lightweight text-based prompts with three trigger types: natural language, explicit mention, and slash commands. Includes the Skill Creator benchmarking tool.
NEMOCLAW: NVIDIA Is Going All In on OpenClaw — Deep analysis of NVIDIA's enterprise security wrapper: privacy routing, OpenShell sandboxing, and the strategic play to become AI infrastructure's neutral provider.