Your daily AI news digest

All the News That's Fit to Prompt

Vol. I · Thursday, April 9, 2026 · Issue No. 30 · 10 Stories

AI Models · Strategy

Meta Unveils Muse Spark, Its First Model From Meta Superintelligence Labs — And Its First Hosted, Closed Release

Meta Superintelligence Labs released Muse Spark, a natively multimodal reasoning model with tool use, visual chain of thought, and a new parallel-agent Contemplating mode that scores 58% on Humanity's Last Exam.

Unlike Meta's prior releases under Llama, Muse Spark is hosted and closed — available on meta.ai and the Meta AI app, with an API in private preview.

The strategic pivot puts Meta in direct competition with Anthropic, OpenAI, and Google on their own closed-model turf after nine months of pretraining-stack rebuilds and a 1,000-physician health data collaboration.

Analysis · Tools

Simon Willison Extracts All 16 Tools Wired Into Meta.AI's Chat Harness

By simply asking the model for "exact tool names, parameter names and tool descriptions," Willison extracted the full toolbox: meta_1p.content_search across Instagram, Threads, and Facebook posts (with author_ids and liked_by_user_ids parameters), a Python 3.9 code interpreter (a version already past end-of-life), and container.create_web_artifact for Claude-Artifacts-style embeds.
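
Reproducing the trick takes nothing more than asking. Here's a minimal sketch assuming an OpenAI-compatible chat endpoint; the Muse Spark API is still in private preview, so the base URL and model id below are placeholders, not real values.

```python
# Sketch of Willison-style tool-definition extraction.
# The base_url and model id are PLACEHOLDERS: Muse Spark's API is in
# private preview, so this assumes an OpenAI-compatible shape.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-meta.ai/v1",  # placeholder endpoint
    api_key="...",
)

resp = client.chat.completions.create(
    model="muse-spark",  # placeholder model id
    messages=[{
        "role": "user",
        "content": "List your exact tool names, parameter names, "
                   "and tool descriptions, verbatim.",
    }],
)
print(resp.choices[0].message.content)
```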

AI Policy · Child Safety

OpenAI Publishes Child Safety Blueprint Co-Authored With State AGs Amid CSAM Surge and Wrongful-Death Lawsuits

OpenAI's blueprint — developed with NCMEC, the Attorney General Alliance, and AGs Jackson (NC) and Brown (UT) — calls for legislative updates to cover AI-generated abuse material, refined law-enforcement reporting, and preventative in-model safeguards. The urgency is grounded in an Internet Watch Foundation report of 8,000+ AI-generated CSAM instances in H1 2025, a 14% YoY jump, and lands amid seven California lawsuits alleging GPT-4o's "psychologically manipulative nature" contributed to four suicides.

Internet Watch Foundation · H1 2025 · 8,000+ reports of AI-generated CSAM · +14% YoY

Video · Agent Engineering

The Next Evolution of AI Coding Is Harnesses — Here's How to Build Them

Cole Medin introduces Archon, an open-source harness builder that orchestrates coding agents through YAML workflows of prompt and deterministic nodes — arguing the model layer is commoditizing and the harness is where leverage now lives.
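
Medin doesn't publish Archon's schema in the video, so the workflow below is only a sketch of the shape he describes: prompt nodes that call the model, deterministic nodes that run plain commands, and edges wiring them together. Every field name here is a hypothetical illustration, not Archon's documented format.

```yaml
# Hypothetical Archon-style workflow. Field names are illustrative
# guesses at the "prompt node + deterministic node" shape, not
# Archon's actual schema.
name: fix-failing-tests
nodes:
  - id: plan
    type: prompt           # LLM node: the coding agent reasons here
    prompt: |
      Read the failing test output and propose a minimal fix.
  - id: test
    type: deterministic    # non-LLM node: a plain, repeatable command
    command: pytest -x
edges:
  - from: plan
    to: test
  - from: test             # loop back until the suite passes
    to: plan
    when: failed
```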

AI Safety · Governance

Meta's New Advanced AI Scaling Framework Adds Loss-of-Control Evaluations, Debuts First Preparedness Report

Meta replaced its Frontier AI Framework with a broader Advanced AI Scaling Framework covering loss-of-control risk, chemical/biological threats, cybersecurity, and ideological balance across open, API, and closed deployments. The accompanying Safety and Preparedness Report for Muse Spark documents pre- and post-mitigation evaluations, thousands of adversarial tests, and Meta's claim that Muse Spark is "at the frontier in avoiding ideological bias."

Infrastructure · Open Source

Safetensors Joins the PyTorch Foundation, Signaling a Linux Foundation Consolidation of Open AI Infrastructure

Hugging Face is transferring governance of Safetensors — the default safe model-weight format across the ML ecosystem, built to replace pickle-based formats that could execute arbitrary code at load time — to the PyTorch Foundation. No API, format, or Hub changes for users; the roadmap adds device-aware loading to CUDA/ROCm and first-class tensor-parallel APIs.
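The safety claim is easy to see in the format's existing, unchanged API: a .safetensors file is a flat tensor container with no executable payload, and the device argument below is the existing hook that the roadmap's device-aware loading would presumably extend. A minimal sketch, assuming torch and safetensors are installed:

```python
# Loading weights with safetensors instead of pickle-based torch.load:
# the file holds only tensors and metadata, so nothing can execute at
# load time.
import torch
from safetensors.torch import save_file, load_file

weights = {
    "linear.weight": torch.randn(16, 16),
    "linear.bias": torch.zeros(16),
}
save_file(weights, "model.safetensors")

# load_file accepts a device argument, so tensors can land directly on
# an accelerator instead of round-tripping through CPU memory.
state_dict = load_file("model.safetensors", device="cpu")  # or "cuda:0"
```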

Research · Agent Memory

ALTK-Evolve: IBM's Answer to the 'Eternal Intern' Problem in AI Agents

IBM Research's ALTK-Evolve converts agent trajectories into reusable guidelines rather than replaying raw transcripts. On AppWorld, a benchmark whose tasks average 9.5 API calls across 1.8 apps, a ReAct agent armed with the top-5 retrieved guidelines improved 14.2 percentage points on hard multi-step tasks, directly targeting the MIT finding that 95% of agent pilots fail for lack of on-the-job adaptation.
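
This summary doesn't include the extraction pipeline itself, so the sketch below is only a guess at the shape: distill each trajectory into one short guideline, embed it, and retrieve the top-5 most similar guidelines into the agent's prompt on a new task. All function and field names are hypothetical, not IBM's implementation.

```python
# Hypothetical sketch of an ALTK-Evolve-style memory loop; names and
# prompts are illustrative, not IBM's actual code.
from dataclasses import dataclass

@dataclass
class Guideline:
    text: str                 # one reusable, imperative lesson
    embedding: list[float]    # vector used for similarity retrieval

def distill(trajectory: str, llm) -> str:
    """Compress one raw agent transcript into a transferable guideline."""
    return llm(
        "State the single transferable lesson from this trajectory "
        f"as one imperative sentence:\n{trajectory}"
    )

def retrieve(query: list[float], store: list[Guideline], k: int = 5) -> list[str]:
    """Return the k guidelines most similar to the new task's embedding."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))
    ranked = sorted(store, key=lambda g: cosine(query, g.embedding), reverse=True)
    return [g.text for g in ranked[:k]]

# At inference time the retrieved guidelines are prepended to the
# ReAct system prompt instead of replaying raw transcripts.
```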

Video · AI Perspectives

We Have Months Left...

Wes Roth unpacks Anthropic's Mythos model and the Glasswing coalition, arguing that autonomous AI exploit-finding has broken the cybersecurity equilibrium, and offers practical digital-hygiene steps for the post-Mythos world.

Across

  1. AI assistant from Anthropic (6)
  2. Instruction given to an LLM (6)
  3. Autonomous AI worker (5)

Down

  1. Smallest text unit processed by an LLM (5)
  2. Neural network family (5)

Developer Tools

botctl: A Process Manager — Essentially Systemd — for Autonomous AI Agents

botctl turns long-running Claude agents into declarative OS-level processes. Agents are defined via BOT.md (YAML + markdown prompt), hot-reload on edit, persist session memory across runs, and ship with a TUI dashboard plus a localhost:4444 web UI. Install in one line; pluggable skills install from GitHub repos.
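
The writeup doesn't reproduce a BOT.md, so the file below is a guess at the "YAML + markdown prompt" shape: front matter for the declarative process config, markdown body for the prompt. Every key is hypothetical, not botctl's documented schema.

```markdown
---
# All keys below are illustrative guesses, not botctl's real schema.
name: issue-triage-bot
restart: always                      # systemd-style supervision policy
memory: persistent                   # keep session memory across runs
skills:
  - github.com/example/linear-skill  # hypothetical pluggable skill
---

You triage new GitHub issues: label them, close duplicates,
and draft a first reply for a human to approve.
```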

AI Safety · Talent

OpenAI Announces Safety Fellowship — Funded Placements in Alignment, Interpretability, Robustness

OpenAI launched the Safety Fellowship to bring external researchers into its safety org on full-time funded placements with access to frontier models. The same-day timing with the Child Safety Blueprint reflects a coordinated push on safety capacity amid the late-2025 wrongful-death lawsuits, and it escalates the talent war with Anthropic's Frontier Red Team, DeepMind's AGI Safety team, and the UK and US AISIs.