AI News Digest — April 2, 2026
A packed 24 hours in AI-land. GitHub dropped a game-changing CLI feature for multi-agent workflows, the npm supply chain came under coordinated attack, and HuggingFace shipped a major post-training library. Let’s dig in.
GitHub Copilot CLI Gets /fleet — Parallel Multi-Agent Dispatching
GitHub just shipped the /fleet command for GitHub Copilot in the CLI, and it’s the most interesting developer tooling release this week. Instead of running a single coding agent, /fleet lets you dispatch multiple AI agents in parallel across your codebase. Think of it as a swarm of specialized workers — one refactoring your auth module, another writing tests for the API layer, a third updating documentation — all running simultaneously.
The command integrates directly with your existing GitHub Copilot setup. You describe the tasks, and /fleet decomposes them, assigns them to parallel agents, and coordinates the results back into your working tree. For monorepo maintainers and teams working on large-scale refactors, this could be a serious productivity multiplier.
We’re still digging into the docs, but initial impressions suggest this is more than a gimmick: a legitimate shift in how we think about AI-assisted development, moving from “pair programmer” to “team lead” of an AI crew.
Supply Chain Under Siege: Axios npm, LiteLLM, and plain-crypto-js
The open source supply chain took a beating over the past 48 hours, and if you’re running JavaScript or Python in production, you need to pay attention.
Axios npm compromise: The popular axios package was caught serving malicious code after a dependency was hijacked. The attack vector was plain-crypto-js, a seemingly innocuous dependency that was subtly modified to include data exfiltration payloads. Simon Willison has an excellent breakdown of how the attack worked — the malware was designed to harvest environment variables and send them to a remote server, a classic credential-stealing technique disguised inside a transitive dependency.
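To make the attack pattern concrete, here is a tiny, hypothetical heuristic scanner of our own devising (not Simon's tooling, and no substitute for a real audit): it flags JavaScript files that both read environment variables and make outbound network calls, the combination the exfiltration payload described above relies on.

```python
import re
from pathlib import Path

# Illustrative heuristic only: real malware is often obfuscated and will
# not match simple patterns like these.
ENV_READ = re.compile(r"process\.env")
NETWORK = re.compile(r"\b(fetch|https?\.request|XMLHttpRequest|axios\.post)\b")

def suspicious_files(root: str) -> list[str]:
    """Return JS files under `root` that read env vars AND hit the network."""
    hits = []
    for path in Path(root).rglob("*.js"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if ENV_READ.search(text) and NETWORK.search(text):
            hits.append(str(path))
    return sorted(hits)
```

A hit is only a starting point for manual review; plenty of legitimate code reads `process.env` near network calls.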
LiteLLM breach fallout: The LiteLLM library compromise we reported on earlier this week has wider implications than initially thought. Mercor, the AI evaluation platform, confirmed it was hit by a cyberattack tied directly to the LiteLLM vulnerability. If you’re using LiteLLM as a proxy layer for LLM API calls, rotate your keys now.
GitHub’s response — Trusted Publishing: GitHub published a comprehensive post on securing the open source supply chain in response to these incidents. The key takeaway: Trusted Publishing (OIDC-based, no long-lived tokens) is now available for both npm and PyPI, and you should be using it. The post also covers dependency cooldown — a mechanism that delays automatic dependency updates to give the community time to catch malicious code. Support is landing across pnpm, Yarn, Bun, Deno, uv, pip, and npm itself.
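For PyPI, Trusted Publishing boils down to a short workflow file. A minimal sketch (this assumes you have already registered the repo and workflow as a trusted publisher in your PyPI project settings; adapt names to your project):

```yaml
# Sketch: publish to PyPI via OIDC, no long-lived API token stored anywhere.
name: publish
on:
  release:
    types: [published]
jobs:
  pypi:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for the OIDC token exchange
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The `id-token: write` permission is what lets the workflow mint a short-lived identity token that PyPI verifies against your trusted-publisher configuration.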
This is a good time to audit your dependency tree. Run npm audit or pip audit, check your lockfiles for unexpected transitive dependencies, and consider enabling trusted publishing for your own packages.
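As a companion to `npm audit`, here is a small helper of our own (not an npm feature) that lists packages in a lockfile that are not direct dependencies, so you can eyeball unexpected transitive entries like `plain-crypto-js`. It assumes the lockfile v2/v3 layout, where a `packages` map keys entries by `node_modules/...` path.

```python
import json

def transitive_packages(lockfile_path: str) -> list[str]:
    """List packages in a package-lock.json that are not direct deps."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    packages = lock.get("packages", {})
    root = packages.get("", {})
    direct = set(root.get("dependencies", {})) | set(root.get("devDependencies", {}))
    names = set()
    for key in packages:
        if key.startswith("node_modules/"):
            # Nested entries look like node_modules/a/node_modules/b;
            # the last segment is the package name.
            names.add(key.split("node_modules/")[-1])
    return sorted(names - direct)
```

Anything in the output that you don't recognize is worth a closer look in the registry.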
Anthropic: Repo Takedowns, Claude Code Auto Mode, and Government Showdowns
Anthropic had a chaotic few days across multiple fronts.
GitHub repo takedowns gone wrong: Anthropic accidentally took down thousands of GitHub repositories while issuing DMCA notices aimed at leaked copies of its own proprietary source code. The overreach was massive: legitimate repos were caught in the crossfire. GitHub has since restored most of them, but the incident raises serious questions about automated DMCA enforcement at scale and the collateral damage it can cause to the open source community.
Claude Code auto mode: On a more positive note, Claude Code shipped a new “auto mode” with a classifier-based permission system. Instead of prompting you for every potentially destructive action, auto mode uses a trained classifier to decide which operations are safe to execute automatically and which still need human approval. This is a meaningful step toward making AI coding assistants genuinely autonomous without sacrificing safety.
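To illustrate the shape of classifier-gated autonomy, here is a toy sketch of our own (not Claude Code's implementation): each proposed action gets a risk decision, and only low-risk actions run without human approval. We use substring matching as a stand-in for the trained classifier.

```python
from dataclasses import dataclass

# Hypothetical markers; a real system scores actions with a model,
# not a static denylist.
DESTRUCTIVE_MARKERS = ("rm -rf", "git push --force", "DROP TABLE", "chmod 777")

@dataclass
class Decision:
    action: str
    auto_approved: bool

def classify(action: str) -> Decision:
    """Stand-in for a trained safety classifier over shell actions."""
    risky = any(marker in action for marker in DESTRUCTIVE_MARKERS)
    return Decision(action=action, auto_approved=not risky)
```

The interesting design question is exactly where this threshold sits: too permissive and you lose the safety story, too strict and you're back to approving every command by hand.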
Government contract battle: Anthropic filed an injunction against the Trump administration over a Defense Department ban preventing the company from working with federal agencies. The case could set a major precedent for how AI companies interact with government contracts.
HuggingFace Drops TRL v1.0, Granite 4.0, and More
HuggingFace had a banner day with multiple significant releases:
TRL v1.0: The Transformer Reinforcement Learning library hit its first major release. TRL is the go-to library for post-training — think RLHF, DPO, PPO, and all the alignment techniques that turn a base model into a useful product. The v1.0 milestone means stable APIs, comprehensive documentation, and production readiness. If you’re fine-tuning LLMs, this is your toolkit.
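To ground one of those acronyms: DPO trains directly on preference pairs, pushing the policy's log-probability of the chosen response above the rejected one, relative to a frozen reference model. A minimal per-example version of the loss, written from the published formula (this is our own sketch, not TRL code):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example Direct Preference Optimization loss.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the trained policy and the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)) == log(1 + e^(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference, the loss sits at log 2; improving the chosen response's likelihood (or suppressing the rejected one) drives it down.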
IBM Granite 4.0 3B Vision: IBM released a 3-billion-parameter vision-language model under the Granite family. It’s small enough to run on consumer hardware but punches above its weight class on multimodal benchmarks. Great for anyone building vision-capable apps without the budget for massive models.
Falcon Perception: A new model focused on visual understanding and spatial reasoning, pushing the boundaries of what open-weight models can do on perception tasks.
HuggingFace Storage Buckets: A new offering for managing large datasets and model artifacts with better versioning and access controls. Think of it as purpose-built cloud storage for ML workflows.
Simon Willison’s Corner: LLM 0.30, Datasette Plugins, and Supply Chain Vigilance
Simon Willison continues to be one of the most reliable voices in the AI developer space. Recent highlights:
LLM 0.30: The llm CLI tool hit version 0.30 with plugin hooks that let you extend it with custom commands, output formatters, and model providers. The plugin system is clean and Pythonic — write a hook function, register it, and llm picks it up automatically. This turns llm from a simple CLI wrapper into a composable framework for LLM workflows.
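The register-and-dispatch pattern behind hook-based plugins is worth seeing in miniature. The following is a dependency-free toy of that pattern, inspired by (but deliberately not reproducing) llm's pluggy-based plugin API:

```python
# Toy hook registry: "plugins" register named commands, the host
# discovers and dispatches them. Not llm's actual API.
_COMMANDS: dict = {}

def hookimpl(func):
    """Decorator a plugin uses to register a command under its name."""
    _COMMANDS[func.__name__] = func
    return func

@hookimpl
def shout(text: str) -> str:
    """Example plugin-provided command."""
    return text.upper() + "!"

def dispatch(name: str, *args):
    """Host-side lookup and invocation of a registered command."""
    if name not in _COMMANDS:
        raise KeyError(f"unknown command: {name}")
    return _COMMANDS[name](*args)
```

The real thing adds discovery via entry points so installed packages can contribute hooks without the host importing them explicitly; the core idea is the same.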
Datasette-LLM plugins: New plugins connecting the Datasette data exploration tool with LLM capabilities, letting you run AI-powered queries and analysis directly against your SQLite databases.
Supply chain watchdogging: Simon’s coverage of the plain-crypto-js / Axios attack was one of the most technically detailed writeups available, walking through exactly how the malware was injected, what it targeted, and how to detect it. His blog remains essential reading for anyone building with or on AI tools.
OpenAI’s $122B Round and Sora Shutdown
OpenAI officially closed its massive $122 billion funding round, with investments from Amazon, Nvidia, SoftBank, and Microsoft, plus $3 billion from retail investors. The company reports over 900 million weekly ChatGPT users.
Less positive: Sora, OpenAI’s video generation model, was shut down amid concerns about misuse and content authenticity. No timeline for its return has been announced.
Salesforce Reinvents Slack with AI
Salesforce unveiled 30 new AI-driven features for Slack in its biggest overhaul yet. The focus is turning Slack from a messaging platform into an AI-powered workspace — automated thread summaries, AI-generated action items, intelligent surfacing of relevant conversations, and deep integration with Salesforce’s Einstein AI platform. If your team lives in Slack, the AI features could genuinely change how you interact with your work communications.
Quick Hits
- Google TurboQuant: A new memory compression algorithm for LLMs that could dramatically reduce inference costs. Early benchmarks are promising.
- Microsoft Copilot Cowork now integrates with Anthropic’s Claude, plus a new Researcher agent and AI-powered code Critique feature.
- Apple Intelligence accidentally went live in China ahead of its official launch. Apple is also building Siri Extensions, essentially an “AI App Store” for third-party capabilities.
- Runway launched a $10M fund and Builders program for AI creative tool startups.
- Rebellions raised $400M for AI chip development.
- Mistral AI secured $830M in debt financing.
- Nothing (the phone company) announced AI-powered glasses.
- KPMG released an AI agent playbook for enterprise adoption.
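On the TurboQuant item above: no technical details are public in what we've seen, but the general idea behind LLM memory compression is easy to show. Here is a generic symmetric int8 quantization round-trip, purely as an illustration of the family of techniques, not TurboQuant itself:

```python
# Symmetric int8 quantization: store weights as small integers plus one
# float scale per tensor, trading a little precision for ~4x less memory
# versus float32.
def quantize(values: list[float]) -> tuple[list[int], float]:
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [x * scale for x in q]
```

The reconstruction error per value is bounded by the scale, which is why quantization schemes obsess over how the scale (or finer-grained per-block scales) is chosen.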
Tool & Release Radar
| What | Details |
|---|---|
| GitHub Copilot /fleet | Parallel multi-agent CLI dispatching |
| HuggingFace TRL v1.0 | Post-training library (RLHF, DPO, PPO) |
| LLM 0.30 | Plugin hooks for the llm CLI tool |
| IBM Granite 4.0 3B Vision | Open-weight vision-language model |
| Falcon Perception | Visual understanding and spatial reasoning |
| Claude Code auto mode | Classifier-based permission system |
| GitHub Trusted Publishing | OIDC-based, no long-lived tokens for npm/PyPI |
| HuggingFace Storage Buckets | ML artifact storage with versioning |
This digest is published daily at 5pm AEST. Got a tip or a tool we should cover? Drop it in the comments. Tomorrow’s edition will cover developments from April 3, 2026.