Axios Supply Chain Attack

David Crush | Apr 1, 2026

What a day for a vulnerability. April 1st. April Fools. Haha. So funny.

By now I’m sure most people are aware of the Axios vulnerability. But let me recap it concisely:

  1. Some hackers (possibly North Korean) stole the publishing credentials for Axios and pushed malicious updates as versions 1.14.1 and 0.30.4 at approximately 7:20 PM EST on March 30th.
  2. The malicious postinstall script ran automatically on npm install and created a backdoor that exfiltrated secrets and credentials to a remote server.
  3. Finally, the script cleaned itself up, so examining node_modules after the fact showed nothing. With no malicious code left behind, it wasn't apparent anything had even happened.

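Worth noting: the vector here was the install-time lifecycle hook, and npm can already refuse to run those entirely. Both of the following are real npm options; the tradeoff is that some packages legitimately need their install scripts (native builds, for example), so this isn't a free win:

```shell
# Skip all lifecycle scripts (including postinstall) for a single install
npm install --ignore-scripts

# Or make it the project default via .npmrc
echo "ignore-scripts=true" >> .npmrc
```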
Suffice it to say, this is probably the most sophisticated attack I've ever seen firsthand. The changes were clearly staged and coordinated ahead of time; this was no accident.

However, this also exposes an uncomfortable truth: trust is a huge part of the current NPM ecosystem. I've been trying to figure out how to prevent this in the future, and it's hard to come up with a foolproof plan. NPM 11 [supports minimumReleaseAge], but that's not really foolproof either. In this situation, if we had set, say, a 24-hour minimum age before install, we wouldn't have been affected. But if we all move to this mechanism, we're just kicking the can further down the road. We still need to solve the core problem: how do we better detect these things in real time?
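For illustration, a 24-hour delay expressed as a config setting might look like the fragment below. I haven't verified the exact npm key name; the syntax and units here are my assumption, modeled on pnpm's equivalent `minimumReleaseAge` setting, which takes minutes:

```ini
# Assumed .npmrc syntax (unverified): refuse to install any version
# published less than 24 hours (1440 minutes) ago
minimum-release-age=1440
```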

It took 3 hours from the original publish until the malicious versions were found and unpublished. But what if it had taken days? minimumReleaseAge is a nice-to-have, but we're still trusting that our dependencies won't turn malicious. To me, this seems like a possible use case for AI: something that can do reasoning on the fly to flag potential vulnerabilities like this one.

For a human, catching this in real time would have been almost impossible. But what about an AI agent? Assuming we could do this with low enough latency, an LLM could recap the changes NPM is about to perform, scrutinize them, and only proceed if nothing seems out of the ordinary. If something looks off, stop and prompt the user for instructions on how to proceed.
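A hypothetical sketch of such a gate, not a real tool: it takes the set of packages an install is about to add (obtained elsewhere, e.g. from a lockfile diff) and applies a couple of mechanical checks before anything runs. The field names (`hasInstallScript`, `publishedMinutesAgo`) are my own invention:

```javascript
// Sketch of a pre-install review gate. The mechanical rules below are the
// kind of signal you'd feed to an LLM (or a human) for a final judgment.
function reviewPlannedInstall(added, { minAgeMinutes = 24 * 60 } = {}) {
  const warnings = [];
  for (const pkg of added) {
    // Install-time lifecycle scripts were the exact vector in this attack.
    if (pkg.hasInstallScript) {
      warnings.push(`${pkg.name}@${pkg.version} runs an install script`);
    }
    // Freshly published versions deserve extra scrutiny.
    if (pkg.publishedMinutesAgo < minAgeMinutes) {
      warnings.push(
        `${pkg.name}@${pkg.version} was published ${pkg.publishedMinutesAgo} minutes ago`
      );
    }
  }
  // This is where the LLM call would go: summarize `added` and `warnings`,
  // ask whether anything seems out of the ordinary, and block the install
  // (prompting the user) if so. The sketch just returns the raw flags.
  return { ok: warnings.length === 0, warnings };
}

// The malicious axios release would have tripped both rules:
const result = reviewPlannedInstall([
  { name: "axios", version: "1.14.1", hasInstallScript: true, publishedMinutesAgo: 45 },
]);
console.log(result.warnings.join("; "));
```

The mechanical flags are cheap; the interesting part is whether an LLM reviewing the summary can catch the cases these simple rules miss.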

This may or may not be a good idea. Maybe one day I’ll dive into this problem more.