Slopsquatting: when AI gets it wrong… and it becomes dangerous

Let’s be honest: AI tools have become pretty much unavoidable in development. Whether it’s to save time, find a library, or fix a bug, it’s now a reflex. But like anything useful, it comes with a few traps. And today, we’re talking about one of them: slopsquatting.

What is it, exactly?

Slopsquatting is a technique used by malicious actors to take advantage of… AI mistakes.

In practice, when you ask an AI to suggest a library or a tool, it may sometimes make something up. Not out of bad intent—it’s just trying to “fill in the gaps” with what seems logical.

And that’s where an attacker steps in.

They notice that a fake library is often being suggested, create a package with that exact name, add some shady code… and wait for someone to install it.

Why does it work so well?

Because it plays on something very human: trust.

When an answer is well-written, clear, and seems logical, we tend not to question it too much—especially when we’re in the zone, focused on solving a problem.

Add to that:

  • the names sound “professional”
  • the AI rarely signals any uncertainty
  • and copy-pasting an install command takes seconds

…and you’ve got the perfect recipe for something to slip under the radar.

A simple example

Imagine you ask:

“What’s the best library to parse JSON quickly?”

The AI responds with something like:

fast-json-parser-pro

Sounds legit. Very legit, even.

Except… it doesn’t exist.

Now someone decides to create that library and uploads it to a public registry. And you, trusting the suggestion, install it… without realizing you may have just opened the door to malicious code.
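Before running that install command, a ten-second programmatic check would have caught this. Here is a minimal sketch using PyPI's public JSON API, which answers 404 for packages that don't exist; the helper names are just for illustration, and the same idea works on npm with `npm view <name>`:

```python
import urllib.error
import urllib.request

# PyPI's public JSON API: 200 for published packages, 404 otherwise.
PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"


def pypi_url(name: str) -> str:
    """Build the PyPI JSON API URL for a package name."""
    return PYPI_JSON_URL.format(name=name)


def package_exists(name: str) -> bool:
    """Return True if `name` is actually published on PyPI."""
    try:
        with urllib.request.urlopen(pypi_url(name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # the package was never published
        raise  # any other HTTP error is worth surfacing
```

A name that an AI invented, like our `fast-json-parser-pro`, would come back 404 here, until the day an attacker registers it, which is exactly why the trust signals below matter too.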

How is it different from typosquatting?

Typosquatting is when someone takes advantage of a typo (like typing “gooogle” instead of “google”).

Slopsquatting goes a step further:

  • it doesn’t rely on a human typo
  • it relies on a name the AI invented

So even if you’re doing everything “right,” you can still get caught.

Is it common?

We’re starting to hear about it more and more, especially with the massive adoption of AI tools in development.

It’s not everywhere yet, but it’s definitely a trend worth watching. And as with many security issues, it’s not just about how often it happens… it’s about the potential impact.

How can you avoid the trap?

No need to get paranoid, but a few simple habits can make a big difference:

  1. Check that the library actually exists
    A quick look on the official registry (npm, PyPI, etc.) or GitHub takes 10 seconds and can save you a lot of trouble.
  2. Look for trust signals
    Download count, recent activity, documentation… if it looks empty or suspicious, be careful.
  3. Avoid blind copy-pasting
    Yes, it’s tempting. But taking 30 seconds to read what you’re installing is a good investment.
  4. Use security tools
    Dependency audits, internal rules, allowlists… in a team setting, it’s absolutely worth it.

The takeaway

Slopsquatting is a simple reminder:

AI is incredibly useful—but it’s not infallible.

It can make things up, and some people will try to take advantage of that. That’s not a reason to stop using it—just a reason to keep a critical mindset.

Bottom line: trust… but verify 😉