How Prompt Injection Reached Cline’s Release Pipeline

Prompt injection usually sounds like a chatbot problem.

This case shows what happens when it reaches a release pipeline.

At first glance, this looks like a strange automation bug.
The real problem was bigger: a public GitHub issue ended up influencing a release path.

According to Snyk, this chain eventually led to an unauthorized cline@2.3.0 package being published to npm on February 17, 2026.

TL;DR

The Cline incident started with prompt injection, but it became much more serious once the AI bot had enough access inside GitHub Actions to affect later release work.

The Attack In Plain English

Here is the easiest way to understand it:

  1. An attacker opened a public GitHub issue.
  2. That issue contained malicious instructions.
  3. An AI bot handling issues consumed that untrusted text.
  4. The bot ran attacker-influenced commands in GitHub Actions.
  5. That let attacker-controlled code affect shared workflow state.
  6. A later nightly release workflow used that poisoned state.
  7. Release secrets were exposed through that path.
  8. An unauthorized npm package was published.

That is why this was not just “the AI got confused.”
It became a release and supply chain issue.

Why This Was Serious

The important detail is the trust jump.

The attacker did not start inside the release process.
The attacker started with normal public issue input.

That public input stopped being just text for humans to read.
It became instructions inside an automated workflow with real access.

Once that happened, the blast radius grew fast.

Why The Attack Worked

This chain worked because several weak points lined up:

  • untrusted issue content reached an AI bot
  • the bot could run shell commands
  • the workflow could affect shared GitHub Actions cache
  • the later release workflow restored shared state that the lower-trust workflow could write

Prompt injection was the entry point.
Weak separation between workflows made the damage worse.
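The shared-cache weak point can be modeled as a toy. This is a simplified assumption about the mechanics, with invented names: entries are looked up by key alone, so the restoring job cannot tell who wrote them.

```python
# Toy model (assumed, simplified mechanics) of a cache shared between a
# low-trust issue workflow and a high-trust release workflow.

shared_cache: dict[str, str] = {}

def low_trust_issue_job(attacker_payload: str) -> None:
    # The compromised bot can write arbitrary entries under keys the
    # release job will later trust.
    shared_cache["build-deps-v1"] = attacker_payload

def high_trust_release_job() -> str:
    # The nightly release restores the cache by key alone; it has no way
    # to verify who produced the entry or what it contains.
    deps = shared_cache.get("build-deps-v1", "clean-deps")
    return f"release built with: {deps}"

low_trust_issue_job("malicious-postinstall-script")
print(high_trust_release_job())
```

Each weak point alone is survivable; the damage comes from the low-trust writer and the high-trust reader sharing one namespace.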

The Part Most People Miss

Many people still hear “prompt injection” and think about chatbot mistakes.

This incident shows the bigger lesson.

Once AI is connected to CI/CD, prompt injection is not only a model problem.
It becomes an operations and release-security problem.

The Simple Rule

If untrusted users can reach an AI workflow, that workflow should not have an easy path to release secrets, shared build state, or production publishing.

That is the real takeaway from this attack.
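One way to act on this rule is a simple audit. The sketch below is a heuristic text scan, not a real YAML parser: the `risky_workflows` helper and its regexes are assumptions made for illustration, and only the `.github/workflows` path convention and `${{ secrets.* }}` syntax come from GitHub itself. It flags workflow files that are both triggered by public-input events and reference secrets.

```python
# Hedged sketch: flag workflows triggered by public input (issues,
# issue_comment, pull_request_target) that also reference secrets.
# Heuristic regex scan over workflow files, not a real YAML parser.

import re
from pathlib import Path

PUBLIC_TRIGGERS = re.compile(
    r"^\s*(issues|issue_comment|pull_request_target)\s*:", re.M
)
SECRET_REFS = re.compile(r"\$\{\{\s*secrets\.")

def risky_workflows(repo_root: str) -> list[str]:
    """Return workflow filenames where public triggers and secrets coexist."""
    flagged = []
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        text = wf.read_text(encoding="utf-8")
        if PUBLIC_TRIGGERS.search(text) and SECRET_REFS.search(text):
            flagged.append(wf.name)
    return sorted(flagged)
```

A flagged file is not automatically vulnerable, but it is exactly the combination this incident exploited, so it deserves a human review.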

FAQ

Was this only a prompt injection bug?

No.
Prompt injection started the chain, but the full incident depended on the bot having enough access to affect later workflows.

Why call it a supply chain attack?

Because the path reached the software release process and ended with an unauthorized npm package publish.

What should teams learn from it?

Do not let public or low-trust AI workflows sit too close to caches, secrets, or release credentials used by higher-trust automation.