
TanStack’s npm Compromise Is a GitHub Actions Wake-Up Call: 5 CI Fixes to Copy Right Now

TanStack’s May 11 npm compromise was not just another bad package story. It exposed a very real GitHub Actions trust problem, and the follow-up is full of fixes worth stealing.

Tags: tanstack, github-actions, npm-security, supply-chain, react-router

TanStack just gave every JavaScript team a free CI security review

On May 11, 2026, TanStack published a blunt postmortem for its npm supply-chain compromise. A day later, the team followed up with a second post, Hardening TanStack After the npm Compromise, which is the one I think more teams need to read.

Because this was not the usual story.

Nobody got tricked into pasting a token into a fake login page. No maintainer laptop needed to get owned first. TanStack says the attacker chained together pull_request_target, shared GitHub Actions cache state, and runtime OIDC token extraction from the runner process. Then the attacker used that path to publish malicious versions across 42 @tanstack/* packages. The GitHub advisory lists the affected packages and patched versions.

That is ugly. It is also useful, because the failure mode is painfully concrete.

If your team ships packages, runs release jobs in GitHub Actions, or lets outside contributors open PRs, this is not a TanStack-only story. It is a workflow design story.

The part that should make you uncomfortable

The detail I keep coming back to is this: TanStack did several things that security people usually recommend. They used OIDC trusted publishing. They avoided long-lived npm publish tokens. They had 2FA. They had normal PR review on code going into main.

And they still got burned.

That matters because it kills a lazy assumption a lot of us have been carrying around: if we switch to short-lived tokens and stop storing secrets in GitHub, our release pipeline is basically fine.

Not really.

TanStack's own write-up says the problem was the workflow shape. An untrusted PR hit a pull_request_target workflow, that workflow wrote to a shared cache, the release workflow later restored that cache on main, and the runner's temporary publish credential got pulled out of memory at exactly the wrong moment. OIDC helped with auditability. It did not save the pipeline.

That is the lesson.

1. Stop running fork code inside pull_request_target

This is the biggest one.

If you use pull_request_target and then check out PR code or execute anything influenced by PR code, you are mixing trusted context with untrusted input. TanStack's May 12 follow-up says they removed every use of pull_request_target from CI after the incident.

The unsafe shape looks like this:

on:
  pull_request_target:   # runs in the base repo's context, with its permissions

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # checks out fork-controlled code into that privileged context
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
      - run: pnpm install && pnpm test   # executes untrusted install scripts and tests

If you need PR comments, labels, or reports with base-repo permissions, split the work.

Run untrusted code in a normal pull_request job. Then use workflow_run or a separate trusted workflow to consume artifacts and post comments.
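A minimal sketch of that split, assuming two workflow files; the names, artifact, and paths here are illustrative, not taken from TanStack's actual workflows:

```yaml
# untrusted-benchmark.yml — runs fork code with no secrets, read-only permissions
name: untrusted-benchmark
on: pull_request
permissions:
  contents: read
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install && pnpm test
      - uses: actions/upload-artifact@v4
        with:
          name: results
          path: results.json
---
# trusted-comment.yml — separate file; runs in the base repo after the untrusted run
on:
  workflow_run:
    workflows: ["untrusted-benchmark"]   # must match the untrusted workflow's name
    types: [completed]
permissions:
  pull-requests: write                   # only what the comment step needs
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: results
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # validate the artifact as untrusted input, then post the PR comment
```

The key property: the job that can write to the PR never executes fork-controlled code, and the job that executes fork-controlled code can write to nothing.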

It is more annoying. Good. Security usually is.

2. Treat caches as shared state, not free performance

TanStack disabled the pnpm cache in release workflows and removed caches from affected GitHub Actions while they reworked things. That sounds extreme until you read the attack chain.

A lot of teams still talk about caches like they are harmless plumbing. They are not. They are state. Shared state. If an untrusted workflow can write to something a trusted workflow later restores, you have created a bridge whether you meant to or not.

If you really need caching, make it boring and explicit:

- uses: actions/cache/restore@v4   # restore only: nothing is written back when the job ends
  with:
    path: ~/.pnpm-store
    key: ${{ runner.os }}-pnpm-${{ hashFiles('pnpm-lock.yaml') }}

The TanStack team specifically called out moving away from cache behavior that auto-saves on job exit. That is the right instinct. Restore-only is easier to reason about than "something might get written after the job finishes."
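If some jobs genuinely need to write the cache, a hedged sketch is to make saving a separate explicit step and gate it to trusted contexts only. The `if` condition below is an assumption about a typical branch layout, not something TanStack's posts prescribe:

```yaml
- uses: actions/cache/save@v4
  # explicit save, never auto-save on job exit; only trusted pushes to main may write
  if: github.ref == 'refs/heads/main' && github.event_name == 'push'
  with:
    path: ~/.pnpm-store
    key: ${{ runner.os }}-pnpm-${{ hashFiles('pnpm-lock.yaml') }}
```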

Fast CI is nice. Predictable trust boundaries are nicer.

3. Put id-token: write on a very short leash

One of the nastier parts of the postmortem is that the attacker did not need a stored npm token. They waited for the release workflow to mint a fresh OIDC token, then used it.

So yes, keep OIDC trusted publishing. I still think it is better than long-lived tokens sitting around forever. But stop treating it like magic.

If a job can run attacker-influenced code and also has id-token: write, you should assume that job is one bad edge case away from becoming a publishing surface.

Keep publish permissions isolated:

permissions:
  contents: read

jobs:
  build:
    permissions:
      contents: read

  publish:
    needs: build
    permissions:
      contents: read
      id-token: write

Even then, do not let the publish job restore state from places untrusted jobs can poison.
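One extra guardrail worth considering on top of this (my suggestion, not something TanStack's write-ups mandate): put the publish job behind a GitHub deployment environment with required reviewers, so the OIDC token can only be minted after a human approves the run. The environment name here is illustrative:

```yaml
  publish:
    needs: build
    environment: npm-publish   # configure required reviewers on this environment
    permissions:
      contents: read
      id-token: write
```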

4. Pin actions and lint workflow files like real code

TanStack pinned every action to a commit SHA after the incident. They also said they are adding zizmor as a required PR check and considering tighter ownership rules for .github.

That all sounds correct to me.

Workflow files are not boring config. They are privileged code with a YAML skin.

Steal these rules:

  • Pin third-party actions to commit SHAs, not floating tags.

  • Put CODEOWNERS on .github/workflows.

  • Run a workflow linter or static analyzer on every PR.

  • Review CI changes like application security changes, because that is what they are.

A lot of orgs still protect src/ more carefully than .github/. That is backwards.
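In practice, pinning looks like this; the SHA below is a placeholder, so resolve the real commit for the tag you trust before copying:

```yaml
# Pin to a full 40-character commit SHA; keep the tag as a comment for humans.
# <full-commit-sha> is a placeholder — look up the actual SHA of the release you audited.
- uses: actions/checkout@<full-commit-sha>  # v4
```

Floating tags like `@v4` can be moved to point at new code; a commit SHA cannot, which is the whole point.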

5. Patch this like compromise response, not dependency hygiene

If you directly use affected TanStack Router packages, do not treat this as "bump some versions when Renovate gets around to it." The advisory lists patched releases such as @tanstack/react-router 1.169.9, @tanstack/router-core 1.169.9, and @tanstack/history 1.161.13.

For example:

pnpm up @tanstack/react-router@1.169.9 @tanstack/router-core@1.169.9 @tanstack/history@1.161.13

But version bumps are the easy part.

TanStack's postmortem says anyone who installed an affected version on May 11, 2026 should treat the install host as potentially compromised and rotate reachable AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials. That is not normal Patch Tuesday advice. That is incident response advice.

Also worth noting: the postmortem says @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, and a few others were confirmed clean. So do not turn this into vague panic about all of TanStack. Check the package list, then act precisely.

My takeaway

The most useful thing TanStack published this week was not the scary part. The scary part was obvious.

The useful part was the admission that modern supply-chain defenses can all be present and you can still lose if your CI trust boundaries are sloppy.

That is the piece a lot of teams need to hear.

If I were auditing a JavaScript org this weekend, I would start with three questions:

  • Do any pull_request_target workflows execute fork-controlled code?

  • Can untrusted jobs write cache state that trusted jobs later restore?

  • Which jobs actually need id-token: write, and which ones just have it because nobody tightened the file?
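The first question, at least, can be checked mechanically. A rough sketch, assuming you run it from a repo root with workflows in the default location:

```shell
# List workflow files that mention pull_request_target; every hit needs a manual review.
grep -rln 'pull_request_target' .github/workflows/ 2>/dev/null \
  || echo "no pull_request_target usage found"
```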

If the answers are fuzzy, TanStack just handed you your next security sprint.