The Scariest Line of Code in JavaScript
Why npm install has become a security gamble — and why AI is making it obsolete.
Note: While this essay was being finalized, axios — the library I use as a running example throughout, with more than 100 million weekly downloads — was actively compromised. Attackers hijacked a lead maintainer’s credentials and published two poisoned versions carrying a cross-platform remote-access trojan. Socket, the company founded by Feross Aboukhadijeh, whose story opens this essay, flagged the malware within minutes. — March 30, 2026
In 2010, Feross Aboukhadijeh became briefly famous for YouTube Instant, a dorm-room hack that captured the early internet’s promise: a small developer, an open platform, and near-instant scale. Fifteen years later he runs Socket, a supply-chain security company, and the ecosystem that once rewarded that kind of improvisation now requires dedicated security infrastructure to keep from collapsing under its own trust assumptions.
That arc does not explain everything about modern software. But it captures something essential about the world web developers built over the past two decades. The same ecosystem that made it cheap and easy to assemble software from open-source parts also made it normal to trust vast dependency trees that almost nobody actually understood. For a long time, that bargain looked not only reasonable, but enlightened. Open source meant leverage. Package managers meant speed. Reuse meant efficiency.
That logic is now starting to break.
This is not an obituary for open source. Open source is fine. What is getting harder to defend is the reflexive habit of importing massive dependency trees for routine problems — what you might call dependency maximalism. AI is changing the economics on both sides of the ledger at once. It is making dependency reliance riskier, and local replacement cheaper. For a growing class of software teams, the old defaults no longer hold.
To see what is changing, it helps to remember the bargain modern web development made with itself sometime around 2012. Writing software from scratch is expensive. Maintaining it is tedious. A developer’s time is usually better spent composing solutions from prebuilt parts than reinventing plumbing. Package managers — npm, yarn, pnpm — made that philosophy frictionless. You needed an HTTP client? npm install axios. A utility belt? npm install lodash. A way to check if a number is even? Yes, there was a package for that.
The economics were real. A solo developer or a three-person startup could not afford to write and maintain a web framework, bundler, cookie parser, WebSocket abstraction, and a dozen other solved problems. Open source provided all of it for free — or rather, for a price that remained invisible until it didn’t. You inherited whatever those packages depended on, and whatever those depended on, recursively, all the way down. A modest project might pull in hundreds of transitive dependencies. You didn’t audit them. Nobody did. The working assumption was that someone, somewhere, was paying attention.
For a long time, that math held. The convenience of the ecosystem outweighed the risk, and the risk felt mostly theoretical. When something broke — left-pad here, event-stream there — the community treated it as an anomaly, patched the hole, and went back to installing packages.
AI is changing that math. Not all at once, and not uniformly. But in ways that are getting harder to ignore.
Start with the trust problem. Open source has always run on a paradox: critical infrastructure maintained by people working nights and weekends, often for free. AI did not create that fragility, but it has made it measurably worse.
Before AI coding tools, contributing to a project required real effort — reading the codebase, understanding its conventions, writing a patch, testing it. That effort acted as a filter. Most contributions came from people who cared enough to do the work. AI dissolved the filter. Now anyone can point a model at an issue, generate a plausible-looking fix, and submit it in minutes. The contribution looks structurally sound. The CI passes. But the contributor may not understand what the code does, why the project is designed the way it is, or what edge cases the patch misses. One estimate suggests it takes a reviewer twelve times longer to evaluate an AI-generated pull request than it took the AI to produce it.
The consequences are visible. Mitchell Hashimoto, creator of the terminal emulator Ghostty, banned AI-generated contributions entirely. Steve Ruiz, creator of tldraw, shut down external pull requests altogether. The Curl project reported that twenty percent of security-vulnerability reports submitted in 2025 were AI hallucinations: structurally plausible descriptions referencing code that did not exist. In a single week in July 2025, Curl received eight such reports; each engaged three to four security team members for up to three hours. Daniel Stenberg, Curl’s maintainer, said his team was being “effectively DDoSed.”

GitHub itself shipped a feature in early 2026 allowing maintainers to disable pull requests entirely — a kill switch for contributions, framed as a configuration option. The company compared the situation to Usenet’s “Eternal September,” the moment in 1993 when a flood of new users overwhelmed the norms that had kept online communities functional. The analogy is apt: the cost of creating a contribution has collapsed, while the cost of reviewing one has not.
That governance strain would be serious enough on its own. The darker coincidence is that the same dependencies maintainers can barely keep up with are also becoming more dangerous to depend on.
The September 2025 npm compromise showed what supply-chain risk looks like at scale. Attackers registered a convincing phishing domain — npmjs.help — and sent targeted emails to package maintainers claiming urgent two-factor-authentication updates were required. Josh Junon, who maintained critical infrastructure packages, later described the moment with disarming honesty: he had had a long week and a panicky morning, and was just trying to knock something off his to-do list.
That single lapse cascaded. What began with eighteen confirmed compromised packages expanded to more than two hundred, with a combined download count exceeding 2.6 billion per week. The malware hooked into browser APIs, intercepted network calls, and swapped cryptocurrency wallet addresses using Levenshtein-distance matching across six blockchain platforms. One payload, dubbed Shai-Hulud, functioned as a self-replicating worm that spread through the compromised maintainer’s organizational footprint.
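The wallet-swapping mechanic is worth pausing on, because it shows how little code an effective payload needs. The sketch below is an illustrative reconstruction of the technique as described, not the actual malware: given the address a victim is about to use, an edit-distance comparison selects the attacker-controlled address a human is least likely to notice.

```javascript
// Classic dynamic-programming edit distance between two strings.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Given a legitimate address and a pool of attacker-controlled addresses,
// pick the lookalike a human reader is least likely to notice.
function closestLookalike(realAddress, attackerPool) {
  return attackerPool.reduce((best, candidate) =>
    levenshtein(candidate, realAddress) < levenshtein(best, realAddress)
      ? candidate
      : best
  );
}
```

A few dozen lines of standard dynamic programming, repurposed. Nothing about the technique is exotic; what made it dangerous was where it ran.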
The nx package breach a month earlier pointed to something newer: what Socket’s researchers called “AI-assisted supply-chain abuse.” The injected malware detected LLM-based coding tools already installed on the developer’s machine and used them to scan for secrets, expressing its intent through natural-language prompts rather than conventional code. Within seventy-two hours, one victim organization saw an attacker escalate from a compromised npm install to full AWS administrator access. Detection systems built to flag suspicious API calls were blind to it. The attack looked like an AI assistant doing its job.
These are dramatic incidents, and they should be interpreted carefully. They do not prove that supply-chain attacks are routine for typical projects. What they do show is that the tail risk of dependency reliance is higher than most developers intuitively appreciate — and that AI is expanding the threat surface in ways that the old faith in “many eyeballs” cannot fully answer. When AWS, Google, Microsoft, Anthropic, and OpenAI jointly announce $12.5 million in funding to help open-source projects cope with AI-accelerated security threats, as they did in March 2026, the signal is hard to dismiss.
If this were only a story about rising risk, it would end with better defenses: patch faster, audit more, fund maintainers. But something else is happening at the same time, and it changes the shape of the problem. AI is collapsing the cost of the alternative.
For twenty years, the reason you installed Axios instead of wrapping fetch yourself was not that Axios was irreplaceable. It was that writing your own version, testing it, documenting it, and maintaining it across edge cases cost more than accepting the dependency. Axios was a convenience tax you paid in trust rather than engineering hours. The same was true for Express, cookie parsers, WebSocket abstractions, and half the middleware in a typical Node.js application.
I chose Axios as my running example because it is the canonical case of a dependency that is useful but not irreplaceable — a friendlier API on top of capabilities the platform already provides. I did not expect the example to validate itself while I was still writing.
On March 30, 2026, attackers compromised a lead Axios maintainer’s npm credentials, swapped the account email to an anonymous ProtonMail address, and manually published two poisoned versions: axios@1.14.1 and axios@0.30.4. The diff was surgical. Every dependency was identical to the prior clean version except one addition: plain-crypto-js@4.2.1, a package that had not existed eighteen hours earlier. Its sole purpose was to execute a postinstall script deploying a cross-platform remote-access trojan — separate payloads for macOS, Windows, and Linux — and then erase itself to destroy forensic evidence. Both release branches were hit within thirty-nine minutes. StepSecurity called it one of the most operationally sophisticated supply-chain attacks ever documented against a top-ten npm package. Socket flagged the malware within six minutes.
Axios has more than 100 million weekly downloads. Any developer who pulled the latest version during that window potentially handed an attacker a remote shell on their machine.
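Because the poisoned release executed through an install-time hook, one widely used mitigation is worth noting: npm can be told to refuse lifecycle scripts by default, via the ignore-scripts setting in a project’s .npmrc (or the --ignore-scripts flag on individual installs).

```ini
# .npmrc (project root) — do not run preinstall/postinstall and other
# lifecycle scripts during npm install. Packages that genuinely need a
# build step (native addons, for example) must then be handled explicitly.
ignore-scripts=true
```

It is a blunt instrument: it would have blocked an attack that depends on a postinstall hook, but it does nothing about malicious code that runs when the package is actually imported. Which is why the deeper question is not how to install dependencies more safely, but whether to install them at all.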
Now consider the alternative. AI has substantially reduced the cost of writing a narrow, purpose-built HTTP wrapper — not a general-purpose library that handles every edge case for every user, but a tailored implementation that handles your cases, for your codebase, with your conventions. It can scaffold the tests and write the documentation. And because the resulting code is yours — scoped to your needs, free of transitive dependencies, legible to your team — maintaining it is often simpler than tracking upstream changes in a package you do not control. If you had replaced Axios with a thin internal wrapper around native fetch six months ago, today’s compromise would have been someone else’s emergency.
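What does a thin internal wrapper around native fetch actually look like? Something like the following, which is a hypothetical sketch rather than a drop-in replacement: the names, defaults, and JSON-only assumption are illustrative, and a real version would encode your own team’s conventions. It relies only on fetch and AbortController, both built into Node 18+ and every modern browser.

```javascript
// A minimal internal HTTP helper over the platform's built-in fetch.
// Illustrative sketch: names and defaults are hypothetical, not a
// general-purpose library.
class HttpError extends Error {
  constructor(response) {
    super(`HTTP ${response.status} for ${response.url}`);
    this.status = response.status;
    this.response = response;
  }
}

async function request(url, { method = "GET", headers = {}, body, timeoutMs = 10_000 } = {}) {
  // Enforce a timeout, something bare fetch does not do on its own.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, {
      method,
      headers: body ? { "content-type": "application/json", ...headers } : headers,
      body: body ? JSON.stringify(body) : undefined,
      signal: controller.signal,
    });
    if (!response.ok) throw new HttpError(response);
    // Assumes JSON APIs throughout; adjust to your codebase's needs.
    return response.json();
  } finally {
    clearTimeout(timer);
  }
}

const http = {
  get: (url, opts) => request(url, { ...opts, method: "GET" }),
  post: (url, body, opts) => request(url, { ...opts, method: "POST", body }),
};
```

Around thirty lines, zero transitive dependencies, and every behavior is one your team chose. The point is not that this covers everything a general-purpose client does; it is that it covers what your codebase actually uses.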
I should disclose that I am not making this argument as a detached observer. We have made this bet ourselves, and I have a participant’s bias.
Our backend server tier now runs on something like ten dependencies. The ones that remain are substantial — AI SDKs, complex infrastructure components that would be irrational to rewrite. But the routine layers — our web framework, socket server, request handling, middleware stack — are all in-house, generated with heavy AI assistance, reviewed adversarially, and tested against our actual usage patterns.
The results have been good, though it is worth being honest about what that means and where the limits are. The codebase got smaller and builds got faster. Every abstraction now exists for a reason someone on the team can explain. There are no Dependabot alerts for packages three levels deep that nobody ever consciously chose. The first time something broke in our custom middleware and I found myself reading code I recognized — code a colleague had written last month, not code published by a stranger in 2019 — the difference was visceral. When something breaks, we own it entirely.
But the migration also took real engineering judgment — not just prompting a model and shipping the output. We had to decide what to replace, what to keep, and what edge cases mattered. Some of the internal code needed multiple adversarial revision cycles before it was production-ready. We discovered bugs in our own abstractions that the packages we replaced had long since fixed; in a few cases, we found ourselves reimplementing solutions to problems we did not realize existed until our code hit them in production.
We are also a team with strong opinions about software architecture and a high tolerance for bespoke infrastructure. A team with less experience, or one where rapid feature delivery matters more than stack control, might reasonably reach a different conclusion. I do not think this strategy is right for everyone. I think it is available to more teams than it was two years ago, and that the calculation is worth revisiting.
None of this applies uniformly, and the argument falls apart if you think it does. There is a taxonomy of dependencies, and the dividing line is not “good versus bad” or “popular versus obscure.” It is this: does the dependency solve a problem that is genuinely hard, or one that is merely tedious?
Hard problems still benefit enormously from shared solutions. Cryptography libraries, where a subtle implementation error can compromise everything. Database engines, where decades of optimization cannot be replicated in a sprint. Compilers, where correctness across an enormous surface area is non-negotiable. Major frameworks with deep ecosystem value — plugin systems, community extensions, integration patterns no single team could reproduce. For these, the trust cost is justified by genuine complexity. If anything, you should be more wary of replacing them with AI-generated code, because the failure modes are harder to detect and more consequential.
Tedious problems are a different story. An HTTP client that wraps a built-in API with a slightly nicer interface. A cookie parser. A middleware router. A utility library full of functions that modern JavaScript already provides natively. For a growing number of teams, the old calculation — import the package, accept the dependency tree, move on — deserves to be reexamined.
This logic also extends unevenly. It is most persuasive in JavaScript and TypeScript ecosystems, where package counts are extreme and dependency trees run deep. It is less obviously applicable in ecosystems like Rust or Go, where dependency culture is more conservative and standard libraries more comprehensive. And it applies far more to application-layer infrastructure than to anything involving cryptography, protocol handling, or correctness-critical computation. The claim is not that dependencies are bad. The claim is that for routine, well-understood abstractions, the default of importing rather than owning deserves scrutiny it did not once receive.
Large technology companies have operated on a version of this logic for years. Facebook, Google, Amazon — they built internal libraries for almost everything, not only for security, but for control, performance, and the simple fact that at their scale, external dependencies created coordination costs that outweighed convenience. They could afford to: they kept dozens of engineers whose full-time job was maintaining internal infrastructure.
AI is compressing some of that capability. Not for everything — nobody is generating an internal database engine with a three-person team — but for the routine application-layer plumbing that makes up most of a web server’s dependency list, the gap between what a large platform team could build and what a small team with AI assistance can produce has narrowed dramatically.
There is a subtler shift underneath. When the primary consumer of an internal abstraction is an AI coding assistant rather than a junior developer, the design constraints change. The code can be more explicit, less magical, closer to the metal. Documentation matters more as context for the model than as onboarding material for humans. You can optimize for your specific workload rather than accepting the performance compromises a general-purpose library makes to serve everyone adequately.
Ownership, which used to be a luxury reserved for teams with hundreds of engineers, is increasingly within reach of teams with three.
Of course, replacing a mature, community-reviewed library with AI-generated internal code may eliminate transitive risk only to introduce bespoke risk. Your homegrown HTTP wrapper might have its own vulnerabilities — ones nobody else will find, because nobody else uses it. This is the best argument against everything I have written, and there are clear cases where the strategy should not be attempted.
Anything involving cryptographic operations should almost never be replaced with internal code. Security-sensitive protocol implementations — TLS, OAuth, JWT validation — have failure modes that are subtle, non-obvious, and devastating when missed. Database drivers, which must handle connection pooling, transaction isolation, and error recovery across dozens of edge cases, are poor candidates. And any library whose value comes primarily from ecosystem compatibility — React’s component model, for instance, or a database ORM with a rich plugin ecosystem — cannot be meaningfully replaced without losing the very thing that makes it useful.
For narrow, well-scoped abstractions — an HTTP wrapper, a middleware router, application-specific glue code — the calculus is different. A few hundred lines of internal code that your team wrote, reviewed, and tested present a categorically smaller attack surface than a dependency tree spanning dozens of packages maintained by strangers. You can audit the entire thing in an afternoon. You can run adversarial review cycles with AI-assisted security tools as one layer among several.
The point is not that internal code is invulnerable. It is that you can see the risk whole — scope it, own it, manage it — in a way that deep dependency graphs make structurally impossible.
What is changing, for a growing number of teams, is the default. The assumption that importing is always cheaper than owning held for twenty years and built an extraordinary ecosystem. But the conditions that justified it — expensive local development, cheap trust, low attacker sophistication — are shifting. AI raises the cost of dependency reliance and lowers the cost of local replacement. Those two pressures do not have to be catastrophic individually to be transformative in combination.
I do not know whether this becomes the dominant pattern in software development or remains a strategy adopted primarily by security-conscious, engineering-strong teams. I suspect the answer is somewhere in between: not universal, but far more common than today, and increasingly normal rather than contrarian.
The arc from YouTube Instant to supply-chain security does not tell you everything about where software is headed. But it tells you something important about the bargain web development struck, and the terms under which it is now being renegotiated. The magic was always built on trust. The question now is how much of it was warranted.