The Autodidacts

Exploring the universe from the inside out

Underrated reasons to dislike AI

The big arguments for and against AI have been endlessly discussed, and I don’t feel I have much to add. AGI and existential risk; human obsolescence; power use; cybersecurity; safety + censorship; slop; misinformation. Also, I’m kind of tired of everything being about AI, which is why I have specifically avoided writing about it.

But here is a list of petty grievances with AI that don’t make the cover of WIRED.

  • Basically none of it is actually open source. Lots of the tooling is, but “open weights” is fundamentally different from, and worse than, source-available software, let alone open source. Open-weights models can carry targeted attacks or blatant censorship. They’re a black box: they aren’t safe. And none of the state-of-the-art models release their code + training data.

    • This is not just “big companies are bad”: it’s partly because the training data is huge (and usually pirated copyrighted material, too poorly vetted to want to make public).
  • AI is centralized even though that’s bad architecture. Check what server you’re connecting to when working with local LLMs: it’s invariably Hugging Face. But models are huge, and though there would be legal and technical hurdles, BitTorrent would be a far more efficient way to distribute open-weights models.

  • AI makes Nvidia rich, and I don’t like Nvidia because their Linux support sucks ♥

  • Because it’s so resource intensive, AI is even more of a Matthew effect than technology in general. In the local LLM world, it’s the people who can afford a late-model MacBook with 64–128 GB of RAM and a hefty GPU who get to use the actually good models while maintaining their sovereignty, so the people who are empowered become more empowered. This gets even worse the farther up the chain: the capital requirements mean there are only a handful of frontier AI companies worldwide.

  • AI is fundamentally non-deterministic. At societal and epistemic levels, this is, obviously, disastrous. AI has no conception of truth, only of the probability of having seen something in its training data. Non-determinism can’t be “aligned” away with RLHF: it’s baked in. Tesseract is vastly worse at OCR than AI, but the damage a Tesseract error can cause is bounded; the damage an AI hallucination can cause is practically unbounded. This means some of the practical advantages of AI are offset by how carefully the transcript/code/OCR output must be reviewed whenever truth or meaning matters.

  • AI’s mistakes are less obvious. Surface-level mistakes are a red flag in human output. AI makes fewer surface-level mistakes but more fundamental errors, and because our heuristics are trained on human output, AI seems more trustworthy than it is.

  • AI adds another layer between humans and the world, distancing them from the consequences of their choices. People spend more money when they use a credit card than when they use cash. Cash is an abstraction layer on work. Credit cards are an abstraction layer on an abstraction layer, making it even more convenient to spend money. I worry that this distancing will make (for example) waging war with semi-autonomous weapons, or just trading on the stock market, feel even more like a video game than it currently does, and buffer the operator from the real-world consequences of their operations. Anyone who has read Ender’s Game knows that this kind of gamification can end badly.

  • AI feels grievously inefficient. It took 29.29 minutes to OCR the 4-page handwritten draft of this essay with Qwen3-VL:8B. I didn’t measure the exact power draw, but it was likely at least 45 watts, and could have been up to the rated 110-watt TDP. A human brain is estimated to draw ~20 watts, and could do the task in a fraction of the time. This feels...wasteful. And this is just inference! (To be fair, humans require training, too.)

    • Probably, much of this will be optimized away. And part of it is inherent to general-purpose systems, which are, by definition, not optimized for any given task. After all, many operations that were once too inefficient for widespread use, like full-disk encryption and VPNs, are now routine.
  • Without knowing what consciousness is, we won’t know whether AI has become conscious. We also don’t know whether it matters whether AI is conscious. Which is scarier: conscious AGI, or AGI without consciousness?

  • AI makes me feel dumb.
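To make the non-determinism point concrete: a minimal sketch of temperature sampling, the standard way LLMs pick their next token (the vocabulary and logits here are made up for illustration). The model only has relative likelihoods; at any temperature above zero, the same prompt can yield different outputs on different runs.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample an index from a softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# Toy vocabulary and scores -- purely hypothetical numbers.
vocab = ["cat", "dog", "truth"]
logits = [2.0, 1.9, 0.1]

# Twenty draws at temperature 1.0 will almost always yield
# more than one distinct token: same input, different outputs.
draws = {vocab[sample_token(logits)] for _ in range(20)}
print(draws)
```

Greedy decoding (always taking the argmax) is deterministic, but the underlying distribution still encodes only likelihood, never truth, which is the deeper half of the grievance.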
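And the back-of-envelope math behind the inefficiency grievance, using the figures from the OCR anecdote (the 45–110 W range and the ~20 W brain estimate are assumptions, not measurements):

```python
# Energy used for the 29.29-minute OCR run, at the assumed draw range.
minutes = 29.29
hours = minutes / 60

gpu_low_wh = 45 * hours    # optimistic draw: ~22 Wh
gpu_high_wh = 110 * hours  # rated TDP: ~54 Wh

# A brain at ~20 W running for the same wall-clock time -- an overestimate,
# since a human would finish the transcription in a fraction of that time.
brain_wh = 20 * hours      # ~9.8 Wh

print(f"GPU: {gpu_low_wh:.1f}-{gpu_high_wh:.1f} Wh, brain: {brain_wh:.1f} Wh")
```

So even under the most charitable comparison, the model burns roughly 2–5× the energy of a brain that isn’t even working the whole time.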

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.
