This essay argues that the “AI takeover” most people imagined (dramatic, coercive, Terminator-style) isn’t what’s happening in practice. Instead, we’re making a series of tiny, convenience-driven choices: pasting in stack traces, letting a model write the paragraph, taking the instant fix instead of building the mental model.
The key distinction the author draws is that previous tools mostly offloaded information (calculators, search, GPS), while modern LLMs can offload reasoning itself. That can be great when used deliberately (generating boilerplate, scaffolding a project, exploring alternatives), but dangerous when it becomes an autopilot that quietly removes the “friction” that trains judgment and taste. The piece is a good prompt to audit your own workflow: which steps are mechanical toil worth automating, and which steps are the point precisely because they build durable understanding?