Reflections on AI at the End of 2025

Hacker News · December 20, 2025 · Original link

This year-end post is a set of field notes on what changed in how people talk about LLMs (and what might change next). One theme is that “stochastic parrots” has largely stopped being a useful model for explaining what modern LLMs can do in practice: even without claiming they “understand,” it’s increasingly hard to argue they have no internal representations of prompts and outputs.

It also frames chain-of-thought as more than a presentation trick: a mechanism that can act like an internal search process, and (when paired with reinforcement learning) a learned strategy for steering token-by-token state toward useful solutions. That perspective is paired with a claim that “we’re out of tokens to scale” is no longer a hard ceiling, because verifiable-reward RL can keep pushing on tasks with crisp objective functions (like code performance tuning).
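To make the "crisp objective function" idea concrete, here is a minimal sketch of what a verifiable reward for code performance tuning could look like: a candidate gets zero reward unless it is behaviorally correct, and otherwise its reward is the measured speedup over a reference. All names here (`verifiable_reward`, the toy `naive`/`fast` functions) are hypothetical illustrations, not anything from the original post.

```python
import time

def verifiable_reward(candidate_fn, reference_fn, test_inputs):
    """Toy verifiable reward: 0.0 if the candidate is incorrect on any
    test input, otherwise the wall-clock speedup over the reference."""
    # Correctness gate: the candidate must match the reference everywhere.
    for x in test_inputs:
        if candidate_fn(x) != reference_fn(x):
            return 0.0

    # Timing: total wall-clock time over the whole test set.
    def timed(fn):
        start = time.perf_counter()
        for x in test_inputs:
            fn(x)
        return time.perf_counter() - start

    return timed(reference_fn) / timed(candidate_fn)

# Hypothetical example: a naive loop vs. a closed-form sum.
naive = lambda n: sum(range(n + 1))
fast = lambda n: n * (n + 1) // 2

reward = verifiable_reward(fast, naive, test_inputs=range(100, 2000, 100))
```

The point of the sketch is the shape of the signal, not the timing details: correctness is checked mechanically, speed is measured mechanically, so the reward needs no human judgment, which is what makes such tasks amenable to RL at scale.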

On the practical side, it notes how mainstream AI-assisted programming has become, while distinguishing between “LLMs as colleagues” (interactive copilots) and “LLMs as agents” (more autonomous systems). Finally, it touches on research momentum beyond transformers, ARC shifting from an anti-LLM benchmark to something LLMs increasingly validate on, and ends with a blunt reminder: the hardest long-term constraint on AI progress might be avoiding catastrophic outcomes.

Read the original