/u/PianistWinter8293

The stochastic parrot was just a phase; we will now see the ‘Lee Sedol moment’ for LLMs

The biggest criticism of LLMs is that they are stochastic parrots, not capable of understanding what they say. With Anthropic's research, it has become increasingly evident that this is not the case and that LLMs have real-world understanding. Howe…

DeepMind Drops AGI Bombshell: Scaling Alone Could Get Us There Before 2030

I've been digging into that Google DeepMind AGI safety paper (https://arxiv.org/html/2504.01849v1). As someone trying to make sense of potential timelines from within the research trenches, their Chapter 3, outlining core development assumptions, c…

Why Scaling Leads to Intelligence: A Theory Based on Evolution and Dissipative Systems

For the video version of this, click here. Time and time again it has been proven that, in the long run, scale beats any performance gain from implementing smart heuristics; this idea is known as "The Bitter Lesson". The idea that we …

Intuitive Understanding of how Neural Networks learn: The Library of Babel

The Library of Babel greatly improved my intuitive understanding of how neural networks can learn. The Library of Babel is a library containing every book imaginable. There lie the books with all possible combinations of words and thus all possible com…
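The combinatorics behind the analogy can be made concrete. A minimal sketch, using the book dimensions from Borges' original short story (an assumption on my part; the truncated preview doesn't specify them), shows why the library cannot be searched by enumeration and why learning has to be something smarter than lookup:

```python
import math

# Assumed spec from Borges' "The Library of Babel" (not from the post):
# each book has 410 pages x 40 lines x 80 characters, drawn from a
# 25-symbol alphabet.
chars_per_book = 410 * 40 * 80   # 1,312,000 characters per book
alphabet = 25

# Number of distinct books = 25 ** 1,312,000. Far too large to enumerate,
# so count its decimal digits via logarithms instead of computing it.
digits = math.floor(chars_per_book * math.log10(alphabet)) + 1
print(f"The library holds 25^{chars_per_book} books, "
      f"a number with {digits:,} digits.")
```

A number with roughly 1.8 million digits dwarfs the count of atoms in the observable universe (about 80 digits), which is the point of the analogy: a network cannot find the "right book" by search, it has to generalize.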