/u/PianistWinter8293

DeepMind Drops AGI Bombshell: Scaling Alone Could Get Us There Before 2030

I've been digging into that Google DeepMind AGI safety paper (https://arxiv.org/html/2504.01849v1). As someone trying to make sense of potential timelines from within the research trenches, their Chapter 3, outlining core development assumptions, c…

How do you deal with uncertainty?

I think life has never been as uncertain as it is now. The ever-increasing pace of change and the prospect of AGI in the coming years make it hard to adapt. Nobody knows exactly how the world will change; as a young person I don't know what to do…

[D] Why Bigger Models Generalize Better

There is still a lingering belief from classical machine learning that bigger models overfit and thus don't generalize well. This is described by the bias-variance trade-off, but it no longer holds in the new age of machine learning. This is empi…
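The classical trade-off the post refers to can be seen in a small simulation. This is a sketch added for illustration, not anything from the post: the sin(x) target, polynomial models, noise level, and sample sizes are all invented. It fits polynomials of increasing degree to many noisy training sets and decomposes test-point error into bias² and variance.

```python
import numpy as np

# Illustrative sketch of the classical bias-variance trade-off.
# All settings (sin target, polynomial degrees, noise) are made up.
rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(x)

def bias_variance(degree, n_trials=200, n_train=20, noise=0.3):
    """Fit a polynomial of the given degree to many noisy training sets
    and decompose its test-point error into bias^2 and variance."""
    x_test = np.linspace(0.0, np.pi, 50)
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        x = rng.uniform(0.0, np.pi, n_train)
        y = true_fn(x) + rng.normal(0.0, noise, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)           # average model over datasets
    bias_sq = np.mean((mean_pred - true_fn(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))    # spread across datasets
    return bias_sq, variance

for d in (1, 3, 9):
    b, v = bias_variance(d)
    print(f"degree {d}: bias^2 = {b:.4f}, variance = {v:.4f}")
```

Degree 1 underfits (high bias), degree 9 chases the noise (high variance) — the classical picture. The post's point is that very large modern networks empirically escape this regime.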

The Difference Between Human and AI Reasoning

Older AI models showed some capacity for generalization, but pre-O1 models weren't directly incentivized to reason. This fundamentally differs from humans: our limbic system can choose its reward function and reward us for making correct reasoning …

Reward Functions in AI: Between Rigidity and Adaptability

The relationship between human and artificial reasoning reveals an interesting tension in reward function design. While the human brain features a remarkably flexible reward system through its limbic system, current AI architectures rely on more rigid …

Why Scaling leads to Intelligence: a Theory based on Evolution and Dissipative systems

Time and again it has been proven that, in the long run, scale beats any performance gain we get from implementing smart heuristics; this idea is known as "the bitter lesson". The idea that we …

Intuitive Understanding of how Neural Networks learn: The Library of Babel

The Library of Babel greatly improved my intuitive understanding of how neural networks can learn. The Library of Babel is a library containing every book imaginable. There lie the books with all possible combinations of words and thus all possible com…
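For a sense of why brute enumeration over that space is hopeless, Borges' own specification pins down the library's size: each book has 410 pages of 40 lines of 80 characters, drawn from a 25-symbol alphabet, giving 25^1,312,000 distinct books. A short calculation (mine, not the post's) counts the digits of that number:

```python
import math

# Size of the Library of Babel under Borges' specification:
# 410 pages x 40 lines x 80 characters, 25-symbol alphabet.
chars_per_book = 410 * 40 * 80   # 1,312,000 characters per book
# Number of decimal digits of 25**1_312_000, via log10 (too large
# to print directly): floor(n * log10(25)) + 1.
n_books_digits = int(chars_per_book * math.log10(25)) + 1
print(f"25**{chars_per_book:,} has {n_books_digits:,} decimal digits")
```

Roughly 1.8 million digits — a search space no enumeration touches, which is the intuition pump for why learning has to do something smarter than lookup.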

Recent Paper shows Scaling won’t work for generalizing outside of Training Data

I recently came across an intriguing paper (https://arxiv.org/html/2406.06489v1) that tested various machine learning models, including a transformer-based language model, on out-of-distribution (OOD) prediction tasks. T…

The Human Brain might follow the same Scaling Law as AI: It aligns surprisingly well with a Performance vs. Compute Graph made for AI
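Performance-vs-compute graphs of this kind are usually modeled as a power law, L(C) = a·C^(−b), which becomes a straight line in log-log space: log L = log a − b·log C. A minimal sketch (the compute values and coefficients below are invented for illustration, not data from the post) showing how the exponent is recovered by a linear fit:

```python
import numpy as np

# Hypothetical power-law scaling sketch: loss L(C) = a * C**(-b).
# Compute values and coefficients are synthetic, chosen for illustration.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training FLOPs (made up)
a_true, b_true = 50.0, 0.05
loss = a_true * compute ** (-b_true)                # noiseless synthetic losses

# In log-log space the power law is a line: slope = -b, intercept = log10(a).
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)
b_hat = -slope
a_hat = 10.0 ** intercept
print(f"recovered exponent b = {b_hat:.4f}, prefactor a = {a_hat:.2f}")
```

With noiseless synthetic points the fit recovers the exponent exactly; on real measurements (AI or brain) the same log-log regression gives the estimated scaling exponent.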
