The timeline debate is everywhere right now, but none of the major players actually agree on what AGI looks like. LeCun has been arguing for a while that LLMs alone are a dead end, and he's now backing startups focused on Energy-Based Models (EBMs). Instead of autoregressive next-token prediction, an EBM assigns an energy score to candidate answers and searches for the lowest-energy one subject to constraints. It's optimization, not generation.
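To make the "optimization, not generation" distinction concrete, here's a toy sketch (my own illustration, not LeCun's actual architecture): the energy function and constraints below are hypothetical, but the shape of the idea is that inference means descending an energy surface until the constraints are jointly satisfied, rather than emitting tokens left to right.

```python
def energy(y, constraints):
    """Hypothetical energy: sum of squared violations of soft constraints.
    Each constraint is (weight, target); lower energy = better answer."""
    return sum(w * (y - t) ** 2 for w, t in constraints)

def infer(constraints, lr=0.1, steps=200):
    """Inference as optimization: gradient-descend the energy surface
    instead of sampling an answer token by token."""
    y = 0.0
    for _ in range(steps):
        # analytic gradient of the quadratic energy above
        grad = sum(2 * w * (y - t) for w, t in constraints)
        y -= lr * grad
    return y

# Two competing soft constraints. The minimum-energy answer is their
# weighted average -- a global compromise, not a left-to-right rollout.
constraints = [(1.0, 2.0), (3.0, 6.0)]
answer = infer(constraints)
print(round(answer, 3))  # -> 5.0, i.e. (1*2 + 3*6) / (1 + 3)
```

The point of the toy: change a constraint and the whole answer shifts at once, which is the behavior people mean when they contrast EBM-style search with autoregressive generation.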
If true reasoning requires an ecosystem of EBMs and world models rather than one massive LLM, aren't all these timeline bets, which mostly extrapolate LLM scaling curves, kind of meaningless? Are we aiming for one model to rule them all, or a patchwork of different architectures?