UC Berkeley prof just proved modern AI is fundamentally stuck at animal-level intelligence

been reading this new textbook (Learning Deep Representations of Data Distributions) and it basically argues that deep learning can't reach human-level intelligence because of how we train it

animals learn through closed-loop feedback - they do something, reality corrects them immediately, brain updates. our models? train once on a dataset, freeze, deploy. no real-time correction from the world.
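to make the contrast concrete, here's a toy sketch (mine, not from the book) of a 1-parameter model predicting y = w * x in a world that drifts partway through. the "open-loop" learner trains on early data and then freezes, like a deployed model; the "closed-loop" learner keeps updating from the world's feedback:

```python
# toy illustration: open-loop (train once, freeze) vs closed-loop (keep
# updating from reality) on a 1-parameter model y_hat = w * x.
# the true mapping drifts from y = 2x to y = 5x at t = 50.

def environment(x, t):
    """the 'world': returns the true target, which drifts at t = 50."""
    return (2.0 if t < 50 else 5.0) * x

def run(closed_loop, lr=0.1, steps=100):
    w = 0.0
    errors = []
    for t in range(steps):
        x = 1.0                        # fixed probe input, for simplicity
        y_hat = w * x                  # model acts
        y = environment(x, t)          # reality responds
        errors.append(abs(y - y_hat))
        if closed_loop or t < 50:      # open-loop: frozen after step 50
            w += lr * (y - y_hat) * x  # gradient step on squared error
    return errors

open_err = run(closed_loop=False)
closed_err = run(closed_loop=True)
# after the drift, only the closed-loop learner recovers
print(f"final error, open-loop:   {open_err[-1]:.3f}")
print(f"final error, closed-loop: {closed_err[-1]:.3f}")
```

obviously a cartoon, but it's the shape of the problem: a frozen model keeps paying for a world that moved on.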

turns out this was understood in the 1940s by Wiener and Shannon, but we still haven't figured out how to scale closed-loop learning. we have the math and the theory; we just can't make it work at scale without it becoming unstable or computationally intractable.

which is wild given how many people think AGI is 5 years away. like, we're celebrating how good ChatGPT is at pattern matching while ignoring that it literally can't learn from reality the way a dog does.

am i missing something here or is this actually a hard wall we're pretending doesn't exist?

Source: https://ma-lab-berkeley.github.io/deep-representation-learning-book/

submitted by /u/techiee_