"highly autonomous systems that outperform humans at most economically valuable work"
We used to call it AI, now AGI, but whatever we call it, I think what we all want is a system that can reason, hypothesize, and, where it's safe to do so, self-improve. A truly intelligent system should be able to invent new things based on what it has already learned.
Outperforming humans at 'most' work doesn't sound like it guarantees any of those things. The current models outperform us on a lot of benchmarks but will then proceed to miscount the characters in a string. We have to keep inventing new words to describe the end goal: it went from AI to AGI, and now apparently ASI.
If that's OpenAI's definition of AGI, then I don't doubt them when they say they know how to get there, but that doesn't feel like AGI to me.