Has the boom in AI in the last few years actually gotten us any closer to AGI?
LLMs are awesome. I use them every day for coding, writing, discussing topics, etc. But I don't believe they are the pathway to AGI. I see them as "tricks" that are extremely good at simulating reasoning and understanding by outputting what a human would want to hear, since they're trained on large amounts of human data and further tuned through human feedback, which I assume steers the system even more towards answers a human would want to hear.

I don't believe that this is the path to a general intelligence that can understand something and reason the way a human would. I believe that would require interaction with the real world, not just data that has been filtered through humans and converted into text.

So, despite all the AI hype of the last few years, I think these developments are largely irrelevant to building true AGI. The news articles and fears of a "dangerous, sentient" AI are mostly a result of the term "artificial intelligence" becoming more topical, and those fears don't particularly apply to the current popular models.

The only benefit I can see from this boom is that it's driving a lot more investment in infrastructure, such as datacentres, which may or may not be needed to power whatever an AGI would actually look like. It has probably also drawn more people into the "AI" field in general, but whether that work contributes to developing an AGI is debatable.

Interested in takes on this.

submitted by /u/AchillesFirstStand