Why do some people say LLMs and generative models like ChatGPT/DALL-E will slow/halt the creation of AGI?
Are they not the same thing, just at different scales? For example, if you took a massive text LLM like GPT-4 and integrated other models for things like image processing, motor function, generative content, et cetera – would that not, in effect, be AGI? Wh…