It looks like in a few years the base large language models (LLMs) we use will be commoditised, and it won't really matter which one you pick. The next big thing could be LLMs built around Retrieval-Augmented Generation (RAG), where the model pulls relevant documents from an external corpus at query time instead of relying only on what it memorised during training, so whoever controls the biggest and freshest corpus has the edge.
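For anyone not familiar with the pattern, here's a rough sketch of what RAG boils down to. Everything in it is illustrative: the toy corpus, the keyword-overlap scoring (real systems use embedding similarity), and the build_prompt helper are all stand-ins, not any particular vendor's API.

```python
# Minimal sketch of the RAG pattern: retrieve relevant documents,
# then prepend them to the prompt before calling a language model.
from collections import Counter

# Hypothetical document store; in practice this would be a large index.
CORPUS = [
    "Google indexes billions of web pages through its search crawler.",
    "Retrieval-Augmented Generation pairs a retriever with a generator.",
    "Base LLMs are trained once and have a fixed knowledge cutoff.",
]

def score(query: str, doc: str) -> int:
    """Crude keyword-overlap relevance score (embeddings in real systems)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents that best match the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # The augmented prompt would then be sent to whichever LLM you prefer.
    print(build_prompt("Who has the most data for retrieval?"))
```

The point is that the generator is interchangeable, but the retrieval side depends entirely on the quality and scale of the data behind it.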
Given that Google already has access to an enormous amount of data through its search index, are they better positioned to lead in this phase than other companies? What do you all think?