I've seen some people insist that eventually we won't need to pay for subscriptions to services like ChatGPT, because phones will be powerful enough to run that kind of thing locally.
I disagree.
To run LLMs locally, you need 100+ GB of RAM, and even then they run very slowly. This, mind you, is despite most of these open-source models being considerably, or even vastly, inferior to larger, more advanced models like GPT-4 or even just GPT-3.5. Nobody is going to be running this stuff at home, let alone on their phones, even 5-10 years from now, unless it's something very basic and dumbed down.
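For a rough sense of where that 100+ GB figure comes from, here's a back-of-envelope sketch (my numbers, not anything official: assuming a 70B-parameter model stored in 16-bit floats, ignoring activation and KV-cache overhead):

```python
# Back-of-envelope memory estimate for running an LLM locally.
# Assumptions: a 70B-parameter model (e.g. LLaMA-70B class) held
# entirely in RAM at 16-bit precision, weights only.

params = 70e9          # parameter count (assumed)
bytes_per_param = 2    # FP16/BF16: 2 bytes per parameter

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~140 GB, before any overhead
```

Even aggressive quantization only shaves that down so far, and a phone has maybe 8-16 GB of RAM total.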