Most consumers will never have the resources to run large language models locally

I've seen some people insist that eventually we won't need to pay for subscriptions to services like ChatGPT because phones will be powerful enough to run models like that locally.

I disagree.

To run today's large LLMs locally, you need 100+ GB of RAM, and even then inference is very slow. This, mind you, is despite most of these open-source models being considerably or even vastly inferior to larger, more advanced models like GPT-4 or even just GPT-3.5. Nobody is going to be running this stuff at home, let alone on their phones, even 5-10 years from now, unless it's something very basic and dumbed down.
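For anyone wondering where the 100+ GB figure comes from: the memory needed just to hold a model's weights is roughly parameter count times bytes per parameter. Here's a minimal back-of-the-envelope sketch in Python (the model sizes are illustrative assumptions, and it ignores the KV cache and activation memory, which only add to the total):

```python
# Rough estimate of RAM needed just to hold an LLM's weights.
# Ignores KV cache and activations, which add further overhead.
def weights_gib(num_params: float, bytes_per_param: float) -> float:
    """Memory for model weights, in GiB."""
    return num_params * bytes_per_param / 1024**3

# Illustrative parameter counts (assumptions, not any specific release).
for name, params in [("7B", 7e9), ("70B", 70e9), ("175B", 175e9)]:
    fp16 = weights_gib(params, 2.0)  # 16-bit weights: 2 bytes/param
    q4 = weights_gib(params, 0.5)    # 4-bit quantized: 0.5 bytes/param
    print(f"{name:>5}: ~{fp16:5.0f} GiB at fp16, ~{q4:4.0f} GiB at 4-bit")
```

A 70B-parameter model at fp16 is already ~130 GiB of weights alone. Aggressive 4-bit quantization cuts that to ~33 GiB, but that trades away quality, and it's still far beyond what any phone ships with.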
