Current obstacles in the field of LLMs

What are the biggest obstacles in LLMs?
Which of these have solutions on the horizon (e.g. in recent research papers), and which have no solutions yet?

What I currently see:
- hallucinations
- ability to "remember" a user and pick up previous conversations
- huge (V)RAM requirements (see the rough estimate after this list)
- very slow / computationally intensive when running on CPU
- limited context size (a problem when e.g. summarizing long texts)
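
To make the (V)RAM point concrete, here is a minimal back-of-envelope sketch (assuming a hypothetical 7B-parameter model as an example) of how much memory the weights alone take at different precisions. The numbers are illustrative only; real usage is higher once the KV cache, activations, and framework overhead are added:

```python
# Rough estimate of memory needed just to hold model weights at
# different numeric precisions. Weights-only; KV cache, activations,
# and runtime overhead come on top.

def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory in GiB to store n_params parameters at the given precision."""
    return n_params * bytes_per_param / (1024 ** 3)

if __name__ == "__main__":
    n_params = 7e9  # hypothetical 7B-parameter model
    for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        gib = weight_memory_gib(n_params, bytes_per_param)
        print(f"{label}: ~{gib:.1f} GiB")
```

Running this prints roughly 13 GiB for fp16, 6.5 GiB for int8, and 3.3 GiB for int4, which is why quantization is the usual workaround for running models on consumer GPUs or CPUs.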

submitted by /u/Koliham