AI Memory, what’s the biggest struggle you’re facing? How would you handle memory when switching between LLMs?

Hey everyone,

I’ve been exploring how to make memory-agnostic systems... basically, setups where memory isn’t tied to a specific LLM.

Think of tools that use MCP or APIs to detach memory from the model itself, giving you the freedom to swap models (GPT, Claude, Gemini, etc.) without losing context or long-term learning.
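
To make the idea concrete, here’s a rough sketch of the shape I have in mind: memory lives in its own store outside any provider, and gets rendered into plain chat messages, so the model behind it is interchangeable. All the names here (MemoryStore, build_messages, etc.) are hypothetical, just to illustrate the pattern, not any specific tool’s API.

```python
# Minimal sketch of a model-agnostic memory layer. Memory is kept in a
# plain store and rendered into generic chat messages, so swapping the
# provider (GPT, Claude, Gemini, ...) only changes the transport layer.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Long-term memory kept outside any one LLM provider."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def recall(self, limit: int = 5) -> list[str]:
        # A real system would do embedding-based retrieval here; this
        # just returns the most recent facts to stay self-contained.
        return self.facts[-limit:]


def build_messages(store: MemoryStore, user_input: str) -> list[dict]:
    """Render memory as plain messages any chat-style API can consume."""
    context = "\n".join(f"- {fact}" for fact in store.recall())
    return [
        {"role": "system", "content": f"Known about the user:\n{context}"},
        {"role": "user", "content": user_input},
    ]


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("User prefers concise answers.")
    store.remember("User is building a multi-provider chatbot.")
    # These messages could be sent to any provider's chat endpoint.
    print(build_messages(store, "What were we working on?"))
```

The hard part, of course, is everything this sketch glosses over: retrieval quality, how much each model actually “uses” the injected context, and provider-specific prompt formats.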

I’m curious:

  • What challenges are you facing when trying to keep “memory” consistent across different LLMs?
  • How do you imagine solving the “memory layer” problem if you wanted to change your model provider at scale?
  • Do you think model-independent memory is realistic... or does it always end up too model-specific in practice?

Would love to hear how you’re thinking about this... both technically and philosophically.

submitted by /u/Litlyx