/u/Independent-Flow3408

Reducing LLM context from ~80K tokens to ~2K without embeddings or vector DBs

I’ve been experimenting with a problem I kept hitting when using LLMs on real codebases: even with good prompts, large repos don’t fit into context, so models:

– miss important files
– reason over incomplete information
– require multiple retries

Appr…