HACKER Q&A
📣 grigio

Do you use LLM memory features?


I've been experimenting with AI assistants and their "memory" features, but I'm finding them opaque and unreliable.

Instead, I've switched to keeping important context in .md files that I explicitly reference whenever I need the model to recall them. This gives me:

- Full visibility into what's in context
- No mystery recalls from weeks ago
- Easier debugging when behavior drifts
- Predictable token usage

The downside is manual maintenance, but it feels more reliable than hoping the system remembered correctly.
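The workflow above can be sketched in a few lines: gather the chosen .md notes, label each with its filename so the source of every fact stays visible, and paste the result into the prompt. This is a minimal illustration, not any assistant's actual API; the file names are hypothetical.

```python
from pathlib import Path

def build_context(md_files):
    """Concatenate explicitly chosen .md notes into one context block.

    Each file is prefixed with a heading naming it, so you can always
    see exactly what is in context and where it came from.
    """
    parts = []
    for name in md_files:
        text = Path(name).read_text(encoding="utf-8")
        parts.append(f"## {name}\n{text.strip()}")
    return "\n\n".join(parts)

# Hypothetical usage: paste the result ahead of your question.
# context = build_context(["project-notes.md", "decisions.md"])
```

Because the files are plain text under version control, "debugging when behavior drifts" reduces to diffing the notes you fed in.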

Do you use built-in memory features? Or do you manage context explicitly through files/RAG/other methods? What's working for you?


  👤 RyanByrne Accepted Answer ✓
I've been doing exactly the same thing as you, and I've found it to be the most reliable approach.

👤 TheServitor
I prefer flat files too. I'd rather have mostly manual control of context.

👤 tizzzzz
I'm more used to letting the AI summarize key points every once in a while, and then supplementing them myself.