𝗔𝗜 𝗔𝗴𝗲𝗻𝘁’𝘀 𝗠𝗲𝗺𝗼𝗿𝘆 is the most important piece of 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴. This is how we define it 👇

In general, an agent’s memory is anything we provide via the context in the prompt passed to the LLM that helps the agent plan and act better, given past interactions or data that is not immediately available. It is useful to group memory into four types:

𝟭. 𝗘𝗽𝗶𝘀𝗼𝗱𝗶𝗰 - Past interactions and actions performed by the agent. After an action is taken, the application controlling the agent stores it in some kind of persistent storage so that it can be retrieved later if needed. A good example is a vector database storing the semantic meaning of past interactions.

𝟮. 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 - Any external information available to the agent and any knowledge the agent should have about itself. Think of it as the kind of context used in RAG applications: internal knowledge only available to the agent, or a grounding context that isolates part of internet-scale data for more accurate answers.

𝟯. 𝗣𝗿𝗼𝗰𝗲𝗱𝘂𝗿𝗮𝗹 - Systemic information such as the structure of the system prompt, available tools, guardrails etc. It is usually stored in Git, prompt and tool registries.

𝟰. Occasionally, the agent application pulls information from long-term memory and stores it locally if it is needed for the task at hand.

𝟱. All of the information pulled from long-term memory or kept in local storage makes up short-term, or working, memory (this is the fourth type). Compiling it produces the prompt that is passed to the LLM, which then returns the next actions to be taken by the system.

We usually label 1. - 3. as Long-Term memory and 5. as Short-Term memory.

And that is it! The rest is all about how you architect the topology of your Agentic Systems.

Any war stories from managing an Agent’s memory? Let me know in the comments 👇

#AI #LLM #MachineLearning
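To make the flow above concrete, here is a minimal Python sketch of how the long-term memory types could be pulled into working memory and compiled into a prompt. Every name in it (LongTermMemory, build_working_memory, the field names) is hypothetical, and a real system would retrieve episodic and semantic entries with vector search rather than simple slicing.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the memory types described above;
# a plain-Python sketch, not any specific library's API.

@dataclass
class LongTermMemory:
    episodic: list[str] = field(default_factory=list)    # past interactions / actions
    semantic: list[str] = field(default_factory=list)    # external / self knowledge (RAG-style)
    procedural: dict = field(default_factory=dict)       # system prompt, tools, guardrails

def build_working_memory(long_term: LongTermMemory, task: str, k: int = 3) -> str:
    """Pull the pieces needed for the task into short-term (working) memory
    and compile them into a single prompt for the LLM."""
    # A real system would retrieve relevant entries via vector search;
    # here we just take the most recent / first few.
    relevant_episodes = long_term.episodic[-k:]
    relevant_facts = long_term.semantic[:k]

    return "\n\n".join([
        long_term.procedural.get("system_prompt", ""),
        "Available tools: " + ", ".join(long_term.procedural.get("tools", [])),
        "Relevant past interactions:\n" + "\n".join(relevant_episodes),
        "Grounding context:\n" + "\n".join(relevant_facts),
        "Current task: " + task,
    ])

if __name__ == "__main__":
    memory = LongTermMemory(
        episodic=["User asked for a refund on order #123; agent escalated."],
        semantic=["Refunds over $100 require manager approval."],
        procedural={"system_prompt": "You are a support agent.",
                    "tools": ["lookup_order", "escalate"]},
    )
    print(build_working_memory(memory, "Process a refund request for order #123"))
```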

Nov 3, 2025 · 12:18 PM UTC

Replying to @Aurimas_Gr
We are using @mem0ai
How is the experience so far?
Replying to @Aurimas_Gr
thanks for sharing, valuable in my ai learning journey
Replying to @Aurimas_Gr
This is a great framework. It really highlights how agents need structure, not just bigger models. How do you handle memory pruning?
Depends on the business case, really. Most of the time you can run it in the hot path: if the application allows high latency in the first place (which is when you need to prune memory), it probably also allows managing memory retention as part of the agentic flow.
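As a rough sketch of what running retention in the hot path could look like (the retention policy and every name below are assumptions, not something stated in the thread):

```python
# Hypothetical sketch of pruning "in the hot path": retention is applied as
# one more step of the agentic flow, right before working memory is compiled,
# rather than in a separate background job.

MAX_EPISODES = 50  # assumed retention budget; tune per business case

def prune_episodic(episodes: list[dict]) -> list[dict]:
    # Keep only the most recent entries; a real policy might also score
    # entries by relevance or importance before dropping or summarising them.
    return episodes[-MAX_EPISODES:]

def agent_step(episodes: list[dict], user_message: str) -> list[dict]:
    episodes = prune_episodic(episodes)   # pruning happens inside the hot path
    # ... retrieve context, build the prompt, call the LLM, execute tools ...
    episodes.append({"role": "user", "content": user_message})
    return episodes
```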
Replying to @Aurimas_Gr
How do we handle context limitations in this case?
Do you mean context length limitations or context precision?
Thank you, glad you like it!
Replying to @Aurimas_Gr
Love this classification, especially how procedural memory gets overlooked
Replying to @Aurimas_Gr
AI agents journaling their trauma arc unlocked🫠
Replying to @Aurimas_Gr
Right, Aurimas, memory is key for AI agents, but I wonder if the definition itself needs a bit more focus, you know?
Replying to @Aurimas_Gr
The missing piece here is decay. Not all memories should weigh equally. Older episodic data should fade or get compressed. Otherwise working memory bloats and response quality drops.
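A minimal sketch of one way such decay could work, assuming a simple exponential half-life applied to retrieval scores (the half-life value and all names are illustrative):

```python
import time

# Hypothetical sketch of time-based decay: older episodic entries lose weight
# so they stop crowding out fresher, more relevant context.

HALF_LIFE_DAYS = 7.0  # an entry's weight halves every week (assumed)

def decayed_score(similarity: float, created_at: float, now=None) -> float:
    """Combine a retrieval similarity score with an exponential age penalty."""
    now = time.time() if now is None else now
    age_days = (now - created_at) / 86_400
    return similarity * 0.5 ** (age_days / HALF_LIFE_DAYS)

# Entries whose decayed score falls below a threshold can be dropped,
# or summarised (compressed) into a single long-term note.
```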
Replying to @Aurimas_Gr
Love this breakdown. Memory is the hardest part to get right; most agents' reliability issues come down to how memory is managed. That’s why I rely on @LangbaseInc's approach to memory.
Replying to @Aurimas_Gr
100%. Once an agent can *remember*, it needs to *act*. But for high-stakes systems, that action is worthless if it can't provably trust the tools it's using. We're building the verifiable trust and payment rails for exactly this.
Replying to @Aurimas_Gr
A solid memory can turn a good agent into a great one. Context is the key, as always. Let's keep that conversation flowing!
Replying to @Aurimas_Gr
What we’ve seen is that even with perfect memory, agents still drift. The missing layer is why they’re reasoning. This is where governance comes in: defining a reproducible intent contract (Motive / Scope / Priority) before any memory is even loaded. Memory helps agents recall. Governance helps systems stay accountable.
Replying to @Aurimas_Gr
In a call center for a health clinic, some patient information is confidential, and the call center agent needs to separate what is confidential from what is not in order to correctly orchestrate the flow of conversation, scheduling, passing context to the professionals, etc.
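A rough sketch of how that separation might be enforced before anything reaches the agent's context (the field names and confidentiality policy are illustrative assumptions, not real clinic rules):

```python
# Hypothetical: split a patient record into shareable vs. confidential views
# before it enters the agent's working memory.

CONFIDENTIAL_FIELDS = {"diagnosis", "insurance_id", "ssn"}

def split_context(record: dict) -> tuple[dict, dict]:
    """Return (shareable, confidential) views of a patient record."""
    shareable = {k: v for k, v in record.items() if k not in CONFIDENTIAL_FIELDS}
    confidential = {k: v for k, v in record.items() if k in CONFIDENTIAL_FIELDS}
    return shareable, confidential

record = {"name": "J. Doe", "appointment": "Tue 10:00", "diagnosis": "..."}
shareable, confidential = split_context(record)
# Only `shareable` enters the scheduling / conversation context; `confidential`
# is handed to authorised professionals through a separate, access-controlled path.
```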
Replying to @Aurimas_Gr
Have you had a chance to use the presented scheme in practice? I'm asking because I have, including some of its variants, and it quickly turns out to be very limiting. The first obvious problem is the lack of broad context and a memory map, which leads to situations where the agent "doesn't know it possesses certain knowledge", since semantic search alone isn't enough to access the needed information based on the current interaction. Also, injecting updated information the way this scheme suggests breaks the cache, affects overall performance, and drastically increases costs. I found the agent’s memory works better not as a separate logic module but as one of the tools (or rather, a subagent) the agent can access. But this isn’t a silver bullet either: without self-querying, general context, or some kind of map, hybrid and agentic search fall short very quickly. Do you have any resources or experience related to this, by any chance? Thanks in advance!
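One possible shape of the "memory as a tool plus a memory map" idea, sketched in Python with entirely hypothetical names and stubbed lookups:

```python
# Hypothetical sketch of "memory as a tool": the agent sees a compact memory
# map in its system prompt and decides when to query memory, instead of having
# retrieved chunks pushed into every turn.

MEMORY_MAP = {
    "orders": "past order history and refunds",
    "policies": "company policies and limits",
    "conversations": "prior conversations with this user",
}

def search_memory(collection: str, query: str) -> list[str]:
    """Tool the agent calls explicitly; a real lookup would hit a hybrid index."""
    if collection not in MEMORY_MAP:
        return [f"Unknown collection. Available: {', '.join(MEMORY_MAP)}"]
    return [f"(stub) results for '{query}' in '{collection}'"]

TOOL_SPEC = {
    "name": "search_memory",
    "description": "Search agent memory. Collections: "
                   + "; ".join(f"{k} = {v}" for k, v in MEMORY_MAP.items()),
    "parameters": {"collection": "string", "query": "string"},
}
```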
Replying to @Aurimas_Gr
Such an important conversation. Building useful agents is less about bigger models and more about structured memory design. The architecture piece you mention is where real differentiation will happen.
Replying to @Aurimas_Gr
Brilliant framework! The distinction between episodic, semantic, procedural, and working memory is essential for agents. Context engineering truly matters—it's the difference between intelligent systems and mere pattern-matching. Mandatory reading! 👏
Replying to @Aurimas_Gr
This thought might look interesting
Replying to @Aurimas_Gr
Memory quality, I would add. I just faced a PDF parsing challenge trying to optimize the precision of answers on complex tabular PDFs and realized how "not that straightforward" it is.
Replying to @Aurimas_Gr
memory is where agents stop being tools and start behaving like entities. persistence turns automation into agency. but memory without oversight is how drift begins.
Replying to @Aurimas_Gr
Procedural memory is often overlooked, but having tool registries and guardrails in place is what separates a reliable agent from a chaotic one.
Replying to @Aurimas_Gr
Unclear “Everything should be running or silent as all hell” Am hungry for snake of lies… just like all the rest at rest behind me. Please don’t make a meal. Not worth being in the way of the top heads
Replying to @Aurimas_Gr
I find that how we provide and structure memory really shapes the way an agent learns and adapts
Replying to @Aurimas_Gr
Yet not many deploy semantic caching for their agentic pipelines
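For reference, semantic caching in an agentic pipeline could look roughly like this; `embed` is a stand-in for whatever embedding model is used, and the similarity threshold is an assumption:

```python
import math

# Hypothetical sketch of semantic caching: if a new request is close enough in
# embedding space to one answered before, reuse the cached response instead of
# calling the LLM again.

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in your embedding model here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

cache: list = []  # entries of (query embedding, cached response)

def cached_call(query: str, llm_call, threshold: float = 0.92) -> str:
    q_vec = embed(query)
    for vec, response in cache:
        if cosine(q_vec, vec) >= threshold:
            return response            # cache hit: skip the LLM call
    response = llm_call(query)         # cache miss: call the model and store it
    cache.append((q_vec, response))
    return response
```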
Replying to @Aurimas_Gr
Are there any established algorithms to effectively store/retrieve data this way?
Replying to @Aurimas_Gr
Perfectly explained! Building AI agents is no longer just about reasoning; it’s about remembering efficiently.
Replying to @Aurimas_Gr
toss in a hypergraph and you're golden.
Replying to @Aurimas_Gr
Exactly, without memory an AI agent is just reacting, not reasoning.