your AI agent only forgets because you let it.
there is a simple technique that everybody needs, but few actually use, and it can improve agent performance by up to 51.1% on benchmarks.
here's how you can use workflow memory:
you ask your agent to train a simple ML model on your custom CSV data.
— it implements the model in PyTorch,
— tests different hyperparameters,
— optimizes the model and the configs,
— and finally finishes with a training script.
but if you want to do this again in a couple of days, you need some sort of memory of that workflow, so the agent doesn't retry everything from scratch or repeat the same mistakes.
you need the agent to use the experience from the previous workflow.
workflow memory is an intuitive way to give the agent practical memory, so it can avoid past mistakes and focus on improving similar future workflows.
The result?
→ fewer tokens, lower costs
→ fewer repeated mistakes
→ the agent learns fast from real-world experience
Workflow memory can be implemented with simple markdown files.
How?
At the end of each task, you ask the agent to summarize key information for later use: the task description, challenges faced, lessons learnt, etc.
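the "summarize at the end" step can be sketched in a few lines. this is a minimal illustration, not CAMEL's actual implementation: `llm_call` is a hypothetical function that sends a prompt to your model and returns its text reply, and the `workflows/` folder name is just an assumption.

```python
from pathlib import Path
from typing import Callable

# hypothetical folder where workflow memories live (one .md per task)
WORKFLOW_DIR = Path("workflows")

SUMMARY_PROMPT = (
    "Summarize this session as a reusable workflow for future tasks. "
    "Include: the task description, challenges faced, lessons learnt, "
    "and the final working steps."
)

def save_workflow(llm_call: Callable[[str], str], name: str) -> Path:
    """Ask the model to summarize the finished task, store it as markdown."""
    summary = llm_call(SUMMARY_PROMPT)
    WORKFLOW_DIR.mkdir(exist_ok=True)
    path = WORKFLOW_DIR / f"{name}.md"
    path.write_text(summary, encoding="utf-8")
    return path
```

since the memory is plain markdown, you can read it, edit it, or version it in git like any other file.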
then, when starting a new task, you give the agent a short description of each workflow `.md` file and ask it to choose which one is most relevant to the task.
the key is in the prompts: they're what really makes the difference, and they either make or break the system.
in CAMEL (@CamelAIOrg), we have just rolled out a new version of smart workflow retrieval: the agent chooses which workflow best fits each task. you can use this feature in your applications, take inspiration from it, or open a PR and make it better!
→ check it out here:
github.com/camel-ai/camel/pu…
→ a paper from MIT that researched a similar idea reported 24.6% and 51.1% increases in agent web-navigation benchmark results (Mind2Web and WebArena):
arxiv.org/pdf/2409.07429