Smarter AI, Smarter Context: Experiments with Memory, TILs, and Specialized MCPs
For an AI to be a truly effective collaborator in software development, it needs more than just raw processing power; it requires robust mechanisms for accessing, retaining, and utilizing context and memory. My previous explorations into AI environments and orchestration tools continually underscored this need. This post focuses on my experiments with various approaches to bolstering AI memory and contextual understanding.
Accessing External Knowledge: The Documentation Dilemma
One key aspect of context is providing AI with access to relevant documentation. I compared a couple of tools for this:
- context7 (project): Despite the hype, it struggled in a test case involving markdownlint-cli2 and its underlying markdownlint library. Even when I fed it both projects’ docs and queried for a specific error code like “MD013” (with a ~100k token context), it yielded no results. The issue seemed to be that markdownlint-cli2 is a wrapper that references markdownlint for error descriptions, a nuance context7 didn’t navigate effectively in this instance.
- git-mcp: In contrast, git-mcp successfully returned a reference for markdownlint-cli2 and the full documentation for markdownlint. This suggests it may be more adept for smaller sets of interconnected project documentation.
This highlighted that effectively “teaching” an AI about a project’s dependencies and documentation structure is non-trivial. Further testing is needed for larger projects, but the initial results favored more direct or perhaps simpler retrieval mechanisms for specific, linked documentation.
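To make this kind of test reproducible by hand, a documentation MCP server can be exercised directly with the MCP Python SDK: spawn the server over stdio, list the tools it advertises, then ask it about a single rule ID. This is only a minimal sketch; the server package name, tool name, and argument keys below are assumptions for illustration and should be checked against whatever the server you run actually exposes.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumption: the documentation server is started via npx; swap in the
# actual package for the server you are testing (context7, git-mcp, ...).
server = StdioServerParameters(
    command="npx",
    args=["-y", "@upstash/context7-mcp"],
)


async def lookup_rule(rule_id: str) -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # First check what the server actually advertises.
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])

            # Hypothetical tool name and arguments -- real servers differ.
            result = await session.call_tool(
                "get-library-docs",
                arguments={"libraryName": "markdownlint-cli2", "topic": rule_id},
            )
            for item in result.content:
                if item.type == "text":
                    print(item.text)


asyncio.run(lookup_rule("MD013"))
```

In the comparison above, the signal was simply whether the returned text described the rule at all: git-mcp’s response did, context7’s did not.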
Building a “Second Brain”: TILs and Memory Banks
Beyond external docs, there’s the project-specific and personal knowledge an AI needs.
- Custom TIL (Today I Learned) System: I initially ventured into creating a basic TIL feature (a minimal sketch of what it amounted to follows this list). While a good learning exercise, I soon realized it was largely an attempt to reinvent existing wheels, especially as the mcp-memory-service I was considering didn’t seem like a long-term fit.
- Exploring Memory MCPs: This led to an exploration of dedicated memory tools and MCPs:
  - mem0-mcp: I initially understood this as a simple wrapper over a graph database. However, thanks to its web interface and another MCP implementation also named mem0-mcp (a completely different project by a different author!), I realized its true intent is more about improving user-agent communication than just being a persistent memory store. A TIL moment in itself!
  - assistant-mcp: This primarily provides data retrieval capabilities.
  - cognee: It took a while to set up, and unfortunately I encountered several issues that led me to pause the evaluation.
  - zettelkasten-mcp: Another tool on the list for managing structured notes.
- Memory Bank Foundations: The idea of combining the strengths of cursor-memory-bank with roo-code-memory-bank remained an underlying theme, and projects like rUv-dev also appeared as potential enhancements or replacements for basic memory bank functionalities.
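As for the TIL feature mentioned above, it never grew beyond appending dated notes to per-topic markdown files. A minimal sketch of that idea follows; the directory layout and function are purely illustrative, not part of any existing tool.

```python
from datetime import date
from pathlib import Path

TIL_DIR = Path("docs/til")  # illustrative location, not an established convention


def add_til(topic: str, note: str) -> Path:
    """Append a dated TIL entry to a per-topic markdown file."""
    TIL_DIR.mkdir(parents=True, exist_ok=True)
    til_file = TIL_DIR / f"{topic}.md"
    is_new = not til_file.exists()
    with til_file.open("a", encoding="utf-8") as f:
        if is_new:
            f.write(f"# TIL: {topic}\n")
        f.write(f"\n## {date.today().isoformat()}\n\n{note}\n")
    return til_file


if __name__ == "__main__":
    add_til("mcp", "mem0-mcp is about improving user-agent communication, not just persistent storage.")
```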
Tools for Understanding: MCP Inspector
In this complex landscape of inter-tool communication, the MCP inspector proved to be an invaluable utility for debugging and gaining clarity on how different components were interacting (or failing to).
The journey to equip AI assistants with effective, persistent, and easily accessible memory is ongoing. The current ecosystem offers a plethora of specialized tools, each with unique approaches and varying degrees of maturity. Stitching them together into a cohesive and efficient “second brain” for AI remains a significant, but exciting, challenge.