Introducing Rail Tech MME for Persistent LLM Agents

Most LLM agents forget everything between sessions. The ones that don't usually bolt on a vector database, which works, but retrieves by semantic similarity and tends to surface adjacent noise alongside the things that actually matter.

I've been building MME (Modular Memory Engine) at Rail Tech to take a different angle. MME stores memories as a weighted tag graph: every saved fact is automatically tagged, and retrieval propagates through the graph by keyword overlap and learned edge weights. It's designed to sit alongside your vector DB, not replace it. Keep your embeddings; add a layer that knows what actually got used.

This week we shipped the official Python SDK:

👉 pip install railtech-mme

It's a thin, typed client over the MME REST API: sync and async, full Pydantic models, and a LangChain extra (MMESaveTool, MMEInjectTool) that drops into any LangChain or LangGraph agent. It works on a cold account from day one, with no embedding warm-up and no fine-tuning.

If you're building LLM agents in Python and you're tired of either watching your agent forget everything or wrestling with a vector DB, take a look:

→ pip install railtech-mme
→ https://lnkd.in/enWXdmfD (mme.railtech.io)

Would love feedback from anyone shipping agents in production.

#LLM #AI #Python #LangChain #AgentMemory
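To make the retrieval idea concrete, here is a toy sketch of tag-graph retrieval as described above: memories carry tags, tags are connected by weighted edges, and a query activates tags by keyword overlap, then propagates one hop through the edges. Everything here, the memory contents, tag names, edge weights, and the one-hop propagation rule, is illustrative only; it is not MME's actual algorithm or the SDK's API.

```python
from collections import defaultdict

# Hypothetical memory store: each saved fact carries a set of tags.
memories = {
    "m1": {"text": "User prefers dark mode", "tags": {"ui", "preferences"}},
    "m2": {"text": "Deploys run on Fridays", "tags": {"deploy", "schedule"}},
    "m3": {"text": "Dark mode broke the deploy dashboard", "tags": {"ui", "deploy"}},
}

# Illustrative "learned" edge weights between tags (treated as symmetric).
edges = {("ui", "preferences"): 0.8, ("ui", "deploy"): 0.3}

def retrieve(query: str, top_k: int = 2) -> list[str]:
    query_tags = set(query.lower().split())
    # Direct activation: tags that overlap the query's keywords.
    activation = defaultdict(float)
    for tag in query_tags:
        activation[tag] = 1.0
    # One hop of propagation through the weighted tag edges.
    for (a, b), w in edges.items():
        if a in query_tags:
            activation[b] = max(activation[b], w)
        if b in query_tags:
            activation[a] = max(activation[a], w)
    # Score each memory by the summed activation of its tags.
    scored = sorted(
        ((sum(activation[t] for t in m["tags"]), mid) for mid, m in memories.items()),
        reverse=True,
    )
    return [memories[mid]["text"] for score, mid in scored[:top_k] if score > 0]

print(retrieve("ui preferences"))
# The direct hit (m1) ranks first; m3 also surfaces via the ui tag,
# while the unrelated m2 gets only a weak propagated score.
```

The point of the sketch: a memory can surface because its tags are *connected* to the query's tags, not because its text is embedding-similar, which is the behavioral difference from pure vector retrieval.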
