Hello all, kindly check out my latest GitHub repo. Use it, let me know how it worked for you, and share feedback. If you like it, give it a ⭐.

Agent Diaries is a lightweight, zero-dependency local SDK that prevents your AI agents from getting stuck in loops. By providing a persistent "diary" memory, your agents can remember their past actions, avoid repetitive tasks, and reflect on their results over time.

✨ Features
🚫 Duplicate Prevention: Automatically filter out tasks your agent has already processed across different sessions.
💾 Local-First Storage: Operates entirely locally. Diary entries are stored as lightweight JSON files, no expensive cloud KV or vector databases required.
⚡ Zero Dependencies: Fast, secure, and built entirely with native Node.js APIs.
🔌 Highly Extensible: Comes with an easy-to-use StorageAdapter interface so you can swap local files for SQLite, Redis, or anything else.
🛡️ 100% TypeScript: Built-in type safety and an excellent developer experience.

📦 Installation
Install the package via npm: npm install @swapwarick_n/agent-diaries

https://lnkd.in/dpb6urpP
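The duplicate-prevention idea is simple to illustrate. Here is a minimal sketch of the concept in Python (the actual SDK is TypeScript, and all names here are hypothetical, not the library's API): the diary persists processed task IDs to a local JSON file, so a new session can skip work any earlier session already did.

```python
import json
import os
import tempfile

class Diary:
    """Minimal diary: persists processed task IDs to a local JSON file."""

    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.entries = set(json.load(f))
        else:
            self.entries = set()

    def seen(self, task_id):
        # True if this task was already processed in any session.
        return task_id in self.entries

    def record(self, task_id):
        # Mark a task as done and persist immediately.
        self.entries.add(task_id)
        with open(self.path, "w") as f:
            json.dump(sorted(self.entries), f)

diary = Diary(os.path.join(tempfile.mkdtemp(), "diary.json"))
tasks = ["fetch-report", "send-email", "fetch-report"]  # note the repeat
processed = []
for t in tasks:
    if diary.seen(t):
        continue  # skip duplicates, even across runs
    processed.append(t)
    diary.record(t)
```

Because the diary is re-read from disk on construction, a second process pointing at the same file would skip all three tasks entirely.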
Agent Diaries Lightweight Local SDK for AI Agents
More Relevant Posts
Just published a new open-source tool to GitHub. Claude Code has no built-in way to track how many tokens you are burning across sessions, so I built one.

It is a two-file setup. A Python server reads the JSONL session logs Claude Code writes to ~/.claude/projects/ and serves the data to a local browser dashboard. It breaks down input, output, cache-write, and cache-read tokens per session and calculates estimated cost using current Sonnet 4 rates. The whole thing auto-refreshes every 30 seconds and exports to CSV. No external dependencies, no cloud, nothing leaves your machine.

Two features are already in the pipeline. First, a configurable token budget alert that warns you in the dashboard when you cross a daily or monthly threshold. Second, an n8n workflow that pulls from the local API and delivers a usage digest by email on whatever schedule you want.

#claudecode #aitools #anthropic #ITAutomation #DeveloperTools #opensource
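The aggregation step can be sketched in a few lines. This is a hedged illustration, not the tool's actual code: the field names follow Anthropic's usage-object conventions but the real Claude Code log schema may nest them differently, and the rates are placeholders you should replace with current pricing.

```python
import json

# Hypothetical per-million-token rates in USD; check current Sonnet pricing.
RATES = {"input": 3.00, "output": 15.00, "cache_write": 3.75, "cache_read": 0.30}

def summarize(jsonl_lines):
    """Aggregate token counts from JSONL session-log lines and estimate cost."""
    totals = {k: 0 for k in RATES}
    for line in jsonl_lines:
        usage = json.loads(line).get("usage", {})
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
        totals["cache_write"] += usage.get("cache_creation_input_tokens", 0)
        totals["cache_read"] += usage.get("cache_read_input_tokens", 0)
    cost = sum(totals[k] / 1_000_000 * RATES[k] for k in RATES)
    return totals, cost

log = [
    '{"usage": {"input_tokens": 1200, "output_tokens": 400}}',
    '{"usage": {"input_tokens": 800, "cache_read_input_tokens": 5000}}',
]
totals, cost = summarize(log)
```

In a real setup you would point this at every *.jsonl under ~/.claude/projects/ and group the totals per session file.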
Found a GitHub repo today that fixes the most expensive mistake in Claude Code setups. It's called Claude Context, built by Zilliz (the team behind Milvus).

Here's the problem it solves: when you run Claude Code on a large repo, the typical response is to dump entire directories into context. The costs spiral. A single refactoring session can burn $30-50 in tokens because Claude keeps reading files that aren't relevant to the task.

Claude Context fixes the root cause, not the symptom. It indexes your entire codebase into a vector database. When Claude needs context, it runs a semantic search and retrieves only the relevant code chunks. No directory dumps. No manual file curation. Session costs stay flat as your repo grows.

Here's how it works technically:
1. Index once: run the setup command and Claude Context builds a vector index of your entire codebase.
2. Plug into Claude Code via MCP: one config line and it's live.
3. Every Claude Code session now retrieves semantically relevant files instead of reading everything.

What I like most: it works with any MCP-compatible agent. Cursor, Windsurf, Claude Code. Not locked to one tool. Also ships as a VS Code extension and npm packages if you prefer that integration path. 7,500+ stars on GitHub. Trending #1 today. MIT license.

The detail that makes this practical: you self-host the vector DB (Milvus) or use Zilliz Cloud. Your code never leaves your environment.

For anyone building on large codebases, this is worth 20 minutes to set up. (Link to repo in the comments)

---

🎁 Bonus: get the best AI news, tips, tutorials and resources in my newsletter. Join today and receive:
• AI Playbook
• 60+ free AI courses
• 3000+ helpful prompts
100% FREE 👉 https://lnkd.in/gmbECShG
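The retrieval step in the three points above is worth seeing concretely. A toy Python sketch of the flow: the real Claude Context uses a neural embedding model and Milvus, while this stand-in uses bag-of-words vectors and cosine similarity purely to show why top-k retrieval beats dumping every file into context.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. A real setup would
    call a neural embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k code chunks most similar to the query, instead of
    handing the whole repo to the model."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "def parse_config(path): load yaml config settings",
    "class UserAuth: handle login tokens and sessions",
    "def render_chart(data): draw the dashboard chart",
]
top = retrieve("where is the yaml config parsed", chunks, k=1)
```

The cost property follows directly: context size is k chunks regardless of how many chunks the index holds, so session cost stays flat as the repo grows.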
AI Agents Are Breaking Microsoft GitHub.

GitHub went from a platform people never questioned to one that's genuinely under pressure. Here's why it matters even if you don't write code.

AI agents have pushed commit volume on GitHub to roughly 14 times last year's number. Public commits from Claude Code alone went from about 100,000 per week to more than 2.5 million per week in six months. That's one agent from one provider.

The infrastructure strain is the visible problem. The harder one is economic. GitHub charges flat monthly subscriptions. An agent pushing thousands of commits a day pays the same as a developer pushing code once a week. The platform absorbs the cost without capturing any of the incremental value. That's not a pricing strategy. That's a countdown.

Meanwhile, GitHub's former CEO left to build a competitor targeting AI-generated code. OpenAI is reportedly building its own code hosting platform. New users entering through AI tools have no muscle memory tied to GitHub, no legacy repositories creating switching costs, and no reason to default to the incumbent. An AI agent doesn't care where it pushes code. It goes wherever the API endpoint is configured to send it. That's a fundamentally different kind of user than a developer who has spent years building workflows on a platform.

If you run a company building software, three questions are worth sitting with now: How deeply is GitHub embedded in your stack? What happens to your costs if pricing shifts to usage-based models? What's your contingency if reliability continues to degrade?

The outages have already started. The competitive field is forming. The time to think about this is before the answer becomes obvious.
GitHub shipped `gh skill`, a new CLI command for discovering, installing, and publishing agent skills. Skills are portable bundles of instructions, scripts, and context that teach AI agents how to do specific tasks.

The mechanics are simple: `gh skill install` pulls a skill from a GitHub repository into your environment, and `gh skill publish` shares one you've built. There's a discovery layer so you're not hunting for things by URL. The skills command makes agent know-how something you can install, version, and share systematically.

If you want something closer to a full dependency manifest for agent config (skills, instructions, MCP servers, and plugins all declared in a lockfile), APM is a great, comprehensive option worth a look alongside this.

- gh skill: https://lnkd.in/gDkZ9HmV
- microsoft/apm: https://lnkd.in/gsNFs-4P
GitHub Copilot has published updated guidance on pricing and said they are moving to usage-based billing in June. Articles are linked at the bottom of the post. Here are a couple of impressions from me:

Flat subscriptions are ending across the industry, giving way to usage-based pricing now that adoption has grown significantly. This could later shift to an "outcome pricing model" as well, where the value of the tokens' output is what gets licensed, but for now it is the "compute cost" of the token. Meaning, a "synthetic worker" that produces valuable output tokens in the medical field, advancing the discovery of cures or treatments, may be seen as more valuable to enterprises than a call center agent generating a different kind of output, even though compute costs could be comparable in volume under the right conditions. This distinction is one that hyperscalers may attempt to monetize differently.

Token efficiency matters significantly more in a usage-based model than in a model that limits the number of interactions. This has pushed the industry toward ultra token efficiency, with one GitHub repo gaining huge popularity as an example: it makes Claude talk like a "caveman", cutting token usage by 65% without sacrificing technical capability.

Articles and GitHub reference:
https://lnkd.in/ecDKBPGA
https://lnkd.in/epymYbfi
https://lnkd.in/e-v_3EUf

#AI #TokenEfficiency #ResponsibleAIConsumption
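To see why token efficiency matters so much more under usage-based billing, a quick back-of-the-envelope calculation helps. All rates and volumes below are hypothetical, chosen only to show the mechanics:

```python
def monthly_cost(tokens_per_day, rate_per_million, days=30):
    """Usage-based cost: you pay per token, so cutting tokens cuts cost
    one-for-one. Under a flat subscription the same cut saves nothing."""
    return tokens_per_day * days / 1_000_000 * rate_per_million

# Hypothetical agent: 2M tokens/day at a blended $10 per million tokens.
baseline = monthly_cost(tokens_per_day=2_000_000, rate_per_million=10.0)
# Same workload after a 65% token reduction (35% of the tokens remain).
efficient = monthly_cost(tokens_per_day=2_000_000 * 0.35, rate_per_million=10.0)
savings = baseline - efficient
```

Under a flat subscription both scenarios cost the same; under metering, the 65% token cut translates directly into a 65% bill cut.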
We just published the KubeOpt GitHub Action to the GitHub Marketplace. Add it to any repo and every pull request gets a Kubernetes cost breakdown: monthly spend per cluster, savings available, and top rightsizing opportunities ranked by dollar impact.

- uses: kubeopt/kubeopt@v1
  with:
    kubeopt-url: ${{ secrets.KUBEOPT_URL }}
    kubeopt-username: ${{ secrets.KUBEOPT_USERNAME }}
    kubeopt-password: ${{ secrets.KUBEOPT_PASSWORD }}

Three secrets. One workflow file. No local install needed. It also runs on a schedule for weekly cost reports, and supports manual dispatch for one-off scans. Read-only. It never touches your cluster.

Find it on the GitHub Marketplace: KubeOpt Cost Scan
Repo: https://lnkd.in/g4MuS92S

#kubeopt #kubernetes
If you are using GitHub Copilot, you need to be aware that your interactions and private repository code will be used to train their models unless you OPT OUT. This is very subtle, and most users will not even be aware that this may be happening. Time to protect your code and interactions and change the default setting to opt out of this behaviour. https://lnkd.in/gEG9zf9X
A single git push can execute arbitrary commands on GitHub's backend servers. CVE-2026-3854 is a command injection in GitHub's push processing pipeline: user-supplied push option values were not sanitized before being injected into internal service headers. A standard git client, any authenticated user, full RCE.

Here is the high-signal breakdown of the chain:

The RCE chain. Three injections chained together. A non-production rails_env bypasses the sandbox, custom_hooks_dir redirects the hook directory, and a crafted hook entry executes arbitrary commands as the git user. Result: full filesystem read/write and visibility into internal service configurations.

The cross-tenant blast radius. This is the nightmare scenario. GitHub's shared storage architecture meant code execution on one node gave access across tenants. Millions of public and private repositories, including those of other organizations, were accessible on the affected nodes.

The AI angle: IDA MCP. This is one of the first critical vulnerabilities discovered in closed-source binaries using autonomous AI. Wiz used IDA MCP for automated reverse engineering across compiled binaries. AI is now finding bugs faster than humans can patch them.

The exposure now. GitHub.com was patched within two hours of disclosure on March 4. Public disclosure was held until yesterday to give Enterprise Server operators time to patch. Current state: 88% of GHES instances remain unpatched.

The takeaway: the responsible disclosure window is officially closed. If you run GitHub Enterprise Server, you are likely still exposed. Upgrade to GHES 3.19.3 now. Not this week. Now.

#AppSec #DevSecOps #GitHub #AIInfrastructure #SoftwareEngineering
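The class of bug is easy to illustrate on the defensive side. A hedged Python sketch (none of this is GitHub's actual code; the option names and formats are hypothetical) of the fix pattern: user-supplied push-option values must pass a strict allowlist before they are forwarded into anything interpreted downstream, such as internal service headers.

```python
import re

# Strict allowlist for values forwarded in internal service headers.
# CR/LF, shell metacharacters, and path separators are all rejected,
# which blocks header injection and hook-path redirection attempts.
SAFE_VALUE = re.compile(r"^[A-Za-z0-9._-]{1,128}$")
ALLOWED_OPTIONS = {"ci.skip", "merge_request.create"}  # hypothetical keys

def sanitize_push_options(options):
    """Keep only known option keys whose values pass strict validation."""
    clean = {}
    for key, value in options.items():
        if key in ALLOWED_OPTIONS and SAFE_VALUE.match(value):
            clean[key] = value
    return clean

raw = {
    "ci.skip": "true",
    "rails_env": "development",                       # unknown key: dropped
    "merge_request.create": "x\r\nX-Internal: pwn",   # header injection: dropped
}
clean = sanitize_push_options(raw)
```

The vulnerability described above is, in essence, what happens when this validation step is missing: the raw values flow straight into internal headers and configuration.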
bareagent v0.5.0: MCP Bridge

bareagent is a lightweight agent orchestration library. You give it an LLM, tools, and a goal, and it runs the think/act/observe loop until done. Zero deps, ~2000 lines, composable components you can use independently.

Until now, you wired tools manually. v0.5.0 changes that. The new MCP Bridge auto-discovers servers from your IDE configs (Claude Code, Cursor, Claude Desktop), connects via stdio, and exposes every tool in bareagent's standard format. Your agent doesn't know it's talking to MCP; it just sees tools. If you have barebrowse, baremobile, filesystem, GitHub, Slack, or any other MCP server configured, bareagent can orchestrate all of them. One agent, every tool.

This is exciting and terrifying in equal measure. We've all seen the OpenClaw demos: autonomous agents clicking, typing, navigating with real hands. The MCP ecosystem is giving agents those same hands: file access, browser control, messaging, database operations. The problem is obvious. You can't give an autonomous agent 25+ tools across multiple servers and hope it doesn't delete something, send something, or break something.

That's why the bridge ships with a governance layer. The first run writes a .mcp-bridge.json listing every discovered tool as "allow". Change any to "deny" and the tool disappears entirely: the LLM never sees it, can't call it, doesn't know it exists. One JSON file is the single source of truth for what your agent is allowed to touch. Claws clipped, hands still functional.

No code changes to restrict tools. No redeployment. Edit a JSON file. Deny entries survive refresh. Agents with hands need governance by default, not as an afterthought.

npm install bare-agent@0.5.0
https://lnkd.in/dQvgdeWE
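The governance layer described above is conceptually tiny, which is the point. A Python sketch of the allow/deny filter (bareagent itself ships on npm; the .mcp-bridge.json format here is a guess based on the post, and the tool names are hypothetical):

```python
import json

# Guessed shape of .mcp-bridge.json: tool name -> "allow" | "deny".
policy_json = """{
    "filesystem.read_file": "allow",
    "filesystem.delete_file": "deny",
    "slack.send_message": "deny"
}"""

def visible_tools(discovered, policy_json):
    """Return only the tools the policy allows. Tools absent from the
    policy default to allow, since the bridge writes every discovered
    tool as "allow" on first run. Denied tools are never surfaced, so
    the LLM cannot see or call them."""
    policy = json.loads(policy_json)
    return [t for t in discovered if policy.get(t, "allow") != "deny"]

discovered = [
    "filesystem.read_file",
    "filesystem.delete_file",
    "slack.send_message",
    "github.create_issue",
]
tools = visible_tools(discovered, policy_json)
```

Filtering before the tool list ever reaches the model, rather than rejecting calls afterwards, is what makes the denial airtight: a tool that is not in the prompt cannot be invoked.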