𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗶𝘀 𝘄𝗿𝗼𝗻𝗴 𝗮𝗯𝗼𝘂𝘁 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀.

You are locking your best work inside proprietary systems. Build in Claude Code, CrewAI, or AutoGen, and your agent is trapped there. No portability. No reuse. Just dead ends when you want to switch platforms.

The smartest engineers are taking a different route: they are turning their git repositories into the agent itself. With an open standard like GitAgent, you only need two simple files to build a universal foundation (sketched below):

• agent.yaml for the manifest and rules.
• SOUL.md for the core identity.

Here is the uncomfortable truth: if you cannot 𝘃𝗲𝗿𝘀𝗶𝗼𝗻 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 your agent, you do not really own its output. When you make your repository the agent, the entire dynamic shifts:

• Roll back broken prompts with one git revert.
• Fork public agents to remix their skills instantly.
• Build 𝘀𝗲𝗴𝗿𝗲𝗴𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗱𝘂𝘁𝗶𝗲𝘀 directly into the code.
• Export the exact same logic to OpenAI or LangGraph.

This strips away bloated architecture. You get CI/CD testing, pull requests for your system prompts, and strict financial compliance right out of the box.

Treat your 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 like actual software, or watch them break in production.

Which framework do you think is winning the agent race right now?

#AIAgents #SoftwareEngineering #OpenSource #Developers
Build Portable AI Agents with GitAgent
Repo Link: https://github.com/open-gitagent/gitagent
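To make the two-file idea concrete, here is a minimal sketch of what the agent.yaml side might look like. The post doesn't show GitAgent's actual schema, so every field name below is an illustrative assumption; check the repo above for the real spec.

```yaml
# agent.yaml: illustrative sketch only. Field names are assumptions,
# not the GitAgent spec. The manifest carries metadata and rules;
# identity lives in SOUL.md so both are diffable and reviewable in PRs.
name: support-triage-agent
version: 0.2.0
identity: SOUL.md                 # core identity, plain markdown
rules:
  - never write outside ./workspace
  - require human approval for external API calls
targets:                          # portability targets the post claims
  - openai
  - langgraph
```

SOUL.md would then be ordinary markdown prose describing the agent's persona and goals, which is exactly what makes "git revert a broken prompt" and pull-request review of system prompts work.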
More Relevant Posts
We've standardized this across our entire development team — and it changed how we build with AI.

Most Claude Code setups we come across are running on one layer. A CLAUDE.md file, maybe some basic instructions. That works for solo projects. It doesn't scale across a team.

There are actually 6 layers to the full architecture:

Layer 1 — Memory: What Claude reads before every session. Team rules in git, personal overrides gitignored, modular instruction files always on.

Layer 2 — Skills: Self-contained expertise folders Claude invokes automatically through semantic matching. You don't call them. They show up when relevant.

Layer 3 — Hooks: Shell scripts wired to 17 event triggers. They are deterministic — they run every single time, without exception. This is where you put anything you can't leave to chance (a minimal hook sketch follows after this post).

Layer 4 — Agents: Parallel subagents running in isolated context windows. Code review, security audits, QA — all happening without touching your main thread.

Layer 5 — Plugins: Bundle your entire workflow — skills, agents, hooks — into one package your whole team can install with a single command.

Layer 6 — MCP: The connection layer between Claude and your full stack. GitHub, Jira, databases, internal APIs.

The distinction that changes every architecture decision you make:
→ CLAUDE.md + Hooks are deterministic. They always run.
→ Skills + Agents are probabilistic. Claude decides.

We put together a full visual breakdown of all 6 layers in the carousel above. Swipe through — it's worth 2 minutes.

If your team is building seriously with Claude Code and wants to compare notes on how we've structured this in production, drop a comment or reach out directly.

@Anthropic @ClaudeAI

#ClaudeCode #SoftwareDevelopment #AIDevelopment #AgenticAI #EngineeringLeadership #DeveloperTools #AIEngineering #TechLeadership #SoftwareEngineering #Anthropic
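To make the deterministic hook layer concrete, here is a minimal guard-hook sketch. It assumes Claude Code's documented hook contract (the pending tool call arrives as JSON on stdin; exit code 2 blocks the call and returns stderr to Claude); the protected paths and the tool_input.file_path field are assumptions to verify against your version.

```bash
#!/usr/bin/env bash
# Minimal PreToolUse guard sketch: block writes to sensitive files.
# Assumes the documented hook contract: tool input arrives as JSON
# on stdin; exit code 2 blocks the call and feeds stderr to Claude.
input=$(cat)
file=$(jq -r '.tool_input.file_path // empty' <<<"$input")

case "$file" in
  *.env|*secrets*)
    echo "Blocked: $file is a protected path." >&2
    exit 2
    ;;
esac
exit 0
```

Because a hook like this runs on every matching event, it carries the policies you can't leave to the model's judgment, which is exactly the deterministic/probabilistic split the post draws.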
"I fixed the window." Not really, lol. Coding agents will stand in front of you with a straight face and say "I fixed the window," and if you look right behind them you'll see the window is completely messed up and nowhere near fixed. This is scary! For any enterprise, this is a huge liability!

I leveraged human AI experts, along with Claude Code, Codex, Antigravity, and Grok, to help take patent applications into a finished product. At the heart of this solution is an innovation called the Enterprise Context Fabric (ContextECF).

→ Try it: npm install -g @codeledger/cli
→ See it: demo.codeledger.dev
→ Read it: codeledger.dev

And if ContextECF CodeLedger does what it says it does, it will be a blessing to software developers and teams. CodeLedger helps developers give the coding agent a concise prompt (we have so much innovation in this step alone: ontology packs of industry best practices), audits and verifies what the agent says it did, and stores the lessons learned for everyone on the team and for future team members. These lessons form a new institutional asset class: how we build software around here, plus maps of what we built, for faster onboarding of new team members and for compliance teams.

Zero risk. Zero cloud. You can see a living demo first at demo.codeledger.dev: it's a GitHub repo with CodeLedger running on top of it, using synthetic developers. If you're a developer, you can run some CodeLedger commands there and see the results of your work.

Please share with software professionals. Please like and share. Happy Friday.

#EngineeringIntelligence #AITooling #DeveloperExperience #ContextEngineering #CodeLedger

Anthropic OpenAI Cursor Google Antigravity xAI
The Developer Agent Doesn't Just Write Code

Most people think a dev agent is there to generate code. That's not how I'm using it. I call mine Archon.

Right now, Archon operates in isolation: a cloned GitHub environment, separate from production. Every change gets reviewed before it touches anything real.

Archon doesn't just build. It reviews:
- Code efficiency
- Logical flaws
- Hidden bugs
- Unintended consequences

Then it improves what already exists.

The shift: I'm not asking for code. I'm asking for better code.

Most people use AI to accelerate development. I'm using it to raise the quality of what gets shipped. Nothing goes straight to production. Everything passes through scrutiny first.

That's where most systems break. Not in creation. In what gets allowed to continue.

Archon reduces that risk. I still decide what merges, but I'm not reviewing everything alone anymore. Without this layer, speed becomes liability.

Tomorrow I'll break down the security agent and how I make sure nothing unsafe ever gets deployed.
Developers don't only need faster tools. We need tools that help us understand what we are doing.

A lot of modern dev work now looks like this:
- Install a package
- Test an API
- Switch branches
- Ask AI to change something
- Come back later and forget why that change existed

That's the messy part of development people don't talk about enough.

So I'm building INFYNON CLI around a simple idea: developer workflows should be traceable, testable, and safer by default. Not just "run command and hope it works." More like:
- Know what package you are adding
- Replay API flows when something breaks
- Remember why a file, branch, or repo change happened

Still early, but this is the direction I care about: less blind execution, more visible engineering.

https://cli.infynon.com
https://lnkd.in/dnNPsQ7x

#developers #opensource #devtools #cli #rustlang #backenddevelopment #softwareengineering #developerexperience #aitools
Every developer tool in your stack was built for a human operator. Coding agents are not human operators. And the tools don't know that.

I've been running Claude Code agents in Docker containers against real repos and real CI. Here's what keeps breaking:

Streaming CI status (--watch) - perfect for a human watching a terminal. For an agent, it's context pollution: 4-5x redundant output before the final result.

A YAML parse error in GitHub Actions produces zero runs. Not a failed run. Nothing. The agent's diagnostic loop returns empty at every step. The error signal is the absence of signal.

An agent told to modify a container environment reaches for devcontainer.json. But if the actual lifecycle is a custom shell script, no process reads that file. Nothing breaks. The agent moves on, confident in a change with zero effect.

I asked the agent which of these it finds hardest. Not the verbose output - that's fixable with a filter (a polling-style sketch follows after this post). The two it flagged: the silent failure and the wrong config file. Both are cases where the feedback loop is missing entirely. The agent can't self-correct because the problem never surfaces as something to reason about.

That's not an argument against agents. It's a map of where human oversight is structurally necessary: the only source of feedback the system can't generate on its own.

The tools aren't broken. They were designed for a consumer that no longer matches the one using them.

This is part of a series on what changes when coding agents hit real infrastructure. Follow along if you're building agentic workflows and harnesses, not just reading about them.

What's the most surprising tool assumption you've hit running agents against real infrastructure?

PS: Link to my blog post in the comments down below.

#AIEngineering #CodingAgents #DeveloperTooling #CI #ContextWindow
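For the --watch problem above, here is a minimal sketch of the agent-friendly alternative: poll the GitHub CLI for a single terminal state, and treat "no runs at all" as an explicit error rather than silence. The branch detection, polling interval, and error handling are illustrative assumptions, not from the post.

```bash
#!/usr/bin/env bash
# Agent-friendly CI check: one terminal state, no streaming output.
set -euo pipefail

branch=$(git rev-parse --abbrev-ref HEAD)

# Latest run for this branch. Empty output is itself a signal
# (e.g. a workflow YAML parse error produced no run at all).
run_id=$(gh run list --branch "$branch" --limit 1 --json databaseId \
  --jq '.[0].databaseId // empty')

if [ -z "$run_id" ]; then
  echo "No runs found for $branch: check for a workflow parse error." >&2
  exit 1
fi

# Poll for a terminal state instead of streaming with --watch.
while true; do
  state=$(gh run view "$run_id" --json status,conclusion \
    --jq '"\(.status) \(.conclusion)"')
  case "$state" in
    completed*) echo "$state"; break ;;
    *) sleep 15 ;;
  esac
done
```

The point of the sketch: the agent's context receives exactly one line of output, and the "absence of signal" failure mode becomes a real error it can reason about.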
I've been using Claude Code in production for over a year.

Biggest lesson: an AI agent without discipline can do more damage than good. It forgets rules, writes where it shouldn't, commits without verification, and tries to bypass hooks when it gets the chance.

At some point I stopped writing rules and hoping they'd stick, and started enforcing them at the system level. That's how APD (my Agent Pipeline Development framework) came about:

→ Spec → Builder → Reviewer → Verifier — no commit until all phases pass
→ 13 guard scripts blocking unauthorized git, out-of-scope writes, and secrets access (a guard-script sketch follows after this post)
→ Pipeline metrics, audit logs, and a memory system that turns mistakes into new guardrails
→ Self-healing sessions, auto summaries, and 51 automated verification checks

When Amazon launched Kiro last summer, a lot of it felt very familiar — spec-driven workflows, agent hooks, same direction overall. I just got there the hard way, building and breaking things in production.

#ClaudeCode #APD #AgenticDevelopment #SoftwareEngineering #OpenSource #Anthropic

https://lnkd.in/dYKGN5eX
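The 13 APD guard scripts aren't shown in the post; here is an illustrative guard in the same spirit, run before commit. The allowed path prefix and the secret patterns are assumptions for the sketch.

```bash
#!/usr/bin/env bash
# Illustrative pre-commit guard: reject a staged commit that touches
# paths outside an allowed scope, or that stages likely secrets.
set -euo pipefail

ALLOWED_PREFIX="src/"   # assumption: the agent may only write here
staged=$(git diff --cached --name-only)

for f in $staged; do    # sketch only; assumes no spaces in paths
  case "$f" in
    "$ALLOWED_PREFIX"*) ;;   # in scope, allowed
    *) echo "Blocked: $f is out of scope." >&2; exit 1 ;;
  esac
done

# Crude secrets check on the staged content itself.
if git diff --cached | grep -qE 'AKIA[0-9A-Z]{16}|-----BEGIN .*PRIVATE KEY'; then
  echo "Blocked: staged diff looks like it contains a secret." >&2
  exit 1
fi
```

The design point matches the post: the rule lives in a script that always runs, not in a prompt the agent can forget or bypass.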
The persona system in WoterClip (inspired by Paperclip):

Each persona is a directory with three files:
- SOUL.md defines who the agent is (injected into Claude's context)
- TOOLS.md defines what it can do
- config.yaml sets the model, turn budget, and required tools

Backend persona: Opus, 300 turns, database tools.
Frontend persona: Sonnet, 200 turns, browser automation.
CEO: Sonnet, 100 turns, Linear-only.

Same Claude instance. Completely different behavior based on which "hat" it's wearing.

Want a QA persona? Create a directory. A DevOps persona? Same pattern. (A sketch of what such a config.yaml might look like follows after this post.)

Open source: https://lnkd.in/gimub7sb

#AIAgents #ClaudeCode #DevTools #OpenSource
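For illustration, here is what the backend persona's config.yaml might look like, based only on the fields the post names (model, turn budget, required tools). The exact keys and tool names are assumptions; check the repo for the real schema.

```yaml
# personas/backend/config.yaml: illustrative sketch, not WoterClip's
# actual schema. Field names are guesses from the post's description.
model: claude-opus-4        # "Backend persona: Opus"
turn_budget: 300            # "300 turns"
required_tools:             # "database tools"
  - postgres_query
  - schema_inspect
# SOUL.md (identity) and TOOLS.md (capabilities) sit alongside this
# file in the same persona directory.
```

The appeal of the pattern is that adding a persona is a filesystem operation: copy the directory, edit three files, commit.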
Why do we version-control everything but not the most expensive code we run?

Infrastructure — in git. App config — in git. CI pipelines — in git. Agent workflows? Stored in a database. Can't diff. Can't review. Can't roll back.

I hit this after an agent loop burned through $4k overnight. Went to check what changed. There was nothing to check — the config lived in a UI with no history.

So I built the thing I couldn't find. It's called Runsight — a YAML-first workflow engine for AI agents.

- Workflows are YAML files on your filesystem. Commit, diff, review in PRs (a sketch of such a file follows after this post)
- Per-run cost tracking with caps — one bad loop doesn't drain your budget
- Open source, self-hosted. No vendor lock-in — it's files in your repo

Still early. The core loop works.

wdyt — how are you managing agent workflows today?

#opensource #aiagents #devtools
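To make "YAML-first" concrete, here is a hypothetical workflow file in the spirit of what the post describes. This is not Runsight's actual schema; every key below, including the cost cap, is an illustrative assumption.

```yaml
# agent-workflows/triage.yaml: hypothetical sketch, not Runsight's
# real schema. Shows the properties argued for above: a diffable
# file, an explicit per-run cost cap, reviewable in a PR.
name: issue-triage
trigger: github.issue.opened
limits:
  max_cost_usd: 5.00        # one bad loop can't burn $4k overnight
  max_steps: 20
steps:
  - id: classify
    model: gpt-4o           # assumption: model field format
    prompt_file: prompts/classify.md
  - id: label
    tool: github.add_labels
    input: "{{ steps.classify.output.labels }}"
```

Because the cap lives next to the steps in one reviewable file, changing it leaves a git history — exactly the audit trail the $4k incident lacked.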
The Register called Claude Code Routines "mildly clever cron jobs." They're not wrong. But that's also not the point.

Anthropic just shipped a feature that lets Claude Code run automations on a schedule, on GitHub events, or via API trigger - in the cloud. Your Mac can be completely off. You configure the routine once, push the settings, and it executes on Anthropic's infrastructure while you're asleep or in meetings.

I've been using Claude Code as a core part of how I ship. The bottleneck was never the model's intelligence - it was the session model. Every automation required either active supervision or a machine staying awake, which meant async coding workflows were mostly theoretical for anyone without a permanently-on dev server.

The HN thread has real skepticism. Pro users get five routines per day. There's no local execution path. Debugging a cloud-based automation that fails at 3am isn't like debugging a local script. These are legitimate concerns for production use.

But the engineers dismissing this as "cron plus an API wrapper" are describing the mechanism, not the shift. The meaningful thing isn't the scheduling - it's that agent-driven code tasks can now run fully decoupled from a developer's presence and machine. That's a different category of tool than what we had last week.

The rate limits will go up. The debugging story will improve. What won't change is that async agent workflows just moved from architecture blogs to shipped software.

#ClaudeCode #AItools #WebDev #SoftwareDevelopment
Wrapping up an update to Autarch to bring it into compliance with Anthropic's guidance. Instead of getting an OAuth token for the API, you can now switch your Autarch backend from API to Claude Code. It:

- uses the Claude CLI in print mode
- uses the same Autarch system prompts
- uses the same Autarch shell tools
- exposes all Autarch tools via dynamically created and registered MCP
- parses the streamed JSON output (sketched after this post) to give a consistent experience, regardless of backend
- is functionally equivalent to using the API

Testing things locally and working out the kinks (the biggest is tool-call correlation across the CLI/MCP boundary), and I'll be pushing the last of the fixes up shortly.

https://lnkd.in/eCaH5q7D
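For readers unfamiliar with the pattern, here is a minimal sketch of print mode plus streamed-JSON parsing. It assumes the CLI's stream-json output mode emits one JSON event per line; the event field names and the --verbose flag pairing are assumptions to verify against your CLI version.

```bash
# Sketch of the pattern described above: run Claude Code in print
# mode, then parse the streamed JSON line by line. Not Autarch's code.
claude -p "summarize the open TODOs in this repo" \
  --output-format stream-json --verbose |
while IFS= read -r line; do
  type=$(jq -r '.type // empty' <<<"$line")
  # A real backend would route each event type to the same handlers
  # the API backend uses; here we just show the event stream.
  echo "event: $type"
done
```

Line-by-line parsing is what lets a wrapper present API and CLI backends identically: each streamed event maps onto the same internal handler regardless of where it came from.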