Every enterprise is asking the same question right now: how do we get coding agents to follow the same rules as our developers, no matter which vendor they come from? Welcome to coding agent onboarding.

Onboarding human developers into team norms (approved tech stacks, deployment processes, test coverage, documentation guidelines) is standard practice. But for coding agents, this onboarding process doesn’t exist yet. And without it, leaders risk shadow rules, inconsistent quality, and compliance gaps.

Imagine being able to package up your team’s rules (everything from testing requirements to deployment workflows) into reusable modules that any coding agent can consume, regardless of whether it’s GitHub Copilot, Claude Code, or Cursor. And imagine being able to do this in a way that scales to software projects of any size, where coding agents always get the context that matters.

This is exactly what Context Compilation in Agent Package Manager (APM) makes possible:

🔹 Teams capture rules in targeted instruction files (just as they do today).
🔹 These are packaged as context modules (agent packages) with APM.
🔹 Any project can declare dependencies on the relevant packages.

Context Compilation injects these rules into the project via dynamically generated, nested AGENTS.md files (already the cross-platform standard for coding agents), and it does so in a way optimized to minimize context pollution for agents.

The outcome? 👉 Onboarding coding agents into enterprise development becomes as simple as running two commands:

apm install
apm compile

This isn’t just technical convenience: it’s a new layer of governance, scale, and trust for AI-native development in the enterprise.
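As a sketch of what such a dependency declaration might look like, a project could list the context modules it consumes in a manifest. The file name, package names, and fields below are illustrative assumptions, not APM's documented schema:

```yaml
# Hypothetical apm.yml -- field and package names are invented for
# illustration; consult APM's own docs for the real schema.
dependencies:
  - acme/testing-standards     # coverage thresholds and test naming rules
  - acme/deployment-workflow   # approved release and rollback process
  - acme/approved-tech-stack   # allowed languages, frameworks, versions
```

Under this sketch, `apm install` would fetch the declared packages and `apm compile` would generate the nested AGENTS.md files that coding agents read.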
How to Set Coding Constraints for Software Projects
Summary
Setting coding constraints for software projects means defining rules and guidelines that shape how code is written, reviewed, and maintained—helping teams ensure consistency, safety, and clarity, whether humans or AI agents are writing the code. These constraints can cover everything from formatting and tech stack choices to how much uncertainty is allowed in AI-generated code, making software development more predictable and manageable for everyone involved.
- Document project rules: Create clear instruction files or rule sets that outline coding standards, technology choices, and testing requirements to guide developers and AI coding agents.
- Mark uncertainty levels: Assign an “entropy tolerance” to each task or ticket so your team knows how closely AI-generated code should be reviewed, focusing energy where it matters most.
- Centralize settings: Use configuration and dependency files to store formatting rules and package versions, preventing drift and keeping your codebase organized from the start.
Last month Gergely Orosz interviewed Martin Fowler, and Mr. Fowler compared software development with non-deterministic LLMs to structural engineering, where you deal with degrees of tolerance. Quote: "we need probably some of that kind of thinking ourselves what are the tolerances of the non-determinism that we have to deal with and realizing that we can't skate too close to the edge". 100% agreed! I've written about and been giving conference talks on my approach to this, which I call "Entropy Tolerance" (a nod to Claude Shannon's entropy). One of the simplest, most impactful things teams working with LLMs can do right now, today, is to mark the entropy tolerance of the tickets the team is working on.

The definition of entropy tolerance is: how much uncertainty (or AI-generated "guesswork") a software-supported process can handle. The idea is not the tolerance of the code itself, but of the processes the code is supporting. For example, the entropy tolerance of a prototype built for testing is high, but the tolerance of a function that calculates payroll is low.

The entropy tolerance (ET) of a ticket tells you how detailed your human code review needs to be. This has other benefits. Teams I talk to say they are getting burned out reviewing thousands of lines of AI-generated code. By marking certain work as high tolerance, you can focus your team's code review energy on the areas of highest risk, and permit AI-generated bugs in areas where the risk is low or issues can be identified and fixed over time. If you mark your tickets in Jira, Linear, or however you manage work as high, medium, or low tolerance, you can carry that forward into code reviews and pull requests.

High tolerance: code review can be cursory, or you can just review the running software.
Medium tolerance: code should be reviewed, but the review can be high-level; coders don't need to burn out reading every line.
Low tolerance: every line of code must be reviewed by a human coder.
Another benefit of this is setting accurate expectations for yourself and your team. If you end up with most tickets having a low ET, then you know you simply aren't going to get the 10x speed-up with LLMs that social media tells you you should have. You've surfaced that the processes your software supports can't risk unreviewed AI-generated code. And that's OK! It means you need to adjust your processes and have your coders prompt and review in slower, smaller, iterative steps to avoid burnout. So consider the Entropy Tolerance of the process supported by the code you are writing today. Surface that tolerance in the related tickets or requirements. Avoid burnout. Avoid unreasonable expectations of AI-enhanced productivity. Your team will thank you. [Links to the blog post and recording of a conference talk on the subject in the comments 👇]
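The high/medium/low scheme above is simple enough to encode directly in tooling, e.g. as a pre-review check on ticket labels. A minimal sketch (the label names and policy wording are taken from the post; the function itself is an invented illustration, not part of any real tracker integration):

```python
# Map a ticket's entropy-tolerance (ET) label to the review depth it
# requires, per the high/medium/low scheme described in the post.
REVIEW_POLICY = {
    "high": "cursory review, or just exercise the running software",
    "medium": "high-level review; no need to read every line",
    "low": "every line must be reviewed by a human coder",
}

def review_policy(entropy_tolerance: str) -> str:
    """Return the review depth for a ticket's ET label.

    Unrecognized or missing labels default to the strictest policy,
    on the assumption that unlabeled work should be treated as low ET.
    """
    return REVIEW_POLICY.get(entropy_tolerance.strip().lower(),
                             REVIEW_POLICY["low"])
```

A script like this could run in CI and post the expected review depth as a pull-request comment, carrying the ticket's ET forward into the review itself.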
Stop "Vibe Coding" and Start Building with Intent 🚀

I’ve been exploring a game-changer for AI-assisted engineering: Spec Driven Development (SDD) using the new GitHub Spec Kit. We’ve all been there: tossing prompts at an LLM and hoping for the best. It’s fast, but it’s often messy and hard to scale. John Capobianco recently demonstrated a much more professional path forward using Claude Code and the Specify CLI.

The SDD Workflow Breakdown:
📜 Constitution: Define your project’s "soul" (its tech stack, constraints, and values) before a single line is written.
📑 Specification: Draft a full MVP spec with user stories and edge cases.
🔍 Clarification: The AI actually asks you questions to fill in the gaps.
🗺️ Planning & Tasks: Turn that vision into a technical roadmap and a list of atomic, verifiable tasks.
🛠️ Implementation: Only now does the code get written, guided by the specs, not just "vibes."

Why this matters: it brings the discipline of traditional software engineering into the AI era. It’s not just about getting code; it’s about getting the right code that fits your architecture and long-term goals. Whether you're building a side-scrolling game (like in the demo!) or a complex enterprise integration, this kit ensures your AI agent is an architect, not just a copy-paster.

Check out the full tutorial here: An Introduction to Spec Driven Development (SDD) with the GitHub SpecKit

#SoftwareEngineering #AI #ClaudeCode #GitHub #SpecDrivenDevelopment #Programming #SpecKit
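To make the first step concrete: a project "constitution" is just a short document the agent reads before any code is written. A minimal sketch (the contents are invented for illustration, not actual Spec Kit output):

```markdown
# Project Constitution (example -- contents are invented, not Spec Kit output)

## Tech stack
Python 3.12, FastAPI, PostgreSQL. No new frameworks without approval.

## Constraints
Every endpoint ships with tests; unit tests make no external network calls.

## Values
Readability over cleverness. Small, reviewable changes. Explicit error handling.
```

The later steps (specification, plan, tasks) then have to stay consistent with this file, which is what keeps the agent from drifting into "vibes."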
Cursor often goes haywire when generating code: not following best practices, adding try/catch blocks that fail silently, ignoring your design patterns. The best way to overcome this is by using Cursor rules.

Why Cursor rules lift productivity: Cursor lets you drop a .cursorrules file (or add project "Rules" in settings) that the editor automatically prepends to every AI request. Developers say this context cuts repetitive clarifications, keeps coding style consistent, and speeds up development. Because the guidance is persistent, you waste less time re-explaining domain facts or edge-case constraints.

How rules work: Large language models have no memory between completions, so Cursor reads your rule block and inserts it as a system prompt each time the AI runs. Rules can be global, file-scoped, or attached to specific commands, giving you fine control over when the extra context appears.

Bonus: using ChatGPT or Claude to craft rules
- Draft the skeleton: in ChatGPT or Claude, paste a short project description plus a few style examples, then ask: "Generate a .cursorrules snippet that enforces these patterns and adds bullet-proof docstring guidance." This works because both models can output structured text that you can drop straight into Cursor.
- Iterate: when you notice recurring AI missteps, ask the model to amend the file with corrective lines such as "Prefer functional components over classes in React."
- Leverage community gists: reuse open-source rule sets on GitHub as starting points and tweak them with ChatGPT prompts like "adapt these for Go with go test coverage expectations."

Folding your PRD into the rules: your PRD already distills user goals, constraints, and success criteria. Add a section in .cursorrules titled "Project Context" and paste the key bullets: architecture choices, naming conventions, edge-case tests. The AI will now reference those facts every time it proposes code, refactors, or explains a diff, keeping outputs aligned with the product vision.
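Putting the pieces together, a minimal .cursorrules file combining style rules with a "Project Context" section might look like this (the specific rules and project details are invented examples, not a recommended canonical set):

```text
# Project Context
# (example contents -- project details are invented for illustration)
- React + TypeScript frontend, Go backend, PostgreSQL storage
- Naming: camelCase for functions, PascalCase for components

# Style rules
- Prefer functional components over classes in React
- Never add try/catch blocks that swallow errors silently; log and rethrow
- Every exported function gets a docstring explaining inputs and failure modes
- New Go code ships with table-driven tests
```

Because Cursor prepends this block to every request, each of these lines acts as a standing correction for a misstep you would otherwise have to re-explain per prompt.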
Whenever I start a new .NET project, I don’t begin with controllers or services. I begin by creating three simple files:

- .editorconfig
- Directory.Build.props
- Directory.Packages.props

Over time, these became essential for me because they set the tone for the entire codebase long before the first feature is built.

.editorconfig helps keep the code consistent. It defines the basics (indentation, spacing, naming rules, encoding) so the team writes code the same way. It reduces noise in pull requests and keeps reviews focused on the logic, not the formatting.

Directory.Build.props centralizes the shared project settings. Things like language version, nullable rules, warnings, and analyzers belong in one place instead of being copied across multiple .csproj files. It keeps the solution clean and avoids configuration drifting over time.

Directory.Packages.props manages all NuGet package versions. Having one place for dependencies makes it easier to upgrade, review, and track packages, and to avoid version conflicts. In larger systems, this alone prevents a lot of hidden problems.

These may look like small details, but they add real structure from day one. They make onboarding easier, reduce unnecessary friction, and keep the project predictable as it grows. Starting strong is always easier than cleaning things up later.

I’m curious: do you use these files as well? Or do you have your own way of setting the foundation for a new .NET project?

#dotnet #softwareengineering #cleanarchitecture #bestpractices #devcommunity #csharp
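For the third file, NuGet's central package management makes this concrete. A minimal Directory.Packages.props might look like the following (the package names and versions are just examples):

```xml
<!-- Directory.Packages.props at the solution root: opts every project
     into NuGet central package management. Packages/versions below are
     examples only. -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Serilog" Version="3.1.1" />
    <PackageVersion Include="xunit" Version="2.7.0" />
  </ItemGroup>
</Project>
```

Individual .csproj files then reference packages without a version, e.g. `<PackageReference Include="Serilog" />`, and pick it up from this one central file, which is what prevents version drift across projects.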
AI coding tools can 10x your productivity… But if you use them wrong, they slow you down. Here’s the workflow most people miss 👇

I’m Jean! I spent years as a Tech Lead and a Manager at Meta. Managing AI coding agents doesn’t feel that different from managing human engineers. In both cases:
↳ You don’t jump straight into implementation.
↳ You plan first.
↳ You write specs.
↳ You set guardrails.

Here’s the workflow:

✅ 1. Create "Memory Files" for AI
Think of these as instruction manuals for your agents. Add a file like agents.md or CLAUDE.md inside your repo to store:
↳ Project goals
↳ Coding conventions
↳ Tech stack details
↳ Build + test commands
↳ Style and design rules
This prevents the agent from "forgetting context" at every step.

✅ 2. Force the AI to Plan First
Before any code is written, explicitly ask:
↳ Outline the steps before implementation.
↳ List risks or unknowns.
↳ Tell me which files you plan to change.
↳ Wait for my approval before coding.
You want the design doc before the PR.

✅ 3. Write a Lightweight Spec
Even a simple AI-generated spec works:
↳ Problem to solve
↳ Scope & non-goals
↳ Approach
↳ Files & interfaces affected
Most people skip this because they "just want to start coding." That’s how projects go sideways.

Are you using AI as a careful collaborator, or vibe coding to the max?

♻️ Repost if this helps someone with their coding workflow.
👉 Follow me, Jean, for real-world AI engineering workflows and career lessons.

#AIcoding #AIEngineering #SoftwareEngineering
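The memory file in step 1 is just a markdown document at the repo root. A minimal sketch (the project details are invented for the example):

```markdown
# agents.md (example contents -- project details are invented)

## Project goals
Internal billing dashboard. Correctness matters more than speed of delivery.

## Tech stack
TypeScript, React, Node 20, PostgreSQL.

## Build & test commands
`npm run build` to compile; `npm test` must pass before any PR.

## Conventions
Functional React components. No silent catch blocks. Small, reviewable diffs.

## Workflow rules
Outline a plan and list the files you will change; wait for approval
before writing code.
```

Note how the "Workflow rules" section bakes step 2's plan-first instruction into the memory file itself, so you don't have to repeat it in every prompt.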