Preventing Code Isolation Breaks with Automated Architecture Rules

I'm building an Electron app with AI. The AI writes all of the code. One pattern I keep catching: it imports Node.js modules directly in the renderer process, or pulls business logic into the UI instead of routing it through IPC. The code compiles and passes unit tests; it just quietly breaks the process isolation that keeps the UI sandboxed.

I got tired of catching these in code review. I remembered a conversation with a former colleague, Siddharth Goel, about ArchUnit for Java, which lets you write architecture rules as tests. I checked whether a TypeScript port existed. It does (archunit on npm). Rules like: "The UI layer can't import the database driver." "SQL queries must use placeholders, never string concatenation." I have twenty-five of these rules now, all running at pre-commit.

You can put architecture guidelines into the AI's prompt, and it follows them most of the time. Most of the time isn't enough for security boundaries. But the test catches it every time. And the rules double as architecture documentation that never goes stale.

Curious how others handle this. Automated rules, manual review, or something else that worked? #AI #SoftwareArchitecture #CodeQuality
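For illustration, here is a minimal sketch of the kind of deterministic rule the post describes. This is not the archunit npm package's API — it's a hypothetical standalone check (all names and the sample source are invented for the example) that fails when renderer-process code imports Node.js built-ins:

```typescript
// Hypothetical architecture check: flag renderer-process sources that import
// Node.js built-in modules. The archunit npm package expresses similar rules
// through its own fluent API; this is just the underlying idea, spelled out.

const NODE_BUILTINS = new Set(["fs", "path", "child_process", "net", "os"]);

// Extract module specifiers from `import ... from "x"` and `require("x")`.
function importedModules(source: string): string[] {
  const pattern = /(?:from\s+|require\()\s*["']([^"']+)["']/g;
  const modules: string[] = [];
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(source)) !== null) {
    modules.push(match[1]);
  }
  return modules;
}

// Return the forbidden imports found in one renderer-process file.
function forbiddenImports(source: string): string[] {
  return importedModules(source).filter((m) =>
    NODE_BUILTINS.has(m.replace(/^node:/, ""))
  );
}

// Example: this (invented) renderer file would fail the pre-commit check.
const rendererSource = `
  import { readFileSync } from "fs";
  import { ipcRenderer } from "electron";
`;
console.log(forbiddenImports(rendererSource)); // logs ["fs"]
```

In a real setup you would run a check like this (or the library's equivalent rule) over every file under the renderer directory in a pre-commit hook, exiting non-zero when the returned list is non-empty.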

I don't review the code anymore; I don't even view it in the first place. The AI is quite capable of structuring the code how and where it wants, and of scaling to high complexity if handled the right way. When it struggles to find the solution for a given problem, I tell it to examine the structure of the code in the related areas and make improvements before continuing to implement the requested functionality. All code, and all manipulation of the code, is now under the hood. Review would only impose human ergonomic constraints on a system that simply does not need them when humans are not eyeballing the code.

Dejan Dukaric we happen to be working in exactly this problem space and just entered open beta. https://docs.sonarsource.com/sonarqube-cloud/analyzing-source-code/context-augmentation https://youtu.be/p4hVlXCD6Lg?si=GFDP3WWhnIMd2I-G (demo at the 10:30 mark) The idea is to guide agents in their task at hand by letting them retrieve dynamic context. We're still figuring this out, so we'd appreciate feedback. Let me know if we can get you early access or an in-person session in Geneva ;)

Thanks Dejan for the mention! It's been a few "AI years" since we last discussed this topic 😄 — so I wanted to share some of my recent learnings on guardrails. My take: as much as possible, guardrails should be deterministic in nature — but our starting point can still be an AI prompt. I've attached an image where I try to compile some of the prominent guardrails I have for my newer projects following agentic AI flows. I had to paste it as an image since LinkedIn doesn't allow longer comments 😂


Try my agent; it's been configured to avoid literally that, among several other things: https://github.com/SeanCannon/agent-cannon

That's a good setup. Were you able to get the AI to automatically fix issues after the rules were put in place, or are the rules just there to catch issues?

Having an architecture instruction file helps in this case, both for writing and reviewing code. It's a boundary guideline the AI will look at before coding.
