Claude on code
Today I sat down with Claude to discuss how to make code easier for AI to work with. I asked about the importance of polymorphism in large codebases and how other software design principles can help AI perform better. We also discussed the similarities and differences between how humans and AI reason about code. Here is the transcript of our conversation.
Tracy:
It seems that AI does better working with large codebases when the code uses a lot of polymorphism. That is, the code has a lot of self-similarity, so to speak. This seems to allow the code to basically provide few-shot examples, and the AI gets less confused than it might with a large, varied, and complex codebase. First, do you think this is accurate? Second, if so, then what other software design principles, if followed in a large codebase, would help an AI perform better when working in that code?
Claude:
I think your observation about polymorphism and self-similarity in codebases is spot on!
When codebases follow consistent patterns through polymorphism, LLMs like me can more easily infer the broader structure and predict how new components should behave. It’s similar to how we learn from examples: once I’ve seen several implementations of an interface or pattern, I can more confidently generate code that follows the same conventions.
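To make this concrete, here is a minimal Python sketch of the kind of self-similarity described above. All the names (`Exporter`, `JsonExporter`, and so on) are invented for illustration: after seeing two implementations of the interface, the pattern for a third is easy to predict, which is exactly the few-shot effect Tracy describes.

```python
import json
from abc import ABC, abstractmethod


class Exporter(ABC):
    """Common interface: every exporter follows the same shape."""

    @abstractmethod
    def export(self, data: dict) -> str: ...


class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)


class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        header = ",".join(data.keys())
        row = ",".join(str(v) for v in data.values())
        return f"{header}\n{row}"


# Having seen two implementations, the third writes itself:
class XmlExporter(Exporter):
    def export(self, data: dict) -> str:
        items = "".join(f"<{k}>{v}</{k}>" for k, v in data.items())
        return f"<row>{items}</row>"
```

Each new subclass is a worked example of the same convention, so both a human reader and an LLM can extend the family with little guesswork.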
Here are some other design principles that help AIs work more effectively with large codebases:
The common thread is consistency and explicitness. When patterns are consistent and intentions are clear, AIs have more context to work with and make fewer assumptions.
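As a small illustration of the explicitness point (the function names here are hypothetical), the second version below states its intent directly through names, types, and a docstring, while the first forces the reader to infer what the tuple positions mean:

```python
# Implicit: the caller must already know what r[0] and r[2] are.
def process(r):
    return r[0] * r[2]


# Explicit: names, type hints, and a docstring carry the intent.
def line_item_total(quantity: int, unit_price: float) -> float:
    """Total cost for one order line."""
    return quantity * unit_price
```

Both compute the same thing, but the explicit version leaves far fewer assumptions for a human or an LLM to fill in.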
What’s your experience been like working with AIs on different types of codebases?
Tracy:
These things seem to line up nicely with the kind of things that would help humans reason about large codebases. Are there any differences between what helps an LLM like you reason about a large codebase and what helps humans?
Claude:
You’ve hit on a really interesting question. While there’s a lot of overlap between what helps humans and LLMs understand code, there are some notable differences:
For LLMs like me:
On the flip side, I might have some advantages in certain scenarios:
So while good code is generally good for both humans and LLMs, the optimal code for LLM comprehension might be more explicit, repetitive, and formally structured than what experienced human developers might prefer for their own work.
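For instance (all names here are invented for illustration), an experienced developer might compress the handlers below into one clever dispatch table, but the repetitive version doubles as a set of few-shot examples of the same pattern:

```python
# Terse version a human expert might prefer:
HANDLERS = {
    "start": lambda job: {**job, "state": "running"},
    "stop": lambda job: {**job, "state": "stopped"},
}


# More explicit, repetitive version that an LLM can pattern-match:
def handle_start(job: dict) -> dict:
    """Return a copy of the job marked as running."""
    job = dict(job)
    job["state"] = "running"
    return job


def handle_stop(job: dict) -> dict:
    """Return a copy of the job marked as stopped."""
    job = dict(job)
    job["state"] = "stopped"
    return job
```

Neither style is wrong; the point is that the second trades brevity for the structural regularity that helps an LLM extend the code safely.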
Does that match your experience when working with LLMs on code?
Tracy:
Yes, that feels about right to me, but you’ve explained it much better than I could. I think the key takeaway is that the more explicit and consistent the code is, the better it will be for both humans and LLMs. This is a good reminder that everyone could benefit if we start writing code for both humans and machines. Thanks for the chat, Claude!
Claude:
You’re welcome! I think you’ve captured the essence perfectly – explicit, consistent code benefits everyone. Writing with both humans and machines in mind is becoming increasingly important as AI tools become more integrated into the development workflow.
It’s a virtuous cycle too: code that’s written clearly enough for an LLM to understand is likely to be more maintainable for humans, and code that follows human-friendly best practices often provides the structural clarity that helps LLMs reason more effectively.
Thanks for the great discussion! Feel free to reach out if you have any other questions about working with code or LLMs in the future.