Three Types of Computer Users (And Why It Changes Everything)
Ever wondered why your favorite apps look and work the way they do? The answer lies in understanding who was using computers at different points in history. Let me take you on a journey through three distinct eras that shaped how we build software today.
Back in the Day: Everything Was Made for People
Remember the early days of computing? Everything was designed with one user in mind: you and me - actual humans with eyes, fingers, and limited patience.
Think about it:
Web browsers with big, clickable buttons because our fingers aren't laser-precise
Desktop software with dropdown menus and toolbars we could see and navigate
Mobile apps with swipe gestures designed for our thumbs (not tentacles!)
Multi-touch interfaces that responded to human touch patterns
Every pixel, every button, and every interaction was crafted around human limitations and capabilities. We needed visual feedback, we made mistakes, and we didn't want to memorize complex commands. The entire concept of "user experience" was born from designing for human behavior.
The API-First Era: Then Computers Started Using Computers
Then something interesting happened. Computers started talking to other computers, and they didn't need fancy interfaces.
This was the API-first revolution. Suddenly, the primary "users" of many services weren't humans at all, but other software systems. And these digital users had very different needs.
These computer "users" were predictable, didn't need pretty colors, and could process information at lightning speed. They just needed reliable, well-documented interfaces that returned consistent data. No emotional design required – just pure functionality.
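To make that concrete, here's a minimal sketch of what a machine-facing interface looks like. The endpoint name and fields are hypothetical; the point is the contract: no layout, no colors, just a stable, documented structure another program can parse.

```python
import json

def get_user(user_id: int) -> str:
    """Hypothetical machine-facing endpoint: returns a consistent,
    documented JSON payload instead of a visual interface."""
    payload = {
        "id": user_id,
        "name": "Ada Lovelace",                # example data, not a real record
        "created_at": "1843-01-01T00:00:00Z",  # ISO 8601: unambiguous for machines
    }
    return json.dumps(payload)

# A consuming program reads fields by name; it never "sees" a screen.
response = json.loads(get_user(42))
print(response["id"])
```

Notice what's missing: no buttons, no error-forgiving layout. The consumer is predictable, so all the design effort goes into the schema and the documentation.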
The AI Era: Now We Have AI Users
Now we're living through the most fascinating shift yet. Meet the new users of our software: AI systems that think like humans but operate like computers.
Large language models and AI agents are consuming our services in ways we never imagined.
This is giving birth to entirely new interface paradigms like Model Context Protocol (MCP), where services need to be both machine-readable and contextually rich.
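Here's a rough illustration of that dual requirement, loosely modeled on how MCP describes tools (the `search_orders` tool and its fields are invented for this sketch, not taken from any real service). The structure is machine-readable, JSON Schema a program can validate calls against, while the `description` fields carry the natural-language context an AI model reasons over.

```python
import json

# Hypothetical tool description: structured enough for software,
# contextually rich enough for an AI agent to decide when to use it.
tool = {
    "name": "search_orders",  # invented tool name for illustration
    "description": (
        "Search customer orders by status and date. "
        "Returns at most 50 results, newest first."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "status": {"type": "string", "enum": ["open", "shipped", "cancelled"]},
            "since": {"type": "string", "description": "ISO 8601 date, e.g. 2024-01-01"},
        },
        "required": ["status"],
    },
}

# Software validates against the schema; the model reads the descriptions.
print(json.dumps(tool["inputSchema"]["required"]))
```

The interesting design tension is that both audiences read the same document: too terse and the AI misuses the tool, too loose and ordinary software can't validate a call.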
Why This Matters
If you're building anything digital, it helps to know who's going to use it:
Building for people? Focus on making it obvious and forgiving. People will click the wrong things and get confused.
Building for other software? Focus on reliability and clear documentation. Make it predictable.
Building for AI? This is new territory, but it seems like you need both structure and flexibility.
The future is multimodal. The most successful platforms today serve all three types of users. GitHub has a nice interface for developers, solid APIs for automated tools, and AI features for coding assistants.
What I'm Curious About
I think we're still figuring out what it means to design for AI users. It's not quite like designing for humans (they don't get frustrated the same way), but it's not like designing for traditional software either (they can handle ambiguity and context).