Survey of Software
I needed a QR code library. Which one?
I did the research and then kept going.
My Survey of Software (SoS) website covers 25 domains, including sorting and searching algorithms, graph analysis, NLP, LLM infrastructure, CJK linguistics, calendar tooling, civic data APIs, and beyond.
Each survey consists of four reports, plus an "explainer" that puts the topic in generalist terms. Every survey is publicly accessible and CC BY 4.0 licensed, and the method itself is published for replicability and adaptation.
There's also a Claude skill anyone can add — install it once and Claude can query the research library directly in your conversations. Instead of Googling "best Python fuzzy search library" and hoping for the best, Claude fetches the actual survey and synthesizes a recommendation for your specific context.
Why I published this
SoS accelerates my development. I no longer have to stop and do deep research unless I hit something that hasn't been covered yet. When I have an idea, I throw it to Claude and instantly get a stack recommendation, including categories and libraries I might not have considered.
For example: I needed a JavaScript calendar UI and got one from the survey. It didn't work well on mobile, so I grabbed a second library from the same research. Now the same data renders through two separate libraries, one for desktop and one for mobile. Maybe down the line I'll replace them with something better, but for now, done and done in one session.
For a new project I'm working on, Schema Evolution Automation, I needed Python code parsing. I'm in the deep end of the pool now, using LibCST to automatically propagate database schema changes throughout a codebase.
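To give a flavor of what that looks like, here's a minimal sketch of the LibCST pattern, not my actual tool: the column names and the blanket rename-every-matching-attribute rule are simplified assumptions for illustration.

```python
# Minimal LibCST sketch (illustrative, not the real schema-evolution tool):
# propagate a column rename, e.g. email -> email_address, through source code.
import libcst as cst

OLD_COLUMN = "email"          # hypothetical old column name
NEW_COLUMN = "email_address"  # hypothetical new column name

class RenameColumn(cst.CSTTransformer):
    """Rewrite every `<expr>.email` attribute access to `<expr>.email_address`."""

    def leave_Attribute(
        self, original_node: cst.Attribute, updated_node: cst.Attribute
    ) -> cst.BaseExpression:
        if updated_node.attr.value == OLD_COLUMN:
            return updated_node.with_changes(attr=cst.Name(NEW_COLUMN))
        return updated_node

source = "user.email = form.email.strip()"
module = cst.parse_module(source)        # full concrete syntax tree
print(module.visit(RenameColumn()).code)
# -> user.email_address = form.email_address.strip()
```

The reason LibCST fits this job better than the standard ast module is that it round-trips formatting and comments, so a mechanical rewrite can run across a whole codebase without mangling anything it didn't mean to touch.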
Not bad, huh?
For glow-ups, I've been able to take an existing product and ask "how can I make this better?" and "in what order should I tackle it?" The research provides a systematic foundation for both.
I'm sharing all this because you'll create things that I would never imagine, and if SoS accelerates your efforts, all the better.
If the research doesn't quite fit your use case or feels outdated, use it as a jumping-off point. Challenge the results — ask your AI "is this report accurate?" The surveys give you and your AI a shared knowledge base to work from rather than starting each conversation from scratch.
Message in a bottle
I put the site live for the AI Tinkerers Science Fair on January 30th, then added it to Google's index shortly after. In the last 30 days, here's where readers came from:
France at #2 was unexpected. High engagement from smaller European countries (600 readers from Finland's population of 5.5 million, a strong showing from Poland) suggests that systematic research resources may be particularly valued in certain developer communities.
Singapore at #3 reflects a dense, internationally connected hub where developers routinely make architectural decisions across multilingual, multi-market contexts. The CJK sections of the library (tokenizers, morphological analysis, traditional-simplified conversion) likely explain part of that traffic.
What's your SoS story?
I'll continue filling out the remaining surveys and adding categories.
And then I'll feed the entire project back into SoS to ask: "How can I make this better?"
In the meantime, if SoS helps you, I'd love to hear your stories. What did you build? How did you build it? What libraries made the difference? I'm not tracking usage (this Cloudflare data was a surprise), but hearing how people actually use systematic research to create things is the real measure of whether this works.
Survey of Software (SoS): https://research.modelcitizendeveloper.com/