AI Engineering Is Systems Thinking, Not Framework Assembly - So I stopped using Langchain and built my own!

We think we're building AI applications. Too often, we're just assembling frameworks.

For the longest time, I genuinely believed I was building AI applications.

You know the drill. Pick the hottest LLM. Add LangChain. Plug in a vector database. String together some prompt templates. Build a retrieval pipeline. Ship it.

It felt productive. Hell, it looked like progress.

Abstractions for everything. Chains, agents, memory, tools, callbacks—all neatly packaged like the future had been gift-wrapped and handed to me. Just import, configure, and go faster.

And honestly? At first, it worked.

We shipped fast. Prototypes turned into demos. Demos turned into happy clients. Everyone loved that something "AI-powered" was moving.

But then, after the fourth or fifth application, something started gnawing at me.

We weren't building our product anymore.

We were learning how to please the framework.

That realization lands like a cold drink to the face.

Now, let me stop you before you think this is another "LangChain is terrible" rant. It's not.

LangChain did something important. It gave teams speed when the AI ecosystem was pure chaos, back when no one wanted to manually stitch prompts, tools, memory, and retrieval together from scratch.

That mattered. A lot.

But here's the thing no one tells you: there's a massive difference between using a framework to speed up a prototype and letting that framework quietly become your permanent architecture.

That difference? It's where most teams get trapped.

Because the moment your product starts depending on actual AI behavior (not just calling an LLM, but relying on orchestration, retrieval quality, tool execution, memory flow, and model flexibility), those neat abstractions start costing you.

Debugging gets harder. Control slips through your fingers. Local model support? Awkward at best. Testing becomes a nightmare. And every time you want to do something even slightly custom, you face a weird, uncomfortable choice:

Fight the framework. Or lean into it even more.

That's not leverage. That's slow, creeping lock-in.

I hit that wall after building seven different AI applications. Seven. Across totally different workflows.

Every project looked unique on the outside. But the pain was always the same.

We needed better streaming control. We wanted local execution through Ollama. We craved clean boundaries between execution and LLM compute. We needed retrieval that fit our product—not retrieval that fit the framework's idea of a product. We wanted observability. Evaluation. Ownership.

But every step forward felt like a negotiation.

Not with the product. With the framework.

And that was the breaking point.

Because if adding a new capability means deepening your dependency, then the framework isn't accelerating you anymore. It's defining your ceiling.

That's a dangerous place to be, especially in AI, where the entire rulebook gets rewritten every few months.

The provider you trust today might vanish tomorrow. The model everyone swears by now could be irrelevant next quarter. That "future-proof" abstraction you leaned on? It becomes technical debt faster than most teams realize.

So I stopped.

And I started building my own.

Not some dramatic, ground-up rebuild of LangChain. I wasn't trying to recreate an entire ecosystem.

I was trying to rebuild ownership.

[Diagram: how Proto's different layers stitch together to build the AI infrastructure]

That meant separating everything into clear layers. LLM compute became its own concern. Execution became its own layer. Chains became testable units—not magical black boxes. Streaming got boundaries. Tools got interfaces. Retrieval became independent. Model adapters made switching providers simple instead of painful.
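To make the layering concrete, here is a minimal sketch of what those boundaries might look like in Python. The names (`LLMAdapter`, `Retriever`, `AnswerChain`) are illustrative assumptions, not Proto's actual code; the point is that each layer is a plain, swappable interface rather than a framework abstraction.

```python
# Sketch of layer boundaries: names and interfaces are assumptions,
# not Proto's real API.
from typing import Protocol


class LLMAdapter(Protocol):
    """LLM compute is its own concern: one interface, many providers."""

    def complete(self, prompt: str) -> str: ...


class Retriever(Protocol):
    """Retrieval is independent of the model and the chain."""

    def search(self, query: str, k: int = 5) -> list[str]: ...


class AnswerChain:
    """A chain is a testable unit: plain code, no magic black box."""

    def __init__(self, llm: LLMAdapter, retriever: Retriever):
        self.llm = llm
        self.retriever = retriever

    def run(self, question: str) -> str:
        # Compose retrieval and compute explicitly, at the boundary.
        context = "\n".join(self.retriever.search(question))
        return self.llm.complete(f"Context:\n{context}\n\nQuestion: {question}")
```

Because both dependencies are structural interfaces, swapping a hosted provider for a local one, or a vector store for a keyword index, means implementing two small methods rather than fighting an orchestration layer.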

I stopped asking, "How do I make this work inside the framework?"

Instead, I asked: "What should this system actually look like if I want to own it for the next five years?"

That question built Proto.

Proto isn't a wrapper around AI APIs. It's an independent AI layer inside a real web application. A system where execution, retrieval, tools, memory, and model orchestration are modular enough to survive change.

Because change is guaranteed. Dependency shouldn't be.

Here's the mistake I see most teams making today.

They think they're building AI applications. But really, they're just stitching together AI frameworks and vector databases and calling it architecture.

That's not architecture. That's assembly.

Real AI systems aren't defined by the prompt. They're defined by the boundaries around the prompt. How execution happens. How retrieval is measured. How easily you can swap model providers. How local inference can replace hosted inference. How failures are handled. How the system evolves without collapsing under its own dependencies.
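One of those boundaries, failure handling with provider swap, can be sketched in a few lines. This is a hypothetical illustration (the `FallbackLLM` name and the idea of pairing a hosted provider with a local Ollama model are my assumptions for the example), not a claim about how Proto implements it.

```python
# Hypothetical sketch: a boundary that survives a provider outage by
# falling back to a second model. Names are illustrative assumptions.
class FallbackLLM:
    """Wraps two providers behind the same complete() interface."""

    def __init__(self, primary, fallback):
        self.primary = primary    # e.g. a hosted API adapter
        self.fallback = fallback  # e.g. a local Ollama adapter

    def complete(self, prompt: str) -> str:
        try:
            return self.primary.complete(prompt)
        except Exception:
            # The hosted provider failed or vanished; local inference
            # takes over without the rest of the system noticing.
            return self.fallback.complete(prompt)
```

Because the fallback lives at the system boundary, nothing upstream, not chains, not retrieval, has to know which provider actually answered.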

That's the real work.

And honestly? That's where the real moat is too.

Not in clever prompts. Not in wrappers. In systems thinking.

The biggest lesson I learned from building Proto is simple—but it took me years to really feel it:

AI is not a library problem. It's a systems problem.

If AI affects your product behavior, your customer trust, your margins, or your operational reliability, then it shouldn't live as a plugin hidden behind someone else's abstraction.

Treat it like architecture.

Because eventually, every team reaches the same moment. The moment where speed is no longer the problem.

Control is.

And when that happens, the question stops being, "Which framework should we use?"

It becomes: "Who actually owns how our product thinks?"


[Screenshot: the actual code that runs an entire RAG pipeline without any frameworks]

Look, I didn't write all of this to convince you to abandon your framework tomorrow.

That would be reckless.

What I wanted to share is simple: there's a difference between moving fast and owning where you're going. And after seven applications, one broken assumption at a time, I chose ownership.

But here's the honest truth: I've only scratched the surface.

In the upcoming blogs, I'll go much deeper. I'll break down Proto's architecture piece by piece. The good decisions. The mistakes...

I'll share what I learned about building production-grade AI applications that don't fall apart when the model changes or the vector DB decides to misbehave.

So if any of this resonated, if you've ever felt that slow, sinking feeling that your framework owns you instead of the other way around, stick around.

We're just getting started.

See you in the next one.

Siddharth


If these kinds of explorations resonate with you and you want to continue this journey together, I'd love to have you join the newsletter "build, ship, and iterate with sid".
