Foundations of Software Engineering

It’s All About Information

The hidden foundation of software engineering isn’t code — it’s the relationship between information, decision-making, and value.

Ask a software engineer what their craft is built on and you’ll hear a chorus of familiar answers: algorithms, data structures, design patterns, clean code. These answers aren’t wrong — but they’re incomplete. Beneath every elegant abstraction, every well-architected system, lies something more fundamental: information, and what we do with it.

The underlying foundation of software engineering is found in the relationship between information, decision-making, and valuation. Once you see it, you can’t unsee it. It reframes how we think about systems, how we write requirements, and how we measure success.

Every Decision Is an Information Problem

Think about the last meaningful decision you made — in software or in life. You weighed options. You drew on what you knew. You may have felt uncertain about some facts and confident about others. That experience captures something universal: every decision is made on the basis of the best available information.

If we don’t believe we have the best information, we postpone the decision — or, when postponement isn’t possible, we accept the possibility of making a mistake.

When we can’t postpone and uncertainty remains, the sensible response is to understand the potential cost of the error. This isn’t just good epistemics — it’s the logic behind every risk register, every feature flag, every staged rollout. We build software systems that help people make better decisions, so we should understand decisions at their core, as the pipeline below illustrates.

External Data → Encoded Information → Decision → Outcome & Value
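The pipeline can be sketched in a few lines of code. Everything here — the `Reading` type, the threshold, the cost figures — is illustrative, not drawn from any particular system; the point is that each stage of the pipeline is a distinct, inspectable step.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    celsius: float  # an encoding choice: the unit is fixed here, not left ambient

def encode(raw: dict) -> Reading:
    """External data -> encoded information: structure and label it."""
    return Reading(sensor_id=raw["id"], celsius=(raw["f"] - 32) * 5 / 9)

def decide(reading: Reading, threshold: float = 30.0) -> str:
    """Encoded information -> decision: the rule consumes only the encoding."""
    return "alert" if reading.celsius > threshold else "ok"

def value_of(decision: str, true_state: str) -> int:
    """Decision -> outcome & value: a missed alert costs more than a false alarm."""
    if decision == true_state:
        return 0
    return -10 if true_state == "alert" else -1

raw_event = {"id": "s1", "f": 95.0}   # external data
record = encode(raw_event)            # encoded information
action = decide(record)               # decision
print(action, value_of(action, "alert"))  # outcome & value
```

Notice that the decision rule never touches the raw event: if the encoding is wrong, no amount of cleverness downstream can recover what was lost.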

Software as an Information Encoding Machine

When we develop software applications, the source of the information is almost always external: events in the world, behaviors of users, states of physical systems, movements of money. None of that raw data is useful on its own. It must be encoded — structured, labeled, contextualized — before it can support analysis or action.

This is why data modeling is never purely a technical exercise. Choosing how to represent a concept in a database is choosing how that concept can be reasoned about. The schema is the theory. And like all theories, it has assumptions baked in that will shape every decision downstream.

Engineers who understand this treat data design as the highest-leverage work in a project. Getting the encoding right — understanding what information is really needed, how it should be structured, what relationships matter — determines whether the system will serve its decisions well or constantly fight against them.

The Three Questions That Should Precede Every Feature

To fully understand the purpose of any decision a system must support, we need to ask three foundational questions before writing a line of code:

  1. Who is responsible for the process? Is it a human operator, an automated process, or a machine learning model? The answer shapes the interface, the latency requirements, and the tolerance for ambiguity.
  2. How can the result of deciding be observed? What does “correct” look like? How frequently can we measure it? What are the benefits of good decisions, and what problems do bad ones create?
  3. What inputs are available, and how must they be encoded? What data exists in the world that is relevant to this decision? What transformation is needed to make it useful? What’s missing, and how do we handle that gap?
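One way to keep the three questions from staying rhetorical is to capture the answers as a structured artifact before implementation begins. The record below is a sketch; all field names and example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DecisionBrief:
    decider: str              # who decides: "human", "automation", or "model"
    observable_outcome: str   # how a correct result is observed, and how often
    inputs: list              # data that exists and is relevant
    encoding: str             # transformation needed to make inputs useful
    gaps: list                # missing information, and how the gap is handled

brief = DecisionBrief(
    decider="human",
    observable_outcome="ticket resolved within SLA; measured weekly",
    inputs=["ticket text", "customer tier", "past resolutions"],
    encoding="normalize free text; join tier from billing records",
    gaps=["no ground truth for ambiguous tickets; route those to review"],
)
```

A brief like this forces each question to be answered in writing, where an empty `gaps` list is itself a claim the team must defend.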

These questions aren’t new — they echo through every serious requirements process. But framing them explicitly as information questions changes how teams approach them. It shifts the conversation from “what should the system do?” to “what does the system need to know, and how will it know it?”

Valuation: The Part We Often Skip

The third leg of the foundation — valuation — is the one most frequently left implicit. Every decision has stakes. Some errors are cheap to correct; others are catastrophic. Software systems that serve high-stakes decisions (medical, financial, safety-critical) demand rigorous cost modeling. But even lower-stakes systems benefit from asking: what is the cost of a wrong answer here?

Valuation is also what connects information quality to business value. Better information leads to better decisions; better decisions lead to better outcomes. When we invest in data quality, in richer instrumentation, in cleaner encodings, we’re making a valuation argument — we believe the improvement in decision quality is worth the cost. Making that argument explicitly leads to better engineering trade-offs.

A More Grounded Engineering Practice

This framework won’t replace the technical depth that software engineering requires. But it provides a lens through which to evaluate technical choices: Does this architecture give the right people the right information to make the right decisions at the right time?

When the answer is yes, we’ve built something genuinely useful. When the answer is murky, we’ve found the real problem to solve — and that’s always a better place to start than the code itself.

— Robert W Ferguson
