The Quantization of Everything
Eisenhower once wrote, “Plans are worthless, but planning is everything.” He meant that while the specific battle plans any meeting produced would almost invariably fall apart the moment combat started, the act of considering what the battle could look like and stepping through the possibilities armed generals with the critical information they needed to navigate war.
The same could be said for financial modeling. The true power in a financial model often comes not from one data point produced by one set of inputs but from the possible future scenarios the model illustrates when fed a wide range of inputs. This distribution is the valuable information planning sessions give to leadership. It’s not the definitive knowledge of what will be but the nuanced understanding of what could be.
Enabled by the steady growth of cheap, widely available compute, complex financial modeling has become a foundational element of finance over the past 40 years. Following the 2008 financial crisis, the number of models has exploded, some built in response to direct regulatory requirements and some out of financial institutions' innate desire to control material risk.
These models are built in all manner of environments, from programming languages like C++ and Python to no-code platforms like Alteryx. The dominant modeling environment today, however, is the spreadsheet, with an estimated one trillion-plus spreadsheets in use by business users worldwide. Institutions have built up years of IP in spreadsheets, and many of the most proficient financial modelers still reach for them first. Beyond sheer familiarity, there are good reasons modelers favor spreadsheets: the environment enables efficient model construction, and for the vast majority of the workforce, understanding a model built in a spreadsheet is far easier than understanding one built in any of the alternatives listed above.
The weakness of spreadsheet-based modeling is the difficulty of creating the aforementioned distribution of possible outcomes. While both code- and no-code-platform-based models can efficiently process many sets of inputs, models written in spreadsheets struggle at this critical task. Running at most "high", "low", and "base" scenarios through their spreadsheet models leaves many institutions with significant gaps in their risk frameworks. Spreadsheets also trap the modeling effort in a desktop app that is relatively brittle, slow, cumbersome, and error-prone.
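To make the contrast concrete, here is a minimal sketch of the kind of mass scenario analysis that is trivial in code but painful in a spreadsheet. The model itself is hypothetical (a five-year cash-flow NPV with assumed growth, margin, and discount-rate distributions); the point is that thousands of input scenarios produce a distribution of outcomes rather than three point estimates.

```python
import random
import statistics

def npv(cash_flows, discount_rate):
    """Discount a series of annual cash flows back to present value."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def simulate(n_scenarios=10_000, seed=42):
    """Run one simple model across thousands of sampled input scenarios.

    All parameters below are illustrative assumptions, not calibrated values.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_scenarios):
        growth = rng.gauss(0.05, 0.03)    # assumed annual revenue growth
        rate = rng.uniform(0.06, 0.12)    # assumed discount rate
        revenue = 100.0
        cash_flows = []
        for _ in range(5):                # five-year horizon
            revenue *= 1 + growth
            cash_flows.append(revenue * 0.2)  # assumed 20% cash margin
        results.append(npv(cash_flows, rate))
    return results

results = simulate()
# 5th, 50th, and 95th percentiles: a distribution, not a single answer.
cuts = statistics.quantiles(results, n=20)
low, base, high = cuts[0], cuts[9], cuts[18]
```

A spreadsheet can of course hold this model, but re-running it ten thousand times with freshly sampled inputs and collecting the output distribution is where the environment breaks down.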
As regulatory frameworks evolve and increasingly emphasize financial models as integral parts of the required risk frameworks, spreadsheet-based models must be folded into an environment where mass scenario analysis is possible. The alternative is a dangerous world of false precision where a few point estimates are taken as gospel and risks go undetected. As the apocryphal Mark Twain quote goes, “It ain’t what you don’t know that gets you in trouble. It’s what you know for sure that just ain’t so.”