Economics of Systems Performance

The impact of poor systems performance on the top line of a business is well understood and well documented. High response times from interactive systems lower individual productivity: figures vary, but one study estimated that UK workers waste nearly 40 minutes per day on slow technology, at a cost to the economy of £35 billion in 2017. Low throughput in backend information technology reduces a business's capacity to scale, both horizontally and vertically. Slow IT can also result in higher abandonment rates, lower revenues and weaker brand health, with the last leading to lower adoption rates. In regulated sectors such as finance and banking, poor performance can additionally have regulatory consequences, resulting in fines and penalties.

Addressing performance issues as an afterthought incurs operating and maintenance costs that affect the business bottom line in the shape of inflated IT budgets. This reinforces the perception of IT as a cost centre rather than a value function. It is therefore important for leadership to understand the economics of performance throughout the lifecycle of IT systems. This understanding helps deliver effective, efficient and economical business technology.

The Hurting Bottom Line

Without a proactive strategy to optimise systems performance, the bottom line of a business is also at substantial risk. Many businesses look to the cloud as a silver bullet: autoscaling in the cloud is commonly misperceived as a cost-effective solution to the suboptimal performance of on-premises IT. But technology that has not been built with performance and scalability as foundational principles will scale poorly in the cloud, with excessive resource utilisation leading to unanticipated operational costs and unfulfilled expectations.

Costlier still is renovating these systems to improve their performance and scalability. When performance was a secondary consideration in the original development, refactoring for performance invariably raises maintenance costs and introduces accidental complexity. That complexity makes systems progressively brittle, shortening their life expectancy and leading to premature, costly retirement and replacement.

Keeping Performance at the Forefront

In the Requirements

Building systems for performance requires capturing Non-Functional Requirements (NFRs) alongside functional requirements. Together, the two define system architectures capable of fulfilling both. Deferring NFRs until after functional requirements have been met leads to architectures that are not just suboptimal but also unscalable and inflexible to optimise, and subsequent refactoring for performance is costly and introduces brittleness into the code. NFRs therefore need to be defined with the corresponding functional requirements for every feature, so that performance is delivered alongside functionality.
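One way to keep NFRs attached to the features they constrain is to capture them as structured data next to the functional requirement. The sketch below is purely illustrative; all names, metrics and targets are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class NonFunctionalRequirement:
    metric: str       # e.g. "p99_latency_ms" or "throughput_rps" (illustrative)
    target: float     # the value the feature must achieve
    tolerance: float  # accepted relative deviation before a build is rejected

@dataclass
class FeatureRequirement:
    feature: str
    functional: str                       # what the feature must do
    nfrs: list[NonFunctionalRequirement]  # how well it must do it

# A hypothetical feature with its performance targets defined up front,
# alongside (not after) the functional requirement.
checkout = FeatureRequirement(
    feature="checkout",
    functional="Complete payment and issue an order confirmation",
    nfrs=[
        NonFunctionalRequirement("p99_latency_ms", target=250.0, tolerance=0.10),
        NonFunctionalRequirement("throughput_rps", target=500.0, tolerance=0.05),
    ],
)
```

Captured this way, the NFRs can be versioned with the feature and later consumed by the non-functional test suite as pass/fail thresholds.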

During Development

Measuring and optimising performance needs to be a regular exercise during development. Developers should profile their code before and after implementing a feature to detect emerging performance hotspots. Unanticipated hotspots should be resolved before the feature is accepted for release.
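In Python, for example, this before-and-after profiling can be done with the standard library's cProfile. The sketch below assumes a hypothetical feature function, process_orders, standing in for real code under development.

```python
import cProfile
import io
import pstats

def process_orders(orders):
    # Placeholder for the feature's real implementation (illustrative only).
    return sorted(orders, key=lambda o: o["total"])

def profile_top_hotspots(func, *args, limit=5):
    """Run func under cProfile and return its result plus a report of the
    top entries by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()
    buffer = io.StringIO()
    stats = pstats.Stats(profiler, stream=buffer)
    stats.sort_stats("cumulative").print_stats(limit)
    return result, buffer.getvalue()

orders = [{"id": i, "total": (i * 37) % 101} for i in range(1000)]
result, report = profile_top_hotspots(process_orders, orders)
print(report)  # compare this report before and after the change
```

Running the same harness before and after a change makes any new hotspot visible in the report rather than discovered later in production.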

For Release

Finally, Non-Functional Test (NFT) suites should run alongside functional test suites at their respective build quality gates. A build is not promoted beyond a quality gate if its adverse performance variation exceeds the accepted tolerance. Just as with functional test failures, NFT failures are investigated and defects raised, and the corresponding performance issues are resolved before the affected features are released.
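A minimal sketch of such a gate is a check that the measured figure has not regressed beyond tolerance against a stored baseline. The baseline value, tolerance and operation below are all assumed for illustration.

```python
import statistics
import time

BASELINE_MS = 5.0  # assumed baseline recorded from the last accepted build
TOLERANCE = 0.20   # fail the gate if more than 20% slower than baseline

def operation_under_test():
    # Placeholder for the feature being released (illustrative only).
    sum(i * i for i in range(10_000))

def measure_median_ms(func, runs=20):
    """Time several runs and take the median to damp measurement noise."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

def gate_passes(measured_ms, baseline_ms=BASELINE_MS, tolerance=TOLERANCE):
    """Promote the build only if the regression is within tolerance."""
    return measured_ms <= baseline_ms * (1.0 + tolerance)

measured = measure_median_ms(operation_under_test)
print(f"median {measured:.2f} ms, gate {'passed' if gate_passes(measured) else 'failed'}")
```

Using a median over several runs, rather than a single timing, keeps the gate from failing spuriously on noisy CI machines; the baseline itself would be refreshed whenever a build is accepted.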


But What About Premature Optimisation?

One of the most widely quoted but misunderstood sayings in software development comes from Donald Knuth: “Premature optimization is the root of all evil”. The quote is largely taken out of context and is used to justify relegating key performance metrics behind functional implementation. The complete sentence, from his 1974 essay Computer Programming as an Art, reads: “The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.”

Speculative and unguided optimisation is certainly risky. Formulating NFRs early alongside functional requirements sets clear performance targets for developers to achieve. NFRs and functional requirements together define the architectural foundations that support the specified performance. Finally, measuring and analysing performance through development and QA provide opportunities to detect and resolve performance issues as they arise.
