IT Change and Performance Risk

IT is infamous for software application failures (websites in particular) caused by performance and scalability issues that have damaged reputations and cost revenue. There have been some stunning website failures over the past decade, including Apple, the US Super Bowl, the London Olympics and the New York hurricane helpline. More recently, failures have occurred in retail banking and financial services, online shopping, government portals and the reality game Pokémon Go.

So why do performance failures still happen?

Continuous change is the norm in today’s digital world, so one reason is that companies are under constant pressure to deliver application software (products, services) that differentiates them from the competition, quickly and at minimal cost. Another is an apparent refusal, driven by time and cost constraints, to engineer applications correctly and prove them ‘fit for operational purpose’. These pressures can lead to myopic behaviour in which the focus is predominantly on delivering functionality, with scant attention paid to performance and scalability risk (referred to generically as ‘performance risk’). This is exacerbated when undisciplined agile practices are used, and further compounded when those practices are confined to the software construction phase rather than applied across the end-to-end (E2E) software delivery lifecycle (SDLC).

Performance failures negatively impact the “bottom line” both directly (unbudgeted capital expenditure) and indirectly (brand damage, lost credibility, reduced competitiveness), increasing an application’s total cost of ownership. Yet despite this, some companies implicitly accept the cost of developing and maintaining non-performant software applications.

The root cause of performance failures can usually be traced to the original application design: the performance risk should have been understood at the outset and mitigated early in the SDLC, at minimal cost and disruption. Mitigating performance risk is imperative in the competitive, mobile world in which we currently exist, and all the more so as we move towards the “Internet of Things”. Furthermore, continuous change, and the performance risk it brings, demands changes to the traditional ways of delivering software; through lean and disciplined agile practices, this can be done without compromising functional or non-functional quality.

One example of this change is DevOps, which Wikipedia defines as:

“A culture, movement or practice that emphasises the collaboration and communication of both software developers and other IT professionals while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.”

Ignoring performance risk is both naive and irresponsible! It would be wrong to imply that organisations never undertake the required performance risk mitigation: many do a good job, some do it too late, some do what they can under immense pressure, and some do nothing. Furthermore, some performance risk mitigation strategies are flawed. Consider the scenario in which performance testing under operational conditions is undertaken for the first time just before operational deployment. This is effectively a gamble: the implicit assumption is that performance failures will not occur, and that if they do, they can be resolved easily within project timescales and budget. But consider the consequences if a serious design flaw is found in the application architecture, one that prevents compliance with the required non-functional requirements (NFRs) and threatens the published operational deployment date.

Application performance is a critical success factor. Mitigating application performance risk is a life-cycle discipline and should be treated as such. Unless performance risk is considered during the design phase, performance failure later on becomes likely (technical debt). So why not prevent performance failures early in the SDLC (aka the ‘deployment pipeline’), rather than finding them in production, by embracing Performance Engineering:

“A set of proactive life-cycle disciplines (simple modelling, prototyping, continuous testing etc.) to deliver performant and scalable software by early mitigation of performance risk that results in a lower total cost of ownership through cost avoidance, both directly and indirectly.”
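The “simple modelling” that definition mentions can be as lightweight as a back-of-the-envelope calculation done at design time. As an illustrative sketch (the throughput and response-time figures below are invented, not taken from any real NFR), Little’s Law relates arrival rate and response time to the concurrency a design must sustain:

```python
# A minimal sketch of "simple modelling": using Little's Law
# (L = lambda * W, i.e. concurrency = throughput x response time)
# to sanity-check a performance NFR before any code is written.
# All figures below are illustrative assumptions.

def required_concurrency(arrival_rate_per_sec: float,
                         response_time_sec: float) -> float:
    """Little's Law: average requests in flight = lambda * W."""
    return arrival_rate_per_sec * response_time_sec

# Illustrative NFR: 500 requests/sec at a 0.2 s mean response time.
in_flight = required_concurrency(500, 0.2)
print(f"Concurrent requests in flight: {in_flight:.0f}")  # prints 100

# If the chosen platform caps concurrent workers at, say, 80, the
# design cannot meet the NFR -- a risk surfaced on paper at design
# time rather than in production.
```

A five-minute calculation like this is exactly the kind of early, low-cost mitigation the article argues for: it cannot prove the design performant, but it can disprove it cheaply.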

Unfortunately, IT executives are often reluctant to provide the full commitment and investment that performance engineering requires (endorsing instead a “one-off” or “part-time” commitment for short-term gains), on the grounds that it delays software delivery and increases cost. This could not be further from the truth! It is significantly cheaper to prevent performance failures at the start of the SDLC (cost avoidance) than to find them later, when they are inherently more expensive to resolve, both directly and indirectly. That’s assuming they can be resolved at all!

This is why DevOps and all that it encompasses (continuous everything, everything automated) is so important: it naturally enables performance engineering, i.e. “shifting left” performance risk mitigation and continuously proving the application software under operational conditions.
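In an automated pipeline, “shift left” can mean running a small performance gate on every build rather than a one-off test before go-live. The sketch below is illustrative only: the 300 ms p95 threshold and the hard-coded latency samples are invented assumptions, and in a real pipeline the samples would come from a short load test against a production-like environment:

```python
import math

# Sketch of an automated performance gate for a CI/CD pipeline.
# The 300 ms p95 threshold and the latency samples are illustrative
# assumptions, not figures from the article.

def p95(samples_ms):
    """95th percentile of latency samples (nearest-rank method)."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

def gate(samples_ms, threshold_ms=300.0):
    """Fail the build if the observed p95 breaches the NFR."""
    observed = p95(samples_ms)
    if observed > threshold_ms:
        raise SystemExit(
            f"FAIL: p95 {observed:.0f} ms exceeds {threshold_ms:.0f} ms NFR")
    print(f"PASS: p95 {observed:.0f} ms within {threshold_ms:.0f} ms NFR")

# Illustrative run: 95% of requests at 120 ms, 5% at 280 ms.
gate([120.0] * 95 + [280.0] * 5)
```

Because the gate runs continuously, a change that degrades performance fails a build within minutes of being committed, instead of surfacing as a surprise during pre-deployment testing.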

However, as with most process improvement initiatives, performance engineering (and DevOps) will only succeed given the requisite commitment and investment. But this alone is not enough: accountability and responsibility are also required. Accountability ideally rests with an IT executive, in order to drive responsibility for performance engineering down through the organisation. Responsibility will differ across IT organisational structures, but essentially it is dispensed to those stakeholders with a vested interest in software delivery, such as architecture, functional and non-functional testing, operational acceptance, risk management and quality assurance.

Synopsis: the discipline of performance engineering will deliver performant and scalable software applications while avoiding the direct and indirect costs of failure. Combine it with DevOps and software quality will be vastly improved, with software releases delivered in shorter timescales, reducing both risk and cost.

But don’t forget: in these heady days of ever-emerging technologies and best-practice frameworks, it is more important than ever to “keep on doing the basics well” in software application delivery. It is not acceptable to ignore “basics” like performance risk mitigation for short-term gains, as many still do! Unless performance engineering is ingrained within a company’s DNA, and therefore its SDLC, software application failures due to performance and scalability issues will continue to happen…

