The Ages of Software
Working recently on a new application has reminded me of the three ages of software.
We programmers all yearn to work on a ‘Greenfield’ project where there is no prior development to take into account. After all, we are all ‘brilliant’ programmers and all earlier development will have been done by muppets. Give us a clean sheet and you will not regret it. Sadly, the reality is that you will be given some pre-existing code written by one of the muppets and told to amend it in some manner.
My current project is a trading desk risk and reconciliation application which has developed over the years. It uses many of the approved idioms of modern programming: lambda functions, interface classes, an MVC GUI, task-based design. Unfortunately, the original author has now retired and the code is largely undocumented. Even its functionality is known only to the trading desk. It requires amending to cater for the transition to Risk Free Discounting as part of the migration of finance away from IBOR, which happens in the next three months. No pressure then.
So what are the three ages of software? One might say they are (i) greenfield developments, (ii) existing application enhancements and (iii) legacy application maintenance. However, it is more subtle than that. I categorise the ages as pristine, middle-aged and chronic. The difference boils down to how easy the software is to enhance and maintain.
Ok, so what do I mean by these terms? I consider pristine software to be readily amendable to cater for new requirements at low cost. Introduce a new derived class to support a different product, allow migration from one form of data storage to another with minimal fuss, extend the reach of the application by delivering it on different technology platforms.
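To make the ‘new derived class’ point concrete, here is a minimal sketch (the class and method names are hypothetical, not taken from the application described here). In pristine software, supporting a new product means adding a class; nothing that already works needs to be touched.

```python
class Instrument:
    """Base class for tradable products; each product knows how to value itself."""

    def present_value(self, discount_rate: float) -> float:
        raise NotImplementedError


class Bond(Instrument):
    """An existing product: a zero-coupon bond."""

    def __init__(self, face: float, years: int):
        self.face = face
        self.years = years

    def present_value(self, discount_rate: float) -> float:
        # Simple zero-coupon discounting.
        return self.face / (1 + discount_rate) ** self.years


# Supporting a different product is a new derived class; no existing
# code is modified, so the change is cheap and low-risk.
class Deposit(Instrument):
    """A newly added product: a cash deposit valued at notional."""

    def __init__(self, notional: float):
        self.notional = notional

    def present_value(self, discount_rate: float) -> float:
        return self.notional
```

The pristine test is not that the hierarchy exists, but that the addition required no edits elsewhere.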
As requirements change, personnel are replaced and design documents are lost, the ability to modify software in the optimal way becomes more difficult; entropy enters the application. Changes are implemented against the grain of the original design, and the code becomes more complex, less easy to test and harder for a new team member to get to grips with. It is now entering middle age.
Urgent delivery deadlines mean that taking a cold hard look at the design and reworking parts that no longer fit the modified business requirements becomes more difficult. The cost of change increases. Finally, this middle-aged software reaches a point where the only practical method of enhancement is to leave the existing code unchanged and navigate new functionality around it. We are now in a chronic death spiral: changes require wholesale rewriting, and ‘holy’ code exists that no one dares touch for fear of breaking the application. Change becomes more and more expensive and less reliable.
A greenfield development does not necessarily create pristine software. Depending on the quality of the specification, design and implementation, new software can arrive in the chronic state. On the other hand, skilled maintenance to a middle-aged piece of software can reinvigorate it and move it towards the pristine state. Chronic state code is rarely revivable.
In finance the investment in technology is huge. Some banks employ more technical staff than Microsoft, Google and other mega technology companies. Software is all too easily condemned because it is poorly understood, does not use the right technology platform or does not fit into the latest strategic technology plan. However, understanding the scope of existing software is difficult; replacement programmes founder because the amount of effort required is underestimated. This can result in the worst of all worlds where a new system now sits beside the original one, data has to be shared between incompatible systems and endless reconciliations are required to ensure the systems are operating on comparable data.
We developers do not help ourselves by leaving behind poorly documented, overly complex implementations. The conceit that code is self-documenting is not even true when the original developer looks at it in 6 months’ time and wonders which idiot wrote it[1]. All the project design documentation is usually invalidated when the code meets the reality of the business requirements, assuming that it was not lost in the last server upgrade.
The single best improvement to be encouraged is to document every class (and even every method) with not necessarily the ‘What?’ but the ‘Why?’ and the ‘How?’ of the way the class fits into the overall design. The documentation must live with the code and must be maintained.
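As a sketch of the kind of comment this means (the class name and its stated rationale are invented for illustration), a docstring that records the ‘Why’ and ‘How’ rather than restating the code:

```python
class CurveBootstrapper:
    """Builds a discount curve from market quotes.

    Why: pricing and risk reporting both need one consistent discount
    curve; centralising its construction here stops each pricer from
    quietly re-deriving its own and drifting out of agreement.

    How: quotes are sorted by maturity and solved shortest-first, so
    each curve pillar depends only on pillars already bootstrapped.
    This ordering assumption is why new quote types must be registered
    with a maturity, not just a price.
    """
```

The ‘What’ (it bootstraps a curve) is visible from the code; the ‘Why’ and ‘How’ above are exactly what retires with the original author if left unwritten.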
So where am I with my latest project? Well, it is like wandering into a chemical factory, being surrounded by hundreds of unlabelled pipes, and being told to connect two of them together but not which ones. You need to search for the source and destination of each to work out whether they are potential candidates, then try to work out the fluid dynamics of joining the candidates and decide the optimal ones to join. Hopefully, the cost of a mistake is a broken application, not an explosion!
[1] I have checked source control logs to find the culprit for some coding idiocy and then found it was me!
How do you know when software is reaching the end of its life? Thinking about my current project, a relatively small change has required me to modify dozens of files to integrate it into the solution. The problem has been (apart from my ignorance of its overall design) that getting the control variables to the point of use has required many different code paths to be amended. So a simple metric is the ratio of files modified to the number that actually contain the functional change. If this ratio is high then your software is entering old age.