Standardising Technology Stack... Not!
Society’s mistrust of technology is deep-rooted. ...stories cemented the fear that technology, particularly an unnatural use of that technology, would generally do us no good in the end.
Background
This is not the first time I have been "participated" in a motion to standardise the technology stack on a project or in an organization. Managers and IT professionals gather at the campfire to exchange recipes on which technology to converge on. Now I am about to go into another such cycle, and I have sat down over the weekend to gather my thoughts before the series of meetings coming next week.
Drive to Standardize
The motivation is driven by common sense and by the following evident benefits:
- The IT department faces a learning curve with each new piece of technology the organization brings on board
- Some developers feel comfortable with their ancient, serviceable software, while others will always clamor for the latest applications and features. However, an organization can save a lot of time and hassle when everyone uses the same version of the same software
- The more software an organization runs, the harder it is to automate the installation of security patches and software upgrades
Bad "Technology" Decisions
Well-intentioned and very experienced professionals will reflect on the consequences of selecting the wrong technology using past horror stories, e.g.:
- The year 2000 was a terrible one for Nike, and all of their woes were tied to the failed roll-out of the i2 ERP software. After investing $400 million in a software package designed to oversee the process of fulfilling warehouse orders, the company was handed an estimated $100 million in lost sales, several class action lawsuits, and a 20 percent dip in their stock price...
- Shortly after the new ERP was put into place, HP discovered that approximately 20 percent of their orders were not going through. Fixing this glitch was cumbersome, but that was not the extent of the issue: the order problem created a massive server backlog that HP was completely unprepared to handle...
- The Department of Defense was starting another big PC buy: 386-based PCs from Unisys under the Desktop III contract. But while the PCs themselves were getting upgraded, in many cases the old ZDS monitors remained. This was in part because the lab had standardized on MS-DOS 5.1 and was going nowhere near Windows 3.11...
Alternative "Common" Sense
While not rejecting any of the common sense arguments, or denying the huge costs associated with bad "technology" selections in the past, I am inclined to bring the opposite, darker side into the conversation.
Over decades in the IT industry I have finally started to recognize some trends and come to some far-reaching conclusions. Sorry, I am not very bright and probably should have realized the following factors earlier in my career:
- Although (theoretically) any programming language can do any job, they all have their strengths and weaknesses. Programming languages also fall into categories: procedural, object-oriented, and functional. Choosing the right language helps solve problems faster and more efficiently. Standardizing on a programming language, or on a version of it, is equivalent to restricting the choice of languages available to the development team, and hence reduces the team's efficiency and agility
- Complex enterprise applications use different kinds of data. Increasingly, such applications manage their own data using different technologies: SQL and NoSQL to support geo-replication, to offer different data modelling, and to gain increased availability (see the sketch after this list). Standardizing on a database, or on a version of it, will come at the cost of reduced performance and availability
- According to International Data Corp. of Framingham, MA, 36 percent of businesses surveyed recently have two or more midrange (or larger) server platforms and operating systems. As with programming languages and databases, operating systems evolve differently and at different paces, and some databases and programming languages favour one operating system over the others. Standardizing on an operating system version will limit the choice of programming languages and databases, again resulting in reduced development efficiency and production availability/performance
- The most intriguing part to me is: how do we select the "right" programming language, database engine, data processing platform, data streaming platform, etc.? How do we become aware of the advantages of a tool we have never used? How do we keep ourselves from trying everything and finishing nothing?
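To make the polyglot-persistence point above concrete, here is a minimal sketch in Python, assuming a toy order service: SQLite stands in for the relational engine, and a plain in-process dict stands in for a key/value cache such as Redis. All names and the schema are hypothetical, invented for illustration.

```python
# Polyglot-persistence sketch: one service, two storage models.
# SQLite plays the relational engine; a dict plays the key/value cache.
import sqlite3

# Relational store: transactional order records.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")

# Key/value store: hot lookups that do not need relational guarantees.
cache: dict[str, dict] = {}

def place_order(sku: str, qty: int) -> int:
    """Write to the relational store inside a transaction."""
    with db:  # the connection context manager commits or rolls back
        cur = db.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))
    return cur.lastrowid

def get_order(order_id: int) -> dict:
    """Serve reads from the cache, falling back to the relational store."""
    key = f"order:{order_id}"
    if key not in cache:
        row = db.execute(
            "SELECT id, sku, qty FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        cache[key] = {"id": row[0], "sku": row[1], "qty": row[2]}
    return cache[key]

order_id = place_order("widget-42", 3)
print(get_order(order_id))  # first read warms the cache
print(get_order(order_id))  # second read is served from the cache
```

The point is not these specific engines; it is that each storage model serves the access pattern it is best at, and a single standardized database would force one model onto both.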
Quest for the Balance
Can any development manager embrace the 32 programming languages listed on GitHub, or the 100 languages ranked by the TIOBE index? The answer is not in how many languages to adopt; the answer is in the process of adopting new languages and retiring old ones from new initiatives: review often, fail fast, embrace the change, stay away from silver bullets.
Would any DBA entertain the notion of leveraging 318 database engines? The answer lies not in the number of engines, but in the balance between engine capabilities and operational readiness. Operations can only support so many database engines: let operations rule the decision process and you will be using Ashton-Tate dBase III as the tried-and-true technology trusted by the VP of infrastructure until she is replaced. Let the developers rule the decision process and you may end up supporting 318 database engines × 5 major versions. The answer I am personally looking forward to is DaaS: developers adopt new engines and versions as project needs dictate, and operating the countless database engines is someone else's trouble. Should you live in the superstitious kingdom of 'cloud is not secure', select at least one database from each of the different groups: RDBMS, key/value, document, graph, and wide-column, to cover the different use cases.
How many out of 100 operating systems would an infrastructure manager certify before jumping off a bridge? Not too many! Managing an OS is a headache with no business value: application code solves business problems, while the OS merely makes the code run, yet requires ongoing maintenance. My personal answer to that dilemma is to ditch the operating system altogether and switch to PaaS (platform as a service) and serverless architecture. No control over OS selection? Change the application code to fit into the PaaS restrictions. Install no security patches and perform no operating system upgrades; when the timing is right, migrate the code to a newer version of the PaaS/serverless platform instead.
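To illustrate how far the "no OS to manage" idea goes, here is a minimal sketch of a serverless function following the AWS Lambda Python handler convention (`handler(event, context)`). The event shape mimics an API Gateway proxy request; the payload fields are hypothetical.

```python
# Serverless sketch: no OS, no server process, no patching.
# The platform invokes this function per request and owns everything else.
import json

def handler(event, context):
    """Entry point invoked by the platform for each request.

    `event` carries the request payload; `context` carries runtime metadata.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test: in production the platform supplies event and context.
if __name__ == "__main__":
    print(handler({"queryStringParameters": {"name": "Vlad"}}, None))
```

There is no operating system, patching schedule, or server process in sight; the platform owns all of that, and the trade-off is that the code must fit the platform's invocation model.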
How about running an enterprise or a startup using the Cartesian product of the industry's choices? During the startup phase, a limited and familiar technology stack will likely produce results faster. During the mid/late enterprise lifecycle, lack of diversity will likely bury the organization, e.g. Kodak ignoring digital cameras. My answer: hold teams accountable for deliverables, not for technology selection; the business need is the driving factor, and technology selection is a means of delivering the solution. To survive in the long term, a technology team needs to embrace a culture of adopting new technologies with a certain failure tolerance, not a 'use what we know works' mindset.
In the organizations I have worked for, or am working for, the tendency seems to be:
- development is pushing for newer tools and versions in pursuit of innovation and agility
- operations are pushing back in the pursuit of stability and reduced cost of ownership
- business demands both: speed of delivery and production stability
Where is the voice of reason?
Recognizing that yesterday's "innovative" technology is today's "obsolete" technology, and acknowledging that any business requires financial viability, a few things seem to settle in my mind:
- breaking down a complex system into a number of simpler systems reduces technology interdependencies; microservices seem to be the answer to date. New functionality can be built using new technology without worrying about the older technology. Automation (CI, CD, and DevOps), in that regard, cannot be an afterthought
- migrating applications between operating systems and platforms/versions is never a cheap, low-risk task; maintaining legacy applications as-is until they are replaced by newer components may be the most cost-effective approach. Immutable deployments and immutable infrastructure, anyone? (a toy sketch follows this list)
- continuous adoption of new technologies exponentially increases the cost of operations; outsourcing the trouble via SaaS, DaaS, and PaaS may be the only sustainable strategy. The leading public cloud providers are, from that perspective, light years ahead of what an in-house effort can afford. Should we bother investing in a private cloud, or should we focus on a multi-cloud strategy?
- to benefit from new technologies, continuous retraining of IT personnel is required, and building an adequate corporate culture is a must. Occasional training might not be sufficient; innovation moves so fast that waiting for formal training events will put an organization behind. Can mid-level management and executives even keep up with the evolution of technology, given their other obligations?
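As a toy illustration of the immutable-deployment idea mentioned above, here is a sketch assuming a hypothetical release registry: a new version never patches the running one in place; it is deployed alongside the old one, traffic flips to it, and a rollback is just flipping back.

```python
# Immutable-deployment sketch: releases are never modified, only replaced.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen = a release cannot be mutated after creation
class Release:
    version: str
    artifact: str  # e.g. an image digest; values below are hypothetical

releases: list[Release] = []  # every release ever deployed, kept for rollback
live: Release | None = None   # the release currently receiving traffic

def deploy(version: str, artifact: str) -> None:
    """Add a new immutable release and flip traffic to it; the old one stays untouched."""
    global live
    release = Release(version, artifact)
    releases.append(release)
    live = release

deploy("1.0.0", "registry.example/app@sha256:aaa")
deploy("1.1.0", "registry.example/app@sha256:bbb")
print("live:", live.version)            # 1.1.0
live = releases[-2]                     # rollback = flip back, not patch
print("rolled back to:", live.version)  # 1.0.0
```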