High Reliability Knocks Down the Door
My first experiences with programming computers were with punched cards loaded onto mainframe computers, such as the IBM 360/370s, which used large arrays of core memory. The picture attached to this article shows a piece of core memory with about 100 rings, each laced with three wires and each storing a single bit. That comprises a modest 100 bits of memory: a space physically the size of your thumbnail, with enough storage to hold your phone number.
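As a quick back-of-the-envelope check of that claim (my own illustration, not anything from the era), here is how few bits a 10-digit phone number actually needs, both in a minimal binary encoding and in the binary-coded decimal (BCD) encoding common on those machines:

```python
import math

# A 10-digit phone number has 10^10 possible values, so a minimal
# binary encoding needs ceil(log2(10^10)) bits.
digits = 10
min_bits = math.ceil(digits * math.log2(10))

# Binary-coded decimal, a typical encoding of the era, spends
# 4 bits per decimal digit.
bcd_bits = digits * 4

print(min_bits, bcd_bits)  # prints "34 40"
```

Either way, the number fits comfortably inside that thumbnail-sized 100-bit plane of cores.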
These technologies were largely obsolete by the time I arrived on the computing scene. While in college I worked for a bank operating a rather large mainframe computer, the NCR-8500 "Criterion". Most weekends I could be found in the bank running Saturday's business: monitoring the ATMs, printing statements, or sorting checks. Those tasks consumed most of Saturday. By the time Sunday came around I could work on homework or just experiment with programming languages such as FORTRAN, NCR NEAT/3, RPG-II, or "bare metal" assembly languages of various types. Over about 10 years I used these types of systems in banking and aerospace applications. There was value in these systems because they performed real work, and they were relatively difficult to create.
Then, I closed my eyes one day, and those hardware technologies were obsolete. I eventually found myself working for an oil company in Houston developing pipeline control software on microprocessors that were only a fraction as powerful as either the IBM 360/370 or the old NCR-8500. This was the early 1980s, and these computers used Intel's newest processors: 8080s, 80186s, and the new "hot rod" 80286. What these computers lacked in power they made up for in size: they were small and compact. This was also the time that the legendary figure Richard Stallman launched the GNU "free software" project at MIT. Like the dinosaurs watching that meteor of extinction cut across the night sky, I looked up, but did not comprehend what was happening.
Then, I again closed my eyes one day, and those technologies of the 1980s were gone. But not the GNU Project; like Mt. St. Helens, it was a volcano simmering just under the surface, waiting to explode onto the scene. I eventually found myself owning a small consulting company in Cincinnati designing electronic systems, firmware, and device drivers, using ever more powerful and ever smaller computers. Over 20 years I eventually found myself with a dozen-odd employees and a half dozen regular customers.
As we rolled into the second decade following the new millennium, I initially felt that the size, weight, and power of these computer systems would be the driving factors of customer success and professional advancement. After all, we could now purchase a microprocessor (e.g. many Cortex-M types) for a buck with the same power as the NCR-8500 or the IBM 360/370. The reduction in size and weight, with quantum leaps in computing power, had made what was difficult in the past look easy in the present. I should have kept my eyes open, because something remarkable happened, and until recently I had missed it.
I opened my eyes and discovered that the GNU project had finally hit critical mass and exploded onto the scene. The GNU cost model is that software is free, and to those not experienced in the world of technology, free also means devoid of value. Some industry players became lemmings, running their technology organizations off the cliff in order to reduce their populations of engineers. Many industry players sent high-value engineering into third-world venues where some (arguably many) engineers with advanced university degrees were less capable than some (arguably many) in the US with just high school diplomas. (P.S. --- If you doubt this narrative, you should check the academic credentials of Messrs. Bill Gates, Mark Zuckerberg, and Thomas Edison.)
In the world of today (summer 2018), excellent GNU systems (e.g. Linux, Beaglebone, Git, and more) live side by side in the online world with vast numbers of electronic and software systems of lower quality than a 6th grade volcano science project. Using these low-quality systems won't get value into customer products, won't get value into engineering consultancies, and won't reflect value into engineering salaries. Value in software systems is often no longer seen in the acquisition price, because it's now "free" (thank you, Richard Stallman). From my viewpoint, the value in these types of systems is now in the evidence of quality.
Regardless of whether you're designing a medical device, an autonomous robot, or just a desktop printer, evidencing quality through the methods of high reliability engineering is now key to building value and enabling success.
I'll discuss this more in future posts, but in the meantime, if you have an interest, you can explore online discussions around standards such as RTCA's DO-178B/C (for aviation software), DO-254 (for aviation hardware), or IEC 62304 (medical device software). And in the interim, I think I'll keep my eyes open for a while. SC