Programming is dead, and we have killed it.

The embarrassment of Amazon's HR software inadvertently discriminating against female applicants is not entirely unexpected, because programmers commonly treat predictive power as the sole goal. The good news is that computer programming as we know it will soon be dead. The current dichotomy between STEM and humanities education is misplaced; to borrow an analogy from Derek Parfit, they are climbing the same mountain from different sides. A humanities education can illuminate serious context-related flaws like those in the Amazon HR program, and it can guide AI toward more appropriate use in the real world.

In the Software 2.0 framework proposed by Andrej Karpathy of Tesla, the future role of the machine learning (ML) engineer will be to tune the parameters of general-purpose AI engines; we will no longer need to build anything from scratch. The programmer's role will be to teach the computer rather than to program it. The reasons are twofold. First, the neural networks behind such systems are far too complicated for anyone to comprehend: in deep learning, layer upon layer of networks work together to solve a problem, and the number of parameters (the things you could tune) grows exponentially. Second, software needs to scale properly, ideally linearly: double the resources and the running time is cut in half. That is impossible to achieve by hand.
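To make "teaching rather than programming" concrete, here is a minimal, purely illustrative sketch (the spam example and the threshold search are invented for this illustration, not taken from Karpathy's essay). It contrasts a hand-written rule with a parameter tuned from examples:

```python
# Software 1.0: a programmer writes the rule explicitly.
def is_spam_v1(num_links: int) -> bool:
    return num_links > 3  # the threshold is hand-chosen

# Software 2.0 in miniature: the "program" is a parameter found
# by searching against labeled examples rather than written by hand.
examples = [(0, False), (1, False), (2, False), (5, True), (8, True)]

def tune_threshold(data):
    # Pick the threshold that classifies the most examples correctly.
    best_t, best_score = 0, -1
    for t in range(10):
        score = sum((links > t) == label for links, label in data)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

threshold = tune_threshold(examples)

def is_spam_v2(num_links: int) -> bool:
    return num_links > threshold
```

The point is that no human wrote `is_spam_v2` in the traditional sense: its behavior was found by searching over data, which is what gradient descent does at vastly larger scale.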

Domain knowledge will become even more important to AI applications once the technical details have been "abstracted" away. Analysts will need to judge how data are used and what the implications are. Machines can tell you which factors are predictive and may even tell you why; only humans, however, can assess the context and the appropriateness. For instance, membership in an LGBTQ society could be predictive of certain outcomes, but using it would be problematic at best. This is also an area where the imagination cultivated by humanities training would be useful. Ada Lovelace coined the term "poetical science" to reconcile logic and passion. Interestingly, when Alan Turing was thinking about the direction of computing, he considered Lady Lovelace's objection: can a machine be capable of true thinking if it merely follows instructions, no matter how complicated they are?

It is instructive to compare our current optimism about AI with that of the Age of Reason and the Enlightenment. Man, not God, was considered the center of the world, and with reason one could reach Utopia. That belief brought unprecedented economic growth through industrialization, but it also led to large-scale wars that nearly destroyed our civilization. Words indicate underlying thinking: science was once called natural philosophy; economics was known as moral philosophy when Adam Smith wrote The Wealth of Nations, as social philosophy in the Victorian era, and as economic science after the Second World War. This evolution reflects our changing perception of "usefulness". Our perception of AI is following the same path.

Neural networks take us from the Newtonian world into a quantum one because of their intractable layers of logic. We may not care how a computer tells a dog from a cat, but we do care why a computer considers a patient to have an elevated stroke risk and then recommends a particular drug. When a doctor records a medical opinion in her prognosis, she must also record the clinical reasoning. Drug development is similar: we speculate a chemical or biological pathway so that we can follow the logic of why something should work. Of course, the machine could also build up this "speculative universe", but it will require human input to make it comprehensible to us. This is an area where the "deconstruction" training of literary studies could be fruitful: breaking down the contributing elements with precision and care.
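As a toy illustration of what such a "why" could look like, consider a linear risk score whose output can be attributed to its individual inputs. The feature names, weights, and baseline below are invented for this sketch, not taken from any clinical model:

```python
# Hypothetical linear risk model: each feature's contribution to the
# final score can be read off directly, giving a simple "explanation".
weights = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
baseline = -6.0  # intercept, chosen arbitrarily for illustration

def risk_score(patient):
    # Return both the score and each feature's contribution to it.
    contributions = {f: weights[f] * patient[f] for f in weights}
    return baseline + sum(contributions.values()), contributions

patient = {"age": 70, "systolic_bp": 150, "smoker": 1}
score, why = risk_score(patient)

# Rank which factor drove the score the most for this patient.
top_factor = max(why, key=why.get)
```

A deep network has no such directly readable weights, which is exactly why translating its internal "speculative universe" into human-comprehensible reasoning remains an open problem.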

The qualities of the Romanticists, profound, subtle, and receptive, are well suited to dissecting the messiness of real-world relationships. People so trained are better equipped to assess context and exercise judgment in the use of big data. Systems thinking is important here, as it requires people to look at how each element fits into the big picture and how the elements depend on one another. This approach will be familiar to any humanities student trained in critical thinking.

Note that this does not mean STEM training has no use. STEM training provides the domain knowledge needed to direct the machine as to what to "think" about, to help it assess the subtleties of trade-offs, and to prepare for "unexpected" events (i.e., Black Swans). Besides, we still need civil engineers to build bridges. In the end, we will need just a few, a happy few, a band of programmers. Mathematics is the foundation of understanding, and imagination is the foundation of creativity. As with any change, there are winners and losers. The changing nature of transportation work is an illuminating example.

Horse husbandry knowledge was passed down through generations, and working in the field required physical capital, social capital, and emotional capital. With the establishment of the railways, young men from poor families could join as apprentices and acquire the newly needed skills: reading, teamwork, and the standardization of tasks. The result was unprecedented social change and higher social mobility. We have already witnessed this kind of dramatic change once before, when large-scale university education was opened to the general public after the War.

With technological advances, the ability to create matters less than the way we use what is created: we no longer make our own clothing, but we do need to mix and match in a style that fits our needs. AI will take over many jobs we currently consider "learned" professions, such as accounting or even teaching. However, we will still need people to determine the goal, the timing, and how a tool should be used under what circumstances. We certainly do not want AI to deem humans "non-useful", as in The Terminator or I, Robot. Finally, once the computer is sophisticated enough to build such a speculative virtual universe without programmer oversight, we could ask "Explain, Dolores" again and again, as in the HBO series Westworld, and hope for a merrier ending.

Good programmers who are also good people are a valuable solution, and maybe a very good one. Stephen C. Moose. P.S. That never occurred on my watch... anywhere.

Math is highly creative, and art can involve some complex math; the separation is a social construct, useful for understanding the mechanism but, like most of language, full of circular symbol processing.
