Can a Python script do your job?
In the movie Sully, there is a moment when you hear the following cockpit conversation between First Officer Skiles (Aaron Eckhart) and Captain "Sully" Sullenberger (Tom Hanks):
Skiles: Got flaps two, you want more?
Sully: No, let's stay at two.
Ten seconds later, Captain Sully executes a near-perfect water landing of an A320 with 155 souls on board in the Hudson River, a real-life feat that has been described as "the most successful ditching in aviation history."
For those of you who haven't watched the movie "Sully", it is about the crash investigation that follows this aviation feat. And it is about Captain Sullenberger's decision to land his impaired jet in the Hudson even though ATC had cleared him for an emergency landing at two nearby airports.
But this article is not about why Captain Sullenberger decided to land the A320 in the freezing Hudson. The movie provides an excellent perspective on that decision.
This article is about whether, with just seconds remaining before impact, Captain Sully would have been able to land on the Hudson with such perfect precision had he said 'Yes' to Skiles' question about the flaps.
One can almost imagine the integrated effect of 35 years of flying experience bearing upon Captain Sully's thought process when he decided that flaps at position 2 were all he needed to stay level and land flat on the water that day.
Would a machine, an automation, an A.I. have made the same decision?
What would be the thought process of a program that is asked to make a decision that is not in the rule book?
Let us suspend these questions 'in mid air' [so to speak], and take a look at how automation has fared so far.
Our association with automation goes back to the 18th century and the Industrial Revolution.
This was when machines started doing several things for us such as weaving our clothes, printing our books and transporting us to different places at high speed.
A key insight behind many of these inventions was that each involved a pattern of activities repeated over and over. Invent a mechanical way to perform those steps, and voila, you have automated most of the process. One still needed to arrange all the inputs in just the right way before the machine could take over. With the weaving machine, for example, all the spindles with the different colored threads needed to be hoisted into just the right positions. One small mistake in the positioning, or a missing spindle here and there, and the machine would fail to weave properly. One could almost look at these early machines as a very primitive form of AI that followed rules encoded into the size, shape and arrangement of the cogs and levers that made up the metallic beast.
Over the past two centuries industrial automation has expanded to subsume practically all of manufacturing. Today we have the concept of 'lights out' factories, i.e. factories that are 100% automated. No humans need apply.
But what about automation that does not involve the execution of repetitive processes? This kind of automation really came into prominence a good 150 years after the Industrial Revolution, when "Expert Systems" started creating a buzz during the 1980s.
These were (and still are) software-defined machines that encode vast bodies of knowledge about specific subjects (such as infectious diseases), along with a rules engine that knows how to use this knowledge to draw conclusions and make decisions. Expert systems began to exhibit the sort of reasoning capabilities that we frequently associate with 'thinking'.
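To make the idea concrete, here is a minimal sketch of the forward-chaining style of reasoning those rules engines used. The facts and rules below are invented for illustration (they are not medical guidance), and real systems like MYCIN were vastly larger, but the core loop is the same: keep firing rules until no new conclusions can be drawn.

```python
# A toy forward-chaining rules engine, in the spirit of 1980s expert systems.
# Facts are simple strings; each rule says: if all of these conditions hold,
# add this conclusion to the set of known facts.

rules = [
    ({"fever", "cough"}, "possible_respiratory_infection"),
    ({"possible_respiratory_infection", "chest_pain"}, "recommend_chest_xray"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough"}, rules))
```

Note that with only "fever" and "cough" as inputs, the second rule never fires; the engine reasons strictly from what it has been told, which is both its strength and its limitation.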
Over the past three decades, the volume of knowledge stored by expert systems and their A.I.-driven variants has exploded. Reasoning algorithms have grown beyond simple if-then-else structures to evolve into the millions of artificial neurons and the billions of connections contained within the state-of-the-art neural nets of today. And just like the weaving machines of the 18th century eventually became better at weaving than their human creators, expert systems are surpassing human beings' ability to reason and draw the right conclusions. Chess-playing computers now routinely beat human grandmasters, and there is at least one artificial neural net that has surpassed the capability of physicians to detect the presence of cancer tissue. Crucially, during these years of evolution, A.I. has become able to deal with raw, unstructured input almost as effectively as we humans do. For example, it is no longer necessary to hand-deconstruct an X-ray image of a lung into a format that the machine can understand. Advances in computer vision enable us to feed the image straight into the software.
Let us go back and rescue the question that we had left in suspension: Would an AI have been able to take the right decision on flaps in Flight 1549 in those crucial seconds before touchdown? As is often the case, the answer is 'it depends'.
It depends on how accurately the AI is able to 'sense' the realities of the situation in the integrated sort of way a human being can. And it depends on how effectively it is able to develop a 'gut feeling' for what is the right thing to do. The former, i.e. the sensing part, is becoming more and more possible as the algorithms that reason over what is being sensed become ever more 'complete' in their analysis. And while 'gut feeling' might not seem like the sort of capability one would attribute to a program, it has been claimed that the creators of AlphaGo have embodied elements of human intuition into AlphaGo's learning.
With this background, let us circle back to the title of this article: Would you consider your job to be 'scriptable'?
For most of us the answer might be, well maybe not entirely scriptable. In that case, what are the aspects of your job that someone can write a Python script to do?
And by some unfortunate coincidence, would you define your strengths mostly in terms of the aspects of your job that can be easily automated?
Let's look at a few cases.
Does your job involve performing repetitive processes (such as in the kitchen of a diner or a burger joint, or in quality testing or in a laboratory)?
Is your job 'algorithmic' in nature (think insurance claims processing, stock trading, loans processing, the practice of law, financial accounting, work planning and scheduling, developing software that follows detailed specifications, and even the practice of medicine)?
Do you spend a good amount of time doing 'message passing', or adding value to the messages that reach your inbox before forwarding them on to their final actionable destinations?
Do you operate machines for a living, even complex ones such as cars, trucks, trains, airplanes and heavy equipment? These demand a high degree of situational awareness, generated through faculties such as vision, speech and hearing, faculties that were thought to be outside the capabilities of A.I.-driven automation...until recently.
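To see how little code an 'algorithmic' job can require, here is a minimal sketch of rule-based insurance claim triage. The thresholds and field names are invented for illustration, not taken from any real insurer; a production system would be far richer, but the shape of the logic is the point.

```python
# A minimal, illustrative claims-triage script. Thresholds and field
# names are made up; real insurers' rules are far more elaborate.

def triage_claim(claim):
    """Return 'auto_approve', 'auto_reject', or 'human_review' for a claim."""
    if not claim["policy_active"]:
        return "auto_reject"
    if claim["amount"] <= 500:
        return "auto_approve"
    # Anything large or ambiguous still goes to a person.
    return "human_review"

print(triage_claim({"amount": 250, "policy_active": True}))
```

The uncomfortable question the script poses is not whether it handles every case (it doesn't), but how much of the day-to-day caseload falls into the first two branches.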
If you manage people, you might for a moment be forgiven for thinking that you are in an unassailable spot in terms of being replaced by an algorithm. But then think of the apps that truckers use to sign up for hauling jobs. The trucker never has to interact with the human employees of the company that owns the app. Everything from task assignment and status reporting to performance evaluation and billing is handled via the app. Of course, this example applies more to 'gigs' where people work on short-term assignments, and human intervention is still needed whenever there is a conflict or a dispute. But an idea has to take root somewhere before it spreads.
Is there an undefinable aspect to what you do that lies in the domain of only human beings?
Crucially, can the inputs that you depend on to perform your work be easily fed into a computer? Are these inputs of a kind that computers cannot handle very easily, in spite of the advances in robotics, speech recognition and computer vision? A trivial but important example is the activity of folding clothes. Folding crumpled up laundry is still a very hard skill for a robot to master.
If your job involves creative skills (think about creative arts, performing arts, writing, innovation and discovery, managing conflict among human beings), you have an advantage over our robot friends. While AI has already taken to writing stories and composing music, it is still far from matching the sheer spread of talent that humans exhibit in these fields.
A noteworthy area that is still safe from automation is one that requires a high degree of empathy - that uniquely human emotion that enables a person to understand and appreciate another human being's situation, predicament and point of view, and the consequent 'trust' that it generates on the receiver's side. Think about jobs such as the customer service desk at your bank, or the practice of nursing, psychology and indeed medicine in general. Whether a robotic oncologist would be able to 'understand' a cancer patient's suffering and determine how to modulate the treatment (or advise the family to spare their loved one the trauma of another round of chemo) is, quite literally, not known.
And speaking of trust, would you trust a robotic ambulance to transport your loved one to the emergency room? Or would you rather trust the human EMS crew that tells you: Just hang in there, we are going to get you to the ER soon, traffic or no traffic!
Would you trust human lawyers and accountants, or would you trust an Expert System to represent you if you are getting deported, audited, prosecuted, divorced?
Coming full circle to Captain Sully's decision not to deploy more flaps: when the chips are down, would you trust A.I. to make the right decision for you?
Maybe what it comes down to is this: when you start suspecting that people will soon be able to trust A.I. to do most of what you do, even when the going gets tough, it's time to start retooling.