Algorithmic Morality
Recently I've been asked the following question, partly in earnest and partly as an entertaining thought:
Can moral problems be solved in a completely algorithmic way, by following a fixed sequence of unambiguous, logical steps?
I do not think this question is an inquiry into the viability of devising a “universal synthetic judge” algorithm to evaluate and quantify the Right and the Wrong, our moral and immoral behavior, what is considered ethical and otherwise. There is no universal truth outside of factual natural phenomena. There is a universal truth in the fact that you and I will eventually cease to exist. An interesting thought, perhaps, is to speculate that all fundamental tenets of modern ethics theories arise as a consequence of this imminent physiological event.
Our definition of morality and ethics is synthetic. It is derived within our historical, socio-economic, and political frameworks. I think there are two layers of ethics and morality - one reinforced by society (what do you do when everyone is watching?), and one reinforced by the inner you (what do you do when all your actions remain unseen?). If we imagine for a moment that there is a possibility to solve moral problems algorithmically, then moral “norms” imposed by a social construct are much better candidates, for they tend to be more rational and less self-serving (everyone is watching, remember?). There is a problem, however, with synthesizing an algorithm to solve moral issues that aligns with our inner moral compass.
Humanity and its representatives are, in the better sense of the word, too volatile and irrational to bring about a superset of rules that covers each and every subset of human moral principles.
As much as one may want to argue that “Thou shalt not steal” is a definitive and unbreakable rule of “generally acceptable” human behavior, morality and ethics exhibit plasticity simply because we do too. After all, who is writing the acceptance tests?
Can all moral problems be solved algorithmically? No. Can some moral dilemmas be resolved in a systematic way? Certainly.
Algorithms and theories - including theories in ethics and morality - are not coplanar. Theories are bodies of speculative knowledge based either on empirical yet unexplained observations or on a personal conjecture of “truth”. Algorithms, on the other hand, are deterministic, factual, and most importantly observable agents that operate per a strict set of rules (even though at times those rules seem flexible, self-imposed, or chaotic - think fuzzy logic).
I think we live in a fascinating era where technology and humanity blend to the point where neither can be distinguished from the other. Synthetic intelligence entered our lives some time ago, and we embraced and promoted this augmentation willingly and enthusiastically. I want to believe that in our lifetime we will be forced to reconsider the definitions of intelligence and life. At the point where AI ceases to be synthetic and crosses the boundary of self-awareness, it would be impossible to definitively answer whether this self-awareness and “human-like” behavior is algorithmic in nature. At that point, AI would be able to literally speak in its own defense, guided by its own moral weights. Are those synthetic in nature? Perhaps this seemingly synthetic morality, initially derived from reinforcement learning models of the fundamentals of generally acceptable human behavior, procedurally evolved into a set of rules outside of our original task-specific definition of morality. Perhaps it evolved outside of many of our own definitions of morality and ethics. Does that make it unacceptable? Arguably, it may make it as unacceptable as any new ethics theory compiled by an established moral theoretician. To each their own.
I believe that moral problems that carry potentially catastrophic consequences are best suited for an algorithmic approach. Asimov’s famously formulated 3 laws of robotics may serve as a good example of such an algorithm. Harming and killing humans is taboo in virtually every society. We tend to glorify and elevate things and ideas that benefit us the most. Thus, killing and harming fellow humans is a moral taboo perhaps because, when projected from society onto an individual, it is something no rational human would want to personally experience.
The 3 laws, loosely adapted into a moral algorithm about the ethics and morality of harming humans, can be stated as follows:
- An Algorithm may not injure a human being or, through inaction, allow a human being to come to harm.
- An Algorithm must obey orders given it by human beings except where such orders would conflict with the First Law.
- An Algorithm must protect its own existence as long as such protection does not conflict with the First or Second Law.
Sequential? Yes. Definitive and unambiguous? Check. Fixed? Yes, all 3 steps are closed to interpretation.
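Treated purely as code, the three laws above reduce to a short sequence of guard clauses. Here is a minimal sketch in Python, assuming (and this is exactly the hard assumption) that harm, obedience, and self-preservation can each be collapsed into a boolean flag supplied from outside:

```python
def permitted(harms_human=False, allows_harm=False,
              disobeys_order=False, order_conflicts=False,
              endangers_self=False):
    """Check an action against the three laws, in order.

    The boolean flags are hypothetical: they assume harm and
    conflict can be reduced to yes/no answers, which is the
    entire moral difficulty.
    """
    # First Law: no harm to humans, by action or by inaction.
    if harms_human or allows_harm:
        return False
    # Second Law: obey human orders, unless the order itself
    # conflicts with the First Law.
    if disobeys_order and not order_conflicts:
        return False
    # Third Law: preserve own existence, subordinate to the above.
    if endangers_self:
        return False
    return True
```

The control flow is trivially sequential; all of the difficulty hides in whoever computes those flags, and how.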
I understand that this is a very simple example, yet it is exactly where I was going. Some moral problems lend themselves relatively well to this approach, while most do not. What would happen if this algorithm were applied to the execution of a death-row prisoner? So, as we can immediately see, moral algorithms - if ever possible - must be narrow and problem-specific. On the other hand, it is rather uncommon for a moral dilemma to be problem-specific, due to the inherent presence of various “points of view”.
Lastly, I want to say that algorithms, regardless of their complexity, are a way of mapping inputs to outputs, where both inputs and outputs are typically defined. In the realms of Ethics and Morality, defining the inputs is a rather trivial task, yet defining the outputs - the moral decisions - is problematic. Having so many theories on morality and ethics may serve as a good indicator that humanity as a whole is still exploring and has no idea (and likely never will) how to definitively answer the question. No answer - no outputs - no algorithms.
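To make the mapping point concrete, here is a toy sketch; the dilemma names and verdicts are invented purely for illustration:

```python
# An algorithm maps defined inputs to defined outputs. The
# dilemmas and verdicts below are invented examples - stand-ins
# for whatever an ethics theory would have to supply.
verdicts = {
    "lie_to_protect_a_friend": "permissible",
    "steal_bread_to_feed_family": "undecided",
}

def judge(dilemma):
    # The mapping works only where an output was defined in
    # advance; for a dilemma outside the table there is simply
    # no answer to return.
    return verdicts[dilemma]
```

For any dilemma missing from the table, `judge` raises a `KeyError` - no answer, no output, no algorithm.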
What do you think?