Programming Morality

You might find this article interesting. It concerns you, your peers, and everyone who will be around in the next 15 years and beyond.

First premise: Humans are not a perfect design. Biologically, we have many limitations; intellectually, let's just say the last few years have not been a step forward.

Second premise: Design for robotics is moving beyond human limitation (why should a robot walk when it can move with greater agility using multiple legs and/or other methods of propulsion?). So too for the algorithms we build to mimic intelligence. Today we are experimenting with neural nets, trying to build structures and algorithms that match human capability. In the future, computers will design their own structures, which will far surpass our human abilities.

Third premise: Cognition and multi-layered thinking are only one part of analysis and decision making. The other is the "human" ability to do what is right and what is best in a given scenario. We rely on our "feelings" (empathy, sympathy, compassion, humor) to guide these decisions. We also rely on our "beliefs" (morality, ethics, values) to provide guardrails that balance (and perhaps insulate our decisions from) undesirable human "emotions" (bias, anger, resentment, greed, selfishness, etc.).

Proposal: Given the above progression, it seems reasonable that the evolution of artificial (non-human) intelligence has the potential to surpass human intelligence in every conceivable way. One question that remains is how, and by whom, the algorithms that guide right behavior (decisions bounded by the best of what is right for a given scenario) should be built. Given the limits of human capability, how should these core algorithms be formed and governed to ensure humans do not corrupt them, intentionally or unintentionally? And how do we leverage these algorithms to minimize AI bias across the development, testing, and execution of the algorithms used to impart intelligence, influence perception, and guide actions (intended and unintended)?
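The guardrail idea behind this proposal can be made concrete with a toy sketch. All the names below are hypothetical illustrations, not a real system: hard moral constraints act as filters that rule actions out *before* any utility ranking happens, so the optimizer can never trade an ethical boundary away for a higher score.

```python
# Toy sketch (hypothetical names): moral guardrails as hard filters
# applied before utility maximization, not weighed against it.
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class Action:
    name: str
    utility: float     # how attractive the action looks to the optimizer
    harms_human: bool  # stand-in for a much richer ethical predicate


# A guardrail is a boolean predicate; an action must pass ALL of them.
Guardrail = Callable[[Action], bool]


def choose(actions: List[Action], guardrails: List[Guardrail]) -> Action:
    # Filter first: actions violating any guardrail are simply not options.
    permitted = [a for a in actions if all(g(a) for g in guardrails)]
    if not permitted:
        raise ValueError("no action satisfies the moral constraints")
    # Utility is maximized only over the permitted set, never around it.
    return max(permitted, key=lambda a: a.utility)


no_harm: Guardrail = lambda a: not a.harms_human

best = choose(
    [Action("cut corners", 0.9, harms_human=True),
     Action("safe route", 0.6, harms_human=False)],
    guardrails=[no_harm],
)
print(best.name)  # → safe route
```

The design choice worth noticing is that the guardrail is a constraint, not a penalty term: a penalty can always be outweighed by enough utility, while a filter cannot, which is closer to the "tamper-proof" property the proposal asks about.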

Here’s an interesting article, posted to a blog in 2015, that begins to explore this question in more depth.

https://naturalistphilosophy.wordpress.com/2015/09/25/the-moral-algorithm/

What is even more interesting to consider is this: if (and only if) we can program "morality" and assure that it is tamper-proof, how and when would we apply this artificial capability to improve on human intelligence and decision making?
