Learning from Robots

Some years back, finding work "demanding but not challenging", I decided to read for an MSc in Psychology. Whilst sitting in a neuropsychology lecture, it dawned on me that many of the key findings of the 60s and 70s, such as short-term, long-term, and working memory, had been instantiated in computers before their human equivalents had been "discovered".

So I decided (unwisely as it turned out) to ask the lecturer who had worked with Damasio, one of the bigger noises in Neuroscience, whether there might be value in reflecting upon computer architecture and languages to work up some useful hypotheses that could be tested. The look of withering contempt that I received could only have been rendered more palpable had the lecturer added, sotto voce, "stupid boy". (Brits d'un certain age will know exactly what I mean.)

Sensibly, I offered no further "helpful suggestions" throughout the remainder of the lecture series, passed the end-of-semester exam, and wrote off the entire incident as just another example of "Discipline Envy" (think Mervyn King & John Kay on Kahneman).

So it is with some trepidation, therefore, that I put forward the suggestion that, instead of assuming it is always we humans who teach and train robots, we might, if we can summon up the humility, learn from them.

In an attempt to persuade you that there is some merit in this suggestion and, maybe more importantly, that I haven't taken complete leave of my senses, I'd like to share: reflections on some work I undertook more than a decade ago; thoughts prompted by an MIT Technology Review article republished on LinkedIn some time back; and a very tentative conclusion on what "learning from robots" might mean in practice.

Dependent Rational Agents

Several years before I committed my intellectual faux pas in a lecture theatre full of individuals who, almost without exception, were more qualified to be in attendance than me, I had been tasked with improving the performance of a shared services software support team. I learned a great deal across many topics thanks to this opportunity; however, I should like to focus specifically on a couple of reflections germane to the matter at hand.

"The Spanish Phrase Book Problem"

The Support Team had been provided with what, at the time, would have been considered fairly high-quality information and training with which to help them perform their jobs: contextual information about the systems they had to support, how the systems operated, and detailed instructions on how to fix common problems arising.

However, I observed the team diligently attempting to resolve incidents as they arose, but on many occasions struggling to do so without involving the teams of software developers that had originally built the systems they were supporting. This was a source of frustration for all concerned: the support team members wanted to be able to resolve problems quickly and autonomously and couldn't, and the software developers were fed up with being dragged into solving support problems when they had mountains of development work to do.

One developer said exasperatedly, "I can't write down every conceivable thing that might go wrong and what to do when it does!" Very true ...

Now "The Spanish Phrase Book Problem" is less a thought experiment than a lived experience of many British visitors to Spain.

Our would-be visitor decides to show some respect for Spanish culture by learning at least some rudimentary Spanish before going on holiday, so she or he purchases a Spanish Phrase Book. Now the book is well written and contains such useful phrases as "Lo siento, pero no hablo español muy bien", "Dos cervezas, por favor", and, as a corollary to the latter, "¿Dónde están los servicios?". However, the minute that our culturally sensitive tourist attempts to have a conversation with a Spanish person it all falls apart, because once the conversation gets beyond the most basic specific responses our tourist has no clue as to what is being said.

And so it was with the support team. We had given them excellent "phrase books" and some useful context but hadn't taught them the "grammar" of the systems they had to support. This was the first clue that the challenge we all faced had less to do with technology and more to do, at least in part, with cognitive science (and AI).

Co-operative Teams of Collaborative Rational Agents

Around the time that the Cognitive Science clue dropped into my lap, I was reading Russell and Norvig's "Artificial Intelligence: A Modern Approach".

This book has been described as "... the AI bible for the next decade", "... It will become the standard text for the years to come.", etc, etc. (One of the authors, Stuart Russell, gave the 2021 BBC Reith Lectures.)

Chapter 2 outlines the concept of Intelligent Agents and in order to get to the punchline (for now at least) I need to précis the excellent summary provided on page 59 (3rd Edition):

  • An agent perceives and acts in an environment; an agent function specifies what an agent should do in response to any percept sequence
  • Performance measures (are used to) evaluate the agent's behaviour in an environment; rational agents act to maximise the value of the performance measure
  • The task environment (in which an agent operates) comprises the performance measure, the external environment, actuators, and sensors. Task environments are multi-dimensional, e.g. fully / partially observable; single / multi agent; etc
  • Simple Reflex Agents respond directly to percepts (Stimulus-Response machines living "in the now"); Model-based Reflex Agents take into account what has happened in the past when responding to percepts (Model-based = Stimulus-Organism-Response machines); Model-based Goal-based Agents act to achieve their goals; Model-based Utility-based Agents try to maximise their own expected "happiness" (effectively our old friend "self-actualisation"); the first two agent types are sketched in code just after this list
  • All agents can improve (their performance) through learning (this is key)

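To make the taxonomy a little more concrete, below is a minimal Python sketch of my own devising (it is not code from the book, and the percept and action names are my assumptions) contrasting the first two agent types in the book's two-square vacuum world:

```python
# Minimal sketch: Simple Reflex vs Model-based Reflex agents in a two-square
# "vacuum world" (locations "A" and "B"). A percept is (location, is_dirty).

class SimpleReflexAgent:
    """A Stimulus-Response machine living "in the now": current percept only."""
    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "Suck"
        return "Right" if location == "A" else "Left"  # keeps moving, for ever

class ModelBasedReflexAgent:
    """Remembers what it has perceived, so it can tell when the job is done."""
    def __init__(self):
        self.world = {}  # location -> last known status

    def act(self, percept):
        location, dirty = percept
        self.world[location] = "Dirty" if dirty else "Clean"
        if dirty:
            return "Suck"
        if len(self.world) == 2 and all(s == "Clean" for s in self.world.values()):
            return "NoOp"  # the internal model says everything is clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
for percept in [("A", True), ("A", False), ("B", True), ("B", False)]:
    print(percept, "->", agent.act(percept))
# The final percept yields "NoOp"; a SimpleReflexAgent, having no model of the
# past, would shuttle between the two squares indefinitely.
```

The only difference between the two classes is the remembered state; yet it is precisely that internal model (and, further up the taxonomy, goals and a utility to pursue) that separates a phrase book from a grammar.
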
Reading this made me realise that we had inadvertently taken highly educated, thoughtful, diligent, and autonomous human beings and turned them into Simple Reflex Agents; B F Skinner would have been proud of us.

So it became obvious that we needed collectively to bring into existence "Co-operative Teams of Collaborative Rational (ie Model-based Utility-based) Agents": the environment comprised multiple interacting teams of multiple agents; and, we needed everyone to operate autonomously to the best of their abilities and towards common goals.

With a nod to Schein (though maybe he wouldn't have expressed it this way): all the Soft Machines involved, i.e. those supporting the systems, those developing the software, and those using the systems (including myself), had to (re-)program ourselves.

It's at this point that any Critical Psychologist reading this article, assuming there are any, throws up her (or his) hands in horror and accuses me of "Humaneering" (commonly characterised as Stalinesque engineering of human souls). So I'd like to request some forbearance as I promise to address this entirely valid concern later.

Unconscious (or maybe not so Unconscious) Bias

This has been one of the hot topics for many years now but has risen right to the top of the stack (where it should always have been). However, it could be argued that we use the term "unconscious" to absolve ourselves of our lack of thoughtfulness.

Now, per evolutionary psychologists, our biases have their roots in mental heuristics designed to keep us alive. There's nothing inherently wrong with heuristics; performing computationally intense logical calculus as to whether or not to run when being attacked by a grizzly bear is a sure-fire way to wind up dead. What's wrong is when they are applied in situations where they are not appropriate, e.g. hiring decisions, policing, etc.

So where does "Learning from Robots" come into this? Well, I recently re-read an MIT Technology Review article republished on LinkedIn (originally published 04-Feb-2019): "This is how AI bias really happens - and why it's so hard to fix".

The article identifies how AI bias happens, and I quote and précis:

  • Framing the problem: " ... If the algorithm discovered that giving out sub-prime loans was an effective way to maximise profit, it would end up engaging in predatory behaviour even if it wasn't [sic] the company's intentions"
  • Collecting the data: " ... Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favoured men over women, it learned to do the same." (see the sketch just after this list)
  • Preparing the data: " ... the "art" of deep learning [entails] choosing which attributes to consider or ignore" ... "while the impact [of such decisions] on accuracy is easy to measure, its impact on the model's bias is not"

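To see quite how faithfully a model can learn a bias baked into its training data, here is a toy sketch of my own construction (the feature names, weights, and numbers are all invented; this is neither Amazon's system nor the article's code): a classifier trained on historical hiring decisions that penalised women duly learns to penalise the gender feature itself.

```python
# Toy illustration of the "Collecting the data" failure mode: train a model
# on biased historical decisions and it reproduces the bias. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
gender = rng.integers(0, 2, n)     # 0 = man, 1 = woman (illustrative only)
skill = rng.normal(0.0, 1.0, n)    # the genuinely job-relevant signal

# Historical decisions: mostly skill-driven, but with a penalty for women.
hired = (skill - 1.0 * gender + rng.normal(0.0, 0.5, n)) > 0.0

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)
print("learned weights [gender, skill]:", model.coef_[0].round(2))
# The gender weight comes out strongly negative: the model has diligently
# "learned to do the same", exactly as the Amazon recruiting tool did.
```
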
The article then identifies why AI bias is hard to fix, but the punchline is that the unconscious biases ascribed to AI are our unconscious biases; the sub-prime example in the article describes exactly the human behaviours (predatory lending for profit) seen in the lead-up to the 2007-2008 Credit Crunch.

Robots are, in one sense, our progeny and learn through imitating us; they hold up a mirror to us, allowing us to "see ourselves as others see us" in ways we ordinarily find very difficult.

So "Learning from Robots" means (to use the jargon) "engaging in reflexive practice", ie considering the implications of one's learning / actions in the wider context within which the learning / activity takes place. With respect to the points above it means asking ourselves: what are the consequences of following through on the goals / objectives we've been set (think Wells Fargo); should the past dictate what we do now and in the future; and, on what basis are we defining evaluation criteria that drive the decisions we make. For the latter two I'll leave you to provide your own examples.

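That said, for the third question there is at least one mechanical check anyone can run before trusting their evaluation criteria: compare the outcomes a model (or, indeed, a human process) produces across groups. The figures below are invented purely to illustrate the check.

```python
# Minimal outcome audit: compare selection rates across groups. Invented data.
import numpy as np

decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])  # 1 = selected
group = np.array(["m"] * 6 + ["f"] * 6)

for g in ("m", "f"):
    rate = decisions[group == g].mean()
    print(f"selection rate, group {g}: {rate:.0%}")
# m: 67%, f: 33%. A gap like this doesn't prove bias on its own, but it is
# exactly the kind of mirror a robot can hold up, and a prompt to ask "why?".
```
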
However, just to emphasise quite how far we have to go in addressing deep-seated biases, as if events over the last several years haven't made this abundantly clear, take a look at the screenshot below; it's what was on the screen when I originally viewed the article to which I've just referred.

[Screenshot: an advert for Amelia "Digital Employees" displayed alongside the article]

So what do Alexa, Siri, Amelia ("The Most Human AI and Your Digital Workforce"), and the myriad helper bots that keep springing up have in common?

Wisdom may have been "personified" in Judaeo-Christian scripture as female, but it seems that software being written today leverages deeply embedded stereotypes (i.e., "unconscious biases") that characterise women as "helpers" waiting to undertake tasks.

There is a very sad irony (and worse) in the fact that an advert for "Digital Employees" popped up right next to the text of a very thoughtful article on AI bias; we really have a very long way to go.

And the point is ...

I originally started writing this article in 2020. It was meant to be a gently irreverent article to provoke thought on a serious topic; however, as Gillian Tett pointed out in the FT some time back, at a level higher than "Unconscious Biases" there are (d'après Bourdieu) "Social Silences", i.e. things we never acknowledge or talk about at a societal level. (Feel free to create your own list, but I put writing this article aside until now thanks to one incident in May 2020 which reflected the Pandemic that has gone on for about 400 years.)

So what's the answer?

I attended a very good course on "Unconscious Bias" some years ago (much of the material was familiar to me thanks to my foray into Psychology).

The only thing wrong with the course was that there was no solid follow-through: no practical techniques were provided; and, although there was plenty of moral support for addressing the issue, there was no practical support to help behaviours change.

So if we are going to address "Unconscious Bias" and its bigger, uglier sibling "Social Silence", we'll all have to commit collectively to life-long self-reflexion and personal recalibration, and support each other in doing so.

Something similar goes for the shared services support team example earlier. We only made progress when we stepped back and reflected upon the situation (and the culture in which the situation, and others like it, occurred) and stopped blaming the people and seeing them as mere cogs in a bigger machine. (As Deming said regarding metrics, "first, cast out fear"; he could have added "second, eliminate bias", bias being an inappropriate and toxic product of "lazy thinking" - back to Kahneman again.)

Earlier I promised all the Critical Psychologists who are not reading this article that I would address their valid concern(s). So my suggestion is the following:

"Let's treat our robots / machines as if they are human and stop treating humans as if they are machines."
