Learning the Lessons From Google Duplex
Google's Duplex has pushed the boundaries of natural language processing in the AI space. From a technical perspective it has moved the needle on problems the field has struggled with for years: the handling of context and empathy in conversation has advanced significantly.
Take some of the early examples of natural language processing such as Siri or Cortana. They became reasonably good at answering a single question like "Hey Siri, what is the weather like in New York?", but if you followed up with "and what time will the sun rise?", the second question would fall on deaf ears. There was no context. Google has made real inroads here: if you look at the latest capabilities of Google Home, these issues are now being solved, and Duplex's ability in this space is remarkable.
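To make the idea concrete, here is a minimal, purely illustrative sketch of context carryover between dialogue turns. All names here are hypothetical and this is a toy, not how Duplex or any Google system actually works: a follow-up question that mentions no location simply inherits the location remembered from the previous turn.

```python
class DialogueContext:
    """Toy dialogue state: remembers slots (e.g. location) from earlier turns."""

    def __init__(self):
        self.slots = {}

    def interpret(self, utterance: str) -> dict:
        # Naive slot extraction: look for "in <Place>" to capture a location.
        words = utterance.rstrip("?").split()
        if "in" in words:
            idx = words.index("in")
            self.slots["location"] = " ".join(words[idx + 1:])
        # A follow-up with no location reuses the remembered one.
        return {"question": utterance, "location": self.slots.get("location")}


ctx = DialogueContext()
first = ctx.interpret("what is the weather like in New York?")
second = ctx.interpret("and what time will the sun rise?")
print(first["location"])   # New York
print(second["location"])  # New York (inherited from the first turn)
```

The point of the sketch is the shape of the fix, not the parsing: once the assistant keeps state between turns, the second question stops falling on deaf ears.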
The other area where Duplex shines is empathy, or reacting naturally in a conversation. If you watch the video Google released https://www.youtube.com/watch?v=bd1mEm2Fy08 you will hear the "ums" and "ahs" woven into the conversation, and more crucially woven in correctly, with good timing and sensitivity to what is going on in the conversation. Many argue that Google Duplex now passes the Turing Test https://en.wikipedia.org/wiki/Turing_test, a benchmark that has stood unbeaten since the 1950s.
So why has there been such a backlash against Duplex? There is definitely a wave of people who see the technology as a great leap forward; equally, the mainstream media and others are questioning whether it is a good thing at all.
The question Google needs to answer is the ethics of a machine acting as a human. Where is the line? Should the Duplex call have started with a disclaimer like "I am a machine acting for…" so the human knew they were talking to a machine? Should the machine have a robotic voice rather than a natural human one? I am fascinated by research going on at MIT in Boston right now around ethics. Have a look at http://moralmachine.mit.edu/, which focuses on the ethics of autonomous machines, but the lesson is the same. For Artificial Intelligence and Machine Learning to be accepted by humans generally, we need to get the ethics right. This was a missed opportunity for Google: the technology is brilliant, but they forgot the human side.
It’s the same thing I see at work. Company after company touts this AI feature or that machine learning feature. Don't get me wrong, some of the work is fantastic, and I am a massive fan of what Mind X and Pymetrics are doing in this space, but overall there is way too much hype, way too much "AI for AI's sake" and way too much buzzword bingo going on without considering the human impact.
Imagine a decision on whether you get a job being made by AI, with no reason or explanation fed back to you, and no recourse or path to feedback on how you might improve. This happens today, and far more often than, shall I say, is ethically acceptable.
I believe that, especially in the HR space, we need to learn the lessons from Google and consider the ethics of AI and Machine Learning in human predictions. At work we have plenty of AI going on: we are starting to generate assessment items using ML, we are looking at statistics and validation processes with ML, and our support systems are moving to chatbots (which clearly announce themselves as bots). With each of these initiatives there is a "human in the loop", or there is a clear correlation between the ML and the human process, and above all, ethically it works the same as if the tasks were being done manually.
It’s not flashy, and I can't stand on stage and do an assessment with Siri (yet :)), but I am firmly committed to ethics in the assessment process and to introducing AI and ML in a measured and ethical way. I have learnt that lesson from Google!
Well written mate.
Just like Google Glass... this demonstration was 'put out there' to stimulate discussion around these issues (ethics / the human factor). It's not a failure, it's a test; how better to get reactions to a humanistic issue than to test it on humans...
http://www.bbc.com/news/technology-27762088