Introduction to Emerging Technology: Machine Learning


On Wednesday we welcomed the participants of the 2017 Machine Learning Academy to IDEALondon. It was more than a little intimidating to be speaking at the front of a room full of super-bright, super-talented and super-interesting people, with interests ranging from laser physics, through VR psychoanalysis, to retail and finance. Over the next six weeks, we’re going to explore the latest research on Deep Learning, NLP and Analytics (Marketing, Network and Predictive), the business strategies to take advantage of these technologies, and how to design and implement them in a business setting. I have no doubt that we’re going to see some really exciting projects coming out of this!

As induction sessions often are, this was about learning a bit about each other, as well as where the fire exits are and who to contact if the dog eats your homework. Happily for everyone, we managed to find time to chat over tea and gluten-free biscuits/beer and millionaire’s shortbread (the algorithm couldn’t decide which one to go for!), and I had some great discussions about what effect people thought Machine Learning and AI were going to have on our futures.

First, a brief, hugely simplified explanation of the difference between Machine Learning (ML) and Artificial Intelligence (AI). Machine Learning covers the various techniques that allow a computer/robot/system/machine to process an input, check whether the output was correct, and adjust the processing to get a ‘better’ output next time. AI is when a computer/robot/system/machine is able to make choices, decisions or predictions in a similar way to how a human might. This could be a chatbot, a curated media source or a heating system, and at the moment, the range of possibilities these technologies have opened up has led to huge interest and investment.
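To make that feedback loop a little more concrete, here’s a minimal, purely illustrative sketch in Python. Everything in it is invented for the example – the toy data, the hidden rule (output = 2 × input) and the learning rate – and real systems use far more sophisticated versions of the same idea.

```python
# The ML loop in miniature: process an input, check the output against
# the correct answer, and adjust the processing to do 'better' next time.
# All numbers here are toy values chosen purely for illustration.

# Toy examples where the rule to be learned is simply: output = 2 * input.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0          # the machine's adjustable 'processing'
learning_rate = 0.05  # how boldly to adjust after each mistake

for epoch in range(200):
    for x, correct in examples:
        output = weight * x                  # process the input
        error = output - correct             # was the output right?
        weight -= learning_rate * error * x  # nudge towards a better answer

print(f"learned weight: {weight:.3f}")  # settles close to 2.0
```

Run it and the weight converges towards 2.0 without ever being told the rule explicitly – which is, in essence, what ‘learning’ means here.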

Obviously, everyone in the room on Wednesday was fairly positive about the potential of ML, at least for their business, and I was probably the least technologically savvy one there! Still, several people raised a note of caution about the potential uses of these powerful tools.

“There’s a real risk that it’ll tell us things we don’t want to hear… and if I don’t want to hear from my phone that I’m making bad decisions, I really don’t want my boss to hear that too!”

“Like the guy from Google said, it’s not like our intelligence is that great… I don’t want robot police, robot doctors, robot teachers, who just reinforce all the stuff we do wrong already.”

“Aren’t you worried, you know, about what happened with the US election?”

Luckily, we’ve also included a session on ethics in the course programme, but I think these questions aren’t going to go away anytime soon. For me, the issues raised by ‘Artificial’ Intelligence and ‘Machine’ Learning are the same as those raised by ‘Natural’ Intelligence and ‘Human’ Learning: questions over privacy, ownership of data and the boundaries between the private and the personal; questions over bias and discrimination, accuracy and prejudice, and the value of individuality within society; questions over privilege, knowledge and power.

It’s great that we’re asking these questions. It’s great that there are high-profile arguments about the AI Apocalypse and how to regulate it away, and it’s great that these issues are visible to those of us who want an empty inbox, rather than to run a global corporation. In these discussions, the issues can become very polarised: a technology is ‘good’ or ‘bad’, bringing about apocalypse or salvation, and it’s easy to forget that there are no foregone conclusions, no clear paths and, to be honest, not even that many certain technological facts. By having these conversations, we can decide what’s important to us – do we want lower CO2 emissions or shorter commuting times, a cheaper sandwich or a cheaper tax bill? Do we have hard limits, things we won’t accept, or can we find workarounds as economies, populations and social attitudes change?

Even now, conversations about Universal Income are coming to the fore as a possible answer to the jobs that will cease to exist in the future. That wasn’t the case when Vaucanson’s automated looms alarmed the French silk industry, and it’s an encouraging sign that we’ve learnt something from previous technological revolutions.

The future of work is a more palatable discussion than the role of technology in inequality. Whether it’s income, access to justice, healthcare provision or educational opportunities, technology can reinforce existing prejudices. If systems are designed to make things easier for existing users, to find more of the same winning formula, or to predict consistent behaviour across individuals and societies, then we risk technological ‘lock-in’ around our current set of social assumptions.

Harder still than these big questions based on ‘what if’ and ‘what could be’ are the questions based on ‘WTF just happened’ and ‘who did that?’. The use of algorithmic marketing to target specific voters with false information, such that there appeared to be minimal dissent and a great deal of consensus around dubious stories, has led to poll results that have shocked those who considered themselves experts.

Should AI become part of the fabric of our society, we’re all going to have to keep asking these hard questions, even if it seems our voices aren’t heard, and we’re going to have to be clear about our values.

Think of it like you might think about becoming a parent: would you want to bring a child into this society? What stories would you tell them, and what would you want them to learn? When they’re no longer your baby, but an independent adult in their own right, how would you hope they would behave?

After all, AI and ML are based on our (well, the programmers’) knowledge of neuroscience, psychology and philosophy, and part of the reason these technologies are so exciting right now is because we maybe, just maybe, have got to the stage where they’re no longer a model of us, but something of their own.
