What are the current challenges in Artificial Intelligence?
Still of Ava, the robot in the movie Ex Machina (2015)

Cars can drive on their own, and there are promises of self-driving taxis within two years. Singularity University, a benefit corporation helping the world understand cutting-edge technologies, conveys the vision of a robot apocalypse. In a recent man-vs-machine showdown, Google’s AlphaGo beat world champion Lee Sedol in four out of five games of Go – an extremely complex, ancient game – crushing the prediction that this wouldn’t happen for another ten years. These are just a few examples of what the wonderful field of Artificial Intelligence (AI) brings us. In a world where it seems that everything has already been invented, AI is booming. For example, over US$1 billion is currently committed to OpenAI, a non-profit AI research company. AI thus promises to be the source of current and future groundbreaking inventions that will help continuously improve the world we live in. According to computer-science experts, though, we still have to overcome various technical challenges before we can reach the next level of AI. In this article, I would like to share three of the challenges we still face.

AI promises to be the source of current and future groundbreaking inventions – what challenges stand in the way of reaching the next level?

1. The Creativity Challenge

In his book Gödel, Escher, Bach: an Eternal Golden Braid (1979), Douglas Hofstadter identified a fundamental difference between computers and human beings. Computers cannot think outside their task description the way “real intelligence” can, whereas human beings think in more creative ways. After executing a specific task several times, we start considering the task on a meta-level, asking questions like: how can we improve the output? How can we execute the task with greater efficiency? We even ask why we are doing the task at all.

Computers can learn from data within a specified system, because all the data within a particular domain can be identified and defined. What a computer cannot do – as yet – is jump to “thinking outside of the box” or come up with radical new ideas that could speed up its learning within the original “world” it knows.

2. The Challenge of Understanding

Information can be looked at from a number of perspectives. Two of these are syntax and semantics.

Syntax: the arrangement of words and phrases used to create structured statements, the symbols used to represent a message.

Semantics: the intention and meaning of the message.

To illustrate the differences between the two, we can look at well-known clichés. For example, “There is more than one way to skin a cat” and “All roads lead to Rome”. Both of these sentences have similar meanings (semantics), while each uses quite different words (syntax).
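To make the gap concrete, here is a minimal sketch (my own illustration, not from the article) that measures purely syntactic similarity between the two clichés by word overlap. The near-zero score shows that surface-level word statistics say nothing about the shared meaning:

```python
def token_overlap(a, b):
    """Jaccard similarity over lowercase word sets: a purely syntactic measure."""
    words_a = set(a.lower().replace(",", "").split())
    words_b = set(b.lower().replace(",", "").split())
    return len(words_a & words_b) / len(words_a | words_b)

s1 = "There is more than one way to skin a cat"
s2 = "All roads lead to Rome"
# The only shared word is "to", so the syntactic overlap is tiny (1/14),
# even though the two clichés mean roughly the same thing.
print(round(token_overlap(s1, s2), 3))
```

Any semantics-blind metric of this kind would rank an unrelated sentence that happens to reuse the same words as more “similar” than a true paraphrase.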

How are syntax and semantics relevant to artificial intelligence? Well, the challenge is that while a computer can understand information on a syntactic level – extracting syntax rules from language and doing all kinds of statistical analysis on words and sentences – it cannot understand the semantics: the real meaning of the given information.

While a computer can construct well-formed sentences (syntax), it does not have understanding of the meaning of such sentences (semantics).

Last April, Luc Steels, speaking on the topic of computers and the thinking process at a philosophical evening in Amsterdam, said that computers still operate on a syntactic level only. The computer has no profound understanding or sense of the outcome it generates.

Luc gave an example of the Tay bot – a twitterbot of the Microsoft research team. The chatbot was designed to speak like a millennial and learn authentic conversation by interacting with humans online. Within 15 hours, @Tayandyou began to post racist, profane, and disrespectful tweets like “Bush did 9/11”. This is because the Tay bot handles syntax correctly: its output is well-formed sentences that we can read, yet the bot has no understanding of the meaning it conveys. Tay did use a blacklist to filter out profanity, but no list can cover every offensive sentence that avoids obscene words.
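A word-level blacklist of the kind described above can be sketched in a few lines (the word list here is a made-up placeholder, not Tay’s actual filter). The sketch shows exactly where this approach fails: a sentence built entirely from innocuous words sails through, because the filter checks syntax (individual tokens), not meaning:

```python
# Hypothetical blacklist -- a stand-in for whatever list Tay actually used.
BLACKLIST = {"badword"}

def passes_filter(tweet):
    """Return True if no individual word of the tweet is blacklisted."""
    return not any(
        word.lower().strip(".,!?") in BLACKLIST
        for word in tweet.split()
    )

print(passes_filter("That is a badword example"))  # a listed word -> blocked
print(passes_filter("Bush did 9/11"))              # only "clean" words -> allowed
```

Catching the second tweet would require the filter to understand what the sentence asserts, which is precisely the semantic understanding the bot lacks.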

3. The Small Data Challenge

One of the reasons that AI is booming is big data. Computers either use big piles of data to extract patterns and the patterns behind patterns (deep learning), or generate big data by doing a lot of test runs. What if there’s no big data available though, and an infinite amount of testing is not possible?

According to Gary Marcus, this is an unsolved puzzle. Computers can learn, and yet they learn very slowly, especially if we compare the learning speed of computers with the learning speed of children. Without enough test opportunities for an algorithm, the chance of mistakes is high. Algorithms are designed to work through a lot of trial and error. As Gary puts it, “If you have a robot in your home to take care of your domestic situation, you cannot afford any mistakes... you don’t want it to put your cat in the dishwasher even once.”
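A toy simulation (my own illustration, with made-up success probabilities) makes the trial-and-error problem visible: an agent choosing between two actions by sampling is frequently wrong when it only gets a handful of trials, and every wrong pick during learning is a real mistake – the “cat in the dishwasher” scenario:

```python
import random

def estimate_best(trials, p_good=0.6, p_bad=0.4, seed=0):
    """Pick whichever of two actions scored more wins after `trials` attempts each.
    Action 0 is truly better (p_good > p_bad)."""
    rng = random.Random(seed)
    wins = [0, 0]
    for _ in range(trials):
        wins[0] += rng.random() < p_good
        wins[1] += rng.random() < p_bad
    return 0 if wins[0] >= wins[1] else 1

def error_rate(trials, runs=200):
    """Fraction of independent runs in which the agent picks the worse action."""
    return sum(estimate_best(trials, seed=s) != 0 for s in range(runs)) / runs

# With many trials the agent reliably finds the better action;
# with one trial per action it is wrong a substantial fraction of the time.
print(error_rate(100), error_rate(1))
```

The point of the sketch is the gap between the two numbers: driving the error rate down requires many trials, and in a home-robot setting each trial that goes wrong happens in someone’s kitchen.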

Continued Research is Crucial

Applied AI is advancing more rapidly than ever before. This is, in the main, because of cloud computing and big data, as both result in many solutions and commercial opportunities. Still, to reach a higher level of AI, more research into fundamentals is needed. In other words, the world needs new “Einsteins” to come up with groundbreaking ideas and progress that will allow AI to get to the next level and reach its full potential in this century.

Creativity, understanding, and small data are AI challenges that the world needs new “Einsteins” to address.

More articles by Peter Blomsma, PhD.
