Artificial Intelligence or Ghost in the Machine?
Artificial intelligence is the technology buzzword of today. But what is artificial intelligence and does it really exist? To answer these questions, let’s first define what artificial intelligence is supposed to be.
“Artificial” means a copy of something natural. “Intelligence” is the ability to learn and apply knowledge and skills to known and unknown conditions and situations. Therefore, artificial intelligence can be defined as “the replication of natural intelligence that can learn and react to known and unknown environments”.
Within the context of computer science, artificial intelligence aims to create intelligent machines and computer programs that can be taught to perceive, reason, problem solve and learn, by mimicking cognitive human attributes.
Some computer scientists set the bar high for a system to be considered artificially intelligent, with the following artificial intelligence tests:
- The Turing Test (by Turing) whereby a machine and human can have a conversation, with a listener unable to differentiate between the machine and human
- The Coffee Test (by Wozniak) whereby a machine can enter a home, find a coffee machine, coffee, and a mug, and brew a cup of coffee by pushing the coffee machine’s correct buttons
- The Robot College Student Test (by Goertzel) whereby a machine can enrol in a university, take and pass the same classes that humans would, and obtain a degree
- The Employment Test (by Nilsson) whereby a machine can work an economically important job, and perform at the level of humans or better, at the same job
The key theme throughout the definitions above is that artificial intelligence must have the ability to learn across different domains. Whether it’s learning a language and communicating across varying topics, or entering a new environment and performing multiple, disparate types of tasks, artificial intelligence must be able to learn and improve in a general way.
To evaluate whether machines or computer programs currently exhibit artificial intelligence, it is useful to examine the three key types of AI:
- Applied, narrow or weak artificial intelligence
- Strong or general artificial intelligence, and
- Artificial superintelligence
Weak AI
Weak AI, otherwise known as narrow or applied AI, is a closed system of code that focuses on one narrow task at a time. It consists of code that recognises specific external stimuli and, when they are sensed, triggers specific responses. Weak AI is limited to the lines of code explicitly taught, or programmed, by humans.
Examples of weak AI include:
- IBM’s Deep Blue chess-playing system
- Computer-controlled characters in video games
- Personal voice assistants like Apple’s Siri, Microsoft's Cortana, and Amazon’s Alexa
- Chatbots that operate on messaging platforms such as WeChat, Facebook Messenger, and Slack
To explain weak AI further, let’s dive into a weak AI system.
Imagine we’re a pizza business, ‘HALs Pizza’, developing a conversational commerce system using a Facebook Messenger chatbot. For this, we’d program the chatbot application to follow a typical user-flow that allows a customer to order pizza. This would include greeting the customer, taking their order, and collecting their pick-up or delivery details.
In this example, we’ll isolate the ‘taking their order’ part of the process. Here, HALs Pizza would ask the customer, “Hi Dave. Would you like to order pizza?”. The customer could respond with a positive reply such as “yes”, “ok”, “sure”, “pizza”, or even “give me pizza with anchovies”. If the chatbot has been programmed to contextualise the words “yes”, “ok”, “sure”, “pizza”, or “<chillies> / <olives> / <anchovies> / <capers>”, and so on, HALs Pizza would reply with “Great. Select from the following menu, Dave…”, or “<anchovies>! Good choice Dave. Here’s the <anchovy> pizza menu…”, thereafter displaying the standard or <anchovy> pizza menu for the customer to choose from.
If the customer answered with a word or phrase the HALs Pizza chatbot had not been taught, such as “pozza now!”, HALs Pizza would simply reply with “I'm sorry, Dave. I'm afraid I don’t understand ‘pozza now!’. You want pizza, yes?”, and the ‘taking the order’ part of the process would iterate.
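The ‘taking the order’ logic above can be sketched as a simple keyword-matching function. This is a minimal illustration in Python; the word lists and replies are taken from the example above, and a real Messenger chatbot would sit behind Facebook’s Messenger Platform API, which is omitted here.

```python
# A minimal sketch of HALs Pizza's 'taking the order' step as a
# keyword-matching function. The word lists and replies come from the
# example above; the function names are illustrative, not a real API.

POSITIVE = {"yes", "ok", "sure", "pizza"}
TOPPINGS = {"chillies", "olives", "anchovies", "capers"}

def take_order(message: str) -> str:
    words = set(message.lower().replace("!", "").split())
    toppings = words & TOPPINGS
    if toppings:  # a taught topping word: show that topping's menu
        t = toppings.pop()
        return f"{t}! Good choice Dave. Here's the {t} pizza menu..."
    if words & POSITIVE:  # a taught positive reply: show the standard menu
        return "Great. Select from the following menu, Dave..."
    # Anything the bot wasn't taught loops back to the same question.
    return (f"I'm sorry, Dave. I'm afraid I don't understand "
            f"'{message}'. You want pizza, yes?")
```

Here, `take_order("give me pizza with anchovies")` matches a taught topping and returns the anchovy menu reply, while the untaught “pozza now!” falls through to the apology, exactly as in the dialogue above.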
Weak AI appears to be intelligent but is not real artificial intelligence
Weak AI is not real artificial intelligence because it does not have the ability to learn. For weak AI to pass any of the AI tests above, a human would need to explicitly program every line of code required, which is practically impossible.
This probably explains the ‘weak’ in the name ‘weak AI’.
General AI
General AI, otherwise known as strong AI, is expected to have the full range of human cognitive abilities and to perform tasks on its own, as human beings do. It’s expected to work as an open system that can break out of its static lines of code, teaching itself to deal with unknown stimuli and situations. This ability to learn and adapt to new contexts it was not previously taught is what differentiates general AI from weak AI.
At the moment, real general AI is theoretical and does not yet technically exist. Although today’s most advanced AI can teach itself to a degree, it can only do so within a limited and specific range, rather than a general one. In addition, no current system fulfils any of the AI tests above: none can converse with a human without the human knowing they’re talking to an ‘artificially intelligent’ thing, and all are a long way off entering your home to make a cup of coffee.
General AI can teach itself within specific areas, but not across general domains
Despite this, general AI is a potential contender for genuine artificial intelligence, and recent developments have generated a lot of excitement in the field. With ever-increasing quantities of data, processing power, and interconnected applications and networks, a subfield of general AI, machine learning, has made steady advances towards true artificial intelligence.
Machine Learning and General AI
Within limits, machine learning is able to teach itself. It does this using algorithms that can interpret and learn from patterns in data, without being explicitly programmed to do so. Instead of using vast arrays of hand-coded software routines with specific sets of pre-programmed instructions, machine learning trains itself using flexible algorithms that learn from large amounts of data representing models of the world. This ability to break free from explicit code using data is what makes machine learning unique and promising.
Examples of machine learning applications include:
- Email spam filtering and category sorting
- Image identification, tagging and compression
- Optical Character Recognition
- Stock market, natural disaster, and other predictions
- Google’s and Tesla’s self-driving cars
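To make the first item above concrete, here is a toy spam filter: a tiny Naive Bayes classifier trained on a handful of made-up emails. A real filter learns from millions of messages; everything here is an illustrative sketch.

```python
# A toy spam filter: a Naive Bayes classifier trained on four made-up
# emails. It counts how often each word appears in spam versus normal
# ('ham') mail, then classifies new text by which class makes its words
# more likely. The training data is invented for this sketch.
import math
from collections import Counter

train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting at noon", "ham"),
    ("project status update", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}  # word counts per class
totals = Counter()                              # total words per class
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def classify(text: str) -> str:
    # Pick the class with the higher log-probability, assuming equal
    # priors and add-one smoothing so unseen words don't zero the score.
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label in counts:
        scores[label] = sum(
            math.log((counts[label][w] + 1) / (totals[label] + vocab))
            for w in text.split()
        )
    return max(scores, key=scores.get)
```

With this training set, `classify("free money")` lands on spam and `classify("project meeting")` on ham; the same counting idea, scaled up enormously, sits behind real email filtering.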
Machine learning algorithms use reinforcement processes to modify and improve themselves based on positive results. They do this by parsing a subset of training data through their system, which is used to model a part of the world that the algorithm moulds itself around. As the underlying model moves one way or the other, the results are measured to eliminate error and the algorithm is tweaked, and these corrections are fed back into the system to continuously improve the results. Fuller sets of data are then added to the model to further improve the system.
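The measure-error-and-tweak loop just described can be sketched with the simplest possible model: a one-parameter linear model fitted to a few made-up points by gradient descent. The data, learning rate, and step count are all illustrative.

```python
# A minimal sketch of the measure-error-and-tweak loop: a one-parameter
# linear model fitted to four made-up (x, y) points by gradient descent.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # y is roughly 2x

w = 0.0    # the model parameter the algorithm 'moulds' around the data
lr = 0.01  # learning rate: how strongly each measured error tweaks w

for step in range(1000):
    # Measure the error of the current model against the training data...
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # ...and feed the correction back to improve the model.
    w -= lr * grad

# After training, w settles close to 2, the slope underlying the data.
```

No line of this code spells out that the answer is 2; the loop of measuring error and feeding the correction back discovers it from the data, which is the essence of the paragraph above.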
Machine learning uses positive reinforcement to improve its results
Once machine learning has advanced in rendering patterns and order from chaotic world models, it should be able to apply the same technique it uses to understand specific domains generally, across different domains.
At present, however, machine learning is limited to specific, albeit extremely complex, applications such as self-driving cars. Although it can recognise, filter and categorise your emails, handwriting and images, for example, it is not yet able to perform generalised cross-domain tasks such as “enrol in a university… make a cup of coffee… and obtain a degree”.
Machine learning is still highly specific and cannot generally operate across domains
Due to being restricted to specific applications, machine learning is not general enough to be considered real artificial intelligence at this stage, either.
An Example of Machine Learning
Earlier this year, Google DeepMind’s AlphaGo beat world champion Ke Jie at ‘Go’, a 3,000-year-old Chinese board game considered the most complex in the world. Go has more possible moves than there are atoms in the universe, which makes it practically impossible to calculate every move and makes AlphaGo the holy grail of today’s artificial intelligence.
Unlike previous AIs that use brute force to calculate possible moves, AlphaGo’s algorithm uses reinforcement learning processes that mimic the human brain.
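The trial-and-error principle behind reinforcement learning can be illustrated with a far simpler relative of AlphaGo’s approach: tabular Q-learning on a toy five-square corridor. Everything here, the states, rewards and parameters, is a made-up miniature; AlphaGo’s actual system combines deep neural networks with tree search.

```python
# Tabular Q-learning on a toy five-square corridor: the agent starts on
# the left, earns a reward of 1 only on reaching the rightmost square,
# and learns purely from trial, error and feedback. A made-up miniature
# of reinforcement learning, not AlphaGo's actual algorithm.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each action per state
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

random.seed(0)  # deterministic run for the sketch
for episode in range(200):
    state = 0
    while state != GOAL:
        # Occasionally explore a random action, otherwise act greedily.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 1 if Q[state][1] >= Q[state][0] else 0
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Positive results feed back into the action-value estimates.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, every state values 'step right' above 'step left',
# even though no rule about moving right was ever programmed in.
```

The reward is only ever seen at the goal, yet repeated play propagates its value backwards through the table, which is the same feedback principle, at a vastly smaller scale, that lets AlphaGo improve from self-play.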
Does AlphaGo enter the realm of true artificial intelligence?
The creators of AlphaGo claim it is general-purpose and can learn to solve many other complex problems without alteration or guidance. Currently, however, AlphaGo has ‘only’ proven itself able to outperform a human within the highly constrained dimensions of a board game.
AlphaGo has not been proven to perform generally across domains, to “work an economically important job, and perform at the level of humans or better”, for example, and so cannot be considered true artificial intelligence at this stage either.
Artificial Superintelligence
Artificial superintelligence is considered the most powerful form of artificial intelligence and refers to a time when computers will vastly surpass human cognitive abilities. Some examples of this idea in fiction are HAL 9000 from 2001: A Space Odyssey, Holly from Red Dwarf, and Skynet from the Terminator series.
Despite HAL, Holly and the Terminator being a little ‘clunky’ - don’t forget, they were really humans acting out artificial intelligence - artificial superintelligence is expected to be much smarter than the best human brains in almost every field, including general wisdom, scientific creativity and even social skills.
Assuming general AI provides the building blocks from which artificial superintelligence kick-starts itself, it’s easy to imagine artificial superintelligence rapidly accelerating into existence should general AI get a foothold. And if artificial superintelligence does become a reality, it will ‘come to life’ through interaction technologies such as IoT, virtual and augmented reality, wearables, voice recognition systems, and robotics.
Emerging from its intangible presence, concentrated around gigantic data centres and channelled through cable and wireless networks, artificial superintelligence will merge with humans as symbiotic cyborgs, containerise us within vehicles and smarter-than-us kitchens, bedrooms and living rooms, and roam with us as semi-autonomous, or maybe even autonomous, robots.
Until then, however, artificial superintelligence remains blocked until the likes of general AI and machine learning spark true artificial intelligence into life.
Ghost in the Machine?
So, does artificial intelligence really exist yet?
Although weak AI exists and is used for a lot of interesting and useful applications, it is unable to learn anything beyond what it has already been taught, or to perpetuate its own intelligence. In other words, although weak AI is at an advanced stage, it is still relatively dumb, and will remain that way.
General AI, including machine learning, can teach itself, but only to a degree and within specific areas. It cannot span domains generally, applying what it has learned in one environment to another, unknown one. For this reason, general AI and machine learning are not yet real artificial intelligence either.
Despite being the logic behind self-driving cars and master board-game players, today’s artificial intelligence still only appears to be intelligent. Until AI learns to stand on its own two feet and operate by itself across different domains, it will remain relatively inert, requiring human input for any major step forward.
Even though it’s progressing at an incredible rate, emerging from the imagination of human minds, artificial intelligence is still, a Ghost in the Machine.