Understanding Artificial Intelligence
AI is the ability of a digital computer or computer-controlled robot to perform tasks that traditionally require human intelligence. AI programs develop systems with characteristics of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. This includes creating algorithms to classify, analyze, and draw predictions from data. It also involves acting on data, learning from new data, and improving over time, much like a child growing into a smarter adult. And like humans, AI is not perfect.
Since the development of the digital computer in the 1940s, computers have been programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Despite continuing advances in processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in certain specific tasks, so artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.
Regular programs define all possible scenarios and only operate within those defined scenarios. AI trains a program for a specific task and allows it to explore and improve on its own. A good AI figures out what to do when met with unfamiliar situations. A normal spreadsheet application cannot improve on its own, but facial recognition software can get better at recognizing faces the longer it runs. To apply AI, you need a lot of data. AI algorithms are trained using large datasets so that they can identify patterns, make predictions, and recommend actions, much like a human would, just faster and at larger scale.
Even the best AI today cannot match up to the human brain in some respects. While some AI is designed to mimic the human brain, AI today is only good at a relatively narrow range of tasks. AI can apply massive computing power to a narrow set of data and methods. But a human brain applies medium computing power to a much wider set of data and methods. In other words, we can apply our brains to almost anything, while AI specializes in certain things.
AI research follows two distinct, and to some extent competing, methods: the top-down approach and the bottom-up approach. The top-down approach seeks to replicate intelligence by analyzing cognition independently of the biological structure of the brain, in terms of the processing of symbols. The bottom-up approach, on the other hand, involves creating artificial neural networks inspired by the brain’s structure. To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by tuning the network; tuning adjusts the responsiveness of different neural pathways to different stimuli. In contrast, a top-down approach typically involves writing a computer program that compares each letter with stored geometric descriptions.
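The contrast can be sketched in a few lines of Python. This is a toy illustration only: the 3x3 pixel "letters", the template-matching rule, and the tiny perceptron are all invented for this example, not a real OCR system.

```python
# Toy 3x3 bitmaps for the letters 'T' and 'L', flattened to 9 pixels.
LETTERS = {
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
    "L": [1, 0, 0,
          1, 0, 0,
          1, 1, 1],
}

def top_down_classify(pixels):
    """Top-down: compare the input against stored geometric templates."""
    # Pick the template with the fewest mismatched pixels.
    return min(LETTERS, key=lambda k: sum(a != b for a, b in zip(LETTERS[k], pixels)))

def train_perceptron(epochs=20, lr=0.1):
    """Bottom-up: tune weights until the network separates 'T' (+1) from 'L' (-1)."""
    w, b = [0.0] * 9, 0.0
    data = [(LETTERS["T"], 1), (LETTERS["L"], -1)]
    for _ in range(epochs):
        for x, target in data:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if out != target:  # adjust the "pathways" only on mistakes
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

def bottom_up_classify(pixels, w, b):
    return "T" if sum(wi * xi for wi, xi in zip(w, pixels)) + b > 0 else "L"

w, b = train_perceptron()
noisy_T = [1, 1, 1, 0, 1, 0, 0, 0, 0]  # a 'T' with one pixel missing
print(top_down_classify(noisy_T), bottom_up_classify(noisy_T, w, b))
```

Both approaches classify the noisy 'T' correctly here, but they get there differently: the top-down version encodes the letter shapes explicitly, while the bottom-up version learns its weights from examples.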
Machine learning algorithms identify patterns and/or predict outcomes. Many organizations sit on huge data sets related to customers, business operations, or financials, and human analysts have limited time and brainpower to process and analyze this data. Machine learning can therefore be used to predict outcomes from input data, much like regression analysis but on far larger scales and with many more variables. A good example is algorithmic trading, where the trading model must analyze vast amounts of input data and recommend profitable trades. As the model keeps working with real-world data, it can even ‘improve’ itself and adapt its trading strategies to market conditions. Machine learning is also used to find insights or patterns in large data sets that human eyes sometimes miss. For example, a company can study how its customers’ purchase patterns are evolving and use the findings to modify its product lines. Many AI methodologies, including neural networks and deep learning, are related to machine learning.
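The "like regression, but learned from data" idea can be shown with a minimal sketch: fitting a line to a handful of synthetic numbers by gradient descent, the same nudge-the-parameters loop that larger models use. The "ad spend vs. sales" figures and learning rate below are invented for illustration.

```python
# Synthetic data: sales roughly follow 2 * spend + 1, with a little noise.
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
sales = [3.1, 4.9, 7.2, 9.0, 10.8]

w, b = 0.0, 0.0        # model parameters to be learned
lr = 0.01              # learning rate
n = len(spend)
for _ in range(5000):  # repeatedly nudge w and b to reduce squared error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(spend, sales)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(spend, sales)) / n
    w -= lr * grad_w
    b -= lr * grad_b

def predict(x):
    return w * x + b

print(round(w, 2), round(b, 2), round(predict(6.0), 1))
```

Real machine learning systems do essentially this with thousands of variables and millions of parameters instead of two.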
Neural networks try to replicate the human brain’s approach to analyzing data. They can identify, classify, and analyze diverse data, deal with many variables, and find patterns that are too complex for human brains to see. Deep learning is a subset of machine learning. When applied to a neural network, it allows the network to learn without human supervision from unstructured data, that is, data that isn’t classified or labeled. Deep learning is well suited to analyzing the big data sets that organizations collect, which include different formats such as text, images, video, and sound. Neural networks are frequently combined with machine learning, deep learning, and computer vision. That’s why people talk about ‘deep neural networks’: essentially neural networks with more than two layers. More layers mean more analytical power. Deep neural networks can be trained to identify and classify objects; a well-known use is facial recognition, identifying unique faces in photos and videos. Neural networks also learn over time. For instance, they get better at classifying objects and identifying faces as they are fed more data.
Expert systems occupy a type of microworld—for example, a model of a ship’s hold and its cargo—that is self-contained and relatively uncomplicated. For such AI systems every effort is made to incorporate all the information about some narrow field that an expert or group of experts would know, so that a good expert system can often outperform any single human expert. There are many commercial expert systems, including programs for medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, financial document routing, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and automatic help services for home computer owners.
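An expert system's microworld is, at its core, a curated rule base. The sketch below is a hypothetical credit-authorization example; the rules and thresholds are invented for illustration, whereas real systems encode hundreds of expert-written rules.

```python
def authorize_credit(applicant):
    """Apply expert-written rules in priority order; return a decision with its reason."""
    rules = [
        (lambda a: a["missed_payments"] > 2, "deny: poor payment history"),
        (lambda a: a["income"] < 2 * a["requested_limit"], "deny: limit too high for income"),
        (lambda a: a["years_on_file"] < 1, "refer: insufficient credit history"),
    ]
    for condition, decision in rules:
        if condition(applicant):
            return decision
    return "approve"

print(authorize_credit({"missed_payments": 0, "income": 50000,
                        "requested_limit": 5000, "years_on_file": 4}))
```

Within its narrow microworld such a system is fast and consistent, but it knows nothing outside the rules its human experts wrote down.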
We interact with AI every day in our professional and personal lives. Keep in mind that many real-world applications use more than one AI technology.
- Task automation: The repetitive back-office tasks such as clerical work, invoicing, and management reporting can be automated to save time and improve accuracy. Factory and warehouse work can also be automated using AI-powered robots.
- Customer support: Remember the online text chat you had with your bank’s customer support? That may have been a chatbot instead of an actual human.
- Social media: Facebook uses AI to recognize faces. When you upload photos to Facebook, it puts a box around the faces in the photo and suggests friends’ names to tag.
- Self-driving cars: On-board cameras and computers identify objects and people on the road, follow traffic signs, and drive the car. Proponents claim that early models are already safer than human drivers in some conditions.
- Content recommendations: Video streaming apps apply machine learning to your viewing history to personalize the movie and TV show recommendations you see. They also analyze what you and people with similar preferences watched in the past, and even auto-generate personalized thumbnails and artwork for movie titles, to entice you to click on a title that you’d otherwise ignore. All to ensure that you stay glued to the screen while your brain melts.
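The "people with similar preferences" idea can be sketched in miniature: find the user whose viewing history overlaps most with yours, then suggest what they watched and you haven't. The histories and titles below are invented; real services combine far richer signals.

```python
# Toy viewing histories, keyed by (hypothetical) user name.
HISTORIES = {
    "you":   {"Sci-Fi A", "Sci-Fi B", "Drama A"},
    "user2": {"Sci-Fi A", "Sci-Fi B", "Sci-Fi C"},
    "user3": {"Comedy A", "Comedy B", "Drama A"},
}

def recommend(user):
    """Recommend unseen titles watched by the most similar other user."""
    mine = HISTORIES[user]

    def similarity(other):
        # Jaccard similarity: shared titles / all titles between the two users.
        theirs = HISTORIES[other]
        return len(mine & theirs) / len(mine | theirs)

    neighbor = max((u for u in HISTORIES if u != user), key=similarity)
    return sorted(HISTORIES[neighbor] - mine)

print(recommend("you"))
```

Here "you" overlap most with user2's sci-fi habit, so the unseen sci-fi title is what gets surfaced.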