Getting Started with AI
We've come a long way since the pioneering computer "CSIRAC"


The basics...

Artificial Intelligence has existed as a field within computer science for more than 60 years. AI aims to have computers behave in ways similar to humans; as such, there is no single definition of what constitutes an AI system, but it typically includes characteristics such as:

  • Ability to learn
  • Ability to behave or interact in a human-like manner

Many approaches and technologies have been explored or used within AI, including:

  • Statistical and probabilistic approaches (mathematical approaches for optimization)
  • Expert Systems (e.g. rules engines)
  • Genetic Algorithms (typically used for optimization problems)
  • Neural Networks (attempts to loosely mimic the behavior of the brain; often used for classification problems) 
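To make the genetic-algorithm bullet above concrete, here is a minimal sketch in Python: a population of candidate numbers is evolved toward the maximum of a toy fitness function via selection, crossover and mutation. The fitness function, population size and mutation scale are arbitrary illustrative choices, not from the original text.

```python
import random

def fitness(x):
    # Toy objective to maximize: peaks at x = 3
    return -(x - 3.0) ** 2

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate solutions
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover (average two parents) plus a small random mutation
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append((a + b) / 2 + rng.gauss(0, 0.5))
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)  # should converge near 3.0
```

Real genetic-algorithm libraries encode candidates as bit strings or vectors, but the select/crossover/mutate loop is the same idea.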

Machine learning (typically using Neural Networks or less commonly Genetic Algorithms) is a useful approach when it is not clear how to programmatically compute an answer due to the complexity of the subject.

Machine learning approaches have become more popular since they have advantages over the previous generation of rule based approaches, including:

  • Ability to generalize from examples. By comparison, expert systems often fail in the ‘real world’ when input data is unexpected.
  • Ability to find patterns.

Deep Learning has also garnered significant attention in the last few years. Deep Learning is a Machine Learning approach built on Neural Networks, but with multiple ‘layers’ of neurons, allowing the system to encode more information. 
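The ‘multiple layers’ idea can be sketched in a few lines of NumPy: each layer applies a weighted sum followed by a non-linearity, and stacking layers lets the network represent more complex functions. The weights below are random placeholders (an untrained network), and the layer sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity: zero out negative values
    return np.maximum(0, x)

def forward(x, layers):
    # Pass the input through each (weights, bias) layer in turn
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Three stacked layers: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),
    (rng.standard_normal((8, 2)), np.zeros(2)),
]

out = forward(rng.standard_normal(4), layers)
print(out.shape)  # (2,)
```

Training such a network means adjusting the weights (typically by gradient descent) so the outputs match labelled examples; frameworks such as TensorFlow or PyTorch automate that part.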

The term ‘Cognitive Computing’ has also become popular, typically denoting AI systems that focus on unstructured data sources such as text, images, video or voice. 

AI training approaches can be categorized as ‘supervised’ or ‘unsupervised’, and the two serve different needs. Supervised learning involves human experts in the training process to tell the algorithm the correct output; once given enough examples, the system should then be able to give the same answer for similar inputs. Unsupervised learning can be used to uncover trends or clusters in data – for example, to find anomalies or patterns.
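The supervised/unsupervised distinction can be illustrated with a tiny made-up dataset: the supervised half learns from human-provided labels via a nearest-neighbour rule, while the unsupervised half groups the same points with no labels at all using simple 1-D 2-means clustering. Data values and labels here are invented for illustration.

```python
# Supervised: each point carries a human-provided label
labelled = [(1.0, "low"), (1.2, "low"), (8.9, "high"), (9.3, "high")]

def classify(x):
    # Predict the label of the closest labelled example
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

print(classify(1.5))   # "low"
print(classify(9.0))   # "high"

# Unsupervised: cluster the same points with no labels (1-D 2-means)
points = [p for p, _ in labelled]
centres = [points[0], points[-1]]
for _ in range(10):
    groups = ([], [])
    for p in points:
        # Assign each point to its nearest centre
        groups[abs(p - centres[0]) > abs(p - centres[1])].append(p)
    # Move each centre to the mean of its assigned points
    centres = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 1) for c in centres))  # [1.1, 9.1]
```

The supervised rule can name its answer ("low"/"high") because humans supplied those names; the clustering step only discovers that two groups exist, which is exactly the anomaly- and pattern-finding use mentioned above.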

Why Now?

While most of the algorithms have now been around for many years, AI has particularly taken off in the last few years due to the availability of data (e.g. via cloud data sources) and the additional computational power available in Cloud environments. The increase in the amount of data has had two effects: it has made it more difficult for humans to interpret the data and make decisions, while simultaneously providing more examples from which machine learning algorithms can learn.

AI Application Areas

AI systems have become better than humans within narrow domains – for example, classification of thousands of images. However, successful usage often falls into ‘Augmented’ Intelligence, where the human decision-making process is supplemented by the system providing relevant – and often difficult to find – information.

AI has proved useful in areas such as:

  • Automation – which ad should you see based on your browsing history? Speech recognition, or translation between languages.
  • Prediction – will this machine fail?
  • Optimization – what’s the fastest route for this vehicle? What’s the minimum amount of fuel we need to keep?
  • Troubleshooting – how do I fix this?
  • Decision Making – what are the risks in this type of project?
  • Perception – including detection of anomalous behavior in streaming data and image recognition (e.g. detection of rust)

Getting Started

Lots of Data + Clever Algorithms + Compute Power = Great AI Outcomes? Not quite that easy.

AI adoption needs to be driven by a statement of one or more business challenges; the nature of those challenges then drives selection of the best solution. Points to consider include:

1.    Is a pre-built solution suitable? Otherwise, can we simply integrate cloud-based APIs? Or is there a compelling need to build a new bespoke AI solution?

2.    What data is available for training and testing the system? Often only a small percentage of the data is in-house, and the solution can be more robust with the aid of external data (such as weather, social media, IoT sensors).

3.    What skills, tools, frameworks and computational power will be needed? What experiments might be needed to ensure we are using the right parameters for our chosen AI approach?

4.    A definition of success – for example, how will business users gain trust in the output of the end solution?

5.    Proof of success – how will the system be tested?

6.    Architectural solution – how will performance, availability, privacy and security be managed?

7.    Adoption plan – how will the solution fit into the day-to-day activities of the end user?

8.    Executive support – bespoke AI projects in particular are inherently experimental and driven by a need for innovation, and the project will not be successful without executive sponsorship. This can be offset (to a degree) by use of pre-built applications or frameworks with pre-trained AI capabilities.


Summary

AI is a very broad field that has grown extremely rapidly over the last few years, and as such it can be daunting to know where to start.

Fortunately, a range of industry-specific solutions and applications has been designed, built and pre-trained; where these exist they are often the fastest path to adoption. Alternatively, a range of ‘atomic’ AI capabilities is available on Cloud platforms, and these can be readily integrated into applications. Finally, if there are very specific needs, a range of AI frameworks exists; although the organization will need access to data scientists and AI experts to take advantage of these, tooling exists to increase their productivity, and hardware options have arisen to ensure high-performance computation both in the cloud and on-premises.

New adoption should start by specifying a business problem, considering the data required and then the best tool, technique or algorithm, rather than from a bottom-up ‘what is the best AI tooling’ approach.

Recently I read a fascinating analysis by Gartner analysts Whit Andrews, Kenneth F. Brant, Magnus Revang, Martin Reynolds, Frances Karamouzis and Jim Hare. Artificial intelligence (AI) is changing the way in which organizations innovate and communicate their processes, products and services. Practical strategies for employing AI and choosing the right vendors are available to data and analytics leaders right now. AI continues to change how businesses and governments interact with customers and constituents. Gartner's 2017 predictions show that humans - as is always the case in computing change - are the pivot on which AI can turn. Read more at https://www.garudax.id/pulse/gartner-predicts-2017-artificial-intelligence-simon-berglund/
