Machine Learning 101
Machine Learning. When you hear those words, what pops into your head? Something abstract or futuristic? AI? Under the hood, ML is just about improving performance with experience. Let's dive in.
It works much like the way people learn.
I am aiming to write a pretty short post on a weekly cadence around the topics we are learning. Think of these as mini-reads. The aim is for me to condense my own notes into bite-size articles that I can quickly recap.
Let's look at a definition of learning.
New input leads to different behaviour.
The faster you can take an input and alter your behaviour, the faster you can learn. The reason a kid only touches a hot flame once is that they only need one experience to learn that particular lesson and change their behaviour. Not all lessons work as well.
In ML, we can break it down like this:
Task (T) = activity you are trying to do
Performance (P) = how you measure success at that task
Experience (X) = data or feedback system
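The T/P/X breakdown above can be sketched as a toy program. This is purely illustrative, with made-up data and a deliberately simple model: the task is predicting y from x, performance is mean squared error, and experience is a handful of (x, y) examples.

```python
# Task (T): predict y from x with a one-parameter model, y_hat = w * x.
# Performance (P): mean squared error over the examples.
# Experience (X): a tiny, invented dataset where y = 2x.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0  # start off knowing nothing


def mse(w):
    # Performance measure: average squared error over the data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)


for _ in range(100):
    # Feedback: the gradient of the error tells us which way to adjust w.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= 0.01 * grad  # change behaviour based on the feedback

print(round(w, 2))  # converges towards 2.0, i.e. it "learned" y = 2x
```

Each pass through the loop is one unit of experience: the program measures how wrong it is and nudges its behaviour (the weight `w`) in the direction that reduces the error.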
Again, let's take a simple example: a kid learning to ride a bike. The combination of attempting the task and getting feedback ("you're falling left") means they will get better at cycling. The better the feedback, the faster they progress. Even with no external feedback, simply repeating the task over and over will improve them over time, just more slowly. The important point is that these factors mean some kids will improve quicker than others. Same thing in ML.
There are different ways to approach learning.
These learning styles all lead to different practical applications. AI and ML have been around for a long time. The Google search engine used ML algorithms long before ChatGPT came out.
Some of the most common examples in ML:
Classification => image detection, e.g. is this a sheep or a goat?
Regression => based on previous data, predict a new result, e.g. house prices.
Clustering => grouping unlabelled data, e.g. take multiple image inputs and group them by some shared feature, such as labelling animals by number of legs (2 legs vs. 4 legs).
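The clustering idea above can be shown in a few lines: grouping unlabelled items purely by a feature, with no names or categories given up front. The animals and leg counts here are invented for illustration, and real clustering algorithms (like k-means) handle noisier features, but the core idea is the same.

```python
# A minimal sketch of clustering: group unlabelled animals by a
# single feature (number of legs). No category labels are provided;
# the groups emerge from the data itself. (Data is made up.)

animals = [("human", 2), ("dog", 4), ("chicken", 2), ("cat", 4), ("ostrich", 2)]

clusters = {}
for name, legs in animals:
    clusters.setdefault(legs, []).append(name)

print(clusters)
# {2: ['human', 'chicken', 'ostrich'], 4: ['dog', 'cat']}
```

Nobody told the program what a "biped" or "quadruped" is; it just grouped the inputs by a shared feature, which is the essence of clustering.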
These examples are really simple. That's the point, though: to help build a better understanding of what the 'magic' in Machine Learning actually is. Once you see that these algorithms 'learn from experience', it stops feeling magical. It starts to make sense why they are brilliant at some tasks and terrible at others. In essence, they have traits similar to how people learn.