Do It Yourself AI: Foundations of Artificial Intelligence
Day 1 – From Math to Model: What Makes AI ‘Intelligent’?
Welcome back to the DIY AI series! Over the past few weeks, we've explored how anyone—from hobbyists to technologists—can harness AI tools to build powerful, creative, and useful applications from scratch. Whether you’ve created a chatbot, experimented with Stable Diffusion, or built a mini recommender, you’ve taken part in the democratization of artificial intelligence.
But now it's time to go deeper.
In this new series, Foundations Expanded, we’ll unpack the essential building blocks that give AI its shape and power. We’ll strip away the black boxes and show what’s happening under the hood—both conceptually and practically. You won’t just learn what AI does, but why it works, and how you can build your own.
Our 10-Part Journey: The DIY AI Foundations Expanded Series
Every post will include a hands-on DIY element to help solidify your understanding with small, practical code snippets.
Day 1: From Math to Model – What Makes AI Intelligent?
Let’s start at the root. What actually is AI doing?
At its core, almost all of modern machine learning comes down to this:
Given data and a goal, find a mathematical function that maps inputs to outputs as accurately as possible.
This sounds abstract, but here's a real-world analogy: predicting a house's price from its size is just finding the curve that best passes through past (size, price) examples, so that it also gives sensible answers for houses it has never seen.
In other words, AI is just function fitting—but done at massive scale and complexity.
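To make "function fitting" concrete, here is a toy sketch (not from the original post): brute-force a grid of candidate slopes and keep the one with the smallest average squared error. The data, the slope of 3, and the grid are all illustrative choices.

```python
import numpy as np

# Toy data: outputs are roughly 3 * input (the "true" function is unknown to the fitter)
rng = np.random.default_rng(42)
x = rng.random(50)
y = 3 * x + rng.normal(0, 0.05, size=50)

# Candidate functions: y = w * x for a grid of slopes from 0 to 5
candidates = np.linspace(0, 5, 101)

# Score each candidate by mean squared error and keep the best
errors = [((y - w * x) ** 2).mean() for w in candidates]
best_w = candidates[int(np.argmin(errors))]
print(f"Best slope found: {best_w:.2f}")
```

Real machine learning replaces the brute-force grid with gradient descent, which is exactly what the linear model below does.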
Step-by-Step: A Simple Linear Model
Let’s write a basic linear regression model from scratch using only NumPy. We’ll predict y from x using the equation y = w * x + b, where w is the slope (weight) and b is the intercept (bias).
Here’s a minimalist implementation:
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: y = 2x + 1 plus a little Gaussian noise
np.random.seed(0)
x = np.random.rand(100, 1)
true_w, true_b = 2, 1
y = true_w * x + true_b + np.random.normal(0, 0.1, size=(100, 1))

# Random initial parameters and training settings
w, b = np.random.randn(), np.random.randn()
learning_rate = 0.1
epochs = 100

for epoch in range(epochs):
    # Predict and measure the mean squared error
    y_pred = w * x + b
    loss = ((y - y_pred) ** 2).mean()
    # Compute gradients
    grad_w = -2 * (x * (y - y_pred)).mean()
    grad_b = -2 * (y - y_pred).mean()
    # Update parameters
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b
    if epoch % 10 == 0:
        print(f"Epoch {epoch}: Loss = {loss:.4f}")

plt.scatter(x, y, label="Data")
plt.plot(x, w * x + b, color="red", label="Learned Model")
plt.legend()
plt.title("DIY Linear Regression")
plt.show()
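As a sanity check (a sketch, not part of the original walkthrough), you can compare gradient descent against NumPy's closed-form least-squares fit via np.polyfit. The loop below runs for more epochs than the walkthrough's 100 so the two agree tightly; everything else mirrors the code above.

```python
import numpy as np

# Recreate the same data as in the walkthrough
np.random.seed(0)
x = np.random.rand(100, 1)
y = 2 * x + 1 + np.random.normal(0, 0.1, size=(100, 1))

# Gradient descent, same update rule as above, run longer for tight convergence
w, b = np.random.randn(), np.random.randn()
learning_rate = 0.1
for _ in range(1000):
    y_pred = w * x + b
    w -= learning_rate * (-2 * (x * (y - y_pred)).mean())
    b -= learning_rate * (-2 * (y - y_pred).mean())

# Closed-form least-squares fit for comparison (highest degree first)
ls_w, ls_b = np.polyfit(x.ravel(), y.ravel(), deg=1)
print(f"gradient descent: w={w:.3f}, b={b:.3f}")
print(f"least squares:    w={ls_w:.3f}, b={ls_b:.3f}")
```

If the two disagree noticeably, the usual culprits are too few epochs or a learning rate that is too large or too small.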
This is machine learning stripped to its bones: predict, measure the loss, compute gradients, update the parameters, and repeat.
Why This Matters
If you understand what’s happening in this tiny model, you’re already grasping the essence of how far more complex systems (transformers, CNNs, etc.) work. All of them are just fancier versions of this same pipeline—layers of functions trying to fit data better, step by step.
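To see that the pipeline really does generalize, here is a minimal sketch (not from the original series) of the same predict, score, differentiate, update loop, but with one hidden layer of tanh units fitting a sine curve that no straight line could match. The layer size, learning rate, and epoch count are all illustrative choices.

```python
import numpy as np

# Nonlinear toy data: y = sin(2*pi*x) plus noise
rng = np.random.default_rng(1)
x = rng.random((200, 1))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, size=(200, 1))

# One hidden layer of tanh units
hidden = 16
W1 = rng.normal(0, 1, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, size=(hidden, 1))
b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass: two stacked functions instead of one
    h = np.tanh(x @ W1 + b1)
    y_pred = h @ W2 + b2
    loss = ((y - y_pred) ** 2).mean()
    # Backward pass: chain rule by hand
    grad_pred = -2 * (y - y_pred) / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_z = grad_h * (1 - h ** 2)  # derivative of tanh
    grad_W1 = x.T @ grad_z
    grad_b1 = grad_z.sum(axis=0)
    # Update, exactly as in the linear model
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(f"final loss: {loss:.4f}")
```

Notice that nothing structural changed from the linear model: only the function being fitted got richer, which is the whole idea behind deeper architectures.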
In Day 2, we’ll dive into loss functions—the scorecard that tells AI how well it's doing (and which direction to adjust next).