Eigenvalues and Eigenvectors: The Hidden Power Behind Data Transformation 🔍💡
In this post, I’m diving into one of the most misunderstood topics (at least I struggled a lot with it!) – Eigenvalues and Eigenvectors. A few days ago, while I was trying to understand the clustering process of around 1000 variables into 50 clusters (a different topic altogether), I stumbled upon these two concepts. Suddenly, I was transported back to my engineering days, remembering how we used to solve equations like:
A X = λ X
(where A is a matrix, X is an eigenvector, and λ is the eigenvalue). But I never truly understood it back then.
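To make this concrete, here is a small NumPy sketch that solves A X = λ X numerically (the matrix A below is my own illustrative example, not from any particular dataset):

```python
import numpy as np

# An example 2x2 symmetric matrix (chosen purely for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))  # this matrix has eigenvalues 1 and 3

# Check the defining equation A X = λ X for each eigenpair
for i in range(len(eigenvalues)):
    X = eigenvectors[:, i]
    assert np.allclose(A @ X, eigenvalues[i] * X)
```

The assertion is exactly the equation from my engineering days: multiplying the matrix by an eigenvector gives back the same vector, scaled by λ.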
After much struggle, I revisited online resources, and eventually, I came across two insightful YouTube videos that genuinely helped me grasp the concept (highly recommend watching if you want to understand Linear Algebra better!):
What are Eigenvectors and Eigenvalues? 🤔
In simple terms, eigenvectors are special vectors that don’t change direction when a linear transformation (such as a scaling or a shear) is applied. Instead, they are only stretched or shrunk, and the amount of stretching or shrinking is quantified by the eigenvalue. (A pure rotation, by contrast, turns every vector, which is why it has no real eigenvectors in 2D.)
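A quick way to see the “direction unchanged” property, again with an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative symmetric matrix

v = np.array([1.0, 1.0])     # an eigenvector of A (direction [1, 1])
w = np.array([1.0, 0.0])     # an ordinary, non-eigen vector

# The eigenvector is only stretched: A v = 3 v, so its eigenvalue is 3
print(A @ v)   # [3. 3.] -- same direction as v, stretched by the eigenvalue

# A generic vector changes direction as well as length
print(A @ w)   # [2. 1.] -- no longer points along [1, 0]
```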
The Mathematical Essence:
Real-World Connection – Clustering and Variance 📊:
To find the direction in which the variance of the data is maximal, we use the eigenvectors of the data’s covariance (or correlation) matrix. The corresponding eigenvalue tells us how much of the total variance is explained along that direction.
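Here is a sketch of that idea with randomly generated correlated data (so the exact numbers are illustrative): the eigenvalues of the covariance matrix sum to the total variance, and each eigenvalue’s share is the fraction of variance explained along its eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data (illustrative)
x = rng.normal(size=500)
y = 0.8 * x + 0.3 * rng.normal(size=500)
data = np.column_stack([x, y])

cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: for symmetric matrices

# Fraction of total variance explained along each eigenvector
explained = eigenvalues / eigenvalues.sum()
print(explained)  # the larger share belongs to the direction of maximum variance
```

This is exactly the quantity PCA reports as the “explained variance ratio” when reducing many variables to a few components.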
To visualize this, I created a plot showing the correlations Corr(X, Y) and Corr(Y, X).
Now, here’s the cool part:
Key Insight:
This transformation rotates the axes toward the principal directions of the correlation matrix (blue dotted grid lines), while the eigenvectors themselves keep their direction, illustrating how the variance in the data can be decomposed along these principal axes.
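That rotation can be sketched in a few lines of NumPy (again with illustrative random data): expressing the data in the eigenvector basis makes the covariance matrix diagonal, i.e. all of the variance is decomposed along the principal axes with no cross-correlation left.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 0.7 * x + 0.4 * rng.normal(size=1000)   # correlated data (illustrative)
data = np.column_stack([x, y])

cov = np.cov(data, rowvar=False)
_, eigenvectors = np.linalg.eigh(cov)

# Express the data in the eigenvector (principal-axis) basis
rotated = data @ eigenvectors

# In the new axes the covariance is (numerically) diagonal:
# the off-diagonal correlation has been rotated away
print(np.cov(rotated, rowvar=False).round(4))
```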
Ready for any feedback or corrections! Feel free to share your thoughts! 😊