Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors: The Hidden Power Behind Data Transformation 🔍💡

In this post, I’m diving into one of the most misunderstood topics (at least I struggled a lot with it!) – Eigenvalues and Eigenvectors. A few days ago, while I was trying to understand the clustering process of around 1000 variables into 50 clusters (a different topic altogether), I stumbled upon these two concepts. Suddenly, I was transported back to my engineering days, remembering how we used to solve equations like:

A X = λ X

(where A is a matrix, X is an eigenvector, and λ is the eigenvalue). But I never truly understood it back then.
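To make the equation concrete, here is a minimal NumPy sketch that computes the eigenvalues and eigenvectors of a small matrix and verifies A X = λ X for each pair. The matrix values are purely illustrative, not from the post:

```python
import numpy as np

# A small symmetric matrix standing in for A in A X = λ X
# (illustrative values, chosen for a clean decomposition)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` is an eigenvector X, paired with the
# eigenvalue λ at the same index, so A @ X equals λ * X
for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    X = eigenvectors[:, i]
    assert np.allclose(A @ X, lam * X)

print(eigenvalues)
```

For this particular matrix the eigenvalues come out as 3 and 1, so one eigenvector is stretched to three times its length and the other is left unchanged.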

After much struggle, I revisited online resources and eventually came across two insightful YouTube videos that genuinely helped me grasp the concept (highly recommend watching them if you want to understand Linear Algebra better!).


What are Eigenvectors and Eigenvalues? 🤔

In simple terms, eigenvectors are special vectors that don’t change direction when a linear transformation (such as a shear or a scaling) is applied. Instead, they only stretch or shrink, and the amount of stretching or shrinking is quantified by the eigenvalue.


The Mathematical Essence:

  • Eigenvectors are vectors that stay on the same line after a transformation is applied; only their magnitude (and possibly their sign) changes.
  • Eigenvalues are the factors by which these eigenvectors are stretched or compressed.
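The two bullets above can be demonstrated directly: under a transformation, an eigenvector keeps its direction while an ordinary vector does not. The matrix below is a hypothetical example chosen so that one of the standard basis vectors happens to be an eigenvector:

```python
import numpy as np

# Hypothetical shear-like matrix; [1, 0] is an eigenvector with λ = 3
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

v_eig = np.array([1.0, 0.0])    # an eigenvector of A
v_other = np.array([0.0, 1.0])  # not an eigenvector

def direction(v):
    """Unit vector pointing the same way as v."""
    return v / np.linalg.norm(v)

# The eigenvector stays on its line; only its magnitude changes (λ = 3)
assert np.allclose(A @ v_eig, 3.0 * v_eig)
assert np.allclose(direction(A @ v_eig), direction(v_eig))

# The ordinary vector is knocked off its original line
assert not np.allclose(direction(A @ v_other), direction(v_other))
```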


Real-World Connection – Clustering and Variance 📊:

When trying to find the direction in which the variance of the data is the maximum, we use eigenvectors. The eigenvalue tells us how much of the total variance is explained in that direction.
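This is exactly what an eigen-decomposition of the covariance matrix gives you. As a sketch, here is some synthetic 2-D data (not the 1000-variable dataset from the post) whose covariance matrix is decomposed to find the direction of maximum variance and the share of total variance it explains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated data: 500 points stretched along one direction
base = rng.normal(size=(500, 2))
data = base @ np.array([[3.0, 1.0],
                        [1.0, 1.0]])

cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: for symmetric matrices

# Each eigenvalue is the variance along its eigenvector's direction,
# so its share of the total is the variance "explained" in that direction
explained = eigenvalues / eigenvalues.sum()
order = np.argsort(eigenvalues)[::-1]

print(explained[order])           # largest share first
print(eigenvectors[:, order[0]])  # direction of maximum variance
```

For data this strongly stretched, the first direction ends up explaining the overwhelming majority of the total variance, which is the same idea principal component analysis builds on.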

To visualize this, I created a plot showing the correlation of:

Corr(X, Y) and Corr(Y, X)

Now, here’s the cool part:

  • If you find the eigenvectors, one will point in the direction that bisects these two vectors, i.e., along the shared correlation between X and Y.
  • The second eigenvector will be perpendicular to the first, pointing in the direction with less variance.


Key Insight:

  • The spread in the first eigenvector direction is greater because it explains more of the data's variance. That’s why the first eigenvector expands (with an eigenvalue > 1).
  • On the other hand, the second eigenvector explains less variance, which is why it contracts (with an eigenvalue < 1).
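For a 2×2 correlation matrix this expand/contract behaviour can be written down exactly: with correlation r between X and Y, the eigenvalues are 1 + r and 1 − r, with eigenvectors along the bisecting direction [1, 1] and the perpendicular direction [1, −1]. A quick check, using an assumed r = 0.6:

```python
import numpy as np

# 2x2 correlation matrix with an assumed correlation r = 0.6
r = 0.6
C = np.array([[1.0, r],
              [r, 1.0]])

eigenvalues, eigenvectors = np.linalg.eigh(C)

# The eigenvalues are exactly 1 + r (> 1, the bisecting direction expands)
# and 1 - r (< 1, the perpendicular direction contracts)
assert np.allclose(sorted(eigenvalues), [1 - r, 1 + r])
print(eigenvalues)
```

Note this also shows why the picture only works for positive r: with r = 0 both eigenvalues are 1 and no direction is preferred.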

This transformation rotates the axes toward the principal directions of the correlation matrix (blue dotted grid lines), while the directions of the eigenvectors themselves stay fixed, illustrating how the variance in the data can be decomposed along these principal axes.


Ready for any feedback or corrections! Feel free to share your thoughts! 😊


[Figure: Eigenvectors]

