Linear Transformations
Understanding the bridge between linear transformations and AI/Machine Learning Algorithms.

After spending the last few months learning the conceptual foundations of linear algebra, I am now exploring how those ideas connect to the AI space.

The best way to understand any concept in linear algebra is through proofs. Proofs have long been one of my strongest skills in mathematics, and I found that working through them not only deepened my foundation in every concept of this particular math space, but also made it far easier to explain that material to others.

One of the key foundational concepts of linear algebra is the linear transformation.

Linear transformations are transformations that satisfy two defining properties: the preservation of scalar multiplication and the preservation of addition.

The preservation of scalar multiplication states that when a transformation is applied to a vector scaled by any real number r, the scalar can be pulled through the transformation. In symbols: T(r·x) = r·T(x) for every scalar r and every vector x in the domain.
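To make the property concrete, here is a minimal numerical check, assuming T is represented by a matrix A (every linear map between finite-dimensional spaces has such a matrix form); the particular values of A, x, and r are arbitrary illustrative choices:

```python
import numpy as np

# Any matrix A defines a linear transformation T(x) = A @ x.
# A, x, and r below are arbitrary illustrative values.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, -1.0]])
x = np.array([1.0, -2.0, 0.5])
r = 4.0

# Scalar pull-through: T(r*x) equals r*T(x).
print(np.allclose(A @ (r * x), r * (A @ x)))  # True
```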

Furthermore, the preservation of addition states that applying a transformation to the sum of two vectors x and v gives the same result as transforming each vector individually and then adding: T(x + v) = T(x) + T(v).
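The additivity property can be checked the same way, again with an arbitrary illustrative matrix standing in for T:

```python
import numpy as np

# Same illustrative setup as above: T(x) = A @ x.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, -1.0]])
x = np.array([1.0, -2.0, 0.5])
v = np.array([0.0, 1.0, 2.0])

# Additivity: T(x + v) equals T(x) + T(v).
print(np.allclose(A @ (x + v), A @ x + A @ v))  # True
```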

Another essential concept for understanding linear transformations is the mapping between their domain and codomain. Like any function, a linear transformation takes inputs from a domain and sends them to outputs in a codomain. These two spaces need not be the same: for example, a 2×3 matrix represents a linear transformation T: R^3 → R^2, carrying vectors of R^3 to vectors of R^2.

When the domain and codomain do coincide, as in T: R^3 → R^3, the transformation maps the space to itself and is called a linear operator on that space.
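Below is a minimal sketch of this domain/codomain bookkeeping in code, again assuming the transformations are represented by matrices; the matrices A and B and the vector x are arbitrary illustrative values:

```python
import numpy as np

# A 2x3 matrix represents a linear map T: R^3 -> R^2:
# it consumes length-3 vectors and produces length-2 vectors.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
x = np.array([3.0, 4.0, 5.0])  # x lives in the domain R^3

y = A @ x                      # y lives in the codomain R^2
print(y.shape)                 # (2,)

# A square matrix, by contrast, maps a space to itself: R^3 -> R^3.
B = np.eye(3)
print((B @ x).shape)           # (3,)
```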

Each of these concepts forms the core foundation for understanding the meaning behind linear transformations within the scope of linear algebra. And as shown above, the best way to grasp these simple stepping stones is through the process of building proofs.

As for the applications of linear transformations in AI: transformations of this kind play a huge part in data processing, turning a given input into a useful output. A broad picture of the process: starting from a matrix containing data, several transformations are applied in sequence, shrinking the size of the input data while steering it toward the correct output. Exactly which transformations are applied varies with the algorithm that forms the backbone of the model; a rough sketch of the idea appears below.
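As a loose illustration only, here is how a single learned linear layer might shrink its input. The shapes and the names X, W, and b are hypothetical placeholders filled with random values, not any particular trained model (and note that adding the bias b makes the full layer affine rather than strictly linear):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a batch of 784-dimensional inputs (e.g., flattened
# 28x28 images) mapped down to 10 output scores by one linear layer.
# X, W, and b are random placeholders here, not trained values.
X = rng.normal(size=(32, 784))   # 32 samples, each a vector in R^784
W = rng.normal(size=(784, 10))   # weight matrix: a linear map R^784 -> R^10
b = np.zeros(10)                 # bias term (makes the layer affine)

scores = X @ W + b               # each 784-dim input is shrunk to 10 numbers
print(scores.shape)              # (32, 10)
```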

Furthermore, decision-making plays a major role as well: knowing which transformation to apply to a given matrix of data, and when to apply it, is highly important.

Understanding the basic principles of linear transformations through proofs is a great way to become familiar with the mathematics behind many common algorithms in AI design.

