Foundational Math for Generative AI: Understanding LLMs and Transformers through Practical Applications
With Axel Sirota
Liked by 97 users
Duration: 3h 23m
Skill level: Intermediate
Released: 2/3/2025
Course details
Unlock the mysteries behind the models powering today’s most advanced AI applications. In this course, instructor Axel Sirota takes you beyond just using large language models (LLMs) like BERT or GPT and highlights the mathematical foundations of generative AI. Explore the challenge of sentiment analysis with simple recurrent neural networks (RNNs) and progressively evolve your approach as you gain a deep understanding of attention mechanisms and transformer models. Through intuitive explanations and hands-on coding exercises, Axel outlines why attention revolutionized natural language processing, and how transformers reshaped the field by eliminating the need for RNNs altogether. Along the way, get tips on fine-tuning pretrained models, applying cutting-edge techniques like low-rank adaptation (LoRA), and leveraging your newly acquired skills to build smarter, more efficient models and innovate in the fast-evolving world of AI.
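As a rough taste of the attention mechanisms the course covers (this sketch is not course material, and all names and array shapes here are illustrative assumptions), scaled dot-product attention can be written in a few lines of NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores: similarity of each query to every key, scaled by sqrt(d_k)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key axis turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: each query position gets a weighted average of the value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 query positions, dimension d_k = 4
K = rng.standard_normal((5, 4))  # 5 key positions
V = rng.standard_normal((5, 4))  # one value vector per key
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)       # (3, 4): one output vector per query
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

This is the core operation that lets transformers relate every position in a sequence to every other position in a single step, rather than one step at a time as an RNN does.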
Earn a shareable certificate
Share what you’ve learned, and stand out as a professional in your desired industry with a certificate showcasing the knowledge you gained from the course.
LinkedIn Learning
Certificate of Completion
- Showcase on your LinkedIn profile under the “Licenses and Certifications” section
- Download or print out as a PDF to share with others
- Share as an image online to demonstrate your skill
What’s included
- Practice while you learn: 1 exercise file
- Test your knowledge: 3 quizzes
- Learn on the go: access on tablet and phone