Model training vs. model inference - NVIDIA Tutorial
From the course: NVIDIA Certified Associate AI Infrastructure and Operations (NCA-AIIO) Cert Prep
When you build world-class AI infrastructure, you will use it for two recurring activities: model training and model inference. Both will happen regularly in your environment. Once a model is deployed, it serves inference requests continuously; a fraud detection model, a recommendation system, or a market prediction model, for example, will be performing inference all the time. Over a period of time, that model will require retraining. Retraining ensures the model learns from newly collected data, corrects any errors it was making, and keeps providing accurate inferences for your application. These two activities will keep happening on a regular basis on your AI infrastructure, but they differ in their use cases and in how they are optimized. So let's talk about the differences between model…
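To make the contrast concrete, here is a minimal sketch (not from the course) of a training step versus an inference step, using PyTorch as an illustrative framework. The model, data, and hyperparameters are hypothetical placeholders; the point is that training runs forward and backward passes with weight updates, while inference runs a gradient-free forward pass on incoming requests.

```python
# Minimal sketch contrasting training and inference. All names and
# values below are illustrative placeholders, not course material.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # toy stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- Training (or periodic retraining on newly collected data) ---
# Throughput-oriented and compute-heavy: forward pass, loss,
# backward pass, weight update, repeated over many batches.
model.train()
features = torch.randn(32, 10)                # a batch of (new) training data
labels = torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()                               # gradients cost extra memory/compute
optimizer.step()

# --- Inference (serving the deployed model, e.g. fraud checks) ---
# Latency-oriented: forward pass only, no gradients, weights frozen.
model.eval()
with torch.no_grad():
    request = torch.randn(1, 10)              # a single incoming request
    prediction = model(request).argmax(dim=1)
```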
Contents
- AI workflows (5m 23s)
- ML frameworks (3m 9s)
- The NVIDIA differentiator (1m 41s)
- Model training vs. model inference (7m 46s)
- Job scheduling vs. container orchestration (6m 13s)
- Slurm vs. Kubernetes (5m 16s)
- NVIDIA integration (2m 30s)
- ML Ops: Analogy (4m 16s)
- Why ML Ops? (3m 58s)
- NVIDIA tools supporting ML Ops (4m 16s)