From the course: Handling Sensitive Data with Cloud and Local AI


Run an LLM using cloud inference

In this video, we're going to choose an LLM and run it using cloud inference. Some considerations when choosing an LLM include whether the language model is open-weight or proprietary. Open-weight models can be helpful because they give you more control: you can verify that nothing changes under the hood, and you can often experiment more freely, stretching a model's abilities or testing its limits. There's also transparency, which ties directly to whether the model is open-weight or not. And there's consistency: if you're not using an open-weight model, you want to make sure you're okay with the kinds of changes that may be happening to the model you're using in your system.

Now here is an example of using Mistral 3 with AWS Bedrock. Mistral 3 is an open-weight model: you have access to its weights. It's not open source, though, since there are unknowns about how this model was created. But it is an extremely…
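The transcript cuts off before the demo, but a minimal sketch of calling a Mistral model through Amazon Bedrock's Converse API with boto3 might look like the following. The model ID and region here are assumptions for illustration; Bedrock model IDs vary by model version and region, so check the Bedrock model catalog for the exact identifier of the Mistral model you're using.

    import boto3

    # Bedrock Runtime is the service endpoint that handles inference calls.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Assumed model ID for illustration; look up the exact ID for your
    # Mistral model and region in the Bedrock console or model catalog.
    MODEL_ID = "mistral.mistral-large-2402-v1:0"

    response = client.converse(
        modelId=MODEL_ID,
        messages=[
            {
                "role": "user",
                "content": [
                    {"text": "Summarize the tradeoffs of open-weight vs. proprietary LLMs."}
                ],
            }
        ],
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )

    # The Converse API returns the assistant's reply as a list of content blocks.
    print(response["output"]["message"]["content"][0]["text"])

Because the Converse API uses the same request shape across Bedrock-hosted models, swapping in a different model is usually just a matter of changing MODEL_ID, which is part of what makes experimenting across open-weight and proprietary models straightforward.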