From the course: Handling Sensitive Data with Cloud and Local AI
Choosing an inference platform
Where we run our AI models, our inference platform, is one of the most important choices when it comes to data privacy and security in AI systems. Commonly, we have assistants as a service, so this would be something like ChatGPT, Gemini, Copilot, or Claude. We also have cloud-hosted inference solutions like AWS Bedrock and Azure AI Foundry. Then we have on-premises inference, which means having your own hardware where you can run your AI solutions. Now, assistant-as-a-service solutions are pretty straightforward to set up, and that makes them very approachable. They should always be configured for maximum safety, and they may be susceptible to data disclosure since we're using third-party services. Cloud-hosted inference solutions may reduce exposure, especially since many businesses already trust cloud providers with their data. They could, however, present a vulnerability called unbounded consumption, where a malicious actor drains resources from the system. This can be mitigated with…
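The transcript cuts off before naming the mitigation, but one common defense against unbounded consumption is per-caller rate limiting. A minimal sketch, assuming a token-bucket scheme guarding an inference endpoint (the class name and parameters here are illustrative, not from the course):

```python
import time


class TokenBucket:
    """Token-bucket rate limiter: a common mitigation for unbounded
    consumption. Each caller holds up to `capacity` tokens, refilled at
    `refill_rate` tokens per second; a request is served only if a
    token is available, so a malicious actor cannot drain resources."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_rate,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Allow bursts of 3 requests, refilling one token every 2 seconds.
bucket = TokenBucket(capacity=3, refill_rate=0.5)
results = [bucket.allow() for _ in range(5)]
# First 3 requests pass; the rest are rejected until tokens refill.
```

In practice, you would keep one bucket per API key or user so that a single abusive caller is throttled without affecting everyone else.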
Contents
- Privacy controls in popular AI assistants (3m 9s)
- Understanding AI and data safety (1m 4s)
- Build a safety framework for responsible AI use (2m 23s)
- Choosing an inference platform (2m 7s)
- Visualizing LLM risks: Create an interactive UI (2m 21s)
- Build it: Implementing the dual LLM pattern (4m 34s)