From the course: AI Data Pipelines with Spring

Introducing Spring AI

- [Narrator] Now I'll introduce you to the Spring project that makes it easy to use artificial intelligence with Java applications, and it's called Spring AI. When I started exploring machine learning models, many of the examples were written in the Python programming language. The goal of Spring AI is to simplify the introduction of artificial intelligence and machine learning patterns for Java applications. Check out spring.io for more information. You can also learn more about Spring AI by checking out the course Introduction to Spring AI. Spring AI provides a standard interface to use machine learning models. I will use it with an embedding model. In this case, Spring AI provides text to the model, and the model returns a vector embedding. The vector embedding is basically a numerical version of the text. This numerical version can be used to find similar text. I will talk more about this later. Spring AI makes it really easy to use any machine learning model that's available at Hugging Face or other model providers, such as OpenAI's ChatGPT. Similar to PostgresML, Spring AI will download Hugging Face models at runtime. Spring AI makes it easy to develop a model chat client. It provides an easy-to-use interface to ask the model a question in the form of a prompt. The model's response is returned to the Spring application. With Spring AI, I can communicate with the large language model using its natural language processing abilities. The Spring AI chat interface supports large language models from popular vendors. I will use it to interface with a large language model on my local machine using Ollama. Ollama is a platform for running models on Linux, Windows, or Mac-based computers. Ollama makes it easy to run models such as the Llama model. Llama is a large language model developed by Meta, the parent company of Facebook. It supports many artificial intelligence use cases. It's almost like having OpenAI's ChatGPT interface running locally on your machine.
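To make the idea of "a numerical version of the text used to find similar text" concrete, here is a minimal plain-Java sketch. The short vectors and the cosine-similarity helper are my own illustration, not the course's code; a real embedding model accessed through Spring AI would return much longer vectors.

```java
// Sketch: how vector embeddings enable similarity search.
// The vectors below are hypothetical stand-ins for real embeddings.
public class EmbeddingSimilarity {

    // Cosine similarity: values near 1.0 mean the texts are similar,
    // values near 0.0 mean they are unrelated.
    static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Imagine these came from embedding two pieces of customer feedback.
        float[] first = {0.8f, 0.1f, 0.3f};
        float[] second = {0.7f, 0.2f, 0.4f};
        System.out.printf("similarity = %.3f%n", cosine(first, second));
    }
}
```

Comparing embeddings this way is what makes it possible to find text that is similar in meaning rather than just similar in spelling.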
Ollama supports a library of models that can be downloaded from ollama.com. The details to install Ollama can be found on its website. I already have it installed on my local machine. To start it, type ollama serve in a terminal. You can pull the Llama 3 model to your local environment using the ollama run command with the name of the model. Now it's basically ready for you to ask it a question. Next, I'll show you how easy it is to use Spring AI for data pipeline processing. In this case, I will determine the sentiment of the customer feedback using the Llama model.
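As a sketch of that sentiment step, the snippet below builds a prompt asking the model to classify one piece of feedback and normalizes the model's one-word reply. The prompt wording, class name, and helper methods are my own assumptions for illustration; in a Spring AI application the prompt string would be sent to the local Llama model through Spring AI's chat interface.

```java
// Hypothetical sketch of the sentiment-analysis step. Sending the prompt
// to the Llama model (e.g. via a Spring AI chat client backed by Ollama)
// is left as a comment; only the plain-Java pieces are shown here.
public class SentimentPrompt {

    // Build a prompt asking the model to classify one piece of feedback.
    static String buildPrompt(String feedback) {
        return "Classify the sentiment of the following customer feedback as "
                + "POSITIVE, NEGATIVE, or NEUTRAL. Reply with one word only.\n\n"
                + feedback;
    }

    // Normalize a model reply such as " Positive.\n" to "POSITIVE".
    static String normalize(String reply) {
        return reply.trim().replaceAll("[^A-Za-z]", "").toUpperCase();
    }

    public static void main(String[] args) {
        String prompt = buildPrompt("The delivery was fast and the staff were friendly!");
        System.out.println(prompt);
        // In the pipeline, the prompt would be sent to the model here,
        // and its reply would then be normalized before storing it:
        System.out.println(normalize(" Positive.\n"));
    }
}
```

Keeping the reply to a single normalized word makes the model's answer easy to store alongside each feedback record in the data pipeline.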