Using LLMs to Create a Chatbot Experience
I spent years architecting and building a Conversational AI (Artificial Intelligence) chatbot. Building a chatbot system before the advent of large language models (LLMs) required substantial effort, expensive infrastructure (software and hardware), and engineering resources. Now a custom-designed prompt and an LLM can replace all of that infrastructure with a simple API call.
Building an AI system required developing all the components necessary to train and deploy a model for each task and domain. For example, a customer service bot required a separate model and supporting infrastructure for each capability: detecting user intents, extracting entities, detecting sentiment, correcting spelling, and so on.
Below is a simple demonstration of how you can write a prompt and get a response using ChatGPT. (You can use the same approach with other LLMs such as Bard or LLaMA.)
Detecting User Intents (e.g., Baggage Limit) and Extracting Entities (Cabin Type, Weight, Status)
system_prompt = """You are a customer service agent bot that answers questions about baggage limits.
You start by greeting the customer, then wait for the response and collect all the information.
You should never act like a customer.
If the user does not have a MileagePlus account, ask for the cabin type and use "Weights by cabin" to provide the appropriate response.
Use only the information provided below to answer the question. If you cannot find the answer, respond with "I will connect you to an agent".

The weight limit for a checked bag depends on your cabin and your MileagePlus status. If they have different limits, we'll go by the one that has the larger limit.

Weights by cabin
United Economy: 50 pounds (23 kilograms)
Premium Economy: 70 pounds (32 kilograms)

Weights by MileagePlus status
MileagePlus status | Maximum weight per bag
Premier Silver | 70 pounds (32 kilograms)
Premier Gold | 70 pounds (32 kilograms)
Premier Platinum | 70 pounds (32 kilograms)
Premier 1K | 70 pounds (32 kilograms)
Star Alliance Gold | 70 pounds (32 kilograms) in Business, 50 pounds (23 kilograms) in Economy"""
import openai  # uses the pre-1.0 openai Python SDK interface

openai.api_key = "<copy your openai key here>"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt}]
)
bot_response1 = response.choices[0].message["content"]
print(bot_response1)
You should expect a response similar to this:
Hello! How may I assist you today with your baggage limits?
Follow-up API call to get the results
prompt2 = "Can you tell me what is the baggage limit for Economy class?"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "assistant", "content": bot_response1},
        {"role": "user", "content": prompt2}]
)
bot_response2 = response.choices[0].message["content"]
print(bot_response2)
And the bot response!
For United Economy, the weight limit for a checked bag is 50 pounds (23 kilograms).
Is there anything else I can help you with?
As you can see, the bot detected the user intent ("baggage limit check"), identified the cabin type, and responded with the limit.
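The follow-up call above rebuilds the message list by hand. For longer conversations, it helps to wrap that bookkeeping in a small helper that keeps the running history. This is a minimal sketch, not part of the original article; the `complete` callable is an assumption standing in for whatever function sends the messages to the model (e.g., a wrapper around openai.ChatCompletion.create) and returns the assistant's reply text.

```python
class Conversation:
    """Keeps the full message history so every follow-up call
    automatically includes prior turns."""

    def __init__(self, system_prompt, complete):
        # `complete` is assumed to be a function: list-of-message-dicts -> reply string
        self.messages = [{"role": "system", "content": system_prompt}]
        self.complete = complete

    def ask(self, user_input):
        # Append the user turn, get the model reply, and record it
        self.messages.append({"role": "user", "content": user_input})
        reply = self.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

In the examples above, `complete` could be `lambda msgs: openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs).choices[0].message["content"]`, and each follow-up question becomes a single `ask()` call.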
Sentiment Detection
system_prompt = "What is the sentiment of the following text? Give your answer in one word only."
user_input = "I lost my baggage in the transit, I am so pissed. What do I do now?"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Text: " + user_input}]
)
bot_response = response.choices[0].message["content"]
print(bot_response)
And the bot response!
Negative.
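A one-word sentiment label like this can feed directly into routing logic, for example escalating frustrated customers to a human agent. The function below is a hypothetical sketch, not from the original article; the action names are illustrative.

```python
def route_by_sentiment(label):
    """Decide where to route the conversation based on the model's
    one-word sentiment label (e.g., "Negative.")."""
    # Normalize model output such as "Negative." -> "negative"
    normalized = label.strip().strip(".").lower()
    if normalized == "negative":
        return "escalate_to_agent"
    return "continue_with_bot"
```

With the response above, `route_by_sentiment(bot_response)` would return `"escalate_to_agent"`, so the lost-baggage customer gets a human instead of the bot.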
Spelling Correction
# The misspellings ("baggae", "pltainum") are intentional
prompt3 = "Can you also give me baggae limit for premier pltainum?"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "assistant", "content": bot_response1},
        {"role": "user", "content": prompt2},
        {"role": "assistant", "content": bot_response2},
        {"role": "user", "content": prompt3}]
)
bot_response2 = response.choices[0].message["content"]
print(bot_response2)
And the bot response!
Certainly! The maximum weight allowed for a checked bag in Premier Platinum is 70 pounds (32 kilograms).
The same conversation can be reproduced directly in the ChatGPT interface.
Now we can accomplish various tasks by creating a tailored prompt and employing an LLM. These models can be further fine-tuned to improve the accuracy of the output. Commercially available pre-trained models like GPT-4, Cohere, etc., offer a cost-effective alternative for many applications. However, developing your own LLM is still expensive due to factors such as the amount of data needed, software development costs, and hardware costs like specialized GPUs.
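When weighing cost, note that each API response includes token counts in its `usage` field, which makes per-conversation cost easy to estimate. The helper below is an illustrative sketch; the per-1K-token prices are placeholder assumptions, not current OpenAI pricing.

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_price_per_1k=0.0015, completion_price_per_1k=0.002):
    """Estimate the dollar cost of one API call from its token counts.
    The default prices are illustrative placeholders only."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k
```

With the legacy SDK used above, the counts come from `response["usage"]["prompt_tokens"]` and `response["usage"]["completion_tokens"]`, so a whole customer conversation of a few thousand tokens typically costs a fraction of a cent.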
I hope this inspires you to adopt LLMs for your own use case in an innovative way. Good luck! Feel free to reach out to me with any questions, or if you need help building AI or analytics systems.