Building AI Conversations with Streamlit and OpenAI


This guide demonstrates how to build a conversational AI app using two powerful tools: Streamlit and OpenAI. We'll use Python as our programming language of choice.

GitHub repo: https://github.com/datamokotow/streamlit-chat/tree/main

Setting the Stage

We're using Streamlit, an open-source Python library that simplifies the process of building and sharing data apps. It's designed to help machine learning engineers and data scientists deploy their projects swiftly.

Next up is OpenAI, the AI research company behind models like GPT-3.5-turbo, which we'll use in our app.

Breaking Down the Code

Let's dive into the code that helps us build this AI conversational application:

import openai
import streamlit as st

st.title("Streamlit Chat Interface")

api_key = st.text_input("Enter your OpenAI API Key", type="password")
openai.api_key = api_key

First, we import the necessary libraries: openai and streamlit. We then set the title for our application. Following that, we prompt the user to input their OpenAI API Key.
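For local development you might prefer to read the key from an environment variable rather than typing it on every run. Here is a minimal sketch of that idea; the helper name `resolve_api_key` is hypothetical, though `OPENAI_API_KEY` is the variable name the openai SDK conventionally looks for:

```python
import os

def resolve_api_key(user_input: str) -> str:
    """Prefer a key typed into the app; otherwise fall back to the
    OPENAI_API_KEY environment variable (hypothetical helper)."""
    return user_input or os.environ.get("OPENAI_API_KEY", "")
```

In the app, you would pass the value of `st.text_input` to this helper before assigning `openai.api_key`.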

if "openai_model" not in st.session_state:
    st.session_state["openai_model"] = "gpt-3.5-turbo"

if "messages" not in st.session_state:
    st.session_state.messages = []

Here, we're checking whether openai_model and messages already exist in our session state. If not, we initialize them: gpt-3.5-turbo as our OpenAI model, and messages as an empty list.
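The "initialize only if missing" guard matters because Streamlit reruns the entire script on every interaction; st.session_state is what survives between reruns. A plain dict can stand in for it to illustrate the idea (a sketch only, not real Streamlit code):

```python
# A plain dict standing in for st.session_state, to show why the
# "if key not in state" guard matters across script reruns.
session_state = {}

def run_script(state):
    # Without this guard, every rerun would reset the chat history.
    if "messages" not in state:
        state["messages"] = []
    state["messages"].append("a rerun happened")

# Simulate two reruns of the script.
run_script(session_state)
run_script(session_state)
# The list persists across both simulated reruns instead of being reset.
```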

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

In this part of the code, we display the previous messages, either from the user or the AI assistant.

if prompt := st.chat_input("What is up?"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

Here, st.chat_input waits for user input; the walrus operator (:=) assigns it to prompt, and the block runs only once the user submits something. We append the message to the session state and display it in the chat interface.
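Each turn is stored as a dict with "role" and "content" keys, which is the shape the Chat Completions API expects. A small illustration of how the history grows (the `add_turn` helper is hypothetical):

```python
messages = []

def add_turn(role, content):
    # Same shape the OpenAI chat API expects: {"role": ..., "content": ...}
    messages.append({"role": role, "content": content})

add_turn("user", "What is up?")
add_turn("assistant", "Not much! How can I help?")
```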

    # Still inside the `if prompt:` block from above
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        # Stream the reply chunk by chunk (pre-1.0 openai SDK interface)
        for response in openai.ChatCompletion.create(
            model=st.session_state["openai_model"],
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
            stream=True,
        ):
            full_response += response.choices[0].delta.get("content", "")
            message_placeholder.markdown(full_response + "▌")
        message_placeholder.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})

The final part is where our assistant replies. This block belongs inside the `if prompt:` check, so the model is called only after the user submits a new message. It sends the full conversation history to the OpenAI model, streams the assistant's response into a placeholder as it arrives, and then appends the completed response to the session's messages. Note that openai.ChatCompletion.create is the pre-1.0 openai SDK interface; SDK versions 1.0 and later use a client-based interface (client.chat.completions.create) instead.
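The streaming loop simply concatenates each partial "delta" as it arrives. The accumulation pattern can be shown with fake chunks in plain Python (a sketch; real chunks come from the openai SDK):

```python
# Simulated stream: each dict carries a partial "delta" of the reply,
# mirroring what openai.ChatCompletion.create(stream=True) yields.
fake_chunks = [{"content": "Hel"}, {"content": "lo, "}, {}, {"content": "world!"}]

full_response = ""
for delta in fake_chunks:
    # Some chunks carry no text (e.g. the final stop chunk), hence .get()
    full_response += delta.get("content", "")
    # In the app, message_placeholder.markdown(full_response + "▌") runs here
    # so the reply appears to type itself out.
```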

This tutorial barely scratches the surface of what's possible. Save the script (e.g., as app.py) and launch it with streamlit run app.py to try it yourself. If you want to dive deeper into Streamlit, check out their official documentation. With it, you'll be equipped to create more complex and fascinating AI conversational applications!

Building an AI conversation app can seem daunting at first, but with tools like Streamlit and OpenAI, it becomes a much more manageable task. Don't hesitate to get your hands dirty and start experimenting. Who knows? You could build the next big thing in conversational AI!