
OpenAI API, Making the Response More Consistent, Part 1: Implementing the Chat Completion API

Updated: Aug 17, 2023



It would be fair to say that modern large language models, like OpenAI’s GPT series, possess a certain level of autonomy. Their outputs can often be unpredictable, making them appear to have a mind of their own. OpenAI recently introduced the function calling capability, which has indeed enhanced the consistency of these models. Yet, for self-taught programmers such as myself, comprehending this enhancement isn’t always straightforward. Fortunately, there’s an alternative approach to achieving more consistent responses. We’ll begin by exploring the implementation of the Chat Completion API.


Chat Completion APIs, an Introduction:

You can skip to Part 2 if you’re already acquainted with API usage.

OpenAI has introduced chat completion APIs, which facilitate context-building for a conversation: you send your prompt along with the message history, and the model generates a response.



Here’s a rundown of the process:

  1. You send a list of dictionaries, each carrying specific information for the model.

  2. Each dictionary in the list contains two keys. The first, “role”, signifies what the dictionary represents. The “system” role indicates that the dictionary is the prompt: you submit your prompt as the value of the “content” key. This is also where you can assign a persona to the model, for example: “you are Ben, a real-estate agent.” This instructs the model to present itself as Ben, a real-estate agent.

  3. The “user” role represents the user input. Any value passed as “content” under this role will be perceived as the user’s response to the model.

  4. The “assistant” role symbolizes the model’s response. You can provide it initially, thereby setting an expectation for the model’s output. Any input here will serve as an example of what you expect the model to say and will influence its future responses.

  5. The “messages” list retains the entire conversation context. Every time you make a call, this list needs to be passed to the model to maintain the continuity of the conversation. Every response from the user and the model should be appended to the messages list (see the sketch right after this list).
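
To make the roles concrete, here is what such a list might look like in Python (the persona and the user message are purely illustrative):

messages = [
    # "system": the prompt, including any persona you assign to the model
    {"role": "system", "content": "You are Ben, a real-estate agent."},
    # "user": what the human says to the model
    {"role": "user", "content": "Hi, who am I speaking with?"},
]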

Let’s take a look at the code implementation, which I’ve done using Jupyter Notebook.
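
The call amounts to something along these lines; this is a minimal sketch assuming the pre-1.0 openai Python package (the openai.ChatCompletion.create interface), the gpt-3.5-turbo model, and a placeholder API key:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

messages = [
    {"role": "system", "content": "You are Ben, a real-estate agent."},
    {"role": "user", "content": "Hi, who am I speaking with?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)

# The model's reply sits in the first choice's message
reply = response["choices"][0]["message"]["content"]
print(reply)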



Pretty neat and simple, right? As you can see, we told the model what its name is and who it represents; it remembered that and used it to introduce itself.

To continue the conversation and have the model remember the context, you have to keep appending the model’s responses and the user’s messages to the messages list and passing it along with every call.
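
Continuing the sketch above, that bookkeeping might look like this (the follow-up user message is hypothetical):

# Append the assistant's reply so the model sees its own earlier turn
messages.append({"role": "assistant", "content": reply})

# Append the user's next message
messages.append({"role": "user", "content": "Do you have any listings near downtown?"})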



Now we can pass the list of messages once again and get a response.
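
Under the same assumptions as before:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])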



You could create a loop to repeat the whole process and have a chat with the model.
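
A bare-bones version of such a loop, under the same assumptions as the sketches above, might look like this:

# Type "quit" to end the chat
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print("Ben:", reply)

In the next part, we will see whether we can get a consistent response that we can parse for use in our app, beyond simple conversation.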
