How to call an OpenAI chat model using LangChain


To use the ChatOpenAI model from LangChain to get a response to your messages, you can follow these steps:

  • Install the required Python packages by running pip install langchain openai.
  • Obtain an API key from OpenAI by creating an account and visiting their API key page.
  • Set the API key as an environment variable by running export OPENAI_API_KEY="your-api-key", set it from Python (see the snippet after this list), or pass the key as a parameter when initializing the model (see below).
  • Import the ChatOpenAI class from langchain.chat_models.
  • Initialize an instance of ChatOpenAI with the API key: chat = ChatOpenAI(openai_api_key="your-api-key") (the openai_api_key parameter is only needed if the key is not set in the environment).
  • Create a list of messages to send to the model. Messages can be of types AIMessage, HumanMessage, or SystemMessage.
  • Invoke the model by calling chat.invoke(messages). This will return an AIMessage containing the model’s response.
  • Alternatively, you can use chat.stream(messages) to stream the model's response in chunks (see the streaming sketch after the main example below).
  • You can also use chat.batch([messages]) to process multiple sets of messages in a single call (a batch sketch follows the main example as well).
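
If you prefer to set the key from Python rather than the shell, you can write it into the environment before creating the model. A minimal sketch (replace the placeholder with your actual key):

import os

# ChatOpenAI reads OPENAI_API_KEY from the environment when no key is passed explicitly
os.environ["OPENAI_API_KEY"] = "your-api-key"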

Here is an example of using ChatOpenAI to get a response for messages:

from langchain.chat_models import ChatOpenAI
from langchain.schema.messages import HumanMessage, SystemMessage

# The openai_api_key parameter can be omitted if OPENAI_API_KEY is set in the environment
chat = ChatOpenAI(openai_api_key="your-api-key")

messages = [
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(content="What is the purpose of model regularization?"),
]

# invoke() returns an AIMessage; its content attribute holds the reply text
response = chat.invoke(messages)
print(response.content)

This will output the model’s response to the given messages.
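
If you would rather receive the reply incrementally, chat.stream yields the response as a sequence of chunks. A minimal sketch, reusing the chat and messages objects from the example above:

for chunk in chat.stream(messages):
    # each chunk carries a piece of the reply in its content attribute
    print(chunk.content, end="", flush=True)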
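
Likewise, chat.batch accepts a list of message lists and returns one response per conversation. A short sketch; the two questions here are illustrative placeholders, not part of the original example:

batch_responses = chat.batch([
    [HumanMessage(content="What is overfitting?")],
    [HumanMessage(content="What is underfitting?")],
])

for response in batch_responses:
    print(response.content)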


Author: robot learner