Setting Up the OpenAI Client
Importing the OpenAI Library
First, ensure that you have the OpenAI library installed in your Python environment. If not, you can install it using pip:
```
pip install openai
```
Once installed, you can begin by importing the OpenAI client class:
```
from openai import OpenAI
```
Initializing the Client
The next step is to initialize the OpenAI client. This is straightforward and is your gateway to the ChatGPT models:
```
client = OpenAI()
```
Crafting a ChatGPT Request
Structuring the Request
With the client initialized, you can now structure your request to the ChatGPT model. This involves specifying the model, the format of the response, and the messages you wish to send:
```
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "text"},  # default text output; use {"type": "json_object"} for JSON mode
    messages=[
        # Placeholder prompts: replace these with your own system instructions and user query
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
```
In this example, the request uses the “gpt-3.5-turbo-1106” model. The messages are sent in a structured format, identifying the roles (system and user) and their corresponding content.
Handling the Response
Once the request is sent, you can handle the response from ChatGPT:
```
print(response.choices[0].message.content)
```
This command prints out the content of the response, allowing you to see the result of your query.
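Beyond the reply text, the response object exposes a few other fields that are useful for debugging or tracking token usage. The short sketch below assumes the `response` object returned by the call above:

```
# Inspect additional fields on the response object
print(response.model)                     # the exact model that served the request
print(response.choices[0].finish_reason)  # e.g. "stop" when the model completed normally
print(response.usage.prompt_tokens)       # tokens consumed by the input messages
print(response.usage.completion_tokens)   # tokens generated in the reply
```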
Providing the API Key
When it comes to authentication, you have two primary ways to provide your OpenAI API key.
- Environment Variable (Recommended)
Setting the API key as an environment variable (OPENAI_API_KEY) is the recommended approach. This enhances security and makes your code portable without exposing the key:
```
export OPENAI_API_KEY='your-api-key'
```
When initialized, the OpenAI client automatically searches for this environment variable.
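If you want to fail fast when the variable is missing, a small sanity check before constructing the client can help. The snippet below is a minimal sketch that assumes the key was exported as shown above:

```
import os

from openai import OpenAI

# Fail early with a clear error if the key is not visible to this process
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; export it before running this script")

client = OpenAI()  # the client picks up OPENAI_API_KEY automatically
```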
- Directly in Code
Alternatively, you can pass the API key directly when initializing the OpenAI client:
```
client = OpenAI(api_key='your-api-key')
```
Be cautious with this method: a hard-coded key is easy to expose unintentionally, for example by committing it to version control.
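If you do need to pass the key explicitly, for example when switching between several keys, one common compromise is to keep the value out of your source code and read it at runtime. The sketch below assumes the key is still stored in an environment variable rather than hard-coded:

```
import os

from openai import OpenAI

# Read the key at runtime instead of embedding it in the source file
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the variable is missing
client = OpenAI(api_key=api_key)
```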