Intro to Prompt Engineering
Prompt engineering refers to the method of interacting with pre-trained language models, like OpenAI’s GPT, by providing precise inputs (prompts). These inputs guide the model in generating useful responses. In this article, we’ll create a basic chatbot by sending text prompts to the OpenAI API and letting the model return natural language responses in a conversational style.
A video tutorial accompanying this article is also available.
Prerequisites
For initial setup, you need the following:
OpenAI API account: Sign up at platform.openai.com to create an API key.
Python and Jupyter Notebook: If you haven’t already installed Python and Jupyter Notebook, check out our earlier video for a step-by-step installation guide.
Setting Up the Environment
1. Check Python Installation: Open your terminal and type:
python3 --version
Make sure Python is installed. If it isn't, install it with brew install python on macOS or download it from python.org.
2. Install Jupyter Notebook: Once Python is installed, install Jupyter with pip install notebook, then launch it by typing:
jupyter notebook
You’ll be able to interactively code and execute commands in real time, which is why Jupyter is perfect for prototyping prompt engineering examples.
Implementing OpenAI API in Python
Now that your environment is set up, let’s install the OpenAI API package. In your terminal, run:
pip install openai
Once installed, import it in your Jupyter notebook.
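Before writing any API code, you can sanity-check the install from a notebook cell. This sketch uses only the standard library's importlib, so it runs even if the install failed:

```python
import importlib.util

# find_spec returns None when a package cannot be found on the path,
# so this tells us whether `pip install openai` actually worked.
if importlib.util.find_spec("openai") is None:
    print("openai not found - run `pip install openai` first")
else:
    print("openai is installed")
```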
Creating the API Connection
Now, let’s write a function to interact with the OpenAI API. Here’s how to proceed:
1. Obtain Your API Key:
Go to platform.openai.com.
In the Profile section, find API Keys.
Generate a new key, copy it, and store it securely, as you can only view it once.
2. Define the Function to Interact with OpenAI: Here’s a sample Python code to define a function for sending prompts and receiving responses from the chatbot:
import openai

def get_response_from_chatbot(prompt):
    # Authenticate with your secret key (see the note on key safety below).
    openai.api_key = "your_api_key_here"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # choose the latest available model
        messages=[{"role": "user", "content": prompt}]
    )
    # The reply text lives in the first choice's message content.
    return response['choices'][0]['message']['content']
Testing the Chatbot
You can test the chatbot by passing a prompt like:
prompt = "You are an amazing bot!"
response = get_response_from_chatbot(prompt)
print(response)
Running this will generate a response directly in the notebook, making it an excellent way to prototype and refine prompts interactively.
Example: Chatbot Prompt
Let’s try another example. We’ll ask the chatbot a simple question: “How do I make a cup of coffee?” and see how it responds:
prompt = "How do I make a cup of coffee?"
response = get_response_from_chatbot(prompt)
print(response)
The model will return a step-by-step guide to making coffee, showcasing its ability to understand and provide detailed answers based on simple prompts.
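Prompt engineering often goes beyond a single user message: the chat format also accepts a system message that steers the model's tone and behavior. Here is a minimal sketch of that idea (build_messages is a hypothetical helper of our own, not part of the openai package):

```python
def build_messages(system_instruction, user_prompt):
    """Assemble the messages list the chat endpoint expects."""
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]

# The same coffee question, now framed by a system instruction.
messages = build_messages(
    "You are a concise barista. Answer in at most three steps.",
    "How do I make a cup of coffee?",
)
```

Passing this list as the messages argument to openai.ChatCompletion.create will typically produce a shorter, role-appropriate answer than the bare question alone.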
Avoid Exposing Your API Key
One important note: always keep your API key secure. Do not share it publicly, as anyone with access can use your credits, and you may end up paying for unintended usage.
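One common way to keep the key out of your source code is to read it from an environment variable. A minimal sketch, assuming you export OPENAI_API_KEY in your shell before starting Jupyter (the helper name load_api_key is our own):

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Return the API key from the environment, or None if it is unset."""
    return os.environ.get(var_name)

# Inside get_response_from_chatbot you would then write:
#   openai.api_key = load_api_key()
# instead of pasting the key into the notebook.
```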
Conclusion
Building an AI-powered chatbot using prompt engineering is simpler than you might think! With just a few lines of code, you can create a responsive and conversational agent to enhance your workflows or systems.