GPT
Given a prompt/question/request, returns a response from OpenAI GPT language model.
Syntax
=GPT(prompt, gpt_model, temperature, max_tokens, cache)
With the following function parameters:
- prompt: String (or cell reference) representing a user prompt.
- gpt_model**: [optional] String representing the OpenAI GPT model to use. One of "gpt-4", "gpt-3.5-turbo" (default), "text-davinci-003", "text-curie-001", "text-babbage-001", "text-ada-001".
- temperature: [optional] Number between 0 and 1 controlling how much variance to introduce in the response. 0 produces very little variance and 1 produces the most. Default is 1.
- max_tokens: [optional] Number representing the maximum number of tokens to return in the response. Default is 1000.
- cache: [optional] Whether or not to cache the response. Setting this to false causes the function to re-execute on every cell refresh. Default is true.
** Caution: The GPT-4 model uses 25 times more SheetGPT usage credits than the default GPT-3.5 model. **
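For reference, a call that sets every argument explicitly might look like the following (the prompt, model, and values here are purely illustrative):
=GPT("Summarize the text in cell A1 in one sentence: " & A1, "gpt-3.5-turbo", 0.2, 200, true)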
Description
GPT is a simple way to make a single request to one of OpenAI's GPT models and get a response. Prompts can be quite long, which helps guide the model's tone and behavior and produce consistently good results. For instance, the following prompt is perfectly valid (it uses the "&" operator in Google Sheets to append a cell's value to the introductory prompt):
=GPT("Act as a JSON parser. I will give you a JSON snippet, and you will return the value of the keys I request as a comma separated list. Please return the values of the productName key in this JSON: " & A1)
Sample Usage
=GPT("Greet me in the language of your choice")
=GPT("Tell me good morning in " & A1)
Advanced Options
The GPT function accepts several optional arguments which you can use to further control the response to your prompt. Here are some examples:
Specify a different GPT model
The second argument to =GPT lets you specify a different OpenAI model to use when responding to your prompt. The default is currently "gpt-3.5-turbo", which offers the best blend of cost-efficiency and performance. If you have more specific needs, you may use one of the other OpenAI models instead:
=GPT("Create a list of three types of animals", "gpt-4")
** Caution: The GPT-4 model uses 25 times more SheetGPT usage credits than the default GPT-3.5 model. **
Adjust the variety of the response
The third argument to =GPT lets you specify a different "temperature" to use for the response. The temperature is a number between 0 and 1 (so a decimal like 0.5) that defines how much variability you want in your response.
=GPT("Write three titles for an article reviewing the current state of natural language processing AIs", , 0.1)
A higher temperature like 1 (the default) means you are likely to get very different responses to the same prompt; there is more "drift" in the responses. Use a lower number like 0 or 0.1 if you want more determinism in your responses.
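Conversely, passing a higher temperature encourages more varied output. The value below is just an illustration:
=GPT("Write three titles for an article reviewing the current state of natural language processing AIs", , 0.9)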
Limit the length of the response
The fourth argument to =GPT lets you specify a response length limit in tokens (a token is roughly half the length of an average word). So if you want to limit your response to about 15 words, you could pass 25 as the fourth argument to GPT:
=GPT("Write a title for an article reviewing the current state of natural language processing AIs",,, 25)
Performance & caching
The completion that is returned is cached by SheetGPT to ensure that cell refreshes and other Sheet actions do not burn through your tokens unnecessarily. Any GPT request with the same arguments in a Sheet will return the cached value, indefinitely. This is well beyond the typical 6-hour cache limit of most Sheet plugins and is our attempt to make SheetGPT the most cost-effective way to use GPT functionality.
If you ever need to bypass the cache and generate a new response, set the cache argument to false:
=GPT("What is the most appealing European city?",,,,false)
See Also
You may also find the following resources useful: