Returns a predicted completion for the given prompt.
createCompletion(params)
params
An object with request parameters:
model
An identifier of the model to use.
suffix optional
A suffix that comes after the completed text (used with insertion).
max_tokens optional
A maximum number of tokens to generate (defaults to 16).
temperature optional
A temperature from 0 to 2 (defaults to 1). Higher values give more random results.
top_p optional
An alternative to sampling with temperature, called nucleus sampling: the model considers only the tokens comprising the top_p probability mass (defaults to 1).
n optional
The number of completions to generate for each prompt (defaults to 1).
logprobs optional
The number of most likely tokens for which to include the log probabilities (defaults to null, maximum is 5).
echo optional
Echo back the prompt in addition to the completion (defaults to false).
stop optional
Up to 4 sequences (a string or an array of strings) where the API will stop generating further tokens.
presence_penalty optional
A number between -2.0 and 2.0 (defaults to 0). Positive values penalize tokens that have already appeared in the text, encouraging the model to move on to new topics.
frequency_penalty optional
A number between -2.0 and 2.0 (defaults to 0). Positive values penalize tokens in proportion to how often they have appeared so far, reducing verbatim repetition.
best_of optional
The number of completions to generate server-side, of which only the best (highest log probability per token) is returned (defaults to 1).
logit_bias optional
A map from token IDs to bias values between -100 and 100 that modifies the likelihood of the specified tokens appearing in the completion.
user optional
A unique identifier representing the end user, which can help OpenAI monitor and detect abuse.
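As a sketch of how the optional parameters above fit together, the following params object is one plausible configuration. The prompt, the penalty value, and the user string are illustrative; token ID 198 is the newline token in the GPT-3 tokenizer, included here only to show the shape of logit_bias.

```javascript
// Illustrative request parameters for createCompletion.
const params = {
  model: "text-davinci-003",
  prompt: "List three uses of AI:",
  max_tokens: 64,
  temperature: 0.7,
  n: 2,                        // generate two completions for the prompt
  stop: ["\n\n", "4."],        // stop at a blank line or a fourth list item
  presence_penalty: 0.5,       // nudge the model toward new topics
  logit_bias: { "198": -10 },  // make the newline token less likely
  user: "user-1234",           // hypothetical end-user identifier
};

// The call itself is unchanged:
// const response = await openai.createCompletion(params);
```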
An object containing the response from the OpenAI API.
await openai.createCompletion({
model: "text-davinci-003",
prompt: "In the future, AI will",
max_tokens: 7,
temperature: 0,
})
{
  id: "cmpl-6msIA4IeyQ255mHcAsKXsa9dqllUJ",
  object: "text_completion",
  created: 1677106462,
  model: "text-davinci-003",
  choices: [{
    text: "be used to automate many tasks that",
    index: 0,
    logprobs: null,
    finish_reason: "length"
  }],
  usage: {
    prompt_tokens: 6,
    completion_tokens: 7,
    total_tokens: 13
  }
}
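Given a response shaped like the object above, the generated text lives in choices[0].text. The helper below is a minimal sketch (the function name is not part of the API); note that a finish_reason of "length" means the completion hit max_tokens and may end mid-sentence.

```javascript
// Extract the first generated completion from a createCompletion response.
// A finish_reason of "length" indicates the output was truncated at max_tokens.
function firstCompletionText(response) {
  return response.choices[0].text.trim();
}

// Using the sample response shown above:
const sample = {
  choices: [{
    text: "be used to automate many tasks that",
    index: 0,
    logprobs: null,
    finish_reason: "length"
  }],
  usage: { prompt_tokens: 6, completion_tokens: 7, total_tokens: 13 }
};

firstCompletionText(sample); // → "be used to automate many tasks that"
```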