Returns a predicted completion for the given prompt.
complete(prompt, options?)
prompt
The prompt string.
options optional
An optional object containing the following properties. Not all models support every property; a property unsupported by the model is ignored.
The name of the model to use, for example "openchat" or "gpt-3.5-turbo". Defaults to the current codebook model.
The number of tokens to generate. If 0 or omitted, new tokens are generated until the stop text, the end-of-text token, or the end of the model’s context size is reached.
The temperature parameter (defaults to 0.5).
The top-k parameter (defaults to 40).
The top-p parameter (defaults to 0.95).
The repetition penalty parameter (defaults to 1.3).
The seed for the random number generator, a 32-bit integer (defaults to a random value).
Note that the random number generator and the sampling algorithm are currently not stable and will change in the future.
The stop text: a string or an array of strings. Once the model generates any of them, it stops generating further tokens.
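The sampling parameters above follow the conventional meaning of temperature, top-k, and top-p. The sketch below illustrates that convention only; it is not the engine's actual sampling code, and the function and option names are invented for the illustration.

```javascript
// Illustrative sketch (not the engine's implementation) of how temperature,
// top-k, and top-p narrow the candidate-token distribution before sampling.
function filterLogits(logits, { temperature = 0.5, topK = 40, topP = 0.95 } = {}) {
  // Temperature scales logits before softmax: lower values sharpen the
  // distribution, higher values flatten it.
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  let probs = exps.map((e, i) => ({ token: i, p: e / sum }));

  // Top-k keeps only the k most probable tokens.
  probs.sort((a, b) => b.p - a.p);
  probs = probs.slice(0, topK);

  // Top-p (nucleus) keeps the smallest prefix whose cumulative
  // probability reaches p.
  const kept = [];
  let cum = 0;
  for (const entry of probs) {
    kept.push(entry);
    cum += entry.p;
    if (cum >= topP) break;
  }

  // Renormalize the surviving probabilities.
  const total = kept.reduce((a, e) => a + e.p, 0);
  return kept.map((e) => ({ token: e.token, p: e.p / total }));
}

// With a low temperature, the distribution concentrates on the top token:
const result = filterLogits([2.0, 1.0, 0.1], { temperature: 0.5, topK: 2, topP: 0.95 });
console.log(result.map((e) => e.token)); // → [ 0, 1 ]
```

The repetition penalty, not shown here, additionally down-weights tokens that already appear in the context before this filtering step.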
The completion string predicted for the given prompt.
ai.complete("It's always a good idea", { model: "openchat" })
" to be prepared for the unexpected."
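The stop-text option can be pictured as truncating the output at the first occurrence of any stop string. The helper below is a minimal sketch of that behavior, assuming the stop string itself is excluded from the result (the engine's exact convention may differ); the function name is invented for the illustration.

```javascript
// Illustrative sketch: cut generated text at the earliest occurrence of any
// stop string. Assumes the stop string is excluded from the returned text.
function applyStopText(text, stop) {
  const stops = Array.isArray(stop) ? stop : [stop];
  let cut = text.length;
  for (const s of stops) {
    const i = text.indexOf(s);
    if (i !== -1 && i < cut) cut = i;
  }
  return text.slice(0, cut);
}

console.log(applyStopText(" to be prepared.\nSecond line", "\n"));
// → " to be prepared."
```

In a real call this corresponds to passing the stop text in the options, so generation ends as soon as one of the stop strings is produced.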