Optional biases: Bias the provided words to appear more or less often in the generated text. Values should be between -100 and +100, with negative values making words less likely to occur. Extreme values such as -100 will completely forbid a word, while values between 1 and 5 will make the word more likely to appear. We recommend experimenting to find a good fit for your use case.
Defaults to undefined.
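A minimal sketch of how a biases map might look in a request. The payload shape (a word-to-bias record) is an assumption for illustration; only the -100 to +100 range and its effects come from the reference above.

```typescript
// Hypothetical shape: a map from word to bias value. Only the value range
// (-100..+100) is documented above; the field layout here is assumed.
interface BiasParams {
  biases?: Record<string, number>;
}

const params: BiasParams = {
  biases: {
    dragon: 5, // values between 1 and 5 make the word more likely to appear
    goblin: -100, // -100 completely forbids the word
  },
};
```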
Optional concat_results: The original prompt will be concatenated with the generated text in the returned response.
Defaults to false.
Optional frequency_penalty: How strongly tokens should be prevented from appearing again if they have appeared repetitively. Unlike presence_penalty, this penalty scales with how often the token already occurs. Use values between 0 and 1; values closer to 1 discourage repetition, which is especially useful in combination with biases.
Defaults to 0.
Optional k: Number of most likely tokens considered when sampling in top-k mode.
⚠️ Only in TopK mode.
Defaults to 5.
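To make the role of k concrete, here is a sketch of top-k filtering under its standard semantics (the API's exact implementation is not shown in this reference): keep only the k highest-probability tokens, renormalize, and sample among those.

```typescript
// Keep the k most likely tokens, zero out the rest, and renormalize so the
// surviving probabilities sum to 1. Standard top-k semantics, assumed here.
function topKFilter(probs: number[], k: number): number[] {
  const keep = probs
    .map((p, i) => [p, i] as [number, number])
    .sort((a, b) => b[0] - a[0]) // most likely first
    .slice(0, k)
    .map(([, i]) => i);
  const mass = keep.reduce((sum, i) => sum + probs[i], 0);
  return probs.map((p, i) => (keep.includes(i) ? p / mass : 0));
}
```

With k = 2 over probabilities [0.5, 0.3, 0.1, 0.1], only the first two tokens survive and are rescaled to sum to 1.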
Optional mode: How the model will decide which token to select at each step.
Defaults to ApiMode.Nucleus.
Optional n_completions: Number of different completion proposals to return for each prompt.
Defaults to 1.
Optional n_tokens: Number of tokens to generate. This can be overridden by a list of stop_words, which will cause generation to halt when a word in that list is encountered.
Defaults to 20.
Optional p: Total probability mass of the most likely tokens considered when sampling in nucleus mode.
⚠️ Only in Nucleus mode.
Defaults to 0.9.
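For comparison with top-k, here is a sketch of nucleus (top-p) filtering under its standard semantics (again, the API's internal implementation is assumed): keep the smallest set of most likely tokens whose cumulative probability reaches p, then renormalize.

```typescript
// Keep the smallest prefix of the probability-sorted tokens whose cumulative
// mass reaches p, zero out the rest, and renormalize. Standard nucleus
// sampling semantics, assumed here.
function nucleusFilter(probs: number[], p: number): number[] {
  const order = probs
    .map((q, i) => [q, i] as [number, number])
    .sort((a, b) => b[0] - a[0]); // most likely first
  const keep = new Set<number>();
  let cumulative = 0;
  for (const [q, i] of order) {
    keep.add(i);
    cumulative += q;
    if (cumulative >= p) break; // enough probability mass covered
  }
  const mass = [...keep].reduce((sum, i) => sum + probs[i], 0);
  return probs.map((q, i) => (keep.has(i) ? q / mass : 0));
}
```

With p = 0.9 over [0.5, 0.3, 0.1, 0.1], the first three tokens cover the required mass and the last is discarded, so the nucleus shrinks or grows with how peaked the distribution is.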
Optional presence_penalty: How strongly tokens should be prevented from appearing again. This is a one-off penalty: tokens are penalized after their first appearance, but not further if they appear repeatedly; use frequency_penalty if that is what you want instead. Use values between 0 and 1. Values closer to 1 encourage variation in the topics generated.
Defaults to 0.
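The difference between the two penalties can be sketched as follows. The exact formula the API applies is not given in this reference, so the additive form below is an assumption; it captures the documented behavior: presence_penalty is a flat one-off deduction once a token has appeared, while frequency_penalty grows with the repetition count.

```typescript
// Hypothetical penalty application to one token's logit. `count` is how many
// times the token has already appeared in the generated text.
function penalize(
  logit: number,
  count: number,
  presencePenalty: number, // 0..1, applied once after the first appearance
  frequencyPenalty: number // 0..1, scales with how often the token occurred
): number {
  if (count === 0) return logit; // unseen tokens are not penalized
  return logit - presencePenalty - frequencyPenalty * count;
}
```

A token seen three times is penalized three times as hard by frequency_penalty, but only once by presence_penalty.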
Optional return_logprobs: Returns the log-probabilities of the generated tokens.
Defaults to false.
Optional seed: Make sampling deterministic by setting a seed used for random number generation. Useful for strictly reproducing Create calls.
Defaults to undefined.
Optional skill: Specify a 🤹 Skill to use to perform a specific task or to tailor the generated text.
Defaults to undefined.
Optional stop_words: Encountering any of these words will halt generation immediately.
Defaults to undefined.
Optional temperature: How risky the model will be in its choice of tokens. A temperature of 0 corresponds to greedy sampling; we recommend a value around 1 for most creative applications, and closer to 0 when a ground truth exists.
⚠️ Only in TopK/Nucleus mode.
Defaults to 1.
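Temperature is conventionally applied by dividing the logits by T before the softmax (a standard technique; the API's exact implementation is assumed). As T approaches 0 the distribution concentrates on the single most likely token, approaching greedy sampling; T = 1 leaves the model's distribution unchanged.

```typescript
// Scale logits by 1/T, then softmax. Subtracting the max logit is a standard
// trick for numerical stability and does not change the result.
function softmaxWithTemperature(logits: number[], t: number): number[] {
  const scaled = logits.map((l) => l / t);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max));
  const z = exps.reduce((sum, e) => sum + e, 0);
  return exps.map((e) => e / z);
}
```

At T = 0.1 nearly all probability mass lands on the highest-logit token; at larger T the distribution flattens, making the model more "risky".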
Optional best_of: Among n_completions, only return the best_of ones. Completions are selected according to how likely they are, summing the log-likelihood over all tokens generated.
⚠️ Must be smaller than n_completions.
Defaults to undefined.
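The selection described above can be sketched as follows; the Completion shape is hypothetical, but the scoring rule (summing per-token log-likelihoods and keeping the top best_of) is as documented.

```typescript
// Hypothetical completion record: the generated text plus the log-probability
// of each generated token.
interface Completion {
  text: string;
  tokenLogprobs: number[];
}

// Score each completion by its total log-likelihood and keep the best_of
// highest-scoring ones.
function selectBestOf(completions: Completion[], bestOf: number): Completion[] {
  const score = (c: Completion) =>
    c.tokenLogprobs.reduce((sum, lp) => sum + lp, 0);
  return [...completions].sort((x, y) => score(y) - score(x)).slice(0, bestOf);
}
```

Note that summing log-likelihoods favors shorter completions, since every token contributes a negative term.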