LanguageModel
Properties
topK: number
topP: number
temperature: number
repeatPenalty: number
flashAttention: boolean
contextSize: number
grammar (extension): any
maxTokens: number
tokensSoFar: number
tokensLeft: number
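A minimal sketch of the property shapes listed above. The session creation API is not covered in this section, so `session` below is a hand-built object standing in for whatever a real creation call (e.g. something like `LanguageModel.create()`, assumed here) would return; the getter for tokensLeft illustrates the documented relationship between maxTokens and tokensSoFar.

```javascript
// Hypothetical session object mirroring the documented properties.
// In a real environment this would come from the model's creation API,
// which is not shown in this section.
const session = {
  topK: 3,
  topP: 0.9,
  temperature: 0.7,
  repeatPenalty: 1.1,
  flashAttention: false,
  contextSize: 2048,
  maxTokens: 2048,
  tokensSoFar: 0,
  // Assumption: tokensLeft is the budget remaining out of maxTokens.
  get tokensLeft() {
    return this.maxTokens - this.tokensSoFar;
  },
};

console.log(session.tokensLeft); // 2048
```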
Methods
prompt: async (input, options) => string
Accepts the same input and options as promptStreaming, but resolves with the complete response as a single string.
promptStreaming: (input, options) => ReadableStream
This prompts the language model with a continuation of the conversation. Internally, the input is appended to the set of messages in the language model's context window. Older messages outside of the language model's context window are automatically discarded.
Input
Either a string containing a single prompt, or an array of prompts.
Options (optional)
signal? AbortSignal
Returns a readable stream that emits each time new tokens are available from the language model.
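A minimal sketch of consuming the stream returned by promptStreaming. `fakeStream` is a stand-in for an actual `session.promptStreaming(input)` call so the example is self-contained; whether each chunk carries only the newly generated tokens or the full text so far is implementation-dependent, and this sketch assumes delta chunks.

```javascript
// Stand-in for session.promptStreaming(input): a ReadableStream that
// emits string chunks as tokens become available.
function fakeStream(chunks) {
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(c);
      controller.close();
    },
  });
}

// Drain the stream, accumulating chunks into the full response text.
async function readAll(stream) {
  let text = '';
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;
  }
  return text;
}

readAll(fakeStream(['Hello', ', ', 'world'])).then(console.log); // "Hello, world"
```

An AbortSignal passed via the `signal` option would be the way to cancel generation mid-stream.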
countPromptTokens: async (input, options) => number
Counts the number of tokens the given input would consume in the model's context window.
Input
Either a string containing a single prompt, or an array of prompts.
Options (optional)
signal? AbortSignal