LanguageModel
Properties
| Property | Type |
| --- | --- |
| topK | number |
| topP | number |
| temperature | number |
| repeatPenalty | number |
| flashAttention | boolean |
| contextSize | number |
| grammar | any |
| maxTokens | number |
| tokensSoFar | number |
| tokensLeft | number |
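The three token-accounting properties fit together: tokensSoFar counts tokens consumed so far and tokensLeft is the remaining budget out of maxTokens. A minimal sketch, using a plain object as a hypothetical stand-in for a real LanguageModel instance (this document does not show how one is constructed):

```javascript
// Hypothetical session state standing in for a LanguageModel instance.
// Assumption: tokensLeft is the remaining budget, maxTokens - tokensSoFar.
const session = {
  maxTokens: 4096,
  tokensSoFar: 1024,
  get tokensLeft() {
    return this.maxTokens - this.tokensSoFar;
  },
};

console.log(session.tokensLeft); // 3072
```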
Methods
prompt
`async (input, options) => string`
See `promptStreaming`.
promptStreaming
`(input, options) => ReadableStream`
Prompts the language model with a continuation of the conversation. Internally, the input is appended to the messages in the model's context window; older messages that no longer fit in the context window are automatically discarded.
| Input |
| --- |
| Either a … |

| Options (optional) |
| --- |
| signal |

Returns a readable stream that updates each time new tokens are available from the language model.
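The returned stream can be consumed with a standard `ReadableStream` reader. A sketch of that pattern, using a stub stream in place of a real `promptStreaming` call (the stub's token chunks and the concatenation strategy are assumptions for illustration):

```javascript
// Stub standing in for session.promptStreaming(input): emits a few
// token chunks, then closes, like a model producing output incrementally.
function fakePromptStreaming() {
  const tokens = ["Hello", ", ", "world"];
  return new ReadableStream({
    start(controller) {
      for (const t of tokens) controller.enqueue(t);
      controller.close();
    },
  });
}

// Drain the stream, concatenating each chunk as it becomes available.
async function readAll(stream) {
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  return out;
}

readAll(fakePromptStreaming()).then((text) => console.log(text)); // "Hello, world"
```

In a real application the loop body would update the UI per chunk rather than only after the stream closes.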
countPromptTokens
`async (input, options) => number`
| Input |
| --- |
| Either a … |

| Options (optional) |
| --- |
| signal? |
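The optional `signal` accepted by these methods is presumably an `AbortSignal` used to cancel an in-flight call. A sketch of that cancellation pattern, with a hypothetical `fakePrompt` standing in for `prompt(input, { signal })`:

```javascript
// Hypothetical async operation standing in for session.prompt(input, { signal }).
// Resolves after a delay unless the signal aborts first.
function fakePrompt(input, { signal } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(`echo: ${input}`), 1000);
    signal?.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });
}

const controller = new AbortController();
const pending = fakePrompt("Tell me a story", { signal: controller.signal });
controller.abort(); // cancel the in-flight request

pending.catch((err) => console.log(err.message)); // "aborted"
```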