LanguageModel
Static Methods
static availability
static async availability(options) => AIModelAvailability
Get the availability of the on-device language model.
options optional
LanguageModelCreateOptions
Returns the availability
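A minimal sketch of gating session creation on availability. The class is passed in as a parameter (`LM`) purely so the helper is self-contained; in practice you would call the global `LanguageModel` directly. The `"unavailable"` string is an assumption based on common Prompt API conventions; consult `AIModelAvailability` for the exact values.

```javascript
// Hypothetical helper: only create a session when the model is usable.
// `LM` stands in for the LanguageModel class; the "unavailable" value
// is an assumed member of AIModelAvailability.
async function getSessionIfAvailable(LM, options = {}) {
  const availability = await LM.availability(options);
  if (availability === 'unavailable') {
    return null; // the model cannot be used with these options
  }
  // Any other state: create the session (this may trigger a download first).
  return LM.create(options);
}
```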
static compatibility extension web
static async compatibility(options) => AIModelCoreCompatibility
Get the compatibility of the on-device language model.
options optional
LanguageModelCreateOptions
Returns the compatibility
static create
static async create(options) => LanguageModel
Creates a new language model session
options optional
LanguageModelCreateOptions
Returns a new LanguageModel session that can be prompted with the pre-provided configuration
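A minimal create-and-prompt round trip, sketched with the class injected as a parameter (`LM`) so the helper stands alone; in a page you would use the global `LanguageModel` directly, and `createOptions` is whatever subset of `LanguageModelCreateOptions` you need.

```javascript
// Hypothetical helper: create a session with the given options and
// ask a single question. `LM` stands in for the LanguageModel class.
async function askOnce(LM, question, createOptions = {}) {
  const session = await LM.create(createOptions);
  // prompt() resolves with the complete response string.
  return session.prompt(question);
}
```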
Properties
gpuEngine extension web
dtype extension web
flashAttention extension web
boolean
contextSize
number
inputUsage
number
inputQuota
number
topK
number
topP extension web
number
temperature
number
repeatPenalty extension web
number
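Since `inputUsage` and `inputQuota` are both numbers in the same units, the remaining context budget is their difference. A small sketch (the helper name is illustrative, not part of the API):

```javascript
// Hypothetical helper: how much input quota is still unused on a session.
function remainingInputQuota(session) {
  return session.inputQuota - session.inputUsage;
}
```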
Methods
prompt
async (input, options) => string
Prompts the language model and resolves with the complete response once generation finishes. Takes the same arguments as promptStreaming.
promptStreaming
(input, options) => ReadableStream
This prompts the language model with a continuation of the conversation. Internally, the input is appended to the set of messages in the language model's context window. Older messages outside of the language model's context window are automatically discarded.
input
Either a string containing a single prompt, or an array of messages such as
[
  { content: "The prompt content", role: "user" },
  { content: "The prompt content", role: "assistant" }
]
signal optional AbortSignal
responseConstraint optional object
Returns a readable stream that updates each time new tokens are available from the language model
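The returned `ReadableStream` can be drained with a reader. A sketch, assuming each chunk is a newly generated piece of text (whether chunks are deltas or cumulative text is not specified above, so verify against the actual stream):

```javascript
// Hypothetical helper: collect a streamed response into one string.
async function streamToString(session, input, options = {}) {
  const stream = session.promptStreaming(input, options);
  const reader = stream.getReader();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value; // assumes each chunk is a text delta
  }
  return text;
}
```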
append
async (input) => void
Appends a message to the conversation without prompting the model
input
A string
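Because append adds to the context window without triggering generation, it can seed background context before the first prompt. A sketch (helper name is illustrative):

```javascript
// Hypothetical helper: preload context, then ask a question.
async function primeAndPrompt(session, context, question) {
  await session.append(context); // added to the context window, no generation
  return session.prompt(question);
}
```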
measureInputUsage
async (input, options) => number
Measures how much input quota the given input would consume, without prompting the model
input
A string
signal optional AbortSignal
Returns the input usage for the given input, in the same units as inputUsage and inputQuota
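Combined with `inputUsage` and `inputQuota`, this lets you check whether an input fits before prompting. A sketch, assuming all three values share the same units (the helper name is illustrative):

```javascript
// Hypothetical helper: would this input fit in the remaining quota?
async function fitsInContext(session, input) {
  const usage = await session.measureInputUsage(input);
  return usage <= session.inputQuota - session.inputUsage;
}
```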