LanguageModel

Static Methods

static availability

static async availability(options) => AIModelAvailability

Get the availability of the on-device language model.

Options (optional)

Returns the availability
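A minimal sketch of checking availability before use. The specific availability values ("available", "downloadable", "downloading", "unavailable") are assumptions based on common Prompt API conventions, not confirmed by this reference:

```javascript
// Hedged sketch: returns true only when the on-device model is
// immediately usable. The string values compared here are assumed.
async function modelIsReady(LanguageModel) {
  const availability = await LanguageModel.availability();
  return availability === "available";
}
```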

static compatibility extension web

static async compatibility(options) => AIModelCoreCompatibility

Get the compatibility of the on-device language model.

Options (optional)

Returns the compatibility

static create

static async create(options) => LanguageModel

Creates a new language model session

Options (optional)

Returns a new LanguageModel session that can be prompted using the provided configuration
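A sketch of a guarded create() call. The option names shown (temperature, topK) mirror the session properties listed below, but treat them as assumptions rather than a confirmed options schema:

```javascript
// Hedged sketch: check availability, then create a session.
// The temperature/topK option names are assumed from the
// session properties documented in this reference.
async function createSession(LanguageModel) {
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") {
    throw new Error("on-device language model is unavailable");
  }
  return LanguageModel.create({ temperature: 0.7, topK: 40 });
}
```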


Properties

gpuEngine extension web

AIModelGpuEngine

dtype extension web

AIModelDtype

flashAttention extension web

boolean

contextSize

number

inputUsage

number

inputQuota

number

topK

number

topP extension web

number

temperature

number

repeatPenalty extension web

number


Methods

prompt

async (input, options) => string

Same as promptStreaming, but resolves with the complete response as a single string. See promptStreaming for the input and options.

promptStreaming

(input, options) => ReadableStream

This prompts the language model with a continuation of the conversation. Internally, the input is appended to the set of messages in the language model's context window. Older messages outside of the language model's context window are automatically discarded.

Input

Either a string containing a single prompt, or an array of prompt messages, such as:

[
  { content: "The prompt content", role: "user" },
  { content: "The prompt content", role: "assistant" }
]
Options (optional)

signal optional AbortSignal

responseConstraint optional object

Returns a readable stream that updates each time new tokens are available from the language model
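A sketch of draining the ReadableStream returned by promptStreaming. This assumes each chunk is a string delta of newly generated tokens; some implementations instead emit the full text so far, so check behavior before accumulating:

```javascript
// Hedged sketch: read a ReadableStream of string chunks to
// completion, concatenating each chunk as it arrives.
async function readToString(stream) {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return text;
    text += value;
  }
}
```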

append

async (input) => void

Appends a message to the conversation without prompting the model

Input

A string
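A sketch of using append to prime a session's context with prior messages without triggering generation, per the description above:

```javascript
// Hedged sketch: add each message to the session's context
// window in order; no model output is produced.
async function primeSession(session, messages) {
  for (const message of messages) {
    await session.append(message);
  }
}
```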

measureInputUsage

async (input, options) => number

Measures the prompt usage of the input

Input

A string

Options (optional)

signal optional AbortSignal

Returns prompt usage based on the input
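A sketch combining measureInputUsage with the inputUsage and inputQuota properties listed above to check whether an input still fits in the context window before prompting. The subtraction-based quota check is an assumption about how these two properties relate:

```javascript
// Hedged sketch: an input fits if its measured usage does not
// exceed the remaining quota (inputQuota - inputUsage).
async function fitsInQuota(session, input) {
  const usage = await session.measureInputUsage(input);
  return usage <= session.inputQuota - session.inputUsage;
}
```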
