CoreModel API

Most top-level APIs don't pass the prompt you provide directly to the language model. Instead, they wrap it in a template that gives the model more context about the task. For example, the template might indicate that the model is taking part in a conversation, where some snippets of text come from an assistant and others from a user.

The CoreModel API is powerful because it exposes the underlying model directly, passing your prompts and options through unchanged. This gives you more control to constrain the language model in new ways, at the cost of building templates and configurations yourself.

Prompting

Prompting works in much the same way as with the other models. The prompt you provide to the session is passed directly to the language model.

const session = await window.aibrow.coreModel.create();

// Prompt the model and wait for the whole result to come back.
const result = await session.prompt("Write me a poem.");
console.log(result);

// Prompt the model and stream the result:
const stream = await session.promptStreaming("Write me an extra-long poem.");
for await (const chunk of stream) {
  console.log(chunk);
}
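When streaming, you typically want to accumulate the chunks into the full response as they arrive. The sketch below shows that pattern with a mock async generator standing in for `session.promptStreaming()`; it assumes each chunk is a delta (new text only) rather than the full text so far, which may vary by implementation.

```javascript
// A mock async generator stands in for session.promptStreaming(),
// yielding chunks of generated text as they arrive.
async function* mockStream() {
  yield "Roses are red,\n";
  yield "violets are blue.";
}

// Accumulate streamed chunks into the complete response.
async function collect(stream) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk; // append each delta as it streams in
    // (update your UI here for a live-typing effect)
  }
  return full;
}

collect(mockStream()).then((text) => console.log(text));
```

With a real session you would pass `session.promptStreaming(prompt)` to `collect` instead of the mock.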

Prompting with templates

Using a template is a useful way to indicate that the language model is part of a conversation: it can mark some snippets of text as coming from the user and others from the assistant. You can build your own templates manually or use third-party libraries to help.

import { Template } from '@huggingface/jinja'

const session = await window.aibrow.coreModel.create();

// Build the template
const template = new Template('...')
const prompt = template.render({
  messages: [],
  ...
})

// Prompt the model and stream the result:
const stream = await session.promptStreaming(prompt);
for await (const chunk of stream) {
  console.log(chunk);
}
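As an alternative to a templating library, you can assemble the prompt string yourself. The sketch below builds a ChatML-style prompt from a list of messages; note that the role markers (`<|im_start|>` / `<|im_end|>`) are one common convention and an assumption here — the correct template syntax depends on the model you load, so consult its documentation.

```javascript
// Sketch: manually building a ChatML-style prompt from a message list.
// The <|im_start|>/<|im_end|> markers are model-specific; swap them
// for whatever format your chosen model expects.
function buildChatPrompt(messages) {
  const body = messages
    .map((m) => `<|im_start|>${m.role}\n${m.content}<|im_end|>`)
    .join("\n");
  // Leave an open assistant turn for the model to complete.
  return `${body}\n<|im_start|>assistant\n`;
}

const prompt = buildChatPrompt([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Write me a poem." },
]);
```

The resulting string can then be passed to `session.prompt(prompt)` or `session.promptStreaming(prompt)` as in the examples above.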

Demos

Spreadsheet autofill
