
Welcome

Power up your web apps with local AI.

Meet AiBrow, which brings on-device AI to your browser: private, fast, and free. It's open source and supports Llama, Gemini, Phi, and many other models.

The AiBrow API follows the Prompt API proposal currently being developed in Google Chrome, and extends that base feature set with new capabilities. AI in the browser is implemented in one of three ways:

  1. Using the in-browser window.ai API

  2. Using the AiBrow extension, which runs a native llama.cpp backend

  3. Using web APIs such as WebGPU and WASM

Each method has its own advantages, limitations, and performance considerations (see the feature comparison). You can use the AiBrow Web API to check on-device support and access each of these APIs as needed.
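Because support for each backend varies by browser and machine, a page typically probes what is available and falls back in order of preference. As a minimal sketch (the shape of the support object and the preference order here are illustrative assumptions, not the real AiBrow capabilities API):

```javascript
// Hypothetical helper: pick the best available backend from a support map.
// The field names (aibrow, browser, web) are assumptions for illustration;
// consult the AiBrow API reference for the real capability checks.
function pickBackend(support) {
  // Prefer the native extension, then the browser's built-in AI, then WebGPU.
  if (support.aibrow) return 'aibrow'
  if (support.browser) return 'browser'
  if (support.web) return 'web'
  return null // nothing usable on this device
}

// Example: only the WebGPU backend is available on this machine.
console.log(pickBackend({ aibrow: false, browser: false, web: true })) // 'web'
```

The same pattern works whatever the real capability check looks like: probe once, then create your language model session from the backend you picked.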

Quick Start

Install the dependencies:

npm install @aibrow/web

You can use the languageModel API to hold a conversation with the AI using whichever backend you choose.

import AI from '@aibrow/web'

// WebGPU
const webGpu = await AI.web.languageModel.create()
console.log(await webGpu.prompt('Write a short poem about the weather'))

// Llama.cpp
const ext = await AI.aibrow.languageModel.create()
console.log(await ext.prompt('Write a short poem about the weather'))

// Chrome AI
const browser = await AI.browser.languageModel.create()
console.log(await browser.prompt('Write a short poem about the weather'))
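The Prompt API proposal that AiBrow follows also defines a streaming variant, promptStreaming(), which returns a ReadableStream of text chunks instead of one final string. Assuming AiBrow mirrors that part of the proposal, a response could be consumed incrementally with a small reader loop:

```javascript
// Drain a ReadableStream of text chunks into a single string.
// This is plain Streams API code; nothing here is AiBrow-specific.
async function readAll(stream) {
  const reader = stream.getReader()
  let text = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) return text
    text += value // append each chunk as it arrives
  }
}

// Usage against a session (assumed API, mirroring the examples above):
// const session = await AI.aibrow.languageModel.create()
// console.log(await readAll(session.promptStreaming('Write a short poem')))
```

In a real UI you would render each chunk inside the loop rather than waiting for the full text, which is the main reason to prefer streaming for longer generations.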

😃 Take a look at the examples to get started

📔 Check out the API reference to see everything that AiBrow supports

👾 The AiBrow extension is on GitHub if you want to contribute or chat

🧪 Try out some of the AiBrow demos
