Welcome
Power up your web apps with local AI.
Meet AiBrow, which enables on-device AI in your browser. Private, fast and free. It's open source and supports Llama, Gemma, Phi and many other models.
The AiBrow API follows the current proposals for the browser machine learning APIs, namely the Prompt API and the Writing Assistance APIs. These are currently being developed and trialled in Google Chrome, but AiBrow extends this base feature set with new capabilities. This means you can use AI in the browser through a number of different implementations:
- Using the in-browser APIs when available
- Using the AiBrow extension, which uses native llama.cpp
- Using web APIs such as WebGPU and WASM
Each method has its own advantages, limitations and performance considerations to take into account (see the feature comparison). You can use the AiBrow Web API to check for on-device support and access each of these APIs as needed, as sketched below.
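For example, one simple pattern is to try each backend in order and use the first one that is available. This is a minimal sketch that only uses the LanguageModel.create() and prompt() calls shown in the Quick Start below; the preference order and error handling are assumptions, and download progress reporting is omitted.

import AI from '@aibrow/web'

// Try the AiBrow extension first (native llama.cpp), then in-page WebGPU,
// then the browser's built-in AI. Availability depends on the user's setup.
async function createLanguageModel () {
  const backends = [AI.AIBrow, AI.AIBrowWeb, AI.Browser]
  for (const backend of backends) {
    try {
      return await backend.LanguageModel.create()
    } catch (err) {
      // This backend isn't available here; fall through to the next one
    }
  }
  throw new Error('No on-device AI backend is available')
}

const model = await createLanguageModel()
console.log(await model.prompt('Write a short poem about the weather'))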
Quick Start
Install the dependencies:
npm install @aibrow/web
You can use the LanguageModel API to have a conversation with the AI, using whichever backend you choose.
import AI from '@aibrow/web'
// WebGPU: runs the model in-page using WebGPU/WASM
const webGpu = await AI.AIBrowWeb.LanguageModel.create()
console.log(await webGpu.prompt('Write a short poem about the weather'))
// Llama.cpp: runs the model natively through the AiBrow extension
const ext = await AI.AIBrow.LanguageModel.create()
console.log(await ext.prompt('Write a short poem about the weather'))
// Chrome AI: uses the browser's built-in AI APIs
const browser = await AI.Browser.LanguageModel.create()
console.log(await browser.prompt('Write a short poem about the weather'))
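If you want to show text as it is generated rather than waiting for the full reply, the Prompt API proposal that AiBrow follows also defines a streaming call. The sketch below assumes promptStreaming() returns an async-iterable stream of text chunks; check the API reference for the exact surface AiBrow exposes.

// Streaming sketch (names follow the Prompt API proposal; verify against the API reference)
const session = await AI.AIBrowWeb.LanguageModel.create()
const stream = session.promptStreaming('Write a short poem about the weather')
let result = ''
for await (const chunk of stream) {
  result += chunk // each chunk is a piece of the generated text
  console.log(chunk)
}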
- Take a look at the examples to get started
- Check out the API reference to see everything that AiBrow supports
- The AiBrow extension is on GitHub if you want to contribute or chat
- Try out some of the AiBrow demos