1. Overview

In this codelab, you'll learn how to use built-in AI, specifically the Prompt API, to build engaging web applications. Your involvement is invaluable as we explore opportunities to improve and augment web experiences with AI!

Gemini
Google Gemini, https://blog.google/products/gemini/google-gemini-anniversary-quiz/

What you'll learn

  • How to opt in to the built-in AI feature (the Prompt API) in Chrome.
  • The basic usage of the Prompt API.
  • Available resources for building applications quickly.

What you'll need

  • Install Chrome.
  • Opt in to the built-in AI feature.
  • Use your favorite editor to write JavaScript.

2. Install Chrome

Built-in AI is available in Chrome only, so you need to install Chrome before experiencing the AI features. Use the following download links if your current device doesn't have Chrome.

Download Chrome
Google Chrome, https://www.google.com/chrome/

Besides the stable version, you can also choose Chrome Canary to experience the newest features and fastest update cycle.

Download Chrome Canary
Google Chrome, https://www.google.com/chrome/canary/


3. Check requirements

There are some requirements to check before opting in to the built-in AI features.

A. Acknowledge Google’s Generative AI Prohibited Uses Policy.

B. Check the following table and make sure your hardware is OK.

Built-in AI Early Preview Program
Built-in AI Early Preview Program > The Prompt API, https://docs.google.com/document/d/1VG8HIyz361zGduWgNG7R_R8Xkv0OOJ8b5C9QKeCjU0c/edit?tab=t.0

C. Make sure your Chrome version is 128.0.6545.0 or newer.

4. Opt-in built-in AI

Let's opt in to built-in AI.

  1. Open Chrome and copy the following address into the Chrome omnibox, then press Return.

    chrome://flags/#optimization-guide-on-device-model

    Then select Enabled BypassPerfRequirement. This setting makes sure the LLM download won't be blocked by device performance checks.

    Chrome flags > chrome://flags/#optimization-guide-on-device-model
  2. Copy the following address into the Chrome omnibox, then press Return.

    chrome://flags/#prompt-api-for-gemini-nano

    Then select Enabled. This setting indicates that we want to try the built-in AI Prompt API.

    Chrome flags > prompt-api-for-gemini-nano
  3. Relaunch Chrome so the flags we just turned on take effect.

  4. Open Chrome DevTools and switch to the Console tab (⌘ + Option + i on macOS, or Ctrl + Shift + i on Windows and Linux).

  5. Copy the following code into the Console tab, then press Return.

    (await ai.languageModel.capabilities()).available;

    This checks whether built-in AI is available. If it returns readily, everything is set up and you can skip the following steps.

    DevTool Console
    DevTool Console > check built-in AI available or not
  6. Stay in the Console tab, then copy and execute the following code. This forces the browser to download Gemini Nano.

    await ai.languageModel.create();

    The call will likely fail, but that's intended.

  7. Copy the following address into the Chrome omnibox, then press Return.

    chrome://components

    Find the Optimization Guide On Device Model section. Make sure the version is greater than or equal to 2024.5.21.1031. If no version is listed, click Check for update to force the download.

    Chrome components
  8. Once the model has downloaded and reached a version greater than or equal to the one shown above, go back to Step 5 to check availability. If it still fails, see the troubleshooting section.
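Steps 5 through 8 can be condensed into one console snippet. This is a minimal sketch assuming the early-preview ai.languageModel API used throughout this codelab; the helper name ensureGeminiNano is hypothetical, not part of the API.

```javascript
// Hypothetical helper: check availability and, if the model still
// needs downloading, let create() trigger the download while we watch
// its progress. Assumes the early-preview `ai.languageModel` API.
async function ensureGeminiNano() {
  const { available } = await ai.languageModel.capabilities();
  if (available === 'no') {
    throw new Error('Built-in AI is not available on this device.');
  }

  // create() starts the download when available is 'after-download'.
  return ai.languageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${e.loaded} of ${e.total} bytes.`);
      });
    },
  });
}

// In the DevTools Console: const session = await ensureGeminiNano();
```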

5. Code with the Prompt API

It's time to try the built-in AI Prompt API. Follow the steps below to see what it can do.

All-at-once version

// Start by checking if it's possible to create a session based on
// the availability of the model, and the characteristics of the device.
const { available, defaultTemperature, defaultTopK, maxTopK } =
  await ai.languageModel.capabilities();

if (available !== 'no') {
  const session = await ai.languageModel.create();

  // Prompt the model and wait for the whole result to come back.
  const result = await session.prompt('Write me a poem');
  console.log(result);
}

It's possible to modify systemPrompt, temperature, and topK. All we need to do is pass these parameters to create().

const session = await ai.languageModel.create({
  systemPrompt: 'You are a top e-commerce salesperson',
  temperature: 1,
  topK: 8
});
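Because topK must stay within the range the model reports via capabilities(), it can help to clamp a requested value against maxTopK before calling create(). A minimal sketch; the helper pickTopK is hypothetical, not part of the API:

```javascript
// Hypothetical helper (not part of the API): keep a requested topK
// inside the range the model reports via capabilities().
function pickTopK(requested, defaultTopK, maxTopK) {
  // Fall back to the default for non-integer or non-positive requests.
  if (!Number.isInteger(requested) || requested < 1) {
    return defaultTopK;
  }
  return Math.min(requested, maxTopK);
}

// In the browser, you could then write:
// const caps = await ai.languageModel.capabilities();
// const session = await ai.languageModel.create({
//   temperature: caps.defaultTemperature,
//   topK: pickTopK(16, caps.defaultTopK, caps.maxTopK),
// });
```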

Streaming version

const { available, defaultTemperature, defaultTopK, maxTopK } =
  await ai.languageModel.capabilities();

if (available !== 'no') {
  const session = await ai.languageModel.create();

  // Prompt the model and stream the result:
  const stream = session.promptStreaming('Write me an extra-long poem');
  for await (const chunk of stream) {
    console.log(chunk);
  }
}

Tracking model download progress

const session = await ai.languageModel.create({
  monitor(m) {
    m.addEventListener('downloadprogress', e => {
      console.log(`Downloaded ${e.loaded} of ${e.total} bytes.`);
    });
  }
});

Session persistence

const session = await ai.languageModel.create({
  systemPrompt: 'You are a friendly, helpful assistant specialized in clothing choices.'
});

const result = await session.prompt(`
  What should I wear today? It's sunny and I'm unsure
  between a t-shirt and a polo.
`);
console.log(result);

const result2 = await session.prompt(`
  That sounds great, but oh no, it's actually going to rain!
  New advice?
`);

Terminating a session

await session.prompt(`
  You are a friendly, helpful assistant specialized in clothing choices.
`);

session.destroy();

// The promise will be rejected with an error explaining that
// the session is destroyed.
await session.prompt(`
  What should I wear today? It's sunny and I'm unsure
  between a t-shirt and a polo.
`);

Session cloning

To preserve resources, you can clone an existing session. The conversation context is reset, but the initial prompt and the system prompt remain intact.

const clonedSession = await session.clone();
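As a sketch of how cloning fits into a conversation (assuming the same ai.languageModel API as above; the helper name forkSession is hypothetical): the clone keeps the system prompt but starts with a fresh conversation context.

```javascript
// Hypothetical helper: build up context on one session, then clone it.
// The clone resets the conversation but keeps the system prompt.
async function forkSession() {
  const session = await ai.languageModel.create({
    systemPrompt:
      'You are a friendly, helpful assistant specialized in clothing choices.',
  });

  // Build up some conversation context on the original session.
  await session.prompt("What should I wear today? It's sunny.");

  // The clone does not see the exchange above, only the system prompt.
  const clonedSession = await session.clone();
  return clonedSession.prompt('And what if it rains instead?');
}
```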

6. Congratulations!

You've experienced the built-in AI Prompt API.

What you've covered

  • Opting in to built-in AI.
  • The Prompt API's prompt() and promptStreaming() methods.

Next steps

Learn more

7. Optional: Prompt API in Extensions

Join the Prompt API origin trial, running in Chrome 131 to 136, to create Extensions with this API. While there may be usage limits, you can integrate these features for live testing and gathering user feedback. The goal is to inform future iterations of this API, as we work towards wider availability.

Participate in the origin trial

To sign up your extension for the origin trial, use the URL chrome-extension://YOUR_EXTENSION_ID as the Web Origin. For example, chrome-extension://ljjhjaakmncibonnjpaoglbhcjeolhkk.

Gemini
Prompt API origin trial, https://developer.chrome.com/origintrials/#/view_trial/320318523496726529

After you've signed up for the origin trial, you receive a generated token, which you need to pass in an array as the value of the trial_tokens field in the manifest.

{
  "manifest_version": 3,
  "name": "YOUR_EXTENSION_NAME",
  "permissions": ["aiLanguageModelOriginTrial"],
  "key": "YOUR_EXTENSION_KEY",
  "trial_tokens": ["GENERATED_TOKEN"]
}

Use the Prompt API

Once you have requested permission to use the Prompt API, you can build your extension. Two new extension functions are available to you in the chrome.aiOriginTrial.languageModel namespace:

const capabilities = await chrome.aiOriginTrial.languageModel.capabilities();

// Initializing a new session must either specify both `topK` and
// `temperature` or neither of them.
const slightlyHighTemperatureSession =
  await chrome.aiOriginTrial.languageModel.create({
    temperature: Math.max(capabilities.defaultTemperature * 1.2, 2.0),
    topK: capabilities.defaultTopK,
  });
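A session created through the origin-trial namespace can then be prompted the same way as in the page examples above. A minimal sketch; the helper askFromExtension is hypothetical, and this assumes capabilities() in the origin-trial namespace reports availability the same way as in the page API:

```javascript
// Hypothetical helper: check availability and prompt the model from
// inside an extension via the origin-trial namespace described above.
async function askFromExtension(question) {
  const capabilities = await chrome.aiOriginTrial.languageModel.capabilities();
  if (capabilities.available === 'no') {
    throw new Error('Built-in AI is not available on this device.');
  }

  const session = await chrome.aiOriginTrial.languageModel.create();
  return session.prompt(question);
}
```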