[blink-dev] Intent to Ship: Prompt API


Deepti Bogadi

Apr 1, 2026, 6:58:12 PM
to blink-dev, Mike Wasserman, Kenji Baheux, Etienne Noël, Rob Kochman

Contact emails

m...@chromium.org, rei...@chromium.org, kenji...@chromium.org, dbo...@chromium.org


Explainer

https://github.com/webmachinelearning/prompt-api/blob/main/README.md


Specification

http://webmachinelearning.github.io/prompt-api


Summary

The Prompt API gives web developers direct access to a browser-provided on-device AI language model. The API design offers fine-grained control, aligned with cloud API shapes, for progressively enhancing sites with model interactions tailored to individualized use cases. This complements task-based language model APIs (e.g. the Summarizer API) and the various APIs and frameworks for generalized on-device inference with developer-supplied ML models. The initial implementation supports text, image, and audio inputs, as well as response constraints that ensure generated text conforms to predefined regex and JSON schema formats.


This supports a variety of use cases, from generating image captions and performing visual searches to transcribing audio, classifying sound events, generating text following specific instructions, and extracting information or insights from multimodal source material.
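For concreteness, the basic flow is: check availability, create a session, prompt it, destroy it. Here is a minimal sketch; the method names follow the current explainer, and the LanguageModel global is passed in as a parameter purely so the pattern can be exercised with a stub outside a supporting browser.

```javascript
// Sketch of the core Prompt API flow: check availability, create a session,
// prompt it, and clean up. `lm` is the LanguageModel global (injected so this
// helper can be tested with a stub outside a supporting browser).
async function promptIfAvailable(lm, text) {
  if (!lm) return null;                          // API not exposed at all
  const availability = await lm.availability();  // e.g. 'unavailable' | 'downloadable' | 'available'
  if (availability === 'unavailable') return null;
  const session = await lm.create();             // may trigger a model download
  try {
    return await session.prompt(text);           // e.g. 'write a haiku'
  } finally {
    session.destroy();                           // free on-device resources
  }
}
```

In a browser, a site would call this as `promptIfAvailable(globalThis.LanguageModel, ...)` and fall back gracefully when it returns null.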


This API has already been shipped in Chrome Extensions; this intent tracks the shipping on the web. An enterprise policy GenAILocalFoundationalModelSettings is available to disable the underlying model downloading, which would render this API unavailable. Enterprise admins can also set the BuiltInAIAPIsEnabled policy to block Built-In AI API usage, while still permitting other on-device GenAI features.


Language support log:

  • Chrome M139 and earlier only supported English ('en')

  • Chrome M140 added support for Spanish and Japanese ('es' and 'ja')



Blink component

Blink > AI > Prompt


Web Feature ID

https://github.com/web-platform-dx/web-features/issues/3530


Motivation

Direct access to a language model can help web developers accomplish tasks beyond those with dedicated APIs (e.g. the Summarizer API), and tailor their usage to site-specific requirements. Compared to the low-level approach (e.g. a custom AI model run via WebGPU, WASM, or WebNN), using the built-in language model can save the user's bandwidth and disk space, and has a lower barrier to entry. The design offers simple shorthands for common patterns (e.g. await session.prompt('write a haiku')), and supports more complex use cases: handling structured content sequences, streaming responses, availability checks, session management, and response constraints.
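The streaming case mentioned above can be sketched as follows. This assumes, per the current spec draft, that promptStreaming() returns an async-iterable stream of string chunks and that chunks are deltas (the deltas-vs-cumulative behavior has changed during experimentation); the session is passed in so the pattern is testable with a stub.

```javascript
// Sketch: consume a streaming response chunk-by-chunk, e.g. to render partial
// output as it arrives. Assumes delta chunks, per the current design.
async function streamPrompt(session, text, onChunk) {
  let full = '';
  for await (const chunk of session.promptStreaming(text)) {
    full += chunk;     // accumulate delta chunks into the full response
    onChunk?.(chunk);  // e.g. append to the DOM incrementally
  }
  return full;
}
```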


Initial public proposal

https://github.com/webmachinelearning/charter/pull/9


Search tags

LanguageModel, Language Model, Prompt API, Built-in AI


TAG review

https://github.com/w3ctag/design-reviews/issues/1093


TAG review status

Issues addressed


WebFeature UseCounter name

kLanguageModel_Create


Risks


Interoperability and Compatibility

The Prompt API is designed to provide a stable and interoperable surface for language model interactions, acknowledging the inherent diversity and non-deterministic nature of underlying models. Variance in behaviors and responses is a well-understood expectation among developers employing this technology, and this API aims to provide an interoperable framework for consistent web platform access across browsers and models.


The Prompt API specifically aims to maximize compatibility by:

- Codifying an interoperable API surface for generalized language model interactions, so developers can write code that works across different browser engines and models. This surface has demonstrated compatibility with models from Google and Microsoft, and has been polyfilled by extensions and JS frameworks using different backends.

- Enforcing objective response conformance with constraints that ensure output adheres to known JSON schemas or regexes for interoperable processing of generated text.

- Supporting progressive enhancement patterns, by offering availability signals that encapsulate device and model support dimensions, encouraging developers to consider this API as one option among varied compatible AI offerings, including developer-supplied models and cloud-based services.
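The structured-output constraint in the second bullet can be illustrated roughly like this. The `responseConstraint` option follows the explainer; the schema and the session stub are purely illustrative. With a conforming implementation, the returned string is guaranteed to parse as JSON matching the schema.

```javascript
// Sketch: constrain generated text to a JSON schema so the output can be
// parsed mechanically. The schema here is a hypothetical example.
const sentimentSchema = {
  type: 'object',
  required: ['sentiment'],
  additionalProperties: false,
  properties: {
    sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] },
  },
};

async function classifySentiment(session, review) {
  const raw = await session.prompt(
    `Classify the sentiment of this review: ${review}`,
    { responseConstraint: sentimentSchema },
  );
  return JSON.parse(raw); // safe: output is constrained to the schema
}
```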


Shipping this API provides a critical opportunity to broaden real-world implementation experience, explore future refinements, and collaborate with the web community on interoperable model diversity within a robust, predictable platform surface.


Gecko : Negative ( https://github.com/mozilla/standards-positions/issues/1213 )


WebKit : No signal ( https://github.com/WebKit/standards-positions/issues/495 )


Web developers : Strongly positive ( https://github.com/webmachinelearning/prompt-api/blob/main/README.md#stakeholder-feedback )


Other signals : Microsoft Edge developers have been strong collaborators, with notable contributions including structured output and experimental tool-use enhancements. Edge will be shipping this API using a different underlying model.


Ergonomics

The API has deprecated some parameters and renamed several identifiers, retaining legacy access in previously launched extension contexts. We plan to align the web and extension surfaces through careful additive changes and a cautious deprecation process. Developers are encouraged to use the new identifier names in both contexts and to watch for deprecation messages about planned API alignment.


Activation

This feature would definitely benefit from having polyfills, backed by any of: cloud services, lazily-loaded client-side models using WebGPU/WASM/WebNN, or the web developer's own server. We anticipate seeing an ecosystem of polyfills and client frameworks grow as more developers experiment with this API.
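A polyfill or fallback along the lines described might look roughly like this: prefer the built-in model when present and available, else return a minimal session-like object backed by a developer-supplied prompt function (a cloud endpoint, a WASM model, or the site's own server). The `fallbackPrompt` parameter is hypothetical application code, not part of the API.

```javascript
// Sketch of the progressive-enhancement/fallback pattern. `fallbackPrompt`
// is a hypothetical app-provided async function (text) => string.
async function createTextSession(fallbackPrompt) {
  const lm = globalThis.LanguageModel; // undefined outside supporting browsers
  if (lm && (await lm.availability()) === 'available') {
    return lm.create();
  }
  // Shape-compatible fallback covering the subset of the API the app uses.
  return { prompt: fallbackPrompt, destroy() {} };
}
```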


WebView application risks

Does this intent deprecate or change behavior of existing APIs, such that it has potentially high risk for Android WebView-based applications?

Not Applicable; this API is not available in WebView.



Debuggability

The API surface supports basic DevTools debugging. Perfetto tracing (via optimization_guide and other events) is useful, and internal debugging pages that give more detail on the model's status (e.g. chrome://on-device-internals) might be suitable to port into DevTools. The team maintains DevTools panel extensions to improve debuggability. Exposing more insight into the model's nondeterministic state (e.g. random seeds) could also help with debugging.


Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, ChromeOS, Android, and Android WebView)?

No

The initial launch focuses on Windows, Mac, Linux, and ChromeOS (on Chromebook Plus devices). An implementation for Android using that platform's OS-level built-in language model is being prototyped and will ship after the initial launch.


Is this feature fully tested by web-platform-tests?

No

Web platform tests cover the API surface adequately: https://wpt.fyi/results/ai/language-model . These attempt to mitigate execution-environment differences, e.g. stub vs. full implementations (content_shell, chrome) and device/model states (unavailable, downloadable, downloaded). The core responses of real models can be unpredictable (especially without sampling parameters) and may cause inconsistent test results, but some facets are more readily testable, e.g. adherence to structured-output response constraints. Test coverage and reliability improvements are ongoing, including planning for WebDriver extensions.


DevTrial instructions

https://developer.chrome.com/docs/ai/prompt-api


Flag name on about://flags

prompt-api-for-gemini-nano-multimodal-input


Finch feature name

AIPromptAPIMultimodalInput


Rollout plan

Will ship enabled for all users


Requires code in //chrome?

True


Tracking bug

https://crbug.com/417526788


Launch bug

https://launch.corp.google.com/launch/4461863


Measurement

The API has use counters for all methods and attributes, e.g.: LanguageModel_Create, LanguageModel_Availability, LanguageModel_Prompt, LanguageModel_PromptStreaming, LanguageModel_Append, LanguageModel_MeasureContextUsage, LanguageModel_OnContextOverflow, LanguageModel_ContextUsage, LanguageModel_ContextWindow, LanguageModel_Clone, LanguageModel_Destroy.


Non-OSS dependencies

Does the feature depend on any code or APIs outside the Chromium open source repository and its open-source dependencies to function?

Yes: this feature depends on a language model, which is bridged to the open-source parts of the implementation via the interfaces in //services/on_device_model.


Estimated milestones

Shipping on desktop

148

Origin trial desktop first

139

Origin trial desktop last

144

Origin trial extension 1 end milestone

147

DevTrial on desktop

137



Anticipated spec changes

Open questions about a feature may be a source of future web compat or interop issues. Please list open issues (e.g. links to known github issues in the project for the feature specification) whose resolution may introduce web compat/interop risk (e.g., changing to naming or structure of the API in a non-backward-compatible way).

Params may be re-added after addressing interop concerns: https://github.com/webmachinelearning/prompt-api/issues/170

Identifiers have been renamed for clarity before Web GA launch: https://github.com/webmachinelearning/prompt-api/issues/177

Any post-launch additive changes should be backwards compatible: e.g. tool use, multimodal sampling info/options and outputs, session history access, model info/options, etc.


Link to entry on the Chrome Platform Status

https://chromestatus.com/feature/5134603979063296?gate=5123192519393280


Links to previous Intent discussions

Intent to Prototype: https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CAM0wra_LXU8KkcVJ0x%3DzYa4h_sC3FaHGdaoM59FNwwtRAsOALQ%40mail.gmail.com

Intent to Experiment: https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CAM0wra9oT0jygAYT00WPp0_wtZ-znrB2OdZ6GQb%2B3thFLP19pA%40mail.gmail.com

Intent to Extend Experiment 1: https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CAJcT_ZhyheBntZHMEwFJA%3DuhpkWmDx8yFieL5E5g%2Bwp5UA0mzQ%40mail.gmail.com


This intent message was generated by Chrome Platform Status.

Mike Taylor

Apr 3, 2026, 3:53:12 PM
to Deepti Bogadi, blink-dev, Mike Wasserman, Kenji Baheux, Etienne Noël, Rob Kochman

LGTM1

The only thing I was going to request was already taken care of: pinging the Mozilla issue and providing a pointer to this thread (since it's been about a year - maybe their position has evolved).

--
You received this message because you are subscribed to the Google Groups "blink-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to blink-dev+...@chromium.org .
To view this discussion visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CAJcT_Zj73wjXZfmMcpQRWePp-H%3D5LzxYBOnasViYcn%3DFzY2vVQ%40mail.gmail.com .

Rick Byers

Apr 6, 2026, 4:23:22 PM
to Mike Taylor, Deepti Bogadi, blink-dev, Mike Wasserman, Kenji Baheux, Etienne Noël, Rob Kochman
Can you elaborate on this? I see 14 issues were filed in the spec repo based on TAG feedback, but all are still 'open' with little comment. As you mentioned below, the interoperable model parameters one was addressed by leaving these as "experimental", so I assume they are not covered by this I2S. 

Many of these issues don't seem particularly actionable to me - they represent inherent risks in this space which, personally, I think we should be comfortable taking, learning from and iterating on post-ship. It's fine to disagree with TAG on this (especially since even TAG couldn't agree among themselves - the issue is formally 'lacks consensus'). But maybe there are still one or two things we should be doing  now to reduce future interop risk?

Can you do a triage pass over the issues and flag which are fine to leave for future consideration (as a non-breaking enhancement), which have been mitigated for now (e.g. model params), and which are ones the group just disagrees with and wants to close? If there are any which seem like they have a good chance of leading to future interop risk, just call those out. Personally, interop risk here seems unavoidable, and I appreciate the amount of effort that's gone into collaborating between two different implementations (Google and Microsoft). Given the developer demand I do think we should be shipping; I just want the transparency of being clear on our thinking at this point to maximize learning.


Interoperability and Compatibility

The Prompt API is designed to provide a stable and interoperable surface for language model interactions, acknowledging the inherent diversity and non-deterministic nature of underlying models. Variance in behaviors and responses is a well understood expectation amongst developers employing this technology, and this API aims to provide an interoperable framework for consistent web platform access across browsers and models.


The Prompt API specifically aims to maximize compatibility by:

- Codifying an interoperable API surface for generalized language model interactions, so developers can write code that works across different browser engines and models. This surface has demonstrated compatibility with models from Google and Microsoft, and been polyfilled by extensions and JS frameworks, using different backends.

- Enforcing objective response conformance with constraints that ensure output adheres to known JSON schemas or regexes for interoperable processing of generated text.

- Supporting progressive enhancement patterns, by offering availability signals that encapsulate device and model support dimensions, and encourage developers to consider this API as one option among varied compatible AI offerings, including developer supplied models and cloud-based services.


Shipping this API provides a critical opportunity to broaden real-world implementation experience, explore future refinements, and collaborate with the web community on interoperable model diversity within a robust, predictable platform surface.

Agreed, thank you!


Is this feature fully tested by web-platform-tests ?

No

Web platform tests cover the API surface adequately: https://wpt.fyi/results/ai/language-model These attempt to mitigate execution environments differences, e.g. stub/full implementations (content_shell, chrome), and device/model states (unavailable, downloadable, downloaded).

Yeah that's probably the most we can ask for in this intent. However most of these tests appear to be failing on wpt.fyi. Can you please triage all the failures on wpt.fyi and either fix them so they're passing or explain why the failure doesn't represent a real interop issue?

The core responses of real models can be unpredictable (especially without sampling parameters) and may cause inconsistent test results, but some facets are more readily testable, e.g. the adherence to structured output response constraints. Test coverage and reliability improvements are ongoing, including planning for WebDriver extensions.

Are there tracking issues in the CG regarding conformance testing that you can point to? Are there discussions somewhere around benchmarks / eval suites? Just like with our other big non-deterministic area of competition in browsers, web performance, having open benchmarks can be very useful for promoting compatibility.

Mike Wasserman

Apr 6, 2026, 6:04:47 PM
to blink-dev, Rick Byers, Deepti Bogadi, blink-dev, Mike Wasserman, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor
Regarding TAG review status:
The scope of issues raised is broad. We conducted thoughtful Google-internal reviews and triage, and prioritized addressing issues that couldn't be addressed after launch via backwards-compatible spec and implementation changes. In that regard, sampling parameters are not part of this initial launch proposal, and various API identifiers have been renamed. We still need to respond to the rest; some touch on inherent risks of platform advancements in this nascent domain, and many could benefit from outlining non-normative best practices for implementers to respect device resources and real-world costs. We will expedite sharing those responses.

Regarding wpt.fyi test failures:
Almost all failures are model download timeouts; we hope to devise a way to retain or sideload models in wpt.fyi's continuous integration infrastructure.

Regarding conformance testing and benchmarks / eval suites:
Web platform tests should continually expand to include more substantive conformance coverage, ideally including support for more objective analysis of natural language. We have also conducted initial research on cross-browser interop and are collaborating on followups, including Microsoft fixing several bugs and planning model training around those shared use cases of interest. Microsoft collaborators also explored a very rough quality benchmark. We are also discussing, internally and with partners, use cases and data sets that can be shared publicly and used to represent performance baselines on different representative devices. We don't have a formal open/public discussion yet, but we have been iterating on some eval tests, e.g. https://web-ai.studio/cortex . We will continue to iterate and engage with the goal of formalizing open quality and performance eval suites.

Adding color to the "strongly positive" web developers feedback statement:
We have run a number of hackathons, especially the Google Chrome Built-in AI Challenge 2025, and have seen a good amount of engagement from specific partner explorations. We also get anecdotal positive organic signals from press articles, forum threads, and direct feedback.


Vladimir Levin

Apr 7, 2026, 10:45:10 AM
to Mike Wasserman, blink-dev, Rick Byers, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor
On Mon, Apr 6, 2026 at 6:04 PM Mike Wasserman < m...@chromium.org > wrote:
Regarding TAG review status:
The scope of issues raised is broad. We conducted thoughtful google-internal reviews and triage , and prioritized addressing issues that couldn't be addressed after launch by backwards-compatible spec and implementation changes. In that regard, sampling parameters are not part of this initial launch proposal, and various API identifiers have been renamed.

Is the explainer up-to-date with the proposed changes? I'm struggling to understand exactly what will ship if this intent is approved. As an aside, the explainer starts with "This proposal is an early design sketch ...", which I suspect should be updated to indicate that this is close to shipping.

Also in the explainer: "The following features have been recently renamed. The legacy aliases are deprecated, and clients should update their code ...". Was this an OT or developer trial or just people trying this out?

Generally, I agree that we should ship something in this space like this API, so thank you for working on this! I'm a little worried about the stability of the API shape, though. Specifically, I'm wondering if we've had developer feedback on the shape of the API and whether it accomplishes the needed goals. Can you comment on this? As a concrete example, specifying "role" as hard-coded strings, either "user" or "assistant", seems a little brittle to me compared to some declared constant in LanguageModel. Again, I'm not suggesting that these should change, but simply asking if real-world developers had a chance to explore this feature and provide feedback.

Thanks!
Vlad
To view this discussion visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/fd88dc6e-a5fd-4dfd-9a8a-78971c13d363n%40chromium.org .

Mike Wasserman

Apr 7, 2026, 3:00:02 PM
to blink-dev, Vladimir Levin, blink-dev, Rick Byers, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Mike Wasserman
See responses inline, thanks!

On Tuesday, April 7, 2026 at 7:45:10 AM UTC-7 Vladimir Levin wrote:
On Mon, Apr 6, 2026 at 6:04 PM Mike Wasserman < m...@chromium.org > wrote:
Regarding TAG review status:
The scope of issues raised is broad. We conducted thoughtful google-internal reviews and triage , and prioritized addressing issues that couldn't be addressed after launch by backwards-compatible spec and implementation changes. In that regard, sampling parameters are not part of this initial launch proposal, and various API identifiers have been renamed.

Is the explainer up-to-date with the proposed changes? I'm struggling to understand exactly what will ship if this intent is approved. As an aside, the explainer starts with "This proposal is an early design sketch ...", which I suspect should be updated to indicate that this is close to shipping.

Thanks for this question. While we've kept many aspects of the explainer updated with changes for the initial web platform API launch, we've also been representing enhancements slated for experimentation after the initial launch (e.g. tool use, sampling parameters), and other parts have grown stale. I'm preparing an explainer PR to update and clarify this document in the next day or two.


Also in the explainer: "The following features have been recently renamed. The legacy aliases are deprecated, and clients should update their code ...". Was this an OT or developer trial or just people trying this out?

The API has been in a Web Origin Trial since Chrome 139 (August 2025), and launched to general availability for Chrome Extensions in Chrome 138 (June 2025). TAG feedback provided in Nov 2025 suggested these renames; the spec and impl renames were performed in Feb 2026. The legacy aliases are deprecated in extension contexts and will be removed in due course to align with the Web API.


Generally, I agree that we should ship something in this space like this API so thank you for working on this! I'm a little worried about stability of the API shape though. Specifically, I'm wondering if we've had developer feedback on the shape of the API and that it accomplishes the needed goals. Can you comment on this? As a concrete example, specifying "role" as hard-coded strings either "user" or "assistant" seems a little brittle to me as compared to some declared constant in LanguageModel. Again, I'm not suggesting that these should change, but simply asking if real-world developers had a chance to explore this feature and provide feedback

We believe the API shape has stabilized through comprehensive experimentation phases (a lengthy Extension+Web Dev Trial, early partner programs and hackathons, the Extension Origin Trial and GA launch, an extended Web Origin Trial), and is ready for shipping to the open Web in Chrome. Most developer feedback about the API shape came in the early Dev Trial stages, which precipitated many improvements for ergonomics and extensibility, reflected in commits during the first half of 2025 by our retired spec editor. These included the addition of an append() method, message and content sequence refinements (e.g. interleaving multimodal content of the same role), continued support for the `prompt("write a poem")` shorthand, and more.

Since then, many real-world developers have integrated this API in real sites and exploratory demos, while others have built polyfills and incorporated it into JS toolkits. We've also solicited feedback through additional channels (e.g. DevTools console messages on session creation), and recent API shape feedback has generally been positive; no issues have been raised regarding the LanguageModelMessageRole enum and its use in the LanguageModelMessage dictionary. Our team is happy to continue discussing any thoughtful feedback through issues filed in the spec repository or Chromium issue tracker.
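For readers following along, a sketch of the message shape under discussion: `role` is a plain string from the LanguageModelMessageRole enum; the spec draft uses "system", "user", and "assistant" (with the system message, if any, first). The example values below are illustrative.

```javascript
// Sketch of a message history passed as initialPrompts to LanguageModel.create().
// Role strings outside the enum are rejected at the binding layer (a TypeError),
// which mitigates the typo risk of bare strings versus declared constants.
const initialPrompts = [
  { role: 'system', content: 'You are a terse poetry expert.' },
  { role: 'user', content: 'Write a haiku about rain.' },
  { role: 'assistant', content: 'Soft rain on the roof...' },
];
```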


Thanks!
Vlad

Rick Byers

Apr 7, 2026, 3:43:45 PM
to Mike Wasserman, blink-dev, Vladimir Levin, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor
On Tue, Apr 7, 2026 at 3:00 PM Mike Wasserman < m...@chromium.org > wrote:
See responses inline, thanks!

On Tuesday, April 7, 2026 at 7:45:10 AM UTC-7 Vladimir Levin wrote:
On Mon, Apr 6, 2026 at 6:04 PM Mike Wasserman < m...@chromium.org > wrote:
Regarding TAG review status:
The scope of issues raised is broad. We conducted thoughtful google-internal reviews and triage , and prioritized addressing issues that couldn't be addressed after launch by backwards-compatible spec and implementation changes. In that regard, sampling parameters are not part of this initial launch proposal, and various API identifiers have been renamed.

Thanks Mike. From the triage doc I see there were three ship blockers identified and now addressed:

I agree the others are non-blocking. Thank you! Broadly from all the public dialog and collaboration I've seen I trust you and the team to keep investing in the tricky tradeoffs here and expanding the industry alignment. So this is good enough for me for now.

We still need to respond to the rest; some touch on inherent risks of platform advancements in this nascent domain; many could benefit from outlining non-normative best practices for implementers to respect device resources and real world costs. We will expedite sharing those responses.

Regarding wpt.fyi test failures:
Almost all failures are model download timeouts; we hope to devise a way to retain or sideload models in wpt.fyi's continuous integration infrastructure.

Thanks. Do you have a report of some kind about what's passing vs. failing in a harness that avoids the model download issue? I ran a few of the tests myself manually (on wpt.live) and they passed. So I'm assuming we're broadly in good shape, right? Just want to know details (a few known failures with bugs is fine). 

Mike Wasserman

Apr 7, 2026, 7:16:35 PM
to blink-dev, Rick Byers, blink-dev, Vladimir Levin, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Mike Wasserman
Thanks for asking. Internally, we have a separate Python script that side-loads pre-downloaded models and configurations to run WPTs on Chrome builds; those are run continuously on the FYI internal.optimization_guide "wpt bots" with a separate set of expectations in /third_party/blink/web_tests/AIExpectations, which enumerates known failures (inherent side-loading availability values, and hopefully solvable platform-specific timeouts). There's a single new Mac-x64 failure there that I've been meaning to address this week.

I also took the liberty of manually running the 28 WPT files in https://wpt.live/ai/language-model/ on Chrome Canary and observed: 90/95 tests that pertain to proposed stable API surfaces pass. Two failures are missing `gc()` WPT method references, two pertain to multimodal session creation, and the last is an off-by-one in context usage measurement. I will file issues and follow up on all of these. There are another 43 tests related to experiments for params and tool use that aren't part of the initial API launch.

Vladimir Levin

Apr 8, 2026, 10:47:23 AM
to Mike Wasserman, blink-dev, Rick Byers, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor
Thank you for your responses.

LGTM2

Thanks,
Vlad

To unsubscribe from this group and stop receiving emails from it, send an email to blink-dev+...@chromium.org .
To view this discussion visit https://groups.google.com/a/chromium.org/d/msgid/blink-dev/25e6f203-87b6-4844-9edf-fa301ad36e3an%40chromium.org .

Rick Byers

Apr 8, 2026, 11:12:11 AM
to Vladimir Levin, Mike Wasserman, blink-dev, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor
Thank you for the additional details. Sounds good, LGTM3

Alex Russell

Apr 30, 2026, 5:19:02 PM
to blink-dev, Rick Byers, Mike Wasserman, blink-dev, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Vladimir Levin
Hey folks,

First, an apology: I've had concerns about this API for a long while but simply missed that we had 3 LGTMs for 148. We're now hours from branch, and it would be pretty anti-social to -1 at this point. However, I think we need to consider Finch-ing the Prompt API off for 148 and moving back to OT for a few reasons.

First, none of the OT feedback I've seen, or engagement that the Edge team has separately had with developers, suggests that an unversioned multi-modal client-side model will be able to avoid serious application breakage if used as the basis for core app functionality. I've mentioned to various folks over the past year+ that it would be useful if we had a way to call out capabilities, versions, or other parameters at construction time, but the spec doesn't seem to have any discussion of that flexibility today:

https://webmachinelearning.github.io/prompt-api/#dictdef-languagemodelcreateoptions

Microsoft's own experiments with alternative models suggest getting to interoperability with multi-modality via a different training stack is not a given, but feedback about the API only suggests workarounds, rather than API enhancements we'd expect in response to OT feedback:

https://github.com/webmachinelearning/prompt-api/issues/202

This bleeds into a second, related concern: there hasn't been much in the way of what I'd characterise as "pull"-based feedback from developers about this API. But it's hard to judge based on the public record. I might have missed it, but OT interest hasn't been captured here, in the original I2E, or in the extension request thread:

https://groups.google.com/a/chromium.org/g/blink-dev/c/6uBwiiFohAU/m/WhaKAB9fAAAJ
https://groups.google.com/a/chromium.org/g/blink-dev/c/qs059tBaQMI/m/dWDP2ZvpAQAJ

Should we be looking somewhere else to get a sense for developer enthusiasm?

It's also a bit concerning re: our process for features we're the first to ship to see responses like this, rather than proposed enhancements (even if they're for a future iteration):

https://github.com/webmachinelearning/prompt-api/issues/187

Shipping subsets for v1 is fine; avoiding design work in response to feedback, less so.

If partners are interested in seeing this ship ASAP, I'd be swayed (per usual) by them weighing in here. It would be particularly convincing if they could explain why they are not worried about the API changing behaviour in response to model upgrades, and it would be *most* persuasive if they'd tried both Edge's OTs using Phi in addition to Google's Gemini-based behaviour.

Without that, I'm not sure what there is to do but flip the "Needs Work" flag, avoid shipping downstream in Edge, and suggest we all go back to OT and sort out interop, versioning, and modality questions.

Given that time is short for 148, I hope we can get some useful on-the-record input from developers and/or persuasive details from Google's OT partners.

Thanks,

Alex

Reilly Grant

May 1, 2026, 12:04:31 PM
to Alex Russell, blink-dev, Rick Byers, Mike Wasserman, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Vladimir Levin
On Thu, Apr 30, 2026 at 2:19 PM Alex Russell < sligh...@chromium.org > wrote:
Hey folks,

First, an apology: I've had concerns about this API for a long while but simply missed that we had 3 LGTMs for 148. We're now hours from branch, and it would be pretty anti-social to -1 at this point. However, I think we need to consider Finch-ing the Prompt API off for 148 and moving back to OT for a few reasons.

First, none of the OT feedback I've seen, or engagement that the Edge team has separately had with developers, suggests that an unversioned multi-modal client-side model will be able to avoid serious application breakage if used as the basis for core app functionality. I've mentioned to various folks over the past year+ that it would be useful if we had a way to call out capabilities, versions, or other parameters at construction time, but the spec doesn't seem to have any discussion of that flexibility today:

https://webmachinelearning.github.io/prompt-api/#dictdef-languagemodelcreateoptions

The parameters in LanguageModelCreateCoreOptions (in particular, expectedInputs and expectedOutputs) which can be passed to both availability() and create() allow the developer to check for the availability of and request both language support and multi-modality capabilities. For example:


const session = await LanguageModel.create({
  expectedInputs: [
    { type: "text", languages: ["en"] },
    { type: "audio" },
    { type: "image" },
  ],
  expectedOutputs: [{ type: "text", languages: ["en"] }],
});


This seems to cover everything you’re asking for other than versions, which is something I think we should be opinionated about: The browser picks the model.


This is the fundamental difference between these "built-in AI" APIs and other options like WebNN. The tradeoff we're making here is that the user benefits from having a single model (or a small set of models focused on specific tasks, e.g. translation) rather than each developer downloading their own large model, consuming bandwidth, disk, and memory. I could be convinced to reveal the model version to the developer; however, I am concerned this would result in a situation similar to what we see today with user-agent sniffing, rather than capability-detection-based progressive enhancement.
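The capability-detection pattern described above can be sketched as follows (illustrative only; the helper name is hypothetical, and `LM` stands in for the global `LanguageModel` object): probe availability() for the needed modalities, then create a session or fall back, without ever branching on a model name or version.

```javascript
// Sketch of capability detection with availability(), which per the explainer
// resolves to "unavailable", "downloadable", "downloading", or "available".
async function createCaptioningSession(LM) {
  const options = {
    expectedInputs: [{ type: "text" }, { type: "image" }],
    expectedOutputs: [{ type: "text", languages: ["en"] }],
  };
  const status = await LM.availability(options);
  if (status === "unavailable") return null; // caller falls back (e.g. server-side)
  // "available", "downloadable", and "downloading" all proceed; create()
  // resolves once the model (and any pending download) is ready.
  return LM.create(options);
}
```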

Microsoft's own experiments with alternative models suggest getting to interoperability with multi-modality via a different training stack is not a given,

We meet regularly with our counterparts on the Edge team and this concern is news to me. I’d like to hear more details.
but feedback about the API only suggests workarounds, rather than API enhancements we'd expect in response to OT feedback:

https://github.com/webmachinelearning/prompt-api/issues/202

This feature request is quite fresh but our team agrees with the reporter that there should be better capability detection here. We suggested they file that issue based on a discussion in the Chromium issue tracker.

This bleeds into a second, related concern: there hasn't been much in the way of what I'd characterise as "pull"-based feedback from developers about this API. But it's hard to judge based on the public record. I might have missed it, but OT interest hasn't been captured here, in the original I2E, or in the extension request thread:

https://groups.google.com/a/chromium.org/g/blink-dev/c/6uBwiiFohAU/m/WhaKAB9fAAAJ
https://groups.google.com/a/chromium.org/g/blink-dev/c/qs059tBaQMI/m/dWDP2ZvpAQAJ

Should we be looking somewhere else to get a sense for developer enthusiasm?

Agreed, we dropped the ball on publishing developer feedback and are working on a better summary.
It's also a bit concerning re: our process for features we're the first to ship to see responses like this, rather than proposed enhancements (even if they're for a future iteration):

https://github.com/webmachinelearning/prompt-api/issues/187

Shipping subsets for v1 is fine; avoiding design work in response to feedback, less so.

Batched inference is a good feature request but supporting it well (versus conversational inference) will require a different API and underlying inference engine features which may not be available. I think it’s reasonable to acknowledge the feature request but flag it as out-of-scope for v1.
If partners are interested in seeing this ship ASAP, I'd be swayed (per usual) by them weighing in here. It would be particularly convincing if they could explain why they are not worried about the API changing behaviour in response to model upgrades, and it would be *most* persuasive if they'd tried both Edge's OTs using Phi in addition to Google's Gemini-based behaviour.

I agree. It sounds like Edge has some developer feedback (or maybe it’s just internal testing) that we haven’t seen yet. I’ve shared your concerns about the interoperability issues inherent to this API. I think developers successfully building sites that work in Chrome and Edge, as well as our ability to upgrade the model (you’ll note that Google just released Gemma 4, stay tuned) without breaking existing sites is the only true test of whether the concerns are well-founded.

Rick Byers

May 1, 2026, 4:36:18 PM
to Reilly Grant, Alex Russell, blink-dev, Mike Wasserman, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Vladimir Levin
From a procedural point of view I don't personally think we should be considering flagging this off at this stage. It already went out in our early stable release last week and will be deployed to all Chrome stable users in a matter of days. Using a kill switch at this late stage is generally reserved for emergencies to mitigate significant user harm (and comes with its own risks, since we have not been testing the off state in our canary/dev/beta population). While the strength of the Edge team's concern is apparently a surprise, I don't think there's actually any significant new information about the cost/benefit risk tradeoff here.

Instead I suggest we try to focus on finding and fixing any interop issues in practice. As I said on the Mozilla standards position thread, I'm personally supportive of Chrome making breaking changes where necessary to break calcification around model-specific quirks (as we've done in the past in the name of interop, e.g. when fixing WebKit border image quirks, or breaking mobile Gmail in order to match the spec and Firefox). Reilly says the team is working to share more developer signals. Once we have some real-world use cases to point at we can focus together on driving interop in practice.

I'll note that there are some parallels here to how the browser community approaches performance, the one big longstanding aspect of probabilistic platform behavior (alongside smaller examples like the Web Speech API). Most applications aren't that sensitive to performance differences, and I expect the same here (but could be wrong!). But for some applications, a performance difference can mean the difference between an app being usable or not. Browsers largely compete on performance implementations but collaborate on benchmarks and API design. It has its downsides, but I think it's also done a lot of good to drive user-positive innovation and investment through constructive competition.

Rick

Mike Wasserman

May 1, 2026, 6:31:16 PM
to blink-dev, Rick Byers, Alex Russell, blink-dev, Mike Wasserman, Deepti Bogadi, Kenji Baheux, Etienne Noël, Rob Kochman, Mike Taylor, Vladimir Levin, Reilly Grant

To address some additional requests from Alex:


Regarding open API issues and feature requests:

This quote from earlier in the thread might have been overlooked: "We conducted thoughtful google-internal reviews and triage, and prioritized addressing issues that couldn't be addressed after launch."


We know we have work to respond more thoughtfully to each issue, but we have exercised diligence to ensure all open issues could be addressed by additive API enhancements and backwards-compatible implementation improvements.


Please do raise any concrete counter-examples of API design traits or specified functionality that preclude interoperable implementations. As Rick says, we will break APIs after launch for the unequivocal health of the platform.


Regarding developer feedback:

I acknowledge that the initial evidence provided regarding developer sentiment could be fresher and more complete. We are compiling more recent examples of developer feedback to better reflect the broad demand we are seeing for this capability.


Unfortunately, I’m unable to name and quote partners in this particular forum without advance consent; we are actively working to provide an approved list.


Meanwhile, here are a couple published case studies:

https://developer.chrome.com/blog/prompt-api-blog-cyberagent

https://developer.chrome.com/blog/ai-guessing-game


