Overview (Dialogflow)

Note: Actions that use Interactive Canvas take slightly longer for Google's review team to test and approve. Allow extra time in your release process to account for this.

Interactive Canvas is a framework built on the Google Assistant that allows
developers to add visual, immersive experiences to conversational Actions.
This visual experience is an interactive web app that the Assistant sends as a
response to the user in conversation. Unlike traditional rich responses that exist
in-line in an Assistant conversation, the Interactive Canvas web app renders
as a full-screen web view.
You should use Interactive Canvas if you want to do any of the following in
your Action:
- Create full-screen visuals
- Create custom animations and transitions
- Do data visualization
- Create custom layouts and GUIs
Figure 1. An interactive game built using Interactive Canvas.

Note: At this time, Google is only approving Actions that are gaming experiences.
Supported devices
Interactive Canvas is currently available on the following devices:

- Smart Displays
- Google Nest Hubs
- Android mobile devices

Note: For Canvas to work on Android mobile devices, your AGSA (Android Google Search App) must be version 9.86 or above.
How it works
An Action that uses Interactive Canvas works similarly to a regular conversational
Action. The user still has a back-and-forth conversation with the Assistant to
fulfill their goal; however, instead of returning responses in-line in the
conversation, an Interactive Canvas Action sends a response to the user that
opens a full-screen web app. The user continues to interact with the web app
through voice or touch until the conversation is over.
There are several components to an Action that uses Interactive Canvas:
- Conversational Action: An Action that uses a conversational interface to
  fulfill user requests. Interactive Canvas Actions use web views to render
  responses instead of rich cards or simple text and voice responses.
  Conversational Actions use the following components:
  - Dialogflow agent: A project in Dialogflow that you customize to converse
    with your Action users.
  - Fulfillment: Code deployed as a webhook that implements the conversational
    logic for your Dialogflow agent and communicates with your web app.
- Web app: A front-end web app with customized visuals that your Action sends
  as a response to users during a conversation. You build the web app with web
  standards like HTML, JavaScript, and CSS.
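As a sketch of that structure, a minimal Canvas web app only needs to load the Interactive Canvas library before its own logic. The file names below are illustrative, and you should confirm the library URL against the current reference documentation:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Cool Colors</title>
    <link rel="stylesheet" href="css/main.css">
    <!-- Interactive Canvas library; provides the interactiveCanvas global. -->
    <script src="https://www.gstatic.com/assistant/interactivecanvas/api/interactive_canvas.min.js"></script>
  </head>
  <body>
    <!-- Your visuals render here as a full-screen web view. -->
    <script src="js/main.js"></script>
  </body>
</html>
```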
The conversational Action and web app communicate with each other using the
following:
- Interactive Canvas API: A JavaScript API that you include in the web app to
  enable communication between the web app and your conversational Action.
- HtmlResponse: A response that contains a URL of the web app and data to pass
  to it. You can use the Node.js or Java client libraries to return an
  HtmlResponse.
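On the fulfillment side, returning an HtmlResponse with the Node.js client library can be sketched as follows. The intent name, hosting URL, and the color field inside data are assumptions for a hypothetical color-changing Action, not fixed names:

```javascript
// Typical client-library usage (shown as a sketch):
//
//   const { dialogflow, HtmlResponse } = require('actions-on-google');
//   const app = dialogflow();
//   app.intent('change color', (conv, params) => {
//     conv.ask(new HtmlResponse(htmlResponseOptions(params.color)));
//   });

// Builds the options for an HtmlResponse: the web app URL plus the data
// object that is delivered to the web app's onUpdate callback.
function htmlResponseOptions(color) {
  return {
    url: 'https://example.com/cool-colors/index.html', // where the web app is hosted (hypothetical)
    data: { color },                                   // passed through to onUpdate
  };
}
```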
To illustrate how Interactive Canvas works, imagine a hypothetical Action
called Cool Colors that changes the device screen color to a color the user
specifies. After the user invokes the Action, the flow looks like the following:
1. The user says "Turn the screen blue" to the Assistant device.
2. The Actions on Google platform routes the user's request to Dialogflow to
   match an intent.
3. The fulfillment for the matched intent runs and an HtmlResponse is sent to
   the device. The device uses the URL to load the web app if it has not yet
   been loaded.
4. When the web app loads, it registers callbacks with the interactiveCanvas
   API. In our example, the fulfillment sends an HtmlResponse with a data
   field that includes a variable with the value of blue; the data object's
   value is then passed into the registered onUpdate callback of the web app.
5. The custom logic for your web app reads the data value of the HtmlResponse
   and makes the defined changes. In our example, this turns the screen blue.
6. interactiveCanvas sends the callback update to the device.
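The web-app side of this flow can be sketched as follows. Here interactiveCanvas is the global provided by the Canvas library script tag, and the data.color field name is an assumption matching the hypothetical Cool Colors fulfillment:

```javascript
// Web-app logic for the hypothetical Cool Colors example.
// Applies a color update received from fulfillment via the data object.
function applyUpdate(data) {
  const color = data && data.color; // field name is an assumption for this sketch
  if (color && typeof document !== 'undefined') {
    document.body.style.backgroundColor = color; // e.g., turn the screen blue
  }
  return color; // returned so the logic can be exercised outside a browser
}

const callbacks = { onUpdate: applyUpdate };

// Register the callbacks once the Canvas library is available
// (i.e., when running on a real device or Smart Display).
if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready(callbacks);
}
```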
Next steps
To learn how to build an Interactive Canvas Action,
see the Build Overview page.
To see the code for a complete Interactive Canvas Action, see the sample.