Build your web app (Dialogflow)

A web app is the UI for an Action that uses Interactive Canvas. You can use existing web technologies (HTML, CSS, and JavaScript) to design and develop your web app. For the most part, Interactive Canvas is able to render web content like a browser, but there are a few restrictions enforced for user privacy and security. Before you begin designing your UI, consider the design principles outlined in the Design guidelines section.

The HTML and JavaScript for your web app do the following:

  • Register Interactive Canvas event callbacks.
  • Initialize the Interactive Canvas JavaScript library.
  • Provide custom logic for updating your web app based on the state.
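Taken together, these responsibilities can be sketched in a few lines. This is an illustrative sketch only: the real `window.interactiveCanvas` object is supplied by the library loaded in your HTML, and is stubbed here so the snippet is self-contained.

```javascript
// Stub of the Interactive Canvas API object, for illustration only; in a real
// web app, window.interactiveCanvas is provided by interactive_canvas.min.js.
const interactiveCanvas = {
  ready(callbacks) { this._callbacks = callbacks; },
};

// Custom logic for updating the web app based on the state
let state = {spin: false};
function render() { return `spin=${state.spin}`; }

// Event callbacks invoked by Interactive Canvas
const callbacks = {
  onUpdate(data) {
    state = Object.assign({}, state, data); // merge the update into local state
  },
};

// Initialize the library by registering the callbacks
interactiveCanvas.ready(callbacks);

// Simulate an update arriving from fulfillment
interactiveCanvas._callbacks.onUpdate({spin: true});
console.log(render()); // prints "spin=true"
```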

This page goes over the recommended ways to build your web app, how to enable communication between your web app and fulfillment, and general guidelines and restrictions.

While you can use any method to build your UI, Google recommends using an established rendering library; the sample on this page uses PixiJS for 2D graphics.

Architecture

Google strongly recommends using a single-page application architecture. This approach allows for optimal performance and supports a continuous conversational user experience. Interactive Canvas can be used in conjunction with front-end frameworks like Vue, Angular, and React, which help with state management.
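If you prefer not to pull in a framework, even a small hand-rolled store can centralize state management, so that every update from fulfillment flows through one dispatch point. The following sketch is not part of the Interactive Canvas API; all names are illustrative:

```javascript
// Minimal state-store sketch: updates are merged in one place, and subscribers
// re-render the canvas (e.g., redraw PixiJS sprites) from the new state.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    dispatch: (partial) => {
      state = Object.assign({}, state, partial);
      listeners.forEach((fn) => fn(state));
    },
  };
}

const store = createStore({tint: 0x00FF00, spin: true});
let lastRendered = null;
store.subscribe((s) => { lastRendered = s; }); // redraw hook goes here
store.dispatch({spin: false});
console.log(lastRendered.spin); // prints "false"
```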

HTML file

The HTML file defines how your UI looks. This file also loads the Interactive Canvas JavaScript library, which enables communication between your web app and your conversational Action.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <title>Immersive Canvas Sample</title>
  <!-- Disable favicon requests -->
  <link rel="shortcut icon" type="image/x-icon" href="data:image/x-icon;,">
  <!-- Load Interactive Canvas JavaScript -->
  <script src="https://www.gstatic.com/assistant/df-asdk/interactivecanvas/api/interactive_canvas.min.js"></script>
  <!-- Load PixiJS for graphics rendering -->
  <script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.8.7/pixi.min.js"></script>
  <!-- Load Stats.js for fps monitoring -->
  <script src="https://cdnjs.cloudflare.com/ajax/libs/stats.js/r16/Stats.min.js"></script>
  <!-- Load custom CSS -->
  <link rel="stylesheet" href="css/main.css">
</head>
<body>
  <div id="view" class="view">
    <div class="debug">
      <div class="stats"></div>
      <div class="logs"></div>
    </div>
  </div>
  <!-- Load custom JavaScript after elements are on page -->
  <script src="js/main.js"></script>
  <script src="js/log.js"></script>
</body>
</html>

Communicate between fulfillment and web app

Now that you've built your web app and fulfillment and loaded the Interactive Canvas library in your web app, you need to define how your web app and fulfillment interact. To do this, modify the files that contain your web app logic.

action.js

This file contains the code to define callbacks and invoke methods through interactiveCanvas. Callbacks allow your web app to respond to information or requests from the conversational Action, while methods provide a way to send information or requests to the conversational Action.

Add interactiveCanvas.ready(callbacks); to your web app to initialize and register callbacks:

// action.js
class Action {
  constructor(scene) {
    this.canvas = window.interactiveCanvas;
    this.scene = scene;
    const that = this;
    this.commands = {
      TINT: function(data) {
        that.scene.sprite.tint = data.tint;
      },
      SPIN: function(data) {
        that.scene.sprite.spin = data.spin;
      },
      RESTART_GAME: function(data) {
        that.scene.button.texture = that.scene.button.textureButton;
        that.scene.sprite.spin = true;
        that.scene.sprite.tint = 0x0000FF; // blue
        that.scene.sprite.rotation = 0;
      },
    };
  }

  /**
   * Register all callbacks used by Interactive Canvas
   * executed during scene creation time.
   */
  setCallbacks() {
    const that = this;
    // declare interactive canvas callbacks
    const callbacks = {
      onUpdate(data) {
        try {
          that.commands[data.command.toUpperCase()](data);
        } catch (e) {
          // do nothing, when no command is sent or found
        }
      },
    };
    // called by the Interactive Canvas web app once web app has loaded to
    // register callbacks
    this.canvas.ready(callbacks);
  }
}

main.js

This file constructs the scene for your web app. In this example, it also handles the success and failure cases of the promise returned by sendTextQuery(). The following is an excerpt from main.js:

// main.js
const view = document.getElementById('view');

// initialize rendering and set correct sizing
this.renderer = PIXI.autoDetectRenderer({
  transparent: true,
  antialias: true,
  resolution: this.ratio,
  width: view.clientWidth,
  height: view.clientHeight,
});
view.appendChild(this.element);

// center stage and normalize scaling for all resolutions
this.stage = new PIXI.Container();
this.stage.position.set(view.clientWidth / 2, view.clientHeight / 2);
this.stage.scale.set(Math.max(this.renderer.width, this.renderer.height) / 1024);

// load a sprite from a svg file
this.sprite = PIXI.Sprite.from('triangle.svg');
this.sprite.anchor.set(0.5);
this.sprite.tint = 0x00FF00; // green
this.sprite.spin = true;
this.stage.addChild(this.sprite);

// toggle spin on touch events of the triangle
this.sprite.interactive = true;
this.sprite.buttonMode = true;
this.sprite.on('pointerdown', () => {
  this.sprite.spin = !this.sprite.spin;
});

Support touch interactions

Your Interactive Canvas Action can respond to your user's touch as well as their vocal inputs. Per the Interactive Canvas design guidelines, you should develop your Action to be "voice-first". That being said, some Smart Displays support touch interactions.

Supporting touch is similar to supporting conversational responses; however, instead of a vocal response from the user, your client-side JavaScript looks for touch interactions and uses those to change elements in the web app.

You can see an example of this in the sample, which uses the PixiJS library:

...
this.sprite = PIXI.Sprite.from('triangle.svg');
...
this.sprite.interactive = true; // Enables interaction events
this.sprite.buttonMode = true; // Changes `cursor` property to `pointer` for PointerEvent
this.sprite.on('pointerdown', () => {
  this.sprite.spin = !this.sprite.spin;
});
...

In this case, the value of the spin variable is sent through the interactiveCanvas API as an update callback. The fulfillment has logic that triggers an intent based on the value of spin.

...
app.intent('pause', (conv) => {
  conv.ask(`Ok, I paused spinning. What else?`);
  conv.ask(new HtmlResponse({
    data: {
      spin: false,
    },
  }));
});
...

Add more features

Now that you've learned the basics, you can enhance and customize your Action with Canvas-specific APIs. This section explains how to implement these APIs in your Interactive Canvas Action.

sendTextQuery()

The sendTextQuery() method sends text queries to the conversational Action to programmatically invoke an intent. This sample uses sendTextQuery() to restart the triangle-spinning game when the user clicks a button. When the user clicks the "Restart game" button, sendTextQuery() calls the Restart game intent and returns a promise. This promise results in SUCCESS if the intent is triggered and BLOCKED if it is not. The following snippet triggers the intent and handles the success and failure cases of the promise:

// main.js
...
that.action.canvas.sendTextQuery('Restart game')
  .then((res) => {
    if (res.toUpperCase() === 'SUCCESS') {
      console.log(`Request in flight: ${res}`);
      that.button.texture = that.button.textureButtonDisabled;
      that.sprite.spin = false;
    } else {
      console.log(`Request in flight: ${res}`);
    }
  });
...

If the promise results in SUCCESS, the Restart game intent sends an HtmlResponse to your web app:

// index.js
...
app.intent('restart game', (conv) => {
  conv.ask(new HtmlResponse({
    data: {
      command: 'RESTART_GAME',
    },
...

This HtmlResponse triggers the onUpdate() callback, which executes the code in the RESTART_GAME code snippet below:

// action.js
...
RESTART_GAME: function(data) {
  that.scene.button.texture = that.scene.button.textureButton;
  that.scene.sprite.spin = true;
  that.scene.sprite.tint = 0x0000FF; // blue
  that.scene.sprite.rotation = 0;
},
...

onTtsMark()

The onTtsMark() callback is called when you include a <mark> tag with a unique name in your SSML response to the user. In the following excerpts from the Snowman sample, onTtsMark() synchronizes the web app's animation with the corresponding TTS output. When the Action tells the user "Sorry, you lost", the web app spells out the correct word and displays the letters to the user.

The intent Game Over Reveal Word includes a custom mark in the response to the user when they've lost the game:

// index.js
...
app.intent('Game Over Reveal Word', (conv, {word}) => {
  conv.ask(`<speak>Sorry, you lost.<mark name="REVEAL_WORD"/> The word is ${word}.` +
      ` ${PLAY_AGAIN_INSTRUCTIONS}</speak>`);
  conv.ask(new HtmlResponse());
});
...

The following code snippet then registers the onTtsMark() callback, checks the name of the mark, and executes the revealCorrectWord() function, which updates the web app:

// action.js
...
setCallbacks() {
  const that = this;
  // declare assistant canvas action callbacks
  const callbacks = {
    onTtsMark(markName) {
      if (markName === 'REVEAL_WORD') {
        // display the correct word to the user
        that.revealCorrectWord();
      }
    },
...

Restrictions

Take the following restrictions into consideration as you develop your web app:

  • No cookies
  • No local storage
  • No geolocation
  • No camera usage
  • No popups
  • Stay under the 200 MB memory limit
  • A third-party (3P) header occupies the upper portion of the screen
  • No styles can be applied to videos
  • Only one media element may be used at a time
  • No HLS video
  • No Web SQL database
  • No support for the SpeechRecognition interface of the Web Speech API
  • No audio or video recording
  • Dark mode setting not applicable
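Because cookies and local storage are unavailable, state that must outlive the page should live in your fulfillment (for example, in user storage); for purely in-session values, a plain in-memory fallback is enough. A minimal sketch, with all names illustrative:

```javascript
// In-memory stand-in for persistence inside the Interactive Canvas
// environment, where cookies and localStorage are not available. Values
// survive only for the lifetime of the page.
const memoryStore = new Map();

function saveLocal(key, value) { memoryStore.set(key, value); }
function loadLocal(key) { return memoryStore.has(key) ? memoryStore.get(key) : null; }

saveLocal('highScore', 42);
console.log(loadLocal('highScore')); // prints "42"
```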

Cross-origin resource sharing

Because Interactive Canvas web apps are hosted in an iframe and the origin is set to null, you must enable cross-origin resource sharing (CORS) for your web servers and storage resources. This allows your assets to accept requests from null origins.
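For example, a static file server for your web app assets might compute its CORS headers along the following lines. This is a sketch, not a specific framework's API; the function name is illustrative:

```javascript
// Build CORS headers for an asset response. The Interactive Canvas iframe
// reports the literal string "null" as its origin, so a wildcard (or echoing
// the value back) is required for assets to load.
function corsHeadersFor(requestOrigin) {
  const allowOrigin = (!requestOrigin || requestOrigin === 'null') ? '*' : requestOrigin;
  return {
    'Access-Control-Allow-Origin': allowOrigin,
    'Access-Control-Allow-Methods': 'GET, HEAD, OPTIONS',
  };
}

console.log(corsHeadersFor('null')['Access-Control-Allow-Origin']); // prints "*"
```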
