I got an Amazon Echo Show for Christmas. Of course, I immediately had a few ideas on how to put it to good use. One of them was the following: together with a friend, I built a temperature sensor based on an Arduino. It transmits the metrics temperature, humidity and air pressure to an Amazon Web Services (AWS) DynamoDB table at regular intervals. Charts should then be generated from these values and displayed on the Echo Show. That doesn't sound too difficult in theory. As usual, the devil is in the details. I have described the implementation of the temperature meter here. In this article, however, I would like to show how I implemented the display on the Echo Show and what difficulties I had to overcome.

First I had to understand how Alexa skills with visual support are developed. Fortunately, Amazon itself provides a good tutorial for this, which I worked through first. In essence, these are "ordinary" Alexa skills that additionally return a so-called APL (Alexa Presentation Language) document. This document defines what should be displayed and how the display should behave. I wondered how I could generate and display charts. Unfortunately, I was unable to find any direct support for this, which left me with two options:

  • Generating SVG files, which can then be displayed directly using APL
  • Embedding a website that renders the charts

I chose the second option. The source code can be found here. The website is hosted statically on S3; see my article on how I set up this blog to learn how to host static websites on S3.
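To give a rough idea of what such a page can look like, here is a minimal sketch (not my exact implementation). It assumes that Chart.js is loaded on the page, that the page contains a canvas element with the id tempChart, and that the measurements are available as JSON from some HTTP endpoint; the URL below is only a placeholder.

// Sketch only: assumes an HTML page that loads Chart.js (e.g. from a CDN)
// and contains <canvas id="tempChart"></canvas>.
const ENDPOINT = 'https://example.com/measurements'; // placeholder URL

async function drawChart() {
    // Expected response shape (assumption):
    // [{ "timestamp": "2021-01-01T12:00:00Z", "temperature": 21.3 }, ...]
    const response = await fetch(ENDPOINT);
    const measurements = await response.json();

    new Chart(document.getElementById('tempChart'), {
        type: 'line',
        data: {
            labels: measurements.map(m => m.timestamp),
            datasets: [{
                label: 'Temperature (°C)',
                data: measurements.map(m => m.temperature)
            }]
        }
    });
}

drawChart();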

Now there was still one question remaining: how can I get my Echo Show to open one or more websites on voice command? After some research I found the OpenURL command. A prerequisite for this command is that an APL document has been sent (even if it is just a dummy document); otherwise the command cannot be executed. Here's how to implement that skill:

The first step was to implement the required intents. Intents are the voice commands you can send via Echo devices to the backend code. As sample utterances I provided the following:

  • Go to {choice}
  • {choice}

This means that after opening the skill, you can either say "Go to xxx" or just "xxx", where xxx is one of the values I set up for the choice slot: temperature or gas warner.
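For illustration, the relevant part of the interaction model could look roughly like this. The slot type name MetricsType is just an assumption; any custom slot type containing the two values works:

{
    "intents": [
        {
            "name": "MetricsChoiceIntent",
            "slots": [
                {
                    "name": "choice",
                    "type": "MetricsType"
                }
            ],
            "samples": [
                "go to {choice}",
                "{choice}"
            ]
        }
    ],
    "types": [
        {
            "name": "MetricsType",
            "values": [
                { "name": { "value": "temperature" } },
                { "name": { "value": "gas warner" } }
            ]
        }
    ]
}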

That's it for the required interaction model. The rest happens in the Lambda function. Here it comes:

/* *
 * This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK (v2).
 * Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management,
 * session persistence, api calls, and more.
 * */
const Alexa = require('ask-sdk-core');
//const persistenceAdapter = require('ask-sdk-s3-persistence-adapter');
const launchDocument = require('./documents/launchDocument.json');
//const util = require('./util'); //used to retrieve S3 objects

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speakOutput = 'Hi, which metrics do you want to see?';

        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            handlerInput.responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                document: launchDocument,
                datasources: {
                    "textListData": {
                        "listItemsToShow": [
                            {
                                "primaryText": "gas warner"
                            },
                            {
                                "primaryText": "temperature"
                            }
                        ]
                    }
                }
            });
        }

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt("Where do you want to go to?")
            .getResponse();
    }
};

const MetricsChoiceIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'MetricsChoiceIntent';
    },
    handle(handlerInput) {
        const choice = handlerInput.requestEnvelope.request.intent.slots.choice.value;
        const speakOutput = `Ok, let's go to ${choice}`;

        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            handlerInput.responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                document: launchDocument,
                token: 'jip'
            });

            var urlToGo="";
            switch(choice){
                case "gas warner":
                    urlToGo="https://www.foo.com";
                    break;
                case "temperature":
                    urlToGo="https://www.bar.com"
                    break;
            }

            handlerInput.responseBuilder.addDirective({
                type: "Alexa.Presentation.APL.ExecuteCommands",
                token: 'jip',
                commands: [{
                  type: "OpenURL",
                  source: urlToGo
                }]
            });
        }

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};

const HelpIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.HelpIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'You can say temperature or gas warner. How can I help?';

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};

const CancelAndStopIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && (Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.CancelIntent'
                || Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.StopIntent');
    },
    handle(handlerInput) {
        const speakOutput = 'Goodbye!';

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};
/* *
 * FallbackIntent triggers when a customer says something that doesn’t map to any intents in your skill
 * It must also be defined in the language model (if the locale supports it)
 * This handler can be safely added but will be ignored in locales that do not support it yet
 * */
const FallbackIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'Sorry, I don\'t know about that. Please try again.';

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};
/* *
 * SessionEndedRequest notifies that a session was ended. This handler will be triggered when a currently open
 * session is closed for one of the following reasons: 1) The user says "exit" or "quit". 2) The user does not
 * respond or says something that does not match an intent defined in your voice model. 3) An error occurs
 * */
const SessionEndedRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'SessionEndedRequest';
    },
    handle(handlerInput) {
        console.log(`~~~~ Session ended: ${JSON.stringify(handlerInput.requestEnvelope)}`);
        // Any cleanup logic goes here.
        return handlerInput.responseBuilder.getResponse(); // notice we send an empty response
    }
};
/* *
 * The intent reflector is used for interaction model testing and debugging.
 * It will simply repeat the intent the user said. You can create custom handlers for your intents
 * by defining them above, then also adding them to the request handler chain below
 * */
const IntentReflectorHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest';
    },
    handle(handlerInput) {
        const intentName = Alexa.getIntentName(handlerInput.requestEnvelope);
        const speakOutput = `You just triggered ${intentName}`;

        return handlerInput.responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};
/**
 * Generic error handling to capture any syntax or routing errors. If you receive an error
 * stating the request handler chain is not found, you have not implemented a handler for
 * the intent being invoked or included it in the skill builder below
 * */
const ErrorHandler = {
    canHandle() {
        return true;
    },
    handle(handlerInput, error) {
        const speakOutput = 'Sorry, I had trouble doing what you asked. Please try again.';
        console.log(`~~~~ Error handled: ${JSON.stringify(error)}`);

        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};

/**
 * This handler acts as the entry point for your skill, routing all request and response
 * payloads to the handlers above. Make sure any new handlers or interceptors you've
 * defined are included below. The order matters - they're processed top to bottom
 * */
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        MetricsChoiceIntentHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        FallbackIntentHandler,
        SessionEndedRequestHandler,
        IntentReflectorHandler)
    .addErrorHandlers(
        ErrorHandler)
    .withCustomUserAgent('sample/hello-world/v1.2')
    .lambda();

This Lambda code essentially consists of two handlers: the LaunchRequestHandler function is triggered when the skill is activated. It returns an APL document (shown later) which consists of a list displaying all the items provided in the textListData.listItemsToShow JSON object.

The MetricsChoiceIntentHandler function is triggered when the user answers the first question with "{choice}" or "Go to {choice}". The function basically sends back the same APL document as the LaunchRequestHandler function. This document is just a dummy, but it is required in order to send the OpenURL command. There are two parts in this function which require a bit more attention: first, the function adds an Alexa.Presentation.APL.ExecuteCommands directive, which contains the OpenURL command. Second, both directives, the one for the document as well as the one for the command, require a token. This token can be anything, but it has to be the same for both directives in order to work properly.
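Reduced to the essentials, the pairing of the two directives from the handler above looks like this:

handlerInput.responseBuilder.addDirective({
    type: 'Alexa.Presentation.APL.RenderDocument',
    document: launchDocument, // any (dummy) APL document will do
    token: 'jip'
});

handlerInput.responseBuilder.addDirective({
    type: 'Alexa.Presentation.APL.ExecuteCommands',
    token: 'jip', // must match the token of the RenderDocument directive
    commands: [{
        type: 'OpenURL',
        source: urlToGo // the website to open
    }]
});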

Here's the APL document that is returned by both functions:

{
    "type": "APL",
    "version": "1.1",
    "settings": {},
    "theme": "light",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.1.0"
        }
    ],
    "resources": [],
    "styles": {
        "bigText": {
            "values": [
                {
                    "fontSize": "72dp",
                    "color": "black",
                    "textAlign": "center"
                }
            ]
        },
        "listitemText": {
            "values": [
                {
                    "fontSize": "12dp"
                }
            ]
        }
    },
    "onMount": [],
    "graphics": {},
    "commands": {},
    "layouts": {},
    "mainTemplate": {
        "parameters": [
            "textListData"
        ],
        "items": [
            {
                "type": "Container",
                "items": [
                    {
                        "type": "AlexaBackground",
                        "backgroundImageSource": "${assets.backgroundURL}"
                    },
                    {
                        "type": "AlexaTextList",
                        "headerTitle": "Metrics",
                        "headerSubtitle": "Which metrics do you want to see?",
                        "primaryAction": {
                            "type": "SendEvent",
                            "arguments": [
                                "ListItemSelected",
                                "${ordinal}"
                            ]
                        },
                        "listItems": "${textListData.listItemsToShow}",
                        "hideOrdinal": true
                    }
                ],
                "height": "100%",
                "width": "100%"
            }
        ]
    }
}