An Alexa skill session listens for 8 seconds. If the user doesn't say anything within this window, the session ends (ensuring that Alexa is not always listening).
Often, after a fulfillment, we would like to know whether the user is satisfied with the answer or has another query, which is why we use a re-prompt. After the initial 8 seconds, the re-prompt message is spoken and the user gets another 8 seconds to say something in the same session. If the user still doesn't respond, Alexa ends the session.
I built a FAQ skill where users ask a question and Alexa answers. Closing the session immediately after the answer doesn't feel very user-friendly, so I added a re-prompt: the user is asked whether they have any further queries, and if not, the session ends. Here is a sample of the conversation:
User: 'Alexa, ask faq assistant, can I claim expenses as a contractor?'
Alexa: 'Yeah, sure. You can claim expenses through Keka.'
(Alexa waits for 8 seconds here, then asks)
Alexa: 'Do you want to know anything else?'
User: 'Yes'
Alexa: 'Go ahead, ask me'
OR
User: 'No'
Alexa: 'Thank you. It's a pleasure to help you.'
And here is the response I used (shouldEndSession is set to false so the session stays open for the re-prompt):
{
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": "Yeah, sure. You can claim expenses through Keka."
        },
        "reprompt": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Do you want to know anything else?"
            }
        },
        "shouldEndSession": false
    }
}
Mostly, the re-prompt is a means of confirming that the user got what they wanted; the user can simply respond with a Yes or No. To handle that response we can use Amazon's built-in intents.
While there are many built-in intents, we specifically use AMAZON.YesIntent and AMAZON.NoIntent for the re-prompt. When the user responds with a Yes or No, the corresponding intent is triggered and we respond accordingly (close the conversation or keep it open).
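As a rough illustration, here is how those two intents could be handled in a plain Python Lambda function. This is a minimal sketch without the ASK SDK, and the build_response helper is my own naming, not the skill's actual code:

def build_response(text, should_end_session, reprompt=None):
    # Assemble a minimal Alexa response envelope.
    response = {
        "outputSpeech": {"type": "PlainText", "text": text},
        "shouldEndSession": should_end_session,
    }
    if reprompt:
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt}
        }
    return {"version": "1.0", "response": response}

def handle_intent(intent_name):
    if intent_name == "AMAZON.YesIntent":
        # Keep the session open so the user can ask another question.
        return build_response("Go ahead, ask me",
                              should_end_session=False,
                              reprompt="Do you want to know anything else?")
    if intent_name == "AMAZON.NoIntent":
        # The user is done; close the conversation.
        return build_response("Thank you. It's a pleasure to help you.",
                              should_end_session=True)
    # The actual FAQ intents would be resolved here and answered with
    # shouldEndSession=False plus the same re-prompt.
    return build_response("Sorry, I don't know that one yet.",
                          should_end_session=False,
                          reprompt="Do you want to know anything else?")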
As we saw with display directives in the earlier post, we can also add rich text to templates. There is an action tag which we are going to use to mimic the re-prompt on screen. For the re-prompt, we can add tertiaryText with action tags to textContent; the full display response for the same scenario is shown at the end of this post.
A Display.ElementSelected request is sent to the skill when the user selects an action element on the screen. So in our code, we will receive this request with the token value close_session or open_session, and we can respond accordingly (close the conversation or keep it open).
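For reference, the relevant fields of the incoming request look roughly like this (other fields such as requestId, timestamp and locale are omitted):

"request": {
    "type": "Display.ElementSelected",
    "token": "close_session"
}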
This is roughly what the Lambda pseudocode looks like (this is not the final code I use in production; I have a templateBuilder function between the final response and the handleIntent functions, and handleIntent calls the core controller to get the templateType and content):
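Here is a minimal sketch of that handler, reusing the build_response and handle_intent helpers from above; routing Display.ElementSelected by token is the part that mimics the re-prompt on screen:

def lambda_handler(event, context):
    # Route the incoming Alexa request by type. In production this also
    # goes through a templateBuilder step that attaches the display directive.
    request = event["request"]

    if request["type"] == "IntentRequest":
        # Yes/No confirmations and the FAQ intents are resolved here.
        return handle_intent(request["intent"]["name"])

    if request["type"] == "Display.ElementSelected":
        # Sent when the user taps one of the <action> links on screen.
        if request["token"] == "close_session":
            return build_response("Thank you. It's a pleasure to help you.",
                                  should_end_session=True)
        # open_session: keep the conversation going.
        return build_response("Go ahead, ask me",
                              should_end_session=False,
                              reprompt="Do you want to know anything else?")

    # LaunchRequest and anything else: open the session with a welcome prompt.
    return build_response("Welcome to the FAQ assistant. Ask me a question.",
                          should_end_session=False,
                          reprompt="Do you want to know anything else?")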
The Alexa re-prompt/confirmation with voice and screen, i.e., the final response with the prompt and action links, looks like this:
{
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": "Yeah, sure. You can claim expenses through Keka."
        },
        "reprompt": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Do you want to know anything else?"
            }
        },
        "directives": [{
            "type": "Display.RenderTemplate",
            "template": {
                "type": "BodyTemplate1",
                "title": "Welcome",
                "textContent": {
                    "primaryText": {
                        "type": "RichText",
                        "text": "Yeah, sure. You can claim expenses through Keka."
                    },
                    "tertiaryText": {
                        "type": "RichText",
                        "text": "<br/><br/>Do you want to know anything else? <br/><action token='close_session'><u>No</u></action> <action token='open_session'><u>Yes</u></action>"
                    }
                }
            }
        }],
        "shouldEndSession": false
    }
}