Tuesday 26th September, 2017
3:45pm to 4:45pm
From personal assistants to chatbots to conversational devices in the home, voice user interfaces that make tasks easier, faster, or more fun are quickly finding their way into our daily lives. Developer APIs and SDKs for voice-enabled services are being released with increasing frequency, giving developers new skills to learn and companies new ways for customers to interact with their products and services. But these technologies are still a new frontier in human-computer interaction, and there is a lot to learn.
In this session, we’ll survey the current state of voice and conversational interface APIs, with an eye toward global language support. We’ll examine services such as Alexa, Google, and Cortana: their distinct features, the devices, platforms, and interactions they support, and the spoken languages they handle.
Then, we’ll dive into the voice design process, with questions you’ll want to consider as you think about how to add voice to an application. We’ll also look at important concepts and terminology in voice user interaction that you’ll need to understand in order to successfully build a custom voice “skill”.
Next, we’ll demonstrate a custom Alexa skill and show how we integrated data from a Drupal site into it.
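The abstract doesn’t include the presenters’ code, but the kind of integration described — an Alexa skill pulling content from a Drupal site — can be sketched roughly as below. Everything here (the endpoint URL, the intent name, the response wording) is a hypothetical illustration, not the session’s actual implementation; the sketch assumes a Drupal site exposing articles as JSON (e.g. via a REST export view) and a handler in the style of an AWS Lambda entry point that answers one custom intent with Alexa’s plain-text response envelope.

```python
import json
from urllib.request import urlopen

# Hypothetical Drupal JSON endpoint (e.g. a REST export view);
# not the presenters' actual site.
DRUPAL_ENDPOINT = "https://example.com/api/articles?_format=json"


def fetch_latest_article(fetch=urlopen):
    """Fetch the most recent article title from the Drupal endpoint."""
    with fetch(DRUPAL_ENDPOINT) as resp:
        articles = json.load(resp)
    return articles[0]["title"] if articles else None


def build_alexa_response(speech_text):
    """Wrap plain text in the Alexa skill JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }


def handle_request(event, article_fetcher=fetch_latest_article):
    """Entry point in the style of a Lambda handler for an Alexa skill.

    Routes one hypothetical custom intent; anything else gets a fallback.
    """
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "LatestArticleIntent":  # hypothetical intent name
        title = article_fetcher()
        return build_alexa_response(f"The latest article is {title}.")
    return build_alexa_response("Sorry, I didn't understand that.")
```

In a real skill, the intent name would come from the interaction model defined in the Alexa developer console, and the handler would be registered as the skill’s service endpoint; the fetcher is injectable here only so the logic can be exercised without a live Drupal site.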
Finally, we’ll take a look at API.AI and how you can use this service to build a voice user interface and export it to a number of different conversational AI services.
By the end of this session, you will: