This project provides a step-by-step walkthrough to help you build a hands-free Alexa Voice Service (AVS) prototype in 60 minutes, using wake word engines from Sensory or KITT.AI. In addition to pushing a button to "start listening", you can now also just say the wake word "Alexa", much like the Amazon Echo. You can find step-by-step instructions to set up the hands-free prototype on Raspberry Pi, or follow the instructions to set up the push-to-talk-only prototype on Linux, Mac, or Windows.
Alexa Voice Service (AVS) is Amazon’s intelligent voice recognition and natural language understanding service that allows you as a developer to voice-enable any connected device that has a microphone and speaker.
You can set up this project on the following platforms:
- Raspberry Pi, or
- Linux, or
- Mac, or
- Windows
Or you can prototype with these third-party dev kits:
- New! Raspberry Pi + Microsemi AcuEdge Development Kit for Amazon AVS
- Raspberry Pi + Conexant 4-mic Development Kit for Amazon AVS
- Raspberry Pi + Conexant 2-Mic Development Kit for Amazon AVS
January 31, 2018:
Updates
- Added support for Australia/New Zealand.
- Added support for Java 1.8.161 and 1.8.162 in the Pi automated install script.
January 25, 2018:
Important
- The AVS Java Sample App is in maintenance mode. To leverage the latest Alexa features, please use the AVS Device SDK C++ Sample App, which you can find here. To discuss any specific dependencies on the AVS Java Sample App, feel free to reach out to us here.
December 3, 2017:
Updates
- Added support for these locales: Canada, India, and Japan.
Known Issues
- A pause command followed by play/resume results in playback from the beginning of the audio item instead of the offset provided when the audio item was paused.
October 11, 2017:
Updates
- Added support to automatically detect whether the AVS Java Sample App should start in headless mode (see the sketch below).
- Added support for Raspbian Stretch.
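For reference, the sketch below shows one way headless detection can be done with the standard `java.awt` API. It is illustrative only, under the assumption that the absence of a display is what decides headless mode; it is not necessarily the exact check the sample app performs.

```java
import java.awt.GraphicsEnvironment;

public class HeadlessCheck {
    public static void main(String[] args) {
        // isHeadless() returns true when no display, keyboard, or mouse is
        // available, e.g. on a Raspberry Pi booted without a desktop session.
        boolean headless = GraphicsEnvironment.isHeadless();
        System.out.println("Start the sample app in headless mode: " + headless);
    }
}
```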
Known Issues
- Error running the WakeWordAgent with Stretch on Pi 2.
July 6, 2017:
Updates
- The sample app has been updated to support Notifications.
- Enable the Quote Maker skill, located in the Alexa Skills Store, to test Notifications with the AVS Sample App.
- Added a login/logout button.
June 21, 2017:
Updates
- The sample app now supports Display Cards. `TemplateRuntime` directives will be displayed in the sample app as JSON (see the sketch below).
- To enable Display Cards:
- Log in to the Amazon Developer Portal and navigate to your product: Alexa > AVS.
- Click Edit, then click Device Capabilities.
- Select Display Cards, then select Display Cards with Media.
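Because the sample app renders `TemplateRuntime` directives as raw JSON, the sketch below shows roughly what to expect on screen. The directive fields are abbreviated placeholders, and the use of the org.json library is an assumption made for illustration, not a statement about the sample app's actual dependencies.

```java
import org.json.JSONObject;

public class DisplayCardPreview {
    public static void main(String[] args) throws Exception {
        // Abbreviated, illustrative RenderTemplate directive; field values are placeholders.
        String raw = "{\"directive\":{"
                + "\"header\":{\"namespace\":\"TemplateRuntime\",\"name\":\"RenderTemplate\",\"messageId\":\"msg-123\"},"
                + "\"payload\":{\"type\":\"BodyTemplate2\","
                + "\"title\":{\"mainTitle\":\"Weather\",\"subTitle\":\"Seattle\"},"
                + "\"textField\":\"Partly cloudy with a high of 62.\"}}}";

        // Pretty-print the directive the way the sample app displays it: as indented JSON.
        System.out.println(new JSONObject(raw).toString(2));
    }
}
```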
May 31, 2017:
Updates
- The Raspberry Pi + Microsemi AcuEdge Development Kit for Amazon AVS is now available for purchase. Learn more »
May 4, 2017:
Updates
- The Conexant 4-mic Development Kit for Amazon AVS is now available, making it easier and more cost-effective to build far-field products with Amazon Alexa. Learn more »
April 27, 2017:
Updates
- Need help troubleshooting the AVS Sample App? Check out the new Troubleshooting Guide.
April 20, 2017:
Updates
- The companion service persists refresh tokens between restarts. This means you won't have to authenticate each time you bring up the sample app. Read about the update on the Alexa Blog ».
- The Listen button has been replaced with a microphone icon.
- The sample app uses new Alexa wake word models from KITT.ai.
Maintenance
- ALPN version has been updated in `POM.xml`.
- Automated install no longer requires user intervention to update certificates.
Bug Fixes
- The sample app ensures that the downchannel stream is established before sending the initial `SynchronizeState` event. This adheres to the guidance provided in Managing an HTTP/2 Connection with AVS (see the ordering sketch after this list).
- Locale strings in the sample app user interface have been updated to match the values in `config.json`.
- Fixed a bug where there was no audio volume on Linux.
- WiringPi is now installed as part of `automated_install.sh`.
- Fixed a bug that caused 100% CPU usage.
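A hedged structural sketch of that ordering is shown below. The interface and method names are hypothetical placeholders used only to illustrate the sequence described in Managing an HTTP/2 Connection with AVS; they are not the sample app's actual API.

```java
// Hypothetical sketch of the connection order AVS expects; the interface and
// method names are placeholders, not the sample app's real classes.
public final class ConnectionBootstrap {

    interface AvsConnection {
        void openDownchannel();            // long-lived directives stream
        void sendEvent(String eventJson);  // events go out on separate request streams
    }

    static void start(AvsConnection connection) {
        // 1. Establish the downchannel first so no directives are missed.
        connection.openDownchannel();

        // 2. Only then send the initial SynchronizeState event
        //    (device context omitted here for brevity).
        connection.sendEvent("{\"event\":{\"header\":{"
                + "\"namespace\":\"System\",\"name\":\"SynchronizeState\",\"messageId\":\"msg-1\"},"
                + "\"payload\":{}}}");
    }
}
```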
Known Issues
- To log out of the Java sample app, you must delete the `refresh_tokens` file in the `/samples/companionService` folder (a sketch is shown below); otherwise, the sample app will authenticate on each reboot. Click here for log out instructions.
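If you prefer to remove the file programmatically rather than from a shell, a minimal Java sketch follows; the relative path is an assumption that you run it from the repository root.

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class ForceLogout {
    public static void main(String[] args) throws Exception {
        // Deleting the persisted refresh token forces the companion service to
        // require authentication the next time the sample app is started.
        Files.deleteIfExists(Paths.get("samples/companionService/refresh_tokens"));
    }
}
```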
- Review the AVS Terms & Agreements.
- The earcons associated with the sample project are for prototyping purposes only. For implementation and design guidance for commercial products, please see Designing for AVS and AVS UX Guidelines.
- Usage of Sensory & KITT.AI wake word engines: The wake word engines included with this project (Sensory and KITT.AI) are intended to be used for prototyping purposes only. If you are building a commercial product with either solution, please use the contact information below to enquire about commercial licensing:
- Contact Sensory for information on TrulyHandsFree licensing.
- Contact KITT.AI for information on Snowboy licensing.
- IMPORTANT: The Sensory wake word engine included with this project is time-limited: code linked against it will stop working when the library expires. The library included in this repository will, at all times, have an expiration date that is at least 120 days in the future. See Sensory's GitHub page for more information on how to renew the license for non-commercial use.
- Want to report a bug or request an update to the documentation? See CONTRIBUTING.md.
- Having trouble? Check out our troubleshooting guide.
- Have questions or need help building the sample app? Open a new issue.