diff --git a/CONDUCT.md b/CONDUCT.md old mode 100755 new mode 100644 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 51b56a4a..7aa45161 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,26 +1,27 @@ ## Questions - -If you are having difficulties using the APIs or have a question about the IBM Watson Services, please ask a question on [dW Answers] (https://developer.ibm.com/answers/questions/ask/?topics=watson) or [Stack Overflow] (https://stackoverflow.com/users/login?ssrc=anon_ask&returnurl=http%3a%2f%2fstackoverflow.com%2fquestions%2fask%3ftags%3dibm-watson). +If you are having difficulties using the Watson APIs or have a question about the IBM Watson Services, please ask a question on [dW Answers] (https://developer.ibm.com/answers/questions/ask/?topics=watson) or [Stack Overflow] (https://stackoverflow.com/users/login?ssrc=anon_ask&returnurl=http%3a%2f%2fstackoverflow.com%2fquestions%2fask%3ftags%3dibm-watson). ## Issues - If you encounter an issue with the kit, you are welcome to submit an issue. Before that, please search for similar issues. It's possible somebody has encountered this issue already. -##Pull Requests -If you want to contribute to the repository, fork the project, create a pull request. Your commit request needs to be in the following format: +## Pull Requests +If you want to contribute to the repository, please fork the project and create a pull request. Your commit request needs to be in the following format: - (#idOfIssue) Simple description +``` + (#idOfIssue) Simple description + More details. This line is optional. +``` - More details. This line is optional. - If there is no issue related to your commit, create one first. It will be better to understand the goal of your work. After your Pull Request (PR) has been reviewed and signed off, a maintainer will merge it into the master branch. ### Legal stuff -We have tried to make it as easy as possible to make contributions. This applies to how we handle the legal aspects of contribution. We use the same approach—the [Developer's Certificate of Origin 1.1 (DCO)](DCO1.1.txt)—that the Linux® Kernel [community](http://elinux.org/Developer_Certificate_Of_Origin) uses to manage code contributions. + +We have tried to make it as easy as possible to make contributions. This applies to how we handle the legal aspects of contribution. We use the same approach — the [Developer's Certificate of Origin 1.1 (DCO)](DCO1.1.txt) — that the Linux® Kernel [community](http://elinux.org/Developer_Certificate_Of_Origin) uses to manage code contributions. + We simply ask that when submitting a pull request, the developer must include a sign-off statement in the pull request description. Here is an example Signed-off-by line, which indicates that the submitter accepts the DCO: ``` Signed-off-by: John Doe - +``` diff --git a/FAQ.md b/FAQ.md deleted file mode 100644 index 9e44c1c0..00000000 --- a/FAQ.md +++ /dev/null @@ -1,21 +0,0 @@ -# FAQs and Bugs - -> Documenting some known bugs or issues and how to address them - -#Speech to text crashing - - -# Watson Conversation - Resource not found - - - -##Noisy Speaker - - - -##Bluetooth Audio - - - - -##Cannot find module diff --git a/LICENSE b/LICENSE old mode 100755 new mode 100644 diff --git a/MAINTAINERS.md b/MAINTAINERS.md old mode 100755 new mode 100644 index a6daada4..e2c1aa4c --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -1,4 +1,7 @@ # Maintainers -- Maryam Ashoori. maryam at us.ibm.com -- Victor Dibia. 
dibiavc at us.ibm.com +TJBot was lovingly created by researchers at IBM Research. + +- [Maryam Ashoori](https://github.com/maryamashoori) is the creator of TJBot and is the ringleader of our merry band of cardboard robots. +- [Justin Weisz](https://github.com/jweisz) created the TJBot library and generally tries to bring order to our chaos. +- [Victor Dibia](https://github.com/victordibia) created TJBot’s initial recipes and is our master of demos. diff --git a/README.md b/README.md index 2f12553a..a2846ea8 100644 --- a/README.md +++ b/README.md @@ -1,50 +1,66 @@ # IBM TJBot - + -[IBM Watson Maker Kits](http://ibm.biz/mytjbot) are a collection of DIY open source templates to connect to [Watson services](https://www.ibm.com/watson/developercloud/services-catalog.html) in a fun way. [IBM TJBot](http://ibm.biz/mytjbot) is the first maker kit in the collection. You can 3D print or laser cut the robot frame, then use one of the available [recipes](recipes) to bring him to life! +[IBM Watson Maker Kits](http://ibm.biz/mytjbot) are a collection of DIY open source templates to build things with [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html) in a fun and easy way. [IBM TJBot](http://ibm.biz/mytjbot) is the first maker kit in the collection. You can 3D print or laser cut the robot body, then use one of our [recipes](recipes) to bring him to life! -Better still, you can create your own custom recipes to bring exciting ideas to life using any combination of Watson's Cognitive API's! +In addition, you can unleash your own creativity and create new recipes that bring TJBot to life using any of the available [Watson services](https://www.ibm.com/watson/developercloud/services-catalog.html)! -**TJBot will only run on Raspberry Pi.** +**TJBot only works with a Raspberry Pi.** -# Get TJBot -You can download [the design files](https://ibmtjbot.github.io/#gettj) and 3D print or laser cut TJBot. -[Here is an instructable](http://www.instructables.com/id/Build-TJ-Bot-Out-of-Cardboard/) to help you with the details. +# Build TJBot +You can make your own TJBot in a number of ways. -# Bring TJBot to life -[Recipes](recipes) are step by step instructions to help you connect your TJBot to [Watson services](https://www.ibm.com/watson/developercloud/services-catalog.html). -The [recipes](recipes) are designed based on a Raspberry Pi. You can either run one of the available [recipes](recipes) or create your own recipe that brings sweet ideas to life using any combination of [Watson API](https://www.ibm.com/watson/developercloud/services-catalog.html)! +- **3D Print or Laser Cut**. If you have access to a 3D printer or laser cutter, you can print/cut TJBot yourself. Begin by downloading the [design files](https://ibmtjbot.github.io/#gettj) and firing up your printer/cutter. +- **TJBot Full Kit**. You can order a full TJBot kit with the laser cut cardboard and all the electronics from [Sparkfun](https://www.sparkfun.com/products/14123). +- **TJBot Cardboard Kit**. You can purchase the TJBot laser cut cardboard from [Texas Laser Creations](http://texlaser.com).
-We have provided three initial [recipes](recipes) for you: -- Use your voice to control a light with Watson [[instructions](http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/speech_to_text)] -- Make your robot respond to emotions using Watson [[instructions](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/sentiment_analysis)] -- Build a talking robot with Watson Conversation [[instructions](http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/conversation)] - -Here are some of the featured recipes created by TJBot enthusiasts: -- Tjwave: Fun controller recipe for TJBot's servo arm [[instructions](http://www.instructables.com/id/Build-a-Waving-Robot-Using-Watson-Services/)] [[github](https://github.com/victordibia/tjwave)] -- Tjdashboard: Web interface to visualize underlying processes on TJBot. [[github](https://github.com/victordibia/tjdashboard)] -- Tjvision: Get your TJBot to recognize images using the Watson Visual Recognition API. [[github](https://github.com/victordibia/tjvision)] -- SwiftyTJ that enables TJBot’s LED to be controlled from a Swift program [[github](https://github.com/jweisz/swifty-tj)] -- Build a TJBot that cares [[instructions](https://medium.com/ibm-watson-developer-cloud/build-a-chatbot-that-cares-part-1-d1c273e17a63#.6sg1yfh4w)] [[github](https://github.com/boxcarton/tjbot-raspberrypi-nodejs)] -- Project Intu, not a recipe but a middleware that can be installed on TJBot and be used to architect more complex interactions for your robot [[developercloud](http://www.ibm.com/watson/developercloud/project-intu.html)] [[github](https://github.com/watson-intu/self-sdk#raspberry-pi)] +## Electronics +There are a number of components you can add to TJBot to bring him to life. Not all of these are required for all recipes. -# Contribute to TJBot -TJBot is open source and we'd love to see what you can make with him. Here are some ideas to get you started. +- [Raspberry Pi 3 + SD card preloaded with NOOBS](http://www.mcmelectronics.com/product/RASPBERRY-PI-RPI-MODB-16GB-NOOBS-/83-17304). **This is a required component to make TJBot work!** 🤖 +- [NeoPixel RGB LED (8mm)](https://www.adafruit.com/product/1734). Note that if you are using other kinds of LEDs, you may need to add a resistor; this LED doesn’t require one. +- [Female-to-female jumper wires](https://www.amazon.com/dp/B00KOL5BCC/). TJBot will only need 3 of these wires, so you’ll have extra. +- [Female-to-male jumper wires](https://www.amazon.com/dp/B00PBZMN7C/). TJBot will only need 3 of these wires, so you’ll have extra. +- [USB Microphone](https://www.amazon.com/gp/product/B00IR8R7WQ/). Other brands of USB microphones should also work. +- [Mini Bluetooth Speaker](https://www.amazon.com/gp/product/B00OEPCHL2/). Any small speaker with either a 3.5mm audio jack or Bluetooth will work. Note that if you are using the 3.5mm audio jack, you may wish to add a [USB Audio Adapter](https://www.adafruit.com/product/1475) to avoid audio interference with the LED. +- [Servo Motor](https://www.amazon.com/RioRand-micro-Helicopter-Airplane-Controls/dp/B00JJZXRR0/). Note that the red (middle) wire is 5v, the brown wire is ground, and the orange wire is data. +- [Raspberry Pi Camera](https://www.amazon.com/dp/B01ER2SKFS/). 
Either the 5MP or 8MP camera will work. + +## Assembly +Once you have obtained your TJBot, please refer to [the assembly instructions](http://www.instructables.com/id/Build-TJ-Bot-Out-of-Cardboard/) to put it all together. + +For reference, here is the wiring diagram to hook up the LED and servo to your Raspberry Pi. + +![](images/wiring.png) + +> 💡 Be careful when connecting the LED! If it is connected the wrong way, you may end up burning it out. The LED has a flat notch on one side; use this to orient the LED and figure out which pin is which. - - Visual recognition. TJBot has a placeholder behind his left eye to insert a Raspberry Pi camera. Try connecting the camera to the Watson Visual Recognition API so TJ can say hello when he sees you. +> For the servo, note that the red (middle) wire is 5v, the brown wire is ground, and the orange wire is data. - - IoT. The Watson IoT service lets you control smart home devices (e.g. Philips Hue, LIFX lights, etc. ). Connect TJBot to IoT and have him control your home. +# Bring TJBot to Life +[Recipes](recipes) are step-by-step instructions to bring your TJBot to life with [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html). - - Connected robots. You can program multiple TJBots to send messages to each other using the Watson IoT platform. +We have provided three initial [recipes](recipes) for you: + +- Use Your Voice to Control a Light with Watson [[instructions](http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/speech_to_text)] +- Make Your Robot Respond to Emotions Using Watson [[instructions](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/sentiment_analysis)] +- Build a Talking Robot with Watson [[instructions](http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/conversation)] + +After checking out our sample recipes, we encourage you to take a look at [featured recipes](featured) created by members of our community. + +# Contribute to TJBot +TJBot is an open source project designed to make it fun and easy to interact with [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html). We’d love to see what you can make with him. Here are some ideas to get you started. -If you have created your own recipe, we would love to include it as a [featured recipe](featured/README.md)! Just submit a pull request for your receipe instructions and code and send a link to a demo video to tjbot@us.ibm.com (Vimeo & YouTube preferred). We will review it and if we decide to include it in our repository, you'll be listed as the developer. See [CONTRIBUTING.md](CONTRIBUTING.md). +- **Visual recognition**. Make TJBot recognize your face using the [Watson Visual Recognition](https://www.ibm.com/watson/developercloud/visual-recognition.html) service and the Raspberry Pi Camera (see the camera sketch below). +- **IoT**. Let TJBot control your smart home devices using the [Watson IoT platform](https://www.ibm.com/internet-of-things/platform/watson-iot-platform/). +- **Connected robots**. Program multiple TJBots to chat with each other! -We cannot wait to see what you build with [TJBot](http://ibm.biz/mytjbot)!
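As a starting point for the visual recognition idea above, here is a minimal sketch that captures a photo with TJBot's camera using the TJBot Node.js library. It reuses the `_captureImage()` call that the bootstrap hardware tests in this change rely on (an underscore-prefixed, undocumented method); sending the captured image to the Watson Visual Recognition service would additionally require that service's credentials and client code, which are not shown here.

```
// Minimal sketch: capture a photo as a starting point for visual recognition.
// Assumes the tjbot npm package is installed; _captureImage() is the call used
// by the bootstrap hardware test and is not a documented public API.
var TJBot = require('tjbot');

var tj = new TJBot(['camera'], {}, {});

tj._captureImage('picture.jpg').then(function() {
    // picture.jpg is now on disk; from here you could send it to the
    // Watson Visual Recognition service so TJBot can say hello when he sees you
    console.log('captured picture.jpg');
});
```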
+If you would like your own recipe included in our [featured recipe](featured) list, please [send us email](mailto:tjbot@us.ibm.com) with a link to your repository and a demo video. # About TJBot -[TJ](http://ibm.biz/mytjbot) is affectionately named after Thomas J. Watson, the first Chairman and CEO of IBM. TJBot was born at IBM Research as an experiment to find the best practices in the design and implementation of cognitive objects. +[TJBot](http://ibm.biz/mytjbot) was affectionately named after Thomas J. Watson, the first Chairman and CEO of IBM. TJBot was created by [Maryam Ashoori](https://github.com/maryamashoori) at IBM Research as an experiment to find the best practices in the design and implementation of cognitive objects. He was born on November 9, 2016 via [this blog post](https://www.ibm.com/blogs/research/2016/11/calling-makers-meet-tj-bot/). -Feel free to contact TJBot at tjbot@us.ibm.com +Feel free to [contact the team](mailto:tjbot@us.ibm.com) with any questions about this project. -## License -This library uses the [Apache License Version 2.0 software license] (LICENSE). +# License +This project uses the [Apache License Version 2.0](LICENSE) software license. diff --git a/bootstrap/README.md b/bootstrap/README.md new file mode 100644 index 00000000..e44c972b --- /dev/null +++ b/bootstrap/README.md @@ -0,0 +1,68 @@ +# TJBot Bootstrap + +Perform the following operations to prepare your Raspberry Pi for becoming a TJBot. + +**Note: This is coming soon as a shell script. Stay tuned.** + +1. Boot your Pi and connect to Wi-Fi (click the icon in the menu bar) + +2. Upgrade your Pi’s OS + + sudo apt-get update + sudo apt-get dist-upgrade + +> You’ll need to do `apt-get update` first because that updates the repository cache. Otherwise, `apt-get dist-upgrade` won't do anything because it doesn't know there is a distribution upgrade. + +> During the upgrade, say "Y" when prompted to replace the `lightdm.conf` file with the package maintainer's version. + +If you have plugged in your speaker via USB or Bluetooth, disable the kernel modules for the built-in audio jack. + + sudo cp bootstrap/tjbot-blacklist-snd.conf /etc/modprobe.d/ + sudo update-initramfs -u + +If you have plugged in your speaker via the headphone jack, you may experience interference between the speaker and the LED when using both simultaneously. In this case, do not disable the kernel modules for the built-in audio jack. + + sudo rm /etc/modprobe.d/tjbot-blacklist-snd.conf + sudo update-initramfs -u + +3. Reboot + + sudo reboot + +4. Remove old conf files from `/home/pi/oldconffiles` if they are present + + rm -rf ~/oldconffiles + +5. Remove unneeded packages and install missing ALSA packages + + sudo apt-get autoremove + sudo apt-get install alsa-base alsa-utils libasound2-dev + +6. Install Node.js + + curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash - + sudo apt-get install -y nodejs + +7. Check out the TJBot source code + + cd Desktop + git clone https://github.com/ibmtjbot/tjbot + +8. Run a recipe + + cd ~/Desktop/tjbot/recipes/intro + npm install + sudo node intro.js + +## Hardware Tests +Hardware tests are included with bootstrap to ensure the TJBot hardware is set up correctly. Tests are included for the `camera`, `led`, `servo`, and `speaker`. + +Tests can be run in the following manner.
+ +``` +$ npm install +$ sudo node test/test.camera.js +$ sudo node test/test.led.js +$ sudo node test/test.servo.js +$ sudo node test/test.speaker.js +``` diff --git a/bootstrap/bootstrap.sh b/bootstrap/bootstrap.sh new file mode 100755 index 00000000..85821b83 --- /dev/null +++ b/bootstrap/bootstrap.sh @@ -0,0 +1,2 @@ +#!/bin/sh +echo "This is a placeholder for the TJBot boostrap script. Until it has been written, please follow the directions in README.md to configure your Raspberry Pi for TJBot." diff --git a/bootstrap/tests/package.json b/bootstrap/tests/package.json new file mode 100644 index 00000000..05db1f6f --- /dev/null +++ b/bootstrap/tests/package.json @@ -0,0 +1,27 @@ +{ + "name": "tests", + "description": "TJBot hardware tests", + "version": "1.0.0", + "author": "Justin Weisz ", + "bugs": { + "url": "https://github.com/ibmtjbot/tjbot/issues" + }, + "dependencies": { + "readline-sync": "^1.4.7", + "tjbot": "0.0.10" + }, + "devDependencies": {}, + "homepage": "https://github.com/ibmtjbot/tjbot#readme", + "keywords": [ + "TJBot" + ], + "license": "Apache-2.0", + "main": "test.led.js", + "repository": { + "type": "git", + "url": "git+https://github.com/ibmtjbot/tjbot.git" + }, + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + } +} diff --git a/bootstrap/tests/test.camera.js b/bootstrap/tests/test.camera.js new file mode 100644 index 00000000..d0c807e2 --- /dev/null +++ b/bootstrap/tests/test.camera.js @@ -0,0 +1,35 @@ +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +'use strict'; + +const fs = require('fs'); + +const TJBot = require('tjbot'); + +var tj = new TJBot(['camera'], {}, {}); + +tj._captureImage('picture.jpg').then(function(data) { + if (!fs.existsSync('picture.jpg')) { + throw new Error("expected picture.jpg to have been created"); + } + if (fs.existsSync('picture.jpg')) { + fs.unlink('picture.jpg'); + } + if (fs.existsSync('picture.jpg')) { + throw new Error("expected to have deleted picture.jpg"); + } +}); diff --git a/bootstrap/tests/test.led.js b/bootstrap/tests/test.led.js new file mode 100644 index 00000000..aec11969 --- /dev/null +++ b/bootstrap/tests/test.led.js @@ -0,0 +1,32 @@ +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +'use strict'; + +const rl = require('readline-sync'); + +const TJBot = require('tjbot'); + +var tj = new TJBot(['led'], {}, {}); +var colors = ['red', 'green', 'blue', 'orange', 'off']; + +colors.forEach(function(color) { + tj.shine(color); + var answer = rl.question('Did the LED turn ' + color + '? Y/N > '); + if (answer.toLowerCase() != 'y') { + throw new Error('expected the LED to turn ' + color + ', please check your LED wiring.'); + } +}); diff --git a/bootstrap/tests/test.servo.js b/bootstrap/tests/test.servo.js new file mode 100644 index 00000000..215fb361 --- /dev/null +++ b/bootstrap/tests/test.servo.js @@ -0,0 +1,47 @@ +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +'use strict'; + +const rl = require('readline-sync'); + +const TJBot = require('tjbot'); + +var tj = new TJBot(['servo'], {}, {}); + +tj.armBack(); +var answer = rl.question('Is TJBot\'s arm in the BACKWARD position? Y/N > '); +if (answer.toLowerCase() != 'y') { + throw new Error('expected arm to be in backward position, please check servo wiring.'); +} + +tj.raiseArm(); +answer = rl.question('Is TJBot\'s arm in the RAISED position? Y/N > '); +if (answer.toLowerCase() != 'y') { + throw new Error('expected arm to be in raised position, please check servo wiring.'); +} + +tj.lowerArm(); +answer = rl.question('Is TJBot\'s arm in the LOWERED position? Y/N > '); +if (answer.toLowerCase() != 'y') { + throw new Error('expected arm to be in lowered position, please check servo wiring.'); +} + +tj.wave(); +answer = rl.question('Did TJBot wave? Y/N > '); +if (answer.toLowerCase() != 'y') { + throw new Error('expected tj to wave, please check servo wiring.'); +} diff --git a/bootstrap/tests/test.speaker.js b/bootstrap/tests/test.speaker.js new file mode 100644 index 00000000..add7f87a --- /dev/null +++ b/bootstrap/tests/test.speaker.js @@ -0,0 +1,31 @@ +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +'use strict'; + +const rl = require('readline-sync'); + +const TJBot = require('tjbot'); + +var tj = new TJBot(['speaker'], {}, {}); + +var sound = '/usr/share/sounds/alsa/Front_Center.wav'; +tj.play(sound).then(function() { + var answer = rl.question('Did you hear the words "Front Center"? 
Y/N > '); + if (answer.toLowerCase() != 'y') { + throw new Error('expected audio to play, please check that you speaker is plugged in, turned on, and set as the current audio output device.'); + } +}); diff --git a/recipes/sentiment_analysis/blacklist-rgb-led.conf b/bootstrap/tjbot-blacklist-snd.conf old mode 100755 new mode 100644 similarity index 100% rename from recipes/sentiment_analysis/blacklist-rgb-led.conf rename to bootstrap/tjbot-blacklist-snd.conf diff --git a/featured/README.md b/featured/README.md index f50c7e87..b665039c 100644 --- a/featured/README.md +++ b/featured/README.md @@ -1,27 +1,13 @@ # Featured Recipes -Featured recipes are some exciting recipes created by TJBot enthusiasts. +Featured recipes are exciting recipes created by members of the TJBot community. +If you would like your own recipe included here, please [send us email](mailto:tjbot@us.ibm.com) with a link to your repository and a demo video. -- **[TJWave](https://github.com/victordibia/tjwave)** : Fun controller recipe for TJBot's servo arm. -- **[Build a TJBot That Cares](https://medium.com/ibm-watson-developer-cloud/build-a-chatbot-that-cares-part-1-d1c273e17a63#.vtxwvsydl)** : This recipe puts a voice interface onto TJBot, then gives it the ability to converse and understand your emotional tones. -- **[SwiftyTJ](https://github.com/jweisz/swifty-tj)** : This recipe enables TJBot’s LED to be controlled from a Swift program. -- **[TJ Weather](https://github.com/suprbh/tjweather)** : TJBOt as a personal weather station -- **[Project Intu](https://github.com/watson-intu/self-sdk#raspberry-pi)** : Project Intu is an experimental service that allows developers to quickly and seamlessly integrate various cognitive services, such as Conversation and Speech-to-Text, with the capabilities of various devices, spaces and physical objects. While not being a recipe, Intu is a middleware that can be installed on TJBot and used to architect more complex interactions for your robot. Learn [more here](http://www.ibm.com/watson/developercloud/project-intu.html). - - - -# Contributing Your Own Recipes - -TJBot is open source and we'd love to see what you can make with him. If you would like your recipe to be featured here, send us an email at tjbot@us.ibm.com. -Here are some ideas to get you started. - - - Visual recognition. TJBot has a placeholder behind his left eye to insert a Raspberry Pi camera. Try connecting the camera to the Watson Visual Recognition API so TJ can say hello when he sees you. - - - IoT. The Watson IoT service lets you control smart home devices (e.g. Philips Hue, LIFX lights, etc. ). Connect TJBot to IoT and have him control your home. - - - Connected robots. You can program multiple TJBots to send messages to each other using the Watson IoT platform. - -Submit a pull request for your receipe instructions and code and send a link to a demo video to tjbot@us.ibm.com (Vimeo & YouTube preferred). We will review it and if we decide to include it in our repository, you'll be listed as the developer. See [CONTRIBUTING.md](../CONTRIBUTING.md). - -We cannot wait to see what you build with TJBot! +- **[TJWave](https://github.com/victordibia/tjwave)** by [Victor Dibia](https://github.com/victordibia). Fun controller recipe for TJBot's servo arm. +- **[Build a TJBot That Cares](https://medium.com/ibm-watson-developer-cloud/build-a-chatbot-that-cares-part-1-d1c273e17a63#.vtxwvsydl)** by [Josh Zheng](https://github.com/boxcarton). 
This recipe puts a voice interface onto TJBot, then gives it the ability to converse and understand your emotional tones. +- **[SwiftyTJ](https://github.com/jweisz/swifty-tj)** by [Justin Weisz](https://github.com/jweisz). This recipe enables TJBot’s LED to be controlled from a Swift program. +- **[TJ Weather](https://github.com/suprbh/tjweather)** by [suprbh](https://github.com/suprbh). Use TJBot as a personal weather station. +- **[VisualTJ](https://github.com/samuelvogelmann/visualtj)** by [Samuel Vogelmann](https://github.com/samuelvogelmann). A Node-RED based application to make your TJBot see and recognize the world. +- **[Tell the Time](https://github.com/damiancummins/tell_the_time)** by [Damian Cummins](https://github.com/damiancummins). Have TJBot tell you the time. +- **[Project Intu](https://github.com/watson-intu/self-sdk#raspberry-pi)** by [Watson Intu](https://github.com/watson-intu). Project Intu is an experimental service that allows developers to quickly and seamlessly integrate various cognitive services, such as Conversation and Speech-to-Text, with the capabilities of various devices, spaces and physical objects. While not being a recipe, Intu is a middleware that can be installed on TJBot and used to architect more complex interactions for your robot. [Learn more](http://www.ibm.com/watson/developercloud/project-intu.html) about Project Intu. \ No newline at end of file diff --git a/images/wiring.png b/images/wiring.png new file mode 100644 index 00000000..569a1ecd Binary files /dev/null and b/images/wiring.png differ diff --git a/recipes/README.md b/recipes/README.md index 3781d22e..54e301b8 100644 --- a/recipes/README.md +++ b/recipes/README.md @@ -1,33 +1,38 @@ # Recipes -Recipes are step by step instructions to help you connect your TJBot to [Watson cognitive services](https://www.ibm.com/watson/developercloud/services-catalog.html). -The recipes are designed to be run on a Raspberry Pi. You can either run one of the available recipes or create your own recipe that brings sweet ideas to life using any combination of [Watson API](https://www.ibm.com/watson/developercloud/services-catalog.html)! +Recipes are step by step instructions to help you connect your TJBot to [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html). + +The recipes are designed to be run on a Raspberry Pi. You can either run one of our sample recipes below, or create your own recipe that brings your ideas to life using [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html)! ### [Speech to Text](speech_to_text) -> Use your voice to control a LED with Watson [[instructables](http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/speech_to_text)] +> Use your voice to control TJBot's LED with Watson [[instructables](http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/)] -This module provides a Node.js code to control a [8mm NeoPixel RGB led](https://www.adafruit.com/products/1734) using voice commands. It uses [Watson Speech to Text API](https://www.ibm.com/watson/developercloud/speech-to-text.html). +This recipe lets you control the [8mm NeoPixel RGB LED](https://www.adafruit.com/products/1734) using voice commands. It uses the [Watson Speech to Text API](https://www.ibm.com/watson/developercloud/speech-to-text.html).
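For a sense of what this recipe involves, here is a hedged sketch of the same idea built directly on the TJBot library calls that appear elsewhere in this change (`tj.listen()` and `tj.shine()`). It is not the actual code in `recipes/speech_to_text`, and it assumes a `config.js` that exports Watson credentials in the same shape as `recipes/conversation/config.default.js`, including Speech to Text credentials.

```
// Hedged sketch: shine the LED when a color is named; not the actual
// recipes/speech_to_text code. Assumes config.js exports Watson credentials
// in the same shape as recipes/conversation/config.default.js.
var TJBot = require('tjbot');
var config = require('./config');

// this sketch needs a microphone (for speech to text) and the LED
var tj = new TJBot(['microphone', 'led'], {}, config.credentials);

// listen for speech and shine the LED when a supported color is spoken
tj.listen(function(msg) {
    ['red', 'green', 'blue', 'orange', 'off'].forEach(function(color) {
        if (msg.toLowerCase().indexOf(color) >= 0) {
            tj.shine(color);
        }
    });
});
```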
[![link to a full video for use voice to control LED](https://img.youtube.com/vi/Wvnh7ie3D6o/0.jpg)](https://www.youtube.com/watch?v=Wvnh7ie3D6o) -###[Sentiment Analysis](sentiment_analysis) -> Make your bot respond to emotions using Watson [[instructables](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/sentiment_analysis)] +### [Sentiment Analysis](sentiment_analysis) +> Make TJBot respond to emotions with Watson [[instructables](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/)] -This module provides Node.js code to control the color of a [8mm NeoPixel RGB led](https://www.adafruit.com/products/1734) based on public perception of a given keyword (e.g. "heart" or "iPhone"). The module connects to Twitter to analyze the public sentiment about the given keyword in real time, and updates the color of the LED to reflect the sentiment. It uses [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html) and Twitter API. +This recipe shines TJBot's [8mm NeoPixel RGB LED](https://www.adafruit.com/products/1734) different colors based on the emotions present in Twitter for a given keyword. It uses [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html) and the [Twitter API](https://dev.twitter.com/overview/api). -###[Conversation](conversation) -> Build a talking robot with Watson [[instructables](http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/)] [[github](https://github.com/ibmtjbot/tjbot/tree/master/recipes/conversation)] +### [Conversation](conversation) +> Build a talking robot with Watson [[instructables](http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/)] -This module provides Node.js code to get your Raspberry Pi to talk. It uses [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html) to parse audio from the microphone, uses [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) to generate a response, and uses [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) to "read" out this response! +This recipe demonstrates how to use the [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html), [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html), and [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) services to build a talking chatbot. ## Featured Recipes -Check out some [featured TJ Bot recipes](../featured/README.md) created by the community. +Check out the [featured TJBot recipes](../featured) created by members of our community. ## Contributing Your Own Recipes -TJ Bot is open source and we'd love to see what you can make with him. If you have created your own recipe, we would love to include it as a [featured recipe](../featured/README.md)! Just submit a pull request for your recipe instructions and code and send a link to a demo video to tjbot@us.ibm.com (Vimeo & YouTube preferred). We will review it and if we decide to include it in our repository, you'll be listed as the developer. See [CONTRIBUTING.md](../CONTRIBUTING.md). +TJBot is an open source project designed to make it fun and easy to interact with [Watson](https://www.ibm.com/watson/developercloud/services-catalog.html). 
+ +If you would like your own recipe included in our [featured recipe](../featured) list, please [send us email](mailto:tjbot@us.ibm.com) with a link to your repository and a demo video. + +For guidelines on contributing to the TJBot project, please refer to the [contribution guide](../CONTRIBUTING.md). -We cannot wait to see what you build with TJBot! +We can't wait to see what you make with TJBot! diff --git a/recipes/conversation/.gitignore b/recipes/conversation/.gitignore index 2ba1a0ef..24e3f987 100644 --- a/recipes/conversation/.gitignore +++ b/recipes/conversation/.gitignore @@ -1,3 +1,6 @@ +# config file +config.js + # Logs logs *.log @@ -21,6 +24,9 @@ coverage # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) .grunt +# Bower dependency directory (https://bower.io/) +bower_components + # node-waf configuration .lock-wscript @@ -28,8 +34,11 @@ coverage build/Release # Dependency directories -node_modules -jspm_packages +node_modules/ +jspm_packages/ + +# Typescript v1 declaration files +typings/ # Optional npm cache directory .npm @@ -43,5 +52,11 @@ jspm_packages # Output of 'npm pack' *.tgz +# Yarn Integrity file +.yarn-integrity + +# dotenv environment variables file +.env + # .DS_Store files .DS_Store diff --git a/recipes/conversation/README.md b/recipes/conversation/README.md old mode 100755 new mode 100644 index 2f077f2b..0cdb4556 --- a/recipes/conversation/README.md +++ b/recipes/conversation/README.md @@ -1,82 +1,43 @@ # Conversation +> Chat with TJBot! -> Build a talking robot with [Watson](https://www.ibm.com/watson/developercloud/conversation.html) +This recipe uses the [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) and [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) services to turn TJ into a chatting robot. -This module provides Node.js code to get your Raspberry Pi to talk. It uses [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html) to parse audio from the microphone, uses [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) to generate a response, and uses [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) to "read" out this response! +## Hardware +This recipe requires a TJBot with a microphone and a speaker. -**This will only run on the Raspberry Pi.** +## Build and Run +First, make sure you have configured your Raspberry Pi for TJBot. + $ cd tjbot/bootstrap && sudo sh bootstrap.sh -## How It Works -- Listens for voice commands -- Sends audio from the microphone to the Watson Speech to Text Service - STT to transcribe [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html) -- Parses the text looking for the attention word -- Once the attention word is recognized, the text is sent to [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) to generate the response. -- The response is sent to [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) to generate the audio file. -- The robot speaks the response via the Alsa audio playback tools +Go to the `recipes/conversation` folder and install the dependencies. -##Hardware -Check out [this instructable] (http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/) to prepare your system. You will need a Raspberry Pi 3, Microphone, Speaker, and [the TJBot cardboard](https://ibmtjbot.github.io/#gettj). 
+ $ cd ../recipes/conversation + $ npm install -##Build -> We recommend starting with our [step by step instructions] (http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/) to build this recipe. +Create instances of the [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) and [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) services and note the authentication credentials. -Get the sample code and go to the application folder. Please see this [instruction on how to clone](https://help.github.com/articles/cloning-a-repository/) a repository. +Import the `workspace-sample.json` file into the Watson Conversation service and note the workspace ID. - cd recipes/conversation +Make a copy the default configuration file and update it with the Watson service credentials and the conversation workspace ID. -Install ALSA tools (required for recording audio on Raspberry Pi) + $ cp config.default.js config.js + $ nano config.js + - sudo apt-get install alsa-base alsa-utils +Run! -Install Dependencies + sudo node conversation.js - npm install +> Note the `sudo` command. Root user access is required to run TJBot recipes. -Set the audio output to your audio jack. For more audio channels, check the [config guide. ](https://www.raspberrypi.org/documentation/configuration/audio-config.md) +# Watson Services +- [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html) +- [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) - amixer cset numid=3 1 - // This sets the audio output to option 1 which is your Pi's Audio Jack. Option 0 = Auto, Option 2 = HDMI. An alternative is to type sudo raspi-config and change the audio to 3.5mm audio jack. +# License +This project is licensed under Apache 2.0. Full license text is available in [LICENSE](../../LICENSE). -Update the Config file with your Bluemix credentials for all three Watson services. - - edit config.js - enter your watson usernames, passwords and versions. - -## Creating a Conversation Flow -You need to train your robot with what to say and when to say it. For that, we use [Watson Conversation] (https://www.ibm.com/watson/developercloud/conversation.html). Open a browser and go to [IBM Watson Conversation link](http://www.ibmwatsonconversation.com) -From the top right corner, select the name of your conversation service and click 'create' to create a new workspace for your robot. You can create intents and dialogs there. [Here](http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/#step6) is a step-by-step instructions to create a conversation flow. - -##Running - -Start the application - - node conversation.js - -Then you should be able to speak to the microphone. -The robot gets better with training. You can go to your [Watson conversation module](http://www.ibmwatsonconversation.com) to train the robot with more intents and responses. - -##Customization -The attention word is the word you say to get the attention of the robot. -The default attention word is set to 'Watson' but you can change it from config.js. Some words are easier for the robot to recognize. If decided to change the attention word, experiment with multiple words and pick the one that is easier for the robot to recognize. - -The default voice of TJBot is set to a male voice (`en-US_MichaelVoice`) but you can change it from config.js. 
Two female voices are available for TJBot (`en-US_AllisonVoice` and `en-US_LisaVoice`). - - // The attention word to wake up the robot. - exports.attentionWord ='watson'; - - // You can change the voice of the robot to your favorite voice. - exports.voice = 'en-US_MichaelVoice'; - // Some of the available options are: - // en-US_AllisonVoice - // en-US_LisaVoice - // en-US_MichaelVoice (the default) - -# Dependencies List - -- Watson Developer Cloud - [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html), [Watson Conversation](https://www.ibm.com/watson/developercloud/conversation.html), and [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html). -- mic npm package : for reading audio input - - -## Contributing +# Contributing See [CONTRIBUTING.md](../../CONTRIBUTING.md). diff --git a/recipes/conversation/config.default.js b/recipes/conversation/config.default.js new file mode 100644 index 00000000..3a384186 --- /dev/null +++ b/recipes/conversation/config.default.js @@ -0,0 +1,26 @@ +// User-specific configuration +exports.conversationWorkspaceId = ''; // replace with the workspace identifier of your conversation + +// Create the credentials object for export +exports.credentials = {}; + +// Watson Conversation +// https://www.ibm.com/watson/developercloud/conversation.html +exports.credentials.conversation = { + password: '', + username: '' +}; + +// Watson Speech to Text +// https://www.ibm.com/watson/developercloud/speech-to-text.html +exports.credentials.speech_to_text = { + password: '', + username: '' +}; + +// Watson Text to Speech +// https://www.ibm.com/watson/developercloud/text-to-speech.html +exports.credentials.text_to_speech = { + password: '', + username: '' +}; diff --git a/recipes/conversation/config.js b/recipes/conversation/config.js deleted file mode 100755 index 860e450e..00000000 --- a/recipes/conversation/config.js +++ /dev/null @@ -1,24 +0,0 @@ -// The attention word to wake up the robot. -exports.attentionWord ='watson'; - -// You can change the voice of the robot to your favorite voice. -exports.voice = 'en-US_MichaelVoice'; -// Some of the available options are: -// en-US_AllisonVoice -// en-US_LisaVoice -// en-US_MichaelVoice (the default) - -// Credentials for Watson Speech to Text service - -exports.STTPassword = 'xxxxxx' ; -exports.STTUsername = 'xxx-xxx-xxx' ; - - -// Credentials for Watson Conversation service -exports.ConPassword = 'xxxxxx' ; -exports.ConUsername = 'xxx-xxx-xxx' ; -exports.ConWorkspace = 'xxx-xxx-xxx'; - -//Credentials for Watson Text to Speech service -exports.TTSPassword = 'xxxxxx' ; -exports.TTSUsername = 'xxx-xxx-xxx' ; diff --git a/recipes/conversation/conversation.js b/recipes/conversation/conversation.js old mode 100755 new mode 100644 index e205d1ac..bc091f32 --- a/recipes/conversation/conversation.js +++ b/recipes/conversation/conversation.js @@ -1,185 +1,55 @@ -/************************************************************************ -* Copyright 2016 IBM Corp. All Rights Reserved. -* -* Watson Maker Kits -* -* This project is licensed under the Apache License 2.0, see LICENSE.* -* -************************************************************************ -* -* Build a talking robot with Watson. -* This module uses Watson Speech to Text, Watson Conversation, and Watson Text to Speech. 
-* To run: node conversation.js - -* Follow the instructions in http://www.instructables.com/id/Build-a-Talking-Robot-With-Watson-and-Raspberry-Pi/ to -* get the system ready to run this code. -*/ - -var watson = require('watson-developer-cloud'); //to connect to Watson developer cloud -var config = require("./config.js") // to get our credentials and the attention word from the config.js files -var exec = require('child_process').exec; -var fs = require('fs'); -var conversation_response = ""; -var attentionWord = config.attentionWord; //you can change the attention word in the config file - -/************************************************************************ -* Step #1: Configuring your Bluemix Credentials -************************************************************************ -In this step we will be configuring the Bluemix Credentials for Speech to Text, Watson Conversation -and Text to Speech services. -*/ - -var speech_to_text = watson.speech_to_text({ - username: config.STTUsername, - password: config.STTPassword, - version: 'v1' -}); - -var conversation = watson.conversation({ - username: config.ConUsername, - password: config.ConPassword, - version: 'v1', - version_date: '2016-07-11' -}); - -var text_to_speech = watson.text_to_speech({ - username: config.TTSUsername, - password: config.TTSPassword, - version: 'v1' -}); - -/************************************************************************ -* Step #2: Configuring the Microphone -************************************************************************ -In this step, we configure your microphone to collect the audio samples as you talk. -See https://www.npmjs.com/package/mic for more information on -microphone input events e.g on error, startcomplete, pause, stopcomplete etc. -*/ - -// Initiate Microphone Instance to Get audio samples -var mic = require('mic'); -var micInstance = mic({ 'rate': '44100', 'channels': '2', 'debug': false, 'exitOnSilence': 6 }); -var micInputStream = micInstance.getAudioStream(); - -micInputStream.on('data', function(data) { - //console.log("Recieved Input Stream: " + data.length); -}); - -micInputStream.on('error', function(err) { - console.log("Error in Input Stream: " + err); -}); - -micInputStream.on('silence', function() { - // detect silence. -}); -micInstance.start(); -console.log("TJBot is listening, you may speak now."); - -var textStream ; - -/************************************************************************ -* Step #3: Converting your Speech Commands to Text -************************************************************************ -In this step, the audio sample is sent (piped) to "Watson Speech to Text" to transcribe. -The service converts the audio to text and saves the returned text in "textStream" -You can also set the language model for your speech input. -The following language models are available - ar-AR_BroadbandModel - en-UK_BroadbandModel - en-UK_NarrowbandModel - en-US_BroadbandModel (the default) - en-US_NarrowbandModel - es-ES_BroadbandModel - es-ES_NarrowbandModel - fr-FR_BroadbandModel - ja-JP_BroadbandModel - ja-JP_NarrowbandModel - pt-BR_BroadbandModel - pt-BR_NarrowbandModel - zh-CN_BroadbandModel - zh-CN_NarrowbandModel -*/ - -var recognizeparams = { - content_type: 'audio/l16; rate=44100; channels=2', - interim_results: true, - keywords: [attentionWord], - smart_formatting: true, - keywords_threshold: 0.5, - model: 'en-US_BroadbandModel' // Specify your language model here +/** + * Copyright 2016 IBM Corp. All Rights Reserved. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +var TJBot = require('tjbot'); +var config = require('./config'); + +// obtain our credentials from config.js +var credentials = config.credentials; + +// obtain user-specific config +var WORKSPACEID = config.conversationWorkspaceId; + +// these are the hardware capabilities that TJ needs for this recipe +var hardware = ['microphone', 'speaker']; + +// turn on debug logging to the console +var tjConfig = { + verboseLogging: true }; +// instantiate our TJBot! +var tj = new TJBot(hardware, tjConfig, credentials); -textStream = micInputStream.pipe(speech_to_text.createRecognizeStream(recognizeparams)); - -textStream.setEncoding('utf8'); +console.log("You can ask me to introduce myself or tell you a joke."); +console.log("Try saying, \"" + tj.configuration.robot.name + ", please introduce yourself\" or \"" + tj.configuration.robot.name + ", who are you?\""); +console.log("You can also say, \"" + tj.configuration.robot.name + ", tell me a joke!\""); -/********************************************************************* -* Step #4: Parsing the Text and create a response -********************************************************************* -In this step, we parse the text to look for attention word and send that sentence -to watson conversation to get appropriate response. You can change it to something else if needed. -Once the attention word is detected,the text is sent to Watson conversation for processing. The response is generated by Watson Conversation and is sent back to the module. 
-*/ -var context = {} ; // Save information on conversation context/stage for continous conversation -textStream.setEncoding('utf8'); -textStream.on('data', function(str) { - console.log(' ===== Speech to Text ===== : ' + str); // print the text once received - - if (str.toLowerCase().indexOf(attentionWord.toLowerCase()) >= 0) { - var res = str.toLowerCase().replace(attentionWord.toLowerCase(), ""); - console.log("msg sent to conversation:" ,res); - conversation.message({ - workspace_id: config.ConWorkspace, - input: {'text': res}, - context: context - }, function(err, response) { - if (err) { - console.log('error:', err); - } else { - context = response.context ; //update conversation context - - if (Array.isArray(response.output.text)) { - conversation_response = response.output.text.join(' ').trim(); - } else { - conversation_response = undefined; - } +// listen for utterances with our attentionWord and send the result to +// the Conversation service +tj.listen(function(msg) { + // check to see if they are talking to TJBot + if (msg.startsWith(tj.configuration.robot.name)) { + // remove our name from the message + var turn = msg.toLowerCase().replace(tj.configuration.robot.name.toLowerCase(), ""); - if (conversation_response){ - var params = { - text: conversation_response, - voice: config.voice, - accept: 'audio/wav' - }; - - console.log("Result from conversation:" ,conversation_response); - /********************************************************************* - Step #5: Speak out the response - ********************************************************************* - In this step, we text is sent out to Watsons Text to Speech service and result is piped to wave file. - Wave files are then played using alsa (native audio) tool. - */ - tempStream = text_to_speech.synthesize(params).pipe(fs.createWriteStream('output.wav')).on('close', function() { - var create_audio = exec('aplay output.wav', function (error, stdout, stderr) { - if (error !== null) { - console.log('exec error: ' + error); - } - }); - }); - }else { - console.log("The response (output) text from your conversation is empty. 
Please check your conversation flow \n" + JSON.stringify( response)) - } - - } - - }) - } else { - console.log("Waiting to hear", attentionWord); - } -}); - -textStream.on('error', function(err) { - console.log(' === Watson Speech to Text : An Error has occurred =====') ; // handle errors - console.log(err) ; - console.log("Press +C to exit.") ; + // send to the conversation service + tj.converse(WORKSPACEID, turn, function(response) { + // speak the result + tj.speak(response.description); + }); + } }); diff --git a/recipes/conversation/package.json b/recipes/conversation/package.json old mode 100755 new mode 100644 index f8f8f8b2..f4e46f17 --- a/recipes/conversation/package.json +++ b/recipes/conversation/package.json @@ -1,18 +1,26 @@ { - "name": "conversationkit", - "version": "1.0.0", - "description": "TJ Bot Conversation recipe", - "main": "Conversation.js", - "scripts": { - "start": "node conversation.js", - "test": "echo \"Error: no test specified\" && exit 1" + "name": "conversation", + "description": "TJBot conversation recipe", + "version": "0.0.1", + "author": "Justin Weisz ", + "bugs": { + "url": "https://github.com/ibmtjbot/tjbot/issues" + }, + "dependencies": { + "tjbot": "latest" }, - "repository": { + "main": "conversation.js", + "homepage": "https://github.com/ibmtjbot/tjbot/tree/master/recipes/conversation", + "keywords": [ + "tjbot" + ], + "license": "Apache-2.0", + "repository": { "type": "git", - "url": "git@github.ibm.com:watsonkits/conversationkit.git" + "url": "git@github.com:ibmtjbot/tjbot.git" }, - "dependencies": { - "mic": "^2.1.1", - "watson-developer-cloud": "^2.2.0" + "scripts": { + "start": "node conversation.js", + "test": "echo \"Error: no test specified\" && exit 1" } -} +} \ No newline at end of file diff --git a/recipes/conversation/workspace-sample.json b/recipes/conversation/workspace-sample.json new file mode 100644 index 00000000..7a2e51b9 --- /dev/null +++ b/recipes/conversation/workspace-sample.json @@ -0,0 +1 @@ +{"name":"TJBot Conversation","created":"2017-01-06T19:37:51.751Z","intents":[{"intent":"introduce-self","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z","examples":[{"text":"introduce yourself","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"},{"text":"please introduce yourself","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"},{"text":"tell me about yourself","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"},{"text":"tell me who you are","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"},{"text":"what are you","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"},{"text":"who are you","created":"2017-01-06T19:38:41.749Z","updated":"2017-01-06T19:38:41.749Z"}],"description":null},{"intent":"tell-joke","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z","examples":[{"text":"i want to hear a joke","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z"},{"text":"make me laugh","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z"},{"text":"please tell me a joke","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z"},{"text":"tell me a joke","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z"},{"text":"tell me something 
funny","created":"2017-01-06T19:39:18.670Z","updated":"2017-01-06T19:39:18.670Z"}],"description":null}],"updated":"2017-01-06T19:50:51.713Z","entities":[],"language":"en","metadata":null,"description":"Sample conversation for TJBot.","dialog_nodes":[{"go_to":null,"output":{},"parent":null,"context":null,"created":"2017-01-06T19:40:04.560Z","updated":"2017-01-06T19:43:11.206Z","metadata":null,"conditions":"#tell-joke","description":null,"dialog_node":"node_1_1483731604403","previous_sibling":null},{"go_to":null,"output":{"text":{"values":["Hi, I'm TJBot!","Hi, my name is TJBot!","I'm TJBot, it's nice to meet you!","My name is TJBot!"],"selection_policy":"random"}},"parent":null,"context":null,"created":"2017-01-06T19:41:56.120Z","updated":"2017-01-06T19:45:53.041Z","metadata":null,"conditions":"#introduce-self","description":null,"dialog_node":"node_2_1483731715954","previous_sibling":"node_1_1483731604403"},{"type":"response_condition","go_to":null,"output":{"text":{"values":["A robot walks into a bar. “What can I get you?” the bartender asks. “I need something to loosen up,” the robot replies. So the bartender serves him a screwdriver.","Why do robots have summer holidays? To recharge their batteries.","Why did the robot cross the road? Because he was programmed to do it.","Why did the robot marry his fiancée? He couldn’t resistor.","Why was the robot bankrupt? He had used all his cache.","I'm sorry, I've run out of jokes. How about you create some new ones for me?"],"selection_policy":"sequential"}},"parent":"node_1_1483731604403","context":null,"created":"2017-01-06T19:43:10.878Z","updated":"2017-01-06T19:50:51.713Z","metadata":null,"conditions":null,"description":null,"dialog_node":"node_4_1483731790477","previous_sibling":null}],"workspace_id":"311640d4-12dd-42e2-99a6-304794077daf","counterexamples":[]} diff --git a/recipes/sentiment_analysis/.gitignore b/recipes/sentiment_analysis/.gitignore index 2ba1a0ef..24e3f987 100644 --- a/recipes/sentiment_analysis/.gitignore +++ b/recipes/sentiment_analysis/.gitignore @@ -1,3 +1,6 @@ +# config file +config.js + # Logs logs *.log @@ -21,6 +24,9 @@ coverage # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) .grunt +# Bower dependency directory (https://bower.io/) +bower_components + # node-waf configuration .lock-wscript @@ -28,8 +34,11 @@ coverage build/Release # Dependency directories -node_modules -jspm_packages +node_modules/ +jspm_packages/ + +# Typescript v1 declaration files +typings/ # Optional npm cache directory .npm @@ -43,5 +52,11 @@ jspm_packages # Output of 'npm pack' *.tgz +# Yarn Integrity file +.yarn-integrity + +# dotenv environment variables file +.env + # .DS_Store files .DS_Store diff --git a/recipes/sentiment_analysis/README.md b/recipes/sentiment_analysis/README.md index 7e07c909..81e54d09 100644 --- a/recipes/sentiment_analysis/README.md +++ b/recipes/sentiment_analysis/README.md @@ -1,118 +1,76 @@ # Sentiment Analysis - > Make your robot respond to emotions using [Watson](http://www.ibm.com/watson/developercloud/tone-analyzer.html) -This module provides Node.js code to control the color of a [8mm NeoPixel RGB led](https://www.adafruit.com/products/1734) based on public perception of a given keyword (e.g. "heart" or "iPhone"). The module connects to Twitter to analyze the public sentiment about the given keyword in real time, and updates the color of the LED to reflect the sentiment. 
- -**This will only run on the Raspberry Pi.** - -[![link to a full video for use voice to control LED](https://img.youtube.com/vi/KU8DNzZNdBY/0.jpg)](https://www.youtube.com/watch?v=KU8DNzZNdBY) - -##How It Works -- Connects to the Twitter Streaming service and listens for tweets related to a given search keyword -- Sends tweets to the Watson Tone Analyzer service to determine the emotions contained in them -- Changes the color of the LED based on the emotions found by Watson +This recipe uses the [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html) service to shine TJBot’s LED different colors based on the emotions present in Twitter for a given keyword. It also uses the [Twitter API](https://dev.twitter.com/overview/api) to fetch tweets. -##Hardware -Check out [this instructable] (http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/) to get the wiring diagram and prepare your system. You will need a Raspberry Pi 3, a [8mm NeoPixel RGB LED] (https://www.adafruit.com/products/1734), 3 Female/female jumper wires, and [the TJBot cardboard](http://ibm.biz/mytjbot) +## Hardware +This recipe requires a TJBot with an LED. -##Build ->We recommend starting with [our step by step instructions](http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/) to build this recipe. +## Build and Run +First, make sure you have configured your Raspberry Pi for TJBot. -Get the sample code and go to the application folder. Please see this [instruction on how to clone](https://help.github.com/articles/cloning-a-repository/) a repository. + $ cd tjbot/bootstrap && sudo sh bootstrap.sh - cd recipes/sentiment_analysis +Go to the `recipes/sentiment_analysis` folder and install the dependencies. -Install Dependencies + $ cd ../recipes/sentiment_analysis + $ npm install - npm install +Create an instance of the [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html) service and note the authentication credentials. -Add your Bluemix Tone Analyzer credentials +Create a set of [Twitter developer credentials](https://apps.twitter.com/) and note the consumer key, consumer secret, access token key, and access token secret. - edit config.js - enter your Watson Tone Analyzer username, password and version. +Make a copy the default configuration file and update it with the Watson service credentials. -Since this module will be sourcing the text from Twitter, you will need valid [Twitter developer credentials](https://apps.twitter.com/) in the form of a set of consumer and access tokens/keys. + $ cp config.default.js config.js + $ nano config.js + -Add your Twitter credentials +Run! - edit config.js - enter your Twitter credentials. - -##Testing the LED -The wiring diagram is [here] (http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/). -Before running the code, you may test your LED setup to make sure the connections are correct and the library is properly installed. When you run this test module, it should turn on your LED. + sudo node sentiment.js - sudo node led_test.js +> Note the `sudo` command. Root user access is required to run TJBot recipes. -> Note the `sudo` command. Root user access is required to control the NeoPixel LEDs. +At this point, TJBot will begin listening to Twitter for tweets containing the specified keyword (specified in `exports.sentiment_keyword`). It may take some time to collect enough tweets to perform sentiment analysis, so please be patient. 
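As a rough sketch, a filled-in `config.js` mirrors the `config.default.js` template included in this recipe; every value below is a placeholder to be replaced with your own keyword and credentials:

    // config.js: sketch only; substitute your own values
    exports.sentiment_keyword = "education";           // keyword to monitor in Twitter
    exports.sentiment_analysis_frequency_sec = 30;     // analyze sentiment every N seconds

    exports.credentials = {};

    // Watson Tone Analyzer credentials
    exports.credentials.tone_analyzer = {
        username: '<tone-analyzer-username>',
        password: '<tone-analyzer-password>'
    };

    // Twitter developer credentials
    exports.credentials.twitter = {
        consumer_key: '<consumer-key>',
        consumer_secret: '<consumer-secret>',
        access_token_key: '<access-token-key>',
        access_token_secret: '<access-token-secret>'
    };
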
-If the LED does not light up, you can try moving the power from 3.3 to 5 volts. If neither the 3.3v or 5v pins work, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi. +## Customize +Change the keyword TJBot monitors by editing `config.js` and changing the line -If you have problems with the setup, please refer to [Adafruit's Neopixel on Raspbeery Pi guide](https://learn.adafruit.com/neopixels-on-raspberry-pi/overview -) to troubleshoot. + exports.sentiment_keyword = "happy"; // keyword to monitor in Twitter -##Running +You can also change the colors that TJBot shines. The table below shows the colors that TJBot shines for each emotion. -Start the application +| Emotion | Color | +| --- | --- | +| Anger | Red | +| Joy | Yellow | +| Fear | Green | +| Disgust | Blue | +| Sadness | Magenta | - sudo node sentiment.js +You can change these colors by editing the `shineForEmotion()` function. -> Note the sudo command. Root user access is required to control the NeoPixel LEDs. +## Troubleshoot +If the LED does not light up, you can try moving the power from 3.3 to 5 volts. If neither the 3.3v or 5v pins work, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi. -Doesn't your Pi show the right color? No worries, we can fix it. The LED library uses the PWM module (GPIO 18) to drive the data line of the LEDs. This conflicts with the built-in audio hardware, which uses the same pin to drive the audio output. Depending on your configuration of Raspbian, the sound drivers may be more aggressive in taking away control of GPIO 18 from other processes. If your LED shows random colors instead of the expected color, use this trick to fix it. +If the LED shows the wrong color, or flashes different colors very rapidly, it may be due to interference with the built-in audio hardware. Depending on your configuration of Raspbian, the sound drivers may be more aggressive in taking away control of GPIO 18 from other processes. If your LED shows random colors instead of the expected color, use this trick to fix it. - sudo cp blacklist-rgb-led.conf /etc/modprobe.d/ - sudo update-initramfs -u + sudo cp bootstrap/tjbot-blacklist-snd.conf /etc/modprobe.d/ + sudo update-initramfs -u + sudo reboot -Reboot and confirm no "snd" modules are running by executing the command "lsmod". +After TJBot finishes rebooting, confirm no "snd" modules are running. - lsmod + lsmod -## Customization -The default sentiment keyword is set to 'people' but you can change it from config.js: +If you have additional difficulties not covered in this guide, please refer to [Adafruit's NeoPixel on Raspbeery Pi guide](https://learn.adafruit.com/neopixels-on-raspberry-pi/overview) to troubleshoot. - edit config.js - Update searchkeyword - searchkeyword = "people"; +# Watson Services +- [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html) -The default behaviour of the module assigns the following colors to sentiments. +# License +This project is licensed under Apache 2.0. Full license text is available in [LICENSE](../../LICENSE). -| Emotion | Color | -| --- | --- | -| Anger | Red | -| Joy | Yellow | -| Fear | Purple | -| Disgust | Green | -| Sadness | Blue | - -You can change this mapping by editing `sentiment.js` to add your favorite colors. Note that colors are specified using the hexademical format. 
- - var red = 0x00ff00 ; - var green = 0xff0000 ; - var blue = 0x0000ff ; - var yellow = 0xffff00 ; - var purple = 0x00ffff ; - - function processEmotion(emotion){ - console.log("Current Emotion Around " + searchkeyword + " is ", emotion.tone_id); - if (emotion.tone_id == "anger"){ - setLED(red); - }else if(emotion.tone_id == "joy"){ - setLED(yellow); - }else if(emotion.tone_id == "fear"){ - setLED(purple); - }else if(emotion.tone_id == "disgust"){ - setLED(green); - }else if(emotion.tone_id == "sadness"){ - setLED(blue); - } - } - -#Dependencies -- [Watson Tone Analyzer](http://www.ibm.com/watson/developercloud/tone-analyzer.html). -- Twitter npm package : An asynchronous client library for the Twitter REST and Streaming API's. -- [rpi-ws281x-native](https://github.com/beyondscreen/node-rpi-ws281x-native) - npm package for controling a ws281x LED. - -## Contributing +# Contributing See [CONTRIBUTING.md](../../CONTRIBUTING.md). diff --git a/recipes/sentiment_analysis/config.default.js b/recipes/sentiment_analysis/config.default.js new file mode 100644 index 00000000..b491bc04 --- /dev/null +++ b/recipes/sentiment_analysis/config.default.js @@ -0,0 +1,21 @@ +// User-specific configuration +exports.sentiment_keyword = "education"; // keyword to monitor in Twitter +exports.sentiment_analysis_frequency_sec = 30; // analyze sentiment every N seconds + +// Create the credentials object for export +exports.credentials = {}; + +// Watson Tone Analyzer +// https://www.ibm.com/watson/developercloud/tone-analyzer.html +exports.credentials.tone_analyzer = { + password: '', + username: '' +}; + +// Twitter +exports.credentials.twitter = { + consumer_key: '', + consumer_secret: '', + access_token_key: '', + access_token_secret: '' +}; diff --git a/recipes/sentiment_analysis/config.js b/recipes/sentiment_analysis/config.js deleted file mode 100755 index 40f4e773..00000000 --- a/recipes/sentiment_analysis/config.js +++ /dev/null @@ -1,21 +0,0 @@ -//You can modify the search keywork to what you like examples are traffic, celebrities, political debates -searchkeyword = "people"; - -// Twitter credentials - Update with your Twitter credentials -var twittercredentials = {}; -twittercredentials.consumer_key = "xxxxxx" ; -twittercredentials.consumer_secret = "xxxxxx" ; -twittercredentials.access_token_key = "xxxxxx" ; -twittercredentials.access_token_secret = "xxx-xxx-xxx"; - -// Tone Analyzer Credentials - Update with your Bluemix credentals. -var toneanalyzercredentials = {} - -toneanalyzercredentials.password = 'xxxxxx' ; -toneanalyzercredentials.username = 'xxxxxx' ; -toneanalyzercredentials.version = 'v3' ; - -// Export both credentials -exports.twittercredentials = twittercredentials ; -exports.toneanalyzercredentials = toneanalyzercredentials ; -exports.searchkeyword = searchkeyword; diff --git a/recipes/sentiment_analysis/led_test.js b/recipes/sentiment_analysis/led_test.js deleted file mode 100755 index e3c45622..00000000 --- a/recipes/sentiment_analysis/led_test.js +++ /dev/null @@ -1,27 +0,0 @@ -// Run "sudo node led_test.js" from your terminal to test your LED. -// It should set your light to white and turn the LED on. 
- -var ws281x = require('rpi-ws281x-native'); -var NUM_LEDS = 1; // Number of LEDs -ws281x.init(NUM_LEDS); // initialize LEDs - -// ---- reset LED before exit -process.on('SIGINT', function () { - ws281x.reset(); - process.nextTick(function () { process.exit(0); }); -}); - -var color = new Uint32Array(NUM_LEDS); -ws281x.render(color); - -console.log("turning ON the light"); -setLED("on"); // setLED sets the light - -function setLED(value) { - if (value == "on") { - color[0] = 0xffffff ; - } else { - color[0] = 0x000000 ; - } - ws281x.render(color); -} diff --git a/recipes/sentiment_analysis/package.json b/recipes/sentiment_analysis/package.json index 4f1a2215..ec2e2148 100644 --- a/recipes/sentiment_analysis/package.json +++ b/recipes/sentiment_analysis/package.json @@ -1,19 +1,27 @@ { - "name": "sentimentkit", - "version": "1.0.0", - "description": "TJ Bot Sentiment Analysis recipe", - "main": "sentiment.js", - "scripts": { - "test": "node sentiment.js", - "start": "node sentiment.js" + "name": "sentiment", + "version": "0.0.1", + "description": "TJBot sentiment analysis recipe", + "author": "Justin Weisz ", + "bugs": { + "url": "https://github.com/ibmtjbot/tjbot/issues" + }, + "dependencies": { + "tjbot": "latest", + "twitter": "^1.4.0" }, + "main": "sentiment.js", + "homepage": "https://github.com/ibmtjbot/tjbot/tree/master/recipes/sentiment_analysis", + "keywords": [ + "tjbot" + ], + "license": "Apache-2.0", "repository": { "type": "git", - "url": "git@github.ibm.com:watsonkits/sentimentkit.git" + "url": "git@github.com:ibmtjbot/tjbot.git" }, - "dependencies": { - "rpi-ws281x-native": "^0.8.1", - "twitter": "^1.4.0", - "watson-developer-cloud": "^2.0.1" + "scripts": { + "start": "node sentiment.js", + "test": "echo \"Error: no test specified\" && exit 1" } -} +} \ No newline at end of file diff --git a/recipes/sentiment_analysis/sentiment.js b/recipes/sentiment_analysis/sentiment.js index bfd9f65e..6bf3d773 100644 --- a/recipes/sentiment_analysis/sentiment.js +++ b/recipes/sentiment_analysis/sentiment.js @@ -1,166 +1,152 @@ -/************************************************************************ -* Copyright 2016 IBM Corp. All Rights Reserved. -* -* Watson Maker Kits -* -* This project is licensed under the Apache License 2.0, see LICENSE.* -* -************************************************************************ -* -* Control a NeoPixel LED unit connected to a Raspberry Pi pin by analyzing Twitter data using Watson Tone Analyzer -* Must run with root-level protection -* Sudo node sentiment.js +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +var TJBot = require('../../../tjbotlib/lib/tjbot'); +//var TJBot = require('tjbot'); +var config = require('./config'); +var Twitter = require('twitter'); -Based on ws281x library created by Jeremy Garff (jer@jers.net) +// obtain our credentials from config.js +var credentials = config.credentials; -Follow the instructions in http://www.instructables.com/id/Make-Your-Robot-Respond-to-Emotions-Using-Watson/ to -get the system ready to run this code. -*/ +// obtain user-specific config +var SENTIMENT_KEYWORD = config.sentiment_keyword; +var SENTIMENT_ANALYSIS_FREQUENCY_MSEC = config.sentiment_analysis_frequency_sec * 1000; -/************************************************************************ -* Step #1: Configuring your Twitter Credentials -************************************************************************ -In this step, we set up our Twitter credentials and parameters (keywords) and the -fetch tweets related to the keyword as text. Each tweet is added to a tweet buffer as it arrives -*/ -var config = require("./config") ; // Gets your username and passwords from the config.js file -var Twitter = require('twitter'); -var maxtweets = 20 ; -var confidencethreshold = 0.5 ; // The program only responds to the sentiments that are retrieved with a confidence level stronger than this given threshold. You may change the threshold as needed. -var tweetbuffer = [] ; -var searchkeyword = config.searchkeyword; // keyword to use in twitter search -var searchparams = {q: searchkeyword, count: maxtweets}; -var sentimentinterval = 3000 ; // calculate sentiment every 3 seconds. +// these are the hardware capabilities that TJ needs for this recipe +var hardware = ['led']; -var twitterclient = new Twitter({ //Retrieving your Twitter credentials - consumer_key: config.twittercredentials.consumer_key, - consumer_secret: config.twittercredentials.consumer_secret, - access_token_key: config.twittercredentials.access_token_key, - access_token_secret: config.twittercredentials.access_token_secret -}); +// turn on debug logging to the console +var tjConfig = { + verboseLogging: true +}; -fetchTweets(searchparams) -function fetchTweets(searchparams){ - var alltweets = ""; - console.log("Fetching tweets for keyword " + searchkeyword + ". This may take some time."); - twitterclient.stream('statuses/filter', {track: searchkeyword }, function(stream) { - stream.on('data', function(event) { - if(event && event.text){ - var tweet = event.text ; - tweet = tweet.replace(/[^\x00-\x7F]/g, "") // Remove non-ascii characters e.g chinese, japanese, arabic letters etc - tweet = tweet.replace(/(?:https?|ftp):\/\/[\n\S]+/g, ""); // Remove link - if(tweetbuffer.length == maxtweets){ // if we have enough tweets, remove one - tweetbuffer.shift() ; - } - tweetbuffer.push(tweet) +// instantiate our TJBot! +var tj = new TJBot(hardware, tjConfig, credentials); - } - }); +// create the twitter client +var twitter = new Twitter({ + consumer_key: credentials.twitter.consumer_key, + consumer_secret: credentials.twitter.consumer_secret, + access_token_key: credentials.twitter.access_token_key, + access_token_secret: credentials.twitter.access_token_secret +}); - stream.on('error', function(error) { - console.log("\nAn error has occurred while connecting to Twitter. Please check your twitter credentials, and also refer to https://dev.twitter.com/overview/api/response-codes for more on twitter error codes. \n") - throw error; +console.log("I am monitoring twitter for " + SENTIMENT_KEYWORD + ". 
It may take a few moments to collect enough tweets to analyze."); + +// turn the LED off +tj.shine('off'); + +monitorTwitter(); + +// --- + +var TWEETS = []; +var MAX_TWEETS = 100; +var CONFIDENCE_THRESHOLD = 0.5; + +function monitorTwitter() { + // start the pulse to show we are thinking + tj.pulse('white', 1.5, 2.0); + + // monitor twitter + twitter.stream('statuses/filter', { + track: SENTIMENT_KEYWORD + }, function(stream) { + stream.on('data', function(event) { + if (event && event.text) { + var tweet = event.text; + + // Remove non-ascii characters (e.g chinese, japanese, arabic, etc.) and + // remove hyperlinks + tweet = tweet.replace(/[^\x00-\x7F]/g, ""); + tweet = tweet.replace(/(?:https?|ftp):\/\/[\n\S]+/g, ""); + + // keep a buffer of MAX_TWEETS tweets for sentiment analysis + while (TWEETS.length >= MAX_TWEETS) { + TWEETS.shift(); + } + TWEETS.push(tweet); + } + }); + + stream.on('error', function(error) { + console.log("\nAn error has occurred while connecting to Twitter. Please check your twitter credentials, and also refer to https://dev.twitter.com/overview/api/response-codes for more information on Twitter error codes.\n"); + throw error; + }); }); - }); + + // perform sentiment analysis every N seconds + setInterval(function() { + console.log("Performing sentiment analysis of the tweets"); + shineFromTweetSentiment(); + }, SENTIMENT_ANALYSIS_FREQUENCY_MSEC); } -SampleTweetBuffer(); -function SampleTweetBuffer(){ - setInterval(function() { - if (tweetbuffer.length > 0){ - //console.log("sampling .. " + tweetbuffer.length); - analyzeTone(); // Analyze the tone of tweets if we have more than one tweet - } - }, sentimentinterval); +function shineFromTweetSentiment() { + // make sure we have at least 5 tweets to analyze, otherwise it + // is probably not enough. + if (TWEETS.length > 5) { + var text = TWEETS.join(' '); + console.log("Analyzing tone of " + TWEETS.length + " tweets"); + + tj.analyzeTone(text).then(function(tone) { + tone.document_tone.tone_categories.forEach(function(category) { + if (category.category_id == "emotion_tone") { + // find the emotion with the highest confidence + var max = category.tones.reduce(function(a, b) { + return (a.score > b.score) ? a : b; + }); + + // make sure we really are confident + if (max.score >= CONFIDENCE_THRESHOLD) { + // stop pulsing at this point, we are going to change color + if (tj.isPulsing()) { + tj.stopPulsing(); + } + shineForEmotion(max.tone_id); + } + } + }); + }); + } else { + console.log("Not enough tweets collected to perform sentiment analysis"); + } } - -/************************************************************************ -* Step #2: Analyze the tone of the Tweets -************************************************************************ -In this step, the program uses Watson Tone Analyzer to analyze the emotions that are retrieved from the tweetbuffer. -The IBM Watson™ Tone Analyzer Service uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style. -Emotions identified include things like anger, fear, joy, sadness, and disgust. -*/ -var watson = require('watson-developer-cloud'); -function analyzeTone(){ - var text = ""; - tweetbuffer.forEach(function(tweet){ - text = text + " " + tweet ; // Combine all texts in the tweetbuffer array into a single text. 
- }) - //console.log(text + "\n ====== ") - var tone_analyzer = watson.tone_analyzer({ //Retrieving your Bluemix credentials - username: config.toneanalyzercredentials.username, - password: config.toneanalyzercredentials.password, - version: config.toneanalyzercredentials.version, - version_date: '2016-05-19' - }); - tone_analyzer.tone({ text: text }, - function(err, tone) { - if (err) { - console.log(err); - } - else { - tone.document_tone.tone_categories.forEach(function(tonecategory){ - if(tonecategory.category_id == "emotion_tone"){ - //console.log(tonecategory.tones) - tonecategory.tones.forEach(function(emotion){ - if(emotion.score >= confidencethreshold) { // pulse only if the likelihood of an emotion is above the given confidencethreshold - processEmotion(emotion) - } - }) - } - }) - } - }); - } - - /********************************************************************************************* - * Step #3: Change the color of the LED based on the sentiments of the retrieve tweets - ********************************************************************************************** - In this step, the program determines the color of the LED based on the analyzed emotion. - Different colors are associated to different emotions. You can customize your own color! - Anger = Red - Joy = Yellow - Fear = Purple etc - */ - - var ws281x = require('rpi-ws281x-native'); - var NUM_LEDS = 1; - ws281x.init(NUM_LEDS); - var color = new Uint32Array(NUM_LEDS); - - // ---- reset LED before exit - process.on('SIGINT', function () { - ws281x.reset(); - process.nextTick(function () { process.exit(0); }); - }); - - var red = 0x00ff00 ; - var green = 0xff0000 ; - var blue = 0x0000ff ; - var yellow = 0xffff00 ; - var purple = 0x00ffff ; - - // Process emotion returned from Tone Analyzer Above - // Show a specific color fore each emotion - function processEmotion(emotion){ - console.log("Current Emotion Around " + searchkeyword + " is ", emotion.tone_id); - if (emotion.tone_id == "anger"){ - setLED(red); - }else if(emotion.tone_id == "joy"){ - setLED(yellow); - }else if(emotion.tone_id == "fear"){ - setLED(purple); - }else if(emotion.tone_id == "disgust"){ - setLED(green); - }else if(emotion.tone_id == "sadness"){ - setLED(blue); +function shineForEmotion(emotion) { + console.log("Current emotion around " + SENTIMENT_KEYWORD + " is " + emotion); + + switch (emotion) { + case 'anger': + tj.shine('red'); + break; + case 'joy': + tj.shine('yellow'); + break; + case 'fear': + tj.shine('magenta'); + break; + case 'disgust': + tj.shine('green'); + break; + case 'sadness': + tj.shine('blue'); + break; + default: + break; } - } - - // Set the LED to the given color value - function setLED(colorval){ - color[0] = colorval ; - ws281x.render(color); - } +} diff --git a/recipes/speech_to_text/.gitignore b/recipes/speech_to_text/.gitignore index 2ba1a0ef..24e3f987 100644 --- a/recipes/speech_to_text/.gitignore +++ b/recipes/speech_to_text/.gitignore @@ -1,3 +1,6 @@ +# config file +config.js + # Logs logs *.log @@ -21,6 +24,9 @@ coverage # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) .grunt +# Bower dependency directory (https://bower.io/) +bower_components + # node-waf configuration .lock-wscript @@ -28,8 +34,11 @@ coverage build/Release # Dependency directories -node_modules -jspm_packages +node_modules/ +jspm_packages/ + +# Typescript v1 declaration files +typings/ # Optional npm cache directory .npm @@ -43,5 +52,11 @@ jspm_packages # Output of 'npm pack' *.tgz +# Yarn Integrity file 
+.yarn-integrity + +# dotenv environment variables file +.env + # .DS_Store files .DS_Store diff --git a/recipes/speech_to_text/README.md b/recipes/speech_to_text/README.md index adbea68e..aac68e90 100644 --- a/recipes/speech_to_text/README.md +++ b/recipes/speech_to_text/README.md @@ -1,90 +1,63 @@ # Speech to Text -> Use your voice to control a LED with [Watson](https://www.ibm.com/watson/developercloud/speech-to-text.html) +> Control TJBot's LED with your voice! -This module provides a Node.js code to control a [8mm NeoPixel RGB led](https://www.adafruit.com/products/1734) using voice commands. For example, you may say "Turn the light green" to change the color of the LED to green. +This recipe uses the [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html) service to let you control the color of TJBot's LED with your voice. For example, if you say "turn the light green," TJBot will change the color of the LED to green. -**This will only run on the Raspberry Pi.** +## Hardware +This recipe requires a TJBot with a microphone and an LED. -[![link to a full video for use voice to control LED](https://img.youtube.com/vi/Wvnh7ie3D6o/0.jpg)](https://www.youtube.com/watch?v=Wvnh7ie3D6o) +## Build and Run +First, make sure you have configured your Raspberry Pi for TJBot. -##How It Works -- Listens for the voice commands (e.g "turn the light green") -- Sends audio from the microphone to the [Watson Speech to Text Service - STT](https://www.ibm.com/watson/developercloud/speech-to-text.html) to convert to text -- Parses the text to identify the given voice command -- Switches the LED on/off depending on the given command + $ cd tjbot/bootstrap && sudo sh bootstrap.sh -##Hardware +Go to the `recipes/speech_to_text` folder and install the dependencies. -Check out [this instructable] (http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/) for wiring diagrams and instructions to prepare your system. You will need a Raspberry Pi 3, a microphone, a [8mm NeoPixel RGB LED] (https://www.adafruit.com/products/1734), 3 Female/female jumper wires, and [the TJBot cardboard](http://ibm.biz/mytjbot). + $ cd ../recipes/speech_to_text + $ npm install -##Build -> We recommend starting with [our step by step instructions](http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/) to build this recipe. +Create an instance of the [Watson Text to Speech](https://www.ibm.com/watson/developercloud/text-to-speech.html) service and note the authentication credentials. -Get the sample code and go to the application folder. Please see this [instruction on how to clone](https://help.github.com/articles/cloning-a-repository/) a repository. +Make a copy the default configuration file and update it with the Watson service credentials. - cd recipes/speech_to_text + $ cp config.default.js config.js + $ nano config.js + -Install ALSA tools (required for recording audio on Raspberry Pi) +Run! - sudo apt-get install alsa-base alsa-utils + sudo node stt.js -Install Dependencies +> Note the `sudo` command. Root user access is required to run TJBot recipes. - npm install +Now talk to your microphone to change the color of the LED. Say "turn the light blue" to change the light to blue. You can try other colors as well, such as yellow, green, orange, purple, magenta, red, blue, aqua, and white. You can also say "turn the light on" or "turn the light off". -Add your Bluemix Speech to text service credentials +## Customize +We have hidden a disco party for you. 
Find the code for disco party in `stt.js` and uncomment the code (hint: there are two places that need to be uncommented). Now you can ask TJ to show you the disco lights by saying "Let's have a disco party"! - edit config.js - enter your watson stt username, password and version. +Try implementing your own TJBot party and share it with us #TJBot! -##Testing the LED -The wiring diagram is [here] (http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/). +## Troubleshoot +If the LED does not light up, you can try moving the power from 3.3 to 5 volts. If neither the 3.3v or 5v pins work, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi. -Before running the code, you may test your LED setup to make sure the connections are correct and the library is properly installed. When you run this module, it should turn your LED on. +If the LED shows the wrong color, or flashes different colors very rapidly, it may be due to interference with the built-in audio hardware. Depending on your configuration of Raspbian, the sound drivers may be more aggressive in taking away control of GPIO 18 from other processes. If your LED shows random colors instead of the expected color, use this trick to fix it. - sudo node led_test.js - -> Note the `sudo` command. Root user access is required to control the NeoPixel LEDs. - -If the LED does not light up, you can try moving the power from 3.3 to 5 volts. If neither the 3.3v or 5v pins work, you will need a 1N4001 diode. The diode is inserted between the power pin of the LED (the shorter of the two middle pins) and the 5v pin on the Raspberry Pi. - -If you have problems with the setup, please refer to [Adafruit's NeoPixel on Raspbeery Pi guide](https://learn.adafruit.com/neopixels-on-raspberry-pi/overview) to troubleshoot. - -##Running - -Start the application - - sudo node stt.js - -> Note the `sudo` command. Root user access is required to control the NeoPixel LEDs. - -Now talk to your microphone to change the color of the LED. -Say "Turn the light blue" to change the light to blue. You can try other colors: yellow, green, orange, purple, magenta, red, blue, aqua, white). You can either say "Turn the light on" or "Turn the light off"! - -Doesn't your Pi show the right color? No worries, we can fix it. -The LED library uses the PWM module (GPIO 18) to drive the data line of the LEDs. This conflicts with the built-in audio hardware, which uses the same pin to drive the audio output. Depending on your configuration of Raspbian, the sound drivers may be more aggressive in taking away control of GPIO 18 from other processes. If your LED shows random colors instead of the expected color, use this trick to fix it. - - sudo cp blacklist-rgb-led.conf /etc/modprobe.d/ + sudo cp bootstrap/tjbot-blacklist-snd.conf /etc/modprobe.d/ sudo update-initramfs -u + sudo reboot -Reboot and confirm no "snd" modules are running by executing the command "lsmod". - - lsmod - -##Customization -You can add new colors to your color palette in stt.js. TJBot uses a NeoPixel RGB LED, which means it can show any combination of red, green, and blue. +After TJBot finishes rebooting, confirm no "snd" modules are running. -We have hidden a disco party for you. Find the code for disco party in stt.js and uncomment the code. Now you can ask TJ to show you the disco lights by saying "Let's have a disco party"! 
+ lsmod -Try implementing your own TJBot party and share it with us #TJBot! - -Once ready to move on, try the next recipe to [make TJBot respond to emotions using Watson](../sentiment_analysis). - -##Dependencies +If you have additional difficulties not covered in this guide, please refer to [Adafruit's NeoPixel on Raspbeery Pi guide](https://learn.adafruit.com/neopixels-on-raspberry-pi/overview) to troubleshoot. +# Watson Services - [Watson Speech to Text](https://www.ibm.com/watson/developercloud/speech-to-text.html) -- mic npm package for reading audio input -- [rpi-ws281x-native](https://github.com/beyondscreen/node-rpi-ws281x-native) npm package to control a ws281x LED. -## Contributing +# License +This project is licensed under Apache 2.0. Full license text is available in [LICENSE](../../LICENSE). + +# Contributing See [CONTRIBUTING.md](../../CONTRIBUTING.md). + diff --git a/recipes/speech_to_text/blacklist-rgb-led.conf b/recipes/speech_to_text/blacklist-rgb-led.conf deleted file mode 100755 index 48dfad5c..00000000 --- a/recipes/speech_to_text/blacklist-rgb-led.conf +++ /dev/null @@ -1,5 +0,0 @@ -blacklist snd_bcm2835 -blacklist snd_pcm -blacklist snd_timer -blacklist snd_pcsp -blacklist snd diff --git a/recipes/speech_to_text/config.default.js b/recipes/speech_to_text/config.default.js new file mode 100644 index 00000000..e04bbcd9 --- /dev/null +++ b/recipes/speech_to_text/config.default.js @@ -0,0 +1,16 @@ +// Create the credentials object for export +exports.credentials = {}; + +// Watson Text to Speech +// https://www.ibm.com/watson/developercloud/text-to-speech.html +exports.credentials.text_to_speech = { + password: '', + username: '' +}; + +// Watson Speech to Text +// https://www.ibm.com/watson/developercloud/speech-to-text.html +exports.credentials.speech_to_text = { + password: '', + username: '' +}; diff --git a/recipes/speech_to_text/config.js b/recipes/speech_to_text/config.js deleted file mode 100755 index eb246f91..00000000 --- a/recipes/speech_to_text/config.js +++ /dev/null @@ -1,6 +0,0 @@ -// Please replace the username and password with your bluemix credentials - - -exports.password = 'xxxxxx' ; -exports.username = 'xxx-xxx-xxx' ; -exports.version = 'v1' ; diff --git a/recipes/speech_to_text/led_test.js b/recipes/speech_to_text/led_test.js deleted file mode 100755 index e3c45622..00000000 --- a/recipes/speech_to_text/led_test.js +++ /dev/null @@ -1,27 +0,0 @@ -// Run "sudo node led_test.js" from your terminal to test your LED. -// It should set your light to white and turn the LED on. 
- -var ws281x = require('rpi-ws281x-native'); -var NUM_LEDS = 1; // Number of LEDs -ws281x.init(NUM_LEDS); // initialize LEDs - -// ---- reset LED before exit -process.on('SIGINT', function () { - ws281x.reset(); - process.nextTick(function () { process.exit(0); }); -}); - -var color = new Uint32Array(NUM_LEDS); -ws281x.render(color); - -console.log("turning ON the light"); -setLED("on"); // setLED sets the light - -function setLED(value) { - if (value == "on") { - color[0] = 0xffffff ; - } else { - color[0] = 0x000000 ; - } - ws281x.render(color); -} diff --git a/recipes/speech_to_text/package.json b/recipes/speech_to_text/package.json index 3435a210..7fcbf948 100644 --- a/recipes/speech_to_text/package.json +++ b/recipes/speech_to_text/package.json @@ -1,28 +1,26 @@ { - "name": "sttkit", - "version": "1.0.0", - "description": "TJ Bot Speech to Text recipe", - "main": "app.js", - "scripts": { - "test": "node stt.js", - "start": "node stt.js" + "name": "speech_to_text", + "version": "0.0.1", + "description": "TJBot speech to text recipe", + "author": "Justin Weisz ", + "bugs": { + "url": "https://github.com/ibmtjbot/tjbot/issues" }, - "repository": { - "type": "git", - "url": "git@github.ibm.com:watsonkits/sttkit.git" + "dependencies": { + "tjbot": "latest" }, + "main": "stt.js", + "homepage": "https://github.com/ibmtjbot/tjbot/tree/master/recipes/speech_to_text", "keywords": [ - "Watson", - "IBM", - "Speech", - "To", - "Text", - "STT", - "Raspberry Pi" + "tjbot" ], - "dependencies": { - "mic": "^2.1.1", - "rpi-ws281x-native": "^0.8.1", - "watson-developer-cloud": "^2.0.0" + "license": "Apache-2.0", + "repository": { + "type": "git", + "url": "git@github.com:ibmtjbot/tjbot.git" + }, + "scripts": { + "start": "node stt.js", + "test": "echo \"Error: no test specified\" && exit 1" } -} +} \ No newline at end of file diff --git a/recipes/speech_to_text/stt.js b/recipes/speech_to_text/stt.js index 3f489e93..d3b4c88d 100644 --- a/recipes/speech_to_text/stt.js +++ b/recipes/speech_to_text/stt.js @@ -1,185 +1,84 @@ -/************************************************************************ -* Copyright 2016 IBM Corp. All Rights Reserved. -* -* Watson Maker Kits -* -* This project is licensed under the Apache License 2.0, see LICENSE.* -* -************************************************************************ -* -* Control a NeoPixel LED unit connected to a Raspberry Pi pin through voice commands -* Must run with root-level protection -* sudo node stt.js - - Based on example NeoPixel code by Jeremy Garff (jer@jers.net) - - Follow the instructions in http://www.instructables.com/id/Use-Your-Voice-to-Control-a-Light-With-Watson/ to - get the system ready to run this code. -*/ - -/************************************************************************ - * Step #1: Configuring your Bluemix Credentials - ************************************************************************ - In this step, the audio sample (pipe) is sent to "Watson Speech to Text" to transcribe. 
- The service converts the audio to text and saves the returned text in "textStream" -*/ -var watson = require('watson-developer-cloud'); -var config = require('./config'); // gets our username and passwords from the config.js files -var speech_to_text = watson.speech_to_text({ - username: config.username, - password: config.password, - version: config.version -}); - -/************************************************************************ - * Step #2: Configuring the Microphone - ************************************************************************ - In this step, we configure your microphone to collect the audio samples as you talk. - See https://www.npmjs.com/package/mic for more information on - microphone input events e.g on error, startcomplete, pause, stopcomplete etc. -*/ - -// Initiate Microphone Instance to Get audio samples -var mic = require('mic'); -var micInstance = mic({ 'rate': '44100', 'channels': '2', 'debug': false, 'exitOnSilence': 6 }); -var micInputStream = micInstance.getAudioStream(); - -micInputStream.on('data', function(data) { - //console.log("Recieved Input Stream: " + data.length); -}); - -micInputStream.on('error', function(err) { - console.log("Error in Input Stream: " + err); -}); - -micInputStream.on('silence', function() { - // detect silence. -}); -micInstance.start(); -console.log("TJBot is listening, you may speak now."); - -/************************************************************************ - * Step #3: Converting your Speech Commands to Text - ************************************************************************ - In this step, the audio sample is sent (piped) to "Watson Speech to Text" to transcribe. - The service converts the audio to text and saves the returned text in "textStream". - You can also set the language model for your speech input. - The following language models are available - ar-AR_BroadbandModel - en-UK_BroadbandModel - en-UK_NarrowbandModel - en-US_BroadbandModel (the default) - en-US_NarrowbandModel - es-ES_BroadbandModel - es-ES_NarrowbandModel - fr-FR_BroadbandModel - ja-JP_BroadbandModel - ja-JP_NarrowbandModel - pt-BR_BroadbandModel - pt-BR_NarrowbandModel - zh-CN_BroadbandModel - zh-CN_NarrowbandModel -*/ -var recognizeparams = { - content_type: 'audio/l16; rate=44100; channels=2', - model: 'en-US_BroadbandModel' // Specify your language model here +/** + * Copyright 2016 IBM Corp. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +var TJBot = require('tjbot'); +var config = require('./config'); + +// obtain our credentials from config.js +var credentials = config.credentials; + +// these are the hardware capabilities that our TJ needs for this recipe +var hardware = ['led', 'microphone', 'speaker']; + +// turn on debug logging to the console +var tjConfig = { + verboseLogging: true }; -var textStream = micInputStream.pipe( - speech_to_text.createRecognizeStream(recognizeparams) -); +// instantiate our TJBot! 
+var tj = new TJBot(hardware, tjConfig, credentials); -/********************************************************************* - * Step #4: Parsing the Text - ********************************************************************* - In this step, we parse the text to look for commands such as "ON" or "OFF". - You can say any variations of "lights on", "turn the lights on", "turn on the lights", etc. - You would be able to create your own customized command, such as "good night" to turn the lights off. - What you need to do is to go to parseText function and modify the text. -*/ +// full list of colors that TJ recognizes, e.g. ['red', 'green', 'blue'] +var tjColors = tj.shineColors(); -textStream.setEncoding('utf8'); -textStream.on('data', function(str) { - console.log(' ===== Speech to Text ===== : ' + str); // print each text we receive - parseText(str); -}); +console.log("I understand lots of colors. You can tell me to shine my light a different color by saying 'turn the light red' or 'change the light to green' or 'turn the light off'."); -textStream.on('error', function(err) { - console.log(' === Watson Speech to Text : An Error has occurred =====') ; // handle errors - console.log(err) ; - console.log("Press +C to exit.") ; +// uncomment to see the full list of colors TJ understands +// console.log("Here are all the colors I understand:"); +// console.log(tjColors.join(", ")); + +// hash map to easily test if TJ understands a color, e.g. {'red': 1, 'green': 1, 'blue': 1} +var colors = {}; +tjColors.forEach(function(color) { + colors[color] = 1; }); -function parseText(str){ - var containsTurn = str.indexOf("turn") >= 0; - var containsChange = str.indexOf("change") >= 0; - var containsSet = str.indexOf("set") >= 0; - var containsLight = str.indexOf("the light") >= 0; - var containsDisco = str.indexOf("disco") >= 0; +// listen for speech +tj.listen(function(msg) { + var containsTurn = msg.indexOf("turn") >= 0; + var containsChange = msg.indexOf("change") >= 0; + var containsSet = msg.indexOf("set") >= 0; + var containsLight = msg.indexOf("the light") >= 0; + var containsDisco = msg.indexOf("disco") >= 0; if ((containsTurn || containsChange || containsSet) && containsLight) { - setLED(str); + // was there a color uttered? + var words = msg.split(" "); + for (var i = 0; i < words.length; i++) { + var word = words[i]; + if (colors[word] != undefined || word == "on" || word == "off") { + // yes! + tj.shine(word); + break; + } + } } else if (containsDisco) { - discoParty(); + // discoParty(); } -} - -/********************************************************************* - * Step #5: Switching the LED light - ********************************************************************* - Once the command is recognized, the led light gets changed to reflect that. - The npm "onoff" library is used for this purpose. https://github.com/fivdi/onoff -*/ - -var ws281x = require('rpi-ws281x-native'); -var NUM_LEDS = 1; // Number of LEDs -ws281x.init(NUM_LEDS); // initialize LEDs - -var color = new Uint32Array(NUM_LEDS); // array that stores colors for leds -color[0] = 0xffffff; // default to white - -// note that colors are specified as Green-Red-Blue, not Red-Green-Blue -// e.g. 
0xGGRRBB instead of 0xRRGGBB -var colorPalette = { - "red": 0x00ff00, - "read": 0x00ff00, // sometimes, STT hears "read" instead of "red" - "green": 0xff0000, - "blue": 0x0000ff, - "purple": 0x008080, - "yellow": 0xc1ff35, - "magenta": 0x00ffff, - "orange": 0xa5ff00, - "aqua": 0xff00ff, - "white": 0xffffff, - "off": 0x000000, - "on": 0xffffff -} - -// ---- reset LED before exit -process.on('SIGINT', function () { - ws281x.reset(); - process.nextTick(function () { process.exit(0); }); }); -function setLED(msg){ - var words = msg.split(" "); - for (var i = 0; i < words.length; i++) { - if (words[i] in colorPalette) { - color[0] = colorPalette[words[i]]; - break; - } - } - ws281x.render(color); -} - +// let's have a disco party! +/* function discoParty() { - // uncomment this for a disco party! - /*for (i = 0; i < 30; i++) { + for (i = 0; i < 30; i++) { setTimeout(function() { - var colors = Object.keys(colorPalette); - var randIdx = Math.floor(Math.random() * colors.length); - var randColor = colors[randIdx]; - setLED(randColor); + var randIdx = Math.floor(Math.random() * tjColors.length); + var randColor = tjColors[randIdx]; + tj.shine(randColor); }, i * 250); - }*/ + } } +*/ \ No newline at end of file
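All of the recipes in this patch follow the same skeleton: require `tjbot` and a local `config.js`, declare the hardware the recipe needs, instantiate `TJBot`, and then call its methods (`shine`, `pulse`, `listen`, `converse`, `speak`, `analyzeTone`). A minimal sketch of that pattern, assuming the published `tjbot` npm package and a `config.js` that exports a `credentials` object as in the recipes above; the hardware list and color are illustrative:

    // minimal TJBot recipe skeleton (sketch only)
    var TJBot = require('tjbot');
    var config = require('./config');

    // hardware capabilities this recipe needs
    var hardware = ['led'];

    // turn on debug logging to the console
    var tjConfig = {
        verboseLogging: true
    };

    // instantiate our TJBot with credentials from config.js
    var tj = new TJBot(hardware, tjConfig, config.credentials);

    // quick smoke test: shine the LED red
    tj.shine('red');

As with the other recipes, this would be run with `sudo node`, since root access is required to control the LED.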