Having finally rescued our Maker Kit from its imprisonment by H.M. Customs and Excise, we set about getting the small, shell-like white plastic box connected to the internet and to power.
The Hub requires a simple 5V DC supply, such as that provided by a USB cable or mobile phone charger. For connectivity it offers only an Ethernet port on the rear, which would normally be plugged straight into your home modem/router – no WiFi chip here!
Once plugged in, it connects to the internet automatically, and can then be configured via dedicated apps for iOS and Android.
Adding sensors to the Hub involves using the mobile app to prompt the Hub to look for new connections. If that doesn’t work, a sensor can be prised apart so you can hold down its tiny reset button until full communication is established.
Once configured, the apps show the sensors’ states in near-real time: the sensors send their messages up to the SmartThings Cloud via the Hub, and the mobile app receives the notifications.
Our original aim was to connect the Ginsing speech synth to the SmartThings Maker Shield, with the two mounted on an Arduino that would monitor SmartThings activity and generate the speech synthesis directly from the Ginsing.
It turns out that that’s not the way SmartThings wanted to do it.
The Maker Shield can only send data up to the SmartThings Cloud, via the Hub, from extra sensors attached to the Arduino; it cannot communicate programmatically with the Arduino, even though it is physically attached to it. To use the shield, we would have to write a custom SmartThings application in the Groovy language, running on SmartThings’ servers, to handle communication between the Maker Shield and the Arduino.
Not wanting to spend the rest of the year trying to hack together a prototype with an undocumented API in a language I’m unfamiliar with, I decided to find another way to produce sound from SmartThings’ presence sensors.
It was a matter of minutes to connect SmartThings’ IFTTT channel to a fresh Twitter account and hook up the presence notifications. While the ‘trigger -> action’ model is simple to set up, it doesn’t offer much flexibility or any extensibility.
What to do with the data? I decided to grab a fresh Raspberry Pi B+ and set up a Node.js environment to handle the Twitter API. After trying out a few Twitter npm modules, I found one that could handle the updated v1.1 streaming API, which would let us use Twitter as a real-time messaging system.
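To give a flavour of the glue logic, here is a minimal sketch of how a presence tweet might be mapped to an audio file. The tweet format, the function name and the file layout are illustrative assumptions, not the original script:

```javascript
// Map a presence tweet (posted by IFTTT) to an audio file to play.
// The "<name> arrived/departed" text format and the sounds/ layout are
// assumptions for illustration, not the original implementation.
function audioForTweet(text) {
  const match = text.match(/^(\w+)\s+(arrived|departed)/i);
  if (!match) return null; // ignore unrelated tweets
  const [, name, event] = match;
  return `sounds/${name.toLowerCase()}-${event.toLowerCase()}.wav`;
}

console.log(audioForTweet('Daniel arrived')); // sounds/daniel-arrived.wav
```

In the real script, a function like this would sit inside the streaming module’s tweet event handler, deciding what (if anything) to play for each incoming message.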
We decided to simply play back audio files rather than attempt any kind of real-time speech generation. That let us use any audio files we liked to greet us when we arrived in the studio!
Robustness against errors and loss of connection was provided by the excellent process manager pm2. A Node.js script was written to glue all these services together, and an LED was attached and programmatically tied to pm2 to indicate whether the script was running and whether an error had occurred.
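pm2 restarts the script automatically if it crashes or loses its connection. A minimal process file along these lines would do it (the script name and option values here are illustrative):

```javascript
// ecosystem.config.js – pm2 process definition (names are illustrative)
module.exports = {
  apps: [{
    name: 'greeter',
    script: './greeter.js',
    autorestart: true,    // bring the script back after a crash
    restart_delay: 5000   // wait 5s between restarts to ride out outages
  }]
};
```

Started with `pm2 start ecosystem.config.js`; `pm2 logs greeter` then shows the script’s output and any errors.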
Finally, hooking the Pi up to speakers was taken care of by building a tiny 2.8-watt amplifier from a kit.
This was hooked up to the speaker in a hacked mp3 player procured from the super-cheap gearbest.com.
As often seems to be the way with innovation, the final solution took a different form from the one we aimed for at the start, owing to discoveries made along the way – but we finally had our voice-greeting service running, announcing our arrivals and departures in real time, just as intended.
Experienced by Daniel Beattie, Technology Director