Lighten up on Hue response times
Controlling your lights from anywhere in the world is one of the convenient features that Hue offers. You tap your lights, and a few seconds later they change as you want them.
Lately, the out-of-home connection that makes this possible has been getting used when you are at home as well. For example, when you control your lights by voice through Amazon Echo. Or when you simply don’t use your home WiFi on your phone any more, because 4G is fast enough. In those cases, waiting a few seconds for your lights to come on doesn’t cut it any more.
That’s why we changed the way your Hue bridge gets connected to the Hue Cloud. Instead of having it poll our servers when you want to interact with your lights, each bridge now has an always-on WebSocket connection. So when you need your lights to come on, they respond a lot faster. Arthur C. Clarke would be proud!
Sending out sticky messages
Setting up a WebSocket is easy enough, but handling lots and lots of WebSocket connections from all over the world in a scalable way: now that’s where it gets interesting. And what do you do with old bridges that can’t handle WebSockets? You need some kind of fallback mechanism.
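Conceptually, such a fallback could look like the minimal sketch below. The `openWebSocket` and `startPolling` connectors are hypothetical stand-ins for illustration, not the actual Hue bridge firmware API:

```javascript
// Hypothetical sketch of a connect-with-fallback strategy: try the
// always-on WebSocket first, and degrade gracefully to the old polling
// mechanism for bridges or networks that can't handle WebSockets.
async function connectWithFallback(openWebSocket, startPolling) {
  try {
    const socket = await openWebSocket();
    return { mode: 'websocket', connection: socket };
  } catch (err) {
    // WebSocket blocked or unsupported: fall back to polling.
    const poller = await startPolling();
    return { mode: 'polling', connection: poller };
  }
}
```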
To test how everything would work, we ran test microservices on our Kubernetes cluster that did nothing but set up a WebSocket connection with the bridge and measure how everything held up. To measure the ping between the bridge and a server, we sent “De stroop is plakkerig. Ik herhaal: de stroop is plakkerig” (“The syrup is sticky. I repeat: the syrup is sticky”) strings back and forth.
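The measurement itself boils down to timestamping the ping before it goes out and checking the clock when the echo returns. A minimal sketch, with a stand-in `sendAndEcho` function in place of the real WebSocket round trip:

```javascript
// The sticky-syrup ping string sent back and forth between server and bridge.
const PING = 'De stroop is plakkerig. Ik herhaal: de stroop is plakkerig';

// Send the ping, wait for the echo, and return the round-trip time.
// `sendAndEcho` is a hypothetical stand-in for the real WebSocket exchange.
async function measureRoundTrip(sendAndEcho) {
  const start = Date.now();
  const echoed = await sendAndEcho(PING);
  if (echoed !== PING) throw new Error('echo mismatch');
  return Date.now() - start; // round-trip time in milliseconds
}
```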
This gave us all kinds of interesting stats about how fast it would be, how much we would need to scale, how many networks are not friendly to WebSockets and of course: hundreds of GBs of sticky syrup logs per day (oops).
Once we had gained enough experience running WebSockets at scale, the bridges were slowly updated, one country at a time, to the real WebSocket version.
Streaming all the things!
So how do you keep track of all these connected bridges, which message needs to go to which bridge, and which response goes back to which client? For that we used RxJS. We modeled every WebSocket connection as a byte stream encoded with Protocol Buffers. When a user turns on their lights, our WebSocket server receives that request. We send these requests to the bridge in small chunks of data, one at a time. The bridge then processes the request, turns on the lights, and sends a chunked response back to the WebSocket server. There the chunks are merged into complete messages (still represented as a stream) and sent back to the client.
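The chunk-and-merge step can be sketched as two small helpers. This is an illustration of the idea only — the chunk size and the function names are assumptions, not the actual Hue wire format:

```javascript
// Illustrative chunk size; the real value used by Hue is an assumption.
const CHUNK_SIZE = 16;

// Cut an encoded message into small chunks for transmission to the bridge.
function toChunks(message) {
  const bytes = Buffer.from(message);
  const chunks = [];
  for (let i = 0; i < bytes.length; i += CHUNK_SIZE) {
    chunks.push(bytes.subarray(i, i + CHUNK_SIZE));
  }
  return chunks;
}

// Reassemble the complete message from its chunks on the receiving side.
function mergeChunks(chunks) {
  return Buffer.concat(chunks).toString();
}
```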
On top of that, we need to make sure that we don’t overload the bridge with requests, so we also use Rx to throttle the number of messages going to a bridge: we only send a new request once we get a response to the previous one.
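This one-request-in-flight rule is what RxJS’s `concatMap` gives you when you map requests onto response observables. As a dependency-free sketch of the same idea, a per-bridge queue can chain each request onto the previous response (`makeBridgeQueue` and `sendRequest` are hypothetical names, not Hue’s actual code):

```javascript
// Returns an enqueue function that guarantees at most one request is in
// flight per bridge: each request is only sent once the previous one's
// response (or failure) has come back.
function makeBridgeQueue(sendRequest) {
  let last = Promise.resolve();
  return function enqueue(request) {
    // Chain this request onto the previous one's completion.
    const next = last.then(() => sendRequest(request));
    last = next.catch(() => {}); // keep the queue alive after a failure
    return next; // resolves with this request's response
  };
}
```

Because each call chains onto the previous promise, callers can fire requests as fast as they like while the bridge still sees them strictly one at a time.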
More syrup
Now that everything is live it’s time to celebrate. What better way to do it than with pancakes and lots of sticky syrup!