Since our post last night with instructions on how to get Docker on the Pi, several puzzled commenters have been inquiring as to the reasoning behind such a bizarre undertaking. It's a valid question that we really should have dealt with in the original post. Sleep deprivation probably had something to do with missing that.
So here's a summary of our reasoning:
Dependency redeployment without OS reinstall
Docker helps deploy applications with the entirety of their dependencies, without needing an OS reinstall. It also gives you the ability to roll back a botched upgrade. This is incredibly important for embedded devices and applications that may need occasional redeployment, as OS redeploys can end up bricking a device that's deployed somewhere difficult to access. We've learned this lesson well maintaining a network of 200 screens strewn around London. In other words, Docker on ARM brings hardware products closer to the SaaS deployment model, where continuous updates are taken for granted. Did anyone say lean hardware startup?
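As a rough sketch of what this looks like in practice (the base image, package, and paths below are all hypothetical placeholders, not a tested setup):

```dockerfile
# Hypothetical ARM-compatible base image; use whatever base your Pi distro provides.
FROM resin/rpi-raspbian

# Native dependencies live inside the image, not on the device's OS,
# so redeploying the app never touches the host install.
RUN apt-get update && apt-get install -y python3

# The application itself.
COPY app/ /usr/src/app
CMD ["python3", "/usr/src/app/main.py"]
```

Rolling back a botched upgrade is then just a matter of starting the previously tagged image again, e.g. `myapp:v1` instead of `myapp:v2`, with no change to the underlying OS.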
Sending container diffs to the device
Docker also allows sending over just the diffs of a container, saving a lot of bandwidth. That's another big benefit for embedded devices, which are often weakly connected. Think of devices connected over 3G or unreliable/expensive WiFi. But it's not just about cost: for a weakly connected device, a small update has a much higher probability of making it through the pipes at all, and gets there faster. The Chromium team also makes some additional good points in their post introducing Courgette, Smaller is Faster (and Safer Too).
Admittedly, you could send diffs over the wire even without Docker, but you'd have to rebuild (and maintain!) some of Docker's functionality, and if you do that, you might as well use Docker. Importantly, Docker can do diffs in a way that precludes bricking, as discussed above, and this is non-trivial to do well.
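The diff mechanism falls out of Docker's layered images: a pull only fetches the layers the device doesn't already have, and old images stay on disk until removed. A hedged command sketch of what an update cycle might look like (image and container names are made up for illustration):

```shell
# On the device: pull the new version. Only the layers that changed since v1
# are downloaded; the base OS and dependency layers are reused from disk,
# so the transfer stays small even on a 3G link.
docker pull myapp:v2

# Swap the running container over to the new image.
docker stop app && docker rm app
docker run -d --name app myapp:v2

# If v2 misbehaves, the v1 image is still on disk: rollback is a local
# restart with no network needed, so a bad update can't brick the device.
docker stop app && docker rm app
docker run -d --name app myapp:v1
```

This is why diffing "in a way that precludes bricking" comes more or less for free: the previous image remains intact on the device throughout the update.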
Another benefit, a bit further in the future but incredibly exciting, is the ability to run multiple 'capabilities' on the device in isolated containers, so that none of them interferes with the others. If this is implemented properly, these capabilities can be pre-packaged without fear. One can imagine an ecosystem of Dockerfiles popping up, each adding a capability to the Pi. Think of packaging something like the CC3000 wireless chip's Smart Config mechanism as a simple, reusable, downloadable Docker container.
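A sketch of that 'capabilities' idea, with each one in its own container (the image and container names below are hypothetical):

```shell
# Each capability runs in its own container with its own dependencies,
# so a crash or bad upgrade in one doesn't take down the others.
docker run -d --name wifi-config acme/smart-config    # CC3000-style setup helper
docker run -d --name sensor-log  acme/sensor-logger   # reads sensors, logs readings
docker run -d --name screen      acme/signage-player  # drives the attached display

# Removing or upgrading one capability leaves the rest untouched.
docker rm -f sensor-log
```

In practice, containers that touch hardware (GPIO, displays) will need explicit device access, e.g. via `--privileged` or device flags, and the quality of the isolation depends on how narrowly that access is scoped.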
What applications can use this?
As for examples, any application that involves deploying to several devices, especially one with native dependencies, would benefit from the Docker/Pi combo. As part of the Hardware Renaissance we expect this to become a lot more common. But you can get a good idea just by looking through the awesome things people already do with their Raspberry Pi devices.
Won't there be overhead?
The Pi is already fairly slow compared to the average x86 device (though incredibly powerful compared to an Arduino). Docker does add some computational overhead, but only a little: a container is not a VM, so processes run directly on the host kernel with no hardware emulation. The overhead on the Pi should therefore be minimal. Whatever you were able to do outside Docker, you should be able to do inside it too.
How is it relevant to Resin.io?
Resin.io, the product, was the impetus for doing this, besides it being a cool hack for its own sake. We're interested in deploying apps built with web technologies to embedded devices, so there is a huge added benefit in the pre-existing tooling and ecosystem of experience that a (lowercase) lean startup can't afford to ignore.
If you have any questions or have other benefits in mind, please do add them in the comments below.
Lots of people asked for a pre-built binary, so we're going to make one. We'll also look into Solomon Hykes' suggestion of Tiny Core Linux as a base OS. We'll keep everyone updated on progress through this blog and our Twitter feed.
Chat to the team on our community chat