Your average blog post / tutorial / video about containerizing software goes a little something like this: write a short Dockerfile, run a docker build and a docker run, and you’re done. Simple!
Trouble is, all that “simple Docker stuff” is where things can (and often do!) go horribly, horribly wrong, and there is precious little material out there on the Internet to help you through those precarious bits. No more!
In this series, I’m going to take you through the complete containerization effort, using a nontrivial application composed of both off-the-shelf components and some home-spun software.
Together, we’re going to build and containerize an Internet radio station. Let’s meet the software we’re going to be using “off the shelf” to do most of the heavy lifting:
- Icecast2 – https://icecast.org/
- Liquidsoap – https://www.liquidsoap.info/
- ffmpeg – https://www.ffmpeg.org/
- youtube-dl – https://youtube-dl.org/
These four components will let us build an audio streaming system that pulls tracks down from YouTube, shuffles them, and serves them to listeners over HTTP. For content, I’m going to make episodes of the Rent / Buy / Build podcast available.
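To give you a feel for how these pieces will fit together, here’s a rough, back-of-the-napkin sketch of the ingest side as plain shell commands. The URL, file names, and paths are all placeholders; the real, containerized version of this pipeline is what we’ll build over the course of the series.

```sh
# A rough sketch of the ingest side of the pipeline; the URL, file names,
# and paths below are placeholders.

# 1. Pull a track down from YouTube and extract the audio as an mp3
youtube-dl -x --audio-format mp3 -o 'incoming/%(title)s.%(ext)s' \
  'https://www.youtube.com/watch?v=EXAMPLE'

# 2. Re-encode / normalize it with ffmpeg before it lands in the library
ffmpeg -i 'incoming/some-episode.mp3' -codec:a libmp3lame -q:a 2 \
  'library/some-episode.mp3'

# 3. Liquidsoap shuffles everything in library/ and feeds the encoded
#    stream to Icecast, which listeners tune into over HTTP (port 8000).
```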
As we build out this solution, we’re going to climb the abstraction tower. We’ll start by building and running our containers by hand, using nothing but the `docker` command. This is the proof-of-concept phase: does the software work together, and can we make all the components play nice and do what we want?
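As a taste of what that hands-on phase looks like, here’s roughly the sort of thing we’ll be typing. The image tag and directory layout are made up for the sake of illustration; the actual Dockerfiles come later in the series.

```sh
# Build an image from our own (yet-to-be-written) Dockerfile; the tag and
# directory layout here are made up for illustration.
docker build -t radio/icecast ./icecast

# Run it detached, publishing Icecast's default port 8000 on the host
docker run -d --name icecast -p 8000:8000 radio/icecast

# Poke at it, watch the logs, and throw the experiment away when done
docker logs -f icecast
docker rm -f icecast
```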
This first stage is usually a very messy one, with lots of dead-ends, failed experiments and retries. I consider this a feature of the process, not a bug. Containers afford us the freedom to play and experiment, without risk of polluting our system with unwanted packages, unused processes, or broken configurations. If something doesn’t work, all we have to do is shut down the container and delete the image.
After we’ve worked out precisely which components we need, and how they work together, we’ll move on to the orchestration stage. Using Docker Compose, we’ll write some infrastructure code that can deploy all of the pieces and parts together, as a cohesive and contained unit. It is at this stage that we explicitly capture the relationships between the containerized components and their data stores.
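For a flavour of what that looks like, here’s a deliberately minimal (and entirely hypothetical) compose file covering just two of the services, written out and run from the shell. Service names, build contexts, and volumes are placeholders for now.

```sh
# A deliberately minimal, hypothetical docker-compose.yml covering two of
# the services; names, build contexts, and volumes are placeholders.
cat > docker-compose.yml <<'EOF'
services:
  icecast:
    build: ./icecast
    ports:
      - "8000:8000"
  liquidsoap:
    build: ./liquidsoap
    depends_on:
      - icecast
    volumes:
      - ./music:/srv/music
EOF

# Bring the whole stack up, and tear it down, as a single unit
docker compose up -d
docker compose down
```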
At this point, I like to take a step back and ask whether the whole system is complete: is it missing features, user interfaces, or anything else? Whatever is missing usually can’t be had off the shelf, and needs to be built. For this particular project, we’re going to need a pretty web interface for interacting with our radio station, and driving new track ingestion.
I specifically wait until I’ve finished assembling components before I go off and build any of the missing bits. I find that this helps to (a) keep the scope of anything I build to a minimum, and (b) ensure that the main backbone of the system is solid and functional before I expend any substantial effort writing glue code. Custom software ain’t cheap; not only does developing it take time away from other projects, it also places a non-trivial claim on future-you’s time in the form of maintenance work.
Once all the gaps are filled, and we’re happy with the solution, we move on to the final stage: converting our Docker Compose recipe into a Kubernetes deployment or three, so that we can run at scale and survive node outages. While this stage builds heavily on the previous ones – Docker Compose and Kubernetes share a lot of the same concepts, after all – it also demands its own specialized bag of tricks.
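To set expectations for that final stage, here’s the rough shape of a Kubernetes Deployment for just the Icecast piece, applied straight from the shell. The image reference is a placeholder for wherever we end up pushing our images; the real manifests will be fleshed out when we get there.

```sh
# A bare-bones Deployment for the Icecast piece, applied straight from a
# heredoc; the image reference is a placeholder for our own registry.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: icecast
spec:
  replicas: 1
  selector:
    matchLabels:
      app: icecast
  template:
    metadata:
      labels:
        app: icecast
    spec:
      containers:
        - name: icecast
          image: registry.example.com/radio/icecast:latest
          ports:
            - containerPort: 8000
EOF

# Confirm the pod came up
kubectl get pods -l app=icecast
```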
When all is said and done, we’ll have taken our idea – wouldn’t it be cool to run an Internet radio station? – from concept to production-grade implementation, using containers all the way.
Let’s get started, shall we?
Join us next week for part 1, where we dive into assembling the off-the-shelf components into a working system, fit for containerization.