Ok. I lied.
In my last post, I promised I would get back to you with more on the “6 pillars of DevOps”. I lied because I completely made up that number.
I also feel a bit disappointed because, after I finished writing the last blog post, I realised there are gazillions of posts on the same topic in the tech blogosphere.
Apologies for that: I know my post wasn’t particularly innovative or enlightening, but that’s the farthest I could go.
After the incredible turnout of readers for my last post (I expected dozens rather than more than a hundred), most of the feedback I got was along the lines of: “So, what do DevOps engineers actually do on a day-to-day basis?”
As I was trying to explain, DevOps is not a label that can be stuck to one’s job, but rather an approach to organising the delivery and operations of IT services. Now that I’m writing this post, I realise that DevOps is a misnomer in one more sense: I reckon that in 90% of cases, if you are hired as a DevOps engineer, you’ll end up working on infrastructure projects and doing very little coding.
DevOps == SRE on steroids
Nonetheless, DevOps practitioners are very often required to have a common layer of skills (usually large and quite off-putting) and, in that regard, yes, you can ‘label’ an engineer “DevOps” if they have most of those skills.
I will try to summarise what I think are the DevOps pillars. I found this quite helpful in my own experience, as it helped me put some order in my often confused ideas.
The first buzzword that comes to my mind is Microservices, which IMHO is the enabler for all the other pillars. Without a microservice approach, it is very hard to build all the other foundations of DevOps.
Is a microservice a small NGINX server running in a container? Is it an AWS Aurora instance? Is it a lightweight daemon running “in da cloud”? No, no, and no!
Microservices, again, are an organisational approach to designing systems architecture, in which you deconstruct services into tiny, autonomous components. Those microservices need to talk to each other to exchange state data, as the parts of an equivalent monolith would. But they often do so across the network, via message queues or RESTful APIs.
The benefits of that approach are several. Below are the three I think are most important:
- independence: each individual component is independent of others (provided the interfaces don’t change) and therefore can be engineered/maintained as a separate entity allowing greater flexibility;
- coding language freedom: often, the development and operations of individual MSs are the responsibility of different micro-teams who might have their own coding tastes and preferences. If the interfaces between MSs are all that matter, each micro-team can opt for the languages/methods most appropriate for the service component they are designing;
- release cycle speed: the agility provided by the split of domains and the clear demarcation lines between functionalities makes testing much more efficient and allows for much shorter software release cycles.
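To make the “exchange state over the network” idea concrete, here is a minimal sketch in Python (standard library only) of a toy service exposing its state as JSON over HTTP, and a client consuming it. The service name, endpoint, and payload are invented purely for illustration:

```python
# Toy "inventory" microservice and a client consuming its state over HTTP.
# All names and the payload shape are invented for this sketch.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service's only job: hand out its state as JSON.
        body = json.dumps({"item": "widget", "stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second "service" fetches the state across the network boundary.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/stock") as resp:
    state = json.load(resp)
server.shutdown()
```

The only contract between the two sides is the JSON payload: as long as that interface is stable, either side can be rewritten in any language.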
If you want to dig deeper into what microservices are, and how distributed architectures evolved over time, I really encourage you to read Martin Fowler’s pages on microservices.
There is also an excellent book that I have just started reading (credit to my friend Andrea, who recommended it to me).
Build for failure
A few years ago, a former colleague who had moved to Netflix mentioned Chaos Monkey to me. And that was mind-blowing.
In essence, Netflix put together a small collection of damage-causing tools that embody the principles of chaos engineering (see here for details).
What better way to gain confidence that you will never be woken up during your on-call shifts than multiple, repeated, systematic failures constantly happening on your infrastructure? Chaos Monkey constantly breaks stuff all across Netflix’s VM/container infrastructure.
Services must be able to survive this constant tampering by design.
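The principle can be sketched in a few lines of Python: a toy “monkey” removes a random instance from a pool, and the service is expected to stay above its designed tolerance. The instance names and thresholds below are invented, not Chaos Monkey’s actual mechanics:

```python
# Toy chaos-engineering sketch: randomly terminate an instance and verify
# the service still has enough capacity. Names/thresholds are invented.
import random

instances = [f"web-{i}" for i in range(5)]
MIN_HEALTHY = 3  # the service is designed to survive losing up to 2 of 5

def chaos_strike(pool):
    """Terminate one random instance, as a chaos tool would."""
    victim = random.choice(pool)
    pool.remove(victim)
    return victim

killed = chaos_strike(instances)
# A service "built for failure" must satisfy this after every strike:
assert len(instances) >= MIN_HEALTHY, "service degraded below tolerance"
```

The point is the assertion, not the strike: failure is injected routinely, so surviving it becomes a continuously verified property of the design.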
Automate, automate, and automate!
As you can probably tell by now, microservices built for failure are all about tearing down VMs/containers and launching fresh ones, provisioning them with the correct dependencies, and shipping your (or your developers’) code onto them so they can contribute to your ‘cloud’ of workloads.
This clearly requires something other than a human being to perform those activities. This is the main use case for automation in the magic world of DevOps (though there are many others, I believe).
Automating is also a way of documenting procedures/processes. Reading an Ansible playbook or a Puppet manifest is usually a very informative exercise, and writing them is probably the best way of passing on the knowledge about how a procedure must be executed. Certainly better than a Word document or an email.
Moreover, when you are coding some automation, it means you are dedicating time to thinking about the process itself, and you are therefore encouraged to consider whether it could be improved or changed at its roots.
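As a rough illustration of the desired-state idea behind tools like Ansible and Puppet (a conceptual sketch, not how those tools are implemented), here is a toy idempotent step in Python: run it twice, and the second run reports no change:

```python
# Toy idempotent provisioning step: describe the desired state,
# converge only if needed. File name and content are invented.
import tempfile
from pathlib import Path

def ensure_file(path: Path, content: str) -> bool:
    """Converge `path` to the desired content; return True if a change was made."""
    if path.exists() and path.read_text() == content:
        return False          # already in the desired state: do nothing
    path.write_text(content)  # converge
    return True

workdir = Path(tempfile.mkdtemp())
target = workdir / "motd"
first = ensure_file(target, "welcome\n")   # first run changes the system
second = ensure_file(target, "welcome\n")  # second run is a no-op
```

Idempotence is what makes such code double as documentation: it states the desired end state, not a fragile sequence of one-shot commands.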
If the infrastructure your code is running on is automated and governed by tools like the ones mentioned above, you’re golden! The infrastructure is code! (People often refer to this concept as IaC – Infrastructure as Code.)
CI/CD can well be regarded as a form of automation, as, in essence, it deals with automated software testing and delivery.
Continuous Integration/Continuous Delivery
What does it actually even mean?
You are doing continuous integration/delivery when you completely rely on pre-defined, automated steps to deliver the end result of your job, rather than on a lengthy, manual, time-consuming process.
Continuous delivery often involves automated testing of software components (everything that is coded and testable: be it your infrastructure, your network, or your services) and automated delivery into production environments.
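Stripped to its bones, a CI/CD pipeline is just an ordered list of automated steps that fail fast: nothing past a failing step gets delivered. The following Python sketch captures that shape; the step names and their trivial bodies are placeholders for real linters, test suites, and build jobs:

```python
# Minimal model of a fail-fast CI/CD pipeline. Step names and logic
# are invented stand-ins for real linting/testing/building stages.
def lint() -> bool:
    return True  # stand-in for a real linter

def unit_tests() -> bool:
    return True  # stand-in for a real test suite

def build_artifact() -> bool:
    return True  # stand-in for building an image/package

PIPELINE = [("lint", lint), ("test", unit_tests), ("build", build_artifact)]

def run_pipeline(steps):
    for name, step in steps:
        if not step():
            return f"failed at {name}"  # fail fast: nothing past this ships
    return "delivered"

result = run_pipeline(PIPELINE)
```

Real CI systems add triggers, artifacts, and environments around this loop, but the core contract is the same: every delivery walks the same automated steps.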
How you deliver the end product of your CI/CD to your different environments during your pipeline depends on your specific requirements and varies from implementation to implementation.
The trend that has emerged in the last few years, and boomed with the advent of Docker, is that of Immutable Delivery, first formulated by (guess who?) Netflix in their famous post “Building with Legos”.
The whole testing/delivery process relies on brand new environments being created at every iteration, where the new software lives. In a nutshell, when your new code needs testing, a brand new VM/container is built from the ground up and your software is executed within this newly created embryo, with full and clear control over all the system/software dependencies.
You can clearly see now why containers gave this approach a huge boost: spawning a new container is virtually effort-free compared to the old times, when the only way of isolating OS environments was using VMs.
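The immutable-delivery idea can be modelled in a few lines: environments are frozen objects, a release builds a brand-new one, and “deploying” is just swapping a pointer. This is a conceptual sketch with invented names, not how Netflix actually implements it:

```python
# Toy model of immutable delivery: never patch a running environment;
# build a fresh one per release and swap. All names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen = immutable: no in-place patching allowed
class Environment:
    app_version: str
    base_image: str

current = Environment(app_version="1.0", base_image="base-2024")

def build_release(version: str) -> Environment:
    # A brand-new environment is built from scratch every time.
    return Environment(app_version=version, base_image="base-2024")

candidate = build_release("1.1")
assert current.app_version == "1.0"  # the old environment is untouched
current = candidate  # the "deploy": traffic now points at the new environment
```

Because the old environment is never mutated, rollback is just pointing back at it, and every environment's dependencies are known exactly from its build.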
The concept of CI/CD could be extended to anything that has the form of code and that can be tested.
In the NetOps world (the DevOps concept extended to networks) people are starting to test their network designs by automating some lab tests and configuration generation, then automating the deployment to production.
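A toy version of that NetOps flow in Python: generate a device configuration from a template, then run an automated check on it before it ever touches production. The template and the check are invented stand-ins for real config generators and lab tests:

```python
# Toy NetOps pipeline step: render a config from data, then validate it.
# The template syntax and the check are invented for illustration.
TEMPLATE = "interface {iface}\n ip address {ip}/{prefix}\n"

def render(iface: str, ip: str, prefix: int) -> str:
    """Generate a config stanza from structured data."""
    return TEMPLATE.format(iface=iface, ip=ip, prefix=prefix)

def check(config: str) -> bool:
    """Stand-in for a real lab validation: every stanza needs an address."""
    return "ip address" in config

cfg = render("eth0", "10.0.0.1", 24)
```

The payoff is the same as in software CI: configuration becomes data plus code, so it can be tested automatically instead of typed into a device by hand.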
Build once, run anywhere. Full stop.
Cloud is not just a buzzword that is cool to mention to your bosses because they like it. Cloud is a term that perfectly matches this scenario, as your infrastructure needs to be so scalable, repeatable, and automatable that you don’t really need to know its state at any given point in time. You just need to know it has sufficient resources to support your services, and that someone keeps an eye on failed hardware components so that the overall availability of compute/storage/memory/network resources is as you expect it to be.
Building a nice infrastructure is just the first step. Your services usually rely on some common features like:
- service discovery
…but all you want to focus on is your application.
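As a sketch of what service discovery does for your application, here is a toy in-memory registry in Python. Real systems do this durably and across hosts; the service names and addresses below are invented:

```python
# Toy service registry: services register their addresses, clients look them up.
# All names/addresses are invented; real registries are distributed and durable.
registry: dict[str, list[str]] = {}

def register(name: str, address: str) -> None:
    """A service instance announces where it can be reached."""
    registry.setdefault(name, []).append(address)

def discover(name: str) -> list[str]:
    """A client asks: where are the healthy instances of this service?"""
    return registry.get(name, [])

register("payments", "10.0.1.5:8080")
register("payments", "10.0.1.6:8080")
```

With this in place, the application only ever asks for “payments” by name; it never hard-codes where instances live, which is exactly what lets instances come and go.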
This is where the main difference between Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) lies.
The way I picture PaaS is as middleware sitting on top of your infrastructure: an abstraction layer that allows you to consume your infrastructure resources without actually needing to know the infrastructure at all.
What have you built all this for? Services. Cool things that make people’s lives easier and better (or more fun). Hopefully.
Toolset: intelligence, neurons, experience, gut feeling (I have no URLs for these)
Also published on Medium.