Intro to DevOps

spareproj
4 min read · Dec 14, 2022


I frequently hear the term DevOps thrown around in work conversations. It turns out it’s simply a name for a function, or an area of discipline.

DevOps is a software development approach that combines two functions, as the name suggests: Software Development + IT Operations.

Its main purpose is to help devs deliver applications to production quickly.

To do so, DevOps engineers use automation tools such as Docker and Kubernetes, along with Continuous Integration and Continuous Delivery (CI/CD) processes, to streamline the software development process.

Microservices giving rise to containerization tools

Most software today is built as a combination of microservices instead of a single monolith.

This refers to a software architecture consisting of individual applications (‘microservices’) each focused on a specific purpose, interacting with each other and collectively serving a bigger purpose.

We can think of a manufacturing factory consisting of different departments: one for procurement, one for design, and one for stitching. Each department has its own reason to exist, its own manager and workers, and the resources it needs to do its job.

These individual departments are like containers.

A container is a package that consists of an application and its dependencies. Containers can be run on a machine to perform their function.

To scale the business, each department must function properly on its own, work cross-functionally with other departments, and eventually even work with other factories in other countries.

That is where containerization tools come in: they manage each department, as well as the relationships between departments and between factories.
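As a sketch, the factory analogy might map onto a Docker Compose file like the one below. The service names and images are hypothetical, purely to illustrate how a system splits into cooperating microservices:

```yaml
# Hypothetical factory-style system split into microservices.
# Service names and images are illustrative only.
services:
  procurement:
    image: example/procurement:1.0   # handles purchasing
  design:
    image: example/design:1.0        # handles product design
  stitching:
    image: example/stitching:1.0     # assembles the product
    depends_on:
      - design                       # stitching starts after design is up
```

Each service runs in its own container, yet Compose wires them together into one larger system, much like departments in one factory.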

Use Docker to build containers

Docker is a software development platform that helps you package your application and its dependencies neatly into virtual containers.

These containers run on our computers like micro-computers: each has its own specific job and its own isolated CPU, memory, and network resources, while sharing the host machine’s operating system kernel.

These apps can then run on any machine you choose and remain agnostic to the environment: virtual machines, cloud, hybrid, or physical machines.

You can also run each container independently of the others, stopping or starting any of them without affecting the host machine.
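Packaging an app and its dependencies is done with a Dockerfile. Here is a minimal sketch for a hypothetical Python app (the file names are assumptions, not from any real project):

```dockerfile
# Minimal sketch of a Dockerfile for a hypothetical Python app.
FROM python:3.12-slim                 # base image: the container's runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .                              # copy in the application code
CMD ["python", "app.py"]              # command the container runs on start
```

Building the image (`docker build -t myapp .`) and running it (`docker run myapp`) then works the same on any machine with Docker installed, which is what makes the container environment-agnostic.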

Then, use Kubernetes to manage the containers

Kubernetes (K8s) is a platform to manage these containers, like a conductor leading an orchestra.

We use K8s to automate the deployment, scaling, and management of these containers.

Kubernetes’ clusters of nodes

Let’s start from the top: the entire system deployed on Kubernetes is known as a cluster.

Each cluster has a control plane or master node, which controls all the worker nodes in the cluster.

Each node contains several pods. Pods are the smallest deployable units in Kubernetes.

Each pod hosts one or more containers we talked about. The pod provides shared storage and networking for the containers it hosts.
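This hierarchy (cluster → node → pod → container) is reflected in how you declare a pod. A minimal sketch, with illustrative names:

```yaml
# Minimal sketch of a Pod hosting one container (names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp                # the container from the Docker step
      image: example/myapp:1.0
      ports:
        - containerPort: 8080    # port the app listens on inside the pod
```

In practice pods are rarely created by hand like this; they are usually managed through a higher-level object such as a Deployment, which creates and replaces pods for you.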

Explainer map by ByteByteGo.com

The control plane is responsible for managing the state of the cluster. The API server is the interface between the control plane and the worker nodes. It also exposes a REST API to clients, along with a UI (the Kubernetes Dashboard) and a CLI (kubectl), so that clients can submit requests on how to manage the cluster.

Since the control plane can manage worker nodes across different environments, K8s provides a single API for interacting with the whole cluster, however many nodes it spans.

Main benefits of this orchestration tool:

  1. high availability — no downtime
  2. scalability — flexibility
  3. disaster recovery — back up and restore

By managing the pods and nodes in the cluster, K8s can auto-scale the load on each node by adding or removing pods.
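One built-in mechanism for this is the HorizontalPodAutoscaler, which adds or removes pod replicas to keep a metric near a target. A sketch, with illustrative names and thresholds:

```yaml
# Sketch of a HorizontalPodAutoscaler: K8s adds or removes pod replicas
# to keep average CPU near the target. Names and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:               # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2                # never fewer than 2 pods (availability)
  maxReplicas: 10               # cap on scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale to keep average CPU near 70%
```

With this in place, traffic spikes add pods automatically and quiet periods remove them, which is the scalability benefit from the list above in concrete form.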

By using Kubernetes to manage a cluster of containers, devs can focus on writing application code rather than worrying about the infrastructure of their service.

To summarize, we use Docker to create containers, and Kubernetes to manage these containers. Docker and Kubernetes are often used together to automate the end-to-end process of building and managing applications in containers.

Continuous Integration and Continuous Delivery (CI/CD)

CI/CD is a pair of practices that developers use to improve the speed and reliability of the software development process.

CI involves integrating code changes from multiple developers into a shared repo. This prevents isolated work from causing miscommunication, since there is a common source of truth for everyone to refer to. CI also involves automated testing of the code to ensure it works as expected.

CD involves automatically deploying code changes to production environments when they are ready. This allows developers to release updates quickly.
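A CI/CD pipeline is usually declared as config in the repo itself. Here is a sketch in GitHub Actions syntax for a hypothetical Python project (the repo, steps, and deploy target are assumptions):

```yaml
# Sketch of a CI/CD pipeline in GitHub Actions syntax.
# The project layout and deploy step are hypothetical.
name: ci
on:
  push:
    branches: [main]             # run on every push to the shared repo
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # CI: pull the latest shared code
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest                      # CI: run the test suite
  deploy:
    needs: test                          # CD: deploy only after tests pass
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to prod here"  # placeholder for a real deploy step
```

The `needs: test` line captures the CI/CD relationship: integration and testing gate the automatic deployment.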

Notes:

Most tech orgs merge these functions in different permutations, depending on the size and complexity of the business. Still, here are some nuances between the titles and their key responsibilities:

  1. DevOps: deliver apps to production environment quickly
  2. Site Reliability Eng (SRE): maintain server uptime and availability by building resilience into the system, i.e. system maintenance, reliability testing, incident management, security management.
  3. Platform Eng (PE): create the infra or approach that different business units in the organisation can use.

Resources:

  1. https://www.youtube.com/watch?v=an8SrFtJBdM
