Roadmap to Become a DevOps Engineer in 2021: Part 2
Check Out Part 1 HERE
FOLLOW ME ON TWITTER
@chetanistaken Or Click Here
Now let's continue with the second part:
The next concept you need to be familiar with is container orchestration. Think of your workload as an orchestra. Each container is like a musician responsible for their own part, and each of these musicians looks to the conductor for guidance. Kubernetes is the conductor of the orchestra.
Let's have a look at a real-world example. Kubernetes is installed on all the servers and forms a cluster. We then take our containers and organize them into what are called pods. Pods contain one or more containers.
In our example, we have two pods, each containing three containers of a web application. One is for the production instance of the application, while the other is for the development instance. Through Kubernetes, we can assign pods to the worker nodes, and Kubernetes makes sure that the workload is distributed across the nodes.
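To make the pod idea concrete, here's a minimal sketch of a pod manifest. The names, labels, and image are placeholders I've made up for illustration, not from any real project:

```yaml
# Hypothetical Pod manifest: all names and the image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: webapp-prod
  labels:
    app: webapp
    env: production   # the dev instance would carry env: development
spec:
  containers:
    - name: web
      image: nginx:1.21   # stands in for your web application image
      ports:
        - containerPort: 80
```

You'd hand a file like this to the cluster with `kubectl apply -f pod.yaml`, and Kubernetes decides which worker node it lands on.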
If any of the nodes go offline, Kubernetes makes sure that the workload is moved to another node. When it comes to Kubernetes, there's a lot to know and learn. Before you get started, you'll need adequate knowledge of Docker and containers.
After that, the best suggestion is to create your own lab environment. You can either build your own three-node cluster, or you can install Minikube, which allows you to virtualize a three-node cluster on a single server.
If you're looking for a simpler option just to get your feet wet with container orchestration, you may want to look at Docker Swarm. It'll let you try things out without going into all the complexities that come with Kubernetes.
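As a rough sketch of the lab setup mentioned above, assuming you've installed Minikube (recent versions support multi-node local clusters via the `--nodes` flag):

```shell
# Spin up a local three-node cluster on a single machine
minikube start --nodes 3

# List the nodes Kubernetes now sees
kubectl get nodes
```

From there you can practice deploying pods and watching Kubernetes reschedule them when you stop a node.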
To truly have infrastructure as code, we need to provision our servers and network as code. That's where tools like Terraform come in. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently.
It allows you to completely codify your infrastructure by creating a plan file. This plan file allows you to create, change, or remove components of your infrastructure no matter what cloud provider you're using. Terraform is also an idempotent tool, and in DevOps, that's a term you should be very familiar with.
That means it's aware of the current state of your system and will only make the changes that it needs to. Terraform is definitely a tool that's going to see more frequent use in the upcoming years.
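Here's a minimal sketch of what Terraform code looks like, assuming AWS as the provider; the region, resource name, and AMI ID are placeholder values for illustration:

```hcl
# Illustrative only: region, resource name, and AMI ID are placeholders.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-00000000"   # placeholder AMI ID
  instance_type = "t3.micro"
}
```

Running `terraform plan` shows only the diff between this code and what actually exists, and `terraform apply` makes just those changes, which is the idempotence described above in action.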
So once you have your infrastructure provisioned, the next step is to make sure that everything is configured, and that's where configuration management tools come into play. The most popular tools out there right now are Ansible, SaltStack, Puppet, and Chef. If you're just getting started out in DevOps and you haven't used any of these before, my recommendation is to look straight at Ansible. It's definitely the best option to use for configuration management.
In my opinion, it beats all the other options in every category, but the most important one in my mind is the ease of setting it up for an entire infrastructure. If you use something like Chef or Puppet, you need to get a client agent onto each of the devices you're managing. With Ansible, you just need to make sure that you have a working SSH connection, which you usually have by default.
This also lets you manage a wide range of different types of devices. For example, it's very easy to manage network gear like routers and switches using Ansible, whereas using something like Chef or Puppet would be very difficult, since there wouldn't be an easy way to install the client software on those devices.
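To show what the agentless approach looks like in practice, here's a minimal playbook sketch; the `webservers` group name and the nginx package are assumptions for the example:

```yaml
# Hypothetical playbook: the "webservers" group and nginx package are examples.
- name: Ensure nginx is installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

You'd run this with `ansible-playbook site.yml`, and Ansible connects to every host in the group over plain SSH; no agent required.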
Continuous integration and continuous delivery, or CI/CD for short. Continuous integration is the act of automating the QA of new code. When a new commit comes into a code repository, a CI tool can automatically launch it in a container and run tests. If the tests fail, the developer is notified and can have a look at their code and fix it. If the tests pass, then you can move on to the continuous delivery portion, which automatically deploys the code. The code can be delivered to any environment: QA, staging, or even production.
The act of using CI/CD pipelines helps automate the testing and delivery of code, saving a lot of developer time. Some of the common tests performed by CI tools include:

- Linting: the process of checking code to make sure it's formatted to a certain standard. No more arguing about tabs versus spaces.
- Dependency checks: this could be something like a Python script requiring a certain module that was never added to the requirements.txt file. A CI test would fail this code.
- Unit tests: those fun things you had to program in your college or university courses are finally coming into use.
- Architecture tests: you can have your code run in different types of containers or architectures and see if it runs properly in all of them.
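As one possible shape for such a pipeline, here's a sketch of a GitHub Actions workflow covering the dependency-check, linting, and unit-test steps above; the Python project layout and tool choices (flake8, pytest) are my assumptions for the example:

```yaml
# Hypothetical workflow: repo layout and tool choices are assumptions.
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # dependency check: fails if a module is missing
      - run: flake8 .                          # linting
      - run: pytest                            # unit tests
```

Any failing step stops the pipeline and notifies the developer, which is exactly the feedback loop described above.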
When it comes to CI/CD tools, there are a lot of different choices, and you can't really go wrong with any of them. Have a look at each one and see which works out best for you.
More and more companies have also been embracing proper data analytics and log management recently. If you do find yourself in a DevOps role, you'll probably find yourself having to learn one of the many tools for log management and data analytics.
My only real advice here is to go out there and check to see what's available and maybe play with them in your lab environment and see what you can do.