Sunday, September 11, 2022

Terraform vs. Ansible: Key Differences and Comparison of Tools

 


I have always been confused by tools with overlapping functionality, such as these two. Both Terraform and Ansible are DevOps tools, but how do they differ? In short, Terraform is an open-source Infrastructure as Code tool, while Ansible is an open-source configuration management tool focused on configuring that infrastructure.


Let us explore the similarities and differences and try to conclude the best way to manage infrastructure, especially in the cloud.


Similarities

At a very high level, given the capabilities of both products, Terraform and Ansible come across as similar tools. Both can provision new cloud infrastructure and configure it with the required application components.


Both Terraform and Ansible can execute remote commands on a newly created virtual machine. Both tools are agentless: there is no need to deploy agents on the target machines for operational purposes.


Terraform uses cloud provider APIs to create infrastructure, and basic configuration tasks are performed over SSH. Ansible likewise uses SSH for all of its configuration tasks. Neither tool requires a separate set of infrastructure to manage its "state" information, so both tools are masterless.
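To make the contrast concrete, here is a minimal sketch of how Terraform describes infrastructure declaratively. The region, AMI ID, and names below are placeholders, not values from any real environment:

```hcl
# Declarative style: describe the desired end state; Terraform works out
# which cloud provider API calls are needed to reach it.
provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform apply` against this file creates the instance via the AWS API; running it again with no changes does nothing, because the desired state is already met.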


Differences


Although both Terraform and Ansible are capable of provisioning and configuration management, a deeper dive reveals the benefits of one over the other in certain areas. Infrastructure management broadly encompasses two aspects: orchestration and configuration management. Terraform and Ansible each have their own way of handling both, with strong and weak points where they overlap. It is therefore important to look at the details of both tools before making a "perfect" choice, or a combination with clear boundaries.



Feature              | Terraform                  | Ansible
---------------------|----------------------------|------------------------------
Type                 | Orchestration tool         | Configuration management tool
Syntax               | HCL                        | YAML
Language             | Declarative                | Procedural
Default approach     | Immutable infrastructure   | Mutable infrastructure
Lifecycle management | Supported                  | Not supported
Capabilities         | Provisioning & configuring | Provisioning & configuring
Agentless            | Yes                        | Yes
Masterless           | Yes                        | Yes
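For comparison with Terraform's declarative HCL, here is a minimal sketch of Ansible's procedural YAML. The host group and package name are placeholders for illustration only:

```yaml
# Procedural style: tasks run in the order listed, over SSH, with no
# agent installed on the managed host.
- hosts: webservers   # placeholder inventory group
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Each task is a step to execute, rather than a description of an end state, which is why Ansible is usually classified as procedural and mutable: it changes existing machines in place.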

Should you have any queries, do let me know in the comments. 

… HaPpY CoDiNg

Partha

 



Friday, August 26, 2022

Introduction to Containers

 


Containers – in simple words

In the cloud computing world, especially in AWS, there are three main categories of compute: virtual machines (VMs), containers, and serverless. No one-size-fits-all compute service exists, because the right choice depends on your needs. It is important to understand what each option offers in order to build an appropriate cloud architecture. Today, we will explore containers and how to run them.

Containers


The idea started long ago, with certain UNIX kernels gaining the ability to isolate processes from one another.


With the evolution of the open-source software community, container technology evolved too. Today, containers can host a variety of workloads, including web applications, lift-and-shift migrations, distributed applications, and streamlined development, test, and production environments.


Containers also solve some problems of traditional compute, including the difficulty of running applications reliably when moving between compute environments.


A container is a standardised unit that packages your code and its dependencies. This package is designed to run reliably on any platform, because the container creates its own independent environment.


With containers, workloads can be carried from one place to another, such as from development to production or from on premises to the cloud.


Docker


Docker and containers are often used synonymously. Docker is the most popular container runtime; it simplifies management of the entire operating system stack needed for container isolation, including networking and storage. Docker creates, packages, deploys, and runs containers.
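As a minimal sketch, a Dockerfile is how Docker packages code and dependencies into a single unit. The application name and files below are hypothetical:

```dockerfile
# Hypothetical example: package a small Python app and its dependencies.
FROM python:3.11-slim            # base image supplies the OS libraries and runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]         # command the container runs on start
```

Building with `docker build -t myapp .` and starting with `docker run myapp` produces the same environment on any machine with Docker installed, which is the portability the paragraph above describes.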


What is the difference between containers and virtual machines?


Containers share the operating system and kernel of the host they run on, whereas each virtual machine contains its own operating system. Because every virtual machine must maintain a copy of an operating system, some resources are wasted. Containers, in contrast, are more lightweight and spin up almost instantly. This difference in start-up time becomes instrumental when designing applications that need to scale quickly during input/output (I/O) bursts.



While containers provide speed, virtual machines offer the full strength of an operating system and more resources, such as package installation and a dedicated kernel.


Orchestrate containers


In AWS, containers run on EC2 instances. We can launch a large EC2 instance and run a few containers on it. While a single instance is easy to manage, it lacks high availability and scalability. As the number of instances and containers grows, and they run across multiple Availability Zones, the complexity of managing them grows too. To manage compute at a large scale, we must know the following:

  1. How to place containers on EC2 instances

  2. What happens if the containers fail?

  3. What happens if the instance fails?

  4. How to monitor deployments of containers


This coordination is handled by a container orchestration service. AWS offers two container orchestration services – Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).


Elastic Container Service - ECS

ECS is an end-to-end container orchestration service that helps to spin up new containers and manage them across a cluster of EC2 instances.



To run and manage containers, we install the Amazon ECS container agent on our EC2 instances. This open-source agent is responsible for communicating cluster management details to the Amazon ECS service. The agent runs on both Linux and Windows AMIs. An instance with the container agent installed is called a container instance.
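On an ECS-optimized AMI, the agent reads its settings from a small configuration file. A minimal sketch, where the cluster name is a placeholder:

```ini
# /etc/ecs/ecs.config on a container instance
# Tells the ECS container agent which cluster to register with.
ECS_CLUSTER=my-cluster   # placeholder cluster name
```

With this in place, the instance registers itself with the named cluster when the agent starts, and ECS can begin scheduling containers onto it.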


Once the Amazon ECS container instances are up and running, we can perform actions that include, but are not limited to, launching and stopping containers, getting cluster state, scaling in and out, scheduling the placement of containers across the cluster, assigning permissions, and meeting availability requirements.


To prepare applications to run on Amazon ECS, we create a task definition. The task definition is a text file, in JSON format, that describes one or more containers. A task definition is like a blueprint that describes the resources we need to run a container, such as CPU, memory, ports, images, storage, and networking information.
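A minimal sketch of such a task definition, with a single container; the family name, image, and resource sizes are illustrative placeholders:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 80 }
      ],
      "essential": true
    }
  ]
}
```

Saved as a file, this can be registered with `aws ecs register-task-definition --cli-input-json file://task-def.json`, after which ECS can run tasks from the blueprint.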


Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. By bringing software development and operations together by design, Kubernetes has built a rapidly growing, well-established ecosystem.

If you already use Kubernetes, you can use Amazon EKS to orchestrate the workloads in the AWS Cloud. Amazon EKS is conceptually similar to Amazon ECS, but with the following differences:

  1. An EC2 instance with the ECS agent installed and configured is called a container instance. In Amazon EKS, it is called a worker node.

  2. An ECS container is called a task. In Amazon EKS, it is called a pod.

  3. While Amazon ECS runs on AWS native technology, Amazon EKS runs on top of Kubernetes.

Should you have any queries, do let me know in the comments. 

… HaPpY CoDiNg

Partha