AWS, Containerization & Amazon Elastic Kubernetes Service (AWS EKS): Part 1

Whether you’re a developer, a system engineer, or even a project manager, if you work in IT today, you have almost certainly heard the word “Kubernetes” floating around. After all, Kubernetes has been the buzzword everyone’s talking about since its release back in 2014. Nowadays, most organizations either use Kubernetes or want to introduce it into their cloud environment, and for good reason: Kubernetes is a container orchestration tool that makes running containers at scale much easier.

Google Cloud, whose engineers developed Kubernetes in the first place, was already providing a managed Kubernetes service (GKE, or Google Kubernetes Engine) in 2015. This was a huge advantage for Google: many companies rely on managed services, because Kubernetes requires both knowledge and time to maintain, which many companies simply don’t have.

While all this was happening, AWS was lagging severely behind. It was still by far the largest and most used public cloud, but it lacked a managed Kubernetes offering, which pulled some clients toward Google. Sure, AWS had its own container orchestration service, ECS, but interest in Kubernetes was at an all-time high. So finally, in June of 2018, a full three years after Google introduced its managed solution, Amazon came out with the global release of AWS EKS (Elastic Kubernetes Service).

In this two-part article, we’ll review AWS EKS and see what it can bring to the table for your business. But first, let’s take a look at containerization as a whole and what options you have to utilize it on AWS Cloud.

What are Containers?

Containers are in many ways similar to virtual machines, but they share the host operating system kernel among applications, which makes them very lightweight. Each container has its own file system and a share of resources (memory, CPU, etc.); most importantly, containers are decoupled from the underlying infrastructure, making them independent and portable. This independence lets us move away from the monolithic stacks of the past and is the reason you so regularly see microservices implemented today. Other benefits include consistency across environments (dev, staging, and production, for example) and easier implementation of continuous integration and deployment.

Containers have actually been around for quite some time. Back in 1979, something called “chroot” was introduced. Chroot was used to change the root directory for the current running process (as well as child processes) and was the first tool used for process isolation.

Later on, new methods were introduced, but it wasn’t until 2013, when Docker entered the scene, that containers caught the eye of a wider audience. Our service-to-service container comparison considers high-level issues such as user friendliness, security, and partnership ecosystems for AWS, Azure, and GCP.

What’s the deal with Docker?

So what is Docker, and why did it make such a big splash?

Docker is an open platform that helps you with developing, shipping, and running containers. With Docker, you can go from writing the code to deploying it in your production environment in no time. Docker was the first tool that simplified the use of containers overall, making it viable for a wider audience and bringing attention to containers once again.
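As a sketch of how little is needed to containerize an application, here is a minimal, hypothetical Dockerfile for a Python web app (the file layout, port, and entry point are assumptions for illustration only):

```dockerfile
# Start from an official base image with Python preinstalled
FROM python:3.12-slim

# Install dependencies first so this layer is cached between builds
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 8000
CMD ["python", "app.py"]
```

From there, `docker build -t web-app .` followed by `docker run -p 8000:8000 web-app` takes you from source code to a running container, and the same image behaves identically on a laptop, a CI server, or a production host.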

Where does Kubernetes come in?

Running a single container is very simple, but production environments bring other considerations you need to be aware of, like container management. Containers need to run your applications, they need to be resilient, and downtime is not an option. This is where Kubernetes comes in.

Kubernetes offers many features that help you work with containers. One example is self-healing: if a container fails a health check, it is terminated and replaced with a new one, and the replacement is not allowed to serve traffic until it passes its own checks. There is also load balancing, a necessary component in most environments today. Kubernetes also lets you define and describe desired states for your containers, giving you automatic rollouts and even rollbacks for your deployments. Another great feature is storage orchestration, which lets you easily mount any kind of storage system, whether it lives in a public cloud or locally.
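To make these features concrete, here is a minimal, illustrative Kubernetes Deployment manifest; the image name, port, and health-check path are placeholders. The `replicas` field declares the desired state, and the `livenessProbe` is what drives the self-healing described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # desired state: always keep 3 copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8000
          livenessProbe:      # a failing probe triggers a restart
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
```

If a container stops answering on `/healthz`, Kubernetes replaces it automatically, and a bad rollout can be reverted with `kubectl rollout undo deployment/web-app`.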

Running Containers on AWS

So if you decide to run containers on the AWS Cloud, what options are available to you?

Docker Containerization on EC2 Instances

This first option is fairly straightforward. You provision an EC2 instance (keep in mind that some planning is needed to choose the right instance type and size for your specific business requirements) and use it to deploy Docker containers that run your application.

This approach was used before the other options were on the table, and while it certainly works, there is no orchestration in place, which creates a lot of operational overhead. The only way to get orchestration for Docker containers run this way was to bolt on one of the early tools such as Docker Swarm or Apache Mesos.
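As a sketch of what this looks like in practice on a fresh Amazon Linux 2 instance (the image name is a placeholder, and these commands assume you are connected as `ec2-user`):

```shell
# Install and start Docker on the instance
sudo yum update -y
sudo yum install -y docker
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user   # log out and back in for this to take effect

# Run the application container; no orchestrator restarts it if the host fails
docker run -d --restart unless-stopped -p 80:8000 registry.example.com/web-app:1.0
```

Note that `--restart unless-stopped` only protects against the container process dying; if the instance itself goes down, nothing reschedules the workload, which is exactly the gap orchestration fills.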

Elastic Container Service (ECS)

AWS offers its own fully managed container orchestration service called ECS, which is a great alternative to Kubernetes. Elastic Container Service provides a lot of beneficial features and is natively integrated with other AWS services (CloudWatch, Secrets Manager, Route 53, Identity and Access Management, etc.). It is also very simple to use.

With ECS, you can easily define your application (select the container image and necessary resources) and very quickly have everything up and running. You can also optimize your costs by utilizing Spot Instances, for example, benefiting from up to a 90% discount compared to on-demand instances.
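A simplified, hypothetical ECS task definition illustrates how little is needed to describe an application; the image, CPU, and memory values here are placeholders:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["EC2"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web-app",
      "image": "registry.example.com/web-app:1.0",
      "portMappings": [{ "containerPort": 8000 }],
      "essential": true
    }
  ]
}
```

Saved to a file, this can be registered with `aws ecs register-task-definition --cli-input-json file://task-def.json`, after which a service keeps the desired number of tasks running.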

Self-Hosted Kubernetes on AWS

Up until AWS EKS was released, the only way to run Kubernetes on AWS was to host it yourself. Of course, having a Kubernetes option in the cloud is a great thing—after all, that orchestration is what you’re looking for. But Kubernetes is a very complex tool, consisting of many components: kube-apiserver (the control plane component that exposes the Kubernetes API), kube-scheduler (the control plane component that assigns newly created pods to nodes), etcd (the key-value store that holds all cluster data), and a few more that all belong to the control plane. And then there are the worker nodes (the machines that actually run your containers).

All of this requires experience not only to deploy but also to administer in the long run. And while there are a lot of companies that would love to use Kubernetes, there are far fewer that have the necessary people to self-host it.
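To give a feel for that complexity, here is a heavily abridged sketch of bootstrapping a cluster with kubeadm, one common self-hosting route. It assumes kubeadm, kubelet, and a container runtime are already installed on every machine, and it omits the ongoing upgrade, backup, and certificate-rotation work entirely:

```shell
# On the control plane node: initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (Flannel is used here as one example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker node: join the cluster using the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

And this is just day one: keeping etcd backed up, the control plane highly available, and every component patched is where the real operational cost lives.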


What is AWS EKS?

Elastic Kubernetes Service is a managed AWS service that lets you run and scale Kubernetes applications on the AWS Cloud as well as on-premises. Because EKS is managed, you get security and high availability out of the box, along with the necessary patching.

EKS runs the control plane across multiple Availability Zones, which gives you a 99.95% uptime SLA; if a control plane node goes down, it is automatically replaced without you even knowing about it. This is a true black-box approach: it removes the overhead of doing the work yourself, but it also keeps you in the dark. Some will prefer this (mostly teams that are smaller or less experienced), while others will not (companies that have Kubernetes-savvy people on board).

EKS as a Managed Service

When we talk about EKS as a managed service, it’s important to note that EKS only manages the control plane for you (etcd, kube-apiserver, kube-scheduler, etc.), while node management is still up to you. So, while a managed Kubernetes control plane runs for you behind the scenes, the worker nodes are deployed on your EC2 instances. You have full control over these, which means extra work: you have to patch, scale, and secure them yourself.

Of course, if you’d rather not manage your own worker nodes, you can rely on AWS Fargate to handle this task for you. Fargate provisions and manages the compute that runs your EKS pods, but keep in mind that there is an additional fee for this service. EKS can also run its worker nodes on Spot Instances, which can greatly reduce your overall cost.
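As a sketch of how this looks with eksctl, the official CLI for EKS (cluster names, region, instance type, and node count below are illustrative):

```shell
# Create an EKS cluster with a managed node group of EC2 instances
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 3

# Or create a cluster where Fargate runs the pods,
# so there are no worker nodes for you to manage (at an extra per-pod cost)
eksctl create cluster --name demo-fargate --region us-east-1 --fargate
```

In the first variant you still own the nodes (patching, scaling, securing); in the second, AWS does, which is the trade-off discussed above.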

Summing it up

In this article, we’ve taken a look at containerization and considered various ways containers can be utilized within your business infrastructure. While Docker brought everyone’s attention back to containers, Kubernetes introduced the container orchestration that has driven the worldwide adoption we’re seeing today.

We also looked at how you can run containers in your AWS Cloud environment (from running Docker or Kubernetes on your EC2 instances to utilizing ECS or EKS) and gave you a glimpse of what AWS EKS offers.

In Part 2 of this series, we’ll dive deeper into AWS EKS and cover its advantages and downsides, use cases, upcoming features, and pricing. Plus, we’ll show you various ways to deploy EKS so that you can decide for yourself which option suits you best.
