In this article, I’ll walk you through what Kubernetes is and why it’s so important to today’s tech world. We’ll also take a look at some examples of how companies are using Kubernetes to become more efficient, effective, and nimble in the face of ever-changing business demands. If you’re familiar with Kubernetes already, feel free to skip ahead! Otherwise, keep reading—you’ll be glad you did!
Google open-sourced Kubernetes in 2014, and version 1.0 shipped in 2015. It grew out of Google's long experience running containerized workloads internally on its Borg cluster manager, and it standardizes how applications are deployed, managed, and scaled. The idea behind containers is that each one runs in its own isolated space on a shared operating system kernel. Because a container packages software together with its dependencies, it behaves the same wherever it runs, which is why developers like containers so much. So what does all of that mean in practice? Containerization lets you deploy and manage an application anywhere in the world. This is great for redundancy: if one set of servers goes down, your website or application is already running on servers somewhere else. With a single command, you can also scale your services up or down depending on demand. That means you aren't wasting money on idle capacity when traffic is low, but you still have enough headroom for when demand peaks, keeping customers happy at all times without overpaying when it's quiet.
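To make the scaling idea concrete, here is a minimal sketch of a Deployment manifest. The names `webapp` and the `nginx` image are placeholders chosen for illustration, not anything from a real project:

```yaml
# Hypothetical Deployment: "webapp" and its image are placeholder names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3          # desired number of identical Pods
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:1.25   # any container image works here
        ports:
        - containerPort: 80
```

The "one command" scaling mentioned above is then literally `kubectl scale deployment webapp --replicas=10`, and Kubernetes converges the running Pod count to the new number.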
Installing Kubernetes on AWS
Now that you’ve got an overview of what it can do, let’s get our hands dirty and install Kubernetes on AWS. First, make sure your AWS account has an IAM role that your EC2 instances can assume. The easiest way to set one up is through the AWS Management Console: open the IAM console, choose Roles in the navigation pane, and click Create role. Select EC2 as the trusted entity, then attach the AmazonEC2FullAccess managed policy. Finally, give the role a descriptive name; for example, MyKubernetesRole. This grants full access to create, read, update, and delete any EC2 instance in the account.
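If you prefer the AWS CLI, the same role can be created from a trust policy document. This is a sketch under the assumption that the role is called MyKubernetesRole, as above; the JSON below is the standard trust policy that lets EC2 assume a role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Save it as trust.json, then run `aws iam create-role --role-name MyKubernetesRole --assume-role-policy-document file://trust.json` followed by `aws iam attach-role-policy --role-name MyKubernetesRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess`.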
Creating an Ingress Controller
Now that the cluster is up, we need a way to route outside traffic into it, and this is where Istio kicks in. If you look at Istio's demo manifest (istio-demo.yaml), you will find its ingress gateway deployment: Pods running the Envoy proxy, injected as sidecar containers, which terminate incoming traffic and forward it to services inside the mesh. The comments in that file note which Envoy version the gateway is pinned to and explain that spelling out each ingress hostname is optional, since the ingress hosts are resolvable via DNS and access is granted through a ServiceAccount-bound RoleBinding defined later in the manifest.
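For comparison, a plain (non-Istio) Ingress object looks like the following. The hostname and backend service name here are hypothetical placeholders:

```yaml
# Hypothetical Ingress: the host and service name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80
```

Whichever ingress controller you run (Istio's gateway, NGINX, Traefik) watches for objects like this and configures its proxy to route matching requests to the named Service.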
Before you go and build your own PaaS (for example, with some Docker magic), take a look at the public cloud container offerings that already exist: Google Cloud Platform, Amazon’s EC2 Container Service (ECS), and Microsoft Azure. If they don’t meet your needs, by all means build your own using an open source framework like Kubernetes or Apache Mesos—but first make sure the effort is justified. Unless you have unusual requirements, chances are someone else has already solved your problem. Consider working with these providers first to see whether their infrastructure can handle what you need to accomplish.
Adding Pods to our Cluster
Now that we have our cluster set up, let’s get back to work on our pet project, Kubernest. We can add our first Pods by running kubectl apply -f petset/pods.yaml. It may take a few minutes for all of the Pods to start up. To verify everything worked, run kubectl get pods; you should see one Pod listed, called petset. Next, let’s expose petset as a Service; once the Service manifest is applied you should see output like … service pet-set-api created … confirming it exists. Lastly, let’s install an Ingress controller into our cluster. An Ingress controller is essentially a reverse proxy that routes incoming traffic to the right backend Services; common examples are Traefik, Gloo, and Kong. Here, I’ll install Traefik, again simply referencing its configuration with […] and substituting in our own.
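As a rough sketch of what petset/pods.yaml might contain (the Pod name, labels, and image below are hypothetical, not taken from the project):

```yaml
# Hypothetical sketch of petset/pods.yaml; image and port are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: petset
  labels:
    app: petset
spec:
  containers:
  - name: petset-api
    image: example/petset-api:latest  # placeholder image
    ports:
    - containerPort: 8080
```

Assuming a Pod shaped like this, `kubectl expose pod petset --name=pet-set-api --port=80 --target-port=8080` would create the Service and print the `service/pet-set-api created` confirmation described above.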
The last thing I want to do before shutting down our cluster is remove my namespace with $ kubectl delete namespace puppies . And remember, you can always open a shell on your K8s cluster’s VM via minikube ssh ; exit closes the SSH session.
In today’s world of cloud infrastructure, there are more ways than ever to deploy your applications—or have them deployed for you. Each deployment method has its own strengths and weaknesses. PaaS platforms such as Heroku, Google App Engine, or Cloud Foundry offer an application-centric approach with built-in tooling to help you manage deployments. If those offerings aren’t sufficient for your needs, you can run containers on IaaS providers like AWS or GCP (Google Cloud Platform). Alternatively, if you want full control over how and where your apps run, turn to plain virtual machines such as DigitalOcean Droplets or Microsoft Azure Virtual Machines.