
Azure Stack HCI with Kubernetes – part 2

Introduction to Virtual Machines and Containers

Back in 2016, Microsoft released a new type of OS called Nano Server along with the Windows Containers feature. Kubernetes had just been released, and Docker had already been working on containers for some time. Back then it was all jokes about shipping containers and garbage containers, but container usage has grown steadily since and has been adopted by all the big vendors at scale. Containers have become yet another game changer in today’s IT infrastructures and application development.

Today, all the big cloud providers, like Microsoft with Azure, Amazon with AWS, and Google with GCP, offer container services based on Docker and Kubernetes. If you want to run containers yourself in your own datacenter, you can use Docker, Kubernetes, or Windows Containers with Docker, on Windows or Linux.

On the other hand, virtual machines are commonplace these days and will stay with us for a long time, because not everything can be containerized or makes sense in a containerized form. Wouldn’t it be great, then, if you could share your infrastructure to run Windows and Linux VMs alongside Windows and Linux containers?

Microsoft released Azure Stack HCI and AKS on Azure Stack HCI, products that give you the ability to run containers and VMs on your own datacenter hardware, managed and deployed through the Azure Portal and Windows Admin Center.

In this blog we’ll talk a little about Kubernetes and how it works, but also about the possibilities we have with Azure, Azure Arc, and Azure Stack HCI as a virtualization and storage platform to run VMs, plus containers managed by Kubernetes.

Virtual Machines

With a virtual machine the hardware is virtualized, and the operating system runs on top of virtual hardware instead of physical hardware. Inside the OS you can do practically everything you can on a physical computer. The VM runs on a virtualization host along with multiple other VMs.

On a decent virtualization platform, we want to make sure that VMs are highly available: in the case of a host failure, the VM is quickly moved to another system and booted, and in a matter of seconds the VM is back with access and functionality restored. For this to work we need shared storage. This can be provided in various ways, such as a traditional SAN with Fibre Channel or iSCSI access, or hyper-converged storage like Storage Spaces Direct. In addition, we need a cluster service to make sure that when a node fails, the other nodes detect it and take action. Within Windows, the Failover Clustering feature takes care of this.

Containers

When we look at a container there is some overlap. A container is an isolated, lightweight instance for running an application on a host operating system; that host can be a physical machine or a virtual machine. Containers are built on top of the host operating system’s kernel and contain only the application plus some lightweight operating system APIs and services that run in user mode. If you have a Windows VM with Docker, you can deploy Windows containers; on a Linux VM, you can deploy Linux containers. Because a container shares the kernel, you cannot mix Windows and Linux containers on the same underlying OS.
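
To make that kernel-sharing rule concrete, here is a minimal sketch using the Docker SDK for Python (our own assumption; the blog itself uses no code). It checks the host’s OS type before deciding which image to run; the image names are just examples:

```python
# Sketch: verify the container host's OS type before deploying an image.
# Assumes the Docker SDK for Python ("pip install docker") and a local
# Docker engine; the image names below are only examples.
import docker

client = docker.from_env()
host_os = client.info()["OSType"]  # "linux" or "windows"

# Containers share the host kernel, so the image OS must match the host OS.
if host_os == "windows":
    client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:ltsc2019",
        "cmd /c echo hello", remove=True)
else:
    client.containers.run("alpine", "echo hello", remove=True)
```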

For containers the same applies as for VMs: we want the application running inside to be highly available in case something fails. This is where VMs and containers differ. With VMs we have the failover cluster manager to manage and detect failures and act accordingly. With containers we don’t use the failover cluster manager, because deploying, rebuilding, and so on are handled by another management tool. This is where container orchestrators such as Kubernetes come into play.

Kubernetes and Failover Clusters

With VMs and containers the same rule applies: treat them as cattle, not as pets, meaning that you don’t want to have too much dependency on any single one of them.

VMs are bigger in size and contain persistent data. Destroying one or spinning up a new one takes more time, and you could potentially lose data; that’s why they are stored on shared storage. In case of a failure, the failover cluster manager boots the VM on another host, which can also access that shared storage, and it’s up and running again.

Containers are very small and, in most cases, don’t contain any data, so it is easier and faster to simply deploy new ones. Container orchestration platforms like Kubernetes take care of this: they detect when containers are down, spin up new ones on other hosts, and make sure they’re accessible.
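
As a minimal sketch of that self-healing behavior, here is what declaring a desired state looks like with the official Kubernetes Python client (assuming a reachable cluster and a kubeconfig; the names and the nginx image are placeholders):

```python
# Sketch: declare a desired state of 3 replicas; Kubernetes replaces any
# pod that dies to keep the count at 3. Assumes the official Python
# client ("pip install kubernetes") and a reachable cluster; the
# "demo-web" name and nginx image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "demo-web"}},
        "template": {
            "metadata": {"labels": {"app": "demo-web"}},
            "spec": {"containers": [
                {"name": "web", "image": "nginx:1.21"}
            ]},
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```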

Kubernetes

Kubernetes manages the deployment of resources (not only containers). Kubernetes has several objects and building blocks it uses to deploy, manage, and publish resources, which we will dive into in another blog. For now, it is important to know that Kubernetes consists of a management cluster (the control plane) with master nodes, plus additional worker nodes to run workloads.
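
As a quick illustration of that split, the sketch below (again the Python client against a kubeconfig, our assumption) lists the nodes and shows which carry a control-plane role label and which are plain workers:

```python
# Sketch: list the cluster's nodes and show which are control-plane
# (master) nodes and which are workers, based on the conventional
# node-role labels. Assumes the official Python client and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    roles = [label.split("/", 1)[1] for label in node.metadata.labels
             if label.startswith("node-role.kubernetes.io/")]
    print(node.metadata.name, roles or ["worker"],
          node.status.node_info.os_image)
```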

Master Nodes

A production Kubernetes cluster requires a minimum of three master nodes. The master nodes manage the deployment of the various components required to deploy containers and communicate with them. They also provide an API layer for the workers to communicate with the masters; the same API is used to deploy workloads. The master nodes can run on physical or virtual machines, but only on a Linux-based OS.
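
Everything, from kubectl to the sketches in this post, goes through that API layer. Even a trivial version query is an API call answered by the masters, as in this small Python-client sketch (kubeconfig assumed):

```python
# Sketch: a minimal round trip through the API server; the version
# endpoint is served by the control plane. Assumes a kubeconfig.
from kubernetes import client, config

config.load_kube_config()
version = client.VersionApi().get_code()
print("Control plane reports Kubernetes", version.git_version)
```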

Worker Nodes

The worker nodes are used to run the container workloads. Worker nodes are also known as minions…

Let’s hope these minions behave better than the yellow dudes and don’t turn it all into chaos…

The worker nodes can be either Linux or Windows. The Windows option gives us a lot of flexibility with Azure Stack HCI, but before we go down that path, let’s dive a little deeper into the Kubernetes on Windows requirements first.

Worker Nodes on Windows

To add Windows workers to a Kubernetes cluster, the Windows worker must run at least Windows Server 2019 or Azure Stack HCI OS, with Kubernetes version 1.17 or above. In addition, the Windows Containers feature and Docker are required. Other container engines are available, but Docker is widely used and has the best support for Windows, so we recommend using Docker. Besides these requirements, the worker nodes also need some additional pieces, such as networking and storage, which we will discuss in the next parts of this blog series. Once the requirements are set up, we have a working Windows worker capable of running containers deployed and managed by Kubernetes.
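
Once Windows workers have joined, they carry the well-known kubernetes.io/os=windows node label, so you can verify them with a small sketch like this (Python client and kubeconfig assumed):

```python
# Sketch: list the Windows workers via the well-known OS label and show
# their reported OS image and kubelet version. Assumes a kubeconfig.
from kubernetes import client, config

config.load_kube_config()
nodes = client.CoreV1Api().list_node(
    label_selector="kubernetes.io/os=windows")
for node in nodes.items:
    info = node.status.node_info
    print(node.metadata.name, info.os_image, info.kubelet_version)
```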

Windows and Linux Containers 

As described earlier in this blog, you cannot mix container OSes on a single host, but that is only true for Linux workers. A Linux worker node cannot run Windows containers, yet a Windows worker can run both Windows and Linux containers thanks to WSL (Windows Subsystem for Linux). With a Kubernetes cluster and Windows worker nodes, or rather mixed worker nodes, you can run both Linux and Windows containers, and that is a great opportunity!
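
In such a mixed cluster, you steer each container to a node with the matching OS via the kubernetes.io/os label. A hedged sketch with the Python client (the pod name and IIS image are placeholders):

```python
# Sketch: pin a Windows container to Windows nodes with a nodeSelector
# on the well-known "kubernetes.io/os" label. Assumes a kubeconfig;
# the name and image are placeholders.
from kubernetes import client, config

config.load_kube_config()
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "iis-demo"},
    "spec": {
        "nodeSelector": {"kubernetes.io/os": "windows"},
        "containers": [{
            "name": "iis",
            "image": "mcr.microsoft.com/windows/servercore/iis"
                     ":windowsservercore-ltsc2019",
        }],
    },
}
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```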

Azure Stack HCI & Azure Kubernetes Service (AKS)

Azure Stack HCI is Microsoft’s hyper-converged infrastructure offering and the basis for a software-defined datacenter. HCI brings together highly virtualized compute, storage, and networking on industry-standard x86 servers and components.

With Azure Stack HCI we can create a robust platform to host virtual machines, and those virtual machines in turn form the foundation for a robust container platform. Because Azure Stack HCI makes use of clustering, it is also well suited to host the Kubernetes cluster itself, making sure that the VMs hosting the Kubernetes cluster are spread across physical machines to reduce downtime.

Microsoft has released Azure Kubernetes Service on Azure Stack HCI to save you the hassle of setting up Kubernetes yourself. Just as in Microsoft Azure, with AKS you get your own Kubernetes clusters deployed and managed for you, but in your own datacenter. This brings advantages such as lower latency and data locality.

Getting started with AKS on Azure Stack HCI

Read more about AKS on Azure Stack HCI on the Microsoft Docs page here.
 
To get started with the download, you can head over to the preview registration page here.

Microsoft released a great blog post on how Kubernetes is intertwined with Azure Stack HCI and the storage components: https://techcommunity.microsoft.com/t5/azure-stack-blog/. It explains the basics and how to get started using Windows Admin Center.
 
Do you want a consultation on how AKS on HCI matches your challenges? Reach out!

Azure Stack HCI with Kubernetes

The game of infrastructure abstraction is moving fast. If you don’t keep up, you could end up in a world where people point their fingers at you and whisper “legacy”.

Looking back a decade, hardware evolved quickly, and virtualization technologies came to the rescue, allowing higher densities of workloads on one or more physical servers in the form of virtual machines. Applications running in those VMs benefit from high availability: if a physical server fails, another server takes over the virtual machine. Hypervisor technology creates hardware abstraction. However, the virtual machine is still bound to the underlying hypervisor and most probably to the hardware the hypervisor is using. This means you can move virtual machines between the same type of hypervisor and hardware, but moving, for example, a VMware VM to Hyper-V is not possible without conversion. The same goes for moving to or between public cloud providers: technically there is no great challenge, but the portability is not good enough. In other words, moving from one platform to another is not a one-click action.

Containers

Being tied to a specific platform of choice is not very convenient, but it was accepted for many years. Applications ran in virtual machines, and those virtual machines ran on a hypervisor platform.

Containers form the new wave of innovation and application modernization. Containers run in virtual machines called ‘container hosts’. While running in virtual machines, the container platform creates abstraction from the underlying infrastructure (the hypervisor).

This means you can run one container host on Hyper-V and another on VMware, and deploy the same container to both. Using containers, organizations are no longer tied to specific platforms and can be platform agnostic.

Managing containers is a different ball game compared to managing virtual machines. A virtual machine typically runs one application, and the VM exists as long as the application does. In the container landscape, an application can consist of multiple containers that are created when needed and destroyed when no longer used. This requires a different type of toolset, and Kubernetes is the Swiss Army knife that has all the tools built in.

Kubernetes

Kubernetes is a container orchestration platform, but it has many more capabilities. If you’re seeking agnostic infrastructure, you can use Kubernetes to abstract the infrastructure away from your applications. The container hosts mentioned above are enrolled in Kubernetes and become ‘worker nodes’ where containers are deployed. Kubernetes then orchestrates your container landscape: it notices when more containers are needed and when containers can be removed due to inactivity. Because the Kubernetes nodes can run anywhere you’d like, and Kubernetes manages where containers are deployed, your application becomes highly portable and abstracted from any platform.

Kubernetes itself also needs to run somewhere, and it too is distributed across multiple virtual machines, which is referred to as a ‘Kubernetes management cluster’.
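
To give a feel for that orchestration, here is a hedged sketch of scaling an application with the official Kubernetes Python client (the kubeconfig and the ‘demo-web’ deployment name are assumptions on our part):

```python
# Sketch: because Kubernetes owns the desired state, scaling out is a
# one-line change to the replica count; the scheduler decides which
# worker nodes run the extra containers. Assumes a kubeconfig and an
# existing deployment named "demo-web" (a placeholder).
from kubernetes import client, config

config.load_kube_config()
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="demo-web", namespace="default",
    body={"spec": {"replicas": 5}})
```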

In part 2 of this blog series we’ll go into full detail on how Kubernetes works.

Kubernetes cluster in the cloud

The major cloud providers have not ignored the container era and offer customers Kubernetes clusters as a service: Amazon’s Elastic Kubernetes Service (EKS), Microsoft’s Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). The Kubernetes cluster itself is abstracted into a PaaS service by the cloud providers, but you can run it anywhere you’d like. The same goes for the worker nodes: you could make use of AKS and run your Kubernetes worker nodes in AWS, Google Cloud, and Azure Stack HCI simultaneously. Now that’s a true hybrid cloud.

In this blog series we explore the relationship between ‘traditional’ infrastructure, modern hyper-converged infrastructure, and Kubernetes, from an IT pro’s point of view.

Read Azure Stack HCI with Kubernetes – part 2 here!