TKG on AWS: Introduction
Tanzu Kubernetes Grid is a container orchestration platform from VMware. You can deploy Tanzu Kubernetes Grid across software-defined data center (SDDC)/on-premises and public cloud environments, including vSphere, AWS, and Microsoft Azure.
With the Tanzu portfolio of products, VMware is pursuing a strong strategy around cloud-native modern applications, aimed at making developers' lives simpler.
It competes strongly with other container orchestration offerings such as Red Hat OpenShift, Rancher, AKS, EKS, GKE, and IKS.
Tanzu Kubernetes Grid provides an enterprise-ready Kubernetes platform, supported by VMware, that allows you to run container-based application workloads. TKG provides the services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.
Tanzu Kubernetes Grid Key Elements
TKG on AWS has the following key components:
1: Preparation Node
The preparation node is used to create, connect to, and manage TKG clusters, and all the required dependency tools are installed on it. In this series it is an Ubuntu Linux VM (Linux, Windows, or macOS will work, based on your choice) and requires the Tanzu CLI, kubectl, and Docker running on it to deploy the TKG management cluster and the Tanzu Kubernetes clusters.
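Before going further, it is worth sanity-checking the tooling; a minimal sketch, assuming the Tanzu CLI, kubectl, and Docker have already been installed per VMware's documentation:

```bash
# Quick sanity check on the preparation node (Ubuntu shown here).
docker info                 # Docker daemon must be running
kubectl version --client    # kubectl is installed and on the PATH
tanzu version               # Tanzu CLI is installed and on the PATH
```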
This node requires direct or routable connectivity to the TKG management and Kubernetes clusters for deployment. For example, if you are deploying the TKG clusters (management and Kubernetes clusters) into an existing VPC (virtual private cloud), make sure that this node is also part of the same VPC; otherwise, you need to set up VPC peering between the two VPCs.
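If the preparation node sits in a different VPC, peering can be set up with the AWS CLI; the sketch below uses placeholder VPC and peering connection IDs, and you still need to configure routes and security group rules on both sides:

```bash
# Request peering between the prep node's VPC and the TKG VPC
# (placeholder IDs; replace with your own).
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0prepnode0000000 \
  --peer-vpc-id vpc-0tkgclusters0000

# Accept the request (same account and region assumed here).
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0example0000000

# Then add routes for the peer CIDR to both VPCs' route tables
# and open the required ports in the security groups.
```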
2: Management Cluster
A management cluster is the first key component that you deploy when you create a Tanzu Kubernetes Grid instance. The management cluster is a Kubernetes cluster that serves as the primary management and operational control plane for Tanzu Kubernetes Grid. This is where Cluster API runs to create the Tanzu Kubernetes clusters in which your application workloads run, and where you configure the shared and in-cluster services that those clusters use. It manages the lifecycle of TKG workload clusters: creating, scaling, upgrading, and deleting them.
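To make this concrete, here is a minimal sketch of bootstrapping a management cluster on AWS from the preparation node; the credential values, region, and file name are illustrative placeholders:

```bash
# AWS credentials for the target account (placeholders).
export AWS_ACCESS_KEY_ID="<your-access-key>"
export AWS_SECRET_ACCESS_KEY="<your-secret-key>"
export AWS_REGION="us-east-1"

# Option 1: browser-based installer, which builds a cluster
# configuration file interactively:
tanzu management-cluster create --ui

# Option 2: non-interactive creation from a saved config file:
tanzu management-cluster create --file mgmt-config.yaml

# Verify the management cluster and its context:
tanzu management-cluster get
```

The --ui flow is the easier starting point, since the installer saves the configuration it generates, which you can then reuse for repeatable deployments.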
3: Tanzu Kubernetes Cluster (Shared Services) - Optional
After you have deployed a management cluster, you use the Tanzu CLI to deploy Tanzu Kubernetes clusters and manage their lifecycle. A shared services cluster is a conformant Kubernetes cluster reserved for deploying shared services that are used across all TKG clusters, e.g., a Harbor container registry. This cluster is deployed after the TKG management cluster is up and running.
The TKG Kubernetes cluster for shared services is an optional component. Henceforth, it will be referred to as the TKG shared services cluster.
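As an illustration, creating and tagging a shared services cluster looks roughly like this; the cluster and file names are hypothetical, and the label is the one VMware's documentation uses to mark the shared services cluster (check the docs for your TKG version):

```bash
# Create the cluster for shared services (hypothetical names).
tanzu cluster create tkg-shared-svc --file shared-svc-config.yaml

# From the management cluster context, label it so TKG treats it
# as the shared services cluster.
kubectl label cluster.cluster.x-k8s.io/tkg-shared-svc \
  cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
```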
4: Tanzu Kubernetes Cluster (Application Workload)
Workload clusters are likewise deployed and lifecycle-managed with the Tanzu CLI once the management cluster is available. There can be more than one workload cluster, each potentially running a different Kubernetes version. These clusters run your cloud-native (container-based) applications.
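A minimal sketch of creating and accessing a workload cluster; the cluster and file names are illustrative:

```bash
# Create a workload cluster from a config file (names illustrative).
tanzu cluster create tkg-workload-01 --file workload-config.yaml

# List the clusters managed by the management cluster:
tanzu cluster list

# Pull admin credentials and switch kubectl to the new cluster:
tanzu cluster kubeconfig get tkg-workload-01 --admin
kubectl config use-context tkg-workload-01-admin@tkg-workload-01
kubectl get nodes
```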
From a network connectivity standpoint, each TKG cluster (workload and shared services) must have a routable connection to the TKG management cluster.
You can add functionality to Tanzu Kubernetes clusters by installing extensions such as Contour (ingress control), ExternalDNS (service discovery), Fluent Bit (log forwarding), Harbor (container registry), and Prometheus/Grafana (monitoring).
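In recent TKG releases (1.4 and later) these extensions ship as Carvel packages installed through the Tanzu CLI; a rough sketch, with the caveat that package names, version strings, and flags vary by release:

```bash
# Discover the packages available in your environment:
tanzu package available list -A
tanzu package available list contour.tanzu.vmware.com -A

# Illustrative install; substitute the version reported for your
# release and a values file appropriate to your setup.
tanzu package install contour \
  --package-name contour.tanzu.vmware.com \
  --version 1.17.1+vmware.1-tkg.1 \
  --namespace tanzu-packages
```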
In the next article, I will walk through setting up the preparation node on AWS to deploy the TKG clusters.