TKG on AWS: Part 2 | Installation of TKG Management Cluster on AWS using Tanzu CLI
In this article, we are going to see how to install a TKG management cluster on AWS using the Tanzu CLI.
TKG Management Cluster:
A management cluster is the first key component that you deploy when you create a Tanzu Kubernetes Grid instance. The management cluster is a Kubernetes cluster that serves as the primary management and operational control plane for Tanzu Kubernetes Grid. This is where Cluster API runs to create the Tanzu Kubernetes clusters in which your application workloads run, and where you configure the shared and in-cluster services that those clusters use. The management cluster manages the lifecycle of TKG workload clusters: creating, scaling, upgrading, and deleting them.
Prerequisites:
1: Preparation node with the required dependencies (kubectl, Tanzu CLI, AWS CLI, jq, Docker, etc.)
Follow the link below to deploy and get ready with the preparation node.
2: You have the access key and secret access key for an active AWS account
3: Configure AWS Credentials
To deploy your management cluster on AWS, you have several options for configuring the AWS account used to access EC2.
- You can use a credentials profile, which you can store in a shared credentials file such as ~/.aws/credentials, or a shared config file such as ~/.aws/config. You can manage profiles by using the aws configure command.
- You can specify your AWS account credentials statically in local environment variables.
To use local environment variables on your preparation node, set the following environment variables for your AWS account:
export AWS_ACCESS_KEY_ID=aws_access_key, where aws_access_key is your AWS access key.
export AWS_SECRET_ACCESS_KEY=aws_access_key_secret, where aws_access_key_secret is your AWS access key secret.
export AWS_REGION=aws_region, where aws_region is the AWS region in which you intend to deploy the cluster. For example, us-east-2.
- As an alternative to using local environment variables, you can store AWS credentials in a shared or local credentials file. The credential files and profiles are applied after local environment variables as part of the AWS default credential provider chain.
export AWS_SHARED_CREDENTIAL_FILE=path_to_credentials_file, where path_to_credentials_file is the location and name of the credentials file that contains your AWS access key information. If you do not define this environment variable, the default location and filename is $HOME/.aws/credentials.
export AWS_PROFILE=profile_name, where profile_name is the profile name that contains the AWS access key you want to use. If you do not specify a value for this variable, the profile name default is used.
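Before triggering any deployment commands, it can help to confirm that the variables above are actually exported. The following bash sketch is our own convenience helper (not part of the Tanzu tooling); it reports any of the three required variables that are missing:

```shell
# check_aws_env: report which of the AWS credential variables described
# above are not yet exported. Returns non-zero if any are missing.
# Convenience sketch only; not part of the official procedure.
check_aws_env() {
  local missing=0 var
  for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION; do
    if [ -z "${!var}" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```

Run check_aws_env on the preparation node before deploying; empty output means all three variables are set.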
4: Register an SSH Public Key with Your AWS Account
Create an SSH key pair for the AWS region where you plan to deploy Tanzu Kubernetes Grid clusters.
If you do not already have an SSH key pair for the account and region you are using to deploy the management cluster, create one by performing the steps below:
aws ec2 create-key-pair --key-name somTKG --region us-east-2 --output json | jq .KeyMaterial -r > somTKG.pem
You can also create a key pair from the AWS console.
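The create-key-pair command above writes the private key to somTKG.pem, and ssh refuses keys with open file permissions, so it is worth tightening them right away. A small guard-wrapped sketch (the secure_key function name is our own addition):

```shell
# secure_key: restrict a saved private key file (for example somTKG.pem,
# produced by the create-key-pair command above) to owner-read-only so
# ssh will accept it. Hypothetical convenience helper.
secure_key() {
  local key="$1"
  if [ -f "$key" ]; then
    chmod 400 "$key" && echo "secured $key"
  else
    echo "key file $key not found" >&2
    return 1
  fi
}
```

Usage: secure_key somTKG.pem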
5: Tag AWS Resources if You Are Using an Existing VPC
If both of the following are true, you must add the kubernetes.io/cluster/YOUR-CLUSTER-NAME=shared tag to the public subnet or subnets that you intend to use for the management cluster:
- You deploy the management cluster to an existing VPC that was not created by Tanzu Kubernetes Grid.
- You want to create services of type LoadBalancer in the management cluster.
aws ec2 create-tags --resources subnet-0934baa62aeab842f subnet-0c7fdc9bab96b6ab8 subnet-07e4beeca76dbd4d1 --tags Key=kubernetes.io/cluster/sommc,Value=shared
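The tag key follows a fixed pattern: kubernetes.io/cluster/<CLUSTER-NAME> with value shared. To make that format explicit (and reusable if your cluster name differs from sommc), here is a tiny hypothetical helper that builds the Key=...,Value=... argument used by the create-tags command above:

```shell
# cluster_shared_tag: build the Key=...,Value=shared tag argument for
# `aws ec2 create-tags`, given a cluster name. Hypothetical helper
# shown only to make the required tag format explicit.
cluster_shared_tag() {
  echo "Key=kubernetes.io/cluster/$1,Value=shared"
}
```

Usage: aws ec2 create-tags --resources <subnet-ids> --tags "$(cluster_shared_tag sommc)"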
6: Required Permissions for the AWS Account
Your AWS account must have at least the following permissions:
- Required IAM Resources: Tanzu Kubernetes Grid creates these resources when you deploy a management cluster to your AWS account for the first time.
- Required Permissions for tanzu management-cluster create: Tanzu Kubernetes Grid uses these permissions when you run tanzu management-cluster create.
Follow the link below for more details.
When you deploy your first management cluster to AWS, you instruct Tanzu Kubernetes Grid to create a CloudFormation stack, tkg-cloud-vmware-com, in your AWS account.
The command below creates the CloudFormation stack in AWS, which defines the identity and access management (IAM) resources that Tanzu Kubernetes Grid uses to deploy and run clusters on AWS.
tanzu management-cluster permissions aws set
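After running the permissions command, you can optionally confirm that the stack was created. The wrapper function below is our own sketch around a standard AWS CLI call; it assumes the AWS CLI is configured for the same account and region:

```shell
# check_tkg_stack: query the status of the CloudFormation stack that
# `tanzu management-cluster permissions aws set` creates. The function
# name is our own; it simply wraps a standard AWS CLI call.
check_tkg_stack() {
  aws cloudformation describe-stacks \
    --stack-name "${1:-tkg-cloud-vmware-com}" \
    --query 'Stacks[0].StackStatus' \
    --output text
}
```

A healthy result is CREATE_COMPLETE; an error means the stack does not exist in the configured region.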
Procedure:
1: Deploy Management Clusters from a Configuration File
A sample configuration file "cluster-config-file.yaml" with the minimum required configuration is shown below:
#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------
CLUSTER_NAME: sommc
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: aws
ENABLE_CEIP_PARTICIPATION: true
# TMC_REGISTRATION_URL:
ENABLE_AUDIT_LOGGING: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
#! ---------------------------------------------------------------------
#! Image repository configuration
#! ---------------------------------------------------------------------
# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""
#! ---------------------------------------------------------------------
#! Proxy configuration
#! ---------------------------------------------------------------------
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""
#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------
CONTROL_PLANE_MACHINE_TYPE: "t3.large"
NODE_MACHINE_TYPE: "m5.large"
# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
# CONTROL_PLANE_MACHINE_TYPE: t3.large
# NODE_MACHINE_TYPE: m5.large
# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""
#! ---------------------------------------------------------------------
#! AWS configuration
#! ---------------------------------------------------------------------
AWS_REGION: us-east-2
AWS_NODE_AZ: "us-east-2a"
AWS_ACCESS_KEY_ID: AKIASVvxxxxxxxxxxRXKZD
AWS_SECRET_ACCESS_KEY: zUb/J/dRxxxxxxxxxxxxxxxxxxVWKzSXvb
AWS_SSH_KEY_NAME: somTKG
BASTION_HOST_ENABLED: true
# AWS_NODE_AZ_1: ""
# AWS_NODE_AZ_2: ""
# AWS_VPC_ID: ""
# AWS_PRIVATE_SUBNET_ID: ""
# AWS_PUBLIC_SUBNET_ID: ""
# AWS_PUBLIC_SUBNET_ID_1: ""
# AWS_PRIVATE_SUBNET_ID_1: ""
# AWS_PUBLIC_SUBNET_ID_2: ""
# AWS_PRIVATE_SUBNET_ID_2: ""
# AWS_VPC_CIDR: 10.0.0.0/16
# AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
# AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
# AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24
# AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24
# AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24
# AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24
#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------
ENABLE_MHC: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
#! ---------------------------------------------------------------------
#! Identity management configuration
#! ---------------------------------------------------------------------
# IDENTITY_MANAGEMENT_TYPE: "oidc"
#! Settings for OIDC
# CERT_DURATION: 2160h
# CERT_RENEW_BEFORE: 360h
# OIDC_IDENTITY_PROVIDER_CLIENT_ID:
# OIDC_IDENTITY_PROVIDER_CLIENT_SECRET:
# OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups
# OIDC_IDENTITY_PROVIDER_ISSUER_URL:
# OIDC_IDENTITY_PROVIDER_SCOPES: email
# OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
#! The following two variables are used to configure Pinniped JWTAuthenticator for workload clusters
# SUPERVISOR_ISSUER_URL:
# SUPERVISOR_ISSUER_CA_BUNDLE_DATA:
#! Settings for LDAP
# LDAP_BIND_DN:
# LDAP_BIND_PASSWORD:
# LDAP_HOST:
# LDAP_USER_SEARCH_BASE_DN:
# LDAP_USER_SEARCH_FILTER:
# LDAP_USER_SEARCH_USERNAME: userPrincipalName
# LDAP_USER_SEARCH_ID_ATTRIBUTE: DN
# LDAP_USER_SEARCH_EMAIL_ATTRIBUTE: DN
# LDAP_USER_SEARCH_NAME_ATTRIBUTE:
# LDAP_GROUP_SEARCH_BASE_DN:
# LDAP_GROUP_SEARCH_FILTER:
# LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
# LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE:
# LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
# LDAP_ROOT_CA_DATA_B64:
#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
# ANTREA_NO_SNAT: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: false
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: false
=============================================================
CLUSTER_NAME: sommc, where 'sommc' is the management cluster name.
CLUSTER_PLAN: dev, where 'dev' is the cluster plan. When you specify CLUSTER_PLAN as 'dev', it creates one control plane node and one worker node by default. If you specify CLUSTER_PLAN as 'prod', it creates three control plane nodes and three worker nodes by default.
INFRASTRUCTURE_PROVIDER: aws, where 'aws' is the cloud provider name.
CONTROL_PLANE_MACHINE_TYPE: "t3.large", where "t3.large" is the Amazon EC2 instance type of the control plane nodes.
NODE_MACHINE_TYPE: "m5.large", where "m5.large" is the Amazon EC2 instance type of the worker nodes.
AWS_REGION: us-east-2, where 'us-east-2' is the AWS region.
AWS_NODE_AZ: "us-east-2a", where 'us-east-2a' is the AWS Availability Zone.
AWS_ACCESS_KEY_ID: AKIASVxxxxxWD5CRXKZD, where "AKIASVxxxxxWD5CRXKZD" is the AWS access key ID.
AWS_SECRET_ACCESS_KEY: zUb/J/dRZxxxxxxxxxxxxxx0uZVWKzSXvb, where "zUb/J/dRZxxxxxxxxxxxxxx0uZVWKzSXvb" is the AWS secret access key.
AWS_SSH_KEY_NAME: somTKG, where "somTKG" is the AWS SSH key pair name.
BASTION_HOST_ENABLED: true, where "true" creates a bastion node for you.
Note: By default, clusters that you deploy with the Tanzu CLI provide in-cluster container networking with the Antrea container network interface (CNI).
If you want to create the management cluster on an existing VPC with the production cluster plan, fill in the parameters below accordingly in the configuration YAML file:
#! ---------------------------------------------------------------------
#! AWS configuration
#! ---------------------------------------------------------------------
AWS_REGION: us-east-2
AWS_NODE_AZ: "us-east-2a"
AWS_NODE_AZ_1: "us-east-2b"
AWS_NODE_AZ_2: "us-east-2c"
AWS_ACCESS_KEY_ID: AKIASVxxxxxxRXKZD
AWS_SECRET_ACCESS_KEY: zUb/J/dRZSVxxxxxxxxxxxVWKzSXvb
AWS_SSH_KEY_NAME: somTKG
AWS_PRIVATE_SUBNET_ID: subnet-0476695e21cdef749
AWS_PRIVATE_SUBNET_ID_1: subnet-04508b72ed78d56c2
AWS_PRIVATE_SUBNET_ID_2: subnet-0cc806f8b2941fbe0
AWS_PUBLIC_SUBNET_ID: subnet-0934baa29aeab842f
AWS_PUBLIC_SUBNET_ID_1: subnet-0c7fdc9bab53b6ab8
AWS_PUBLIC_SUBNET_ID_2: subnet-07e4beeca76dbd4d1
AWS_VPC_ID: vpc-073d85xxxx4ce5612
BASTION_HOST_ENABLED: "true"
Now trigger the management cluster deployment as follows:
root@ip-10-0-44-xx:/som/TKGonAWS# tanzu management-cluster create --file cluster-config-file.yaml
Validating the pre-requisites…
Identity Provider not configured. Some authentication features won’t work.
Setting up management cluster…
Validating configuration…
Using infrastructure provider aws:v0.6.4
Generating cluster configuration…
Setting up bootstrapper…
Bootstrapper created. Kubeconfig: /root/.kube-tkg/tmp/config_s4xMGSxA
Installing providers on bootstrapper…
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available…
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.6.4" TargetNamespace="capa-system"
Start creating management cluster…
Saving management cluster kubeconfig into /root/.kube/config
Installing providers on management cluster…
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available…
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.6.4" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move…
Waiting for addons installation…
Moving all Cluster API objects from bootstrap cluster to management cluster…
Performing move…
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster sommc as 'sommc-admin@sommc'.
Management cluster created!
You can now create your first workload cluster by running the following:
tanzu cluster create [name] -f [file]
root@ip-10-0-44-xx:/som/TKGonAWS# tanzu management-cluster get
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES
sommc tkg-system running 1/1 1/1 v1.20.4+vmware.1 management
Details:
NAME READY SEVERITY REASON SINCE MESSAGE
/sommc True 5m17s
├─ClusterInfrastructure - AWSCluster/sommc True 5m20s
├─ControlPlane - KubeadmControlPlane/sommc-control-plane True 5m18s
│ └─Machine/sommc-control-plane-r7q2v True 5m20s
└─Workers
└─MachineDeployment/sommc-md-0
└─Machine/sommc-md-0-754645f856-687tp True 5m20s
Providers:
NAMESPACE NAME TYPE PROVIDERNAME VERSION WATCHNAMESPACE
capa-system infrastructure-aws InfrastructureProvider aws v0.6.4
capi-kubeadm-bootstrap-system bootstrap-kubeadm BootstrapProvider kubeadm v0.3.14
capi-kubeadm-control-plane-system control-plane-kubeadm ControlPlaneProvider kubeadm v0.3.14
capi-system cluster-api CoreProvider cluster-api v0.3.14
root@ip-10-0-44-xx:/som/TKGonAWS#
Validation of Management Cluster Deployment:
root@ip-10-0-44-xx:/som/TKGonAWS# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* sommc-admin@sommc sommc sommc-admin
root@ip-10-0-44-xx:/som/TKGonAWS# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-xx.us-east-2.compute.internal Ready <none> 14m v1.20.4+vmware.1
ip-10-0-0-xx.us-east-2.compute.internal Ready control-plane,master 15m v1.20.4+vmware.1
root@ip-10-0-44-xx:/som/TKGonAWS# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
capa-system capa-controller-manager-77459bfffc-nn8p7 2/2 Running 0 29m
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-5bdd64499b-6jlw5 2/2 Running 0 29m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-7f89b8594d-mkhdt 2/2 Running 0 29m
capi-system capi-controller-manager-c4f5f9c76-2jllt 2/2 Running 0 29m
capi-webhook-system capa-controller-manager-6c8fc547-2f956 2/2 Running 0 29m
capi-webhook-system capi-controller-manager-768b989cbc-qvz52 2/2 Running 0 29m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-67444bbcc9-rhgnt 2/2 Running 0 29m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-5466b4d4d6-78vwv 2/2 Running 0 29m
cert-manager cert-manager-6cbfc68c4b-j2s9j 1/1 Running 0 31m
cert-manager cert-manager-cainjector-796775c48f-flw47 1/1 Running 0 31m
cert-manager cert-manager-webhook-7646d5bc94-vlx76 1/1 Running 0 31m
kube-system antrea-agent-cp7sg 2/2 Running 0 31m
kube-system antrea-agent-km7kr 2/2 Running 0 31m
kube-system antrea-controller-6f7fd89c9f-k7qdg 1/1 Running 0 31m
kube-system coredns-68d49685bd-7llq6 1/1 Running 0 32m
kube-system coredns-68d49685bd-cx9sm 1/1 Running 0 32m
kube-system etcd-ip-10-0-0-244.us-east-2.compute.internal 1/1 Running 0 32m
kube-system kube-apiserver-ip-10-0-0-244.us-east-2.compute.internal 1/1 Running 0 32m
kube-system kube-controller-manager-ip-10-0-0-244.us-east-2.compute.internal 1/1 Running 0 32m
kube-system kube-proxy-679cn 1/1 Running 0 31m
kube-system kube-proxy-jhzbh 1/1 Running 0 32m
kube-system kube-scheduler-ip-10-0-0-244.us-east-2.compute.internal 1/1 Running 0 32m
kube-system metrics-server-5bcb57fdd4-wnzqs 1/1 Running 0 28m
tkg-system kapp-controller-5b845bdfdd-x5ncd 1/1 Running 0 32m
tkg-system tanzu-addons-controller-manager-748d5d85db-85dlj 1/1 Running 3 31m
tkr-system tkr-controller-manager-f7bbb4bd4-4txt6 1/1 Running 0 32m
root@ip-10-0-44-xx:/som/TKGonAWS#
Important commands used in this article:
aws ec2 create-key-pair --key-name somTKG --region us-east-2 --output json | jq .KeyMaterial -r > somTKG.pem
aws ec2 create-tags --resources subnet-0934baa62aeab842f subnet-0c7fdc9bab96b6ab8 subnet-07e4beeca76dbd4d1 --tags Key=kubernetes.io/cluster/sommc,Value=shared
tanzu management-cluster permissions aws set
tanzu management-cluster create --file cluster-config-file.yaml
tanzu management-cluster get
kubectl config get-contexts
In the next article, we are going to see how to install TKG Kubernetes clusters on AWS.