# CAPG / CAPI bootstrap with clusterctl

This example uses the latest clusterctl (v1.0.0) together with the latest CAPG release that supports v1beta1 (v1.0.0) to get a running workload cluster for testing/development purposes. This is a quick overview; for a more in-depth walkthrough see https://cluster-api.sigs.k8s.io/user/quick-start.html

1. Create a kind cluster:

   ```console
   $ kind create cluster --image kindest/node:v1.22.1 --wait 5m
   ```

2. Export the required variables:

   ```console
   $ export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
   $ export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
   $ export GCP_NODE_MACHINE_TYPE=n1-standard-2
   $ export GCP_PROJECT=
   $ export GCP_REGION=us-east4
   $ export IMAGE_ID=
   $ export GCP_NETWORK_NAME=default
   $ export CLUSTER_NAME=test # any name will do; "test" is used throughout this example
   ```

3. Set up the network. This example uses the default network, so create a router/NAT to give the workload cluster internet access:

   ```console
   $ gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"
   $ gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
   ```

4. Deploy CAPI/CAPG:

   ```console
   $ clusterctl init --infrastructure gcp
   ```

5. Generate the workload cluster config and apply it:

   ```console
   $ clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.22.3 > workload-test.yaml
   $ kubectl apply -f workload-test.yaml
   ```

6. Check the CAPG manager logs, or watch the GCP console; the control plane VM should be up and running soon.
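The credential encoding in step 2 can be sanity-checked before moving on. This sketch uses a throwaway JSON file under `/tmp` (an assumption for demonstration only; in practice point it at your real service-account key) and verifies that the value round-trips:

```shell
# Demonstration of the GCP_B64ENCODED_CREDENTIALS encoding from step 2,
# using a throwaway file instead of a real service-account key.
printf '{"type":"service_account"}' > /tmp/fake-creds.json

# Same pipeline as step 2: base64-encode and strip the newlines that
# base64 inserts when wrapping long output.
export GCP_B64ENCODED_CREDENTIALS=$(base64 < /tmp/fake-creds.json | tr -d '\n')

# Decoding the variable should print the original JSON back.
printf '%s' "$GCP_B64ENCODED_CREDENTIALS" | base64 --decode
```

On GNU coreutils, `base64 -w0` achieves the same thing as the `tr -d '\n'` step, but `tr` also works with the BSD/macOS `base64`.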
7. Check the cluster status:

   ```console
   $ clusterctl describe cluster $CLUSTER_NAME
   NAME                                                      READY  SEVERITY  REASON                 SINCE  MESSAGE
   /test                                                     False  Info      WaitingForKubeadmInit  5s
   ├─ClusterInfrastructure - GCPCluster/test
   └─ControlPlane - KubeadmControlPlane/test-control-plane   False  Info      WaitingForKubeadmInit  5s
     └─Machine/test-control-plane-x57zs                      True                                    31s
       └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2

   $ kubectl get kubeadmcontrolplane
   NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
   test-control-plane   test                                           1                  1         1             2m9s   v1.22.3
   ```

8. Get the kubeconfig for the workload cluster:

   ```console
   $ clusterctl get kubeconfig $CLUSTER_NAME
   $ clusterctl get kubeconfig $CLUSTER_NAME > workload-test.kubeconfig
   ```

9. Apply the CNI:

   ```console
   $ kubectl --kubeconfig=./workload-test.kubeconfig \
       apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
   configmap/calico-config created
   customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
   customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
   clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
   clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
   clusterrole.rbac.authorization.k8s.io/calico-node created
   clusterrolebinding.rbac.authorization.k8s.io/calico-node created
   daemonset.apps/calico-node created
   serviceaccount/calico-node created
   deployment.apps/calico-kube-controllers created
   serviceaccount/calico-kube-controllers created
   Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
   poddisruptionbudget.policy/calico-kube-controllers created
   ```

10. Wait a bit, and you should see this when getting the kubeadmcontrolplane:

    ```console
    $ kubectl get kubeadmcontrolplane
    NAME                 CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
    test-control-plane   test      true          true                   1          1       1         0             6m33s   v1.22.3

    $ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
    NAME                       STATUS   ROLES                  AGE   VERSION
    test-control-plane-7xzw2   Ready    control-plane,master   62s   v1.22.3
    ```

11. Edit the `MachineDeployment` in `workload-test.yaml`: it has 0 replicas, so set the number of worker nodes you want (this example uses 2).

12. Apply the updated `workload-test.yaml`.

13. After a few minutes everything should be up and running:

    ```console
    $ clusterctl describe cluster $CLUSTER_NAME
    NAME                                                      READY  SEVERITY  REASON  SINCE  MESSAGE
    /test                                                     True                     15m
    ├─ClusterInfrastructure - GCPCluster/test
    ├─ControlPlane - KubeadmControlPlane/test-control-plane   True                     15m
    │ └─Machine/test-control-plane-x57zs                      True                     19m
    │   └─MachineInfrastructure - GCPMachine/test-control-plane-7xzw2
    └─Workers
      └─MachineDeployment/test-md-0                           True                     10m
        └─2 Machines...                                       True                     13m    See test-md-0-68bd55744b-qpk67, test-md-0-68bd55744b-tsgf6

    $ kubectl get nodes --kubeconfig=./workload-test.kubeconfig
    NAME                       STATUS   ROLES                  AGE   VERSION
    test-control-plane-7xzw2   Ready    control-plane,master   21m   v1.22.3
    test-md-0-b7766            Ready    <none>                 17m   v1.22.3
    test-md-0-wsgpj            Ready    <none>                 17m   v1.22.3
    ```

14. This is now a regular Kubernetes cluster; you can deploy your apps and whatever else you want.

15. To delete the workload cluster:

    ```console
    $ kubectl delete cluster $CLUSTER_NAME
    ```

16. Delete the router/NAT:

    ```console
    $ gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
        --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter"
    $ gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
        --region="${GCP_REGION}"
    ```

17. Delete the kind cluster:

    ```console
    $ kind delete cluster
    ```
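For reference, the edit in step 11 touches a single field of the generated manifest. A sketch of the relevant fragment of `workload-test.yaml` is below; the `metadata` values assume `CLUSTER_NAME=test` as in this example, and the exact names depend on the clusterctl template in use:

```yaml
# Fragment of workload-test.yaml (step 11), assuming CLUSTER_NAME=test.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: test-md-0
spec:
  clusterName: test
  replicas: 2  # generated as 0; set to the desired number of worker nodes
```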