@jcortejoso
Created February 29, 2024 12:23
Starting GlobalTestnet after volume and IP removal
# Global Testnet
## Recreate/Restart the testnet
To save costs, all the volumes and the validators' services (static IPs) were deleted. But we keep:
- The five clusters
- Helm deployments
- Snapshot
- Bootnode IP
It is recommended to use warp and to reuse the workflows in the `globaltestnet` folder.
To recreate the testnet, we need to (on each cluster):
1. Restore the PVCs from a snapshot: We do this because we want a large state to test the network with, so results are comparable to mainnet.
We restore the PVCs in two waves: one using the snapshot, and the other using the PVCs created in the first wave. We do it this way because there is a rate limit on the number of PVCs that can be restored from the same source resource (snapshot/PVC).
1.1. Restore some PVCs from snapshots:
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: snapshot-chainsize-500gb
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: snapshot-chainsize-500gb
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: snapshot-chainsize-500gb
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: snapshot-chainsize-500gb
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
kubectl scale sts globaltestnet-validators --replicas=4 # Needed to recreate the PVCs
```
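Since the four manifests above differ only in the PVC name, they can also be generated with a loop instead of being written out by hand. This is a sketch (not part of the original gist); the names, snapshot, and annotations match the manifests above:

```shell
# Sketch: emit the wave-1 PVC manifests for validators 0..3.
# Review the output, then pipe it to `kubectl apply -f -`.
generate_wave1_pvcs() {
  for i in 0 1 2 3; do
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-${i}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: snapshot-chainsize-500gb
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
  done
}

# Usage: generate_wave1_pvcs | kubectl apply -f -
generate_wave1_pvcs
```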
1.2. Once restored, recreate the other PVCs using these PVCs as sources:
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-4
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: data-globaltestnet-validators-0
    kind: PersistentVolumeClaim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-6
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: data-globaltestnet-validators-0
    kind: PersistentVolumeClaim
---
... # Add the rest of the PVCs rotating the source PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    resize.topolvm.io/increase: 5Gi
    resize.topolvm.io/inodes-threshold: 5%
    resize.topolvm.io/storage_limit: 800Gi
    resize.topolvm.io/threshold: 5%
  name: data-globaltestnet-validators-21
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
  storageClassName: premium-rwo
  dataSource:
    name: data-globaltestnet-validators-3
    kind: PersistentVolumeClaim
EOF
kubectl scale sts globaltestnet-validators --replicas=22 # Needed to recreate the PVCs
```
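The gist does not spell out the exact rotation, so here is one possible even split (my assumption, not the author's documented scheme): divide the 18 remaining replicas (4..21) across the 4 restored source PVCs (0..3). This particular split happens to agree with the two concrete examples shown above (validators-6 from validators-0, validators-21 from validators-3):

```shell
# Sketch: map each wave-2 replica index to a source PVC by splitting
# indices 4..21 evenly across sources 0..3 (integer arithmetic).
source_pvc_for() {
  local i="$1"
  echo "data-globaltestnet-validators-$(( (i - 4) * 4 / 18 ))"
}

# Print the replica -> source mapping for review before writing manifests
for i in $(seq 4 21); do
  echo "data-globaltestnet-validators-${i} <- $(source_pvc_for "$i")"
done
```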
1.3 Scale the statefulset to zero for the next steps: `kubectl scale sts globaltestnet-validators --replicas=0`
2. Recreate the services: We need to recreate the services to get the static IPs back.
2.1 Recreate the IPs with Terraform: We can use Terraform or gcloud to recreate the IPs. Using Terraform,
I needed to first taint the existing address resources, and then apply using the `--target` flags:
```bash
# Do cluster by cluster
terraform taint 'module.global_cluster_us_central1_arm.google_compute_address.validators[0]'
...
terraform taint 'module.global_cluster_us_central1_arm.google_compute_address.validators[21]'
terraform apply --target=module.global_cluster_us_central1_arm.google_compute_address.validators[0] ... --target=module.global_cluster_us_central1_arm.google_compute_address.validators[21]
```
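Typing 22 taint commands and 22 `--target` flags by hand is error-prone, so they can be generated with a loop. A sketch (not from the gist); the module path matches the us-central1 cluster above and should be swapped per cluster:

```shell
# Sketch: print the `terraform taint` commands and a matching
# `terraform apply --target=...` command for validators 0..21.
MODULE="module.global_cluster_us_central1_arm.google_compute_address.validators"

targets=""
for i in $(seq 0 21); do
  echo "terraform taint '${MODULE}[${i}]'"
  targets="${targets} --target=${MODULE}[${i}]"
done
echo "terraform apply${targets}"
```

Review the printed commands, then run them (or pipe the output to `bash`) cluster by cluster.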
2.2 Recreate the k8s services:
```bash
# First we need to get the IPs
gcloud compute addresses list --project=blockchaintestsglobaltestnet
# Copy the IPs for your cluster and edit the services with the new IPs. There are two services per validator:
```

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  labels:
    app: testnet
    chart: testnet-0.2.1
    component: validators
    heritage: Helm
    release: globaltestnet
  name: globaltestnet-validators-0
  namespace: globaltestnet
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.80.11.221
  clusterIPs:
    - 10.80.11.221
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 35.233.233.129 # Update this IP
  ports:
    - name: discovery
      nodePort: 32062
      port: 30303
      protocol: UDP
  publishNotReadyAddresses: true
  selector:
    statefulset.kubernetes.io/pod-name: globaltestnet-validators-0
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  labels:
    app: testnet
    chart: testnet-0.2.1
    component: validators
    heritage: Helm
    release: globaltestnet
  name: globaltestnet-validators-0-tcp
  namespace: globaltestnet
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.80.7.106
  clusterIPs:
    - 10.80.7.106
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 35.233.233.129 # Update this IP
  ports:
    - name: celo
      nodePort: 32746
      port: 30303
  publishNotReadyAddresses: true
  selector:
    statefulset.kubernetes.io/pod-name: globaltestnet-validators-0
  type: LoadBalancer
... # Repeat for the rest of the validators
EOF
```
2.3 Update the `globaltestnet-validators` statefulset with the new IP addresses. We need to update the `IP_ADDRESSES` env var:
```bash
kubectl edit statefulset globaltestnet-validators -n globaltestnet
```
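Instead of editing interactively, the env var could be set with `kubectl patch`. This is a sketch with several assumptions to verify first: the example IPs are hypothetical, `IP_ADDRESSES` is assumed to be a comma-separated list, and the JSON path assumes it is the first env entry of the first container, which may not match the actual pod spec:

```shell
# Sketch: join the per-validator IPs and build a JSON patch for the
# statefulset. IPs below are hypothetical placeholders.
IPS=(35.233.233.129 34.77.10.21)

# Join the array with commas (bash)
joined=$(IFS=,; echo "${IPS[*]}")

# Assumed path: first container, first env entry -- verify against the
# actual statefulset before applying.
patch=$(cat <<EOF
[{"op": "replace",
  "path": "/spec/template/spec/containers/0/env/0/value",
  "value": "${joined}"}]
EOF
)
echo "$patch"
# kubectl patch statefulset globaltestnet-validators -n globaltestnet \
#   --type=json -p "$patch"
```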