# To check that the plugin builds with the current version of coredns

1. Ran a local [kind cluster](https://kind.sigs.k8s.io/) with the ServiceImport CRD installed from the [MCS API repo](https://github.com/kubernetes-sigs/mcs-api/tree/master/config/crd).

2. Built a local Docker image against coredns/Dockerfile at the latest coredns master commit [c9eedcb](https://github.com/coredns/coredns/commit/c9eedcb7d11a5c4c90aa4c538cbb07f3bffaaee5), with the multicluster plugin installed by referencing it in coredns/plugin.cfg.

   ```
   docker build -t coredns-193-withmc $MYDEVHOME/coredns
   ```

3. Loaded the local image into the kind cluster (did you know you needed to do that? I didn't. See [KiND - How I Wasted a Day Loading Local Docker Images by Ivan Velichko](https://iximiuz.com/en/posts/kubernetes-kind-load-docker-image/)).

   ```
   kind load docker-image coredns-193-withmc:latest
   ```

4. Patched the coredns deployment in the kind cluster following [these deployment directions](https://github.com/coredns/deployment/blob/master/kubernetes/Upgrading_CoreDNS.md#walkthrough---manual-update-of-coredns); short version below.

   ```
   lauralorenz@lauralorenz:coredns$ kubectl patch deployment coredns -n kube-system -p '{"spec":{"template":{"spec":{"containers":[{"name":"coredns", "image":"coredns-193-withmc:latest"}]}}}}'
   deployment.apps/coredns patched
   #
   # I also updated the coredns/kube-dns ConfigMap to actually configure the multicluster plugin, see the multicluster plugin README
   #
   ```

5. Now that I'm running a version with the multicluster plugin installed, gave the coredns ServiceAccount RBAC privileges on the ServiceImport CRD.
   ```
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: system:coredns-multicluster
   rules:
   - apiGroups:
     - "multicluster.x-k8s.io"
     resources:
     - serviceimports
     verbs: ["*"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: system:coredns-multicluster
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: system:coredns-multicluster
   subjects:
   - kind: ServiceAccount
     name: coredns
     namespace: kube-system
   ```

6. Confirmed coredns redeployed and that its logs were healthy using kubectl.

# More steps when you want to check that the multicluster plugin does what you think it does

7. Deployed a fake ServiceImport in the demo namespace.

   ```
   apiVersion: multicluster.x-k8s.io/v1alpha1
   kind: ServiceImport
   metadata:
     name: myservice
     namespace: demo
   spec:
     type: ClusterSetIP
     ips:
     - 1.2.3.4
     ports:
     - port: 80
       protocol: TCP
   ```

8. Deployed a dnsutils pod in the demo namespace.

   ```
   apiVersion: v1
   kind: Pod
   metadata:
     name: dnsutils
     namespace: demo
   spec:
     containers:
     - name: dnsutils
       image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
       command:
       - sleep
       - "3600"
       imagePullPolicy: IfNotPresent
     restartPolicy: Always
   ```

9. Used the dnsutils pod to confirm that a DNS query for the ServiceImport responds with the IP I set in the fake ServiceImport.

   ```
   lauralorenz@lauralorenz:multicluster$ kubectl exec -it dnsutils -n demo -- bash
   root@dnsutils:/# nslookup myservice.demo.svc.clusterset.local
   Server:   10.96.0.10
   Address:  10.96.0.10#53

   Name:     myservice.demo.svc.clusterset.local
   Address:  1.2.3.4
   ```
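For reference, the ConfigMap change mentioned in step 4 looks roughly like the following Corefile. This is a sketch, not the exact file I used: it assumes the default kubeadm/kind Corefile, with the `multicluster clusterset.local` line being the only multicluster-specific addition (the zone must match whatever your ServiceImports resolve under).

```
.:53 {
    errors
    health
    ready
    multicluster clusterset.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```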
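The name queried in the last step follows the MCS-API DNS pattern `<service>.<namespace>.svc.<clusterset zone>`. A trivial sketch of how that name is assembled (the variable names here are hypothetical, not part of the plugin):

```shell
# Build the clusterset DNS name the multicluster plugin answers for
# the ServiceImport "myservice" in namespace "demo".
service=myservice
namespace=demo
zone=clusterset.local
fqdn="${service}.${namespace}.svc.${zone}"
echo "$fqdn"
```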