# Kubernetes Developer Notes

This page documents some of the information most useful when developing apps to run on Kubernetes.

## Liveness, Readiness, & Startup Probes

Your application should have liveness, readiness, and startup probes configured before you deploy it to production. This ensures that Kubernetes won't send the application traffic until it (and its dependencies) is ready to serve.

Links:

- https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
- https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes

## Resource Limits

Your application should specify the amount of resources it needs to run. This serves a couple of purposes:

- If a process ends up taking all the available CPU, it can impact other apps. CPU limits prevent one CPU-hungry app from starving the entire cluster.
- If a process has runaway memory growth, memory won't be available for other apps, and the process will eventually die when all memory is exhausted.

**Requests:** The `requests` specification is used at pod placement time: Kubernetes looks for a node that has enough CPU and memory to satisfy the requests configuration.

**Limits:** Limits are enforced at runtime. If a container exceeds its CPU limit, Kubernetes throttles it; the container won't be killed, it just can't use more CPU. If a container exceeds its memory limit, it may be terminated.

Considerations:

- If you under-size these values, your application may run out of memory or run slowly.
- If you over-size them, your app (or someone else's) may fail to deploy if the cluster doesn't have enough resources to schedule all the apps with their requested reservations.
  You may not notice a problem unless you are watching the deployment status of your app, and problems may only show themselves after apps restart when they fail.
- Your Operations Engineers may impose resource quotas at the Namespace level to prevent apps from reserving too many resources (or too few). Nobody wants to be limited, but all the apps do need to be able to run.

Links:

- https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits

## Pod Settings

Always set `imagePullPolicy` to `Always`:

```
spec:
  template:
    spec:
      containers:
        - name: demo-app
          # ... add this line so the image is pulled on every container start
          imagePullPolicy: Always
```

Restrict network access to only what needs to reach the pod. The following policy allows **all** ingress and egress to the pod; tighten it for production:

```
# Pod network policies are a separate resource.
# Add these lines at the end of deployment.yml.
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector:
    matchLabels:
      app: demo-app
  ingress:
    - {}
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
```

Restrict the security context of the container as much as you can while your app still works. This example just makes the container filesystem read-only and sets a high (non-root) user/group ID:

```
spec:
  template:
    spec:
      containers:
        - name: demo-app
          # add a security context for the container
          securityContext:
            runAsUser: 10001
            runAsGroup: 10001
            readOnlyRootFilesystem: true
```

Links:

- https://semaphoreci.com/blog/kubernetes-deployments
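Pulling the probe guidance above into a concrete sketch: the container spec below wires up all three probe types. The endpoint paths, port, and timing values are assumptions for illustration, not values from this page; size them for your application.

```
spec:
  template:
    spec:
      containers:
        - name: demo-app
          # Gives a slow-starting app up to 30 * 10s before liveness checks begin
          startupProbe:
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080       # assumed app port
            failureThreshold: 30
            periodSeconds: 10
          # Restarts the container if the app stops responding
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
          # Removes the pod from Service endpoints until it reports ready
          readinessProbe:
            httpGet:
              path: /ready     # assumed readiness endpoint
              port: 8080
            periodSeconds: 5
```

Note the split in roles: a failing readiness probe only stops traffic, while a failing liveness probe restarts the container.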
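Similarly, a minimal sketch of the requests/limits guidance from the Resource Limits section. The specific CPU and memory figures are assumptions; measure your app and adjust.

```
spec:
  template:
    spec:
      containers:
        - name: demo-app
          resources:
            requests:
              cpu: 250m        # used at scheduling time to pick a node
              memory: 256Mi
            limits:
              cpu: 500m        # CPU usage is throttled at this cap
              memory: 512Mi    # exceeding this risks the container being killed
```

Setting requests below limits, as here, lets the app burst when the node has spare capacity while still bounding worst-case usage.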