@afahounko
Forked from icebob/k3s_helm_install.sh
Created July 5, 2023 11:48
K3S + Helm installation
# Install K3S
curl -sfL https://get.k3s.io | sh -
# Copy k3s config
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
# Take ownership instead of leaving the root-owned kubeconfig world-readable
sudo chown $(id -u):$(id -g) $HOME/.kube/config
chmod 600 $HOME/.kube/config
# Check K3S
kubectl get pods -n kube-system
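Besides the kube-system pods, it can help to confirm the node itself reports Ready before continuing. A minimal sketch of checking the STATUS column; the `status_of` helper and the captured sample line are illustrative, not part of the gist:

```shell
#!/bin/sh
# Extract the STATUS column from `kubectl get nodes --no-headers` output.
status_of() { awk '{print $2}'; }

# Demo against a captured sample line; on a live cluster, pipe
# `kubectl get nodes --no-headers` into status_of instead.
sample='k3s01   Ready    control-plane,master   5m   v1.27.4+k3s1'
printf '%s\n' "$sample" | status_of
```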
# Create Storage class
# kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# kubectl get storageclass
# Download & install Helm
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
chmod u+x install-helm.sh
./install-helm.sh
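Downloading a script and executing it deserves at least a quick sanity check. One hedged precaution is to record the installer's checksum and compare it on later runs. A sketch using only coreutils; the file below is a stand-in, on a real host you would checksum install-helm.sh:

```shell
#!/bin/sh
# Compute a checksum of the downloaded installer so repeated runs can be
# compared against a value recorded earlier. The temp file is a stand-in
# for install-helm.sh.
tmp=$(mktemp)
printf 'echo hello\n' > "$tmp"
sha256sum "$tmp" | awk '{print $1}'
rm -f "$tmp"
```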
# Link Helm with Tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
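`helm init` returns before Tiller is actually serving, so the first `helm` command can fail with a connection error. You can wait with `kubectl -n kube-system rollout status deploy/tiller-deploy`, or check the READY column yourself. Sketch of the column check; the `is_ready` helper and sample line are mine:

```shell
#!/bin/sh
# Check that the READY column of a deployment listing reads "1/1".
is_ready() { awk '{print $2}' | grep -qx '1/1'; }

# Demo against a captured sample line; live, pipe in
# `kubectl -n kube-system get deploy tiller-deploy --no-headers` instead.
sample='tiller-deploy   1/1   1   1   30s'
if printf '%s\n' "$sample" | is_ready; then echo "tiller ready"; fi
```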
# Check Helm
helm repo update
helm search postgres
# Install NATS with Helm
# https://hub.helm.sh/charts/bitnami/nats
helm install --name nats --namespace demo \
--set auth.enabled=true,auth.user=admin,auth.password=admin1234 \
stable/nats
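The hard-coded admin1234 password is fine for a throwaway demo, but generating one costs a single line. A sketch using only /dev/urandom and tr; the NATS_PASSWORD variable name is mine:

```shell
#!/bin/sh
# Generate a 20-character alphanumeric password and pass it via --set
# instead of a literal.
NATS_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20)
echo "length: ${#NATS_PASSWORD}"
# helm install --name nats --namespace demo \
#   --set auth.enabled=true,auth.user=admin,auth.password="$NATS_PASSWORD" \
#   stable/nats
```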
# Check
helm list
kubectl get svc -n demo
# Create a port forward to NATS (blocking the terminal)
kubectl port-forward svc/nats-client 4222 -n demo
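With the port-forward running, a NATS server greets any TCP client (e.g. `nc 127.0.0.1 4222`) with an `INFO {...}` banner, which is a quick way to confirm the tunnel works. Sketch of pulling the server version out of that banner; the JSON below is an illustrative sample, not real output from this deployment:

```shell
#!/bin/sh
# Extract the "version" field from a NATS INFO banner.
# Live: pipe the first line from `nc 127.0.0.1 4222` in here instead.
banner='INFO {"server_id":"abc","version":"1.4.1","port":4222}'
printf '%s\n' "$banner" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p'
```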
# Delete NATS (Helm 2: --purge also removes the release history)
helm delete --purge nats
# Working DNS with ufw https://github.com/rancher/k3s/issues/24#issuecomment-515003702
# sudo ufw allow in on cni0 from 10.42.0.0/16 comment "K3s rule"
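After adding the ufw rule you can verify in-cluster DNS with a throwaway busybox pod, e.g. `kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default`, and check that an Address line comes back. Sketch of parsing that output; the sample text is illustrative busybox nslookup output, not captured from this cluster:

```shell
#!/bin/sh
# Pull the first resolved address out of busybox-style nslookup output.
sample='Name:      kubernetes.default
Address 1: 10.43.0.1 kubernetes.default.svc.cluster.local'
printf '%s\n' "$sample" | awk '/^Address/ {print $3; exit}'
```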
@afahounko commented Jul 5, 2023
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

helm init --service-account tiller

@afahounko commented:
root@k3s01:~/learn-vault-secrets-operator# helm install --name nats --namespace demo \
        --set auth.enabled=true,auth.user=admin,auth.password=admin1234 \
        stable/nats
WARNING: This chart is deprecated
NAME:   nats
LAST DEPLOYED: Wed Jul  5 13:25:59 2023
NAMESPACE: demo
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME  DATA  AGE
nats  1     1s

==> v1/Pod(related)
NAME    READY  STATUS             RESTARTS  AGE
nats-0  0/1    ContainerCreating  0         1s

==> v1/Service
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)            AGE
nats-client      ClusterIP  10.43.38.201  <none>       4222/TCP           1s
nats-cluster     ClusterIP  10.43.124.94  <none>       6222/TCP           1s
nats-headless    ClusterIP  None          <none>       4222/TCP,6222/TCP  1s
nats-monitoring  ClusterIP  10.43.89.176  <none>       8222/TCP           1s

==> v1/StatefulSet
NAME  READY  AGE
nats  0/1    1s


NOTES:
This Helm chart is deprecated

Given the `stable` deprecation timeline (https://github.com/helm/charts#deprecation-timeline), the Bitnami maintained Helm chart is now located at bitnami/charts (https://github.com/bitnami/charts/).

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc that we've been keeping here these years. Installation instructions are very similar, just adding the _bitnami_ repo and using it during the installation (`bitnami/<chart>` instead of `stable/<chart>`)

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2
```

To update an existing stable deployment with a chart hosted in the bitnami repository you can execute

```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>
```

Issues and PRs related to the chart itself will be redirected to the bitnami/charts GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue (helm/charts#20969) created as a common place for discussion.

** Please be patient while the chart is being deployed **

NATS can be accessed via port 4222 on the following DNS name from within your cluster:

nats-client.demo.svc.cluster.local

To get the authentication credentials, run:

export NATS_USER=$(kubectl get cm --namespace demo nats -o jsonpath='{.data.*}' | grep -m 1 user | awk '{print $2}')
export NATS_PASS=$(kubectl get cm --namespace demo nats -o jsonpath='{.data.*}' | grep -m 1 password | awk '{print $2}')
echo -e "Client credentials:\n\tUser: $NATS_USER\n\tPassword: $NATS_PASS"

NATS monitoring service can be accessed via port 8222 on the following DNS name from within your cluster:

nats-monitoring.demo.svc.cluster.local

To access the Monitoring svc from outside the cluster, follow the steps below:

  1. Get the NATS monitoring URL by running:

    echo "Monitoring URL: http://127.0.0.1:8222"
    kubectl port-forward --namespace demo svc/nats-monitoring 8222:8222

  2. Access NATS monitoring by opening the URL obtained in a browser.

root@k3s01:/learn-vault-secrets-operator# kubectl get cm --namespace demo nats -o jsonpath='{.data.*}' | grep -m 1 user | awk '{print $2}'
admin
root@k3s01:/learn-vault-secrets-operator#
