@kalaspuffar
Last active December 20, 2025 13:19

  • Save kalaspuffar/53d0e828e96482d3ee1f8c88b0f9ea5d to your computer and use it in GitHub Desktop.


Revisions

  1. kalaspuffar revised this gist Nov 22, 2025. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions ceph-manual-install.md
    @@ -10,8 +10,8 @@ apt upgrade

    Next we fetch the keys and ceph packages, in this case we download the pacific packages for buster.
    ```
    -wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    -echo deb https://download.ceph.com/debian-pacific/ buster main | sudo tee /etc/apt/sources.list.d/ceph.list
    +wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
    +echo deb https://download.ceph.com/debian-tentacle/ bookworm main | sudo tee /etc/apt/sources.list.d/ceph.list
    apt update
    apt install ceph ceph-common
    ```
  2. kalaspuffar revised this gist Oct 25, 2021. 1 changed file with 41 additions and 23 deletions.
    64 changes: 41 additions & 23 deletions ceph-manual-install.md
    @@ -64,52 +64,70 @@ sudo chown ceph:ceph /tmp/monkey

    Next up we create a monitor map so the monitors will know of each other. The monitors keep track of the other resources, but for high availability they need to know who is in charge.
    ```
    -monmaptool --create --add n1 192.168.6.44 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    -monmaptool --add n2 192.168.6.42 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    -monmaptool --add n3 192.168.6.43 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    +monmaptool --create --add {node1-id} {node1-ip} --fsid {cluster uuid} /tmp/monmap
    +monmaptool --add {node2-id} {node2-ip} --fsid {cluster uuid} /tmp/monmap
    +monmaptool --add {node3-id} {node3-ip} --fsid {cluster uuid} /tmp/monmap
    ```
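    The three monmaptool calls follow the same pattern, so they can be scripted. A minimal sketch, where the node ids, IPs and fsid are placeholders you replace with your own values; it prints the commands for review rather than running them:

    ```shell
    # Sketch: build the monmaptool commands for all three monitors in one loop.
    # The ids, IPs and fsid below are placeholders -- substitute your own values.
    FSID="a9109c9d-cfac-41be-a1bb-468d6b14c9c5"
    MONMAP="/tmp/monmap"

    # The first node creates the map; the rest are added to it.
    CMDS="monmaptool --create --add n1 192.168.6.44 --fsid $FSID $MONMAP"
    for node in "n2 192.168.6.42" "n3 192.168.6.43"; do
      set -- $node   # split "id ip" into $1 and $2
      CMDS="$CMDS
    monmaptool --add $1 $2 --fsid $FSID $MONMAP"
    done

    # Print the commands for review; pipe the output to `sh` to actually run them.
    echo "$CMDS"
    ```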

    Starting a new monitor is as easy as creating a new directory, creating the filesystem for it, and starting the service.
    ```
    -sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n1
    -sudo -u ceph ceph-mon --mkfs -i n1 --monmap /tmp/monmap --keyring /tmp/monkey
    -sudo systemctl start ceph-mon@n1
    +sudo -u ceph mkdir /var/lib/ceph/mon/ceph-{node1-id}
    +sudo -u ceph ceph-mon --mkfs -i {node1-id} --monmap /tmp/monmap --keyring /tmp/monkey
    +sudo systemctl start ceph-mon@{node1-id}
    ```

    Next up we need a manager so we can configure and monitor our cluster through a visual dashboard. First we create a new key, put that key in a newly created directory and start the service. Enabling the dashboard is as easy as running the enable command, creating and assigning a self-signed certificate, and creating a new admin user.
    ```
    -sudo ceph auth get-or-create mgr.n1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    -sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n1
    -sudo -u ceph vi /var/lib/ceph/mgr/ceph-n1/keyring
    -sudo systemctl start ceph-mgr@n1
    +sudo ceph auth get-or-create mgr.{node1-id} mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    +sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-{node1-id}
    +sudo -u ceph vi /var/lib/ceph/mgr/ceph-{node1-id}/keyring
    +sudo systemctl start ceph-mgr@{node1-id}
    sudo ceph mgr module enable dashboard
    sudo ceph dashboard create-self-signed-cert
    sudo ceph dashboard ac-user-create admin -i passwd administrator
    ```
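    The `ac-user-create` call takes `-i passwd`, which tells the ceph CLI to read the new user's password from a file named `passwd` instead of the command line. A small sketch of creating that file; the password is a placeholder, pick your own:

    ```shell
    # Create the password file the dashboard admin user is created from.
    # The password here is a placeholder -- choose your own.
    umask 077                                  # file readable by owner only
    printf '%s' 'choose-a-strong-password' > passwd

    # Then, on a cluster node:
    # sudo ceph dashboard ac-user-create admin -i passwd administrator
    ```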

    ### Setting up more nodes.

    First off we need to copy the configuration, the monitor map and all the keys over to our new host.
    ```
    -sudo scp woden@n1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    -sudo scp woden@n1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
    -sudo scp woden@n1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
    -sudo scp woden@n1:/tmp/monmap /tmp/monmap
    -sudo scp woden@n1:/tmp/monkey /tmp/monkey
    -sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n2
    -sudo -u ceph ceph-mon --mkfs -i n2 --monmap /tmp/monmap --keyring /tmp/monkey
    -sudo systemctl start ceph-mon@n2
    +sudo scp {user}@{server}:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    +sudo scp {user}@{server}:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
    +sudo scp {user}@{server}:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
    +sudo scp {user}@{server}:/tmp/monmap /tmp/monmap
    +sudo scp {user}@{server}:/tmp/monkey /tmp/monkey
    ```

    Next up we set up the monitor node exactly as we did on the first node.
    ```
    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-{node2-id}
    sudo -u ceph ceph-mon --mkfs -i {node2-id} --monmap /tmp/monmap --keyring /tmp/monkey
    sudo systemctl start ceph-mon@{node2-id}
    sudo ceph -s
    sudo ceph mon enable-msgr2
    ```

    Then we set up the manager node exactly as we did on the first node.
    ```
    -sudo ceph auth get-or-create mgr.n2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    -sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n2
    -sudo -u ceph vi /var/lib/ceph/mgr/ceph-n2/keyring
    -sudo systemctl start ceph-mgr@n2
    +sudo ceph auth get-or-create mgr.{node2-id} mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    +sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-{node2-id}
    +sudo -u ceph vi /var/lib/ceph/mgr/ceph-{node2-id}/keyring
    +sudo systemctl start ceph-mgr@{node2-id}
    ```

    ### Adding storage

    When the cluster is up and running and all monitors are in quorum you can add storage services. This is easily done via the volume command. First prepare a disk so it will be known by the cluster and have the keys and configuration copied to the management directory. Next you activate the service so your storage nodes will be ready to use. Do this for every hard drive you want to add to your cluster.
    ```
    sudo ceph-volume lvm prepare --data /dev/sdb
    -sudo ceph-volume lvm activate 0 5d1e5cee-0b12-439c-8902-93c298cf9ed7
    +sudo ceph-volume lvm activate {osd-number} {osd-uuid}
    ```
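    The `{osd-number}` and `{osd-uuid}` pair comes from the output of the prepare step, and can be listed again later with `sudo ceph-volume lvm list`. A small sketch, with placeholder values, that sanity-checks the uuid before building the activate command:

    ```shell
    # Placeholders -- take the real values from the prepare output
    # or from `sudo ceph-volume lvm list`.
    OSD_ID="0"
    OSD_UUID="5d1e5cee-0b12-439c-8902-93c298cf9ed7"

    # Catch copy/paste mistakes before handing the values to ceph-volume.
    if printf '%s' "$OSD_UUID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
      CMD="sudo ceph-volume lvm activate $OSD_ID $OSD_UUID"
      echo "$CMD"   # printed for review; run it yourself on the storage node
    else
      echo "OSD_UUID does not look like a uuid" >&2
      exit 1
    fi
    ```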

    ### Post configuration

    Last but not least you want to ensure that all the services start after a reboot. In Debian you do that by enabling the services.
    ```
    sudo systemctl enable ceph-mon@{node-id}
    sudo systemctl enable ceph-mgr@{node-id}
    sudo systemctl enable ceph-osd@{osd-number}
    ```
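    With one monitor, one manager and possibly several OSDs per host, the enable commands can be generated in a loop. A sketch using a placeholder node id `n1` and osd numbers `0 1`; it prints the commands for review rather than running them:

    ```shell
    # Placeholders -- the node id for this host, plus its osd numbers.
    NODE_ID="n1"
    OSD_NUMBERS="0 1"

    # Collect every unit we created on this node.
    UNITS="ceph-mon@$NODE_ID ceph-mgr@$NODE_ID"
    for osd in $OSD_NUMBERS; do
      UNITS="$UNITS ceph-osd@$osd"
    done

    for unit in $UNITS; do
      echo "sudo systemctl enable $unit"   # pipe the output to `sh` to run
    done
    ```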
  3. kalaspuffar revised this gist Oct 25, 2021. 1 changed file with 22 additions and 6 deletions.
    28 changes: 22 additions & 6 deletions ceph-manual-install.md
    @@ -1,44 +1,54 @@
    # Manual install of a Ceph Cluster.

    ### Fetching software.

    First off I want to check that I have all the latest packages on my Debian system.
    ```
    apt update
    apt upgrade
    ```

    Next we fetch the keys and ceph packages, in this case we download the pacific packages for buster.
    ```
    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-pacific/ buster main | sudo tee /etc/apt/sources.list.d/ceph.list
    apt update
    apt install ceph ceph-common
    ```

    Lastly we need to install smartmontools on our nodes, so we can monitor our hard drives for hardware issues.
    ```
    echo deb http://deb.debian.org/debian buster-backports main >> /etc/apt/sources.list
    apt update
    apt install smartmontools/buster-backports
    ```


    A reboot after installing packages is always a good idea, and if you need to make any additional hardware changes this is a good time to do so.
    ```
    shutdown -r now
    ```

    ### Configure node 1

    First we will create a ceph configuration file.
    ```
    sudo vi /etc/ceph/ceph.conf
    ```

    The most important things to specify are the ids and IPs of your cluster monitors, a unique cluster id that you will reuse on all your nodes, and lastly a public network range that you want your monitors to be available on. The cluster network is a good addition if you have the resources to route the recovery traffic over a backbone network.
    ```
    [global]
    -fsid = a9109c9d-cfac-41be-a1bb-468d6b14c9c5
    -mon initial members = n1,n2,n3
    -mon host = 192.168.6.44,192.168.6.42,192.168.6.43
    -public network = 192.168.6.0/24
    -cluster network = 10.0.2.0/24
    +fsid = {cluster uuid}
    +mon initial members = {id1}, {id2}, {id3}
    +mon host = {ip1}, {ip2}, {ip3}
    +public network = {network range for your public network}
    +cluster network = {network range for your cluster network}
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    ```
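    The `{cluster uuid}` is just a freshly generated UUID. A sketch that generates one and writes a templated config to a scratch path; the ids, addresses and the /tmp path are placeholders, so review the result before moving it to /etc/ceph/ceph.conf:

    ```shell
    # Generate a cluster fsid and fill in the config template.
    FSID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

    cat > /tmp/ceph.conf <<EOF
    [global]
    fsid = $FSID
    mon initial members = n1, n2, n3
    mon host = 192.168.6.44, 192.168.6.42, 192.168.6.43
    public network = 192.168.6.0/24
    cluster network = 10.0.2.0/24
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    EOF

    grep '^fsid' /tmp/ceph.conf   # quick sanity check
    ```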

    Next we create keys for the admin user, the monitors and for bootstrapping our drives. These keys are then merged into the monitor keyring so the initial setup has the keys needed for the other operations.
    ```
    sudo ceph-authtool --create-keyring /tmp/monkey --gen-key -n mon. --cap mon 'allow *'
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    @@ -47,22 +57,26 @@ sudo ceph-authtool /tmp/monkey --import-keyring /etc/ceph/ceph.client.admin.keyr
    sudo ceph-authtool /tmp/monkey --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
    ```

    Make the monitor key available to the ceph user so we don't get a permission error when we start our services.
    ```
    sudo chown ceph:ceph /tmp/monkey
    ```

    Next up we create a monitor map so the monitors will know of each other. The monitors keep track of the other resources, but for high availability they need to know who is in charge.
    ```
    monmaptool --create --add n1 192.168.6.44 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    monmaptool --add n2 192.168.6.42 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    monmaptool --add n3 192.168.6.43 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    ```

    Starting a new monitor is as easy as creating a new directory, creating the filesystem for it, and starting the service.
    ```
    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n1
    sudo -u ceph ceph-mon --mkfs -i n1 --monmap /tmp/monmap --keyring /tmp/monkey
    sudo systemctl start ceph-mon@n1
    ```

    Next up we need a manager so we can configure and monitor our cluster through a visual dashboard. First we create a new key, put that key in a newly created directory and start the service. Enabling the dashboard is as easy as running the enable command, creating and assigning a self-signed certificate, and creating a new admin user.
    ```
    sudo ceph auth get-or-create mgr.n1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n1
    @@ -73,6 +87,8 @@ sudo ceph dashboard create-self-signed-cert
    sudo ceph dashboard ac-user-create admin -i passwd administrator
    ```

    ### Setting up more nodes.

    ```
    sudo scp woden@n1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    sudo scp woden@n1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
  4. kalaspuffar revised this gist Oct 25, 2021. 1 changed file with 99 additions and 1 deletion.
    100 changes: 99 additions & 1 deletion ceph-manual-install.md
    @@ -1 +1,99 @@
    dsadasdas


    ```
    apt update
    apt upgrade
    ```

    ```
    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-pacific/ buster main | sudo tee /etc/apt/sources.list.d/ceph.list
    apt update
    apt install ceph ceph-common
    ```

    ```
    echo deb http://deb.debian.org/debian buster-backports main >> /etc/apt/sources.list
    apt update
    apt install smartmontools/buster-backports
    ```


    ```
    shutdown -r now
    ```

    ```
    sudo vi /etc/ceph/ceph.conf
    ```

    ```
    [global]
    fsid = a9109c9d-cfac-41be-a1bb-468d6b14c9c5
    mon initial members = n1,n2,n3
    mon host = 192.168.6.44,192.168.6.42,192.168.6.43
    public network = 192.168.6.0/24
    cluster network = 10.0.2.0/24
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    ```

    ```
    sudo ceph-authtool --create-keyring /tmp/monkey --gen-key -n mon. --cap mon 'allow *'
    sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
    sudo ceph-authtool /tmp/monkey --import-keyring /etc/ceph/ceph.client.admin.keyring
    sudo ceph-authtool /tmp/monkey --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
    ```

    ```
    sudo chown ceph:ceph /tmp/monkey
    ```

    ```
    monmaptool --create --add n1 192.168.6.44 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    monmaptool --add n2 192.168.6.42 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    monmaptool --add n3 192.168.6.43 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
    ```

    ```
    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n1
    sudo -u ceph ceph-mon --mkfs -i n1 --monmap /tmp/monmap --keyring /tmp/monkey
    sudo systemctl start ceph-mon@n1
    ```

    ```
    sudo ceph auth get-or-create mgr.n1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n1
    sudo -u ceph vi /var/lib/ceph/mgr/ceph-n1/keyring
    sudo systemctl start ceph-mgr@n1
    sudo ceph mgr module enable dashboard
    sudo ceph dashboard create-self-signed-cert
    sudo ceph dashboard ac-user-create admin -i passwd administrator
    ```

    ```
    sudo scp woden@n1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    sudo scp woden@n1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
    sudo scp woden@n1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
    sudo scp woden@n1:/tmp/monmap /tmp/monmap
    sudo scp woden@n1:/tmp/monkey /tmp/monkey
    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n2
    sudo -u ceph ceph-mon --mkfs -i n2 --monmap /tmp/monmap --keyring /tmp/monkey
    sudo systemctl start ceph-mon@n2
    sudo ceph -s
    sudo ceph mon enable-msgr2
    ```

    ```
    sudo ceph auth get-or-create mgr.n2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n2
    sudo -u ceph vi /var/lib/ceph/mgr/ceph-n2/keyring
    sudo systemctl start ceph-mgr@n2
    ```

    ```
    sudo ceph-volume lvm prepare --data /dev/sdb
    sudo ceph-volume lvm activate 0 5d1e5cee-0b12-439c-8902-93c298cf9ed7
    ```
  5. kalaspuffar created this gist Oct 23, 2021.
    1 change: 1 addition & 0 deletions ceph-manual-install.md
    @@ -0,0 +1 @@
    dsadasdas