@kalaspuffar
Last active December 20, 2025 13:19
How to install a manual ceph cluster.
# Update the system and add the Ceph Pacific repository (Debian buster)
apt update
apt upgrade
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-pacific/ buster main | sudo tee /etc/apt/sources.list.d/ceph.list
apt update
apt install ceph ceph-common
# Install a newer smartmontools from buster-backports, then reboot
echo deb http://deb.debian.org/debian buster-backports main >> /etc/apt/sources.list
apt update
apt install smartmontools/buster-backports
shutdown -r now
# Edit the cluster configuration on the first node (n1)
sudo vi /etc/ceph/ceph.conf

[global]
fsid = a9109c9d-cfac-41be-a1bb-468d6b14c9c5
mon initial members = n1,n2,n3
mon host = 192.168.6.44,192.168.6.42,192.168.6.43
public network = 192.168.6.0/24
cluster network = 10.0.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
# Create the monitor keyring (/tmp/monkey), the admin keyring, and the
# bootstrap-osd keyring, then import the latter two into the monitor keyring
sudo ceph-authtool --create-keyring /tmp/monkey --gen-key -n mon. --cap mon 'allow *'
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
sudo ceph-authtool /tmp/monkey --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/monkey --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo chown ceph:ceph /tmp/monkey
# Build the initial monitor map, then create and start the first monitor
monmaptool --create --add n1 192.168.6.44 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
monmaptool --add n2 192.168.6.42 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
monmaptool --add n3 192.168.6.43 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n1
sudo -u ceph ceph-mon --mkfs -i n1 --monmap /tmp/monmap --keyring /tmp/monkey
sudo systemctl start ceph-mon@n1
# Create a manager on n1; paste the key printed by get-or-create into the
# keyring file, then start the manager and enable the dashboard
sudo ceph auth get-or-create mgr.n1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n1
sudo -u ceph vi /var/lib/ceph/mgr/ceph-n1/keyring
sudo systemctl start ceph-mgr@n1
sudo ceph mgr module enable dashboard
sudo ceph dashboard create-self-signed-cert
# "passwd" is a file containing the password; "administrator" is the role
sudo ceph dashboard ac-user-create admin -i passwd administrator
# On the second node (n2), copy the configuration, keyrings, and monmap
# from n1 ("woden" is the remote user on n1)
sudo scp woden@n1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
sudo scp woden@n1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
sudo scp woden@n1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo scp woden@n1:/tmp/monmap /tmp/monmap
sudo scp woden@n1:/tmp/monkey /tmp/monkey
# Create and start the second monitor, check cluster status, and enable msgr2
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n2
sudo -u ceph ceph-mon --mkfs -i n2 --monmap /tmp/monmap --keyring /tmp/monkey
sudo systemctl start ceph-mon@n2
sudo ceph -s
sudo ceph mon enable-msgr2
# Create and start a manager on n2 (paste the key as on n1)
sudo ceph auth get-or-create mgr.n2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n2
sudo -u ceph vi /var/lib/ceph/mgr/ceph-n2/keyring
sudo systemctl start ceph-mgr@n2
# Prepare an OSD on /dev/sdb; the id (0) and fsid passed to activate
# come from the output of the prepare command
sudo ceph-volume lvm prepare --data /dev/sdb
sudo ceph-volume lvm activate 0 5d1e5cee-0b12-439c-8902-93c298cf9ed7
@aussielunix

Helpful write up. Thanks @kalaspuffar

It took me a while to work this out, but msgr2 can be enabled when you first set up the monitors, with the following changes to your doc.
Issue #53751 is related.

In /etc/ceph/ceph.conf, add only the one node as an initial mon member, but list all mon hosts in this format. The square brackets appear to be important.

mon_initial_members = labnode-01
mon_host = [v2:10.0.99.31:3300,v1:10.0.99.31:6789],[v2:10.0.99.32:3300,v1:10.0.99.32:6789],[v2:10.0.99.33:3300,v1:10.0.99.33:6789]

Now, when you create the monmap, use this new format to represent the hosts.
Note: be sure to use --addv rather than --add.

monmaptool --create --addv labnode-01 [v2:10.0.99.31:3300,v1:10.0.99.31:6789] --fsid xxxxxxxx /tmp/monmap
monmaptool --addv labnode-02 [v2:10.0.99.32:3300,v1:10.0.99.32:6789] --fsid xxxxxxxx /tmp/monmap
monmaptool --addv labnode-03 [v2:10.0.99.33:3300,v1:10.0.99.33:6789] --fsid xxxxxxxx /tmp/monmap
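The bracketed v2/v1 address pairs follow a fixed pattern (3300 for msgr2, 6789 for msgr1 are the Ceph defaults), so the mon_host string can be generated rather than typed by hand. A minimal sketch; the IPs are the example addresses from above:

```shell
#!/bin/sh
# Build a mon_host string in the [v2:IP:3300,v1:IP:6789] format
# from a list of monitor IPs.
mon_host=""
for ip in 10.0.99.31 10.0.99.32 10.0.99.33; do
    entry="[v2:${ip}:3300,v1:${ip}:6789]"
    mon_host="${mon_host:+$mon_host,}$entry"   # comma-join the entries
done
echo "mon_host = $mon_host"
```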

Check the status and then move to the other two nodes and set up ceph-mon on them.

user@labnode-01:~$ sudo ceph -s
  cluster:
    id:     xxxxxxxxxxxxxxxxxxxxxxx
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum labnode-01 (age 10m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:  
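For scripted checks you often just want the health state out of that output (in a live cluster `ceph health` gives it directly). A sketch over captured output like the above; the here-doc sample stands in for a real `sudo ceph -s`:

```shell
#!/bin/sh
# Extract the health state from captured "ceph -s" output.
status=$(cat <<'EOF'
  cluster:
    id:     xxxxxxxxxxxxxxxxxxxxxxx
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
EOF
)
# The "health:" line has the state as its second field
health=$(printf '%s\n' "$status" | awk '/health:/ { print $2 }')
echo "health: $health"
```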

@srinivassivakumar

Executed the following commands:

1) systemctl start ceph-mon@hostname
2) systemctl status ceph-mon@hostname

ceph-mon@hostname.service: Main process exited, code=exited, status=1/FAILURE
ceph-mon@hostname.service: Failed with result 'exit-code'

Checked the issue with:

sudo ceph-mon -i hostname

'auth_cluster_required' in section 'global' redefined
'auth_service_required' in section 'global' redefined
'auth_client_required' in section 'global' redefined

but my ceph.conf only has these set as:

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

I am using Debian 11 (bullseye); how should I solve this error?
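The "redefined" warnings usually mean the same option appears more than once in the file: Ceph normalizes spaces and underscores in option names, so `auth cluster required` and `auth_cluster_required` count as the same key. A quick sketch for spotting such duplicates, run here against a sample file rather than a live /etc/ceph/ceph.conf:

```shell
#!/bin/sh
# Demonstrate spotting duplicate option keys in a ceph.conf-style file.
# "auth cluster required" and "auth_cluster_required" collide because
# Ceph treats spaces and underscores in option names as equivalent.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[global]
fsid = a9109c9d-cfac-41be-a1bb-468d6b14c9c5
auth cluster required = cephx
auth_cluster_required = cephx
EOF
dupes=$(awk -F= '/=/ {
    key = $1
    gsub(/^[ \t]+|[ \t]+$/, "", key)   # trim surrounding whitespace
    gsub(/[ \t]+/, "_", key)           # normalize spaces to underscores
    if (++count[key] == 2) print key   # report each duplicate once
}' "$conf")
echo "duplicated keys: $dupes"
rm -f "$conf"
```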
