First off, I want to make sure I have all the latest packages on my Debian system.
apt update
apt upgrade
Next we fetch the release key and the Ceph packages; in this case we add the Pacific repository for Buster.
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-pacific/ buster main | sudo tee /etc/apt/sources.list.d/ceph.list
apt update
apt install ceph ceph-common
Lastly we need to install smartmontools on our nodes so we can monitor our hard drives for hardware issues; the version we want comes from buster-backports.
echo deb http://deb.debian.org/debian buster-backports main >> /etc/apt/sources.list
apt update
apt install smartmontools/buster-backports
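Once smartmontools is installed, you can run a quick health check against a drive (the device path here is an example; adjust it for your hardware):

```shell
# Print the SMART overall-health self-assessment for the drive.
sudo smartctl -H /dev/sda
```

smartctl -a prints the full attribute table if you want more detail.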
A reboot after installing packages is always a good idea, and if you need to make any extra hardware changes this is a good time to do so.
shutdown -r now
First we will create a ceph configuration file.
sudo vi /etc/ceph/ceph.conf
The most important things to specify are the IDs and IPs of your cluster monitors, a unique cluster ID (fsid) that you will reuse on all your nodes, and the public network range that your monitors should be reachable on. The cluster network is a good addition if you have the resources to route the recovery traffic over a dedicated backbone network.
[global]
fsid = {cluster uuid}
mon initial members = {id1}, {id2}, {id3}
mon host = {ip1}, {ip2}, {ip3}
public network = {network range for your public network}
cluster network = {network range for your cluster network}
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
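The {cluster uuid} placeholder is just a UUID. On Linux you can generate one without installing anything (uuidgen from the uuid-runtime package works too):

```shell
# Generate a random UUID to use as the cluster fsid.
cat /proc/sys/kernel/random/uuid
```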
Next we create keys for the admin user, the monitors, and for bootstrapping our drives. The admin and bootstrap keys are then merged into the monitor keyring so the initial setup has all the keys needed for later operations.
sudo ceph-authtool --create-keyring /tmp/monkey --gen-key -n mon. --cap mon 'allow *'
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
sudo ceph-authtool /tmp/monkey --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/monkey --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
Make the monitor key available to the ceph user so we don't get a permission error when we start our services.
sudo chown ceph:ceph /tmp/monkey
Next up we create a monitor map so the monitors will know of each other. The monitors keep track of the other resources in the cluster, but for high availability they need to agree on who is in charge.
monmaptool --create --add n1 192.168.6.44 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
monmaptool --add n2 192.168.6.42 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
monmaptool --add n3 192.168.6.43 --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
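To sanity-check the map before using it, monmaptool can print it back (this needs the Ceph packages installed, but no running cluster):

```shell
# Show the fsid, epoch, and the three monitors recorded in the map.
monmaptool --print /tmp/monmap
```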
Starting a new monitor is as easy as creating its data directory, initializing the monitor's store with the map and keyring, and starting the service.
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n1
sudo -u ceph ceph-mon --mkfs -i n1 --monmap /tmp/monmap --keyring /tmp/monkey
sudo systemctl start ceph-mon@n1
Next up we need a manager so we can configure and monitor our cluster through a visual dashboard. First we create a new key, put that key in a newly created directory, and start the service. Enabling the dashboard is as easy as enabling the module, creating a self-signed certificate, and creating a new admin user.
sudo ceph auth get-or-create mgr.n1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n1
sudo -u ceph vi /var/lib/ceph/mgr/ceph-n1/keyring
sudo systemctl start ceph-mgr@n1
sudo ceph mgr module enable dashboard
sudo ceph dashboard create-self-signed-cert
sudo ceph dashboard ac-user-create admin -i passwd administrator
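The -i flag on the ac-user-create command reads the dashboard password from a file, so create that file first (the password value here is a placeholder):

```shell
# Write the dashboard admin password to the file that -i reads.
printf '%s' 'MySecretPassword' > passwd
```

You may want to remove the file once the user has been created.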
To prepare the second node, copy the configuration, keyrings, and monitor map over from the first node (woden is the user account on these machines):
sudo scp woden@n1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
sudo scp woden@n1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
sudo scp woden@n1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo scp woden@n1:/tmp/monmap /tmp/monmap
sudo scp woden@n1:/tmp/monkey /tmp/monkey
Then create the monitor on n2 the same way as on n1:
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-n2
sudo -u ceph ceph-mon --mkfs -i n2 --monmap /tmp/monmap --keyring /tmp/monkey
sudo systemctl start ceph-mon@n2
Check the cluster status; if the health warning reports that the monitors have not enabled msgr2, turn on the v2 protocol:
sudo ceph -s
sudo ceph mon enable-msgr2
The manager on n2 is created the same way as on n1:
sudo ceph auth get-or-create mgr.n2 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-n2
sudo -u ceph vi /var/lib/ceph/mgr/ceph-n2/keyring
sudo systemctl start ceph-mgr@n2
Finally we add a disk as an OSD. ceph-volume lvm prepare sets up the device as an LVM volume and registers it with the cluster, and activate starts the OSD service; use the OSD id and OSD uuid that the prepare command prints:
sudo ceph-volume lvm prepare --data /dev/sdb
sudo ceph-volume lvm activate 0 5d1e5cee-0b12-439c-8902-93c298cf9ed7
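Once the OSD is activated, the cluster should report it as up and in:

```shell
# List OSDs with their host placement and status.
sudo ceph osd tree
```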
Helpful write up. Thanks @kalaspuffar
It took me a while to work this out, but msgr2 can be enabled when you first set up the monitors with the following changes to your doc. Issue #53751 is related.
In /etc/ceph/ceph.conf, only add the one node as an initial mon member, but list all mon hosts in the bracketed v2/v1 address format; the square brackets are important, it appears. Now when you create the monmap, use this new format to represent the hosts. Note: ensure you now use --addv and not --add. Check the status and then move to the other two nodes and set up ceph-mon on them.
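The comment's own config and command examples did not survive, but based on the Ceph documentation for messenger v2 addressing, the format it describes should look like this (IPs and fsid taken from the monmap commands above; 3300 and 6789 are the default v2 and v1 ports):

```shell
# /etc/ceph/ceph.conf -- one initial member, all hosts with explicit v2/v1 addresses:
#   mon initial members = n1
#   mon host = [v2:192.168.6.44:3300,v1:192.168.6.44:6789], [v2:192.168.6.42:3300,v1:192.168.6.42:6789], [v2:192.168.6.43:3300,v1:192.168.6.43:6789]

# Build the monmap with --addv instead of --add, using the same bracketed form.
monmaptool --create --addv n1 [v2:192.168.6.44:3300,v1:192.168.6.44:6789] --fsid a9109c9d-cfac-41be-a1bb-468d6b14c9c5 /tmp/monmap
monmaptool --addv n2 [v2:192.168.6.42:3300,v1:192.168.6.42:6789] /tmp/monmap
monmaptool --addv n3 [v2:192.168.6.43:3300,v1:192.168.6.43:6789] /tmp/monmap
```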