
@scyto
Last active February 24, 2025 09:25

Revisions

  1. scyto revised this gist Aug 7, 2023. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gluster-fs-install.md
    @@ -85,7 +85,7 @@ gluster volume info gluster-vol1
    `sudo nano /etc/fstab` edit the fstab

    add the following as the last line in the fstab
    Before: `localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev 0 0`
    After: `localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0`
    exit and save

    `sudo mount -a` should mount with no errors
  2. scyto revised this gist Feb 13, 2022. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion gluster-fs-install.md
    @@ -1,4 +1,4 @@
    Before: Assumes you installed debian, docker, etc as [per the list here](/4624361c4e8c3be2aad9b3f0073c7f9/)
    After: Assumes you installed debian, docker, etc as [per the list here](/f4624361c4e8c3be2aad9b3f0073c7f9)

    ## Assumptions
    - I will have one gluster volume i will call `glusterfs-vol1`
  3. scyto revised this gist Feb 3, 2022. 1 changed file with 96 additions and 47 deletions.
    143 changes: 96 additions & 47 deletions gluster-fs-install.md
    @@ -1,50 +1,99 @@
    Assumes you installed debian, docker, etc as [per the list here](/4624361c4e8c3be2aad9b3f0073c7f9/)

    ## Assumptions
    - I will have one gluster volume I will call `glusterfs-vol1`
    - I will install glusterfs on my docker nodes (best practice is to have separate dedicated VMs)
    - I have 3 nodes in my cluster (docker01, docker02, docker03)
    - I will have one brick per node (brick1, brick2, brick3)
    - the volume will be dispersed - [more on volume types](https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes)

    ## Prepare Disks
    ### Prepare disks on hypervisor
    Add a VHD to each of your docker host VMs - for example a 100GB Volume
    If using Hyper-V this can be done without rebooting (add it as new SCSI VHD and the VM OS will detect it instantly)

    on docker01 (note the 50gb disk has different dev node because i did iscsi)
    lsblk to confirm device node
    sudo fdisk /dev/sdc (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdc1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdc1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick1
    ls -al /dev/disk/by-uuid/
    UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0
    findmnt --verify

    on docker02
    lsblk to confirm device node
    sudo fdisk /dev/sdb (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdb1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdb1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick2
    ls -al /dev/disk/by-uuid/
    UUID=8a5eeebb-a268-45ac-a2c7-e2cef9b2d8d2 /mnt/glusterfs xfs defaults 0 0
    findmnt --verify


    on docker03
    lsblk to confirm device node
    sudo fdisk /dev/sdb (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdb1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdb1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick3
    UUID=de71ad65-65f6-4291-8708-269f3e2a2b5f /mnt/glusterfs xfs defaults 0 0
    findmnt --verify

    on docker01
    sudo gluster volume create gluster-vol1 disperse 3 redundancy 1 docker01.alexbal.com:/mnt/glusterfs/vol1-brick1 docker02.alexbal.com:/mnt/glusterfs/vol1-brick2 docker03.alexbal.com:/mnt/glusterfs/vol1-brick3
    sudo gluster volume start gluster-vol1
    sudo gluster volume info gluster-vol1


    on all docker hosts
    sudo mkdir /mnt/gluster-vol1
    localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev 0 0
    findmnt --verify << can't be used, doesn't support checking glusterfs
    mount -a
    df /mnt/gluster-vol1/

    (make sure you get the expected mounted output not the details of the mount point)
    ### Partition, format and mount the underlying storage (be careful)
    Perform these steps on every node.

    `sudo lsblk` to confirm the device node (should be sdb, but could be different if you diverged from any of the earlier gists)

    `sudo fdisk /dev/sdb` (then `g`, then `n` accepting the defaults, and lastly `w` to write out the changes)

    `sudo mkfs.xfs /dev/sdb1` (this formats the new partition with XFS)

    `sudo mkdir /mnt/glusterfs` (this is where you will mount the new partition)

    `sudo mount /dev/sdb1 /mnt/glusterfs` (this mounts the partition; it is used in the next steps, so don't skip it).
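The five steps above can also be scripted. This is a minimal sketch, assuming the new disk really is `/dev/sdb` and using `parted --script` as a non-interactive stand-in for fdisk's `g`/`n`/`w` sequence; it defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of the partition/format/mount steps above (assumes the disk is /dev/sdb).
# DRY_RUN defaults to 1 so the script only prints the commands; set DRY_RUN= to execute.
DISK="${DISK:-/dev/sdb}"
DRY_RUN="${DRY_RUN:-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "sudo $*"; else sudo "$@"; fi; }

run parted --script "$DISK" mklabel gpt mkpart primary xfs 0% 100%  # GPT label + one partition
run mkfs.xfs "${DISK}1"                                             # format it with XFS
run mkdir -p /mnt/glusterfs                                         # create the mount point
run mount "${DISK}1" /mnt/glusterfs                                 # mount it
```

Compare the dry-run output against the manual steps before letting it touch a real disk.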


    ### on docker01
    `sudo mkdir /mnt/glusterfs/vol1-brick1` this creates the folder where brick1 will be stored

    `sudo ls -al /dev/disk/by-uuid/` this gets you the UUID for the partition you created earlier

    edit fstab (be careful) with `sudo nano /etc/fstab`
    add the following line as the last line in fstab, using the UUID you got in the last step, not the one here.
    `UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0`

    `sudo findmnt --verify` - you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.
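Instead of copying the UUID by hand, the fstab line can be generated from the partition itself. A small sketch; `make_fstab_line` is an illustrative helper, not part of the gist, and the UUID below is only an example:

```shell
# Build the fstab entry for the brick filesystem from a UUID string.
make_fstab_line() {
  printf 'UUID=%s /mnt/glusterfs xfs defaults 0 0\n' "$1"
}

# On a real node, feed it the live UUID and append to fstab, e.g.:
#   make_fstab_line "$(sudo blkid -s UUID -o value /dev/sdb1)" | sudo tee -a /etc/fstab
make_fstab_line "c78ed594-ef62-445d-9486-10938f49b603"
# prints: UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0
```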

    ### on docker02
    `sudo mkdir /mnt/glusterfs/vol1-brick2` this creates the folder where brick2 will be stored

    `sudo ls -al /dev/disk/by-uuid/` this gets you the UUID for the partition you created earlier

    edit fstab (be careful) with `sudo nano /etc/fstab`
    add the following line as the last line in fstab, using the UUID you got in the last step, not the one here.
    `UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0`

    `sudo findmnt --verify` - you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.

    ### on docker03
    `sudo mkdir /mnt/glusterfs/vol1-brick3` this creates the folder where brick3 will be stored

    `sudo ls -al /dev/disk/by-uuid/` this gets you the UUID for the partition you created earlier

    edit fstab (be careful) with `sudo nano /etc/fstab`
    add the following line as the last line in fstab, using the UUID you got in the last step, not the one here.
    `UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0`

    `sudo findmnt --verify` - you should see no errors related to /dev/sdb or /dev/sdb1 (you may see errors about the CD-ROM; those can be ignored). If you get ANY errors, do not proceed until you have checked your previous work.

    ## Install & Configure GlusterFS

    ### On all nodes:
    ```
    sudo apt-get install glusterfs-server
    sudo systemctl start glusterd
    sudo systemctl enable glusterd
    ```

    ## Create the glusterfs volume
    ### On the master node (docker01) - note: run `sudo -s` once rather than typing `sudo` before each command
    ```
    sudo -s
    gluster peer probe docker02.yourdomain.com; gluster peer probe docker03.yourdomain.com
    gluster pool list
    gluster volume create gluster-vol1 disperse 3 redundancy 1 docker01.yourdomain.com:/mnt/glusterfs/vol1-brick1 docker02.yourdomain.com:/mnt/glusterfs/vol1-brick2 docker03.yourdomain.com:/mnt/glusterfs/vol1-brick3
    gluster volume start gluster-vol1
    gluster volume info gluster-vol1
    ```
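For context on what `disperse 3 redundancy 1` buys you: data is erasure-coded across the 3 bricks, any 1 brick can fail without data loss, and usable capacity is (bricks - redundancy) x brick size. A quick check of that arithmetic, assuming the 100GB bricks from earlier:

```shell
# Capacity math for a disperse 3 / redundancy 1 volume (100GB bricks assumed as an example).
BRICKS=3
REDUNDANCY=1
BRICK_GB=100
USABLE=$(( (BRICKS - REDUNDANCY) * BRICK_GB ))
echo "usable capacity: ${USABLE}GB"             # prints: usable capacity: 200GB
echo "tolerated brick failures: ${REDUNDANCY}"  # prints: tolerated brick failures: 1
```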

    ### on all docker hosts
    `sudo mkdir /mnt/gluster-vol1` make the mount point

    `sudo nano /etc/fstab` edit the fstab

    add the following as the last line in the fstab
    `localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev 0 0`
    exit and save

    `sudo mount -a` should mount with no errors

    `sudo df /mnt/gluster-vol1/` should return details about the gluster file system (size etc)
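For a scriptable check that the volume is really mounted (and not just an empty directory sitting at the mount point), `mountpoint` from util-linux can be used; a minimal sketch:

```shell
# mountpoint -q exits 0 only if the path is an actual mount point.
if mountpoint -q /mnt/gluster-vol1; then
  echo "gluster-vol1 is mounted"
else
  echo "gluster-vol1 is NOT mounted"
fi
```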


    To test, create a file using touch: `sudo touch /mnt/gluster-vol1/hello-world-txt`, then check for that file at the same path on the other 2 nodes. If you did everything correctly you now have a redundant, fault-tolerant file system!


    Note: `findmnt --verify` can't be used for this mount as it doesn't support checking glusterfs
  4. scyto created this gist Feb 3, 2022.
    50 changes: 50 additions & 0 deletions gluster-fs-install.md
    @@ -0,0 +1,50 @@



    on docker01 (note the 50gb disk has different dev node because i did iscsi)
    lsblk to confirm device node
    sudo fdisk /dev/sdc (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdc1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdc1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick1
    ls -al /dev/disk/by-uuid/
    UUID=c78ed594-ef62-445d-9486-10938f49b603 /mnt/glusterfs xfs defaults 0 0
    findmnt --verify

    on docker02
    lsblk to confirm device node
    sudo fdisk /dev/sdb (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdb1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdb1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick2
    ls -al /dev/disk/by-uuid/
    UUID=8a5eeebb-a268-45ac-a2c7-e2cef9b2d8d2 /mnt/glusterfs xfs defaults 0 0
    findmnt --verify


    on docker03
    lsblk to confirm device node
    sudo fdisk /dev/sdb (then g, then n and w to write out)
    sudo mkfs.xfs /dev/sdb1
    sudo mkdir /mnt/glusterfs
    sudo mount /dev/sdb1 /mnt/glusterfs
    sudo mkdir /mnt/glusterfs/vol1-brick3
    UUID=de71ad65-65f6-4291-8708-269f3e2a2b5f /mnt/glusterfs xfs defaults 0 0
    findmnt --verify

    on docker01
    sudo gluster volume create gluster-vol1 disperse 3 redundancy 1 docker01.alexbal.com:/mnt/glusterfs/vol1-brick1 docker02.alexbal.com:/mnt/glusterfs/vol1-brick2 docker03.alexbal.com:/mnt/glusterfs/vol1-brick3
    sudo gluster volume start gluster-vol1
    sudo gluster volume info gluster-vol1


    on all docker hosts
    sudo mkdir /mnt/gluster-vol1
    localhost:/gluster-vol1 /mnt/gluster-vol1 glusterfs defaults,_netdev 0 0
    findmnt --verify << can't be used, doesn't support checking glusterfs
    mount -a
    df /mnt/gluster-vol1/

    (make sure you get the expected mounted output not the details of the mount point)