Deploy Hyper-Converged Ceph Cluster
Introduction
In this guide we will deploy a hyper-converged Ceph cluster on a Proxmox VE cluster. This guide assumes you have a Proxmox VE cluster with at least three nodes. If you do not have a Proxmox VE cluster, you can follow our guide on Clustering Proxmox.
This guide will cover the following topics:
- Installing Ceph on Proxmox
- Configuring the Ceph cluster
- Creating a Ceph pool
This guide will cover the deployment of a hyper-converged Ceph cluster using the Proxmox Web UI. If you prefer to use the command line, you can refer to the official Proxmox documentation.
What is Ceph?
Ceph is a distributed storage system that provides object storage, block storage, and file storage. Ceph is designed to be scalable and fault-tolerant, making it an ideal storage solution for cloud environments.
Prerequisites
Before you begin, you will need the following:
- A Proxmox VE cluster with at least three nodes (see our Clustering Proxmox guide)
- A dedicated network for Ceph traffic (recommended)
- At least one additional disk per node for Ceph OSDs
Step by Step Guide
Install Ceph on Proxmox
When installing Ceph on Proxmox without a Proxmox subscription, you will need to use the No Subscription repository.
To install Ceph on Proxmox, follow these steps:
- Log in to the Proxmox Web UI.
- Click on the Datacenter node in the tree view.
- Click on the Ceph tab.
- Click on the Install button.
- Follow the on-screen instructions to install Ceph on your Proxmox cluster. We will be using reef (18.2) as the Ceph version.
- Click on the Repository drop-down menu, and select the No Subscription repository.
- Click on the Start reef installation button. This will install Ceph on your Proxmox cluster. You may be prompted to enter [Y/n] to confirm the installation; enter Y to continue.
- Once the installation is complete, you will be able to click the Next button. Click Next to continue.
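If you prefer the command line, Ceph can also be installed with the pveceph tool that ships with Proxmox VE. A minimal sketch, assuming the reef release and the No Subscription repository; exact flag names can vary between Proxmox versions:

```bash
# Run on every node that will be part of the Ceph cluster.
# Installs the Ceph packages from the No Subscription repository.
pveceph install --repository no-subscription --version reef
```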
Configure Ceph Cluster
- On the Configuration screen, select a Public Network IP/CIDR and a Cluster Network IP/CIDR. These networks should be separate from your Proxmox management network.
- If you wish to configure the Number of replicas and the Minimum replicas, tick the Advanced checkbox at the bottom of the screen. For this guide, we will leave the default settings.
- Click on the Next button to continue.
- On the Success screen, review the next steps; you will need to follow these to complete the Ceph cluster configuration.
- Click on the Finish button to complete the installation.
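The same initial configuration can be done from the shell with pveceph init. A minimal sketch; the two subnets below are placeholders for your own Ceph public and cluster networks:

```bash
# Write the initial Ceph configuration (run once, on one node).
# --network is the Ceph public network; --cluster-network carries OSD replication traffic.
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```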
Create additional Ceph Monitors
It is recommended to have an odd number of monitors in your Ceph cluster. For this guide, we will be creating three monitors.
- Click on the Node in the tree view on which you have installed Ceph.
- Similarly to the previous steps, click on the Ceph drop-down menu, and click on the Monitor option.
- Click on the Create button.
- Select a Host from the drop-down menu, and click on the Create button.
- Repeat the above steps to create additional monitors for each node in your Proxmox cluster.
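On the command line, a monitor is created on whichever node you run the command from. A minimal sketch:

```bash
# Run on each node that should host a Ceph monitor (three nodes in this guide).
pveceph mon create
```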
Create additional Ceph Managers (Optional)
This step is optional, but it is recommended to have multiple managers in your Ceph cluster for redundancy.
These steps are similar to creating additional monitors.
- In the same screen where you created the monitors, click on the Create button under the Manager section.
- Select a Host from the drop-down menu, and click on the Create button.
- Repeat the above steps to create additional managers for each node in your Proxmox cluster.
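The CLI equivalent is just as short; a minimal sketch:

```bash
# Run on each node that should host an additional (standby) Ceph manager.
pveceph mgr create
```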
Create Ceph OSDs
You must have at least one OSD per node in your Ceph cluster, and it is recommended to have multiple OSDs per node for redundancy.
For this guide, we will be creating one OSD per node. You can create additional OSDs by following the same steps.
If you have added additional Managers, you can configure Ceph from any Node in your Proxmox cluster. Otherwise, you will need to configure Ceph from the Node where you initially installed Ceph.
- In the tree view, click on the Node that is your Ceph Manager.
- Click on the Ceph drop-down menu, and click on the OSD option.
- Click on the Create: OSD button.
- From the Disk drop-down menu, select the disk on which you want to create the OSD.
- If you wish to configure a different DB Disk for the OSD, you can select a disk from the DB Disk drop-down menu. For this guide, we will leave this option as use OSD disk. Additionally, we will leave the Advanced options as default. For more information on these options, you can refer to the official Proxmox documentation.
- Click on the Create button to create the OSD.
- Repeat the above steps on each node in your Proxmox cluster so that every node has at least one OSD.
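From the shell, an OSD is created on the node that owns the disk. A minimal sketch; /dev/sdb and /dev/nvme0n1 are placeholder device names for your data disk and an optional faster DB disk:

```bash
# Create an OSD on this node using an empty, unused disk (placeholder device name).
pveceph osd create /dev/sdb

# Alternatively, keep the OSD's metadata/DB on a separate, faster device:
# pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
```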
Create CephFS and Metadata Servers (Optional)
To create a CephFS, you can follow these steps:
- In the tree view, click on the Node that is your Ceph Manager.
- Click on the Ceph drop-down menu, and click on the CephFS option.
- Click on the Create CephFS button.
- Enter a Name for the CephFS, and configure the options as required. This guide will not cover the configuration of these options.
- Click on the Create button.
If you wish to configure Metadata Servers, you can do so by following these steps:
- On the same screen where you created the CephFS, click on the Create button under the Metadata Servers section.
- Select a Host from the drop-down menu, and enter an Extra ID for the Metadata Server.
- Click on the Create button.
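Both the metadata servers and the CephFS itself can also be created with pveceph. A minimal sketch, assuming the default filesystem name cephfs; note that at least one metadata server must be running before the CephFS can become active:

```bash
# Run on each node that should host a metadata server (extra ones act as standby).
pveceph mds create

# Create the CephFS and add it as a storage entry in Proxmox (run once).
pveceph fs create --name cephfs --add-storage
```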
Create Ceph Pools
Lastly, you must create a Ceph pool to store your data. Don't worry, you are almost there!
- In the tree view, click on the Node that is your Ceph Manager.
- Click on the Ceph drop-down menu, and click on the Pools option.
- Click on the Create button.
- Enter a Name for the pool, and configure the options as required. For this guide, we will leave the options as default.
- Click on the Create button.
The Ceph Pool should now have been added to your tree view, under the Node where you created it.
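If you prefer the shell, creating the pool is a single command. A minimal sketch; mypool is a placeholder name, and the size values shown simply mirror the defaults of 3 replicas with a minimum of 2:

```bash
# Create a replicated pool (placeholder name) with the default 3/2 replica settings.
pveceph pool create mypool --size 3 --min_size 2
```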
Conclusion
Congratulations! You have successfully deployed a hyper-converged Ceph cluster on your Proxmox VE cluster. You can now use the Ceph cluster to store your VMs and LXC containers, as well as configure High Availability (HA) for your Virtual Guests. A guide on Proxmox HA can be found below.
Next Steps
Happy Hosting!