Multi-RBD performance does not scale up as well as fio-rbd #939
We noticed that currently one ceph-nvmeof gateway creates only one Ceph IO context (RADOS connection) with the Ceph cluster, whereas fio creates one Ceph IO context per running job. According to the two performance tuning guides referenced below, a single Ceph IO context cannot handle read/write access to many RBD images well. See pages 9-10 of:
We currently create a cluster context for every X images. This is configurable via the "bdevs_per_cluster" parameter in ceph-nvmeof.conf. Note that currently this is done per ANA group (for reasons related to failback and blocklisting), but we are going to make it flat again. So you can set this to 1 if you want one Ceph IO context per image, or to a higher value.
Sounds cool, thanks @caroav. Will give it a try.
Yes, I need to update the entire upstream nvmeof documentation. I will do it soon.
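For illustration, a minimal sketch of how the bdevs_per_cluster setting discussed above might be set in ceph-nvmeof.conf. The section placement and the comment explanations are assumptions, not taken from this issue; check your own gateway version's config file for the authoritative layout.

```ini
# Sketch only: the section name and surrounding keys are assumptions.
[spdk]
# Number of RBD bdevs that share a single Ceph IO context (RADOS connection).
# 1 = one Ceph IO context per image (closest to fio-rbd, where each job
#     has its own connection); larger values make several images share one.
bdevs_per_cluster = 1
```

With bdevs_per_cluster = 1 the gateway should open one RADOS connection per exported image, which is the closest match to the fio-rbd setup where each job has its own IO context.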
We ran some 4k random read/write performance tests on the testbed below and found that NVMe-oF gateway multi-RBD performance does not scale as well as fio-rbd.
Hardware
Software
Deployment and Parameter Tuning
FYI, in case someone is interested in the details of the hybrid x86 and Arm Ceph NVMe-oF gateway cluster deployment, please refer to the attached PDF:
Ceph SPDK NVMe-oF Gateway Evaluation on openEuler on openEuler (1).pdf
Fio Run Commands and Configs
We run fio tests on the client node with the following command:
RW=randwrite BS=4k IODEPTH=128 fio ./[fio_test-rbd.conf|fio_test-nvmeof.conf] --numjobs=1
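The job files themselves are not attached to the issue, so the following is only a rough sketch of what an rbd-engine job file driven by those environment variables could look like; the pool, client name, image names, and runtime settings are assumptions, not the actual test configuration.

```ini
# fio_test-rbd.conf — hypothetical sketch, not the actual file used in the tests.
# ${RW}, ${BS} and ${IODEPTH} are expanded by fio from the environment
# variables set on the command line above.
[global]
ioengine=rbd
clientname=admin          ; assumption: cephx user used for the test
pool=rbd                  ; assumption: pool holding the test images
rw=${RW}
bs=${BS}
iodepth=${IODEPTH}
direct=1
time_based=1
runtime=300               ; assumption: test duration
group_reporting=1

; Each job section opens its own Ceph IO context (RADOS connection),
; which is the fio behavior contrasted with the gateway in this issue.
[image1]
rbdname=image1

[image2]
rbdname=image2
```

The fio_test-nvmeof.conf counterpart would typically use ioengine=libaio (or io_uring) against the /dev/nvmeXnY block devices exposed by the gateway instead of the rbd engine, with the same rw/bs/iodepth settings.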