Gluster expects bricks to be hosted on newly peer-probed machine... #4386
I can confirm this. Volumes are replicate and distributed-replicate.
DON'T upgrade your cluster to 11.1 and then try to join a new peer! I upgraded 9.6 -> 10.5 flawlessly, but after upgrading the first node 10.5 -> 11.1 it was no longer able to rejoin the cluster:
Downgrading to 10.5 unfortunately doesn't work either:
leaving you in an unhealthy state.
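One precaution that can help before probing a peer running a newer GlusterFS (a sketch, not something from this thread): compare the cluster's running op-version against the maximum op-version the installed binaries support, via `gluster volume get all cluster.op-version` and `cluster.max-op-version`. The `get_opt` helper below is hypothetical and operates on captured output, so it can be tried without a live cluster; the sample numbers are illustrative.

```shell
#!/bin/sh
# Hypothetical helper: extract an option's value from the two-column
# output of `gluster volume get all` (option name, then value).
get_opt() {
    awk -v opt="$1" '$1 == opt { print $2 }'
}

# Illustrative captured output; on a live node use:
#   gluster volume get all cluster.op-version
sample='cluster.op-version                       100000
cluster.max-op-version                   110000'

current=$(printf '%s\n' "$sample" | get_opt cluster.op-version)
maximum=$(printf '%s\n' "$sample" | get_opt cluster.max-op-version)

# If the running op-version lags behind what the binaries support,
# raising it first may avoid mixed-version probe trouble:
#   gluster volume set all cluster.op-version <maximum>
if [ "$current" -lt "$maximum" ]; then
    echo "op-version $current below max $maximum"
fi
```

On a live node, replace the `sample` string by piping the real `gluster volume get all` output into `get_opt`.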
Solved this by:
Hope that, after spending a full day on this, sharing this information here will prevent you from ending up in the same situation.
Finally, you are able to expand the cluster with a new 11.1 node.
Cross-linking a related issue I found while searching for a way to upgrade a GlusterFS cluster from 10.5 to 11.1 without downtime: #4409. Still trying to understand whether the new NFS Ganesha could be enabled temporarily to remove that deprecated "nfs.disabled" setting, which apparently can't be removed with a command?
Thank you @apunkt! It didn't prevent me from spending a few hours on this issue, but I was happy to finally find someone who had found the culprit!
I'm trying to move data from one brick to a new one. The new brick is to be hosted on a new machine. Peer probing the new machine succeeds:
However, after that, gluster cannot seem to access any information on my bricks/volumes anymore, reporting it's looking for my bricks on the freshly probed peer:
The gluster volumes appear to be happily humming along otherwise. The only way I found to get it working again was detaching the freshly attached peer.
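The detach workaround can be sketched as follows. `gluster peer status` and `gluster peer detach` are real commands; the `peer_hostnames` helper and the sample output are illustrative, assuming the `Hostname: <name>` line format that `peer status` prints.

```shell
#!/bin/sh
# Hypothetical helper: pull peer hostnames out of `gluster peer status`
# output (lines of the form "Hostname: <name>").
peer_hostnames() {
    awk -F': ' '/^Hostname:/ { print $2 }'
}

# Illustrative captured output; on a live node use: gluster peer status
sample='Number of Peers: 1

Hostname: gluster20241
Uuid: 11111111-2222-3333-4444-555555555555
State: Peer in Cluster (Connected)'

peer=$(printf '%s\n' "$sample" | peer_hostnames)
echo "detaching $peer"
# The actual recovery step on a live node (destructive, run deliberately):
#   gluster peer detach "$peer"
```

Afterwards, `gluster volume status` and `gluster volume heal <vol> info` should report against the original bricks again.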
The full output of the command that failed:
See above
Expected results:
A report of heal counts (or other volume information when requested).
Mandatory info:
- The output of the `gluster volume info` command:
- The output of the `gluster volume status` command:
- The output of the `gluster volume heal` command: Not particularly relevant
- Provide logs present on the following locations of client and server nodes:
/var/log/glusterfs/glusterd.log on gluster2:
glusterd.log on gluster20241 (sorry, not sure which is the relevant part here):
- Is there any crash? Provide the backtrace and coredump:
None
Additional info:
- The operating system / glusterfs version:
Existing machines:
Gluster 10.5 on Ubuntu Jammy
New machine (peer probed):
Gluster 11.1 on Ubuntu Jammy
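Given the mixed versions above, a quick way to confirm what each node actually runs is to parse the first line of `glusterfs --version`, which looks like `glusterfs 10.5`. The comparison helper below is a hypothetical sketch working on sample strings, so it runs without gluster installed.

```shell
#!/bin/sh
# Hypothetical helper: pull "major.minor" out of a `glusterfs --version`
# first line such as "glusterfs 10.5".
gluster_version() {
    awk 'NR == 1 { print $2; exit }'
}

# Illustrative first lines; on each node use: glusterfs --version | head -1
old=$(printf 'glusterfs 10.5\n' | gluster_version)
new=$(printf 'glusterfs 11.1\n' | gluster_version)

# Flag a major-version mismatch before probing (compare the part before the dot).
old_major=${old%%.*}
new_major=${new%%.*}
if [ "$old_major" -ne "$new_major" ]; then
    echo "major version mismatch: $old vs $new"
fi
```

Running this check on every node before `gluster peer probe` would have surfaced the 10.5/11.1 mismatch this issue describes.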