S3-Mount breaks after starting plex #1268
Comments
Please provide all the details as requested. I really can't help otherwise. It isn't clear what is even breaking. Is mergerfs still running? rclone?
The rclone mounts are still stable. The only part that "breaks" is mergerfs, which is still running because the gdrive part is still in there; what's missing is the s3 part. If I only add the s3 mount to the mergerfs pool, it breaks as well. I'm happy to provide any further information, but unfortunately I don't know what that information might be.
That's my rclone unit, by the way (truncated as posted):

```ini
[Unit]

[Service]
ExecStop=/bin/umount -lf "/mnt/s3" > /dev/null

[Install]
```
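For context, a complete rclone mount unit usually looks roughly like the sketch below. The remote name (`s3:`), binary path, and flags are assumptions, since the unit above was posted truncated:

```ini
[Unit]
Description=rclone S3 mount
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
# "s3:" is a hypothetical remote name; the actual mount flags were not included above
ExecStart=/usr/bin/rclone mount s3: /mnt/s3 --allow-other
ExecStop=/bin/umount -lf /mnt/s3
Restart=on-failure

[Install]
WantedBy=default.target
```

Note that systemd `Exec*=` lines are not interpreted by a shell, so the `> /dev/null` in the original ExecStop is passed to umount as literal arguments rather than acting as a redirection.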
I describe it in the docs and in the ticket template: https://github.com/trapexit/mergerfs#support |
Describe the bug
I use rclone to create a gdrive mount and an S3 mount. Both mounts are stable. I would like to merge both mounts using mergerfs. However, this only works until I start Plex; after that, the merge breaks.
before starting plex:

```
drwxrwxr-x 1 docker docker 0 Oct 21 22:14 mergerfs
```

after starting plex:

```
d????????? ? ? ? ? ? mergerfs
```
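For what it's worth, question marks like this usually mean `stat()` on the mountpoint is failing, typically with "Transport endpoint is not connected" after a FUSE process has died or hung. A quick way to see which layer is affected, using the paths from this report:

```sh
# stat() failing on the mountpoint usually reports ENOTCONN:
stat /mnt/mergerfs

# check which of the three mounts are still in the mount table:
findmnt /mnt/s3
findmnt /mnt/gdrive
findmnt /mnt/mergerfs

# the kernel log often notes an aborted FUSE connection:
dmesg | tail
```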
To Reproduce
```ini
[Unit]
Description=MergerFS Mount
After=network-online.target

[Service]
Type=forking
GuessMainPID=no
ExecStart=/usr/bin/mergerfs /mnt/s3=RO:/mnt/gdrive=RO: /mnt/mergerfs
ExecStop=/bin/fusermount3 -uz /mnt/mergerfs
Restart=on-failure

[Install]
WantedBy=default.target
```
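One detail worth noting in the unit above: it is ordered only after network-online.target, not after the rclone mounts it merges. A minimal sketch of explicit ordering, assuming the rclone units are named rclone-s3.service and rclone-gdrive.service (hypothetical names):

```ini
[Unit]
Description=MergerFS Mount
# hypothetical rclone unit names; starts mergerfs only after both branches exist
Requires=rclone-s3.service rclone-gdrive.service
After=rclone-s3.service rclone-gdrive.service
```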
Expected behavior
Since I have only used mergerfs with gdrive mounts so far, and everything has worked very reliably, I expect the same reliability from the S3 mount.
System information:

```
gdrive: 1.0P  0  1.0P  0% /mnt/gdrive
s3:     1.0P  0  1.0P  0% /mnt/s3
```