Network storage cannot be mounted at /data/mounts/<name> because it is not empty #4358
Comments
I think the backup file is still in this folder, because the following works:
This worked for me by not creating the (named) folder on my NAS. "homeassistant" is the folder on my NAS. It did not create a new folder inside it called "backup", but files are created there when I do a backup. Not sure if that helps. name: backup
Identical problem with the server off, on 2023.6.1.
Also running into this after a network issue caused me to remove and re-add a CIFS mount with the same name as before, "Backups". On the filesystem I don't see the folder it is trying to mount into. Not sure how to move forward to clean up what was left behind.
I'm having the same issue with an NFS share. It says it's mounted, but I can't actually write a backup file to it. The backup fails silently, which is a huge issue. I made several changes with the backup box checked thinking I had backups, but I didn't. Good thing nothing broke. While I really appreciate the effort of adding an external backup option, this implementation seems dangerous since there is no check to see if the backup was actually made.
I encountered this same issue after my file server crashed, bringing down the backup samba share with it. It appears HA will silently write backups to the mount location even when nothing is mounted. To clean it up, I had to connect to the host system console (I have an HA Yellow, so I followed the serial console debugging directions at https://yellow.home-assistant.io/documentation/). It appears these mount points aren't exposed to the docker container that runs the terminal add-on, so connecting to the host system directly was the only way to access them. Once connected, I ran the following to move the misplaced backups to the local backup location and then reboot the host to allow it to cleanly remount the drive:
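A sketch of that cleanup, assuming HAOS defaults: the Supervisor data directory (seen as /data inside the container) lives at /mnt/data/supervisor on the host, and the mount is named "Backups". Both the paths and the mount name here are assumptions, not the author's exact commands:

```bash
# Run on the HAOS host console, not inside a container.
# /data/mounts/Backups inside the Supervisor container corresponds to
# /mnt/data/supervisor/mounts/Backups on the host (assumed HAOS layout).

# Move the misplaced backup archives into the local backup folder...
mv /mnt/data/supervisor/mounts/Backups/*.tar /mnt/data/supervisor/backup/

# ...then reboot so the share can be cleanly remounted.
reboot
```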
So it seems like a few things need fixing here:
I'm having the same problem. I'm not able to locate the mount point /data/mounts/backup anywhere.
@CAHutchins what kind of HA setup are you running? If you're using HAOS, you need to access the mount point from the underlying host itself, at the path in my previous post.
@asayler, I'm running on a Raspberry Pi. I just found the command to access the supervisor file system a few minutes ago. As you mentioned above, HA needs to prevent writing if the network path is not mounted.
Thanks, this solved my issue (for now).
I am experiencing the same issue on HAOS 10.4, Core 2023.8.1, Supervisor 2023.07.1 after a power outage. I can't get to the /data/mounts folder to troubleshoot further.
@Rodney-Smith I never figured out how to get to the mount point, but a workaround I found is to remove the external storage and re-add it. Same idea as adding it under a new name, but then you don't have a stack of broken stuff.
Having the same issue. Home Assistant says my drive has failed to connect; if you press Reload, it says the issue is fixed, but then it shortly throws the error again. Attempting to press Reload after that throws the error mentioned above. It was working fine previously.
Same issue here. A combination of the Advanced SSH & Web Terminal add-on running unprotected and the docker exec command above to get into the Supervisor container allowed me to remove the offending backups by hand. Obviously it's easy to destroy stuff this way if you're not absolutely sure what you're doing, so proceed with caution.
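For reference, entering the Supervisor container from an unprotected Advanced SSH & Web Terminal session typically looks like the following; the container name is the HAOS default, and this is a sketch rather than the exact command quoted in the thread:

```bash
# Requires the add-on's protection mode to be disabled.
# If bash is unavailable in the container, use /bin/sh instead.
docker exec -it hassio_supervisor /bin/bash

# Inside the container, the network storage mount points live here:
ls -la /data/mounts/
```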
For me, this problem started with one of the 2023.8.x releases. Not sure which one.
I am having the same problem with the new release. My last backup was on 01/09/2023.
I'm facing the same issue because my NAS was offline during a backup, and the file seems to have been written to the folder anyway. Is there no way of unmounting and deleting the folder on HAOS?
Same issue.
Same problem.
And while we're talking about this, the error message could be clearer. It was really confusing whether it is the mount point or the samba share that is not empty. The discussion here makes it clear it's the mount point, but I almost tried creating a new samba share on my NAS to fix it.
The issue manifested for me as well. I mount my Windows workstation to do backups. The workstation isn't permanently running, so the backup location isn't always available. Normally it can remount OK, but today I got this dreaded "not empty" error! Please make the backup location auto-retry mounting, and also give us at least the ability to clean up. System is a Yellow; no sane procedure is available to clean up manually.
Have the same problem using NFS and a Synology as the backup target...
Yeah, that's true. Using a different samba share but giving it the same "Name" (which will be the mount point) causes the error message.
Any news yet? I have this problem too.
Same problem here with Samba Backup to a Synology DiskStation.
Yup, @spants that's exactly it. I hope it gets fixed soon. But if you want to work around it, see the gist I posted: https://gist.github.com/davidmankin/d243f6b7fbc103d42cd73333c601896d
It's a pain, but at least it gets network backups working again.
Architecture discussion on a proposed fix to the Network Storage issues:
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. |
I suddenly realised there had been no backups for about 2 months, and found I had this 'Not Empty' problem. Spent some time trying to diagnose... found this thread. Tried the fixes here, but failed to install Advanced Terminal as I could not understand how to generate an SSH key. Then tried adding the share again with a different name, changed backups to that, and all seemed OK. Today I noticed the network storage share had vanished! Suggestions:
Just observations from a newish HA user!
Is anybody from Home Assistant following this thread? Not having an off-server backup is bad. This problem isn't going away.
The Frigate share has been working for 2 updates now; the homeassistant backup share is still failing.
I hope this gets some developer attention. Also, #4662 relates to this and was closed without a fix?
So there are a bit too many similar/related issues around, and unfortunately it is hard to gauge what the "me toos" in here reference exactly. From the original post, there are actually two things here:
1. When the network storage was down, things got written to the local location where the mount would go.
2. Because that location now has content, mounting fails with the "not empty" error.
Now 1) is fixed with #4872. However, it might be that your particular installation still has data in that location, leading to 2) 😢. There are two possible fixes: a) use a new mount name (e.g. for the original poster, simply use something other than HomeNAS_Backup), or b) clean up the data left behind at the mount location. Unfortunately, b) is not quite straightforward. The directory is internally managed by Supervisor, and isn't exposed to the user. However, on Home Assistant OS it can be done from the system terminal (use login to drop to a root shell on the host). Move the current mount folder away:
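A minimal sketch of that step, assuming the host-side path /mnt/data/supervisor/mounts and the original poster's mount name (the exact command was not quoted above):

```bash
# Move the stale, never-mounted directory aside so the share can mount again.
mv /mnt/data/supervisor/mounts/HomeNAS_Backup /mnt/data/supervisor/mounts/HomeNAS_Backup.old
```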
If mounting then works, you can delete that directory (make sure the data in there is really not needed anymore).
If you see this however:
Then the target is actually mounted right now, and you should really no longer see the "cannot be mounted" error. FWIW, I have tested with the current stable version of Supervisor: restarting the system correctly reported a non-mountable target, and trying a backup again led to the same error mentioned above. Once the server was online again, using "Reload" on the repair caused it to mount again, and I was able to take a backup.
Hello @agners, thanks for the answer, and I agree with you that there are two issues in this thread. The fix you mention indeed seems to solve problem 1. That being said, you mention:
Couldn't the repair in Home Assistant offer to delete the content of the mount path, to prevent people from having to terminal into the instance and manually delete things? Just a guess, not a requirement.
We could add a repair which fixes that. However, the problem should never have happened in the first place. Maybe we should have added a repair (or automatic fix) while implementing #4872, but since that was already a while ago I wonder if it is still worth the effort today 🤔 What we could do is automatically delete that internal folder when creating a mount. This way there would be an easier way out 🤔
@agners, you had mentioned "use a new mount name." I'm probably not understanding what you mean. I've tested creating a new SMB share name on our NAS (originally "HABackups", now "TEST"), for a brand-new folder, but I get the same error. When you wrote "when the network storage was down, things got written to a location where the mount would go," what is that other location? Is that location not self-healing? Meaning, if the network storage comes back online, is this detected and are queued file transfer jobs completed, or at least prevented from impeding any new jobs? Sorry for being so dense about this.
I ran into this issue a while back and had to install the terminal plugin to fix it, but then promptly removed the plugin when done. As a relatively new user I do wish there had been a more integrated fix available, but I would strongly recommend not "automatically deleting" any folders. Warning the user that there is already data in the folder and offering to delete it for them would be okay, but automatically deleting anything always leads to problems.
This is inherited from Unix file systems; it's not a problem with HA specifically. You see, on a Unix file system pretty much everything is a file or a folder. When you select a mount "location" (in other words, a path to a folder), it will point to your NAS. But if the NAS is not available, the same "location" will still exist and will instead be pointing to your internal storage. I hope this helps with your questions. Don't hesitate to ask otherwise.
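This behaviour is easy to demonstrate on any Linux machine; nothing here is HA-specific (a tmpfs stands in for the NAS):

```bash
mkdir -p /tmp/demo
echo "written while unmounted" > /tmp/demo/file.txt   # lands on the local disk
sudo mount -t tmpfs tmpfs /tmp/demo                   # "NAS" comes online over the folder
ls /tmp/demo                                          # file.txt is hidden, not gone
sudo umount /tmp/demo
ls /tmp/demo                                          # file.txt reappears
```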
I strongly agree with @dasfdlinux. Automatically dropping data without asking would be scary ^^
@agners Thanks for that info.
I've been watching this thread since the beginning, and there may be a simple (but effective) solution. (I can't take credit for the concept; a server where I work actually does this for its backups to a remote filesystem.) When creating the mount point to the remote filesystem, HA could write a zero-byte file to the backup directory. The only thing relevant is that the file exists and HA knows its name (you might want to make the name unique to each HA instance, in case the user has multiple HA instances backing up to the same remote filesystem directory). When the backup process starts, it should look at the contents of the backup directory, specifically looking for that file. If the file doesn't exist (because, say, the remote filesystem is not mounted), it could attempt to mount the remote filesystem and then check again. If it fails to find the file after the second attempt, the backup fails and a repair notification is presented to the user. Otherwise the backup runs as intended. Alas, I'm not a programmer, just an IT engineer, so I'll let those who are programmers take the idea if they want to run with it and create an appropriate PR.
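A rough shell sketch of that idea; all paths and names here are hypothetical, and a real implementation would live inside Supervisor rather than a script:

```bash
#!/bin/sh
MOUNT_DIR="/data/mounts/backup"         # hypothetical backup mount point
MARKER="$MOUNT_DIR/.ha-backup-marker"   # zero-byte file written at mount time

# At mount-creation time, run once while the share is mounted:
#   touch "$MARKER"

# At backup time: the marker is only visible if the share is really mounted.
if [ ! -f "$MARKER" ]; then
    # Marker missing: try to (re)mount once, by whatever mechanism the
    # system uses (an fstab entry is assumed here for the sketch).
    mount "$MOUNT_DIR" 2>/dev/null
fi

if [ -f "$MARKER" ]; then
    echo "Share is mounted; running backup."
else
    echo "Backup target unavailable; raising a repair instead." >&2
    exit 1
fi
```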
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. |
Not stale, issue still occurs.
We do a very similar thing to this: we create a read-only, protective bind mount to the target path (see https://github.com/home-assistant/supervisor/blob/2024.04.4/supervisor/mounts/manager.py#L267). To do the mounting we use D-Bus calls to systemd. The issue here was that Supervisor did not know, or was confused about, the exact state of the mounts, which ended up without a protective bind mount 😢
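In shell terms, the protective mount amounts to roughly the following; this is illustrative only, since Supervisor performs the equivalent via D-Bus calls to systemd, and the paths are assumptions:

```bash
# Bind-mount an empty directory over the target, then flip the bind read-only,
# so nothing can be written there while the network share is absent.
mkdir -p /data/emptydir /data/mounts/backup
mount --bind /data/emptydir /data/mounts/backup
mount -o remount,ro,bind /data/mounts/backup
```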
As outlined in #4358 (comment), the underlying problem which first caused this issue to appear has been resolved. Nowadays, a protective mount should get created. Even if a failed mount is used as a backup target, you should no longer get into this state. The problem is that systems which still use the same mount name still have that directory with content at the target location 😢 Currently there is no automatic cleanup/fix for these systems. Can you try the fixes outlined in #4358 (comment)?
I've been seeing the same issue with my NAS mounted via NFS for many months. In my case, the mount point is in place on the host OS, but when I try to access a file, it errors out. The fastest way to recover is a reboot of the entire system. The problem is that I rely on my NAS for music, especially as my main alarm clock in the morning, and if that fails (as it often does), it's a problem for me. Given the time it's taking to solve this, I guess I should probably move to a safer solution for the alarm clock, playing local music instead of from the NAS.
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. |
The problem
Cannot mount HomeNAS_Backup at /data/mounts/HomeNAS_Backup because it is not empty.
This error message is not accurate, because "it" can refer either to the mount point or to the remote share.
Since the remote share is empty, the mount point must be what's meant.
So: Home Assistant keeps some data inside the mount point after the network storage connection is deleted, and there is no easy way to clear it yourself.
To reproduce:
What version of Home Assistant Core has the issue?
2023.06
What was the last working version of Home Assistant Core?
No response
What type of installation are you running?
Home Assistant OS
Integration causing the issue
No response
Link to integration documentation on our website
No response
Diagnostics information
No response
Example YAML snippet
No response
Anything in the logs that might be useful for us?
No response
Additional information
No response