Bug: qBittorrent stops listening to the open port after the gluetun VPN restarts internally #1407
Comments
Exactly the same is happening to me as well. The workaround @Gylesie mentioned works for me too, but unfortunately it is not ideal when one wants to rely on the Raspberry Pi just working without needing any input. Maybe my compose file helps:

```yaml
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<redacted>
      - WIREGUARD_ADDRESSES=<redacted>
      - SERVER_CITIES=<redacted>
      - FIREWALL_VPN_INPUT_PORTS=<redacted> # mullvad forwarded port
      - PUID=1000
      - PGID=1000
    ports:
      - 8080:8080 # qbittorrent webgui
      - <redacted>:<redacted> # mullvad forwarded port
      - <redacted>:<redacted>/udp # mullvad forwarded port
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - <redacted>:/config
      - <redacted>:/downloads
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
```
Chiming in that I have the same issue with qBittorrent and gluetun, using the hotio image for qBittorrent. @Gylesie's workaround is okay but troublesome when it happens at night.
It might be because there is a listener going through the tunnel, but gluetun destroys that tunnel on an internal VPN restart and re-creates it. I had the same issue with the HTTP client fetching version info/public IP info from within gluetun, and the fix was to close idle connections for the HTTP client when the tunnel is up again. A bit weird though, since a server (listener) should still work across VPN restarts (it does work with e.g. the shadowsocks server).
Doing this restarts the listener, which is why it works again I would say. I don't think I can really do something from within Gluetun; you could perhaps have some script reading the logs of Gluetun and restart qbittorrent when a VPN restart occurs. Not ideal, but I cannot think of something better for now.
Hmm, that's unfortunate. Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
@qdm12 When the tunnel gets destroyed, does that mean that the network interface also gets destroyed and recreated afterwards?
Yes and no, because this script would likely have to run on the host outside the gluetun container. We could eventually, as an option, add capabilities for Gluetun to do Docker host operations by bind-mounting the Docker socket, but that's kinda risky security-wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.
In the meantime, feel free to use this script I made; it's not perfect but good enough. Keep it running the whole time on the host system.

```bash
#!/bin/bash

# Gluetun monitoring script by Gylesie. More info:
# https://github.com/qdm12/gluetun/issues/1407

######### Config:
gluetun_container_id="gluetun"
qbittorrent_container_id="qbittorrent"
timeout="60"
docker="/usr/bin/docker"
#################################################

log() {
    echo "$(date) [INFO] $1"
}

# Wait for the container to be running
while ! "$docker" inspect "$gluetun_container_id" | jq -e '.[0].State.Running' > /dev/null; do
    log "Waiting for the container ($gluetun_container_id) to be up and running! Sleeping for $timeout seconds..."
    sleep "$timeout"
done

# Store the start time of the script
start_time=$(date +%s)

# Stream the logs and process new lines only
"$docker" logs -t -f "$gluetun_container_id" 2>&1 | while read -r line; do
    # Get the timestamp of the log line
    log_time=$(date -d "$(echo "$line" | cut -d ' ' -f1)" +%s)

    # Check if the log line was generated after the script started
    if [[ "$log_time" -ge "$start_time" ]]; then
        # Check if the VPN was restarted
        if [[ "$line" =~ "[wireguard] Wireguard is up" ]]; then
            # Check if the qbittorrent container is running
            if "$docker" inspect "$qbittorrent_container_id" | jq -e '.[0].State.Running' > /dev/null; then
                log "Restarting qbittorrent!"
                "$docker" restart "$qbittorrent_container_id"
            else
                log "qBittorrent container ($qbittorrent_container_id) is not running! Passing..."
            fi
        fi
    fi
done
```
I'd imagine it would be possible to have some environment variables for Gluetun which specify the address, port, username and password of your qBittorrent instance; then Gluetun could use the qBittorrent web API to change the port and then back whenever the tunnel is restarted. This wouldn't require any special Docker permissions. Obviously not the cleanest solution, however a solution nonetheless.
@eiqnepm I wasn't aware of such a web API; can you create a separate issue for this? Definitely something doable!
The API is documented here; I went ahead and created the new issue #1441 (comment). Thanks a bunch for the quick response!
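The port-toggle idea above can be sketched against the documented qBittorrent WebUI API (`/api/v2/auth/login`, then `/api/v2/app/setPreferences` with a `listen_port` preference). The host, credentials and ports below are placeholder assumptions, not values from this thread, and `DRY_RUN` (on by default) only prints the requests:

```shell
#!/bin/sh
# Sketch: bounce qBittorrent's listen port through the WebUI API so the
# client re-binds its listener. Host, credentials and ports are assumed
# placeholders; set DRY_RUN=0 to actually send the requests.
WEBUI="${WEBUI:-http://127.0.0.1:8080}"
USERNAME="${QBT_USERNAME:-admin}"
PASSWORD="${QBT_PASSWORD:-adminadmin}"
LISTEN_PORT="${LISTEN_PORT:-6881}"
TEMP_PORT="${TEMP_PORT:-6882}"
COOKIES="${COOKIES:-/tmp/qbt-cookies.txt}"

post() {
    # $1 = URL, $2 = form body
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "POST $1 $2"
    else
        # -b/-c reuse and store the SID session cookie from the login call
        curl -s -b "$COOKIES" -c "$COOKIES" --data "$2" "$1" \
            || echo "request to $1 failed" >&2
    fi
}

# Log in, then set a temporary port and restore the real one, which makes
# qBittorrent re-open its listener (the same effect as toggling it in the WebUI).
post "$WEBUI/api/v2/auth/login" "username=$USERNAME&password=$PASSWORD"
post "$WEBUI/api/v2/app/setPreferences" "json={\"listen_port\":$TEMP_PORT}"
post "$WEBUI/api/v2/app/setPreferences" "json={\"listen_port\":$LISTEN_PORT}"
```

Something like this could be triggered whenever a reconnect is detected, for example from the log-watching script earlier in this thread.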
I've gone ahead and made a container. Docker Compose example:

```yaml
version: "3"
services:
  gluetun:
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - FIREWALL_VPN_INPUT_PORTS=6881
      - OWNED_ONLY=yes
      - SERVER_CITIES=Amsterdam
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_ADDRESSES=👀
      - WIREGUARD_PRIVATE_KEY=👀
    image: qmcgaw/gluetun
    ports:
      - 8080:8080 # qBittorrent
    restart: unless-stopped
    volumes:
      - ./gluetun:/gluetun
  portcheck:
    container_name: portcheck
    depends_on:
      - qbittorrent
    environment:
      - DIAL_TIMEOUT=5
      - QBITTORRENT_PASSWORD=adminadmin
      - QBITTORRENT_PORT=6881
      - QBITTORRENT_USERNAME=admin
      - QBITTORRENT_WEBUI_PORT=8080
      - QBITTORRENT_WEBUI_SCHEME=http
      - TIMEOUT=300
    image: eiqnepm/portcheck
    network_mode: service:gluetun
    restart: unless-stopped
  qbittorrent:
    container_name: qbittorrent
    environment:
      - PGID=1000
      - PUID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:gluetun
    restart: unless-stopped
    volumes:
      - ./qbittorrent/config:/config
      - ./qbittorrent/downloads:/downloads
```
I've just updated the container to no longer rely on the Gluetun HTTP control server for the public IP address of the VPN connection; it now uses the outbound address from within the Gluetun service network to check the qBittorrent incoming port. This also has the added benefit of not needing to query the qBittorrent incoming port from the public IP address of your server. For anyone who was using this before I made the change, make sure to run the container inside of the Gluetun service network and update the environment variables which have changed.
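At its core, such a check is just a TCP connection attempt against the forwarded port from inside the gluetun network namespace. A minimal sketch (the host and port defaults are illustrative placeholders, not values from this thread):

```shell
#!/bin/sh
# Sketch: report whether a TCP connection to host:port succeeds, the same
# basic probe a port-checking sidecar performs before deciding to restart
# anything. Defaults are placeholders.
host="${1:-127.0.0.1}"
port="${2:-6881}"
timeout_s="${3:-5}"

port_open() {
    # bash's /dev/tcp pseudo-file succeeds only if the TCP connect does
    timeout "$timeout_s" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open "$host" "$port"; then
    echo "port $port on $host is reachable"
else
    echo "port $port on $host is unreachable"
fi
```

A watcher looping over this check could restart the torrent container whenever the probe starts failing.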
I recently switched from linuxserver/transmission to linuxserver/qbittorrent and noticed that qbittorrent (working inside the gluetun Docker network) stops working after some time. I have been suspecting that it is because gluetun kind of restarts itself for some reason. I am glad to see I am not the only one who has noticed this issue. The extra container solution is nice but not ideal. I think I will revert to Transmission until a proper solution is found, but I really appreciate all your efforts. Will keep subscribed for updates.
Thank you for writing this - works great! For others experiencing this issue, I'm wondering if it would also help to increase the healthcheck duration setting. Is the default setting of 6 seconds too sensitive?
My pleasure! After reading the wiki, it seems the healthcheck was primarily created due to the unreliability of OpenVPN connections. Considering I'm using WireGuard, which is stateless, I've just decided to completely disable the healthcheck feature and see how that goes. With my current knowledge, barring my VPN provider itself going offline, I can't think of a reason why my connection would be interrupted (I guess we'll find out). While the healthcheck feature cannot be disabled per se, you can just set its duration to a very large value.
I can confirm that this fixed it for me. I set a longer duration; Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me.
In qBittorrent you can go into Options and, under Advanced, lock the network interface to tun0. Also, someone just posted a bug that tun0 disappeared after the last update, but it hasn't been verified yet.
I can also confirm this. I was having this problem regularly, but locking the network interface to tun0 seems to have fixed it.
Any chance you are on the latest version and not having the tun0 issue?
I was running 3.32. I've updated to 3.33 and do not have any issues with tun0.
I tested this script with an echo instead of a restart before actually enabling it, and if your gluetun has been running a while and has already restarted a few times, it will restart qBittorrent just as many times in rapid sequence. I think I will try the longer timeout for the gluetun healthcheck first to avoid the internal reconnects.
Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported, but hoping for an integrated solution. AirVPN WireGuard here. The same solutions seem to work (restarting the container), however I would like to avoid having to do that. Is an official solution possible? @qdm12
The best workaround for now is to use the libtorrent v1 version of qBittorrent, or switch to Transmission. It's an issue with libtorrent v2.
If restarting the container is undesirable, you should use #1407 (comment).
I found no other functionality changes with v1. Does Unraid not let you use any image from Docker Hub? You could accomplish the same thing with a cron script to poke the API.
Under Apps and then Settings, enable additional search results from Docker Hub. The container is very lightweight. It could be implemented into Gluetun; I even made an issue upon request (#1441 (comment)), however I don't currently understand the inner workings of Gluetun and don't have the ability to implement the feature myself at this time. If the maintainer decides this is an issue that Gluetun should resolve first-hand, it should not be a very daunting task, considering I managed to get it done with just over two hundred lines of Go.
If this is a libtorrent issue then a bug should be opened there. I don't think gluetun should add a fix for a third-party issue that already has a simple container workaround.
Cool that there is that option, however I do not see it. As it happens, the issue sort of just went away on its own: there were several days I needed to restart the container, but after a recent Gluetun update the issue seems to have gone away.
Here's how I handle restarting dependent dockers when Gluetun restarts:
Could Gluetun just get an option to fully restart whenever the connection goes down? That would resolve the problem in a roundabout way. When Gluetun restarts, Docker restarts all containers that use its network.
That would be a good solution for those who don't mind the service containers restarting. I'd imagine Gluetun would need access to the Docker socket.
Based on what the other person said, it would just need to end its own process, no?
Gluetun would need to restart the container it is running in to restart the service network; otherwise the service network would remain the same. I am not sure whether a Gluetun process restart would fix the torrent issue, as it doesn't affect the torrent client containers directly.
When the Gluetun Docker container restarts, all of the Docker containers using it as a service network will restart. However, if Gluetun were to have a persistent entrypoint process which merely restarted the main Gluetun process, all within the Gluetun Docker container, it would not affect the other Docker containers, as the Gluetun Docker network would remain the same. Processes inside Docker containers don't have the ability to manipulate the state of the container itself out of the box.
Docker containers live and die by their main process.
This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).
They absolutely do, by necessity: the container only runs as long as the main process is running.
You are correct, however this would break workflows for those who do not want the container to restart on actual failures.
It would simply need to be optional.
Giving the Gluetun container access to the Docker socket would also work. Two ways to achieve the same thing, but I think having the Gluetun container restart itself, instead of relying on a restart policy, is a more ideal solution if Gluetun was going to go the container-restart route to address this issue.
One complicated solution that needs gluetun to get extra unnecessary access and then implement more complex logic to go out and restart other containers, vs a dead simple solution that takes two lines of code to implement.
What I suggested was for Gluetun to restart itself, say when an environment variable is enabled and the Gluetun container has access to the Docker socket. This way you get the benefit of the service network restarting, which indirectly restarts all of the dependent containers, and you don't have to use the always restart policy, which is undesirable for some. I wouldn't call it complex; obviously, in comparison to exiting the process, it would be more "logic", however neither is challenging to implement and maintain. Both are viable suggestions. Like I said, I still believe it would be better not to break the no-restart-policy workflow, but that's subjective. I don't think there's more for me to add.
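The persistent-entrypoint idea being debated here can be sketched as a tiny supervisor loop. This is an illustration only: the inner command and the run limit are assumptions, not gluetun's actual entrypoint, which would loop forever rather than a fixed number of times.

```shell
#!/bin/sh
# Sketch: a supervisor that stays PID 1 and re-runs the inner process.
# Because the container's main process (this loop) never exits, Docker
# never restarts the container, so the service network survives while
# the inner process is bounced. Bounded to max_runs for demonstration.
supervise() {
    inner_cmd="$1"
    max_runs="${2:-3}"
    runs=0
    while [ "$runs" -lt "$max_runs" ]; do
        sh -c "$inner_cmd"      # stand-in for the real gluetun binary
        runs=$((runs + 1))
        echo "inner process exited (run $runs); restarting"
    done
}

supervise "true" 2
```

The exit-the-process alternative is the same loop with the `while` removed: the wrapper ends, the container dies, and the restart policy brings everything back, dependent containers included.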
I have made a |
Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.
It could be implemented. You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable, the other will be too.
That's how I currently have it set up: when the qBittorrent port is unreachable, both containers restart. I'll see how it works.
Setting |
Setting |
I was searching the internet for a solution and I found https://portcheck.transmissionbt.com/4330, which returns 1 if the port is open and 0 if it's closed. Meaning that you can add a healthcheck to the gluetun container (replace 4330 with your forwarded port):

```yaml
healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://portcheck.transmissionbt.com/4330 | grep -q 1 || exit 1"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
```
I believe I fixed it by manually setting |
Nevermind, |
Need to do this on Unraid...
Thank you so much |
This is still a problem. qBittorrent fails to reconnect to the forwarded port, but downloads seem to still work; seeding does not. What is the last version of qBittorrent that uses libtorrent v1?
Running an outdated BitTorrent client is probably not a good idea. Are you unable to use https://github.com/eiqnepm/portcheck or the above healthcheck workaround?
What does it do (more specifically than 'check a port')? I would have a hard time without a guide on how to integrate that into Unraid; I know very little about Docker and would be more likely to break something without a stable framework to manage it. It's fairly common, I think, to hold back on upgrading to new qBittorrent releases until they are proven good. That said, this problem does not manifest in a Linux VM using the same qBittorrent version (or a newer version like 4.6.7) connected with a native WireGuard client. So it's tough to pin the blame entirely on libtorrent v2, since it works fine in that environment without requiring restarts when the VPN reconnects. libtorrent v2 may play a role but is not exclusively the cause of the issue as I see it.
I'm not sure what to make of this at the moment, but this port-forward disconnection issue happened only the one time so far, and each subsequent day after the qbit container restart it has been fine.
I resolved this issue by connecting via OpenVPN. I will also share the code; change your OpenVPN client IP, port and adapter name in the `volumes:` section and in the script's `if [ "$1" = "up" ]; then` block.
Is this urgent?
No
Host OS
Ubuntu 22.04
CPU arch
x86_64
VPN service provider
Custom
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)
What's the problem 🤔
Everything works as expected when the qBittorrent and gluetun containers are freshly started: qBittorrent is listening on the open port and it is reachable via the internet. However, when gluetun runs for a longer period of time and for some reason the VPN stops working for a brief time, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.
What I found out was that by changing the open listening port in the qBittorrent WebUI settings to some random port, saving the configuration and then immediately reverting the change to the original port, it starts listening and is once again reachable. Just restarting the qBittorrent container without changing anything also worked.
Is there anything gluetun can do to prevent this? Is this solely qBittorrent's bug? Unfortunately, I have no idea.
Thanks!
Share your logs
Share your configuration
No response