doc: add doc for accessing ui through rancher when network policy is enabled. #1049

51 changes: 51 additions & 0 deletions content/docs/1.9.0/deploy/install/install-with-rancher.md
@@ -37,3 +37,54 @@ Each node in the Kubernetes cluster where Longhorn is installed must fulfill [th
{{< figure src="/img/screenshots/install/rancher-2.6/dashboard.png" >}}

After Longhorn has been successfully installed, you can access the Longhorn UI by navigating to the `Longhorn` option in the Rancher left panel.


## Access UI With Network Policy Enabled

Note that when a Network Policy is enabled in the cluster, access to the Longhorn UI from Rancher may be restricted.

Rancher interacts with the Longhorn UI via a service called remotedialer, which facilitates connections between Rancher and the downstream clusters it manages. This service allows a user agent to access the cluster through an endpoint on the Rancher server. Remotedialer connects to the Longhorn UI service by using the Kubernetes API Server as a proxy.

However, when the Network Policy is enabled, the Kubernetes API Server may be unable to reach pods on other nodes. This is because the API Server runs in the host's network namespace and has no dedicated per-pod IP address. With the Calico CNI plugin, when any process in the host's network namespace (such as the API Server) connects to a pod, Calico encapsulates the packet in an overlay tunnel (IPIP or VXLAN, depending on the configured mode) before forwarding it to the remote host. The node's tunnel address is used as the source IP to ensure the remote host knows to encapsulate the return packets correctly.

In other words, traffic that reaches the Longhorn UI pods through this proxy appears to originate from the nodes' tunnel addresses. To allow the proxy to work with the Network Policy, the Tunnel IP of each node must therefore be identified and explicitly permitted in the policy.
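
You can exercise this same path yourself to see where the connection breaks. The following is a minimal sketch, assuming the Longhorn UI is exposed by a service named `longhorn-frontend` on port 80 in the `longhorn-system` namespace; it asks the Kubernetes API Server to proxy a request to the UI, which is the hop the Network Policy ends up blocking.
```
# Sketch: ask the API Server to proxy a request to the Longhorn UI service.
# The service name longhorn-frontend and port 80 are assumptions; adjust if yours differ.
# If this hangs or times out only while the Network Policy is applied, the policy is
# blocking the tunnel-sourced traffic described above.
kubectl get --raw "/api/v1/namespaces/longhorn-system/services/http:longhorn-frontend:80/proxy/"
```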

You can find the Tunnel IP of each node by running:
```
$ kubectl get nodes -oyaml | grep "Tunnel"

projectcalico.org/IPv4VXLANTunnelAddr: 10.42.197.0
projectcalico.org/IPv4VXLANTunnelAddr: 10.42.99.0
projectcalico.org/IPv4VXLANTunnelAddr: 10.42.158.0
projectcalico.org/IPv4VXLANTunnelAddr: 10.42.80.0
```
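
If you want the addresses in a scriptable form, you can read the annotation directly. This is a sketch that assumes Calico is running in VXLAN mode (annotation `projectcalico.org/IPv4VXLANTunnelAddr`, as in the output above); in IPIP mode the annotation is typically `projectcalico.org/IPv4IPIPTunnelAddr`.
```
# Print one tunnel address per node (assumes the VXLAN annotation; adjust for IPIP mode).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.annotations.projectcalico\.org/IPv4VXLANTunnelAddr}{"\n"}{end}'
```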

Next, permit traffic from these Tunnel IPs in the Network Policy. You may need to update the Network Policy whenever new nodes are added to the cluster.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-ui-frontend
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-ui
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.42.197.0/32
        - ipBlock:
            cidr: 10.42.99.0/32
        - ipBlock:
            cidr: 10.42.158.0/32
        - ipBlock:
            cidr: 10.42.80.0/32
      ports:
        - port: 8000
          protocol: TCP
```
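
To apply and verify the policy, assuming the manifest above is saved as `longhorn-ui-frontend.yaml` (a hypothetical file name):
```
# Apply the policy and confirm the ingress rules that are in effect.
kubectl apply -f longhorn-ui-frontend.yaml
kubectl -n longhorn-system describe networkpolicy longhorn-ui-frontend
# Remember to regenerate the ipBlock list and re-apply after adding nodes.
```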

Another way to resolve the issue is by running the server nodes with `egress-selector-mode: cluster`. For more information, see [RKE2 Server Configuration Reference](https://docs.rke2.io/reference/server_config#critical-configuration-values) and [K3s Control-Plane Egress Selector configuration](https://docs.k3s.io/networking/basic-network-options#control-plane-egress-selector-configuration).
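
As a sketch of that alternative on RKE2, assuming the default server configuration file `/etc/rancher/rke2/config.yaml` and that restarting the server service is acceptable:
```
# On each RKE2 server node: enable the cluster egress selector mode, then restart the server.
# (For K3s, the file is /etc/rancher/k3s/config.yaml and the service is k3s.)
echo "egress-selector-mode: cluster" | sudo tee -a /etc/rancher/rke2/config.yaml
sudo systemctl restart rke2-server
```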