
Can't access Kubernetes Dashboard from outside the cluster (VirtualBox host ) #9160

Open
brokedba opened this issue Jun 15, 2024 · 28 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@brokedba

What happened?


Hi, I have Kubernetes provisioned in my Vagrant build using VirtualBox.

See the git repo.

I have set up port forwarding in the Vagrantfile as shown below:

  ...  # NOTE: This will enable public access to the opened port
  config.vm.network "forwarded_port", guest: 4443, host: 8444, id: 'awx_https'
  config.vm.network "forwarded_port", guest: 8090, host: 8090, id: 'awx_http'
  config.vm.network "forwarded_port", guest: 8443, host: 8443, id: 'kdashboard_console_https'
  config.vm.network "forwarded_port", guest: 8001, host: 8081, id: 'kdashboard_console_http'

Here is the Kubernetes setup, successfully deployed during provisioning:

==== Install the Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Endpoints:

# kubectl -n kubernetes-dashboard get endpoints -o wide
NAME                        ENDPOINTS              AGE
dashboard-metrics-scraper   192.168.102.131:8000   16h
kubernetes-dashboard        192.168.102.134:8443   16h

pods:

[root@localhost ~]# kubectl -n kubernetes-dashboard get pods -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE                    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-5657497c4c-t5zz4   1/1     Running   0          16h   192.168.102.131   localhost.localdomain   <none>           <none>
kubernetes-dashboard-78f87ddfc-nbjwc         1/1     Running   0          16h   192.168.102.134   localhost.localdomain   <none>           <none>

Service:

[root@localhost ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.108.36.241   <none>        8000/TCP   17h
kubernetes-dashboard        ClusterIP   10.96.123.89    <none>        443/TCP    17h
Service manifest:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

I tried both the proxy and port-forwarding, and neither worked:

1. Proxy: I tried different ports (8001/443)

kubectl proxy
Starting to serve on 127.0.0.1:8001

HTTP URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

http://localhost:8090/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Test from the VirtualBox host:

Behavior: no access (ERR_CONNECTION_RESET)
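
(Note: a likely culprit here is that kubectl proxy binds to 127.0.0.1 inside the guest by default, so a VirtualBox port forward from the host has nothing to connect to. A sketch of a guest-side invocation that listens on all interfaces — insecure, suitable for a lab only; --address and --accept-hosts are standard kubectl proxy flags:)

kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8001

(With the existing Vagrant rule forwarding guest 8001 to host 8081, the proxy URL would then be tried from the host as http://localhost:8081/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.)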

2. Port forwarding:

Listen on port 8443 on all addresses, forwarding to 443 in the pod

kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443 \
  --address="0.0.0.0" &

Test from the VirtualBox host:

https://localhost:8443
Behavior: no access (ERR_CONNECTION_RESET)
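
(Note: one way to narrow this down is to test each hop separately from inside the guest — assuming ss and curl are available there:)

# is anything actually listening on 0.0.0.0:8443 in the guest?
ss -tlnp | grep 8443
# does the port-forward work locally inside the guest?
curl -k https://127.0.0.1:8443/

(If the guest-local curl returns the dashboard page but the host still gets ERR_CONNECTION_RESET, the problem is on the VirtualBox forwarding side rather than in the cluster.)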

Am I missing something?

What did you expect to happen?

Access the Kubernetes Dashboard console.

How can we reproduce it (as minimally and precisely as possible)?

Clone the repo:
https://github.com/brokedba/Devops_all_in_one_vm/tree/main/OL7
cd Devops_all_in_one_vm/OL7
vagrant up

Anything else we need to know?

No response

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

v2.7.0

Kubernetes version

v1.28

Dev environment

No response

@brokedba brokedba added the kind/bug Categorizes issue or PR as related to a bug. label Jun 15, 2024
@lprimak

lprimak commented Jun 16, 2024

I have everything working by exposing the dashboard like this:

# Expose k8s dashboard ("k" here is an alias for kubectl)
k delete -n kubernetes-dashboard service/kubernetes-dashboard
k expose -n kubernetes-dashboard deployment/kubernetes-dashboard --type="LoadBalancer" --port 443 --target-port 8443
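
(For readers following along: after the expose, the assigned address can be watched with a standard kubectl command; EXTERNAL-IP stays <pending> until a load-balancer implementation such as MetalLB claims it:)

kubectl -n kubernetes-dashboard get svc kubernetes-dashboard -w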

@lprimak

lprimak commented Jun 16, 2024

Oh, and I also have MetalLB installed, to assign the external IP automatically.

@brokedba
Author

@lprimak I'll give it a shot, but my Kubernetes install is on a VM, not a managed cluster on a cloud platform. I always thought type LoadBalancer implied a cloud-controller-manager-based resource.

I find the Kubernetes Dashboard documentation on access a bit confusing and insufficient.

@lprimak

lprimak commented Jun 16, 2024

Yes. Same setup here: a VM, on premises.

@lprimak

lprimak commented Jun 16, 2024

Here is my MetalLB config for your reference: https://github.com/lprimak/infra/blob/main/scripts/cloud/oci/k8s/metallb-config.yaml
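
(For reference, a minimal MetalLB L2 configuration along those lines — a sketch only: the pool name service-pool matches the annotation shown later in this thread, and the address range is an assumption based on the VirtualBox host-only network:)

kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: service-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.56.100-192.168.56.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - service-pool
EOF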

@brokedba
Author

I just tried it, and the URL still doesn't work:

localhost:8443

@lprimak

lprimak commented Jun 16, 2024

Not localhost. You have to look at the external IP for the dashboard and point your browser there. No 8443.

@brokedba
Author

There is no external IP for now; it's still pending.

@brokedba
Author

brokedba commented Jun 16, 2024

OK, I understand what you meant. I just don't know how MetalLB, once configured, can allow me to access the kube dashboard endpoint from my VirtualBox host, since I only use guest/host port forwarding for now.
Even if I have the external IP, it's not going to be accessible from the host.

My --pod-network-cidr is 192.168.0.0/16, by the way.

@lprimak

lprimak commented Jun 16, 2024

Yes. MetalLB needs to be installed and functioning to get an external IP. The external IP can then be accessed from your local network, provided your VM host is reachable on that network; otherwise you will need to make it accessible from the VirtualBox side.

@brokedba
Author

Thank you @lprimak for the context. I'll install MetalLB, but I also hoped I could fix the basic proxy or port forwarding issue I shared.

@brokedba
Author

brokedba commented Jun 17, 2024

After checking the MetalLB docs, I created the address pool and L2Advertisement, then updated the kube dashboard service as you suggested; it now has an external IP. But it's still not accessible, although I can ping the IP from my host.

[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP      10.108.36.241    <none>           8000/TCP        2d19h
kubernetes-dashboard        LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m

192.168.56.100:8443 --- times out

$ ping 192.168.56.100
PING 192.168.56.100 (192.168.56.100) 56(84) bytes of data.
64 bytes from 192.168.56.100: icmp_seq=1 ttl=255 time=0.445 ms

--- Telnet
 telnet 192.168.56.100 8443
Trying 192.168.56.100...
telnet: Unable to connect to remote host: Resource temporarily unavailable

 kubectl describe svc kubernetes-dashboard -n kubernetes-dashboard
Name:                     kubernetes-dashboard
Namespace:                kubernetes-dashboard
Labels:                   k8s-app=kubernetes-dashboard
Annotations:              metallb.universe.tf/ip-allocated-from-pool: service-pool
Selector:                 k8s-app=kubernetes-dashboard
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.120.234
IPs:                      10.102.120.234
LoadBalancer Ingress:     192.168.56.100
Port:                     <unset>  443/TCP
TargetPort:               8443/TCP
NodePort:                 <unset>  32702/TCP
Endpoints:                192.168.102.151:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
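
(Note: testing from inside the guest can separate cluster-side problems from host-side routing; the addresses below are taken from the describe output above, and the service port is 443, not 8443:)

curl -k https://10.102.120.234/      # ClusterIP: checks the service -> pod path
curl -k https://192.168.56.100/      # MetalLB external IP, on port 443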

@brokedba
Author

Bottom line is that it didn't work! Kubernetes Dashboard is one of those tools that are nice on paper but not that simple to use.
K9s does the job, and I would switch to Lens at worst if I need a UI.

@brokedba brokedba reopened this Jun 18, 2024
@brokedba
Author

If you close issues automatically, at least add a line or two explaining why. This is not helpful for users at all.

@lprimak

lprimak commented Jun 18, 2024

Not port 8443. The regular HTTPS port (443). And it looks like you closed the issue yourself (probably by accident :)

@brokedba
Author

Ouch, my apologies. I have been in projects where issues were closed before we got to the bottom of them.
The reason behind the port switch is that I have port forwarding on Vagrant too, as shared in the OP:

...  # NOTE: This will enable public access to the opened port 
  config.vm.network "forwarded_port", guest: 8443, host: 8443, id: 'kdashboard_console_https' <---- 
  config.vm.network "forwarded_port", guest: 8001, host: 8081, id: 'kdashboard_console_http' 

@floreks floreks added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Jun 18, 2024
@lprimak

lprimak commented Jun 18, 2024

Then you need to use the same port in your Vagrant config as in your kubectl expose --port command. If they don't match, it won't work.

@brokedba
Author

brokedba commented Jun 18, 2024

I did it the other way around:

  config.vm.network "forwarded_port", guest: 443, host: 8443, id: 'kdashboard_console_https' <---- 

and kept the LoadBalancer with the same port and endpoint, but it still times out.

[root@localhost ~]# kgs -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE 
kubernetes-dashboard        LoadBalancer   10.102.120.234   192.168.56.100   443:32702/TCP   45m

192.168.56.100:8443 --- times out

@lprimak

lprimak commented Jun 18, 2024

Actually, I don't think you can forward your port that way.
The external IP needs to be routable on your local network; then you can access it from any machine on your local network.

@lprimak

lprimak commented Jun 19, 2024

Also there is NodePort type of service available and I think you can port forward via vagrant with that
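
(A sketch of that approach, assuming the service name and namespace from earlier in the thread; 30443 is an arbitrary port from the default NodePort range 30000-32767. Unlike the MetalLB external IP, a NodePort listens on the node's own address, which is what a VirtualBox forwarded_port rule actually targets:)

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":30443}]}}'

# matching Vagrantfile entry (host 8443 -> guest 30443), then browse
# https://localhost:8443 from the VirtualBox host:
#   config.vm.network "forwarded_port", guest: 30443, host: 8443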

@brokedba
Author

It is routable, since I can ping it from my host.
The question is why the forwarded port 8443 is not working.

@lprimak

lprimak commented Jun 19, 2024

But you are still forwarding ports. You need to access the external IP directly.
NodePort can probably be used for your Vagrant forwards, but I don't use that in my setup.

@brokedba
Author

I can try:

  1. using 443 as the port on the host side in the Vagrant forwarding rule
  2. using NodePort

After that, I think I'll run out of options. Docker does a better job at exposing its containers.

@lprimak

lprimak commented Jun 20, 2024

I suggest abandoning the idea of port forwarding; just use the external IP as intended (i.e., directly), without any port forwarding.
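
(In that setup, the test from the VirtualBox host would simply be the external IP on the standard HTTPS port, e.g.:)

curl -k https://192.168.56.100/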

@brokedba
Author

I'll try without forwarding, but I don't think the external IP socket will work. Will let you know.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2024
@brokedba
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2024