
Cannot run hal with a Vert.x project + minikube #55

Open
jponge opened this issue Oct 14, 2019 · 17 comments

@jponge

jponge commented Oct 14, 2019

Here is a Vert.x project that uses Dekorate to generate Kubernetes and Halkyon YAML.

I am using minikube.

If I run hal component push, then hal complains because there is no component.

I can of course run kubectl apply -f target/classes/META-INF/dekorate/halkyon.yml which does create a component resource:

kubectl get components
NAME                 RUNTIME   VERSION          AGE     MODE   STATUS   MESSAGE   REVISION
dekorate-and-vertx             0.0.0-SNAPSHOT   2m39s   dev

However the halkyon-operator-(...) pod goes into CrashLoopBackOff until I kubectl delete -f target/classes/META-INF/dekorate/halkyon.yml:

operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error     0          70s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error     1          72s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     CrashLoopBackOff   1          85s
operators              halkyon-operator-78fb78b5df-gkjnj             1/1     Running            2          87s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error              2          88s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     CrashLoopBackOff   2          100s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error              3          114s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     CrashLoopBackOff   3          2m9s
operators              halkyon-operator-78fb78b5df-gkjnj             1/1     Running            4          2m36s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error              4          2m37s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     CrashLoopBackOff   4          2m48s
operators              halkyon-operator-78fb78b5df-gkjnj             1/1     Running            5          4m
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     Error              5          4m1s
operators              halkyon-operator-78fb78b5df-gkjnj             0/1     CrashLoopBackOff   5          4m15s
operators              halkyon-operator-78fb78b5df-gkjnj             1/1     Running            6          6m42s
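As an aside, the actual crash reason usually shows up in the logs of the previous (crashed) container run; assuming the pod name from the listing above (it will differ per deployment), something like:

```shell
# Pull the logs from the crashed (previous) run of the operator container.
kubectl logs -n operators halkyon-operator-78fb78b5df-gkjnj --previous

# Pod events can also point at the restart reason.
kubectl describe pod -n operators halkyon-operator-78fb78b5df-gkjnj
```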

Here is a series of hal and kubectl commands that exhibit odd behaviors, such as failing on .editorconfig:

➜  dekorate-and-vertx hal component push
 ✗  error validating push: no component named 'dekorate-and-vertx' exists, please create it first
➜  dekorate-and-vertx kubectl apply -f target/classes/META-INF/dekorate/halkyon.yml
component.halkyon.io/dekorate-and-vertx created
➜  dekorate-and-vertx hal component push
 ✗  error running push: walking dekorate-and-vertx/.editorconfig: dekorate-and-vertx/.editorconfig: stat: stat dekorate-and-vertx/.editorconfig: no such file or directory
➜  dekorate-and-vertx kubectl delete -f target/classes/META-INF/dekorate/halkyon.yml
component.halkyon.io "dekorate-and-vertx" deleted
➜  dekorate-and-vertx
@jponge
Author

jponge commented Oct 14, 2019

dekorate-and-vertx.zip

@jponge
Author

jponge commented Oct 14, 2019

@metacosm here's the project we talked about on Zulip

@metacosm
Contributor

@jponge the given project doesn't compile. Also, with hal master, you do need to hal component create the project before you do a push.

@jponge
Author

jponge commented Oct 18, 2019 via email

@metacosm
Contributor

I've fixed the project so that it builds properly. Looking at the other issues now :)

@metacosm
Contributor

What’s the error?

Basically, I don't have a vert.x 4 snapshot and didn't have a dekorate 0.9-SNAPSHOT either. I switched to vert.x 3.8.2 and dekorate 0.9.3…

@jponge
Author

jponge commented Oct 18, 2019 via email

@metacosm
Contributor

Can you give it another try, @jponge, please?

@jponge
Author

jponge commented Oct 19, 2019

I've upgraded to the latest hal on the master branch.

First thing, hal component create is a bit confusing since it also offers scaffolding.
I eventually figured out how to use it, but I had to go to the parent folder of my project:

➜  dekorate-and-vertx cd ..
➜  playgrounds hal component create dekorate-and-vertx
? Runtime vert.x
? Version 3.8.2
? Expose microservice Yes
? Port 8080
? Use code generator No
? Local component directory dekorate-and-vertx
? Env variable in the 'name=value' format, simply press enter when finished
❯ Selected Name: dekorate-and-vertx
 ✓  Successfully created 'dekorate-and-vertx' component

I then did a push which looked ok on the surface:

➜  dekorate-and-vertx hal component push
Local changes detected for 'dekorate-and-vertx' component: about to push source code to remote cluster
 ✓  Uploading /Users/jponge/Code/playgrounds/dekorate-and-vertx/dekorate-and-vertx.tar
 ✓  Cleaning up component
 ✓  Extracting source on the remote cluster
 ✓  Performing build
 ✓  Restarting app
 ✓  Successfully pushed 'dekorate-and-vertx' component to remote cluster

It took me a while to figure out that this had pushed the source code but that nothing had actually been built / deployed.

I eventually discovered the -b flag:

➜  dekorate-and-vertx hal component push -b
Local changes detected for 'dekorate-and-vertx' component: about to push packaged binary to remote cluster
 ✓  Uploading /Users/jponge/Code/playgrounds/dekorate-and-vertx/target/dekorate-and-vertx-0.0.0-SNAPSHOT-all.jar
 ✓  Restarting app
 ✓  Successfully pushed 'dekorate-and-vertx' component to remote cluster

But the service has no external IP:

➜  dekorate-and-vertx kubectl get components
NAME                 RUNTIME   VERSION   AGE   MODE   STATUS   MESSAGE                                                             REVISION
dekorate-and-vertx   vert.x    3.8.2     10m   dev    Ready    Ready: 'PodName' changed to 'dekorate-and-vertx-85fcfff4d6-5nc7t'   e10d5df6c0725078778ec3b76ff92c06f7d3900b
➜  dekorate-and-vertx kubectl get services
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dekorate-and-vertx   ClusterIP   10.104.157.62   <none>        8080/TCP   10m
kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP    5d11h
➜  dekorate-and-vertx minikube service --url dekorate-and-vertx
😿  service default/dekorate-and-vertx has no node port
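(For what it's worth, minikube service --url only works for NodePort or LoadBalancer services; a plain ClusterIP service has no node port by design. One possible workaround, assuming the service shown above, is to patch its type:)

```shell
# Switch the service from ClusterIP to NodePort so minikube can expose it.
kubectl patch service dekorate-and-vertx -p '{"spec": {"type": "NodePort"}}'

# minikube should now be able to print a reachable URL.
minikube service --url dekorate-and-vertx
```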

I tried to manually kill the pod, but then hal was lost because the new pod obviously had a different name, so I deleted the Halkyon component and re-created it:

➜  playgrounds hal component create dekorate-and-vertx
? Runtime vert.x
? Version 3.8.2
? Expose microservice Yes
? Port 8080
? Use code generator No
? Local component directory dekorate-and-vertx
? Env variable in the 'name=value' format, simply press enter when finished
❯ Selected Name: dekorate-and-vertx
 ✓  Successfully created 'dekorate-and-vertx' component
➜  playgrounds hal component push -c dekorate-and-vertx -b
Local changes detected for 'dekorate-and-vertx' component: about to push packaged binary to remote cluster
 ✓  Uploading /Users/jponge/Code/playgrounds/dekorate-and-vertx/target/dekorate-and-vertx-0.0.0-SNAPSHOT-all.jar
 ✓  Restarting app
 ✓  Successfully pushed 'dekorate-and-vertx' component to remote cluster

But this still gives a ClusterIP service:

➜  playgrounds hal component push -c dekorate-and-vertx -b
Local changes detected for 'dekorate-and-vertx' component: about to push packaged binary to remote cluster
 ✓  Uploading /Users/jponge/Code/playgrounds/dekorate-and-vertx/target/dekorate-and-vertx-0.0.0-SNAPSHOT-all.jar
 ✓  Restarting app
 ✓  Successfully pushed 'dekorate-and-vertx' component to remote cluster
➜  playgrounds kubectl get components
NAME                 RUNTIME   VERSION   AGE   MODE   STATUS   MESSAGE                                                             REVISION
dekorate-and-vertx   vert.x    3.8.2     35s   dev    Ready    Ready: 'PodName' changed to 'dekorate-and-vertx-85fcfff4d6-8t4lx'   e10d5df6c0725078778ec3b76ff92c06f7d3900b
➜  playgrounds kubectl get services
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dekorate-and-vertx   ClusterIP   10.100.169.9   <none>        8080/TCP   43s
kubernetes           ClusterIP   10.96.0.1      <none>        443/TCP    5d11h
➜  playgrounds minikube service --url dekorate-and-vertx
😿  service default/dekorate-and-vertx has no node port

I then ran minikube tunnel and did the following:

➜  dekorate-and-vertx kubectl get services
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dekorate-and-vertx   ClusterIP   10.100.169.9   <none>        8080/TCP   12m
kubernetes           ClusterIP   10.96.0.1      <none>        443/TCP    5d11h
➜  dekorate-and-vertx http 10.100.169.9:8080

http: error: ConnectionError: HTTPConnectionPool(host='10.100.169.9', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10dee07d0>: Failed to establish a new connection: [Errno 61] Connection refused')) while doing GET request to URL: http://10.100.169.9:8080/
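As far as I understand, minikube tunnel only assigns external IPs to services of type LoadBalancer, and a ClusterIP address is generally not routable from the host (this depends on the minikube driver). A workaround that sidesteps the service type entirely:

```shell
# Forward local port 8080 to the service inside the cluster.
kubectl port-forward service/dekorate-and-vertx 8080:8080

# Then, from another terminal:
curl http://localhost:8080/
```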

Am I missing anything?

Thanks!

@jponge
Author

jponge commented Oct 19, 2019

Also:

➜  dekorate-and-vertx kubectl describe components dekorate-and-vertx
Name:         dekorate-and-vertx
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  halkyon.io/v1beta1
Kind:         Component
Metadata:
  Creation Timestamp:  2019-10-19T19:40:36Z
  Generation:          3
  Resource Version:    68268
  Self Link:           /apis/halkyon.io/v1beta1/namespaces/default/components/dekorate-and-vertx
  UID:                 a822291b-0c5c-4acc-b282-1d42dbe9402a
Spec:
  Build Config:
    Ref:
    URL:
  Deployment Mode:  dev
  Expose Service:   true
  Port:             8080
  Revision:         e10d5df6c0725078778ec3b76ff92c06f7d3900b
  Runtime:          vert.x
  Storage:
  Version:  3.8.2
Status:
  Message:   Ready: 'PodName' changed to 'dekorate-and-vertx-85fcfff4d6-8t4lx'
  Phase:     Ready
  Pod Name:  dekorate-and-vertx-85fcfff4d6-8t4lx
Events:      <none>

@jponge
Author

jponge commented Oct 19, 2019

I also did copy / run without hal:

➜  dekorate-and-vertx kubectl cp target/dekorate-and-vertx-0.0.0-SNAPSHOT-all.jar dekorate-and-vertx-85fcfff4d6-8t4lx:/deployments/app.jar
➜  dekorate-and-vertx kubectl exec dekorate-and-vertx-85fcfff4d6-8t4lx /var/lib/supervisord/bin/supervisord ctl start run
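(Side note: recent kubectl versions deprecate passing the command directly after the pod name; the -- separator form is preferred:)

```shell
# '--' separates the in-pod command from kubectl's own arguments.
kubectl exec dekorate-and-vertx-85fcfff4d6-8t4lx -- \
  /var/lib/supervisord/bin/supervisord ctl start run
```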

@metacosm
Contributor

metacosm commented Oct 20, 2019

I've upgraded to the latest hal on the master branch.

First thing, hal component create is a bit confusing since it also offers scaffolding.

Yes, this needs to be cleaned up as it is confusing. See: #40 (comment)

I eventually figured out how to use it, but I had to go to the parent folder of my project:

This also needs to be cleared up. Indeed, hal works similarly to how Maven does, i.e. you execute it in the parent directory, where components are akin to modules. The physical layout needs to be clarified, and I feel we're lacking a unifying application concept to tie everything together: an application would be the set of components and capabilities linked together, organized as a directory containing one child directory per component. That seems quite intuitive to me, but maybe I'm wrong… What do you think?
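To make the Maven analogy concrete, the layout would look something like this (directory names besides dekorate-and-vertx are hypothetical):

```
playgrounds/                  # the "application": run hal commands from here
├── dekorate-and-vertx/       # a component, akin to a Maven module
└── some-other-component/     # hypothetical sibling component
```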

I then did a push which looked ok on the surface:
It took me a while to figure out that this had pushed the source code but that nothing had actually been built / deployed.

Non-binary push (i.e. the default push) currently only works with Spring Boot because it requires a specific image. We need to either make this image available to other runtimes (though, right now, it bundles many Spring Boot dependencies to accelerate builds) or document what the runtime image expectations are. I've had a similar discussion with @Ladicek on Zulip on this very subject already :)

I eventually discovered the -b flag:

👍

But the service has no external IP:

➜  dekorate-and-vertx kubectl get components
NAME                 RUNTIME   VERSION   AGE   MODE   STATUS   MESSAGE                                                             REVISION
dekorate-and-vertx   vert.x    3.8.2     10m   dev    Ready    Ready: 'PodName' changed to 'dekorate-and-vertx-85fcfff4d6-5nc7t'   e10d5df6c0725078778ec3b76ff92c06f7d3900b
➜  dekorate-and-vertx kubectl get services
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dekorate-and-vertx   ClusterIP   10.104.157.62   <none>        8080/TCP   10m
kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP    5d11h
➜  dekorate-and-vertx minikube service --url dekorate-and-vertx
😿  service default/dekorate-and-vertx has no node port

Hmm, that is weird…

I tried to manually kill the pod, but then hal was lost because the new pod obviously had a different name, so I deleted the Halkyon component and re-created it:

Halkyon should be able to properly recover from a manual kill of the pod. Another thing to look into… :(

@metacosm
Contributor

Also:

➜  dekorate-and-vertx kubectl describe components dekorate-and-vertx
Name:         dekorate-and-vertx
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  halkyon.io/v1beta1
Kind:         Component
Metadata:
  Creation Timestamp:  2019-10-19T19:40:36Z
  Generation:          3
  Resource Version:    68268
  Self Link:           /apis/halkyon.io/v1beta1/namespaces/default/components/dekorate-and-vertx
  UID:                 a822291b-0c5c-4acc-b282-1d42dbe9402a
Spec:
  Build Config:
    Ref:
    URL:
  Deployment Mode:  dev
  Expose Service:   true
  Port:             8080
  Revision:         e10d5df6c0725078778ec3b76ff92c06f7d3900b
  Runtime:          vert.x
  Storage:
  Version:  3.8.2
Status:
  Message:   Ready: 'PodName' changed to 'dekorate-and-vertx-85fcfff4d6-8t4lx'
  Phase:     Ready
  Pod Name:  dekorate-and-vertx-85fcfff4d6-8t4lx
Events:      <none>

That looks OK as far as I can tell. Is your service really running on port 8080?

@jponge
Author

jponge commented Oct 21, 2019 via email

@jponge
Author

jponge commented Oct 21, 2019 via email

@metacosm
Contributor

By the way, regarding the runtime issue: halkyonio/operator#169

@jponge
Author

jponge commented Oct 28, 2019

Making progress, see dekorateio/dekorate#389 on a related problem
