Cannot run hal with a Vert.x project + minikube #55
@metacosm here's the project we talked about on Zulip
@jponge the given project doesn't compile. Also, with hal master, you do need to `hal component create` the project before you do a push.
What’s the error?
I've fixed the project so that it builds properly. Looking at the other issues now :)
Basically, I don't have a vert.x 4 snapshot and didn't have a dekorate 0.9-SNAPSHOT either. Switched to vert.x 3.8.2 and dekorate 0.9.3…
Ah ok :)
Can you give it another try, @jponge, please?
I've upgraded to the latest. First thing:
I then did a push, which looked OK on the surface:
It took me a while to figure out that this had pushed the source code but that nothing had actually been built / deployed. I eventually discovered the
But the service has no external IP:
I tried to manually kill the pod, but then
But this still gives a
I then ran
Am I missing anything? Thanks!
Also:
I also did copy / run without
Yes, this needs to be cleaned up as it is confusing. See: #40 (comment)
This also needs to be cleared up. Indeed, hal works in a similar way to how Maven does, i.e. you execute it in the parent directory, where components are akin to modules. The physical layout needs to be clarified, and I feel we're lacking a unifying application concept to tie everything together: an application would be the set of components and capabilities linked together, organized as a directory containing one child directory per component. That seems quite intuitive to me, but maybe I'm wrong… What do you think?
Non-binary push (i.e. the default push) currently only works with Spring Boot because it requires a specific image. We need to either make this image available to other runtimes (though, right now, it bundles many Spring Boot dependencies to accelerate builds) or publish what the runtime image expectations are. I've had a similar discussion with @Ladicek on Zulip on this very subject already :)
👍
Hmm, that is weird…
Halkyon should be able to properly recover from a manual kill of the pod. Another thing to look into… :(
That looks OK as far as I can tell. Is your service really running on port 8080?
> That looks OK as far as I can tell. Is your service really running on port 8080?
Yes it does :-)
See `.listen(8080, ar -> {`
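As an aside, the "is it really on 8080?" check can be reproduced outside the project with the JDK alone. The snippet below is a hedged sketch, not the project's actual code: the built-in `com.sun.net.httpserver.HttpServer` stands in for the Vert.x `.listen(8080, ar -> { … })` call, and an HTTP probe against it mirrors checking the deployed service.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Stand-in for the Vert.x `.listen(8080, ar -> { ... })` call:
        // bind a JDK HttpServer to port 8080 and answer every request.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        // Probe the port, the same way one would check the deployed service.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create("http://localhost:8080/")).build(),
            HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
        server.stop(0);
    }
}
```

Against the real component, one would probe the service's cluster IP (or the minikube NodePort) instead of localhost.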
> This also needs to be cleared up. Indeed, hal works in a similar way to how Maven does, i.e. you execute it in the parent directory, where components are akin to modules. The physical layout needs to be clarified, and I feel we're lacking a unifying application concept to tie everything together: an application would be the set of components and capabilities linked together, organized as a directory containing one child directory per component. That seems quite intuitive to me, but maybe I'm wrong… What do you think?
I get that, but there are lots of contexts where 1 service is 1 folder and there are no nested folders / projects.
I’d expect hal to support both cases, and certainly to support operations from a project folder.
Just like when you work on a Maven / Gradle project, your shell is in the project folder, not in the parent one, so you’d expect to be able to run all `hal` commands from here.
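For what it's worth, the parent-directory layout under discussion can be made concrete with a short sketch. This is a hypothetical illustration of the "application = directory with one child directory per component" idea; all class and method names here are invented, none of this is hal's actual code.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ComponentLayout {
    // Hypothetical discovery rule: each child directory of the application
    // root is a component, much like Maven treats child directories
    // declared as modules of the parent project.
    static List<String> discoverComponents(Path appRoot) throws IOException {
        List<String> components = new ArrayList<>();
        try (DirectoryStream<Path> children = Files.newDirectoryStream(appRoot)) {
            for (Path child : children) {
                if (Files.isDirectory(child)) {
                    components.add(child.getFileName().toString());
                }
            }
        }
        Collections.sort(components);
        return components;
    }

    public static void main(String[] args) throws IOException {
        // Simulate an application root with two component directories.
        Path app = Files.createTempDirectory("app");
        Files.createDirectory(app.resolve("frontend"));
        Files.createDirectory(app.resolve("backend"));
        System.out.println(discoverComponents(app));
    }
}
```

Supporting the single-folder case raised above would then just mean treating an application root with no child component directories as a one-component application.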
> Halkyon should be able to properly recover from a manual kill of the pod. Another thing to look into… :(
Can you attach the output of `kubectl get cp dekorate-and-vertx -o yaml`, please?
Sure:
```yaml
apiVersion: halkyon.io/v1beta1
kind: Component
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"halkyon.io/v1beta1","kind":"Component","metadata":{"annotations":{},"labels":{"app":"dekorate-and-vertx","group":"jponge","version":"0.0.0-SNAPSHOT"},"name":"dekorate-and-vertx","namespace":"default"},"spec":{"deploymentMode":"dev","exposeService":true,"port":8080,"version":"0.0.0-SNAPSHOT"}}
  creationTimestamp: "2019-10-19T19:40:36Z"
  generation: 5
  labels:
    app: dekorate-and-vertx
    group: jponge
    version: 0.0.0-SNAPSHOT
  name: dekorate-and-vertx
  namespace: default
  resourceVersion: "70114"
  selfLink: /apis/halkyon.io/v1beta1/namespaces/default/components/dekorate-and-vertx
  uid: a822291b-0c5c-4acc-b282-1d42dbe9402a
spec:
  buildConfig:
    ref: ""
    url: ""
  deploymentMode: dev
  exposeService: true
  port: 8080
  revision: f4b301b4c7d5c0dc0e29213505067192de8beb4a
  runtime: vert.x
  storage: {}
  version: 0.0.0-SNAPSHOT
status:
  message: 'Ready: ''PodName'' changed to ''dekorate-and-vertx-85fcfff4d6-8t4lx'''
  phase: Ready
  podName: dekorate-and-vertx-85fcfff4d6-8t4lx
```
By the way, regarding the runtime issue: halkyonio/operator#169
Making progress, see dekorateio/dekorate#389 on a related problem.
Here is a Vert.x project that uses Dekorate to generate Kubernetes and Halkyon YAML.
I am using minikube.

If I run `hal component push`, then `hal` complains because there is no component. I can of course run `kubectl apply -f target/classes/META-INF/dekorate/halkyon.yml`, which does create a component resource:

However, the `halkyon-operator-(...)` pod goes in `CrashLoopBackoff` until I `kubectl delete -f target/classes/META-INF/dekorate/halkyon.yml`:

Here is a series of `hal` and `kubectl` commands that exhibit funny behaviors, like failing on `.editorconfig`: