EKCP
catapult supports EKCP out of the box. If you are looking for how to deploy EKCP, have a look at Deployment setups.
Once you have an EKCP host, the only thing needed to consume the API is to set the EKCP_HOST environment variable to point to your master ip (e.g. EKCP_HOST=<master_ip>:<master_port>) and to specify BACKEND=ekcp. All the make targets will now point to the remote clusters allocated by EKCP.
catapult will use CLUSTER_NAME as the context for your cluster, and will translate all destroy/create cluster commands into API calls.
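The configuration above can be sketched as a short shell session (the host address and cluster name below are placeholders, not real values):

```shell
# Point catapult at the remote EKCP API (placeholder address).
export EKCP_HOST=10.0.0.1:8030
export BACKEND=ekcp
export CLUSTER_NAME=test

# Every make target now operates on the remote cluster "$CLUSTER_NAME"
# instead of a local one, e.g. "make up" or "make clean".
echo "Using EKCP host $EKCP_HOST for cluster $CLUSTER_NAME"
```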
There are two ways to get the kubeconfig of a cluster already running in EKCP:
You can browse to the EKCP UI (e.g. http://<master_ip>:<master_port>/ui) and retrieve the cluster kubeconfig there, retrieve it from the API...
or just run the make recover target, which retrieves the cluster data and recreates the cluster build folder:
$> BACKEND=ekcp CLUSTER_NAME=test EKCP_HOST=<master_ip>:<master_port> make recover
See also the EKCP wiki page about this topic.
For small scopes, you can create a shell in a new pod and exec into it. Refer to the running options in the wiki; catapult has a make target for it.
catapult supports the socks5 method out of the box, and implements two make targets to automatically set up a tunnel to your cluster.
Again using catapult, run in another terminal:
$> CLUSTER_NAME=test EKCP_HOST=<master_ip>:<master_port> BACKEND=ekcp make module-extra-ingress-forward
This command binds the local port 2224 to the proxy service running on the kube cluster; don't forget to kill it manually when you are done.
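Since the forward runs in the foreground until killed, one way to manage it is plain shell job control. A minimal sketch follows; a `sleep` stands in for the real `make module-extra-ingress-forward` invocation so the snippet is self-contained:

```shell
# Start the tunnel in the background and remember its PID
# ("sleep 30" is a stand-in for the real make invocation here).
sleep 30 &
FORWARD_PID=$!

# ... use the tunnel on local port 2224 ...

# Kill it manually when you are done, as noted above:
kill "$FORWARD_PID"
wait "$FORWARD_PID" 2>/dev/null || true
```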
To access the cluster services, you can now set https_proxy=socks5://127.0.0.1:2224 in front of the commands that require access to the cluster network (e.g. https_proxy=socks5://127.0.0.1:2224 cf login ..).
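For example, exporting the variable once makes every subsequent command in the shell go through the tunnel (the `cf` invocation in the comment is illustrative):

```shell
# Route HTTPS traffic of subsequent commands through the socks5 tunnel
# opened on local port 2224 by the forward target.
export https_proxy=socks5://127.0.0.1:2224

# e.g.:
#   cf login ..
echo "Cluster traffic proxied via $https_proxy"
```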
If the proxy pod is not present, you can deploy it inside the cluster with catapult:
$> [..] CLUSTER_NAME=test EKCP_HOST=<master_ip>:<master_port> BACKEND=ekcp make module-extra-ingress
Now if you want to log in and push an app to your cluster, set EKCP_PROXY=1 in front of your commands:
$> [..] EKCP_PROXY=1 EKCP_HOST=<master_ip>:<master_port> BACKEND=ekcp make scf-login sample
Destroying a remote cluster named "test"
$> [..] BACKEND=ekcp CLUSTER_NAME=test EKCP_HOST=127.0.0.1:8030 make clean
Creating a remote cluster named "test"
$> [..] BACKEND=ekcp CLUSTER_NAME=test EKCP_HOST=127.0.0.1:8030 make up
Destroying a remote cluster named "test" (even if it wasn't created on your box)
$> [..] BACKEND=ekcp CLUSTER_NAME=test EKCP_HOST=127.0.0.1:8030 make force-clean