This repository has been archived by the owner on Nov 23, 2017. It is now read-only.

How to retain conf across spark cluster stop/start via spark-ec2 #77

Open
biolearning opened this issue Nov 26, 2016 · 1 comment


biolearning commented Nov 26, 2016

Some conf changes require a cluster restart to take effect, e.g. 'SPARK_WORKER_OPTS'; but stopping/starting the Spark cluster via spark-ec2 appears to re-run the cluster setup and overwrite all of the conf files. Is there a way to preserve them across a stop/start via spark-ec2? BTW, for some reason I cannot stop and start the cluster via Spark's stop-all/start-all scripts.
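
Not an official workflow, but one workaround is to snapshot the conf directory before the stop and push it back after the start. A minimal sketch, assuming the usual spark-ec2 layout (/root/spark/conf on the master, copy-dir under /root/spark-ec2); the key/identity flags (-k/-i) and ssh options are omitted for brevity:

```bash
# Grab the master's hostname (get-master prints a search line first,
# so take the last line of its output).
MASTER=$(./spark-ec2 get-master my-spark-cluster | tail -n 1)

# Before stopping: save the conf directory locally.
rsync -av "root@$MASTER:/root/spark/conf/" ./spark-conf-backup/

./spark-ec2 stop my-spark-cluster
./spark-ec2 start my-spark-cluster   # start re-runs setup and rewrites conf

# After starting: restore the saved conf and propagate it to the slaves.
MASTER=$(./spark-ec2 get-master my-spark-cluster | tail -n 1)
rsync -av ./spark-conf-backup/ "root@$MASTER:/root/spark/conf/"
ssh "root@$MASTER" '/root/spark-ec2/copy-dir /root/spark/conf'
```

A restart of the Spark daemons on the cluster is still needed after the restore for options like SPARK_WORKER_OPTS to take effect.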

biolearning changed the title from "how to remain conf upon restartup via spark-ec2" to "how to remain conf upon spark cluster stop/start via spark-ec2" on Nov 26, 2016

key2market commented Nov 27, 2016

Same here
./spark-ec2 destroy my-spark-cluster
outputs
Searching for existing cluster my-spark-cluster in region us-east-1... Are you sure you want to destroy the cluster my-spark-cluster? (y/N) y

even when there is no my-spark-cluster in region us-east-1. In other words, there is no message indicating that the script did not find the relevant cluster in the specified region; it looks as if the script is working on something when in fact it is not shutting anything down at all.
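A pre-check along these lines would make the behavior clearer. This is just a sketch, assuming the AWS CLI is installed and configured, and relying on spark-ec2's convention of naming a cluster's security groups <cluster>-master and <cluster>-slaves:

```bash
CLUSTER=my-spark-cluster
REGION=us-east-1

# Count instances attached to the cluster's security groups.
count=$(aws ec2 describe-instances --region "$REGION" \
  --filters "Name=instance.group-name,Values=${CLUSTER}-master,${CLUSTER}-slaves" \
            "Name=instance-state-name,Values=pending,running,stopping,stopped" \
  --query 'length(Reservations[].Instances[])' --output text)

if [ "$count" -eq 0 ]; then
  echo "No instances found for $CLUSTER in $REGION; skipping destroy."
else
  ./spark-ec2 destroy "$CLUSTER"
fi
```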
