This repository has been archived by the owner on Nov 23, 2017. It is now read-only.
Some configuration changes require a cluster restart to take effect, e.g. 'SPARK_WORKER_OPTS'; but when I stop and start the Spark cluster via spark-ec2, it appears to re-run cluster setup and overwrite all of the conf files. Is there a way to preserve them across a stop/start via spark-ec2? BTW, for some reason I cannot stop and start the cluster via Spark's stop-all/start-all scripts.
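One possible workaround is to back up the conf directory on the master before stopping, then restore it and push it to the workers after the cluster comes back up and setup has overwritten it. This is a minimal sketch, assuming the standard spark-ec2 layout (Spark under /root/spark on the master, and the copy-dir helper under /root/spark-ec2); the placeholders `<key.pem>`, `<keypair>`, and `<master-host>` are yours to fill in:

```bash
# Workaround sketch -- assumes the standard spark-ec2 layout; <key.pem>,
# <keypair>, and <master-host> are placeholders, not real values.

# 1. Back up the Spark conf from the master before stopping the cluster.
scp -i <key.pem> -r root@<master-host>:/root/spark/conf ./spark-conf-backup

# 2. Stop and later start the cluster as usual (start re-runs setup,
#    which is what overwrites the conf files).
./spark-ec2 stop my-spark-cluster
./spark-ec2 -k <keypair> -i <key.pem> start my-spark-cluster

# 3. Restore the saved conf on the master, then rsync it to the slaves
#    using the copy-dir script that spark-ec2 installs on the master.
scp -i <key.pem> -r ./spark-conf-backup/* root@<master-host>:/root/spark/conf/
ssh -i <key.pem> root@<master-host> '/root/spark-ec2/copy-dir /root/spark/conf'
```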
biolearning changed the title from "how to remain conf upon restartup via spark-ec2" to "how to remain conf upon spark cluster stop/start via spark-ec2" on Nov 26, 2016.
Same here: running `./spark-ec2 destroy my-spark-cluster` prints "Searching for existing cluster my-spark-cluster in region us-east-1... Are you sure you want to destroy the cluster my-spark-cluster? (y/N)" and accepts "y" even when there is no my-spark-cluster in region us-east-1. In other words, the script never reports that it did not find the relevant cluster in the specified region; it looks like it is working on something when in fact it is not shutting down anything at all.
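Until the script itself warns about this, one way to check for yourself is to query EC2 directly. spark-ec2 identifies clusters by their security groups, named `<cluster-name>-master` and `<cluster-name>-slaves`, so a sketch like the following (the region and cluster name below are just examples, and this assumes the AWS CLI is configured) shows whether any instances actually exist before you run destroy:

```bash
# Sketch: list instances belonging to the spark-ec2 cluster's security
# groups. Empty output means destroy has nothing to shut down.
aws ec2 describe-instances \
  --region us-east-1 \
  --filters "Name=instance.group-name,Values=my-spark-cluster-master,my-spark-cluster-slaves" \
            "Name=instance-state-name,Values=pending,running,stopping,stopped" \
  --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
  --output text
```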