Hi all,
I am creating an EC2 cluster using the 2.0 branch.
The cluster is created with 4 cores.
Once it is created, I connect to each slave and kick off exactly the same application with the following command:
[root@ip-172-31-4-154 bin]$ ./spark-submit --master spark://ec2-54-186-158-159.us-west-2.compute.amazonaws.com:7077 --executor-cores 1 /root/pyscripts/dataprocessing_Sample.py file:///root/pyscripts/tree_addhealth.csv
But the second app is kept in WAITING state, even though only 2 of the 4 cores are in use. I am getting this in the logs:
17/02/18 21:00:57 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/02/18 21:01:12 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Could you please advise why? I can provide as much information as you need.
kr
marco
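(Note for context: in Spark standalone mode, an application that does not set spark.cores.max will try to claim every available core, which can leave a second application waiting even when cores appear free. Below is a minimal, hedged sketch of a submit command that caps the total cores each application takes, reusing the master URL and script paths from the command above; the --total-executor-cores flag and spark.cores.max property are standard Spark options, but the numbers here are only an illustration and should be adjusted to the cluster:)

# cap this application at 1 core in total so a second app can be scheduled alongside it
./spark-submit --master spark://ec2-54-186-158-159.us-west-2.compute.amazonaws.com:7077 \
  --executor-cores 1 \
  --total-executor-cores 1 \
  /root/pyscripts/dataprocessing_Sample.py file:///root/pyscripts/tree_addhealth.csv

Equivalently, the cap can be expressed as --conf spark.cores.max=1 on the same command line.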
This question is not related to the EC2 scripts -- I'd suggest asking it on the Spark users mailing list / Stack Overflow, as described in http://spark.apache.org/community.html
Hi,
Indeed; before I ask there, could you please advise how I can disable the following setting in the generated /spark/conf/spark-env.sh?
export SPARK_WORKER_CORES=2
I have tried to comment out that line in spark-ec2/templates/root/spark/conf/spark-env.sh,
but when the cluster gets generated and I go to /root/spark/conf/spark-env.sh, that line is not commented out.
Could you kindly advise how I can do that?
I have also tried to stop the cluster, edit the file on the master, and copy-dir that directory, but when I start the cluster again it brings up the master and slaves at brand new addresses.
kind regards
marco
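(A hedged sketch of one way this is commonly handled on a running spark-ec2 cluster, without stopping the EC2 instances at all; the paths below assume the standard spark-ec2 layout described above, and the exact script locations may differ on a given cluster:)

# on the master, edit the worker core count (or comment the line out entirely)
vi /root/spark/conf/spark-env.sh          # e.g. change: export SPARK_WORKER_CORES=2
# push the updated conf directory out to all slaves
/root/spark-ec2/copy-dir /root/spark/conf
# restart the standalone master and workers so they pick up the new setting
/root/spark/sbin/stop-all.sh
/root/spark/sbin/start-all.sh

Stopping and starting the cluster through the spark-ec2 script stops and starts the EC2 instances themselves, which is why the master and slaves come back with new public addresses; restarting only the Spark daemons, as sketched above, avoids that.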