scala> val model = classifier.fit(trainData)
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.Vector, which is already registered.
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.DenseVector, which is already registered.
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.SparseVector, which is already registered.
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.Matrix, which is already registered.
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.DenseMatrix, which is already registered.
19/09/03 13:35:44 WARN UDTRegistration: Cannot register UDT for org.apache.spark.angel.ml.linalg.SparseMatrix, which is already registered.
19/09/03 13:35:45 ERROR Executor: Exception in task 0.0 in stage 12.0 (TID 12)
java.lang.Exception: Pls. start Angel first!
at com.tencent.angel.sona.core.ExecutorContext.sparkWorkerContext$lzycompute(ExecutorContext.scala:32)
at com.tencent.angel.sona.core.ExecutorContext.sparkWorkerContext(ExecutorContext.scala:30)
at com.tencent.angel.sona.core.ExecutorContext$.checkGraphModelPool(ExecutorContext.scala:65)
at com.tencent.angel.sona.core.ExecutorContext$.toGraphModelPool(ExecutorContext.scala:78)
at org.apache.spark.angel.ml.common.Trainer.trainOneBatch(Trainer.scala:43)
at org.apache.spark.angel.ml.classification.AngelClassifier$$anonfun$train$1$$anonfun$apply$mcVI$sp$1$$anonfun$8.apply(AngelClassifier.scala:245)
at org.apache.spark.angel.ml.classification.AngelClassifier$$anonfun$train$1$$anonfun$apply$mcVI$sp$1$$anonfun$8.apply(AngelClassifier.scala:245)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:185)
at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1015)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1013)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
19/09/03 13:35:45 WARN TaskSetManager: Lost task 0.0 in stage 12.0 (TID 12, localhost, executor driver): java.lang.Exception: Pls. start Angel first!
at com.tencent.angel.sona.core.ExecutorContext.sparkWorkerContext$lzycompute(ExecutorContext.scala:32)
at com.tencent.angel.sona.core.ExecutorContext.sparkWorkerContext(ExecutorContext.scala:30)
at com.tencent.angel.sona.core.ExecutorContext$.checkGraphModelPool(ExecutorContext.scala:65)
at com.tencent.angel.sona.core.ExecutorContext$.toGraphModelPool(ExecutorContext.scala:78)
at org.apache.spark.angel.ml.common.Trainer.trainOneBatch(Trainer.scala:43)
at org.apache.spark.angel.ml.classification.AngelClassifier$$anonfun$train$1$$anonfun$apply$mcVI$sp$1$$anonfun$8.apply(AngelClassifier.scala:245)
at org.apache.spark.angel.ml.classification.AngelClassifier$$anonfun$train$1$$anonfun$apply$mcVI$sp$1$$anonfun$8.apply(AngelClassifier.scala:245)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:185)
at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1015)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$14.apply(RDD.scala:1013)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2123)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
19/09/03 13:35:45 ERROR TaskSetManager: Task 0 in stage 12.0 failed 1 times; aborting job
My Spark version is 2.3.0. I was running the demo from this page in spark-shell, and the exception above was thrown when I ran the `classifier.fit(trainData)` line.
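For anyone hitting the same error: the "Pls. start Angel first!" exception is thrown from `ExecutorContext`, which expects the Angel parameter server to already be running when `fit` is called. A minimal sketch of starting it first, assuming the `DriverContext` API from `com.tencent.angel.sona.core` (method names may differ between SONA versions, so check your version's `DriverContext`):

```scala
import org.apache.spark.sql.SparkSession
import com.tencent.angel.sona.core.DriverContext

val spark = SparkSession.builder()
  .appName("angel-sona-demo")
  .getOrCreate()

// Start the Angel parameter server and PS agents BEFORE calling fit(...).
// startAngelAndPSAgent is the entry point used in the SONA examples; if it
// is absent in your version, look for the equivalent start method.
val driverCtx = DriverContext.get(spark.sparkContext)
driverCtx.startAngelAndPSAgent()

// ... build the classifier and train as in the demo ...
// val model = classifier.fit(trainData)

// Shut the parameter server down once training is finished.
driverCtx.stopAngelAndPSAgent()
spark.stop()
```

If the parameter server is started this way before `fit`, the executors can obtain a worker context and the exception above should not occur.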