This repository was archived by the owner on Jan 9, 2020. It is now read-only.
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala (6 additions, 3 deletions)
@@ -153,9 +153,12 @@ private[spark] class KubernetesClusterSchedulerBackend(
       } { executorExited =>
         logWarning(s"Removing executor $executorId with loss reason " + executorExited.message)
         removeExecutor(executorId, executorExited)
-        // We keep around executors that have exit conditions caused by the application. This
-        // allows them to be debugged later on. Otherwise, mark them as to be deleted from the
-        // the API server.
+        // We don't delete the pod running the executor that has an exit condition caused by
+        // the application from the Kubernetes API server. This allows users to debug later on
+        // through commands such as "kubectl logs <pod name>" and
+        // "kubectl describe pod <pod name>". Note that exited containers have terminated and
+        // therefore won't take CPU and memory resources.
+        // Otherwise, the executor pod is marked to be deleted from the API server.
         if (executorExited.exitCausedByApp) {
           logInfo(s"Executor $executorId exited because of the application.")
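The decision the new comment describes can be sketched as follows. This is a minimal illustration of the keep-vs-delete logic only; `ExecutorExited` and `deletePodFromApiServer` here are simplified stand-ins, not Spark's actual types or methods.

```scala
// Sketch of the branch in the diff above: pods for executors whose exit was
// caused by the application are kept for post-mortem debugging via
// "kubectl logs <pod name>" / "kubectl describe pod <pod name>"; an exited
// container no longer consumes CPU or memory, so keeping the pod is cheap.
case class ExecutorExited(exitCausedByApp: Boolean, message: String)

def handleExecutorExit(executorId: String, exited: ExecutorExited): Unit = {
  if (exited.exitCausedByApp) {
    // Keep the pod on the API server so users can inspect it later.
    println(s"Executor $executorId exited because of the application.")
  } else {
    // Exit not caused by the application: mark the pod for deletion.
    deletePodFromApiServer(executorId)
  }
}

// Hypothetical helper standing in for the real cleanup path.
def deletePodFromApiServer(executorId: String): Unit =
  println(s"Marking pod for executor $executorId for deletion from the API server.")
```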