
Packages:

sparkoperator.k8s.io/v1beta2

Package v1beta2 is the v1beta2 version of the API.

Resource Types:

    ApplicationState

    (Appears on:SparkApplicationStatus)

    ApplicationState tells the current state of the application and an error message in case of failures.

    Field Description
    state
    ApplicationStateType
    errorMessage
    string

    ApplicationStateType (string alias)

    (Appears on:ApplicationState)

    ApplicationStateType represents the type of the current state of an application.

    Value Description

    "COMPLETED"

    "FAILED"

    "SUBMISSION_FAILED"

    "FAILING"

    "INVALIDATING"

    ""

    "PENDING_RERUN"

    "RUNNING"

    "SUBMITTED"

    "SUCCEEDING"

    "UNKNOWN"

    BatchSchedulerConfiguration

    (Appears on:SparkApplicationSpec)

    BatchSchedulerConfiguration is used to configure how a Spark application is batch-scheduled.

    Field Description
    queue
    string
    (Optional)

    Queue is the resource queue the application belongs to; it is used by the Volcano batch scheduler.

    priorityClassName
    string
    (Optional)

    PriorityClassName is the name of a Kubernetes PriorityClass resource; it is used by the Volcano batch scheduler.

    resources
    Kubernetes core/v1.ResourceList
    (Optional)

    Resources is a custom resource list for the application, usually used to define a lower-bound limit. If specified, the Volcano scheduler treats it as the resources requested.
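    Taken together, these fields are set under batchSchedulerOptions in a SparkApplication. A minimal sketch, assuming the Volcano scheduler is installed (the queue name, priority class, and resource values below are hypothetical):

    ```yaml
    spec:
      batchScheduler: volcano            # select the Volcano batch scheduler
      batchSchedulerOptions:
        queue: default                   # Volcano resource queue for the application
        priorityClassName: high-priority # hypothetical Kubernetes PriorityClass
        resources:                       # lower-bound request considered by Volcano
          cpu: "4"
          memory: 8Gi
    ```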

    ConcurrencyPolicy (string alias)

    (Appears on:ScheduledSparkApplicationSpec)

    Value Description

    "Allow"

    ConcurrencyAllow allows SparkApplications to run concurrently.

    "Forbid"

    ConcurrencyForbid forbids concurrent runs of SparkApplications, skipping the next run if the previous one hasn’t finished yet.

    "Replace"

    ConcurrencyReplace kills the currently running SparkApplication instance and replaces it with a new one.

    Dependencies

    (Appears on:SparkApplicationSpec)

    Dependencies specifies all possible types of dependencies of a Spark application.

    Field Description
    jars
    []string
    (Optional)

    Jars is a list of JAR files the Spark application depends on.

    files
    []string
    (Optional)

    Files is a list of files the Spark application depends on.

    pyFiles
    []string
    (Optional)

    PyFiles is a list of Python files the Spark application depends on.

    packages
    []string
    (Optional)

    Packages is a list of maven coordinates of jars to include on the driver and executor classpaths. This will search the local maven repo, then maven central and any additional remote repositories given by the “repositories” option. Each package should be of the form “groupId:artifactId:version”.

    excludePackages
    []string
    (Optional)

    ExcludePackages is a list of “groupId:artifactId” coordinates to exclude while resolving the dependencies provided in Packages, to avoid dependency conflicts.

    repositories
    []string
    (Optional)

    Repositories is a list of additional remote repositories to search for the maven coordinate given with the “packages” option.

    archives
    []string
    (Optional)

    Archives is a list of archives to be extracted into the working directory of each executor.
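    A sketch of how these fields appear under spec.deps (all URLs, paths, and Maven coordinates below are hypothetical placeholders):

    ```yaml
    spec:
      deps:
        jars:
          - https://repo.example.com/libs/extra.jar
        files:
          - https://repo.example.com/conf/app.conf
        pyFiles:
          - local:///opt/app/helpers.py
        packages:
          - com.example:my-lib:1.0.0        # groupId:artifactId:version
        excludePackages:
          - com.example:conflicting-lib     # groupId:artifactId
        repositories:
          - https://repo.example.com/maven2
        archives:
          - https://repo.example.com/data/assets.zip
    ```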

    DeployMode (string alias)

    (Appears on:SparkApplicationSpec)

    DeployMode describes the type of deployment of a Spark application.

    Value Description

    "client"

    "cluster"

    "in-cluster-client"

    DriverInfo

    (Appears on:SparkApplicationStatus)

    DriverInfo captures information about the driver.

    Field Description
    webUIServiceName
    string
    webUIAddress
    string

    Details of the UI exposed via a ClusterIP service, accessible from within the cluster.

    webUIPort
    int32
    webUIIngressName
    string

    Details of the ingress, if an ingress for the UI was created.

    webUIIngressAddress
    string
    podName
    string

    DriverIngressConfiguration

    (Appears on:SparkApplicationSpec)

    DriverIngressConfiguration is for driver ingress specific configuration parameters.

    Field Description
    servicePort
    int32

    ServicePort allows configuring the port at service level that might be different from the targetPort.

    servicePortName
    string

    ServicePortName allows configuring the name of the service port. This may be useful for sidecar proxies like Envoy injected by Istio, which require specific port names to treat traffic as proper HTTP.

    serviceType
    Kubernetes core/v1.ServiceType
    (Optional)

    ServiceType allows configuring the type of the service. Defaults to ClusterIP.

    serviceAnnotations
    map[string]string
    (Optional)

    ServiceAnnotations is a map of key,value pairs of annotations that might be added to the service object.

    serviceLabels
    map[string]string
    (Optional)

    ServiceLabels is a map of key,value pairs of labels that might be added to the service object.

    ingressURLFormat
    string

    IngressURLFormat is the URL for the ingress.

    ingressAnnotations
    map[string]string
    (Optional)

    IngressAnnotations is a map of key,value pairs of annotations that might be added to the ingress object, e.g. to specify nginx as the ingress.class.

    ingressTLS
    []Kubernetes networking/v1.IngressTLS
    (Optional)

    IngressTLS is useful if SSL/TLS certificates need to be declared on the ingress object.
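    For illustration, a driverIngressOptions entry exposing a hypothetical extra port on the driver (host names, annotations, and the TLS secret name are placeholders):

    ```yaml
    spec:
      driverIngressOptions:
        - servicePort: 4041
          servicePortName: http-custom   # named so Istio/Envoy treats traffic as HTTP
          serviceType: ClusterIP
          ingressURLFormat: "app.ingress.example.com"
          ingressAnnotations:
            kubernetes.io/ingress.class: nginx
          ingressTLS:
            - hosts:
                - app.ingress.example.com
              secretName: ingress-tls-cert   # hypothetical TLS secret
    ```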

    DriverSpec

    (Appears on:SparkApplicationSpec)

    DriverSpec is specification of the driver.

    Field Description
    SparkPodSpec
    SparkPodSpec

    (Members of SparkPodSpec are embedded into this type.)

    podName
    string
    (Optional)

    PodName is the name of the driver pod that the user creates. This is used for the in-cluster client mode in which the user creates a client pod where the driver of the user application runs. It’s an error to set this field if Mode is not in-cluster-client.

    coreRequest
    string
    (Optional)

    CoreRequest is the physical CPU core request for the driver. Maps to spark.kubernetes.driver.request.cores that is available since Spark 3.0.

    javaOptions
    string
    (Optional)

    JavaOptions is a string of extra JVM options to pass to the driver. For instance, GC settings or other logging.

    lifecycle
    Kubernetes core/v1.Lifecycle
    (Optional)

    Lifecycle for running preStop or postStart commands.

    kubernetesMaster
    string
    (Optional)

    KubernetesMaster is the URL of the Kubernetes master used by the driver to manage executor pods and other Kubernetes resources. Default to https://kubernetes.default.svc.

    serviceAnnotations
    map[string]string
    (Optional)

    ServiceAnnotations defines the annotations to be added to the Kubernetes headless service used by executors to connect to the driver.

    serviceLabels
    map[string]string
    (Optional)

    ServiceLabels defines the labels to be added to the Kubernetes headless service used by executors to connect to the driver.

    ports
    []Port
    (Optional)

    Ports settings for the pods, following the Kubernetes specifications.

    priorityClassName
    string
    (Optional)

    PriorityClassName is the name of the PriorityClass for the driver pod.
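    A sketch of a driver section combining embedded SparkPodSpec fields with the driver-specific fields above (all values, class names, and annotations are illustrative):

    ```yaml
    spec:
      driver:
        cores: 1
        coreRequest: "500m"        # spark.kubernetes.driver.request.cores (Spark 3.0+)
        memory: 1g
        javaOptions: "-XX:+UseG1GC"
        priorityClassName: spark-driver-priority   # hypothetical PriorityClass
        serviceAnnotations:
          example.com/team: data-platform
        ports:
          - name: custom-port
            protocol: TCP
            containerPort: 4041
    ```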

    DriverState (string alias)

    DriverState tells the current state of a Spark driver.

    Value Description

    "COMPLETED"

    "FAILED"

    "PENDING"

    "RUNNING"

    "UNKNOWN"

    DynamicAllocation

    (Appears on:SparkApplicationSpec)

    DynamicAllocation contains configuration options for dynamic allocation.

    Field Description
    enabled
    bool

    Enabled controls whether dynamic allocation is enabled or not.

    initialExecutors
    int32
    (Optional)

    InitialExecutors is the initial number of executors to request. If .spec.executor.instances is also set, the initial number of executors is set to the bigger of that and this option.

    minExecutors
    int32
    (Optional)

    MinExecutors is the lower bound for the number of executors if dynamic allocation is enabled.

    maxExecutors
    int32
    (Optional)

    MaxExecutors is the upper bound for the number of executors if dynamic allocation is enabled.

    shuffleTrackingTimeout
    int64
    (Optional)

    ShuffleTrackingTimeout controls the timeout in milliseconds for executors that are holding shuffle data if shuffle tracking is enabled (true by default if dynamic allocation is enabled).
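    For example, a dynamicAllocation block that scales between one and ten executors, releasing executors holding shuffle data after ten minutes (values are illustrative):

    ```yaml
    spec:
      dynamicAllocation:
        enabled: true
        initialExecutors: 2        # the bigger of this and .spec.executor.instances is used
        minExecutors: 1
        maxExecutors: 10
        shuffleTrackingTimeout: 600000   # milliseconds, i.e. 10 minutes
    ```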

    ExecutorSpec

    (Appears on:SparkApplicationSpec)

    ExecutorSpec is specification of the executor.

    Field Description
    SparkPodSpec
    SparkPodSpec

    (Members of SparkPodSpec are embedded into this type.)

    instances
    int32
    (Optional)

    Instances is the number of executor instances.

    coreRequest
    string
    (Optional)

    CoreRequest is the physical CPU core request for the executors. Maps to spark.kubernetes.executor.request.cores that is available since Spark 2.4.

    javaOptions
    string
    (Optional)

    JavaOptions is a string of extra JVM options to pass to the executors. For instance, GC settings or other logging.

    lifecycle
    Kubernetes core/v1.Lifecycle
    (Optional)

    Lifecycle for running preStop or postStart commands.

    deleteOnTermination
    bool
    (Optional)

    DeleteOnTermination specifies whether executor pods should be deleted in case of failure or normal termination. Maps to spark.kubernetes.executor.deleteOnTermination that is available since Spark 3.0.

    ports
    []Port
    (Optional)

    Ports settings for the pods, following the Kubernetes specifications.

    priorityClassName
    string
    (Optional)

    PriorityClassName is the name of the PriorityClass for the executor pod.
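    An illustrative executor section using the fields above (all values are placeholders):

    ```yaml
    spec:
      executor:
        instances: 4
        coreRequest: "500m"        # spark.kubernetes.executor.request.cores (Spark 2.4+)
        memory: 2g
        deleteOnTermination: true  # spark.kubernetes.executor.deleteOnTermination (Spark 3.0+)
        javaOptions: "-XX:+UseG1GC"
    ```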

    ExecutorState (string alias)

    (Appears on:SparkApplicationStatus)

    ExecutorState tells the current state of an executor.

    Value Description

    "COMPLETED"

    "FAILED"

    "PENDING"

    "RUNNING"

    "UNKNOWN"

    GPUSpec

    (Appears on:SparkPodSpec)

    Field Description
    name
    string

    Name is the GPU resource name, such as nvidia.com/gpu or amd.com/gpu.

    quantity
    int64

    Quantity is the number of GPUs to request for driver or executor.

    MonitoringSpec

    (Appears on:SparkApplicationSpec)

    MonitoringSpec defines the monitoring specification.

    Field Description
    exposeDriverMetrics
    bool

    ExposeDriverMetrics specifies whether to expose metrics on the driver.

    exposeExecutorMetrics
    bool

    ExposeExecutorMetrics specifies whether to expose metrics on the executors.

    metricsProperties
    string
    (Optional)

    MetricsProperties is the content of a custom metrics.properties for configuring the Spark metric system. If not specified, the content in spark-docker/conf/metrics.properties will be used.

    metricsPropertiesFile
    string
    (Optional)

    MetricsPropertiesFile is the container local path of file metrics.properties for configuring the Spark metric system. If not specified, value /etc/metrics/conf/metrics.properties will be used.

    prometheus
    PrometheusSpec
    (Optional)

    Prometheus is for configuring the Prometheus JMX exporter.

    NameKey

    (Appears on:SparkPodSpec)

    NameKey represents the name and key of a SecretKeyRef.

    Field Description
    name
    string
    key
    string

    NamePath

    (Appears on:SparkPodSpec)

    NamePath is a pair of a name and a path to which the named object should be mounted.

    Field Description
    name
    string
    path
    string

    Port

    (Appears on:DriverSpec, ExecutorSpec)

    Port represents the port definition in the pods objects.

    Field Description
    name
    string
    protocol
    string
    containerPort
    int32

    PrometheusSpec

    (Appears on:MonitoringSpec)

    PrometheusSpec defines the Prometheus specification when Prometheus is to be used for collecting and exposing metrics.

    Field Description
    jmxExporterJar
    string

    JmxExporterJar is the path to the Prometheus JMX exporter jar in the container.

    port
    int32
    (Optional)

    Port is the port of the HTTP server run by the Prometheus JMX exporter. If not specified, 8090 will be used as the default.

    portName
    string
    (Optional)

    PortName is the name of the Prometheus JMX exporter port. If not specified, jmx-exporter will be used as the default.

    configFile
    string
    (Optional)

    ConfigFile is the path to the custom Prometheus configuration file provided in the Spark image. ConfigFile takes precedence over Configuration, which is shown below.

    configuration
    string
    (Optional)

    Configuration is the content of the Prometheus configuration needed by the Prometheus JMX exporter. If not specified, the content in spark-docker/conf/prometheus.yaml will be used. Configuration has no effect if ConfigFile is set.
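    MonitoringSpec and PrometheusSpec combine as sketched below; the JMX exporter jar must already exist in the container image at the given path (the paths here are hypothetical):

    ```yaml
    spec:
      monitoring:
        exposeDriverMetrics: true
        exposeExecutorMetrics: true
        metricsPropertiesFile: /etc/metrics/conf/metrics.properties
        prometheus:
          jmxExporterJar: /prometheus/jmx_prometheus_javaagent.jar  # path inside the image
          port: 8090             # default if omitted
          portName: jmx-exporter # default if omitted
    ```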

    RestartPolicy

    (Appears on:SparkApplicationSpec)

    RestartPolicy specifies whether, and under which conditions, the controller should restart a terminated application. It completely defines the actions to be taken on any kind of failure during an application run.

    Field Description
    type
    RestartPolicyType

    Type specifies the RestartPolicyType.

    onSubmissionFailureRetries
    int32
    (Optional)

    OnSubmissionFailureRetries is the number of times to retry submitting an application before giving up. This is best effort, and actual retry attempts can be >= the value specified due to caching. This field is required if RestartPolicy is OnFailure.

    onFailureRetries
    int32
    (Optional)

    OnFailureRetries is the number of times to retry running an application before giving up.

    onSubmissionFailureRetryInterval
    int64
    (Optional)

    OnSubmissionFailureRetryInterval is the interval in seconds between retries on failed submissions.

    onFailureRetryInterval
    int64
    (Optional)

    OnFailureRetryInterval is the interval in seconds between retries on failed runs.
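    A sketch of a restartPolicy that retries both failed runs and failed submissions:

    ```yaml
    spec:
      restartPolicy:
        type: OnFailure
        onFailureRetries: 3
        onFailureRetryInterval: 10            # seconds between retries of failed runs
        onSubmissionFailureRetries: 5
        onSubmissionFailureRetryInterval: 20  # seconds between retries of failed submissions
    ```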

    RestartPolicyType (string alias)

    (Appears on:RestartPolicy)

    Value Description

    "Always"

    "Never"

    "OnFailure"

    ScheduleState (string alias)

    (Appears on:ScheduledSparkApplicationStatus)

    Value Description

    "FailedValidation"

    ""

    "Scheduled"

    "Validating"

    ScheduledSparkApplication

    ScheduledSparkApplication is the Schema for the scheduledsparkapplications API.

    Field Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    ScheduledSparkApplicationSpec


    schedule
    string

    Schedule is a cron schedule on which the application should run.

    template
    SparkApplicationSpec

    Template is a template from which SparkApplication instances can be created.

    suspend
    bool
    (Optional)

    Suspend is a flag telling the controller to suspend subsequent runs of the application if set to true. Defaults to false.

    concurrencyPolicy
    ConcurrencyPolicy

    ConcurrencyPolicy is the policy governing concurrent SparkApplication runs.

    successfulRunHistoryLimit
    int32
    (Optional)

    SuccessfulRunHistoryLimit is the number of past successful runs of the application to keep. Defaults to 1.

    failedRunHistoryLimit
    int32
    (Optional)

    FailedRunHistoryLimit is the number of past failed runs of the application to keep. Defaults to 1.

    status
    ScheduledSparkApplicationStatus

    ScheduledSparkApplicationSpec

    (Appears on:ScheduledSparkApplication)

    ScheduledSparkApplicationSpec defines the desired state of ScheduledSparkApplication.

    Field Description
    schedule
    string

    Schedule is a cron schedule on which the application should run.

    template
    SparkApplicationSpec

    Template is a template from which SparkApplication instances can be created.

    suspend
    bool
    (Optional)

    Suspend is a flag telling the controller to suspend subsequent runs of the application if set to true. Defaults to false.

    concurrencyPolicy
    ConcurrencyPolicy

    ConcurrencyPolicy is the policy governing concurrent SparkApplication runs.

    successfulRunHistoryLimit
    int32
    (Optional)

    SuccessfulRunHistoryLimit is the number of past successful runs of the application to keep. Defaults to 1.

    failedRunHistoryLimit
    int32
    (Optional)

    FailedRunHistoryLimit is the number of past failed runs of the application to keep. Defaults to 1.
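    An illustrative ScheduledSparkApplication putting these fields together (the image, class, and file paths are hypothetical placeholders):

    ```yaml
    apiVersion: sparkoperator.k8s.io/v1beta2
    kind: ScheduledSparkApplication
    metadata:
      name: spark-pi-scheduled
    spec:
      schedule: "0 2 * * *"       # cron: every day at 02:00
      concurrencyPolicy: Forbid   # skip a run if the previous one hasn't finished
      suspend: false
      successfulRunHistoryLimit: 3
      failedRunHistoryLimit: 1
      template:                   # a full SparkApplicationSpec
        type: Scala
        mode: cluster
        sparkVersion: "3.5.0"
        image: registry.example.com/spark:3.5.0
        mainClass: com.example.SparkPi
        mainApplicationFile: local:///opt/app/app.jar
    ```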

    ScheduledSparkApplicationStatus

    (Appears on:ScheduledSparkApplication)

    ScheduledSparkApplicationStatus defines the observed state of ScheduledSparkApplication.

    Field Description
    lastRun
    Kubernetes meta/v1.Time

    LastRun is the time when the last run of the application started.

    nextRun
    Kubernetes meta/v1.Time

    NextRun is the time when the next run of the application will start.

    lastRunName
    string

    LastRunName is the name of the SparkApplication for the most recent run of the application.

    pastSuccessfulRunNames
    []string

    PastSuccessfulRunNames keeps the names of SparkApplications for past successful runs.

    pastFailedRunNames
    []string

    PastFailedRunNames keeps the names of SparkApplications for past failed runs.

    scheduleState
    ScheduleState

    ScheduleState is the current scheduling state of the application.

    reason
    string

    Reason tells why the ScheduledSparkApplication is in the particular ScheduleState.

    SecretInfo

    (Appears on:SparkPodSpec)

    SecretInfo captures information of a secret.

    Field Description
    name
    string
    path
    string
    secretType
    SecretType

    SecretType (string alias)

    (Appears on:SecretInfo)

    SecretType tells the type of a secret.

    Value Description

    "GCPServiceAccount"

    SecretTypeGCPServiceAccount is for secrets from a GCP service account Json key file that needs the environment variable GOOGLE_APPLICATION_CREDENTIALS.

    "Generic"

    SecretTypeGeneric is for secrets that need no special handling.

    "HadoopDelegationToken"

    SecretTypeHadoopDelegationToken is for secrets from a Hadoop delegation token that needs the environment variable HADOOP_TOKEN_FILE_LOCATION.

    SparkApplication

    SparkApplication is the Schema for the sparkapplications API.

    Field Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    SparkApplicationSpec


    type
    SparkApplicationType

    Type tells the type of the Spark application.

    sparkVersion
    string

    SparkVersion is the version of Spark the application uses.

    mode
    DeployMode

    Mode is the deployment mode of the Spark application.

    proxyUser
    string
    (Optional)

    ProxyUser specifies the user to impersonate when submitting the application. It maps to the command-line flag “--proxy-user” in spark-submit.

    image
    string
    (Optional)

    Image is the container image for the driver, executor, and init-container. Any custom container images for the driver, executor, or init-container takes precedence over this.

    imagePullPolicy
    string
    (Optional)

    ImagePullPolicy is the image pull policy for the driver, executor, and init-container.

    imagePullSecrets
    []string
    (Optional)

    ImagePullSecrets is the list of image-pull secrets.

    mainClass
    string
    (Optional)

    MainClass is the fully-qualified main class of the Spark application. This only applies to Java/Scala Spark applications.

    mainApplicationFile
    string

    MainFile is the path to a bundled JAR, Python, or R file of the application.

    arguments
    []string
    (Optional)

    Arguments is a list of arguments to be passed to the application.

    sparkConf
    map[string]string
    (Optional)

    SparkConf carries user-specified Spark configuration properties as they would use the “--conf” option in spark-submit.

    hadoopConf
    map[string]string
    (Optional)

    HadoopConf carries user-specified Hadoop configuration properties as they would use the “--conf” option in spark-submit. The SparkApplication controller automatically adds the prefix “spark.hadoop.” to Hadoop configuration properties.

    sparkConfigMap
    string
    (Optional)

    SparkConfigMap carries the name of the ConfigMap containing Spark configuration files such as log4j.properties. The controller will add environment variable SPARK_CONF_DIR to the path where the ConfigMap is mounted to.

    hadoopConfigMap
    string
    (Optional)

    HadoopConfigMap carries the name of the ConfigMap containing Hadoop configuration files such as core-site.xml. The controller will add environment variable HADOOP_CONF_DIR to the path where the ConfigMap is mounted to.

    volumes
    []Kubernetes core/v1.Volume
    (Optional)

    Volumes is the list of Kubernetes volumes that can be mounted by the driver and/or executors.

    driver
    DriverSpec

    Driver is the driver specification.

    executor
    ExecutorSpec

    Executor is the executor specification.

    deps
    Dependencies
    (Optional)

    Deps captures all possible types of dependencies of a Spark application.

    restartPolicy
    RestartPolicy

    RestartPolicy defines the policy on if and in which conditions the controller should restart an application.

    nodeSelector
    map[string]string
    (Optional)

    NodeSelector is the Kubernetes node selector to be added to the driver and executor pods. This field is mutually exclusive with nodeSelector at podSpec level (driver or executor). This field will be deprecated in future versions (at SparkApplicationSpec level).

    failureRetries
    int32
    (Optional)

    FailureRetries is the number of times to retry a failed application before giving up. This is best effort and actual retry attempts can be >= the value specified.

    retryInterval
    int64
    (Optional)

    RetryInterval is the interval in seconds between submission retries.

    pythonVersion
    string
    (Optional)

    This sets the major Python version of the docker image used to run the driver and executor containers. It can be either 2 or 3; the default is 2.

    memoryOverheadFactor
    string
    (Optional)

    This sets the memory overhead factor used to allocate non-JVM memory. For JVM-based jobs this value defaults to 0.10; for non-JVM jobs, 0.40. This field is overridden by Spec.Driver.MemoryOverhead and Spec.Executor.MemoryOverhead if they are set.

    monitoring
    MonitoringSpec
    (Optional)

    Monitoring configures how monitoring is handled.

    batchScheduler
    string
    (Optional)

    BatchScheduler configures which batch scheduler will be used for scheduling.

    timeToLiveSeconds
    int64
    (Optional)

    TimeToLiveSeconds defines the Time-To-Live (TTL) duration in seconds for this SparkApplication after its termination. The SparkApplication object will be garbage collected if the current time is more than the TimeToLiveSeconds since its termination.

    batchSchedulerOptions
    BatchSchedulerConfiguration
    (Optional)

    BatchSchedulerOptions provides fine-grained control over batch scheduling.

    sparkUIOptions
    SparkUIConfiguration
    (Optional)

    SparkUIOptions allows configuring the Service and the Ingress to expose the Spark UI.

    driverIngressOptions
    []DriverIngressConfiguration
    (Optional)

    DriverIngressOptions allows configuring the Service and the Ingress to expose ports inside the Spark driver.

    dynamicAllocation
    DynamicAllocation
    (Optional)

    DynamicAllocation configures dynamic allocation that becomes available for the Kubernetes scheduler backend since Spark 3.0.

    status
    SparkApplicationStatus

    SparkApplicationSpec

    (Appears on:ScheduledSparkApplicationSpec, SparkApplication)

    SparkApplicationSpec defines the desired state of SparkApplication. It carries every piece of information a spark-submit command takes and recognizes.

    Field Description
    type
    SparkApplicationType

    Type tells the type of the Spark application.

    sparkVersion
    string

    SparkVersion is the version of Spark the application uses.

    mode
    DeployMode

    Mode is the deployment mode of the Spark application.

    proxyUser
    string
    (Optional)

    ProxyUser specifies the user to impersonate when submitting the application. It maps to the command-line flag “--proxy-user” in spark-submit.

    image
    string
    (Optional)

    Image is the container image for the driver, executor, and init-container. Any custom container images for the driver, executor, or init-container takes precedence over this.

    imagePullPolicy
    string
    (Optional)

    ImagePullPolicy is the image pull policy for the driver, executor, and init-container.

    imagePullSecrets
    []string
    (Optional)

    ImagePullSecrets is the list of image-pull secrets.

    mainClass
    string
    (Optional)

    MainClass is the fully-qualified main class of the Spark application. This only applies to Java/Scala Spark applications.

    mainApplicationFile
    string

    MainFile is the path to a bundled JAR, Python, or R file of the application.

    arguments
    []string
    (Optional)

    Arguments is a list of arguments to be passed to the application.

    sparkConf
    map[string]string
    (Optional)

    SparkConf carries user-specified Spark configuration properties as they would use the “--conf” option in spark-submit.

    hadoopConf
    map[string]string
    (Optional)

    HadoopConf carries user-specified Hadoop configuration properties as they would use the “--conf” option in spark-submit. The SparkApplication controller automatically adds the prefix “spark.hadoop.” to Hadoop configuration properties.

    sparkConfigMap
    string
    (Optional)

    SparkConfigMap carries the name of the ConfigMap containing Spark configuration files such as log4j.properties. The controller will add environment variable SPARK_CONF_DIR to the path where the ConfigMap is mounted to.

    hadoopConfigMap
    string
    (Optional)

    HadoopConfigMap carries the name of the ConfigMap containing Hadoop configuration files such as core-site.xml. The controller will add environment variable HADOOP_CONF_DIR to the path where the ConfigMap is mounted to.

    volumes
    []Kubernetes core/v1.Volume
    (Optional)

    Volumes is the list of Kubernetes volumes that can be mounted by the driver and/or executors.

    driver
    DriverSpec

    Driver is the driver specification.

    executor
    ExecutorSpec

    Executor is the executor specification.

    deps
    Dependencies
    (Optional)

    Deps captures all possible types of dependencies of a Spark application.

    restartPolicy
    RestartPolicy

    RestartPolicy defines the policy on if and in which conditions the controller should restart an application.

    nodeSelector
    map[string]string
    (Optional)

    NodeSelector is the Kubernetes node selector to be added to the driver and executor pods. This field is mutually exclusive with nodeSelector at podSpec level (driver or executor). This field will be deprecated in future versions (at SparkApplicationSpec level).

    failureRetries
    int32
    (Optional)

    FailureRetries is the number of times to retry a failed application before giving up. This is best effort and actual retry attempts can be >= the value specified.

    retryInterval
    int64
    (Optional)

    RetryInterval is the interval in seconds between submission retries.

    pythonVersion
    string
    (Optional)

    This sets the major Python version of the docker image used to run the driver and executor containers. It can be either 2 or 3; the default is 2.

    memoryOverheadFactor
    string
    (Optional)

    This sets the memory overhead factor used to allocate non-JVM memory. For JVM-based jobs this value defaults to 0.10; for non-JVM jobs, 0.40. This field is overridden by Spec.Driver.MemoryOverhead and Spec.Executor.MemoryOverhead if they are set.

    monitoring
    MonitoringSpec
    (Optional)

    Monitoring configures how monitoring is handled.

    batchScheduler
    string
    (Optional)

    BatchScheduler configures which batch scheduler will be used for scheduling.

    timeToLiveSeconds
    int64
    (Optional)

    TimeToLiveSeconds defines the Time-To-Live (TTL) duration in seconds for this SparkApplication after its termination. The SparkApplication object will be garbage collected if the current time is more than the TimeToLiveSeconds since its termination.

    batchSchedulerOptions
    BatchSchedulerConfiguration
    (Optional)

    BatchSchedulerOptions provides fine-grained control over batch scheduling.

    sparkUIOptions
    SparkUIConfiguration
    (Optional)

    SparkUIOptions allows configuring the Service and the Ingress to expose the Spark UI.

    driverIngressOptions
    []DriverIngressConfiguration
    (Optional)

    DriverIngressOptions allows configuring the Service and the Ingress to expose ports inside the Spark driver.

    dynamicAllocation
    DynamicAllocation
    (Optional)

    DynamicAllocation configures dynamic allocation that becomes available for the Kubernetes scheduler backend since Spark 3.0.
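    Tying the spec together, a minimal SparkApplication manifest (the image, class, file, and bucket names are hypothetical placeholders):

    ```yaml
    apiVersion: sparkoperator.k8s.io/v1beta2
    kind: SparkApplication
    metadata:
      name: word-count
    spec:
      type: Scala
      mode: cluster
      sparkVersion: "3.5.0"
      image: registry.example.com/spark:3.5.0
      mainClass: com.example.WordCount
      mainApplicationFile: local:///opt/app/app.jar
      arguments:
        - "s3a://bucket/input"      # hypothetical input path
      sparkConf:
        spark.eventLog.enabled: "true"
      restartPolicy:
        type: Never
      driver:
        cores: 1
        memory: 1g
      executor:
        instances: 2
        cores: 2
        memory: 2g
    ```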

    SparkApplicationStatus

    (Appears on:SparkApplication)

    SparkApplicationStatus defines the observed state of SparkApplication.

    Field Description
    sparkApplicationId
    string

    SparkApplicationID is set by the Spark distribution (via the spark.app.id config) on the driver and executor pods.

    submissionID
    string

    SubmissionID is a unique ID of the current submission of the application.

    lastSubmissionAttemptTime
    Kubernetes meta/v1.Time

    LastSubmissionAttemptTime is the time for the last application submission attempt.

    terminationTime
    Kubernetes meta/v1.Time

    TerminationTime is the time when the application runs to completion, if it does.

    driverInfo
    DriverInfo

    DriverInfo has information about the driver.

    applicationState
    ApplicationState

    AppState tells the overall application state.

    executorState
    map[string]github.com/kubeflow/spark-operator/api/v1beta2.ExecutorState

    ExecutorState records the state of executors by executor Pod names.

    executionAttempts
    int32

    ExecutionAttempts is the total number of attempts to run a submitted application to completion. Incremented upon each attempted run of the application and reset upon invalidation.

    submissionAttempts
    int32

    SubmissionAttempts is the total number of attempts to submit an application to run. Incremented upon each attempted submission of the application and reset upon invalidation and rerun.

    SparkApplicationType (string alias)

    (Appears on:SparkApplicationSpec)

    SparkApplicationType describes the type of a Spark application.

    Value Description

    "Java"

    "Python"

    "R"

    "Scala"

    SparkPodSpec

    (Appears on:DriverSpec, ExecutorSpec)

    SparkPodSpec defines common things that can be customized for a Spark driver or executor pod. TODO: investigate if we should use v1.PodSpec and limit what can be set instead.

    Field Description
    template
    Kubernetes core/v1.PodTemplateSpec
    (Optional)

    Template is a pod template that can be used to define the driver or executor pod configurations that Spark configurations do not support. Spark version >= 3.0.0 is required. Ref: https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template.

    cores
    int32
    (Optional)

    Cores maps to spark.driver.cores or spark.executor.cores for the driver and executors, respectively.

coreLimit
string
(Optional)

CoreLimit specifies a hard limit on CPU cores for the pod.

    memory
    string
    (Optional)

    Memory is the amount of memory to request for the pod.

    memoryOverhead
    string
    (Optional)

    MemoryOverhead is the amount of off-heap memory to allocate in cluster mode, in MiB unless otherwise specified.

    gpu
    GPUSpec
    (Optional)

    GPU specifies GPU requirement for the pod.

    image
    string
    (Optional)

    Image is the container image to use. Overrides Spec.Image if set.

    configMaps
    []NamePath
    (Optional)

    ConfigMaps carries information of other ConfigMaps to add to the pod.

    secrets
    []SecretInfo
    (Optional)

    Secrets carries information of secrets to add to the pod.

    env
    []Kubernetes core/v1.EnvVar
    (Optional)

    Env carries the environment variables to add to the pod.

    envVars
    map[string]string
    (Optional)

    EnvVars carries the environment variables to add to the pod. Deprecated. Consider using env instead.

    envFrom
    []Kubernetes core/v1.EnvFromSource
    (Optional)

    EnvFrom is a list of sources to populate environment variables in the container.

    envSecretKeyRefs
    map[string]github.com/kubeflow/spark-operator/api/v1beta2.NameKey
    (Optional)

    EnvSecretKeyRefs holds a mapping from environment variable names to SecretKeyRefs. Deprecated. Consider using env instead.

    labels
    map[string]string
    (Optional)

    Labels are the Kubernetes labels to be added to the pod.

    annotations
    map[string]string
    (Optional)

    Annotations are the Kubernetes annotations to be added to the pod.

    volumeMounts
    []Kubernetes core/v1.VolumeMount
    (Optional)

    VolumeMounts specifies the volumes listed in “.spec.volumes” to mount into the main container’s filesystem.

    affinity
    Kubernetes core/v1.Affinity
    (Optional)

    Affinity specifies the affinity/anti-affinity settings for the pod.

    tolerations
    []Kubernetes core/v1.Toleration
    (Optional)

    Tolerations specifies the tolerations listed in “.spec.tolerations” to be applied to the pod.

    podSecurityContext
    Kubernetes core/v1.PodSecurityContext
    (Optional)

    PodSecurityContext specifies the PodSecurityContext to apply.

    securityContext
    Kubernetes core/v1.SecurityContext
    (Optional)

    SecurityContext specifies the container’s SecurityContext to apply.

    schedulerName
    string
    (Optional)

SchedulerName specifies the scheduler to be used for scheduling the pod.

    sidecars
    []Kubernetes core/v1.Container
    (Optional)

Sidecars is a list of sidecar containers that run alongside the main Spark container.

    initContainers
    []Kubernetes core/v1.Container
    (Optional)

    InitContainers is a list of init-containers that run to completion before the main Spark container.

    hostNetwork
    bool
    (Optional)

    HostNetwork indicates whether to request host networking for the pod or not.

    nodeSelector
    map[string]string
    (Optional)

    NodeSelector is the Kubernetes node selector to be added to the driver and executor pods. This field is mutually exclusive with nodeSelector at SparkApplication level (which will be deprecated).

    dnsConfig
    Kubernetes core/v1.PodDNSConfig
    (Optional)

DNSConfig specifies the DNS settings for the pod, following the Kubernetes specifications.

    terminationGracePeriodSeconds
    int64
    (Optional)

TerminationGracePeriodSeconds is the termination grace period, in seconds, for the pod.

    serviceAccount
    string
    (Optional)

    ServiceAccount is the name of the custom Kubernetes service account used by the pod.

    hostAliases
    []Kubernetes core/v1.HostAlias
    (Optional)

    HostAliases settings for the pod, following the Kubernetes specifications.

    shareProcessNamespace
    bool
    (Optional)

    ShareProcessNamespace settings for the pod, following the Kubernetes specifications.
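The fields above apply to both the driver and executor pod specs. A sketch of common usage, with illustrative values throughout (service account name, labels, and volume names are hypothetical):

```yaml
# Sketch of SparkPodSpec fields on the driver and executor; values are illustrative.
spec:
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    serviceAccount: spark           # hypothetical custom service account
    labels:
      version: 3.5.0
    env:
    - name: LOG_LEVEL               # hypothetical environment variable
      value: INFO
    volumeMounts:
    - name: data                    # must match an entry in .spec.volumes
      mountPath: /data
  executor:
    instances: 2
    cores: 1
    memory: "512m"
    nodeSelector:
      node-pool: spark              # hypothetical node label
```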

    SparkUIConfiguration

    (Appears on:SparkApplicationSpec)

    SparkUIConfiguration is for driver UI specific configuration parameters.

    Field Description
    servicePort
    int32
    (Optional)

ServicePort allows configuring the port at the service level, which may differ from the targetPort. The targetPort should be the same as the one defined in spark.ui.port.

    servicePortName
    string
    (Optional)

ServicePortName allows configuring the name of the service port. This may be useful for sidecar proxies like Envoy injected by Istio, which require specific port names to treat traffic as proper HTTP. Defaults to spark-driver-ui-port.

    serviceType
    Kubernetes core/v1.ServiceType
    (Optional)

    ServiceType allows configuring the type of the service. Defaults to ClusterIP.

    serviceAnnotations
    map[string]string
    (Optional)

ServiceAnnotations is a map of key-value pairs of annotations that might be added to the service object.

    serviceLabels
    map[string]string
    (Optional)

ServiceLabels is a map of key-value pairs of labels that might be added to the service object.

    ingressAnnotations
    map[string]string
    (Optional)

IngressAnnotations is a map of key-value pairs of annotations that might be added to the ingress object, e.g. to specify nginx as the ingress class.

    ingressTLS
    []Kubernetes networking/v1.IngressTLS
    (Optional)

IngressTLS is useful if SSL/TLS certificates need to be declared on the ingress object.
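Putting these fields together, a sparkUIOptions sketch might look like the following. The hostname, secret name, and annotation values are illustrative:

```yaml
# Sketch of the sparkUIOptions block; hostnames and secret names are hypothetical.
spec:
  sparkUIOptions:
    servicePort: 4040               # targetPort should match spark.ui.port
    servicePortName: spark-driver-ui-port
    serviceType: ClusterIP
    serviceLabels:
      app: spark-ui                 # illustrative label
    ingressAnnotations:
      kubernetes.io/ingress.class: nginx
    ingressTLS:
    - hosts:
      - spark.example.com           # hypothetical host
      secretName: spark-ui-tls      # hypothetical TLS secret
```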


    Generated with gen-crd-api-reference-docs.