
[FEAT] add driverPod/executorPod in Spark #6085

Open · wants to merge 9 commits into master

Conversation

machichima (Contributor) commented Dec 5, 2024

Tracking issue

#4105

Why are the changes needed?

Enable setting K8sPod separately for Spark Driver and Executor pods.

What changes were proposed in this pull request?

Adds driverPod and executorPod fields of type K8sPod to SparkJob, and uses the existing mergePodSpecs helper to merge the default podSpec with the supplied driverPod or executorPod.
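As a rough illustration of the merge the description refers to (a simplified sketch with stand-in types, not the actual mergePodSpecs from flytek8s, which operates on k8s.io/api/core/v1.PodSpec and also handles init containers):

```go
package main

import "fmt"

// Simplified stand-ins for illustration only; the real merge works on
// k8s.io/api/core/v1.PodSpec and merges many more fields than Image.
type Container struct {
	Name  string
	Image string
}

type PodSpec struct {
	Containers []Container
}

// overlayPodSpec returns a copy of base in which the container named
// primaryContainerName takes any non-empty fields from the custom spec.
func overlayPodSpec(base, custom PodSpec, primaryContainerName string) PodSpec {
	// Copy the slice so the caller's base spec is not mutated.
	merged := PodSpec{Containers: append([]Container(nil), base.Containers...)}
	for i := range merged.Containers {
		if merged.Containers[i].Name != primaryContainerName {
			continue
		}
		for _, cc := range custom.Containers {
			if cc.Name == primaryContainerName && cc.Image != "" {
				merged.Containers[i].Image = cc.Image
			}
		}
	}
	return merged
}

func main() {
	defaultSpec := PodSpec{Containers: []Container{{Name: "primary", Image: "default-image"}}}
	driverSpec := PodSpec{Containers: []Container{{Name: "primary", Image: "ghcr.io/machichima"}}}
	merged := overlayPodSpec(defaultSpec, driverSpec, "primary")
	fmt.Println(merged.Containers[0].Image) // ghcr.io/machichima
}
```

The same overlay runs once with the driverPod spec and once with the executorPod spec, so each side of the Spark application can diverge from the task's default pod.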

How was this patch tested?

Unit tests

I extended the existing Spark unit tests TestBuildResourceContainer and TestBuildResourcePodTemplate and created a new test, TestBuildResourceCustomK8SPod.

Test with my_spark example

Modified the `@task` decorator of the `hello_spark` function in the `my_spark` example as follows to set `driver_pod` and `executor_pod`:

# Imports assumed for this snippet (module paths may vary by flytekit version):
from flytekit import task, PodTemplate
from flytekit.models.task import K8sPod
from flytekitplugins.spark import Spark
from kubernetes.client import V1Container, V1EnvVar, V1PodSpec, V1Toleration

driver_pod_spec = V1PodSpec(
    containers=[
        V1Container(
            name="primary",
            image="ghcr.io/machichima",
            command=["echo"],
            args=["wow"],
            env=[V1EnvVar(name="x/custom-driver", value="driver")]
        ),
    ],
    tolerations=[
        V1Toleration(
            key="x/custom-driver",
            operator="Equal",
            value="foo-driver",
            effect="NoSchedule",
        ),
    ],
)

executor_pod_spec = V1PodSpec(
    containers=[
        V1Container(
            name="primary",
            image="ghcr.io/machichima",
            command=["echo"],
            args=["wow"],
            env=[V1EnvVar(name="x/custom-executor", value="executor")]
        ),
    ],
    tolerations=[
        V1Toleration(
            key="x/custom-executor",
            operator="Equal",
            value="foo-executor",
            effect="NoSchedule",
        ),
    ],
)

@task(
    task_config=Spark(
        # This configuration is applied to the Spark cluster
        spark_conf={
            "spark.executor.cores": "1",
            "spark.executor.instances": "2",
            "spark.driver.cores": "1",
            "spark.jars": "https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop3-latest.jar",
        },
        driver_pod=K8sPod(pod_spec=driver_pod_spec.to_dict()),
        executor_pod=K8sPod(pod_spec=executor_pod_spec.to_dict()),
    ),
    container_image=custom_image,
    pod_template=PodTemplate(primary_container_name="primary"),
)
def hello_spark():
    ...  # function body unchanged from the my_spark example

Verify the pods have Tolerations and EnvVar set.

❯ kubectl describe sparkapplications.sparkoperator.k8s.io -n flytesnacks-development acsqt4vd4pctzvp8t4cz-n0-0 | grep "Tolerations:" -A 4
    Tolerations:
      Effect:    NoSchedule
      Key:       x/custom-driver
      Operator:  Equal
      Value:     foo-driver
--
    Tolerations:
      Effect:             NoSchedule
      Key:                x/custom-executor
      Operator:           Equal
      Value:              foo-executor
❯ kubectl describe sparkapplications.sparkoperator.k8s.io -n flytesnacks-development acsqt4vd4pctzvp8t4cz-n0-0 | grep "Name:        x/custom-executor" -A 1
      Name:        x/custom-executor
      Value:       executor
❯ kubectl describe sparkapplications.sparkoperator.k8s.io -n flytesnacks-development acsqt4vd4pctzvp8t4cz-n0-0 | grep "Name:        x/custom-driver" -A 1
      Name:        x/custom-driver
      Value:       driver

Setup process

Screenshots

Check all the applicable boxes

  • I updated the documentation accordingly.
  • All new and existing tests passed.
  • All commits are signed-off.

Related PRs

flyteorg/flytekit#3016

Docs link

Summary by Bito

This PR enhances Spark task configuration by introducing separate driver and executor pod specifications through new SparkJob message fields. The implementation adds support for customizing Kubernetes pod configurations independently for Spark driver and executor components, including tolerations, environment variables, and other pod settings. The changes encompass protobuf definitions with generated code for multiple languages and improvements to pod helper functions.

Unit tests added: True

Estimated effort to review (1-5, lower is better): 4

Add driverPod/executorPod field in SparkJob class and use them as Spark
driver and executor

Signed-off-by: machichima <[email protected]>

codecov bot commented Dec 5, 2024

Codecov Report

Attention: Patch coverage is 58.20896% with 28 lines in your changes missing coverage. Please review.

Project coverage is 37.01%. Comparing base (ba331fd) to head (48902c9).
Report is 36 commits behind head on master.

Files with missing lines Patch % Lines
flyteplugins/go/tasks/plugins/k8s/spark/spark.go 71.42% 10 Missing and 4 partials ⚠️
flyteidl/gen/pb-go/flyteidl/plugins/spark.pb.go 0.00% 10 Missing ⚠️
...ns/go/tasks/pluginmachinery/flytek8s/pod_helper.go 50.00% 2 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #6085      +/-   ##
==========================================
- Coverage   37.10%   37.01%   -0.09%     
==========================================
  Files        1318     1318              
  Lines      132331   132578     +247     
==========================================
- Hits        49097    49078      -19     
- Misses      78961    79249     +288     
+ Partials     4273     4251      -22     
Flag Coverage Δ
unittests-datacatalog 51.58% <ø> (ø)
unittests-flyteadmin 54.25% <ø> (+0.15%) ⬆️
unittests-flytecopilot 30.99% <ø> (ø)
unittests-flytectl 62.29% <ø> (ø)
unittests-flyteidl 7.23% <0.00%> (-0.01%) ⬇️
unittests-flyteplugins 53.87% <68.42%> (+0.04%) ⬆️
unittests-flytepropeller 42.59% <ø> (-0.04%) ⬇️
unittests-flytestdlib 55.17% <ø> (-2.38%) ⬇️

Flags with carried forward coverage won't be shown.

fix protobuf number mismatch

pass K8sPod instead of annotation and label separately

Signed-off-by: machichima <[email protected]>
successfully apply pods specify in SparkJob

Signed-off-by: machichima <[email protected]>
@machichima machichima force-pushed the 4105-spark-driver-executor-podtemplate branch from ae39e8f to 394c269 Compare December 15, 2024 15:21
@machichima machichima force-pushed the 4105-spark-driver-executor-podtemplate branch from 394c269 to da4199b Compare December 20, 2024 14:59
Signed-off-by: machichima <[email protected]>
@machichima machichima changed the title [WIP] feat: add driverPod/executorPod in Spark [FEAT] add driverPod/executorPod in Spark Dec 20, 2024
Signed-off-by: machichima <[email protected]>
@machichima machichima force-pushed the 4105-spark-driver-executor-podtemplate branch from c3eed97 to 70cfdff Compare December 21, 2024 03:14
Comment on lines 160 to 166
if k8sPod != nil && k8sPod.GetMetadata() != nil {
if k8sPod.Metadata.Annotations != nil {
annotations = pluginsUtils.UnionMaps(annotations, k8sPod.GetMetadata().GetAnnotations())
}
if k8sPod.Metadata.Labels != nil {
labels = pluginsUtils.UnionMaps(labels, k8sPod.GetMetadata().GetLabels())
}
Member:

nit: there is a nil check in GetMetadata(), so we don't need to do it here

Suggested change
if k8sPod != nil && k8sPod.GetMetadata() != nil {
if k8sPod.Metadata.Annotations != nil {
annotations = pluginsUtils.UnionMaps(annotations, k8sPod.GetMetadata().GetAnnotations())
}
if k8sPod.Metadata.Labels != nil {
labels = pluginsUtils.UnionMaps(labels, k8sPod.GetMetadata().GetLabels())
}
if k8sPod.GetMetadata().GetAnnotations() != nil {
annotations = pluginsUtils.UnionMaps(annotations, k8sPod.GetMetadata().GetAnnotations())
}
if k8sPod.GetMetadata().GetLabels() != nil {
labels = pluginsUtils.UnionMaps(labels, k8sPod.GetMetadata().GetLabels())
}

Contributor Author:

Fixed. Thanks!
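For readers unfamiliar with the convention the reviewer is relying on: Go code generated by protoc emits getters that guard against a nil receiver, which is why the outer nil checks are redundant. A minimal self-contained sketch (simplified stand-in types, not the real flyteidl-generated ones):

```go
package main

import "fmt"

// Simplified stand-ins for the generated flyteidl types (illustrative only).
type Metadata struct {
	Annotations map[string]string
}

type K8sPod struct {
	Metadata *Metadata
}

// Generated protobuf getters check the receiver before dereferencing,
// so chained calls like pod.GetMetadata().GetAnnotations() never panic.
func (p *K8sPod) GetMetadata() *Metadata {
	if p == nil {
		return nil
	}
	return p.Metadata
}

func (m *Metadata) GetAnnotations() map[string]string {
	if m == nil {
		return nil
	}
	return m.Annotations
}

// mergeAnnotations overlays the pod's annotations onto the defaults,
// relying on the nil-safe getters instead of explicit nil checks.
func mergeAnnotations(defaults map[string]string, pod *K8sPod) map[string]string {
	merged := map[string]string{}
	for k, v := range defaults {
		merged[k] = v
	}
	// Ranging over a nil map is a no-op, so a nil pod is harmless here.
	for k, v := range pod.GetMetadata().GetAnnotations() {
		merged[k] = v
	}
	return merged
}

func main() {
	var nilPod *K8sPod
	fmt.Println(mergeAnnotations(map[string]string{"a": "1"}, nilPod))
	pod := &K8sPod{Metadata: &Metadata{Annotations: map[string]string{"b": "2"}}}
	fmt.Println(mergeAnnotations(map[string]string{"a": "1"}, pod))
}
```

Because each getter returns a zero value on a nil receiver, the merge stays safe even when the whole pod, its metadata, or its annotations map is nil.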

err = utils.UnmarshalStructToObj(executorPod.GetPodSpec(), &customPodSpec)
if err != nil {
return nil, errors.Errorf(errors.BadTaskSpecification,
"Unable to unmarshal pod spec [%v], Err: [%v]", executorPod.GetPodSpec(), err.Error())
Member:

Suggested change
"Unable to unmarshal pod spec [%v], Err: [%v]", executorPod.GetPodSpec(), err.Error())
"Unable to unmarshal executor pod spec [%v], Err: [%v]", executorPod.GetPodSpec(), err.Error())

err = utils.UnmarshalStructToObj(driverPod.GetPodSpec(), &customPodSpec)
if err != nil {
return nil, errors.Errorf(errors.BadTaskSpecification,
"Unable to unmarshal pod spec [%v], Err: [%v]", driverPod.GetPodSpec(), err.Error())
Member:

Suggested change
"Unable to unmarshal pod spec [%v], Err: [%v]", driverPod.GetPodSpec(), err.Error())
"Unable to unmarshal driver pod spec [%v], Err: [%v]", driverPod.GetPodSpec(), err.Error())

// of c.Name
if val, ok := taskTemplate.GetConfig()[PrimaryContainerKey]; ok {
primaryContainerName = val
c.Name = primaryContainerName
Member:

could we add a small unit test for it

Contributor Author:

I am wondering whether this part is needed. As far as I know, to set the primary container name we need to set pod_template=PodTemplate(primary_container_name="primary"). But if we set that, we fall into case *core.TaskTemplate_K8SPod here, so if val, ok := taskTemplate.GetConfig()[PrimaryContainerKey] will never be true.

I am thinking of removing this and using TaskTemplate_K8SPod in spark_test.go instead.

@flyte-bot (Collaborator)

flyte-bot commented Dec 28, 2024

Code Review Agent Run #ebad2b

Actionable Suggestions - 8
  • flyteplugins/go/tasks/pluginmachinery/flytek8s/pod_helper.go - 2
    • Consider validating primary container name value · Line 285-290
    • Consider impact of function visibility change · Line 573-573
  • flyteplugins/go/tasks/plugins/k8s/spark/spark.go - 3
  • flyteplugins/go/tasks/plugins/k8s/spark/spark_test.go - 3
Additional Suggestions - 1
  • flyteplugins/go/tasks/plugins/k8s/spark/spark.go - 1
    • Consider more specific utils import alias · Line 28-29
Review Details
  • Files reviewed - 10 · Commit Range: d847a63..48902c9
    • flyteidl/gen/pb-es/flyteidl/plugins/spark_pb.ts
    • flyteidl/gen/pb-go/flyteidl/plugins/spark.pb.go
    • flyteidl/gen/pb_python/flyteidl/plugins/spark_pb2.py
    • flyteidl/gen/pb_python/flyteidl/plugins/spark_pb2.pyi
    • flyteidl/gen/pb_rust/flyteidl.plugins.rs
    • flyteidl/protos/flyteidl/plugins/spark.proto
    • flyteplugins/go/tasks/pluginmachinery/flytek8s/pod_helper.go
    • flyteplugins/go/tasks/pluginmachinery/flytek8s/pod_helper_test.go
    • flyteplugins/go/tasks/plugins/k8s/spark/spark.go
    • flyteplugins/go/tasks/plugins/k8s/spark/spark_test.go
  • Files skipped - 0
  • Tools
    • Golangci-lint (Linter) - ✖︎ Failed
    • Whispers (Secret Scanner) - ✔︎ Successful
    • Detect-secrets (Secret Scanner) - ✔︎ Successful

AI Code Review powered by Bito

@flyte-bot (Collaborator)

Changelist by Bito

This pull request implements the following key changes.

Key Change Files Impacted
Feature Improvement - Enhanced Spark Pod Configuration Support

spark_pb.ts - Added driver and executor pod configuration support in SparkJob

spark.pb.go - Implemented driver and executor pod configuration in Go protobuf

spark_pb2.py - Added Python bindings for driver and executor pod configuration

spark_pb2.pyi - Updated Python type hints for new pod configuration fields

flyteidl.plugins.rs - Added Rust support for driver and executor pod configuration

spark.proto - Added K8sPod fields for driver and executor pod configuration

spark.go - Implemented support for custom driver and executor pod configurations

spark_test.go - Added tests for custom driver and executor pod configurations

Other Improvements - Pod Helper Function Improvements

pod_helper.go - Enhanced pod helper functions with primary container name configuration and exposed MergePodSpecs

pod_helper_test.go - Updated tests to use exposed MergePodSpecs function

Comment on lines +285 to +290
if val, ok := taskTemplate.GetConfig()[PrimaryContainerKey]; ok {
primaryContainerName = val
c.Name = primaryContainerName
} else {
primaryContainerName = c.Name
}
Collaborator:

Consider validating primary container name value

Consider handling empty string value for PrimaryContainerKey in task template config to avoid potential issues with container naming.

Code suggestion
Check the AI-generated fix before applying
Suggested change
if val, ok := taskTemplate.GetConfig()[PrimaryContainerKey]; ok {
primaryContainerName = val
c.Name = primaryContainerName
} else {
primaryContainerName = c.Name
}
if val, ok := taskTemplate.GetConfig()[PrimaryContainerKey]; ok {
if val != "" {
primaryContainerName = val
c.Name = primaryContainerName
}
} else {
primaryContainerName = c.Name
}

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

@@ -563,7 +570,7 @@
}

// merge podSpec with podTemplate
mergedPodSpec, err := mergePodSpecs(&podTemplate.Template.Spec, podSpec, primaryContainerName, primaryInitContainerName)
mergedPodSpec, err := MergePodSpecs(&podTemplate.Template.Spec, podSpec, primaryContainerName, primaryInitContainerName)
Collaborator:

Consider impact of function visibility change

Consider if the function name change from mergePodSpecs to MergePodSpecs is intentional as it changes the visibility of the function from package-private to public. This could impact API stability and usage patterns. A similar issue was also found in flyteplugins/go/tasks/pluginmachinery/flytek8s/pod_helper_test.go (line 2050-2144).

Code suggestion
Check the AI-generated fix before applying
Suggested change
mergedPodSpec, err := MergePodSpecs(&podTemplate.Template.Spec, podSpec, primaryContainerName, primaryInitContainerName)
mergedPodSpec, err := mergePodSpecs(&podTemplate.Template.Spec, podSpec, primaryContainerName, primaryInitContainerName)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

@@ -65,7 +66,7 @@
}

sparkJob := plugins.SparkJob{}
err = utils.UnmarshalStruct(taskTemplate.GetCustom(), &sparkJob)
err = utils.UnmarshalStructToPb(taskTemplate.GetCustom(), &sparkJob)
Collaborator:

Consider using appropriate unmarshal function

Consider using UnmarshalStruct instead of UnmarshalStructToPb since sparkJob is not a protobuf message but a regular struct.

Code suggestion
Check the AI-generated fix before applying
Suggested change
err = utils.UnmarshalStructToPb(taskTemplate.GetCustom(), &sparkJob)
err = utils.UnmarshalStruct(taskTemplate.GetCustom(), &sparkJob)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

Comment on lines +152 to +165
annotations := pluginsUtils.UnionMaps(
config.GetK8sPluginConfig().DefaultAnnotations,
pluginsUtils.CopyMap(taskCtx.TaskExecutionMetadata().GetAnnotations()),
)
labels := pluginsUtils.UnionMaps(
config.GetK8sPluginConfig().DefaultLabels,
pluginsUtils.CopyMap(taskCtx.TaskExecutionMetadata().GetLabels()),
)
if k8sPod.GetMetadata().GetAnnotations() != nil {
annotations = pluginsUtils.UnionMaps(annotations, k8sPod.GetMetadata().GetAnnotations())
}
if k8sPod.GetMetadata().GetLabels() != nil {
labels = pluginsUtils.UnionMaps(labels, k8sPod.GetMetadata().GetLabels())
}
Collaborator:

Consider extracting annotation merging logic

Consider extracting the annotation and label merging logic into a separate helper function since it's used in multiple places. This would improve code maintainability and reduce duplication.

Code suggestion
Check the AI-generated fix before applying
Suggested change
annotations := pluginsUtils.UnionMaps(
config.GetK8sPluginConfig().DefaultAnnotations,
pluginsUtils.CopyMap(taskCtx.TaskExecutionMetadata().GetAnnotations()),
)
labels := pluginsUtils.UnionMaps(
config.GetK8sPluginConfig().DefaultLabels,
pluginsUtils.CopyMap(taskCtx.TaskExecutionMetadata().GetLabels()),
)
if k8sPod.GetMetadata().GetAnnotations() != nil {
annotations = pluginsUtils.UnionMaps(annotations, k8sPod.GetMetadata().GetAnnotations())
}
if k8sPod.GetMetadata().GetLabels() != nil {
labels = pluginsUtils.UnionMaps(labels, k8sPod.GetMetadata().GetLabels())
}
annotations, labels := mergeMetadata(config.GetK8sPluginConfig().DefaultAnnotations, config.GetK8sPluginConfig().DefaultLabels, taskCtx.TaskExecutionMetadata(), k8sPod)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

if executorPod != nil {
var customPodSpec *v1.PodSpec

err = utils.UnmarshalStructToObj(executorPod.GetPodSpec(), &customPodSpec)
Collaborator:

Consider nil check for PodSpec

Consider checking if executorPod.GetPodSpec() is nil before attempting to unmarshal it to avoid potential nil pointer dereference.

Code suggestion
Check the AI-generated fix before applying
 @@ -252,6 +252,10 @@
 		var customPodSpec *v1.PodSpec
 +		podSpec := executorPod.GetPodSpec()
 +		if podSpec == nil {
 +			return nil, errors.Errorf(errors.BadTaskSpecification, "executor pod spec cannot be nil")
 +		}
 -		err = utils.UnmarshalStructToObj(executorPod.GetPodSpec(), &customPodSpec)
 +		err = utils.UnmarshalStructToObj(podSpec, &customPodSpec)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

Comment on lines +354 to +375
func dummySparkTaskTemplateDriverExecutor(id string, sparkConf map[string]string, driverPod *core.K8SPod, executorPod *core.K8SPod) *core.TaskTemplate {
sparkJob := dummySparkCustomObjDriverExecutor(sparkConf, driverPod, executorPod)

structObj, err := utils.MarshalObjToStruct(sparkJob)
if err != nil {
panic(err)
}

return &core.TaskTemplate{
Id: &core.Identifier{Name: id},
Type: "container",
Target: &core.TaskTemplate_Container{
Container: &core.Container{
Image: testImage,
},
},
Config: map[string]string{
flytek8s.PrimaryContainerKey: "primary",
},
Custom: structObj,
}
}
Collaborator:

Consider extracting common task template code

Consider extracting the common code between dummySparkTaskTemplateContainer and dummySparkTaskTemplateDriverExecutor into a shared helper function to reduce duplication. Both functions appear to create similar task templates with only minor differences.

Code suggestion
Check the AI-generated fix before applying
Suggested change
func dummySparkTaskTemplateDriverExecutor(id string, sparkConf map[string]string, driverPod *core.K8SPod, executorPod *core.K8SPod) *core.TaskTemplate {
sparkJob := dummySparkCustomObjDriverExecutor(sparkConf, driverPod, executorPod)
structObj, err := utils.MarshalObjToStruct(sparkJob)
if err != nil {
panic(err)
}
return &core.TaskTemplate{
Id: &core.Identifier{Name: id},
Type: "container",
Target: &core.TaskTemplate_Container{
Container: &core.Container{
Image: testImage,
},
},
Config: map[string]string{
flytek8s.PrimaryContainerKey: "primary",
},
Custom: structObj,
}
}
func createSparkTaskTemplate(id string, sparkJob interface{}) *core.TaskTemplate {
structObj, err := utils.MarshalObjToStruct(sparkJob)
if err != nil {
panic(err)
}
return &core.TaskTemplate{
Id: &core.Identifier{Name: id},
Type: "container",
Target: &core.TaskTemplate_Container{
Container: &core.Container{
Image: testImage,
},
},
Config: map[string]string{
flytek8s.PrimaryContainerKey: "primary",
},
Custom: structObj,
}
}
func dummySparkTaskTemplateDriverExecutor(id string, sparkConf map[string]string, driverPod *core.K8SPod, executorPod *core.K8SPod) *core.TaskTemplate {
sparkJob := dummySparkCustomObjDriverExecutor(sparkConf, driverPod, executorPod)
return createSparkTaskTemplate(id, sparkJob)
}

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

Comment on lines +1051 to +1055
assert.Equal(t, len(findEnvVarByName(sparkApp.Spec.Driver.Env, "FLYTE_MAX_ATTEMPTS").Value), 1)
assert.Equal(t, defaultConfig.DefaultEnvVars["foo"], findEnvVarByName(sparkApp.Spec.Driver.Env, "foo").Value)
assert.Equal(t, defaultConfig.DefaultEnvVars["fooEnv"], findEnvVarByName(sparkApp.Spec.Driver.Env, "fooEnv").Value)
assert.Equal(t, findEnvVarByName(dummyEnvVarsWithSecretRef, "SECRET"), findEnvVarByName(sparkApp.Spec.Driver.Env, "SECRET"))
assert.Equal(t, 9, len(sparkApp.Spec.Driver.Env))
Collaborator:

Consider consolidating env var assertions

Consider consolidating the environment variable assertions into a helper function to improve test readability and maintainability. Multiple similar assertions for environment variables could be simplified.

Code suggestion
Check the AI-generated fix before applying
Suggested change
assert.Equal(t, len(findEnvVarByName(sparkApp.Spec.Driver.Env, "FLYTE_MAX_ATTEMPTS").Value), 1)
assert.Equal(t, defaultConfig.DefaultEnvVars["foo"], findEnvVarByName(sparkApp.Spec.Driver.Env, "foo").Value)
assert.Equal(t, defaultConfig.DefaultEnvVars["fooEnv"], findEnvVarByName(sparkApp.Spec.Driver.Env, "fooEnv").Value)
assert.Equal(t, findEnvVarByName(dummyEnvVarsWithSecretRef, "SECRET"), findEnvVarByName(sparkApp.Spec.Driver.Env, "SECRET"))
assert.Equal(t, 9, len(sparkApp.Spec.Driver.Env))
assertEnvVars(t, sparkApp.Spec.Driver.Env, defaultConfig.DefaultEnvVars, dummyEnvVarsWithSecretRef)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

assert.Equal(t, findEnvVarByName(dummyEnvVarsWithSecretRef, "SECRET"), findEnvVarByName(sparkApp.Spec.Executor.Env, "SECRET"))
assert.Equal(t, 9, len(sparkApp.Spec.Executor.Env))
assert.Equal(t, testImage, *sparkApp.Spec.Executor.Image)
assert.Equal(t, defaultConfig.DefaultPodSecurityContext, sparkApp.Spec.Executor.SecurityContenxt)
Collaborator:

Fix typo in SecurityContext property name

There appears to be a typo in the property name SecurityContenxt. Consider correcting it to SecurityContext.

Code suggestion
Check the AI-generated fix before applying
Suggested change
assert.Equal(t, defaultConfig.DefaultPodSecurityContext, sparkApp.Spec.Executor.SecurityContenxt)
assert.Equal(t, defaultConfig.DefaultPodSecurityContext, sparkApp.Spec.Executor.SecurityContext)

Code Review Run #ebad2b


Is this a valid issue, or was it incorrectly flagged by the Agent?

  • it was incorrectly flagged

Labels: none yet · Project status: In review · 3 participants