Provide Spark 3.4 Support for Spline w/ Backwards Compatibility #793
Conversation
core/src/main/scala/za/co/absa/spline/harvester/plugin/embedded/RDDPlugin.scala
// We only want the one that is from CreateDataSourceTableAsSelectCommand
// The one we ignore here is an extra InsertIntoHadoopFsRelationCommand
// They can come out of order so we need to filter out which one is which.
(plan2, _) <- if (ver"$SPARK_VERSION" >= ver"3.4.0") {
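The `ver"…"` interpolator above is Spline's own version-comparison utility; as a minimal sketch (plain Scala, no Spline dependencies, function name illustrative), the gate amounts to comparing dotted version strings numerically rather than lexically:

```scala
// Sketch of a numeric version gate: compare dotted version strings
// component by component, so e.g. "3.10.0" correctly sorts after "3.4.0"
// where a plain string comparison would not.
def verAtLeast(current: String, threshold: String): Boolean = {
  def parts(v: String): Seq[Int] = v.split('.').toSeq.map(_.toInt)
  val (c, t) = (parts(current), parts(threshold))
  val len = math.max(c.size, t.size)
  c.padTo(len, 0).zip(t.padTo(len, 0))
    .collectFirst { case (a, b) if a != b => a > b }
    .getOrElse(true) // all components equal satisfies ">="
}
```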
I'm not a fan of this approach, but it does allow the tests to still pass. If we're comfortable knowing that Spark is firing additional events here, we get the same behavior as before.
So Spark creates two plans where it used to create just one? What is the new root/write command that it creates? Spline should react only to write commands.
So this is what's confusing me a little bit: we get both a CreateDataSourceTableAsSelectCommand and an InsertIntoHadoopFsRelationCommand (printed from LineageHarvester) in a single Spark action, and they aren't guaranteed to appear in the same order, hence the weird filter to find the right one.
I think I'm going to run a custom QueryExecutionListener and see if Spark itself happens to print out two actions as well. If not I'm going to be even more confused 😅
But yes - this didn't happen in < 3.4
Ok I think I've just confirmed that this is indeed Spark and not Spline doing this. I made a listener specifically for this test like so:
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

class TestListener extends QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {
    println("A COMMAND JUST RAN")
    println(qe.commandExecuted.getClass.getCanonicalName)
  }
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = ()
}
3.3 tests give just this per CTAS action:
A COMMAND JUST RAN
org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand
But 3.4 tests give this per CTAS action:
A COMMAND JUST RAN
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand
A COMMAND JUST RAN
org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand
This seems to support what I saw from printing out in LineageHarvester.
I unfortunately don't have the context to say whether this is ok or not 😅. I would imagine Spline UI users would see this additional event, no?
The correct lineage events are getting generated but there's extra noise. I know InsertIntoHadoopFsRelation is associated with other writes occurring so your theory of Spark changing something under the hood is probably correct.
I think I found the Spark PR that introduced this behavior change in 3.4. TL;DR: v1 data writes that originally happened inside CTAS were split into the two commands we're seeing here, for both supported types of CTAS. They call out command nesting in various places.
So you are sure that both of those commands are triggered by .write.mode(Append).saveAsTable(tableName)?
This is not a question about testing only, but about how to handle this in the application generally. Usually there is only one lineage per write. The simplest solution would be to ignore one of them, but how do we check that what we are ignoring is actually a duplicate?
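One possible duplicate check, sketched below in plain Scala (hypothetical types and names, not Spline's actual event model): treat the extra InsertIntoHadoopFsRelationCommand event as a duplicate only when it targets the same output location as a CTAS event captured in the same action, and keep it otherwise.

```scala
case class WriteEvent(commandName: String, outputPath: String)

// Drop an InsertIntoHadoopFsRelationCommand event only when a CTAS event in
// the same batch writes to the same output path; otherwise keep it, since it
// then represents a genuinely separate write rather than a duplicate.
def dedupe(events: Seq[WriteEvent]): Seq[WriteEvent] = {
  val ctasPaths = events
    .filter(_.commandName == "CreateDataSourceTableAsSelectCommand")
    .map(_.outputPath)
    .toSet
  events.filterNot { e =>
    e.commandName == "InsertIntoHadoopFsRelationCommand" &&
      ctasPaths.contains(e.outputPath)
  }
}
```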
I read the Spark ticket; they actually separate the action of table creation from the data insert, which is not a problem in itself. We could generate events for both, but it should be clear that no data is inserted in the table-creation lineage.
Would there be another suite/case that we'd want for something like this? I suppose that might depend on the actual events that Spline would be outputting here now.
val writeUri = plan1.operations.write.outputSource
val readUri = plan2.operations.reads.head.inputSources.head

val writePlan = Seq(plan1, plan2)
There is some code shared between this and the Hive CTAS test for filtering out the plans we want. I'm not sure how much DRY you're aiming for here, but this is a potential spot to deduplicate.
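The shared logic could look something like this hypothetical helper (illustrative names, not actual Spline test code), callable from both the data-source and Hive CTAS suites:

```scala
case class Captured(writeCommand: String)

// Split the plans captured for one CTAS action on Spark 3.4+ into the
// CreateDataSourceTableAsSelectCommand plan and the extra
// InsertIntoHadoopFsRelationCommand plan, regardless of arrival order.
def splitCtasPlans(plans: Seq[Captured]): (Option[Captured], Option[Captured]) =
  (plans.find(_.writeCommand.contains("CreateDataSourceTableAsSelectCommand")),
   plans.find(_.writeCommand.contains("InsertIntoHadoopFsRelationCommand")))
```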
integration-tests/src/test/scala/za/co/absa/spline/KafkaSinkSpec.scala
integration-tests/src/test/scala/za/co/absa/spline/harvester/LineageHarvesterSpec.scala
@@ -0,0 +1,1344 @@
<?xml version="1.0" encoding="UTF-8"?> |
Want to make sure I've done this right -
- Ran the jars-pommefizer on spark-3.4.1-bin-hadoop3
- Copied over some basic pom setup info from bundle-3.3 (higher-level project template)
- Moved dependencies from pommefizer output into dependencyManagement
- Filled in dependencies whose groupIds were blank from pommefizer (copied from 3.3, maybe a bug in pommefizer?)
- Copied build plugins from 3.3
- Manually removed versions from everything mentioned in dependencyManagement (this was a doozy 😄 )
Seems alright, @wajda what do you think?
@rycowhi Thanks a million for your effort.
Just checking on the status of this PR. We are waiting on this feature. @wajda, any updates?
Not yet, but it's high on my todo list for this week.
@wajda Looks like this feature is in high demand. Our team is also watching this PR; we'd highly appreciate it if you could review it as early as possible. Thank you in advance.
Sorry folks, I've still been buried in work on another high-priority project, and have yet to obtain time allocation approval for this one. The Spline project was essentially put on hold by the company last year; that's the reason it has received almost no support for the past X months. I'm not giving up, but I can't really spend my free time on this any more, so I'm trying to get some official allocation from my employer.
Hello! Is there any news regarding Spark 3.4 support? I've been using Spline for a long time, and the lack of Scala 2.13 support is going to affect a lot of projects and may force them to find a new listener. If there is any way I could contribute to this feature, or help speed things up, please let me know! Thanks! Edit: I've just seen Upgrade org.scala-lang:scala-library from 2.12.17 to 2.13.14 #806
@ramonje5 thank you for the message. Unfortunately our company has yet to sort out its plans and book of work for Spline. Currently the team has zero capacity to work on it.
The PR #806 was automatically created by Snyk and won't work; please ignore it. Upgrading to Scala 2.13 cannot be a simple change like that: it would require adding another vertical to the build matrix, so to speak. Basically, the agent is built for every supported Scala+Spark version combination. If you want to help, it would be awesome if you could take this PR, test it, and address @rycowhi's points in the related issue #705. When testing succeeds and those points are addressed, we can merge and release the PR.
At least for the versions maintained off of the existing cross-builds, is there anything else to look into? I believe the only outstanding piece was that two (different) commands are now emitted for a CTAS in 3.4, but I have confirmed this is new Spark behavior (by seeing the same multiple events on a barebones query listener). Is the outstanding piece to see how that would look in the Spline UI, or something else?
integration-tests/src/test/scala/za/co/absa/spline/harvester/LineageHarvesterSpec.scala
When all builds pass, I'll test it manually on my end, and if no issues are found I'll merge the PR.
Merged 953e8ec into AbsaOSS:feature/agent-705-spark-3-4-support
fixes #705
TODO: