feat(datasets): SparkDataset Rewrite #1185
Conversation
deepyaman left a comment:
Digging in a bit, this feels less like a rewrite and more like a refactoring. Here are my initial thoughts:
- I've added a comment regarding my concern about removing the `_dbfs_glob` logic. This needs to be validated carefully (perhaps Databricks improved the performance of regular glob?) so we don't reintroduce a performance issue. I remember debugging this on a client project, because IIRC (it's been years) performance degrades to the point of unusability with a large number of versions.
- Will this provide the best experience with `spark-connect` and `databricks-connect`? (FWIW `databricks-connect` is a bit annoying to look into since it's not open source.) Spark 3.4 introduced Spark Connect, and Spark 4 includes major refactors to really make it part of the core (e.g. `pyspark.sql.classic` is moved to the same level as `pyspark.sql.connect`, and they inherit from the same base `DataFrame`, which wasn't the case before). IMO Spark Connect looks like the future of Spark, and a `SparkDataset` refresh should work seamlessly with it. Spark Connect (and Databricks Connect) are also potentially great for users who struggle with the deployment experience (e.g. needing to get code onto Databricks from local). That said, the classic experience is still likely a very common way to operate for teams working more from within Databricks.
- I like the fact that HDFS is supported through PyArrow now. If there's still concern that people may need the old, separate HDFS client (not sure there is? `hdfs` hasn't had a release in two years and doesn't support Python 3.13, for example), maybe that could be handled through some sort of fallback logic (see the sketch below)?
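As a rough illustration of the fallback idea in the last point, a minimal sketch; the helper name and signature are hypothetical, and the two clients expose different APIs, so real fallback logic would also need an adapter layer:

```python
def get_hdfs_filesystem(host: str, port: int = 8020):
    """Prefer PyArrow's HDFS support; fall back to the legacy `hdfs` client.

    Hypothetical sketch only - not the actual kedro-datasets implementation.
    """
    try:
        # PyArrow wraps libhdfs and needs a local Hadoop installation.
        from pyarrow import fs

        return fs.HadoopFileSystem(host=host, port=port)
    except ImportError:
        # Legacy pure-Python WebHDFS client (no release in ~2 years).
        from hdfs import InsecureClient

        return InsecureClient(f"http://{host}:{port}")
```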
Thanks @deepyaman, you're right about the DBFS glob issue; that's a good catch, and we'll add that logic back in. Regarding refactor vs. rewrite, we chose a V2 for safety, but I'm open to discussing whether we should refactor the original instead if you think that's better.
Yeah, of course. I think we can get the V2 "ready", and then see if it's sufficiently different that it needs to be breaking/a separate dataset.
@noklam would also appreciate your thoughts on this.
noklam left a comment:
Sorry, I don't have time to review this in detail, but I don't want to block it. A few quick questions off the top of my head:
- When should I use the `databricks`-specific datasets vs. `spark.SparkDataset` on Databricks? I recall there is already something that is only possible with the databricks one. If we are rewriting this, I think we should have a look at that.
- DBFS is a bit annoying: Databricks has already deprecated it, and new clusters default to UC volumes, but a lot of people are still using DBFS on older clusters.
- Is there a goal/are there additional things that this rewrite improves? Or is it more like refactoring?
Hey @noklam, thanks. The Databricks datasets are more for TABLE operations, while `SparkDataset` is for FILE operations. The new V2 handles both DBFS and UC Volumes properly: it still supports `/dbfs/`, `dbfs:/`, and `/Volumes/` paths, and we apply the DBFS-specific optimisations only when needed. I think this goes a bit beyond a refactor: we're solving some long-standing issues, so Databricks users can now actually use the dataset, we add Spark Connect support for Spark 4.0, and users can now choose their dependencies instead of installing everything, via pyproject.toml changes. It makes the dataset more usable.
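As an illustration of the path handling described above, a minimal sketch with hypothetical helper names (not the actual V2 code):

```python
def _is_dbfs_path(path: str) -> bool:
    # Both the POSIX-style mount and the URI scheme count as DBFS.
    return path.startswith(("/dbfs/", "dbfs:/"))


def _is_uc_volume_path(path: str) -> bool:
    # Unity Catalog volumes live under /Volumes/<catalog>/<schema>/<volume>/...
    return path.startswith("/Volumes/")


def _normalise_dbfs_path(path: str) -> str:
    # Spark expects the dbfs:/ scheme rather than the /dbfs/ local mount.
    if path.startswith("/dbfs/"):
        return "dbfs:/" + path[len("/dbfs/"):]
    return path
```

This way the DBFS optimisations can kick in only when `_is_dbfs_path` matches, while UC volume and ordinary cloud paths go down the plain fsspec/Spark route.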
I spent a good amount of time trying to get …
Remote connections to Databricks should be established using Databricks Connect.
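For reference, a minimal sketch of establishing such a remote session with `databricks-connect`; the environment variable names follow the usual Databricks convention, but nothing here is prescribed by this PR:

```python
import os

from databricks.connect import DatabricksSession

# Connect to a remote Databricks workspace; credentials are read from the
# environment rather than hard-coded.
spark = DatabricksSession.builder.remote(
    host=os.environ["DATABRICKS_HOST"],
    token=os.environ["DATABRICKS_TOKEN"],
    cluster_id=os.environ["DATABRICKS_CLUSTER_ID"],
).getOrCreate()
```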
merelcht left a comment:
LGTM! 👍
Don't forget to add it to the release notes too, along with a small note on when to use this vs. the legacy SparkDataset.
DimedS left a comment:
Great work, @SajidAlamQB!
Description
This PR introduces `SparkDatasetV2`, a cleaner alternative to `SparkDataset` that addresses long-standing issues outlined in #135.

Problems with Current SparkDataset
- Duplicated path parsing (`split_filepath` vs. `get_protocol_and_path`) causes inconsistencies

Development notes
Dependency Improvements:
- `spark-core` with zero dependencies - no forced PySpark on Databricks!
- Platform bundles: `spark-local`, `spark-databricks`, `spark-emr`
- Cloud storage bundles: `spark-s3`, `spark-gcs`, `spark-azure`

Code Improvements:
- PySpark imports deferred behind `TYPE_CHECKING`
- Unity Catalog volume paths supported (`/Volumes/...`)
- Follows the pattern of `SnowparkTableDataset`
- No `SparkHooks` required
- `get_spark_with_remote_support()` automatically detects the environment (a sketch follows this list):
  - Databricks Connect (`DATABRICKS_HOST` and `DATABRICKS_TOKEN` set and databricks-connect installed)
  - Spark Connect (`SPARK_REMOTE` set)
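A minimal sketch of that detection order, assuming only the helper name given above (the exact checks in the PR may differ):

```python
import os


def get_spark_with_remote_support():
    """Return a session appropriate for the current environment (sketch only)."""
    # 1. Databricks Connect: credentials present and the package importable.
    if os.environ.get("DATABRICKS_HOST") and os.environ.get("DATABRICKS_TOKEN"):
        try:
            from databricks.connect import DatabricksSession

            return DatabricksSession.builder.getOrCreate()
        except ImportError:
            pass

    from pyspark.sql import SparkSession

    # 2. Spark Connect: SPARK_REMOTE points at a Spark Connect server.
    if os.environ.get("SPARK_REMOTE"):
        return SparkSession.builder.remote(os.environ["SPARK_REMOTE"]).getOrCreate()

    # 3. Classic local or cluster session.
    return SparkSession.builder.getOrCreate()
```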
Current Status
- Works with `spark-local`

Known Limitations:
- Legacy DBFS (`/FileStore`) is deprecated by Databricks, so it's not easy to test whether the legacy behaviour still works.
Breaking Changes
- Users of `kedro-datasets[spark]` must choose specific bundles
- HDFS support now goes through PyArrow (`spark-hdfs`)

Now:
- Users installing `kedro-datasets[spark]` will need to choose specific bundles (e.g. `spark-hdfs`)

Checklist
- Updated `jsonschema/kedro-catalog-X.XX.json` if necessary
- Updated the `RELEASE.md` file