Bin-pack write operation into multiple Parquet files, and parallelize writing WriteTasks (#444)
Merged · +192 −45

Commits (15):
- 5a7c8f9 bin pack write (kevinjqliu)
- bae32e2 add write target file size config (kevinjqliu)
- 2730a8f test (kevinjqliu)
- ef64c92 add test for multiple data files (kevinjqliu)
- 9cb9649 parquet writer write once (kevinjqliu)
- 3f284b2 parallelize write tasks (kevinjqliu)
- 6462d06 refactor (kevinjqliu)
- fd1efe0 chunk correctly using to_batches (kevinjqliu)
- 7ccfdb2 change variable names (kevinjqliu)
- 1ee3a55 get rid of assert (kevinjqliu)
- f92de1a configure PackingIterator (kevinjqliu)
- 0047fd8 add more tests (kevinjqliu)
- c6cb8de rewrite set_properties (kevinjqliu)
- d80054d set int property (kevinjqliu)
- 8cd7160 Merge branch 'main' into kevinjqliu/bin-pack-write (kevinjqliu)
Viewing changes from commit 5a7c8f9a99f1435dfc302a352d1fbe7619f9acf8 (bin pack write).
In Java we have the write.target-file-size-bytes configuration. In this case, we're looking at the size in memory, not the file size. Converting between the two is very tricky, since Parquet has some excellent encodings to reduce the size on disk. We might want to check the heuristic on the Java side. On the other hand, we also don't want to explode the memory when decoding a Parquet file.
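For context, a minimal sketch of what bin-packing record batches by in-memory size could look like with PyArrow. The greedy grouping loop and the `bin_pack_batches` / `target_size_bytes` names are illustrative assumptions, not this PR's actual implementation (which, per the commit list, goes through a PackingIterator over `to_batches()`):

```python
import pyarrow as pa

def bin_pack_batches(table: pa.Table, target_size_bytes: int):
    """Greedily group record batches into bins of roughly
    target_size_bytes of in-memory Arrow data; each bin would then
    back one WriteTask / one Parquet file. Illustrative sketch only."""
    current_bin: list = []
    current_size = 0
    for batch in table.to_batches():
        # Start a new bin once adding this batch would overshoot the target.
        if current_bin and current_size + batch.nbytes > target_size_bytes:
            yield current_bin
            current_bin, current_size = [], 0
        current_bin.append(batch)
        current_size += batch.nbytes
    if current_bin:
        yield current_bin
```

Note that `batch.nbytes` measures the Arrow in-memory footprint; as the comment above points out, the resulting Parquet file will typically be much smaller once encodings and compression are applied.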
I think this is done in Java like so: #428 (comment). The write is done row by row, and every 1000 rows the file size is checked against the desired size.
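Sketched in Python for comparison, that Java-side heuristic might look roughly like this; the `writer.write()` / `writer.length()` / `writer.roll_to_new_file()` interface is an assumption for illustration, not a real API:

```python
ROWS_DIVISOR = 1000  # check the file size only once every 1000 rows

def write_rows(writer, rows, target_file_size_bytes: int) -> None:
    """Write rows one at a time; every ROWS_DIVISOR rows, compare the
    bytes written so far against the target and roll over if needed."""
    rows_written = 0
    for row in rows:
        writer.write(row)  # assumed writer interface
        rows_written += 1
        if (rows_written % ROWS_DIVISOR == 0
                and writer.length() >= target_file_size_bytes):
            writer.roll_to_new_file()  # assumed roll-over hook
            rows_written = 0
```

Checking only every N rows keeps the size probe cheap, at the cost of overshooting the target by up to ROWS_DIVISOR rows' worth of data.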
...and the write.target-file-size-bytes configuration is just a heuristic to aim for, not the absolute size of the resulting file. Based on this comment, it seems that even in Spark the resulting Parquet files can be smaller than the target file size.
For now, I propose we reuse the write.target-file-size-bytes option and default to 512 MB of Arrow size in memory.
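A minimal sketch of the proposed lookup with a default, assuming table properties are exposed as a plain dict of strings; the `property_as_int` helper here is hypothetical, not necessarily PyIceberg's real API:

```python
WRITE_TARGET_FILE_SIZE_BYTES = "write.target-file-size-bytes"
WRITE_TARGET_FILE_SIZE_BYTES_DEFAULT = 512 * 1024 * 1024  # 512 MB of Arrow data

def property_as_int(properties, key: str, default: int) -> int:
    """Read an integer table property, falling back to a default.
    Hypothetical helper for illustration."""
    value = properties.get(key)
    return int(value) if value is not None else default

# Example: a table configured for 256 MB targets.
properties = {"write.target-file-size-bytes": str(256 * 1024 * 1024)}
target_file_size = property_as_int(
    properties, WRITE_TARGET_FILE_SIZE_BYTES, WRITE_TARGET_FILE_SIZE_BYTES_DEFAULT
)
```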
Here's a test run where we bin-packed 685.46 MB of Arrow memory into 256 MB chunks. We ended up with three ~80 MB Parquet files, so roughly 228 MB of Arrow data per bin compressed down to about 80 MB on disk, consistent with the in-memory vs. on-disk gap discussed above.