[SPARK-23815][Core]Spark writer dynamic partition overwrite mode may fail to write output on multi level partition #20931
```scala
@@ -187,6 +187,13 @@ class HadoopMapReduceCommitProtocol(
      for (part <- partitionPaths) {
        val finalPartPath = new Path(path, part)
        if (!fs.delete(finalPartPath, true) && !fs.exists(finalPartPath.getParent)) {
          // According to the official Hadoop FileSystem API spec, the delete op should assume
          // the destination is no longer present regardless of the return value, and in our
          // case it should return false only when finalPartPath does not exist.
          // When finalPartPath does not exist, we need to take action only when the parent of
          // finalPartPath does not exist (e.g. the scenario described in SPARK-23815), because
          // the FileSystem API spec on the rename op says the rename destination must have a
          // parent that exists, otherwise we may get an unexpected result from the rename.
          fs.mkdirs(finalPartPath.getParent)
```
Contributor: do you have some official HDFS document to support this change?
Author: @cloud-fan yes, in the official HDFS documentation (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/filesystem.html), the rename command has the precondition "dest must be root, or have a parent that exists".
```scala
        }
        fs.rename(new Path(stagingDir, part), finalPartPath)
```
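To make the control flow under discussion easier to follow, here is a minimal self-contained sketch of the patched branch, assuming a toy model where the filesystem is just a mutable set of path strings (`commitPartition`, `parent`, and `delete` below are illustrative helpers, not the Hadoop `FileSystem` API):

```scala
// Simplified model of the patched branch in HadoopMapReduceCommitProtocol.
// A "filesystem" here is just a mutable set of existing paths, e.g. "/out/a=1".
// Illustrative sketch only -- NOT the Hadoop FileSystem API.
object DynamicPartitionModel {
  def parent(path: String): String = {
    val idx = path.lastIndexOf('/')
    if (idx <= 0) "/" else path.substring(0, idx)
  }

  // Per the spec reading discussed in this thread: delete returns false only
  // when the path did not exist; a real failure would throw instead.
  def delete(fs: scala.collection.mutable.Set[String], path: String): Boolean =
    fs.remove(path)

  def commitPartition(fs: scala.collection.mutable.Set[String],
                      finalPartPath: String): Unit = {
    if (!delete(fs, finalPartPath) && !fs.contains(parent(finalPartPath))) {
      // finalPartPath did not exist AND its parent is missing: recreate the
      // parent so the subsequent rename has a destination parent that exists.
      fs += parent(finalPartPath)
    }
    // Model the rename as simply materialising the destination path.
    fs += finalPartPath
  }
}
```

In the SPARK-23815 scenario, the parent directory (e.g. `/out/a=1` for a multi-level partition `/out/a=1/b=2`) is missing, so it is recreated before the rename; when `finalPartPath` itself still exists, `delete` returns true and no `mkdirs` is needed.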
why do we only create the parent dir when we fail to delete the `finalPartPath`?
@cloud-fan this is to follow the behavior of the HDFS rename spec: it requires the parent to be present. If we created `finalPartPath` directly, it would cause another weird behavior in rename when the destination path already exists. From the HDFS spec I shared above: "If the destination exists and is a directory, the final destination of the rename becomes the destination + the filename of the source path". We have confirmed this in our production cluster, and the current patch only creates the parent dir, which follows the HDFS spec exactly.
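That rename quirk can be captured in a tiny model (hedged illustration only; `effectiveDest` and `basename` are invented helpers, not Hadoop API): when the destination already exists as a directory, the source is moved *into* it rather than replacing it.

```scala
// Illustrative model of the HDFS rename destination rule quoted above:
// "If the destination exists and is a directory, the final destination of the
// rename becomes the destination + the filename of the source path".
// Sketch of the spec'd behaviour only -- not the Hadoop implementation.
object RenameRule {
  def basename(path: String): String = path.substring(path.lastIndexOf('/') + 1)

  // existingDirs: the directories that already exist on the "filesystem"
  def effectiveDest(src: String, dest: String, existingDirs: Set[String]): String =
    if (existingDirs.contains(dest)) dest + "/" + basename(src) // moved *into* dest
    else dest                                                   // normal rename
}
```

This is why the patch creates only the parent of `finalPartPath`: pre-creating `finalPartPath` itself would make the subsequent rename nest the staged directory one level too deep (e.g. `.../b=2/b=2`).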
I think the problem here is that we didn't check whether `finalPartPath` exists, and we should actually check that before the rename.
I feel the code here is not safe. We may fail to delete because `finalPartPath` doesn't exist, or because of some real failure. We should make sure `finalPartPath` doesn't exist before renaming.
BTW, we should add comments around here to explain all of this.
+1 on adding comments.
The FileSystem API spec on delete says "Code SHOULD just call delete(path, recursive) and assume the destination is no longer present". Referring to its detailed spec, the only case where we may get false from delete is when `finalPartPath` does not exist; other failures should result in an exception. When `finalPartPath` does not exist, which is an expected case, we only need to act if the parent of `finalPartPath` does not exist, because otherwise we will have a problem in the rename, according to the rename spec. Please advise if you think we should still double-check `finalPartPath` before the rename. Will add a comment after the discussion.
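Under that reading of the spec, `delete(path, recursive)` has three possible outcomes, which can be sketched as follows (illustrative model only, not the Hadoop implementation; the `failing` parameter is an invented way to simulate a real failure):

```scala
// Sketch of the three delete(path, recursive) outcomes as read from the
// FileSystem spec in this thread:
//   - path existed and was removed       -> true  (Deleted)
//   - path did not exist                 -> false (DidNotExist)
//   - a real failure (e.g. permissions)  -> exception
sealed trait DeleteOutcome
case object Deleted extends DeleteOutcome
case object DidNotExist extends DeleteOutcome

def modelDelete(fs: scala.collection.mutable.Set[String],
                path: String,
                failing: Set[String] = Set.empty): DeleteOutcome = {
  if (failing.contains(path)) throw new java.io.IOException(s"cannot delete $path")
  if (fs.remove(path)) Deleted else DidNotExist
}
```

Only the false case calls for the parent check in the patch; a genuine failure surfaces as an exception rather than a false return.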
Ah, makes sense. Let's add some comments to summarize these discussions.
Added.