[SPARK-23815]follow up. add comment
Fangshi Li committed Apr 7, 2018
commit 686a4043b54ff44a29f9d01b615c9de50678217c
Expand Up @@ -187,6 +187,13 @@ class HadoopMapReduceCommitProtocol(
for (part <- partitionPaths) {
val finalPartPath = new Path(path, part)
if (!fs.delete(finalPartPath, true) && !fs.exists(finalPartPath.getParent)) {
Contributor:

Why do we only create the parent dir if we fail to delete the finalPartPath?

Author (@fangshil, Apr 5, 2018):

@cloud-fan This follows the HDFS rename spec, which requires the parent of the destination to be present. If we created finalPartPath itself, we would trigger another weird rename behavior when the destination path already exists. From the HDFS spec I shared above: "If the destination exists and is a directory, the final destination of the rename becomes the destination + the filename of the source path". We have confirmed this in our production cluster, so the current patch creates only the parent dir, which follows the HDFS spec exactly.

Contributor:

I think the problem here is that we don't check whether finalPartPath exists; we should actually check that before the rename.

Contributor:

I feel the code here is not safe. delete may return false because finalPartPath doesn't exist, or because of some real failure. We should make sure finalPartPath doesn't exist before renaming.

Contributor:

BTW, we should add comments around here to explain all of this.

Contributor:

+1 on adding comments.

Author (@fangshil, Apr 6, 2018):

The FileSystem API spec on delete says "Code SHOULD just call delete(path, recursive) and assume the destination is no longer present". Per its detailed spec, the only case where delete returns false is when finalPartPath does not exist; other failures result in an exception. When finalPartPath does not exist, which is an expected case, we only need to act if the parent of finalPartPath is also missing, because otherwise the rename will misbehave according to the rename spec. Please advise if you think we should still double-check finalPartPath before the rename; I will add a comment after the discussion.
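The delete contract cited above can be illustrated with the analogous behavior of java.nio on a local filesystem (a sketch only; java.nio stands in for the Hadoop FileSystem API here, and all names are illustrative): deleting an absent path is the one case that reports false rather than throwing.

```scala
import java.nio.file.Files

object DeleteContractDemo {
  // Mirrors the FileSystem spec's "assume the destination is no longer
  // present": the call reports false only when the path is already absent,
  // while real I/O errors would surface as exceptions instead.
  def deleteMissing(): Boolean = {
    val missing = Files.createTempDirectory("spark23815").resolve("no-such-part")
    Files.deleteIfExists(missing) // false: the path did not exist
  }
}
```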

Contributor:

Ah, makes sense. Let's add a comment summarizing this discussion.

Author:

Added:

// According to the official Hadoop FileSystem API spec, the delete op should assume
// the destination is no longer present regardless of the return value; in our case,
// it returns false only when finalPartPath does not exist.
// When finalPartPath does not exist, we need to take action only when the parent of
// finalPartPath also does not exist (e.g. the scenario described in SPARK-23815),
// because the FileSystem API spec on the rename op says the rename destination must
// have a parent that exists; otherwise we may get an unexpected result from the rename.
fs.mkdirs(finalPartPath.getParent)
Contributor:

Do you have an official HDFS document to support this change?

Author:

@cloud-fan Yes, in the official HDFS documentation (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/filesystem.html), the rename command has the precondition "dest must be root, or have a parent that exists".

}
fs.rename(new Path(stagingDir, part), finalPartPath)
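The delete/mkdirs/rename sequence above can be sketched against the local filesystem via java.nio (an analogy for the Hadoop FileSystem calls, not the API under discussion; object and path names are illustrative): a rename into a path whose parent is missing fails, and creating only the parent, as the patch does, is enough to make it succeed.

```scala
import java.nio.file.{Files, NoSuchFileException}

object RenameParentDemo {
  // Returns (renameFailedWithoutParent, renameSucceededAfterMkdirs).
  def run(): (Boolean, Boolean) = {
    val tmp = Files.createTempDirectory("spark23815-demo")
    val src = tmp.resolve("staging").resolve("part=1")
    Files.createDirectories(src)
    Files.createFile(src.resolve("data.txt"))

    // Destination whose parent directory does not exist yet.
    val dst = tmp.resolve("output").resolve("part=1")

    // Renaming under a missing parent fails, mirroring the HDFS rename
    // precondition "dest must be root, or have a parent that exists".
    val failedWithoutParent =
      try { Files.move(src, dst); false }
      catch { case _: NoSuchFileException => true }

    // Creating only the parent lets the rename succeed.
    Files.createDirectories(dst.getParent)
    Files.move(src, dst)
    (failedWithoutParent, Files.exists(dst))
  }
}
```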