[SPARK-13979][Core] Killed executor is re-spawned without AWS key… #14601
```diff
@@ -107,6 +107,14 @@ class SparkHadoopUtil extends Logging {
       if (key.startsWith("spark.hadoop.")) {
         hadoopConf.set(key.substring("spark.hadoop.".length), value)
       }
+      // Copy any "fs.swift2d.foo=bar" properties into conf as "fs.swift2d.foo=bar"
+      else if (key.startsWith("fs.swift2d")) {
+        hadoopConf.set(key, value)
+      }
+      // Copy any "fs.s3x.foo=bar" properties into conf as "fs.s3x.foo=bar"
+      else if (key.startsWith("fs.s3")) {
+        hadoopConf.set(key, value)
+      }
```
Contributor:
s3 is the AWS EMR filesystem, but an obsolete one on ASF Hadoop. I would recommend the list of s3, s3n, s3a, swift, adl, wasb, oss, gs. (Edited 10 Oct to keep the list in sync with what I believe is the current set.)

Contributor:
Actually, I would copy everything under fs.s3a, fs.s3, etc. Why? There is a lot more than passwords: for s3a we include proxy info and passwords, a list of alternate S3 auth mechanisms (e.g. declaring the use of IAM), and so on.

Contributor (Author):
@steveloughran can you help me with some default settings for adl, wasb, oss, gs to test, or just the syntax for them, so that I can decide on the filter conditions?
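A minimal sketch of the prefix-list approach suggested above, assuming the scheme list from the first comment; the object and method names below are hypothetical, not part of this patch:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

// Sketch only: `ObjectStoreConfSketch` and `copyObjectStoreProps` are
// hypothetical names used for illustration.
object ObjectStoreConfSketch {
  // Scheme list proposed in the review: s3, s3n, s3a, swift, adl, wasb, oss, gs.
  // Each prefix is anchored with a trailing dot so that, e.g., "fs.s3." cannot
  // accidentally match "fs.s3a.*" keys; that is why s3/s3n/s3a appear separately.
  private val objectStorePrefixes = Seq(
    "fs.s3.", "fs.s3n.", "fs.s3a.", "fs.swift.",
    "fs.adl.", "fs.wasb.", "fs.oss.", "fs.gs.")

  def copyObjectStoreProps(conf: SparkConf, hadoopConf: Configuration): Unit = {
    for ((key, value) <- conf.getAll) {
      // Copy "fs.<scheme>.option=value" through unchanged, so proxy settings,
      // auth providers, endpoints, etc. all reach the executors, not just keys.
      if (objectStorePrefixes.exists(key.startsWith)) {
        hadoopConf.set(key, value)
      }
    }
  }
}
```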
```diff
     }
     val bufferSize = conf.get("spark.buffer.size", "65536")
     hadoopConf.set("io.file.buffer.size", bufferSize)
```
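For context, a hedged example of the kind of configuration this hunk is meant to propagate. The fs.s3a.* key names are standard Hadoop S3A options; the values are placeholders. Before this change, only spark.hadoop.-prefixed keys (e.g. spark.hadoop.fs.s3a.access.key) were copied into the executor-side Hadoop configuration.

```scala
import org.apache.spark.SparkConf

// Placeholder values only. With this patch, keys set directly under "fs.s3a."
// are copied into the Hadoop Configuration that each executor builds, so a
// re-spawned executor still sees the credentials (the bug in SPARK-13979).
val sparkConf = new SparkConf()
  .setAppName("s3a-credentials-example")
  .set("fs.s3a.access.key", "<ACCESS_KEY>") // placeholder
  .set("fs.s3a.secret.key", "<SECRET_KEY>") // placeholder
```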
Contributor:
The comment above no longer applies to the whole loop; it should be moved onto this if statement.
Contributor (Author):
Done.