[SPARK-19052] the restSubmissionServer don't support multiple standby masters on standalone cluster #16450
Conversation
Can one of the admins verify this patch?
@srowen, can you help review it? I think it is a bug. Thank you very much.
I'm not sure this is a good change, because it means you no longer override defaults set somewhere in a config file. That is, I'm not clear this isn't on purpose; it could be right or wrong, I'm just not sure.
I checked the other places, and I'm sure they set "spark.master" from the user's configuration rather than from the master's address directly.
Where? I don't see any other instances of this pattern. It's easier if you add specifics when discussing code changes.
If we submit an application through the Spark REST API, we pass its configuration via sparkProperties. We expect the driver's "spark.master" to be "spark://10.20.23.22:7077,10.20.23.21:7077", but in fact it may end up as "spark://10.20.23.22:7077" because of Spark core's code. If we kill the master (spark://10.20.23.22:7077), then spark://10.20.23.21:7077 becomes the active master. After we kill the driver, Spark cannot restart the driver automatically, because the driver only knows the old master's address, spark://10.20.23.22:7077.
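The failure mode described above can be sketched as follows. This is a minimal illustration, not Spark's actual code; the object and method names (`FailoverSketch`, `reachableMasters`) are hypothetical.

```scala
// Hypothetical sketch: the driver only ever learned the single
// masterUrl handed to it by the REST server, not the standby master,
// so once that one master dies there is no address left to reconnect to.
object FailoverSketch {
  // Masters the driver can still reach after some have died.
  def reachableMasters(known: Seq[String], dead: Set[String]): Seq[String] =
    known.filterNot(dead)

  def main(args: Array[String]): Unit = {
    // The driver knows only the REST server's masterUrl, not the
    // standby at 10.20.23.21.
    val known = Seq("spark://10.20.23.22:7077")
    // The active master is killed; the standby takes over, but the
    // driver never learned its address, so automatic restart fails.
    val dead = Set("spark://10.20.23.22:7077")
    println(reachableMasters(known, dead)) // prints "List()"
  }
}
```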
@srowen masterUrl may come from defaults, but the user program's configuration should have higher priority. The user program's configuration can be delivered via the REST API's sparkProperties parameter.
Aha, that's making more sense to me. I don't know this code much. @andrewor14 would be an ideal reviewer, but I'm not sure if he's available at this point.
@srowen @andrewor14 can you review it again?
@hustfxj Unluckily we don't support multi-master nodes in standalone mode, so could you please close this PR? Thank you!
## What changes were proposed in this pull request?

This PR proposes to close stale PRs, mostly the same instances as apache#18017. I believe the author in apache#14807 removed his account.

Closes apache#7075
Closes apache#8927
Closes apache#9202
Closes apache#9366
Closes apache#10861
Closes apache#11420
Closes apache#12356
Closes apache#13028
Closes apache#13506
Closes apache#14191
Closes apache#14198
Closes apache#14330
Closes apache#14807
Closes apache#15839
Closes apache#16225
Closes apache#16685
Closes apache#16692
Closes apache#16995
Closes apache#17181
Closes apache#17211
Closes apache#17235
Closes apache#17237
Closes apache#17248
Closes apache#17341
Closes apache#17708
Closes apache#17716
Closes apache#17721
Closes apache#17937

Added:
Closes apache#14739
Closes apache#17139
Closes apache#17445
Closes apache#18042
Closes apache#18359

Added:
Closes apache#16450
Closes apache#16525
Closes apache#17738

Added:
Closes apache#16458
Closes apache#16508
Closes apache#17714

Added:
Closes apache#17830
Closes apache#14742

## How was this patch tested?

N/A

Author: hyukjinkwon <[email protected]>

Closes apache#18417 from HyukjinKwon/close-stale-pr.
If the job is submitted via the REST API, the driver only knows the single master address that comes from StandaloneRestServer's masterUrl. Since masterUrl is only one master's address, we should give priority to setting "spark.master" from sparkProperties. Then one or more masters can be set via sparkProperties.
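The proposed priority can be sketched like this. Again a hypothetical illustration, not the actual patch; the names (`PreferUserMaster`, `resolveMaster`) are made up for the example.

```scala
// Hypothetical sketch of the proposed priority: use the user's
// "spark.master" from sparkProperties when present, and fall back to
// the REST server's single masterUrl only when it is absent.
object PreferUserMaster {
  def resolveMaster(sparkProperties: Map[String, String],
                    serverMasterUrl: String): String =
    sparkProperties.getOrElse("spark.master", serverMasterUrl)

  def main(args: Array[String]): Unit = {
    val props = Map(
      "spark.master" -> "spark://10.20.23.22:7077,10.20.23.21:7077")
    // User supplied a multi-master list: it wins, so the driver
    // learns both addresses and can fail over to the standby.
    println(resolveMaster(props, "spark://10.20.23.22:7077"))
    // No user value: the server's single masterUrl is used as before.
    println(resolveMaster(Map.empty, "spark://10.20.23.22:7077"))
  }
}
```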