Add a note about jobs running in FIFO order in the default pool #20881
Conversation
Can one of the admins verify this patch?
docs/job-scheduling.md (Outdated)

```diff
 means that each user will get an equal share of the cluster, and that each user's queries will run in
 order instead of later queries taking resources from that user's earlier ones.
+If jobs are not explicitely set to use a given pool, they end up in the default pool. This means that even if
```
Hi @Alexis-D, there are a few minor typos here:
'explicitely' -> 'explicitly'
'ran' -> 'run'
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
right my bad -- I updated the PR
docs/job-scheduling.md

```diff
+If jobs are not explicitly set to use a given pool, they end up in the default pool. This means that even if
+`spark.scheduler.mode` is set to `FAIR` those jobs will be run in `FIFO` order (within the default pool).
```
This is not actually correct. There is no reason why you can't define a default pool that uses FAIR scheduling.
I assume you mean that the second sentence is incorrect? I drew that conclusion from empirical observations plus this code:

spark/core/src/main/scala/org/apache/spark/scheduler/SchedulableBuilder.scala, lines 109 to 117 in 992447f:
```scala
private def buildDefaultPool() {
  if (rootPool.getSchedulableByName(DEFAULT_POOL_NAME) == null) {
    val pool = new Pool(DEFAULT_POOL_NAME, DEFAULT_SCHEDULING_MODE,
      DEFAULT_MINIMUM_SHARE, DEFAULT_WEIGHT)
    rootPool.addSchedulable(pool)
    logInfo("Created default pool: %s, schedulingMode: %s, minShare: %d, weight: %d".format(
      DEFAULT_POOL_NAME, DEFAULT_SCHEDULING_MODE, DEFAULT_MINIMUM_SHARE, DEFAULT_WEIGHT))
  }
}
```
However, I might very well be missing something?
You seem to be missing a few somethings:

1. You can define your own default pool that does FAIR scheduling within that pool, so blanket statements about "the" default pool are dangerous.
2. `spark.scheduler.mode` controls the setup of the rootPool, not the scheduling within any pool.
3. If you don't define your own pool with a name corresponding to the DEFAULT_POOL_NAME (i.e. "default"), then you are going to get a default construction of "default", which does use FIFO scheduling within that pool.

So, item 2 effectively means that `spark.scheduler.mode` controls whether fair scheduling is possible at all, and it also defines the kind of scheduling used among the schedulable entities contained in the rootPool -- i.e. among the scheduling pools nested within rootPool. One of those nested pools will be DEFAULT_POOL_NAME/"default", which will use FIFO scheduling for schedulable entities within that pool if you haven't defined it to use fair scheduling.
If you just want one scheduling pool that does fair scheduling among its schedulable entities, then you need to set spark.scheduler.mode to "FAIR" in your SparkConf and also define in the pool configuration file a "default" pool to use schedulingMode FAIR. You could alternatively define such a fair-scheduling-inside pool named something other than "default" and then make sure that all of your jobs get assigned to that pool.
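A minimal sketch of that first option (the file path and the weight/minShare values here are assumptions for illustration, not settings prescribed above):

```xml
<?xml version="1.0"?>
<!-- conf/fairscheduler.xml (path is an assumption): redefine the "default"
     pool so that jobs *within* it are also scheduled fairly, not just
     across pools. -->
<allocations>
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

And in the application, enable fair scheduling among pools and point Spark at that file:

```scala
import org.apache.spark.SparkConf

// FAIR here governs scheduling among the pools under rootPool;
// the allocation file above makes the "default" pool fair inside as well.
val conf = new SparkConf()
  .setAppName("fair-default-pool") // app name is an arbitrary example
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "conf/fairscheduler.xml")
```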
Cool, thanks @markhamstra, I think I grasp what's going on now. Some form of your comment would be a useful addition to the documentation; the rationale being that there seems to be a (common?) misunderstanding about how to schedule jobs in a FAIR way, e.g. https://stackoverflow.com/a/37882686/2813687, or my own attempt, which led to this very PR. After reading your comment, the current documentation makes sense, and obviously this PR is incorrect (at the very least it doesn't underscore all the caveats/config knobs at play here). I'll take another look at improving the doc such that the actual behavior is obvious to Spark users who aren't familiar with the nitty-gritty of Spark scheduling and merely want to run a few jobs concurrently.
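To make that "run a few jobs concurrently" case concrete, here is a sketch of the alternative approach from the comment above: define a separate fair pool (here called "fairpool", an assumed name that would need a matching entry with schedulingMode FAIR in the allocation file) and assign each submitting thread to it via the `spark.scheduler.pool` local property:

```scala
import org.apache.spark.sql.SparkSession

// Sketch, assuming Scala 2.12+ and a "fairpool" entry with
// schedulingMode FAIR in conf/fairscheduler.xml.
val spark = SparkSession.builder()
  .appName("concurrent-jobs")  // arbitrary example name
  .master("local[*]")          // assumption: local run for illustration
  .config("spark.scheduler.mode", "FAIR")
  .config("spark.scheduler.allocation.file", "conf/fairscheduler.xml")
  .getOrCreate()
val sc = spark.sparkContext

// Run two jobs concurrently; the local property is per-thread,
// so each submitting thread picks the pool for its own jobs.
val threads = (1 to 2).map { i =>
  new Thread(() => {
    sc.setLocalProperty("spark.scheduler.pool", "fairpool")
    val n = sc.parallelize(1L to 1000000L).map(_ * 2).count()
    println(s"job $i counted $n")
  })
}
threads.foreach(_.start())
threads.foreach(_.join())
spark.stop()
```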
Closes apache#20458 Closes apache#20530 Closes apache#20557 Closes apache#20966 Closes apache#20857 Closes apache#19694 Closes apache#18227 Closes apache#20683 Closes apache#20881 Closes apache#20347 Closes apache#20825 Closes apache#20078 Closes apache#21281 Closes apache#19951 Closes apache#20905 Closes apache#20635 Author: Sean Owen <[email protected]> Closes apache#21303 from srowen/ClosePRs.
What changes were proposed in this pull request?
Make it clear in the doc that setting `spark.scheduler.mode` to `FAIR` isn't enough to get jobs to run in a FAIR fashion if the default pool is used.