docs/configuration.md: 2 changes (1 addition, 1 deletion)
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also available, and may be useful
  For more detail, see the description
  <a href="job-scheduling.html#dynamic-resource-allocation">here</a>.
  <br><br>
- This requires <code>spark.shuffle.service.enabled</code> to be set.
+ This requires <code>spark.shuffle.service.enabled</code> to be set true.
Contributor:
I think there's no ambiguity here. A configuration whose name ends in "xxx.enabled" usually takes only the two values "true" or "false", so "to be set" normally means to enable it (i.e., set it to true).

Author:
You are right, but "usually" is not certain enough. We should not make users guess; the documentation needs to describe this accurately. In addition, several other places in the Spark project clearly state that spark.shuffle.service.enabled must be set to true.

Contributor:
Since other places clearly define the property, there should be no ambiguity. Personally, I'm not fond of this super-nit fix...

Author:
Thank you for your comments.
"This requires spark.shuffle.service.enabled to be set true" is very clear. Only with such an accurate description is there no ambiguity.

The following configurations are also relevant:
<code>spark.dynamicAllocation.minExecutors</code>,
<code>spark.dynamicAllocation.maxExecutors</code>, and
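Not part of the PR, but for context, here is a minimal sketch of what the documented requirement looks like in practice: dynamic allocation configured together with the external shuffle service via SparkConf. The application name and the executor bounds below are illustrative assumptions, not values taken from the PR.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Minimal sketch: dynamic allocation requires the external shuffle service,
// i.e. spark.shuffle.service.enabled must be set to true.
// The app name and executor bounds are illustrative, not from the PR.
val conf = new SparkConf()
  .setAppName("dynamic-allocation-sketch")            // hypothetical app name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")       // required, per the doc text above
  .set("spark.dynamicAllocation.minExecutors", "1")   // illustrative bound
  .set("spark.dynamicAllocation.maxExecutors", "10")  // illustrative bound

val spark = SparkSession.builder().config(conf).getOrCreate()
```

The shuffle service is required because it preserves shuffle files written by executors, so executors can be removed safely when dynamic allocation scales the application down; that is why the documentation calls the setting out explicitly.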