Add documentation for spark.executor.bindAddress
Hendra Saputra committed Sep 20, 2023
commit 0d0e77dd3391d57f2d8a46b966a062a4a84fa70c
15 changes: 14 additions & 1 deletion docs/configuration.md
@@ -349,6 +349,19 @@ of the most common options to set are:
</td>
<td>3.0.0</td>
</tr>
<tr>
<td><code>spark.executor.bindAddress</code></td>
<td>(local hostname)</td>
<td>
Hostname or IP address where to bind listening sockets. This config overrides the SPARK_LOCAL_IP
environment variable (see below).
<br />It also allows a different address from the local one to be advertised to other
executors or external systems. This is useful, for example, when running containers with bridged networking.
For this to properly work, the different ports used by the executor (RPC and block manager) need to be
forwarded from the container's host.
</td>
<td>4.0.0</td>
</tr>
<tr>
<td><code>spark.extraListeners</code></td>
<td>(none)</td>
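For readers trying out the new option documented in the rows above, a minimal sketch of setting it programmatically, assuming the standard SparkConf API (the `0.0.0.0` wildcard and the application name are illustrative only, not recommended defaults):

```scala
import org.apache.spark.SparkConf

// Bind executor listening sockets to all interfaces inside a bridged-network
// container; the address advertised to other executors and external systems
// can then differ from this bind address, as the documentation above explains.
// "0.0.0.0" is purely illustrative.
val conf = new SparkConf()
  .setAppName("bind-address-demo")
  .set("spark.executor.bindAddress", "0.0.0.0")
```

The same setting can be passed on the command line via `spark-submit --conf spark.executor.bindAddress=0.0.0.0`.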
@@ -3028,7 +3041,7 @@ Apart from these, the following properties are also available, and may be useful
For more detail, see the description
<a href="job-scheduling.html#dynamic-resource-allocation">here</a>.
<br><br>
This requires one of the following conditions:
1) enabling external shuffle service through <code>spark.shuffle.service.enabled</code>, or
2) enabling shuffle tracking through <code>spark.dynamicAllocation.shuffleTracking.enabled</code>, or
3) enabling shuffle blocks decommission through <code>spark.decommission.enabled</code> and <code>spark.storage.decommission.shuffleBlocks.enabled</code>, or
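As a sketch of condition 2) from the list above, assuming the SparkConf API (the configuration keys are the ones named in the list; everything else is illustrative):

```scala
import org.apache.spark.SparkConf

// Enable dynamic allocation using shuffle tracking (condition 2),
// so no external shuffle service is required.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.shuffleTracking.enabled", "true")
```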
2 changes: 1 addition & 1 deletion resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala
@@ -733,8 +733,8 @@ private[yarn] class YarnAllocator(
for (container <- containersToUse) {
val rpId = getResourceProfileIdFromPriority(container.getPriority)
executorIdCounter += 1
-      val executorBindAddress = sparkConf.get(EXECUTOR_BIND_ADDRESS.key, executorHostname)
      val executorHostname = container.getNodeId.getHost
+      val executorBindAddress = sparkConf.get(EXECUTOR_BIND_ADDRESS.key, executorHostname)
val containerId = container.getId
val executorId = executorIdCounter.toString
val yarnResourceForRpId = rpIdToYarnResource.get(rpId)
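The moved line matters because its default value references `executorHostname`, which in the old order is only defined on the following line. A standalone sketch of the same fallback, with `resolveBindAddress` as a hypothetical helper rather than actual YarnAllocator code:

```scala
import org.apache.spark.SparkConf

// Hypothetical helper illustrating the lookup above: prefer the configured
// spark.executor.bindAddress, falling back to the YARN container's hostname.
def resolveBindAddress(sparkConf: SparkConf, executorHostname: String): String =
  sparkConf.get("spark.executor.bindAddress", executorHostname)
```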