Merged (17 commits)
b999fa4
[SPARK-17696][SPARK-12330][CORE] Partial backport of to branch-1.6.
drcrallen Sep 28, 2016
376545e
[SPARK-17721][MLLIB][BACKPORT] Fix for multiplying transposed SparseM…
bwahlgreen Oct 2, 2016
d3890de
[SPARK-15062][SQL] Backport fix list type infer serializer issue
brkyvz Oct 6, 2016
585c565
[SPARK-17850][CORE] Add a flag to ignore corrupt files (branch 1.6)
zsxwing Oct 13, 2016
18b173c
[SPARK-17678][REPL][BRANCH-1.6] Honor spark.replClassServer.port in s…
jerryshao Oct 13, 2016
903cc92
Merge branch 'branch-1.6' of github.com:apache/spark into csd-1.6
markhamstra Oct 14, 2016
745c5e7
[SPARK-17884][SQL] To resolve Null pointer exception when casting fro…
priyankagar Oct 14, 2016
0f57785
Prepare branch-1.6 for 1.6.3 release.
rxin Oct 17, 2016
7375bb0
Preparing Spark release v1.6.3
pwendell Oct 17, 2016
b95ac0d
Preparing development version 1.6.4-SNAPSHOT
pwendell Oct 17, 2016
4f9c026
Merge branch 'branch-1.6' of github.com:apache/spark into csd-1.6
markhamstra Oct 17, 2016
82e98f1
[SPARK-16078][SQL] Backport: from_utc_timestamp/to_utc_timestamp shou…
Oct 20, 2016
1e86074
Preparing Spark release v1.6.3-rc2
pwendell Nov 2, 2016
9136e26
Preparing development version 1.6.4-SNAPSHOT
pwendell Nov 2, 2016
8f25cb2
[SPARK-18553][CORE][BRANCH-1.6] Fix leak of TaskSetManager following …
JoshRosen Dec 1, 2016
70f271b
[SPARK-12446][SQL][BACKPORT-1.6] Add unit tests for JDBCRDD internal …
maropu Dec 3, 2016
91c9700
Merge branch 'branch-1.6' of github.com:apache/spark into csd-1.6
markhamstra Dec 6, 2016
[SPARK-17696][SPARK-12330][CORE] Partial backport of to branch-1.6.
From the original commit message:

This PR also fixes a regression caused by [SPARK-10987], whereby submitting a shutdown causes a race between the local shutdown procedure and the notification of the scheduler driver disconnection. If the driver-disconnection notification wins the race, the coarse-grained executor incorrectly exits with status 1 instead of the proper status 0.

Author: Charles Allen <charlesallen-net.com>

(cherry picked from commit 2eaeafe)

Author: Charles Allen <[email protected]>

Closes apache#15270 from vanzin/SPARK-17696.
drcrallen authored and zsxwing committed Sep 28, 2016
commit b999fa43ea0b509341ac2e130cc3787e5f8a75e5
@@ -19,6 +19,7 @@ package org.apache.spark.executor

 import java.net.URL
 import java.nio.ByteBuffer
+import java.util.concurrent.atomic.AtomicBoolean

 import org.apache.hadoop.conf.Configuration

@@ -45,6 +46,7 @@ private[spark] class CoarseGrainedExecutorBackend(
     env: SparkEnv)
   extends ThreadSafeRpcEndpoint with ExecutorBackend with Logging {

+  private[this] val stopping = new AtomicBoolean(false)
   var executor: Executor = null
   @volatile var driver: Option[RpcEndpointRef] = None

@@ -106,19 +108,23 @@ private[spark] class CoarseGrainedExecutorBackend(
       }

     case StopExecutor =>
+      stopping.set(true)
       logInfo("Driver commanded a shutdown")
       // Cannot shutdown here because an ack may need to be sent back to the caller. So send
       // a message to self to actually do the shutdown.
       self.send(Shutdown)

     case Shutdown =>
+      stopping.set(true)
       executor.stop()
       stop()
       rpcEnv.shutdown()
   }

   override def onDisconnected(remoteAddress: RpcAddress): Unit = {
-    if (driver.exists(_.address == remoteAddress)) {
+    if (stopping.get()) {
+      logInfo(s"Driver from $remoteAddress disconnected during shutdown")
+    } else if (driver.exists(_.address == remoteAddress)) {
       logError(s"Driver $remoteAddress disassociated! Shutting down.")
       System.exit(1)
     } else {