use DurationConvertions
xueyumusic committed Jun 15, 2018
commit 10c221efc67fe6cdadf4f867ee150f69004484db
5 changes: 3 additions & 2 deletions core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
@@ -20,6 +20,7 @@ package org.apache.spark
 import java.util.concurrent.{ScheduledFuture, TimeUnit}

 import scala.collection.mutable
+import scala.concurrent.duration._
Member: this will be unused after you address my comments

 import scala.concurrent.Future

 import org.apache.spark.internal.Logging
@@ -76,9 +77,9 @@ private[spark] class HeartbeatReceiver(sc: SparkContext, clock: Clock)
// "milliseconds"
private val slaveTimeoutMs =
sc.conf.getTimeAsMs("spark.storage.blockManagerSlaveTimeoutMs",
Contributor: The val slaveTimeoutMs is only used in

    sc.conf.getTimeAsSeconds("spark.network.timeout", s"${slaveTimeoutMs}ms") * 1000

so I don't think this change will fix anything.

Contributor: cc @zsxwing to confirm.

Contributor Author: I looked at this carefully, and I think you are right, thanks @jiangxb1987. One case that is unrelated to this PR: if spark.storage.blockManagerSlaveTimeoutMs=900ms is set and spark.network.timeout is not configured, then executorTimeoutMs will be 0, because getTimeAsSeconds loses sub-second precision. Such a configuration may not be reasonable, but if a fix is needed, how about ensuring the result is > 0, or giving executorTimeoutMs a minimum value of 1? @jiangxb1987 @zsxwing
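A minimal standalone sketch of the precision loss described above. Plain scala.concurrent.duration parsing stands in here for SparkConf.getTimeAsSeconds, which truncates to whole seconds the same way; the 900ms value comes from the comment, and the object name is illustrative:

    import scala.concurrent.duration._

    object PrecisionLossDemo extends App {
      // Parse "900ms" and truncate to whole seconds, as getTimeAsSeconds does.
      val slaveTimeout = Duration("900ms")
      val asSeconds = slaveTimeout.toSeconds             // 0 -- the sub-second part is dropped
      val executorTimeoutMs = asSeconds.seconds.toMillis // still 0 after the round trip

      println(s"900ms -> ${asSeconds}s -> ${executorTimeoutMs}ms")
    }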

Member: Looks like we never use slaveTimeoutMs. Let's delete this val and just assign this value to executorTimeoutMs.

Contributor Author: I have removed the temporary val slaveTimeoutMs; timeoutIntervalMs is the same case, so I removed it too. Thanks @zsxwing @jiangxb1987

s"${sc.conf.getTimeAsSeconds("spark.network.timeout", "120s") * 1000L}ms")
s"${sc.conf.getTimeAsSeconds("spark.network.timeout", "120s").seconds.toMillis}ms")
private val executorTimeoutMs =
sc.conf.getTimeAsSeconds("spark.network.timeout", s"${slaveTimeoutMs}ms") * 1000
sc.conf.getTimeAsSeconds("spark.network.timeout", s"${slaveTimeoutMs}ms").seconds.toMillis

// "spark.network.timeoutInterval" uses "seconds", while
// "spark.storage.blockManagerTimeoutIntervalMs" uses "milliseconds"
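For context, a minimal standalone sketch of the scala.concurrent.duration idiom this commit switches to; the 120-second value mirrors the spark.network.timeout default above, and the object name is illustrative:

    import scala.concurrent.duration._

    object DurationConversionDemo extends App {
      val networkTimeoutSec = 120L // default for spark.network.timeout, in seconds

      // Before: bare unit arithmetic.
      val manualMs = networkTimeoutSec * 1000L

      // After: the DurationConversions enrichment makes the units explicit.
      val idiomaticMs = networkTimeoutSec.seconds.toMillis

      assert(manualMs == idiomaticMs) // both are 120000
      println(s"${networkTimeoutSec}s == ${idiomaticMs}ms")
    }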