Closed
Changes from 1 commit
Commits (40)
39ba441
spark-9104 first draft version
liyezhang556520 Aug 17, 2015
ecc1044
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Aug 17, 2015
2101538
show N/A for nio
liyezhang556520 Aug 17, 2015
9ccaf88
handle executor add and remove event for memotyTab
liyezhang556520 Aug 18, 2015
13c17fb
show removed executors info on page
liyezhang556520 Aug 19, 2015
c9b44b1
add stage memory trace
liyezhang556520 Aug 19, 2015
984feaf
add history support for heartbeat event
liyezhang556520 Aug 20, 2015
2501c82
limit history event log frequency
liyezhang556520 Aug 20, 2015
e0ae855
add some comments for EventLoggingListener
liyezhang556520 Aug 20, 2015
7491279
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Aug 20, 2015
424c172
scala style fix
liyezhang556520 Aug 20, 2015
f21a804
remove executor port and fix test failure
liyezhang556520 Aug 21, 2015
2f3d30b
merge spache/master after master updated
liyezhang556520 Sep 25, 2015
7b846a2
work with JavaConverters
liyezhang556520 Oct 9, 2015
41874aa
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Oct 9, 2015
0531d0f
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Oct 29, 2015
27b7da1
refine the code according to Imran's comments and the design doc
liyezhang556520 Nov 2, 2015
a8fcf74
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Nov 2, 2015
f2f0e64
fix scala style test
liyezhang556520 Nov 3, 2015
5f7a999
capitalize class name
liyezhang556520 Nov 3, 2015
5ad7a6a
change task metrics json format back to origin
liyezhang556520 Nov 3, 2015
c836fb9
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Nov 3, 2015
b5aa4da
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Nov 5, 2015
e8e2bdd
Merge remote-tracking branch 'apache/master' into netMem-9104
liyezhang556520 Nov 6, 2015
1dffa29
accroding to Imran's comment, refine the code
liyezhang556520 Nov 17, 2015
75e63c3
add first test case
liyezhang556520 Nov 17, 2015
0c1241c
fix scala style
liyezhang556520 Nov 17, 2015
c78628e
add more test cases, with eventloging test left
liyezhang556520 Nov 19, 2015
a93bd96
scala style fix
liyezhang556520 Nov 19, 2015
89214f3
fix test fail and add event logging unit test
liyezhang556520 Nov 23, 2015
1ed48c1
scala syle
liyezhang556520 Nov 23, 2015
cb307aa
merge to apache/master branch, fix merge conflict
liyezhang556520 Nov 24, 2015
b438077
roll back useless change
liyezhang556520 Nov 24, 2015
4123ac7
modify the code according to Imran's comments, mainly with unit test
liyezhang556520 Dec 8, 2015
2ce9fd9
fix scala style
liyezhang556520 Dec 8, 2015
17d094e
merge to master branch with tests update
liyezhang556520 Dec 8, 2015
4b3dbe4
change port to option and some bug fixes
liyezhang556520 Dec 9, 2015
0ea7cab
address comments of code refinement
liyezhang556520 Jan 12, 2016
5e031ce
merge to latest master branch from spark-9104-draft
liyezhang556520 Jan 12, 2016
87f8172
fix import ordering error
liyezhang556520 Jan 12, 2016
refine the code according to Imran's comments and the design doc
liyezhang556520 committed Nov 2, 2015
commit 27b7da1b775b0c4101af3f1f6ee454668a49cdc1
5 changes: 4 additions & 1 deletion core/src/main/scala/org/apache/spark/executor/Executor.scala
@@ -437,7 +437,7 @@ private[spark] class Executor(
metrics.updateAccumulators()

if (isLocal) {
// JobProgressListener will hold an reference of it during
// JobProgressListener will hold a reference of it during
// onExecutorMetricsUpdate(), then JobProgressListener can not see
// the changes of metrics any more, so make a deep copy of it
val copiedMetrics = Utils.deserialize[TaskMetrics](Utils.serialize(metrics))
@@ -452,6 +452,9 @@

env.blockTransferService.getMemMetrics(this.executorMetrics)
val executorMetrics = if (isLocal) {
// JobProgressListener might hold a reference of it during onExecutorMetricsUpdate()
// in future, if then JobProgressListener can not see the changes of metrics any
// more, so make a deep copy of it here for future change.
Utils.deserialize[ExecutorMetrics](Utils.serialize(this.executorMetrics))
Contributor:
What's the point of this? Sorry if we discussed it earlier... if it's just to test that ExecutorMetrics really is serializable, that would be better in a test case.

Contributor Author:
This is due to SPARK-3465. Currently we do not have any aggregation operations for ExecutorMetrics, so we can remove this. We can add it back when we do some aggregation.

Contributor:
Ah, this is a great point. In that case, I think you can leave it in for now, but add a comment that the serialization & deserialization is just to make a copy, per SPARK-3465.
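
As an aside, a minimal sketch of the test case suggested above — assuming a suite under org.apache.spark.executor (so private[spark] members are visible), a no-arg ExecutorMetrics constructor as the diff implies, and that comparing transportMetrics is a sufficient check:

package org.apache.spark.executor

import org.apache.spark.SparkFunSuite
import org.apache.spark.util.Utils

class ExecutorMetricsSuite extends SparkFunSuite {
  test("ExecutorMetrics serializes and round-trips") {
    val metrics = new ExecutorMetrics
    metrics.setTransportMetrics(Some(TransportMetrics(0L, 0L, 0L)))
    // Java-serialization round trip; the same trick doubles as a deep copy (SPARK-3465).
    val copied = Utils.deserialize[ExecutorMetrics](Utils.serialize(metrics))
    // The copy must be a distinct object with the same contents.
    assert(copied ne metrics)
    assert(copied.transportMetrics === metrics.transportMetrics)
  }
}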

} else {
this.executorMetrics
@@ -50,7 +50,5 @@ class ExecutorMetrics extends Serializable {
@DeveloperApi
case class TransportMetrics(
timeStamp: Long,
clientOnheapSize: Long,
clientDirectheapSize: Long,
serverOnheapSize: Long,
serverDirectheapSize: Long)
onHeapSize: Long,
directSize: Long)
Contributor:
I know "direct" was my suggestion earlier, but now I see that we already use "offheap" extensively in the codebase, so let's use that instead.

Contributor:
Also, I think we should avoid using a case class. The problem is binary compatibility of the apply / unapply methods when you add a field.

Contributor Author:
Done.
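
For reference, a hedged sketch of the non-case-class shape being asked for: a plain class plus an explicit companion apply keeps existing TransportMetrics(...) call sites compiling while avoiding the auto-generated apply/unapply and their binary-compatibility hazard. Field names follow the diff above; the rest is an assumption:

@DeveloperApi
class TransportMetrics(
    val timeStamp: Long,
    val onHeapSize: Long,
    val directSize: Long) extends Serializable

object TransportMetrics {
  // Explicit factory: when a field is added later, a new overload can be
  // introduced here without breaking the old binary signature.
  def apply(timeStamp: Long, onHeapSize: Long, directSize: Long): TransportMetrics =
    new TransportMetrics(timeStamp, onHeapSize, directSize)
}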

@@ -22,7 +22,7 @@ import scala.concurrent.{Future, Promise}

import io.netty.buffer._

import org.apache.spark.{SecurityManager, SparkConf}
import org.apache.spark.{SecurityManager, SparkConf, SparkEnv}
import org.apache.spark.executor.{TransportMetrics, ExecutorMetrics}
import org.apache.spark.network._
import org.apache.spark.network.buffer.ManagedBuffer
@@ -32,7 +32,6 @@ import org.apache.spark.network.server._
import org.apache.spark.network.shuffle.{RetryingBlockFetcher, BlockFetchingListener, OneForOneBlockFetcher}
import org.apache.spark.network.shuffle.protocol.UploadBlock
import org.apache.spark.serializer.JavaSerializer
import org.apache.spark.SparkEnv
import org.apache.spark.storage.{BlockId, StorageLevel}
import org.apache.spark.util.{Clock, Utils, SystemClock}

@@ -64,17 +63,17 @@ class NettyBlockTransferService(conf: SparkConf, securityManager: SecurityManage
val currentTime = clock.getTimeMillis()
val clientPooledAllocator = clientFactory.getPooledAllocator()
val serverAllocator = server.getAllocator()
val clientDirectHeapSize: Long = sumOfMetrics(
val clientDirectSize: Long = sumOfMetrics(
clientPooledAllocator.directArenas().asScala.toList)
val clientOnHeapSize: Long = sumOfMetrics(clientPooledAllocator.heapArenas().asScala.toList)
val serverDirectHeapSize: Long = sumOfMetrics(serverAllocator.directArenas().asScala.toList)
val serverDirectSize: Long = sumOfMetrics(serverAllocator.directArenas().asScala.toList)
val serverOnHeapSize: Long = sumOfMetrics(serverAllocator.heapArenas().asScala.toList)
Contributor:
Push the .asScala.toList into the helper method (and then I think you don't even need the toList).
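
A hedged sketch of that refactor, inside NettyBlockTransferService: the Java list Netty hands back is converted inside the helper, and no toList is needed because map/sum work on the Buffer view. activeBytesOf is a hypothetical stand-in for whatever per-arena accounting the original sumOfMetrics body does:

import scala.collection.JavaConverters._

import io.netty.buffer.PoolArenaMetric

// Conversion now happens here, so callers can pass allocator.directArenas()
// or allocator.heapArenas() directly.
private def sumOfMetrics(arenaMetrics: java.util.List[PoolArenaMetric]): Long = {
  arenaMetrics.asScala.map(activeBytesOf).sum
}

// Hypothetical placeholder: stands in for the per-arena byte count used upstream.
private def activeBytesOf(arena: PoolArenaMetric): Long = 0L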

executorMetrics.setTransportMetrics(Some(TransportMetrics(currentTime,
clientOnHeapSize, clientDirectHeapSize, serverOnHeapSize, serverDirectHeapSize)))
logDebug(s"current Netty client directHeapSize is $clientDirectHeapSize, " +
s"client heapSize is $clientOnHeapSize, server directHeapsize is $serverDirectHeapSize, " +
logDebug(s"Current Netty Client directSize is $clientDirectSize, " +
s"Client HeapSize is $clientOnHeapSize, server directHeapsize is $serverDirectSize, " +
s"server heapsize is $serverOnHeapSize, executer id is " +
s"${SparkEnv.get.blockManager.blockManagerId.executorId}")
executorMetrics.setTransportMetrics(Some(TransportMetrics(currentTime,
clientOnHeapSize + serverOnHeapSize, clientDirectSize + serverDirectSize)))
}

private def sumOfMetrics(arenaMetricList: List[PoolArenaMetric]): Long = {
@@ -20,7 +20,6 @@ package org.apache.spark.scheduler
import java.io._
import java.net.URI

import akka.remote.transport.Transport
import org.apache.spark.executor.TransportMetrics
Contributor:
Nit: import ordering.

Contributor Author:
Fixed.


import scala.collection.mutable
@@ -97,8 +96,9 @@ private[spark] class EventLoggingListener(
private[scheduler] val logPath = getLogPath(
logBaseDir, appId, appAttemptId, compressionCodecName)
Contributor:
This shouldn't need to change, right?

Contributor Author:
Yes, my mistake; the width limit is 100, so it's correct. I'll change it back later, thanks.


private val latestMetrics = new HashMap[String, SparkListenerExecutorMetricsUpdate]
private val modifiedMetrics = new HashMap[String, SparkListenerExecutorMetricsUpdate]
private val executorIdToLatestMetrics = new HashMap[String, SparkListenerExecutorMetricsUpdate]
private val executorIdToModifiedMaxMetrics = new
HashMap[String, SparkListenerExecutorMetricsUpdate]

/**
* Creates the log file in the configured log directory.
@@ -161,17 +161,23 @@
}
}

// We log the event both when stage submitted and stage completed, and after each logEvent call,
// replace the modifiedMetrics with the latestMetrics. In case the stages submit and complete
// time might be interleaved. So as to make the result the same with the running time.
private def logMetricsUpdateEvent() : Unit = {
modifiedMetrics.map(metrics => logEvent(metrics._2))
latestMetrics.map(metrics => modifiedMetrics.update(metrics._1, metrics._2))
// When a stage is submitted and completed, we updated our executor memory metrics for that stage,
// and then log the metrics. Anytime we receive more executor metrics, we update our running set of
// {{executorIdToLatestMetrics}} and {{executorIdToModifiedMaxMetrics}}. Since stages submit and
// complete time might be interleaved, we maintain the latest and max metrics for each time segment.
// So, for each stage start and stage complete, we replace each item in
// {{executorIdToModifiedMaxMetrics}} with that in {{executorIdToLatestMetrics}}.
private def updateAndLogExecutorMemoryMetrics() : Unit = {
executorIdToModifiedMaxMetrics.foreach { case(_, metrics) => logEvent(metrics) }
executorIdToLatestMetrics.foreach {case(_, metrics) => logEvent(metrics) }
executorIdToLatestMetrics.foreach { case (executorId, metrics) =>
executorIdToModifiedMaxMetrics.update(executorId, metrics)
}
}
Contributor:
I'd rename this to updateAndLogExecutorMemoryMetrics or something like that, to be a little more specific. I'd also change the first sentence of the comment to something like:

"When a stage is submitted and completed, we update our executor memory metrics for that stage, and then log the metrics. Anytime we receive more executor metrics, we update our running set of {{maxMetrics}} and {{latestMetrics}}."

I don't understand the last two sentences of the comment; can you expand on that?

Finally, you should use foreach, and you can use case to extract the parts you want and make it a little clearer:

modifiedMetrics.foreach { case (_, metrics) => logEvent(metrics) }
latestMetrics.foreach { case (executorId, metrics) => modifiedMetrics.update(executorId, metrics) }

Contributor Author:
Regarding the last two sentences of the comment: I'll update the code according to the design doc. I think the current code is not quite correct; please refer to the design doc.


// Events that do not trigger a flush
override def onStageSubmitted(event: SparkListenerStageSubmitted): Unit = {
logMetricsUpdateEvent()
updateAndLogExecutorMemoryMetrics()
logEvent(event)
}

@@ -185,7 +191,7 @@

// Events that trigger a flush
override def onStageCompleted(event: SparkListenerStageCompleted): Unit = {
logMetricsUpdateEvent()
updateAndLogExecutorMemoryMetrics()
logEvent(event, flushLogger = true)
}

@@ -218,8 +224,8 @@
}

override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit = {
latestMetrics.remove(event.executorId)
modifiedMetrics.remove(event.executorId)
executorIdToLatestMetrics.remove(event.executorId)
executorIdToModifiedMaxMetrics.remove(event.executorId)
logEvent(event, flushLogger = true)
}

@@ -228,7 +234,7 @@

// No-op because logging every update would be overkill
override def onExecutorMetricsUpdate(event: SparkListenerExecutorMetricsUpdate): Unit = {
latestMetrics.update(event.execId, event)
executorIdToLatestMetrics.update(event.execId, event)
updateModifiedMetrics(event.execId)
}

@@ -258,10 +264,10 @@
* @param executorId the executor whose metrics will be modified
*/
private def updateModifiedMetrics(executorId: String): Unit = {
val toBeModifiedEvent = modifiedMetrics.get(executorId)
val latestEvent = latestMetrics.get(executorId)
val toBeModifiedEvent = executorIdToModifiedMaxMetrics.get(executorId)
val latestEvent = executorIdToLatestMetrics.get(executorId)
if (toBeModifiedEvent.isEmpty) {
if (latestEvent.isDefined) modifiedMetrics.update(executorId, latestEvent.get)
if (latestEvent.isDefined) executorIdToModifiedMaxMetrics.update(executorId, latestEvent.get)
} else {
val toBeModifiedMetrics = toBeModifiedEvent.get.executorMetrics.transportMetrics
if (toBeModifiedMetrics.isDefined) {
Contributor:
Won't latestEvent always be defined? In fact, at the one call site, you could even just pass in latestEvent directly so you avoid another lookup. I also think this becomes slightly cleaner with pattern matching:

private def updateModifiedMetrics(executorId: String, latestEvent: SparkListenerExecutorMetricsUpdate): Unit = {
  executorIdToModifiedMaxMetrics.get(executorId) match {
    case None => executorIdToModifiedMaxMetrics.update(executorId, latestEvent)
    case Some(toBeModEvent) =>
      val toBeModMetrics = toBeModEvent.executorMetrics.transportMetrics
      ...
  }
}

And depending on whether or not we need to keep transportMetrics as an Option, we may need to handle the else case here, right?

Contributor Author:
latestEvent is not always defined when toBeModifiedEvent.isEmpty is true: at the very beginning, latestEvent is empty until the first metrics update event is received.

Thank you for the cleaner code style example; I've updated my code.
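
Putting both suggestions together, a hedged sketch of how the reworked call site and helper inside EventLoggingListener could look; names follow the diff, and the max-merge body (the onHeapSize/directSize comparison visible in the diff below) is elided to a comment:

override def onExecutorMetricsUpdate(event: SparkListenerExecutorMetricsUpdate): Unit = {
  executorIdToLatestMetrics.update(event.execId, event)
  // The event is already in hand here, so pass it down and skip a second map lookup.
  updateModifiedMetrics(event.execId, event)
}

private def updateModifiedMetrics(
    executorId: String,
    latestEvent: SparkListenerExecutorMetricsUpdate): Unit = {
  executorIdToModifiedMaxMetrics.get(executorId) match {
    case None =>
      executorIdToModifiedMaxMetrics.update(executorId, latestEvent)
    case Some(toBeModEvent) =>
      toBeModEvent.executorMetrics.transportMetrics match {
        case Some(_) =>
          // Merge: keep the max onHeapSize and directSize, as in the diff below.
        case None =>
          // No transport metrics recorded yet: take the latest event wholesale.
          executorIdToModifiedMaxMetrics.update(executorId, latestEvent)
      }
  }
}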

@@ -270,29 +276,23 @@
val toBeModTransMetrics = toBeModifiedMetrics.get
var timeStamp: Long = toBeModTransMetrics.timeStamp
// the logic here should be the same with that for memoryListener
val (clientOnheapSize, serverOnheapSize) =
if (latestTransMetrics.clientOnheapSize + latestTransMetrics.serverOnheapSize >
toBeModTransMetrics.clientOnheapSize + toBeModTransMetrics.serverOnheapSize) {
val onHeapSize = if (latestTransMetrics.onHeapSize > toBeModTransMetrics.onHeapSize) {
timeStamp = latestTransMetrics.timeStamp
(latestTransMetrics.clientOnheapSize, latestTransMetrics.serverOnheapSize)
latestTransMetrics.onHeapSize
} else {
(toBeModTransMetrics.clientOnheapSize, toBeModTransMetrics.serverOnheapSize)
toBeModTransMetrics.onHeapSize
}
val (clientDirectheapSize, serverDirectheapSize) =
if (latestTransMetrics.clientDirectheapSize + latestTransMetrics.serverDirectheapSize >
toBeModTransMetrics.clientDirectheapSize + toBeModTransMetrics.serverDirectheapSize) {
val directSize = if (latestTransMetrics.directSize > toBeModTransMetrics.directSize) {
timeStamp = latestTransMetrics.timeStamp
(latestTransMetrics.clientDirectheapSize, latestTransMetrics.serverDirectheapSize)
latestTransMetrics.directSize
} else {
(toBeModTransMetrics.clientDirectheapSize, toBeModTransMetrics.serverDirectheapSize)
toBeModTransMetrics.directSize
}
toBeModifiedEvent.get.executorMetrics.setTransportMetrics(
Some(TransportMetrics(timeStamp, clientOnheapSize, clientDirectheapSize,
serverOnheapSize, serverDirectheapSize)))
Some(TransportMetrics(timeStamp, onHeapSize, directSize)))
}
}
}

}

private[spark] object EventLoggingListener extends Logging {
65 changes: 29 additions & 36 deletions core/src/main/scala/org/apache/spark/ui/memory/MemoryTab.scala
@@ -41,11 +41,11 @@ class MemoryListener extends SparkListener {
type ExecutorId = String
val activeExecutorIdToMem = new HashMap[ExecutorId, MemoryUIInfo]
val removedExecutorIdToMem = new HashMap[ExecutorId, MemoryUIInfo]
// latestExecIdToExecMetrics include all executors that is active and removed.
// latestExecIdToExecMetrics including all executors that is active and removed.
// this may consume a lot of memory when executors are changing frequently, e.g. in dynamical
// allocation mode.
val latestExecIdToExecMetrics = new HashMap[ExecutorId, ExecutorMetrics]
// stagesIdToMem a map maintains all executors memory information of each stage,
// activeStagesToMem a map maintains all executors memory information of each stage,
// the Map type is [(stageId, attemptId), Seq[(executorId, MemoryUIInfo)]
val activeStagesToMem = new HashMap[(Int, Int), HashMap[ExecutorId, MemoryUIInfo]]
val completedStagesToMem = new HashMap[(Int, Int), HashMap[ExecutorId, MemoryUIInfo]]
@@ -55,10 +55,9 @@
val executorMetrics = event.executorMetrics
val memoryInfo = activeExecutorIdToMem.getOrElseUpdate(executorId, new MemoryUIInfo)
memoryInfo.updateExecutorMetrics(executorMetrics)
activeStagesToMem.map {stageToMem =>
if (stageToMem._2.contains(executorId)) {
val memInfo = stageToMem._2.get(executorId).get
memInfo.updateExecutorMetrics(executorMetrics)
activeStagesToMem.foreach { case (_, stageMemMetrics) =>
if(stageMemMetrics.contains(executorId)) {
stageMemMetrics.get(executorId).get.updateExecutorMetrics(executorMetrics)
}
}
latestExecIdToExecMetrics.update(executorId, executorMetrics)
@@ -84,21 +83,19 @@
override def onStageSubmitted(event: SparkListenerStageSubmitted): Unit = {
val stage = (event.stageInfo.stageId, event.stageInfo.attemptId)
val memInfoMap = new HashMap[ExecutorId, MemoryUIInfo]
activeExecutorIdToMem.map(idToMem => memInfoMap.update(idToMem._1, new MemoryUIInfo))
activeExecutorIdToMem.foreach(idToMem => memInfoMap.update(idToMem._1, new MemoryUIInfo))
activeStagesToMem.update(stage, memInfoMap)
}

override def onStageCompleted(event: SparkListenerStageCompleted): Unit = {
val stage = (event.stageInfo.stageId, event.stageInfo.attemptId)
val memInfoMap = activeStagesToMem.get(stage)
if (memInfoMap.isDefined) {
activeExecutorIdToMem.map { idToMem =>
val executorId = idToMem._1
val memInfo = memInfoMap.get.getOrElse(executorId, new MemoryUIInfo)
if (latestExecIdToExecMetrics.contains(executorId)) {
memInfo.updateExecutorMetrics(latestExecIdToExecMetrics.get(executorId).get)
activeStagesToMem.get(stage).map { memInfoMap =>
activeExecutorIdToMem.foreach { case (executorId, _) =>
val memInfo = memInfoMap.getOrElse(executorId, new MemoryUIInfo)
latestExecIdToExecMetrics.get(executorId).foreach { prevExecutorMetrics =>
memInfo.updateExecutorMetrics(prevExecutorMetrics)
}
memInfoMap.get.update(executorId, memInfo)
memInfoMap.update(executorId, memInfo)
}
completedStagesToMem.put(stage, activeStagesToMem.remove(stage).get)
}
Contributor:
You can use option handling to simplify this:

activeStagesToMem.get(stage).map { memInfoMap =>
  activeExecutorIdToMem.foreach { case (executorId, _) =>
    val memInfo = memInfoMap.getOrElseUpdate(executorId, new MemoryUIInfo)
    latestExecIdToExecMetrics.get(executorId).foreach { prevExecutorMetrics =>
      memInfo.updateExecutorMetrics(prevExecutorMetrics)
    }
  }
}

@@ -107,46 +104,42 @@

class MemoryUIInfo {
Contributor:
@DeveloperApi on the other 3 classes in this file.
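
That is, presumably something like the following sketch, reusing the annotation TransportMetrics already carries above (class bodies elided to those in the diff):

import org.apache.spark.annotation.DeveloperApi

@DeveloperApi
class MemoryUIInfo { /* ... as in the diff ... */ }

@DeveloperApi
class TransportMemSize { /* ... as in the diff ... */ }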

var executorAddress: String = _
var transportInfo: Option[transportMemSize] = None
var transportInfo: Option[TransportMemSize] = None

def this(execInfo: ExecutorInfo) = {
this()
executorAddress = execInfo.executorHost
}

def updateExecutorMetrics(execMetrics: ExecutorMetrics): Unit = {
if (execMetrics.transportMetrics.isDefined) {
execMetrics.transportMetrics.map { transPortMetrics =>
transportInfo = transportInfo match {
case Some(transportMemSize) => transportInfo
case _ => Some(new transportMemSize)
case _ => Some(new TransportMemSize)
}
executorAddress = execMetrics.hostname
if (execMetrics.transportMetrics.isDefined) {
transportInfo.get.updateTransport(execMetrics.transportMetrics.get)
}
transportInfo.get.updateTransport(transPortMetrics)
}
}
}

class transportMemSize {
var onheapSize: Long = _
var directheapSize: Long = _
var peakOnheapSizeTime: MemTime = new MemTime()
var peakDirectheapSizeTime: MemTime = new MemTime()
class TransportMemSize {
var onHeapSize: Long = _
var directSize: Long = _
var peakOnHeapSizeTime: MemTime = new MemTime()
var peakDirectSizeTime: MemTime = new MemTime()

def updateTransport(transportMetrics: TransportMetrics): Unit = {
val updatedOnheapSize = transportMetrics.clientOnheapSize +
transportMetrics.serverOnheapSize
val updatedDirectheapSize = transportMetrics.clientDirectheapSize +
transportMetrics.serverDirectheapSize
val updatedOnHeapSize = transportMetrics.onHeapSize
val updatedDirectSize = transportMetrics.directSize
val updateTime: Long = transportMetrics.timeStamp
onheapSize = updatedOnheapSize
directheapSize = updatedDirectheapSize
if (updatedOnheapSize >= peakOnheapSizeTime.memorySize) {
peakOnheapSizeTime = MemTime(updatedOnheapSize, updateTime)
onHeapSize = updatedOnHeapSize
directSize = updatedDirectSize
if (updatedOnHeapSize >= peakOnHeapSizeTime.memorySize) {
peakOnHeapSizeTime = MemTime(updatedOnHeapSize, updateTime)
}
if (updatedDirectheapSize >= peakDirectheapSizeTime.memorySize) {
peakDirectheapSizeTime = MemTime(updatedDirectheapSize, updateTime)
if (updatedDirectSize >= peakDirectSizeTime.memorySize) {
peakDirectSizeTime = MemTime(updatedDirectSize, updateTime)
}
}
}
20 changes: 10 additions & 10 deletions core/src/main/scala/org/apache/spark/ui/memory/MemoryTable.scala
@@ -34,10 +34,10 @@ private[ui] class MemTableBase(
protected def columns: Seq[Node] = {
<th>Executor ID</th>
<th>Address</th>
<th>Net Memory (on-heap)</th>
<th>Net Memory (direct-heap)</th>
<th>Peak Net Memory (on-heap) / Happen Time</th>
<th>Peak Net Read (direct-heap) / Happen Time</th>
<th>Network Memory (on-heap)</th>
<th>Network Memory (direct-heap)</th>
<th>Peak Network Memory (on-heap) / Happen Time</th>
<th>Peak Network Read (direct-heap) / Happen Time</th>
}

def toNodeSeq: Seq[Node] = {
@@ -68,20 +68,20 @@ private[ui] class MemTableBase(
</td>
{if (info._2.transportInfo.isDefined) {
<td>
{Utils.bytesToString(info._2.transportInfo.get.onheapSize)}
{Utils.bytesToString(info._2.transportInfo.get.onHeapSize)}
</td>
<td>
{Utils.bytesToString(info._2.transportInfo.get.directheapSize)}
{Utils.bytesToString(info._2.transportInfo.get.directSize)}
</td>
<td>
{Utils.bytesToString(info._2.transportInfo.get.peakOnheapSizeTime.memorySize)}
{Utils.bytesToString(info._2.transportInfo.get.peakOnHeapSizeTime.memorySize)}
/
{UIUtils.formatDate(info._2.transportInfo.get.peakOnheapSizeTime.timeStamp)}
{UIUtils.formatDate(info._2.transportInfo.get.peakOnHeapSizeTime.timeStamp)}
</td>
<td>
{Utils.bytesToString(info._2.transportInfo.get.peakDirectheapSizeTime.memorySize)}
{Utils.bytesToString(info._2.transportInfo.get.peakDirectSizeTime.memorySize)}
/
{UIUtils.formatDate(info._2.transportInfo.get.peakDirectheapSizeTime.timeStamp)}
{UIUtils.formatDate(info._2.transportInfo.get.peakDirectSizeTime.timeStamp)}
</td>
} else {
<td>N/A</td>