Commits (27)
3f8321a
Integration of ProcessTreeMetrics with PR 21221
Jul 26, 2018
cd16a75
Changing the position of ptree and also make the computation configur…
Aug 7, 2018
94c2b04
Seperate metrics for jvm, python and others and update the tests
Aug 8, 2018
062f5d7
Update JsonProtocolSuite
Sep 25, 2018
245221d
[SPARK-24958] Add executors' process tree total memory information to…
Oct 2, 2018
c72be03
Adressing most of Imran's comments
Oct 3, 2018
8f3c938
Fixing the scala style and some minor comments
Oct 3, 2018
f2dca27
Removing types from the definitions where ever possible
Oct 4, 2018
a9f924c
Using Utils methods when possible or use ProcessBuilder
Oct 5, 2018
a11e3a2
make use of Utils.trywithresources
Oct 5, 2018
34ad625
Changing ExecutorMericType and ExecutorMetrics to use a map instead o…
Oct 9, 2018
415f976
Changing ExecutorMetric to use array instead of a map
Oct 10, 2018
067b81d
A small cosmetic change
Oct 10, 2018
18ee4ad
Merge branch 'master' of https://github.com/apache/spark into ptreeme…
Oct 17, 2018
7f7ed2b
Applying latest review commments. Using Arrays instead of Map for ret…
Oct 23, 2018
f3867ff
Merge branch 'master' of https://github.com/apache/spark into ptreeme…
Nov 5, 2018
0f8f3e2
Fix an issue with jsonProtoclSuite
Nov 5, 2018
ea08c61
Fix scalastyle issue
Nov 5, 2018
8f20857
Applying latest review comments
Nov 14, 2018
6e65360
Using the companion object and other stuff
Nov 27, 2018
4659f4a
Update the use of process builder and applying other review comments
Nov 28, 2018
ef4be38
Small style fixes based on reviews
Nov 30, 2018
805741c
Applying review comments, mostly style related
Nov 30, 2018
4c1f073
emove the unnecessary trywithresources
Nov 30, 2018
0a7402e
Applying the comment about error handling and some more style fixes
Dec 4, 2018
3d65b35
Removing a return
Dec 6, 2018
6eab315
Reordering of info in a test resource file to avoid confusion
Dec 6, 2018
272 changes: 272 additions & 0 deletions core/src/main/scala/org/apache/spark/executor/ProcfsBasedSystems.scala
@@ -0,0 +1,272 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.executor

import java.io._
import java.nio.charset.Charset
import java.nio.file.{Files, Paths}
import java.util.Locale

import scala.collection.mutable
import scala.collection.mutable.ArrayBuffer

import org.apache.spark.{SparkEnv, SparkException}
import org.apache.spark.internal.{config, Logging}
import org.apache.spark.util.Utils

private[spark] case class ProcfsBasedSystemsMetrics(
jvmVmemTotal: Long,
jvmRSSTotal: Long,
pythonVmemTotal: Long,
pythonRSSTotal: Long,
otherVmemTotal: Long,
otherRSSTotal: Long)

// Some of the ideas here are taken from the ProcfsBasedProcessTree class in hadoop
// project.
private[spark] class ProcfsBasedSystems(val procfsDir: String = "/proc/") extends Logging {
val procfsStatFile = "stat"
val testing = sys.env.contains("SPARK_TESTING") || sys.props.contains("spark.testing")
var pageSize = computePageSize()
var isAvailable: Boolean = isProcfsAvailable
private val pid = computePid()
Contributor:
pageSize is only a var for testing -- instead just optionally pass it in to the constructor

also I think all of these can be private.

Contributor Author:
I think I can't call computePageSize() in the constructor signature to compute the default value. Another option is to check for testing inside computePageSize and, if we are testing, assign it a value provided in the constructor (defaulting to 4096).

Contributor:
You can't put it as a default value, but if you make it a static method, then you can provide an overloaded method which uses it; see squito@cf00835.

But I think your other proposal is even better: if it's testing, just give it a fixed value (no need to even make it an argument to the constructor at all).
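For illustration, a minimal standalone sketch of the "fixed value when testing" idea (the class name, the 4096 default, and the use of scala.sys.process are assumptions, not the PR's code):

import scala.sys.process._

// Illustrative only: page size becomes a val, with a hard-coded value under testing.
class ProcfsBasedSystemsSketch(testing: Boolean) {
  private val pageSize: Long =
    if (testing) {
      4096L // fixed, test-only value; avoids the external call in tests
    } else {
      // "getconf PAGESIZE" prints the system page size in bytes
      Seq("getconf", "PAGESIZE").!!.trim.toLong
    }
}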

private val ptree = mutable.Map[ Int, Set[Int]]()

var allMetrics: ProcfsBasedSystemsMetrics = ProcfsBasedSystemsMetrics(0, 0, 0, 0, 0, 0)
private var latestJVMVmemTotal = 0L
private var latestJVMRSSTotal = 0L
private var latestPythonVmemTotal = 0L
private var latestPythonRSSTotal = 0L
private var latestOtherVmemTotal = 0L
private var latestOtherRSSTotal = 0L

computeProcessTree()

private def isProcfsAvailable: Boolean = {
if (testing) {
return true
}
try {
if (!Files.exists(Paths.get(procfsDir))) {
return false
}
}
catch {
case f: FileNotFoundException => return false
}
val shouldLogStageExecutorMetrics =
SparkEnv.get.conf.get(config.EVENT_LOG_STAGE_EXECUTOR_METRICS)
val shouldLogStageExecutorProcessTreeMetrics =
SparkEnv.get.conf.get(config.EVENT_LOG_PROCESS_TREE_METRICS)
shouldLogStageExecutorProcessTreeMetrics && shouldLogStageExecutorMetrics
}

private def computePid(): Int = {
if (!isAvailable || testing) {
return -1;
}
try {
// This can be simplified in java9:
// https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessHandle.html
val cmd = Array("bash", "-c", "echo $PPID")
val length = 10
val out2 = Utils.executeAndGetOutput(cmd)
Contributor:
can be out instead of out2

val pid = Integer.parseInt(out2.split("\n")(0))
return pid;
}
catch {
case e: SparkException => logDebug("IO Exception when trying to compute process tree." +
Contributor:
Why only SparkException, not any Exception? Also, the message shouldn't say "IO Exception".

It should probably be logWarning.

Contributor Author:
Let me double check. I thought there was an earlier comment saying I should only catch SparkException, but you are right, it doesn't make sense. Probably a mistake on my side; I was only thinking about IOException here.

Contributor Author:
Oh, it seems there wasn't a mistake here and I just forgot the reason. I caught SparkException since executeAndGetOutput may throw such an exception. I will remove the "IO Exception" wording.

Contributor:
Well, executeAndGetOutput might throw a SparkException ... but are you sure nothing else will get thrown? E.g. what if you get some weird output and then the Integer.parseInt fails? Is there some reason you wouldn't want the same error handling for any exception here?

Contributor Author:
At first I was catching all throwables. Then I thought that could be dangerous, and there was also a review comment about that. So I'm not sure what the correct way of handling this is: is it better to only handle exceptions we know can be thrown, or to catch all throwables?

Contributor:
There's a distinction between Throwable and Exception -- Throwable includes Errors, which are fatal to the JVM and which you probably can't do anything about.

In general it's a good question whether you should catch specific exceptions or everything. Here, you're calling an external program, and I don't feel super confident that we know how it always behaves, so I think we should be a little extra cautious. An unhandled exception here would lead to not sending any heartbeats, which would be really bad. Except for JVM errors, I think we just want to turn off this particular metric and keep going.

Contributor:
Found the old comment from @mccheah:

Catching Throwable is generally scary, can this mask out of memory and errors of that sort? Can we scope down the exception type to handle here?

I think this (partially) agrees with what I said above: we don't want to catch Throwable because that can mask other problems where the JVM is hosed. But I still think Exception is the right thing to catch. Sound ok @mccheah?

If you really do want more specific exceptions, we should look through this more carefully to come up with a more exhaustive list; e.g. I certainly don't want to fail the heartbeater because we don't get an int out of the external call for some reason.

" As a result reporting of ProcessTree metrics is stopped", e)
isAvailable = false
return -1
}
}
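To make the error-handling discussion above concrete, here is a hedged, standalone sketch (it uses scala.sys.process instead of Spark's Utils, and the function name is invented): catching NonFatal leaves JVM Errors alone, while any other failure simply disables the metric instead of killing the heartbeater.

import scala.sys.process._
import scala.util.control.NonFatal

// Illustrative: return the parent pid, or -1 if anything non-fatal goes wrong.
def computeParentPid(): Int = {
  try {
    Seq("bash", "-c", "echo $PPID").!!.split("\n")(0).trim.toInt
  } catch {
    case NonFatal(e) =>
      // command failure or unparsable output just turns the metric off
      -1
  }
}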

private def computePageSize(): Long = {
if (testing) {
return 0;
}
val cmd = Array("getconf", "PAGESIZE")
val out2 = Utils.executeAndGetOutput(cmd)
return Integer.parseInt(out2.split("\n")(0))
}

private def computeProcessTree(): Unit = {
if (!isAvailable || testing) {
return
}
val queue = mutable.Queue.empty[Int]
queue += pid
while( !queue.isEmpty ) {
val p = queue.dequeue()
val c = getChildPids(p)
if(!c.isEmpty) {
queue ++= c
ptree += (p -> c.toSet)
}
else {
ptree += (p -> Set[Int]())
}
}
}
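The loop above is a plain breadth-first walk over the process tree. A self-contained sketch of the same traversal (the visited set is an extra safety guard, not in the PR's code, and the child lookup is passed in as a function):

import scala.collection.mutable

def processTreePids(root: Int, children: Int => Set[Int]): Set[Int] = {
  val seen = mutable.Set[Int]()
  val queue = mutable.Queue(root)
  while (queue.nonEmpty) {
    val p = queue.dequeue()
    if (seen.add(p)) { // skip pids we have already visited
      queue ++= children(p)
    }
  }
  seen.toSet
}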

private def getChildPids(pid: Int): ArrayBuffer[Int] = {
try {
val cmd = Array("pgrep", "-P", pid.toString)
val builder = new ProcessBuilder("pgrep", "-P", pid.toString)
val process = builder.start()
val output = new StringBuilder()
val threadName = "read stdout for " + "pgrep"
def appendToOutput(s: String): Unit = output.append(s).append("\n")
val stdoutThread = Utils.processStreamByLine(threadName,
process.getInputStream, appendToOutput)
val exitCode = process.waitFor()
stdoutThread.join()
// pgrep will have an exit code of 1 if there are more than one child process
// and it will have an exit code of 2 if there is no child process
if (exitCode != 0 && exitCode > 2) {
logError(s"Process $cmd exited with code $exitCode: $output")
throw new SparkException(s"Process $cmd exited with code $exitCode")
}
val childPids = output.toString.split("\n")
val childPidsInInt = mutable.ArrayBuffer.empty[Int]
for (p <- childPids) {
if (p != "") {
logDebug("Found a child pid: " + p)
childPidsInInt += Integer.parseInt(p)
}
}
childPidsInInt
} catch {
case e: IOException => logDebug("IO Exception when trying to compute process tree." +
" As a result reporting of ProcessTree metrics is stopped", e)
isAvailable = false
return mutable.ArrayBuffer.empty[Int]
}
}
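For reference, a standalone equivalent of the child-pid lookup (a sketch using scala.sys.process rather than Spark's Utils; it treats any nonzero exit from pgrep, including "no match", as an empty result):

import scala.sys.process._

def childPids(pid: Int): Seq[Int] = {
  val out = new StringBuilder
  // ProcessLogger captures the command's output line by line
  val exit = Seq("pgrep", "-P", pid.toString) ! ProcessLogger(line => out.append(line).append("\n"))
  if (exit == 0) out.toString.split("\n").filter(_.nonEmpty).map(_.trim.toInt).toSeq
  else Seq.empty
}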

def computeProcessInfo(pid: Int): Unit = {
/*
* Hadoop's ProcfsBasedProcessTree class uses regex and pattern matching to retrieve the memory
* info. I tried that but found it incorrect during tests, so I used plain string parsing
* instead. The computation of RSS and Vmem is based on proc(5):
* http://man7.org/linux/man-pages/man5/proc.5.html
*/
try {
val pidDir = new File(procfsDir, pid.toString)
Utils.tryWithResource( new InputStreamReader(
new FileInputStream(
new File(pidDir, procfsStatFile)), Charset.forName("UTF-8"))) { fReader =>
Utils.tryWithResource( new BufferedReader(fReader)) { in =>
val procInfo = in.readLine
val procInfoSplit = procInfo.split(" ")
if (procInfoSplit != null) {
val vmem = procInfoSplit(22).toLong
val rssPages = procInfoSplit(23).toLong
if (procInfoSplit(1).toLowerCase(Locale.US).contains("java")) {
Contributor:
Could this just be vmem and rssPages, rather than splitting into JVM, Python, and other? Can you explain more about how the separate values would be used?

Contributor Author:
This is separated since knowing the main actors, like the JVM, separately can have some value for the user. We just consider the JVM (the pure Scala case) and Python (the PySpark case). Other processes can be added per interest in the future, but for now we count everything else under the "Other" category.

Contributor:
@edwinalu It would be nice to have a breakdown of the total memory being consumed. It's easier to tune the parameters knowing what is consuming all the memory. For example, if your container died OOMing, it helps to know whether it was because of Python or the JVM. Also, R fits in the other category, so it makes sense to have all 3 of them as of now.

Contributor:
We don't have much pyspark ourselves, but yes, it seems useful to have the breakdown, and it's easy to sum the values for the total.

latestJVMVmemTotal += vmem
latestJVMRSSTotal += rssPages
}
else if (procInfoSplit(1).toLowerCase(Locale.US).contains("python")) {
latestPythonVmemTotal += vmem
latestPythonRSSTotal += rssPages
}
else {
latestOtherVmemTotal += vmem
latestOtherRSSTotal += rssPages
}
}
}
}
} catch {
case f: FileNotFoundException => logDebug("There was a problem with reading" +
" the stat file of the process", f)
}
}
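For reference, proc(5) lays out /proc/[pid]/stat as a single space-separated line: after splitting, index 1 is the command name in parentheses, index 22 is vsize in bytes, and index 23 is rss in pages (0-based, matching procInfoSplit above). A minimal standalone reader, assuming a Linux procfs and, like the code above, a command name without spaces:

import scala.io.Source

// Returns (comm, vsize in bytes, rss in pages); rss still has to be multiplied
// by the page size to get bytes, as the surrounding class does with pageSize.
def readStat(pid: Int): (String, Long, Long) = {
  val src = Source.fromFile(s"/proc/$pid/stat")
  try {
    val fields = src.getLines().next().split(" ")
    (fields(1), fields(22).toLong, fields(23).toLong)
  } finally {
    src.close()
  }
}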

def updateAllMetrics(): Unit = {
allMetrics = computeAllMetrics
}

private def computeAllMetrics(): ProcfsBasedSystemsMetrics = {
if (!isAvailable) {
return ProcfsBasedSystemsMetrics(-1, -1, -1, -1, -1, -1)
}
computeProcessTree
val pids = ptree.keySet
latestJVMRSSTotal = 0
latestJVMVmemTotal = 0
latestPythonRSSTotal = 0
latestPythonVmemTotal = 0
latestOtherRSSTotal = 0
latestOtherVmemTotal = 0
for (p <- pids) {
computeProcessInfo(p)
Contributor:
The state used here is a little trickier than it needs to be.

computeProcessTree is updating a member variable, even though it's only used locally -- it would be easier to follow if it instead just returned the process tree, and then you passed it around. Also, I don't think you actually care about the tree, just the set of pids?

Similarly for allMetrics: it doesn't really need to be a member variable, since its use is entirely contained within this function, you could just pass it around.

val pids = discoverPids()
val allMetrics = ...
for (p <- pids) {
  allMetrics = updateMetricsForProcess(allMetrics, p)
}

Contributor Author:
The tree was there in case we want to do some other things with it, but I guess we can add a tree structure when we actually need it. Right now, as you mentioned, we don't need it, so I will change it.
The allMetrics was there for testing, but I can change the test anyway.

}
ProcfsBasedSystemsMetrics(
getJVMVirtualMemInfo,
getJVMRSSInfo,
getPythonVirtualMemInfo,
getPythonRSSInfo,
getOtherVirtualMemInfo,
getOtherRSSInfo)

}
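A hypothetical sketch of the refactor suggested in the thread above (the names TreeMetrics, discoverPids, and addProcessInfo are stand-ins, not the PR's API): the pid set is computed locally and the metrics value is threaded through as an immutable accumulator instead of living in member variables.

case class TreeMetrics(vmem: Long, rssPages: Long)

def discoverPids(): Set[Int] = Set(1234, 5678) // placeholder for the pgrep-based walk

def addProcessInfo(m: TreeMetrics, pid: Int): TreeMetrics =
  m.copy(vmem = m.vmem + 1, rssPages = m.rssPages + 1) // placeholder for /proc parsing

def computeAllMetrics(): TreeMetrics =
  discoverPids().foldLeft(TreeMetrics(0L, 0L))(addProcessInfo)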

def getOtherRSSInfo(): Long = {
if (!isAvailable) {
return -1
}
latestOtherRSSTotal*pageSize
}

def getOtherVirtualMemInfo(): Long = {
if (!isAvailable) {
return -1
}
latestOtherVmemTotal
}

def getJVMRSSInfo(): Long = {
if (!isAvailable) {
return -1
}
latestJVMRSSTotal*pageSize
}

def getJVMVirtualMemInfo(): Long = {
if (!isAvailable) {
return -1
}
latestJVMVmemTotal
}

def getPythonRSSInfo(): Long = {
if (!isAvailable) {
return -1
}
latestPythonRSSTotal*pageSize
}

def getPythonVirtualMemInfo(): Long = {
if (!isAvailable) {
return -1
}
latestPythonVmemTotal
}
}
@@ -74,6 +74,11 @@ package object config {
.booleanConf
.createWithDefault(false)

private[spark] val EVENT_LOG_PROCESS_TREE_METRICS =
ConfigBuilder("spark.eventLog.logStageExecutorProcessTreeMetrics.enabled")
.booleanConf
.createWithDefault(false)

private[spark] val EVENT_LOG_OVERWRITE =
ConfigBuilder("spark.eventLog.overwrite").booleanConf.createWithDefault(false)

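For reference (not part of the diff), enabling the new flag programmatically could look like the following; per isProcfsAvailable above, the stage executor metrics flag (EVENT_LOG_STAGE_EXECUTOR_METRICS) has to be enabled as well before process-tree values are reported.

import org.apache.spark.SparkConf

// Both flags default to false; this only turns on the process-tree side.
val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.logStageExecutorProcessTreeMetrics.enabled", "true")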
@@ -19,6 +19,7 @@ package org.apache.spark.metrics
import java.lang.management.{BufferPoolMXBean, ManagementFactory}
import javax.management.ObjectName

import org.apache.spark.executor.ProcfsBasedSystems
import org.apache.spark.memory.MemoryManager

/**
@@ -59,6 +60,43 @@ case object JVMOffHeapMemory extends ExecutorMetricType {
}
}

case object ProcessTreeJVMRSSMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.updateAllMetrics()
Contributor:
I still don't like how this is actually updating all the other metrics -- it makes the code more confusing to follow, as you have to know there is a relationship between all of the metrics. I understand that you want to do the work once and grab all the metrics, but we should find a better way to do that. I see how the current API makes that hard to do.

I have two suggestions:

  1. Change the API to have getMetricValue() also take System.currentTimeMillis(). Then every single metric type would compare the passed-in time against the last time the metrics were computed -- if it was stale, it would recompute everything and update the time.

  2. Change the API to allow one "metric getter" object to supply multiple metrics. You'd have a simple implementation which would just provide one metric, and you'd change all the existing metrics to extend that simple case, but your implementation would provide multiple metrics in one go.

I actually think (2) is better (that is what I did in the memory-monitor plugin), though it's a bit more work. You might need to play with this a bit.

thoughts @edwinalu @mccheah ?

Contributor Author:
Will wait for other people to comment.

Contributor:
I think it makes sense for the metrics provider API to return a Map[String, Long] for a set of "named" metrics - we've talked before about attaching a schema to the metrics bundle passed around by this API. So, similar to option 2 above.

Contributor:
We should make ProcessTreeMemory extend ExecutorMetricType, and the individual metrics can be returned from it. The current code also assumes the metrics are calculated only in ProcessTreeJVMRSSMemory, with subsequent calls reusing them; we shouldn't depend on the order here.

Contributor:
Sorry for the delayed response. Thanks for adding the total memory metrics -- these will be very useful. Agreed that doing the work once is better, but that having ProcessTreeJVMRSSMemory.getMetricValue() update all the metrics is confusing, especially if a user at some point wants to call getMetricValue() for one of the other metrics, and not ProcessTreeJVMRSSMemory.

@squito 's #1 is probably the easiest to make the change for with the existing code. However, #2 with @mccheah's suggestion to return Map sounds best/cleanest as an API, with @dhruve 's suggestion to consolidate into ProcessTreeMemory -- I prefer this approach as well.

Right now the call for getMetricValue is done in Heartbeater.getCurrentMetrics(), and it's mapping ExecutorMetricType.values to the array of actual values. Translating the returned maps to an array (with index mapping to name rather than ExecutorMetricType) will involve some more code. In retrospect, getting the current metrics is probably better done by ExecutorMetrics itself, rather than having Heartbeater exposed to the implementation details -- would you be able to move the logic there?

Contributor Author:
I changed this to use a map instead of an array of metrics to implement #2 of what @squito suggested

ExecutorMetricType.pTreeInfo.allMetrics.jvmRSSTotal
}
}
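As a rough illustration of suggestion (2) above (the trait and object names are invented for the sketch, not Spark's API): one provider walks the process tree once and exposes every value as a named metric, so no single metric has to refresh the others as a side effect.

trait MetricGroup {
  def names: Seq[String]
  def values(): Map[String, Long]
}

object ProcessTreeMetricsSketch extends MetricGroup {
  override val names = Seq(
    "ProcessTreeJVMVMemory", "ProcessTreeJVMRSSMemory",
    "ProcessTreePythonVMemory", "ProcessTreePythonRSSMemory",
    "ProcessTreeOtherVMemory", "ProcessTreeOtherRSSMemory")

  override def values(): Map[String, Long] = {
    val snapshot = computeSnapshot() // one pass fills every value at once
    names.zip(snapshot).toMap
  }

  private def computeSnapshot(): Seq[Long] = Seq.fill(names.size)(0L) // placeholder
}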

case object ProcessTreeJVMVMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.allMetrics.jvmVmemTotal
}
}

case object ProcessTreePythonRSSMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.allMetrics.pythonRSSTotal
}
}

case object ProcessTreePythonVMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.allMetrics.pythonVmemTotal
}
}

case object ProcessTreeOtherRSSMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.allMetrics.otherRSSTotal
}
}

case object ProcessTreeOtherVMemory extends ExecutorMetricType {
override private[spark] def getMetricValue(memoryManager: MemoryManager): Long = {
ExecutorMetricType.pTreeInfo.allMetrics.otherVmemTotal
}
}

case object OnHeapExecutionMemory extends MemoryManagerExecutorMetricType(
_.onHeapExecutionMemoryUsed)

@@ -84,6 +122,8 @@ case object MappedPoolMemory extends MBeanExecutorMetricType(
"java.nio:type=BufferPool,name=mapped")

private[spark] object ExecutorMetricType {
final val pTreeInfo = new ProcfsBasedSystems
Contributor:
Can ProcfsBasedSystems just be an object in and of itself?

Contributor Author:
I first considered giving this class a companion object, but it didn't work, mostly related to how ExecutorMetricType is defined. I don't remember the exact details.

Contributor:
This is a weird place to keep this, unless there is some really good reason for it. I think it should go inside ProcessTreeMetrics.

Also, I'm not sure what the problem was with making it an object. Seems to work for me. It's a bit different now as there are arguments to the constructor for testing -- but you could still have an object which extends the class:

private[spark] object ProcfsBasedSystems extends ProcfsBasedSystems("/proc/")

though that doesn't really seem to have much value.

Contributor Author:
What are the benefits of the companion object vs this current approach? I can revert to the companion object model and do testing again to see what was the problem before, but just wanted to understand the benefits of it before investing time.

Contributor Author:
Today I spent some time on the companion object solution, figured out the problem I was facing before, and was able to fix it. I will send the updated PR sometime tonight or tomorrow. Thanks.

Contributor:
Normally having an object helps make it clear that there is a singleton; it's easier to share properly and easier to figure out how to get a handle on it. Given that we'll have a class anyway, I don't think there is a ton of value in having a companion object.

I do still think the instance you create here should go somewhere else.


// List of all executor metric types
val values = IndexedSeq(
JVMHeapMemory,
@@ -95,7 +135,13 @@ private[spark] object ExecutorMetricType {
OnHeapUnifiedMemory,
OffHeapUnifiedMemory,
DirectPoolMemory,
MappedPoolMemory
MappedPoolMemory,
ProcessTreeJVMVMemory,
ProcessTreeJVMRSSMemory,
ProcessTreePythonVMemory,
ProcessTreePythonRSSMemory,
ProcessTreeOtherVMemory,
ProcessTreeOtherRSSMemory
)

// Map of executor metric type to its index in values.
@@ -1,4 +1,19 @@
[ {
"id" : "application_1538416563558_0014",
"name" : "PythonBisectingKMeansExample",
"attempts" : [ {
"startTime" : "2018-10-02T00:42:39.580GMT",
"endTime" : "2018-10-02T00:44:02.338GMT",
"lastUpdated" : "",
"duration" : 82758,
"sparkUser" : "root",
"completed" : true,
"appSparkVersion" : "2.5.0-SNAPSHOT",
"lastUpdatedEpoch" : 0,
"startTimeEpoch" : 1538440959580,
"endTimeEpoch" : 1538441042338
} ]
}, {
"id" : "application_1506645932520_24630151",
"name" : "Spark shell",
"attempts" : [ {