38 commits
2568a6c
Rename JobProgressPage to AllStagesPage:
JoshRosen Oct 29, 2014
4487dcb
[SPARK-4145] Web UI job pages
JoshRosen Oct 30, 2014
bfce2b9
Address review comments, except for progress bar.
JoshRosen Nov 6, 2014
4b206fb
Merge remote-tracking branch 'origin/master' into job-page
JoshRosen Nov 6, 2014
45343b8
More comments
JoshRosen Nov 6, 2014
a475ea1
Add progress bars to jobs page.
JoshRosen Nov 11, 2014
56701fa
Move last stage name / description logic out of markup.
JoshRosen Nov 11, 2014
1cf4987
Fix broken kill links; add Selenium test to avoid future regressions.
JoshRosen Nov 11, 2014
85e9c85
Extract startTime into separate variable.
JoshRosen Nov 11, 2014
4d58e55
Change label to "Tasks (for all stages)"
JoshRosen Nov 11, 2014
4846ce4
Hide "(Job Group") if no jobs were submitted in job groups.
JoshRosen Nov 12, 2014
b7bf30e
Add stages progress bar; fix bug where active stages show as completed.
JoshRosen Nov 12, 2014
8a2351b
Add help tooltip to Spark Jobs page.
JoshRosen Nov 12, 2014
3d0a007
Merge remote-tracking branch 'origin/master' into job-page
JoshRosen Nov 17, 2014
1145c60
Display text instead of progress bar for stages.
JoshRosen Nov 17, 2014
d62ea7b
Add failing Selenium test for stage overcounting issue.
JoshRosen Nov 17, 2014
79793cd
Track indices of completed stage to avoid overcounting when failures …
JoshRosen Nov 18, 2014
5884f91
Add StageInfos to SparkListenerJobStart event.
JoshRosen Nov 18, 2014
8ab6c28
Compute numTasks from job start stage infos.
JoshRosen Nov 18, 2014
8955f4c
Display information for pending stages on jobs page.
JoshRosen Nov 19, 2014
e2f2c43
Fix sorting of stages in job details page.
JoshRosen Nov 19, 2014
171b53c
Move `startTime` to the start of SparkContext.
JoshRosen Nov 19, 2014
f2a15da
Add status field to job details page.
JoshRosen Nov 19, 2014
5eb39dc
Add pending stages table to job page.
JoshRosen Nov 19, 2014
d69c775
Fix table sorting on all jobs page.
JoshRosen Nov 19, 2014
7d10b97
Merge remote-tracking branch 'apache/master' into job-page
JoshRosen Nov 20, 2014
67080ba
Ensure that "phantom stages" don't cause memory leaks.
JoshRosen Nov 20, 2014
eebdc2c
Don’t display pending stages for completed jobs.
JoshRosen Nov 20, 2014
034aa8d
Use `.max()` to find result stage for job.
JoshRosen Nov 20, 2014
0b77e3e
More bug fixes for phantom stages.
JoshRosen Nov 20, 2014
1f45d44
Incorporate a bunch of minor review feedback.
JoshRosen Nov 20, 2014
61c265a
Add “skipped stages” table; only display non-empty tables.
JoshRosen Nov 20, 2014
2bbf41a
Update job progress bar to reflect skipped tasks/stages.
JoshRosen Nov 20, 2014
6f17f3f
Only store StageInfos in SparkListenerJobStart event.
JoshRosen Nov 21, 2014
ff804cd
Don't write "Stage Ids" field in JobStartEvent JSON.
JoshRosen Nov 21, 2014
b89c258
More JSON protocol backwards-compatibility fixes.
JoshRosen Nov 21, 2014
f00c851
Fix JsonProtocol compatibility
JoshRosen Nov 21, 2014
eb05e90
Disable kill button in completed stages tables.
JoshRosen Nov 24, 2014
13 changes: 8 additions & 5 deletions core/src/main/scala/org/apache/spark/ui/SparkUI.scala
@@ -23,7 +23,7 @@ import org.apache.spark.storage.StorageStatusListener
import org.apache.spark.ui.JettyUtils._
import org.apache.spark.ui.env.{EnvironmentListener, EnvironmentTab}
import org.apache.spark.ui.exec.{ExecutorsListener, ExecutorsTab}
import org.apache.spark.ui.jobs.{JobProgressListener, JobProgressTab}
import org.apache.spark.ui.jobs.{JobsTab, JobProgressListener, StagesTab}
import org.apache.spark.ui.storage.{StorageListener, StorageTab}

/**
@@ -45,20 +45,23 @@ private[spark] class SparkUI private (

/** Initialize all components of the server. */
def initialize() {
val jobProgressTab = new JobProgressTab(this)
attachTab(jobProgressTab)
attachTab(new JobsTab(this))
val stagesTab = new StagesTab(this)
attachTab(stagesTab)
attachTab(new StorageTab(this))
attachTab(new EnvironmentTab(this))
attachTab(new ExecutorsTab(this))
attachHandler(createStaticHandler(SparkUI.STATIC_RESOURCE_DIR, "/static"))
attachHandler(createRedirectHandler("/", "/stages", basePath = basePath))
attachHandler(createRedirectHandler("/", "/jobs", basePath = basePath))
attachHandler(
createRedirectHandler("/stages/stage/kill", "/stages", jobProgressTab.handleKillRequest))
createRedirectHandler("/stages/stage/kill", "/stages", stagesTab.handleKillRequest))
// If the UI is live, then serve
sc.foreach { _.env.metricsSystem.getServletHandlers.foreach(attachHandler) }
}
initialize()

val killEnabled = sc.map(_.conf.getBoolean("spark.ui.killEnabled", true)).getOrElse(false)
Contributor:
I think you need to move this to before the initialize() call -- as is, kill is always disabled, because killEnabled is false when initialize is called, so StagesTab will be initialized with killEnabled set to false (I noticed this when I was playing around with this, because the kill button was nowhere to be found).
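
A minimal sketch of the suggested reordering (illustrative; the surrounding class is abbreviated):

    private[spark] class SparkUI private (...) {
      // Evaluate killEnabled before initialize() runs, so StagesTab sees the
      // configured value instead of the default false.
      val killEnabled = sc.map(_.conf.getBoolean("spark.ui.killEnabled", true)).getOrElse(false)

      def initialize() {
        val stagesTab = new StagesTab(this)
        attachTab(stagesTab)
        // ... remaining tabs and handlers as in this diff ...
      }
      initialize()
    }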


def getAppName = appName

/** Set the app name for this UI. */
13 changes: 13 additions & 0 deletions core/src/main/scala/org/apache/spark/ui/UIUtils.scala
@@ -283,4 +283,17 @@ private[spark] object UIUtils extends Logging {
</tbody>
</table>
}

def makeProgressBar(started: Int, completed: Int, failed: Int, total: Int): Seq[Node] = {
val completeWidth = "width: %s%%".format((completed.toDouble/total)*100)
val startWidth = "width: %s%%".format((started.toDouble/total)*100)

<div class="progress">
<span style="text-align:center; position:absolute; width:100%; left:0;">
{completed}/{total} { if (failed > 0) s"($failed failed)" else "" }
</span>
<div class="bar bar-completed" style={completeWidth}></div>
<div class="bar bar-running" style={startWidth}></div>
</div>
}
}
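
For reference, a minimal usage sketch of the new helper (the argument values here are illustrative):

    import scala.xml.Node
    import org.apache.spark.ui.UIUtils

    // A job with 20 tasks total: 8 completed, 3 running, 1 failed.
    // The overlaid label reads "8/20 (1 failed)"; the stacked bars are
    // sized to the completed/total and started/total percentages.
    val bar: Seq[Node] = UIUtils.makeProgressBar(started = 3, completed = 8, failed = 1, total = 20)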
144 changes: 144 additions & 0 deletions core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala
@@ -0,0 +1,144 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.ui.jobs

import scala.xml.{Node, NodeSeq}

import javax.servlet.http.HttpServletRequest

import org.apache.spark.ui.{WebUIPage, UIUtils}
import org.apache.spark.ui.jobs.UIData.JobUIData


/** Page showing list of all ongoing and recently finished jobs */
private[ui] class AllJobsPage(parent: JobsTab) extends WebUIPage("") {
private val sc = parent.sc
Contributor:
Will parent.sc be set when this is created? If so, I think it would be better to have an Option[Long] that you create here describing the start time, and then use to optionally show the elapsed time later (just because it makes it more clear how this is used).

Contributor Author:
That's a good idea, and good dependency injection in general.
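
A hedged sketch of that injection (a hypothetical refactor; SparkContext.startTime itself is real, per commit 171b53c):

    private[ui] class AllJobsPage(parent: JobsTab) extends WebUIPage("") {
      // Capture only what the page needs: an optional start timestamp,
      // rather than a reference to the whole SparkContext.
      private val startTime: Option[Long] = parent.sc.map(_.startTime)

      // The summary can then show elapsed time only when the UI is live:
      //   startTime.map(t => UIUtils.formatDuration(now - t))
    }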

private val listener = parent.listener

private def getSubmissionTime(job: JobUIData): Option[Long] = {
for (
firstStageId <- job.stageIds.headOption;
firstStageInfo <- listener.stageIdToInfo.get(firstStageId);
submitTime <- firstStageInfo.submissionTime
) yield submitTime
}

private def jobsTable(jobs: Seq[JobUIData]): Seq[Node] = {
val columns: Seq[Node] = {
<th>Job Id (Job Group)</th>
Contributor:
This notion of "Job Group" always confused me -- what's the difference between this and Job Id? Do we need to mention both names in the UI? Do you use "job group" here because that's what we call the id you can pass in to kill the job?

Contributor:
I'm not sure how Job Group is being used in all cases now, or whether it even works particularly well at all, but the concept of a Job Group could be useful when the "job" from the user's point of view is actually composed of multiple Spark jobs. That can be the case when you want to do something like sorting an RDD without falling into the nastiness of embedded, eager RDD actions to generate a RangePartitioner. Instead, you'd queue up multiple jobs in a Job Group with later jobs depending on the results of earlier jobs in the group. If the user decides that the "job" should be killed, then all of the jobs in the Job Group should be canceled.

Contributor:
OH I see -- I didn't realize that when there's a job group, it will be shown in parentheses ("1 (4)", for example). I thought the "(job group)" was indicating that the job Id was the same as the job group Id. This all makes sense now -- thanks!

Contributor Author:
Job groups are perhaps an obscure feature, but they can be useful in environments where multiple users interact with a shared SparkContext (such as a job server).

Do you think there's a clearer way to label this that's less confusing? Job groups might be an uncommon feature that most users aren't aware of / don't need.

Contributor:
One idea I had was to just label the column as Job ID if none of the jobs are part of a job group. Is that easy to do?

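For context, job groups come from the public SparkContext API; a minimal sketch of how several Spark jobs end up in one cancellable group (setJobGroup and cancelJobGroup are the real API; the group id and jobs are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("job-group-demo").setMaster("local[2]"))

    // Jobs submitted from this thread now share a group id, which is what
    // the "(Job Group)" suffix in the Job Id column displays.
    sc.setJobGroup("nightly-etl", "Nightly ETL pipeline", interruptOnCancel = true)
    sc.parallelize(1 to 100).count()  // job 1 in the group
    sc.parallelize(1 to 100).sum()    // job 2 in the group

    // Cancelling the group cancels every job submitted under it.
    sc.cancelJobGroup("nightly-etl")
    sc.stop()
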

<th>Description</th>
<th>Submitted</th>
<th>Duration</th>
<th>Tasks: Succeeded/Total</th>
}

def makeRow(job: JobUIData): Seq[Node] = {
val lastStageInfo = job.stageIds.lastOption.flatMap(listener.stageIdToInfo.get)
val lastStageData = lastStageInfo.flatMap { s =>
listener.stageIdToData.get((s.stageId, s.attemptId))
}
Contributor:
Can you move the code to compute the last stage's name up here? It was hard for me to figure out why you were grabbing the last stage here.

val duration: Option[Long] = {
job.startTime.map { start =>
val end = job.endTime.getOrElse(System.currentTimeMillis())
end - start
}
}
val formattedDuration = duration.map(d => UIUtils.formatDuration(d)).getOrElse("Unknown")
val formattedSubmissionTime = job.startTime.map(UIUtils.formatDate).getOrElse("Unknown")
Contributor:
I realize we use "Unknown" in a few places. Can you declare a

val UNKNOWN: String = "Unknown"

in UIUtils? Then you can just import UIUtils._

Contributor Author:
In other places, we have "Unknown stage name", etc; I'm not sure that this is a huge win (it would be beneficial if we decided to localize, though, but we're not doing that here).

val detailUrl =
"%s/jobs/job?id=%s".format(UIUtils.prependBaseUri(parent.basePath), job.jobId)

<tr>
<td sorttable_customkey={job.jobId.toString}>
{job.jobId} {job.jobGroup.map(id => s"($id)").getOrElse("")}
</td>
<td>
<div><em>{lastStageData.flatMap(_.description).getOrElse("")}</em></div>
<a href={detailUrl}>{lastStageInfo.map(_.name).getOrElse("(Unknown Stage Name)")}</a>
</td>
<td sorttable_customkey={job.startTime.getOrElse(-1).toString}>
{formattedSubmissionTime}
</td>
<td sorttable_customkey={duration.getOrElse(-1).toString}>{formattedDuration}</td>
<td class="progress-cell">
{UIUtils.makeProgressBar(job.numActiveTasks, job.numCompletedTasks,
job.numFailedTasks, job.numTasks)}
</td>
</tr>
}

<table class="table table-bordered table-striped table-condensed sortable">
<thead>{columns}</thead>
<tbody>
{jobs.map(makeRow)}
</tbody>
</table>
}

def render(request: HttpServletRequest): Seq[Node] = {
listener.synchronized {
val activeJobs = listener.activeJobs.values.toSeq
val completedJobs = listener.completedJobs.reverse.toSeq
val failedJobs = listener.failedJobs.reverse.toSeq
val now = System.currentTimeMillis

val activeJobsTable =
jobsTable(activeJobs.sortBy(getSubmissionTime(_).getOrElse(-1L)).reverse)
val completedJobsTable =
jobsTable(completedJobs.sortBy(getSubmissionTime(_).getOrElse(-1L)).reverse)
val failedJobsTable =
jobsTable(failedJobs.sortBy(getSubmissionTime(_).getOrElse(-1L)).reverse)

val summary: NodeSeq =
<div>
<ul class="unstyled">
{if (sc.isDefined) {
// Total duration is not meaningful unless the UI is live
<li>
<strong>Total Duration: </strong>
{UIUtils.formatDuration(now - sc.get.startTime)}
</li>
}}
<li>
<strong>Scheduling Mode: </strong>
{listener.schedulingMode.map(_.toString).getOrElse("Unknown")}
</li>
<li>
<a href="#active"><strong>Active Jobs:</strong></a>
{activeJobs.size}
</li>
<li>
<a href="#completed"><strong>Completed Jobs:</strong></a>
{completedJobs.size}
</li>
<li>
<a href="#failed"><strong>Failed Jobs:</strong></a>
{failedJobs.size}
</li>
</ul>
</div>

val content = summary ++
<h4 id="active">Active Jobs ({activeJobs.size})</h4> ++ activeJobsTable ++
<h4 id="completed">Completed Jobs ({completedJobs.size})</h4> ++ completedJobsTable ++
<h4 id ="failed">Failed Jobs ({failedJobs.size})</h4> ++ failedJobsTable

UIUtils.headerSparkPage("Spark Jobs", content, parent)
}
}
}
core/src/main/scala/org/apache/spark/ui/jobs/AllStagesPage.scala (renamed from JobProgressPage.scala)
@@ -25,7 +25,7 @@ import org.apache.spark.scheduler.Schedulable
import org.apache.spark.ui.{WebUIPage, UIUtils}

/** Page showing list of all ongoing and recently finished stages and pools */
private[ui] class JobProgressPage(parent: JobProgressTab) extends WebUIPage("") {
private[ui] class AllStagesPage(parent: StagesTab) extends WebUIPage("") {
Contributor:
This naming change is great

private val sc = parent.sc
private val listener = parent.listener
private def isFairScheduler = parent.isFairScheduler
@@ -41,11 +41,13 @@ private[ui] class JobProgressPage(parent: JobProgressTab) extends WebUIPage("")

val activeStagesTable =
new StageTableBase(activeStages.sortBy(_.submissionTime).reverse,
parent, parent.killEnabled)
parent.basePath, parent.listener, parent.killEnabled)
val completedStagesTable =
new StageTableBase(completedStages.sortBy(_.submissionTime).reverse, parent)
new StageTableBase(completedStages.sortBy(_.submissionTime).reverse, parent.basePath,
parent.listener, parent.killEnabled)
Contributor:
Shouldn't killEnabled always be false here (so don't pass it in, in which case it defaults to false)? Since killing a completed / failed stage has no meaning.

Contributor Author:
This is a good catch. It turns out that this was a pretty serious mistake, since the parent.killEnabled that I'm passing here was actually being used for the isFairScheduler boolean. I've explicitly named my boolean parameters to avoid this sort of mistake. I've also added a Selenium test to catch these sorts of regressions where spark.ui.killEnabled isn't respected.

Contributor:
Oh awesome!!
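
A minimal sketch of the named-argument style the author describes (the constructor signature here is abbreviated and partly hypothetical):

    import org.apache.spark.scheduler.StageInfo
    import org.apache.spark.ui.jobs.JobProgressListener

    // Two adjacent Boolean parameters invite silent transposition bugs.
    class StageTableBase(
        stages: Seq[StageInfo],
        basePath: String,
        listener: JobProgressListener,
        isFairScheduler: Boolean = false,
        killEnabled: Boolean = false)

    // Naming the Booleans at the call site makes a swap impossible:
    new StageTableBase(
      completedStages.sortBy(_.submissionTime).reverse,
      parent.basePath,
      parent.listener,
      isFairScheduler = parent.isFairScheduler,
      killEnabled = false)  // killing a completed stage has no meaning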

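A hedged sketch of what such a regression test could look like (assuming ScalaTest's Selenium DSL with HtmlUnit and the default UI port 4040; the class, test name, and assertion are illustrative, not the PR's actual test):

    import org.openqa.selenium.WebDriver
    import org.openqa.selenium.htmlunit.HtmlUnitDriver
    import org.scalatest.{FunSuite, Matchers}
    import org.scalatest.selenium.WebBrowser
    import org.apache.spark.{SparkConf, SparkContext}

    class KillEnabledSuite extends FunSuite with Matchers with WebBrowser {
      implicit val webDriver: WebDriver = new HtmlUnitDriver

      test("kill links are absent when spark.ui.killEnabled is false") {
        val conf = new SparkConf().setMaster("local").setAppName("kill-test")
          .set("spark.ui.killEnabled", "false")
        val sc = new SparkContext(conf)
        try {
          sc.parallelize(1 to 10).count()  // run a job so a stage exists
          go to "http://localhost:4040/stages"
          pageSource should not include ("kill")  // no kill link rendered
        } finally {
          sc.stop()
        }
      }
    }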

val failedStagesTable =
new FailedStageTable(failedStages.sortBy(_.submissionTime).reverse, parent)
new FailedStageTable(failedStages.sortBy(_.submissionTime).reverse, parent.basePath,
parent.listener, parent.killEnabled)

// For now, pool information is only accessible in live UIs
val pools = sc.map(_.getAllPools).getOrElse(Seq.empty[Schedulable])
@@ -93,7 +95,7 @@ private[ui] class JobProgressPage(parent: JobProgressTab) extends WebUIPage("")
<h4 id ="failed">Failed Stages ({numFailedStages})</h4> ++
failedStagesTable.toNodeSeq

UIUtils.headerSparkPage("Spark Stages", content, parent)
UIUtils.headerSparkPage("Spark Stages (for all jobs)", content, parent)
}
}
}
core/src/main/scala/org/apache/spark/ui/jobs/ExecutorTable.scala
@@ -25,7 +25,7 @@ import org.apache.spark.ui.jobs.UIData.StageUIData
import org.apache.spark.util.Utils

/** Stage summary grouped by executors. */
private[ui] class ExecutorTable(stageId: Int, stageAttemptId: Int, parent: JobProgressTab) {
private[ui] class ExecutorTable(stageId: Int, stageAttemptId: Int, parent: StagesTab) {
private val listener = parent.listener

def toNodeSeq: Seq[Node] = {
102 changes: 102 additions & 0 deletions core/src/main/scala/org/apache/spark/ui/jobs/JobPage.scala
@@ -0,0 +1,102 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.ui.jobs

import scala.xml.{NodeSeq, Node}

import javax.servlet.http.HttpServletRequest

import org.apache.spark.scheduler.StageInfo
Contributor:
nit: fix import ordering

import org.apache.spark.ui.{UIUtils, WebUIPage}

/** Page showing statistics and stage list for a given job */
private[ui] class JobPage(parent: JobsTab) extends WebUIPage("job") {
private val listener = parent.listener
private val sc = parent.sc
Contributor:
Is this ever used?

Contributor Author:
Yes; it's used to compute the "Total duration" field if the UI is live.

Contributor:
It looks like that's the case for AllJobsPage.scala, but I couldn't find any uses of this, even via IntelliJ.


def render(request: HttpServletRequest): Seq[Node] = {
listener.synchronized {
val jobId = request.getParameter("id").toInt
val jobDataOption = listener.jobIdToData.get(jobId)
if (jobDataOption.isEmpty) {
val content =
<div>
<p>No information to display for job {jobId}</p>
</div>
return UIUtils.headerSparkPage(
s"Details for Job $jobId", content, parent)
}
val jobData = jobDataOption.get
val stages = jobData.stageIds.map { stageId =>
// This could be empty if the JobProgressListener hasn't received information about the
// stage or if the stage information has been garbage collected
listener.stageIdToInfo.getOrElse(stageId,
Contributor:
When will this be empty? Is this if a job has started but some of the stages have not yet started? A comment here would be helpful.

Contributor Author:
It could be empty if we haven't received information about the stage or if the stage information has been garbage collected. Right now, stages and jobs are GC'd separately in the web UI's JobProgressListener. We could revisit this, though.

new StageInfo(stageId, 0, "Unknown", 0, Seq.empty, "Unknown"))
}

val (activeStages, completedOrFailedStages) = stages.partition(_.completionTime.isDefined)
val (failedStages, completedStages) =
completedOrFailedStages.partition(_.failureReason.isDefined)

val activeStagesTable =
new StageTableBase(activeStages.sortBy(_.submissionTime).reverse,
parent.basePath, parent.listener, parent.killEnabled)
val completedStagesTable =
new StageTableBase(completedStages.sortBy(_.submissionTime).reverse, parent.basePath,
parent.listener, parent.killEnabled)
val failedStagesTable =
new FailedStageTable(failedStages.sortBy(_.submissionTime).reverse, parent.basePath,
parent.listener, parent.killEnabled)

val summary: NodeSeq =
<div>
<ul class="unstyled">
{
if (jobData.jobGroup.isDefined) {
<li>
<strong>Job Group:</strong>
{jobData.jobGroup.get}
</li>
} else Seq.empty
}
<li>
<a href="#active"><strong>Active Stages:</strong></a>
{activeStages.size}
</li>
<li>
<a href="#completed"><strong>Completed Stages:</strong></a>
{completedStages.size}
</li>
<li>
<a href="#failed"><strong>Failed Stages:</strong></a>
{failedStages.size}
</li>
</ul>
</div>

val content = summary ++
<h4 id="active">Active Stages ({activeStages.size})</h4> ++
activeStagesTable.toNodeSeq ++
<h4 id="completed">Completed Stages ({completedStages.size})</h4> ++
completedStagesTable.toNodeSeq ++
<h4 id ="failed">Failed Stages ({failedStages.size})</h4> ++
failedStagesTable.toNodeSeq
UIUtils.headerSparkPage(s"Details for Job $jobId", content, parent)
}
}
}