Closed

Commits (33)
1957e82  [SPARK-25299] Introduce the new shuffle writer API (#5) (#520) (mccheah, Mar 20, 2019)
857552a  [SPARK-25299] Local shuffle implementation of the shuffle writer API … (mccheah, Apr 3, 2019)
d13037f  [SPARK-25299] Make UnsafeShuffleWriter use the new API (#536) (mccheah, Apr 17, 2019)
8f5fb60  [SPARK-25299] Use the shuffle writer plugin for the SortShuffleWriter… (mccheah, Apr 15, 2019)
e17c7ea  [SPARK-25299] Shuffle locations api (#517) (mccheah, Apr 19, 2019)
3f0c131  [SPARK-25299] Move shuffle writers back to being given specific parti… (mccheah, Apr 19, 2019)
f982df7  [SPARK-25299] Don't set map status twice in bypass merge sort shuffle… (mccheah, Apr 19, 2019)
6891197  [SPARK-25299] Propose a new NIO transfer API for partition writing. (… (mccheah, May 24, 2019)
7b44ed2  Remove shuffle location support. (mccheah, Jun 27, 2019)
df75f1f  Remove changes to UnsafeShuffleWriter (mccheah, Jun 27, 2019)
a8558af  Revert changes for SortShuffleWriter (mccheah, Jun 27, 2019)
806d7bb  Revert a bunch of other stuff (mccheah, Jun 27, 2019)
3167030  More reverts (mccheah, Jun 27, 2019)
70f59db  Set task contexts in failing test (mccheah, Jun 28, 2019)
3083d86  Fix style (mccheah, Jun 28, 2019)
4c3d692  Check for null on the block manager as well. (mccheah, Jun 28, 2019)
2421c92  Add task attempt id in the APIs (mccheah, Jul 1, 2019)
982f207  Address comments (mccheah, Jul 8, 2019)
594d1e2  Fix style (mccheah, Jul 8, 2019)
66aae91  Address comments. (mccheah, Jul 12, 2019)
8b432f9  Merge remote-tracking branch 'origin/master' into spark-shuffle-write… (mccheah, Jul 17, 2019)
9f597dd  Address comments. (mccheah, Jul 18, 2019)
86c1829  Restructure test (mccheah, Jul 18, 2019)
a7885ae  Add ShuffleWriteMetricsReporter to the createMapOutputWriter API. (mccheah, Jul 19, 2019)
9893c6c  Add more documentation (mccheah, Jul 19, 2019)
cd897e7  Refactor reading records from file in test (mccheah, Jul 19, 2019)
9f17b9b  Address comments (mccheah, Jul 24, 2019)
e53a001  Code tags (mccheah, Jul 24, 2019)
56fa450  Add some docs (mccheah, Jul 24, 2019)
b8b7b8d  Change mockito format in BypassMergeSortShuffleWriterSuite (mccheah, Jul 25, 2019)
2d29404  Remove metrics from the API. (mccheah, Jul 29, 2019)
06ea01a  Address more comments. (mccheah, Jul 29, 2019)
7dceec9  Args per line (mccheah, Jul 30, 2019)
Commit 70f59db45a4518159c613d3106533c4cc0ece05e: Set task contexts in failing test (mccheah, committed Jun 28, 2019)
core/src/test/scala/org/apache/spark/ShuffleSuite.scala (12 additions & 4 deletions)
@@ -383,13 +383,19 @@ abstract class ShuffleSuite extends SparkFunSuite with Matchers with LocalSparkContext
     // simultaneously, and everything is still OK

     def writeAndClose(
-        writer: ShuffleWriter[Int, Int])(
+        writer: ShuffleWriter[Int, Int],
+        taskContext: TaskContext)(
         iter: Iterator[(Int, Int)]): Option[MapStatus] = {
Contributor:
as you're touching this, can you fix the indentation of the parameters?
-      val files = writer.write(iter)
-      writer.stop(true)
+      TaskContext.setTaskContext(taskContext)
Contributor:
why is this needed? the context is already passed into manager.getWriter().

It's not necessarily bad to add this to the test; just trying to understand what has changed to require this.

Contributor (@ifilonenko), Jul 18, 2019:
Because there are TaskContext.get() calls in the implementation (createMapOutputWriter), we need to set the context in these tests to avoid an NPE when the context is not otherwise set.
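
To make the failure concrete, here is a minimal self-contained sketch (a hypothetical class, not the PR's actual plugin code) of the pattern being described; calling it on a thread where no task context has been set dereferences null:

    import org.apache.spark.TaskContext

    // Hypothetical sketch of an implementation that reads the thread-local
    // TaskContext rather than receiving one as a parameter.
    class ThreadLocalContextWriterSketch {
      def currentWriteTime(): Long = {
        // TaskContext.get() returns null on a thread with no task context set,
        // so this dereference throws the NPE the test would otherwise hit.
        TaskContext.get().taskMetrics().shuffleWriteMetrics.writeTime
      }
    }

Wrapping calls in TaskContext.setTaskContext(...) / TaskContext.unset(), as this commit does in the test, is what keeps that lookup non-null.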

Contributor:
ok, but is there any particular reason the implementations are doing that, rather than using the passed-in TaskContext?

in fact it's a little confusing, because it only pulls the shuffle write metrics, but those are passed directly to BypassMergeSortShuffleWriter, so it then looks like BypassMergeSortShuffleWriter is forgetting to pass those metrics on to the ShuffleMapOutputWriter. I know we use TaskContext a lot, but because it's global state it's harder to keep track of, and to know to add to tests (like here). If it were an argument to the methods / constructors it would be obvious (see the sketch below).

anyway, I know we already have to call TaskContext.set() in a ton of tests, as it is assumed to be there, so this isn't a hard requirement, but it seems like we could keep this a little cleaner.
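
A minimal sketch of that alternative, using a hypothetical writer class rather than the PR's actual code: the context is a constructor argument, so the dependency is visible in the signature and in every test that constructs the writer.

    import org.apache.spark.TaskContext

    // Hypothetical sketch: the TaskContext is passed in explicitly instead of
    // being looked up via the TaskContext.get() thread-local.
    class ExplicitContextWriterSketch(taskContext: TaskContext) {
      def currentWriteTime(): Long =
        taskContext.taskMetrics().shuffleWriteMetrics.writeTime
    }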

Contributor Author (mccheah):
Because the shuffle writer API is itself not given a TaskContext object. In other words, we didn't want to put TaskContext in any of the function signatures in the plugin tree.

Contributor:
any particular reason why not? like this, it's an implicit part of the API. (TaskContext is already a public class.)

is the only reason it's needed ShuffleWriteMetrics / ShuffleReadMetrics? then maybe those should become an explicit part of the API, otherwise the API is even more confusing -- implementors just have to know they should be grabbing the TaskContext and incrementing those metrics. (A sketch of that follows.)
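
One way to make the metrics dependency explicit, sketched with a stand-in reporter trait so the snippet stays self-contained; all names here are illustrative, not the actual API. (The commit list above shows the PR later experimenting with exactly this: adding ShuffleWriteMetricsReporter to createMapOutputWriter, then removing metrics from the API.)

    // Simplified stand-in for a write-metrics reporter (hypothetical).
    trait WriteMetricsReporterSketch {
      def incBytesWritten(v: Long): Unit
      def incWriteTime(v: Long): Unit
    }

    // Hypothetical signature: the reporter is an explicit argument, so
    // implementors increment it directly instead of having to know to pull
    // metrics out of the thread-local TaskContext. (Return type elided.)
    trait ShuffleExecutorComponentsSketch {
      def createMapOutputWriter(
          shuffleId: Int,
          mapId: Int,
          numPartitions: Int,
          writeMetrics: WriteMetricsReporterSketch): Unit
    }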

Contributor:
I'd imagine that all Spark users, regardless of what shuffle implementation their admins have set up, would want tasks to report shuffle read & write stats.

Contributor Author (mccheah):
But will they want to use the task context's metrics system? I'd imagine they would want to use whatever custom metrics system is appropriate for their environment.

Contributor Author (mccheah):
Basically, I would prefer to minimize the API surface area as much as possible. If we can consider TaskContext.get() an implementation detail, and the TaskContext isn't absolutely necessary for the API, I'd rather not include it and only add it to the API in a follow-up. Especially since this discussion is primarily centered on tests, and we have seen there are other places in tests where TaskContext.set is used anyway, so we're not going against existing precedent.

Contributor:
though I made this comment on the test, the whole reason was just that it made me worry about the general API design.

Another shuffle implementation may have its own metrics system for monitoring that system -- but nonetheless, the actual end user of Spark is going to want to see shuffle metrics in the Spark UI. We don't have a way for an alternative shuffle implementation to plug its own metrics into the UI (nor do I think we want one). I guess the most important metrics, the number of records & bytes, are recorded outside of the plugin -- but the plugin should be updating the write-time metric regardless of what storage it's using, as in the sketch below.
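
For the write-time point specifically, the bookkeeping a plugin would need is small; a self-contained sketch with illustrative names only:

    // Hypothetical helper: time an arbitrary write and report the elapsed
    // nanoseconds, regardless of the backing storage.
    class TimedWriteSketch(reportWriteTimeNanos: Long => Unit) {
      def timed[T](doWrite: () => T): T = {
        val start = System.nanoTime()
        try doWrite()
        finally reportWriteTimeNanos(System.nanoTime() - start)
      }
    }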

Contributor Author (mccheah):
Why can't the higher-level writer update the write time?

Regardless, we can add the task context to the API, but we're pushing past the number of parameters I think we would like in createMapOutputWriter: 5 parameters is quite a large number. Maybe we could group some of the parameters into a structure, along the lines of the sketch below.
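
A sketch of that grouping idea, with hypothetical names rather than the PR's actual types: the scalar arguments collapse into one spec object, and new fields can later be added to the spec (with defaults) without touching the method signature.

    // Hypothetical parameter object for createMapOutputWriter.
    case class MapOutputWriterSpecSketch(
        shuffleId: Int,
        mapId: Int,
        taskAttemptId: Long,
        numPartitions: Int)

    trait GroupedArgsComponentsSketch {
      // One structured argument instead of several scalars.
      // (Return type elided for brevity.)
      def createMapOutputWriter(spec: MapOutputWriterSpecSketch): Unit
    }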

+      try {
+        val files = writer.write(iter)
+        writer.stop(true)
+      } finally {
+        TaskContext.unset()
+      }
     }
     val interleaver = new InterleaveIterators(
-      data1, writeAndClose(writer1), data2, writeAndClose(writer2))
+      data1, writeAndClose(writer1, context1), data2, writeAndClose(writer2, context2))
     val (mapOutput1, mapOutput2) = interleaver.run()

     // check that we can read the map output and it has the right data
@@ -405,8 +411,10 @@ abstract class ShuffleSuite extends SparkFunSuite with Matchers with LocalSparkContext

     val taskContext = new TaskContextImpl(
       1, 0, 0, 2L, 0, taskMemoryManager, new Properties, metricsSystem)
+    TaskContext.setTaskContext(taskContext)
     val metrics = taskContext.taskMetrics.createTempShuffleReadMetrics()
     val reader = manager.getReader[Int, Int](shuffleHandle, 0, 1, taskContext, metrics)
+    TaskContext.unset()
     val readData = reader.read().toIndexedSeq
     assert(readData === data1.toIndexedSeq || readData === data2.toIndexedSeq)