Commits (96)
81d52c5
WIP on UnsafeSorter
JoshRosen Apr 29, 2015
abf7bfe
Add basic test case.
JoshRosen Apr 29, 2015
57a4ea0
Make initialSize configurable in UnsafeSorter
JoshRosen Apr 30, 2015
e900152
Add test for empty iterator in UnsafeSorter
JoshRosen May 1, 2015
767d3ca
Fix invalid range in UnsafeSorter.
JoshRosen May 1, 2015
3db12de
Minor simplification and sanity checks in UnsafeSorter
JoshRosen May 1, 2015
4d2f5e1
WIP
JoshRosen May 1, 2015
8e3ec20
Begin code cleanup.
JoshRosen May 1, 2015
253f13e
More cleanup
JoshRosen May 1, 2015
9c6cf58
Refactor to use DiskBlockObjectWriter.
JoshRosen May 1, 2015
e267cee
Fix compilation of UnsafeSorterSuite
JoshRosen May 1, 2015
e2d96ca
Expand serializer API and use new function to help control when new UnsafeShuffle path is used.
JoshRosen May 1, 2015
d3cc310
Flag that SparkSqlSerializer2 supports relocation
JoshRosen May 1, 2015
87e721b
Renaming and comments
JoshRosen May 1, 2015
0748458
Port UnsafeShuffleWriter to Java.
JoshRosen May 2, 2015
026b497
Re-use a buffer in UnsafeShuffleWriter
JoshRosen May 2, 2015
1433b42
Store record length as int instead of long.
JoshRosen May 2, 2015
240864c
Remove PrefixComputer and require prefix to be specified as part of i…
JoshRosen May 2, 2015
bfc12d3
Add tests for serializer relocation property.
JoshRosen May 3, 2015
b8a09fe
Back out accidental log4j.properties change
JoshRosen May 3, 2015
c2fca17
Small refactoring of SerializerPropertiesSuite to enable test re-use:
JoshRosen May 3, 2015
f17fa8f
Add missing newline
JoshRosen May 3, 2015
8958584
Fix bug in calculating free space in current page.
JoshRosen May 3, 2015
595923a
Remove some unused variables.
JoshRosen May 3, 2015
5e100b2
Super-messy WIP on external sort
JoshRosen May 4, 2015
2776aca
First passing test for ExternalSorter.
JoshRosen May 4, 2015
f156a8f
Hacky metrics integration; refactor some interfaces.
JoshRosen May 4, 2015
3490512
Misc. cleanup
JoshRosen May 4, 2015
3aeaff7
More refactoring and cleanup; begin cleaning iterator interfaces
JoshRosen May 4, 2015
7ee918e
Re-order imports in tests
JoshRosen May 5, 2015
69232fd
Enable compressible address encoding for off-heap mode.
JoshRosen May 5, 2015
57f1ec0
WIP towards packed record pointers for use in optimized shuffle sort.
JoshRosen May 5, 2015
f480fb2
WIP in mega-refactoring towards shuffle-specific sort.
JoshRosen May 5, 2015
133c8c9
WIP towards testing UnsafeShuffleWriter.
JoshRosen May 5, 2015
4f70141
Fix merging; now passes UnsafeShuffleSuite tests.
JoshRosen May 5, 2015
aaea17b
Add comments to UnsafeShuffleSpillWriter.
JoshRosen May 6, 2015
b674412
Merge remote-tracking branch 'origin/master' into unsafe-sort
JoshRosen May 6, 2015
11feeb6
Update TODOs related to shuffle write metrics.
JoshRosen May 7, 2015
8a6fe52
Rename UnsafeShuffleSpillWriter to UnsafeShuffleExternalSorter
JoshRosen May 7, 2015
cfe0ec4
Address a number of minor review comments:
JoshRosen May 7, 2015
e67f1ea
Remove upper type bound in ShuffleWriter interface.
JoshRosen May 7, 2015
5e8cf75
More minor cleanup
JoshRosen May 7, 2015
1ce1300
More minor cleanup
JoshRosen May 7, 2015
b95e642
Refactor and document logic that decides when to spill.
JoshRosen May 7, 2015
9883e30
Merge remote-tracking branch 'origin/master' into unsafe-sort
JoshRosen May 8, 2015
722849b
Add workaround for transferTo() bug in merging code; refactor tests.
JoshRosen May 8, 2015
7cd013b
Begin refactoring to enable proper tests for spilling.
JoshRosen May 9, 2015
9b7ebed
More defensive programming RE: cleaning up spill files and memory aft…
JoshRosen May 9, 2015
e8718dd
Merge remote-tracking branch 'origin/master' into unsafe-sort
JoshRosen May 9, 2015
1929a74
Update to reflect upstream ShuffleBlockManager -> ShuffleBlockResolve…
JoshRosen May 9, 2015
01afc74
Actually read data in UnsafeShuffleWriterSuite
JoshRosen May 10, 2015
8f5061a
Strengthen assertion to check partitioning
JoshRosen May 10, 2015
67d25ba
Update Exchange operator's copying logic to account for new shuffle m…
JoshRosen May 10, 2015
fd4bb9e
Use own ByteBufferOutputStream rather than Kryo's
JoshRosen May 10, 2015
9d1ee7c
Fix MiMa excludes for ShuffleWriter change
JoshRosen May 10, 2015
fcd9a3c
Add notes + tests for maximum record / page sizes.
JoshRosen May 10, 2015
27b18b0
Test for inserting records AT the max record size.
JoshRosen May 10, 2015
4a01c45
Remove unnecessary log message
JoshRosen May 10, 2015
f780fb1
Add test demonstrating which compression codecs support concatenation.
JoshRosen May 11, 2015
b57c17f
Disable some overly-verbose logs that rendered DEBUG useless.
JoshRosen May 11, 2015
1ef56c7
Revise compression codec support in merger; test cross product of con…
JoshRosen May 11, 2015
b3b1924
Properly implement close() and flush() in DummySerializerInstance.
JoshRosen May 11, 2015
0d4d199
Bump up shuffle.memoryFraction to make tests pass.
JoshRosen May 11, 2015
ec6d626
Add notes on maximum # of supported shuffle partitions.
JoshRosen May 11, 2015
ae538dc
Document UnsafeShuffleManager.
JoshRosen May 11, 2015
ea4f85f
Roll back an unnecessary change in Spillable.
JoshRosen May 11, 2015
1e3ad52
Delete unused ByteBufferOutputStream class.
JoshRosen May 11, 2015
39434f9
Avoid integer multiplication overflow in getMemoryUsage (thanks FindB…
JoshRosen May 11, 2015
e1855e5
Fix a handful of misc. IntelliJ inspections
JoshRosen May 11, 2015
7c953f9
Add test that covers UnsafeShuffleSortDataFormat.swap().
JoshRosen May 11, 2015
8531286
Add tests that automatically trigger spills.
JoshRosen May 11, 2015
69d5899
Remove some unnecessary override vals
JoshRosen May 11, 2015
d4e6d89
Update to bit shifting constants
JoshRosen May 11, 2015
4f0b770
Attempt to implement proper shuffle write metrics.
JoshRosen May 12, 2015
e58a6b4
Add more tests for PackedRecordPointer encoding.
JoshRosen May 12, 2015
e995d1a
Introduce MAX_SHUFFLE_OUTPUT_PARTITIONS.
JoshRosen May 12, 2015
56781a1
Rename UnsafeShuffleSorter to UnsafeShuffleInMemorySorter
JoshRosen May 12, 2015
0ad34da
Fix off-by-one in nextInt() call
JoshRosen May 12, 2015
85da63f
Cleanup in UnsafeShuffleSorterIterator.
JoshRosen May 12, 2015
fdcac08
Guard against overflow when expanding sort buffer.
JoshRosen May 12, 2015
2d4e4f4
Address some minor comments in UnsafeShuffleExternalSorter.
JoshRosen May 12, 2015
57312c9
Clarify fileBufferSize units
JoshRosen May 12, 2015
6276168
Remove ability to disable spilling in UnsafeShuffleExternalSorter.
JoshRosen May 12, 2015
4a2c785
rename 'sort buffer' to 'pointer array'
JoshRosen May 12, 2015
e3b8855
Cleanup in UnsafeShuffleWriter
JoshRosen May 12, 2015
c2ce78e
Fix a missed usage of MAX_PARTITION_ID
JoshRosen May 12, 2015
d5779c6
Merge remote-tracking branch 'origin/master' into unsafe-sort
JoshRosen May 12, 2015
5e189c6
Track time spend closing / flushing files; split TimeTrackingOutputSt…
JoshRosen May 12, 2015
df07699
Attempt to clarify confusing metrics update code
JoshRosen May 12, 2015
de40b9d
More comments to try to explain metrics code
JoshRosen May 12, 2015
4023fa4
Add @Private annotation to some Java classes.
JoshRosen May 12, 2015
51812a7
Change shuffle manager sort name to tungsten-sort
JoshRosen May 13, 2015
52a9981
Fix some bugs in the address packing code.
JoshRosen May 13, 2015
d494ffe
Fix deserialization of JavaSerializer instances.
JoshRosen May 13, 2015
7610f2f
Add tests for proper cleanup of shuffle data.
JoshRosen May 13, 2015
ef0a86e
Fix scalastyle errors
JoshRosen May 13, 2015
Changes from 1 commit:

Expand serializer API and use new function to help control when new UnsafeShuffle path is used.

JoshRosen committed May 1, 2015
commit e2d96ca59b74c2aa004c471b651c7de2acaca51f
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala

@@ -125,6 +125,11 @@ class KryoSerializer(conf: SparkConf)
   override def newInstance(): SerializerInstance = {
     new KryoSerializerInstance(this)
   }
+
+  override def supportsRelocationOfSerializedObjects: Boolean = {
+    // TODO: we should have a citation / explanatory comment here clarifying _why_ this is the case
+    newInstance().asInstanceOf[KryoSerializerInstance].getAutoReset()
+  }
 }

 private[spark]
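The override above keys entirely off Kryo's auto-reset flag: when auto-reset is disabled, Kryo's reference resolver keeps state across top-level writes, so a later record can be encoded as a back-reference into an earlier one and its bytes are no longer self-contained. A minimal standalone sketch of that behavior (illustrative only, not part of this patch; it assumes Kryo is on the classpath):

import java.io.ByteArrayOutputStream

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.Output

object KryoAutoResetDemo {
  def main(args: Array[String]): Unit = {
    val kryo = new Kryo()
    kryo.setReferences(true)  // track repeated objects so they can be back-referenced
    kryo.setAutoReset(false)  // reference state now persists across top-level writes

    val bytes = new ByteArrayOutputStream()
    val output = new Output(bytes)
    val shared = new java.util.ArrayList[String]()
    shared.add("some shared value")

    kryo.writeObject(output, shared)
    output.flush()
    val firstRecordEnd = bytes.size()
    // With autoReset disabled, this second write may be encoded as a reference
    // back into the first record, so its bytes cannot be relocated on their own.
    kryo.writeObject(output, shared)
    output.close()
    println(s"first record: $firstRecordEnd bytes, " +
      s"second record: ${bytes.size() - firstRecordEnd} bytes")
  }
}

KryoSerializerInstance.getAutoReset() reads this flag back, which is why the override can only promise relocation support while nothing (such as a custom registrator) has called setAutoReset(false). Before this commit, ExternalSorter performed the same getAutoReset check inline; see the last diff in this commit.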
26 changes: 25 additions & 1 deletion core/src/main/scala/org/apache/spark/serializer/Serializer.scala
@@ -23,7 +23,7 @@ import java.nio.ByteBuffer
 import scala.reflect.ClassTag

 import org.apache.spark.{SparkConf, SparkEnv}
-import org.apache.spark.annotation.DeveloperApi
+import org.apache.spark.annotation.{Experimental, DeveloperApi}
 import org.apache.spark.util.{Utils, ByteBufferInputStream, NextIterator}

 /**

@@ -63,6 +63,30 @@ abstract class Serializer {

   /** Creates a new [[SerializerInstance]]. */
   def newInstance(): SerializerInstance
+
+  /**
+   * Returns true if this serializer supports relocation of its serialized objects and false
+   * otherwise. This should return true if and only if reordering the bytes of serialized objects
+   * in serialization stream output results in re-ordered input that can be read with the
+   * deserializer. For instance, the following should work if the serializer supports relocation:
+   *
+   *   serOut.open()
+   *   position = 0
+   *   serOut.write(obj1)
+   *   serOut.flush()
+   *   position = # of bytes written to stream so far
+   *   obj1Bytes = [bytes 0 through position of stream]
+   *   serOut.write(obj2)
+   *   serOut.flush()
+   *   position2 = # of bytes written to stream so far
+   *   obj2Bytes = bytes[position through position2 of stream]
+   *
+   *   serIn.open([obj2Bytes] concatenate [obj1Bytes]) should return (obj2, obj1)
+   *
+   * See SPARK-7311 for more discussion.
+   */
+  @Experimental
+  def supportsRelocationOfSerializedObjects: Boolean = false
 }
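The contract spelled out in this comment is mechanically checkable: serialize two objects, slice the output at the flush boundary, feed the slices back in swapped order, and expect the objects in swapped order. A sketch of such a check against the stream API above (the choice of JavaSerializer is illustrative; a serializer without relocation support is expected to fail or return garbage on the reordered read):

import java.io.{ByteArrayInputStream, ByteArrayOutputStream}

import org.apache.spark.SparkConf
import org.apache.spark.serializer.JavaSerializer

object RelocationPropertyCheck {
  def main(args: Array[String]): Unit = {
    val instance = new JavaSerializer(new SparkConf()).newInstance()
    val bytes = new ByteArrayOutputStream()
    val serOut = instance.serializeStream(bytes)

    serOut.writeObject("obj1")
    serOut.flush()
    val position = bytes.size()  // number of bytes written so far
    serOut.writeObject("obj2")
    serOut.close()

    val all = bytes.toByteArray
    val obj1Bytes = all.slice(0, position)
    val obj2Bytes = all.slice(position, all.length)

    // Read the two records back in swapped order; per the contract above, a
    // relocation-friendly serializer must yield (obj2, obj1).
    val serIn = instance.deserializeStream(
      new ByteArrayInputStream(obj2Bytes ++ obj1Bytes))
    try {
      println(serIn.readObject[String]())
      println(serIn.readObject[String]())
    } catch {
      case e: Exception =>
        println(s"reordered read failed (no relocation support): $e")
    } finally {
      serIn.close()
    }
  }
}

Commit bfc12d3 later in this PR ("Add tests for serializer relocation property.") adds a real test along these lines.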
@@ -22,7 +22,7 @@ import java.util

 import com.esotericsoftware.kryo.io.ByteBufferOutputStream

-import org.apache.spark.{ShuffleDependency, SparkConf, SparkEnv, TaskContext}
+import org.apache.spark._
 import org.apache.spark.executor.ShuffleWriteMetrics
 import org.apache.spark.scheduler.MapStatus
 import org.apache.spark.serializer.Serializer

@@ -34,17 +34,31 @@ import org.apache.spark.unsafe.memory.{MemoryBlock, TaskMemoryManager}
 import org.apache.spark.unsafe.sort.UnsafeSorter
 import org.apache.spark.unsafe.sort.UnsafeSorter.{KeyPointerAndPrefix, PrefixComparator, PrefixComputer, RecordComparator}

-private[spark] class UnsafeShuffleHandle[K, V](
+private class UnsafeShuffleHandle[K, V](
     shuffleId: Int,
     override val numMaps: Int,
     override val dependency: ShuffleDependency[K, V, V])
   extends BaseShuffleHandle(shuffleId, numMaps, dependency) {
   require(UnsafeShuffleManager.canUseUnsafeShuffle(dependency))
 }

-private[spark] object UnsafeShuffleManager {
+private[spark] object UnsafeShuffleManager extends Logging {
   def canUseUnsafeShuffle[K, V, C](dependency: ShuffleDependency[K, V, C]): Boolean = {
-    dependency.aggregator.isEmpty && dependency.keyOrdering.isEmpty
+    val shufId = dependency.shuffleId
+    val serializer = Serializer.getSerializer(dependency.serializer)
+    if (!serializer.supportsRelocationOfSerializedObjects) {
+      log.debug(s"Can't use UnsafeShuffle for shuffle $shufId because the serializer, " +
+        s"${serializer.getClass.getName}, does not support object relocation")
+      false
+    } else if (dependency.aggregator.isDefined) {
+      log.debug(s"Can't use UnsafeShuffle for shuffle $shufId because an aggregator is defined")
+      false
+    } else if (dependency.keyOrdering.isDefined) {
+      log.debug(s"Can't use UnsafeShuffle for shuffle $shufId because a key ordering is defined")
+      false
+    } else {
+      log.debug(s"Can use UnsafeShuffle for shuffle $shufId")
+      true
+    }
   }
 }

Review comments on this hunk:

A project member (on the numMaps and dependency parameters): override val is redundant.

A project member (on the log.debug calls): I propose using log.warn in canUseUnsafeShuffle. That would make it much easier for people to compare the performance of UnsafeShuffleManager and SortShuffleManager; they usually need to know whether the new UnsafeShuffleHandle actually takes effect.

JoshRosen (author): I considered this, but I worry that it would result in extremely chatty logs, because many operations won't be able to use this new shuffle yet. For example, it would trigger a warning whenever reduceByKey is used.

This is a tricky issue, especially as the number of special-case shuffle optimizations grows. It will be very easy for users to slightly change their programs in ways that trigger slower code paths (e.g. by switching from LZF to LZ4 compression). Conversely, this also creates the potential for small changes to yield huge secondary performance benefits in non-obvious ways: if a user were to switch from LZ4 to LZF, the current code would hit a more efficient shuffle merge path and might exhibit huge speed-ups, but the user might misattribute this to LZF being faster or offering better compression in general, whereas it is really the optimized merge path, activated by LZF's concatenability, that is responsible for the speed-up. This is a general issue that is probably worth exploring as part of a broader discussion of how to expose internal knowledge of performance optimizations to end users.
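Taken together, the three branches above reduce to a single predicate over the dependency: the serializer must support relocation, and there must be neither an aggregator nor a key ordering. A hypothetical distillation (type and field names are illustrative, not part of the patch):

// Hypothetical distillation of canUseUnsafeShuffle; field names are illustrative.
final case class ShuffleTraits(
    serializerSupportsRelocation: Boolean,
    hasAggregator: Boolean,
    hasKeyOrdering: Boolean)

object UnsafeShuffleGate {
  def canUse(t: ShuffleTraits): Boolean =
    t.serializerSupportsRelocation && !t.hasAggregator && !t.hasKeyOrdering

  def main(args: Array[String]): Unit = {
    // A reduceByKey shuffle defines a map-side aggregator, so it falls back:
    assert(!canUse(ShuffleTraits(serializerSupportsRelocation = true,
      hasAggregator = true, hasKeyOrdering = false)))
    // A plain repartition with a relocation-friendly serializer qualifies:
    assert(canUse(ShuffleTraits(serializerSupportsRelocation = true,
      hasAggregator = false, hasKeyOrdering = false)))
  }
}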

@@ -73,15 +87,13 @@ private object PartitionerPrefixComparator extends PrefixComparator {
   }
 }

-private[spark] class UnsafeShuffleWriter[K, V](
+private class UnsafeShuffleWriter[K, V](
     shuffleBlockManager: IndexShuffleBlockManager,
     handle: UnsafeShuffleHandle[K, V],
     mapId: Int,
     context: TaskContext)
   extends ShuffleWriter[K, V] {

-  println("Construcing a new UnsafeShuffleWriter")
-
   private[this] val memoryManager: TaskMemoryManager = context.taskMemoryManager()

   private[this] val dep = handle.dependency
@@ -158,7 +170,6 @@ private[spark] class UnsafeShuffleWriter[K, V](
       memoryManager.encodePageNumberAndOffset(currentPage, currentPagePosition)
     PlatformDependent.UNSAFE.putLong(currentPage.getBaseObject, currentPagePosition, partitionId)
     currentPagePosition += 8
-    println("The stored record length is " + serializedRecordSize)
     PlatformDependent.UNSAFE.putLong(
       currentPage.getBaseObject, currentPagePosition, serializedRecordSize)
     currentPagePosition += 8

@@ -169,7 +180,6 @@ private[spark] class UnsafeShuffleWriter[K, V](
       currentPagePosition,
       serializedRecordSize)
     currentPagePosition += serializedRecordSize
-    println("After writing record, current page position is " + currentPagePosition)
     sorter.insertRecord(newRecordAddress)

     // Reset for writing the next record
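For reference, the sequence of writes above gives each record a fixed 16-byte header inside the page: the partition id and the record length, each stored as an 8-byte long, followed by the serialized bytes. A small sketch of the space accounting (helper and object names are hypothetical):

object RecordLayout {
  // Hypothetical helper mirroring the writes above. Each record occupies:
  //   [ partitionId: 8 bytes | record length: 8 bytes | serialized record bytes ]
  def requiredSpaceInPage(serializedRecordSize: Long): Long =
    8L + 8L + serializedRecordSize

  def main(args: Array[String]): Unit = {
    assert(requiredSpaceInPage(100) == 116)  // 16 bytes of header plus payload
  }
}

Commit 8958584 later in this PR fixes a bug in exactly this kind of free-space calculation, and commit 1433b42 shrinks the length field from a long to an int.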
@@ -195,8 +205,10 @@ private[spark] class UnsafeShuffleWriter[K, V](
     // TODO: don't close and re-open file handles so often; this could be inefficient

     def closePartition(): Unit = {
-      writer.commitAndClose()
-      partitionLengths(currentPartition) = writer.fileSegment().length
+      if (writer != null) {
+        writer.commitAndClose()
+        partitionLengths(currentPartition) = writer.fileSegment().length
+      }
     }

     def switchToPartition(newPartition: Int): Unit = {
@@ -219,8 +231,6 @@ private[spark] class UnsafeShuffleWriter[K, V](
       val baseObject = memoryManager.getPage(keyPointerAndPrefix.recordPointer)
       val baseOffset = memoryManager.getOffsetInPage(keyPointerAndPrefix.recordPointer)
       val recordLength: Int = PlatformDependent.UNSAFE.getLong(baseObject, baseOffset + 8).toInt
-      println("Base offset is " + baseOffset)
-      println("Record length is " + recordLength)
       // TODO: need to have a way to figure out whether a serializer supports relocation of
       // serialized objects or not. Sandy also ran into this in his patch (see
       // https://github.com/apache/spark/pull/4450). If we're using Java serialization, we might
@@ -244,12 +254,8 @@ private[spark] class UnsafeShuffleWriter[K, V](

   /** Write a sequence of records to this task's output */
   override def write(records: Iterator[_ <: Product2[K, V]]): Unit = {
-    println("Opened writer!")
-
     val sortedIterator = sortRecords(records)
     val partitionLengths = writeSortedRecordsToFile(sortedIterator)
-
-    println("Partition lengths are " + partitionLengths.toSeq)
     shuffleBlockManager.writeIndexFile(dep.shuffleId, mapId, partitionLengths)
     mapStatus = MapStatus(blockManager.shuffleServerId, partitionLengths)
   }
@@ -264,7 +270,6 @@ private[spark] class UnsafeShuffleWriter[K, V](

   /** Close this writer, passing along whether the map completed */
   override def stop(success: Boolean): Option[MapStatus] = {
-    println("Stopping unsafeshufflewriter")
     try {
       if (stopping) {
         None
@@ -300,7 +305,6 @@ private[spark] class UnsafeShuffleManager(conf: SparkConf) extends ShuffleManager
       numMaps: Int,
       dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {
     if (UnsafeShuffleManager.canUseUnsafeShuffle(dependency)) {
-      println("Opening unsafeShuffleWriter")
       new UnsafeShuffleHandle[K, V](
         shuffleId, numMaps, dependency.asInstanceOf[ShuffleDependency[K, V, V]])
     } else {
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala

@@ -131,8 +131,7 @@ private[spark] class ExternalSorter[K, V, C](
   private val kvChunkSize = conf.getInt("spark.shuffle.sort.kvChunkSize", 1 << 22) // 4 MB
   private val useSerializedPairBuffer =
     !ordering.isDefined && conf.getBoolean("spark.shuffle.sort.serializeMapOutputs", true) &&
-    ser.isInstanceOf[KryoSerializer] &&
-    serInstance.asInstanceOf[KryoSerializerInstance].getAutoReset
+    ser.supportsRelocationOfSerializedObjects

   // Data structures to store in-memory objects before we spill. Depending on whether we have an
   // Aggregator set, we either put objects into an AppendOnlyMap where we combine them, or we

Review comment on this hunk:

JoshRosen (author, on the changed line): @sryza, this change is intended to partially address https://issues.apache.org/jira/browse/SPARK-7311.
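For context, both inputs to this predicate are user-controllable through configuration. A sketch of the relevant settings (the SparkConf keys are the ones read by the code above; the note about registrators reflects how KryoSerializer computes supportsRelocationOfSerializedObjects):

import org.apache.spark.SparkConf

object SerializedSortConfig {
  // Sketch: configuration that influences useSerializedPairBuffer above.
  val conf = new SparkConf()
    // Master switch for the serialized-sort path (true is the default read above):
    .set("spark.shuffle.sort.serializeMapOutputs", "true")
    // Kryo reports supportsRelocationOfSerializedObjects only while auto-reset is
    // enabled; a registrator that calls kryo.setAutoReset(false) disables the path.
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
}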