Closed

Changes from 1 commit (51 commits total)

Commits
1e752f1
Added unpersist method to Broadcast.
Feb 5, 2014
80dd977
Fix for Broadcast unpersist patch.
Feb 6, 2014
e427a9e
Added ContextCleaner to automatically clean RDDs and shuffles when th…
tdas Feb 14, 2014
8512612
Changed TimeStampedHashMap to use WrappedJavaHashMap.
tdas Feb 14, 2014
a24fefc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Mar 11, 2014
cb0a5a6
Fixed docs and styles.
tdas Mar 11, 2014
ae9da88
Removed unnecessary TimeStampedHashMap from DAGScheduler, added try-ca…
tdas Mar 12, 2014
e61daa0
Modifications based on the comments on PR 126.
tdas Mar 13, 2014
a7260d3
Added try-catch in context cleaner and null value cleaning in TimeSta…
tdas Mar 17, 2014
892b952
Removed use of BoundedHashMap, and made BlockManagerSlaveActor cleanu…
tdas Mar 18, 2014
e1fba5f
Style fix
tdas Mar 19, 2014
f2881fd
Changed ContextCleaner to use ReferenceQueue instead of finalizer
tdas Mar 25, 2014
620eca3
Changes based on PR comments.
tdas Mar 25, 2014
a007307
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Mar 25, 2014
d2f8b97
Removed duplicate unpersistRDD.
tdas Mar 25, 2014
6c9dcf6
Added missing Apache license
tdas Mar 25, 2014
c7ccef1
Merge branch 'bc-unpersist-merge' of github.com:ignatich/incubator-sp…
andrewor14 Mar 26, 2014
ba52e00
Refactor broadcast classes
andrewor14 Mar 26, 2014
d0edef3
Add framework for broadcast cleanup
andrewor14 Mar 26, 2014
544ac86
Clean up broadcast blocks through BlockManager*
andrewor14 Mar 26, 2014
e95479c
Add tests for unpersisting broadcast
andrewor14 Mar 27, 2014
f201a8d
Test broadcast cleanup in ContextCleanerSuite + remove BoundedHashMap
andrewor14 Mar 27, 2014
c92e4d9
Merge github.com:apache/spark into cleanup
andrewor14 Mar 27, 2014
0d17060
Import, comments, and style fixes (minor)
andrewor14 Mar 28, 2014
34f436f
Generalize BroadcastBlockId to remove BroadcastHelperBlockId
andrewor14 Mar 28, 2014
fbfeec8
Add functionality to query executors for their local BlockStatuses
andrewor14 Mar 29, 2014
88904a3
Make TimeStampedWeakValueHashMap a wrapper of TimeStampedHashMap
andrewor14 Mar 29, 2014
e442246
Merge github.com:apache/spark into cleanup
andrewor14 Mar 29, 2014
8557c12
Merge github.com:apache/spark into cleanup
andrewor14 Mar 30, 2014
7edbc98
Merge remote-tracking branch 'apache-github/master' into state-cleanup
tdas Mar 31, 2014
634a097
Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
andrewor14 Mar 31, 2014
7ed72fb
Fix style test fail + remove verbose test message regarding broadcast
andrewor14 Mar 31, 2014
5016375
Address TD's comments
andrewor14 Apr 1, 2014
f0aabb1
Correct semantics for TimeStampedWeakValueHashMap + add tests
andrewor14 Apr 2, 2014
762a4d8
Merge pull request #1 from andrewor14/cleanup
tdas Apr 2, 2014
a6460d4
Merge github.com:apache/spark into cleanup
andrewor14 Apr 4, 2014
c5b1d98
Address Patrick's comments
andrewor14 Apr 4, 2014
a2cc8bc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Apr 4, 2014
ada45f0
Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
andrewor14 Apr 4, 2014
cd72d19
Make automatic cleanup configurable (not documented)
andrewor14 Apr 4, 2014
b27f8e8
Merge pull request #3 from andrewor14/cleanup
tdas Apr 4, 2014
a430f06
Fixed compilation errors.
tdas Apr 4, 2014
104a89a
Fixed failing BroadcastSuite unit tests by introducing blocking for r…
tdas Apr 4, 2014
6222697
Fixed bug and adding unit test for removeBroadcast in BlockManagerSuite.
tdas Apr 4, 2014
41c9ece
Added more unit tests for BlockManager, DiskBlockManager, and Context…
tdas Apr 7, 2014
2b95b5e
Added more documentation on Broadcast implementations, specially whic…
tdas Apr 7, 2014
4d05314
Scala style fix.
tdas Apr 7, 2014
cff023c
Fixed issues based on Andrew's comments.
tdas Apr 7, 2014
d25a86e
Fixed stupid typo.
tdas Apr 7, 2014
f489fdc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Apr 8, 2014
61b8d6e
Fixed issue with Tachyon + new BlockManager methods.
tdas Apr 8, 2014
Add tests for unpersisting broadcast
There is not currently a way to query the blocks on the executors,
an operation that is deceptively simple to accomplish. This commit
adds this mechanism in order to verify that blocks are in fact
persisted/unpersisted on the executors in the tests.
andrewor14 committed Mar 27, 2014
commit e95479cd63b3259beddea278befd0bdee89bb17e
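
For orientation before the diffs: a test built on this mechanism would look roughly like the sketch below. This is illustrative only, not code from this commit; it assumes a running SparkContext named sc and that the BlockManager's master reference is reachable via SparkEnv, and it uses the askForStorageLevels helper and the unpersist(removeFromDriver) signature introduced in the diffs that follow.

// Hypothetical test sketch: verify broadcast blocks vanish after unpersist.
val broadcast = sc.broadcast(Array(1, 2, 3))
sc.parallelize(1 to 10, 2).map(_ => broadcast.value.sum).collect()

val blockId = BroadcastBlockId(broadcast.id)
val master = SparkEnv.get.blockManager.master

// While the broadcast is in use, some executor should report a storage level.
assert(master.askForStorageLevels(blockId).nonEmpty)

broadcast.unpersist(removeFromDriver = true)

// After unpersisting, no block manager should still hold the block.
assert(master.askForStorageLevels(blockId).isEmpty)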
16 changes: 13 additions & 3 deletions core/src/main/scala/org/apache/spark/broadcast/Broadcast.scala
@@ -48,16 +48,26 @@ import java.io.Serializable
  * @tparam T Type of the data contained in the broadcast variable.
  */
 abstract class Broadcast[T](val id: Long) extends Serializable {
+
+  /**
+   * Whether this Broadcast is actually usable. This should be false once persisted state is
+   * removed from the driver.
+   */
+  protected var isValid: Boolean = true
+
   def value: T
 
   /**
-   * Remove all persisted state associated with this broadcast.
+   * Remove all persisted state associated with this broadcast. Overriding implementations
+   * should set isValid to false if persisted state is also removed from the driver.
+   *
    * @param removeFromDriver Whether to remove state from the driver.
+   *                         If true, the resulting broadcast should no longer be valid.
    */
   def unpersist(removeFromDriver: Boolean)
 
-  // We cannot have an abstract readObject here due to some weird issues with
-  // readObject having to be 'private' in sub-classes.
+  // We cannot define abstract readObject and writeObject here due to some weird issues
+  // with these methods having to be 'private' in sub-classes.
 
   override def toString = "Broadcast(" + id + ")"
 }
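
The isValid flag added above drives a fail-fast guard during Java serialization in both subclasses below: once persisted state has been removed from the driver, shipping the stale handle to an executor should fail immediately rather than surface as a confusing error later. A minimal self-contained sketch of that guard pattern (plain Scala, not Spark code; names are illustrative):

import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Stand-in for Broadcast: refuses to be serialized once invalidated.
class Handle(val id: Long) extends Serializable {
  protected var isValid: Boolean = true

  def invalidate() { isValid = false }

  // Called by the JVM during serialization; must be private, as noted above.
  private def writeObject(out: ObjectOutputStream) {
    assert(isValid, "Attempted to serialize a destroyed handle!")
    out.defaultWriteObject()
  }
}

object GuardDemo extends App {
  val handle = new Handle(0)
  handle.invalidate()
  val out = new ObjectOutputStream(new ByteArrayOutputStream())
  try out.writeObject(handle)
  catch { case e: AssertionError => println("Caught: " + e.getMessage) }
}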
13 changes: 10 additions & 3 deletions core/src/main/scala/org/apache/spark/broadcast/HttpBroadcast.scala
@@ -17,8 +17,8 @@
 
 package org.apache.spark.broadcast
 
-import java.io.{File, FileOutputStream, ObjectInputStream, OutputStream}
-import java.net.{URL, URLConnection, URI}
+import java.io.{File, FileOutputStream, ObjectInputStream, ObjectOutputStream, OutputStream}
+import java.net.{URI, URL, URLConnection}
 import java.util.concurrent.TimeUnit
 
 import it.unimi.dsi.fastutil.io.{FastBufferedInputStream, FastBufferedOutputStream}
@@ -49,10 +49,17 @@ private[spark] class HttpBroadcast[T](@transient var value_ : T, isLocal: Boolea
    * @param removeFromDriver Whether to remove state from the driver.
    */
   override def unpersist(removeFromDriver: Boolean) {
+    isValid = !removeFromDriver
     HttpBroadcast.unpersist(id, removeFromDriver)
   }
 
-  // Called by JVM when deserializing an object
+  // Used by the JVM when serializing this object
+  private def writeObject(out: ObjectOutputStream) {
+    assert(isValid, "Attempted to serialize a broadcast variable that has been destroyed!")
+    out.defaultWriteObject()
+  }
+
+  // Used by the JVM when deserializing this object
   private def readObject(in: ObjectInputStream) {
     in.defaultReadObject()
     HttpBroadcast.synchronized {
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala
@@ -17,12 +17,12 @@
 
 package org.apache.spark.broadcast
 
-import java.io._
+import java.io.{ByteArrayInputStream, ObjectInputStream, ObjectOutputStream}
 
 import scala.math
 import scala.util.Random
 
-import org.apache.spark._
+import org.apache.spark.{Logging, SparkConf, SparkEnv, SparkException}
 import org.apache.spark.storage.{BroadcastBlockId, BroadcastHelperBlockId, StorageLevel}
 import org.apache.spark.util.Utils
 
@@ -76,10 +76,17 @@ private[spark] class TorrentBroadcast[T](@transient var value_ : T, isLocal: Boo
    * @param removeFromDriver Whether to remove state from the driver.
    */
   override def unpersist(removeFromDriver: Boolean) {
+    isValid = !removeFromDriver
     TorrentBroadcast.unpersist(id, removeFromDriver)
   }
 
-  // Called by JVM when deserializing an object
+  // Used by the JVM when serializing this object
+  private def writeObject(out: ObjectOutputStream) {
+    assert(isValid, "Attempted to serialize a broadcast variable that has been destroyed!")
+    out.defaultWriteObject()
+  }
+
+  // Used by the JVM when deserializing this object
   private def readObject(in: ObjectInputStream) {
     in.defaultReadObject()
     TorrentBroadcast.synchronized {
20 changes: 9 additions & 11 deletions core/src/main/scala/org/apache/spark/storage/BlockManager.scala
@@ -29,7 +29,7 @@ import akka.actor.{ActorSystem, Cancellable, Props}
 import it.unimi.dsi.fastutil.io.{FastBufferedOutputStream, FastByteArrayOutputStream}
 import sun.nio.ch.DirectBuffer
 
-import org.apache.spark.{Logging, SecurityManager, SparkConf, SparkEnv, SparkException, MapOutputTracker}
+import org.apache.spark._
 import org.apache.spark.io.CompressionCodec
 import org.apache.spark.network._
 import org.apache.spark.serializer.Serializer
@@ -58,7 +58,7 @@ private[spark] class BlockManager(
 
   private val blockInfo = new TimeStampedHashMap[BlockId, BlockInfo]
 
-  private[storage] val memoryStore: BlockStore = new MemoryStore(this, maxMemory)
+  private[storage] val memoryStore = new MemoryStore(this, maxMemory)
   private[storage] val diskStore = new DiskStore(this, diskBlockManager)
 
   // If we use Netty for shuffle, start a new Netty-based shuffle sender service.
@@ -210,9 +210,9 @@ private[spark] class BlockManager(
   }
 
   /**
-   * Get storage level of local block. If no info exists for the block, then returns null.
+   * Get storage level of local block. If no info exists for the block, return None.
    */
-  def getLevel(blockId: BlockId): StorageLevel = blockInfo.get(blockId).map(_.level).orNull
+  def getLevel(blockId: BlockId): Option[StorageLevel] = blockInfo.get(blockId).map(_.level)
 
   /**
    * Tell the master about the current storage status of a block. This will send a block update
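
The getLevel change in the hunk above swaps a null-returning API for Option, so call sites must handle the missing-info case explicitly instead of remembering a null check. A tiny self-contained illustration of the difference (toy types, not Spark code):

object GetLevelSketch extends App {
  // Toy stand-in: block id -> storage level description.
  val blockInfo = Map("broadcast_0" -> "MEMORY_AND_DISK")

  // After the change: absence is encoded in the return type, not as null.
  def getLevel(blockId: String): Option[String] = blockInfo.get(blockId)

  getLevel("broadcast_0") match {
    case Some(level) => println("stored at " + level)
    case None => println("no info for this block")
  }

  // A missing block yields None; there is no null to forget to check.
  assert(getLevel("rdd_0_0").isEmpty)
}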
@@ -496,9 +496,8 @@ private[spark] class BlockManager(
 
   /**
    * A short circuited method to get a block writer that can write data directly to disk.
-   * The Block will be appended to the File specified by filename.
-   * This is currently used for writing shuffle files out. Callers should handle error
-   * cases.
+   * The Block will be appended to the File specified by filename. This is currently used for
+   * writing shuffle files out. Callers should handle error cases.
    */
   def getDiskWriter(
       blockId: BlockId,
@@ -816,8 +815,7 @@ private[spark] class BlockManager(
    * @return The number of blocks removed.
    */
   def removeRdd(rddId: Int): Int = {
-    // TODO: Instead of doing a linear scan on the blockInfo map, create another map that maps
-    // from RDD.id to blocks.
+    // TODO: Avoid a linear scan by creating another mapping of RDD.id to blocks.
     logInfo("Removing RDD " + rddId)
     val blocksToRemove = blockInfo.keys.flatMap(_.asRDDId).filter(_.rddId == rddId)
     blocksToRemove.foreach { blockId => removeBlock(blockId, tellMaster = false) }
@@ -827,13 +825,13 @@ private[spark] class BlockManager(
   /**
    * Remove all blocks belonging to the given broadcast.
    */
-  def removeBroadcast(broadcastId: Long) {
+  def removeBroadcast(broadcastId: Long, removeFromDriver: Boolean) {
     logInfo("Removing broadcast " + broadcastId)
     val blocksToRemove = blockInfo.keys.filter(_.isBroadcast).collect {
       case bid: BroadcastBlockId if bid.broadcastId == broadcastId => bid
       case bid: BroadcastHelperBlockId if bid.broadcastId.broadcastId == broadcastId => bid
     }
-    blocksToRemove.foreach { blockId => removeBlock(blockId) }
+    blocksToRemove.foreach { blockId => removeBlock(blockId, removeFromDriver) }
   }
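
removeBroadcast above selects matching blocks with filter plus collect, pattern-matching on the block id hierarchy. The same idiom, reduced to a runnable sketch with toy reproductions of the block id classes (illustrative, not the real definitions):

object CollectSketch extends App {
  sealed trait BlockId { def isBroadcast: Boolean = false }
  case class BroadcastBlockId(broadcastId: Long) extends BlockId {
    override def isBroadcast = true
  }
  case class BroadcastHelperBlockId(broadcastId: BroadcastBlockId, hType: String) extends BlockId {
    override def isBroadcast = true
  }
  case class RDDBlockId(rddId: Int, splitIndex: Int) extends BlockId

  val blocks: Seq[BlockId] = Seq(
    BroadcastBlockId(1), BroadcastHelperBlockId(BroadcastBlockId(1), "meta"), RDDBlockId(0, 0))

  // Same shape as removeBroadcast: keep only broadcast blocks with a matching id.
  val toRemove = blocks.filter(_.isBroadcast).collect {
    case bid: BroadcastBlockId if bid.broadcastId == 1 => bid
    case bid: BroadcastHelperBlockId if bid.broadcastId.broadcastId == 1 => bid
  }
  println(toRemove) // both broadcast blocks for id 1; the RDD block is untouched
}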
core/src/main/scala/org/apache/spark/storage/BlockManagerMaster.scala
@@ -147,6 +147,24 @@ class BlockManagerMaster(var driverActor: ActorRef, conf: SparkConf) extends Log
     askDriverWithReply[Array[StorageStatus]](GetStorageStatus)
   }
 
+  /**
+   * Mainly for testing. Ask the driver to query all executors for their storage levels
+   * regarding this block. This provides an avenue for the driver to learn the storage
+   * levels of blocks it has not been informed of.
+   *
+   * WARNING: This could lead to deadlocks if there are any outstanding messages the
+   * executors are already expecting from the driver. In this case, while the driver is
+   * waiting for the executors to respond to its GetStorageLevel query, the executors
+   * are also waiting for a response from the driver to a prior message.
+   *
+   * The interim solution is to wait for a brief window of time to pass before asking.
+   * This should suffice, since this mechanism is largely introduced for testing only.
+   */
+  def askForStorageLevels(blockId: BlockId, waitTimeMs: Long = 1000) = {
+    Thread.sleep(waitTimeMs)
+    askDriverWithReply[Map[BlockManagerId, StorageLevel]](AskForStorageLevels(blockId))
+  }
+
Review comment (Contributor): This is introduced for testing broadcast cleanup (but can be used for other things in the future).

   /** Stop the driver actor, called only on the Spark driver node */
   def stop() {
     if (driverActor != null) {
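
The deadlock warned about above is a mutual wait: the driver blocks on a reply from an executor that is itself blocked waiting for the driver to answer an earlier message. A toy model of the hazard using plain threads and promises (illustrative only; the real code path goes through Akka actors):

import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

object DeadlockSketch extends App {
  val driverReply = Promise[String]()   // the reply the executor is waiting on
  val executorReply = Promise[String]() // the reply the driver is waiting on

  val executor = new Thread(new Runnable {
    def run() {
      try {
        // The executor will not answer GetStorageLevel until the driver replies first.
        val fromDriver = Await.result(driverReply.future, 2.seconds)
        executorReply.success("storage level, after: " + fromDriver)
      } catch { case _: TimeoutException => () }
    }
  })
  executor.start()

  // The driver blocks on the executor without first sending its own reply:
  // neither side can make progress, surfacing here as a timeout.
  try Await.result(executorReply.future, 2.seconds)
  catch { case _: TimeoutException => println("deadlocked: mutual wait timed out") }
  executor.join()
}

Sleeping before asking, as askForStorageLevels does, only makes it unlikely that such a prior exchange is still in flight; it does not remove the hazard.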
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterActor.scala
@@ -21,7 +21,7 @@ import java.util.{HashMap => JHashMap}
 
 import scala.collection.mutable
 import scala.collection.JavaConversions._
-import scala.concurrent.Future
+import scala.concurrent.{Await, Future}
 import scala.concurrent.duration._
 
 import akka.actor.{Actor, ActorRef, Cancellable}
@@ -126,6 +126,9 @@ class BlockManagerMasterActor(val isLocal: Boolean, conf: SparkConf, listenerBus
     case HeartBeat(blockManagerId) =>
       sender ! heartBeat(blockManagerId)
 
+    case AskForStorageLevels(blockId) =>
+      sender ! askForStorageLevels(blockId)
+
     case other =>
       logWarning("Got unknown message: " + other)
   }
@@ -158,6 +161,11 @@ class BlockManagerMasterActor(val isLocal: Boolean, conf: SparkConf, listenerBus
     blockManagerInfo.values.foreach { bm => bm.slaveActor ! removeMsg }
   }
 
+  /**
+   * Delegate RemoveBroadcast messages to each BlockManager because the master may not be
+   * notified of all broadcast blocks. If removeFromDriver is false, broadcast blocks are
+   * only removed from the executors, but not from the driver.
+   */
   private def removeBroadcast(broadcastId: Long, removeFromDriver: Boolean) {
     // TODO(aor): Consolidate usages of <driver>
     val removeMsg = RemoveBroadcast(broadcastId)

Review comment (Contributor): Same question here - curious when it is and is not notified. It's probably fine to just broadcast this (no pun intended) to all block managers anyways, but I'm just curious.
@@ -246,6 +254,19 @@ class BlockManagerMasterActor(val isLocal: Boolean, conf: SparkConf, listenerBus
     }.toArray
   }
 
+  // For testing. Ask all block managers for the given block's local storage level, if any.
+  private def askForStorageLevels(blockId: BlockId): Map[BlockManagerId, StorageLevel] = {
+    val getStorageLevel = GetStorageLevel(blockId)
+    blockManagerInfo.values.flatMap { info =>
+      val future = info.slaveActor.ask(getStorageLevel)(akkaTimeout)
+      val result = Await.result(future, akkaTimeout)
+      if (result != null) {
+        // If the block does not exist on the slave, the slave replies None
+        result.asInstanceOf[Option[StorageLevel]].map { reply => (info.blockManagerId, reply) }
+      } else None
+    }.toMap
+  }
+
   private def register(id: BlockManagerId, maxMemSize: Long, slaveActor: ActorRef) {
     if (!blockManagerInfo.contains(id)) {
       blockManagerIdByExecutor.get(id.executorId) match {
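
askForStorageLevels above is a blocking gather: it asks each slave in turn, waits on every reply with Await.result, and keeps only the defined results. The same shape reduced to a self-contained sketch (toy types; the real code asks Akka actors and casts the untyped reply):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object GatherSketch extends App {
  // Toy replies: executor id -> Some(level) if the block is present, else None.
  val replies = Map("exec-1" -> Some("MEMORY_ONLY"), "exec-2" -> None)

  def ask(executor: String): Future[Option[String]] = Future(replies(executor))

  // Block on each reply in turn and keep only the Some cases, as the actor does.
  val levels: Map[String, String] = replies.keys.flatMap { executor =>
    Await.result(ask(executor), 1.second).map(level => (executor, level))
  }.toMap

  println(levels) // Map(exec-1 -> MEMORY_ONLY)
}

Note the waits are sequential, one slave at a time; for a test-only helper that is acceptable, though Future.sequence would gather the replies concurrently.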
@@ -329,6 +350,7 @@ class BlockManagerMasterActor(val isLocal: Boolean, conf: SparkConf, listenerBus
     // Note that this logic will select the same node multiple times if there aren't enough peers
     Array.tabulate[BlockManagerId](size) { i => peers((selfIndex + i + 1) % peers.length) }.toSeq
   }
+
 }
 
 
core/src/main/scala/org/apache/spark/storage/BlockManagerMessages.scala
@@ -43,6 +43,9 @@ private[storage] object BlockManagerMessages {
   case class RemoveBroadcast(broadcastId: Long, removeFromDriver: Boolean = true)
     extends ToBlockManagerSlave
 
+  // For testing. Ask the slave for the block's storage level.
+  case class GetStorageLevel(blockId: BlockId) extends ToBlockManagerSlave
+
 
   //////////////////////////////////////////////////////////////////////////////////
   // Messages from slaves to the master.
@@ -116,4 +119,8 @@ private[storage] object BlockManagerMessages {
   case object ExpireDeadHosts extends ToBlockManagerMaster
 
   case object GetStorageStatus extends ToBlockManagerMaster
+
+  // For testing. Have the master ask all slaves for the given block's storage level.
+  case class AskForStorageLevels(blockId: BlockId) extends ToBlockManagerMaster
+
 }
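
The message objects above follow the actor-protocol convention of grouping messages by direction (ToBlockManagerSlave versus ToBlockManagerMaster). A minimal sketch of why that split helps: each handler matches only its own direction's trait, and because the trait is sealed, forgetting a case produces a compile-time warning (toy protocol, not the real definitions):

object ProtocolSketch extends App {
  sealed trait ToSlave
  case class RemoveBroadcast(id: Long, removeFromDriver: Boolean = true) extends ToSlave
  case class GetStorageLevel(blockId: String) extends ToSlave

  // The slave-side handler only needs to consider ToSlave messages.
  def handle(msg: ToSlave): String = msg match {
    case RemoveBroadcast(id, fromDriver) => "removing broadcast " + id
    case GetStorageLevel(blockId) => "level of " + blockId
  }

  println(handle(GetStorageLevel("broadcast_1")))
}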
core/src/main/scala/org/apache/spark/storage/BlockManagerSlaveActor.scala
@@ -47,7 +47,10 @@ class BlockManagerSlaveActor(
       mapOutputTracker.unregisterShuffle(shuffleId)
     }
 
-    case RemoveBroadcast(broadcastId, _) =>
-      blockManager.removeBroadcast(broadcastId)
+    case RemoveBroadcast(broadcastId, removeFromDriver) =>
+      blockManager.removeBroadcast(broadcastId, removeFromDriver)
+
+    case GetStorageLevel(blockId) =>
+      sender ! blockManager.getLevel(blockId)
   }
 }