51 commits
1e752f1
Added unpersist method to Broadcast.
Feb 5, 2014
80dd977
Fix for Broadcast unpersist patch.
Feb 6, 2014
e427a9e
Added ContextCleaner to automatically clean RDDs and shuffles when th…
tdas Feb 14, 2014
8512612
Changed TimeStampedHashMap to use WrappedJavaHashMap.
tdas Feb 14, 2014
a24fefc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Mar 11, 2014
cb0a5a6
Fixed docs and styles.
tdas Mar 11, 2014
ae9da88
Removed unncessary TimeStampedHashMap from DAGScheduler, added try-ca…
tdas Mar 12, 2014
e61daa0
Modifications based on the comments on PR 126.
tdas Mar 13, 2014
a7260d3
Added try-catch in context cleaner and null value cleaning in TimeSta…
tdas Mar 17, 2014
892b952
Removed use of BoundedHashMap, and made BlockManagerSlaveActor cleanu…
tdas Mar 18, 2014
e1fba5f
Style fix
tdas Mar 19, 2014
f2881fd
Changed ContextCleaner to use ReferenceQueue instead of finalizer
tdas Mar 25, 2014
620eca3
Changes based on PR comments.
tdas Mar 25, 2014
a007307
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Mar 25, 2014
d2f8b97
Removed duplicate unpersistRDD.
tdas Mar 25, 2014
6c9dcf6
Added missing Apache license
tdas Mar 25, 2014
c7ccef1
Merge branch 'bc-unpersist-merge' of github.com:ignatich/incubator-sp…
andrewor14 Mar 26, 2014
ba52e00
Refactor broadcast classes
andrewor14 Mar 26, 2014
d0edef3
Add framework for broadcast cleanup
andrewor14 Mar 26, 2014
544ac86
Clean up broadcast blocks through BlockManager*
andrewor14 Mar 26, 2014
e95479c
Add tests for unpersisting broadcast
andrewor14 Mar 27, 2014
f201a8d
Test broadcast cleanup in ContextCleanerSuite + remove BoundedHashMap
andrewor14 Mar 27, 2014
c92e4d9
Merge github.com:apache/spark into cleanup
andrewor14 Mar 27, 2014
0d17060
Import, comments, and style fixes (minor)
andrewor14 Mar 28, 2014
34f436f
Generalize BroadcastBlockId to remove BroadcastHelperBlockId
andrewor14 Mar 28, 2014
fbfeec8
Add functionality to query executors for their local BlockStatuses
andrewor14 Mar 29, 2014
88904a3
Make TimeStampedWeakValueHashMap a wrapper of TimeStampedHashMap
andrewor14 Mar 29, 2014
e442246
Merge github.com:apache/spark into cleanup
andrewor14 Mar 29, 2014
8557c12
Merge github.com:apache/spark into cleanup
andrewor14 Mar 30, 2014
7edbc98
Merge remote-tracking branch 'apache-github/master' into state-cleanup
tdas Mar 31, 2014
634a097
Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
andrewor14 Mar 31, 2014
7ed72fb
Fix style test fail + remove verbose test message regarding broadcast
andrewor14 Mar 31, 2014
5016375
Address TD's comments
andrewor14 Apr 1, 2014
f0aabb1
Correct semantics for TimeStampedWeakValueHashMap + add tests
andrewor14 Apr 2, 2014
762a4d8
Merge pull request #1 from andrewor14/cleanup
tdas Apr 2, 2014
a6460d4
Merge github.com:apache/spark into cleanup
andrewor14 Apr 4, 2014
c5b1d98
Address Patrick's comments
andrewor14 Apr 4, 2014
a2cc8bc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Apr 4, 2014
ada45f0
Merge branch 'state-cleanup' of github.com:tdas/spark into cleanup
andrewor14 Apr 4, 2014
cd72d19
Make automatic cleanup configurable (not documented)
andrewor14 Apr 4, 2014
b27f8e8
Merge pull request #3 from andrewor14/cleanup
tdas Apr 4, 2014
a430f06
Fixed compilation errors.
tdas Apr 4, 2014
104a89a
Fixed failing BroadcastSuite unit tests by introducing blocking for r…
tdas Apr 4, 2014
6222697
Fixed bug and adding unit test for removeBroadcast in BlockManagerSuite.
tdas Apr 4, 2014
41c9ece
Added more unit tests for BlockManager, DiskBlockManager, and Context…
tdas Apr 7, 2014
2b95b5e
Added more documentation on Broadcast implementations, specially whic…
tdas Apr 7, 2014
4d05314
Scala style fix.
tdas Apr 7, 2014
cff023c
Fixed issues based on Andrew's comments.
tdas Apr 7, 2014
d25a86e
Fixed stupid typo.
tdas Apr 7, 2014
f489fdc
Merge remote-tracking branch 'apache/master' into state-cleanup
tdas Apr 8, 2014
61b8d6e
Fixed issue with Tachyon + new BlockManager methods.
tdas Apr 8, 2014
Correct semantics for TimeStampedWeakValueHashMap + add tests
This largely accounts for the case when a WeakReference's referent is no longer strongly
reachable: the map should then return None for get() operations on that key, and
should skip the entry in all listing operations.
andrewor14 committed Apr 2, 2014
commit f0aabb1c8496dc79daeb6d090fb36ceef310622b
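To make the intended semantics concrete, here is a hedged sketch (the class is private[spark], so this assumes test code living under org.apache.spark; System.gc() is only a hint, so the post-collection branch is illustrative rather than deterministic):

    import org.apache.spark.util.TimeStampedWeakValueHashMap

    val map = new TimeStampedWeakValueHashMap[String, AnyRef]()
    var value: AnyRef = new Object
    map("key") = value                    // the map holds the value only weakly
    assert(map.get("key").isDefined)      // still strongly reachable: visible

    value = null                          // drop the last strong reference
    System.gc()                           // hint only; collection not guaranteed

    // Once the referent is collected: get() returns None, and listing
    // operations (iterator, foreach, toMap) skip the entry.
    if (map.get("key").isEmpty) {
      assert(!map.iterator.exists(_._1 == "key"))
    }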
core/src/main/scala/org/apache/spark/broadcast/Broadcast.scala
@@ -78,7 +78,7 @@ abstract class Broadcast[T](val id: Long) extends Serializable {
   */
  protected def assertValid() {
    if (!_isValid) {
-      throw new SparkException("Attempted to use %s when is no longer valid!".format(toString))
+      throw new SparkException("Attempted to use %s after it has been destroyed!".format(toString))
    }
  }
43 changes: 27 additions & 16 deletions core/src/main/scala/org/apache/spark/util/TimeStampedHashMap.scala
@@ -21,7 +21,7 @@ import java.util.Set
import java.util.Map.Entry
import java.util.concurrent.ConcurrentHashMap

-import scala.collection.{immutable, JavaConversions, mutable}
+import scala.collection.{JavaConversions, mutable}

import org.apache.spark.Logging
@@ -50,11 +50,11 @@ private[spark] class TimeStampedHashMap[A, B](updateTimeStampOnGet: Boolean = fa
  }

  def iterator: Iterator[(A, B)] = {
-    val jIterator = getEntrySet.iterator()
+    val jIterator = getEntrySet.iterator
    JavaConversions.asScalaIterator(jIterator).map(kv => (kv.getKey, kv.getValue.value))
  }

-  def getEntrySet: Set[Entry[A, TimeStampedValue[B]]] = internalMap.entrySet()
+  def getEntrySet: Set[Entry[A, TimeStampedValue[B]]] = internalMap.entrySet

  override def + [B1 >: B](kv: (A, B1)): mutable.Map[A, B1] = {
    val newMap = new TimeStampedHashMap[A, B1]
@@ -86,8 +86,7 @@ private[spark] class TimeStampedHashMap[A, B](updateTimeStampOnGet: Boolean = fa
  }

  override def apply(key: A): B = {
-    val value = internalMap.get(key)
-    Option(value).map(_.value).getOrElse { throw new NoSuchElementException() }
+    get(key).getOrElse { throw new NoSuchElementException() }
  }

  override def filter(p: ((A, B)) => Boolean): mutable.Map[A, B] = {
@@ -101,9 +100,9 @@ private[spark] class TimeStampedHashMap[A, B](updateTimeStampOnGet: Boolean = fa
  override def size: Int = internalMap.size

  override def foreach[U](f: ((A, B)) => U) {
-    val iterator = getEntrySet.iterator()
-    while(iterator.hasNext) {
-      val entry = iterator.next()
+    val it = getEntrySet.iterator
+    while(it.hasNext) {
+      val entry = it.next()
      val kv = (entry.getKey, entry.getValue.value)
      f(kv)
    }
@@ -115,27 +114,39 @@ private[spark] class TimeStampedHashMap[A, B](updateTimeStampOnGet: Boolean = fa
    Option(prev).map(_.value)
  }

-  def toMap: immutable.Map[A, B] = iterator.toMap
+  def putAll(map: Map[A, B]) {
+    map.foreach { case (k, v) => update(k, v) }
+  }
+
+  def toMap: Map[A, B] = iterator.toMap

  def clearOldValues(threshTime: Long, f: (A, B) => Unit) {
-    val iterator = getEntrySet.iterator()
-    while (iterator.hasNext) {
-      val entry = iterator.next()
+    val it = getEntrySet.iterator
+    while (it.hasNext) {
+      val entry = it.next()
      if (entry.getValue.timestamp < threshTime) {
        f(entry.getKey, entry.getValue.value)
        logDebug("Removing key " + entry.getKey)
-        iterator.remove()
+        it.remove()
      }
    }
  }

-  /**
-   * Removes old key-value pairs that have timestamp earlier than `threshTime`.
-   */
+  /** Removes old key-value pairs that have timestamp earlier than `threshTime`. */
  def clearOldValues(threshTime: Long) {
    clearOldValues(threshTime, (_, _) => ())
  }

  private def currentTime: Long = System.currentTimeMillis

+  // For testing
+
+  def getTimeStampedValue(key: A): Option[TimeStampedValue[B]] = {
+    Option(internalMap.get(key))
+  }
+
+  def getTimestamp(key: A): Option[Long] = {
+    getTimeStampedValue(key).map(_.timestamp)
+  }
+
}
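A short usage sketch of the timestamp-based cleanup above (illustrative values; TimeStampedHashMap is private[spark], so this assumes code under the org.apache.spark package):

    import org.apache.spark.util.TimeStampedHashMap

    // Each insert records a timestamp; clearOldValues(threshTime) drops entries
    // whose timestamps are strictly older than the threshold.
    val map = new TimeStampedHashMap[String, Int]()
    map("old") = 1
    Thread.sleep(10)                      // ensure a strictly later threshold
    val threshTime = System.currentTimeMillis
    map("new") = 2
    map.clearOldValues(threshTime)        // removes "old", keeps "new"
    assert(map.get("old").isEmpty)
    assert(map.get("new") == Some(2))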
core/src/main/scala/org/apache/spark/util/TimeStampedWeakValueHashMap.scala
@@ -18,47 +18,61 @@
package org.apache.spark.util

import java.lang.ref.WeakReference
+import java.util.concurrent.atomic.AtomicInteger

Review comment (Contributor):
Use @aarondav 's import organizer!

Reply (Contributor Author):
I do! Too bad that the user of the tool (i.e., me) forgets to engage its keyboard shortcut!

-import scala.collection.{immutable, mutable}
+import scala.collection.mutable

import org.apache.spark.Logging

/**
 * A wrapper of TimeStampedHashMap that ensures the values are weakly referenced and timestamped.
 *
- * If the value is garbage collected and the weak reference is null, get() operation returns
- * a non-existent value. However, the corresponding key is actually not removed in the current
- * implementation. Key-value pairs whose timestamps are older than a particular threshold time
- * can then be removed using the clearOldValues method. It exposes a scala.collection.mutable.Map
- * interface to allow it to be a drop-in replacement for Scala HashMaps.
+ * If the value is garbage collected and the weak reference is null, get() will return a
+ * non-existent value. These entries are removed from the map periodically (every N inserts), as
+ * their values are no longer strongly reachable. Further, key-value pairs whose timestamps are
+ * older than a particular threshold can be removed using the clearOldValues method.
 *
- * Internally, it uses a Java ConcurrentHashMap, so all operations on this HashMap are thread-safe.
+ * TimeStampedWeakValueHashMap exposes a scala.collection.mutable.Map interface, which allows it
+ * to be a drop-in replacement for Scala HashMaps. Internally, it uses a Java ConcurrentHashMap,
+ * so all operations on this HashMap are thread-safe.
 *
 * @param updateTimeStampOnGet Whether timestamp of a pair will be updated when it is accessed.
 */
private[spark] class TimeStampedWeakValueHashMap[A, B](updateTimeStampOnGet: Boolean = false)
-  extends mutable.Map[A, B]() {
+  extends mutable.Map[A, B]() with Logging {

  import TimeStampedWeakValueHashMap._

  private val internalMap = new TimeStampedHashMap[A, WeakReference[B]](updateTimeStampOnGet)
+  private val insertCount = new AtomicInteger(0)
+
+  /** Return a map consisting only of entries whose values are still strongly reachable. */
+  private def nonNullReferenceMap = internalMap.filter { case (_, ref) => ref.get != null }

  def get(key: A): Option[B] = internalMap.get(key)

-  def iterator: Iterator[(A, B)] = internalMap.iterator
+  def iterator: Iterator[(A, B)] = nonNullReferenceMap.iterator

  override def + [B1 >: B](kv: (A, B1)): mutable.Map[A, B1] = {
    val newMap = new TimeStampedWeakValueHashMap[A, B1]
+    val oldMap = nonNullReferenceMap.asInstanceOf[mutable.Map[A, WeakReference[B1]]]
+    newMap.internalMap.putAll(oldMap.toMap)
    newMap.internalMap += kv
    newMap
  }

  override def - (key: A): mutable.Map[A, B] = {
    val newMap = new TimeStampedWeakValueHashMap[A, B]
+    newMap.internalMap.putAll(nonNullReferenceMap.toMap)
    newMap.internalMap -= key
    newMap
  }

  override def += (kv: (A, B)): this.type = {
    internalMap += kv
+    if (insertCount.incrementAndGet() % CLEAR_NULL_VALUES_INTERVAL == 0) {
+      clearNullValues()
+    }
    this
  }

@@ -71,31 +85,53 @@ private[spark] class TimeStampedWeakValueHashMap[A, B](updateTimeStampOnGet: Boo

  override def apply(key: A): B = internalMap.apply(key)

-  override def filter(p: ((A, B)) => Boolean): mutable.Map[A, B] = internalMap.filter(p)
+  override def filter(p: ((A, B)) => Boolean): mutable.Map[A, B] = nonNullReferenceMap.filter(p)

  override def empty: mutable.Map[A, B] = new TimeStampedWeakValueHashMap[A, B]()

  override def size: Int = internalMap.size

-  override def foreach[U](f: ((A, B)) => U) = internalMap.foreach(f)
+  override def foreach[U](f: ((A, B)) => U) = nonNullReferenceMap.foreach(f)

  def putIfAbsent(key: A, value: B): Option[B] = internalMap.putIfAbsent(key, value)

-  def toMap: immutable.Map[A, B] = iterator.toMap
+  def toMap: Map[A, B] = iterator.toMap

-  /**
-   * Remove old key-value pairs that have timestamp earlier than `threshTime`.
-   */
+  /** Remove old key-value pairs with timestamps earlier than `threshTime`. */
  def clearOldValues(threshTime: Long) = internalMap.clearOldValues(threshTime)

+  /** Remove entries with values that are no longer strongly reachable. */
+  def clearNullValues() {
+    val it = internalMap.getEntrySet.iterator
+    while (it.hasNext) {
+      val entry = it.next()
+      if (entry.getValue.value.get == null) {
+        logDebug("Removing key " + entry.getKey + " because it is no longer strongly reachable.")
+        it.remove()
+      }
+    }

Review comment (Contributor):
How about

    Option(internalJavaMap.get(key)).map { weakValue =>
      val value = weakValue.weakValue.get
      if (value == null) {
        internalJavaMap.remove(key)
      }
      value
    }

?

Reply (Contributor Author):
Not the same logic. When the value is null, this makes the function return Some(null) instead of None. Changing map to flatMap is the solution.
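A minimal self-contained sketch of the flatMap variant described in the reply (WeakValueMap, internalJavaMap, and put/get here are illustrative names, not identifiers from this patch):

    import java.lang.ref.WeakReference
    import java.util.concurrent.ConcurrentHashMap

    // Illustrative map whose values are held through WeakReferences.
    class WeakValueMap[K, V <: AnyRef] {
      private val internalJavaMap = new ConcurrentHashMap[K, WeakReference[V]]()

      def put(key: K, value: V): Unit =
        internalJavaMap.put(key, new WeakReference(value))

      // With map, a collected referent would surface as Some(null); flatMap
      // collapses the inner Option so it surfaces as None instead.
      def get(key: K): Option[V] =
        Option(internalJavaMap.get(key)).flatMap(ref => Option(ref.get))
    }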

}

+  // For testing
+
+  def getTimestamp(key: A): Option[Long] = {
+    internalMap.getTimeStampedValue(key).map(_.timestamp)
+  }
+
+  def getReference(key: A): Option[WeakReference[B]] = {
+    internalMap.getTimeStampedValue(key).map(_.value)
+  }
}

/**
 * Helper methods for converting to and from WeakReferences.
 */
-private[spark] object TimeStampedWeakValueHashMap {
+private object TimeStampedWeakValueHashMap {

-  /* Implicit conversion methods to WeakReferences */
+  // Number of inserts after which entries with null references are removed
+  val CLEAR_NULL_VALUES_INTERVAL = 100
+
+  /* Implicit conversion methods to WeakReferences. */

  implicit def toWeakReference[V](v: V): WeakReference[V] = new WeakReference[V](v)

@@ -107,12 +143,15 @@ private[spark] object TimeStampedWeakValueHashMap {
    (kv: (K, WeakReference[V])) => p(kv)
  }

-  /* Implicit conversion methods from WeakReferences */
+  /* Implicit conversion methods from WeakReferences. */

  implicit def fromWeakReference[V](ref: WeakReference[V]): V = ref.get

  implicit def fromWeakReferenceOption[V](v: Option[WeakReference[V]]): Option[V] = {
-    v.map(fromWeakReference)
+    v match {
+      case Some(ref) => Option(fromWeakReference(ref))
+      case None => None
+    }
  }

  implicit def fromWeakReferenceTuple[K, V](kv: (K, WeakReference[V])): (K, V) = {
@@ -128,5 +167,4 @@
    map: mutable.Map[K, WeakReference[V]]) : mutable.Map[K, V] = {
    mutable.Map(map.mapValues(fromWeakReference).toSeq: _*)
  }
-
}

Review comment (Contributor):
This map is used for storing persisted RDDs in SparkContext
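
For context, an abridged sketch of that usage (persistentRdds is the SparkContext field this PR converts to a weak-value map; surrounding code omitted):

    import org.apache.spark.rdd.RDD
    import org.apache.spark.util.TimeStampedWeakValueHashMap

    // Inside SparkContext: persisted RDDs are tracked weakly by id, so an RDD
    // no longer referenced by user code can be garbage collected and its
    // entry later dropped by clearNullValues() / clearOldValues().
    private[spark] val persistentRdds = new TimeStampedWeakValueHashMap[Int, RDD[_]]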
