Closed
Changes from 1 commit
Commits (41)
6d6776a
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 9, 2014
d24d9d4
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 18, 2014
08176ad
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 18, 2014
03d6c1c
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 19, 2014
8df37e4
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 20, 2014
87775aa
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 21, 2014
0f10788
SPARK-1729. Make Flume pull data from source, rather than the current…
harishreedharan May 24, 2014
c604a3c
SPARK-1729. Optimize imports.
harishreedharan Jun 5, 2014
9741683
SPARK-1729. Fixes based on review.
harishreedharan Jun 6, 2014
e7da512
SPARK-1729. Fixing import order
harishreedharan Jun 6, 2014
d6fa3aa
SPARK-1729. New Flume-Spark integration.
harishreedharan Jun 10, 2014
70bcc2a
SPARK-1729. New Flume-Spark integration.
harishreedharan Jun 10, 2014
3c23c18
SPARK-1729. New Spark-Flume integration.
harishreedharan Jun 10, 2014
0d69604
FLUME-1729. Better Flume-Spark integration.
harishreedharan Jun 16, 2014
bda01fc
FLUME-1729. Flume-Spark integration.
harishreedharan Jun 17, 2014
4b0c7fc
FLUME-1729. New Flume-Spark integration.
harishreedharan Jun 18, 2014
205034d
Merging master in
harishreedharan Jun 18, 2014
86aa274
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 10, 2014
8136aa6
Adding TransactionProcessor to map on returning batch of data
harishreedharan Jul 14, 2014
9fd0da7
SPARK-1729. Use foreach instead of map for all Options.
harishreedharan Jul 14, 2014
120e2a1
SPARK-1729. Some test changes and changes to utils classes.
harishreedharan Jul 15, 2014
393bd94
SPARK-1729. Use LinkedBlockingQueue instead of ArrayBuffer to keep tr…
harishreedharan Jul 15, 2014
8c00289
More debug messages
harishreedharan Jul 15, 2014
1edc806
SPARK-1729. Update logging in Spark Sink.
harishreedharan Jul 15, 2014
10b6214
Changed public API, changed sink package, and added java unit test to…
tdas Jul 17, 2014
d248d22
Merge pull request #1 from tdas/flume-polling
Jul 17, 2014
3c5194c
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 19, 2014
799509f
Fix a compile issue.
harishreedharan Jul 21, 2014
3572180
Adding a license header, making Jenkins happy.
harishreedharan Jul 21, 2014
f3c99d1
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 23, 2014
e59cc20
Use SparkFlumeEvent instead of the new type. Also, Flume Polling Rece…
harishreedharan Jul 23, 2014
65b76b4
Fixing the unit test.
harishreedharan Jul 23, 2014
73d6f6d
Cleaned up tests a bit. Added some docs in multiple places.
harishreedharan Jul 24, 2014
1f47364
Minor fixes.
harishreedharan Jul 25, 2014
a082eb3
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 25, 2014
7a1bc6e
Fix SparkBuild.scala
harishreedharan Jul 25, 2014
981bf62
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 28, 2014
5f212ce
Ignore Spark Sink from mima.
harishreedharan Jul 28, 2014
e48d785
Documenting flume-sink being ignored for Mima checks.
harishreedharan Jul 28, 2014
96cfb6f
Merge remote-tracking branch 'asf/master'
harishreedharan Jul 29, 2014
e7f70a3
Merge remote-tracking branch 'asf-git/master'
harishreedharan Jul 29, 2014
SPARK-1729. Make Flume pull data from source, rather than the current push model

Update to the previous patch fixing some error cases and also excluding Netty dependencies. Also updated the unit tests.
harishreedharan committed May 18, 2014
commit d24d9d47795fe0a81fa2d70a4f81c24d2efd8914
@@ -30,9 +30,8 @@ import java.util.concurrent._
import java.util
import org.apache.flume.conf.{ConfigurationException, Configurable}
import com.google.common.util.concurrent.ThreadFactoryBuilder
import org.apache.avro.ipc.{NettyTransceiver, NettyServer}
import org.apache.avro.ipc.NettyServer
import org.apache.avro.ipc.specific.SpecificResponder
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory
import java.net.InetSocketAddress

class SparkSink() extends AbstractSink with Configurable {
Contributor:
Scala style nit: no need for parentheses when no parameter is present.
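For illustration only (the names below are hypothetical, not from this patch), the nit amounts to dropping the redundant empty parameter list from a no-argument primary constructor:

// Redundant empty parameter list on the primary constructor:
class NoisySink() extends Runnable { def run(): Unit = println("run") }

// Preferred Scala style when the constructor takes no parameters:
class QuietSink extends Runnable { def run(): Unit = println("run") }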

@@ -75,12 +74,10 @@ class SparkSink() extends AbstractSink with Configurable {

val responder = new SpecificResponder(classOf[SparkFlumeProtocol], new AvroCallbackHandler())

serverOpt = Option(new NettyServer(responder, new InetSocketAddress(hostname, port),
new NioServerSocketChannelFactory(
Executors.newCachedThreadPool(new ThreadFactoryBuilder().setNameFormat(
"Spark Sink " + classOf[NettyTransceiver].getSimpleName + " Boss-%d").build),
Executors.newFixedThreadPool(maxThreads, new ThreadFactoryBuilder().setNameFormat(
"Spark Sink " + classOf[NettyTransceiver].getSimpleName + " I/O Worker-%d").build))))
// Using the constructor that takes specific thread-pools requires bringing in netty
// dependencies which are being excluded in the build. In practice,
// Netty dependencies are already available on the JVM as Flume would have pulled them in.
serverOpt = Option(new NettyServer(responder, new InetSocketAddress(hostname, port)))

serverOpt.map(server => server.start())
lock.lock()
@@ -93,10 +90,14 @@ class SparkSink() extends AbstractSink with Configurable {
}

override def stop() {
Contributor:
Shouldn't super.stop() be called as well?

Contributor (author):
Yep, my mistake.

Contributor:
Are start() and stop() guaranteed not to be called concurrently by Flume? If that is not the case, these should probably be synchronized.

Contributor (author):
No, they will not be called concurrently.
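To make the acknowledged fix concrete, here is a minimal self-contained sketch (simplified lifecycle and hypothetical names, not the actual SparkSink code): stop() tears down the sink's own resources and then still delegates to super.stop() so the base class's lifecycle handling runs.

// Simplified stand-in for Flume's AbstractSink lifecycle, for illustration only.
abstract class SinkBase {
  def start(): Unit = println("base start: lifecycle state -> START")
  def stop(): Unit = println("base stop: lifecycle state -> STOP")
}

class SketchSink extends SinkBase {
  override def stop(): Unit = {
    println("shutting down transaction executor and Avro server")
    super.stop() // the reviewer's point: do not skip the base class's stop()
  }
}

object SketchSinkDemo extends App {
  val sink = new SketchSink
  sink.start()
  sink.stop()
}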

transactionExecutorOpt.map(executor => executor.shutdownNow())
serverOpt.map(server => {
server.close()
server.join()
})
lock.lock()
try {
running = false
transactionExecutorOpt.map(executor => executor.shutdownNow())
blockingCondition.signalAll()
} finally {
lock.unlock()
@@ -131,23 +132,28 @@ class SparkSink() extends AbstractSink with Configurable {
Status.BACKOFF
}


// Object representing an empty batch returned by the txn processor due to some error.
case object ErrorEventBatch extends EventBatch

private class AvroCallbackHandler() extends SparkFlumeProtocol {
Contributor:
Scala style nit: no need for parentheses when no parameter is present.


override def getEventBatch(n: Int): EventBatch = {
val processor = processorFactory.get.checkOut(n)
transactionExecutorOpt.map(executor => executor.submit(processor))
// Wait until a batch is available - can be null if some error was thrown
val eventBatch = Option(processor.eventQueue.take())
if (eventBatch.isDefined) {
val eventsToBeSent = eventBatch.get
processorMap.put(eventsToBeSent.getSequenceNumber, processor)
if (LOG.isDebugEnabled) {
LOG.debug("Sent " + eventsToBeSent.getEventBatch.size() +
" events with sequence number: " + eventsToBeSent.getSequenceNumber)
val eventBatch = processor.eventQueue.take()
eventBatch match {
case ErrorEventBatch => throw new FlumeException("Something went wrong. No events" +
" retrieved from channel.")
case events => {
processorMap.put(events.getSequenceNumber, processor)
if (LOG.isDebugEnabled) {
LOG.debug("Sent " + events.getEventBatch.size() +
" events with sequence number: " + events.getSequenceNumber)
}
events
}
eventsToBeSent
} else {
throw new FlumeException("Error while trying to retrieve events from the channel.")
}
}
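The ErrorEventBatch sentinel above follows a common pattern: the transaction processor always puts something on the blocking queue, either a real batch or the error sentinel, so the caller thread blocked in take() is always released, and the caller pattern-matches to decide whether to throw. A self-contained sketch of the idea (illustrative names, not the sink's actual classes):

import java.util.concurrent.LinkedBlockingQueue

sealed trait Batch
case class EventsBatch(events: Seq[String]) extends Batch
case object ErrorBatch extends Batch // sentinel: "something went wrong, no events"

object SentinelDemo extends App {
  val queue = new LinkedBlockingQueue[Batch]()

  val worker = new Thread(new Runnable {
    def run(): Unit = {
      try {
        throw new RuntimeException("simulated channel failure")
      } catch {
        // Always release the waiting caller, even on failure.
        case _: Exception => queue.put(ErrorBatch)
      }
    }
  })
  worker.start()

  queue.take() match {
    case ErrorBatch         => println("Something went wrong. No events retrieved from channel.")
    case EventsBatch(batch) => println(s"Got ${batch.size} events")
  }
}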

@@ -211,17 +217,38 @@ class SparkSink() extends AbstractSink with Configurable {
val events = eventBatch.getEventBatch
events.clear()
val loop = new Breaks
var gotEventsInThisTxn = false
loop.breakable {
for (i <- 0 until maxBatchSize) {
var i = 0
// Using for here causes the maxBatchSize change to be ineffective as the Range gets
// pregenerated
while (i < maxBatchSize) {
i += 1
val eventOpt = Option(getChannel.take())

eventOpt.map(event => {
Contributor:
These few lines can be formatted better (more Scala-like) as

           Option(getChannel.take()) match {
              case Some(event) =>
                val headers = toCharSequenceMap(event.getHeaders)
                val body = ByteBuffer.wrap(event.getBody)
                events.add(new SparkSinkEvent(headers, body))
                gotEventsInThisTxn = true

              case None =>
                if (!gotEventsInThisTxn) {
                  if (maxBatchSize >= Int.MaxValue / 2) {
                    // Random sanity check
                    throw new RuntimeException("Safety exception - polled too many times, no events!")
                  }
                  maxBatchSize += 1
                  Thread.sleep(500)
                } else {
                  loop.break()
                }
            }

events.add(new SparkSinkEvent(toCharSequenceMap(event
.getHeaders),
ByteBuffer.wrap(event.getBody)))
gotEventsInThisTxn = true
})
if (eventOpt.isEmpty) {
loop.break()
if (!gotEventsInThisTxn) {
// To avoid sending empty batches, we wait till events are available backing off
// between attempts to get events. Each attempt to get an event though causes one
// iteration to be lost. To ensure that we still send back maxBatchSize number of
// events, we cheat and increase the maxBatchSize by 1 to account for the lost
// iteration. Even throwing an exception is expensive as Avro will serialize it
// and send it over the wire, which is useless. Before incrementing though,
// ensure that we are not anywhere near INT_MAX.
if (maxBatchSize >= Int.MaxValue / 2) {
Contributor:
Why increase maxBatchSize and handle corner cases like Int.MaxValue / 2? Isn't it more intuitive to run the loop while events.size() < maxBatchSize?

Contributor (author):
It is too expensive to send empty batches, so we keep polling the channel until events are available. Each poll that comes back empty costs one loop iteration, so we bump maxBatchSize by 1 to make up for it. I realize it is counter-intuitive now that I read the code myself. I will use two variables so it is clearer.

Contributor:
No, I meant: why isn't the following logic sufficient?

while (events.size < maxBatchSize) {
  val eventOpt = Option(getChannel.take())
  // do whatever you do when eventOpt is not None
  if (eventOpt.isEmpty) {
    if (!events.isEmpty) {
      Thread.sleep(500)
    } else {
      loop.break()
    }
  }
}

Contributor (author):
Yep, a slightly modified version of this should be good (return when eventOpt is empty).
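Putting the thread's conclusion together, the agreed-upon shape of the loop is roughly the following. This is a fragment-level sketch only: it assumes the surrounding method's events, maxBatchSize, loop, getChannel, toCharSequenceMap and SparkSinkEvent, and the exact behavior is whatever landed in the follow-up commit.

// Sketch: bound the loop by how many events we have collected, not by a mutable counter.
while (events.size() < maxBatchSize) {
  Option(getChannel.take()) match {
    case Some(event) =>
      events.add(new SparkSinkEvent(toCharSequenceMap(event.getHeaders),
        ByteBuffer.wrap(event.getBody)))
    case None =>
      if (events.isEmpty) {
        // Nothing received yet: back off briefly rather than returning an empty batch.
        Thread.sleep(500)
      } else {
        // We already have some events: send what we have instead of waiting for a full batch.
        loop.break()
      }
  }
}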

// Random sanity check
throw new RuntimeException("Safety exception - polled too many times, no events!")
}
maxBatchSize += 1
Thread.sleep(500)
} else {
loop.break()
}
}
}
}
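Regarding the in-line comment above about for versus while: a Scala for-comprehension over 0 until maxBatchSize builds the Range once, so growing maxBatchSize inside the body adds no iterations, whereas a while loop re-reads the bound on every pass. A small self-contained demonstration (illustrative only):

object RangeVsWhile extends App {
  var limit = 3

  var forIterations = 0
  for (i <- 0 until limit) { // Range(0, 3) is materialized here, once
    forIterations += 1
    limit += 1               // has no effect on how many times this loop runs
  }

  limit = 3
  var whileIterations = 0
  var i = 0
  while (i < limit && whileIterations < 10) { // bound re-read each pass; capped for safety
    whileIterations += 1
    limit += 1
    i += 1
  }

  println(s"for: $forIterations iterations, while: $whileIterations iterations") // for: 3, while: 10
}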
@@ -283,7 +310,7 @@ class SparkSink() extends AbstractSink with Configurable {
null // No point rethrowing the exception
} finally {
// Must *always* release the caller thread
eventQueue.put(null)
eventQueue.put(ErrorEventBatch)
// In the case of success coming after the timeout, but before resetting the seq number
// remove the event from the map and then clear the value
resultQueue.clear()
@@ -22,7 +22,6 @@ import org.apache.spark.streaming.{TestSuiteBase, TestOutputStream, StreamingCon
import org.apache.spark.storage.StorageLevel
import scala.collection.mutable.{SynchronizedBuffer, ArrayBuffer}
import org.apache.spark.streaming.util.ManualClock
import java.nio.charset.Charset
import org.apache.flume.channel.MemoryChannel
import org.apache.flume.Context
import org.apache.flume.conf.Configurables
@@ -39,7 +38,7 @@ class FlumePollingReceiverSuite extends TestSuiteBase {
// Set up the streaming context and input streams
val ssc = new StreamingContext(conf, batchDuration)
val flumeStream: ReceiverInputDStream[SparkPollingEvent] =
FlumeUtils.createPollingStream(ssc, "localhost", testPort, 100, 5,
FlumeUtils.createPollingStream(ssc, "localhost", testPort, 100, 1,
StorageLevel.MEMORY_AND_DISK)
val outputBuffer = new ArrayBuffer[Seq[SparkPollingEvent]]
with SynchronizedBuffer[Seq[SparkPollingEvent]]
@@ -63,15 +62,17 @@ class FlumePollingReceiverSuite extends TestSuiteBase {
ssc.start()

val clock = ssc.scheduler.clock.asInstanceOf[ManualClock]
val input = Seq(1, 2, 3, 4, 5)
var t = 0
for (i <- 0 until 5) {
val tx = channel.getTransaction
tx.begin()
for (j <- 0 until input.size) {
for (j <- 0 until 5) {
channel.put(EventBuilder.withBody(
(String.valueOf(i) + input(j)).getBytes("utf-8"),
Map[String, String]("test-" + input(j).toString -> "header")))
String.valueOf(t).getBytes("utf-8"),
Map[String, String]("test-" + t.toString -> "header")))
t += 1
}

tx.commit()
tx.close()
Thread.sleep(500) // Allow some time for the events to reach
@@ -86,19 +87,30 @@ class FlumePollingReceiverSuite extends TestSuiteBase {
assert(timeTaken < maxWaitTimeMillis, "Operation timed out after " + timeTaken + " ms")
logInfo("Stopping context")
ssc.stop()
sink.stop()
channel.stop()

val decoder = Charset.forName("UTF-8").newDecoder()

assert(outputBuffer.size === 5)
val flattenedBuffer = outputBuffer.flatten
assert(flattenedBuffer.size === 25)
var counter = 0
for (i <- 0 until outputBuffer.size;
j <- 0 until outputBuffer(i).size) {
counter += 1
val eventToVerify = outputBuffer(i)(j).event
val str = decoder.decode(eventToVerify.getBody)
assert(str.toString === (String.valueOf(i) + input(j)))
assert(eventToVerify.getHeaders.get("test-" + input(j).toString) === "header")
for (i <- 0 until 25) {
val eventToVerify = EventBuilder.withBody(
String.valueOf(i).getBytes("utf-8"),
Map[String, String]("test-" + i.toString -> "header"))
var found = false
var j = 0
while (j < flattenedBuffer.size && !found) {
val strToCompare = new String(flattenedBuffer(j).event.getBody.array(), "utf-8")
if (new String(eventToVerify.getBody, "utf-8") == strToCompare &&
eventToVerify.getHeaders.get("test-" + i.toString)
.equals(flattenedBuffer(j).event.getHeaders.get("test-" + i.toString))) {
found = true
counter += 1
}
j += 1
}
}
assert (counter === 25)
}

}
1 change: 0 additions & 1 deletion project/SparkBuild.scala
@@ -23,7 +23,6 @@ import sbt.Keys._
import sbt.Task
import sbtassembly.Plugin._
import AssemblyKeys._
import sbtavro.SbtAvro._
import scala.Some
import scala.util.Properties
import org.scalastyle.sbt.ScalastylePlugin.{Settings => ScalaStyleSettings}