[SPARK-12073] [Streaming] backpressure rate controller consumes events preferentially from lagging partitions #10089
Conversation
Why was sendMessages moved here?
The Kafka messages don't actually need to be sent three times; once is sufficient for all tests. When I send "foo" 200 times in one batch, they all go to the same partition. However, when I do this 3 times (for each of 100, 50, 20), the batches of 200 go to a random partition each time. I suspect something in how the test Kafka cluster does the partitioning.
I was usually getting 200 on 1 partition, and 400 on the other 2. I was explicitly changing the test case to "all the messages are on one partition" since I can't control the split deterministically.
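The behavior described above is what a key-hash partitioner produces. A toy sketch of the general idea (this is an illustration only, not Kafka's actual hash function):

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Toy key-hash partitioner: the same key always maps to the same partition."""
    return zlib.crc32(key.encode()) % num_partitions

# Sending "foo" 200 times puts every message on one partition...
foo_partitions = {partition_for("foo", 3) for _ in range(200)}

# ...while 200 distinct keys spread across partitions.
spread = {partition_for(str(k), 3) for k in range(1, 201)}
```

With distinct keys the test data lands on multiple partitions, which is closer to the imbalanced scenario the suite wants to exercise.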
While I haven't done much work with Kafka, it seems like we could explicitly set the producer's partitioner in the producerConfiguration to round-robin if we wanted (though that requires some custom code from what I can tell), or rotate the partition key.
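Rotating the partition key could look something like this sketch (a hypothetical helper, not part of the test suite):

```python
from itertools import cycle

def rotating_keys(num_partitions: int, count: int) -> list:
    """Generate message keys that cycle through a fixed key set, so a
    key-hash partitioner spreads messages evenly across partitions."""
    keys = cycle(str(p) for p in range(num_partitions))
    return [next(keys) for _ in range(count)]

keys = rotating_keys(3, 6)
# keys == ['0', '1', '2', '0', '1', '2']
```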
Not sure if you're just talking about testing, or the main code. For production use, the complication is that how messages are partitioned into Kafka is configurable at the time you're producing the messages, so the configuration of a Spark partitioner would have to match.
On Thu, Dec 10, 2015 at 2:14 PM, Holden Karau wrote:
In external/kafka/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala (#10089):
@@ -364,8 +365,8 @@ class DirectKafkaStreamSuite
  val batchIntervalMilliseconds = 100
  val estimator = new ConstantEstimator(100)
- val messageKeys = (1 to 200).map(_.toString)
- val messages = messageKeys.map((_, 1)).toMap
+ val messages = Map("foo" -> 200)
+ kafkaTestUtils.sendMessages(topic, messages)
Oh yeah, only for testing. This was in response to switching the test to putting all messages on a single partition, which seemed limiting for testing code that moves us to handling each partition's backpressure instead of a single global limit.
In the original code, as I understand it, since the Kafka test setup wasn't cleared/reinitialized between testing rounds, only the first batch of 200 messages, produced in the first of 3 test rounds, was ever consumed. The other testing rounds produced messages that were never used. My changes aside, I think moving the test message generation outside of the individual test rounds makes the most sense.
This failure scenario depended on imbalanced Kafka partitions; would you prefer to see tests for both a balanced and an imbalanced scenario?
This generally looks sensible to me; I'd like to see if it solves your issue first. Thanks for working on it.

Sounds good. If you can add that maxRatePerPartition handling this would be

ffb033f to b58d517 Compare
LGTM

Could someone verify the patch please? @tdas perhaps?
The original code (before this patch) has a serious error: it doesn't respect maxRateLimitPerPartition when the backpressure rate is smaller than the number of partitions. In that case effectiveRateLimitPerPartition equals 0 (because limit / numPartitions == 0), and consequently all messages from the topic are consumed at once. According to my tests, this patch fixes the error.
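The integer-division failure mode described above can be illustrated in a few lines (a simplified sketch with hypothetical names, not Spark's actual code; the `max(1, ...)` guard is just one way to keep the cap effective, not necessarily the approach this PR takes):

```python
def per_partition_limit_buggy(limit: int, num_partitions: int) -> int:
    # Integer division: when limit < num_partitions this yields 0,
    # which downstream code can treat as "no limit at all".
    return limit // num_partitions

def per_partition_limit_guarded(limit: int, num_partitions: int) -> int:
    # Keep at least 1 message per partition so the cap never collapses to 0.
    return max(1, limit // num_partitions)

# A backpressure rate of 10 spread across 32 partitions:
buggy = per_partition_limit_buggy(10, 32)      # 0: the rate limit silently vanishes
guarded = per_partition_limit_guarded(10, 32)  # 1: the cap stays in force
```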
Jenkins, test this please

Is this a CI failure? I don't have access rights to see what happened. I'd like to get this ready to merge; could someone give me a hand?

The console output seems to be available without logging in, and it looks like a Jenkins issue rather than an actual test failure: GitHub pull request #10089 of commit b58d51767f0370c65fa65dfb15416b75fa914d05 automatically merged.
retest this please

1 similar comment

retest this please

Test build #47938 has finished for PR 10089 at commit
This looked like a good patch; did it just fall through the cracks? The mima test failure was probably just due to the change in signature of KafkaTestUtils.createTopic; it should be OK to add an exclude for that, since it's a testing class. Jason, if you don't have time to deal with it, let me know and I can fix it.

I'm not sure how to handle the mima test failure; could you point me to where to add the exclude? I'll rebase and add the exception.

retest this please

Test build #50983 has finished for PR 10089 at commit
Sure, it's in project/MimaExcludes.scala. You should be able to match up

Thanks!

b58d517 to c37f6c1 Compare
@koeninger I've added the two error messages from Jenkins to the MimaExcludes, under the v2.0 section. There was one from the test function, but this PR also changes the signature of the protected method

I think that should be ok.
nit: this can fit on one line
Test build #51451 has finished for PR 10089 at commit

Changes addressed, but it looks like this is causing the PySpark unit tests to fail. Investigating...

From Java clients' POV, the signature of

A single-argument version is easy enough, and good for backwards compatibility anyway. I haven't been able to get the PySpark tests running locally yet, so I'm afraid that's a blind fix.

Test build #51457 has finished for PR 10089 at commit

It looks like the PySpark unit test failure above was prior to my commit adding a backwards-compatible single-argument version, as suggested. Could someone kick it again?
Could you remove the default value 1, since you added the overloaded version?
retest this please |
Remove this comment. It's a bit misleading. Since the tests won't start the StreamingContext, the core number is irrelevant.
@JasonMWhite thanks, looks great except for some nits.

Test build #51525 has finished for PR 10089 at commit
@JasonMWhite it looks like this failed the "offset recovery" test in DirectKafkaStreamSuite. Are you able to reproduce that test failure locally?

…agesPerPartition to MimaExcludes

f19f746 to 0a78e8d Compare
Addressed @zsxwing's comments also. Waiting for the test build to complete.

Test build #52299 has finished for PR 10089 at commit

Test build #52304 has finished for PR 10089 at commit

All comments addressed, builds cleanly, all tests passing. Good to merge?

LGTM. Thanks for following up on this.

LGTM. Merging to master. Thanks @JasonMWhite and @koeninger
[SPARK-12073] [Streaming] backpressure rate controller consumes events preferentially from lagging partitions

I'm pretty sure this is the reason we couldn't easily recover from an unbalanced Kafka partition under heavy load when using backpressure. `maxMessagesPerPartition` calculates an appropriate limit for the message rate from all partitions, and then divides by the number of partitions to determine how many messages to retrieve per partition. The problem with this approach is that when one partition is behind by millions of records (due to random Kafka issues), but the rate estimator calculates that only 100k total messages can be retrieved, each partition (out of, say, 32) only retrieves at most 100k/32 = 3125 messages.

This PR (still needing a test) determines a per-partition desired message count by using the current lag for each partition to preferentially weight the total message limit among the partitions. In this situation, if each partition gets 1k messages, but 1 partition starts 1M behind, then the total number of messages to retrieve is (32 * 1k + 1M) = 1032000 messages, of which the one partition needs 1001000. So it gets (1001000 / 1032000) = 97% of the 100k messages, and the other 31 partitions share the remaining 3%.

Assuming all of the 100k messages are retrieved and processed within the batch window, the rate calculator will increase the number of messages to retrieve in the next batch, until it reaches a new stable point or the backlog is finished processing.

We're going to try deploying this internally at Shopify to see if this resolves our issue.

@tdas @koeninger @holdenk

Author: Jason White <[email protected]>

Closes apache#10089 from JasonMWhite/rate_controller_offsets.
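The lag-weighted split described above can be modeled in a few lines (a simplified sketch of the weighting idea, not the actual implementation in the Kafka direct stream):

```python
def allocate_by_lag(total_limit: int, lags: dict) -> dict:
    """Split a total message budget across partitions in proportion
    to each partition's current lag (unconsumed messages)."""
    total_lag = sum(lags.values())
    if total_lag == 0:
        return {p: 0 for p in lags}
    # Integer arithmetic throughout, so the result is deterministic.
    return {p: total_limit * lag // total_lag for p, lag in lags.items()}

# 32 partitions: 31 with 1k of lag, one that started 1M behind (1M + 1k).
lags = {p: 1_000 for p in range(31)}
lags[31] = 1_001_000
plan = allocate_by_lag(100_000, lags)
# plan[31] == 96996 (~97% of the budget); each other partition gets 96.
```

A flat split would have given the lagging partition only 100000 // 32 = 3125 messages per batch, so it could never catch up while the others idled.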

