
Conversation

@JasonMWhite

… preferentially from lagging partitions

I'm pretty sure this is the reason we couldn't easily recover from an unbalanced Kafka partition under heavy load when using backpressure.

`maxMessagesPerPartition` calculates an appropriate limit for the message rate across all partitions, and then divides by the number of partitions to determine how many messages to retrieve per partition. The problem with this approach is that when one partition is behind by millions of records (due to random Kafka issues) but the rate estimator calculates that only 100k total messages can be retrieved, each partition (out of, say, 32) retrieves at most 100k/32 = 3125 messages.

This PR (still needing a test) determines a per-partition desired message count by using the current lag for each partition to preferentially weight the total message limit among the partitions. In this situation, if each partition has 1k messages to retrieve, but one partition starts 1M behind, then the total number of messages to retrieve is (32 * 1k + 1M) = 1,032,000, of which the lagging partition needs 1,001,000. So it gets (1,001,000 / 1,032,000) ≈ 97% of the 100k messages, and the other 31 partitions share the remaining 3%.

Assuming all 100k of the messages are retrieved and processed within the batch window, the rate calculator will increase the number of messages to retrieve in the next batch, until it reaches a new stable point or the backlog has been fully processed.
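To make the weighting concrete, here is a minimal sketch of the lag-proportional split described above (illustrative only, not the code in this PR; the names `weightedLimits`, `lagPerPartition`, and `totalBudget` are made up for the example):

```scala
// Illustrative sketch: split a total message budget across partitions in
// proportion to each partition's current lag (latest offset - current offset).
def weightedLimits(lagPerPartition: Map[Int, Long], totalBudget: Long): Map[Int, Long] = {
  val totalLag = lagPerPartition.values.sum
  lagPerPartition.map { case (partition, lag) =>
    val share =
      if (totalLag > 0) totalBudget.toDouble * lag / totalLag        // lag-proportional share
      else totalBudget.toDouble / math.max(lagPerPartition.size, 1)  // no backlog: even split
    partition -> share.toLong
  }
}
```

With the other 31 partitions 1k behind each and the lagging partition 1,001k behind, a 100k budget gives the lagging partition roughly 97k messages while the rest share the remaining ~3k, matching the numbers above.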

We're going to try deploying this internally at Shopify to see if this resolves our issue.

@tdas @koeninger @holdenk

Contributor

Why was sendMessages moved to here?

Author

The Kafka messages don't actually need to be sent three times; sending them once is sufficient for all tests. When I send "foo" 200 times in one batch, they all go to the same partition. However, when I do this 3 times (for each of 100, 50, 20), each batch of 200 goes to a random partition. I suspect it's something in how the test Kafka cluster does the partitioning.

I was usually getting 200 on 1 partition and 400 on the other 2. I explicitly changed the test case to "all the messages are on one partition" since I can't control the split deterministically.

Contributor

While I haven't done much work with Kafka, it seems like we could maybe explicitly set the partitioner for the producer in the producerConfiguration to be round-robin if we wanted to (although that requires some custom code from what I can tell), or rotate the partition key.
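For the "rotating the partition key" option in the test, a sketch like the following (reusing the test's existing `kafkaTestUtils` and `topic`, and the `Map[key -> count]` form of `sendMessages` the suite already uses) would let the producer's default hash partitioner spread messages across partitions:

```scala
// Test-only sketch: distinct keys instead of sending "foo" 200 times, so the
// default hash partitioner scatters the 200 messages over the partitions
// rather than placing them all on one.
val spreadMessages = (1 to 200).map(i => i.toString -> 1).toMap
kafkaTestUtils.sendMessages(topic, spreadMessages)
```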

Contributor

Not sure if you're just talking about testing, or the main code. For production use, the complication is that how messages are partitioned into Kafka is configurable at the time you're producing the messages, so the configuration of the Spark partitioner would have to match.


In external/kafka/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala (#10089 diff context):

@@ -364,8 +365,8 @@ class DirectKafkaStreamSuite
    val batchIntervalMilliseconds = 100
    val estimator = new ConstantEstimator(100)
-   val messageKeys = (1 to 200).map(_.toString)
-   val messages = messageKeys.map((_, 1)).toMap
+   val messages = Map("foo" -> 200)
+   kafkaTestUtils.sendMessages(topic, messages)


Contributor

Oh yeah, only for testing - this was in response to switching the test to put all messages on a single partition (which seemed limiting for testing code that changes us to handling each partition's backpressure instead of a single global one).

Author

In the original code, as I understand it, since the Kafka test setup wasn't cleared/reinitialized between testing rounds, only the first batch of 200 messages, produced in the first of 3 test rounds, was ever consumed. The other testing rounds produced messages that were never used. My changes aside, I think moving the test message generation outside of the individual test rounds makes the most sense.

This failure scenario depended on imbalanced Kafka partitions; would you prefer to see tests for both a balanced and an imbalanced scenario?

@koeninger
Contributor

This generally looks sensible to me; I'd like to see if it solves your issue first. Thanks for working on it.

@JasonMWhite
Author

This patch solved our skew problem. Below is a 15-minute snapshot of our lag earlier this week, showing a single partition getting slowly worse. It would get to about 8 million messages behind overnight.

Chart: https://monosnap.com/file/VG0rXFZn05bKIIDLFIHOxfNOWKOxy5

And the most recent 24 hour period, after this patch went live on our system.

Chart: https://monosnap.com/file/DMSzLP8lFKJ0DxHuiWcxZHKqVeJHWL

@koeninger
Contributor

Sounds good. If you can add that maxRatePerPartition handling, this would be ready to go from my point of view.


@JasonMWhite force-pushed the rate_controller_offsets branch 5 times, most recently from ffb033f to b58d517, on December 6, 2015 at 00:34
@JasonMWhite
Author

maxRatePerPartition now respects the limit set by maxRateLimitPerPartition, if it is set. Let me know if you think any additional tests are needed.
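For reference, a minimal sketch of that behaviour (the identifier names here are illustrative, not necessarily those used in the patch): the backpressure-derived per-partition estimate is capped by the configured per-partition maximum when one is set.

```scala
// Illustrative only: combine a backpressure-derived per-partition rate with a
// configured maximum (e.g. spark.streaming.kafka.maxRatePerPartition), if set.
def effectiveRatePerPartition(estimated: Option[Long], configuredMax: Long): Option[Long] =
  (estimated, configuredMax) match {
    case (Some(est), max) if max > 0 => Some(math.min(est, max)) // respect the configured cap
    case (Some(est), _)              => Some(est)                // no cap configured
    case (None, max) if max > 0      => Some(max)                // no estimate yet: use the cap
    case _                           => None                     // no limit at all
  }
```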

@koeninger
Contributor

LGTM

@JasonMWhite
Author

Could someone verify the patch please? @tdas perhaps?

@mrszg

mrszg commented Dec 10, 2015

The original code (before this patch) has a serious error: it doesn't respect maxRateLimitPerPartition when the backpressure rate is smaller than the number of partitions. In that case effectiveRateLimitPerPartition equals 0 (because limit / numPartitions == 0) and, as a consequence, all messages from the topic are consumed at once. According to my tests, this patch fixes the error.
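A tiny sketch of the failure mode being described (the numbers are hypothetical):

```scala
// Pre-patch failure mode: integer division rounds a small global limit down to zero.
val limit = 20L                                // backpressure-estimated total rate
val numPartitions = 32
val perPartitionLimit = limit / numPartitions  // == 0 because of integer division
// If a per-partition limit of 0 is then treated as "no limit", every partition
// reads its entire backlog in a single batch.
```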

@zsxwing
Member

zsxwing commented Dec 14, 2015

Jenkins, test this please

@JasonMWhite
Author

Is this a CI failure? I don't have access rights to see what happened. I'd like to get this ready to merge, could someone give me a hand?

@koeninger
Contributor

The console output seems like it's available without logging in, and looks like a jenkins issue rather than an actual test failure:

GitHub pull request #10089 of commit b58d51767f0370c65fa65dfb15416b75fa914d05 automatically merged.
[EnvInject] - Loading node environment variables.
Building remotely on amp-jenkins-worker-07 (centos spark-test) in workspace /home/jenkins/workspace/SparkPullRequestBuilder

git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
git --version # timeout=10
git fetch --tags --progress https://github.com/apache/spark.git +refs/pull/10089/:refs/remotes/origin/pr/10089/ # timeout=15
Build was aborted
Aborted by anonymous
ERROR: Step 'Archive the artifacts' failed: no workspace for SparkPullRequestBuilder #47665
ERROR: Step 'Publish JUnit test result report' failed: no workspace for SparkPullRequestBuilder #47665
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/47665/
Test FAILed.
ERROR: amp-jenkins-worker-07 is offline; cannot locate JDK 7u60

@zsxwing
Member

zsxwing commented Dec 17, 2015

retest this please


@SparkQA

SparkQA commented Dec 17, 2015

Test build #47938 has finished for PR 10089 at commit b58d517.

  • This patch fails MiMa tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@koeninger
Contributor

This looked like a good patch. Did it just fall through the cracks? The MiMa test failure was probably just due to the change in signature of KafkaTestUtils.createTopic; it should be OK to add an exclude for that since it's a testing class.

Jason if you don't have time to deal with it let me know + I can fix it

@JasonMWhite
Author

I'm not sure how to handle the mima test failure, could you point me to where to add the exclude? I'll rebase and add the exception.

@zsxwing
Member

zsxwing commented Feb 9, 2016

retest this please

@SparkQA

SparkQA commented Feb 9, 2016

Test build #50983 has finished for PR 10089 at commit b58d517.

  • This patch fails MiMa tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@koeninger
Contributor

Sure, it's in project/MimaExcludes.scala. You should be able to match up the problem type and class in the error message from Jenkins when adding an appropriate ProblemFilters.exclude line.
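As a rough illustration (the exact problem type and fully-qualified name have to come from the Jenkins error message, so treat these as placeholders), an entry in project/MimaExcludes.scala looks something like:

```scala
// Placeholder MiMa exclude: substitute the problem type and class/method name
// reported by the failing MiMa check.
ProblemFilters.exclude[IncompatibleMethTypeProblem](
  "org.apache.spark.streaming.kafka.KafkaTestUtils.createTopic")
```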


@JasonMWhite
Author

Thanks!

@JasonMWhite force-pushed the rate_controller_offsets branch from b58d517 to c37f6c1 on February 16, 2016 at 22:08
@JasonMWhite
Author

@koeninger I've added the two error messages from Jenkins to the MimaExcludes, under the v2.0 section. There was one from the test function, but this PR also changes the signature of the protected method maxMessagesPerPartition.

@koeninger
Contributor

I think that should be ok.


Member

nit: this can fit on one line

@SparkQA

SparkQA commented Feb 18, 2016

Test build #51451 has finished for PR 10089 at commit 73e9ae3.

  • This patch fails PySpark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@JasonMWhite
Author

Changes addressed, but looks like it's causing PySpark unit tests to fail. Investigating...

@JoshRosen
Contributor

py4j.Py4JException: Method createTopic([class java.lang.String]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:335)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:344)
    at py4j.Gateway.invoke(Gateway.java:279)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:745)

From the Java client's POV, the signature of createTopic() in KafkaTestUtils has changed, so you'll either need to explicitly provide the extra argument from the Python caller or add back a single-argument version for backwards compatibility.
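A sketch of the kind of backwards-compatible single-argument version being suggested (assuming the new signature added a partition count; the class and parameter names here are illustrative, not the actual KafkaTestUtils code):

```scala
// Sketch only: keep a single-argument method delegating to the new two-argument one,
// so existing callers (such as the Python tests) keep working unchanged.
class KafkaTestUtilsSketch {
  def createTopic(topic: String, partitions: Int): Unit = {
    println(s"creating topic $topic with $partitions partition(s)") // stand-in for real topic creation
  }

  def createTopic(topic: String): Unit = createTopic(topic, partitions = 1)
}
```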

@JasonMWhite
Author

A single-argument version is easy enough, and good to support backwards-compatibility anyway. I haven't been able to get PySpark tests running locally yet, so I'm afraid that's a blind fix.

@SparkQA

SparkQA commented Feb 18, 2016

Test build #51457 has finished for PR 10089 at commit 7a5dad3.

  • This patch fails PySpark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@JasonMWhite
Author

It looks like the PySpark unit test failure above was from before my commit adding a backwards-compatible single-argument version, as suggested. Could someone kick it again?

Member

Could you remove the default value 1 since you added the overload version?

@zsxwing
Member

zsxwing commented Feb 19, 2016

retest this please

Member

Remove this comment. It's a bit misleading. Since the tests won't start the StreamingContext, the core number is irrelevant.

@zsxwing
Member

zsxwing commented Feb 19, 2016

@JasonMWhite thanks, looks great except some nits.

@SparkQA

SparkQA commented Feb 19, 2016

Test build #51525 has finished for PR 10089 at commit f19f746.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@koeninger
Contributor

@JasonMWhite looks like this failed the "offset recovery" test in DirectKafkaStreamSuite. Are you able to reproduce that test failure locally?

@JasonMWhite force-pushed the rate_controller_offsets branch from f19f746 to 0a78e8d on March 2, 2016 at 06:03
@JasonMWhite
Author

DirectKafkaStreamSuite passes all tests for me locally, but the test failure above appeared to be against an outdated SHA.

Addressed @zsxwing's comments also. Waiting for the test build to complete.

@SparkQA

SparkQA commented Mar 2, 2016

Test build #52299 has finished for PR 10089 at commit 0a78e8d.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Mar 2, 2016

Test build #52304 has finished for PR 10089 at commit a7a0877.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@JasonMWhite
Author

All comments addressed, builds cleanly, all tests passing. GTM?

@koeninger
Contributor

LGTM

Thanks for following up on this.

@zsxwing
Member

zsxwing commented Mar 5, 2016

LGTM. Merging to master. Thanks @JasonMWhite and @koeninger

@asfgit closed this in f19228e on Mar 5, 2016
@JasonMWhite deleted the rate_controller_offsets branch on March 5, 2016 at 01:57
roygao94 pushed a commit to roygao94/spark that referenced this pull request Mar 22, 2016