Conversation

@nurkiewicz

This test case fails because two ObservableCollapserGetWordForNumber commands are collapsed, both with the same request argument. The test passes with:

TestSubscriber<String> subscriber1 = getWordForNumber(contextAwareScheduler, 0);
TestSubscriber<String> subscriber2 = getWordForNumber(contextAwareScheduler, 1);

//then
subscriberReceived(subscriber1, 0);
subscriberReceived(subscriber2, 1);
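
For comparison, the failing variant, with both commands submitting the same argument, looks like this (reconstructed from the description above):

TestSubscriber<String> subscriber1 = getWordForNumber(contextAwareScheduler, 0);
TestSubscriber<String> subscriber2 = getWordForNumber(contextAwareScheduler, 0);

//then
subscriberReceived(subscriber1, 0);
subscriberReceived(subscriber2, 0);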

The error is:

java.lang.IllegalStateException:
No response set by ObservableCollapserGetWordForNumber 'mapResponseToRequests' implementation

It looks like requests with the same argument are silently ignored here: when the response later arrives, those requests are never populated.

I'm not sure whether this is a bug in Hystrix or in the example ObservableCollapserGetWordForNumber.

@cloudbees-pull-request-builder

NetflixOSS » Hystrix » Hystrix-pull-requests #437 SUCCESS
This pull request looks good

@mattrjacobs

Thanks for the test case along with the description, @nurkiewicz.

My thought here is that:

  1. If request-caching is implemented for the collapser, the 2nd-nth submission of the same argument should get immediately looked up in the request-cache.
  2. Otherwise, the request-cache data structure is not set up, so we can't use it.
    There's a problem if we submit the repeated argument to the batch multiple times: given a series of OnNext(value)s for that argument, there's no way to tell which of the duplicate submissions each value belongs to.
    Therefore, I think the only sensible thing to do is to dedupe the arguments within the batch, and then return the same stream of values to each instance of an argument (see the sketch after this list).
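
A minimal sketch of that dedupe-and-fan-out idea, written against Hystrix's CollapsedRequest interface (the dedupe/fanOut helper names are hypothetical, not Hystrix internals):

import com.netflix.hystrix.HystrixCollapser.CollapsedRequest;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Group collapsed requests by argument so each distinct argument is
// submitted to the batch command exactly once.
static Map<Integer, List<CollapsedRequest<String, Integer>>> dedupe(
        Collection<CollapsedRequest<String, Integer>> requests) {
    Map<Integer, List<CollapsedRequest<String, Integer>>> byArgument = new HashMap<>();
    for (CollapsedRequest<String, Integer> request : requests) {
        byArgument.computeIfAbsent(request.getArgument(), arg -> new ArrayList<>()).add(request);
    }
    return byArgument;
}

// On each OnNext(value) for an argument, fan the value out to every
// request instance that shares that argument.
static void fanOut(Map<Integer, List<CollapsedRequest<String, Integer>>> byArgument,
                   Integer argument, String value) {
    for (CollapsedRequest<String, Integer> request : byArgument.get(argument)) {
        request.emitResponse(value); // streaming setter on CollapsedRequest
    }
}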

Thoughts?

@mattrjacobs

Given the current structure of the HystrixObservableCollapser, I think there are 3 choices:

  1. If request-caching is enabled, then the only way to encounter this condition is 2 simultaneous batch-adds that happen before the cache-key is added to the request-cache. In this case, caching is clearly desired by the user, so we should prevent duplicate addition to the batch, and only submit the argument once. Then we must hook up the responses to all such arguments.

  2. Otherwise, request-caching is off. In this case, there are 2 possibilities:

2a) The presence of duplicate arguments in the batch is an error. The 2nd-nth such argument would receive an error.

2b) The presence of duplicate arguments in the batch is permitted. The 2nd-nth such argument will receive the same value as the first.

I'll add a new configuration option to allow the user to choose which of 2a/2b should apply.

Also, note that this design still never submits a duplicate argument to the underlying batch command; duplicates are deduped before submission. Supporting true duplicates within a batch would be a more involved refactoring, and since the only place it has come up is here (in the hystrix-examples module), I will defer it for now but track it separately.
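
For reference, whether option 1 or 2a/2b applies is controlled by the collapser's request-cache property. A sketch of the constructor wiring for a collapser like ObservableCollapserGetWordForNumber that opts out (the collapser key name here is illustrative):

import com.netflix.hystrix.HystrixCollapserKey;
import com.netflix.hystrix.HystrixCollapserProperties;
import com.netflix.hystrix.HystrixObservableCollapser;

// Inside the collapser subclass constructor: disabling request caching
// selects the 2a/2b behavior discussed above (it defaults to enabled).
super(HystrixObservableCollapser.Setter
        .withCollapserKey(HystrixCollapserKey.Factory.asKey("GetWordForNumber"))
        .andCollapserPropertiesDefaults(
                HystrixCollapserProperties.Setter().withRequestCacheEnabled(false)));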

Thoughts?

@mattrjacobs

After thinking about this more: since request caching defaults to enabled, a user explicitly disabling it is a strong enough signal that they do not want duplicate arguments to share the same unit of work.

Therefore, I will not add a new config option, and will instead use the request-cache-enabled property to choose between 2a/2b above. The only 2 cases will be:

A) If request caching is on, the 2nd-nth appearance of an argument in a batch results in the same value as the first (from the cache)

B) If request caching has been explicitly disabled, the 2nd-nth appearance of an argument in a batch results in an error. This forces the user not to place the same argument in the batch. If this proves problematic in practice, that's another issue to work on; I suspect it will not be.
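
A test-shaped sketch of A and B, reusing the helpers from the original report (the exact exception type in case B is an assumption):

// A) request caching on (the default): the duplicate argument is served
// from the cache, so both subscribers observe the same value
TestSubscriber<String> first = getWordForNumber(contextAwareScheduler, 0);
TestSubscriber<String> second = getWordForNumber(contextAwareScheduler, 0);
subscriberReceived(first, 0);
subscriberReceived(second, 0);

// B) request caching explicitly disabled: the duplicate submission fails fast
TestSubscriber<String> duplicate = getWordForNumber(contextAwareScheduler, 0);
duplicate.awaitTerminalEvent();
duplicate.assertError(IllegalArgumentException.class); // assumed error type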

@mattrjacobs

This test case passes now after merging #1278

@mattrjacobs merged commit 8f96fde into Netflix:master on Jul 13, 2016