This repository was archived by the owner on Mar 21, 2024. It is now read-only.

Conversation

@3gx (Contributor) commented Apr 29, 2016

This fixes regression #780, introduced in #777, without regressing the original fix #776.

@sdalton1 can you please verify this fixes your issues?

@jaredhoberock While the fix is being reviewed, I will work on adding unit-test coverage for this functionality. Don't merge just yet.

3gx added 2 commits April 29, 2016 14:26
When the argument is an iterator, the value is obtained by dereferencing the iterator; otherwise it is a pointer and the value is obtained via get_value.
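
(For reference, a minimal sketch of the shape this fix takes, reconstructed from the commit message and from the overload quoted in the review below; the pointer overload and the exact headers shown here are illustrative assumptions, not the verbatim patch.)

  // thrust/detail/get_iterator_value.h -- sketch, not the verbatim patch
  #include <thrust/detail/config.h>
  #include <thrust/execution_policy.h>
  #include <thrust/iterator/iterator_traits.h>
  #include <thrust/detail/type_traits/pointer_traits.h>

  namespace thrust { namespace detail {

  // Case 1: the argument is an iterator -- obtain the value by dereferencing it.
  template<typename DerivedPolicy, typename Iterator>
  __host__ __device__
  typename thrust::iterator_traits<Iterator>::value_type
  get_iterator_value(thrust::execution_policy<DerivedPolicy> &, Iterator it)
  {
    return *it;
  }

  // Case 2: the argument is a raw pointer -- obtain the value via get_value,
  // which is found through ADL and knows how to read through the pointer
  // under the given execution policy.
  template<typename DerivedPolicy, typename Pointer>
  __host__ __device__
  typename thrust::detail::pointer_traits<Pointer*>::element_type
  get_iterator_value(thrust::execution_policy<DerivedPolicy> &exec, Pointer *ptr)
  {
    return get_value(derived_cast(exec), ptr);
  }

  }} // namespace thrust::detail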
@jaredhoberock (Contributor):

Thanks Evghenii. Since this new function isn't a customization point (we don't want users to try to customize its behavior, and we don't want to ourselves, either), let's not organize it into thrust/system/detail/generic/.

How about thrust/detail/get_iterator_value.h?

Also, the way we should invoke it is:

detail::get_iterator_value(...)

Instead of

get_iterator_value(...)

Unlike get_value(), we don't want to call it through ADL.

@3gx (Contributor, Author) commented Apr 29, 2016

Thanks for the explanation; I now understand the directory hierarchy better. I've moved the implementation to thrust/detail/get_iterator_value.h and added unit tests for issues #780 & #776.

I think that's all, and it can be merged.

@sdalton1 (Contributor):

Works for me, thanks!

template<typename DerivedPolicy, typename Iterator>
__host__ __device__
typename thrust::iterator_traits<Iterator>::value_type
get_iterator_value(thrust::execution_policy<DerivedPolicy> &, Iterator it)
@jaredhoberock (Contributor) commented May 2, 2016:

I'm not sure this implementation will work as intended because it ignores the execution policy. For example, if the user provides a CUDA stream in the execution policy, that will be ignored, and this dereference will use the default stream.

Wouldn't it be more correct for the body of get_iterator_value to do something like call thrust::copy?

@3gx (Contributor, Author):

Why wouldn't it? Aren't true iterators supposed to be safe for the usual dereferencing? If not, what is the interface for dereferencing an iterator with an execution policy? This passes all the tests.

@jaredhoberock (Contributor) commented May 2, 2016:

I think that we intend for get_iterator_value() itself to be the interface to dereference an iterator with an execution policy. So, we have to build the implementation and ensure that it handles all cases correctly. I'm concerned that this implementation doesn't handle all cases correctly, namely, those cases where the user has provided a custom execution policy which is different from whatever is tagged in the iterator.

@3gx (Contributor, Author):

I am not following. What would be an example of dereferencing an iterator with an execution policy?

@3gx (Contributor, Author):

To add to this: it will be a non-issue because nothing changes for iterators: they are still dereferenced via operator*, as before. If users were hitting bugs with such dereferencing, we would have already known. We need a counterexample showing that it is unsafe now but worked before this change.

@jaredhoberock (Contributor):

Suppose I call max_element(par.on(my_stream), vec.begin(), vec.end()) and look at the visual profiler timeline of my application. What sort of synchronization behavior would I observe with this implementation?

I believe I would see that all CUDA streams in my application synchronize at the call to max_element because the iterator dereference inside of get_iterator_value() will use the default stream instead of the stream contained inside of my execution policy.

Remember that in addition to preventing a crash due to dereferencing a raw device pointer on the host, we also have to ensure that the user's execution policy is used for this dereference. The one overload's use of get_value() ensures both of those things happen, but the other overload's plain iterator dereference does not.

I think that using thrust::copy will solve this problem.
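
(For concreteness, the scenario described above would look roughly like the following; the stream name, data, and sizes are illustrative assumptions, not taken from this PR.)

  #include <cuda_runtime.h>
  #include <thrust/device_vector.h>
  #include <thrust/extrema.h>
  #include <thrust/system/cuda/execution_policy.h>

  int main()
  {
    thrust::device_vector<int> vec(1 << 20, 1);

    // User-provided stream that the algorithm is expected to run on.
    cudaStream_t my_stream;
    cudaStreamCreate(&my_stream);

    // The concern: any read performed internally by get_iterator_value()
    // should also happen on my_stream, not on the default stream.
    thrust::device_vector<int>::iterator it =
        thrust::max_element(thrust::cuda::par.on(my_stream), vec.begin(), vec.end());

    cudaStreamDestroy(my_stream);
    return 0;
  }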

@3gx (Contributor, Author) commented May 2, 2016:

I understand. However, this fix doesn't change the previously existing behaviour; it fixes a bug when a raw pointer is passed to max_element.

That being said, replacing *it with

  typename thrust::iterator_traits<Iterator>::value_type value;
  thrust::copy(exec, it, it + 1, &value);
  return value;

makes the reproducer die with

terminate called after throwing an instance of 'thrust::system::system_error'
  what():  an illegal memory access was encountered
Aborted (core dumped)

when used with transform_iterator, and I wasn't able to quickly nail down the issue. I agree that get_iterator_value needs to be enhanced to respect a custom user policy, but that may take more time.

I suggest either accepting this fix, or reverting #777 to make sure user code is not broken, and then continuing to work on this PR so that get_iterator_value respects the user's policy.

@jaredhoberock (Contributor):

Will merge this now and track the task of generalizing the implementation of get_iterator_value() elsewhere.
