Don't widen to Int128 in CastCache #92984
Conversation
Leftover change from the hackathon. The existing code widens to Int128 and then counts zeros. It still worked because we only use the zero count for bit shifting, and C# bit shifts mask the shift count, so counts of 64 or more wrap around. But this was bringing Int128 into `Runtime.Base` in my hackathon project.

Cc @dotnet/ilc-contrib
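For context, a minimal sketch of why the uncast call widens: the `Lzc` overloads below are hypothetical stand-ins for the `BitOperations.LeadingZeroCount` overload shapes (uint/ulong public, Int128 internal to CoreLib), not the actual code:

```csharp
using System;

class OverloadDemo
{
    // Illustrative stand-ins for the LeadingZeroCount overload shapes.
    static string Lzc(uint value)   => "uint";
    static string Lzc(ulong value)  => "ulong";
    static string Lzc(Int128 value) => "Int128";

    static void Main()
    {
        int size = 1024;
        // int has no implicit conversion to uint, ulong, or nuint, but it
        // does implicitly widen to Int128, so overload resolution picks it.
        Console.WriteLine(Lzc(size - 1)); // prints "Int128"
    }
}
```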
Tagging subscribers to this area: @agocke, @MichalStrehovsky, @jkotas
Review comment on src/libraries/System.Private.CoreLib/src/System/Runtime/CompilerServices/CastCache.cs:
```diff
  // Fibonacci hash reduces the value into desired range by shifting right by the number of leading zeroes in 'size-1'
- byte shift = (byte)BitOperations.LeadingZeroCount(size - 1);
+ byte shift = (byte)BitOperations.LeadingZeroCount((ulong)(size - 1));
```
Out of curiosity, why ulong instead of uint?
Originally this was:
```cpp
// Fibonacci hash reduces the value into desired range by shifting right by the number of leading zeroes in 'size-1'
DWORD bitCnt;
#if HOST_64BIT
    BitScanReverse64(&bitCnt, size - 1);
    HashShift(tableData) = (BYTE)(63 - bitCnt);
#else
    BitScanReverse(&bitCnt, size - 1);
    HashShift(tableData) = (BYTE)(31 - bitCnt);
#endif
```
I think this should be `ulong` on 64-bit and `uint` on 32-bit.
Or `nuint`, if we have such a LeadingZeroCount overload.
`size` is `int`, so `uint` makes the most sense here. Alternatively, this could call `Int32.LeadingZeroCount`, which would make the cast unnecessary.
Just a bit more explanation of the ulong vs. uint difference.
When reducing the hashed value to an array index we want to take the top N bits, where N is the number of significant bits in size-1. The hashed value is native-sized, so we shift it further on 64-bit.
E.g. for the size = 1024 case, the shift will be 53 on 64-bit and 21 on 32-bit.
It is also possible to just use uint and add 32 to the result on 64-bit.
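A minimal sketch of that reduction, assuming `size` is a power of two (illustrative names, not the actual CastCache code; the multiplier is the usual 64-bit Fibonacci constant, 2^64 divided by the golden ratio):

```csharp
using System;
using System.Numerics;

Console.WriteLine(ToIndex(0xDEADBEEFDEADBEEFul, 1024)); // some value in [0, 1024)

static int ToIndex(ulong hash, int size)
{
    // size - 1 has log2(size) significant bits, so shifting right by its
    // leading-zero count keeps the top log2(size) bits of the product.
    byte shift = (byte)BitOperations.LeadingZeroCount((ulong)(size - 1));

    // Fibonacci hashing: the multiply pushes entropy into the top bits,
    // and the shift reduces the result into [0, size).
    return (int)((hash * 11400714819323198485ul) >> shift);
}
```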
Ah, ok. I see why this should be nuint.
Does it mean that this is actually fixing a functional regression? The leading-zero count is wrong with the Int128 overload that this currently ends up calling.
> Does it mean that this is actually fixing a functional regression?
This code is not on a hot path (the shift is computed on table resize, which is rare), so the 128-bit math has no perf impact.
What saves us on the correctness side is that C# shift operators mask the shift count according to the operand size, so for a 64-bit operand the count is effectively taken modulo 64.
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/bitwise-and-shift-operators#shift-count-of-the-shift-operators
Basically, this ends up working correctly, but not intentionally.
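A quick illustration of that masking rule (not from the PR; this is just the documented C# behavior):

```csharp
using System;

ulong x = 0x8000_0000_0000_0000ul;

// For a 64-bit left operand, C# masks the shift count to its low 6 bits,
// so a count of 70 behaves exactly like 70 & 63 == 6.
Console.WriteLine((x >> 70) == (x >> 6)); // True
```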
Should we also delete the internal Int128/UInt128 overloads in BitOperations, and switch any code that needs them to use the equivalent public methods on Int128/UInt128?
Oh, I see, that's the answer to my question #92984 (comment) then... the Int128 overload is internal, not public. Yes, we should do that.
A similar fix may be needed in src/libraries/System.Private.CoreLib/src/System/Runtime/CompilerServices/GenericCache.cs (line 233 at b06b325).
I'm not doing this part because the equivalent public methods on Int128/UInt128 return Int128 and UInt128 respectively, apparently to cover the case when there are more than 128 leading zeros.
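For reference, a sketch of what calling the public method would look like (hypothetical usage, not code from the PR):

```csharp
using System;

// UInt128.LeadingZeroCount is generic-math API: it returns UInt128,
// not int, so a caller that wants a shift count must cast the result.
UInt128 value = 1023;
int shift = (int)UInt128.LeadingZeroCount(value);
Console.WriteLine(shift); // 118 (128 - 10 significant bits)
```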