Summary: | [CTS] GTF-GL46.gtf32.GL3Tests.packed_pixels.packed_pixels tests fail on 32bit Mesa | |
---|---|---|---
Product: | Mesa | Reporter: | Mark Janes <mark.a.janes>
Component: | Drivers/DRI/i965 | Assignee: | Kenneth Graunke <kenneth>
Status: | RESOLVED FIXED | QA Contact: | Intel 3D Bugs Mailing List <intel-3d-bugs>
Severity: | normal | |
Priority: | medium | CC: | agoldmints, agomez, mattst88
Version: | unspecified | |
Hardware: | Other | |
OS: | All | |
Whiteboard: | | |
i915 platform: | | i915 features: |
Bug Depends on: | | |
Bug Blocks: | 102590 | |
Description
Mark Janes
2017-12-27 15:15:44 UTC
This basically boils down to _mesa_lroundevenf(0.571428597f * 0xffffffffu) returning wildly different results on 64-bit and 32-bit:

64-bit: result = 0x92492500
32-bit: result = 0x80000000

This happens even if I use the lrintf path directly instead of the SSE intrinsics.

----------------------------------

How to arrive at this conclusion:

1. Simplify the test to do less work:
   a. Comment out all "formats" other than GL_RED.
   b. Comment out all "types" other than GL_SHORT and GL_UNSIGNED_INT.
2. Break in get_tex_rgba_uncompressed. We should be doing an R16_SNORM -> R32_UNORM conversion here.
3. Note the conversion to RGBA32_FLOAT for transfer ops.
4. Observe that _mesa_format_convert (texgetimage.c:546) has the same float source data in both 32-bit and 64-bit builds, but produces different R32_UNORM result data.

Oh right, because that value is larger than a signed long (32-bit). llrintf() works, of course. So perhaps _mesa_float_to_unorm needs to use llrintf() if dst_bits == 32?

Fixed by this: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1761

Merged:

commit e18cd5452aa4434fb22105eb939843381771b91c
Author: Kenneth Graunke <kenneth@whitecape.org>
Date:   Fri Aug 23 11:10:30 2019 -0700

    mesa: Fix _mesa_float_to_unorm() on 32-bit systems.
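For illustration, here is a minimal standalone reproduction of the overflow described above (this program is an assumption drawn from the analysis, not code from the bug or from Mesa):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
   /* The float product rounds to 2454267136.0f == 0x92492500, which is
    * larger than LONG_MAX on systems where long is 32 bits wide. */
   float x = 0.571428597f * 0xffffffffu;

   /* When the rounded result does not fit in long, lrintf()'s result is
    * unspecified; on 32-bit x86 the conversion yields 0x80000000. */
   printf("lrintf:  0x%lx\n",  (unsigned long)lrintf(x));

   /* long long is at least 64 bits, so the rounded value always fits. */
   printf("llrintf: 0x%llx\n", (unsigned long long)llrintf(x));
   return 0;
}
```

On a 64-bit Linux build (where long is 64 bits) both lines should print 0x92492500; on a 32-bit build the lrintf() line prints 0x80000000, matching the two results quoted above.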
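And for reference, a sketch of the shape of the fix, rounding through llrintf() as the comment suggests. The function name and structure here are assumptions for illustration; see the merge request above for Mesa's actual patch:

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical sketch: convert a float in [0, 1] to an unsigned
 * normalized integer with dst_bits of precision.  Rounding through
 * llrintf()'s long long result means a 32-bit unorm still fits even
 * on systems where long is only 32 bits wide. */
static inline uint32_t
float_to_unorm(float x, unsigned dst_bits)
{
   /* 1u << 32 is undefined behavior, so special-case the full range. */
   const uint32_t max = (dst_bits == 32) ? 0xffffffffu
                                         : (1u << dst_bits) - 1;
   if (x <= 0.0f)
      return 0;
   if (x >= 1.0f)
      return max;
   /* x is strictly below 1.0f here, so the rounded product stays in
    * range; llrintf() returns long long, unlike lrintf()'s long, so
    * the intermediate cannot be truncated on a 32-bit build. */
   return (uint32_t)llrintf(x * (float)max);
}
```

On a 64-bit build the lrintf() and llrintf() paths behave identically, which matches the observation that only the 32-bit build produces wrong R32_UNORM data.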