Investigation at Sun of performance problems with the onboard RageXL on Ultra 20 systems (which use the Tyan S2865 motherboard with an AMD64 CPU) showed that performance improved when the RageXL memory was not used for offscreen pixmaps, since accessing them over the PCI33 bus to the RageXL was the bottleneck. The patch I'm about to attach to this bug was created by Edward Shu of our x86 video team to provide an xorg.conf option to disable offscreen pixmap usage on mach64 chipsets.
Created attachment 4601: Patch to make use of off-screen pixmaps configurable
You can also just use the existing "XAANoOffscreenPixmaps" option.
I need offscreen pixmaps to be disabled by default on the RageXL, which is usually connected to a PCI 33 bus. The reason is that in most cases the end user does not want to have to touch the xorg.conf file, and offscreen pixmaps may still be efficient on other video cards. So I created a new option, only for the RageXL, for the case where a user must enable offscreen pixmaps even though they are not efficient on the RageXL.
would it not be easier to just write out XaaNoOffscreenPixmaps in the (Open)Solaris tool that generates xorg.conf?
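For what it's worth, the generated file would only need something like the following in the card's Device section (the identifier and driver name here are just placeholders, not what the Solaris tool actually emits):

Section "Device"
    Identifier  "Card0"
    Driver      "ati"
    Option      "XaaNoOffscreenPixmaps" "true"
EndSection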
(In reply to comment #4)
> would it not be easier to just write out XaaNoOffscreenPixmaps in the
> (Open)Solaris tool that generates xorg.conf?

Do you mean write out "XaaNoOffscreenPixmaps" with the bool value "True"? That would disable offscreen pixmaps by default in Solaris on all kinds of video cards, not just the RageXL. People may want offscreen pixmaps by default on their non-RageXL video cards.
(In reply to comment #2)
> You can also just use the existing "XAANoOffscreenPixmaps" option.

XAANoOffscreenPixmaps implies that XAA enables offscreen pixmaps by default. However, I noticed that offscreen pixmaps are not very efficient on several video cards, so I propose a new option to disable them on the RageXL by default. Other drivers for which they are inefficient could also use this option to disable them by default.

A particularly inefficient operation is composing one big pixmap out of several smaller pixmaps: the pixmap data moves back and forth between system memory and the video card. The following sequence happens:

1. The user application creates the small pixmaps and XAA allocates space for them in offscreen video memory.

2. The user application calls XPutImage or XShmPutImage to draw image data into these pixmaps. Note that the image data is moved from system memory to video memory.

3. The user application calls XCopyArea to copy the small pixmaps into the big pixmap. Two cases here:

   1. The big pixmap is stored in system memory. XAA copies the smaller pixmap from the video card back to system memory, which consumes a lot of extra bus bandwidth. Note that the image data is moved back from video memory to system memory.

   2. The big pixmap is stored in the video card's offscreen memory. This is even worse, because XAA does not support accelerated bitblt from offscreen pixmap to offscreen pixmap and drops to the fallback function "fbCopyArea", which moves the image data back and forth. Note that the image data is moved twice here.

Finally the application copies the big pixmap to a visible window. From the above we can see that XAA wastes a lot of bus bandwidth transferring the image data.

BTW: why doesn't XAA support accelerated bitblt from offscreen pixmap to offscreen pixmap? Is there any deep reason behind that? Code in hw/xfree86/xaa/xaaCpyArea.c:
----------------------------------------------------------------------------
RegionPtr
XAACopyArea(
    DrawablePtr pSrcDrawable,
    DrawablePtr pDstDrawable,
    GC *pGC,
    int srcx, int srcy,
    int width, int height,
    int dstx, int dsty )
{
    XAAInfoRecPtr infoRec = GET_XAAINFORECPTR_FROM_GC(pGC);

    if(pDstDrawable->type == DRAWABLE_WINDOW) {
        /* Destination is a window. */
        if((pSrcDrawable->type == DRAWABLE_WINDOW) ||
           IS_OFFSCREEN_PIXMAP(pSrcDrawable)){
            if(infoRec->ScreenToScreenBitBlt &&
               CHECK_ROP(pGC,infoRec->ScreenToScreenBitBltFlags) &&
               CHECK_ROPSRC(pGC,infoRec->ScreenToScreenBitBltFlags) &&
               CHECK_PLANEMASK(pGC,infoRec->ScreenToScreenBitBltFlags))
                return (XAABitBlt( pSrcDrawable, pDstDrawable,
                        pGC, srcx, srcy, width, height, dstx, dsty,
                        XAADoBitBlt, 0L));
        } else {
            /* Source is a system-memory pixmap: try an accelerated image write. */
            if(infoRec->WritePixmap &&
               ((pDstDrawable->bitsPerPixel == pSrcDrawable->bitsPerPixel) ||
                 ((pDstDrawable->bitsPerPixel == 24) &&
                  (pSrcDrawable->bitsPerPixel == 32) &&
                  (infoRec->WritePixmapFlags & CONVERT_32BPP_TO_24BPP))) &&
               CHECK_ROP(pGC,infoRec->WritePixmapFlags) &&
               CHECK_ROPSRC(pGC,infoRec->WritePixmapFlags) &&
               CHECK_PLANEMASK(pGC,infoRec->WritePixmapFlags) &&
               CHECK_NO_GXCOPY(pGC,infoRec->WritePixmapFlags))
                return (XAABitBlt( pSrcDrawable, pDstDrawable,
                        pGC, srcx, srcy, width, height, dstx, dsty,
                        XAADoImageWrite, 0L));
        }
    } else if(IS_OFFSCREEN_PIXMAP(pDstDrawable)){
        /* Destination is an offscreen pixmap. */
        if((pSrcDrawable->type == DRAWABLE_WINDOW) ||
            IS_OFFSCREEN_PIXMAP(pSrcDrawable)){
            if(infoRec->ScreenToScreenBitBlt &&
               CHECK_ROP(pGC,infoRec->ScreenToScreenBitBltFlags) &&
               CHECK_ROPSRC(pGC,infoRec->ScreenToScreenBitBltFlags) &&
               CHECK_PLANEMASK(pGC,infoRec->ScreenToScreenBitBltFlags))
                return (XAABitBlt( pSrcDrawable, pDstDrawable,
                        pGC, srcx, srcy, width, height, dstx, dsty,
                        XAADoBitBlt, 0L));
        }
    }

    /* Everything else falls back to the software path. */
    return (XAAFallbackOps.CopyArea(pSrcDrawable, pDstDrawable, pGC,
                srcx, srcy, width, height, dstx, dsty));
}
----------------------------------------------------------------------------
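To make the client-side sequence described above concrete, here is a minimal Xlib sketch of the pattern (illustrative only; the sizes and the use of the default GC are arbitrary). With offscreen pixmaps enabled, the XPutImage pushes the data across the bus and the XCopyArea may pull it back again, which is the traffic pattern in the numbered steps above.

----------------------------------------------------------------------------
/* Minimal illustration of the inefficient pattern: draw into a small
 * pixmap with XPutImage, then compose it into a larger pixmap with
 * XCopyArea. */
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int     scr   = DefaultScreen(dpy);
    Window  root  = RootWindow(dpy, scr);
    int     depth = DefaultDepth(dpy, scr);
    GC      gc    = DefaultGC(dpy, scr);

    /* 1. Small pixmap: XAA may place this in offscreen video memory. */
    Pixmap small = XCreatePixmap(dpy, root, 64, 64, depth);
    /* The big pixmap that the small one is composed into. */
    Pixmap big   = XCreatePixmap(dpy, root, 512, 512, depth);

    /* Client-side image data for the small pixmap. */
    char   *data = calloc(64 * 64, 4);
    XImage *img  = XCreateImage(dpy, DefaultVisual(dpy, scr), depth,
                                ZPixmap, 0, data, 64, 64, 32, 0);

    /* 2. System memory -> video memory. */
    XPutImage(dpy, small, gc, img, 0, 0, 0, 0, 64, 64);

    /* 3. Copy into the big pixmap; depending on where "big" lives,
     *    the data may cross the bus again (or twice, via fbCopyArea). */
    XCopyArea(dpy, small, big, gc, 0, 0, 64, 64, 100, 100);

    XFlush(dpy);
    XDestroyImage(img);          /* also frees data */
    XFreePixmap(dpy, small);
    XFreePixmap(dpy, big);
    XCloseDisplay(dpy);
    return 0;
}
----------------------------------------------------------------------------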
shu, I don't know how your configurations are generated on (Open)Solaris, but the Ubuntu tool at least is aware of the card type and so on. So a simple:

if [ "x$DEVICE_IDENTIFIER" = "xATI Technologies, Inc. Rage XL" ]; then
    echo '    Option "XAANoOffscreenPixmaps"' >> $CONFIG
fi

would do the trick. If this is the case for you, then it would seem a far better idea than patching the driver for what is arguably a vendor-specific case, and one that certainly breaks expectations relative to the other drivers ...
Since Solaris started shipping Xorg (rather than XFree86) with Xorg 6.7, after David Dawes' autoconfig work went in, we generally ship Solaris with no xorg.conf at all and let autoconfig handle it. We do provide xorgconfig & xorgcfg for when autoconfig fails, but as Stuart explained at XDevConf & FOSDEM, we would rather work towards making autoconfig do the right thing as much as possible.

That said, I would still rather have the ATI Rage driver simply set the existing option when it is not already set in the config than define a new option name that duplicates the existing one. Is it possible for the driver to tell that XAANoOffscreenPixmaps is not set at all and, if so, set it to true? If it is already set to either true or false, that should still override the automatic setting, though.
(In reply to comment #8)
> That said, I would still rather have the ATI Rage driver simply set the
> existing option when it is not already set in the config than define a new
> option name that duplicates the existing one. Is it possible for the driver
> to tell that XAANoOffscreenPixmaps is not set at all and, if so, set it to
> true? If it is already set to either true or false, that should still
> override the automatic setting, though.

Offscreen pixmaps may be enabled by the driver by setting the OFFSCREEN_PIXMAPS flag in the ->Flags member of the XAAInfoRec. The XaaNo* option simply masks that bit off.

The real bug here is that X is unable to detect such an imbalanced system dynamically. We should be checking the relative performance of host-to-host blits versus card-to-card blits at runtime (or possibly just on first X start) and turning off offscreen pixmaps when they are, say, >=20% slower than host copies.
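For illustration only, a rough sketch of how a driver could combine the flag described above with Alan's suggestion. OPTION_XAA_NO_OFF_PIX, pOptions and IsRageXLOnPCI33() are hypothetical names for this sketch, not symbols from the real mach64 driver:

----------------------------------------------------------------------------
/* Hypothetical sketch: pOptions is the driver's OptionInfoRec table after
 * xf86ProcessOptions(); OPTION_XAA_NO_OFF_PIX would be the token for the
 * existing "XaaNoOffscreenPixmaps" option, and IsRageXLOnPCI33() a made-up
 * helper for the chip/bus check. */
Bool no_offscreen;

if (pOptions[OPTION_XAA_NO_OFF_PIX].found) {
    /* The user set the option explicitly: respect it either way. */
    no_offscreen = xf86ReturnOptValBool(pOptions, OPTION_XAA_NO_OFF_PIX, FALSE);
} else {
    /* Option absent: default to no offscreen pixmaps on a RageXL behind
     * a 33 MHz PCI bus, where the readback traffic dominates. */
    no_offscreen = IsRageXLOnPCI33(pScrn);
}

if (!no_offscreen)
    infoRec->Flags |= OFFSCREEN_PIXMAPS;  /* let XAA put pixmaps in video memory */
----------------------------------------------------------------------------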
Sorry about the phenomenal bug spam, guys. Adding xorg-team@ to the QA contact so bugs don't get lost in future.
Is this still a problem? In particular, is XAA still the preferred path with modern X servers on this chipset and target? Closing for now, feel free to re-open.