I’ve just committed code to hopefully fix some of the remaining issues with RS4xx (XPRESS chips). If you have an XPRESS chip and have problems with dualhead, or display corruption, or DVI, please test the latest code in ati git master!
Archive for March, 2008
As some of you have probably heard, I’ve been working on adding full support for EXA composite for R3xx-R5xx cards. Well, as of last night, it lives. R3xx/R4xx support is pretty solid although there are some blend combinations that still need to be debugged. RS690 works, but only after running a 3D app like gears first. R5xx support works on some chips, but not on others, probably along the same lines as textured video. I think both may be related to the number of raster pipes that should be enabled for each chip. I haven’t merged it into master yet, but you can grab it from the r3xx-render branch of xf86-video-ati. To check it out:
git clone git://anongit.freedesktop.org/git/xorg/driver/xf86-video-ati
cd xf86-video-ati
git checkout -b r3xx-render origin/r3xx-render
Update: The new render code has been merged to ati git master.
DCE 3.0 (Display Control Engine version 3.0) is the new display controller in the latest radeons from AMD. This includes the HD3600/HD3400 series cards (RV620/RV635) and the RS780. The analog functionality (DACs) is largely unchanged; however, the digital portion has been completely revamped to support a variety of digital outputs, including DisplayPort. Previous chips generally had fixed-function encoder/transmitter blocks (LVTMA is an exception) for specific output types (LVDS or TMDS). On the new chips, there are now multi-purpose digital encoders and transmitters, which can be configured to support any digital output type that is required. Support for DCE 3.0 cards is available in both the radeon and radeonhd drivers.
You’ve probably seen that the radeon driver recently got support for textured video. This basically exposes an Xv adapter using the 3D texture engine for rendering. Textured video has some advantages over traditional overlays. These include:
- multiple ports (display more than one video at a time)
- no crtc limitations (videos can span multiple heads)
- works with composite (wobbly videos or rotated screens)
The video pipeline has two basic parts: decoding and rendering. I’ll just cover the rendering, as that is what is relevant here. For rendering with Xv you have a decoded buffer of YUV data. To display it on the screen, the data needs to be converted to RGB and scaled to fit the size of the drawable. With an overlay, the overlay hardware does the YUV to RGB conversion and scaling, then the data is “overlaid” on the graphics data during scan out. The converted data is never actually written to a buffer, which is what makes the overlay incompatible with composite.
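As a rough illustration of the conversion step, here is a minimal per-pixel YCbCr-to-RGB sketch. The `ycbcr_to_rgb` helper and its fixed-point BT.601 coefficients are my own for illustration; the actual constants the overlay or texture hardware uses may differ.

```c
#include <assert.h>

/* Clamp an intermediate result to the 0-255 range of an 8-bit channel. */
static unsigned char clamp_byte(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (unsigned char)v;
}

/* Hypothetical sketch of the per-pixel YCbCr -> RGB conversion the
 * scaler/texture hardware performs.  These are 8.8 fixed-point BT.601
 * coefficients for video-range (16-235 luma, 16-240 chroma) input;
 * the chip's actual constants may differ. */
static void ycbcr_to_rgb(int y, int cb, int cr,
                         unsigned char *r, unsigned char *g,
                         unsigned char *b)
{
    int c = y  - 16;
    int d = cb - 128;
    int e = cr - 128;

    /* R = 1.164*C + 1.596*E, G = 1.164*C - 0.391*D - 0.813*E,
     * B = 1.164*C + 2.018*D, scaled by 256 with rounding. */
    *r = clamp_byte((298 * c + 409 * e + 128) >> 8);
    *g = clamp_byte((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_byte((298 * c + 516 * d + 128) >> 8);
}
```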
Using the texture engine, we treat the YUV data as a texture. The radeon texture engine has native support for YUV textures and YUV to RGB conversion, so that is what we use. On hardware without YUV textures, the color space conversion can be done with a fragment shader program. Once we have the YUV data as a texture, we render a quad and apply the texture to it; the texture is scaled based on the size of the quad rendered. All of this happens in radeon_textured_videofuncs.c.
RADEONInit3DEngine() sets up the common state required for 3D operations. Next we set up the texture and the surface info for the destination of the 3D engine; this could be the desktop or an offscreen pixmap. We choose the appropriate texture format based on the YUV format and enable YUV to RGB conversion in the texture engine. On r3xx, r4xx, and r5xx chips we load a small vertex program to set up the vertices and a small fragment program to load the texture. On older chips the pipeline is fixed, so the setup is not as complex. Finally, we iterate across the clipping regions of the video and draw quads for the exposed regions of the video. We send four vertices, one for each corner of each box in the clipping region. Each vertex consists of four coordinates: two for position (X, Y) and two texture coordinates (S, T). The position defines the location on the destination surface for that vertex; the texture coordinates specify the location within the texture that corresponds to it. Think of a texture as a stamp with a width and a height. When a texture is applied to a primitive, you need to specify what portion of the texture is applied. Textures are considered to have a width and height of 1, so your S and T coordinates are proportions rather than raw coordinates.
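The vertex setup described above can be sketched as follows. The `struct vertex` and `emit_box` helper are hypothetical, not the driver's actual code; the point is simply that X/Y are destination-surface coordinates while S/T are divided by the texture dimensions to get proportions of the full texture.

```c
#include <assert.h>

/* One vertex as the post describes it: screen position plus
 * normalized texture coordinates.  (Illustrative, not driver code.) */
struct vertex {
    float x, y;   /* position on the destination surface */
    float s, t;   /* proportions within the texture, 0.0 - 1.0 */
};

/* Hypothetical helper: turn one clip box into the four corner
 * vertices of a quad.  dst_* is the box on screen, src_* is the
 * matching region of the video image, tex_w/tex_h is the full
 * texture size used to normalize S and T. */
static void emit_box(struct vertex v[4],
                     int dst_x, int dst_y, int dst_w, int dst_h,
                     int src_x, int src_y, int src_w, int src_h,
                     int tex_w, int tex_h)
{
    /* S/T are proportions of the texture, so divide by its size. */
    float s0 = (float)src_x / tex_w;
    float t0 = (float)src_y / tex_h;
    float s1 = (float)(src_x + src_w) / tex_w;
    float t1 = (float)(src_y + src_h) / tex_h;

    v[0] = (struct vertex){ dst_x,         dst_y,         s0, t0 };
    v[1] = (struct vertex){ dst_x + dst_w, dst_y,         s1, t0 };
    v[2] = (struct vertex){ dst_x + dst_w, dst_y + dst_h, s1, t1 };
    v[3] = (struct vertex){ dst_x,         dst_y + dst_h, s0, t1 };
}
```

Scaling falls out for free: the quad can be any size, and the texture engine interpolates S/T across it.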
At the end you’ve drawn a video frame using the texture engine.
EXA composite support uses the same principles, only it adds blends (blending texture data with data in the destination buffer — translucent windows) and coordinate transforms (changing the X,Y coordinates of the primitive to transform the object — this is how rotation is done).
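A minimal sketch of the coordinate-transform half of that, assuming a RENDER-style 3x3 matrix (shown here in plain floats rather than the fixed-point PictTransform the X server actually uses):

```c
#include <assert.h>

/* Illustrative 3x3 transform, in the spirit of RENDER's PictTransform
 * but using floats instead of 16.16 fixed point. */
struct xform {
    float m[3][3];
};

/* Apply the transform to a coordinate pair.  A rotation matrix here
 * is how rotated screens get their coordinates remapped. */
static void apply_xform(const struct xform *t, float x, float y,
                        float *ox, float *oy)
{
    float tx = t->m[0][0] * x + t->m[0][1] * y + t->m[0][2];
    float ty = t->m[1][0] * x + t->m[1][1] * y + t->m[1][2];
    float tw = t->m[2][0] * x + t->m[2][1] * y + t->m[2][2];

    /* Projective divide; tw stays 1.0 for plain affine transforms. */
    *ox = tx / tw;
    *oy = ty / tw;
}
```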