You’ve probably seen that the radeon driver recently got support for textured video. This basically exposes an Xv adapter using the 3D texture engine for rendering. Textured video has some advantages over traditional overlays. These include:
- multiple ports (display more than one video at a time)
- no CRTC limitations (videos can span multiple heads)
- works with composite (wobbly videos or rotated screens)
The video pipeline has two basic parts: decoding and rendering. I’ll just cover the rendering, as that is what is relevant here. For rendering with Xv you have a decoded buffer of YUV data. To display it on the screen, the data needs to be converted to RGB and scaled to fit the size of the drawable. With an overlay, the overlay hardware does the YUV to RGB conversion and scaling, then the data is “overlaid” onto the graphics data during scan out. The converted data is never actually written to a buffer, which is what makes the overlay incompatible with composite.
Using the texture engine, we treat the YUV data as a texture. The radeon texture engine has native support for YUV textures and YUV to RGB conversion, so that is what we use. On hardware without YUV texture support, the color space conversion can instead be done with a fragment shader program. Once we have the YUV data as a texture, we render a quad and apply the texture to it; the texture is scaled based on the size of the quad rendered. All of this happens in radeon_textured_videofuncs.c.
RADEONInit3DEngine() sets up the common state required for 3D operations. Next we set up the texture and the surface info for the destination of the 3D engine; this could be the desktop or an offscreen pixmap. We choose the appropriate texture format based on the YUV format and enable YUV to RGB conversion in the texture engine. On r3xx, r4xx, and r5xx chips we load a small vertex program to set up the vertexes and a small fragment program to load the texture. On older chips the pipeline is fixed, so the setup is not as complex.

Finally we iterate across the clipping regions of the video and draw quads for the exposed regions. We send four vertexes, one for each corner of each box in the clipping region. Each vertex consists of four coordinates: two for position (X, Y) and two texture coordinates (S, T). The position defines the location on the destination surface for that vertex; the texture coordinates specify the location within the texture that corresponds to it. Think of a texture as a stamp with a width and a height. When a texture is applied to a primitive, you need to specify what portion of the texture is applied. Textures are considered to have a width and height of 1, so your S and T coordinates are proportions rather than raw coordinates.
At the end you’ve drawn a video frame using the texture engine.
EXA composite support uses the same principles, only it adds blends (blending texture data with data in the destination buffer — translucent windows) and coordinate transforms (changing the X,Y coordinates of the primitive to transform the object — this is how rotation is done).