Introduction to 3D: Textured video

You’ve probably seen that the radeon driver recently got support for textured video. This basically exposes an Xv adapter using the 3D texture engine for rendering. Textured video has some advantages over traditional overlays. These include:

- multiple ports (display more than one video at a time)

- no CRTC limitations (videos can span multiple heads)

- works with composite (wobbly videos or rotated screens)

The video pipeline has two basic parts: decoding and rendering. I’ll just cover the rendering, as that is what is relevant here. For rendering with Xv you have a decoded buffer of YUV data. To display it on the screen, the data needs to be converted to RGB and scaled to fit the size of the drawable. With an overlay, the overlay hardware does the YUV to RGB conversion and scaling, then the data is “overlaid” with the graphics data during scan out. The converted data is never actually written to a buffer. This is what makes the overlay incompatible with composite.
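If you’re curious what that conversion actually involves, here’s a rough C sketch of the common integer BT.601 approximation. This is purely illustrative; both the overlay and the texture engine do this in hardware, not in driver code:

```c
#include <stdint.h>

/* Clamp an intermediate value to the 0-255 byte range. */
static uint8_t clamp_byte(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Per-pixel integer BT.601 YUV to RGB conversion, for video-range
 * input (Y: 16-235, U/V: 16-240). */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y - 16;
    int d = u - 128;
    int e = v - 128;

    *r = clamp_byte((298 * c + 409 * e + 128) >> 8);
    *g = clamp_byte((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_byte((298 * c + 516 * d + 128) >> 8);
}
```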

Using the texture engine, we treat the YUV data as a texture. The radeon texture engine has native support for YUV textures and YUV to RGB conversion, and that is what we use. On hardware without YUV textures, the color space conversion can be done with a fragment shader program instead. Once we have the YUV data as a texture, we render a quad and apply the texture to it; the texture is scaled based on the size of the quad rendered. All of this happens in radeon_textured_videofuncs.c.
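To make the quad idea concrete, here’s a minimal sketch using a hypothetical vertex type (not the driver’s actual structures) that stretches the full texture across a destination rectangle:

```c
/* Hypothetical vertex layout for illustration; the real emission
 * code lives in radeon_textured_videofuncs.c. */
typedef struct {
    float x, y; /* position on the destination surface */
    float s, t; /* normalized texture coordinates, 0.0 to 1.0 */
} vertex;

/* Emit one quad that maps the entire texture onto the destination
 * rectangle; the texture engine scales the video to fit the quad. */
static void emit_full_quad(float dst_x, float dst_y,
                           float dst_w, float dst_h,
                           vertex quad[4])
{
    quad[0] = (vertex){ dst_x,         dst_y,         0.0f, 0.0f };
    quad[1] = (vertex){ dst_x + dst_w, dst_y,         1.0f, 0.0f };
    quad[2] = (vertex){ dst_x + dst_w, dst_y + dst_h, 1.0f, 1.0f };
    quad[3] = (vertex){ dst_x,         dst_y + dst_h, 0.0f, 1.0f };
}
```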

RADEONInit3DEngine() sets up the common state required for 3D operations. Next we set up the texture and the surface info for the destination of the 3D engine, which could be the desktop or an offscreen pixmap. We choose the appropriate texture format based on the YUV format and enable YUV to RGB conversion in the texture engine. On r3xx, r4xx, and r5xx chips we load a small vertex program to set up the vertexes and a small fragment program to load the texture. On older chips the pipeline is fixed, so the setup is not as complex.

Finally, we iterate across the clipping regions of the video and draw quads for the exposed regions of the video. We send four vertexes, one for each corner of each box in the clipping region. Each vertex consists of four coordinates: two for position (X, Y) and two texture coordinates (S, T). The position defines the location on the destination surface for that vertex; the texture coordinates specify the location within the texture that corresponds to that vertex. Think of a texture as a stamp with a width and a height. When a texture is applied to a primitive, you need to specify what portion of the texture is applied. Textures are considered to have a width and height of 1; as such your S and T coordinates are proportions rather than raw coordinates.
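Here’s a rough sketch of that clip loop; the box type and emit_quad() are hypothetical stand-ins for the driver’s region handling and vertex emission code:

```c
#include <stdio.h>

/* Hypothetical clip box: (x1, y1) is the top-left corner and
 * (x2, y2) the bottom-right, in destination surface coordinates. */
typedef struct { int x1, y1, x2, y2; } box;

/* Stand-in for the code that sends four (X, Y, S, T) vertexes to
 * the 3D engine, one per corner of the quad. */
static void emit_quad(int x1, int y1, int x2, int y2,
                      float s1, float t1, float s2, float t2)
{
    printf("quad (%d,%d)-(%d,%d) tex (%.3f,%.3f)-(%.3f,%.3f)\n",
           x1, y1, x2, y2, s1, t1, s2, t2);
}

/* For each box in the clip region, draw a quad whose texture
 * coordinates select the matching slice of the video.  Because S
 * and T run from 0 to 1, the proportion of the drawable a box
 * covers is exactly the proportion of the texture to sample. */
static void draw_clipped_video(const box *boxes, int nbox,
                               int drw_x, int drw_y,
                               int drw_w, int drw_h)
{
    int i;

    for (i = 0; i < nbox; i++) {
        const box *b = &boxes[i];
        float s1 = (float)(b->x1 - drw_x) / drw_w;
        float t1 = (float)(b->y1 - drw_y) / drw_h;
        float s2 = (float)(b->x2 - drw_x) / drw_w;
        float t2 = (float)(b->y2 - drw_y) / drw_h;

        emit_quad(b->x1, b->y1, b->x2, b->y2, s1, t1, s2, t2);
    }
}
```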

At the end you’ve drawn a video frame using the texture engine.

EXA composite support uses the same principles; it just adds blends (blending texture data with data in the destination buffer, which is how translucent windows work) and coordinate transforms (changing the X, Y coordinates of the primitive to transform the object, which is how rotation is done).
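As an illustration of the transform half of that, here’s a sketch of a 3x3 matrix applied to vertex positions, similar in spirit to a Render picture transform (the names here are made up):

```c
/* Illustrative 3x3 transform applied to each vertex position
 * before the quad is drawn. */
typedef struct { float m[3][3]; } transform;

static void transform_point(const transform *t, float x, float y,
                            float *ox, float *oy)
{
    float w = t->m[2][0] * x + t->m[2][1] * y + t->m[2][2];

    *ox = (t->m[0][0] * x + t->m[0][1] * y + t->m[0][2]) / w;
    *oy = (t->m[1][0] * x + t->m[1][1] * y + t->m[1][2]) / w;
}

/* Example: rotate the screen contents 90 degrees clockwise.
 * (x, y) maps to (height - 1 - y, x). */
static transform rotate_90_cw(float height)
{
    transform t = {{ { 0.0f, -1.0f, height - 1.0f },
                     { 1.0f,  0.0f, 0.0f },
                     { 0.0f,  0.0f, 1.0f } }};
    return t;
}
```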

11 Responses to “Introduction to 3D: Textured video”

  1. osiris Says:

    Great post.

    I have a question.
    Why do we even care about some windows covering the video window? Shouldn’t that be taken care of by the window manager? I can see a reason why we need clipping regions for 3D apps - it’s performance; the fewer triangles and vertices we have to process, the better. But I don’t see a reason why we’re doing it for such a simple case. To me, we’re just making it more complicated than it should be, but very likely I just misunderstand something.

  2. DagB Says:

    Would it be possible to extend the core xvideo code in such a way that an application wanting to use an xv adapter checks for an environment variable XVPORT before it queries the system for available ports?

    If doable, this would allow users with applications which don’t have a way to specify which adapter to use to actively select one adapter over the other. And it would eliminate the need for driver-specific code, like the recent patches to the intel driver by Maxim Levitsky. And it requires no changes(?) to current applications.

  3. agd5f Says:

    osiris: We need to worry about clipping regions because you don’t want to render the video on top of the window that’s supposed to be covering it.

    DagB: You can do that without any changes to the Xv core. Just set the env. var. yourself and edit your app to check for it. You’d have to change all your applications though.

  4. osiris Says:

    agd5f: Yeah, I forgot that we render directly to the framebuffer and not to an offscreen pixmap.

  5. agd5f Says:

    If you are running composite, then you don’t have to worry.

  6. mrthefter Says:

    If there’s a problem with the hardware texture YUV->RGB colorspace converter on the rv250, why not use the ATI_fragment_shader extension to do it instead?
    From what I’m reading, this is what you do for cards that don’t support it anyway; the only difference being that I’d assume you were using ARB_fragment_shader instead. I believe there’s code available in mplayer that does exactly the same conversion using ATI’s implementation, if you need it.

  7. agd5f Says:

    The colorspace conversion could be done with a shader program; it just needs to be done. Patches welcome :) In the case of textured video, we program the shader directly rather than using OpenGL, so we’d have to translate the OpenGL shader into the native instructions.

  8. mrthefter Says:

    From what I can tell, the hardware bug doesn’t seem to affect green and purple, which leads me to believe that the YUV values are being decoded as YIQ and the I component is being dropped. Or some similar effect.

    Another question: why is textured output limited to the YUV colorspace, while overlays aren’t? Is there a technical reason, or are RGB colorspaces just not implemented yet?

  9. agd5f Says:

    Adding support for RGB surfaces would be pretty easy, it just hasn’t been done yet.

  10. ranma Says:

    Great, I’ve been waiting for this feature ever since I experienced the convenience of the nvidia bitblitting video adapter.
    However, I’m seeing tearing in the video; is this due to the feature being ‘very new’?
    EnablePageFlip is “yes” in my xorg.conf and XV_DOUBLE_BUFFER is also on…
    HTH, keep up the good work. :)

  11. ranma Says:

    Oh, and I forgot: I’m using the stock Debian unstable package. And I also noticed there is (probably) some off-by-one error when the window is partially obscured: it looks like the part of the video below the obscuring window is shifted up a little bit (a single pixel?).
