I think part of the problem is that many people are confusing the concept with a particular implementation of it. To me, and probably to Hans, a video overlay is a very specific hardware implementation, distinct from chromakeying and video texturing. To the next guy, maybe not. So, for the sake of clarity, I'll use the term "hardware video surface" for the general concept: a hardware-assisted mechanism for rendering YCbCr-formatted frames without the need for software colour conversion or scaling.
Nobody is arguing against the need for "hardware video surface" support in display drivers. Even when the output doesn't need to be scaled, skipping the software colour conversion reduces the amount of data that has to be transferred over the bus, since YCbCr data is typically more compact than RGB (e.g. 12 bits per pixel for 4:2:0 subsampled data versus 24 or 32 for RGB). In a perfect world, the various parts of the hardware involved in video stream decoding would be openly documented and we could use those too.
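As a rough illustration of the bandwidth point, here is the per-frame transfer size for a PAL-sized frame, comparing 32-bit RGBX against a 4:2:0 subsampled YCbCr layout (such as YV12, where the two chroma planes are stored at quarter resolution). The frame dimensions are just an example, not anything specific to a particular card:

```python
# Bytes needed to push one 720x576 frame over the bus, comparing
# 32-bit RGBX against 4:2:0 subsampled YCbCr (e.g. YV12).
WIDTH, HEIGHT = 720, 576

rgbx_bytes = WIDTH * HEIGHT * 4           # 32 bits per pixel
ycbcr420_bytes = WIDTH * HEIGHT * 3 // 2  # 8 bpp luma + 4 bpp for both chroma planes

print(rgbx_bytes, ycbcr420_bytes)              # 1658880 622080
print(round(rgbx_bytes / ycbcr420_bytes, 2))   # 2.67
```

So letting the hardware do the conversion at scanout (or sampling) time cuts the bus traffic to well under half, before any scaling even enters the picture.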
The argument has been over the merits of particular ways of implementing support for "hardware video surfaces". The original hardware mechanism that we are used to from the days of the Cirrus Logic and S3 chipsets of yesteryear (and variations thereof) is what is being described as obsolete here. I haven't looked too closely, but from what I gather, this particular mechanism doesn't even exist on a lot of current-generation cards. Instead, video colour conversion and scaling are handled by parts of the hardware 3D pipeline through texture mapping: each YCbCr frame is used as a texture and rendered onto a quad (a rectangle primitive, optionally realized as a triangle fan or strip on hardware without direct support for quads).
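For reference, the conversion that the 3D pipeline ends up performing per texel when sampling such a texture is just the standard BT.601 matrix. The sketch below shows that arithmetic for one video-range (16-235 luma, 16-240 chroma) sample; the coefficients are the published BT.601 ones, not anything specific to a particular driver or shader:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one video-range BT.601 YCbCr sample to 8-bit RGB.

    This mirrors the per-texel arithmetic a fragment shader (or fixed-function
    conversion unit) applies when a YCbCr frame is sampled as a texture.
    """
    y, cb, cr = y - 16, cb - 128, cr - 128
    r = 1.164 * y + 1.596 * cr
    g = 1.164 * y - 0.813 * cr - 0.391 * cb
    b = 1.164 * y + 2.018 * cb
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb(16, 128, 128))    # video black -> (0, 0, 0)
print(ycbcr_to_rgb(235, 128, 128))   # video white -> (255, 255, 255)
```

In the texturing approach this matrix multiply happens for free on the GPU for every sampled texel, which is exactly the work the old dedicated overlay units used to do.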
Right now, as far as I know, no such implementation exists for the RadeonHD drivers. With adequate 3D support, however, implementing a hardware video surface via video texturing becomes feasible. Since both 3D and video playback are desirable features, it makes sense to focus on the overlap.