View Full Version : RGB v. YUV


Riley Harmon
April 7th, 2005, 11:52 AM
I heard that it was better to edit with YUV color space, is this true? What is the difference/benefits of editing with YUV vs. RGB? Which is better to edit with? What is the best way to convert between the two? Etc. Etc.

Graham Hickling
April 7th, 2005, 09:02 PM
Assuming your raw footage is DV, then it's in YUV colorspace to begin with. So there's a modest quality (and sometimes speed) advantage in staying there during editing rather than converting to RGB and then back again to whatever your final product is (YUV again for DV or DVD, etc.).

Graeme Nattress
April 7th, 2005, 11:38 PM
Actually, DV is in the Y'CbCr colour space. YUV is something completely different, and the term is often used inaccurately to refer to Y'CbCr. Even Apple etc. get this naming wrong.

So yes, both DV and DVD use a Y'CbCr colour space, and hence unnecessary conversions to RGB and back can cause issues but:

There are practically no effects or colour corrections that work entirely in Y'CbCr space,

Even effects that are Y'CbCr-native often involve some conversion to RGB,

And Y'CbCr is effectively a compression technique, as it is part of the 4:1:1, 4:2:2 type goings-on where chroma is spatially compressed compared to the luma.

So, although it's wonderful in theory to stay in Y'CbCr, it's often very hard to in practice.
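For reference, the conversion being discussed can be sketched in a few lines of Python. This is a simplified illustration using full-range BT.601 coefficients; real DV uses studio-range levels (16-235 luma, 16-240 chroma), so treat the numbers as illustrative only.

```python
# Simplified R'G'B' <-> Y'CbCr conversion (BT.601 luma coefficients,
# full-range values in 0..1 -- real DV uses studio-range levels).

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b      # luma
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b  # blue difference
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b  # red difference
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

# a neutral grey has (almost exactly) zero chroma
print(rgb_to_ycbcr(0.5, 0.5, 0.5))   # y ~ 0.5, cb and cr ~ 0
```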

Graeme

Graham Hickling
April 8th, 2005, 12:29 AM
Graeme - I'll have to admit to being confused! The YUV page of www.fourcc.org starts off with this introduction:

"YUV formats fall into two distinct groups, the packed formats where Y, U (Cb) and V (Cr) samples are packed together into macropixels which are stored in a single array, and the planar formats where each component is stored as a separate array, the final image being a fusing of the three separate planes"

So when you say that YUV and Y'CbCr are "completely different", what are you getting at? And what are they getting at?

Thanks!

Graeme Nattress
April 8th, 2005, 12:34 AM
YUV is an intermediate stage in turning component video into composite video or S-video. Y'CbCr are the components of digital video, often wrongly called YUV.

I recommend this as a highly technical guide to video: http://www.poynton.com/DVAI/index.html where he explains it much better than I can.

Graeme

Graham Hickling
April 8th, 2005, 12:37 AM
Oh and ... my understanding has been that all the AviSynth 2.5 filters work in YV12, and that this is the FourCC for 4:2:0, and so these filters would be particularly appropriate for HDV (and PAL DV) work, since both of these are 4:2:0 formats.

If I'm confused on this as well, it would be good to know!

Graeme Nattress
April 8th, 2005, 12:42 AM
I don't know about those specific formats. But MPEG2 4:2:0 and DV 4:2:0 are different. I think it's where the Cb and Cr samples are sited with respect to the luma. There's a diagram, I think, in the Poynton book I mentioned. I don't know if the people who write the effects take this into account, or whether they have to at all if the actual capture process converts both into a more standard format.

Graeme

Graham Hickling
April 8th, 2005, 01:01 AM
>>http://www.poynton.com/DVAI/index.html

Well there's bedtime reading for a good long while (I just "looked inside the book" on Amazon - yikes!).

Thanks for the recommendation.

Graeme Nattress
April 8th, 2005, 05:47 AM
Yes, it's very heavy going, but it's full of wisdom from someone who knows what he's talking about. I learned something from this book on every page, which, in these days of books and articles that can barely get basic facts right, is a most wonderful thing. However, you probably want a maths or electrical engineering degree under your belt to get the most out of it.

Graeme

Andre De Clercq
April 8th, 2005, 07:25 AM
YUV processing conserves color precision and luma resolution for most of the NLE processing routines and is thus preferable for DV video.
As far as I know, Charles Poynton only pinpoints the confusion between the gamma-corrected luma value (the prime) and the linear version (he is a gamma fan!). This is only a mathematical issue when color processing (space conversion...) is involved in graphics applications (he used to be a Silicon Graphics man). In video, all signals are gamma-precorrected, and YUV, Y'CbCr, even YCbCr, are the same thing for video people. They all belong to the same family of luma/color-difference signal conventions. For many years professionals, professional companies and standardisation organisations have routinely used the "wrong" expressions.

Graeme Nattress
April 8th, 2005, 07:33 AM
Well, I think Charles has it right on this one. If there's a proper name to call something, then that's what should be used, even if many people use the incorrect terminology as standard practice. In this context, read:

http://developer.apple.com/quicktime/icefloe/dispatch027.html

I'd disagree that "YUV processing conserves color precision and luma resolution for most of the NLE processing routines and is thus preferable in DV video", because most processing that you do to video is, by necessity, RGB-based. Although yes, the edit package should be aware that the video is Y'CbCr and that this needs to be taken into account. For instance, in FCP you'd make sure that the timeline is set to "YUV", even though 90% of the effects you apply will convert the video to RGB at some point.

AFAIK:

Y'PbPr is analogue component video
YUV is an intermediary step in converting Y'PbPr to S-Video (YC) or composite video
Y'CbCr is digital component video.

3 different names, for three different things.

Graeme

Andre De Clercq
April 8th, 2005, 11:41 AM
See http://www.canopus.us/us/pdf/Storm_comparison.pdf and much more just by a Google search... but search for YUV instead of Y'CbCr, otherwise you will get no results.

Riley Harmon
April 8th, 2005, 11:47 AM
I'm using Premiere Pro, which I assume will capture RGB (?), so I should just stay with RGB and not worry about YUV. I edit in Premiere and do most of my effects in AE.

Graeme Nattress
April 8th, 2005, 01:04 PM
Every compositing application works purely in RGB as there are 100 times as many things you can do to an image in RGB space as Y'CbCr space. Sometimes Y'CbCr space is useful too. The key to good effects software is clean conversions between the two without clipping or rounding errors.
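To make the clipping-and-rounding point concrete, here is a small Python sketch (my own illustration, using full-range BT.601 coefficients rather than DV's actual studio-range levels): a single 8-bit round trip can already shift a channel value.

```python
# One 8-bit Y'CbCr round trip, with the rounding any integer
# pipeline must perform (full-range BT.601, illustrative only).

def to_ycbcr(r, g, b):
    y  = round( 0.299 * r + 0.587 * g + 0.114 * b)
    cb = round(-0.168736 * r - 0.331264 * g + 0.5 * b + 128)
    cr = round( 0.5 * r - 0.418688 * g - 0.081312 * b + 128)
    return y, cb, cr

def to_rgb(y, cb, cr):
    r = round(y + 1.402 * (cr - 128))
    g = round(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128))
    b = round(y + 1.772 * (cb - 128))
    return r, g, b

pixel = (200, 101, 52)
print(to_rgb(*to_ycbcr(*pixel)))   # not quite (200, 101, 52)
```

Repeat the round trip a few times, as a chain of RGB-based effects would, and the errors accumulate.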

Graeme

Andre De Clercq
April 8th, 2005, 03:54 PM
One of the problems, even with an ideal YUV>RGB conversion, is that each of the converted R, G and B components is a linear combination of a full-resolution luma part and a 1/4-resolution color-difference part. So, e.g., R = Y (full res) + (R-Y) (1/4 res). So R in this example doesn't have the original Y bandwidth anymore. After processing, the result is converted back into YUV, but the Y value in that conversion is again made up of bandwidth-reduced RGB components. The issue is comparable to shooting a colour TV picture in B&W versus shooting the same picture in B&W when only the luma is taken (fully desaturated picture). The "color leakage" (bandwidth-reduced color-difference signals) is not present in the latter case, resulting in a "full bandwidth" photo.
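The R = Y + (R-Y) point can be illustrated numerically. The sketch below is a hypothetical 1-D example, with nearest-sample chroma repetition standing in for a real 4:1:1 filter:

```python
# A sharp edge reconstructed from full-res luma plus 4:1:1-style
# subsampled R-Y (nearest-sample repetition; illustrative only).

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

row = [(200, 50, 50)] * 3 + [(120, 120, 120)] * 5   # red-to-grey edge
ys  = [luma(*p) for p in row]
crs = [p[0] - y for p, y in zip(row, ys)]           # R - Y per pixel

# keep every 4th chroma sample and repeat it (4:1:1-style)
crs_sub = [crs[(i // 4) * 4] for i in range(len(row))]

# R = Y (full res) + (R - Y) (quarter res)
r_rec = [round(y + c) for y, c in zip(ys, crs_sub)]
print(r_rec)   # the pixel at the edge overshoots: true R there is 120
```

The luma channel still sees the edge at full resolution, but the reconstructed R channel does not: the stale chroma sample leaks across the edge.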

Graeme Nattress
April 8th, 2005, 03:56 PM
That's why you have to be very careful! But we are resigned to converting to RGB at some point in the chain, as there is very little compositing that you can do in Y'CbCr space. I for one can't wait for the day that we can get rid of Y'CbCr altogether, as it's rather a crude compression compared to keeping everything RGB at full resolution on each channel and using more perceptual compression techniques.

Graeme

Peter Jefferson
April 9th, 2005, 06:24 AM
Well, I can safely say that the way Canopus can offer a true realtime DV/HDV editing solution is to work in YUV format as opposed to RGB. Apparently it's more efficient to work with?

I wouldn't know, but I do know that the Edius SP for HDV card and components are all YUV with RGB emulation.

Graeme Nattress
April 9th, 2005, 02:46 PM
Most standard compositing techniques produce vastly different results in Y'CbCr than in RGB. Try doing Add in Y'CbCr - you can't. You can do Y'a = Y'b + Y'c, but if you do Cba = Cbb + Cbc, your colours will all change. In RGB, you can easily do Ra = Rb + Rc, etc., which will produce the intended result. Just doing the Y' addition and leaving the chroma alone in Y'CbCr space will also produce very different results. Similarly for multiply.

So yes, if your video is Y'CbCr, you need something that edits in that space natively, but because most compositing effects behave differently there (even a simple dissolve will look different in Y'CbCr space), you must also have great conversions to and from RGB.
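The Add example can be checked numerically. The sketch below (my own, using full-range BT.601 with the usual +128 chroma offset) adds two colours in RGB and then naively adds the stored Y'CbCr values: the offset is counted twice and the chroma is thrown far off.

```python
# Add in RGB versus naive Add on stored 8-bit-style Y'CbCr values
# (full-range BT.601, +128 chroma offset; illustrative only).

def to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def to_rgb(y, cb, cr):
    return (y + 1.402 * (cr - 128),
            y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128),
            y + 1.772 * (cb - 128))

a, b = (100, 20, 20), (20, 100, 20)

rgb_add = tuple(p + q for p, q in zip(a, b))            # (120, 120, 40)

ya, cba, cra = to_ycbcr(*a)
yb, cbb, crb = to_ycbcr(*b)
naive = to_rgb(ya + yb, cba + cbb, cra + crb)           # offset doubled

print(rgb_add, tuple(round(v) for v in naive))
```

Even if you subtract the extra 128 by hand, 8-bit clipping of the summed chroma would still bite on saturated colours.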

Graeme

Andre De Clercq
April 10th, 2005, 03:27 AM
There is no reason why adding wouldn't work in native YUV. Not the colors but the color-difference values (C is not a color value!) get changed, and so do the Y' values, resulting in the correct colors. Dissolving (multiplying) is the same story.

Rob Lohman
April 10th, 2005, 03:46 AM
<<<-- Originally posted by Riley Harmon : Im using premiere pro, which i assume will capture RGB (?), so I should just stay with RGB and not worry about YUV. I edit in Premiere and do most of my effects in AE. -->>>

ALL DV capture programs capture the stream AS IS!!

They may put a wrapper around it (AVI, QuickTime etc.) and split
the audio, but the content itself is NOT touched. So Premiere
is not capturing "in" RGB. It captures the stream as it comes in,
which is in normal DV form: YUV/Y'CbCr.

Graeme Nattress
April 10th, 2005, 05:59 AM
It's not that adding doesn't work in Y'CbCr, just that it looks different to adding in RGB, and the RGB result is the one people expect to see. Y'CbCr is really just compression when you think about it, and as we know, you can't do effects directly on compressed video without decompressing it. Because of the nature of the Y'CbCr compression, it's not too bad, as we can do quite a few things to the video without converting to RGB. We can do geometric transforms, alpha blending, things that affect luma differently from chroma, and such... But these are few compared to the vast number of visual processing algorithms and effects that work in RGB only.
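One of the operations listed, alpha blending, really does commute with the conversion, because a dissolve's weights sum to one and the transform is affine. A quick Python check (simplified full-range BT.601 coefficients; illustrative only):

```python
# A dissolve (weights summing to 1) gives the same answer whether
# done on stored Y'CbCr values or in RGB (full-range BT.601 sketch).

def to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

a, b, alpha = (100, 20, 20), (20, 100, 200), 0.3

# blend in RGB, then convert
blend_rgb = tuple(alpha * p + (1 - alpha) * q for p, q in zip(a, b))
via_rgb = to_ycbcr(*blend_rgb)

# blend the stored Y'CbCr values directly
ya, yb = to_ycbcr(*a), to_ycbcr(*b)
native = tuple(alpha * p + (1 - alpha) * q for p, q in zip(ya, yb))

print(all(abs(p - q) < 1e-6 for p, q in zip(via_rgb, native)))   # True
```

The +128 chroma offset survives because alpha + (1 - alpha) = 1; an Add, whose weights sum to 2, doesn't have that luxury.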

Graeme

Andre De Clercq
April 10th, 2005, 06:55 AM
In NLE systems, all DV data is decompressed into its basic Y'CbCr components before executing any pixel-based processing.
Why would RGB processing look different? RGB (apart from the resolution issues, and the rounding errors when converting back, as I mentioned earlier) is just another way (a linear combination) to describe the same thing. The reason why RGB was originally used in video is just that all the processing routines already existed in (full-bandwidth RGB) graphics applications.

Graeme Nattress
April 10th, 2005, 07:17 AM
Yes, there's a linear transform between the two spaces, so you just transform to RGB, do your processing and transform back. If you're lucky you can concatenate the matrices and therefore do the entire thing native to Y'CbCr space, but not all RGB processing can be described as matrices that you can concatenate.
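The matrix-concatenation idea can be sketched as follows (plain Python, 3x3 matrices, full-range BT.601 with offsets omitted for clarity; the gain matrix P is a made-up example):

```python
# Fold an RGB matrix operation P into Y'CbCr space by concatenating
# M_to_ycc * P * M_to_rgb (offsets omitted; illustrative only).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

M_to_ycc = [[ 0.299,     0.587,     0.114   ],
            [-0.168736, -0.331264,  0.5     ],
            [ 0.5,      -0.418688, -0.081312]]
M_to_rgb = [[1.0,  0.0,       1.402   ],
            [1.0, -0.344136, -0.714136],
            [1.0,  1.772,     0.0     ]]

P = [[1.2, 0, 0], [0, 1.0, 0], [0, 0, 0.8]]   # hypothetical RGB gains

# one concatenated matrix that works directly on Y'CbCr pixels
native = matmul(M_to_ycc, matmul(P, M_to_rgb))

ycc = matvec(M_to_ycc, [180.0, 90.0, 45.0])
roundtrip = matvec(M_to_ycc, matvec(P, matvec(M_to_rgb, ycc)))
direct = matvec(native, ycc)
print(all(abs(p - q) < 1e-6 for p, q in zip(roundtrip, direct)))   # True
```

This only works because P is itself a matrix; a non-linear RGB operation (a curve, a key) cannot be folded in this way, which is the caveat above.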

Graeme

Andre De Clercq
April 10th, 2005, 01:10 PM
Unfortunately, in video it's not only about static colorspace conversion. That is indeed simple between Y'CbCr and RGB. However, we need 720x480 (720x576 for PAL) pixels converted per frame, and once they are stripped down in their spatial relations (RGB built out of full-resolution Y and resolution-reduced CbCr), it is no longer possible to reconstruct a full-bandwidth Y component out of the stripped-down (converted RGB) components. Colorspace conversion as such does not address spatial resolution at all.
I will stop my inputs, Graeme, unless you give me a specific processing problem which would be impossible in native components. I will give it a try then...

Graeme Nattress
April 11th, 2005, 08:31 AM
Andre, I see exactly what you're getting at now:

As you can linearly convert from Y'CbCr to RGB, there is no possible way to make an image process that can be done in RGB only, as all you have to do is put the conversion from Y'CbCr to RGB on the front.

Graeme

Andre De Clercq
April 11th, 2005, 12:41 PM
Indeed Graeme, you do sometimes need a fraction of the (contaminated) RGB values too, but the major part can remain Y'CbCr. For instance, white-point correction in component processing only needs a small percentage of the reduced RGB data, while in the non-component version the full stripped-down RGB data is involved.

Graeme Nattress
April 11th, 2005, 01:16 PM
Interesting. Do you have a link to some more info on that?

Graeme

Andre De Clercq
April 11th, 2005, 02:16 PM
I will look for it Graeme.