DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Non-Linear Editing on the PC (https://www.dvinfo.net/forum/non-linear-editing-pc/)
-   -   RGB v. YUV (https://www.dvinfo.net/forum/non-linear-editing-pc/42589-rgb-v-yuv.html)

Graeme Nattress April 8th, 2005 03:56 PM

That's why you have to be very careful! But we are resigned to converting to RGB at some point in the chain, as there is very little compositing that you can do in Y'CbCr space. I for one can't wait for the day that we can get rid of Y'CbCr altogether as it's rather a crude compression compared to keeping everything RGB full rez on each channel and using more perceptual compression techniques.

Graeme

Peter Jefferson April 9th, 2005 06:24 AM

Well, I can safely say that the way Canopus can offer a true realtime DV/HDV editing solution is to work in YUV format as opposed to RGB.
Apparently it's more efficient to work with?

I wouldn't know, but I do know that the Edius SP for HDV card and its components are all YUV with RGB emulation.

Graeme Nattress April 9th, 2005 02:46 PM

Most standard compositing techniques produce vastly different results in Y'CbCr than in RGB. Try doing Add in Y'CbCr - you can't. You can do Y'a = Y'b + Y'c, but if you do Cba = Cbb + Cbc, your colours will all change. In RGB, you can easily do Ra = Rb + Rc, etc., which will produce the intended result. Just doing the Y' addition and leaving the chroma alone in Y'CbCr space will also produce very different results. Similarly for multiply.

So yes, if your video is Y'CbCr, you need something that edits in that space natively, but knowing that most compositing effects look different in Y'CbCr space (even a simple dissolve), you must also have great conversions to and from RGB.

Graeme
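
As a rough illustration of the Add example above: a minimal numpy sketch (assuming full-range BT.601 with the usual +128 chroma offset; the colour values are made up) comparing a channel-by-channel Add done in RGB with the same Add done on Y'CbCr.

# Compare "Add" compositing done in RGB with a naive channel-by-channel
# add in 8-bit-style Y'CbCr (BT.601, full range, +128 chroma offset).
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> Y'CbCr with the +128 chroma offset."""
    r, g, b = rgb
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.array([y, cb, cr])

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.array([r, g, b])

a = np.array([200.0,  30.0,  40.0])   # a reddish colour
b = np.array([ 20.0, 150.0,  60.0])   # a greenish colour

# "Add" in RGB (the result people expect), clipped to the 8-bit range.
add_rgb = np.clip(a + b, 0, 255)

# "Add" channel by channel in Y'CbCr, then converted back to RGB.
add_ycc = np.clip(ycbcr_to_rgb(rgb_to_ycbcr(a) + rgb_to_ycbcr(b)), 0, 255)

print("Add in RGB:    ", add_rgb.round(1))
print("Add in Y'CbCr: ", add_ycc.round(1))   # noticeably different colours

The Y'CbCr version drifts strongly towards magenta here, largely because the +128 chroma offset is counted twice when the two Cb (and Cr) values are summed.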

Andre De Clercq April 10th, 2005 03:27 AM

There is no reason why adding wouldn't work in native YUV. It's not the color but the color difference values (C is not a color value!) that get changed, and so do the Y' values, resulting in the correct colors. Dissolving (multiplying) is the same story.

Rob Lohman April 10th, 2005 03:46 AM

<<<-- Originally posted by Riley Harmon : I'm using Premiere Pro, which I assume will capture RGB (?), so I should just stay with RGB and not worry about YUV. I edit in Premiere and do most of my effects in AE. -->>>

ALL DV capture programs capture the stream AS IS!!

They may put a wrapper around it (AVI, QuickTime etc.) and split the audio, but the content itself is NOT touched. So Premiere is not capturing "in" RGB. It captures the stream as it comes in, which is in normal DV form: YUV/Y'CbCr.

Graeme Nattress April 10th, 2005 05:59 AM

It's not that adding doesn't work in Y'CbCr, just that it looks different to adding in RGB, and the RGB result is the one people expect to see. Y'CbCr is really just compression when you think about it, and as we know, you can't do effects directly on compressed video without decompressing it. Because of the nature of the Y'CbCr compression, it's not too bad, as we can do quite a few things to the video without converting to RGB. We can do geometric transforms, alpha blending, things that affect luma differently from chroma and such... But these are few compared to the vast number of visual processing algorithms and effects that work in RGB only.

Graeme
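
A small sketch of one of the operations listed above as safe to do natively, assuming full-range 8-bit planes already decoded from the DV stream (the plane shapes and the adjust_luma helper are made up for the example): a luma-only lift/gain tweak that never touches the chroma planes.

# Luma-only adjustment applied directly to the Y' plane; no RGB round trip.
import numpy as np

def adjust_luma(y_plane, lift=0.0, gain=1.0):
    """Scale and offset the Y' plane only; chroma planes are never touched."""
    y = y_plane.astype(np.float32)
    return np.clip(y * gain + lift, 0, 255).astype(np.uint8)

# Hypothetical 720x480 frame: full-resolution Y', quarter-width chroma (4:1:1).
y  = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
cb = np.random.randint(0, 256, (480, 180), dtype=np.uint8)
cr = np.random.randint(0, 256, (480, 180), dtype=np.uint8)

y2 = adjust_luma(y, lift=10, gain=0.9)   # brightness/contrast style change
# cb and cr pass through unchanged -- no colour-space conversion needed.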

Andre De Clercq April 10th, 2005 06:55 AM

In NLE systems, all DV data is decompressed into its basic Y'CrCb components before any pixel-based processing is executed.
Why would RGB processing look different? RGB (apart from the resolution issues and rounding errors when converted back, as I mentioned earlier) is just another way (a linear combination) to describe the same thing. The reason RGB was originally used in video is simply that all the processing routines already existed in (full-bandwidth RGB) graphics applications.

Graeme Nattress April 10th, 2005 07:17 AM

Yes, there's a linear transform between the two spaces, so you just transform to RGB, do your processing and transform back. If you're lucky you can concatenate the matrices and therefore do the entire thing natively in Y'CbCr space, but not all such RGB processes can be described as matrices that you can concatenate.

Graeme
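
A sketch of the matrix-concatenation idea, assuming full-range BT.601 with zero-centred chroma (the +16/+128 offsets and 8-bit rounding are ignored to keep it short): a luma-weighted saturation adjustment, normally applied in RGB, folds into a single matrix that works directly on Y'CbCr.

# Fold an RGB saturation matrix into the colour-space matrices so the whole
# operation can be applied natively to Y'CbCr values.
import numpy as np

# RGB -> Y'CbCr (BT.601, zero-centred chroma) and its inverse.
A = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
A_inv = np.linalg.inv(A)

def saturation_rgb(s):
    """Standard luma-weighted saturation matrix in RGB space."""
    luma = np.array([0.299, 0.587, 0.114])
    return (1 - s) * np.outer(np.ones(3), luma) + s * np.eye(3)

s = 0.5                                   # halve the saturation
M = A @ saturation_rgb(s) @ A_inv         # concatenated: acts on Y'CbCr directly

print(np.round(M, 4))
# Comes out as (approximately) diag(1, 0.5, 0.5): leave Y' alone and scale the
# chroma -- the whole RGB operation collapses into a native Y'CbCr one.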

Andre De Clercq April 10th, 2005 01:10 PM

Unfortunately, in video it's not only about static colorspace conversion. That is indeed simple between Y'CrCb and RGB. However, we need 720x480 (720x576 for PAL) pixels converted per frame, and once stripped down in their spatial relations (RGB derived from Y and resolution-reduced CrCb), it is no longer possible to reconstruct a full-bandwidth Y component out of stripped-down (converted RGB) components. Colorspace conversion as such does not address spatial resolution at all.
I will stop my inputs, Graeme, unless you give me a specific processing problem which would be impossible in native components. I will give it a try then...
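
For the spatial side of the argument, a small sketch assuming the common DV subsampling layouts (4:1:1 for 720x480 NTSC, 4:2:0 for 720x576 consumer DV PAL; the dv_plane_shapes helper is made up for the example): converting to RGB first forces the reduced-resolution chroma up to the full pixel grid, and the colour-space matrix itself never restores the missing detail.

# Plane sizes for the usual DV chroma subsampling, and the chroma upsample
# that an RGB conversion implies.
import numpy as np

def dv_plane_shapes(standard):
    """Return (Y', Cb, Cr) plane shapes for the common DV subsampling layouts."""
    if standard == "NTSC":          # 720x480, 4:1:1 -> chroma at 1/4 width
        return (480, 720), (480, 180), (480, 180)
    if standard == "PAL":           # 720x576, 4:2:0 (consumer DV/DVCAM) -> 1/2 width, 1/2 height
        return (576, 720), (288, 360), (288, 360)
    raise ValueError(standard)

for std in ("NTSC", "PAL"):
    y_shape, cb_shape, cr_shape = dv_plane_shapes(std)
    print(std, "Y':", y_shape, "Cb:", cb_shape, "Cr:", cr_shape)

# Nearest-neighbour upsample of a chroma plane to the Y' grid before any
# RGB conversion; this fills in pixels but cannot add real chroma bandwidth.
cb = np.random.randint(0, 256, (480, 180), dtype=np.uint8)
cb_full = np.repeat(cb, 4, axis=1)        # 480x180 -> 480x720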

Graeme Nattress April 11th, 2005 08:31 AM

Andre, I see exactly what you're getting at now:

As you can linearly convert from Y'CbCr to RGB, there is no possible way to make an image process that can be done in RGB only, as all you have to do is put the conversion from Y'CbCr to RGB on the front.

Graeme

Andre De Clercq April 11th, 2005 12:41 PM

Indeed, Graeme, you sometimes need a fraction of the (contaminated) RGB values too, but the major part can remain Y'CrCb. For instance, white point correction in component processing only needs a small percentage of the reduced RGB data, while in the non-component version the full stripped-down RGB data is involved.

Graeme Nattress April 11th, 2005 01:16 PM

Interesting. Do you have a link to some more info on that?

Graeme

Andre De Clercq April 11th, 2005 02:16 PM

I will look for it Graeme.

