DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Sony XDCAM PMW-F3 CineAlta (https://www.dvinfo.net/forum/sony-xdcam-pmw-f3-cinealta/)
-   -   Sony F3 vs RED? (https://www.dvinfo.net/forum/sony-xdcam-pmw-f3-cinealta/487791-sony-f3-vs-red.html)

Alister Chapman February 8th, 2011 01:16 PM

Apologies to Red, I was not aware that Red One shipped with the MX sensor as standard. I thought a Red One with MX sensor was known as a Red One MX.

I do feel that many producers and directors, very often non-technical people, don't know the difference between a camera whose headline figure is a horizontal pixel count, one specified by actual measured resolution, and one described by image format.

If I proposed to many of them that I shoot on a "2.5K" camera rather than a 1080p/i camera, few would realise that I was talking about the same thing (the F3's speculated horizontal pixel count). I agree that most owners know the difference (at least they should), but very often the pressure to use this or that camera comes from producers who have read a headline number without really understanding what it means. Then the crew have to spend an age trying to educate said producer on the pros and cons of the different cameras/formats/workflows, only for the producer to come back with "but the front page headline says it's 5K".

Alister Chapman February 8th, 2011 04:12 PM

Quote:

Originally Posted by Peter Moretti (Post 1615787)
Where I do find Red misleading is the 4:4:4 claim. It's interpolated 4:4:4. You can claim "4K" or you can claim 4:4:4 at a lower resolution, but Red really can't claim both. If chroma resolution is less than luma resolution, then it's not 4:4:4.

And to be fair to Red, Sony and the F3 are no different.

Graeme Nattress February 10th, 2011 08:39 AM

Terminology like 4:4:4 and 4:2:2 refers to chroma subsampling of Y'CbCr data. It should not be used (although it often is, and indeed is used pejoratively by people who know better) to refer to sensors, not least because sensors are not Y'CbCr, and because sub-sampling implies there is a fully sampled RGB signal that was converted to Y'CbCr to produce a sub-sample from.

With camera systems, it does make sense to measure chroma and luma resolution both horizontal and vertical, but the 4:2:2 terminology is not the right thing to use to describe that.

Unfortunately, there are so many ways camera systems can work that single numbers or simple schemes can't really describe what is happening. Each scheme is a balance of factors, as we've been discussing, and it's almost like you need an essay to describe things.

Things get more complex when you have an image that is not subsampled, but the RGB channels don't line up. Can you really call it 4:4:4? Sure you can, but it doesn't look the same as an RGB image where the channels are in perfect alignment.

So - 4:4:4 says nothing about measured resolution, nothing about channel alignment and nothing about image quality. All it says is that the image is RGB or Y'CbCr with no chroma sub-sampling.
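
To make that concrete, here is a minimal numpy sketch of what the notation does describe: full-resolution luma, half-horizontal-resolution chroma. It assumes Rec.709 luma coefficients and uses plain decimation where a real system would low-pass filter first.

Code:

import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert non-linear R'G'B' (floats 0..1) to Y'CbCr (Rec.709 coefficients)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556          # scaled so Cb spans -0.5..0.5
    cr = (r - y) / 1.5748          # scaled so Cr spans -0.5..0.5
    return np.stack([y, cb, cr], axis=-1)

def subsample_422(ycbcr):
    """Keep every luma sample, every second chroma sample horizontally."""
    y  = ycbcr[..., 0]             # full resolution
    cb = ycbcr[:, ::2, 1]          # half horizontal resolution
    cr = ycbcr[:, ::2, 2]          # (real systems filter before decimating)
    return y, cb, cr

img = np.random.rand(1080, 1920, 3)     # a stand-in full-RGB frame
y, cb, cr = subsample_422(rgb_to_ycbcr(img))
print(y.shape, cb.shape, cr.shape)      # (1080, 1920) (1080, 960) (1080, 960)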

Graeme

Alister Chapman February 11th, 2011 01:58 AM

Quote:

Originally Posted by Graeme Nattress (Post 1616605)
So - 4:4:4 says nothing about measured resolution, nothing about channel alignment and nothing about image quality. All it says is that the image is RGB or Y'CbCr with no chroma sub-sampling.

Graeme

But in most Bayer systems, the sensor is sub-sampling the chroma.

If the sensor is subsampling the aerial image's B and R compared to G (Bayer matrix: two G samples for each R and B), then no matter how you interpolate those samples, the B and R are still sub-sampled and data is missing. Potentially, depending on the resolution of the sensor, even the G may be sub-sampled compared to the frame size. In my mind a true 4:4:4 system means one pixel sample for each colour at every point within the image. So for 4K that's 4K R, 4K G and 4K B. For a Bayer sensor that would imply a sensor with twice as many horizontal and vertical pixels as the desired resolution, or a 3-chip design with a pixel for each sample on each of the R, G and B sensors.
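
To put numbers on that, here is a sketch assuming a standard RGGB 2x2 Bayer pattern and a hypothetical 4096x2160 sensor (the figures are just the pattern arithmetic, not any particular camera's spec):

Code:

def bayer_sample_counts(width, height):
    """Photosites per channel for a 2x2 RGGB Bayer pattern."""
    total = width * height
    return {"G": total // 2, "R": total // 4, "B": total // 4}

counts = bayer_sample_counts(4096, 2160)
print(counts)           # {'G': 4423680, 'R': 2211840, 'B': 2211840}

# A true 1:1:1 4K output would need one sample per channel per pixel:
print(4096 * 2160)      # 8847360 -- R and B each fall short by a factor of 4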

If it's anything less than that, then while the signal coming down the cable may carry equal amounts of R, G and B data, the streams won't contain equal amounts of picture information for each colour: the resolution of the B and R channels will be lower than that of the green. So while the signal might be 4:4:4, the system is not truly 4:4:4. Up-converting the 4:2:2 output from a camera to 4:4:4 does not make it a 4:4:4 camera. This is no different to the situation seen with some cameras with 10-bit HD-SDI outputs that only contain 8 bits of data: it might be a 10-bit stream, but the data is only 8 bit. It's like a TV station transmitting an SD TV show on an HD channel. The channel might call itself an HD channel, but the content is still SD, even if it has been upscaled to fill in all the missing bits.
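
The 10-bit-pipe, 8-bit-content point can be sketched in a couple of lines (the code values here are illustrative only): padding 8-bit samples into a 10-bit container changes the container, not the content.

Code:

import numpy as np

samples_8bit = np.array([0, 16, 128, 235], dtype=np.uint16)
samples_10bit = samples_8bit << 2   # left-shift 8-bit codes into a 10-bit range

# Every value is now a multiple of 4; the three 10-bit levels between
# each pair of adjacent original codes are never used.
print(samples_10bit)                # [  0  64 512 940]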

This is an issue within the industry in general. So-called standards are not really standards, as every manufacturer will manipulate or obscure the meaning of a term to suit their marketing requirements. "HD camcorder" might mean 720p or 1080i, and it doesn't necessarily mean that the camera can actually resolve an HD image; all it seems to guarantee is that the output signal will be an HD signal. An f1.8 lens might only be f1.8 under certain conditions. Resolution might be expressed as 1000 lines, but some manufacturers measure horizontally while others use vertical or TVL/ph. Some boast 10-bit HD-SDI, forgetting to mention that the top 2 bits are empty. You really have to look past the headline numbers and terms and look closely at what is really going on.

Steve Kalle February 12th, 2011 04:30 AM

Hi Alister,

Quick question: which cameras output 8 bits through a 10-bit HD-SDI?

Graeme Nattress February 12th, 2011 02:33 PM

Quote:

Originally Posted by Alister Chapman (Post 1616994)
But in most Bayer systems, the sensor is sub-sampling the chroma.

And that has nothing to do with 4:4:4 or 4:2:2, as those refer to subsequent chroma subsampling (or not) of an RGB image that has been converted to a luma/chroma space like Y'CbCr (so it's post gamma correction also).

If you want to refer to what a Bayer sensor does with regard to its ratio of green to red/blue pixels, then you can use the term Bayer pattern sensor. If you wish to refer to chroma sub-sampling before transmission or as part of a codec, then 4:2:2 or 4:2:0 etc. are the appropriate terminology.

As you point out, the spec of the pixel container doesn't determine measured resolution. Likewise, chroma sub-sampling notation doesn't determine measured chroma resolution.

Let's use the appropriate terminology and not apply a Y'CbCr chroma sub-sampling notation to a sensor with different ratios of R:G:B pixels.

Graeme

Alister Chapman February 13th, 2011 05:02 AM

4:4:4 can describe either Y'CbCr or RGB; it is not always referring to chroma sub-sampling or conversion to a luma/chroma colorspace. The Sony F3's optional dual-link output, for example, will be RGB 4:4:4 capable. HDCAM SR can use 4:4:4 RGB: there is NO color subsampling, nor colorspace conversion of the signal from the sensor. You can also refer to a sensor as 4:4:4 if it has a 1:1:1 ratio of RGB samples.

As I said. If the original data isn't there, you can fill your data pipe with as much data as you want, call it what you want, argue over the terminology all you want, but when the data comes out the end of your pipe the data still won't contain the missing picture information.

A camera is a system from lens to output. The output quality will be limited by the lowest common denominator, starting with the lens and working back through sensor, processing, encoding and output.

The implication, rightly or wrongly, in many end users' minds of having a camera with a 4:4:4 output is that the signal contains a full-resolution 1:1:1 ratio of picture information for Y'CbCr or RGB. But as the ratio of Y:Cb:Cr or R:G:B off the sensor or out of the processor is not 1:1:1, due to the sub-sampling of chroma compared to luma (or of R and B compared to G) by the Bayer sensor pattern, the picture information going into the 4:4:4 data encoder does not have a 1:1:1 ratio, and thus the picture information coming out of the 4:4:4 pipe will not have a 1:1:1 ratio. Anyone who says otherwise is trying to pull the wool over the end users' eyes.

Yes, strictly speaking the data is 4:4:4, but the content is not. The implication of 4:4:4 is that all-important 1:1:1 ratio of image data. You could even take a monochrome picture and encode its output as a 4:4:4 data stream; again the data would be 4:4:4, but it wouldn't be much good for chroma key. I'm sure anyone making such a camera would have a hard time justifying any 4:4:4 claim in such an extreme case.

If you refer to Sony's HDCAM format, you will often see it annotated as 3:1:1. Would you regard this as correct or incorrect? The cameras and decks record an anamorphic 1440x1080 image with 3 Y samples for each Cb or Cr sample, but for compatibility reasons the signal that comes down the HD-SDI cable is 1920x1080 4:2:2. So while the data coming down the HD-SDI cable is a 4:2:2 signal, the content is not, and no amount of reconstruction will ever make the 3:1:1 content the same as real 4:2:2.

It's important to understand this and realise that just because the data analyser, tech specs or marketing literature says the signal is 4:4:4 it does not necessarily mean 1:1:1 sampling of the light entering the camera lens. In fact, used the way it is by many manufacturers, it's actually pretty vague as a guide to final image quality.

Graeme Nattress February 13th, 2011 07:39 AM

Neither RGB nor Y'CbCr are colour spaces - they are ways of storing image data that can be in any of a multitude of colour spaces. A colour space is something like sRGB, REC709 or Adobe1998, and is defined by a transform from XYZ along with a white point.

To say "nor colorspace conversion of the signal from the sensor" would be incorrect. There are always quite a few processes that happen from the sensor to a viewable RGB image. A basic image processing system would be something like: sensor data -> (demosaic here if a Bayer CFA) -> black offset correction -> colour correction matrix -> gamma curve -> RGB Image. The colour correction matrix will be applying a colour space conversion from the native space of the camera sensor to REC709 colorimetry, for instance.

So, let's look at the original HDCAM: the recorded luma signal is 1440x1080, and the chroma is 1/3 of that at 480x1080, which is why it's 3:1:1. Note that the 3 does not refer to 1440 being 3/4 of 1920, but to the 480 being 1/3 of the 1440. Now, the particular numbers for HDCAM could work the other way too, referencing back to "4" as "full", but that screws up the meaning of 4:2:2 as it has been used: 16:9 recording on Digibeta would then, instead of being 4:2:2 as we know it, have to be something like 3:1.5:1.5, referencing back to the full RGB image before it got anamorphically squashed prior to recording. Similarly, with 16:9 DV we'd have recordings described as 3:1.5:0 (not even to get distracted by how the "0" is used in 4:2:0, which is a mathematical abomination). Panasonic's DVCProHD follows this convention too: the luma is reduced before recording from 1280x720 to 960x720 (for 720p), and then the chroma is halved to 480 due to 4:2:2.
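
Putting the recorded sample widths from those examples side by side (worked numbers taken from the figures above):

Code:

formats = {
    # name: (luma width, chroma width, note)
    "HDCAM 3:1:1":    (1440, 1440 // 3, "chroma is 1/3 of recorded luma"),
    "Digibeta 4:2:2": (720,  720 // 2,  "chroma is 1/2 of recorded luma"),
    "DVCProHD 720p":  (960,  960 // 2,  "luma prefiltered from 1280, then 4:2:2"),
}
for name, (y_w, c_w, note) in formats.items():
    print(f"{name}: Y'={y_w}, Cb/Cr={c_w} ({note})")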

That there is no SDI transmission standard for HDCAM's native luma- and chroma-sub-sampled data format does imply that a 4:2:2 1920x1080 signal is created by the deck as an output signal to be transmitted over the SDI. At that point it is real 4:2:2, or else it couldn't be transmitted. 4:2:2 refers to the signal, not to its content, so to say that "real" 4:2:2 means a 4:2:2 signal which comes from a source with high enough resolution in luma and chroma to justify the name is not correct. It's either a valid 4:2:2 format or it's not formatted correctly. The notation assumes nothing about quality, and as noted above for 3:1:1 HDCAM, it does not tell us the luma resolution at all, just how the chroma is sub-sampled compared to the luma.

"It's important to understand this and realise that just because the data analyser, tech specs or marketing literature says the signal is 4:4:4 it does not necessarily mean 1:1:1 sampling of the light entering the camera lens." That is correct. 4:4:4 refers to no chroma sub-sampling at a particular part of an image recording chain and that is that. Indeed, it has never referred to back to even as far back as the previous full RGB image (see anamorphic Digibeta case above) never mind light entering the lens!

Graeme

Peter Moretti February 13th, 2011 09:12 AM

Quote:

Originally Posted by Steve Kalle (Post 1617387)
Hi Alister,

Quick question: which cameras output 8 bits through a 10-bit HD-SDI?

Canon's HDV cameras are an example.

Peter Moretti February 13th, 2011 09:19 AM

Quote:

Originally Posted by Graeme Nattress (Post 1617727)
... There are always quite a few processes that happen from the sensor to a viewable RGB image. A basic image processing system would be something like:

sensor data -> (demosaic here if a Bayer CFA) -> black offset correction -> colour correction matrix -> gamma curve -> RGB Image.

The colour correction matrix will be applying a colour space conversion from the native space of the camera sensor to REC709 colorimetry, for instance. ...

Graeme,

Thanks for contributing to this discussion. May I ask, in the case of the Red, where does compression take place? I'm assuming before the colour correction matrix (so colour correction matrix -> gamma curve -> RGB image all take place in REDCine-X, not in camera).

But then that begs the next question in my mind: what is "black offset correction"?

Graeme Nattress February 13th, 2011 09:22 AM

The analogue output of a pixel in a sensor will not produce zero volts at pure black (lens-cap-black), but instead there will be an offset voltage. Colorimetry math assumes linear colour data with black at zero, so a correction needs to be made.
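
A minimal sketch of that correction (the black level here is a made-up 10-bit code value, purely for illustration):

Code:

import numpy as np

def correct_black_offset(raw, black_level):
    """raw: linear sensor code values; black_level: measured lens-cap black."""
    return np.clip(raw.astype(np.float32) - black_level, 0.0, None)

raw = np.array([62, 64, 70, 1023], dtype=np.uint16)   # hypothetical 10-bit codes
print(correct_black_offset(raw, black_level=64))      # [  0.   0.   6. 959.]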

In the RED, the REDCODE compression works directly on the raw data from the sensor. There's no demosaic or colour correction before compression. In REDCine-X, the REDCODE decodes back to raw, and from there the image processing pipeline works on the decompressed raw sensor data.

Graeme

Peter Moretti February 13th, 2011 09:50 AM

Right, Graeme, I know that the Red compresses before demosaic. It's kind of essential for a RAW workflow.

What I guess I was getting at is: are there any steps that happen between sensor data and compression? It seems that black offset correction and maybe some other "low-level" processing might/should happen before compression.

Thanks.

Graeme Nattress February 13th, 2011 09:53 AM

Black shading and pixel correction, then off to compression while still raw.

Graeme

Peter Moretti February 13th, 2011 10:07 AM

Some of the other cameras bake in values like WB at the A/D stage at very high bit depths (like 14 or 16 bits). (Not to imply there is some deficiency in how Red does it.)

Am I remembering correctly that there will be a new REDCODE coming out that will be 16-bit? And will this be across the Red line or just the Epic-X?

Thanks again.

Alister Chapman February 13th, 2011 10:12 AM

Quote:

Originally Posted by Graeme Nattress (Post 1617727)
Neither RGB nor Y'CbCr are colour spaces - they are ways of storing image data that can be in any of a multitude of colour spaces. A colour space is something like sRGB, REC709 or Adobe1998, and is defined by a transform from XYZ along with a white point.

Yes and no. By definition Y'CbCr is both a description of the way the luma and chroma signals are encoded and a family of non-linearly encoded color spaces, which includes some of the ones you have outlined above. RGB is both a description of the image processing method and a family of additive color spaces. RGB itself is a color space with unlimited gamut. Conversion between Y'CbCr and RGB color spaces can sometimes lead to an overall reduction in gamut.

It would perhaps have been clearer if I had used the term encoding instead of color space, but the argument remains the same.

If the source isn't up to it, 4:4:4 potentially brings no advantage over 4:2:2.
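
As a sketch of the gamut point: a legal Y'CbCr triplet can decode to R'G'B' values outside the 0..1 range, which then get clipped on conversion to RGB (Rec.709 coefficients; the values are chosen purely for illustration):

Code:

import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Invert the Rec.709 Y'CbCr encoding back to non-linear R'G'B'."""
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return np.array([r, g, b])

rgb = ycbcr_to_rgb(y=0.1, cb=-0.3, cr=0.45)
print(rgb)                         # R ~ 0.81, G and B go negative
print(np.clip(rgb, 0.0, 1.0))      # clipping discards the out-of-gamut part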

