8bit vs 10bit Aquisition - Page 2 at DVinfo.net
Old July 29th, 2012, 03:12 PM   #16
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Aquisition

Quote:
Originally Posted by Bruce Watson View Post
I'm familiar with macro-blocking, and what I'm seeing isn't macro-blocking. It's classic standard banding. Just like you get from Photoshop working on an 8 bit scan. That kind of banding.
Please, please do the Photoshop test I outlined earlier. I guarantee you'll be surprised. Macro-blocking is one form of artifacting due to compression, but banding (which does indeed look "like you get from Photoshop working on an 8 bit scan") is another, and nothing to do with bit depth.

It's difficult to be specific, since with broadcast encoders all sorts of things can vary: the ratio of data allocated to the luminance and chroma channels is one, and the variation of allocation between I-frames and difference frames is another. The key is the point you made before - that the banding issues are mainly noticeable on gradients with saturated colours. That's largely because the bitrate allocated to chroma compression is low compared with luminance, and because chroma block sizes are large compared to luminance blocks due to subsampling.
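A tiny numpy illustration of the point - a synthetic chroma ramp with a crude re-quantiser standing in for a real encoder's coarse chroma quantisation (this is not any actual broadcast codec, just the mechanism):

```python
import numpy as np

# A smooth horizontal gradient in a chroma channel, already at 8 bits.
chroma = np.linspace(90, 170, 1920).astype(np.uint8)

# 8-bit quantisation alone: plenty of code values, no visible bands.
print(len(np.unique(chroma)))  # 81 distinct levels across the ramp

# Heavy compression approximated as coarse re-quantisation of chroma
# (a stand-in for the large quantiser steps an encoder applies when
# it has too few bits for the chroma channel).
step = 8
crushed = (chroma // step) * step
print(len(np.unique(crushed)))  # 11 levels -> visible bands
```

The banding appears even though both ramps are 8-bit throughout: it's the quantiser step, not the container bit depth, that makes the bands.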
Quote:
I'll also second the comment that 4:2:0 chroma subsampling is part of the problem. I suspect that the local commercials I'm talking about were captured to AVCHD, so compressed in a bunch of different ways, including 4:2:0 and eight bits per channel. Then color graded and polished up. I'm not surprised this results in banding. No one should be.
Bear in mind that AFAIK *ALL* transmission-to-home and Blu-ray systems are 8 bit and 4:2:0. And those factors in themselves do not cause any problems - the problems come from not giving them enough bits to do their job well!

In the acquisition world, systems tend to have defined bandwidths, and if they are restricted (as with AVCHD) designers have to decide how to balance compromises. Going for 10-bit/4:2:2 means more data to be compressed and hence far higher compression - likely worse overall than 8-bit/4:2:0!

If you see problems, the likelihood is that the prime cause is too low a bitrate and too high a compression ratio. Moving to 10 bit may just make matters worse unless the bitrate is increased proportionately - it would just mean even higher compression.
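The arithmetic behind that trade-off, for a 1920x1080 frame (raw, pre-compression figures only):

```python
# Raw (pre-compression) data per 1920x1080 frame under each sampling scheme.
w, h = 1920, 1080

# 4:2:0, 8 bit: full-res luma plus two quarter-res chroma planes.
bits_420_8 = (w * h + 2 * (w // 2) * (h // 2)) * 8

# 4:2:2, 10 bit: full-res luma plus two half-width chroma planes.
bits_422_10 = (w * h + 2 * (w // 2) * h) * 10

print(bits_422_10 / bits_420_8)  # 1.666...: two-thirds more data to squeeze
```

So at a fixed recording bitrate, the 10-bit/4:2:2 stream must be compressed two-thirds harder than the 8-bit/4:2:0 one before it fits.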
David Heath
Old July 29th, 2012, 03:12 PM   #17
Inner Circle
 
Join Date: Dec 2005
Location: Bracknell, Berkshire, UK
Posts: 4,957
Re: 8bit vs 10bit Aquisition

Sareesh:

You won't see a difference between 10-bit and 8-bit sampling on a vectorscope or waveform monitor, because those instruments measure amplitude and phase, and amplitude and phase are the same at either bit depth. A waveform monitor rarely has the resolution to show the roughly 220 grey shades in an 8-bit broadcast-range signal, let alone the roughly 877 shades in a 10-bit signal, and if you're looking at a camera's output, noise will diffuse any steps that might otherwise be visible. A normal waveform monitor/vectorscope is completely the wrong tool for finding any difference between 8 bit and 10 bit. It's like using a VU meter or audio level meter to determine audio frequency.

Some histograms will tell you whether a signal is 8 bit or 10 bit by the number of steps there are from left to right across the histogram. Some NLEs and grading tools can return the data value for points within recorded images; this may also tell you whether the signal is 8 bit or 10 bit.
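A sketch of that histogram clue, using synthetic random samples rather than real camera data - the only assumption is that 8-bit codes scale to every fourth 10-bit code value:

```python
import numpy as np

# Synthetic samples: 8-bit material carried in a 10-bit container occupies
# only every fourth code value, so its histogram looks like a comb.
rng = np.random.default_rng(0)
eight_in_ten = rng.integers(16, 236, size=100_000) * 4   # 8-bit source, scaled
true_ten = rng.integers(64, 941, size=100_000)           # genuine 10-bit range

def smallest_step(samples):
    """Spacing between occupied code values: 4 hints at an 8-bit origin."""
    return int(np.diff(np.unique(samples)).min())

print(smallest_step(eight_in_ten), smallest_step(true_ten))  # comb vs continuous
```

The same check works in any tool that can read back pixel values: if every occupied 10-bit code is a multiple of 4, the material almost certainly started life as 8-bit.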

The Alan Roberts reference is a bit of a red herring. The reason the RED's output was not deemed suitable for broadcast has nothing to do with bit depth. It is because the real-time de-bayering employed by RED introduces significant artefacts into the image. RED is designed around its raw workflow; the HD-SDI output is for on-set monitoring only and not really meant to be used for off-board recording.

Engineers don't just look at a monitor and trust their eyes, if it was that simple there would be no need for engineers.

One test you can do with almost any NLE to assess the practical, real-world difference between acquisition in 8 bit and 10 bit for your camera is to record the same scene at both 8 bit and 10 bit. You can try different scenes to see how different subjects are handled; blue sky and flat walls can be very revealing. Then bring the clips into the NLE or grading package and use a gain/brightness effect or filter to reduce the image brightness by 50%. Render out that now-dark clip as an uncompressed 10-bit file. Then apply a gain/brightness filter to the new uncompressed file to return the video levels to those of the original. By layering the original over the now-corrected uncompressed clip and using a difference matte you can see the differences between the 8-bit and 10-bit performance. How much or how little difference there is will depend on many factors including subject, noise, compression artefacts etc. It is best to view the pictures on a large monitor. For this test to be meaningful it is vital to ensure the NLE is not truncating the clips to 8 bit.
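For anyone who would rather see the shape of this test without a camera, here is a rough numpy sketch of the same procedure on a synthetic ramp - quantisation stands in for the two recording depths, and real footage would add noise and codec artefacts on top:

```python
import numpy as np

# Simulate the halve-render-restore test on a synthetic scene.
# Work in 10-bit code-value units; rounding mimics each recording depth.
ramp = np.linspace(64, 940, 1920)        # smooth gradient, the "true" scene

rec8 = np.round(ramp / 4) * 4            # 8-bit recording (steps of 4 codes)
rec10 = np.round(ramp)                   # 10-bit recording

def stress(clip):
    """Halve the levels, round (the intermediate 10-bit render), restore gain."""
    dark = np.round(clip * 0.5)
    return dark * 2.0

# The 'difference matte': worst-case error versus the original scene.
err8 = np.abs(stress(rec8) - ramp).max()
err10 = np.abs(stress(rec10) - ramp).max()
print(err8, err10)   # the 8-bit recording drifts further from the source
```

The numbers are tiny on a clean ramp; the point of doing it with real footage, as described above, is to see how noise and compression widen that gap.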

While Edius may be 10 bit, I think you still need to check whether QuickTime on a PC is 10 bit. If QuickTime on a PC still truncates to 8 bit, then having a 10-bit edit package won't help.

Bruce: excessive compression will absolutely cause banding in an image. Most banding artefacts that people see are not down to bit depth but to quantisation noise caused by insufficient data to record subtle image changes. Perhaps there isn't enough data to record 10 shades in a gradient, so the 10 shades get averaged together into 4 and the end result is steps. Another issue is that the OTA signal is at best 8 bit; this is then passed to the TV's processing circuits, which do all kinds of image manipulation in an attempt to make the pictures look good on the screen. This processing is commonly done at 8 bits, and 8-bit processing of an 8-bit signal can lead to further issues.
__________________
Alister Chapman, Film-Maker/Stormchaser http://www.xdcam-user.com/alisters-blog/ My XDCAM site and blog. http://www.hurricane-rig.com
Alister Chapman
Old July 29th, 2012, 03:44 PM   #18
Inner Circle
 
Join Date: Jan 2006
Posts: 2,699
Re: 8bit vs 10bit Aquisition

Quote:
Originally Posted by Sareesh Sudhakaran View Post
The Red One has 12-bit processing. The C300 has 8-bit processing. Yet the latter is fine for broadcast, the former converted to 10-bit is not. Which one looks better on a 10-bit monitor?
Not true - for starters, the C300 has an 8-bit output signal. That is not the same as 8-bit processing. The reference to Red, I suspect, is really referring to 12-bit RAW recording.

Alister said earlier:
Quote:
S-Log and other Log formats designed for 10 bit, now that's another story all together.
Quite correct, and it's crucial to understand the difference between processed video, S-log and RAW. For the former, 8 bits are generally enough (which is why the C300 gets full approval). For S-log, and certainly RAW, they are most certainly not. The latter two record a much wider range of values than is normally the case, and displayed on a normal monitor they will likely look very flat and low in contrast.

The extra bits are needed for the processing - after which 8 bits will then normally be adequate.

S-log and 10 bit will certainly give more scope for post processing than 8 bit - but it's the combination that makes the difference, not just the 10-bit factor. Processed 10-bit video is not the same as 10-bit S-log.
Quote:
Just because you have that many extra bits it doesn't mean those bits contain anything useful.
Quite true. And this is where the comments about noise come in. Except for the best cameras, the extra bits are likely to be just filled with noise!
Quote:
Based on theory, Prores LT with its DCT intraframe codec should leave MPEG-2 in the dust - however in practice I believe they should be just about visually equal.
Sorry, but MPEG-2 is DCT based as well (see the Wikipedia article on the discrete cosine transform). If you really want to plough through the theory, it explains why excessive compression leads to banding, as said earlier; otherwise the key phrase is:
Quote:
The DCT is used in JPEG image compression, MJPEG, MPEG, DV, and ......
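A minimal numpy sketch of that shared machinery: an orthonormal 8x8 DCT-II with a deliberately brutal quantiser, which is where the banding actually comes from. This is an illustration of the mechanism only, not any real codec's quantisation matrix:

```python
import numpy as np

# MPEG-2, DV, JPEG and ProRes all transform 8x8 blocks with the DCT;
# banding appears when the quantiser applied to the coefficients is too
# coarse, whichever codec wraps the transform.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.cos(np.pi * k * (2 * n + 1) / (2 * N)) * np.sqrt(2 / N)
C[0] /= np.sqrt(2)                    # orthonormal DCT-II basis matrix

block = np.tile(np.linspace(100, 120, N), (N, 1))   # gentle gradient block
coeffs = C @ block @ C.T                             # forward 2-D DCT

q = 16                                               # deliberately brutal step
decoded = C.T @ (np.round(coeffs / q) * q) @ C       # quantise and invert

print(np.abs(decoded - block).max())  # reconstruction error on a smooth ramp
```

With q = 1 the block comes back essentially untouched; it is the size of the quantiser step, set by the available bitrate, that turns a smooth gradient into visible steps.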
David Heath
Old July 29th, 2012, 08:18 PM   #19
Major Player
 
Join Date: Oct 2009
Location: Reno, NV
Posts: 553
Re: 8bit vs 10bit Aquisition

Quote:
Originally Posted by Tim Polster View Post
I have a Varicam "H" model camera that has 10bit HDSDI output. For a year now I have been using a Nanoflash to record my footage which is an 8bit recorder. For the same money I could be using a 10bit Samurai recorder.
While I expect the 1280x720-resolution 2/3" chips in the H model produce relatively less noise than most of the cameras discussed in this forum, the main issue appears to be whether you are recording a flat image profile such as cinegamma. If you record a flat image with the intention of pushing the colors in post, then 10-bit 4:2:2 can make a noticeable difference. If you are using standard video gamma with no color correction in post, then 8-bit recording is enough.

Note that computer displays are typically 8-bit and distribution is 8-bit, while many flat-screen TVs display only 6 bits of color. Acquisition and post processing can benefit from higher color depths, but your final product never needs more than 8-bit color.
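One reason the final 8-bit product can still look clean is dithering at the down-conversion. A small numpy sketch, assuming a synthetic gradient and uniform dither of about one 8-bit LSB (real converters typically use shaped or triangular noise):

```python
import numpy as np

# Delivering a 10-bit master as 8-bit. Plain rounding turns a subtle
# gradient into a few wide flat bands; adding ~1 LSB of noise (dither)
# before rounding breaks the bands up, which is why a well-made 8-bit
# deliverable can still look smooth.
rng = np.random.default_rng(1)
master10 = np.linspace(256, 272, 4096)            # subtle ramp, 10-bit units

truncated = np.round(master10 / 4)                # plain 10-bit -> 8-bit
dithered = np.round((master10 + rng.uniform(-4, 4, master10.size)) / 4)

def longest_run(codes):
    """Length of the widest flat band of identical code values."""
    change = np.flatnonzero(np.diff(codes) != 0)
    edges = np.concatenate(([0], change + 1, [codes.size]))
    return int(np.diff(edges).max())

print(longest_run(truncated), longest_run(dithered))  # wide bands vs short runs
```

The dithered version uses the same five or so output codes, but scatters them so the eye averages them back into a gradient instead of seeing bands.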

Last edited by Eric Olson; July 30th, 2012 at 01:09 AM.
Eric Olson
Old July 29th, 2012, 10:07 PM   #20
Trustee
 
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Aquisition

Quote:
Originally Posted by David Heath View Post
Not true, for starters the C300 is an 8 bit output signal. That is not the same as 8 bit processing. The reference to Red I suspect is really referring to 12 bit RAW recording.
David, what is the real signal-processing bit depth in the C300? I've shot with it and I have no clue. Red One claims 12-bit DSP, and in my experience with it I've seen that it's better than 8-bit and slightly worse than 12-bit linear. Red has a sampling system quite unlike any other camera, due to their compressed RAW codec. So I wouldn't believe their claims either.

Quote:
Quite correct, and it's crucial to understand the difference between processed video, s-log and RAW.
Before S-log, RAW and everything else - there's sampling. That's where everything is decided. Please feel free to ask any camera manufacturer for details of their sampling process.

Quote:
The extra bits are needed for the processing - after which 8 bits will then normally be adequate.

S-log and 10 bit will certainly give more scope for post processing than 8 bit - but it's the combination that makes the difference, not just the 10 bit factor. Processed 10 bit video is not the same as 10 bit s-log.
Thanks for clarifying - I was only strictly referring to linear data.

Quote:
Sorry, but MPEG2 is DCT based as well. Discrete cosine transform - Wikipedia, the free encyclopedia If you really want to plough through the theory, that explains why excessive compression leads to banding as said earlier, otherwise key phrase is:
DCT is an algorithm, not a compression scheme. Under the hood, many standards mix and match algorithms. A codec is a framework, a protocol - that is why it can be manipulated, as in the case of H.264 or AVCHD over the MPEG-4 protocol. Intraframe DCT is a different thing altogether.

I learnt this lesson while programming an image processing engine (similar to Photoshop) 12 years ago for my college final-year project - I used BMP, TIFF and JPEG specifically.

To be honest I don't care about the numbers anymore - what I really learnt was that my eyes were the best judge. Manufacturers hide too many things, and marketing is very powerful, and who has the time to sit and analyze each camera system - especially when it will be obsolete by the next trade show?
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Sareesh Sudhakaran
Old July 30th, 2012, 12:17 AM   #21
Trustee
 
Join Date: Jan 2008
Location: Mumbai, India
Posts: 1,385
Re: 8bit vs 10bit Aquisition

Quote:
Originally Posted by Alister Chapman View Post
Sareesh:

You won't see a difference between 10-bit and 8-bit sampling on a vectorscope or waveform monitor, because those instruments measure amplitude and phase, and amplitude and phase are the same at either bit depth.
Never said I was looking for differences between 10-bit and 8-bit. I am looking at the signals and the sampling.

By the way, a waveform monitor displays voltage over time. An oscilloscope (or a digital waveform monitor) can freeze a wave for further study.

A vectorscope takes two simultaneous waves - even within a complex wave such as a video signal - and plots them against each other. In video, I could display Cb and Cr on a vectorscope.

For SMPTE 292M, I will need a scope designed to test this particular signal. It will show me the wave pattern in relative amplitude and relative phase - from which I can derive the wave function of that particular wave. The wave function tells me everything I need to know.

Quote:
A waveform monitor rarely has the resolution to show the roughly 220 grey shades in an 8-bit broadcast-range signal, let alone the roughly 877 shades in a 10-bit signal, and if you're looking at a camera's output, noise will diffuse any steps that might otherwise be visible. A normal waveform monitor/vectorscope is completely the wrong tool for finding any difference between 8 bit and 10 bit. It's like using a VU meter or audio level meter to determine audio frequency.
Forgive me, but I feel we are talking about two different things, and I am to blame for it. To clarify, I'm talking about a digital waveform analyzer capable of generating test signals under the 292M specification, like the ones Leader makes.

From such a device, by studying the signals and Y'CbCr values and cross-referencing them against test patterns, I can reverse engineer the sampling process.

By comparing this data with other signals (test, random and actual) I can tell very easily the 'quality' of the color information present in a signal. If I felt particularly loony, I could also reverse engineer the tristimulus values from the chrominance information and derive the sensor and Rec.709 color spaces, just to show off. This is how the scope knows whether you are within the bounds of a particular gamut or not - except I might do the calculations manually just because I don't trust my scope either! It all depends on how paranoid I am on any given day.

Once I know which color space I'm in, I know how many colors I need - and from that information I will know whether the data needs 8-bit or 10-bit word lengths to be represented accurately. I don't care what the data already is - you can put a scooter engine, an elephant or a Ferrari engine in a Ferrari body. What I really want to know is how it was sampled.

Guess what I learnt? No matter what the color space, I always need 32-bit (or the maximum possible) words - every gamut has infinite potential combinations - it's like Zeno's paradox.

But since 292M carries only 10-bit samples, I have to use my eyes and judge for myself whether I can live with it. 8-bit is minimum wage; 10-bit is a pat on the back with your minimum wage. The difference between 8-bit and 10-bit in practice is negligible - both in signal characteristics and visually.

But this is my opinion, for my own workflow, based on my training and experience. I would like to believe I am right, but I might be totally wrong, and I might be the weakest link in my workflow.

Quote:
Some histograms will tell you whether a signal is 8 bit or 10 bit by the number of steps there are from left to right across the histogram. Some NLEs and grading tools can return the data value for points within recorded images; this may also tell you whether the signal is 8 bit or 10 bit.
Beware of histograms - are they showing the RAW data or a processed signal that has already been sampled? How is the histogram's color information derived? Is it debayered information? Is it 8-bit, 10-bit, 16-bit or something else? What are the 'clipping' parameters of any particular histogram? You might be surprised to learn it is a subjective thing. You can see this in practice because no two camera manufacturers design histograms the same way, and no two RAW processing engines read histograms the same way either.

E.g., in digital photography, many high-end cameras only show JPEG histograms with clipping warnings. When one pulls these files into a processing engine, one is surprised to see the histogram was not really accurate. Whom should I believe - the sensor manufacturer, the signal processing engineer, the compression engineer or the software developer who coded the RAW engine?

I'm all for simple tools to understand data - histograms, waveforms, vectorscopes, etc. These tools tell me what ballpark I'm in, and on the field they are a great help. But I still prefer a good monitor as the easiest way to get where I want to go. The eye is just another tool - one of my favorites!

As a side note, I love the fact that Blackmagic has decided to ship UltraScope free with their camera, using Thunderbolt.

Quote:
The Alan Roberts reference is a bit of a red herring. The reason the RED's output was not deemed suitable for broadcast has nothing to do with bit depth. It is because the real-time de-bayering employed by RED introduces significant artefacts into the image. RED is designed around its raw workflow; the HD-SDI output is for on-set monitoring only and not really meant to be used for off-board recording.
Never meant the reference to be an example of bit depth. My words: "Red's mistake? Either poor sampling engineering or a sensor that cannot conform to the traditional HDSDI format."

It's a sampling problem, caused by the compressed RAW scheme employed by Red. They probably had to resample an already-sampled image for HD-SDI. I'm not sure how many have wondered why Red can't give out an uncompressed 4K/5K stream instead of REDCODE.

The sampling of the sensor signals, combined with the sensor's gamut, bayering mechanism and filtering process, determines everything.

Quote:
Engineers don't just look at a monitor and trust their eyes, if it was that simple there would be no need for engineers.
Sorry to disappoint you, but engineers are human too. :) There are no compulsions or laws of the universe that force engineers to choose between two legal voltage ranges in a single semiconductor transistor, let alone a circuit, microprocessor or sensor. When it comes to software, it's all subjective. In the end, the exact parameters of camera systems are arrived at on subjective estimates - even if they're a committee's.

E.g., a RAW file is just data - if you open a RAW file in different RAW processing engines you will get different results. If you apply different algorithms you'll get different results. Two issues qualify as suspects to explain this: 1. Patents. 2. Subjectivity.

If I'm looking at a signal and doing my math based on what I know - I'll arrive at a certain conclusion. Another engineer will see it a totally different way. The variety of electronic devices and software programs in the world show that clearly. You can interpret results differently, and change the world based on those interpretations.

The only way I can know if I'm still sane at the end of the day is by looking at the result like a lay person. Does red look red? Does the music note sound the way I want it to sound? Only then is the math worth it. Don't you think?

Anyway, I don't speak for all engineers, only myself!

Quote:
One test you can do with almost any NLE to assess the practical, real-world difference between acquisition in 8 bit and 10 bit for your camera is to record the same scene at both 8 bit and 10 bit. You can try different scenes to see how different subjects are handled; blue sky and flat walls can be very revealing. Then bring the clips into the NLE or grading package and use a gain/brightness effect or filter to reduce the image brightness by 50%. Render out that now-dark clip as an uncompressed 10-bit file. Then apply a gain/brightness filter to the new uncompressed file to return the video levels to those of the original. By layering the original over the now-corrected uncompressed clip and using a difference matte you can see the differences between the 8-bit and 10-bit performance. How much or how little difference there is will depend on many factors including subject, noise, compression artefacts etc. It is best to view the pictures on a large monitor. For this test to be meaningful it is vital to ensure the NLE is not truncating the clips to 8 bit.
Excellent tip, Alister - this is EXACTLY what I can do with a signal analyzer, except there's no romance to the process when an engineer does it!
__________________
Get the Free Comprehensive Guide to Rigging ANY Camera - one guide to rig them all - DSLRs to the Arri Alexa.
Sareesh Sudhakaran
Old July 30th, 2012, 10:42 AM   #22
Inner Circle
 
Join Date: Dec 2004
Location: Arlington, TX
Posts: 2,231
Re: 8bit vs 10bit Aquisition

Thanks everybody for your input on this thread. As usual I have learned a lot. I called Dan Keaton today and he pointed out that HD-SDI signals are always 10-bit, but the Varicam is still an 8-bit camera.

This is what I had always thought, but I misread some information recently which caused me to think the camera was actually 10-bit. So my decision is easy: stay with the Nanoflash.
Tim Polster



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network