DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   2nd Unit Television (https://www.dvinfo.net/forum/2nd-unit-television/)
-   -   16 and 32 bit color correction (https://www.dvinfo.net/forum/2nd-unit-television/95868-16-32-bit-color-correction.html)

Paolo Ciccone June 5th, 2007 06:30 PM

16 and 32 bit color correction
 
Given that many cameras, especially in the prosumer space, record color information at 8-bit precision, it might sound absurd to hear people talking about using 16-bit precision color correction techniques. After all, no matter how much precision you throw at the problem, you will end up back at 8 bits (DVD/TV), so what's the point?
Let me give you an example.

In order to understand this we need to have at least a passing knowledge of the binary system. As you probably know, digital information is stored using binary digits, or "bits". A binary digit is the same as a decimal digit except that it only assumes two values, zero and one. Binary math is the same as decimal math except that only numbers made out of zeros and ones are used.

The normal rules of counting apply. When I ask people to formally describe the process of counting, very few can actually answer. That's because the process is so familiar that we can't see how to formalize it.
Here it is:

* first you count using all the symbols available. In decimal we have 10 symbols, from 0 to 9.
* when you reach the last symbol you shift to the left, write the first non-zero symbol, and then fill the "gap" to the right with zeros.
* restart counting until you reach the end of the symbols again. When that happens, increment the symbol to the left and reset the symbol to the right to zero. If you reach the end of the symbols on the left, shift left another step.


If you follow the above rules, you can count in binary. In the following table the left side is binary, the right side is the decimal equivalent.

1 = 1
10 = 2 (shift left, fill with 0)
11 = 3 (restarted counting, reached the end of the symbols, time to shift again)
100 = 4
101 = 5
110 = 6
111 = 7
1000 = 8
1001 = 9
1010 = 10



And so on.
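If you want to check the table yourself, here's a tiny sketch in Python (just an illustration; the built-in bin() does the base-2 conversion for us):

Code:

# Print the binary/decimal table from above using Python's bin() built-in.
for n in range(1, 11):
    print(f"{bin(n)[2:]:>4} = {n}")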

Another useful piece of information is how to determine the maximum value of a number, given its number of digits. For example, if we have 3 decimal digits we have no problem understanding that the maximum value is 999. But what is it in binary? The same rules apply again. To formally define the maximum value that we can express with a given number of digits, we take the base of the numeric system, 10 for decimal or 2 for binary, raise it to the power of the number of digits, and subtract 1.

For example: 4 decimal digits is 10 to the power of 4, minus 1: 10 times 10 = 100, times 10 = 1000, times 10 = 10000, minus 1 = 9999.

That's why in binary we can store a maximum value of 255 if we use 8 bits: 2 to the power of 8, minus 1, is in fact 255.
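The maximum-value formula is a one-liner to verify (again, just a sketch):

Code:

# Maximum value expressible with d digits in base b is b**d - 1.
print(10**4 - 1)  # 9999 -> four decimal digits
print(2**8 - 1)   # 255  -> eight bits, one byte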

All right, now that we have this little bit of binary foundation, let's see how it applies to colors.

Let's say that I have a nice scene to which I want to add a golden hue, just to make it look warmer. I could load the clip in After Effects, create an orange solid, set the transparency to a low value and see what happens. If some pixels were already quite "warm", orangey, we risk oversaturating the value.
For the sake of simplicity let's assume that we have a byte set near the maximum, at 254. That is 11111110 in binary. If we add a value of just 4, 100 in binary, we end up with a final value of 100000010. Count the digits: there are 9. We have only 8 "slots" available in memory, so your color could be truncated to 00000010. Quite a far cry from 11111110; in fact it's at the opposite end of the value scale. All of a sudden your image looks like crap and you don't know why.
If you switch your composition to use 16 bits then the colors will look good again.
Of course, if you convert the footage back to 8 bits later you will lose part of the color information, but at least the rounding will be done to the closest value instead of overflowing. In the worst case scenario you end up with the original value; in the best case scenario your color grading works and your antialiasing is calculated at a higher precision. This is in fact why, when film is digitized, the process is done at 4K resolution even if the final master is at 2K. The higher resolution allows for better anti-aliasing calculation.
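Here's a minimal sketch of that wraparound (my own toy example using NumPy's 8-bit unsigned math, not anything After Effects does internally):

Code:

import numpy as np

# 8-bit unsigned arithmetic wraps around: this is the "9th digit
# falls off the end" overflow described above.
pixel = np.array([254], dtype=np.uint8)        # 11111110
print(pixel + np.uint8(4))                     # [2] -- wrapped to 00000010

# At higher precision the true value survives, and the conversion
# back to 8 bits can clip to the legal range instead of wrapping.
wide = pixel.astype(np.uint16) + 4             # [258]
print(np.clip(wide, 0, 255).astype(np.uint8))  # [255] -- clipped, not wrapped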

Hope this helps!

Greg Boston June 11th, 2007 07:36 PM

Yes Paolo, it is said that there are only 10 types of people in the world. Those that understand binary, and those that don't.

-gb-

Shawn McCalip June 13th, 2007 02:06 PM

Very interesting post! Thank you for explaining binary in a way that for the first time, makes sense to me. This is excellent stuff to know!

Stephen Armour July 6th, 2007 07:04 PM

Quote:

Originally Posted by Paolo Ciccone (Post 692473)
..of course, if you convert the footage back to 8 bits later you will lose part of the color information, but at least the rounding will be done to the closest value instead of overflowing. In the worst case scenario you end up with the original value; in the best case scenario your color grading works and your antialiasing is calculated at a higher precision. This is in fact why, when film is digitized, the process is done at 4K resolution even if the final master is at 2K. The higher resolution allows for better anti-aliasing calculation.

Hope this helps!

Is that one of the reasons Cineform's 10-bit upconversion helps? All the editing/processing is done with more bit info before the final output?

Paolo Ciccone July 8th, 2007 12:33 AM

Thank you guys for the feedback, glad that helped. Loved the joke about 10 people... :)

Stephen, yes, Cineform helps if you convert from HDV, because of the higher bit accuracy. I also found that companies dedicated to developing codecs, as opposed to NLE producers, which have to deal with several aspects of the technology, have better accuracy when it comes to reproducing scenes. My favorite codec so far is SheerVideo by BitJazz: http://www.bitjazz.com

Glenn Chan July 9th, 2007 01:29 AM

Quote:

Of course, if you convert the footage back to 8 bits later you will lose part of the color information, but at least the rounding will be done to the closest value instead of overflowing. In the worst case scenario you end up with the original value; in the best case scenario your color grading works and your antialiasing is calculated at a higher precision. This is in fact why, when film is digitized, the process is done at 4K resolution even if the final master is at 2K. The higher resolution allows for better anti-aliasing calculation.
1- In practice, NLEs and color grading systems are designed to avoid overflow errors. You don't get them in 8-bit work unless the programmers made a mistake and there is a bug (e.g. Cineform had some overflow errors at one time; they have since been fixed).

2- Aliasing is something else.

3- As far as 8-bit versus 16-bit and 32-bit goes:

If you are doing linear light processing (e.g. linear blending in AE7), then AE's 32-bit mode definitely improves quality (this is noticeable).

If not, then the difference between 8-bit and higher is a minor difference in precision. Suppose your original value is 100. Filter A divides everything by 3, and filter B multiplies everything by 3.

In (most) 8-bit implementations, the value will go from 100 --> 33 (the fraction 100/3 rounds down to 33) --> 99.

With higher bit depth, the value will go from 100 --> 33.333 repeating --> ~100 --> 100.

So your calculations can be slightly off in 8-bit. Visually, you very rarely see these errors. The only time you would see them is when there are large gradients... these errors will show up as banding. However, noise in the image will mask banding (and most images will ultimately pick up noise). And images without large gradients won't exhibit banding.
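If you want to see those numbers for yourself, here is a quick sketch in plain Python (my own toy version of the two filters):

Code:

x = 100
a = x // 3           # 8-bit style integer math: 33, the .333... is lost
print(a * 3)         # 99 -- one code value off after the round trip

y = 100.0
b = y / 3            # higher-precision math keeps the fraction: 33.333...
print(round(b * 3))  # 100 -- the round trip comes back exact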

Bill Ravens July 9th, 2007 07:21 AM

In practice, Glenn, as I'm sure you know, when processing scenes with a significant amount of sky, even the slightest amount of CC'ing will produce banding in the sky at 8 bits. Noticeable and unacceptable, to my eye. Similarly, banding will occur in skin tones and other broad monochrome areas when faint shadows produce subtle gradients in tone. You can see this "problem" in the histogram of a frame after it has been processed. This is the reason I've long been a proponent of 10-bit (as a minimum) processing. While Vegas is limited to an I/O of 8 bits, its internal processing engine works in 10 bit.

Glenn Chan July 9th, 2007 01:46 PM

Vegas' internal processing engine doesn't work in 10 bits AFAIK. From filter to filter, 8 bits get passed. The filters can convert the values to 32-bit int/float, do calculations in 32-bit, and then convert back to 8-bit. Or the filters can work in 8-bit.

But AFAIK none of the filters work in 10-bit... it just wouldn't make sense. On a CPU or GPU, your convenient choices are 8, 16, (24), 32, or 64/80-bit precision. There's no reason to do 10-bit processing... you'd have to go out of your way to work in 16-bit but truncate or round the values to 10-bit precision, and there's no point in doing that.

Bill Ravens July 9th, 2007 03:03 PM

I was led to believe that 10 bit was so. Guess I was misled.

Dale Stoltzfus September 5th, 2007 08:45 AM

Hate to dredge up an old thread, but I have a question that's been bugging me for a while. I know it is important to color correct at higher than 8 bits. My question is this: should I use 16 bit or 32 bit? Is 32 better? It sounds like it should be. Just wondering...

Paolo Ciccone September 6th, 2007 01:00 AM

Quote:

Originally Posted by Dale Stoltzfus (Post 739551)
Hate to dredge up an old thread, but I have a question that's been bugging me for a while. I know it is important to color correct at higher than 8 bits. My question is this: should I use 16 bit or 32 bit? Is 32 better?

Of course higher precision is better, but you have to ask yourself what color correction you need to do and whether you can do it. Let me give you an example. I had to do a product shot of a motorcycle helmet. This is a piece with wild graphics and basically black, grey and white areas, on a reflective surface. There is basically no way you can expose this correctly in one shot: the contrast ratio is very high and the glossy surface makes everything worse. The shot is done on a turntable to create the illusion of a spinning 360-degree clip, but I used a DSLR camera to take 24 frames.
I decided to use multiple exposures and combine the images into an HDRI image. I combined the images in OpenEXR format using Photoshop, and this gave me 24 frames, each frame generated from a number of different exposures of the same "pose". The advantage of this system is that I can isolate areas of each frame and use the "Exposure" filter of After Effects to change the exposure of white areas while maintaining an overall contrast that keeps the blacks and greys at the right level. Without the HDRI (High Dynamic Range Image) support and the 32-bit processing I would get banding and artifacts that would degrade the image. With the 32-bit support I can extract exposure information from the RAW footage and adjust it without loss of detail. It's pretty amazing to see it in action.

I would use 16-bit for general color correction of anything destined for TV and HD. Use 32-bit when you really need the extra precision. Handling frames at 32 bits is incredibly heavy even for modern dual-processor CPUs.
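For what it's worth, here is a minimal sketch of why the float headroom matters (my own made-up numbers using NumPy, nothing to do with the actual helmet footage):

Code:

import numpy as np

# In a 32-bit float pipeline, values above 1.0 ("brighter than white")
# survive and can be pulled back down with an exposure adjustment.
highlight = np.array([1.6], dtype=np.float32)                # overexposed area

as_float = highlight * 0.5                                   # [0.8] -- detail kept
as_8bit = np.clip(highlight * 255, 0, 255).astype(np.uint8)  # stored as [255]
recovered = (as_8bit / 255.0) * 0.5                          # [0.5] -- detail gone
print(as_float, recovered)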

Thomas Smet October 31st, 2007 10:51 PM

Sorry I never saw this post before.

Isn't it true that some programs may not upconvert to 16 bit very well? For example, in Photoshop I took an 8-bit 3D rendered image, converted it to 16 bit, and adjusted the levels, and I still got banding. I then rendered the same image as 16 bit, leveled it in Photoshop, and there was no banding. In this example, upconverting to 16 bit from 8 bit didn't do anything at all except make Photoshop run slower.

Is Photoshop just really bad at upconverting color depths, and does it not use any level of interpolation? I tried the same thing in After Effects, and working with an 8-bit image in a 16-bit project didn't help the banding either.

Is it only Cineform that upconverts well to 10 bit?

Glenn Chan October 31st, 2007 11:09 PM

Upconverting can't create information that didn't exist in the first place.

If you import a 16-bit file, then that file may have up to 16 bits of precision.
An 8-bit file will only have 8 bits of precision.

2- When you process the image, you might end up with extra bits...
for example, if you divide 9 by 2, the result is 4.5. If you store that as an integer, you will lose the .5 part.

If you work at higher precision... e.g. you keep that decimal place... then you don't lose that information to rounding (or truncation).

If you have multiple filters then the little bits on the end might be useful.
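A toy sketch of how those little bits add up across a chain of filters (plain Python, my own made-up darken/brighten passes, not any particular NLE's math):

Code:

val_int, val_float = 95, 95.0
for _ in range(10):
    val_int = val_int * 9 // 10        # darken 10%, truncating the fraction
    val_int = val_int * 10 // 9        # brighten back, truncating again
    val_float = val_float * 0.9 / 0.9  # the same two passes in floating point
print(val_int, val_float)              # integer drifts down to 90; float stays ~95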

Thomas Smet October 31st, 2007 11:53 PM

Yes, I understand that part. The question I had was that some products such as Cineform claim that converting 8-bit material to their 10-bit codec will get rid of banding when adjusting the footage. Cineform even has an example of this on their website. Every other tool I have tried this with didn't even come close to the results they got. My question was: is there something special Cineform is doing when it converts 8 bit to 10 bit, or is the image on their website really, really bad marketing? I usually trust a lot of stuff that comes out of Cineform, but having studied and written software to deal with rounding errors in 2D animation, I just do not see that much of an advantage to correcting 8-bit material in a 10-bit environment.

Am I correct in the research and testing I have done, or am I doing something wrong and just haven't figured out how to create the best interpolated upconversion?

Glenn Chan November 1st, 2007 12:18 AM

What Cineform is showing is this:

A- An 8-bit source.

B- A gamma / power function operation of 0.5 (or something like that).

C- This is rendered out to a 10-bit intermediate and an 8-bit intermediate, both 4:2:2 Y'CbCr. The 10-bit Cineform intermediate is compressed (but that doesn't really matter).

D- The opposite power function is applied (2.0 or something like that). This undoes the original operation.

E- The result is shown in 8-bit.
(Or less, if your computer monitor and/or the color management of your OS, browser, or display is messing with things. That doesn't really matter here.)
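If anyone wants to reproduce the effect, here's a rough Python/NumPy sketch of steps B through E as I read them (my reconstruction, not Cineform's actual processing):

Code:

import numpy as np

src = np.arange(256) / 255.0   # an 8-bit ramp, normalized to 0..1
bent = src ** 0.5              # step B: the power function

for bits in (8, 10):           # step C: quantize to the intermediate
    levels = 2 ** bits - 1
    stored = np.round(bent * levels) / levels
    undone = stored ** 2.0     # step D: the inverse power function
    out = np.round(undone * 255).astype(int)  # step E: back to 8-bit
    print(bits, "bit intermediate ->", len(np.unique(out)), "distinct levels")

The 8-bit intermediate comes back with noticeably fewer distinct levels (that's the banding); the 10-bit one preserves essentially all of them.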

2- To me, the demonstration is misleading since you'd never have that problem in a real-world scenario. You can render your project out properly by doing step D before step C.

And then an 8-bit intermediate would not have such dramatic banding issues.

2b- In some cases you might be better off mastering to a 10-bit format... since with 8-bit formats, moving between full-scale 8-bit R'G'B' and 8-bit Y'CbCr (or studio-range R'G'B') will give you rounding error.

3- In my opinion you usually don't have to worry about banding issues unless your source footage is very noise-free (and you have large, smooth gradients). The noise acts as a natural dither and tends to hide banding issues.


