February 28th, 2016, 07:09 AM | #16
Inner Circle
Join Date: Jan 2006
Posts: 2,699
Re: Compression Question: From 1080 to 720. How does it work?
I've also heard a suggestion put forward for very high frame rate TV. The problem there is obviously the huge amount of raw data. Inter-frame coding is obviously one way forward, but another suggestion is one-bit coding within each frame, with the gradations of tone for a pixel conveyed by how many frames have it "white" and how many "black". Obviously we are talking about very high frame rates indeed, but in theory the principle is as you say in both those cases - oversampling spatially in the first case, temporally in the second, to trade sampling frequency for bit depth.
But note it puts some maths to it: "The number of samples required to get n bits of additional data precision is: number of samples = (2^n)^2 = 2^(2n)." So if we want to move from 8 bit to 10 bit (2 bits of additional data precision), that formula predicts we need 2^4 times as many samples - 16x as many, not 4x! And even then it qualifies it.
So whilst I don't disagree with the basic principle of what you're saying - that oversampling can be traded for better bit depth - I do disagree with simply saying "downscale an 8 bit 4K signal to FHD and it can be considered as 10 bit". The above formula predicts that the BEST that could be hoped for is a "9 bit" signal, and even that is dependent on circumstance. You'd need 16x as many samples to really get 10 bit - in other words, 8K.
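To put rough numbers on that formula, here is a minimal Python sketch; the helper names are made up purely for illustration:

```python
import math

def samples_needed(extra_bits):
    """Samples required for n extra bits of precision: (2^n)^2 = 2^(2n)."""
    return (2 ** extra_bits) ** 2

def extra_bits_from(oversample_factor):
    """Extra bits of precision from an M-times oversample: 0.5 * log2(M)."""
    return 0.5 * math.log2(oversample_factor)

print(samples_needed(1))   # 4  -> a 4x oversample (UHD down to FHD) buys 1 extra bit at best
print(samples_needed(2))   # 16 -> a 16x oversample (8K down to FHD) is needed for 2 extra bits

print(extra_bits_from((3840 * 2160) / (1920 * 1080)))  # 1.0
print(extra_bits_from((7680 * 4320) / (1920 * 1080)))  # 2.0
```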
February 29th, 2016, 12:59 PM | #17
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
Re: Compression Question: From 1080 to 720. How does it work?
Good catch on the formula; however, the article first says this:
"When oversampling by a factor of N, the dynamic range increases by log2(N) bits, because there are N times as many possible values for the sum."

So the basic formula says four times the samples gives two more bits of dynamic range. The next statements about noise are in the context of an A/D or D/A converter. The A/D is the equivalent of a camera sensor system. They assume that for a given A/D technology, if you speed up the clock, it will have a shorter sampling time, which would increase noise. This is similar to increasing the resolution of a video sensor: that makes each pixel smaller, so the noise increases.

But the context I'm presenting is signal-only. I'm not comparing the 4K downsampled signal to what you would have gotten had the sensor been 2K with its inherent lower noise. In our case, we have a given signal with its given noise, depending on camera, ISO, etc. We are just looking at the extra dynamic range without comparing it to an engineering tradeoff with a lower-res, lower-noise camera.

And yeah, the part about needing distributed noise is important. This is a problem in synthetic media, but not typically with real scenes, as the signal varies. One real pixel might be 0.1 shy of the recorded value while the next is 0.1 too hot. Scene variation gives us that randomness, even when the noise is quantized. But yeah, don't apply heavy-handed noise reduction that creates plastic faces before the downsampling - the downsampled signal would show that same low-resolution, low-noise, inaccurate face tone. So, yeah, conditions need to be right.

Also keep in mind that a good digital low-pass filter (used for downsampling) has many taps across many samples horizontally and vertically, so each new 2K pixel gets a small contribution from a wide range of 4K pixels. This helps ensure that the noise contribution is random, as each new pixel gets fed by more than its nearest neighbor.

The bottom line is that one "can" get the equivalent of more bits of information by downsampling, but only if that additional information hasn't already been lost.
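As a rough illustration of that signal-only argument, here is a minimal numpy sketch. It assumes a smooth synthetic ramp for the "scene" and Gaussian sensor noise of about half a code as the dither source, and the 2x2 box average is only a crude stand-in for the many-tap low-pass filter described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "scene" whose true values sit between 8-bit code levels
true = np.linspace(100.0, 101.0, 64 * 64).reshape(64, 64)

# Sensor noise of about half a code acts as natural dither
noisy = true + rng.normal(0.0, 0.5, true.shape)

# 8-bit quantisation at the higher ("4K") resolution
q8 = np.clip(np.rint(noisy), 0, 255)

# Downsample 2x2 -- a crude stand-in for a many-tap low-pass filter
down = q8.reshape(32, 2, 32, 2).mean(axis=(1, 3))
true_down = true.reshape(32, 2, 32, 2).mean(axis=(1, 3))

# The downsampled values land between integer codes and track the true
# scene more closely than the individual noisy 8-bit samples do
print("RMS error of the 8-bit samples:   ", np.sqrt(np.mean((q8 - true) ** 2)))
print("RMS error after the downsample:   ", np.sqrt(np.mean((down - true_down) ** 2)))
print("Distinct levels after downsample: ", np.unique(down).size)
```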
__________________
Jon Fairhurst
February 29th, 2016, 03:32 PM | #18
Inner Circle
Join Date: Jan 2006
Posts: 2,699
Re: Compression Question: From 1080 to 720. How does it work?
So in that case we're talking about 4 extra bits - so n=4 - and we need 2^(2x4) times as many samples: in other words, 256. This is all consistent with needing 16x as many to get an equivalence with 2 extra bits. What you quote above should be seen as an interim step.

The way I see it, your earlier example (4 values of 13,13,13,14, averaging to give 13.25) is an idealised one, which may not be typical - and is not likely to be. The next block of 4 may be 13,13,14,14 and give an averaged value of 13.5, the next may be 13,13,13,13, and so on, on a statistical basis. It's only when you get up to 16 samples that you can statistically expect something close to three times as many "13" values as "14". (In practice I'd expect other values such as 12 and 15 as well, but the average to become more predictably 13.25.)

But this always assumes that what is really 13.25 gets digitised randomly, and not always perfectly to the nearest integer - in which case it would always be 13, and the average of any number of samples would then always be 13 exactly. But I think we're agreed on that...?
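That statistical point is easy to check numerically. A throwaway numpy sketch, assuming a true level of 13.25 and Gaussian dither of half a code (the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_LEVEL = 13.25

def quantise(n, noise_std):
    """Round n samples of a constant 13.25 level, with optional dither."""
    return np.rint(TRUE_LEVEL + rng.normal(0.0, noise_std, n))

for n in (4, 16, 256):
    dithered = quantise(n, 0.5).mean()
    clean = quantise(n, 0.0).mean()    # no dither: 13.25 always rounds to 13
    print(f"{n:3d} samples: average {dithered:.3f} with dither, {clean:.2f} without")
```

With only 4 samples the dithered block averages bounce around; by 256 samples they settle close to 13.25, while the undithered average stays pinned at exactly 13 no matter how many samples are taken.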
February 29th, 2016, 09:30 PM | #19
Inner Circle
Join Date: May 2006
Location: Camas, WA, USA
Posts: 5,513
Re: Compression Question: From 1080 to 720. How does it work?
What's weird is that SACD (1-bit) samples at just 64 times 44.1 kHz, yet competes with DVD-A, which is up to 24 bits at 192 kHz. Of course, it's sigma-delta, which is a bit different than PCM, but still...
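For anyone curious, a toy first-order delta-sigma modulator is only a few lines of Python. Real SACD uses much higher-order noise shaping and a proper decimation filter, so this only sketches the principle of trading a 1-bit stream at 64x the rate for multibit resolution; the signal and rates below are just example values:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma: input in [-1, 1] -> stream of +/-1 bits."""
    out = np.empty_like(x)
    integrator = 0.0
    feedback = 0.0
    for i, sample in enumerate(x):
        integrator += sample - feedback
        out[i] = 1.0 if integrator >= 0.0 else -1.0
        feedback = out[i]
    return out

fs = 64 * 44100                          # SACD-style 1-bit rate, 2.8224 MHz
t = np.arange(fs // 100) / fs            # 10 ms of signal
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone at half scale

bits = delta_sigma_1bit(x)

# Crude decimation: average each block of 64 one-bit samples back to 44.1 kHz
decoded = bits.reshape(-1, 64).mean(axis=1)
target = x.reshape(-1, 64).mean(axis=1)
print("RMS error after decimation:", np.sqrt(np.mean((decoded - target) ** 2)))
```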
And regarding the 13.25 example, that's where the many samples in the filter help out. It's not just nearest neighbor, but point taken that it relies on a random distribution to work.
__________________
Jon Fairhurst