DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Canon XL and GL Series DV Camcorders (https://www.dvinfo.net/forum/canon-xl-gl-series-dv-camcorders/)
-   -   Anyone with some 8bit DSP insights? (https://www.dvinfo.net/forum/canon-xl-gl-series-dv-camcorders/28903-anyone-some-8bit-dsp-insights.html)

Aaron Koolen July 13th, 2004 03:31 PM

Anyone with some 8bit DSP insights?
 
I noticed on one of the Skinny pages that the DSP is 8 bit (like all the Canons, from the looks of it) as opposed to the DVX's 12 bit. Anyone know if this is going to have any real world effect on the quality of the post processing that the camera will do? I'd imagine it would enhance the chance of banding.


Aaron

Moderator note: a while after this thread started Canon
came forth with the information that the XL2 actually has a 12 bit
DSP, not 8 bit! More information can be found in this thread

Luis Caffesse July 13th, 2004 05:06 PM

"I'd imagine it would enhance the chance of banding."

I don't know about that, but I would like to hear from anyone else with expertise on this too.

DV is an 8 bit format, so in the end even Panasonic's extra 4 bits of information are getting discarded at some point in the DV compression.

But, I can't help but think that there is something to the idea of having a higher bit DSP. Sony uses higher bit DSPs in some of their cameras. And, as you pointed out, Panasonic uses a 12 bit DSP in the DVX100a. Even more interesting to me is that they had a 10 bit DSP in the DVX100.

If Panasonic is upping a DSP that was already more bits than DV footage can handle, it seems like there is a reason for it. I would have a hard time believing a company like that would throw money away for no reason (I'm assuming here that a 12 bit DSP would cost more than a 10 bit).


Can anyone here explain the advantages and disadvantages of an 8 bit vs. 12 bit DSP?


Thanks,

-luis

Jeff Donald July 13th, 2004 05:19 PM

Fewer rounding errors take place if adjustments are done in a higher bit space. If the camera's internal adjustments (black level, cine gamma, etc.) are done in a higher bit space, higher quality images may be obtained. It's too early to tell if the 8 bit/12 bit differences will be visually apparent.

Aaron Koolen July 13th, 2004 05:20 PM

Well it would seem that you get more dynamic range. So if you're applying colour multiplication, addition etc, then you've got a higher resolution without rounding off the values. Then once all the effects are applied you can scale it down. If you start with 8 bits, do 8 bit multiplication, addition etc you'll see a loss, clamped values etc. This would be compounded by multiple passes.

It's the same principle behind why software programs work at higher resolution when processing graphics and sound. You lose less as you work.

If you imagine you're tweaking an 8 bit picture, you can still only have 256 possible values, regardless of the processing, so a value that should be 47.5 is made 47. If you were using a higher bit range (16 bit) it might be 12079. Do this multiple times and you're losing less resolution in the 16 bit than in the 8 bit. Then you scale it down to 8 bit and you've got a closer representation of what it should look like than if you had done all the processing in 8 bit.
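
A quick Python sketch of what I mean (made-up gains, nothing to do with how any real camera works): the 8 bit path truncates after every pass, while the high-precision path rounds only once at the end.

def clamp8(x):
    """Truncate to an integer and clamp to the 0-255 range of an 8-bit value."""
    return max(0, min(255, int(x)))

pixel = 47
gains = [1.3, 0.7, 1.9]        # three arbitrary processing passes

low = pixel
high = float(pixel)
for g in gains:
    low = clamp8(low * g)      # quantised at every step
    high = high * g            # fraction carried along

print(low)           # 79 - already two levels off after only three passes
print(clamp8(high))  # 81 - matches the exact result (47 * 1.3 * 0.7 * 1.9 = 81.263)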

This is how it seems to me. Anyone really know?

Aaron

Jeff Donald July 13th, 2004 05:29 PM

In reality I think it is more marketing hype (smoke and mirrors) than a make it or break it mandatory feature. I look at 16 bit RAW and 8 bit JPEG files from 11 plus megapixel cameras. It is very difficult to ever see much of a difference. The processing power inside these video cameras is very limited and probably negates any advantage that the larger bit files have over the smaller files.

Jarred Land July 13th, 2004 07:58 PM

RAW image files and moving video are two completely different ballgames, although you would think video was just moving stills :)

Anyway, there is without a doubt a big difference between 12 and 8 bit sampling. The best place to ask about this is a transfer house, because it's what they see (and hate) when they even do downsampling from 10 bits to 8.

Jeff Donald July 13th, 2004 08:03 PM

There are many reasons to use high bit depth files. But the conversion from 12 bit to 8 bit takes place in camera, and the tape records an 8 bit file. The camera performs a relatively crude down-sampling because of its limited in-camera processing capabilities.

Aaron Koolen July 13th, 2004 08:32 PM

Jarred, why do they hate downsampling?


Aaron

Rob Lohman July 14th, 2004 03:20 AM

Just to note, there are some mixups happening here. A 10 or 12 bit DSP will not give you more "dynamic range" on its own. You need a 10 or 12 bit CCD for this first. If I'm not mistaken the DVX has a 12 bit CCD chip. The XL2 has 8 bit CCDs. So it is not of much use to have a 10 or 12 bit DSP with this.

It could yield *some* improvements, but I think those would be marginal. Now if you have a 10 or 12 bit CCD then you will need to match your DSP accordingly. More bits usually gives a wider dynamic range as well.

Aaron Koolen July 14th, 2004 03:53 AM

Rob, I thought a CCD was analogue and it's the conversion process from A to D that converts this to the bits needed. I assumed that "8 bit DSP" included the conversion hardware - maybe I was being too presumptuous.

Aaron

Rob Lohman July 14th, 2004 04:11 AM

Aaron: you are correct that it is analogue. I thought everyone was talking about the DSP that does all the signal processing like white balancing, digital effects, etc. I'm not sure if the A->D process is done by something you would call a "DSP". I'm not that far into the inner workings of CMOS/CCDs yet <g>

So if you guys were talking about that little device then I withdraw my comments. They were totally targeted at the DSP that does all the image processing later on.

Don Palomaki July 14th, 2004 04:18 AM

The original XL1 read the CCD as an analog signal, applied gain, gamma, white balance, AGC, and pedestal adjustment to this analog signal, then converted it to a 9-bit (yes, nine) signal for 9-bit DSP. The processed 9-bit signal was then converted to 8-bit for recording to tape.

I doubt that the XL2 is less than 9-bit, and it could be higher.

Jason Rodriguez July 14th, 2004 05:33 AM

Quote:

In reality I think it is more marketing hype (smoke and mirrors) than a make it or break it mandatory feature. I look at 16 bit RAW and 8 bit JPEG files from 11 plus megapixel cameras. It is very difficult to ever see much of a difference. The processing power inside these video cameras is very limited and probably negates any advantage that the larger bit files have over the smaller files.
There's actually a huge difference in these files when you start to do adjustments and manipulations. With RAW files I can still pull awesome images out of pictures that I underexpose by up to two f-stops to maintain the highlights in high-contrast situations, and still get wonderful, practically noise-free images. JPEGs break up long before that.

The reason you see no difference is because both the 8-bit JPEG and the 16-bit RAW started from the same 12-bit DSP in the camera. So the picture in the camera is created at 12-bit precision, and then dithered down to 8-bits and compressed into JPEG.

Saying that you can't see the difference between an 8-bit file and a 16-bit RAW file is like saying you can't see the difference between a JPEG and a TIFF. The fact is you're not supposed to see the difference, but that doesn't change the fact that the extra information is not there in the 8-bit file, and therefore, depending on how much you want to tweak the image, you won't be able to, since the information and bit depth aren't there for the tweaking adjustments.

The idea behind a RAW file is to give you all the information that the camera started with before it made that 8-bit JPEG. That means the high color depth, no sharpening, the ability to change color balance, etc. YOU get to be the camera after the fact; it's basically like having a digital negative.

Another good analogy would be like saying I have a chrome slide here and a contact print from the chrome slide. They both will probably look the same, but I can assure you that if you throw away the slide and decide to use the contact print as your scanning source, you're going to be in for a rude awakening.

Moral: Higher bit-depth DSPs are a good thing, but as mentioned before, Canon may be doing a good portion of the processing in analog rather than digital, and then digitizing those adjustments at 8 bits, which is a different story altogether than simply digitizing a signal at 8 bits and doing all the processing on that information.

Chris Hurd July 14th, 2004 07:26 AM

Don

<< I doubt that the XL2 is less than 9-bit, and it could be higher. >>

See my CCD / DSP comparison chart at the bottom of this page.

Don Palomaki July 14th, 2004 04:21 PM

Where did your table data come from?

The Canon documentation I've seen (DM-XL1A Service Manual, Jan 1998) clearly indicates it is 9-bit DSP (the DSP chip is an MN67343A2). Even the schematics show 9 data lines for R, B, and G. Further, the A/D converters are 10-bit with the LSB discarded.

Chris Hurd July 14th, 2004 05:42 PM

Wow. Thanks Don. I simply asked the USA product manager. But then again, he didn't build the thing. Let me bring this up with him. Appreciate the info!

Don Palomaki July 15th, 2004 08:03 PM

Be interesting to hear what he has to say.

Chris Hurd July 16th, 2004 06:04 AM

He says that he does not have that information, which can mean only that I must have been hallucinating earlier. So my chart is now changed to reflect that it is unknown, or at least not known to be factual. I'm told that when an answer comes back from Japan as to what the bit depth really is, then "they'll let me know." That's the official word. I have asked once and I will not ask again.

Jonah Lee Walker July 17th, 2004 10:04 AM

Very interested in seeing the real world tests of the 12 bit vs. the 8 bit.

Cinematography.com's forums have some pretty negative statements about the camera, especially dealing with this issue:
http://www.cinematography.com/forum2004/index.php?showtopic=1611&st=30 Check out Mitch Gross's post.

It also makes me wonder why people want HDV so badly, considering that it is a 4 bit color space. In side by side comparisons that I have shot with a PD170 and anamorphic adapter next to the JVC HDV camera, with the DV blown up to HD and zero color correction, the PD170 blew away the JVC in color fidelity. The skies actually looked kind of pink with the JVC.

Michael Struthers July 17th, 2004 10:40 AM

Color can be corrected. Resolution cannot.

Canon I'm sure does not want to play up the fact that they are not using 12bit as on other professional cams.

David Ziegelheim July 17th, 2004 06:43 PM

Let me see if I can help.

The analog CCD sends a signal as a voltage level to an analog to digital converter. It actually passes through some filters first, however that is not relevant here.

If it is converted to an 8-bit value there are 256 different levels for the signal. 10 bits provides 1024 levels. 12 bits provides 4096 levels. And the 14-bit value in some Sony cameras provides 16384 levels.

Now let's say you do a black stretch, and in that stretch the values from 0-20 IRE are stretched to 0-40 IRE (OK, that is too big, but the math is easy). In the final 8-bit value, that range (0-40 IRE) covers about 100 of the 256 values.

With 8 bits you start with about 50 possible values, expand them to cover those 100 and interpolate the rest (information is made up). With 10 bits you have about 200 possible levels and map them down to 100 levels (information is lost, but the resulting information is based on more knowledge than you need). With 12 bits there were initially about 800 levels; with 14 bits there were initially about 3200 levels.
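
Here is the same arithmetic as a rough Python sketch (purely illustrative, assuming a simple linear stretch of 0-20 IRE out to 0-40 IRE in the final 8-bit output):

OUTPUT_LEVELS = 256                          # the final 8-bit recording
STRETCH_DEST = int(OUTPUT_LEVELS * 0.40)     # 0-40 IRE covers roughly 100 of the 256 values

for bits in (8, 10, 12, 14):
    source_levels = int((2 ** bits) * 0.20)  # A/D codes that land in the 0-20 IRE band
    verdict = "interpolated / made up" if source_levels < STRETCH_DEST else "real measured data"
    print(f"{bits}-bit A/D: {source_levels} source levels -> {STRETCH_DEST} output levels ({verdict})")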

If black stretch were the only adjustment, then 10 bits would be all you need. However, each of these adjustments cascades, increasing the size of the range that needs to be enlarged. Change detail, shift color - everything requires taking a smaller range and making it cover a larger one, as does white balance and of course gain.

The more bits of information you have, the less likely you are to be making up information, and the smaller the differences you have to work with.

So, 8-bit, 10-bits, 12-bits, it means a lot. A better question: How important is the difference between 12- and 14- bits? Does Sony have an advantage?

Jeff Donald July 17th, 2004 07:11 PM

In a scene with a dynamic range of 5 stops (256 levels, typical for DV) the first stop (highlights) uses 128 levels. The second stop uses 64 levels, the third 32 levels, the fourth 16 levels, and the last 8 levels. Can you try your explanation again?
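
For what it's worth, the same stop-by-stop arithmetic in a few lines of Python (illustrative only - each stop down halves the light, so it halves the number of code values left to describe it):

def levels_per_stop(total_levels, stops=5):
    remaining = total_levels
    allocation = []
    for _ in range(stops):
        allocation.append(remaining // 2)   # this stop gets half of what is left
        remaining -= remaining // 2
    return allocation

print(levels_per_stop(256))    # 8 bit:  [128, 64, 32, 16, 8]
print(levels_per_stop(1024))   # 10 bit: [512, 256, 128, 64, 32]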

David Ziegelheim July 17th, 2004 07:59 PM

I made the example linear because it is simpler, I don't have the algorithms used for compression (crushed blacks is not an algorithm), and the example serves its purpose regardless of complexity.

What the extra data does is provide a way to perform the various transformations (I didn't list gamma, knee, and others) and still have good data. And all of the computation in the world can't recreate data that isn't there; it can only estimate what it may have been.

Canon did what it did because of time and financial constraints. I heard a rumor (I can't reveal the source) that Canon may add Matsushita electronics (JVC and Panasonic) in the future.

XL2S?

Also heard a rumor that the DVX100B may finally have the 16x9 CCD in 2005. And my wife was sure we were going to win that last big lottery.

Jeff Donald July 17th, 2004 08:17 PM

My example is linear also. When you close your lens one stop you have reduced the light striking your CCD by 50%. If you have 256 levels, then 128 levels are required to represent the first 50% of the light reaching the CCD. In 10 bit you would have 1024 levels and a one stop reduction in light would use the first 512 levels.

There is no arguing that more data will yield a better image. However, is the chip's output captured as 8 bit data, up-sampled to 10 bit or 12 bit, processed, and then down-sampled to 8 bit for recording to tape? Or do they actually capture 10 bit or 12 bit data, process it, and down-sample to 8 bit for recording?

David Ziegelheim July 17th, 2004 08:39 PM

The CCD doesn't capture bits. It captures analog voltage. The A/D converter turns that into bits.

If you have $2,529.65 in your pocket and I let you store two digits of data, you have $2,500. If I give you three, you have $2,530. If you have four it's still $2,530, but now the last digit is significant. Six digits gives you the full value.

If the range of the conversion were $0-100,000, then two digits would only give you $3,000. That is the information lost.

Then, unless you modified the signal while it was still analog, all of the allocation is done digitally. I don't know the specifics of the implementation in camcorders.

Saying it is a 12-bit DSP may be misleading. For our purposes it is probably more accurate to say there is a 12-bit A/D conversion. Or an 8-bit A/D conversion.

You could process 12-bits in a 4-bit DSP or a 24-bit DSP. That is a matter of speed. Remember 8-bit and 16-bit computers. They could still process 32-bit numbers, just not in one step.
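
A toy Python illustration of that last point (hypothetical, just for the idea): a processor with a narrow word can still handle wider numbers, it just takes extra steps, the way an 8-bit CPU adds 16-bit numbers in two pieces with a carry.

def add_16bit_using_8bit_ops(a, b):
    low = (a & 0xFF) + (b & 0xFF)                          # add the low bytes
    carry = low >> 8                                       # did the low bytes overflow?
    high = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry   # add the high bytes
    return ((high & 0xFF) << 8) | (low & 0xFF)

print(hex(add_16bit_using_8bit_ops(0x12FF, 0x0101)))   # 0x1400 - same answer, just more steps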

Jeff Donald July 17th, 2004 08:51 PM

I'm aware that CCDs are analog, not digital, but you are correct and I should be more accurate in my descriptions. I think you're making assumptions regarding the signal processing. You think the reference to a 10 bit DSP refers to the data processing and not the bit depth? What is the source of this information, and who is the manufacturer of these chips?

David Ziegelheim July 17th, 2004 09:06 PM

The Canon uses an 8-bit A/D converter. The Panasonic a 12-bit. I'm not confusing it; I'm trying to clarify it.

The reference to the DSP is always processing. JVC (which uses a 12-bit signal) advertises a 24-bit DSP. Still a 12-bit signal.

This is how Panasonic describes it (on the first text page of their brochure):

"High Image Quality with 12-Bit A/D Conversion

"The AG-DVX100A features an A/D converter that uses the same 12-bit processing as broadcast camera-recorders. Precisely digitizing the gradation and colors captured by the progressive CCD, this A/D converter supports gamma switching and other fine downstream image adjustments — one of the keys to achieving rich image expression."

DSP is never mentioned in their brochure.

JVC said about their DV5000:

"the GY-DV5000 is a highperformance 1/2" 3-CCD Professional DV camcorder which includes advanced features such as a 12-bit ADC (used only in broadcast cameras), a 12-bit camera digital signal processor for superior, high-resolution images, ...".

And about the DV300:

"Newly-developed 12-bit ADC* and 24-bit DSP** The 12-bit ADC allows direct digital input to the DSP without passing through analog pre-gain and pre-knee circuits, eliminating signal degradation. In addition, JVC's new DSP with advanced 24-bit video processing brings out natural details, eliminates spot noise, and accurately reproduces dark areas.
* ADC: Analog Digital Converter ** DSP: Digital Signal Processor"

Sony says about the PDX-10:

"14-bit DXP (Digital Extended Processor)

"The use of 14-bit A/D conversion combined with 14-bit digital processing drastically reduces the noise commonly seen across dark areas of a picture. This precision of digital processing also contributes to expanding the dynamic range of the camera so that both dark and light areas of a picture are reproduced with more contrast, thus reducing the wash-out effect."

And about the DSR570 and DSR370:

"10-bit A/D DSP (Digital Signal Processing) LSI

"The advanced Sony 10-bit DSP technology used in these camcorders delivers one of the best picture performances in the industry. Optimized digital-signal processing ensures excellent picture sharpness. And innovative camera features such as TruEye™ and DynaLatitude™are also incorporated."

Jeff Donald July 17th, 2004 09:25 PM

Thanks for taking the time to quote the manufacturers' brochures, David. We're misunderstanding each other. You and I are basically saying the same thing.

Don Palomaki July 26th, 2004 04:27 AM

Where does it say the Canon uses 8-bit DSP or A/D conversion? (It is not a quote from any Canon literature that I've seen.)

The original XL1 service manual and schematic show 10-bit A/D conversion with the LSB discarded as it is passed to 9-bit DSP. Gain, white balance, and pedestal adjustments are in the analog section before the A/D conversion. It becomes 8-bit after DSP as an encoded DV signal for recording on tape.

David Ziegelheim July 26th, 2004 06:57 AM

That came from a Canon marketing manager who went to research it and came back.

What you said is interesting. The XL1 had a 10-bit A/D with 9-bits used for processing? I would have assumed that the XL2 would have had at least the equivalent.

And, it is not in any of the literature. Its absence is what made the question necessary.

Gints Klimanis July 26th, 2004 02:01 PM

> But, I can't help but think that there is something to the idea of having a higher bit DSP. Sony uses higher bit DSPs in some of their cameras. And, as you pointed out, Panasonic uses a 12 bit DSP in the DVX100a. Even more interesting to me is that they had a 10 bit DSP in the DVX100.

Higher bit DSPs are more efficient at their native word length, but they may also consume more power, depending on the design. At the very least, their local, high-speed memories are delivering greater memory bandwidth.

Even if a DSP has a lower bit width, the code can be written to do higher precision arithmetic. So an 8-bit DSP can do 16-bit operations at roughly 1/4 the speed.

Since digital signal processing involves gain changes, noise (clipping at the top and quantization noise at the bottom of the dynamic range) will be introduced unless there is headroom for high values and footroom for low values that exceed the dynamic range in intermediate operations. While the noise from one operation may be negligible, noise from successive operations will be easier to notice. Many have seen noise artifacts in low-light video. I am quite sure that some of this is attributable to the shorter word lengths of the DSPs, although this is only one contributor of noise.

One fellow commented on inspecting images at the 8-bit level vs. the 16-bit level. The cameras in question aren't able to deliver real 16-bit images, and the algorithms that convert RAW files to screen pixels do not operate at even an 8-bit noise level for most of the picture, because they are simple interpolators. Fast, but noisy. What I don't understand is why high-end digital camera people aren't complaining about this, other than that the tradeoff is waiting much longer for a RAW conversion. Hints at these differences are dropped in, say, the Nikon D70 group, where the color performance of Nikon Capture is better than Photoshop CS, although the latter finishes the job faster. I bet we all would be happy with a super-fast but low quality RAW previewer.

Also, our eyes can't see more than a 10 to 11-bit range (contrast ratio of 1:1000 to 1:2000) in a single scene, although the dynamic range of our eyes is WAY larger given time to adjust.

Don Palomaki July 26th, 2004 04:58 PM

Where did the Canon marketing manager come back with the 8-bit answer? Last I saw from Chris, he was still looking for the answer.

I've heard that on normal displays/monitors, the limit of most people's eyes is about 6- to 7-bit gray scale. Anyone have firm data on this?

Aaron Koolen July 26th, 2004 08:55 PM

Don, even though our eyes might only be able to see 6-7 bits of the final output, the quantisation that happens during processing (assuming it's all done in 8 bits and truncated, etc.) could still cause banding, which could be quite noticeable.
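
A small Python sketch of the kind of banding I mean (made-up numbers, purely illustrative): a smooth dark ramp stretched 4x develops coarse steps if the math is done on truncated 8-bit values, but stays much smoother if precision is carried until the end.

ramp = [i * 40.0 / 999 for i in range(1000)]    # smooth dark gradient, 0..40 out of 255
gain = 4.0                                      # an aggressive shadow stretch

eight_bit_path = [min(255, int(int(v) * gain)) for v in ramp]   # quantise before processing
float_path     = [min(255, int(v * gain)) for v in ramp]        # quantise only at the end

print(len(set(eight_bit_path)))   # 41 distinct output values -> visible banding
print(len(set(float_path)))       # 161 distinct output values -> much smoother gradient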


Aaron

Yang Wen July 26th, 2004 09:12 PM

What's with all the fuss about this 8 bit DSP? We all know that the DVX100 was (before the DVX100A) the benchmark for 1/3" MiniDV image quality. That camera was 8 bit. So we all know the results that can be achieved with an 8 bit DSP, so why are people all of a sudden appalled that the XL2 doesn't have 12 bit processing?

Nick Hiltgen July 26th, 2004 10:27 PM

Yang

Someone more knowledgeable than me can correct me if I'm wrong, but it was my understanding that most cameras' CCDs are 12 bit, and when the signal is taken into the camera it gets transferred down to 8 bit. So I think that in fact the DVX100 was 12 bit and then an algorithm or something converts it to the DV format's 8 bit (which may have been the cause for the original Canon person's quote, who knows?).

Luis Caffesse July 26th, 2004 10:27 PM

Actually Yang I believe the DVX100 had a 10 bit DSP.

In the end, of course, the image will hopefully speak for itself. I can't see that anyone is going to buy, or not buy, a camera based on its DSP bit depth.

Nick Hiltgen July 26th, 2004 10:31 PM

Yeah, I mean, er, 10 bit like Luis said.

Chris Hurd July 26th, 2004 10:50 PM

<< Where did the Canon marketing manager come back with the 8-bit answer? >>

David Ziegelheim's claim is incorrect. The Canon product manager (not the marketing manager) has most definitely not come back with that answer. Looks like they're leaving it in the air for now.

Don Palomaki July 27th, 2004 03:55 AM

Thanks for the info Chris. As Dilbert shows, marketing types rarely have solid technical information.

As points of reference, the Canon A1 Digital and L2 Hi8 camcorders used 8-bit DSP, but used 8-bit A/D on the "Y" signal and 6-bit on the "C" signal.

For Nick: CCD pixels are analog output. The analog voltage is read from the CCD pixel, fed to amplifiers that provide the gain, white balance and pedestal, and then converted to a digital value (10-bit A/D in the XL1). The least significant bit is truncated prior to 9-bit DSP.

Nick Hiltgen July 27th, 2004 06:57 AM

Don,

Thanks, I should have known better than to get involved in the whole DSP thing. I'll leave it to you guys.



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network