DV Info Net

Anyone with some 8bit DSP insights? (https://www.dvinfo.net/forum/canon-xl-gl-series-dv-camcorders/28903-anyone-some-8bit-dsp-insights.html)

Gints Klimanis July 26th, 2004 02:01 PM

> But, I can't help but think that there is something to the idea of having a higher bit DSP. Sony uses higher bit DSPs in some of their cameras. And, as you pointed out, Panasonic uses a 12-bit DSP in the DVX100a. Even more interesting to me is that they had a 10-bit DSP in the DVX100.

Higher-bit DSPs are more efficient at their native word length, but they may also consume more power, depending on the design. At the very least, their local, high-speed memories are delivering greater memory bandwidth.

Even though a DSP is lower-bit, the code can be written to do higher-precision arithmetic. So, an 8-bit DSP can do 16-bit operations at roughly 1/4 speed.
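
Not from the thread, just a minimal sketch of what writing code for higher precision can look like on a narrow machine: a 16-bit multiply pieced together from four 8-bit multiplies plus shifts and adds. The function name and test values are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/* Sketch only: a 16x16 multiply built from four 8x8 partial products.
 * Needing roughly four narrow multiplies (plus shifts and adds) per
 * wide multiply is where the "about 1/4 speed" rule of thumb comes from. */
static uint32_t mul16_on_8bit(uint16_t a, uint16_t b)
{
    uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
    uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    uint32_t p0 = (uint16_t)(a_lo * b_lo);   /* low  x low  */
    uint32_t p1 = (uint16_t)(a_lo * b_hi);   /* low  x high */
    uint32_t p2 = (uint16_t)(a_hi * b_lo);   /* high x low  */
    uint32_t p3 = (uint16_t)(a_hi * b_hi);   /* high x high */

    return p0 + ((p1 + p2) << 8) + (p3 << 16);
}

int main(void)
{
    printf("%lu\n", (unsigned long)mul16_on_8bit(300, 500)); /* prints 150000 */
    return 0;
}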

Digital signal processing involves gain changes; noise (clipping at the top and quantization noise at the bottom of the dynamic range) will be introduced unless intermediate operations have more headroom for high values and more footroom for low values that would otherwise exceed the dynamic range. While the noise from one operation may be negligible, the noise from successive operations will be easier to notice. Many have seen noise artifacts in low-light video. I am quite sure that some of this is attributable to the shorter word lengths of the DSPs, although this is only one contributor of noise.
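
To make the footroom point concrete, here is a made-up sketch (the gain values and pixel codes are purely illustrative, not from any real camera pipeline): a few gain-down stages held in an 8-bit intermediate, followed by a gain back up, collapse neighboring input levels into bands.

#include <stdio.h>
#include <stdint.h>

/* Sketch only: three successive gain-down stages kept in an 8-bit
 * intermediate throw away a low-order bit each time, so eight distinct
 * input levels collapse into two output bands once the gain is restored.
 * A wider intermediate word would preserve them. */
int main(void)
{
    for (uint8_t in = 100; in <= 107; in++) {
        uint8_t v = in;
        v /= 2;     /* stage 1: gain 0.5, truncated to 8 bits */
        v /= 2;     /* stage 2 */
        v /= 2;     /* stage 3 */
        printf("in=%3u  out=%3u\n", in, (uint8_t)(v * 8));  /* gain restored */
    }
    return 0;
}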

One fellow commented on inspecting images at the 8-bit level vs. the 16-bit level. The cameras in question aren't able to deliver real 16-bit images, and the algorithms that convert RAW files to screen pixels do not operate at even an 8-bit noise level for most of the picture because they are simple interpolators. Fast, but noisy. What I don't understand is why high-end digital camera people aren't complaining about this, other than that the tradeoff is waiting much longer for a RAW conversion. Hints of these differences are dropped in, say, the Nikon D70 group, where the color performance of Nikon Capture is better than that of Photoshop CS, although the latter finishes the job faster. I bet we all would be happy with a super-fast but low-quality RAW previewer.

Also, our eyes can't see more than a 10- to 11-bit range (contrast ratio of 1:1000 to 1:2000) in a single scene, although the dynamic range of our eyes is WAY larger given time to adjust.
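
(For reference on how those figures map: 2^10 = 1024 and 2^11 = 2048, i.e. roughly the 1:1000 to 1:2000 contrast range quoted above.)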

Don Palomaki July 26th, 2004 04:58 PM

Where did the Canon marketing manager come back with the 8-bit answer? The last I saw from Chris, he was still looking for the answer.

I've heard that on normal displays/monitors, the limit of most people's eyes is about 6- to 7-bit gray scale. Anyone have firm data on this?

Aaron Koolen July 26th, 2004 08:55 PM

Don, even though our eyes might only be able to see 6-7 bits of the final output, the quantisation that happens during processing (assuming it's all done in 8 bits and truncated, etc.) could still cause banding that could be quite noticeable.


Aaron

Yang Wen July 26th, 2004 09:12 PM

What's with all the fuss about this 8-bit DSP? We all know that the DVX100 was (before the DVX100A) the benchmark for 1/3-inch MiniDV image quality. That camera was 8-bit. So we all know the result that can be achieved with an 8-bit DSP, so why are people all of a sudden appalled that the XL2 doesn't have 12-bit processing?

Nick Hiltgen July 26th, 2004 10:27 PM

Yang

Someone more knowledgeable than me can correct me if I'm wrong, but it was my understanding that most cameras' CCDs are 12-bit, and then when the signal is taken into the camera it gets transferred down to 8-bit. So I think that, in fact, the DVX100 was 12-bit and then an algorithm or something converts it to the DV format's 8-bit (which may have been the cause of the original Canon person's quote, who knows?)

Luis Caffesse July 26th, 2004 10:27 PM

Actually Yang, I believe the DVX100 had a 10-bit DSP.

In the end, of course, the image will hopefully speak for itself.
I can't see that anyone is going to buy, or not buy, a camera
based on its DSP bit depth.

Nick Hiltgen July 26th, 2004 10:31 PM

Yeah, I mean, er, 10-bit like Luis said.

Chris Hurd July 26th, 2004 10:50 PM

<< Where did the Canon marketing manager come back with the 8-bit answer? >>

David Ziegelheim's claim is incorrect. The Canon product manager (not the marketing manager) has most definitely not come back with that answer. Looks like they're leaving it in the air for now.

Don Palomaki July 27th, 2004 03:55 AM

Thanks for the info Chris. As Dilbert shows, marketing types rarely have solid technical information.

As points of reference, the Canon A1 Digital and L2 Hi8 camcorders used an 8-bit DSP, but used 8-bit A/D on the "Y" signal and 6-bit on the "C" signal.

For Nick: CCD pixels are analog output. The analog voltage is read from the CCD pixel, fed to amplifiers that provide the gain, white balance, and pedestal, and then it is converted to a digital value (10-bit A/D in the XL1). The least significant bit is truncated prior to the 9-bit DSP.
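
As a rough illustration of that word-length chain (the sample value is invented, and the final reduction to 8 bits is shown here as a plain truncation, which the real firmware may not do exactly this way):

#include <stdio.h>
#include <stdint.h>

/* Sketch only: a 10-bit A/D code loses its least significant bit on the
 * way into the 9-bit DSP, and is later reduced again to the 8 bits that
 * DV records. */
int main(void)
{
    uint16_t adc_10bit = 679;                      /* 10-bit A/D code, 0..1023 */
    uint16_t dsp_9bit  = adc_10bit >> 1;           /* LSB truncated, 0..511 */
    uint8_t  dv_8bit   = (uint8_t)(dsp_9bit >> 1); /* final 8-bit DV value */

    printf("10-bit %u -> 9-bit %u -> 8-bit %u\n",
           (unsigned)adc_10bit, (unsigned)dsp_9bit, (unsigned)dv_8bit);
    return 0;
}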

Nick Hiltgen July 27th, 2004 06:57 AM

Don,

Thanks, I should have known better than to get involved in the whole DSP thing; I'll leave it to you guys.

Don Palomaki July 27th, 2004 05:29 PM

Nick - not wrong to get involved, that is how we all learn.

Vamshidhar Kuchikulla July 28th, 2004 10:34 AM

hi everybody
 
The Canon XL2 is definitely 8-bit A/D digital quantization. It's specified on simplydv.co.uk:

URL: http://www.simplydv.co.uk/docs/CanonXL2_specifications.pdf

thanx

Andre De Clercq July 28th, 2004 12:21 PM

If setup and WB are performed in analog, a 10-bit quantization depth for the analog CCD signals is OK (8 bits plus 2 extra bits for standard gamma correction, which is difficult to do in analog) for not-too-complicated digital processing (excluding cine gamma, knee processing...). Lower bit depths result in banding and/or spatial dithering noise. Of course, the final quantization after processing always remains 8-bit for DV.
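
A small made-up illustration of why those two extra bits help with gamma (the 0.45 exponent is the usual approximation of standard video gamma; the code is not from any camera):

#include <stdio.h>
#include <math.h>

/* Sketch only: a 0.45-power gamma curve stretches the dark end. The first
 * 8-bit input codes map to widely spaced outputs (a jump from 0 to about
 * 21), while 10-bit codes 1..3, which sit between 8-bit codes 0 and 1,
 * supply output values that fill in that gap. */
int main(void)
{
    printf(" 8-bit codes 0..4:");
    for (int n = 0; n <= 4; n++)
        printf(" %5.1f", 255.0 * pow(n / 255.0, 0.45));
    printf("\n10-bit codes 0..4:");
    for (int n = 0; n <= 4; n++)
        printf(" %5.1f", 255.0 * pow(n / 1023.0, 0.45));
    printf("\n");
    return 0;
}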

Don Palomaki July 28th, 2004 08:20 PM

DV is by definition an 8-bit quantization signal. That is what goes to tape and out the firewire. The question at hand is the internals before the video reaches the tape.

I would like to read the 8-bit A/D and DSP front-end specification from an authoritative Canon voice, rather than a third-party website.

Per the Canon service manuals, in the XL1 and the GL1 gamma correction takes place in the analog section before the initial A/D conversion and 9-bit DSP. It drops to 8-bit when it leaves the DSP to go to the recorder section. I find it difficult to believe that Canon would dumb it down in the XL2.

Chris Hurd August 12th, 2004 09:38 AM

I have received notification from Canon USA that the XL2 has a 12-bit DSP. This applies of course to both NTSC and PAL versions.


