View Full Version : DVCPRO HD - Intra DC Precision question


John Jay
August 5th, 2005, 03:57 PM
Can someone say how many bits are allocated to the coefficients of the Discrete Cosine Transform in the in-camera compression scheme for DVCPRO HD?

My question is related to the use of 35mm adapters, where a higher bit count can improve the rendering of very fine shadings (bokeh) in the encoded video.

John Jay
March 8th, 2006, 04:41 PM
Anyone know for sure?

Chris Hurd
March 8th, 2006, 04:44 PM
Wow, over half a year and no responses? We've got to do something about that.

John Jay
March 8th, 2006, 04:51 PM
I'm sure you'll nail it Chris :)

A. J. deLange
March 9th, 2006, 03:55 PM
This is probably not a very satisfying answer, but it is determined adaptively in order to make the output bitstream fit into the available bandwidth. The DC (0,0) coefficient is allocated more bits than the AC terms, and the number of bits decreases as the frequency increases.
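To make the "bits decrease with frequency" idea concrete, here is a toy sketch. It uses the well-known JPEG luminance quantization table purely as an example (DVCPRO HD's actual tables are different and not public in this thread), and estimates the effective precision of each coefficient as log2(coefficient range / quantizer step):

```python
import math

# Standard JPEG luminance quantization table -- example only; the actual
# DVCPRO HD tables differ, but the shape (coarse steps at high frequency)
# is the same idea.
Q = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

# Rough dynamic range of an 8x8 DCT coefficient from 8-bit video.
coeff_range = 2048.0

# Effective bits retained per coefficient after quantization.
bits = [[math.log2(coeff_range / q) for q in row] for row in Q]

print(round(bits[0][0], 1))  # DC term: finest step, most bits
print(round(bits[7][7], 1))  # highest-frequency AC term: fewest bits
```

The DC term comes out around 7 effective bits versus roughly 4.4 for the highest-frequency AC term, which is the frequency-dependent allocation A. J. describes.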

John Jay
March 10th, 2006, 10:37 AM
AJ; I read an article which says that in M-JPEG the DC coefficient of each macroblock (the average luminance of the macroblock) is defined with only eight bits of precision, while with MPEG-2 the DC precision can be 10 bits. This results in a perceivable image improvement with certain types of pictures.

more here >> http://www.broadcastpapers.com/editing/Pinnacle4MPEGMyths%20-%20print.htm

I am presuming M-JPEG is the scheme used in DVCPRO HD?

So with MPEG-2 the edges of the 8x8 pixel blocks are that much smoother, and therefore less prominent in images where subtle gradients are present.
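A quick toy model of why the extra DC precision helps gradients: quantize the per-block DC (the block's mean luminance) to 8 bits versus 10 bits and see how well a very gentle ramp survives. This is an illustration only, not the actual MPEG-2 or DV quantizer:

```python
# Quantize a value to an N-bit grid over [lo, hi] -- a simplified stand-in
# for DC coefficient precision, not the real codec math.
def quantize(value, bits, lo=0.0, hi=255.0):
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return round((value - lo) / step) * step + lo

# A subtle gradient: block-average luminance rising by 0.2 per block.
dcs = [100.0 + 0.2 * i for i in range(32)]

err8  = max(abs(quantize(v, 8)  - v) for v in dcs)
err10 = max(abs(quantize(v, 10) - v) for v in dcs)

print(err8, err10)
```

The 8-bit DC can be off by up to half a code value (about 0.5), so neighbouring blocks snap to the same level and the ramp shows banding at block edges; the 10-bit DC error is roughly a quarter of that, so the gradient steps are much finer.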

I am surprised Panasonic's white paper on DVCPRO HD says very little in this regard.

Bob Grant
March 10th, 2006, 04:26 PM
There's another issue, which is what I thought you were originally on about:
the efficiency of the encoder. Even though the data rates may be 25 or 100 Mbit/s for DV25 or DVCPRO100, not all of those bits are necessarily used to carry useful data.
The encoder may simply run out of time to encode a frame due to its complexity and therefore just pad the data. The article that I read was pretty old, so maybe encoders have improved, but back then measured actual data could run as low as 30% of available bandwidth.