DV Info Net (https://www.dvinfo.net/forum/)
-   JVC GY-HD Series Camera Systems (https://www.dvinfo.net/forum/jvc-gy-hd-series-camera-systems/)
-   -   Intermediate Codecs and CC? (https://www.dvinfo.net/forum/jvc-gy-hd-series-camera-systems/55444-intermediate-codecs-cc.html)

Steve Mullen December 3rd, 2005 08:20 AM

Intermediate Codecs and CC?
 
Folks report that CC degrades HDV.

It would seem that using an intermediate codec might help.

But what's the difference between this workflow:

1) Decode HDV to 8-bit uncompressed

2) Perform FX on the uncompressed frames

3) Compress/encode the FX results using another codec


versus this one:

A) Decode HDV to 8-bit uncompressed

B) Perform FX on the uncompressed frames

C) Compress/encode the FX results using another codec

Not much difference, it seems to me.

Would one feel differently if (1) is performed in the timeline, from HDV stored on disk, and (2) is also performed in the timeline?

Would one feel differently if (A) is performed during HDV capture, with the uncompressed result stored on disk, and (B) is performed on the uncompressed frames pulled from disk?

What if (1) used 4:2:2 uncompressed?

What if (A) used 4:4:4 uncompressed?

And, does it make any difference whether 4:2:2 or 4:4:4 is used given the HDV is 4:2:0?

I can't see any difference between (1 2 3) and (A B C).
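
To make the comparison concrete, here's a rough Python sketch of the two paths. The function names are only placeholders (not any real NLE or QuickTime API) and the "codecs" are simulated -- the point is just that the two orderings are the same math applied to the same data.

Code:

import numpy as np

# Stand-ins for the real steps -- nothing here is a real NLE API.
# The point is only that the two orderings produce identical results.

def decode_hdv(clip):
    # pretend decode: HDV MPEG-2 -> 8-bit uncompressed frames
    return clip.astype(np.uint8)

def apply_fx(frames):
    # pretend colour correction: a simple gain, clipped back to 8 bits
    return np.clip(frames * 1.1, 0, 255).astype(np.uint8)

def encode_other(frames):
    # pretend re-encode with "another codec" (identity here)
    return frames.copy()

clip = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)

# (1-2-3): decode in the timeline, FX, encode on export
out_123 = encode_other(apply_fx(decode_hdv(clip)))

# (A-B-C): decode at capture, park the uncompressed frames, FX, encode
stored_uncompressed = decode_hdv(clip)
out_abc = encode_other(apply_fx(stored_uncompressed))

print(np.array_equal(out_123, out_abc))   # True -- same data, same math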

Yet, 1-2-3 is how FCP handles HDV or AIC in the Timeline, while A-B-C is how FCP handles HDV that is captured and converted to the BEST possible intermediate codec -- uncompressed.

(Remember, FCP never re-encodes to HDV unless you do so during export.)

If there is no difference, then how can using ANY intermediate codec improve the quality of color correction on MPEG-2?

It seems the damage has already been done and CC simply manipulates data that are not robust enough to survive.

Transcoding, at best, can only preserve the information in the HDV data stream.

Tim Dashwood December 3rd, 2005 11:18 AM

Quote:

Originally Posted by Steve Mullen
Folks report that CC degrades HDV.

I think you have mis-interpreted the discussion in the other thread.
( http://www.dvinfo.net/conf/showthread.php?t=55344 )

Colour correcting doesn't "degrade" HDV (or any other codec for that matter) if, as you noted above, it is manipulated and rendered in a lossless codec. (Maintaining HDV codec through post is something different and has its own set of problems.)

However, the discussion revolves around the fact that MPEG2 used in HDV seems much more "lossy" than the DV codec we have become accustomed to.
This is not a surprise since there's 3-4 times as much pixel information being squeezed into the same bandwidth.
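
To put rough numbers on that, here's a quick back-of-the-envelope comparison. The frame sizes and bitrates below are the commonly quoted nominal figures (so treat them as approximate), and MPEG-2 also leans on interframe prediction, so bits per pixel isn't the whole story -- but it shows the order of magnitude.

Code:

# Ballpark bits-per-pixel for DV vs HDV, using nominal figures only.

formats = {
    "DV (NTSC)":  dict(width=720,  height=480,  fps=29.97, mbps=25.0),
    "HDV 720p30": dict(width=1280, height=720,  fps=30.0,  mbps=19.0),
    "HDV 1080i":  dict(width=1440, height=1080, fps=29.97, mbps=25.0),
}

for name, f in formats.items():
    pixels_per_sec = f["width"] * f["height"] * f["fps"]
    bits_per_pixel = f["mbps"] * 1e6 / pixels_per_sec
    print(f"{name:11s} {pixels_per_sec / 1e6:5.1f} Mpixels/s"
          f"  ~{bits_per_pixel:.2f} bits/pixel")

# DV lands around 2.4 bits/pixel; HDV lands around 0.5-0.7, i.e. roughly
# a quarter to a third as many bits for each pixel.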

So before HDV is even captured from tape, it is already "degraded" in comparison to DV (and of course DVCPROHD). Colour Correction is much more difficult with HDV because it is easier to reveal these faults (less colour depth in blacks & compression artifacts.)
For example, let's say I shoot a subject backlit on a bright day, use the widest latitude/DR possible (B Stretch 3, 80% KNEE) expose the sky so that the brightest clouds are peaking at 108%, and I shoot my test in HDV and in DV.
The task at hand is to maintain the whites, raise the low mids (so we can expose the subject better) and then crush the blacks back down again (after raising mids) so the shadows aren't muddy.
The DV codec seems to have a different distribution of levels (0-255) than MPEG-2 (MPEG-2 has a higher concentration in bright areas, whereas bright gradations in DV can look "stepped"). The result is that raising the mids on the DV codec reveals the shadow detail we wanted and the subject still looks good. Now, we do the same thing with the HDV codec and we can reveal the shadow detail, but now we can see that there were huge "steps" between the different levels of black, and we have also revealed some MPEG artifacts on the subject's face that were imperceptible before.
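
Just to make the "raise the mids, crush the blacks" part concrete, here is a crude numerical sketch. It is not any particular colour corrector -- just a gamma-style lift followed by a black-point crush on an 8-bit shadow ramp -- but it shows how the grade widens whatever coarse steps the encoder already left in the blacks.

Code:

import numpy as np

def raise_mids(y, gamma=0.6):
    # gamma < 1 lifts the shadows and low mids while pinning 0 and 255
    return 255.0 * (y / 255.0) ** gamma

def crush_blacks(y, black_point=24):
    # pull everything back down so the new black point sits at 0
    return np.clip((y - black_point) * 255.0 / (255.0 - black_point), 0, 255)

# a smooth 8-bit shadow gradient, stepping by 4 code values
shadow = np.arange(8, 44, 4, dtype=np.uint8)

graded = crush_blacks(raise_mids(shadow)).round().astype(np.uint8)

print("before:", shadow)
print("after: ", graded)
print("steps after grading:", np.diff(graded.astype(int)))
# The original 4-level steps come out roughly twice as wide, so any
# banding or block artifacts already sitting in the blacks become that
# much easier to see -- and the more the encoder threw away, the worse it gets.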

So the moral of the story is that, armed with this knowledge, it makes more sense to shoot for the format, and get it close in-camera - requiring only minor shot-by-shot adjustments in post - nothing extreme. So many of us who started out shooting on film had already adjusted our lighting "brain" for the limited dynamic range of video and HDCAM, and now we need to tighten the restrictions even more and keep in mind the effects of MPEG2 compression.
I don't like shooting this way, but these new formats are producer driven. I don't think I've shot a roll of 35mm in over two years! That's the way things are going, and we need to adapt.

What I wouldn't give for a Firestore-sized, battery-powered Wafian!!!
http://www.cineform.com/press/rel-WafianHR1.htm

David Newman December 3rd, 2005 11:38 AM

Steve,

If you stick within one application, an uncompressed intermediate and a native solution (decompressing to uncompressed) would appear the same*. If the native solution did application-to-application exports as uncompressed, then it would also share those benefits. We know that application-to-application exporting is the point of generation loss, which is where the intermediate solutions do much better: uncompressed offers the least loss, of course at the sacrifice of disk space; HDV MPEG offers the most loss but with a small disk footprint; compressed intermediates offer a range of solutions in between.

* However, you did hit on the reason an intermediate solution can be better than a native solution even within a single application -- color space. Graeme Nattress very eloquently points out that color-space processing has a great effect on quality: http://www.nattress.com/Chroma_Inves...masampling.htm. Technically, a native application could do the advanced 4:2:0 to 4:4:4 up-conversion to achieve the best results, but typically they don't, whereas an intermediate approach is free to up-sample independently of the application's native processing path. This is why the first generation CineForm image actually looks subtly better than the HDV 4:2:0 source. This becomes even more obvious through effects that resize the image data, and through color correction, as the extra chroma precision preserves a more natural-looking color image than native 4:2:0 approaches can.
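
For anyone wondering what the up-sampling difference looks like in practice, here is a toy comparison of the cheapest possible chroma up-conversion (pixel replication) against simple linear interpolation. Neither is CineForm's or any NLE's actual filter -- it only illustrates why the choice of up-sampling filter shows up in the image.

Code:

import numpy as np

def upsample_nearest(chroma):
    # replicate each chroma sample into a 2x2 block (the "cheap" path)
    return np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)

def upsample_linear(chroma):
    # separable linear interpolation up to the full-resolution grid
    h, w = chroma.shape
    xi = np.linspace(0, w - 1, 2 * w)
    yi = np.linspace(0, h - 1, 2 * h)
    rows = np.stack([np.interp(xi, np.arange(w), row) for row in chroma])
    cols = np.stack([np.interp(yi, np.arange(h), col) for col in rows.T])
    return cols.T

# a half-resolution chroma plane holding a smooth horizontal ramp,
# the way a 4:2:0 stream would carry it
chroma = np.tile(np.linspace(64, 192, 8), (4, 1))

print(np.round(upsample_nearest(chroma)[0]))  # 2-pixel stair-steps
print(np.round(upsample_linear(chroma)[0]))   # a much smoother ramp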

Stephen L. Noe December 3rd, 2005 12:53 PM

In my experience, it's the transcoding that degrades the video so much. David points out that keeping the content original (whether native or captured to an intermediate codec) gives about the same output when kept within the same application. I had a long thread going with Graeme Nattress about subpixel processing in Liquid (specifically) that will overcome most of the problems Tim Dashwood points out (to a limit). So with that program you're delivering second-generation material (possibly third-generation if you take into account FX and titles as a render plus the final render for delivery). This does not go beyond the same workflow as DV, which is raw to begin with, then titles/FX, and then a final transcode to DVD or whatever delivery format is needed.

I maintain that HDV should be ingested as HDV and worked with as HDV and then output to whatever format is needed for delivery. I think the Cineform codec is ideal for component capture because, once again, you would be working with the raw captured (pristine) state and not an HDV transcode. This is what makes the Wafian so intriguing. You'd be capturing to a MUCH higher-quality codec from the beginning with a very inexpensive (but capable) camera. Very high end stuff.

David Newman December 3rd, 2005 12:59 PM

Re-read my post; you will see that I argue for the opposite position, and with good reason. HDV native can be improved upon through intermediate workflows.

Stephen L. Noe December 3rd, 2005 01:35 PM

David,

Please read my entry to see that the Liquid solution has 16-bit subpixel routines built in for HDV.

It's all good....

Guy Barwood December 3rd, 2005 05:45 PM

"This is why the first generation CineForm image actually looks subtly better than the HDV 4:2:0 source."

I am surprised anyone would suggest that anything can be better than the original, unless you are going for a particular look using effects, etc. No matter what you do, you can't create information that isn't there to start with, and anything you can simulate in a conversion to Cineform you can do with the native source.

Except for the reduced workload on the system for more real-time effects, I just don't see any need for intermediate codecs. For example, Liquid does all its RT effects and rendering from the native HDV using uncompressed frames, as would any NLE using an intermediate codec (i.e. it all has to be decoded to uncompressed for processing anyway), so you just get potentially more layers with Cineform as it is easier to decode but uses 4x the disk space (it is about 100Mbps isn't it?). If it is the processing power requirements of the effects that are the limiting factor, you won't even get more layers with an intermediate codec (e.g. a single layer with a highly complex effect or multiple effects on the one clip).
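
(For what it's worth, the disk-space side is simple arithmetic. The 100 Mbps figure below is only the guess in my question above, not a confirmed CineForm spec; the HDV rates are the usual nominal ones.)

Code:

# Gigabytes per hour at a given data rate: bits/s * seconds / 8 / 1e9.

def gb_per_hour(mbps):
    return mbps * 1e6 * 3600 / 8 / 1e9

rates = [("HDV 720p @ 19 Mbps", 19),
         ("HDV 1080i @ 25 Mbps", 25),
         ("intermediate @ 100 Mbps", 100)]

for name, rate in rates:
    print(f"{name:24s} ~{gb_per_hour(rate):5.1f} GB per hour")

# Roughly 8.5, 11 and 45 GB/hour respectively -- i.e. the "4x the disk
# space" mentioned above.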

Steve Mullen December 3rd, 2005 06:41 PM

Quote:

Originally Posted by David Newman
However, you did hit on the reason an intermediate solution can be better than a native solution even within a single application -- color space. Technically, a native application could do the advanced 4:2:0 to 4:4:4 up-conversion to achieve the best results, but typically they don't.

We both agree, I think, that if one is going to export/import between applications or do a Temp Render (for playback speed) -- using an Intermediate is clearly better -- assuming a good Intermediate. We also know that an Intermediate typically supports more streams.

I was focused only on the CC of an HDV clip issue.

But you raise an interesting point. As I understand Graeme -- since 4:2:0 is ALWAYS converted to a different color space when it is decoded -- the nature of that conversion has an impact on how the image looks (and one assumes, how it handles manipulation).

Which means -- I think -- we don't know if Tim's CC issues are from the HD100's MPEG-2 compression or Apple's decoding of 4:2:0 to 4:4:4.

(We know that with interlaced HDV, Apple does not correctly handle 4:2:0 to 4:2:2 in their conversion from HDV to AIC.)

(We also have no idea how good the MPEG-2 encoder is in the HD100. It might well be not as good as Sony's, for example.)

So both products are suspect!

Which raises the question of what happens when FCP users try a conversion to an Intermediate QT codec.

How does FCP convert HDV to Apple's uncompressed?

How does FCP convert HDV to Apple's DVCPRO HD?

NOTE: Apple could use one method when playing a Timeline (on-the-fly conversion) and a different, far better, method when doing a Batch Convert. But do they?

Likewise, they might use a much better converter when they do an export. Therefore, viewing a timeline on a monitor isn't a real test unless one is viewing exported video. I certainly suspect Apple cuts out lots of processing when they do RT playback! So we need to know what Tim is viewing.

Likewise, what method is used by HDVxDV when it does an export?


Lastly, since the obvious 4:2:2 intermediate for FCP is DVCPRO HD (because of Apple's RT support) -- does it handle CC better than HDV?

1) We know the 1280 samples are scaled to 960 -- although since the effective horizontal resolution of HDV is only about 800 lines, I'm not sure that really means a 25% loss of real detail. Does H rez drop to only 600 lines using DVCPRO HD?

2) We know that the chroma rez of DVCPRO HD is 640 samples. Which means there is an upsample of HDV's chroma involved during a conversion. What is the quality of this upsample?

3) Does HDVxDV use the same routine as Apple does during a Batch Conversion?

4) If we compared the on-the-fly HDV 4:2:0 to 4:4:4 uncompressed conversion to an on-the-fly DVCPRO HD 4:2:2 to 4:4:4 uncompressed conversion, WHERE the DVCPRO HD was HDV that had previously been converted from 4:2:0 to 4:2:2 -- which will look better and which will handle CC better?

5) Lastly, the same question as (4) but we are comparing exported, not RT images:

a) Export to HDV?
b) Export to uncompressed?
c) Export to DVCPRO HD VTR via FireWire?
d) Export to NTSC MPEG-2 for a DVD?
e) Export to AVC?
f) Export to WM9?

It seems to me that without all these tests -- on a COMPLETE production CYCLE -- it's not possible to conclude which is a better path.

And, even if the original HD100 MPEG-2 is not all that good (say, as good as DV) -- without all these tests, one doesn't really know which path handles JVC's HDV the best.

It seems that if one is editing on a PC, or willing to switch, CineForm is the only one that understands this issue AND has a shipping, complete (HDV in and HDV out) 24P solution that works on most any PC -- even a laptop.

Apple users are in limbo, as we have no idea if, when Apple releases 24p support, they will improve AIC and HDV quality.

Right now, if you need TC, it seems the HDVxDV solution is the only option. But, I have no idea what the better export codec is.

For those of us who don't need TC -- double-system with AIC seems faster than conversion -- but far more work.

And, I'm really curious what a Kona or Blackmagic might offer using either uncompressed or DVCPRO HD. And, if the BR50 VTR can be controlled and TC obtained via the DeckLink Pro's RS422 port.

----------------

Tim: Actually I said "It seems the damage has already been done and CC simply manipulates data that are not robust enough to survive."

----------------

Stephen: JVC's position is stay native all the way to a 1280x720 display.


