Quote:
Originally Posted by Dale Paterson
Hi David,
I have my own theory on this (which nobody has ever passed an opinion on) and the theory is this:
Vegas de-interlaces a file - using the de-interlace method selected - whenever it feels it needs to and then 're-interlaces' the file to produce the desired output. This may sound strange but think about it - HDV is interlaced, upper field first and PAL DV is interlaced, bottom field first so I can understand that by converting HDV to PAL DV you are changing the field order which is a no-no and I can understand why you would get interlace flicker (particularly when there is movement)
|
I think it might be even more fundamental than that. In the computer, particularly in an application like Vegas, things tend to work out well when processing is frame oriented rather than field oriented. So it's very likely that Vegas treats a 60i video track as, essentially, 30p plus an "interlaced" flag, for all practical purposes. That "progressive" representation uses the natural field order for the media type, i.e., the upper and lower fields that belong in a frame together, based on the chronology. This isn't real de-interlacing in any sense of blending.
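The frame-as-two-fields model is easy to make concrete. Here's a toy sketch in Python/NumPy (an illustration of the idea, not anything Vegas actually does internally): an interlaced frame is just two half-height fields woven together by row parity, plus a flag recording which field comes first in time.

```python
import numpy as np

def split_fields(frame):
    """Split a frame into its two fields by row parity:
    even rows form the upper (top) field, odd rows the lower field."""
    return frame[0::2], frame[1::2]

def weave(upper, lower, upper_field_first=True):
    """Re-interleave two fields into one frame. The field-order flag only
    records which field is temporally first; the spatial weave is identical
    either way, which is why a frame can round-trip losslessly."""
    h = upper.shape[0] + lower.shape[0]
    frame = np.empty((h,) + upper.shape[1:], dtype=upper.dtype)
    frame[0::2] = upper
    frame[1::2] = lower
    return frame

# A frame survives the round trip through the field representation intact:
frame = np.arange(12).reshape(6, 2)
u, l = split_fields(frame)
assert np.array_equal(weave(u, l), frame)
```

So storing "30p plus a flag" loses nothing as long as the field order never has to change.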
Quote:
Originally Posted by Dale Paterson
BUT why then does the setting of the de-interlace method even come into play in this scenario. In other words (if my theory is wrong) I should get a PAL DV file that is interlaced, bottom field first, with much interlace flicker (because I am effectively changing the field order) if I rendered with the de-interlace method set to none (which is in fact the result) but I should get the SAME result even if de-interlace method is set to blend or interpolate BECAUSE at no point have I instructed Vegas to de-interlace my footage! Any thoughts on this?
|
I think basically, it amounts to a choice here... if you were to render "straight" to, say, DVD-class MPEG-2/60i from your HDV, same field order, the video would look fine rendered directly from that internal model. If you rendered to DVD-class MPEG-2/30p, same thing. But if you render to a form, like DV, where the field order is reversed, then the temporal ordering of your fields is wrong, given that internal model, and must be changed. I think setting the "de-interlace method" to "NONE" essentially says "never depart from this internal model in any intelligent way."
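To see why a reversed field order forces a temporal fix, consider a toy field stream (illustration only; `repair_to_lff` is a hypothetical helper, not a Vegas function). In upper-field-first video the fields arrive U0, L0, U1, L1, ...; if you simply relabel the stream as lower-field-first, each output frame pairs fields in the wrong temporal order. One standard fix is to shift the pairing by one field:

```python
# Fields in capture order for upper-field-first (UFF) video:
fields = [("U", 0), ("L", 0), ("U", 1), ("L", 1), ("U", 2), ("L", 2)]

def repair_to_lff(fields):
    """Convert a UFF field stream to lower-field-first by shifting the
    pairing one field: drop the first upper field and pair each lower
    field with the FOLLOWING upper field (L0+U1, L1+U2, ...), so the
    temporally earlier field still comes first within each new frame."""
    shifted = fields[1:]                     # start the stream on a lower field
    return [(shifted[i], shifted[i + 1])     # (lower, next upper) pairs
            for i in range(0, len(shifted) - 1, 2)]

for (kind_a, _), (kind_b, _) in repair_to_lff(fields):
    assert (kind_a, kind_b) == ("L", "U")    # lower field now leads each frame
```

With "de-interlace method" set to none, nothing like this re-pairing happens, and you get exactly the motion flicker Dale describes.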
This isn't just Vegas, either... other programs I've used seem to behave in a way that fits the model I'm suggesting.
Quote:
Originally Posted by Dale Paterson
Either the above or the Vegas de-interlace method is ALSO used when you for some or the other reason are swapping fields / changing the field order for whatever reason (and again the only explanation for this would be if Vegas first de-interlaces the original footage and then re-interlaces depending on whether or not you are trying to create an interlaced file with a different field order).
|
I think it really amounts to "do something" vs. "do nothing"; it's just that interlacing is the biggie when it comes to bad-looking video. If you render from HDV to MPEG-2 60i at MP@ML resolution without any de-interlace method, you just get the resizing. With de-interlacing checked, the result should be essentially the same (assuming it's smart enough to do field-based resizing; if not, the former could be messed up). If you render to 30p, the video still looks reasonable, though with de-interlacing checked you may well get "smart" field blending rather than something a bit more brute-force... but that's much more nuanced. Changing the field order is the case where mishandling becomes seriously obvious (back in the days of Super Video CD, I made this mistake occasionally).
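The field-based-resizing point can be sketched like this (a toy nearest-neighbour illustration in Python/NumPy, not Vegas's actual resampler): each field is a snapshot of a different instant, so a vertical resize should scale the two fields independently and re-weave them, rather than resizing the woven frame as if it were one moment in time.

```python
import numpy as np

def resize_rows(img, new_h):
    """Toy nearest-neighbour vertical resize (illustration only)."""
    idx = np.arange(new_h) * img.shape[0] // new_h
    return img[idx]

def resize_interlaced(frame, new_h):
    """Field-based resize: scale each field on its own, then re-weave.
    A naive frame-based resize would mix rows captured at two different
    moments, smearing any motion that occurred between the fields."""
    upper, lower = frame[0::2], frame[1::2]
    up = resize_rows(upper, new_h // 2)
    lo = resize_rows(lower, new_h // 2)
    out = np.empty((new_h,) + frame.shape[1:], dtype=frame.dtype)
    out[0::2], out[1::2] = up, lo
    return out
```

For HDV (1080i) down to MP@ML (480i/576i), this is the difference between clean fields and combing baked into the scaled picture.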
Quote:
Originally Posted by Dale Paterson
Just another bit of useless (useful) information: If you allow the Sony FX1E to convert HDV to PAL DV the output file size is almost identical (as a matter of fact it is slightly larger) to the same .m2t file captured from the FX1E and converted using Vegas to PAL DV with the de-interlace method set to blend fields.
|
That makes perfect sense: DV isn't a variable-rate format. I've never actually tried on-camera DV conversion (I have the HVR-A1), simply because I would naturally assume that Vegas would do it better -- it has many, many times the resources, even on a modest PC, and there's no demand to do it in real time, though I would hope that in my case (Athlon64x2 4200+) it's happening much faster.
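The fixed-rate point is easy to check with arithmetic: every PAL DV frame compresses to exactly the same number of bytes, so the stream size depends only on duration, not on picture content or which de-interlace method produced it. A back-of-envelope sketch (raw DV stream only; AVI container overhead, audio muxing, etc. are ignored):

```python
# A PAL DV frame is 12 DIF sequences x 150 DIF blocks x 80 bytes each,
# at a fixed 25 frames per second, regardless of picture content.
BYTES_PER_FRAME = 12 * 150 * 80      # 144,000 bytes per frame
FPS = 25                             # PAL frame rate

def dv_stream_bytes(seconds):
    """Size of the raw PAL DV stream: purely a function of duration."""
    return BYTES_PER_FRAME * FPS * seconds

one_minute = dv_stream_bytes(60)
print(one_minute)                    # 216000000 bytes, i.e. ~206 MiB/min
```

So two PAL DV files of the same length should be essentially identical in size no matter how they were made; any small difference comes from the container, not the video data.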
Quote:
Originally Posted by Dale Paterson
Interpolate fields creates a much smaller file.
|
When rendered to DV? That's really weird; it's not supposed to work that way.
Quote:
Originally Posted by Dale Paterson
From this I can only assume that you are getting the same or possibly even better quality HDV to DV conversion when the camera does the conversion and of course take only a fraction of the time it takes to convert a HDV tape to DV using Vegas.
|
I think you can bank on Vegas, or other PC-based tools, doing at least as good a job on the downconversion. They should do better, if there's any real margin in the conversion process.
-Dave