DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Sony HVR-Z1 / HDR-FX1 (https://www.dvinfo.net/forum/sony-hvr-z1-hdr-fx1/)
-   -   Does DV mode bypass MPEG-2 Compression? DIE HDV DIE!!! (https://www.dvinfo.net/forum/sony-hvr-z1-hdr-fx1/50094-does-dv-mode-bypass-mpeg-2-compression-die-hdv-die.html)

Steven White August 26th, 2005 09:35 PM

Does DV mode bypass MPEG-2 Compression? DIE HDV DIE!!!
 
In the ATV Vibration thread, Ken had a theory. He was seeing horrible wonky vibration artefacts in his footage, and it didn't matter whether he was in DV mode or HDV mode. Considering he'd had great success with his straight up DV cam, he wondered whether this was a "fundamental flaw of HDV" - implying the trusty MPEG-2 codec was at fault. His theory was that even when shooting in DV mode the FX1 footage went through the HDV MPEG-2 compression before going to tape.

THIS EVENING I SET OUT TO FIND EVIDENCE TO SUPPORT OR REFUTE THIS CLAIM.

My objective was to break HDV. To break it so badly that the GOP would fall apart and that MPEG-2 blocking artefacts would be large enough to dominate the picture on down conversion to DV resolution - exceeding any DV artefact seen. The trick was the following: I had to come up with something that would reproducibly kill the codec, so I could shoot it in both modes.

I went to Canadian Tire and bought two 10 W strobe lights. Amazingly they were on sale for 40% off.

I then assembled what I thought to be the Image of Doom: a collection of junk lying around in my living room, with as much detail as I could find. DVD covers, miscellaneous implements, software boxes, props, books, flyers - all laid out on a throw that had enough detail to strain the codecs in and of itself.

I then turned off all lights in the room, and set the strobes on opposite sides of the jumble, such that the illumination of each strobe would produce a highly different image from the previous one in the GOP. Shadows would jump, detail in text that was visible from one angle would be obscured, and chroma would spontaneously appear and disappear from frame to frame, falling to absolute blackness when the strobes were off. The purpose of the strobes was not only to change the lighting, but to alias any zooms or pans I did, maximizing the bandwidth of the images.

I then ran a set of tests with the strobes on:
- static imaging (lighting changes only)
- zooming (lighting + motion)
- panning

The most dramatic result (surprising to me) came from the static tests… and the worst image compression I could get was when the lighting was mostly dark and the upper field got exposed by one strobe while the lower field got exposed by the other. Note: that means a dramatic difference between the upper and lower fields within a single frame. And I managed to capture one such frame from the DV acquisition and one from the HDV acquisition.
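
(For anyone who wants to poke at this themselves: "upper field" and "lower field" just mean the even and odd scan lines of the interlaced frame, and a quick numpy/Pillow sketch like the one below pulls them apart. The filename is made up, and this is not the After Effects deinterlace I actually used for the grabs below.)

Code:

import numpy as np
from PIL import Image

frame = np.asarray(Image.open("strobe_frame.png"))   # one full interlaced frame

upper = frame[0::2]    # even lines: upper field (lit by one strobe)
lower = frame[1::2]    # odd lines:  lower field (lit by the other, or black)

# Line-double each field so both can be viewed at full frame height.
Image.fromarray(np.repeat(upper, 2, axis=0)).save("upper_field.png")
Image.fromarray(np.repeat(lower, 2, axis=0)).save("lower_field.png")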

Here are the deinterlaced fields (best quality in After Effects 6.5) as lossless *.png files.

http://s94963366.onlinehome.us/HDRFX1/DieHDVdie.zip (4.5 MB)

I encourage you all to download and inspect them at your leisure… Resize ‘em, line ‘em up, and gape in horror at all the mosquito noise, blocking, and chroma aberration. Indeed… these frame grabs are a testament to every compression artefact they have a name for.

The horror. The horror.

And… so what’s my conclusion?

Well. The DV looked consistently awful, and the HDV looked sometimes better and sometimes worse. When HDV truly had the worst time, its artefacts exceeded the copious mosquito noise and 4:1:1 chroma bleeding coming from the DV. I’m pretty sure Ken’s theory is wrong.

Now… I am admittedly biased in this. I would like for Sony to have done what makes sense… so I would like other people to look at the images, do tests of their own, or submit their own tests. If tomorrow is a sunny day, I'll go out and shoot some whip pans with a 1/10000 shutter to make my point.

-Steve

Steve Mullen August 26th, 2005 10:51 PM

Quote:

Originally Posted by Steven White
The horror. The horror.

Trying to compress two really different fields is where interlaced MPEG really breaks down.

DV is supposed to have the capability to measure the correlation between fields. If it's HIGH, the frame is compressed as a whole. If it's LOW, each field is compressed separately.

So it should handle this OK -- while going nuts on details.
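
Roughly the kind of decision I mean - this is a sketch of the idea only, with a made-up difference measure and threshold, not the actual DV chip's algorithm:

Code:

import numpy as np

def choose_dct_mode(block, diff_threshold=16.0):
    """Pick a mode for one 8x8 luma block: 'frame' or 'field'."""
    top = block[0::2, :].astype(np.float64)     # lines 0,2,4,6 -> upper field
    bottom = block[1::2, :].astype(np.float64)  # lines 1,3,5,7 -> lower field
    mad = np.abs(top - bottom).mean()           # how different the two fields look
    # Fields alike (high correlation) -> compress the block as one frame.
    # Fields very different (low correlation, e.g. a strobe lit only one
    # of them) -> compress each field separately.
    return 'frame' if mad < diff_threshold else 'field'

# Example: lower field lit by a strobe, upper field black
block = np.zeros((8, 8), dtype=np.uint8)
block[1::2, :] = 200
print(choose_dct_mode(block))   # -> 'field'

The real chip makes this decision per block with its own measure and threshold; the point is just that two wildly different fields push it into the separate-field path.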

Barry Green August 26th, 2005 11:14 PM

Try your test while rotating the camera. Still frames will help HDV the most; if you want to break the compression you have to put it in circumstances where it can't predict what's going to happen next. So side-to-side pans won't do much to push it, but if you spiral in on the shot, that should probably really tax it. Combine that with everything else you're doing and you should hit the limits.

Regarding DV/MPEG -- logically, there's pretty much no chance that the camera's doing a pre-MPEG pass before making the DV output. That just makes no sense. There are two separate encoder chips in there (or, at least, two separate compression processes). I'm sure everything up to the encoder part is the same, and the DSP then presents its final image to either DV or MPEG-2, but certainly not both.

Steven White August 27th, 2005 02:15 AM

I'm pretty sure you're right Barry. This has been the common expectation for a while, but so far as I know, no one has set out to definitively support it. With some sunny weather I ought to have a better test.

-Steve

Mark Grant August 27th, 2005 06:45 AM

If people were claiming that Sony developed HDV based on technology from the Roswell UFO crash, I could give that at least a few seconds' thought before dismissing it, but I have a hard time believing they'd be dumb enough to go MPEG2 encode, MPEG2 decode, scale, DV compression rather than skip the first two stages completely.

Surely one easy test would be to look for delay on the Firewire output. If they're going through MPEG2 there should be at least half a second of delay on the DV output, whereas if it's pure DV compression there should only be a couple of frames' delay.
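
Back-of-the-envelope numbers, assuming (and it is only an assumption) that the MPEG2 path has to buffer a full 15-frame GOP while a plain DV pipeline holds only a frame or two:

Code:

GOP_LENGTH = 15              # frames in an HDV 1080/60i GOP
FRAME_RATE = 30000 / 1001    # 60i -> ~29.97 frames per second

mpeg2_delay = GOP_LENGTH / FRAME_RATE   # GOP buffering alone
dv_delay = 2 / FRAME_RATE               # assumed ~2 frames of pipeline latency

print(f"MPEG2 path: at least {mpeg2_delay * 1000:.0f} ms")   # ~500 ms
print(f"pure DV:    roughly  {dv_delay * 1000:.0f} ms")      # ~67 ms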

Boyd Ostroff August 27th, 2005 07:34 AM

I think for your tests to really be meaningful you need to repeat them with a regular DV camera side-by-side so we can compare the results. Personally, from the casual testing I've done, the DV mode looks pretty clean and the whole thing is a non-issue...

Steven White August 27th, 2005 01:07 PM

Okay. I tried again.

This time I went outside, set up the camera with a 1/10000 shutter, zoomed in, and did 180-degree whip pans in 0.5 seconds. In this instance, every single field of every single frame within a 15-frame GOP would be 50 to 100% different from the one before it.
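
Rough numbers behind that claim - the horizontal field of view at full zoom is a guess on my part, not a measured spec:

Code:

pan_rate = 180 / 0.5        # degrees per second for the whip pan
field_rate = 60000 / 1001   # 60i -> ~59.94 fields per second
h_fov = 5.0                 # assumed horizontal FOV at full zoom, in degrees

deg_per_field = pan_rate / field_rate            # ~6 degrees swept per field
overlap = max(0.0, 1.0 - deg_per_field / h_fov)  # scene shared by adjacent fields

print(f"camera sweeps ~{deg_per_field:.1f} degrees per field")
print(f"adjacent fields share only ~{overlap * 100:.0f}% of the scene")

Zoomed in that far, adjacent fields barely overlap at all, which is about as hostile to motion prediction as it gets.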

I tested in both DV and HDV modes, 60i and CF30.

The result is surprising... and not for the reasons you'd expect. Essentially - I CAN'T TELL. The reason I can't tell is that at SD resolutions, the HDV footage looks pretty much indistinguishable from the DV footage. Macroblocks all but disappear, and what blocking artifacts there are at SD are very comparable to the mosquito noise already present.

I have two theories:
- Either the DV mode does pass through the HDV compressor first.
- Or the HDV compression supports the claim that at a given data rate, the very worst you can do is i-frame-only compression. A well-implemented GOP structure cannot give a noticeably worse result.

As has been stated, it would have been illogical of Sony to include MPEG compression in the DV record setup. I did not, however, prove that hypothesis. I encourage someone else to try.

-Steve

Douglas Spotted Eagle August 27th, 2005 02:09 PM

Quote:

Originally Posted by Steven White
I have two theories:
- Either the DV mode does pass through the HDV compressor first.
- Or the HDV compression supports the claim that at a given data rate, the very worst you can do is i-frame-only compression. A well-implemented GOP structure cannot give a noticeably worse result.


-Steve

At no point in time does the DV signal get routed through the MPEG compression circuitry if the camera is in DV mode.

Barry Green August 27th, 2005 02:15 PM

Quote:

Originally Posted by Steven White
In this instance, every single field of every single frame within a 15-frame GOP would be 50 to 100% different from the one before it.

Yes, but... there's no detail. In such a whip pan, there's probably so much motion blur that there's no fine detail. And so HDV can keep up with it.

Try shooting a rippling lake. Point down so that the entire shot is composed of rippling water. I've seen a shot like that from the JVC, and it's very blocky and "bitty". That should push the HDV compression pretty far. The DV shot should look crystal-clear by comparison.

Quote:

- Either the DV mode does pass through the HDV compressor first.
I'd be willing to bet cash money that there's no point where MPEG-2 enters the equation if you're shooting in DV mode.

Quote:

Or the HDV compression supports the claim that at a given data rate, the very worst you can do is i-frame-only compression. A well-implemented GOP structure cannot give a noticeably worse result.
Not true at all. The GOP structure can give horribly worse results under certain conditions, depending on the bitrate. In the worst possible circumstances, MPEG-2's GOP does revert to basically i-frame-only compression. But just imagine how bad the compression would be if you were trying to do i-frame-only high-def in 25 megabits. Or, don't imagine - just go to these pictures and look at the examples. They show HDV at its best (a still frame for the entire GOP) and at its worst (every frame showing a complete change):

http://www.icexpo.com/1080vs1080.PNG
http://www.icexpo.com/720vs720.png
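
Just to put numbers on the bit budgets involved (nominal stream rates; this says nothing about how well either codec actually spends its bits):

Code:

frame_rate = 30000 / 1001    # ~29.97 fps
hdv_bitrate = 25e6           # HDV 1080i video stream, ~25 Mbit/s
dv_bitrate = 25e6            # DV25 video, also ~25 Mbit/s

bits_per_hdv_frame = hdv_bitrate / frame_rate   # what i-frame-only HDV would get
bits_per_dv_frame = dv_bitrate / frame_rate

print(f"i-frame-only HD: {bits_per_hdv_frame / 1e6:.2f} Mbit per 1440x1080 frame")
print(f"DV:              {bits_per_dv_frame / 1e6:.2f} Mbit per 720x480 frame")
print(f"bits per pixel:  {bits_per_hdv_frame / (1440 * 1080):.2f} (HD) "
      f"vs {bits_per_dv_frame / (720 * 480):.2f} (DV)")

By this count an i-frame-only HD frame would get well under a quarter of the bits per pixel that DV gets, before you even consider that intra-only coding can't borrow anything from neighbouring frames.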

Steven White August 27th, 2005 02:29 PM

Quote:

Yes, but... there's no detail. In such a whip pan, there's probably so much motion blur that there's no fine detail. And so HDV can keep up with it.
Read my post more carefully, Barry. I was running the shutter at 1/10000 s. Motion blur was essentially gone. The compression was awful. Motion blocking was evident, detail loss was evident, and mosquito noise was everywhere. Despite all this, when comparing it with the DV I couldn't convincingly favour one over the other.

Quote:

I'd be willing to bet cash money that there's no point where MPEG-2 enters the equation if you're shooting in DV mode.
I agree. Now prove it. I didn't think this would be difficult to prove. So far, it has been.

Quote:

just imagine how bad the compression would be if you were trying to do i-frame-only high-def in 25 megabits
I don't have to imagine it, Barry - I just did this test. What I'm seeing is like DV footage up-sampled to HD... but rather than being perfectly upsampled, some things are resolved better and some are blocky. I'll try to imagine it the opposite way now... Gee. Most of my blockies disappear - not only is the resolution lower, but because I've averaged out half the available chroma going down to 4:1:1, it looks smoother than it did originally.

Look. You could get me an engineer from Sony who could tell me exactly what you guys are saying - but they're only words unless they can be verified. Someone do a test and prove it definitively with a physical device - not a piece of paper.

Barry: I've seen your codec tests. Now try this:
- Take the 1080vs1080 *.png file, and get it in its native resolution (scale to 33.333% due to your 300% blowup) in a square pixel 1440x1080 composition
- Render this composition to 720x480 4:1:1 DV
- Open the DV file as square pixel, and export:
http://s94963366.onlinehome.us/HDRFX1/NowDV.png

And that's what it looks like. NOW differentiate between your 3 codecs. The fact is... you pretty much can't... And that's with a test pattern that you deliberately made "worst case HDV". How the heck are you supposed to prove your point with real world footage? Convinced? I thought not.
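
(If you don't have After Effects, the gist of that procedure can be approximated with a Pillow/numpy sketch like this - it only mimics the resize and the 4:1:1 chroma averaging, not DV's actual DCT quantization, and the filenames are placeholders.)

Code:

import numpy as np
from PIL import Image

src = Image.open("1080vs1080_native.png").convert("YCbCr")  # placeholder name
small = src.resize((720, 480), Image.LANCZOS)               # "render to 720x480"

y, cb, cr = [np.asarray(ch, dtype=np.float64) for ch in small.split()]

def subsample_411(chroma):
    """Average each run of 4 horizontal pixels, as 4:1:1 chroma sampling does."""
    c = chroma.reshape(chroma.shape[0], -1, 4).mean(axis=2)  # 720 -> 180 samples/line
    return np.repeat(c, 4, axis=1)                           # stretch back for viewing

bands = [Image.fromarray(np.uint8(b.round()))
         for b in (y, subsample_411(cb), subsample_411(cr))]
Image.merge("YCbCr", bands).convert("RGB").save("NowDV_approx.png")

Crude, but it captures the two lossy steps that do most of the damage.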

So I went and downloaded both the source_1080.jpg file and the oneriver-hdv-worst.jpg file that you uploaded at DVXuser. I then went through this identical procedure, resizing the image to 720x480 and compressing it with the DV compressor, and the results are here:

http://s94963366.onlinehome.us/HDRFX1/DV1.png
http://s94963366.onlinehome.us/HDRFX1/DV2.png
http://s94963366.onlinehome.us/HDRFX1/DV3.png

- One of these images came from a completely uncompressed source and was passed through the DV compressor.
- One image was a worst-case HDV image that was downsampled to 720x480.
- One image was a worst-case HDV image, downsampled and compressed with the DV codec.

I'm not going to tell you which is which - I'll leave that for you to tell me. Getting all three right, the first time, with 100% confidence, is what this test entails... except it's actually more difficult, because none of the images are exactly the same... and all of the images were taken in the real world. I think getting a conclusive result will be very difficult.

-Steve

Barry Green August 27th, 2005 03:27 PM

Quote:

Originally Posted by Steven White
Read my post more carefully, Barry. I was running the shutter at 1/10000 s. Motion blur was essentially gone. The compression was awful. Motion blocking was evident, detail loss was evident, and mosquito noise was everywhere.

DOH! Sorry, need to pay better attention. Yes, at 1/10000 you would get exactly the results you were after.

Quote:

And that's what it looks like. NOW differentiate between your 3 codecs. The fact is... you pretty much can't... And that's with a test pattern that you deliberately made "worst case HDV". How the heck are you supposed to prove your point with real world footage? Convinced? I thought not.
Interesting.

Well, you do have a point -- scaling down does hide a number of errors. However, it would just be loony-tunes for them to pre-compress with MPEG-2 and then scale down and recompress with DV.

As to how to prove it definitively -- well, I think the earlier suggestion about the firewire delay was on the right track... MPEG-2 HDV has to "queue up" 15 frames before it can execute compression on the group as a whole. If you see an immediate response through firewire for DV, but a 1/2-second delayed response when in HDV mode, then that should be enough to prove that MPEG-2 isn't being employed during DV origination.

Then again, does the receiving device need to receive a full GOP before it can begin decompressing? If so, there'd be no circumstance where you could ever see a non-delayed output from the firewire, so again it would prove nothing...

The way to test it would be to devise a shot that thoroughly trashes the MPEG-2 codec (as you've tried to do), such that even when resized it looks obviously inferior to DV, carrying through artifacting that survives the DV pass. But such a shot would have to be one that can easily be critically analyzed, and your whip pan probably wasn't. I'll have to think about that...

Ken Eberhard August 27th, 2005 05:23 PM

Greetings. I am very hesitant to join in here because you guys are way over my head in experience. But I'd like to suggest that maybe you do the test that I have found trashes the codec in my camera: vibration.

http://www.dvinfo.net/conf/showthread.php?t=49804

Very minor vibration will screw it up reliably - very minor, at a slow to mid rpm compared to a motorcycle engine. You can even set the rpm at a constant speed to get a comparative test. Just touch any part of the bike, even the seat, with a monopod. Or even a clothes dryer. I've done this on my quad and found the problem in both codecs with the FX-1.

I'm not one who knows anything about the signal path in the Sony FX-1. But if I were to build such a device, I would have to think to myself: let's put the energy into a chip that compresses to the HDV codec. Now they also need the camera to convert the HDV to a DV output... why not use that converter, which is already there, to go to tape in DV record mode?

Thanks... Ken E.

Steven White August 27th, 2005 06:13 PM

Well, Ken - from your experience I know the vibration affects both camera modes. Your theory was that it was an MPEG-2 problem... and my hypothesis was that it's not, but rather a mechanical vibration problem with the camera itself that has nothing to do with the codecs.

What I was hoping to do here was disprove your theory by demonstrating HDV macroblocking in footage where the DV codec - thanks to its i-frame-only compression - didn't produce artifacts.

My problem is that down-sampled HDV footage - even with copious artifacts - is so close to DV footage that I can't rationally tell the difference. Even stress tests like Barry's synthetic images yield completely comparable results once DV has its way.

Now... there were some artifacts in the MPEG-2 streams from my strobing test that I never saw in my DV tests... but due to the nature of the artifacts (bits of fields were interchanged), I am unsure whether they are dropout, compression, or GOP-capture artefacts that the DV codec simply managed to avoid when reading from the tape.

-Steve

Ken Eberhard August 27th, 2005 06:29 PM

Steve, I would be so happy if you can prove just that. And if you can tell me who to contact at Sony to replace my camera. They tried to repair it twice and have come to the conclusion that it is the nature of the camera, so I can't use it as I had intended to. So Pleeeeeeeeeese prove me wrong...

Ken E. :-)

Alex Raskin August 28th, 2005 07:50 AM

No jaggies
 
Steven, how did you achieve this:

http://s94963366.onlinehome.us/HDRFX1/DV3.png

?

Thanks.



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network