DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Canon XL H Series HDV Camcorders (https://www.dvinfo.net/forum/canon-xl-h-series-hdv-camcorders/)
-   -   Canon's 24F and 60i modes explained (https://www.dvinfo.net/forum/canon-xl-h-series-hdv-camcorders/63270-canons-24f-60i-modes-explained.html)

Barlow Elton March 20th, 2006 09:45 PM

What he said!!

Leave it to Chris to just swoop in and make a salient point!

I totally appreciate what Steve is trying to get to the bottom of, but in the end all anyone worth their salt with one of these cameras cares about is the result of the mechanism. Yes, I do indeed care about what that mechanism actually is, but only so much. In the end, do the results sell or not?

It still seems like there's something very good but still unexplainable about 24F. Maybe Canon is working on a patent for it and it's a relatively new deinterlace technique.

btw Steve, I think MPEG Streamclip has a "2D FIR (Finite Impulse Response) filter" of sorts for deinterlacing and conversion. I'd like to try that with 50i H1 material and see if the results are anything close to 25F mode. The only problem is I'm pretty sure the program doesn't offer all the motion adaptive deinterlacing that Compressor does, but I'm not sure if Compressor has a 2D FIR.

I'll look into it, but please keep us posted on your further research.

John Cordell March 20th, 2006 10:06 PM

Quote:

Originally Posted by Barlow Elton
Ok, Steve, fair enough. I do appreciate what you bring to the table, but you must understand that even the tone of a post like yours would almost seem to add to the FUD that seems to be swirling around. It can get a little frustrating as an H1 owner because it seems like there's always a lot of silly naysaying going on.

Just want to point out that I'm an H1 owner and I didn't detect a shred of 'tone' in Steve's original post. Seemed like pretty straightforward tech analysis to me. I see Steve's contribution, in both facts and tone, as anti-FUD in nature.

I'm not negating Barlow's reading of Steve's post, just pointing out how mine differed. Hoping more for a cancelling-out effect!

Robert Sanders March 20th, 2006 11:54 PM

If my responses have come across as aimed at Steve's post, then I apologize. That was not my intention.

I think my "rant" was more in response to a general consensus over several years that there seems to be an anti-Canon bias in the filmmaking community, which I don't understand.

A. J. deLange March 21st, 2006 07:35 AM

Deinterleaving is indeed frequently done with FIR filters, though often in the "vertical temporal" domain, and I wouldn't be too surprised to discover that Canon has come up with some new twist on this that they wish to keep proprietary. Since we are free to speculate, here's my particular guess. Successful deinterleaving depends on being able to look at a small part of the image and tell whether it has moved from sampling instant to sampling instant, and how far it has moved vertically and horizontally, so that the lower field can be shifted into alignment with the upper. Does this ring a bell? It should, because that is exactly what an MPEG encoder needs to do. The difference is that with encoding you compute the residual and send that along with the motion vector, whereas with deinterleaving you shift the moved part back into alignment with the other field.

Now, thinking about how to measure movement, it occurred to me that if you take two successive upper fields, DFT (not DCT) them, and conjugate-multiply the results, the phase of the product will give you the offsets (integrate vertically and do a linear fit for phase -- the slope of the phase is the horizontal offset; then do the same for the vertical offset). Multiplying by the conjugate phase slopes and taking the inverse transform (some issues with edge effects, possibly solvable by proper zero stuffing) will shift the second field by the amount of movement, so parts that really did move will be on top of where they were in the first field, and parts that didn't will be misaligned. Taking the difference between the two separates the moving from the non-moving parts (the difference is small where the model is good and larger where it isn't), so that the moving parts of the recorded lower field can now be shifted by half the measured difference and combined with an unshifted copy of the parts that didn't move, to generate a lower field with the moving parts in the right places.

This is equivalent to "weave" where there is no motion and to "bob" where there is, except that it suggests both can be done at once and that "bob" can be adaptively weighted to the amount of motion, as opposed to using a fixed set of coefficients as is apparently the practice. Note that the frequency and spatial domains are dual, so my guess could be implemented in either domain, but time-varying coefficients would be required in the latter.

So my WAG is that Canon have done something like this which cleverly combines deinterleaving with MPEG encoding (note that the DCT, which is required for MPEG encoding, is simply the real part of the DFT).
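(The conjugate-multiply idea above can be sketched in a few lines of NumPy. This uses the standard "phase correlation" formulation, which locates the shift as a peak of the inverse transform rather than fitting the phase slope directly, but it measures the same thing. A toy illustration only -- whole-pixel shifts, and certainly not Canon's actual implementation.)

```python
import numpy as np

def estimate_shift(reference, moved):
    """Estimate the (row, col) shift of `moved` relative to `reference`
    by conjugate-multiplying their DFTs (phase correlation)."""
    r = np.fft.fft2(moved) * np.conj(np.fft.fft2(reference))
    r /= np.abs(r) + 1e-12              # keep only the phase
    corr = np.fft.ifft2(r).real         # impulse at the displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks past the midpoint as negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Two "successive fields": the second is the first, moved 3 rows down
# and 5 columns left.
rng = np.random.default_rng(0)
field1 = rng.standard_normal((64, 64))
field2 = np.roll(field1, (3, -5), axis=(0, 1))
print(estimate_shift(field1, field2))   # -> (3, -5)
```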

A note on dB: A one-stop decrease in sensitivity means twice the light is required for the same signal-to-noise ratio: 10*log10(2) = 3 dB in terms of light energy. True, summing two CCD cells each producing 1 volt of signal gives 2 volts, which is a 6 dB increase (20*log10 in the case of voltage), but the noise voltages from the 2 cells will also add (though incoherently), resulting in 3 dB more noise. The improvement in SNR is thus 6 - 3 = 3 dB if cells are combined. This reasoning applies if the summation is done before gamma correction. If done after, it's a different ballgame.
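(The dB arithmetic above checks out directly; the factor-of-two voltage and power ratios are from the post, the rest is just log identities:)

```python
import math

signal_gain_db = 20 * math.log10(2)   # summing two equal signal voltages: ~6.02 dB
noise_gain_db = 10 * math.log10(2)    # two incoherent noise powers add:    ~3.01 dB
snr_gain_db = signal_gain_db - noise_gain_db

print(round(signal_gain_db, 2), round(noise_gain_db, 2), round(snr_gain_db, 2))
# -> 6.02 3.01 3.01
```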

Thomas Smet March 21st, 2006 01:12 PM

How would that then work for SDI, which has no MPEG-2 compression? The 24F processing must happen before it even gets to the MPEG-2 stage, so it can split off to the SDI on one branch and the MPEG-2 encoder on the other. Are the DIGIC DSP and the MPEG-2 encoder both doing some of the same things for the MPEG-2 version?


Here is a good way to test the damn thing for those with the H1. Lock the camera down and use the remote control to zoom in and out. Record 60i and 24F a few times. Since the remote zooms at a locked speed, it should be pretty easy to match up a 24F and 60i version. This will allow us to compare exact motion between 24F and 60i. While at it, maybe somebody with a Decklink system could also do it over SDI to see what is going on. The chroma channels in 24F could tell us a lot. Try to shoot a scene with lots of detail and color, but one that will not change while you are zooming in and out.

Steve Mullen March 21st, 2006 05:25 PM

Quote:

Originally Posted by A. J. deLange
So my WAG is that Canon have done something like this which cleverly combines deinterleaving with MPEG encoding.

A note on dB: The improvement in SNR is thus 6 - 3 = 3 dB if cells are combined.

When we were waiting for the HD100, I wrote a very similar explanation of how Motion Smoothing could be done as part of MPEG-2 encoding. After all, we get the motion vectors for free. And we get them at 60Hz. Using them with some "movement logic" would allow the encoder to "re-position" objects to minimize object strobing when only 30fps is recorded.

But that means the results are only available after encoding! So I think, as was pointed out, the SDI output rules this out. Still, it seems only a matter of time for someone to use the encoder to do smart things to video. Especially with AVC, where objects are tracked very closely.

RE dB: so 6dB gain but only a 3dB increase in S/N. Right?

I found the logic error in my model that caused it to estimate decreased sensitivity in 24F mode. Bad logic = GIGO.

Will start a new clean 24F Thread tonight since this one is getting very messy. Thank you for participating!

A. J. deLange March 21st, 2006 05:36 PM

We know that the MPEG encoder is running when the SDI output is active, because one can pull a tape at the same time as taking output from SDI. So if I'm right (and I have no real reason to think I am) there should be no problem with SDI. The deinterleaving machine runs in either case and feeds parallel paths: one to the rest of the MPEG processor and the other to the SDI processor.

Roger on the dB.

Pete Bauer March 21st, 2006 09:20 PM

Quote:

Originally Posted by Steve Mullen
Will start a new clean 24F Thread tonight since this one is getting very messy. Thank you for participating!

Steve, please don't start a new thread on the same topic. There are far too many "How does 24F work?" threads already. Everyone, just stay on-topic (24F and 60i modes explained) and remain polite with each other.

Dan Vance March 21st, 2006 11:05 PM

Another 24F "Technique"?
 
Based on all the comments about how great the image is, perhaps there is no "deinterlacing" at all. Since there are 3 CCDs, in the 24F (48Hz) mode they could invert the phase of the clock on the GREEN CCD chip. Then the odd field (rows) of the RED and BLUE CCDs would "see" the same image at the same time as the even field of the GREEN CCD. Now every frame contains incomplete but accurate "progressive" image information. No "motion-sensing" required!

Then the signal processing consists of deriving some luminance info from the RED and BLUE and some chrominance info from the GREEN signals. Not 100% accurate, but simpler and probably a better image result than a motion-sensing deinterlace scheme.

Steve Mullen March 22nd, 2006 12:34 AM

Quote:

Originally Posted by Pete Bauer
Steve, please don't start a new thread on the same topic. There are far too many "How does 24F work?" threads already. Everyone, just stay on-topic (24F and 60i modes explained) and remain polite with each other.

OK -- if you'll re-enable my ability to EDIT my posts!

Chris Hurd March 22nd, 2006 12:41 AM

He doesn't control that function. I do. If you feel the need to revise a post after the window of time has expired for editing it, you can either post a follow-up indicating the revision, or contact me directly and I'll do it for you.

Steve Mullen March 22nd, 2006 02:59 AM

Quote:

Originally Posted by Dan Vance
Since there are 3 CCDs, in the 24F (48Hz) mode they could invert the phase of the clock on the GREEN CCD chip. Then the odd field (rows) of the RED and BLUE CCDs would "see" the same image at the same time as the even field of the GREEN CCD. Now every frame contains incomplete but accurate "progressive" image information.

Very interesting! The entire image is captured during one field time by either R & B OR G elements. Each row would get a luma sample from:

Even rows: R + B + G from row above

Odd rows: G + (R + B from row above)

I see a couple of issues:

1) Need R + 2G + B for Y

2) Not clear if the system will generate 800 TVL for static and 540 TVL for dynamic.
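(On point 1: the "R + 2G + B" shorthand can be compared numerically against the standard SD luma weights. A small sketch -- the shorthand is from the post above, the Rec. 601 coefficients are the standard definition:)

```python
def luma_rec601(r, g, b):
    # Standard-definition luma weights (ITU-R BT.601)
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_shorthand(r, g, b):
    # The "R + 2G + B" approximation, normalized to sum to 1
    return (r + 2 * g + b) / 4

# The two agree on neutral (gray) pixels but diverge on saturated
# colors, which is why a simple sum is only an approximation of Y.
gray_err = abs(luma_rec601(0.5, 0.5, 0.5) - luma_shorthand(0.5, 0.5, 0.5))
green_err = abs(luma_rec601(0.0, 1.0, 0.0) - luma_shorthand(0.0, 1.0, 0.0))
print(gray_err, green_err)
```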

John Cordell March 22nd, 2006 01:11 PM

Interesting. So would this be the moral equivalent of vertically green-shifting 1440x540? If so, are there known artifacts of green shifting that could be used to determine if this is the case? For example, on material that includes no green, it would seem you'd have visible loss of vertical resolution. A red or blue rez chart perhaps?

It does seem that if this were the case, Canon would have simply claimed progressive, rather than muddy the waters with their 24F nomenclature. As I recall, in their promo literature they did a strange thing where they sort of claimed true progressive for SD and then used slightly watered-down language to describe their HD 24F mode. I assumed it was because there are enough pixels in a single field to generate the SD frame. Seems like if they were comfortable with that, they'd have been OK calling a green-shifted field true progressive as well.

Dan Vance March 22nd, 2006 03:51 PM

I think if they had claimed True Progressive from interlaced CCDs, that would cause a credibility problem, regardless of the method used.

Chris Hurd March 22nd, 2006 07:28 PM

Quote:

Originally Posted by John Cordell
green shifting

Let's at least use the correct terminology please. It's called Pixel Shift. To refer to it as "green shifting" is both inaccurate and misleading. Yes it is the green CCD that is offset by one-half pixel, and yes this CCD receives one-half of all light coming into the CCD block. However, this misnomer "green shift" mistakenly implies that the color green itself is affected by the Pixel Shift process, which is not true, nor is it true that an absence of green in the image would affect the resolution.

The Pixel Shift process is a good thing, not a bad thing; it creates more sampling points per pixel, or in other words higher resolution. How much "green" there is has nothing to do with it. And it's interesting that Canon used a very similar technique several years before the Panasonic HVX200. The original Canon XL1 employed Pixel Shift in both axes to produce DV at 720x480, which is a 345,600-pixel matrix, from CCDs that had only 250,000 effective pixels each. Nobody made a big deal about that back in 1998, but now suddenly it's a federal case when Panasonic does the same thing with the HVX. The reason for all this pointless measurebating is that there are too many people talking about these cameras and not enough people actually using them.
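(A toy illustration of why a half-pixel offset raises effective resolution: two sample grids shifted by half a pixel interleave into one grid with twice the sampling density. A minimal 1-D sketch in NumPy -- the grid sizes are hypothetical, not any camera's actual geometry:)

```python
import numpy as np

n = 8                                    # photosites per sensor row (toy size)
grid_rb = np.arange(n) / n               # red/blue sample positions
grid_g = grid_rb + 0.5 / n               # green sensor, offset half a pixel

# Interleaved, the two grids give 2*n distinct sample positions,
# i.e. double the horizontal sampling density of either sensor alone.
combined = np.sort(np.concatenate([grid_rb, grid_g]))
print(len(np.unique(combined)))          # -> 16

# The XL1 arithmetic from the post: a 720x480 output frame
print(720 * 480)                         # -> 345600
```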

At any rate, please put that sophomoric term "green shift" out of its misery and call it what it is. Pixel Shift. That's how the industry refers to it... that's how any decent, self-respecting video geek should refer to it too.



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network