DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Canon XL H Series HDV Camcorders (https://www.dvinfo.net/forum/canon-xl-h-series-hdv-camcorders/)
-   -   Canon's 24F and 60i modes explained (https://www.dvinfo.net/forum/canon-xl-h-series-hdv-camcorders/63270-canons-24f-60i-modes-explained.html)

Steve Mullen March 19th, 2006 09:39 PM

Canon's 24F and 60i modes explained
 
Using my math model that relates CCD resolution to measured resolution, I correctly predicted HVX200 CCD resolution to be 960x540. I've now used my model to try to understand 24F.

One key question is whether Canon uses the same deinterlacing process as Sony, or a different one.

Contrary to everything I've seen, the Sony does NOT simply discard one field and "bob" interpolate a new 1080-line frame. If it did, its CineFrame vertical resolution would be only about 360 TVL -- not 540 TVL. (The Sony's 50i/60i measures about 720 TVL because of row-pair summation performed in the CCD.)

I believe a "2D FIR (Finite Impulse Response) filter" is applied to one of the fields as part of the deinterlace process. These filters can have a small or large number of “taps,” where each tap is a sample. Current filters (interpolators) have up to 1024 taps, which would support a 32x32 window around each target pixel. The filter vertically scales the 960x540 image to a 960x1080 frame. For static video, effective vertical resolution is increased by 150%. The result is 1080-line “interlace” video (without interlace artifacts) with an effective vertical resolution of 540 TVL.
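To make the FIR idea concrete, here is a minimal Python/numpy sketch of vertically interpolating a 540-line field up to 1080 lines with a small 4-tap kernel. The tap count, kernel weights, and function name are my assumptions for illustration -- not anything Sony has published:

import numpy as np

def fir_upscale_field(field, kernel=(-0.0625, 0.5625, 0.5625, -0.0625)):
    # Vertically interpolate a 540xW field into a 1080xW frame.
    # The 4-tap kernel is a tiny stand-in for the large multi-tap
    # interpolators described above (assumed weights, not Sony's).
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = field                      # original field lines pass through
    padded = np.pad(field, ((1, 2), (0, 0)), mode='edge')
    k = np.asarray(kernel).reshape(-1, 1)
    for i in range(h):                       # synthesize the missing lines
        frame[2 * i + 1] = (padded[i:i + 4] * k).sum(axis=0)
    return frame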

Because of Green pixel-shift, Sony static horizontal resolution is about 560 TVL/ph while dynamic horizontal resolution is about 535 TVL/ph. (Typical resolution is 550 TVL/ph.) {From Adam's measures.}

We have two sets of 24F numbers for the Canon: Adam's worst-case (wobulation) tests put 24F resolution at 800x540, and Shannon's best-case STATIC 24F tests put it at 800x800.

These numbers cannot come from a Sony CF deinterlacer. But, they could come from a more advanced deinterlacer.

Canon’s 24F function likely uses frame-adaptive deinterlacing. The CCDs are interlace-scanned at 48Hz. Each scan yields a 540-line field that is sent to the deinterlacer, where logic is used to measure the motion between fields.

When there is little or no motion, the H1 performs a “weave” interpolation where odd and even fields are simply combined into one 1080-line frame. Effective vertical resolution on a static test will be about 810 TVL.

When there is motion between fields, one field is discarded and the H1 performs a "bob" interpolation to generate a 1080-line frame. Effective vertical resolution will be about 540 TVL.
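To make the weave/bob decision concrete, here is a minimal frame-adaptive sketch in Python/numpy. It is only an illustration of the general technique -- the motion metric, threshold value, and field handling are my assumptions, not Canon's actual logic:

import numpy as np

def deinterlace_frame_adaptive(odd_field, even_field, motion_threshold=8.0):
    # odd_field/even_field: 540xW arrays sampled 1/48 s apart.
    # Crude motion metric: mean absolute difference between the two fields.
    motion = np.mean(np.abs(odd_field.astype(float) - even_field.astype(float)))
    h, w = odd_field.shape
    frame = np.empty((2 * h, w), dtype=float)
    if motion < motion_threshold:
        # Weave: interleave both fields -> full static vertical resolution.
        frame[0::2] = odd_field
        frame[1::2] = even_field
    else:
        # Bob: discard one field, interpolate the other up to 1080 lines.
        # (np.roll wraps at the bottom edge; good enough for a toy sketch.)
        frame[0::2] = odd_field
        frame[1::2] = (odd_field + np.roll(odd_field, -1, axis=0)) / 2.0
    return frame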

In order for this process to work as described, each scanned 540-line field must have 540 TVL of resolution, and a frame must have 1080 TVL. Were row-pair summation DISABLED, this is exactly the resolution that would be generated by interlace scanning. In 1080i60 mode, row-pair summation reduces effective vertical (frame) resolution to about 710 TVL.

Disabling row-pair summation should reduce sensitivity by 6dB (1 stop). That's not the case with the H1. In fact, sensitivity is slightly increased by the switch from a 1/60th shutter to a 1/48th shutter speed. So although I can explain the deinterlacing process -- I cannot understand how the H1 interlace-scans its CCDs so as not to lose 1 stop.

NOTE 1: Row-pair summation obtains 6dB of gain because it "adds" pairs of CCD rows -- typically within the CCD. Because the pairs of rows are in slightly different vertical locations, the "addition" acts as a spatial filter. If the rows were not in different locations, no resolution would be lost and yet 6dB of gain would still be obtained.
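For reference, the arithmetic behind that 6dB figure (a one-line check, since CCD signal voltage is linear with light):

from math import log10

gain_db = 20 * log10(2.0)  # summing two equal rows doubles the signal voltage
print(round(gain_db, 2))   # 6.02 dB -- the 1 stop referred to above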

NOTE 2: If the static resolution measures were wrong -- then there is no puzzle! Were the 24F vertical resolution about 720 TVL rather than 800 TVL, row-pair summation could be fine. (I see only 750 TVL from Shannon's chart.) Which is why it would be nice to have Adam's static measure of 24F. My model really needs measures from the same chart.

NOTE 3: If we assume each incoming field has only about 360 TVL -- is there another deinterlacing model that yields 540 TVL (dynamic) and 800 TVL (static)? I'm VERY open to being wrong on HOW the deinterlacing is done.

Because of Green pixel-shift, for both 60i and 24F, static horizontal resolution is, in theory, up to 830 TVL/ph. The lens MTF and CCD MTF, plus the anti-aliasing filter, appear to limit horizontal resolution to about 810 TVL/ph. My model estimates that, for both 60i and 24F, dynamic horizontal resolution is 800 TVL/ph.

By the way, my model now estimates with less than 1-pixel AVERAGE error the performance of ALL the low-cost HD camcorders, plus the VARICAM. (Pointers to additional HDCAM and CineAlta measures are welcome.)

John Cordell March 20th, 2006 02:26 AM

Thanks Steve, I was hoping your expertise would weigh in on what's under the hood for 24f. Much appreciated.

Barlow Elton March 20th, 2006 09:29 AM

Thanks Steve! I felt the winds from the east blowing pretty hard from that turbo propeller on top of your beanie workin' overtime to come up with that conclusion!

For a mere mortal such as myself, your model would seem to be accurate. The only thing I think you might have forgotten to mention that makes 24F work is that the CCDs are likely clocked at 48Hz for 24F. This is probably a crucial aspect of the mode that makes the motion look right. I've done a number of "smart deinterlaces" in converting 60i to 24p and they've never looked quite as good or consistent as 24F does.

Thanks again!

Barlow Elton March 20th, 2006 11:59 AM

btw, Steve, I'm not sure how much time you've put behind the wheel of the H1, but every real-world shoot with an HD monitor I've done in a controlled environment showed to my human eyeballs that 24F is actually *slightly* more sensitive than 60i.

1/60th in 60i at F 2.0 and 1/60th in 24F at F 2.0 on a MacBeth chart doesn't show any perceivable difference in sensitivity. And if you switch to 1/48th in 24F you get a little more sensitivity, which is what most people would shoot 24F at anyway.

I want to understand how industry professionals like Scott Billups and people at Cineform and FotoKem have publicly used words like "awesome" to describe the quality of the H1's 24F when transferred to film, and in direct comparison to the F900. If the H1 isn't resolving enough TVL for film-outs and other high-end purposes, then what's going on here?

Is it just not "progressive" enough to be useable?

http://cineform.blogspot.com/

Robert Sanders March 20th, 2006 12:29 PM

I think we fall victim to catch words and tech phrases.

"It's not really really 'progressive'", therefore it must be bad.

The footage looks great. The resolution looks great. The motion looks great. The film outs look great. But it's not really really progressive.

I find many forums, threads and message boards to be very hostile toward Canon for some reason.

I guess I better grab a cup of coffee. I'm cranky this morning.

Barlow Elton March 20th, 2006 01:03 PM

Quote:

Originally Posted by Robert Sanders
I think we fall victim to catch words and tech phrases.

True dat.

Quote:

"It's not really really 'progressive'", therefore it must be bad.
Funny, isn't it? No matter how good the result some want the security of a buzzword.

Quote:

The footage looks great. The resolution looks great. The motion looks great. The film outs look great. But it's not really really progressive.
It's like..."it depends on what the meaning of the word IS is"

Quote:

I find many forums, threads and message boards to be very hostile toward Canon for some reason.
Again, curious, isn't it? Either you have the MOJO of Panny or the ahh gee isn't it cool that JVC made a neat camera? You know, if Canon had something like SSE they would be CRUCIFIED and ridiculed by a good many that participate on the forums. Instead JVC gets treated with kid gloves, even by guys like Steve Mullen...and they're a MAJOR MANUFACTURER!!

Funny how Canon is perceived as a brand when it comes to video. There's a lot of love/hate rather than objective talk. I dunno, I just go by what I see and evaluate the images and the features, then debate the tradeoffs, then make my move.

Imagine if the HVX had SDI? Would it not be shouted from the mountaintops?

Quote:

I guess I better grab a cup of coffee. I'm cranky this morning.
I made mine extra strong this morning. Only made me crankier.

Barlow Elton March 20th, 2006 01:13 PM

Steve, how do you explain this?
 
https://i.cmpnet.com/dv/video/4cam-1080cams.mov.zip

I believe it's handling the weeble wobble fairly well.

Robert Sanders March 20th, 2006 01:31 PM

Well Barlow, it's easy for you and me to be written off as sour grapes because we've made our decisions and placed our money on the H1.

Again, if there are some technical inferiorities to the camera then I'm willing to accept them. What I'm having difficulty with is the "tone" of the conversations in the H1 forums as opposed to the love fests elsewhere.

And you're dead right about the HD-SDI issue. It would've been universally heralded as brilliant foresight by Panasonic if it had been included in the HVX (same goes for the JVC).

It's only a matter of time before some clever manufacturers figure out how to take advantage of that HD-SDI tap (in a small form factor) and the Canon suddenly becomes truly codec-agnostic (in many ways it already is).

Interchangeable lenses. Meh.
Full raster 4:2:2 HD-SDI. Meh.
Full resolution 1080 chip. Meh.
24F, 30F, 60i. Meh.
Hundreds of accessories pre-built for it. Meh.

BUT THE HVX CAN DO 60P. WOOT!

John Benton March 20th, 2006 03:03 PM

Quote:

Originally Posted by Barlow Elton
https://i.cmpnet.com/dv/video/4cam-1080cams.mov.zip

I believe it's handling the weeble wobble fairly well.


Yes,
but look, you can see the lens aberration (blue & red borders) around the black line on the left on the H1.

Yes, forums are love fests. And to a certain extent they should be:
ie. I LOVE My Camera.
And hey, we all want to be able to hallucinate our dream camera; especially before we get our hands on it and after we have Paid for it.
But one of the reasons I trust this board at DVinfo is that it seems to be quite level-headed.

J - (I Love My Camera)

Robert Sanders March 20th, 2006 04:02 PM

The CA of the stock Canon lens needs to be addressed by Canon.

Steve Mullen March 20th, 2006 04:35 PM

Quote:

Originally Posted by Barlow Elton
... every real-world shooting with an HD monitor I've done in a controlled environment showed to my human eyeballs that 24F is actually *slightly* more sensitive than 60i.

YOU ARE CORRECT. I had read early posts that claimed the opposite, but I went to the Canon site to confirm the small INCREASE in sensitivity. That led me to alter my first post. And, we now have a puzzle.

I have no idea what then led you into a rant that has NOTHING to do with my post. And, look at the mess that followed! We are talking about the mechanism by which the H1 gets its results. We are NOT talking about the results themselves! We are NOT comparing camcorders!

Let's get back on topic.

Barlow Elton March 20th, 2006 04:42 PM

Quote:

Originally Posted by Robert Sanders
The CA of the stock Canon lens needs to be addressed by Canon.

No doubt. But then again, this is where Canon will be Canon. What? CA?! Fringing?! Not on our camera!! Not with OUR lenses!!

What are your settings? Oh, that's just internet forums...there's no problem.

Try calling them and you'll see. I like a lot of people at Canon USA, but I hate the smug Sony-esque arrogance. If they want to keep customer goodwill they need to tell us what's really going on and whether it's fixable. Again, I don't think it's show-stopping, but I hate denial/obfuscation of issues.

JVC seems to be doing right by their customers. At the very least, for $9K, they could tell us the good and the bad news. I'm fairly sure they have the ability to do firmware updates via SD cards. Let's hope the growing chorus of complaints over this anomaly will motivate them somehow.

If noise won't do it, I'm sure a financial hit will.

Steve Mullen March 20th, 2006 06:14 PM

Barlow, Robert, and John -- your posts are TOTALLY OT.

Are you guys so unable to resist posting that you'll use any thread to carry on your debate on topics that clearly have nothing to do with 24F?

Barlow Elton March 20th, 2006 06:27 PM

Ok, Steve, fair enough. I do appreciate what you bring to the table, but you must understand that even the tone of a post like yours would almost seem to add to the FUD that seems to be swirling around. It can get a little frustrating as an H1 owner because it seems like there's always a lot of silly naysaying going on.

Thank you for the time you put into solving the riddles of these cameras.

Could it possibly be that somehow Canon actually has a progressive image to begin with?

Chris Hurd March 20th, 2006 07:31 PM

Quote:

Originally Posted by Barlow Elton
Could it possibly be that somehow Canon actually has a progressive image to begin with?

To begin with? No. But to end with -- practically. For all practical purposes, Frame mode is progressive scan, in that it produces the same visual results as progressive scan. As Robert points out above, how it looks is far more important than how it happens. The operator's manual for the XL H1 actually refers to Frame mode as the progressive mode. For those who remain fixated on whether or not it's "true progressive," the real question is, what difference does it make? Not much, apparently. And that's all that should really matter.

Barlow Elton March 20th, 2006 09:45 PM

What he said!!

Leave it to Chris to just swoop in and make a salient point!

I totally appreciate what Steve is trying to get to the bottom of, but in the end all anyone worth their salt with one of these cameras cares about is the result of the mechanism. Yes, I do indeed care about what that mechanism actually is, but only so much. In the end, do the results sell or not?

It still seems like there's something very good but still unexplainable about 24F. Maybe Canon is working on a patent for it and it's a relatively new deinterlace technique.

btw Steve, I think MPEG Streamclip has a "2D FIR (Finite Impulse Response) filter" of sorts for deinterlacing and conversion. I'd like to try that with 50i H1 material and see if the results are anything close to 25F mode. The only problem is I'm pretty sure the program doesn't offer all the motion adaptive deinterlacing that Compressor does, but I'm not sure if Compressor has a 2D FIR.

I'll look into it, but please keep us posted on your further research.

John Cordell March 20th, 2006 10:06 PM

Quote:

Originally Posted by Barlow Elton
Ok, Steve, fair enough. I do appreciate what you bring to the table, but you must understand that even the tone of a post like yours would almost seem to add to the FUD that seems to be swirling around. It can get a little frustrating as an H1 owner because it seems like there's always a lot of silly naysaying going on.

Just want to point out that I'm an H1 owner and I didn't detect a shred of 'tone' in Steve's original post. Seemed like pretty straightforward tech analysis to me. I see Steve's contribution, in both facts and tone, as anti-FUD in nature.

I'm not negating Barlow's reading of Steve's post, just pointing out how mine differed. Hoping more for a cancelling-out effect!

Robert Sanders March 20th, 2006 11:54 PM

If my responses have come across as a reaction to Steve's post, then I apologize. That was not my intention.

I think my "rant" was more in response to a general consensus over several years that there seems to be an anti-Canon bias in the filmmaking community. Which I don't understand.

A. J. deLange March 21st, 2006 07:35 AM

Deinterleaving is indeed frequently done with FIR filters, though often in the "vertical temporal" domain, and I wouldn't be too surprised to discover that Canon has come up with some new twist on this that they wish to keep proprietary. Since we are free to speculate, here's my particular guess. Successful deinterleaving depends on being able to look at a small part of the image and tell whether it has moved from sampling instant to sampling instant, and how far it has moved vertically and horizontally, so that the lower field can be shifted into alignment with the upper. Does this ring a bell? It should, because that is exactly what an MPEG encoder needs to do. The difference is that with encoding you compute the residual and send that along with the motion vector, whereas with deinterleaving you shift the moved part back into alignment with the other field.

Now, thinking about how to measure movement, it occurred to me that if you take two successive upper fields, DFT (not DCT) them, and conjugate-multiply the results, the phase of the product will give you the offsets (integrate vertically and do a linear fit for phase -- the slope of the phase is the horizontal offset; then do the same for the vertical offset). Multiplying by the conjugate phase slopes and taking the inverse transform (some issues with edge effects, possibly solvable by proper zero stuffing) will shift the second field by the amount of movement, so parts that really did move will be on top of where they were in the first, and parts that didn't will be misaligned. Taking the difference between the two separates the moving from non-moving parts (the difference is small where the model is good and larger where it isn't), so that the moving parts of the recorded lower field can now be shifted by half the measured difference and combined with an unshifted copy of the parts that didn't move to generate a lower field with the moving parts in the right places.

This is equivalent to "weave" where there is no motion and to "bob" where there is, except that it suggests both can be done at once and that "bob" can be adaptively weighted to the amount of motion, as opposed to using a fixed set of coefficients as is apparently the practice. Note that the frequency and spatial domains are dual, so my guess could be implemented in either domain, but time-varying coefficients would be required in the latter.
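That DFT-and-conjugate-multiply recipe is essentially what's known as phase correlation. Here is a compact numpy sketch of estimating the global shift between two successive upper fields that way -- my own illustration of the general method, using the inverse-transform peak rather than an explicit linear fit to the phase slope, and of course not anything Canon has confirmed:

import numpy as np

def estimate_shift(field_a, field_b):
    # Estimate the (dy, dx) translation between two fields.
    F_a = np.fft.fft2(field_a)
    F_b = np.fft.fft2(field_b)
    cross = F_a * np.conj(F_b)               # conjugate multiply
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # peaks at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    h, w = field_a.shape
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx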

So my WAG is that Canon have done something like this, which cleverly combines deinterleaving with MPEG encoding (note that the DCT, which is required for MPEG encoding, is simply the real part of the DFT).

A note on dB: A one-stop decrease in sensitivity means twice the light is required for the same signal-to-noise ratio: 10*log(2) = 3 in terms of the light energy. True, summing two CCD cells each producing 1 volt of signal gives 2 volts, which is a 6 dB (20*log in the case of voltage) increase, but the noise voltages from the 2 cells will also add (though incoherently), resulting in 3 dB more noise. The improvement in SNR is thus 6 - 3 = 3 dB if cells are combined. This reasoning applies if the summation is done before gamma correction. If done after, it's a different ballgame.
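For anyone who wants to verify those numbers, a quick check (assuming the noise in the two cells is uncorrelated):

from math import log10, sqrt

signal_db = 20 * log10(2.0)       # voltages add coherently: +6.02 dB
noise_db = 20 * log10(sqrt(2.0))  # noise powers add, so noise voltage grows by sqrt(2): +3.01 dB
print(round(signal_db - noise_db, 2))  # net SNR improvement: ~3 dB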

Thomas Smet March 21st, 2006 01:12 PM

How would that then work for SDI, which has no MPEG-2 compression? The 24F processing must happen before it even gets to the MPEG-2 stage, so it can split off to the SDI on one branch and the MPEG-2 encoder on the other branch. Are the DIGIC DSP and MPEG-2 encoder both doing some of the same things for the MPEG-2 version?


Here is a good way to test the damn thing for those with the H1. Lock the camera down and use the remote control to zoom in and out. Record 60i and 24F a few times. Since the remote zooms at a locked speed, it should be pretty easy to match up a 24F and 60i version. This will allow us to compare exact motion between 24F and 60i. While at it, maybe somebody with a Decklink system could also do it with SDI to see what is going on. The chroma channels on 24F could tell us a lot. Try to shoot a scene with lots of detail and color, but one that will not change while you are zooming in and out.

Steve Mullen March 21st, 2006 05:25 PM

Quote:

Originally Posted by A. J. deLange
So my WAG is that Canon have done something like this which cleverly combines deinteleaving with MPEG encoding.

A note on dB: The improvement in SNR is thus 6 - 3 = 3 dB if cells are combined.

When we were waiting for the HD100, I wrote a very similar explanation of how Motion Smoothing could be done as part of MPEG-2 encoding. After all, we get the motion vectors for free. And we get them at 60Hz. Using them with some "movement logic" would allow the encoder to "re-position" objects to minimize object strobing when only 30fps is recorded.

But that means the results are only available after encoding! So I think, as was pointed out, the SDI output rules this out. Still, it seems only a matter of time before someone uses the encoder to do smart things to video -- especially with AVC, where objects are tracked very closely.

RE dB: so 6dB gain but only a 3dB increase in S/N. Right?

I found the logic error that caused my model to estimate decreased sensitivity in 24F mode. Bad logic = GIGO.

Will start a new clean 24F Thread tonight since this one is getting very messy. Thank you for participating!

A. J. deLange March 21st, 2006 05:36 PM

We know that the MPEG encoder is running when the SDI output is active, because one can pull a tape at the same time he is taking output from SDI. So if I'm right (and I have no real reason to think I am) there should be no problem with SDI. The deinterleaving machine runs in either case and feeds parallel paths: one to the rest of the MPEG processor and the other to the SDI processor.

Roger on the dB.

Pete Bauer March 21st, 2006 09:20 PM

Quote:

Originally Posted by Steve Mullen
Will start a new clean 24F Thread tonight since this one is getting very messy. Thank you for participating!

Steve, please don't start a new thread on the same topic. There are far too many "How does 24F work?" threads already. Everyone, just stay on-topic (24F and 60i modes explained) and remain polite with each other.

Dan Vance March 21st, 2006 11:05 PM

Another 24F "Technique"?
 
Based on all the comments about how great the image is, perhaps there is no "deinterlacing" at all. Since there are 3 CCDs, in the 24F (48Hz) mode they could invert the phase of the clock on the GREEN CCD chip. Then the odd field (rows) of the RED and BLUE CCDs would "see" the same image at the same time as the even field of the GREEN CCD. Now every frame contains incomplete but accurate "progressive" image information. No "motion-sensing" required!

Then the signal processing consists of deriving some luminance info from the RED and BLUE and some chrominance info from the GREEN signals. Not 100% accurate, but simpler and probably a better image result than a motion-sensing deinterlace scheme.
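If this scheme is right, the row geometry would look something like the toy numpy sketch below -- purely illustrative; the field/row assignment and names are assumptions, and deriving full luma and chroma for every row (the hard part mentioned above) is left out:

import numpy as np

def assemble_frame(rb_odd_field, g_even_field):
    # rb_odd_field: combined R+B samples from the odd rows
    # g_even_field: G samples from the even rows, exposed at the SAME
    # instant because the green CCD's field clock is phase-inverted.
    h, w = rb_odd_field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = g_even_field   # even rows: green samples
    frame[1::2] = rb_odd_field   # odd rows: red+blue samples
    # Every row was exposed simultaneously, so no motion-adaptive
    # deinterlacing is needed -- but each row has only partial color info.
    return frame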

Steve Mullen March 22nd, 2006 12:34 AM

Quote:

Originally Posted by Pete Bauer
Steve, please don't start a new thread on the same topic. There are far too many "How does 24F work?" threads already. Everyone, just stay on-topic (24F and 60i modes explained) and remain polite with each other.

OK -- if you'll re-enable my ability to EDIT my posts!

Chris Hurd March 22nd, 2006 12:41 AM

He doesn't control that function. I do. If you feel the need to revise a post after the window of time has expired for editing it, you can either post a follow-up indicating the revision, or contact me directly and I'll do it for you.

Steve Mullen March 22nd, 2006 02:59 AM

Quote:

Originally Posted by Dan Vance
Since there are 3 CCDs, then in the 24F (48Hz) mode, they could invert the phase on the clock on the GREEN CCD chip. Then the odd field (rows) of the RED and BLUE CCDs would "see" the same image at the same time as the even field of the GREEN CCD. Now every frame contains incomplete but accurate "progressive" image information.

Very interesting! The entire image is captured during one field time by either R & B OR G elements. Each row would get a luma sample from:

Even rows: R + B + G from row above

Odd rows: G + (R + B from row above)

I see a couple of issues:

1) Need R + 2G + B for Y

2) Not clear if the system will generate 800 TVL for static and 540 TVL for dynamic.
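On issue 1, for reference, here is the standard HD luma weighting next to the rough "R + 2G + B" shorthand (whether the H1 applies Rec. 709 weights at this stage is an assumption):

def luma_709(r, g, b):
    # Rec. 709 luma weights, the standard for HD video.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma_rough(r, g, b):
    # The 'R + 2G + B' shorthand, normalized -- green counted twice,
    # which is why losing half the G samples in any one field time
    # matters so much for luma resolution.
    return (r + 2.0 * g + b) / 4.0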

John Cordell March 22nd, 2006 01:11 PM

Interesting. So would this be the moral equivalent of vertically green-shifting 1440x540? If so, are there known artifacts of green shifting that could be used to determine if this is the case? For example, on material that includes no green, it would seem you'd have a visible loss of vertical resolution. A red or blue rez chart perhaps?

It does seem that if this were the case, Canon would have simply claimed progressive rather than muddying the waters with their 24F nomenclature. As I recall, in their promo literature they did a strange thing where they sort of claimed true progressive for SD and then used slightly watered-down language to describe their HD 24F mode. I assumed it was because there are enough pixels in a single field to generate the SD frame. Seems like if they were comfortable with that, they'd have been OK calling a green-shifted field true progressive as well.

Dan Vance March 22nd, 2006 03:51 PM

I think if they had claimed True Progressive from interlaced CCDs, that would cause a credibility problem, regardless of the method used.

Chris Hurd March 22nd, 2006 07:28 PM

Quote:

Originally Posted by John Cordell
green shifting

Let's at least use the correct terminology please. It's called Pixel Shift. To refer to it as "green shifting" is both inaccurate and misleading. Yes it is the green CCD that is offset by one-half pixel, and yes this CCD receives one-half of all light coming into the CCD block. However, this misnomer "green shift" mistakenly implies that the color green itself is affected by the Pixel Shift process, which is not true, nor is it true that an absence of green in the image would affect the resolution.

Pixel Shift is a good thing, not a bad thing: it creates more sampling points per pixel, or in other words higher resolution. How much "green" there is has nothing to do with it. And it's interesting that Canon used a technique several years ago very similar to what the Panasonic HVX200 does now. The original Canon XL1 employed Pixel Shift in both axes to produce DV, at 720x480 -- a 345,600-pixel matrix -- from CCDs that had only 250,000 effective pixels each. Nobody made a big deal about that back in 1998, but now suddenly it's a federal case when Panasonic does the exact same thing with the HVX. The reason for all this pointless measurebating is that there are too many people talking about these cameras and not enough people actually using them.

At any rate, please put that sophomoric term "green shift" out of its misery and call it what it is. Pixel Shift. That's how the industry refers to it... that's how any decent, self-respecting video geek should refer to it too.

John Cordell March 22nd, 2006 07:42 PM

I'm cool with not using the term 'green shifting', but I'm not sure you get that I was using it to capture the fact that the green CCD would be clocked out of phase with the red and blue CCDs. That's not pixel shift; it's a different thing. I'm fine not coining a phrase for it, though. It wasn't my idea, though it is a very interesting and cool one.

Are you sure the absence of green doesn't affect resolution? If the green CCD sees no light, how can the fact that it's offset by half a pixel offer any luma resolution gain? The idea of pixel shift is that the offset grid of information allows you to derive additional luma.

I'm fine not using 'green shift' anymore. But sophomoric doesn't quite capture the thinking behind it.

Chris Hurd March 22nd, 2006 08:06 PM

Well, I'm just making a strong suggestion, but certainly no mandate. Call it whatever you want to call it. What I meant is that the color green doesn't affect it as much as one might think. Not every pixel is a chroma pixel. You're not getting color information out of every pixel.
Quote:

I'm not sure you get that I was using it to capture the fact that the green CCD would be clocked out of phase with the red and blue CCDs... not pixel shift, it's a different thing.
You're right; I'm sorry. I blew right over that part when I saw that non-word "green shift" and had to go ballistic on you. My apologies. Sure it needs a term and no doubt somebody will coin one.

If you wanted to represent a curve on a piece of graph paper, and if you were limited to putting say twenty pencil points on that graph to draw the curve, you could do it but it would be somewhat stair-stepped when you connect those dots. If you had forty pencil points to put on that graph, then you get a more accurate curve... in fact you could say a higher resolution curve because you've got more information going into the representation of that curve. That's what Pixel Shift does, it gives you a much smoother curve because you've got more dots to put on that graph (sorry for the overly simplified description). Never mind that it's the green CCD. That's not the point. The point is what Pixel Shift does, and not the color of the CCD that's offset. That's all I'm trying to get across.

Many folks don't realize that a CCD (charge-coupled device) can't see color anyway and isn't digital to begin with. A CCD is a monochromatic, analog device. The people who aren't aware of that, those who simply use these cameras to create compelling and meaningful content and who prefer to talk about that instead of tiny electronic innards, are the ones I wish I had more of around here.

John Cordell March 22nd, 2006 10:36 PM

Quote:

Originally Posted by Chris Hurd
You're right; I'm sorry. I blew right over that part when I saw that non-word "green shift" and had to go ballistic on you. My apologies. Sure it needs a term and no doubt somebody will coin one.

Thanks for the nice apology -- completely accepted. And I know exactly where you're coming from in terms of an inaccurate name/term getting a grip and then taking off, so no worries at all. My reaction might have been exactly the same had the shoe been on the other foot. "Green shift" sucks as a name for what we're talking about anyway, so dropping it is all good all around. Something like "phase-inverted interlaced scanning" would make way more sense anyway.

I have read, by the way, that because of green's special status in how the human eye processes color, it is the best of the three colors to shift, and that choosing to offset the green isn't a random choice, although luma gains are to be had with shifting any of the three. One thing I've never seen addressed is what the theoretical gain would be if all three CCDs were shifted relative to each other by 1/3 of a pixel.

On a side note, I would guess that green is special because it's the color of chlorophyll, and no doubt our early ancestors needed to be good at spotting predators in dense foliage, or maybe just needed to be good at figuring out which leaves to eat!

Steve Mullen March 22nd, 2006 10:38 PM

I had hoped I could simply clean up my first post -- which has several errors. Nevertheless, here are my corrections. Thankfully, essentially everything I said was true, except the numbers were wrong.

1) Sony:
Contrary to everything I've read, the Sony does NOT simply discard one field and "bob" interpolate a new 1080-line frame in its CineFrame modes. If it did, its CineFrame vertical resolution would be only about 405 TVL -- not 540 TVL.

I believe a "2D FIR (Finite Impulse Response) filter" is applied to one field of each frame as part of the deinterlace process. These filters can have a small or large number of “taps,” where each tap is a sample. Current filters (interpolators) have up to 1024 taps, which would support a 32x32 window around each target pixel. The filter vertically scales a 960x540 field to a 960x1080 frame. Effective vertical resolution is increased by 1.4X. The result is 1080-line “interlace” video (without interlace artifacts) with an effective vertical resolution of 540 TVL.

2) Canon:

Canon’s 24F function likely uses Motion Adaptive deinterlacing. The CCDs are interlace-scanned at 48Hz. Because row-pair summation is employed, CCD sensitivity remains constant, and a pair of 405 TVL fields is sent to the deinterlacer, where logic is used to measure motion between fields.

For static frames, a “weave” is employed, combining both fields and thus yielding up to 810 TVL. (Shannon measured 800 TVL.) Because information from different moments in time may be combined, moving objects will have combing on their edges. A second-stage, isotropic filter is necessary to blend pixels at the edges of moving objects in order to reduce combing. The eye will likely not notice the blend, because we expect moving objects to be slightly blurred.

For dynamic frames, a "2D FIR" filter is employed that scales a single 1440x540 field to a 1440x1080 frame. In the process, effective vertical resolution is increased by approximately 1.4X. The result is 540 TVL video.

Either Frame Adaptive or Region Adaptive deinterlacing could be used. Resolution measures will not reveal which is used. A frame-based system makes each deinterlace mode decision for an entire frame.

A region-based deinterlacer is far more complex. The smaller the region, the more total image resolution is maximized. Under real-world conditions a region-based deinterlacer delivers an image where only regions with movement lose vertical resolution. The eye will likely not notice this effect, because we expect moving objects to be slightly blurred.
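To illustrate the frame- vs. region-based distinction, here is a per-block variant of a weave/bob deinterlacer in Python/numpy. Again, this is only a generic sketch -- the block size and motion threshold are assumed values, and Canon's real deinterlacer is surely more sophisticated:

import numpy as np

def deinterlace_region_adaptive(odd_field, even_field, block=16, threshold=8.0):
    # Per-block weave/bob: only blocks with inter-field motion fall back
    # to line interpolation; static blocks keep full woven detail.
    h, w = odd_field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = odd_field
    woven = even_field.astype(float)
    odd = odd_field.astype(float)
    bobbed = (odd + np.roll(odd, -1, axis=0)) / 2.0
    inter = frame[1::2]                      # view of the lines to fill in
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = odd[y:y + block, x:x + block]
            b = woven[y:y + block, x:x + block]
            moving = np.mean(np.abs(a - b)) >= threshold
            src = bobbed if moving else woven
            inter[y:y + block, x:x + block] = src[y:y + block, x:x + block]
    return frame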

NOTE: Because of pixel-shift, for both 60i and 24F, and both static and dynamic conditions, horizontal resolution is, according to my model, 820 TVL/ph. The lens MTF and CCD MTF, plus the anti-aliasing filter appear to limit horizontal resolution to about 800 TVL/ph. Or, the charts are limiting the measurements.

Canon's deinterlace process generates 24fps video with as much quality as is possible (within a mobile device) given that the video is obtained from interlace scanning. Deinterlacing technology, like most video processing (pixel-shift, digital noise reduction, compression), cannot deliver consistently optimum quality under ALL conditions. Nevertheless, the more sophisticated the process, the greater the level of quality and the more consistent the results.

Understanding HOW the H1 deinterlacer works fully supports subjective reports that the Canon "looks better" than the Sony (in CF25), although they both have 1080-row CCDs and both measure the same on Adam's tests.

Thomas Smet March 22nd, 2006 11:30 PM

Hey here is an interesting thought to build on the HVX200.

Maybe somebody should make an HD camera with six CCDs. Yes, I said six.

2 for green, 2 for blue, 2 for red.

One chip in each of the R, G, B pairs is pixel-shifted by 1/2 pixel horizontally and vertically.

You now have an HD camera using 960x540 chips but yielding exactly 1920x1080 unique points for R, G, and B.

No variation in detail based on green chroma, and no debates about whether you can get a 4:2:2 image this way. All full-raster points have an exact RGB match no matter what color the pixel is.

I think six 960x540 chips would be cheaper than three 1920x1080 chips, not to mention help with the limitations of 1/3" chips.
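Here is a toy numpy sketch of that sampling geometry for one color channel -- purely illustrative, assuming perfect half-pixel registration. Note that the two offset grids directly supply half of the 1920x1080 lattice; the two remaining sample phases are interpolated here:

import numpy as np

def interleave_shifted_pair(chip_a, chip_b):
    # chip_a, chip_b: 540x960 sensors of the same color, chip_b offset
    # by 1/2 pixel horizontally and vertically relative to chip_a.
    h, w = chip_a.shape
    full = np.zeros((2 * h, 2 * w), dtype=float)
    full[0::2, 0::2] = chip_a    # unshifted sample phase
    full[1::2, 1::2] = chip_b    # diagonally shifted sample phase
    # The other two phases carry no direct sample; average neighbors.
    full[0::2, 1::2] = (chip_a + np.roll(chip_a, -1, axis=1)) / 2.0
    full[1::2, 0::2] = (chip_a + np.roll(chip_a, -1, axis=0)) / 2.0
    return full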

Anyways sorry for getting OT.

Christopher Glaeser March 23rd, 2006 12:47 AM

Quote:

Originally Posted by Thomas Smet
I think six 960x540 chips would be cheaper than three 1920x1080 chips ...

Does this require a six-way beam splitter? Does this reduce the light at each sensor by a full stop?

Best,
Christopher

Daniel Epstein March 23rd, 2006 01:38 PM

I believe JVC uses a split-readout CCD technique on their 100 camera. The effect they have had to deal with is called SSE. There is a difference in the exposures on some shots, left to right. Some people say they have solved this with the latest update. It certainly is easier to manufacture chips with fewer pixels, but it is not so easy to make them act as a single one. Of course, computers have been moving to dual processors for some of the same reasons.



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network