David Heath
February 11th, 2006 05:29 PM
I'd prefer to keep out of the 'which camera is better' debate, except to say that I'm sure all four of the comparable models will be found good enough to earn their respective owners money. But a few words of general theory regarding the whole concept of 'pixel shift' and resolution.
At first sight it sounds like magic - how can resolution be conjured out of nowhere, and how can a system deliver more resolution than its sensor possesses? The key is to realise that what it delivers is increased LUMINANCE resolution. What it effectively does (in a one-dimensional sense) is convert five three-colour pixels (say) into ten from the perspective of luminance alone. Hence instead of (R+G+B),(R+G+B),(R+G+B),(R+G+B),(R+G+B) it gives (R+B),(G),(R+B),(G),(R+B),(G),(R+B),(G),(R+B),(G) - ten (derived luminance) pixels instead of five.
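To make that interleave concrete, here's a minimal one-dimensional sketch in Python - my own illustration, not anything from the cameras' actual processing. The assumption is simply that the green chip is offset by half a pixel pitch from the red/blue pair, so the combined set of sampling positions is twice as dense for luminance purposes:

```python
import numpy as np

pitch = 1.0          # pixel pitch of each chip (arbitrary units)
n = 5                # pixels per chip along one line

rb_positions = np.arange(n) * pitch        # R and B chips sample here
g_positions = rb_positions + pitch / 2     # G chip shifted half a pitch

# Luminance is a weighted sum of R, G and B, so every position where ANY
# chip samples contributes a (partial) luminance reading.
luma_positions = np.sort(np.concatenate([rb_positions, g_positions]))

print(luma_positions)
# [0.  0.5 1.  1.5 2.  2.5 3.  3.5 4.  4.5]  - ten positions from five-pixel chips
```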
What it won't do is give the same level of chrominance resolution - but that's not normally important, as chrominance is recorded at lower resolution anyway and the eye is less sensitive to chrominance detail than to luminance detail. The technique also depends on the pixels being smaller than would otherwise be ideal: in theory the best results come when they are only half as wide as the inter-pixel spacing. If the pixels were made as wide as they are spaced (obviously desirable for sensitivity), pixel shift wouldn't theoretically work at all.
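That half-width condition can be checked with a quick calculation (again just an illustrative sketch, nothing camera-specific): a pixel of width w behaves as a box filter whose frequency response is sinc(f*w), and at the detail frequency the half-pitch shift is targeting - one cycle per original pixel pitch - a full-width pixel has a null while a half-width pixel still responds:

```python
import numpy as np

pitch = 1.0
f = 1.0 / pitch                      # detail frequency targeted by the shift

for w in (pitch, pitch / 2):
    mtf = abs(np.sinc(f * w))        # np.sinc(x) = sin(pi*x)/(pi*x)
    print(f"pixel width {w:.2f} x pitch -> response {mtf:.2f}")

# pixel width 1.00 x pitch -> response 0.00   (shift gains nothing)
# pixel width 0.50 x pitch -> response 0.64   (shift can work)
```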
It's easier to see how this all works in the horizontal dimension (and that's how the Z1 gets 1440 luminance samples from a 960-pixel-wide chip). The suspicion is that Panasonic have employed it vertically as well, and that's where the geometry gets interesting... (It also leads me to think that 4:2:2 recording there gives no advantage over 4:2:0 - the chroma resolution isn't there in the first place, for the reasons given earlier - but that's another story.)
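For what it's worth, the back-of-envelope numbers bear that last point out - though I should stress that the 960x540 chip size and the vertical shift for the Panasonic are my assumptions here, not anything confirmed:

```python
# Illustrative sample counts only; chip dimensions are assumed, not confirmed.
chip_w, chip_h = 960, 540                  # native photosites per chip (assumed)

luma_w, luma_h = chip_w * 2, chip_h * 2    # shift doubles luminance sampling
chroma_w, chroma_h = chip_w, chip_h        # chrominance stays at native chip res

print("camera captures chroma:", chroma_w, "x", chroma_h)       # 960 x 540
print("4:2:2 stores chroma   :", luma_w // 2, "x", luma_h)      # 960 x 1080
print("4:2:0 stores chroma   :", luma_w // 2, "x", luma_h // 2) # 960 x 540
# The extra vertical chroma that 4:2:2 preserves over 4:2:0 was never captured.
```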