David Heath
August 20th, 2012, 03:43 PM
The resolution of a sensor with n pixels in the horizontal direction and m in the vertical is n/2 lp horizontally and m/2 lp vertically. Period.
Not so simple (assuming we're talking about colour, and a Bayer sensor), and the easiest way to illustrate why not is with the example of the C300.
In this case, a Bayer block of 2x2 photosites is treated as a single resolution unit, with values for R, G, and B. To get resolution matching the 1080 system, it follows there must be 1920x1080 blocks - so 4x1920x1080 photosites in total (8,294,400 - we'll call it "8 megapixel"). The beauty of the Canon approach is that not only will it give full 1080 resolution for R, G, and B (so 4:4:4 off the front end), but it will get it with very simple processing - no deBayering etc. Each block corresponds to a single output pixel of the image.
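As a rough illustration of that direct block readout, here's a minimal Python sketch. It assumes an RGGB layout and uses a simulated mosaic rather than real sensor data:

import numpy as np

# C300-style direct block readout: each 2x2 Bayer block (R, G, G, B)
# becomes one full RGB output pixel, so a 1920x1080 output needs a
# 3840x2160 photosite array (8,294,400 sites).
H_OUT, W_OUT = 1080, 1920

# Simulated Bayer mosaic, RGGB layout assumed: R at (0,0), G at (0,1)
# and (1,0), B at (1,1) within each 2x2 block.
mosaic = np.random.rand(2 * H_OUT, 2 * W_OUT)

r = mosaic[0::2, 0::2]                              # one red sample per block
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # average the two greens
b = mosaic[1::2, 1::2]                              # one blue sample per block

rgb = np.stack([r, g, b], axis=-1)                  # full 4:4:4 1080p frame
print(rgb.shape)                                    # (1080, 1920, 3)
print(mosaic.size)                                  # 8294400 photosites

The point is that each output pixel gets its own R, G and B values with nothing more than a reshuffle and one average - no interpolation between neighbouring blocks.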
What, therefore, would be the point of moving up to a sensor with a higher number of photosites? If every photosite is still read, then not only does that impose difficulties in the readout (at video frame rates), but it will give a base raster of GREATER than 1080 resolution - which then needs downconverting.
In practice, cameras such as DSLRs and the AF100 with high-count sensors have to cheat. Their reason for using such sensors is that they are necessary for high-quality stills (for DSLRs), and in the case of the AF100, that the GH2 chip was readily available. In every case, they get round the problems by ignoring a percentage of the photosites. Early DSLRs did it in quite a clumsy fashion which gave considerable colour aliasing; the AF100 and GH2 are better in that they employ virtually the same technique as the C300 - reading out blocks of 2x2 as a resolution "unit".
The reason that the performance of the AF100 is so much poorer than the C300's is that it only reads out one block in two horizontally and vertically - so only one in four in total. Consequently, the resolution is a quarter of the total photosite count on both axes. Using the measured figure of 650 from Adam's chart, that predicts a total chip count of about 2600 vertically, and hence about 4620 horizontally. Multiply them together and you get 12,012,000 - say 12 megapixel - as would be expected for such a sensor windowed to 16:9. (Look earlier in the thread and you'll see more exact figures, and an explanation of why even the GH2 outperforms the AF100 for resolution.)
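For anyone who wants to check the arithmetic, a quick sketch (the skip factor of 4 per axis is just the 2 photosites per block, times reading only every other block):

# Back-of-envelope check of the AF100 figures above. Assumes the measured
# 650 lines corresponds to one sample per four photosites vertically,
# because only every other 2x2 block is read on each axis.
measured_vertical = 650          # from Adam's chart
skip_factor = 4                  # 2 photosites per block x every other block

photosites_v = measured_vertical * skip_factor      # 2600
photosites_h = round(photosites_v * 16 / 9)         # ~4622 for a 16:9 window
total = photosites_v * photosites_h

print(photosites_v, photosites_h, total)  # 2600 4622 12017200 - i.e. ~12 megapixel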
If you go the route of deBayering, downconverting etc., then it's possible to get full 1080 resolution with much less than the 8 megapixels of the C300 - at the expense of complexity, power, and hence cost. But in that case the minimum necessary is about 3.5 megapixels - exactly as with the F3.
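To put that 3.5 megapixel figure in perspective, here's the implied sensor geometry for a 16:9 window. Note the ~1.3x linear oversampling factor here is inferred from the 3.5 MP number itself, not a quoted specification:

# Rough sketch of what a ~3.5 megapixel deBayer minimum implies
# for a 16:9 sensor window.
target_mp = 3.5e6
aspect = 16 / 9

height = (target_mp / aspect) ** 0.5        # ~1403 photosites vertically
width = height * aspect                     # ~2494 photosites horizontally
print(round(width), round(height))          # roughly 2494 x 1403
print(round(width / 1920, 2))               # ~1.3x linear oversampling vs 1080p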
Yes, in this case oversampling may be beneficial for the aliasing reasons you mention (as the Alexa does) - but there is absolutely no point in going even up to 8 megapixel, let alone 12 or above!
Reducing the number of photosites and using them all is a nice idea, but it's like trying to go faster than the speed of light. Physics won't allow it. Perhaps you can do better than a particular manufacturer is doing - and that's what this game is all about - but the limits are still there.
The "laws of physics" lower limits are about 3.5 megapixel if you fully deBayer etc., and 8 megapixel for direct read. That's what Sony are doing with the F3 in the former case, and Canon with the C300 in the latter - and no, I couldn't do any better than those manufacturers. But using substantially more won't bring significant benefits for a 1080 output, and is likely to have the reverse effect, as the results from DSLRs and the AF100 prove.
The block skipping means that whilst about 3 million 2x2 blocks are AVAILABLE, only about a quarter (0.75 million) are USED. Far better to do what Canon do and have about 2.2 million and be able to use them all! But then you can't make use of an already existing still-camera chip......
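Putting the block counts side by side, using the approximate figures from the post above:

# Usable 2x2 blocks: AF100/GH2-class (~12 MP, skip-read) vs C300 (all read).
af100_blocks_available = 12_000_000 // 4          # ~3 million 2x2 blocks
af100_blocks_used = af100_blocks_available // 4   # only one block in four read

c300_blocks = 1920 * 1080                         # every block used

print(af100_blocks_used)   # 750000
print(c300_blocks)         # 2073600 - the "about 2.2 million" above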