Do We Really Need 22MP? - Page 2 at DVinfo.net


Canon EOS Full Frame for HD
All about using the Canon 1D X, 6D, 5D Mk. IV / Mk. III / Mk. II D-SLR for 4K and HD video recording.


Old October 19th, 2010, 09:51 AM   #16
Major Player
 
Join Date: May 2008
Location: Nuremberg, Germany
Posts: 285
Quote:
Originally Posted by Daniel Browning View Post
I have no idea why. All I know is that it does stay the same.
Take a look at this review page of the 5D mark II:
Canon EOS 5D Mark II Review: 3. What's New: Digital Photography Review
They state that the gaps became smaller compared to the original 5D. They also mention that the 50D has a gapless design, so gap sizes do seem to vary across camera models. But if all kinds of sensors have a QE of 55% at a certain wavelength, I interpret this as an indication that the same semiconductor material is often used, and that the QE is just a property of this material rather than of the whole sensor with all its microlenses and complex engineering. But I may be wrong.
Quote:
Originally Posted by Daniel Browning View Post
Whatever it is they are doing, it works, because I've measured and seen measurements of $5,000 DSLR with 8 micron pixels and the $500 digicam with 2 micron pixels. They both have 55% QE.
How did you measure that?
Quote:
Originally Posted by Daniel Browning View Post
In a recent discussion with other image sensor technologists, the inventor of CMOS image sensors said "It is interesting to note that output data rates have hovered around 1 Gpix/sec since the late 1990's."
1 GPixel/s would be enough to read 22 MP 30 times per second. So do you think it is possible, but currently too expensive, to build this into a $3000 camera within the next few years?
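The arithmetic behind that claim is easy to check (a quick sketch, not anything camera-specific):

```python
# Sanity-check the readout-rate arithmetic from the post above:
# reading a 22 MP sensor 30 times per second.
megapixels = 22e6          # sensor resolution in pixels
fps = 30                   # desired video frame rate

required_rate = megapixels * fps   # pixels per second
print(required_rate / 1e9)         # -> 0.66, i.e. 0.66 Gpix/s
```

So a 1 Gpix/s readout path would indeed cover 22 MP at 30 fps with headroom to spare.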
Dominik Seibold is offline   Reply With Quote
Old October 19th, 2010, 10:12 AM   #17
Major Player
 
Join Date: Jan 2007
Location: Portland, OR
Posts: 949
Quote:
Originally Posted by Dominik Seibold View Post
They state that the gaps became smaller compared to the original 5D.
Yes, and that is one reason why smaller pixels tend to perform better than larger pixels if you don't control for technology level (as I'm sure you already know).

Quote:
Originally Posted by Dominik Seibold View Post
They also mention that the 50D has a gapless design, so gap sizes do seem to vary across camera models. But if all kinds of sensors have a QE of 55% at a certain wavelength,
The original 5D had a QE of 25%, the 5D2 improved this to 33%. The move to gapless microlenses in the 50D improved QE from 33% in the 40D to 38% in the 50D.

Quote:
Originally Posted by Dominik Seibold View Post
How did you measure that?
Take several raw files of a ColorChecker chart, plot the noise variance against the mean signal of the gray patches, and the slope gives the inverse gain (ADU per electron). From there, all that's needed to calculate the QE is the ISO calibration of the camera (one camera's "ISO 200" may be another camera's "ISO 500"), but figuring that out requires some sort of calibrated lighting system and a highly accurate light meter. So I just use measured ISO numbers from DxOMark.
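The photon-transfer idea described here can be sketched with synthetic data in place of real raw files (the `true_gain` value and patch levels below are made up for illustration; real measurements would come from the gray patches of actual frames):

```python
# Minimal sketch of the photon-transfer method: for shot-noise-limited
# data, variance = gain * mean, so fitting variance vs. mean recovers
# the gain in ADU per electron.
import numpy as np

rng = np.random.default_rng(0)
true_gain = 0.4            # ADU per electron (the value we try to recover)

# Simulate six gray patches at different brightness levels, each
# sampled many times (as if averaged over several raw frames).
electron_levels = np.array([500, 1000, 2000, 4000, 8000, 16000])
means, variances = [], []
for electrons in electron_levels:
    samples = rng.poisson(electrons, size=100_000) * true_gain
    means.append(samples.mean())
    variances.append(samples.var())

# Slope of variance vs. mean is the gain in ADU/e-.
slope, _ = np.polyfit(means, variances, 1)
print(slope)               # close to the true gain of 0.4
```

From the recovered gain, knowing the true ISO calibration (or taking it from DxOMark, as described above) lets you convert saturation signal to electrons and estimate QE.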

Quote:
Originally Posted by Dominik Seibold View Post
1 GPixel/s would be enough to read 22MP 30 times per second. So do you think that it is possible but too expensive, to build it into a $3000 camera within the next years?
Yes, that's my guess.
Daniel Browning is offline   Reply With Quote
Old October 19th, 2010, 08:02 PM   #18
Wrangler
 
Join Date: Jul 2002
Location: Makati, Metro Manila
Posts: 2,706
Images: 32
Couldn't you just use larger pixels while in video mode? It seems like a straightforward way to optimize it. If you increased the pixel size so that the data was read out as a 2 or 4 megapixel video image, it seems like it would (a) increase light sensitivity and (b) greatly reduce the aliasing common with high-pixel-count images.

Or did I just get off at the wrong train station?
Michael Wisniewski is offline   Reply With Quote
Old October 20th, 2010, 12:30 PM   #19
Major Player
 
Join Date: May 2008
Location: Nuremberg, Germany
Posts: 285
Quote:
Originally Posted by Michael Wisniewski View Post
Couldn't you just use larger pixels while in video mode?
How do you increase the pixel size without physically replacing the whole sensor?
Dominik Seibold is offline   Reply With Quote
Old October 20th, 2010, 12:57 PM   #20
Wrangler
 
Join Date: Jul 2002
Location: Makati, Metro Manila
Posts: 2,706
Images: 32
My understanding is that you don't literally have to read every point of data on the sensor as a pixel. You can instead treat an area containing several physical sensor points as a single pixel. For example, you could read every 10x10 block of points on the sensor as one pixel in your image. Generally, the bigger the pixel, the better the low-light performance. Additionally, lowering the image size to 2-4 megapixels can significantly reduce aliasing artifacts.

Well, it sounds like a win-win anyway, but camera manufacturers have been shy about going down this route, so there may be a good reason it hasn't been implemented. But it still sounds logical to me, or am I nuts?
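In software, the "read every NxN points as one pixel" idea is just block averaging. Here is a small sketch (on raw data after readout; getting the readout-rate benefit would require doing this in sensor hardware, which is the sticking point discussed later in the thread):

```python
# Average non-overlapping n x n blocks of a raw image into single
# "virtual" pixels, shrinking each dimension by a factor of n.
import numpy as np

def bin_pixels(raw, n):
    """Average non-overlapping n x n blocks into single pixels."""
    h, w = raw.shape
    h, w = h - h % n, w - w % n          # crop to a multiple of n
    blocks = raw[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)
print(bin_pixels(raw, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

Each output pixel is the mean of an n x n neighborhood, which is why binning improves the signal-to-noise ratio and suppresses detail above the new Nyquist limit.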
Michael Wisniewski is offline   Reply With Quote
Old October 20th, 2010, 02:22 PM   #21
Major Player
 
Join Date: May 2008
Location: Nuremberg, Germany
Posts: 285
Quote:
Originally Posted by Michael Wisniewski View Post
For example, you could read every 10x10 block of points on the sensor as one pixel in your image.
So you want to read out the average value of multiple pixels with just one read-out operation? I don't think that's possible without designing a whole new sensor. But it sounds like an interesting idea.
Dominik Seibold is offline   Reply With Quote
Old October 21st, 2010, 08:59 AM   #22
Wrangler
 
Join Date: Jul 2002
Location: Makati, Metro Manila
Posts: 2,706
Images: 32
I thought this was being done in most cameras and camcorders anyway. There's a single read operation from the sensor, and then compression is applied to the raw sensor data to give us the final image. The larger-pixel idea would be applied after the raw sensor data was read.
Michael Wisniewski is offline   Reply With Quote
Old October 21st, 2010, 09:17 AM   #23
Trustee
 
Join Date: Oct 2008
Location: Byron Bay, Australia
Posts: 1,142
Quote:
Originally Posted by Michael Wisniewski View Post
My understanding is that you don't literally have to read every point of data on the sensor as a pixel. You can instead treat an area containing several physical sensor points as a single pixel. For example, you could read every 10x10 block of points on the sensor as one pixel in your image. Generally, the bigger the pixel, the better the low-light performance. Additionally, lowering the image size to 2-4 megapixels can significantly reduce aliasing artifacts.

Well, it sounds like a win-win anyway, but camera manufacturers have been shy about going down this route, so there may be a good reason it hasn't been implemented. But it still sounds logical to me, or am I nuts?
That's pretty much what I was talking about. The rumour (again, I don't want to perpetuate or confirm it) is that Canon is working on a sensor that does just that in video mode. I don't know anything about any future cameras, but the logic behind the rumour seems to make sense.

Although a 10x10 pixel area might be a bit large. To get a 1920x1080 image that way, you'd be working with roughly a 207-megapixel sensor (1920 x 1080 x 100 photosites)!
John Wiley is offline   Reply With Quote
Old October 21st, 2010, 05:22 PM   #24
Major Player
 
Join Date: May 2008
Location: Nuremberg, Germany
Posts: 285
Quote:
Originally Posted by Michael Wisniewski View Post
I thought this was being done in most cameras and camcorders anyway. There's a single read operation from the sensor, and then compression is applied to the raw sensor data to give us the final image. The larger-pixel idea would be applied after the raw sensor data was read.
There's a read-out operation for each individual pixel, not just one per frame! That's why the 5D Mk. II's video has a lot of aliasing: only every third row and column gets read out, to reduce the number of read-out operations per frame. If you wanted to build a virtual larger pixel, you would first have to read out each native pixel within it, so you still wouldn't get around the limit on how many pixels can be read out per second. By the way, calculating the average of each block of pixels and building a smaller frame out of these blocks is equivalent to applying a box low-pass filter to the raw data and then downsampling.
I initially understood your idea differently: that the sensor would be designed so that multiple pixels can be connected to additional operational-amplifier circuits which compute the average values in the analog domain, and only the outputs of these additional circuits get read out.
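The contrast between the two downsizing strategies discussed above can be shown in a few lines (a toy example on synthetic raw data):

```python
# Line-skipping (keep every n-th row/column, discard the rest) versus
# block averaging (box low-pass filter followed by downsampling).
# Both produce the same output size, but only averaging uses all the
# data, which is why line-skipping aliases badly.
import numpy as np

raw = np.arange(36, dtype=float).reshape(6, 6)
n = 3

# Line skipping: subsample without any low-pass filtering.
skipped = raw[::n, ::n]

# Block averaging: mean of each n x n block (box filter + decimate).
binned = raw.reshape(6 // n, n, 6 // n, n).mean(axis=(1, 3))

print(skipped.shape == binned.shape)   # True: same output dimensions
print(np.array_equal(skipped, binned)) # False: different pixel values
```

Line-skipping reads only 1/n^2 of the pixels, which keeps the readout rate down but leaves high-frequency detail to fold back as aliasing; block averaging avoids the aliasing but requires reading every native pixel first, which is exactly the readout bottleneck described above.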
Dominik Seibold is offline   Reply