DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   Canon EOS Full Frame for HD (https://www.dvinfo.net/forum/canon-eos-full-frame-hd/)
-   -   Do We Really Need 22MP? (https://www.dvinfo.net/forum/canon-eos-full-frame-hd/486125-do-we-really-need-22mp.html)

Dominik Seibold October 19th, 2010 08:51 AM

Quote:

Originally Posted by Daniel Browning (Post 1580009)
I have no idea why. All I know is that it does stay the same.

Take a look at this review page of the 5D mark II:
Canon EOS 5D Mark II Review: 3. What's New: Digital Photography Review
They state that the gaps became smaller compared to the original 5D. They also mention that the 50D has a gapless design, so gap sizes do indeed seem to vary across camera models. But if all kinds of sensors have a QE of 55% at a certain wavelength, I interpret this as an indication that the same semiconductor material is often used, and that QE is just a property of this material rather than of the whole sensor with all its microlenses and complex engineering. But I may be wrong.
Quote:

Originally Posted by Daniel Browning (Post 1580009)
Whatever it is they are doing, it works, because I've measured, and seen measurements of, a $5,000 DSLR with 8-micron pixels and a $500 digicam with 2-micron pixels. They both have 55% QE.

How did you measure that?
Quote:

Originally Posted by Daniel Browning (Post 1580009)
In a recent discussion with other image sensor technologists, the inventor of CMOS image sensors said "It is interesting to note that output data rates have hovered around 1 Gpix/sec since the late 1990's."

1 GPixel/s would be enough to read 22MP 30 times per second. So do you think it's possible, but currently too expensive, to build that into a $3000 camera within the next few years?
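A quick sanity check of Dominik's arithmetic against the ~1 Gpixel/s figure quoted earlier (a sketch; the 22 MP and 30 fps numbers come from the discussion above):

```python
# Required readout throughput for a 22 MP sensor at 30 fps,
# compared with the ~1 Gpixel/s output rate quoted above.
megapixels = 22
fps = 30
required = megapixels * 1e6 * fps  # pixels per second

print(f"required: {required / 1e9:.2f} Gpixel/s")   # required: 0.66 Gpixel/s
print(f"fits within 1 Gpixel/s: {required <= 1e9}")  # True
```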

Daniel Browning October 19th, 2010 09:12 AM

Quote:

Originally Posted by Dominik Seibold (Post 1580157)
They state that the gaps became smaller compared to the original 5D.

Yes, and that is one reason why smaller pixels tend to perform better than larger pixels if you don't control for technology level (as I'm sure you already know).

Quote:

Originally Posted by Dominik Seibold (Post 1580157)
They also mention that the 50D has a gapless design, so gap sizes do indeed seem to vary across camera models. But if all kinds of sensors have a QE of 55% at a certain wavelength,

The original 5D had a QE of 25%, the 5D2 improved this to 33%. The move to gapless microlenses in the 50D improved QE from 33% in the 40D to 38% in the 50D.

Quote:

Originally Posted by Dominik Seibold (Post 1580157)
How did you measure that?

Take several raw files of a ColorChecker chart and plot the noise variance of the gray patches against their mean signal; the slope of that line gives the inverse gain (ADU per electron). From there, all that's needed to calculate the QE is the ISO calibration of the camera (one camera's "ISO 200" may be another camera's "ISO 500"), but figuring that out requires some sort of calibrated lighting system and a highly accurate light meter. So I just use measured ISO numbers from DxOMark.
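A minimal sketch of the slope fit Daniel describes, assuming you already have per-patch means and variances from the raw files. All numbers below are invented purely for illustration; the final QE step still needs a measured ISO calibration, which this sketch does not attempt:

```python
# Photon-transfer sketch: for shot-noise-limited gray patches,
# variance (ADU^2) is approximately inverse gain (ADU per electron)
# times mean (ADU), so the slope of variance vs. mean gives the
# inverse gain. Patch statistics below are invented for illustration.
mean_adu = [200.0, 500.0, 1000.0, 2000.0, 4000.0]
var_adu = [0.25 * m for m in mean_adu]  # pretend 0.25 ADU per electron

# Least-squares slope of variance vs. mean.
n = len(mean_adu)
mx = sum(mean_adu) / n
my = sum(var_adu) / n
inv_gain = (sum((x - mx) * (y - my) for x, y in zip(mean_adu, var_adu))
            / sum((x - mx) ** 2 for x in mean_adu))

saturation_adu = 15000.0               # raw clipping point, also invented
full_well_e = saturation_adu / inv_gain  # electrons at saturation

print(f"inverse gain: {inv_gain:.3f} ADU/e-")  # 0.250
print(f"full well: {full_well_e:.0f} e-")      # 60000
```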

Quote:

Originally Posted by Dominik Seibold (Post 1580157)
1 GPixel/s would be enough to read 22MP 30 times per second. So do you think it's possible, but currently too expensive, to build that into a $3000 camera within the next few years?

Yes, that's my guess.

Michael Wisniewski October 19th, 2010 07:02 PM

Couldn't you just use larger pixels while in video mode? It seems like a straightforward way to optimize it. If you increased the pixel size so that the data was read as a 2- or 4-megapixel video image, it seems like it would (a) increase light sensitivity and (b) greatly reduce the aliasing common with high-pixel-count images.

Or did I just get off at the wrong train station?

Dominik Seibold October 20th, 2010 11:30 AM

Quote:

Originally Posted by Michael Wisniewski (Post 1580303)
Couldn't you just use larger pixels while in video mode?

How do you increase the pixel size without physically replacing the whole sensor?

Michael Wisniewski October 20th, 2010 11:57 AM

My understanding is that you don't literally have to read every point of data on the sensor as a pixel. You can instead use an area of several physical sensor points as a single pixel. For example, you could read each 10x10 block of points on the sensor as one pixel of your image. Generally, the bigger the pixel, the better the low-light performance. Additionally, lowering the image size to 2-4 megapixels can also significantly lower aliasing artifacts.

Well, it sounds like a win-win, but camera manufacturers have been shy to go down this route, so there may be a good reason it hasn't been implemented. But it still sounds logical to me. Or am I nuts?
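The binning Michael describes can be sketched in software (this operates on already-read-out data, which is a separate question from doing it on the sensor itself; the function and toy 4x4 frame are illustrative only):

```python
import numpy as np

# Average each b-by-b block of sensor samples into one output pixel.
def bin_pixels(frame: np.ndarray, b: int) -> np.ndarray:
    h, w = frame.shape
    assert h % b == 0 and w % b == 0, "frame must divide evenly into blocks"
    # Split into (h/b, b, w/b, b) blocks, then average each block.
    return frame.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

sensor = np.arange(16.0).reshape(4, 4)  # toy 4x4 "sensor readout"
print(bin_pixels(sensor, 2))
# Each 2x2 block is replaced by its mean, giving a 2x2 image:
# [[ 2.5  4.5]
#  [10.5 12.5]]
```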

Dominik Seibold October 20th, 2010 01:22 PM

Quote:

Originally Posted by Michael Wisniewski (Post 1580539)
For example, you could read every 10x10 points on the sensor as 1 pixel for your image.

So you want to read out the average value of multiple pixels with just one read-out operation? I don't think that's possible without designing a whole new sensor, but it sounds like an interesting idea.

Michael Wisniewski October 21st, 2010 07:59 AM

I thought this was being done in most cameras and camcorders anyway. There's a single read operation from the sensor, and then compression is applied to the raw sensor data to give us the final image. The larger-pixel idea would be applied after the raw sensor data was read.

John Wiley October 21st, 2010 08:17 AM

Quote:

Originally Posted by Michael Wisniewski (Post 1580539)
My understanding is that you don't literally have to read every point of data on the sensor as a pixel. You can instead use an area of several physical sensor points as a single pixel. For example, you could read each 10x10 block of points on the sensor as one pixel of your image. Generally, the bigger the pixel, the better the low-light performance. Additionally, lowering the image size to 2-4 megapixels can also significantly lower aliasing artifacts.

Well, it sounds like a win-win, but camera manufacturers have been shy to go down this route, so there may be a good reason it hasn't been implemented. But it still sounds logical to me. Or am I nuts?

That's pretty much what I was talking about. The rumour (again, I don't want to perpetuate or confirm it) is that Canon is working on a sensor that does just that in video mode. I don't know anything about any future cameras, but the logic behind the rumour seems to make sense.

Although a 10x10 pixel area might be a bit large. To get a 1920x1080 image that way, you'd be working with a roughly 207-megapixel sensor (19200 x 10800)!

Dominik Seibold October 21st, 2010 04:22 PM

Quote:

Originally Posted by Michael Wisniewski (Post 1580778)
I thought this was being done in most cameras and camcorders anyway. There's a single read operation from the sensor and then compression is applied to raw sensor data to give us the final image. The larger pixel idea would be applied after the raw sensor data was read.

There's a read-out operation for each individual pixel, not just one for the whole frame! That's why the 5D Mark II's video has a lot of aliasing: only every third row and column gets read out, to reduce the number of read-out operations per frame. To make a virtual larger pixel, you would first have to read out each native pixel within it, so this wouldn't get around the limit on how many pixels can be read out per second. By the way, calculating the average of each block of pixels and building a new frame with smaller dimensions out of these blocks is equivalent to applying a box low-pass filter to the raw data and then downsampling.
I initially understood your idea to mean that the sensor would be designed so that multiple pixels can be connected to additional operational-amplifier circuits, which generate the average values in an analog way, and only the output of these additional circuits gets read out.
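Dominik's equivalence claim can be checked numerically. The sketch below (illustrative functions, not from any camera pipeline) compares block averaging against an explicit box filter followed by taking every b-th filtered sample:

```python
import numpy as np

# Block averaging: each b-by-b block becomes one output pixel.
def block_average(frame, b):
    h, w = frame.shape
    return frame.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

# Box low-pass filter, then downsample by keeping every b-th sample.
def box_filter_then_downsample(frame, b):
    h, w = frame.shape
    filtered = np.empty((h - b + 1, w - b + 1))
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            # Valid box filter: mean of the b-by-b window at (i, j).
            filtered[i, j] = frame[i:i + b, j:j + b].mean()
    return filtered[::b, ::b]

frame = np.random.default_rng(0).random((6, 6))
print(np.allclose(block_average(frame, 3),
                  box_filter_then_downsample(frame, 3)))  # True
```

The two paths hit exactly the same windows, which is why the outputs match; the filter-then-downsample view just makes the anti-aliasing role of the averaging explicit.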



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network