Silicon and HDR


Giroud Francois
January 14th, 2007, 04:38 PM
Would the Silicon camera be able to shoot HDR (high dynamic range) pictures?
I mean, if I built a filter that could switch an ND filter (an LCD shutter) in and out 60 times per second, would the sensor, the electronics, or the codec be able to ignore the change and keep the picture as is? Or, the opposite: would the electronics be fast enough to follow and give a nice picture for both frames (with and without the filter)?
Or, to make it simpler, would it be possible to simulate bracketing by switching the gain on the camera 60 times per second?

Jason Rodriguez
January 15th, 2007, 11:09 AM
The camera's frame rate only runs at a maximum of 30fps at 1080P . . . your technique sounds interesting, although I'm curious how subjects in motion would be rendered.

I know right now that CineForm RAW does not support companding for multi-exposure HDR images, so this would have to be some sort of custom application.

Ari Presler
January 17th, 2007, 10:36 PM
Theoretically, we can do double exposures at 60fps in 720P mode. Instead of using an LCD shutter, a short electronic shutter would be used for alternating frames. The two frames would then be combined into an HDR image in post.
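To give a rough idea of the post step, here is a minimal sketch (illustrative Python only, not anything from our actual pipeline) of merging one short-exposure frame and one long-exposure frame, assuming linear sensor data, known exposure times, and a pair that is already motion-aligned:

import numpy as np

def merge_pair(short_frame, long_frame, short_t, long_t, white_level=1.0):
    # Normalize each frame to a common radiance scale (counts per second).
    short_rad = short_frame / short_t
    long_rad = long_frame / long_t
    # Use the cleaner long exposure where it is not clipped, and fall back
    # to the short exposure to recover the highlights.
    clipped = long_frame >= 0.98 * white_level
    return np.where(clipped, short_rad, long_rad)

# Hypothetical usage: a 1/1000s frame and a 1/60s frame from alternating
# 60fps capture, both as float arrays scaled 0..1.
# hdr = merge_pair(f_short, f_long, short_t=1/1000, long_t=1/60)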

Noah Yuan-Vogel
January 19th, 2007, 09:55 AM
Wouldn't combining time-offset frames with different exposures create some weird edge artifacts when dealing with motion? It might look like ghosting or something.

Ari Presler
January 19th, 2007, 08:24 PM
Yes. There would be motion artifacts. That is the reason you need to operate at a minimum of 60fps.

True HDR systems take multiple images, where the first image(s) are obtained with short exposures and a non-destructive readout, and the final image is taken with the full frame time of exposure. The faster the readout frame rate, the more images can be combined into a single HDR image with limited motion artifacts.
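As a rough illustration (my own Python sketch, not our firmware or SDK), the combine step could keep, per pixel, the longest readout that has not yet saturated and normalize it by its exposure time:

import numpy as np

def merge_readouts(readouts, times, white_level=1.0):
    # readouts: float arrays from one integration period, shortest
    # (cumulative) exposure first; times: matching exposure times.
    hdr = readouts[0] / times[0]  # the shortest readout never clips
    for frame, t in zip(readouts[1:], times[1:]):
        unsaturated = frame < 0.98 * white_level
        # Prefer the longer (lower-noise) readout wherever it is still valid.
        hdr = np.where(unsaturated, frame / t, hdr)
    return hdr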

Jason Rodriguez
January 20th, 2007, 10:14 PM
What about a 3-CMOS system with ND filters on two of the chips? This would take a bit of bandwidth, but it would eliminate all the motion-artifact and temporal-alignment issues (i.e., you wouldn't need optical flow algorithms to try to line up mismatched frames).

Ari Presler
January 21st, 2007, 12:54 PM
The easiest would be a 4x-speed non-destructive readout system with the last image having a full frame-time exposure (not something we can do today with our current 2K architecture, but something to keep in mind for our next-gen products).
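As a concrete (purely hypothetical) example, feeding the merge_readouts sketch from my earlier post with four readouts per 24fps frame would look like this; r1..r4 stand in for the readout arrays, and the times are cumulative since the charge is never reset between non-destructive readouts:

frame_t = 1.0 / 24  # full frame time at 24fps
times = [frame_t / 4, frame_t / 2, 3 * frame_t / 4, frame_t]
hdr = merge_readouts([r1, r2, r3, r4], times)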