View Full Version : Moving HDR images..?


Karel Bata
January 11th, 2009, 06:52 AM
Maybe I'm a bit late coming to this, but I've only just discovered HDR photography and wondered if anyone's considered its use in moving images.

Here's a page showing some stills examples HDR Tutorial (http://www.stuckincustoms.com/hdr-tutorial/)

Cool or what? At the moment you have to take bracketed exposures and then do some faffing in Photoshop, but I reckon there'll be a PS plug-in soon, and cameras that will do it in one take. Itching to try it myself.

Any ideas on how to do this with moving images?

I'm curious about what exactly is happening to the sensitometric curves, but I've not had time to sit down and study this properly yet.

Karel Bata
January 11th, 2009, 07:20 AM
Found some software: Photomatix, at HDR photo software & plugin - Tone Mapping, Exposure Blending & HDR Imaging for photography (http://www.hdrsoft.com/index.html)

"If you have ever photographed a high contrast scene, you know that even the best exposure will typically have blown out highlights and flat shadows. Photomatix offers two ways to solve this problem:
› Exposure Blending: Merge differently exposed photographs into one image with increased dynamic range.
› Tone Mapping: Reveal highlight and shadow details in an HDR image created from multiple exposures. The tone mapped image is ready for printing while showing the complete dynamic range captured."

Here's an amazing gallery: Photomatix User Gallery - Luke Kaven (http://www.hdrsoft.com/gallery/gallery.php?id=41&gid=0) These look like they were painted. Imagine if they moved...

John Wyatt
January 11th, 2009, 07:47 AM
Is it possible to take several images for each single frame quickly enough, without the subject moving and causing ghosting, when you combine them into a single HDR video frame? Even in still photography, HDR is usually used for tripod landscapes, to allow accurate line-up of the bracketed exposures -- how could this translate to video, where movement and actors are needed? It would be a wonderful way to improve dynamic range, but I just don't understand how it could be done with "motion pictures". Can someone explain it to me?

Biel Bestue
January 11th, 2009, 02:47 PM
OMRON Automotive Electronics Technology GmbH (http://www.hdrc.com)

but they don't have an HD version of it yet!

Giroud Francois
January 11th, 2009, 03:09 PM
Basically, any camera able to shoot at 60p could take a 30p low-exposed picture interleaved with a 30p high-exposed picture. Then each pair would be merged into an HDR picture.
For sure, with fast-moving subjects there would be some problems.
And you'd need to add a flip-flop device that switches the exposure mode between each picture.
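Giroud's interleave-and-merge idea can be sketched in a few lines. A minimal numpy sketch, assuming two linear-light frames normalised to [0, 1] and known effective exposures; the function name and the simple hat-shaped weighting are my own illustration, not any camera's actual firmware:

```python
import numpy as np

def merge_exposures(low, high, exp_low, exp_high):
    """Merge a short-exposure and a long-exposure frame into one HDR
    radiance map. Frames are linear-light in [0, 1]; exp_low/exp_high
    are the effective exposures (shutter time x gain)."""
    # Trust mid-tone pixels most; clipped pixels (0 or 1) get zero weight.
    def weight(img):
        return 1.0 - np.abs(img - 0.5) * 2.0

    w_low, w_high = weight(low), weight(high)
    # Dividing by exposure recovers scene radiance from each frame.
    merged = w_low * (low / exp_low) + w_high * (high / exp_high)
    return merged / np.maximum(w_low + w_high, 1e-6)

# Toy example: the same scene "shot" at 1/96 s and 1/48 s.
scene = np.array([0.05, 0.4, 2.0])        # true radiance (arbitrary units)
exp_a, exp_b = (1 / 96) * 40, (1 / 48) * 40   # shutter time x fixed gain
frame_a = np.clip(scene * exp_a, 0, 1)    # short exposure: dark but unclipped
frame_b = np.clip(scene * exp_b, 0, 1)    # long exposure: highlight clips
hdr = merge_exposures(frame_a, frame_b, exp_a, exp_b)
```

In the toy example the merged result recovers the full radiance range, including the highlight that clipped in the long exposure.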

Sebastian Pichelhofer
January 13th, 2009, 05:26 AM
Tonemapping is just a way to reduce the bit depth of an image.

Though instead of just plainly scaling or cropping the values into the 8-bit range, most algorithms do pretty fancy local exposure adaptations, which results in the typical painting style most tonemapped images have, especially when processed with "Details Enhancer" in Photomatix.

Theoretically, every motion picture camera that outputs 10- or 12-bit video could be used to create files that could then be tonemapped.
Though there is no real workflow for this yet, as tonemapping has so far been aimed only at still-picture processing.
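To make the bit-depth reduction concrete, here is a minimal sketch of a *global* tonemapping operator (the classic x / (1 + x) curve) - deliberately much simpler than the local-adaptation algorithms described above, but it shows how an arbitrary radiance range gets squeezed into 8 bits:

```python
import numpy as np

def tonemap_global(radiance):
    """Global tone curve: x / (1 + x) maps 0 -> 0 and infinity -> 1,
    compressing highlights hardest, then quantises to 8 bits."""
    display = radiance / (1.0 + radiance)
    return np.round(display * 255).astype(np.uint8)

hdr = np.array([0.01, 1.0, 10.0, 1000.0])   # roughly 17 stops of range
ldr = tonemap_global(hdr)                    # fits in uint8, order preserved
```

A local operator would additionally vary the curve per region of the image, which is where the painterly look comes from.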

I did some tonemapped HDR timelapse videos some time ago with a DSLR camera and multiple exposures.

There are a few videos of this kind on YouTube. Search for "hdr timelapse".

I recently saw some tests done by Elphel camera team members where the camera is able to vary exposure time with every recorded frame. So the camera could film 3 exposure levels at 75fps, resulting in real-time 25fps video when the frames are combined into HDR images. Maybe we will see results of this soon.

Ben Winter
January 13th, 2009, 09:29 AM
I'm experimenting with two HDV cameras, their iris settings locked at two extremes, mounted in the kind of beam-splitter mirror rig used for 3D applications, positioned to achieve the same field of view. Provided they can be genlocked for frame-accurate sync, I'm confident the images can be combined for HDR video.

But first I must procure two HDV cameras with genlock capability, and a mirror beam-splitter :)

Nicky Campos
January 13th, 2009, 11:23 AM
Yup... there are in fact several ways to achieve HDR video; each technique has its own flaws though.

What I don't understand is why NOBODY on planet earth has EVER done it???
If I am wrong please send a link!!!

(1) shoot higher than 8 bits (10-12 bits) and then SUCCESSFULLY encode (i.e. tonemap) into an 8-bit stream!

(2a) shoot multiple exposures at the same time using beam-splitter

(2b) shoot multiple exposures offset in time, e.g. frame 01 under-exposed... frame 02 over-exposed... then combine. BUT there will be temporal motion differences... these can be fixed using motion vectors!!!

(3) just shoot normally, convert to 16-bit TIFF, use tonemapping (Photomatix)... but this will suffer from heavy grain - not really suitable unless one shoots in super-flat/ambient conditions.

Last important note - even if one successfully achieves ANY of the above techniques, playing the clip at video rates will display heavy, grainy temporal inconsistencies because of the STATIC, per-frame nature of HDR/tonemapping processing.

The only fix I can suggest is a temporal stabiliser like RE:Vision's denoiser plug-in, or any other TEMPORAL-BASED denoising plug-in like The Foundry's Furnace denoiser. (Please note that a normal denoiser plug-in WILL NOT work, because it will not STABILISE the noise from frame to frame!!!)

By this time the overall image quality will be considerably soft due to HEAVY denoising! (Depending on how many weird temporal grain effects you can live with?)

One would need an extremely high-quality picture to start with in the first place.
4K raw perhaps?... Depending on your delivery format, it could work for SD?

Biel Bestue
January 14th, 2009, 08:26 AM
I think it will come once the CMOS or CCD can achieve it with a single exposure, because any other variant either limits shutter speed or has the problem of too many cameras mounted in a single place. So I think it will come once the chips can achieve that much latitude. Even the "light stage" approach doesn't work, because it shoots more photographs per second (so it limits shutter speed). It will be all about the chips, and I think HDRC may have achieved it, but at lower resolution, so it shouldn't take them much time to get it to HD.

Karel Bata
January 15th, 2009, 03:54 AM
In theory many decent stills cameras that feature auto-bracketing (which is +1 & -1, or even +2, +1, -1 & -2!) can be hacked to carry on taking pics and feeding the results to a hard drive. (And you could use proper lenses without a DOF adapter!) Whether you can get 24fps I don't know, but I reckon someone out there must be trying.

In fact I would think any video camera that allows external control of its functions while the camera is running would be a candidate. It could be exposing at 1/48, 1/96, and 1/192 in quick succession 24 times each second. But that means your key exposure is the 1/96, losing you a stop over normal 24fps.
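As a quick sanity check on that 1/48, 1/96, 1/192 scheme, a few lines of arithmetic confirm the three exposures fit inside one 24fps frame period and are spaced exactly one stop apart:

```python
import math

# Karel's proposed bracketed shutter times, in seconds.
exposures = [1 / 48, 1 / 96, 1 / 192]
frame_period = 1 / 24                    # time budget per output frame at 24fps

total = sum(exposures)                   # 7/192 s, about 36.5 ms
fits = total < frame_period              # leaves ~5 ms for sensor readout

# Stop spacing between consecutive exposures: log2 of the time ratio.
stops_apart = [math.log2(exposures[i] / exposures[i + 1])
               for i in range(len(exposures) - 1)]
```

So the triple-exposure cycle is at least physically feasible at 24fps, with a little time left over per frame.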

(Another approach could involve multiple passes of a scene with motion control. You need only have the actors in once and software would produce the relevant traveling matte from the image differences etc. That could work in a location where you're not allowed to rig any big lights, but you could wheel in a pre-rigged motion-control camera. A potential niche market there, I say.)

The next step of stripping the video sequence into three separate layers out of frames A B C A B C A etc. looks straightforward, but the tone mapping doesn't look like something you can automate. Not only would you have to tweak each shot individually, you'd probably need to make changes during the course of a shot. And would the whole thing be a tad temperamental like, say, Toonit, where small changes in an image cascade into larger effects? Here's a Toonit example: Toonit Experiment (http://www.youtube.com/watch?v=5zM2YbRYAfE) Notice the amount of distracting shimmer? An HDR image that did that would lose the sense of hyper-reality a lot of HDR stills have.

Quite promising is the idea that you don't need multiple exposures, but can do it with a high dynamic range image capturing device. (Hey, like film!) The following were done with single exposures on a DSLR:
http://farm1.static.flickr.com/126/394713395_7851d0c366_b.jpg
http://farm1.static.flickr.com/73/198735994_bab41730c2_b.jpg
http://farm4.static.flickr.com/3143/2767250890_1bf7a40f85_b.jpg

With all this you enter into a whole debate about what looks 'real'. I notice also that many of the HDR stills have enormous depth of field. I wonder if that's an aesthetic judgment taking advantage of a locked off camera, or if the algorithms used later don't work so well with blur...


But I have a question here: can someone explain tonemapping in terms us camera folk can understand? A high dynamic range usually means a flat image lacking punch, so how exactly does tonemapping give you the best of both worlds..?

Bob Hart
January 15th, 2009, 06:45 AM
A team over here was working on a product called the "smart filter". I have been out of touch for a good while now. Their "filter" that I saw was a pass-through device which selectively adjusted exposure by ND filtering at a pixel level on its filter screen.

The level of ND was established by operator choice. Their early, coarser-pitch prototype apparently enabled outdoor exposure to be correct while indoor subjects were also correct, requiring only colour-temperature filtering on the window or gels on the interior lighting.

Karel Bata
January 15th, 2009, 07:17 AM
Interesting concept.

I've been looking around a bit and found a few very nice HDR videos. They're all time-lapse, but they show some of the problems, like noise, artefacting, and some notable ringing. I note that not many folk decide to repeat the experience, suggesting that post may be time-consuming, or that it's simply not easy to get right! Some of these work best full screen.

YouTube - HDR Time Lapse (http://www.youtube.com/watch?v=FA1eBSzTXN0)
YouTube - HDR timelapse (http://www.youtube.com/watch?v=9QMijgW2b7w)
YouTube - HDR Evening Timelapse (http://www.youtube.com/watch?v=OOfRQbuQdB8)
YouTube - HDR Timelapse (http://www.youtube.com/watch?v=e7hr5MPv00o) massive ringing - but you notice the sun is in shot!
HDR Sunflower time-lapse on Vimeo (http://vimeo.com/1885188) David Lynch would love this!
Tamarama HDR Timelapse on Vimeo (http://vimeo.com/2025465) in HD!
Point Judith Sunset on Vimeo (http://vimeo.com/2636529) HD too - very very nice this one. How many hours did he put into that?

Nicky Campos
January 15th, 2009, 09:50 AM
Nice... although time-lapse HDRs don't really count as video.

The ABC shutter alternating from frame to frame is a good idea - but how to actually implement it? That's the difficult part... How exactly?

A similar trick would be to use a variable ND filter that darkens as you turn it - attach some sort of motor to turn it continuously so that frame to frame it goes darker then brighter, ABC, etc.

Single 12-bit DR exposures are a no-brainer but are limited to that specific DR... limiting.

Shallow DOF doesn't work with HDR because it... well, it just looks weird really... the HDR look is surreal and painterly/toony - it just doesn't suit shallow DOF.

P+S Technik make an expensive and bulky 3D/stereo rig that can actually achieve inter-ocular distances so close that two cameras can record the SAME image and therefore capture multiple exposures!!!

Karel Bata
January 15th, 2009, 10:40 AM
"but how to actually implement it is difficult... How exactly?"

Ask a camera software hacker! Not my line really, but I'm sure it could be done. There's plenty of folk shooting stop-frame animation using a stills camera linked to a PC. So it would have to be auto-bracketing and locked into a continuous burst mode - effectively shooting at 72fps! Hmm...

Another way would be to use an old-fashioned 3 CCD prism with 3 HD chips gen-locked together generating 3 video streams running to a hard drive (or 3 hard drives?). The technology for all that is well established.

My 2p.

Nicky Campos
January 15th, 2009, 02:16 PM
Theoretically it can be done, I'm sure... just like theoretically it's possible to get the Canon 5D Mk2 to give us our dream camera! Find us a camera software hacker then.

Auto-bracketing is a stills camera function - no stills camera can run at 72fps... the fastest is 15fps I think. For video cameras the iris cannot be used because the DOF will change... leaving us with shutter or ND filters.

Although if a "camera software hacker" could just implement NDs on the 3 chips separately (e.g. green - no ND, red - ND1, blue - ND2) and of course treat each chip as a single Bayer pattern (e.g. recover full colour from each chip using software, like Juan from Reel-Stream does) then maybe it can be done?

Karel Bata
January 16th, 2009, 05:34 AM
12fps eh? So I'm only out by a wee factor of six..? :) But the main point is that stills cameras already do auto-bracketing, so it's not a completely wild idea.

But you like the 3 CCD prism? The software ND sounds like it could work.

I wonder if it's possible to replace the chips on an old prism camera with your own? Or is that just too difficult? If it is possible (big big if), do you really need full HD res on all 3 chips? So what about this: the -1 could be a more sensitive B&W chip to pick out shadow detail, and the +1 a cheaper, less sensitive chip for getting the highlights. Genlocked together, running to 3 hard drives. Shoot me down in flames!

HDR videography will certainly be a reality in ten years, but it would be so nice to do it sooner.

Biel Bestue
January 17th, 2009, 04:48 AM
3CCD with the dichroic filters on the prism replaced with NDs of different levels? But would such a 3-exposure set be useful as a "bracketed" image?

Take Vos
January 17th, 2009, 05:58 AM
There is a Bayer-like sensor which also includes white pixels. As there is no colour filtering on the white pixels, they are a lot more sensitive and are thus able to capture a few more bits of intensity depth - the Kodak RGBW filter.

I have also seen a sensor with two photosites for each pixel, a large one and a small one. I don't know where I've seen that though.

Cheers,
Take

Andy Tejral
January 17th, 2009, 10:15 AM
As I've read this thread, one question comes to mind: once you've got this HDR video, where are you going to view it?

Despite all the talk of huge contrast ratios on flat-screen TVs, I believe all the electronics are still based on the old NTSC 100 IRE standard.

Just like still images: yes, you can create images that have a large dynamic range - in fact, so large that no printer or screen can actually display the image.

Karel Bata
January 17th, 2009, 11:27 AM
You would use Photomatix (http://www.hdrsoft.com/) to 'blend' the image sequences together, bringing the highlights and shadows within normal viewing parameters. Well, not blend but tonemap (I'd still appreciate an explanation of how exactly that works without flattening the image!). It means - taking that Photomatix page as an example - combining the three pictures of the Eiffel Tower on the left to make the big one.

Currently Photomatix can only do this in batch mode, treating each triplet of frames as a single image. HDRsoft (or someone) needs to make a plug-in so it works with moving images.
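The bookkeeping such a plug-in would need is at least trivial: group the interleaved A-B-C sequence into one bracketed triplet per output frame, ready to hand to a stills HDR tool in batch mode. A sketch, with a purely hypothetical file-naming scheme:

```python
def group_triplets(frames):
    """Split an interleaved under/normal/over frame sequence into
    per-output-frame brackets; any incomplete tail is dropped."""
    return [tuple(frames[i:i + 3]) for i in range(0, len(frames) - 2, 3)]

# 7 captures yield 2 complete bracketed output frames.
shots = [f"frame_{n:04d}.tif" for n in range(7)]
triplets = group_triplets(shots)
```

The hard part, as noted above, is keeping the tonemapping settings temporally consistent across those triplets, which no stills tool addresses.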

Biel Bestue
January 17th, 2009, 06:50 PM
Take: Super CCD - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/Super_CCD)

Biel Bestue
February 15th, 2009, 07:23 PM
Wow, this Dynamax seems to rock, and it's from 2008 - I guess by the end of 2009 or 2010 we'll see another Genesis from Panavision using this.

But for us this chip may be really expensive, coming from Panavision.

Toenis Liivamaegi
February 16th, 2009, 12:00 PM
But as far as Red RAW goes, I'm sure it can be converted to HDR. If it can be done from single-exposure RAW photographs, why not from Red RAW? Even with still-image tools you can extract three different exposures from Red RAW and off you go. RED One rentals are cheap these days.

T

Karel Bata
February 17th, 2009, 04:56 AM
Nice idea, but how far does the latitude of Red extend above white and below black? Film stock will go 2+ stops, while a top-end digital stills camera will also have some of that extra latitude built in. I doubt that Red (which is really 2 to 3.2K, depending on who you ask) will record any image information that is 'invisible' on set when any such spare bit depth could instead be used to beef up what you can actually see.

Nicky Campos
February 17th, 2009, 03:13 PM
I believe this can be done, but remember Red is only raw in terms of "raw Bayer sensor data" - it's actually VERY compressed (what a marketing genius) - so artifacts will be many, like I said earlier :(

However the new Ikonoskop camera is TRUE 12-BIT RAW (not compressed at all), which I believe could be the first acceptable tone-mapping tool, provided you get all your exposures in a single capture... shoot in the shade I guess :)

Karel Bata
February 18th, 2009, 04:50 AM
I think someone here is getting this wrong. Is it you guys or me? Or are there different flavours of HDR I don't know about?

My understanding is that a normally exposed image of a subject with an excessive brightness range will look like this http://www.hdrsoft.com/images/eiffel/mean530.jpg with a burnt-out sky and detail lost to shadow, while the bracketed exposures will capture those details: http://www.hdrsoft.com/images/eiffel/over530.jpg and http://www.hdrsoft.com/images/eiffel/under530.jpg No one exposure gets it all. You're saying that a Red or an Ikonoskop will? I don't think so. Neither of them will be able to handle the range in a picture like this: The Lonely Road to the Dinosaur Dig (http://www.flickr.com/photos/stuckincustoms/2767250890/) which was shot pointed at the sun! Maybe that particular one's a long way off, but what I'd love to be able to do is get images looking like this: This is Nathaniel (http://www.flickr.com/photos/stuckincustoms/198735994/).

Or is that last one possible from a RAW image? Would be good, but I doubt it. I'm pretty certain you can't do this: http://www.hdrsoft.com/images/eiffel/tm530.jpg

Nicky Campos
February 18th, 2009, 02:53 PM
Yes, a lot of pro photographers use a single-exposure raw 16-bit (although usually 12-bit wrapped into 16 bits) photo to tone-map and achieve an "HDR/tone-mapped effect".

Sometimes, in extreme lighting conditions, one should bracket exposures to capture a wider dynamic range.

Remember that a true HDR image cannot be displayed on an 8-bit (low DR) monitor; that is why tone-mapping is usually applied, so that all the captured DR can be displayed within an 8-bit range.

Dylan Couper
February 18th, 2009, 09:58 PM
I've done live action (non-timelapse) HDR-esque video. I used my Canon 1D Mk2 shooting RAW frames at 8fps, then made multiple exposures of each still using different ISO settings, blended them in Photomatix, brought them into Vegas, and boom... video.

And... loads of range, but the surreal look was too much and looked cheesy, so I didn't pursue it any further.
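Dylan's trick of developing one RAW at several "ISOs" amounts to scaling the linear sensor data up and down in stops before clipping to the displayable range. A rough sketch of that idea (not his actual Photomatix workflow; names are illustrative):

```python
import numpy as np

def pseudo_brackets(linear, stops=(-2, 0, 2)):
    """Fake an exposure bracket from one linear RAW frame by pushing
    it up/down by whole stops, then clipping to [0, 1] - roughly what
    developing the same RAW at different ISO settings does."""
    return [np.clip(linear * 2.0 ** s, 0.0, 1.0) for s in stops]

raw = np.array([0.02, 0.25, 0.9])        # one linear-light capture
under, normal, over = pseudo_brackets(raw)   # the +2 'exposure' clips highlights
```

The resulting triplet can then be fed to any exposure-blending or tonemapping tool as if it were a real bracket, though of course no new shadow or highlight information is created beyond what the single RAW captured.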

Nicky Campos
February 19th, 2009, 01:27 AM
Nice, Dylan! I know the cheesy look is off-putting, but that's just one possible look - try doing something more natural and wide-DR looking.

Craig Bellaire
February 19th, 2009, 02:35 PM
How about using HD USB webcams and capturing all at once?

Biel Bestue
February 19th, 2009, 09:25 PM
"Yes alot of pro photographers use a single-exposed raw 16-bit"

Strange, when HDR images can go up to 64-bit!

Raw doesn't have to mean more latitude - it just means a non-debayered image.

Nicky Campos
February 23rd, 2009, 02:31 AM
"How about using HD usb web cams and capture all at once?"

Great idea, but is it possible to capture simultaneously with many cams and have them sync-locked... with one capture/PC station?

Also, the cameras cannot be in the exact same place, can they?

This would be really cool if someone could assist... even for stereo/3D parallax?

Craig Bellaire
February 23rd, 2009, 07:22 AM
With this software
| NORPIX | DIGITAL VIDEO RECORDING SOFTWARE (http://norpix.com/products/multicamera.php)

Yes, you can record multiple cameras on one computer... Or use IP cameras and record over a gigabit crossover cable straight to the computer... no encoding, just capture the stream, like using a USB HD TV card...

And for the offset: get the cameras as close as possible in a rack or mounted on something, check all the levels and make sure they are as close as can be, shooting straight, then overshoot and realign in post and crop...

Anyway, that's what I've been pondering...

Karel Bata
March 6th, 2009, 11:18 AM
I'd much rather use a single camera, which is why I'm going to try some experiments with film. Unlike video, modern film stock has a lot of info 'hidden' in its latitude. It is in fact an HDR capture medium already, but any photographic paper, or monitor, that was set to flatten out and display the whole range would look very, er, flat.

I've just tried something with an old stills negative: Use password 55555 to see the images (http://gs93.photobucket.com/groups/l67/EZA6L48IV5/?albumview=grid)

The first pic shows how a 'normal' print from the neg would look. Unimpressive. Burnt-out sky. By altering the light setting on the flat-bed scanner I produced the next two stills, which pulled out the information previously hidden. I then combined them in Photomatix.

The result was a bit flat, so I tweaked the contrast - which is what you'd do in post anyway. (And I must confess - naughty me - I also fixed the scratches because they really bugged me.)

Not bad, methinks, for a first attempt with the Photomatix settings left at their defaults. The important points are 1) this is from only one camera exposure and 2) it uses regular domestic Kodak stills stock. In other words, all done in post. With video you would need 3 separate camera exposures to get this (or something like a Dynamax chip), but with film you can do it in one.

Which means, with the right software, you can do it with live action in a DI. However, that's a bit out of my budget right now...

Thoughts...?


Nicky Campos
March 7th, 2009, 01:14 AM
I like that, Karel!!!

I have a friend who has done this same thing... although I don't see any extra latitude in the shadows? Is there anything else in there?

Damn, I wish film were more affordable - the HDR look isn't well suited to the shallow DOF of a 35mm-sized format... so I'm thinking a good 16mm stock equivalent could be cool???

What do you think of 16mm for single-exposure HDR? A guy once told me he could put a 16mm rental package together for the same price as a good 2/3-inch HD camera package??

Karel Bata
March 8th, 2009, 05:26 PM
Glad you like it, Nicky. Yes, it looks very good for a random image!

Still a few problems to crack. Like:
› How do you get the negative sequence into a computer without a proper telecine? And scanned three times at different light levels! That's a lot of scanning.
› Film speeds are never exactly as stated, so optimally you need to do a clip test for each film batch to establish what the speed really is.
› It could be that the film exposures have to be spot on for this to work - or at least incorrect in the same way on all the shots within a sequence, or you may not get a good match.
› And then there's the amount of time it takes to scan it all in and process it!

I reduced the images from my original 35mm scan to 1280 pixels wide, and it then took Photomatix about a minute to process each frame (in batch mode that would be half an hour for a second of film!). I then shrunk them to the size you see on Photobucket.

I think Super 16 should work fine. Regular 16 might be a problem with grain - the HDR processing may accentuate it. Might be nice though. Again, more tests. I'll see if I can dig out some old 16mm neg.

I've also tested using only two scans, which would lessen the workload, but that didn't come out too well - maybe it's a case of finding the settings' sweet spot. There's a lot of stuff on the net written about HDR, so now that I've established this approach works in principle I ought to do some serious RTFM.

I'll continue doing tests with old negs taken in different lighting conditions and post links here.

:-)

Karel Bata
March 23rd, 2009, 01:39 PM
Here's some more experiments I've done with still images taken from film negs and then digitised at different light settings (you'll need password 55555 ) (http://gs93.photobucket.com/groups/l67/EZA6L48IV5/?albumview=grid)

And an interesting allied link: Work with natural light in HDR (http://www.adobe.com/designcenter/aftereffects/articles/aft7it_hdr/aft7it_hdr.pdf)
"It's a common yet mistaken assumption that the only Adobe® After Effects® Professional users who have any business using 32-bit mode are artists working on big-budget blockbuster films. The reality is that anyone using the Professional edition can benefit from the 32-bit float High Dynamic Range (HDR) pipeline."

Karel Bata
November 15th, 2009, 05:43 PM
Well it seems that Arri may have beaten me to it!

Arri's new cameras Everything is about to change... (http://ow.ly/CxDl) according to an article in High Definition magazine (http://ow.ly/CxBn) have a 'magic' trick up their sleeve:
"The key to the increased performance is what we call ‘Dual Read Out’ – we have on the sensor 32 output channels and we read every pixel twice through a normal gain and through a high gain. Each of these read outs happens in 14 bit depth. We then combine them and get 16 bit pixel depth for every pixel element. So that results in a very quiet image with very low noise. This is a philosophy which is similar to our scanner with its double flashing, however these read outs are happening at the same time.”

"Arri promise more ‘magic’ to do with this heavy processing which may be to do with HDR or High Dynamic Range as its known in stills photography. This is where you combine several different exposures into one, taking the best exposure elements from each to make a extraordinary image. Could this be what Arri have in mind?"
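The 'Dual Read Out' scheme quoted above can be sketched in a few lines: where the high-gain read hasn't clipped, use it (cleaner shadows); otherwise fall back to the low-gain read scaled up. The 4x gain ratio here is my own assumption, purely for illustration - Arri don't state their figure:

```python
import numpy as np

def combine_dual_gain(low_gain, high_gain, gain_ratio=4, full_scale=2**14 - 1):
    """Combine two 14-bit readouts of the same pixels into one wider
    value: prefer the high-gain sample where it hasn't clipped, else
    scale up the low-gain sample. gain_ratio is an assumed figure."""
    high_valid = high_gain < full_scale
    merged = np.where(high_valid, high_gain, low_gain * gain_ratio)
    return merged   # range is now 0 .. gain_ratio * full_scale, ~16 bits

low = np.array([100, 4000, 16383])     # low-gain 14-bit samples
high = np.array([400, 16000, 16383])   # same pixels at 4x gain; last one clips
out = combine_dual_gain(low, high)
```

Since both reads happen simultaneously, there is none of the ghosting that plagues the time-offset bracketing discussed earlier in this thread.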

Well, if Arri aren't thinking that, other folk clearly are. Tonemapping software is now well established, and it wouldn't be too difficult for a programmer to adapt it. Photomatix (the standard in stills HDR software) are going to kick themselves for not looking ahead and making their software available as an AE plug-in. I wrote to them about it, and they said they had no plans. So someone else will do it, and it'll be the next Big Thing.