Here’s part two of my CineGear presentation, in which I take an uncomfortably close look at flesh tone and overexposure…
First, read Part One. Then, watch this video:
I’m fascinated by over- and under-exposure. When I shot film I almost never placed flesh tones on the exposure that they were supposed to be. I ran them a little hot or a little dark as a matter of habit. I can’t explain why; I guess I was just ornery in my youth.
When I started shooting video I consistently erred on the bright side, and it took me quite a long time to realize why: video’s decreased resolution made the image feel a bit muddy to me, so I popped the important things in the frame with a bit more light to make them stand out more clearly. Whereas I grew out of using backlights and edge lights for separation in film, I grew back into them in video.
Pushing flesh tone just a little too far resulted in really nasty shifts in color and chroma clipping, so I had to walk a fine line.
Some cameras were more forgiving of bright flesh tones than others. I remember the Panasonic SDX900 and early Varicams (some of you may need to break out your history books at this point) being much more forgiving than a number of Sony cameras, such as the original F900, which seemed to have two stops of overexposure latitude, tops! This improved with the F900/3 software update (which, by the way, was the first time I was aware that a camera could have a software update!), and the F900R improved overexposure latitude further and represented a sea change in how Sony cameras rendered color. Still, I always found myself exposing flesh tone a little on the high side, although less so in HD than in SD.
In SD I’d let flesh tone peak at 70%, but when I tried that in HD I found it was way too hot. That baffled me until I realized that SD video had “setup,” which placed black at 7.5% on a waveform monitor, whereas HD has no setup, so its black falls at exactly 0%. What was formerly 70% in SD became roughly 62% in HD, so exposing flesh tone at my former SD levels resulted in some bad-looking pictures. I quickly adjusted my technique.
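That offset is easy to sanity-check. Here’s a minimal sketch (the function name is mine, not from any broadcast library) that simply subtracts the 7.5% setup pedestal:

```python
def sd_to_hd_level(sd_percent, setup=7.5):
    """Approximate where an SD waveform level (with 7.5% setup) reads
    on an HD waveform, where black sits at 0% instead of 7.5%."""
    return sd_percent - setup

# Flesh tone peaking at 70% in SD reads noticeably lower in HD terms:
print(sd_to_hd_level(70))  # 62.5
```

Strictly speaking, removing setup also means rescaling the remaining 92.5% span back to full scale, but the simple offset is what lines up with the roughly 62% figure above.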
As cameras improved they became much more forgiving of overexposed flesh tone. Sony’s hypergammas gave us the first huge leap in this direction, but it was Arri’s Alexa that really set the bar: properly exposed flesh tones were beautiful, but overexposed flesh tones were “filmic.” I set out to discover why and learned quite a lot on the way.
In part one I discussed how film limits color saturation by dint of being a subtractive medium: the darkest colors are always the most saturated. Video camera manufacturers, not having any direct film experience, naturally worked to video’s strengths, where highlights are most saturated. They quickly learned that they had to roll off saturation in extreme highlights or nasty things happened when a color channel clipped, but short of that they were quite content to let color saturate right up to 70% on a waveform monitor. (About 70% of the luma information in a video signal is derived from the green channel, so when luma hits 70% most cameras will start desaturating highlights before colors distort.)
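That 70% figure comes straight from the Rec 709 luma weights, which are published in the BT.709 spec. A quick sketch:

```python
# BT.709 luma weights: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'
R_W, G_W, B_W = 0.2126, 0.7152, 0.0722

def luma_709(r, g, b):
    """Luma (0-1) of a gamma-encoded RGB triple under BT.709 weights."""
    return R_W * r + G_W * g + B_W * b

# A fully saturated green alone carries about 71.5% of full luma,
# so by the time luma nears 70% the green channel is close to clipping:
print(round(luma_709(0.0, 1.0, 0.0) * 100, 1))  # 71.5
```

In other words, green dominates luma so heavily that a camera holding saturation past 70% luma is flirting with green-channel clipping, which is exactly where the nasty color distortion starts.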
Last year I performed an experiment comparing saturation roll-off between a Sony F55 and an Arri Alexa, and I presented the results to Sony during a new product focus group I attended at NAB in 2013. They’d received a lot of requests to emulate Alexa color saturation roll-off, but this was the first time someone had shown them what was really going on. Six months later they introduced the LC-709 Type A Look Profile, which attempts to replicate how Alexa and film handle color saturation in relation to exposure.
This history is what I try to illustrate with the video above. I start out with traditional Rec 709, the standard to which all HD equipment is built, to demonstrate how creatively limited we are when image capture is bound to that spec. Rec 709 only allows for about six scene-referred stops of dynamic range, which is fine in a studio with zero reflective surfaces and some serious makeup, but not so good in the real world of film-style video production. After that I show Hypergamma 7, which at one time was my go-to gamma curve when shooting with Sony cameras. I thought it did a great job of making a video camera look less “video-y” by rolling off a modern camera’s extended highlight latitude in an aesthetically pleasing fashion, but it still didn’t satisfy me. Highlights rolled off well, but color saturation in highlights was still an issue.
Sony, like most manufacturers, falls into the “if the camera can do it then let’s use it” school: if the camera can make saturated colors then it should. Asking them to pull back a bit forced them to fight against their very nature as camera manufacturers, but—to their credit—they listened to the market and did it anyway.
The rest of the video shows comparisons between LC-709 Type A and Sony’s enhanced Rec 709 look, LC-709, which does not aggressively control saturation in highlights; SGamut color reproduction vs. SGamut3.cine color reproduction; and SLog2 and SLog3. It also shows off their Cine + 709 look, which attempts to emulate the color and contrast of an actual film print.
I don’t see a lot of practical difference between SLog2 and SLog3. Some have said that SLog3 is noisier, and of course it is if you look at it directly: SLog2 puts middle gray down at 32-33%, while SLog3 places it at a more Cineon-friendly 40% to open up the shadows a bit more. But grading will either pull SLog2’s middle gray up to 40% (where it falls in Rec 709) or push SLog3’s shadows down a bit, so ultimately it’s a wash. Looking at log footage directly doesn’t tell you anything important because log formats are meant for storage only: they hold information, they don’t display it. You must grade log footage in order to judge it.
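Sony publishes the S-Log3 transfer function, so you can check where middle gray lands yourself. This sketch implements that published curve (the hair’s difference from the 40% figure above is just rounding):

```python
import math

def slog3(x):
    """Sony S-Log3 OETF: scene-linear reflectance in, code value (0-1) out,
    per Sony's published S-Log3 formula."""
    if x >= 0.01125:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

# 18% gray (scene-linear 0.18) encodes at about 41% of full scale,
# versus roughly 32% for SLog2:
print(round(slog3(0.18) * 100, 1))  # 41.1
```

Note how the log term zeroes out exactly at 0.18 reflectance, which is why middle gray falls at the fixed 420/1023 code value regardless of the curve’s slope elsewhere.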
The one big advantage of SLog3, which is something Alister Chapman told me immediately after my presentation, is that it pulls shadow detail up to the point where it is no longer heavily compressed. Video codecs have a lot of tricks up their sleeves, and a big one is that they throw away a lot of detail in the darkest shadows. By boosting the shadows, SLog3 preserves detail by lifting it above the threshold at which XAVC and SStP aggressively discard it. SLog2 doesn’t do this, and in his experiments Alister has seen more compression artifacts in SLog2 shadow detail than in SLog3. (This does not apply to raw, which is uncompressed.)
I hope you found this interesting. I’ve got more coming soon, including an article I never expected to write that came about as the result of comments that I received immediately after the presentation. Stay tuned!
About the Author
Director of photography Art Adams knew he wanted to look through cameras for a living at the age of 12. After ten years in Hollywood working on feature films, TV series, commercials, music videos, visual effects and docs he returned to his native San Francisco Bay Area, where he currently shoots commercials and high-end corporate marketing and branding projects.
When Art isn’t shooting he consults on product design and marketing for a number of motion picture equipment manufacturers. His clients have included Sony, Arri, Canon, Tiffen, Schneider Optics, PRG, Cineo Lighting, Element Labs, Sound Devices and DSC Labs.
His writing has appeared in HD Video Pro, American Cinematographer, Australian Cinematographer, Camera Operator Magazine and ProVideo Coalition. He is a current member of the International Cinematographers Guild, and a past active member of the SOC and SMPTE. His website is at www.fearlesslooks.com. Find him on Twitter: @artadams.