I’m a geek. I read cinematography, videography and color textbooks for fun. I’m awful at math so I skip the equations and look for digestible nuggets of information that help me understand advanced concepts in intuitive ways.
I recently stumbled across this quote in a book about digital cinema mastering:
“…the most saturated colors that film can reproduce are dark cyans, magentas and yellows, each produced by a maximum density of its respective image dye, but resulting in low luminance levels. In contrast, the most saturated colors on a digital display are the bright red, green and blue primaries, each produced by the maximum output of its additive color channel, and therefore resulting in the maximum luminance for that channel.”
— Kennel, Glenn. Color and Mastering for Digital Cinema. Focal Press, 2007. Page 22, “Unstable Primaries.”
Mr. Kennel helped design Kodak’s first high-speed film stock, 5293. He’s currently the CEO of Arri.
In this article I showed how Alexa limits color saturation in highlights and mid-tones, something that most video cameras either don’t do or don’t do as well. As the video clip in that article shows, Alexa color saturation locks once the luma value of any hue approaches middle gray. (Middle, or 18% gray, falls at 40% on a luma waveform.) As exposure increases beyond middle gray a hue will brighten but its saturation will remain the same. The paragraph quoted above gives an important clue as to why.
I’m not sure it’s fair to say that Arri is “emulating” film so much as they are pursuing a film aesthetic in a digital medium. When Kennel talks about how the most saturated colors in film are also the darkest he seems to be referring to film in terms of it being a subtractive color medium. Video is inherently an additive color medium, and most video cameras embrace that aesthetic. That’s one of the reasons why film and video have such distinctive looks. Alexa seems to buck that trend by emulating the functionality of a subtractive color model in an additive color system.
This makes sense given that Arri has made both film cameras and film scanners for quite some time. If any video camera manufacturer understands what the “film look” really is, it’s probably Arri.
I’ve wracked my brain trying to think of a good metaphor to describe the differences between these two aesthetics, and the best I can come up with is to compare film to watercolor painting:
Watercolor painting is a subtractive color medium because it starts with a broad spectrum white palette—a white paper surface—and subtracts color from it. The dyes in the paint reflect some parts of the spectrum and absorb others, effectively shaping the light bouncing off the white paper underneath.
Lighter tones are always the least saturated because that sense of lightness comes from seeing the white of the paper through the paint dyes. Adding broad spectrum white light to any hue will make it appear less colorful. The most saturated watercolors are always the darkest because dense paint blocks most of the spectra from the white base, and removing spectra from white light increases its colorfulness while reducing its apparent brightness.
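For readers who like to tinker, this subtractive behavior is easy to sketch in a few lines of Python. This is a toy model, not real film or watercolor colorimetry: the white light and the cyan dye transmittances are invented values, and piling on more dye simply raises the optical density, which exponentiates the per-channel transmittance (Beer-Lambert-style).

```python
# Toy subtractive color model. The dye numbers are illustrative only.

def filter_light(white, transmittance, density):
    """White light after passing through a dye layer of the given density."""
    return [w * (t ** density) for w, t in zip(white, transmittance)]

def saturation(rgb):
    """Crude HSV-style saturation: (max - min) / max."""
    hi, lo = max(rgb), min(rgb)
    return 0.0 if hi == 0 else (hi - lo) / hi

white = [1.0, 1.0, 1.0]      # broad-spectrum white paper or light source
cyan_dye = [0.4, 0.9, 0.9]   # absorbs mostly red, passes green and blue

for density in (0.5, 1.0, 2.0, 4.0):
    rgb = filter_light(white, cyan_dye, density)
    print(f"density {density}: brightness {max(rgb):.3f}, "
          f"saturation {saturation(rgb):.3f}")
```

Run it and you’ll see brightness fall as saturation climbs: in a subtractive medium the most saturated cyan is also the darkest, exactly as Kennel describes.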
A simple Google image search for watercolor art shows that this principle generally holds true. This image is particularly interesting. (Some colors, such as yellow, seem very saturated in some of these images, but this may be a side effect of viewing these images on a video monitor. I have some beautiful Japanese watercolor woodblock prints hanging over my desk and the most saturated colors are definitely the darkest.)
Video is completely different. Our brain combines clusters of red, green and blue subpixels into a single colored pixel. The relative intensities of these subpixels determine the color we perceive: for example, if the blue and green subpixels are full on and the red subpixel is full off, we’ll perceive that pixel as cyan.
LCD monitors are, in a way, an additive color system in that the most saturated colors are the brightest, although those colors are created through a subtractive process. Every LCD display starts off with a “white” backlight (I say “white” because modern LCDs use either LED or fluorescent sources that may possess gaps in their spectrum) that is filtered at the subpixel level through red, green and blue filters. Filters only block light—they never add it—so the red filter, for example, absorbs all the wavelengths from the backlight that aren’t red and passes only the wavelengths that are.
What makes these monitors additive is the fact that those pure hues are blended back together to create the final colors that we see. Even though the base colors are created through a subtractive process, it’s their addition that counts in the end because that’s what reaches our eyes.
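A quick way to see why an additive system’s most saturated colors are also its brightest is to run the Rec. 709 luma weights (those coefficients are real, from the HD video standard; everything else here is just illustration) over the fully saturated primaries and secondaries:

```python
# Rec. 709 luma weights (real constants from the HD video standard).
def luma709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The most saturated additive colors: one or two channels at full output.
saturated = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1),
             "cyan": (0, 1, 1), "magenta": (1, 0, 1), "yellow": (1, 1, 0)}

for name, rgb in saturated.items():
    print(f"{name:8s} luma = {luma709(*rgb):.4f}")
```

Fully saturated cyan lands at a luma of about 0.79 and yellow at about 0.93: each is the brightest that hue can ever be, the exact opposite of film’s dark, dense dye maximums.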
Film is different in that there is no part of the process that is truly additive. The creation of the film negative, where dyes are deposited on a transparent substrate, is subtractive, and the projection process, where white light is projected through dyes, is also subtractive. (This section edited for clarity.)
The difference between subtractive color and additive color is key to differentiating between the classic “film” and “video” looks. Film dyes subtract light to recreate color and because the dyes aren’t perfect they can only subtract so much color from white light, beyond which point brightness increases but saturation stays the same. Video, on the other hand, uses tiny glowing saturated lights to recreate color, so the most saturated color possible comes from turning one or two of those tiny saturated lights full on.
In the article referenced above I show that Alexa handles color very differently to a Sony F55, which renders color in an additive manner similar to traditional video cameras. As exposure increases the F55’s color saturation increases as well until a color channel clips, usually when something in the image hits 70% on a luma waveform monitor. (This is a gross simplification of what really happens, but that’s the subject of another article.) The Alexa, on the other hand, saturates color until the luma waveform hits 30%-40%, at which point color saturation locks into place: colors only get brighter, NOT more saturated.
It’s almost as if luma is completely decoupled from saturation beyond a certain point: one increases while the other doesn’t, whereas other cameras lock the two together until one fails.
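Here’s a toy sketch of that decoupling in Python. To be clear, this is NOT Arri’s actual processing, which isn’t public; it just illustrates the behavior described above, with the 40% lock point taken from the waveform observations.

```python
# Toy "saturation lock": saturation tracks exposure up to a lock point,
# then holds while luma keeps climbing. Not Arri's real algorithm.

LOCK_POINT = 0.40  # luma level (40% on a waveform) where saturation locks

def saturation_at(luma, max_sat=0.8):
    """Saturation rises with exposure until the lock point, then holds."""
    return max_sat * min(luma, LOCK_POINT) / LOCK_POINT

for luma in (0.2, 0.4, 0.6, 0.9):
    print(f"luma {luma:.1f} -> saturation {saturation_at(luma):.2f}")
```

Past the lock point, saturation simply stops responding to exposure; only brightness keeps climbing.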
Director of Photography Geoff Boyle, FBKS, recently shot a test illustrating how Alexa handles flesh tone with increasing overexposure. He has graciously allowed me to reproduce his results. (The originals can be found here.)
Notice how color saturation doesn’t increase beyond the level seen in the “normal” exposure. The model’s face gets brighter, but it doesn’t get richer, redder or ruddier. If this were a watercolor painting I’d say that each successive step of overexposure simply shows us more of the white paper underneath the paint.
One of my early mentors, Alex Funke, ASC, was a master of photochemical blue screen. From him I learned that it’s possible to create a bright blue or a saturated blue on film, but not at the same time. The rule of thumb was to expose blue screens as close to middle gray as possible: more exposure reduced saturation, and less exposure reduced brightness. At the time I learned this I only understood that this was the case, but I didn’t know why.
Now I know why, at least in watercolor terms: Lots of blue paint obscures the white paper beneath, eliminating any sense of lightness but creating a very rich blue. Too little blue paint lets too much white through, resulting in a very light but desaturated blue. The sweet spot is in the middle of the tonal range, where I can find a nice compromise between brightness and saturation.
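That sweet spot even falls out of a toy subtractive model: score a blue dye by brightness times saturation, and the best density sits in the middle of the range rather than at either extreme. (The dye values here are made up for illustration, not measured film data.)

```python
# Toy "blue screen sweet spot": in a subtractive (dye) model, the product
# of brightness and saturation peaks at a middling dye density.

def screen_quality(density, dye=(0.3, 0.3, 0.9)):
    """Brightness x saturation for white light filtered by a blue dye."""
    rgb = [t ** density for t in dye]
    brightness = max(rgb)
    saturation = (max(rgb) - min(rgb)) / max(rgb)
    return brightness * saturation

densities = [d / 10 for d in range(1, 51)]   # densities 0.1 through 5.0
best = max(densities, key=screen_quality)
print(f"best density: {best}")   # an intermediate value, not an extreme
```

Too little dye scores low because the screen is bright but washed out; too much scores low because it’s rich but dark. The compromise lives in the middle of the tonal range, just as Funke taught.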
Saturating blue is not a problem in video. It’s really easy to use Super Blue Kino Flo tubes and create a crazy bright and saturated blue screen that keying software loves. For years the rule of thumb was to expose blue and green screens on video at a luma value around middle gray but that’s a misperception based on film exposure techniques. I’ve lit green screens to 70% on a luma waveform with great results.* The primary limiting factor is colored spill light, not color saturation and brightness. The secondary limiting factor is that most video cameras desaturate extreme highlights to control color shifts when a color channel clips, and making a blue or green screen too bright or saturated might cause clipping and subsequent desaturation. (This method of highlight desaturation is very different to what Alexa does and is more about damage control than aesthetics.)
It’s fascinating that this one color can illustrate so strongly the difference between film and video color reproduction. I’m sure there’s lots more I don’t know about film and video colorimetry that Arri is exploiting to make Alexa look so beautiful. It’s really frustrating sometimes, to know that there’s so much I don’t know that I don’t know.
By decoupling color saturation from luma beyond a certain level Arri seems to be deliberately playing against video’s strengths as an additive color system, and I think that’s a key aspect of what sets them apart. Video has its own look, and it’s a pleasing look in its own right, but bright saturated colors tend not to appeal to visually sophisticated viewers (see The Wagner Color Response Report, if you can find a copy). Film’s approach to color has more in common with painting than TV displays, and it’s an aesthetic that greatly appeals to many in our industry… including myself.
Thanks are due to Geoff Boyle for allowing me to reproduce his work in this article. He owns the Cinematography Mailing List, one of the best resources (if not the best) for cinematographers and related crafts. Check it out here.
The opinions expressed in this article are my own.
Disclaimer: I designed the DSC OneShot dailies grading chart that appears at the head of this article.
Art Adams | Director of Photography | FearlessLooks.com
*A bit of trivia: humans are most sensitive to green light in the middle of the visible spectrum, as those wavelengths predominate in any normal environment, so that’s the portion of the spectrum to which light meters are most sensitive. For that reason light meters don’t see strongly saturated red or blue light terribly well, so the warmer or cooler the light being measured, the less reliable the meter reading becomes.
For this reason spot meters were not very useful in judging exposure when shooting blue screens on film, as they consistently read the screen as one or more stops too dark. This resulted in overexposed and desaturated blue screens.
The most reliable exposure method was to place an 18% gray card in front of the blue screen, light it to the desired f/stop using a spotmeter, and then light the screen by eye by viewing both the gray card and the blue screen through a dark blue filter. Viewing the gray card through the blue gel made the gray card appear the same blue color as the screen, and it was a fairly simple matter to then light the screen by eye to match the gray card while viewing both through the blue filter. (Fairly simple is relative—photochemical blue screens had to be lit evenly within 1/4 f/stop all the way across!)