Jaron Berman
February 1st, 2007 11:16 AM
With SD, broadcast monitors were used because basically everyone used them in acquisition and color correct. While the old "NTSC" jokes applied, there was a generally accepted range where things would look good. Because everyone at the broadcast/mastering level used roughly the same monitors (and, more importantly, the same vectorscopes and waveform monitors), the signal transmitted by most networks would have roughly the same color. What the end user decided was "pleasing" would then be uniformly pleasing across most networks, instead of one being heavily red, another heavily green, etc. This was the "ideal" situation: transmit the signal in whatever format (broadcast, tape, DVD) to look the same, and let the viewer choose from there.
What has changed now is the amount of content created that never goes through a proper color correct. That process requires calibrated monitors AND measurement devices. Online editors and colorists rely on actual measurements to set the final coloration of their product, not just monitors. Why not consumer monitors? Because broadcast monitors can be calibrated so they look fairly similar to one another regardless of location... or at least that's the idea. More importantly, they display a wider range of colors and offer FAR more adjustability than consumer displays. If you judge your color on a consumer display with a narrow color gamut, you may think it looks fantastic until you watch it on another consumer display with a different color response. The idea being: edit and create to the largest possible color palette, and give your viewers the possibility of watching your "ideal" signal. Should they choose to warm or cool the picture on their own tube, that's fine - you gave them the choice.
Bottom line is: don't trust the monitor by itself. Learn to read the scopes alongside the monitor, and you'll have a far better looking product at the end.
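To make the "read the scopes" advice concrete: a waveform monitor is really just reporting where your signal levels sit relative to legal limits, independent of how any particular display renders them. Here's a toy Python sketch of that idea, checking 8-bit luma against the Rec. 601 studio range (16 for black, 235 for white). The frame data and function names are purely illustrative, not from any real NLE or scope:

```python
# Toy "software scope": check 8-bit luma samples against the
# Rec. 601 broadcast-legal range (16 = studio black, 235 = studio white).
# Function names and sample data are hypothetical, for illustration only.

BLACK_LEVEL = 16   # 8-bit studio black
WHITE_LEVEL = 235  # 8-bit studio white

def luma_out_of_range(frame_luma):
    """Return the fraction of samples outside the legal luma range."""
    bad = sum(1 for y in frame_luma if y < BLACK_LEVEL or y > WHITE_LEVEL)
    return bad / len(frame_luma)

def clamp_to_safe(frame_luma):
    """Hard-clip luma into the legal range (a crude 'safe levels' filter)."""
    return [min(max(y, BLACK_LEVEL), WHITE_LEVEL) for y in frame_luma]

# A frame with crushed blacks and blown highlights:
frame = [10, 16, 120, 200, 235, 250]
print(luma_out_of_range(frame))  # 10 and 250 are illegal: 2 of 6 samples
print(clamp_to_safe(frame))
```

The point of the sketch is the workflow, not the code: a shot can look fine on whatever monitor you happen to own while its levels are still illegal, and only a measurement (hardware scope or software equivalent) will tell you.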