When to use V-Log


Bruce Lomasky
October 13th, 2015, 12:52 PM
I have had my DVX200 for only a couple of days, and being a newbie, this is perhaps a silly question (I have not worked with a camera that had a "log" setting before).

It seems to me that VLog works great on a contrasty (sunny) scene. If I am shooting outdoors, overcast, with less tonal range, is there any advantage to using V-Log?

2nd part of the question: has anyone "quantified" what the 6 different built-in scenes do to the image? I know I can do trial and error, but it would make my life easier (and perhaps others') if there were more information than "scene appropriate for record with modulation in resolution, color shade and contrast" (I have NO clue what that means!)


Thanks

Bruce Lomasky
Sunny (sometimes!) Connecticut,

Barry Green
October 16th, 2015, 06:13 PM
VLOG is for use when you know you're going to be doing extensive post-production grading on your footage. VLOG doesn't make monitor-ready footage, it makes grading-ready footage. If you plan on shooting the footage "baked in", ready for display, then you shouldn't use VLOG.

There are 7 gamma curves in the camera; FILMLIKE3 gives almost as much dynamic range as VLOG does (about 11 stops, vs. 12 for VLOG).

As for the built-in scene files: scene 1 is for general HD-style footage; I don't pay much attention to the other built-in scene files. I'll have a number of scene files included in The DVX200 Book when it's ready for release, which could be around 3-4 weeks from now.

Bruce Lomasky
October 16th, 2015, 08:45 PM
1st, thanks for the answer, Barry.

2nd, Count me as one of the 1st people to buy your book. Please let me know where and how I can purchase it.

3rd, since I will always edit whatever I shoot, and since it's really easy and fast to add the LUT to the clip and, presto, it's 12 stops of depth: should I use V-Log for most of my work?

Thanks
Bruce

Barry Green
October 17th, 2015, 09:22 AM
You won't have to buy it; Panasonic will be making it available free to all DVX200 buyers, so don't worry about that.

And if you're always going to be applying LUTs and rendering the footage, then you may want to use VLOG principally. VLOG isn't magic, it just delays when the work needs to be done; you can either have the camera do the work, or have your post-production computer do the work, but the same work needs to be done (converting the raw or log-encoded image into viewable gamma, enhancing or correcting color rendition, applying noise reduction, etc). All those steps need to be taken in any video workflow, you just need to determine whether you want to do them up-front in-camera, or later, in post.

The advantages to doing it in-camera are that it's much faster: the work's already done by the time you even see the footage, and there's no rendering (or only minimal corrections, etc.).

The advantages to doing it in post are obvious: you can take a lot more time and fine-tune the image.

But there's a big difference: if you record the image in the camera's native 8-bit 4:2:0 recording, then the amount of post grading you can do is limited compared with what the camera itself could do; the camera works with the footage internally at very high precision and can do more manipulation and stretching and pushing and pulling of the image without introducing banding or other artifacts. It then quantizes the footage down to 8 bits, and chroma-subsamples to 4:2:0, for recording. The results are quite good when doing it that way, certainly the best one can expect for 8-bit 4:2:0.

But, once it's 8-bit 4:2:0, further manipulation of the image is more limited. Pushing, pulling, making significant color changes, etc., will be limited by the recording format. And that's the issue for VLOG; if you're doing extensive image manipulation on an 8-bit 4:2:0 image, you may run into banding or other limitations. Now, banding can often be solved easily enough with a bit of dithering, but still the point remains: the closer you can get the image to "final" in-camera, the less manipulation you need to do on it in post, and the less of a concern you'll have related to recording format limitations.

The best way to employ VLOG-L would be through the use of an external recorder, where you can take advantage of the 10-bit 4:2:2 output. That would preserve the most data, and therefore the most post-processing flexibility. You'll find 10-bit 4:2:2 images can be pushed farther, manipulated more, than 8-bit 4:2:0 images can be. There are many benefits to an external recorder, and this is one of the primary benefits: further post flexibility, through recording at very high bandwidth (about 900 megabits per second, as versus the camera's internal recordings at 100 megabits per second).

There is, however, another way to go; if your footage is primarily intended for 1080P delivery, you can get the benefit of 10-bit 4:4:4 by recording internally in UHD, and downconverting in post. Downconverting UHD 8-bit 4:2:0 footage results in FHD 10-bit 4:4:4 footage. And that will give you all the range you need to manipulate the footage in post.
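
A minimal sketch of how such a downconversion can yield 10-bit values, in Python with numpy (summing 2x2 blocks, as discussed later in the thread; the function name and array handling are illustrative assumptions, not Panasonic's actual pipeline):

import numpy as np

def sum_downconvert(uhd_plane):
    # Sum each 2x2 block of an 8-bit UHD plane into one FHD sample.
    # Four 8-bit codes (0..255 each) sum to 0..1020, which takes 10 bits
    # to store without rounding -- that is where the extra shades live.
    u = uhd_plane.astype(np.uint16)          # widen first to avoid 8-bit overflow
    return (u[0::2, 0::2] + u[0::2, 1::2] +
            u[1::2, 0::2] + u[1::2, 1::2])   # half-size plane, values 0..1020

Storing the sums, rather than dividing by four and rounding back to 0..255, is what keeps the in-between shades.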

Do be aware that recording in VLOG-L bypasses pretty much every aspect of internal camera processing; about the only things you can do to the image internally are adjust the gain and the white balance, and you absolutely want to have those spot-on before recording. But this also means that the camera won't be doing much, if anything, in the way of sharpening the footage or reducing noise. A LUT is one thing, but it is not the only thing you have to do in post. If you don't sharpen the footage, it may look too soft; that's one of the first lessons people working with RAW had to learn: video really benefits from a little sharpening, and raw (and VLOG) footage isn't sharpened internally. Secondly, you're going to find a lot more noise in the VLOG image, because the noise reduction circuitry (such as the detail coring and skin tone dtl functions) is bypassed. So you can expect to need to apply noise reduction in post to clean up your footage and get it optimal for viewing. And noise reduction takes processing time.

In my opinion, VLOG-L is best utilized when recording external 10-bit 4:2:2 and when using a properly-engineered LUT (or grading from scratch), and finishing with noise reduction followed by an appropriate level of video sharpening.

VLOG-L is least effective when recording internal 8-bit 4:2:0 and just slapping a LUT on in post. It might work out just fine, but you may find issues with gradients, banding, noise, or compression artifacts, depending on just how extensively the LUT manipulates the recorded brightness and color. It's entirely possible to make wonderful VLOG-L footage in 8-bit 4:2:0, but it is less flexible than if you'd started with 10-bit 4:2:2.

Bruce Lomasky
October 17th, 2015, 09:37 PM
Thanks! I am coming from a still photography background where the opposite is true. Your response has opened my eyes. Now, just to find the "right" settings! So many choices... So little time :) (Of course, when your book comes out with "suggestions" for settings, that will be the 1st chapter that I focus on.)

Enjoy the good weather and stop wasting time here Barry and get the darn book finished!!!!


(only kidding)

(pretty sure I am only kidding!)

Bruce

Joachim Claus
October 18th, 2015, 01:08 AM
Barry,
thank you very much for your clarification on V-Log. You have opened my mind and my eyes with your description. Primarily I had appreciated V-Log for its ability to get 12 f-stops of footage. I had not expected that V-Log would be associated with "no internal video pre-processing". When I get my DVX200 I will try to find a set-up that gets near 12 f-stops without the V-Log setting.
Joachim

BTW
I am looking forward to getting your book.

Roland Schulz
October 18th, 2015, 08:35 AM
Barry, why this wrong and confusing "8-bit UHD becomes 10-bit FHD" here?!

http://pro-av.panasonic.net/en/dvx4k/pdf/ag-dvx200_tech_brief_vol1_en.pdf


Is that what you "think", or what Panasonic made of it?! No 8-bit camera can separate intensity information to the degree of a 10-bit camera, not even after downscaling! Some of your points are not wrong, but there is no "10-bit" benefit in this "method"!
Where a 10-bit camera kicks in is in smooth transitions that can lead to banding, especially after grading. Banding only becomes visible in areas far larger than single-pixel distances. This 8-to-10-bit theory is only "noisy", "may happen" statistics, and is only "true" over small pixel-to-pixel/neighbouring distances.


Example:
On a real 10-bit FHD camera a grey ramp (horizontal line/block) could look like: 508-509-510-511-512

An 8-bit UHD camera sees this as: 127.127-127.127-128.128-128.128-128.128 (twice as many samples due to twice the horizontal resolution, but coarse 8-bit values).
Downscaled to 10-bit FHD this becomes: 508-508-512-512-512 (get it? There is NO benefit at all!!! The “steps” are as coarse as in 8-bit; there is no more information than in 8-bit!)

8-bit UHD camera shifted one pixel to the left: 127.127-127.127-127.128-128.128-128.128
Downscaled to 10-bit FHD this becomes: 508-508-510-512-512 (Here the maximum, purely statistical benefit appears: a “gain” to 9 bits between two pixels, but not across the whole horizontal ramp!).

The described 8-to-10-bit benefit has nothing to do with a real 10-bit recording!
V-LOG would benefit from real 10-bit, not from this downscaling theory!
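
Roland's ramp numbers can be checked in a few lines of numpy (a sketch of his argument; two identical rows stand in for the 2x2 blocks):

import numpy as np

uhd = np.array(2 * [[127, 127, 127, 127, 128, 128, 128, 128, 128, 128]],
               dtype=np.uint16)        # two identical rows across the band edge
fhd = uhd[0, 0::2] + uhd[0, 1::2] + uhd[1, 0::2] + uhd[1, 1::2]
print(fhd)                             # -> [508 508 512 512 512]: the step stays
                                       #    exactly as coarse as 127 -> 128 was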

Gary Huff
October 18th, 2015, 07:05 PM
That's correct. You can get 444 by downscaling, but not 10-bit. It's still basically 8-bit 444.

Joachim Claus
October 19th, 2015, 06:02 AM
Sorry guys,
when downconverting UHD 8-bit 4:2:0 encoded video to HD (1080p) video, you can get 10-bit 4:4:4 or 4:2:2 video, provided you use adequate downconverting software.
What the software does is simply consider a "quadruple" of pixels and add their individual RGB values. In other words: it sums two RGB values horizontally and two vertically from the UHD frame and gets a new RGB value for one pixel in the HD frame.
As the 8-bit values from UHD have a digital representation of 0 ... 255, the sum of four values can range from 0 ... 1020 (which requires a 10-bit count).
I hope this helps you to understand the method behind downscaling UHD to HD.
Joachim

Roland Schulz
October 19th, 2015, 07:24 AM
Sorry Joachim, but as I explained before, that is completely wrong. You "may" gain an increase in bit depth between neighbouring pixels (a 2x2 matrix), but when larger areas are covered by one plain color followed by another area in the "next" color or intensity (e.g. 127, then 128), there is no way to get something between these two colors in post!!! You still get the same banding as in the 8-bit data!!

This 8-bit-to-10-bit gain is only "maybe" statistics between neighbouring pixels.

Barry Green
October 19th, 2015, 08:51 AM
Example:
On a real 10-bit FHD camera a grey ramp (horizontal line/block) could look like: 508-509-510-511-512

An 8-bit UHD camera sees this as: 127.127-127.127-128.128-128.128-128.128 (twice as many samples due to twice the horizontal resolution, but coarse 8-bit values).
Downscaled to 10-bit FHD this becomes: 508-508-512-512-512 (get it? There is NO benefit at all!!! The “steps” are as coarse as in 8-bit; there is no more information than in 8-bit!)
There's nothing wrong in what I wrote; the difference between what you're describing and what I'm describing is that you're limiting your example to a pristine mathematical gradient, and therefore limiting the sampling to strictly horizontal, whereas I was using a 2x2 matrix and the assumption that we would never be shooting something as perfectly defined as flawlessly resolved gray in perfect lines. When using 2x2, as shown in the article, more shades are possible for each pixel. And those shades will be captured during the downconversion, and will be properly represented and stored in the 10-bit recording.

8-bit UHD camera shifted one pixel to the left: 127.127-127.127-127.128-128.128-128.128
Downscaled to 10-bit FHD this becomes: 508-508-510-512-512 (Here the maximum, purely statistical benefit appears: a “gain” to 9 bits between two pixels, but not across the whole horizontal ramp!).
But if you add another line below it, thus filling out the matrix to 2x2 rather than 2x1, you'll create the potential for not only that 9th bit, but also a tenth bit. Which is exactly what I was saying.

The described 8-to-10-bit benefit has nothing to do with a real 10-bit recording!
Of course it does. The summation of the pixel data retains the differences between the sampled pixels, and recording it at 10 bits allows a wide enough data pipe to record those retained differences.

Let's say two cameras are looking at a pixel matrix of:
128 129
129 128

One camera is UHD 8-bit, and renders out those four pixels exactly as shown.
The other camera is 8-bit FHD, and has to render all four of those data points as one pixel. It will render it all out as 128, which is the average of the brightness of all four, rounded down to an integer value.

What would happen if we downconverted the UHD camera to one pixel? We'd add 128+129+129+128, giving us a 10-bit code of 514. That's a shade of gray that's halfway between the two finest shades an 8-bit camera could render.
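
As plain arithmetic (a toy check of the numbers above):

fhd_8bit = (128 + 129 + 129 + 128) // 4   # -> 128: the 8-bit camera must round, the half-step is lost
fhd_10bit = 128 + 129 + 129 + 128         # -> 514: halfway between 512 and 516,
print(fhd_8bit, fhd_10bit)                #    the 10-bit equivalents of codes 128 and 129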

V-LOG would benefit from real 10-bit, not from this downscaling theory!
Yes, of course VLOG would benefit from real, true 10-bit recording. Just like every gamma benefits from 10-bit recording. But if you don't have 10-bit recording, and all you need is 1080p output, you can gain much of the benefit of 10-bit by downconverting (using pixel summation) your 8-bit data.

There's nothing wrong with the process I outlined, it's simple math. Real-world applications are going to determine how much benefit there is vs. the noise level, but again, that's the same in true 10-bit recording as well; the benefit of recording 10-bit is always limited by how much noise there is present in the signal, and a noisy-enough signal shows no real-world benefit to recording 10-bit. The same would apply here. You could, if you so wanted, bypass the 10-bit shading increase and go for noise reduction instead; averaging the UHD pixels together (instead of summing them together) would result in a 6dB decrease in perceived noise. Summing them together results in 10-bit shading vs. 8-bit. It isn't necessarily identical to what you would have gotten from a true 10-bit process in the first place, but it should be a significant benefit over 8-bit FHD.
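
A sketch of that sum-versus-average choice in numpy (the flat field, noise level, and frame sizes here are made-up test values, not camera measurements):

import numpy as np

rng = np.random.default_rng(0)
uhd = np.rint(128 + rng.normal(0, 2, (2160, 3840))).clip(0, 255)  # noisy 8-bit flat field

q = uhd.reshape(1080, 2, 1920, 2).swapaxes(1, 2).reshape(1080, 1920, 4)
summed = q.sum(axis=-1)                 # codes 0..1020: keeps the 10-bit shading
averaged = np.rint(q.mean(axis=-1))     # back to 8-bit codes, but quieter
print(uhd.std(), averaged.std())        # averaging 4 samples roughly halves sigma (about -6 dB)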

Gary Huff
October 19th, 2015, 08:55 AM
It's 10-bit in post, but not in the actual image data. Just because it averages doesn't mean it's correct. To demonstrate this actually being 10-bit, you'd need to do both and show there is no difference, which can be done on the GH4. It's not going to be any more 10-bit than the AF100A's 10-bit output or sending 8-bit 422 into a Shogun where it becomes a 10-bit 4:2:2 ProRes.

So shoot something that stresses 8-bit that 10-bit can handle, and then show it looks exactly the same both in 1080p 10-bit and in 4K 8-bit downsampled to 1080. And no "adding noise will help smooth it out", because that just proves what I'm saying.

Barry Green
October 19th, 2015, 08:59 AM
Sorry Joachim, but as I explained before, that is completely wrong. You "may" gain an increase in bit depth between neighbouring pixels (a 2x2 matrix), but when larger areas are covered by one plain color followed by another area in the "next" color or intensity (e.g. 127, then 128), there is no way to get something between these two colors in post!!! You still get the same banding as in the 8-bit data!!

This 8-bit-to-10-bit gain is only "maybe" statistics between neighbouring pixels.
The benefit is most obvious in neighboring pixels, but that doesn't mean that's the only place that would benefit. You're grossly oversimplifying the situation by assuming that entire sections of the frame are going to be rendered at only one pixel level.

Assuming a perfectly flat gray, you're saying that it will be rendered by the camera as either 127-127-127-127, or 128-128-128-128. I'm saying that it may very well be rendered as 127-128-127-128-127-128. And if so, that shade will be properly represented by the 8-bit-to-10-bit downconversion.

I would dare say that idealized, perfectly flat illumination such as what you're suggesting is not the norm. There will be variation to some degree just in natural distribution. The 10-bit downconversion will preserve those variations. Will they be identical to what a 10-bit FHD camera would have delivered in the same scenario? Probably not; a 10-bit camera would be able to preserve that absolutely perfectly flat tone better. But is that a realistic concern? I would say such absolutely perfectly neutral flat shades are not generally naturally occurring. Even in a blue sky there's variation.

8-bit banding is more a symptom of compression throwing away subtle variation and exaggerating the linearity and flatness of an area than of the area itself being linear and flat. Adding just the smallest amount of dithering corrects that problem, as Canon found in their XL1 with the DV codec.
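
A toy numpy illustration of that dithering point (the ramp, the "grade", and the noise amplitude are all arbitrary assumptions):

import numpy as np

ramp = np.tile(np.linspace(100, 120, 1920), (200, 1))   # shallow, slow gradient
rec8 = np.rint(ramp)                                    # 8-bit recording: ~21 flat bands
banded = np.clip((rec8 - 100) * 12, 0, 255)             # harsh grade makes the bands obvious
dither = np.random.uniform(-0.5, 0.5, rec8.shape)       # under one code value of noise
smoothed = np.clip((rec8 + dither - 100) * 12, 0, 255)  # band edges dissolve into fine grain

Whether that "corrects" the banding or merely "hides" it is exactly the distinction Gary draws next.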

Gary Huff
October 19th, 2015, 09:09 AM
Adding just the smallest amount of dithering corrects that problem

I think "hides" is a better word to use in this context than "corrects".

Roland Schulz
October 19th, 2015, 09:31 AM
Everything correct here, but the benefit of 8-bit UHD to 10-bit FHD is zero, since "banding" is HARDLY visible between neighbouring pixels; it is visible in "slow" gradations, and there this "8 becomes 10-bit" theory has hardly any effect.

Claiming that 8-bit UHD becomes 10-bit FHD is simply misleading!! It is based only on statistics, not on image information or technology!!
Even worse, this method can act as a low-pass filter!!


I made a sheet to show what happens; see the attached table:

Roland Schulz
October 19th, 2015, 09:33 AM
The benefit is most obvious in neighboring pixels, but that doesn't mean that's the only place that would benefit. You're grossly oversimplifying the situation by assuming that entire sections of the frame are going to be rendered at only one pixel level.

Assuming a perfectly flat gray, you're saying that it will be rendered by the camera as either 127-127-127-127, or 128-128-128-128. I'm saying that it may very well be rendered as 127-128-127-128-127-128. And if so, that shade will be properly represented by the 8-bit-to-10-bit downconversion.

I would dare say that idealized, perfectly flat illumination such as what you're suggesting is not the norm. There will be variation to some degree just in natural distribution. The 10-bit downconversion will preserve those variations. Will they be identical to what a 10-bit FHD camera would have delivered in the same scenario? Probably not; a 10-bit camera would be able to preserve that absolutely perfectly flat tone better. But is that a realistic concern? I would say such absolutely perfectly neutral flat shades are not generally naturally occurring. Even in a blue sky there's variation.

8-bit banding is more a symptom of compression throwing away subtle variation and exaggerating the linearity and flatness of an area than of the area itself being linear and flat. Adding just the smallest amount of dithering corrects that problem, as Canon found in their XL1 with the DV codec.

As I also said, this happens on "noisier" cameras - again, no real 10-bit information by downscaling! That's the only thing I am talking about.

Barry Green
October 19th, 2015, 09:44 AM
It's not going to be any more 10-bit than ... sending 8-bit 422 into a Shogun where it becomes a 10-bit 4:2:2 ProRes.
Of course it is. Sending 8-bit 422 into a Shogun, which records in 10 bit, causes the Shogun to pad the 8-bit data with two zeroes at the end. There's no more information in that Shogun recording; it's just taking up two more bits.

Summing four pixels together retains the differences in shade between them. The difference between 508, 510, and 512 will be retained by the downconversion method, where it would be lost in the "bit padding" method used when recording 8-bit data into a 10-bit codec.
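
The distinction in two lines of Python (toy values):

padded = [v << 2 for v in (127, 127, 128, 128)]  # bit padding: [508, 508, 512, 512], no new shades
summed = 127 + 127 + 128 + 128                   # summation: 510, a shade padding can never produce
print(padded, summed)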

So shoot something that stresses 8-bit that 10-bit can handle, and then show it looks exactly the same both in 1080p 10-bit and in 4K 8-bit downsampled to 1080. And no "adding noise will help smooth it out", because that just proves what I'm saying.
It's easy enough to demonstrate with a simple gradient in Photoshop. No need to add the variables of shooting something, codecs, noise levels, inconsistent lighting, etc.; just prove the theory simply. Make an 8-bit 3840x2160 gradient. Switch to 16-bit mode and resize it to 1920x1080. Then, in another window, make an 8-bit 1920x1080 gradient. Compare the two and see if they're the same, or if one shows more banding than the other. Or, hey, I'll do it for you...

http://fiftv.com/Gradient/Gradient-8-bit-FHD-vs-UHD-downconverted.psd

Download that file. It's a Photoshop document with two layers. Each layer is a gradient created in Photoshop with identical parameters; one layer is native FHD at 8 bits, the other is FHD created by making an 8-bit UHD gradient and downconverting it to 16-bit using bilinear. Note, I used 16-bit because Photoshop doesn't have a 10-bit option, but -- it won't matter at all, because if your theory is correct there should be no benefit whatsoever, right? So it shouldn't matter whether it was done in 8-bit or 10-bit... Also, I used bilinear mode; I didn't even use bicubic, which would have done much better. Of course, if you use "nearest neighbor", there will be hardly any improvement, so -- why do that?

So just view that at 100% size and toggle the layers on and off; you'll see the differences. Stretch it up or yank it down, manipulate it however you want, and you'll see that the downconverted UHD holds up much better than the native 8-bit FHD. So there's obviously more data being stored and retained, more shades available, in the downconverted UHD.
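
For anyone without Photoshop, the test can be approximated in Python with numpy and Pillow (a sketch: synthetic gradients stand in for the PSD, and a crude contrast stretch stands in for "manipulate it however you want"):

import numpy as np
from PIL import Image

fhd_native = np.rint(np.tile(np.linspace(0, 255, 1920), (1080, 1)))  # native 8-bit FHD gradient
uhd = Image.fromarray(np.rint(np.tile(np.linspace(0, 255, 3840),
                                      (2160, 1))).astype(np.uint8))  # 8-bit UHD gradient
fhd_down = np.asarray(uhd.convert("F")                               # float, like 16-bit mode
                         .resize((1920, 1080), Image.BILINEAR))

stretch = lambda a: np.clip((np.asarray(a, float) - 100) * 8, 0, 255)  # harsh "grade"
print(len(np.unique(stretch(fhd_native))), len(np.unique(stretch(fhd_down))))
# the downconverted version retains noticeably more distinct shades after the stretch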

Gary Huff
October 19th, 2015, 09:47 AM
The banding is smoothed out somewhat, but not by a whole lot.

And you said nearest neighbor wouldn't make any difference, and there are a lot of comments about using "proper" software, so what is this proper software, explicitly? After Effects? Resolve? EditReady? And which settings?

Barry Green
October 19th, 2015, 09:48 AM
As I also said, this happens on "noisier" cameras - again, no real 10-bit information by downscaling! That's the only thing I am talking about.
There is most definitely information gained by downscaling. Your graphs are correct insofar as they go, but they also represent an unrealistic situation. There is practically no possibility of a video sensor rendering an absolutely flat image like what you show. As I pointed out, first of all you're discarding half the benefit which would come from the use of a 2x2 matrix instead of 2x1, and second you're assuming that the only possible downconversion method is nearest neighbor.

Look at the gradient photos I just posted. The benefit of downconverting 8-bit UHD to 10+ bit FHD is obvious, and contributes to much less banding.

Barry Green
October 19th, 2015, 09:58 AM
The banding is smoothed out somewhat, but not by a whole lot.
It's definitely better. And that's just using bilinear.

And you said nearest neighbor wouldn't make any difference
Nearest neighbor will act like what Roland's been saying on an absolute mathematical gradient, and as such it would result in only the slightest improvement at the transition point. It would probably work out better in a real-world scenario where there's some minor variation in flat fields. Bilinear is a more comprehensive conversion and takes into account more than just the immediately neighboring pixels, thus it produces much better results. Bicubic does much better still. If you were to resize the UHD gradient using bicubic, you'd see that the banding is nearly entirely eliminated in the resulting FHD 10-bit image.
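
A quick way to compare the three filters on the same synthetic UHD gradient (Pillow; a sketch that says nothing about any particular NLE's scaler):

import numpy as np
from PIL import Image

uhd = Image.fromarray(np.rint(np.tile(np.linspace(0, 255, 3840),
                                      (2160, 1))).astype(np.uint8)).convert("F")
for name, filt in [("nearest", Image.NEAREST),
                   ("bilinear", Image.BILINEAR),
                   ("bicubic", Image.BICUBIC)]:
    fhd = np.asarray(uhd.resize((1920, 1080), filt))
    print(name, len(np.unique(fhd)))   # more unique values = more in-between shades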

and there are a lot of comments about using "proper" software, so what is this proper software, explicitly?
Anything that scales with something more sophisticated than simple decimation or nearest neighbor.

After Effects? Resolve? EditReady? And which settings?
I am not an expert on all the post-production software out there. I would say that thirty seconds of experimentation should reveal whether any particular program you're interested in does a satisfactory or unsatisfactory job. Heck, just import the source UHD 8-bit gradient, render it out as an uncompressed FHD 10-bit still, and see how it looks to you. If it looks like the native Photoshop gradient I supplied, then yeah, that software with those settings isn't going to show you any real benefit. But I'm sure that with whatever modern program you're using, there'll be an option to get much better results, at least as good as the bilinear resize I showed in the Photoshop example.

Here's the source 8-bit UHD gradient if you want to use it to experiment with.
http://fiftv.com/Gradient/Gradient-8-bit-UHD.psd

Roland Schulz
October 19th, 2015, 10:23 AM
There is most definitely information gained by downscaling. Your graphs are correct insofar as they go, but they also represent an unrealistic situation. There is practically no possibility of a video sensor rendering an absolutely flat image like what you show. As I pointed out, first of all you're discarding half the benefit which would come from the use of a 2x2 matrix instead of 2x1, and second you're assuming that the only possible downconversion method is nearest neighbor.

Look at the gradient photos I just posted. The benefit of downconverting 8-bit UHD to 10+ bit FHD is obvious, and contributes to much less banding.

Using an x-LOG gamma, as in this topic, is an absolutely realistic situation where you easily get visible banding on 8-bit recordings. Downscaling to 10-bit FHD doesn't really help you there!!