View Full Version : nanoFlash Public Beta 1.6.226 Firmware Comments
Dan Keaton November 20th, 2010, 09:23 AM Dear Friends,
Disclaimer: While I understand a Vectorscope and Histogram, I do not consider myself an expert in using these tools.
With that said, it appears to me that the Vectorscope tells the story.
In Image 3 (the nanoFlash scopes), take a close look at the blacks (center bullseye).
Note that there is only one dot in the nanoFlash image.
Compare this to Image 4 (EX): there is not just one center dot, but quite a few dots in the center (black) area.
In my opinion, the nanoFlash recording is cleaner, and this is easily seen in the Vectorscope.
Besides the blacks, the cyan area of the EX has extra dots that are not present in the nanoFlash image.
And note that some are outside the box.
The histogram for the EX will be "wider", to use Piotr's term.
The Vectorscope shows why: there is more noise in the EX 35 Mbps (or at least it is less precise).
My comments are not meant to trash the EX, but to explain why there is a difference in the histogram - in other words, to answer Piotr's questions.
I welcome comments and suggestions from those who are experts in reading these tools.
Piotr Wozniacki November 20th, 2010, 09:41 AM Dear Dan,
As I already stated in my last post, this is what I'm thinking as well.
Thank you for inviting me to post some screen grabs; while I was reluctant at the beginning (for obvious reasons :)), this led me to the idea of shooting the EX1's own bars. Their scopes - and, as you point out, particularly the vectorscope - say a lot.
Since I'm not an expert in interpreting all the intricacies scopes show, I'd once again like to invite those more knowledgeable than myself to chime in and discuss the matter further.
The histogram for the EX will be "wider", to use Piotr's term.
The Vectorscope shows why: there is more noise in the EX 35 Mbps (or at least it is less precise).
I'd say it's not more noise (in the common meaning of the term), but certainly less precision in the EX 4:2:0 picture when compared to the nanoFlash's 4:2:2.
To Chris: if you think this subject deserves it, please take out posts starting with #34 from the Beta firmware discussion thread and create a separate one. Thank you.
Piotr
Dan Keaton November 20th, 2010, 11:22 AM Dear Piotr,
After I posted my reply, I re-read your original post and noted that you came to a similar conclusion.
Piotr Wozniacki November 20th, 2010, 11:48 AM Dear Dan,
Yes - this is the only explanation that comes to my mind.
This can be further illustrated with the examples below: on the right, you can see a blown-up detail from the EX. At 4:2:0 color resolution, you can see black stripes across the solid red color - they may be accounted for by the extra content at the very left end of the histogram.
The left-hand side pics show the same frame from the nano; no black stripes, less black garbage in the histogram, richer vectorscope trace...
Peter Moretti November 20th, 2010, 02:38 PM I believe the six vectorscope dots around the center are caused by bleeding between the luma and chroma values along the horizontal edge where the vertical color bars meet the grey horizontal stripe. This is causing values that are "mostly white." That is, they have almost equal R, G and B values but do also contain small amounts of color information from the color bars.
Such values would show as being slightly off center in the direction of the "polluting color" on the vectorscope's graticule. And that's exactly what we see.
If you blow up the border between the horizontal line and the color bars by a considerable amount, you can actually see this "bleeding" with your naked eye.
HTH, and I believe it's accurate. ;)
Piotr Wozniacki November 20th, 2010, 03:00 PM If you blow up the border between the horizontal line and the color bars by a considerable amount, you can actually see this "bleeding" with your naked eye.
Yes Peter - the vertical color resolution is so much better with 4:2:2 than it is with 4:2:0, isn't it. It can be seen at the border between the horizontal line and the color bars; it's also apparent in my red flower blow-up.
How nice it would be if the same was possible in the other direction - but there is only one way to get it all: 4:4:4 :(
Peter Moretti November 20th, 2010, 04:04 PM Actually, 4:2:2 and 4:2:0 theoretically have the same vertical resolution. It's the horizontal resolution that's different.
But when you throw in macro blocking, bit rate and how the codec works, there can be a difference in vertical resolution as well. So the horizontal banding you're seeing is not really the direct result of 4:2:0 but a more complex combination of factors.
Dan Keaton November 20th, 2010, 04:22 PM Dear Friends,
I received an email concerning these comparisons:
He points out that "the difference between the SxS and the NANO files is not due to a difference in the recorded signal, but in how it is displayed."
"The difference is due to a property of the QT files called APERTURE (Properties > Presentation).
The aperture of the NANO files is "CLASSIC", while the aperture of the SxS files is PRODUCTION.
QT has four different aperture options.
The same clip displayed with different "apertures" will show some differences on the screen (QT Player, FC) and on the VideoScope."
Peter Moretti November 20th, 2010, 04:41 PM Dan,
I respectfully disagree w/ that explanation. The aperture size in QT has to do with subtle differences between SD and DV frame sizes and pixel aspect ratio. It would not cause the horizontal banding that I described above.
A difference in QT Aperture can cause a difference in adding or removing black from the borders of the entire frame, but I really don't think that's happening in this case.
HTH, and I hope I'm giving you the correct info. (I obviously believe that I am.)
Piotr Wozniacki November 20th, 2010, 05:00 PM Actually, 4:2:2 and 4:2:0 theoretically have the same vertical resolution. It's the horizontal resolution that's different.
I beg to disagree - just take a look at diagrams on Page 3 here:
http://dougkerr.net/pumpkin/articles/Subsampling.pdf
But I guess we mean the same thing - just a matter of nomenclature; better vertical resolution = more horizontal lines.
Dan Keaton November 20th, 2010, 05:15 PM Dan,
I respectfully disagree w/ that explanation. The aperture size in QT has to do with subtle differences between SD and DV frame sizes and pixel aspect ratio. It would not cause the horizontal banding that I described above.
A difference in QT Aperture can cause a difference in adding or removing black from the borders of the entire frame, but I really don't think that's happening in this case.
HTH, and I hope I'm giving you the correct info. (I obviously believe that I am.)
Dear Peter,
I tried hard, in posting the information, to convey the information provided to me.
I am not knowledgeable about Aperture settings.
And I do not understand how this information can be determined from the images Piotr posted.
The information came from a trusted source, but his posts, for some reason, were not getting through.
Also, I am certain that the originator of the comments was not referring to your post. His email arrived in my inbox at 2 pm EST today, before your first post in this thread.
I am trying to learn about "Aperture" settings as used in this context.
Dan Keaton November 20th, 2010, 05:48 PM Dear Friends,
I am wondering whether the "Aperture" has any application in this discussion or not.
I believe that Piotr uses Sony Vegas Pro 10.
If "Aperture" is an Apple QuickTime or Final Cut Pro control only, and if it does not apply to the way that Sony Vegas Pro 10 displays the files, then we should discount the "Aperture" discussion.
Peter Moretti November 21st, 2010, 01:18 AM I beg to disagree - just take a look at diagrams on Page 3 here:
http://dougkerr.net/pumpkin/articles/Subsampling.pdf
But I guess we mean the same thing - just a matter of nomenclature; better vertical resolution = more horizontal lines.
I believe we do mean the same thing. But, FWIW, the "correct" terminology is 1/2 horizontal resolution. You can see in the diagram that you referenced a "H: 1/2" next to 4:2:2.
Peter Moretti November 21st, 2010, 01:32 AM Dear Friends,
I am wondering if the "Aperture" has application here in this discussion or not.
I believe that Piotr uses Sony Vegas Pro 10.
If "Aperture" is an Apple QuickTime or Final Cut Pro control only, and if it does not apply to the way that Sony Vegas Pro 10 displays the files, then we should discount the "Aperture" discussion.
An Aperture setting is usually set when creating the QT-wrapped file. Whether the NLE or player actually uses this setting depends on the program. Some also let you override what's set in the file. Oftentimes, slight issues in aspect ratio, or garbage lines at the top and bottom of the frame, are caused by an incorrect Aperture setting. So extra black info in a histogram can be the result of such garbage, but that's not what's going on here.
I have no doubt whatsoever that the nano writes the correct Aperture setting. And just because the EX uses "Production" instead of "Classic" does not mean that there is any type of mismatch. Oftentimes different Aperture settings will yield the identical result.
Piotr Wozniacki November 21st, 2010, 04:17 AM I believe we do mean the same thing. But, FWIW, the "correct" terminology is 1/2 horizontal resolution. You can see in the diagram that you referenced a "H: 1/2" next to 4:2:2.
Exactly, Peter:
4:4:4 ----> H:1/1, V:1/1
4:2:2 ----> H:1/2, V:1/1
4:2:0 ----> H:1/2, V:1/2
- which means the vertical resolution is exactly 2x higher in 4:2:2 than in 4:2:0. Both 422 and 420 have horizontal resolution 2x lower than full 4:4:4 sampling (and this is probably what you meant).
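As an editorial aside, the H:/V: table above translates directly into chroma sample counts. The sketch below is purely illustrative (it is not taken from any codec implementation), but it makes the 2x differences concrete for a 1920x1080 frame:

```python
# Chroma sample counts per subsampling scheme, for sanity-checking the
# H:/V: table above. Illustrative only - not from any real codec.

def chroma_samples(width, height, scheme):
    """Number of Cb/Cr sample sites for a frame of width x height pixels."""
    if scheme == "4:4:4":
        return width * height               # every pixel has its own chroma
    if scheme == "4:2:2":
        return (width // 2) * height        # half horizontally, full vertically
    if scheme == "4:2:0":
        return (width // 2) * (height // 2) # half in BOTH directions
    raise ValueError(f"unknown scheme: {scheme}")

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, chroma_samples(1920, 1080, s))
# 4:4:4 2073600
# 4:2:2 1036800
# 4:2:0 518400
```

So 4:2:0 carries half the chroma samples of 4:2:2 - all of the loss being in the vertical direction, which is the banding direction discussed here.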
Rafael Amador November 21st, 2010, 04:37 AM Actually, 4:2:2 and 4:2:0 theoretically have the same vertical resolution. It's the horizontal resolution that's different.
But when you throw in macro blocking, bit rate and how the codec works, there can be a difference in vertical resolution as well. So the horizontal banding you're seeing is not really the direct result of 4:2:0 but a more complex combination of factors.
422 has DOUBLE the vertical COLOR RESOLUTION of 420.
Just have a look at a 2x2 pixel block in 4:2:0, and at the same 2x2 block in 4:2:2.
rafael
Rafael Amador November 21st, 2010, 05:07 AM I have no doubt whatsoever that the nano writes the correct Aperture setting. And just because the EX uses "Production" instead of "Classic" does not mean that there is any type of mismatch. Often times different Aperture settings will yield the identical result.
Peter,
The mismatch is HUGE.
You simply cannot cut between a picture with one aperture and the same picture with a different aperture.
The picture JUMPS as if you were cutting between signals out of sync.
They have different sizes on screen.
This may be something originally designed for DV, but it affects every single standard, format, size...
Just open any QT file and switch between the different aperture options.
rafael
Piotr Wozniacki November 21st, 2010, 05:46 AM Since this thread is about comments on the latest Beta software, here is some from me - but please do not treat them as ranting :)
After placing on the Vegas timeline, and synchronizing by TC, clips recorded simultaneously to SxS (XDCAM EX HQ) and the nanoFlash (220 Mbps I-Frame Only), I have noticed two things:
1. The nanoFlash embedded audio is lagging by 4 milliseconds. Not a big deal, but when mixing the two for any reason, there is a slight echo (empty bucket effect).
Dear Dan,
Just to return to this thread's main subject: what do you think is causing this slight audio delay in the present Beta?
FYI, it used to be even longer with previous firmware releases (I don't know about the 1.6.29, as I skipped it).
Thanks,
Piotr
Dan Keaton November 21st, 2010, 06:24 AM Dear Piotr,
How does the audio line up with a loud clap, one made near the lens and the mic?
4 milliseconds = 1/250th of a second.
May I assume that you are recording in MXF, and using an EX1, using embedded audio in HD-SDI?
I assume that you are using I-Frame Only at 220 Mbps.
Piotr Wozniacki November 21st, 2010, 06:35 AM Dear Dan,
All your assumptions above are correct.
I'm not sure I understood your question, but I'll try to be more specific:
- the slight delay of the nanoFlash clip (both audio and video) relative to the EX1's internally recorded one is natural. However, I'm talking about the audio lagging behind the video inside the nano clip - i.e. after aligning the nF and EX video with single-frame accuracy (using TC), the nF-recorded audio is still those 4 ms behind the EX-recorded audio.
As I said - not a big deal, but out of technical curiosity - what might be causing it? Both audio and video are fed to the nanoFlash simultaneously, through SDI... As is the TC, by which the video can be synchronized perfectly in Vegas Pro - but not the audio?
To synchronize the audio perfectly, I must ungroup it from the video, turn frame quantization off, and slide the audio by exactly 4 ms.
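As an aside, the arithmetic behind that 4 ms slide is simple. The sketch below assumes 48 kHz embedded audio and a 25 fps timeline (both are assumptions, though typical for an EX1/nanoFlash SDI setup):

```python
# Converting the 4 ms offset into audio samples and timeline frames.
# The 48 kHz sample rate and 25 fps timeline are assumed, not stated
# in the thread.

SAMPLE_RATE = 48_000   # Hz
FPS = 25
lag_ms = 4

lag_samples = round(lag_ms / 1000 * SAMPLE_RATE)   # 192 samples
lag_frames = lag_ms / 1000 * FPS                   # 0.1 of a frame

print(f"{lag_ms} ms = {lag_samples} samples at {SAMPLE_RATE} Hz")
print(f"{lag_ms} ms = {lag_frames:.2f} of a {FPS} fps frame")
```

A tenth of a frame is far below TC resolution, which is why frame-quantized TC sync cannot remove it and the audio must be slid separately.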
Believe me, 4 ms can be important with the kind of recordings I do. I produced a live classical guitar performance recently: Marcin Dylla plays Rossinina at the Polish Royal Castle in Krakow on Vimeo - and on close-ups of the guitarist's fingers, even 1 ms of audio/video lag is visible!
Thanks
Piotr
Oops - I didn't realize that inserting a link to a Vimeo clip would open the player; this was NOT my intention, as the close-ups I mentioned are not in this particular clip, which is just a sample of the DVD I made later :) But you can take my word for it - with fast passages watched on a really big screen, every string pull must be perfectly synchronized with the sound of the string, or a person with a good ear for music will notice. I remember that while editing this video, I had to adjust the audio by milliseconds quite often...
Peter Moretti November 21st, 2010, 08:56 AM Exactly, Peter:
4:4:4 ----> H:1/1, V:1/1
4:2:2 ----> H:1/2, V:1/1
4:2:0 ----> H:1/2, V:1/2
- which means the vertical resolution is exactly 2x higher in 4:2:2 than it's in 4:2:0. Both 422 and 420 have horizontal resolution 2x lower than the full 4:4:4 sampling (and this is probably what you meant).
Yes we agree :). And you are right concerning the comparison to 4:2:0 in terms of vertical resolution. So what I wrote about 4:2:0 not being responsible for the horizontal banding is incorrect. I knew that 4:2:0 halves the resolution in both directions, but had in my mind that 4:2:2 halves the horizontal resolution (compared to 4:4:4).
So your explanation of the horizontal banding being caused by less vertical resolution in 4:2:0 is correct. ;)
P.S. Are you sure the audio issue you're having isn't caused by mic placement? I'm sure you know that a mic placed farther away from the subject will cause a delay, because sound travels to the mic far more slowly than light travels to the sensor. I believe the rule of thumb is that a mic 20' away from the subject will cause a one-frame audio delay.
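That rule of thumb is easy to check against the speed of sound. A quick sketch (the ~1125 ft/s figure is for room temperature, so treat the result as approximate):

```python
# Checking the "20 feet = one frame" rule of thumb against the speed
# of sound (~1125 ft/s at room temperature; varies with temperature).

SPEED_OF_SOUND_FT_S = 1125.0

def mic_delay_ms(distance_ft):
    """Audio delay, in milliseconds, for a mic distance_ft feet away."""
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000

print(round(mic_delay_ms(20), 1))   # 17.8 ms
```

About 17.8 ms is roughly one frame at 50/60 fps and a bit under half a frame at 25 fps - so the rule of thumb holds for the higher frame rates.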
Piotr Wozniacki November 21st, 2010, 09:47 AM Peter,
Of course I'm aware of the light vs. sound speed difference, but this is not the case here - I was comparing the nano recording with the EX own recording, done simultaneously, with a single microphone, and with the nano fed from the EX1 through HD-SDI.
The above is an explanation of my "4 ms audio lag in the current Beta", and NOT how I recorded the live event I mentioned later on :)
Peter Moretti November 21st, 2010, 10:40 AM Gotcha. I thought that was the case, but just thought I'd ask ;).
Adam Stanislav November 21st, 2010, 11:43 AM just a sample of the DVD I made later :)
Is that Wawel?
Piotr Wozniacki November 21st, 2010, 11:49 AM That's due to the so-called "Presentation".
SxS clips and NANO files use different "Aperture" settings, so QT displays the two pictures in a different fashion.
This also affects how the FC canvas displays the picture, and how it looks on the VideoScope. The difference is small but noticeable when you shift between pictures.
rafael
Thanks Rafael - your posts arrive with considerable delay, which is a great pity :(
You're probably reading this "in real time", so you're aware that some people (like Luben) also suggested the Aperture setting difference as the reason, but that was later ruled out on the basis that my examples come from MXF files and Vegas Pro, an environment which doesn't use the Aperture parameter at all.
Nevertheless, the question remains open to Dan: why has CD chosen to encode with Aperture set to "Classic" while most camcorders record with Aperture set to "Production"? If there are no special reasons for this, perhaps the nanoFlash could also use the latter, if only for compatibility reasons?
Piotr
Piotr Wozniacki November 21st, 2010, 11:51 AM Is that Wawel?
That's right, Adam. BTW, this particular event took place at the Batory Courtyard - a place never available to the public before.
Billy Steinberg November 21st, 2010, 01:57 PM Exactly, Peter:
4:4:4 ----> H:1/1, V:1/1
4:2:2 ----> H:1/2, V:1/1
4:2:0 ----> H:1/2, V:1/2
- which means the vertical resolution is exactly 2x higher in 4:2:2 than it's in 4:2:0. Both 422 and 420 have horizontal resolution 2x lower than the full 4:4:4 sampling (and this is probably what you meant).
Keep in mind that you're talking about chroma subsampling here, not luminance. While I'm not trying to dismiss the degradation that happens with chroma subsampling, it's a far cry from what most people think of as 1/2 the resolution.
Billy
Piotr Wozniacki November 21st, 2010, 02:06 PM Thank you very much for enlightening me, Mr. Steinberg. It would have never occurred to me!
Billy Steinberg November 21st, 2010, 02:08 PM Dear Dan,
Just to return to this thread's main subject: what do you think is causing this slight audio delay in the present Beta?
FYI, it used to be even longer with previous firmware releases (I don't know about the 1.6.29, as I skipped it).
Do you have an easy means to check the sync right out of the camera's HD-SDI spigot (or HDMI spigot if that's what you use)?
The reason I ask is that the path the video/audio takes to the internal recorder is very different from the path it takes getting out of the camera. Just because the internal recorder is dead-on sync while the nano recording is slightly out doesn't mean the delay is being introduced by the nano. That's not to say it isn't the fault of the nano, particularly since early versions of the nano firmware were enough out of sync that it was obviously not the camera. But now that you're talking about minuscule sync errors, before you spend a huge amount of time determining whether the nano is out of sync, you might want to check whether the feed from your camera is out of sync.
Just a thought.
Billy
Rafael Amador November 21st, 2010, 02:38 PM Keep in mind that you're talking about chroma subsampling here, not luminance. While I'm not trying to dismiss the degradation that happens with chroma subsampling, it's a far cry from what most people think of as 1/2 the resolution.
Billy
Right.
Luma is full resolution, whatever the format, from DV up.
rafael
Peter Moretti November 21st, 2010, 06:28 PM Peter,
The mismatch is HUGE.
You simply cannot cut between a picture with one aperture and the same picture with a different aperture.
The picture JUMPS as if you were cutting between signals out of sync.
They have different sizes on screen.
This may be something originally designed for DV, but it affects every single standard, format, size...
Just open any QT file and switch between the different aperture options.
rafael
"Classic" and "Encoded Pixels" mode usually look the same. Many times there will be no difference between "Clean" and "Production" modes. One crops for frame size and one doesn't.
So are you saying that the nano is using the wrong pixel aspect ratio for the codec, which would cause a difference between the "Classic" nano and "Production" native recording? If so, how come no one, AFAICT, has noticed this? (It's hugely obvious, but no one has complained about it?)
Finally, are you saying that the six dots around the center are caused by an Aperture Mode difference? You can see clear as day in the images by Piotr that the difference is caused by chroma bleeding along the horizontal grey band of the color bars. That has to do with 4:2:2 vs 4:2:0 chroma sub-sampling. It has nothing to do w/ Aperture size. If you can show otherwise, then I will readily admit I'm wrong. I don't want to mislead anyone. But I don't see how you can say that difference is caused by Aperture mode. Of course, I'll leave my mind open to being convinced otherwise.
Adam Stanislav November 21st, 2010, 11:37 PM While I understand a Vectorscope and Histogram, I do not consider myself an expert in using these tools.
It is really very simple. The vectorscope plots the chroma on Cartesian coordinates, with Cb being the x-axis and Cr being the y-axis. So, it calculates the Cb and the Cr of every pixel in the image and places a dot in the 2D plane based on those coordinates. It does not tell you how many pixels each dot represents, only that there is at least one pixel in the bitmap with those Cb and Cr values.
The placement of each dot around the circle (i.e., the angle of the imaginary line connecting the dot and the center of the coordinate system) represents the hue of the pixel(s) the dot represents. The distance of the dot from the center (i.e., the length of said imaginary line) represents the saturation of the pixel(s) the dot represents.
So, any pixel with no saturation (black, white, or any gray value in-between) will be represented by a dot in the center (Cb = 0, Cr = 0). In the case Piotr has presented, the EX image had a lot of pixels with very little saturation (so the dots were scattered near the center), while those same pixels had no saturation in the nF image (one dot in the center). Neither is necessarily “better,” they are just different.
Final Cut Pro 7 - Scopes - Vectorscope (http://www.kenstone.net/fcp_homepage/fcp_7_scopes_vectorscope_stone.html) describes how to interpret the vectorscope in Final Cut Pro, but it really applies to any software displaying a vectorscope.
The histogram calculates the Y (luminosity) of each pixel and shows you how many pixels in the bitmap have that Y value. In this case, the x-axis represents the luminosity, the y-axis the number of the pixels with that particular luminosity. The x-axis also shows you the darkest (left extreme) and the lightest (right extreme) colors in the bitmap.
At any rate, when color grading, the vectorscope helps you adjust different clips to about the same hue and saturation, the histogram helps you adjust them to about the same range of luminosities.
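Adam's description maps directly onto a few lines of arithmetic. A minimal sketch follows; the BT.601 RGB-to-YCbCr coefficients are an assumption for illustration (HD scopes typically use the BT.709 matrix, and any real scope also scales and offsets these values):

```python
from math import hypot

# A minimal sketch of what a software vectorscope and histogram compute.
# BT.601 coefficients are assumed here purely for illustration.

def rgb_to_ycbcr(r, g, b):
    """Y (luma), Cb, Cr for one RGB pixel with components in [0, 1]."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # the histogram plots Y
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b      # vectorscope x-axis
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b      # vectorscope y-axis
    return y, cb, cr

# Any gray (R = G = B) lands exactly at the vectorscope's center:
_, cb, cr = rgb_to_ycbcr(0.5, 0.5, 0.5)
print(round(cb, 9), round(cr, 9))    # both ~0: a single dot in the bullseye

# For a colored pixel, the angle of (Cb, Cr) is the hue, and the
# distance from the center is the saturation:
_, cb, cr = rgb_to_ycbcr(1.0, 0.0, 0.0)
print(round(hypot(cb, cr), 3))       # saturation of pure red, ~0.528
```

This is why a clean recording of bars shows exactly one center dot: every achromatic pixel collapses to (Cb, Cr) = (0, 0), and any spread around the center means chroma has leaked into the grays.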
Dan Keaton November 22nd, 2010, 01:36 AM Dear Adam,
Thank you very much.
I was thinking of the single dot in the center of the nanoFlash vectorscope as being Black.
Is this correct?
Adam Stanislav November 22nd, 2010, 02:51 AM No, the single dot in the center means any type of gray, from black to white. In other words, anything with no chroma regardless of its luminosity (or luminance or brightness).
I have just taken a second look at those graphs and realized they were of the SMPTE bars. In that case, I take back what I said about one not necessarily being better than the other. The nF image is clearly superior to the EX image, because the EX has some chroma in the grays, which clearly should be achromatic, just as they are in the nF.
There are very few dots in those vectorscopes because there are only a few colors in the SMPTE bars. And as I pointed out, the vectorscope simply shows how many colors are in an image, not how many pixels are of each color.
Neither is perfect due to the MPEG compression, but the nano is considerably better: It shows four different yellows, one green, three cyans, two blues, two magentas, and two reds. There really should be just one or two of each. The white, black, and the grays are all without any chroma (as they should be), so that is excellent.
The EX shows many more colors because it compresses more, so it produces more intermediate colors at the edges of two colors. And, unfortunately, it adds some chroma to the grays, which is rather bad.
The nano is a clear winner here.
Piotr Wozniacki November 22nd, 2010, 03:05 AM The EX shows many more colors because it compresses more, so it produces more intermediate colors at the edges of two colors. And, unfortunately, it adds some chroma to the grays, which is rather bad.
Agreed - this is exactly what I meant when I supposed earlier:
[...] that the "extra information" would probably be just the EX's lower color resolution garbage.
Thanks Adam!
John Richard November 22nd, 2010, 06:38 AM Maybe Adam, who is more versed than I in scopes, could chime in on the potential effect of software scopes vs. hardware scopes in this discussion.
Most software scopes use only parts of the image to display results. Some software scopes will only look at, and display the results of, every 6th or 7th scan line - NOT the whole image, as is done with hardware scopes. The only software scope I know of that attempts to display the vectorscope results of the WHOLE image is the one in Apple's Color (and even this software scope uses some manipulation to achieve a measurement representative of the whole image).
It would seem to me that we are making some very fine quality-detail judgements with a software vectorscope that is only measuring a very small percentage of the scan lines in the image. I have no idea what fraction of the total scan lines the scopes in Sony Vegas use, but I would bet that they measure only a small percentage, as most software scopes do. I think we could be using the wrong type of vectorscope to make such critical technical quality decisions?
Rafael Amador November 22nd, 2010, 07:49 AM "Classic" and "Encoded Pixels" mode usually look the same. Many times there will be no difference between "Clean" and "Production" modes. One crops for frame size and one doesn't.
So are you saying that the nano is using the wrong pixel aspect ratio for the codec, which would cause a difference between the "Classic" nano and "Production" native recording? If so, how come no one, AFAICT, has noticed this? (It's hugely obvious, but no one has complained about it?)
Finally, are you saying that the six dots around the center are caused by an Aperture Mode difference? You can see clear as day in the images by Piotr that the difference is caused by chroma bleeding along the horizontal grey band of the color bars. That has to do with 4:2:2 vs 4:2:0 chroma sub-sampling. It has nothing to do w/ Aperture size. If you can show otherwise, then I will readily admit I'm wrong. I don't want to mislead anyone. But I don't see how you can say that difference is caused by Aperture mode. Of course, I'll leave my mind open to being convinced otherwise.
Hi Peter,
No.
No relation between the "6 dots" and the aperture.
The "6 dots" are clearly generated when the 422 stuff is down-sampled to 420 - when the line at the base of the color bars is merged with the top line of the gray below.
The 420 blocks generate those pixels with the same tone, but desaturated.
No mystery.
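Rafael's explanation can be illustrated numerically: averaging a saturated line with an achromatic one yields a chroma sample with the same hue but half the saturation - exactly a dot partway between a bar's dot and the vectorscope center. The (Cb, Cr) values below are illustrative, not measured from the actual bars:

```python
from math import hypot

# Sketch of how 4:2:0 vertical chroma averaging produces the extra,
# desaturated vectorscope dots. Chroma values are illustrative only.

red_chroma  = (-0.17, 0.50)   # last line of a red bar (saturated)
gray_chroma = (0.0, 0.0)      # first line of the gray stripe (achromatic)

# 4:2:2 keeps one chroma sample per line, so both points survive intact.
# 4:2:0 averages the two vertically adjacent lines into one sample:
shared = ((red_chroma[0] + gray_chroma[0]) / 2,
          (red_chroma[1] + gray_chroma[1]) / 2)

def saturation(c):
    """Distance from the vectorscope center = saturation."""
    return hypot(c[0], c[1])

print(round(saturation(red_chroma), 3))  # 0.528: the red bar's dot
print(round(saturation(shared), 3))      # 0.264: same hue, half saturation
```

One such averaged sample per bar/gray boundary, for each of the six colored bars, would account for the six extra dots clustered around the center.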
About "Aperture": as you can read in the following post, I've asked CD to address this issue.
http://www.dvinfo.net/forum/convergent-design-nanoflash/483233-aperture.html
I saw long ago that NANO pictures do not match the same pictures on the SxS cards.
I have to apologize, because I pointed Dan to "Aperture" as the reason for the NANO/SxS file mismatch in Piotr's test.
I'm trying to find out if there is something similar to "Aperture" in the MXF files.
I understand that the need for different "presentations" of digital video is not exclusive to QT files.
However, I'm very limited in trying to dig into the MXF format.
Cheers,
rafael
Piotr Wozniacki November 22nd, 2010, 08:27 AM About "Aperture": as you can read in the following post, I've asked CD to address this issue.
http://www.dvinfo.net/forum/convergent-design-nanoflash/483233-aperture.html
I saw long ago that NANO pictures do not match the same pictures on the SxS cards.
Rafael,
Yep - I remembered your post immediately after the "Aperture" term came into play in this thread. I've been wondering the same - is there a similar setting somewhere, for the MXF format?
So far, I found nothing.
Cheers
Piotr
Piotr Wozniacki November 22nd, 2010, 08:39 AM Maybe Adam, who is more versed than I in scopes, could chime in on the potential effect of software scopes vs. hardware scopes in this discussion.
Most software scopes use only parts of the image to display results. Some software scopes will only look at, and display the results of, every 6th or 7th scan line - NOT the whole image, as is done with hardware scopes. The only software scope I know of that attempts to display the vectorscope results of the WHOLE image is the one in Apple's Color (and even this software scope uses some manipulation to achieve a measurement representative of the whole image).
It would seem to me that we are making some very fine quality-detail judgements with a software vectorscope that is only measuring a very small percentage of the scan lines in the image. I have no idea what fraction of the total scan lines the scopes in Sony Vegas use, but I would bet that they measure only a small percentage, as most software scopes do. I think we could be using the wrong type of vectorscope to make such critical technical quality decisions?
John,
I think you're right that software scopes are not as accurate (and unbiased) as good hardware instruments, properly calibrated and used - but I believe that for the sake of this simple comparison, they're enough.
What they show is (at least for me) clearly the same as what I'd expect - and Adam's post explains the details very well.
Nevertheless, being knowledge-hungry - I'm still hoping someone will join the party here with even more details, especially on the Aperture setting (whether it has its counterpart in the PC-based NLE world, etc.).
Cheers
Piotr
Adam Stanislav November 22nd, 2010, 11:07 AM Maybe Adam, who is more versed than I in scopes
Oh, I am no expert. I just understand the mathematics behind it. As for the difference between hardware and software, it is hard to tell without knowing about the specific software used. Some software will only look at portions of the image, some at the entire image. And some will look at the entire image of the current frame but only at portions of the image while playing the video. Add to it that software manuals do not always disclose this information.
Nevertheless, in this case we were comparing the vectorscope of the same image as compressed by two different devices, presumably using the same software for the scope, and the differences were quite clear.
Rafael Amador November 22nd, 2010, 11:27 AM John,
I think you're right that the software scopes are not as accurate (and unbiased) as good, properly calibrated and used, hardware instruments - but I believe that for the sake of this simple comparison, they're enough.
Piotr
There is no reason for software scopes to be less accurate.
I mean, if there is any kind of limitation, the lack of accuracy is due to poor design of the application.
If well designed, a system may have problems making a perfect reading in real time during PLAY, but that is just a matter of processing power.
On stand-by, the scope should make a perfect reading.
I don't know about other NLEs, but I think that the VideoScope in FC is OK (full scan).
rafael
Peter Moretti November 22nd, 2010, 12:07 PM ...
About "Aperture": as you can read in the following post, I've asked CD to address this issue.
http://www.dvinfo.net/forum/convergent-design-nanoflash/483233-aperture.html
I saw long ago that NANO pictures do not match the same pictures on the SxS cards.
...
Thanks Rafael. I read the post and understand better what you're dealing with. I don't use FCP, so I can't comment on how Aperture Mode works with it. Be aware that sometimes a program will ignore or default to a particular Aperture Mode setting regardless of what's designated in the QT file. This can cause some very annoying problems.
But I don't see an Aperture Mode problem or mismatch in the colorbars that Piotr posted. The frame sizes are identical, there is no border around one but not the other, and the bars are exactly the same width. Actually, in all the posts I've seen by Piotr comparing EX and nano images, I haven't seen a size disparity. Since Aperture Mode changes can only affect image size (by cropping or changing the pixel aspect ratio) I never considered Aperture Mode with the nano as being an issue.
But Piotr is not using FCP, so maybe that's why you are seeing issues that we seem not to be. BTW, have you tried opening the file in QT Pro and manually changing the Aperture Mode?
Perhaps CD can add a firmware choice to change the Aperture Mode used by the nano? IIUC, it's just a flag in the QT wrapper that doesn't affect how the image is recorded; it's strictly for playback.
Thanks and good luck,
Peter
Rafael Amador November 22nd, 2010, 02:51 PM Hi Peter,
Yes, I shot too fast without realizing that Piotr was working with MXF files.
However, I think that behind the issue Piotr raised there could be some similar "display-related" reason.
- The issue Piotr's pics show (Post 33) is exactly what FC shows when comparing the same clip with different Presentation/Aperture settings.
- I don't think that Presentation/Aperture is something exclusive to QT.
That feature was created to address certain needs when displaying file-based digital video on a computer.
Somehow, MXF files have to address the same "needs".
All this said from my absolute lack of knowledge of MXF files, and the fact that I can't dig any deeper into that technology with my Mac.
We will keep trying to put together the pieces of our puzzles :-)
Cheers,
rafael
Peter Moretti November 23rd, 2010, 12:32 PM I can completely understand why you would think, from just looking at the scopes, that the problem is an Aperture Mode issue; I'm sure it looks very similar to the way an actual Aperture Mode discrepancy presents.
If I were CD, IMHO, I'd be hesitant to make an all-out change to the Aperture Mode from Classic to Production, but including an option to change it would be nice.
BTW, have you tried changing the Aperture Mode manually in QT Pro? It's under: Window... Show Movie Properties... Conform aperture to.
Good luck and I really hope this gets resolved for you. B/c the differences can be relatively minor, thus potentially passing before many sets of eyes before someone spots it. But when that happens, all hell can break loose. (I imagine there hasn't been much of an uproar over this b/c MXF is being widely used.)
Take care.
Billy Steinberg November 23rd, 2010, 01:56 PM A comment and a request.
If you want to go to the horse's mouth, click here (http://support.apple.com/kb/HT4265) for the Apple tech note on the functionality of, and how Macintoshes react to, the aperture setting. Also note that Google is your friend; entering "macintosh aperture mode quicktime" brought it up as the first choice. It does not go into detail about whether the aperture mode is a flag in the quicktime header, or an atom in the data, or even whether it's Apple specific. I suspect it's in the quicktime header, but I haven't done the detective work to verify my suspicion. I also don't know if CD is embedding pixel aspect ratio or not, but that's one of the functions of the aperture mode fields. The edge cropping flag is one of the others. If CD decides it's worthwhile to implement the aperture mode info in one or more of their file formats (mov, mpg, mxf), then the discussion becomes "what should it be set to, or should it be a user menu selection".
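For anyone who wants to do that detective work: per Apple's QuickTime File Format spec, the aperture dimensions live in a 'tapt' atom inside each 'trak', with 'clef' (clean aperture), 'prof' (production aperture) and 'enof' (encoded pixels) children. A rough Python sketch of an atom walker (the parser is mine, and only the handful of container types needed for this dig is included; 64-bit sizes and real field payloads are ignored):

```python
import struct

def walk_atoms(data, offset=0, end=None, depth=0, found=None):
    """List (depth, type) for each QuickTime atom in a buffer.

    Each atom starts with a 4-byte big-endian size and a 4-byte type.
    Containers such as 'moov'/'trak'/'tapt' hold child atoms, so we
    recurse into them; everything else is treated as a leaf.
    """
    found = [] if found is None else found
    end = len(data) if end is None else end
    containers = {b'moov', b'trak', b'mdia', b'tapt'}
    while offset + 8 <= end:
        size, kind = struct.unpack('>I4s', data[offset:offset + 8])
        if size < 8:          # malformed or 64-bit size; bail out
            break
        found.append((depth, kind.decode('latin-1')))
        if kind in containers:
            walk_atoms(data, offset + 8, offset + size, depth + 1, found)
        offset += size
    return found

# A tiny synthetic buffer: a 'tapt' container holding an empty 'clef'
# atom (a real 'clef' would carry fixed-point width/height fields).
clef = struct.pack('>I4s', 8, b'clef')
tapt = struct.pack('>I4s', 8 + len(clef), b'tapt') + clef
print(walk_atoms(tapt))  # [(0, 'tapt'), (1, 'clef')]
```

Run against a real .mov (walk_atoms(open('clip.mov', 'rb').read())), the presence or absence of 'tapt'/'clef'/'prof'/'enof' under the video 'trak' would answer Billy's question for a given file.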
That was the comment; the request is for a moderator to separate out all these extraneous discussions from this thread, the nanoFlash Public Beta 1.6.226 Firmware Comments thread, and for everyone to PLEASE try to stop hijacking threads (of which I guess I'm now also guilty), particularly ones as important as this one.
Billy
Dan Keaton November 23rd, 2010, 02:24 PM Dear Billy,
Thank you for posting the link.
I have read it.
Based on what I read, I have a few questions:
1. It only mentions SD video. Does this make any difference for HD video?
It does not indicate that HD video will be changed at all.
2. We may have chosen "Classic" so that our files work with both older and newer versions of Quicktime Player.
I do not know the answer; if choosing "Production" would prevent older versions of Quicktime Player from working, that would be something for us to consider.
Billy Steinberg November 23rd, 2010, 03:48 PM Note that the aperture setting does NOT change the way the video is recorded at all, it's just a little information that the playback or editing system can choose to implement or ignore.
The cropping aspects were mostly due to the crap on the right and left of the frame that DV cameras invariably put out. The pixel aspect ratio was there for cameras that didn't use square pixels (DV again, mostly), so the aspect ratio didn't get changed on playback. (Actual pixels versus displayed pixels). As far as I know, this is only relevant in SD.
Whether it was meant to apply to HiDef or not I don't know, but at least on Macintoshes, it does seem to have an effect on playback of HiDef images today (even if it's not supposed to). In the early days, before "aperture" and prosumer HD camcorders, there was a setting called "high quality". DV format video suffered when it was turned off, though I always attributed it to coarse de-interlacing rather than alteration of the pixel aspect ratio (it may have been both). Back then almost nothing was shot progressive, and this was an easy way for Apple to get rid of the interlace artifacts on playback on the (progressive scan) computer: they de-interlaced when "high quality" was turned off. Of course, they threw away half the vertical resolution that way...
Billy
ps From my reading of the tech note, it looks like older versions of Quicktime Player will ignore the aperture setting, so that probably shouldn't affect your decision. Newer versions of Quicktime can be told to ignore the setting embedded in the video and use whatever you tell them. In any case, more research is needed, particularly with regard to how non-Apple systems react to the "aperture" info. (What an inappropriate name to call what this setting does).
Rafael Amador November 23rd, 2010, 05:18 PM Hi Dan, Billy and Peter,
At the end of the day, I don't think the "Aperture" is any big issue.
I think it is enough that people are aware of this fact, in the improbable case that somebody wants to mix the same picture coming from SxS and NANO.
So I don't see any need to change anything in the NANO files.
Billy, you are right that, according to the Apple notes, the "Presentation" setting is there to address some issues with DV, but the fact is that the setting is available for every QT movie, and can change the properties of the movie.
I guess the NANO QT files just keep the same "Presentation" option as the original standard SONY XDCAM HD 422.
Cheers,
Rafael
PS: Just to add that I found that difference when I was trying to reproduce Piotr's tests on SxS vs NANO noise.
I saw that I could never fully match both pictures; it was always like a "1 frame" offset, but impossible to correct by trimming in either direction.
Peter Moretti November 23rd, 2010, 07:05 PM I just want to add, with Billy's permission ;), that there are HD formats that don't use square pixels. Some are HDV, XDCAM (not 4:2:2) and XDCAM EX SP. These formats should all benefit from using "Production" over "Classic" Aperture Mode settings.
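(To put numbers on that: HDV 1080 and XDCAM HD 1080, for example, store a 1440x1080 frame with 4:3-wide pixels, which a player has to stretch to a 1920-wide 16:9 image. The arithmetic, as a trivial Python sketch:

```python
from fractions import Fraction

def display_width(stored_width, par):
    """Display width after applying the pixel aspect ratio (PAR)."""
    return int(stored_width * par)

# HDV / XDCAM HD 1080 store 1440 anamorphic pixels per line;
# a 4:3 PAR widens them to a full 1920-wide 16:9 frame.
print(display_width(1440, Fraction(4, 3)))  # 1920
```

So for those formats, whether a player honors the stored aspect information makes a visible difference.)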
I am surprised that there is a Classic vs Production difference w/ nano files, but of course it's possible and seeing some frame grabs would be nice.
BTW, Aperture Mode has been part of QT since 7.1, so I think the vast majority of nano users will be using a version of QT that takes advantage of the setting. And like Billy said, I'd be very surprised if the older versions of QT don't just ignore any Aperture Mode setting.
HTH.
Adam Stanislav November 23rd, 2010, 09:16 PM Does this make any difference for HD video?
Does Programmer's Guide to Video Systems - Lurker's Guide - lurkertech.com (http://www.lurkertech.com/lg/video-systems/#1080i_sampling) answer your question? In brief, the clean aperture takes 16 pixels off the left edge, 16 pixels off the right, 9 pixels off the top and 9 pixels off the bottom in HD, both 1080 and 720. That preserves the 16:9 ratio.
As Billy pointed out, the full frame (production aperture) is in the file; it is just that video players are supposed to cut those edges off. Editors are supposed to use the full production aperture. This gives them some slack at the edges for things like sharpening and softening filters, which need the values of the surrounding pixels to work. That way the contents of the clean aperture can be filtered properly, and the final viewer never sees those edges that could not be filtered properly.
Theoretically, computer players will probably show the full frame and video (non-computer) hardware will only display the clean aperture. I'm no Martha Stewart, but it's a good thing.
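Putting those trim numbers into code makes it easy to check that the 16:9 ratio really is preserved (a trivial sketch; the function name is mine):

```python
from fractions import Fraction

def clean_aperture(width, height, trim_x=16, trim_y=9):
    """HD clean aperture per the figures above: 16 pixels trimmed
    from each side, 9 from the top and bottom."""
    return width - 2 * trim_x, height - 2 * trim_y

for w, h in [(1920, 1080), (1280, 720)]:
    cw, ch = clean_aperture(w, h)
    print(w, h, '->', cw, ch, Fraction(cw, ch))
# 1920 1080 -> 1888 1062 16/9
# 1280 720 -> 1248 702 16/9
```

Both 1888x1062 and 1248x702 reduce exactly to 16:9, which is why the edge cropping is invisible in the displayed aspect ratio.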