International Organization for Standardization (ISO); the US formerly used the ASA (American Standards Association) designation.
Video chips don't respond in a linear fashion. This has been discussed extensively in the past. You might want to do a search and save yourself some time and effort. |
Thanks for the info, but really I wasn't looking to get pinpoint-accurate readings out of my light meter. I understand it cannot be used as precisely as with film.
I guess I won't post results anyhow, since it seems the ISO equivalent depends heavily on the camera settings, but I'm still willing to find a rough number for my particular settings. This is video; I'm still adjusting, and I understand I will need to rely much more on the waveform monitor and zebra pattern, which are pretty much new tools for me. But I will still heavily use a light meter to set my contrast ratios, and I will also try to get a global reading for exposure. I'm not even talking f-stops here, just general guidelines: not enough light for exposure without using gain, deep or shallow DOF, too dark, too bright, etc. I'm comfortable with a meter, and even though I know videographers don't find much use for it, I will still feel more comfortable getting a rough lighting setup with the light meter and then fine-tuning with the zebra and waveform monitor. |
<<<-- Originally posted by Jeff Donald : Video chips don't respond in a linear fashion. -->>>
Ah, Jeff, I don't agree. CCDs and CMOS chips do respond in a linear fashion: when you double the brightness (the number of photons, that is), the output voltage of the CCD or CMOS element doubles as well. That is a major difference between digital photography and "film" photography. Film, as well as our eye, responds in a logarithmic fashion; twice the amount of light does not seem twice as bright to our eye. |
Found this thread and a post from the always knowledgeable Don Berube about this subject:
Quote:
|
In addition to that, you can get better picture quality too. If you set the highest DVD-compliant data rate of around 9 Mbit/s, you will get a (theoretically) 20% better picture, as you are now encoding only 24 images a second instead of 30. Think of it like this.
9,000,000 / 30 = 300,000, or about 300K per frame. 9,000,000 / 24 = 375,000, or about 375K per frame. It's not a lot, but it can give you better results. Besides, in my experience, when I encode an MPEG file that is 24p and I don't remove the 3:2 pulldown (or it wasn't 24p 2:3:3:2), the MPEG encoder isn't efficient, and the field-based frames inherent in 3:2 cadences look like caca. |
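The per-frame budget arithmetic above can be written out explicitly (a toy calculation, assuming a fixed ~9 Mbit/s video stream, with no audio or container overhead accounted for):

```python
# Per-frame bit budget at a fixed MPEG-2 data rate (illustrative figure
# from the post: ~9 Mbit/s, the rough DVD-compliant ceiling).
DATA_RATE = 9_000_000  # bits per second

def bits_per_frame(fps: float) -> float:
    """Average number of bits available to encode one frame."""
    return DATA_RATE / fps

budget_30 = bits_per_frame(30)  # 300,000 bits per frame
budget_24 = bits_per_frame(24)  # 375,000 bits per frame

# Encoding 24 frames instead of 30 leaves each frame 25% more bits
# (equivalently, the same budget feeds 20% fewer frames).
gain = budget_24 / budget_30 - 1  # 0.25
```

That 25%-per-frame figure is the theoretical best case; real encoder efficiency also depends on removing the pulldown first, as the post notes.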
It's still good to know the ISO number for your camera settings, because it's easier to set lights on set by metering footcandles. Then, of course, you can find the exact exposure with zebras or a scope.
|
Well, regardless David, please do go ahead and post your results anyway.
|
Yes, you're right Marty; looking at it this way, it appears the 24 fps footage will be of better visual quality regardless of the length of your project, since the 9 Mbit/s limit is spread over fewer frames.
|
David, my experience has been that outside of prelighting a set (where it's more feasible to use a meter than to set up the camera) or building up a key light, my meters tend to stay on the cart when shooting video, especially when it comes to measuring contrast. The reason is that in low-light situations, levels that barely register on the meter can register perfectly well on screen; yet at the top end of the latitude, the overexposure headroom is so limited that I don't even trust a rule of thumb like "two stops over is completely blown out," because sometimes it is and sometimes it isn't. And then there are adjustable knee settings, which can shift this aspect without significantly affecting the rest of the curve, not to mention gamma, which shifts the midtone response and can affect the results of a key/fill ratio.
In other words...a good monitor will tell you a lot more accurately what's going on than a meter. But it is nice to know what you are looking at for a median ISO, so you can get a rough idea of the lighting package for a given job (like in a Mini35 setup, where exposure is not a casual thing due to the light loss!) |
Charles, how do you keep consistency throughout your different setups if not using a light meter? I understand the waveform monitor is more powerful and accurate than the best spot meter in the world, since in a way it gives you reflected light in IRE for every pixel in the frame, but how can you get a consistent look over different setups without using your meter to determine key and ratios? Your eye is good enough to determine if it looks good for a particular setup, but it might play tricks on you if you're trying to match the results from a previous setup.
Also, regarding settings, I concede that slight adjustments to pedestal, setup, gain, knee, etc. will change the way the camera behaves, and thus change the way it sees light and reacts to it. Just to make sure, I'll experiment with that too this weekend (trying to see how every setting affects a theoretical ISO value). But if you create one custom camera preset for your entire shoot, and get your ISO correspondence based on that preset, wouldn't the meter be accurate (enough) until you change those setup values again?

I'm a bit puzzled here, because to me it just makes sense to: set the frame for the shot; use the light meter to get a very rough reading for your key (say, more towards f/2 or more towards f/11 depending on the desired DOF); fine-tune with the waveform monitor to avoid overexposure, as well as zebra (mine is set at 95 IRE, so that leaves a tiny bit of headroom); use your meter again to determine ratios/zones; then get back to the camera and waveform monitor to make final adjustments (kill hot spots, kill shadows below 7.5 IRE, etc.). This is the only way I can think of to simultaneously get consistent lighting setups throughout and expose well within the limitations of the video format (mainly less latitude, a slightly different gamma curve, etc.). Of course I understand that setting a 3:1 ratio between fill and key, for example, won't look and respond the same for film and video, but what I'm looking for is reproducible and predictable results, adjusted around the camera's behavior under specific conditions with specific settings. Don't know if that makes sense or if there's still an element I fail to grasp. |
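Since the thread keeps circling back to pinning down a rough ISO equivalent from meter readings, here is a minimal sketch of the arithmetic, assuming the standard incident-light exposure equation N²/t = E·S/C with a typical meter calibration constant of C ≈ 250 lux·s (real meters use values from roughly 240 to 340, and as noted above, gamma/knee settings shift a video camera's effective ISO, so treat any result as a starting point only). The function name and example numbers are made up for illustration:

```python
LUX_PER_FOOTCANDLE = 10.764
C_INCIDENT = 250.0  # assumed incident-meter calibration constant (lux*s)

def equivalent_iso(footcandles: float, f_number: float, shutter_s: float) -> float:
    """Solve the incident-light exposure equation N^2/t = E*S/C for S (ISO)."""
    lux = footcandles * LUX_PER_FOOTCANDLE
    return C_INCIDENT * f_number ** 2 / (lux * shutter_s)

# Hypothetical example: key light metering 100 fc, camera exposing
# correctly at f/4 with a 1/48 s shutter (a 24p, 180-degree equivalent).
iso = equivalent_iso(100.0, 4.0, 1 / 48)  # roughly 178, i.e. call it ISO 160-200
```

Run the same calculation at a few points across the camera's range and you get exactly the kind of rough, settings-dependent number David is after, not a film-style rating.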
Quote:
But in just progressive mode, at lower light levels (low enough that the camera's autoexposure recommended an f/2.8) the equivalent ISO rating was 400. Under much brighter conditions, where the camera was recommending f/16, the equivalent ISO response was 800. Depending on the gamma settings it could be as high as 1000. Over most of the exposure curve the rating was 640, but could be a half-stop different at both ends of the spectrum. Definitely not a raw linear response. (tested with a gray card and with real-life scenes, using a waveform monitor and two different spot meters). |
Barry, of course no real-world system responds in a perfectly linear fashion, especially at the ends of the spectrum. There will always be a certain linearity error.
The difference between a CCD or CMOS chip and film is that film "by design" does not respond in a linear fashion but in a logarithmic fashion, while the chips respond more or less linearly. A few days ago Jeff posted a link to some white papers on these topics on the Adobe website. Highly recommended reading. |
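Both sides of this exchange can be right at once: the chip's raw response is linear, but the camera applies a transfer curve (gamma) before the signal is recorded, which is one reason a measured "ISO equivalent" drifts across the exposure range. A minimal sketch using the standard Rec. 709 transfer function (the XL2's actual curve, with its adjustable gamma and knee, will differ):

```python
def rec709_oetf(linear: float) -> float:
    """Rec. 709 opto-electronic transfer function (simplified; no knee tweaks)."""
    if linear < 0.018:
        return 4.5 * linear
    return 1.099 * linear ** 0.45 - 0.099

# The sensor itself is linear: doubling the photon count doubles the raw signal.
raw = 0.2
assert abs((2 * raw) / raw - 2.0) < 1e-9  # trivially linear

# But the recorded video signal is not: after the transfer curve,
# doubling the scene light raises the output by much less than 2x.
lo = rec709_oetf(raw)      # ~0.43
hi = rec709_oetf(2 * raw)  # ~0.63
```

So a meter that assumes a fixed, film-style response will read the top and bottom of the curve differently than the midtones, much as Barry measured.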
Amazing footage. It shows what the XL2 can do. What a shame the thing is so expensive. I sure love that girl in the outside light; very warm and friendly.
|
The orange of the pumpkin is really beautifully rendered.
|
Thanks for all your feedback and comments. I'm about to start shooting a new short film titled "Little Acorns" and should have some stills (and possibly clips) to post in the next few weeks.
Also tomorrow, I'm on set for a 16mm short film being produced by a filmmaker friend of mine here in Toronto. I plan to bring the XL2 along and I'll see if I can't snag a few clips of some scenes - it would be great to see the results using a film lit scene. |
Direct To Edit with 24p mode
Hey guys,
I'm looking at getting the XL2. I also want to purchase a Direct To Edit capture device so that I can start editing right away without having to spend time capturing my footage. Here is the question: if I'm using a DTE capture device, will it capture the 24 fps, or does the XL2 only record the 24p to tape first and then output it when played back from the tape? If the XL2 will only output 60p through FireWire, then it would be useless to have a DTE device, because it would only capture 60p instead of 24p. Is this right? Help me out here, guys. I hope this makes some sense. Josh M. |
60p does not exist in consumer cameras, with the exception of one HDV camera, I believe. Anyway, as with the DVX100, the XL2 does *NOT* record 24p (to disk, FireWire, whatever). Yes, it scans the CCDs at 24p, but it applies a 2:3 (or 2:3:3:2) pulldown to turn that into a 30-frame-per-second stream before it goes out. So any standard DTE will function and record this signal, which is the same signal you would get when capturing from tape. In your NLE you select the 24p template and it will remove the pulldown to get your true 24p back. Keep in mind that 2:3:3:2 pulldown is better suited for editing. |
The camera must convert to the DV format, which is 60i (in NTSC), and that is what will go out over the FireWire, be it from playback or direct. In 2:3:3:2, if the field sequence is A1A2 B1B2 B2C1 C1C2 D1D2 (note that A1 and A2 represent odd and even lines from the same exposure), then it's clear that the NLE can recover the 24p (or 23.976, actually) by simply throwing away the B2C1 frame, and so I'm guessing that's how it works. ??
|
That is how it works. With 2:3:3:2 pulldown, for every group of four 24P frames, the tape records four full frames and one disposable "split frame". When using an NLE that understands 2:3:3:2 pulldown, the NLE can simply discard the split frame, and be able to display/work with the original raw 24P frames.
|
"Keep in mind that 2:3:3:2 pulldown is better suited for editing."
From what I read in the manual, 2:3 is what you should use if the end result is for television, and 2:3:3:2 is if you plan on transferring to film. Could you expand on why 2:3:3:2 would be better for editing? And what would be the difference if you shot in 2:3:3:2 but were going to display it on a television? Sorry, confused. - Ray |
The difference for display on a TV is in the way motion looks. Please take a look at the two diagrams on the following page: http://www.adamwilt.com/24p/#24pRecording (under "How is 24p recorded? What's the difference between 24p and 24p Advanced?"). The first image shows 2:3 pulldown and the second shows 2:3:3:2 pulldown. The top row represents the capture as it comes off the CCDs, the middle row represents the frames as they are laid to tape (or sent over FireWire), and the last row shows the re-assembled frames.

The middle row is the most important:

2:3 => AA, BB, BC, CD, DD
2:3:3:2 => AA, BB, BC, CC, DD

As you can see, frames 3 & 4 are where the difference in every five-frame sequence (originally four frames in 24p) is located. Now, if your NLE needs to do an inverse pulldown to reconstruct the original 24p, in the case of 2:3:3:2 it can simply drop the third frame in every sequence: the frame "BC" in this example. Removing this frame restores the original 24p sequence of AA, BB, CC, DD.

With a 2:3 pulldown it cannot do this, since it needs both frames 3 & 4 to reconstruct the C frame. So it takes a C field from frame 3 (BC) and a C field from frame 4 (CD), drops frames 3 & 4, and inserts the newly created frame. As you can imagine, this requires more processing power (especially at 30/24 fps).

The reason they claim 2:3 is better for TV broadcast is that it smooths the fields out a bit better. Whether it will be that noticeable in the real world I don't know (I'm not in NTSC country here, so no 2:3/2:3:3:2 pulldown in our cameras). So you can edit both losslessly, but one should be a bit faster to work with; how much faster I don't know. Some testing of both methods might be in order if you go the broadcast route. If you are just making a web movie or shooting for a potential film blowup, then it is better to go with 2:3:3:2, since you can just drop one frame and be done with it. |
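The cadence bookkeeping described above can be sketched in a few lines (an illustration only, not anything the camera or NLE actually runs; the function names are made up, and each video frame is modeled as a (top, bottom) field pair):

```python
def pulldown_23(frames):
    """2:3 pulldown: A,B,C,D -> AA, BB, BC, CD, DD (per group of four)."""
    out = []
    for a, b, c, d in zip(*[iter(frames)] * 4):
        out += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return out

def pulldown_2332(frames):
    """2:3:3:2 pulldown: A,B,C,D -> AA, BB, BC, CC, DD (per group of four)."""
    out = []
    for a, b, c, d in zip(*[iter(frames)] * 4):
        out += [(a, a), (b, b), (b, c), (c, c), (d, d)]
    return out

def inverse_2332(video):
    """Recover 24p from 2:3:3:2 by dropping the third (split) frame of each five."""
    out = []
    for i, (top, _bottom) in enumerate(video):
        if i % 5 == 2:      # the disposable "BC" split frame
            continue
        out.append(top)     # every remaining frame is whole: both fields match
    return out

src = list("ABCD")
assert inverse_2332(pulldown_2332(src)) == src  # one dropped frame restores 24p
```

Note that an `inverse_23` for the 2:3 cadence cannot be written as a simple drop; as the post says, it would have to rebuild the C frame from fields of two different frames, which is the extra work 2:3:3:2 avoids.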
XL2 24p 16X9 Final Cut Problems, please help!
I shot 24p (not advanced) 16X9 footage on the XL2 & am having real problems.
First, I can't get FCP to see the camera on my G5. I did get it to work on somebody else's G5, but not mine; what could be the problem here? Then I put the tape in my GL-1, and I got FCP to see it. Is that cool? Can I use the GL1 to capture and not lose anything, i.e., still be 24p 16x9? Next, with the captured footage that was shot on the XL2 but captured using the GL1 (assuming it's cool to use the GL1), I just couldn't find settings that worked. I get streaking. I get footage that needs to be rendered. I get footage that looks stretched. I just can't get it right! I'm freaking out. I'm making a feature that mixes media, footage from all kinds of cameras (which is why I didn't go for 24p "advanced," just normal 24p). I wanted my main camera to be native 16x9 24p and super hot, mixing with footage from all kinds of cameras, including Digital8s. So, assuming I can even get my computer to see my XL2 and get the footage in there, then what? What happens when I bring in GL1 footage and 8mm footage? |
The footage shot using your XL2 and captured using your GL1 will be the same, assuming the heads on the GL1 are clean and undamaged. I'm not sure what you mean by "streaking," but I'm hazarding a guess that it's the banding that usually occurs because of dirty heads. I'm assuming the problem lies in the GL1's heads, as I presume your XL2 is relatively new.
Once your footage is captured, I believe the problem lies with how your timeline is set up. Make sure your timeline is set to the right frame rate (24p) as well as the right pixel aspect ratio; I suspect the pixel aspect ratio is set wrong, and that's why it appears stretched. I have never used FCP, so I can't tell you where to look specifically in that application, but I'd imagine the setting will be in some sort of preferences/properties menu. Hope that helps. |
Many of the issues you're discussing have been worked out on a thread on lafcpug. Check it out http://www.lafcpug.org/phorum/read.php?f=1&i=70752&t=70548
Short answers I will note here.

1. If your G5 can't "see" your XL2 but another G5 could, clearly you have some setup or settings issue. Check cables, try a new FireWire cable, try different FireWire ports, check capture settings, and try NTSC Basic if all else fails.

2. Yes, assuming the heads are in similar alignment, the GL1 should work fine.

3. It's unclear what "streaking" is. This could very well be the result of dirty heads on the XL2. Check your owner's manual and see if the dirty-head photo looks like the streaking you are describing. There are some reports of brand-new XL2s coming from the factory needing their heads cleaned before the camera can be used for the first time. If you recorded with dirty heads, there's no saving the footage. Clean the heads with a dry head cleaner like Canon's DVM-CL; do not use other brands. It's also possible the GL1 is the one with dirty heads. Check it out.

4. You say your footage looks "stretched," but you probably mean it looks "squeezed." This is because you need to capture with the anamorphic setting turned on. You're still going to have to set FCP to handle 24p in the 3:2 pulldown. See Steve Zinn's awesome webpage for more details: http://www.mandarinpictures.com/stephenzinn/

5. Mixing 24p 16:9 footage with your GL1's 4:3 60i footage will look clunky. You will also have some difficulty editing all the footage on the same timeline, which will be either 29.97 or 23.98. There are workarounds, however, such as turning your GL1 footage into 24p using software like Nattress (http://www.nattress.com/index.htm). Then you can edit all footage on the 24p (really 23.98) timeline. But be careful: the Nattress conversion only works correctly if you've shot your GL1 footage in 60i to begin with. As for getting good-quality 16:9 out of your GL1, you will have to purchase an anamorphic lens from Century Optics: http://www.centuryoptics.com/products/dv/16x9/16x9_37mm.htm

Of course, you don't have to shoot your XL2 footage in 24p 16:9. If you shot in 4:3 60i, you would have an easier time matching the GL1 footage. Douglas |
thanks
thanks for the suggestions...I'll try them out...
I hope the heads weren't dirty on the XL2; the footage looks good in the viewfinder/LCD when we replay it on the camera. |
Q: Picture lag in 24p mode...?
Please forgive me if this question makes no sense, but I wanted to see if anyone could help.
I just got my XL2, and out of the box, when I shoot in 24 fps there's an image lag in the viewfinder. In other words, if I pan or move quickly, the image in the viewfinder drags for a split second. Is there a way to correct this, or does it not show up that way when you burn your footage to disc? Thanks. -C |
I think it was Barry Green who said that everything all those DVX100 users had to learn about properly using 24p, will have to be learned all over again by XL2 users.
24p is an amazing feature, but you'll need to revise your shooting habits in order to accommodate its limitations. The smoother and slower your camera movements are, the better they'll look when shooting in 24p. |
Yeah, I'm not sure of the exact effect you're experiencing, but it may just be that you're not used to the slower frame rate. In 24p the screen gets updated only 40% as often as in normal video. But even so, you shouldn't be able to detect an actual lag; I mean, 24 times a second is still pretty darn quick...
|
I suspect this is not a 24p issue. It might be image stabilization at work. Try turning image stabilization off and see if the lag is still there. Let us know.
Douglas |
Good point, Douglas -- that's always a factor too.
Make sure that OIS is turned off whenever shooting from a tripod. |
Thanks guys...
Douglas, I will try turning off the image stabilization tonight and let y'all know if it worked. |
I think that lag IS due to the 24p setting. The lag you're seeing is part of the film look's aesthetics: 24p will naturally have more motion blur than shooting at 60i. That's why you want to shoot 24p with the same discipline as if you were shooting film.
When you burn your footage to disc (or essentially view your footage on a regular TV), that "jumpy" look you're probably seeing in your LCD viewfinder will look like a smooth motion blur. |
I have experienced this as well. I haven't captured it and put it on my computer, though. I really hope this is only seen in the viewfinder and appears as motion blur on computer/TV screens, like Joseph said.
|
using 24p and 30p
My friend just shot one scene in 24p and another scene in 30p. He's not sure how to edit them together in Final Cut Pro, and I'm not sure what to tell him either. He wants to mix 24p and 30p footage on the same timeline. Any ideas?
|
The key is to use a 29.97fps timeline. There will be a bit of artificially induced interlacing on the 24p footage due to the 3:2 pulldown but it's the best way to match 30p and 24p footage (you cannot use 30p footage on a 24p timeline unless you plan on slowing down the footage).
If the 24p footage was shot using the 3:2 mode, there's nothing else to do; FCP will recognize the footage as standard 60i DV. If, however, your friend shot using 24p 2:3:3:2 (which isn't a very good idea if you plan to mix it with 30p footage), then he will need to convert the footage back to 24p, apply a 3:2 pulldown to it, and then use that newly converted footage to mix with the 30p. It's not that you can't do it with the 2:3:3:2 pulldown mode; it's just that the 3:2 footage will look better (smoother motion) when used in a 29.97 fps timeline.

All that being said, I don't think mixing 30p and 24p footage on purpose is a good idea, unless you're doing it for very specific reasons, one of which could be slow motion on the 30p footage. Generally speaking, it is best to choose a frame rate and stick with it for the whole project. The obvious reasons are less hassle, the interlacing issue (which will be present on some frames of the 24p footage because of the 3:2 pulldown applied to it), and the fact that 24p and 30p footage don't look quite the same, especially when it comes to motion. |
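As a small illustration of why 24p footage picks up some interlacing on a 29.97 timeline: modeling each 60i frame as a (top, bottom) field pair, the 3:2 cadence leaves two of every five output frames carrying fields from different source frames (a toy model; the names are made up):

```python
def pulldown_32(frames):
    """Standard 3:2 (2:3) cadence: A,B,C,D -> AA, BB, BC, CD, DD."""
    out = []
    for a, b, c, d in zip(*[iter(frames)] * 4):
        out += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return out

video = pulldown_32(list("ABCD"))

# Frames whose two fields come from different source frames are the ones
# that show combing/interlacing artifacts when 24p sits on a 29.97 timeline.
split = [i for i, (top, bottom) in enumerate(video) if top != bottom]
# split == [2, 3], i.e. the BC and CD frames in every group of five
```

The 30p footage has no such split frames, which is part of why the two frame rates never look quite the same when cut together.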
24p Footage
http://www.paganweirdo.com/org.wmvx
Rename and remove the X to play. Rough footage shot in Oregon on a hazy day. But if you're looking to buy an XL2, I figure all the samples you can look at may help. |
David,
Thanks for sharing the beauty of Oregon and your 24p footage with us. Reminds me that I'm overdue for a vacation from smoggy, flat, traffic-clogged Houston! |
Another XL2 question
You're playing back a tape on a TV using the RCA cable. You shot the video in 24p. What are you seeing on your TV? I'm guessing 60i. Thanks, dp
|
That's right. It doesn't matter what mode the camera was set to during recording; the playback is always 720 x 480 60i. The way in which the frames you exposed are distributed over the 59.94 playback fields per second does depend on the recording mode. That's been discussed here many times.
|
It's 60i but it looks like 24p. That's the whole point. The camera head does the frame rate and the progressive scan, but the tape transport records sixty interlaced fields no matter what -- because if it didn't, then you wouldn't be able to play it back on any conventional NTSC monitor.
|
DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network