What codec are you editing?
Just curious to know some of the workflows you all are using with the footage from this camera. For those shooting in 1080p HQ, are you editing in the native XDCAM codec, or are you transcoding to something else for any particular reason?
|
For just about everything, I convert to ProRes HQ after I import the raw footage. It makes things a little sharper (and faster) when I edit and grade in Final Cut Studio's 'Color'.
|
XDCAM. Seems to work great and I love how little space it uses on the drive. Also, it's nice to just have one copy of everything to keep track of.
|
I also use the original XDCAM files. Importing into FCP is so easy and I can start editing immediately. That's one of the reasons I switched from Avid/Windows to Final Cut/Apple.
|
Quote:
Therefore one edits quickly, no footage gets transcoded unless it needs to be, and you've got a frame-based codec just when you need it. |
Use XDCAM EX settings in FCP.
|
Matt, yes I have, and it's a great option to use. It's just that when I grade my footage I tend to add a very soft, gradual vignette to my shots, so I want to make sure there's the least amount of banding possible. That's why I convert all my footage to ProRes. And then, once in Color, I work in 32-bit floating point.
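[Editor's note: a quick illustrative sketch of the banding concern above. A soft vignette only sweeps brightness by a few percent, and that narrow range maps to very few code values at 8 bits. The 0.70-0.75 range here is an arbitrary stand-in for a gentle vignette, not data from any particular codec:]

```python
def distinct_levels(depth_bits, lo=0.70, hi=0.75, samples=2000):
    """Quantize a smooth luminance ramp from lo to hi at the given bit
    depth and count how many distinct code values it lands on."""
    maxcode = (1 << depth_bits) - 1
    codes = {round((lo + (hi - lo) * i / samples) * maxcode)
             for i in range(samples + 1)}
    return len(codes)

# A 5% brightness sweep lands on only about a dozen steps at 8 bits,
# but roughly four times as many at 10 bits. Fewer, coarser steps
# across a smooth gradient are exactly what show up as banding.
levels_8 = distinct_levels(8)
levels_10 = distinct_levels(10)
```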
|
Justin,
Wouldn't you get the same benefit from just going into Color in XDCAM and coming out in ProRes? Less conversion required? I haven't really done much with the EX1 in Color yet but have found this basic concept to work for other codecs. |
Quote:
In any case, while what you are suggesting works well enough when supported, you are imposing something of a processing burden on your system. ProRes decodes more easily, and there is hardware to accelerate it. It's a give and take, though: you have to evaluate your circumstances and decide what makes sense for your workflow. I like ProRes all the way through the pipeline for everything where that is practical, starting at acquisition if I can. Of course, in some cases, like covering a school play, XDCAM HQ is overkill, so there isn't any point in playing around with "better" codecs. Then there are situations where data storage is an issue. I could go on in either direction. |
Color does indeed do XDCAM on the input side.
|
Quote:
I notice you are from the West Midlands - many, many years ago I got pissed in Sutton Coldfield. Just thought you might like to know. |
Quote:
So, that being the case... you are saving processing power and that is all. On new, fast machines it may not be an issue at all. One thing I noticed: an ambiguity which may conceal a hidden assumption in your earlier statement about codecs. You are striving to minimize "conversion." There are three reasons to concern yourself with that. One is time; I can't speak to that, as it depends entirely on your system and deadlines. The second is storage capacity; again, your system, etc. If either of these is your concern, then you'll find I am being pedantic. Sorry if that is the case. The third is "generational loss." (More properly, it's accumulated rounding errors due to color representation conversions... but people never say that.) If you are using a "quality" modern codec like Cineform, DNxHD or ProRes, it shouldn't cause visible generational deterioration until the tenth or later generation. I mean it: even "golden eyed" viewers staring at material coming from a D-Cinema projector shouldn't see differences until at least that many generations. So bringing your data into these codecs as early in the process as practical helps preserve data. This is especially true for a typical acquisition codec like XDCAM, which is 8-bit, has a paucity of color data, and is highly compressed. Again, sorry if I am being pedantic. Hope it gives you something to consider if you are in the third case. |
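[Editor's note: the "accumulated rounding errors" point can be sketched numerically. This toy Python model quantizes after every processing pass; the 1.037 gain is an arbitrary stand-in for a color-space round trip, not any real codec's math:]

```python
def avg_drift(depth_bits, gens=10, gain=1.037, samples=500):
    """Push many pixel values through `gens` gain/inverse-gain round
    trips, quantizing to the given bit depth after each step, and
    return the average absolute drift from the starting value."""
    maxcode = (1 << depth_bits) - 1
    total = 0.0
    for i in range(samples):
        v0 = v = 0.9 * (i + 0.5) / samples   # keep values in range
        for _ in range(gens):
            v = round(v * gain * maxcode) / maxcode   # "encode" pass
            v = round(v / gain * maxcode) / maxcode   # "decode" pass
        total += abs(v - v0)
    return total / samples

# After ten generations the 10-bit pipeline drifts noticeably less
# than the 8-bit one, roughly in proportion to its extra code values.
drift_8, drift_10 = avg_drift(8), avg_drift(10)
```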
Quote:
And, since any effect one adds to XDCAM EX is -- in reality -- being added to 4:4:4 uncompressed video, there is no need for any other codec in a workflow. The choice of how many bits are used for FX rendering is determined by the setting you choose in FCP; it is not determined by the source codec. However many bits are in the source are simply used when 32-bit rendering is enabled. Only when the 32-bit result is down-converted to the OUTPUT codec does the format play a role. Since ProRes is used for OUTPUT, you automatically get 10 bits. Moreover, by waiting until an effect is actually applied to a clip, the needless transcoding from MPEG-2 to ProRes is avoided: you save the compress and decompress to and from ProRes. There is, no matter how you work, a decode of MPEG-2. |
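[Editor's note: a tiny sketch of the rendering model described above, with hypothetical 8-bit source values mixed in floating point and quantized only once, at a 10-bit output stage:]

```python
def dissolve_to_10bit(a8, b8, t):
    """Mix two 8-bit code values at position t of a cross-dissolve,
    computing in float and quantizing once, at the 10-bit output."""
    mixed = (1.0 - t) * (a8 / 255.0) + t * (b8 / 255.0)
    return round(mixed * 1023)

# Halfway between adjacent 8-bit codes 100 and 101 lands on 10-bit
# code 403 -- a value no 8-bit pipeline could represent (the sources
# themselves map to roughly 401 and 405 in 10-bit terms).
out = dissolve_to_10bit(100, 101, 0.5)
```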
Quote:
That is not a good assumption. Even very high-end computers can only handle a fixed number of video streams. There are two other ways in which a computer is limited. The first, which I refer to, is CPU limiting. That still happens on a lot of systems with HD video. The second is decoded bandwidth limits. You may only be storing 35 Mbps on disk, but once the computer decodes it, you essentially have to sling uncompressed video around the various system busses and out the video ports. Finally, it isn't really an "if": ProRes decodes faster than just about any long-GOP format on the same hardware, and that leaves aside issues of hardware acceleration. Quote:
Any effect you add to a timeline is calculated on uncompressed video... but at the bit depth and chroma sampling of the source sequence. Motion, Shake, After Effects and other graphics/compositing software behave differently: they do everything at 4:4:4 uncompressed, then throw away data right before output, exactly as you suggest. Quote:
On most machines this exacts a huge performance penalty, which is why Apple included it as an option and set the default as it did, only using high-precision YUV on high-precision YUV source material. Examining this high-precision YUV, what actually happens is that FCP (it's the same in any editor) will calculate in a higher bit depth, but with the same chroma subsampling as the source codec. So, for XDCAM HQ material, which is sampled at 8-bit 4:2:0, you merely gain the advantages of using 10-bit 4:2:0, but with 8-bit data. If you transcode to ProRes, you still have 8-bit 4:2:0 data, but the editor will operate on it as 10-bit 4:2:2 data. The question is how useful the extra headroom will be. In a lot of cases it won't be useful at all. If you are using XDCAM EX HQ footage, then you should dial back the settings in FCP most of the time. The same reasoning means that you shouldn't bother transcoding into a better codec most of the time. When is it useful? When your footage is going to be handled time and again in your post workflow, i.e. your typical film workflow. Of course, in these workflows you will usually acquire in a better format. Indie films are often acquired with ENG-quality codecs, because that's what folk can get. In these cases, transcode immediately to a better codec. A lot of users here don't have anything near that complex, so it isn't useful: a lot of disk space and CPU power get burned for no reason. It literally won't make any visible difference for a school play (as an example). |
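[Editor's note: to put numbers on the decoded-bandwidth point made earlier in this post. The frame rate and decoded sampling here are assumptions for illustration:]

```python
# 35 Mb/s on disk vs. the uncompressed stream the system buses must
# carry after decode, assuming 1920x1080 at 30 fps decoded to
# 8-bit 4:2:2 (about 2 bytes per pixel).
width, height, fps, bytes_per_px = 1920, 1080, 30, 2
decoded_mbps = width * height * bytes_per_px * 8 * fps / 1_000_000
ratio = decoded_mbps / 35   # decoded stream vs. the 35 Mb/s on disk

# decoded_mbps comes to roughly 995 Mb/s -- nearly 30x the stored
# data rate, which is why "only 35 Mbps" understates the load.
```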
Quote:
1) 32-bit processes both 8-bit and 10-bit source files. 2) Because of the longer computation time for 32-bit, Apple recommends turning it on only for the final render(s). How can multiple 8-bit values be computed in only 8 bits? The intermediate calculations must be performed in 16 bits. Likewise, 10-bit values can't be computed with only 10-bit intermediates. Something seems wrong in Apple's description. PS: I wonder what happens when 8- and 10-bit sources are mixed?
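[Editor's note: the point about intermediates can be seen with plain integer arithmetic. Multiplying two 8-bit code values needs up to 16 bits before renormalizing; this is a generic illustration, not a description of Apple's actual pipeline:]

```python
a, b = 200, 180          # two 8-bit code values
product = a * b          # 36000: needs 16 bits to hold
wrapped = product & 0xFF # what survives if only 8 bits are kept
scaled = (product + 127) // 255  # correct 8-bit result after rescaling

# product = 36000, wrapped = 160 (garbage), scaled = 141 (correct):
# without the wide intermediate, the right answer is unreachable.
```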
|
Quote:
|
Pro Res Question
First, please let me say that compared to guys like Steve Mullen and Alexander Ibrahim, I consider myself to be a relative thickie. 90% of this passes so far over my head that I look at you guys in wonder...
When I edit EX1 HQ footage with FCP, I myself stay with the native codec. I ran a small test whereby I brought a 5-second video of a flag into FCP, then rendered the file out as H.264, ProRes, and Photo JPEG. All render settings were set to best, or as unrestricted as possible, to allow each codec to produce the highest-quality file. In every case I found the rendered file's quality to be pale in comparison to the original. If any FCP users would like to take a look at the results of my flag test, the raw, H.264, ProRes, and Photo JPEG versions are posted on my FTP site at Host: download.gotfootage.com User: vegas Pass: demo Port: 21. Or you can paste the following link into Safari: ftp://vegas:demo@download.gotfootage.com I would very much like to know if anyone else feels the same as I do about the rendered video files. Test file names: flag.raw.mov flag.H.264.mov flag.ProRes.mov flag.pjpg.mov David Schmerin www.GotFootageHD.com |
Quote:
You can set rendering precision. 10-bit material is rendered in 32-bit floating point (FP), and you can force 8-bit material to render in 32-bit FP. Now, I think some confusion sets in because FCP creates all GENERATED imagery in RGB at 10 bits per channel (32-bit RGB). Another point that might confuse a reader is that in Volume 3, page 650, the manual says, "By default, render files are created at full quality, but you can speed up rendering by choosing lower-quality options in the Render Control tab and the Video Processing tab of the Sequence Settings window." This has nothing to do with bit depth or chroma subsampling. Finally, if you are stupid just like me, as opposed to in your own peculiar way, then you may have confused material you read in the Shake manual with stuff you read in the FCP manuals. Man, that still drives me crazy. If you do a crossfade, that's done in the color space of the sequence. If you want that to happen in 4:4:4, then you have to have a 4:4:4 sequence. This is standard amongst NLEs: Quantel, Avid, FCP, you name it. It is a smart optimization by the software engineers: the majority of "rounding errors" will result in color levels that are unsupported, not colors out of gamut, even in 4:1:1 or 4:2:0 color spaces. So extra bits cost CPU cycles, and that makes a difference, while spending cycles on an enlarged color space is simply wasted. You are limited by the output codec, so never bother exceeding its capabilities. The relevant section is in the Final Cut Pro manual, Volume 3, page 659. If you are going to 32-bit processing, though, you may want to read Volume 3, pages 241-252, which contain notes on which effects are disabled when you are working in high-precision YUV modes. HINT: most of them. Compositing software, like Shake, NUKE or Flame, works in 32-bit FP all the time. Of course they work in RGB space, causing a whole different set of headaches... but hey, that's the game we are in, right?
If you need an NLE that works this way, then you really need a "finishing system." I think Avid DS, Autodesk Smoke, Autodesk Fire, and anything from Quantel are the only NLEs right now that can work in 10-bit with all features all the time. Avid Symphony Nitris is mostly 8-bit, but it offers 10-bit titling - sorta like FCP, just wicked fast. Of course, an 8-core Mac is also wicked fast - but Nitris can be wicked faster. Enjoy |
Quote:
Well just keep on working- this stuff just seeps into your head as you slog through project after project. Quote:
Check your monitor calibration. Also make sure that high quality is set in the QuickTime movie properties panel. (I think it's that way by default in QT7, but check it.) |
Hi,
I would like to ask, then: what is the right setup? Stay in XDCAM-EX (4:2:0) or go to ProRes (4:2:2)? Or maybe some other format? And how? Is it just a matter of changing the sequence setting? Is this workflow good: import XDCAM-EX into FCP, use the Media Manager to re-compress as ProRes (4:2:2 1920x1080 HQ), then create a new sequence and set it to ProRes? Will this improve the plugins/color correction you may do? What will go to Color then? Will you get the same quality when coming back to FCP? I asked these kinds of questions, of myself first, with the manual, then on some forums. Nobody has been able to answer yet. I may not have the time nor the experience to try all this myself. Thanks. |
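[Editor's note: for the 4:2:0-vs-4:2:2 part of the question, a quick sketch of how many samples each scheme carries per 1920x1080 frame. Sample counts only; this says nothing about which will look better after grading:]

```python
def samples_per_frame(width, height, scheme):
    """Total luma + chroma samples per frame. 4:2:2 halves chroma
    horizontally; 4:2:0 halves it both horizontally and vertically."""
    chroma_per_luma = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[scheme]
    return int(width * height * (1 + chroma_per_luma))

s444 = samples_per_frame(1920, 1080, "4:4:4")  # 3x the luma count
s422 = samples_per_frame(1920, 1080, "4:2:2")  # 2x the luma count
s420 = samples_per_frame(1920, 1080, "4:2:0")  # 1.5x the luma count
```

Note that, as the replies above point out, putting 4:2:0 source material in a 4:2:2 sequence does not create new chroma detail; it only changes how the editor processes what is already there.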
Quote:
Import your footage as XDCAM. Drop it in a sequence that is set to ProRes, DNxHD, Cineform or some other high-quality codec. That's it. You will get VERY good results that way. Chances are you can stop reading now. IF you plan on extensive adjustment or alteration in post production, then you should get your material into a high-quality codec as early in the production chain as possible. Before I go into that, I need you to realize that I am talking about an edge case. I do that a LOT on this forum. 90% of videographers should never use any of the information that follows. Every time you do any effect, from a crossfade, to adding titles, to heavy compositing and DI, you reduce the quality of your image, especially if you are working across different facilities with lots of generations of processing. These things are bad for the technical quality of the image, but are essential to achieving our practical results. Balance is the key. The very best thing you can do is skip XDCAM altogether and just use a superior codec from acquisition. If you can hook the camera up to a higher-end capture solution, like the Convergent Design Flash XDR, the AJA Io HD, or the proposed Cineform solid-state recorder, those are all excellent options. I guess (wildly) that about 10% of the EX1's users will need to do this. If you know your footage is going to be "abused," then uncompressed capture over SDI is the right thing. I estimate that 99% of users of the EX1 will never EVER need uncompressed, and for the remaining 1%, less than 2% of their footage will need to be captured uncompressed. The rest falls into the "capture with a better codec" category above. Failing that, you will get MARGINALLY better results by transcoding to a high-quality codec. Encoding to XDCAM already threw away a LOT of sensor data, and there is nothing you can do to get it back. You are trying to stop the bleeding. Often we end up back at square one: the best way to stop the bleeding is to leave the footage alone.
If you follow this path, you have to consider what is going to happen to every bit of your footage. The goal is to transcode it as few times as practical, and then to the codecs most suitable for continued work on that bit of footage. It is hard to draw this line, and only lots of experience will help you draw it intelligently. In the meantime, try what you need to do with the footage in its source codec; when that fails, try improving the codec. Notice when this helps and try to figure out why. HINT: it doesn't help that often. If your footage will go through ten or more generations of processing, then you need to consider converting to uncompressed. |
DV Info Net -- Real Names, Real People, Real Info!
© 1998-2025 The Digital Video Information Network