September 3rd, 2011, 02:34 PM | #1 |
Regular Crew
Join Date: May 2006
Location: Victoria, BC
Posts: 64
Comparing Two Methods of Capturing Beta SP
I don't have an AJA card or IO HD box... yet. So for now, I'm capturing some old Beta SP footage in the following clunky manner: via the S-video port on a Sony PVW-2800 Beta SP deck straight into my old Canon GL2 camcorder. (I'll spare you the description of how I'm simultaneously capturing the audio.) Once it's on a Mini DV tape (both sound and picture, in sync), I then capture it in real time to FCP.
So... sell me on the value of getting an AJA card or an IO HD box. Obviously, I could capture the Beta SP footage in one step instead of two... so there's the time saving. But would I notice ANY quality improvement, assuming I'm feeding the component outputs of the Beta SP deck into the component inputs of the AJA device? (If it matters, most of the footage I'm capturing is news/documentary type stuff, and it's unlikely to be subjected to much digital manipulation afterwards, in which case... do I need 10 bit? Will I notice any difference?) -dB
September 3rd, 2011, 11:45 PM | #2 |
AJA
Join Date: Apr 2009
Location: Grass Valley, CA
Posts: 252
Re: Comparing Two Methods of Capturing Beta SP
Dale B. - Good questions. Let's take AJA out of the mix for a moment and concentrate only on the signals and compression you are currently dealing with and compare those to what you could be dealing with.
S-Video (more specifically, Y/C), is just a luminance and chrominance defined video signal; it's one small step better than composite video... where it's all just mashed together (sigh). Compare this to component video where you have luminance and color difference (Y, R-Y, B-Y) defined. This is essentially the backbone of quality signal as we know it in a 4:2:2, chroma subsampling, world. So, the first important benefit that you would derive is likely an overall improvement in picture reproduction by using the component video output on your VTR. Component video from your Sony PVW-2800 is as good of a signal as that device can provide, and it's actually quite good. Betacam SP was the standard in the industry for many years; it's probably a big part of why you still have such a VTR. Let's look at the other thing you are battling: compression. NTSC DV is not the greatest compression scheme when trying to reproduce the quality originally achieved on your tape. It's an 8-bit compression scheme, with a fair amount of compression applied (25Mbps), that truncates color definition to 4:1:1; the result can be described as picture "smearing" because color definition has been decreased. Nobody likes to perform color correction or chroma key work on NTSC DV material if they have a choice because of this type of thing. Also keep in mind that should you re-compress DV encoded footage to another codec, it often looks... well... "not so nice." The best bet with post production processes is to achieve a relatively low level of compression to produce a visually lossless copy of your source; do the least harm (if any) possible to your source footage. Another thing to consider is the quality of the audio you are achieving with your current workflow. The output of the Sony PMW-2800 VTR is analog balanced, but you need to get to unbalanced if you are feeding the DV camcorder you mention... hopefully there is at least an impedance correcting box/mixer in the signal path... even then, the best you can hope for is 16-bit 48kHz audio. A better solution would be to take the unbalanced audio directly from the VTR to an uncompressed 24-bit 48kHz recording. So having said all of this, your best bet would be to use the component output on your Sony PVW-2800 and connect it directly to a device that will produce a low level of compression and create 4:2:2 video and 24-bit 48kHz audio. Is 10-bit important? If you never planned on adding titles to your footage, or color correcting anything or compositing anything - and picture editing was merely cuts only - 8-bit might be sufficient. As soon as any of the tasks like color correction, compositing, titling or transitions are added to the editing process, 8-bit color specificity becomes less desirable, even if you began with an 8-bit source like Betacam SP. So, in summary, I think you would notice a difference if you used a capture card in your edit system, like an AJA KONA LHi, connected to your VTR via component video and balanced analog audio connections, and you captured to a codec with less compression (100Mbps or higher) than the DV compression you are currently working with. I hope this information is helpful... it's certainly not a sales pitch. I'm far more interested in helping you understand the benefits of a better picture quality workflow since I think that's the majority of what you were asking about. An AJA authorized resellers can "sell you on the value of getting an AJA card"... :-) Hope this is helpful, Jon Thorn Senior Product Manager AJA Video Systems, Inc. |
September 4th, 2011, 01:51 AM | #3 |
Regular Crew
Join Date: May 2006
Location: Victoria, BC
Posts: 64
Re: Comparing Two Methods of Capturing Beta SP
Thanks for your reply. With my clunky hookup, the audio does in fact end up unbalanced. It goes out the XLR outputs of the Sony Beta SP deck into a Korg MR-1000 (balanced line in), so to that point it's all good. I could record it onto the Korg's hard drive (which I would then have to re-sync), but instead I take the Korg's RCA line out and feed that to the GL2's yellow input jack. So the GL2 gets video from the S-video cable, and audio courtesy of the Korg MR-1000 converting the signal from a balanced input to an unbalanced output.
Anyway, there's much to think about. Perhaps there's an AJA in the cards for me.
September 4th, 2011, 03:39 AM | #4 |
Major Player
Join Date: Feb 2008
Location: Cardiff, Wales, UK
Posts: 410
Re: Comparing Two Methods of Capturing Beta SP
I make a lot of HD programmes that use SP and Digibeta content. All new work is HD 1080i or 25p, so I use an AJA IO-HD for uprezzing from SD-SDI to HD-SDI. Now that the future of Final Cut is down the pan, the IO-HD has a new lease of life. The main thing is having a clear conscience that, technically, the best has been done for the client in making the most of their archive material.
September 4th, 2011, 12:17 PM | #5 |
Regular Crew
Join Date: May 2006
Location: Victoria, BC
Posts: 64
Re: Comparing Two Methods of Capturing Beta SP
I agree completely with your last statement. But can you explain how FCP's shaky future gives the IO HD a new lease on life?