DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   All Things Audio (https://www.dvinfo.net/forum/all-things-audio/)
-   -   Mic solutions - on/off camera - TC synching (https://www.dvinfo.net/forum/all-things-audio/36541-mic-solutions-off-camera-tc-synching.html)

Gints Klimanis July 6th, 2007 02:45 PM

Quote:

Originally Posted by Ron Priest (Post 708045)
You're welcome. Sorry, I haven't tested the LanC. BTW, how many audio tracks do you need to cover a martial arts match?

I usually use a stereo microphone with a camera on a monopod, so it's easy to get in close. A few times, I've tried wireless microphones on the players, and that adds so much to the video. Now I want to record both the stereo microphone and the body mics. When the players move around, the body-mounted mics deliver audio that is often channel-swapped or in the middle of crossing over. So I want to record four channels. In a recent tournament situation, I really wanted some remotely recorded audio from the other side of the mat. So four is a good number, although I'd also like a stationary microphone in the future. Lots of audio.

Gints Klimanis July 6th, 2007 02:48 PM

Quote:

Originally Posted by Seth Bloombaum (Post 708001)
Yes, in all cases it is essential to have reference audio on the camera. This is best done with a feed from the recorder, but all the synching I've done has been with reference audio from the camera mic.

Thanks for all of your great advice, Seth. I'd rather not use the clapper for TC, so I'm probably still better off with the TC-capable R-4 Pro. I'll feed an audio reference to the camcorder, but I'd rather it be two of the four channels I'm actually recording. Gotta break out the manual and check whether the R-4 Pro can route just two of the four input channels to the output mix.

Peter Moretti July 10th, 2007 02:05 AM

Quote:

Originally Posted by Ty Ford (Post 706861)
Steve,

There's a place for you in Heaven...

There sure is. THANKS SO MUCH!

Steve House July 10th, 2007 02:59 AM

Quote:

Originally Posted by Gints Klimanis (Post 708082)
Thanks for all of your great advice, Seth. I'd rather not use the clapper for TC, so I'm probably still better off with the TC-capable R-4 Pro. I'll feed an audio reference to the camcorder, but I'd rather it be two of the four channels I'm actually recording. Gotta break out the manual and check whether the R-4 Pro can route just two of the four input channels to the output mix.

Even if your recorder is timecode capable, it doesn't do you any good for syncing unless you have some method of getting identical timecode into BOTH the picture and the sound files as you shoot. That leaves a couple of options: 1) if the camera has a timecode output (the Z1 doesn't), you connect it to the recorder's timecode input so the same code recorded in the camera is also recorded with the audio; or 2) the audio recorder feeds its timecode to a smart slate, so the slate displays the code the recorder is generating, and the slate is photographed at the start of each take to record that timecode into the video.

The bottom line is that if the timecode recorded with picture and the timecode recorded with sound aren't identical to within a few milliseconds of each other, it's useless for syncing beyond getting you into an approximate ballpark. You still have to figure out how to match a known point in the audio file to the precise frame in the picture where that sound occurs. If you don't have a camera with timecode I/O, you've got to use a slate. A smart slate with displayed timecode lets you align using the code. A dumb slate with just clapper sticks lets you align on the 'bang', and any timecode in the audio is superfluous. Just recording timecode generated in the audio recorder, without using a slate, accomplishes nothing at all.
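
If camera and recorder really do carry matching code, the placement math in the editor is just the difference of the two start timecodes. Here is a minimal Python sketch of that point (not from the thread; the frame rate and timecode values are made up, and drop-frame code would need extra handling):

    # Hypothetical example: both the video clip and the audio file carry timecode
    # from the same (jam-synced) source, so the audio's placement offset is simply
    # the difference of the two start timecodes.

    def tc_to_seconds(tc: str, fps: float) -> float:
        """Convert non-drop-frame 'HH:MM:SS:FF' timecode to seconds."""
        hh, mm, ss, ff = (int(x) for x in tc.split(":"))
        return hh * 3600 + mm * 60 + ss + ff / fps

    def audio_offset_seconds(video_start_tc: str, audio_start_tc: str, fps: float = 25.0) -> float:
        """Where to place the audio, in seconds, relative to the video clip's start."""
        return tc_to_seconds(audio_start_tc, fps) - tc_to_seconds(video_start_tc, fps)

    # Audio started rolling 2 seconds and 10 frames after the camera:
    print(audio_offset_seconds("10:15:30:00", "10:15:32:10"))  # -> 2.4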

Seth Bloombaum July 10th, 2007 10:18 AM

Quote:

Originally Posted by Steve House (Post 709639)
Even if your recorder is timecode capable, it doesn't do you any good for syncing unless you have some method of getting identical timecode into BOTH the picture and the sound files as you shoot....
...The bottom line is that if the timecode recorded with picture and the timecode recorded with sound aren't identical to within a few milliseconds of each other, it's useless for syncing beyond getting you into an approximate ballpark. You still have to figure out how to match a known point in the audio file to the precise frame in the picture where that sound occurs...

I have to disagree. Manually setting time-of-day code on unsynchronized devices has gotten me within a half-second over a typical 3-4 hour shoot, which has proven to be straightforward to resolve and lock in post using camera reference audio.

I say this because I've done it at least a couple dozen times, sometimes without timecode on the audio recording (hour-long takes).

This solution has served me well for event-oriented projects. I'd find it much too tedious for short takes, and would rent a TC slate as Steve suggests above. Tape is cheap though, and another strategy for short takes is "don't stop rolling".

Far from "useless", this has been a very useful workflow for prosumer gear. Your results may vary. Certainly I'd advise trying it before betting the farm.

And yes, I'm well familiar with typical sync and timecode technology, having started with pre-timecode 2" quad, crystal sync, and bloop lights, then seeing 1" type C support the first timecode-assisted editing, and so on.
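
For perspective, Seth's "within a half-second over a typical 3-4 hour shoot" works out to the two clocks agreeing to roughly 40 parts per million (manual-set error plus crystal drift combined). A quick back-of-the-envelope, with the shoot length and frame rate below chosen purely for illustration:

    # Rough arithmetic only: what does half a second of error over a 3-4 hour
    # shoot imply? The numbers are illustrative, not measured.
    drift_seconds = 0.5      # worst-case offset by the end of the shoot
    shoot_hours = 3.5        # middle of the 3-4 hour range
    fps = 29.97

    shoot_seconds = shoot_hours * 3600
    ppm = drift_seconds / shoot_seconds * 1e6
    frames_off = drift_seconds * fps
    print(f"~{ppm:.0f} ppm clock agreement, ~{frames_off:.0f} frames to resolve in post")
    # -> ~40 ppm clock agreement, ~15 frames to resolve in post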

Steve House July 10th, 2007 10:46 AM

Quote:

Originally Posted by Seth Bloombaum (Post 709810)
I have to disagree. Manually setting time-of-day code on unsynchronized devices has gotten me within a half-second over a typical 3-4 hour shoot, which has proven to be straightforward to resolve and lock in post using camera reference audio.

I say this because I've done it at least a couple dozen times, sometimes without timecode on the audio recording (hour-long takes).

This solution has served me well for event-oriented projects. I'd find it much too tedious for short takes, and would rent a TC slate as Steve suggests above. Tape is cheap though, and another strategy for short takes is "don't stop rolling".

Far from "useless", this has been a very useful workflow for prosumer gear. Your results may vary. Certainly I'd advise trying it before betting the farm.

And yes, I'm well familiar with typical sync and timecode technology, having started with pre-timecode 2" quad, crystal sync, and bloop lights, then seeing 1" type C support the first timecode-assisted editing, and so on.


Guess my reflexes are too slow <grin> to manually set two separate TOD clocks close enough to be of much use. The technique some multicam shooters use when all the cams are identical - arranging them so they can all see the same remote control, setting each camera's clock to the same value without completing the final entry, then confirming the setting on all of them at once with the remote - seems viable, but just barely, and it wouldn't work with devices like a Sony cam and an SD recorder that don't share a common remote. Dropping a BWF file into the project so its timestamp aligns to the project timeline is a no-brainer, but you still have the problem of locating the precise frame in the video that matches an event in the audio and slipping it back and forth until they align. The distinctive audio shape of a 'clap' is easy to spot as you scan along the waveform, but if you're going to have to look for it anyway, what's the point of the timecode?
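
The BWF timestamp Steve refers to lives in the file's bext chunk as a 64-bit sample count since midnight, which is what the NLE uses to spot the file on the timeline. A sketch of reading it with nothing but the Python standard library (the file name is a placeholder and error handling is minimal):

    import struct

    def bwf_start_seconds(path: str) -> float:
        """Return the BWF bext TimeReference as seconds since midnight."""
        with open(path, "rb") as f:
            riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                raise ValueError("not a RIFF/WAVE file")
            sample_rate = None
            time_reference = None
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                pad = chunk_size & 1          # chunks are word-aligned
                if chunk_id == b"fmt ":
                    data = f.read(chunk_size + pad)
                    sample_rate = struct.unpack_from("<I", data, 4)[0]
                elif chunk_id == b"bext":
                    data = f.read(chunk_size + pad)
                    # TimeReferenceLow/High follow the fixed-size text fields
                    # (256 + 32 + 32 + 10 + 8 = 338 bytes into the chunk).
                    low, high = struct.unpack_from("<II", data, 338)
                    time_reference = (high << 32) | low
                else:
                    f.seek(chunk_size + pad, 1)
            if sample_rate is None or time_reference is None:
                raise ValueError("missing fmt or bext chunk")
            return time_reference / sample_rate

    # print(bwf_start_seconds("take01.wav"))   # e.g. 37212.5 -> started at 10:20:12.5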

Seth Bloombaum July 10th, 2007 04:25 PM

Quote:

Originally Posted by Steve House (Post 709826)
Guess my reflexes are too slow <grin> to manually set two separate TOD clocks close enough to be of much use...

Not hard to get within about 10 frames. Sometimes I do a countdown with the audio or camera op: we count down to a preset time - 3, 2, 1, mark - and hit the button at the same time. It's close.

Other times, I'll just use the second hand on my watch and do cameras & recorders one at a time. Takes a couple minutes per device.

Quote:

Originally Posted by Steve House (Post 709826)
...some multicam shooters use ... arranging them so they can all see the same remote control, setting each camera's clock to the same value without completing the final entry, then confirming the setting on all of them at once with the remote ... seems viable, but just barely...

I've done it; it works on some cameras and not on others. Precise when it works (within a frame).

Quote:

Originally Posted by Steve House (Post 709826)
...Dropping a BWF file into the project so its timestamp aligns to the project timeline is a no-brainer, but you still have the problem of locating the precise frame in the video that matches an event in the audio and slipping it back and forth until they align. The distinctive audio shape of a 'clap' is easy to spot as you scan along the waveform, but if you're going to have to look for it anyway, what's the point of the timecode?

Now this is the real question - how well does the workflow work? Once you're aligned within a half-second or so, you play the reference camera audio track against the audio you're looking to sync, and slide the audio left and right to eliminate echo.

As it turns out, the human ear is quite discriminating about echo. I suppose it's because small timing differences have everything to do with how we perceive directionality of sound. Some NLEs let you slip the audio track by less than a frame, which also helps. Very helpful if you can slip while previewing.

If you do have a clap in the track, or other visually distinctive waveform, of course that helps as well; zoom in and line it up.

The point of time-of-day code in this exercise is that it quickly gets you close enough to use the visual and echo methods. I've done it without code (4 cameras with very rough code and a 24-track recording with no code); it's a little more finicky, but quite possible for long takes.
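
If you'd rather not hunt the residual offset by ear, cross-correlating the camera's reference track against the recorder track finds it automatically - a different tool for the same step Seth describes. A sketch assuming NumPy/SciPy, mono arrays at the same sample rate, and made-up names:

    import numpy as np
    from scipy.signal import correlate

    def residual_offset_seconds(cam_ref: np.ndarray, recorder: np.ndarray, sample_rate: int) -> float:
        """Seconds to slip the recorder track; negative means slide it earlier."""
        cam = cam_ref - cam_ref.mean()
        rec = recorder - recorder.mean()
        corr = correlate(cam, rec, mode="full")
        lag = np.argmax(corr) - (len(rec) - 1)
        return lag / sample_rate

    # Synthetic check: the "recorder" copy is 0.3 s late relative to the camera.
    if __name__ == "__main__":
        sr = 48000
        rng = np.random.default_rng(0)
        cam = rng.standard_normal(sr * 5)                        # 5 s of reference audio
        rec = np.concatenate([np.zeros(int(0.3 * sr)), cam])[: sr * 5]
        print(round(residual_offset_seconds(cam, rec, sr), 3))   # -> -0.3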

Steve House July 10th, 2007 08:13 PM

Quote:

Originally Posted by Seth Bloombaum (Post 710017)
...
Now this is the real question - how well does the workflow work? Once you're aligned within a half-second or so, you play the reference camera audio track against the audio you're looking to sync, and slide the audio left and right to eliminate echo.

As it turns out, the human ear is quite discriminating about echo. I suppose it's because small timing differences have everything to do with how we perceive directionality of sound. Some NLEs let you slip the audio track by less than a frame, which also helps. Very helpful if you can slip while previewing.

If you do have a clap in the track, or other visually distinctive waveform, of course that helps as well; zoom in and line it up.

The point of time-of-day code in this exercise is that it quickly gets you close enough to use the visual and echo methods. I've done it without code (4 cameras with very rough code and a 24-track recording with no code); it's a little more finicky, but quite possible for long takes.

Well, I have to confess, in my comments on timecode I've habitually been thinking of a classic, film-style, double-system workflow where there is no audio reference track recorded with the video: the camera does picture and picture only, and the recorder does audio. If I can coin the term, the sort of 'hybrid' system you're using does have an alignment tool that a classic double system would not have, namely the reference audio track recorded in camera.

But I think it's worth mentioning, for readers who may have missed it, that there are some cautions in using the hybrid workflow. The biggest one off the top of my head is that the audio recorder and the camera audio tracks are best both fed splits of the same signal at the mixer. Running some lavs on the talent or a closely held boom to the recorder for production sound while using an on-camera shotgun for the reference track (as some might be tempted to try) could create headaches due to arrival-time differences between picture, production sound, and reference sound. Depending on the distance from talent to camera, slipping the production sound to remove the echo when mixed with the reference sound could result in the production sound actually being OUT of sync, because at longish distances the reference track itself won't be in sync with picture - sound travels roughly a foot per millisecond, so each 30 feet or so of subject-to-reference-mic distance delays the sound by about 1 frame with respect to picture. Sending two feeds from the mixer, one to the camera and the other to the production sound recorder, eliminates that issue.
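
The rough numbers behind that last point: sound covers a bit over 1,100 feet per second, so 30 feet of mic distance is a bit under one frame at common frame rates. A quick check (the speed of sound and frame rates below are the usual approximations, not from the thread):

    # Rough check of the 30-feet-per-frame rule of thumb; values are approximate.
    SPEED_OF_SOUND_FT_PER_S = 1125.0   # ~20 C air, roughly a foot per millisecond

    def frames_of_delay(distance_ft: float, fps: float) -> float:
        return distance_ft / SPEED_OF_SOUND_FT_PER_S * fps

    for fps in (29.97, 25.0):
        print(f"{fps} fps: 30 ft = {frames_of_delay(30, fps):.2f} frames, "
              f"one frame = {SPEED_OF_SOUND_FT_PER_S / fps:.0f} ft")
    # 29.97 fps: 30 ft = 0.80 frames, one frame = 38 ft
    # 25.0 fps: 30 ft = 0.67 frames, one frame = 45 ft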

Seth Bloombaum July 10th, 2007 11:01 PM

Well.... right. Easier when every device sees the same reference audio.

However, any time you go 30 feet away with your camera-mounted mic you're a frame off. That's perceptible for most people, but we can do better!

If lip-sync looks off, fix it. If all of this were easy, then everybody would be editing their own projects.

Oops, they are :)

Perhaps Steve's cautions are appropriate; some of this is harder than straight-ahead editing in the NLE of your choice. With some smarts and a lot of persistence, this prosumer gear can support untethered double-system sound.

Then again, with full genlock and master code it isn't all that much easier, and it's a whole lot more expensive.



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network