Using Timecode to sync MultiCam shoot


Kevin O'Connor
July 11th, 2016, 01:52 PM
I'm not sure if this is the best thread to post this but it is related to Single Person Crew.

I try to video our local ice skating show each year to make a DVD for our club. I've been working alone, but I thought I had a good plan: shoot from a different angle at each show and use multicam in my CS5.5 to edit each skater's routine. This was a lot of work before I tried PluralEyes this last weekend, but PluralEyes 4.0 does not work very well with CS5.5.

So I'm back to figuring out a process to use multi-cams and sync them better than guessing at a point in the audio tracks of each clip. Timecode has got to help with this.

I'm still working alone, and I like to video from four angles, or shoot from one angle each day of the show. PluralEyes 4.0 did not like that my file dates were different, even though the music was exactly the same. I'm hoping the timecode in my C100 will let me sync video clips that were shot on different days. The performances by the kids skating are a little off from day to day, but the soundtrack is the same.

Can I use time code to do what I want? If so, how?

Thanks, KPO.

Sam Lee
July 11th, 2016, 06:39 PM
PluralEyes' processing times are a huge time waster. It took about 2.4 days to sync a 4-hour show! Unbelievably inefficient and unacceptable in time-sensitive pro productions. I typically shoot with 12-16 cams; imagine trying to sync that with PluralEyes. I think it's suitable for smaller-scale shoots (probably 2 cams max). I gave up on it many years ago and opted for classic SMPTE TC (around since the 70s), and I've never gone back. I'm so convinced that TC is an easy but very effective syncing tool that I even have the Denecke TS-C3 smart slate. This lets me easily sync cameras that can't be jammed with external TC, such as the GoPro, GH4, DJI Osmo, etc. My sound recorder and all my cameras have synced flawlessly ever since.

Another issue with syncing by audio is the distance of the cam from the sound source. This is very common in Final Cut Pro X's audio-based syncing (free, no PluralEyes needed). Typically, if you have a rear cam, there'll be a small delay, which is a huge problem with speech scenes. The drift can be 3-6 frames, and once you get beyond 4 frames you'll see the mouth not matching the audio. With SMPTE TC, at worst you'll be off by 1-3 frames per battery change if the camera isn't rejammed at least every 5 hours. The way I see it, you can't escape the costs involved with syncing: either you waste lots of valuable editing time in post trying to get audio and video aligned, or you spend lots of $$$$ upfront on SMPTE TC gear for absolute certainty and no guessing about frame drift. There's no escaping the cost.

As long as you obey the rules and limitations of TC with camcorders, you'll be fine; if not, you'll see 1-3 frames of drift. Whenever you remove battery power, the internal oscillator will drift on pretty much all camcorders (including the most expensive ones). There are third-party solutions to this problem, but they cost thousands. You can get wireless TC, or a portable generator you carry to each piece of gear to rejam whenever there's a battery change. And of course the good old method of running long BNC cables to rejam also works great. The second rule is setting the same NDF or DF TC on all cams.
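As an aside on that last rule: drop-frame TC at 29.97 fps skips frame numbers 00 and 01 at the start of every minute, except every tenth minute, to keep the counter close to wall-clock time. That's why a mixed DF/NDF setup drifts apart on the timeline. A minimal sketch of the counting (my illustration, not tied to any particular camera):

```python
def df_timecode_to_frames(h, m, s, f):
    """Convert a 29.97 fps drop-frame timecode (hh:mm:ss;ff) to a
    running frame count. DF drops frame numbers 00 and 01 at the
    start of every minute, except minutes divisible by 10."""
    total_minutes = 60 * h + m
    dropped = 2 * (total_minutes - total_minutes // 10)
    return (3600 * h + 60 * m + s) * 30 + f - dropped

# One hour of DF timecode stays within a fraction of a frame of
# one hour of wall-clock time:
frames = df_timecode_to_frames(1, 0, 0, 0)      # -> 107892
seconds = frames / (30000 / 1001)               # ~3600 s of real time
```

NDF, by contrast, counts every frame, so an hour of NDF timecode runs about 3.6 seconds long against the clock; mixing the two modes across cameras guarantees a mismatch.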

Pete Cofrancesco
July 12th, 2016, 05:06 AM
It shouldn't be that difficult to manually sync an event that's recorded continuously. You could use a laser pointer or some kind of noise source to aid syncing.

I've used PluralEyes 3.5 on the Mac and never had the problem you describe; it uses the waveforms, not the date/time, to sync.

Audio for large venues can be a little weird, since there can be echoes/reverb, and audio can reach cameras at different times, especially if you're recording both the board feed and the ambient.

Jay Massengill
July 12th, 2016, 07:52 AM
It is important to have the cleanest possible audio signal going to all the cameras and recorders, especially if they aren't timecode capable and you can't spend the money on specialized timecode gear.

There are less expensive wireless audio kits that send audio that's good enough for sync use, without having to be perfectly drop-out free.

I use several pairs of stereo transmitters and receivers from Wi Digital. The audio quality is great, but their maximum range is limited.

There are a lot more 2.4GHz sets available now, as well as used analog wireless gear.

If you can afford real timecode gear, that's great too. If something goes wrong though, having clean audio is the best backup. Even if it's just for you as the human user during the editing process.

First, make sure all your recording devices' date and time clocks are set as closely as possible, just to stay organized.
Also figure out which devices list the file time as when recording started versus when the file was finished.

Use something visual and/or acoustic, as already mentioned, to get your starting sync point in case something goes wrong with the audio signal later. That won't help you with inter-cutting material from different days, though; for that you must rely on the audio signal. The same is true of using real timecode across days, unless the audio playback system for the event can actually generate timecode and you can access it.

Clean audio (versus ambient on-camera mic only) on all the recording devices will help both you and Plural Eyes a great deal.
Even if you can't go wireless or wired with a feed from the house system, taking some effort to use a very good external mic on each recording device will help too. Try pointing an external guide-audio mic at the closest loudspeaker instead of just the same direction as the camera is pointing.

Recording continuously also helps even if that uses up a lot of storage space and extra time in transferring material.

Depending on your cameras and editing software, stitching together multiple files that span over a long recording time may also be helpful and in some cases a necessity. Software can do that easily when preparing the files for editing, but it does double the storage space and takes some extra time. It can help greatly in editing though because you are now trimming across many fewer regions on the timeline, thus making it less likely to get out of sync accidentally when cutting out dead sections.

Bruce Watson
July 12th, 2016, 08:23 AM
...shoot from a different angle at each show...

Let me be sure I understand this -- you're using a single camera, and shooting multiple shows, using a different camera angle at each show. Yes?

...and use multicam in my CS5.5 to edit each skaters routine...

That's not really what multicam is for. Multicam is for multiple cameras (camera angles) used simultaneously for a single show.

That said, no reason you can't do this the way I think you want to, but sync will be a variable, because each performance will vary from the others.

So I'm back to figuring out a process to use multi-cams and sync them better than guessing at a point in the audio tracks of each clip. Timecode has got to help this issue.

Again, this isn't "multi-cams", it's multi-takes.

What timecode is for is when more than one camera is used to simultaneously record a single event. What you're talking about is one camera serially recording multiple events. Timecode will help you keep simultaneous recordings in sync, but it can't really help with serial recordings, because they happen at different times.

Sync for one-camera-multiple-event work is always a PITA. There's no way to automate it, because each event is slightly different. Even if you sync the music and the music doesn't vary in any meaningful way, the skaters are going to vary slightly with the music. The takeoffs and landings from jumps, for example, won't occur in exactly the same place relative to the music at each event.

About the best you can do is pick one event's music, and strip the music off the other "camera angles" after you sync the music across the "camera angles". Then, move the various camera angles as needed to sync the skater's movements as you swap back and forth from one angle to the next. You'll have to sync each cut, more or less. It'll be a lot of work, but when done well it should be more or less seamless.

Kevin O'Connor
July 12th, 2016, 09:12 AM
Thanks for all the feedback. I'll try PluralEyes 3.5; Red Giant gave me a link to download it, but I haven't tried it yet.

I think the common takeaway from everyone is that I need to video one event. I have multiple camera sources I can set up, so they will all have the same date and be recording the same performance. I'll just need to get another person to run a camera, no big deal. I do need to verify the dates are set correctly in each camera.

I'll still rely on audio sync, then. I'm not sure how to set up timecode on my other camera sources like the GoPros and the Sony NEX (manual lens).

Thanks KPO

Ron Evans
July 12th, 2016, 09:58 AM
Here are a few things to watch for when syncing different cameras. Depending on the camera and recording codec, the audio clock may not be synced with the video clock. This is particularly true for consumer DV recorders, for which the audio can be almost 2 frames out of sync with the video and still be fully within the DV spec. Secondly, depending on how far each camera is from the stage and sound source, the relationship of audio to video can also be off by several frames. A camera at stage level will likely be in sync, but one at the back of a hall 200 feet away can be around 5 frames out (audio delay, because sound travels through air so much more slowly than light). Also, over a long period the different cameras may drift because of clock differences. The NLE will take care of this for video because it will frame-sync, but the audio may not sync as well, again depending on cameras, codec, etc. So audio sync is a reference that then needs to be cross-checked against the video, as a clip synced on audio alone could be out by a frame or two.
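The distance figures above are easy to sanity-check with quick arithmetic. A rough sketch, assuming sound travels about 1,125 ft/s at room temperature:

```python
SPEED_OF_SOUND_FT_S = 1125.0   # assumption: ~343 m/s at 20 C
NTSC_FPS = 30000 / 1001        # 29.97 fps

def audio_delay_frames(distance_ft):
    """Video frames of audio lag for a mic this far from the sound source."""
    return distance_ft / SPEED_OF_SOUND_FT_S * NTSC_FPS

# A camera mic 200 ft from the stage hears the sound roughly 5 frames late:
delay = audio_delay_frames(200)    # ~5.3 frames (about 178 ms of travel)
```

A handy rule of thumb from the same numbers: sound covers roughly one foot per millisecond, so every 37 feet or so of distance adds about one frame of lag at 29.97 fps.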

Ron Evans

Kevin O'Connor
July 12th, 2016, 12:33 PM
Let me be sure I understand this -- you're using a single camera, and shooting multiple shows, using a different camera angle at each show. Yes?

(Yes, that is correct for what I thought I should try next. What I have been doing is shooting multiple shows with multiple cameras (3 stationary, 1 manned) and picking a different angle for the manned camera at each show, although the stationary cameras never caught enough of the action to be useful.)

That's not really what multicam is for. Multicam is for multiple cameras (camera angles) used simultaneously for a single show.

That said, no reason you can't do this the way I think you want to, but sync will be a variable, because each performance will vary from the others.

(I'm ok with the performances being off a little, actually the younger skaters can be way off show to show.)

Again, this isn't "multi-cams", it's multi-takes. (Yes, multi-take is a better description.)

What timecode is for is when more than one camera is used to simultaneously record a single event. What you're talking about is one camera serially recording multiple events. Timecode will help you keep simultaneous recordings in sync, but it can't really help with serial recordings, because they happen at different times.

Sync for one-camera-multiple-event work is always a PITA. There's no way to automate it, because each event is slightly different. Even if you sync the music and the music doesn't vary in any meaningful way, the skaters are going to vary slightly with the music. The takeoffs and landings from jumps, for example, won't occur in exactly the same place relative to the music at each event.

About the best you can do is pick one event's music, and strip the music off the other "camera angles" after you sync the music across the "camera angles". Then, move the various camera angles as needed to sync the skater's movements as you swap back and forth from one angle to the next. You'll have to sync each cut, more or less. It'll be a lot of work, but when done well it should be more or less seamless.

(I need to plan for a one-performance shoot next time, with two cameras max. But I can add some GoPro footage, because the skaters are okay with wearing them during the performance.)

Thanks KPO

Pete Cofrancesco
July 12th, 2016, 12:44 PM
I reread your original post; I didn't realize you were trying to put different performances together. That's problematic because the timing will differ between performances, and it's probably why PluralEyes couldn't sync them; it's not intended for that use. So yes, use multiple cameras on one performance.

You should only be mixing two tracks: the board feed (the music) and an ambient track (audience applause). So you will only use the audio from one camera; the audio from the other cameras is only needed for syncing purposes.

Jay Massengill
July 13th, 2016, 06:07 AM
If it's exactly the same music being played for performances on different days, and you are recording a direct music feed on your cameras, then you can sync shots from different performances and intermix them (within the limits of how well the performer is on time).

I do this every year with the dance recital I'm involved in when it's needed, which fortunately is rare over the course of 42 to 45 dances each year.

I shoot two nights with 3 cameras each night. Each camera has a direct feed of the music and either an ambient mic for applause or a stage mic for picking up the tap shoes closely.

This venue is thoroughly wired with a large Yamaha digital system, so it's not too hard to get direct audio to all the camera positions.

I also have the music files that were used.

I don't use PluralEyes, but this material syncs well by hand. As already mentioned, the cameras' direct feeds are mostly just used for sync (except when a handheld wireless mic is used onstage with no music playing). The music file is laid directly on the timeline, and then the ambient and/or stage mics are also in the mix at the proper levels.

Roger Gunkel
July 13th, 2016, 07:56 AM
In addition to wedding ceremonies, I regularly shoot school productions using 4-5 cameras, and in fact shot one yesterday evening. It was just over an hour long, and I used two video cameras with unlimited recording time, two bridge cameras with 29-minute limits which needed restarting, one GoPro, and a main audio recorder. I digitised them all this morning and synced the sound manually, with the syncing taking less than five minutes.

All of the audio tracks are first set to show waveforms, then the video and audio tracks are roughly aligned visually. Once that is done, it is very easy to expand the view to show a detailed waveform, with a vertical cursor set above a clear peak on the intended master audio track. The same peaks are easily found on the other roughly aligned video/audio tracks, which are simply slid into position. This can be done extremely easily and quickly. If there is any shift due to slight differences between cameras, it can easily be corrected with a quick re-alignment on shot changes during the edit.
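For anyone who wants to automate that peak-matching, the same alignment can be found by cross-correlating the two waveforms. A minimal sketch, assuming the clips' audio has been decoded into NumPy sample arrays (my illustration only, not any particular NLE's method):

```python
import numpy as np

def sync_offset(master, other):
    """Offset in samples by which 'other' lags 'master'.
    Positive: slide 'other' that many samples earlier to align."""
    m = master - master.mean()          # remove DC so the peak is clean
    o = other - other.mean()
    corr = np.correlate(o, m, mode="full")
    return int(np.argmax(corr)) - (len(master) - 1)

# Demo: a copy of a noise signal delayed by 250 samples is recovered exactly.
rng = np.random.default_rng(0)
master = rng.standard_normal(4000)      # stand-in for a mono audio snippet
other = np.roll(master, 250)            # same audio, 250 samples late
offset = sync_offset(master, other)     # -> 250
```

Dividing the sample offset by the sample rate gives the amount to slide the clip; this is the general idea behind waveform-based sync tools.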

In the event that any camera is set some distance from the sound source and shows a lag between video and audio, the video and audio of that track can be separated and the video moved a frame or two backwards or forwards to re-align.

Providing the music is the same, footage from different days shouldn't be any more difficult, within the limitations of the performers of course.

Roger

Kevin O'Connor
July 13th, 2016, 04:46 PM
I had a big long reply to all the feedback I've received, but my login timed out and I lost everything I wrote.

So thank you to all, I am using your suggestions.

Take care KPO.