Wesley Cardone
October 13th, 2010, 07:55 AM
SUMMARY:
When synchronizing DV audio from two clips in an NLE you can only do so in increments of full frames. Sometimes this is not accurate enough. A plug-in or effect is needed to add from zero to 16 milliseconds of delay.
DISCUSSION:
For a multi-cam shoot we place two or more clips on a single timeline and adjust the relative time difference between them so that the audio in each superimposes over the other, leaving no echo or other odd artifact from slightly mismatched audio. However, the sync can only be adjusted to the frame level. Usually this level of accuracy is sufficient, but in theory up to a half-frame difference can remain, since NLEs only allow shifting a clip by whole frames in either direction. On occasion when syncing multi-cam audio on a PPro timeline I find that I need to shift the audio at a sub-frame level.
Consider a quantitative analysis. At 29.97 fps we have 33.37 milliseconds per frame, or 16.68 milliseconds per half-frame. At STP sound travels at 340.29 meters/second, so in the time of a half-frame it covers 5.68 meters, or 18.6 feet.
Consider two clips of the same event, each taken from a different camera, using wireless microphones attached to two subjects in the video. The two subjects might be standing less than a couple of feet apart, yet in theory the audio on the two timeline clips could be separated by up to a half-frame, which would create the effect of the two individuals standing 18 feet apart—an annoying but maybe tolerable echo. At 24 fps it would be 23 feet.
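For anyone who wants to check the arithmetic above, here is a small sketch that reproduces the half-frame figures from this post (the function name and the foot-conversion constant are my own, not from any NLE API):

```python
# Reproduce the half-frame sync error math from the post.
SPEED_OF_SOUND_MPS = 340.29  # speed of sound at STP, meters/second
M_TO_FT = 3.28084            # meters to feet

def half_frame_echo_ft(fps):
    """Distance sound travels during half of one frame, in feet."""
    half_frame_s = (1.0 / fps) / 2.0
    return half_frame_s * SPEED_OF_SOUND_MPS * M_TO_FT

print(round(half_frame_echo_ft(29.97), 1))  # -> 18.6 (feet, matches the post)
print(round(half_frame_echo_ft(24.0), 1))   # -> 23.3 (feet, the 24 fps case)
```

The same function also gives the half-frame time itself: at 29.97 fps, (1/29.97)/2 is about 16.68 ms, which is where the "zero to 16 milliseconds" range in the question comes from.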
Is there a plug-in or audio effect to shift a given audio track from zero to 16 milliseconds or from -8 to +8 milliseconds?