DV Info Net

DV Info Net (https://www.dvinfo.net/forum/)
-   All Things Audio (https://www.dvinfo.net/forum/all-things-audio/)
-   -   I dont understand the fundamentals of stereo. Please explain. (https://www.dvinfo.net/forum/all-things-audio/322567-i-dont-understand-fundamentals-stereo-please-explain.html)

Dan Lukehart August 25th, 2009 10:53 PM

I don't understand the fundamentals of stereo. Please explain.
 
First, I am familiar with the concepts of stereo imaging but I am not sure how it is applied to general video use. I have a hobbyist background with studio recording so that may help to give you a point of reference.

Specifically:

I generally shoot documentaries with my Rode NTG-3 attached to my EX1, or on a boom (no mixer) if I am doing interviews. I will often use my second channel for a wireless microphone on whoever the documentary is about, for when they are running around doing whatever it is they do. Or I may just have my internal microphone on auto for the second channel as a backup.

In post, I generally just find whichever channel has the best quality audio, double it up on 2 separate tracks, and pan one hard left and the other hard right. If I need to create the sensation that my subject is on one side of the screen or the other, I mix it until the audio sounds like it's coming from the direction of the speaker.

I've had some friends listen to/watch my work and they say "why is it all in mono?"

Clearly my audio has no stereo image. I just double up a track and pan it so it's a two-channel signal, but without a stereo image. If there is background noise, it also appears to be coming from the direction I am trying to make the speaker come from.

Am I doing something wrong? Does the average person expect a stereo image for all audio, period? If so, how would I achieve it? By using a stereo setup for all applications? That doesn't seem practical, and I only have 2 XLR inputs to work with as of now.

Whenever I record music, I just use an XY or ORTF pair and it's no big deal.

Basically, what's the standard operating procedure for general video dialogue?

My NLE is Vegas Pro 9, if that helps.

Andy Wilkinson August 26th, 2009 02:44 AM

You are doing OK! (and in fact are ahead of many as you already realise the necessity to boom the NTG-3 rather than leave it on camera - the worst place for anything other than run and gun!).

Talking heads should be in mono. Music, ambient sound, etc. are best in stereo (though there are exceptions where mono is fine or even best). I suggest you do a bit of reading about audio for video (I'm sure the audio experts on here will suggest some good sources). In time, perhaps consider a small portable stereo digital audio recorder (Sony, Zoom, Edirol, etc.; many options, just read around on here) for capturing ambient sound independently of your EX1/NTG-3 and radio-mic setup, then mix it into the video in post to create a more complex sound texture when it's required.

Steve House August 26th, 2009 02:52 AM

As Andy said, SOP in both film and video is for dialog to be mono and centered, with music and some FX in stereo. You may be working too hard by doubling your tracks and panning them hard left and right; just leave it as one mono track, pan it to the centre, and leave it there. Audiences can find it distracting when the track is panned to follow the speaker from side to side around the screen, but if you want to try it, a single track with the pan twisted a bit left or right of centre will do the job.

Jay Massengill August 26th, 2009 08:26 AM

As Steve said, especially if you're working in Vegas, you don't have to double your tracks. Just right-click on an audio event and go down to "Channels", then select the appropriate choice for your situation. Preferably do this before you start chopping up a long event into many sections.
Then use the pan control on that track as needed.
You can always change an individual part to another channel selection later if needed.

Brian Luce August 26th, 2009 10:16 AM

Quote:

Originally Posted by Jay Massengill (Post 1271862)
As Steve said, especially if you're working in Vegas, you don't have to double your tracks. Just right-click on an audio event and go down to "Channels", then select the appropriate choice for your situation. Preferably do this before you start chopping up a long event into many sections.
Then use the pan control on that track as needed.
You can always change an individual part to another channel selection later if needed.

I hired a sound engineer recently to do some mixing, and he did exactly that: doubled the tracks. What's the thinking here?

Steve House August 26th, 2009 12:26 PM

Quote:

Originally Posted by Brian Luce (Post 1272211)
I hired a sound engineer recently to do some mixing, and he did exactly that: doubled the tracks. What's the thinking here?

Doubling a track can add some richness by introducing what is effectively a very short time constant reverb.

Shaun Roemich August 26th, 2009 12:31 PM

Quote:

Originally Posted by Steve House (Post 1272666)
Doubling a track can add some richness by introducing what is effectively a very short time constant reverb.

If there is a small delay between the tracks, you'll get "reverb" or "chorus" effects from phase cancellation. Strictly doubling SHOULD do nothing more than increase the overall output level by 3 - 6 dB (depending on the method used for calculating/monitoring dB). If the overall quality of the audio "drops out" after doubling, you've created excess phase cancellation.
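For reference, one way to put numbers on that 3 - 6 dB range (a quick Python check of the level arithmetic, nothing DAW-specific): two perfectly time-aligned, identical copies sum to double the amplitude, while two uncorrelated signals of equal level only double the power. Which figure your meter shows depends on which of those situations you are actually in and how the meter reads the result.

Code:

import math

# Two identical, sample-aligned copies summed: amplitude doubles.
# Gain in dB = 20 * log10(2) ~= +6.02 dB
correlated_gain_db = 20 * math.log10(2)

# Two uncorrelated signals of equal level summed: power doubles.
# Gain in dB = 10 * log10(2) ~= +3.01 dB
uncorrelated_gain_db = 10 * math.log10(2)

print(f"identical copies summed:    +{correlated_gain_db:.2f} dB")
print(f"uncorrelated tracks summed: +{uncorrelated_gain_db:.2f} dB")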

Jay Massengill August 26th, 2009 01:32 PM

The great thing about the video editing revolution is that everyone gets to do it their own way.
I was mainly commenting based on a way to reduce the effort of editing.
Doubling the exact same audio onto a second track adds complexity to the edit in many subtle ways. From simply taking the time to double the items, to adding to the track count (often a visual handicap on your editing screen), to increasing the need to group items, to increasing the risk of mono incompatibility, to adding the danger of slipping some event even by one frame and causing severe echo, etc. etc.
An accident like that can often happen way at the other end of the editing timeline if you're using track ripple edits and some group or track doesn't shift exactly as you're expecting. Those errors may go undetected without closely inspecting the final timeline. A slip of one frame in lip-sync won't stand out like a one frame error in doubled audio would.
There are lots of people who like to double for a variety of reasons and that's their choice so I won't argue against them. I'm simply playing Devil's Advocate for a moment.
Just like noise reduction and removing echo, recording your original tracks so they are of high enough quality to stand on their own is what I strive for (most of the time!). It not only sounds better, it's just faster and easier to edit.
That leaves more time for things like re-editing a paragraph someone in the legal department changed <groan>.

Jon Fairhurst August 26th, 2009 04:04 PM

Yep. Keep the dialog centered. Pan the music wide. In 5.1, limit music on the rear speakers to light ambiance. Place foley and sound design wherever you want.

Yeah, doubling tracks to create a mono mix isn't necessary.

There is one situation where I do like to double tracks: narration.
1) Make a mono track.
2) EQ it as needed.
3) Mess with envelopes, if needed, to keep the level somewhat consistent.
4) Duplicate the track.
5) Compress the heck out of one of the tracks. Something like 20:1, -25dB threshold.
6) Mix to taste.

This really fattens up a voice. The original track generally has detail, but can sound thin. The compressed track is full, but lacks dynamics and punch. Mix the two and get the best of both worlds - and keep them both mono and centered.
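If it helps to see the recipe above in code rather than on faders, here is a rough numpy sketch of the same parallel-compression idea: one dry copy, one heavily squashed copy, summed while staying mono and sample-aligned. The compress() and parallel_compress() helpers, the 0.7 wet level, and the synthetic "narration" signal are all made up for illustration (a static gain computation using the example 20:1 / -25 dB settings, with no attack or release), so treat it as a sketch of the idea rather than any particular plug-in.

Code:

import numpy as np

def compress(x, threshold_db=-25.0, ratio=20.0):
    """Very simple static compressor: anything above the threshold is
    scaled down by the ratio. No attack/release smoothing, so this only
    illustrates the gain math."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)           # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)  # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # pull the overshoot down
    return x * 10 ** (gain_db / 20.0)

def parallel_compress(dry, dry_mix=1.0, wet_mix=0.7):
    """Mix the untouched track with a heavily compressed copy. Both stay
    mono, equal length and sample-aligned, so there are no phase problems."""
    wet = compress(dry)
    return dry_mix * dry + wet_mix * wet

# Fake 'narration': a quiet passage followed by a loud one, 48 kHz mono.
sr = 48000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
dry = np.concatenate([0.05 * np.sin(2 * np.pi * 150 * t),
                      0.8  * np.sin(2 * np.pi * 150 * t)])

mixed = parallel_compress(dry)
print("dry peak:", np.max(np.abs(dry)))
print("mix peak:", np.max(np.abs(mixed)))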

And, yes, make sure that they are perfectly aligned in time. You'll have no phase issues at all. Group them so they don't slip. Route them to a new "Narrator" bus. If you want another pass at levels, EQ, reverb, delay, etc, add it on that bus, so it affects the mixed tracks. Trying to go back and affect the original tracks will make your head explode. :)

Sean Scarfo August 28th, 2009 11:44 PM

Quote:

Originally Posted by Steve House (Post 1272666)
Doubling a track can add some richness by introducing what is effectively a very short time constant reverb.

Technically, that's a very short delay time, not reverb. Reverb is the room sound.

When an exact copy of a track is delayed by less than 30 ms (thirty milliseconds), the two will sound like one track/sound. Delaying a track gives it a kind of chorus effect, but basically it muddies up the combined waveform to make it sound more 'fat' or 'rich'.

I completely agree with Jon about narration, except that if you're going to duplicate it and compress the heck out of it (parallel compression), just pop a de-esser on the second track and compress it; there's no need to kill yourself with EQ on the second track.

I do see the workflow Jon has laid out, though: it's simpler to EQ the main track and then duplicate it, etc.

My personal trick (since I work in Pro Tools) is to bus the narration track to an aux track, compress the crap out of that, then add a 4-7 ms delay. This adds a fullness that doesn't chorus or cause phase issues. After I get the levels mixed right between the two, I send the outputs of both to a group track and control the level with just one fader (kind of like grouping).

But who's counting.

Ty Ford August 29th, 2009 08:21 AM

Quote:

Originally Posted by Steve House (Post 1272666)
Doubling a track can add some richness by introducing what is effectively a very short time constant reverb.

::Squirming:: (Steve, do you really want to go there?) If the tracks are truly time aligned there isn't any difference...other than more gain from that source because you have it on more tracks.

Some folks do this rather than boost gain on an audio clip, or if they run out of gain. I did that recently when shooting a pre-vis. I was booming to a mixer and fed both mixer outputs to the camera, a Sony EX3. In post, I had both audio tracks on the timeline. Because it was dialog and I wanted it centered, I raised both tracks when I wanted more gain.

As to Dan's concern: not everything on the timeline should be stereo. Dan, start thinking about what COULD be stereo: ambience, music, some sound effects depending on where they occur.

To do this correctly, you need a good audio monitoring environment. You need two good monitors set up in pretty much an equilateral triangle with your head, tweeters at ear height. If the distance between the monitors is much greater than the distance between your head and either monitor, you'll under-mix the stereo field. If the distance between the monitors is much less than the distance between your head and either monitor, you'll over-mix the stereo field.

You need a monitor system that has a solid low end down to at least 50 Hz. If you don't have that you need a sub-woofer.

Genelec, ADAM, Meyer, (some) JBL, and K&H are good brands. The powered Event Opals are remarkable: $3k a pair, no sub-woofer needed. I am patiently (well, almost) waiting for a pair.

Regards,

Ty Ford

PS: Steve, will you be at AES in October?

Ty Ford August 29th, 2009 08:26 AM

Quote:

Originally Posted by Jon Fairhurst (Post 1273467)
And, yes, make sure that they are perfectly aligned in time. You'll have no phase issues at all. Group them so they don't slip. Route them to a new "Narrator" bus. If you want another pass at levels, EQ, reverb, delay, etc, add it on that bus, so it affects the mixed tracks. Trying to go back and affect the original tracks will make your head explode. :)

And be aware that if you have multiple tracks of the same source you're trying to fatten, simply adding a plugin to one track may cause delay because of the time it takes the audio to go to the plugin and back. Not many systems have automatic delay compensation for this yet.

If you don't have a really good ear for cancellation effects, I'd suggest you don't try this. And by all means have a good monitoring system and check everything in mono.

Regards,

Ty Ford

Steve House August 29th, 2009 11:07 AM

Quote:

Originally Posted by Ty Ford (Post 1284868)
::Squirming:: (Steve, do you really want to go there?) If the tracks are truly time aligned there isn't any difference...other than more gain from that source because you have it on more tracks.

...
PS: Steve, will you be at AES in October?

I confess that when I wrote you'd add a certain richness, I was thinking in terms of the multitracking technique of doubling the track: actually recording a second take performed in unison with a playback of the first and then combining the two in the final mix, rather than just duplicating a single track by copying it in the DAW.

I'd sure like to get to AES this year but I don't know if I'll be able to make it.

Mike Demmers August 29th, 2009 06:44 PM

Short delays of a few milliseconds will cause phase cancellation at higher frequencies.
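To put rough numbers on that: summing a signal with a copy of itself delayed by t seconds makes a comb filter, with nulls at f = (2k + 1) / (2t) and a new notch every 1/t Hz above that. A quick check of the arithmetic in Python (nothing plugin- or DAW-specific):

Code:

# Comb-filter nulls when a signal is summed with a delayed copy of itself:
# cancellation occurs where the delay equals half a period, i.e. at
# f = (2k + 1) / (2 * delay) for k = 0, 1, 2, ...
for delay_ms in (0.5, 1.0, 3.0):
    delay_s = delay_ms / 1000.0
    nulls = [(2 * k + 1) / (2 * delay_s) for k in range(3)]
    print(f"{delay_ms} ms delay -> first nulls at "
          + ", ".join(f"{f:.0f} Hz" for f in nulls))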

The classic 'fatten the voice' trick used to avoid this was to use two Eventide Harmonizers, one a quarter tone up (or so), the other a quarter tone down, panned left and right.

That used to require about $7000 worth of Eventide gear; nowadays you can probably get a pitch-shift plugin for free.

-Mike

Jimmy Tuffrey August 30th, 2009 03:00 AM

Doubling the tracks and panning them out... you won't find a pro doing it that way. Even in the old 1/4" tape days we only ever took one side of the recording and panned it centrally. It's a modern NLE take on an old technique. Standard procedure should be to take one side and pan it centrally, as advised above.

Jon Fairhurst August 30th, 2009 02:52 PM

Quote:

Originally Posted by Ty Ford (Post 1284870)
And be aware that if you have multiple tracks of the same source you're trying to fatten, simply adding a plugin to one track may cause delay because of the time it takes the audio to go to the plugin and back.

That's unlikely for compression, since simple compression has no inherent delay. (The attack and release settings look to the past, never the future.) EQ is another story. Filters are implemented with delays, scalings, and sums.

If you're worried about delays that your software might have in its implementation, run both tracks through the same type of compressor plug-in, but only apply heavy compression to one. This all but guarantees sample accuracy.
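To illustrate why simple compression adds no delay of its own, here is a bare-bones feed-forward compressor in numpy: the attack/release envelope is driven only by samples that have already arrived, so the output stays sample-for-sample aligned with the input. This is a generic textbook-style sketch with made-up settings (the simple_compressor name and its defaults are mine), not the Vegas Track Compressor.

Code:

import numpy as np

def simple_compressor(x, sr, threshold_db=-25.0, ratio=20.0,
                      attack_ms=5.0, release_ms=100.0):
    """Feed-forward compressor. The envelope follower is a one-pole
    smoother driven only by past and current samples, so the output has
    the same length as the input and no added latency."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        level = abs(sample)
        coeff = atk if level > env else rel      # attack on rises, release on falls
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(env + 1e-12)
        over_db = max(level_db - threshold_db, 0.0)
        gain = 10 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
        out[n] = sample * gain
    return out

# Sanity check: output is exactly as long as the input, with no offset added.
sr = 48000
x = 0.5 * np.sin(2 * np.pi * 200 * np.arange(sr) / sr)
y = simple_compressor(x, sr)
assert len(y) == len(x)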

Rick Reineke August 30th, 2009 08:37 PM

Vegas has latency compensation. One thing Sony didn't screw up... yet anyway.
OT.. Ah, I remember my first Eventide 910, making perfectly good voices sound like Satan.
OT-2: Steve, If you come to NY, I'll buy ya dinner & drinks @ Mustang Harry's on 7th Ave. (three blocks South of MSG)

Jon Fairhurst August 30th, 2009 08:50 PM

Quote:

Originally Posted by Rick Reineke (Post 1291437)
Vegas has latency compensation...

I use Vegas and have never heard any funny phasing when using plugins on doubled tracks. Apparently, they implemented it ...and it works. :)

Paul R Johnson August 31st, 2009 03:27 AM

Are we not getting a little 'blended' with techniques for singers here? As far as I'm concerned, we're talking about very different ways to do things. I'm hearing delays, merges, split or central panning, phase issues etc etc - all very critical for thickening up a weak singer, but we were talking about stereo for video. Since having voices constantly moving around in the stereo field to match pictures never works that well, and just sounds a bit 'fake', most of us take the subject audio as a centre mono source and surround it with stereo (or pseudo stereo) music and fx. If you need more punch to the artiste in vision, or VO, then compression is the best way, and this is usually all that's needed to let it cut through even a music heavy mix.

I've never thought about double tracking the forward sound from a camera mic, or a studio narration track for video. A decent voice artiste and a nice compressor will do for me. As mentioned, double tracking that introduces time delay means phase cancellation - that's the entire point, isn't it? Great for music, not so hot for video - unless it's video to track, as in pop videos.

Jon Fairhurst August 31st, 2009 07:49 AM

Paul,

The doubling that I'm talking about is very heavy compression on one track and little or no compression on the other. I use this specifically for a powerful voice-over. As with all dialog, it should be centered. Done properly, there is no delay between tracks, and no phasing problems result.

The problem with straight compression is that you trade off fullness and dynamics. By the time you've really filled in the vowels, you've smashed the hard consonants. You can play with the attack, but that only helps the first consonant of each word. Middle consonants still get crushed.

Picture the original signal as sharp peaks around deep valleys, and the heavily compressed signal as high plains with small hills. By mixing the two, we get high plains (strong vowels) with sharp peaks (crisp, dynamic consonants). This is like filling the deep valley of the original signal with water, as compared to straight compression which is designed to file down the peaks.

Again, I use this for the big voice over, not typically for on-screen dialog. On the other hand, if you have an actor with a particularly thin voice, this might be a good trick.

Mike Demmers August 31st, 2009 03:13 PM

Quote:

Originally Posted by Jon Fairhurst (Post 1293310)

The doubling that I'm talking about is very heavy compression on one track and little or no compression on the other.

This does nothing you should not be able to do with any good modern compressor suitable for professional use.

Quote:

Originally Posted by Jon Fairhurst (Post 1293310)

The problem with straight compression is that you trade off fullness and dynamics. By the time you've really filled in the vowels, you've smashed the hard consonants. You can play with the attack, but that only helps the first consonant of each word. Middle consonants still get crushed.

Not with a decent compressor. The only places I have ever seen this problem are with cheap, poorly designed 'compressors' in cameras (which are really just meant to be levelers, not compressors), and the old 'suck and blow' compressors hobbyists used to make from a small lightbulb and photocell in the 1960s. Oh, and some stomp pedal guitar effects.

Even the really inexpensive compressors I have use dual time constants to avoid that kind of problem. Most nowadays use RMS level detectors which should never have that problem. Most use some kind of 'over easy' type curve as well, which also helps eliminate that sort of thing.

'Compressor' is kind of a loose term. I may use, as needed on a voice: leveling, compression, peak limiting, de-essing, and frequency-selective limiting/compression. I just pick the right one(s) for the job. If one unit does not have all the functions needed built in, I just add another that does, in series; there is no need for a separate track. (And of course they do have to be adjusted for the material.)

Combining two tracks the way you do is just the equivalent of following a slower time-constant compressor with a peak limiter or a faster time-constant compressor. Perhaps you just need a more flexible compressor. What are you using that has this problem?

-Mike

Jon Fairhurst August 31st, 2009 03:50 PM

I'm using the Track Compressor that ships with Vegas. It's the standard deal: Ratio, Threshold, Attack, Release. Vegas also has "Wave Hammer", which is more for mastering than mixing, and while it's "smart", it's lacking in user controls.

I've also got the iZotope frequency-selective compressor, which comes with Sound Forge 9. I can't use this in Vegas though - it's locked to SF9. It's great for really pumping up a final mix. I haven't tried it on a single voice.

I know there's higher end stuff out there, like Waves, but the track doubling technique is available for free from most any NLE or DAW.

Mike Demmers August 31st, 2009 04:20 PM

Quote:

Originally Posted by Jon Fairhurst (Post 1294803)
I'm using the Track Compressor that ships with Vegas.

Ah. Well, this is why I still like actual hardware for certain important functions. With hardware, I get a manual with full specs and usually a schematic (on pro gear), and it was probably designed by an engineer that has some practical experience with audio.

With software, I have no idea how the thing works, whether it has any of the more useful design features (as I mentioned above), and it may have been designed by a computer programmer who just looked up some algorithms and called it good... it was free...

I have no idea how well that compressor is designed, but you might look around for something better (maybe even free). I can tell you for certain that any software compressor I tried that had the artifacts you describe would be thrown out here immediately. That has just not been acceptable performance in a compressor for the last 50 years or so. ;-)

Quote:

Originally Posted by Jon Fairhurst (Post 1294803)
I know there's higher end stuff out there, like Waves, but the track doubling technique is available for free from most any NLE or DAW.

Can't you just add two instances of the compressor on the same track, and adjust them as you would for the two track approach? (I have never used Vegas) I don't see any need of two tracks. This should actually work slightly better, because following the slow compressor with the faster one should give you a slightly more consistent peak to average ratio.
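In code, the series idea looks roughly like this: a gentle first stage followed by a harder second stage on the same material, no extra track needed. The comp() helper, the thresholds/ratios, and the synthetic test signal below are purely illustrative (static gain only, no attack/release time constants), so this only shows the signal flow, not the behaviour of any particular unit.

Code:

import numpy as np

def comp(x, threshold_db, ratio):
    """Minimal static compressor stage: reduce anything over the
    threshold by the ratio. No time constants; signal-flow demo only."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    return x * 10 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)

# Fake dialogue: quiet passage then loud passage, 48 kHz mono.
sr = 48000
t = np.arange(sr) / sr
x = np.concatenate([0.05 * np.sin(2 * np.pi * 180 * t),
                    0.9  * np.sin(2 * np.pi * 180 * t)])

# Stage 1: gentle levelling.  Stage 2: harder control of what remains.
stage1 = comp(x, threshold_db=-30.0, ratio=2.0)
serial = comp(stage1, threshold_db=-20.0, ratio=10.0)

print("input peak  :", np.max(np.abs(x)))
print("stage 1 peak:", np.max(np.abs(stage1)))
print("serial peak :", np.max(np.abs(serial)))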

There's no harm in two tracks, it just seems like a waste of time and adds one more track cluttering up things for no good purpose.

For what it is worth, 99 percent of the narration I have recorded over the years has used just one compressor, either a Urei LA-2 (stupidly expensive) or a dbx 266 (dirt cheap). And I never saw those problems with either of them.

-Mike



DV Info Net -- Real Names, Real People, Real Info!
1998-2024 The Digital Video Information Network