How come no realtime audio fx?
So you have your video clips in the timeline: you can add an effect to one, then watch it play with the effect applied, and not have to render until you output.
I KNOW you can do audio fx the same way at the track level, so I'm not asking how, I'm just asking WHY they don't have the same type of fx for individual audio events/clips in the timeline. Why do you have to render a clip to put an effect on it? Just curious. Layman's terms please.
Audio FX work in real time if applied at the track, bus or project level, but not at the event level. Click the little green box in the upper right-hand corner of the track header.
I find this works quite well, since I can designate a track for "Dialog 1" and make sure all the events on that track get exactly the same treatment. You can also apply FX at the "Media" level, although I don't recall if those are real-time or not.
Right, but what I wanted to know is why the program was designed this way.
There was an explanation of this in the official Sony forum. Not sure which thread it is.
1- Part of it is because some effects like reverb can extend past a clip/event, and part of it is because implementing effects that way isn't compatible with the TC PowerCore cards. 2- You can right-click clips and apply non-real-time FX.
Glenn, I'm glad to hear that there's at least some rationale for this, because it has always seemed overwhelmingly silly to me. However, I just don't see that this rationale makes any sense (I know you're just reporting what you've heard and that your memory is a little fuzzy--I'm not shooting the messenger).
I can't figure out why, if we continue to use reverb as an example, the reverb couldn't simply be played back on the fly regardless of the length of the clip. That's how the track-level effects work. As for TC PowerCore cards not being compatible with it... I guess that makes some degree of sense, but why rule out a very helpful bit of functionality simply because one type of hardware doesn't like it? At the very least, Sony could allow users of this hardware to elect into the non-real-time model and eliminate what I consider to be a huge pain in the rump for the rest of us.

This particular aspect of Vegas has always annoyed the pants off of me. It's completely absurd that complex effects can be applied non-destructively to video--obviously a much more complex type of file--while audio has to be rendered on an event-by-event basis. An analogy: imagine if you had a lawnmower that could easily cut tall grass in one pass, but that required three passes for each patch of clover on the same lawn. It's completely nuts.

What happens is that you end up with a folder full of hundreds or thousands of little .wav files that you're afraid to ever get rid of on the off-chance that you'll accidentally delete something that's used in a project you'll need to open again someday. Apart from that, it just slows down the entire editing process quite a bit in certain cases. If you want to add differing levels of reverb (thereby making track-level reverb a non-option, since all events on that track would have the same level of reverb applied) to 47 clips, and then you change your mind later and want to remove the reverb, you're screwed--you have to manually go back through and dig up the original .wavs. If you try to use track-level reverb for each of these 47 events, you end up with 47 unwanted extra tracks. Granted, this is a somewhat exceptional scenario here, but I personally run into these kinds of issues far more often than I'd like to.

I wish with all my being that Sony would do something about this. That might be putting it a bit strongly, but my annoyance with this is very strong. If there's a good reason for it, I guess I'd begrudgingly accept it. I'm just not so sure that there is a good reason--or even a mediocre reason. As things are, I find this situation to be an extremely bone-headed bug in an otherwise amazingly well-designed app, all things considered. Please excuse the rant. I'm just frustrated with this.
I'm new to Vegas, so ignore me if I sound crazy.
Is there a way to set markers and then have sort of "keyframes" for the level of the reverb on the audio track? Event-level effects would be better, but at least some sort of tweaking on-the-fly would make things tolerable.
Actually event-level isn't preferable, because if the event ends, the reverb can't continue.
You can insert a reverb as an FX bus, then right-click the track header of the track to which you'd like to apply reverb. An envelope line will appear; double-click that envelope to set nodes/envelope points.
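For what it's worth, here is a rough numpy sketch (purely illustrative, not anything Vegas actually exposes) of what that bus-send envelope amounts to conceptually: the envelope is just a gain curve, interpolated between the nodes you set, that scales how much of the track feeds the reverb bus at any moment. The sample rate, node positions, and signal below are all invented for the example.

# Conceptual sketch only (Python/numpy), not Vegas internals.
import numpy as np

SR = 44100  # assumed sample rate
dry = np.random.default_rng(1).standard_normal(5 * SR) * 0.1  # 5 s of stand-in audio

# Envelope nodes: (time in seconds, send level 0..1) -- values are made up.
nodes_t = np.array([0.0, 1.0, 2.5, 5.0])
nodes_v = np.array([0.0, 0.8, 0.2, 0.0])

t = np.arange(len(dry)) / SR
send = np.interp(t, nodes_t, nodes_v)   # linear ramps between the nodes

to_reverb_bus = dry * send              # what the shared reverb bus would receive

This is why track or bus FX plus automation can stand in for per-event reverb amounts: the send level, not the event, decides how wet each passage is.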
I agree, track-level effects make perfect sense... this is very much in keeping with audio workflow. If you want specific treatment for an event, then create a new track and name it accordingly--you can easily slip any subsequent events into this track and they will receive the same effects, which is very useful. Tracks might be "char a dialogue", "char b dialogue", "char a reverb matching", "char a lf cut", "ambience", "distant / lf cut", etc. Your tracks become your presets; events get slipped accordingly. I can't see in what circumstance you'd end up with "a folder full of hundreds or thousands of little .wav files"... if you're doing anything dozens (let alone hundreds) of times, then surely they should be on a track (or obviously several tracks).
As other people point out, just use trackFX and use extra tracks if need be. Vegas will support a lot of tracks. Some people will use like 40+ tracks (at least from what I've seen in other audio apps; I don't do such detailed audio mixing ever, though I believe Vegas shouldn't have a problem there).
Not having audioFX on an event level is kind of unintuitive (since it's not analogous to video)... but not having it is not a huge deal, especially since you shouldn't need to go into non-real-time FX often (which itself is kind of a misnomer to me, since the effects in it are usually real-time).
Quote:
By the way, Spot, your suggestion to use envelopes (or automation) makes sense. However, it still strikes me as being much more of a pain to manage than real-time event effects would be. Then you end up having to be careful that your envelope nodes are staying in the right places when you move clips around and so on. There are lots of workarounds and/or alternatives for this lack of functionality, but look at how much easier and simpler everything would be if audio effects could work like video effects work. Quote:
Quote:
Quote:
Don't get me wrong--the ability to apply effects to an entire track is very useful... but it is far from ideal for use with a single event.
I agree with Jarrod on this. It's always been a pet peeve of mine that in Vegas you can apply real-time video fx but not audio. Other packages let you do this with great effect (pun intended). I suspect it's more of a legacy mindset ("always been done this way") than a real implementation issue.
Well, one point--if an event-level effect can continue past the end of the event, what happens if you butt two of them together? (For example, event B starts before event A has finished its reverb.) Does the second event kill the first, or...? I would suspect that you'd want them to calculate separately and then mix, in which case internally Vegas would need to put each event on a separate audio track. I guess that this is the complexity, as it would have to internally create and delete additional audio tracks for each event that had this type of effect, and mix that back into the track. Currently, each event just 'writes' directly into the track, with that summed result going into the track FX.
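To make that point concrete, here is a back-of-the-envelope numpy sketch (purely illustrative, not how Vegas is implemented) of what "calculate separately and then mix" would mean: each event gets its own wet render, whose tail runs longer than the event itself, and the results are summed into the track buffer so one event's tail can ring underneath the next. The sample rate, tail length, and signals are all invented for the example.

# Hypothetical sketch (Python/numpy): per-event reverb tails overlap-added
# into a shared track buffer.
import numpy as np

SR = 44100  # assumed sample rate

def simple_reverb(dry, tail_seconds=2.0):
    """Toy reverb: convolve with a decaying noise burst. The output is
    longer than the input by roughly the tail length."""
    rng = np.random.default_rng(0)
    n_tail = int(tail_seconds * SR)
    impulse = rng.standard_normal(n_tail) * np.exp(-np.linspace(0, 6, n_tail))
    impulse[0] = 1.0  # keep the dry signal audible at the front
    return np.convolve(dry, impulse) * 0.1

# Two events on the same "track": event B starts before event A's tail ends.
event_a = np.sin(2 * np.pi * 440 * np.arange(int(0.5 * SR)) / SR)  # 0.5 s
event_b = np.sin(2 * np.pi * 660 * np.arange(int(0.5 * SR)) / SR)  # 0.5 s
start_a, start_b = 0, int(1.0 * SR)  # B starts 1.0 s in, inside A's ~2 s tail

wet_a = simple_reverb(event_a)  # ~2.5 s long, well past the event itself
wet_b = simple_reverb(event_b)

track = np.zeros(max(start_a + len(wet_a), start_b + len(wet_b)))
track[start_a:start_a + len(wet_a)] += wet_a  # sum, don't overwrite:
track[start_b:start_b + len(wet_b)] += wet_b  # A's tail rings under B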
Quote:
The problem is that "video" folks don't think this way--although they could learn--and in many, if not most, cases the audio work that must be done involves cleaning up sync audio, which is event-based. Not having RT event-based audio FX is not simply a mindset issue; it is a loss-of-productivity issue. Adding reverb is a red herring, as in most cases the problem is too much reverb, and it is probably the only FX that needs to continue past the event.
So is there any chance a future Vegas would change this?
Potentially, but it also causes many more headaches. It's got nothing to do with loss of productivity; this has been discussed since the early days of Vegas. Vegas is intended to:
a- operate like an analog suite operates in the real world
b- provide a logical interface for workflows.

Audio can have temporal effects applied to the media. What happens, say, when you have a 5-second event to which you want to apply a 10-second reverb? (A very common application.) The 5-second file has ended; a new file must be written. This is just one example.

Yes, the engineers at Sony *could* (and probably easily) write code to make this work. But it's at a cost that likely isn't cost-effective in terms of various aspects of the way the app works. In the "real" world, an FX bus is an FX bus. Vegas works this way, just like an analog system works. Interfaced with a HUI, Vegas functions identically to an analog console setup in a live or studio environment.
Quote:
Video can have temporal effects too, by the way. In the "real world" you don't have to do the equivalent of rendering out to a new file. If you were mixing on a console and had to record your sound to a tape every time you made minor tweaks to the effects configuration on it, I bet you wouldn't be happy.

I suspect you'll rebut by saying that if I were to use automation or envelopes, then I'll be operating the software like I would operate a console in the real world. But we're talking about software, not the real world. When I use a word processor, I don't expect to have to listen for a "ding" at the end of each line of text. When I am listening to an MP3, I don't expect to have to rewind at the end. Software doesn't have to emulate the "real" world when certain aspects of the real world are a huge pain in the hindquarters; the entire point of using computers is that they are supposed to make things easier, cheaper, and less time-consuming (*everyone laughs uneasily here*). This particular aspect of Vegas makes things harder, more expensive (due to lost productivity), and much more time-consuming. Quote:
What DAW packages out there don't emulate the real world?
Could you identify temporal FX for video that go beyond the end of a file without writing a new file? I'm not defending Vegas; I'm curious as to how you'd rather see this realistically work.

'Nother example... you have a door slam/Foley. File size/length is 1.5 seconds. You want it to reverberate for, say... 15 seconds. Since it's a file-level door slam (which cannot be accomplished in the real world), where do the additional 13.5 seconds of reverberation come from without writing a new file on the fly?
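For a sense of scale, a simple convolution-style reverb makes the arithmetic obvious: the wet output is roughly as long as the dry sound plus the impulse response, so a 1.5-second slam with a 15-second tail produces about 16.5 seconds of audio that has to live somewhere--either a longer file gets written, or the tail is cut off at the event boundary. A minimal numpy sketch with made-up signals (this is not Vegas code):

# Rough sketch (Python/numpy), assuming a convolution-style reverb.
import numpy as np

SR = 44100  # assumed sample rate
slam = np.random.default_rng(2).standard_normal(int(1.5 * SR))  # 1.5 s dry hit
impulse = np.exp(-np.linspace(0, 5, int(15 * SR)))              # 15 s decay
wet = np.convolve(slam, impulse)                                # "full" length

print(len(slam) / SR)  # 1.5
print(len(wet) / SR)   # ~16.5 -- longer than the source file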
Quote:
Event is 1.5 seconds. 15-second reverb. The next event on the timeline is only 6 seconds downstream from the event that has a 15-second reverb on it, on the same track. How would you see Vegas managing this at the event level?
Quote:
If I can apply color correction, masks, keyframed motion, and keys all to a single event without having to render it before I can see those effects depicted on the timeline, I should be able to do the same amount of tweaking to an audio event and be able to hear the changes reflected on the timeline without having to render out first. It's a no-brainer. Quote:
Theoretically speaking: when the playback reaches the event, it plays the event as if it has reverb on it, and the reverberation continues after the event ends. Simple. Or, alternatively, Vegas could truncate the file like I was saying before, making a manual adjustment to the length of the event necessary. Could you address each of these scenarios directly and tell me why, in your opinion, they would not work?
In your color correction example, the color-corrected clip isn't playing over top of the next clip in line. It requires a new track should you wish to play both events at the same time and have them both be visible. Temporal-based video events do not spill beyond the temporal boundaries of the file/event.

Again... two events, 3 seconds apart. The first event has 15 seconds of reverb on it. What happens to the next event? What happens to the reverb during the next event, since the reverb should be carrying over it? On a track level, this is possible. But on an event level, it's not. I guess what seems fairly simple to grasp from my perspective may not be so easy to understand, or rather, I'm not explaining it clearly enough.
Quote:
I'd also like to add that even if, in your example, I had to put the event with reverb on it onto a second track so that it could overlap an immediately following event, I'd still prefer this method to the current situation, in which you have to create an entirely new track for the clip with the reverb on it. Instead, I could put the event with reverb applied to it on an already existing track.

Part of my problem with the way Vegas handles audio effects is that you either:
A) Have to render out tons of tiny .wav files for each event to which you want to apply effects;
B) Have to wade your way through a project that contains an absurd number of tracks in many situations; or
C) Have to contend with extremely complicated and easy-to-screw-up envelope and/or automation schemes.

All I'm saying is that none of these scenarios lends itself to an efficient use of one's time and/or resources, and that a more efficient model than any of the above should by no means be too much to ask.
Quote:
Quote:
2- Feature film mixes can easily have 40+ tracks. Vegas has shortcuts for collapsing tracks so they don't take as much real estate... check the shortcuts sticky at the top of this forum, I think it has them in there.
3- To me, a somewhat more sensible solution is if the non-real-time FX engine is tweaked so that it remembers your audio plug-ins... that way it sort of behaves like there is eventFX. Though reverbs won't work properly, and that might be confusing if you don't know what's going on.
Thanks for digging up that post, Glenn. At least now I have a better idea of *why* audio effects are incapable of being handled in real-time.
The poster from the Vegas forum mentioned that CD Architect's GUI makes it seem as if effects are being applied in real-time... I've never used CDA, but it sounds like a similar tweak to the Vegas GUI would satisfy me, because it sounds like at least in some ways, I'd never know the difference on any practical level.

You say that feature films routinely use 40+ tracks... this is in line with my experience with Vegas on long-form projects. Unfortunately, in an NLE, you can't see all of those tracks at once like you can on a console, for example. The problem with minimizing tracks is that it doesn't really go too far toward solving the problem of keeping everything in view if you do have 40+ tracks (which I often do), and if (make that when) you ever need to move a video clip and all of its corresponding audio clips, you're still likely to miss some little event down on, say, track 43. Yes, grouping events can help avoid this problem, but it doesn't apply to every situation, and you can still miss an event when you go to group them.

Also, even though the bit about not knowing when an audio effect will end almost kinda-sorta makes sense (especially since Sony can't know what 3rd-party plug-ins are going to do), I still don't understand why Vegas couldn't, as a slightly inelegant alternative, truncate the end of a reverb application at the end of an event, so that you have to think ahead and allow for silence before applying the reverb--or at the very least, allow us to use a real-time preview on some effects. I have no doubt that quite a few hours of my life that I have wasted wading through tons of tracks would be returned to me if Sony had adopted either of these models from the beginning.

I guess I just have to learn to live with this, as painful and time-wasting and counterintuitive and absurd and ridiculous as it is. Surely there has got to be a better way, but it sounds like we're never going to actually get a better way to handle audio, because Sony has their stock excuse and their minds are closed on the matter. Anyway, thanks again for digging up the info, Glenn.
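The two alternatives mentioned there--cut the tail at the event boundary, or plan ahead and leave room for it--are easy to sketch in numpy (again purely illustrative, not Vegas internals; the signal lengths are made up):

# Sketch of the "truncate or pre-pad" alternatives (Python/numpy).
import numpy as np

SR = 44100  # assumed sample rate
event = np.random.default_rng(3).standard_normal(int(2.0 * SR))  # 2 s event
impulse = np.exp(-np.linspace(0, 6, int(3 * SR)))                # 3 s reverb decay

# Option 1: render the reverb but keep only the original event length,
# so the tail is simply cut off at the event boundary.
truncated = np.convolve(event, impulse)[:len(event)]

# Option 2: extend the event with 3 s of silence before applying the reverb,
# so the rendered event is 5 s long and the tail survives -- at the cost of
# having to plan ahead for that silence.
padded = np.convolve(np.concatenate([event, np.zeros(int(3 * SR))]), impulse)
padded = padded[:len(event) + int(3 * SR)]

Either way the rendered event stays a fixed, known length, which appears to be the constraint the non-real-time model is working around.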
This may not help with your central concern, since you will still need to render out a .wav for an event-based Audio FX. However, you can help manage all the resulting "tiny .wav" files, at least on the timeline, by using Vegas' "take" function.
If you add the rendered .wav file over the original as a take (try right-mouse-clicking and dragging the file to overlap the original), you can easily return to the original, or cycle through multiple takes, by pressing the "T" key. You can also set Vegas to automatically display the take name, if you need to remember what you did to each one. If they're truly tiny events, the rendering time isn't much, and once rendered they don't need to be rendered again. Just a thought.
That is a helpful suggestion, Brian, and it does address at least one part of the issue I personally have with non-real-time effects.
Thanks.
Oh yeah, I wanted to ask you guys about that... can you make Vegas 6 tell you the name of the clip/event in the timeline? Vegas 4 did it, and I can't find an option anywhere in 6. What I'm talking about is actual text written on the event in the timeline ("John CU take 6").
Also, doesn't FCP have realtime audio FX?
Vegas does have realtime audio FX, at two different levels, whereas FCP only offers it at one level. Problem is, Vegas offers event-based FX as well, but they're not real time. Therefore, it's a problem for some folks.
Coming from the audio world, it's not an issue for many of us. To view take/event/audio file names, select VIEW/ACTIVE TAKE INFORMATION or press CTRL+SHIFT+I.