I feel like I’m Waves’s biggest fan, but I’m unable to use many of Waves’s newest plugins due to the PDC latency they add.
I mix as I go during composition, so I have to avoid plugins with PDC latency. Lately most of Waves’s new plugins add latency, including SSL EV2, Magma Channel, Magma Springs, etc.
Other developers offer adjustable oversampling options. This lets us turn oversampling off for zero latency during composition, then dial it back up for rendering.
The smartest plugins can switch oversampling on automatically during rendering, though I appreciate being able to set this myself because the automatic switching doesn’t work in all DAWs.
Anyhow, please consider zero latency for those of us who mix as we compose. Ideally it would be set in the effect itself, but even a separate “Live” plugin is adequate. (I always use AR TG Mastering Chain Live, for example, because it is zero latency vs. ~8000 samples for the full version.)
Thank you for your consideration.
Often it’s oversampling that’s the culprit, though sometimes it’s things like linear-phase filters. Oversampling will always add latency; the trouble is that everyone keeps requesting it.
Ironically, you can increase the sample rate of your session and get what is effectively the same thing as oversampling, but at a lower latency. The downside is it “can” be a lot more taxing on the system. I say “can” because if you’ve set every plugin to oversample at 4x, that session will end up using more CPU than one simply running at 96k.
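To put a number on the latency side of that trade-off, here’s a quick sketch. The 512-sample figure is just an illustrative internal buffer size, not taken from any particular plugin: a latency that is fixed in samples takes half as long in milliseconds when the session rate doubles.

```python
def latency_ms(samples: int, sample_rate: int) -> float:
    """Convert a fixed latency in samples to milliseconds at a given rate."""
    return samples / sample_rate * 1000

# A hypothetical plugin that always needs a 512-sample internal buffer:
print(f"{latency_ms(512, 48_000):.2f} ms at 48 kHz")  # 10.67 ms
print(f"{latency_ms(512, 96_000):.2f} ms at 96 kHz")  # 5.33 ms, same samples, half the time
```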
But I digress…
For zero-latency operation the internal oversampling would effectively have to be switched off.
Exactly. That’s why I’m requesting adjustable oversampling options, like other developers are implementing.
It’s great: Set oversampling to OFF during composition, and dial it up as high as you want for rendering. This is the best of both worlds!
Good point about the linear-phase issues. But again, in the best plugins that’s switchable, too.
SSL EV2, Magma Channel & Magma Springs have ~1ms of latency when working at 48kHz. How does 1ms of latency make them unusable for you? (I’m asking, because I’m trying to understand the issue)
Because it’s cumulative, and in addition to the audio interface’s latency.
So let’s say I start with something like 5.33 ms of round-trip latency from my audio interface. Then we layer a few 1 ms plugins on the channel, bus, and master, say three at each stage, and that’s another 9 ms.
Now we’re over 14 ms.
And some plugins have more latency than that. HLS Channel for example has 3.35ms.
How about the named Nx rooms? Those are 11.2 ms of latency, and that’s in addition to the latency of the audio interface and effects. (I understand those have to have latency because of how the rooms are sampled, not due to oversampling, but I’m mentioning it as an example of how the latency adds up.)
The Puigtec EQs are 11.2 ms each, so if I use both Puigtec EQs on a channel, that’s 22.4 ms. And again, it stacks up from channel to submix to master.
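Summing the figures quoted above makes the stacking obvious. This is just back-of-the-envelope arithmetic; the per-plugin numbers are the approximate ones mentioned in this thread, and the chain itself is a made-up example:

```python
# Delay compensation is cumulative: every plugin in the signal path adds
# its own latency, and the DAW delays everything else to match the longest path.
# All figures in milliseconds, as quoted in the thread (approximate).
interface_roundtrip = 5.33       # audio interface round trip
channel = [11.2, 11.2, 1.0]      # both Puigtec EQs plus a 1 ms channel strip
submix = [1.0]                   # one 1 ms plugin on the bus
master = [1.0]                   # one 1 ms plugin on the master

total = interface_roundtrip + sum(channel) + sum(submix) + sum(master)
print(f"{total:.2f} ms of monitoring latency")  # 30.73 ms
```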
It has become standard among certain other companies to offer adjustable oversampling. It’s a great feature. You can turn it off during composition and dial it up when you render. Some companies even have an automatic option, so you have one setting for real-time performance and another during rendering.
That’s what I’m asking for. Adjustable oversampling options with the ability to turn it off when we don’t want it.
I agree, where it’s possible.
There would be cases where, because of the maths involved, it wouldn’t be possible, quite possibly the aforementioned Nx plugins, for example. H-Reverb is an FIR effect; that’s another example where latency may be unavoidable because of the maths involved. If the Abbey Road reverbs are based on the same tech, it’s possible their latencies can’t be reduced either.
I’m just taking a wild guess at those examples, so I could easily be wrong there. The point, though, was that some latencies are introduced not by oversampling but by the complex maths itself. Anything with linear-phase filters would be a good example too.
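For the linear-phase case specifically, the delay really is baked into the maths: a linear-phase FIR filter with N taps has a group delay of (N − 1) / 2 samples no matter how it’s implemented. A tiny sketch, where the 4096-tap figure is just an illustrative filter length, not any particular plugin’s:

```python
def fir_group_delay_ms(num_taps: int, sample_rate: int = 48_000) -> float:
    """Inherent delay of a symmetric (linear-phase) FIR filter: (N - 1) / 2 samples."""
    return (num_taps - 1) / 2 / sample_rate * 1000

# A hypothetical 4096-tap linear-phase EQ at 48 kHz:
print(f"{fir_group_delay_ms(4096):.1f} ms")  # 42.7 ms, irreducible without giving up linear phase
```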
But yes, where it’s possible I believe the user should have the choice of “quality”, or perhaps even give the plugins a “live” mode.
Yes, yes, a live mode would make me happy. That’s what I use with AR TG, and it’s one of my favorites.
I’m taking Krabbencutter’s advice to try to ignore the latency and see if I can work without noticing it, or just live with it. That way I can at least enjoy Magma Tube Channel Strip and SSL EV2 which are really great.
I do know what you are talking about though, as I can have similar experiences with latencies.
That’s how I came across the methodology where I first make a compromise and track with the “rough” sound I have in mind. It only has to be good enough to set the right vibe and to get my ideas down. This way I also don’t get lost in endless tweaking upfront where it stalls the entire recording process.
Once I pretty much have the arrangements in that’s where I work on improving the sounds. This could involve duplicating the track and swapping out one amp or effect for another then comparing the two versions after I make my changes. At this point I don’t have to care as much about latencies either.
Once I’m sure I have “improved” the sound I can then clean up the session and get rid of the original version.
It’s a compromise, but it places the priorities where they’re needed and gives me a way to improve things going forward. This is why I don’t render anything unless absolutely necessary. Freezing tracks gives me a more efficient way to go back and improve on something as the arrangement builds.
Good call & good advice. I’ve never gone too deep with freezing because I have an irrational fear that something will break!
In a similar-but-different way: once my projects reach a point of complexity, I typically export a render and add additional parts in a separate project file. I always do vocals & guitar, for example, in separate project files, then import them back into the main project after all the editing and cleanup.
That’s an interesting approach. I know Bruce Swedien (Michael Jackson’s engineer) would do a rough mix and bounce it to another tape in order to record more tracks. Then he’d repeat that process a few more times until he had everything down.
This was to preserve the integrity of the tape because it wears down over time.
When it came to mixdown he’d recall all the preserved original sessions and mix with those. I’m just unclear whether he bounced them down to a new tape first, but I imagine he did.
I’ve been using the freeze function for as long as it has been around, so I have no reservations personally. But I do make thorough use of folder stacks and track alternatives as I flesh out the track. I also make project alternatives as the mix moves forward. So I have a working history of most things, which has proved to be helpful in the past.
Because of all this it’s rare that I’ve made a mistake I couldn’t recover from. Though they do still seem to happen on the odd occasion. Nothing is foolproof.
That is fascinating re: integrity of the tape.
When I was recording in a studio years ago we had a song with a difficult part where the bass player had to record over and over and over to get it right. I began to worry with so many repeated recordings! The engineer said it would be fine…
To my surprise I just read that it’s not the magnetics that are the problem, but rather physical wear, and mainly only an issue if the machine isn’t in perfect running order.
Man, I’d love to mix down to real tape. I like tape emulations a lot, so that leads me to believe I’d enjoy the real thing even more.
Emulations are a cool thing. I can also use them as part of my workflow and revisit what I use and how I use them once I finally have the context of a full mix.
Maybe my choice of tape was too bass-heavy, too top-heavy, or too saturated; I could adjust my settings or swap it out for a different emulation that’ll fit better in the mix.
That’s what I love about the non-destructive approach in modern-day DAWs.