Hi All,
I write small ensemble classical music, and everything I build is created with VSTs. Almost all of the VSTs have reverb, delay and EQ presets applied, and I tend to use the factory settings.
Beyond the obvious (panning, a bit of EQing, volume balancing and such), I hear no discernible difference when I have a work mastered, other than it being louder.
Is there much point in mastering music made with VSTs?
All the best,
Rob
Hey @kenrob2037,
Welcome to the Waves Forum.
Mastering adds the finishing touch to your music or audio, ensuring it sounds polished and consistent across different platforms and playback systems, regardless of the sources you used, whether that's VSTs or live recordings.
So my opinion here is yes, there is a point to mastering VST-based music.
Hi,
While I agree with you, what I’ve found is that most mastering, especially AI and online mastering, only raises the volume.
I use Logic 11, and its mastering does make quite a difference, especially with the Valve, Transparent and other character choices. I’ve paid for mastering, and as I said, the only difference I can hear is a volume increase.
Most frontline VST makers do a pretty good job of supplying their products with sufficient adjustments to make them sound their best. That said, none really sound like the real thing, especially orchestral sounds, because when you change dynamics, volume is all most of them change. Very few alter the tone, attack or decay.
As I can’t afford an orchestra, they will have to do until something better comes along.
A large majority of professional music productions are done on projects that only use plugins; hip-hop, rap, EDM and pop are all good examples. It’s not the plugins.
Mastering doesn’t just raise the volume; it refines the tonality, energy and dynamics of the music. That said, the difference is generally subtle, especially if the track has been well mixed. The most obvious thing you hear is the loudness difference, but if you listen to the before and after with the levels matched, you should hear some other differences too.
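If you want to try that outside the DAW, here’s a rough sketch of a level-matched A/B in Python. It just measures the integrated loudness of both renders and normalises them to the same target before you compare. The file names are placeholders, and it assumes both files share a sample rate and that you have the soundfile and pyloudnorm packages installed.

```python
# Rough level-matched A/B: bring both renders to the same integrated loudness.
# "premaster.wav" and "mastered.wav" are placeholder names.
import soundfile as sf
import pyloudnorm as pyln

pre, rate = sf.read("premaster.wav")
post, _ = sf.read("mastered.wav")      # assumed to share the same sample rate

meter = pyln.Meter(rate)               # ITU-R BS.1770 loudness meter
pre_lufs = meter.integrated_loudness(pre)
post_lufs = meter.integrated_loudness(post)
print(f"pre-master: {pre_lufs:.1f} LUFS, mastered: {post_lufs:.1f} LUFS")

target = -20.0                         # arbitrary comparison level, low enough to avoid clipping
sf.write("premaster_matched.wav", pyln.normalize.loudness(pre, pre_lufs, target), rate)
sf.write("mastered_matched.wav", pyln.normalize.loudness(post, post_lufs, target), rate)
```

Drop the two matched files onto adjacent tracks in your DAW and flip between them; with the loudness advantage taken away, whatever is left is what the mastering actually did.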
Hi Simon,
I agree with you on what mastering is supposed to do. While I’m a composer and not an audio technician, I’ve yet to hear online mastering do anything other than raise the volume. Even looking closely at a spectrum analyser, I can’t see a difference.
However, when I do my own in Logic, I can hear a huge difference. The balance, presence and tone of the instruments are much more refined, clear and noticeable when I do it, as opposed to online mastering.
I have a friend who does this for a living. He records classical music concerts in Pro Tools. He says that is what most professionals use as it is designed to record live music. He mixes and masters in Pro Tools too.
For him, the most important processing for live music is EQing. He reckons that most of the other things take care of themselves with a standard approach. However, what most of us creators do is not what he does. While we will never be able to reproduce the sound of a live orchestra, I think what manufacturers have done with VSTs is pretty amazing. I rarely touch a VST setting to make it sound better.
So, I do minimal mastering to all my music, because I believe that the quality VST makers, like Pianoteq, have done most of the job for us. But when we combine different VSTs, and effects, that’s where we need to step in and resolve issues.
Whether you’re working with music, post-production for film, or audio in general, if you get enough sound sources together there will eventually be conflicts, with sounds interfering with each other.
If you only have a few sounds it may not be noticeable, but enough of them will cause things like frequency or level masking. The result could sound dark and muddy, and lack clarity and presence. It doesn’t matter if it’s live, sampled or synth based; it’s the psychoacoustics of the way sound works. Of course, a good mix would deal with all of that.
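As a very rough illustration of what I mean by frequency masking, here’s a little Python sketch that compares where two stems concentrate their energy. The stem names and band edges are made up, and the 30% threshold is just an arbitrary flag, not any kind of standard.

```python
# Very rough masking check: see where two stems pile up energy in the same bands.
# "strings.wav" and "harp.wav" are placeholder stem names.
import numpy as np
import soundfile as sf
from scipy.signal import welch

def band_fractions(path, bands):
    """Share of a stem's spectral energy that falls in each band."""
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                 # fold stereo to mono
    freqs, psd = welch(data, fs=rate, nperseg=8192)
    energies = np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
    return energies / energies.sum()

bands = [(60, 250), (250, 500), (500, 2000), (2000, 6000), (6000, 12000)]  # Hz, illustrative
a = band_fractions("strings.wav", bands)
b = band_fractions("harp.wav", bands)

for (lo, hi), fa, fb in zip(bands, a, b):
    flag = "  <-- both concentrated here, listen for masking" if fa > 0.3 and fb > 0.3 else ""
    print(f"{lo:5d}-{hi:<5d} Hz   strings {fa:5.1%}   harp {fb:5.1%}{flag}")
```

Your ears (and a good mix) are still the real test; this just points at where to listen first.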
When it comes to mastering, though, it’s only really there to apply a final polish. If you have an average mix, the effect of mastering will be more noticeable; on the other hand, if you’ve managed to create a fantastic mix, then the mastering engineer or service won’t change a thing.
What constitutes a fantastic mix is quite subjective, however. Give five mastering engineers the same track and you will end up with five slightly different versions depending on their experience and personal tastes. Online services are no different, except that in place of experience and taste the AI has training and algorithms.
It’s actually quite conceivable that with a good enough mix a particular service may seem to do nothing more than raise the volume. More likely, though, there are very small nips and tucks that are quite subtle to the ear, whereas another service may do something more noticeable simply because that’s the product of its training.
All this is independent of what sound sources you use, but it is highly dependent on your production and on the nature of the AI’s training and algorithms.
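If you want to actually see those nips and tucks rather than just listen for them, something like this sketch plots the long-term spectrum difference between the unmastered and mastered renders. The file names are placeholders, and it assumes both files share a sample rate and that you have numpy, scipy, soundfile and matplotlib installed.

```python
# Plot the long-term average spectrum of the mastered render minus the unmastered one.
# "premaster.wav" and "mastered.wav" are placeholder names.
import numpy as np
import soundfile as sf
from scipy.signal import welch
import matplotlib.pyplot as plt

def long_term_spectrum(path):
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)            # fold stereo to mono
    freqs, psd = welch(data, fs=rate, nperseg=8192)
    return freqs, 10 * np.log10(psd + 1e-12)

f_pre, pre_db = long_term_spectrum("premaster.wav")
f_post, post_db = long_term_spectrum("mastered.wav")    # assumed same sample rate

plt.semilogx(f_pre, post_db - pre_db)       # positive = mastered version has more energy
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level difference (dB)")
plt.title("Mastered minus unmastered, long-term average spectrum")
plt.grid(True, which="both")
plt.show()
```

Small broad tilts of a dB or so are exactly the kind of subtle change I’m describing.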
What mastering (and more broadly, any mixing you do on your busses) will achieve goes across all your instruments, after they’ve been summed. It works with the interactions between all the instruments in your song.
No single virtual instrument can balance itself with the other instruments - they can’t even “hear” them. (And a sidechain is a very very limited version of what I’m referring to.)
Individual channel mixing will help your instruments sound like they’re in the same “place”, like they belong together. Master bus processing will help that place sound good, applied to all instruments together. Master-ing usually refers to making the recording sound good with no further processing on playback (i.e. the listener doesn’t have to adjust the volume, or their EQ, etc.). Virtual instruments cannot do these last two stages for you, no matter how good the raw inputs are. And different ones usually need a bit of help (typically just EQ) to sound like they belong together, even before they hit the bus.
Hi All,
Thank you for your thoughts. It’s most helpful and rewarding to get other perspectives on Mixing and Mastering.
I remember that when I first started out in the early 1980s with my first music notation and DAW programs, I never did any mixing. Everything was panned dead centre, everything at the same volume, and so on. How times and knowledge change.
Working with VSTs from various suppliers, I’ve found that getting them to work together is all about EQing. It’s a lot of work in a 20-stave orchestral score to EQ every track, but it is worth it.
AI is a tricky thing. I think people will go with a quick AI result over listening to each track separately and EQing them and then EQing the Stereo Out.
The other thing I do with VSTs is turn off all their reverbs and apply one in the mix. I use a Plate I’ve created in Logic for orchestral music, and different reverbs for chamber and solo music.
Thanks again for all your words; they’ve helped me understand the process better.
There is way too much reverb in a lot of sound presets these days, and too much compression and saturation too. Sure, they sound nice by themselves, but as soon as you start layering up the sounds it all starts getting too much.
I always find myself dialling back or switching off a lot of effects just to make sounds sit better in the arrangement.
Yes, Simon. Using VSTs from different suppliers in the one project is the killer. Some are swamped in reverb.
For me, mixing and mastering is heavily focused on EQing.
Cheers,
Rob