by diiz » Tue Nov 12, 2013 7:37 am
Well, I'm a self-taught "musician", and I've no idea if I'm doing it "properly", but I can tell you what I've found works for me.
Primarily, I try to make sure the bass and the bassdrum don't step on each other. First I separate them somewhat in the spectrum with EQs - the C* stereo EQ is good for this. On both, I turn the lowest 30 Hz band down a bit, unless I really need some sub-bass sounds, in which case things have to be done differently... but usually, on the bass I turn the 60 Hz area up a bit and the 125 Hz area down a bit, and then do the opposite on the drum channel.
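Here's a rough summary of the complementary moves I mean - the dB numbers are only illustrative, not my exact settings:
[code]
# rough picture of complementary EQ carving; the dB values are illustrative only
eq_moves_db = {
    "bass":     {"30 Hz": -3, "60 Hz": +2, "125 Hz": -2},
    "bassdrum": {"30 Hz": -3, "60 Hz": -2, "125 Hz": +2},
}
[/code]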
Then I usually separate the bassdrum and bass a bit on the stereo image as well: if I pan the bass a bit to the right, I'll pan the bassdrum a bit to the left...
Another trick I sometimes use is putting a peak controller on the drum channel and connecting it to the volume of the bass - but the catch is that you can't connect it directly to the volume knob, because that causes glitching. The standard "Amplifier" plugin has the same problem. So what I do is take one of the tube amp simulation plugins - I've found most of those don't glitch, and as a bonus you get a bit of tube amp sound on your bass. Connect the peak controller to the gain knob, set the controller's amount negative, and adjust the base value, and the controller will pull the gain down every time the drum hits.
(This glitching may not actually affect the rendered song, especially if you render with sample-exact controllers, but it's still best to be safe.)
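If it helps to picture what that peak controller setup is doing, here's a rough Python sketch of the same ducking idea - this is just a conceptual model, not how LMMS implements it, and the decay/amount numbers are made up:
[code]
import numpy as np

def duck_bass(bass, drums, base_gain=1.0, amount=-0.8, decay=0.9995):
    """Follow the drum peaks and pull the bass gain down on every hit."""
    env = 0.0
    out = np.empty_like(bass)
    for i in range(len(bass)):
        # crude peak follower: jump up on a drum hit, decay slowly afterwards
        env = max(abs(drums[i]), env * decay)
        # a negative amount means the gain drops as the envelope rises
        gain = max(base_gain + amount * env, 0.0)
        out[i] = bass[i] * gain
    return out
[/code]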
As for the actual mixing itself... I no longer use compressors in LMMS for anything other than the drum channel. I do the initial mixing in LMMS, making sure the volumes are balanced and the stereo image is mostly balanced as well. Then I export to wav (always export 32-bit, it leaves headroom for further editing). I usually export in parts - at the very least the drums are worth exporting separately, and if there are any VSTis that require exporting at 44.1 kHz, they need to be exported separately as well (so that I can export the rest of the song at 96 kHz).
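The 32-bit point is worth stressing: in floating point, peaks that go over 0 dBFS aren't destroyed, you can still pull them back down later. A tiny illustration with made-up sample values:
[code]
import numpy as np

hot_mix = np.array([1.4, -1.2, 0.5], dtype=np.float32)   # peaks over 0 dBFS

as_float32 = hot_mix                                                   # 32-bit float export keeps the overs
as_int16 = np.clip(hot_mix * 32767, -32768, 32767).astype(np.int16)   # 16-bit export flattens them

print(as_float32 * 0.5)            # scaled down later: waveform intact
print(as_int16 / 32767.0 * 0.5)    # the overs were already lost at export time
[/code]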
I then open the exported tracks in Audacity and play the track through once more; if there are any glitches or errors, this is the best point to fix them. That's another reason for exporting in multiple parts - if I have to fix one tiny part in one instrument, I don't have to export the entire song again.
Anyway, I do any final tweaks to the volumes and such in Audacity - usually I don't need to do much, maybe adjust the drum track, since some instruments can sound a bit louder or quieter after being exported... Then I mix the tracks together, normalize the result, and only then apply compression. At this point I can see the waveform of the entire track, so I can see where the volume sits for most of the song and how much higher the peaks are. Plus, the compressor in Audacity is pretty good.
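In case the order of operations isn't obvious: sum the stems, normalize the result, and compress last, once the whole waveform is visible. Here's a rough sketch of that order in Python - the compressor is a crude stand-in, nothing like Audacity's:
[code]
import numpy as np

def normalize(x, target_peak=0.99):
    """Scale the mix so the loudest sample sits just under 0 dBFS."""
    return x * (target_peak / np.max(np.abs(x)))

def crude_compressor(x, threshold=0.5, ratio=4.0):
    """Very basic static compression: shrink whatever sticks out above the threshold."""
    mag = np.abs(x)
    over = np.maximum(mag - threshold, 0.0)
    return np.sign(x) * (np.minimum(mag, threshold) + over / ratio)

# stand-in stems; in practice these are the wav tracks exported from LMMS
t = np.linspace(0, 1, 44100, endpoint=False)
drums = 0.6 * np.sin(2 * np.pi * 55 * t)
rest  = 0.3 * np.sin(2 * np.pi * 220 * t)

mixed = normalize(drums + rest)        # mix the stems, then normalize the sum
final = crude_compressor(mixed)        # compress only after seeing the full waveform
[/code]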
Anyway, I hope this helps.