There's nothing automatic that does this, and if there were, you almost certainly wouldn't like the results! Generally this is something vocalists do themselves when fed a sensible level of the track back into their headphones: they adapt their performance. It's a basic part of mic technique, and it involves more than just singing louder; as they get louder, they tend to move slightly away from the mic too. But the major difference when they get louder is that the quality of their voice changes, and that change is immediately perceptible to the listener. This is what no automated system can achieve, and why you really wouldn't like the result; it would sound completely unnatural. The same thing happens the other way around: when they get quieter, their voices soften, and you can't achieve that with software either.
But this is only the tip of the iceberg. All the other things you can do to make a vocal sit apart from the backing are essentially the art of mixing, and there's plenty you can do with tailoring frequency responses and stereo field positioning that will make a huge difference to how a vocal is perceived in a performance. What generally happens is that you capture an 'appropriate' vocal performance and don't alter its dynamics too much; instead, you fix just about everything else!
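To make the frequency-tailoring idea concrete, here is a minimal sketch of the kind of static EQ moves a mix engineer might apply to help a vocal cut through: a gentle high-pass to clear low-end rumble, plus a small presence boost around 3 kHz. The sample rate, corner frequencies, and gain values are assumptions for illustration, not a recipe; the peaking filter uses the standard RBJ biquad formulas.

```python
import numpy as np
from scipy import signal

FS = 44100  # assumed sample rate

def highpass(cutoff_hz, fs=FS, order=2):
    """Gentle Butterworth high-pass to clear rumble below the vocal."""
    return signal.butter(order, cutoff_hz, btype="highpass", fs=fs)

def peaking_eq(freq_hz, gain_db, q=1.0, fs=FS):
    """Peaking EQ biquad (RBJ cookbook formulas) for a presence boost."""
    a_lin = 10 ** (gain_db / 40)           # sqrt of linear gain
    w0 = 2 * np.pi * freq_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def carve_vocal(x, fs=FS):
    """High-pass at 100 Hz, then a +3 dB presence lift around 3 kHz."""
    b_hp, a_hp = highpass(100, fs)
    y = signal.lfilter(b_hp, a_hp, x)
    b_pk, a_pk = peaking_eq(3000, 3.0, q=1.0, fs=fs)
    return signal.lfilter(b_pk, a_pk, y)
```

Note that this is exactly the sort of static, "fix everything else" processing the paragraph describes: it shapes tone without touching the performance's dynamics.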