Hi all,
Is there any effect in Audition that lets you compare two channels and automatically lower (or remove) the audio that leaks in from the other channel? So the voice of person 1 is automatically lowered or removed from channel 2, and vice versa.
I have decent microphones but the setup is not perfect, and a lot of audio spills over from the other person at the table.
Using Audition 24 on a MacBook Pro.
Thanks for any help.
The closest you can get is to auto-duck one channel. Generally it's easier to use the automation system for this: set it up to write on both channels and run the faders manually. Even if you get it wrong you can edit the result, after all.
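If it helps to see what the ducking itself amounts to, it's just a gain reduction on one channel driven by the other channel's level. A minimal sketch outside Audition (Python/NumPy; the function name, threshold and window values are my own placeholders, not anything Audition exposes):

```python
import numpy as np

def duck(main, sidechain, sr, threshold=0.02, reduction_db=-12.0, win_ms=50):
    """Lower `main` wherever `sidechain` (the other person's mic) is active."""
    win = max(1, int(sr * win_ms / 1000))
    kernel = np.ones(win) / win
    # Smoothed amplitude envelope of the other channel
    env = np.convolve(np.abs(sidechain), kernel, mode="same")
    # Drop the gain while the other speaker is above the threshold
    gain = np.where(env > threshold, 10 ** (reduction_db / 20), 1.0)
    # Smooth the gain changes so the ducking doesn't click
    gain = np.convolve(gain, kernel, mode="same")
    return main * gain
```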
The solution I would try as a starting point is to use the Dynamics effect on both tracks (assuming they are separate and you have them in a multitrack session for non-destructive editing). With the track selected, go to "Effects" > "Amplitude and Compression" > "Dynamics...".
Once the effect window opens (or you can double-click it from the Effects Rack), select the AutoGate checkbox, then play around with the Threshold control in the AutoGate section until you get the desired result.
To simplify: this effect does not let audio through unless it exceeds the loudness threshold, so it is a very blunt solution. A proper solution would be a combination of multiple effects chosen for the sound of the actual file, plus a lot of manual editing of the tracks.
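For anyone curious, conceptually the gate is doing no more than this (a rough Python/NumPy sketch; the threshold and window values are illustrative placeholders, not what Audition's AutoGate actually uses):

```python
import numpy as np

def gate(x, sr, threshold_db=-40.0, win_ms=20):
    """Zero out everything whose short-term level sits below the threshold."""
    win = max(1, int(sr * win_ms / 1000))
    # Short-term amplitude envelope of the signal
    env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    threshold = 10 ** (threshold_db / 20)  # dBFS -> linear amplitude
    return np.where(env >= threshold, x, 0.0)
```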
I'm afraid that generally, that doesn't work - simply because most speakers have a dynamic range that they actually use, and you can only set one threshold for them. It's no accident that broadcasters handle multiple contributors in talks the way they do - which is to use a studio manager to select the sources they want at any given moment. I don't think it would be beyond the wit of mankind to develop an auto-mixing solution that could take account of the way humans actually speak, which would sort of automate the process. But to work properly even that would need to be 'managed' - just to take account of people talking across each other.
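Purely as an illustration of the idea, the core of such an auto-mixer - compare the two mic envelopes and pull down whichever is quieter at each moment - could be sketched outside Audition like this (Python/NumPy; all names and values are my own placeholders, and it still mangles genuine cross-talk, which is the point above):

```python
import numpy as np

def automix(ch1, ch2, sr, win_ms=100, floor_db=-20.0):
    """At each moment, keep the louder mic and pull the quieter one down."""
    win = max(1, int(sr * win_ms / 1000))
    kernel = np.ones(win) / win
    # Short-term envelopes of each speaker's mic
    e1 = np.convolve(np.abs(ch1), kernel, mode="same")
    e2 = np.convolve(np.abs(ch2), kernel, mode="same")
    floor = 10 ** (floor_db / 20)  # attenuate rather than hard-mute
    g1 = np.where(e1 >= e2, 1.0, floor)
    g2 = np.where(e2 > e1, 1.0, floor)
    # People talking across each other still get pulled down incorrectly,
    # which is why this sort of thing needs to be 'managed' in practice.
    return ch1 * g1, ch2 * g2
```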
One of the problems with Audition, when it comes to podcasting etc., is that it was never really designed to be a live studio tool, so it doesn't really have all of the facilities you might want to use. For instance, it would be a lot easier to manage this sort of recording if the channel mute buttons worked silently in Record... (they don't presently work in Record at all), and if the treatment the OP wants (active auto-muting across channels) was available.
The trouble is that because this is a fragmented market, none of the big players is going to agree that this is worth spending development time on - and they are the ones that get asked, because it's them that pay the wages bill.
As I suggested, it can be the base of a cleanup effort, because it was the start of my own solution to a similar problem. The AutoGate caught most of the bleed in my case, and I had to adjust the automation curve to clean up the levels for a few peaks (btw the interface for doing that sucks if you don't have a fader/mixing console). If the bleed is so severe that a gate can't catch even 10-20% of the overlap, then it's a poor recording setup and it needs more work to clean up the result.
The point is to take the question at face value and attempt a solution using the available tools, and I have given that. It's up to the asker to determine how useful the advice is to their case.