Hey guys, I'm editing a podcast between two men, each using his own mic. So I have two audio tracks, but unfortunately the speakers were close to each other and the room is very echoey, so I can hear both speakers in each track. I tried Unmix Multiple Voices in SpectraLayers 10, but it didn't work very well because they have pretty similar voices. I was thinking that Adobe Audition (or SpectraLayers) could scan both tracks, recognize that they're of the same recording but that each track contains a worse version of the other (because of distance, echo, etc.), and then use that information to separate the voices. Does anybody know how to do that?
I'm afraid the question you should really be asking is whether this is actually possible at all in Audition, and at present the answer is no.
It seems quite possible that at some stage somebody will figure out how to program an AI system to do this, but if the voices are that similar, I expect even an advanced system will have difficulty, as you've already discovered.
Thank you for your answer, Steve. I've done some more digging, and it sounds like what I'm asking for is pretty similar to the process of debleeding. Am I correct? Would Audition be good at that?
Audition doesn't do 'debleeding', I'm afraid. It's primarily an editor, and what you are looking at is rather more the territory of iZotope's repair tools.
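For what it's worth, the usual signal-processing idea behind debleeding is adaptive cross-talk cancellation: treat the other speaker's mic as a reference, estimate how its signal leaks (with delay and colouration) into the track you're cleaning, and subtract that estimate. Below is a minimal sketch of that principle in Python using an NLMS adaptive filter; the file names, filter length, and step size are placeholder assumptions, and this is not how Audition or iZotope's tools actually work, just an illustration.

```python
# Minimal NLMS cross-talk (bleed) canceller: uses speaker B's mic as a
# reference and adaptively subtracts its estimated leakage from speaker A's
# track. Assumes mono WAV files; names and settings are placeholders.
import numpy as np
import soundfile as sf  # pip install soundfile


def nlms_debleed(primary, reference, taps=2048, mu=0.1, eps=1e-8):
    """Return `primary` with an adaptively estimated copy of `reference` removed."""
    w = np.zeros(taps)        # adaptive FIR weights modelling the leakage path
    buf = np.zeros(taps)      # most recent reference samples, newest first
    out = np.empty_like(primary)
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        est = w @ buf         # predicted bleed sample in the primary track
        e = primary[n] - est  # error signal = cleaned output sample
        w += (mu / (eps + buf @ buf)) * e * buf  # NLMS weight update
        out[n] = e
    return out


mic_a, sr = sf.read("speaker_a.wav")  # track to clean (placeholder file name)
mic_b, _ = sf.read("speaker_b.wav")   # bleed source (placeholder file name)
clean_a = nlms_debleed(mic_a, mic_b)
sf.write("speaker_a_debleed.wav", clean_a, sr)
```

In practice this only goes so far: each track also contains some of the other speaker, so the reference isn't perfectly clean, and a long room reverb needs a much longer filter than the bleed path alone. That's largely why the harder cases end up in dedicated repair tools.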