Given COVID restrictions, I'm currently working on a project with dialogue captured in different studios and at different times. To try and match the captured audio, we recorded at the same sample rate and bit depth. Now, when editing, I'm having a couple of issues matching the dialogue. I have improved this with some effects, namely the graphic equalizer, but it still isn't matched to a standard I'm happy with. To figure out why, I used the Frequency Analysis panel and found that the frequency content maxes out at different points in the different recordings, and I wondered if anyone knew how I could adjust this. I have attached a screenshot of my frequency analysis.
Hope I have provided enough useful information,
Harvey
Unfortunately the human voice is something that we are very attuned to - we notice minute variations in it that we'd miss completely in other sounds. One of the things we're especially sensitive to is the place in which it's recorded - the 'room tone', if you like - and that you can't alter. We can even tell if it's the same person in the same place on a different day, especially if they've eaten or drunk something they perhaps shouldn't have. The upshot of this is that however hard you try, you won't get rid of the variations you can hear. This is why all professional voice-overs are recorded at one sitting - even mornings and afternoons can make a difference to some voices, especially when recorded up close, as most of them are.
Yes, you will notice changes in frequency response, but even eliminating them (difficult, even with EQ matching) won't get rid of the variations you hear - they come down to just about every other factor involved. Bear in mind also that if you alter the EQ, you are also altering the sound of the recording background, and the chances of getting both correct at once are - well, pretty much non-existent. The sample rate and bit depth (as long as they were at least 44.1 kHz and 16-bit) won't affect the results at all. The mic you used certainly will, though - as will the furniture in the room, and so on.
So good luck - but unfortunately you're doomed to fail. Most people get around this by not trying to run dialogue from two different settings abutted together; use location sound, music, pauses, anything you can. The location changes are most noticeable at the transition points, so these are what you need to disguise.
Hi Harvey,
Are these voice / speaker files?
Yeah, they are dialogue files.
Cheers,
Harvey
The reason I asked about the voice files is that they live in a smaller frequency space than, say, a full environment recording. The main human voice sits roughly between 200 Hz and 8 kHz.
This article has a good chart halfway down:
https://www.dpamicrophones.com/mic-university/facts-about-speech-intelligibility
Your frequency analysis panel shows a lot of energy beyond that usual range, so it's likely noise or background.
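If you want a quick way to hear how much of that out-of-band energy is just noise or room rumble, here's a rough sketch of the same idea outside Audition, using Python with the scipy and soundfile packages. The file names are placeholders and the cutoffs are only an approximation of the speech band, so treat it as a check rather than a fix:

# Rough sketch: keep roughly the 200 Hz - 8 kHz speech band and roll off the rest.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, rate = sf.read("dialogue_take1.wav")   # placeholder file name

# 4th-order Butterworth band-pass around the main speech range.
sos = butter(4, [200, 8000], btype="bandpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)    # zero-phase, so no timing shift

sf.write("dialogue_take1_bandpassed.wav", filtered, rate)

In Audition itself, the FFT Filter or Parametric Equalizer effects will do the same job.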
So, here's a method of trying to fix your voice EQ and volume matching in Audition, courtesy of Mike Russell:
https://youtu.be/KXu-J4xnJ6U?t=41
Your frequency analysis looks almost the same as the one Mike fixes 🙂
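If you'd also like a feel for what that EQ matching is doing, here's a very rough sketch of the general idea in Python (scipy and soundfile again, placeholder file names, folded to mono): measure the average spectrum of the take you're happy with, measure the take you want to fix, and build a gentle correction filter from the ratio. It's only an illustration of the concept, not what Audition or Mike's method does under the hood:

# Very rough spectral-matching sketch: nudge the EQ of one dialogue take
# towards a reference take. File names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import welch, firwin2, fftconvolve

ref, rate = sf.read("studio_A_take.wav")        # the take you like
target, rate2 = sf.read("studio_B_take.wav")    # the take to correct
assert rate == rate2, "match sample rates first"

# Fold any stereo files to mono to keep the sketch simple.
if ref.ndim > 1:
    ref = ref.mean(axis=1)
if target.ndim > 1:
    target = target.mean(axis=1)

# Average power spectrum of each take.
freqs, p_ref = welch(ref, fs=rate, nperseg=4096)
_, p_tgt = welch(target, fs=rate, nperseg=4096)

# Gain curve = how much each frequency needs boosting or cutting,
# limited to +/- 12 dB so it stays a gentle correction.
gain = np.sqrt((p_ref + 1e-12) / (p_tgt + 1e-12))
gain = np.clip(gain, 10 ** (-12 / 20), 10 ** (12 / 20))

# Turn the gain curve into a linear-phase FIR filter and apply it.
taps = firwin2(2049, freqs, gain, fs=rate)
matched = fftconvolve(target, taps, mode="same")

sf.write("studio_B_take_matched.wav", matched, rate)

Even then, as noted above, matching the spectrum won't hide the change in room and performance - it just stops the cut from jumping out quite so much.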