
Normalizing the audio

Contributor,
Nov 22, 2017

I'm putting together my film's audio, including the camera audio and the lav mics, and have read and watched a lot of tutorials. Many recommend, for example, normalizing the audio so that all peaks hit -3 dB, some say that for TV the audio should peak at -12 dB, and some say -24 dB, so go figure.

2. Anyway, my common sense tells me not to normalize the audio to, for example, -3 dB or some other fixed number, because that would mean normal, quiet dialogue would peak at the same loudness to the listener as somebody shouting, which would not make sense, correct? So if that's the case, would the solution be to go over each audio clip manually and make sure that, for example, normal dialogue peaks consistently in the -10 dB range and loud scenes peak out in the -3 dB range?

3. I recorded the actors' audio with XLR lav mics, in stereo rather than mono, to a Tascam audio recorder, but for some reason on the Adobe Audition timeline it comes out of only one speaker. I can fix this quite easily with the Fill Left from Right effect, or perhaps even better by right-clicking the clip, choosing Audio Channels, and putting a checkmark in both the Left and Right checkboxes. I guess that's acceptable as well?
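In effect, both of those fixes copy the populated channel over the empty one. A minimal numpy sketch of the channel copy (just the idea, not Audition's actual implementation; the audio is assumed to be a `(samples, 2)` float array with the signal on the left):

```python
import numpy as np

def fill_right_from_left(stereo: np.ndarray) -> np.ndarray:
    """Copy the left channel over the right, turning a one-sided
    'stereo' recording into identical dual-mono."""
    out = stereo.copy()
    out[:, 1] = out[:, 0]
    return out

clip = np.zeros((4, 2))
clip[:, 0] = [0.1, -0.2, 0.3, -0.4]   # signal on the left channel only
fixed = fill_right_from_left(clip)    # both channels now carry the signal
```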

TOPICS
How to
Community guidelines
Be kind and respectful, give credit to the original source of content, and search for duplicates before posting. Learn more
community guidelines
LEGEND,
Nov 26, 2017

Regarding 1, DC offset: I am not sure why you need to correct for DC offset in this day and age. It used to be necessary with some dodgy early sound cards, but not these days.

Community Expert,
Nov 26, 2017

ryclark wrote:

Regarding 1. DC Offset. I am not sure why you need to correct for DC offset in this day and age? Used to be necessary with some dodgy early soundcards but not these days.

Quite. The amount of waveform asymmetry you get from the human voice (which would be the only real consideration here) would be pretty much wiped out by any limiting carried out above the threshold level you've chosen. So, as long as you've left sufficient headroom on your initial recording to be able to do this accurately, that can't be considered as a legitimate issue. On top of that, the method used for estimating DC offset isn't exactly accurate either, unless you make that estimate on a zero-level signal. Despite any claims to the contrary I wouldn't worry about this, given a correct consideration of the technical facts.

Unless of course you're using a complete old dog of a sound device...

Contributor,
Nov 27, 2017

Does Adobe Audition have a feature that would enable the user to easily convert a person's voice, in pitch and sound, to believably resemble another actor's voice? For example, if an actor's voice has been recorded and new lines need to be recorded but the actor is not available at that moment, could any person say the new lines and have Audition match and convert the voice to the other actor's voice, based on already existing recordings/samples of his voice with different dialogue?

Community Expert,
Nov 28, 2017

julianm44443758 wrote:

Does Adobe audition have a feature which would enable the user easily to convert a person's voice pitch and sound to resemble/match believably another actor's voice, in a case for example if an actor'r voice is recorded and new lines need to be recorded but the actor is not available at that moment so that any person could say the new lines and adobe audition could match and convert the voice to the other actor's voice based on already existing recordings/samples of his voice with different dialogue ?

Yes. Look at this video:

How to use Auto Speech Alignment in Audition - YouTube

It works rather well, even on sung vocals sometimes!

LEGEND,
Nov 28, 2017

Yes, but you can't make it sound the same; there is no software out there yet that can do that. Unfortunately, the only way is to replace the whole lot with another voice and use Auto Speech Alignment in Audition to sync the new dialogue.

Community Expert,
Nov 28, 2017

Yes, this is true. Voice-mimicking software isn't available, and if it were, it would probably end up getting Adobe sued, simply on the grounds that they'd be providing a tool that would make misrepresentation not only possible but extremely likely. But getting the original, or even another artiste, to re-record, and then making it fit, works pretty well.

Contributor,
Nov 28, 2017

Thanks for the YouTube video about aligning things; it's a good video to learn from, and a feature I didn't know existed. Regarding mimicking somebody's voice: if an actor, as usual, gives permission in his contract for his voice to be recorded and edited, I wouldn't think there's anything wrong with having his voice edited if needed, once that software becomes available.

LEGEND,
Nov 28, 2017

julianm44443758 wrote:

, once that software comes available

I'm afraid that that won't be 'once' but 'if ever'.

Contributor,
Nov 28, 2017

It seems Adobe already created it over a year ago, but I don't know if it was released: New Adobe Software Can Mimic Your Voice & Create New Speech. On another note, while editing my audio I felt that after transferring a clip from the source monitor to the timeline, the clip became a bit more echoey. Can that be possible, and if so, is there any cure? And secondly, on another issue: what was the shortcut (I'm on Windows 10) to drag a clip to the track above or below without misaligning it in time, left or right?

Engaged,
Nov 23, 2017

First, let's make sure loudness and its measurement nomenclature are accurately referenced:

LU -> Loudness Unit. 1 LU is a direct correlation to 1 dB. So yes, 1 LU == 1 dB.

LUFS -> Loudness Units Relative to Full Scale. The Integrated or Program Loudness descriptor (referenced in LUFS) represents the average perceived loudness of a measured piece in its entirety. It's similar to (but not the same as) RMS. It does NOT represent peak amplitude or a peak ceiling, which, by the way, does not in any way indicate perceptual loudness. Peak amplitude is a representation of proportional voltage, aka signal level.

When any spec asks for something like -23.0 LUFS, the average loudness of the entire piece is mixed in real time to that descriptor. Or, the piece is Loudness Normalized after the fact using an offline process.
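Since 1 LU corresponds to 1 dB, Loudness Normalizing after the fact reduces to a single gain offset: the difference between the target and the measured Integrated Loudness. A small sketch (the measured value would come from a BS.1770 meter; the numbers here are illustrative):

```python
def loudness_normalize_gain(measured_lufs: float, target_lufs: float) -> float:
    """Gain in dB that moves the measured Integrated Loudness onto
    the target. 1 LU == 1 dB, so it's a straight subtraction."""
    return target_lufs - measured_lufs

# A piece measured at -19.2 LUFS, delivered to a -23.0 LUFS spec,
# needs 3.8 dB of attenuation:
gain_db = round(loudness_normalize_gain(-19.2, -23.0), 1)   # -3.8
```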

It's important to note that all specs require True Peak (as opposed to Sample Peak) compliance as well. Sample Peak and Intersample Peak values are referenced using dBFS and dBTP respectively on a discrete meter. In essence, Intersample Peaks must be recognized and prevented. The sole purpose of this requirement (or restriction) is to prevent clipping, distortion, codec artifacts, etc. A capable Limiter inserted at the end of a processing chain is used to maintain the user- or spec-defined peak ceiling.

Loudness Meters require a user-defined or spec-defined Integrated Loudness target. The measurement scale can be set to display Absolute values in LUFS, or Relative values in LU. Their correlation is as follows:

0 LU (Relative) == the LUFS (Absolute) Integrated Loudness Target

For example:

Integrated Loudness target -23.0 LUFS (Absolute scale) == 0 LU (Relative scale)

Integrated Loudness target -16.0 LUFS (Absolute scale) == 0 LU (Relative scale)

Integrated Loudness target -14.0 LUFS (Absolute scale) == 0 LU (Relative scale)

Integrated Loudness target -24.0 LUFS (Absolute scale) == 0 LU (Relative scale)

If you prefer to mix/master using the Absolute scale, you're referencing LUFS. If you prefer to mix/master using the Relative scale, just remember that 0 LU is always equivalent to the defined Absolute target.
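That Absolute/Relative correlation is just a subtraction, which a short sketch makes concrete:

```python
def to_relative_lu(absolute_lufs: float, target_lufs: float) -> float:
    """Convert an Absolute reading (LUFS) to the Relative scale (LU),
    where 0 LU always sits at the chosen Integrated Loudness target."""
    return absolute_lufs - target_lufs

# On target reads as 0 LU, whatever the target is:
on_target = to_relative_lu(-23.0, -23.0)       # 0.0
# A reading 2 dB hot shows as +2 LU regardless of the chosen target:
hot_broadcast = to_relative_lu(-21.0, -23.0)   # 2.0
hot_podcast = to_relative_lu(-14.0, -16.0)     # 2.0
```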

Again this descriptor has no correlation to Peak Amplitude and/or signal level!

As far as specification compliance, you simply mix/master the average, perceptual loudness to the specified Integrated Loudness target and adhere to the specified amplitude ceiling. How you do this is of course subjective.

Lastly, the delivery targets, most notably the Integrated Loudness target, will vary based on the institution or distribution platform. For example, you wouldn't target -23.0 LUFS for a highly dynamic podcast that would most likely be consumed on a mobile device in a less-than-ideal environment. It would be difficult to consume comfortably under those circumstances: it's simply not loud enough, and wide dynamics would only intensify these consumption problems. -16.0 LUFS (stereo) is recognized as a suitable target for this particular platform.

Bottom line - recognize the difference between referenced average/perceived loudness and peak amplitude. Use your ears accordingly and mix/master based on how (and where) you intend to distribute your work.

-paul.

@produceNewMedia

Contributor,
Nov 28, 2017

In addition to the previous posts, I'm experiencing an issue I hope to get help with, and it would probably help other users as well: I'm using my Tascam DR-70D audio recorder with XLR-input lav mics, and although the recorder is set for stereo, in Adobe Audition the audio comes out of only one speaker. Is the easiest and best way to get both speakers working, without further complications, the Fill Left from Right effect, dragged onto the selected audio clips that have left and right channels but one channel empty?

Thanks

LEGEND,
Nov 29, 2017

Beta Testing #VoCo

Holding the Shift key while dragging the audio clip will allow you to move the clip between tracks without altering its time position. Alternatively, you can lock the clip in time from the clip's right-click drop-down menu.

How many lav mics are you recording, and how do you have their pan positions set on the Tascam? If you are recording just two, then have one panned Left and the other panned Right so that you can work on them separately. However, you would be better off recording in Mono on the Tascam, as you would then get a separate mono file for each of the connected mics. This is much easier for editing and mixing in the Multitrack view.

Contributor,
Nov 29, 2017

Thanks. The pan position was set to centre, recording in stereo. Usually I have only two lav mics hooked up to this audio recorder through XLR. If I change one mic to left-pan mono and the other to right-pan mono, I assume that after I have finished editing I would change the tracks from mono to stereo before exporting the film?

Secondly, regarding matching the final loudness of all the audio in the film: under the Match Volume settings, do you use LUFS or Perceived Loudness, at -12 dB for example?

LEGEND,
Nov 29, 2017

When working in the Multitrack view in Audition, you start with a stereo session and import your two mono audio clips onto two separate tracks. Then you can pan each track wherever you want in the stereo field, and when you mix down the session you end up with a stereo audio file anyway. No need to convert anything.
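The pan-and-mixdown step amounts to per-channel gains on each mono clip. A numpy sketch using a constant-power pan law (Audition's internal pan law may differ; this just shows the idea):

```python
import numpy as np

def pan_to_stereo(mono: np.ndarray, pan: float) -> np.ndarray:
    """Constant-power pan of a mono clip into a stereo pair.
    pan runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (pan + 1.0) * np.pi / 4.0          # maps pan to 0..pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=1)

# Two mono lav tracks panned apart, then summed into one stereo mixdown:
lav_a = np.full(3, 0.5)
lav_b = np.full(3, 0.3)
mix = pan_to_stereo(lav_a, -0.5) + pan_to_stereo(lav_b, +0.5)
```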

If you are not working for a broadcasting organisation, then there is no reason to use any loudness settings. Just adjust your final levels to reasonable peaks of around -3 dB and all should be OK. If the final video gets uploaded to YouTube or Vimeo or the like, their processing will probably sort out any loudness adjustments required for their streaming network.

Mentor,
Nov 29, 2017

Peak amplitude of -3 dB is going to sound loud in your editor.

Standards are about more than broadcast; they are part of the 'make stuff not sound annoying' movement of the last decade, along with dynamic range, relative loudness, etc. Sure, you can ignore these, but why accidentally annoy people? It takes just a single mouse click. -16 LUFS (not dB) is the new standard on the internet. Micro-dynamics will always sound better if you do the tweaking yourself rather than leaving it to YouTube. Right-click any YouTube video and choose 'Stats for nerds': anything dynamic will be compressed under 'content loudness'. For example, a -3 dB 1 kHz tone uploaded to YouTube will be encoded at -10 dB. Sound familiar?

Some technical reading:

Loudness vs. Speech Normalization in Film and Drama for Broadcast

http://www.aes.org/technical/documentDownloads.cfm?docID=521

Commercial Advertisement Loudness Mitigation (CALM) Act Summary of Best Practices

https://www.nab.org/documents/resources/CALMBestPractices.pdf

Contributor,
Nov 29, 2017

Good info. If I read it correctly, according to the article it seems that for feature films the LUFS target is around -23, and for TV perhaps around -20.

Community Expert,
Nov 29, 2017

chrisw44157881 wrote:

Standards are more than broadcast. its part of the 'make stuff not sound annoying' movement for the last decade;

You might think that, but that's not really the driving force behind it. Mainly it's about not having to train audio operators properly - or even at all. Also it's about forcing advert makers (who are remarkably adept at making everything ridiculously loud) to comply to a system that effectively forces them to be 'quieter'.

If you do a mix on a programme that doesn't require people to keep altering the volume, and you don't crash it into 0dB, then that's a competent mix, whatever the LUFS meter says (although see next para). And some of us don't need a LUFS meter to do that...

And it's entirely ridiculous anyway, as the last thing that happens to all broadcast audio is that it gets shoved through an Optimod, which kills all the excesses anyway. The BBC still accepts material created only using PPMs (to their guidelines) and say that if they really feel it's necessary to alter things, they generally won't play with the dynamics, but just shove the level up or down until it fits -23 LUFS.

Contributor,
Dec 01, 2017

I used the loudness auto-match to -23 LUFS for dialogue, which seems to work great, but I suspect that it does not stop the level going over 0 dB on the plus side, does it? If not, do you then just normalize that audio clip manually, for example setting all peak levels for that particular clip to around -6 dB to match the other dialogue clips?

I assume that film buyers usually don't have a strict "legal" requirement for the dialogue audio to be at, for example, -23 LUFS; instead they foremost just want it to sound good and even, correct?

As a side note: I realized that this very informative and wonderful thread has gone to a second page, and when I opened the thread and went to the bottom of it I barely saw that a second page was available. So I would suggest to Adobe, if possible, to have all the posts on one page instead of dividing a thread across multiple pages, for convenience and efficiency.

Mentor,
Dec 01, 2017

BS.1770 is a standard measured over the whole programme, not a clip equalizer. I use RMS at -23 dB on all my clips first for that. Naturally, the dynamic range of the human voice will then peak at a nice -10 dB or so, and I can use that headroom for explosions and gunfire. Cool, huh?
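That RMS step is a measurement plus a gain offset, which can be sketched in numpy (a full-scale sine stands in for a clip; real dialogue would use the same arithmetic):

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS level of a clip in dB relative to full scale (1.0)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def gain_to_rms_target(x: np.ndarray, target_db: float = -23.0) -> float:
    """Gain in dB that brings the clip's RMS level to the target."""
    return target_db - rms_db(x)

# A full-scale sine sits at about -3 dB RMS, so it needs roughly
# -20 dB of gain to land at -23 dB RMS:
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = np.sin(2.0 * np.pi * 1000.0 * t)
gain = round(gain_to_rms_target(sine), 1)    # -20.0
```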

Final step: BS.1770. By the way, if you did have anything over zero, it would fix that too, even without RMS first, in case you're wondering. Always edit dialogue first, then add the SFX and music around it. Always, always, always.

Contributor,
Dec 01, 2017

Nice. So after it's all done, highlight all the dialogue audio and find the setting for BS.1770?

Contributor,
Dec 01, 2017

I just realized, after enabling the Reduce Noise option under Essential Sound on a short highlighted clip on the timeline, that once I render the clip, save, and play the scene, the noise reduction (to remove the background air hiss) is not applied to the clip. Is this a glitch on Premiere's side?

Contributor,
Dec 02, 2017

Important question: where is the setting or effect in Premiere Pro or Audition to set BS.1770? Because auto-matching to -23 LUFS unfortunately does not prevent clipping, which it should.

Mentor,
Dec 02, 2017

Audition: under the Match Loudness window.

If your files peak at 0 dB even after RMS normalization, then you need to either lower the volume or add compressors/limiters. BS.1770 isn't meant to be a band-aid.
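The peak check described here can be sketched as follows. A real limiter only touches the loud sections; the plain gain drop below is just to show the arithmetic of keeping peaks under a ceiling:

```python
import numpy as np

def sample_peak_dbfs(x: np.ndarray) -> float:
    """Sample-peak level in dBFS (0 dBFS == full scale, 1.0)."""
    return 20.0 * np.log10(np.max(np.abs(x)))

def enforce_ceiling(x: np.ndarray, ceiling_db: float = -1.0) -> np.ndarray:
    """Attenuate the whole clip so its sample peak sits at or below
    the ceiling; clips already under the ceiling pass through."""
    peak = sample_peak_dbfs(x)
    if peak <= ceiling_db:
        return x
    return x * 10.0 ** ((ceiling_db - peak) / 20.0)

clip = np.array([0.2, -0.9, 1.1, -0.4])   # peaks over full scale
safe = enforce_ceiling(clip, -1.0)        # peak pulled down to -1 dBFS
```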

Contributor,
Dec 02, 2017

I found it; it's set by default to BS.1770 and -23 LUFS. I assume it's the same in Premiere Pro. On another note, I'm trying to eliminate the loud AC background noise using Clean Noisy Dialogue, but it only applies to the middle and end of the clip, not the start. I saw posts about that problem from 2014. Is there any solution?

Mentor,
Dec 02, 2017

Hard to say without an example. In Audition there are three ways: Noise Reduction via a noise print, Learn Sound Model, and Adaptive Noise Reduction.

Test the FFT sizes and click 'Output Noise Only'; that way, if you start hearing dialogue, you have the wrong settings. And remember, not many effects work perfectly, so don't max anything out. Sometimes several lighter FFT passes work better than one big one. You can also set the frequencies that will be affected with the top blue line.
