Known Participant
December 17, 2023
Question

Realtime vs rendered multitrack stretch

  • 1 reply
  • 400 views

Hi all,

Here's an issue that's proving a bit elusive, both in product documentation and what I can find in forums.

We produce a syndicated radio show, and one of the networks we run on requires "Sponsor reads": in addition to the regular program content, the host has to say "From the Blankety-Black Studios, THIS is the [name of show]" and also needs to work in a promotional mention here and there. But there is ALSO a generic version of the show for international broadcast (and for posterity) that omits these elements. The added elements add about 20 seconds in total to a 47:50 broadcast hour.

The obvious way to do it is to create the generic version, copy the session file, add the custom elements to the copy, then use global multitrack stretch to scale the hour from 48:10 or so with the added elements back to 47:50 (nobody but the most talented of musicians would ever notice the pacing change from shoe-horning 20 seconds of additional content into 47:50).

But since it IS a music show, and therefore the session contains songs, I'm looking for the highest-possible-quality approach to the stretch, which I recognize is a quality-reducing step. If Stretch is set to Realtime, which is lower playback quality inside Audition than Rendered, do we recover the quality when exporting to a WAV file (i.e., does it do a high-quality render on output)? In Stretch properties we're obviously set to Poly and not Mono, but are there other settings I should hit to wring every possible ounce of quality out of this step? Thanks in advance for any help out there!
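For what it's worth, the stretch ratio involved is tiny. A minimal sketch of the arithmetic, using the times from the post (pure calculation, nothing Audition-specific):

```python
# Stretch ratio for fitting 48:10 of content into a 47:50 slot
# (durations taken from the post above).

def to_seconds(minutes, seconds):
    """Convert a minutes:seconds duration to total seconds."""
    return minutes * 60 + seconds

source = to_seconds(48, 10)   # 2890 s, with sponsor reads
target = to_seconds(47, 50)   # 2870 s broadcast hour

ratio = target / source
print(f"stretch ratio: {ratio:.4f}")          # ~0.9931
print(f"speed-up: {(1 - ratio) * 100:.2f}%")  # ~0.69%, well under 1%
```

A sub-1% global change is at the very gentle end of what any stretch algorithm has to do, which is why the pacing shift is inaudible to most listeners.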

This topic has been closed for replies.

1 reply

SteveG_AudioMasters_
Community Expert
December 17, 2023

I don't know if it's any help, but to the best of my knowledge, it goes like this: real-time stretching occurs when you play back a track in Multitrack, and it also runs in real time when you mix that track down for export. If you select Rendered (high quality), it will do a non-real-time render on the track, but it doesn't appear to store that anywhere - if you click on the stretched track and open it in Waveform view, it's still the unstretched length.

 

This all raises some questions that have indeed been asked before. Some people report that the real-time render actually sounds better than the 'high quality' one, although for a very small shift it's very hard to tell the difference. You can test the result quite easily by turning your shifted track into a unique copy, which will have whatever options you've selected applied to it, and that's what you use for the mix - no potential errors during the mixdown, because you aren't doing anything to it. It's also worth opening the Properties box for the track and looking at the advanced stretch options - you may be able to change something there that will help.

 

There are other approaches, though. Some people say that the best way to do this is to open the finished mixed result in Waveform view and use the Time and Pitch > Stretch and Pitch effect with the Radius algorithm - YMMV.

 

Given that it's a music show, and the musos might potentially spot things you'd rather they didn't, there is one more approach to the whole thing - don't apply any stretching or shrinking to the music at all, and treat only the dialog. So you have a full-length version that either has the Sponsor reads or not - that's up to you. One way, you have to shrink the dialog slightly; the other, you have to expand it. All you need then is to position the music items appropriately on their own track, without altering the timing of any of them. It is generally the music where these alterations will show, after all. For this option I'd almost certainly select Monophonic as the processing option.

PeteGouldAuthor
Known Participant
December 17, 2023

Thanks Steve.

 

There are, I must admit, some serious gaps in both documentation and product. Having to export a waveform and then reimport it really isn't a great workflow, and one has to do it in some really surprising areas.

Another, even less logical example is exporting segmented audio content via marker ranges. Syndicated radio shows go out to stations in segments (often as MP3 rather than WAV or BWF), each segment being a contiguous portion of the show that airs between commercials. We define segments using marker ranges, and I was stupefied to discover that it's impossible to directly export individual segments from within the multitrack session. Given that Audition grew up as radio software and this is Radio 101, that one's a real head-scratcher. Ultimately we found the best workflow was to export the entire show as a WAV, then use Magix (formerly Sony) SoundForge to split it (SoundForge inherits the markers from Audition via the WAV file and will name the file segments based on the range names). It will split the whole two-broadcast-hour episode in less than two seconds. Then we use it to batch-convert the segments into MP3, as that's what the stations want (and SoundForge's MP3 batch-convert is crazy fast too). But it would be a lot more convenient if Audition would directly export the MP3 segments from the multitrack session in the first place. (You listening, Adobe?)
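For anyone without SoundForge, the splitting step can also be scripted outside Audition. Here's a minimal sketch that builds ffmpeg commands to cut a mixed-down WAV into per-segment MP3s; the segment names, times, and filename are hypothetical placeholders for whatever your marker ranges actually contain:

```python
# Sketch: turn a list of marker ranges into ffmpeg commands that cut a
# mixed-down WAV into per-segment MP3s. The names, times, and source
# filename below are made-up examples; substitute your real ranges.

# (name, start_seconds, end_seconds) -- hypothetical marker ranges
segments = [
    ("Seg-A", 0.0, 840.0),
    ("Seg-B", 840.0, 1710.0),
    ("Seg-C", 1710.0, 2870.0),
]

def build_commands(source_wav, segments, bitrate="192k"):
    """Return one ffmpeg command (as an argument list) per segment."""
    commands = []
    for name, start, end in segments:
        commands.append([
            "ffmpeg", "-hide_banner",
            "-ss", str(start), "-to", str(end),    # range to cut
            "-i", source_wav,
            "-codec:a", "libmp3lame", "-b:a", bitrate,
            f"{name}.mp3",
        ])
    return commands

for cmd in build_commands("show_mixdown.wav", segments):
    print(" ".join(cmd))
```

Once the printed commands look right, each one can be run with `subprocess.run(cmd, check=True)`; cutting and LAME encoding this way is similarly quick for a two-hour episode.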

 

Still great software overall but some of the deficiencies (that could be EASILY solved!) are very puzzling.