Hello there,
I made an audio visualizer and used polar coordinates to make it round; however, the beginning and the end don't match because of different volumes. I'll include a picture to make it clearer =). Does anyone know how I could solve this?
Thanks,
Job.
You have to process the audio in an audio app to normalize volumes and/or push specific frequency bands. Also make sure you actually cover the entire spectrum range in the Audio Spectrum effect by tweaking the start, end and offset values.
Mylenium
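If you'd rather script that normalize-and-boost step than do it by hand, here is a minimal sketch of the idea in Python, assuming numpy and scipy are installed; the file names, band edges and gain are placeholder values, not a recommendation:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# Load the copy of the song that will drive the visualizer (file name is a placeholder).
rate, audio = wavfile.read("song.wav")
audio = audio.astype(np.float64)
if audio.ndim == 2:
    audio = audio.mean(axis=1)          # fold stereo to mono for the driver track

audio /= np.max(np.abs(audio))          # peak-normalize to the -1..1 range

# Emphasize the band the Audio Spectrum effect should react to (200-2000 Hz here).
sos = butter(4, [200, 2000], btype="bandpass", fs=rate, output="sos")
band = sosfilt(sos, audio)
driver = audio + 2.0 * band             # boost that band relative to the rest
driver /= np.max(np.abs(driver))        # re-normalize after the boost

wavfile.write("driver.wav", rate, (driver * 32767).astype(np.int16))

The output file would only be used to drive the effect, not as the audio the audience hears.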
Hey, thanks for responding!
Would a compressor in Audition do the job?
Changing the start & end values unfortunately doesn't create the result I want.
Job
The tools you use in Audition to process audio depend entirely on the audio. Using a compressor may be one option, but you also have to consider loudness, equalization, normalization, noise reduction, and a bunch of other tools.
You may want to process the audio you use to drive the audio visualizer entirely differently than the audio you want the audience to hear. I have never used an audio visualizer to do anything but simulate a digital display on a control panel. The voice track that I doctored to drive the display was processed to the point of distortion to get the look I wanted, and it was not part of the final audio mix. Every time I use audio to drive animation I process the track separately.
I can't think of a more boring project for me than covering a song with vibrating graphics, but if I had to do one I would not consider using the sound the audience is going to hear to drive the animation.
Hey Rick, thanks for responding;
I've actually solved it myself simply by dragging the start and end point out further. Looks completely normal now.
Anyway, I really want to thank you for taking the time to help me.
As for my video, it's just a plain song with a visualizer & lyrics to make it more interesting. A very nice alternative to a video clip, in my opinion, and I get plenty of requests from artists to make one for them. I don't think I completely understand you when you say I should use completely different audio to sync my visualizer with. Would you mind explaining that further to me?
Kind regards,
Job
I take the sound that is used in production, make a copy of it and process the heck out of it so that I get the results that I need with the animation.
For example, I had this shot of a bunch of detectives in a van doing a wiretap on a suspect. We recorded the audio of the conversation so we could play it back on the set and the actors could react to it. In the Premiere timeline I had the sound on set from the actors, a little ambient from the recording we played back, some added "room tone" to enhance the feeling that we were in a van, the equalized track of the recorded conversation made to sound like it was coming from the headphones the detectives were wearing, and about 4 effects tracks to sweeten up the sound.
When I exported the shot so I could add a digital display of the audio levels to the equipment panel mockup in the van, I only added a copy of the recorded conversation to the comp. That copy was equalized down to a very narrow frequency range and compressed like crazy in Audition. I added a noise track to give some low-level noise to the sound so I would have a more interesting looking display. That track was then used to drive Audio Spectrum and Sound Keys to generate an animated image that looked like the recording meters were actually working in the van. The track sounded awful, but it gave me the values I needed to get the look I needed for the effect.
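If you wanted to script that kind of driver-track mangling instead of doing it in Audition, a rough sketch of the idea in Python might look like this. It assumes numpy/scipy and a mono float array in the -1..1 range; the band limits, drive amount and noise level are made-up illustration values, not the settings from that project:

import numpy as np
from scipy.signal import butter, sosfilt

def make_driver_track(audio: np.ndarray, rate: int) -> np.ndarray:
    """Turn a clean dialog copy into a 'driver' track for meters/spectrum:
    narrow the band, squash the dynamics, add a little noise so the display
    is never completely still. Not meant to be heard."""
    sos = butter(4, [300, 1200], btype="bandpass", fs=rate, output="sos")
    narrow = sosfilt(sos, audio)                   # keep a narrow, voice-like band
    squashed = np.tanh(8.0 * narrow)               # crude stand-in for heavy compression
    squashed = squashed + 0.02 * np.random.randn(len(squashed))  # low-level noise floor
    return squashed / np.max(np.abs(squashed))     # keep peaks at full scale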
If I were doing a music video that had a strong bass line and vocals, I would probably cut down the bass a lot so the spectrum looked more like the way the audio sounds to my ear. I would not want the audience to hear the music with the bass cut down 40%, because that would spoil the recording, but I also would not want them to see a spectrum with huge spikes in the bass and small fluctuations in the vocals. Does that make sense? If you wanted to do a seamless circle, I would completely cut the low end and high end of the frequencies so they were ZERO.
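To see why zeroing the ends closes the circle, here is a small Python sketch, with numpy/matplotlib assumed and random placeholder spectrum values: tapering the first and last bins to zero makes the radius at 0 and 360 degrees identical, so the ring has no seam.

import numpy as np
import matplotlib.pyplot as plt

bins = 256
spectrum = np.abs(np.random.randn(bins))     # stand-in for one frame of spectrum values

# Taper the first and last bins down to zero, like cutting the extreme low
# and high frequencies out of the driver track.
edge = 16
taper = np.ones(bins)
taper[:edge] = np.linspace(0.0, 1.0, edge)
taper[-edge:] = np.linspace(1.0, 0.0, edge)
spectrum *= taper

theta = np.linspace(0.0, 2.0 * np.pi, bins, endpoint=False)
radius = 1.0 + 0.5 * spectrum                # base circle plus the audio "spikes"

# Because the end bins are zero, the start and end radii match,
# so closing the loop leaves no visible seam.
ax = plt.subplot(projection="polar")
ax.plot(np.append(theta, theta[0]), np.append(radius, radius[0]))
plt.show()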

