When will Adobe Audition support spatial audio for 360-degree video?
What specifically do you want it to do in this regard?
Lol...36 hours ago I would have had to ask the same question...but my 12-year-old son bought a VR headset yesterday (well, I bought it as a reward for a particular achievement at school), and on the best YouTube 360 videos, swivelling your head changes the audio information as well as the images.
I found this guideline for uploading spatial audio to YouTube: Use spatial audio in 360-degree and VR videos - YouTube Help. I imagine there's a more detailed spec somewhere, but this gives a rough idea of what's needed. I'm overloading my addled brain trying to work out if Audition could emulate the files as needed using existing options--but, unless VR and spatial audio turn out to be a flash in the pan, it's probably something the developers will have to provide a shortcut for.
As an aside, I saw an interview a few days back with a friend who happens to be an award-winning sound designer in the West End, on Broadway, and on lots of world tours. He's looking into 3D audio with everyone in the audience wearing Ossic headphones. If it was anyone besides Gareth talking about this, I'd have laughed...but he has the street cred....
That's all very well for specifying the end product, but where do you start to originate the audio? Would a SoundField mic provide suitable output for encoding 360-degree audio, for instance?
I got interested and have been Googling. It appears that, so far, there's no one accepted method. Links I've found vary everywhere from a tetrahedral array to two binaural setups to doing nothing if the video content has one "strong draw" image in one direction.
I'm lucky enough that I may not have to work on this stuff....but I suspect others will!
Bob Howes wrote:
I got interested and have been Googling. It appears that, so far, there's no one accepted method.
Okay, I'll come clean and admit that I already knew that!
It wouldn't surprise me in the slightest if the devs a) keep an eye on it, and b) do nothing about it until there's an agreed 'normal' production process. There are a few problems with just saying 'well, you can encode a SoundField mic to do it', not least that this mic is essentially a first-order device, which means that the image resolution will be based on a cosine directional response, and for many of the intended uses it may be preferable to use at least a second-order response, and possibly more. In simple terms, it's about directional response, and which bit of physics determines image positioning. I realise that the Google spec does go on about first-order uploads at some length, and that it will certainly work to a degree, but this isn't really the be-all and end-all of it by any means; it's a developing field.
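To put a rough number on the 'image resolution' point: the sharper the directional response an order can support, the narrower the beam. A toy sketch (pure Python; the Nth-power cardioid here is just a simple stand-in for the narrower patterns higher orders make possible, not the actual spherical-harmonic maths):

```python
import math

def cardioid_order(theta, order):
    """Gain of an Nth-order cardioid-like pattern at angle theta (radians).

    ((1 + cos(theta)) / 2) ** N: raising the first-order cardioid to a
    power is a crude illustration of how higher-order components let you
    form narrower lobes.
    """
    return ((1.0 + math.cos(theta)) / 2.0) ** order

def half_power_beamwidth_deg(order, step=0.001):
    """Full angle (degrees) between the -3 dB points of the pattern."""
    target = 10 ** (-3 / 20)  # -3 dB as an amplitude ratio
    theta = 0.0
    while cardioid_order(theta, order) > target:
        theta += step
    return math.degrees(2 * theta)

for n in (1, 2, 3):
    print(f"order {n}: ~{half_power_beamwidth_deg(n):.0f} deg wide at -3 dB")
```

The printed beamwidth shrinks steadily as the order goes up, which is the nub of the first-order-versus-higher-order argument.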
The thing about all of this is that it's really about encoding standards. Audition can already handle 4-channel files with no trouble whatsoever, but you have to deal with 4-channel MP4 encoding and especially the metadata handling, and that's the real issue here, not Audition. There are plenty of tools available for next to nothing to achieve the necessary standards, but at the moment, of necessity, they won't all run in Audition as plugins. That's the bad news. The good news is that Audition already supports an extension system that would definitely support some of them, but it's going to require some third-party coding from somebody who's dedicated to the idea, and I dare say the results wouldn't be free to the end user.
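If anyone wants a 4-channel file to experiment with, one is trivial to generate outside Audition. A throwaway sketch using only the Python standard library (a 440 Hz tone on the first channel, silence on the rest; calling the channels W/Y/Z/X is an AmbiX labelling assumption on my part, since a plain WAV header records nothing about ambisonics):

```python
import math
import struct
import wave

SR = 48000          # sample rate
DUR = 0.1           # seconds
N = int(SR * DUR)   # frame count

# Four interleaved 16-bit channels, nominally AmbiX order W, Y, Z, X.
# A 440 Hz tone on W only -- purely a test signal.
frames = bytearray()
for i in range(N):
    w = int(32767 * 0.5 * math.sin(2 * math.pi * 440 * i / SR))
    frames += struct.pack('<4h', w, 0, 0, 0)

with wave.open('test_4ch.wav', 'wb') as f:
    f.setnchannels(4)
    f.setsampwidth(2)      # 16-bit PCM
    f.setframerate(SR)
    f.writeframes(bytes(frames))
```

Audition opens the result as an ordinary 4-channel WAV; the spatial-audio metadata the upload platforms want is exactly the part this file does not contain, which is the point being made above.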
It's a developing field. It needs to settle down and become easy to produce for, or it will turn out to be a flash in the pan. At present, the issues surrounding it (pun intended) are complex, and there are definitely many ways of achieving the required end result. Some simple productions could be achieved as audio within Audition as it stands--probably not the metadata, though.
Now that Audition talks to Adobe Media Encoder, is it something that could happen with a suitable preset within AME?
Since it's specifically audio to go along with video, AME seems the most logical compromise to me.
It would indeed be interesting to hear some Adobe comments on how easy (or possible) this would be using AME, yes.
Can you go into more detail on what you would like to see? We can't really make forward looking statements saying that any particular functionality will come by a certain date. You can, however, tell us what you want and we can consider that feedback along with other priorities.
Charles VW, I'd like to invite you and everyone else who is interested in spatial audio to the Facebook group Spatial Audio in VR. There is a tremendous conglomeration of industry professionals there who, I am sure, will be more than happy to tell you what they would like to see in a spatial audio DAW. I personally am currently held back from developing spatial audio (even though I am recording it!) by the lack of a common standard across playback platforms (Samsung Gear VR, YouTube 360, Oculus Rift and HTC Vive, to name a few). What the industry really needs is for Adobe to lead the way, like you did when you were the first to offer H.265 export from Premiere. So what I would like to see is a way to comfortably ingest A-Format and/or B-Format files, edit them in Audition with a visual SurroundScope option that shows us where each sound is on the vertical plane, and then export to ONE format that is supported by all major playback platforms. My wish list for Christmas 2017!
So what I would like to see is a way to comfortably ingest A-Format and/or B-Format files, edit them in Audition with a visual SurroundScope option that shows us where each sound is on the vertical plane, and then export to ONE format that is supported by all major playback platforms. My wish list for Christmas 2017!
By using the Harpex decoder, you can more or less do this now, certainly as far as the editing is concerned (although the integration isn't brilliant--yet...), but as far as the output format is concerned, I don't think you'll ever satisfy everybody. The best you could hope for is a satisfactory domestic format, but we know full well that just about everybody doing multi-speaker presentations, etc. will laugh derisively. I've been lurking on Sursound for a heck of a long time, and watching the arguments and reading the discussions, often on a daily basis over the years, reveals very clearly that we are no further forward with this than we were in the 1970s!
"we are no further forward with this than we were in the 1970s!"
That's exactly my feeling - everyone tries to push their own format, and those who miss out are the producers and the consumers. However, I simply cannot produce five different versions of a video only to satisfy each platform's spatial audio implementation. So stereo it is until the platforms have reached consensus on what to use.
Has there been any update for this?
I specifically have a 360 video, and we used the Zoom H2n with updated firmware to record the spatialized audio. My question is: how do I mix in the VO that we recorded in a studio? I want the user to be able to look in any direction and hear the natural audio but when the speaker comes on it sounds like they are omnipresent or the audio is coming from all around...
Has there been any update for this?
Not yet, and because of Adobe policy regarding advance announcements (see Charles' comments above), we won't hear anything until it (whatever 'it' is) has happened, I'm afraid.
Yes, I saw that when I first read the post... but it has been nine months since the thread was started, and Charles' reply was in August 2016... I was hoping there would be some updated news.
Adobe... Dragging their feet as usual... ugh
I want the user to be able to look in any direction and hear the natural audio but when the speaker comes on it sounds like they are omnipresent or the audio is coming from all around...
What you're describing is what I've often heard described as a head-locked audio track. Whether this is feasible or not depends on your downstream destination format (e.g. Facebook, YouTube, SamsungVR, etc).
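For YouTube specifically, my understanding of their spec is that the head-locked material rides as an extra stereo pair after the four ambisonic channels, giving six channels in total (do verify this against their current help page before committing to it). A minimal sketch of that layout in pure Python, with toy numbers:

```python
# Assumed 6-channel layout for "ambisonics + head-locked stereo":
# channels 1-4 are the first-order ambisonic bed (AmbiX W, Y, Z, X),
# channels 5-6 are an ordinary stereo pair that does NOT rotate with
# the viewer's head -- which is where a narration/VO track belongs if
# it should sound the same whichever way the viewer looks.

def interleave_with_headlock(bed, vo_left, vo_right):
    """bed: list of four equal-length sample lists (W, Y, Z, X).
    Returns frames as 6-tuples: (W, Y, Z, X, headlock-L, headlock-R)."""
    w, y, z, x = bed
    return list(zip(w, y, z, x, vo_left, vo_right))

# Toy data: three frames of bed plus a (dual-mono) VO pair.
bed = [[0.1, 0.2, 0.3], [0.0] * 3, [0.0] * 3, [0.05] * 3]
vo = [0.4, 0.4, 0.4]
frames = interleave_with_headlock(bed, vo, vo)
print(frames[0])  # (0.1, 0.0, 0.0, 0.05, 0.4, 0.4)
```

The mixing decision (VO level against the ambient bed) then happens before the interleave, on the VO samples themselves.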
Right, but how do you do this in Audition? I know YouTube accepts the format the H2n records ambisonics in, so I just want to combine my VO and location audio. I have seen other tools for this if you already have the levels how you want them, but I wanted to do this in a mixer so I could control the level of the VO against the ambience.
Have you seen the first-order plugins for Premiere Pro? Is there something like that coming for Audition? Guess I'll just keep working in Premiere Pro.
Have you seen the first-order plugins for Premiere Pro?
Yes, I worked on them.
Guess I'll just keep working in Premiere Pro.
I could have a better Audition recommendation if I knew what your downstream requirements were. This information also helps us know what we should put inside Audition in regards to this functionality.
When can we make ambisonic, B-format, spatial audio from scratch using Audition? I have tracks recorded to mix with a 360 video I shot. I do not have any device that can record in B-format. I want to be able to combine the tracks into an ambisonic file to marry up with the video.
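For what it's worth, the first-order encode itself is only four multiplies per sample, so it can be scripted while we wait for a native feature. A minimal sketch, assuming the AmbiX (ACN channel order, SN3D normalisation) convention that I believe YouTube expects; the angle conventions here are my assumption, so check them against whatever player you target:

```python
import math

def encode_ambix_fo(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample to first-order AmbiX (channel order
    W, Y, Z, X; SN3D). Azimuth counter-clockwise from straight ahead,
    elevation positive upward -- verify against your playback chain."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    x = sample * math.cos(az) * math.cos(el)
    return (w, y, z, x)

def mix_into_bed(bed, mono, azimuth_deg, elevation_deg):
    """Sum an encoded mono track into an existing 4-channel B-format bed.

    bed: list of four equal-length sample lists (W, Y, Z, X).
    mono: list of mono samples, same length as each bed channel.
    """
    for i, s in enumerate(mono):
        enc = encode_ambix_fo(s, azimuth_deg, elevation_deg)
        for ch in range(4):
            bed[ch][i] += enc[ch]
    return bed
```

A source panned straight ahead (azimuth 0, elevation 0) lands entirely in W and X, which is a quick sanity check when auditioning the result through an ambisonic decoder.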
Not that it helps this discussion at all, but there was a good discussion on the radio here about how VR cinema could work. Basically it degenerated into an argument among producers about how they want it to work...and there were some quite different ideas about what they wanted.
Basically it degenerated into an argument among producers about how they want it to work...and there were some quite different ideas about what they wanted.
This is why, 50 years later, we've got precisely nowhere!