
KarstenJensen (Known Participant)

More AI features in Adobe Audition for sound editing (Open for Voting)

1. AI-Powered Features

Artificial intelligence represents the biggest near-term opportunity for Audition to differentiate itself. The following features would automate time-consuming manual tasks and deliver meaningful results for podcast producers, voiceover artists, broadcast engineers, and music editors alike.

1.1 Semantic noise removal
Instead of manually dialing in noise reduction thresholds, AI should identify and classify noise types — HVAC, traffic, room tone, electrical hum — and apply context-aware profiles automatically. The interface should expose a single adjustable 'aggressiveness' slider rather than a bank of technical parameters. This makes the tool accessible to non-engineers without sacrificing precision for advanced users.
● High priority

1.2 AI-powered dialogue cleanup
Automatically detect and remove mouth clicks, breath sounds, mic pops, and plosives without affecting the surrounding audio. The current workflow requires manual identification and repair, which is extremely time-consuming. The ideal implementation is one-click processing with per-detection override — allowing editors to review and restore anything the AI incorrectly flagged.
● High priority

1.3 Smart silence detection and pacing
Existing silence detection is based purely on audio level. An AI-based replacement should understand natural speech rhythm — detecting pauses that feel long relative to the speaker's cadence — and allow trimming or shortening them without making the result sound unnatural. Ideally, the system should learn from an editor's decisions over time to improve per-project.
● High priority

1.4 Automatic room acoustics matching
When audio from different recording environments is combined — remote interviews, multi-location podcasts, overdubbed narration — the acoustic mismatch is immediately noticeable. AI should analyze the room tone and impulse response of each track and normalize them to a shared acoustic profile.
This alone would save hours on professional podcast production.
● High priority

1.5 Speaker diarization and track separation
Automatically detect and separate multiple speakers in a single recording, assign each speaker to an individual track, and label them by identity. This is already available in tools like Descript and is increasingly expected in professional audio software. It is particularly valuable in podcast, interview, and conference recording workflows.
● High priority

1.6 Stem separation
Integrate machine-learning-based stem separation — vocals, instruments, drums, bass — directly into Audition. This would allow editors to isolate or remove individual components of mixed audio without requiring third-party plugins like iZotope RX or Spleeter. Use cases include music podcast production, archival restoration, and sound design.
● Medium priority

1.7 AI mastering assistant
Analyze a finished mix and suggest or automatically apply mastering corrections: dynamic range adjustments, loudness normalization to LUFS targets, and EQ balance corrections. The system should offer platform presets (podcast, broadcast, Spotify, Apple Music, Netflix) and present its suggestions transparently before committing changes.
● Medium priority

1.8 Voice cloning for gap-fill
Using an existing recording of a speaker as a reference, generate short corrective phrases or fill missing words. This is invaluable for podcast editing and voiceover work where re-recording a session is impractical. The feature must include ethical safeguards: audible watermarking options, disclosure metadata, and explicit user consent workflows.
● Medium priority

1.9 Natural language editing commands
Allow editors to type or speak instructions — 'remove all background noise from 00:30 to 01:15' or 'bring up the second speaker 2 dB in the chorus' — and have AI execute them on the timeline.
This lowers the barrier for less experienced users and accelerates repetitive operations for professionals.
● Medium priority

2. Core Editing Capabilities

Several fundamental editing capabilities are either missing or underdeveloped in Audition compared to competing tools. Addressing these would significantly improve the experience for experienced editors working on complex projects.

2.1 Fully non-destructive effects chain
All effects should be applied as a live, non-destructive chain that can be reordered, bypassed, and removed at any point in the project lifecycle — similar to a plugin rack in a DAW. Currently, many operations in Audition's waveform editor are baked into the audio file. This should be resolved by moving to a fully non-destructive architecture throughout the application.
● High priority

2.2 Waveform-linked transcript editing
An automatically generated, editable transcript that is directly linked to the waveform. Deleting or rearranging words in the transcript removes or moves the corresponding audio. This feature — available in tools like Descript and Hindenburg — has become a standard expectation for podcast and spoken-word production workflows.
● High priority

2.3 Spectral repair with content-aware fill
The spectral editor should be expanded with multi-selection, lasso tools, and frequency-locked brushes. More significantly, it should include AI-powered content-aware fill — analogous to Photoshop's — that seamlessly removes tonal artifacts such as sirens, ringing phones, or microphone interference by reconstructing the surrounding spectral content.
● High priority

2.4 Clip gain automation
Clip-level gain should be independently automatable with its own lane on the timeline, separate from track volume automation. This is a standard feature in Logic Pro and Pro Tools that is conspicuously absent in Audition, and its absence forces inefficient workarounds for fine-grained level control.
● Medium priority

2.5 Improved punch-in and comping
Punch-in recording should include a configurable pre-roll and post-roll buffer by default. Audition should also offer automatic take comparison — using pitch and timing analysis to identify the best take — and allow layered comping directly in the waveform view, rather than requiring editors to work around the current limited implementation.
● Medium priority

2.6 Improved loudness metering
LUFS, LRA, and true peak meters should be permanently docked, always-visible panels — not buried in the Diagnostics or Amplitude Statistics panels. They should include selectable platform presets for Spotify, Apple Podcasts, Netflix, EBU R128, and ATSC A/85, and update in real time during playback.
● High priority

3. UX & Workflow Improvements

Several workflow and usability issues slow down everyday production work. The following improvements would reduce friction for recurring tasks and bring Audition closer to the efficiency expected of a professional-grade tool.

3.1 Customizable macro system
Users should be able to record, name, and assign keyboard shortcuts to sequences of actions. This is critical for repetitive podcast production tasks — for example: apply loudness normalization, add fade in/out, export to MP3 at 192 kbps. A macro system would make Audition significantly more competitive with tools like Adobe Premiere Pro's own action recording features.
● High priority

3.2 Batch processing overhaul
The batch processor requires a full redesign. It should support conditional logic, allow drag-and-drop reordering of processing steps, show a preview before committing changes, and present a modern navigable interface. The current dialog is functionally limited and visually outdated compared to even basic audio converters.
● Medium priority

3.3 Project templates with full configuration
Users should be able to save complete project configurations — track layouts, routing, effects chains, loudness targets, export presets — as named templates. Starting a new podcast episode, voiceover session, or radio spot should take seconds, not minutes of setup.
● Medium priority

3.4 Plugin management improvements
A centralized plugin manager with category tagging, favorites, A/B comparison while auditioning, and the ability to save and name effect chains as reusable presets available across all projects. Currently, managing third-party VST plugins in Audition is cumbersome and lacks the organizational tools that professional users require.
● Medium priority

3.5 Cloud-based collaboration and version history
Cloud-synced projects with version history, the ability to add time-coded comments on clips or regions, and a shareable review link — similar to Frame.io's model, but built specifically for audio. This is particularly valuable for remote podcast production teams, where client approval workflows are currently handled outside the application entirely.
● Nice to have

Summary

The table below provides a quick-reference overview of all proposed improvements by section and priority.
Feature                         Category        Priority
Semantic noise removal          AI features     High
AI dialogue cleanup             AI features     High
Smart silence detection         AI features     High
Room acoustics matching         AI features     High
Speaker diarization             AI features     High
Stem separation                 AI features     Medium
AI mastering assistant          AI features     Medium
Voice cloning / gap-fill        AI features     Medium
Natural language editing        AI features     Medium
Non-destructive effects chain   Core editing    High
Transcript-linked editing       Core editing    High
Spectral repair + AI fill       Core editing    High
Improved loudness metering      Core editing    High
Clip gain automation            Core editing    Medium
Punch-in and comping            Core editing    Medium
Macro system                    UX & workflow   High
Batch processing overhaul       UX & workflow   Medium
Project templates               UX & workflow   Medium
Plugin management               UX & workflow   Medium
Cloud collaboration             UX & workflow   Nice to have

End of document
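Section 1.3 above notes that existing silence detection is purely level-based. For contrast, here is a minimal sketch of what such a level-only detector looks like: per-window RMS compared against a fixed threshold (window size and threshold values are illustrative, not Audition's actual parameters).

```python
import math

def rms_db(window):
    """RMS level of a window of samples, in dBFS (floored at -120 dB)."""
    rms = math.sqrt(sum(s * s for s in window) / len(window))
    return 20 * math.log10(rms) if rms > 0 else -120.0

def detect_silences(samples, window_size=1024, threshold_db=-50.0):
    """Return (start, end) sample ranges whose RMS falls below the threshold.

    This is exactly the level-only approach the post criticizes: it knows
    nothing about speech cadence, so a dramatic pause and dead air look
    identical to it.
    """
    ranges, start = [], None
    for i in range(0, len(samples) - window_size + 1, window_size):
        quiet = rms_db(samples[i:i + window_size]) < threshold_db
        if quiet and start is None:
            start = i
        elif not quiet and start is not None:
            ranges.append((start, i))
            start = None
    if start is not None:
        ranges.append((start, len(samples)))
    return ranges
```

A cadence-aware replacement, as requested in 1.3, would additionally compare each detected pause against the speaker's typical inter-phrase gap before deciding whether it is "too long".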
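The platform presets requested in 1.7 and 2.6 boil down to a target integrated loudness per service; the gain needed to normalize a program is simply the difference between the target and the measured LUFS. A minimal sketch follows; the target values are commonly cited figures and should be verified against each platform's current published spec before use.

```python
# Commonly cited integrated-loudness targets, in LUFS (assumed values;
# verify against each platform's current specification).
PLATFORM_TARGETS_LUFS = {
    "Spotify": -14.0,
    "Apple Podcasts": -16.0,
    "EBU R128": -23.0,
    "ATSC A/85": -24.0,
}

def normalization_gain_db(measured_lufs: float, platform: str) -> float:
    """Gain (dB) needed to bring a program's integrated loudness to the target."""
    return PLATFORM_TARGETS_LUFS[platform] - measured_lufs
```

For example, a podcast measuring -19.3 LUFS would need roughly +3.3 dB of gain to hit the assumed Apple Podcasts target; a true-peak check would still be required after applying the gain.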

Adobe! Cooperation Between Adobe CC and Wine Development Team? (Open for Voting)

I am very hopeful to get a response from Adobe to this suggestion, so please hear me out.

It goes without saying, and has likely been said a million times before, that many people wish the Adobe CC suite were available on Linux. I understand Adobe's position on this and I'm not seeking to change it. Enough people have asked in the past and there's no point asking again. Adobe sees no financial motivation to support a platform with such a small userbase, and that's a perfectly reasonable position for a financially motivated company to take.

The only problem is that the platform, Linux, has such a small userbase partly because Adobe doesn't support it. The Adobe CC suite is so essential to so many people that without it, a platform is dead in the water. Linux is therefore stuck in a catch-22: it can't grow its userbase enough to earn a port of the Adobe CC suite, because it doesn't have a port of Adobe CC. That leaves Linux up a creek without a paddle.

What I am suggesting is an alternative solution that could satisfy all parties: one that allows Linux users everywhere to pay for and use the Adobe CC suite on Linux (more customers is a good thing, right, Adobe?), and that requires no development work from Adobe and no obligation to support Linux as a platform in any official capacity.

Wine is a software tool Adobe's developers have no doubt heard of, as it's currently the only way Adobe CC users can run Adobe CC on Linux. It's the tool that also powers Valve's Proton, and it has improved so much in recent times that it's now used to run Windows games on Linux. It's incredible software and gets better all the time; the latest version is almost capable of running the Adobe CC suite, but not quite there yet.
The Creative Cloud desktop application itself, which manages installation of the Adobe CC applications, cannot run at all. However, many of the CC applications can *mostly* run with only minor issues, if the installer problem is worked around using hacky methods such as copying installation files over from Windows. The Wine developers are always hard at work trying to improve Wine and get Adobe CC running, but it's a huge task. What would really help Wine's development team is something they can't do themselves, but which Adobe could do with almost no effort:

> A look at Adobe CC's source code.

"Whoa now!" some of you might be saying. "Why would Adobe let anyone view the source code of Adobe CC?"

Here's what I'm proposing: could Adobe offer the Wine developers the chance to sign NDAs preventing them from sharing anything they are shown, and in exchange gain temporary access to the *relevant sections* of Adobe CC's source code? They could review the aspects of the code that are causing issues with Wine, and determine the root cause of why certain things don't function under Wine as they should. Wine's developers could even suggest *minor* changes that would ensure compatibility with Wine.

No actual development effort would be required from Adobe: no funding, no support, no testing. But in return for just a tiny bit of cooperation from Adobe, the entire suite could be made to function on Linux with almost no effort. This idea is similar to something Valve is currently doing with Easy Anti-Cheat to get that software working with Proton; they are in talks right now and dealing with NDA issues as well.

I do not represent Wine's development team, but if Adobe is open to considering this idea, please respond and indicate so. Perhaps Wine's team and Adobe's team could then have a chat and see if some kind of agreement could be worked out.

shikamu (Known Participant)

P: Add mask defringe to global edits (Open for Voting)

I asked a question about the difference between the Defringe tools in masks vs. the global controls, but no one knew the answer, so I am now requesting a new feature: please add the mask version of Defringe to the global edit controls. It works much better than the current global Defringe, is much easier to use, and avoids the bad side effects the global one often has. (Sure, keep the old one too for those who want to use it.)

https://community.adobe.com/questions-675/defringe-mask-vs-global-1556459

In a mask we can use the Defringe control and it works well. Sometimes I create a mask over an area using a brush or gradient solely to use the Defringe tool. Why is Defringe in a mask so easy to use, effective, and free of the bad side effects of the complex global Defringe?

The global Defringe tool rarely works well and often doesn't work at all. It is common to get a message with the eyedropper that the area is too small, or that it can't detect anything despite the fringe being clearly visible. When it does work, you have to play around with the four sliders and almost never get a satisfactory result. On top of that, the real killer is that if anything else in the photo has a similar color (clothing, a car, etc.), parts of that can also be changed to monochrome.

My current kludge, when the global Defringe fails, is to create a Linear Gradient mask covering the whole image just so I can use the mask Defringe, which gives much better results. After 15 years of using LrC I have found the global tool to be truly hit and miss, very often changing completely unrelated, non-fringe areas to monochrome, while the very simple Defringe tool in a mask has worked easily every time I have used it. Why can't it also be available in the global area?

kanedidi (Participant)

OTIO export/import does not preserve speed keyframes (time remapping keyframes) in Premiere Pro (Open for Voting)

Hello Adobe Premiere Pro team,

I would like to report an issue with OTIO export/import in Premiere Pro. I tested OTIO round-trip behavior using a sequence that contains both transform keyframes and speed keyframes (time remapping keyframes).

What I tested

- I created clips in Premiere Pro with:
  - Transform keyframes (position / scale / rotation)
  - Speed keyframes (time remapping / variable speed)
- I exported the sequence as OTIO
- Then I re-imported the exported OTIO back into Premiere Pro

Result

- Transform keyframes were preserved correctly
- However, speed keyframes were not preserved
- In other words, time remapping keyframes were lost during the OTIO export/import round-trip

Expected result

- OTIO export/import should preserve speed keyframes, just like transform keyframes
- At minimum, Premiere Pro should be able to round-trip its own OTIO export without losing time remapping keyframe information

Actual result

- Edit order, cuts, and some other basic information are preserved
- Transform keyframes are preserved
- But speed keyframes are not restored after re-import

Why this matters

OTIO is becoming increasingly important for interchange between editing and finishing systems. Many users are expecting reliable workflow exchange between applications such as:

- Adobe Premiere Pro
- DaVinci Resolve
- Avid Media Composer

At the moment, transform animation seems to be exported, but time remapping keyframes do not appear to survive the OTIO round-trip in Premiere Pro. This makes high-precision OTIO-based workflows difficult for real production use. I strongly hope Adobe will improve this behavior and support highly accurate OTIO interchange between Premiere Pro, DaVinci Resolve (Blackmagic Design), and Media Composer (Avid). Thank you for your work on OTIO support, and I hope this can be improved in a future update.
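Since .otio files are plain JSON, one quick way to tell whether the problem is on the export side or the import side is to scan the exported file for time-warp effect schemas (OTIO serializes time remapping as effects such as LinearTimeWarp) before ever re-importing it. A minimal sketch, using only the standard library; the file name in the usage note is hypothetical:

```python
import json

# OTIO time-remapping effects carry an "OTIO_SCHEMA" field whose name
# (before the version suffix, e.g. "LinearTimeWarp.1") is one of these.
TIME_EFFECT_SCHEMAS = ("TimeEffect", "LinearTimeWarp", "FreezeFrame")

def find_time_effects(node, path="timeline"):
    """Recursively walk parsed .otio JSON and collect time-warp effects."""
    found = []
    if isinstance(node, dict):
        schema = node.get("OTIO_SCHEMA", "")
        if schema.split(".")[0] in TIME_EFFECT_SCHEMAS:
            found.append((path, schema))
        for key, value in node.items():
            found.extend(find_time_effects(value, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            found.extend(find_time_effects(item, f"{path}[{i}]"))
    return found

# Usage (hypothetical file name):
# with open("exported_sequence.otio") as f:
#     effects = find_time_effects(json.load(f))
# if not effects:
#     print("No time remapping effects present in the export itself")
```

If the export contains no time-warp effects at all, the data is being dropped on export rather than lost on re-import, which narrows down where the fix belongs.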