Hi. Currently I run everything on a single 6-core laptop: CH, AU (Adobe Audition for noise filtering), Virtual Cable (to pipe the audio), OBS, and Reflector 3 (to capture the AirPlay signal from an iPad). This streams to YouTube. CH and Reflector consume most of the CPU. It's a little tricky to get the audio to sync with the video: I send it to CH for lip sync and to AU for noise reduction and pitch bending.
I was wondering if it made more sense to run CH on a separate machine and pipe the video signal via NDI between machines? That would distribute the CPU load.
The audio seems tricky, however. I don't think CH exports the audio over NDI, does it? So I would have to send the audio to both machines myself?
Anyone stream in this way? Any advice / recommendations?
CH will play back audio during Play and Record operations, but it will not route input audio to the output. We also don't send audio over NDI, so if you do use multiple machines, you will have to manage the audio manually.
Try setting the frame rate to 60 fps. What's the reported frame rate in the Scene panel? If you're well over your broadcast frame rate, then I don't think you would benefit from an extra machine. I tend to keep everything on a single machine if I have a single character.
Normally we want to delay the audio a bit to make up for the short video latency when processing audio into visemes, but your AU processing is probably introducing an audio latency, so you may need a video delay instead to get everything synchronized properly.
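To make the trade-off concrete, here's a rough sketch of the arithmetic in Python. The latency numbers are invented placeholders, not measurements from your chain; you would need to measure your own setup before dialing anything in:

```python
# Sketch of the delay trade-off described above. Both numbers are
# assumptions (placeholders); measure your own chain's latencies.
VIDEO_LATENCY_MS = 100   # assumed: CH viseme/render latency
AUDIO_LATENCY_MS = 250   # assumed: AU noise reduction + Virtual Cable

net_ms = AUDIO_LATENCY_MS - VIDEO_LATENCY_MS
if net_ms > 0:
    # Audio lags the video: hold the video back instead,
    # e.g. with an OBS Render Delay filter on the video source.
    print(f"delay video by {net_ms} ms")
else:
    # Video lags the audio: the usual case, where you delay the audio,
    # e.g. via the Sync Offset in OBS Advanced Audio Properties.
    print(f"delay audio by {-net_ms} ms")
```

Whichever stream arrives first is the one that needs the delay, which is why heavy audio processing can flip the usual advice.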
What signal are you sending from the iPad?
Thanks,
Dan Ramirez
CH QA
I will check the fps next time, but I would be surprised if it's anywhere near 60. I normally get more like 7 to 12 fps with my puppet. It was drawn using Adobe Draw on an iPad, and I have not used “Simplify Paths” everywhere, so it's probably not that efficient artwork.
Yes, AU with noise reduction does introduce an audio delay. I use OBS to try to introduce a consistent video delay. It's all a bit finicky. It would be a bit easier if the NDI video output included the audio; the whole setup would be a little less complicated (Mic -> AU -> Virtual Cable -> CH -> OBS, instead of both Mic -> AU -> Virtual Cable -> CH and Mic -> AU -> Virtual Cable -> OBS with tweaked delays to keep them in sync).
It does work, but sometimes when I come back I have to re-learn all the setup steps. For example, it's a USB mic, and if I plug it into a different USB port it comes up as a new microphone and I have to fiddle with AU to get it all right again. And getting the audio delay right is hard, because the OBS monitoring can get a bit laggy as well. The audio going over NDI would avoid the lag problem (CH would keep the audio and video in sync).
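One thing that might take the guesswork out of tuning the delay (a sketch, not part of my actual setup): clap once at the start, record the same take both as the raw mic file and as a short OBS test recording, then cross-correlate the two audio files to measure the lag. This assumes numpy, scipy, and soundfile are installed, and the file names are made up:

```python
# Hypothetical helper: estimate how far one recording lags another
# by finding the cross-correlation peak of a shared clap.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

def offset_ms(reference_wav: str, delayed_wav: str) -> float:
    ref, sr_ref = sf.read(reference_wav)
    dly, sr_dly = sf.read(delayed_wav)
    assert sr_ref == sr_dly, "resample one file first if the rates differ"
    # Mix to mono so the correlation runs on a single channel.
    if ref.ndim > 1:
        ref = ref.mean(axis=1)
    if dly.ndim > 1:
        dly = dly.mean(axis=1)
    # The correlation peak gives the lag of one clap against the
    # other, in samples; convert that to milliseconds.
    corr = correlate(dly, ref, mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return 1000.0 * lag / sr_ref

# e.g. offset_ms("mic_clap.wav", "obs_clap.wav") -> +240.0 would mean
# the OBS copy runs about 240 ms behind the mic recording.
```

The result would be the number to plug into whatever delay you're tweaking, rather than adjusting by ear against laggy monitoring.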
What signal from the iPad? Screen recordings (see “Extra Ordinary Star Trek Fleet Command” on YouTube). But I am giving up on live streams now and pre-recording instead. The live streams were much easier, but each time I came back I had problems: audio levels not right, having to merge and sync with the iPad screen, etc. Roughly a 50% failure rate. So if I could ask for a feature now, it would be CH showing a video *file* behind the puppet inside CH (and merging the result). That would make it much easier to sync the puppet with the video. For example, I was trying to have the puppet's hands tap buttons on screen, but I ended up using After Effects because it was too hard to guess the hand position when I could not see the video behind the puppet (they were in separate windows).
For example, what I do now is record the iPad to a video file using the built-in iPad screen recorder. At the same time, I record the live audio in AU for the voice track. I then do the animation afterwards, moving the puppet around etc. and dropping in the audio file for lip sync. Finally I composite the background video (the game) and the exported CH recording (which includes the audio track) in Premiere Pro. It takes a lot longer, but the result can be reviewed safely, so the quality is better.
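For what it's worth, that final composite step could in principle be scripted. Below is an untested sketch using ffmpeg from Python instead of Premiere Pro (my substitution, not what I actually use). It assumes the CH recording was exported with an alpha channel (e.g. ProRes 4444) along with its audio track, and the file names are made up:

```python
# Sketch only: an ffmpeg stand-in for the Premiere Pro composite step.
# Assumes puppet.mov has an alpha channel and carries the voice track.
import subprocess

def composite(background: str, puppet: str, out: str) -> None:
    subprocess.run([
        "ffmpeg",
        "-i", background,              # iPad screen recording (the game)
        "-i", puppet,                  # CH export with alpha + audio
        "-filter_complex", "[0:v][1:v]overlay=shortest=1[v]",
        "-map", "[v]",                 # the composited video
        "-map", "1:a",                 # keep the puppet's audio track
        "-c:v", "libx264", "-c:a", "aac",
        out,
    ], check=True)

# e.g. composite("game.mp4", "puppet.mov", "episode.mp4")
```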