First, AE is not a video editing app. You should not be editing movies in AE; use Premiere Pro. A 12-minute comp in AE is a little nuts. You should only be using After Effects to create motion graphics or visual effects that you cannot do in an NLE like Premiere Pro. More than 90% of my comps are 5 seconds or less. Last year they were 7 seconds or less. 99% of them are only one shot. The shots are shorter because the average length of a shot in a movie or a commercial gets a little shorter every year. Back in the '50s, it was not uncommon to have a shot in a movie that was four or five minutes long. I just read an article that said Michael Bay's average shot is 3 seconds long.
Let's talk about your rendering question. What you are doing is creating a Digital Intermediate, or DI, and those should always be visually lossless frame-based files. You just have to live with the file sizes. The format used in the industry varies between production companies, but they always pick a codec (often called the Mezzanine Format) that is visually lossless and intraframe compressed.
You have rendered an interframe compressed video file. MP4 - H.264 compressed movies, at the very best settings, only have one frame out of three that is a compressed version of the original comp. The rest of the frames are predicted: the position, color, and everything else in the frame is based on a best guess instead of the original pixels. It's amazing that it even works. There are always color artifacts, compression artifacts, and motion artifacts in an MP4 file. The more times you re-compress the file, the worse those artifacts become.
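To make the "worse every time" point concrete, here is a back-of-the-envelope sketch of generation loss. The 2% loss per re-encode is a made-up illustration, not a measured value for any codec; the point is that losses multiply across generations rather than add.

```python
# Toy model of generation loss from repeated lossy re-encodes.
# The per-generation loss figure is illustrative only.

def fidelity_after(generations, loss_per_generation=0.02):
    """Fraction of original quality left after repeated re-encodes."""
    return (1 - loss_per_generation) ** generations

for n in (1, 3, 5, 10):
    print(f"{n} re-encodes: {fidelity_after(n):.1%} of original quality")
```

Render once from a visually lossless DI and the delivery MP4 is a first-generation copy; re-render an MP4 from an MP4 and you are compounding that loss.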
Here's how IPB compression works. The first frame is an I-frame (keyframe). That means the original colors are compressed and the luminance values are averaged. Then comes the P-frame, the predicted frame: the change from the I-frame is calculated, so only the luminance values that the calculations think are changing are included in the frame, and color is averaged again. This results in a huge decrease in the amount of data for that frame. Then comes the B-frame (bidirectionally predicted frame), which looks at the frames before and after it and tries to calculate the changes. Then the process starts over. Like I said, it is amazing that it works as well as it does, but you are always going to get a significant, though almost imperceptible, data loss from the original pixels, and that loss compounds every time you decompress the footage and render it again as an MP4.
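If it helps to visualize the pattern, here is a toy sketch of how I-, P-, and B-frames are laid out in a group of pictures (GOP). This is a simplification for intuition only; real encoders schedule frames in more complicated ways, and the GOP size and B-frame count below are just example numbers.

```python
# Toy illustration of an IPB group-of-pictures (GOP) layout.
# Not how a real encoder works internally; for intuition only.

def gop_pattern(gop_size, b_frames):
    """Return display-order frame types for one GOP.

    gop_size: total frames per GOP (always starts with one I-frame)
    b_frames: number of B-frames between anchor frames (I or P)
    """
    pattern = ["I"]
    while len(pattern) < gop_size:
        # Insert up to b_frames B-frames, then the P-frame that
        # the B-frames are bidirectionally predicted against.
        run = min(b_frames, gop_size - len(pattern) - 1)
        pattern.extend(["B"] * run)
        if len(pattern) < gop_size:
            pattern.append("P")
    return pattern

print(gop_pattern(12, 2))
# ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'P']
```

Only that first "I" in each GOP is built from the original pixels; every "P" and "B" is a prediction, which is why interframe files fall apart when you re-compress them over and over.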
Digital intermediates are usually rendered using the Render Queue and the Output Module. The Lossless preset is just that: a frame-based lossless file that is intended for use as a digital intermediate or an archival copy of the original comp. CineForm (from GoPro) is another visually lossless 10-bit option. If you need alpha channel info, you can either choose the Lossless with Alpha preset in the Output Module or take some time learning about video compression and color.
If you are hard up for storage space, render to an external drive. You can get a good 2 TB external drive for less than $75, or you could get a really good external 1 TB SSD for about $120.
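To put "live with the file sizes" in perspective, here is some rough storage math for a 12-minute 1080p intermediate. The bitrates are ballpark published figures (roughly 220 Mbps for ProRes 422 HQ at 1080p30, for example); treat the results as estimates, not exact file sizes.

```python
# Rough storage estimates for a 12-minute 1080p render.
# Bitrates are approximate, round-number examples.

def size_gb(bitrate_mbps, minutes):
    """Approximate file size in GB for a given average bitrate."""
    return bitrate_mbps / 8 * minutes * 60 / 1000

for name, mbps in [("H.264 delivery (~15 Mbps)", 15),
                   ("ProRes 422 HQ (~220 Mbps)", 220),
                   ("Uncompressed 10-bit 4:2:2 (~1244 Mbps)", 1244)]:
    print(f"{name}: {size_gb(mbps, 12):.1f} GB for 12 minutes")
```

So a visually lossless 12-minute DI lands somewhere around 20 GB, which is exactly why a cheap 2 TB external drive solves the problem.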
Rendering your DIs and editing them in Premiere Pro is also going to take less time and be more reliable than rendering a 12-minute video in AE. Unless you have a single effect that lasts for the entire 12 minutes, I would seriously consider breaking up your project into more manageable sections. I've done a lot of slideshow projects using AE, but all of my comps were only one phrase or sentence long, and almost all comps were rendered and the final edit and sound mix done in Premiere Pro. It's a much more efficient way to work.
Hi Rick. I can’t thank you enough for your very thoughtful and helpful reply. You are definitely right, I should be using Premiere Pro as my editing app. I’ve totally resisted learning Premiere Pro, and your reply is an important wake-up call.