I'm looking to get a bit more specific with colour grading, so I'd like to learn DaVinci Resolve while still doing all my editing in Premiere.
I know that when exporting from Premiere, I have to export the video as a "clean out" (no effects, etc.). From my research, I hear the best, least complicated workflow between Premiere and DaVinci is to take the transcoded footage back to its camera original, export it at its highest resolution from Premiere Pro, and then bring it into DaVinci.
I shoot on the Sony A7III, so my footage is only 8-bit, but most of the time I record externally via my Ninja V.
The record settings on my Ninja V are as follows:
- Codec: DNxHD
- Compression: 145
On my Sony I typically shoot in HLG 2 or HLG 3 picture profile. The footage from my Ninja V does take a lot of space (on average 1-3 GB for a clip). All the files come in .MOV format.
Before editing in Premiere, I proxy my files to make editing a bit smoother.
This question may be a little dumb, but is there a significant difference between transcoding and creating proxies? Are they essentially the same thing under different names? Is one better than the other? The way I create my proxies: before importing my files I select the ingest setting, and once they're imported, Adobe Media Encoder automatically opens and starts creating the proxies.
So when it's said to "take transcoded footage and connect it back to the camera original", would I, with my current workflow, just turn the proxy toggle off and then export? Additionally, it was recommended that if shooting on the A7III or any other 8-bit camera, I should export to ProRes 422 (regular), since there's no need for 4444 or HQ with 8-bit footage.
I'm looking to understand the technicalities of these workflows a bit better, as I sometimes find all these different codecs, colour profiles, bit rates, etc. and their relationships to one another a little confusing, or just a lot of information to take in. Any advice is welcome, and any pointers to good, proper resources for dummies like me would also be great!
The "old standard" process for taking a timeline from an NLE into Resolve (or Baselight/whatever grading app) is to export the sequence as either an XML or EDL file. You import that file into Resolve along with the original media clips, and use the XML/EDL to 'conform' the media to the timeline as it was 'cut' in the NLE.
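For reference, an EDL is just a plain-text event list in the old CMX3600 style; this is roughly what the grading app reads (the reel names and timecodes below are made up for illustration):

```
TITLE: MY_SEQUENCE
FCM: NON-DROP FRAME

001  A001C001 V     C        01:00:10:00 01:00:14:00 00:00:00:00 00:00:04:00
002  A002C003 V     C        02:12:00:00 02:12:03:12 00:00:04:00 00:00:07:12
```

Each event says: take this reel from source-in to source-out and place it on the record timeline from record-in to record-out. Resolve matches the reel names and timecodes against the imported media to rebuild the cut. (XML carries more information, such as speed changes and scaling, which is part of why it's usually preferred.)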
There are things that can be an issue with this, such as time-remaps of course. And it's normally considered better to use "Set to ... " rather than "Scale to ... " for sizing preferences.
Also, most colorists want a low-res export of the sequence from Premiere, probably in H.264. They put that on V2, with the main program on V1, so they can easily check that all cuts are correct, the right bits of clips were chosen, scaling is proper, all of that. Because there's typically some futzing to get everything conformed properly.
This process has the advantage of working from the original clips, but the disadvantage of a potentially fair amount of futzing. Working on your own project should at least eliminate the potential for mis-communication between editor and colorist ... 😉
You can instead export media from the NLE to use in the grading app. It's best to do this without any transitions applied, so all clips have full 'cuts' at their beginnings/ends. And of course in a high-quality format like DNxHD/DNxHR or ProRes.
Then in the grading app, you run a scene-detect 'cut' process to place cuts on the timeline where the clips change. You check those, adjust as necessary, and do the grading on that media.
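Outside of Resolve, you can get a feel for what scene-detect is doing with ffmpeg's scene-change score. This is purely a sketch (ffmpeg plays no part in the actual Resolve workflow, and the filenames are made up); it builds a tiny clip with one hard cut and then asks ffmpeg to flag frames that differ strongly from the previous frame:

```shell
# Build a 2-second clip with one hard cut in the middle (1s red, then 1s blue)
ffmpeg -y -f lavfi -i "color=red:s=320x240:d=1:r=25" \
       -f lavfi -i "color=blue:s=320x240:d=1:r=25" \
       -filter_complex "[0:v][1:v]concat=n=2:v=1[v]" \
       -map "[v]" -c:v mpeg4 cut_test.mov

# Keep only frames whose scene-change score exceeds 0.3 and print their
# metadata; the hit should land right at the red->blue cut
ffmpeg -i cut_test.mov -vf "select='gt(scene,0.3)',metadata=print" \
       -f null - 2>&1 | grep lavfi.scene_score > detected_cuts.txt
cat detected_cuts.txt
```

Resolve's Scene Cut Detect is doing something conceptually similar (comparing adjacent frames and proposing a cut where the difference spikes), which is why you still check and adjust the detected cuts by hand.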
To answer one of your questions: creating proxies means creating copies that are easier on your computer than the originals (example: you can make ProRes Proxy copies of RAW footage). You will always need the original footage for the final export.
Transcoding to a DI (Digital Intermediate) means that you create copies that have a predictable behaviour in your system/workflow while keeping as much quality as possible. The DI becomes your source; the original file only serves as backup. (Example: you can make ProRes (regular) copies of both H.264 and H.265 footage, and ProRes HQ copies of RAW footage.) Some broadcasters transcode everything to AVC-Intra 100.
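The distinction can be sketched with ffmpeg (just an illustration; in practice Premiere/Media Encoder does this for you, and the profile numbers below are ffmpeg's `prores_ks` profiles: 0 = Proxy, 2 = standard 422, 3 = HQ):

```shell
# Stand-in for a camera original: a 2-second synthetic 720p test clip
ffmpeg -y -f lavfi -i testsrc2=duration=2:size=1280x720:rate=25 \
       -c:v prores_ks -profile:v 3 original.mov

# Proxy: small and easy to decode, used for editing only; you relink to
# the original before the final export
ffmpeg -y -i original.mov -vf scale=640:360 -c:v prores_ks -profile:v 0 proxy.mov

# DI/transcode: full resolution, high quality; this copy becomes your
# working source and the original is kept only as backup
ffmpeg -y -i original.mov -c:v prores_ks -profile:v 2 di.mov

ls -l original.mov proxy.mov di.mov
```

The proxy is a deliberately degraded stand-in; the DI is meant to be good enough to finish from, which is why it's full resolution in a robust edit-friendly codec.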
To add to Joost's comments: when you make DI files (or proxies) that take up more space on disc than the originals, you can delete them when the project is done, as long as you've kept the originals. They can always be recreated later if you need to rework the project.