Lightroom 13.0 merged dng files are suddenly very small

Community Beginner, Nov 27, 2023

I have updated to LR 13.0 and found that merged dng files (for panoramas) are now very small. For example, I merged three 76MB RAW files and the resulting dng was only 18MB. Normally the merged dng file would be more than 200MB. However, I found that when I exported the dng file as a tiff without compression at 300dpi, it did exceed 200MB.
I've gone back and checked previous merged dng panoramas and observed that they were very large, but when I merged them again in LR 13.0 the results suddenly dropped to about a tenth of the size of the files merged before updating.

I am using an M1 Mac Mini and running Ventura.

What's going on? Any ideas? 

Community Expert, Nov 27, 2023 (Correct answer)

Adobe updated the compression algorithm used in merged panorama, HDR, and enhanced dng images to JPEG XL compression, which is far more efficient than the jpeg compression they were using before. That is the effect you're seeing. There is no loss in quality as a result, just much more efficient compression.
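
If you want to confirm which compression a given DNG uses, one rough way is to read its Compression tag with the third-party exiftool utility. The sketch below is only an illustration: it assumes exiftool is installed and on your PATH, "pano.dng" is a placeholder for your merged file, and the exact label printed depends on your exiftool version.

```python
# Hedged sketch: read the Compression tag of a merged DNG with exiftool.
# Assumes the third-party exiftool command is installed; "pano.dng" is a placeholder path.
import subprocess

result = subprocess.run(
    ["exiftool", "-Compression", "pano.dng"],
    capture_output=True, text=True, check=True,
)
# Prints the human-readable compression name; whether it shows a JPEG- or
# JPEG XL-based codec depends on how the DNG was written and on your exiftool version.
print(result.stdout)
```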

Community Beginner, Nov 27, 2023

Thanks very much for that, and you were so quick. Seriously, you'd think Adobe would explain this.

Community Beginner, Dec 26, 2023

Thank you for the wonderful answer. I had the same problem.

Still, I can't wrap my mind around how it is possible to fit a panorama, in my case three shifted 30MB frames, into a 17MB lossless DNG. And how is the full size there again when it is opened in PS as a TIFF? I would really like to read something more specific about it; I find the topic quite compelling. Did Adobe publish anything?

Thanks again for the info.

Community Expert, Dec 26, 2023

The trick is that the dng is not losslessly compressed. It is lossy, but at such extremely high quality that it is impossible to distinguish from the original. The compression algorithm is JPEG XL, which is far higher quality than jpeg compression, and it is applied at the demosaiced raw level, so it's nothing like what people think of as jpeg. Adobe's reasoning is that the demosaic, the panorama stitch, the denoise, and the HDR merge are already lossy, so using a lossy compression algorithm will not lead to additional loss, and this really appears to be true.

Community Expert, Dec 26, 2023

Oh, and when you then save a tiff from this lossy-compressed dng file, tiff is either uncompressed or losslessly compressed, so the file will balloon in size as a result.
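
For a rough sense of the numbers, here is a back-of-the-envelope sketch; the 36-megapixel pano size is an assumed example value, not taken from the files discussed above.

```python
# Size of an uncompressed 16-bit RGB TIFF, ignoring metadata overhead.
# The 36-megapixel figure is just an assumed example pano size.
megapixels = 36
bytes_per_pixel = 3 * 2                      # 3 channels x 2 bytes (16 bits) each
size_mb = megapixels * 1_000_000 * bytes_per_pixel / 1_000_000
print(f"{size_mb:.0f} MB")                   # -> 216 MB before any TIFF compression
```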

Community Beginner, Dec 28, 2023

Thanks for your answer.

Sorry, I still don't follow some of the basic reasoning:

- Why should a panorama stitch be considered a lossy process? I am stitching 3 or more different photos with about a 1/3 overlap on each side.

- If the file is smaller, that means it contains less information. Which information is lost? If I shoot in 14/16-bit raw, how many bits per color does the panorama dng have?

- When it is opened in PS as a TIFF, which information is then software-generated or recovered?

I really wish Adobe would clarify this a bit more and maybe let us choose how the panorama stitch should work.

I understand that with the other process, the noise reduction, the result was ridiculous: after it you got a dng three times the file size, so there I welcome some compression. But it should at least stay around the MB size of the original raw.

As a pro I don't feel comfortable having a lossy-compressed file.

Community Expert, Dec 28, 2023

Stitching a panorama is inherently lossy. There are a few things you have to realize. One is that file size does NOT correlate 1:1 with the amount of information. Two is that lossy means a conversion process is irreversible: you cannot go back to the original data from what you have now. Lossless processes are, by definition, always reversible.

You also have to realize that even raw data is in a sense compressed, since at the start each pixel has only one color channel in it, because each pixel on your sensor has either a red, green, or blue filter on it. This is called the Bayer mosaic, after the inventor of this trick, who worked at Kodak many decades ago. So you have to demosaic the raw data to create an actual image that you can stitch. This gets you from only one color (red, green, or blue) per pixel to full red, green, and blue info for each pixel; the extra color channel info is interpolated from the neighboring pixels. So now you have gone from 12 or 14 bits of single-color info to typically 16 bits each of red, green, and blue for every pixel, while realizing that 2/3 of that data is made up or basically redundant. However, you cannot go back from this data to the original raw data, meaning this process is lossy (not reversible).

Now you need to stitch this into a panorama, which means that you have to warp and distort the images. This is again lossy, as you cannot perfectly get back from the warped versions to the original, undistorted images. The math used is inherently lossy too: you're projecting a matrix of 16-bit integer values onto another predefined matrix of 16-bit integers, and you cannot do that without losing info. Then you have to blend the images together in their new warped state, and this is again lossy: you cannot get back to the three original images anymore.

So the whole process goes through three lossy steps. There is no way you can get back to the exact original three raw images. It is impossible. You might be able to train an AI to get close, but you can never recover the actual original data from the stitched panorama.
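
As a toy illustration of the warping point, the generic numpy/scipy sketch below (random data, not Lightroom's actual projection math) shows that warping quantized pixel values and warping them back does not restore the original numbers.

```python
# Toy demo: warping integer pixel data and warping it back does not restore
# the original values, because interpolation plus rounding lose information.
# Random 16-bit data stands in for a demosaiced image tile.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
tile = rng.integers(0, 2**16, size=(256, 256), dtype=np.uint16)

rotated = ndimage.rotate(tile, 10, reshape=False, order=1)       # "warp"
restored = ndimage.rotate(rotated, -10, reshape=False, order=1)  # "unwarp"

# Compare a central crop to ignore border effects.
center = (slice(64, 192), slice(64, 192))
diff = restored[center].astype(np.int32) - tile[center].astype(np.int32)
print("identical:", np.array_equal(restored[center], tile[center]))  # False
print("mean abs error:", np.abs(diff).mean())
```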

So think about it. Say you have three 24-megapixel images shot at 14 bits of precision. This means you have 3 * 24 * 14 / 8 = 3 * 42 = 126 megabytes of actual original data (note that lossless compression in raw files usually brings that down to about 2/3 of that). Demosaicing those images goes from 14-bit single-channel to 16-bit, three-channel data, so after demosaicing you have 3 * 24 * 3 * 16 / 8 = 432 megabytes of data. However, those 432 megabytes actually contain LESS original information than the 126 megabytes of raw data! This is because 2/3 of it is made up, and you have to use 16-bit precision to contain the 14 bits of the original, yielding another 14% of made-up data. Overall this leads to a data size about 3.4 times bigger. Now you stitch this together with about 1/3 overlap, which means roughly 2/3 of a single image is redundant data, and assuming not too much loss from the lens correction and projection warp, you end up needing about 2.33 * 24 * 3 * 16 / 8 = 336 megabytes to contain the stitched pano. This was another lossy step, so now you have even less actual original image data than in the original 126 megabytes.
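
The same arithmetic, spelled out; these are the illustrative numbers from the example above, not measurements.

```python
# Back-of-the-envelope data sizes for the three-frame, 24 MP, 14-bit example above.
MP = 24_000_000                               # pixels per frame

raw_bytes   = 3 * MP * 14 / 8                 # three 14-bit, single-channel raw frames
demosaiced  = 3 * MP * 3 * 16 / 8             # three 16-bit RGB frames after demosaicing
pano_needed = 2.33 * MP * 3 * 16 / 8          # ~2.33 frames of unique area after 1/3 overlap

print(raw_bytes / 1e6, demosaiced / 1e6, pano_needed / 1e6)
# -> 126.0 432.0 335.52 (megabytes), matching the 126 / 432 / ~336 MB figures above
```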

As a last step, this data is compressed with a lossy algorithm that is far less lossy than the three steps before it, and you end up with something about the size of a single original raw file. Also, the data loss in the lossy compression falls almost exclusively in the part of the data that was made up in the other steps of the pano stitch.

This is all just math and inherent to what you are trying to do in a pano stitch. None of that is avoidable. Data loss is unavoidable. 

 

Some of your other questions:

 

"- if file is smaller, that means that has less info contained. which one are lost? if I shoot in 14/16 bit raw how many colorbit has the Panorama dng"

 

Smaller files do NOT mean less info. In this case, yes, there is less actual original info in the stitched pano than in the original three images, but that has more to do with the math inherent to the demosaic process than with the lossy compression. The info lost in the demosaicing, warping, projecting, and blending is far greater than what you lose in the compression step. Lastly, the panorama dng files use 16-bit, 3-channel encoding, no matter what precision you shot at.

 

"- when opened in PS as a Tiff, which infos are then software generated or recovered?"

 

No information is generated or recovered in the first part of that process, the decompression. The decompression of the panorama dng file is reversible: you end up needing much more memory, but the amount of information is the same. After that, the data is also run through Camera Raw, which interprets the linear data in the camera color space and transforms it into a reference color space such as ProPhoto RGB, Display P3, etc. It uses camera profiles and all kinds of Camera Raw settings and algorithms to shape the color data into a pleasing image. This is a lossy process: you cannot reverse it back to the data in the original panorama dng file, so you are again losing information. So while you end up with a far bigger image file, from a math perspective it actually retains less of the original camera data. There is a lot of new info in it, which is mostly your creative input from manipulating the Camera Raw settings, and that of course is not info that came from the original raw files.
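
Two tiny generic sketches of that distinction: zlib stands in for any lossless codec, and the 3x3 matrix is an arbitrary made-up example, not a real camera profile or Adobe's actual pipeline.

```python
import zlib
import numpy as np

# 1) Lossless (de)compression is exactly reversible: nothing is generated or lost.
payload = np.arange(100_000, dtype=np.uint16).tobytes()
assert zlib.decompress(zlib.compress(payload)) == payload      # bit-for-bit identical

# 2) A color transform applied to quantized 16-bit integers is not exactly
#    reversible: rounding and clipping lose a little information each way.
#    The matrix M is an arbitrary invertible example, not a real camera profile.
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
rng = np.random.default_rng(1)
rgb = rng.integers(0, 2**16, size=(100_000, 3))

forward = np.clip(np.rint(rgb @ M.T), 0, 65535)
back = np.clip(np.rint(forward @ np.linalg.inv(M).T), 0, 65535)
print("exactly recovered:", np.array_equal(back, rgb))         # almost always False
```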

 

"I really wish Adobe will clarify a bit more and maybe let us choose how panorama steach should work.

I understand by other process the noise reducction was ilarius after it one got 3 times filesize dng so there i get some compression, but at least should stay in the MB size of the original Raw."

 

Do understand that most of that reduction in file size comes from removing the noise. A compressed image's file size, whether the compression is lossy or lossless, is very sensitive to the noise (= entropy!) in the data: more noise means bigger files. However, that noise is not necessarily useful data. Denoise is inherently lossy too, in that it throws away the noise and makes up other data (recognizing actual structures through the noise, having been trained on many other images) that did not come from your original image file but is the AI's best guess at what that blob in your image actually represents.
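
A quick generic illustration of the noise/entropy point; zlib here is just a stand-in compressor, but the effect goes in the same direction for image codecs.

```python
import zlib
import numpy as np

# More noise -> bigger compressed output, even though the "useful" content is the same.
rng = np.random.default_rng(0)
smooth = np.linspace(0, 65535, 1_000_000).astype(np.uint16)           # clean gradient
noise = rng.integers(-500, 500, smooth.size)
noisy = np.clip(smooth.astype(np.int32) + noise, 0, 65535).astype(np.uint16)

print("smooth:", len(zlib.compress(smooth.tobytes())))                # compresses very well
print("noisy: ", len(zlib.compress(noisy.tobytes())))                 # many times larger
```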

 

"as a Pro i don't feel like having a lossy compressed file."

 

I understand that, but it is irrational. The lossy compression is the least lossy part of the process (it's in the noise, pun intended), and this lossy compression is nothing like the heavily compressed jpegs you might be thinking of. It is a far more advanced algorithm that is very close to lossless but uses far fewer bytes to contain the same data. No significant data loss happens in that compression step, at least nothing compared to the other steps.

 

 

Community Expert, Dec 28, 2023

I forgot to say that the compression used before the last update was already lossy! It was actually lower-quality lossy compression than the current JPEG XL algorithm, even though the file sizes were larger.

Community Beginner, Mar 18, 2024


Hi Joa,

 

Sorry for the late response. I just want to thank you for the great answer and explanations.

Cheers

gio
