johnrellis
Legend
April 20, 2018
Open for Voting

P: Document the format of RGB LUTs in enhanced profiles


If Adobe documents the format of the RGB LUTs stored in enhanced profiles, then LR plugins could make it much easier for users to import video LUTs as enhanced profiles and apply them to their images.

There are hundreds, perhaps thousands, of existing LUTs, primarily used for video editing, and most of them are in Rec 709. There are several roadblocks for users who want to use those LUTs in LR:

- Camera Raw doesn't allow the Rec 709 gamma 2.2 and 2.4 color spaces when importing a LUT into a new enhanced profile:
https://feedback.photoshop.com/photoshop_family/topics/camera-raw-add-rec-709-to-lut-color-spaces-in...

- A user would need to buy an expensive utility to convert Rec 709 LUTs to a supported color space (e.g. ProPhoto RGB or Rec 2020).

- While LR 7 users will have access to Camera Raw via Photoshop (included in their CC subscription), many of them are unfamiliar with Camera Raw and would find the steps for creating enhanced profiles obscure and daunting.

--------------------------------------
My current surmise for how the LUTs are encoded:

- The LUT is converted to the internal LR/ACR working space (ProPhoto RGB primaries and white point, linear transfer function).

- It is compressed somehow. 

- The compressed LUT is then encoded using a minor variant of Ascii85 (see the decoding sketch below).
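
If that surmise is right, reading a crs:Table_* value back would be roughly two steps: undo the Ascii85-style text encoding, then inflate with zlib. Below is a minimal C++ sketch of those two steps, assuming the standard Ascii85 alphabet ('!' through 'u', with 'z' for an all-zero group) and a plain zlib stream; the actual encoding appears to be a slightly modified, XML-safe alphabet, and pinning down that variant is exactly what this request asks Adobe to document.

// Sketch only: decode an Ascii85-style string, then zlib-inflate the result.
// Assumes the standard Ascii85 alphabet; ACR's real encoding is likely a minor
// variant of this, so treat it as a starting point rather than a spec.

#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>
#include <zlib.h>

static std::vector<uint8_t> ascii85_decode(const std::string &text)
{
    std::vector<uint8_t> out;
    uint32_t digits[5];
    int count = 0;

    auto flush = [&](int chars) {
        // A short final group is padded with 'u' (84); it yields chars-1 bytes.
        for (int i = chars; i < 5; ++i) digits[i] = 84;
        uint32_t value = 0;
        for (int i = 0; i < 5; ++i) value = value * 85 + digits[i];
        for (int i = 0; i < chars - 1; ++i)
            out.push_back(uint8_t(value >> (24 - 8 * i)));
    };

    for (char c : text) {
        if (c == 'z' && count == 0)              // shorthand for four zero bytes
            out.insert(out.end(), 4, 0);
        else if (c >= '!' && c <= 'u') {
            digits[count++] = uint32_t(c - '!');
            if (count == 5) { flush(5); count = 0; }
        }                                        // anything else is skipped
    }
    if (count > 0) flush(count);
    return out;
}

static std::vector<uint8_t> zlib_inflate(const std::vector<uint8_t> &src)
{
    // Grow the output buffer until uncompress() stops reporting Z_BUF_ERROR.
    std::vector<uint8_t> dst(src.size() * 4 + 64);
    for (;;) {
        uLongf destLen = uLongf(dst.size());
        int rc = uncompress(dst.data(), &destLen, src.data(), uLong(src.size()));
        if (rc == Z_OK) { dst.resize(destLen); return dst; }
        if (rc != Z_BUF_ERROR) throw std::runtime_error("zlib inflate failed");
        dst.resize(dst.size() * 2);
    }
}

If those assumptions hold, zlib_inflate(ascii85_decode(table_string)) should give back the same uncompressed table bytes that dng_rgb_table reads and writes.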

29 replies

Participant
July 13, 2022

Your GitHub gist isn't there anymore. Would you be able to re-upload it?

Inspiring
January 30, 2021

Using the DNG SDK (the "dng_big_table.cpp" functions), I've successfully replicated the algorithm that decodes/decompresses and compresses/encodes the CUBE LUTs into the "Table_..." XMP metadata. I need a partner to complete this project; write to me: Michele Renzullo.

1) We need to understand how our RGB values are "transformed" from the CUBE (please see the output below)

2) We need to recalculate the fingerprint (I think it won't be difficult; see the MD5 sketch after the gist link)

I've recreated the DecodeFromString, DecodeFromBinary, EncodeAsString, and EncodeAsBinary functions. My working code (you need to replace the output txt files):

https://gist.github.com/michelerenzullo/a1b8c0a62270dda23c50d483bdcf41d7
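
On point 2: dng_fingerprint in the DNG SDK is a 128-bit MD5 digest, computed with dng_md5_printer. A minimal sketch of recomputing one over a byte buffer follows; which bytes the table fingerprint is actually taken over (the raw table stream, the compressed data, or something else) isn't confirmed in this thread, so that input choice is an assumption.

// Sketch only: compute a dng_fingerprint (an MD5 digest) over a byte buffer
// using the DNG SDK's dng_md5_printer.  Which bytes the crs:Table_* fingerprint
// actually covers is the open question, so the caller's choice of "bytes" here
// is an assumption.

#include "dng_fingerprint.h"

#include <cstdint>
#include <vector>

dng_fingerprint fingerprint_of(const std::vector<uint8_t> &bytes)
{
    dng_md5_printer printer;
    printer.Process(bytes.data(), (uint32) bytes.size());
    return printer.Result();   // 16-byte MD5 digest
}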

CUBE test LUT_SIZE 2X2X2 + XMP encoded CUBE

TITLE "test"
DOMAIN_MIN 0 0 0
DOMAIN_MAX 1 1 1
LUT_3D_SIZE 2
0.274061893 0.219932506 0.177115963  
0.571511126 0.263131176 0.179752179  
0.656680978 0.525461611 0.312991872  
0.852257362 0.575297026 0.347083429  
0.107468354 0.235404972 0.338322431  
0.396458804 0.216070001 0.364647435  
0.273604227 0.545908918 0.545072734  
0.612230119 0.602805907 0.584224470  

#70 #56 #45
#146 #67 #46
#167 #134 #80
#217 #147 #89
#27 #60 #86
#101 #55 #93
#70 #139 #139
#156 #154 #149

#XMP RGB TABLE CONVERTED WITH CAMERA RAW -> 71000OcImwBRLy1{X$/w$i%Whe:T7?%[R'N^`d7etfwl1o9NpBT9x5p.8f[!A?Y$h5751ZDva8xperJFP#'ng/^$B0qItqxe)8

Output from my code:

DECODED AND DECOMPRESSED XMP RGB TABLE -- please see dng_big_table.cpp and dng_big_table.h, and maybe also dng_fingerprint, inside the Adobe DNG SDK

1 0 0 
0 1 0           <-- some internal flags, seems constant
0 0 3           <-- 3 should be our 3 dimensions
0 0 0 
2 0 0           <-- please note the first number stands for LUT_3D_SIZE; in our example it is "2"
0 
40 70 76 56 86 45              1     <-- problem 1) RGB values aren't sorted! I see a pattern; look at the order, it seems easy to solve with an algorithm (see the re-indexing sketch after this dump)
130 27 66 60 156 86            5     <-- problem 2) what are the numbers BEFORE our RGB values? I don't know... HSV? HSL? It doesn't seem so.
27 168 132 134 31 80           3
10 70 192 139 137 139          7
78 146 91 67 3 46              2
126 101 79 55 89 93            6
45 218 70 147 217 88           4
187 156 81 154 143 149         8
0 0 0     <-- please note the first 0 stands for sRGB
0 1 0     <-- please note the 1 in the 2nd position stands for linear gamma
0 0 0
0 0 0 
0 0 0 
0 0 0 
0 0 0 
0 0 0
0 0 0 
64 
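
On problem 1: in the .cube format itself, the red index varies fastest, then green, then blue. If the decoded table simply walks the axes in a different nesting order, the rows would look shuffled even though no per-entry transformation is involved. Here's a small sketch that re-indexes .cube rows into an [r][g][b]-major array; the idea that the decoded table differs only in loop order is my reading of the pattern above, not something confirmed by the SDK.

// Sketch: map the row order of a .cube file (red index varies fastest, per the
// .cube convention) onto an array indexed [r][g][b].  If the decoded
// crs:Table_* data nests its loops differently, a transpose like this would
// explain the apparent shuffling of rows.

#include <cstddef>
#include <vector>

struct RGB { float r, g, b; };

// rows: the data lines of the .cube file, in file order; n: LUT_3D_SIZE.
// Returns a flat vector indexed as r * n * n + g * n + b.
std::vector<RGB> cube_rows_to_rgb_major(const std::vector<RGB> &rows, std::size_t n)
{
    std::vector<RGB> lut(n * n * n);
    std::size_t row = 0;
    for (std::size_t b = 0; b < n; ++b)          // .cube: blue varies slowest
        for (std::size_t g = 0; g < n; ++g)      //        then green
            for (std::size_t r = 0; r < n; ++r)  //        red varies fastest
                lut[(r * n + g) * n + b] = rows[row++];
    return lut;
}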
Inspiring
September 5, 2020
As eartho and John Ellis point out, it's in the DNG SDK. Here's a snippet for writing an enhanced profile:
// "host" is a dng_host; "name", "group", and "filename", plus
// get_dng_hue_sat_map(), come from surrounding code that isn't shown here.
dng_xmp *xmp = host.Make_dng_xmp();

dng_string strn;

// Profile-level flags for an enhanced profile ("Look").
xmp->Set(XMP_NS_CRS, "PresetType", "Look");
xmp->SetBoolean(XMP_NS_CRS, "SupportsAmount", true);
xmp->SetBoolean(XMP_NS_CRS, "SupportsColor", true);
xmp->SetBoolean(XMP_NS_CRS, "SupportsMonochrome", true);
xmp->SetBoolean(XMP_NS_CRS, "SupportsHighDynamicRange", true);
xmp->SetBoolean(XMP_NS_CRS, "SupportsSceneReferred", true);
xmp->SetBoolean(XMP_NS_CRS, "SupportsOutputReferred", true);

xmp->Set(XMP_NS_CRS, "CameraModelRestriction", "");
xmp->Set(XMP_NS_CRS, "Copyright", "");
xmp->Set(XMP_NS_CRS, "ContactInfo", "");
xmp->Set(XMP_NS_CRS, "Version", "12.3");
xmp->Set(XMP_NS_CRS, "ProcessVersion", "11.0");
xmp->SetBoolean(XMP_NS_CRS, "ConvertToGrayscale", false);

// Localized profile name and group.
strn.Set(name.c_str());
xmp->SetAltLangDefault(XMP_NS_CRS, "Name", strn);
strn.Set(group.c_str());
xmp->SetAltLangDefault(XMP_NS_CRS, "Group", strn);

// Build the look table and write it into the XMP; this is what produces the
// compressed, encoded table data discussed in this thread.
dng_look_table look_table;
look_table.Set(get_dng_hue_sat_map(), 0);
dng_big_table_storage storage;
look_table.WriteToXMP(*xmp, XMP_NS_CRS, "LookTable", storage);

// Serialize the XMP packet and write it out as the profile's .xmp file.
dng_memory_block *blk = xmp->Serialize(false, 0, 4096, false, true);

std::cout << "Writing output XMP profile to " << filename << std::endl;
std::ofstream outf(filename);
outf.write(blk->Buffer_char(), blk->LogicalSize());

johnrellis
Legend
August 10, 2020
I think the SDK does provide all the methods needed for creating the encoded and compressed LUTs in enhanced profiles' .xmp files.
Participating Frequently
August 10, 2020
I'll sink my teeth into this as well and will report back soon.
johnrellis
Legend
August 8, 2020
"I'm still grokking the spec and the source to guess which color space the LUTs operate in."

Ignore this comment -- I'm still grokking.
johnrellis
Legend
August 8, 2020
Thanks very much, Eartho.

With more poking around (my rudimentary C++ dates back to 1995), I see that dng_rgb_table is a subclass of dng_big_table, and the parent class does indeed provide methods for encoding/compressing and decoding/decompressing using an Ascii85 variant and zlib.

I'm still grokking the spec and the source to guess which color space the LUTs operate in. I'm guessing they're like the HSV look tables (ProPhoto RGB with linear encoding) but haven't confirmed that.
johnrellis
Legend
August 8, 2020
I poked through the DNG SDK specification and source code and didn't see anything that answers my original question. Details: 

The .xmp files containing enhanced profiles have crs:Table_* tags containing compressed LUTs that are encoded with what is likely a modified version of Ascii85 suitable for XML.  See the post above for an example:
https://feedback.photoshop.com/photoshop_family/topics/camera-raw-and-lightroom-document-the-format-...

I see nothing in the DNG SDK that specifies this compression and encoding. 

In dng_big_table.h/cpp, the class dng_rgb_table defines a 1- or 3-dimensional LUT with the same color spaces as allowed in enhanced-profile LUTs.  The class implementation provides methods for reading and writing uncompressed tables to/from byte streams. But I couldn't find any hints about the compression and encoding used for the crs:Table_* tags in enhanced-profile .xmp files.

Maybe I'm missing something, but I don't think so.
Earth Oliver
Legend
August 8, 2020
I heard back from one of the devs with a link to the DNG SDK, which supposedly has the info you all are looking for. Can you let me know if what you need is there?
https://supportdownloads.adobe.com/product.jsp?product=120&platform=Mac

See the "dng_rgb_table" data structure.
Inspiring
August 4, 2020
Definitely waiting for this as well, so we can programmatically create Lightroom-compatible LUTs with Photon!