Inspiring
March 31, 2011
Open for Voting

P: A real plugin architecture to make TIFF files unnecessary

  • 34 replies
  • 1169 views

My number one feature request would be a real plugin architecture for Lightroom. It's just kind of nuts that to use any sort of plugin or external application, I have to create a separate TIFF file. The TIFF files break workflow by losing all the history of adjustments applied to the RAW before the TIFF was created, they're a pain to manage, and they take up a large amount of disk space.

Perhaps Lightroom could let plugins create mask overlays or something like that, which would integrate with the develop history. But anything that would avoid the necessity of creating a TIFF file would be welcome.

34 replies

areohbee
Legend
April 5, 2011
There are some BIG differences in what I'm talking about versus what we have now. Yet only SMALL changes would be required, because all of the existing pieces more or less already exist - they just need some glue to hold them together.

Presently, if you edit a smart object in Photoshop, you still end up forking a large tif that ends up as a separate photo in Lightroom. This is the biggest part of the problem the OP wants resolved, and so do I.

As I see it, Lightroom can (and should) implement two flavors of 3rd party imaging support:

1. The ability to feed image data to a resident app, and use its output in Lightroom, without forking a new photo. App only stores parametric settings - no image data, thus non-destructive, and small data footprint.

2. An SDK that supports resident (non-modal) plugins, in the panels, with native look & feel, for BOTH regular plugins and imaging plugins.

In my estimation, type 2 is WAY more work than type 1. So, it seems to me, unless Adobe has more development resources up their sleeve than I think, it makes a lot of sense for them to do type 1 in Lr4, since it gets the most bang for the buck by far, and reserve type 2 for Lr5, since it would require a lot more work.

If it were an either-or deal, and Adobe/users were dead set on type 2, I'd say "so be it". But, the two are not mutually exclusive at all, and I think having a big box of imaging plugin-apps available within a few months of releasing Lr4 should be a compelling motivation. These would be available quickly because existing apps could be massaged in relatively minor ways, instead of having to be re-written.
areohbee
Legend
April 5, 2011
I mean, if work in prior stages affects the input to, and hence the output of, a plugin, that's almost the definition of non-destructive. Destructive means baked-in (results saved in rgb concrete), and non-destructive means "a recipe for transforming input to output", where the output varies with the input, as dictated by the settings.
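A toy sketch of that distinction (all names and numbers invented here, not any Lightroom API): a non-destructive edit stores only a recipe of settings, so re-running it after an upstream change naturally produces a different output.

```python
# Hypothetical sketch of the "recipe" model described above. A destructive
# edit would save the output pixels themselves; a non-destructive edit
# saves only the settings and recomputes the output from the input.

def apply_recipe(pixels, settings):
    """Toy 'develop' step: scale brightness by a stored setting."""
    gain = settings["exposure_gain"]
    return [min(255, round(p * gain)) for p in pixels]

raw = [10, 100, 200]            # stand-in for RAW-derived pixel values
recipe = {"exposure_gain": 1.5}  # the only thing that gets stored

out_a = apply_recipe(raw, recipe)

# An upstream edit changes the input; the same recipe yields new output.
brighter_raw = [20, 110, 210]
out_b = apply_recipe(brighter_raw, recipe)

assert out_a != out_b   # same recipe, different input -> different output
```

Nothing downstream is "baked in": the output exists only as a function of the current input plus the saved settings.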
Inspiring
April 5, 2011
That's really no different from what we have now with passing data to PS as a smart object and then using all the tools, filters and plugins available there, just as described in Tom's blog post. I think the team would rather go a step further and allow plugins directly into the raw pipeline somewhere, even if it's at only one or two places. However, I think that's actually pretty difficult to do.

As I've said before, this team seems to prefer to fix problems thoroughly rather than quickly. I've used this example before, but I and many others wanted them to fix the orange-reds problem. They did, but they also revised the DNG spec, came out with all-new profiles, came out with camera-matching profiles, and provided a tool users can use to make their own profiles. They did the same with lens corrections. It seems to be their way, so I'm not sure if they'd seriously consider a processed-RGB-only approach such as you described above, especially since we sort of already have it. Perhaps I'm wrong, I don't know.
areohbee
Legend
April 5, 2011
When you say it's what we have now, do you mean the external editor interface that forks a new tif? I'm not sure what you mean. Presently that's the only way for a 3rd party to influence image processing in Lightroom, and it leaves a lot to be desired - thus this topic.

If you look at how Nx2 works, it does this:

First, there is an optimized process that folds basic settings in with the raw to produce an rgb, then passes that rgb to optional next steps in a chain. Each link of the chain has settings, performs a transformation, and outputs the rgb to the next link in the chain. Yes, previous steps may influence subsequent results - that's the good news and the bad news all rolled into one, and it's the reason every single step includes a histogram and shadow / highlight recovery: to keep the image data bounded for the next stage.

It's totally non-destructive. Raw data enters, rgb data emerges, and the settings in between determine the final outcome.
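The chain described above can be sketched like this (an illustrative model, not Nx2's actual API; stage names and settings are invented): each link holds its own parametric settings, transforms the rgb, and clamps the data so it stays bounded for the next link.

```python
# Illustrative sketch of a chained, fully parametric pipeline: raw in,
# rgb out, with each link driven only by its own stored settings.

def demosaic(raw, settings):
    """Stage 0: fold basic settings in with the raw to produce rgb."""
    return [min(255, v * settings["gain"]) for v in raw]

def curve(rgb, settings):
    """A later link: simple brightness lift, clamped to keep data bounded."""
    return [min(255, v + settings["lift"]) for v in rgb]

# Each link of the chain = (transform, its own parametric settings).
chain = [
    (demosaic, {"gain": 2}),
    (curve, {"lift": 10}),
]

def render(raw, chain):
    """Re-run the whole recipe from the raw data; nothing is baked in."""
    data = raw
    for transform, settings in chain:
        data = transform(data, settings)
    return data

print(render([5, 60, 200], chain))   # -> [20, 130, 255]
```

Editing any earlier link's settings and calling `render` again propagates through every subsequent link, which is exactly the good-news/bad-news coupling mentioned above.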
Inspiring
April 5, 2011
That's basically what we have now, Rob. It's not non-destructive, and if you wanted to go back and make adjustments at the LR/raw level, it might mess up the work of the plugin massively.
areohbee
Legend
April 5, 2011
Regarding: http://blogs.adobe.com/lightroomjourn...

The journal-blog looks like a great resource (sort of like a forum, except Tom Hogarty is willing to do more than make a cameo appearance there).

From what I gleaned in that article, it seems Adobe is getting stuck, and maybe they shouldn't be. In other words, plugins don't need access to raw data, just rgb in / rgb out. Although one could conceivably do this with true plugins (meaning UI and everything else via SDK) by extending the SDK with functions like GetRGBImage (for getting input), DisplayRGBImage (for displaying intermediate results), and SetRGBImage (for final plugin output), I would think a better approach would be to allow full-fledged external applications as imaging plugins. I say better because it could be done immediately - and I'm talking about both Lightroom development AND the development of the plugins.

With this approach, the only thing Lightroom has to do is:

- Be able to invoke the plugin app for user setup, pass it image data, and retrieve image data out (for parametric plugin edit setup).
- Be able to invoke the plugin without the UI, once already set up, and pass it image data and retrieve output (for re-rendering).

Not only would this be easy and quick for Adobe to do, but it would allow 3rd parties to harness existing technology immediately with very little fuss. No need to rewrite from scratch to conform to an all-new environment - maybe just a tweak to the source of input / output...
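The two operations above amount to a very small contract. Here is a hedged sketch of what it could look like, assuming the plugin app exposes an interactive `setup` call and a headless `render` call (the class and method names are made up for illustration, not any real SDK):

```python
# Hypothetical two-call protocol: one interactive pass that captures
# parametric settings, and one silent pass that re-renders from them.

class PluginApp:
    """Stand-in for a resident external imaging application."""

    def setup(self, rgb):
        """Interactive pass: show UI, let the user pick settings, and
        return (settings, preview). Here we fake the user's choice."""
        settings = {"invert": True}
        return settings, self.render(rgb, settings)

    def render(self, rgb, settings):
        """Headless pass: deterministically re-render from stored settings."""
        if settings.get("invert"):
            return [255 - v for v in rgb]
        return rgb

app = PluginApp()
rgb_in = [0, 128, 255]

# First edit: Lightroom invokes the UI and stores only the settings.
settings, preview = app.setup(rgb_in)

# Later: Lightroom re-renders silently from the stored settings.
rerender = app.render(rgb_in, settings)

assert preview == rerender   # same input + same settings -> same output
```

Because Lightroom would store only `settings` (no forked image file), the edit stays parametric: re-rendering after an upstream change just means calling `render` again with the new input.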

Bottom line: postponing development of image access for plugins / plugin-apps in favor of some super-integrated, all-new environment is too much work and will make us wait too long - for Lightroom imaging plugin support, as well as for the imaging plugins themselves. In other words, this is a case where a "compromise" solution may actually be "better" all around than going all out...
Inspiring
April 5, 2011
"Either adobe does not want to implement this or they are waiting for the rendering pipeline architecture in LR to stabilize..."
Tom Hogarty has commented directly on this:

http://blogs.adobe.com/lightroomjourn...

"Photographers would still like to see image processing plug-ins in Lightroom and I agree with them."

It seems clear that the problem is not one of intent.
areohbee
Legend
April 5, 2011
The more I think about this, the more doable it sounds to me.

I mean, ultimately, Lightroom ends up creating an rgb bitmap in the develop module. I would think inserting a pixel editing step or a plugin image transformation step would be a matter of finalizing the rgb bitmap and passing it to the plugin. All subsequent Lightroom edits would then operate on the new image data.

I don't mean to make this sound trivial - it's not. But, very, very doable...

And this would be parametric/non-destructive by definition: all plugins or image editors in the chain would be required to take their input and create their output on demand, given the settings (or parameters) that are set for them.

PS - I understand your emotion/disappointment/frustration...

R
Known Participant
April 5, 2011
"Rory - Don't you think the term "mockery" is a little "dramatic" :-}"

Okay Rob, I'll grant you that. 😉

I guess it is the disappointment leaking out. I luv Lightroom and want it to be the best it can be. Early on in Lightroom's development I mistakenly interpreted the promised SDK to be "all-inclusive". Given that it is impossible for Adobe to meet everyone's needs, and given their history with Photoshop plugins and their contempt for what Aperture calls plugins (which are not parametric), I felt that a rendering-pipeline plugin architecture would be necessary and forthcoming.

Either Adobe does not want to implement this, or they are waiting for the rendering pipeline architecture in LR to stabilize before implementing an SDK. I'm hoping for the latter. Being basically an optimist, I think LR4 would be an excellent time.

To any Lightroom engineers listening in, I did not mean to disparage your work, for which I have the greatest respect. LR is a joy to use.
areohbee
Legend
April 5, 2011
Rory - Don't you think the term "mockery" is a little "dramatic" :-}

Still, I think I see your point - if you have to edit non-parametrically/destructively at any stage, it breaks the parametric/non-destructive editing chain...

The good news: even traditional destructive pixel editors can be made "parametric" in some instances. If it's possible to define a formula for taking input and producing output, and that formula can be saved as "parameters", then so-called destructive editors can be chained into the parametric pipeline. Users will need to understand that these stages will not be as efficient as the native Lightroom stuff, since there is no consolidation of settings...

Anyway, Adobe can certainly invent the solution once they set their minds to it. It'll be interesting to see if/when it happens...

Note: This paradigm already exists for exporting: plugins can define "filters" which take an image file and produce an image file, which is then passed to the next "filter" in the chain. There has been surprisingly little use of this feature to date (I'm not sure why - except that the documentation is confusing), but the idea could certainly be extended to the develop module - preferably with an option to bypass the intermediate files and pass the image data in RAM...
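The file-in / file-out filter chain mentioned in that note can be sketched as follows (a generic model of the pattern, not Lightroom's actual export SDK; the filter names and byte-file "images" are invented stand-ins):

```python
# Minimal sketch of an export-style filter chain: each "filter" reads an
# image file and writes a new file for the next filter in the chain.
import os
import tempfile

def filter_invert(src, dst):
    """Toy filter: invert every byte of the 'image' file."""
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(bytes(255 - b for b in data))

def filter_double(src, dst):
    """Toy filter: double every byte, clamped at 255."""
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(bytes(min(255, b * 2) for b in data))

def run_chain(src, filters):
    """Pass the file through each filter, handing each output to the next."""
    current = src
    for i, flt in enumerate(filters):
        nxt = os.path.join(tempfile.gettempdir(), f"stage{i}.bin")
        flt(current, nxt)
        current = nxt
    return current

seed = os.path.join(tempfile.gettempdir(), "seed.bin")
with open(seed, "wb") as f:
    f.write(bytes([0, 100, 200]))

out = run_chain(seed, [filter_invert, filter_double])
with open(out, "rb") as f:
    print(list(f.read()))   # -> [255, 255, 110]
```

The in-RAM variant suggested above would keep the same chain shape but pass buffers between filters instead of writing a temp file at every stage.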

Rob