johnrellis
Legend
October 28, 2021
Open for Voting

P: Allow users to disable the GPU for AI masking


Update 6/23/2024: LR 13.2 broke the mechanism for disabling the use of the GPU by AI commands:

https://community.adobe.com/t5/lightroom-classic-ideas/p-allow-users-to-disable-the-gpu-for-ai-masking/idc-p/14698393#M22221

 

Update 10/14/2023: LR 13 (finally) provides a mechanism for disabling the use of the GPU for AI masking commands:

https://community.adobe.com/t5/lightroom-classic-ideas/p-allow-users-to-disable-the-gpu-for-ai-masking/idc-p/14158013#M20035

 

The AI masking commands should obey the setting of Preferences > Performance > Use Graphics Processor, allowing users to disable its use. That would let users on older, incompatible hardware continue to use AI masking (*), and it would greatly accelerate troubleshooting when users have incompatible GPUs or out-of-date drivers. In my quick testing on Windows, these commands take 1-2 seconds with a GPU and 5-10 seconds without (so still quite usable). (+)

 

Both the CoreML library (Mac) and the ONNX Runtime (Windows) allow the client to specify whether the CPU or the GPU is used to execute models.
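
For example, the ONNX Runtime's Python API takes a "providers" list when a session is created; listing only the CPU provider keeps execution off the GPU. A minimal sketch (the model path is a placeholder, and the DirectML provider assumes the onnxruntime-directml package on Windows):

import onnxruntime as ort

# Force execution on the CPU by listing only the CPU provider.
session_cpu = ort.InferenceSession("model.onnx",
    providers=["CPUExecutionProvider"])

# Allow the GPU (DirectML), with automatic fallback to the CPU.
session_gpu = ort.InferenceSession("model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"])

The Core ML equivalent appears in my reply below.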

 

Providing the option Use GPU for AI Masking would be exactly in line with the other existing GPU options:

- Use GPU For Display
- Use GPU For Image Processing
- Use GPU For Export
- Use GPU To Apply Dehaze And Clarity Adjustments While Generating Previews

 

(*) LR 12 no longer lets users with less than 2 GB of VRAM use People masking.  Older Macs, e.g. Mac Minis and MacBook Airs, have only 1.5 GB of VRAM and can't be upgraded. Intel has an explicit "legacy drivers" policy, under which it no longer fixes most bugs in drivers for its older graphics hardware. The other manufacturers probably have similar de facto policies.

 

People masking also fails for users with iMacs with Radeon 5700 graphics processors. There appears to be a bug in the graphics driver that AMD hasn't fixed, or has fixed but Apple hasn't incorporated into Mac OS.

 

(+) It's easy to demonstrate that AI masking runs correctly, but more slowly, without a GPU.  On an Intel Mac, boot into Safe Mode and run LR; Activity Monitor shows that no GPU is available. On Windows, run LR in a Parallels virtual machine, which won't use the host computer's GPU.

24 replies

johnrellis
Legend
November 19, 2022

To show how technically simple it is to control whether AI models execute on the CPU or the GPU, I wrote a simple Python script using the same Core ML library used by LR. It loads two instances of the open-source MobileNetV2 image-classification model, one allowed to use the GPU and the other not, and then alternates execution of the two models. The key lines of the script are:

import coremltools as ct

# One instance may use the GPU/Neural Engine; the other is restricted to the CPU.
modelGPU = ct.models.MLModel("MobileNetV2.mlmodel",
    compute_units=ct.ComputeUnit.ALL)

modelCPU = ct.models.MLModel("MobileNetV2.mlmodel",
    compute_units=ct.ComputeUnit.CPU_ONLY)
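
The alternating predictions look roughly like this (a sketch; the input name "image" and the 224x224 input size are assumptions about how the model was converted):

import time
from PIL import Image

img = Image.open("test.jpg").resize((224, 224))

# Alternate predictions between the GPU-enabled and CPU-only instances.
for label, model in (("GPU", modelGPU), ("CPU", modelCPU)):
    start = time.perf_counter()
    model.predict({"image": img})
    print(f"{label}: {(time.perf_counter() - start) * 1000:.0f} ms")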

 

A prediction using the GPU takes about 9 msecs, while a prediction using the CPU takes 39 msecs. When the GPU model is running, Activity Monitor shows % CPU at about 50% and % GPU at 50%. But when the CPU model is running, % CPU is about 175% and % GPU is 0%.

 

You can download the entire script and instructions from here:

https://www.dropbox.com/s/0edo6jm2s1sgrc6/coreml-gpu-cpu.2022.11.14.zip?dl=0 

 

That .zip includes a screen recording of the script in action.

 

 

 

davido48902742
Participating Frequently
October 23, 2022

I agree entirely. Lots of people with otherwise perfectly serviceable systems don't have advanced graphics cards (which are bought more commonly for gaming than anything else). We can all appreciate the Lightroom functions that make use of GPU acceleration, but they should still work, even at a slower pace, without one.

johnrellis
Legend
October 13, 2022

In another thread, Adobe employee @simonsaith wrote, "To use or not to use the GPU for ML inference for a particular GPU/driver version is not within Lr's control. The platform OS will do the check at runtime."

 

This is incorrect. LR uses the CoreML library on Mac and the ONNX Runtime on Windows, and both allow the client to control whether the GPU is used to execute a model.

 

Mac: MLModelConfiguration allows the client to limit execution to the CPU via "computeUnits".  

 

Win: onnxruntime.InferenceSession() allows the client to specify the "providers" option listing CPUExecutionProvider only.
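
In the Python API, it's easy to verify which providers are available and which a given session actually uses (a sketch; the model path is a placeholder):

import onnxruntime as ort

# Providers compiled into this onnxruntime build, in priority order.
print(ort.get_available_providers())

# A session restricted to the CPU reports only that provider.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(sess.get_providers())  # ['CPUExecutionProvider']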

 

 

johnrellis
Legend
October 29, 2021

This would also allow people with older graphics chipsets that lack newer drivers to use Select Subject and Select Sky.