I'm using a powerful EPYC 9575F + RTX 5090 workstation, but Lightroom Classic's performance is badly underwhelming. The example is a batch Enhance Details run on 500 images. This is AI-based, so it should run on the GPU, and it did run on the GPU on my old computer and on my Mac. Here, however, it runs on the CPU, and more disappointingly it doesn't even fully saturate the CPU. I enabled "CCD as virtual NUMA node", so Task Manager shows 8 graphs, each representing 8 cores. Only one CCD is used (likely because Windows' scheduler tries to keep the process on one NUMA node, i.e. one CCD, which is exactly why I enabled virtual NUMA: to improve the L3 cache hit rate), and only half of that CCD is saturated. VRAM is allocated, but the GPU is not computing. And it's not only the AI features: the overall performance is even worse than my M4 Pro's, which is very disappointing. System Info also shows a maximum of only 6 Camera Raw threads. Full GPU acceleration is manually enabled, and System Info confirms this with the DirectX engine. By contrast, during indexing after import the CPU can be fully saturated: every CCD works and the whole CPU reaches about 80%, which is good. Saving XMP and exporting show the same symptoms: very slow, CPU-only, and seemingly serial rather than parallel. I think this is a bug. Does anyone have an idea why, and how to fix it?
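For anyone who wants to see the per-CCD behaviour outside Task Manager, here is a minimal monitoring sketch in Python (an illustration only; it assumes the third-party psutil package and the 8-cores-per-graph grouping described above, so the GROUP constant is an assumption). An idle CCD shows up as a group of cores stuck near 0% while the batch runs:

# Minimal per-core CPU monitor (assumes the third-party "psutil" package).
# Run it in a second window while Lightroom does a batch Enhance; stop with Ctrl+C.
# On this EPYC configuration, each group of 8 logical cores matches one CCD graph.
import psutil

GROUP = 8  # logical cores per CCD graph, as described in the post above (assumption)

while True:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    groups = [per_core[i:i + GROUP] for i in range(0, len(per_core), GROUP)]
    # One averaged utilisation figure per CCD-sized group of cores.
    print(" | ".join(f"CCD{n}: {sum(g) / len(g):5.1f}%" for n, g in enumerate(groups)))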
 
 
Please do the LR menu command Help > System Info and copy/paste the entire contents here so we can see exactly which versions of hardware and software LR thinks you're running and important LR options that are set.
Lightroom Classic version: 14.5.1 [ 202508231203-c2638d01 ]
License: Creative Cloud
Language setting: en
Operating system: Windows 11
Version: 11.0.26100
Application architecture: x64
System architecture: x64
Logical processor count: 64
Processor speed: 3.3GHz
SqLite Version: 3.36.0
CPU Utilisation: 1.0%
Power Source: Plugged In
Built-in memory: 589290.2 MB
Dedicated GPU memory used by Lightroom: 196.4MB / 32100.0MB (0%)
Real memory available to Lightroom: 589290.2 MB
Real memory used by Lightroom: 6122.2 MB (1.0%)
Virtual memory used by Lightroom: 7077.8 MB
GDI objects count: 892
USER objects count: 3037
Process handles count: 3160
Memory cache size: 1026.8MB
Internal Camera Raw version: 17.5 [ 2318 ]
Maximum thread count used by Camera Raw: 9
Camera Raw SIMD optimization: SSE2,AVX,AVX2
Camera Raw virtual memory: 430MB / 294645MB (0%)
Camera Raw real memory: 495MB / 589290MB (0%)
Cache1:
NT- RAM:0.0MB, VRAM:0.0MB, Combined:0.0MB
Cache2:
m:1026.8MB, n:0.0MB
U-main: 86.0MB
System DPI setting: 168 DPI (high DPI mode)
Desktop composition enabled: Yes
Standard Preview Size: 3840 pixels
Displays: 1) 3840x2160
Input types: Multitouch: No, Integrated touch: No, Integrated pen: No, External touch: No, External pen: No, Keyboard: No
Graphics Processor Info:
DirectX: NVIDIA GeForce RTX 5090 (32.0.15.8097)
Init State: GPU for Export supported by default
User Preference: GPU for Image Processing Enabled
Enable HDR in Library: ON
GPU for Preview Generation: On (S5_31)
Application folder: E:\Program Files\Adobe\Adobe Lightroom Classic
Library Path: P:\Lightroom\Lightroom Catalog-v13-3.lrcat
Settings Folder: C:\Users\GUGU\AppData\Roaming\Adobe\Lightroom
Installed Plugins:
1) AdobeStock
2) DxO PureRAW 5
3) DxO PureRAW 5 Importer
4) Flickr
5) LRT Export 7
Config.lua flags:
Adapter #1: Vendor : 10de
Device : 2b85
Subsystem : 205710de
Revision : a1
Video Memory : 32100
Adapter #2: Vendor : 1414
Device : 8c
Subsystem : 0
Revision : 0
Video Memory : 0
Adapter #3: Vendor : 1414
Device : 8c
Subsystem : 0
Revision : 0
Video Memory : 0
AudioDeviceIOBlockSize: 1024
AudioDeviceName: System Default - PHL 27E1N8900 (NVIDIA High Definition Audio)
AudioDeviceNumberOfChannels: 2
AudioDeviceSampleRate: 48000
Build: Uninitialized
Direct2DEnabled: false
GL_ACCUM_ALPHA_BITS: 16
GL_ACCUM_BLUE_BITS: 16
GL_ACCUM_GREEN_BITS: 16
GL_ACCUM_RED_BITS: 16
GL_ALPHA_BITS: 0
GL_BLUE_BITS: 8
GL_DEPTH_BITS: 24
GL_GREEN_BITS: 8
GL_MAX_3D_TEXTURE_SIZE: 16384
GL_MAX_TEXTURE_SIZE: 32768
GL_MAX_TEXTURE_UNITS: 4
GL_MAX_VIEWPORT_DIMS: 32768,32768
GL_RED_BITS: 8
GL_RENDERER: NVIDIA GeForce RTX 5090/PCIe/SSE2
GL_SHADING_LANGUAGE_VERSION: 4.60 NVIDIA
GL_STENCIL_BITS: 8
GL_VENDOR: NVIDIA Corporation
GL_VERSION: 4.6.0 NVIDIA 580.97
GPUDeviceEnabled: false
OGLEnabled: true
GL_EXTENSIONS: GL_AMD_multi_draw_indirect GL_AMD_seamless_cubemap_per_texture GL_AMD_vertex_shader_viewport_index GL_AMD_vertex_shader_layer GL_ARB_arrays_of_arrays GL_ARB_base_instance GL_ARB_bindless_texture GL_ARB_blend_func_extended GL_ARB_buffer_storage GL_ARB_clear_buffer_object GL_ARB_clear_texture GL_ARB_clip_control GL_ARB_color_buffer_float GL_ARB_compatibility GL_ARB_compressed_texture_pixel_storage GL_ARB_conservative_depth GL_ARB_compute_shader GL_ARB_compute_variable_group_size GL_ARB_conditional_render_inverted GL_ARB_copy_buffer GL_ARB_copy_image GL_ARB_cull_distance GL_ARB_debug_output GL_ARB_depth_buffer_float GL_ARB_depth_clamp GL_ARB_depth_texture GL_ARB_derivative_control GL_ARB_direct_state_access GL_ARB_draw_buffers GL_ARB_draw_buffers_blend GL_ARB_draw_indirect GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced GL_ARB_enhanced_layouts GL_ARB_ES2_compatibility GL_ARB_ES3_compatibility GL_ARB_ES3_1_compatibility GL_ARB_ES3_2_compatibility GL_ARB_explicit_attrib_location GL_ARB_explicit_uniform_location GL_ARB_fragment_coord_conventions GL_ARB_fragment_layer_viewport GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_fragment_shader_interlock GL_ARB_framebuffer_no_attachments GL_ARB_framebuffer_object GL_ARB_framebuffer_sRGB GL_ARB_geometry_shader4 GL_ARB_get_program_binary GL_ARB_get_texture_sub_image GL_ARB_gl_spirv GL_ARB_gpu_shader5 GL_ARB_gpu_shader_fp64 GL_ARB_gpu_shader_int64 GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_indirect_parameters GL_ARB_instanced_arrays GL_ARB_internalformat_query GL_ARB_internalformat_query2 GL_ARB_invalidate_subdata GL_ARB_map_buffer_alignment GL_ARB_map_buffer_range GL_ARB_multi_bind GL_ARB_multi_draw_indirect GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_occlusion_query2 GL_ARB_parallel_shader_compile GL_ARB_pipeline_statistics_query GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_polygon_offset_clamp GL_ARB_post_depth_coverage GL_ARB_program_interface_query GL_ARB_provoking_vertex GL_ARB_query_buffer_object GL_ARB_robust_buffer_access_behavior GL_ARB_robustness GL_ARB_sample_locations GL_ARB_sample_shading GL_ARB_sampler_objects GL_ARB_seamless_cube_map GL_ARB_seamless_cubemap_per_texture GL_ARB_separate_shader_objects GL_ARB_shader_atomic_counter_ops GL_ARB_shader_atomic_counters GL_ARB_shader_ballot GL_ARB_shader_bit_encoding GL_ARB_shader_clock GL_ARB_shader_draw_parameters GL_ARB_shader_group_vote GL_ARB_shader_image_load_store GL_ARB_shader_image_size GL_ARB_shader_objects GL_ARB_shader_precision GL_ARB_shader_storage_buffer_object GL_ARB_shader_subroutine GL_ARB_shader_texture_image_samples GL_ARB_shader_texture_lod GL_ARB_shading_language_100 GL_ARB_shader_viewport_layer_array GL_ARB_shading_language_420pack GL_ARB_shading_language_include GL_ARB_shading_language_packing GL_ARB_shadow GL_ARB_sparse_buffer GL_ARB_sparse_texture GL_ARB_sparse_texture2 GL_ARB_sparse_texture_clamp GL_ARB_spirv_extensions GL_ARB_stencil_texturing GL_ARB_sync GL_ARB_tessellation_shader GL_ARB_texture_barrier GL_ARB_texture_border_clamp GL_ARB_texture_buffer_object GL_ARB_texture_buffer_object_rgb32 GL_ARB_texture_buffer_range GL_ARB_texture_compression GL_ARB_texture_compression_bptc GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map GL_ARB_texture_cube_map_array GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_filter_anisotropic GL_ARB_texture_filter_minmax 
GL_ARB_texture_float GL_ARB_texture_gather GL_ARB_texture_mirror_clamp_to_edge GL_ARB_texture_mirrored_repeat GL_ARB_texture_multisample GL_ARB_texture_non_power_of_two GL_ARB_texture_query_levels GL_ARB_texture_query_lod GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_texture_rgb10_a2ui GL_ARB_texture_stencil8 GL_ARB_texture_storage GL_ARB_texture_storage_multisample GL_ARB_texture_swizzle GL_ARB_texture_view GL_ARB_timer_query GL_ARB_transform_feedback2 GL_ARB_transform_feedback3 GL_ARB_transform_feedback_instanced GL_ARB_transform_feedback_overflow_query GL_ARB_transpose_matrix GL_ARB_uniform_buffer_object GL_ARB_vertex_array_bgra GL_ARB_vertex_array_object GL_ARB_vertex_attrib_64bit GL_ARB_vertex_attrib_binding GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_vertex_type_10f_11f_11f_rev GL_ARB_vertex_type_2_10_10_10_rev GL_ARB_viewport_array GL_ARB_window_pos GL_ATI_draw_buffers GL_ATI_texture_float GL_ATI_texture_mirror_once GL_S3_s3tc GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array GL_EXT_Cg_shader GL_EXT_depth_bounds_test GL_EXT_direct_state_access GL_EXT_draw_buffers2 GL_EXT_draw_instanced GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample GL_EXTX_framebuffer_mixed_formats GL_EXT_framebuffer_multisample_blit_scaled GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays GL_EXT_multiview_texture_multisample GL_EXT_multiview_timer_query GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_packed_pixels GL_EXT_pixel_buffer_object GL_EXT_point_parameters GL_EXT_polygon_offset_clamp GL_EXT_post_depth_coverage GL_EXT_provoking_vertex GL_EXT_raster_multisample GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_shader_objects GL_EXT_separate_specular_color GL_EXT_shader_image_load_formatted GL_EXT_shader_image_load_store GL_EXT_shader_integer_mix GL_EXT_shadow_funcs GL_EXT_sparse_texture2 GL_EXT_stencil_two_side GL_EXT_stencil_wrap GL_EXT_texture3D GL_EXT_texture_array GL_EXT_texture_buffer_object GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_latc GL_EXT_texture_compression_rgtc GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_filter_minmax GL_EXT_texture_integer GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_mirror_clamp GL_EXT_texture_object GL_EXT_texture_shadow_lod GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB GL_EXT_texture_sRGB_R8 GL_EXT_texture_sRGB_decode GL_EXT_texture_storage GL_EXT_texture_swizzle GL_EXT_timer_query GL_EXT_transform_feedback2 GL_EXT_vertex_array GL_EXT_vertex_array_bgra GL_EXT_vertex_attrib_64bit GL_EXT_window_rectangles GL_EXT_import_sync_object GL_IBM_rasterpos_clip GL_IBM_texture_mirrored_repeat GL_KHR_context_flush_control GL_KHR_debug GL_EXT_memory_object GL_EXT_memory_object_win32 GL_NV_memory_object_sparse GL_EXT_win32_keyed_mutex GL_KHR_parallel_shader_compile GL_KHR_no_error GL_KHR_robust_buffer_access_behavior GL_KHR_robustness GL_EXT_semaphore GL_EXT_semaphore_win32 GL_NV_timeline_semaphore GL_KHR_shader_subgroup GL_KTX_buffer_region GL_NV_alpha_to_coverage_dither_control GL_NV_bindless_multi_draw_indirect 
GL_NV_bindless_multi_draw_indirect_count GL_NV_bindless_texture GL_NV_blend_equation_advanced GL_NV_blend_equation_advanced_coherent GL_NVX_blend_equation_advanced_multi_draw_buffers GL_NV_blend_minmax_factor GL_NV_blend_square GL_NV_clip_space_w_scaling GL_NV_command_list GL_NV_compute_program5 GL_NV_compute_shader_derivatives GL_NV_conditional_render GL_NV_conservative_raster GL_NV_conservative_raster_dilate GL_NV_conservative_raster_pre_snap GL_NV_conservative_raster_pre_snap_triangles GL_NV_conservative_raster_underestimation GL_NV_copy_depth_to_color GL_NV_copy_image GL_NV_depth_buffer_float GL_NV_depth_clamp GL_NV_draw_texture GL_NV_draw_vulkan_image GL_NV_ES1_1_compatibility GL_NV_ES3_1_compatibility GL_NV_explicit_multisample GL_NV_feature_query GL_NV_fence GL_NV_fill_rectangle GL_NV_float_buffer GL_NV_fog_distance GL_NV_fragment_coverage_to_color GL_NV_fragment_program GL_NV_fragment_program_option GL_NV_fragment_program2 GL_NV_fragment_shader_barycentric GL_NV_fragment_shader_interlock GL_NV_framebuffer_mixed_samples GL_NV_framebuffer_multisample_coverage GL_NV_geometry_shader4 GL_NV_geometry_shader_passthrough GL_NV_gpu_program4 GL_NV_internalformat_sample_query GL_NV_gpu_program4_1 GL_NV_gpu_program5 GL_NV_gpu_program5_mem_extended GL_NV_gpu_program_fp64 GL_NV_gpu_program_multiview GL_NV_gpu_shader5 GL_NV_half_float GL_NV_light_max_exponent GL_NV_memory_attachment GL_NV_mesh_shader GL_NV_multisample_coverage GL_NV_multisample_filter_hint GL_NV_occlusion_query GL_NV_packed_depth_stencil GL_NV_parameter_buffer_object GL_NV_parameter_buffer_object2 GL_NV_path_rendering GL_NV_path_rendering_shared_edge GL_NV_pixel_data_range GL_NV_point_sprite GL_NV_primitive_restart GL_NV_primitive_shading_rate GL_NV_query_resource GL_NV_query_resource_tag GL_NV_register_combiners GL_NV_register_combiners2 GL_NV_representative_fragment_test GL_NV_sample_locations GL_NV_sample_mask_override_coverage GL_NV_scissor_exclusive GL_NV_shader_atomic_counters GL_NV_shader_atomic_float GL_NV_shader_atomic_float64 GL_NV_shader_atomic_fp16_vector GL_NV_shader_atomic_int64 GL_NV_shader_buffer_load GL_NV_shader_storage_buffer_object GL_NV_shader_subgroup_partitioned GL_NV_shader_texture_footprint GL_NV_shading_rate_image GL_NV_stereo_view_rendering GL_NV_texgen_reflection GL_NV_texture_barrier GL_NV_texture_compression_vtc GL_NV_texture_dirty_tile_map GL_NV_texture_env_combine4 GL_NV_texture_multisample GL_NV_texture_rectangle GL_NV_texture_rectangle_compressed GL_NV_texture_shader GL_NV_texture_shader2 GL_NV_texture_shader3 GL_NV_transform_feedback GL_NV_transform_feedback2 GL_NV_uniform_buffer_unified_memory GL_NV_uniform_buffer_std430_layout GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_attrib_integer_64bit GL_NV_vertex_buffer_unified_memory GL_NV_vertex_program GL_NV_vertex_program1_1 GL_NV_vertex_program2 GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_NV_viewport_array2 GL_NV_viewport_swizzle GL_NVX_conditional_render GL_NVX_linked_gpu_multicast GL_NV_gpu_multicast GL_NVX_gpu_multicast2 GL_NVX_progress_fence GL_NVX_gpu_memory_info GL_NVX_multigpu_info GL_NVX_nvenc_interop GL_NV_shader_thread_group GL_NV_shader_thread_shuffle GL_KHR_blend_equation_advanced GL_KHR_blend_equation_advanced_coherent GL_OVR_multiview GL_OVR_multiview2 GL_SGIS_generate_mipmap GL_SGIS_texture_lod GL_SGIX_depth_texture GL_SGIX_shadow GL_SUN_slice_accum GL_WIN_swap_hint WGL_EXT_swap_control
"Init State: GPU for Export supported by default
User Preference: GPU for Image Processing Enabled"
You've got Use GPU For Export unchecked, so LR won't use the GPU for exporting:
There is also this notice in the lower section:
GPUDeviceEnabled: false
It looks like the GPU is getting disabled more deeply somehow.
Thank you so much for pointing that out. I think it may be hiding a bug or something. If you have any way to diagnose it, I'm very happy to help.
Sorry, I had to change the language to get an English System Info dump, and I don't know why GPU export became disabled after the language change. But I'm sure I was previously running with full GPU acceleration enabled, and as you can see it doesn't only affect export, it affects processing too. A friend just told me that the GPU can be automatically disabled by the engine even though it shows as enabled; I think that MAY be the bug. I just checked again, making sure the GPU is fully enabled both manually and on Auto, and the problem still exists: the GPU engine is never utilized beyond the VRAM allocation, and even the CPU can't be fully utilized, which makes everything very slow.
I don't have a similar machine to compare against, so I'm not sure, but that line does worry me: it suggests the GPU really isn't participating at all, which is consistent with your observations. This machine should be at least as fast as the M4 Pro you compare it to. I would reinstall the GPU driver and disable any custom settings you might have applied to the GPU. Make sure you use the Studio driver from NVIDIA and not the gaming one. Judging from the system listing above you probably have the very latest version installed already, but it's good to check.
Normally, when the Camera Raw engine embedded in LR thinks the GPU is operating normally, it displays this:
But when Camera Raw detects a problem with the GPU, it will temporarily disable it and display this message:
What do you see in your Preferences > Performance?
I did a fresh reinstall of the latest Studio driver and then opened Lightroom Classic. On the first run I immediately checked System Info: it says full graphics acceleration is automatically enabled (I currently have the GPU set to Auto), but the info log still shows "GPUDeviceEnabled: false" in the same place. I tried exporting 700 raws to JPEG as a test. Lightroom (or the internal Camera Raw) allocated a huge amount of VRAM, pushing the system's VRAM usage to 26 GB (Lightroom is the only GPU-heavy app I run). The GPU usage is strange: only the Graphics_1 engine is used, and under very light load, with the same small spikes as in the image below, just on a different engine. I also tried a batch Super Resolution on 250 images, which is genuinely GPU-heavy work. During that run the system's VRAM usage was 13.4 GB, the GPU was again under very light load, and the CPU was under heavy load but apparently capped at the Camera Raw maximum thread count. The CPU load was definitely heavier than the GPU's. The GPU showed load only on the Compute_1 engine, not 3D, and very light at that, drawing only about 100 W, like this:
I stored the images (and the Lightroom binaries) on a very fast disk (Optane), so the disk should not be the bottleneck. After all these tests, Preferences still does not report that the GPU was disabled because of an error (that happened on my old computer, so I know it shows a prompt saying GPU acceleration is disabled until the next launch, etc.), and System Info still shows the same "GPUDeviceEnabled: false".
"but in the info log, it still says "GPUDeviceEnabled: false" in the same place."
Try this procedure:
1. Exit LR.
2. In Windows File Explorer, copy and paste this line into the address bar:
%APPDATA%\Adobe\CameraRaw\GPU
3. Rename the subfolder "Adobe Photoshop Lightroom Classic" to "Adobe Photoshop Lightroom Classic.old".
4. Restart LR. The Camera Raw engine will reevaluate the GPU's capabilities, hopefully recognizing the presence of the GPU now. This is sometimes required when you change hardware or driver configurations.
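For reference, here is a small Python sketch of steps 2 and 3 above (an illustration only; it assumes the default %APPDATA% location and that Lightroom Classic is closed):

# Sketch of steps 2-3 above: rename the Camera Raw GPU cache folder so that
# Camera Raw rebuilds it and re-probes the GPU on the next launch.
# Run only while Lightroom Classic is not running.
import os

gpu_dir = os.path.expandvars(r"%APPDATA%\Adobe\CameraRaw\GPU")
src = os.path.join(gpu_dir, "Adobe Photoshop Lightroom Classic")
dst = src + ".old"

if os.path.isdir(src) and not os.path.exists(dst):
    os.rename(src, dst)
    print("Renamed:", src, "->", dst)
else:
    print("Nothing to do (folder missing, or the .old copy already exists).")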
I have followed those steps. After Lightroom starts, a new "Adobe Photoshop Lightroom Classic" folder is created next to the renamed one, but nothing changes.
"after lightroom starts, a new "Adobe Photoshop Lightroom Classic" folder is created beside the renamed folder. However, nothing changes."
Inside that folder, there should be the file "Camera Raw GPU Config.txt" (normal). Do you also see the files TempDisableGPU2 or TempDisableGPU3?
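If it's quicker than hunting through Explorer, here is a small Python sketch (an illustration only, assuming the rebuilt folder sits at the path given earlier) that lists that folder and flags any TempDisableGPU markers:

# List the rebuilt GPU cache folder and flag any TempDisableGPU marker files.
import os

folder = os.path.expandvars(
    r"%APPDATA%\Adobe\CameraRaw\GPU\Adobe Photoshop Lightroom Classic")
for name in sorted(os.listdir(folder)):
    flag = "   <-- temporary GPU-disable marker" if name.startswith("TempDisableGPU") else ""
    print(name + flag)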
Another strange thing I noticed in System Info:
Adapter #2: Vendor : 1414
Device : 8c
Subsystem : 0
Revision : 0
Video Memory : 0
Adapter #3: Vendor : 1414
Device : 8c
Subsystem : 0
Revision : 0
Video Memory : 0
The video adapter device "Vendor: 1414 / Device: 8c" is Windows' "Basic Render Driver", which provides basic graphics rendering and is present on every machine. But you have two of them installed -- I don't know what that means or whether it's normal.
This is what's in the txt file; there is no GPU-disable entry.
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 7.0-c000 1.000000, 0000/00/00-00:00:00 ">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description rdf:about=""
xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
crs:gpu_preferred_system=""
crs:gpu_init_digest="315BFE7B80450746E2FDC4207A2FD8ED"
crs:gpu_compute_digest="23B9FD622B9933E0F0DF8115D285D719"
crs:gpu_compute_quick_self_test_passed="True"
crs:gpu_hdr_display_scale="1"/>
</rdf:RDF>
</x:xmpmeta>
I have two GPUs: an ASPEED iGPU from the BMC, and the RTX 5090. The ASPEED driver is also installed; in Device Manager it appears as "ASPEED Graphics Family(WDDM)" with ASPEED driver 9.0.10.116. It provides a virtual monitor for the BMC's VNC, but I have disabled it in Windows Settings. Device Manager shows no Microsoft Basic Display adapter, so I don't know why LR detects two of them. I tried disabling the ASPEED GPU in Device Manager and then opening LR; now only one 1414:8c device is listed, which means one of the two came from the ASPEED iGPU. The other one still exists, however, and GPUDeviceEnabled is still false.
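For reference, the crs: GPU flags in a dump like the one above can also be read programmatically. A minimal sketch (an illustration only, assuming the file is the default "Camera Raw GPU Config.txt" at the path discussed earlier and is valid XMP as pasted above):

# Print the crs:gpu_* attributes from "Camera Raw GPU Config.txt".
# The file is RDF/XMP, so the flags are attributes on the rdf:Description element.
import os
import xml.etree.ElementTree as ET

path = os.path.expandvars(
    r"%APPDATA%\Adobe\CameraRaw\GPU\Adobe Photoshop Lightroom Classic"
    r"\Camera Raw GPU Config.txt")
CRS = "{http://ns.adobe.com/camera-raw-settings/1.0/}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

root = ET.parse(path).getroot()
for desc in root.iter(RDF + "Description"):
    for attr, value in desc.attrib.items():
        if attr.startswith(CRS):
            print(attr[len(CRS):], "=", value)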
If it's possible, as an experiment try removing the Aspeed card from the computer. Perhaps its presence is confusing Windows and/or Camera Raw. Other than that, I'm out of ideas.
Sorry, the ASPEED chip is integrated on the motherboard, like a south-bridge chip in a normal PC, so it's physically impossible to remove. All I can do is disable it logically via Device Manager. But I don't know why the NVIDIA GPU would expose two devices, one of them being the basic display device, and I'm curious about that.
"GPUDeviceEnabled: false"
I think this was a red herring, and that line doesn't mean that LR isn't recognizing your GPU.
I noticed that my own test Windows machine reports the same thing, yet it fully utilizes its GPU. I searched this forum for all posts whose System Info contains that line and found over 1000:
I sampled about a dozen recent ones about issues that would have nothing to do with GPU performance.
So whatever that line means, it doesn't mean that LR / CR considers the GPU unavailable.
I also recalled a discussion here last spring, where I learned how Task Manager reports GPU utilization:
https://devblogs.microsoft.com/directx/gpus-in-the-task-manager/
It's not always intuitive. What does your Task Manager show for overall GPU utilization when doing a batch Denoise? On my small Windows test machine, it reports 90% overall utilization (and higher) even though the graphs show the GPU as mostly idle:
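One way to cross-check Task Manager is to query the driver directly. Here is a rough Python sketch (an illustration only, assuming nvidia-smi, which ships with the NVIDIA driver, is on PATH) that polls overall utilization, VRAM use and power draw once a second during a batch Denoise; utilization.gpu is the overall figure, independent of which engine graph Task Manager happens to draw:

# Poll overall GPU utilisation, VRAM use and power draw straight from the
# NVIDIA driver via nvidia-smi, bypassing Task Manager's per-engine graphs.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,power.draw",
         "--format=csv,noheader"]

for _ in range(60):  # sample for roughly one minute
    result = subprocess.run(QUERY, capture_output=True, text=True)
    print(result.stdout.strip())  # e.g. "37 %, 13412 MiB, 102.50 W"
    time.sleep(1)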
What type of drive is P:? How is it connected?