I searched for a similar thread but couldn't find one. I apologize if somebody else has already asked about this before.
My photos of the night sky look different between Camera Raw and Photoshop, and between the Lightroom Develop and Library modules:
1. Camera Raw and Lightroom Develop module: stars look larger and crisper.
2. Photoshop canvas and Lightroom Library: stars look smaller and kind of mushy.
I have attached screenshots showing the issue. They are only screenshots, so the quality is not great, but the difference is easy to spot. After saving the photo through Photoshop or Lightroom, it looks like case 2 in the Windows image viewer and in web browsers.
For reference, this issue is similar to this one but I couldn't find an answer in the linked thread:
Please note that when zoomed in at 100% I cannot see any difference. Therefore, I'm assuming this might have something to do with how the previews are rendered. The in-camera preview looks closer to what I'm seeing in Camera Raw and the LR Develop module, and I would like to keep it that way in the final image. I did not experience this several years ago, when all my night sky photos looked crisp.
Software versions I'm using:
- Photoshop 22.3.1
- Camera Raw 13.2
- Lightroom Classic 10.2
Things I tried:
- Updating my graphics card drivers.
- Using a different monitor.
- Recalibrating my monitor (I thought this might be a color management issue; however, I doubt it, as I'm not seeing any color shift).
- Turning off GPU acceleration both in Lightroom and Photoshop.
- Clearing cache both in Lightroom and Photoshop.
- Using a new catalog in Lightroom.
- Changing preview quality from Medium to High in Lightroom (Catalog settings > File Handling > Preview Quality).
Unfortunately, nothing helped. Has anyone experienced something similar?
Thank you for your help!
You're very close to the correct explanation, but not quite. It's not how it's "rendered", but how pixels are resampled on screen.
Think about it: these are all individual pixels that are very different from their neighbors. Technically, this is random noise. How do you represent that when you only have a fraction of the available pixels on screen? Do you just average the values? In the worst case you may not have any stars at all, just medium gray.
It's correct at 100% for the simple reason that all the pixels are intact. That's why you must view at 100%.
Put another way: this isn't a Photoshop problem. It's a general resampling/scaling problem. How do you do it, what pixel information do you throw out?
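To make the averaging point concrete, here is a toy sketch in plain Python. The 4x4 "image" and the helper are hypothetical, not how Photoshop or Lightroom actually build their previews; the point is only what box averaging does to a single bright, star-like pixel:

```python
# Hypothetical toy example: a 4x4 grayscale "image" that is black
# except for one bright, star-like pixel.
image = [
    [0, 0,   0, 0],
    [0, 255, 0, 0],
    [0, 0,   0, 0],
    [0, 0,   0, 0],
]

def box_downsample(img, factor):
    """Downsample by averaging each factor x factor block of pixels."""
    size = len(img) // factor
    out = []
    for by in range(size):
        row = []
        for bx in range(size):
            block = [
                img[by * factor + y][bx * factor + x]
                for y in range(factor)
                for x in range(factor)
            ]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

small = box_downsample(image, 2)
print(small)  # [[63, 0], [0, 0]] -- the 255 star is averaged down to 63
```

The star doesn't vanish here, but its brightness is divided by the block size; with a larger reduction factor, a one-pixel star gets averaged toward black, which is exactly the "mushy" look at fit-to-screen zoom.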
Thank you for the quick reply!
However, this is still kind of confusing if the resampling/scaling differs between modules.
Since the Windows image viewer and web browsers resample more or less the same way as the Photoshop canvas and the LR Library module, most people will interpret that as the true way the stars look. I guess, then, that I should judge the image as a whole based on these two (PS and LR Library). I understand that night sky images have to be examined at 100%, but the final image is still viewed as a whole. Thus, I guess it would perhaps be a good idea to unify the resampling method between Camera Raw and Photoshop, and between LR Develop and Library.
Thank you once again for your help with what may be a newbie question.
For judging your original file, there's only one way: 100%, one image pixel to one screen pixel. Everything else involves screen resampling and is inherently unreliable.
You need to consider the actual output. If this is for the web, it will be seen at 100%, so there won't be any screen resampling. Instead, you have to scale the image itself to the required size, and here you can experiment with different resampling algorithms and subsequent sharpening. It's all under your control; the finished, presented image won't have any of these issues.
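As a toy illustration of how much the chosen algorithm matters, here is a plain-Python sketch comparing two simple strategies on a hypothetical 4x4 image with one star pixel. Both helpers are deliberately naive stand-ins, not the bicubic or Lanczos filters a real editor would offer:

```python
# Hypothetical 4x4 grayscale image: black except one star-like pixel.
image = [
    [0, 0,   0, 0],
    [0, 255, 0, 0],
    [0, 0,   0, 0],
    [0, 0,   0, 0],
]

def nearest_downsample(img, factor):
    """Keep only every factor-th pixel; no averaging at all."""
    return [row[::factor] for row in img[::factor]]

def box_downsample(img, factor):
    """Average each factor x factor block (a simple 'box' filter)."""
    size = len(img) // factor
    return [
        [
            sum(img[by * factor + y][bx * factor + x]
                for y in range(factor) for x in range(factor)) // (factor * factor)
            for bx in range(size)
        ]
        for by in range(size)
    ]

print(nearest_downsample(image, 2))  # [[0, 0], [0, 0]] -- star missed entirely
print(box_downsample(image, 2))      # [[63, 0], [0, 0]] -- star kept but dimmed
```

Neither result matches the original, which is the poster's whole point: every scaling method throws away different pixel information, so testing a few algorithms (and sharpening afterward) on the actual export is the only reliable approach.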
Thank you once again! I will definitely do some experimenting. 🙂