I was recently told that an HDRI I made had no dynamic range built into it, which is strange because I combined 6 photos with a pretty wide range of exposures. I believe they may be right, because the light interaction in the CG renderings is very weak.
I combined the photos using Photoshop's Automate>Merge to HDR Pro feature. Does one need to manually convert the HDR to 32-bit color after creating these?
What is a good approach to analyzing the HDR once it has been created? The histogram isn't working in 32-bit mode.
Are there any other methods of analyzing an HDRI to determine that the range is wide? I applied an Exposure adjustment layer, and it looks like it's working correctly, but I can't be sure that's a reliable check.
Hi, the following article may help. Regards
http://blog.gregzaal.com/2016/02/23/what-makes-good-hdri/
Amazing. Thank You
Ali, thank you for the reference. It looks like my HDRI wasn't as bad as I had thought, although I will need far more exposure range for a daylight scene.
Still, I am curious how one would go about analyzing an HDRI in Photoshop to determine whether the exposure range is acceptable. If I could look at a histogram and see that the values were wide enough to represent candlelight or sunlight, I could be more certain that the results in a CG lighting application would be accurate. But the histogram doesn't appear to work with 32-bit images. Maybe I should use Nuke or another compositing app?
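For a quick numeric check outside Photoshop, assuming the merged HDR can be loaded as a linear 32-bit float array (e.g. from an EXR via a library like OpenImageIO or imageio), the usable range in photographic stops is just log2 of the brightest over the darkest luminance. A minimal sketch; the synthetic array below is only a stand-in for a real loaded image:

```python
import numpy as np

def dynamic_range_stops(rgb, eps=1e-6):
    """Return the dynamic range of a linear float HDR image in photographic stops."""
    # Rec. 709 luminance weights for linear RGB
    lum = rgb[..., 0] * 0.2126 + rgb[..., 1] * 0.7152 + rgb[..., 2] * 0.0722
    lum = lum[lum > eps]  # ignore pure-black pixels
    return float(np.log2(lum.max() / lum.min()))

# Synthetic stand-in for a loaded EXR: one deep-shadow pixel, one sun-bright pixel
img = np.array([[[0.001, 0.001, 0.001],
                 [500.0, 500.0, 500.0]]], dtype=np.float32)
print(round(dynamic_range_stops(img), 1))  # about 18.9 stops
```

If a daylight HDRI comes back with only a handful of stops, the brightest sources were probably clipped during the merge.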
Accurate? What an odd term to use, no offense! Even the best HDR images are not a representation of actual light. That's a widespread misconception, and the linked article gets that part wrong, too. If we were to talk about such stuff, we'd have to base it on actual physical units like luminance (candela per square meter) and spectral ranges. An HDR image is always only a perceptive capture of a given situation, limited by the camera's construction or even more generic things like lens transparency, dispersion, aberration, lights not emitting full spectrums (LEDs in particular) and what have you. And then of course there's a difference between strictly photographic (pseudo-)HDR files and the ones used for 3D work.
Which is getting near a point, I suppose. What's a good HDR for 3D work? Most of the time one that favors the brights and spreads most of its contrast across that range, i.e. you want bright lights not to eat up a soft blue sky. That's on some level where all this gobbledygook about EVs and just stepping through every exposure level becomes irrelevant. It's still an artistic decision, as you want your HDRs to be "cinematic", not necessarily realistic. In the earliest days of HDR we only had 3 levels anyway, because the apps couldn't process more. So rather than just amassing large numbers of shots or covering large ranges, it's still more about finding the right balance.
Does this help you? Probably not, but a good rule of thumb is to create about 3 low-level exposures and double that number for the upper range, with the grey point ideally centered around the camera's native gamma to allow stretching the ranges left and right more or less linearly, regardless of their actual ranges. As pointed out earlier, even the strongest exposure should still have colors discernible with the naked eye, but it should also contain the smallest number of shadows, with only the darkest areas being visible.
Beyond that, there's no point in using overexposure and then talking yourself into believing you can "see" UV light.
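For what it's worth, the bracketing arithmetic behind advice like this is simple: each EV halves or doubles the exposure, so the number of shots needed to span a given range at a fixed spacing is a ceiling division. A hypothetical helper (`shots_needed` is my own name, not a Photoshop feature), just to illustrate the scale of coverage being discussed:

```python
import math

def shots_needed(total_stops, ev_spacing):
    """Brackets required to span total_stops at ev_spacing EV between shots."""
    return math.ceil(total_stops / ev_spacing) + 1

print(shots_needed(20, 2))  # 11 exposures to cover 20 stops at 2-EV steps
```

As the post above argues, though, hitting a shot count matters less than balancing where the contrast actually lands.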
Mylenium
So there isn't a way to analyze this in Photoshop?