How do I do mathematically correct frequency separation of 16-bit RGB?
I copy the image as a low-frequency (LF) layer and blur it, then convert the document to 32-bit (if there are any alpha channels I apply gamma 0.4545 to them first). I set the LF layer's blending mode to Subtract and get a nearly black high-frequency (HF) image with mostly negative pixels. With that HF layer in Add mode on top of the same blurred original, the two form a perfectly separated frequency pair.
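For anyone who wants to check the arithmetic, here is a minimal NumPy sketch of the 32-bit float version of the steps above. The array names are illustrative, and a flat mean image stands in for the Gaussian-blurred LF layer, since any blur works for the math check:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))            # linear-light float image, 0..1

# Stand-in for the blurred LF layer
lf = np.full_like(img, img.mean())

# Subtract blending: the HF layer is mostly negative and looks near black
hf = img - lf

# Add blending of HF on top of the blurred original restores the image exactly
recon = lf + hf
assert np.allclose(recon, img)
```

In 32-bit float mode this round-trips losslessly because negative HF values are representable; the whole problem below is about keeping that property in 16-bit integer mode.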
But how can I bring it back to 16-bit integer mode? I need to lift that mostly negative HF layer to sit around 0.5, right? So I add a flat 0.5 color layer in Add mode, but when I convert the document back to 16-bit mode the gamma changes and what was 0.5 becomes brighter, so I apply gamma 0.4545 to fix it. Now I believe this HF layer should reconstruct the image when blended in Linear Light on top of the blurred original, right? But it looks too contrasty. I tried Hard Light and that isn't exactly right either. I am totally puzzled now.
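The over-contrast result actually follows from the Linear Light formula itself. On normalized values Linear Light computes result = base + 2*blend - 1, so an HF layer stored as (orig - blur) + 0.5 gets doubled on recombination; it needs to be stored at half amplitude, (orig - blur)/2 + 0.5. A quick NumPy check (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((8, 8, 3)) * 0.5 + 0.25   # keep values away from clipping
lf = np.full_like(img, img.mean())

def linear_light(base, blend):
    # Photoshop Linear Light on normalized values
    return base + 2.0 * blend - 1.0

# HF merely lifted by 0.5: Linear Light doubles it -> too contrasty
hf_wrong = (img - lf) + 0.5
too_contrasty = linear_light(lf, hf_wrong)   # equals img + (img - lf)

# HF at half amplitude: Linear Light reconstructs exactly
hf_right = (img - lf) / 2.0 + 0.5
recon = linear_light(lf, hf_right)
assert np.allclose(recon, img)
assert not np.allclose(too_contrasty, img)
```

This is exactly why the Apply Image recipe in the PS below uses Scale 2: the division by 2 in the separation step cancels the multiplication by 2 inside Linear Light.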
Could anyone please explain where my mistake is, or perhaps how this could all be done in 16-bit mode without converting to 32-bit at all? I would appreciate any suggestion. Ideally it should work as an action or script. When ChatGPT writes one for me it tends to use Apply Image, and it doesn't work for some reason I can't pin down. I can't even get a mathematically accurate result doing it manually.
PS. I figured out how to do it without converting to 32-bit: the same Subtract idea, but with an offset (lift) of 128 and a scale of 2 in the Apply Image dialog. It works just right in 16-bit integer mode. Note that the Offset field in Apply Image is expressed on the 0-255 scale regardless of document bit depth, so 128 is mid-gray even in a 16-bit document; ChatGPT kept trying to lift by 32768, which is presumably why its scripts failed.
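Here is the same 16-bit integer recipe as a NumPy sketch (done on the full 0-65535 range for simplicity; Photoshop internally uses 0-32768 for 16-bit, but the algebra is identical). Subtract with Scale 2 and a mid-gray offset, then Linear Light to recombine:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 65536, (8, 8, 3)).astype(np.int64)
lf = np.full_like(img, int(img.mean()))   # stand-in for the blurred LF layer

MID = 32768  # mid-gray; corresponds to "128" in Apply Image's 0-255 Offset field

# Apply Image: Subtract, Scale 2, Offset 128  ->  HF = (img - lf)/2 + mid
hf = (img - lf) // 2 + MID

# Linear Light recombination (full scale taken as 2*MID): base + 2*blend - 2*MID
recon = lf + 2 * hf - 2 * MID
assert np.abs(recon - img).max() <= 1   # exact up to integer rounding
```

The division by 2 in the separation step loses one bit of HF precision, which is why the reconstruction is only exact to within one level; that is the unavoidable cost of staying in 16-bit integer mode instead of 32-bit float.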
