Using OpenGL and/or GLFW as a processing algorithm in the main window is not that complicated, as long as you keep and manage your own rendering context and do the proper context switching. You then have to convert the input pixel buffers (in RAM) that you get from the render call into OpenGL textures, upload them to your GL context, and process them there. The result then needs to be converted back into a RAM buffer (e.g. by reading back the backbuffer or a texture).
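On the GL side, the upload is typically a `glTexImage2D`/`glTexSubImage2D` call and the readback a `glReadPixels` (or a PBO-based variant). One practical wrinkle in the readback step: with default pixel-pack settings, `glReadPixels` returns rows bottom-up, while most host pixel buffers are top-down, so you usually need a vertical flip. A minimal sketch of that flip (helper name and signature are my own, not from any SDK; `stride` is the row size in bytes):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Flip an image buffer vertically, e.g. to convert the bottom-up rows
// returned by glReadPixels into the top-down layout most hosts expect.
// Hypothetical helper, not part of any SDK.
std::vector<uint8_t> flip_rows(const uint8_t* src, int height, int stride) {
    std::vector<uint8_t> dst(static_cast<size_t>(height) * stride);
    for (int y = 0; y < height; ++y) {
        // Destination row y comes from source row (height - 1 - y).
        std::memcpy(dst.data() + static_cast<size_t>(y) * stride,
                    src + static_cast<size_t>(height - 1 - y) * stride,
                    static_cast<size_t>(stride));
    }
    return dst;
}
```

Alternatively, you can avoid the CPU-side flip entirely by rendering with flipped texture coordinates before reading back.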
All of this, however, only applies to pixel processing in the main window, not to a panel UI or the effect parameter panel. I don't think there is a simple way to do that.
I have successfully rendered ImGui inside the Panelator example.
The key was to create an OpenGL view as a child of the root NSView.
Then initialise ImGui properly.
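For reference, the setup described above roughly follows the structure of Dear ImGui's macOS + OpenGL example backends. A pseudocode outline (assuming the stock `imgui_impl_osx` and `imgui_impl_opengl2` backends; the exact view setup depends on the host SDK):

```
// 1. Create an NSOpenGLView (with its own NSOpenGLContext) and add it
//    as a subview of the panel's root NSView.
// 2. ImGui::CreateContext();
// 3. ImGui_ImplOSX_Init(glView);      // hooks up ImGui's macOS IO plumbing
// 4. ImGui_ImplOpenGL2_Init();
//
// Per frame, with the view's GL context made current:
//    ImGui_ImplOpenGL2_NewFrame();
//    ImGui_ImplOSX_NewFrame(glView);
//    ImGui::NewFrame();
//    ... build the panel UI ...
//    ImGui::Render();
//    ImGui_ImplOpenGL2_RenderDrawData(ImGui::GetDrawData());
```

This is a sketch of the usual Dear ImGui backend lifecycle, not the poster's actual code.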
Ah, cool, good to know!
Now, if I can find a way to pass down the key and mouse events, we are good to go!
I'll host this sample on github soon for this community.
Keyboard and mouse input are now working when the panel is docked.
When it is undocked, the input is very flaky.