Are there any plans to use the semantic maps included in iPhone ProRaw .dng files for better edits? For example, using the AI-detected person mask or the depth map to make a local adjustment. Adobe worked with Apple on the DNG 1.6 spec to support this; the data is in the photo, but as far as I can tell it isn't used.
ProRaws still look better in the Apple Photos app than in Lightroom thanks to these maps; I think Apple applies local adjustments based on what's in the DNG, for example when a person is detected.
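For anyone who wants to confirm the masks are really in the file: here's a minimal sketch (mine, not anything from Adobe or Apple) that lists the DNG 1.6 SemanticName entries in a ProRaw file. It assumes a recent exiftool on your PATH (the DNG 1.6 tags were added in the 12.x series); the file name is just a placeholder. The masks themselves live in separate sub-IFDs of the DNG, which is why the duplicate-tag flags are needed.

```python
# Sketch: list the DNG 1.6 semantic mask names in an iPhone ProRaw file
# by asking exiftool for every SemanticName tag in the file.
import json
import subprocess

def list_semantic_masks(dng_path: str) -> list[str]:
    """Return the SemanticName of each semantic mask sub-IFD in the DNG."""
    out = subprocess.run(
        # -json: machine-readable output; -a: keep duplicate tags
        # (one SemanticName per mask); -G1: prefix tags with their
        # IFD group, e.g. "SubIFD1:SemanticName"
        ["exiftool", "-json", "-a", "-G1", "-SemanticName", dng_path],
        capture_output=True, text=True, check=True,
    ).stdout
    record = json.loads(out)[0]  # exiftool emits one JSON object per file
    return [v for k, v in record.items() if k.endswith("SemanticName")]

if __name__ == "__main__":
    # Placeholder path; prints the mask names Apple wrote, if any.
    print(list_semantic_masks("IMG_0001.DNG"))
```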
Anyone who knows the answer would be unable to tell you - it would breach their NDAs.
Sorry, I didn't think about that. I thought I might have missed a feature in some corner and was ready to be schooled.
I'll try again post-MAX 🙂
Update: Adobe did release masking features in Lightroom, but it still isn't reading the embedded ProRaw depth maps or Apple-generated semantic maps.