Working on a project with 8,000+ images taken over 5 days of the same 140 people, so it is a good testing ground for face detection.
One thing that is noticeable is that face detection is performed on a per-image basis and does not seem to leverage the fact that several sequential images taken within, say, 5-10s of each other are related - there is a "temporal" relationship between the images. The observations in LRC 11.4 are: 1) the faces detected in each image can be quite different, and 2) even when you label one of the images with the correct face, no process intelligently updates the sequential images.
The consequence is that you end up having to label each successive image, which makes the process inefficient. Once you've fixed one image, you would expect the next 5 images, which have a high correlation with it, to source the same data.
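To illustrate the idea (this is a hypothetical sketch, not anything Lightroom exposes), a confirmed label could simply boost that person's suggestion confidence in images captured within a short time window, rather than overwriting anything:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Photo:
    timestamp: float  # capture time in seconds
    suggestions: dict = field(default_factory=dict)  # candidate name -> confidence
    confirmed: Optional[str] = None  # user-confirmed label, if any

def propagate_labels(photos, window_s=10.0, boost=0.3):
    """Hypothetical: after a confirmation, raise that name's confidence
    in unconfirmed photos taken within `window_s` seconds."""
    photos = sorted(photos, key=lambda p: p.timestamp)
    for p in photos:
        if p.confirmed is None:
            continue
        for q in photos:
            if q is p or q.confirmed is not None:
                continue
            if abs(q.timestamp - p.timestamp) <= window_s:
                base = q.suggestions.get(p.confirmed, 0.0)
                q.suggestions[p.confirmed] = min(1.0, base + boost)
    return photos
```

The window size and boost amount are made-up parameters; the point is only that a confirmation in one frame should inform its temporal neighbours instead of being thrown away.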
If you have an image with several people in which you have already confirmed one face, that same person is still suggested for the other faces - this can occur multiple times in the same image. There is no reduction in the probability of that person once their face has been confirmed. You would expect the weighting for a confirmed person to drop for the remaining unconfirmed faces in that image.
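Since a person normally appears at most once per photo, a confirmation could down-weight that name in the suggestion lists of the image's other face regions. A minimal sketch, with a made-up penalty factor:

```python
# Hypothetical sketch: once a person is confirmed for one face region,
# down-weight that person in the suggestion lists passed in for the
# image's OTHER (still unconfirmed) faces.

def suppress_confirmed(faces, confirmed_name, penalty=0.9):
    """faces: list of dicts mapping candidate name -> confidence,
    one dict per remaining unconfirmed face in the image."""
    for suggestions in faces:
        if confirmed_name in suggestions:
            suggestions[confirmed_name] *= (1.0 - penalty)
    return faces
```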
Because I have photographed some of the people before, they are in my face detection list. However, in previous cases they may not have been labelled correctly - maybe I didn't know their full surname and just used an initial. There are times when I need to aggregate faces. When you go to the Keyword panel to check whether the face has already been detected and navigate to the individual's set of images, there is no back button - you have to navigate back to the specific folder and find the image you were working on again, because everything is reset to the beginning of the folder.
There's no Secondary Window support for showing the list of faces on a secondary monitor. It would speed things up if you could view the Keywords and their respective galleries in a secondary grid without having to break your workflow in the current Library folder.
There are some subtle timing issues when closing the text box that asks you to retype the name. Occasionally I would end up in Lights Out mode because the keyboard events somehow didn't make it into the text field and were interpreted as keyboard shortcuts instead.
I actually know the subset of names in this cohort, so being able to restrict the search space would be useful. Faces that are definitely not going to appear in this project could be ignored.
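Restricting the search space amounts to filtering suggestions against an allow-list of the project's participants - a one-liner in spirit (again a hypothetical sketch, not an existing Lightroom capability):

```python
# Hypothetical sketch: keep only suggestions whose names are in the
# project's known cohort, so out-of-project people are never proposed.

def filter_to_cohort(suggestions, cohort):
    """suggestions: candidate name -> confidence; cohort: set of allowed names."""
    return {name: conf for name, conf in suggestions.items() if name in cohort}
```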
Blurry or background faces may or may not be wanted, so it would be useful to be able to ignore people in the background. Now that Subject selection algorithms are used in both LRC and PS, it could make sense to use the detected in-focus subjects as the primary faces.
Different profiles / angles seem to be a challenge for the face detection algorithm. Whilst you might label front-on images, those taken at an angle are not well recognised. I understand that training data is required; it just seems from the results that side profiles and front-on views don't share an underlying relationship, so there might be an opportunity to apply 3D modelling ideas from photogrammetry.