Hi all, I have been thinking about this for a while and wanted to get others' opinions. As I'm sure most of you have noticed, when you upload a photo, Adobe already knows what the image is of and presents you with a list of suggested keywords. This is likely done with the help of machine learning that has learned what something is from other examples already uploaded. What I am wondering is whether the flagging of rejected images is done by reading the editing information in the image. I read an article last year on PetaPixel https://petapixel.com/2017/06/19/website-can-reveal-jpeg-photo-edited-lightroom/ about a website that could tell you how a photo was processed if it was done in Lightroom. Seeing as Adobe makes Lightroom, I wonder whether they could have a similar ability to read the images, so that anything above an X amount of certain edit options gets flagged for further review by a human, or something similar. I could see this potentially making things easier for those vetting images, as they could empirically reject images for having too much processing.
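For what it's worth, the PetaPixel tool works because Lightroom writes its develop settings into the image's embedded XMP metadata under the Camera Raw Settings (`crs:`) namespace. As a rough sketch of how the kind of flagging I'm describing *could* work, here is a minimal Python example; the sample bytes, the watched sliders, and the threshold are all invented for illustration, not anything Adobe has confirmed:

```python
import re

def read_crs_settings(data: bytes) -> dict:
    """Pull crs: attributes out of the XMP packet embedded in a JPEG's bytes."""
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return {}
    xmp = data[start:end].decode("utf-8", errors="replace")
    # crs attributes look like crs:Exposure2012="+1.50"
    return dict(re.findall(r'crs:(\w+)="([^"]*)"', xmp))

# Hypothetical flagging rule: send an image to a human reviewer if any
# watched slider moved beyond an arbitrary limit (these limits are made up).
def needs_review(settings: dict, limit: float = 1.0) -> bool:
    watched = ("Exposure2012", "Clarity2012", "Saturation")
    for key in watched:
        try:
            if abs(float(settings.get(key, "0"))) > limit:
                return True
        except ValueError:
            pass
    return False

# Fake XMP fragment standing in for a real JPEG's metadata.
sample = (b'...jpeg bytes...<x:xmpmeta xmlns:x="adobe:ns:meta/">'
          b'<rdf:Description crs:Exposure2012="+1.50" crs:Clarity2012="+40"/>'
          b'</x:xmpmeta>...')
settings = read_crs_settings(sample)
print(settings)                # {'Exposure2012': '+1.50', 'Clarity2012': '+40'}
print(needs_review(settings))  # True
```

Note the obvious hole in this approach: anyone can strip XMP metadata on export, so it could only ever be a hint, not a reliable filter.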
This is just a theory I was thinking about, given that I know the technology exists, but I could be totally wrong too. I was just wondering what others thought. An official statement on this would be cool, but it's not needed. If it is true, it would be nice if these limits were published, to save everyone's time by not having to guess what is acceptable processing and to give us guidelines to stick within.
I am informed that the review is done entirely by humans. I would, however, expect the use of a photo viewer for viewing zoomed images; that is how most of the issues are identified. The reviewers are experienced, so they are able to quickly identify flaws. It would appear that once an issue is identified, the image is rejected for that reason, even though there might be other issues. I do not believe there is a vetting machine. I draw this conclusion because I submitted unedited photos that were rejected, and the note that came with them indicated that they were believed to have been post-processed. Another thing: for all of my photos rejected for noise, when I zoom in to between 100% and 200% I can see the noise. Based on what I gather, setting aside post-processing, all photos have some amount of noise; the ISO setting, along with other factors, determines how much noise there is. If you go through the relevant tutorials, you will understand how these issues are easily identified.
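The noise that reviewers spot at 100-200% zoom can also be estimated numerically, which shows why it would at least be *possible* to screen for it automatically. On a flat region (like sky), differences between neighbouring pixels are pure noise with twice the single-pixel variance, so sigma is roughly the standard deviation of those differences divided by the square root of 2. A minimal sketch on a simulated flat grey patch (the patch and noise level are synthetic, not from a real photo):

```python
import math
import random

random.seed(42)

# Simulate a flat mid-grey patch with Gaussian sensor noise (sigma = 5),
# standing in for a smooth region of a real photo such as sky.
TRUE_SIGMA = 5.0
patch = [[128 + random.gauss(0, TRUE_SIGMA) for _ in range(64)] for _ in range(64)]

def estimate_noise(img: list) -> float:
    """Estimate noise sigma from horizontal neighbour differences.

    On a flat region the differences are pure noise with variance
    2 * sigma^2, so sigma = std(diffs) / sqrt(2).
    """
    diffs = [row[x + 1] - row[x] for row in img for x in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return math.sqrt(var / 2)

sigma = estimate_noise(patch)
print(round(sigma, 1))  # close to the simulated sigma of 5.0
```

Of course, whether a given noise level is *objectionable* still comes down to the reviewer's judgement, which fits with the review being human.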
Thanks for your contribution to this discussion. As I said in my post, I was just suggesting a theory that I thought could be plausible based on what I have seen so far and what technology is available. One thing that makes my theory less likely is the turnaround time on vetting images: it seems to take at least two days for an image to be vetted here, whereas on other platforms I have had images approved within two hours of being submitted. I still look forward to seeing whether anyone else thinks the vetting process is assisted by some type of automation.
Out of interest, for those experienced members: do you still get the odd image rejected?
There have been numerous threads from people thinking the pictures were being reviewed by AI. A staff member here, Matt Hayward, replied that the pictures are reviewed by humans.
It may seem questionable at times when the rejection reason is hard to see, but if you look more carefully at the image it is usually possible to spot what they may be referring to. That is often why it helps to post an example here, so other 'eyes' can see the image and give advice.
I still get a few rejections even though I know what to look out for, but sometimes the photo itself is at fault, so not much can be done in post-processing: too much noise showing, not sharp enough, a lens that is too soft, and so on. You have to be aware of this.