I think the most likely case is this: the company is labeling images from Meta AI usage by people who opted in to share their data with Meta.
It's certainly possible that it's something much more surprising or sinister, but there is a fairly plausible combination of settings that I could see a company arguing lets them use the data for training.
I'm also quite certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.