On Thursday, Meta’s oversight board urged the tech giant to update its rules on pornographic deepfakes to reflect advances in artificial intelligence, moving beyond the outdated “Photoshop” terminology.
The independent oversight board, which serves as a supreme arbiter for Meta’s content moderation policies, made this recommendation after reviewing two cases involving deepfake images of prominent women in India and the United States.
In one case, a deepfake posted on Instagram remained online despite a complaint; in the other, a similar image was removed from the platform. Both decisions were appealed to the board.
The board determined that the deepfakes in question breached Meta’s policy against “derogatory sexualized Photoshop,” a term the board found too vague and outdated. The policy currently refers to manipulated images sexualized in ways likely to be unwelcome to those depicted.
Since Adobe Photoshop's release in 1990, its name has become synonymous with image editing. But with the advent of generative AI capable of creating images or videos from simple text prompts, the term "Photoshop" is no longer adequate, the board concluded.
The board recommended that Meta explicitly prohibit AI-generated or manipulated non-consensual sexual content.
While Meta has committed to following the board’s decisions on individual content moderation cases, it treats policy recommendations as optional, adopting them at its discretion.