Facebook added female-gendered terms to its hate speech policy in July as part of its ongoing effort to keep women safe on its platform.
The social network said in a Newsroom post that it consulted with experts including anthropological and cognitive linguists, women’s rights organizations and safety organizations before revising its rules on targeted cursing, or profanity used to attack a private individual.
The new rules apply across Facebook and Instagram, but the company recognized that cultural norms around topics such as sexuality vary in different parts of the world.
Vice president of global policy management Monika Bickert wrote in the Newsroom post, “The general themes when it comes to women’s safety tend to be the same around the world, but we find that when we look at specific countries or regions, the actual types of behavior are very localized.”
Facebook shared examples of harassers trying to humiliate women by sharing images that would be shameful in their communities. In the U.S., this might mean nude photos or videos of those women engaging in sexual activities, while in other countries, it could be something like a photo of a woman’s ankle, or of a woman walking with a man who is not a family member.
The social network said in the Newsroom post, “To account for this wide spectrum of harassment types, our rules need to be thoughtful and similarly comprehensive … We take a comprehensive approach to making our platform a safer place for women, including writing clear policies and developing cutting-edge technology to help prevent abuse from happening in the first place.”
Facebook noted that the Facebook Community Standards and Instagram Community Guidelines are developed by the company’s policy teams and include rules against behaviors that impact women, such as the sharing of non-consensual intimate imagery (NCII) and harassment, including sending multiple unwanted messages to someone.
The social network noted that women have tools at their disposal, including the ability to ignore unwanted messages and to block other people without them knowing. And, as always, behavior that potentially violates its policies can be reported.
Facebook provided another example of tailoring its policies to specific countries and cultures: Women and safety advocates in India, Pakistan and Egypt reported a reluctance among women to share profile pictures showing their faces due to fears of impersonation, so Facebook developed a profile picture guard to give women in those countries, and in other nations where similar concerns exist, more control over who can download or share their photos.
The social network wrote, “Blocking, reporting and other user-facing tools are only part of the solution, and their success relies on people knowing to seek them out and understanding how to use them—plus feeling comfortable enough to use them. A victim who’s already feeling anxious or threatened may not want to trigger a harasser for fear of retribution. Sometimes, the behavior isn’t visible to the woman it affects: An ex might share non-consensual intimate images in a private group, for example. Or a bully might set up a fake account in a woman’s name and operate it without her knowledge, adding members of her community as friends. That’s why Facebook is not only investing in digital literacy programs and improved safety resources, but we’re also investing in technology that can find violating content proactively—and, in some cases, prevent it from being shared in the first place.”
Facebook also provided more details on its work to prevent non-consensual sharing of intimate images, referencing the pilot program it started in April 2017. The program enables people who fear that their images are in danger of being shared to reach out to victim advocate organizations, and it creates digital fingerprints of the images before destroying them, so that photo-matching technology can be used to block those images from being posted to Facebook or Instagram. The social network noted that this is all done without anyone at the company actually viewing the images.
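The fingerprint-and-block flow described above can be illustrated with a minimal sketch. All names here are hypothetical, and the digest function is a simplification: Facebook's actual system reportedly uses perceptual photo-matching that survives resizing and re-encoding, whereas a plain SHA-256 digest only matches byte-identical files.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a real photo-matching hash. A cryptographic digest
    # only catches exact copies; production systems use perceptual
    # hashes that tolerate re-encoding, cropping and resizing.
    return hashlib.sha256(image_bytes).hexdigest()


class UploadFilter:
    """Keeps only fingerprints of reported images, never the images."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def register_reported_image(self, image_bytes: bytes) -> None:
        # Only the digest is retained; the image itself can then be
        # destroyed, so no one ever has to view or store it.
        self._blocked.add(fingerprint(image_bytes))

    def allow_upload(self, image_bytes: bytes) -> bool:
        # Reject any upload whose fingerprint matches a reported image.
        return fingerprint(image_bytes) not in self._blocked
```

Once a reported image is registered, any later upload of the same image is blocked at submission time, before other users can see it, which mirrors the "prevent it from being shared in the first place" goal described in the post.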
The company also developed machine learning and artificial intelligence techniques to proactively detect nude or near-nude images shared without permission, before anyone reports them.
Facebook wrote, “Comprehensive approaches to complicated problems, like the one we’ve developed for NCII, require a lot of input and a lot of expertise, and we know we can’t do this alone. That’s why we host roundtable discussions around the world with women’s safety experts, women who have experienced some of these issues and women’s advocates in order to ensure that we’re including their feedback, perspectives and expertise in our work.”
The company hosted the 2019 Global Safety and Well-Being Summit in New York in May, where more than 100 organizations from 40 countries came together to discuss women’s safety and other issues.
Nighat Dad, executive director of the Digital Rights Foundation in Pakistan, who has worked with Facebook on harassment issues, including NCII, said in the Newsroom post, “Online gender-based violence—it’s not a technology problem, it’s a societal problem. The people who are working on the ground, they need to work together on this and also keep telling social media platforms how they can improve their products, how they can improve their reporting mechanisms. It’s not just one person who can address the issue, or one organization or one institution: We all need to work together.”
Cindy Southworth, executive vice president of the National Network to End Domestic Violence and member of the Facebook Safety Advisory Board, added, “It’s a civil liberties and civil rights issue to be able to access loved ones any time, access information, access job searches. The world is out there, and you need access to technology to access that world.”