Zuckerberg said in a post Wednesday:
I want to add my voice in support of Muslims in our community and around the world.
After the Paris attacks and hate this week, I can only imagine the fear Muslims feel that they will be persecuted for the actions of others.
As a Jew, my parents taught me that we must stand up against attacks on all communities. Even if an attack isn’t against you today, in time, attacks on freedom for anyone will hurt everyone.
If you’re a Muslim in this community, as the leader of Facebook, I want you to know that you are always welcome here and that we will fight to protect your rights and create a peaceful and safe environment for you.
Having a child has given us so much hope, but the hate of some can make it easy to succumb to cynicism. We must not lose hope. As long as we stand together and see the good in each other, we can build a better world for all people.
OHPI CEO Andre Oboler said in a release introducing the report:
There is clearly a long way to go between the sort of environment Mark Zuckerberg says he wants Facebook to be and the reality we see online today. Part of the problem is the abuse of Facebook by those spreading hate, but the failure of Facebook to properly respond to users’ reports is also part of the problem.

When people feel unwelcome, excluded and vilified, when they feel their dignity as a person is under attack, a reply from Facebook that the abusive content they reported does not violate Facebook’s community standards is just rubbing salt into an open wound.

Our data shows that Facebook needs to improve the way it responds to reports of anti-Muslim hate, and indeed to reports of all kinds of hate. We hope Facebook will work with us so we can help them make this happen. Grand words are welcome, but the proof is in the hard data, and right now Facebook is coming up short.
She wrote in a message on the Change.org petition:
The best tool we have to keep terrorist content off Facebook is our vigilant community of more than 1.5 billion people who are very good at letting us know when something is not right. There are billions of new posts on Facebook every day, so we make it easy for people to flag content for us, and they do. Every piece of content on Facebook can be reported to our teams directly through the site.
When content is reported to us, it is reviewed by a highly trained global team with expertise in dozens of languages. The team reviews reports around the clock, and prioritizes any terrorism-related reports for immediate review.
We remove anyone or any group who has a violent mission or who has engaged in acts of terrorism. We also remove any content that expresses support for these groups or their actions. And we don’t stop there. When we find terrorist-related material, we look for and remove associated violating content, as well.
This is not an easy job, and we know we can make mistakes and are always working to improve our responsiveness and accuracy. We have expanded our team and increased our language capabilities so that we can respond to crises around the world faster and more effectively. As part of this effort, we have expanded our engagement with experts and follow world events closely. We remain in close contact with NGOs (non-governmental organizations), industry partners, academics and government officials about the best ways to keep Facebook free of terrorists and terror-promoting content. As governments and academics have pointed out, it is often hard to identify new terror groups and individuals because the landscape is constantly changing. We do our best to monitor emerging groups or trends by maintaining relationships with experts in the field and listening closely to our community.
Every time there is a terror attack, people come to Facebook to share their reactions. These posts from people around the world often express frustration and despair, but also empathy and a desire to help. Our community uses Facebook to share devastating news, but also to console one another, express solidarity and mobilize support for victims and other vulnerable people. For instance, after the Charlie Hebdo attacks in Paris, we saw many people use Facebook to plan offline events to stand in solidarity against terrorism.
Of course, when people talk about these events for good reasons, they sometimes share upsetting content. It is horrifying to see a photograph of a refugee child lying lifeless on a beach. At the same time, that image may mobilize people to take action to help other refugees. Many people in volatile regions are suffering unspeakable horrors that fall outside the reach of media cameras. Facebook provides these people a voice, and we want to protect that voice.
If Facebook blocked all upsetting content, we would inevitably block the media, charities and others from reporting on what is happening in the world. We would also inevitably interfere with calls for positive change and support for victims. For this reason, we allow people to discuss these events and share some types of violent images, but only if they are clearly doing so to raise awareness or condemn violence. However, we remove any graphic images shared to promote or glorify violence or the perpetrators of violence.
Readers: How can Facebook walk the line between censorship and protecting its user base?