When Do Things Cross the Line and Become Online Harassment?

Pew Research Center examined three fictional scenarios

Pew Research Center says there is considerable debate over what actually constitutes online harassment. (Image: AIMSTOCK/iStock)

In a new report released this morning, Pew Research Center sought to bring into focus the blurry lines that tend to define what does and does not constitute online harassment.

The research organization showed respondents three fictional scenarios depicting potential online harassment and asked which elements of each should be considered harassment. Pew Research Center associate director of research Aaron Smith said in a release introducing the results, “When it comes to online harassment, there is a broad public consensus that certain types of severe actions are beyond the pale. At the same time, there is also a substantial gray area in public opinion on this issue. Americans are much more divided over whether other types of less overt actions meet the threshold for constituting online harassment or not.”

In the first example, two friends disagree about a political issue in a private conversation, which one of them then shares via social media. The other friend begins to receive “unkind messages” from strangers, escalating to the point where that user’s phone number and home address are posted online and threatening messages follow.

Pew found that 89 percent of respondents believe the second user experienced online harassment at some point, while just 4 percent believe this was not the case and 7 percent were unsure.

Only 5 percent believe the disagreement between the two friends counts as online harassment, while that number jumps to 48 percent when the previously private conversation is shared via social media, and to 54 percent when the conversation is shared publicly.

72 percent of respondents believe the second user experiences online harassment when the unkind messages from strangers begin, rising to 82 percent when those messages become vulgar and to 85 percent when that user’s personal information is posted online, as well as when threatening messages are received.

Pew said gender did not play a significant role: changing the fictional characters to women resulted in 91 percent of respondents believing the second user experienced online harassment at some point, versus the 89 percent figure from when the characters were men.

In the second scenario, a woman posts about a controversial political issue on her social media account, spurring unkind messages. Her post is then shared by a “popular blogger with thousands of followers,” and she begins receiving “vulgar messages that insult her looks and sexual behavior,” escalating to people posting edited images of her that include sexual imagery, and to threatening messages.

89 percent of respondents believe the woman was the victim of online harassment at some point, while 6 percent feel that she was not harassed and 5 percent were unsure.

Just as in Pew’s first scenario, the percentage of respondents who believe the woman experienced harassment generally rises as actions escalate: Just 3 percent felt that her initial post about the issue constituted harassment, while 43 percent cited the unkind messages and 17 percent pointed to her post being shared by the blogger.

The vulgar comments about her looks and sexual behavior were seen as harassment by 85 percent of respondents, while 84 percent believed the edited images qualified and 85 percent said the same of the threatening messages.

As for the role of social platforms, while 43 percent believe the unkind messages the woman received represented harassment, just 20 percent felt that the social platform should have intervened. Those figures were 85 percent and 66 percent, respectively, for the vulgar messages she began receiving.

Pew’s third scenario is virtually identical to its second scenario, but racism is swapped in for sexism.

85 percent of respondents believe the “victim” experienced harassment at some point, while 6 percent believe they did not and 10 percent were unsure.

Much like in the previous scenarios, very few people believe the initial disagreement represented harassment, but as things escalated:

  • 82 percent believed vulgar messages with racially insulting language constituted harassment.
  • 80 percent believed the victim’s picture being edited to include racially insensitive images qualified as harassment—however, just 57 percent believe the social platform should have intervened.
  • 82 percent counted personal threats as harassment.

 


David Cohen (david.cohen@adweek.com) is editor of Adweek’s Social Pro Daily.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}