Messenger From Facebook Rolls Out Alerts to Warn Users of Impersonators, Scams

Minors will be prompted to take action if they are contacted by adults they don’t know

The new feature taps Facebook’s machine learning technology to detect behavioral signals that may indicate imposters or scams
Facebook

Messenger From Facebook is introducing new in-application notifications to warn people who are about to interact with someone who may be impersonating another person or attempting a scam.

The social network quietly began rolling the feature out on Android in March, and it will be extended to iOS and to more people around the world next week.

Jay Sullivan, director of product management for Messenger privacy and safety, said in a blog post that the new feature taps Facebook’s machine learning technology to detect behavioral signals that may indicate imposters or scams.


He wrote, “Too often, people interact with someone online whom they think they know or trust, when it’s really a scammer or imposter. These accounts can be hard to identify at first, and the results can be costly. Messenger already filters some potential spam or malware and offers tips to avoid common scams. Our new safety notices also help educate people on ways to spot scams or imposters and help them take action to prevent a costly interaction.”

Sullivan added that the new alerts were designed with full encryption in mind and, as Messenger moves toward end-to-end encryption, “People should be able to communicate securely and privately with friends and loved ones without anyone listening to or monitoring their conversations … These safety notices will help people avoid potentially harmful interactions and possible scams, while empowering them with the information and controls needed to keep their chats private, safe and secure.”

For people under 18, the new Messenger alerts will be triggered if they are interacting with adults they may not know, enabling them to take action before responding to messages.

Sullivan said Facebook already uses machine learning to detect and disable the accounts of adults engaging in inappropriate interactions with minors, analyzing behavioral signals such as an adult sending a large number of friend or message requests to people under 18.

Family Online Safety Institute CEO Stephen Balkam said in the blog post, “These features show a great integration of the technical tools that will help curb bad behavior on the platform, while also reminding people of their own control over their account. It’s important to use language that empowers people to make wise decisions and think more critically about who they’re interacting with online. We’re especially glad to see this reflected in the thoughtful approach around safety considerations for younger users.”
