LinkedIn Details Upcoming Updates Regarding User Safety

The professional network has turned to AI and machine learning, including a partnership with parent company Microsoft

LinkedIn promised to 'close the loop' with members on either side of reports


LinkedIn revealed upcoming changes intended to better protect members of the professional network.

Tanya Staples, vice president of product, trust, said in a blog post that LinkedIn’s policies are being updated to reinforce the fact that “hateful, harassing, inflammatory or racist content has absolutely no place on our platform.”

She added, “In this ever-changing world, people are bringing more conversations about sensitive topics to LinkedIn, and it’s critical that these conversations stay constructive and respectful, never harmful. When we see content or behavior that violates our policies, we take swift action to remove it. We’re starting to roll out new educational content in the feed as you post, message and engage with others, to review our policies and to always keep things professional.”

Staples also detailed LinkedIn’s use of artificial intelligence and machine learning, including an ongoing partnership with parent company Microsoft. The company is using AI models to find and remove profiles containing inappropriate content, and the professional network created the LinkedIn Fairness Toolkit to help it measure multiple definitions of fairness in large-scale machine learning workflows.
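For readers unfamiliar with what "measuring fairness" means in an ML workflow, here is a minimal illustrative sketch of one common definition, demographic parity, computed over toy data. This is not the LinkedIn Fairness Toolkit's actual API; the function name and data below are hypothetical.

```python
# Illustrative sketch only: demographic parity, one common fairness definition.
# The function name and the toy data are hypothetical, not LinkedIn's code.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels for each prediction
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    # Positive-prediction rate per group
    rates = {g: p / t for g, (t, p) in counts.items()}
    # A gap of 0 means every group receives positive predictions at the same rate
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, grps))  # 0.75 - 0.25 = 0.5
```

A toolkit built for large-scale workflows would compute many such metrics (equalized odds, predictive parity, and others) across distributed data rather than Python lists, but the underlying comparisons follow this shape.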

LinkedIn also implemented AI to help stop inappropriate, inflammatory, harassing and hateful content sent via private messages. Users will see a warning that enables them to view and report the content, or dismiss the warning and mark it as safe.
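The flow described above, detect a harmful message, warn the recipient, and let them either report it or mark it safe, can be sketched as follows. This is a hypothetical illustration under assumed names; the classifier stand-in and action labels are not LinkedIn's implementation.

```python
# Hypothetical sketch of the warning flow described above. The scoring
# function is a stand-in for an ML classifier; all names are illustrative.

def score_message(text):
    # Stand-in for a trained classifier: flag messages containing a blocked term.
    blocked = {"harassing-term"}
    return 1.0 if any(word in blocked for word in text.lower().split()) else 0.0

def handle_message(text, user_choice=None):
    """Return the action taken for a message.

    user_choice: "view_and_report" to report a flagged message,
                 anything else dismisses the warning and marks it safe.
    """
    if score_message(text) < 0.5:
        return "delivered"          # not flagged, delivered normally
    if user_choice == "view_and_report":
        return "reported"           # user viewed the warning and reported
    return "delivered_marked_safe"  # user dismissed the warning

print(handle_message("hello there"))                             # delivered
print(handle_message("harassing-term here", "view_and_report"))  # reported
print(handle_message("harassing-term here", "mark_safe"))        # delivered_marked_safe
```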

When LinkedIn users report content or behavior, the professional network takes actions including removing the content or restricting accounts, and Staples said transparency regarding this process will be increased in the coming weeks.

She wrote, “We’ll close the loop with members who report inappropriate content, letting them know the action we’ve taken on their report. And, for members who violate our policies, we’ll inform them about which policy they violated and why their content was removed.”

Staples concluded, “We embrace the opportunity to be better as a platform, amplifying all the good we see every day and stopping the behavior and content that could harm your experience. We’ll keep you posted as we release new technology and features that keep you and everyone on the platform safe.”


David Cohen is editor of Adweek's Social Pro Daily.