Brands large and small have hurt themselves with questionable posts on Facebook and tweets on Twitter, but what if another pair of eyes could look at those posts or tweets before they went live? Social moderation outfit Crisp unveiled a new service to do just that.
The new service combines a sophisticated workflow engine with full-time human moderators — working 24 hours per day, seven days per week, in 50 languages — to ensure that potentially harmful Facebook posts or tweets never see the light of day.
Crisp said it is currently rolling out its new service to its existing clients, with new customers to be added “over the coming weeks and months.” Founder and CEO Adam Hildreth added in a release announcing the new service:
It’s one thing to quickly identify and remove offensive consumer-generated content in a brand’s owned social media channels, but something entirely different when a brand’s internal social media manager or customer-service representative either intentionally or inadvertently tweets or posts something offensive. Given the recent incidents involving some high-profile brands, we developed the industry’s first solution to eliminate such risk, while at the same time preserving the ability to engage with fans, followers, and customers in real-time.
Brand managers are deeply worried about what their employees say online, even as they recognize the need to engage with customers. It comes down to who is controlling what your employees say in the moment. A brand could go years without incident under a traditional community manager, but it only takes one accidental copy-and-paste or one disgruntled employee to make it the next brand trending on Twitter.
Page admins: Would you consider a service like the one just launched by Crisp?