Crisp Thinking Promises to Prevent Your Worst Social Media Nightmares


We’ve already established that social media screwups can affect even the most infallible among us.

Such failures may be an accepted risk of doing business in the digital realm, but wouldn’t you like a little more security to ensure that your client doesn’t pull a U.S. Airways?

UK-based company Crisp Thinking started as a provider of child protection technologies for Internet service providers, but its latest product promises something that sounds almost unthinkable: protecting your company and your clients from the kind of missteps that can quickly go viral and guarantee days, if not weeks, of terrible headlines.

Curious? We spoke to founder Adam Hildreth for more details.

Could you describe your basic service for the layman?

Our current service offers 24/7 review of all inbound content posted on a brand’s social media pages. We collect all the content from their channels in real time using a combination of advanced technology and human supervision, and we sort everything into 43 general categories: spam, offensive material, etc. We also tailor categories for individual clients.

This combination of automation and human moderation allows us to categorize everything posted to a page within 15 minutes in 50 languages.

How does it differ from similar moderation services?

Other products use keywords but don’t understand the context: is this really a bomb threat? Does this comment qualify as “offensive content?”

Does this not fall under the purview of a social media/community manager?

The job of a community manager should be to engage positively with people, but in many companies, the manager is also the moderator. No one can work 24/7, so content gets missed and crises escalate quickly.

To what degree can you customize the service for individual clients?

We use an advanced filter, but our team reviews everything anyway, and we tailor it to different clients. Here’s an example: we can track social media comments about a battery overheating on a certain model of mobile phone in a certain region.

If a brand is going through a social crisis, we specifically categorize everything related to that crisis as positive or negative and inform the brand.

But your new product turns that equation on its head by monitoring outbound content, correct?

Yes. The question is: how do you control outbound messaging and, for example, stop a customer service agent from posting something (whether deliberately or by mistake) that’s not acceptable for the brand?

With customer service, there’s very rarely an approval process. We put a person in the middle to review but do it so quickly that you’d never know it happened, categorizing every statement and stopping “issue-based communications” from going out.

To whom do you bounce these problem messages?

That depends on both the customer and the type of content. The standard approach is that we stop it from ever getting posted and let the client know. In more extreme cases, we stop it from going out, then contact the head of comms/services according to brand guidelines.

How do you pitch your service to, say, U.S. Airways?

Brands are very scared of what their employees might say online, despite the need to engage with customers.

It comes down to: who’s controlling what your employees are saying in the moment? You could be fine for years with a traditional community manager, but it only takes one accidental copy and paste or one disgruntled employee to make you the next brand trending on Twitter.

What sorts of clients do you serve, and how many messages do you review and remove?

We have lots of media broadcasters, a major airline, alcoholic beverage brands, mobile phone makers, etc. But this model is applicable to every single industry.