Sentiment Analysis (Part 1): Tackling Visual Images and Facial Coding

By Nancy Lazarus

“Now social media is awash in still and moving images that can be analyzed,” said Seth Grimes, founder of Alta Plana Corporation and organizer of the recent Sentiment Analysis Symposium in New York. Indeed, as real-world applications of sentiment analysis continue to grow, it’s not just text being examined: visual and video analytics and facial coding offer ways to assess other forms of attitudinal information from consumers.

Vendors specializing in these techniques offered their takes on the latest best practices.

Visual images put consumers in proper context

There’s a vast amount to measure: 1.8 billion images are shared on social media each day, noted Francesco D’Orazio, VP of product for Pulsar. On mobile devices, about two-thirds of that shared media is still images and one-third is video. Rather than just serving as a forum for discussion, social media opens a window into people’s lives, resembling a 24/7 focus group.

D’Orazio outlined three different ways to analyze images:

  • Machine learning replaces hand-crafted expert rules with models trained on data.
  • Pattern recognition splits images into sections and still requires human analysis.
  • Deep learning uses several layers of visual feature maps to identify images.

Whichever method is used, the process is multilayered, requiring tasks like extracting and pooling images, tagging mentions and clustering concepts.
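To make those steps concrete, here is a minimal sketch of an extract-pool-cluster pipeline in Python. It is not Pulsar’s actual method: the coarse color-histogram features stand in for the learned visual features a production system would use, and the shared_images folder, file pattern and cluster count are all hypothetical.

```python
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def extract_features(path, bins=8):
    """Pool an image into a coarse RGB color histogram (a stand-in
    for the learned visual features a production system would use)."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = np.asarray(img).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()  # normalize so image size doesn't matter

# Extract features from a batch of shared images (hypothetical folder),
# then cluster them into rough visual "concepts" an analyst can label.
paths = sorted(Path("shared_images").glob("*.jpg"))
features = np.stack([extract_features(p) for p in paths])
labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)

for path, label in zip(paths, labels):
    print(f"cluster {label}: {path.name}")
```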

Images can also be classified and displayed in various ways, D’Orazio explained.

  • Metadata can be indexed and displayed, like the Selfiecity project.
  • Images can be mined and correlations made, such as identifying which group drinks which liquor brand at which musician’s concerts.
  • Findings can be visualized via bar charts, like the numbers of happy people per city (see the sketch after this list).
  • Images can be contextualized, and viewers can uncover unexpected trends, such as breakfast eaters drinking Gatorade.
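As an illustration of the bar-chart bullet above, the toy script below plots “happy people per city” counts with matplotlib. The city names and numbers are invented placeholders, not findings from any vendor.

```python
import matplotlib.pyplot as plt

# Hypothetical output of an image-mining run: how many shared images
# per city were classified as showing happy people.
cities = ["New York", "London", "Berlin", "Tokyo", "Sao Paulo"]
happy_counts = [412, 385, 298, 356, 441]  # placeholder numbers

plt.bar(cities, happy_counts)
plt.ylabel("Images classified as 'happy'")
plt.title("Happy people per city (illustrative data)")
plt.tight_layout()
plt.show()
```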

Facial coding of real and fake (but not frozen) expressions

FACS, the Facial Action Coding System, has been around for a while as a way to detect human facial expressions, including the seven basic emotions: anger, contempt, disgust, fear, joy, sadness and surprise. Now vendors like Emotient and Affectiva use webcams to analyze and quantify emotional responses as they appear on the faces of individuals or crowds.
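To ground the webcam idea, here is a minimal capture-and-detect sketch using OpenCV’s stock Haar-cascade face detector. It stops where the vendors’ proprietary work begins: the emotion scoring itself is represented only by a comment.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A vendor's emotion model would score frame[y:y+h, x:x+w] here.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```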

Gwen Littlewort, co-founder of Emotient, said there are even techniques to detect real vs. fake emotions. Apparently it’s all in the facial muscle timing, as spontaneous expressions register more quickly and appear smoother than more deliberate actions like fake smiles.
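A back-of-the-envelope version of that timing cue might look like the sketch below: fast, smooth onsets read as spontaneous; slow or jerky ones as posed. The half-second onset threshold and the second-difference smoothness measure are illustrative guesses, not Emotient’s actual classifier.

```python
import numpy as np

def classify_smile(intensity, fps=30.0,
                   max_onset_seconds=0.5, max_jerkiness=0.02):
    """intensity: per-frame smile strength in [0, 1]."""
    peak = int(np.argmax(intensity))
    onset_seconds = peak / fps  # time from neutral to peak expression
    onset = intensity[: peak + 1]
    # Smoothness: mean absolute second difference over the onset ramp.
    jerkiness = np.abs(np.diff(onset, n=2)).mean() if len(onset) > 2 else 0.0
    fast = onset_seconds <= max_onset_seconds
    smooth = jerkiness <= max_jerkiness
    return "spontaneous" if (fast and smooth) else "posed"

# A smooth ~0.4-second ramp to full intensity reads as spontaneous.
ramp = np.linspace(0, 1, 12) ** 2
print(classify_smile(np.concatenate([ramp, np.ones(60)])))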

Emotient also conducts facial coding of groups, and tested Stephen Colbert’s Pom pistachio ad at a bar during the Super Bowl. Their capabilities extend to live events like basketball games, where they can collect emotions during fan-cam moments, measuring 12 emotions frame-by-frame to detect attention spans and where the crowd is looking.
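A rough sketch of that kind of frame-by-frame crowd scoring: average per-face emotion probabilities within each frame, then track where a crowd-level emotion peaks. The seven basic emotions stand in for Emotient’s 12, and the random scores are placeholders for a real emotion model’s output.

```python
import numpy as np

# Using the seven FACS basic emotions as a stand-in for Emotient's 12.
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "joy", "sadness", "surprise"]

rng = np.random.default_rng(0)
n_frames, n_faces = 300, 40
# One probability vector per face per frame: shape (frames, faces, emotions).
# Random placeholders stand in for a real per-face emotion model's output.
scores = rng.dirichlet(np.ones(len(EMOTIONS)), size=(n_frames, n_faces))

crowd = scores.mean(axis=1)            # average across faces, per frame
joy = crowd[:, EMOTIONS.index("joy")]  # crowd-level joy over time
peak_frame = int(np.argmax(joy))
print(f"Crowd joy peaks at frame {peak_frame} ({joy[peak_frame]:.2f})")
```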

Affectiva also measures facial expressions and reports reactions to media content using webcams. Principal scientist Daniel McDuff said the company has an emotion data repository and has used its facial coding techniques in a large-scale observational test across 75 countries. As he noted, this is an improvement over the past, when people self-reported their expressions.

The firm analyzed which countries and audience groups exhibited more positive expressions, by gender and age. But alas, since the technology is relatively new, they’re not yet able to classify those whose facial muscles are temporarily paralyzed by Botox. (So it might be a while before this technique is applied to entertainment award show attendees and nominees.)
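Botox caveats aside, that kind of country/gender/age comparison is essentially a grouped aggregation. The sketch below shows its shape with pandas; the records, column names and scores are hypothetical, not Affectiva’s data.

```python
import pandas as pd

# Hypothetical per-viewer records; "positivity" stands in for a mean
# smile/joy score produced by a facial coding pipeline.
df = pd.DataFrame({
    "country":    ["US", "US", "UK", "UK", "IN", "IN"],
    "gender":     ["F", "M", "F", "M", "F", "M"],
    "age_band":   ["18-24", "25-34", "18-24", "35-44", "25-34", "18-24"],
    "positivity": [0.62, 0.48, 0.55, 0.41, 0.71, 0.66],
})

# Which country/gender groups showed the most positive expressions on average?
summary = (df.groupby(["country", "gender"])["positivity"]
             .mean()
             .sort_values(ascending=False))
print(summary)
```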

Stay tuned for Part 2 tomorrow: addressing five key sentiment analysis challenges…

(Image courtesy of Emotient)
