Over 2 billion people use Facebook Messenger, Google Assistant and Amazon Echo, enabling brands to communicate directly with them and deliver highly relevant, personalized content. But while chatbots and virtual assistants offer many benefits, they also create business, legal and ethical challenges. The Cambridge Analytica scandal and the General Data Protection Regulation (GDPR) have only added to those challenges.
New medium, old rules
The main concern among regulators and legislators is whether consumers understand that they are talking to a chatbot. The Federal Trade Commission (FTC) has stated its position on privacy disclosures online as: “if a platform does not provide an opportunity to make proper disclosures, then it should not be used to disseminate advertisements that require such disclosures.” According to Hannah Taylor, an advertising attorney at Frankfurt Kurnit Klein & Selz, “The law is the tortoise and technology is the hare.”
According to advertising law, if a consumer asks a chatbot a pricing question, the chatbot cannot answer “The price is $X” because it would require all the disclosure details alongside the response. Rather, the bot would need to respond with something along the lines of “Great question,” and then take the user to all the disclosure details with the actual response, said Taylor. The FTC states, “Disclosures are an integral part of a claim [and] should not be communicated through a hyperlink. Instead, they should be placed on the same page and immediately next to the claim and be sufficiently prominent.”
However, from a marketing perspective, this can make the experience clunky and abrupt. One solution is to make chatbot conversations more of a teaser conversation toward additional information rather than providing direct answers with all the disclosure details in the chat window.
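To make the FTC’s “same page, immediately next to the claim” requirement concrete, here is a minimal sketch of a disclosure-aware chatbot reply. The function name and the disclosure text are illustrative assumptions, not any vendor’s API or actual required language.

```python
# Illustrative only: the disclosure text below is a placeholder, not real
# legal copy. The point is structural: the claim and its disclosure travel
# together in one message, never behind a hyperlink.

REQUIRED_DISCLOSURE = "Price excludes taxes and fees. Terms apply."  # placeholder copy

def answer_pricing_question(price_usd: float) -> str:
    """Return a reply that keeps the pricing claim and its disclosure in the
    same chat message, per FTC guidance that disclosures sit immediately
    next to the claim."""
    claim = f"The price is ${price_usd:.2f}."
    # Append the disclosure directly after the claim in the same message.
    return f"{claim} {REQUIRED_DISCLOSURE}"

print(answer_pricing_question(19.99))
```

In practice a brand’s legal team would supply the disclosure copy, and a longer disclosure might push the design toward the “teaser” approach described above rather than a single chat bubble.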
Cambridge Analytica debacle
By now, advertisers are aware of the debacle in which Cambridge Analytica violated Facebook’s policy, which clearly stipulates that Facebook’s data can only be used for scientific research and not for political (or any other) purposes. Through an app called thisisyourdigitallife, which provided personality quizzes, Cambridge Analytica harvested the personal data of 87 million Facebook users. Even though Facebook itself collects this type of data for ad retargeting purposes, it took the Cambridge Analytica scandal for the Federal Trade Commission (FTC) and other entities, including state legislatures, to take notice and call into question broader industry practices. This led to Facebook creating stricter privacy policies and no longer allowing apps to collect users’ personal information, such as relationship status, political views and education. Mark Zuckerberg also announced that the company no longer supports searching users’ profiles by their phone numbers.
In the case of Google Assistant, the user has to give permission for Google to access their location data and email addresses. The Aveda-branded Google Action is an example of a voice experience that can send the nearest store locations to the user, but only if the user explicitly opts in. In addition, the consumer can sign up for an email newsletter only if explicit permission is given to Google.
California Consumer Privacy Act
California was the first to respond and create new laws to address personal data on the internet, including an IoT bill and the California Consumer Privacy Act. The IoT bill states that reasonable security should be employed when customers are using a connected device, such as Google Hub or Amazon Echo. Further, the California Attorney General’s office has identified six concepts—transparency, choice, reasonable security, limit collection and retention, sensitive data and reasonable expectations—as privacy principles, and these have been largely adopted in some form by attorneys general in other states, according to Daniel Goldberg, a privacy and data security attorney at Frankfurt Kurnit Klein & Selz.
“Every state in the U.S. has laws addressing reasonable security through data breach laws,” Goldberg said. “But these laws don’t necessarily cover the type of data collection being done by companies online.”
U.S. companies need to be aware of GDPR, the new EU law on data protection and privacy for all individuals within the EU. Google was recently fined nearly $57 million for violating GDPR. The ruling was due to Google’s business model, which takes user data to serve up highly targeted ads. A central element of GDPR states that companies must clearly explain how data is collected and used, and users must in turn give consent before a company can begin to collect their data.
The effect on advertisers and brands
Personalization has been a cornerstone of advertising and marketing. Marketers have been dependent on user data from Facebook and Google for years. The majority of branded chatbots and voice assistants have been built on the premise of collecting user data and sentiment. With the new laws in place, marketing campaigns must be rethought. Solutions could be as simple as asking a user to opt in or anonymizing data collected during a bi-directional conversation with a user.
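The two remedies mentioned above, explicit opt-in and anonymization, can be sketched for a chatbot transcript log. This is a minimal illustration under stated assumptions: the function names, the salted-hash approach, and the in-memory log are all hypothetical, not a real framework’s API, and a production system would need proper salt management and retention policies.

```python
# Hypothetical sketch: gate collection on explicit opt-in and store only an
# anonymized identifier, never the raw user ID. Names are illustrative.

import hashlib

def anonymize_user_id(user_id: str, salt: str = "rotate-this-salt") -> str:
    """Replace a raw user identifier with a salted one-way hash so stored
    transcripts cannot be tied back directly to the person."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def store_turn(user_id: str, text: str, opted_in: bool, log: list) -> None:
    """Persist a conversation turn only after explicit opt-in, keying it by
    the anonymized identifier rather than the raw one."""
    if not opted_in:
        return  # no consent, no collection
    log.append({"user": anonymize_user_id(user_id), "text": text})

log = []
store_turn("user-123", "What's the price?", opted_in=False, log=log)  # dropped
store_turn("user-123", "What's the price?", opted_in=True, log=log)   # stored
print(len(log))
```

Whether hashing alone counts as sufficient anonymization varies by regulation; under GDPR, salted hashes are generally treated as pseudonymization rather than full anonymization, so legal review is still required.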
As new mediums continue to push the boundaries of conversational automation and data collection, more regulations will be implemented. Following the Cambridge Analytica scandal, users are leaving Facebook by the millions. Now more than ever, advertisers and brands must be transparent and up front.