How Brands Can Protect Themselves From Legal Ramifications Over AI Privacy

Some uses of the technology already aren't being monitored as carefully as they should be

Artificial intelligence is not a futuristic dream; it's here now. Marketers are tapping the power of AI in a variety of ways, and each use carries its own privacy and other legal challenges.

The Wall Street Journal has said AI systems have a “thirst for data.” As AI stretches its tentacles, it interacts with other machines and engages with more and more third parties. Data sharing with third parties poses a variety of privacy challenges and risks, such as data leakage.

The complexity of AI systems makes maintaining and enforcing accurate and transparent data policies even more challenging. Increasingly, legislators are passing laws that demand that companies be transparent about the chain of custody of information. GDPR and California’s Consumer Privacy Act of 2018 (CaCPA) are just two recent examples of major legislative overhauls of how data is regulated.

To help manage privacy compliance, companies should (to the extent possible) keep a clear record of how their AI will collect, store, use and share data. Companies should also work closely with legal counsel to determine how this information should be disclosed and what options should be presented. Of course, these things are challenging given the transformative nature of AI systems.
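What that record looks like will vary by company, but even a lightweight, machine-readable inventory of each data flow makes later disclosure reviews easier to run with counsel. The sketch below is purely illustrative; the ProcessingRecord structure and its field names are assumptions, loosely inspired by the kinds of details GDPR-style records of processing ask about, not a prescribed format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ProcessingRecord:
    """Illustrative record of how one AI feature handles personal data.
    Field names are hypothetical, not a legally mandated schema."""
    system: str                      # the AI feature or model involved
    data_collected: list             # categories of personal data collected
    purpose: str                     # why the data is collected
    storage: str                     # where it lives and how long it is retained
    shared_with: list = field(default_factory=list)  # third parties receiving data
    lawful_basis: str = "consent"    # e.g. consent, contract, legitimate interest

# Example entry for a customer-service chatbot
chatbot_record = ProcessingRecord(
    system="customer-service chatbot",
    data_collected=["chat transcripts", "email address"],
    purpose="answer support questions and route tickets",
    storage="encrypted store, retained 12 months",
    shared_with=["cloud NLP vendor"],
)

# Keeping the inventory as JSON makes it easy to hand to legal counsel for review.
print(json.dumps(asdict(chatbot_record), indent=2))
```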

Below are some examples of how AI is already being used and the inherent privacy risks. When experimenting with these examples of AI and others, ask yourself three questions: Are we being transparent? Can our customers understand our privacy policy? Do we have permission?


Emotional chatbots

Studies show that customers are warming up to chatbots, especially when the interactions feel lifelike. Emotional chatbots in particular are designed to make interactions with humans more seamless and interactive.

Part of how chatbots become more human is by extracting information from interactions with customers and using those details to convey real emotions. Chatbots aren’t just engaging in witty banter. Emotional conversations churn up sensitive data about health insurance, financial problems and relationship drama. As chatbots become more lifelike, marketers must take measures to monitor and protect the collection, storage and disclosure of personal information, and consider what disclosures users need about the nature of what they are interacting with.
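One hedged way to operationalize that monitoring is to flag obviously sensitive content before a transcript is ever stored, so retention and disclosure rules can treat those messages differently. The sketch below is a simplified illustration; the categories, keyword patterns and flag_sensitive helper are assumptions for demonstration, not a complete detection method.

```python
import re

# Hypothetical keyword patterns for a few sensitive categories; a real system
# would need far more robust detection (and legal review of what to retain).
SENSITIVE_PATTERNS = {
    "health": re.compile(r"\b(diagnos\w*|prescription|insurance claim|therapy)\b", re.I),
    "financial": re.compile(r"\b(bankrupt\w*|debt|credit score|loan)\b", re.I),
    "relationship": re.compile(r"\b(divorce|breakup|custody)\b", re.I),
}

def flag_sensitive(message: str) -> list[str]:
    """Return the sensitive categories a chat message appears to touch on."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(message)]

def store_transcript(message: str, log: list[dict]) -> None:
    """Store a message along with its sensitivity flags so retention and
    disclosure policies can handle flagged messages differently."""
    log.append({"text": message, "sensitive_categories": flag_sensitive(message)})

log: list[dict] = []
store_transcript("My insurance claim for therapy was denied and I'm in debt.", log)
print(log[0]["sensitive_categories"])  # ['health', 'financial']
```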

Recommendation engines

A friend was recently gathering research for an article she was writing about PTSD. After an hour or so of clicking on articles, she returned to her inbox to find an eerie email: a medical provider from the other side of the country had emailed her an advertisement about help for PTSD.

Companies that sell healthcare-related data have to be especially careful to secure the permissions they need to solicit customers and to anonymize data so it can’t be used to invade the privacy of healthcare consumers; failing to do so can violate the Health Insurance Portability and Accountability Act. As with anyone who collects and uses personal information for marketing or other purposes, consider carefully whether your privacy notice is up to date, clearly disclosed and understandable.
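One common safeguard, sketched below with assumed field names, is to strip direct identifiers and check a consent flag before a record ever reaches a marketing or recommendation pipeline. Real de-identification under HIPAA involves much more than this, so treat it as an illustration only.

```python
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone", "street_address"}  # assumed field names

def prepare_for_marketing(record: dict, salt: str = "rotate-me") -> dict | None:
    """Return a pseudonymized copy of a customer record, or None if the customer
    has not consented to marketing use. Illustrative only; HIPAA de-identification
    requires far more (e.g. the Safe Harbor identifier list or expert determination)."""
    if not record.get("marketing_consent"):
        return None
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Replace the stable customer ID with a salted hash so records can still be
    # linked internally without exposing the original identifier downstream.
    cleaned["customer_id"] = hashlib.sha256((salt + record["customer_id"]).encode()).hexdigest()
    return cleaned

record = {
    "customer_id": "c-1001",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "condition_interest": "PTSD resources",
    "marketing_consent": False,
}
print(prepare_for_marketing(record))  # None: no consent, so the record never leaves
```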

Dynamic pricing

Dynamic pricing is all the rage in ecommerce marketing circles. Companies tap the power of algorithms to adjust prices on the fly based on what a customer might be willing to pay at any given time, with AI making decisions based on factors such as customer behavior and attributes coupled with supply and demand.

But as companies take advantage of AI to increase sales through dynamic pricing, they must be aware of the legal risks. Basing price calculations on protected classes such as race, gender, religion or nationality is illegal, and dynamic pricing can also violate antitrust laws. When considering dynamic pricing, think through how you are collecting the information used to set prices, whether permissions are needed, what disclosures are required and whether your privacy policy clearly explains your data collection and sharing practices. These and other legal issues that dynamic pricing may trigger must be fleshed out fully.
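One concrete guardrail, sketched below with assumed attribute names and a toy pricing rule, is to restrict the pricing model to an explicit allow-list of inputs so protected-class attributes can never reach the calculation, and to log which inputs were actually used for each quote.

```python
# Assumed attribute names; the allow-list and the pricing logic are illustrative only.
ALLOWED_FEATURES = {"inventory_level", "demand_index", "cart_value", "time_of_day"}
PROHIBITED_FEATURES = {"race", "gender", "religion", "nationality"}

def quote_price(base_price: float, features: dict) -> tuple[float, dict]:
    """Compute a dynamic price using only allow-listed features and return the
    inputs actually used, so each quote can be audited later."""
    blocked = set(features) & PROHIBITED_FEATURES
    if blocked:
        raise ValueError(f"Protected attributes passed to pricing input: {blocked}")
    used = {k: v for k, v in features.items() if k in ALLOWED_FEATURES}
    # Toy adjustment: raise the price with demand, discount when inventory is high.
    price = base_price * (1 + 0.1 * used.get("demand_index", 0)) \
                       * (0.95 if used.get("inventory_level", 0) > 100 else 1.0)
    return round(price, 2), used

price, audit_log = quote_price(20.0, {"demand_index": 0.5, "inventory_level": 150})
print(price, audit_log)  # 19.95 {'demand_index': 0.5, 'inventory_level': 150}
```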

Facial and speech recognition

Facial and speech recognition technology is more sophisticated and prevalent than ever. With its ability to detect subtle changes in a person’s face, combined with its incorporation into the latest iPhone model, facial recognition in particular is on the brink of becoming ubiquitous. Having the means to identify someone by their face and to upload that marker into the digital data universe will likely change our culture and shouldn’t be taken lightly.

The same principles discussed above need to be considered with recognition technology. The Federal Trade Commission recommends that “companies take steps to make sure consumers are aware of facial recognition technologies when they come in contact with them and that they have a choice as to whether data about them is collected. So, for example, if a company is using digital signs to determine the demographic features of passersby, such as age or gender, they should provide clear notice to consumers that the technology is in use before consumers come into contact with the signs.”
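In code terms, that notice-and-choice guidance might translate into gating any demographic analysis behind a per-sign notice flag and an opt-out check before a frame is ever processed. The sketch below is hypothetical; the configuration, the opt-out registry and the may_analyze function are assumptions for illustration, not an FTC-specified mechanism.

```python
# Hypothetical per-sign configuration and opt-out registry; illustrative only.
SIGN_CONFIG = {
    "lobby-sign-1": {"notice_posted": True},
    "lobby-sign-2": {"notice_posted": False},
}
OPTED_OUT_IDS = {"device-abc"}  # e.g. collected through an assumed opt-out kiosk or app

def may_analyze(sign_id: str, opt_out_id: str | None = None) -> bool:
    """Allow demographic analysis only when clear notice is posted for that sign
    and the passerby has not opted out."""
    sign = SIGN_CONFIG.get(sign_id)
    if not sign or not sign["notice_posted"]:
        return False
    if opt_out_id in OPTED_OUT_IDS:
        return False
    return True

print(may_analyze("lobby-sign-1"))                 # True: notice posted, no opt-out
print(may_analyze("lobby-sign-2"))                 # False: no notice at this sign
print(may_analyze("lobby-sign-1", "device-abc"))   # False: passerby opted out
```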