How AI Tech Giants Could Improve Their Images By Serving the Public Good

Opinion: Smaller companies are already experimenting with this model


Several of the world’s tech giants (Facebook, Twitter, Google) have developed sophisticated artificial intelligence capabilities to collect, manage and act on consumer data. However, while these companies monetize digital data to serve ads and drive sales, there’s an unexplored opportunity for them to develop a platform to listen to citizen concerns and serve the public good.

The public holds a lot of deserved mistrust for these companies, fueled by misuse of personal information and high-profile data breaches and leaks. These tech companies could work to redeem themselves by creating a public portal that connects public agencies and citizens during natural or man-made disasters, or that simply gives citizens a channel to voice their concerns.

In fact, at a certain point, this is something the public should expect from the big AI companies that wield so much power over data and information.

How it might work

There’s already a lot of public data out there, but it’s very hard for public agencies to use Twitter as-is to monitor and prioritize citizen concerns, especially in real time during disasters. Doing so requires a platform built on advanced algorithms and machine learning to pull in, organize and flag priorities from all of that unstructured data, as well as AI for tasks such as language translation.
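To make that concrete, here is a minimal, illustrative sketch of the kind of triage such a platform would automate at scale. The keyword lists, priority labels and sample posts are my own assumptions for illustration; a real system would rely on trained language models rather than simple rules:

```python
# Illustrative sketch only: flag unstructured posts by priority.
# Keyword lists and labels are assumptions, not any company's pipeline.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    location: str  # e.g. a neighborhood name or geotag

URGENT_TERMS = {"trapped", "stranded", "injured", "sos", "drowning"}
NEEDS_TERMS = {"food", "water", "blankets", "shelter", "medicine"}

def priority(post: Post) -> str:
    """Assign a coarse priority label to a single post."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    if words & URGENT_TERMS:
        return "high"      # possible life-safety issue
    if words & NEEDS_TERMS:
        return "medium"    # relief-supply request
    return "low"           # general chatter

posts = [
    Post("Family trapped on the roof near T. Nagar, please send help", "T. Nagar"),
    Post("We need blankets and drinking water at the shelter", "Velachery"),
    Post("Stay safe everyone, heavy rain again tonight", "Adyar"),
]

for p in posts:
    print(priority(p), p.location, "-", p.text)
```

Even this toy version shows why the big AI companies are better positioned than anyone else: they already have the data, the infrastructure and the language models to do this far more accurately and at global scale.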

An individual or cooperative digital data platform developed by one or more of the tech giants would be a powerful tool to help public agencies navigate disasters or address citizen concerns (safety, public health, infrastructure). Working in partnership, any combination of these companies could develop a cloud-based interface using the data they already possess.

Smaller companies are already experimenting with this model.

Planet, for example, provided satellite data during Hurricane Florence. Only Planet’s own data was available in that platform, however. A cooperative platform that also drew on data from NASA and the European, Indian and other space agencies would be possible, but step No. 1 would be for Planet to share its data.

AI tech companies could do something similar with social data for a larger social good. This could start with any one of these companies developing an individual platform (as Planet did) that could be queried by various public agencies.

More than 3 billion people are on the internet, and the data they produce can link geographic location to conditions on the ground, from crime levels to weather events. It can be especially useful during or after a catastrophe.

This is not idle speculation. In November 2015, Chennai, India, got 19 inches of rain in one day. That prompted the worst flooding the city had seen in a century. More than 500 people died as a result, and the property damage ran as high as $10 billion.

In desperation, people in the area took to Facebook, Twitter and WhatsApp. Public agencies used those channels to gather real-time, location-specific information. The data helped them administer aid, and it helped aid workers identify neighborhoods where people were trapped or needed blankets.

What about privacy?

One initial concern is privacy, especially with the General Data Protection Regulation now in force and GDPR-style legislation, such as California’s new data privacy law, following it.

If these companies are trying to redeem themselves for past misuse of data, this topic can’t be ignored. However, the proposed platform would analyze only publicly available data (public social media posts); it would not share data such as private chats and emails. Between the big three AI tech companies, there would be enough public data for public agencies to effectively handle a crisis or understand citizen concerns.

What would a platform look like?

This summer, mass flooding similar to that in Chennai three years earlier occurred in Kerala, India, displacing millions. At LatentView Analytics, we quickly mobilized and created a social dashboard monitoring rescue, relief and health topics on Twitter, which we provided to the central control rooms of local relief organizations on the ground during the floods.

This is a basic example using social data, and the AI tech companies mentioned above would be able to create a much more advanced product with real-time alerts (red, yellow, green) for key topics.
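To give a sense of how such alerts might be derived, here is a simple, hypothetical sketch that compares current topic mentions against a recent baseline. The topics, counts and thresholds are invented for illustration only:

```python
# Illustrative sketch: map topic mention spikes to red/yellow/green alerts.
# Topic names, counts and thresholds are assumptions for illustration.
from statistics import mean

# Hourly mention counts for the recent baseline and the current hour.
history = {
    "rescue":  [12, 9, 14, 11, 10, 13],
    "shelter": [30, 28, 33, 29, 31, 27],
    "power":   [5, 6, 4, 7, 5, 6],
}
current = {"rescue": 85, "shelter": 50, "power": 6}

def alert_level(topic: str) -> str:
    baseline = mean(history[topic])
    ratio = current[topic] / baseline if baseline else float("inf")
    if ratio >= 3:
        return "red"     # sharp spike: likely an emerging emergency
    if ratio >= 1.5:
        return "yellow"  # elevated chatter: worth a closer look
    return "green"       # within the normal range

for topic in current:
    print(f"{topic:8s} {alert_level(topic)}")
```

A production system would of course learn these baselines and thresholds from historical data rather than hard-coding them, but the underlying idea, watching for deviations from normal activity, is the same.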

But we don’t have to wait for a disaster to put such data to use. Imagine if a municipal government, for instance, had a real-time view into what was happening in every neighborhood. If residents knew the government was listening, they could also use digital channels to voice concerns about public safety and other issues. AI could identify new needs as they arise and rank them against past trends.

Language, meanwhile, would no longer be a barrier, since AI can already translate text across more than 100 languages. Citizens of polyglot cities like New York, Los Angeles and San Francisco could make their voices heard on such a platform no matter the language.

Data produced through digital channels would act as a check and balance on official reports. If a field team says disaster relief is complete, a city government could check social media to see whether that’s true.

To illustrate the types of insights such a platform could provide to local governments, LatentView Analytics created a dashboard to monitor and analyze civic concerns being voiced in San Francisco across social media and blogs between May 1 and Aug. 12. There were some interesting findings.

Of course, there’s a potential downside to this, too

Data from social media suffers from the same biases as social media itself. There’s potential for trollishness and bullying. Squeaky wheels will get the most attention.

But such activity can be accounted for in the analysis. Almost 90 percent of social media data is noise or junk. The big tech companies, however, are adept at tuning out the noise and focusing on the 10 percent or so of relevant data. AI-powered advanced analytics systems can also identify false positives.
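As a rough illustration of that filtering step, and only as an illustration (the rules and vocabularies below are assumptions, not how any real platform actually filters), a pipeline might discard duplicates and obvious junk and keep posts that combine a relief-related term with a known location:

```python
# Illustrative sketch of noise filtering: drop duplicates and junk,
# keep posts mentioning both a relief term and a known area.
# Vocabularies and rules here are assumptions for illustration.
RELIEF_TERMS = {"rescue", "flood", "shelter", "medical", "evacuate"}
KNOWN_AREAS = {"adyar", "velachery", "t. nagar"}

def is_relevant(text: str, seen: set) -> bool:
    normalized = " ".join(text.lower().split())
    if normalized in seen:           # near-verbatim duplicate or retweet
        return False
    seen.add(normalized)
    if len(normalized.split()) < 4:  # too short to carry useful detail
        return False
    has_term = any(term in normalized for term in RELIEF_TERMS)
    has_area = any(area in normalized for area in KNOWN_AREAS)
    return has_term and has_area

seen = set()
stream = [
    "Flood water rising fast in Adyar, need rescue boats",
    "Flood water rising fast in Adyar, need rescue boats",
    "lol",
    "Great weather today!",
    "Medical camp open at the Velachery school until 8pm",
]
relevant = [t for t in stream if is_relevant(t, seen)]
print(relevant)
```

The real systems use trained classifiers rather than keyword rules, but the principle is the same: most of the stream is discarded before a human ever sees it.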

Additionally, governments (local, state and federal) have conflicts of interest when it comes to resolving civic issues, so the data would need to be anonymized; otherwise the platform could become a place where governments collect and monitor the online activity of activists in specific communities. And citizens who wanted their voices heard would be pushed to do their social networking on one of the big platforms.

Meanwhile, the unconnected population would be invisible. Governments would still need to rely on offline methods to better understand the concerns of these citizens.

Final thoughts

All told, the potential benefits to the public outweigh the negatives. The examples above, including our own, are simply that—examples of what could be advanced much further by the AI tech giants. A well-thought-out, well-designed platform, with proactive training for public agencies and governance processes that enable local-level listening, would cost these tech companies little relative to their influence and financial stature. Meanwhile, these companies would gain back some of the public goodwill they’ve lost with recent data breaches and misuse. Citizens would get something as well: The data they provide to big tech wouldn’t just be used to target them with ads, but might also save their lives one day.

Ganesh Sankaralingam is director of data science and machine learning at digital analytics firm LatentView Analytics.
