Chatbots a danger to democracy?!

The December 4th issue of the New York Times contained an opinion piece by Jamie Susskind, a lawyer and author of “Future Politics: Living Together in a World Transformed by Tech,” entitled “Chatbots are a danger to democracy,” with the tag line “We need to identify, disqualify and regulate chatbots before they destroy political speech.”

Susskind defines chatbots as “software programs that are capable of conversing with human beings on social media using natural language.” He argues that social media could be overwhelmed by automated conversations that simulate humans delivering political points of view.

Susskind’s definition of chatbots, however, is undermined by an example he gives, purportedly of chatbots: non-interactive tweets sent in the days following the disappearance of the columnist Jamal Khashoggi (a killing rumored to have been ordered by Saudi Arabia’s Crown Prince Mohammed bin Salman). On a single day in October, the phrase “we all have trust in Mohammed bin Salman” appeared in 250,000 tweets, “We have to stand by our leader” was posted more than 60,000 times, and 100,000 messages implored Saudis to “Unfollow enemies of the nation.” A reference Susskind cited attributed these messages to “bots,” that is, programs that post messages automatically, which have little to do with interactive chatbots.

The misuse of technology to deliver messages that appear to come from a human is regrettable, but nothing new. I suspect most of us have gotten marketing calls from automated systems that try to sound like a live human (“Hi, this is Amy”). This illustrates the difficulty of defining a technology like natural language processing as the villain.

Susskind notes the Bot Disclosure and Accountability Act, a bill introduced by Senator Dianne Feinstein that would amend the Federal Election Campaign Act of 1971 to prohibit candidates and political parties from using bots intended to impersonate or replicate human activity for public communication. Feinstein said when introducing the bill: “This bill is designed to help respond to Russia’s efforts to interfere in U.S. elections through the use of social media bots, which spread divisive propaganda. This bill would require social media companies to disclose all bots that operate on their platforms and prohibit U.S. political campaigns from using fake social media bots for political advertising.”

Whatever you think of this approach to limiting bots in political campaigns, it is clear Feinstein is not thinking about interactive chatbots, although the bill would seem to require disclosure of any automated distribution of political information. The practical difficulty with such legislation is that it would be unlikely to deter a foreign power from doing what Russia did: attributing the source of such messages is difficult, and extraditing suspects from the offending country is essentially impossible. It would, however, deter a politician or a party under attack from fighting back with similar messages.

And, how do you define “bots”? Is a political ad automatically displayed on a web page you visit a “bot”? It’s automated and certainly not a human. And the decision to show the ad to you may involve “AI” (machine learning or linguistic keyword detection) to decide that you are an appropriate prospect.
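To make the keyword-detection case concrete, here is a minimal sketch in Python of how such targeting might work. The keyword list, threshold, and scoring rule are invented for illustration; real ad platforms use far more elaborate models.

POLITICAL_KEYWORDS = {"election", "senate", "ballot", "campaign", "candidate"}

def is_prospect(page_text: str, threshold: int = 2) -> bool:
    # Count distinct keyword hits in the page text; at or above the
    # threshold, the visitor is treated as a prospect for the ad.
    words = set(page_text.lower().split())
    return len(words & POLITICAL_KEYWORDS) >= threshold

# Two hits ("senate", "campaign") meet the threshold, so the ad is shown.
print(is_prospect("Coverage of the senate campaign continues today"))  # True

No human is in the loop here, yet nothing in this decision converses with anyone, which is exactly why a definition of “bot” is so slippery.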

It’s certainly the case that a chatbot could be designed to carry on a political conversation. But designing such a chatbot is a much more difficult task than simply sending out a tweet. Further, the computational support to carry on conversations with hundreds of thousands of recipients simultaneously requires orders of magnitude more computing power than sending tweets does. Certainly, some deep-pocketed PAC could potentially do so, but a legitimate political organization would most likely identify itself as the source and disclose that the conversation is automated.

Further, the danger of a chatbot successfully impersonating a human is small. All one has to do is ask an unrelated question, such as “how long should I boil pasta,” to elicit a response that will likely reveal the chatbot is not human. An automated system could even be designed to send such probe messages and note that the responses to a given question are exactly the same (or drawn from a small number of variants) across multiple interactions, detecting that the sender is automated. Such a system could then fight back by engaging the bot in an automated conversation for hours, tying up its resources. A natural language chatbot is not a very practical extension of simple bots sending tweets that cannot usefully be questioned.
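As a rough sketch of that detection idea, consider the following Python. The send_probe function is a hypothetical stand-in for whatever platform API would deliver a message and return the reply; the probe question and threshold are likewise invented for illustration.

from collections import Counter

PROBE = "How long should I boil pasta?"

def send_probe(account: str, message: str) -> str:
    # Hypothetical stand-in; wire this to a real messaging API.
    raise NotImplementedError

def looks_automated(accounts: list[str], max_distinct: int = 3) -> bool:
    # Ask every suspect account the same off-topic question. If the
    # replies collapse into only a few distinct strings, the accounts
    # are likely driven by the same canned-response bot.
    replies = Counter(send_probe(account, PROBE) for account in accounts)
    return len(replies) <= max_distinct

The same probing loop could then be repurposed to keep a detected bot busy, as suggested above.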

Attacking natural language processing as a specific threat, as Susskind does, is not appropriate. It’s like the general threat of artificial intelligence taking over humanity that has been suggested by people like Elon Musk. Should we outlaw AI? The technology is not the problem; the way it’s used is.
