Press "Enter" to skip to content

How the Internet is Being Misused to Manipulate Voters

Voter targeting is enabled by collecting personal data, including online shopping history, likes, shares and comments. The use of such personal information for political messaging can help spread misplaced fears of potential attacks on what people hold dear. Democracies need better privacy laws.

On 7 October 2018, as Brazilians geared up to vote in a historic election, several influencing factors were already in motion.

The left-wing Workers’ Party had governed for more than a decade, leaving its candidate strong anti-incumbency sentiment to surmount. Additionally, the right-wing party’s candidate, Jair Bolsonaro, was a particularly polarising figure. He had made highly controversial statements against women, minorities, the LGBTQ community and environmental protection.

Such figures, as we have seen around the world, are often able to prey on latent fears and create a wave of populism in their favour. One of the many messages circulated by the Bolsonaro campaign – often meant to fire up emotions – included forewarnings about a ‘gay kit’. The campaign claimed that the ‘kit’ would encourage children to become gay and would be introduced in schools if Fernando Haddad, Bolsonaro’s rival, were to win. In addition, images were circulated showing Bolsonaro’s supporters being beaten up by ‘enemies of freedom’.

To be clear, there was no ‘gay kit’, and the actress Beatriz Segall – whose injuries were presented in viral posts as the result of a beating for supporting Bolsonaro – had actually been hurt in a car accident. Yet, Bolsonaro won a decisive victory.

But how did a fringe congressman rise to the presidency of the world’s sixth most populous country while making such deeply damning claims?

Among other things, Bolsonaro ingeniously followed the latest playbook in world politics – voter targeting and manipulation. He systematically and repeatedly discredited all mainstream news as fake. His campaign came out with fiery, carefully crafted messages about the economy, the environment, homosexuality, and so on. And finally, he found ways to spread this message beyond the realms of the mainstream media, reaching people through their Facebook accounts, Twitter handles, and even text messages. America popularised the playbook; Brazil deployed it.

How Voter Targeting Works

Historically, technology has played a significant role in shaping elections. From the age of the (M)ad Men of Madison Avenue to the subsequent growth of computers’ predictive capabilities, campaigns have drawn every available tool into their fold.

The logic of voter targeting technologies can be traced back to psychological warfare: the use of psychological techniques, including false media propaganda, to mentally exhaust the enemy and tilt public opinion in one’s own favour.

Voter targeting is enabled by collecting personal data. Machine learning and modelling algorithms use this data, often not explicitly shared by internet users, to track and predict behaviour. These algorithms infer human sentiment from thousands of data points and serve carefully chosen messages that nudge voters along the political spectrum. The data points include indications of behavioural and lifestyle choices, such as online shopping history, and opinions gauged through likes, shares and comments on posts about specific interests, movies and online forums.
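As a rough illustration of this mechanism – not a description of any actual campaign’s system – the toy sketch below (in Python, assuming the numpy and scikit-learn libraries) trains a classifier on synthetic engagement signals and uses its score to pick a message variant. Every feature, label and message here is invented for illustration.

```python
# Toy illustration of data-driven voter modelling. All signals, labels
# and message variants are synthetic; real systems combine thousands of
# behavioural data points, but the modelling logic is broadly similar.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Each row is one hypothetical user; each column a binary behavioural
# signal, e.g. liked a topic page, shared a partisan post, bought from
# a particular shopping category.
n_users, n_signals = 1000, 20
X = rng.integers(0, 2, size=(n_users, n_signals))

# Synthetic label: 1 = "leans towards candidate A", derived from a few
# signals plus noise, standing in for survey or voter-file ground truth.
weights = rng.normal(size=n_signals)
y = ((X @ weights + rng.normal(scale=0.5, size=n_users)) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new user and pick the message variant predicted to resonate.
new_user = rng.integers(0, 2, size=(1, n_signals))
p = model.predict_proba(new_user)[0, 1]
message = "economy-fear variant" if p > 0.5 else "culture-fear variant"
print(f"predicted leaning score: {p:.2f} -> serve {message}")
```

The point of the sketch is that nothing in the pipeline requires a user’s consent or even awareness: observed behaviour alone is enough to place them on a spectrum and select the message most likely to move them.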

The use of such personal information for political messaging can help create fears of potential attacks on what people hold dear. It can also grossly compromise an individual’s informational autonomy, since such messages are based on information that an individual hasn’t consented to share. Think of Trump’s constant use of the phrase “radical left mobs will destroy the suburbs”. Closer home, the political messaging of imminent danger to the foundations of ‘Indian culture’ hits similar notes.

Alarmingly, powerful people are legitimising these fake messages and undermining credible news sources. Labelling credible news as fake drives people away from the truth.

Voter targeting also threatens rational debate, which is the very crux of deliberative democracy. In a democracy, each citizen must be able to know what is being said to others – how else can rational debate and deliberation happen? Dark advertising – one of the methods of voter targeting – precludes this. It displays one message to a section of the audience and a completely opposite one to another section. Each message panders to a specific fear harboured by that particular crowd and hence creates clashing visions of the future.
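The pattern is simple to express in code. The minimal sketch below – with invented audience segments and ad copy – shows how dark advertising serves each inferred segment its own tailored message, with no shared public feed where the contradictory variants could ever be compared side by side:

```python
# Minimal sketch of the "dark advertising" pattern described above.
# Each audience segment sees only its own variant; there is no public
# record where all variants appear together. Segments and copy invented.
AD_VARIANTS = {
    "suburban_parents": "Candidate X will keep your neighbourhoods safe.",
    "young_renters":    "Candidate X will rein in the landlords.",
}

def serve_ad(user_segment: str) -> str:
    """Return the ad variant for this user's inferred segment only."""
    return AD_VARIANTS.get(user_segment, "generic campaign message")

print(serve_ad("suburban_parents"))
print(serve_ad("young_renters"))
```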

Thus, the sacrifice of personal autonomy of those who are targeted, and the harm to the right to information of those who are not targeted, constitutes double harm to citizens and to democracy itself.

Means of Dissemination

Voter targeting reached India in a big way in the 2019 elections. ‘Positive messaging’ is the phrase political parties use to describe their micro-targeted adverts. Members of WhatsApp groups receive apolitical content interspersed with ‘positive’ messages – say, on the success of a particular government welfare scheme. But there is also ‘negative’ messaging put out through third-party advertising, in which political rivals are vilified and groups (other than those targeted by the advert) are cast as ‘enemies’.

Richer parties are better placed to spread the information they know people want to read. This is done through hierarchical WhatsApp group structures and verified Twitter accounts. The former ensures that (dis)information systematically reaches the grassroots (a simplified sketch of this fan-out follows below), and the latter is critical since the mainstream media tends to pick up tweets from these accounts. Collectively, they ensure the spread of information far and wide.
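The hierarchical group structure can be pictured as a fan-out tree: a message posted at the top is re-forwarded level by level until it reaches the grassroots. The sketch below illustrates this with an invented group hierarchy; real networks reportedly run to thousands of groups, but the forwarding logic is the same:

```python
# Sketch of hierarchical message fan-out: a national group feeds state
# groups, which feed district groups, and so on down to the grassroots.
# The hierarchy here is invented purely for illustration.
from collections import deque

GROUPS = {
    "national": ["state_A", "state_B"],
    "state_A": ["district_A1", "district_A2"],
    "state_B": ["district_B1"],
    "district_A1": [], "district_A2": [], "district_B1": [],
}

def broadcast(root: str, message: str) -> None:
    """Breadth-first forwarding from the root group downwards."""
    queue = deque([root])
    while queue:
        group = queue.popleft()
        print(f"{group}: {message}")
        queue.extend(GROUPS[group])

broadcast("national", "forwarded 'positive' campaign message")
```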

As per research by the Oxford Internet Institute in 2019, 70 countries were using computational propaganda to manipulate public opinion. This often extends to hiring sophisticated analytics firms that put together hundreds of data points to create extremely personal and specific messages, stirring particular people to action based on their biases and interests.

While China has been focusing mainly on its domestic audience, its cyber-strategy took a dramatic and deeply dangerous turn when the government started using its vast resources to paint pro-democracy advocates in Hong Kong as ‘terrorists’. A key concern here is China’s ability to suppress any counter-narrative with a flood of pro-China propaganda, seemingly at the drop of a hat.

Can Regulation Meet These Challenges?

To meet these challenges, democracies need to oversee the creators of information, monitor its digital dissemination, and protect the receivers of this information.

Strengthening data protection and governance laws and the right to privacy is the first step in minimising access to citizen data. If key political players, including the ruling party, are denied access to personal data, their ability to curate specific messages to sway particular sections of the people will be thwarted to a large extent.

Additionally, there must be spending limits on the digital campaigns of both political parties and their hard-to-trace online proxies. At present, the lack of transparency around such spending hinders accurate monitoring of where a given message originated. Tracking how information flows through digital and mainstream media will force all players to be more truthful.

Furthermore, analysing whether technology promotes rational debate is crucial. Ethan Zuckerman, an American media scholar and internet activist, uses the term ‘Facebook logic’ to describe social media platforms and their opaque algorithms and governance structures, over which users have little control.

These platforms are built with the intent of increasing user screen time – epitomised by infinite scrolling – to maximise ad revenue. But why should systems built on such capitalistic logic monopolise our arenas of debate and information dissemination? There is an urgent need for social media designed on civic and democratic principles to counter untruth and disinformation.

All of the above require expanded capacities of the election oversight body in every country. Nimble regulations that can adapt and respond quickly to changing technology are the need of the hour.

As power threatens to cast murky shadows over the truth, it is not in the public interest to deploy technology that fuels opacity and disinformation. Short-term political gains will only entrench long-term patterns of manipulation that collectively harm the tenets of a democratic society.


Raghav Chopra is a program manager with the International Innovation Corps, based out of the University of Chicago Trust. He is a business studies graduate of Shaheed Sukhdev College of Business Studies, University of Delhi.


Deborah Thomas works with researchers at the Foundation for Ecological Security as a YLT Fellow. She is an engineer with a Master's degree from Azim Premji University.