In the “Year of Elections,” Is AI on the Ballot?
Politicians like Pakistan’s Imran Khan have found themselves duplicated by artificial intelligence—sometimes by their own parties.
On December 17, 2023, more than a million supporters of Imran Khan, the former prime minister of Pakistan, fired up their browsers to take part in Pakistan’s first-ever virtual rally. Khan’s party, the Pakistan Tehreek-e-Insaaf (PTI), has been the subject of a brutal crackdown by the country’s powerful military establishment after thousands of his supporters laid siege to military installations in opposition to his arrest last May. Toward the end of the nearly five-hour virtual event—made necessary by the restrictions placed on the party’s assembling in public—something seemingly inexplicable happened. Viewers were treated to a keynote speech by Khan himself, in spite of his being incarcerated in Adiala Jail. “You all must be wondering how I’m doing in prison,” he said. “The first thing I want to make clear is that if it brings Pakistan true freedom, living in jail for me is an act of worship.”
In fact, the voice was not Khan’s own. The PTI’s digital media team had trained an artificial intelligence program to mimic Khan’s speech patterns by feeding it a selection of his oratory, and then used the clone to deliver a speech based on notes smuggled out of prison by the party chairman’s lawyers.
“What we wanted to give the audience was the feeling that Imran Khan was there, but with the disclaimer that this was the AI voice of Imran Khan based on his notes from jail,” says Chicago-based Khan loyalist Jibran Ilyas, who masterminded the initiative. “It helped us get his message across, and that’s the most important thing, because people are really looking at him to guide not just the PTI but Pakistan as a whole.”
An estimated 2 billion people worldwide will go to the polls in 2024 to choose their representatives, and policymakers and strategists have begun to turn their attention to the possibility of artificial intelligence playing a decisive role in global democracy. Kat Duffy of the Council on Foreign Relations has described the current environment as a “post-market, pre-norms space,” one in which emergent technologies have entered the marketplace but the regulation to govern their output has yet to be determined.
Throughout South Asia, machine-learning and AI technologies have already begun to play a significant role in the democratic process. There is the now-infamous case of Indian Prime Minister Narendra Modi, whose AI-cloned voice has been made to sing songs in a variety of regional languages he is known not to speak—an innovation that raises the possibility of the same technique being used to make his speeches accessible in parts of the country where Hindi is either not the first language or poorly understood.
In Pakistan, commentators and digital media specialists have hailed the PTI’s strategy as a significant milestone. “I thought it was a pretty effective way of circumventing the persecution that the political party has been facing,” says Islamabad-based digital rights activist Usama Khilji. “Now you have a dude who’s in jail addressing everyone through AI because you can, so I thought it was pretty innovative.”
The recently concluded election in Bangladesh, however, became a case study in how artificial intelligence can be used to peddle disinformation, particularly among voters who lack the sophistication to differentiate between authentic audiovisual content and deepfake clones. In one instance, an AI-generated video posted on Facebook showed exiled opposition leader Tarique Rahman arguing that his political party, the BNP, ought to placate the United States by keeping quiet about the Israeli offensive in Gaza—a potentially disastrous position to take in a country where more than 90 percent of the population is Muslim.
A report by the Financial Times published last month uncovered a campaign of disinformation in Bangladesh, where pro-government news outlets were promoting deepfakes and other AI-created media to manipulate public opinion in the run-up to the January 7 general election. Sayeed Al-Zaman, a Bangladeshi academic whose research focuses on digital information, believes that “tools and techniques such as deepfakes, autogenerated content, and AI bots could become potent carriers of political propaganda” in Bangladesh and worldwide.
According to Al-Zaman, whose research indicates that over 60 percent of individuals exposed to online disinformation are susceptible to believing it, the advent of artificial intelligence has made the challenge of verification more difficult for two reasons. “Firstly, effective AI-based misinformation detection tools are not yet widely available, making it difficult for the masses to discern between fact and fiction,” he tells The Nation. “Secondly, digital information literacy offers limited assistance in cases of AI-based misinformation due to its heightened level of sophistication.”
Even in the United States, AI-generated content has been deployed in attempts to influence voters. On Monday, NBC News reported that a robocall featuring an AI voice clone of President Biden was being used to discourage New Hampshire residents from voting in the state’s presidential primary. Last year, deepfake photographs showing Donald Trump wrestling with police officers surfaced on social media.
In November, the social-media giant Meta announced that it would require political campaigns to disclose their use of artificial intelligence in advertisements published on any of its platforms, a policy set to take effect this month. But at precisely the moment when the need for content moderation is greatest, the tech sector is experiencing mass layoffs. Last year, under the leadership of Elon Musk, Twitter made deep cuts to its trust and safety team, a decision mirrored by other tech giants, including Meta and Amazon. All told, the industry tracker Layoffs.fyi estimates that around 250,000 tech workers were let go in 2023.
The implications for the Global South are obvious; in a country like Pakistan, where 60 million people are classified as illiterate, the electorate is ripe for manipulation. But the danger of election interference is also considerable in affluent countries, where these AI technologies are at their most developed and sophisticated. It is not out of the question that 2024, which has been dubbed the year of the election, turns out instead to be the year of misinformation.