Big Data and AI: Why We No Longer Have Free or Fair Elections

Big data and psychological operations have played a significant role in people's lives, likely without their knowledge. Personal data is being used in ways that surround people with misinformation, influencing their beliefs. Social media is flooded with "fake news" that shapes the way people make decisions.

Psychological operations (PSYOPs) are the planned use of propaganda and other forms of disinformation to influence the opinions, emotions, attitudes, and behavior of certain groups. These dedicated military actions were first developed during World War I to destroy the morale of German soldiers. In our current age of information and big data, PSYOPs are increasingly being used by private and public actors to influence the way people vote.

Data is one of the most valuable resources in today's world. Both public and private actors are racing to amass data on consumers and citizens, using it to enhance their economic profit, power, and influence. Cambridge Analytica, a British political consulting firm, is the most prominent example of a company that accessed the personal data of citizens worldwide and used it to construct PSYOPs and influence elections around the globe.

Cambridge Analytica described its mission as "us[ing] advanced scientific research and social analysis techniques, adapted for civilian use from military applications, to better understand behavior within electorates." The firm combined the military practice of PSYOPs with user information to influence the way people vote via social media platforms. Its aim was to bring a potentially powerful new weapon to the market, one that would allow wealthy investors to reshape politics in their vision. Its campaigns ranged from assisting in several Kenyan elections to helping Donald Trump in his bid for the presidency in the 2016 U.S. election.

The company played a dominant role in Kenyan President Uhuru Kenyatta's election campaigns in 2013 and 2017. Cambridge Analytica conducted a vast political research effort, surveying 50,000 respondents, and used the data it collected to craft the campaigns. It worked with a local research partner "to ensure that variations in language and customs were respected," and then used social media to target young voters. Cambridge Analytica used its data to run misinformation and disinformation campaigns intended to sway the youth vote in favor of Kenyatta. Kenya's presidential election in August 2017 pitted Kenyatta against Raila Odinga. The result was annulled by Kenya's Supreme Court in September due to procedural irregularities, and the election was rerun in October, when Kenyatta won with 98 percent of the vote after Odinga withdrew from the race.

However, the 2017 election saw two key incidents of misinformation. The first was a video entitled "The Real Raila," which depicted a world in 2020 under an imagined Odinga presidency; it was viewed more than 141,000 times. The second was misinformation about the violence surrounding the electoral process: people took videos and photos from the past and posted them as current, ongoing events. Rebekka Rumpel, a research assistant for the Africa Programme at the think tank Chatham House, said, "there were certainly a number of sites and campaign ads that were flagged by Kenyans and international organizations as using scaremongering tactics to win votes ahead of the August 2017 elections."

Cambridge Analytica is best known for being hired by Trump during his 2016 presidential campaign. The firm harvested private information from the Facebook profiles of as many as 87 million users without their permission and used it to create targeted advertisements meant to sway voters away from presidential candidate Hillary Clinton. Facebook's permissive stance toward software developers allowed Cambridge Analytica to access the data and build psychographic profiles of numerous voters.

Facebook offers a number of technology tools for software developers, one of the most popular being Facebook Login, which lets people log in to a website or app using their Facebook account instead of creating new credentials. Back in 2014, Facebook's terms of service allowed developers to collect some information on the friend networks of people who used Facebook Login. That meant that while a single user may have agreed to hand over their data, developers could also access some data about that user's friends. Cambridge Analytica obtained this data via a third party, collecting details on "roughly 30 million [people] containing enough information, including places of residence, that the company could match users to other records and build psychographic profiles."
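To make that access pattern concrete, here is a minimal, illustrative Python sketch: one consenting user's login token is used to enumerate that user's friends and then to request profile fields for each of them. The endpoint paths, field names, and the `harvest_via_login` function are simplified stand-ins for illustration, not the exact Graph API of 2014.

```python
import requests

GRAPH = "https://graph.facebook.com/v1.0"  # illustrative, era-style endpoint

def harvest_via_login(user_token: str) -> list[dict]:
    """Sketch of the pre-2015 pattern: one consenting user, many profiles."""
    # Step 1: the app user logs in via Facebook Login and consents;
    # their own profile comes first.
    me = requests.get(f"{GRAPH}/me", params={"access_token": user_token}).json()
    profiles = [me]

    # Step 2: under the old terms, that same token could list the user's friends.
    friends = requests.get(
        f"{GRAPH}/me/friends", params={"access_token": user_token}
    ).json().get("data", [])

    # Step 3: pull profile fields for each friend -- people who never
    # installed the app and never consented (field names illustrative).
    for friend in friends:
        profiles.append(
            requests.get(
                f"{GRAPH}/{friend['id']}",
                params={"access_token": user_token,
                        "fields": "name,location,likes"},
            ).json()
        )
    return profiles
```

The multiplier is the point: by the article's own figures, a comparatively small pool of consenting app users was enough to yield detailed records on tens of millions of people who never used the app at all.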

Christopher Wylie, a co-founder of Cambridge Analytica, stated that its leaders wanted "to fight a culture war in America…Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war." With this as its mission, Cambridge Analytica did exactly that in the 2016 U.S. presidential campaign. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior. Cambridge Analytica instead used private data harvested from Facebook to identify users' inherent psychological traits and design powerful political messages aimed at swaying voters. As in Kenya, the 2016 U.S. election saw the proliferation of misinformation and disinformation across voters' social media platforms.
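As a sense of what "psychographic" targeting means in practice, here is a schematic Python sketch: page likes are mapped to Big Five (OCEAN) personality-trait scores, and the ad variant written for the dominant trait is served. The like-to-trait weights and the ad copy are hypothetical placeholders; a real model would be learned from data, and nothing here is the firm's actual system.

```python
# Hypothetical like->trait weights; purely illustrative numbers.
LIKE_WEIGHTS = {
    "skydiving":        {"openness": 0.2, "extraversion": 0.4},
    "country_music":    {"conscientiousness": 0.3},
    "philosophy_memes": {"openness": 0.5},
    "home_security":    {"neuroticism": 0.4, "conscientiousness": 0.2},
}

# One ad variant per dominant trait (invented copy).
AD_VARIANTS = {
    "openness":          "Change is coming -- be part of something new.",
    "conscientiousness": "Protect what you've worked hard to build.",
    "extraversion":      "Join millions of your neighbors on election day.",
    "neuroticism":       "Your family's safety is on the ballot.",
}

def psychographic_ad(likes: list[str]) -> str:
    """Score OCEAN traits from page likes, then pick the matching ad."""
    scores: dict[str, float] = {}
    for like in likes:
        for trait, weight in LIKE_WEIGHTS.get(like, {}).items():
            scores[trait] = scores.get(trait, 0.0) + weight
    dominant = max(scores, key=scores.get) if scores else "conscientiousness"
    return AD_VARIANTS[dominant]

print(psychographic_ad(["home_security", "country_music"]))
# -> "Protect what you've worked hard to build."
```

The contrast with traditional targeting is that the message is tuned to a predicted personality, not just to party registration or purchase history.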

Cambridge Analytica has since gone out of business due to its part in the data breach scandal with Facebook. However, many companies that work in similar ways still exist: Clarity Campaigns, BlueLabs, and Civis Analytics, to name just a few. Social media platforms like Facebook are being pressured by politicians to strengthen their data privacy terms in light of public protest. The data at the heart of the Cambridge Analytica scandal was collected without the consent or knowledge of the people whose information was being used psychologically against them. In July 2019, the Federal Trade Commission fined Facebook $5 billion to settle its investigation into the data breach. Additionally, lawsuits from citizens affected by the breach were filed in courts across several U.S. states. It may never be possible to say definitively whether data manipulation tipped the results of the American presidential election. Yet it is evident that "the Internet has created a political ecosystem in which the extreme, the incendiary, and the polarising tend to prevail over the considered, the rational, and the consensus-seeking."

This data breach calls into question whether governments around the world can control these social media giants and how they obtain and use their users' information. The Cambridge Analytica and Facebook data scandal has made people all over the world more aware that their information can be acquired and used by a variety of public and private actors without their explicit consent. The fallout has prompted politicians to recognize the way these monopolistic companies mine and exploit our data for vast profits, and governments around the world are strengthening their regulatory frameworks for the digital economy.

Another question that has been brought into play is whether governments can control the AI-powered information bubbles used by tech giants Google, Amazon, Facebook, and Microsoft. In 2009, Google announced that it would use fifty-seven signals to customize search results. These customized results draw on data collected about each user's activity to tailor information to that person's interests, desires, and preferences.

Initially, these information bubbles were meant to help users find information that conforms to their preferences. However, they also allow personalization algorithms to feed people the information most likely to align with their existing beliefs. As a result, people are segregating themselves into information bubbles where their own views are reinforced and they are not exposed to opposing ones. Eli Pariser, who coined the term "filter bubble," has stated: "Left to their own devices, personalization filters serve up a kind of invisible auto propaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown." Personalized algorithms lead people to believe that the information they consume represents the undisputed truth, when in fact it may be far from reality.
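The mechanism Pariser describes is easy to sketch. Below is a minimal, schematic Python example of a personalization filter that ranks candidate posts by overlap with topics the user already engages with; the data structures and scoring are invented for illustration and represent no real platform's algorithm.

```python
from collections import Counter

def rank_feed(candidate_posts, engagement_history, feed_size=5):
    """Rank posts by overlap with topics the user already engages with."""
    # Build a preference profile from past clicks and likes.
    preferences = Counter(
        topic for post in engagement_history for topic in post["topics"]
    )

    def relevance(post):
        return sum(preferences[topic] for topic in post["topics"])

    # The most familiar posts rise to the top; anything outside the user's
    # existing interests scores zero and effectively vanishes from the feed.
    return sorted(candidate_posts, key=relevance, reverse=True)[:feed_size]
```

Because every post the user then engages with is fed back into the profile, each ranking is narrower than the last; that feedback loop is the "invisible auto propaganda" Pariser warns about.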

This could be seen in the 2016 U.S. election, where people continuously saw polls showing Clinton in the lead, which influenced some to believe they did not have to go out and vote. Additionally, those on the fence between Clinton and Trump were targeted with ads made by Cambridge Analytica. The firm created personalized pro-Trump propaganda that pushed viewers toward voting for Trump. After a user viewed one Trump-related ad, personalization algorithms continuously served that user pro-Trump information, real or fake.

Yet regulation is difficult in an ecosystem with so many actors, many of whom use covert tactics. Cambridge Analytica stated that it deployed propaganda anonymously and would "just put information into the bloodstream of the internet and then watch[ed] it grow, g[a]ve it a little push every now and again." People would be exposed to this information every day, and it would "infiltrate the online community and expand but with no branding – so it's unattributable, untrackable."

So, what can politicians do about this? They will have to choose between establishing advanced regulatory frameworks to better monitor and control the online sphere and continuing to give in to these data giants, allowing them to keep amassing information on global citizens and disrupting democratic processes. Politicians could force social media platforms to create stricter privacy terms, adopt fact-checking tools, comply with independent regulatory oversight, declare the origins of political advertisements, and ensure their algorithms and AI tools are conducive to a pluralistic media ecosystem.

Some social media platforms, including Facebook, Instagram, and Twitter, have already adopted fact-checking tools. Facebook has used third-party fact-checkers to reduce the volume of false information since 2016. On October 14, 2020, Facebook also announced that it used other methods to reduce the distribution of potential disinformation, without detailing what those methods are. On the same day, Twitter users reported that they could not share certain links and would get an alert saying, "Your Tweet could not be sent because this link has been identified by Twitter or our partners as being potentially harmful." Angie Holan, the editor-in-chief of PolitiFact, questioned Twitter, saying, "Who are these partners they (Twitter) speak of? Has Twitter partnered with fact-checkers without telling anyone? It would be news to me." These fact-checking tools are still being questioned by the International Fact-Checking Network (IFCN) and are too new for anyone to know whether they are effective in reducing the amount of false information online. More time and research are needed to conclude whether fact-checking tools are useful in combating this age of misinformation.

Legislators outside the United States have also passed rules on monitoring social media. The European Union's General Data Protection Regulation (GDPR) sets rules on how companies, including social media platforms, store and use people's data. Additionally, Australia passed the Sharing of Abhorrent Violent Material Act in 2019, which introduced criminal penalties for social media companies, jail sentences of up to three years for tech executives, and financial penalties worth up to 10 percent of a company's global turnover.

It is increasingly evident that "politicians cannot control the digital giant with rules from the past" and that legislative change is necessary to effectively regulate today's big tech companies and their expansive use of big data.


Jamie Vang

Jamie Vang is currently a Progressive Degree Program (PDP) student at the University of Southern California (USC). She is simultaneously getting a BA in International Relations Global Business (IRGB) and a Master of Studies in Law (MSL) degree. She hopes to attend law school after completing both degrees in December 2021. Jamie is an avid watcher of documentaries and enjoys those that deal with social issues that are currently afflicting our world. She also enjoys traveling and has been to Italy, the United Kingdom, Thailand, Costa Rica, and Jamaica and wishes to continue to explore and experience the many cultures of the world.

jvang@usc.edu