Artificial Intelligence Has Already Exacerbated Issues of Equity. Here’s How We Can Fix It.

LOS ANGELES — At the start of Trump’s second presidency in January, many of Biden’s executive orders were rescinded — one of which concerned the ethical use of artificial intelligence. 

Titled ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,’ the order aimed to govern AI in order to tackle threats such as fraud, discrimination and disinformation. In practice, this entailed measures such as implementing new risk-management strategies, labeling AI-generated content and promoting competition by supporting small businesses. 

In place of Biden’s order, Trump’s replacement, titled ‘Removing Barriers to American Leadership in Artificial Intelligence,’ most notably maintains that the United States’ priority is to promote American dominance in AI internationally. Both orders promote innovation, but that is where the resemblance ends. Following the recent trend of removing DEI-related content, nowhere in the six sections of the new executive order is there any mention of equity. 

It is true that if the United States wants to remain competitive on the international stage — in both economic and national security contexts — it is important to devote resources to developing and implementing the best artificial intelligence products possible. From private use to military applications, AI technology is, in many ways, the new space race. The benefits to the American people, both in securing influence on the global stage and in enhancing quality of life at home, should not be understated; hence the technology’s widespread adoption. 

Since the rise of generative technologies like ChatGPT, AI’s use has grown at a rapid pace and will only continue to do so. One study even suggests that “77% of companies are either using or exploring the use of AI in their businesses, and 83% of companies claim that AI is a top priority in their business plans.” Accordingly, this has led to the expansion of AI into countless fields and products, many of which actually go unrecognized in day-to-day life. From classic digital assistants like Siri and Alexa to early disease diagnosis in healthcare, the list is virtually limitless. 

Alongside this growth, problems of inequity have only been exacerbated. Tenant selection, financial lending and hiring processes have all been tainted by the bias inherent in AI. One side of the issue lies in the information used for each of these applications. Because AIs are trained on data, whatever bias is present in the dataset will manifest itself in the decisions produced. Since companies that screen potential renters, borrowers or employees rely on old court records and criminal databases, their automated decisions can reflect systemic prejudices. Sometimes, the system simply gets it wrong. In one case, a woman was denied an apartment because of a faulty background check that combined four other individuals’ records with her own. Because all of the women had the same name, the system mistakenly attributed burglary, meth distribution, assault and more to her record. This demonstrated potential for error, combined with the technology’s black-box nature, creates a situation where all parties are left confused.

The most concerning application of all is within law enforcement. Around the globe, artificial intelligence has been incorporated into law enforcement agencies’ operations, motivated by arguments that it increases efficiency and public safety. Of all the implementations, predictive policing is by far the most common. Essentially, this method uses data — from paroled populations to economic conditions — to forecast where, when and what crime will occur. Then, it provides recommendations to prevent it.
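For readers curious about the mechanics, here is a minimal sketch of the general idea behind hotspot-style forecasting, using made-up incident records and a simple frequency count per grid cell. Real systems are proprietary and far more elaborate; nothing here reflects any vendor’s actual product.

```python
# Minimal sketch of hotspot-style predictive policing on made-up data.
# Each historical incident is binned into a coarse geographic grid cell,
# and the cells with the most past incidents are "forecast" as hotspots.
from collections import Counter

# (latitude, longitude, hour_of_week, offense) -- hypothetical records
incidents = [
    (34.05, -118.25, 22, "burglary"),
    (34.05, -118.25, 23, "assault"),
    (34.10, -118.30, 10, "theft"),
    (34.05, -118.25, 21, "theft"),
]

def grid_cell(lat, lon, size=0.05):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / size) * size, round(lon / size) * size)

counts = Counter(grid_cell(lat, lon) for lat, lon, _, _ in incidents)

# "Forecast": rank cells by historical frequency and recommend patrols there.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents -> recommended patrol priority")
```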

One example of who is using this technology is Argentina, which is “us[ing] machine learning algorithms to analyze historical crime data to predict future crimes and help prevent them.” Japan has also used predictive policing strategies, with a “deep learning” algorithm that pulls in real-time police force statistics and crime data alongside weather, time and geographical conditions. The most extensive user of all, though, is Singapore. While its law enforcement also relies on predictive technology, what distinguishes Singapore’s use of AI in this sector is the scale at which data is collected through sensors: UAVs, facial recognition, drones and smart glasses are all part of how the police and civil defense forces record information.

The U.S. takes similar approaches to its international peers. The details vary by state, but the overall idea is the same. Machine learning (ML) — computers’ ability to learn from data and subsequently perform tasks without explicit instructions — leverages large datasets to predict future criminal activity. This data typically includes what the crime was, when and where it happened, and further local statistics such as median income and past crime rates. ML is often combined with computer vision, which teaches technology like security cameras to categorize objects such as people, vehicles and weapons in their field of view through repeated exposure to visual information.
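As an illustration of the computer-vision half of that pairing, the sketch below runs a single stand-in video frame through an off-the-shelf, COCO-pretrained detector from the torchvision library. Note that COCO’s label set covers people and vehicles but not weapons, which would require custom training data; that dependence on training data is exactly the point the next section takes up. This is illustrative only, not any agency’s actual pipeline.

```python
# Categorizing objects in a camera frame with a COCO-pretrained detector.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

COCO_NAMES = {1: "person", 3: "car", 6: "bus", 8: "truck"}  # subset of COCO labels

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # stand-in for one video frame, values in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]

# Report confident detections by category, as a monitoring system might.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(COCO_NAMES.get(int(label), "other"), float(score))
```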

Ideally, these tools would produce sound predictions about crime, increasing efficiency while lowering costs. However, the technology has troubling drawbacks, many of which have already begun affecting society. Public mistrust of police has long been felt across the U.S., and for those who have consistently been at risk, the growing incorporation of AI-based technology isn’t helping. The core issue is the historical crime data AI models are trained on. By relying on data collected amid over-policing and under discriminatory criminal laws, predictive policing algorithms inherit bias. For example, “if a predictive policing system is trained on arrest data that reflects racially disparate enforcement practices, it may disproportionately flag certain communities as high risk, leading to the over-policing of already marginalized groups.” 
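A toy simulation makes the mechanism concrete. In the hypothetical scenario below, two neighborhoods have identical underlying offense rates, but one is watched more closely, so more of its offenses end up in the records; a naive model scoring risk from recorded arrests alone then flags the more heavily policed neighborhood. The numbers are invented purely for illustration.

```python
# Toy illustration of how enforcement bias leaks into "risk" scores.
import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.10                   # identical in both neighborhoods
PATROL_INTENSITY = {"A": 0.9, "B": 0.45}   # chance an offense is ever recorded

recorded = {name: 0 for name in PATROL_INTENSITY}
for name, detect_prob in PATROL_INTENSITY.items():
    for _ in range(10_000):                # opportunities per neighborhood
        if random.random() < TRUE_OFFENSE_RATE and random.random() < detect_prob:
            recorded[name] += 1

# A naive "predictive" score is just the recorded arrest rate.
for name, count in recorded.items():
    print(f"neighborhood {name}: recorded rate {count / 10_000:.3f}")
# Neighborhood A looks roughly twice as risky, purely because it is watched more.
```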

According to six U.S. Senators in a letter to the Department of Justice, “mounting evidence indicates that predictive policing technologies do not reduce crime… Instead, they worsen the unequal treatment of Americans of color by law enforcement.” So, what is there to do? The tool meant to improve policing seems to actually make it less effective. The Senators’ recommendation was to scrap the technology altogether until predictive policing could be studied further. However, as is the case with many innovations, once they’re put into the world, it’s pretty challenging to take them out of it. 

Therefore, rather than instituting a full pause on the use of AI in law enforcement, perhaps it would be better to alter the approach. One recommendation is to prioritize human supervision in each AI implementation. By requiring continued human involvement in automated processes, the black-box nature of artificial intelligence begins to recede, allowing people to better understand the models they are working with. One way to achieve this in law enforcement and beyond is through required audits of AI usage: by monitoring a model’s outputs, its intended purpose and how it is actually used, auditors can gauge how effective and ethical it is. Another way to increase an AI’s transparency is to incorporate explanations into its outputs. By engineering models to include descriptions of their logic, tailored to the expertise of the people they work with, the partnership between human and technology becomes much more seamless and promotes collaboration rather than replacement.  
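To make the auditing idea a little more tangible, here is a sketch of what a per-decision audit record might capture so that a human reviewer can later check a model’s outputs against its stated purpose and actual use. The field names, the model name and the plain-language “explanation” are all hypothetical, not drawn from any deployed system.

```python
# Sketch of a per-decision audit record for an AI-assisted recommendation.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    model_name: str
    stated_purpose: str    # what the algorithm is supposed to do
    inputs_summary: str    # what the model actually saw
    output: str            # what it recommended
    explanation: str       # reasoning phrased for the human operator
    human_reviewer: str    # who signed off, keeping a person in the loop
    timestamp: str

record = AuditRecord(
    model_name="patrol-priority-v2",
    stated_purpose="rank patrol areas by forecast incident volume",
    inputs_summary="12 months of incident reports, grid cell 34.05,-118.25",
    output="priority=high",
    explanation="Flagged because recorded incidents rose 40% quarter over quarter.",
    human_reviewer="Sgt. Example",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```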

Another avenue for ensuring equitable use of artificial intelligence is legislation. Currently, there are no comprehensive, enforceable federal rules governing how people use AI. Some laws have attempted to increase oversight — such as the National AI Initiative Act of 2020 — but in reality the nation is left to rely on loose guidelines, such as Biden’s White House Blueprint for an AI Bill of Rights. At the state level, legislation varies: at least eight states have enacted laws regulating artificial intelligence, while three have not even proposed any. Federally, there are over 30 bills in the works, seeking both to expand AI implementation and to mitigate its harms. In practice, this could mean leveraging the technology to speed up cargo inspections along the border (the CATCH Act) or requiring disclosure of computer-generated respondents in text messages and phone calls (the QUIET Act). To keep equity at the forefront of AI legislation, it will be important to incorporate legal checks, especially around transparency about when and how the technology is used.

These two paths both work to minimize harm by regulating the use of artificial intelligence. Consequently, the narrative surrounding AI and equity is often a negative one. However, there is still potential to change the conversation: using AI to promote the study of justice can transform a tool defined by uncertainty into a tool that defines the uncertain.

An instance of this effort can be found at the University of Southern California. Co-led by Drs. Benjamin Graham and Morteza Dehghani, the Everyday Respect project began after the Los Angeles Board of Police Commissioners asked them to analyze a year’s worth of bodycam footage. The request came in response to the success of a similar Stanford study, which began in 2014 after a $10.9 million settlement over serious police misconduct required the Oakland Police Department to collect data on stops by race. 

Through this initiative, they are “working to develop community-informed AI models to study communication between officers and drivers during traffic stops.” After surveying stakeholders about what a “good” interaction entails, video annotators (ranging from formerly incarcerated people to retired cops) now analyze bodycam footage to create an AI model that will automatically rate exchanges at stops. Once the algorithm is complete, it will be fed 30,000 LAPD police stops. This will allow for a better understanding of how different communities perceive officer behavior, which in turn should inform the creation of better training programs.
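The general recipe, annotators rate transcript snippets and a model learns to reproduce those ratings on new stops, can be sketched in a few lines. The snippets, labels and modeling choices below are invented for illustration; they do not represent the Everyday Respect team’s actual models or data.

```python
# Generic sketch: learn to score officer utterances from annotator ratings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated officer utterances: 1 = rated respectful, 0 = not.
snippets = [
    "Good evening, do you know why I pulled you over?",
    "License and registration, please, take your time.",
    "Step out of the car now.",
    "I'm not going to ask you again.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

# Once trained on enough annotated footage, the model can score new stops.
print(model.predict_proba(["Thanks for pulling over safely, sir."])[0][1])
```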

This process does not have to be limited to the LAPD. In fact, part of the project’s goal is to make the language model available to other police departments. As mentioned earlier, this kind of transparency is part of what makes artificial intelligence trustworthy enough to rely on for such significant, sensitive responsibilities as procedural justice. 

Graham, one of the professors in charge of the project through USC’s Security and Political Economy Lab, believes this scalable process is one that other countries will consider adopting to analyze their own police forces, if they are not doing so already. “There are tools in the works in a lot of places that apply some version of AI to evaluate some aspect of policing,” Graham said. “I think we’re going to see a lot more of that over the next few years.”

According to Graham, there is a lot of potential in AI. The key is making sure it’s properly planned, considered and supervised. “It is not a technology without risks, but I think it enables analysis of data at scale, and it enables analysis of data in ways that respect the privacy of the people depicted in that data,” Graham said. “Carefully designed, it can be a really powerful tool for transparency, for accountability, for learning and improvement. So it can definitely be a powerful force for good.”

While it’s clear that inherent issues of equity arise as the use of artificial intelligence becomes more commonplace in every facet of life, the benefits are undeniable. Advances in technology at this scale do not come often, so it is not wrong to take advantage of them. However, if no concrete, widespread efforts to regulate this use emerge, the nation may suffer the consequences of unjust policing, public mistrust and inaccurate AI outputs. Hopefully, institutions — both national and international — will keep in mind the important role that human supervision and legislation play in creating a more technologically sound future. Combined with the promotion of innovative ways to implement AI, such as analyzing police interactions at traffic stops, there is reason to believe the technology can contribute to real good while minimizing harm.
