“Thank you for calling the office of—” I was interrupted before I could finish greeting the person on the other end of the line.
“Hi! This is Uzi Garcia,” the caller said.
I was taken aback first by the caller’s distinctively young voice and then by something off-putting in his tone. As an intern on Capitol Hill, I routinely take calls from constituents, and I knew immediately that this caller was different. Something about his voice was off, but I could not pinpoint exactly what.
Then I heard him say, “I am a fourth grader at Robb Elementary School in Texas. Or at least I was until a man with an AR-15 killed 18 of my classmates, two teachers and me.”
In February 2024, the parents of six victims of gun violence launched “Shotline,” a campaign that uses artificial intelligence to recreate their children’s voices in calls to Congress. With the click of a button, users can send one of the six AI-generated calls to a congressional office of their choice. All calls start the same way: the AI voice introduces itself, shares a bit of the child’s personality, describes the details of their death and then calls for stricter gun laws that might have prevented it. As of Oct. 28, 2024, 170,547 calls have been submitted to Congress.
According to an interview with one of the six victims’ parents, the process for obtaining lifelike audio of their deceased children was relatively easy. All it took was uploading one short audio clip of their child’s voice to the platform ElevenLabs, an AI voice generator that supports 29 languages and accents. From there, parents typed in their messages, and an AI-generated version of their child repeated the words back to them. While the voices sound similar to the real children’s, there is something distinctly unsettling about hearing a dead person talk to you — especially a child who was murdered in such a horrific manner.
Extreme and powerful gun-violence advocacy campaigns are not uncommon — people still remember the “Evan” TV ad, in which seemingly ordinary scenes in a school setting subtly revealed signs of potential violence among students. However, Shotline crosses a new threshold as one of the first campaigns to use artificial intelligence in this way. Having listened to these calls and heard the voices’ disturbing, not-quite-human cadences, I can say the impact of their messages cannot be overstated.
This intensity and shock have sparked debate on the use of artificial intelligence, especially to replicate those who have died. Proponents of the campaign say that advocating to Congress in conventional ways has gone nowhere. The father of Joaquin Oliver, a 17-year-old victim of the Marjory Stoneman Douglas High School shooting, believes that if lawmakers won’t listen to him, maybe hearing from his son will have an impact. In his opinion, the campaign is supposed to disturb people. “Like, if you find this uncomfortable, … I think that you don’t know what uncomfortable means. I can tell you about feeling uncomfortable. When they let you know that your son, your loved one, was shot and you won’t be able to see him anymore.”
The general public has had mixed views as well. In the comment section of one of Shotline’s posts, one user wrote, “What if everybody did this? It will break avenues to contact representatives when everyone does this.” Others see the Shotline project as exploiting the dead: “What you’re doing is disgusting and shameful. Putting words in dead people’s mouths, literally in their voices. Exploitation.”
Other people are praising Shotline’s leaders for their courage and savvy in using this new tool to spur controversy and dialogue. One proponent states, “This is an amazing tool, and I’m so proud to have seen you guys here in Tulsa. You are doing great things in the name of love.”
Along with the ethical debates, a widely overlooked ramification is how this type of advocacy could change the way congressional offices take calls. Thousands of congressional staffers currently field constituent messages. Call volume varies, but my own experience suggests an office can receive hundreds of calls a day. Moreover, messages are taken only from real-life constituents; automated messages are rare and ignored. If AI-generated calls to Congress become more prevalent and the technology improves until the voices are indistinguishable from real ones, how will congressional staff tell the difference? Petitioning your representatives is integral to political engagement, but these calls have the potential to complicate constituent engagement and representation.
AI-driven advocacy campaigns such as Shotline sit at a fascinating intersection of technology, ethics and activism. The chill of hearing the simulated voices of deceased victims pleading for legislative action not only challenges conventional advocacy tactics but also raises questions about AI’s broader role in shaping public discourse and policymaking. As technology continues to advance, the ethical considerations surrounding its use in all areas of life will become increasingly complex. As we grapple with these dilemmas, it becomes imperative to navigate the intersection of technology and activism with sensitivity, transparency and a steadfast commitment to ethical principles.