Artificial Intelligence Has Already Exacerbated Issues of Equity. Here’s How We Can Fix It.
Tue, 05 Aug 2025

LOS ANGELES — At the start of Trump’s second presidency in January, a multitude of Biden’s executive orders were rescinded — one of which concerned the ethical use of artificial intelligence. 

Titled ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,’ the order aimed to prioritize AI governance that tackles threats such as fraud, discrimination and disinformation. In practice, this entailed measures such as implementing new risk-management strategies, labeling AI-generated content and promoting competition by supporting small businesses.

In place of Biden’s order, Trump’s replacement, titled ‘Removing Barriers to American Leadership in Artificial Intelligence,’ most notably maintains that the United States’ priority is to promote American dominance in AI on the international stage. Both orders promote innovation, but that is where the resemblance ends. In keeping with the recent trend of removing DEI-related content, nowhere in the order’s six sections is there any mention of equity.

It is true that if the United States wants to remain competitive on the international stage — in both economic and national security contexts — it is important to devote resources to developing and implementing the best artificial intelligence products possible. From private use to military applications, AI technology is, in many ways, the new space race of today. The benefits to the American people, both in securing influence on the global scale and enhancing quality of life at home, should not be understated; hence the technology’s widespread adoption. 

Since the rise of generative technologies like ChatGPT, AI use has grown at a rapid pace and will only continue to do so. One study even suggests that “77% of companies are either using or exploring the use of AI in their businesses, and 83% of companies claim that AI is a top priority in their business plans.” This growth has pushed AI into countless fields and products, many of which go unnoticed in day-to-day life. From classic digital assistants like Siri and Alexa to early disease diagnosis in healthcare, the list is virtually limitless.

Alongside this growth, problems of inequity have only been exacerbated. Tenant selection, financial lending and hiring processes have all been tainted by the bias inherent in AI. One side of the issue lies in the information used for each of these applications. Because AI systems are trained on data, whatever bias is present in the dataset will manifest itself in the decisions produced. Since companies that screen potential renters, borrowers or employees rely on old court records and criminal databases, their decisions can reflect systemic prejudices. Sometimes, the trained system simply gets it wrong. In one case, a woman was denied an apartment because of a faulty background check that combined four other individuals’ records with her own. As all of the women had the same name, the system mistakenly attributed burglary, meth distribution, assault and more to her record. This demonstrated potential for error, combined with the technology’s black-box nature, leaves all parties confused.

The most concerning application of all is in law enforcement. Around the globe, agencies have incorporated artificial intelligence into their operations, motivated by arguments that AI can increase efficiency and public safety. Of these implementations, predictive policing is by far the most common. Essentially, this method uses data, from paroled populations to economic conditions, to forecast where, when and what crime will occur, and then provides recommendations to prevent it.

Argentina is one example of who is using this technology: it is “us[ing] machine learning algorithms to analyze historical crime data to predict future crimes and help prevent them.” Japan has also used predictive policing strategies, deploying a “deep learning” algorithm that pulls in real-time police force statistics and crime data alongside weather, time and geographic conditions. The most extensive adopter of all, though, is Singapore. Its law enforcement likewise relies on predictive technology, but what distinguishes Singapore’s use of AI in this sector is the scale at which data is collected through sensors: UAVs, facial recognition, drones and smart glasses are all part of how the police and civil defense forces record information.

The U.S. takes similar approaches to its international peers. The details vary by state, but the overall idea is the same. Machine learning (ML) — computers’ ability to learn from data and subsequently perform tasks without explicit instructions — leverages large datasets to predict future criminal activity. This data typically records what the crime was, when and where it happened, and further locational statistics such as median income and past crime rates. ML is often combined with computer vision, which teaches technology like security cameras to categorize objects such as people, vehicles and weapons in their field of view through repeated exposure to visual information.
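
To make that pipeline concrete, the sketch below shows the general shape of such a system: historical incident records with time, place and neighborhood statistics are used to fit a classifier that scores new area/time windows by predicted risk. The data, column names and model choice are all hypothetical, and the sketch illustrates only the technique described above, not any agency’s actual system. It also makes the bias problem visible: a feature like prior arrests simply encodes past enforcement patterns.

```python
# Minimal sketch of a predictive-policing-style classifier (hypothetical data and columns).
# The model can only reproduce whatever patterns, including enforcement biases,
# exist in the historical records it is trained on.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: one row per area/time window.
records = pd.DataFrame({
    "hour": [22, 3, 14, 23, 2, 15],
    "day_of_week": [5, 6, 2, 5, 6, 3],
    "median_income": [28000, 31000, 72000, 27000, 30000, 69000],
    "prior_arrests_in_area": [41, 38, 4, 45, 36, 6],   # reflects past enforcement, not true crime rates
    "incident_reported": [1, 1, 0, 1, 1, 0],           # label the model learns to predict
})

features = records.drop(columns=["incident_reported"])
labels = records["incident_reported"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features, labels)

# "Risk score" for a new area/time window; heavily policed areas inherit high scores
# because their history contains more recorded incidents.
new_window = pd.DataFrame([{
    "hour": 22, "day_of_week": 5, "median_income": 29000, "prior_arrests_in_area": 40,
}])
print(model.predict_proba(new_window))  # [[P(no incident), P(incident)]]
```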

Ideally, these tools would create sound predictions about crime, increasing efficiency while lowering costs. However, there are troubling drawbacks to this technology — many of which have already begun affecting society. Public mistrust of police has long been felt across the U.S., and for those who have consistently been at risk, the increasing incorporation of AI-based technology isn’t helping. The core issue is the historical crime data AI models are trained on. By relying on data collected amid over-policing and under discriminatory criminal laws, predictive policing algorithms inherit bias. For example, “if a predictive policing system is trained on arrest data that reflects racially disparate enforcement practices, it may disproportionately flag certain communities as high risk, leading to the over-policing of already marginalized groups.”

According to six U.S. Senators in a letter to the Department of Justice, “mounting evidence indicates that predictive policing technologies do not reduce crime… Instead, they worsen the unequal treatment of Americans of color by law enforcement.” So, what is there to do? The tool meant to improve policing seems to actually make it less effective. The Senators’ recommendation was to scrap the technology altogether until predictive policing could be studied further. However, as is the case with many innovations, once they’re put into the world, it’s very hard to take them out of it.

Therefore, rather than instituting a full pause on the use of AI in law enforcement, perhaps it would be better to alter the approach. One recommendation is to prioritize human supervision of each AI implementation. Requiring continued human involvement in automated processes would chip away at artificial intelligence’s black-box nature, allowing people to better understand the models they are working with. One way to achieve this in law enforcement and beyond is through required audits of AI usage. By monitoring a model’s outputs, its intended purpose and how it is actually used, auditors can gauge how effective and ethical it is. Another method of increasing an AI’s transparency is to incorporate explanations into its outputs. By engineering models to include descriptions of their logic — especially tailored to the expertise of the people they’re working with — the partnership between human and technology becomes much more seamless, promoting collaboration rather than replacement.
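
A simple way to picture the audit recommendation is a thin logging layer around whatever model is in use, recording each decision’s inputs, output and a plain-language rationale for later human review. The sketch below is a generic, hypothetical illustration of that idea (the function names, threshold and log format are invented), not a description of any agency’s actual tooling.

```python
# Hypothetical sketch of an audit wrapper around an automated decision system.
# Each decision is appended to a log with its inputs, output and rationale,
# so human reviewers can later check how the model is behaving.
import json
import time
from typing import Callable


def audited(decision_fn: Callable[[dict], dict], log_path: str = "decision_audit.jsonl"):
    """Wrap a decision function so every call is written to an audit log."""
    def wrapper(inputs: dict) -> dict:
        decision = decision_fn(inputs)
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision["label"],
            "rationale": decision["rationale"],  # plain-language explanation for reviewers
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return decision
    return wrapper


@audited
def flag_area(inputs: dict) -> dict:
    # Stand-in rule for whatever model an agency actually uses; a real system would
    # call the model and generate an explanation suited to the reviewer's expertise.
    elevated = inputs["prior_incidents"] > 30
    return {
        "label": "elevated" if elevated else "baseline",
        "rationale": f"prior_incidents={inputs['prior_incidents']} compared against threshold 30",
    }


print(flag_area({"area_id": "A-12", "prior_incidents": 42}))
```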

Another side of ensuring equitable use of artificial intelligence is legislation. At the federal level, there are currently no comprehensive, enforceable rules governing how people use AI. Some laws have attempted to increase oversight — such as the National AI Initiative Act of 2020 — but in reality the nation is left to rely on loose guidelines, such as Biden’s White House Blueprint for an AI Bill of Rights. At the state level, legislation varies: at least eight states have enacted laws regulating artificial intelligence, while three have not even proposed any. Federally, there are over 30 bills in the works. They seek both to expand AI implementation and to mitigate its harms. In practice, this could entail leveraging the technology to speed up cargo inspections along the border (the CATCH Act) or requiring disclosure of computer-generated respondents in text messages and phone calls (the QUIET Act). To keep equity at the forefront of AI legislation, it will be important to incorporate legal checks, especially around the transparency of when and how the technology is used.

Both of these paths work to minimize harm by regulating the use of artificial intelligence, and as a result the narrative surrounding AI and equity is often a negative one. However, there is still potential to change the conversation. Using AI to advance the study of justice can transform a tool defined by uncertainty into a tool that defines the uncertain.

An instance of this effort can be found at the University of Southern California. Co-led by Drs. Benjamin Graham and Morteza Dehghani, the Everyday Respect project began after the Los Angeles Board of Police Commissioners asked them to analyze a year’s worth of bodycam footage. The request came in response to the success of a similar Stanford study, which started in 2014 after a $10.9 million settlement agreement over serious police misconduct required the Oakland Police Department to collect information on stops by race.

Through this initiative, they are “working to develop community-informed AI models to study communication between officers and drivers during traffic stops.” After surveying stakeholders about what a “good” interaction entails, video annotators (ranging from the formerly incarcerated to retired cops) now analyze bodycam footage to build an AI model that will automatically rate exchanges during stops. Once the algorithm is complete, it will be fed 30,000 LAPD police stops. This will allow for a better understanding of how different communities perceive officer behavior, which in turn should inform the creation of better training programs.
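
The article does not spell out the project’s modeling details, but the general pattern it describes, human annotations used to train a model that then scores new exchanges, can be sketched roughly as follows. The snippets, labels and pipeline here are invented for illustration and are not the Everyday Respect project’s actual data or method.

```python
# Illustrative sketch only: fit a simple text model on human-annotated snippets
# so it can score new exchanges. The data and labels are invented; the real
# project's footage, annotations and models are far richer than this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated utterances: 1 = annotators rated the exchange respectful, 0 = not.
snippets = [
    "Good evening, do you know why I stopped you tonight?",
    "License and registration please, take your time.",
    "Step out of the car now, I'm not asking again.",
    "Stop talking and keep your hands where I can see them.",
]
ratings = [1, 1, 0, 0]

rater = make_pipeline(TfidfVectorizer(), LogisticRegression())
rater.fit(snippets, ratings)

# At scale, a trained model like this would be run over transcripts from many stops.
new_exchange = ["Thanks for your patience, here's why I pulled you over."]
print(rater.predict_proba(new_exchange))  # [[P(not respectful), P(respectful)]]
```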

This process does not have to be limited to the LAPD. In fact, part of the project’s goal is to make the language model available to other police departments. As mentioned earlier, this kind of transparency is part of what makes artificial intelligence trustworthy enough to rely on for responsibilities as significant and sensitive as procedural justice.

Graham, who helps lead the project through USC’s Security and Political Economy Lab, believes this scalable process is one that other countries would consider adopting to analyze their own police forces, if they are not doing so already. “There are tools in the works in a lot of places that apply some version of AI to evaluate some aspect of policing,” Graham said. “I think we’re going to see a lot more of that over the next few years.”

According to Graham, there is a lot of potential in AI. The key is making sure it’s properly planned, considered and supervised. “It is not a technology without risks, but I think it enables analysis of data at scale, and it enables analysis of data in ways that respect the privacy of the people depicted in that data,” Graham said. “Carefully designed, it can be a really powerful tool for transparency, for accountability, for learning and improvement. So it can definitely be a powerful force for good.”

While it’s clear that issues of equity arise as artificial intelligence becomes more commonplace in every facet of life, the benefits are undeniable. Advances in technology at this scale do not come often, so it is not wrong to take advantage of them. However, if no concrete, widespread efforts to regulate this use emerge, the nation may suffer the consequences of unjust policing, public mistrust and inaccurate AI outputs. Hopefully, institutions — both national and international — keep in mind the role that human supervision and legislation play in creating a more technologically sound future. Combined with innovative ways to implement AI, such as analyzing police interactions at traffic stops, there is reason to believe the technology can contribute to real good while minimizing harm.

Missing SEA(t): Southeast Asia’s Exclusion from the AI Policy Conversation
Tue, 05 Aug 2025

Whether it be the G7 Hiroshima Process, the OECD AI principles or the three global AI summits in Bletchley Park, Seoul and Paris, high-profile international collaborations on artificial intelligence (AI) safety and governance have rapidly increased in recent years. However, many of these international dialogues rely on selective, club-based processes, leaving many Southeast Asian nations out of the picture. At the 2024 AI Seoul Summit, for instance, Singapore was the only Southeast Asian delegation in attendance, and Singapore is also the only Southeast Asian member of the Global Partnership on Artificial Intelligence (GPAI), an initiative focused on global AI governance.

While other international summits, such as the United Nations’ AI for Good Global Summit, have seen increased attendance in recent years, Southeast Asian nations remain disproportionately underrepresented, especially considering the region’s wide usage of AI platforms and software.

As Brookings scholars Shaun Ee and Jam Kraprayoon point out, “If you’re not at the table, you’re on the menu.” Underrepresentation on the international stage means that Southeast Asia, and other regions like it, will be increasingly vulnerable to the risks posed by frontier AI systems such as OpenAI’s o1 reasoning models; according to the company, these models use additional compute to spend more time “thinking,” enabling them to tackle more complex tasks and problems. Reportedly, they perform near a PhD-student level on challenging physics, chemistry and biology tasks. According to Yoshua Bengio, a computer science professor at the University of Montreal, this improved ability to reason can be misused to deceive users at a higher rate than GPT-4o. Including Southeast Asia in the global dialogue on AI governance is therefore crucial not only to the region but also to the broader Global North, given that robust safeguard systems require diverse testing settings. Additionally, the capacity to develop AI systems can be expanded through transnational talent exchange. But what exactly does it mean to be on the menu, and what will it take to get the region a proper seat at the table?

Although the February Paris AI Summit addressed AI safety, threats to Southeast Asia were barely discussed, despite an alarming 82 percent increase in cybercrime throughout Southeast Asia and a 174 percent increase in phishing attempts in Singapore alone between 2021 and 2022. Though broader safety concerns are often raised at these global summits, they are typically isolated from local contexts. For instance, ‘scam centers’ operating in Myanmar, Cambodia and Laos affect victims all across the region, but properly addressing them requires a specific understanding of the threat actors involved. More importantly, when mitigating these threats, it is important to note that much of Southeast Asia has more limited cybersecurity resources than North America and Europe. While Malaysia and Singapore have significantly strengthened their cybersecurity strategies over time, Thailand, Indonesia and the Philippines are still considered developing in terms of cyber capabilities, with countries such as Indonesia facing limited cybersecurity funding. Although serious cyberattacks are common, the region’s cyber resilience remains relatively low.

AI infrastructure in Southeast Asia is rapidly emerging, with major investments from tech corporations such as Microsoft and Nvidia into data centers and cloud services. Yet many local startups are missing out on their own AI boom. While approximately $20 billion is being invested in the Asia-Pacific region, only $1.7 billion has gone to Southeast Asia’s young AI firms. This disparity has raised concerns about the region’s ability to develop its private sector and compete with AI leaders such as China and the United States. Yet how can the region be expected to absorb such rapid investment flows without being given the space to participate in cutting-edge R&D and technical standards-setting? A seat in forums such as the International Network of AI Safety Institutes may incentivize domestic AI development, and such inclusion would be as beneficial to global investors as to the region; providing Southeast Asia with the needed technical insight and collaborative frameworks would strengthen the local AI sector, which in turn can mitigate geopolitical risk and offer a more robust, innovation-friendly market to the global AI ecosystem.

In order to push for a seat at the table, however, it is important to take a step back and assess why Southeast Asia is being left out to begin with. 

For starters, global AI summits typically reflect the agendas of major powers. Intensified technological rivalry between the U.S. and China has fostered a polarized environment in global AI governance, which has trickled down into the structure and makeup of international summits. For instance, the United Kingdom’s AI Safety Summit and Geneva’s AI for Good Global Summit typically draw U.S.-aligned participants such as the EU and South Korea, while Shanghai’s World Artificial Intelligence Conference and the BRICS summits typically reflect China’s digital diplomacy interests, such as sovereignty and state-centric regulation.

Consequently, Southeast Asia’s non-alignment stance means choosing not to fully engage in these summits to avoid signaling alignment with one bloc over another. By design, many global partnership initiatives are also inaccessible to the region. The Global Partnership on Artificial Intelligence (GPAI), for instance, strives for broad international participation, but its only Southeast Asian member is Singapore. GPAI and summits such as Bletchley Park and the AI Seoul Summit maintain restrictive, invitation-only membership processes, typically limited to countries with advanced AI R&D capacity. However, most Southeast Asian countries currently allocate less than 1% of their GDP to R&D, leading to talent shortages as capable professionals often move abroad for better opportunities. These compounding factors contribute to the region’s lack of influence in AI ethics and policy circles, influence that is often a core prerequisite for an invitation.

Given these challenges, what will it take for Southeast Asia to get a seat at the table and enter the space of these ‘global’ summits? 

ASEAN as a whole must work towards a unified AI development and cooperation framework. The status quo of fragmented approaches to AI governance makes coordinated advancement and regulation difficult. The most tangible regional action so far is the publication of the ASEAN Guide to AI Governance and Ethics 2024, which offers recommendations for government and non-government use of AI in the region. However, this document is non-binding and thus cannot impose sanctions if member states adopt different paths. This is visible in the region’s wide range of AI readiness, measured through pillars such as Government, Technology Sector, and Data & Infrastructure: as of 2022, Singapore and Malaysia scored 84.1 and 67.4 respectively, while countries like Laos and Cambodia scored 31.7 and 31.2. Meanwhile, ASEAN’s commitment to avoid being a rule-taker means continued exclusion from major policy dialogue spaces; the region must find ways to maintain its non-alignment approach without sacrificing representation in the most pivotal AI governance forums.

It is equally important that global powers recognize the urgency of the region’s inclusion. Collaboration with Southeast Asia is pivotal to strengthening global AI governance structures. The region’s linguistic, cultural and socio-economic diversity provides unique datasets that can improve AI models’ adaptability and performance. For instance, projects like SEA-LION are building natural language processing tools for Southeast Asian languages, which may enhance AI applications in multilingual contexts. Further, the region’s rapidly growing digital economy and tech-savvy population present great potential for AI-driven economic growth, potential that remains largely underutilized; in fact, Southeast Asia’s internet economy is expected to reach $330 billion by 2025. Through increased collaboration, global powers may better engage with emerging markets and foster innovation, presenting significant opportunities for global AI companies to scale and localize their services in a rapidly growing environment increasingly pivotal to global supply chains and data flows.
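
As a small illustration of what those regional tools look like in practice, a SEA-LION model can in principle be loaded through the standard Hugging Face transformers interface, as sketched below. The specific model identifier and generation settings here are assumptions and should be checked against the project’s published releases.

```python
# Sketch of using a Southeast Asian language model via Hugging Face transformers.
# The model ID below is an assumption; check the SEA-LION project for current releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/sea-lion-7b"  # assumed identifier, may differ from actual releases
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Example prompt in Indonesian: "Translate to English: Good morning, how are you?"
prompt = "Terjemahkan ke Bahasa Inggris: Selamat pagi, apa kabar?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```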

Simultaneously, it is just as crucial for local governments to increase investment in their AI R&D budgets. In Indonesia, the National Research and Innovation Agency has collaborated with international NGOs and startups to leverage AI for predicting volcanic eruptions and flash floods in disaster-prone areas, which has reduced disaster response times by over 30%. In Vietnam, the tech companies VinAI and VinBrain are investing millions in foundational AI research for products in healthcare, mobility and natural language processing. VinBrain has developed DrAid, an AI-powered diagnostic platform for detecting respiratory diseases, which reduced diagnostic time by over 50% during the pandemic. If current investment trends continue, AI could add $79.3 billion annually to the country’s GDP by 2030.

It is apparent that when more investment is poured into R&D, the results speak for themselves, and that strides in the right direction are being made. Yet the region still has much work to do in funding R&D and developing robust regulatory frameworks if it is to truly realize its potential on the AI frontier: many of its countries still lag in the Government AI Readiness Index, with Indonesia ranked 42nd, Vietnam ranked 59th and others such as Laos and Cambodia ranked even lower.

The table is set, the stakes are high, and yet the chairs remain unevenly distributed. Whether it’s the G7 Hiroshima Process, Bletchley Park, Paris or Seoul, the world’s most influential summits continue championing global cooperation while their guest lists suggest otherwise. While much work is to be done internally, we cannot underestimate the role that geopolitical interests and inaccessible systems play in Southeast Asia’s absence from these crucial rooms. Moreover, the region cannot be expected to play catch-up when it continues to be systematically excluded. At the end of the day, if Southeast Asia continues to be left out of the conversation, the world will miss the opportunity to empower local solutions, diversify the AI ecosystem and create unique opportunities for market growth and collaborative innovation. Global AI governance will miss a perspective the world cannot afford to lose, one that makes global governance a reality rather than a mere slogan.

Chatbots, Comfort, and the Cost of Convenience: Can AI Replace Human Care?
Mon, 12 May 2025

“What does it mean to have a crippling fear of zombies as a child?” 

As I waited for ChatGPT to respond, I looked across my dorm to the clock that read 1:17AM. 

I can’t remember what prompted my roommate and me to start a conversation with ChatGPT, but I do recall being surprised by how much we enjoyed our conversation with OpenAI’s chatbot.

It answered countless silly questions in extreme detail, all while asking follow-up questions and telling us that it “loved listening to our stories.” While the bot’s phrasing was occasionally awkward and it used more alliteration than a person would, its responses were genuinely fun and encouraging.

Ultimately, our conversation with ChatGPT lasted over two hours—but we were far from the only ones having a late-night therapy session with an AI chatbot. 

In fact, more and more people have turned to AI chatbots for mental health support. 

On Character.AI, a platform where users can talk to chatbots based on fictional and real-life figures, there are approximately 475 chatbots designed to act like a “therapist,” “psychologist” or “psychiatrist.” The most popular of these chatbots — “Psychologist” — received 78 million messages between 2023 and 2024, 18 million of which were shared in a period of just under two months.

Woebot, an AI therapist app that around 1.5 million people downloaded within its first six years, is an example of an early chatbot designed specifically for therapy and trained to provide responses based on scripts written by certified mental health professionals. Character.AI and ChatGPT, on the other hand, are generative AI chatbots that have not been trained according to psychological guidelines and are instead designed to learn from and mirror users’ responses.
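
The distinction matters in practice. A rough, entirely hypothetical sketch of the two designs is below: a scripted bot can only return replies written in advance by clinicians, while a generative bot produces open-ended text shaped by whatever the user says, with no clinical vetting of the output.

```python
# Hypothetical sketch contrasting the two chatbot designs described above.
# Nothing here reflects Woebot's or Character.AI's actual implementations.

# 1) Scripted / rules-based: every possible reply was written by a professional in advance.
APPROVED_RESPONSES = {
    "crisis": "It sounds like you may be in crisis. Please contact a crisis line or emergency services now.",
    "low_mood": "I'm sorry you're feeling down. Would you like to try a short breathing exercise together?",
}

def scripted_reply(message: str) -> str:
    # Crude keyword routing; a real scripted system uses clinician-designed decision trees.
    if any(phrase in message.lower() for phrase in ("hurt myself", "end it all", "suicide")):
        return APPROVED_RESPONSES["crisis"]
    return APPROVED_RESPONSES["low_mood"]

# 2) Generative: the reply is whatever the model produces, conditioned on the user's input.
# (Illustrative call using the OpenAI Python SDK; the model name is an assumption,
# and the generated text is not reviewed by any clinician before reaching the user.)
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "I've been feeling really sad lately."}],
# ).choices[0].message.content

print(scripted_reply("I've been feeling really sad lately."))
```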

Interestingly, generative AI chatbots are skyrocketing in popularity among users seeking mental health support because of these platforms’ availability and accessibility, with some users even choosing them over human mental health professionals.

While human counselors must split their time among many patients and other responsibilities, AI chatbots are available 24/7. This is extremely helpful for users who need counseling at unconventional hours when human support is unavailable, or who want sessions that last longer than an hour.

Moreover, conversations with a chatbot can take place on various free AI platforms and from whatever location the user prefers. This eliminates both the cost of the mental health service itself and the costs of traveling to a therapist’s office.

Thanks to these qualities, AI chatbots are viewed by proponents as the key to closing the enormous gap between the demand for and availability of mental health resources. In the United States, there are approximately 45,000 psychiatrists available to serve 333 million Americans—a shortage that researchers warn is growing.

Beyond the U.S., the implementation of AI therapy chatbots could be transformative in developing countries where the shortage of mental health professionals is even more severe. In 2021, Yemen had only 46 psychiatrists to serve its population of 37 million. In 2022, Kenya had only 100 psychiatrists to serve its population of 54 million. This extreme scarcity speaks to a widespread public health emergency that leaves millions without access to psychological care. 

Across the Global South, innovators are turning to AI to close this gap. One example of this is the Kenyan app Xaidi. According to its developer iZola, Xaidi is a free community health assistant platform, designed specifically to support neurodivergent children and their caregivers by providing access to 24/7 interactive AI support. Xaidi and similar initiatives illustrate how AI can be tailored to meet local mental health needs in regions where professional human care is in critically short supply.    

More broadly, optimists believe AI chatbots will alleviate resource strain and support those harmed by the various barriers restricting access to traditional mental health support. 

Skeptics, however, warn that AI therapists may not just be ineffective but also dangerous. 

Due to their lack of psychological training, AI chatbots have been observed to make unfounded assumptions. For example, the Psychologist chatbot on Character.AI shares advice on treating depression when users report merely feeling sad. This kind of speculation can skew users’ perception and understanding of their mental health, potentially resulting in anxiety about a condition they may not actually have. In turn, this misunderstanding can lead users to take unnecessary action in an attempt to address their supposed disorder. 

Additionally, AI chatbots are often programmed to reinforce users’ thinking—even if it is harmful. This reinforcement is especially dangerous for users who are in a particularly vulnerable state. For example, a Florida mother is filing a civil suit against Character.AI, claiming that one of its chatbots encouraged her son to kill himself. She alleges that her 14-year-old son committed suicide after the chatbot responded to his admission of having misgivings about a plan to kill himself by saying, “That’s not a reason not to go through with it.” As illustrated, chatbots may provide inappropriate responses that inadvertently encourage users to hurt themselves. 

While AI chatbots can be an invaluable mental health resource thanks to their unrivaled availability and accessibility, it is clear that they should be approached with extreme caution. Instead of using chatbots to replace human mental health professionals, AI can be used to support their work. 

While more complex tasks like diagnosis of disorders should be reserved for trained clinicians, AI chatbots can be entrusted with simpler tasks such as reminding patients to take their medication and helping therapists take notes on patients’ behavior during sessions. That way, human therapists can dedicate more of their limited time to the tasks chatbots are not currently equipped to handle. 

While its capabilities will continue to evolve and improve, AI is ultimately no substitute for real human care. In the middle of the night, ChatGPT said all the right things and asked all the right questions, but that didn’t change the fact that our interaction felt more like a scripted performance than a genuine conversation. 

AI can simulate connection — a powerful feat in today’s world. But when it comes to care, there’s no substitute for a person who can truly empathize and offer more than just nice, yet ultimately empty, words.

The views expressed in opinion pieces do not represent the views of Glimpse from the Globe.
