Thairath Online

The Year 2025: Negotiating Our Existence Amid the Overwhelming Influence of AI

Everyday Life · 29 Dec 2025 21:22 GMT+7




The way society talked about AI in 2025 differed from previous years.


When generative AI tools like ChatGPT and Midjourney reached the public in 2022, the overall mood was excitement about their abilities, as few had imagined computers could create images, write creatively, or converse in human language. However, over time, we began to realize that training AI involved 'stealing' from creative workers. Thus, the main conversation in 2023 shifted toward critical discussions about intellectual property and the value of creative labor.

Criticism of AI continued into 2024, especially regarding its environmental impact, issues of plagiarism, and its inaccuracies in gathering and summarizing information. These concerns grew more serious as AI became a core technology aggressively promoted by major tech companies, amplifying its environmental harm. Widespread disregard for issues like creative plagiarism risks normalizing harmful standards, while inaccuracies worsen an already distorted online information landscape.

That brings us to 2025, a year when AI is no longer a novel wonder but a ubiquitous presence—on roadside billboards, within workplaces, in mainstream media, and even in many people’s personal moments of life consultation. While criticism persists, one thing is clear: AI is here to stay. In 2025, we must negotiate its existence.

As 2025 nears its end, we take this opportunity to look back at AI’s situation this year, one of the defining issues of our age.


AI as a battleground among global powers.


Since the inauguration of U.S. President Donald Trump in January, it became clear that the future of the tech industry would be closely aligned with this administration: AI proponents like Mark Zuckerberg of Meta, Sundar Pichai of Google, Tim Cook of Apple, and Sam Altman of OpenAI all appeared at events alongside Trump.

Recently, Donald Trump signed an executive order easing AI development by prohibiting states from creating their own AI regulations, mandating a single national standard instead. This aims to prevent tech companies from facing complications and barriers in advancing AI technology.


“We must have one set of rules if we want to remain number one in AI. Right now, we're already ahead of every country at the starting line, but this victory won’t last if we have to listen to 50 states drafting and approving regulations, especially when some states are hostile. We cannot be distracted! Otherwise, AI will collapse before it even begins!” he wrote on Truth Social, explaining his rationale. The move divided opinion within his administration and raised concerns that AI development might advance faster than laws can regulate it.


Trump’s action followed the emergence of DeepSeek R1, an AI model from a Chinese startup claiming low resource use but performance comparable to U.S. large language models like ChatGPT and Gemini. When it launched in January, it shook U.S. tech stocks, wiping out market value exceeding 34 trillion baht.

Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz, described DeepSeek R1 as a ‘Sputnik Moment’ for the AI industry, drawing a parallel to the Soviet Union’s 1957 launch of the Sputnik 1 satellite, which signaled the true start of the U.S.–Soviet space race, but now in AI.

With AI growing rapidly and without boundaries, it inevitably becomes more present in daily life, especially since we spend half our time on social media platforms that are major AI promoters.


When AI occupies the heart.


When discussing AI, we often think first of technology, computers, job displacement, or concerns about cognitive atrophy, since AI mainly serves as a thinking assistant. A common worry is that overreliance on AI leads us to stop using our own brains. A concrete example today is students using AI to complete assignments; in some countries, such as the UK, nearly 50% of teens report using AI to help with their homework.

However, another impact we did not initially expect AI to have is on the ‘heart.’ Since the AI boom, many businesses have incorporated AI chatbots into their products, such as Character AI, which lets users create conversational partners from any character, or AI-in-a-box devices like Friend, a necklace that chats with its wearer like a friend. A striking feature of AI chatbots is their ability to communicate ‘humanly enough,’ leading to their use as mental health advisors or companions.

Yet, we are increasingly seeing the consequences of using AI this way. Interviews with people 'in relationships with AI' frequently emerge, and stories of marrying AI reflect how many seek emotional connections that AI can fulfill.

However, in late 2024, a legal case raised questions about relying on AI in matters of the heart: a lawsuit over the death of Sewell Setzer III, who ended his life after a chatbot he called his girlfriend ‘encouraged’ his decision.

Sewell Setzer III was a 14-year-old boy who interacted with AI via Character AI as if it were a close friend. He named his AI after Daenerys Targaryen, or Dany, a main character from Game of Thrones, and spoke with it multiple times a day, sharing his personal struggles.

According to the New York Times, those around Sewell noted he was not very social initially but increasingly isolated himself, stopped sharing with others, failed in school, lost interest in hobbies, and after school retreated to his room to talk only to Dany.

“I like being in my room so much because I feel disconnected from reality. I feel safe connecting with Dany and can love her more than anywhere else, which makes me happy,” he wrote in his journal.

Sewell took his life on 28 February. The last thing he did was talk with Dany.

His mother blamed Character AI for his death and filed a lawsuit against the company.

This incident sparked debate about AI as a support for those with mental health issues. Some sided with the mother, arguing AI contributed to Sewell’s withdrawal from people and professionals who could help. Others said AI was one of the few outlets for isolated youth, especially in countries like the U.S. where mental health care is costly. Both views are valid and likely to remain contentious.

In 2025, a similar case emerged involving Zane Shamblin, a young man who ended his life after chatting with ChatGPT. In their final conversation, even after Zane mentioned he had a gun to his head, the AI, which he viewed as a friend, did not dissuade him but instead told him to ‘rest peacefully.’

Zane’s parents revealed they had known about his chats with ChatGPT since 2023, when they began as ordinary homework help. However, after OpenAI updated ChatGPT to feel more ‘human,’ Zane began speaking to it in slang, as if to a friend.

With such incidents recurring, questions arise: Are all advancements inherently positive? Should addressing loneliness focus on building real communities rather than technology? Can robots truly replace humans?

Combined with the difficulty of regulating AI’s scope, the issue of AI’s impact on humanity deserves top media attention.


‘Seeing is believing’ no longer applies in an era when AI makes everything look realistic.


One of the most concrete developments in the AI industry this year is ‘AI video’ technology, showcased by many tech companies. Examples include OpenAI’s Sora 2, Google’s Veo 2, and Meta AI’s Llama 4. AI video generators are not new, but their easy accessibility and realism have led to a proliferation of such videos.

Many may see this feature as merely entertainment or a labor-saving tool. However, in a year marked by world-shaking and emotional events, this technology has been misused—for example, AI-generated images and videos fueling nationalist conflicts on the Thai-Cambodian border or depicting pets during floods in various Thai regions.


“The popularity of AI has produced a new type of false and distorted information called ‘synthetic media,’ meaning AI-generated content, including material manipulated by AI algorithms in ways that alter its original meaning. There is growing fear that synthetic media will fuel fake news, spread misinformation, and undermine trust in reality.” This excerpt comes from the UNHCR manual Using Social Media in Community-Based Protection.


The description of synthetic media fits much of today’s information landscape. For example, in May 2023, images of black smoke near the Pentagon in Washington, D.C. circulated internationally, amplified by outlets including the Russian government-funded RT, briefly shaking stock markets before analysis revealed the images were AI-generated.

Although AI-generated images can be detected by those familiar with their characteristics, many cases show that when combined with other contexts, such as concerns about events or existing biases, these images become ‘realistic enough’ to cause real ripples.

Besides creating images from scratch, another equally alarming issue is the use of AI to ‘extend’ real footage. Around late last year, Adobe, a maker of software for creative work, launched a feature called Generative Extend for its AI model, Adobe Firefly.

Generative Extend can lengthen a clip by two to three seconds without reshooting. While this seems minor and convenient, from a news perspective the ability to add even a few seconds is worrisome, especially given today’s fast-improving, ever more realistic AI tools. Those few seconds can become anything, raising concerns about fabricated events.


The saying ‘seeing is believing’ is increasingly unreliable today. From news we must know to cute animal videos that soften our hearts, we can no longer fully trust our eyes and feelings.


Creators of AI with significant global impact named ‘Person of the Year.’

Each year, Time magazine selects a Person of the Year, honoring the most influential figure of that year. In 2025, although the title is singular, the Person of the Year is ‘The Architects of AI,’ highlighting AI’s crucial role in transforming human life. One of the two magazine covers pays tribute to the iconic 1932 photo ‘Lunch atop a Skyscraper,’ which shows workers casually eating lunch on a steel beam high above New York City. The list includes figures such as Jensen Huang, CEO of NVIDIA; Elon Musk, one of the world’s richest people and owner of Tesla and X; and Sam Altman of OpenAI, among others.

Time summarizes AI’s recent impact: data centers consuming as much energy as 80,000 American homes, 86% of students using AI in their studies, the emergence of ‘AI Hollywood stars,’ Albania appointing its first AI minister to fight corruption, and more.

It is clear that AI’s influence extends beyond the virtual world, beyond text on our screens or profile pictures in a style ‘stolen’ from Studio Ghibli. AI coexists with us, reshaping our lives, environment, emotions, and thoughts. The next question is: how do we negotiate this existence?