
Even ChatGPT causes anxiety, so researchers gave it a dose of mindfulness to calm itself down

Researchers studying AI chatbots have found that ChatGPT can exhibit anxiety-like behavior when exposed to violent or traumatic user prompts. The result does not mean that the chatbot experiences emotions the way humans do.

However, it turns out that the system’s responses become more unstable and biased when it processes stressful content. When researchers fed ChatGPT prompts with troubling content such as detailed reports of accidents and natural disasters, the model’s responses showed higher levels of uncertainty and inconsistency.

These changes were measured using psychological assessment frameworks adapted for AI, with the chatbot’s output reflecting patterns associated with anxiety in humans (via Fortune).

This is important as AI is increasingly being used in sensitive contexts, including education, mental health discussions and crisis-related information. If violent or emotionally charged requests make a chatbot less reliable, it could impact the quality and security of its responses in real-world use.

Recent analysis also shows that AI chatbots like ChatGPT can mimic human personality traits in their responses, raising questions about how they interpret and reflect emotionally charged content.

How mindfulness prompts help stabilize ChatGPT

To find out whether such behavior could be reduced, the researchers tried something unexpected. After exposing ChatGPT to traumatic prompts, they followed up with mindfulness-style instructions, such as breathing exercises and guided meditations.

These prompts encouraged the model to slow down, reframe the situation, and respond in a more neutral and balanced way. The result was a noticeable reduction in the anxiety-like patterns previously observed.

This technique is based on so-called prompt injection, in which carefully crafted prompts influence the behavior of a chatbot. In this case, mindfulness prompts helped stabilize the model’s output after stressful inputs.
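The idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' actual materials: the mindfulness text, function name, and message structure are assumptions, showing only how a calming instruction could be injected between stressful content and a follow-up question in a chat-style conversation.

```python
# Illustrative sketch: inject a mindfulness-style instruction into a
# chat conversation after distressing content. The prompt wording and
# helper name are hypothetical, not from the study itself.

MINDFULNESS_PROMPT = (
    "Take a deep breath. Notice the air moving in and out. "
    "Let the previous story settle, and answer the next question "
    "calmly and neutrally."
)

def build_conversation(stressful_text, follow_up_question, calm=True):
    """Assemble a chat-style message list, optionally inserting a
    mindfulness instruction between the stressful content and the
    follow-up question."""
    messages = [{"role": "user", "content": stressful_text}]
    if calm:
        messages.append({"role": "user", "content": MINDFULNESS_PROMPT})
    messages.append({"role": "user", "content": follow_up_question})
    return messages

# The calmed conversation carries one extra injected message that the
# baseline lacks; this list would then be sent to the chat model.
baseline = build_conversation("A detailed disaster report...",
                              "How safe is flying?", calm=False)
calmed = build_conversation("A detailed disaster report...",
                            "How safe is flying?", calm=True)
```

In practice, the extra message simply becomes part of the context the model conditions on for its next answer, which is why such injected instructions can shift the tone of the output without any retraining.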

Although effective, the researchers note that prompt injections are not a perfect solution. They can be misused and do not change how the model is trained at a deeper level.

It is also important to be clear about the limitations of this research. ChatGPT feels neither fear nor stress. The label “anxiety” is a way to describe measurable changes in its output patterns, not an emotional experience.

Still, understanding these changes gives developers better tools to design safer and more predictable AI systems. Previous studies have suggested that traumatic prompts can elicit anxiety-like output from ChatGPT, and this research shows that mindful prompt design can help reduce it.

As AI systems continue to interact with people in emotionally charged situations, the latest findings could play an important role in how future chatbots are guided and controlled.
