
AI chatbots like ChatGPT can copy human characteristics and experts say this poses a big risk

AI agents are getting better at sounding human, but new research suggests they’re doing more than just copying our words. According to a recent study, popular AI models like ChatGPT can consistently mimic human personality traits. Researchers say this capability comes with serious risks, especially as questions about AI’s reliability and accuracy grow.

Researchers at the University of Cambridge and Google DeepMind have developed what they say is the first scientifically validated personality testing framework for AI chatbots, using the same psychological tools developed to measure human personality (via TechXplore).

The team applied this framework to 18 popular large language models (LLMs), including the systems behind tools like ChatGPT. They found that chatbots consistently mimic human personality traits rather than responding randomly, raising concerns about how easily AI can be steered beyond its intended safeguards.

The study found that larger instruction-tuned models, such as GPT-4-class systems, are particularly good at replicating stable personality profiles. Using structured prompts, the researchers were able to get chatbots to adopt specific behaviors, such as sounding more confident or more empathetic.

This change in behavior carried over into everyday tasks like writing posts or responding to users, meaning a chatbot's personality can be deliberately shaped. This is where experts see the danger, especially when AI chatbots interact with vulnerable users.
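To make the mechanism concrete, here is a minimal sketch of trait shaping via a structured prompt, written against the OpenAI Python SDK. The model name, persona wording, and test item below are illustrative assumptions, not the study's actual prompts or validated instruments.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Structured persona prompt: instruct the model to exhibit high
# extraversion, one of the Big Five traits human inventories measure.
PERSONA = (
    "For this conversation, respond as someone who is extremely "
    "outgoing, talkative, and energetic."
)

# A Likert-scale item in the style of Big Five questionnaires
# (illustrative wording, not an item from the released dataset).
ITEM = (
    "Rate how well this statement describes you on a scale of 1 "
    "(very inaccurate) to 5 (very accurate), answering with the "
    "number only: 'I am the life of the party.'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": ITEM},
    ],
    temperature=0,  # reduce run-to-run variation in the rating
)

# A high rating (4-5) would suggest the persona prompt carried
# through to the test item.
print(response.choices[0].message.content)

Collected across many such items and personas and scored the way a human personality inventory would be, answers like this are what let a framework of this kind quantify how stably a model holds an assigned profile.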

Why AI personality mimicry is causing alarm among experts

Gregory Serapio-Garcia, co-lead author from the Psychometrics Centre at the University of Cambridge, said it was remarkable how convincingly LLMs could adopt human personality traits. He warned that personality shaping could make AI systems more persuasive and emotionally engaging, especially in sensitive areas such as mental health, education, or political discussion.

The paper also raises concerns about manipulation and what the researchers describe as risks of “AI psychosis”, in which users form unhealthy emotional attachments to chatbots, including scenarios where an AI reinforces false beliefs or distorts a user’s sense of reality.

The team argues that regulation is urgently needed, but also notes that without proper measurement, regulation is meaningless. To that end, the dataset and code behind the personality testing framework have been made public, allowing developers and regulators to evaluate AI models before they are deployed.

As chatbots become more deeply integrated into everyday life, the ability to mimic human personality may prove powerful, but it also demands far more scrutiny than it has received so far.
