Friday, April 17, 2026

Research from Anthropic shows that AI can expose anonymous internet accounts en masse

New research involving scientists from Anthropic and ETH Zurich suggests that modern artificial intelligence systems could identify the real identities behind supposedly anonymous Internet accounts. Published as a preprint on arXiv, the study shows that large language models (LLMs) may be able to analyze online activity and link pseudonymous profiles to real people at scale.

The study, titled “Large-Scale Online Deanonymization with LLMs,” examines how AI agents can automate the process of deanonymization—the act of linking anonymous or pseudonymous online accounts to real identities. Traditionally, this process required extensive manual research by analysts sifting through posts, writing styles, and scattered online clues. However, the researchers show that modern AI models can perform many of these steps automatically.

In the study, the AI system analyzed public texts from online platforms and extracted identity-related signals such as personal interests, demographic cues, writing style, and incidental details revealed in posts. The AI then searched the internet for matching profiles and assessed whether the accumulated clues pointed to a specific known individual.
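The three-stage process described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names, the keyword-matching stand-in for LLM signal extraction, and the toy data are all assumptions.

```python
# Hypothetical sketch of the extract -> search -> score pipeline.
# In the study an LLM performs each stage; simple set logic stands in here.

def extract_signals(posts):
    """Stage 1: pull identity-related cues from a user's public posts."""
    signals = {"interests": set()}
    for post in posts:
        # Stand-in for LLM extraction: match against a tiny cue list.
        for word in post.lower().split():
            if word in {"kubernetes", "rust", "genomics"}:
                signals["interests"].add(word)
    return signals

def search_candidates(signals, profile_index):
    """Stage 2: retrieve profiles sharing at least one extracted cue."""
    return [p for p in profile_index
            if signals["interests"] & p["interests"]]

def score_match(signals, candidate):
    """Stage 3: judge how likely the candidate is the same person."""
    overlap = signals["interests"] & candidate["interests"]
    return len(overlap) / max(len(signals["interests"]), 1)

# Toy usage with made-up data:
posts = ["I mostly write Rust these days", "debugging kubernetes again"]
index = [{"name": "A", "interests": {"rust", "kubernetes"}},
         {"name": "B", "interests": {"cooking"}}]
sig = extract_signals(posts)
best = max(search_candidates(sig, index), key=lambda c: score_match(sig, c))
```

The point of the sketch is the division of labor: extraction produces a compact profile of cues, retrieval narrows thousands of candidates to a shortlist, and scoring picks the most probable match.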

To test the method, the researchers created several datasets with known ground-truth identities.

One experiment attempted to match Hacker News users to their LinkedIn profiles, even after removing obvious identifiers such as names and usernames. Another dataset involved linking pseudonymous Reddit accounts across different communities. A third dataset split a single user's post history into two separate profiles to see if the AI could recognize that they belonged to the same person.
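The third dataset type is straightforward to construct, which is part of why it makes a good benchmark. A minimal sketch, assuming an alternating split (the paper's actual splitting rule is not specified here):

```python
# Illustrative construction of a split-history evaluation pair:
# one account's posts divided into two artificial pseudonymous profiles.

def split_history(posts):
    """Divide a single account's post history into two halves
    by alternating posts (an assumed rule, for illustration)."""
    profile_a = posts[0::2]   # even-indexed posts
    profile_b = posts[1::2]   # odd-indexed posts
    return profile_a, profile_b

history = ["post 1", "post 2", "post 3", "post 4", "post 5"]
a, b = split_history(history)
# A matching system is then asked whether profiles a and b belong to
# the same person -- here, by construction, they do.
```

Because ground truth is known by construction, every linking decision the model makes can be scored automatically.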

The results showed that LLM-based systems significantly outperformed traditional deanonymization techniques. In some cases, the models achieved a hit rate of up to 68% with an accuracy of around 90%, meaning the AI correctly identified many accounts while maintaining relatively low error rates. Traditional methods achieved almost no success in the same experiments.
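The two figures measure different things, which matters when a system is allowed to abstain on uncertain cases. A minimal sketch of one plausible reading of these metrics (these definitions are an assumption, not necessarily the paper's exact formulas):

```python
def evaluate(predictions, truth):
    """predictions: {account: predicted_identity or None (abstain)}.
    Returns (hit_rate, precision):
      hit_rate  = correct matches over ALL accounts,
      precision = correct matches over accounts the system attempted."""
    attempted = {a: p for a, p in predictions.items() if p is not None}
    correct = sum(1 for a, p in attempted.items() if truth.get(a) == p)
    hit_rate = correct / len(truth)
    precision = correct / len(attempted)
    return hit_rate, precision

# Toy data: four accounts, one abstention, one wrong guess.
truth = {"u1": "alice", "u2": "bob", "u3": "carol", "u4": "dan"}
preds = {"u1": "alice", "u2": "bob", "u3": "eve", "u4": None}
hr, pr = evaluate(preds, truth)   # hr = 0.5, pr = 2/3
```

Under definitions like these, a 68% hit rate with ~90% accuracy would mean the system resolved most accounts and was rarely wrong when it committed to an answer.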

Researchers say the results show how AI can reproduce tasks that once required human investigators for hours. An AI system can automatically extract identity-related characteristics from text, search for potential matches across thousands of profiles, and conclude which candidate is most likely to be the right one.

This development is significant because anonymity has long been viewed as a basic protection for many internet users.

Pseudonymous accounts are often used by journalists, whistleblowers, activists and ordinary individuals who want to discuss sensitive topics without revealing their true identities.

The study suggests that this layer of protection – sometimes called “practical obscurity” – may be weakening as AI systems become better at linking digital clues across platforms. If automated tools can do this work quickly and inexpensively, the barrier to identifying anonymous users could be dramatically reduced.

Researchers estimate that the cost of identifying an online account using their experimental pipeline could be between $1 and $4 per profile, meaning large-scale investigations could be conducted relatively inexpensively.
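That per-profile estimate makes the economics easy to work out. A back-of-the-envelope calculation, with the campaign size chosen purely for illustration:

```python
# Cost range for a hypothetical deanonymization campaign, using the
# paper's reported $1-$4 per-profile estimate.
cost_low, cost_high = 1, 4        # dollars per profile, from the article
profiles = 10_000                 # hypothetical campaign size
low, high = cost_low * profiles, cost_high * profiles
print(f"${low:,} - ${high:,}")    # prints "$10,000 - $40,000"
```

At those rates, targeting tens of thousands of accounts costs no more than a modest advertising budget, which is precisely why the authors flag the scale risk.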

However, the authors also note that the research was conducted in controlled environments using public data. The paper has not yet been peer-reviewed and the researchers have intentionally withheld some technical details to reduce the risk of misuse.

Nevertheless, the results have already sparked debate among privacy experts and technologists.

The work suggests that individuals may need to reconsider how much personal information they share online – even in areas that seem anonymous. Looking forward, researchers say more work is needed to understand both the risks and possible defenses against AI-powered deanonymization. Possible solutions could include improved privacy tools, stronger platform security measures, or AI systems designed to anonymize sensitive data before it is shared publicly.

As artificial intelligence becomes increasingly capable of analyzing massive amounts of online content, the study highlights a growing challenge: balancing the power of AI-driven discovery with the need to protect privacy in the digital age.
