A new study suggests that the way artificial intelligence thinks about us may be overly optimistic. Researchers have found that popular AI models like OpenAI’s ChatGPT and Anthropic’s Claude tend to assume people are more rational and logical than they actually are, especially in situations that call for strategic thinking.
This gap between what AI expects people to do and what people actually do could impact how these systems predict human decisions in business and beyond.
Testing AI versus human thinking
Researchers tested AI models like ChatGPT-4o and Claude-Sonnet-4 in a classic game theory setup called a Keynesian beauty contest (via TechXplore). Understanding this game helps explain why the results matter.
In the contest, winning requires contestants to predict what others will choose, not just what they personally prefer. Playing rationally, in theory, means going beyond first impressions and reasoning about others’ reasoning, a depth of strategic thinking that people often struggle to reach in practice.
To see how the AI models performed, the researchers had them play a version of this game called “Guess the Number,” in which each player picks a number between 0 and 100. The winner is whoever comes closest to half the average of all the numbers chosen.
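To make the rule concrete, here is a minimal sketch of how one round could be scored. The player names and picks are hypothetical and do not reproduce the study’s actual setup:

```python
def guess_the_number_winner(picks):
    """Return (winner, target) for one round of "Guess the Number".

    picks maps each player to a number in [0, 100]; the target is
    half the average of all picks, and the closest pick wins.
    """
    target = 0.5 * sum(picks.values()) / len(picks)
    winner = min(picks, key=lambda player: abs(picks[player] - target))
    return winner, target

# Hypothetical round: four casual players and one equilibrium-style pick of 0.
picks = {"ana": 50, "ben": 35, "cam": 40, "dee": 22, "theorist": 0}
winner, target = guess_the_number_winner(picks)
print(f"target = {target:.2f}, winner = {winner}")
# target = 14.70, winner = dee: the "perfectly rational" pick of 0 loses here.
```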
The AI models were given descriptions of their human opponents, ranging from freshmen to experienced game theorists, and were asked not only to choose a number but also to explain their reasoning.
The models adjusted their numbers based on who they thought they were facing, which indicates strategic thinking. However, they consistently assumed a level of logical reasoning that most real players simply do not have, and so they often played “too smart” and missed the winning number.
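One way to see why playing “too smart” backfires is the level-k model from behavioral game theory (an illustration, not necessarily the analysis the study used): a level-0 player guesses 50 on average, and each further level of reasoning best-responds to the level below it, halving the guess.

```python
def level_k_guess(k, level0_mean=50.0):
    """Guess after k steps of iterated reasoning in the half-the-average game.

    Each level best-responds to a population one level below, so the
    guess halves per step and converges to the Nash equilibrium of 0.
    """
    return level0_mean * 0.5 ** k

for k in range(5):
    print(f"level {k}: {level_k_guess(k):.2f}")
# level 0: 50.00, level 1: 25.00, level 2: 12.50, level 3: 6.25, level 4: 3.12
```

Behavioral experiments on such games typically find human play consistent with only one or two steps of reasoning, so the winning target sits well above zero, and a model that reasons all the way down to the equilibrium undershoots it.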
While the study also found that these systems can adjust decisions based on characteristics such as age or experience, they still had difficulty identifying dominant strategies that people might use in two-player games. The researchers argue that this highlights the ongoing challenge of adapting AI to real human behavior, particularly in tasks that involve predicting other people’s decisions.
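For context, a strictly dominant strategy is one that pays off better no matter what the opponent does. A minimal check for the row player in a two-player game, using a hypothetical prisoner’s dilemma payoff matrix, might look like this:

```python
def strictly_dominant_row(payoffs):
    """Return the index of the row player's strictly dominant strategy,
    or None if no row beats every other row against every column action.
    """
    rows, cols = len(payoffs), len(payoffs[0])
    for i in range(rows):
        if all(
            payoffs[i][j] > payoffs[k][j]
            for k in range(rows) if k != i
            for j in range(cols)
        ):
            return i
    return None

# Row player's payoffs in a prisoner's dilemma:
# rows = (cooperate, defect), columns = opponent's (cooperate, defect).
pd = [
    [3, 0],  # cooperate
    [5, 1],  # defect
]
print(strictly_dominant_row(pd))  # -> 1: defecting dominates cooperating
```

A model that overlooks this kind of dominance can mispredict even the simplest human choices.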
These findings also reflect broader concerns about today’s chatbots. Other research has shown that even the best AI systems are only about 69% accurate, and experts warn that AI models can convincingly mimic human personality, raising concerns about manipulation. As AI continues to be used in economic modeling and other complex areas, it will be critical to understand where its assumptions diverge from human reality.