As AI systems become increasingly embedded in society, a new question arises: Can they improve our lives beyond technical and creative tasks? Can they help humanity make better decisions, make us less selfish, and promote better collaboration?
A recent study by researchers Arend Hintze and Christoph Adami examines exactly this question in their article “Promoting Cooperation in the Public Goods Game Using Artificial Intelligence Agents,” published in npj Complexity.
The tragedy of the commons
The tragedy of the commons is an economic theory in which individuals sharing a limited resource overuse and deplete it, leaving the entire group worse off. TED-Ed has a good video explaining the concept, and I recommend you watch it. To test whether AI can improve cooperation between people, the researchers used a well-known experiment often called the “public goods game.”
In this experiment, players can either contribute to a shared pool that benefits everyone or keep their contribution for themselves. While the group does best when everyone contributes, each individual can free-ride: hold back their own contribution and still enjoy the shared reward. On their own, people did not do well in this experiment and acted out of self-interest rather than in the group’s interest. The researchers then introduced AI agents into the mix.
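The incentive structure can be sketched as a standard public goods game. The endowment and multiplier below are illustrative assumptions, not the paper’s exact parameters:

```python
def public_goods_payoffs(contributions, multiplier=1.6):
    """One round of a public goods game: each player keeps whatever
    they did not contribute, plus an equal share of the multiplied
    common pool. Each player starts with an endowment of 1 unit."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    endowment = 1.0
    return [endowment - c + share for c in contributions]

# Four players, one free-rider who contributes nothing:
payoffs = public_goods_payoffs([1, 1, 1, 0])
# contributors earn ~1.2 each, the free-rider ~2.2
```

The free-rider always ends the round ahead of the contributors, which is exactly the temptation the experiment studies.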
In the first scenario, the AI agents were programmed to always cooperate. That sounds promising, but it did not change human behavior: people continued to act in their own interest. Simply adding “good” actors to the system was not enough. In the second scenario, players could control the AI agents. As you can imagine, this backfired. Players set their AI to cooperate while choosing not to cooperate themselves, outsourcing good behavior while maximizing personal profit.
The third scenario showed promising results. The AI agents mimicked the behavior of the players they interacted with: if a person cooperated, the AI cooperated; if the person acted selfishly, the AI reflected that choice. This created a powerful feedback loop in which human cooperation was rewarded with AI cooperation, and cooperation among the human players improved.
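To see why mirroring changes the incentive, here is a toy steady-state calculation. This is my own sketch under simple assumptions (three AI agents that copy the human’s previous move, an endowment of 1 and a multiplier of 1.6), not the paper’s actual model:

```python
def round_payoff(my_contrib, others, multiplier=1.6):
    """The focal player's payoff in one public goods round."""
    pool = (my_contrib + sum(others)) * multiplier
    return 1 - my_contrib + pool / (1 + len(others))

def long_run_payoff(human_action):
    """Per-round payoff once three mirroring AI agents have locked
    onto the human's action (cooperate=1, defect=0)."""
    mirrors = [human_action] * 3
    return round_payoff(human_action, mirrors)

# Cooperating: all four contribute, so the human earns 1.6 per round.
# Defecting: the mirrors also defect, so the human earns only 1.0.
```

With mirroring agents, a defection empties the shared pool in later rounds, so cooperation becomes the payoff-maximizing choice rather than a sacrifice.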
What does all this have to do with self-driving cars?
While the study is a limited, simplified model rather than a real-world test, the researchers say its findings could apply to multiple scenarios, including self-driving cars. For example, autonomous cars could be designed to reward cooperative driving rather than simply follow strict rules. If enough self-driving cars adopted this behavior, they could create a positive feedback loop that benefits everyone.
AI cannot magically eliminate selfishness. However, it may provide enough of an incentive to make cooperation the smarter choice, especially in the case of electric vehicles. One study, published in the journal Transportation Research, proposes an integrated system for route guidance and the coordinated repositioning of stationary vehicles to provide the best possible service to passengers. Another study, published in the journal Robotics, proposes a collision-free tracking and visual-connectivity system for self-driving vehicles.
This principle could also be used to plan the charging of self-driving electric cars, avoiding long waiting times and strain on the power grid, as described in this paper. AI systems, including chatbots like ChatGPT and Gemini, already learn and improve through reward-based training, and that same approach could very well help solve real-world robotaxi problems as these services slowly move into the mainstream.