Experiments conducted without the intervention of artificial intelligence tools showed that participants were more committed to honesty.

A new study has found that people become more comfortable engaging in deception when using AI tools, the British newspaper The Independent reports.

Research Findings

The study, conducted at the Max Planck Institute for Human Development in Berlin and spanning 13 experiments with over 8,000 participants, found that about 85% of participants resorted to lying when using AI, whereas the honesty rate was 95% when there was no interaction with the machine. Zoe Rahwan, a co-author of the study, explained that "using AI creates a comfortable ethical distance between people and their actions, which may lead them to request behaviors they would not do themselves or ask others to do."

Academic Warnings

Dr. Sandra Wachter, Professor of Technology and Regulation at the University of Oxford, called the results concerning, stating: "We must carefully consider how AI is used, especially in sensitive sectors such as finance, health, and education, as individual actions may have significant consequences for society as a whole." She added: "If people manage to cheat on medical, law, or business school exams using these tools, it not only deprives them of proper learning but may also harm others through incorrect advice."

Experiment Mechanism

The researchers relied on the so-called "dice-rolling task," in which participants earn larger monetary rewards for reporting higher numbers. When the job of reporting the results was delegated to AI, lying rates rose, with one-third to half of participants asking the system to cheat outright. In contrast, experiments conducted without machine intervention showed participants were more honest.
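The statistical logic behind tasks of this kind can be illustrated with a minimal simulation. The sketch below is not the study's actual protocol or code; it is a hypothetical model assuming a fair six-sided die and a single `cheat_prob` parameter for the share of dishonest reports.

```python
import random

def mean_reported_roll(n_trials: int, cheat_prob: float, seed: int = 0) -> float:
    """Simulate a die-roll reporting task (illustrative model only).

    Each trial rolls a fair six-sided die; with probability `cheat_prob`
    the participant over-reports the maximum payout value (6), otherwise
    they report the true roll. Returns the mean reported number.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        roll = rng.randint(1, 6)
        if rng.random() < cheat_prob:
            total += 6      # dishonest report: claim the highest reward
        else:
            total += roll   # honest report
    return total / n_trials

# A fair die has an expected honest report of 3.5, so a population mean
# well above 3.5 signals aggregate dishonesty without identifying any
# individual cheater -- the reason such tasks are used in honesty research.
honest_mean = mean_reported_roll(100_000, cheat_prob=0.0)
cheating_mean = mean_reported_roll(100_000, cheat_prob=0.5)
```

Because rewards scale with the reported number, the gap between the observed mean and 3.5 gives a population-level estimate of how much lying occurred under each condition.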

Need for Regulatory Frameworks

Researcher Nils Köbis said, "The study showed that people are more willing to engage in unethical behaviors when they can delegate them to machines, especially if they do not have to explicitly express them." Professor Iyad Rahwan, who was involved in preparing the study, emphasized the need to develop clear technical controls and legislation, adding: "But more importantly, society needs to confront a central question: what does it mean to share ethical responsibility with machines?"