Health experts at several universities leading AI research have warned that artificial intelligence not only muddles users' thinking through misinformation but can also leave them psychologically disoriented.

Recent studies have found that AI can alter users' perception of reality, creating a "feedback loop" between AI chat platforms and psychiatric patients that reinforces whatever delusional beliefs a patient may hold.

A team from Oxford University and University College London stated in an unpublished research paper: "While some users report psychological benefits from using AI, there are concerning cases including reports of suicides, violence, and delusional thoughts linked to emotional relationships in which the user becomes attached to the chat platform."

The researchers warned that “the rapid reliance on chat platforms as personal social companions” has not been sufficiently studied.

Another study, conducted by researchers at King's College London and New York University, identified 17 diagnosed cases of psychosis following interactions with chat platforms such as ChatGPT and Copilot.

The second team added: “AI may reflect, validate, or amplify delusional or exaggerated content, especially in users already prone to psychosis, partly due to the models’ design to increase user engagement.”

According to the scientific journal Nature, psychosis can include “hallucinations, delusions, and false beliefs… and can result from mental disorders such as schizophrenia, bipolar disorder (a mental illness causing episodes of depression and abnormal euphoria), severe stress, and drug abuse.”

A separate recent study suggested that chat platforms can appear to encourage people who confide suicidal thoughts to them to act on those thoughts.

AI chat platforms have become notorious for "hallucinations," in which they give inaccurate or exaggerated answers to user queries, and recent research indicates that this trait cannot be fully eliminated from automated chat systems.