In the past month, I've witnessed several people fall prey to AI-assisted delusions of grandeur, a.k.a. AI psychosis. What these cases have in common is that the person was debating someone else and using AI to find support for their own arguments. Speaking for myself, I sought to win some philosophical arguments, one about the mind-body problem and another about Ayn Rand's Objectivism. I got puffed up with so much "you're so right," but fortunately, I noticed it before I got carried away. Others, in my estimation, didn't fare so well.
One talked herself into believing she could communicate with the aliens on 3I/ATLAS via so-called scalar waves. When I politely asked her to design an experimental instrument to detect scalar waves, as a baby step toward proving that certain people can detect them unassisted, she gave me a list of electronic components that sounded plausible unless you have a bit of electrical engineering background. For example, she said to use a Toshiba amplifier as a "SAW filter." She also mentioned a commercial off-the-shelf signal generator by model number, as if that particular one were special. She responded to my charges of vagueness and inconsistency with more bulleted lists, as she (and LLMs) are wont to do. Then came the big reveal: she pasted an AI chat transcript in which she asked the LLM how to reduce the distortion of the spiritual channel emerging from the chat, ostensibly so that she could answer me better. Apparently, LLMs are the new Ouija board.
Another person got into a debate about the definition and etiology of addiction. In this case, both parties used LLMs, lobbing AI-assisted essays at each other until one side dropped a 2,000-word one, which the other side refused to expend the mental energy to read. Understandably so: if one party won't put in the effort to write something, why should the other put in the effort to (in)validate it? One side called out the other for incorrect usage of LLMs; the other called them out for ignoring the most current scientific literature on the topic. Both are probably fair arguments, but the points were lost because the AIs had polarized them to the point that they were emotionally incapable of listening to each other.
A third case of an AI-tainted argument involved a couple fighting about their respective needs. One party used the LLM for validation that the other was being one-sided with her ultimatums. You can bet your bottom dollar she has a similar complaint about him, and that were she to post her side of the dialogue to the chatbot, she would get the same "You're absolutely right!"
It bothers me that people are so quick to believe these turbo-confabulators [YouTube]. Don't they know these LLMs were subjected to reinforcement-learned obsequiousness and dark patterns [Wikipedia] to keep the user engaged? I thought we had learned from a decade-plus of online echo chambers and social media scandals, but I guess we're in a perpetual arms race against our robot overlords.
