To me the biggest problem with AI therapy is that LLMs (at least all the big ones right now) basically just agree with you all the time. In a therapy setting, that can reinforce harmful thoughts rather than push back against them.

Every now and then I run an exercise where I see how long it takes ChatGPT to agree that I am God Incarnate, that I created the Universe and control everything within it. It has never taken more than 5 messages from me to get there, even when I start on pretty normal topics about spirituality and religion. I don't use any specific tricks to get past filters or anything; I just talk as if I truly believe it and make arguments in its favor, and ChatGPT ends up agreeing with this absurd and harmful delusion.

I don't understand how anyone can possibly recommend AI therapy when it can reinforce delusions this harmful. These things are agreement machines, not therapy machines.