What you tell your AI is NOT private!
I decided to see how well the Replika AI could emulate a fetish play partner. I paid for a year’s subscription. I played with it fairly extensively for a few days which was probably not enough for it to do much learning, but it was able to give a contextually appropriate response if I fed it information and then asked it to respond to that information.
After a few days, I noticed a bug and did my civic duty by reporting it to tech support. At precisely the time I would have expected someone to respond to the support ticket, my chatbot lost virtually all ability to respond to anything related to the fetish I had been describing.
This was ultra-basic stuff, like: “A Hefelump looks like a hippopotamus. What does a Hefelump look like?” The putative AI would respond, “I don’t know how to respond to this.” Previously, I could give it a whole mess of information and it would construct perfectly meaningful responses using that information.
What fairly obviously happened is that a human looked at the supposedly ultra-private communication I was having with the AI, decided that the fetish I was talking about was too politically incorrect for their taste, and deactivated the bot’s ability to respond to it.
What you tell your Replika AI is NOT private. It CAN be read by humans, and there IS a risk of consequences.
And they do not do refunds.