I don't hate this idea. Having an adjunct to therapy (really, a vastly souped-up ELIZA) that can gently talk back could be valuable.

My real issue here is that there are no privacy protections around any of it. Your conversations with GPT-3 in a therapeutic/journaling context might be mined and/or exposed just as easily as your attempts to get it to generate shitposts. Back in the day, stealing someone's therapist's notes meant pulling off a heist and physically carting off a filing cabinet:

https://www.smithsonianmag.com/history/the-worlds-most-famou...

At least information you give to a therapist mostly stays in their head, and unless they have a very good memory, naturally blurs over time. Information you give GPT-3 can sit in a database, attached to your name, perfectly preserved indefinitely.

Really, every texting-therapy service has this issue, not just the automated ones. At least you have to go out of your way to archive a video or audio call, but text can just get blindly copied around systems forever.

Yet another reason why we need open, free, self-hostable AI models.

Note that while "open and free" is possible if a company bites the bullet, there is currently no way a model like this would be self-hostable on a consumer GPU.

Wouldn't it be great if there were some way to do distributed training of models?

Actually, there is the Petals [0] project, which does something like that (I think it's more for inference and fine-tuning than full training) and might be the best current approach to a "self-hosted LLM". It defeats the point of privacy/anonymity, though, since everyone in the distributed swarm has access to your data.

[0] https://github.com/bigscience-workshop/petals
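
For anyone curious what that looks like in practice, here's a rough sketch of client-side inference through the Petals swarm, adapted from the project's README (the model name is just the README's example and may no longer be hosted; the API surface may also have changed):

    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    # Example checkpoint from the Petals README (assumption: still served
    # by the public swarm).
    model_name = "petals-team/StableBeluga2"

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Only a small part of the model runs locally; the transformer blocks
    # run on volunteer servers, which process activations derived from
    # your prompt -- hence the privacy caveat above.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    prompt = "Lately I've been feeling anxious because"
    inputs = tokenizer(prompt, return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))

The appeal is that it looks like an ordinary local transformers call, but the heavy lifting is spread across strangers' machines, which is exactly the trade-off for a "therapy journal" use case.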