I spent about two weeks having "morning meetings" with my AI life coach, which was essentially just a GPT-3 prompt that I continually tuned and fed (summaries of) the previous days' conversations. There were major advantages, but a few things were missing.
It was probably most useful as a "rubber duck" technique. Forcing myself to articulate all of the things I needed or wanted to get done that day was itself extremely useful. Sometimes the agent would help me by identifying the highest-priority next action, but usually it was just recognizing, from implicit context, what I already thought was highest priority. Even that can be psychologically valuable, since a lot of procrastination comes from the logjam of not being sure which thing to focus on.
The main missing ingredient, which ultimately caused me to stop the practice, was that it didn't really remember past conversations. I would feed past conversations to it and tell it to summarize the key points, then feed those summaries in as starting context, but this workflow was not sustainable. First, the summarization lost too much important nuance. Second, and more importantly, even that summarization context block outgrew GPT-3's context window within a few days. This lack of persistent context destroyed the sense that I was talking to a real person, someone who could reliably recall information about a project I last worked on 10 days ago and apply that context to the current conversation.
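To see why this workflow breaks down, here is a minimal simulation of it. The numbers are assumptions for illustration: a roughly 4k-token window (GPT-3-era), a crude words-to-tokens ratio, and a `summarize` stub standing in for the actual LLM summarization call.

```python
# Sketch of the rolling-summary memory workflow described above.
# Assumptions: ~4k-token context window, ~1.3 tokens per word,
# and a summarizer that compresses each day's transcript to ~20%.

CONTEXT_WINDOW = 4096   # rough GPT-3 token budget (assumption)
TOKENS_PER_WORD = 1.3   # crude words-to-tokens estimate (assumption)

def summarize(transcript: str, ratio: float = 0.2) -> str:
    """Stand-in for an LLM summarization call: keep ~20% of the words."""
    words = transcript.split()
    keep = max(1, int(len(words) * ratio))
    return " ".join(words[:keep])

def estimate_tokens(text: str) -> int:
    return int(len(text.split()) * TOKENS_PER_WORD)

def run_days(num_days: int, words_per_day: int = 2000) -> list[int]:
    """Simulate daily meetings; return the context size after each day."""
    memory: list[str] = []
    sizes = []
    for day in range(num_days):
        transcript = " ".join(f"d{day}w{i}" for i in range(words_per_day))
        memory.append(summarize(transcript))
        context = "\n".join(memory)  # fed back as starting context next day
        sizes.append(estimate_tokens(context))
    return sizes

sizes = run_days(10)
overflow_day = next((d for d, s in enumerate(sizes) if s > CONTEXT_WINDOW), None)
print(f"context exceeds the window on day {overflow_day}")  # → day 7
```

Even with aggressive 5x compression, the accumulated summaries blow past the window in about a week, which matches my experience; the only fixes are lossier re-summarization (losing more nuance) or a genuinely larger memory.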
I suspect we are not far from both of these issues being mostly solved. The trend is clearly toward LLMs with various types of memory and/or much larger context windows.