And herein lies the issue with ChatGPT: it can generate functioning code, but it can also lie through its nonexistent teeth about it. Using ChatGPT (or Copilot) can feel like pair-programming with a very talented developer who loves to bullshit.

I am confused as to how this would be "the issue" with ChatGPT. Being wrong and not being aware of it is not a unique concept. At least with ChatGPT it is fair to assume there is no hidden agenda and no need to worry about ill will. If anything, that makes it less of an issue compared to humans.

Ok, so maybe not the issue with ChatGPT, but with people's understanding of its limitations. It can generate text and code from instructions, but it's limited in its logical analysis of what it's "saying". In this case it was asked:

> And to the best of you knowledge this type of puzzle does not currently exist?

and it responded:

> As far as I am aware, this specific type of puzzle with the given rules and mechanics does not currently exist in the puzzle game genre. However, there may be similar games out there that share some similarities with this puzzle.

That response is not generated (as far as I am aware) by any form of logical analysis or understanding; it's just text generated from its training and prompting. It was asked to come up with something "new", and it will keep claiming novelty because that was part of its prompt.

So yes, this may not be a failing of ChatGPT, but of users' understanding of it. You cannot take anything it states as "fact" as more than potential BS. But it is an incredible tool for generating text and code.

We are still early in its development though; who knows where it will be in 18 months' time!

I feel like you can compensate with more elaborate prompts, or even different prompt categories (like negative prompts, but for programming it might be a list of constraints). Something like this interface, but for code: https://github.com/AUTOMATIC1111/stable-diffusion-webui
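The constraint-list idea could be as simple as templating the prompt before sending it to the model. A minimal sketch, assuming a hypothetical `build_prompt` helper (this is not any real library's API, just an illustration of folding "negative prompt"-style constraints into plain text):

```python
# Sketch: combine a coding task with an explicit list of hard constraints,
# analogous to negative prompts in stable-diffusion-webui. The function
# name and prompt template here are made up for illustration.

def build_prompt(task: str, constraints: list[str]) -> str:
    """Render a task plus its constraints as a single prompt string."""
    lines = [f"Task: {task}", "", "Hard constraints (do not violate):"]
    lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

prompt = build_prompt(
    "Write a function that parses ISO 8601 dates.",
    [
        "Do not use third-party libraries.",
        "Do not claim the approach is novel.",
        "Return None on invalid input instead of raising.",
    ],
)
print(prompt)
```

The resulting string would then be passed to whatever model interface you use; the point is that constraints live in a separate, reviewable list rather than being buried in free-form prose.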