My biggest problem with this stuff is that it looks correct, but it's often subtly wrong. Systems built from stitched-together, GPT-generated code are going to provide the next generation's buffer overflow exploits.
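To illustrate the kind of thing I mean (a made-up example, not taken from any real generated output), here is C that reads fine at a glance but quietly overflows by one byte:

    /* Hypothetical example: looks like a routine string copy,
       but the buffer has no room for the terminating '\0',
       so strcpy writes one byte past the end of buf. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *name = "0123456789";  /* 10 chars + '\0' = 11 bytes */
        char buf[10];                     /* "big enough for 10 chars"... */
        strcpy(buf, name);                /* ...but not for the terminator */
        printf("%s\n", buf);
        return 0;
    }

A reviewer skimming this sees a buffer "sized to fit" and moves on; that's exactly the failure mode that's hard to catch when the code merely looks plausible.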
It’s not just code. My wife is a physician and I got her to do a few medical prompts with ChatGPT. The output looked correct to me, and if I had read it somewhere I would have accepted it completely. But she could point out numerous severe flaws.
Digital software is merely entering a realm of algorithmic (d)efficiency at least as old as biology: morphogenetic software. So long, and be gone, abstract truth-table rigidity that can't even detect a shirt without stripes [4]; welcome gradient exploration and error minimization, able to give rise to a synthetic mind which, much like the carbon-based mind, will make ridiculous errors, just look at a child failing to walk [5].
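If "gradient exploration and error minimization" sounds abstract, here is a toy sketch of my own (not any particular model's training loop), stumbling downhill on a single-variable error surface:

    /* Toy illustration of error minimization by gradient descent:
       walk down f(x) = (x - 3)^2 by repeatedly stepping against
       the local slope, getting less wrong with each stumble. */
    #include <stdio.h>

    int main(void) {
        double x = 0.0;   /* arbitrary starting guess */
        double lr = 0.1;  /* step size */
        for (int i = 0; i < 50; i++) {
            double grad = 2.0 * (x - 3.0);  /* derivative of the error */
            x -= lr * grad;                 /* step toward lower error */
        }
        printf("x converged to about %f\n", x);
        return 0;
    }

Nothing in the loop "knows" the answer is 3; it just keeps reducing the error, which is the point of the analogy.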
[1] Ames Window https://www.youtube.com/watch?v=0KrpZMNEDOY
[2] https://www.researchgate.net/publication/259354882_The_Mulle...
[3] https://en.wikipedia.org/wiki/M%C3%BCller-Lyer_illusion
[4] https://github.com/elsamuko/Shirt-without-Stripes
[5] https://media.tenor.com/uB5ijGdseFwAAAAC/stumble-haha.gif