Worth remembering GPT-3 was 10-20 times more expensive two years ago. There's a super-Moore's-law learning curve going on here, and I suspect that in a year GPT-4.5 will cost the same as GPT-3.5 does today.
It's insane... it feels like an iPhone 3G to iPhone 4 level of quality improvement every year.
GPT-4 is a scary improvement over 3.5, especially for handling code. It will be the literal definition of awesome when these models get a context window large enough to hold a small-to-medium-sized codebase.
I've been playing around with it for an hour, seeing what it can do to refactor some of the things we have with the most tech debt, and it's astounding how well it does with how little context I give it.
There are already some cool projects that help LLMs work around the context window limitation and handle even larger codebases, like https://github.com/jerryjliu/llama_index and https://github.com/hwchase17/langchain.
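The basic trick in these tools is retrieval: chunk and embed the files up front, then pull only the relevant snippets into the prompt at query time, so the model never has to see the whole codebase at once. A minimal sketch of that flow using llama_index's Python API (class names shift between versions, and the repo path and question here are just placeholders):

```python
# Minimal retrieval-augmented sketch with llama_index (API names vary by version).
# Assumes an OpenAI API key is set in the environment for embeddings/completions.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load source files from a (hypothetical) repo checkout and embed them into a vector index.
documents = SimpleDirectoryReader("./my_repo/src", recursive=True).load_data()
index = VectorStoreIndex.from_documents(documents)

# At query time only the most relevant chunks get stuffed into the prompt.
query_engine = index.as_query_engine()
response = query_engine.query("Where is the retry logic for API calls implemented?")
print(response)
```

langchain does the same kind of retrieval but wires it into chains and agents rather than a single query call.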