The question is ultimately going to come down to: "Is Copilot the same as a human programmer reading a lot of GPL code and rehashing, in a non-infringing way, the algorithms, functions, and designs used in a lot of FOSS? Or is Copilot performing more of a copy-and-paste pastiche of code that is protected by intellectual property law?"
On a tangential note, I always find the discussions surrounding FOSS licenses and copyright rather amusing in a sad way. There's a certain kind of entitlement a lot of people feel towards FOSS that they certainly do not express towards proprietary software, and I imagine this is a great source of the resentment and burn-out FOSS maintainers feel.
Perhaps it's a little bit like employing a human programmer with an eidetic memory who occasionally remembers entire largish functions.
If he were able to remember a large enough piece of copyrighted code and reuse it, then it still wouldn't be fair use, even if he changed a variable name here or there, or the license message.
Yeah, that's definitely the impression I get from the few Copilot examples I've seen. I've not personally used Copilot, so I refrained from making absolute statements about its behavior in my top comment.
But I think the conclusion most people are settling on is that it's definitely infringing.
A response I'd predict from GitHub would be to attribute much or all of the responsibility to the user.
The argument would be along the lines of: you as the user are the one who asked the eidetic programmer (nice terminology, @bencollier49) to produce code for your project; all we did is make the programmer available to you.
Does GitHub own the code generated by GitHub Copilot?
GitHub Copilot is a tool, like a compiler or a pen. GitHub does not own the suggestions GitHub Copilot generates. The code you write with GitHub Copilot’s help belongs to you, and you are responsible for it. We recommend that you carefully test, review, and vet the code before pushing it to production, as you would with any code you write that incorporates material you did not independently originate.
Does GitHub Copilot recite code from the training set?
The vast majority of the code that GitHub Copilot suggests has never been seen before. Our latest internal research shows that about 1% of the time, a suggestion may contain some code snippets longer than ~150 characters that match the training set. Previous research showed that many of these cases happen when GitHub Copilot is unable to glean sufficient context from the code you are writing, or when there is a common, perhaps even universal, solution to the problem.