Okay but I can't avoid noticing the bug in the copilot-generated code. The generated code is:

    async function isPositive(text: string): Promise<boolean> {
        const response = await fetch('https://text-processing.com/api/sentiment', {
            method: "POST",
            body: `text=${text}`,
            headers: {
                "Content-Type": "application/x-www-form-urlencoded",
            },
        });
        const json = await response.json();
        return json.label === "pos";
    }

This code doesn't escape the text, so if the text contains '&' or other characters with special meaning in form URL encoding, the request will break. Moreover, this kind of error can cause serious security issues. Probably not in this exact case -- the worst an attacker could do here is change the sentiment analysis language -- but this class of bug in general is rife with security implications.
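For what it's worth, the fix is tiny. Here's a sketch (my own, not the actual Copilot output) using URLSearchParams, which handles form URL encoding for you:

```typescript
// Hypothetical corrected body builder: URLSearchParams percent-encodes
// '&', '=', and everything else that is special in
// application/x-www-form-urlencoded.
function encodeSentimentBody(text: string): string {
    return new URLSearchParams({ text }).toString();
}

// Raw interpolation would send "text=good & lang=nl" as two form
// fields; the encoded version keeps it all in the single 'text' field.
console.log(encodeSentimentBody("good & lang=nl"));
// → text=good+%26+lang%3Dnl
```

In the generated function, `body: \`text=${text}\`` would just become `body: new URLSearchParams({ text })` -- fetch accepts a URLSearchParams object directly as a body and sets the right Content-Type on its own.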

This isn't the first time I've seen this kind of bug either -- and this class of bug is always shown by people trying to showcase how amazing Copilot is, so it seems like an inherent flaw. Is this really the future of programming? Is programming going to go from a creative endeavor to make the machine do what you want, to a job which mostly consists of reviewing and debugging auto-generated code?

>> This isn't the first time I've seen this kind of bug either -- and this class of bug is always shown by people trying to showcase how amazing Copilot is, so it seems like an inherent flaw.

I think it's because people copy/paste generated code without reading it carefully. They eyeball it, it makes sense, they go tweet about it.

I don't know if this predicts how people will mostly use generated code. I note however that this is probably too much code to expect Copilot to generate correctly: ten or so lines is too much for a system that can generate code but can't check it for correctness in any way. It's better to use it for small snippets of a couple of lines, like loops and branches, than to ask it to generate entire functions. The latter is asking for trouble.

I don't think you're right here, frankly, since the buggy snippet is taken from the Copilot marketing page (https://github.com/features/copilot). The examples on that page which could conceivably have missing-escape bugs are the sentiment analysis example (sentiments.ts), the tweet fetcher examples (fetch_tweets.js, fetch_tweets.ts, fetch_tweets.go) and the Goodreads rating examples (rating.js, rating.py, rating.ts, rating.go). Of all of them, only rating.go is free of a serious escaping bug, and only because Copilot happened to use a URL-building library in that one case.

These are the examples which GitHub itself uses to demonstrate what Copilot is capable of, so it's not just a matter of people tweeting without reading through the code properly. It also suggests that the people behind Copilot do believe that one primary use-case for Copilot is to generate entire functions.