There are (at least) four categories of DAW users:
1. Professionals who are being paid to make music, and for whom time is essentially money. Tools that speed up the production of that music are financially valuable to them, and (if done right) make their overall lives easier too.
2. Musicians for whom making music is a creative act of self-expression. They are not being paid by the hour (of music, or of effort), they are not under deadlines, but they do want tools that fit their own workflow and understanding of what the process should look like.
3. People who just want to have fun making music. Their level of performance virtuosity is likely low, and most music fans would judge the originality of what they produce to be low as well. They want results that can be obtained quickly and are recognizably musical in whatever style they're aiming at, and they don't want to feel bogged down by the technology and process.
4. Audio engineers in a variety of fields who have little to no interest in or need for music composition, but who are faced with the task of taking a variety of audio data and transforming it, radically or subtly, to create a finished version, whether that's a collection of musical compositions, a podcast, or a movie soundtrack.
The same individual may, at different times, be a member of more than one of these groups (or other groups that I've omitted).
The needs of these groups overlap to a degree, but the extent to which the current conception of AI in music & audio can help each of them, and how it might do so, are really quite different.
We can already see this in the current DAW world, where the set of users of DAWs like Live, Bitwig and FL Studio tends to be somewhat disjoint from the users of ProTools, Logic and Studio One.
TFA acknowledges this to some degree, but I don't think it does enough to recognize the different needs of these groups and workflows. Nevertheless, it's not a bad overview of the challenges and possibilities we're facing.