Haven’t tried this demo, but in my experience these open-source models that split music into four components work fantastically well. Not quite perfectly (if you remove vocals, the remaining track may have faint echoes of them), but astoundingly well compared to the state of the art just 5 years ago or so.
However… what if you want more than four components? What if you want to split a complex arrangement into a separate component for each individual instrument? Does anyone know of any interesting research in this area?
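(For concreteness, the four-way split I mean is drums / bass / other / vocals, which is what e.g. Demucs produces by default. A minimal sketch of the "remove vocals" step, assuming the demucs package is pip-installed; my_track.mp3 is a hypothetical input file:)

    import subprocess

    # Hedged sketch: call the Demucs CLI (pip install demucs).
    # "--two-stems vocals" keeps just two outputs, vocals and
    # no_vocals (the accompaniment) -- i.e. the vocal-removal case.
    # my_track.mp3 is a hypothetical input path.
    subprocess.run(
        ["demucs", "--two-stems", "vocals", "my_track.mp3"],
        check=True,
    )

    # Without --two-stems, the default model writes the full four
    # stems (drums, bass, other, vocals), typically under
    # ./separated/<model_name>/my_track/.
    subprocess.run(["demucs", "my_track.mp3"], check=True)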
From the Demucs README [1]:

> We are also releasing an experimental 6 sources model, that adds a guitar and piano source. Quick testing seems to show okay quality for guitar, but a lot of bleeding and artifacts for the piano source.
I believe Audioshake [2] (a company in the space) is doing guitar separation as well.
[1]: https://github.com/facebookresearch/demucs
[2]: https://www.audioshake.ai/
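If anyone wants to try that experimental 6-source model, it is selected by model name; a hedged sketch, assuming a recent demucs release where the model is published as htdemucs_6s (song.mp3 and the output folder name are hypothetical):

    import subprocess

    # Hedged sketch: pick the experimental 6-source model by name.
    # htdemucs_6s adds guitar and piano stems on top of
    # drums / bass / other / vocals; quality caveats as quoted above.
    # -o sets the output directory; song.mp3 is a hypothetical input.
    subprocess.run(
        ["demucs", "-n", "htdemucs_6s", "-o", "stems_6s", "song.mp3"],
        check=True,
    )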