-
That's a pretty nifty idea, but that's not how these models, or Fourier-transform-based separators in general, work. The models aren't trained on separate instruments; they're probabilistic pattern matchers that extract sinusoidal components from the spectrogram. That's why you can't get piano, strings, or other similar-range timbres to separate, and why the stems are always drums / bass / mid-range / vocals. Instruments that occupy the same frequency range and have similar spectral signatures can't be separated any further than they already are. Not much you can do about it.
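The bin-by-bin pattern matching described above can be sketched with a toy time-frequency mask (a hypothetical illustration using scipy, not the actual architecture of any separation model): a mask divides each spectrogram bin's energy between stems, which works when sources sit in different frequency ranges and has nothing to grab onto when they overlap.

```python
import numpy as np
from scipy.signal import stft, istft

sr = 8000
t = np.arange(sr * 2) / sr
# toy "mix": a low "bass" tone plus a high "vocal-like" tone
bass = 0.5 * np.sin(2 * np.pi * 80 * t)
vocal = 0.5 * np.sin(2 * np.pi * 1200 * t)
mix = bass + vocal

f, frames, Z = stft(mix, fs=sr, nperseg=512)

# a real model predicts a soft mask per stem from learned spectral
# patterns; here we fake one with a crude frequency split at 400 Hz
mask_bass = (f < 400)[:, None].astype(float)
mask_vocal = 1.0 - mask_bass

_, bass_est = istft(Z * mask_bass, fs=sr, nperseg=512)
_, vocal_est = istft(Z * mask_vocal, fs=sr, nperseg=512)
```

Every bin's energy has to be assigned to some stem, so two instruments that light up the same bins (piano and strings, say) look identical to the mask and can't be pulled apart this way.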
-
The track has segments with vocals and segments without. The algorithm analyzes the parts of this specific track that don't have vocals and, based on those, removes the vocals from the rest of the track.
Is this even a thing?
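The idea described above resembles classic spectral subtraction: profile the instrumental's magnitude spectrum from a vocal-free stretch, then cap the rest of the track at that profile. A minimal sketch under toy assumptions (pure tones, scipy STFT; the variable names and the 0.9 s margin are illustrative, not any real tool's API):

```python
import numpy as np
from scipy.signal import stft, istft

sr = 8000
t = np.arange(sr * 2) / sr
backing = 0.3 * np.sin(2 * np.pi * 220 * t)  # instrumental, plays throughout
vocal = 0.3 * np.sin(2 * np.pi * 900 * t)
vocal[:sr] = 0.0                             # first second has no vocals
mix = backing + vocal

f, frames, Z = stft(mix, fs=sr, nperseg=512)
mag, phase = np.abs(Z), np.angle(Z)

# profile the instrumental from the vocal-free region; stop at 0.9 s so
# no analysis window overlaps the vocal onset at 1.0 s
profile = mag[:, frames < 0.9].mean(axis=1, keepdims=True)

# cap each bin at the instrumental profile: anything above it is
# treated as vocal energy and discarded
inst_mag = np.minimum(mag, profile)
_, inst_est = istft(inst_mag * np.exp(1j * phase), fs=sr, nperseg=512)
```

This only works to the extent the instrumental's spectrum stays roughly stationary; the moment the backing changes between the profiled segment and the vocal segment, the profile is wrong and the subtraction smears.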