I recently discovered Audacity includes plug-ins for audio separation that work great (e.g. splitting a song into a vocals track and an instruments track). The model it uses also originated at Facebook (Demucs).
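If you want to try Demucs outside Audacity, here's a rough sketch that just shells out to its CLI (assumes `pip install demucs`; "song.mp3" is a placeholder and the exact output folder depends on the model version):

```python
# Rough sketch: run the Demucs CLI from Python to split a song into
# vocals + accompaniment. "song.mp3" is a placeholder input file.
import subprocess

subprocess.run(
    ["demucs", "--two-stems", "vocals", "song.mp3"],  # vocals vs. everything else
    check=True,
)
# Stems typically land under ./separated/<model_name>/song/
```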
This is hilariously bad with music. Like I can type in the most basic thing like "string instruments" which should theoretically be super easy to isolate. You can generally one-shot this using spectral analysis libraries. And it just totally fails.
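The obvious spectral baseline is harmonic/percussive separation; a minimal sketch with librosa (note this splits sustained, string-like content from transients rather than isolating strings specifically, and the file name is a placeholder):

```python
# Minimal sketch of a spectral approach: harmonic/percussive source separation.
import librosa
import soundfile as sf

y, sr = librosa.load("mix.wav", sr=None)             # load the mixture
y_harmonic, y_percussive = librosa.effects.hpss(y)   # STFT-domain median filtering
sf.write("harmonic.wav", y_harmonic, sr)             # sustained/tonal content
sf.write("percussive.wav", y_percussive, sr)         # drums and transients
```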
This is super cool. Of course, it's possible to separate instrument sounds using specialized tools, but I can't wait to see how people use this model for a bunch of other use cases where it's not trivial to use those specialized tools:
* remove the background noise of tech products while keeping the nature sounds
* isolate the voice of a single person and feed it into an STT model to improve accuracy (see the sketch after this list)
* isolate the sound of specific events in games, and many more
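For that STT idea, a minimal sketch: it assumes some separator (this model or any other) has already produced a voice-only file, and it uses OpenAI's open-source Whisper as a stand-in STT model. The file name is a placeholder.

```python
# Minimal sketch: transcribe an already-isolated voice track with Whisper.
# "isolated_voice.wav" is a placeholder for whatever the separator produced.
# Assumes `pip install openai-whisper`.
import whisper

model = whisper.load_model("base")               # small model, fine for a demo
result = model.transcribe("isolated_voice.wav")  # feed the clean stem to STT
print(result["text"])
```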
> Visual prompting: Click on the person or object in the video that’s making a sound to isolate their audio.
How does that work? Correlating sound with movement?
Finally a way to perhaps remove laugh tracks in the near future.
I wonder if this would be useful for hearing aid users, reducing the background restaurant babble that overwhelms the people you want to hear.
Given TikTok's insane creator adoption rate, is Meta developing these models to build out a competing content creation platform?
I wonder if the segmentation would work with a video of a ventriloquist and a dummy?
Can I create a continuous “who farted” detector? Would be great at parties
I tried using this to extract some speech from an audio track with heavy wind noise (filmed on a windy seashore without a mic windscreen), and unfortunately the result was less intelligible than the original.
I got much better results, though still not perfect, with the voice isolator in ElevenLabs.