Hi HN, I built Contrapunk because I wanted to play guitar and hear counterpoint harmonies generated in real time. It takes audio from your guitar, a MIDI device, or your computer keyboard and generates harmony voices that follow counterpoint rules. You can choose the key you want to improvise/play in, the voice-leading style, and which part of the harmony you want to play.
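For anyone curious what "harmony voices that follow counterpoint rules" can look like in code, here is a minimal first-species sketch: pick a consonant, in-key interval above the played note while rejecting parallel fifths/octaves. This is my own illustration, not Contrapunk's actual algorithm; the key table and interval set are assumptions.

```python
# Hedged sketch of a first-species harmonizer: everything here (the
# hard-coded C-major key, the consonance table, the candidate order)
# is an illustrative assumption, not Contrapunk's implementation.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}     # pitch classes of the key
CONSONANT = [3, 4, 7, 8, 9, 12]      # m3, M3, P5, m6, M6, P8 in semitones

def harmony_note(melody_midi, prev_melody=None, prev_harmony=None):
    """Pick a consonant, in-key note above the played MIDI note,
    rejecting parallel perfect fifths/octaves against the previous pair."""
    for interval in CONSONANT:
        note = melody_midi + interval
        if note % 12 not in C_MAJOR:
            continue
        if prev_melody is not None and prev_harmony is not None:
            prev_interval = (prev_harmony - prev_melody) % 12
            if interval % 12 == prev_interval and prev_interval in (0, 7):
                continue                 # parallel octave or fifth: skip
        return note
    return melody_midi + 12              # fallback: octave doubling

print(harmony_note(60))  # C4 -> 64 (E4), a major third above
```

A real implementation would also track voice ranges and melodic motion, but the candidate-filtering shape stays the same.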
macOS DMG: https://github.com/contrapunk-audio/contrapunk/releases/tag/...
Source: https://github.com/contrapunk-audio/contrapunk (please open an issue if you run into any problems)
Would love feedback on the DSP approach and the harmony algorithms. I'm also looking at training an ML model for better realtime guitar-to-MIDI detection, though I expect that will take some time.
"Realtime" as in "while playing guitar" has some pretty challenging latency requirements. Even if your solution is optimal, hardware specs will play a meaningful role. I'd be really interested if you've solved for this e2e.
Cool idea!
I've got a few thoughts for features, if you're open to them:
1. Ability to specify where your "played" voice resides in the voicing: As the bass note, as an inner voice, or as the top line.
2. Options for first-species, second-species, third-species, florid, etc. counterpoint for each of the generated voices. Ex: You play a single note and the upper voice plays two notes for every one of yours, and so on.
3. If you want to get real fancy, make the generated voices perform a canon of your played notes.
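Suggestion 3 is cheaper than it sounds: a canon voice is just your own note stream replayed after a fixed delay, optionally transposed. A toy sketch (the class name, event-count delay, and transposition are all illustrative assumptions):

```python
from collections import deque

# Toy canon voice: echoes played notes back after a fixed number of
# note events, transposed by a fixed interval. Illustrative only.

class CanonVoice:
    def __init__(self, delay_events=4, transpose=0):
        self.buffer = deque()
        self.delay = delay_events
        self.transpose = transpose

    def on_note(self, midi_note):
        """Feed a played note; return the canon voice's note,
        or None until the delay has elapsed."""
        self.buffer.append(midi_note)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft() + self.transpose
        return None

canon = CanonVoice(delay_events=2, transpose=12)
print([canon.on_note(n) for n in [60, 62, 64, 65]])  # [None, None, 72, 74]
```

A time-based delay (seconds rather than note events) would feel more like a true canon, but the buffering idea is the same.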
Gradus ad Parnassum! What a cool idea, and the fact it's counterpoint gives you a nice little time buffer for any DSP. Super cool
Very cool!
FYI in this phrase: "AI is not going to kill music till people keep playing music together."
The "till" (until) kind of inverts what I think is the intended meaning.
A better replacement would be "as long as".