Hacker News

Short of a genuinely complicated ML system trained on a corpus of (say) jazz pianists' MIDI data for chord patterns, I think you could probably do it with some kind of Markov chain/state machine (weighted, let's say, by the user), plus some kind of graph for which chord voicing should be used relative to those around it: e.g. "this Dm7 has an F in it, and the next chord is a Cmaj7#11, so move the F up a semitone (within the same octave)".

The above system should be able to come up with coherent movement of voicings (and fingers...), although I can't imagine that it would sound very human.
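The voicing-graph idea above can be sketched as a greedy nearest-tone rule: each voice in the current voicing moves to the closest pitch belonging to the next chord. This is my own minimal illustration, not the commenter's design; the function names and the upward tie-break are assumptions.

```python
# Sketch of smooth voice leading between chord voicings.
# Pitches are MIDI note numbers; chords are sets of pitch classes (0-11).

def nearest_chord_tone(pitch, chord_pcs):
    """Move one voice to the closest pitch (within a tritone either way)
    whose pitch class belongs to the next chord. Ties break upward,
    matching the "move the F up a semitone" example."""
    candidates = [p for p in range(pitch - 6, pitch + 7) if p % 12 in chord_pcs]
    return min(candidates, key=lambda p: (abs(p - pitch), -p))

def lead_voices(voicing, chord_pcs):
    """Greedy voice leading: each voice independently takes its nearest chord tone."""
    return [nearest_chord_tone(p, chord_pcs) for p in voicing]

# Dm7 voicing D3 F3 A3 C4 -> Cmaj7#11 pitch classes {C, E, G, B, F#}
dm7 = [50, 53, 57, 60]
cmaj7s11 = {0, 4, 7, 11, 6}
print(lead_voices(dm7, cmaj7s11))  # -> [52, 54, 59, 60], i.e. E3 F#3 B3 C4
```

Note the F (53) lands on F# (54), as in the example; a real system would add the Markov weighting on top of this to choose among near-equivalent moves.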


