The problem as I see it is that MIDI is a representational protocol, yet MIDI "note-on" does not always represent the sample's (WAV) internal note-on: the pulse, downbeat, or transient peak. Instead, MIDI "note-on" represents the non-musical sample start time (time zero), not the sample's internal (WAV) note-on time. MIDI therefore suffers from a systemic failure of musical representation.
The solution, as I see it, is to augment the present standard/protocol with a pre-note MIDI data type for triggering the pre-note (WAV) samples in cases where pre-note material is required. Such cases include orchestral (two-handed) "crrash" cymbals, in which the "crr.." is the pre-note, the "..aa.." is the downbeat/note-on (the transient peak/pulse), and the "ssh" is the ring-out. Other examples include violins, woodwinds, foot-closed high-hats, etc.
Missing from this list are percussive (short-attack) instruments such as pianos, organs, tom-toms, snares, etc. This is because the time (distance) between the sample start time (MIDI note-on) and the transient peak (WAV note-on) for these instruments is so short that it can be safely ignored.
The Seaboard example is interesting but is very expensive in terms of the data stream: 16 MIDI channels per instrument. I would have to employ a separate MIDI system, and my present system would still have its systemic issues. I also perform the drums with my feet.
I am not recommending that musicians learn to perform in countless different wonky ways, with various fingers, in order to compensate for countless different pre-note latencies. I am recommending instead a comprehensive, system-based solution that permits musicians to perform in the coherent manner to which they were trained.
The Seaboard Rise applies a lot of data to post-note-on processing but does nothing to address the pre-note issue addressed here...
Or perhaps I am mistaken. How would the Seaboard fix the following problem?
I wish to perform live in my band with three layers: piano on MIDI channel 1, a string patch on MIDI channel 2, and an orchestral crash (on MIDI channel 3) triggered by the high-velocity notes from MIDI channel 1. The problem is that the piano, being a "ding" sound, has almost no pre-note content, but violins and orchestral crashes have significant pre-note content, and consequently they always come in late compared to the piano. What are the possible live-performance solutions here?
(A) Chop off the pre-note content of the violins and orchestral crashes to make them more piano-like in their attack. This destroys the natural sound of these instruments - not good.
(B) Delay the piano to bring it into time with the late violins and possibly the crash. This makes the piano delayed and consequently most awkward to play, and nearly impossible to groove with the band; moreover, the crash pre-note and the violin pre-note may be of different lengths. (In the case of three or more different pre-note lengths, live synchronization is physically impossible.)
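To make option (B) concrete, here is a minimal sketch of the arithmetic involved, using purely hypothetical pre-note lengths (the numbers are illustrative assumptions, not measurements of any real sample library). It shows that each layer would need a different compensating delay, so no single adjustment to the performed key press can line up all three transient peaks:

```python
# Hypothetical pre-note lengths in milliseconds; illustrative values only.
PRE_NOTE_MS = {"piano": 5, "strings": 120, "crash": 250}

# Option (B) delays every layer so that all transient peaks (WAV note-ons)
# land together: each layer must wait the difference between the longest
# pre-note and its own pre-note.
longest = max(PRE_NOTE_MS.values())
delays = {name: longest - pre for name, pre in PRE_NOTE_MS.items()}

print(delays)  # {'piano': 245, 'strings': 130, 'crash': 0}
```

With these assumed values the piano would have to sound 245 ms after the key press, which is exactly the unplayable delay described above, and any change to one pre-note length changes every other layer's required delay.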
Neither of these is a real solution. In the first case the sounds are damaged, and in the second case the performance interface is damaged. I am unable to see how the Seaboard solves this issue. The problem here is that MIDI note-on does not correspond to sample note-on but instead marks the sample's playback start (time zero).
What is needed is an extended MIDI standard requiring that a MIDI note-on correspond (to within a fixed number of milliseconds) to the sample note-on. Because some samples have significant pre-note content, this extra sound must (according to this standard) be allocated to a pre-note sample and triggered by a MIDI pre-note-on as the key (piano hammer) leaves the resting position and heads toward the later note-on.
In order to provide performance-timing flexibility, the pre-note would typically include a looped element, just as the note-on would typically contain a looped portion.
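A small sketch of how a sampler voice might respond to the proposed two-stage trigger. Everything here is hypothetical: no "pre-note-on" message exists in the current MIDI standard, and the class and method names are my own illustration of the idea, not any real API:

```python
# Sketch of the proposed two-stage trigger (hypothetical; not part of MIDI).
class PreNoteVoice:
    """Plays a sample split at its internal (WAV) note-on point."""

    def __init__(self, pre_note_ms, loop_pre_note=True):
        self.pre_note_ms = pre_note_ms      # length of the "crr.." segment
        self.loop_pre_note = loop_pre_note  # loop it while awaiting note-on
        self.state = "idle"

    def pre_note_on(self):
        # Key has left the resting position: start the pre-note segment,
        # looping it so the player's key travel time can vary freely.
        self.state = "pre-note (looping)" if self.loop_pre_note else "pre-note"

    def note_on(self):
        # Key has reached the note-on position: jump to the transient peak,
        # so MIDI note-on and WAV note-on finally coincide.
        self.state = "note-on"

voice = PreNoteVoice(pre_note_ms=250)
voice.pre_note_on()
print(voice.state)  # pre-note (looping)
voice.note_on()
print(voice.state)  # note-on
```

The looped pre-note segment is what supplies the timing flexibility: the downbeat arrives whenever the key does, not at a fixed offset after the trigger.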
As a professional live MIDI musician, I am acutely aware of the healing and uplifting power of music. I therefore take music and musical interfaces very seriously. The pre-note problem has made me acutely aware of the musical limitations of MIDI and of my inability to work well with various musical sounds and styles (jazz and classical music are two examples). In 1999 I was commissioned to produce a MIDI classical symphony. That work, however, was like "herding cats" due to the countless pre-note articulations of countless orchestral samples.
I do realize, however, that many people use synth-type keys for MIDI and that these interfaces provide very little pre-note room. This is because there is very little distance between a synth key's resting position and its depressed note-on position. This is not true for piano keys (or PK5 keys), wherein significant pre-note room exists. A MIDI pre-note protocol would in no way conflict with a pianist's training, whereas any requirement that certain fingers must land early or late to compensate for pre-note content would indeed conflict with previous training.
Due to the pre-note content of the foot-closed high-hat, I am presently forced to perform in a hobbled, compensatory manner on stage while performing on my (Roland PK5) foot drums.
In the case of the acoustic drum kit, the foot-closed high-hat serves as a pre-note signal (both audio and visual) to other band members that a downbeat is about to occur. The MIDI protocol, however, being devoid of MIDI pre-note signalling, makes grooving in real time with (locking into) the rhythm of a MIDI (Roland PK5) drummer such as myself significantly more difficult for other band members. The addition of MIDI pre-note would solve this problem for live performances, providing both anticipatory audio and anticipatory visual pre-note data to all band members.
In order to remedy these issues, I am now seeking a trigger mechanism of some kind that I could attach to my MIDI pedals (Roland PK-5s) that could detect when a pedal leaves the resting position. I.e., I am looking to "MacGyver" or "jerry-rig" my system in order to compensate for the systemic representational failure of the MIDI protocol. If anyone is aware of a third-party triggering mechanism that would work for this, please let me know.
In conclusion: I am not recommending that musicians learn to perform in countless different wonky ways in order to compensate for countless different pre-note latencies. I am recommending instead a comprehensive, standardized, system-based solution that permits musicians to perform in the manner to which they are trained.
Because the industry may well be unresponsive to this issue, I am presently looking for small aftermarket sensors that could be added to my present (PK5) MIDI rig. These would detect the initial movement of the keys from their resting position and instantly send a MIDI signal. (It doesn't matter what type of MIDI signal, because I can always translate MIDI of any type into MIDI note-on and visual data.)
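The translation step is simple in principle. Here is a minimal sketch: whatever MIDI message the aftermarket sensor happens to emit, it is remapped to a single "pre-note" control change that downstream gear can use for audio and visual cues. The controller number (CC 20) is an arbitrary, hypothetical choice, not an assigned standard:

```python
# Hypothetical "pre-note" controller number; any free CC would do.
PRE_NOTE_CC = 20

def translate_to_pre_note(sensor_message, channel=0):
    """Map any incoming sensor MIDI message to a pre-note control change.

    The content of sensor_message is ignored on purpose: its arrival alone
    means a key has just left the resting position.
    """
    status = 0xB0 | (channel & 0x0F)  # control-change status byte
    return bytes([status, PRE_NOTE_CC, 127])

# E.g. the sensor happens to send a note-on (0x90); we remap it anyway.
raw = translate_to_pre_note(b"\x90\x3c\x64")
print(raw.hex())  # b0147f
```

Because only the arrival time of the sensor message matters, the sensor's own message type is irrelevant, which is why any small aftermarket trigger would do.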
P.S. It turns out that my wife's brother's son is technically savvy, and he is recommending certain small industrial sensors mounted above my PK5 keys, coupled with an Arduino board, to produce MIDI "pre-note" data. Stay tuned...
*(In orchestral crashes, two cymbals are held in two hands and then pushed together. As with the foot-closed high-hat, the pre-note sound is produced as the cymbals connect only partially at first (pre-note) and then fully (MIDI note-on) at the downbeat.)