What might be some practical uses for program change automation?


ESD

Joined
Sep 6, 2020
Messages
2
Reaction score
0
For instance, I'm using a DSI Pro 2 in Cubase 10.

I've created a series of MIDI notes in the Key Editor and drawn in program changes so that the synth flips through a series of patches, creating a cool riff.

In theory, it's awesome. In practice, it's a complete waste of time. Between the DAW freezing, the patches behaving inconsistently even without changing the automation lane settings, and the patches being altered when adding new tracks... it would seem that this isn't the purpose of program change automation.

It's the same problem using a VST and/or any other hardware synth.

How do you use Program Changes?

Thanks for any insight.
 

SeaGtGruff

I meant to play that note!
Moderator
Joined
Jun 6, 2014
Messages
3,667
Reaction score
1,558
Normally you would change the program (patch/preset/voice/tone/timbre) and then leave it there for a while, and not change it again until you need to. In other words, it's meant to be used sparingly in that particular context.

Standard MIDI files and devices are frequently limited to using no more than 16 different voices at any given time, 1 voice per channel. (I tend to say "voices" because that's what Yamaha calls them and I own Yamahas, but Casio calls them "tones," and other manufacturers may have their own preferred term; MIDI calls them "programs," and the usual term for modular synths is "patches," but the most musically-correct term is "timbres.")

Some keyboards have multiple sets of MIDI ports, allowing them to use more than 16 different voices at once-- 16 per port-- and some synths are limited to fewer than 16 voices at once. Still, 16 voices at once could be called the "standard limit," since a MIDI file is normally limited to 16 channels at once, and most MIDI hardware and software is designed to use 1 set of MIDI ports.

By the way, there is a MIDI Meta-Event message that lets you select a different MIDI port, so it's actually possible to create MIDI files that use more than 16 channels at once, although it's still normal to stick with 16 channels so the MIDI files will work on keyboards that have only 1 set of MIDI ports.
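For the curious, that Port meta-event has a fixed byte layout in a Standard MIDI File: FF 21 01 pp, where pp is the port number. Here's a minimal sketch in plain Python (the function name is mine, and the surrounding delta-time handling of a real track is omitted):

```python
def midi_port_event(port: int) -> bytes:
    """Build a MIDI Port meta-event (FF 21 01 pp) for the given port."""
    if not 0 <= port <= 127:
        raise ValueError("port must be 0-127")
    return bytes([0xFF, 0x21, 0x01, port])

# Track events after this meta-event are routed to the selected port,
# so a file can address 16 channels per port.
print(midi_port_event(1).hex())  # prints 'ff210101'
```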

Anyway, if you want to use more than 16 different voices in a song but are limited to 16 channels or parts, you can switch the voices used by some of the parts at some point during the song, comparable to the members of a band putting down or stepping away from their initial selection of instruments and switching to a different selection of instruments. For instance, the guitarist might start out with a particular electric guitar, then switch to an acoustic guitar for a while, then switch to a different electric guitar, then switch to a mandolin, etc. The drummer might move to a different drum set, or to a marimba, or a set of gongs, etc. And the keyboardist-- who is of course the most important and most talented member of the band and the only person we really care anything about (just kidding!)-- might start by playing an electric piano and organ at the same time, then switch to playing an acoustic piano and synth at the same time, then switch to playing a melodica and an organ pedalboard whilst also juggling a set of razor-sharp knives and dancing a jig, etc.

So if you're creating a 16-channel MIDI file and want to simulate that sort of thing, you can start by setting each channel to a specific voice (bank and program), then at some point you can use Bank Select and Program Change messages to set the channels to different voices for a while, and so on. As I think you've noticed, the switching can take a moment to finish, so the normal practice is to issue the necessary Bank Select and Program Change events, then wait a certain number of MIDI ticks (as recommended by the manufacturer of your specific MIDI hardware or software) before trying to play any Note events with the new set of voices.
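The ordering described above can be sketched in raw MIDI status bytes (plain Python for illustration; the 480-tick wait is a made-up placeholder, not a manufacturer-recommended value-- check your synth's documentation for the real figure):

```python
def switch_voice(channel: int, bank_msb: int, bank_lsb: int, program: int):
    """Return (delta_ticks, raw_bytes) events for one voice switch."""
    return [
        (0, bytes([0xB0 | channel, 0x00, bank_msb])),  # CC 0: Bank Select MSB
        (0, bytes([0xB0 | channel, 0x20, bank_lsb])),  # CC 32: Bank Select LSB
        (0, bytes([0xC0 | channel, program])),         # Program Change
    ]

events = switch_voice(channel=0, bank_msb=0, bank_lsb=1, program=42)
# Wait some ticks (480 here is only a placeholder) before the first note
# so the synth has time to finish loading the new voice:
events.append((480, bytes([0x90, 60, 100])))  # Note On, middle C, velocity 100
```

The key point is the order: both Bank Select messages must arrive before the Program Change, because the bank isn't actually applied until a Program Change is received.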

Now, you might have noticed that I briefly used the term "parts" a couple of paragraphs back. Analog and virtual analog synths normally use "oscillators" to generate their basic sounds, whereas samplers and ROMplers normally use "tone generators" that play sound samples. On digital synths and arranger keyboards, the operating systems are programmed to handle a certain number of sounds at a time, and each of these is called a "part," similar to how music is written for different parts-- the first violinist's part, the second violinist's part, the soprano singer's part, the tenor's part, etc. Parts aren't the same thing as channels, but channels and parts are closely related to each other because you normally assign a given part to transmit its MIDI events over a given MIDI Out channel, or conversely you might assign a given part to receive MIDI events from a given MIDI In channel.

I apologize for going off on parallels and tangents, but I hope you got the basic idea about how and why a Program Change event might be added to the automation lane in a DAW. Of course, just as a 16-channel MIDI file would normally set up a separate channel for each voice needed, and wouldn't get into the technique of switching voices on those channels unless there was a need for more than 16 different voices in the song as a whole, you would normally set up a separate track in the DAW for each voice you want to use, rather than switching between 2 or more voices on a given track. But if you've imported a MIDI file into the DAW, or plan to export the DAW's tracks to a MIDI file afterward, then you might want to set up no more than 16 tracks-- 1 track per channel-- and use the automation lanes to switch voices from time to time as needed.
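To illustrate that per-track switching, here is a hypothetical event list for one channel-- absolute tick values and program numbers are invented for the example:

```python
CH = 0  # MIDI channel number, 0-15

# Absolute-tick timeline for one channel: play with one voice,
# then switch to another voice partway through the song.
track = [
    (0,    bytes([0xC0 | CH, 4])),        # Program Change: first voice
    (0,    bytes([0x90 | CH, 60, 100])),  # Note On (middle C)
    (480,  bytes([0x80 | CH, 60, 0])),    # Note Off
    (960,  bytes([0xC0 | CH, 25])),       # Program Change: new voice
    (1440, bytes([0x90 | CH, 64, 100])),  # Note On with the new voice
    (1920, bytes([0x80 | CH, 64, 0])),    # Note Off
]
```

Note the gap between the Program Change at tick 960 and the next Note On at tick 1440-- that's the "wait a moment before playing" practice from earlier, expressed on the timeline.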
 

ESD

Thank you Michael, appreciate it! Gonna take a few reads to absorb the material.
 
