Ask Mr. Science

Lately I've been giving a lot of seminars, and during the question and answer segment, sequencing-related questions often crop up--which seems like an excellent reason to devote a column to some common inquiries.

I play music that doesn't follow a fixed time, yet I'd like to use a sequencer to control effects and automated mixdown moves in the studio, as well as do a few overdubs of sampled sound effects and such. I don't want to play to a click track. Any solutions? I've heard of boxes that follow whatever beat you give them and create a MIDI sync track.

A. Although such boxes exist, they're not necessary for this application. Assuming you have a device that can generate MIDI data (JL Cooper FaderMaster, Russ Jones Automation Station, controllers on a keyboard, etc.), stripe the tape with sync code that your sequencer can follow. Set the sequencer to a moderate tempo (e.g., 120 BPM), roll tape, and record your mixing or signal processing moves into the sequencer. Turn off the sequencer's metronome so you don't drive yourself nuts hearing clicks that are out of time with the music.

In this mode, the sequencer acts as an "event recorder"--when you move a fader, the sequencer records it, and always plays the move back at that point on the tape. Don't be concerned that in theory, changes may not happen exactly on the beat; 480 ppqn resolution at a tempo of 120 BPM gives a timing resolution of around 1 ms. Since an event can be recorded every millisecond, any move will never be more than 0.5 ms off the beat. If you can move a fader fast enough to where that matters, let's get together and talk about how life on Krypton was before you moved to earth.
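If you want to check the math yourself, here it is in a couple of lines of Python, using the numbers from the answer above:

    # Tick duration at 480 ppqn and 120 BPM: 60,000 ms per minute,
    # divided by beats per minute, divided by ticks per quarter note.
    ppqn, bpm = 480, 120
    ms_per_tick = 60_000 / bpm / ppqn   # about 1.04 ms; worst case is half that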

I do a lot of 30 and 60 second commercials that combine taped acoustic tracks with virtual MIDI instruments. Recently, some of my SMPTE track became corrupted and I lost about 10 seconds in the middle of a spot; the sequencer goes crazy before re-synching. I don't have gear to regenerate the SMPTE track. Help!

A. Just re-record a new SMPTE track over the old one. Most modern tape recorders have sufficient timing stability that you shouldn't notice any short-term drift. However, you'll almost certainly need to change the sequence's SMPTE start point to have the virtual tracks match up with the acoustic ones.

If you can only change the start point in whole frames (i.e., you can't use subframes), synching to the new SMPTE track could create timing differences as great as 15 ms between the virtual and acoustic tracks. If this is a problem, set the sequence start point slightly ahead of the beat, and use a digital delay line to delay the SMPTE track on its way to your sequencer or MIDI interface. Adjust the delay time until the virtual and acoustic tracks line up perfectly.
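The worst case is easy to work out; this quick Python calculation assumes 30 fps non-drop SMPTE, which lands in the same ballpark as the 15 ms figure above:

    fps = 30
    ms_per_frame = 1000 / fps        # about 33.3 ms per frame
    worst_case = ms_per_frame / 2    # about 16.7 ms when rounding to a whole frame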

When I record a sequence in my keyboard's onboard sequencer, I can hardly record more than a couple of verses before I get a "memory full" indication. Yet I haven't played as many notes as the spec sheet says the sequencer is capable of recording. What's wrong?

A. Make sure that aftertouch (pressure) is disabled. If you play the keys hard and generate lots of aftertouch as you record a track, those events fill up memory rapidly. Remember, a note requires only two events--note on and note off. Aftertouch generates a stream of events for as long as you're pressing on the keys; polyphonic aftertouch generates even more. Other controllers, such as master volume, modulation, and pitch bend, also send lots of data but are harder to generate accidentally.

This tip also applies to computer-based sequencers. Even though you may not run out of memory, too much pressure data can clog the data stream and interfere with timing.
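If your sequencer can export a Standard MIDI File, you can also strip pressure data after the fact. Here's a minimal sketch using the third-party Python library mido (the filenames are made up); it removes channel and polyphonic aftertouch while preserving the timing of everything that remains:

    import mido

    def strip_aftertouch(infile="song.mid", outfile="song_lean.mid"):
        mid = mido.MidiFile(infile)
        for track in mid.tracks:
            kept, carry = [], 0
            for msg in track:
                if msg.type in ("aftertouch", "polytouch"):
                    carry += msg.time  # hand the skipped delta time to the next event
                else:
                    kept.append(msg.copy(time=msg.time + carry))
                    carry = 0
            track[:] = kept
        mid.save(outfile)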

Speaking of on-board sequencers, I use a sampler "workstation" and sometimes I run out of memory with short sequences, but other times I can record really long pieces. I don't get it.

A. Samplers often trade off sample memory for sequencer memory. The more sounds you have loaded and the more memory they take up, the less memory is available for sequencing.

How can I sequence realistic-sounding guitar parts?

A. The easiest way is to find a MIDI guitar player! Although you will probably need to spend time cleaning up glitches, and possibly shifting individual tracks (or even notes) to compensate for timing problems, a real guitarist is your best bet.

If you must sequence guitar parts from a keyboard, check out the accompanying article on creating realistic guitar parts.

Finally, don't forget that sending a keyboard's guitar part through a guitar processor or guitar amp can also work wonders in getting realistic sounds.

(From a manufacturer's representative) I really resent your making a big deal about small timing differences, like 10 ms and 20 ms. So what if a keyboard is 10 or 20 ms late in responding to data? Ringo Starr never hit things right on the beat, and that didn't seem to bother people. I don't see any reason why you have to make people feel bad about the gear they bought by pointing out that it may be a little late in responding to data, especially since this doesn't make any difference anyway.

A. I strongly disagree that musicians can't discriminate between subtle timing differences. As pointed out in my January 1992 column, most musicians I've talked to consider machines with fast rising envelopes as "punchy," even if they're not aware of timing specifics. The reason I became interested in timing was because it was audibly obvious that something strange was going on with the timing of sequenced instruments, and it seemed like a good idea to track down the source of these anomalies.

Granted, many musicians don't play exactly on the beat, but many times that is a musical decision, not random time-shifting. (In fact, WC Music Research has analyzed drum patterns for different styles of music, and offers companion disks for the Cubase and Performer sequencers that include various "groove templates.") Musicians may take liberties with the beat, but they expect that playing a note will cause it to sound at that instant--witness how many guitarists are disturbed by pitch-to-MIDI-based guitar synthesizers because the low E typically plays about 20 milliseconds late. Any guitarist can hear that amount of delay; some guitarists can hear, and object to, even shorter delays.

Many people are aware that due to technological limits, keyboards cannot respond instantly to incoming MIDI data. If manufacturers won't provide timing specs, then someone else should. Far from being an attempt to make people "feel bad" about their equipment, the object is to help people get various pieces of gear working together properly so they don't encounter frustration or disappointment. Once we know what the problems are, we can figure out how to overcome them without wasting time on trial-and-error approaches.

EDITING A LA CARTE

Piloting a sequencer these days can be pretty daunting, so anything that simplifies matters is welcome. Conditional editing--applying editing operations only to notes meeting certain conditions--can speed up the sequencing process and make for more expressive edits.

Different manufacturers call conditional editing by different names (such as logical editing, change filter, selection filter, split notes, etc.). But these all have the same basic purpose: set up note criteria (such as pitch, velocity, placement within a measure, and the like) to which editing operations--cut, transpose, quantize, etc.--then apply.
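To make the concept concrete, here's a minimal sketch of conditional editing in Python; the Note fields and helper names are made up for illustration, not taken from any particular sequencer. The example condition and operation implement the backbeat boost described under "Rhythm Section Reinforcement" below:

    from dataclasses import dataclass

    PPQN = 480  # ticks per quarter note

    @dataclass
    class Note:
        start: int      # position in ticks
        pitch: int
        velocity: int
        duration: int

    def beat_in_measure(note, beats_per_measure=4):
        return (note.start // PPQN) % beats_per_measure + 1  # beats 1 through 4

    def edit_if(notes, condition, operation):
        for n in notes:
            if condition(n):
                operation(n)

    # Boost velocity on beats 2 and 4 of two measures of straight quarter notes.
    def boost(n):
        n.velocity = min(127, n.velocity + 15)

    drums = [Note(i * PPQN, 38, 90, 60) for i in range(8)]
    edit_if(drums, lambda n: beat_in_measure(n) in (2, 4), boost)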

There's a great scene in the movie Amadeus where the king tells Mozart that a piece has "too many notes," at which point Mozart asks sarcastically which ones he should take out. If Mozart had been able to use conditional editing, he could have taken out every nth note, run off a test cassette, pleased the king, and gotten a big fat commission to write a whole bunch of nifty symphonies. Instead, he died a pauper at a young age.

Coincidence? Maybe...but just to be on the safe side, read on for vital information on what conditional editing can do.

Conditional Editing Dialects

Different sequencer manufacturers handle conditional editing dialog boxes in different ways. Vision's functions are fairly representative, although other programs (such as Cubase, Pro 5, and Beyond) tend to combine these functions in a single box. The top box appears if you check "with metrical placement" on the main box, and the third box down appears when you specify "bracketing events" (events that determine when the criteria apply).

Figure 1.

The bottom screen, split notes, is a separate edit menu item--Performer 3.6 has something quite similar, which in conjunction with other edit menus, allows for a variety of conditional edits.

In the course of using conditional editing, you'll probably develop personal routines for particular tasks. Here are some that I've found useful.

Rhythm Section Reinforcement

Use velocity conditional editing to increase the levels on the 2nd and 4th beats of drum parts, and the 1st and 3rd beats of bass parts. For dance music, you can even just slam both sets of values up to their max.

More Humanized Quantization

Quantizing only notes that fall near quarter notes can tighten up music while retaining a fair amount of the "human touch" in between beats. If the non-quantized notes are obviously out of rhythm, try adding quantization at less than 100% strength (for more information on this and other ways to humanize quantization, see the February 1991 column).
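As a sketch of how this might work under the hood (the window and strength values are arbitrary starting points, and the tick math assumes 480 ppqn):

    PPQN, WINDOW, STRENGTH = 480, 40, 0.6  # ticks, ticks, 60% strength

    def quantize_near_quarters(starts):
        out = []
        for t in starts:
            grid = round(t / PPQN) * PPQN        # nearest quarter note
            if abs(t - grid) <= WINDOW:          # only notes already close to the beat
                t = round(t + (grid - t) * STRENGTH)
            out.append(t)
        return out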

Precision Percussion Accenting

To tone down an overly-busy hi hat part, it's easy to take (for example) a 16th note part and strip all notes that don't fall on off-beat eighth notes to end up with an eighth note hi hat part.

Dynamic Percussion

It's often hard to play electronic shaker, cabasa, tambourine, conga, and other percussion with the wide dynamic range most acoustic players use. Conditional editing can help: lower the velocities of all notes that don't fall on quarter notes, then lower the velocities of all notes that don't fall on quarter notes or eighth notes (try lowering velocities by 20% initially; adjust the percentage to taste). So, quarter notes will tend to be louder, eighth notes will tend to be softer, and the remaining notes will be softer still. This exaggerates the dynamics, but rhythmically.
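Here's a minimal Python sketch of the two-pass edit, assuming 480 ppqn and note data boiled down to (start, velocity) pairs; the 20% reduction is the starting point suggested above:

    PPQN = 480

    def tier_velocities(notes, amount=0.8):      # notes: [(start_tick, velocity), ...]
        out = []
        for start, vel in notes:
            if start % PPQN:                     # pass 1: not on a quarter note
                vel = int(vel * amount)
            if start % (PPQN // 2):              # pass 2: not on an eighth note either
                vel = int(vel * amount)
            out.append((start, max(1, vel)))
        return out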

Selective Ups And Downs

To simulate right and left hands hitting a drum, set up two slight variations on the same drum sound, and assign them to different pitches. Transpose every other note in a drum part to the second pad's pitch.

A variation works well for "octave-bouncing" bass parts; simply transpose only those notes falling between beats. For example, suppose you step-enter an eighth-note bass part where each note plays the root of the current chord. Transpose all notes that don't fall on quarter notes, and you'll have a vintage disco/funk-style bass line.
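In code terms the edit is nearly a one-liner; this sketch assumes 480 ppqn and transposes the off-beat notes up an octave (the direction is a matter of taste):

    PPQN = 480
    bass = [(i * PPQN // 2, 36) for i in range(16)]  # straight eighth-note roots
    bounced = [(t, p + 12 if t % PPQN else p) for t, p in bass]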

First Measure Note Duration

For some parts, increasing the duration of a measure's first note gives more of an accent to the rhythm. This seems particularly applicable to bass.

Stupid Analog Synth Pet Tricks

One feature I like with analog synths is varying the envelope decay in real time to change the duration. Although many synths won't let you do this digitally, there is a workaround. For example, suppose you want to increase the amplitude envelope decay on the third beat of each measure. To simulate a similar effect, use logical editing to increase the duration of only the third beat (decreasing the duration on other beats can also add interest).

Selective "Feel"

If you're into altering note placement to control a part's "feel," conditional editing can do such tricks as delay a snare by a few clocks on only the 2nd and 4th beats, or rush the tempo a bit by advancing only the first hi hat hit of each measure.

Salvaging Drum Parts

Once I was presented with drum sequences generated on a drum machine that lacked velocity-sensitive buttons; it was my job to transfer these over to Performer. The parts were good, but the lack of dynamics was a real problem. A combination of conditional editing to create dynamic patterns, then a slight touch of randomization (velocity and start time), helped add a little more variety.

Strumming Along

Sometimes it's fun to break up block chords into strums. Use conditional editing to slightly delay notes above a certain pitch, and advance notes below a certain pitch. If your chord voicings cover a real wide range, you may have to do this a few measures at a time instead of processing an entire track.
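A minimal sketch of the idea, with an arbitrary split point and spread:

    SPLIT, SPREAD = 60, 15            # split at middle C, move notes by 15 ticks

    def strum(chord):                 # chord: [(start_tick, pitch), ...]
        return [(t + SPREAD if p > SPLIT else t - SPREAD, p) for t, p in chord]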

Solving The Mozart Problem

And of course, if some king comes along and says you have too many notes, conditional editing is exactly what you need.

SEQUENCING TECHNO/RAVE LEADS

Techno music doesn't have "leads" in the traditional sense of screaming guitars and vocals; usually, they are sampled sound snippets whose source can be anything from political speeches to old movies. Trying to find appropriate samples is only one task--the other is laying them into the tune's rhythmic bed. The samples seldom have inflections that match the music's rhythms, which can be distracting. Some musicians attack this problem at the sampler itself, by breaking phrases down into individual samples and triggering different words from different keys at the desired rhythms. However, there are a lot of sequencer tricks that can produce similar effects with less effort.

A Real-World Example

A friend recently turned me on to a grade Z sci-fi movie, "Invisible Invaders," which turned out to be a gold mine of samples. The premise is that earth's invaders can only be destroyed with sound waves. One sample, "Sound is the answer," became the song's title. Other samples were: "I asked you a question," "The answer is in sound," "The device must have used sonic rays," "If you think sound is the answer," "Sound vibrations," and "Only two theories seem to make any sense." Some of these are fairly long, and at 135 BPM, I wanted to have the words line up with the rhythms as much as possible, and also mutate the samples into other things. Here are some tricks that worked for me.

Sample Truncation Within The Sequence

This is easy: just shorten the note's duration. For example, I wanted to follow "The device must have used sonic rays" with "The device must have used sound vibrations." Rather than cut and paste in the sampler, I simply shortened the note for the first sample so that it ended after "...must have used," then added a note for the "sound vibrations" sample immediately after to create the composite sentence.

Truncating to extremely short times gives nifty percussive effects that sound very primitive and guttural. Generally, I map a bunch of samples across the keyboard as a multisample so that each sample covers at least a fifth, making different pitches available. Playing several notes at the desired rhythm, and setting their durations to 30-50 ms, gives the desired effect. This works best with sounds that have fairly abrupt beginnings; a word such as "whether" has an attack time that lasts longer than 30-50 ms.

As one example, I wanted a series of eighth-note "ohs." Triggering "only two theories seem to make any sense" with a note just long enough to play the "o" from "only" did the job.

Setting Sample Start Time Within The Sequence

What if you want to play back the last part of a sample rather than the beginning? This is a little trickier. Put a controller 7 = 127 (maximum volume) message where you want the phrase to start in the sequence, and a controller 7 = 0 message somewhere before that. Jog the note start time so that the controller 7 = 127 message occurs right before the section of the phrase you want to hear (Fig. 2).

Figure 2.

Note that in a multisampled keyboard setup, this will affect any other samples that are sounding at the same time. To fix this, set up the different samples multitimbrally.
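For the curious, here's what the event list boils down to, sketched with the third-party Python library mido; the tick values are made up, and time is in delta ticks:

    import mido

    track = mido.MidiTrack()
    track.append(mido.Message("control_change", control=7, value=0, time=0))
    track.append(mido.Message("note_on", note=60, velocity=100, time=0))  # starts silent
    track.append(mido.Message("control_change", control=7, value=127, time=240))  # audible here
    track.append(mido.Message("note_off", note=60, time=960))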

The Early Bird Catches The Ear

It seems that many samples work best if they're nudged forward in the track so that they start just a bit ahead of the beat. This is probably because some sounds take a while to get up to speed (like the "w" sounds mentioned earlier). Another factor might be that the ear processes data on a "first come, first served" basis. Placing the sample before the beat gives it more importance than the sounds that follow it right on the beat.

Creating Weird Doubling Effects

If a sample covers a range of the keyboard rather than just one key, you can play two samples at the same time for groovacious effects. For example, take a note that triggers a sample, copy the note, and transpose it down a half-step. The lower-pitched sample takes longer to play, so move it slightly ahead of the higher-pitched sample. Depending on the start times of the two notes, you'll hear echo, flanging, and/or chorusing effects. If they start and end with about the same amount of delay, you'll hear a way cool flanging effect in the middle.

Using Pitch Bend To Change Rhythm

If a sample works perfectly except that you need to shorten or lengthen a single word, no problem--apply pitch bend to just one portion of the phrase. Bend pitch down to lengthen, bend up to shorten. This can also add some fun, goofy effects if taken to an extreme.

Figure 3 shows this technique applied to several notes. The first note rises in pitch (thus shortening the sample), whereas the fourth and fifth notes are bent downward to lengthen the sample. The right-most note shortens the beginning, lengthens the middle for emphasis, and shortens the end.

Figure 3.

Combining all these tricks means you can lay samples into the track that sound as if they were cut specifically for your tune. If you're into rave and techno music, check out these techniques--they take a little work, but they really make a difference.

Fun With Dance Music Sequencing

It's time for some fun. Forget about timing jitters in your computer and tweaking every last little piece of velocity data--set the controls for a four-on-the-floor kick drum, and let's groove. Along the way we'll look into "subtractive sequencing," an interesting approach to making sequenced dance music.

How Dance Music Works

Most cutting edge dance music, such as house and techno/rave, is essentially an audio collage. To create a finished composition, the person making the music will mix looped rhythms from CDs or samplers, unusual samples for leads, snippets of various recordings, and live overdubs (usually percussion, but sometimes melodic or rhythmic instruments). Occasional "ambient" periods, such as sustained chords, and periodic breakdowns provide the space to cue up the next batch of sounds and rhythms. The engine driving all this is usually a very loud kick drum playing at a steady quarter-note rhythm. Some musicians look down on this way of making music, but creating a great collage is as difficult as creating a great painting (try it sometime).

Techno/rave music adds a few specific elements: bloops and bleeps straight out of sci-fi movies, along with an attitude probably best described by Keyboard staffer Bob Doerschuk as "punk for people with pocket calculators."

The Sequencer Connection

However, what if you're the kind of person who likes to play their own music rather than use canned loops? Being one of those people, I wanted to figure out how to translate this kind of music into a sequenced environment, which brings us to "subtractive sequencing." The object is to combine conventional playing techniques within a framework of creating a collage of repetitive elements.

Begin the process by recording, for example, 16 sequenced measures. Pack these with tracks--I often end up with 20 to 30 tracks of trap drum sounds, percussion, bass, pads, leads, rhythmic figures, melodic fragments, and so on. The percussion and bass parts should groove like crazy. (If you want to move around as you listen, you know you're on the right track.)

Take those 16 measures and copy them until you have about four or five minutes' worth of music. The end result should be the densest pile of data ever to grace your sequencer's RAM.

Less Is More

Now comes the subtractive part. Some people compare sequencing to oil painting, where you start with a blank canvas and add elements until the work is complete. Subtractive sequencing is more like sculpture: hack away at the tracks and remove extraneous material until a tune takes shape.

For example, start off by cutting out everything except the pads for the first few measures. Then cut out everything except pads and percussion (leaving in the trap drums) for the next several measures. Now delete the pads and bring in the bass to establish a drums and bass groove. Next, serve up some rhythmic bloops and bleeps to accent the rhythm. When things get too repetitive, take out everything except for the kick drum and re-introduce the pads.

Once you've whittled away at your sculpture, add your "leads," which will usually be provocative samples. These provide highlights and contrasts that prevent the groove from getting boring. Let your imagination fly on this part, but don't clutter things up too much--the beat is always foremost.

At this point, you'll probably want to go back and hack away more of the sequence to accommodate the samples. The first time an important sample motif appears, try taking out everything (except possibly leaving in the kick drum) to give the sample extra emphasis. During parts where the piece is really grooving along and contains all the percussion and drum parts, try removing just the percussion for a few measures while the sample appears. Keep going back and editing ruthlessly until you create a varied, yet repetitive, piece. Sometimes you may want to replace sections of a track with a new part to keep things interesting.

The screen dump shows a typical techno tune structure as displayed in Metro's song editor; it used to be all solid black lines, but note the gaps in the music where material has been cut away (incidentally, after doing this screen dump, the sequence got whittled down even further).

Specific Tips

Trap drums anchor the beat, while percussion is often double-time. For bass parts, I sometimes add echoes transposed up an octave (as recommended in last issue's column) to give more of a hyper feel. Samples can be edited so different fragments of a single sample appear on different keys, which makes for easy phrase rearrangement.

A final piece of advice: don't obsess over stuff. Quantize and step time everything (yes, I have said at some seminars that quantization is a tool of the devil, but it does have its place and dance music is one such place). Slash and burn your way through the sequence; destroy entire groups of measures in a single bound. Save often under different file names just in case you trash something you wanted to keep.

Well, that's about enough for now. Try making some sequenced collage dance music--it can be a blast!

JITTER BUGS AND COMPUTERS

Ever get the feeling that sometimes what you get out of a sequencer isn't exactly what you put in? The timing jitter of MIDI-based gear is audible and measurable, but the question is, where do these inconsistencies come from--limited MIDI bandwidth? The computer itself? The way the software is written? MIDI instrument delay?

The answer is all of the above, but that doesn't do much good when you're on a session, things don't sound right, and you need to track down the culprit. After researching the synth delays in my setup, it seemed like a good time to look at what variables the computer platform itself contributes.

Although this article was written in 1992, long before such wonders as the Power Mac and Pentium PCs were unleashed on the world, many of the same principles still apply.

The Test Subject

The computer under test was a Mac IIci, a 68030-based workhorse in the upper middle part of the Macintosh computer food chain--not as fast as the fx or Quadra, but better than a Classic or LC. One of the reasons for not using slower machines (like the Plus or SE) is that many software manufacturers discourage using these for serious music applications. Besides, we can all hear the stutterings of slower computers; what interested me is whether a so-called "fast" computer would also show timing jitters.

However, the computer does not exist in isolation since there's also the interface (in this case, MOTU's MIDI Time Piece) and software (Passport's Pro 5 and System 6.0.7). Taking a systems approach has the advantage of being real-world, but the disadvantage of making it more difficult to isolate where specific problems occur (which element is causing the problem, or are they all interacting strangely?). So, consider these tests as not being Consumer Reports stuff, but rather, just an attempt to see if any conclusions can be drawn about what makes for the stablest platform.

The Test Setup

The test sequence consisted of two measures of quantized 16th notes (1/32nd note in duration) on MIDI channel 1, along with pitch bend data every two ticks and controller 7 data every tick (Pro 5 runs at 240 ticks per quarter note resolution). This seemed like a good way to build up the MIDI data density without getting into putting data into other channels. Tempo was 127. When using MTC, I generated SMPTE from the Prosonus Code Disk CD.
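To make the test conditions concrete, here's a sketch that builds an equivalent sequence with the third-party Python library mido; the resolution, tempo, and data rates match the description above, while the note number and filename are arbitrary:

    import mido

    PPQN, BEATS, NOTE = 240, 8, 60           # two measures of 4/4
    events = []                              # (absolute_tick, message)

    for tick in range(BEATS * PPQN):
        events.append((tick, mido.Message("control_change", control=7, value=127)))
        if tick % 2 == 0:                    # pitch bend every two ticks
            events.append((tick, mido.Message("pitchwheel", pitch=0)))
        if tick % (PPQN // 4) == 0:          # a 16th note every 60 ticks...
            events.append((tick, mido.Message("note_on", note=NOTE, velocity=100)))
            events.append((tick + PPQN // 8, mido.Message("note_off", note=NOTE)))  # ...1/32 long

    track = mido.MidiTrack()
    track.append(mido.MetaMessage("set_tempo", tempo=mido.bpm2tempo(127)))
    last = 0
    for tick, msg in sorted(events, key=lambda e: e[0]):
        track.append(msg.copy(time=tick - last))  # convert absolute ticks to delta time
        last = tick

    mid = mido.MidiFile(ticks_per_beat=PPQN)
    mid.tracks.append(track)
    mid.save("jitter_test.mid")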

To test for jitter, I set up the various test conditions shown in Table 1 and played the sequence into an HR-16 used as a tone module; of everything in my studio, this device has the most consistent (and smallest) amount of MIDI reception delay. I recorded the HR-16 triangle output onto DAT, then transferred the DAT data into Sound Tools. Measuring the time interval between each triangle hit indicated the degree of consistency.

I ran tests for three days straight on a variety of different things--sequences with just notes, with various controllers added, with INITs and CDEVs removed from the system folder and INITs and CDEVs left in, etc. As expected, the more stuff you cram down the MIDI line, the more variations you'll have--no news there. Nor did having INITs and CDEVs present or absent seem to make much difference, except in the case of MIDI Manager (more on this later). So, for the tests I settled on using the one particular test sequence mentioned above.

Test Results

Figs. 1-8 chart the timing variations for the final series of tests. In the table, MTC-m means MTC received through the modem port and MTC-p through the printer port. "Manager" refers to whether MIDI Manager was in use. Under "Finder," M indicates MultiFinder, followed by the number of programs open at one time (the other programs were Word 4.0 and HyperCard 2.0); F indicates the regular Finder.

In the graphs, the horizontal axis shows each sixteenth-note "hit," starting with the second of the series. The vertical axis shows the time it took to get to that hit from the previous hit. For example, in Fig. 1, it took a little under 117.5 ms to go from the first hit to the second, about 118 ms to go from the second hit to the third, around 117.7 ms to go from the third hit to the fourth, and so on. The largest deviations are in Test 8, where it took 120 ms to get from the tenth to the 11th hit, and 116.2 ms to get from the 12th hit to the 13th.

The following table summarizes the test conditions.

Test#   Clock   MIDI Manager?   Finder?
1       Int     No              M-1
2       Int     No              M-3
3       MTC-m   No              M-1
4       MTC-m   No              M-3
5       MTC-p   No              MF-1
6       Int     No              F
7       MTC-m   No              F
8       MTC-p   Yes             MF-1

Figure 4.

Tests 1 & 2 and 3 & 4 show that having several programs open under MultiFinder doesn't seem to make much difference in terms of timing stability, although the actual tempo appears to slow down very slightly. In fact, Test 2 implies that having more programs open smooths out the timing. I had a hard time accepting this and ran the same test with two other sequences, with similar results. So if you like to keep a word processor open for jotting down lyrics as you sequence, go for it.

Figure 5.

There also seems to be little difference between running under MultiFinder and Finder. If you overlay the curves for Tests 3 and 7, or Tests 1 and 6, it may be that running under the Finder makes a very, very slight improvement, but it hardly seems significant.

Figure 6.

However, using MTC instead of the internal clock is another story. There does seem to be a bit of an improvement if you dedicate one port to MTC (as is the recommended practice), but still, translating SMPTE-to-MTC seems to be a significant source of timing jitters, at least in this setup.

Figure 7.

The biggest problem, though, is MIDI Manager 2.01. I've never liked using it because I thought I could hear the timing differences it created when run under MTC, and after looking at Test 8, I'm not surprised--the variation in milliseconds is significant, even for a "fast" machine.

Figure 8.

The Bottom Line

With so many things to be concerned about these days, I was relieved to see that the computer variations weren't all that great--the MIDI instruments themselves tend to be the main source of timing errors. Nonetheless, instruments like the EPS 16 Plus and HR/SR-16 exceed the capabilities of this setup; for them, the platform is the limitation.

If you want the most stable timing possible, then I'd recommend against using MIDI Manager. There is some question as to how much further support it will receive from Apple anyway, considering that much of the original design team is no longer there. And if you work with both tape and acoustic or electric instruments, you might consider recording your sequenced tracks (running from the internal clock) in real time on one pass to multiple tape tracks, then lay your overdubs down on tape. You can still synch a sequencer to SMPTE to do fader automation, signal processor program changes, and other functions that aren't as timing-critical as musical parts.

Another approach would be to record just the MIDI drum, bass, and other timing-sensitive parts to tape as the first tracks (again, with the sequencer synced to the internal clock). Then sync the sequencer to tape, and record less critical parts into the sequencer and play only these latter tracks as virtual tracks.

The biggest lesson seems to be that timing errors are cumulative, and come from a variety of sources. If you have instruments with fast MIDI response times, a fast computer that's not loaded down with time hogs like MIDI Manager, and sequences that have been trimmed to be as lean as possible, any timing errors will probably be fairly small. As usual, the more careful you are, the less the odds of getting into trouble.

"De-Synchronizing" Sequencers and DigitalAudio

Just as MIDI redefined the musical landscape in the early '80s, budget digital recording is about to do the same for the '90s. In the process, our perceptions of how to apply sequencing will change.

Sequencing levelled the playing field for musicians on a budget since you could send the outputs of sequenced instruments directly to a master tape to avoid the problems associated with analog multitrack recording. Many musicians were able to produce "CD-quality" master tapes using budget gear and a sequencer.

Yet there's much more to music than MIDI instruments, so various programs let you record acoustic signals on hard disk along with MIDI data. A MIDI sequencer simply isn't enough these days; studios now want to integrate sequencing with digital audio. With items such as the Alesis ADAT, Digidesign Session 8, and four zillion "multimedia" plug-in boards for IBM PCs that combine sequencing and digital recording, it's easier than ever.

That Synching Feeling

However, once you start using different recording media, synchronization becomes an issue--and synchronization has some practical problems. Decoding SMPTE into MTC can produce annoying tempo variations; for music that requires rock-solid timing, running off the sequencer's internal clock generally gives the best results.

Fortunately, digital recording--whether DAT, 8-track tape, hard disk, whatever--offers a "back door" to synchronization that has saved my posterior three times in the past two weeks. The big secret: you don't really need sync in several types of applications. The timing stability of digital devices, *when running from their internal clocks,* is pretty close to perfect.

Salvaging The Demo That Wanted To Be A Master

While songwriting, I wanted to record some guitar parts over a sequence to see what worked. Since I wasn't expecting a final take--the sequence hadn't even been completely tweaked--I just bounced a premix of the sequence onto two tracks of ADAT and started practicing. But then a funny thing happened: I played just the right kind of part the first time around (an extreme rarity). Luckily, ADAT was recording and captured the guitar part, but I assumed I'd have to throw it away because I hadn't synched the sequence to tape.

Eventually I got the sequence just the way I wanted so it was time to redo the guitar part. Then the light bulb went on. I hadn't changed the sequence tempo, so if I could just start it at the same time as the original sequence, perhaps I could fly the new sequence premix to two other tracks.

Since I'm in the habit of recording fairly long pre-rolls (about 15 seconds of a rhythm pattern), I played back the ADAT and immediately upon hearing the first drum hit, started the sequence. The sequence lagged behind ADAT slightly, so slowing ADAT down using the variable pitch control let the parts "catch" each other. I then immediately hit the pitch up/down switches to return the deck to normal pitch.

But here's the killer part: the amount of drift between the new and old premixes was less than what I've sometimes experienced from synching a sequence to tape while recording it, rewinding the tape, and playing the same sequence--synched to the same sync track--onto another track. In other words, parts recorded via tape sync can actually exhibit more drift than these free-synched parts did.

Extending Hard Disk Space For Cheap

Sequence looping lets you record multiple parts, and pick and choose the best. When playing acoustic instruments to a looped sequence, recording the parts on hard disk allows for easy editing and bouncing over to tape, but it's easy to run out of space if you're doing lots of takes and don't have a humongous hard disk.

Try this: Load up a standard DAT with tape, start your looped sequence, and play your acoustic instrument into one DAT channel. Feed a rough premix of the sequenced audio into the other DAT track. You'll end up with lots of takes on the DAT, recorded one right after the other.

To assemble the various pieces onto a multitrack master tape, first record a premix of the sequence into two tracks. Now pick your favorite take from the DAT, and line up the DAT's sequence premix with the multitrack's sequence premix. Again, adjust the multitrack variable speed until the parts sync. When they do, punch into record and capture the part on tape. This even works with analog decks if the part isn't too long.

De-Synchronizing To Video

A recent audio-for-video project required combining narration with several musical tracks. These were all to end up on ADAT, but there was no way to synchronize ADAT to the video.

The video edits were cut to the narration, which had been recorded on DAT. So, the video work tape had the video along with narration on the audio track.

I bounced the DAT narration to one ADAT track. This project required many small sequences, which were "proofed" by playing them against the work tape. When finished, I'd note which word to start the sequence on, and record a finished sequence premix to ADAT. The final ADAT tape included narration and all the sequence mixes; this was mixed down to DAT.

The DAT then went to the local video editing suite, which did not have a SMPTE-synchronizable DAT. So, the DAT was transferred to the Hi-Fi tracks on an S-VHS tape (the "A" machine). The "B" machine had the original video and narration track. The A and B machines were lined up using the narration as a reference, synched together, and the A audio was inserted into the B audio track.

Despite all that flying in of parts, at the end of the 12 minute video, there was less than 20 ms of drift. Cool!

I hope these examples have given you some ideas on how to take advantage of today's ultra-stable gear. Of course, I still use sync about 80% of the time, but it sure is nice to know there's an alternative when unusual situations arise.

MIDI Sequencing: Breaking the 16-Channel Barrier

In the early days of MIDI, 16 channels seemed like enough. After all, the first MIDI synthesizers were often expensive and not capable of multi-timbral operation (i.e., the ability to set up different sounds within one unit to respond to data appearing over different MIDI channels). The idea of someone having, say, 16 Prophet-5s seemed remote enough that 16 channels was assumed to be enough for a basic MIDI setup.

However, nowadays most synthesizers are multi-timbral and gobble up channels faster than you can say "data overload." Furthermore, MIDI commonly controls devices other than synthesizers--such as automated mixers, signal processors, and sundry MIDI widgets. As a result, 16 channels no longer seems like the generous allotment of data it once was. (And don't even think about MIDI guitar, which uses up six channels per mono mode part.) Fortunately, there are several ways to beat the 16-channel limit with computer-based sequencers, and this article describes three main workarounds: multiport MIDI interfaces, squeezing more data out of existing channels, and finally, synching multiple sequencers together.

MULTIPORT INTERFACES

One of the easiest ways is to upgrade your computer's MIDI interface to one with multiple MIDI out ports. These are not just thru outputs but actual independent outputs, each capable of carrying 16 channels of MIDI data.

Unfortunately, different multi-port interfaces are not necessarily compatible with all pieces of software, so you have to be careful to mix and match correctly. If you're interested in a particular program, make sure that it can address a multi-port interface if you want more than 16 channels. Conversely, if you need a MIDI interface, check out which programs will run with it. I generally advise getting hardware/software combinations that were specifically designed to work with each other. Various problems can occur, some subtle and some major, if you try to drive an interface with software that wasn't designed to support it.

For the IBM PC, the Opcode MQX 32 (which offers two outputs and a SMPTE reader/generator and was formerly made by MusicQuest) is the current de facto standard for multiport interfaces. However, Opcode and Mark of the Unicorn provide other multiport interfaces for the PC; and with increased musical activity on this platform, more options will be on the way.

With the Macintosh, most sequencer programs can send MIDI data out to both the modem and printer ports, providing 16 channels per port. The simplest approach is to add a low-cost MIDI interface to each port. However, dual-port interfaces (such as Opcode's Studio 3) are all-in-one boxes with two independent MIDI outs and two independent MIDI ins. Many dual port interfaces also include several "MIDI thru" connectors for the two ports to make it easier to distribute MIDI data to multiple pieces of gear, and often provide some kind of sync-to-tape feature.

Just about every pro-level Mac sequencer I've seen supports dual-port interfaces. For those situations where even 32 channels is not enough, Mark of the Unicorn's MIDI Time Piece II can address 128 independent MIDI channels via eight outputs. What's really amazing is that Time Pieces can be "stacked" to add up to 512 MIDI channels--that should be enough for you power user types. On the Opcode side of things, their Studio 5 interface family provides similar capabilities.

And now, a word for those with "orphan" computers. The Amiga has only one serial port; however, several plug-in boards can add another. Currently, Bars & Pipes supports Checkpoint Technologies' Serial Solution board (this board was designed with MIDI in mind: it provides power over the same pins as the Amiga port, and includes an onboard 31.25 kHz oscillator). Of course, you'll need a second MIDI interface for the additional serial port to take advantage of the extra output. (Note that since MIDI uses a non-standard baud rate, any serial board for MIDI applications must be able to handle data at 31.25 kBaud or higher.)

Atari owners have had several expansion options, starting back in the days when both Sonus and Hybrid Arts offered interfaces that plugged into the modem port and gave additional MIDI outs. C-Lab's Unitor gives two MIDI outs (in addition to the one in the Atari), two MIDI ins, and SMPTE in and out; the Export, also from C-Lab, gives three MIDI outs. Steinberg's SMP-24, which plugs into the Centronics printer port, gives four additional outputs; their Midex interface plugs into the cartridge port and provides two MIDI ins, four MIDI outs, SMPTE in/out, and four spaces for hardware dongles. Anything that runs under M-ROS, Steinberg's multi-tasking operating system, will work with Midex. Those who have a real passion for channels can use a program like Cubase with Midex and the SMP-24 for nine independent outputs and five mergeable MIDI ins.

SQUEEZING MORE DATA OUT OF 16 CHANNELS

If you either can't decide which interface to buy or can't afford a new interface, don't despair: as is usually the case in the world of MIDI sequencing, there's more than one way to solve any given problem. In fact, you can probably add dozens of extra MIDI channels to your existing sequencer using equipment you already have, and not even have to worry about software compatibility problems. For example...

Restricting Note Ranges

Many tone generators, particularly multi-timbral devices and samplers, allow a sound to respond not just to a specific MIDI channel but also a specific MIDI note range. This feature can at least double the effective number of channels.

For example, suppose a bass part covers the notes C1-G2, and a synth flute the range G4-B5. Restricting each instrument to its appropriate note range lets you assign both parts to a single MIDI channel; since neither part has any notes in common, each instrument will play its intended part exclusively. There's no reason to restrict this to two parts: trigger an extra percussive sound on C6, a sound effect on F4, and so on.
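Here's a minimal sketch of the bookkeeping in Python; the note tuples and function names are made up for illustration:

    # Notes as (pitch, start_tick, duration, velocity); pitch numbers assume
    # middle C = 60, so the bass sits around C1-G2 and the flute around G4-B5.
    bass  = [(36, 0, 120, 100), (43, 480, 120, 96)]
    flute = [(67, 0, 960, 80), (74, 960, 960, 84)]

    def ranges_overlap(a, b):
        lo_a, hi_a = min(n[0] for n in a), max(n[0] for n in a)
        lo_b, hi_b = min(n[0] for n in b), max(n[0] for n in b)
        return hi_a >= lo_b and hi_b >= lo_a

    if not ranges_overlap(bass, flute):                    # no notes in common, so...
        merged = sorted(bass + flute, key=lambda n: n[1])  # ...safe to share one channel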

Electronic drums work very well with this channel-packing technique since most modern drum machines let you map individual drum sounds to particular notes. I usually play drum parts from a five-octave keyboard; once the part is recorded, I transpose the drums either below or above the keyboard note range (so that's what notes like C-2 and G8 are good for!) and remap the notes at the drum machine itself to respond to these out-of-range notes. With this approach, you can continue to play away on the keyboard and record more parts knowing that whatever you do will not interfere with the drum part.

There is one caution: for ease of editing, don't place parts that share a channel on the same track. The edit operations in contemporary sequencers usually affect particular tracks rather than particular MIDI channels, so keep your parts separated to simplify the editing process. If you do run out of tracks, then perfect each part that shares a channel prior to mixing the parts together.

Layering Tricks

Another technique, suggested by Keyboard staffer Jim Aikin, involves selective layering of parts. Suppose you want to layer two sounds in some sections of a song, yet in other sections, you want to hear just one sound or the other. The usual way to accomplish this would be to copy the part to be layered to two sequencer tracks, each assigned to its own channel, then erase notes as needed to layer the right parts at the right time.

To do this with only one channel, use a different continuous controller message for each sound to turn it on and off as needed. Typically, one instrument would be controlled by MIDI volume (controller #7), and the other by some other controller, such as breath (controller #2) or pedal (controller #4). Note, however, that this second instrument must allow you to assign its volume parameter to the controller you've chosen. Some synths are "hardwired" so that controller 7 affects volume, but will sometimes allow a "wildcard" MIDI controller as a modulation source along with the usual envelope, LFO, pressure, and other modulators. In this case, use the wildcard controller to vary overall level, and assign it to the appropriate controller number.
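Sketched with the third-party Python library mido, the switching itself is just two controller messages (mido.open_output() with no arguments opens the system's default MIDI port):

    import mido

    out = mido.open_output()  # default MIDI output port

    def layer(volume_a, volume_b, channel=0):
        # Instrument A follows CC7 (volume); instrument B is assumed to have
        # its level parameter assigned to CC2 (breath), per the example above.
        out.send(mido.Message("control_change", channel=channel, control=7, value=volume_a))
        out.send(mido.Message("control_change", channel=channel, control=2, value=volume_b))

    layer(127, 0)    # verse: sound A alone
    layer(127, 127)  # chorus: both layered
    layer(0, 127)    # bridge: sound B alone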

Piggybacking Controllers And Program Changes

Typical MIDI setups now include MIDI-controlled audio signal processors, which will at least change presets in response to program change commands. More sophisticated units may allow real-time parameter control using MIDI controller messages. Both techniques are very powerful, but you may not want to dedicate an entire channel just to control a signal processor.

The trick here is to "piggyback" program changes and controller messages onto a channel that drives an instrument capable of ignoring these messages. For example, drum machines generally don't respond to continuous controllers, so you can assign controller data for a signal processor to the same channel as the drums. (However, also check that your signal processor doesn't respond to note messages, or the drums might end up playing the signal processor parameters...hmmm...then again, that might produce a very cool effect.) Poke around the instrument's MIDI menus, and you may find a page that lets you determine the MIDI data to which the unit will respond.

Different Sequential Parts

It's a given that any medium has limits, and back in the days of tape, recording engineers would often run out of tape tracks. Fortunately, there would usually be situations where a track was used only for a certain portion of a tune--for example, dropping a guitar solo into the middle of a tune. So, if you wanted to add some fabulous intro sound or a little percussion on the fade, all you had to do was locate a track with some empty space, and record the new part into it. This often meant more gymnastics during mixdown--changing level, equalization, etc.--but could significantly increase the number of effective tracks.

The same principle can work with MIDI sequencing, thanks to program changes. When an instrument has finished playing a certain part, issue a program change to alter the patch to something else. I find this particularly useful for adding textural changes and sound effects at the beginning of a song (where most of the instruments haven't come in yet, thus leaving room for other sounds) and toward the end, where a little extra background activity can add interest. You'll probably be able to sequence any mixing and EQ changes (if necessary) using MIDI commands.

All Of The Above

Combining the above techniques multiplies the number of available sounds per 16 channels even further. Pack multiple parts on the same channel, issue program changes, use controllers creatively...you get the idea.

SYNCHING MULTIPLE SEQUENCERS

Here's a way to not only gain virtually unlimited channels, but also minimize MIDI timing problems.

You can expand a MIDI setup way beyond its existing limits by synchronizing additional sequencers, each of which contributes another 16 MIDI channels to your main sequencer. The additional sequencer(s) need not be too elaborate--an inexpensive stand-alone unit will do (I've seen the Alesis MMT-8 sell second-hand for $100). But you may not need a dedicated sequencer at all, since many keyboards include "scratchpad" sequencers with varying degrees of sophistication. While these sometimes have primitive editing facilities, that doesn't really matter in this application since you will generally have done all your editing on the main, computer-based sequencer before transferring the part over to the keyboard sequencer. As a bonus, most keyboard-based sequencers can play external devices via MIDI along with internal sounds.

Requirements

For proper synchronization, the main sequencer should generate song position pointer messages, and the other sequencers should respond to these messages so that you can start the main sequencer anywhere in a song and have the others follow along. Otherwise, you would have to start all sequencers from the beginning of the song each time you played the main sequencer to ensure synchronization.

After song position pointer, the next most important issue is resolution. I find it easier to record into a computer-based sequencer, and when the channels run out, transfer parts over to other sequencers. Hopefully the two sequencers will have the same degree of resolution (pulses per quarter note) since transferring non-quantized parts from a sequencer with higher resolution to one with lower resolution can lead to timing inconsistencies. Unfortunately, most keyboard sequencers have lower resolution than computer-based sequencers so timing vagaries may be a problem.

There are two solutions: one is to transfer only quantized parts to the auxiliary sequencer, the other to quantize the main sequencer part to a value compatible with the auxiliary sequencer. For example, suppose you're transferring a part to a sequencer with 24 pulse per quarter note resolution. This allows for a level of resolution equal to 64th note triplets, so quantize the part being transferred to 64th note triplets. If it sounds acceptable after being quantized, it will sound acceptable when played back on the auxiliary sequencer.
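The conversion is simple enough to sanity-check in a few lines of Python; the tick values here are made up for illustration:

    SRC, DST = 480, 24                   # source and destination pulses per quarter note
    starts = [0, 487, 961, 1435]         # note starts recorded at 480 ppqn

    def to_coarse(tick):
        # Nearest tick at 24 ppqn, i.e., the nearest 64th-note triplet.
        return round(tick * DST / SRC)

    coarse = [to_coarse(t) for t in starts]  # 487 rounds to tick 24, and so on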

Sequence Transfers Vs. Multiple Recordings

Even with quantized parts, when transferring sequences it's common to have a little timing slop in the destination sequence--usually a clock pulse or two either way. To minimize inconsistencies, remember that the slower the tempo when you transfer a part, the more accurate the results. In some cases a little timing slop actually sounds good, but if you run into consistent problems, try re-quantizing a part in the destination sequencer.

It's also good practice to transfer one channel's worth of data at a time and mute any other tracks during the transfer. You may even want to send over note and controller data in different passes. Also, strip out any unused data. For example, even if a patch doesn't respond to aftertouch, your keyboard may still be generating it and your sequencer might be recording it. Don't transfer any more data over to the slave sequencer than is absolutely necessary.

One way to avoid sequence transfer hassles is to sync the keyboard sequencer to the main sequencer while recording, and record your part directly into the keyboard sequencer. I first got into this technique when I went overboard using the poly aftertouch keyboard in the Ensoniq EPS. After pushing kilobytes of data down my computer's throat and watching it choke, I simply recorded the aftertouch-intensive parts into the EPS sequencer. This cleaned up the timing in the main sequencer and gave some additional channels.

Bugs And Gremlins

For some reason, during sequence transfers the first note is often cut off. I always leave a blank measure before a song starts, and this seems to solve the problem (don't use countdowns or precounts; it's not the same thing. You want real blank space). You can always delete the extra measure later after all the tracks have been recorded.

Usually when transferring tracks, I set up the computer sequencer as the master and the keyboard sequencer(s) as the slave(s). However, sometimes this doesn't work as well as making the keyboard the master and synching the computer to it. There's no hard and fast rule; use what works best, and if one approach doesn't work, try something else.

Another caveat is that the keyboard containing the sequencer may play its internal sounds in response to note data coming in over the MIDI line from the main sequencer. Ideally, you want the keyboard sequencer to follow timing data and nothing else. Some keyboards make this easy by letting you limit the types of MIDI data to which the instrument responds; otherwise, set the keyboard to poly mode and the channel to a MIDI channel on the main sequencer that doesn't contain any note data. Since the timing messages are not channel-specific, the sequencer will follow the timing data and since there aren't any notes assigned to the channel, no notes will sound. In a situation like this, it's worth giving up one channel to gain 16 more. Furthermore, you can daisy-chain multiple keyboard sequencers together and set all the keyboards to the same "dummy" channel, thus giving up one channel to obtain 32, 48, or even more channels.

The downside of using multiple sequencers is that your data may end up being saved on multiple disks. On the other hand, if your slave sequencer lets you save the sequence data as sysex messages, you may be able to record these into your main sequencer, thus keeping all your data in one place. In any event, this inconvenience is a small price to pay for having gobs of MIDI channels at your disposal.

What's the lesson to be learned from all this? Be creative, and you can figure out simple solutions to complex problems--like getting as many channels as you want from a spec that, theoretically, allows for only 16 channels.

Managing Multiple Sequencers

There's a whole world beyond the sequencer we know and love--namely, other sequencers. Each sequencer implements a particular vision of how to marry computers and music, and just as there are advantages to being bilingual, learning two sequencers can expand your musical options.

What makes this possible is the Standard MIDI File (SMF), which is great for exchanging files between different computers as well as between different programs running on the same computer. (You export a sequence as an SMF from one sequencer, then import it into a different sequencer.)

Here are some reasons to become familiar with more than one sequencer:

* Use different types of sequencers to create different types of music. Pattern-oriented sequencers are well-suited to repetitive dance and pop music, some sequencers have audio-for-video features such as locking events to SMPTE times, etc. And you may find that the fastest way to record an individual track is with a keyboard sequencer, not a computer-based one.

* Take advantage of special editing features. Some programs are better at controller editing, some at sequencing drum parts, and so on. Besides, it's often little--but crucial--features that make a musician choose one program over another.

* Compatibility with special interfaces and/or digital audio. If your favorite sequencer can't support an interface with multiple MIDI cables or digital audio, import your sequence into a sequencer that does.

* Sync stability. Some sequencers are more stable than others when synched to SMPTE.

* Different revs. This isn't quite the same as using two sequencers, but as programs become more complex and gobble more memory, they often become bloated parodies of the sleek programs that attracted you in the first place. So, start recording with the earlier rev, then move over to the behemoth version for the final touches.

Of course, there are disadvantages to using multiple sequencers: two learning curves, additional $$, more space on your hard disk, and more complex file management since you have to export/import files as SMFs to cross sequencer boundaries. But there are ways to minimize some of these problems.

Shortening The Learning Curve

To get up to speed ASAP on a new sequencer:

* Do a quick read-through of the manual, and use a highlighter pen to keep track of any significantly strange or different features compared to your usual sequencer.

* Import a MIDI file that needs some editing into the sequencer. It's much easier to edit an existing file than to start a new sequence in a new sequencing environment.

* As you need to do different tasks, use the manual's index or table of contents (the latter can often help you find things faster than the index) to find the desired procedure.

* After you've learned the basic functions, read the manual in full to find out about the features and quirks unique to that particular sequencer.

The Keyboard Equivalents Solution

Programs such as QuicKeys for the Mac let you create macros (a sequence of operations, triggered by one or two keystrokes). This is wonderful enough by itself, but can be invaluable when working with multiple sequencers: simply devise a consistent set of keystrokes for common functions in different sequencers.

This sounds easy, but requires some thought if you want a logical, easy-to-remember system that works for sequencers that may have different ways of doing things. (Incidentally, there's a third-party macro program for Performer called Momentum, but it is not compatible with other sequencers.) Two tips:

* Establish a consistent command structure. In my setup, each function key (Fkey) calls up a specific editing function (transposition, velocity, duration, quantization, etc.). Shift-Fkey and Command-Fkey trigger macros that perform two of the most-used related functions (Shift usually increases and Command decreases). Option-Fkey and Control-Fkey trigger two less-used functions. An example:

F8: Brings up transpose menu.
Shift-F8: Triggers a macro that transposes +1 semitone.
Command-F8: Transposes -1 semitone.
Option-F8: Transposes +1 octave.
Control-F8: Transposes -1 octave.

Incidentally, suppose you want to transpose up 2 semitones. It's invariably much faster to hit Shift-F8 twice than it is to hit F8, make changes on the menu, then hit enter.

* Use the numeric keypad. These keys usually aren't assigned to anything else in the program and can come in handy. I use them for track selection: digits 1-0 select tracks 1-10; Shift-1 through 0 select tracks 11-20, Command 1-0 tracks 21-30, and Option 1-0 tracks 31-40.

Gotchas

You may have slight timing anomalies when transferring between sequencers with different resolutions; this is not a problem with quantized parts, so you might want to save your "feel factor" editing until you bounce over to your final sequencer.
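
If you want to see where those anomalies come from, here's a quick Python sketch (the tick values and resolutions are just for illustration):

```python
# Rescaling event times between sequencer resolutions. Quantized parts sit
# on exact grid multiples and convert cleanly; "feel factor" offsets can
# pick up a tick of rounding error on the way over and back.

def convert_ticks(ticks, src_ppq, dst_ppq):
    return round(ticks * dst_ppq / src_ppq)

note = 480 + 7                            # 7 ticks after beat 2 at 480 ppq
moved = convert_ticks(note, 480, 192)     # into a hypothetical 192-ppq sequencer
back = convert_ticks(moved, 192, 480)     # and back again
print(note, moved, back)                  # 487 -> 195 -> 488: one tick of drift
```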

Although SMFs don't yet contain "cable" data for multiport interfaces such as the MIDI Time Piece II or Studio 5, there's a workaround: use template sequences with the same tracks always assigned to the same cables and MIDI channels. Usually the track channel/cable assignments are independent of the track data, so you can import the file, cut all the track data, call up the template file, and paste the track data into the template.

Concerning macros: invoking complex ones sometimes causes problems. For example, I've used a velocity expansion macro (this makes high values higher and low values lower by subtracting 64 from all velocity values, then multiplying by two). Unfortunately, applying this macro to notes with velocities of 64 or less causes them to disappear. But that's what the undo function is for, right?

Also, sometimes complex macros take a while to process, since you may have to add delays so the computer can catch up. Still, I sure find it much easier to hit a couple of keys than to type a bunch of numbers and make multiple mouse movements.

Is It Worth It?

There's much to be said for sticking with one sequencer; multiple sequencers aren't for everybody. The expense and learning curve are not trivial issues. However, don't ignore some of the benefits that using two sequencers can offer. With the tips given above--especially developing a consistent set of macros--you may find that in some applications, the advantages of using two or more sequencers far outweigh the disadvantages.

THE PERILS OF QUANTIZATION

I get to hear a lot of sequenced material. Some of it is for professional reasons, when I'm approached about doing production work; sometimes it's just from friendly readers who are proud enough of their work to send me a copy, or who want some feedback. In the course of listening to all this music I've noticed that while there are many common problems that limit the musical impact of sequences, one of the worst culprits is excessive use of quantization.

Actually, quantization is a somewhat controversial subject. Some people take a "holier than thou" approach to quantization by saying it's for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization (at least while no one's looking!). I feel quantization has its place; it's the ticket to ultra-tight grooves, and a way to let you get something right on the first take, instead of having to play a part over and over again. But like any tool, if misused, quantization can cause more harm than good by giving an overly rigid, inhuman quality to your work.

Trust Your Feelings, Luke

The first thing to remember is that computers make terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly--it's real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that's not a real human way of doing things.

There's a fine line between "making a mistake" and "bending the rhythm to your will." Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances.

When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen "gee, I didn't realize my timing was that bad." But in many cases, the human was right, not the machine. I've played solo lines where notes were off by as much as 50 milliseconds from the beat, yet they sounded perfect. Rule #1: You dance; a computer doesn't. You are therefore much more qualified than a computer to determine what rhythm sounds right.

Why Quantization Should Be The Last Thing You Do

Some people quantize a track as soon as they've finished playing it. Don't! In analyzing unquantized music, you'll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel completely.

Another possible trap occurs if you play a number of unquantized parts and find that some sound "off." The expected solution would be to quantize the parts to the beat, yet the "wrong" parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you'd want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat, the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Rule #2: Don't quantize until lots of parts are down and the relative--not absolute--rhythm of the piece has been established.

Selective Quantization

Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one.

The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization. Very often, what's needed is not quantization per se but merely shifting an offending note's start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Rule #3: If it ain't broke, don't fix it. Quantize only the notes that are off enough to sound wrong.

Bells And Whistles

Modern-day sequencers have many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 clocks ahead of the beat, quantizing to 50% strength would place it 5 clocks ahead of the beat. This smooths out gross timing errors while retaining some of the original part's feel. Some programs offer "feel templates" (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Rule #4: Study your sequencer's manual and learn how to use the more esoteric quantization options.
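
To make the strength math concrete, here's a minimal Python sketch of strength-based quantization (the note times, grid size, and function names are hypothetical, not any particular sequencer's internals):

```python
# Move each note toward the nearest grid point by a percentage rather than
# snapping it all the way--50% strength halves the error, 100% removes it.

def quantize(times, grid, strength):
    out = []
    for t in times:
        target = round(t / grid) * grid          # nearest grid point in ticks
        out.append(round(t + (target - t) * strength))
    return out

hits = [0, 115, 250, 355, 490]                   # sloppy 16ths at 480 ppq (grid = 120)
print(quantize(hits, 120, 0.5))                  # [0, 118, 245, 358, 485]
print(quantize(hits, 120, 1.0))                  # [0, 120, 240, 360, 480]
```

Note how 50% strength leaves each hit halfway between where you played it and the grid--exactly the "10 clocks ahead becomes 5 clocks ahead" behavior described above.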

Experiments In Quantization Strength

Here's an experiment I like to conduct during sequencing seminars to get the point across about quantization strength. First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off. Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected.

Then copy the original track again, but quantize it to a certain strength--say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you'll perceive it as quantized because it sounds right. Finally, play that track and the one quantized to 100% strength back together into the drum machine, and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Rule #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization.

Yes, quantization is a useful tool. But don't use it indiscriminately, or your music may end up sounding stiff and mechanical--which is not a good thing unless, of course, you want it to sound stiff and mechanical!

SEQUENCE "PROOFING"

Many factors can prevent music from sounding "clean" and "airy," such as improper arranging or excessive signal processing. But lurking deep within your sequencer's data stream are other traps that are much more subtle, such as double triggers caused by quantizing two notes so that they land on the same beat, "phantom" controller data that messes up timing, and voice-stealing that abruptly cuts off notes (and may throw off synth timing). These glitches may not be obvious when listening to a group of instruments playing together, but nonetheless detract from the overall quality of a piece by robbing clarity and/or creating timing errors.

Fortunately, sequencers allow for tweaking a recorded track long after the actual recording took place. This lets you "proof" a sequence before committing it to DAT or some other mixdown medium, much like you'd check over text for punctuation and grammar problems before printing it out. Here are some sequence proofing procedures that have worked well for me.

One Track At A Time

Begin by listening to individual tracks in isolation. To establish a good rhythmic "bed" before working on leads and pads, start with drums, then move on through bass and percussive rhythm instruments (e.g., guitar, piano). Check for:

* Unwanted controller data. If your keyboard generates aftertouch (pressure) but a patch isn't programmed to use it, it's easy to record a track with pressure data that serves no purpose other than to take up memory and clog the MIDI data stream. Accidentally brushing against a mod wheel or foot pedal can also generate unneeded data.

Although you can usually enable or disable the sequencer's record filter before recording, in the heat of creative passion this is easy to overlook. "Piano-roll" graphic editing isn't much help in finding stray data; scrolling through an event list (set to view only data other than notes) is the way to go.

* Doubled notes. This is particularly troublesome with drum parts, which are often quantized. If you "bounce" a key when playing a drum note (or if the switch itself bounces), you can end up with two triggers that are close to each other. Quantization will force these notes to hit on the same beat, using up an extra voice and possibly producing a flanged sound. Usually listening to a track in isolation will reveal these flanged notes, at which point you can erase one (as a rule of thumb, if two notes hit on the same beat I erase the one with the lower velocity value). Some programs, such as Beyond, offer a function that deletes duplicates automatically.

* Notes with abnormally low velocities. These are most easily found in an event list editor, but if you're in a hurry, just do a global "remove every note with a velocity of less than 10." This removes notes that may have resulted from brushing your finger against a key (or a glitch from MIDI guitar). You generally can't hear these notes, but they nonetheless use up voices.

* Notes with abnormally short durations. Like extremely low-velocity notes, these are often the product of a playing error and can be deleted individually as you scroll through a list editor. However, a global "nuke everything that's less than 10 ticks long" will usually do the job (see the sketch following this list).

* Too many variations in dynamics. When listening to parts in isolation, it becomes clear if, for example, some kick drum hits are much lower or higher in level than other ones. There are two main remedies: edit individual notes (most accurate, but most time-consuming) or use a track command that sets a minimum or maximum velocity level. With pop music drum parts, I often limit the minimum velocity to around 60 or so.

Limiting works well for taming peaks: simply add a constant to all velocity values. Since velocity can't exceed 127, notes that are already loud get pushed into the ceiling and stay at 127, while softer notes increase by the full amount of the constant. This reduces the level difference between louder and softer notes.

* Note overlap with single-note lines. This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before another begins. Even slight overlaps make the part sound mushy (bass in particular loses "crispness"); what's worse, two voices will briefly play where only one is needed, causing voice-stealing problems. I usually edit these on a piano-roll notation screen, since it's easy to see the relationship between note endings and beginnings, and note durations can often be adjusted by simply dragging them.

* Excess controller data. This is not something you'll hear until you have a lot of tracks playing together, and even then, you may only notice that the timing seems a little "off" but not know exactly why. The problem is that moving a mod wheel or pedal, or applying keyboard pressure, often generates more controller data than is needed to create the desired effect. (The one exception is pitch bend; insufficient bend data often leads to an unacceptably grainy sound.) Invoke the sequencer's thinning algorithm, then listen carefully to make sure that the part still sounds okay--look out for "zipper noise" caused by changing a parameter in overly large steps. Removing all data within four values of each other seems to work pretty well. By the way, it's usually best to work on a copy of the data in case you need to revert to the original version.
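
To make these cleanup passes concrete, here's a small Python sketch of the velocity/duration filters and the "within four values" thinning rule (the note and event formats are hypothetical, not any sequencer's actual internals):

```python
# Proofing passes: drop inaudible notes (velocity < 10), drop glitches
# shorter than 10 ticks, and thin a controller stream so consecutive kept
# events differ by more than 4. Notes are dicts; CC events are (tick, value).

def clean_notes(notes, min_vel=10, min_dur=10):
    return [n for n in notes if n["vel"] >= min_vel and n["dur"] >= min_dur]

def thin(events, window=4):
    kept = [events[0]]
    for tick, val in events[1:]:
        if abs(val - kept[-1][1]) > window:
            kept.append((tick, val))
    return kept

notes = [
    {"pitch": 36, "vel": 96, "dur": 120},   # a real kick hit
    {"pitch": 38, "vel": 6,  "dur": 115},   # finger brush: inaudible, wastes a voice
    {"pitch": 42, "vel": 80, "dur": 4},     # glitch: too short to hear
]
print(clean_notes(notes))                   # only the kick survives

sweep = [(t, v) for t, v in zip(range(0, 300, 10), range(0, 60, 2))]
print(len(sweep), "->", len(thin(sweep)))   # 30 events -> 10
```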

All Together Now

After optimizing individual tracks, it's time to solo tracks in various combinations. First comes bass and drums, to make sure that the notes lock solidly together. Then unmute other decaying, percussive-type sounds (e.g., guitar parts) and add them to the mix.

At this point, you can draw attention to particular parts by shifting note start times ever so slightly. Your ear will be drawn to whichever sound it hears first in a cluster of notes, so if you have kick and bass hitting at the same time but want to emphasize the drums, move the kick a clock pulse or two ahead of the beat or delay the bass a bit.

(Incidentally, another application for this technique involves emphasizing key modulations. Try shifting the initial notes of a melodic instrument to land slightly ahead of the beat to announce "hey listener, key change!")

This is also a good time to check for voice-stealing problems caused by multiple instruments playing back through multitimbral units. Sometimes if notes are cut off, merely changing note durations to prevent overlap, or deleting one note from a chord, will solve the problem.

Another way to clean up timing involves drum hits. Most drum sound modules only care about when an incoming trigger starts; the note's actual duration is usually not important. As a result, try doing a global duration change for all drum notes to a non-rhythmic value (like 33 ticks). This improves the odds that the note-off data will occur when nothing else is playing.

Closing Comments

While listening to tracks together, try monitoring in old-fashioned, 1950s-style mono. If the instruments all sound distinct and separate in mono, then when it's time to create a stereo field, they'll only sound that much better. On the other hand, if some instruments sound muddy when played in mono, you know it's time for an equalization (or maybe even a patch) change.

This kind of detail work is time-consuming; it can take several hours to clean everything up. But the end result is an airier mix, with more space and clarity--which makes the effort well worthwhile.

SIGNAL PROCESSING WITH SEQUENCERS

In today's multi-timbral synth world, it seems there are never quite enough synth outputs--let alone signal processors--for the average mix. However, your sequencer can provide many signal processing effects by tickling your sound generators in particular ways. There are a few tradeoffs; some of the following techniques involve synergistic operation between sequencer and synthesizer, and some techniques use up more voices than non-processed parts. But there are advantages as well: sequencers can give effects you can't get with outboard units, so let's start with some of those.

Synchro-Sonic Tremolo

"Synchro-sonic" is a term I use to describe the process of imparting rhythmic characteristics to a sound that normally doesn't have such characteristics. A simple example is using a noise gate triggered by a kick drum to gate a bass sound on and off, thus "synching" it to the kick.

Tremolo has fallen out of favor in recent years compared to the halcyon days of tremoloed surf music and Bo Diddley beats, but this effect can help make a synth sound less static, especially if the tremolo effect is synchronized to a tune's rhythm. Unlike conventional tremolo boxes, sequencer-generated tremolo is easy to synchronize to the beat.

To add tremolo, simply "draw in" the desired modulation waveform and apply it as a continuous controller that affects a signal's master volume (Fig. 1, measure 1). Usually controller 7 affects this, although you may want to dedicate a different controller to tremolo (such as controller 92, standardized as "tremolo depth") and route it to a DCA that affects the signal path so that you can continue to use controller 7 for overall volume.

If you come up with any particularly useful waveforms, save them in a separate sequence that serves as a "waveform library." When you need a waveform again, just copy it from the library and merge it into the track to be processed. Ramps, for example, work very well.

A few other tips: to fade in or fade out tremolo effects, use the sequencer's "change smoothly by percentage" command (Fig. 1, measure 2 fades into measure 3). Define the fade-in start point (zero volume) and end point, then smooth from 1% to 100%. Do the reverse for fadeouts.

To offset the tremolo waveform, add or subtract a constant (Fig. 1, measure 4, shows the same waveform as measure 1 with 40 subtracted). Finally, note that you can often thin the controller stream to an almost absurd degree (see Fig. 1, measure 5) without noticing much difference--the human ear is far more sensitive to pitch variations than level variations. So, tremolo effects need not take up too much sequence memory.
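
Drawing these waveforms by hand is the usual route, but if you'd rather compute them, here's a Python sketch that generates one loud-on-the-beat volume ramp per quarter note (the resolution, step size, and value range are all example numbers, not requirements):

```python
# Emit (tick, controller-7 value) pairs for a synchro-sonic tremolo ramp
# repeated every beat. The 30-tick step (~31 ms at 120 BPM, 480 ppq) is
# already a fairly thinned stream.

PPQ = 480
STEP = 30

def tremolo_ramps(beats, lo=40, hi=127):
    events = []
    for beat in range(beats):
        for tick in range(0, PPQ, STEP):
            val = round(hi - (hi - lo) * tick / PPQ)   # ramp down across the beat
            events.append((beat * PPQ + tick, val))
    return events

for tick, val in tremolo_ramps(beats=2):
    print(tick, val)
```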

Synchro-Sonic Vibrato

Sure, you can just use the mod wheel to add vibrato, but it probably won't sync to the music. However, you can use pitch bend messages to control vibrato, similarly to how we added tremolo. Since this signal is centered around zero, changing smoothly by a percentage brings in the vibrato effect without offsetting the pitch center (Fig. 2 shows an "eighth-note" waveform fading in from measure 2, beat 1 to measure 4, beat 1; the fadeout starts at measure 5). Vibrato with a period of an eighth note, eighth-note triplet, or 16th note seems to work best.

Waveform libraries are also handy for vibrato, but find a MIDI guitarist who can give you some good finger vibrato patterns. They're just what the doctor ordered for synthesized lead parts.

Note that you will not be able to thin the data as much as with tremolo without the vibrato sounding "grainy."

Cool Drum Flanging

I've never been a big fan of the "randomize/humanize" functions found on sequencers, at least for their intended application. (Good musicians don't make random timing changes--unless they've had too many beers or aren't very technically proficient--but rather, lead or lag the beat in a conscious way.) But randomization can work great for flanging drum parts, even though this uses up twice as many drum voices.

Copy the drum part (or just the drums you want flanged) to a separate track, then randomize the copy track within about a 15 ms window. At 480 ppq, this works out to +/- 8 clocks or so. Assign both tracks to the same channel, and voila--instant flanged drums. This works for keyboard parts too.
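
In sketch form, the whole trick is one randomize pass on a copied track; here's a Python illustration (the hit list and window are illustrative values, not requirements):

```python
# Duplicate a drum track and jitter the copy within roughly +/-8 clocks
# (about a 15 ms window at 480 ppq, 120 BPM); both tracks then drive the
# same MIDI channel for the flanged sound.

import random

def flange_copy(hits, spread=8):
    return [t + random.randint(-spread, spread) for t in hits]

kick = [0, 480, 960, 1440]                 # four-on-the-floor at 480 ppq
print(flange_copy(kick))                   # e.g., [5, 473, 967, 1438]
```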

Cool Drum Panning

Here's a tip for those who like to use samplers as drum sound generators. Assign the same sound to two different notes, but set different pan positions--for example, center and left of center. At the sequencer, use a "change filter" or "logical edit" filter to cut just the notes that fall on, say, the 2nd and 4th beats of every measure from the part driving one of the drum sounds. Paste this into another track, then transpose it to the pitch of the other drum sound. As the part plays, the percussion sound will bounce over to one side periodically.

Musically, I find this works best on short percussion sounds, such as claves, tambourine hits, claps, etc. Moving primary drums like the kick, snare, or toms can be disorienting.

So far, we've used sequencers to provide amplitude- and frequency-domain signal processing. Now let's investigate time delay effects and panning.

Chorusing

If a keyboard has an on-board chorus, that's convenient--but sequencer-created chorusing allows for cool rhythmic possibilities.

There are several ways to create chorusing. First, decide whether to use pitch bend messages or a continuous controller set to modulate pitch. If you modulate with pitch bend, you won't be able to perform regular pitch bends and add chorusing at the same time. Using continuous controllers, which I recommend, keeps pitch bending independent from the chorus effect.

Sequencer-driven chorusing works best with two-oscillator-per-voice synths. For the most obvious chorus effect, program the two oscillators for the same waveform and pitch. Set one oscillator to respond negatively to the controller, and the other oscillator to respond positively (in other words, upon receiving a controller message the two oscillators shift pitch in different directions). This maintains a proper pitch center, instead of having one oscillator simply go flat or sharp compared to concert pitch.

Fig. 3 shows one of my favorite chorusing curves. It creates a sudden thickening at the beginning of a measure that fades out over the measure, with the next measure devoid of chorusing.

Fig. 4 is another goodie that works well for Techno-type music. This turns chorusing on and off rapidly, four times per measure.

Note that the "low" chorusing value is set to a non-zero number. When set to zero, there is no pitch difference between the oscillators and they sound static. Even a low controller value, like 3 or 4, adds a bit of animation. The amount of modulation need not be large. Since it's easiest to draw sequenced curves with fairly large values, I usually trim modulation depth way back at the synth oscillators themselves.

Echo

Most people are familiar with the procedure of copying a track, pasting it to another track, lowering the velocity of the copied track, and shifting its timing to provide echo effects. However, I'd like to add a few tips:

* Program the synth so that lower velocities close down the filter somewhat. This prevents the echo from "stepping on" the straight signal.

* Unless you run out of tracks, never merge the echoed track with the straight track. If you decide you don't like the echo effect, you'll have a hard time removing or editing it.

* Transposing the echo track (octaves work well) is a great effect. This is particularly useful when creating octave-jumping, "eurodisco"-style bass parts.

* Try changing the duration of the echoed notes by a percentage, such as 50% or 150% of the existing value. This gives an entirely different feel.

* Although it takes more work, polyrhythmic echoes can really liven up a tune. Try having one track provide an echo one beat after the straight sound, with a second track adding an echo one eighth note after the first echo. Throw in triplets every now and then, too--the variations provide an effect you'll never hear from a stock digital delay.

* Stereo echo effects are lots of fun, and also create a wider, more open mix compared to having the straight and echoed sounds located in the same place in the stereo field. Some synthesizers make this easy by letting you create a "combi" patch where the same program can be assigned to different MIDI channels and have different panning. With other synthesizers, you'll need to copy the program being echoed to another program location, pan each program individually, and assign these to two different MIDI channels in a combi setup. Three echoed parts--center, right, and left--are even cooler.

Synths where the programs need to be assigned to different channels can eat up MIDI channels pretty fast. I sometimes offload the echo parts to a keyboard's on-board sequencer, synched to the main (computer) sequencer.

Panning/Delay Effects

Here's a technique that uses delay to spread a signal across the stereo field, and is particularly effective for thickening brass and string pad parts.

Copy the track to be thickened, and assign it to a different MIDI channel compared to the primary track. Shift the copied track 20-40 ms (that's approximately 20-40 clocks at 120 BPM, if the sequencer resolution is 480 ppqn). Assign the same synth program to both channels, with one panned left and the other panned right. The overall sound will be much fuller.
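
The only arithmetic here is converting milliseconds to clocks; a quick Python sketch with illustrative numbers:

```python
# Convert a delay in milliseconds to ticks, then offset the copied track.

def ms_to_ticks(ms, bpm=120, ppq=480):
    ms_per_tick = 60000 / (bpm * ppq)       # ~1.04 ms per tick at these settings
    return round(ms / ms_per_tick)

left = [0, 480, 960]                        # pad notes on the original channel
right = [t + ms_to_ticks(30) for t in left] # shifted copy on the other channel
print(right)                                # [29, 509, 989]
```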

Something Really Gross And Disgusting

Fig. 5 shows a controller signal where on one clock, the signal is full on, and on the next clock, it's full off.

Repeat the pattern for as long as you want the gross and disgusting effect. This started as an attempt to get ring modulation, but what actually happens depends on a variety of factors (the synth itself, MIDI bandwidth, etc.). Usually it produces an effect that sounds as if the synth has some intermittent hardware problem. Who says digital has to sound perfect? The truth of the matter is that if properly abused, digital can produce ugliness that analog circuits can only dream about.

And that concludes our foray into signal processing via sequencer. The main advantage is the ability to add rhythmic effects; this makes the overall mix bigger and more interesting, even with a limited number of sounds. Try it--you'll hear what I mean.

SEQUENCERS AS PATCH EDITORS

Signal processors are becoming almost as complex to program as synthesizers. Not only do modern signal processors include a slew of programmable parameters, but also, many parameters can now be controlled in real time via MIDI continuous controllers or other control signals.

Unfortunately, your access to these parameters is often a small LCD or LED display, with editing done through the tedious method of selecting a parameter, altering its value, selecting another parameter, altering its value, and so on. Although there are quite a few computer-based synthesizer editing programs that let you program synth sounds on-screen, there aren't a lot of computer editors for signal processors.

This point was driven home the other day as I slogged through programming the graphic EQ parameters on a DigiTech DSP 256, a digital multieffects unit that my band uses a lot on vocals. Not only did it take a lot of button presses to switch among parameters and alter values, it was frustrating to vary just one parameter at a time, listen to the results, vary another parameter, listen again, go back and re-tweak previous settings...in fact, you're probably dozing off just listening to me describe the editing procedure. Well, it's even more boring when you're trying to tweak up 30 or 40 patches.

Once again, sequencers to the rescue. This time we're not going to use test sequences, but instead use the fader option found on sequencers like Vision, Performer, Cubase, Logic, and Metro to change signal processor parameters easily and conveniently. The essence of this technique is that these faders can usually be programmed to output any particular controller, not just the traditional controllers such as 7 for master volume, 1 for mod wheel, etc. In fact, you can think of the sliders not just as mixdown level controls, but as a bank of MIDI continuous controller faders. Before putting these to use, however, we need to cover a bit of theory as to how signal processors handle continuous controllers.

The Controller/Signal Processor Connection

Signal processors seem to subscribe to two basic philosophies of MIDI real-time control. The per-patch approach lets you assign a limited number of controllers (usually from 1 to 10) to specific parameters in that patch. Thus, controller 15 might affect reverb decay time in one patch and echo feedback in another. The global approach assigns specific controllers to specific parameters, and these assignments are valid for all patches. For example, if controller 17 affects the chorus wet/dry mix, any patch that contains that parameter will have it controlled by controller 17.

The technique described in this column works equally well with both approaches. I've used sequencer faders to control the ART SGE, Lexicon PCM-70, DigiTech GSP-5 and GSP-21, Alesis Quadraverb, and Peavey ProFex. All except the latter two show the results of any changes you make in the display, which is very helpful. However, probably the best way to illustrate how this technique works is to choose a specific example, so we'll program the graphic equalizer on the DSP 256. This unit uses the global control approach, and is well-suited to being edited via sequencer faders. However, these techniques can also be translated to other gear, although there may be some differences (e.g., sometimes a device's display will show a parameter being changed, and sometimes it will always display the programmed value even though the parameter is being varied via MIDI continuous control).

Graphic EQ Control

With the DSP 256, the first step is to define which controllers will affect which parameters. This involves dialing up each graphic EQ band parameter and linking it to a specific controller. In this case the 63 Hz band is assigned to controller 10, the 125 Hz band to controller 11, the 250 Hz band to controller 12, etc.

The next step is to program the on-screen faders. One advantage of Performer is that sets of faders can be saved to disk with a sequence file; with a sequencer like Macromedia's Metro, fader assignments can only be saved as a preference. However, Metro is so easy to set up that most of the time I'll just create a quick fader template, do my programming, and not worry about saving the fader assignments.

By The Way

One extremely wonderful DSP 256 feature is that sending controller information for a specific parameter forces the LCD to that parameter, and the display also updates the parameter value as you change it. For example, if you start moving the 4 kHz slider, the LCD will jump to the 4 kHz display and show the results of your edits; grab the 500 Hz fader, and the LCD shows that parameter. Signal processor manufacturers take note: this is the way to go.

Yes, sequencers can even make signal processor programming easier. Not all signal processors are equally well-suited to this technique, but more often than not, using your sequencer's faders will at the very least beat making changes from the device's front panel. Happy programming!

Making Bass Parts Come Alive

Having covered sequencing tips for drum, guitar, and wind parts, let's get down--literally--and turn our attention to bass parts.

Monophonia Reigns

As with wind parts, bass parts are often monophonic. Therefore, all the tips we covered in the previous installment of Power Sequencing for making wind instruments monophonic also apply to bass. To summarize, you want a note to end before the next one begins; some programs have an algorithm that can do this automatically, while other programs require you to trim notes manually.

Let It Slide

Probably the most important part of making bass parts seem "real" (assuming, of course, that you're trying to simulate a "real" bass) is the judicious use of slides. Whether playing fretless or fretted, bass players often transition from one note to another by sliding, as well as use longer slides for accents (e.g., sliding down an octave and "landing" on the tonic at the same time the kick drum hits).

As with so many other aspects of sequencing, there's a relationship between the sequencer and the sound generators being driven. Set your bass patches to respond to a pitch bend range of +/-12 semitones; this allows slides of up to an octave in either direction--a two-octave total range.

Fretless bass parts are the easiest to emulate since the slide isn't "quantized" by the bass's frets. Players with good wheel technique can simply move the pitch bend wheel as they play to do "fretless" parts. However, this is quite tricky with large pitch bend ranges, and it may be difficult to obtain the desired degree of pitch accuracy.

Alternately, you can play the bass part without slides, then "draw" them in later with pitch bend messages. This works because most slides end by plucking a new note anyway, so all we really need to do is add slides between existing notes. To draw in messages with the appropriate bend amount, it helps to make a chart of what pitch bend values correspond to which notes of an octave.

(Note: This chart assumes a linear pitch bend response at the synth.)

There are three columns (along with the resulting interval) because different sequencers show pitch bend data differently. Pro 5, for example, shows these values as +/-127, Performer as +/-8192, and Cubase "splits" 8192 so that no pitch bend corresponds to 4096, maximum bend up is 8192, and maximum bend down is 0.

0-127    0-8192    4096-8192    Interval
0        0         4096         Tonic
10       683       4437         flatted 2nd
21       1365      4779         2nd
31       2048      5120         min. 3rd
42       2731      5461         3rd
53       3413      5803         4th
63       4096      6144         flatted 5th
74       4779      6485         5th
84       5461      6827         flatted 6th
95       6144      7168         6th
105      6827      7509         flatted 7th
116      7509      7851         7th
127      8192      8192         octave

Table 1.

Suppose we want to add a slide that goes from the tonic to the fifth, as shown in the last two beats of Fig. 1 (measure 2, beat 1 and part of beat 2). Just draw in a slope that ends at the appropriate value (for example, if your sequencer follows the convention in column 1 of the table, end the slope at 74). Then add a pitch bend = 0 message just before the fifth plays. If necessary, extend the note being slid so that its duration equals that of the slide. (Also note that many sequencers have functions that let you smooth the slope for a bionically-perfect slide.)
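
Here's a Python sketch of that slide, using the +/-127 convention from the first column of Table 1 (the tick positions and number of ramp steps are arbitrary example values):

```python
# Draw a linear bend ramp from the tonic up to the fifth (bend = 74 with a
# +/-12 semitone range), then snap back to zero just before the next note.
# Output is (tick, bend value) pairs you could enter in an event list.

def slide_to_fifth(start_tick, length, target=74, steps=12):
    events = [(start_tick + round(length * i / steps),
               round(target * i / steps)) for i in range(steps + 1)]
    events.append((start_tick + length + 1, 0))   # reset before the fifth plays
    return events

for tick, bend in slide_to_fifth(start_tick=960, length=240):
    print(tick, bend)
```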

One caution: when extending the note, end it before the pitch bend returns to 0 or you may get a pitch glitch (although sometimes this can sound cool). This also implies having a very short--almost nonexistent--release time on the patch. Program the patch so that the initial decay and sustain parameters control the duration, not the final release.

Time To Fret

Fretted bass slides are a little more complex, but adding this effect can create a stunningly realistic part that has the listener wondering "is it a synth or an extremely consistent and accurate bass player?"

You have two main options for emulating a fretted bass. The first requires a synthesizer with legato mode (Yamaha TX81Z, Yamaha TX802, Ensoniq EPS 16+, Peavey Spectrum Bass, etc.). What this means is that if the durations of two notes overlap, the second note will change the pitch but not retrigger the note's envelopes--just like sliding on a fretted instrument (by the way, this is why legato mode is so crucial for use with MIDI guitar and bass).

Fig. 1's first two beats show the same example as the last two beats, but with a fretted bass slide. Add notes in semitone steps between the "source" and "target" notes, but make sure that the note durations overlap until you hit the target note (which you do want to retrigger).

Figure 1

To create the slide, enter notes in step time with 100% "articulation" (i.e., if the step time interval is eighth-notes, then each note should be an eighth note long). Then use a "change duration" command to set each note to 110% of its original length. This insures that the end of a note will overlap the attack of the subsequent note, thus producing the legato effect in synthesizers that are so equipped.
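
A Python sketch of the same recipe (the pitches and tick values are illustrative; the 110% figure is the one from the text, and the note format is hypothetical):

```python
# Build semitone steps from source to target, each lasting 110% of the step
# interval so it overlaps the next note's attack and triggers legato mode.
# Notes are (start_tick, pitch, duration) tuples.

def fretted_slide(start, from_pitch, to_pitch, step=60):
    notes = [(start + i * step, p, round(step * 1.10))
             for i, p in enumerate(range(from_pitch, to_pitch))]
    notes.append((start + len(notes) * step, to_pitch, step))  # target retriggers
    return notes

for note in fretted_slide(0, 45, 52):      # A1 up to E2 in 16ths at 240 ppq
    print(note)
```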

If your synth does not include a legato feature, write the manufacturer and complain so they'll add it in the next update! Meanwhile, you can create fretted slide effects by extending a note's duration to the length of the slide, and using evenly-spaced pitch bend messages to change pitch. This produces the same effect as having legato mode on the synth (better, actually, because you don't have the end of one note hanging over another note's attack), but requires more work.

Figure 1, middle two beats, shows the same example as the first two and last two beats, except that it uses discrete pitch bend messages to add "frets" to the slide. The message values are derived from Table 1.

The moral of the story: with bass parts, sometimes it's better to just let things slide. And with those words of pseudo-wisdom, it's time to sign off.

Making Drum Parts Come Alive

In many sequenced tunes, the drums serve as little more than a glorified metronome. While robotic drum parts have their uses, musicians these days seem more interested in creating drum sequences that swing rather than plod.

There are two main qualities that sequenced drum parts often lack compared to acoustic parts: the dynamics that let a part "breathe," and the variations that add realism and interest. In this article, we'll focus on techniques that add both to your sequences.

But first, the sounds. The drum sounds themselves are crucial to creating realistic drum sequences. Samplers are ideal for playing back drums because the processing options can add more realism. Set the filter cutoff so it is driven higher by increased velocity; hard hits will sound somewhat brighter than soft hits, which improves the sense of dynamics. Also try setting a sample start point several milliseconds into a drum sample and using velocity to push the start point closer to the beginning of the sample so as to add more attack to the hard hits, or use velocity switching to bring in samples that have added punch. While a discussion of drum sound programming isn't appropriate here, the better your raw materials, the easier it will be to make great-sounding drum sequences.

Randomization Alone Is Not The Answer

Although randomization can add variations to velocity and timing, human drummers add variations within a planned context. In other words, although every hit of a hi-hat may be slightly different, some of these differences--such as accenting the first beat of a measure--are the result of a conscious (or unconscious but nonetheless musical) decision.

One way to simulate this with a sequencer is to combine randomization with regional edits, to create a randomized overlay within a non-random structure. A regional edit is an editing operation that is applied only to notes that fall within a certain region of beats or measures; a typical regional edit might be, "Increase velocity smoothly from 64 to 127 for the notes that fall between bar 4, beat 1, and bar 7, beat 1." But what if you want to edit certain notes within the region, such as those falling on the beats, while not editing other notes? A change filter, also called a logical edit filter, allows edits to be made only to notes that meet certain criteria, such as falling within a particular number of clocks of a specific beat, being within a certain pitch or velocity range, etc.

Let's look at some examples of drum track edits.

Consider a 16th-note hi-hat part played at constant velocity...then again, don't bother, because it will sound extremely mechanical. One improvement is to use a change filter to increase the velocities of the notes that fall on each quarter-note by 20 or so, then make a second change-filter pass and increase the velocity (this time by 10) of hits that occur only on the first beat of a measure.

Figure 1 shows how these operations affect beats 3 and 4 of measure 1, which originally had all hi-hat hits at a velocity of 64. (The highlighted column shows the velocity values after the change filter edit.)

Figure 1.

Although the part is more interesting, it still follows a rigid pattern. Now is the time to add randomization and change velocities within, for example, a range of +/-5. Slightly randomizing the start time of each hit (in this case, by up to 3 clocks at 240 ppq) can also help. The next example shows the result of these edits: a hi-hat part with randomized variations on a non-random superstructure.
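
Here's a Python sketch of the whole recipe--two change-filter passes plus the randomized overlay (the tick grid and velocity numbers come from the example above; the data format is hypothetical):

```python
# Two change-filter passes on a 16th-note hi-hat (all hits start at velocity
# 64, 240 ppq): +20 on the quarter-notes, +10 more on beat 1, then a random
# overlay of +/-5 velocity and +/-3 clocks.

import random

PPQ = 240
hits = [(t, 64) for t in range(0, PPQ * 4, PPQ // 4)]   # one measure of 16ths

def accent(hits, every, boost):
    return [(t, v + boost) if t % every == 0 else (t, v) for t, v in hits]

hits = accent(hits, PPQ, 20)               # pass 1: each quarter-note
hits = accent(hits, PPQ * 4, 10)           # pass 2: first beat of the measure
hits = [(t + random.randint(-3, 3), max(1, min(127, v + random.randint(-5, 5))))
        for t, v in hits]                  # the randomized overlay
print(hits)
```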

As an example of combining randomization with a regional change, consider a one-measure-long 32nd-note snare drum roll. Again, playing all hits at a constant velocity sounds really dumb, so add a crescendo by defining the region of the roll and increasing the velocities smoothly from 40 to 125.

Figure 2.

However, no drummer can produce a roll that crescendos perfectly, with each hit ever-so-slightly louder than the last one. Again, randomization comes to the rescue; the same range of values given for the hi-hat (+/-5) works well here too. Although there will be a general crescendo, there will be enough variations that the part won't sound so mechanical.
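
The snare roll works the same way in sketch form (values straight from the text; everything else is illustrative):

```python
# A one-measure 32nd-note snare roll: velocities ramp smoothly from 40 to
# 125 (the regional edit), then get roughened by +/-5 (the randomization).

import random

HITS = 32
ramp = [round(40 + (125 - 40) * i / (HITS - 1)) for i in range(HITS)]
roll = [max(1, min(127, v + random.randint(-5, 5))) for v in ramp]
print(roll)
```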

Of course, touch-up editing on individual notes after these operations can improve a part even further.

Event Time Editing

Some types of drum patterns benefit from not being quantized (as explained in the article "The Perils of Quantization"). If your chops aren't accurate enough to give you the feel you want, you can quantize a part and then shape it by hand.

Tom fills in ballads, for example, sometimes sound good if they're a few clocks late, as if the drummer is sitting down on the beat rather than pushing it. A crash cymbal hit, on the other hand, might sound best a few clocks ahead of the beat. If you've got several percussion sounds hitting on the same beat in the pattern, try pushing the various parts ahead or behind by a small amount; you'll be amazed at the effect this can have on the feel.

Timbral Variations

No two consecutive drum hits have the same timbre. As mentioned earlier, sampled drum sounds make it easy to introduce timbral variations that are tied to velocity. But what if you're using drum machine sounds that make the same noises every time you trigger them?

The answer is to create two (or more) slightly different drum sounds. For example, with an Alesis D4, copy the snare sound to a different MIDI note, then detune the second sound by 1 tuning unit. There will be a subtle, but noticeable, timbral difference between the two snare sounds. Shift every other note or so of your snare part to trigger the second snare, and you'll end up with a much more lively part.

If you're using sampled drum sounds, try routing a small amount of velocity modulation to pitch, so that high-velocity hits will be at a slightly higher pitch. This gives the feeling of the drum skin being stretched tighter and therefore playing slightly sharp. I usually set the pitch modulation amount so that the change is not really audible except when compared to the non-bent sound. It's better to err on the side of too much subtlety than to end up with an obvious pitch-bend effect that makes the drum sound less, not more, realistic.

Signal Processing

Signal processors whose parameters can be controlled via MIDI continuous controllers are another useful tool. Increase the reverb decay time on a crucial snare hit, or bring up the EQ in a particular part to accent the toms instead of just increasing their levels. Although it's tempting to enter these changes by manipulating something like a mod wheel, this can chew up sequencer memory and possibly affect the timing of other nearby notes. In most cases you can create any required changes with "snapshots": Sequence one controller event to make the change, then sequence a second event to return the controller to normal. This technique uses very little memory.

Multitimbral Keyboard Considerations

Multitimbral keyboards often include drum sounds along with other synthesized or sampled instruments, making it unnecessary to use a separate drum sound generator. However, these machines do not have unlimited polyphony, and it's a drag to hear notes being cut off when you run out of voices.

I usually find that when programming drums from a keyboard, the triggering note events last longer than they need to because of the time my fingers rest on the keys. Many drum sounds will have already played through before the MIDI note has ended, which means that a synth voice may be sustaining for a sound that is no longer playing, possibly reducing polyphony. On some synths, a voice will become available as soon as its sample finishes playing, so this isn't always a problem.

But if you do run short of voices, here's what to do: After recording the drum notes in a sequence, use a "change duration" command to change all drum notes to the shortest duration required to trigger the sound (e.g., a 32nd note). This insures that no voices will be wasted on sounds that aren't sounding. If the drum sounds are processed with the same kinds of envelopes as other sounds, you may need to increase the release times of the envelopes in the drum kit, or set the envelopes to trigger mode, so as not to get short, choppy notes. (Envelope trigger mode may be called something else on your synth--cycle, one shot, or non-sustain, for example. The idea is that the envelope should cycle through all of its stages regardless of when the note-off is received.)

One Last Tip

Spend some time listening to jazz drummers like Elvin Jones, Art Blakey, and Dave Weckl. They can demonstrate more about the effective use of drum dynamics than anything I could say here. I particularly recommend checking out what Tony Williams does on Miles Davis's "Shh/Peaceful" (from In a Silent Way [Columbia]). The entire drum part consists of hi-hat, yet Williams's command of the dynamics of that part is so complete that it propels the tune as effectively as if he'd used an entire kit. I don't know whether or not he got bored playing that part, but I never get bored listening to it.

Making Guitar Parts Come Alive

The easiest way to sequence convincing guitar parts is to find a guitarist who groks MIDI guitar, but that's not always possible. Fortunately, there are several guitar-specific idiomatic "cues" that imply to the listener that a guitar is being played. If you add those cues to your sequences, your guitar parts will sound a whole lot more guitar-like...which is what this article is all about.

Use The Right Voicing For Rhythm Guitar Parts

Most synthesists can fake good lead guitar sounds, but rhythm guitar is another matter entirely. Guitars give wide-open voicings and the same note often appears in several octaves.

There are three very popular barre chord fingerings (basically first position E, D, and B). In simpler rock tunes, the rhythm guitarist will play one basic voicing (usually the 1st position E voicing) and slide it up and down the neck as different chords are required. Simply using the same voicing as these barre chords will help create realistic guitar sequences.

Fig. 1 shows the keyboard notes that correspond to the three main major barre chord voicings. If the lowest string is not the root, it will often not get played.

Figure 1.

There are two easy ways to sequence guitar voicings. James Chandler's "KeyFrets" program for the Mac and C-64 translates a keyboard's MIDI notes into guitar chord structures ($15 each from Jim's Software, 204 California Ave., Chattanooga, TN 37415; tel. 615/877-6835). Also, several years ago, Oberheim introduced the "Strummer," a dedicated hardware box for keyboard-to-guitar voicing translation. You may be able to find one used.

A final voicing consideration is that guitarists will often play block chords high up on the neck, and let low open strings ring. These open strings are E and A below middle C, and D above middle C. If these are in your sequence, increase the note length to simulate the increased sustain.

About Strums

Quantizing chords is verboten; when using a pick, it's physically impossible to play all six strings at the same time. What's more, most guitarists will exploit the time differential between the striking of the first and last strings of the strum in a purposeful, musical way.

After examining several sequences containing MIDI guitar chords, the strum timing of the following chord (Fig. 2) seemed most representative for comparatively slow strums:

Figure 2.

This was recorded at 240 PPQ at 120 BPM, so the first three notes fall approximately 94, 80, and 48 ms before the beat. The fourth note of the chord lands right on the beat, and the last note of the chord comes about 32 ms after the beat. This represents a total spread of around 120 ms between the first and last notes of the chord. So, there's an average 24 ms difference between each note.

Interestingly, my strums tend to "push" the beat on uptempo numbers, as evidenced by more notes falling before the beat than after. This implies that strum timing can be used for "feel factor" effects that push or lag the overall beat.

The Lazy Way To Strum

I've never been a fan of step-time entry, but it works very well for strums. Set the step time duration to a 64th note (again, I'm referencing 120 BPM) and you'll get about 30 ms between notes. If that's too much, tighten up the strum by quantizing the notes to the nearest beat at about 20% quantization strength. Randomize the note start times within a 2 to 5 clock window, and you're set.
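
If you like to see the arithmetic, here's a Python sketch of that whole procedure (the chord tones, resolution, and function names are illustrative):

```python
# Step-entered strum: chord tones 15 clocks apart (a 64th note at 240 ppq,
# about 30 ms at 120 BPM), pulled 20% toward the beat, then jittered a bit.

import random

STEP = 15                                   # 64th note at 240 ppq

def strum(chord, beat_tick, strength=0.2, jitter=3):
    notes = []
    for i, pitch in enumerate(chord):
        t = beat_tick + i * STEP                      # raw step-time spread
        t = round(t + (beat_tick - t) * strength)     # 20% quantize strength
        notes.append((t + random.randint(-jitter, jitter), pitch))
    return notes

print(strum([40, 47, 52, 56, 59, 64], beat_tick=960))  # low-E barre voicing
```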

If you're really lazy, and your keyboard can layer sounds with delays between the layers (e.g., EPS 16 PLUS, DPM 3 SE), you can stack the notes necessary for a particular chord voicing on one key for one-finger power chord strums. Set the delay for about 20-30 ms between each layered note, and to avoid too metronomic a strum, modulate the delay time if possible.

A Case Of The Bends

The way you bend notes can make or break a guitar sequence. Don't use the mod wheel, but wiggle the pitch bend wheel instead. Fig. 3 shows an up/down bend on MIDI guitar followed by finger vibrato; note the considerable variation between each cycle of vibrato. A triangle wave LFO can't do this.

Figure 3.

Also remember that a guitar string can only bend up, but an LFO modulates pitch up and down around a centerline. The one exception to this involves using a whammy bar, which can bend notes both up and down. Whammy bar vibrato looks like Fig. 4.

Figure 4.

Generally you'll use whammy bar-style vibrato on chords, and bend-up-only bending on single-note leads, but this is by no means a "rule." The easiest solution is to have a MIDI guitarist record some pitch bend data into your sequencer; paste it into your own sequences as necessary.

Getting The Right Sound

Of course, using a good guitar patch is crucial. You can make your quest for guitar simulation a lot easier by processing your keyboard with a guitar-oriented signal processor. Most of the guitars you hear are heavily processed with compression, chorusing, distortion, and the like, and listeners identify those effects with rock guitar.

That's it for now. After reading all this, I hope you don't have to--uh--"fret" any more about how to get good-sounding guitar parts.

Making Wind Parts Come Alive

Previous columns covered how to make more expressive drum and guitar parts, so let's turn our attention to wind instruments.

Good wind players caress each note. Changes in attack, level, and timbre create a very expressive melodic line that is difficult to synthesize electronically in real time. A good wind player with a wind-to-MIDI converter is your best bet for getting expressive wind instrument sequences, but if your only choice is to record the part using a keyboard, there's still plenty you can do to coax your synths into sounding more wind-like.

One Note At A Time

Most wind instruments are monophonic, but unless your keyboard offers a monophonic mode, the end of one note can often overlap the beginning of the next.

It's easy to fix this in Performer: Select "Move releases to the closest attack" in the duration menu; this prevents notes from overlapping. To insert some "breaths," select the notes after which you want the breaths, then use duration again to set the duration of the selected notes to 90% of their current values.

Fig. 1 shows how to do this with Cubase. The quantize parameter dialog box lets you set the number of ticks by which notes overlap; entering a negative value creates a gap between notes (in this case, 60 ticks). The top note grid shows the unedited line. Notes 1, 3, and 5 overlap the attacks of notes 2, 4, and 6 respectively. By selecting these notes and invoking the Legato edit option, notes 1, 3, and 5 end 60 ticks before the start of notes 2, 4, and 6, as shown in the lower note grid. You can select the entire track, but this means that each note-off will be changed to create a 60-tick gap between it and the next note-on. This may stretch notes you don't want stretched. The same thing happens with Performer.
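
In sketch form, the gap-making edit is just a trim pass over consecutive notes; here's a Python illustration (the note list and 60-tick gap are example values):

```python
# Trim each note to end 60 ticks before the next attack, like the negative
# Legato overlap described above. Notes are (start, duration) in ticks.

def add_gaps(notes, gap=60):
    fixed = [(s, min(d, ns - s - gap))
             for (s, d), (ns, _) in zip(notes, notes[1:])]
    fixed.append(notes[-1])                 # the last note keeps its length
    return fixed

line = [(0, 500), (480, 500), (960, 460), (1440, 480)]
print(add_gaps(line))   # [(0, 420), (480, 420), (960, 420), (1440, 480)]
```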

For icing on the cake, modify the part on a note-by-note basis. For example, after long notes leave a little more space before the next one so your "player" can breathe. On fast passages, tighten up the gaps if needed.

One Instrument At A Time

Avoid playing polyphonic wind parts. Record each line (assigned to a different MIDI channel) into the sequencer individually. This lets you add slightly different pitch bend, modulation, pressure, etc. to each sound for a more realistic overall effect.

It's time to split. Recording individual lines can be time-consuming, and if you have to get the sequence disk to Fed Ex in two hours, you'll probably play the ensemble part polyphonically and record it on one MIDI channel. You can still make the part more interesting by "pulling out" individual lines within the chords, and increasing the velocity of these notes. You might be able to split off a part with conditional editing by specifying only notes within a certain pitch range. Otherwise, you may need to dig into the piano-roll screen or event list and, assuming your sequencer allows discontiguous selection, select the events then do a velocity increase. If discontiguous editing is not available, edit each note individually.

A quicker option is to randomize the start times a bit so that the attacks don't all hit at the same time (a little randomized velocity can help too).

Bend Me, Shape Me

One characteristic of most wind instruments is that they don't stay on a constant pitch. "Drawing" in some low-level pitch bend messages toward the end of a sustained note can simulate this effect. It's usually not necessary to draw a curve and generate lots of data; just a few blips here and there will do the job.

Also try adding some upward pitch bend at the beginning of notes to imitate the effect of a player not starting right on pitch, but bending up over time. You can do this with a pitch bend envelope or pitch wheel, but "drawing in" bend data lets you make really subtle changes.

Lotsa Control

Pitch isn't the only parameter worth controlling: volume, waveform, filter, and envelope time changes are also important. It's difficult to play all these controllers at once, which is why post-performance sequence editing can be so useful--you can literally add "layers" of expressiveness with multiple controllers. However, real-time control adds a certain magic that you don't get in any other way. Using footpedals for, say, volume and filter cutoff leaves your hands free to add pressure and pitch bend.

Since all these controllers fatten up the MIDI data stream, use controller thinning algorithms to eliminate redundant data. Level can usually be thinned more radically than pitch bend or filter sweeps.
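Most sequencers have a thinning command built in, but the algorithm itself is simple. A sketch, assuming events are dicts with 'tick' and 'value' keys:

def thin_controllers(events, min_delta=2, min_ticks=10):
    # Drop exact repeats, plus any event whose value moved less than
    # min_delta within min_ticks of the last kept event. Always keep
    # the first and last events so the controller ends where it should.
    if len(events) < 3:
        return events[:]
    kept = [events[0]]
    for ev in events[1:-1]:
        last = kept[-1]
        if ev['value'] == last['value']:
            continue  # redundant data
        if abs(ev['value'] - last['value']) < min_delta and ev['tick'] - last['tick'] < min_ticks:
            continue  # too small and too soon to hear
        kept.append(ev)
    kept.append(events[-1])
    return kept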

And that's about it for this article. Don't forget to listen to some great horn players--John Coltrane and Miles Davis come to mind--for an occasional reality check, as well as to get some humbling reminders of the limitations of present-day synthesis.

Speed Sequencing

We all recognize the symptoms of MIDI malaise: a desire to just sit down at a piano and play instead of setting channels, moving mice, and squinting at LCDs containing messages from operating systems designed by former CIA cryptologists. Yet learning any musical instrument involves plateaus, and MIDI sequencers are musical instruments (even though they sometimes resemble word processors in musical clothes). I hit a plateau a while ago, but kept slogging through; the reward was that eventually, sequencing became a smooth, natural process that helped--rather than inhibited--the music-making process. After years of working with this technology, it finally seemed more like a musical instrument than a machine.

Part of what got me back on track was an idea from songwriter/author Dan Daley. To paraphrase, he postulated that the part of your brain responsible for intuitive thinking (e.g., coming up with nifty musical ideas) is not the same part that takes care of detail work (like editing note velocities). As you sequence, you're forced to shuttle between these two opposing mindsets; just when your intuitive side is riding high, you have to deal with the nuts and bolts of making your sequencer work, then bounce back to thinking intuitively. This complicates the process of making music with MIDI, and may explain why some of the coolest records of all time had engineers taking care of the studio so that the artists could just create.

So, I've been looking at how to streamline the sequencing process not only to have more fun, but to make better music (almost invariably, the tunes I complete in a few hours get a better reaction from listeners than the ones that I agonize over for weeks). Here are a bunch of tips that help encourage the creative process.

* Don't do detailed editing or mixing as you record tracks. Sequencing encompasses three processes: recording tracks, editing, and mixing. Recording requires that you work efficiently, because this is a real-time activity where the creative juices flow. Editing and mixing are "off-line" operations that have nothing to do with real-time music-making.

If a track needs quantizing in only a few places, don't take the time to do it now. Copy the track, do a track quantize on the copy as a temporary measure, and mute the original--you can quantize the original selectively during the editing phase. And don't even think about mixing moves; they'll need to change anyway as you record more tracks.

Or, suppose you're recording just fine but then start to mess up. Don't spend the time to set up punch points. Go back to before where the mistake occurred, mute the original track, and start recording the next section on a separate track. Cut and paste the various elements into a composite track during editing.

* Strip down your composition setup. I do 80% of my recording with a single multitimbral synth and drum machine. I chose a synth that is fast to set up for multitimbral operation, and can respond to all 16 channels. The other 20% of the time, I add a sampler to cover sounds the synth can't make; only during the editing and mixing process do I send the track data to other synths for particular sounds, or to reduce voice-stealing. Using one synth speeds up the recording process and allows you to get to know it well--a key to speed sequencing.

I have several "multi" setups saved on disk, so sometimes it's not even necessary to assign sounds to channels, and sequencing can begin after the few seconds it takes to load the setup file. Remember the golden rule of speed sequencing: if you can't record tracks almost as fast as you can play them, something's wrong.

* Check your hardware. Not having to repatch or diddle with MIDI patch bays speeds things up, and multi-port interfaces (such as the MIDI Time Piece II, Studio 5, etc.) are the answer. If you absolutely need to change routings, you can generally call up custom routings for these interfaces from computer files.

* Use keyboard equivalents. Earlier, we discussed doing a track quantize of a copy. This is such a common function that I have a keyboard macro that calls up the quantize dialog box set to my most common setting (16th notes with 85% strength); pressing return completes the operation in two keypresses. Remember, every time you move the mouse instead of hitting a key, you're losing time. Assign as many operations as you can to single function keys instead of making macros like "option-shift-alternate Q."

Some sequencers let you trigger operations like record, play, "rewind," etc. from your master MIDI controller, which saves moving back and forth between the controller and the computer keyboard.

* Program "template" sequences. When you call up a new sequence, there's no reason it should be blank. Program a template sequence with pre-programmed channel assignments, instrument names, etc. My template sequences even include primitive drum parts--several tracks of kick, snare, and hi-hat, all playing different patterns--which let you create a drum machine-type pattern

by simply muting and unmuting tracks. These never get used as final drum parts, but it can be more inspiring to play against drum sounds than a metronome click.
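As an aside for anyone sequencing on a modern computer, here's roughly what such a template might look like built as a Standard MIDI File with Python's mido package; the file name, note numbers, and patterns are just examples:

from mido import MidiFile, MidiTrack, Message, MetaMessage

PPQ = 480
mid = MidiFile(type=1, ticks_per_beat=PPQ)

def drum_track(name, note, beats):
    # One track per drum voice (MIDI channel 10) so each pattern can be
    # muted and unmuted independently; `beats` are positions in one 4/4 bar.
    track = MidiTrack([MetaMessage('track_name', name=name, time=0)])
    last = 0
    for beat in beats:
        tick = int(beat * PPQ)
        track.append(Message('note_on', channel=9, note=note, velocity=100, time=tick - last))
        track.append(Message('note_off', channel=9, note=note, velocity=0, time=PPQ // 8))
        last = tick + PPQ // 8
    return track

mid.tracks.append(drum_track('Kick', 36, [0, 1, 2, 3]))          # four on the floor
mid.tracks.append(drum_track('Hat', 42, [0.5, 1.5, 2.5, 3.5]))   # offbeat hats
mid.save('template.mid')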

* Optimize your computer. Here are seven tips for a happier Macintosh:

Take out any Inits and Cdevs that aren't absolutely essential.

Use plenty of RAM. Sequences don't use up a lot of RAM, but if you have several applications open or use MIDI Manager, more RAM seems to reduce crashing (and there's nothing like a crash to really screw up the creative process).

Rebuild your desktop and optimize your hard disk periodically.

Go into the control panel and allocate 128 or 256K of RAM to the RAM cache.

For some older models, memory cache cards can speed up calculation-intensive operations (like global transposition of multiple tracks). If sequencing is part of your job, the time savings may justify the cost of the card.

Use the largest screen monitor you can afford. Every time you have to use a mouse to mess around with scrolling through windows, you're not making music.

* Practice reckless sequencing. I was always real concerned about making my cuts and pastes just right, fixing things as I went along, and so on. Wrong! Now I slash and burn instead of cut and paste, create lots of copies and work with them just in case I get carried away and obliterate some cool solo, and--at least during the recording process--don't worry too much about accuracy (things can always be fixed when editing/mixing). To prevent losing something by accident, save often and under different names (SONG.1, SONG.2, SONG.3, etc.). However, if you've been following my advice about copying original parts to create lots of copy tracks within a single sequence, you'll probably be able to get back any data you need without digging through older versions. (After the song is done, seek out and destroy the old versions. Otherwise you may come back at a later date and not remember which was the "real" version.)

Reckless sequencing is more than speed, though. Be adventurous! Remember, it's only data, and most computers have an Undo command. It has been said that art is enamored of chance, so try taking a chance. Stuck for an intro? Swipe some of the chorus, transpose it, run it through a generated sequence option, and see what happens. Check out some key modulation. Apply some goofy controller "grooves."

That's probably enough for now. The main point is this: if every time you sat down at the piano you had to think, "hmmm, the index finger goes here, and the middle finger goes here...oh yes, time to press the pedal," you wouldn't have too much fun, and you'd probably never get a song written. Practice sequencing until it becomes second nature, and use the technology to work for you, not against you. It's a sure cure for MIDI malaise.

Timing: The Graphic Details

There's been a lot of talk about timing stability, delays, jitters, and other byproducts of the sequencing process. These anomalies may last only milliseconds, yet they can alter (or even damage) a sequence's feel. It's time to tackle some of these issues head-on, starting with where the sound ends up--the synthesizer, sampler, or drum box.

Not all synths and samplers process MIDI messages with equal efficiency, which can make a big difference in the sound and rhythm of your sequence. If you can identify these timing differences, you may be able to shift tracks by an equal and opposite amount to put a track right on the beat. If you then want to lead or lag a track or specific notes for specific effects, fine--but at least everything will start off from a common timing base.

There are a lot of ways to determine timing stability. The easiest is to use your ears--in most of my timing experiments, I've found that the measurements merely confirm what musicians with good ears can sense anyway. Yet even the best of ears, and the best of test equipment, can't catch everything. In checking out timing in my own setup, I found that taking a graphic approach revealed several aspects of timing I hadn't seen discussed before, the most significant being instruments with fixed and unalterable attack times.

If you want to see what's going on with your instrument timing, this article is for you. We'll cover how to test for synth delays with Sound Tools (or any sample editing program that can record at least 10 or 20 seconds of stereo samples and display both channels simultaneously on-screen) and a stable sequencer, then tell how to calculate the clock pulse shifts needed to sync an instrument with other instruments in a sequence.

The Test Setup

The synth timing test setup (Fig. 1) used Sound Tools, an Alesis MMT-8 sequencer, and the instrument under test.

Fig. 1. MIDI Timing test setup. The MMT-8 provides the reference click as well as the MIDI triggers.


The MMT-8 served as the source of the reference click as well as the eighth-note MIDI triggers that were used to drive the instrument under test. To check the accuracy and consistency of the MMT-8's click, I recorded it into Sound Tools for about a minute, and measured the number of milliseconds between each click. At a nominal 120 bpm, the eighth-note clicks were virtually always 248.7 ms apart (at exactly 120 bpm they would be 250 ms apart; 248.7 ms works out to 120.62 bpm). To check that the MIDI out was equally consistent, I drove an SR-16 (which showed a constant and very short MIDI delay of around 1.7 ms in Sound Tools) and measured the period between drum hits; this was also solid, meaning that the MMT-8's MIDI timing was stable as well.

The MMT-8's MIDI out drove the test device's MIDI in. The device's sound was programmed to have the fastest attack time possible, a relatively high pitch, and no effects. MIDI was always in poly mode.

The MMT-8 click and the synth/sampler sound were fed into separate channels in Sound Tools' Pro I/O interface. To do a test, I'd start the MMT-8, then put Sound Tools into record. The result: a file that showed, on-screen, the MMT-8 reference click on one channel and the synth or sampler sound on the other, making it easy to compare the timing offset between the two attacks.

Measuring Delay

Fig. 2 shows a test file up close. The bottom channel shows the MMT-8 click; the top, an initialized patch from a Peavey DPM 3 SE. Clicking the mouse at the beginning of the MMT-8 click and dragging to the start of the DPM 3 SE sound gives the delay reading, in milliseconds, between the MMT-8 click and the onset of the DPM note (in this case, 5.4 ms, as shown in the box to the right of the speaker/cursor icon).


Fig. 2. DPM 3 SE note-on delay compared to reference click. The left edge of the highlighted area marks the start of the click; the right edge, the onset of the DPM's audio output.

I worked my way down the file, measured each instance of delay, and took an average. With the DPM 3 SE, the delay ranged from 4.4 to 6.4 milliseconds (a 2 ms variation), and averaged a little under 5.5 ms. (Those of you who saw the article in the December 1991 issue of Keyboard on timing tests may recall that the variation figures for the DPM 3 SE--listed as "Standard Deviation" in the charts--were different from the ones I achieved. This is because my figures show the maximum amount of variation measured over the course of the test, whereas Keyboard's figures described the range around the average delay within which most of the variation resides.)
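If you'd rather not click-and-drag through a minute of audio, the same measurement can be automated. A sketch using Python with the numpy and soundfile packages (the file name and threshold are assumptions; record the click on the left channel and the synth on the right):

import numpy as np
import soundfile as sf

def onsets(channel, sr, thresh=0.1, guard_s=0.1):
    # Return the sample index of each attack: the first point where the
    # signal exceeds thresh, ignoring re-triggers within guard_s seconds.
    hits, last, guard = [], -10**9, int(guard_s * sr)
    for i in np.flatnonzero(np.abs(channel) > thresh):
        if i - last >= guard:
            hits.append(i)
            last = i
    return hits

data, sr = sf.read('timing_test.wav')  # stereo: click left, synth right
clicks, synth = onsets(data[:, 0], sr), onsets(data[:, 1], sr)
delays = [(s - c) / sr * 1000 for c, s in zip(clicks, synth)]
print(f"average {np.mean(delays):.1f} ms, spread {max(delays) - min(delays):.1f} ms")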

Once the DPM kicked in, the attack time was instantaneous, which I related to subjectively as a "punchy" sound. But this wasn't the case with some other instruments I tested. For example, my vintage Oberheim OB-8 (Fig. 3) exhibits quite a wide delay range (5.5 ms to 15.5 ms), and averaged 11.3 milliseconds of delay.

However, note the attack time before the signal reaches full strength--even though there was no programmed attack time. This attack, a little under 2 ms, was enough to remove much of the sound's percussive punch.

Fig. 3. OB-8 delay. Even though no attack time was programmed, note the 2ms attack time (shown at the upper right as the first two low-level iterations of the waveform) before the signal reaches its maximum level.


I then tested a bunch of other gear in my studio. Three exhibited attack times (see Fig. 4) even with no attack time programmed.

Figure 4.

Here's the summary:

Device                 Average Delay (ms)   Variation
Alesis SR-16           1.7                  <0.1 ms
Ensoniq EPS 16 Plus    3.3                  0.1 ms
Yamaha TX81Z           4.3                  2.3 ms
Yamaha TG55            4.7*                 0.7 ms
Peavey DPM 3 SE        5.5                  2.0 ms
Kawai K3               8.6                  4.8 ms
Oberheim OB-8          13.1*                10.0 ms
E-mu Emulator II       15.4*                1.6 ms

* includes non-removable attack time. The TG55 averaged 3.7 ms of delay. However, because of the 1 ms attack time, it takes essentially 4.7 ms for the TG55 to reach full volume after a note on. The Emulator II attack time is around 5 ms.

Determining Track Shifts

To compensate for timing differences, you need to know how many "ticks" to shift a track. This depends on the tempo of the tune and the resolution of your sequencer. The formula is:

(60,000 / Tempo) / Sequencer Resolution in PPQ = milliseconds per tick

For example, assume a sequence tempo of 120 bpm and a sequencer with 240 ppq resolution. Each tick equals (60,000 / 120 bpm) / 240 ppq = 2.08 ms. Thus, shifting a track by 1 tick moves it 2.08 ms. At 96 ppq, each tick would equal 5.2 ms.
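The same arithmetic, as a couple of Python helpers (the "target" figure is whatever default delay you settle on, as discussed below):

def ms_per_tick(bpm, ppq):
    # (60,000 / tempo) / resolution = milliseconds per tick
    return (60000.0 / bpm) / ppq

def ticks_to_shift(device_delay_ms, target_ms, bpm, ppq):
    # How many ticks late to shift a fast instrument so its effective
    # delay lands near the chosen default.
    return round((target_ms - device_delay_ms) / ms_per_tick(bpm, ppq))

print(ms_per_tick(120, 240))               # 2.08 ms per tick
print(ticks_to_shift(1.7, 5.0, 120, 240))  # SR-16: shift 2 ticks late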

What Does It All Mean?

Let's check out some strategies for dealing with the timing delays in a sequencing environment. For illustration, let's say our song is running at 120 bpm and our sequencer has 240 ppq resolution.

The first task is to try to equalize the timing of all the instruments by shifting their tracks. Tests of my system showed that several of the instruments had MIDI delays of about 5 ms, so I chose that as the default amount of delay to which other instrument delays would be referenced.

Those instruments whose delays were close to 5 ms were not shifted at all. Those with delays greater than 5 ms were shifted by the appropriate number of clock ticks so that their delays were in line with the 5ms default. Compensating for the delay in my Alesis SR-16 drum machine is music-dependent. I measured its delay as 1.7 ms, so delaying it by one clock pulse puts its delay at 3.78 ms--close to 5 ms but still ahead of the beat. A second option is to delay it by two clock pulses and place the drums a tiny bit behind the beat at 5.86 ms.

Of course, track-shifting is only a partial solution for those instruments, such as the OB-8 and the Kawai K3, that have considerable variation in their delays. It will be tough to make them feel right on the beat. Instruments like these are best suited for pads and other non-rhythmic sounds.

Instruments with long attack times pose a different problem. Because of the E-II's attack time, even though you can shift its track to compensate for delay it's still not particularly well-suited for percussion; generally speaking, only those instruments with instantaneous attack times are good candidates for percussive sounds.

In Closing

I don't want to appear too picky about timings; sound moves at roughly 1 foot per millisecond, so a 5 ms change theoretically affects a track about as much as moving an amp five feet further behind the drummer. Yet timing differences can be significant. Before I tested the instruments in my setup, I would often delay SR-16 tracks to make them "sound right," and wonder why I never seemed to need to "humanize" the OB-8 or K3 (now I know--they vary all over the place anyway). And I had often noticed that the E-II was late, although I never really knew by how much. But with my test results in hand, instead of going through time-consuming trial-and-error adjustments (and wondering whether I was really hearing a difference or not), I can now sync up everything in seconds, and know which synths are best suited for tight timing and which should be used for background pads.

Remember too, that depending on the type of music you're doing, it's probably not worth worrying too much about small timing differences. It takes about 1ms just to send a single MIDI message; start generating multiple notes and lots of controller data, and the milliseconds really add up. And timing variations elsewhere in the system will likely mask any small differences between synthesizers anyway.

Acknowledgements: I'd like to express my gratitude to Michael Stewart (Digidesign), James Chandler (Jim's Software, and the designer of a very useful stand-alone MIDI delay timing test apparatus), Marius Perron (Jeanius Electronics), and Larry Fast (Synergy) for sharing the results of their investigations and insights into timing over the years. Their ideas really helped me along.

Straight Talk About Shifty Tempos

You've quantized, edited, transposed, and scaled. Your velocities are trimmed, your controllers controlled, and you've even shifted some note timings around to add a more human feel. Still, your song isn't finished yet--unless you've tweaked the tempo track.

This is because sequencers and drum machines produce metronomic tempos, but humans don't. Subtle tempo changes, inserted over several measures or just in selected parts of individual measures, can build anticipation, humanize a tune, change moods, and add a certain spark to a tune that makes it more interesting. Some examples? Sure. How about...

* Compensating for drag. Sometimes different sections of a tune will sound like they drag (even though there's a consistent tempo throughout the tune) if the sections have different rhythms. Here's a simple example.

Program eight measures or so of a drum part at 120 BPM, where the kick drum hits on every beat, and the closed high hat on every offbeat:

This gives a light, bouncy feel, as found in lots of "afropop" and world beat-style tunes. Now follow that with eight measures of a more rock-oriented drum part, also at 120 BPM, where both the kick and closed high hat fall on every beat:

Finally, follow the rock part with eight more measures of the afropop part, again at 120 BPM. If you listen to the three parts consecutively, the second part may seem to drag in terms of feel (although not necessarily in terms of tempo) compared to the first part. The third part will probably feel faster in relation to the second.

Now increase the tempo of only the second rock section by 1 BPM, and play all three sections consecutively. You'll notice a definite difference in feel compared to when all three parts were set to the same tempo. It's a matter of personal preference which feel you like better, but the one with the changed tempo often sounds the most appropriate.

This is just one example; there are other situations where a part of a tune will seem to drag or rush. Try the 1 BPM solution and see if this fixes the problem.
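For sequencers that store the tempo map in the file, this kind of edit is easy to script, too. A sketch of the three-section example using Python's mido package (the file name is arbitrary):

import mido
from mido import MidiFile, MidiTrack, MetaMessage

PPQ = 480
BARS8 = 8 * 4 * PPQ  # eight 4/4 measures, in ticks

mid = MidiFile(type=1, ticks_per_beat=PPQ)
tempo = MidiTrack()
mid.tracks.append(tempo)
tempo.append(MetaMessage('set_tempo', tempo=mido.bpm2tempo(120), time=0))      # afropop section
tempo.append(MetaMessage('set_tempo', tempo=mido.bpm2tempo(121), time=BARS8))  # rock section, +1 BPM
tempo.append(MetaMessage('set_tempo', tempo=mido.bpm2tempo(120), time=BARS8))  # back to afropop
mid.save('tempo_map.mid')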

* Boosting a song's energy level. Increasing tempo slightly is the timing-oriented equivalent of modulating upward by a semitone since both jack up the energy level. For example, assume a 16-bar solo passage where one instrument takes an 8-bar solo, then a second instrument takes the next 8 bars. Increase the tempo for the second part by 1 BPM and--especially if the second part is accompanied by a key modulation--the solo will take off to another level.

After you've upped the tempo, the question then becomes whether to maintain that tempo or drop back when the next section begins. If you maintain the tempo, a quality of tension remains. If you drop back, there's a feeling of release. The greater the amount of tempo change, the more pronounced the effect.

* Leading from one section to another. The following technique is good for transitioning from one part of a tune to another (e.g., verse to chorus, chorus to solo, chorus to verse, etc.). It involves dropping the tempo by 1 BPM halfway through the measure before the next part, then resuming the normal tempo halfway through the next measure:

This raises the anticipation level before the next part appears, since the slight slowdown prepares the listener for the fact that something is changing. Increasing the tempo after the next part begins provides a fairly smooth change from tension to release.

However, you might want a more drastic change in tension, accompanied with a quicker return to the normal tempo. A slight variation on the above works well:

In this example, the tempo drops 1 BPM at the third beat of the measure before the next part, then drops 1 more BPM at the fourth beat. If there's a drum fill going on here, the effect is very cool, particularly with rolls--you're pole-vaulted right into the next section. Since the next section resumes the normal tempo immediately, you're snapped back into the flow of the song.

I often use the first approach with verses and the second for choruses or solos, but you really have to judge each case with your ears and program appropriately.

* Enhancing pauses. Any time there's a "dramatic pause" in a song, a tempo change can reinforce it. For example, I was recently working on a fairly straight-ahead dance tune where after the verse, there was a sparse 2-bar pause with only voice, bass, and a little vibes. This was the high point of the song, where the singers sang the title. To give them more time to really caress the words, I dropped the first measure's tempo by 1 BPM and the second measure by 2 BPM. Because there wasn't any steady rhythm in the background, the effect was of a brief elongation of time that really emphasized the words.

* Adding a breather. Sometimes you'll wind up some fabulous solo, and it doesn't seem quite right to immediately proceed to the next measure. In this case, it's a good idea to give just a tiny pause so that the listener can "reboot" between leaving the solo and starting the next part. A drastic drop in tempo for a very short period can add such a pause:

Insert this change between notes, or you'll have some weird timing distortions on any notes that play during the tempo drop (sustaining notes are not a problem).

Another type of common pause is a "false ending," where something sustains for a measure or two before the main melody or hook returns. Inserting a drastic, short tempo drop just before the return of the next part adds a further element of surprise because the brain expects the tune to return on the beat; when it comes in just a fraction of a second late, there's an added element of interest.

(Those of you who have a copy of my "Forward Motion" CD might want to check out the tune "Paradise," since the tempo drops just a bit before the orchestral bass drum returns after the sustaining voices.)

* Inserting spaces other than a full measure into a song. Most sequencers let you insert measures into a song, but few let you insert spaces smaller than a measure, such as an eighth note. The need for this arose on a tune that started with a particular melodic line. During the mix, it seemed like a good idea to add some environmental sounds to set the mood prior to the melody line coming in.

Unfortunately, when the melody started, the transition between the sound effects and tune was too sudden. So, I decided to add pauses between segments of the melody line to break it down into multiple themes separated by environmental sounds.

Most of these pauses needed to be much less than a measure, so I thought I was out of luck. But inserting drastic downward tempo changes between notes slowed down the sequence enough at those points to give the effect of adding space. (Incidentally, I used this technique at the beginning of "Oasis" on the above-mentioned CD.)

Experimentation Time

The more I work with tempo shifting, the more it seems like a necessary final step before considering a sequence done. Sure, a sequence can sound okay when playing back at a constant tempo; but try some of these tricks and see if you don't agree that occasional tempo changes can add vitality and interest to a tune.

The Lone Fader

Many sequencer programs include software faders, which are onscreen representations of mechanical slide pots. You generally assign these faders to various MIDI controllers, and record controller changes by manipulating the "virtual faders" with a mouse. This is particularly handy for automated mixdown using controller 7, since the fader action more closely resembles that of a traditional mixing fader--with one very major exception.

The basic problem is that controlling fader motion with a mouse can feel awkward enough to turn a virtual fader into Darth Fader; the experience just doesn't compare to moving a real mechanical fader with a long throw and smooth feel. Companies like JL Cooper and Peavey have recognized this and designed hardware fader boxes (the FaderMaster and PC1600, respectively) where each fader can be programmed to produce a specified type of MIDI data. This approach also offers the benefit of letting you adjust several parameters at once, as opposed to the one-fader-at-a-time bottleneck of mouse movement.

Yet a fader box may be overkill when all you want to do is mix a track or two at a time, or if you're on a tight budget. Fortunately, there's an amazingly simple solution. Meet The Lone Fader, a hardware fader that can be yours for a song--and just a little bit of soldering.

Let Your Keyboards Do The Thinking

Building a fader that actually generates MIDI data is relatively complex, but we can take a shortcut by using the "brains" of an existing keyboard. The Lone Fader works because several keyboards include a control pedal jack; moving a pedal plugged into this jack creates MIDI controller data (usually either controller 7 or controller 4) at the keyboard's MIDI out. With some keyboards, this data can be reassigned to a different controller number. If you don't have a keyboard with these capabilities, Anatek's Pocket Pedal (described later) is a relatively inexpensive accessory box that can also convert foot pedal motion into controller data.

Don't worry--I'm not going to suggest mixing with your feet, but I do suggest building an utterly simple project (see Fig. 1) that consists of the best 10K ohm fader you can afford and a stereo jack. Run a stereo cord from the fader's jack to the keyboard's control pedal input or the Pocket Pedal's pedal jack, and when you move the fader, it will do whatever the footpedal would normally do.

Figure 1: The Lone Fader

Construction is simple to the point of ennui. You can use any 10K linear taper slide pot; audio taper may give a better feel in some instances, but linear is more suited to controlling parameters such as LFO depth. Since slide pots are a hassle to mount (making slots in metal is never fun), you might also consider using a traditional rotary pot with an oversize knob. If the knob is big enough, you'll even get better apparent resolution than with a fader.

Case Histories

I tested this circuit with an E-mu Emulator II, Ensoniq EPS, Ensoniq VFX-SD, and Peavey DPM 3SE. Instructions are included for these keyboards; if you're using a different keyboard, look through its manual to see whether operation is similar to any of the examples below.

EPS and VFX-SD: Plug The Lone Fader into the pedal jack. Varying the fader will produce controller 7 data on the selected base MIDI channel.

The EPS gives you another option if you want the EPS to receive multi-timbrally yet also transmit mixing moves. On the Edit MIDI page, set Transmit to Instrument; the Base Channel setting doesn't matter. Create an Instrument, and in the Edit Instrument page, set the MIDI Output channel to the channel over which you want to transmit the volume data.

Emulator II: Plug The Lone Fader into the A/D jack. Select the Preset Definition module, then select MIDI setup (30). Use slider A to select the Pedal> page, and assign the desired controller (00-31, channel pressure, or pitch bend). Varying the fader will produce controller data as assigned, on the base MIDI channel.

DPM 3: Plug The Lone Fader into the Control Voltage jack. Access the Controls page of the master menu, and assign Pedal to either Volume (to generate controller 7 data) or XCtrl (assignable to any controller number from 1 to 120 except 7).

More Is Sometimes More

Being able to vary more than one fader at a time is helpful, but whether that's physically possible depends on your keyboard. With the E-II, you can assign the mod wheel and pitch bend to any controller, allowing for such tricks as mixing levels with The Lone Fader and panning with the mod wheel. The DPM 3 SE includes two front panel faders, for Volume and Data. Volume is permanently assigned to controller 7, but the Data control, like the pedal, can be assigned to any controller. This allows for real time control of three different parameters.

If You Don't Want To Use A Keyboard...

The Lone Fader can also plug into the Anatek Pocket Pedal. This little box converts pedal motion to controller data--pitch bend, modulation (1), volume (7), or portamento time (5) messages--over any of the 16 MIDI channels. Internally, its circuitry uses the resistance between the tip and ring connections to set a one-shot time. As a result, the fader requires connections only to the hot and wiper; ground is not necessary, but leaving it connected works fine too.

Hi Yo Silver!

Well, it's time to stop horsing around, and go trigger some MIDI events. Try putting together The Lone Fader and see if it doesn't make dealing with software faders just a bit more pleasant and intuitive.

The Mutewriter

If you synchronize sequencers with tape recorders, you may have already discovered the joys of MIDI-controlled automated mixing. Several hardware boxes (such as the Niche ACM, which inserts between a tape recorder and mixer or in the mixer's channel inserts) let you adjust the level of each audio channel with MIDI messages.

Devices like JL Cooper's FaderMaster, The Lone Fader (see the previous article), or the programmable sliders in sequencer programs like Vision, Performer, Cubase, Logic, Metro, etc., provide interfaces designed for MIDI mixing. However, in many cases what you really need is to do mutes, not a full mix. This is particularly true if you want a "lean" sequence that's not so stuffed with data as to cause timing slop. Unfortunately, entering mute-on and mute-off points in the sequencer by hand can be a time-consuming process ("oops! missed where the vocal came in, better rewind...").

MuteWriter To The Rescue

This simple circuit listens to the audio signal from a tape track. When audio appears, MuteWriter generates an "on" signal, and when the audio goes away, the circuit generates an "off." MuteWriter does not actually generate the MIDI data; to simplify construction, MuteWriter interfaces with any keyboard that includes a control pedal jack capable of responding to varying voltage levels, as well as with CV/MIDI converters that produce controller outputs. Varying the voltage to the keyboard's control pedal jack, usually from 0 to +5V or 0 to +10V, generates MIDI controller data (typically controller 7 or controller 4) at the keyboard's MIDI out. Some keyboards let you assign this data to other controllers as well.

To use MuteWriter you would sync the sequencer to tape, roll tape, and record data into the sequencer from the keyboard doing the MuteWriter-to-MIDI translation. You need to make a separate pass for each track requiring mute data. If the MIDI device wants to see a controller other than 7 but that's all your keyboard generates, no sweat--record the changes anyway, and change the controller number within the sequence.

How It Works

The first op amp is a comparator, with its threshold (sensitivity) set by R8. Q1 splits the comparator output into two out-of-phase signals, whose negative portions are removed by D1 and D2. R14 and R15 then sum the remaining positive signals; when audio is present, Q2 turns on and the indicator LED lights. Meanwhile, the output op amp switches from 0 volts to the op amp's maximum output level. When the audio goes away, the LED turns off, and the output drops back to 0 volts (R17 adds a slight amount of hysteresis to inhibit false triggering).
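In software terms, the same behavior reduces to a peak follower driving a gate with hysteresis. Here's a purely illustrative Python analogy of what the hardware does (the sample array and thresholds are assumptions, not part of the circuit):

def mute_events(samples, sr, thresh=0.05, release=0.05):
    # Follow the peak envelope of the audio and emit (seconds, value)
    # pairs: 127 when audio appears, 0 when it decays away. The release
    # threshold sits below the attack threshold to prevent chatter,
    # much like the hysteresis R17 adds in the circuit.
    events, env, gate = [], 0.0, False
    fall = 1.0 - 1.0 / (release * sr)  # per-sample envelope decay
    for i, s in enumerate(samples):
        env = max(abs(s), env * fall)  # simple peak follower
        if not gate and env > thresh:
            events.append((i / sr, 127))   # un-mute
            gate = True
        elif gate and env < thresh * 0.5:  # hysteresis
            events.append((i / sr, 0))     # mute
            gate = False
    return events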

Construction And Hookup

I built the prototype on a circuit board so ratty, and with so many dangling wires, that if nothing else I can attest that this circuit is fairly non-critical. It requires a bipolar, +/-15V power supply. Each op amp can be half of a general-purpose dual compensated unit like the 4558, or a pair of single op amps (like the 741) will do. You can use more expensive parts, but they won't really improve the performance.

You may want to parallel another input jack with J1 so that you can plug your audio output into J1 and run a cord from the new jack to the mixer input. J2 goes to your keyboard's control voltage/pedal input. I recommend using a stereo cable even though J2 is mono; in a pinch a mono cord will usually work.

Trimming And Tweaking

I tested the MuteWriter with several keyboards. With the Ensoniq EPS and VFX-sd, MuteWriter will produce controller 7 data on the selected base MIDI channel. If you want the EPS to receive multi-timbrally yet also transmit mixing moves, set Transmit to Instrument on the Edit MIDI page (the Base Channel setting doesn't matter). Create an Instrument, and in the Edit Instrument page, set the MIDI Output channel to the channel over which you want to transmit the volume data.

With the DPM 3, the Controls page of the master menu lets you assign the signal at the pedal input to Volume (which generates controller 7 data) or to XCtrl, which can generate any controller number from 1 to 120.

Regarding E-mu, the following relates to the Emulator II, but the procedure is similar for other E-mu keyboards such as Emax. Select MIDI Setup in the Preset Definition module. Use slider A to select the Pedal> page, then assign the desired controller (00-31, channel pressure, or pitch bend). MuteWriter will produce controller data as assigned, on the base MIDI channel.

There are two MuteWriter controls: Threshold and Output. Start with the output about halfway up. If this doesn't let the keyboard produce a maximum controller value, turn up the output. This setting is not too critical. Also note that you can adjust the output lower if you want the maximum controller value to be less than 127.

Threshold determines MuteWriter's sensitivity and should be set the same way you would set a noise gate threshold--as sensitive as possible, consistent with doing the job and not getting false triggering. With signals that come in and out cleanly, false triggering will seldom be a problem; however, signals that fade out slowly or vary rapidly can cause MuteWriter to write a quick series of mute on and mute off messages. In a way, though, this is almost an advantage since it shows the best places to draw fade ins and fade outs. Call up the sequencer's graphic editing options, locate the region where the false triggering starts and ends, then draw in the appropriate type of fade to smooth things out.

Something Shifty

MuteWriter has an attack time of around 25 ms. This reduces ripple and false triggering, but can cut off the beginning of percussive sounds. No problem: use your sequencer's time or track shift option to move MuteWriter's controller messages 25 ms ahead.
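If your sequencer shifts in ticks rather than milliseconds, convert first. A quick sketch (event dicts with absolute 'tick' values are hypothetical):

def advance_events(events, ms, bpm, ppq):
    # Pull controller events earlier by `ms` milliseconds to compensate
    # for MuteWriter's ~25 ms attack time.
    shift = round(ms / ((60000.0 / bpm) / ppq))
    return [{**ev, 'tick': max(0, ev['tick'] - shift)} for ev in events]

# At 120 bpm and 480 ppq, 25 ms works out to 24 ticks.
print(round(25 / ((60000.0 / 120) / 480)))  # 24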

Those of you who have read this far must really be into it, so here's some fun for the hardcore. Trigger MuteWriter with audio from a microphone, drum machine, or other audio source and use this to control the level of a different instrument. This is similar to a noise gate's "key" function, or a primitive type of envelope following, and can produce very interesting results.

Parts List

Resistors (1/4 watt, 5% tolerance)

R1 1k

R2 1k5 (1.5k)

R3 2k2 (2.2k)

R4, R5 2k7 (2.7k)

R6, R7 10k

R8, R9 10k linear taper pot

R10 56k

R11-R15 82k

R16 100k

R17 1M

Capacitors (minimum 30 working volts)

C1-C3 100nF (0.1uF)

Semiconductors

D1-D3 1N914 or equivalent diode

Q1, Q2 2N3904 or equivalent NPN transistor

IC1 4558, 5558, or 1458 dual op amp (see text)

Other Components

J1, J2 Mono open circuit 1/4" phone jack

Misc. Chassis, circuit board, wire, solder, etc.

