Diccionario de Música Electrónica
The A to Z of Computer Music (MusicRadar.com)
The A to Z of computer music: A
Part one of the ultimate guide to computer
production lingo, jargon and terminology
Computer Music | March 18, 2013, 15:42 UTC
Computer Music magazine has begun a quest to collate all the essential digital music production terms
- and they'll keep going forever (or until they run out of letters.)
A/B comparison
A system for comparing two different states of an effects plugin, assigned to slots called A and B.
Ableton Live
This popular music-making program (DAW) has recently reached version 9. Its popularity stems from novel features
including: Session View, enabling the user to 'jam' with fragments of ideas spontaneously rather than laying
them out on a strict timeline; easy MIDI control of practically everything on the screen; unusual, creative effects
plugins and flexible chaining; easy 'warping' of audio material, allowing you to combine loops of different
pitches/tempos, or even whole tracks; and an emphasis on live performance.
Initially popular with forward-thinking DJs and electronic musicians, Live is now suitable for all kinds of music
due to the addition of more 'traditional' DAW features.
To find out about the latest version of Live, check out our full review of Ableton Live 9, or for tips and tutorials,
check out The Ultimate Guide To Ableton Live.
ADC
Standing for Analogue-to-Digital Converter, this is a piece of circuitry in an audio interface that converts an
analogue signal (eg, from your microphone or guitar) into the digital signal used by your computer. The ADC's
opposite is the DAC, converting a digital signal to analogue for playback via speakers, headphones, etc.
ADSR envelope
The volume (amplitude) of a real instrument changes over time. This can be quite a complex curve (aka an
envelope), but for many sounds, it can be simplified to the parameters Attack, Decay, Sustain and Release:
The first phase of a note's sound is the Attack, be it the quick pluck of a virtual guitar string (pictured right) or
the gradual fade in of orchestral strings (left). Next comes Decay, which controls how quickly the sound level
falls to a stable state.
Sustain is the level at which a sound remains until the player ends the note; Release tells us how fast it fades
away once the player releases the note. In some instruments, the sustain level is not constant, such as the
guitar, where it fades slowly.
The ADSR system is often used in software instruments to allow the user to specify volume shaping to be
applied to the sound over time, such as in a synthesiser.
So, if you want a fast, snappy attack to your sound, turn down the Attack knob/slider to make it faster; turn it up
for a gradual fade in. Note that in this context, Sustain is usually a fixed, continuous level (unlike our guitar
example above.)
We've been talking about the amplitude envelope here, but ADSR envelopes can be applied to other
parameters, such as filter cutoff, for example. Alternative envelopes exist too - some have a Hold stage, or a
Delay stage before the Attack.
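To make the four stages concrete, here's a minimal sketch (in Python, with illustrative parameter names - not from any particular synth) of how a linear amplitude envelope could be generated:

```python
def adsr_envelope(attack, decay, sustain, release, note_len, sr=44100):
    """Linear ADSR amplitude envelope as a list of per-sample gain values.

    attack/decay/release are times in seconds, sustain is a level (0-1)
    and note_len is how long the note is held before release begins.
    """
    a = max(int(attack * sr), 1)
    d = max(int(decay * sr), 1)
    r = max(int(release * sr), 1)
    env = [i / a for i in range(a)]                        # Attack: 0 -> 1
    env += [1 - (1 - sustain) * i / d for i in range(d)]   # Decay: 1 -> sustain
    env += [sustain] * max(int(note_len * sr) - a - d, 0)  # Sustain: held level
    env += [sustain * (1 - i / r) for i in range(r + 1)]   # Release: -> 0
    return env

env = adsr_envelope(attack=0.01, decay=0.1, sustain=0.7, release=0.3, note_len=0.5)
```

Multiplying an oscillator's output sample-by-sample by `env` imposes the volume shape; real synths typically use exponential rather than linear segments.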
Algorithm
The underlying method(s) that software uses to process audio, as designed by the programmer/developer. The
algorithms that the programmer invents and employs are directly responsible for the resulting sonic quality of
their plugins.
Aliasing
A type of digital audio artefact, often manifested as high-frequency unmusical 'hashing', or tones that seem to
move down in pitch as the musical note moves up, and vice versa. Distortion and high synth notes are among
the most common sources of audible aliasing.
A typical technique for reducing aliasing is oversampling, and some software lets you set the amount of this;
higher values mean a greater CPU hit but less aliasing.
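The way an out-of-range tone 'reflects' back down can be shown numerically - a hypothetical sketch, assuming a 44.1kHz sample rate, where a 30kHz sine produces exactly the same samples as a (phase-inverted) 14.1kHz one:

```python
import math

sr = 44100             # sample rate
f_high = 30000.0       # above the Nyquist frequency (22050 Hz)
f_alias = sr - f_high  # folds back to 14100 Hz

# Sampling a 30 kHz sine at 44.1 kHz is indistinguishable from sampling
# an inverted 14.1 kHz sine: the tone has 'reflected' off Nyquist.
for n in range(100):
    hi = math.sin(2 * math.pi * f_high * n / sr)
    lo = -math.sin(2 * math.pi * f_alias * n / sr)
    assert abs(hi - lo) < 1e-9
```

This is why oversampling helps: running the process at a higher internal sample rate pushes Nyquist far above the audible range before downsampling again.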
All-pass filter
Unlike a low- or high-pass, these filters have no cutoff, passing all frequencies equally. While there's no cutting
or boosting going on in an all-pass filter, inserting one on a channel will alter the phase of different frequencies
by different amounts.
We can use all-pass filtering correctively, to better align multi-mic'ed material with phase issues, or creatively, to
create interesting phasing effects.
Amplitude
The amplitude of a signal typically refers to the level of its highest peak, which you can often see when
examining the waveform in your music program (many sample editors can find the precise peak/maximum
amplitude.) Note that a signal with a higher amplitude doesn't always sound louder to the human ear.
Amplitude modulation (AM)
Modulating the amplitude of a signal using an LFO is referred to as amplitude modulation. Slower LFO speeds
give rise to what is usually known as tremolo; faster speeds give gritty, grungy effects similar to ring modulation.
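A bare-bones tremolo of this kind could be sketched like so (function and parameter names are illustrative, not from any real plugin):

```python
import math

def tremolo(samples, rate_hz, depth, sr=44100):
    """Amplitude-modulate a signal with a sine LFO.

    depth 0..1: 0 leaves the signal untouched, 1 swings the gain
    all the way down to silence. Slow rates sound like tremolo;
    audio-rate LFOs start to resemble ring modulation.
    """
    out = []
    for n, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * n / sr)  # -1..1
        gain = 1.0 - depth * (0.5 + 0.5 * lfo)          # (1-depth)..1
        out.append(x * gain)
    return out

# 5 Hz tremolo at 50% depth on a one-second constant test signal
wet = tremolo([1.0] * 44100, rate_hz=5.0, depth=0.5)
```

Pushing `rate_hz` into the hundreds of hertz gives the grittier, ring-mod-like tones mentioned above.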
Amp simulator
Developed with guitarists in mind, amplifier simulators are effects that mimic the traditional hardware setup of a
guitar player. Most amp sims offer a mix-and-match selection of: effects pedals, amplifier heads, speaker
cabinets, microphones, and studio effects.
This offers an enormous degree of flexibility that is not possible with traditional guitar hardware. While they're
designed for use with guitars, you can use an amp sim as an effect to change the sound of any signal - try
synths, drums and vocals!
Analogue
Sounds in the real world and in electrical audio signals are analogue, ie, represented by a continuous 'curve',
as opposed to the individual samples that make up digital audio.
Analogue audio hardware is renowned for imparting such desirable sonic qualities as warmth, punch and so on,
and mimicking this in digital plugins using techniques like analogue modelling is very popular.
Arpeggiator
Using an arpeggiator, you can hold down the notes of a chord, and the computer will play an arpeggio based on
those notes. This is commonly used with synth sounds, and many synths have an arpeggiator built in, as do
many music programs (DAWs.)
Artefacts
Side-effects of digital audio processing can sometimes create unwanted but audible sounds, which we refer to
as artefacts. For instance, ever listened to a low-quality MP3 and heard swirling, bubbly sounds on top of the
music? Those are MP3 compression artefacts.
Artillery2 CM
This handy effects plugin from Sugar Bytes is armed to the teeth with modulation, and it's free as part of the CM
Plugins collection of software which comes with each issue of Computer Music magazine.
ASIO
A piece of software called a driver is required for your music-making program (DAW) to be able to send audio to
your audio interface (aka soundcard.) On Windows PCs, the standard so-called multimedia driver is not suitable
for audio work, as it can result in high latency (ie, a slow response, or audible 'delay' between playing a note
and hearing it back through the speakers.)
Steinberg's ASIO (Audio Stream Input/Output) is a driver standard designed for audio work, and most audio
interfaces come with an ASIO driver, so make sure it's installed and selected in your DAW's settings.
Alternatively, try the free ASIO4All driver.
Attack
The initial part of a sound - see ADSR envelope above.
Attenuation
A reduction in level/gain. The opposite of boost.
AU (Audio Units)
This is a Mac-only format for plugin instruments and effects. Apple developed the format, and their Logic and
GarageBand software works only with AU plugins. Some DAWs (eg, Ableton Live) can load AUs and other
formats too, such as Steinberg's popular VST format.
Audio interface
A piece of hardware used to get sound in and out of your computer, often with MIDI ports too. A dedicated audio
interface will give better audio quality than your computer's built-in audio, and it'll be less prone to glitches too.
Automation
Have you ever wanted to make a sound in your mix get louder at certain points? Or to make the filter cutoff of
your synth move around without you having to move the knob yourself during playback? This is what
automation is for, and most DAWs have it.
You can record movements in real time: enable automation "writing", then play the track and move the controls
in the way you want to hear them. The next time you play back the song, the controls will move about without
you having to touch them. The other way to do it is to manually edit the automation curve visually, allowing for
more precise results.
Auto-Tune
Immensely popular pitch-correction effect made by Antares, Auto-Tune processes vocals so that they are more
in tune. With extreme settings, this gives a distinctive robot-ish effect, used creatively by many pop and dance
artists.
Auto-Tune is so ubiquitous and popular that it has entered common vocabulary; pitch-corrected vocals may be
referred to as "Auto-Tuned", even if other software was used to achieve it.
Auxiliary send/bus/return
Auxiliary (or aux) sends are used to route an additional version of a signal elsewhere, to an auxiliary bus
(sometimes called an auxiliary return.) Typically, an effect such as a reverb is placed on the auxiliary bus. Thus
multiple sounds in the mix can be sent to the same reverb, and the overall level of the reverb controlled with the
aux bus level fader.
Aux sends/buses have other uses, such as creating headphone mixes for live musicians.
The A to Z of computer music: B
Our comprehensive round-up of production jargon
Computer Music | April 15, 2013, 11:43 UTC
Computer Music goes under the bonnet in this month's lexical supplement, demystifying yet more
digital audio bits and bobs.
Balance
Adjusting the balance of a stereo signal means changing the amplitude of the left and right signals. It can be
used to change the 'position' of an audio signal in the stereo field, though it takes its name from the corrective
use of giving each channel equal strength, to 'balance' and centralise the signal.
Officially, balance differs from panning in that the former will silence the right channel if panned hard-left, while
the latter will bring the two together in the left channel. In fact, the pan knobs in most DAWs and plugins actually
function like a balance control!
Balanced connections
A system using balanced connections (cables, equipment, input/output connections, etc) will be less
susceptible to unwanted interference such as mains hum and can maintain a clear audio signal over longer
distances than its unbalanced counterpart.
Band
A range of frequencies, eg, 120–200Hz. Some plugins divide the incoming signal into multiple bands for
processing separately before recombining them. The term 'band' is also used to refer to the filters in an
equaliser (EQ), whether they be bell/peak, low shelf, high shelf, etc. An EQ is often specified as having a
number of bands, indicating its flexibility.
Band-pass filter
A type of filter used to let through frequencies within a certain band, rejecting those outside it, similar to a low-pass and high-pass in series. There is a gradual roll-off of frequencies outside of the passed area, rather than an abrupt cutoff. Band-reject filters have the opposite frequency response, cutting out a band of frequencies.
Bank
Most plugins have a selection of presets: this 'bank' contains ready-made settings that can be good starting
points from which to create a customised sound. For VST plugins, a preset is usually stored as an FXP file, and
banks of presets are stored in the FXB format (though plugins may use their own custom formats too.) If you're
short on inspiration, preset banks can be bought in formats like FXB.
Bass
The lower end of the frequency spectrum; that is, the lowest sounds that we are capable of hearing. At their
very lowest, bass sounds are more felt than they are heard, and these frequencies are termed sub-bass. As a
rule of thumb, bass is anything up to about 250Hz, while sub-bass resides at 20–80Hz.
As an instrument, both a standard bass guitar and a standard double bass can play the notes E1–G4. The
intricacies of bass were explored in Computer Music issue 186's Bass cover feature.
Beat
In music theory: One whole unit of musical time. In 4/4 time, represented by a quarter-note (aka crotchet.) A
beat can be subdivided into two eighth-notes, four sixteenth-notes, etc.
In hip-hop: This is the instrumental track/music sans vocals, created by a hip-hop producer, sometimes known
as a beatmaker.
In drum-speak: An arrangement of percussive hits into one repeating pattern.
In acoustics: If two signals are close enough in frequency, we will perceive a 'throbbing' - or beating - sensation in amplitude as they fall in and out of phase. For example, signals of 328Hz and 332Hz will sound
like one signal of 330Hz, rising and falling in level four times per second (332-328=4Hz). These regular 'pulses'
in level are called beats. Beating is used creatively in unison synth sounds such as Reese basses, achieved by
using the synthesiser's unison detune features or simply detuning oscillators manually.
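The 328Hz/332Hz example above can be checked numerically - a small sketch that sums the two sines and compares the level near a reinforcement point (t=0) with the level near a cancellation point (t=1/8s, half a beat cycle later):

```python
import math

sr = 44100
f1, f2 = 328.0, 332.0  # the example pair: beats at f2 - f1 = 4 Hz

def mix(t):
    """Sum of the two sines at time t (seconds)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# Peak level in a 10 ms window near t = 0 (the sines reinforce) versus
# near t = 0.125 s, where the 4 Hz beat envelope passes through zero.
peak = max(abs(mix(n / sr)) for n in range(int(sr * 0.01)))
null = max(abs(mix(0.125 + n / sr)) for n in range(int(sr * 0.01)))
```

The sum is mathematically a 330Hz tone multiplied by a slow envelope, which is why detuned unison oscillators sound so animated.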
Beat-slicing
Taking a rhythmic piece of audio, then 'slicing' it up for rearrangement. The sliced beat can be processed to
change its tempo, arrangement, and sounds of its parts.
Traditionally, beat-slicing is done to drum or percussion loops, but it can be applied to guitar or synth parts,
vocals, or anything with enough of a rhythmic quality.
While beat-slicing can be done manually within a DAW, there are also methods for doing it automatically.
Ready-sliced material is available via the REX format, and some timestretching and pitchshifting algorithms are
essentially behind-the-scenes beat-slicers.
Bit
Digital data is composed of 1s and 0s. Each one is a bit, and a group of eight is called a byte. As an example, in
MIDI, a Note On message is sent using three bytes. The first byte is split into two: four bits to say 'Note on' and
four to show the channel number; the second byte is used to communicate which note; and the final byte
describes the MIDI velocity.
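The three-byte layout described above can be packed in a few lines of Python (the function name is illustrative):

```python
def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI Note On message.

    Status byte: the upper four bits 0b1001 mean 'Note On', and the
    lower four carry the channel (0-15). The two data bytes hold the
    note number and velocity, each limited to 7 bits (0-127).
    """
    status = 0x90 | (channel & 0x0F)
    return bytes([status, note & 0x7F, velocity & 0x7F])

msg = note_on(channel=0, note=60, velocity=100)  # middle C on channel 1
```

Note that data bytes never exceed 127: their top bit is reserved to distinguish them from status bytes.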
Bit depth
A measurement of the amplitude resolution of a digital audio signal. Lower bit depths mean there are fewer
possible amplitude values for each sample, and the mapping of desired sample values to the nearest possible
destination values can result in gritty, grungy distortion known as quantisation noise. This can be alleviated via
a process called dither, which, in simple terms, converts the distortion into (less obnoxious) background noise.
A higher bit depth means greater usable dynamic range and quieter quantisation noise (and/or dither.)
An audio signal with a high bit depth (pictured left) is a closer representation of the desired waveform (green
line) than a lower bit depth one (right.)
CD-quality audio is 16-bit. 24-bit audio is classed as 'professional standard', and any potential
quantisation/dither noise is practically inaudible (and indeed, is often quieter than the softest sounds that an
audio interface can actually reproduce.) DAWs and plugins generally use 32- or 64-bit floating point calculations
internally, to preserve audio fidelity throughout repeated complex calculations.
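As a rough sketch of where quantisation noise comes from, this hypothetical helper rounds a sample to the nearest level of a given bit depth - the leftover rounding error is the noise:

```python
def quantise(sample, bits):
    """Round a sample in -1..1 to the nearest level of an n-bit integer format.

    An n-bit format offers 2**n levels; coarser grids (lower bit depths)
    leave bigger rounding errors, heard as quantisation noise.
    """
    levels = 2 ** (bits - 1)  # levels available per polarity
    q = round(sample * levels) / levels
    return max(-1.0, min(q, 1.0))

x = 0.300001
err_16 = abs(quantise(x, 16) - x)  # tiny at CD-quality 16-bit...
err_4 = abs(quantise(x, 4) - x)    # ...far coarser at 4-bit
```

Each extra bit halves the maximum rounding error, which is where the familiar "roughly 6dB of dynamic range per bit" rule of thumb comes from.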
Bitcrusher
An effect that reduces bit depth and/or sample rate of a signal in order to create distortion. Here, degradation of
the signal is the goal, so no restorative functions are applied. What's left of the signal on output is a low-bit,
under-sampled (read: poorer quality) version, reminiscent of the sounds of '80s computers and music gear.
Boost
The opposite of attenuation in an equaliser. Boosting frequencies will increase their levels.
Bounce
Like a 'render' or 'export' command, this DAW function will play a selected region or entire project, and record
the results to a fresh wave file. The resulting audio is often placed back into the project on a new track.
In the early days of audio engineering, the limitations of recording devices with only a few tracks made
bouncing a very necessary part of the production process, summing a six-track mix down to a pair of stereo
tracks, for example, to free up the original six tracks for further recording.
Nowadays, this limitation is irrelevant, but bouncing is still used to:
Export a mixed track for collaboration or demoing purposes, as well as to produce a final export of a
completed mix.
Consolidate multiple takes across multiple tracks into one master take.
Take a snapshot of an effect, whether for a collaborator who doesn't have the effect installed, or for a fixed
copy of an instrument/effect that generates randomly.
Reduce CPU load.
BPM
Beats Per Minute, a measure of tempo. 120BPM equates to two beats per second.
Breakbeat
A drum beat, typically sampled from a vintage funk or rock track during a 'break' in which the other instruments
cease playing, leaving the drummer to strut his funky stuff. Breakbeats are a staple of many electronic genres,
and they can be sampled, processed and reprogrammed for use as new beats.
Breakdown
A part in the song with barer instrumentation, intended as a respite from the intensity of the main track, offering
a different mood. For instance, the drums and bass could be cut out, for a more musical, atmospheric section.
Brickwall limiter
Traditionally, a brickwall limiter is one with a high ratio (of at least 20:1, up to ∞:1) and a minimal attack time.
Once an amplitude threshold is set, any parts of our signal that are louder than this point will, very quickly, be
reduced to that amplitude. The brickwall limiter is intended to do its best to prevent the signal exceeding the
threshold, swiftly pulling it back down the instant that it does. This is often used to prevent overloading of
subsequent processes, and to catch sudden, loud bursts of sound that could damage speakers and ears.
In the world of digital audio, brickwall limiters are more literal: to qualify, they must not pass any signal at all
above the threshold. Such limiters are often used to reduce the peaks of a mix so the signal can then be turned
up louder without the peaks causing harsh digital clipping.
Buffer size
Digital audio systems use buffers to split the processing of audio signals into manageable chunks, helping to
maintain a constant signal that's free of glitches and dropouts. High (large) buffer sizes mean higher latency;
that is, the delay between input and output, manifested as an audible delay/echo between playing a note and
hearing it back through the speakers.
When recording MIDI or audio and monitoring through the DAW, it is important to use a low buffer size to
ensure low latency. Buffer size is set using your DAW's preferences or in your audio interface's settings. The
number and complexity of plugins used and the processing power of your system are two key factors (though
not the only ones) affecting the size of the buffer required for good audio performance.
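The relationship between buffer size and latency is simple arithmetic - a sketch with typical figures:

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """Minimum time taken to fill one audio buffer, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# Smaller buffers respond faster but work the CPU harder.
low = buffer_latency_ms(64, 44100)     # about 1.45 ms per buffer
high = buffer_latency_ms(1024, 44100)  # about 23.2 ms per buffer
```

Bear in mind this is only the per-buffer figure: real round-trip latency stacks input and output buffers plus converter and driver overhead, so the number you actually hear is larger.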
Build-up
Part of a song that builds up to something (usually 'the drop' or breakdown), adding intensity with elements
such as gradually opening filters, drum rolls, risers and FX.
Bus
An area on a mixer (real or virtual) that signals can be sent to. One use is that of auxiliary send/return buses.
Signals sent to these can then be sent through effects processors - handy if you want to apply the same effect
(reverb or delay, usually) to a number of tracks, then control the level of this effect via the auxiliary bus fader.
Tracks can also be bused directly to create a group bus (say, the tracks of a drum kit) that can then be treated
and processed as one.
The final destination for tracks in a mix is the master bus, where you can apply effects to the whole mix (eg, for
mastering purposes.)
Bypass
When you've got an effect on a track, you can activate the bypass to hear what it would sound like without the
effect applied.
The A to Z of computer music: C
Another instalment in our anthology of digital
production terminology
Computer Music | April 29, 2013, 13:26 UTC
We unleash a concentrated cavalcade of definitions, cluing you in on some of the finer points of MIDI,
filters and microphones.
Cardioid (pickup pattern)
A microphone with a cardioid pickup pattern is more sensitive to sound coming from directly ahead than from
behind it, as shown in the diagram above. In short, whatever you point the mic at comes out loudest, and
whatever's behind it is most strongly rejected.
Carrier signal
In applications of modulation such as AM/FM or a vocoder, a carrier signal is modulated by another signal (the
modulator), resulting in an output that responds to both signals. In a vocoder, the carrier signal is the sound you
want to process (typically a synth sound), while the modulator signal (usually a vocal) has the characteristics you
want to apply to the carrier.
CC
A MIDI CC (Control Change aka Continuous Controller) is a type of MIDI data that's used to control parameters
like modulation, volume, pan and sustain. When you see a MIDI controller with knobs and faders, those
generally send out MIDI CCs. Once you've set up your software to receive the corresponding CCs, you can use
the physical controls to manipulate things like volume or filter cutoff, and record your movements. MIDI CCs can
also be programmed using the mouse.
CD (compact disc)
CDs themselves are increasingly obsolete, but CD-quality audio is still very relevant, being the standard quality
for audio consumption. Specifically, we mean a 16-bit signal with a sample rate of 44,100Hz (44.1kHz).
Cent
A unit used to divide musical pitch. Semitone = 100 cents; Octave = 1200 cents.
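Because pitch is logarithmic, converting cents to a frequency ratio uses a power of two - a quick sketch:

```python
import math

def cents_to_ratio(cents):
    """Frequency ratio corresponding to a pitch interval in cents."""
    return 2 ** (cents / 1200)

def ratio_to_cents(ratio):
    """Inverse: pitch interval in cents for a frequency ratio."""
    return 1200 * math.log2(ratio)

a4 = 440.0
a_sharp = a4 * cents_to_ratio(100)  # one semitone up: ~466.16 Hz
octave = a4 * cents_to_ratio(1200)  # one octave up: 880 Hz
```

A pitch discrepancy of a few cents is generally inaudible, which is why fine-tune controls on synths are usually calibrated in cents.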
Channel (audio)
Audio signals are described as having a certain number of channels. For instance, mono audio has just one
channel; stereo has two; while surround sound audio might have 5.1 channels (five 'full range' and the '.1'
containing low-frequency information only, for the subwoofer.)
Channel (mixer)
In your music program (DAW,) audio from various sources (eg, recorded audio tracks, virtual instruments, real
instruments/mics connected to your audio interface) flows through channels in your DAW's virtual mixer,
ultimately arriving at a destination (typically a channel called the 'master', which is fed to your audio interface's
outputs.)
Each channel has its own settings, such as volume, panning, insert effects, send effects levels, and so on.
Several related channels may be routed to a group channel (aka group bus) for broader control.
The terms 'track' and 'channel' are often used interchangeably. While they are technically different, they're so
intrinsically related that it's rarely a source of confusion, and the context almost always makes the meaning
clear.
Channel (MIDI)
MIDI data can be sent on one of 16 channels. This was very necessary in the bad old days of hardware
because typical setups sent the same MIDI data to all devices. Your devices would then filter just the MIDI
notes (and other data) intended for them, eg, bass on channel 1, lead on channel 2, etc.
In modern DAWs, you can usually send MIDI to specific devices, so you may not actually need to worry about
channels at all.
As well as so-called Channel Messages (Note On, Note Off, Pitchbend, Control Change, etc), MIDI can also
send System Messages, which all MIDI devices in the setup should heed.
Chorus
This effect duplicates the audio signal, then delays one version by between 15 and 35 milliseconds, modulating
the time/tuning of the delayed signal to give an effect similar to unison, where multiple instruments play the
same thing. Chorus can be thickened further by using more of these delayed voices, each with slightly different
delay, modulation and panning values.
Clicks and pops
Digital clicks and pops can occur whenever there is an abrupt change in the signal level. This may happen if
you cut the start (or end) of a piece of audio at a point where the waveform is not at or near-zero. When played,
the signal will jump immediately from zero to the waveform's starting value, sometimes resulting in an audible
click. The solution is to make your cuts at zero-crossing points, and/or to use short fades and crossfades when
editing audio.
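A short edge fade of the kind described above takes only a few lines - a hypothetical sketch:

```python
def apply_fades(samples, fade_len=64):
    """Apply short linear fade-in/out to soften clicks at edit points.

    fade_len is in samples; 64 samples is about 1.5 ms at 44.1 kHz -
    short enough to be inaudible, long enough to smooth an abrupt cut.
    """
    out = list(samples)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        g = i / n
        out[i] *= g       # fade in from silence
        out[-1 - i] *= g  # fade out to silence
    return out

clip = apply_fades([1.0] * 1000)
```

The same idea underpins the automatic micro-fades some DAWs apply to every clip boundary.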
Clipping
If you've ever tried to push digital audio to its loudest, you'll have heard the distorted sound of the signal
clipping.
An audio interface can reproduce a maximum signal amplitude of 0dBFS (dB Full Scale). If the signal is pushed
louder than this on your DAW's master channel, the data for the top and bottom parts of it will be 'clipped' off
and lost - as pictured in the diagram above.
The same happens when audio is saved to an integer format such as 8-bit, 16-bit or 24-bit. Floating point
formats (eg, 32-bit float) can store data above 0dBFS, but it would still clip upon playback unless you reduce its
level first.
While individual audio tracks in a project may not be so loud as to clip, multiple tracks played together may
result in a rise in level to the point of clipping. Clipping may occur momentarily, as a fast, high peak that we may
not always notice. Fortunately, most DAWs indicate clipping with a red light in the mixer or transport. Clipping is
commonly prevented using a brickwall limiter.
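Hard clipping and a simple over-level detector can be sketched in a few lines (function names are illustrative):

```python
def hard_clip(samples, ceiling=1.0):
    """Flatten any samples exceeding the ceiling - the lost peaks are
    what's heard as digital clipping distortion."""
    return [max(-ceiling, min(x, ceiling)) for x in samples]

def count_clipped(samples, ceiling=1.0):
    """Rough clip detector, akin to a DAW mixer's red warning light."""
    return sum(1 for x in samples if abs(x) >= ceiling)

loud = [0.5, 1.3, -1.7, 0.9]  # two samples exceed the 0dBFS ceiling
safe = hard_clip(loud)
```

Note that a brickwall limiter avoids this flattening by turning the level down just before the peak, rather than chopping it off.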
Comb filter
When a signal is mixed with a slightly delayed version of itself, the result is a cancellation of certain frequencies
at regular intervals due to phase interaction. For example, delaying the signal by 1ms will create frequency
'dips' or 'notches' at 500Hz, 1500Hz, 2500Hz, 3500Hz and so on, creating the spectrum's resemblance to a
comb. The effect can be intensified by applying feedback to the delay.
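The notch positions quoted above follow directly from the maths: mixing a signal with a copy delayed by d seconds gives a magnitude response of 2|cos(pi*f*d)|, which is zero at odd multiples of 1/(2d). A quick numerical check:

```python
import math

def comb_magnitude(freq_hz, delay_ms=1.0):
    """Magnitude response of summing a signal with a delayed copy.

    |1 + e^(-j*2*pi*f*d)| = 2*|cos(pi*f*d)|, so with d = 1 ms the
    notches fall at 500, 1500, 2500 Hz... and the peaks in between.
    """
    d = delay_ms / 1000.0
    return 2.0 * abs(math.cos(math.pi * freq_hz * d))

notch = comb_magnitude(500.0)   # first notch: near-total cancellation
peak = comb_magnitude(1000.0)   # reinforcement: doubled level
```

Flangers and phasers are essentially this effect with the delay time (and hence the notch positions) swept by an LFO.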
Chord
Two or more notes played at the same time.
Combo input
Audio interfaces may be furnished with XLR inputs, jack inputs or a mixture of the two. A combo input does
exactly as you'd expect – it allows you to insert either of the two into one socket, though not at the same time,
of course.
Compression (codec)
Audio file formats like WAV/AIFF can offer pristine sound quality, but they can also be very large. For
convenience, audio is compressed into a smaller file size using codecs such as the ubiquitous MP3 and the
free-to-use Vorbis (aka Ogg Vorbis.)
These aim to remove parts of an audio signal that listeners would - in theory - not be able to hear anyway,
thereby reducing the space required to store the audio.
Naturally, the resulting decoded audio is not the same as the audio that went in; we call this lossy, as opposed
to the lossless nature of WAV/AIFF.
In practice, this aggressive removal of parts of the audio signal can be audible, and the quality of the
compressed audio depends mainly on the target bitrate (measured in kbps.)
A 128kbps MP3 is easy to spot, whereas a 320kbps one can be practically indistinguishable from the WAV source.
Beware of using these files for music-making, though, as processing can expose their hidden lack of quality.
Compressor
One of the most important audio processors to get your head round. Music is a mixture of quiet and loud parts
(dynamics) and, in simple terms, a compressor will act to reduce the difference between them, reducing the
dynamic range.
Use compression to:
Rein in a sloppy performance, bringing all the notes to similar levels
Add more perceived power to a signal
'Glue together' an entire mix or a sub-group of
tracks such as those of a drum kit
For more on the finer points of compression, check out Computer Music 170's huge compression feature. Or for
more background on dynamics processing, read the ultimate guide to effects: dynamics.
Condenser microphone
Rather than the moving magnetic parts of a dynamic microphone, a condenser (sometimes called a capacitor)
microphone uses two charged electric plates to create an electrical signal from sound. While generally more
expensive than a dynamic mic, a condenser performs better at higher frequencies, providing a lighter, airier
sound. Note that condenser mics must be used with a mixer or audio interface capable of providing phantom
power of 48V.
Convolution
Convolution allows us to take a sampled 'snapshot' of a process (ie, an effect) and apply its characteristic
sound to another signal. Think of it as a sort of sampler, but one that samples an effect rather than actual
sounds.
Note that convolution only works fully for so-called linear processes; non-linear aspects such as distortion,
dynamic response or modulation will not be captured. Typical uses of convolution are for emulating speaker
cabinets, EQ curves and the reverb of actual rooms. A captured 'snapshot' is called an impulse response.
Core Audio
The under-the-hood audio system in Mac OS X and iOS, this provides a built-in solution for smooth, low-latency
audio playback, MIDI, processing, etc.
CPU
The CPU (Central Processing Unit) is your computer's calculating brain, often referred to as the 'processor'. The
rate at which the CPU can perform calculations is given in GHz (billions per second.) Most modern CPUs
consist of a number of 'cores', essentially duplicating the processing circuitry to allow multiple calculations to be
performed at once.
Piling on too many plugins can tax your CPU, often indicated in your DAW by a CPU meter. If you max out your
CPU, you can expect the audio to break up and stutter as the computer fails to keep up. You can alleviate the
situation by disabling some of the plugins (just bypassing them doesn't always work) or freezing particularly
intensive tracks.
Crossfade
Just as the start or end of an audio sample can be faded in or out, two audio samples can be 'crossfaded' together,
bringing the volume of the first down as the volume of the second rises. This is good for seamless looping of
samples, avoiding clicks and pops that could occur between two parts.
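A basic linear crossfade might look like this (a sketch; real editors offer a choice of fade curves):

```python
def crossfade(a, b, length):
    """Linear crossfade: the tail of `a` fades out while the head of `b`
    fades in over `length` samples, overlapping the two."""
    faded = []
    for i in range(length):
        g = i / length  # fade-in gain for b; (1 - g) fades out a
        faded.append(a[len(a) - length + i] * (1 - g) + b[i] * g)
    return a[:-length] + faded + b[length:]

mixed = crossfade([1.0] * 100, [0.5] * 100, length=20)
```

Note that a linear crossfade can dip slightly in perceived level mid-fade; 'equal-power' crossfades use sine/cosine gain curves to keep the combined loudness steadier.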
Crossover
Splits a signal into two or more frequency bands that can then be processed separately before being
recombined. The split occurs around a designated crossover frequency.
Cutoff/centre frequency
The frequency at which a filter will come into effect. For instance, take the example of a high-pass filter that has
been set to 'cut off' at 800Hz. Regardless of how steep the filter's slope is, the cutoff frequency of 800Hz is the
frequency at which the signal has been reduced by 3dB.
The A to Z of computer music: D
Our guide to the language of digital music making
marches on
Computer Music | May 14, 2013, 10:20 UTC
Learn more computer music lingo as we explain principles, techniques and tools of the trade, this time
focusing on the letter D.
DAC
Audio signals in your computer (or any digital audio device) are shuttled around as digital data. In order for
them to be played back on speakers or headphones, or sent to devices like outboard effects or a mixer, they
need to be converted to an analogue electrical voltage. This is done by the Digital-to-Analogue Converter
(DAC) in your computer's audio interface (whether it's an external USB/FireWire unit or just the built-in
headphone socket.)
The 'opposite' of a DAC is an ADC, which converts analogue signals to digital.
DAW
Digital Audio Workstation. This is a digital system for recording, playing back and editing audio. That system
could be a dedicated piece of hardware (such as the hard disk recorder/mixer combos that were popular in the
'90s) or - much more likely these days - a computer-based setup.
For a computer to be used like this, it needs some DAW software, and today's DAW applications go far beyond
basic recording and editing, with features like effects, plugin support, virtual instruments and MIDI capability.
Popular DAW packages (eg, Logic, Sonar, Cubase, Ableton Live, etc) offer practically everything you need to
make a complete piece of music in any style.
Note that the term DAW has a dual meaning: some producers consider it to be your whole computer music setup (computer, audio interface, software, etc), while for others the software is the DAW. We stick to the (now prevalent) latter definition.
Decay
The tail end of any sound as it dies away to silence. Decay is also the 'D' in ADSR - a type of envelope found in synths and other musical devices.
Decibel (dB)
A measurement of the level of a signal relative to a reference level. Many types of signals can be measured in
decibels, but here we're talking about digital audio signals, and the reference level for this is 0dBFS - the
maximum a digital system can output (and input) before clipping occurs (note that clipping generally doesn't
occur inside the DAW due to the use of floating point signals for processing, which do not clip at 0dBFS.)
For instance, if a signal peaks at -12dB, its loudest peaks are 12dB below 0dBFS - increasing its level by, say,
15dB would cause it to exceed 0dBFS by 3dB, clipping the peaks of the waveform upon playback.
Note that an increase of 3dB results in a doubling of 'power' (decibels are a logarithmic unit), while boosting by
6dB doubles the 'height' of the waveform. Decibels are the usual unit for specifying cuts and boosts, often with
a - or + to indicate which (eg, -6dB is a 6dB cut.)
In filters, the dB/octave rating indicates how 'steeply' the frequencies in a signal are reduced beyond the cutoff
point. So, with a 12dB/octave filter, each octave beyond the cutoff point is reduced in level by a further 12dB.
Filters are sometimes instead specified as having a number of 'poles', where each pole equates to 6dB/octave, so a 24dB/octave filter is known as a 4-pole filter.
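The amplitude/power relationships above follow directly from the logarithmic definition of the decibel. A minimal sketch (the two standard conversion formulas, nothing specific to any plugin):

```python
import math

def amplitude_to_db(ratio):
    """Amplitude (voltage or sample-value) ratio to decibels."""
    return 20 * math.log10(ratio)

def power_to_db(ratio):
    """Power ratio to decibels."""
    return 10 * math.log10(ratio)
```

Doubling power gives roughly +3dB, doubling amplitude roughly +6dB, and a signal at a quarter of full-scale amplitude sits around -12dB.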
Decoding
When audio is encoded to MP3, OGG, FLAC or any other compressed format, it must be decoded back to normal uncompressed digital audio for playback.
The term is also used in reference to mid/side processing, where a stereo signal is converted (encoded) into mid (mono) and side (stereo) signals so that each can be processed individually, before being decoded back to stereo for playback.
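The mid/side encode/decode maths is just sums and differences. A minimal sketch (per-sample Python lists, purely for illustration):

```python
def ms_encode(left, right):
    """Stereo -> mid/side: mid is the (mono) average of the channels,
    side is half their difference."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Mid/side -> stereo: sum and difference restore left and right."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right
```

Encoding then decoding is transparent, and a signal whose left and right channels are identical produces an all-zero side channel - which is why mid/side is so useful for checking mono compatibility.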
De-esser
An effect used to reduce sibilant vocal sounds (such as 's' and 'f'.) Some de-essers reduce the entire signal's level when sibilant frequencies are present; others use EQ or multiband processing to cut only the sibilant frequencies themselves.
Delay
This effect delays the input signal then plays it back after a user-specified interval - an echo, in other words - with an optional feedback loop enabling the output to be fed back into the input for repeating delays that get quieter (decay) over time.
Common features are ping-pong mode, causing echoes to 'bounce' between the left and right speakers; filtering
in the feedback loop, so echoes become progressively darker, brighter, etc; host sync, for echoes that are
locked to the rhythm of your song; and multitap operation, which lets you run multiple delays of varying timings
at once.
Some delays mimic old-school hardware tape delays, where changing the delay time adjusted the speed of a
loop of tape, causing the pitch to change.
Delay can also refer to the latency in a digital recording/playback system.
Check out The ultimate guide to effects: delay for more.
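The delay-plus-feedback structure described above can be sketched in a handful of lines (a bare circular delay line in Python; parameter names are illustrative, not taken from any plugin):

```python
def feedback_delay(signal, delay_samples, feedback=0.5, mix=0.5):
    """Simple feedback delay: each output sample mixes the dry input
    with a delayed copy; the delayed copy is fed back into the delay
    line at 'feedback' gain, so repeats decay over time (keep
    feedback below 1.0, or the echoes grow instead of dying away)."""
    buffer = [0.0] * delay_samples   # circular delay line
    pos = 0
    out = []
    for x in signal:
        delayed = buffer[pos]
        buffer[pos] = x + delayed * feedback   # write input + feedback
        pos = (pos + 1) % delay_samples
        out.append(x * (1 - mix) + delayed * mix)
    return out
```

Feeding in a single impulse with feedback at 0.5 produces echoes at 1.0, 0.5, 0.25 and so on, one delay interval apart.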
DI
Standing for Direct Input or Direct Injection, this refers to plugging a guitar or other instrument directly into an audio interface, rather than running it through an amplifier first. A clean, DI'ed guitar signal, for instance, is used as the input for virtual guitar amplifier effects.
Digital audio
Audio can be: 1) a soundwave that travels through the air; 2) an electrical signal that passes through wires and electronics; 3) a digital audio signal in your computer.
The first two of these are analogue - the signal can be thought of as a continuous curve/line with effectively
infinite resolution - you can always zoom in further to see the waveform in greater detail.
In practice, analogue systems aren't perfect; for instance, some background noise is unavoidable. Digital audio
is different; the signal is not continuous but consists of 'slices' - samples - made at regular intervals in time (the
sample rate, usually 44,100 samples per second.) In addition, there's a fixed number of amplitude values that
each sample can be assigned to (the bit-depth, eg, 16-bit gives us 65536 possible values.)
Digital audio can be very precisely manipulated, and it's free of interference and background noise (well, sort of
- see Dither.)
If analogue audio is a smooth curve on graph paper, digital audio is a series of points dotted along that curve at regular intervals, each snapped to the nearest grid division. There's a misconception that this means digital audio must be
stepped/jagged, but a DAC's reconstruction circuitry ensures that, when converted back to an analogue signal,
the audio has a perfectly smooth curve.
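The 'nearest grid division' idea is easy to demonstrate (an illustrative quantiser in Python; not how any particular converter is built):

```python
def quantise(sample, bits=16):
    """Snap a sample in the range -1.0..1.0 to the nearest of the
    2**bits levels available at the given bit depth
    (16-bit gives 65,536 levels)."""
    step = 2.0 / (2 ** bits)
    return round(sample / step) * step
```

The worst-case rounding error is half a quantisation step, which is the 'quantisation noise' that dither (see below) is designed to tame.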
Directional pattern
AKA 'polar pattern'. The angles from which a microphone is able to pick up sounds define its directional pattern.
See also: Cardioid.
Distortion
In music production, we usually mean harmonic distortion, which introduces harmonics musically related to the source frequencies.
Distortion is also the name of a common effect that applies strong harmonic distortion, often inspired by guitar
pedals. Saturation and overdrive are other sources of harmonic distortion, though they are generally less overt
than effects specifically labelled 'distortion'.
In addition, bitcrushing and sample rate reduction, while not giving harmonic distortion, are largely thought of in
the same category, as tools for dirtying-up signals in a useful way.
Distortion may also refer to the overloading of an input or output in a damaging way (ie, digital clipping, which is
rarely desirable.)
Check out The ultimate guide to effects: distortion for more.
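A common way to generate 'musical' harmonic distortion in software is a soft-clipping waveshaper. A minimal sketch (tanh is just one of many possible curves, chosen here for illustration):

```python
import math

def overdrive(sample, drive=3.0):
    """Soft-clipping waveshaper: tanh squashes peaks smoothly rather
    than chopping them off, generating harmonics that are musically
    related to the input. 'drive' sets how hard the curve is pushed."""
    return math.tanh(sample * drive)
```

Because tanh is nonlinear, doubling the input does not double the output - that nonlinearity is precisely where the added harmonics come from - while the output can never exceed full scale.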
Dither
Random noise added to a signal in order to "disguise" ugly quantisation noise caused by a reduction in bit depth - for example, when converting a 24-bit audio file to a 16-bit one.
More accurately, it's essentially the conversion of one form of digital noise into another that's more pleasant to
listen to. Dither is often an option in the export settings of your DAW.
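One common flavour is TPDF (triangular probability density function) dither, where noise about one quantisation step wide is added before rounding. A minimal sketch under that assumption (illustrative only - real dither implementations also offer noise shaping):

```python
import random

def reduce_bits_with_dither(sample, bits=16):
    """Bit-depth reduction with TPDF dither: triangular-distribution
    noise (the difference of two uniform random values) is added
    before rounding, which decorrelates the rounding error from the
    signal, turning it into benign broadband hiss."""
    step = 2.0 / (2 ** bits)
    noise = (random.random() - random.random()) * step   # TPDF noise
    return round((sample + noise) / step) * step
```

The dithered result still lands within a couple of quantisation steps of the original value, but the error no longer tracks the signal.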
Driver
In computing, a driver is a software layer that sits between the operating system/applications and a piece of connected hardware, such as an audio interface or MIDI keyboard, enabling the two to communicate.
With audio interfaces, a chief concern is reducing latency, and to that end, a 'serious' driver such as ASIO (Windows) or CoreAudio (OS X) is best. Most modern audio interfaces of any quality support both.
In loudspeakers, the driver is the speaker cone and the electrical mechanism causing it to move, thus creating sound.
Drum machine
A device that mimics the sounds of the drum kit and other percussion instruments, using either synthesis or
sample playback, usually including sequencing for creating rhythms with them. There are many plugins that
emulate classic drum machines such as Roland's TR series.
Dry/wet
In effects, the dry signal is the effect-free input signal; the wet is the signal with the effect applied. In some effects (eg, reverb), the dry signal is mixed with the wet one to achieve the desired amount of the added effect, and there's usually a dry/wet control for the user to set this.
Dry also describes a sound with little or no reverb/ambience, ie, sounding as if it was recorded in a 'dead' room.
DSP
The manipulation of a digital data signal - including audio - is known as Digital Signal Processing. Effects plugins, mixer EQs and timestretch algorithms are all examples of DSP.
DSP also refers to dedicated Digital Signal Processor hardware, such as Universal Audio's UAD-2 series. With
UAD-2 DSP hardware connected to the computer, UAD-2 plugins are loaded as normal in your DAW, but the
actual processing takes place inside the DSP hardware, thus reducing the load on the computer's CPU.
Ducking
The process of one signal being reduced in volume by the presence of another, either by a fixed amount or dynamically, where the signal is ducked more as the control signal gets louder.
Ducking can serve all manner of purposes, from making a bassline or synth pad pulse in time with a kick drum
to lowering the volume of a music track in a radio broadcast when the DJ talks.
It's usually done using sidechain compression: in the radio example, the DJ's voice feeds into the sidechain
input of a compressor (as well as its mixer input,) while the music feeds into the same compressor's main input.
When the voice hits the sidechain, the compressor responds to it by dropping the volume of the music track.
Dynamic mic
The most affordable type of microphone, dynamic mics work like a loudspeaker in reverse, converting the movement of a diaphragm by sound waves into an electrical signal. Unlike condenser mics, they don't require an external power supply (such as phantom power) before being connected to a recording device or processor.
Dynamics
The changing volume of a sound over time, and how loud or quiet it is in relation to other sounds and to itself. Dynamics processors - including compressors, gates and transient-shaping effects - manipulate these aspects.
The A to Z of computer music: E
Our compendium of digital music jargon continues
Computer Music, June 04, 2013, 15:38 UTC
We demystify an ensemble of essentials from every era of computer music. E is for effects, emulators
and a lot more.
Early reflections
Many reverb plugins offer control over early reflections (ER), simulating the initial burst of sound bouncing off the 'walls' before the more complex full reverb tail kicks in.
Early reflections arrive at the ears very shortly after the initial, direct sound (the audio your reverb plugin is affecting) and give the brain a lot of information as to the size, shape and even surface material of the reverb space.
Depending on the plugin you use, the ER parameters you get access to will range from just a level control to
'density', EQ and more (as with the excellent 2CAudio B2, pictured above.)
Echo
An alternative name for 'delay' (see the previous edition of the A to Z of computer music), although some consider the two to be slightly different, in that 'echo' implies a certain amount of analogue-style degradation/diffusion as the echoes repeat, while the coldly digital 'delay' maintains signal integrity through the repeats.
Edit
As in other areas of media, 'edit' in the context of music production means to intentionally change something in order to improve it - either subjectively as part of the compositional process, or objectively to correct an error or mistake (out of tune, clipped, etc.)
Editing describes everything from moving notes in a MIDI recording or timestretching a sample, to tweaking the parameters on a plugin synth or applying a groove template to a MIDI clip. Pretty much anything done to a pre-existing clip, part or arrangement is considered an edit.
In the less literal sense of the word, an 'edit' is an altered version of an established musical element - sliced and
rearranged, for example - re-employed for added interest later in the track.
Editor
In music software, there are many types of editor. The two most common are the MIDI piano roll and audio/sample editor, but a plugin GUI is also an editor of sorts, as is a DAW's arranger and even mixer.
Essentially, any window in which you make changes that affect the sound of a track - from its individual
components to the master output - is an editor.
Effects
Reverbs, delays, compressors, EQs, phasers, filters... Anything that accepts an audio input signal, changes it in some way and outputs the result is an effect.
In software, these could be plugin effects (in formats like VST, AU, etc), or built directly into software such as a
DAW or soft synth.
Thousands of individual effects plugins are available at prices ranging from free to hundreds of pounds. MIDI
effects, which apply transformative processes to MIDI note and controller data, are rather less well represented
and not actually supported by many DAWs.
Emagic
German developers of the Creator and Notator MIDI sequencers and, ultimately, the Logic DAW; the company was bought by Apple in 2002.
Although Apple continued development with Logic Pro and Logic Express, the most significant thing to result
from the buy-out (aside from them ditching Windows support) was their intuitive, inclusive and massively
popular Garageband DAW, the development of which, it has been suggested by many commentators, was the
real motivation behind the move.
Without Emagic, the DAW landscape as we know it would look very different indeed.
Emulation
Any software (plugin or standalone) that attempts to 'virtualise' a real-world instrument, effect or component, either algorithmically or via a sampled soundbank.
There are numerous software emulations of classic synths and hardware, from the Minimoog (as with Arturia's iMini app, pictured) and TB-303 to the LA-2A and Space Echo; and while purists will seemingly never believe that software can sound like the real thing, the convenience, cost and ability to open multiple instances arguably outweigh any slight sonic discrepancies that may or may not be perceptible.
Encoder
A piece of software for converting audio from one format to another, most pertinently for compressing uncompressed formats like WAV and AIFF to compressed formats like MP3, OGG, etc. A decoder is then used to play back the encoded file - see the last A to Z of computer music for more on this.
Conversion of stereo audio to or from mid/side also uses the encoder/decoder terminology.
A rotary encoder is a knob on a hardware MIDI controller that converts its position or direction of movement to
MIDI CC data.
Envelope
A modulation source in a synthesiser, sampler or effects plugin that controls the amplitude 'shape' of a signal over time.
An envelope progresses through a series of stages, each one individually set to determine the duration or level
of the signal state at that point, and the most ubiquitous envelope type is the ADSR - Attack, Decay, Sustain,
Release (see more on this in the first A to Z of computer music.)
To use the volume of a synth as an example: the Attack stage sets how long the volume takes to rise from zero
to the value determined by the strength of the key press; Decay is the time it takes the volume to drop from that
peak to the Sustain level, which is the level at which the signal is held until the key is released; and Release is
the time the signal takes to drop from the Sustain level to zero once the key is released.
The two most common envelope targets are volume and filter cutoff frequency, although today's virtual
instruments allow envelopes to be pointed at many parameters.
Two common variations on the ADSR are the AD envelope (Attack, Decay) and the ADSHR, which adds a Hold
stage of user-defined duration in between the Sustain and Release stages.
Envelope follower
A processor that 'listens' to an incoming audio signal and converts its amplitude envelope (i.e. loudness over
time) into a modulation source.
The most well-known envelope-following effect is probably the auto-wah used by guitarists - when the strings
are struck harder, the filter cutoff is raised.
An envelope follower is also a key component of any compressor, gate or other dynamics processor.
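The core of an envelope follower can be sketched very simply (rectify, then smooth; the attack/release coefficients here are illustrative per-sample smoothing amounts, not milliseconds):

```python
def envelope_follow(signal, attack=0.01, release=0.001):
    """Track the amplitude envelope of a signal: rectify each sample,
    then smooth the result with separate attack and release
    coefficients, so the envelope rises quickly and falls slowly."""
    env = 0.0
    out = []
    for x in signal:
        level = abs(x)                              # rectify
        coeff = attack if level > env else release  # rise vs fall speed
        env += coeff * (level - env)
        out.append(env)
    return out
```

Feed this output to a filter's cutoff and you have the skeleton of an auto-wah; compare it against a threshold and you have the detector stage of a compressor or gate.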
Equaliser (EQ)
Arguably the most important of all mixing and sound-shaping effects plugins, equalisers come in a range of
types, all of them designed to allow boosting or attenuating of specific frequencies in an audio signal to
creatively change its timbre or make it work with other sounds in a mix.
While graphic EQs (the ones with rows of tiny faders) are arguably the public face of EQ, the most versatile and
ubiquitous EQ in music production is the parametric (such as the excellent DMGAudio Equilibrium, pictured.)
This gives full control over the gain, centre frequency and bandwidth of a number of bands, often including the
option to switch the highest and lowest bands to shelving or low-/high-pass modes.
Understanding how to use EQ is an absolutely essential skill for any music producer.
Eraser
Most DAWs (Ableton Live being perhaps the most notable exception) feature an eraser among their array of mouse pointer tools, used to delete clips, MIDI notes and other data.
Event
Every single thing that occurs on the timeline of a DAW is an event (right down to individual automation curve breakpoints), but in its original music technology usage, the word referred specifically to MIDI data.
Fewer and fewer DAWs include them these days, but once upon a time, the entirely non-graphical MIDI Event
editor was an essential tool for making precise adjustments to MIDI Note and CC data.
Exciter/enhancer
A signal processing concept kicked off by Aphex's classic 1975 Aural Exciter rack. While there is no strict definition of what an exciter or enhancer must do, they generally combine techniques like dynamic EQ, harmonic distortion and synthesis to apply that oft-quoted 'fairy dust' to a sound or mix - or, more technically, the addition/enhancement of high-order harmonics.
Loosely speaking, exciters operate at the top end of the frequency range; enhancers work across the entire frequency spectrum.
Expander
The opposite of a compressor, an expander is an effect used to increase the dynamic range of a signal, rather than reduce it.
While a compressor lowers the volume when the input signal exceeds a defined threshold, thus bringing the
loudest and quietest parts closer together, an expander lowers the volume when the signal falls below the
threshold, thus moving loudest and quietest parts further apart. Upward expansion also exists, raising the level
of signals above the threshold, again, to increase dynamic range.
Expansion has plenty of creative and corrective uses, and you probably already employ it more than you realise
in the form of gates, which are a type of expander.
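The below-threshold behaviour can be expressed per-sample (a static downward expander in Python, ignoring attack/release smoothing for clarity; the formula follows the standard dB-domain definition):

```python
def expand(sample, threshold=0.1, ratio=2.0):
    """Downward expander: samples below the threshold are pushed
    further down. With a 2:1 ratio, a signal 6dB below the threshold
    comes out 12dB below it, widening the dynamic range."""
    level = abs(sample)
    if level >= threshold or level == 0.0:
        return sample                          # above threshold: untouched
    gain = (level / threshold) ** (ratio - 1)  # extra attenuation below it
    return sample * gain
```

With the defaults, a sample at 0.05 (half the threshold, i.e. 6dB below it) is attenuated to 0.025 (12dB below), exactly the 2:1 expansion described above.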
The A to Z of computer music: F
We separate fact from fiction with a frenzy of F
Computer Music, June 24, 2013, 15:27 UTC
Get your digital music fix with a frenzy of F words. We've separated fact from fiction and put it all in the
key of F.
Fade in/out
As well as their creative application for gradually introducing or removing sounds from a mix, applying very
short fade-ins and fade-outs is an essential part of good audio editing practice, acting to avoid clicks and pops
at the start and/or end of samples, caused by the audio signal sharply jumping from/to the zero point rather
than smoothly coming to rest on it.
Some DAWs, such as Ableton Live, actually apply tiny fades to imported and recorded audio clips by default,
under the assumption that it almost always needs doing, and if it doesn't, the listener won't notice the fades
anyway. Watch out, though, as this can sometimes dull the initial transient of attacking sounds like drums and
plucked strings.
The top waveform ends on a non-zero value and will audibly click; below, we've fixed it with a fade-out.
Fader
A vertical (or, occasionally, horizontal) sliding controller, either virtual or physical. Faders are commonly used to set the channel volume levels in a mixer, but they can also be found in the interfaces of many plugin synths, samplers and effects.
Feedback
When the output of a device is routed back into its input (either via a cable or a microphone in the physical world), feedback occurs, with the signal layering repeatedly on top of itself.
This can be a useful phenomenon when employed in a controlled manner (in delay plugins, for instance, to
create repeated echoes that gradually get quieter,) or it can be the undesired result of an accidental routing
mishap, in which case the feedback can build up fast and loud enough to cause damage to loudspeakers and
other equipment, including your ears.
FFT
Fast Fourier Transform. In very simple terms, an algorithm that converts waveform-style audio to - and from - frequency graph-style audio, such as you'd get in a graphical editor like Adobe Audition. This can be used for analysis (think spectral analysers) or to process the converted audio in some way before turning it back into waveform-style audio for output. Most software with 'spectral' in the title uses FFT to do its thing.
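What the FFT computes can be shown with its slow cousin, the plain discrete Fourier transform (a naive textbook sketch, far too slow for real use but mathematically identical):

```python
import cmath, math

def dft(signal):
    """Naive discrete Fourier transform - what an FFT computes, just
    without the clever speed-ups. Returns one complex value per
    frequency 'bin'."""
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A pure tone with exactly 5 cycles in the window lands in bin 5:
N = 64
spectrum = [abs(b) for b in dft([math.sin(2 * math.pi * 5 * n / N)
                                 for n in range(N)])]
```

A spectral analyser is essentially this, run repeatedly on short overlapping windows of the incoming audio.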
Filter
In the context of music production, a filter is a device for attenuating specific user-selected frequencies in an audio signal. The frequency at which this attenuation begins is called the cutoff point, and depending on the filter type, the attenuated frequencies will either be above (low-pass), below (high-pass), around (notch) or on both sides of (band-pass) the cutoff.
A resonance control is often present for applying a bit of boost to the frequencies directly around the cutoff
point, and the attenuation's steepness is determined by the number of 'poles' in the filter, each one adding 6dB
of attenuation for every octave the signal moves away from the cutoff frequency. So, a 4-pole low-pass filter
lowers the volume of the signal by 24dB for every octave it goes above the cutoff point.
One final point: we said the attenuation begins at the cutoff point, but in reality, it's not such a precise science
as the transition is gradual.
FireWire
Developed by Apple in the early '90s, FireWire (or IEEE 1394, to give it its technical name) was, for a while, the fastest connection standard available for external hard disks and audio interfaces. FireWire 400 offers a theoretical maximum transfer speed of 400Mbit/s, while FireWire 800 doubles this to 800Mbit/s - more than enough for bi-directional 8-channel audio I/O.
USB 3, eSATA and Thunderbolt have since trounced FireWire in terms of speed, but it's still a popular
connection choice.
Firmware
There's hardware, there's software, and then there's firmware sitting somewhere between the two. While your audio interface or MIDI controller interfaces with a driver on your Mac or PC, it's the firmware that actually does the talking and listening - without it, your digital device is just a box of componentry.
A firmware update could change the default assignments on your MIDI controller, perhaps, or fix a compatibility
issue between your audio interface and a new version of your computer's operating system. If you discover that
a new firmware version is available for any given device, it's a good idea to apply it.
FL Studio
Originally known as Fruity Loops, Image-Line's quirky Windows DAW is one of the most popular music
applications ever made.
From humble beginnings as a 4-track sample-based drum machine, it's grown into a complete and powerful
production system complete with its own instruments and effects. There's even a mobile version for iOS and
Android, and we gather Mac support might well be coming soon, too. Read our review of the latest version.
Flanger
A classic studio effect allegedly invented by Abbey Road engineer Ken Townsend, flanging originally involved playing back two identical signals together on two tape decks, with one of them offset in time by a very small amount. The characteristic whooshing sound was created by applying changing pressure to one of the tape reels in order to vary this offset.
Later came flanger effects units, which mimicked the sound using delays and LFOs. These days, flanger
plugins can be used for this same task.
Floating/fixed point
In digital signal processing, signal values can either be represented as floating or fixed point numbers.
In floating point numbers, the decimal point can be moved around (ie, 2546.77, 25467.7, 254.677, etc),
enabling the representation of very large and very small numbers. In fixed point systems, the decimal point is
always in the same place (ie, in a system with a fixed one-place decimal point, 43129.9 can't be floated to
become 4312.99).
What this means for music software is that the possible dynamic range in a 32-bit floating point system is
absolutely enormous, making it extremely difficult (practically impossible during normal use) to overload or clip
the audio signal path since the clipping point is far higher than 0dB. With floating point, quiet signals are
represented with as much relative accuracy as louder ones, which isn't the case in a fixed point system.
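The headroom difference is easy to demonstrate (an illustrative sketch contrasting a float sample with naive 16-bit fixed-point storage, not any particular DAW's internal format):

```python
def to_int16(sample):
    """Store a float sample as 16-bit fixed point: anything beyond
    full scale (-1.0..1.0) has nowhere to go and is hard-clipped."""
    scaled = int(round(sample * 32767))
    return max(-32768, min(32767, scaled))

# A float happily holds an 'impossible' over-full-scale value...
hot = 1.5
# ...but converting it to fixed point clips it at the maximum:
clipped = to_int16(hot) / 32767
```

This is why a floating point mix bus can run 'into the red' internally without damage, as long as the level is brought back below full scale before the final fixed-point output.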
FM synthesis
A form of synthesis involving the modulation of one audio signal's frequency by another, FM can be
implemented using analogue oscillators, but is generally thought of as the concept behind digital FM synths like
Yamaha's seminal DX7 (pictured above and gloriously reborn in Native Instruments' FM8.)
The operator (as FM oscillators are known) generating the main signal is called the 'carrier', while the one doing
the modulating is called the 'modulator'. The number of operators available in an FM synth typically ranges from
two to eight, and obviously the more you have, the more potentially complex the modulation setup and,
consequently, sound will be.
FM synthesis is known for its punchy basses, sparkling pianos and bells, and generally bright tones. It also has
a reputation for being unapproachable and intimidating, though this is largely a hangover from the bad old days
of programming a Yamaha DX7 via its tiny, cryptic LCD system.
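At its simplest, two-operator FM is one sine wave perturbing the phase of another. A minimal sketch (illustrative parameter names; real FM synths add envelopes per operator, operator ratios, algorithms and more):

```python
import math

def fm_sample(n, sr=44100, carrier=220.0, modulator=220.0, index=2.0):
    """Two-operator FM: the modulator wiggles the carrier's phase.
    'index' sets the modulation depth - higher values mean more
    sidebands and a brighter tone."""
    t = n / sr
    mod = math.sin(2 * math.pi * modulator * t)               # modulator
    return math.sin(2 * math.pi * carrier * t + index * mod)  # carrier

tone = [fm_sample(n) for n in range(1000)]
```

The output always stays within full scale (it's still just a sine of a messier phase), yet it quickly diverges from a plain 220Hz sine - those deviations are the FM sidebands you hear as added brightness.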
Formant
The sound of the human voice is largely defined by the frequency-based characteristics of vowels (generated via movement of the vocal tract, tongue and lips), and these 'phonetic signatures' are known as formants.
Vocal emulation software like Yamaha's Vocaloid, as well as many synth and effects plugins, enable
manipulation of formants in order to make a male vocal sound female, for example, or turn a non-vocal sound
into something at least vaguely human - think of 'talking' synth sounds.
Freeware
Whether released through sheer magnanimity on the part of a developer looking to make their name, or as a spin-off from a full commercial release, freeware is quite simply software that's freely available.
These days, there are freeware plugins available that outclass premium offerings from only a few years ago.
While Computer Music's own CM Plugins suite is indeed free with every issue of the magazine, it's not freeware
as such because it's not freely available elsewhere. It's generally dubbed 'magware'.
Freeze
First introduced in Logic Pro, track freeze is now a feature of most DAWs. Invoking it simply renders the contents of the target track (usually including all loaded effects) in place, thus freeing up CPU and RAM, with the option to instantly unfreeze it at any point for further editing.
Frequency
The frequency of an audio wave is the speed at which it vibrates in air, measured in Hertz (cycles per second) and directly related to pitch. The human ear is theoretically capable of hearing frequencies around 20Hz-20kHz, but this drops off with age, such that by 40, the average person's upper range stops at about 14kHz.
Manipulation of frequency using such tools as filters, EQ and multiband dynamics processing is one of the key
concepts in music production, with each instrument type occupying its own characteristic frequency range that
generally needs to be cleared to make room for it.
Inaudible frequencies also have their uses; for instance, the LFOs in synths can produce signals that would be
too low in pitch to discern, but using such a signal to control other elements of the synth results in very audible
effects. For instance, a 1Hz LFO applied to the filter cutoff frequency will make the filter move up and down in a
continuous, second-long cycle.
Frequency response
The frequency response of a device (a pair of loudspeakers or a plugin effect, for example) is a measure of its
spectral output in response to a given input.
It can be measured in various ways, but the ultimate goal in most pro audio gear is for the response to be as
close to linear as is possible - ie, the input spectrum curve matches the output spectrum curve - at least when
the device's settings are in a neutral position.
A plugin's frequency response can be measured by feeding it white or pink noise then analysing the output with
a spectral analyser.
Frequency shift
Pitchshifting a signal retains the harmonic relationships between its various components - ie, shifting a 440Hz wave up by an octave means doubling it to 880Hz, which also shifts its second harmonic at 880Hz up to 1760Hz.
Frequency shifting, on the other hand, raises or lowers all affected frequencies by the same amount, rather than
by the same ratio, so shifting a 440Hz signal up an octave to 880Hz will push its second harmonic up to
1320Hz (880 + 440.)
The resulting sound is very different to the more 'realistic' pitch shifting, and both have their uses. Frequency
shifting, for instance, is useful for tuning percussion, since it can be used to adjust the fundamental frequency
without affecting the upper frequencies so much.
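The difference is just multiplication versus addition. A minimal sketch of the arithmetic (operating on a list of partial frequencies in Hz, purely to illustrate the two operations):

```python
def pitch_shift(partials, semitones):
    """Pitch shifting multiplies every partial by the same ratio,
    preserving harmonic relationships."""
    ratio = 2 ** (semitones / 12)
    return [f * ratio for f in partials]

def frequency_shift(partials, shift_hz):
    """Frequency shifting adds the same number of Hz to every partial,
    breaking those relationships."""
    return [f + shift_hz for f in partials]
```

Shifting a 440Hz tone (with its 880Hz second harmonic) up an octave gives 880/1760Hz via pitch shifting, but 880/1320Hz via a 440Hz frequency shift - matching the figures in the text.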
Fundamental frequency
The fundamental frequency (or just 'fundamental') is the frequency in a waveform that determines its discernible
pitch, and it's often (but not always) the lowest and/or loudest frequency present. The fundamental frequency of
the note A4 is 440Hz, while middle C cycles at 261.63Hz. The fundamental is accompanied by a series of
harmonics, which are multiples of it.
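The fundamentals quoted above follow from the standard equal-temperament formula, which is easy to sketch (mapping MIDI note numbers to Hz; the A4 = 440Hz reference is the usual convention, though orchestras sometimes tune elsewhere):

```python
def note_to_freq(midi_note):
    """Equal-temperament fundamental: A4 (MIDI note 69) is 440Hz and
    each semitone multiplies the frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)
```

Middle C (MIDI note 60) comes out at roughly 261.63Hz, and going up twelve semitones from A4 doubles the frequency to 880Hz, i.e. one octave.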
Abbreviation of 'effects', but used primarily to describe sound effects rather than effects processors. Sample
libraries, for example, often come with a folder full of 'FX', comprising risers, booms, wooshes and the like.
The A to Z of computer music: G
A gaggle of G words in the latest instalment of our
geek-friendly glossary
Computer Music, August 02, 2013, 09:26 UTC
We break down a gaggle of G words in the latest instalment of our guru's glossary of geeky gear and
digital gadgetry.
Gain
When the level of a signal is raised, the process is referred to as increasing the gain. You'll find gain controls on all manner of music software, from the mixer in your DAW (push a channel fader above 0dB and you're increasing the gain) to the majority of effects plugins, the Gain Make-up control on a compressor being the most obvious example. Note that gain can also be negative, used to indicate a reduction in volume.
Gain staging
When using multiple audio devices in a chain, the output of each connected to the input of the next, gain
staging - the correct setting of gain controls and other settings through the chain - is an important consideration.
Even though in the software-based studio noise isn't the issue it once was, when you hook up an instrument or
microphone to an audio interface, particularly if you have a hardware compressor or EQ involved, you need to
set your gains correctly to optimise the signal-to-noise ratio and ensure a not-too-high, not-too-low level.
Gate
Gate is short for noise gate, a type of effect invented to combat background noise, hisses, rumbles, etc, in the
parts of recordings that are supposed to be 'silent'.
The simplest gates shut the signal down to silence unless it is louder than a certain level, called the threshold.
The idea is that the gate will 'shut' during parts when the instrument is not playing, and it will 'open' again when
the instrument plays, silencing background noise. Attack and release parameters determine how quickly the
gate opens and closes.
Some gates let you set a 'floor', so that instead of the signal being completely silenced, it will instead be
reduced in volume. Gates can often be sidechained, either with internal filters (to cause the detection to
respond only to a certain frequency range) or an external input.
The latter is great for creative applications, eg, the rhythmic 'chopping up' of a pad sound using a gate
sidechained to a hi-hat.
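The threshold and 'floor' behaviour described above can be sketched in a few lines of Python. This is purely illustrative - a real gate smooths its transitions with attack/release envelopes rather than switching per sample:

```python
def simple_gate(samples, threshold, floor=0.0):
    """Crude noise gate: samples whose absolute level falls below the
    threshold are scaled by 'floor' (0.0 = full silence); louder samples
    pass through unchanged. Real gates add attack/release smoothing."""
    return [s if abs(s) >= threshold else s * floor for s in samples]
```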
Also read: 11 essential gating tips
GM (General MIDI)
General MIDI is the specification that ensures that all MIDI synths and keyboards have their mod wheels
outputting MIDI CC1, for example, and that all GM-compatible sound sources map the same set of sounds to
the same program numbers (Trombone on Program 051, Goblins on Program 101, etc.)
The GM soundbank doesn't see much action today, as even the lowliest of soft synths and ROMplers leave it
standing in every sense, but the MIDI CC numbers of the GM protocol are still totally relevant.
Glide
Also known as portamento, activating the glide function on a synthesiser causes the pitch of one played note to
smoothly slide up or down to the pitch of the next, as opposed to jumping from one discrete pitch to another.
Depending on the synth in question, the time that this glide takes could be adjustable, fixed or scaled
(determined by the distance of the glide).
Glitch
In the context of music production, glitch has at least three meanings: an electronic music genre, an unpleasant
digital noise and a category of plugin.
The first is self-explanatory; the second can be caused by things like missing a zero crossing when cutting a
sample or a sample file being corrupted; and the third is any effects processor designed to chop up, rearrange
and process its input signal in a jittery fashion, Illformed Glitch, Sugar Bytes Effectrix and Ableton Beat Repeat
being three fine examples.
Graphic EQ
While a parametric EQ enables you to apply cut and boost to a small number of adjustable frequencies, a
graphic EQ divides the audible frequency spectrum up into a number of overlapping bands (between 3 and
30-odd) of fixed frequency and width.
While this makes it a great option for music playback systems (both domestic and PA-driven), where total
precision isn't essential, it's not hugely suitable for music production beyond balancing out 'bumpy' frequency
responses that would be difficult to flatten using a parametric.
That said, there are some classic graphic EQs out there by the likes of Urei and API that are highly prized by
mix engineers for their particular sounds.
GUI
Graphical User Interface. The GUI is the visual front-end of any software application - the knobs, faders, tracks,
clips, keys and buttons that you interact with on screen to make your DAW, virtual instruments and effects do
their thing.
The 'persona' of an application is largely driven by its GUI (think Ableton Live's abstract cleanliness or Reason's
quasi-realistic studio clutter) and can even have a psychological effect on the way we perceive the sounds
coming from it.
For that reason, it's a good idea to check your mixes with the screen turned off at some point in your production
process.
The A to Z of computer music: H
Our compendium of production terms moves on to
the letter H
Computer Music, August 16, 2013, 15:21 UTC
Ever had a question about harmonics, hertz or headroom? Or wondered how best to use hi-hats,
hysteresis or hard syncing? H is for...
Hard clipping
When an audio signal exceeds the limits of a digital system, it clips, completely flattening the waveform at the
threshold. This is known as hard clipping. Soft clipping, on the other hand, eases into the clipping, rounding off
the waveform so that any threshold violations are smoothly curved rather than brutally flattened.
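The two behaviours can be sketched in a few lines of Python; tanh is just one of many possible soft-clipping curves, used here for illustration:

```python
import math

def hard_clip(x, limit=1.0):
    """Flatten anything beyond the limit - a brutal digital clip."""
    return max(-limit, min(limit, x))

def soft_clip(x):
    """Round the waveform off smoothly using tanh: near-linear for small
    signals, gently curving towards +/-1 as the level rises."""
    return math.tanh(x)
```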
Hard disk drive
While a Mac or PC's random access memory (RAM) serves as essential temporary storage for real-time
operational data, amongst other things, the hard disk drive is the primary storage system, maintaining its
contents even when the host computer is powered down. A hard disk comprises one or more non-magnetic
platters coated in magnetic material, spinning at (usually) 5400, 7200 or 10,000rpm.
Although most conventional computers will host at least one internal hard drive that stores the operating system
and any installed applications, additional drives can be attached via USB, FireWire, Thunderbolt or External
SATA for the storage of documents and media, including - pertinently for the computer musician - sample libraries
and ROMpler soundbanks.
Today, the HDD faces increasingly stiff competition from flash memory-based solid state drives (SSDs), which
offer the benefits of faster data access and no moving parts. With SSD technology rapidly falling in price, it's
only a matter of time before the HDD goes the way of magnetic tape.
Hard sync
Many virtual analogue synthesisers offer hard sync as an option in their oscillator sections. When activated, one
oscillator is designated the 'master' while another is designated the 'slave'.
Whenever the master oscillator completes a wave cycle, the slave oscillator begins its wave cycle again from
the start too, no matter where in the cycle it may have been at the point of retriggering. By offsetting the
frequency of the slave oscillator, we get a harmonically rich, edgy tone that works particularly well for lead lines
and hard basses.
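The retriggering logic can be sketched naively in Python using sawtooth oscillators. This is purely illustrative - a real virtual analogue would use band-limited (anti-aliased) techniques:

```python
def hard_sync_saw(master_freq, slave_freq, sample_rate, n_samples):
    """Naive hard-synced sawtooth: the slave's phase resets to zero
    whenever the master oscillator completes a cycle, no matter where
    in its own cycle the slave happens to be."""
    out = []
    master_phase = 0.0
    slave_phase = 0.0
    for _ in range(n_samples):
        out.append(2.0 * slave_phase - 1.0)   # saw in range [-1, 1)
        master_phase += master_freq / sample_rate
        slave_phase += slave_freq / sample_rate
        if master_phase >= 1.0:               # master completed a cycle...
            master_phase -= 1.0
            slave_phase = 0.0                 # ...so retrigger the slave
        elif slave_phase >= 1.0:
            slave_phase -= 1.0
    return out
```

Offsetting the slave frequency above the master's (as in the 110Hz/155Hz example below) produces the characteristic edgy, harmonically rich sync tone.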
Harmonics
A series of mathematically predictable frequencies present above the fundamental frequency of a (musical)
audio signal that serve as essential components of the overall sound.
Harmonics are spaced apart equally by an amount dependent on the fundamental. The note A4 above middle
C, for example, is at 440Hz, with harmonics at 880Hz (A5), 1320Hz (E6), 1760Hz (A6), etc - so the harmonic
frequencies are multiples of the fundamental.
Perhaps slightly confusingly, the first such frequency above the fundamental is the 'second harmonic', as the
fundamental is considered to be the 'first harmonic'. An understanding of harmonics is essential in sound design
(particularly involving synthesisers) and mixing (with regard to EQ, specifically). Sounds may also contain
inharmonic frequencies; that is, non-musical or discordant tones such as noise mixed into a synth sound, and
these are not multiples of the fundamental frequency.
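Since harmonics are simply integer multiples of the fundamental, they're trivial to compute - a one-line sketch:

```python
def harmonic_series(fundamental, count):
    """First 'count' harmonics of a fundamental frequency (the
    fundamental itself counts as the first harmonic)."""
    return [fundamental * n for n in range(1, count + 1)]
```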
Harmonisation
The process of composing a chordal accompaniment to a leading melody.
Harmoniser
An audio effects processor that pitchshifts its input signal and overlays the results on top of the original in order
to generate a two-or-more-part harmony. The original harmoniser was Eventide's H910 hardware unit, released
in 1974, and the American company still owns the rights to the Harmonizer trademark today.
Harmony
When two or more notes played or sung at the same time sound pleasing to the ear due to their concordant
harmonic relationship (eg, a major chord), they are said to be 'in harmony'. Chords that don't seem to fit
together in such a pleasant way are said be 'dissonant', but this doesn't necessarily mean they aren't musically
relevant or that they aren't part of harmony theory.
Headroom
In practical terms, headroom is the amount you can boost a particular audio signal before the system it's
playing through will distort – in other words, the difference in level between the signal's loudest point and the
maximum level the system can handle.
If you find that the master output is clipping in your DAW, lowering its level fader attenuates the signal level and
increases the headroom, giving you more 'room' to add further sounds to the mix that might raise the overall
level further.
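In floating point digital audio, where full scale is 1.0, headroom can be computed from a signal's peak level like so (a minimal sketch of the standard dB arithmetic):

```python
import math

def headroom_db(peak_level, max_level=1.0):
    """Headroom in dB between a signal's peak and the system maximum
    (for floating point digital audio, full scale is 1.0)."""
    return 20 * math.log10(max_level / peak_level)
```

A signal peaking at half of full scale, for instance, leaves roughly 6dB of headroom.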
Hertz (Hz)
Defined as "the number of cycles per second of a periodic phenomenon", hertz (named after the physicist
Heinrich Hertz) is the unit of frequency in the International System of Units (SI).
The pitch of a soundwave is defined by its frequency in hertz, with a signal at 440Hz generating the note A4, for
example. Synthesiser oscillators, filters and EQ, LFOs... numerous elements of your DAW and plugins deal
directly with the frequency of sound or modulation waves (generating or shaping them), all described in Hz and
kHz (kilohertz - thousands of hertz).
The clock speeds of various components in your computer are also described in hertz, of course - these days,
they'll most likely be up in the gigahertz (GHz - billions of hertz) range.
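As an example of frequency arithmetic, here's the standard equal-temperament conversion from MIDI note number to hertz, assuming the A4 = 440Hz convention mentioned above (A4 is MIDI note 69):

```python
def midi_to_hz(note, a4=440.0):
    """Frequency in hertz of a MIDI note number in 12-tone equal
    temperament, with A4 (note 69) tuned to 440Hz by default."""
    return a4 * 2 ** ((note - 69) / 12)
```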
Hexadecimal
Also known as 'hex' or 'base 16', hexadecimal is a numeric system that most musicians will never encounter –
unless, that is, they use tracker software.
While the numbers 0-9 are self-representing, the numbers 10-15 are represented by the letters A-F. Trackers
are programmed by entering values into a grid (that you can loosely think of as analogous to your DAW's
arrange page and MIDI editor combined) to specify note data and parameter changes. Most trackers use a hex
numbering scheme for efficient representation of values in a limited space.
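Python's built-in conversions make the mapping easy to experiment with - for example, the two-character columns typical of trackers cover the values 0-255 (hex 00-FF):

```python
def to_hex(value):
    """Render a value 0-255 as the two-character uppercase hex string
    a tracker column would display."""
    return format(value, "02X")

def from_hex(text):
    """Parse a hex string back to a decimal integer."""
    return int(text, 16)
```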
High-pass filter
A filter that attenuates (lowers) the level of all frequencies below its cutoff point (thus letting all those above it
through), the high-pass filter (or the high-pass mode on a multimode filter) can be used correctively to reduce
low-end rumble and sub bass, or creatively for upward sweeps, general sound design, etc.
A high-pass filter often features a resonance control for applying boost to the frequencies directly around the
cutoff point, and the amount of attenuation is determined by the number of 'poles' in the filter, each one adding
6dB of attenuation for every octave the signal moves away from the cutoff frequency.
So, a 4-pole high-pass filter lowers the volume of the signal by 24dB for every octave below the cutoff point.
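The pole arithmetic above reduces to a single multiplication (a sketch of the ideal slope only - real filter responses also have a transition region around the cutoff):

```python
def filter_attenuation_db(octaves_from_cutoff, poles):
    """Ideal attenuation applied by a filter at a given distance (in
    octaves) beyond its cutoff: each pole contributes 6dB per octave."""
    return 6 * poles * octaves_from_cutoff
```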
Hi-hats
A pair of small cymbals mounted, facing each other, on a stand and brought together via a footpedal, the hi-hats
are one of the three main components of the drum kit, along with the bass (kick) and snare drums.
Very generally speaking, while the kick and snare provide the main beat (boom, crack, boom boom, crack, etc),
the hi-hat is held closed and struck with a stick on eighth- or 16th-notes, filling out the groove - a core drumming
technique known as 'riding'.
Played closed, the sound of the hi-hats is tight and bright; played open, they become much bigger and more
splashy; and when brought together by depressing the pedal, the hi-hats make a soft "chick" sound that can be
very useful for accenting the off-beat when riding the ride cymbal instead, or playing with brushes on the snare
(in jazz, primarily).
Electronic music also uses hi-hat sounds, though almost always of the programmed, non-realistic variety.
Hiss
Related to hum but not as controllable, a certain level of hiss is inevitable in even the most software-based
studio, since the amplifiers in or connected to your loudspeakers can't help but generate a certain level of it.
Other causes of hiss include poor quality and/or unbalanced cables, and recording instruments and
microphones at too low a level, which brings up the noise floor (and hiss) when the playback level is raised to
compensate.
Hit
A short, one-shot percussive sample, eg, a drum.
Hum
Not the hair-wrenching issue it once was thanks to the proliferation of integrated digital systems, hum in the
recording studio (beyond the low-level noise intrinsic in the transformers of any electronic audio equipment) is
caused by various undesirable processes including ground loops, CRT monitors and shared impedances.
The fix depends on the cause, but fortunately, it's not something the computer musician has to worry about too
much these days, since virtual instruments and effects don't generate noise of any kind (unless they're
designed to!).
Hysteresis
In audio, hysteresis is a parameter found on some noise gates, used to reduce 'chatter' when the input signal
hovers around the threshold level. To implement it, two thresholds are applied: one to open the gate when the
signal exceeds it, and the other, a few decibels below the first, to close it when the signal drops below it. The
gap between the two prevents the constant opening or closing of the gate by signals hovering around either.
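The two-threshold logic can be sketched as a simple state machine in Python. This operates on signal level values directly; a real gate would track the envelope of the audio:

```python
def hysteresis_gate(levels, open_thresh, close_thresh):
    """Gate open/close decisions with hysteresis: the gate opens above
    open_thresh and only closes once the level drops below the lower
    close_thresh, so levels between the two can't cause 'chatter'."""
    is_open = False
    states = []
    for level in levels:
        if not is_open and level > open_thresh:
            is_open = True
        elif is_open and level < close_thresh:
            is_open = False
        states.append(is_open)
    return states
```

Note in the example below that a level of 0.45 - between the two thresholds - leaves the gate in whatever state it was already in.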
The A to Z of computer music: I
Our dictionary of music making definitions reaches
the letter I
Computer Music, September 19, 2013, 11:53 UTC
For those illiterate in the industry lingo, we're here to issue some indispensable information on the ins
and outs of the tools of the trade.
iLok
PACE's iLok copy protection system is a USB dongle that stores user licences for applications and plugins by
any developer choosing to implement the system. While the latest version, iLok 2, has proved secure (ie, has
not been cracked), recent serious (but now seemingly resolved) problems with an update to its new License
Manager software have irritated both users and developer partners alike.
Beyond that, the concept of having to keep a USB key connected to your computer in order to use software that
you've paid for is always going to be controversial, although the method's portability between systems is a
definite plus.
Image
The arrangement and balance of sounds in a stereo panorama from left to right and - less literally - 'front' to
'back' is known as the stereo image. While placing signals between the left and right channels of a stereo pair is
(generally speaking) a simple matter of adjusting their pan pots in the mixer and/or applying effects plugins with
stereo positioning controls (reverb, delay, etc), working in a sense of perceived depth is a little trickier. There
are numerous plugins on the market designed to help with imaging, often utilising mid/side techniques and
psychoacoustic processing.
Input
The entry point on one device (a mixer channel or plugin effect, say), designed to accept the output signal from
another device (eg, an instrument or microphone). Inputs can be virtual or physical - plugging a microphone into
an audio interface is done via a hardware input, for example, while connecting the resultant output from the
audio interface to a reverb plugin in your DAW is done via a virtual input (an insert point - see below - or an
effects send).
Input monitoring
Inputting audio from the outside world into the software domain inevitably involves a degree of processing
latency, which is simply the delay introduced by your computer and audio interface converting the analogue
signal into a digital data stream and back again, and the processing that happens while the signal is inside the
computer.
In order to enable monitoring of the input signal with practically no delay, many audio interfaces feature 'direct
monitoring', which duplicates the input signal, sending one to the computer for recording (delayed as described)
and the other directly to the interface's monitor outputs for latency-free listening, but without any of the
processing offered by the computer (plugin effects).
Insert
A point in any mixer channel (hardware or software) at which the input and output of an effects or dynamics
processor are connected in order to route 100% of the channel's input signal through it for processing. In a
DAW mixer, insert slots give access to your menu of plugin effects, several of which can be loaded into a
channel at once, processing the signal in top-down series.
Although send effects are considered the alternative to insert effects, the auxiliary channel used for the
send/return loop is in fact just another mixer channel with effects inserted into it, the difference being that, rather
than that channel being connected to a single track in a multitrack project, it's instead capable of receiving
parallel user-adjustable input from multiple channels at once.
Instrument
An instrument is anything used to make musical sounds, of course, but in the context of computer music, the
word is generally prefixed with 'virtual'. The biggest revolution in music technology of the last 15 years has been
the proliferation of increasingly powerful and affordable software instruments.
Broadly speaking, there are two types of virtual instrument: synthesisers and sample players. Synthesisers use
algorithmically modeled oscillators to generate their basic tones for shaping with filters, envelopes, etc, while
sample players use digital audio recordings (samples) as their raw material. Sample player engines can be
used to construct incredibly lifelike virtual pianos, guitars, drums, brass, strings, etc, as well
as classic synth emulations, sonically elaborate 'power synths' and even vocalists. These instruments are often
referred to as ROMplers; short for 'ROM player', this name harks back to the early days of sample-based sound
modules, which stored their fixed sound libraries in Read Only Memory (ROM).
Debate continues to rage as to whether virtual analogue synths can ever sound as good as their real-world
counterparts, but we're absolutely unequivocal in our opinion that they already do - in fact, we'd always favour
the price, convenience and stability of, say, a virtual Minimoog over the real thing, particularly given that recent
emulations really couldn't be described as sounding in any way lacking.
Intel
The world's biggest manufacturer of microprocessors, Intel was founded in 1968 and now makes CPUs for the
majority of Windows PCs, as well as all of Apple's desktop and notebook computers. They also produce
graphics chips, hard disks, mobile CPUs and more, but it's their x86-based Mac/PC CPUs that form the core of
their business.
Inter-app audio
Part of Apple's iOS 7, which is available now (although, at the time of writing, it's having some major audio
issues), Inter-App Audio is a new library of APIs that enable apps running on iPhone, iPad or iPod touch to send
and receive audio streams to and from each other.
Thus, a soft synth app could send its output to a DAW app, perhaps via an effects processing app, taking us a
big step closer to the dream of the full-on mobile music studio. The majority of IAA's functionality actually
already exists in the form of A Tasty Pixel's AudioBus app. Still, IAA promises to improve on AudioBus's feature
set and obviously benefits from being part of the OS itself in terms of developer support and the user not having
to invest in a third-party solution.
Interface
Any point at which a computer or software application connects to a peripheral device (which includes you, the
user!) is an interface, and there are many types of interface involved in computer-based music production. An
audio interface, for example, enables you to connect your DAW to your speakers for output, and microphones
and external instruments (guitars, etc) for input; while a MIDI interface accepts input from and delivers output to
MIDI devices of all kinds. Other more general computing interface types include USB, FireWire and the
graphical realisations, analogies and metaphors through which you interact with your operating system, DAW
and plugins (the Graphical User Interface, or GUI).
Also read: How do you choose an audio interface?
Intro
The opening section of a song, in which you'll generally either set out your stall for what's to come, or - less
commonly - play with your listener's assumptions by creating something wholly at odds with the rest of the
track. Like other song sections, the intro will usually be 4, 8, 16 or 32 bars long. It can also be repeated later in
the track or revisited stylistically for the outro.
I/O
Input/Output. In digital music production, commonly used in reference to the combined input (signals entering)
and output (signals exiting) capabilities of a device. If, for example, you see an audio interface described as
sporting both analogue and digital I/O, it means that it's built with both analogue and digital inputs and outputs
onboard, for sending and receiving signals to and from guitars, microphones, loudspeakers, etc, as well as
digital devices such as DAT recorders and other audio interfaces.
iOS
The operating system behind Apple's iPhone, iPad and iPod touch, iOS is a surprisingly capable platform for
music production thanks to its OS X roots (and the excellent CoreAudio and CoreMIDI APIs that come with it,
as well as the new Inter-App Audio functionality – see above) and responsive touchscreen interface.
While it would be stretching the truth to suggest that you could cheerfully produce complete 'pro-quality' tracks
from start to finish using iOS apps, that day is surely coming; and for MIDI control, live performance and light,
on-the-go production, iOS (and, to a lesser extent, its less audio-centric rival Android) is a miracle of modern
technology.
Perhaps surprisingly, Apple's only contributions to the iOS music scene have been Garageband and their recent
Logic Remote controller for Logic Pro X. Still, both are absolutely excellent examples of a mobile DAW and
MIDI controller respectively.
Iterative quantise
While 'regular' quantise snaps selected notes rigidly to a note-value-based rhythmic grid, iterative quantise - an
option in a few DAWs, including Cubase, Nuendo and Renoise - lets you specify a percentage amount of
movement towards said grid. So with a setting of 50%, off-grid notes will move halfway closer to the nearest
grid division. It's useful for tightening up loose MIDI parts without making them sound overly robotic.
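The 'percentage movement towards the grid' idea reduces to a one-line interpolation. This is a hypothetical sketch (positions and grid sizes in beats), not any particular DAW's algorithm:

```python
def iterative_quantise(position, grid, strength):
    """Move a note's position part-way towards the nearest grid line.
    strength runs 0.0-1.0, where 1.0 is full 'hard' quantise."""
    nearest = round(position / grid) * grid
    return position + (nearest - position) * strength
```

So a note at 0.3 beats, with a 0.5-beat grid and 50% strength, moves halfway to 0.5 and lands at 0.4.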
The A to Z of computer music: J
Another round-up of digital production terminology
Computer Music, October 11, 2013, 12:25 UTC
Get acquainted with some more essential computer music terms as our lexical tour continues. This
edition is brought to you by the letter J.
Jack
Originally invented in the 19th century for use in telephone switchboards, the 1/4-inch jack plug/socket (born
alongside its 1/8-inch 'minijack' sibling) is the most ubiquitous of professional audio equipment connection
formats.
1/4-inch jacks are available in two contact configurations: TRS (tip, ring, sleeve) and TS (tip, sleeve). Each
contact carries its own signal, with TS jacks used for unbalanced mono connections and TRS jacks used for
unbalanced stereo or balanced mono connections. (1/8-inch jacks also come in TRRS form, with a second ring
enabling the addition of a microphone input signal for use on mobile phone headsets.)
Balanced jacks are the preferred standard in pro audio applications, using balancing circuitry in order to do
away with electrical hum and ground noise. Unbalanced jacks are usually only found connected to guitars,
hardware synthesisers and other line/instrument level devices.
JACK
The circular acronym for JACK Audio Connection Kit, a cross-platform API that enables applications running
under Linux, Windows and OS X (amongst other operating systems) to share audio devices and pass audio
streams to and from each other.
Primarily of interest to Linux-based musicians, it's essentially comparable to ASIO (Windows) and CoreAudio
(OS X), providing low-latency inter-app and server-based plugin-style audio streaming (and MIDI connectivity)
between DAWs, audio editors, plugins, etc.
Jitter
When the periodic output of a digital clock deviates from total regularity, the deviation is referred to as jitter. For
computer musicians, jitter is an issue with audio interfaces, in which it causes imperfect sampling/reproduction
of digital audio.
Fortunately, these days the digital clocks in even the cheapest of audio interfaces are incredibly accurate and
consistent, so jitter isn't the issue it once was.
1. A jitterless sampling of a sine wave.
2. Sampling a sine wave with jitter - note the erratic spacing of sample points.
3. The sine wave from diagram 2 as viewed in a wave editor, where the samples are presumed to be of equal
spacing - the spacing/amplitude errors cause the wave to be imperfect.
Jog Wheel
A feature on certain MIDI (and video editing) control surfaces, the jog wheel is a large rotary controller used to
move the playback head in a DAW or audio editor left and right.
Using it, you can shuttle the head quickly through a DAW project or - depending on your software and surface -
scrub backwards and forwards through an audio file, for example, analogous to manually controlling the reels
on a tape machine. Some DAWs also feature a virtual onscreen jog wheel.
Just Intonation
The standard tuning system used in Western music is 12-tone equal temperament (12 TET) - a non-justly
intonated scheme. Here, the octave is divided into 12 equally spaced pitches. This is easy to understand but
has the drawback that most musical intervals are not actually 100% in tune, often resulting in a subtle 'beating'
when playing chords (a bit like the effect produced by unison detune in a synth). For instance, in a 12 TET
major third interval, the ratio between the two notes' frequencies is about 1.26.
A more harmonious result could be had using a ratio of 1.25, which is equivalent to the simple fraction 5/4. Thus
the wave cycles of the two notes would 'sync up' precisely every four cycles, giving a more solid sound with no
beating. This is just intonation. There are alternate tuning schemes to facilitate the playing of JI intervals; the
caveat is that to make one key justly intonated, other chords/keys may not be usable due to them being way out
of tune. Thus we can see the advantage of 12-tone equal temperament: every musical key is equally out of
tune, so we can easily change key and play various chords across those keys with predictable results.
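The ratios quoted above are easy to check numerically: the equal-tempered major third spans four semitones, and each semitone multiplies frequency by the twelfth root of two. A minimal sketch:

```python
def et_interval_ratio(semitones):
    """Frequency ratio of an interval in 12-tone equal temperament:
    each semitone multiplies frequency by 2**(1/12)."""
    return 2 ** (semitones / 12)

et_third = et_interval_ratio(4)   # ~1.2599, as quoted above
ji_third = 5 / 4                  # the 'pure' just intonation third
```

The small gap between the two ratios is exactly what produces the subtle 'beating' in equal-tempered chords.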
Some soft synths (including most of LinPlug's range, u-he Zebra, Spectrasonics Omnisphere and Cakewalk
Z3TA+) are able to load .TUN files, which can retune the 12 notes to any system you like, including justly
intonated scales.
The A to Z of computer music: K
Join us in a world of kick drums, keyboards and more
Computer Music, November 27, 2013, 12:02 UTC
For those illiterate in the industry lingo, we're here to issue some indispensable information on the ins
and outs of the tools of the trade.
Key
The key of a piece of music refers to its tonic note and chord – ie, the note and/or triad that sits comfortably as
its final melodic/harmonic point – and the scale of notes on which it centres. Although the key of a piece is
usually described by its key signature (see below), that's not always an accurate indicator, since keys can
change within a piece without the key signature itself changing, through the use of accidental notes (sharps and
flats). Also, keys are major or minor, which isn't represented in the key signature – for example, the keys A
minor and C major use the same key signature but, of course, are based on different tonic notes/chords.
Key signature
On a musical stave, the series of sharps or flats immediately to the right of the clef indicate which white
(natural) notes should be sharpened (raised to the next semitone up) or flattened (lowered to the next semitone
below) throughout the piece, or until a new key signature is encountered within it. The key signature is simply a
guide that does away with the need to notate individual accidentals for every sharp or flat note in a piece. Every
major and minor key has a related key signature, although as mentioned above, the key signature doesn't
necessarily tell you what key you're in.
Keyboard (QWERTY)
The main method of data entry for computers, the QWERTY keyboard is ubiquitous in both physical and virtual
touchscreen form. Most DAWs let you use the QWERTY keyboard as a musical keyboard, typically with the top
two rows of letters generating MIDI note data in a layout as close as possible to the black and white keys of a
music keyboard, and the bottom row enabling down/up adjustment of octave range and velocity. Use of the
QWERTY keys for note entry/playing harks back to trackers, some of which were entirely keyboard controlled.
Keyboard (MIDI)
While any modern synthesiser or digital piano can output MIDI data, the MIDI keyboard as we generally think of
it today is a 'dumb' device with no onboard sounds of its own – a piano-style keyboard that connects to a Mac,
PC, iPad, etc, via USB and sends MIDI note and controller data to the DAW and any virtual instruments running
on it. Many MIDI keyboards also feature jack inputs for connecting sustain and expression pedals, as well as a
variety of controller knobs, faders and buttons, blurring the line between them and their control surface cousins.
A truly enormous range of MIDI keyboards is available at prices to suit all budgets, from two-octave
microkeyboards designed for use on the go, through aftertouch-enabled, feature-packed home studio hubs to
impressive full-on 88-key hammer-action performance instruments.
After the computer, DAW, audio interface and monitor speakers, the next thing on any newbie computer
musician's shopping list should be a MIDI keyboard.
Key input
The external signal entry point on the sidechain of a dynamics plugin (or hardware device) is called the key
input, and it's one of the most useful components in any compressor or gate. When a signal is detected at the
key input, it's used instead of the actual source signal to control the action of the compressor, which is still
applied to the source. Depending on the plugin, the key may be mixable with the source or not present at the
output. Keying has all manner of well established corrective and creative applications, from de-essing, to
ducking a bassline out of the way of a kickdrum, to rhythmically gating a pad sound, and many more.
Kick drum
Aka bass drum, the kick is the big, 'sideways' drum at the front of the drum kit, played via a footpedal (hence
the name) and – in very general terms – used to emphasise the first and third beats of the bar while the snare
pounds out the backbeat (beats two and four). In house, techno and many other dance styles, the kick drum
plays on all four beats of the bar – the famous 'four-to-the-floor' rhythmic standard.
Of course, kick drum sounds don't actually have to come from kick drums, and in electronic music, they're
frequently generated by drum machines, synths and processed samples rather than the real thing, most often
resulting in a synthetic, larger-than-life sound. If the drums are the bedrock of any track, then the kick drum is
the mantle that lies below it – while many tracks featuring drums will still sound relatively coherent with the
snare drum removed, few can lose the kick without coming across as decidedly flaccid.
Kilohertz (kHz)
Defined as "the number of cycles per second of a periodic phenomenon", the hertz (named after the physicist
Heinrich Hertz) is the unit of frequency in the International System of Units (SI). A kilohertz is simply a thousand hertz, so rather than describe the sampling frequency of a CD-quality waveform as 44,100Hz, for example, we would generally say 44.1kHz. The pitch of a soundwave is defined by its frequency in hertz/kilohertz, as are many other things, including digital clock speeds, EQ and filter frequencies, oscillator rates and more.
The A to Z of computer music: L
L is for low-pass, latency, Logic and more
Computer Music, November 23, 2013, 11:00 UTC
Transforming the layperson into lord of the music-making landscape, we continue our look at the
essential lingo of computer music
Latency
Latency is a perennial issue with software music production systems, affecting both monitoring when recording
audio and live MIDI playback of virtual instruments.
The problem can manifest as a delay between either the audio input occurring and the sound being heard from
the speakers or the key of a MIDI keyboard being pressed and the target virtual instrument responding.
Latency, measured in milliseconds, is simply the result of the host computer and its audio interface processing
the incoming signal, which necessarily takes a certain amount of time.
This processing is done within a buffer, the size of which can be adjusted via a software control panel to reduce
latency at the expense of stability (in the shape of clicks, pops and other artifacts being introduced at overly
small buffer sizes).
As long as you keep your latency below 10ms, you shouldn't have any trouble playing virtual instruments, while most audio interfaces include a 'direct monitoring' option that passes the input straight to the output for zero-latency monitoring, albeit without the option of monitoring through plugin effects.
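The relationship between buffer size, sample rate and latency is simple enough to compute directly. Here's a minimal sketch (the function name is ours, not from any particular DAW or driver API):

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """One-way latency in milliseconds added by an audio buffer."""
    return 1000.0 * buffer_size / sample_rate

# A 256-sample buffer at 44.1kHz adds roughly 5.8ms each way
buffer_latency_ms(256, 44100)
```

Note that real-world round-trip latency also includes the output buffer and converter/driver overhead, so measured figures will be higher than this idealised number.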
Legato
Italian for 'tied together', legato is both a performance technique and a setting on many soft synths.
Playing legato simply means to play a series of notes contiguously with no gaps between them. It differs from
portamento - which also involves playing a contiguous series of notes - in that the pitch doesn't slide from note
to note.
On a synthesiser, activating the legato setting tells it not to retrigger the amp envelope (and sometimes other LFOs/envelopes) until all keys are released.
Level
In basic terms, the level of an audio signal is simply its volume or, a little more precisely, the 'height' of its waveform. However, the word can be applied to any variable parameter, even if not always entirely accurately - 'filter cutoff level' or 'envelope attack level', for example.
LFO
Low Frequency Oscillator. Featured on any virtual synth or sampler worth its salt, as well as many effects
plugins, an LFO is an oscillator that outputs a waveform oscillating at (usually) a sub-audible frequency - up to
around 20Hz. The waveforms on offer are generally of the standard sine/square/saw/triangle variety, and the
depth of the modulation will be controlled by its own setting.
LFO signals can be used to modulate all manner of target parameters, including filter cutoff frequency, oscillator
pitch, envelope depth… whatever the instrument/effect designer sees fit to include. Perhaps the most
recognisable LFO-based sound of modern times is the wobbled dubstep bassline.
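As a sketch of the idea, here's a sine LFO sweeping a filter cutoff value over time (the function names are ours for illustration; real synths compute this per-sample inside the engine):

```python
import math

def lfo(rate_hz, t):
    """Sine LFO output (-1 to 1) at time t seconds."""
    return math.sin(2 * math.pi * rate_hz * t)

def modulated_cutoff(base_hz, lfo_rate_hz, depth_hz, t):
    """Filter cutoff swept up and down around its base value by the LFO;
    depth_hz sets how far it travels in each direction."""
    return base_hz + depth_hz * lfo(lfo_rate_hz, t)
```

With a 2Hz LFO and 500Hz depth, a 1,000Hz cutoff sweeps between 500Hz and 1,500Hz twice per second.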
Limiting
When a digital audio signal exceeds 0dBFS at the outputs of a digital (ie, software) mixer, the
waveform's peaks are clipped, resulting in unpleasant artifacts.
Limiting (applied via a dynamics plugin called a limiter) can alleviate this by preventing the signal exceeding a
user-defined level. Limiting is really just fast-attack compression to a very high ratio (up to infinity:1, which is
called 'brickwall limiting'), so as well as serving as a preventative measure, it can also be used creatively to
reduce the dynamic range of a signal or mix.
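In its crudest form - instantaneous attack, no release smoothing - brickwall limiting amounts to clamping sample values, something like this naive sketch:

```python
def brickwall(samples, ceiling=1.0):
    """Idealised brickwall limiter: no sample may exceed +/-ceiling.
    Real limiters apply smoothed gain reduction rather than hard clamping,
    which avoids the distortion this naive version introduces."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```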
Linear-phase EQ
Standard digital EQs - and all analogue EQs - don't only adjust the relative levels of frequencies within a signal;
they also adjust the phase of those frequencies, meaning that some frequencies get pushed further back in
time than others.
In most cases, this is not audible, and it's certainly never held anyone back from making a good mixdown. Still,
it can be a concern with certain types of signals (eg, those with sharp transients), mastering or when performing
parallel processing.
A linear-phase EQ is one that doesn't affect the phase of the frequencies in a signal. The tradeoff is that such EQs tend to use more CPU, and they introduce latency and pre-ringing, which is a bit like a tiny reverse reverb preceding sonic events.
Line level
A range of standardised audio signal levels used for passing signals between consumer (hi-fi, TV, games
consoles, etc) and professional (audio interfaces, mixers, etc) equipment.
For consumer (unbalanced) gear, the nominal line level is -10dBV, while for professional (balanced) gear it's +4dBu. Line level is much higher than the levels of signals from microphones and guitars, which have to be amplified to line level to become usable in the studio.
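The decibel figures involved are ratios relative to a reference voltage - 1V for dBV, 0.775V for dBu - so converting them back to volts is straightforward:

```python
def dbv_to_volts(dbv):
    """dBV uses a 1V reference."""
    return 1.0 * 10 ** (dbv / 20)

def dbu_to_volts(dbu):
    """dBu uses a 0.775V reference."""
    return 0.775 * 10 ** (dbu / 20)

# Consumer -10dBV is about 0.316V; professional +4dBu is about 1.23V
```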
Linux
An increasingly popular alternative to Microsoft's Windows and Apple's Mac OS X operating systems, Linux is a
free and open source Unix-like OS. There are many versions of it (called 'distributions' or 'distros') available,
including the extremely popular Ubuntu, which is designed with consumers and 'creatives' in mind.
Numerous DAW, audio editing and other music applications are available for Linux, which also offers various
systems for incorporating VST plugins. While it wouldn't be true to say that Linux is 'more powerful' than
Windows or OS X for music production, everything running in it is completely free, so it's certainly a great option
for many musicians, cash-strapped or not.
Logic Pro
Currently at version 10 (or X), Apple's Mac-only DAW has a long and illustrious history. Originally developed
and owned by German company Emagic (with ancestry tracing back to C-Lab Notator on the Atari ST in the
80s), Apple bought that company in 2002 with the primary intention - it's always been speculated - of putting its
developers to work on the hugely successful GarageBand.
Despite persistent rumours of its imminent abandonment, Logic has been actively (if slowly) developed ever
since, with this year's upgrade to Logic Pro X being one of its most radical yet, featuring a complete overhaul of
the GUI, the impressive Drummer virtual drummer and a brand new collection of excellent MIDI effects.
Logic Pro X is one of the most powerful DAWs on the market, yet also - thanks to Apple's general enormity - one of the most affordable.
Look-ahead
Found on many dynamics plugins (compressors, limiters and gates), look-ahead enables the processor to 'see
into the future' in order to facilitate ultra-fast attack times (down to 0ms).
Rather than actually reading ahead of the current playback position, however, look-ahead splits the input signal
into two, one of which is delayed. The non-delayed signal is then fed into the compressor to trigger its envelope,
while the delayed signal is the one that actually gets processed. This delay has to be countered by plugin delay
compensation in the host DAW.
Loop
Both a noun and a verb, 'loop' can either refer to an audio sample edited so that its end runs into its beginning seamlessly on repeated playback, or the process of cycling a section of music (in a DAW or audio editor) so that it, too, repeats seamlessly.
Sampled loops have been a cornerstone of modern music since the dawn of the sampler, when classic
breakbeats were looped up to serve as the rhythmic base for hip-hop, breakbeat, drum 'n' bass, pop and
numerous other styles. Since then, pretty much all genres have incorporated loops to some extent, and an
entire industry has developed around producing and selling them.
While any audio file cut to a musically logical length for repetitive playback qualifies as a loop, a few formats
exist specifically to be looped, automatically following changes in tempo (and, in some cases, pitch) in a
compatible host DAW. These include Acidized WAV, Apple Loops and REX files.
Lossless compression
The process of converting a WAV or AIFF file to a compressed format like MP3 or OGG involves losing an
amount of psycho-acoustically negligible data. Known as 'lossy' compression, this can reduce file sizes by a
factor of up to around five with no apparent reduction in quality.
Lossless compression (in formats like FLAC and Apple Lossless) uses linear prediction to compress files by a
factor of around two with no loss in quality - audible or otherwise.
Loudness
Defined by the American National Standards Institute as "that attribute of auditory sensation in terms of which sounds can be ordered on a scale extending from quiet to loud", loudness is about more than just amplitude level - it's largely a psychoacoustic property, determining how impactful a piece of music feels as well as how loud it's actually perceived to be.
Maximising loudness is a prime consideration for many of today's producers, particularly at the mastering stage, although the broadening of dynamic range seems to be making a bit of a welcome comeback at present.
Low-pass filter
A filter type that attenuates all frequencies above a user-set cutoff point. The low-pass is perhaps the most
ubiquitous of filter types and certainly the most sonically recognisable when swept from closed to open, creating
the rising 'underwater' sound made famous by tracks such as Fatboy Slim's Rockafeller Skank.
The A to Z of computer music: M (part one)
So massive we've had to split it into two
Computer Music, December 06, 2013, 10:53 UTC
So massive we've had to split it in two, read up on the letter M in our mammoth monthly map of the
music-making mother-tongue.
Mac OS X
The operating system powering Apple's Macintosh computers, based on Unix and first released for server
usage in 1999, followed by the desktop client in 2001.
Up until the latest release, each version of OS X was named after a big cat - Cheetah, Panther, Lion, etc. OS X 10.9, however, unleashed in October 2013, marks a shift in nomenclature to California landmarks, the first being Mavericks, a famous surfing hang-out.
After Microsoft Windows, OS X is the second most popular desktop operating system in the world, renowned for
its ease of use and stability.
Macro controls
An increasing number of DAWs and plugins - including Ableton Live, Apple Logic Pro X and Native Instruments'
Massive - feature macro controls. These are customisable banks of knobs and buttons, each of which can be
assigned to one or more controls from anywhere within the plugin or, in the case of DAWs, instruments and
effects in the channel strip/rack.
Macros enable 'performance' setups to be quickly put together, giving ready access to a custom set of
parameters of the user's choice in one handy interface.
Major and minor
In Western music, major and minor describe a range of melodic and harmonic properties, including key,
intervals, scales and chords.
Adhering to a formal structure that's existed for centuries, minor keys, scales, chords and intervals are generally
perceived as having a 'sad' or 'moody' sound, while major ones sound more upbeat and 'happy'.
While we don't have space here to get into detail, you can start your investigations into these essential musical
concepts by learning to appreciate the difference between major and minor scales.
Start on any note of the keyboard and play this sequence to get a major scale (a tone is two steps up the
keyboard, including black notes; a semitone is one step): T T S T T T S. For the minor version of the same
scale, play: T S T T S T T.
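Those tone/semitone patterns translate directly into MIDI note numbers (a tone is two semitones). A quick sketch:

```python
MAJOR = "TTSTTTS"          # the major-scale pattern from the text
NATURAL_MINOR = "TSTTSTT"  # and the minor version

def build_scale(root, pattern):
    """Walk a T/S interval pattern up from a root MIDI note number."""
    notes = [root]
    for step in pattern:
        notes.append(notes[-1] + (2 if step == "T" else 1))
    return notes

build_scale(60, MAJOR)  # C major from middle C: C D E F G A B C
```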
Mapping
The process of assigning a target parameter to a controlling source parameter or controller is known as
mapping, as is the process of assigning samples to notes on the keyboard in a sampler.
For example, when you set one of the abovementioned macros up to control, say, the cutoff and resonance
parameters of a filter, the connection made between them is a mapping, as is linking a knob on your hardware
MIDI controller to a parameter on a synth.
Massive
Undoubtedly Native Instruments' most successful and best-known software synthesiser, Massive is a semi-modular virtual analogue and wavetable instrument with a huge array of waveforms onboard, extraordinary
modulation capabilities, a bank of Macro controllers for designing custom control interfaces, and, most
importantly, a sound so powerful it could sink a battleship.
Massive has played a huge part in defining the sounds of dubstep and bass music in general, thanks to its
idiosyncratic low-end tones and razor-sharp leads.
Master buss/channel
The final output channel of any audio mixer - hardware or software. The individual channels, groups and busses
that make up the mix all end up 'summed' at the master channel, which is where the final level is set and any
master effects (those intended to process the whole mix) are added. Exceeding 0dBFS is to be avoided at all
costs on this bus, as it will result in nasty digital clipping.
Mastering
Once a track is written and mixed, the final stage of production is mastering. Originally a necessity to prepare
singles and albums for the physically sensitive requirements of the vinyl cutting lathe, mastering these days is
about maximising transient punch, dynamic and frequency-wide clarity and control, and stereo spread, as well
as making sure that the mix comes across well on as wide a variety of playback systems as possible.
It's a highly specialised process calling for the very careful application of EQ, dynamics and other effects (either
separate plugins or hardware modules, or a mastering suite such as iZotope's industry standard Ozone) to a
finished mix, either by placing such effects on the master bus of an otherwise finished project or by applying
them to a rendered mixdown of the track.
In the wrong hands, mastering effects can do more harm than good, so consider commissioning an experienced mastering engineer for critical tasks - indeed, many would argue that mastering your own tracks is best avoided altogether.
Megabyte (MB)
Abbreviated as MB, a megabyte is an amount of disk or memory storage space equal to 1,000,000 or 1,048,576 bytes, depending on context: the former reported by modern operating systems for disks, the latter for memory.
The reason for the discrepancy is that memory sizes are described using the '2 to the 20th power' rule, which yields the value 1,048,576. For music production purposes, however, all you really need to know is that 1MB
equates to about one minute of compressed near-CD-quality audio (MP3, say) or about six seconds of
uncompressed CD-quality audio.
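That six-second figure falls straight out of the CD format's data rate:

```python
# 44,100 samples/sec * 2 channels * 2 bytes (16 bits) per sample
CD_BYTES_PER_SECOND = 44100 * 2 * 2   # 176,400 bytes/sec

def cd_seconds_per_megabyte(megabyte=1_000_000):
    """Seconds of uncompressed CD-quality stereo audio per (decimal) MB."""
    return megabyte / CD_BYTES_PER_SECOND

# About 5.7 seconds of uncompressed CD audio per megabyte
```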
Memory (RAM)
The array of chips in any computer that stores data for immediate or short-term processing, RAM (Random
Access Memory) differs from other storage media in that data can be read from and written to it in any order, as
contrasted with the sequential access necessitated by the physical design of hard disks, DVD-ROMs, etc - ie, the linear movement of the read/write head or laser.
RAM is also much, much faster than disc-based storage, as it needs to be to keep up with the CPU and other
speedy onboard components that come together to create a computer.
While the RAM in your Mac or PC is volatile (ie, when the system is powered down, its content is lost), non-volatile memory also exists, such as the ROM (read-only memory) in your old hardware synths and sound modules, and the flash memory in your USB drive.
Microphone
A device that converts acoustic vibrations into electrical signals, the microphone is clearly one of the most
important inventions in the history of music technology.
Several types of microphone exist, at a wide range of prices. The cheapest, most robust type is the dynamic
microphone, which is particularly well suited to high-volume sources such as guitar amps, snare drums and live
vocals. The more delicate condenser and ribbon types make better options for less explosive sources, such as
acoustic instruments and studio vocals.
There are other types available, too, but for music production, dynamic, condenser and ribbon are the
essentials that you need to be familiar with.
Mid/side recording and processing
After capturing a sound source using a forward-facing cardioid mic and a sideways-facing figure-of-8 mic, or converting an existing stereo audio file within a mid/side-capable plugin, the resulting mono 'mid' and stereo
'sides' signals can be manipulated and processed independently (EQ just on the sides, compression just on the
mid, for example), then turned back into normal stereo audio for conventional use within your DAW or sampler.
Mid/side processing is incredibly useful for controlling stereo imaging and the weight of sounds in the centre of
the mix, which is an important factor in making stereo tracks compatible with mono playback systems.
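The maths behind mid/side conversion is just sums and differences of the left and right channels - a minimal sketch:

```python
def encode_ms(left, right):
    """Stereo to mid/side: mid is the mono sum, side is the stereo difference."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def decode_ms(mid, side):
    """Mid/side back to stereo - an exact inverse of encode_ms."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right
```

Any processing applied to the mid or side signal between the two calls (EQ on just the sides, say) ends up folded into the decoded stereo output.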
MIDI
Musical Instrument Digital Interface. Standardised in 1983 (credit for the invention itself goes to Sequential
Circuits' Dave Smith and Chet Wood), MIDI is a protocol agreed on and adhered to ever since by electronic
instrument manufacturers and software developers that enables their devices to talk to each other.
For example, a MIDI controller keyboard typically sends out MIDI note, velocity, aftertouch and controller data that your DAW is designed to understand and use appropriately for triggering and controlling plugin instruments.
Despite its age and the occasional attempt to improve on it, the MIDI specification is essentially the same today
as it was when first conceived, and we don't envision it changing significantly any time soon.
Mid-range
In the audible frequency spectrum, mid-range describes the range 300Hz-2.5kHz or thereabouts. Crucially,
this is the area that contains the majority of intelligible vocal information, as well as the dominant frequencies of
many instruments.
While many novice producers put plenty of effort into solidifying the high- and low-ends of the mix, the mids
often get overlooked, resulting in a lack of presence and punch. No doubt about it, the mid-range is every bit as
important as the more immediately satisfying highs and lows.
The A to Z of computer music: M (part two)
More musical definitions as we return for the second
half of the multi-part Ms
Computer Music, February 07, 2014, 11:27 UTC
More musical definitions get demystified as we return for part two of the multi-part Ms. Check out part
one here.
Mixer
Any device that combines two or more signals and sums them (adds them together) to a single output (and/or
several sub-outputs) can be termed a mixer, but by far the most common mixers you'll come across are the
project-controlling one in your DAW and the oscillator/sample blending ones in your synths and samplers.
The mixer in your DAW is a thing of truly incredible power compared to its hardware forebears, with no ceiling in
terms of the number of channels and effects it can host, precision EQ and perhaps compression built-in, and
limitless signal routing potential for creating subgroups and auxiliary effects loops.
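'Summing' really is just addition - each input scaled by its channel gain, then added sample by sample. A sketch:

```python
def sum_channels(channels, gains):
    """Mix equal-length channels to one output; gains act as per-channel faders."""
    length = len(channels[0])
    return [sum(gain * ch[i] for ch, gain in zip(channels, gains))
            for i in range(length)]

# Two channels, the second faded down to half level
sum_channels([[0.2, 0.4], [0.6, -0.2]], [1.0, 0.5])
```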
Mixing
The bringing together and balancing of the individual tracks that make up a song is called mixing, and far from
being just a technical or corrective exercise, it's an art form that requires understanding of the principles
involved and a lot of practice.
At the most basic level, getting a mix together means setting volume levels for each sound and their positions in
the stereo panorama, but that's really only the beginning.
EQ, compression, limiting, reverb, delay and all manner of other effects processors are called on during mixing
in order to get your disparate track elements sounding like a cohesive piece of music with depth, character,
punch and just the right frequency balance.
Modes
Dating back to the Middle Ages but with roots in ancient Greece, in music theory, the seven modes are a set of
scale/key types, each comprising its own series of intervals and characteristic sound.
Named Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian and Locrian (after seven different peoples of the
ancient Greek world), they're still used in many styles of music and are well worth investigating by anyone
looking to go beyond the conventional major and minor scales - which are in fact the Ionian and Aeolian modes.
Multimode filter
The mode of an audio filter determines the range of frequencies that it attenuates beyond the cutoff point - a low-pass filter reduces frequencies above the cutoff, while a high-pass does away with those below it, for example.
A multimode filter, then, is simply one that can be switched between more than one mode, and you'll be hard
pressed to find a modern filter plugin that doesn't offer at least low-, high- and band-pass modes.
Modular synthesis
In 1970, Moog Music changed the world of synthesis with the release of the Minimoog, the first self-contained,
relatively portable analogue synthesiser. Prior to that, synths were entirely modular, with each component
module (oscillators, filters, envelopes, sequencers, etc) constituting a discrete box of circuitry that required
connecting to other modules via cables called 'patch cords'.
They were also enormous, heavy and expensive. Although the modular synth is still very much alive and kicking
in both software and hardware forms, it would be accurate to say that fixed-path and semi-modular (in which a
limited range of modules can be mixed and matched) instruments dominate the market.
Modulation
In simple terms, to modulate means to exert change over time, and in music technology, modulation is a key
element of synthesis, sampling and effects processing.
Within your synth, sampler or effects plugin, a modulator, such as an LFO, envelope or MIDI Continuous Controller, can be assigned to control a target parameter, such as filter cutoff, oscillator frequency or pan position.
With an LFO running at 10Hz assigned to a filter's cutoff frequency, for example, said frequency will be moved
up and down ten times a second, with the distance travelled in each direction determined by the modulation
depth. How these assignments are made and depth-adjusted will depend on the instrument in question, but
many soft synths do it via an interface called a 'modulation matrix', typically with menu-driven selection of
sources/targets, and numeric entry of depth.
While modulation is a key feature of all synths, FM (frequency modulation) is a type of synthesis based entirely on the technique. With FM, digital oscillators known as 'operators' (up to eight in some synths) are chained together so that one (the modulator) modulates the frequency of another (the carrier), often to create a characteristically hard, bright sound.
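A two-operator FM voice can be sketched in a few lines - the modulator's output is folded into the carrier's phase (strictly speaking this is phase modulation, which is how most 'FM' synths are actually implemented):

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, index):
    """One sample of two-operator FM at time t (seconds).
    index (modulation depth) controls the brightness of the result."""
    modulator = math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * modulator)
```

With index at 0 this collapses to a plain sine wave; raising it adds the sidebands that give FM its hard, bright character.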
Other forms of modulation that you'll come across in synthesis are ring modulation (outputting the sum and difference of the frequencies contained in two signals), amplitude modulation (the level of one signal being controlled by the level of another) and phase modulation (modulation of the phase of a signal, also involved in many FM synth designs).
input audio signals shifted in time and/or pitch by LFOs, envelope followers or other audio signals.
Also read: The ultimate guide to effects: Modulation
Monitor
A loudspeaker designed to facilitate critical listening and appraisal in the music studio, as opposed to the pure
recreational listening of a 'domestic' playback system, is a monitor.
The technical difference between the two is that a monitor is built to be as flat and neutral-sounding as possible
(for the price paid - in general, you pay more for greater accuracy), while most hi-fi speakers are made to flatter
and 'improve' the sound.
Monitors come in many shapes and sizes, with the number and type of woofers, tweeters, mid-range drivers
and ports on board varying. They all fall into one of two categories, though: active or passive. Active monitors
have matched amplifiers built in (and thus cost more), while passives require amplification by a separate unit.
Mono
A mono (short for monaural) signal is one that only comprises a single channel, usually as a result of being recorded via a single microphone.
The stereo mixes of most modern music tracks actually comprise numerous mono sounds, each positioned
within the stereo spectrum and processed with effects that put them in a stereo context of their own.
A mono sound is moved between the left and right channels of a stereo mix using the pan control on its mixer
channel - when placed directly in the centre, it's represented equally in the left and right channels.
Stereo signals placed on stereo channels, on the other hand, may be balanced rather than panned - ie, the
relative levels of their left and right component channels are adjusted.
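Panning a mono signal is typically done with a constant-power law, so its perceived level stays even as it moves across the stereo field. A sketch (the function name and mapping are ours for illustration):

```python
import math

def pan(samples, position):
    """Constant-power pan of a mono signal.
    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1) * math.pi / 4      # map position to 0..pi/2
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right
```

At centre, both channels get a gain of about 0.707 (-3dB), so the summed power matches the hard-panned extremes.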
Monophonic
A monophonic synthesiser is one that can only play one note at a time. Monophony was once a technological
limitation that synth manufacturers raced to overcome, but although almost all hardware and software synths
designed today are polyphonic, switching them to monophonic behaviour can be advantageous when playing
bass parts and lead lines in particular, as many synths feature a 'glide' or portamento function for smoothly
sliding the consecutive note pitches of a monophonic part into each other.
Millisecond (ms)
Abbreviated 'ms', a millisecond is a thousandth of a second, and it's a unit of measurement that you'll come
across time and time again in music production. From audio interface latencies and channel offsets to
compressor envelope speeds and delay times, time-based parameters within your DAW and its plugin
instruments and effects are commonly set in milliseconds, hundredths of seconds and seconds.
Multiband processing
Any effects unit able to operate independently on multiple frequency ranges at once is a multiband processor.
The most ubiquitous multiband effect is the multiband compressor, with which you can, for example, compress
the low end of a mix heavily, while leaving the highs dynamically 'open'.
The widths of the bands within a multiband unit are adjusted by moving the crossover frequencies between them, and each band has its own set of parameter controls with which to apply the particular effect.
Multi-effects plugin
An effects plugin offering more than one kind of processor within a single interface. Notable examples include Camel Audio CamelSpace (delay, filter, reverb and trance gate), PSP N2O (filters, EQ, delay, reverb, pitchshifting, distortion and more) and Sugar Bytes Artillery and Turnado (too many to list!).
Multisampled instrument
A collection of audio samples recorded, compiled and mapped in a sampler as a set, usually with the aim of
'virtualising' a real-world instrument as realistically as possible.
For example, to create a multisampled guitar, you'd record samples of every string on your guitar playing every
possible note at a range of volume levels and perhaps via several playing techniques, then map those samples
in your sampler with pitch corresponding directly to note position, and volume levels corresponding to velocity.
With today's computers boasting vast amounts of RAM and fast drives for handling high volumes of data, the
current generation of multisampled instruments can reach multi-gigabyte sizes and sound staggeringly
convincing in the hands of a skilled player or MIDI programmer.
The A to Z of computer music: N
Navigate the next installment of our lexicon of
industry jargon
Computer Music, March 05, 2014, 10:51 UTC
Fire up those neurons and get your nerd on as we navigate another list of industry jargon you should
never be caught without.
Native
In the context of music technology, a native application or plugin is one that runs on your computer, requiring no
dedicated external DSP (digital signal processing) hardware. The vast majority of music software is native, but
platforms such as Universal Audio's UAD system require proprietary DSP chips (on internal expansion cards
and external boxes) to operate.
A big selling point of this approach was once that such units took the signal processing strain away from the host Mac or PC, but with today's powerful CPUs, this is less of a concern, and there is nothing about DSP-based plugins that makes them sound fundamentally any better (or worse, for that matter).
Universal Audio's UAD-2 Satellite is an example of dedicated external DSP hardware
Native Instruments
Berlin-based software and hardware developer founded in 1996 and now regarded as one of the music
technology industry's greatest success stories. NI's earliest releases were the Generator modular synth (the
first version of which, 0.96, required a proprietary audio interface!) and Transformator sampler/granular synth,
which eventually came together to form the groundbreaking Reaktor - now at version 5 and still a key product in
the company's line-up.
Various classic instrument emulations followed in the early '00s (Pro-5, FM7 and B4), before the core line-up
that today forms the backbone of NI's epic Komplete package was born - Battery, Absynth, Massive, Kontakt
and the aforementioned Reaktor. Supplementing these are an ever-expanding collection of high-quality effects
plugins, Reaktor/Kontakt-based instruments and audio interfaces - not to mention an entire sub-industry of
sampled instrument developers regularly releasing new libraries for Kontakt, which has become the industry-standard sampler as a result.
As well as the Komplete line of music production software, NI's other tentpoles are the hugely successful
Traktor series of DJ applications and the younger but increasingly popular Akai MPC-inspired Maschine range
of hardware/software hybrid grooveboxes.
NI's debut software synth Generator, released in 1996
Nearfield monitor
A small monitor speaker designed to be installed within just a few feet of the listener's seating position, and thus
the most appropriate monitoring solution for the average home studio.
The advantages of nearfield monitors are their low cost, small footprint and compact spacing distance; the
disadvantages are poor off-axis performance and relatively limited bass response (an issue often resolved with
the addition of a separate sub-bass monitor). Most importantly, though, when it comes to sound quality, the music producer is spoilt for choice these days - there are plenty of truly impressive options out there for anyone with a few hundred quid to spend.
NN-19 and NN-XT
Propellerhead's Reason DAW includes two "pitched" samplers, NN-19 and NN-XT. NN-19, included in v1, is a
simple device largely superseded by NN-XT, introduced in Reason 2. As well as being much more capable than
its diminutive sibling in just about every area (limited parameter automation being the only exception), NN-XT
has become a popular choice among third-party sample developers, who often include NN-XT patches in their
multi-format libraries.
Noise
Any unwanted, unintentionally captured or generated low-level component of an audio signal can be considered
noise. It could be mains hum, tape hiss, radio interference, background leakage, vinyl crackle, mic bleed...
anything that isn't considered a desirable part of the sound being captured.
Getting rid of or reducing noise is a routine and relatively straightforward part of the music production process,
involving EQ, gating, expansion, clip editing and - in recent years - dedicated noise reduction plugins.
While noise is generally seen as a 'bad thing', there are many who would argue that a bit of tape hiss or
transformer hum actually enhances a recording, imbuing it with that oft-cited "desirable analogue warmth".
Noise floor
The level of background noise in an audio signal (be it an individual element of a track or the final mix output) is
called the noise floor, measured in decibels (dB). With an analogue setup, the noise floor is always an issue
(albeit usually a minor one) thanks to tape hiss, transformer hum from mixers and outboard gear, and noisy
instrument outputs.
In the entirely software-based studio, noise is less of an issue, coming primarily from incidental sounds captured
via microphones, hardware instrument output noise, and the inherent background noise of audio interface
preamps and converters (they contain analogue componentry). Software instruments and effects do not generate
background noise unless they've been specifically coded to do so - as part of an analogue emulation, for instance.
Noise oscillator
Most virtual (and physical) analogue synths feature a noise oscillator. This is simply an 'extra' oscillator that
generates a noise signal, of which there may be several types available.
White noise is the most ubiquitous, constituting all frequencies at the same level. Other noise types include
brown, pink and blue, each applying its own mathematically defined loudness/frequency curve to generate a
particular sound. Noise oscillators are particularly useful when programming synthesised snare drums and
cymbals, as well as for adding a touch of 'whoosh' to pads and other sounds.
Pink noise is weighted to become less powerful as frequency increases. -3dB per octave, to be precise
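To illustrate why noise oscillators suit percussion programming, here's a minimal Python sketch (not from the article - the function name and envelope values are our own) of a white-noise burst shaped by a decaying amplitude envelope, the classic starting point for a synthesised snare or hat:

```python
import math, random

def noise_burst(duration=0.2, sample_rate=44100, decay=30.0):
    """White-noise samples shaped by an exponential decay envelope."""
    n = int(duration * sample_rate)
    return [random.uniform(-1.0, 1.0) * math.exp(-decay * i / sample_rate)
            for i in range(n)]

burst = noise_burst()   # a short snare-like 'hit', loudest at the start
```

Run the result through a filter and you're most of the way to a classic analogue-style snare.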
Non-destructive audio editing
Destructive audio editing is, as the name implies, the process of making changes to an audio clip or sample in a
DAW, audio editor or sampler that are permanently rendered into the source file itself. Non-destructive audio
editing, then, is the manipulation of an audio part within an application without modifying the source file at all.
Non-destructive edits are applied on the fly by the DAW or editor, merely referencing the audio file rather than
writing to it, and thus demanding a more powerful computer than destructive editing, but these days even the
lowliest of machines is more than up to the job. Non-destructive edits can be (destructively) rendered as audio
files - new or overwriting the original - if desired.
Normalisation
Applying gain to an audio clip to bring it up to a standardised level, usually 0dB. Peak normalisation simply
boosts the whole clip until the peak sample (the highest point in the waveform) reaches the target level.
Loudness normalisation, on the other hand, brings the level of the clip up so that it reaches the desired RMS
(average) level. Any modern audio editor or DAW will offer normalisation as a menu option.
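The maths behind peak normalisation is simple enough to sketch in a few lines of Python (a rough illustration with a hypothetical function name, not production DSP code):

```python
def peak_normalise(samples, target_db=0.0):
    """Scale a clip so its loudest sample reaches the target level in dBFS.
    Assumes floating-point samples in the -1.0..1.0 range."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)                  # silent clip: nothing to do
    gain = 10 ** (target_db / 20.0) / peak    # linear gain to hit the target
    return [s * gain for s in samples]

print(peak_normalise([0.1, -0.25, 0.5]))      # [0.2, -0.5, 1.0] - peak now at 0dB
```

Note that the whole clip gets the same gain - normalisation changes level, not dynamics.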
Notation
Long before the advent of the MIDI note editor (see below), representing music visually was done on paper,
using standardised regional systems of notation - indeed, for many it still is.
Less "literal" than the piano roll-based MIDI editor, traditional Western notation is an esoteric (to the untrained
eye) but effective system involving notes of various temporal denominations positioned on a five-line stave, the
lines representing pitch. Key and time signatures set the overall context of the notation, and a diverse library of
symbols and markings is used to express volume, articulation, "effects" and more.
Many DAWs feature notation or score editors, presenting MIDI parts in notation form, and there are even
applications dedicated to producing printed notation, which is - and perhaps always will be - the standard format
for instrumentalists and composers the world over.
Notch filter
A type of filter that attenuates the signal within a specific frequency range, allowing all frequencies above and
below to pass through unaffected. On any filter plugin worth its salt, the centre frequency will be adjustable and
the bandwidth modifiable via the Q or resonance control.
Note (MIDI)
In a MIDI sequencer, musical notes are represented by Note On events with associated pitch and velocity (at
least) settings, followed by Note Off events. The first tells the target sound source to start playing a note at the
specified pitch and velocity; the second tells it to stop playing that note.
Graphical MIDI note editors enable notes to be drawn into a piano roll/timeline display as "blocks", with Note On
defined by the start of the note on the timeline, pitch specified by vertical position on the piano roll, velocity and
other parameters set in a controller lane, and Note Off represented by the end of the block on the timeline.
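At byte level, Note On and Note Off are three-byte MIDI channel voice messages. A quick Python sketch (the helper functions are our own; the status-byte layout is standard MIDI):

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80   # channel voice message status nibbles

def note_on(channel, pitch, velocity):
    """Three-byte Note On: the status byte carries the 0-15 channel number."""
    return bytes([NOTE_ON | channel, pitch, velocity])

def note_off(channel, pitch):
    """Note Off at velocity 0 (a Note On with velocity 0 is an equivalent idiom)."""
    return bytes([NOTE_OFF | channel, pitch, 0])

# Middle C (note 60) at velocity 100 on MIDI channel 1 (index 0)
print(note_on(0, 60, 100).hex())   # '903c64'
```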
Note editor
The heart of any MIDI sequencer, the note editor - aka piano roll editor, MIDI editor, matrix editor, etc - is a
graphical interface into which MIDI notes (see above) are entered, either by recording a MIDI keyboard, drum
kit, breath controller or other input device, or drawing in manually using the computer mouse.
The note editor is a grid layout with a vertically arranged piano keyboard on the left-hand side and a horizontal
timeline running along the top. The pitch of each note is determined by its position on the keyboard, while
placement in time is represented by its position on the timeline. The note editor will also incorporate one or
more controller lanes, for parameters such as velocity, aftertouch and MIDI Continuous Controllers.
The A to Z of computer music: O
The next instalment of our jargon-buster
Computer MusicApril 02, 2014, 14:51 UTC
Step onboard our alphabetical omnibus once more to observe some oft-used occupational objects and
Octave
Along with the semitone, tone and fifth, the octave is one of the fundamental musical intervals. A single octave
contains the eight diatonic steps of a given scale (C to C in C major, for example).
Any note (C4, for example) is exactly double the frequency of the same note an octave below (C3), and half the
frequency of the same note an octave above (C5). The octave in which a musical piece, passage, phrase or
single note is played determines its 'register'.
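The doubling relationship is easy to verify in Python using the standard equal-temperament formula (the function name is ours):

```python
def note_frequency(midi_note, a4=440.0):
    """Equal-temperament frequency of a MIDI note; A4 (440Hz) is note 69."""
    return a4 * 2 ** ((midi_note - 69) / 12)

c4, c5 = note_frequency(60), note_frequency(72)   # C4 and the C an octave above
print(round(c5 / c4, 6))                          # 2.0 - an octave up doubles the frequency
```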
Ogg Vorbis
Combining the Vorbis lossy audio compression codec with the Ogg container format, Ogg Vorbis is a higher
quality (for the same size) alternative to MP3, AAC and WMA. Some music applications support it as an export
and/or import format.
Omni Mode
While all synths and samplers - virtual and real-world - are capable of receiving input on one of 16 MIDI
channels at a time, some of them can also be switched to omni mode, under which they'll accept input from all
16 channels at once. For multitimbral instruments, the ability to receive on multiple channels is essential for
triggering multiple sounds at once; for a monotimbral instrument, though, omni mode isn't a particularly useful
feature, which is why you'll rarely see it included in modern soft synths.
Omnisphere
The flagship virtual instrument of Californian developer Spectrasonics. Omnisphere is one of the most powerful
and sonically impressive synthesisers ever made, boasting not only an astonishingly broad range of synthesis
types, modulation sources, effects and performance controllers, but a stunning 42GB library of source samples
and presets.
The popular synth Omnisphere often crops up in our Producer Masterclass tutorials
Omnidirectional microphone
An omnidirectional microphone is one that responds equally to sounds coming at it from any direction. While
the response pattern can be generally considered even within a 360 degree 'sphere', the body of the
microphone will have an effect on soundwaves approaching from behind. Because of that, omnidirectional mics
are typically designed to be as small as possible. The Shure SM63LB is a good example of a high-quality
dedicated omni, but there are many mics on the market that include omnidirectional among their switchable
polar patterns.
On-axis/off-axis
Microphones are designed with the intention of the sound source being in a very specific position for optimum
performance - the 'sweet spot' in terms of frequency response. When your microphone and source are thus
positioned, the mic is 'on-axis'; when it or the source is moved so as to deliberately change this optimum
position for creative purposes (changing the relayed sound of a guitar cab by pointing the mic at the outer edge
of the cone rather than the centre, say), they're 'off-axis'.
Loudspeakers are also said to be on- or off-axis, depending on whether the listener is placed in the right spot to
have all output frequencies delivered to their ears at full amplitude - this is almost always such that each ear is
positioned perpendicular to the front of the relevant speaker. Off-axis performance (ie, how far off-axis you can
be before it actually matters) is one of many ways by which speakers are qualitatively measured.
One-shot
A one-shot sample is simply a short audio clip that isn't looped or part of a multisample set. Standard examples
of one-shots would include drum hits, spot FX and orchestral stabs.
Open source
The term 'open source' describes any software that's formally made available under license for anyone to study,
modify, use and redistribute as they see fit. Open source licenses might include certain requirements to be
fulfilled by developers - that the source code for any modifications also has to be made freely available, for
example.
Operating system
The software that constitutes the fundamental operational environment on your Mac, PC, smartphone or tablet
(not to mention your hardware synth, sampler, TV, car, etc) under which all your applications and hardware
drivers run, is its operating system.
The most ubiquitous and well-known operating systems are Windows, OS X, Linux, iOS, Android and Windows
Phone, although there are many others. The modern operating system is a staggeringly complicated feat of
software engineering, in which designers and engineers have to equally prioritise stability, usability,
compatibility, resource management and many other considerations in order to deliver what is, to all intents and
purposes, the heart and soul of the general computing experience for the full spectrum of users, from media
consumers to office workers, gamers to bankers, photographers to music producers.
Operator
In an FM synth, each carrier or modulator (the oscillators - see below) and each associated modifier (an
envelope, say) is called an operator. Carrier operators generate audible signals, while modulator operators
rapidly adjust (modulate) the pitch of the carriers to which they're assigned. A particular routing of carrier and
modulator operators is known in some synths as an 'algorithm'. Operator is also the name of Ableton's FM
synth, available only in their Live DAW.
Ableton's nifty Operator synth
OS X
Apple's current Unix-based operating system (see above) for its Macintosh computer platform, and one of two
main OSes used for music production, the other being Windows. Currently at version 10.9 (Mavericks), OS X's
Core Audio and Core MIDI APIs offer plug-and-play functionality with much hardware, plus the ability to
'aggregate' multiple audio devices so that they appear as one to your music software, and - useful in certain
situations - acceptable latency and smooth performance when using the system's built-in audio. Overall, OS X
is known for its stability and relatively simple user interface.
The majority of music software and hardware is compatible with both OS X (Mac) and Windows (PC) - ie, it's
'cross-platform' - though there are exceptions, such as Logic Pro (Mac-only) and Sonar (for PC). OS X also has
a lot less freeware for musicians.
OSC
Open Sound Control. A cross-platform protocol for communication between audio and video applications that's
deployed for a range of musical and multimedia-related purposes, mostly relating to real-time performance
control, triggering and networking. OSC has gained a fair bit of traction on Mac/PC and iOS in recent years,
with forward-thinking developers capitalising on its greater resolution and flexibility compared to MIDI to come up
with novel controllers for instruments, lighting rigs, audio/video installations and more.
Oscillator
Generates a cyclical (oscillating) signal of a fixed or user-adjustable shape. In a synthesiser, the oscillator
cycles fast enough to produce a signal in the audible range (from around 20Hz up to around 20kHz, though in
practice, you'd rarely play a musical note whose fundamental pitch goes this high) as the raw tone to be
'sculpted' and modulated using the synth's filters, LFOs (Low Frequency Oscillators are another type of
oscillator used to impart movement on other parameters, usually so slow as to be below the audible range),
envelopes, etc.
Analogue synths generally have between one and three audible-range oscillators, and typical waveshapes on
offer would be sine, square, sawtooth (up and down) and triangle. Other types of synth use oscillators in
different ways - an FM synth pairs 'carrier' oscillators and 'modulator' oscillators (see Operator), for example,
while a wavetable synth uses very short looped samples, which aren't oscillators in the traditional sense, but
sample memory.
Overdubbing
A standard recording technique, overdubbing simply means to record (as opposed to import) a new part on top
of one or more existing parts. For example, the drummer, bass player and guitarist have all laid their tracks
down and gone home before the singer hauls themselves out of bed and makes it to the studio. Luckily, they
can overdub their vocals over their bandmates' performances, recording their voice onto a separate track, ready
to be mixed in with the rest of the band.
Overdubbing can also refer to replacing badly recorded/played parts with better ones. In the early, pre-multitrack
days, overdubbing involved bouncing the entire existing recording while mixing in the newly played part -
resulting in yet another 'everything mixed together' single recording - but the advent of multitrack recording made
overdubbing much more convenient and flexible.
Oversampling
A process used 'under the hood' by music software whereby an audio signal is processed at a sample rate
several times higher than the original, in order to reduce digital aliasing (distortion caused when processing
generates frequencies above the Nyquist frequency - half the sample rate). Oversampled processing thus
typically sounds 'better', particularly at the higher end of the frequency spectrum. It works by 'upsampling' the
incoming signal, processing it, then 'downsampling' it back to the original sample rate for output.
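Aliasing itself is easy to demonstrate in Python: a tone above the Nyquist frequency produces exactly the same samples as a 'folded down' tone below it (a numerical sketch, not audio code):

```python
import math

fs = 44100                 # sample rate; Nyquist frequency is fs/2 = 22050Hz
f = 30000                  # a tone above Nyquist...
alias = fs - f             # ...lands at 14100Hz instead

a = [math.sin(2 * math.pi * f * n / fs) for n in range(64)]
b = [-math.sin(2 * math.pi * alias * n / fs) for n in range(64)]

# The sampled values are identical: once sampled at 44.1kHz, the 30kHz tone
# is indistinguishable from a polarity-flipped 14.1kHz tone
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-9)   # True
```

Oversampling pushes that fold-over point higher, so distortion products land above, not inside, the audible band.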
The A to Z of computer music: P
From parallel processing to pre/post-fader, our
glossary continues
Computer MusicMay 13, 2014, 10:09 UTC
Point by point, we peruse everything from pink noise to peaks in our expert glossary of all things
Parallel processing
A mixing and sound design technique whereby a signal is duplicated (either via mixer bussing, copying of the
part onto a second track in a DAW, or inside a plugin with appropriate capabilities) and one instance is
processed with an effect.
Compression is the classic usage: by balancing the compressed and uncompressed signals, you can achieve
the desired blend of compressed character and uncompressed punch. Parallel compression is also known as
'New York-style compression'. These days, many dynamics plugins feature a wet/dry mix control, so parallel
compression can be applied without the need to run two signals.
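The underlying dry/wet balance is just a weighted sum, as this Python sketch shows (function name and sample values are ours, purely illustrative):

```python
def parallel_blend(dry, wet, mix=0.5):
    """Linear blend of a dry signal with its processed duplicate:
    mix=0.0 is fully dry, mix=1.0 fully wet."""
    return [(1 - mix) * d + mix * w for d, w in zip(dry, wet)]

dry = [0.9, 0.1, 0.8, 0.05]          # punchy, dynamic source
wet = [0.5, 0.4, 0.5, 0.4]           # the same part, heavily squashed (illustrative values)
blended = parallel_blend(dry, wet)   # keeps the peaks while lifting quiet detail
```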
Patch cord
A short cable used to connect modules in a modular synth or points on a patchbay. In the software studio,
virtual patch cords are largely eschewed in favour of modulation matrices and 'behind the scenes' signal routing
schemes, although some software represents them literally - Propellerhead Reason, most famously.
Peak
In literal terms, the peak in an audio signal is the highest point reached by its waveform, although the word is
also often just used to refer to any notably high point, not just the highest.
The peak is a useful reference for dynamics processing - setting a limiter to reduce peaks to a particular
absolute level, for example - and signal levelling in a mixer, where peak metering gives a visual indication of the
signal level at any given moment, making it easy to prevent it exceeding 0dBFS.
Percussion
Any instrument that's played by striking with the hand or a stick/beater/brush/hammer/mallet, shaking or
scraping is a percussion instrument - drums and cymbals of all kinds, xylophone, woodblock, guiro, cabasa,
caxixi... even the piano, strictly speaking, since its strings are excited by hammers.
Phantom power
Unlike passive dynamic microphones, the active electronics of more sensitive condenser mics require a power
source to operate. Since the mic's only input is via an XLR socket, this power needs to come from the receiving
equipment to which the mic is connected, via the same cable that carries the audio signal. So, any mixer,
preamp or audio interface worth its salt will feature 48V phantom power (so called as it's 'invisible' to dynamic
mics, which ignore it).
Phase
The phase of a cyclical wave is, essentially, its position in time, described in degrees, with 360 degrees being a
complete cycle. For example, 'rotating' a sine wave by 180 degrees 'pushes it along' the timeline by half its
cycle length, turning peaks into troughs and vice-versa.
Adjusting the phase of a synth waveform can affect the attack of the sound, and adjusting an LFO's phase will
alter its 'start point'. Phase is a particularly important issue in mixing, where multiple signals recorded from the
same source need to be kept in phase in order to prevent phase cancellation (see below).
Phase cancellation
When two or more signals recorded from the same source are out of phase (see above), phase cancellation is
the result. This can range from a mild 'hollowing' of the sound to complete silence when two signals are 180
degrees out of phase. Any multi-miked sound source has the potential to suffer from phase cancellation, so that
includes drum kits, guitars (multiple cabs/mics and perhaps a DI too), room mic pairs, etc.
Most mixers (real and virtual) feature so-called phase invert switches, but these actually invert the polarity,
flipping the whole waveform on its horizontal axis, turning it 'upside down'. This can be useful in certain miking
situations, such as mics placed on the top and bottom of a drum, which will naturally move in 'opposite'
directions when the drum is struck - flipping the polarity of one is necessary to avoid phase cancellation.
Other techniques for correcting phase issues include manually dragging one of the offending audio parts left or
right a touch, and using dedicated phase rotation processors.
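The worst case - total cancellation from a polarity flip - takes only a few lines of Python to demonstrate:

```python
import math

N = 100
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]   # one full cycle
inverted = [-s for s in sine]                              # polarity flipped: 180 degrees out

summed = [a + b for a, b in zip(sine, inverted)]
print(max(abs(s) for s in summed))                         # 0.0 - total cancellation
```

Real-world sources are never this clean, which is why partial cancellation ('hollowing') is far more common than silence.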
Physical modelling
A synthesis technique in which the physical properties of an instrument (string, skin, resonant chamber, etc)
and the object used to play it (drumstick, plectrum, hand, etc) are calculated using mathematical models in
order to achieve a realistic simulation. Various parameters of the instrument and the 'exciter' are usually
adjustable - eg, string length, drumhead tension, tube width, stick material and so on - and the upper and lower
limits of these parameters often range into the unreal, enabling the creation of gigantic drums or absurdly small
wind instruments, for example.
Ping-pong delay
A type of stereo delay plugin that alternates its 'taps' between left and right channels. Being a very 'standard'
effect, ping-pong delay can be found in the stock effects library of most DAWs.
Pink noise
White noise contains all frequencies at the same power level; pink noise, on the other hand, is a randomly
generated signal with the power reducing as the frequency increases, leading to a sloping spectrum with equal
power in each octave. Some synths feature pink among the colours available to their noise oscillators.
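The 'equal power per octave' claim follows from pink noise's 1/f power density, which we can check numerically in Python (a maths sketch, not a noise generator):

```python
import math

def band_power(f_lo, f_hi, steps=10000):
    """Numerically integrate pink noise's 1/f power density over a band."""
    df = (f_hi - f_lo) / steps
    return sum(df / (f_lo + (i + 0.5) * df) for i in range(steps))

# Every octave carries the same power: here, 100-200Hz vs 1-2kHz
print(round(band_power(100, 200), 4), round(band_power(1000, 2000), 4))   # 0.6931 0.6931
```

Both octaves integrate to ln(2), whatever their position in the spectrum - which is exactly what a -3dB/octave slope delivers.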
Pitchbend
Smoothly sweeping the pitch of a synth or sampler up or down, either manually using a MIDI keyboard pitch
wheel, or automatically via automation. Generally, pitchbend affects the entire output of an instrument, but there
are a few instruments and controllers available that support polyphonic pitchbend, including Camel Audio's
Alchemy soft synth and KMI's QuNexus (below).
Pitchshifting
The process of raising or lowering the pitch of an audio signal by a fixed musical interval (usually represented in
cents and semitones), either in a DAW or sample playback instrument, or via a dedicated plugin effect.
Although digital technology enables the pitch and duration of samples to be controlled independently, some
tape and vinyl emulation plugins offer old-style pitchshifting/timestretching, whereby the playback speed of the
'tape' or 'vinyl' is increased or decreased in order to change the pitch.
Pitchshifting shouldn't be confused with frequency shifting, which raises or lowers all included frequencies by
the same amount, rather than by the same ratio.
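The ratio-versus-offset distinction is clearest with a harmonic pair of frequencies, as in this Python sketch (function names are ours):

```python
def pitch_shift(freq, semitones):
    """Pitchshifting multiplies every frequency by the same ratio."""
    return freq * 2 ** (semitones / 12)

def freq_shift(freq, offset_hz):
    """Frequency shifting adds the same number of Hz to every frequency."""
    return freq + offset_hz

# Shifting a 100Hz/200Hz harmonic pair up an octave keeps the 2:1 relationship...
print(pitch_shift(100, 12), pitch_shift(200, 12))   # 200.0 400.0
# ...while adding a flat 100Hz breaks it, making the result inharmonic
print(freq_shift(100, 100), freq_shift(200, 100))   # 200 300
```

That broken harmonic relationship is why frequency shifters sound metallic and 'wrong' in a way pitchshifters don't.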
Plugin
A piece of software that 'plugs into' a host application, extending its functionality in some way. In music
software, there are three types of plugin: instruments, audio effects and MIDI effects. Instruments are triggered
with MIDI and they output audio, while plugin effects process an input audio signal and output the results.
Plugin instruments and effects include virtual synthesizers, samplers, compressors, EQs, reverbs, delays, etc,
either emulations of real-world devices or completely original designs. MIDI effects plugins, meanwhile, modify a MIDI
stream - arpeggiators, chord generators, etc.
Due to its cross-platform compatibility, the most common audio plugin format by far is Steinberg's VST (Virtual
Studio Technology), but Apple's Mac-only AU (Audio Units) comes a close second, followed by Avid's RTAS
(Real Time AudioSuite) and AAX (Avid Audio Extension), with MOTU's MAS (MOTU Audio System) and various
Linux-friendly others (LADSPA, DSSI et al) bringing up the rear. Oddly, there's no accepted standard format for
MIDI plugins, with individual DAWs supporting only their own proprietary systems.
Polyphony
A measure of the number of simultaneous voices playable on a given synthesizer or sampler. If a synth is
'8-voice polyphonic', for example, it can play up to eight voices at the same time. If a ninth voice is played without
releasing one of the first eight, it will replace one of the other voices (known as 'note stealing') or simply not play
at all, depending on the instrument and its settings.
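Note stealing can be sketched in a few lines of Python - a toy 'oldest note first' policy, since real instruments use varying strategies:

```python
class PolySynth:
    """Toy model of 'oldest note' voice stealing (real synths vary)."""
    def __init__(self, polyphony=8):
        self.polyphony = polyphony
        self.voices = []              # pitches currently sounding, oldest first

    def play(self, pitch):
        if len(self.voices) >= self.polyphony:
            self.voices.pop(0)        # steal the oldest voice
        self.voices.append(pitch)

synth = PolySynth(polyphony=2)
for note in [60, 64, 67]:             # play a triad on a 2-voice synth
    synth.play(note)
print(synth.voices)                   # [64, 67] - the first note was stolen
```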
Portamento
Also known as glide, activating the portamento function on a synth means that one note's pitch will smoothly
slide up or down to the pitch of the next, instead of just 'jumping' from one discrete pitch to another. Depending
on the synth in question, the time that this slide takes could be adjustable, fixed or scaled (determined by the
distance of the slide).
Pre/post-fader
The auxiliary send points on a mixer can be positioned pre- or post-fader – ie, before or after the channel
volume fader. Pre-fader means that the level you set on the volume fader has no effect on the level of the send,
while post-fader means that changing the volume of the channel also changes the gain of the signal going into
the send.
Pre-fader sends are useful for setting up monitor mixes in the studio, where the musicians being recorded won't
want their monitoring levels to be affected by changes made to the control room mix. When the band have gone
home, though, post-fader can be considered 'default', as the mix engineer will generally want channel volume
changes to be reflected in their effects sends.
Preamp
Short for 'preamplifier', a preamp is an electronic amplifier used to boost the very low-level signal from a guitar
or microphone up to the 'line' level required by a mixer or audio interface for effective monitoring and recording.
While all mixers and modern audio interfaces have preamps built in, there are also dedicated units available at
a huge range of price points (up to many thousands of pounds), including vintage classics desired for their
particular sound, and ultra-transparent models designed to apply as little colouration as possible.
Pulse width modulation (PWM)
The on-the-fly adjustment of the cycle width (expressed as a percentage) of a synthesizer's pulse (square)
wave via a modulation source (typically an LFO) in order to introduce a thickening effect.
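As a rough Python sketch (our own naive, unfiltered implementation - real synth oscillators are band-limited to avoid aliasing), PWM is just a pulse wave whose duty cycle tracks an LFO:

```python
import math

def pulse(phase, width):
    """Naive pulse wave: +1 for the first 'width' fraction of each cycle."""
    return 1.0 if (phase % 1.0) < width else -1.0

def pwm_sample(n, sample_rate=44100, freq=110.0, lfo_rate=2.0):
    # A 2Hz sine LFO sweeps the pulse width between 10% and 90%
    width = 0.5 + 0.4 * math.sin(2 * math.pi * lfo_rate * n / sample_rate)
    return pulse(freq * n / sample_rate, width)

wave = [pwm_sample(n) for n in range(44100)]   # one second of PWM'd pulse wave
```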
Punch in/out
The overwriting of a section of a recording between a predetermined start (punch in) and end (punch out) point,
to replace a mistake or revise said section, leaving the rest of the recording untouched. In a DAW, punch in and
out can be triggered manually in real-time (via a MIDI controller or key command), or automatically by setting
the in and out points on the timeline.
The A to Z of computer music: Q-R
You get two letters for the price of one in the latest
instalment of our jargon buster
Computer MusicJune 04, 2014, 09:26 UTC
Become a qualified computer music quizmaster by raiding our knowledge repository for the required
Quantise
When you want to get your recorded MIDI (or sliced audio) parts perfectly in time, Quantise is the function to
reach for. When you apply quantisation to a MIDI part, any notes within it that aren't perfectly aligned to a
specified time-based grid (eighth-notes, 16th-note triplets, etc) are 'snapped' to it.
Quantise is obviously just the thing, then, for fixing badly performed recordings, but it's also effective at
straightening out overly 'loose' (but not actually 'bad') parts to work them into rigidly timed electronic tracks.
Many DAWs also feature iterative quantise, whereby notes are moved towards the grid lines by a user-set
amount or percentage, with 100% being full alignment.
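Under the hood, quantising is just rounding note times to the nearest grid line, as this Python sketch shows (the function is ours, with a strength parameter mimicking iterative quantise):

```python
def quantise(times, grid, strength=1.0):
    """Snap note start times (in beats) towards the nearest grid line.
    strength=1.0 is full quantise; lower values mimic iterative quantise."""
    out = []
    for t in times:
        target = round(t / grid) * grid
        out.append(round(t + (target - t) * strength, 10))
    return out

loose = [0.02, 0.49, 1.13]            # slightly off-grid 16th-note hits
print(quantise(loose, 0.25))          # [0.0, 0.5, 1.25] - snapped tight
print(quantise(loose, 0.25, 0.5))     # moved halfway towards the grid
```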
Quarter-note
American terminology for the English crotchet, being a note one quarter the duration of a semibreve (whole
note), which in turn is equivalent in duration to one bar in 4/4 time.
Quaver
English terminology for the American eighth-note, a note one eighth the duration of a semibreve (whole note),
which in turn is equivalent to four quarter-note beats - one bar in 4/4 time.
Quavers and quarter-notes shown on the traditional stave and in a piano roll.
QWERTY keyboard
As opposed to a MIDI/piano keyboard, the QWERTY keyboard is the one you use to interact with your
computer, phone or tablet, entering alphanumeric data and other symbols, controlling cursor movement and
'selection', and so on. In many DAWs (and some software instruments), it can also be pressed into service as a
makeshift MIDI keyboard. It's named after the first six letters of the top row of letter keys.
RAM
Random Access Memory. A form of computer data storage designed for very high and roughly consistent
read/write access speed no matter what order the data stored is read/written in. While your computer's hard
disk stores your apps and documents, its RAM is where all that data is sent to be handled by the OS (which is
also loaded from your hard disk into RAM!).
RAM is much faster than disk-based storage - it needs to be to keep up with the CPU and other speedy
components that come together to create a computer. While the RAM in your Mac or PC is volatile (when the
system is powered down, its contents are lost), non-volatile memory also exists, like the ROM (Read-Only
Memory) in old hardware synths and sound modules, and the flash memory in USB drives.
Reference track
When mixing or mastering a track, it can be immensely helpful to compare your ongoing results to a finished
'reference' track in the same style. The goal isn't necessarily to copy the sound of the reference track but rather
to match its dynamic feel, energy, frequency spectrum, etc.
Even the most experienced of producers and engineers will use reference tracks, so don't think of them as
'stabilisers' for novices learning how to mix - they're part and parcel of the established production process.
Release
The release stage of an envelope, when triggered, determines the amount of time it takes for the signal's
processing to drop from the sustain level to zero. The release stage could be triggered via the literal releasing
of a key on a keyboard when the envelope is on a synth or sampler (fading the sound down to silence), or an
audio signal dropping below the threshold parameter when it's part of a compressor ('withdrawing' the
compression), for example.
Remix
Originally meaning to simply take a master multitrack tape recording that's already been mixed and mix it again
to correct or improve it, remixing today refers to the process of taking a track and remaking it entirely, while
retaining some or all of its key elements in order to keep it recognisable as the same core piece. This might be
done for purely artistic reasons or in order to increase the market for a particular release - making a
chart-friendly remix of an underground hip-hop number, for example.
Render
Closely related to resampling to the point of being essentially the same thing much of the time, to render means
to capture an entire signal chain (be that a single part, or the output of a single, group or master mixer channel)
in a single audio file. When you freeze a track to free your CPU from the plugins running on it, or bounce down
a finished mix to a stereo master, you're rendering it.
Resampling
Whenever you change a piece of digital audio in any way (effects processing or timestretching, say) and render
or bounce it down as a new piece of digital audio, we call this resampling. Resampling can be done for creative
purposes or just to claim back valuable CPU cycles. Once a part or track is resampled, plugins used on it can
(and indeed, should) be disabled.
Resolution
In the same way that the resolution of a graphical display determines the level of sharpness and detail with
which images are represented on it, the resolution of an audio signal determines the effective dynamic range
(according to the number of discrete volume levels - ie, bit depth) and upper range of frequencies (sample rate)
it's able to represent.
The industry standard for mastered music delivery is 16-bit/44.1kHz, but at the production stage, higher
resolutions (particularly in bit depth) are used in order to optimise quality.
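The link between bit depth and dynamic range is simple to compute - each bit doubles the number of levels, adding roughly 6dB (a quick Python sketch of the standard formula):

```python
import math

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits)."""
    return 20 * math.log10(2 ** bit_depth)

print(round(dynamic_range_db(16), 1))   # 96.3 - CD-quality audio
print(round(dynamic_range_db(24), 1))   # 144.5 - typical production bit depth
```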
Resonance
Also known as 'emphasis', 'Q' or 'Q factor', the resonance parameter on a filter controls the width of a band of
boosted frequencies immediately adjacent to the cutoff frequency. Its purpose is to add bite and aggression to
filtered sounds, and very effective it is, too!
Reverb
Reverberation, or reverb, is the reflection of soundwaves off the surfaces of an environment. In music
production, the simulation of reverb is a cornerstone effect, used to give instruments and sounds spatial
context, ambience and 'air'.
There are two types of reverb plugin: algorithmic, which applies mathematical signal processing to the signal to
place it in a wholly 'conjured' space; and convolution, which imposes samples of real-world spaces and
equipment (impulse responses) on the signal for impressive realism. However, reverb isn't all about rooms,
halls and arenas - some plugins are designed specifically to generate utterly unreal, 'sci-fi' effects.
REX file
An audio file format generated only by Propellerhead's ReCycle and Reason applications. Primarily intended to
be looped, the audio clip within a REX file is sliced, and the DAW playing it back can move the slices closer to
or further away from each other in order to 'timestretch' the loop while still maintaining the original sound of
each component of it. Some DAWs can also trigger the slices via MIDI for on-the-fly rearranging of loops via a
MIDI keyboard.
Propellerhead's Dr. Octo Rex rack unit can arrange, manipulate and play back multiple REX loop files.
Ring modulation
A processing technique in which two waveforms are multiplied together, outputting the sum and difference of
the signals' frequencies. All you really need to know is that ring modulation produces brash, metallic sounds,
and that you'll find it in quite a few synth and effects plugins.
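If you fancy seeing why multiplying two waveforms gives the sum and difference frequencies, it's just the product-to-sum trig identity - here's an illustrative Python check (not code from any particular plugin):

```python
import math

def ring_mod(f1, f2, t):
    """Multiply two sine oscillators sample-by-sample (ring modulation)."""
    return math.sin(2 * math.pi * f1 * t) * math.sin(2 * math.pi * f2 * t)

def sum_diff_mix(f1, f2, t):
    """The same signal written as the difference and sum frequencies:
    sin(a)sin(b) = 0.5*[cos(a-b) - cos(a+b)]."""
    return 0.5 * (math.cos(2 * math.pi * (f1 - f2) * t)
                  - math.cos(2 * math.pi * (f1 + f2) * t))

samples = [ring_mod(440, 100, n / 44100) for n in range(64)]
expected = [sum_diff_mix(440, 100, n / 44100) for n in range(64)]
max_error = max(abs(a - b) for a, b in zip(samples, expected))
```

Ring-modulating a 440Hz tone with a 100Hz tone, then, leaves you with 340Hz and 540Hz - neither harmonically related to the original, hence the clangorous, metallic character.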
RMS
Root Mean Square. A calculation used in metering to define the average level of a signal over a short period of
time (a few hundred milliseconds by default in most systems), as opposed to peak, which defines the highest
level of a signal over time.
The RMS level gives a good indication of the average loudness and dynamic range of a signal, although it's not
as accurate as peak metering for the detection of 'overs' and clipping.
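The calculation itself is exactly what the name says - square, take the mean, then the root. A minimal Python sketch (illustrative only):

```python
import math

def rms(samples):
    """Root Mean Square: square each sample, average, then square-root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def peak(samples):
    """Peak level: the single largest absolute sample value."""
    return max(abs(s) for s in samples)

# A full-scale sine wave peaks at 1.0 but has an RMS of ~0.707 (-3dB) -
# which is why RMS meters read lower than peak meters.
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
```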
Roll-off
The roll-off of a filter is the 'steepness' of its attenuation slope beyond the cutoff frequency, expressed in
decibels per octave. A 24dB/octave low-pass filter, for example, reduces the amplitude of frequencies within a
signal at a linear rate of 24dB for every octave they rise above the cutoff - a steep roll-off, resulting in much
'sharper' filtering than a 6dB, 12dB or 18dB/octave model.
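The dB-per-octave arithmetic is easy to sketch for an idealised filter (real filter responses curve near the cutoff, so treat this as a rough guide only):

```python
import math

def attenuation_db(freq, cutoff, slope=24):
    """Idealised low-pass roll-off: `slope` dB of attenuation for every
    octave the frequency sits above the cutoff."""
    if freq <= cutoff:
        return 0.0
    return slope * math.log2(freq / cutoff)

# 4kHz is two octaves above a 1kHz cutoff, so a 24dB/octave filter
# pulls it down by 48dB.
drop = attenuation_db(4000, 1000, slope=24)
```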
Root note
'Root note' has two definitions in music production. The first is the note within a chord that gives it its name - the
note G in a G major chord, for example. The second is in sampling: when a single sample is mapped across
multiple notes in a sampler (the note G mapped to F, F#, G, G# and A, say), the note on which it plays back in
its natural, un-timestretched form (that central G) is the root note.
Rotary encoder
A knob on a hardware controller that rotates endlessly with no start and end point and is mapped via the
software to which it's connected. The rotary encoders on Ableton's Push controller/instrument, for example,
switch between adjusting volume, pan, plugin parameters, effects sends, etc, as determined by the currently
selected 'layer' within the software.
Get tactile with your controller's rotary encoders.
Royalty-free
A royalty-free work is any piece of audio, video or media of any other kind that can be used by anyone within a
project of their own as they see fit, without any payment of any kind to the originator or their agents. The only
thing you can't do with royalty-free material is resell it. The samples you can download from our own
SampleRadar are royalty-free, so you can commercially release tunes made using them.
The A to Z of computer music: S (part 1)
Sampling, saturation, sidechaining and more
Computer Music | July 02, 2014, 14:05 UTC
Getting stranded by sonic speak? Take yourself back to school with our ongoing guide to the specifics
of music production.
Sample and Hold
Usually abbreviated to S&H, Sample and Hold is a synthesis technique by which a 'snapshot' of an oscillator
waveform is captured and 'held' for a period of time, then used to modulate another parameter - filter cutoff or
oscillator pitch, for example. The oscillator used for S&H will very often be a random noise generator, making
the sampled waveform particularly useful for the creation of old sci-fi soundtrack-style beeps and burbles.
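The idea is simple enough to sketch in a few lines of Python - grab a value, hold it, grab the next (names are ours, for illustration):

```python
import random

def sample_and_hold(source, hold_ticks):
    """Capture a value from `source` and hold it for `hold_ticks` steps
    before capturing the next - the classic stepped-modulation shape."""
    out, held = [], None
    for n, value in enumerate(source):
        if n % hold_ticks == 0:   # time to take a new snapshot
            held = value
        out.append(held)
    return out

random.seed(1)
noise = [random.uniform(-1, 1) for _ in range(16)]
stepped = sample_and_hold(noise, 4)   # each random value held for 4 ticks
```

Feed `stepped` to a filter cutoff or oscillator pitch and you have those burbling sci-fi computer noises.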
Sample
Any digitally stored piece of audio can be considered a sample, but the word is generally used in relation to the
short clips and loops used in music production, rather than full tracks. The act of sampling involves recording a
sound into your computer via an analogue-to-digital converter (your audio interface) and using the resulting
sample either 'as is' or processing and manipulating it into a whole new sound.
Samples can either be placed directly on audio tracks within a DAW, or imported into a sampler or sampling
synth, where they can be played up and down the keyboard via your MIDI keyboard, triggered 'percussively'
using a MIDI drum kit, etc. They can also be employed as 'one-shots' or 'loops', the former playing back only
once when triggered, the latter cycling repeatedly until triggering or playback stops.
The advent of sampling was a milestone in music production history and, without it, dance music, electronica,
hip-hop, drum 'n' bass and many other styles either wouldn't exist at all or would sound very different indeed.
Today, there's a sizeable commercial market for pre-produced samples, with companies such as Loopmasters,
Sample Magic, Zero-G and countless others pumping out packs of loops and one-shots to serve all musical
needs imaginable, as well as lavishly produced multisampled instrument libraries that give the home-studio-based producer affordable access to astonishingly realistic virtual drum kits, orchestras, guitars, pianos and
pretty much any other real-world instrument you care to name.
Sample Rate
When an analogue signal is converted to a digital one, the number of times a second that the waveform's
position is captured and logged as a discrete digital value is called the sample rate. The higher the sample rate,
the greater the range of frequencies that can be captured - some modern DAWs are capable of handling audio
at a sample rate up to 384kHz (384,000 captures per second).
However, the average musician really needn't go beyond 96kHz at most. Indeed, the highest frequency that can
be represented by a digital signal is half that of the sample rate - known as the Nyquist frequency - so with
human hearing maxing out at around 20kHz, it's no coincidence that the sample rate for CDs (the original digital
consumer format) is 44.1kHz. That particular rate ("CD quality") remains the standard for audio playback today,
and it can faithfully represent audio content across the whole 0-22,050Hz range.
Another point to consider when selecting sample rates, of course, is storage space - every doubling of sample
rate doubles the size of the audio file, after all.
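Both points - the Nyquist limit and the storage cost - come down to simple arithmetic. A hypothetical Python sketch:

```python
def nyquist_hz(sample_rate):
    """The Nyquist frequency: half the sample rate."""
    return sample_rate / 2

def pcm_bytes(seconds, sample_rate, bit_depth, channels):
    """Size of uncompressed PCM audio. Note that doubling the sample
    rate doubles the file size."""
    return int(seconds * sample_rate * (bit_depth // 8) * channels)

cd_minute = pcm_bytes(60, 44100, 16, 2)       # one minute of CD-quality stereo
hi_res_minute = pcm_bytes(60, 88200, 16, 2)   # double the rate, double the bytes
```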
Sampler
Once upon a time, a sampler was a physical device used to record and play back samples. Today, the
hardware sampler is all but dead, replaced by more powerful DAW-integrated soft samplers, most of which are
only used to play back imported samples rather than record them.
A sampler can be used to map related samples (a series of sampled piano notes, for example) across the
range of the keyboard for pitched play, or to place discrete sounds (eg, the various elements of a drum kit or a
collection of loops) on individual notes. As well as their position on the pitch 'axis', samples can also be mapped
up and down the MIDI velocity range - eg, sampled gentle taps on a snare drum for low velocities, heavy hits
for higher ones. Most samplers also feature a range of synthesis controls (filters, envelopes, LFOs, etc) for
manipulating their loaded samples, as well as effects, mapping and playback options, and more.
Saturation
Saturation originally referred to the 'overmagnetising' of magnetic tape, resulting in overt distortion of the
recorded signal or gentle 'warming', depending on how heavy-handedly it was applied. These days, the word is
used to describe the subtle distortion of any valve, transformer or other circuit or stage through lightly
overdriving it.
Although tape is no longer a feature of the average recording studio, and valves and transformers are
becoming rarer, the unarguable benefits of saturation can still be had via the increasing number of plugins on
the market that emulate them.
Scrubbing
The ability to shuttle playback backwards and forwards through audio at a user-controlled speed (either using
the mouse or the jogwheel on a hardware control surface). Scrubbing is a software-specific feature (ie, not a
universal one) that helps position the playhead precisely in applications that support it.
Semitone
An interval of half a tone on the western musical scale, and the distance between adjacent keys on a piano
keyboard - F to F#, say, or B to C. An octave comprises 12 semitones.
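In equal temperament, each semitone corresponds to a fixed frequency ratio of 2^(1/12) - an illustrative Python sketch:

```python
SEMITONE = 2 ** (1 / 12)   # equal-temperament semitone frequency ratio

def shift_semitones(freq, semitones):
    """Move a frequency up or down by a number of semitones."""
    return freq * SEMITONE ** semitones

a4 = 440.0
a5 = shift_semitones(a4, 12)    # 12 semitones = one octave = double the frequency
a3 = shift_semitones(a4, -12)   # and down an octave halves it
```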
Send
The send control on a mixer channel taps the signal from that channel and literally sends it to a dedicated
'return' channel, which is fed back into the mix. The main use for this setup (known as a send/return loop) is
effects processing: with an effect (eg, a reverb) inserted into the return channel 100% wet, any number of
source signals can be fed to it in varying amounts via their send controls, thus placing them all in the same
sonic 'environment'.
Sequence
A series of MIDI or audio events in a DAW/sequencer is a sequence, and the act of creating such a series is
called sequencing.
Sequencer
The definition has changed greatly over the years (and these days the word has been all but replaced by 'DAW'
- Digital Audio Workstation), but a sequencer nowadays can be described as a software or hardware device for
recording and editing MIDI - and often audio - and arranging it over time.
The earliest software sequencers were MIDI-only, with audio added later, but the concept is the same:
sequences are recorded in real-time via a MIDI keyboard or other instrument, or programmed using the mouse
and keyboard, then edited in terms of pitch, timing, velocity and various other parameters in a MIDI editor
(piano roll, drum, event/list, etc).
Another popular form of sequencer, the step or pattern sequencer, can be found on many synths and drum
machines, enabling non-realtime sequencing within their own interfaces.
Shuffle/swing
The shifting of every second note on a specified note-value grid (usually, that means every other 16th-note) to
the right by an adjustable amount in order to apply an increasingly 'swung' or 'dotted note' feel to the rhythm. In
software, the exact timing of any given swing percentage will vary, but the benchmark is still Akai's classic MPC
series, the swing settings of which have been emulated time and time again by software.
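As a rough sketch of the idea (not the MPC's actual timing tables - those are proprietary), here's swing applied to a list of note times in Python:

```python
def apply_swing(note_times, grid=0.25, swing=0.5):
    """Push every second grid step late. Times are in beats; grid=0.25 is
    a 16th-note grid; swing=0.5 means dead straight, ~0.66 a triplet feel."""
    offset = (swing - 0.5) * 2 * grid
    swung = []
    for t in note_times:
        step = round(t / grid)
        swung.append(t + offset if step % 2 == 1 else t)
    return swung

straight = [0.0, 0.25, 0.5, 0.75]          # four straight 16ths
swung = apply_swing(straight, swing=0.6)   # the off-beats land late
```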
Sibilance
The harsh, high-frequency "s" and "sh" sounds that are inherent in any vocal performance and often need to be
reined in as part of the mixing process. Just EQ'ing out these frequencies will make the whole vocal sound dull,
so frequency-conscious compression is the solution.
This can either be applied via a compressor with a sibilance-boosting EQ inserted into its sidechain input, or a
dedicated de-esser plugin, either of which essentially do the same thing: reduce the volume of the vocal when
sibilant frequencies are detected. Some smarter de-essers will attenuate only the sibilant frequency range.
Sidechain
A secondary input on a compressor or gate into which a separate signal to the one being processed can be fed
and used to control the action of the compression or gating. The classic example in modern music is the
rhythmic pumping effect created by sidechaining a bassline or pad sound (or even a full mix!) with a 4/4 kick
drum, but sidechaining is also useful for de-essing vocals (see Sibilance) and general 'ducking' (lowering in
volume) of any sound to make room for another sound occurring at the same time. A sidechain input is a
common feature among today's dynamics plugins.
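That pumping effect boils down to "turn the pad down whenever the kick is loud". Here's a deliberately crude Python sketch of the idea - real compressors use proper attack/release times and ratios, so treat this as illustration only:

```python
def sidechain_duck(signal, trigger, amount=0.8, release=0.999):
    """Very crude ducking: follow the trigger's level with a decaying
    envelope and turn the signal down in proportion to it."""
    out, env = [], 0.0
    for s, t in zip(signal, trigger):
        env = max(abs(t), env * release)   # envelope follows the trigger
        out.append(s * (1.0 - amount * env))
    return out

pad = [1.0] * 8                      # a steady sustained pad
kick = [1.0, 0, 0, 0, 0, 0, 0, 0]    # a single kick hit at sample 0
ducked = sidechain_duck(pad, kick)   # the pad dips, then pumps back up
```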
Slope
The slope of a filter determines the 'steepness' of its response curve. So, a 12dB/octave slope reduces the
volume of the signal by 12dB for every octave it extends beyond the cutoff frequency in the direction
established by the filter type - above for a low-pass, below for a high-pass. Higher slopes give more 'brutal'
filtering, while lower slopes give a more natural sound. Most filters offer a choice of slope - usually 6, 12 and
24dB/octave, some including an 18dB option and a few sharpening up to 48.
Slope is also described in terms of the number of 'poles' of the filter, each pole contributing 6dB/octave of
attenuation - a 4-pole filter, then, has a 24dB/octave slope.
Snap to grid
With a DAW's snap to grid function enabled, any action that changes or establishes the position of an event or
events (inserting, moving or slicing of clips, MIDI notes, automation data, etc) on the timeline will result in that
event(s) automatically aligning with the quantise grid (set by the user), rather than the actual position of the
mouse pointer. With snap enabled, dragging an audio clip into a project and placing it between two gridlines will
see it jump to the nearest one.
The A to Z of computer music: S (part 2)
From 'solo' to 'synthesiser' and everything in between
Computer Music | July 31, 2014, 07:00 UTC
Shuttle through some more production terms - we've sorted the stems from the subs and put it all into
simple English.
Solo
Italian for 'alone', solo in a musical context simply refers to any sound or instrument auditioned or played
entirely on its own. Every channel on any DAW mixer will feature a Solo button - activating it silences all other
channels that aren't also soloed (or made 'solo safe' - an option in certain mixers) for the purpose of tweaking
the sound of that channel in isolation. Most DAWs use 'common sense' within their soloing systems - soloed
auxiliary tracks (eg, effects returns) maintain input from channels routed to them, for example, rather than
requiring them to be soloed as well.
Soundware
Commercially retailed (or free) packages of 'sounds' intended to be loaded into DAWs and virtual instruments.
Soundware comes in various forms, including libraries of discrete sampled loops and one-shots, multisampled
'ROMs' for playback via engines like NI Kontakt and UVI Workstation, banks of presets for synths and source
material for hybrid synths/samplers.
Spill
When multiple microphones are used at the same time to capture separate elements of a single source - the
drums and cymbals of a drum kit, the strings and body of a guitar, the various sections of an orchestra, etc - the
inadvertent but inevitable leakage of elements other than the one intended to be captured into any given mic is
called spill or bleed. While this is a phenomenon that recording and live sound engineers put a lot of effort into
minimising, mic spill can actually be beneficial in certain situations, such as imbuing drum kit recordings with a
natural, organic energy that often makes them sound more cohesive and 'glued'. Indeed, multi-mic'd virtual
instruments - such as drum kits - often give users the ability to adjust spill levels.
State variable filter (SVF)
A filter capable of outputting multiple filter modes (low-/band-/high-pass, etc) at once - depending on the design,
you'll be able to either mix or switch between them. Some software filters emulate the active analogue circuitry
of original SVFs and/or the mixable modes feature.
Steinberg
One of the biggest players in the music software industry, Steinberg are a German music software and
hardware (audio interfaces and MIDI controllers) manufacturer/developer established in 1984 and now owned
by Yamaha. Their first proper release (disregarding the commercially unsuccessful Multitrack Recorder) was the
Pro-16 MIDI sequencer for the Commodore 64, followed by Pro-24 for the Atari ST, which developed into the
seminal Cubase, eventually making its way onto Mac and PC, currently at version 7 (although various 'reboots'
along the way make the real number much higher) and without doubt one of the most popular DAWs in the
world. As well as pretty much defining the general shape of the modern software DAW with Cubase, Steinberg
also invented the game-changing VST protocol, bringing the incredible real-time effects and instrument plugins
that we now take for granted to the software studio for the first time.
Stem
A rendered mix of a related group of tracks that together serve as one element of a larger mix, created to
facilitate remixing, archiving or any other operation for which supplying every component track of a project
individually would be excessive or unnecessary. Typical stems might include a mixed drum kit, the individual
sections of an orchestra, combined synth pad and lead parts, mixed bass and drums, or a collection of backing
vocal tracks. However, more recently, the term has also come to refer to individual tracks rendered from a
project including processing.
Step sequencing
Before the graphical MIDI sequencer, synth and drum machine patterns were programmed directly into the
instruments themselves via step sequencers. These typically offered 16 button-operated 'steps' cycling around
constantly, each of which triggered its assigned sound when activated. The steps could be set to a range of
note values (1/4, 1/8, 1/16, etc), the tempo and shuffle/swing (see S, Part 1) of the sequencer could be raised
and lowered, each step was adjustable in terms of pitch, volume and other parameters, and sequences were
strung together to make 'songs'. Step sequencers still get built into many new virtual instruments (and effects,
for modulating parameters), offering, as they do, a fun, focussed, self-contained and 'rigidly electronic'
alternative to the more flexible piano roll.
Stereo
A stereo audio signal is one comprising discrete 'left' and 'right' channels, each of which is sent to a
corresponding loudspeaker, creating a wide panorama. Although finished tracks end up as a stereo mix, many
(if not most) of its components will be mono channels, panned to specific positions between the left and right
channels of the master output, routed to stereo reverbs and other effects, and ultimately coming together to
create a stereo soundstage.
Streaming
In sampling, streaming refers to the real-time playback of audio data directly from a hard drive, rather than from
RAM. Not so long ago, consumer-level hard drives and the busses they ran on weren't fast enough to handle
streaming, and thus the number and length of samples
that could be used to make a sampler patch was limited by the amount of RAM available to the sampler. Today,
though, even external drives are more than up to the task, and streaming of (potentially enormous) samples is a
standard feature in soft samplers such as NI's Kontakt (via its Direct From Disk mode) and Steinberg's HALion.
Strip silence
A function of certain DAWs and audio editors - such as Apple Logic, Avid Pro Tools and Steinberg Cubase - whereby an audio file or region is analysed for sections of silence (or amplitude below a user-specified
threshold), which are then automatically deleted, leaving the resultant non-silent sections in place for moving
and manipulating as discrete regions or files.
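The underlying logic is a simple threshold scan. Here's an illustrative Python sketch (our own, not any DAW's actual algorithm) that returns the non-silent regions as index pairs:

```python
def strip_silence(samples, threshold=0.1):
    """Return (start, end) index pairs of regions whose amplitude exceeds
    `threshold`; everything between them is treated as silence."""
    regions, start = [], None
    for i, s in enumerate(samples):
        if abs(s) >= threshold and start is None:
            start = i                      # a non-silent region begins
        elif abs(s) < threshold and start is not None:
            regions.append((start, i))     # it just ended
            start = None
    if start is not None:
        regions.append((start, len(samples)))
    return regions

audio = [0.0, 0.0, 0.5, 0.7, 0.0, 0.0, 0.9, 0.0]
regions = strip_silence(audio)   # two discrete non-silent regions
```

A real implementation would also apply short fades and a minimum region length to avoid chopping sustained tails, but the threshold scan is the heart of it.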
Sub bass
Any bass sound below around 80Hz, either played on its own or used to bolster a higher bassline. Sub bass
tones are usually generated using simple sine, square or triangle synth waveforms with little processing, and
while these days they're rarely heard without an accompanying 'regular' bass sound, they were a
defining feature of early drum 'n' bass and hip-hop.
Subtractive synthesis
A form of synthesis in which a raw waveform (eg, square wave, saw wave, a sample, etc) is routed through one
or more filters, the mode, cutoff frequency and resonance of which are adjusted in order to attenuate ('subtract')
harmonics in the signal - cutting the high frequencies with a low-pass filter, for example. By far the most
ubiquitous form of analogue synthesis is subtractive, while other kinds of synthesis can be said to contain a
subtractive element if they include a filter section.
Summing
The act of mixing multiple signals down to a single mono or stereo output. Using group/bus channels in a mixer
is summing, as is routing those groups to the master output. In digital systems, summing is a transparent
process that doesn't have any effect on the sound of the summed channels; analogue summing busses,
however, can impart a small amount of desirable saturation, harmonic distortion, crosstalk and other subtle
electrical influences, giving the summed signal warmth and character (assuming the components involved are
of high quality, of course!). Some producers bounce mixes through hardware summing boxes to get this sound,
or more commonly, use console emulation plugins like Slate Digital's VCC.
Sustain
In music technology, sustain has two definitions. The first is the volume level at which a synthesiser tone is held
between the end of the decay stage and the start of the release phase - ie, until the triggering key/note is
released. The second is as in 'sustain pedal', which connects to a MIDI keyboard to emulate its equivalent on a
piano, allowing notes to ring out when depressed.
Sync
Synchronisation. Multiple events - single or repeated - occurring at the same time and/or rate; or the regulation
of the timing of one thing by another. In music production, the word can refer to a few things, but the main one
is slaving time-based parameters of plugin instruments and effects to the DAW hosting them. The speed of an
LFO on a synthesiser plugin, for example, may be switchable between absolute time (setting it to 8Hz, say) and
host sync, whereby a note value (1/8, 1/16, etc) is specified, slaving the LFO speed to the tempo of the host.
Synthesiser
An electronic instrument (hardware or software) designed to replicate real-world sounds and - more importantly
- create entirely new ones. There are several types of synthesiser, but they can all be broadly categorised as
(virtual) analogue, digital or hybrid. Simply put, analogue synths use oscillator circuits to generate raw tones
that are sculpted into musical sounds using filters, modulation and effects; digital synths do their thing using one
or more of a range of techniques, including frequency modulation (FM - also a feature of many analogues)
waveshaping, additive synthesis, physical modelling and granular. Hybrid synths combine analogue and digital
elements (eg, an additive synth with analogue filters, or a synth mixing sampled and analogue oscillators).
Synthesisers are fundamental to electronic music of all kinds, and important in pop, rock and even modern
classical and jazz styles.
The A to Z of computer music: T
Take a trip from 'tail' to 'tube'
Computer Music | August 29, 2013, 14:00 UTC
Tune up on your terminology and teach yourself to talk tech, as we present another list of essentials for
you to trawl through.
Tail
The late reflections of a reverb effect (ie, the main body of the sound after the very short early reflections) are
also known as the tail, in reference to the fact that they 'tail off' to silence. The reverb tail can be shaped using a
variety of controls including decay time, high- and low-frequency damping, and density.
Tap tempo
A feature of many software applications and hardware sequencers that enables the user to set the project
tempo by clicking (or tapping) a dedicated button at the desired tempo. The minimum number of taps required
to take a reading will usually be four, with further taps at the same tempo (given the relative looseness of
human timing) refining the accuracy of the resulting BPM value.
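The calculation behind tap tempo is just "60 divided by the average gap between taps". An illustrative Python sketch:

```python
def tap_tempo(tap_times):
    """Estimate BPM from a list of tap timestamps (in seconds).
    Averaging the intervals refines the reading as taps accumulate."""
    if len(tap_times) < 2:
        return None
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    average = sum(intervals) / len(intervals)
    return 60.0 / average

bpm = tap_tempo([0.0, 0.5, 1.0, 1.5])   # taps half a second apart
```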
Tape
Once the only practical recording medium available to the world's studios, analogue tape is now effectively
dead (apart from use as a creative tool) thanks to the unstoppable rise of hard disk recording. Nevertheless, the
hiss, background noise and saturation of tape are still seen as hugely desirable sonic qualities by many
producers, resulting in the advent of tape emulation plugins such as u-he Satin and Slate Digital Virtual Tape
Machines, which model all aspects of tape and tape machines, right down to wow, flutter and bias.
Tape delay
The original studio delay effect. The very earliest incarnation of tape delay involved simply looping an analogue
tape recording, with the delay time adjusted by changing the length of the tape loop. However, tape delay as it's
generally thought of today involves sending the signal to be processed to a tape machine with the repro head
set up for monitoring, thus generating an echo, the timing of which is adjusted by changing the tape speed
and/or the distance between the play and repro heads. Feedback can also be introduced by routing the tape
machine's output back into its input.
While classic machines such as the Roland Space Echo, Watkins Copicat and EchoSonic are still sought after
for their characteristic sound, there are plenty of superb tape delay plugins on the market for those who would
rather save a small fortune and not have to deal with the many practical drawbacks of the real thing.
Tempo
The musical measurement of speed, described in BPM (beats per minute) but also expressed in 'classical'
terms such as allegro (fast), adagio (slow) and rallentando (slowing down). With DJs requiring a level of tempo
consistency from track to track in order to maintain coherence in their mixes, dance music is quite strict in terms
of BPM. House music, for example, is usually at between 120 and 128bpm, drum 'n' bass is at 160-180bpm,
and dubstep will almost always be at 140bpm. Tempo can be manipulated via automation in most DAWs, and
upping the pace by 2-3bpm through a chorus can be a great way to inject a bit of energy.
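Tempo arithmetic crops up constantly when setting delay times to the beat. A hypothetical Python helper (the function name is ours):

```python
def note_length_ms(bpm, note_value=0.25):
    """Length of a note value in milliseconds: note_value=0.25 is a
    quarter-note, 1/16 a 16th - handy for tempo-synced delay times."""
    quarter_ms = 60000.0 / bpm
    return quarter_ms * (note_value / 0.25)

quarter_at_120 = note_length_ms(120)      # 500ms
dnb_16th = note_length_ms(170, 1 / 16)    # a 16th-note at drum'n'bass tempo
```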
Threshold
In music production, a (usually user-specified) amplitude level in a dynamics processor above or below which
the process in question is brought to bear. The most ubiquitous example is probably the threshold control on a
compressor: when the input signal exceeds the set volume level, it's compressed. Similarly, the threshold on a
gate effect establishes the volume level above which the gate is opened and below which it's closed.
Timbre
The tonal character of a sound, as defined by the combination of all of its components: frequency content over
time, amplitude envelope, distortion, etc. Timbre might be verbalised as "dark", "rich", "warm", "smooth", "shiny"
or any other evocative descriptor. In modern music production, the word is used most frequently in the
discussion of synthesiser patches, but it can actually be applied to any musical sound.
Timestretching
A key technique in contemporary production, timestretching is exactly what the name suggests: the lengthening
(slowing down) or shortening (speeding up) of an audio clip or sample. Today, timestretching is applied
independently of pitchshifting, but in the early days of sampling (and prior to that, when 'timestretching' involved
adjusting tape speed), speeding up or slowing down a sound necessarily involved pitching it up or down.
Modern timestretching algorithms are incredibly powerful, enabling adjustments of up to around 40bpm either
way (depending on the source material) before the quality begins to suffer. The 'elastic audio' technologies now
featured in pretty much every DAW employ 'segmented' timestretching to facilitate the precise positioning of
sounds in a clip (drum hits, for example) on the timeline.
Timeline
The graded ruler running along the top edge of a DAW arrange page and its MIDI and audio editors that
indicates when in time - in bars/beats, minutes/seconds, samples, or several other formats - the audio, MIDI
and automation events that make up a project occur.
Track
As well as its non-technical usage to describe any piece of recorded music, the word 'track' refers to the
individual 'lanes' in a DAW (or, back in the day, on a multitrack tape), onto which audio and MIDI parts are
recorded or imported, edited, arranged and played back in parallel with all the other tracks in the project, the
outputs of which are brought together in a mixer (either within the DAW or external).
Transient
The relatively loud and cutting initial attack of a sound - eg, the 'crack' of a snare drum, initial hammer strike of a
piano key, or pluck of a guitar string. Many instruments - especially percussion - are defined in large part by
their transients, and there are many transient-shaping plugins available that can be used to cut or boost them
(and, separately, the sustained sound that follows), making them more or less punchy.
Transparent
When an effects processor (a compressor or EQ, say) is described as 'transparent', it usually means that it
applies its process to an audio signal without colouring it in any way - ie, with all parameters set to neutral, the
signal at the output will be identical (or very close) to the signal at the input. A plugin may also be described
as transparent if the processing it applies leaves no detectable artifacts or other audio 'clues' that the audio
has in fact been altered. Some EQs, for instance, are designed to impart a definite 'sound' when boosting the
bass, perhaps even saturating the signal, whereas others are intended to raise the level of bass
frequencies in as unobtrusive - ie, transparent - a manner as possible.
Transport
The transport section of a DAW or audio editor contains its playback and recording control buttons (Play,
Record, Fast Forward, Reverse, etc), as well as numerical displays indicating the current playback position,
cycle range, etc.
Transpose
The shifting of a MIDI note or audio clip up or down in pitch. Transposing audio is done by adjusting a knob or
applying a pitchshifting plugin, while transposing MIDI notes can be done by moving the actual notes in a MIDI
editor and/or applying a transposing MIDI plugin.
Tremolo
An audio effect that modulates the amplitude (volume) of the input signal using an LFO, the depth and speed of
which can be adjusted.
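The whole effect fits in a few lines - multiply the signal by a slow sine LFO. An illustrative Python sketch (ours, not any plugin's code):

```python
import math

def tremolo(samples, sample_rate=44100, rate_hz=5.0, depth=0.5):
    """Modulate amplitude with a sine LFO: depth=0 leaves the signal
    untouched, depth=1 swings the volume between silence and full level."""
    out = []
    for n, s in enumerate(samples):
        lfo = 0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * n / sample_rate)
        out.append(s * (1.0 - depth * lfo))
    return out

steady = [1.0] * 44100                  # one second of constant level
wobbled = tremolo(steady, depth=1.0)    # full-depth, 5Hz wobble
```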
Triangle wave
One of the 'standard' oscillator waveforms found in most analogue and virtual analogue synths, alongside the
likes of sine, square and sawtooth. The triangle wave contains only odd harmonics and can be thought of as
sitting between sine and square waves in terms of tone - harder than the former and softer than the latter - and
thus serves as a good starting point for bass sounds as well as mellow leads and percussive patches.
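That "only odd harmonics" claim can be demonstrated by building a triangle wave additively - each odd harmonic at 1/n² amplitude with alternating sign (this is just the standard Fourier series, sketched in Python for illustration):

```python
import math

def triangle_additive(phase, harmonics=25):
    """Fourier series of a triangle wave: odd harmonics only, each at
    1/n^2 amplitude with alternating sign, scaled by 8/pi^2."""
    total, sign = 0.0, 1.0
    for n in range(1, harmonics + 1, 2):     # 1, 3, 5, ...
        total += sign * math.sin(n * phase) / (n * n)
        sign = -sign
    return (8 / math.pi ** 2) * total

peak = triangle_additive(math.pi / 2)   # the top corner, close to 1.0
```

The rapid 1/n² fall-off of those harmonics is exactly why a triangle sounds so much softer than a square wave, whose harmonics fall off at only 1/n.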
Trigger
In order to elicit sound from a synth or sampler, it needs to be triggered. Triggering can be done by any MIDI
source - keyboard, electronic drum kit, sequencer, etc - capable of sending the requisite data to the instrument:
note volume, pitch and on/off timing. 'Trigger' also refers to transducer pickups attached to acoustic drumheads
for - yep - triggering electronic percussion modules, samplers and synths, or alternatively, the process of using
dedicated triggering software to achieve the same thing using mic'd recordings of drums.
Trim
Another name for a basic volume/gain control, typically used to set the input gain into a mixer channel for
capturing an optimum signal level when recording. Nowadays, you might see trim knobs elsewhere, too.
Tube
Also known as a valve, a tube is a glass-encapsulated vacuum device used as a voltage-controlled amplifier (electrons
move freely in a vacuum) in mixers, mics and mic preamps, guitar amps and other audio gear, that imparts a
characteristically warm sound, clipping and overdriving in a way that's considered more desirable and euphonic
for many applications than its solid state transistor counterpart. Tubes are emulated in all kinds of software
effects - indeed, there are plugins out there dedicated purely to virtualising valve preamps!
The A to Z of computer music: U and V
Two letters for the price of one!
Computer Music, September 23, 2014, 10:03 UTC
Vexed by virtual analogues? Unsure about unison? We've got more verified virtual music verbosity for
the undereducated...
Unbalanced
An unbalanced audio signal is one in which the ground component is carried on the same physical connection
(wire or input/output socket) as one of the two audio components, resulting in a lower construction cost than a
balanced connection (in which the ground gets its own dedicated carrier), but also a lower signal level and
greater susceptibility to interference (background noise). For that reason, balanced cables and circuitry are
generally preferable for music production, particularly with regard to microphones, mixer and audio interface
I/O, and monitor connections.
Unison
A feature of many synthesisers, unison is the activating of multiple voices (usually up to eight, but sometimes
going higher) or oscillators, all playing together but slightly offset from a centre frequency and each other, with a
Detune control for shifting them away from it even further, and possibly a Spread control for spacing them out
across the stereo panorama. The end result is a much thicker, richer sound than that of a single oscillator.
Perhaps the most recognisable example of unison in action is the Roland JP-8000 super-saw oscillator as
heard in many trance tracks.
Unity Gain
The gain level at which a device makes no change to the volume of a signal passing through it. For example, if
a sine wave enters a mixer input at -3dB and the mixer channel is set to unity gain (the Gain knob and/or level
fader set to 0dB), the sine wave will hit the mixer output at -3dB. With every stage of a device chain set to unity
gain, background noise is (theoretically) minimised.
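The arithmetic is easy to verify in code. In this quick Python sketch (the function name is our own), a 0dB gain setting multiplies every sample by exactly 1, leaving the signal untouched:

```python
def apply_gain_db(samples, gain_db):
    """Scale a list of samples by a gain expressed in decibels.

    0 dB is unity gain: the multiplier is 10 ** (0 / 20) == 1.0,
    so the signal passes through unchanged.
    """
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

signal = [0.5, -0.25, 0.125]
assert apply_gain_db(signal, 0.0) == signal  # unity gain: no change
louder = apply_gain_db(signal, 6.0)          # +6 dB is roughly a doubling
```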
Update/Upgrade
One of the best things about the software-based music studio is that developers can improve their applications
and plugins by releasing downloadable updates and upgrades for them. The terminology is not set in stone, but
generally, an update is a free 'point release' (eg, v1.0 to v1.1) that makes minor enhancements without
redesigning the UI or introducing major new features, while an upgrade steps up a full version number (2.6 to
3.0, say) and makes much more profound changes - all the way up to a full architectural/conceptual redesign,
even, possibly dropping support for older operating systems and/or hardware in the process. Paying to make
the jump from an entry level product to its higher-end 'parent' is also an upgrade - Cubase Artist 7 to Cubase 7,
for example.
The line between update and upgrade has become increasingly blurred of late, with a growing number of
developers - most notably Steinberg and Cakewalk - now charging for some point releases, albeit significantly
less than for full version upgrades.
USB
Universal Serial Bus. The market-leading connectivity standard for Mac and PC peripherals, most relevant to
musicians with regard to audio interfaces, MIDI keyboards and controllers, external hard drives and software
authorisation dongles. The USB specification is now at version 3 (offering an impressive 5Gbit/s of data
transfer), but audio interface manufacturers seem to be happy to stick with USB 2 (much slower at just
480Mbit/s, but cheaper to license and build) for most of their releases, with high-end models eschewing USB 3
in favour of FireWire and Thunderbolt. MIDI devices, meanwhile, shuttle such small amounts of data around
that USB 2 is more than fast enough for even the most intense usage.
Valve/Vacuum Tube
A voltage-controlled amplifier component used in analogue mixers, preamps and other devices, and emulated
in all manner of music software.
VCO
Voltage Controlled Oscillator. An electronic oscillator, the frequency of which is controlled by a variable voltage
applied to its input. In an analogue synth, each key sends out the required voltage to set the oscillator
frequency to the pitch of that note. In virtual analogue hardware and soft synths, though, the term VCO is really
only used for familiarity, as they are, of course, not actually controlled by electronic voltages.
Velocity
A MIDI message that converts the force or speed with which a keyboard key, drum pad, etc is struck to a
numeric value from 0-127. Velocity is most commonly used to control the volume of a synth or sampler (so
pressing a key or striking a pad harder results in a louder sound, just like an acoustic piano or drum), but - like
all MIDI data - it can be assigned to modulate other things too, such as filter cutoff, pitch, envelope depth, etc.
An important setting on most hardware MIDI instruments and certain soft synths/samplers is the velocity curve,
which maps the input velocity to the output volume (or other parameter) level, enabling the instrument to be
adapted to the user's playing style. Light-fingered keyboardists, for example, might set the curve so that
medium velocity strokes trigger higher volume levels than the default 1:1 (linear) mapping.
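A velocity curve is just a mapping function. This illustrative Python sketch (the function name and power-curve shape are our own assumptions - real instruments offer a variety of curve types) shows how an exponent below 1 favours a light touch:

```python
def velocity_curve(velocity, exponent=0.5):
    """Map an incoming MIDI velocity (0-127) through a response curve.

    exponent == 1.0 is the default 1:1 (linear) mapping; an exponent
    below 1 bends the curve so that medium strokes trigger higher
    output levels, suiting light-fingered players. The result stays
    in the 0-127 range.
    """
    normalised = velocity / 127
    return round((normalised ** exponent) * 127)

assert velocity_curve(64, exponent=1.0) == 64  # linear: unchanged
assert velocity_curve(64, exponent=0.5) == 90  # soft curve boosts medium strokes
```

The extremes are pinned either way: velocity 0 always maps to 0, and 127 to 127.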
Vibrato
The 'wobbling' in pitch of a sustained sound. Vibrato is applied in a synth or sampler by assigning an LFO to the
oscillator or sample pitch parameter. Some effects make a headline feature of their ability to add/remove/adjust
natural-sounding vibrato in vocal parts in particular, most famously Antares Auto-Tune and Celemony Melodyne.
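The LFO-on-pitch idea can be expressed in a few lines. In this Python sketch (the function and parameter names are our own), a sine LFO sweeps the pitch a fraction of a semitone either side of the base frequency:

```python
import math

def vibrato_pitch(base_freq, time, rate_hz=5.0, depth_semitones=0.3):
    """Return the instantaneous pitch of a note with LFO vibrato applied.

    A sine LFO sweeps the pitch up and down around base_freq by up to
    +/- depth_semitones; rate_hz sets how fast the 'wobble' cycles.
    """
    lfo = math.sin(2 * math.pi * rate_hz * time)          # -1..+1
    return base_freq * 2 ** (lfo * depth_semitones / 12)  # semitone offset

# At the LFO's zero crossings, the pitch is exactly the base frequency
centre = vibrato_pitch(440.0, 0.0)
```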
Virtual Analogue
A virtual analogue synthesiser is a digital synth that models the behaviour - and so the sound - of an analogue
one. Despite the word 'virtual', there are plenty of hardware VAs out there as well as software ones, including
classics like the Access Virus and Novation SuperNova, and newcomers such as Roland's Aira System-1.
Virtual Instrument
A synthesis- or sample-based instrument in software form. Virtual instruments are available in a variety of plugin
formats for use in compatible host DAWs (VST, Audio Units, AAX, etc), with many of them also operating
standalone - ie, without the need for a host to feed them MIDI input or process and 'present' their audio output.
Vocoder
Invented in the 30s as a system for encoding and decoding speech for encrypted broadcast, the vocoder is now
a mainstay of the electronic music studio. It's a highly recognisable audio effect that splits an input 'modulator'
signal (usually the human voice) into a number of frequencies using parallel band-pass filters, which are tuned
by a pitched 'carrier' signal (generated by an analogue synthesiser), thus imposing the character of the former
onto the latter, resulting in the 'singing robot' effect. While synth and voice make the classic vocoder
combination, software vocoders enable any sources you like to be used for either or both.
Voice
Each individual note played back by a synth or sampler (either triggered directly or as part of a unison 'stack') is
called a voice, and while the number that can be active simultaneously in today's ultra-powerful software and
hardware instruments is immense (limited only by the power of the host computer in many cases!), polyphony -
as the maximum number of voices available to an instrument is known - used to be a serious consideration,
with early polyphonic analogue synths such as the five-voice Sequential Circuits Prophet-5 commanding hefty price tags.
VST
Virtual Studio Technology. Invented by Steinberg in 1996 (initially only for effects; instruments were added in
1999), VST is a software protocol and SDK that enables developers and users to build and deploy virtual
effects and instruments as plugins within any VST-compatible DAW or audio editor - which is the vast majority
of them.
Today, VST plugins represent the most active area of development in the music tech industry, with countless
amazing virtual instruments and effects already on the market and more arriving every month, from classic
synths and outboard emulations to mind-bending devices without real-world equivalents. Being software, VST
plugins are usually much cheaper to buy than their hardware equivalents, and many of them are free - including
those in the ever-expanding Computer Music Plugins suite, the entirety of which is yours for free whenever you
buy the latest issue of the magazine.
VU meter
A VU (Volume Unit) meter is an analogue level meter using particular (decidedly non-linear) ballistics to
represent the perceived loudness of a signal rather than its absolute level at any moment in time (for which
peak programme meters are best). Many software effects feature virtual VU meters (often alongside PPMs) for
level monitoring, and there are even dedicated VU metering plugins available.
The A to Z of computer music: W-Z
The end of the alphabetical line
Computer Music, October 28, 2014, 10:00 UTC
In the final instalment of the computer music glossary, we conclude with some more wise words, before
heading off to catch some Zs…
Wah-wah
A guitar pedal effect (or software emulation thereof) that enables the player to adjust the cutoff frequency of a
resonant filter (usually band-pass) by rocking the pedal with their foot, creating a characteristic guitar sound
often heard in funk, blues, psychedelia and other styles. Auto-wah plugins (and, indeed, any synth or sampler)
can achieve the same effect, but without the pedal, the filter cutoff instead modulated by an LFO, an envelope
follower or anything else you like.
Wave file (aka .WAV)
The standard audio file container format for Microsoft Windows, although readable and writeable by any
modern operating system. Wave files use the extension .wav, causing them to be commonly referred to as
WAVs (while technically incorrect, the term is widespread and accepted). Although they can contain
compressed audio, they're almost always employed uncompressed. The wave file is the standard for recording
and playing back audio on Windows PCs, and it's second only to AIFF on Macs. Indeed, pretty much all modern
music software supports both WAV and AIFF formats.
Waveform
In music software terms, typically refers to a graphical representation of an audio signal, plotting time on the X
axis and amplitude on the Y axis. Although waveforms are primarily used as a visual reference for audio editing
and amplitude assessment, some audio editors and DAWs allow audio waveforms to be edited directly using a
'pencil' or other graphical tool. This is mostly useful for lowering excessive volume peaks, but can be put to
creative - albeit probably rather glitch-orientated - use, too.
Elsewhere, a waveform refers specifically to a periodic waveform such as a sine, square, sawtooth or triangle
wave, or indeed, absolutely any repeating wave pattern.
Wave sequencing
A type of synthesis using sampled sounds. These 'waves' are arranged in sequences and played back
contiguously, and each wave is independently editable in terms of pitch, duration, level and fade settings. The
result can be anything from smoothly blended evolving sounds to discretely stepped wave progressions. The
classic example of a wave sequencing synth is Korg's legendary early 90s Wavestation - now available virtually
in Korg's Legacy Collection - which introduced the concept in the first place.
Wavetable
A collection of very short audio samples stored in the memory of a wavetable synth, for playback either
individually, contiguously, in a particular user-specified order, or randomly, and looped or one-shot. The travel of
the 'playback head' through the wavetable can be modulated using LFOs, envelopes, step sequencers, etc, and
beyond that, the wavetable is deployed in the same way as any other synth oscillator, generating a raw tone for
developing into a full patch through filtering, modulation and effects processing. Classic wavetable synths
include the PPG Wave and Waldorf Microwave.
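The 'playback head' concept can be sketched in code. This toy Python example (a two-wave table with linear crossfading - real wavetable synths use far larger tables and fancier interpolation) exposes a position parameter you could modulate with an LFO or envelope:

```python
import math

# A toy wavetable: two single-cycle 'waves' of 64 samples each.
TABLE_SIZE = 64
sine_wave = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]
square_wave = [1.0 if i < TABLE_SIZE // 2 else -1.0 for i in range(TABLE_SIZE)]
wavetable = [sine_wave, square_wave]

def read_wavetable(position, phase):
    """Read one oscillator sample, crossfading between adjacent waves.

    position (0.0-1.0) is the 'playback head' through the table - the
    parameter you'd modulate; phase (0.0-1.0) is the position within
    the current cycle, advanced at the note's frequency.
    """
    scaled = position * (len(wavetable) - 1)
    index = min(int(scaled), len(wavetable) - 2)
    blend = scaled - index
    i = int(phase * TABLE_SIZE) % TABLE_SIZE
    return wavetable[index][i] * (1 - blend) + wavetable[index + 1][i] * blend
```

At position 0.0 you hear a pure sine, at 1.0 a square, and in between a morph of the two.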
WDM
Windows Driver Model. The standard Windows audio driver, which ties in closely to the Windows kernel and
provides easy compatibility for manufacturers of consumer audio interfaces. For serious music production,
Steinberg's device-specific ASIO driver is vastly preferable, which is why all high-end interfaces ship with this
kind of driver.
Wet signal
The processed signal that comes out of an effects plugin is its 'wet' output, while the unprocessed signal that
goes in prior to processing is referred to as 'dry'. Many plugins nowadays feature a wet/dry mix control, for
blending the dry and wet signals at the final output stage - a technique known as 'parallel processing'. When
effects are inserted into auxiliary busses to be fed by the sends on individual mixer channels, their output
should generally be set 100% wet, since the dry signal is already present in the mix on the channel you're
sending from.
White noise
A random signal of equal power at all frequencies, typically generated by a synthesiser's noise oscillator. White
noise is particularly useful for adding cut, definition and attack to synthesised drum sounds. And being very
amenable to filtering - thanks to its broad frequency range - it's also an excellent option for risers and general
'whoosh'-type sounds.
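Generating white noise is trivial in code - each sample is simply an independent random value. A quick Python sketch (the function name is our own):

```python
import random

def white_noise(num_samples, amplitude=1.0, seed=None):
    """Generate white noise: each sample is an independent random
    value, giving (on average) equal power across all frequencies."""
    rng = random.Random(seed)
    return [rng.uniform(-amplitude, amplitude) for _ in range(num_samples)]

burst = white_noise(4410, amplitude=0.5)  # 0.1s of noise at 44.1kHz
```

Run the result through a swept low-pass or band-pass filter and you're most of the way to a riser.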
Whole note
The American name for the European semibreve, a whole note is a note that lasts for four beats of a bar in 4/4
time.
Wind Controller
As the MIDI keyboard is to the trained pianist, so the wind controller is to the skilled woodwind player. Looking
like futuristic oboes, clarinets or soprano saxophones, devices like the ultra-posh Eigenlabs Eigenharp and
those in the Akai EWI and Yamaha WX ranges enable MIDI notes and accompanying CC, pitchbend, aftertouch
and expression data to be sent to anything capable of receiving it.
Windows
By far the world's most popular computer operating system, Microsoft Windows was first launched in 1985 as a
graphical interface and shell for MS-DOS. Since then, it's been through many versions - including the
landmarks Windows 3.0, 95, XP, Vista and 7 - the latest being the touchscreen-friendly Windows 8, which runs
on PC, tablet and smartphone.
As far as the musician is concerned, assuming they have an audio interface with a solid ASIO driver on PC,
there's no significant functional difference between the same music software running on Windows or Mac OS X.
The only buying decisions to be made between the two operating systems centre on their general usability,
exclusive applications (Logic Pro on Mac and FL Studio and Sonar on Windows, for example), and whether you
want to buy into the relatively closed shop that is the Apple ecosystem, or the 'wilder' but generally cheaper and
far more open (hardware-wise) world of Windows.
WIST
Wireless Sync-Start Technology. A wireless protocol invented by Korg that enables their iOS apps to
synchronise playback and tempo information across devices. It works over Bluetooth, with one iPad, iPhone or
iPod touch serving as the master and the other as the slave: adjusting tempo and starting playback on the
master does likewise on the slave.
WMA
Windows Media Audio. Microsoft's own set of audio compression codecs and file format, originally created as a
rival to MP3. While WMA didn't make much of a dent in the ubiquity of that all-conquering format, it has found
its place in all manner of devices and software media players.
Word clock
As a computer-based producer, you'll probably never have to deal with the joys of audio synchronisation, but
once upon a time, getting digital devices such as hard disk recorders, samplers and effects units feeding audio
data streams into each other so that their sample 'words' (the packages of bits that make up samples) aligned
perfectly (anything less than that resulting in show-stopping glitches) was part and parcel of the engineering
process. Word clock was (and still is) the signal used to
do it, and it can be transmitted and received over S/PDIF, ADAT, TDIF and other connection types, in a
master/slave configuration. Today, you're only likely to find yourself working with word clock if you need to chain
multiple audio interfaces (via the aforementioned hook-ups or Ethernet), or integrate digital outboard gear into
your setup.
Wow
Even less of an issue for the software-based producer than word clock, wow is the slow fluctuation in pitch that
inevitably occurs when playing back analogue tape on a tape deck, due to the deck's mechanical imperfections.
Although it might sound like something we should all be happy to see banished to the music technology dustbin
forever, some tape emulation plugins actually feature the controls for dialling in wow and its higher-rate sibling,
flutter, for vintage authenticity, alongside the equally retrospective saturation and hiss algorithms that usually
accompany them.
Wrapper
A software 'shell' into which plugin instruments and effects can be loaded in order to convert them to another
format. FXpansion's VST-AU Adapter, for example, wraps VST plugins in an Audio Units wrapper to be used in
Audio Units-compatible (and VST-incompatible) hosts.
XLR
Once also referred to as the Cannon connector, since it was invented by Cannon Electric, what is now almost
universally dubbed the XLR is a three- or seven-pin connector (there are also four-, five- and six-pin variants for
lighting and video equipment) that carries a balanced signal between suitably equipped microphones,
instruments and receiving equipment - audio interfaces, mixers, effects, etc. The main advantage of XLR (and
its 1/4" TRS jack sibling) over the unbalanced alternatives is lower interference and noise thanks to its
shielding, earthing and polarity-opposed 'hot' and 'cold' (not 'left and right') connectors. Interestingly, the name
XLR comes from the connector's designation by Cannon as the X series, its Locking mechanism, and the
Rubber insulation around the female contacts.
Yamaha
While they're primarily known as the manufacturers of musical instruments from pianos to drums (and the
classic DX7 synth, of course), Yamaha also have a part to play in today's software industry: they're the parent
company of Cubase developer Steinberg and are also behind the Vocaloid vocal synth.
Yoi bass
A 'talking' bass patch/sound that came to prevalence through the weird and wonderful genre of dubstep. While
there are plenty of creative variations on the idea, the basic patch is created by running a harmonically rich
waveform (a square wave works well) through a low-pass filter with plenty of resonance. The cutoff is then
modulated or automated to arrive at a wobbly, 'wow'ing sound, and to get the formant-esque effect, a bitcrusher
is applied, reducing the sample rate (using its downsampling/resampling mode).
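The downsampling half of that bitcrushing step can be sketched as a sample-and-hold process. This simplified Python example (real bitcrushers also offer bit-depth reduction and selectable resampling filters) holds each input value for several samples:

```python
def downsample_hold(samples, factor):
    """Crude sample-rate reduction of the kind a bitcrusher's
    downsampling mode applies: only 'sample' the input every
    `factor` steps and hold that value in between, introducing
    the aliased, formant-like edge described above."""
    out = []
    held = 0.0
    for i, s in enumerate(samples):
        if i % factor == 0:
            held = s  # capture a new value only on every factor-th sample
        out.append(held)
    return out

assert downsample_hold([0.1, 0.2, 0.3, 0.4], 2) == [0.1, 0.1, 0.3, 0.3]
```

A factor of 1 leaves the audio untouched; higher factors make the stepping, and so the aliasing, more severe.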
Possible variations include using a different source wave - a lightly modulated wavetable, for example - adding
effects after the patch,
and tweaking the filter cutoff modulation for a tighter/longer 'y' sound at the start of each 'yoi'. You could also go
about making these kinds of 'talking' sounds using a formant filter, which purposely mimics human vocal tract
resonances.
Zero-delay feedback filter
A currently popular buzzword in filter technology. ZDF filters aren't actually new, but the term is essentially used
to denote the 'new breed' of ZDF filters that also incorporate realistic, analogue-like distortion along with the
qualities of ZDF filters, namely that they can be modulated quickly with minimal artefacts and have a good
phase response. u-he claim that it's the zero-delay feedback filter that gives their Diva synth its astonishingly
'analogue' sound.
Zero crossing
The horizontal central line in the waveform display of an audio editor (either standalone or within a DAW),
representing zero amplitude - ie, silence. When chopping up audio samples, it's always best to make cuts at
points where the waveform meets the zero crossing line, to avoid the nasty clicks that occur when 'instantly'
jumping to/from zero (or the start/end value of an immediately following/preceding waveform) from/to a positive
amplitude. Most audio editors feature an option to automatically snap edits to zero crossing points, although
with left and right channels potentially differing in amplitude at any given moment, the effectiveness of this
setting can't be guaranteed with stereo material. Many DAWs also offer the option to create fades automatically
when cutting and slicing any audio (an option that's sometimes switched on by default), minimising the
likelihood of clicks and pops by ensuring that the level is faded down to zero at the audio clip's edges.
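Snapping an edit to a zero crossing amounts to scanning for a sign change between neighbouring samples. A simple Python sketch for a mono signal (the function name is our own):

```python
def nearest_zero_crossing(samples, start):
    """Find the sample index at or after `start` where the waveform
    crosses zero (a zero sample, or a sign change between neighbours) -
    the safest place to make a cut without introducing a click."""
    for i in range(start, len(samples) - 1):
        if samples[i] == 0.0 or (samples[i] > 0) != (samples[i + 1] > 0):
            return i
    return len(samples) - 1  # no crossing found: fall back to the end

clip = [0.4, 0.2, -0.1, -0.3, 0.2]
assert nearest_zero_crossing(clip, 0) == 1  # sign flips between 0.2 and -0.1
```

As noted above, a stereo file needs this check on both channels, and they won't necessarily cross zero at the same index.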
Zero latency
Your software DAW will always suffer some degree of latency between input and output - for example, the short
delay between singing into a microphone routed to an audio channel and hearing it back through the speakers.
The larger your audio interface's buffer setting, and the greater the latency induced by plugins, the longer the
overall delay. To counter this when recording, most audio interfaces offer 'direct monitoring', whereby the input
is routed directly to the interface's physical output (as well as to the DAW), thus enabling zero-latency (ie, with
no delay) monitoring of the input signal. In truth, there is still a tiny amount of latency caused by
the signal passing through the interface, but it's insignificant.
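The buffer's contribution to the round trip is simple arithmetic: buffer size divided by sample rate. A quick Python illustration:

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """Latency contributed by one audio buffer, in milliseconds.

    The DAW hands audio to the interface one buffer at a time, so a
    bigger buffer means a longer wait before you hear the input back
    (and the full round trip includes at least an input and an output
    buffer, plus converter delays).
    """
    return buffer_size / sample_rate * 1000

small = buffer_latency_ms(256, 44100)   # ~5.8ms: workable for live input
large = buffer_latency_ms(1024, 44100)  # ~23.2ms: fine for mixing only
```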
Since this monitored signal has to bypass the DAW itself, however, you can't apply plugin effects to it, which
can be an issue for singers and guitarists, who generally prefer to hear themselves with effects. As a
workaround, certain interfaces include built-in reverb, delay, compression and other processors that can be
applied to the input for monitoring purposes - and, indeed, recorded with, if desired.
Zipper noise
The characteristic 'ripping' sound that occurs when a stepped parameter of a digital instrument, effect or mixer
is adjusted quickly. An unavoidable by-product of the discrete value changes involved in any digital system,
zipper noise can be masked to a great extent using interpolation techniques or simply increasing the resolution
of the controller.
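Interpolation here just means ramping the parameter across a block of samples rather than jumping instantly. A minimal Python sketch (the function name is our own):

```python
def ramp_gain(samples, old_gain, new_gain):
    """Interpolate a gain change across a block of samples instead of
    jumping to the new value in one step - the standard way plugins
    smooth parameter changes to avoid zipper noise."""
    n = len(samples)
    out = []
    for i, s in enumerate(samples):
        # gain glides linearly, reaching new_gain on the block's last sample
        gain = old_gain + (new_gain - old_gain) * (i + 1) / n
        out.append(s * gain)
    return out

ramped = ramp_gain([1.0] * 4, 0.5, 1.0)
assert ramped == [0.625, 0.75, 0.875, 1.0]
```

Compare that with multiplying the whole block by 1.0 at once: the abrupt step is precisely what produces the audible 'rip'.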