Spring 2009 TIMES Newsletter - FROM ti

In This Issue:
Dr. Thomas Rudolph Receives Honorary Award ......... 2
Absolutely Free: Addressing the MENC... ............ 5
Audio Basics: What Makes Microphones Tick .......... 8
EAMIR: Alternative Instruments... .................. 12
TI:ME National Conference Photos ................... 17
Inclusion of Music Technology Resources... ......... 18
2009 Mike Kovins
TI:ME Teacher of
the Year Named
By Scott Watson
At this year’s TI:ME
National Conference, held
February 11-14, 2009 in
San Antonio, Texas in
association with the Texas
Music Educators Association
National Conference, Wayne
Splettstoeszer was awarded
the 2009 Mike Kovins TI:ME
Teacher of the Year award.
This award is given each year in recognition of an
outstanding music teacher using technology with and
for their students.
Synthesis Basics
By Alfred Johnson
In today’s world of new technologies, we are constantly presented
with innovative ways of achieving the ultimate sound. From the
signal path to outboard gear to analog-to-digital converters, every
link in the chain plays a role in achieving the desired sound.
The microphone, mixing console, and/or converter (referred to
as the front end) are important in capturing the initial sound. If
the desired sound is not captured at this point, it becomes more
challenging to “fix it in the mix.” Next enters the world of signal
continued on page 3
Wayne is the Director of Bands and Music Technology
at Torrington High School, Torrington, Connecticut
where he has developed an award-winning band program
during his tenure while building, from the ground up, a
nationally recognized Music Technology program. The
music technology program at Torrington began with
just one class offering in 1996 and now offers two levels
of classes delivered in a 20-station music technology lab
installed in 2000. Like many music technology offerings,
Splettstoeszer’s classes have been attracting students not
traditionally found in music classes.
One of the hallmarks of Splettstoeszer’s teaching with
technology includes his ability to stretch funds using
freeware and online resources as well as older software
and hardware. He does a lot with what he is given, and
he enjoys sharing all he has been doing at workshops
and conferences, as well as sessions at several
universities. Splettstoeszer is Adjunct Professor of
Music Technology at Fairfield University, Fairfield,
Connecticut and has been active in TI:ME serving as a
member of its Publications Committee.
continued on page 4
continued on page 11
The TI:MES • Spring 2009
1
In Appreciation
Many Thanks to Rocky Reuter!
by Floyd Richmond
TI:ME Officers:
President:
Tom Rudolph
[email protected]
President-Elect:
Amy Burns
[email protected]
Vice-President:
James Frankel
[email protected]
Treasurer:
Marc Jacoby
[email protected]
Key Contacts:
Executive Director:
Kay Fitzpatrick, JD, CAE
[email protected]
Associate Director:
Alecia Powell
[email protected]
Newsletter Editor:
Mark Lochstamphor
[email protected]
TI:ME
Contact Information:
TI:ME, 3300 Washtenaw Avenue,
Suite 220, Ann Arbor, MI 48104
Fax: 734-677-2407
Email: [email protected]
Website: www.ti-me.org
There is no way we can adequately express our
appreciation for your work for TI:ME through
the years. From the first conferences where we
were bringing our own cables and power supplies
to this year’s conference in San Antonio, where
everything seemingly fell into place, we have seen a steady stream of successes.
From our original negotiations with Texas, Ohio, Florida, and many more, you
were the one who made things work.
Thank you for all of the assistance with the TI:ME/TMEA collaboration this
year. Everything was tremendously successful. We had 25 very well attended
music technology sessions during the Wednesday pre-conference. From Thursday
to Saturday, we had another 52 sessions (a total of 77). Setup and registration
went well. The presenters were exceptional and interacted well with the
participants. The quality of the question-and-answer periods after the sessions
was outstanding. The TI:ME keynote speaker, Jordan Rudess, gave an excellent
performance and presentation. David Sebald’s concert was first rate. The Texas
Chapter of TI:ME had a wonderful meeting. The TI:ME reception went well
and was appreciated by all. We had one student complete his TI:ME alternative
certification and identified several potential new sites for TI:ME courses. The
traffic at the TI:ME booth outside the exhibit hall was heavy and resulted in the
building of numerous relationships between TMEA and TI:ME members.
The work that you have done for this latest, and for all the previous, conferences
has prepared TI:ME for great conferences to come. Thanks for all your help! You
will be missed more than you can imagine. l
Dr. Thomas Rudolph Receives
PA TI:ME Honorary Award
By Michael Fein
On March 28th, 2009, the Pennsylvania
Chapter of TI:ME held its 2nd annual
conference at West Chester University.
This year, Beth Sokolowski, the president
of the PA TI:ME chapter, along with the
PA TI:ME steering committee wanted
to honor one of the most important
educators in the world of music
technology, Dr. Thomas Rudolph, with
the PA TI:ME Honorary Award. Most of
you reading this newsletter know of Tom
from his sessions at conferences, summer
technology courses, or one of his many
publications. For anyone even slightly
interested in music technology, it is tough
to miss Tom.
Although Tom is an internationally
known music and technology educator,
continued on page 14
Synthesis Basics - continued from front page
processing and D.S.P. (Digital Signal
Processing), commonly known as
“plug-ins.” These processors range
from equalizers and compressors to
limiters and reverb devices, all of
which are used strategically in
capturing the “desired sound.”
“In today’s world of new
technologies, we are
constantly presented with
innovative ways of achieving
the ultimate sound.”
Once the sound is captured with the
best combination of microphones and
microphone placement techniques, we
can now look to the “art of mixing.” In
mixing, one of the key objectives is to
blend and balance the multiple sounds.
The role of synthesis is unique in shaping
and developing the “desired mix.” The
early days of synthesizers in the 1980s,
which brought us classics like the
Yamaha DX7 and E-mu’s Proteus, revolutionized the
way music is produced.
Synthesizers are devices capable
of producing a variety of sounds by
generating and combining signals of
different frequencies. They incorporate
some basic features and parameter
controls, and are generally simple to use.
These controls and parameters shape and
develop the default sounds built in to the
synthesis device. A simple piano pitch can
be bent or modulated. The same piano
note can have a longer or shorter sustain
for a specified duration. The resonance of
a snare can now be shortened to the sound
of a stick. The possibilities are endless.
One electronic device may contain 100
different sounds ranging from keyboards
to bass, and the sounds can be shaped
into thousands of new sounds. The future
of music was never the same after the
introduction of synthesizers.
As technology developed, these devices
eventually evolved to offer more elaborate
and intricate controls, allowing
thousands of new sounds to be derived
from one sound. Today we have the “fat
hip-hop” beat sound, the “drum’n’bass”
kick sound, the “mellow” jazz drum or
kick sound, and many, many more.
The world of Software Synthesis (also
known as “Soft-Synths” or “Virtual
Instruments”) includes Propellerhead’s
Reason, Apple’s Logic ES1 and
ES2 (which are integrated into the
sequencing software), Spectrasonics’
instruments, and Arturia’s ARP-2600.
The basic elements of these sounds
are achieved through the manipulation
of the synthesis waveforms and the
envelopes of the sounds.
Synthesis and Envelopes
All synthesizers, both hardware and
software, are based on the same
underlying principles: the basic
synthesis waveforms and the envelopes
that shape them. The ability to control
the parameters of the sounds may vary
from hardware device to soft-synth;
however, the key elements remain.
Synthesis Basics:
Sine Waves: These contain only the
fundamental frequency and are known
for their pure sound.
Triangle Waves: These waveforms
rise linearly and fall linearly at the
same rate. Triangle waves are brighter
sounding than sine waves because of
the increased number of overtones.
Saw Tooth Wave: These have a linear
rise followed by a rapid drop off. They
generally are very sharp sounding.
Pulse or Rectangle Wave: These have
a generally sharp rise, horizontal peak
area, and a sharp dropoff. These are
frequently sharp sounding and known
for creating a variety of tone colors.
When the time spent at the peak
equals the time spent at the minimum
(a 50% duty cycle), a square shape
occurs. This “square wave” sounds
rather warm and open, similar to a
clarinet.
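These four shapes are easy to state mathematically. The sketch below is our own illustration (the article itself uses no code); the function name and the 16-samples-per-cycle rendering are arbitrary choices:

```python
import math

def waveform_sample(shape, phase, duty=0.5):
    """Return one sample (-1.0 to 1.0) of a basic synth waveform.

    phase: position within one cycle, 0.0 <= phase < 1.0
    duty:  for pulse waves, the fraction of the cycle spent high
           (0.5 gives a square wave)
    """
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "triangle":
        # Linear rise to the peak at mid-cycle, then a linear fall.
        return 1.0 - 4.0 * abs(phase - 0.5)
    if shape == "sawtooth":
        # Linear rise over the whole cycle, then an instant drop.
        return 2.0 * phase - 1.0
    if shape == "pulse":
        # High for `duty` of the cycle, low for the rest;
        # duty=0.5 is the clarinet-like square wave.
        return 1.0 if phase < duty else -1.0
    raise ValueError(f"unknown shape: {shape}")

# Render one cycle of each waveform at 16 samples per cycle.
for shape in ("sine", "triangle", "sawtooth", "pulse"):
    cycle = [waveform_sample(shape, i / 16) for i in range(16)]
```

The brightness differences described above come from the overtones each shape adds beyond the sine's lone fundamental.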
Envelope (loudness contour):
The envelope is the dynamic change of
a sound over a period of time, often a
fraction of a second to a few seconds. This can
refer to the envelope of the loudness
contour (volume) or of the synthesis
filters (timbre). My discussion will focus
on envelope parameters as they relate to
loudness. Whereas the selection of the
synthesis or waveform affects the sound
or tonal color, the envelope is used to
adjust the body and depth of the sound.
There are four stages of an envelope:
Attack, Decay, Sustain, Release
(A.D.S.R.)
ATTACK: Attack represents the time
the sound takes to rise from an initial
value of zero to its maximum level.
continued on page 4
Synthesis Basics - continued from page 3
Adjusting this will affect how the
initial striking will sound.
DECAY: Decay is the time the sound
takes to fall from its peak to the
sustain level. This works best in
combination with the attack to achieve
the presence of the initial sound. A
lowered attack and decay can result in
a softer or thinner sound, while an
increased attack and decay can result
in a fuller sound.
SUSTAIN: Sustain is the level at
which the sound remains while the
note is held; it is a level, not a time.
Programming the sustain will affect
the resonance of the note.
RELEASE: Release is the time it
takes to move from the sustain to its
final level. Release typically begins
when a note is released. Controlling the
release will determine how long before
the sound diminishes.
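The four A.D.S.R. stages can be sketched as a simple piecewise function. The following Python fragment is our own illustration, not something from the article; it assumes the note is held at least through the attack and decay stages, and treats sustain as a level while the other three parameters are times:

```python
def adsr_level(t, attack, decay, sustain, release, note_off):
    """Envelope amplitude (0.0-1.0) at time t, in seconds.

    attack, decay, release are durations; sustain is a LEVEL.
    note_off is when the key is released (assumed to happen
    after the attack and decay stages have finished).
    """
    if t < attack:
        return t / attack                       # rise from 0 to the peak
    if t < attack + decay:
        frac = (t - attack) / decay             # fall from the peak...
        return 1.0 - frac * (1.0 - sustain)     # ...toward the sustain level
    if t < note_off:
        return sustain                          # hold while the key is down
    if t < note_off + release:
        frac = (t - note_off) / release
        return sustain * (1.0 - frac)           # fade from sustain to silence
    return 0.0

# A percussive patch: fast attack, quick decay, low sustain, short release.
snare_like = [adsr_level(t / 100, 0.01, 0.05, 0.2, 0.1, 0.3)
              for t in range(50)]
```

Shortening a snare's resonance to the sound of a stick, as described earlier, amounts to lowering the sustain level and shortening the decay and release times.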
Certain soft-synths or virtual
instruments may allow programming
of amplitude, modulation or filtering
envelopes. However, the functions are
similar.
Now, with your understanding of
synthesis basics, you can work to achieve
your desired sound using any type of
synthesizer or virtual instrument. Every
twist of a knob or slight change in slider
movement has an impact on producing
that hip-hop gritty sound or creating a
simple synth patch. l
Professor Alfred Johnson is an
instructor at Medgar Evers College of
C.U.N.Y. where he teaches Software
Sound Design and Music Technology,
and serves as coordinator of Music
Technology and iTunes U. Contact
Professor Johnson at 718-270-5172 or
via e-mail at [email protected].
2009 Mike Kovins... continued from front page
He also knows from experience that all music
educators can learn to use and benefit from
technology. “Being named the TI:ME Teacher
of the Year has been a tremendous honor,”
shares Splettstoeszer. “To be recognized on a
national level is something I never expected.
When I first started at Torrington High
School in 1996 I knew nothing about music
technology. Anyone, teacher or student,
can have success with music technology.”
This is by no means the first time Wayne has
been honored for doing what he does so well.
Wayne was the Torrington Public Schools
2003-04 Teacher of the Year and has been
recognized by Roland Corporation, School
Band and Orchestra Magazine, and Phi Beta
Mu International Bandmasters Fraternity,
among others.
TI:ME is very proud to name Wayne
Splettstoeszer as the 2009 Mike Kovins
TI:ME Teacher of the Year. l
Absolutely Free:
Addressing the MENC National
Standards using Freeware, Open
Source and Shareware Software
By Jay Dorfman and Marc Jacoby
You’ve budgeted for computers, keyboards and other MIDI
devices, sound systems, etc., etc. But what about buying
software? Buying commercial, shrink-wrapped software will
bust your budget. That’s where Open Source, Freeware, or
Shareware might help you provide solutions for integrating
technology into your program without breaking the bank.
In Part I of this three-part series, we distinguished between
Open Source, Freeware, and Shareware software, and we
addressed software that could be matched to the first two
National Standards. In this, Part II, we look at three more
standards: improvisation, composing, and reading and
notating music.
Standard III - Improvising melodies,
variations, and accompaniments.
Jazz may come to mind first when addressing the
improvisation standard, especially in school band programs.
But whether it’s for jazz, rock/pop, mariachi, or any other
genre, transcribing your favorite player’s licks is a great way
to learn style and improve your ears at the same time. Being
able to slow the playback tempo down makes it easier to
learn those fast or complicated runs.
Some may remember using a record player or tape deck’s
speed control and the frustrating change in pitch that
resulted from it (especially when transcribing Gerry
Mulligan bari sax solos). You could use Audacity’s Effects
(see Part I) to do this although it will require rendering
time, is “destructive” editing and therefore changes the file
permanently, and the resulting fidelity is not very good.
Instead, there are software apps specifically geared to
accomplish this task. The Amazing Slow Downer falls
under the category of shareware, since the free download is
a “limited” feature version. With this version, you can only
play the first two tracks of a CD and only the first quarter (up
to 3 minutes) of any audio file. With Amazing Slow Downer,
you can control playback, equalization, and loop points of
any sound file. This is “non-destructive” editing and enables
the user to change pitch and tempo independently. One
particularly useful feature of the Amazing Slow Downer is its
ability to handle DRM (digital rights management) encoded
files such as those you would buy through the iTunes Store.
As the screen capture shows, you can assign control
functions to MIDI messages. That way you can work at your
MIDI keyboard, controlling ASD without ever having to use
your mouse or QWERTY keyboard.
Woodshedding is a colloquial term jazz musicians use for
the act of practicing. It alludes to the image of the jazzer
(Sonny Rollins being one of the more famous) working in
solitude on scales, arpeggios, and patterns they’ll use in
their improvisations. Inspired by the book Patterns for Jazz
by Jerry Coker, iShed is a freeware application that provides
tools for practicing these elements in various jazz styles.
Users can transpose scales, arpeggios, or patterns into all
twelve keys by one of eight different methods. Teachers and
students can also create their own patterns for practicing
and sharing with others.
Standard IV - Composing and arranging
music within specified guidelines
Using technology to teach composition and arranging can
come in many forms. Most popular and common are theory
programs that present instruction and skill development on
continued on page 6
Absolutely Free... - continued from page 5
elemental components. Some are very broad in scope while
others focus on more specific topics such as counterpoint or
overviews of instrument performance techniques. These niche
CAI applications can cost one hundred dollars or more.
Production values are generally good with these shrink-wrapped
products, which may include built-in student/class
management tools or integration with notation programs.
Produced by Garritan Interactive and hosted by
NorthernSounds.com (http://www.northernsounds.com/
forum), Rimsky-Korsakov’s Principles of Orchestration and
Chuck Israels’ Exploring Jazz Arranging are free, interactive
versions of traditional textbooks, complete with audio files,
animated score examples, and live video demonstrations. Both
are web-based applications and require an active internet
connection. An interesting feature of the Principles of
Orchestration is the addition of margin notes that address
contemporary issues not found in the original text, along with
additional score and audio examples. One especially exciting
feature of both sites is the possibilities that a “forum” setting
can offer. By allowing users to post comments and create
threads of interest, the sites become on-line communities in
which both students and teachers can participate.
Standard V - Reading and notating music
Though some recent interpretations of the National
Standards say that reading and notating music is actually
embedded into many of the other standards, for the purposes
of this article, we will treat reading and notating music as
skills separate from those assumed in the other standards.
We have identified two types of software that are most
applicable to teaching students to read and notate music:
notation software, and music theory training software.
There are several popular notation applications on the
market that are incredibly powerful. They produce stunning,
professional quality scores, and many can now perform tasks
such as creating high-quality audio recordings from scores,
complex part creation, and even producing versions of scores
ready for posting to the web. These packages can be costly, and
rightfully so because they are very sophisticated.
For those interested in using notation software without a
financial investment, there is Finale Notepad. This free
version of the popular Finale software from MakeMusic is
available for cross-platform, unlimited installation. Its notation
tools are substantially limited in comparison to the commercial
version of the program. Limitations on free notation software
often include a reduced number of available staves or pages, a
smaller set of score markings, and fewer formats that can be
imported into or exported from the program.
Despite these limitations, Finale Notepad is an excellent
choice for introducing students (or yourself) to notation
software. Limited versions of competitor software, such as
Sibelius and Notion are also available, but Notepad is the
only fully functioning product we have located.
Music theory training software is the type of software that
most people associate with the term “CAI,” or computer-assisted
instruction. Among the most popular titles in
this category are Musition, Music Ace, and Alfred’s Music
Theory. Similar to our discussion of software that relates
to Standard 2, software in this category can have excellent
production value, can provide good student feedback, and can
allow teachers to track student progress. A free option in this
category is a program found at www.musictheory.net. This
web-based application offers several excellent tutorials for
learning to read and notate music.
A fairly regular obstacle that music teachers encounter is
that school-based Internet security prohibits students from
accessing certain websites. While we strongly advocate
for online security and appropriate uses of web resources,
we realize these types of obstacles can be frustrating.
Musictheory.net offers a “work-around” for this security
continued on page 16
Chapters News
By Jay Dorfman, Chapter Committee Chair
By sponsoring and running many of the educational, social,
and promotional activities for which TI:ME is known, TI:ME
chapters have quickly become an important part of our
organization. As a TI:ME member, you are automatically a
member of a chapter if one has been started in your state.
Within the last several months, new chapters have been
started in Alabama, Kansas and New York, while existing
chapters continue to run exciting technology opportunities.
Here are some of the activities chapters have been involved
in recently, as well as plans for some upcoming activities:
New Jersey Chapter
The New Jersey chapter sponsored sessions at the first
NJMEA summer in-service in August. The chapter hosted
the 3rd Annual NJ TI:ME state in-service at Rowan
University in October, and sponsored sessions as well
as a technology sandbox at the NJMEA convention in
February. The New Jersey chapter will again sponsor
sessions at the summer in-service (August 3rd in Red Bank,
NJ), and host a state music technology in-service (October 12
at East Brunswick High School). With the TI:ME National
Conference coming to New Jersey in 2010, the chapter is
looking forward to welcoming fellow ‘techies’ from around
the country!
New York Chapter
The New York chapter, while still in the formative stages,
is planning to be present in some form at the NYSSMA
conference in Rochester next Thanksgiving. The chapter is
also planning a spring tech expo, most likely at Five Towns
College in Dix Hills, Long Island.
Ohio Chapter
On Saturday, March 21, the Ohio chapter hosted a day-long
workshop at Capital University in Columbus. With 8
new music teachers, 5 board members, plus Capital faculty
member Mark Lochstampfor in attendance, the day was
action-packed with things to discover, discuss, and deliver to
the new learners and the veteran learners, too.
The topic for the day was “Working with Media for Your
Classroom: Free Applications and Materials from the
Internet,” with the digital audio application Audacity being
continued on page 16
Pennsylvania Chapter
The Pennsylvania chapter held its 2nd Annual Conference on
Saturday, March 28, 2009 at West Chester University. About
30 teachers and administrators from Pennsylvania and New
Jersey gathered for a day of diverse presentations on Web 2.0
tools, digital history projects, Garage Band ‘09, podcasting,
and many other topics. Our keynote presenter, Dr. Scott
Watson, shared his message on unlocking creativity through
technology. The event culminated with a performance by
guest artist and EVI player John Swana, who joined Dr.
Marc Jacoby and Dr. Van Stifel to present a session on
alternative MIDI controllers.
PA TI:ME was once again involved in co-sponsoring the
Electric Playground with SoundTree on Friday, April 24 at
the PMEA Conference. Teachers had the opportunity to
network and share ideas in collaboration with PA TI:ME
members, and SoundTree had its mobile lab available for
further inquiry and exploration of varied software programs.
Our PA TI:ME General Assembly meeting occurred on
Friday, April 24 at the PMEA Conference. All Music
Educators from PA were invited to attend the 11:00 am
meeting to join in discussing how we can support the efforts
of bringing technology into our music classrooms.
Audio Basics: What
Makes Microphones Tick...
Credits: Edited from the Shure Educational Publication, “Microphone Techniques:
Live Sound Reinforcement” by Dave Mendez, Shure Inc.
The selection and placement of
microphones have a major influence
on the audio quality of a sound
reinforcement system or recording.
There are several main objectives
of microphone techniques ranging
from maximizing pick-up of suitable
sound from the desired instrument,
to minimizing pick-up of undesired
sound from instruments or other sound
sources, to providing sufficient gain-before-feedback in a live situation.
“Suitable” sound from the desired
instrument may mean either the
natural sound of the instrument or
some particular sound quality which
best fits the application. “Undesired”
sound may mean the direct or ambient
sound from other nearby instruments
or just background noise. “Sufficient”
gain-before-feedback means that the
desired instrument is reinforced at the
required level without feedback in the
sound system.
In order to achieve the desired
result with your sound, it is useful
to understand some important
characteristics of microphones.
Therefore, we will explore three of
the most important characteristics of
microphones (their operating
principle, frequency response, and
directionality) and what effect these
elements have on your live or recorded
sound.
Microphone
Characteristics
The most important characteristics of
microphones for live sound applications
are their operating principle, frequency
response and directionality.
Operating Principle
This refers to how the microphone
picks up sound and converts it into
an electrical signal. The operating
principle determines some of the basic
capabilities of the microphone. The two
most common types are Dynamic and
Condenser.
Dynamic microphones employ a
diaphragm/ voice coil/magnet assembly
which forms a miniature sound-driven
electrical generator. Sound waves strike
a thin plastic membrane (diaphragm)
which vibrates in response. A small
coil of wire (voice coil) is attached to
the rear of the diaphragm and vibrates
with it in a magnetic field created by
a small permanent magnet. It is the
motion of the voice coil in this magnetic
field that changes the sound picked up
by a dynamic microphone into an
electrical signal.
Dynamic microphones have relatively
simple construction and are therefore
economical and rugged. They can
provide excellent sound quality
and, in particular, they can handle
extremely high sound levels: it is almost
impossible to overload a dynamic
microphone. In addition, dynamic
microphones are relatively unaffected
by extremes of temperature or humidity.
Dynamics are the type most widely
used in general sound reinforcement.
Condenser microphones are based
on an electrically-charged diaphragm/
back-plate assembly which forms a
sound-sensitive capacitor. Here, sound
waves vibrate a very thin, electrically
charged metal or metal-coated-plastic
diaphragm positioned just in front of
a rigid metal or metal-coated-ceramic
back-plate that is also charged. It is the variation
of this spacing, due to the motion of the
diaphragm relative to the back-plate,
that changes the sound picked up by a
condenser microphone into an electrical
signal.
All condenser microphones require
power to operate because of their
internal circuitry. That power can
come from batteries or from phantom
power (a method of supplying power to
a microphone through the microphone
cable itself).

Dynamic Microphone Cross-section

Condenser Microphone Cross-section
There are two potential limitations
of condenser microphones due to
the additional circuitry: first, the
electronics produce a small amount
of noise; second, there is a limit to
the maximum signal level that the
electronics can handle. For this reason,
condenser microphone specifications
always include a self-noise figure and
a maximum sound pressure level (Max
SPL). Good designs, however, have very
low noise levels and are also capable of
handling very loud sounds.
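The spacing-variation principle behind the condenser capsule can be checked with a little arithmetic: for a parallel-plate capacitor, C = ε0·A/d, and with the stored charge Q held fixed, the voltage V = Q/C = Q·d/(ε0·A) varies in direct proportion to the diaphragm-to-backplate spacing. The Python sketch below is our own idealized model (the dimensions are invented for illustration, not Shure specifications):

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, farads per meter

def condenser_voltage(spacing_m, area_m2, charge_c):
    """Voltage across an idealized charged capsule: V = Q * d / (eps0 * A).

    With the charge held fixed (maintained by batteries or phantom
    power), voltage tracks the spacing linearly, so the diaphragm's
    sound-driven motion becomes an electrical signal directly.
    """
    capacitance = EPSILON_0 * area_m2 / spacing_m  # C = eps0 * A / d
    return charge_c / capacitance                  # V = Q / C

# Halving the gap halves the voltage; doubling it doubles the voltage.
at_rest = condenser_voltage(25e-6, 1e-4, 1e-9)     # 25-micron gap
pushed_in = condenser_voltage(12.5e-6, 1e-4, 1e-9)  # diaphragm pressed in
```

This linear relationship is why the motion of the diaphragm relative to the back-plate translates so cleanly into signal, and why the capsule must stay charged to work at all.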
Condenser microphones are more
complex than dynamics and tend to be
somewhat more costly. Also, condensers
may be adversely affected by extremes
of temperature and humidity which
can cause them to become noisy or fail
temporarily. However, condensers can
readily be made with higher sensitivity
and can provide a smoother, more
continued on page 10
Audio Basics...
- continued from page 8
natural sound, particularly at high
frequencies, making them a better
choice for miking instruments or people
from a distance.
Frequency Response
The frequency response of a
microphone refers to the output level or
sensitivity of the microphone over its
operating range from lowest to highest
frequency. Virtually all microphone
manufacturers list the frequency
response of their microphones over
a range, for example 50 - 15,000 Hz.
This usually corresponds with a graph
that indicates output level relative to
frequency. The graph has frequency in
Hertz (Hz) on the x-axis and relative
response in decibels (dB) on the y-axis.
Flat Frequency Response
A microphone whose output is equal
at all frequencies has a flat frequency
response. Flat response microphones
typically have an extended frequency
range. They reproduce a variety of
sound sources without changing or
coloring the original sound.
In contrast to a flat response, a
shaped response is usually designed to
enhance a sound source in a particular
application. For instance, a microphone
may have a peak in the 2 – 8 kHz
range to increase intelligibility for live
vocals. This shape is called a presence
peak or rise. A microphone may also be
designed to be less sensitive to certain
other frequencies. One example is
reduced low frequency response (low
end roll-off) to minimize unwanted
“boominess” or stage rumble.
The choice of flat or shaped response
microphones again depends on the
sound source, the sound system
and the environment. Flat response
microphones are usually desirable
to reproduce instruments such as
acoustic guitars or pianos. They are
also common in stereo miking and
distant pickup applications where
the microphone is more than a few
feet from the sound source largely
because the absence of response peaks
minimizes feedback and contributes
to a more natural sound. On the other
hand, shaped response microphones
are preferred for close-up vocal use and
for certain instruments such as drums
and guitar amplifiers. They are also
useful for reducing pickup of unwanted
sound and noise outside the frequency
range of an instrument.

Shaped Frequency Response
Directionality
The directionality of a microphone
is its sensitivity to sound relative to
the direction or angle from which the
sound arrives. There are a number of
different directional patterns found
in microphone design. These are
typically plotted in a polar pattern to
graphically display the directionality
of the microphone. The polar pattern
shows the variation in sensitivity
360 degrees around the microphone,
assuming that the microphone is in the
center and that 0 degrees represents
the front of the microphone. The three
basic directional types of microphones
are omnidirectional, unidirectional,
and bidirectional.
The omnidirectional microphone
has equal output or sensitivity at all
angles. Its coverage angle is a full
360 degrees. An omnidirectional
microphone will pick up the maximum
amount of ambient sound. In live sound
situations, an omni should be placed
very close to the sound source to pick
up a useable balance between direct
sound and ambient sound. In addition,
an omni cannot be aimed away from
undesired sources such as PA speakers
which may cause feedback.
The unidirectional microphone is
most sensitive to sound arriving from
one particular direction and is less
sensitive at other directions. The most
common type is a cardioid (heart-shaped) response. This has the most
sensitivity at 0 degrees (on-axis) and is
least sensitive at 180 degrees (off-axis).
The effective coverage or pickup angle
of a cardioid is about 130 degrees,
which is up to about 65 degrees off
axis at the front of the microphone.
In addition, the cardioid mic picks up
only about one-third as much ambient
sound as an omni.
Unidirectional microphones isolate
the desired on-axis sound from both
unwanted off-axis sound and from
ambient noise. For example, the use
of a cardioid microphone for a guitar
amplifier which is near the drum set
is one way to reduce bleed-through
of drums into the reinforced guitar
sound. Unidirectional microphones
have several variations on the cardioid
pattern. The most prevalent is the
super-cardioid pattern shown below.
This pattern offers a narrower front
pickup angle than the cardioid (115
degrees for the Supercardioid) and also
greater rejection of ambient sound.
While the cardioid is least sensitive at
the rear (180 degrees off-axis), the
Supercardioid's least sensitive direction
is at 126 degrees off-axis. When
placed properly, supercardioids can provide more
focused pickup and less ambient noise
than the cardioid pattern, but they
have some pickup directly at the rear,
called a rear lobe. The rejection at the
rear is -12 dB for the Supercardioid, as
opposed to as much as 15-20 dB of rear
rejection for a cardioid pattern.
See the chart pictured below for a side-by-side comparison of the different polar patterns.

Omni-directional Pickup Pattern
Cardioid Pickup Pattern
Super-Cardioid Pickup Pattern
Side-by-side Polar Pattern Comparison

In order to fully appreciate and understand the differences between the polar patterns of microphones, we also need to look at some features that are consequences of these different polar patterns: ambient sound rejection, distance factor, off-axis coloration, and proximity effect.

Ambient sound rejection - Since unidirectional microphones are less sensitive to off-axis sound than omnidirectional types, they pick up less overall ambient or stage sound. Unidirectional mics should be used to control ambient noise pickup and get a cleaner mix.

Distance factor - Because directional microphones pick up less ambient sound than omnidirectional types, they may be used at somewhat greater distances from a sound source and still achieve the same balance between the direct sound and background or ambient sound. An omni should be placed closer to the sound source than a unidirectional (about half the distance) to pick up the same balance between direct sound and ambient sound.

Off-axis coloration - A change in a microphone's frequency response that usually becomes progressively more noticeable as the arrival angle of the sound increases. High frequencies tend to be lost first, often resulting in "muddy" off-axis sound.

Proximity effect - The bass response increases as any unidirectional mic is moved closer (within about 2 feet) to the sound source. With close-up unidirectional microphones (less

continued on page 19
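The rejection and distance figures above are easy to check with a little arithmetic. The Python sketch below (added for illustration; the function names are my own) converts dB rejection to a linear amplitude ratio and scales a working distance by the commonly cited cardioid distance factor of about 1.7, which is consistent with the article's one-third ambient pickup and "about half the distance" rule of thumb:

```python
import math

def db_to_amplitude_ratio(db):
    """Convert a dB figure to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

# Rear rejection of -12 dB (Supercardioid) means sound arriving at the
# rear lobe comes through at roughly a quarter of its on-axis level.
rear = db_to_amplitude_ratio(-12)

# A cardioid picks up about one-third the ambient energy of an omni,
# so it keeps the same direct-to-ambient balance at about
# sqrt(3) ~ 1.7 times the omni's working distance.
DISTANCE_FACTOR_CARDIOID = math.sqrt(3)

def equivalent_distance(omni_distance_m, factor=DISTANCE_FACTOR_CARDIOID):
    """Distance at which a directional mic matches an omni's balance."""
    return omni_distance_m * factor

print(round(rear, 3))                      # ~0.251
print(round(equivalent_distance(0.3), 2))  # omni at 30 cm -> cardioid at ~52 cm
```

In practice, room acoustics and mic design vary, so treat these numbers as starting points rather than exact placements.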
The TI:MES • Spring 2009
11
EAMIR: Alternative
Instruments for Music
Education
By Professor V. J. Manzo
Go to the TI:ME website to view the related video examples indicated in this article.

Introduction

Imagine a room where music is produced by touching the wall or the floor; where physical gestures are mapped to notes and harmony; with new musical instruments designed to let individuals without any formal music training create and perform meaningful music.

EAMIR (Electro-Acoustic Musically Interactive Room) is an interactive music system that allows individuals, including those with disabilities, to create a unique, tonal musical expression without the physical and technical limitations found in traditional instruments. Alternate controllers and sensors connect with software to allow users to create music through accessible physical gestures. These gestures are mapped to diatonic musical events and notes/chords in novel ways. Teachers specify the tonic and mode for the instrument, enabling them to perform with individuals who would otherwise have great difficulty performing or composing.

All of the controllers/interfaces are unmodified, making EAMIR primarily a software project. This allows the end-user to obtain any controller/interface associated with EAMIR software and "plug-and-play" to begin making music on any Windows or Macintosh computer. This free software is available from http://www.eamir.org, which provides information regarding setup, installation, and use. EAMIR is also open-source, allowing users to adapt the software to fit their personal needs. EAMIR was created by V.J. Manzo in 2007.

EAMIR Software

EAMIR's software is written primarily in Max/MSP/Jitter and LISP. Data is received from each controller/sensor connected to a computer.

By default, all EAMIR programs are in the key of C Major but contain drop-down menus for selecting a new tonic and mode. The data from the controller is mapped differently in each program, but primarily allows the controller to play notes and/or chords from the selected diatonic mode.

Most EAMIR programs output standard MIDI messages and use the computer's internal MIDI core to synthesize notes. This allows the user to change MIDI parameters such as timbre. The MIDI output can also be routed to any software/hardware synthesizer, allowing unlimited timbres to be used.

All EAMIR programs have the option to record a performance as a MIDI or audio file. MIDI files can then be brought into any program that works with MIDI, such as Pro Tools, Logic, and GarageBand, as well as notation programs like Finale and Sibelius. Audio files can be edited in similar programs that deal with audio.

Lazy Guy
http://www.eamir.org/laser.htm

Lazy Guy is an interactive music system for composition and performance. Using a webcam, Lazy Guy tracks a color the user selects by mouse-clicking on the screen. The orientation of the tracked color determines the pitch and velocity of the synthesis, much like a Theremin. Lazy Guy differs from a Theremin in that the user may filter the playback to allow only diatonic notes to be played.

Monochrome
http://www.eamir.org/wacom.htm

Monochrome is an interactive music system for composition and performance. Monochrome takes user input from a graphics tablet and generates a piece in reaction to the artist's gestures on the tablet. The software calculates user input such as pen orientation and pressure to control pitch and velocity. As the artist begins to fill the canvas, changes in modality and timbre can be triggered in response to the amount of shading that has occurred in a given section of the canvas.
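EAMIR itself is written in Max/MSP/Jitter and LISP, but the tonic/mode mapping at the heart of its programs can be sketched in a few lines of Python. This is a hypothetical illustration, not EAMIR's actual code; the mode tables and function names are my own:

```python
# Semitone offsets of each scale degree for the seven diatonic modes.
MODES = {
    "ionian":     [0, 2, 4, 5, 7, 9, 11],  # major
    "dorian":     [0, 2, 3, 5, 7, 9, 10],
    "phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "lydian":     [0, 2, 4, 6, 7, 9, 11],
    "mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "aeolian":    [0, 2, 3, 5, 7, 8, 10],  # natural minor
    "locrian":    [0, 1, 3, 5, 6, 8, 10],
}

def degree_to_midi(degree, tonic=60, mode="ionian"):
    """Map a 0-based scale degree to a MIDI note number.

    Degrees past 6 wrap into higher octaves, so any controller input
    stays inside the selected diatonic mode."""
    steps = MODES[mode]
    octave, idx = divmod(degree, len(steps))
    return tonic + 12 * octave + steps[idx]

# Default C Major: degrees 0..7 -> C D E F G A B C
print([degree_to_midi(d) for d in range(8)])
# -> [60, 62, 64, 65, 67, 69, 71, 72]
```

Changing the `tonic` and `mode` arguments plays the role of EAMIR's drop-down menus: the same controller gesture produces different notes once a teacher selects a new key.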
Guitar EAMIR-o
http://www.eamir.org/guitar_hero.htm

Using the controller of the popular game Guitar Hero, I began writing a program that would allow the buttons of this controller to play the notes of any diatonic scale. Nearly all of my K-12 students play this game regularly, so I saw the practicality of using this controller, which they all seem to know, as an interface for making tonal music.

The first 4 buttons are mapped to the 8 notes of any diatonic scale with respect to any tonic. The fifth button enables chord mode, in which each note acts as the root of the typical bar chord voicing found in guitar literature. The back button on the controller allows the player to switch the octave designation.

DDR EAMIR
http://www.eamir.org/ddr.htm

Using the controller of the popular game Dance Dance Revolution, I began writing a program that would allow the buttons of this controller to play back loops of audio. Simply load up the audio files (or use the default loops) and step on each pad to begin playback of each file. Start and stop loops on the fly, bring them all in or out, change the EQ and volume--anything you want.

EAMIR padKontrol
http://www.eamir.org/padKontrol.htm

The EAMIR padKontrol system uses the Korg padKontrol USB MIDI controller and PC or Mac compatible software to compose music. When the software is launched, the user can select between two modes of operation: Absolute Mode and Relative Mode. Both modes utilize the padKontrol's 16 touch-sensitive pads to create chords in various performance manners.

P5 Glove
http://www.eamir.org/p5glove.htm

The P5 glove controller sends 5 streams of continuous data, one for each of the five fingers on the glove. Data is also sent with respect to the glove's X, Y, and Z orientation. In this example, the EAMIR software translates the finger bend data to chords from the C Major scale. Buttons on the glove can be programmed to trigger modulation (common-tone, chromatic mediant, etc.). X, Y, and Z data can be programmed to control dynamics, arpeggiation tempo, and other aspects of the performance.
Slider
http://www.eamir.org/touchscreen.htm
Slider is a touchscreen musical interface that converts
graphical table data into a linear array of musical notes.
The Slider software runs on a touchscreen computer, which
allows the table data to be entered by physically touching the
graphical interface. Each point in the table is analyzed serially
and yields a note. The notes are played back at a user-variable
rhythm, which can be changed by touching a rhythmic value
icon at the left. The tempo can be changed as well.
When a user touches the screen, the points are read and
produce notes. The higher the line appears when it is read,
the higher the note will sound. A keyboard appears beneath
the table to reflect the pitch of the point currently being
analyzed. By default, notes are read "up and down," that is,
from left to right and then back from right to left. This option
can be changed to have notes read only up (from left to right)
or only down (from right to left).
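The reading directions described above amount to a simple traversal of the table. The sketch below is an assumed illustration of that behavior (not Slider's actual code):

```python
# Sketch of Slider's table reading order. "up and down" reads the
# points left to right, then back right to left without repeating
# the endpoints; "up" and "down" read in one direction only.
def reading_order(table, mode="up and down"):
    forward = list(range(len(table)))
    if mode == "up":        # left to right only
        return forward
    if mode == "down":      # right to left only
        return forward[::-1]
    # ping-pong: forward, then backward skipping both end points
    return forward + forward[-2:0:-1]

heights = [3, 7, 1, 5]      # a higher point yields a higher note
order = reading_order(heights)
print(order)                        # [0, 1, 2, 3, 2, 1]
print([heights[i] for i in order])  # [3, 7, 1, 5, 1, 7]
```

Each index in the resulting order would then be converted to a pitch, with higher table values sounding higher notes, as the article describes.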
Tiles
http://www.eamir.org/tiles.htm
The EAMIR tiles take the amount of force exerted on them
and convert that energy into music. Embedded in each floor
tile, beneath the colored rectangle, is a sensor that measures
force. The amount of force is translated differently depending
on which tile program is being run. The simplest program
outputs the notes of the C Major scale from low to high
depending on the amount of force exerted. Other programs
allow the tiles to be used for pitch-matching games and
activities for memory reinforcement, as well as extended options
for musical performance. The tiles are moveable and may also
be mounted vertically (attached to the walls) if desired. l
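The basic tile program's force-to-pitch idea can be sketched as follows. This is a hypothetical Python illustration; the 0-127 sensor range is an assumption, not a documented EAMIR specification:

```python
# Map a force reading onto the C Major scale, low to high:
# a light touch yields middle C, a hard stomp the octave above.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def force_to_note(force, max_force=127):
    """Translate a force sensor reading into a C Major scale note."""
    force = max(0, min(force, max_force))            # clamp to range
    idx = int(force / (max_force + 1) * len(C_MAJOR))
    return C_MAJOR[min(idx, len(C_MAJOR) - 1)]

print(force_to_note(0))    # 60 (middle C, lightest touch)
print(force_to_note(127))  # 72 (hardest stomp)
```

Other tile programs would substitute a different translation for the same sensor reading, which is what lets the hardware serve pitch-matching games as well as performance.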
Resources:
EAMIR – www.eamir.org
The Modal Object Library – a collection of algorithms to control/
define modality http://www.vincemanzo.com/modal_change
Professor Manzo is the Director of Music Technology at
Montclair State University (2007) and Kean University (2007)
where he teaches courses in traditional & electronic music and composition. E-mail: [email protected]. Websites: www.vincemanzo.com, www.eamir.org
Dr. Thomas Rudolph Receives PA TI:ME
Honorary Award - continued from page 2
writer, and presenter, I know Dr. Rudolph as my middle
school band director. When I was a Haverford School District
student in Havertown, PA, Dr. Rudolph's enthusiasm
and exceptional teaching practices inspired me in middle
school concert band and jazz ensemble. I recall early
morning jazz ensemble rehearsals learning standard jazz
repertoire such as Jumpin’ at the Woodside and Harlem
Nocturne and developing improvisational skills over Bb
Blues and other entry-level chord progressions. Thanks
to Dr. Rudolph, I became a music teacher and now a
colleague of his at Haverford.
As music department head at Haverford, Tom knew the
importance of music technology electives in addition to
a vibrant music performance program. He understood
that music technology electives could pull students
from the entire school population into music and attract
all of the students who didn’t fit into the typical band/
orchestra/chorus model. Today, the music lab at Haverford
High School has doubled the size of the high school music
program, serving 384 students per year. Thanks to Tom's
consistent support over many years for music technology
in the district, the high school now has a full-time music
technology teaching position with a 24 student-seat lab
along with a keyboard lab at the middle school.
In November, Tom had surgery to remove a cancerous
growth in his colon that was discovered during a routine
colonoscopy. Fortunately, the cancer was removed at the
early stages and Tom is now cancer free. He is receiving
chemotherapy as a preventative measure and his energy
level is diminished; consequently, Tom was unable to
attend the PA TI:ME Conference and receive the award
in person. Instead, I presented the award to Tom at
Haverford’s 30th Annual Evening of Jazz. Started by Tom
30 years ago, Haverford’s Annual Evening of Jazz features
Haverford students from the middle school and high school
jazz groups along with a special guest artist. In recent
years Haverford has hosted outstanding jazz soloists
such as Jon Faddis, Wycliffe Gordon, and Randy Brecker.
To commemorate the 30th Annual Evening of Jazz, Tom
assembled a jazz combo of some of the top Haverford
graduates from his tenure ranging from the class of ’77
to the class of ’06. Tom received recognition for his years
of outstanding service in music technology in front of a
packed house of students, parents, and alumni from the
past 32 years. This was the first time Tom was honored
for his contributions to music education and technology.
Obviously, this award was long overdue.
Tom has had a tremendous impact on my life and the lives
of the students and staff in Haverford School District. He
is an inspirational teacher, role model, mentor, and friend
and I can’t think of anyone more deserving of the PA
TI:ME Honorary Award. l
Research Corner:
Recent Research on
Looping and DJ Software
in the Music Classroom
Kimberly C. Walls
Today there are vast possibilities for
creating music through technology and
most commercial music is produced
with software. Our music students
have greater options for composing
and arranging music than most of us
experienced while growing up. Our
lack of early experiences may make
it difficult to know the best ways to
incorporate creative technologies into
our classrooms. Research can help
remedy the situation by informing us
how students learn with music software
and by suggesting best practices for
using software to teach music creativity.
Looping or DJ software is popular
for creating pop music. Many music
teachers have used looping software
in their classes to encourage music
creativity. Looping software contains
audio samples that can be repeated
(or vamped) without a break for a
specified number of measures in a
multitrack mix. The audio samples are
categorized by style, instrument, and
mood and can be played back in the
tempo and key of the mix. The looping
or DJ category includes software titles
such as Acid, GarageBand, Super Duper MusicLooper, Live, FL Studio
(previously FruityLoops), and eJay (soon to become eQuality),
as well as various web sites that support looping
compositions. Most of the titles also
support live recording of speech, vocal,
and instrumental tracks. Previous
research supports the idea that students
are motivated by the authenticity of
the musical styles and the professional-sounding arrangements.
eJay (www.ejay.com) has been widely
used in English schools to promote
arranging and composition. Eight 13-14
year old boys and girls of various levels
of music experience who attended a
low-socioeconomic school in England
were the participants in Mellor's
(2008) study, "Creativity, Originality,
Identity: Investigating Computer-Based
Composition in the Secondary School."
Each individual student used eJay for
15 minutes "to compose a piece that
sounds good to you." The computer
composition activity was similar to taking
a music class to the school computer
lab or rotating a number of individual
students through a music room computer
station. Liz Mellor wanted to learn
about the strategies students used in
composing with eJay and the impact of
previous musical experiences on musical
choices and compositional processes.
Each individual student had a short
software orientation session in which
they used the mouse to find and preview
loops and clicked play, rewind and
stop. Mellor took care not to reinforce
any of the actual arranging done in
the orientations and gave each student
a handout with screen prints of the
various tools. The researcher left the
room and each student spent 15 minutes
to complete the open-ended composition
task. Mouse actions on the screen were
recorded through a video scan converter
and the compositions were recorded on
mini-disc.
Following the composition period,
students were asked to fast-forward
and stop the video at what they
thought were important points in the compositional process
and describe the significance of each episode. Students also
answered a number of scripted open-ended questions in which
they named their favorite loops, described the best section of
their mix, told what they enjoyed, what they learned, what
was original, and what was creative. The verbal responses
and the video screen were taped for analysis. Each student
was interviewed; after telling about their memorable musical
experiences, they described the feelings and meanings of each
incident. The interviews were recorded on mini-disc. Mellor
coded the screen actions, analyzed the verbal reflections and
answers for emergent themes, and charted the interviews for
critical incidents.
Although participants were not instructed to compose a piece
with sections, the process all students used was vertical in
which each section of a composition was completed before
beginning work on a new section. All the students felt they
were creative and all of them used divergent thinking when
previewing loops and used convergent thinking when editing
their mixes. Formal music training seemed to influence how
students used musical terminology as well as their musical
preferences and compositional processes. The student with
more formal music training used less experimentation, felt
that his creation was not original, and doubted that computer
composition in the dance genre could ever be creative.
Participants differed in what they thought they had learned,
varying from musical choices and concepts to ease of composing
with technology. This was especially important to some girls
who had never used technology for composing and a low school
achiever who felt he had a credible accomplishment. Composing
with eJay also affected participants’ thoughts of personal
identity; as they recognized and valued their creative decisions,
some began to think of themselves as being musicians.
Implications for music teachers are that eJay (and similar
software titles) can provide efficient means for students to
create music by helping to generate and refine ideas that
can result in musical choices. In as short an exposure as
15 minutes, arranging and composition standards can be
approached. Looping software provides motivation for learners
because the style of music is more relevant to students' world
outside of school and using the software is authentic to the tasks
practiced by commercial musicians. Cultural relevance and ease
of use are especially important to children who may have had
no formal music training or little school success. l
Mellor, L. (2008). Creativity, originality, identity: Investigating
computer-based composition in the secondary school. Music
Education Research, 10(4), 451-472. Retrieved April 2, 2009
from http://dx.doi.org/10.1080/14613800802547680
Kimberly C. Walls is Professor and Coordinator of Music
Education at Auburn University. She serves on the TI:ME
Research Committee and is Vice-President of the Association for
Technology in Music Instruction. Contact Kim at kim.walls@
auburn.edu
Absolutely Free... - continued from page 6
debacle—the entire contents of the site can be downloaded
and installed on as many local machines as you like, free
of charge.
So far we have described software that can help music
teachers and students reach the goals implied by the first
five National Standards. The last four standards present
some very interesting possibilities for the use of software
technologies. We will tackle those in the third and final
installment of this article series. Stay tuned! l
Chapter News - continued from page 7
the main vehicle for moving and manipulating everything.
Every effort was made to make the day as non-platform
specific as possible.
The basics of how to use Audacity were followed by a
quick look at a lesson on doing digital musique concrete,
collaborative composing with looping, simple song forms
at online sites, and more.
Although the learning was comprehensive, it was very
intentional that the presentation style needed to be very
casual with plenty of individual time for exploration. The
lunch break conversation continued to stay pretty much
on topic: teachers talking about teaching, and successful
approaches to utilizing technology in the classroom.
Future plans for the Ohio group include a gathering for
a day in the fall at Lebanon High School and once again
hosting the Central Regional TI:ME Conference in January
2010. The call for proposals will be going out soon! l
TI:ME chapters are also active in California, Florida,
Maryland, Massachusetts, Texas, and Singapore. If you
are interested in getting involved in your state’s existing
chapter, visit www.ti-me.org/chapters to contact the
chapter leadership. If you would like to know more about
starting a chapter, or have questions about chapters,
please contact me at [email protected].
Photos courtesy of Karen Garrett
Inclusion of Music
Technology Resources in
Early Childhood Music
Education
By Fred Kersten
Technology has found its way into virtually every aspect of
the home and classroom. Currently it is highly available to
the young child (early childhood age 4–8) through resources
on the Internet, toys, games, and technology-teaching genre.
A visitor to a local toy store will find an increasing number
of games, musical instruments, and controllers that are
focused on the musical aspects of education. Software that
provides opportunities for composition, improvisation, and
discovering musical elements is becoming plentiful and is of
high quality.
Increasingly available in the home, the Internet can
provide real-time musical activities, giving the child an
opportunity to interact with musical concepts with
and without the support of a care provider. The objective
of interaction with Internet music technology is to develop
overt, physical, psychomotor involvement in music activity
rather than sedentary, inactive observation of audio/visual
output. Simply stated, active physical participation in music
is the primary goal, including playing, singing, movement,
creating, and listening to music.
Through technology resources the following standards are
supported for musical interaction:
• Identifying and experiencing musical instrument
characteristics and timbres and constructing simple
instruments
• Singing and playing to musical backgrounds
• Experiencing creative processes with realistic aural
sounds involving composition, change of tempo, timbre,
and form while obtaining immediate musical feedback.
• Listening to music.
• Engaging in movement activities - rocking patterns, moving to the beat.
• Identifying feelings and ideas that music communicates
and focusing on them.
• Exposing children to diverse types and styles of music.
Web Resources for Musical Interaction

The following links provide resources for implementing the above-mentioned opportunities for musical interaction.

Instrument timbre awareness

• Symphony Orchestra sites such as the New York Philharmonic Kids Zone (www.nyphilkids.org/games/main.phtml). This site includes Orchestration Station, which provides an opportunity to orchestrate a short composition with authentic instrument timbres, and Percussion Showdown, which offers musical memory exercises. As you view the San Francisco Symphony site, enjoy The Speed of Music-CHECK OUT TEMPO (www.sfskids.org/templates/home.asp?pageid=1).
• The Dallas Symphony site for kids, parents, and teachers provides information on making instruments, and instrument sound identification (www.dsokids.com/2001/rooms/DSO_Intro.html).
• Backstage, produced by the American Symphony Orchestra, is a site that portrays various instruments of the orchestra (www.playmusic.org/stage.html). Included are short movie examples of older students playing instruments as well as sound excerpts. The Percussion site features a tonal memory game with an opportunity for performance evaluations (www.playmusic.org/percussion/index.html). This game provides interactive levels of achievement difficulty in addition to sound examples of percussion instruments. A Woodwinds site (www.playmusic.org/woodwinds/index.html) details this family.
• Science of Sound and Hearing - from the bbc.co.uk age 5-6 science pages. Timbres, illustrated musical vibration examples, loud/soft discrimination, and sound production are interactively examined (www.bbc.co.uk/schools/scienceclips/ages/5_6/sound_hearing.shtml).

Listening

• Classics for Kids (www.classicsforkids.com/index.asp) provides listening resources including podcasts & classical archives.

Music Creativity Online

• creatingmusic.com, developed by Morton Subotnick, is an online creative music environment for children of all ages. It is a place to compose music and interact with musical performance, games, and puzzles. This site provides music interaction opportunities for early childhood. Pages include:
  • Musical Sketch Pad (creatingmusic.com/new/sketch/index.html)
  • Rhythm Band (www.creatingmusic.com/mmm/mmmrb.html)
  • Games and Puzzles (www.creatingmusic.com/puzzles/index.html)
  • Cartoon Conductor (www.creatingmusic.com/cartoons/index.html)
  • Playing with Music (www.creatingmusic.com/playing/index.html)
  • Same or Different (creatingmusic.com/ComparingGame/index.html)
  • Melodic Contour (www.creatingmusic.com/contours/index.html)
• The Playing with Music site emphasizes musical concepts, permitting children to experiment with slow/fast and forward/backward (www.creatingmusic.com/playing/play1.html).
• Playing with scales allows performance using various timbres in major or minor, featuring clarinet, oboe, or xylophone. Young children, with parent supervision, can do the clicking (www.creatingmusic.com/playing/play3.html).
• PBS Kids (pbskids.org/) provides opportunities for games, stories, music, and coloring. Check out Global Groovin' to see and hear multicultural instruments, then click on Go mix music and compose with these and other sounds (pbskids.org/mayaandmiguel/english/games/globalgroovin/game.html). The PBS music site allows listening to, and singing of, songs including lyrics that parents and teachers can print out. Other sites such as Mr. Rogers' Neighborhood are available with songs, stories, and make believe (pbskids.org/rogers/songlist/).
• Cbeebies (BBC) provides an interesting music site where very authentic timbres may be heard and associated with instrument pictures (www.bbc.co.uk/cbeebies/tweenies/songtime/games/makemusic/). Additional pages for singing songs (karaoke) are available (www.bbc.co.uk/cbeebies/tweenies/songtime/).
Audio Basics... - continued from page 11
than 1 foot), be aware of proximity effect and roll off the bass
until you obtain a more natural sound. You can (1) roll off low
frequencies on the mixer, (2) use a microphone designed to
minimize proximity effect, (3) use a microphone with a bass
roll-off switch, or (4) use an omnidirectional microphone
(which does not exhibit proximity effect).
Conclusion
A person’s choice of microphone can often be a matter of personal
taste. There really is no one ideal microphone or way of placing
that microphone to get the sound you are looking for. We
recommend experimenting with all sorts of microphones and
positions until you get the sound that works for that particular
moment. However, the desired sound can often be achieved
more quickly through a solid understanding of the microphone
characteristics outlined above as well as a good knowledge of
how sound radiates from different instruments and also how
the room affects that sound. Just remember, whatever method
sounds right for the particular situation is right. l
Early Childhood Music Software
• Making Music, Making More Music, Hearing Music, and Playing Music from Morton Subotnick parallel items found on his creatingmusic.com site. Several titles indicate "age 8-up;" however, each title has aspects that are easily utilized with early childhood music education (www.emediamusic.com/academic/index.html).
• Pianomouse goes to Preschool includes instrument identification, basic music rudiments, and composer backgrounds (www.pianomouse.com/products.htm).
• ECS Music Education Software (Kids Stuff section) provides a wide-ranging source of music software for early childhood (www.ecsmedia.com/products/prodmusic.shtml#kids).
• Sibelius Software--Groovy Music: Groovy Shapes (5-7), Groovy Jungle (7-9), and Groovy City (9-11) allow composition and music experience, and are very comprehensive. A feature to look for in early childhood software is user-friendly supportive narration, which is well illustrated within these titles (www.sibelius.com/products/groovy/index.html).
Further Resources to Explore
For more specifics regarding this topic and additional
resources, access: (http://fredkersten.com/TIME/
TIMEEarly.htm). l
Dr. Fred Kersten has extensive experience with music
technology and has presented for ATMI, TI:ME, and
MENC in addition to providing articles for MENC
journals. He is currently developing research on technology
aspects of early childhood music education. Visit him at:
(http://fredkersten.com).
TI:ME
3300 Washtenaw Avenue, Suite 220
Ann Arbor, Michigan 48104-4294
SAVE THE DATE!
TI:ME National Conference 2010
February 18-20, 2010
East Brunswick Hilton & Towers
Hosted by New Jersey Music Educators Association (NJMEA)