Continuity Editing
Editing is the creative force of filmic reality...and the foundation of
film art. (V.I. Pudovkin, 1915)
The above statement was made in 1915, and since that time editing
has become even more important.
Editing establishes the structure and content of the production
along with the production's overall mood, intensity, and tempo.
In this series of modules we'll explore the various dimensions of this
topic. We'll start with continuity editing.
Continuity editing refers to arranging the sequence of shots to
suggest a progression of events. Given the same shots an editor
can suggest many different scenarios. Consider just these two
shots.
Let's start with a simple example of only two shots:
• a man glancing up in surprise
• another man pulling a gun and firing toward the camera
In this order it appears that the first
man was shot. However, if you
reverse the order of these two scenes, the first man is watching a
shooting.
Let's look at what we can do with just three shots.
• people jumping from a car
• the car on fire
• an explosion
In the 1-2-3 sequence shown the shots suggest that people are
jumping from a car seconds before it catches fire and explodes.
A 3-2-1 order suggests that there is an explosion and then the car
bursts into flames; and, as a result, the people have to jump out.
In a 2-3-1 sequence people jump from the car after a fire causes an
explosion.
If the sequence is changed to 2-1-3, it appears that as a result of a
fire passengers jump out of the car just in time to escape a
devastating explosion.
Very different scenarios with only three shots to work with!
When hundreds of scenes and takes of scenes are available to an
editor, which is normally the case in dramatic productions, the
editor has tremendous control over the basic continuity and
message of the production.
Changing Expected Continuity
Continuity editing primarily involves guiding an audience through a
sequence of events and, in the process, showing them what they
want to see when they want to see it. In the end, you've told a
story, or logically traced a series of events to their conclusion.
In dramatic television good editors sometimes break from the
expected to achieve a dramatic effect. Unfulfilled expectations can
be used to create audience tension. Let's take this simple shot
sequence:
• A man is working at his desk late at night.
• There is a knock at the door.
• The man behind the desk routinely calls out, "Come in."
• After looking up, the calm expression on the man's face dramatically changes to alarm.
Why? We don't know. Where is the shot of who or what just came
in? What happens if we don't cut to that expected shot? The
audience is then just left hanging with curiosity and apprehension—
or, depending on how it's handled, with frustration and resentment.
Here's an example of the latter. In a story about the changes in the
U.S. $100 bill a treasury spokesperson spends considerable time
giving specific details on the changes that were necessary to foil
counterfeiting. Let's assume that the whole time all we see is a two-shot of the men carefully examining one of the new $100 bills.
Obviously, we would want to see a close-up of the bill so we can see
the changes they are referring to. If there is no such shot, we feel
frustrated.
So, unless you want to leave your audience hanging for momentary
dramatic effect, always keep in mind what you think the audience
expects to see at any given moment. If you do, your edit decision
list (sequence of edits) will largely write itself.
In news and documentary work the more logically you can present
events (without extraneous material) the less room there will be for
misunderstanding or frustration. In these types of productions you
want to be as clear and concrete as possible.
In dramatic production we have much more creative latitude. In
fact, with dramatic productions it's often desirable to leave some
things open for interpretation.
Sometimes for dramatic effect and to keep an audience guessing
(for a while) you may want to use editing to intentionally delay
answering an obvious question.
Acceleration Editing
In film and video production time is routinely condensed and
expanded.
For example, let's say you want to tell the story of a young woman
going out on an important date. The process of just watching her
pick out clothes from her closet, take a shower, dry her hair, do her
nails, put on her clothes and make-up, check the whole effect in a
mirror, make any necessary adjustments, and then drive to some
prearranged place could take 90 minutes. That's the total time
devoted to most feature films—and the interesting part hasn't even
started yet!
Very early in the film business audiences were taught to assume
things not shown. For example, the 90 minutes or so it took the
woman to meet her date could be shown in 19 seconds.
• a shot of her concluding a conversation on the phone and moving quickly out of the frame (3 seconds)
• a quick shot of her pulling clothes out of her closet (2 seconds)
• a shot of her through a steamy shower door (2 seconds)
• a couple shots of her blow-drying her hair (4 seconds)
• a quick shot of her heading out the front door (2 seconds)
• one or two shots of her driving (4 seconds)
• and finally, a shot of her pulling up in front of the prearranged meeting place (2 seconds)
Or how about this.
• A shot of her hanging up the phone, jumping up and moving out of frame.
• A shot of her arriving at the agreed upon place.
Expanding Time
Occasionally, an editor or director will want to drag out a happening
beyond the actual time represented.
The noted director, Alfred Hitchcock, (North By Northwest, Psycho,
etc.) used the example of a scene where a group of people sitting
around a dinner table was blown up by a time bomb. In a real-time
version of the scene, the people sit down at the table and the bomb
goes off. End of people; end of scene.
But Hitchcock was famous for suspense, and no real suspense
would be generated by this approach.
In a second version the people gather, talk, and casually sit down at
the dinner table. A shot of the bomb ticking away under the table is
shown revealing to the audience what is about to happen.
Unaware of the bomb, the people continue their banal conversations.
Closer shots of the bomb are then intercut with the guests laughing
and enjoying dinner. The intercutting continues (and speeds up)
until the bomb finally blows the dinner party to bits.
The latter version understandably creates far more of an emotional
impact.
Causality
Often, a part of continuity editing is to
suggest or explain cause. A good script
(enhanced by good editing) suggests or
explains why things happen.
For example, in a dramatic production it
would seem strange to cut to a shot of
someone answering the phone unless we had heard the phone ring.
A ringing phone brings about a response; the phone is answered.
We may see a female corpse on the living room floor during the first
five minutes of a dramatic film, but not know who killed her or why
until 90 minutes later.
In this case effect precedes cause. Although strict continuity editing
would dictate that we present events in a logical sequence, it would
make a more interesting story—one that would be more likely to
hold an audience—if we present the result first and reveal the cause
gradually over time. Is this not the approach of almost every crime
story?
Sometimes we assume cause.
If we are shown a shot of someone with all the signs of being drunk
(effect), we can probably safely assume they have been drinking
(cause).
If we see a shot of someone attempting a difficult feat on skis for
the first time, followed by a shot of them arriving back home with
one leg in a cast, we assume that things didn't quite work out.
Let's go back to the corpse on the living room floor. Knowing that
the husband did it may not be enough (maybe for the police, but
not for most viewers). In causality there is also the question of why.
This brings up motivation.
Motivation
Under motivation, we can assume any one of the age-old motives,
including money, jealousy and revenge.
But, even knowing that the motive was revenge may not be enough
for a well thought-out, satisfying production. Revenge must have a
cause.
To provide that answer we may have to take the viewer back to
incidents in the past. We could show a woman with a string of
lovers. We could then see suspicion, jealousy, resentment, and
anger building in her husband. Finally, we could see that these
negative emotions could no longer be restrained.
Now we understand. We've been shown effect, cause, and motivation.
Editors must perceive the dynamics of these cause-and-effect
relationships to skillfully handle them. They must also have an
understanding of human psychology so that they can portray
feelings and events realistically. How many serious dramatic
productions have you seen where actions and reactions just don't
seem to be realistic? Does this not take away from the credibility of
the production?
Writers and directors also know they shouldn't reveal answers
(motivations) too quickly. In a good mystery we will probably try to
hold our audience by leading them through critical developments in
a step-by-step fashion.
Continuity Techniques
While holding to the basic continuity of a
story, an editor can enhance the look of a
production by adding insert shots and
cutaways. We introduced these previously,
but now let's look at these from the
standpoint of editing.
Insert Shots

An insert shot is a close-up of something that exists within the basic scene. (Photo above.) The latter is typically represented by the establishing (wide) shot. Insert shots add needed information—information that wouldn't otherwise be immediately visible or clear.
In our earlier example of the new $100 bill, a CU of the bill that was
being discussed would be an insert shot. In the photos above the
close-up of the woman's hands on the right is an insert shot.
Cutaways
Unlike insert shots that show significant aspects of the overall scene in close-up, cutaways cut away from the main scene or action to add related material.

Here, we cut away from a person descending into a mine shaft (photo on the left) to a shot of a man who is already at a lower level. The fact that the man shown on the left is glancing down provides motivation to cut away to the shot above.
During a parade, we might cut away from the parade to a shot of
people watching from a nearby rooftop or a child in a stroller
sleeping through the commotion.
In the editing process we have to rely on regular insert shots and
cutaways to effectively present the elements of a story. We can only
hope that whoever shot the original footage had enough production
savvy to shoot them.
Relational Editing
Many years ago, the Russian filmmakers Pudovkin and Kuleshov
conducted an experiment in which they juxtaposed a scene of a
man sitting motionless and totally expressionless in a chair with
various other scenes. The latter included a close-up of a bowl of
soup, a shot of a coffin containing a female corpse, and a shot of a
little girl playing. To an audience viewing the edited film, the man
suddenly became involved in these scenes.
When the shot of the man was placed next to the shot of the coffin,
the audience thought that the actor showed deep sorrow. When it
was placed next to the close-up of the food, the audience perceived
hunger in his face; and when it was associated with the shot of the
little girl, the audience saw the actor as experiencing parental pride.
Thus, one of the most important tenets of editing was
experimentally established: the human tendency to try to establish
a relationship between a series of scenes.
In relational editing, scenes which by themselves seem not to be
related take on a cause-effect significance when edited together in a
sequence.
Remember the scenario in the last module of the woman who was
apparently murdered by her husband? What if we preceded the shot
of the corpse on the living room floor with shots that included the
woman covertly cleaning out large sums of money from a home
safe as her husband entered from a doorway to catch her? Is a
relationship between these events suggested? Do we now have a
clue as to what might have happened?
It is easy—and generally even desirable—to combine continuity and
relational editing. However, when it comes to the next topic,
thematic editing, these fundamental concepts change.
Thematic Editing
In thematic editing, also referred to as montage editing, images are
edited together based only on a central theme. In contrast to most
types of editing, thematic editing is not designed to tell a story by
developing an idea in a logical sequence.
In a more general sense, thematic editing refers to (as they say in
the textbooks) a rapid, impressionistic sequence of disconnected
scenes designed to communicate feelings or experiences. This type
of editing is often used in music videos, commercials, and film
trailers (promotional clips). The intent is not to trace a story line,
but to simply communicate action, excitement, danger, or even the
"good times" we often see depicted in commercials.
From continuity, relational, and montage editing we now move to a
technique for enriching editing and stories by adding extra "layers."
Parallel Cutting
Early films used to follow just one story line—generally, with the
hero in almost every scene. Today, we would find this simplistic
story structure rather boring.
Afternoon soap operas, sitcoms, and dramatic productions typically
have two or more stories taking place at the same time.
The multiple story lines can be as simple as intercutting between
the husband who murdered his wife in the previous scenario and
the simultaneous work of the police as they try to convict him. This
is referred to as parallel action. When the segments are cut
together to follow the multiple story lines, it's referred to as parallel
cutting.
By cutting back and forth between two or more mini-stories within
the overall story, production pace can be varied and overall interest
heightened. And, if the characters or situation in one story don't
hold your attention, possibly the characters or situations in one of
the other storylines will.
Solving Continuity Problems
As we've noted, audiences have learned to accept the technique of
cutting out extraneous footage to keep a story moving. Strictly
speaking, these are discontinuities in the action.
While some discontinuities in action are expected and understood,
some are not. When edits end up being confusing or unsettling,
they are called jump cuts.
If you are very observant you'll notice that many films and weekly
television series provide good examples of minor discontinuities in
action. Here are some examples:
• A two-shot of a couple talking on a dock will show their hair blowing gently in the breeze, but in an immediate close-up of one of them, the wind has inexplicably stopped blowing.
• A close-up of an actress may show her laughing, but in an immediate cut to a master shot we see that she is not even smiling.
• A man is walking with his arm around the waist of his girlfriend, but an immediate cut to another angle suddenly shows that his arm is around her shoulder.
These problems are primarily due to shooting in single-camera,
film-style, where a significant period of time can elapse between
scene setups and takes. We'll look at single-camera techniques a
little later.
It would be nice if such discontinuities could always be noticed
during shooting. New scenes could be immediately shot and we
wouldn't need to try to fix things during editing.
Sometimes, however, these problems only become evident when
you later try to cut scenes together. Apart from costly and time-consuming re-shooting, there are some possible solutions.
Bridging Jumps in Action
Let's start with how a jump cut in a major dramatic production
might be handled.
Remember our young woman who was getting ready for a date?
Let's say we see her hang up the phone in the kitchen and then
head out the door to the bathroom. No problem yet.
Let's now assume that after exiting the kitchen (moving left-to-right), the hallway scene has her immediately reaching the
bathroom door from the right. Now she's moving right-to-left.
The audience is left with a question: Why did she instantly seem to
turn a full 180-degrees and start walking in the opposite direction to
get to the bathroom?
The solution to most of these problems is to use the cutaways and
insert shots we discussed earlier.
With this particular continuity problem we could add a quick close-up of someone's hands (which look like those of our actress)
opening a linen closet and taking out a towel. Not only is a bit of
visual variety introduced, but when you cut to her entering the
bathroom we won't be as apt to notice the reversal in action.
If that didn't work, you might consider putting the scene of her in
front of her closet deciding on her clothes before the bathroom shot.
All of these tricks can be used to cover continuity problems.
Bridging Interview Edits
Interviews are almost never run in their
entirety. An audience used to short, pithy
sound bites will quickly get bored by
answers that wander from the topic, are less than eloquent, or that
are... just boring. In interviews you may shoot ten times as much
footage as you end up using.
It's the job of the video editor to cut the footage down—
• without leaving out anything important
• without distorting what was said, and
• without abrupt changes in mood, pacing, energy, or rhetorical direction
Not an easy job.
To start with, cutting a section out of dialogue will normally result in
an abrupt and noticeable jump in the video of the person speaking.
One solution, illustrated here, is to insert a three- or four-second
cutaway shot over the jump in the video.
This assumes, of course, that you've already made a smooth audio
edit between the segments.
These cutaways, which are typically done with an insert edit, are
often reaction shots ("noddies") of the interviewer.
Typically, these cutaway shots are on a separate videotape (a B-roll) as opposed to the videotape used to record the interview
answers (the A-roll). In linear editing this process of recording on
two separate videotapes makes things easier. With nonlinear editing
two tapes aren't necessary. We'll cover that in more detail later.
Editors depend greatly on this supplementary, B-roll footage to
bridge a wide range of editing problems. Therefore, you should
always take the time to tape a variety of B-roll shots on every
interview—insert shots, cutaways, whatever you can get that might
be useful in editing.
Another (and somewhat less than elegant) way of handling the
jump cut associated with editing together nonsequential segments
of an interview is to use an effect such as a dissolve between the
segments. This makes it obvious to an audience that segments have
been cut out, and it smoothes out the "jump" in the resulting jump
cut.
Abrupt Changes in Image Size
Another type of jump cut results from a major, abrupt change in
subject size.
Going from a wide-angle (establishing shot) directly to a close shot
can be too abrupt. An intermediate medium shot is generally
needed to smooth out the transition and orient the audience to the
new area you are concentrating on.
For example, if you cut from the shot on the left
directly to the one on the right (the area
indicated by the red arrow), the audience would have trouble
knowing where the new action was taking place
within the overall scene—or even if it was in the
first scene.
However, if you cut to the medium shot as shown
here before the close shot, the area you are
moving to becomes apparent.
A well-established 1-2-3 shot formula covers this. It starts with
1. a momentary wide shot (also called a master or establishing
shot), then
2. a cut to a medium shot, and then
3. cuts to various close-up shots
To "back out of a scene," if that's necessary, the shot order can be
reversed.
Periodically going back to the wide or establishing shot is often
necessary to remind viewers where everyone and everything is.
This is especially important during or after a talent move. When you
cut back to the wide shot in this way, it's referred to as cutting to a
reestablishing shot.
Although this long-shot-to-medium-shot-to-close-up formula is
somewhat traditional (and it was even mandated at a time when
some Hollywood studios were cranking out one feature film a week
in an assembly line fashion), there will be times when an editor will
see an advantage in another approach.
For example, by starting a scene with an extreme close-up of a
crucial object, you can immediately focus attention on that object.
In a drama that could be a smashed picture frame, a gun, or any crucial subject matter. Once the importance or significance of the object is established, the camera can dolly or zoom back to reveal
the surrounding scene.
Shooting Angles
Another type of jump cut results from cutting from one shot to a
shot that is almost identical.
Not only is it hard to justify a new shot that is not significantly
different, but a cut of this type simply looks like a mistake.
To cover this situation, videographers keep the 30-degree rule in
mind.
According to this rule, a new shot of the same subject matter can
be justified only if you change the camera angle by at least 30
degrees.
Of course, cutting to a significantly different shot—for example,
from a two-shot to a one-shot—would be okay (even at basically the
same angle), because the two shots would be significantly different
to start with.
Related to shooting angles is the issue of on-screen direction. One example of this is illustrated by the following photos.

If the woman were talking to the man on the phone, which angle seems the most logical: her facing the right (first photo), or facing the left (photo on the right)?
We assume that if two people are talking
they will be facing each other—even though
on the telephone this is not necessarily the
case. Although this seems logical when we
look at photos such as these, when you are shooting in the single-camera style and scenes are shot hours or even days apart, these
things are easy to overlook.
Crossing The Line
And, finally, we come to one of the most vexing continuity
problems—crossing the line.
Any time a new camera angle exceeds 180 degrees you will have crossed the line—the action axis—and the action will be reversed. This is hard to fix during editing, although some of the techniques we've outlined can help.
Football fans know that action on the field is reversed when the
director cuts to a camera across the field. For this reason it's never
done in following a play—only later during a replay. And then it's
only justified (generally with an explanation) if that camera position
reveals something that the other cameras couldn't see.
When something is being covered live, this type of reversal of action
is immediately obvious. The problem can be much less obvious
when actors must be shot from different angles during single-camera, film-style production.
Let's say you wanted a close-up of the man at the left of this photo.
If the camera for this shot were placed over the woman's right
shoulder (behind the blue line in the illustration above), this man
would end up looking to our left as he talked to the couple instead
of to our right as shown in the photo. You would have "crossed the
line."
Note, however, that camera positions #1 or #2 in front of the blue
line could be used without reversing the action.
If all close-ups are shot from in front of the blue line, the eye-lines
of each person — the direction and angle each person is looking —
will be consistent with what we saw in the establishing shot.
Occasionally, a director will intentionally violate the 180-degree rule
for dramatic effect. For example, during a riot scene a director
might choose to intentionally cross the line on many shots in order
to communicate confusion and disorientation. That should tell you
something right there about the effect of crossing the line.
Assuming that confusion is not the objective, an editor must always
remember to maintain the audience's perspective as scenes are
shot and edits are made.
Editing Guidelines
Today's nonlinear computer editors are capable of just about any
effect you can envision. Because of this, it's tempting to try to
impress your audience with all the production razzle-dazzle you can
manage.
But, whenever any production technique calls attention to itself,
especially in dramatic productions, you've diverted attention away
from your central message. Video professionals—or maybe we
should say true artisans of the craft—know that production
techniques are best when they are transparent; i.e., when they go unnoticed by the average viewer.
However, in music videos, commercials, and programming themes, we are in an era where production (primarily editing) techniques are being used as a kind of "eye candy" to mesmerize audiences.
It's difficult to predict whether this editing razzle-dazzle will be a
passing infatuation, like some production gimmicks have been, or
whether ostentatious effects will become a standard competitive
feature of TV and audio production. Even though the traditional
rules of editing seem to be regularly transgressed in commercials,
music videos, etc., the more substantive productions—the ones that
often win awards for editing—seem to generally adhere to some
accepted editing guidelines. As in the case of the guidelines for good
composition, we are not referring to them as rules.
As you will see, many of these guidelines apply primarily to single-camera, dramatic, film-style production.
Guideline # 1: Edits work best when they are motivated. In making
any cut or transition from one shot to another there is a risk of
breaking audience concentration and subtly pulling attention away
from the story or subject matter. When cuts or transitions are
motivated by production content, they are more apt to go
unnoticed.
For example, if someone glances to one side during a dramatic
scene, we can use that as motivation to cut to whatever has caught
the actor's attention. You may recall the example we gave in a
previous chapter: when one person stops talking and another starts, that provides the motivation to make a cut from one person to the other. If we hear a door open, or someone calls out
from off-camera, we generally expect to see a shot of whoever it is.
If someone picks up a strange object to examine it, it's natural to
cut to an insert shot of the object.
Guideline # 2: Whenever possible cut on subject movement. If cuts
are prompted by action, that action will divert attention from the
cut, making the transition more fluid. Small jump cuts are also less
noticeable because viewers are caught up in the action.
If a man is getting out of a chair, for example, you can cut at the
midpoint in the action. In this case some of the action will be
included in both shots. In cutting, keep the 30-degree rule in mind
that we previously discussed.
Maintaining Consistency in Action and Detail
Editing for single-camera production requires great attention to
detail. Directors will generally give the editor several takes of each
scene. Not only should the relative position of feet or hands, etc., in
both shots match, but also the general energy level of voices and
gestures.
You will also need to make sure nothing has changed in the scene
— hair, clothing, the placement of props, etc. — and that the talent
is doing the same thing in exactly the same way in each shot.
Note in the photos below that if we cut from the close-up of the woman talking to the four-shot, the position of her face changes. This represents a clear continuity problem—made all the more apparent because our eyes would be focused on her.

Part of the art of acting is to maintain absolute consistency in the takes. This
means that during each take talent must remember to synchronize
moves and gestures with specific words in the dialogue. Otherwise,
it will be difficult, if not impossible, to cut directly between these
takes during editing.
It's the Continuity Director's job to see not only that the actor's
clothes, jewelry, hair, make-up, etc., remain consistent between
takes, but also that props (movable objects on the set) remain
consistent.
It's easy for an object on the set to be picked up at the end of one
scene or take and then be put down in a different place before the
camera rolls on the next take. When the scenes are then edited
together, the object will then seem to disappear, or instantly jump
from one place to another.
Discounting the fact that you would not want to cut between two
shots that are very similar, do you see any problem in cutting
between the two shots above? Okay, you may have caught the
obvious disappearance of her earrings and a difference in color
balance, but did you notice the change in the position of the hair on
her forehead?
Entering and Exiting the Frame
As an editor, you often must cut from one scene as someone exits
the frame on the right and then cut to another scene as the person
enters another shot from the left.
It's best to cut out of the first scene as the person's eyes pass the
edge of the frame, and then cut to the second scene about six
frames before the person's eyes enter the frame of the next scene.
The time is significant. It takes about a quarter of a second for
viewers' eyes to switch from one side of the frame to the other.
During this time, whatever is taking place on the screen becomes a
bit scrambled, and viewers need a bit of time to refocus on the new
action. Otherwise, the lost interval can create a kind of subtle jump
in the action.
Like a good magician who can take your attention off something
they don't want you to see, an editor can use distractions in the
scene to cover the slight mismatches in action that inevitably arise
in single-camera production. An editor knows that when someone in
a scene is talking, attention is generally focused on the person's
mouth or eyes, and a viewer will tend to miss inconsistencies in
other parts of the scene.
Or, as we've seen, scenes can be added to divert attention.
Remember the role insert shots and cutaways can play in covering
jump cuts.
Guideline # 3: Keep in Mind the Strengths and Limitations of the
Medium. Remember:
Television is a close-up medium.
An editor must remember that a significant amount of picture detail
is lost in the 525- and 625-line broadcast television systems. The
only way to show needed details is through close-ups.
Except for establishing shots designed to momentarily orient the
audience to subject placement, the director and the editor should
emphasize medium shots and close-ups.
There are some things to keep in mind in this regard. Close-ups on
individuals are appropriate for interviews and dramas, but not as
appropriate for light comedy. In comedy the use of medium shots
keeps the mood light. You normally don't want to pull the audience
into the actors' thoughts and emotions.
In contrast, in interviews and dramatic productions it's generally
desirable to use close-ups to zero-in on a subject's reactions and
provide clues to the person's general character. In dramatic
productions a director often wants to communicate something of
what's going on within the mind of an actor. In each of these
instances, the judicious and revealing use of close-ups can be
important.
Editing Guidelines
In this module we'll cover the final editing guidelines.
Guideline number 4: Cut away from the scene the moment the
visual statement is made.
First, a personal observation. From the perspective of having taught
video production for more than two decades I can say that more
than 90% of the videos I see from students are too long. Most could
have been vastly improved by being edited down—often by at least
50%.
When I tell students this they seem skeptical until I show them
sample scenes from commercials, dramatic productions, news
segments, and resume reels from noted professionals.
If you ask someone if he or she enjoyed a movie and they reply,
"Well, it was kind of slow," that will probably be a movie you will
avoid. "Slow moving" connotes boring. In today's fast-paced and
highly competitive film and television fields that's one thing you
don't want to be if you want to stay in the business.
The pace of a production rests largely with the editing, although the
best editing in the world won't save bad acting or a script that is
boring to start with.
So how long should scenes be?
First, keep in mind that audience interest quickly wanes once the
essential visual information is conveyed. Shots with new information
stimulate viewer interest.
In this regard there are some additional points that should be
considered.
New vs. Familiar Subject Matter
Shot length is in part dictated by the
complexity and familiarity of the subject
matter.
How long does it take for a viewer to see the key elements
in a scene? Can they be grasped in a second (take a look at some
contemporary commercials), or does the subject matter require
time to study?
You wouldn't need a 15-second shot of the Statue of Liberty,
because we've all seen it many times. A one or two-second shot
would be all you would need to remind viewers of the symbolism
(unless, of course you were pointing out specific areas of damage,
restoration, or whatever).
On the other hand, we wouldn't appreciate a one or two second
shot of a little green Martian who just stepped out of a flying saucer
on the White House Lawn. Those of us who haven't seen these
space creatures would want plenty of (scene) time to see what one
really looks like.
In an earlier module we mentioned montage editing. With this
technique shots may be only a fraction of a second (10-15 video
frames) long. Obviously, this is not enough time even to begin to
see all of the elements in the scene. The idea in this case is simply
to communicate general impressions, not details. Commercials often
use this technique to communicate such things as excitement or
"good times."
Next, cutting rate depends on the nature of the production content.
For example, tranquil pastoral scenes imply longer shots than
scenes of rush hour in downtown New York. You can increase
production tempo by making quick cuts during rapid action.
Varying Tempo Through Editing
A constant fast pace will tire an audience; a constant slow pace will
induce them to look for something more engaging on another
channel.
If the content of the production doesn't have natural swings in
tempo, the editor, with possible help from music, should edit
segments together to create changes in pace. This is one of the
reasons that editors like parallel stories in a dramatic production—
pace and content can be varied by cutting back and forth between
stories.

How you start a production is critical, especially in
commercial television. If you start out slow (and boring), your
audience will probably go elsewhere. Remember, it's during these
opening seconds that viewers are most tempted to "channel hop"
and see what else is on.
Because the very beginning is so important, TV programs often
show the most dramatic highlights of the night's program right at
the start. To hold an audience through commercials, newscasts
regularly "tease" upcoming stories just before commercial breaks.
So, try to start out with segments that are strong—segments that
will "hook" your audience. But, once you have their attention, you
have to hold onto it. If the action or content peaks too soon and the
rest of the production goes down hill, you may also lose your
audience.
It's often best to open with a strong audio or video statement and
then fill in needed information as you go along. In the process, try
to gradually build interest until it peaks at the end. A strong ending
will leave the audience with positive feelings about the program or
video segment.
To test their productions, directors sometimes use special preview
sessions to try out their productions on general audiences. They will
then watch an audience's reaction throughout a production to see if
and exactly where attention drifts.
Guideline number 5: Emphasize the B-Roll. Howard Hawks, an
eminent American filmmaker, said: "A great movie is made with
cutaways and inserts." In video production these commonly go
under the heading of "B-roll footage."
In a dramatic production the B-roll might consist of relevant details
(insert shots and cutaway shots) that add interest and information.
One valuable type of cutaway, especially in dramatic productions, is
the reaction shot—close-ups showing how others are responding to
what's going on.

Sometimes this is more telling than holding a shot of the person speaking. For example, would you rather see a shot of the reporter
or the person being interviewed when the question is asked: "Is it
true that you were caught embezzling a million dollars?"
By using strong supplementary footage, the amount of information
conveyed in a given interval increases. More information in a
shorter time results in an apparent increase in production tempo.
The A-roll in news and interviews typically consists of a rather static
looking "talking head." In this case the B-roll would consist of
scenes that support, accentuate, or in some way visually elaborate
on what's being said.
For example, in doing an interview with an inventor who has just
perfected a perpetual-motion machine, we would expect to see his
creation in as much detail as possible, and maybe even the
workshop where it was built. Given the
shortage of perpetual motion machines, this
B-roll footage would be more important to
see than the A-roll (talking head) interview
footage.
Guideline number 6: The final editing guideline is: If in doubt, leave it out.

If you don't think that a scene adds needed information, leave it out. By including it, you will probably slow down story development and maybe even blur the focus of the production and sidetrack the central message.
For example, a TV evangelist paid tens of
thousands of dollars to buy network time. He
tried to make his message as engrossing,
dramatic, and inspiring as possible.
But, during the message the director saw fit to cut away to shots of
cute, fidgety kids, couples holding hands, and other "interesting"
things going on in the audience.
So, instead of being caught up in the message, members of the TV
audience were commenting on, or at least thinking about, "that
darling little girl on her father's shoulders," or whatever. There may
have been a time and place for this cutaway, but it was not in the
middle of the evangelist's most dramatic and inspiring passages.
So, unless an insert shot, cutaway, or segment adds something
significant to your central message, leave it out!
Five Rules for Editing News Pieces
A recent study done at Columbia University and published in the Journal of Broadcasting & Electronic Media analyzed the editing of news pieces. It was found that if a set of post-production rules is used, viewers will be able to remember more information. In addition, the rules make the stories more compelling to watch.
Although the rules centered on news pieces, many of the principles
apply to other types of production. The rules are condensed and
paraphrased below.
1. Select stories and content that will elicit an emotional reaction in
viewers.
2. If the piece has complex subject matter, buck the rapid-fire trend and make sure neither the audio nor the video is paced too quickly.
3. Try to make the audio and video of equal complexity. However, if
the video is naturally complex, keep the audio simple to allow the
video to be processed.
4. Don't introduce important facts just before strong negative visual
elements; put them afterwards to give the audience a better chance
to remember them.
5. Edit the piece using a strong "beginning, middle and end"
structure, keeping elements as concrete as possible.
Linear and Nonlinear Editing
Before we get to the use of linear and nonlinear editors, we need to
distinguish between....
Dedicated and Software-Based Editors
A dedicated editor is designed to do only one thing: video editing.
Dedicated editing equipment was the norm until desktop computer
editors started to become available in the late 1980s.
Software-based editors use modified desktop computers as a base.
Videotape editing is just one of the tasks they can perform; it all
depends on the software you load.
It was in the early 1990s that sophisticated video editing hardware
and software became available for desktop computers. The Video
Toaster system for the Amiga computer was the first widely used
system. The basic screen for that system is shown here. The
Toaster was both a video switcher and an editing system.
Throughout the 1990s many manufacturers introduced computer-based editing systems for the Apple and Windows standards. Today's 3.0GHz laptop PCs can rival or exceed the capabilities of many of the dedicated editing systems.
Although it used to be said that linear editor systems were faster
than the nonlinear systems, this is no longer true, at least for the
latest generation of nonlinear editors. If the need arises, you can
even save time on some editors by bypassing the editing clip bins
and dragging the digital clips directly to an editing timeline, thus,
doing a rough assemble edit as the clips are imported into the
editor.
With sophisticated editing systems there are a variety of video filters
and plug-ins (software additions that go beyond the effects included
with the original program) that can be applied as you go along:
blur, color corrections, cropping, sharpening, fog effects, geometric
distortions, etc.
Some editing systems can apply a type of image stabilization to
shaky images by locking onto a central element in a scene and (to a
point) keeping it from moving, thus canceling out moderate camera
shake.
The most sophisticated editing systems today need to display
considerable editing information, and it's common to display this
information on two video monitors as shown here.
The computer cursor simply moves from one window to the other as
you move the mouse.
Laptop Editing Systems
There are two types of laptop editing systems: dedicated and computer-based.

An example of a dedicated system is this Panasonic field editing unit, primarily used in news work. Note that the controls are designed exclusively for video and audio editing.
With computer-based systems, such as the one below, you have the advantage of a wide variety of "off the shelf"
laptop computers, plus the software can be readily switched and
upgraded. In addition to editing, computer-based systems can
accommodate programs used to write new scripts.
Computer-based editing used to be confined to especially modified
(souped up) desktop computers; but, as we've noted, in recent
years laptop computers have become so powerful in terms of
processor speed, memory, and hard disk capacity, that they can do
most anything desktop systems can.
These computers use a FireWire (also called IEEE 1394 or i.Link) cable
connection to download the video from
the camcorder to the computer's hard
drive. Since video information takes up
a lot of digital space, you need a
computer with a large hard-drive
capacity.
One of the best ways to learn how a
nonlinear editor works is simply to play
with one for several hours. One
popular nonlinear editing program, Adobe Premiere, is available on
the Internet for download. This demo version—if it's still available
when you read this—does everything but save files. If you are
interested, go to Adobe Premiere in your web browser.
Editing programs such as the popular Adobe Premiere are
expensive. More in line with individual budgets are programs such
as Pinnacle Studio Version 8, which sells for under $100. (This
program was voted #1 by PC Magazine in April 2002. Coming in a
close second is Ulead's Video Studio 6, which also sells for under
$100.) Another highly-rated editing program under $100 is
Broderbund MovieShop Deluxe.
In late 2002, another major computer magazine voted Video
Explosion Deluxe for Windows and Final Cut Pro for the Mac as the
best video editing programs. Video Explosion is under $80 and Final
Cut Pro runs about $1,000, although a "slimmed-down," less
expensive version is available. In 2004 about 70% of feature films
were being edited on Windows based editors, and about 30% on
Macs. Most of the latter used Apple's Final Cut Pro for the Mac.
Avid, which has one of the most widely-used and respected
professional editing systems, announced early in 2003 that they
would provide a free, "lite" version of their editing program to
serious users who request the program at their web site. The free
program handles two video tracks, four audio tracks, basic trimming
and editing functionality, and up to two streams of real-time effects.
The idea is to get new users familiar with the program, in hopes
that they will later move up to their full version.
Far less sophisticated than any of these programs are the Windows
and Apple video editors that come as a part of the respective
operating systems. Most of these programs have built-in tutorials
that take you through the editing process.
Although a sophisticated nonlinear (random access) editing system
may take a while to learn, once you figure one out, you can readily
transfer the skills to other editing programs.
With nonlinear editing the video and audio segments are not
permanently recorded as you go along as they are in linear editing.
The edit decision sequence exists in computer memory as a series
of internal digital markers that tell the computer where to look for
segments on the hard disk.
This means that at any point you can instantly check your work and
make adjustments. It also means that you can easily (and maybe
endlessly!) experiment with audio and video possibilities to see how
you like them.
Once you finalize things, you will want to save your EDL (edit
decision list) on a computer disk. This will save you from having to
start from scratch if you later need to make revisions.
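To make this more concrete, here is a rough sketch—illustrative only, and not the actual project or EDL file format of any editor (real interchange formats such as CMX 3600 differ)—of what an edit decision list boils down to: an ordered list of pointers into the source footage, each with in- and out-point time-code.

    # Conceptual sketch of an edit decision list (illustration only; the clip
    # names and time-codes below are hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Edit:
        source: str   # source reel or media file on the hard disk
        tc_in: str    # in-point time-code within that source
        tc_out: str   # out-point time-code within that source

    # The program is simply these edits played in order; nothing is re-recorded
    # until the sequence is "printed" to tape or rendered to a file.
    edit_decision_list = [
        Edit("A-roll_interview", "00:02:10:00", "00:02:18:15"),
        Edit("B-roll_cutaway",   "00:00:31:05", "00:00:34:05"),
        Edit("A-roll_interview", "00:05:02:20", "00:05:12:00"),
    ]

    for number, edit in enumerate(edit_decision_list, start=1):
        print(number, edit.source, edit.tc_in, edit.tc_out)

Because nothing is re-recorded until the sequence is finalized, revising the program is largely a matter of reordering or trimming entries in a list like this.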
The final edited video and audio output can be handled in two ways.
It can be "printed" (transferred) in final, linear form to a videotape,
as we've noted, or it can remain on computer disk to be recalled
and modified as needed. The latter approach, which is often used
for segments in newscasts, requires high-capacity storage devices
such as....
Video Servers
Video and audio segments take up a great amount of digital storage
space in computers.
Instead of trying to replicate the needed storage in each desktop
computer, many facilities have gone to a centralized mass storage
device called a video server, sometimes called a media server. An
example is shown in this photo. These were introduced in an earlier
module. Even the editing software can be run from a server, rather
than having to take up disk space in a desktop or laptop computer.
A centralized video server not only gives all of the computer editing
stations the advantage of having access to large amounts of
storage, but it means that segments can be reviewed, edited, or
played back from any of the editing workstations (desktop
computers equipped with a network connection) within the facility.
As high-speed, broadband modems become commonplace, this also
means that you will be able to link to a media server from any
location—even your home—and edit and reedit pieces.
Making Use of Time-Code
Although we've mentioned time-code previously, we now need to
more fully explore its role in the editing process.
SMPTE/EBU time-code (or just "time-code") is an eight-digit code
that allows you to specify, among other things, precise video and
audio editing points.
A designated time-code point (set of numbers) cannot vary from
one editing session to another, from one machine to another, or
even from one country to another.
Editing instructions like, "cut the scene when Whitney smiles at the
camera," leave room for interpretation—especially if Whitney tends
to smile a lot. There is the very real possibility that different takes
of the same scene will become confused.
But even though a tape may be four hours long, "00:01:16:12"
refers to one very precise point within that total time.
Breaking the Code
Although a string of eight numbers like 02:54:48:17 might seem
imposing, their meaning is very simple: 2 hours, 54 minutes, 48
seconds and 17 frames.
Since time-code numbers move from right to left when they are
entered into an edit controller, you must enter hours, minutes,
seconds and frames, in that order.
If you enter only six numbers instead of eight, the machine assumes "00 hours," since the numbers entered would only (as they move from right to left) reach the minutes designation.
If there is anything tricky about time-code, it's the fact that you
don't add and subtract from a base of ten the way you do with most
math problems.
The first two numbers are based on 24 hours (military time). The
minutes and seconds range from 00 to 59, just the way they do on
any clock, and the frames go from 00 to 29. (Recall there are 30
frames per second in NTSC video. The PAL and SECAM systems use
25 as a base.)
Thirty frames, like 5/5 of a mile, would be impossible to display,
because 30 frames in NTSC equal one second. Likewise, "60
minutes" would be impossible in time-code (but not necessarily
impossible on CBS).
So, as an example, the next frame after 04 hours, 59 minutes, 59
seconds and 29 frames would change the counter to: 05:00:00:00.
Now let's look at some problems. If one video segment is 8
seconds, 20 frames long, and a second segment is 6 seconds, 19
frames long, the total time of the two segments added together
would be 15:09.
8 seconds, 20 frames, plus
6 seconds, 19 frames
= 15:09
Note in this example that as we add the total number of frames we
end up with 39. But, since there can be only 30 frames in a second,
we add one second to the seconds' column and we end up with 9
frames. (39 minus 30 = 09 frames). Adding 9 seconds (8 plus the 1
we carried over) and 6 gives us 15 seconds, for a total of 15:09.
Let's look at this question. If the time-code point for entering a
video segment is 01:22:38:25, and the out-point is 01:24:45:10,
what is the total time of the segment?
segment out-point - 01:24:45:10
segment in-point - 01:22:38:25
= total segment time - 00:02:06:15
Getting the answer is just a matter of subtracting the smaller
numbers from the larger numbers.
Note that since we can't subtract 25 frames from 10 frames we
have to change the 10 to 40 by borrowing a second from the 45.
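If you'd rather let software do the borrowing and carrying, the arithmetic is easy to automate. Here is a minimal sketch in Python—an illustration only, not code from any particular editing system—that treats non-drop-frame NTSC time-code as a simple frame count and reproduces the examples above:

    # Non-drop-frame NTSC time-code arithmetic at 30 frames per second.
    FPS = 30

    def to_frames(tc: str) -> int:
        """Convert 'HH:MM:SS:FF' to a total frame count."""
        hh, mm, ss, ff = (int(part) for part in tc.split(":"))
        return ((hh * 60 + mm) * 60 + ss) * FPS + ff

    def to_timecode(frames: int) -> str:
        """Convert a total frame count back to 'HH:MM:SS:FF'."""
        ff = frames % FPS
        ss = (frames // FPS) % 60
        mm = (frames // (FPS * 60)) % 60
        hh = frames // (FPS * 3600)
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    # One frame past 04:59:59:29 rolls over to 05:00:00:00.
    print(to_timecode(to_frames("04:59:59:29") + 1))

    # 8 seconds 20 frames plus 6 seconds 19 frames equals 15 seconds 9 frames.
    print(to_timecode(to_frames("00:00:08:20") + to_frames("00:00:06:19")))

    # Segment length: out-point minus in-point gives 00:02:06:15.
    print(to_timecode(to_frames("01:24:45:10") - to_frames("01:22:38:25")))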
For people who regularly do time-code calculations, computer
programs and small hand held calculators are available. A
computer pop-up time-code calculator called WTCC is available for
free downloading on the Internet.
Drop-Frame Time-code
Basic SMPTE/EBU time-code assumes a frame rate of 30 frames per second in NTSC video. Although that's a nice even number, it
actually only applies to black and white television. For technical
reasons, when the NTSC color television was introduced, a frame
rate of 29.97 frames per second was adopted. This frame rate is
also used in the U.S. version of DTV/HDTV.
Although the difference between 30 and 29.97 may seem
insignificant, in some applications it can result in significant timing
problems. If you assume a rate of 30 frames per second instead of
29.97, you end up with a 3.6-second error every 60 minutes.
Since broadcasting is a to-the-second business, a way had to be
devised to correct this error. Just lopping off 3.6 seconds at the end
of every hour was not seen as a practical way of doing this—
especially from the viewpoint of a sponsor that gets the end of a
commercial cut off as a result.
The Solution
So how do you fix this error? Well, 3.6 seconds equals an extra 108
video frames each hour (3.6 times 30 frames per second). So, to
maintain accuracy, 108 frames must be dropped each hour, and
done in a way that will minimize confusion. Unfortunately, we're not
dealing with nice even numbers here.
First, it was decided that the 108-frame correction had to be equally
distributed throughout the hour. (Better to lose a bit here and there
instead of everything all at once.)
If you dropped 2 frames per minute, you would end up dropping
120 frames per hour instead of 108. That's nice and neat, but it's
12 frames too many. But, since you can't drop half frames, this is
as close as you can get by making a consistent correction every
minute.
So what to do with the 12 extra frames? The solution is every 10th
minute not to drop the 2 frames.
In one hour that equals 12 frames, since there are six ten-minute
intervals in an hour. So, using this approach you end up dropping
108 frames every hour—exactly what you need to get rid of.
Since the frame dropping occurs right at the changeover point from
one minute to the next, you'll see the time-code counter on an
editor suddenly jump over the dropped frames every time the
correction is made.
For example, when you reach 01:07:59:29, the next frame would
be 01:08:00:02. In drop-frame time-code frames 00 and 01 don't
exist.
Maybe this is not the most elegant solution in the world, but it
works, and now it should be obvious why it's called drop-frame
time-code.
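For the curious, here is an illustrative sketch—not code from any real editor—of how that numbering works: frame labels 00 and 01 are skipped at the start of every minute except each tenth minute, so exactly 108 labels disappear per hour. It reproduces the jump described above.

    # Label the Nth frame of 29.97 fps material with drop-frame time-code.
    # (Many systems display a semicolon before the frames to flag drop-frame;
    # colons are used here to match the examples in the text.)
    def drop_frame_label(frame: int) -> str:
        labels_per_min = 60 * 30 - 2              # a normal minute spans 1798 labels
        labels_per_10min = 10 * 60 * 30 - 9 * 2   # each 10-minute block spans 17982 labels

        blocks, rem = divmod(frame, labels_per_10min)
        if rem < 60 * 30:
            minutes = 0                           # first minute of the block: nothing dropped
        else:
            minutes = 1 + (rem - 60 * 30) // labels_per_min
        frame += 2 * (blocks * 9 + minutes)       # add back the skipped label numbers

        ff = frame % 30
        ss = (frame // 30) % 60
        mm = (frame // 1800) % 60
        hh = frame // 108000
        return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

    print(drop_frame_label(122277))   # 01:07:59:29
    print(drop_frame_label(122278))   # 01:08:00:02 -- labels 00 and 01 are skipped
    print(drop_frame_label(107892))   # 01:00:00:00 -- 29.97 x 3600 frames reads exactly one hour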
For non-critical applications, such as news segments, industrial
television productions, etc., drop-frame isn't needed. However, if
you are involved with producing 15-minute or longer programs for
broadcast, you should use an editor with drop-frame capability.
On most edit controllers you will find a switch that lets you select
either a drop-frame or non-drop frame mode. Software programs
typically have a drop-down box where you can select the approach
you want.
When you use the drop-frame mode, a signal is added to the
SMPTE/EBU time-coded video that automatically lets the equipment
know that drop-frame is being used.
User Bits
There is room within the time-code signal for a bit of extra data.
Specifically, there are 32 bits of space called user bits where a small
bit of information can be entered. This can be used to trigger
equipment, or to simply to record abbreviated user data, such as
reel numbers, recording dates, camera information, etc.
User bits are limited to four letters or eight digits—which,
admittedly, isn't much in the way of data. Not all equipment will
record or read user bits, so you need to check out this feature if you
plan to use it.
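The arithmetic behind that limit is straightforward: eight decimal digits stored as 4-bit values take 8 × 4 = 32 bits, while four characters at 8 bits each take 4 × 8 = 32 bits. Here is a purely illustrative sketch of such packing (the example values are hypothetical):

    # Illustration only: why 32 user bits hold eight digits or four characters.

    def pack_digits(digits: str) -> int:
        """Pack eight decimal digits into 32 bits, four bits per digit."""
        assert len(digits) == 8 and digits.isdigit()
        value = 0
        for d in digits:
            value = (value << 4) | int(d)
        return value

    def pack_letters(letters: str) -> int:
        """Pack four characters into 32 bits, eight bits per character."""
        assert len(letters) == 4
        value = 0
        for ch in letters:
            value = (value << 8) | ord(ch)
        return value

    print(hex(pack_digits("04151999")))   # e.g., a recording date
    print(hex(pack_letters("REEL")))      # e.g., an abbreviated reel label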
Adding Time-code
Time-code is not an inherent part of the video signal; it must be
recorded on the videotape as the production is being shot—or, in
some cases, when the tape is being reviewed. The same goes for
user bits, which are a part of time-code.
These time-code numbers can be used to organize and specify
segments on a videotape or hard drive, as well as to calculate total
times. Editing equipment will also use the numbers to correctly cue
and edit the segments you've selected—and, if needed, to recall the
segments at a later time.
Approaches to Recording Time-code
Time-code can be recorded on videotape in two ways: as part of an
audio signal, or as part of the video signal.
Longitudinal Time-Code (LTC)
Time-code consists of 2,400 bits of
information per second in the NTSC
system and 2,000 bits of information
in the PAL and SECAM systems.
Although it's digital information, it can still be recorded on audio track #2, an address (time-code) track, or a cue track.
When it's recorded in this way it's referred to as longitudinal time-code (LTC).
The digital pulses are converted into an audio signal, much like a
modem converts a computer's digital pulses into sound for
transmission over telephone lines.
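Notice that both rates work out to the same amount of time-code data per frame—80 bits—which is an easy way to remember them:

    # Quick check using the figures above: 80 bits of time-code per frame in both systems.
    print(2400 / 30)   # NTSC, 30 frames per second
    print(2000 / 25)   # PAL/SECAM, 25 frames per second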
Although the longitudinal system of time-code has been greatly
improved in recent years and it's the easiest to record, it can have
three major weaknesses.
First, it can only be reliably read off the tape on most VCRs when
the tape is moving. This can be a problem when you are constantly
starting and stopping the tape during the editing process.
Second, when a videotape is copied, the audio track can suffer a
loss of signal quality that will make the high speed digital pulses
unreliable—especially when the videotape is shuttled back and forth
at various speeds during editing.
To solve this problem a jam sync process is required to regenerate
the longitudinal time-code as a new copy of the tape is being
dubbed (copied).
Finally, if the time-code is recorded at too high a level, it can cause
crosstalk, which results in an obnoxious "whine" in the program
audio.
Longitudinal time-code requires a VTR with well-aligned recording
heads, excellent audio capabilities (wide-band amplifiers and
broadband reproduce heads), and well-aligned tape guides, not to
mention a near-perfect time-code signal to start with. Otherwise,
when the tape is shuttled at high speeds, the machine will lose
count of the pulses.
VITC Time-Code
VITC (vertical-interval time-code) and other systems that record
time-code with the video heads have many advantages over the
longitudinal system.
Since VITC is recorded by the video heads, it's less likely to be
affected by the variety of technical problems that can plague LTC.
VITC time-code can also always be read, even when the tape is in
pause mode.
VITC is recorded in the area of scanning lines 10 and 20 in the
so-called vertical blanking interval of each video field. This means
that the signal is recorded four times per frame, a bit of redundancy
that lowers the chance of errors caused by dropouts and the like.
And, if that isn't enough, there is built-in error checking (a CRC
error-checking code) that ends up making the process pretty much
fail-safe.
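The idea behind that error checking is simple: a checksum is recorded along with the time-code bits, and the reader recomputes it to catch data that was mangled on playback. The sketch below uses a generic 8-bit CRC purely to show the mechanism; the polynomial and the byte layout are placeholder assumptions, not the exact values the VITC standard specifies.

# Generic 8-bit CRC, shown only to illustrate how error checking catches
# corrupted time-code data. The polynomial (0x07) is a placeholder, not
# necessarily the one the VITC standard uses.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

recorded = bytes([0x01, 0x08, 0x00, 0x02])   # pretend these are time-code bytes
checksum = crc8(recorded)

corrupted = bytes([0x01, 0x08, 0x10, 0x02])  # a dropout flips one bit
print(crc8(recorded) == checksum)            # -> True: the data reads back cleanly
print(crc8(corrupted) == checksum)           # -> False: the error is caught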
When VITC time-code is used, it should be recorded as the video is
shot. Otherwise, you will have to copy the entire tape over in order
to add the time-code. You will lose video quality in this process
unless you are doing the copying from uncompressed digital video.
Jam-Sync in VITC Editing
Of course, during the editing process you are bringing together a
variety of segments, all of which have different time-codes. If these
were all just transferred onto the edited master, you would end up
with a real hodgepodge of time-codes on your final tape.
The solution is jam sync, where the time-code is regenerated as
part of the editing process. The time-code generator will take its
cue from the first segment, and then resynchronize the code with
each edit so that the edited master contains a smooth time-code
progression. In the words of one editor, "Jam-sync with VITC isn't
optional, it's mandatory."
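Conceptually, the regeneration amounts to relabeling: the master's code starts from the first segment's starting value and simply keeps counting forward across every edit. The sketch below mimics that relabeling for non-drop-frame code at 30 fps; the segment time-codes are invented, and this is an illustration of the idea rather than of any particular edit controller.

# What jam-sync accomplishes: segments arrive with unrelated time-codes,
# and the edited master gets one continuous count that starts from the
# first segment's starting code. Non-drop-frame code at 30 fps assumed.

FPS = 30

def to_frames(tc):
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def to_timecode(frames):
    return (f"{frames // (FPS * 3600):02d}:{frames // (FPS * 60) % 60:02d}:"
            f"{frames // FPS % 60:02d}:{frames % FPS:02d}")

# (in point, out point) pairs from three different source tapes:
segments = [("02:10:00:00", "02:10:05:00"),
            ("00:44:12:10", "00:44:13:10"),
            ("07:01:30:00", "07:01:45:00")]

master = to_frames(segments[0][0])        # take the cue from the first segment
for seg_in, seg_out in segments:
    length = to_frames(seg_out) - to_frames(seg_in)
    print(f"{seg_in} - {seg_out} lands at {to_timecode(master)} on the master")
    master += length                      # the count just keeps rolling forward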
Two last caveats.
First, with any video segment involving time-code, you need
enough time-coded leader preceding the actual segment you want
to use so that the equipment has an opportunity to "pick up the
count." Systems vary, but some editors suggest up to 20 seconds.
This isn't a lot when the editing machine is speeding backwards to
find a time-code number to cue up a segment.
Second, time-code numbers that reverse in time can really confuse
an editing system. If you stop a recording session, make sure that
when the time-code resumes it continues forward in time. Code that
jumps backward can easily end up duplicating numbers you recorded
earlier, which means that during editing the system may cue up the
wrong segment and refuse to find the right one for you.
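If you suspect a tape already has code that doubles back on itself, a quick scan of the logged numbers will find the spot. A minimal sketch, assuming the codes have already been read off as "HH:MM:SS:FF" strings and assuming non-drop-frame counting at 30 fps:

# Scan a list of recorded time-codes for backward jumps (duplicated or
# out-of-order numbers), which can confuse an editing system.

FPS = 30

def to_frames(tc):
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

codes = ["01:00:10:00", "01:00:10:01", "01:00:05:00", "01:00:05:01"]  # invented example

for previous, current in zip(codes, codes[1:]):
    if to_frames(current) <= to_frames(previous):
        print(f"time-code jumps backward: {previous} is followed by {current}")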
Time-Code Display
Some editing systems have small time-code displays on the top of
the edit controller, as shown here. More sophisticated editing
systems superimpose the time-code numbers over the video itself,
as we see below. In the latter case the time-code numbers may be
either temporarily superimposed over the video (keyed-in code), or
they may become a permanent part of the picture (burned-in
time-code).
In the case of keyed-in code, an electronic device reads the digital
time-code information from the tape and generates the numbers to
be temporarily keyed (superimposed) over the video. The
disadvantage of this approach is that you can only see the code if
you are using special equipment, such as an appropriate editing
system.
Once the time-code numbers have been burned in (permanently
superimposed into the video), the video and time-code can be
viewed on any VCR. Although this approach requires making a
special copy of the tape, it can be an advantage if you want to use
a standard VCR to review tapes at home or on location and make
notes on the segments.
Reviewing a tape in this way and making what is called an initial
paper-and-pencil edit can save a great deal of time later on.
On-Line and Off-Line Editing
First, some definitions.
The basic goal of off-line editing is to create a list of edit decisions.
Before all-digital and tapeless camcorders, this involved using a
copy of the original videotape footage in order to protect the
original videotape from damage during the often arduous process of
making edit decisions.
Off-line editing involves reviewing footage and compiling a list of
time-code numbers that specify the "in" and "out" points of each
needed scene. In this phase a rough cut (an initial rough version
without special effects, color corrections, etc.) is assembled, which
can be shown to a director, producer, or sponsor for approval.
Typically, at this point a number of changes will be made.
In on-line editing the goal is to use the original footage to create
the final edited version of a program, complete with audio and video
effects, color correction, etc.
Since this process can be rather expensive if full-time engineers and
costly, high-quality on-line equipment are involved, an off-line
phase will reduce editing expenses and allow time for greater
experimentation. An important part of the creative process is trying
out many possibilities with video, music, and effects. Hours can be
spent on just a few minutes, or even a few seconds, of a
production.
When time is limited, such as in preparing a news segment, you
probably can't afford the luxury of an off-line phase.
A laptop computer equipped with one of the many available editing
programs can assemble a basic news segment. We'll talk more
about this later.
Digital Editing With A Video Server
Once video editing becomes totally digital with equipment that can
handle video with minimal compression, there will be no need for
the traditional on-line and off-line editing phases. Digital recordings
can be made in the studio or on location and transferred directly
into a video server for editing. There will be no danger of tape
damage in editors, no matter how many times the footage is
previewed. (Digital information stored on a computer disk does not
gradually degrade with repeated access the way it can when it's
recorded on videotape.)
When a video server is used, the original footage can be viewed and
edited by anyone with a video link to the server. This is generally
someone within the production facility; but, thanks to high-speed
Internet connections, it could even be someone in another city—or
even in another country. In the case of animation and special
effects, which are labor intensive, projects are often electronically
transferred to countries where labor is comparatively cheap.
Creating a Paper-and-Pencil Edit
Regardless of what approach you take to editing, previewing
footage and making a paper-and-pencil edit can save considerable
time.
For one thing, you may not really know what you have—what to
look for and what to reject—until you have a chance to review all of
your footage.
By jotting down your tentative in and out time codes, you will also
be able to add up the time of the segments and get an idea of how
long your production will be. At that point—and assuming you have
to make the project a certain length—you will know if you need to
add or subtract segments. Having to go back and shorten or
lengthen a project after you think you have it completed is not
most people's idea of fun!
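If the in and out points are jotted down as time-code, a short script can total the running time for you. A minimal sketch, again assuming non-drop-frame code at 30 frames per second; the log entries are made up for the example.

# Add up tentative segment times from a paper-and-pencil log.
# Non-drop-frame code at 30 fps assumed; the entries below are invented.

FPS = 30

def to_frames(tc):
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

log = [("00:01:10:00", "00:01:25:15"),   # (in point, out point)
       ("00:04:02:00", "00:04:20:00"),
       ("00:09:55:10", "00:10:12:00")]

total_frames = sum(to_frames(out) - to_frames(inp) for inp, out in log)
minutes, seconds = divmod(total_frames // FPS, 60)
print(f"running time so far: {minutes} min {seconds} sec")   # -> 0 min 50 sec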
A form for a paper-and-pencil EDL, such as the abbreviated one
shown below, will give you an idea of how this data is listed.
Videotape Log

Sequence | Reel # | Start Code | End Code | Scene Description
---------+--------+------------+----------+-------------------
         |        |            |          |
         |        |            |          |
         |        |            |          |
There are also computer programs designed for
logging time-codes and creating EDLs. By using a
mouse, scenes can be moved around on the
screen and assembled in any desired sequence.
The programs can keep track of time-codes and
provide total times at any point.
There are EDL programs and time-code calculators available as
software for PDAs (Personal Digital Assistants), such as the Palm
Pilot shown on the left.
If you are preparing a news script on a computer, the time codes
and scene descriptions can be typed in while you view the scenes on
a monitor.
When you finish logging all of the scenes, you can split the
computer screen horizontally, putting the time code and scene
listings on the top, and your word-processing program for writing
your script on the bottom.
By using a small camcorder and a laptop computer, producers have
been able to create an entire EDL while flying from the East to the
West coasts of the United States.
Once the EDL is created, it can be uploaded from a computer disk
directly into a video server or editor for final editing.
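Edit decision lists are commonly exchanged as simple columnar text. The sketch below writes a couple of logged segments out in a CMX-style layout (event number, reel, track, cut, source in/out, record in/out); treat the exact column spacing as approximate, since real EDL dialects differ in their details, and the reel numbers and time-codes here are invented.

# Write logged segments out as a simple CMX-style EDL. The column layout
# is approximate; real EDL dialects vary, so treat this as a sketch.

FPS = 30

def to_frames(tc):
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def to_timecode(frames):
    return (f"{frames // (FPS * 3600):02d}:{frames // (FPS * 60) % 60:02d}:"
            f"{frames // FPS % 60:02d}:{frames % FPS:02d}")

log = [("001", "00:01:10:00", "00:01:25:15"),   # (reel, source in, source out)
       ("002", "00:04:02:00", "00:04:20:00")]

print("TITLE: SAMPLE CUT")
record = to_frames("01:00:00:00")               # master typically starts at one hour
for event, (reel, src_in, src_out) in enumerate(log, start=1):
    length = to_frames(src_out) - to_frames(src_in)
    rec_in, rec_out = record, record + length
    print(f"{event:03d}  {reel:<8} V     C        "
          f"{src_in} {src_out} {to_timecode(rec_in)} {to_timecode(rec_out)}")
    record = rec_out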
Six Digital Editing Quick Tips
1. With camera in hand you may want to shoot everything that you
think you can possibly use, but when it comes to uploading or
capturing this footage on a computer hard disk, you will want to
restrain yourself. After reviewing the footage and making a rough
paper-and-pencil edit, upload only the footage that you are
reasonably certain you will use. Not only does excess footage take
up valuable hard drive space, but plodding through it during editing
adds considerable time to the editing process.
2. After the footage is uploaded or captured, trim the front and back
ends of the segments to get rid of anything you're not going to use.
This will speed up editing, reduce storage space, and make the clips
easier to identify on the editing screen.
3. Once this is done (#2 above), look for connections between
segments; specifically, how one segment you are considering will
end and another will start. Look for ways to make scenes flow
together without jarring jump cuts in the action, composition, or
technical continuity.
4. Find appropriate cutaways. In addition to enhancing the main
theme or story, they should add visual variety and contribute to the
visual pace.
5. Use transition effects sparingly. Although some editing programs
feature 101 ways to move from one video source to another,
beware! True professionals know that these effects can be
distracting and get in the way of the message, not to mention
looking hokey.
6. Use music creatively and appropriately. Strangely, "silence" in a
video is generally distracting, causing viewers to wonder what's
wrong with the sound. Finding music that supports the video
without calling attention to itself (unless, of course, it's a music
video) can be a formidable task in itself.
If you have a bit of talent in this area, you might consider a
do-it-yourself approach to your music. The Sonic Desktop
SmartSound program, among others, will not only give you full
control of your music, but it will eliminate copyright problems.
Sometimes simple music effects will be all that you will need.
This brings us to the end of the modules on editing. At this point in
this course you should be able to write a production proposal, do a
decent job on a script, plan out a production, shoot on-location
footage, and assemble what you shoot into a logical and coherent
"package." In the next module we'll move into the TV studio where
the production process takes on a number of new dimensions.