Cartoon Animation
Contents
UNIT 1
CHARACTER DEVELOPER
Introduction
Animation
Cartoon
Character Development
Commercial Animation
Steps to Create Animation
Computer Animation
Movies
Amateur Animation
Cartoon Builder
Character and Background Art
Character Rigging
Complex Character Animation
Character Developer
Classical Analysis of Character
Twelve Basic Principles of Animation
Clay Modeling
Summary
Keywords
Review Questions
Further Readings
UNIT 2
VISUALIZATION OF DIFFERENT VIEWS
Introduction
Animation for Visualization
Principles of Animation
Animation in Scientific Visualization
Learning from Cartooning
Downsides of Animation
Animation's Exploration
Types of Animation
GapMinder and Animated Scatterplots
A Taxonomy of Animations
Staging Animations with DynaVis
Animation Pipeline
Summary
Keywords
Review Questions
Further Readings
UNIT 3
HOW TO DRAW EXPRESSIONS
Introduction
How to Draw Cartoon Faces?
How to Draw Cartoon Emotions and Facial Expressions?
Manga Drawing Tutorial
Summary
Keywords
Review Questions
Further Readings
UNIT 4
HOW TO ACHIEVE LIP SYNCHRONIZATION
Introduction
Lip Synchronization
Automatic Lip Synchronization System
Mouth Positions
Good Facial Animation or Lip Sync Animation
Kinds of Cartoons
Summary
Keywords
Review Questions
Further Readings
Unit 1: Character Developer
Unit Structure
Introduction
Animation
Cartoon
Character Development
Commercial Animation
Steps to Create Animation
Computer Animation
Movies
Amateur Animation
Cartoon Builder
Character and Background Art
Character Rigging
Complex Character Animation
Character Developer
Classical Analysis of Character
Twelve Basic Principles of Animation
Clay Modeling
Summary
Keywords
Review Questions
Further Readings
Learning Objectives
At the conclusion of this unit, you will be able to:
Define the concept of animated cartoon
Learn about projections and feature films
Know about the history and development of animation
Learn about the twelve basic principles of animation
Introduction
An animated cartoon is a short, hand-drawn (or made with computers to look similar
to something hand-drawn) film for the cinema, television or computer screen,
featuring some kind of story or plot (even if it is a very short one). This is distinct
from the broader terms "animation" and "animated film," since not every animated film fits this definition.
Although cartoons can use many different types of animation, they all fall under the
traditional animation category.
Animation is the process of drawing and photographing a character – a person, an
animal, or an inanimate object – in successive positions to create lifelike movement.
Animation is both art and craft; it is a process in which the cartoonist, illustrator, fine
artist, screenwriter, musician, camera operator, and motion picture director combine
their skills to create a new breed of artist – the animator. The art of animators is
unique. Animators bring life to their drawings, creating an illusion of spirit and vigor.
They caricature the gestures and expressions in the drawings, give them a fantastic
array of character and personality, and make us believe that the drawings actually
think and have feelings.
This unit helps you to learn how to animate – how to make a series of drawings that
create a sensation of movement when viewed in sequence. The pioneers of the art of
animation learned many lessons, most through trial and error, and it is this body of
knowledge that has established the fundamentals of animation.
Animation
The animation process, however, involves much more than just good drawing.
Animators must have knowledge of the elements of screen writing: plot-setting,
premise, character, conflict, crisis, climax, exposition, dialogue, and action. These
factors determine the types of personalities, expressions, and actions they will need to
create. Moreover, if the animation includes a character moving her mouth in speech
or song, animators need a knowledge of phonetics; if it features a character
responding to a musical soundtrack, they must have a knowledge of music and
rhythm. Animators must also know how an animation camera works and how to time
the character actions to fit the speed of the film. The list goes on and on; the
animators' job is immense.
Animation is a vast and virtually unexplored art form. It is, perhaps, more popular
today than ever before, and new techniques and methods of animating – including
computer animation – are being developed all the time. There are many characters,
styles, and background designs, however, that remain to be discovered – so pick up
your pencil and get started!
Animators must first know how to draw; good drawing is the cornerstone of their
success. They must be able to dramatize and caricature life while they time and stage
their characters' actions and reactions. The value of animators' work is determined by
the ability of their characters to sway the emotions of the audience – in other words, the
characters' "acting" skills. Animators must know how to entertain an audience: how to
present gag comedy and how to portray an interesting or unusual happening of life.
To do this, they study the great film comedians and read recognized texts on acting.
This knowledge helps them to grip their viewers with suspense or make them smile
and laugh with humor in the theater of animation.
Cartoon
A cartoon is any of several forms of illustrations with varied meanings. The term has
evolved from its original meaning from the fine art of the late Middle Ages and
Renaissance, to the more modern meaning of humorous illustrations in magazines
and newspapers, to the contemporary meaning referring to animated programs.
Creativity, even in the very recent past, could not earn a person a monthly salary but
now the scenario is completely different. In the booming market of cartoons and
animations, one can easily satiate his creative spirits and take home a handsome pay
packet. The art of animation, a creative field dominated by the West, now has India
running as one of the most serious contenders. Cartoons are illustrations of a single
diagram with varied meanings, while an animation is a rapid display of 2-D or 3-D
images to create an illusion of motion.
History of Cartoon
Cartoon animation has a long history, dating to at least the second century. Early
examples of attempts to capture the phenomenon of motion into a still drawing can be
found in paleolithic cave paintings, where animals are depicted with multiple legs in
superimposed positions, clearly attempting to convey the perception of motion.
However, it wasn't until the advent of motion-picture technology that the genre was
fully explored. The first cartoon animation was very rough and basic; over time, it
developed into a fully realized expression of storytelling. One of the very first
successful animated cartoons was "Gertie the Dinosaur" (1914) by Winsor McCay. It is
considered the first example of true character animation.
The first animated feature movie was "El Apóstol", an Argentine film made in 1917;
it ran about 70 minutes and is now considered a lost film.
In the 1930s to 1960s, theatrical cartoons were produced in huge numbers, and usually
shown before a feature film in a movie theater. MGM, Disney, Paramount and Warner
Brothers were the largest studios producing these 5 to 10-minute "shorts".
Competition from television drew audiences away from movie theaters in the late
1950s, and the theatrical cartoon began its decline. Today, animated cartoons are
produced mostly for television.
The Walt Disney Co. has been on the forefront of these changes since the early days,
eventually helping to bring computer animation to popularity.
Zoetrope
One of the first known devices used in cartoon animation was the zoetrope, created
by Ting Huan in China around A.D. 180. It featured a convection-powered cylinder
that spun with images on the inside; a viewer looking through slits in the cylinder
saw the illusion of movement.
Projection
The first animated projection (screening) was created in France, by Charles-Émile
Reynaud, who was a French science teacher. Reynaud created the Praxinoscope in
1877 and the Théâtre Optique in December 1888. On 28th October 1892, he projected
the first animation in public, Pauvre Pierrot, at the Musée Grévin in Paris. This film is
also notable as the first known instance of film perforations being used. His films
were not photographed, but drawn directly onto the transparent strip. By 1900, more
than 500,000 people had attended these screenings.
The first photographed animated projection was "Humorous Phases of Funny Faces"
(1906) by newspaper cartoonist J. Stuart Blackton, one of the co-founders of the
Vitagraph Company. In the movie, a cartoonist's line drawings of two faces
were 'animated' (or came to life) on a blackboard. The two faces smiled and winked,
and the cigar-smoking man blew smoke in the lady's face; also, a circus clown led a
small dog to jump through a hoop.
The first animated projection in the traditional sense, i.e., on motion picture film, was
"Fantasmagorie" by the French director Émile Cohl in 1908. This was followed by two
more films, "Le Cauchemar du fantoche" ["The Puppet's Nightmare", now lost] and
"Un Drame chez les fantoches" ["A Puppet Drama", called "The Love Affair in
Toyland" for American release and "Mystical Love-Making" for British release], all
completed in 1908.
Motion Pictures
With the rise of motion pictures, animation entered a new era. French film director
Émile Cohl animated a series of short films about a stick figure clown. The first of
these was "Fantasmagorie," released in 1908.
Animated Shorts
From the 1930s until the 1960s, movie studios released short animated cartoons that
played before feature films. The most prominent of these were Walt Disney's Mickey
Mouse and Warner Brothers' Looney Tunes cast of characters.
Disney
Walt Disney established a new genre of feature films with its 1937 production of
"Snow White and the Seven Dwarfs." Due to its success, Disney became a powerhouse
in the film industry and released other classic films such as "Peter Pan," "Sleeping
Beauty" and "Cinderella."
The first cartoon with synchronized sound is often identified as Walt Disney's
Steamboat Willie, starring Mickey Mouse, in 1928, but Max Fleischer's 1926 My Old
Kentucky Home is less widely known but more correctly credited with this innovation.
Fleischer also patented rotoscoping, whereby animation could be traced from a live
action film.
With the advent of sound film, musical themes were often used. Animated characters
usually performed the action in "loops", i.e., drawings were repeated over and over,
synchronized with the music. The music used is often original, but musical quotation
is often employed.
Disney also produced the first full-color cartoon in Technicolor, "Flowers and Trees",
in 1932, although other producers had earlier made films using inferior two-color
processes instead of the three-color process offered by Technicolor.
Later, other movie technologies were adapted for use in animation, such as
multiplane cameras, stereophonic sound in Disney's Fantasia in 1940, and later,
widescreen processes (e.g., CinemaScope), and even 3D.
Today, animation is commonly produced with computers, giving the animator new
tools not available in hand-drawn traditional animation. However, many types of
animation cannot be called "cartoons", which implies something that resembles
drawings. Most forms of 3D computer animation, as well as clay animation and other
forms of stop motion filming, are not cartoons in the strict sense of the word.
An animated cartoon created using Adobe Flash is sometimes called a webtoon.
Feature Films
The name "animated cartoon" is generally not used when referring to full-length
animated productions, since the term more or less implies a "short". Huge numbers of
animated feature films were, and still are, produced.
Notable artists and producers of "shorts"
J. R. Bray
Max Fleischer
Pat Sullivan
Walter Lantz
Walt Disney
Ub Iwerks
Tex Avery
Chuck Jones
Bob Clampett
Fred Quimby
Hanna-Barbera
Paul Terry
Television
American television animation of the 1950s featured quite limited animation styles,
highlighted by the work of Jay Ward on Crusader Rabbit. Chuck Jones coined the
term "illustrated radio" to refer to the shoddy style of most television cartoons that
depended more on their soundtracks than visuals. Other notable 1950s programs
include UPA's Gerald McBoing Boing, Hanna-Barbera's Huckleberry Hound and
Quick Draw McGraw, and rebroadcasts of many classic theatrical cartoons from
Warner Brothers, MGM, and Disney.
Hanna-Barbera's show The Flintstones was the first successful primetime animated
series in the United States, running from 1960 to 1966 (and in reruns since). While many
networks followed the show's success by scheduling other primetime cartoons in the
early 1960s, including The Jetsons, Top Cat, and The Alvin Show, none of these
programs survived more than a year in primetime. However, networks found success
by running these failed shows as Saturday morning cartoons, reaching smaller
audiences with more demographic unity among children. Television animation for
children flourished on Saturday morning, on cable channels like Nickelodeon and
Cartoon Network, and in syndicated afternoon timeslots.
Primetime cartoons were virtually non-existent until the 1990s hit The Simpsons ushered
in a new era of adult animation. Now, "adult animation" programs, such as Aeon
Flux, Beavis and Butt-head, South Park, Family Guy, The Boondocks, American Dad!,
Aqua Teen Hunger Force, and Futurama are a large part of television.
Computer-generated Imagery
Cartoon animation again changed with the advent of computer-generated imagery.
Pixar created the first fully computer-animated feature film, "Toy Story," in 1995.
Since then, almost all animated cartoon films have moved to computer graphics,
including films such as the "Shrek" series, "Finding Nemo," "Monster House,"
and "The Incredibles."
Character Development
The outward personality, the clothing, the facial features, and the overall look of a cartoon
give shape to the character of the person depicted through the cartoon. While this
type of information may be helpful once you have begun to develop your characters,
it does little for you as you begin the process! I did not find ONE book that walked
the novice writer through the process of determining the best point-of-view for the
story or developing interesting, well-rounded, believable characters.
These two elements – point-of-view and character development – are very important to
the success of your story or novel! Point-of-view means the viewpoint character, the
person who is telling your story. Is your main character telling your story? Is there a
narrator who knows and sees everything? Options for point-of-view include:
first-person single view, first-person multiple views, third-person single view,
third-person multiple views, and omniscient point-of-view – and there are few stories
that can be successfully told from all possible points-of-view.
In this course, we will explore these options, examine text written from each
point-of-view, even experiment in writing the same passage using several different
points-of-view! By the time you finish these exercises, you will be able to identify the
point-of-view most effective for the story you want to tell. But you will also learn
about the various character options available to you.
So, we will then proceed to character development! Compelling characters bring your
story to life for the reader. If your characters are not likeable or do not keep your
readers' attention, readers will not finish your story.
In order for your characters to be compelling enough to drive your story, you must
know your characters intimately. This means not just knowing their physical
description, but their likes/dislikes, past, present, and dreams for the future. Your
characters must be human (even if you are writing about aliens from another world),
and they must be believable.
Resist the urge to use clichéd or stereotypical characters. This might work all right for
minor secondary characters, but your main characters need to be real to your
readers – complete with weaknesses and faults that cause them to fail (so the reader
can identify with them). In taking this course, you will create your main, lifelike
characters: your protagonist (hero) and antagonist (villain)! We will develop them
into believable characters, each with a life of his/her own. We will also use forms and
worksheets to help you develop compelling characters. And, we'll explore perfect
names for your characters based on time period, ethnic origin, occupation, and
physical description.
Visualization of Different Views
"The key to high-density visualizations … is that they can present both the overview
and the detail simultaneously," says Andrew Cardno, CTO of BIS2. That allows users
to understand both the overall patterns in the data and to see the outliers – critical
information in today's fast-moving world, Cardno maintains. His company, BIS2,
offers software for analyzing massive data collections and producing visually centric
results.
In this interview, Cardno talks with TDWI about the types of data visualization
techniques and the rich insights into data that high-density, multi-layered
visualization in particular offers.
BI this Week: What's happening in data visualization since we spoke last year? It
seems to be an area of business intelligence that's moving very quickly. Is that what
you're seeing?
Andrew Cardno: Data visualization is moving very quickly. There are a large number
of data visualization players now that are very active. It's a competitive landscape,
which shows how much interest there is in the space. It also shows you how difficult
a subject it is to tackle. There are so many different approaches, from the very
traditional graphs and charts to some very advanced visualizations. As you move
through that spectrum, everyone is experimenting and working at new ways of
looking at data.
What are some of those different approaches, especially as they relate back to business
intelligence? Across the spectrum, what do you see from companies in terms of data
visualization approaches?
There seem to be three categories of approach.
The first is traditional techniques such as the line graph. There have been
enhancements, but we've essentially been using this approach for a long time. How
do we make the best use of line graphs, and what are the right kinds of
approaches? Should we use 3D on a graph? How many lines can we add? Should
we have graphs with axes on both sides? Pie graphs? In the end, there are players
in that space who are working hard to say, "This is a more effective way of using
traditional charting techniques."
The second group is exploring and working at unfolding – something like a
Rubik's Cube. They're taking the Rubik's Cube and unfolding the dimensionality of
the data. The data itself is used to represent the structure – the important
dimensions of the data. If you think about it in terms of database modeling, these
are the dimensions or the axes of the data that are important in the data. How can
I use those, unfold those, and show them in graphics?
The third approach is a multilayered, [structured] visualization. … The structure
of the data is deterministic – for example, a geospatial map, which is deterministic
because people [indicated on the map] have locations. The deterministic structure
then has layers of data added on top of it.
People are tackling all three of those approaches. If you look at data visualization
companies, you can usually put them into one of those three categories. I don't think
that any of them are right or wrong. They're just different; each has places where they
can be applied effectively.
Commercial Animation
Animation has been very popular in television commercials, both due to its graphic
appeal, and the humor it can provide. Some animated characters in commercials have
survived for decades, such as Snap, Crackle and Pop in advertisements for Kellogg's
cereals.
The legendary animation director Tex Avery was the producer of the first Raid "Kills
Bugs Dead" commercials in 1966, which were very successful for the company. The
concept has been used in many countries since.
Funny Animals
The first animated cartoons often depicted funny animals in various adventures. This
was the mainstream genre in the United States from the early 1900s until the 1940s,
and the backbone of Disney's series of cartoons.
Zany Humor: Bugs Bunny, Daffy Duck of Warner Brothers, and the various films of
Tex Avery at MGM introduced this popular form of animated cartoons. It usually
involves surreal acts such as characters being crushed by massive boulders or going
over the edge of a cliff but floating in mid-air for a few seconds. The Road Runner
cartoons are great examples of these actions. The article Cartoon physics describes
typical antics of zany cartoon characters. Disney has, to a lesser extent, applied this to
some of their cartoons.
Sophistication: As the medium matured, more sophistication was introduced, albeit
keeping the humorous touch. Classical music was often spoofed; a notable example is
"What's Opera, Doc?" by Chuck Jones. European animation sometimes followed a very
different path from American animation. In the Soviet Union, the late 1930s saw the
enforcement of socialist realism in animation, a style which lasted throughout the
Stalinist era. The animations themselves were mostly for kids, and based on
traditional fairy tales.
Limited Animation: In the 1950s, UPA and other studios refined the art aspects of
animation, by using extremely limited animation as a means of expression.
Modernism
Graphic styles continued to change in the late 1950s and 1960s. At this point, the
design of the characters became more angular, while the quality of the character
animation declined.
Animated Music Videos and Bands
Popular with the advent of MTV and similar music channels, music videos often
contain animation, sometimes rotoscoped, i.e., based on live action performers.
Cartoons animated to music go at least as far back as Disney's 1929 short The Skeleton
Dance. These are now popular with the animated bands Gorillaz and Dethklok, the
latter of which is based on a television show about the band.
Steps to Create Animation
Before we begin our animation, we need a character. This little guy below is one that I
quickly drew using a similar process to that described in another tutorial, Line Art In
Flash. He looks a little bit lost, bless him. Let's call him Dexter.
Figure 1.1: First Draft of Character
At the moment, Dexter is just a collection of lines and fills, and not much good for
animating. What we need to do is break up the character into his component parts
(ouch!), and save them all as Library items.
Of course, the easiest way to do this is to simply draw each body part on a different
layer as you go. That way, you can see how they fit together in relation to each other,
get the sizes right, and so on. But if you've already drawn the character, then you'll just
have to get your mouse dirty, get in there and pull poor Dexter to pieces. Select the
lines and fills that make up a body part (say the head) then cut and paste into a new
symbol. In a lot of cases, you may find that a piece isn't complete where it intersected
with other areas. If that happens, then you just add some more lines and complete the
part.
You can see from the picture above that Dexter is now split up into sections. Notice
that although we only had one original eye image, I've duplicated the symbol and
made three more versions, each with the lids closing. We need this to make him blink.
I've also made a short movie clip for the mouth, containing a couple of lines/states for
a talking mouth. A little tip for arms – make sure that the registration point is located
at the 'shoulder' joint of the image. This makes it easier when you come to rotate
them.
Figure 1.2: Individual Symbols
Make sure you have all your symbols saved in the library, and a clear Stage. Now, we
can begin to create a small animation.
1. Make a new movie clip symbol, and call it M Eyes Blink. Inside it, place your
open eyes in frame 1. Insert another keyframe at around frame 40 or so. In this
one, replace the open eyes with the next level down eye image, where the lids are
starting to close. Put another keyframe in the next frame, and repeat the process,
inserting the almost closed eyes. Put the fully closed eyes in the next keyframe.
Then insert another three frames and reverse the process. There's no need to put any
scripting at the end; we want this to loop continuously, so Dexter will blink every
few seconds.
Figure 1.3: Your Eyes Go in this Order
2. Create a new movie clip. We're going to use this one for our character, so give it a
couple of layers. Call each one something meaningful, like R Arm, L Arm, Legs,
Head etc. Remember that some body parts will need to be behind others. In the
case of Dexter, one of his arms is partially behind his body, so I have to make sure
that the layer containing his left arm is behind the layer containing his T-Shirt.
3. It may also be worth making a new clip for the character's head, although this
depends on how complex you want the animation to be. For the more detailed
movies where you're trying to sync voice files and mouth movements, it's
probably not worth doing, and you'll find it better making individual movies for
each line. In this case, though, we just want to see an example, so we'll combine the
head, eye animation and mouth animation into one, and place it on the top layer
of our character movie.
Figure 1.4: Layer Structure
4. But that's not enough! How about we get him to look at his watch every now and
then? That's just a simple motion tween of our arm and head symbols. Go into the
character movie clip, and F5 up to around frame 43 on each layer.
5. On about frame 15 of the R Arm layer, make a keyframe. A few frames later
(depending on how long you want the action to take), make another. In this
second one, rotate the arm (this is where you need the registration point on the
shoulder axis, it makes it a lot easier) to the point you would like it to be. You
may find that the arm overlaps part of the body image. In that case, edit the arm
symbol so this doesn't happen. When you've chosen your finish position, select a
frame in between these two states, and apply a motion tween. Give him a few
frames to look at his watch, then reverse the process, moving the arm back to its
original position. Do something similar with the head, so that he actually looks
down at his watch.
6. Now run your movie, and you'll see him getting the time every few seconds, still
chatting and blinking as he does so.
And that's the basic principle. It's very handy to make a small collection of body
parts (not literally, I do mean Flash images here) in your library. Draw various
positions of arms, so you'll always have the one you need on hand. Different
expressions can be made by using different mouths and eyes, so have a good
selection of these too.
Certain motions (like the watch checking) can be put into small clips, then re-used
as and when you need them, so you can build up a collection of mini movie
actions too.
Figure 1.5: Animation Timeline
Add more characters, backgrounds, anything you like. Here's the .fla file for the
basic movie (including a selection of other characters for you to break up) ready
for you to animate and warp to your heart's desire.
Computer Animation
Beginning in the 1990s, with the rise of computer animation, some cartoons
implemented CGI and a few were done entirely in computer animation. Beast Wars
and ReBoot were done entirely in CGI whereas Silver Surfer only partially
implemented CGI. Donkey Kong Country also used CGI to make it look like the SNES
game. CGI is common today, whether obvious such as in Tak and the Power of Juju or
made to look two-dimensional such as in Speed Racer X.
Computer animation (or CGI animation) is the art of creating moving images with the
use of computers. It is a subfield of computer graphics and animation. Increasingly it
is created by means of 3D computer graphics, though 2D computer graphics are still
widely used for stylistic, low bandwidth, and faster real-time rendering needs.
Sometimes the target of the animation is the computer itself, but sometimes the target
is another medium, such as film. It is also referred to as CGI (computer-generated
imagery or computer-generated imaging), especially when used in films.
To create the illusion of movement, an image is displayed on the computer screen and
repeatedly replaced by a new image that is similar to the previous image, but
advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second).
This technique is identical to how the illusion of movement is achieved with
television and motion pictures.
Computer animation is essentially a digital successor to the art of stop motion
animation of 3D models and frame-by-frame animation of 2D illustrations. For 3D
animations, objects (models) are built on the computer monitor (modeled) and 3D
figures are rigged with a virtual skeleton. For 2D figure animations, separate objects
(illustrations) and separate transparent layers are used, with or without a virtual
skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the
animator on key frames. The differences in appearance between key frames are
automatically calculated by the computer in a process known as tweening or
morphing. Finally, the animation is rendered.
For 3D animations, all frames must be rendered after modeling is complete. For 2D
vector animations, the rendering process is the key frame illustration process, while
tweened frames are rendered as needed. For pre-recorded presentations, the rendered
frames are transferred to a different format or medium such as film or digital video.
The frames may also be rendered in real time as they are presented to the end-user
audience. Low bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D)
often use software on the end-user's computer to render in real time as an alternative
to streaming or pre-loaded high bandwidth animations.
A Simple Example
Figure 1.6: Computer Animation
The screen is blanked to a background color, such as black. Then a goat is drawn on
the right of the screen. Next the screen is blanked, but the goat is re-drawn or
duplicated slightly to the left of its original position. This process is repeated, each
time moving the goat a bit to the left. If this process is repeated fast enough the goat
will appear to move smoothly to the left. This basic procedure is used for all moving
pictures in films and television.
The moving goat is an example of shifting the location of an object. More complex
transformations of object properties such as size, shape, and lighting effects often require
calculations and computer rendering instead of simple re-drawing or duplication.
Explanation
To trick the eye and brain into thinking they are seeing a smoothly moving object, the
pictures should be drawn at around 12 frames per second (frame/s) or faster (a frame
is one complete image). With rates above 70 frames/s no improvement in realism or
smoothness is perceivable due to the way the eye and brain process images. At rates
below 12 frame/s most people can detect jerkiness associated with the drawing of
new images which detracts from the illusion of realistic movement. Conventional
hand-drawn cartoon animation often uses 15 frames/s in order to save on the number
of drawings needed, but this is usually accepted because of the stylized nature of
cartoons. Because it produces more realistic imagery, computer animation demands
higher frame rates to reinforce this realism.
No jerkiness is seen at higher speeds because of "persistence of vision."
From moment to moment, the eye and brain working together actually store whatever
one looks at for a fraction of a second, and automatically "smooth out" minor jumps.
Movie film seen in theaters in the United States runs at 24 frames per second, which is
sufficient to create this illusion of continuous movement.
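As a quick worked example of what these rates mean for an animator's workload, the short Python calculation below uses the frame rates mentioned in this section; the one-minute running time is an arbitrary choice.

    # Drawings needed for one minute of animation at the rates discussed above.
    seconds = 60
    for fps in (12, 15, 24, 30):
        print(f"{fps} frames/s -> {fps * seconds} drawings per minute")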
Flash
Flash is one of those wonderful programs that can be put to a plethora of uses. But
one of the most common things people want to do with it is make cartoons
and animations. There are a lot of ways to do this.
Methods of Animating Virtual Characters
In a 2D Flash animation, each 'stick' of a stick figure is keyframed over time to
create motion.
In most 3D computer animation systems, an animator creates a simplified
representation of a character's anatomy, analogous to a skeleton or stick figure. The
position of each segment of the skeletal model is defined by animation variables, or
Avars. In human and animal characters, many parts of the skeletal model correspond
to actual bones, but skeletal animation is also used to animate other things, such as
facial features (though other methods for facial animation exist). The character
"Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the face.
The computer does not usually render the skeletal model directly (it is invisible), but
uses the skeletal model to compute the exact position and orientation of the character,
which is eventually rendered into an image. Thus by changing the values of Avars
over time, the animator creates motion by making the character move from frame to
frame.
There are several methods for generating the Avar values to obtain realistic motion.
Traditionally, animators manipulate the Avars directly. Rather than set Avars for
every frame, they usually set Avars at strategic points (frames) in time and let the
computer interpolate or 'tween' between them, a process called keyframing.
Keyframing puts control in the hands of the animator, and has roots in hand-drawn
traditional animation.
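To make keyframing concrete, here is a minimal sketch in Python. It is not taken from any production animation system: the Avar, its frame numbers, and its values are invented for illustration. The animator supplies values only at strategic frames, and the computer tweens everything in between.

    # Minimal sketch of keyframing: an animator sets an animation variable
    # (Avar) only at strategic frames; the computer linearly interpolates
    # ("tweens") all the in-between frames. The Avar and its values are
    # hypothetical.
    from bisect import bisect_right

    # frame number -> value of one Avar (say, an elbow angle in degrees)
    keyframes = {0: 0.0, 12: 45.0, 24: 10.0}

    def tween(keys, frame):
        """Return the Avar value at `frame`, interpolating between keyframes."""
        frames = sorted(keys)
        if frame <= frames[0]:
            return keys[frames[0]]
        if frame >= frames[-1]:
            return keys[frames[-1]]
        i = bisect_right(frames, frame)      # first keyframe after `frame`
        f0, f1 = frames[i - 1], frames[i]
        t = (frame - f0) / (f1 - f0)         # 0..1 position between the keys
        return keys[f0] + t * (keys[f1] - keys[f0])

    for frame in range(0, 25, 4):
        print(frame, round(tween(keyframes, frame), 1))

Production systems replace the straight-line blend with easing curves, but the division of labour is the same: the artist poses the keys, the machine fills in the rest.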
In contrast, a newer method called motion capture makes use of live action. When
computer animation is driven by motion capture, a real performer acts out the scene
as if they were the character to be animated. His or her motion is recorded to a
computer using video cameras and markers, and that performance is then applied to
the animated character.
Each method has its advantages and, as of 2007, games and films were using either or
both of these methods in productions. Keyframe animation can produce motions that
would be difficult or impossible to act out, while motion capture can reproduce the
subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean:
Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy
Jones. Even though Nighy himself doesn't appear in the film, the movie benefited
from his performance by recording the nuances of his body language, posture, facial
expressions, etc. Thus motion capture is appropriate in situations where believable,
realistic behavior and action is required, but the types of characters required exceed
what can be done through conventional costuming.
Computer Animation Development Equipment
Computer animation can be created with a computer and animation software. Some
impressive animation can be achieved even with basic programs; however, the rendering
rendering can take a lot of time on an ordinary home computer. Because of this, video
game animators tend to use low resolution, low polygon count renders, such that the
graphics can be rendered in real time on a home computer. Photorealistic animation
would be impractical in this context.
Professional animators of movies, television, and video sequences on computer games
make photorealistic animation with high detail. This level of quality for movie
animation would take tens to hundreds of years to create on a home computer. Many
powerful workstation computers are used instead. Graphics workstation computers
use two to four processors, and thus are a lot more powerful than a home computer,
and are specialized for rendering. A large number of workstations (known as a render
farm) are networked together to effectively act as a giant computer. The result is a
computer-animated movie that can be completed in about one to five years (this
process does not consist solely of rendering, however). A workstation typically costs
$2,000 to $16,000, with the more expensive stations being able to render much faster,
due to the more technologically advanced hardware that they contain. Pixar's
RenderMan is rendering software widely used as the movie animation
industry standard, in competition with Mental Ray. It can be bought at the official
Pixar website for about $3,500. It will work on Linux, Mac OS X, and Microsoft
Windows based graphics workstations along with an animation program such as
Maya and Softimage XSI. Professionals also use digital movie cameras, motion
capture or performance capture, bluescreens, film editing software, props, and other
tools for movie animation.
Photorealistic Animation of Humans
One open challenge in computer animation is a photorealistic animation of humans.
Currently, most computer-animated movies show animal characters (A Bug's Life,
Finding Nemo, Ratatouille, Newt, Ice Age, Over the Hedge), fantasy characters
(Monsters Inc., Shrek, Teenage Mutant Ninja Turtles 4, Monsters vs. Aliens),
anthropomorphic machines (Cars, WALL-E, Robots) or cartoon-like humans
(The Incredibles, Despicable Me, Up). The movie Final Fantasy: The Spirits Within is
often cited as the first computer-generated movie to attempt to show realistic-looking
humans. However, due to the enormous complexity of the human body, human
motion, and human biomechanics, realistic simulation of humans remains largely an
open problem. Another problem is the distasteful psychological response to viewing
nearly perfect animation of humans, known as "the uncanny valley." It is one of the
"holy grails" of computer animation. Eventually, the goal is to create software where
the animator can generate a movie sequence showing a photorealistic human
character, undergoing physically-plausible motion, together with clothes,
photorealistic hair, a complicated natural background, and possibly interacting with
other simulated human characters. This could be done in a way that the viewer is no
longer able to tell if a particular movie sequence is computer-generated, or created
using real actors in front of movie cameras. Complete human realism is not likely to
happen very soon, and when it does it may have major repercussions for the film
industry.
For the moment, it looks like three-dimensional computer animation can be divided
into two main directions: photorealistic and non-photorealistic rendering.
Photorealistic computer animation can itself be divided into two subcategories: real
photorealism (where performance capture is used in the creation of the virtual human
characters) and stylized photorealism. Real photorealism is what Final Fantasy tried
to achieve, and in the future it will most likely give us live-action fantasy features
such as The Dark Crystal without the need for advanced puppetry and animatronics,
while Antz is an example of stylized photorealism (in the future, stylized
photorealism may be able to replace traditional stop-motion animation, as in
Corpse Bride). Neither is perfected as of yet, but the progress continues.
The non-photorealistic/cartoonish direction is more like an extension of traditional
animation, an attempt to make the animation look like a three dimensional version of
a cartoon, still using and perfecting the main principles of animation articulated by
the Nine Old Men, such as squash and stretch.
While a single frame from a photorealistic computer-animated feature will look like a
photo if done right, a single frame from a cartoonish computer-animated
feature will look like a painting (not to be confused with cel shading, which produces
an even simpler look).
Detailed Examples and Pseudocode
In 2D computer animation, moving objects are often referred to as "sprites." A sprite
is an image that has a location associated with it. The location of the sprite is changed
slightly, between each displayed frame, to make the sprite appear to move. The
following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2;
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)  // draw on top of the background
    x := x + 5            // move to the right
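For readers who want to run the loop, here is roughly the same program as a short Python sketch using the pygame library. It is only an illustration: the window size and speed are arbitrary, and a plain rectangle stands in for a real sprite image.

    # Runnable take on the sprite-moving pseudocode above, using pygame.
    import pygame

    pygame.init()
    screen_width, screen_height = 640, 480
    screen = pygame.display.set_mode((screen_width, screen_height))
    clock = pygame.time.Clock()

    x, y = 0, screen_height // 2
    running = True
    while running and x < screen_width:
        for event in pygame.event.get():        # let the window close cleanly
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))                  # draw the background
        pygame.draw.rect(screen, (255, 255, 255), (x, y, 20, 20))  # the "sprite"
        pygame.display.flip()                   # show the finished frame
        x += 5                                  # move to the right
        clock.tick(30)                          # about 30 frames per second
    pygame.quit()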
Modern computer animation uses different techniques to produce animations.
Most frequently, sophisticated mathematics is used to manipulate complex
three-dimensional polygons, apply "textures", lighting, and other effects to the
polygons, and finally render the complete image. A sophisticated graphical user interface
may be used to create the animation and arrange its choreography. Another technique
called constructive solid geometry defines objects by conducting boolean operations
on regular shapes, and has the advantage that animations may be accurately
produced at any resolution.
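As a rough illustration of the idea – real CSG systems work in 3D, but the principle is the same in 2D – the Python sketch below uses the shapely geometry library to define an object purely by a boolean operation on regular shapes. Because the result is geometry rather than pixels, it could be rasterized at any resolution; the shapes and sizes here are arbitrary.

    # 2D sketch of constructive solid geometry: define an object by boolean
    # operations on regular shapes instead of pixels. Uses the shapely library.
    from shapely.geometry import Point, box

    plate = box(0, 0, 10, 10)            # a 10 x 10 square plate
    hole = Point(5, 5).buffer(3)         # a disc of radius 3 at the centre
    washer = plate.difference(hole)      # plate minus disc: a square "washer"

    print(washer.area)                   # approximately 100 - pi * 3**2
    print(washer.contains(Point(1, 1)))  # True: the corner material remains
    print(washer.contains(Point(5, 5)))  # False: the centre was cut away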
Let's step through the rendering of a simple image of a room with flat wood walls
and a grey pyramid in the center of the room. The pyramid will have a spotlight
shining on it. Each wall, the floor, and the ceiling is a simple polygon – in this case,
a rectangle. Each corner of the rectangles is defined by three values referred to as
X, Y and Z. X is how far left or right the point is, Y is how far up or down the point
is, and Z is how far in or out of the screen the point is. The wall nearest us would be
defined by four points (in the order x, y, z):
(0, 10, 0)
(10, 10, 0)
(0, 0, 0)
(10, 0, 0)
The far wall would be:
(0, 10, 20)
(10, 10, 20)
(0, 0, 20)
(10, 0, 20)
The pyramid is made up of five polygons: the rectangular base, and four triangular
sides. To draw this image the computer uses math to calculate how to project this
image, defined by three dimensional data, onto a two dimensional computer screen.
First we must also define where our view point is, that is, from what vantage point
will the scene be drawn. Our view point is inside the room a bit above the floor,
directly in front of the pyramid. First the computer will calculate which polygons are
visible. The near wall will not be displayed at all, as it is behind our view point. The
far side of the pyramid will also not be drawn as it is hidden by the front of the
pyramid.
Next, each point is perspective-projected onto the screen. The portions of the walls
'furthest' from the view point will appear shorter than the nearer areas due to
perspective. To make the walls look like wood, a wood pattern, called a texture, will
be drawn on them. To accomplish this, a technique called "texture mapping" is often
used. A small drawing of wood that can be repeatedly drawn in a matching tiled
pattern (like wallpaper) is stretched and drawn onto the walls' final shape. The
pyramid is solid grey, so its surfaces can just be rendered as grey. But we also have a
spotlight: where its light falls we lighten colors, and where objects block the light we
darken colors.
Next we render the complete scene on the computer screen. If the numbers describing
the position of the pyramid were changed and this process repeated, the pyramid
would appear to move.
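A hedged sketch of the projection step follows, in Python, using the far-wall corners listed above. The camera position, focal length, and the simple pinhole formula are assumptions chosen for illustration, not the exact mathematics of any particular renderer.

    # Perspective projection sketch: map (x, y, z) points onto a 2D screen
    # for a camera inside the room looking down the +z axis. The camera
    # position and focal length are made-up values.
    def project(point, camera=(5.0, 3.0, 2.0), focal=2.0):
        x, y, z = point
        cx, cy, cz = camera
        depth = z - cz                 # distance along the viewing axis;
                                       # depth <= 0 means "behind the camera"
        u = focal * (x - cx) / depth   # horizontal screen coordinate
        v = focal * (y - cy) / depth   # vertical screen coordinate
        return (u, v)

    far_wall = [(0, 10, 20), (10, 10, 20), (0, 0, 20), (10, 0, 20)]
    for corner in far_wall:
        print(corner, "->", project(corner))

Dividing by depth is what makes the 'furthest' portions of the walls come out smaller on screen, and a point with non-positive depth (such as the near wall behind our view point) would simply be culled.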
Movies
CGI short films have been produced as independent animation since 1976, though the
popularity of computer animation (especially in the field of special effects)
skyrocketed during the modern era of U.S. animation. The first completely
computer-generated television series was ReBoot, and the first completely
computer-generated animated movie was Toy Story (1995).
Amateur Animation
The popularity of sites such as Newgrounds, which allows members to upload their
own movies for others to view, has created a growing community of what are often
considered amateur computer animators. With many free utilities available and
programs such as Windows Movie Maker or iMovie, which are included in the
operating system, anyone with the tools and a creative mind can have their animation
viewed by thousands. Many high end animation software options are also available
on a trial basis, allowing for educational and non-commercial development with
certain restrictions. Several free and open-source animation software applications
exist as well, Blender being one example. One way to create amateur animation is using
the GIF format, which can be uploaded and viewed on the web easily.
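As a small illustration of the GIF route, the following Python sketch uses the Pillow imaging library to draw a dot moving across twenty frames and save them as a looping animated GIF. The image size, colours, and timing are arbitrary choices.

    # Build an animated GIF frame by frame with Pillow.
    from PIL import Image, ImageDraw

    frames = []
    for i in range(20):
        frame = Image.new("RGB", (200, 100), "white")
        draw = ImageDraw.Draw(frame)
        x = 10 * i                               # move the "character" each frame
        draw.ellipse((x, 40, x + 20, 60), fill="red")
        frames.append(frame)

    frames[0].save("bounce.gif", save_all=True, append_images=frames[1:],
                   duration=50, loop=0)          # 50 ms per frame, loop forever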
Cartoon Builder
Cartoon Builder allows you to create your own cel-animation sequences, by
positioning and manipulating a playful cartoon character inside a filmstrip, and by
using provided media assets – backgrounds, characters in multiple poses, sound
effects. You can play the sequence at different speeds, and save it.
"Make Your Own" functionality lets you produce original cartoons that incorporate
character poses, backgrounds and sounds you made yourself using other XO
applications (Paint, Camera, Microphone, etc.)
Beta 1.4 (old!) Features
Place 6 character poses in a filmstrip by selecting the cel/frame and then selecting
the character pose you want to go in it.
Character, Background and Sound selectors (Scroll through background options
using "Back" and "Forward" buttons. The one in view will be used in the cartoon.)
Three starter characters (Space Blob, Elephant, Turkey), 12+ starter backgrounds,
and four starter sounds.
"Add Background" functionality
"Add Sound" functionality
Changing the animation speed – fast and slow range
Clear All (removes images from all cells of the filmstrip)
Play/Pause
Saving/loading animations (locally on the XO)
Can take images created with Paint and Record and make them into images in
Cartoon Builder.
Can take sounds made with Record or other sources and use them in Cartoon
Builder.
Full journal integration for Keep and Resume of Cartoon projects.
Full journal integration for selecting images and sounds made in other programs
to use as backgrounds and sound effects in Cartoon Builder.
Ability to share animations via Mesh.
Full screen interface with Toolbar.
Additional translations as available.
Character and Background Art
The character art was created in Adobe Illustrator and copied/pasted into Photoshop
for export. When designing a character for animation, carefully plan all the pieces that
will compose your character. I decided on simple shapes and built each part of the
body separately, like a marionette. This allowed me to re-use some graphics, and
building the character like a marionette let me simplify the animation process.
As previously stated, the graphics originated in Illustrator. Vector paths and points
make it simple to adjust the artwork. For instance, if you need Spidey looking up or
down, simply adjust the position of the eyes and webbing on the head shape to
complete the illusion. You can clean up any extra lines by simply moving a point or
deleting it.
The same applies to the torso, legs and arms. By adjusting the orientation of the chest
design and belt we can fake the rotation of a 2D element. Moving the arms and/or
legs a little behind and a little in front of the torso creates the illusion of a subtle third
dimension.
The cityscapes were also created in Illustrator, but they started as plain screen-grabs.
I captured certain parts of the video as it played, then pasted them into Illustrator.
I created another layer above the screen-grab and drew paths over the screen-grab
images, using the layer like tracing paper. When the graphics were done, I trashed the
screen-grab layer.
Just like the character, these cityscape graphics require you to think a little in advance
about how the pieces must fit together. Background images used for scrolling and
zooming should be two or three times wider and/or taller to allow for panning
left, right, up, or down. The same applies to zooming in and out.
Now that we have our finished character and background graphics, we can begin
setting them up for animation.
Character Rigging
One of the trickiest parts of creating these CSS3 character animations is rigging the
joints or skeleton so that they hinge and rotate believably. The most important
property is transform-origin, which uses x,y coordinates to determine the point from
which animation or transformations occur. (If you want a full explanation of CSS3
animation properties, please read my last article.) In order to determine the accurate
transform-origin point for our arm and leg graphics, we're going to use a simple trick.
Notice how the foot graphic (1) is much wider than the thigh graphic (2). It would be
impossible to match up the transform-origin point on these two pieces.
However, if we match the width of the thigh (2) with the width of the foot (1)… that's
it! By increasing the width of the canvas area of the thigh to match the width of the
foot, we can accurately match the rotation points of these "hinges" using the CSS3
transform-origin property. By giving the thigh image a wider canvas area, we can
move the thigh graphic left or right to line it up with the foot graphic.
We can now ensure that these two pieces have exactly the same rotation points via the
CSS3 transform-origin. By setting the transform-origin for the foot to (x) number of
pixels from the left, we can guarantee that the foot will rotate from exactly where the
bottom of the shin is located.
Now we can apply the same transform-origin coordinates to the top of the leg div
(which wraps around the thigh and foot). By rotating the leg div we rotate its children
too: the leg div rotates both the thigh and foot graphics at the same time, mimicking
the functionality of a skeleton. This allows us to have the foot moving along its own
rotation path while also rotating along the path of the whole leg. It's important to
nest items, because this allows you to have multiple animation paths without having
to recalculate their positions constantly.
If I rotated the thigh and foot without first wrapping them in a leg div, how would I
also make the foot rotate upon the axis of the leg? It would be nearly impossible, and
it would look horrible. By nesting, breaking the animations down into their major
body parts, animating one major body part at a time, and utilizing the animation of
children, I can achieve complex (for CSS) animation combinations. The sketch below
works through the geometry.
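The same parent/child idea can be expressed as plain 2D geometry. The Python sketch below, with made-up joint coordinates and angles, shows why nesting works: rotating the 'leg' about the hip carries the foot along, and the foot then adds its own rotation about the moved ankle – exactly like a wrapper div rotating its children.

    # Nested rotation as geometry: a parent rotation about the hip moves the
    # whole leg; a child rotation about the (moved) ankle turns just the foot.
    # Coordinates, angles, and joint names are hypothetical.
    import math

    def rotate_about(point, origin, degrees):
        """Rotate `point` around `origin` (the transform-origin) by `degrees`."""
        ox, oy = origin
        px, py = point
        a = math.radians(degrees)
        qx = ox + (px - ox) * math.cos(a) - (py - oy) * math.sin(a)
        qy = oy + (px - ox) * math.sin(a) + (py - oy) * math.cos(a)
        return (qx, qy)

    hip, ankle, toe = (0.0, 0.0), (0.0, -40.0), (15.0, -40.0)

    # Parent rotation: swing the whole leg 30 degrees about the hip.
    ankle2 = rotate_about(ankle, hip, 30)
    toe2 = rotate_about(toe, hip, 30)
    # Child rotation: flex the foot another 20 degrees about the moved ankle.
    toe3 = rotate_about(toe2, ankle2, 20)
    print(ankle2, toe3)

Without the nesting, the foot's two rotations would have to be recombined by hand for every frame, which is exactly the recalculation the leg wrapper avoids.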
Creating the illusion of the character zooming in required that the images be created
at a bigger size, then implemented into the HTML mark-up at a smaller size. There
are two ways to do this: using the scale transform property or using the image
width property. By increasing the image width over a key-framed animation, you give
it the illusion of growing or zooming into the screen. We can make a 500-pixel-wide
image, then insert it into the HTML at 5% of its size. We are then able to make it smaller
or bigger by adjusting the height or width via CSS3.
Complex Character Animation
When building a character composed of precisely placed elements, being able to
visually construct the character is essential. When we first place all the different pieces
of our character into the HTML, they are all jumbled up together.
Firefox 3.x has implemented CSS3 moz-transform-origin and moz-rotate, which means
I can use the Firebug developer tool to visually position the character's body parts
where they belong. This makes setting up the start and end positions/rotations for the
character's pose simple.
To begin, we set up the character's body parts in the start position. Using Firebug, I
visually adjust the rotation and position values of each graphic until I have the proper
first position. This gives me pixel-perfect precision over where each graphic is placed
and takes the guesswork out of animating something as technical as separate body
parts. We then place those coordinates into the CSS.
Now we have to figure out the end positions for our animation. We use the same
technique, adjusting the values to accommodate the end position, and plug those
values into the CSS as well. Finally, I place these values in the 'start' and 'end' parts of
the CSS3 keyframe function. Then we check the animation in WebKit, which does the
rest of the magic on its own.
Putting it all Together
Believe it or not, the biggest roadblock I encountered when creating this CSS3 cartoon
was switching the scenes at the correct moment to coincide with the CSS3 animation. I
used jQuery's built-in delay function to flip through the scenes and activate the next
scene. As I'm writing this, I realize that I probably could have pulled off the scene
switching with CSS3 animation as well, but I'm not sure, as I haven't thought it all the
way through.
We used the built-in animation delay in CSS3 to begin animating the scenes just as
they were activated by jQuery. Another challenge was that I had no control over when
the animation started. I had to preload all the elements, because there was no way to
start all of the animations on just one click throughout the entire flick, and if all the
elements are not loaded when the animations start running, it gets jumpy.
Character Developer
Character development means:
The change in characterization of a dynamic character, who changes over the
course of a narrative.
Character creation, especially for games.
Character advancement, increase in scores and other changes of a game
character – for example, in computer role-playing games or console role-playing
games.
Moral character, a term used in many educational systems to indicate a strategy
for the maturation of individual students.
Developing your Cartoon Man
To Create More Inspired Characters: This cartoon man tutorial has been made for
artists who are ready to go past the beginner stage and begin working on creating
their own unique style and characters. We should all strive to take real pride in our
work and a big part of this is developing your characters as fully as possible.
Character Developer
Notes
Who is He?
The more thought you put into your character, the easier it will be to make
the character believable in the eyes of the people who see him.
Here's an example that everyone can relate to: Homer Simpson from the Simpsons TV
show. Everyone knows his character and some people might even say that he's the
most simpleminded character on the entire show – aside from Cleetus, of course! But
when you really start to look closely and analyze his actions, you can begin to
come up with a personality profile for him and realize that he might not be as
simple as you think.
Creating a personality profile for a character by looking at his actions, as suggested
with Homer, is working backwards, but it is still a good exercise because it helps
develop your ability to create better characters and ultimately become a better
storyteller and artist.
When you're going to create your own cartoon man, you should start with creating
the personality.
Here's a list of questions that you can use to get the ball rolling for your own
believable character:
What scares him?
Is this character based off of someone that you know in the real world?
What style of clothes does he wear?
If your character bought a magazine, which magazine would he most likely
choose?
What kind of foods does he like?
What's his favorite color?
Does he play sports?
Does he have children?
Is he married?
Where does he work?
Is he quick and muscular, or slow and fat?
Does he have any pets?
What makes him the happiest?
Is your character more of a hero or a sidekick?
By answering these questions you should start to be able to see the personality of your
character coming out a little. Doing this personality step is extremely important if you
plan on using your cartoon man in any kind of comic strip or series of drawings. The
more time you spend on it, the easier it will be when you are making your comic strip. If you know your character inside and out, it's easy to think up a situation, ask yourself 'What would my character do in THIS situation?', and have the answer come to you quickly and easily.
Basic Drawing Tips
There's lots of good information on this site about drawing cartoon characters and faces, but the part that I usually start with when I'm doing my own drawings is the
chest. I like to start here because the chest really defines the male character. The arms
and legs aren't actually that different for men and women depending on your style
but the chest and stomach can really make or break your design and that's why I
suggest that you try several different body styles before you finally choose one that
you think best suits the personality of the character that you invented in the exercise
above.
TIP: You may find it easier to draw the chest in two different parts, the pecs and the
stomach, when you're sketching out your cartoon man in the beginning. Doing it this
way frees you from thinking of the body as one big blob and will help you more
easily come up with interesting shapes.
Character (Arts)
A character is the representation of a person in a narrative or dramatic work of art
(such as a novel, play, or film). Derived from the ancient Greek word kharaktêr (χαρακτήρ), the earliest use in English, in this sense, dates from the Restoration, although it
became widely used after its appearance in Tom Jones in 1749. From this, the sense of
"a part played by an actor" developed. Character, particularly when enacted by an
actor in the theatre or cinema, involves "the illusion of being a human person." Since
the end of the 18th century, the phrase "in character" has been used to describe an
effective impersonation by an actor. Since the 19th century, the art of creating
characters, as practised by actors or writers, has been called characterisation.
A character who stands as a representative of a particular class or group of people is
known as a type. Types include both stock characters and those that are more fully
individualised. The characters in Henrik Ibsen's Hedda Gabler (1891) and August
Strindberg's Miss Julie (1888), for example, are representative of specific positions in
the social relations of class and gender, such that the conflicts between the characters
reveal ideological conflicts.
The study of a character requires an analysis of its relations with all of the other
characters in the work. The individual status of a character is defined through the
network of oppositions (proairetic, pragmatic, linguistic, proxemic) that it forms with
the other characters. The relation between characters and the action of the story shifts
historically, often miming shifts in society and its ideas about human individuality,
self-determination, and the social order.
Classical Analysis of Character
In the earliest surviving work of dramatic theory, Poetics (c. 335 BCE), the Greek
philosopher Aristotle deduces that character (ethos) is one of six qualitative parts of
Athenian tragedy and one of the three objects that it represents (1450a12). He
understands character not to denote a fictional person, but the quality of the person
acting in the story and reacting to its situations (1450a5). He defines character as "that
which reveals decision, of whatever sort" (1450b8). It is possible, therefore, to have
tragedies that do not contain "characters" in Aristotle's sense of the word, since
character makes the ethical dispositions of those performing the action of the story
clear. Aristotle argues for the primacy of plot (mythos) over character (ethos). He
writes:
"But the most important of these is the structure of the incidents. For (i) tragedy is a representation not of human beings but of action and life. Happiness and unhappiness lie in action, and the end [of life] is a sort of action, not a quality; people are of a certain sort according to their characters, but happy or the opposite according to their actions. So [the actors] do not act in order to represent the characters, but they include the characters for the sake of their actions" (1450a15–23).
In the Tractatus Coislinianus (which may or may not be by Aristotle), comedy is
defined as involving three types of characters: the buffoon (bômolochus), the ironist
(eirôn) and the imposter or boaster (alazôn). All three are central to Aristophanes'
"Old comedy."
By the time the Roman playwright Plautus wrote his plays, the use of characters to
define dramatic genres was well-established. His Amphitryon begins with a prologue
in which the speaker Mercury claims that since the play contains kings and gods, it
cannot be a comedy and must be a tragicomedy. Like much Roman comedy, it is
probably translated from an earlier Greek original, most commonly held to be
Philemon's Long Night, or Rhinthon's Amphitryon, both now lost.
Types of Characters
Characters may be classified by various criteria:
Antagonist or villain
Anti-hero
Foil
Hero
Main character
Minor character
Protagonist
Round vs. Flat Characters
In his book, Aspects of the Novel, E. M. Forster defined two basic types of characters,
their qualities, functions, and importance for the development of the novel: flat
characters and round characters. Flat characters are two-dimensional in that they are
relatively uncomplicated and do not change throughout the course of a work. By
contrast, round characters are complex and undergo development, sometimes
sufficiently to surprise the reader.
Twelve Basic Principles of Animation
The twelve basic principles of animation are a set of principles introduced by
the Disney animators Ollie Johnston and Frank Thomas in their 1981 book, The Illusion
of Life: Disney Animation. Johnston and Thomas in turn based their book on the work
of the leading Disney animators from the 1930s onwards, and their effort to produce
more realistic animations. The main purpose of the principles was to produce an
illusion of characters adhering to the basic laws of physics, but they also dealt with
more abstract issues, such as emotional timing and character appeal.
The book and its principles have become generally adopted, and have been referred
to as the "Bible of the industry." In 1999 the book was voted number one of the "best
animation books of all time" in an online poll. Though originally intended to apply to
traditional, hand-drawn animation, the principles still have great relevance for today's
more prevalent computer animation.
The twelve principles are discussed below:
1. Squash and Stretch: The most important principle is "squash and stretch", the
purpose of which is to give a sense of weight and flexibility to drawn objects. It
can be applied to simple objects, like a bouncing ball, or more complex
constructions, like the musculature of a human face.
Figure: Animated sequence of a racehorse galloping – the horse's body demonstrates squash and stretch in its natural musculature.
Taken to an extreme point, a figure stretched or squashed to an exaggerated
degree can have a comical effect. In realistic animation, however, the most
important aspect of this principle is the fact that an object's volume does not
change when squashed or stretched. If the length of a ball is stretched vertically,
its width (in three dimensions, also its depth) needs to contract correspondingly
horizontally.
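As a small worked example of the constant-volume rule (our own illustration, not from the book): if a ball's height is scaled by a factor s, its width and depth must each be scaled by 1/√s so that the product of the three scale factors stays 1.

```typescript
// Illustrative helper: stretch a ball vertically by factor s while
// preserving its volume.
function squashStretchScale(s: number): { x: number; y: number; z: number } {
  const lateral = 1 / Math.sqrt(s); // contract sideways as height grows
  return { x: lateral, y: s, z: lateral }; // x * y * z === 1
}

// A ball stretched to 1.5x its height narrows to about 0.82x its width/depth:
console.log(squashStretchScale(1.5)); // { x: 0.816…, y: 1.5, z: 0.816… }
```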
2. Anticipation: Anticipation is used to prepare the audience for an action, and to make the action appear more realistic. A dancer jumping off the floor has to bend his knees first, a golfer making a swing has to swing the club back first, and a swimmer on the starting block crouches down, preparing to dive the moment the horn sounds. The technique can also be used for less physical
actions, such as a character looking off-screen to anticipate someone's arrival, or
attention focusing on an object that a character is about to pick up.
Figure: A baseball player making a pitch prepares for the action by moving his arm back.
For special effect, anticipation can also be omitted in cases where it is expected.
The resulting sense of anticlimax will produce a feeling of surprise in the viewer,
and can often add comedy to a scene. This is often referred to as a 'surprise gag'.
3. Staging: This principle is akin to staging as it is known in theatre and film. Its
purpose is to direct the audience's attention, and make it clear what is of greatest
importance in a scene; what is happening, and what is about to happen. Johnston
and Thomas defined it as "the presentation of any idea so that it is completely and
unmistakably clear", whether that idea is an action, a personality, an expression or
a mood. This can be done by various means, such as the placement of a character
in the frame, the use of light and shadow, and the angle and position of the
camera. The essence of this principle is keeping focus on what is relevant, and
avoiding unnecessary detail.
4. Straight Ahead Action and Pose to Pose: These are two different approaches to
the actual drawing process. "Straight ahead action" means drawing out a scene
frame by frame from beginning to end, while "pose to pose" involves starting with
drawing a few, key frames, and then filling in the intervals later. "Straight ahead
action" creates a more fluid, dynamic illusion of movement, and is better for
producing realistic action sequences. On the other hand, it is hard to maintain
proportions, and to create exact, convincing poses along the way. "Pose to pose"
works better for dramatic or emotional scenes, where composition and relation to
the surroundings are of greater importance. A combination of the two techniques
is often used.
Computer animation removes the problems of proportion related to "straight
ahead action" drawing; however, "pose to pose" is still used for computer
animation, because of the advantages it brings in composition. The use of
computers facilitates this method, as computers can fill in the missing sequences
in between poses automatically. It is, however, still important to oversee this
process, and apply the other principles discussed.
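The sketch below illustrates the inbetweening just described (our own simplified example, with made-up joint names, not production code): given two key poses stored as joint angles, the computer generates any intermediate frame by linear interpolation.

```typescript
type Pose = Record<string, number>; // joint name -> angle in degrees

// Illustrative: fill in an in-between frame at parameter t
// (0 = first key pose, 1 = second key pose).
function inbetween(a: Pose, b: Pose, t: number): Pose {
  const out: Pose = {};
  for (const joint of Object.keys(a)) {
    out[joint] = a[joint] + (b[joint] - a[joint]) * t;
  }
  return out;
}

const keyA: Pose = { elbow: 10, knee: 80 };
const keyB: Pose = { elbow: 90, knee: 20 };
console.log(inbetween(keyA, keyB, 0.5)); // { elbow: 50, knee: 50 }
```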
5. Follow Through and Overlapping Action: These closely related techniques help
render movement more realistic, and give the impression that characters follow
the laws of physics. "Follow through" means that separate parts of a body will
continue moving after the character has stopped. "Overlapping action" is the
tendency for parts of the body to move at different rates (an arm will move on
different timing from the head, and so on). A third technique is "drag", where a
character starts to move and parts of him take a few frames to catch up. These
parts can be inanimate objects like clothing or the antenna on a car, or parts of the
body, such as arms or hair. On the human body, the torso is the core, with arms,
legs, head, and hair as appendages that normally follow the torso's movement. Body
parts with much tissue, such as large stomachs and breasts, or the loose skin on a
dog, are more prone to independent movement than bonier body parts. Again,
exaggerated use of the technique can produce a comical effect, while more
realistic animation must time the actions exactly, to produce a convincing result.
Thomas and Johnston also developed the principle of the "moving hold".
A character not in movement can be rendered absolutely still; this is often done,
particularly to draw attention to the main action. According to Thomas and
Johnston, however, this gave a dull and lifeless result, and should be avoided.
Even characters sitting still can display some sort of movement, such as the torso
moving in and out with breathing.
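A minimal sketch of "drag" (our own illustration, with invented numbers): an appendage closes only a fraction of the gap to the body each frame, so it lags behind when the body starts moving and keeps drifting after the body stops, which is the follow-through effect described above.

```typescript
// Illustrative numbers throughout.
let bodyX = 0;
let hairX = 0;
const followRate = 0.2; // fraction of the remaining gap closed per frame

for (let frame = 0; frame < 10; frame++) {
  if (frame < 5) bodyX += 10; // the body moves for five frames, then stops
  hairX += (bodyX - hairX) * followRate; // the hair catches up gradually
  console.log(frame, bodyX, hairX.toFixed(1));
}
// The hair is still moving in frames 5-9, after the body has stopped.
```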
6. Slow In and Slow Out: The movement of the human body, and most other
objects, needs time to accelerate and slow down. For this reason, an animation
looks more realistic if it has more frames near the beginning and end of a
movement, and fewer in the middle. This principle goes for characters moving
between two extreme poses, such as sitting down and standing up, but also for
inanimate, moving objects, like the bouncing ball in the above illustration.
Figure: Follow through/overlapping action – as the horse runs, its mane and tail follow the movement of the body.
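One way to see slow in and slow out numerically (an illustrative sketch of ours, not from the book) is to space frames with an ease-in/ease-out curve such as smoothstep: successive positions change very little near the start and end of the move, and a lot in the middle.

```typescript
// Illustrative: smoothstep easing gives a slow start, fast middle, slow end.
function easeInOut(t: number): number {
  return t * t * (3 - 2 * t);
}

const frames = 9;
for (let i = 0; i < frames; i++) {
  const t = i / (frames - 1); // normalized time, 0..1
  console.log(i, (100 * easeInOut(t)).toFixed(1)); // position along the move
}
// The printed positions cluster near 0 and 100, with big jumps in the middle.
```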
7. Arcs: Most human and animal actions occur along an arched trajectory, and
animation should reproduce these movements for greater realism. This can apply
to a limb moving by rotating a joint, or a thrown object moving along a parabolic
trajectory. The exception is mechanical movement, which typically moves in
straight lines.
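A toy example of animating along an arc (invented velocities, in screen coordinates where y increases downward): changing the vertical velocity a little each frame traces the parabolic trajectory of a thrown object, rather than a straight line.

```typescript
// Illustrative numbers only.
const vx = 4;      // horizontal velocity, units per frame
let vy = -10;      // initial vertical velocity (negative = upward on screen)
const gravity = 1; // per-frame change in vertical velocity

let x = 0;
let y = 100;
for (let frame = 0; frame < 20; frame++) {
  x += vx;
  y += vy;
  vy += gravity; // this per-frame velocity change bends the path into an arc
  console.log(frame, x, y);
}
```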
8. Secondary Action: Adding secondary actions to the main action gives a scene
more life, and can help to support the main action. A person walking can
simultaneously swing his arms or keep them in his pockets, he can speak or
whistle, or he can express emotions through facial expressions. The important
thing about secondary actions is that they emphasize, rather than take attention
away from the main action. If the latter is the case, those actions are better left out.
In the case of facial expressions, during a dramatic movement these will often go
unnoticed. In these cases it is better to include them at the beginning and the end
of the movement, rather than during.
9. Timing: Timing in reality refers to two different concepts: physical timing and
theatrical timing. It is essential both to the physical realism, as well as to the
storytelling of the animation, that the timing is right. On a purely physical level,
correct timing makes objects appear to abide by the laws of physics; for instance, an object's weight decides how it reacts to an impetus, like a push. Theatrical
timing is of a less technical nature, and is developed mostly through experience. It
can be pure comic timing, or it can be used to convey deep emotions. It can also
be a device to communicate aspects of a character's personality.
10. Exaggeration: Exaggeration is an effect especially useful for animation, as perfect
imitation of reality can look static and dull in cartoons. The level of exaggeration
depends on whether one seeks realism or a particular style, like a caricature or the
style of an artist. The classical definition of exaggeration, employed by Disney,
was to remain true to reality, just presenting it in a wilder, more extreme form.
Other forms of exaggeration can involve the supernatural or surreal, alterations in
the physical features of a character, or elements in the storyline itself. It is
important to employ a certain level of restraint when using exaggeration; if a
scene contains several elements, there should be a balance in how those elements
are exaggerated in relation to each other, to avoid confusing or overawing the
viewer.
11. Solid Drawing: The principle of solid – or good – drawing really means that the
same principles apply to an animator as to an academic artist. The animator needs
to be a skilled draughtsman and has to understand the basics of anatomy,
composition, weight, balance, light and shadow etc. For the classical animator,
this involved taking art classes and doing sketches from life. One thing in
particular that Johnston and Thomas warned against was creating "twins":
characters whose left and right sides mirrored each other, and looked lifeless.
Modern-day computer animators in theory do not need to draw at all, yet their
work can still benefit greatly from a basic understanding of these principles.
12. Appeal: Appeal in a cartoon character corresponds to what would be called
charisma in an actor. A character who is appealing is not necessarily
sympathetic – villains or monsters can also be appealing – the important thing is
that the viewer feels the character is real and interesting. There are several tricks
for making a character connect better with the audience; for likable characters a
symmetrical or particularly baby-like face tends to be effective.
Clay Modeling
Modeling clay is a group of malleable substances used in building and
sculpting. The material compositions and production processes vary considerably.
Clay Modeling Techniques
Making figures out of clay is a relaxing and rewarding hobby. Don't worry about
modeling the figures for exactness. Instead, concentrate on enjoying yourself.
Ground Rules
If you are making a model from clay with the goal of having it fired, you must remember a few rules. Before beginning your project, the clay must be wedged. This
means that you must press the clay like dough, pressing out all the air bubbles. If
there are any air bubbles or hollow enclosed parts of your project, it will explode in
the kiln.
In order to attach one clay part to another, you should use a method called
"scoring." Score the clay by using a needle tool or a modeling tool to make hatch
marks in the two pieces of clay. Make the hatch marks only where the two pieces will
be touching each other. Next, wet the scored marks with water or slip (which is
water with clay suspended in the mixture), and then attach the two pieces to each
other. This is scoring. Scoring is necessary because without it, the pieces will likely
crack and fall apart from each other either when they dry or in the kiln.
The final creation must be no more than one to one and a half inches thick. It
must be completely dry before firing. When the clay has completely dried, it will be
hardened and lighter in color, and it will be at room temperature. The clay will take anywhere from two days to several more to dry, depending on its size.
Slabs, Coils and Pinch Pots
If you are not working at a potter's wheel, pots, containers and structures in clay are often made in one of three ways: with slabs, coils or pinch pots.
Pinch pots are possibly the easiest structure to make in clay. Pinch pots are frequently
the first project taught to children in pottery classes. Start by rolling the clay into a
ball. Hold the ball in one hand, and press the thumb of your other hand into the
ball, until it is midway through (or a little farther). Now, hold the ball with your
thumb inside and pinch the wall of the ball between your free fingers and your thumb.
Rotate the ball 30 degrees on your thumb, then pinch again. Rotate, and then pinch
again. You will notice the hole in the center of the ball growing wider. The ball will no
longer be a ball at all; instead, it will take on the shape of a cone or a bowl. This is
your pinch pot. Continue to mold the pot until the walls are a suitable thickness and
shape.
Pinch pots have a lot of uses in clay modeling. Try making the pinch pot into an
animal by attaching feet to the bottom and a head and tail to the sides. For a
somewhat more realistic-looking animal, turn the pinch pot upside down before
attaching the head and legs, so that the pinch pot becomes a body and loses its
functionality as a bowl.
If you need to make a bigger clay body for modeling a larger object, try making two
pinch pots and scoring them together to form a sphere with a hollow center. Remember
that the hollow inside will cause the object to explode in the kiln, so be sure to use a
needle tool to make a little hole in the body of the clay. One tiny hole is enough to
allow air to pass back and forth between the inside and outside of the clay body.
Another method for making structures in clay is with coils. Use your hands to roll a
coil (a long tube like a snake). Shape the coil into a circle, then begin to stack coils on
each other to form the walls of a tube or a pot. Remember to score the coils
together. Smooth the coils for the appearance of a uniform wall, or leave them in their
coil form for decorative purposes.
The final method is by creating slabs. Slabs of clay are rolled out with a rolling pin
and then cut into whatever shape desired with a clay knife or needle tool. These slabs
can be used to build the walls of a box or a clay house or a mug. Flat objects such as
signs and tiles can also be made this way. To make a perfectly round slab, trace
around something that is already round, like a cup or a coffee tin.
Organic Figures
Make an organic figure by first selecting a subject. You might want to work from something real as a model. Examine the textures of the figure, and consider how you will proceed before beginning. It is often best to break a figure down into its
most basic parts and assemble the figure piece by piece. As an example, if you were
making a fish, you would start by forming the body from a piece of clay. You would
form the tail from a slab or another hunk of clay and score the tail to the body. The
head of a fish is more or less a part of the body. You would use a needle tool to make
the eyes and give texture to the tail as necessary. You would form scales by pressing small-hole netting into the sides of the body. After working with the clay and
alternately smoothing and texturizing to your tastes, form the fins and attach to the
sides by scoring.
Once the figure is made, you would hollow out the figure from the bottom if
necessary and set it to dry.
Student Activity
Prepare a study note on the twelve basic principles of animation.
Summary
Early examples of attempts to capture the phenomenon of motion into a still drawing
can be found in paleolithic cave paintings, where animals are depicted with multiple
legs in superimposed positions, clearly attempting to convey the perception of
motion. Animation has been very popular in television commercials, both due to its
graphic appeal, and the humor it can provide.
Computer animation (or CGI animation) is the art of creating moving images with the
use of computers. It is a subfield of computer graphics and animation. Increasingly it
is created by means of 3D computer graphics, though 2D computer graphics are still
widely used for stylistic, low bandwidth, and faster real-time rendering needs.
Sometimes the target of the animation is the computer itself, but sometimes the target
is another medium, such as film. It is also referred to as CGI (computer-generated
imagery or computer-generated imaging), especially when used in films.
Keywords
Computer Animation: Computer animation (or CGI animation) is the art of creating
moving images with the use of computers. It is a subfield of computer graphics and
animation.
Squash and Stretch: The most important principle is "squash and stretch", the purpose
of which is to give a sense of weight and flexibility to drawn objects.
Zany Humor: Bugs Bunny, Daffy Duck of Warner Brothers, and the various films of
Tex Avery at MGM introduced this popular form of animated cartoons. It usually
involves surreal acts such as characters being crushed by massive boulders or going
over the edge of a cliff but floating in mid air for a few seconds.
Sophistication: As the medium matured, more sophistication was introduced, albeit
keeping the humorous touch. Classical music was often spoofed; a notable example is "What's Opera, Doc?" by Chuck Jones.
Limited Animation: In the 1950s, UPA and other studios refined the art aspects of
animation, by using extremely limited animation as a means of expression.
Anticipation: Anticipation is used to prepare the audience for an action, and to make
the action appear more realistic.
Staging: This principle is akin to staging as it is known in theatre and film.
Straight Ahead Action and Pose to Pose: "Straight ahead action" means drawing out a
scene frame by frame from beginning to end, while "pose to pose" involves starting
with drawing a few, key frames, and then filling in the intervals later.
Follow through and Overlapping Action: "Follow through" means that separate parts
of a body will continue moving after the character has stopped. "Overlapping action"
is the tendency for parts of the body to move at different rates (an arm will move on
different timing of the head and so on).
Slow In and Slow Out: The movement of the human body, and most other objects,
needs time to accelerate and slow down.
Arcs: Most human and animal actions occur along an arched trajectory, and animation
should reproduce these movements for greater realism.
Secondary Action: Adding secondary actions to the main action gives a scene more
life, and can help to support the main action.
Timing: Timing in reality refers to two different concepts: physical timing and
theatrical timing.
Exaggeration: Exaggeration is an effect especially useful for animation, as perfect
imitation of reality can look static and dull in cartoons.
Solid Drawing: The principle of solid – or good – drawing really means that the same
principles apply to an animator as to an academic artist.
Appeal: Appeal in a cartoon character corresponds to what would be called charisma
in an actor.
Review Questions
1. What are the concepts of animated cartoon?
2. Describe, in brief, the projections and feature films.
3. Discuss the history and development of animation.
Further Readings
Amid Amidi, Cartoon Modern: Style and Design in Fifties Animation.
Tim Jones, Barry J. Kelly, Allan Rosson, Foundation Flash Cartoon Animation.
Anne Hart, How to Turn Poems, Lyrics, & Folklore into Salable Children's…
Chris Webster, Animation: The Mechanics of Motion, Volume 1.
Kit Laybourne, The Animation Book: A Complete Guide to Animated Filmmaking.
Friedrich, C., and P. Eades (2002). "Graph Drawing in Motion." Journal of Graph Algorithms and Applications 6, no. 3: 353–370.
Heer, Jeffrey, and George G. Robertson (2007). "Animated Transitions in Statistical Data Graphics." IEEE Transactions on Visualization and Computer Graphics 13, no. 6: 1240–1247.
Hundhausen, Christopher D., Sarah A. Douglas, and John T. Stasko (2002). "A Meta-Study of Algorithm Visualization Effectiveness." Journal of Visual Languages & Computing.
Johnston, Ollie, and Frank Thomas (1987). The Illusion of Life. New York: Disney Editions.
Unit 2 Visualization of Different Views
Unit Structure
Introduction
Animation for Visualization
Principles of Animation
Animation in Scientific Visualization
Learning from Cartooning
Downsides of Animation
Animation's Exploration
Types of Animation
GapMinder and Animated Scatterplots
A Taxonomy of Animations
Staging Animations with DynaVis
Animation Pipeline
Summary
Keywords
Review Questions
Further Readings
Learning Objectives
At the conclusion of this unit, you will be able to:
Learn the use of animation for visualization
Know the use of Java, Flash, Silverlight and JavaScript in animation and active
visualization
Discuss the principles for animating visualizations
Introduction
In a visualization, animation might help a viewer work through the logic behind an
idea by showing the intermediate steps and transitions, or show how data collected
over time changes. A moving image might offer a fresh perspective, or invite users to
look deeper into the data presented. An animation might also smooth the change
between two views, even if there is no temporal component to the data. This unit discusses the principles for animating visualizations and attempts to work out a framework for designing effective animated visualizations. We'll begin by looking at some background material, and then move on to a discussion of one of the most well-known animated visualizations, Hans Rosling's GapMinder.
Animation for Visualization
Does animation help build richer, more vivid, and more understandable visualizations, or does it simply confuse things? The use of Java, Flash, Silverlight, and JavaScript on the Web has made it easier to distribute animated,
interactive visualizations. Many visualizers are beginning to think about how to make
their visualizations more compelling with animation. There are many good guides on
how to make static visualizations more effective, and many applications support
interactivity well. But animated visualization is still a new area; there is little
consensus on what makes for a good animation.
The Intuition behind Animation
The intuition behind animation seems clear enough: if a two-dimensional image is
good, then a moving image should be better. Movement is familiar: we are
accustomed to both moving through the real world and seeing things in it move
smoothly. All around us, items move, grow, and change color in ways that we
understand deeply and richly. As an example, let's take a look at Jonathan Harris and Sep Kamvar's We Feel Fine animated visualization. In this visualization, blog entries mentioning feelings are represented as bubbles. As users move between views, the bubbles are reorganized into histograms and other patterns. For example, one screen shows the relative distribution of blog entries from men and women, while another shows the relative distribution of moods in the blog entries. While the bubbles fly around the screen freely, there is always a constant number of them on the screen. This constancy helps reinforce the idea of a sample population being organized in different ways. Animation is also used to evoke emotion: the bubbles quiver with energy, with those that represent "happy" moving differently than bubbles that represent "sad."
Not all animations are successful, though. Far too many applications simply borrow
the worst of PowerPoint, flying data points across the screen with no clear purpose;
elements sweep and grow and rotate through meaningless spaces, and generally only
cause confusion.
Animation as a Technique
Animation can be a powerful technique when used appropriately, but it can be very
bad when used poorly. Some animations can enhance the visual appeal of the
visualization being presented, but may make exploration of the dataset more difficult;
other animated visualizations facilitate exploration. Another project discussed here explored animated scatterplots like GapMinder; this makes a fine launching point to discuss both successes and failures with animation. As we'll see, successful
animations can display a variety of types of transformations. The DynaVis project
helps illustrate how some of these transitions and transformations can work out.
Principles of Animation
At its core, any animation entails showing a viewer a series of images in rapid
succession. The viewer assembles these images, trying to build a coherent idea of
what occurred between them. The perceptual system notes the changes between
frames, so an animation can be understood as a series of visual changes between
frames. When there are a small number of changes, it is quite simple to understand
what has happened, and the viewer can trace the changes easily. When there are a
large number of changes, it gets more complex.
The Gestalt perceptual principle of common fate states that viewers will group large
numbers of objects together, labeling them all as a group, if they are traveling in the
same direction and at the same speed. Individual objects that take their own
trajectories will be seen as isolates, and will visually stand out. If all the items move in
different directions, however, observers have far more difficulty following them.
Perception researchers have shown that viewers have difficulty tracking more than
four or five objects independently – the eye gives up, tracking only a few objects and
labeling other movement as noise (Cavanagh and Alvarez 2005).
There have been several attempts to formulate principles for animation. Tversky,
Morrison, and Bétrancourt (2002) offer two general guidelines at the end of their
article: that visualizations should maintain congruence and apprehension. The former
suggests that the marks on the screen must relate to the underlying data at all times.
The latter suggests that the visualization should be easy to understand. The principles
we have articulated fit into these categories. (Other, related guidelines have been
suggested in Heer and Robertson's [2007] discussion of the DynaVis research, by
Zongker and Salesin [2003] in their discussion of animation for slideshow
presentations, and, with regard to graph drawing, by Friedrich and Eades [2002].)
Staging
It is disorienting to have too many things happen at once. If it is possible to change
just one thing, do so. On the other hand, sometimes multiple changes need to happen
at once; if so, they can be staged.
Compatibility
A visualization that will be disrupted by animation will be difficult for users to track.
For example, it is not disruptive to add another bar to a bar chart (the whole set can
slide over), and it may not be disruptive to add another series to a bar chart.
However, a squarified treemap is laid out greedily by size; growing a single rectangle
will require every rectangle to move to a new location and will look confusing.
Necessary Motion
In particular, avoid unnecessary motion. This implies that we want to ensure that
motion is significant – i.e., we should animate only what changes. In general, the
image should always be understandable. As the DynaVis user tests showed, excess
motion – even significant motion – can be confusing.
Meaningful Motion
The coordinate spaces and types of motion should remain meaningful. This also
entails two points discussed earlier: preserve valid mappings and maintain the
invariant. Verifying that you've adhered to these principles can help you figure out
whether an animation is headed in the right direction.
Animation in Scientific Visualization
Attendees at the annual IEEE VisWeek conference – the research summit for
visualization – are divided into two groups: information visualizers and scientific
visualizers. The two groups give different talks, sit in different rooms, and sometimes
sit at different tables at meals. Watching the talks, one quickly notices that roughly
half of the papers in the scientific visualization room feature animation, while almost
no papers in the information visualization room do. You could say that the difference
between the groups is that scientific visualizers are people who understand what the
x-, y-, and z-axes actually mean: they are very good at picturing the dimensions of an
image and understand the meaning of depths and distances. The dynamic processes
they often represent – wind blowing over an airplane wing, hurricanes sweeping
across maps, blood flowing through veins – also involve an additional dimension:
that of time. As it would be difficult to squeeze its representation into any of the other
three dimensions, animation is an attractive method for displaying such processes.
In contrast, data visualization is less straightforward. Information visualizers usually
work with abstract data spaces, where the axes do not correspond to the real world
(if they mean anything at all). Viewers need to get acclimated to the dimensions they
can see, and learn how to interpret them. Consequently, there are comparatively few
examples of animation published in the information visualization community.
Learning from Cartooning
Animation, of course, appears popularly in places outside of visualizations. Movies
and cartoons depend on some of the same physical principles as computer animation,
so several people have asked whether cartooning techniques might bring useful
insights to the creation of animated visualizations. As early as 1946, the Belgian
psychologist Albert Michotte noted the "perception of causality" (Michotte 1963). It is
easy to believe that the movement in an animation shows intent: that this point is
chasing another across the screen (rather than moving in an equivalent trajectory one
second behind it), that this ball hit another (rather than „this dot stopped at point A,
and this other dot moved from A to B‰), and so on. Thus, we can ascribe agency and
causality where none really exists.
In cartoons, of course, we wish to communicate causality. Traditional cartoonists have
described how they endow drawn shapes with the "illusion of life" (Johnston and
Thomas 1987) in order to convey emotion, and several rounds of research papers
(Lasseter 1987; Chang and Ungar 1993) have tried to see how to distill those ideas for computer animation and visualization.
Traditional cartoonists use a barrage of techniques that are not completely true to life.
Squash and stretch, for instance, distorts objects during movement to draw the eye
toward the direction of motion: objects might stretch when they fly at their fastest,
and squashing them conveys a notion of stopping, gathering energy, or changing
direction.
Moving items along arcs implies a more natural motion; motion along a straight line
seems to have intent. Before objects begin moving, they anticipate their upcoming
motion; they conclude with a follow-through. Ease-in, ease-out is a technique of
timing animations: animations start slowly to emphasize direction, accelerate through
the middle, and slow down again at the end. Complex acts are staged to draw
attention to individual parts one at a time.
Visualization researchers have adapted these techniques with differing degrees of
enthusiasm and success – for example, the Information Visualizer framework (Card,
Robertson, and Mackinlay 1991), an early 3D animated framework, integrated several
of these principles, including anticipation, arcs, and follow-through. On the other
hand, some elements of this list seem distinctly inappropriate. For instance, squashing
or stretching a data point distorts it, changing the nature of the visualization; thus, we
can no longer describe the visualization as maintaining the consistent rule "height maps to this, width maps to that" at each frame of the animation. In their research on
slideshows, Zongker and Salesin (2003) warn that many animation techniques can be
distracting or deceptive, suggesting causality where none might exist. Also, they are
often meant to give an illusion of emotion, which may be quite inappropriate for data
visualization. (An exception would be We Feel Fine, in which the motion is supposed
to convey emotion and uses these techniques effectively to do so.)
Downsides of Animation
Animation has been less successful for data visualization than for scientific
visualization. Two meta-studies have looked at different types of animations –
process animations and algorithm visualizations – and found that both classes have
spotty track records when it comes to helping students learn more about complex
processes. The psychologist Barbara Tversky found, somewhat to her dismay, that
animation did not seem to be helpful for process visualization (i.e., visualizations that
show how to use a tool or how a technique works). Her article, "Animation: Can It Facilitate?" (Tversky, Morrison, and Bétrancourt 2002), reviews nearly 100 studies of
animation and visualization. In no study was animation found to outperform rich
static diagrams. It did beat out textual representations, though, and simple
representations that simply showed start and end state without transitions.
Algorithm animation is in many ways similar to process visualization: an algorithm
can be illustrated by showing the steps that it takes. Some sort algorithms, for
example, are very amenable to animation: an array of values can be drawn as a
sequence of bars, so the sort operations move bars around. These animations can
easily show the differences between, say, a bubble sort and an insertion sort.
Christopher Hundhausen, Sarah Douglas, and John Stasko (2002) tried to understand
the effectiveness of algorithm visualization in the classroom, but half of the controlled
studies they examined found that animation did not help students understand
algorithms. Interestingly, the strongest factor predicting success was the theory
behind the animation. Visualization was most helpful when accompanied by
constructivist theories – that is, when students manipulated code or algorithms and
watched a visualization that illustrated their own work, or when students were asked
questions and tried to use the visualization to answer them. In contrast, animations
were ineffective at transferring knowledge; passively watching an animation was not
more effective than other forms of teaching.
Animation’s Exploration
Exploration with Animation is Slower
We found that when users explored the data on their own, they would often play
through the animation dozens of times, checking to see which country would be the
correct answer to the question. In contrast, those who viewed a presentation and
could not control the animation on their own answered far more rapidly: they were
forced to choose an answer and go with it. Thus, animation in exploration was the
slowest of the conditions, while animation in presentation was the fastest.
Interestingly, this might shed light on why the process animations reviewed by Tversky et al. found so little success. In our tests, users clearly wanted to be able to move both
forward and backward through time; perhaps this is true of process animations, too.
More effort may be required to get the same information from an animation as
opposed to a series of static images, because you have to replay the entire thing rather
than just jumping directly to the parts you want to see.
Animation is Less Accurate
Despite the extra time the users spent with the animation, the users who were shown
the static visualizations were always more accurate at answering the questions. That
is, the animation appeared to detract from the usersÊ ability to correctly answer
questions. Their accuracy was not correlated with speed: the extra time they spent in
exploration did not seem to drive better outcomes. This seems like bad news for
animation: it was slower and less accurate at communicating the information. On the
other hand, we found the animation to be more engaging and emotionally powerful:
one pilot subject saw life expectancy in a war-torn country plummet by 30 years and
gasped audibly. Generally, users preferred to work with the animation, finding it
more enjoyable and exciting than the other modes.
They also found it more frustrating, though: "Where did that dot go?" asked one
angrily, as a data point that had been steadily rising suddenly dropped. These
findings suggest that Rosling's talk is doing something different from what our users
experienced. Critically, Rosling knows what the answer is: he has worked through the
data, knows the rhetorical point he wishes to make, and is bringing the viewers along.
He runs much of his presentation on the same set of axes, so the viewers don't get
disoriented. His data is reasonably simple: few of the countries he highlights make
major reversals in their trends, and when he animates many countries at once, they
stay in a fairly close pack, traveling in the same direction. He chooses his axes so the
countries move in consistent directions, allowing users to track origins and goals
easily. He takes advantage of the Gestalt principle of common fate to group them, and
he narrates their transitions for maximum clarity.
In contrast, our users had to contend with short sessions, had to track countries that
suffered abrupt reversals, and did not have a narrator to explain what they were about
to see; rather than learning the answer from the narrator, they had to discover it
themselves. This suggests to us that what we were asking our users to do was very
different from what Rosling is doing – so different, in fact, that it deserves its own section.
Presentation is not Exploration
An analyst sitting before a spreadsheet does not know what the data will show, and
needs to play with it from a couple of different angles, looking for correlations,
connections, and ideas that might be concealed in the data. The process is one of
foraging – it rewards rapidly reviewing a given chart or view to see whether there is
something interesting to investigate, followed by moving on with a new filter or a
different image.
In contrast, presenters are experts in their own data. They have already cleaned errors
from the dataset, perhaps removing a couple of outliers or highlighting data points
that support the core ideas they want to communicate. They have picked axes and a
time range that illustrate their point well, and they can guide the viewers' perception
of the data. Most importantly, they are less likely to need to scrub back and forth, as
we saw users doing with our animation, in order to check whether they have
overlooked a previous point. In these conditions, animation makes a lot of sense: it
allows the presenter to explain a point vividly and dramatically.
The experience of exploration is different from the experience of presentation. It is
easy to forget this, because many of our tools mix the two together. That is, many
packages offer ways to make a chart look glossy and ready for presentation, and those
tools are not clearly separated from the tools for making the chart legible and ready
for analysis. In Microsoft Excel, for example, the same menu that controls whether my
axis has a log scale also helps me decide whether to finish my bar chart with a glossy
color. The former of these choices is critical to exploration; the latter is primarily
useful for presentation. After finishing analyzing data in Excel, copy the chart directly
into PowerPoint and show the result. As a result of this seamlessness, few people who
use this popular software have seriously discussed the important distinctions between
presentation and exploration.
Table 2.1 summarizes major differences between the needs of exploration and
presentation.
Table 2.1: Differentiating Exploration from Presentation
These two perspectives are not completely disjoint, of course. Many interactive web
applications allow users to explore a few dimensions, while still not exposing raw
data. The tension between presentation and exploration suggests that designers need
to consider the purpose of their visualizations. There are design trade-offs, not only
for animation, but more generally.
Types of Animation
Some forms of animation are most suited to presentation, while others work well for
exploration. In this section, we'll discuss a hierarchy of different types of
transformations, ranging from changing the view on a visualization to changing the
axes on which the visualization is plotted to changing the data of the visualization. Let's begin with an example of a system that needs to manage two different types of changes.
Dynamic Data and Animated Recentering
In 2001, peer-to-peer file sharing was becoming an exciting topic. The Gnutella system was one of the first large-scale networks, and I was in a group of students who thought it would make a good subject of study. Gnutella was a little different from other peer-to-peer systems. The earlier Napster had kept a detailed index of everything on the network; BitTorrent would later skip indexing entirely. Gnutella passed search requests between peers, bouncing around the questions and waiting for replies. When I used a peer-to-peer search to track down a song, how many machines were really getting checked? How large a network could my own client see? We instrumented a Gnutella client for visualization, and then started representing the network.
We rapidly realized a couple of things: first, new nodes were constantly appearing on the network; and second, knowing where they were located was really interesting. The appearance of new nodes meant that we wanted to be able to change
the visualization stably. There would always be new data pouring into the system,
and it was important that users not be disoriented by changes taking place in the
visualization as new data came in. On the other hand, we did not want to pause, add
data, and redraw: we wanted a system where new data would simply add itself to the
diagram unobtrusively.
Because the Gnutella network used a peer-to-peer discovery protocol, it was often
interesting to focus on a single node and its neighbors. Is this node connected to a
central "supernode"? Is it conveying many requests? We wanted to be able to focus
on any single node and its neighbors, and to be able to easily estimate the number of
hops between nodes. This called for changing the viewpoint without changing the
remainder of the layout.
Our tool was entitled GnuTellaVision, or GTV (Yee et al. 2001). We addressed these
two needs with two different animation techniques. We based the visualization on a
radial layout, both to reflect the way that data was changing – growing outward as we
discovered more connections – and in order to facilitate estimation of the number of
hops between the central node and others. A radial layout has the virtues of a
well-defined center point and a series of layers that grow outward. As we discovered
new nodes, we added them to rings corresponding to the number of hops from the
starting node. When a new node arrived, we would simply move its neighbors over
by a small amount (most nodes in the visualization do not move much). As the
visualization ran, it updated with new data, animating constantly (Figure 2.1).
Source: http://research.microsoft.com
Figure 2.1: GTV Before (Left) and After (Right) Several New Nodes are
Discovered on the Network – as nodes yield more Information, their
Size and Color can also Change
When a user wanted to examine a node, GTV recentered on the selection. In our first
design, it did so in the most straightforward way possible: we computed a new radial
layout and then moved nodes linearly from their previous locations to the new ones.
This was very confusing, because nodes would cross trajectories getting from their old
locations to the new ones. The first fix was to have nodes travel along polar coordinate
paths, and always clockwise. Thus, the nodes remained in the same space as the
visualization was drawn, and moved smoothly to their new locations (Figure 2.2).
Because GTV is oriented toward examining nodes that may be new to the user, and is
constantly discovering novel information, it was important that this animation
facilitate exploration by helping users track the node paths.
Figure 2.2: Interpolation in Rectangular Coordinates (Top) Causes
Nodes to Cross through each others’ Paths; Interpolation in Polar
Coordinates (Bottom) makes for Smooth Motion
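A reconstruction of the idea in code (our own sketch, not the GnuTellaVision source): rather than interpolating (x, y) positions directly, convert both endpoints to polar coordinates around the center, interpolate radius and angle, and convert back. Forcing the angle to change in only one direction gives the one-way sweep described above.

```typescript
type Point = { x: number; y: number };

// Illustrative reconstruction: interpolate between two node positions in
// polar coordinates, for t in [0, 1].
function polarLerp(from: Point, to: Point, t: number): Point {
  const r0 = Math.hypot(from.x, from.y);
  const r1 = Math.hypot(to.x, to.y);
  const a0 = Math.atan2(from.y, from.x);
  let a1 = Math.atan2(to.y, to.x);
  if (a1 < a0) a1 += 2 * Math.PI; // sweep in one direction only
  const r = r0 + (r1 - r0) * t;
  const a = a0 + (a1 - a0) * t;
  return { x: r * Math.cos(a), y: r * Math.sin(a) };
}
```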
A radial layout has several degrees of freedom: nodes can appear in any order around
the radius, and any node can be at the top. When we did not constrain these degrees
of freedom, nodes would sometimes travel from the bottom of the screen to the top.
We wanted to ensure that nodes moved as little as possible, so we added a pair of
constraints: nodes maintained, as much as they could, both the same relative
orientation and order. Maintaining relative orientation means that the relative
position of the edge from the old center to the new center is maintained. Maintaining
relative order means that nodes' neighbors will remain in the same order around the
rings. Both of these are illustrated in Figure 2.3.
Figure 2.3: Animated Recentering
Last, we adapted the ease-in, ease-out motion from cartooning in order to help users
see how the motion was about to happen. This section demonstrated some useful
principles that are worth articulating:
Compatibility
Choose a visualization that is compatible with animation. In GTV, the radial layout
can be modified easily; new nodes can be located on the graph to minimize changes,
and – like many tree representations – it is possible to recenter on different nodes.
Coordinate Motion
Motion should occur in a meaningful coordinate space of the visualization. We want to
help the users stay oriented within the visualization during the animation, so they can
better predict and follow motion. In GTV, for instance, transforming through
rectangular coordinates would be unpredictable and confusing; the radial coordinates,
in contrast, mean that users can track the transition and the visualization retains its
meaning.
Meaningful Motion
Although animation is about moving items, unnecessary motion can be very confusing.
In general, it is better to have fewer things move than more in a given transition.
Constraining the degrees of freedom of the GTV animation allows the visualization to
change as little as possible by keeping things in roughly the same position.
GapMinder and Animated Scatterplots
One recent example of successful animated visualization comes from Hans Rosling's
GapMinder (http://www.gapminder.org). Rosling is a professor of Global Health
from Sweden, and his talk at the February 2006 Technology, Entertainment, Design
(TED) conference riveted first a live audience, then many more online. He collected
public health statistics from international sources and, in his brief talk, plotted them
on a scatterplot. In the visualization, individual points represent countries, with
x and y values representing statistics such as life expectancy and average number of
children and each pointÊs area being proportionate to the population of the country it
represents. Rosling first shows single frames – the statistics of the countries in a single
year – before starting to trace their progress through time, animating between the
images with yearly steps in between. Figure 2.4 shows four frames of a GapMinder-like animation. On the x-axis is the life expectancy at birth; on the y-axis is the infant
mortality rate. The size of bubbles is proportionate to the population. Color-coding is
per continent; the largest two dots are China and India.
Rosling's animations are compelling: he narrates the dots' movement, describing their relative progress. China puts public health programs in place and its dot floats upward, followed by other countries trying the same strategy. Another country's
economy booms, and its dot starts to move rapidly rightward. Rosling uses this
animation to make powerful points about both our preconceptions about public
health problems and the differences between the first and third world, and the
animation helps viewers follow the points he is making.
Source: http://research.microsoft.com
Figure 2.4: A GapMinder-like Visualization showing Information about a set of 75 Countries in 1975, 1985, 1995, and 2000; this Chart Plots Life Expectancy (x axis) against Infant Mortality (y axis) – Countries at the Top-left have a High Infant Mortality and a Short Life Expectancy

Too Many Dots?
The perceptual psychology research mentioned earlier showed that people have
trouble tracking more than four moving points at a time. In his presentation, Rosling
is able to guide the audience, showing them where to look, and his narration helps
them see which points to focus on. He describes the progress that a nation is making
with the assistance of a long pointer stick; it is quite clear where to look. This reduces
confusion.
It also helps that many of the two-dimensional scatterplots he uses have
unambiguously "good" and "bad" directions: it is good for a country to move toward
a higher GDP and a longer life expectancy (i.e., to go up and to the right), and bad to
move in the opposite direction (down and to the left).
With Rosling's sure hand guiding the watcher's gaze, the visualization is very
effective. But if a temporal scatterplot were incorporated into a standard spreadsheet,
would it be useful for people who were trying to learn about the data?
Testing Animated Scatterplots
At Microsoft Research, we became curious about whether these techniques could
work for people who were not familiar with the data. We reimplemented a
GapMinder-like animation as a base case, plotting points at appropriate (x, y)
locations and interpolating them smoothly by year. We then considered three
alternative static visualizations that contained the same amount of information as the
animation. First, of course, we could simply take individual frames (as in Figure 2.4).
Even in our earliest sketches, however, we realized this was a bad idea: it was too
difficult to trace the movement of points between frames. The ability to follow the
general trajectories of the various countries and to compare them is a critical part of
GapMinder; we wanted users to have a notion of continuity, of points moving from
one place to another, and the individual frames simply were not helpful.
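To make the base case concrete, here is a minimal sketch of a GapMinder-like animation in Python with matplotlib: points are plotted at their (x, y) locations for each year and interpolated smoothly between adjacent years. The country names and values below are invented for illustration and are not the study's data.

```python
import matplotlib.pyplot as plt
import matplotlib.animation as animation

# Hypothetical data: {country: {year: (life_expectancy, infant_mortality, population)}}
data = {
    "A": {1975: (55, 120, 50e6), 1985: (60, 90, 65e6), 1995: (66, 60, 80e6)},
    "B": {1975: (70, 20, 8e6),   1985: (73, 12, 9e6),  1995: (76, 7, 10e6)},
}
years = [1975, 1985, 1995]
steps_per_span = 30  # in-between frames for each pair of adjacent years

def lerp(a, b, t):
    return a + (b - a) * t

fig, ax = plt.subplots()
scat = ax.scatter([], [], s=[])
ax.set_xlim(40, 90); ax.set_ylim(0, 150)
ax.set_xlabel("Life expectancy"); ax.set_ylabel("Infant mortality")

def frame(i):
    span, step = divmod(i, steps_per_span)
    t = step / steps_per_span
    y0, y1 = years[span], years[span + 1]
    xs, ys, sizes = [], [], []
    for series in data.values():
        (x0, m0, p0), (x1, m1, p1) = series[y0], series[y1]
        xs.append(lerp(x0, x1, t))
        ys.append(lerp(m0, m1, t))
        sizes.append(lerp(p0, p1, t) / 1e6)  # marker area tracks population
    scat.set_offsets(list(zip(xs, ys)))
    scat.set_sizes(sizes)
    return (scat,)

anim = animation.FuncAnimation(fig, frame,
                               frames=(len(years) - 1) * steps_per_span,
                               interval=50, blit=True)
plt.show()
```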
Source: http://research.microsoft.com
Figure 2.5: Tracks View in which each Country is Represented as a Series of Dots
that become more Opaque Over Time; Years are Connected with Faded Streaks
We therefore implemented two additional views, using the same set of countries and
the same axes as Figure 2.4, for the years 1975-2000. The first is a tracks view, which
shows all the paths overlaid on one another (Figure 2.5). The second is a small
multiples view, which draws each path independently on separate axes (Figure 2.6).
In the tracks view, we cue time with translucency; in the small multiples view, we
instead show time by changing the sizes of the dots.
We wanted to understand how well users performed with the animation, as
compared with these static representations. Users can set up their own scatterplots at
the GapMinder website, but would they be able to learn anything new from their
data? We chose 30 different combinations of (x, y) values based on public health and
demographic data from the United Nations, and presented users with fairly simple
questions such as "In this scatterplot, which country rises the most in GDP?" and "In
this scatterplot, which continent has the most countries with diminishing marriage
rates?" We recruited users who were familiar with scatterplots, and who dealt with
data in their daily work. Some subjects got to "explore" the data, and sat in front of a
computer answering questions on their own. Others got a "presentation," in which a
narrator showed them the visualization or played the animation. We measured both
time and accuracy as they then answered the questions.
The study's numerical results are detailed in Robertson et al. (2008). The major
conclusions, however, can be stated quite simply: animation is both slower and less
accurate at conveying the information than the other modalities.
Source: http://research.microsoft.com
Figure 2.6: Small Multiples view in which each Country is in its Own
Tiny Coordinate System: Dots Grow Larger to Indicate the Progression of Time
A Taxonomy of Animations
A number of changes might occur in a visualization. In the GapMinder example, the
changes are to the data; in the GTV example, both the data and the view change.
There are more types of transitions that one might wish to
make in visualization, though. The following is a list adapted from one assembled by
Heer and Robertson (2007). Each type of transition is independent; it should be
possible to change just the one element without changing any of the others. Many of
these are applicable to both presentation and exploration of data:
Change the View: Pan over or zoom in on a fixed image, such as a map or a large data
space.
Change the Charting Surface: On a plot, change the axes (e.g., change from linear to
log scale). On a map, change from, for example, a Mercator projection to a globe.
Filter the Data: Remove data points from the current view following a particular
selection criterion.
Reorder the Data: Change the order of points (e.g., alphabetize a series of columns).
Change the Representation: Change from a bar chart to a pie chart; change the layout
of a graph; change the colors of nodes.
Change the Data: Move data forward through a time step, modify the data, or change
the values portrayed (e.g., a bar chart might change from Profits to Losses). As
discussed earlier, moving data through a time step is likely to be more useful for
presentations.
These six types of transitions can describe most animations that might be made with
data visualizations. Process visualizations would have a somewhat different
taxonomy, as would scientific visualizations that convey flow (such as air over
wings).
Staging Animations with DynaVis
Two people exploring a dataset together on a single computer have a fundamental
problem: only one of them gets the mouse. While it is perfectly intuitive for one of
them to click "filter," the other user might not be able to track what has just
happened. This sits at an interesting place between exploration and presentation: one
of the major goals of the animation is to enable the second user to follow the leader by
knowing what change the leader has just invoked; however, the leader may not know
specifically what point he is about to make. Animation is plausibly a way to transition
between multiple visualizations, allowing a second person – or an audience – to keep
up. For the last several years, we have been experimenting with ways to show
transitions of data and representations of well-known charts, such as scatter plots, bar
charts, and even pie charts.
DynaVis, a framework for animated visualization, was our starting point. A summer
internship visit by Jeff Heer, now a professor at Stanford, gave us a chance to work
through a long list of possibilities. This discussion is outlined in more detail in his
paper (Heer and Robertson 2007).
In DynaVis, each bar, dot, or line is represented as an object in 3D space, so we can
move smoothly through all the transitions described in the preceding section. Many
transformations are fairly clear: to filter a point from a scatterplot, for instance, the
point just needs to fade away. There are several cases that are much more interesting
to work through, though: those in which the type of representation needs to change,
and those in which more than one change needs to happen at a time. When the
representation is being changed, we try to follow several basic principles.
Do One Thing at a Time
Ensure that the visualization does not entail making multiple simultaneous changes.
This might mean staging the visualization, to ensure that each successive step is
completed before the next one is started.
Preserve Valid Mappings
At any given time during a step, ensure that the visualization is a meaningful one that
represents a mapping from data to visualization. It would be invalid, for example, to
rename the bars of a bar chart: the fundamental mapping is that each bar represents
one x-axis value.
Figure 2.7 shows a first attempt at a transition from bar chart to pie chart. There are
some positive aspects to the transition. For example, the bars do not move all at once,
so the eye can follow movement fairly easily, and the bars maintain their identities
and their values across the animation. While there are some issues with occlusion as
the bars fly past each other, they move through a smooth trajectory so that it is
reasonable to predict where they will end up. Finally, the animation is well staged: all
the wedges are placed before they grow together into a full pie.
This visualization has a critical flaw, though. The length of the bar becomes the length
of the pie wedge, so longer bars become longer wedges. However, longer bars will
ultimately have to become fatter wedges in the final pie chart. That means that bars
are becoming both fat and long, or both skinny and short. This, in turn, means that the
visualization does not employ a constant rule (such as "number of pixels is
proportionate to data value").
Maintain the Invariant
While the previous rule referred to the relationship between data elements and the
marks on the display, this rule refers to the relationship of the data values to the
visualization. If the data values are not changing, the system should maintain those
invariant values throughout the visualization. For example, if each bar's height is
proportionate to the respective data point's value, the bars should remain the same
height during the animation.
Figure 2.8 illustrates both of these principles in a more successful bar chart to pie chart
animation. This chart shows a 1:1 correspondence between the drawn entity – the bar,
the curved line, or the pie slice – and the underlying data. This assignment never
changes: the bar furthest on the left ("A") becomes the leftmost pie slice (also "A").
The invariant is maintained by the lengths of the bars, which remain proportionate to
the data values. While we do not illustrate it here, we follow similar principles in
changing a bar chart into a line chart: the top-left corner of the bar represents the
value, so as the bar shrinks into a line, that data point will remain rooted at the
top-left corner of the bar.
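The invariant rule can be read as simple arithmetic: in the first stage, each bar of length L bends into an arc of the same length L (the arc's angle is L divided by the ring's radius), and only then does the ring fill out so that each wedge's angle becomes proportional to its value. The sketch below, with invented values, computes the start and end angles for both stages; it is an illustration of the principle, not DynaVis code.

```python
import math

# Hypothetical data values, one per bar.
values = [3.0, 1.0, 2.0]
RADIUS = 10.0

def stage_one_arcs(values, radius=RADIUS):
    """Bend each bar into an arc of the SAME length as the bar.

    Keeping arc length equal to bar length preserves the invariant
    ("pixels proportional to value") while the bars wrap into a ring.
    """
    arcs, start = [], 0.0
    for v in values:
        sweep = v / radius          # arc of length v at this radius
        arcs.append((start, start + sweep))
        start += sweep
    return arcs

def stage_two_pie(values):
    """Grow the arcs so the ring fills a full circle: each wedge's angle
    becomes proportional to its value, as in a finished pie chart."""
    total = sum(values)
    wedges, start = [], 0.0
    for v in values:
        sweep = 2 * math.pi * v / total
        wedges.append((start, start + sweep))
        start += sweep
    return wedges
```

An animation would then interpolate each wedge's (start, end) angles from the stage-one arcs to the stage-two wedges.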
Figure 2.7: Less Successful Bar Chart to Pie Chart Animation: Long Bars become
Long, Fat Wedges on the Pie; Short Bars become Short, Skinny Wedges;
then all Wedges Grow to Full Length
Figure 2.8: Better Bar Chart to Pie Chart Animation: The Lengths of the Bars are
Maintained as they are Brought into the Ring; the Ring then Fills to become a Pie
Another interesting case brings back the cartoon notion of staging. In GnuTellaVision
we were able to recenter in a single motion, but in DynaVis it often makes more sense
to break a transformation into two steps. For instance, in each of these examples, we
ensure that we change only one thing at a time:
To filter a dataset in a bar chart, we first remove bars we will not use, and then
close ranks around them.
To unfilter, we open space for the bars that will be added, and then grow the
bars up.
To grow or shrink a bar, such as when data changes, we may need to change the
axes. Imagine growing a bar chart from the values (1,2,3,4,5) to (1,2,10,4,5) – the
y-axis should certainly grow to accommodate the new value. If we grow the bar
first, it will extend off the screen; therefore, we must change the axis before
changing the bar.
When sorting a selection of bars, moving them all at once could cause every bar
to pass through the center at the same time. This is confusing: it is hard to figure
out which bar is which. By staggering the bars slightly, so that they start moving a
small amount of time apart, we found that the sort operation was much clearer
(see the sketch after this list).
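A staggered start is easy to express as a schedule. In the sketch below (an illustration, not DynaVis code), bar i begins moving a fixed `stagger` interval after bar i-1, so no two bars cross the center at exactly the same moment.

```python
def staggered_start_times(num_bars, stagger=0.05, duration=0.5):
    """Start each bar's move `stagger` seconds after the previous one,
    so the bars do not all pass through the center at the same time."""
    return [(i * stagger, i * stagger + duration) for i in range(num_bars)]

def bar_position(i, t, start_x, end_x, schedule):
    """Position of bar i at time t, easing linearly within its window."""
    t0, t1 = schedule[i]
    if t <= t0:
        return start_x
    if t >= t1:
        return end_x
    frac = (t - t0) / (t1 - t0)
    return start_x + (end_x - start_x) * frac

schedule = staggered_start_times(5)
print(bar_position(2, 0.3, 0.0, 100.0, schedule))  # bar 2, partway through its move
```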
Staging is not always appropriate, though. In Heer and Robertson's report on the
project (2007), they found that some staged animations are more challenging to
follow. In particular, when resizing segments of a donut or pie chart, it was difficult to
monitor the changes as the pie turned to accommodate the new sizes. DynaVis
attempted to stage this transition by extracting segments to either an external or an
internal ring, adjusting their sizes, and then collapsing them back into place. While
this made the changes much more visible, it also added a layer of potentially
confusing action.
Heer and Robertson collected both qualitative results – how much users liked the
animations – and quantitative ones – finding out which animations allowed users to
answer questions most accurately. They found that users were able to answer
questions about changes in values over time more easily with the animations than
without; furthermore, the animations that were staged but required only one
transition did substantially better than the animations that required many transitions.
Even with these caveats, though, it is clear that these sorts of dynamics could
potentially help users understand transitions much more easily: compared to a
presenter flipping through a series of charts, forcing the audience to reorient after
each slide, a DynaVis-like framework might allow users to remain oriented throughout
the presentation.
Animation Pipeline
The animation pipeline consists of:
1. Pre-production (Designing a character, creating expressions)
2. Production (Animation, Lip movement)
3. Post-production (Lip Synchronization, Sound Editing)
Pre-production (Designing a character, creating expressions)
A speaking cartoon character can have a lot more influence than a silent one: it can
make quips, tell jokes and reveal its inner thoughts through speech. You do not have
to create composite animations to make a character speak. Oftentimes, you need only
create a character whose mouth can open and close to create the illusion of animated
speech. In fact, designing a simple character is a good way to learn the basics of
animating speech without needless complication and frustration.
The Basic Frame of your Character
Create the design of your speaking character with uncomplicated shapes. Draw the
head with a small oval and the neck with a vertical line. Create the body with a raised
oval shape.
Simple details are added to the character
Draw the details on the character. Add hair with a curvy line across the top of the
head. Draw the eyes of your character with two small ovals and semicircles above
each oval.
Add the nose with a small semicircle below the eyes. Add the mouth with a rounded
line below the nose. Create glasses with two large circles over the eyes. Thicken the
neck with a parallel line on the left and right side of the neck guideline.
Color helps bring your character to life
Ink the lines you wish to keep with a black pen. Let the ink dry, then erase the pencil
lines. Color the character using art markers. Use any color scheme you wish. If you are
creating a more traditional character design, use beige or brown colors for the skin
and yellow, red, brown or black for the hair.
A cut out mouth is a simple way to create speaking animation
Sketch and color a mouth that is open. Cut this out using an X-ACTO knife. Place the
open mouth over the character to create the illusion of the mouth moving. Also cut
out two skin-colored circles just a bit bigger than the eyes. These will allow your
character to blink or wink while he speaks.
A series of pictures with a mouth open then closed will simulate talking
Animate your character design by taking pictures with a digital camera. Take a
picture of the character's face with the mouth closed. Place the open mouth over the
character and take another picture. Place the eyelid pieces on the character and take
away the open mouth. This will make sure the character has motion on his face while
speaking.
Upload the pictures into a program such as iMovie or Movie Magic. Every picture can
act as a frame. Use the open mouth frame followed by the closed mouth frame for
each consonant in a word. You can reuse the same pictures several times and string
them together on the program's timeline to make a sequence of talking animation.
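The open/closed pairing described above is mechanical enough to script. The sketch below builds the frame order for a word; the image file names are hypothetical stand-ins for the two photographs.

```python
# Hypothetical file names for the two photographs described above.
CLOSED = "mouth_closed.jpg"
OPEN = "mouth_open.jpg"

def talking_sequence(word):
    """Build a frame list: an open-mouth frame followed by a
    closed-mouth frame for each consonant in the word."""
    vowels = set("aeiou")
    frames = []
    for ch in word.lower():
        if ch.isalpha() and ch not in vowels:
            frames.extend([OPEN, CLOSED])
    return frames

print(talking_sequence("hello"))
# ['mouth_open.jpg', 'mouth_closed.jpg', ...] – one open/closed pair per consonant
```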
Production
To make an animated character talk, you need to create a lip sync. A lip sync is where
a voice is recorded to go over an animation. Lip syncs can be added to 2-D or 3-D
animations. You need to make sure the mouth movements match the words and that
the lip sync recording matches the movement.
2-D Animation
1. Draw a character onto the animation paper.
2. Start the lip sync sequence with the first mouth movement. To work out the mouth movement, look into a mirror and speak the word; copy that movement.
3. Draw the next frame (page) on the animation paper with the next mouth movement for the word. Keep drawing the mouth movements until the word is finished.
4. Repeat drawing the character to complete the animation lip sync sequence. This may require many frames depending on the word or sentence.
5. Record a voice speaking the character's line for the lip sync sequence.
6. Capture each frame by using a capture machine or scanner. A capture machine takes pictures of each frame and downloads them to an animation program to become a movie. This may not be the final animation software you use. Scan each frame if a capture machine is not available.
7. Open the 2-D animation software for the lip sync.
8. Import each frame or the movie into the software.
9. Import the recorded lip sync.
10. Align the voice to the drawn character. This may require several adjustments.
3-D Animation
1. Create a character using a 3-D program.
2. Animate the character's face by adding cameras and manipulating the mouth for each frame of the lip sync. The camera will record each frame to create the lip sync. There are 30 frames per second (see the sketch after these steps). Insert multiple cameras to take video of various angles of the lip sync.
3. Add handles to manipulate the mouth. Handles are tools in the animation program used to help move parts of the character.
4. Record a voice to accompany the lip sync for the 3-D character.
5. Render your character. Rendering completes the animation process and makes your animation seamless. Rendering can take hours depending on the duration of the animation.
6. Take your rendered character and import it into an editing program. Import your lip sync and sync the voice to the character. This may need some adjustments.
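Since the steps above assume 30 frames per second, converting times in the recorded voice track to frame numbers is simple arithmetic. A small sketch follows; the phoneme timings are invented for illustration.

```python
FPS = 30  # frames per second, as noted in step 2

def to_frame(seconds):
    """Convert a time in the recorded voice track to a frame number."""
    return round(seconds * FPS)

# Hypothetical phoneme timings (in seconds) from a recorded line.
timings = [("h", 0.00), ("ai", 0.12)]
for phoneme, start in timings:
    print(f"{phoneme}: set mouth pose at frame {to_frame(start)}")
```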
Post-production
A fundamental part of most animation is spoken dialogue, and the bulk of that
dialogue must be animated. Lip syncing is a basic and vital skill for most animators
that gives an extra touch of life to animated characters. With the proper timing and
frame rate, well-animated lip syncing offers the viewer a chance to read the lips of
some characters – an impressive touch that largely goes unnoticed. Ironically, the
things done right in animation often get noticed less than the things done wrong.
Setting the Stage
1. Assemble all the parts of the character on the stage, except for the mouth, and lock the layer.
2. Insert a new layer above the first and drag the mouth into place.
3. Import the dialogue clip into the Library.
4. Insert a new layer above the prior two.
5. Drag the sound clip from the Library onto the stage in the new layer.
6. Extend the timeline to the length of the sound clip. The frames in the sound clip's layer visually display the waveform of the sound, permitting you to see the parts of the clip.
7. Click on the sound's layer in the list to select all of the frames, and open the property window.
8. Set the sound clip's Sync property to "Stream." This will allow you to hear the sound as you scrub the timeline.
Basic Lip Syncing
1. Press "Enter" to play the timeline, or scrub the timeline to find the first syllable of the dialogue.
2. Insert a new keyframe one frame ahead of the first syllable. Doing this helps to better align what the viewer sees with what they hear (see the sketch after these steps).
3. Swap the current mouth symbol with the one suitable for the syllable.
4. Play or scrub the timeline to find the next sound or syllable, and insert a new keyframe at that location.
5. Swap the previous mouth symbol with the next appropriate symbol.
6. Continue this process to the end of the sound clip.
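The "one frame ahead" rule from step 2 is also easy to compute. The sketch below assumes a 24 fps timeline (adjust FPS to your project); the syllable onsets are invented for illustration.

```python
FPS = 24  # a common timeline frame rate; adjust to your project

def keyframe_for_syllable(onset_seconds, lead_frames=1):
    """Place the mouth keyframe `lead_frames` ahead of the syllable onset,
    so the viewer sees the mouth shape just before hearing the sound."""
    return max(0, round(onset_seconds * FPS) - lead_frames)

# Hypothetical syllable onsets found by scrubbing the clip.
for syllable, onset in [("hel", 0.40), ("lo", 0.65)]:
    print(syllable, "-> keyframe", keyframe_for_syllable(onset))
```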
Student Activity
Prepare a study note on the visualization of different views.
Summary
In this unit, we have discussed the difference between presentation and exploration of
data. We have also discussed the various layers of a visualization that might be
altered, and some principles for making a visualization-safe animation.
So now you're staring at a visualization you're working on, and trying to decide
whether to animate it or not. The question that this unit has repeatedly asked is: what
function does the animation serve? If it is meant to allow a user to smoothly transition
between views, then it is likely to be helpful. On the other hand, if the user is meant to
compare the "before" to the "after," the animation is less likely to be of use.
Users want to understand why a change is happening, and what is changing. If
everything on the screen is going to move around, perhaps it would be better to
simply switch atomically to a new image; this might spare the user the difficulty of
trying to track the differences. Finally, animations mean that it can be more difficult to
print out visualizations. Individual frames should be meaningful, so that users can
capture and share those images. Animation imposes a burden of complexity on the
user, and that complexity should pay off.
Keywords
Animated Cartoon: An animated cartoon is a short, hand-drawn (or made with
computers to look similar to something hand-drawn) film for the cinema, television or
computer screen, featuring some kind of story or plot (even if it is a very short one).
Animation: Animation is the process of drawing and photographing a character –
a person, an animal, or an inanimate object – in successive positions to create lifelike
movement.
Staging: It is disorienting to have too many things happen at once. If it is possible to
change just one thing, do so. On the other hand, sometimes multiple changes need to
happen at once; if so, they can be staged.
Compatibility: A visualization that will be disrupted by animation will be difficult for
users to track. For example, it is not disruptive to add another bar to a bar chart (the
whole set can slide over), and it may not be disruptive to add another series to a bar
chart.
Necessary Motion: In particular, avoid unnecessary motion. This implies that we
want to ensure that motion is significant – i.e., we should animate only what changes.
In general, the image should always be understandable.
Meaningful Motion: The coordinate spaces and types of motion should remain
meaningful. This also entails two points discussed earlier: preserve valid mappings
and maintain the invariant.
Review Questions
1. What do you think about the use of animation for visualization?
2. Discuss the use of Java, Flash, Silverlight and JavaScript in animation and active visualization.
3. Discuss the principles for animating visualizations.
Further Readings
Elmqvist, N., P. Dragicevic, and J.D. Fekete, 2008, "Rolling the dice: Multidimensional
visual exploration using scatterplot matrix navigation," IEEE Transactions on
Visualization and Computer Graphics 14, no. 6: 1141–1148.
Erten, C., P.J. Harding, S.G. Kobourov, K. Wampler, and G. Yee, 2003, "GraphAEL:
Graph animations with evolving layouts," In Proceedings of the 11th International
Symposium on Graph Drawing, Springer-Verlag.
Fisher, Danyel A., 2007, "Hotmap: Looking at geographic attention," IEEE
Transactions on Visualization and Computer Graphics 13, no. 6: 1184–1191.
Friedrich, C., and P. Eades, 2002, "Graph drawing in motion," Journal of Graph
Algorithms and Applications 6, no. 3: 353–370.
Heer, Jeffrey, and George G. Robertson, 2007, "Animated transitions in statistical data
graphics," IEEE Transactions on Visualization and Computer Graphics 13, no. 6: 1240–
1247.
Hundhausen, Christopher D., Sarah A. Douglas, and John T. Stasko, 2002,
"A meta-study of algorithm visualization effectiveness," Journal of Visual Languages &
Computing.
Johnston, Ollie, and Frank Thomas, 1987, The Illusion of Life, New York: Disney
Editions.
Unit 3: How to Draw Expressions
Unit Structure
Introduction
How to Draw Cartoon Faces?
How to Draw Cartoon Emotions and Facial Expressions?
Manga Drawing Tutorial
Summary
Keywords
Review Questions
Further Readings
Learning Objectives
At the conclusion of this unit, you will be able to:
Learn the tips on how to bring animated characters to life
Know how to draw cartoon faces
Discuss the elements of expression
Know how to draw cartoon emotions and facial expressions
Introduction
Curiosity about how to draw expressions in animation has found its way into many
animation classrooms, and it is a question the people at Animation World are asked
frequently. Appealing characters lie at the heart of animation, and it has always
struck me that unless you create great characters, it is pointless to put so much energy
into making them move. If you are interested in learning more about character design
(both cartoony and semi-realistic types), as well as in creating fluid, convincing
motion based on fundamentals and more advanced techniques, then give these pages
a look. Although the examples given are of 2D animation, the same principles may
carry over to 3D.
Don't settle for the ordinary. By "tweaking," or pushing, a character's facial
expression, you get that extra energy and vitality that can make a memorable
moment.
Christopher Hart has written and illustrated many successful "how to" cartoon and
animation books for Watson-Guptill, in addition to writing for many studios and
networks like NBC, Showtime, 20th Century Fox, MGM and others. He is also the
author and on-screen host of a popular art instruction CD-ROM series. Hart has
worked in animation, comic strips (Blondie), and magazines, including contributing
regularly to Mad Magazine.
Source: © Christopher Hart; drawn by Christopher Hart.
Figure 3.1: How to Draw Animation by Christopher Hart
How to Draw Cartoon Faces?
Figure 3.2: A Sample Cartoon Face
Have you ever wondered how cartoonists succeed in creating such a variety of
interesting characters with their colors? Be it animation movies, comic books or
magazines with cartoon pictures, the talent of the cartoonist breathes life into every
single character drawn.
Drawing cartoons, especially faces, is not an easy task; it demands creativity, wit
and style. However, amateurs willing to draw cartoon faces can definitely give it a
try. With practice you will surely refine your drawing skill and achieve perfection.
For all the beginners wondering how to draw cartoon faces, the following guidelines
will be of immense help.
Guidelines on How to Draw a Cartoon
Drawing cartoon faces becomes much simpler when you divide the face into the
basic shapes that comprise it. You can create different types of faces by drawing
different geometric shapes: square, circle, oval, triangle, etc.
Once you are done with the face structure, you can add the eyes (probably two
circles), a nose, a mouth and finally hair. Depending upon what expression you want
to give your character, the eye and mouth structure will vary.
Step-by-Step Method
1. First draw an oval shape. Now draw crossed guide lines across the face so that you can keep all the facial features (eyes, eyebrows, ears, nose and mouth) in the right proportion to one another (see the sketch after these steps).
Figure 3.3: Step-by-Step Method of Drawing Cartoon
2. At the uppermost portion of the face, add two curved lines for the eyebrows. You can make them thicker or thinner as you choose. Below the eyebrows draw two smaller circles for the eyes. Inside these circles make two solid black dots for the eyeballs. Add eyelashes to the upper eyelids if you want.
3. Now proceed to the other features. Draw a nose below and between the eyes. For the nose you can draw a U shape, an inverted V, a slightly curved line with two dots above it, or even an L shape.
4. At the lower portion of the face, draw a mouth, keeping in mind the expression you want to give the face. If you want a smiling face, make a slight curve and connect it to a broader U shape below it. If a frowning expression is required, make the curve turn downwards. For this type of expression the eyebrows should slant on the inside, down toward the eyes.
5. Add ears at the sides of the face. Depending upon whether you are drawing the front view or the side view of the face, you will add both ears or just one ear.
6. Cartoon faces are incomplete without hair. You can experiment with different types of lines: curved, straight, slanted, curly, zigzag, etc. Using these lines, draw an appropriate hairstyle for your cartoon face. Have a look at some cartoon characters' hairstyles and try to do something similar.
7. Once the hair is done, you can go on to do some shading or coloring. This will make your cartoon face much more eye-catching.
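Steps 1 to 4 can even be sketched programmatically. The following matplotlib snippet draws the oval head with crossed guide lines, circle eyes with solid pupils, a simple nose and a smiling mouth; all coordinates are invented for illustration.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse, Circle

fig, ax = plt.subplots(figsize=(4, 5))
ax.set_aspect("equal"); ax.axis("off")

# Step 1: oval head with crossed guide lines.
ax.add_patch(Ellipse((0, 0), 3.0, 4.0, fill=False))
ax.plot([0, 0], [-2, 2], lw=0.5)      # vertical guide line
ax.plot([-1.5, 1.5], [0, 0], lw=0.5)  # horizontal guide line

# Step 2: eyes (small circles) with solid black dots for the eyeballs.
for x in (-0.6, 0.6):
    ax.add_patch(Circle((x, 0.4), 0.3, fill=False))
    ax.add_patch(Circle((x, 0.4), 0.08, color="black"))
    ax.plot([x - 0.35, x + 0.35], [0.95, 0.95])  # eyebrow

# Steps 3 and 4: a U-shaped nose and a smiling mouth.
ax.plot([-0.15, 0, 0.15], [-0.2, -0.35, -0.2])  # nose
ax.plot([-0.6, 0, 0.6], [-1.0, -1.25, -1.0])    # mouth

plt.show()
```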
Elements of Expression
The key elements of facial expressions are the eyes, eyebrows, and mouth. In furry
characters, the ears are also important.
Take a look at these examples:
Figure 3.4: Depicting Elements of Face Expression
It is important to note here that the parts that change most are the shape of the eyes,
the angle of the eyebrows, and the mouth.
Notice that the ears are 'pinned', or pointed back, when the character is angry or
distressed. Both the upper and lower eyelids affect the shape of the eye, and even the
eyebrows have some effect if they are strongly furrowed, as in the angry expression.
A genuinely happy expression should show the effect of the lower eyelid, flattening
the shape of the bottom of the eye; fake smiles lack this effect in real life!
In cases where the pupil is 'floating', not touching the top or bottom edge of the eye,
the character appears surprised. Without raised eyebrows and a lowered jaw, the
floating pupil just makes a character look deranged. In other words, the combination
of all three elements is key to conveying the right expression.
Focus on the Eye
Figure 3.5: Depicting Elements of Eye Expression
Here's an eye in several poses, depicting some different possibilities for showing
emotion. The first shows a little of the bottom eyelid, as well as the top. This
expression is relaxed, or just plain normal. The high and slightly arched eyebrow
makes the character alert. They're engaged or interested, and thinking.
The second example eye is surprised, or shocked. The eyebrow arches way up, and
we see the floating pupil again. The upper eyelid is pulled back, making the eye seem
larger. You can change the angle of the eyebrow to add more subtle effects to the
expression – angle it up and toward the center of the face for a concerned or unhappy
surprise, and angle it down toward the center of the face to show anger.
The third eye is definitely not happy. We have the angry, down-angled eyebrow,
which is so low that it touches the eye itself. Making the angle of the eyebrow even
sharper, and covering more of the eyeball, will make a more intensely angry
expression.
Eye 4 is concerned, sad, or fearful. The up-angled eyebrow is pulling at the flesh
around the eye, distorting it. The lower eyelid is also making a strong appearance.
The next eye shows an even more angry, menacing expression; as mentioned before,
the angle of the eyebrow has intensified the emotion. The addition of the lower eyelid
narrowing the eye adds to the effect.
The last eye is bored, tired, or otherwise disengaged. Most of the eye is covered by the
upper lid, and there's not much action in the eyebrow.
Learn by Example
My final advice to you is to study the right models. Don't just copy expressions you
see in cartoons or manga. Get a mirror and study your own expressions. Watch how
the muscles of the face move and bunch. Pay attention to the shape of the eye, and
how much of the teeth you see when the mouth is open. Animators often act out the
poses and expressions they need to draw, and I think this technique will serve you
well. Don't be afraid of looking like a goofball!
Here are some examples of facial expressions from my webcomic Good Cheese.
Analyze the shape of the eye, angle of the eyebrow, and the mouth. What emotion
does it convey? And, can you think of a better way to draw it?
Figure 3.6: Cartoons with Facial Expressions
To make a cartoon face look great you have to master two things: the basic face and
facial expression. The basic face is what you can build on to make more complex
faces. Facial expression is what makes your cartoon character memorable.
Here is how to use both to draw fantastic cartoon faces.
Basic Cartoon Face
1. Draw the basic cartoon face. The basic cartoon face is used to teach the basics of facial expressions. You can dress the face up any way you want and you can use it as a base for more complex characters. To draw a basic cartoon face, draw a circle for the head.
2. Draw two circles for the eyes.
3. Draw a 'U' shape for the nose.
4. Finish with a line for the mouth.
Expression
1. Drawing facial expressions has to do with the positions of the eyebrows and the mouth, for the most part. Using the simple cartoon head you created in the previous section, you can practice drawing different emotions.
2. Anger can be achieved by slanting the eyebrows on the inside, down toward the eyes. Make the mouth curve downwards.
3. For sadness, slant the inside tips of the eyebrows up, and then tip the outside of the eyebrows down towards the eyes.
4. Draw a surprised face by drawing the eyebrows high on the forehead and the mouth in an 'O' shape.
5. Draw a calm face without eyebrows and a straight line for a mouth.
How to Draw Cartoon Emotions and Facial Expressions?
You can change the expressions on your face without changing your emotions (by
acting), but don't you wish that drawing facial expressions was just as easy? Well,
with practice, you will see that it is just this easy and a LOT of fun to try. Most facial
expressions can be easily made by changing the size, shape, and relationship of the
eyes, nose, and mouth and other parts of the face such as the eyelids and eyebrows. If
you want to be a cartoonist, an illustrator, an artist or just simply good at drawing, it
would be a good idea to start studying people's faces. Keep a sketchpad with you at
all times and when you see people's faces change in emotion, quickly draw a simple
shape (as seen below) to chart out all of the different expressions that a person's face
can make. It would also be a good idea to study your own face in the mirror as you
make silly and crazy faces. This is a lot of fun and is a good drawing exercise for
beginner artists.
You might also find the previous drawing article helpful: How to Draw Cartoon
Emotions and Expressions in Characters' Eyes.
We began your facial expressions chart below; print it out and keep it as a reference.
Try to draw these faces in your new chart.
Facial Expression: Anger … Aggression
Facial Expression: Agitation
Facial Expression: Angry … Furious … Violent Anger
Facial Expression: Calm and Composed … really no emotion
at all … maybe a blank stare even
Facial Expression: Discouraged, Sad, Disappointed
Facial Expression: Disdainful, Conceited, Prim & Proper, Judgemental
Facial Expression: A little confused and a little Disappointed
Facial Expression: Happy, Joyful, Excited possibly
Facial Expression: Grumpy, Groggy, Constipated, Angry
Facial Expression: Grumpy, Angry, Furious, Impatient, Irritated
Facial Expression: Overjoyed, Laughing, Hysterical, Happy
Facial Expression: Content, Happy, Smiling, Joyful
Facial Expression: Surprised, Scared, Frightened
Facial Expression: In Wonder, Surprised, Shocked
Figure 3.7: How to Draw Cartoon Emotions and Expressions in Character’s Eyes
First... use your pencil to sketch in your drawing... then use the marker to darken the
lines you want to keep (if you don't use a marker... press harder and tilt your pencil
up more to make a darker pencil line). Once you've done this... erase the lighter pencil
lines you no longer need.
Cartoon Faces... they can be any shape and size... there aren't many rules to drawing
cartoon faces. Just draw them and giggle! Have fun with them!
The guide lines are used to help you when your character faces... to the right... to the
left... to help you keep the eyes, nose, and mouth in the same general area.
Figure 3.8: Drawing Character Faces
Take a face... stretch it... pull it! Make it silly!
Figure 3.9: Drawing a Sketch of Faces
Use your imagination... Look in the mirror and make a face... then try to draw that
face. Notice what your eyes... mouth... eyebrows look like.
Billy Bear's cousin LOVES to make faces!
Here are a few face shapes...
Manga Drawing Tutorial
Manga consist of comics and print cartoons (sometimes also called komikku), in the
Japanese language and conforming to the style developed in Japan in the late 19th
century. In their modern form, manga date from shortly after World War II, but they
have a long, complex pre-history in earlier Japanese art.
Anime, an abbreviated pronunciation of "animation" in Japanese, is animation
originating in Japan; the world outside Japan regards anime as "Japanese animation".
While the earliest known
Japanese animation dates from 1917 and many original Japanese cartoons were
produced in the ensuing decades, the characteristic anime style developed in the
1960s – notably with the work of Osamu Tezuka – and became known outside Japan
in the 1980s. Anime, like manga, has a large audience in Japan and recognition
throughout the world. Distributors can release anime via television broadcasts,
directly to video, or theatrically, as well as online. Both hand-drawn and
computer-animated anime exist. It is used in television series, films, video, video
games, commercials, and internet-based releases, and represents most, if not all,
genres of fiction.
How to Draw Different Anime Eye Expressions?
Different anime eye expressions are listed in this guide... I hope this will help when
you're trying to add emotion and expression to your anime drawings.
Step 1: Neutral/Normal Expression
The neutral eye expression is the most commonly used expression I've seen being
drawn in RMD. However, you can change the expression by adding a happy or sad
mouth... but hey this guide is about eye expressions.
Step 2: Happy/Delighted Expression
The happy expression is very simple. A smiling mouth will amplify the happy
expression of these eyes.
Step 3: Angry Expression
The angry expression expresses anger... or it could become a grin if paired with a
smiling mouth...
Step 4: Sad/Worried Expression
The small lines amplify the sadness of the expression... best paired with a neutral
mouth.
Step 5: Scared Expression
Step 6: Crying
Yeah... everyone experiences this emotion...
Step 7: Guilty/Scared Expression
This becomes a guilty expression when paired with a smiling mouth, but a scared one
if paired with a semi-open or sad mouth.
Step 8: Shocked Expression
Maybe he/she saw a ghost. Best paired with a wide open mouth.
Step 9: Furious Expression
A very angry expression. Best paired with an angry mouth...
How to Draw Cartoon Facial Expressions?
Showing the Character's Mood to the Viewer
Source: Alina Bradford, Mar 25, 2009
How to Draw Anger (Alina Bradford)
Comics and cartoons rely on very few words, so the characters' expressions must do
a lot of talking for them.
The best way to convey mood through expression is by figuring out what the face
does at certain times and translating that into lines on the paper.
Emotions are expressed mainly through the eyebrows and mouth in simple cartoon
and comic drawings. It is important to learn these movements before moving on to
rendering the eyes, since this can be a more complicated process. In this introduction
to facial expressions in comics and cartooning, simple "smiley faces" will be used so
that artists can clearly see the changes in facial movement. Simple instructions on
the eyes are included.
How to Draw Anger and Sadness?
Anger is conveyed by drawing the eyebrows in a V-shape low over the eyes. The
mouth is often a straight line with the edges drawn downward. This denotes a
tight-lipped anger, like the character wants to say something but can't find the words.
A yelling character's mouth can be drawn open, but the artist should avoid making it
look like a complete circle. The upper lip is curved and the bottom lip is a V-shape
when yelling. The eyes are squinted, so they are drawn smaller than they would be
normally.
Sadness is drawn with the same eyebrows as anger, but they are set higher on the
face. They are also drawn with more curve so that they are not as severe. The mouth
is a down-curved line. The edges of the eyes are usually drawn at a downward angle,
as well.
How to Draw Surprise?
Surprise is drawn much like anger and sadness, but in reverse. Instead of
drawing the eyebrows in a V-shape, the eyebrows will be in an A-shape and the lines
won't be sharp; they will have a slight curve to them. They are also set higher on the
head, making them look raised. The mouth is generally drawn in an O-shape. The
eyes follow suit by being wide open and rounded.
How to Draw Calm and Sleepy?
The simplest of faces is the calm face. A calm cartoon face is often drawn without any
eyebrows at all. The mouth is just a straight line and the eyes are looking straight
ahead without any special rendering. A sleepy face is drawn as a calm face with
half-closed eyelids.
When not drawing smiley faces, this technique can seem more complicated, but it
really isn't. The artist should simply draw the character as usual and tweak the
features according to the tips above.
Materials to Use
Pencil: Use a comfortable pencil. Number two regular pencils are recommended.
Paper: 8.5 by 11 inch paper can be used. If this is your first attempt at drawing
cartoons, prepare a good supply of paper, since you will make mistakes. You will
need to practice before you can draw to perfection.
Erasers: You should use clean erasers so as not to dirty your drawing.
Body
The most difficult part of drawing the body is getting the proportions right. First, the
body should be proportioned to the head. Second, the shoulder width, body length,
arm length, and leg length should also be proportional. This may take quite some
time to learn, but eventually you'll get it right.
Basic Steps
1. Use your imagination in creating cartoons, especially if you are working on funny cartoons for kids. You can start by drawing cartoon characters or animals by simplifying their images into basic designs and shapes. For instance, if you want to draw a head, you can use a circle. In addition, you can use dots for the eyes and a small circle for the nose. You can also place medium-sized circles on top of the head for the ears.
2. Cartoons are popular for the facial expressions of their characters, so be imaginative in giving your characters facial expressions. The character should clearly express emotions of happiness, sadness, or worry. The shapes of the eyes, eyebrows, and mouth can be changed to convey the appropriate emotion.
3. Now that the facial expression is finished, you can draw the body of your character. The body should convey dynamism; use different angles to show action. Avoid stationary figures.
4. Finally, you have to breathe life into your cartoon characters by involving them in activities such as playing ball, diving, or golfing. These actions, along with the facial expressions, will make your cartoons realistic.
5. To master the art of drawing cartoons, you have to be patient and observant. Study the cartoons of the masters. Practice, practice, and more practice will eventually produce results.
Student Activity
Prepare a study note on how to draw different facial expressions.
Summary
Drawing cartoon faces becomes much simpler when you divide the face into the basic
shapes that comprise it. You can create different types of faces by drawing different
geometric shapes: square, circle, oval, triangle, etc. You can change the
expressions on your face without changing your emotions (by acting), but don't you
wish that drawing facial expressions was just as easy? Well, with practice, you will
see that it is just this easy and a LOT of fun to try. Most facial expressions can be
easily made by changing the size, shape, and relationship of the eyes, nose, and
mouth and other parts of the face such as the eyelids and eyebrows. If you want to be
a cartoonist, an illustrator, an artist or just simply good at drawing, it would be a
good idea to start studying people's faces.
The key elements of facial expressions are the eyes, eyebrows, and mouth. In furry
characters, the ears are also important. Surprise is drawn much like anger
and sadness, but in reverse. Instead of drawing the eyebrows in a V-shape, the
eyebrows will be in an A-shape and the lines won't be sharp; they will have a slight
curve to them. They are also set higher on the head, making them look raised. The
mouth is generally drawn in an O-shape. The eyes follow suit by being wide open
and rounded. The simplest of faces is the calm face. A calm cartoon face is often
drawn without any eyebrows at all.
Keywords
Manga: Manga consist of comics and print cartoons (sometimes also called komikku),
in the Japanese language and conforming to the style developed in Japan in the late
19th century. In their modern form, manga date from shortly after World War II, but
they have a long, complex pre-history in earlier Japanese art.
Anime: Anime, an abbreviated pronunciation of "animation" in Japanese, is
animation originating in Japan. The world outside Japan regards anime as "Japanese
animation". While the earliest known Japanese animation dates from 1917, many
original Japanese cartoons were produced in the ensuing decades.
Elements of Expression: The key elements of facial expressions are the eyes, eyebrows,
and mouth. In furry characters, the ears are also important.
Surprise: Surprise is drawn much like anger and sadness, but in reverse.
Instead of drawing the eyebrows in a V-shape, the eyebrows will be in an A-shape
and the lines won't be sharp; they will have a slight curve to them. They are also set
higher on the head, making them look raised. The mouth is generally drawn in an
O-shape. The eyes follow suit by being wide open and rounded.
Calm and Sleepy: The simplest of faces is the calm face. A calm cartoon face is often
drawn without any eyebrows at all.
Anger and Sadness: Anger is conveyed by drawing the eyebrows in a V-shape low
over the eyes. The mouth is often a straight line with the edges drawn downward.
Review Questions
1. How can you bring animated characters to life? Explain.
2. How do you draw cartoon faces?
3. Discuss the elements of expression.
4. How do you draw cartoon emotions and facial expressions?
Further Readings
Elmqvist, N., P. Dragicevic, and J.D. Fekete, 2008, "Rolling the dice: Multidimensional
visual exploration using scatterplot matrix navigation," IEEE Transactions on
Visualization and Computer Graphics 14, no. 6: 1141–1148.
Erten, C., P.J. Harding, S.G. Kobourov, K. Wampler, and G. Yee, 2003, "GraphAEL:
Graph animations with evolving layouts," In Proceedings of the 11th International
Symposium on Graph Drawing, Springer-Verlag.
Fisher, Danyel A., 2007, "Hotmap: Looking at geographic attention," IEEE Transactions
on Visualization and Computer Graphics 13, no. 6: 1184–1191.
Friedrich, C., and P. Eades, 2002, "Graph drawing in motion," Journal of Graph
Algorithms and Applications 6, no. 3: 353–370.
Unit 4: How to Achieve Lip Synchronization
Unit Structure
Introduction
Lip Synchronization
Automatic Lip Synchronization System
Mouth Positions
Good Facial Animation or Lip Sync Animation
Kinds of Cartoons
Summary
Keywords
Review Questions
Further Readings
Learning Objectives
At the conclusion of this unit, you will be able to:
Learn the methods for adding topology to a pre-existing model to make creating
mouth shapes feasible
Know about the lip synchronization in music
Introduction
One of the major problems in character animation is the synchronization of the speech
signal and lip movements. If the speech signal is already given, off-line
synchronization is necessary, using a time-consuming process called "lip sync" that is
also used in classic cartoon animation. Here, the speech signal is marked with a
time-code which is then used to manually determine when a certain expression is
needed. The in-betweens are calculated by interpolation.
the lip movements uses an animation system to create the expressions of the character
one after the other in several passes. Each pass, like creating the lip movements for the
letter a, can be created in real time, i.e. while playing back the speech signal. Several
passes are needed to create and fine-tune the final animation. With all of these
techniques, either a high technical effort is needed or they remain very
time-consuming. A technique that facilitates this process would be of great help for the central
task of creating convincing animations of computer generated characters, i.e.
synchronizing speech signal and speech movements. One possibility is synthesizing
the speech signal and creating the lip movements synchronized to the synthesized
signal.
This approach does not work if a high quality speech signal, spoken by a professional
speaker, is needed or if the speech signal is already given, as it is in our case. Another
approach would be to recognize phonemes and create the lip
movements that are appropriate for each phoneme. This approach does not take into
account coarticulation effects, i.e. the context of the particular phonemes, and will not
work for character animation where the context of a phoneme is important for the
expression. Also this approach may not catch any mood or special characteristics that
are important for a specific character.
In this unit we present an automated technique for creating the lip movements of a
computer-generated character from a given speech signal by using a neural net. The
net is trained on a set of pre-produced animations. We first discuss the relevant
aspects of facial animation and speech. Thereafter, we explain the processing of the
speech signal and the network architecture and present some results.
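The unit does not spell out the network's architecture, so purely as an illustration of the idea (audio features in, mouth-shape parameters out, trained on pairs taken from pre-produced animations), here is a minimal one-hidden-layer network in NumPy. The layer sizes and feature counts are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: 13 audio features per frame (e.g., cepstral coefficients)
# in, 4 mouth-shape parameters (e.g., jaw open, lip width) out. Training
# pairs would come from the pre-produced animations mentioned above.
N_IN, N_HIDDEN, N_OUT = 13, 32, 4
W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT)); b2 = np.zeros(N_OUT)

def predict(x):
    """One hidden layer with tanh: audio features -> mouth parameters."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def train_step(x, target, lr=0.01):
    """One gradient-descent step on squared error (backprop by hand)."""
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)
    y = h @ W2 + b2
    err = y - target                      # gradient of 0.5 * sum(err**2)
    gW2 = np.outer(h, err); gb2 = err
    gh = (W2 @ err) * (1 - h ** 2)        # backprop through tanh
    gW1 = np.outer(x, gh); gb1 = gh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean())
```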
This unit covers methods for adding topology to a pre-existing model to make
creating mouth shapes and expressions feasible. For anyone focusing on animation,
having a
character to work with is important. In a previous course, we went through the
process of building a basic character that we could use to practice our animation. The
character is meant to show action and emotion through only the movement of the
body. While this is great practice for getting the most out of our character's body
movements, at some point we'll probably want to start creating facial expressions and
even make our character speak. We'll use a variety of tools in Maya like the Split
Polygon Tool and the Insert Edge Loops tool to manually add and remove edges and
generally reroute the geometry as needed. We'll also add geometry for the interior of
the mouth and reassemble the head with the existing body.
Lip Synchronization
Lip-sync or lip-synch (short for lip synchronization) is a technical term for matching
lip movements with voice and can refer to any of a number of different techniques
and processes. In the case of live concert performances, lip-synching is a
commonly-used shortcut, but can be considered controversial. In lip synchronization
the audio and video are combined in recording in such a way that the sound is
perfectly synchronized with the action that produced it; especially synchronizing the
movements of a speaker's lips with the sound of his speech.
Lip synchronization is the synchronization of audio signals (sometimes with
corresponding video signals) so that there is no noticeable lack of simultaneity
between them. The term lip sync is an abbreviation of lip synchronization, and
describes two similar forms of vocal pantomime.
Lip Synchronization in Music
Though lip-synching, also called miming, can be used to make it appear as though
actors have musical ability (e.g., The Partridge Family) or to misattribute vocals (e.g.,
Milli Vanilli), it is more often used by recording artists to create a particular effect, to
enable them to perform live dance numbers, or to cover for illness or other
deficiencies during live performance. Sometimes lip-synching performances are
forced by television for short guest appearances, as it requires less time for rehearsals
and hugely simplifies the process of sound mixing. Some artists, however, lip-synch
as they are not as confident singing live and lip-synching can eliminate the possibility
of hitting any bad notes. The practice of lip synching during live performances is
frowned on by many who view it as a crutch only used by lesser talents.
Because the film track and music track are recorded separately during the creation of
a music video, artists usually lip-sync to their songs and often imitate playing musical
instruments as well. Artists also sometimes move their lips at a faster speed than the
track, to create videos with a slow-motion effect in the final clip, which is widely
considered to be complex to achieve.
Artists often lip-sync certain portions during strenuous dance numbers in both live
and recorded performances, due to lung capacity being needed for physical activity
(both at once would require incredibly trained lungs). They may also lip-sync in
situations in which their back-up bands and sound systems cannot be accommodated,
such as the Macy's Thanksgiving Day Parade which features popular singers
lip-synching while riding floats, or to disguise their lack of singing ability,
particularly in live or non-studio environments. Some singers habitually lip-sync
during live performance, both concert and televised. Some artists switch between live
singing and lip-synching during the performance of a single song.
Automatic Lip Synchronization System
Purpose
As the trend toward digitization accelerates, video and audio are increasingly
processed separately. This separate processing causes a time delay between video and
audio, often affecting program production and broadcasting. NHK has developed an
automatic lip synchronization system (TuLIPS) designed to detect people's lip
movements and voice utterances; it only requires simple operation to correct this
video-audio discrepancy with high precision.
Features
A number of methods have been proposed for the correction of video-audio delays.
One of them superimposes a standard signal on the main video-audio lines, but it
cannot be used while the program is being aired. Another problem is that it affects the
audio quality. Further, both transmission and reception sides need to be fitted with
the device, making the system more complicated.
TuLIPS, on the other hand, does not require any standard signals to be superimposed
or manual cut-and-try adjustment. It automatically measures and corrects the delays
between video and audio solely from their main line signals. As the TuLIPS system
measures only 44 mm (1U) in height, it is convenient for use in broadcasting vehicles
or field locations.
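NHK's actual algorithm is not described here, so the following is only a conceptual sketch of how a video-audio delay can be estimated from the signals themselves: cross-correlate a lip-movement signal (for example, mouth openness per frame) against the audio energy, and find the lag where they line up best.

```python
import numpy as np

def estimate_av_delay(mouth_openness, audio_energy, fps=30):
    """Estimate the delay (in seconds) between a lip-movement signal and
    an audio-energy signal sampled at the same frame rate, using
    cross-correlation. A positive result means the audio lags the video.
    """
    a = mouth_openness - np.mean(mouth_openness)
    b = audio_energy - np.mean(audio_energy)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # samples by which video leads
    return -lag / fps                       # seconds by which audio lags

# Hypothetical signals: audio shifted 3 frames later than the video.
video = np.sin(np.linspace(0, 20, 200))
audio = np.roll(video, 3)
print(estimate_av_delay(video, audio))  # ~0.1 s at 30 fps
```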
Gestures
One of the first things you discover when you enter Second Life is that you can make
your avatar dance by typing in the chat bar. Gestures can also make your avatar
smile, frown, and otherwise contort his face. So you'd think we'd have the raw
material for creating lip sync by using gestures. But...we don't.
See, when you type a gesture name into the chat bar, that is sent to the Linden Lab
server somewhere across the world. The server sends a message to every user whose
avatar is in the vicinity of your avatar telling them that your avatar wants to dance or
whatever. Their viewers (and your viewer too) check to see whether they have that
particular animation cached nearby. If not, the viewer asks the server to look up the
animation in its database and send it to them. When the viewers get the animation
they (and you) can finally see you dance. And if the gesture includes sound, you
might notice that the sound plays out of sync with the animation if either has to be
fetched from the server.
So you've probably figured that bouncing bits across the world networks is no way to
get fine synchronization between speech and mouth movement. But, well, the pieces
are there. It's just a matter of putting them together the right way.
Morphs
Avatars are nothing but skin and bones. No, really. An avatar actually does have a
skeleton with bones. Virtual bones, anyway. And the bones are covered with skin.
Well, triangulated skin anyway, called meshes.
Avatars can move their body parts in two ways. They can move their bones, and have
the corresponding skin follow along stiffly, or they can stretch and distort their skin.
When you change your avatar's appearance you're really stretching its skin and
changing the length of its bones. These are adjustments to your avatar's visual
parameters. The stretching and contorting uses a 3D construction called morphs.
Some morphs change your avatar's appearance and other morphs make him move.
The difference between appearance morphs and animation morphs is just a practical
one. For animation morphs, you can be sure that every avatar has them set to the
same values. So after the animation is done, you can set each morph back to its default
value. If you tried using appearance morphs for animation, you wouldn't know where to start or finish for each particular avatar. Yeah, in theory, you could do some relative
calculation, but 3D is hard enough already.
Now unfortunately for us doing lip sync, most of the morphs that come in the Linden
avatar definition file are meant for expressing emotions: surprise, embarrassed, shrug,
kiss, bored, repulsed, disdain, tooth smile. And although a tooth smile kind of looks
like someone saying "see," it's a stretch. (Yeah, I like puns.) But, it's all we've got.
Phonemes
So far, we've been just looking. Let's stop and listen for a moment. When we write or
type, we use letters, and as we all know, words are made of letters. But when we talk,
they're not. The little pieces of a spoken word that take the place of letters are called
phonemes. Sometimes there is a one-to-one correspondence between letters and phonemes. In US English, that's true for V, I think, but not much else. That's why "photi" sounds like "fish." But linguists (that's what they call people who make a living dreaming about photi and colorless green ideas while furiously sleeping) make up new alphabets to name phonemes. SH is a phoneme that sounds like "shh." Well, of course it would. And AX is that unstressed "a" sound called a schwa. At least to linguists who work on speech recognition. Most everybody else turns a little "e" upside down. Anybody else just get an image of Dale Jr. in a bad wreck? Wrong crowd, I guess.
Visemes
When you say a word starting with the letter "f", it has a distinctive sound, and it has a
distinctive appearance: your teeth touch your bottom lip. But when you say the letter
"v", it has a different sound, yet it looks just like an "f": your teeth touch your bottom lip.
You might have guessed that the photi fans have a name for these things that look alike.
Yep, they call them visemes. Something like "visible phonemes" but shorter.
When someone is lipreading, they are translating visemes into phonemes. And since
several phonemes can share the same look, you can see why lipreading is so difficult.
Did he say he's "fairy wed" or "very wet"? Now lip sync, then, is like lipreading in
reverse. We need to map phonemes to visemes.
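To make that mapping concrete, here is a minimal Python sketch. The phoneme labels and viseme groupings are illustrative assumptions, not a standard inventory; a real system would use a full phone set.

    # Minimal phoneme-to-viseme lookup. Labels and groupings are
    # illustrative assumptions, not a standard inventory.
    PHONEME_TO_VISEME = {
        "F": "FV", "V": "FV",                # teeth touch the bottom lip
        "B": "MBP", "M": "MBP", "P": "MBP",  # closed mouth
        "AA": "AAH", "AX": "AAH",            # open mouth (schwa folded in)
        "OW": "OOH", "UW": "OOH",            # pursed lips
        "SH": "SH", "CH": "SH",
    }

    def to_visemes(phonemes):
        """Map a phoneme sequence to visemes; unknown phonemes get a rest shape."""
        return [PHONEME_TO_VISEME.get(p, "REST") for p in phonemes]

    print(to_visemes(["V", "EH", "R", "IY"]))  # "very" begins with the FV viseme

Note how F and V collapse onto the same viseme, which is exactly why lipreading is so ambiguous.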
Verisimilitude
You can consider three types of lip sync. The most elaborate provides accurate mouth
shapes and jaw movements (visemes) based on the sound that is spoken (phonemes).
If the voice is created synthetically, also known as text-to-speech (TTS), this is pretty
straightforward since there is an intermediate representation in most TTS systems
between the text and the speech that is the phoneme sequence with timing
information. It is a fairly simple task to translate the phonemes to visemes which then
get implemented as facial morphs.
For live speech, it isn't so easy. Most of the lip sync that you see in movies like Shrek
and The Incredibles is hand crafted for accuracy and uses a large set of 3D morphing
targets as visemes.
The audio can be automatically decoded into a phoneme sequence, which is a fairly
complicated task, but it can be done using speech recognition technology. I won't be
shy about giving a shameless plug for the IBM ViaVoice Toolkit for Animation,
which is my own project.
Similitude
A simpler form of lip sync just looks at the relative energy of the speech signal and
uses a smaller set of visemes to represent the mouth movement. This still requires
decoding of the audio stream, but it's easier than phonetic decoding.
Tude
The crudest form of lip sync just loops a babble animation while the speaker is
speaking. That is, it just repeats the same sequence of frames over and over while the
character is talking. The visuals are not actually synchronized to the audio; they just
start and stop at the same time. This is what you'll find used for anime and a lot of
Japanese animated shows on TV because it doesn't really matter which language is
used for the sound track. The characters don't have to be reanimated for each
language.
Lip-sync for SecondLife
Unlike gestures, which are sent from the server, lip sync must happen entirely on the
client. This is the only way to ensure synchronization.
The choice of which of the three forms of lip sync to use depends on the level of
reality expected, which in turn depends on the level of reality of the animated
characters. For SecondLife, the energy-based lip sync is probably appropriate. We
don't need to implement realistic visemes, so the lack of a nice set of usable morphs is
not a problem, but...there's another problem.
Voice Chat
SecondLife has a voice chat feature that lets avatars speak, so we have the audio
stream, but unfortunately, we can't get to it. The audio for voice chat never enters the
SecondLife viewer. Instead it is processed by a parallel task called SLVoice, written
for Linden Lab by Vivox. SLVoice is not currently open source, but Linden Lab has
expressed a desire to make it so in the future.
Voice Visualization
But the viewer does get some information from SLVoice, in the form of Participant
Properties Event messages. These messages provide a measure of the speech energy, but it is averaged over several phonemes, so it cannot provide enough detail for energy-based lip sync. They are used to generate the green waves above the avatar's
head indicating how loud someone is speaking.
Oohs and Aahs
So we can only generate babble loops with the information provided.
We can use the "Express_Closed_Mouth" morph for silence and the
"Express_Open_Mouth" morph for loud sounds. Morphing allows us to blend
between the morph targets, so we can get any level of mouth opening by weighting
these two targets.
The "Express_Open_Mouth" morph is the viseme for the "aah" sound. It turns out that
the "Express_Kiss" morph looks similar to the viseme for the "ooh" sound. So we can
get a variety of different mouth shapes by blending the three morphs: "aah" gives us the vertical dimension and "ooh" gives us the horizontal.
By using different length loops for "ooh" and "aah", we effectively create a loop whose
length is the least common multiple of the two loop lengths. (And you never thought
you'd ever hear about least common multiples after high school.)
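As a rough Python sketch of that arithmetic (the digit loops are the example sequences given later under "How to Babble", and the 0-9 scaling is an assumption):

    from math import gcd

    ooh_loop = "1247898743223344444443200000"  # 28 updates of pucker amounts
    aah_loop = "257998776531013446642343"      # 24 updates of jaw openings

    def weights_at(frame):
        """Blend weights (0.0-1.0) for the ooh and aah morphs at one update."""
        ooh = int(ooh_loop[frame % len(ooh_loop)]) / 9.0
        aah = int(aah_loop[frame % len(aah_loop)]) / 9.0
        return ooh, aah

    # The combined pattern only repeats after lcm(28, 24) = 168 updates.
    period = len(ooh_loop) * len(aah_loop) // gcd(len(ooh_loop), len(aah_loop))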
Unfortunately, there is a problem using the "Express_Kiss" morph. It not only purses
the lips, it also closes the eyes and lowers the eyebrows. This gives the avatar a
nervous appearance if the morph is changed too quickly, and it gives a tired
appearance if done too much.
So, can we extract just the mouth movement from the Express_Kiss morph and make our own Express_Ooh morph? Why not? When we look at the mesh file definition we note that all of the vertices used in the morphs are indexed to the mesh definition using the vertexIndex field. So we just take those vertices out of the Express_Kiss morph that are also used by Express_Open_Mouth or Express_Closed_Mouth and voilà! we have an Express_Ooh morph. We add a visual param to the avatar definition file and there you have it.
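As a sketch of that vertex filtering, assuming each morph is represented simply as a mapping from vertexIndex to an offset vector (a simplification of the actual mesh file format):

    def make_ooh_morph(kiss, open_mouth, closed_mouth):
        """Keep only the Express_Kiss offsets for vertices that the mouth
        morphs also move, discarding the eye and eyebrow vertices."""
        mouth_verts = set(open_mouth) | set(closed_mouth)
        return {idx: off for idx, off in kiss.items() if idx in mouth_verts}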
Because lip sync uses the same visual params as the emotes, we either have to disable
emotes during lip sync or blend the two together. As it turns out, morphs are made
for blending. So with no extra work, it turns out that the emotes blend just fine with
the lip sync morphs. Well, just about.
If we use Express_Open_Mouth in a gesture while we're doing lip sync, we get a
conflict because both want to set different weights for the same morph. So we really
want to have lip sync morphs separate from the emote morphs. So instead of
Express_Ooh, we'll call it Lipsync_Ooh and we'll copy Express_Open_Mouth to
Lipsync_Aah. We may get a mouth opened to twice its usual maximum, but the
maximum was just arbitrary anyway.
There's still one catch, though. The base mesh defines a mouth with lips parted, but
the default pose uses the Express_Closed_Mouth morph at its full weight. The emotes
blend between the Express_... morphs and the Express_Closed_Mouth morph. We
could make a Lipsync_Closed_Mouth morph to blend with the Lipsync_Ooh and
Lipsync_Aah morphs, but then the default pose would have the mouth closed twice.
We could just forget about a separate blending for lip sync, but then the Lipsync_Aah
would not open the mouth as much as Express_Open_Mouth because it would be
blended with the fully weighted Express_Closed_Mouth. So, we add the negative of
the Express_Closed_Mouth to Lipsync_Ooh and Lipsync_Aah to get the same effect
as the emotes and then we don't have to blend to a default.
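A sketch of that baseline subtraction, under the same simplified morph-as-dictionary representation as above:

    def subtract_baseline(morph, closed_mouth):
        """Fold the negated Express_Closed_Mouth offsets into a lip sync morph
        so it blends correctly against the default closed-mouth pose."""
        out = dict(morph)
        for idx, (dx, dy, dz) in closed_mouth.items():
            ox, oy, oz = out.get(idx, (0.0, 0.0, 0.0))
            out[idx] = (ox - dx, oy - dy, oz - dz)
        return out

    # e.g., lipsync_aah = subtract_baseline(express_open_mouth, express_closed_mouth)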
Energetic Oohs and Aahs
In the future we hope to have access to the audio stream allowing us to compute
precise energy values. We can map these to the "aah" morph, while using a babble
loop for the "ooh" morph to give us some variety.
The "ooh" sounds, created by pursing the lips, have a more bass quality to them than
"aah" sounds. This is a reflection of their audio spectral features, called formants. It
should be possible to make a simple analysis of the speech spectrum to get a real
estimate of when to increase the "ooh" morph amount, rather than just babbling it.
This could provide a realism better than simple energy based lip sync, though still
below phonetic lip sync.
Says Who
Right now, the audio streams for all avatars other than your own are combined
together. SLVoice tells us when each avatar starts speaking and when he stops
speaking, and a little bit about how loud he is speaking, so the information is there,
but it doesn't tell us how to untangle the audio. In order to do good quality
energy-based lip sync, we would need a way of identifying the audio with the correct
avatar.
How to Babble
This section describes the settings that can be made in Second Life for lip sync. Here is
a list of settings used for lip sync together with their default values and descriptions.
LipSyncEnabled
1
0 disables lip sync and 1 enables the babble loop. In the future there may be options 2 and above for other forms of lip sync.
LipSyncOohAahRate
24 (per second)
The rate at which the Ooh and Aah sequences are processed. The morph target is
updated at this rate, but the rate at which the display gets updated still determines the
actual frame rate of the rendering.
LipSyncOoh
1247898743223344444443200000
A sequence of digits that represent the amount of mouth puckering. This sequence is
repeated while the speaker continues to speak. This drives one of the morphs for the
mouth animation loop. A value "0" means no puckering. A value "9" maximizes the
puckering morph. The sequence can be of any length. It need not be the same length
as LipSyncAah. Setting the sequence to a single character essentially disables the loop,
and the amount of puckering is just modulated by the Vivox power measurement.
Setting it to just zeros completely disables the ooh morphing.
LipSyncAah
257998776531013446642343
A sequence of digits that represent the amount of jaw opening. This sequence is
repeated while the speaker continues to speak. This drives one of the morphs for the
mouth animation loop. A value "0" means closed. A value "9" maximizes the jaw
opening. The sequence can be of any length. It need not be the same length as
LipSyncOoh. Setting the sequence to a single character essentially disables the loop,
and the amount of jaw opening is just modulated by the Vivox power measurement.
Setting it to just zeros completely disables the aah morphing.
LipSyncOohPowerTransfer
0012345566778899
The amplitude of the animation loops for ooh and aah is modulated by the power
measurements made by the Vivox voice client. This function provides a transfer
function for the ooh modulation. The ooh sound is not directly related to the speech
power, so this isn't a linear function. The sequence can be of any length. Setting it to a single digit essentially disables the modulation and keeps it at a fixed value.
LipSyncAahPowerTransfer
0000123456789
The amplitude of the animation loops for ooh and aah is modulated by the power
measurements made by the Vivox voice client. This function provides a transfer
function for the aah modulation. The aah sound is pretty well correlated with the
speech power, but to prevent low power noise from making the lips move, we put a
few zeros at the start of this sequence. The sequence can be of any length. Setting it to a single digit essentially disables the modulation and keeps it at a fixed value.
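Putting the four settings together, here is a rough Python sketch of how one update of the babble loop might be computed. This is an interpretation of the descriptions above, not the actual viewer code, and the 0-9 digit scaling is an assumption:

    def transfer(seq, power):
        """Map a 0.0-1.0 Vivox power level through a digit-string transfer function."""
        i = min(int(power * len(seq)), len(seq) - 1)
        return int(seq[i]) / 9.0

    def babble_weight(frame, loop, transfer_seq, power):
        """Morph weight for one update: the loop digit scaled by transferred power."""
        base = int(loop[frame % len(loop)]) / 9.0
        return base * transfer(transfer_seq, power)

    # Aah weight at update frame 16 while speaking at half power:
    w = babble_weight(16, "257998776531013446642343", "0000123456789", 0.5)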
Tip for machinimators: If you have a close-up shot for which you want accurate lip
sync and you're willing to hand tune the lip sync, here's how you can do that.
Record the phrase that you want to sync. If you have a tool that gives you a phoneme sequence from an audio clip, use that to get the time intervals for each phone. If not,
guess. Use that data to define the LipSyncAah parameter for the entire phrase.
Include zeros at the end for the duration of the clip so the loop doesn't restart if you
talk too long.
Set the LipSyncAahPowerTransfer to a single digit that will define your maximum lip
sync amplitude. Set LipSyncOohPowerTransfer to a single zero to disable the ooh
babble. Then speak without stopping for the duration of the original phrase as you
record the video. You can say anything, you're not using this audio, you just want to
keep the babble loop going. Finally, combine the recorded audio and video to see how
well it matches. Note where you need to adjust the babble loop.
After you have the aahs figured out, enable the ooh loop by setting
LipSyncOohPowerTransfer to a single digit (probably the max, 9). Then you can set
the oohs using the LipSyncOoh parameter to adjust the mouth widths.
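If you do have phone timings, converting them into a LipSyncAah digit string is mechanical, as this sketch shows (it assumes the default 24-per-second update rate and a hypothetical list of (start, end, openness) tuples):

    RATE = 24  # LipSyncOohAahRate updates per second (the default)

    def aah_string(phones, clip_seconds):
        """phones: (start_sec, end_sec, openness 0-9) tuples. Returns a digit
        string covering the clip, zero-padded so the loop never restarts."""
        digits = ["0"] * int(clip_seconds * RATE)
        for start, end, openness in phones:
            for f in range(int(start * RATE), min(int(end * RATE), len(digits))):
                digits[f] = str(openness)
        return "".join(digits)

    print(aah_string([(0.0, 0.2, 7), (0.3, 0.5, 4)], 1.0))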
Advanced tip for machinimators: At the default frame rate, the LipSyncOoh and LipSyncAah fields in the Debug Settings can take about five seconds worth of data. If
you want to enter more than that, read on. The limitation is actually just in the entry
field. There is no limit internal to the program. Luckily, the details of the UI are defined
in XML files. You just have to know where to find them and how to change them.
It's getting better all the Time
This is hopefully just the first step of several to implement lip sync in Second Life.
It's still not very good, but it's better than nothing. We hope to get real-time
energy-based lip-sync if we can get access to the audio streams. After that, we hope to
be able to automatically estimate the appropriate amount of ooh morphing to make
the mouth movement more realistic. Stay tuned.
Lip-sync – Definition
One is a form of musical pantomime in which a performer moves his/her lips to the
words of a played musical recording, creating the illusion of the performer singing in
the recorded singer's voice. The hobby reached its greatest popularity in the 1980s,
hitting its peak with the syndicated television game show Puttin' on the Hits.
Professional performers sometimes use this method in live performances, especially
in dance numbers that require too much exertion to perform as well as sing. It was
once common in the Hong Kong music scene. It can also be used fraudulently to misrepresent a musical act, with the group Milli Vanilli being the most notorious example.
The other is the art of making a character appear to speak in a pre-recorded track of
dialogue. The lip sync technique to make an animated character appear to speak
involves figuring out the timings of the speech (breakdown) as well as the actual
animating of the lips/mouth to match the dialogue track. The earliest examples of
lip-sync in animation were attempted by Max Fleischer in his 1926 short My Old
Kentucky Home. The technique continues to this day, with animated films and
television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip-syncing to
make their artificial characters talk. Lip synching is also used in comedies such as This Hour Has 22 Minutes and in political satire, changing the original wording totally or just partially. It has been used in conjunction with translation of films from one language
to another, for example, Spirited Away.
An example of a lip synchronization problem is the case in which television video and
audio signals are transported via different facilities (e.g., a geosynchronous satellite radio link and a landline) that have significantly different delay times.
In such cases it is necessary to delay the audio electronically to allow for the
difference in propagation times.
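The correction itself is simple arithmetic, as this sketch shows (the path delay figures are assumed purely for illustration):

    # Assumed path delays, for illustration only.
    video_path_delay_ms = 250.0  # e.g., a geosynchronous satellite hop
    audio_path_delay_ms = 20.0   # e.g., a terrestrial landline

    # Delay the audio by the difference so both signals arrive together.
    audio_delay_ms = video_path_delay_ms - audio_path_delay_ms  # 230.0 ms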
Lip sync Mouth Animation
Have you ever wondered how exactly to animate to dialogue? Making a cartoon
character speak can seem like a daunting task. Learning which mouth shapes and mouth positions to use is crucial to creating convincing lip sync animation. The good news is
that others have already done the work of figuring out which facial expressions and
mouth drawings work best. It can be very easy to do once you have the right
information.
Mouth Positions
Although many different mouth movements and shapes are possible, they can be simplified into a few specific mouth shapes that work remarkably well!
For cartoon characters, the following seven mouth shapes produce great results for lip
sync animation:
M,B and P (Closed mouth)
AH (Open mouth)
EEE, or EH
CONSONANTS (examples: R, D, N, S, etc.)
OH, W
TH, L
F, V
Figure 4.1: Drawing Cartoon Mouth Positions
There are certain principles and techniques that can be applied when drawing your character's mouth positions. Cartoon Solutions provides a simple step-by-step video
tutorial that outlines just how to create the necessary mouth shapes so you can
animate your character's lip sync animation.
Creating Lip Sync Animation with Character Packs
If you want a fast and easy way to get straight to animating dialogue without having
to create your own character's mouth movements and facial expressions, Cartoon
Solutions provides Character Packs. Character Packs are pre-made cartoon characters,
already built and assembled for you. Once you have your Character Pack, you'll see
that the characters can be animated using 7 different mouth positions (for both the
front and side views), which makes animating to dialogue a simple process.
Because there are many animation and design programs, Character Packs are
available in the following file formats: Flash, Anime Studio, Photoshop and Toon
Boom.
Free Lip Sync Animation Training
If you have little or no experience animating cartoon dialogue, you'll be glad to know
that free animation training is available via free online tutorials, as well as full training courses on the Cartoon Solutions website.
If you are planning on using Flash to animate lip sync animation, then make sure to
view the free video tutorial: Animating Dialogue using the "Mouth Comp" system.
This method is sure to speed up the time it takes to create dialogue animation!
Good Facial Animation or Lip Sync Animation
There are four basic stages to good facial animation or lip-sync animation:
1. Foundation
2. Structure
3. Details
4. Polish
Whatever type of character animation you are doing – whether it's cartoony animation, realistic animation, or anything in between – you should always follow
these basic stages in this order. You can find more detail on this at the Character
Animation Community – Animation Salvation. Let's look into these Animation Stages
in more detail:
Lip Sync Animation Foundation
We lay a solid foundation for our animation by listening to our soundtrack and "getting into character". You really need to listen to the soundtrack over and over and over until it is permanently written on your brain.
Once you have this on your brain, close your eyes and imagine your scene in every
detail. Really see your character in your mind acting out the dialogue you've been
assigned, but imagine it as if it were animated by the BEST animator in the world!
(why not, it's YOUR imagination).
Next, quickly thumbnail out your scene with very rough stick figures. You're not
entering an art competition here, you're just getting down on paper some key poses
that you will want your character to hit.
Once you've got your animation thumbnailed out on paper, try to imagine it again as
you play the soundtrack, and scan your eyes across your thumbnails to see if the
animation you saw in your head matches your thumbnails and if your thumbnails
match the timing of your soundtrack.
I also like to make a note of the rough timing of my key poses. This helps later in the
animation workflow.
Lip Sync Animation Structure
The next stage is the bare structure of our lip sync animation: Open/Closed and Wide/Narrow. In other words, we simply animate the mouth open and closed
positions, and then we animate the mouth wide and narrow positions.
We feel open/closed positions for our lip sync animation by placing our chin on our
fist and saying the dialogue at full speed.
Next we feel the wide/narrow positions for our lip sync animation by placing our
fingers on the corners of our mouth and, again, say the dialogue at full speed.
It is essential to say the dialogue at full speed or else we may exaggerate the amount
of Open/Closed jaw positions or Wide/Narrow. This will make our lip sync animation look over-animated and wrong.
Eye Sync Animation Structure
This means things like eye directions, blinks, eyebrow animation and facial
expressions. This is where we start adding the character to our character animation!
Dialogue
Although much in animation can be communicated entirely via action – such as the
pantomime-based performances of Charlie Chaplin's tramp character of silent picture
fame, and Mr Bean, for example – there are times when dialogue is the most efficient
means of expressing the desires, needs and thoughts of a character in order to
progress the storyline. Dialogue can be as profound as a speech that changes the lives
of other characters in the plot, or as mundane as a character muttering to itself in a
manner that fleshes out its personality, making it more believable to the audience.
Voice Characterisation
Choosing the right voice is vital. Much of a character and its personality traits can be
quickly established by the performance of the actor behind the drawings thereby
taking a huge load off the animator. If the real-life actor who is supplying the voice to
your drawings understands the part, they can very often make significant
contributions to a scene through ad libs and asides that are always 'in character'.
If you have given your character something to do during the delivery of their
dialogue, you must inform the voice talent. If your character is doing some action that
requires effort, for example, that physical strain should be reflected in the delivery of
the line.
Just as the designs for any ensemble of animated characters should look distinctive, so
should their voices. Heavy, lightweight, male, female, husky, smooth or accented
voices are some of the dialogue textures that need to be considered when thinking
about animated characters. Using professional talent who can tune and time their
performance to the animator's requirements usually pays dividends. It is immensely
inspiring to animate to a well acted and delivered dialogue. It is interesting that if you
ask practicing animators about what they actually do, most will describe themselves
as actors whose on-camera performance is realised through their craft.
Unfortunately drawings, clay puppets and computer meshes don't talk, so when our
synthesised characters are required to say something, their dialogue has to be
recorded and analysed first before we can begin to animate them speaking. Lip
synchronisation or 'lip-sync' is the technique of moving a mouth on an animated
character in such a way that it appears to speak in synchronism with the sound track.
So how is this done?
Track Reading
Still in use today is a method of analysing sound frame by frame which dates from the
genesis of sound cartoons themselves during the late 1920s. Traditionally, this
involved transferring the dialogue tracks for animated films onto sprocketed optical
sound film, and later from the 1950s, sprocketed magnetic film. The sprocket holes on
this sound film exactly match with those of motion picture film enabling sound and
image to be mechanically locked together on editing and sound mixing machines.
A 'gang synchroniser' was used to locate individual components of the dialogue track
with great precision. This device consists of a large sprocketed wheel over which the
magnetic film can be threaded. The sound film is driven by hand back and forth over
a magnetic pick-up head until each part of a word can be identified. This process is
called 'track reading'. The dialogue track is analysed and the information is charted
up onto camera exposure sheets, sometimes called 'dope sheets' or 'camera charts', as
a guide for the animator.
Dialogue can now be accurately analysed using digital sound tools such as 'SoundEdit16' or 'Audacity', which allow you to 'scrub' back and forth over a graphical depiction of a sound wave. When using a digital tool to do your track-reading, it's vital that the frame-rate or tempo is set to 25 fps (frames per second), otherwise your soundtrack may not synchronise with your animation.
Note: The timeline of 'Flash' showing a sound waveform, individual frames and the 25 frames per second setting.
Dope Sheet
Dialogue is charted up in the sound column of the dope sheet. Each dope sheet
represents 100 frames of animation or 4 seconds of screen time. Exposure sheets have
frame numbers printed down one side making it possible to locate any sound, piece of
dialogue, music beat or drawing against a frame number. This means that when the
animation is eventually photographed onto motion picture film, it will exactly
synchronise with the soundtrack.
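Locating a sound on a dope sheet is therefore a matter of frame arithmetic, as in this small sketch (assuming the 25 fps rate used for track reading):

    FPS = 25     # frames per second used for track reading
    SHEET = 100  # frames per exposure sheet, i.e. 4 seconds of screen time

    def locate(seconds):
        """Return (sheet number, frame within sheet) for a sound event time."""
        frame = int(seconds * FPS)
        return frame // SHEET + 1, frame % SHEET + 1

    print(locate(6.5))  # a sound at 6.5 s falls on sheet 2, frame 63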
Dope sheets and the information charted up on them provide an exact means of
communicating the animator's intent to those further down the production chain so
that everyone in the studio understands how all the hundreds or thousands of
drawings are to come together and how they are to be photographed under the
camera. Dope sheets employ a kind of standardised language and symbology which
is universally understood by animators around the world. Even computer animators
use dope sheets! Get to know and love them.
Analysis Dialogue
There is an art to analysing dialogue. Sentences are like a continuous river of various
sounds with few obvious breaks. More often than not, the end of one word sound
flows directly into the next. It is our understanding of the rules of language that gives
us the key to unlock the puzzle and to resolve each individual word.
English is not a phonetic language and part of the art of good lip-sync is the ability to
interpret the sounds (phonetics) you are hearing rather than attempting to animate
each letter of a word. For example, the word 'there' consists of five letters yet requires
only two mouth shapes to animate, the 'th' sound and the 'air' sound. The word 'I' is a
single letter in its written form but also requires two mouth positions, 'Ah' and 'ee'.
Accents can also determine which mouth shapes you choose. It's actually easier to chart up dialogue in a foreign language, even though we can't understand it.
The simplest lip-sync involves correctly timing the 'mouth-open' and 'mouth-closed'
positions. Think of the way the Muppets are forced to talk. Their lips can't deform to
make all of the complex mouth shapes required for true dialogue, but the simple
contrast of open and shut makes for effective lip-sync if reasonably timed. More
convincing lip-sync requires about 8 to 10 mouths of various shapes.
As you work through a dialogue passage, it quickly becomes apparent that the key
mouth shapes can be re-cycled in different combinations over and over again so that
we could keep our character talking for as long as we like. We can use this to
advantage to save ourselves work. If a character's head remains static during a
passage of dialogue, we can simply draw a series of mouths onto a separate cel level
and place these over a drawing of a face without a mouth. Special care should be
taken to design a mouth so that it looks as though it belongs to the character. Retain
the same sort of perspective view in the mouth as you have chosen for the face to
avoid mouths that look as though they are merely stuck on over the top of the face.
Remember too, that the top set of teeth is fixed to the skull and it's the bottom teeth and jaw that do the moving.
Sometimes the whole head can be treated as the animating 'lip-sync' component. This
enables you to have a bottom jaw that actually opens and drops lower and also allows
you to work stretch and squash distortions into the entire face. Rarely does any one
mouth position have to be on screen for less than two frames. Single frame animating
for lip-sync usually looks too busy. In-betweens from one mouth shape to the next are
mostly unnecessary in 'limited' animation unless the character speaks particularly
slowly. Therefore the mouth can snap directly from one of the recognised key mouth
shape positions to the next.
Body Language
Talking heads can be boring and, without the richness of detail and texture found in
real-life faces, animated ones are even more so. Gestures can tell us something about
the personality of a particular character and the way it is feeling. Give your character
something to do during the dialogue sequence. The use of hand, arm, body gestures
and facial expressions, in fact involving the whole body in the delivery of dialogue,
makes for something far richer to look at than just watching the mouth itself move.
These gestures may be wild and extravagant, a jump for joy, large sweeps of the arms, or
as small and subtle as the raising of an eyebrow.
Pointing, banging the table, a shrug of the shoulders, anything may be useful to
emphasise a word in the dialogue or to pick up a sound accent which helps give the audience a clue as to what the character is feeling and absolutely gives the animated
character ownership of the words. The delivery of the dialogue during recording will
often dictate where these accents should fall. Mannerisms help establish character too.
A scowl, a scratch of the ear, or some uncontrollable twitch or other idiosyncratic
behaviour.
Use quick thumbnail sketches to help you develop the key poses that you believe will best help express the meaning and emotional content of the words and the way they have been delivered. Broadly phrasing the dialogue into sections where a key pose seems appropriate is a good starting point. Sometimes these visual accents (key
poses) might occur just on one word that you want to emphasise. At other times the
gesture might flow across an entire sentence.
Note: Disney animator, Frank Thomas, uses rough thumbnail sketches to work out key poses for a dialogue sequence for Baloo in
Jungle Book.
The Animator as Actor
Character animators often refer to themselves as actors. All actors must understand
what motivates their characters and what kind of emotional context is required for
any given scene. More on this later, but suffice it to say that you must try and animate
from the inside out. That is, to know the inner thoughts and feelings of your character,
and to try and express these externally.
Tips
When charting up 'dope sheets', always use a soft pencil and keep an eraser at hand.
You'll be making plenty of mistakes to start with. The best way to begin mapping out
a dialogue sequence is to divide the dialogue into its natural phraseology. Draw a
whole lot of thumb-nail sketches in various expressive poses and decide which ones
best relate to what is being said and which might usefully underpin the way a line of
dialogue, or a word, is delivered. Animate gestures and body language first, then,
when you are happy with the action test, go back and add in the mouth afterwards.
Having arrived at several expressive gestural poses, don't throw this effort away by
having them appear on the screen for too short a time. Save yourself work by
wringing out as much value from these strategic poses as you can before moving on.
Disney rarely stopped anything moving for too long, exploiting a technique his studio developed called the 'moving hold', in which the characters almost, but never quite, stopped moving when they fell into a pose. Loose appendages come to a stop after the main mass of the character has reached its final position, and before any part of the character stops entirely, other parts begin to move off again. That's great if you have a vast studio backing up the production, where each animator has an assistant and an inbetweener to do a lot of the hack work. You are a one person band, so learn the
value of the 'hold'.
Unless your character is a particularly loud and overbearing soul, most lip-sync is
best underplayed, except for important accents and vowel sounds. This is especially
true where a film's style has moved character design closer to realistic human
proportions. In this case minimal mouth movement is usually more successful. Much
lip-sync animation is spoiled not so much by inaccurate interpretation of the mouth
shapes required, but by undue emphasis on the size and mechanics of the mouth.
Been there done that to my embarrassment.
The audience often watches the eyes, particularly during close-ups, so emphasis and accents can be initiated here even before the rest of the face and mouth is considered. Speak to me with thine eyes – it's a powerful way of getting a character to
communicate inner feelings without actually saying anything. Even the act of
thinking of words to speak can be expressed in the eyes. See notes on animating eyes.
Animated characters need to breathe too, especially where indicated on the sound track. It's also a good idea to anticipate dialogue with an open mouth shape that lets
the character suck in some air before forming the first word.
Style
Approaches to lip-sync can be just as varied as the different stylistic approaches to
character design – simple, elaborate, restrained, exaggerated – busy with teeth and
tongue, or just a plain slit. Every individual animator's approach to lip-sync is
different too. In large studios where more than one animator is in charge of the same
character, extensive notes and drawings will instruct the team how to work the mouth
to keep it looking the same throughout. The way a mouth might work is very often
determined by the design of the head in character model sheets. Think of the five o'clock shadow on the faces of Homer Simpson or Fred Flintstone and the way this bit of design can be pulled off to make the mouth move. Sometimes mouths are simply hidden
behind a wiggling mustache.
The Simpsons, South Park, Reboot, UPA stuff (Mr. Magoo), Charlie Brown (you never see teeth), the distinctive lip-sync of Nick Park's Creature Comforts and Wallace and Gromit (since parodied by one of our graduates, Nick Donkin, in a Yogo commercial) are all based on a stylistic solution that fits their characters' designs. I'm always
amused by the Japanese approach to lip-sync. A petite young lady will have a tiny
mouth which occupies about 0.01% of her face, but sometimes it can open up to
become a gross 60% when she gets agitated!
Along with the application of computer technology to nearly every aspect of
animated film production, not only 3D but also in tools for 2D animation, has come an
increasing effort to automate the process of lip-sync. "Why", software designers and
producers are asking, "can't the computer analyse a sound wave form automatically
and then determine which mouth shapes to use?" There are lip-sync plug-ins for 3D
animation that create a muscle-like structure in the mouth area of a 3D character
which can be made to deform according to a predetermined library of shapes or
'morph targets'. The children's animated series, 'Reboot' uses this technique. There are
also tools which allow the animator to quickly try out mouth shapes against a piece of
dialogue.
DubAnimation
Well blow me down and shut my mouth! Now there is a piece of software which will
do the analysis for you and chart up the phonetic breakdown into an electronic dope
sheet. You can throw away that old gang synchroniser. It's called dubAnimation.
Look at the way dubAnimation writes up its electronic exposure sheet. Some letters of
the cursive writing are extended to indicate the length of that particular phonetic. This
is just the way animators used to write up their exposure sheets. What a clever little
tool!
"Don't have much
to say about lipsync.
I'm too busy walking
Lip-sync Animation: Using an X Sheet
Animating dialogue can be a time consuming process, but there are a few simple steps
you can take to make it easier and help speed up the process. One of the things you can
do is to use a virtual exposure sheet (X sheet) as is used in traditional animation. Doing
this with dialogue animation will let you know what word and mouth shape are
coming up in the animation and where they can be found in the timeline.
Importing Sound Files
Import your sound file into Flash by choosing File, Import and then browse to the file
you want to use. Make sure that your layers are NOT locked, otherwise your sound
will not import.
Click on the layer on the timeline where you want the sound to start and then create a
keyframe (F7).
In the properties box select your sound file from the drop down menu and make sure
your sound settings are set to "Stream". This enables you to hear the sound file as you
scrub back and forth in the timeline.
After you have made a blank keyframe on your layer in the timeline, the "Sound"
option will be available to you in the Property Box. Here, no sound has been selected
from the drop down menu yet, and the default setting for the sound is set to "Event".
Once the sound file has been imported, it will be available from the "Sound" drop
down menu. Make sure to change the sound's properties from "Event" to "Stream". This will enable you to scrub through the timeline and hear your audio track as you work.
Getting the Text into the Timeline
Create a new layer in the timeline above the sound file layer. This layer will act as the
X sheet. On this layer you will type in the lines of dialogue as heard in the sound file.
This is especially helpful in scenes with lots of dialogue. Press "Play" by hitting the
"Enter" or "Return" key to hear the sound file play. This will allow you to hear what
words are being spoken. You can also use the Time Slider to scrub back and forth and
listen to individual parts of the sound file and hear only certain sections instead of the
whole file.
In the timeline, find where the first word starts and enter a blank keyframe. Go down
to the property box, and in the input field, type the first word. This text will now
appear above the sound layer in the timeline, letting you know what word starts at
what point in the file.
In the Property box, click inside the "Frame Label" box to make it ready for text input.
Begin typing in your text and when you are done hit the "Enter" or "Return" key.
The text will appear in the layer where you have entered a keyframe.
It's a good practice to spell out the words how they sound to the ear rather than
always using their correct spelling. The reason for doing this is to get you to the right
mouth shape quicker. For example, the word "shop" could be spelled "sh-ahhp".
Even though the word is spelled with an "O", the mouth pose for the character should
be the "AH" mouth rather than using the "O" mouth.
Facial Animation and Speech
Presenting a 3D-animated face together with a corresponding speech source means combining two information channels. This combination of visual and auditory cues has certain consequences that must be considered.
Research in psycholinguistics provides a general study of the problems arising in this situation, leading to the notion of bimodality in speech perception.
Regarding the lip-reading capability of the deaf, it is well known that this form of
speech perception uses information recoverable from visual movements of the
various articulators like the lips, tongue and teeth. Moreover, it has been discovered that hearing-impaired people are not the only ones who utilize this visual information. Several investigations demonstrate that the intelligibility of speech is enhanced by watching the speaker's face, especially if acoustic information is distorted by background noise, even for normal hearers. This bimodality of visual and acoustic information shows the necessity of coherence. Experimental results obtained from perceptual studies reveal the need for spatial, temporal and source coherence.
Spatial Coherence
Concerning speaker localization, we can see that vision is dominant over audition. Such a "capture of source" is widely used by ventriloquists, where the audience is much more attracted by the dummy, whose facial gestures are more coherent with the speech signal than those of its animator. This demonstrates the capacity of humans to identify coherence between facial gestures and their corresponding acoustic production, which is developed even by four-to-five-month-old infants. This capacity is frequently used
by listeners in order to improve the intelligibility of a single person in a conversation
group, when the well known "cocktail party effect" occurs.
Temporal Coherence
Acoustically and optically transmitted information contains inherent synchrony. Experimentally, Dixon and Spitz observed that subjects are unable to detect asynchrony between the visual and auditory presentation of speech when the acoustic signal is presented less than 130 ms before or 260 ms after the continuous video display of the speaker's face. But it was also found that these bounds lower to 75 ms
(before) and 190 ms (after) in the case of a punctual event, such as a hammer hitting
an anvil.
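These bounds can be encoded directly, as in this small sketch (sign convention: a positive value means the audio leads the video):

    def asynchrony_detectable(audio_lead_ms, speech=True):
        """True if a viewer would notice the audio-video offset, using the
        Dixon and Spitz bounds quoted above."""
        lead, lag = (130, 260) if speech else (75, 190)
        return audio_lead_ms > lead or -audio_lead_ms > lag

    print(asynchrony_detectable(100))         # False: tolerable for speech
    print(asynchrony_detectable(100, False))  # True: a hammer blow is less forgiving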
Source Coherence
McGurk found that the simultaneous presentation of an acoustic /ba/ and a visual
/ga/ makes the listener or viewer perceive a /da/. This effect shows that audio and
video must present the same information content.
To sum up these psycholinguistic results we get:
Vision greatly improves speech intelligibility
Concerning speaker localization, vision is dominant over audition
Synchrony holds even when the channels are slightly time delayed
Vision can bias auditory comprehension, as in the "McGurk effect"
Animation Parameters
In addition to the outlined psycholinguistic findings, we will have to contend with one of the major problems in character animation: what parameters should be used for a particular facial model. Requirements for choosing the parameters are:
all necessary movements are covered
the parameters are intuitive to handle
the animator can overcome the complexity
Considering the way character animation is done in cartoons, we can get a first impression of which basic parameters are necessary to "lip-sync" a character. Basically, the cartoon designer defines 4-7 keyframes for lip-sync, including three for the central "mouth-open" vowels /a/, /e/, /o/, and one for the group of "mouth-close" consonants /b,m,p/. The animator generates the in-betweens by graphic interpolation. Depending on the aspired quality of a character, it is necessary to define additional keyframes to refine the in-betweens.
Visemes
Another approach to determining the basic animation parameters is to estimate visual similarities between different phonemes. Visually identical mouth positions for different phonemes are collected into classes of visual phonemes. These so-called visemes are then used as animation parameters to get keyframes for all possible mouth positions. Fischer classified visemes for English; Owens and Blazek found 21 visemes for "General American"; Fukuda and Hiki revealed 13 visemes for Japanese; Alich found 12 visemes for German; and Benoît found 21 visemes for French.
Coarticulation Effects
But another effect remains: speaking is not a process of uttering a sequence of discrete units. A key difficulty associated with connected speech is the effect of coarticulation, which plays a great role in subjects' ability to process visual information. The speech signal for any particular word is influenced by the words that go before and after it. Even the signal for a single phoneme is influenced in the same manner by surrounding phonemes. Additionally, in the context of a sentence, intonation and vocal reduction (depending on speech speed) alter the speech signal. On the other hand, coarticulation also takes place in the visual movements of the articulators and occurs at least as much in visual as in acoustic speech. For both sources, context distorts the visual and acoustic patterns that would ideally be expected and complicates the production of visual and acoustic cues.
In cartoons this effect is overcome by individual graphic interpolation of the
transitions by the animator. When using visemes as animation parameters,
workarounds have to be created, like special transition visemes in certain contexts or
tuning the bias of a viseme by a particular context. But visemes remain speaker
dependent and complex to handle if someone wants to consider coarticulation effects.
Our approach is to use signal processing techniques to extract relevant features from the speech signal and to learn the mapping to the animation parameters via a neural net. Training of the neural net is done with manually created animations.
The Face Model
The face model that is being used is based on a software environment developed at
the Academy of Media Arts in Cologne, Germany. The software consists of several
distinct modules for every single step of 3D-character animation. Available steps are
sculpturing, modeling and animation. The characters consist mainly of one coherent polygon mesh with normal vectors and texture coordinates attached to its vertices. Eyes, teeth and optional accessories are modelled as separate objects.
Motions and facial expressions are achieved by deformations of the polygon surface, i.e. displacing the vertices' coordinates while the net topology is preserved. Individual facial expressions and motions are complex movements of sets of vertices. They are defined by clusters of vertex transformations, where each vertex has its own translation and rotation about a chosen axis. The set of basic expressions and motions
is designed by the animator during the modelling phase of the character. By mixing
and scaling subsets of these basic motions, complex expressions can be generated. The
intensity of each expression can be controlled by setting a parameter value within the
range of 0 and 1.
For the lip-sync task, the expressions that are important have to be chosen from the
list of mouth expressions of the character. As animation parameters we chose the
vowels /a/, /i/, /o,u/, the mouth closing consonants /b,m,p/ and /open/. The
parameter /open/ accounts for general mouth opening that can not be expressed by
the other parameters. The most important parameters are in fact the mouth closing
(/b,m,p/) and a clear /a/ and /o,u/. If these parameters are not matched very well
the user gets the impression of asynchrony immediately.
Processing Speech Data
Due to the 25 frames/sec refresh rate of the animation system, a speech signal context of 112 ms is extracted every 40 ms. This context is segmented via a Hamming window function into 9 frames computed every 10 ms, resulting in segments of 32 ms length. Thereafter, these speech segments are analyzed using a fast Fourier transform (FFT) algorithm. Next, the logarithmically scaled vocal spectrum is summed up in 27 distinctive areas to extract a feature vector of the vocal spectra. This feature vector is then input to the neural network.
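A Python/numpy sketch of this front end follows. The timings are taken from the text; the sample rate and the equal-width band split are assumptions, since they are not specified here:

    import numpy as np

    def feature_block(context, sr=16000):
        """context: 112 ms of audio samples (1792 at the assumed 16 kHz).
        Returns a 9 x 27 block of log-spectral band sums."""
        frame_len, hop = int(0.032 * sr), int(0.010 * sr)
        window = np.hamming(frame_len)
        feats = []
        for i in range(9):                        # 9 frames, 10 ms apart
            frame = context[i * hop : i * hop + frame_len] * window
            log_spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-8)
            bands = np.array_split(log_spec, 27)  # 27 summed spectral areas
            feats.append([b.sum() for b in bands])
        return np.array(feats)                    # feeds the 27 x 9 input layer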
Network Architecture
The extracted feature vector forms the input of the neural net, and the set of animation parameters determines its output. The neural net is a 3-layer feed-forward net
with back-propagation learning rule. It is built of 27 x 9 input neurons to match the extracted feature vector, 18 hidden units and 5 output elements to match the 5 animation parameters. The number of hidden units was found by experimentation and extensive testing: several test runs were made with different numbers of hidden units, and 18 units gave the best results; more units did not improve the result.
Source: 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, Berlin, August 1997.
Figure 4.2: Schematic View of Speech Data Processing
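As a minimal sketch of the described topology (the sigmoid activation and random initialisation are assumptions; only the layer sizes and the backpropagation rule are given above):

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 27 * 9, 18, 5  # feature block -> 5 morph parameters
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(x):
        """One pass through the 3-layer net; sigmoids keep the outputs in the
        0-1 range required of the animation parameters."""
        h = 1.0 / (1.0 + np.exp(-(x @ W1)))
        return 1.0 / (1.0 + np.exp(-(h @ W2)))

    params = forward(rng.normal(size=n_in))  # /a/, /i/, /o,u/, /b,m,p/, /open/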
Discussion/Results
For testing, the neural net needs to be trained first. We used a 20-second sound sample which was animated manually using our character animation system. The system allows us to output the chosen mouth expressions, i.e. for each animation parameter a value between 0 and 1 will be generated every 40 ms, which can then be used to train the net. Thereafter, we animated a two-minute sequence by presenting an unknown speech signal to the net and using the net output as animation parameters.
In Figure 4.3, part of the two-minute speech signal and the created animation parameters are shown. The labels indicate the corresponding animation parameter for each curve. The spoken text (in German) is written underneath the speech signal. Although the visual impression of the animation is rather convincing, it is not perfect and manual correction is necessary. The correction took about 3 hours, which is a short period of time compared to the 10 hours that would have been necessary when animating the whole sequence by hand.
Source: 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, Berlin, August 1997, 6.
Figure 4.3: Animation Parameters and Speech Signal
Some animation parameters, like the vowel /i/ at the beginning of the sequence or the /u/ in the German word "nur", are correctly set by the net; other words, like "klar", are not represented correctly. There are several sources these mismatches might stem from. Either the training sequence was not representative, or the speaker spoke in a different manner in the two-minute sequence than in the training sequence, an effect that often occurs with non-professional speakers (we achieved better results with professional speakers). The third source of error is the manual animation of the training sequence, which was animated by a professional animator but is not necessarily consistent in its mapping from spoken text to animation parameters.
We presented a technique that facilitates the process of lip-synchronization for
3D-character animation. We use a neural net that was trained by a short sample
sequence. The animation results given by the trained net with an unknown speech
sample are not perfect but promising. They need manual correction but do reduce the
production time to less than 30% compared to a complete manual approach. A central
task for animating computer generated characters is the synchronization of lip
movements and speech signal. For real-time synchronization, considerable technical effort is needed, involving a face tracking system or data gloves to drive the expressions of the character. If the speech signal is already given, off-line synchronization is possible, but the animator is left with a time-consuming manual process that often needs several passes of fine tuning. In this unit, we present an
automated technique for creating the lip movements of a computer-generated
character from a given speech signal by using a neural net. The net is trained by a set
of pre-produced animations.
Kinds of Cartoons
We're sure you already know a lot about cartoons. By the time you get to be four
you've probably seen all kinds – Disney, Bugs Bunny, Dora the Explorer.... You get
the idea. But did you know that older people regularly use cartoons to tell their side
of a story, make a point or a joke, or explain how they feel about something?
These other kinds of cartoons are called political cartoons, and with the elections
coming up next year, you are going to start seeing more and more of them in
newspapers, online, in magazines and in other places. They have been used for
hundreds of years to support, or sometimes make fun of, candidates running for
office or to see another side to issues like wars, immigration or taxes.
To understand what we mean, take a look at this poster that's a political cartoon from
the time just after the Civil War. U.S. Grant was running for President of the United
States. Grant was the general who led the Union army. Can you tell what the joke is?
To make a political cartoon, the cartoonist used a drawing of Grant and added a really
creative title. It uses Grant's first two initials – US (just like the US for United States)
or the word us – in a unique way. People back in the 1860s had just gone through a
horrible war and they wanted peace very badly. Why do you think this is a very good
poster for Grant? Would it make you want to vote for him?
Finding Political Cartoons for your Project and Reports
There's an old saying that "a picture says more than a thousand words." That was
what the artist who put the US Grant political cartoon poster together wanted to do.
He had a picture and just a couple of words to use and he wanted to get the most out
of it. If he had just said "vote for US Grant" the poster would not have gotten the same
kind of attention.
So what about using a political cartoon in your writing or project? It's best if you get
your parents or teacher to help you locate a nice cartoon for your reports. Here are
some places they might search with you:
Daryl Cagle's Professional Cartoonist Index: This site includes a section for teachers,
which you and your parents can use, too.
Cartoons for the Classroom: This site has good ideas for cartoons. You can download
them and create your own cartoons.
Why do you have to be Careful when using Cartoons?
1. Unless you draw your own cartoon, the cartoons you use belong to somebody
else. You and your parents or teacher should check to see if you are allowed to
use other people's cartoons. Many times older cartoons are free for you to use
because they are in what is called the public domain. That means anyone can use
them. Sometimes, like on the sites listed above, the cartoons can be used in your
classroom.
2. Even though we listed two sites where you can find cartoons, some of the
cartoons on these sites may not be for kids your age to use. That's why you need
your parents' or teacher's help when you search for cartoons. You don't want to
use cartoons you don't understand.
3. Sometimes cartoons make fun of people or ideas. If you see a cartoon that would
hurt someone's feelings, don't use it. You can find a better one.
Make your own Cartoons
One of the best ways to get that just-right cartoon is to create your own. Just think
about your topic – maybe it's about polar bears losing their habitat because it's
getting too warm where they live. You might draw a polar bear looking for a new
home. Maybe he's holding a sign that says, "Wanted: New Home Where It's Nice and
Cold." Or maybe your polar bear will have an ice pack on his head, and he'll say in the
text bubble, "....."
If you are an artist, you can try cartooning on paper or on the computer. But you don't
have to be an artist to make your own cartoons. With software like Comic Life, you
can easily design your own cartoons by using photographs and text bubbles. It's great
fun and so easy to use.
Japanese Anime
Anime began at the start of the 20th century, when Japanese filmmakers
experimented with the animation techniques also pioneered in France, Germany, the
United States, and Russia. The oldest known anime in existence first screened
in 1917 – a two-minute clip of a samurai trying to test a new sword on his target, only
to suffer defeat. Early pioneers included Shimokawa Oten, Jun'ichi Kouchi, and
Seitarō Kitayama.
By the 1930s, animation had become an alternative format of storytelling to the
live-action industry in Japan. It suffered from competition with foreign producers,
however, and many animators, such as Noburō Ōfuji and Yasuji Murata, still worked
in cheaper cutout animation rather than cel animation, although with masterful
results. Other creators, such as Kenzō
Masaoka and Mitsuyo Seo, nonetheless made great strides in animation technique,
especially with increasing help from a government using animation in education and
propaganda. The first talkie anime was Chikara to Onna no Yo no Naka, produced by
Masaoka in 1933. The first feature length animated film was Momotaro's Divine Sea
Warriors directed by Seo in 1945 with sponsorship by the Imperial Japanese Navy.
The success of The Walt Disney Company's 1937 feature film Snow White and the
Seven Dwarfs influenced Japanese animators. In the 1960s, manga artist and animator
Osamu Tezuka adapted and simplified many Disney animation techniques to reduce
costs and to limit the number of frames in productions. He intended this as a
temporary measure to allow him to produce material on a tight schedule with
inexperienced animation staff.
The 1970s saw a surge of growth in the popularity of manga – many of them later
animated. The work of Osamu Tezuka drew particular attention: he has been called a
"legend" and the "god of manga". His work – and that of other pioneers in
the field – inspired characteristics and genres that remain fundamental elements of
anime today. The giant robot genre (known as "Mecha" outside Japan), for instance,
took shape under Tezuka, developed into the Super Robot genre under Go Nagai and
others, and was revolutionized at the end of the decade by Yoshiyuki Tomino who
developed the Real Robot genre. Robot anime like the Gundam and The Super
Dimension Fortress Macross series became instant classics in the 1980s, and the robot
genre of anime is still one of the most common in Japan and worldwide today. In the
1980s, anime became more accepted in the mainstream in Japan (although less than
manga), and experienced a boom in production. Following a few successful
adaptations of anime in overseas markets in the 1980s, anime gained increased
acceptance in those markets in the 1990s and even more at the turn of the 21st century.
In Japan, the term anime does not specify an animation's nation of origin or style;
instead, it serves as a blanket term to refer to all forms of animation from around the
world. English-language dictionaries define anime as "a Japanese style of
motion-picture animation" or as "a style of animation developed in Japan".
Non-Japanese works that borrow stylization from anime are commonly referred to as
"anime-influenced animation" but it is not unusual for a viewer who does not know
the country of origin of such material to refer to it as simply "anime". Some works
result from co-productions with non-Japanese companies, such as most of the
traditionally animated Rankin/Bass works, the Cartoon Network and Production I.G
series IGPX or Ōban Star-Racers; different viewers may or may not consider these
anime.
In the UK, many video shops will classify all adult-oriented animated videos in the
"Anime" section for convenience, regardless of whether they show any stylistic
similarities to Japanese animation. No evidence suggests that this has led to any
change in the use of the word.
In English, anime, when used as a common noun, normally functions as a mass noun
(for example: "Do you watch anime?", "How much anime have you collected?").
However, in casual usage the word also appears as a count noun. Anime can also be
used as a suppletive adjective or classifier noun ("The anime Guyver is different from
the movie Guyver").
Synonyms
English-speakers occasionally refer to anime as "Japanimation", but this term has
fallen into disuse. "Japanimation" saw the most usage during the 1970s and 1980s, but
the term "anime" supplanted it in the mid-1990s as the material became more widely
known in English-speaking countries. In general, the term now only appears in
nostalgic contexts. Since "anime" does not identify the country of origin in Japanese
usage, "Japanimation" is used to distinguish Japanese work from that of the rest of the
world.
In Japan, "manga" can refer to both animation and comics. Among English speakers,
"manga" has the stricter meaning of "Japanese comics", in parallel to the usage of
"anime" in and outside of Japan. The term "ani-manga" is used to describe comics
produced from animation cels.
Visual Characteristics
Figure: An example of the wide range of drawing styles anime can adopt.
Many commentators refer to anime as an art form. As a visual medium, it can
emphasize visual styles. The styles can vary from artist to artist or from studio to
studio. Some titles make extensive use of common stylization: FLCL, for example, has
a reputation for wild, exaggerated stylization. Other titles use different methods:
Only Yesterday and Jin-Roh, for example, take much more realistic approaches,
featuring few stylistic exaggerations, while Pokémon uses drawings which
specifically do not distinguish the nationality of characters.
While different titles and different artists have their own artistic styles, many stylistic
elements have become so common that they are widely described as definitive of
anime in general. However, this does not mean that all modern anime share one strict,
common art style. Many anime have a very different art style from what would
commonly be called "anime style", yet fans still use the word "anime" to refer to these
titles. Generally, the most common forms of anime drawing include "exaggerated
physical features such as large eyes, big hair and elongated limbs... and dramatically
shaped speech bubbles, speed lines and onomatopoeic, exclamatory typography."
The influences of Japanese calligraphy and Japanese painting also characterize the
linear qualities of the anime style. The round ink brush traditionally used for writing
kanji and for painting produces a stroke of widely varying thickness.
Anime also tends to borrow many elements from manga, including text in the
background and panel layouts. For example, an opening may employ manga panels
to tell the story, or to dramatize a point for humorous effect.
Character Design
Proportions
Body proportions emulated in anime come from proportions of the human body. The
height of the head is considered by the artist as the base unit of proportion. Head
heights can vary as long as the remainder of the body remains proportional. Most
anime characters are about seven to eight heads tall, and extreme heights are set
around nine heads tall.
Proportions can be modified by the artist. Super-deformed characters
feature a non-proportionally small body compared to the head. Sometimes specific
body parts, like legs, are shortened or elongated for added emphasis. Most
super-deformed characters are two to four heads tall. Some anime works like Crayon
Shin-chan completely disregard these proportions, such that they resemble Western
cartoons. For exaggeration, certain body features are increased in proportion.
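To make the head-unit arithmetic above concrete, here is a small illustrative Python helper; the style names and head counts are our own choices taken from the rules of thumb in the text, not values from any studio guideline.

# Compute a figure's total height from its head height, in head units.
# Style names and head counts are illustrative, following the text above.
def figure_height(head_px: float, style: str = "standard") -> float:
    heads_tall = {
        "standard": 7.5,        # typical anime characters: seven to eight heads
        "heroic": 9.0,          # extreme heights: around nine heads
        "super_deformed": 3.0,  # super-deformed characters: two to four heads
    }
    return head_px * heads_tall[style]

print(figure_height(40.0))                    # 300.0 px for a standard figure
print(figure_height(40.0, "super_deformed"))  # 120.0 px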
Eye Styles
Many anime and manga characters feature large eyes. Osamu Tezuka, who is
believed to have been the first to use this technique, was inspired by the exaggerated
features of American cartoon characters such as Betty Boop, Mickey Mouse, and
Disney's Bambi. Tezuka found that the large-eye style allowed his characters to show
emotions distinctly. When Tezuka began drawing Ribbon no Kishi, the first manga
specifically targeted at young girls, Tezuka further exaggerated the size of the
characters' eyes. Indeed, through Ribbon no Kishi, Tezuka set a stylistic template that
later shōjo artists tended to follow.
Coloring is added to give eyes, particularly to the cornea, some depth. The depth is
accomplished by applying variable color shading. Generally, a mixture of a light
shade, the tone color, and a dark shade is used. Cultural anthropologist Matt Thorn
argues that Japanese animators and audiences do not perceive such stylized eyes as
inherently more or less foreign.
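The three-shade layering can be sketched in a few lines of Python: blend from a light shade at the top of the iris, through the tone color, down to a dark shade at the bottom. The RGB values are arbitrary examples, not a production palette.

# Illustrative three-shade eye gradient: light shade -> tone color -> dark
# shade, as described above. The RGB values are arbitrary examples.
def lerp(a, b, t):
    # Linearly interpolate between two RGB tuples.
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

LIGHT = (170, 215, 255)  # highlight shade
TONE = (60, 120, 200)    # main tone color
DARK = (20, 45, 90)      # shadow shade

def iris_row_color(row: int, rows: int):
    t = row / (rows - 1)
    if t < 0.5:                              # upper half: light -> tone
        return lerp(LIGHT, TONE, t * 2)
    return lerp(TONE, DARK, (t - 0.5) * 2)   # lower half: tone -> dark

for r in range(5):
    print(iris_row_color(r, 5))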
However, not all anime have large eyes. For example, some of the work of Hayao
Miyazaki and Toshiro Kawamoto is known for having realistically proportioned eyes,
as well as realistic hair colors, on its characters. Many other productions are also
known to use smaller eyes, a design that tends to bear more resemblance to traditional
Japanese art. Some characters have even smaller eyes, for which simple black dots are
used. However, many Western audiences associate anime with large, detailed eyes.
Facial Expressions
Anime characters may employ a wide variety of facial expressions to denote moods and
thoughts. These techniques are often different in form than their counterparts in
western animation.
There are a number of other stylistic elements that are common to conventional anime
as well but more often used in comedies. Characters that are shocked or surprised will
perform a "face fault", in which they display an extremely exaggerated expression.
Angry characters may exhibit a "vein" or "stress mark" effect, where lines representing
bulging veins will appear on their forehead. Angry women will sometimes summon a
mallet from nowhere and strike another character with it, mainly for the sake of
slapstick comedy. Male characters will develop a bloody nose around their female
love interests (typically to indicate arousal, which is a play on an old wives' tale).
Embarrassed or stressed characters either produce a massive sweat-drop (which has
become one of the most widely recognized motifs of conventional anime) or produce
a visibly red blush or set of parallel (sometimes squiggly) lines beneath the eyes,
especially as a manifestation of repressed romantic feelings. Characters who want to
childishly taunt someone may pull an akanbe face (by pulling an eyelid down with a
finger to expose the red underside).
Animation Technique
Like all animation, the production processes of storyboarding, voice acting, character
design, cel production and so on still apply. With improvements in computer
technology, computer animation increased the efficiency of the whole production
process. Anime is often considered a form of limited animation: stylistically, even in
bigger productions, its conventions are used to fool the eye into thinking there is
more movement than there is. Many of the techniques used are cost-cutting measures
for working under a set budget. Anime scenes place emphasis on achieving
three-dimensional views.
Backgrounds depict the scenes' atmosphere. For example, anime often puts emphasis
on changing seasons, as can be seen in numerous anime, such as Tenchi Muyo!.
Sometimes actual settings have been duplicated into an anime. The backgrounds for
the Melancholy of Haruhi Suzumiya are based on various locations within the suburb
of Nishinomiya, Hyogo, Japan.
Camera angles, camera movement, and lighting play an important role in scenes.
Directors often have the discretion of determining viewing angles for scenes,
particularly regarding backgrounds. In addition, camera angles show perspective.
Directors can also choose camera effects within cinematography, such as panning,
zooming, facial closeups, and panoramic shots.
The large majority of anime uses traditional animation, which better allows for
division of labor, pose to pose approach and checking of drawings before they are
shot – practices favoured by the anime industry. Other mediums are mostly limited to
independently-made short films, examples of which are the silhouette and other
cutout animation of Noburō Ōfuji, the stop motion puppet animation of Tadahito
Mochinaga, Kihachirō Kawamoto and Tomoyasu Murata and the computer animation
of Satoshi Tomioka (most famously Usavich).
While anime had entered markets beyond Japan in the 1960s, it grew as a major
cultural export during its market expansion during the 1980s and 1990s. The anime
market for the United States alone is "worth approximately $4.35 billion, according to
the Japan External Trade Organization". Anime has also had commercial success in
Asia, Europe and Latin America, where anime has become more mainstream than in
the United States. For example, the Saint Seiya video game was released in Europe
due to the popularity of the show even years after the series had gone off the air. Anime
distribution companies handled the licensing and distribution of anime outside Japan.
Licensed anime is modified by distributors through dubbing into the language of the
country and adding language subtitles to the Japanese language track. Using a similar
global distribution pattern as Hollywood, the world is divided into five regions.
Some editing of cultural references may occur to better follow the references of the
non-Japanese culture. Certain companies may remove any objectionable content,
complying with domestic law. This editing process was far more prevalent in the past
(e.g. Voltron), but its use has declined because of the demand for anime in its original
form. This "light touch" approach to localization has favored viewers formerly
unfamiliar with anime. The use of such methods is evident by the success of Naruto
and Cartoon Network's Adult Swim programming block, both of which employ
minor edits.Robotech and Star Blazers were the earliest attempts to present anime
(albeit still modified) to North American television audiences without harsh censoring
for violence and mature themes.
With the advent of DVD, it became possible to include multiple language tracks on a
single product. This was not the case with VHS, where separate cassettes were
needed, each priced the same as a single DVD. The "light touch" approach also
applies to DVD releases, as they often include both the dubbed audio and the original
Japanese audio with subtitles, typically unedited. Anime edited for television is
usually released on DVD "uncut", with all scenes intact.
TV networks regularly broadcast anime programming. In Japan, major national TV
networks, such as TV Tokyo broadcast anime regularly. Smaller regional stations
broadcast anime on UHF. In the United States, cable TV channels such as
Cartoon Network, Disney, Syfy, and others dedicate some of their timeslots to anime.
Some, such as the Anime Network and the FUNimation Channel, specifically show
anime. Sony-based Animax and Disney's Jetix channel broadcast anime within many
countries in the world. AnimeCentral solely broadcasts anime in the UK.
Although it violates copyright laws in many countries, some fans add subtitles to
anime on their own. These are distributed as fansubs. The ethical implications of
producing, distributing, or watching fansubs are topics of much controversy even
when fansub groups do not profit from their activities. Once the series has been
licensed outside of Japan, fansub groups often cease distribution of their work. In one
case, Media Factory Incorporated requested that no fansubs of their material be made,
which was respected by the fansub community. In another instance, Bandai
specifically thanked fansubbers for their role in helping to make The Melancholy of
Haruhi Suzumiya popular in the English speaking world.
The Internet has played a significant role in the exposure of anime beyond Japan.
Prior to the 1990s, anime had limited exposure beyond Japan's borders.
Coincidentally, as the popularity of the Internet grew, so did interest in anime. Much
of the fandom of anime grew through the Internet. The combination of internet
communities and increasing amounts of anime material, from video to images, helped
spur the growth of fandom. As the Internet gained more widespread use, Internet
advertising revenues grew from 1.6 billion yen to over 180 billion yen between 1995
and 2005.
Influence on World Culture
Anime has become commercially profitable in western countries, as early
commercially successful western adaptations of anime, such as Astro Boy, have
revealed. The phenomenal success of Nintendo's multi-billion dollar Pokémon
franchise was helped greatly by the spin-off anime series that, first broadcast in the
late 1990s, is still running worldwide to this day. In doing so, anime has made
significant impacts upon Western culture. Since the 19th century, many Westerners
have expressed a particular interest in Japan. Anime dramatically exposed more
Westerners to the culture of Japan. Aside from anime, other facets of Japanese culture
increased in popularity. Worldwide, the number of people studying Japanese
increased. In 1984, the Japanese Language Proficiency Test was devised to meet
increasing demand. Anime-influenced animation refers to non-Japanese works of
animation that emulate the visual style of anime. Most of these works are created by
studios in the United States, Europe, and non-Japanese Asia; and they generally
incorporate stylizations, methods, and gags described in anime physics, as in the case
of Avatar: The Last Airbender. Often, production crews either are fans of anime or are
required to view anime. Some creators cite anime as a source of inspiration for their
own series. Furthermore, a French production team for Ōban Star-Racers moved to
Tokyo to collaborate with a Japanese production team from Hal Film Maker. Critics
and the general anime fanbase, however, do not consider such works anime.
Some American animated television-series have singled out anime styling with
satirical intent, for example South Park (with "Chinpokomon" and with "Good Times
with Weapons"). South Park has a notable drawing style, itself parodied in "Brittle
Bullet", the fifth episode of the anime FLCL, released several months after
"Chinpokomon" aired. This intent on satirizing anime is the springboard for the basic
premise of Kappa Mikey, a Nicktoons Network original cartoon. Even clichés
normally found in anime are parodied in some series, such as Perfect Hair Forever.
Anime conventions began to appear in the early 1990s, during the Anime boom,
starting with Anime Expo, Animethon, Otakon, and JACON. Currently anime
conventions are held annually in various cities across the Americas, Asia, and Europe.
Many attendees participate in cosplay, where they dress up as anime characters.
Guests from Japan, including artists, directors, and music groups, are also invited. In
addition to anime conventions, anime clubs have become prevalent in colleges, high
schools, and community centers as a way to publicly exhibit anime and to broaden
understanding of Japanese culture.
Anime and American Audiences
The Japanese term otaku is used in America as a term for anime fans, more
particularly the obsessive ones. The negative connotations associated with the word
in Japan have disappeared in its American context, where it instead connotes the
pride of the fans. Only in the last decade or so has there been a more casual
viewership outside the devoted otaku fan base, which can be attributed largely to
technological advances. Also, shows like Pokémon and Dragon Ball Z provided a
pivotal introduction of anime's conventions, animation methods, and Shinto
influences to many American children.
Ancient Japanese myths – often deriving from the animistic nature worship of
Shinto – have influenced anime greatly, but most American audiences not accustomed
to anime know very little of these foreign texts and customs. For example, an average
American viewing the live-action TV show Hercules will be no stranger to the Greek
myths and legends it is based on, while the same person watching the show Tenchi
Muyo! might not understand that the plaited ropes wrapped around the "space trees"
are influenced by the ancient legend of Amaterasu and Susano.
Cartoon Facial Expressions
Every cartoon character requires an accurate depiction of cartoon expressions to
portray what it is feeling. This is easier said than done, as achieving it requires a
strong balance between all the features on the character's face. Read on to learn more
about cartoon facial expressions.
Master Animation
For any aspiring cartoonist, the cartoon expressions are the most important part of the
drawings. Without the cartoon facial expressions it would be impossible to conjure up
various scenes and moods, as the rest of the body is just a replica of the original
blueprint with a few changes in the movements. Transferring the cartoon facial
expressions onto paper is the hardest part, and this is where the cartoonist earns his
money.
The main change while drawing different cartoon expressions has to come in the eyes
of the character. The changes in the eyebrows and the mouth are of secondary
importance, and without a visible difference in the eyes, all cartoon facial expressions
will look the same. Whether it is a look of happiness, sadness, anger or perplexity, a
simple change in the eyes will complete the cartoon expressions. For a cartoonist or an
artist, learning how to draw cartoon faces can be achieved very simply by changing
the shape, the size and the relationship of the various features on the face.
Eyes
The eyes are simply the most expressive part of funny cartoon faces, and almost every
emotion can be replicated through the proper illustration of a set of eyes. Even
without drawing the rest of the face, the eyes can give an accurate depiction of
different cartoon expressions. By changing the shape and the angle of the eyebrows
this look can be completed, and adding the rest of the face and the body is a mere
formality. The size of the eyeballs also plays an important part, and thus should not
be overlooked.
To get better ideas about the way eyes should be drawn for simple cartoon
expressions, you must look at an actual person's eyes when they are displaying the
emotions that you wish to capture. Soon you will start drawing these cartoon
expressions on your own, as they will be committed to memory. Practice this
many times in order to achieve perfection, and for this purpose you must carry a
small sketching pad with you at all times. Whenever you notice the shape and size of
the eyes for a particular emotion, you must attempt to draw it in your pad. You can
only teach yourself how to draw cartoon expressions accurately by practicing the
many looks and emotions as soon as you notice them. Read some more cartoon
drawing tips.
Funny Cartoon Expressions
When you are looking to draw various emotions, you have to add a slightly amusing
side to the look. The twist of the mouth and the angle of the eyebrows should not only
accurately depict the look you wish to portray, but it should also be funny at the same
time. Needless to say, one needs a natural flair for drawing such emotions, and this is
something that can be further improved with a lot of practice. Here are some
examples of what you can do in order to illustrate some particular emotions on the
faces of your drawn characters.
Anger: this emotion is portrayed by drawing the eyebrows in a distinct V-shape
falling over the eyes. The mouth can be presented in a straight shape, with its edges
pointing slightly downwards. If the artist wishes to portray the character as yelling,
then the mouth must be drawn wide open. The eyes will be squinted in such an
image, so they must be drawn smaller than normal.
Sadness: a feeling of sadness can be depicted by keeping the eyebrows in the same
V-shape, but placing them higher up on the face. Adding more curve to them and
pointing them downwards completes the cartoon expression of sadness. Even the
eyes are drawn at a slightly droopy angle.
Surprise: a look of surprise requires the eyebrows to be drawn in the exact opposite
manner to that of sadness, that is, in an A-shape. The mouth and the eyes must be
rounded to add further credence to the look, and the eyebrows should also carry a
curved shape.
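These rules of thumb can also be written down as data. The short Python sketch below encodes them in a dictionary that a drawing or rigging script could consult; the field names and wording are our own summary of the guidelines above, not an established convention.

# The eyebrow/eye/mouth guidelines above, encoded as data. Field names
# and wording are our own summary, not an established convention.
EXPRESSIONS = {
    "anger": {
        "brows": "distinct V-shape falling over the eyes",
        "mouth": "straight, edges pointing slightly downwards (wide open if yelling)",
        "eyes": "squinted, drawn smaller than normal",
    },
    "sadness": {
        "brows": "V-shape placed higher on the face, curved and pointing downwards",
        "eyes": "drawn at a slightly droopy angle",
    },
    "surprise": {
        "brows": "A-shape, curved",
        "mouth": "rounded",
        "eyes": "rounded",
    },
}

def describe(emotion: str) -> str:
    # Render one emotion's feature guidelines as a readable line.
    parts = "; ".join(f"{part}: {how}" for part, how in EXPRESSIONS[emotion].items())
    return f"{emotion} -> {parts}"

print(describe("anger"))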
Student Activity
Prepare a study note on the importance of lip synchronization in cartoon
animation.
Summary
Lip synchronization, or lip sync, is a technical term for matching lip movements with
voice. Lip sync audio recording services are mainly used for animation, cartoons,
commercial video dubbing and music video production. Unlike standard voice-over,
lip sync recording requires separate recording sessions for each actor, and each word
of the script is recorded separately until it fits the lip movements in the actual video.
Lip sync creates a more realistic sense of character speech and is also preferred in
advertising to target specific video commercials to specific countries or language
groups, covering the original language in order to create exclusivity of the
commercial message. In animation and cartoons, the original audio speech is
lip-synchronized, and when localizing cartoons in foreign languages, lip sync has no
alternative. Lip sync is a highly difficult, time-consuming and responsible process, as
it targets direct sales when used in commercials, addresses kids in cartoons and
affects the overall impression of the video content.
A central task in animating computer-generated characters is the synchronization of
lip movements with the speech signal. For real-time synchronization, considerable
technical effort is needed, involving a face-tracking system or data gloves, to drive
the expressions of the character. If the speech signal is already given, off-line
synchronization is possible, but the animator is left with a time-consuming manual
process that often needs several passes of fine-tuning. In this unit, we have presented
an automated technique for creating the lip movements of a computer-generated
character from a given speech signal by using a neural net. The net is trained by a set
of pre-produced animations.
Keywords
Cartoon Solutions: Cartoon Solutions provides a simple step-by-step video tutorial
that outlines how to create the necessary mouth shapes so you can animate your
character's lip sync animation.
Lip Synchronization: In lip synchronization the audio and video are combined in
recording in such a way that the sound is perfectly synchronized with the action that
produced it; especially synchronizing the movements of a speaker's lips with the
sound of his speech.
Review Questions
1. What do you know about methods for adding topology to a pre-existing model to
make creating mouth shapes easier?
2. What are your views on lip synchronization in music?
Further Readings
Elmqvist, N., P. Dragicevic, and J.D. Fekete, 2008, "Rolling the dice: Multidimensional
visual exploration using scatterplot matrix navigation," IEEE Transactions on
Visualization and Computer Graphics 14, no. 6: 1141–1148.
Erten, C., P. J. Harding, S.G. Kobourov, K. Wampler, and G. Yee, 2003, "GraphAEL:
Graph animations with evolving layouts," In Proceedings of the 11th International
Symposium on Graph Drawing, Springer Verlag.
Fisher, Danyel A, 2007, "Hotmap: Looking at geographic attention," IEEE Transactions
on Visualization and Computer Graphics 13, no. 6: 1184–1191.
Friedrich, C., and P. Eades, 2002, "Graph drawing in motion," Journal of Graph
Algorithms and Applications 6, no. 3: 353–370.