Sample Assignment Cover Sheet for Individual and Groupwork

Transcription

AUCKLAND UNIVERSITY OF TECHNOLOGY
TE WANANGA ARONUI O TAMAKI MAKAU RAU
FACULTY OF DESIGN AND CREATIVE TECHNOLOGIES
ASSIGNMENT COVER SHEET
Student Name:
ID Number:
Paper Name/Code:
Assignment:
Number Of Words/Pages
(not including quotes):
• In order to ensure fair and honest assessment results for all students, it is a requirement
that the work that you hand in for assessments is your own work.
• Please read and tick the boxes below before handing in your assignment.
• If you are uncertain about any of these matters then please discuss them with your
lecturer. Assignments will not be accepted if this section is not completed.
Where we have used someone else’s words or images, we have clearly
indicated this by putting them inside speech marks (if appropriate) and
adding an in-text reference.
Where we have used other people’s ideas or writing, we have clearly
indicated this by putting them into our own words and adding the reference
at the end of the sentence/paragraph.
Other than this, this assignment:
IS NOT copied from another student or previous assignment
IS NOT directly copied from books, journals or other materials.
IS NOT cut and pasted from the internet.
HAS NOT been handed in by one of us or anyone else in any other course.
HAS NOT been done by someone else (e.g. friend, relative, professional).
…………………………………………………………
Signature
Complete this section for Groupwork
ID
Name
1.
2.
3.
4.
5.
6.
………………………………………….
Date
Signatures
EXEGESIS
Oleg Efimov | Year 3 | magenta | 2011
In character-driven narratives, the actor's performance is the heart and soul that attracts audiences and draws them into the content of the medium. Facial performance in particular creates, in cinematic narrative, an emotional bond between digital reality and human perception. Emotions have the quality of being reflected and resonating between people in daily interactions, and between a character on the screen and the audience experiencing it. In a digitally generated world, delivering the performer's emotional intention, their soul, is enormously powerful, and at present it can only be achieved with facial motion capture systems. This paper uncovers the underlying concepts of a practical craft created using a high-end motion capture approach, from theoretical, methodological and technical angles, with extended examples and references to various sources relevant to the study of the human face. Application of this knowledge is primarily relevant to feature film production and visual effects, but it is also relevant to medical and forensic fields that deal with the human face, such as lie detection and cosmetic surgery. The theoretical framework of the craft is dedicated to the study of human emotions, cyber-synthetic underpinnings, digitally generated humans and virtual reality. The methodology orbits around two competing approaches: whether it is better to use motion capture technology or traditional keyframe animation. It surveys different motion capture systems and the critical line between them, shifting the emphasis towards the high quality standards reflected in the prime facial performances of feature films such as Avatar (J. Cameron, 2009). Pushing the methodological approach of the craft to a more advanced and sophisticated level also points towards the luminescent powder approach and the five-hundred-marker setups that translate key points into triggers for the facial flesh. Lastly, the technical hemisphere of this document is dedicated to the exploration of innovative algorithms in performance capture and the possible future of the field: the fundamental principles for building a digital human marionette system, ways of enhancing it, and a close look at the most influential local facial feature, the human eye. The technical area concludes with a survey of highly valuable resources already in the arsenal for a successful, goal-oriented future outcome. All of this is grounded in the experience and knowledge that came out of the research and development carried out within the timeframe of this project and reflected in the conclusion.
Synopsis: “Air force pilot and intelligence officer Sandy experiences an identity transformation after realizing she has been artificially built and is controlled by a cybernetic existence.”
The theoretical layer is crucially associated with historical visions of the future and the cyber-synthetic approach. It is highly influenced by science fiction films such as Avatar (J. Cameron, 2009), The Matrix (Larry and Andy Wachowski, 1999) and Ghost in the Shell (Mamoru Oshii, 1995), in which the mixing of human DNA and mechanical nature is influentially represented. Those texts are only the icons of a deeper science fiction literature, the ideas from which new areas of knowledge such as biomechanics and cybernetics arose. Cyber culture deals with the fear of an unseen, virtual enemy, the results of genetic engineering, ecological destruction, and the exponential growth and inflation of technology into daily life. The main driving theme of the narrative is the fear of losing control over life and giving power away to machines. Connecting fear with the field of emotion, which is the key to facial performance, this project also aims to bring an uncanny feel to the work by creating a confrontational paradox between attraction and fear. This comes down to the character herself, a twenty-two-year-old female hero who awakens from an artificially controlled dream. In the story, the fact of awakening is not unique but cyclical: the artificial system systematically erases her memory. This brings us to the theoretical heart and soul of this paper, which is the question of identity.
Motoko: “Just as there are many parts needed to make a human a human, there is a remarkable number of things needed to make an individual what they are. A face to distinguish yourself from others. The memories of childhood, the feelings for the future. All of that goes into making me what I am.” (Mamoru Oshii, 1995).
Beside identity and deep personal transformation, the areas of knowledge that also carry this research are virtual reality and the creation of digital humans, which are relevant to simulation, training, education, artificial intelligence, entertainment, mixed reality, virtual or digital anatomy and virtual patients. By creating an anatomically motivated digital human head, including a face with complete control over every subtle shape the real face can deliver, one obtains a database not only of every possible clear, frozen emotional expression but also of the full range of combinations and mixtures of those permutated emotions. This comes down to the fact that humans never express, feel or experience one particular emotion in isolation. Usually it is a combination of emotions based on various factors: first- or second-hand experience, recent interactions, things that shaped our identity deep in childhood, and the evolutionary experience of the human population. Furthermore, seeing emotions as the effect rather than the cause of things, it is not only internal and global reasons that stand behind our emotions: physical conditions such as the brightness, direction and intensity of light, the air temperature, and perceived visuals and sounds can produce particular or expected emotions, as well as reactions to things.
Deeper down the rabbit hole, this knowledge becomes very powerful for controlling or manipulating masses, not only for entertainment but for political, military and medical purposes. The parallel line between emotional knowledge and artificial intelligence can easily be crossed with the fact that we are still not aware of where humans came from or how the world was shaped. How many times in human history have people been wrong about some belief? People once thought the planet was flat, and burned women believing they were witches. Think about it: probably even today we are wrong about something, and artificial intelligence already controls our lives. Knowledge about real-world experience and phenomena can change the way we see and operate in everyday life, and letting it come through us means never looking at the human face in the same fashion again.
Speaking from the methodological angle, there are two main approaches in the field of digitally generated performance for virtual actors: traditional keyframe animation and performance-driven motion capture. With highly successful and relatively recent examples from popular culture, films like Avatar (J. Cameron, 2009) and Rise of the Planet of the Apes (R. Wyatt, 2011), where facial motion capture technology was used and found its potential, the priority of the methodological approach was directed towards image-based facial motion capture, which naturally translates the emotional intent and soul of an actor into digital reality. The technical systems vary: reflective passive or active markers placed on the character's face, markerless video-based algorithms, or luminescent glowing powder for scanning the entire surface of the face. The traditional keyframe approach was practiced as well, with complete failure in relation to the balance of quality and time. Having performance data captured from real life allows one to state confidently that the animation stage of digital production is handled far faster than with the traditional keyframing technique. This is explained by the limited capacity of human resources to approach computer animation through study, practice and mastery alone, whereas the informational database created by captured real-life performance provides the first layer, or skeleton, of point movement.
Before going more technical, the second methodological dilemma was to choose which exact type of facial motion capture would deliver highly realistic, believable and convincing results and jump out of the animation category into the feature film level. Being in an academic environment limits access to those systems due to their complexity and cost. It is no secret that facial animation is an advanced topic at ACM SIGGRAPH, the computer graphics and interactive techniques community, and that mostly award-winning visual effects and production houses have access to those systems and the publications around that subject. In this case two systems were available for this craft: the optical motion capture system and tracking software OptiTrack for facial motion capture, purchased by Auckland University of Technology in 2011, and the recently released facial animation software Faceware, a performance-driven animation technology developed by the leading facial animation company Image Metrics. Two different systems, one using markers and one markerless and video-based. For this particular project Faceware was used, due to its proven results, its simple approach, and its more believable and convincing results. This statement is arguable, because in the facial animation workflow in general the quality of the captured data is not the only key to a successful digital performance: the marionette rigging system that allows facial features to be controlled as closely as possible to reality matters just as much. It is worth passing the point of saying that Faceware allows a successful and relatively rapid facial motion capture workflow, and underlining that OptiTrack systems can also be utilized for high quality results by bringing more effort to the digital marionette craftsmanship stage.
It is crucial to explore other, more high-end solutions in this field towards which this project can be pushed in future. “The Mova CONTOUR Reality Capture is markerless high-resolution facial capture, cost-effectively providing photoreal 3D faces for games and visual effects.” This technology uses luminescent powder sprayed on the actor's face; similarly to a three-dimensional scanning approach, the surface is recorded by infrared cameras shooting from various angles relative to the face and reconstructed by optical triangulation. This provides high fidelity data of the entire face surface changing over time with the actor's performance, which allows a three-dimensional model referencing the given surface to be constructed. The technology was used in the feature film The Curious Case of Benjamin Button (D. Fincher, 2008), which won three Academy Awards: Best Visual Effects, Best Art Direction and Best Makeup.
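To make the triangulation idea concrete, here is a minimal sketch, assuming pre-calibrated cameras and NumPy, of how one glowing surface point seen in two views can be recovered by linear (DLT) triangulation. This illustrates the general principle only, not Mova's actual pipeline.

```python
# Two-view DLT triangulation sketch (general principle, not Mova's pipeline).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: (3, 4) camera projection matrices; x1, x2: (2,) pixel
    coordinates of the same surface point seen in each view."""
    # Each view contributes two linear constraints A @ X = 0 on the
    # homogeneous 3D point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: the right singular vector belonging to the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates
```

Repeating this for every matched point in every frame yields exactly the kind of time-varying surface described above.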
The final, fundamental and most advanced method was developed and used at the visual effects company Weta Digital in New Zealand for the film Avatar (J. Cameron, 2009), with countless awards and premium digital facial performances; it would be the next level in pushing this project forward. The prime focus there is to create an outstandingly complex rigging system triggered by a limited number of facial markers, with scriptable expressions that drive the rest of the organic material as happens in real life. The use of markers preserves the three-dimensionality and depth of the human face and also removes dependence on lighting conditions. In this case the placement of each individual marker on the face is determined by the power points where the dynamic forces of the nerves push or pull the facial muscles, the tissues underneath the skin, the fat and the other constructive elements of the human face. For this approach the approximate minimum number of facial markers is about five hundred, which allows a medically and biomechanically plausible, believable performance to be achieved. This method is the direction in which this project will lead in future postgraduate studies.
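As a rough, assumed illustration of how a sparse marker set can trigger facial flesh (not Weta Digital's actual system, whose details are proprietary), the sketch below moves each mesh vertex by a Gaussian distance-weighted average of the displacements of nearby markers.

```python
# Marker-driven flesh deformation sketch (an assumed illustration, not
# Weta Digital's production system).
import numpy as np

def deform(vertices, markers_rest, markers_now, radius=0.03):
    """vertices: (V, 3) rest-pose mesh; markers_rest, markers_now: (M, 3)
    marker positions at the rest pose and at the current frame."""
    disp = markers_now - markers_rest                  # (M, 3) marker motion
    # Distance from every vertex to every rest-pose marker.
    d = np.linalg.norm(vertices[:, None] - markers_rest[None], axis=2)
    w = np.exp(-(d / radius) ** 2)                     # Gaussian falloff
    w /= w.sum(axis=1, keepdims=True)                  # normalize per vertex
    return vertices + w @ disp                         # weighted displacement
```

A production rig would layer muscle and skin dynamics on top of this, but the core mapping from a few hundred key points to a dense surface follows this pattern.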
As with the methodology, this project is technically very complex, so it is crucial to hit the key points of tactics and techniques that allow problems to be solved before they even arise. There are three main components in the technical evaluation. The first is the revolutionary, innovative algorithm developed by Image Metrics and its possible future. The second is the facial rigging system, which is the key to success in the whole facial performance workflow: the ways and methods of approaching it, the pluses and minuses of particular systems, and ways of enhancing them. The third is where the actual, clear-cut future of this body of knowledge and practice will lead over the next two years. The Image Metrics product Faceware made this project possible because it provided industry-standard, high-fidelity facial performance capture data that can drive any type of facial rig, based on video analysis that needs no markers placed on the actor's face. Firstly, this makes the role much more attractive to actors, since people are understandably skeptical about having anything placed on their faces. Secondly, and more significantly, it makes the acting more natural, believable and convincing: with markers, actors have to overcome the fact of the markers' existence, yet physically they are still there, and the performance is limited by the inconvenience of having markers all over the face. The algorithm is based on defining particular areas of the face and breaking them into three main groups: brows, eyes and mouth. It then defines the points and depth of those areas from the video reference and calculates, frame by frame, the paths along which those points travel. This provides a flawless energy that comes through the capture and is later applied to the facial rigging system, providing a heartbeat-like density of data that comes through the digital actor, or if you like, fuels him or her with a soul.
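As a hedged illustration of this kind of region-grouped, frame-by-frame point tracking (Image Metrics' actual algorithm is proprietary; this is only a generic stand-in), the sketch below follows hypothetical brow, eye and mouth points through a video using OpenCV's pyramidal Lucas-Kanade optical flow.

```python
# Region-grouped facial point tracking sketch (illustration only, not
# Image Metrics' proprietary algorithm). Assumes OpenCV and NumPy.
import cv2
import numpy as np

# Hypothetical starting points for the three region groups described above.
regions = {
    "brows": np.array([[210, 140], [250, 135], [290, 140]], dtype=np.float32),
    "eyes":  np.array([[220, 170], [280, 170]], dtype=np.float32),
    "mouth": np.array([[230, 260], [250, 270], [270, 260]], dtype=np.float32),
}

cap = cv2.VideoCapture("performance.mov")  # hypothetical reference video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

tracks = {name: [pts.copy()] for name, pts in regions.items()}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for name, pts in regions.items():
        # Pyramidal Lucas-Kanade estimates where each point moved this frame.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts.reshape(-1, 1, 2), None)
        regions[name] = new_pts.reshape(-1, 2)
        tracks[name].append(regions[name].copy())
    prev_gray = gray
cap.release()
# `tracks` now holds a per-frame path for every point in every facial region.
```

The resulting per-region point paths are the kind of data that would then be retargeted onto the facial rig.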
“FACEWARE - The technology that captures the soul.” (New York Times)
In terms of facial motion capture research and development it is literally a revolution, because it is markerless, and having gained enough confidence from more than two years of investigating this field, there is no doubt that this is where facial motion capture will lead in the future. Image Metrics has always been ahead of the facial animation field, having created the Digital Emily project and broken the belief system around the uncanny valley, which might lead to the conclusion that this will be the first company to come up with markerless facial capture solutions that still maintain premium quality.
The second fundamental technical realization during the project's development was the significance of the facial rigging system. At first, in order to obtain a rapid and manageable solution, an auto-rigging system was used: The Face Machine, developed by the company Anzovin, which allows the control system of an expressive humanoid face to be created in the space of one hour. This solution can be driven by any type of performance capture and can also be used with the traditional keyframe animation technique. Technically speaking, it is a cluster-deformer-based rig with a certain number of curves, each with a particular area of influence on the face. Even though it is a very easy rigging system to set up and use, it does not allow phenomena such as eyelid follow and sticky lips. The bottom line for The Face Machine is that it is perfect for quick setup and use in animation pipelines, but not for feature animation; however, it can serve as an intermediate step, a rapid engine for creating morph targets, blend shapes or key poses. The main evaluation of the technical part was to pay more attention to the rigging stage. The fundamental outcome was that a hybrid-style rig is the perfect solution for feature film quality. The hybrid rig combines all the techniques used in facial rigging, including a sixty-four-joint setup inside the character's head with built-in expressions that create the relationships caused by an opening jaw or by the eyes moving in any direction. Hybrid rigs also use a rich set of blend shapes or key poses to control the shape, thickness and compression of facial features at particular moments in time. The logic behind generating the shapes is grounded in the Facial Action Coding System (FACS) crafted by Paul Ekman. The number of blend shapes differs from one solution to another, from five hundred to a couple of thousand, which allows realistic control of the human face. It is all framed with cluster deformers on top that follow the mesh. Finally it is all programmed and simplified into a relatively easy-to-use interface that allows animators to deliver realistic and highly affecting performances.
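The arithmetic behind blend shapes is simple enough to sketch. The following is a minimal, assumed illustration with hypothetical names and dimensions: the deformed face is the neutral mesh plus a weighted sum of per-target offsets, where the targets would be FACS-style sculpted poses.

```python
# Blend-shape (morph-target) combination sketch; names and sizes are
# illustrative only.
import numpy as np

def blend(base, targets, weights):
    """base: (V, 3) neutral vertex positions; targets: (S, V, 3) sculpted
    poses (e.g. FACS-style shapes); weights: (S,) animator-driven values."""
    deltas = targets - base              # offset of each shape from neutral
    return base + np.tensordot(weights, deltas, axes=1)

# Tiny example: 4 vertices, 2 shapes, mixed at 50% and 25%.
base = np.zeros((4, 3))
targets = np.stack([np.eye(4, 3), -np.eye(4, 3)])
print(blend(base, targets, np.array([0.5, 0.25])))
```

With five hundred to a couple of thousand targets, S simply grows; the combination rule stays the same.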
“We each experience the same emotions, but we all experience them differently.” (P. Ekman, Emotions Revealed, 2003)
The conclusion of this paper on facial performance in cinematic narrative comes down to two main points. The first is the future of this research, which is the main reflection on and evaluation of what was learned; the second is a batch of ideas beyond technical, verbal, tangible concepts, ideas of a more spiritual nature that cause the resonance that keeps this project on the same track further and further. Before raising those two points it is significant to underline that the outcome of this project is the first successful student achievement in facial animation in Australasia, and confirms that it was pushed as far as possible in the given time. The project will be continued in postgraduate research with practical solutions for creating a realistic human eye: replicating the behavioural movement of the eye, the eyelid-follow phenomenon, the moist surface of the eyeball, and the structure of all its constructive optical elements such as the pupil, lens, iris and the eyeball itself. High definition references of the human eye will be taken from six different angles, using six cameras capturing a unique performance simultaneously. As an extension of the current investigation, a hybrid-style rig will be constructed that allows the character to be posed in any possible shape that can be shown in the digital world from any possible viewpoint. The prime focus of this investigation is dedicated to the eye area, the character's window to the soul, where double attention will be given to a marionette setup that displaces the eyelid according to the shape of the eyeball, which is not a sphere but more of an egg shape, taking into consideration the limitations caused by the liquids that exist between the eyeball and the eyelid.
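As a small, assumed illustration of the eyelid-follow behaviour just described (one plausible rig expression, not any production's documented setup), the upper lid can simply inherit a clamped fraction of the eyeball's pitch:

```python
# Eyelid-follow sketch: the upper eyelid inherits a fraction of the eye's
# pitch, clamped to an assumed anatomical range. All constants here are
# illustrative, not measured values.
def eyelid_follow(eye_pitch_deg, follow=0.4, lid_min=-25.0, lid_max=30.0):
    """Return an upper-eyelid rotation (degrees) driven by eyeball pitch;
    positive pitch is assumed to mean looking down."""
    lid = follow * eye_pitch_deg
    return max(lid_min, min(lid_max, lid))

print(eyelid_follow(30.0))  # looking down 30 degrees drags the lid 12 degrees
```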
Interestingly enough, all these subtle details of the face go unnoticed in everyday life until we become aware of them and start to pay double attention, seeing whether people are actually telling the truth, or feel anger even while they smile, and so on and so forth. That includes animals. Personally speaking, when this art investigation was started the author had experienced emotional harm, which resonated into a deep interest in this body of knowledge and its practical application to the film industry. The project opened up many concepts, ideas and realizations along the technical road, but it also released emotional tension and improved the author's emotional life. This is basically what films that use these techniques are meant to do for the audiences who come and watch them: use this knowledge for good, not for harm. The real world, and life in general, is something already given to us, and no computer system will have a “Create Human Head” or “Create Planet Earth” button, unless in the near future, which means we have to doubly appreciate the world we live in today and keep it in safe peace and infinite harmony, no matter what we specialize in.
REFERENCES
Pighin, F., Hecker, J., Lischinski, D., et al. (1998). Synthesizing realistic facial expressions from photographs. Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, Orlando, 75-84.
Thibos, L. N., Bradley, A., & Zhang, X. X. (1991). Effect of ocular chromatic aberration on monocular visual performance. Optometry and Vision Science, 68, 599-607.
Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System: Investigator's Guide Part 2. Consulting Psychologists Press.
Martinez-Conde, S., Macknik, S., & Hubel, D. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5, March.
Barten, P. (1999). Contrast Sensitivity of the Human Eye and Its Effects on Image Quality. SPIE.
Wyatt, H. (1995). The form of the human pupil. Vision Research, 35(14), 2021-2036.
Daugman, J. (1998). Phenotypic versus genotypic approaches to face recognition. In Face Recognition: From Theory to Applications (pp. 108-123). Heidelberg: Springer-Verlag.
Luger, G. F. (2005). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Pearson Education.
Darwin, C. (1872). The Expression of the Emotions in Man and Animals. London: John Murray.
Waters, K. (1987). A muscle model for animating three-dimensional facial expression. SIGGRAPH Computer Graphics, 21(4), 17-24.
Borshukov, G., & Lewis, J. P. (2003). Realistic human face rendering for ‘The Matrix Reloaded’. In ACM SIGGRAPH 2003 Sketches & Applications.
ICT Graphics Laboratory. (2001). Realistic human face scanning and rendering. http://gl.ict.usc.edu/Research/facescan/
Debevec, P. (1998). Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. Proceedings of SIGGRAPH 98, Computer Graphics Proceedings, Annual Conference Series, 189-198.