Hasselt University
and transnational University of Limburg
School of Information Technology
An Extensible Simulation Framework Supporting
Physically-based Interactive Painting
Dissertation submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy in Computer Science
at the transnational University of Limburg
to be defended by
Tom Van Laerhoven
on June 21, 2006
Supervisor: Prof. dr. Frank Van Reeth
2000 – 2006
Acknowledgments
This part of the dissertation enables me to finally acknowledge the long list of
people who, directly or indirectly, contributed over the past years to the work
presented here. This is also the part that everyone you know will read first,
in order to see if you have not forgotten to mention their names. Therefore,
playing it on the safe side, I will start by thanking the person who is reading
this dissertation right now.
Most of all, my supervisor, prof. dr. Frank Van Reeth, deserves my gratitude for giving me the opportunity to be a member of his computer graphics
research group at the Expertise Centre for Digital Media. His ability to generate new ideas, along with his vivid ways of communicating them, inspired
me even as a master's student, and continues to amaze me. At the EDM, I also
enjoyed working with prof. dr. Karin Coninx, prof. dr. Philippe Bekaert,
prof. dr. Wim Lamotte and prof. dr. Eddy Flerackers.
Furthermore, I thank long-time colleagues and friends Jori Liesenborgs,
Kris Luyten, Koen Beets and Peter Quax: Jori and Koen for their direct
contributions to this dissertation, Peter for his vast technical knowledge and
Kris for being a twenty-four-hour one-man helpdesk (does he ever sleep?).
I am also grateful to my fellow NPR researcher Fabian Di Fiore, and the
other computer graphics researchers Erik Hubo, William Van Haevre, Jan
Fransens, Tom Haber, Bert De Decker, Cedric Vanaken, Mark Gerrits, and
most recent members Cosmin Ancuti, Yannick Francken, Chris Hermans and
Maarten Dumont, for providing a pleasant and stimulating working atmosphere. A valuable member who recently left the team, Tom Mertens, has
positively influenced the whole group.
A lot of credit for the creative input goes to several people with artistic
talents: Xemi Morales, Marie-Anne Bonneterre, José Xavier, Koen Beets
and Bjorn Geuns. Ingrid Konings and Peter Vandoren relieved me of most of
the administrative burden during the past months.
Finally, I want to thank my family and friends for their support.
Diepenbeek, May 2006.
Abstract
Since the introduction of computers, a lot of effort has gone into research
on applications that enable a user to create images for their artistic value.
In the case of interactive systems that try to provide a digital equivalent of the
traditional painting process, however, most still produce images that appear
too artificial and lack the "natural look" and complexity that can be found
in real paintings. Especially capturing the ambient behavior of watery paints
like watercolor, gouache and Oriental ink, which run, drip and diffuse, is a
challenging task to perform in real-time. For this reason, artists still prefer to
use traditional methods. The advantages of such a paint system are numerous,
however, as it allows the user to fully exploit the flexibility of the digital domain
and perform all sorts of "non-physical" operations.
In this dissertation we present a physically-based, interactive paint system
that supports the creation of images with watery paint. We mainly target the
watercolor medium, but the model is general enough to also simulate related
media like gouache and Oriental ink. In order to create a software system that
is easy to maintain, change and extend, we first outline a component-based
framework that is able to curb the complexity of building physically-based
applications. The framework uses XML to provide a uniform description of
the system's composition out of components.
The cornerstone of the paint system is the canvas model,
which adopts specialized algorithms, both physically-based and empirically-based, on three different layers to simulate paint behavior.
Two implementations of the canvas model are presented: the first relies on a
high-performance cluster (HPC) to perform a parallel simulation, while the
second uses programmable graphics hardware. The second implementation is
extended to a full virtual paint environment: “WaterVerve”, which features the
Kubelka-Munk diffuse reflectance algorithm for rendering paint, a deformable
3D brush model and a set of artistic tools that fully exploit the physically-based
nature of the system.
Contents

Acknowledgments  iii

Abstract  v

Contents  ix

List of Figures  xii

List of Tables  xiii

1 Introduction  1
  1.1 Digital Interactive Painting  1
  1.2 Problem Statement  2
  1.3 Motivation  4
  1.4 Contributions  5
    1.4.1 A Framework for Physically-based Animation and Simulation  5
    1.4.2 Real-time Simulation of Thin Paint  6

I A Framework for Physically-based Animation and Simulation  7

2 An Extensible Component-based Framework  9
  2.1 Introduction and Motivation  9
  2.2 XML and Component Models  12
  2.3 Architecture  12
    2.3.1 Simple Reflection Mechanism  13
    2.3.2 Storage Facilities  14
    2.3.3 Inter-component Communication  14
  2.4 Component Wiring with XML  15
    2.4.1 Instantiating Components  16
    2.4.2 Wiring  17
    2.4.3 Scripted Behavior  19
  2.5 Building Component-based Applications  20
    2.5.1 Unconstrained Body Simulator  20
    2.5.2 A "Tinker Toy" Simulation Environment  22
    2.5.3 A 3D Painting Environment  24
  2.6 Discussion  26

II Real-time Simulation of Thin Paint  27

3 Digital Painting Applications  29
  3.1 Introduction  29
  3.2 Interactive Painting Applications  30
    3.2.1 From Automated Rubber Stamps to Active Canvas  31
    3.2.2 Physically-based Approaches  33
    3.2.3 Western and Oriental Painting  35
    3.2.4 Commercial Applications  36
  3.3 Discussion  37

4 A Three-Layer Canvas Model  39
  4.1 Introduction  40
  4.2 Observing Watercolor Behavior  40
  4.3 Objectives  42
  4.4 An Overview of Fluid Dynamics in Painting Applications  43
  4.5 The Fluid Layer: Adapting the Navier-Stokes Equation for Paint Movement  45
    4.5.1 Updating the Velocity Vector Field  48
    4.5.2 Updating Water Quantities  49
    4.5.3 Evaporation of Water in the Fluid Layer  51
    4.5.4 Updating Pigment Concentrations  51
    4.5.5 Boundary Conditions  53
    4.5.6 Emphasized Edges  53
  4.6 Surface Layer  54
  4.7 Capillary Layer  55
    4.7.1 Fiber Structure and Canvas Texture  55
    4.7.2 Capillary Absorption, Diffusion and Evaporation  56
    4.7.3 Expanding Strokes  57
  4.8 Discussion  58

5 A Distributed Canvas Model for Watercolor Painting  59
  5.1 Introduction  59
  5.2 Distributed Model  60
    5.2.1 Distributing Data  61
    5.2.2 Processing a Subcanvas  61
    5.2.3 Gathering Results  62
  5.3 Brush Model  63
    5.3.1 Distributing Brush Actions  64
  5.4 Implementation  65
  5.5 Results  65
  5.6 Discussion  68

6 WaterVerve: A Real-time Simulation Environment for Creating Images with Watery Paint  69
  6.1 Introduction  69
  6.2 Graphics Hardware Implementation  70
    6.2.1 Simulating Paint  71
    6.2.2 Optimization  71
    6.2.3 Rendering the Canvas  72
  6.3 Watery Paint Media  75
    6.3.1 Watercolor  75
    6.3.2 Gouache  75
    6.3.3 Oriental Ink  75
  6.4 User Interface  79
  6.5 Discussion  83

7 Brush Model  85
  7.1 Introduction  85
  7.2 Brush Models in Literature  86
    7.2.1 Input Devices  88
  7.3 Brush Dynamics  89
    7.3.1 Kinematic Analysis  89
    7.3.2 Energy Analysis  91
    7.3.3 Constraints  93
    7.3.4 Energy Optimization  93
  7.4 Brush Geometry  94
    7.4.1 Single Spine Model  94
    7.4.2 Footprint Generation  95
    7.4.3 Multi-Spine Models  97
  7.5 Results  98
  7.6 Discussion  98

8 Digital Artistic Tools  101
  8.1 Introduction and Motivation  101
  8.2 Objectives  103
  8.3 Real-world Artistic Tools  103
    8.3.1 Textured Tissue  103
    8.3.2 Diffusion Controller  104
    8.3.3 Masking Fluid  105
    8.3.4 Selective Eraser  105
    8.3.5 Paint Profile  106
  8.4 Digital Artistic Tools  106
    8.4.1 Textured Tissue  106
    8.4.2 Diffusion Controller  108
    8.4.3 Masking Fluid  112
    8.4.4 Selective Eraser  112
    8.4.5 Paint Profile  113
  8.5 Results  116
  8.6 Discussion  116

9 Conclusions  119
  9.1 Summary  119
  9.2 Future Directions  121
    9.2.1 High Resolution Images  121
    9.2.2 Painterly Animation  121
    9.2.3 Additional Artistic Tools  122
    9.2.4 Brush Improvements  122

Appendices  129

A Scientific Contributions and Publications  129

B Samenvatting (Dutch Summary)  131
  B.1 Introductie  131
  B.2 Een Uitbreidbaar Component-gebaseerd Raamwerk  133
  B.3 Digitale Verfapplicaties  135
  B.4 Een Gelaagd Canvasmodel  136
  B.5 Een Gedistribueerd Canvasmodel voor het Schilderen met Waterverf  139
  B.6 WaterVerve: Een Simulatieomgeving voor het Creëren van Afbeeldingen met Waterachtige Verf  140
  B.7 Een Borstelmodel  142
  B.8 Digitale Artistieke Hulpmiddelen  143
  B.9 Conclusies  146

Bibliography  155
List of Figures

1.1 Real-life vs. computer-generated paintings.  3

2.1 An application built on top of the framework.  13
2.2 A component with its provided and required interfaces.  16
2.3 Schematic view on the system composition of a simple unconstrained body simulator.  21
2.4 Schematic view on the system composition of a simulation environment with interacting 3D bodies.  22
2.5 A simulation environment demonstrating three types of 3D object interactions.  24
2.6 Schematic view on the dynamic system composition of the interactive painting environment.  25

3.1 Early substrate models.  33
3.2 Recent work on thick paint and Oriental ink.  35

4.1 The three-layer canvas model with some basic activities.  42
4.2 Real-world strokes showing several typical watercolor effects.  43
4.3 Moving water to a right neighboring cell.  50
4.4 Evaporation in the fluid layer on the edge of a stroke.  52
4.5 Enlarged parts of computer-generated paper textures.  56

5.1 High-performance cluster (HPC) setup for processing a canvas.  60
5.2 A canvas divided in subgrids.  61
5.3 Distributed activity scheme.  62
5.4 Exchanging border cells.  63
5.5 Overlapping brush patterns during brush movement.  64
5.6 Example strokes created with the distributed canvas model.  66
5.7 An image created on a grid with dimension 400 × 400.  66
5.8 An image created on a grid with dimension 400 × 400.  67
5.9 The frame rate while drawing on local and distributed canvasses.  67

6.1 Strokes showing various watercolor effects.  76
6.2 A computer-generated watercolor image.  77
6.3 Several computer-generated watercolor images.  78
6.4 An example of a computer-generated gouache image.  79
6.5 A computer-generated image in Oriental black ink.  80
6.6 The user interface of WaterVerve.  81
6.7 Palette and brush dialogs.  82
6.8 Brush view and orthogonal view on the canvas.  83
6.9 Hardware setup of the painting system.  84

7.1 The three components of a paint brush.  86
7.2 Several commonly used real brushes.  87
7.3 Kinematic representation of a bristle.  90
7.4 Finding the drag vector in a 2D scenario.  92
7.5 A single-spine round brush model.  95
7.6 A flat brush model with two spines.  97
7.7 Computer-generated sample strokes with a round brush.  99
7.8 Computer-generated sample strokes with a flat brush.  99

8.1 Applying a textured tissue to an area of wet Oriental ink.  104
8.2 Applying masking fluid with a brush on the canvas.  106
8.3 Removing water and pigment with an absorbent tissue.  107
8.4 Use of the digital textured tissue.  108
8.5 The unmodified diffusion algorithm.  110
8.6 Blocking patterns used to "steer" pigment in the diffusion process.  111
8.7 Using the masking fluid.  113
8.8 A computer-generated image in Oriental ink.  115
8.9 Applying masking fluid to "oranges".  116

9.1 Painterly animation using data from an interactive painting session.  124
9.2 Close-ups showing the (real-world) effects of different material on paint behavior.  125
List of Tables

4.1 Updating the velocity vector field.  47
4.2 Updating water quantities.  50
4.3 Updating pigment quantities.  52

6.1 The digital watercolor palette.  73
6.2 Texture data used in the render step.  73
6.3 Cg code for Kubelka-Munk rendering.  74

7.1 Cg code for free-form deformation.  96

8.1 Cg code for removing pigment from the canvas with a tissue.  109
8.2 Cg code for the "selective eraser" tool.  114
Chapter 1

Introduction

Contents
1.1 Digital Interactive Painting  1
1.2 Problem Statement  2
1.3 Motivation  4
1.4 Contributions  5
    1.4.1 A Framework for Physically-based Animation and Simulation  5
    1.4.2 Real-time Simulation of Thin Paint  6

1.1 Digital Interactive Painting
The introduction of computers presented the possibility to create drawings and
paintings in a totally different way than had been practiced for thousands of years.
Since then, a lot of effort has gone into the design of applications that try to
rival the real-life painting process. The large amount of research invested in
creating digital equivalents of real-life painting is not surprising, given the fact
that throughout the majority of our history people have been fascinated with
creating images. The activity is very straightforward and sufficiently accessible
so that users of all ages can practice it; yet, mastering it is complex enough
that it remains challenging to even an expert. Digital interactive painting has
the same extensive audience.
Someone who deserves a lot of credit for making painting catch on with
people of all ages is Bob Ross. Nearly 300 public television stations all over the
world aired his educational television show “The Joy of Painting”, in which
Bob painted “happy little trees – and let’s not forget his happy little friend
here” on a canvas. He will soon be able to share his talent in a digital interactive way because a “Joy of Painting” video game is currently in production
for PC and Nintendo's DS and Wii consoles¹.
The popularity of digital art in general is especially recognizable by the
success of Pixar’s and Dreamworks’s computer-generated films, which have
revolutionized the genre of traditional animation. Although the stylized look of
movies like Toy Story and Shrek does not try to mimic traditional paint media,
their creation would not have been possible without the help of digital tools.
This tendency continues in the recent movie "A Scanner Darkly", which
uses digital techniques to give a traditionally filmed movie a cartoon-like
look. Observing this trend, one could imagine a movie made in some painterly
style, in which a digital interactive painting tool provides the reference input
material.
1.2
Problem Statement
So far, digital paint systems have had little impact on how artists make images.
Most still prefer to use the traditional methods because the digital results look
too artificial. Figure 1.1 compares a real painting with an interactively painted image. While the
bottom image was created with a state-of-the-art commercial painting application, it is immediately clear that it lacks the "natural look" and complexity
of the real painting in the top image.
Also, in moving from a real-life canvas to the computer screen, an artist
should gain an incredible amount of flexibility; this does not apply to a great
deal of the applications that painterly rendering research has brought us.
The thesis of this dissertation can be formulated as follows:
An interactive paint system that adopts physically-based techniques,
combined with intuitive tools, is able to produce images with watery
paint media in real-time, while capturing the complex behavior of
the media more faithfully than traditionally possible.
As we will show in the overview in chapter 3, literature has already brought
forward a long line of interactive painting applications. Applications that have
¹ http://www.bobross.com/news.cfm
Figure 1.1: Real-life vs. computer-generated paintings. (a) "Vase of Flowers", oil on canvas by Claude Monet (1880). (b) Computer-generated image, by Corel Painter [Corel 04].
an underlying model that is capable of faithfully recreating the effects found
in real artistic media are relatively sparse, and techniques that capture its
complexity, variety and richness only recently began to appear in literature.
Due to the complexity of this process, however, it is a challenging task to
simulate all this in real-time. Existing work reveals that numerous problems
remain in the visual results, as well as in the creation process itself. In both
Western and Oriental versions of digital painting quite realistic results can
be obtained, but most often at the expense of a tedious, non-intuitive way
of creating them. User input and rendering often occur in separate stages of
the simulation process, creating a mismatch between what the system delivers
and what the artist actually had in mind. Our work targets precisely these
problems.
It is common knowledge that writing software is hard. Writing software
that addresses the issues related to real-time physically-based painting is especially challenging. With hardware evolving at a fast pace, software has to
anticipate continuous changes in requirements. Without a well-defined
development strategy the software’s architecture will quickly deteriorate and
become hard to change or extend.
A flexible and extensible environment is required, which is able to curb the
complexity of building and maintaining physically-based simulation systems,
a real-time interactive painting environment in particular.
1.3
Motivation
Creating a digital equivalent of the traditional painting process has several
advantages, making it a valuable tool for both novices and experienced artists:
• the material is free, durable, and can easily be customized according to
a user’s needs;
• painting becomes even more accessible;
• making images is less cumbersome: there is very little set-up time, no
mess and no cleaning up afterwards;
• it allows saving and loading of intermediate results, post-processing and
the making of perfect copies;
• during painting, digital tools allow all sorts of non-physical operations,
control over aspects like drying time, undoing mistakes, removing pigment and water, . . . ;
• creating painterly animations becomes a feasible goal.
In real-world painting, durability is an important issue: “foxing” indicates the development of patterns of brown and yellow stains on old paper,
and requires treatment with bleach to be removed. Moreover, depending on
paint characteristics, side effects of the aging process include the appearance of
cracks and changing colors: darkening to black or fading to non-existence, commonly called “fugitive colors”. An anonymous artist summarizes the fragility
of paintings as [Conley 06]:
"Unless you're planning on hermetically sealing your paintings and
viewing them in a low-UV climate-controlled room, skip them."
Quality art materials are expensive, require handling with care, and have
to be replaced from time to time. Carefulness does not only apply to the
materials but also to the painter him/herself: solvents used to thin oil paints
or clean up hands and brushes come with nasty smells and toxic fumes and
require the painter to work in a well-ventilated room.
The traditional painting process is linear: it is impossible to return to
a previous layer and modify it. Likewise, the possibilities of undoing mistakes are very limited. In its digital counterpart, however, the presence of
such operations is almost straightforward. The non-linear aspect encourages
experimentation and avoids frustration when things go wrong.
Finally, making a painterly animation without a digital tool would require
painting every single frame, even if many frames differ only slightly from their
predecessor. A specialized painting application could create a whole animation from a single drawn input image in just a fraction of the time.
1.4 Contributions

1.4.1 A Framework for Physically-based Animation and Simulation
This dissertation consists of two parts. In the first part we outline a lightweight
component-based framework that enables dynamic system composition based
on a description in XML format. The framework is validated by creating
several domain-specific applications, starting with a simple unconstrained rigid
body simulator that is later on extended to a simulation environment in which
several kinds of object-object interactions can be defined. Finally, the system
composition of a painting environment is described.
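To give a feel for this approach, a system description in such an XML format might look as follows. The element names, component types and interface names below are purely illustrative assumptions; they do not reproduce the framework's actual schema:

```xml
<system>
  <!-- Instantiate components by (hypothetical) type name -->
  <component id="integrator" type="RungeKuttaIntegrator"/>
  <component id="body" type="RigidBody"/>
  <component id="renderer" type="OpenGLRenderer"/>

  <!-- Wire a required interface of one component to a provided
       interface of another -->
  <connect from="body.integration" to="integrator.integrate"/>
  <connect from="renderer.scene" to="body.state"/>
</system>
```

Because the composition lives in such a description rather than in compiled code, a different integrator or renderer can be substituted by editing the file, without rebuilding the application.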
1.4.2 Real-time Simulation of Thin Paint
The second part of this dissertation concentrates on the details of the painting
environment. We start by introducing a new physically-based canvas model
for the interactive simulation of thin, watery paint in real-time (chapter 4),
followed by a description of two implementations. The first implementation,
presented in chapter 5, relies on a computer cluster performing a parallel
simulation. Although arguably impractical, this setup is necessary to maintain
the interactive property of the system. However, it does provide a proof-of-concept of our model and was used to "fine-tune" the algorithms and get more
insight into the complex behavior of paint.
Chapter 6 proposes a second approach. It implements the same canvas
model but adopts modern programmable graphics hardware. It is the first
model that provides a real-time painting experience with watercolor paint, while
comprising sufficient complexity to capture its complicated behavior, its interactions with the canvas as well as its chromatic properties.
The real-time interactive painting environment is general enough to simulate paint media related to watercolor, like gouache and diffusive Oriental
ink, and contains a set of digital tools that are able to create some distinctive effects that previously were difficult or impossible to produce (chapter
8). Furthermore, in chapter 7, a new 3D deformable brush model extends the
simulation with the ability to capture an artist’s gestures and translate them
to realistically shaped strokes.
We conclude this dissertation in chapter 9 with some closing remarks and
possible directions for future research.
With this work we want to target not only experienced artists who desire
to create artwork in an unconventional way, but also inexperienced users who
take on painting for mere entertainment. Both can benefit from the advantages
of digital painting.
Part I
A Framework for
Physically-based Animation
and Simulation
Chapter 2

An Extensible Component-based Framework
Contents
2.1 Introduction and Motivation  9
2.2 XML and Component Models  12
2.3 Architecture  12
    2.3.1 Simple Reflection Mechanism  13
    2.3.2 Storage Facilities  14
    2.3.3 Inter-component Communication  14
2.4 Component Wiring with XML  15
    2.4.1 Instantiating Components  16
    2.4.2 Wiring  17
    2.4.3 Scripted Behavior  19
2.5 Building Component-based Applications  20
    2.5.1 Unconstrained Body Simulator  20
    2.5.2 A "Tinker Toy" Simulation Environment  22
    2.5.3 A 3D Painting Environment  24
2.6 Discussion  26

2.1 Introduction and Motivation
Physically-based simulations provide an efficient way of investigating and visualizing complex phenomena. They are widely used in scientific visualizations, medical applications, virtual environments, (non-)photorealistic rendering, computer games and movies to obtain a controllable level of reality. Unfortunately, the physically-based approaches are generally characterized by
huge computational demands.
Software systems that incorporate support for physically-based simulation
tend to have a very complex architecture. If designed in a monolithic way
they quickly become hard to maintain or extend.
The domain of physically-based simulations is characterized by the availability of many different toolkits, often in the form of libraries that solve
very specific problems. A typical example is the vast amount of available
collision libraries, which have the task of reporting and handling interferences between dynamic objects. For this particular aspect, the developer
of a dynamic virtual environment has the choice between, among many others, the H-COLLIDE library [Gregory 99] if haptic interaction is needed, the
V-COLLIDE library [Hudson 97] for large environments with many objects,
SOLID [van der Bergen 99] if the environment contains deformable objects or
PQP [Gottschalk 99] for additional distance computations. It is not always
clear which is the right choice in a particular situation. The same goes for
the choice of algorithms for collision determination and response, constraint
checking, numerical integration, rendering and almost any other aspect of a
simulation.
A flexible and extensible environment that is able to curb the complexity
of combining and exchanging these kinds of existing toolkits is required.
Component-based development (CBD) can provide an elegant solution for
these issues. In fact, the adoption of CBD in software in general is motivated
by Szyperski simply as:
“Components are the way to go because all other engineering disciplines introduced components as they became mature – and still
use them.” [Szyperski 98].
The term “(software) component” is used extensively throughout literature, in different contexts and not always with a clear meaning. We will use
Szyperski’s definition, which states that components are executable units of
independent production, acquisition, and deployment that interact to form a
functional system [Szyperski 98].
A component framework is defined as a partially completed application of
which pieces are customized by the user. It focuses on design and code reuse,
freeing designers from the difficult choice of creating a suitable domain-specific
architecture. The “late binding” (or late composition) paradigm enables components to be deployed in the architecture at runtime. The absence of a late-binding mechanism in traditional object-oriented programming leads to monolithic
constructs.
There are substantial benefits to applying a component oriented approach
in software design. Even if component reuse is not an immediate goal, the
inherent modularity of a component architecture has significant software engineering advantages. Component dependencies and inter-communication are
made explicit, which makes it easier to control and reduce dependencies, ensure
fault containment, analyze subsystems, pin-point performance issues, perform
maintenance, and get a better understanding of the system as a whole.
Despite many standardization efforts that brought forward heavyweight component models like Microsoft’s COM+, Sun’s Enterprise JavaBeans and OMG’s CORBA component model, many application domains still resort to their own custom-made component models and frameworks. The work of Ponder et al. identifies several reasons for this [Ponder 04]. Most of all, the heavyweight models mainly focus on enterprise information management, stressing aspects like transactional, distributed and secure business logic, while in physically-based simulations quality attributes such as interactivity, predictability, scalability and real-time performance should dominate.
In the work of Baraff, a mixed simulation environment is described that
targets the combination of multiple disparate simulation domains like rigid
body simulation, cloth simulation and fluid dynamics [Baraff 97a]. Using a
modular design, each simulation domain is encapsulated in a “black-box” and
provided with a simple generic interface.
A somewhat related approach can be found in the domain of global illumination, in which Lighting Networks can be adopted to efficiently simulate
specific lighting effects by compositing “lighting operators” into a network
[Slusallek 98]. In this case, each operator is a black box that contains a separate global illumination algorithm.
We describe a lightweight architecture that uses components to encapsulate and provide existing functionality that is delivered to an application by
“plug-ins”: shared libraries (DLLs) that contain code and data that extend the functionality of an application (also called application extensions or drop-in additions).
Most pluggable architectures use their plug-ins “as is”, and limit the possibility of customizing them to the specific demands of individual applications.
Examples can be found in browsers or image manipulation programs, where
plug-ins are commonly used to add tools, actions and effects. The contents of our plug-ins are introduced into the application as components or building blocks that can be tailored and put in place (“wired”) by an XML document [XML 01]. This way, the XML document actually serves as a blueprint of the component layout, instantiating the behavior of an application.
2.2 XML and Component Models
The application of XML in component models is not new. Several projects
have incorporated XML-based protocols like SOAP or XML-RPC as a means
of communication between components [Box 00].
Within the domain of computer graphics, Extensible 3D (X3D) is an XML-enabled 3D format that refines VRML97 towards the CBD methodology. Just like VRML97, it is designed by the Web3D Consortium as a standard for 3D graphics on the World Wide Web.
The VHD++ component framework describes a domain-specific architecture supporting games, virtual reality and augmented reality systems (GVAR)
[Ponder 04]. Their work gives a detailed survey on existing methods in this
particular domain. Similar to our approach, the composition of their own
system relies on XML for the structural coupling of components.
Much attention will be paid to the aspects of XML in the framework.
However, a description of the underlying concepts is necessary in order to give
a clear explanation of how we applied XML. This will be the topic of the
following section. Next, a few applications developed with the framework will
be discussed.
2.3 Architecture
In this section we will elaborate on the details of the framework’s architecture.
The discussion focuses on design concepts and abstracts away the underlying implementation issues.
Figure 2.1 shows the general structure of an application realized with the
framework. Each layer will be discussed further in subsequent parts of this
section. As depicted in this figure, the base layer of the framework is embodied
by the core. Its main purpose is to deliver supporting facilities to the extension
and application layers above.

Figure 2.1: An application built on top of the framework. Components are introduced by plug-ins, and customized and wired by means of an XML document.

These facilities include:
• Basic types and interfaces.
• Simple reflection mechanism.
• Central storage facilities.
• Inter-component collaboration.
The basic types and interfaces contain several domain-specific types, and
interfaces for components and plug-ins.
2.3.1 Simple Reflection Mechanism
The reflection mechanism relates to the self-descriptive ability of components.
It enables their functionality to be discovered dynamically at system runtime
by other components. Our mechanism for discovering the component interfaces is straightforward and relies on a central repository in which components
register their functionality.
We define a service as an expression that is associated with a functionality
of a certain component, and which has a well-defined format. The format is
similar to that of a filename, which includes a pathname to denote the place
of the file in a tree structure. The service name has to be unique, just as
the combination of a path and a filename has to be unique. Assuming that
branches of a tree contain related services, it becomes very easy to query for
a group of components that deliver interrelated services. As an example, the expression “Shape/*” denotes all services provided by a “Shape” component; more specifically, “Shape/Polygon/*” queries all “Polygon” services.
A service provider is a component that offers one or more services. It
makes these services publicly available by exporting them in a central storage
facility.
2.3.2 Storage Facilities
The storage facilities include a service repository in which service providers can
drop their services and make them public for other components to query and
use. For example, a component that requires some “Shape”-related services
can ask the service repository for matching providers using a query of the form
given in the previous section.
Additional repositories are provided for storing components and plug-ins.
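The path-like service names make prefix queries straightforward to implement. As a minimal sketch of the idea (all class and method names here are hypothetical illustrations, not the framework's actual API), a repository could map service names to provider identifiers and answer wildcard queries such as “Shape/*” by prefix matching:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a service repository: providers export services
// under path-like names, and queries of the form "Shape/*" return every
// provider whose service name starts with the prefix "Shape/".
class ServiceRepository {
public:
    void exportService(const std::string& name, const std::string& provider) {
        services_[name] = provider;
    }

    std::vector<std::string> query(const std::string& pattern) const {
        std::string prefix = pattern;
        if (!prefix.empty() && prefix.back() == '*')
            prefix.pop_back();                       // drop the trailing '*'
        std::vector<std::string> result;
        for (const auto& [name, provider] : services_)
            if (name.compare(0, prefix.size(), prefix) == 0)
                result.push_back(provider);          // name starts with prefix
        return result;
    }

private:
    std::map<std::string, std::string> services_;
};
```

Because related services share a branch of the name tree, a single prefix query retrieves a whole group of interrelated providers at once.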
2.3.3 Inter-component Communication
Inter-component collaboration depends on bottleneck interfaces: interfaces introduced to allow the interoperation between independently extended abstractions (components) [Szyperski 98].
Traditionally, programming paradigms rely on the caller-callee concept
that establishes connections between software elements at compile-time (“static
wiring”). The component oriented methodology requires connections to be established dynamically at runtime (“late binding”), which together with the
reflection mechanism enables dynamic system composition.
Communication between components in our framework is connection-driven
and happens by means of an indirect interaction pattern using what we call
commands. A command is a special service provider that intercepts calls and
forwards them to the target component on which the operation is executed.
A source component that wants to invoke a service of a target component
has to instantiate the right type of command and indicate the target component during the initial wiring phase. At runtime the command is activated to
effectuate the collaboration.
The connection-driven composition approach targets high performance, as
opposed to the asynchronous data-driven approach in which collaboration is
established by means of message passing.
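A command can thus be thought of as a stored, late-bound call: during the wiring phase it is pointed at a service of the target component, and at runtime the source component merely activates it. A minimal sketch under these assumptions (the names are illustrative, not the framework's actual API):

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Sketch of a command: it intercepts a call and forwards it to the target
// component. The binding is established at wiring time ("late binding"),
// so the source component never refers to the target directly.
class Command {
public:
    void wire(std::function<void()> target) { target_ = std::move(target); }
    void activate() const { if (target_) target_(); }

private:
    std::function<void()> target_;
};

// A target component offering a "Shape/Draw"-style service.
struct Shape {
    int drawCalls = 0;
    void draw() { ++drawCalls; }
};
```

Wiring then amounts to `cmd.wire([&shape] { shape.draw(); });`, after which any component holding the command can trigger the drawing operation without compile-time knowledge of `Shape`.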
The actual functionality of an application built with the framework is
determined by the presence of plugged-in entities or “plug-ins”. An application
typically loads some basic plug-ins at start-up time. At this stage, the service
repository contains building blocks in the form of prototype components. The
prototype is formed using the creational design pattern “prototype”, which
specifies a prototypical instance that is cloned to produce new components
[Gamma 99]. It is up to the application to instantiate1 and combine these
components, and model the intended behavior. In practice the application can
get the description of what to model from an XML document, as discussed
in the next section. This approach differs from classic plug-in architectures,
in which the plug-ins are fixed, pre-manufactured extensions that allow only
limited modification.
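The prototype pattern [Gamma 99] underlying this instantiation step can be sketched as follows (a simplified illustration with hypothetical class names; the framework's own classes are not shown in the text):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Sketch of the "prototype" creational pattern as used for component
// instantiation: plug-ins register prototype objects under a classtype
// name, and the application clones them to obtain new components.
class Component {
public:
    virtual ~Component() = default;
    virtual std::unique_ptr<Component> clone() const = 0;
};

// Example component type a plug-in might contribute.
class SolidDetector : public Component {
public:
    std::unique_ptr<Component> clone() const override {
        return std::make_unique<SolidDetector>(*this);
    }
};

class PrototypeRepository {
public:
    void registerPrototype(const std::string& classtype,
                           std::unique_ptr<Component> proto) {
        prototypes_[classtype] = std::move(proto);
    }

    // Clone the prototype registered under 'classtype', if any.
    std::unique_ptr<Component> instantiate(const std::string& classtype) const {
        auto it = prototypes_.find(classtype);
        return it == prototypes_.end() ? nullptr : it->second->clone();
    }

private:
    std::map<std::string, std::unique_ptr<Component>> prototypes_;
};
```

In this reading, the `classtype` attribute of an XML `component` node is simply the lookup key passed to `instantiate`.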
2.4 Component Wiring with XML
A typical application built with the framework consists of a set of coupled
or “wired” components. The term “wiring” comes from the analogy with
integrated circuits, which have various pins that carry input and output signals
[Szyperski 98]. In software these pins are the incoming and outgoing interfaces
respectively. However, this analogy is only valid for the direction of the calls
as seen by the component, not the flow of information (which can go either
way).
Instead of hard-coding the component layout in the application itself, the
structural coupling is based on declarative scripting using an XML document
that is processed at the application’s start-up time.
This allows a uniform expression of a system’s composition out of components, along with their coupling. Additionally, all the advantages of XML are
inherited, of which the following are most important to us; XML:
• has a user-friendly format that is human-readable.
• supports extensible tags, document structure and document elements.
• offers reusable data.
• is flexible: data can be transformed into other formats using XSLT
[Kay 00].
• is portable.
1 As Szyperski points out, components do not normally have direct instances, so the term “component instance” is in fact a simplifying notion that describes a web of object-oriented class instances [Szyperski 98].
Our way to achieve this task is to let the plug-ins export a selection of
prototypes for each type of component a user can instantiate. Cloning these
prototypes through XML arguments, and combining the instantiated results
in the right way delivers the desired system behavior.
2.4.1 Instantiating Components
Figure 2.2: A component with its provided and required interfaces.
After loading some plug-ins, the service repository contains component prototypes. The next step is to instantiate the necessary components and dynamically compose the application by coupling them in the right way.
<component id="00"
name="SOLID"
classtype="CDetection/SOLID">
<!-- Component content -->
</component>
The value of the classtype argument is used to clone the prototype of
the requested component: in this case an instance of the SOLID collision
detection library, an existing library that detects interferences between objects
in a virtual environment [van der Bergen 99]. The additional id argument,
which must be a unique expression, makes it possible to reference the XML
node elsewhere. The value of the name argument is given to the instantiated
component.
In general, components define a mixture of computation and content.
Within graphics applications the content often consists of a 3D model description, like a polygon mesh, which can also be described using an XML-based
format. The XGL specification is a file format based on XML that is designed to capture all 3D information from a system that uses SGI’s OpenGL rendering library [XGL 01]. Our framework allows extension of the XML vocabulary with additional handlers that parse and process XML data that is
embedded in the XML document, or placed in a separate file that appears in
XML with an include tag. This approach enables an XGL parser to pass a
3D model description as content to a component during its instantiation.
<component id="02"
name="MyShape"
classtype="Shape">
<!-- XGL code -->
<MESH ID="0"> ... </MESH>
</component>
2.4.2 Wiring
The connection-driven communication style requires the specification of incoming or provided and outgoing or required interfaces (figure 2.2). The wiring
procedure involves defining the interaction patterns between these two types
of interfaces. In our approach, the interaction patterns are made explicit in
XML using the previously mentioned commands.
The next XML extract instantiates a command that connects to the provided interface of a Shape component. With the provides argument, one can
specify to which part of the provided interface the command connects, in this
case the Draw procedure of a Shape component with the name “MyShape”:
<component id="02"
name="MyShape"
classtype="Shape">
<command id="501"
name="MyShape/Draw"
provides="Shape/Draw" />
</component>
Some commands need extra parameters. These can be set by adding
cmdparam child nodes:
<command id="510"
name="MyState/AddForce"
provides="State/AddForce">
<cmdparam provides="SetPos">
<udata>0.0,0.5,0.0</udata>
</cmdparam>
<cmdparam provides="SetForce">
...
</cmdparam>
</command>
An AddForce command for applying a force to a rigid body is manufactured here; it connects to the provided interface of a State component. The two cmdparam child nodes customize the command by setting the position of the force and the force vector itself. The udata tag is used to pass raw data, in this case the actual value of the parameter.
The required interface of a component contains the procedures that the
component needs to call in order to perform its operations. For example, if a
SimObject component “MyObject” at some point needs to visualize itself, for which it relies on other components, it will query the service repository for the command MyObject/Render!, where the exclamation mark indicates that the command forms part of the required interface. The XML wiring procedure uses a command with a name attribute
that refers to that part of the required interface, and connects it with a cmdref
node to a previously declared command that represents a part of some other
component’s provided interface.
<component id="01"
name="MyObject"
classtype="SimObject">
<command id="502"
name="MyObject/Render!">
<cmdref idref="501"/>
</command>
</component>
A command can also create more complex wiring and serve as a switchboard
to specify “callers-to-group” or “group-to-callers” connectivity. Executing the
switchboard command will invoke all referenced commands in its body:
<command id="530"
name="MyObject/Step!">
<cmdref idref="501"/>
<cmdref idref="502"/>
...
</command>
2.4.3 Scripted Behavior
Interpreted programming languages or “script” languages gave rise to the
fourth party development phenomenon, sometimes referred to as “fan base development”, in which script writers can further customize an existing product
to satisfy their own needs. Especially in the context of game development, entire on-line communities are created around so-called mods: game modifications
that enable customizing various aspects of the game.
Script languages like Python and Lua are also valuable tools for system prototyping. They allow for flexible runtime experimentation on a high abstraction level. Our framework supports XML-embedded scripting with Python.
During system composition, instead of creating patterns that connect a component’s required interface to some other component’s provided interface, a
command that encapsulates a Python script can be attached to the required
interface.
<command id="530"
name="MyObject/PreStep!">
<command provides="Script/Python">
<cmdparam provides="SetCode">
<udata>print "Hello"</udata>
</cmdparam>
</command>
</command>
The example above shows a Python script that fulfills the PreStep procedure of MyObject’s required interface. If the code fragments become larger,
they can be placed in a separate file that appears in XML with an include
tag.
2.5 Building Component-based Applications
The component framework that was outlined in the previous section is now
used to composite several applications in the domain of physically-based simulations.
An important issue that has to be dealt with during system design is that of
component granularity or component size. There is no rule of thumb regarding
the best size of a component, as it depends on many aspects. Szyperski discusses these aspects in his work and classifies them under units of abstraction,
accounting, analysis, compilation, delivery, deployment, dispute, extension,
fault containment, instantiation, installation, loading, locality, maintenance
and system maintenance [Szyperski 98]. Most of these aspects indicate a preference towards coarse-grained components.
The framework is realized in C++, assuring a good balance between high
performance and flexibility. For the same reasons, the components internally
rely on an object-oriented design using C++.
2.5.1 Unconstrained Body Simulator
A first application has the ability to deserialize XGL files and turn XGL meshes into unconstrained rigid simulation bodies that can be viewed in a 3D environment. Figure 2.3 depicts a schematic view of the principal components participating in the application, along with their interfaces and (partial)
wiring.
The five principal component types are defined as:
SimCore Provides a scheduling mechanism.
SimObject An abstraction of a simulation object.
Shape Captures information on a 3D object’s appearance, for example in the
form of a polygon mesh.
State Captures information on a 3D object’s current state in the world, such
as position, orientation and applied forces.
ODESolver Features an ordinary differential equation (ODE) solver.
Figure 2.3: Schematic view on the system composition of a simple unconstrained body simulator.
As shown in figure 2.3, a 3D simulation object is abstracted as a set of
components, representing its state, shape and simulation functionality. This
abstraction provides a flexible way of defining the object’s capabilities. For
example, a simple static 3D object would be described by a shape component,
containing its visual properties. Adding a simple state component would give
it a place and orientation in the environment. Replacing the state component with a more advanced version, which keeps track of dynamic properties
like velocity and the ability to receive forces, would let the object exhibit dynamic behavior. A dynamic state component relies on the functionality of the
ODESolver component to update itself for the next timestep. The solver can
be parametrized to use a range of different (explicit) integration techniques
such as Euler, midpoint and Runge-Kutta [Baraff 97b].
The scheduling component SimCore provides in this case a simple sequential scheduling algorithm that initiates the different simulation phases such as
timestepping and rendering.
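The difference between the explicit integration schemes mentioned above is easiest to see on a scalar ODE dy/dt = f(t, y). The sketch below illustrates only the numerical schemes [Baraff 97b]; the actual ODESolver interface is not shown in the text, so all names here are assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// 'Deriv' evaluates dy/dt at (t, y); each step advances the state y over
// one timestep h.
using Deriv = std::function<double(double t, double y)>;

// Explicit Euler: follow the derivative at the start of the step.
double eulerStep(const Deriv& f, double t, double y, double h) {
    return y + h * f(t, y);
}

// Midpoint method: probe the derivative halfway through the step,
// which yields second-order accuracy at the cost of one extra evaluation.
double midpointStep(const Deriv& f, double t, double y, double h) {
    double k1 = f(t, y);
    return y + h * f(t + 0.5 * h, y + 0.5 * h * k1);
}
```

Runge-Kutta (fourth order) follows the same template with four derivative evaluations per step; the solver component can simply be parametrized with the desired step function.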
Figure 2.4: Schematic view on the system composition of a simulation environment with interacting 3D bodies. Simplified notation of the connected
interfaces was used to improve the diagram’s readability. Similar to the representation of the bodies, each interaction is represented by a set of components
in which SO is a SimObject component, C is a Constraint component and
Col is a Collision component.
2.5.2 A “Tinker Toy” Simulation Environment
The unconstrained body simulator can be extended to an environment in which
multiple bodies can interact with each other using mechanisms like collision
detection and response, joint constraints in an articulated body, non-contact
forces like gravity and magnetism, and user interaction.
Figure 2.4 shows an environment that contains three different kinds of
interaction mechanisms:
Collision detection and response Detects and resolves collisions between
3D objects.
Angular velocity constraint Constrains the angular velocities of two objects to be linearly dependent through a Constraint component.
Procedural scripting Describes how objects interact using a Python script.
Taking a similar approach as in the representation of a 3D object by a set
of components, interactions are abstracted as a set of components that collaborate in realizing the interaction between 3D objects. In the example, each
interaction is defined between just two objects. Each interaction also contains
a SimObject component that enables it to participate in the simulation.
The setup includes an engine that drives an axis. Mounted to the axis
is a small gear that is coupled to another gear, which is twice as large and
mounted to a second axis. The same axis contains a fan, in front of which
are two balloons, a red and a green one. In this example, we will assume
only the red balloon is close enough to the fan to get influenced by its airflow.
The result of this particular simulation is that the fan starts rotating, which
generates an airflow, and sets the red balloon on a collision course to the green
balloon. The balloons collide and drift away. Figure 2.5 shows the course of
action.
The Collision library component in the framework encapsulates the
SOLID collision library [van der Bergen 99] for reporting interferences between dynamic 3D objects, along with the bisection method for collision resolution and an impulse-based method that defines collision response behavior
[Mirtich 96]. Each dynamic object that participates in the collision detection/resolve process is associated with a Collision component that represents
the object in the collision library.
Object interactions are not always this complex. Sometimes their means
of communication is relatively simple. For example, the engine makes an axis
rotate by simply applying a torque to it. An efficient approach in this case
is to let the interaction component describe in some scripting language like
Python how the participating objects communicate.
In case of the two gears, the desired interaction is also relatively simple,
and can be expressed by a linear relationship between their angular velocities.
Although in real-life such behavior emerges from collisions between the objects,
the collision detection and response technique is not always suitable for the
simulation of object interactions. For instance, when a drawer is pulled out
of a cabinet, the path that the drawer follows within the cabinet can easily
be described using simple geometric constraints. The application of collision
detection and response in this particular situation would lead to excessive
calculations. The same approach can be taken in the case of the two gears: a simple constraint mechanism, encapsulated in a Constraint component, provides a
way to force a linear relationship between their angular velocities. When more complex constraints between multiple bodies are needed, a constraint manager that resolves complex constraint graphs should be introduced.

Figure 2.5: A simulation environment demonstrating three types of 3D object interactions. The engine activates the fan through the gears, which generates an airflow that makes the balloons drift away.
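For the gears, the constraint amounts to enforcing a fixed ratio between the two angular velocities after each timestep, with the ratio set by the gear radii. A minimal sketch (the `Gear` type and sign convention are assumptions for illustration; the real Constraint component is not shown in the text):

```cpp
#include <cassert>
#include <cmath>

// Two meshed gears: the driven gear turns at a rate scaled by the radius
// ratio, in the opposite sense. In the example, the driven gear is twice
// as large, so it turns at half the driver's speed.
struct Gear {
    double radius;
    double angularVelocity;
};

// Enforce omega_driven = -(r_driver / r_driven) * omega_driver.
void enforceGearConstraint(const Gear& driver, Gear& driven) {
    driven.angularVelocity =
        -(driver.radius / driven.radius) * driver.angularVelocity;
}
```

A constraint manager generalizes this idea by collecting many such relations into a graph and resolving them together after each simulation step.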
2.5.3 A 3D Painting Environment
The issue of component granularity is also important from the perspective of
real-time performance. Up until now, the applications made with the framework used fairly fine-grained components. Because of the limited computational requirements of these applications, their performance is comparable to that of a monolithic design. However, this approach does not scale to complex physically-based systems. Fine-grained components offer high flexibility at the price of reduced performance caused by wiring overhead. Therefore, grouping of functionality in more coarse-grained components may be inevitable in order to satisfy performance requirements,
and this at the price of components that are more difficult to compose. As pointed out in the work of Ponder, there is no widely accepted systematic approach to decide on component granularity, so developers need to rely on their experience of the application domain to deal with the issue of granularity [Ponder 04].

Figure 2.6: Schematic view on the dynamic system composition of the interactive painting environment. Simplified notation of the connected interfaces was used to improve the diagram’s readability. Both Canvas and Brush components require a SimObject (SO) component to participate in the simulation.
Figure 2.6 shows a schematic view of the painting environment that is
described in the next part of this dissertation. Besides most of the components introduced in the previous sections, the following essential components are part of the application:
Canvas Provides the three-layer canvas model discussed in chapter 4.
Brush Contains functionality of a paint brush. Its appearance and state are
described by Shape and State components respectively.
Canvas Writer An abstraction that represents the collaboration between the
canvas and tools operating on the canvas.
Palette Provides storage for pigment types and their properties.
Optimization Encapsulates functionality of an optimization framework (chapter 7). It is used to find the equilibrium of a 3D brush’s deformable
shape.
The Canvas component is a coarse-grained component that encapsulates
all frame-critical functionality present in the three-layer canvas model, which
is discussed in chapters 4 and 6. It forms the cornerstone of the 3D virtual
painting environment. Being an active simulation element, the canvas also
requires a SimObject component that is connected to the “power supply”
(SimCore) and that controls its time-stepping behavior.
The other active simulation element is represented by the Brush component. The diagram in figure 2.6 shows just one “FlatBrush” component
instance, but any number of parametrized instances can be composited into
the system. Additional Shape and State components upgrade the brush to a
3D object in the virtual environment. A brush also relies on an optimization
framework that controls the brush’s dynamic behavior. This component is
currently embodied by the “donlp2” optimization framework (chapter 7).
Finally, the mediating component between brush and canvas is the Canvas
Writer component. It provides an abstraction for any operation performed
by an external tool on the canvas. The required Palette component stores
pigment types along with their properties (chapter 6).
2.6 Discussion
We discussed the architecture of a lightweight component-based framework
that relies on XML to provide a uniform expression of a system’s composition. Three physically-based applications were made with the framework: a
simple unconstrained body simulator, a “Tinker Toy” simulation environment and a virtual painting environment. The design of the painting environment incorporated several coarse-grained components to satisfy its high performance
requirements.
Although XML provides a human-readable syntax, a graphical wiring editor in which components could be coupled visually would greatly facilitate the
process of system composition.
In the rest of this dissertation, we will concentrate on the functionality of
the paint system.
Part II
Real-time Simulation of Thin Paint

Chapter 3
Digital Painting Applications
Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Interactive Painting Applications . . . . . . . . . . . 30
3.2.1 From Automated Rubber Stamps to Active Canvas . 31
3.2.2 Physically-based Approaches . . . . . . . . . . . . . 33
3.2.3 Western and Oriental Painting . . . . . . . . . . . . 35
3.2.4 Commercial Applications . . . . . . . . . . . . . . . 36
3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.1 Introduction
Before discussing in detail our own contributions to the domain, we first look at
how digital painting applications have evolved from being mere experimental
gimmicks to the state-of-the-art systems presented in recent literature.
In contrast to photorealistic rendering, which encompasses techniques to produce images that resemble reality as closely as possible, the methods described in the non-photorealistic rendering (NPR) literature target the creation of images in some other style than a perfect imitation of “the real world”. Beyond merely creating aesthetically “pleasing” artwork, non-photorealistic rendering may offer a more effective method of communicating the content of an image.
Technical illustrations or “exploded view” diagrams, for example, have the
ability to reduce or eliminate occlusion when visualizing complex 3D objects.
Also, pen-and-ink illustrations are widely used in medical textbooks to portray
anatomy in a simple and comprehensible way.
In this work we mainly focus on a subset of NPR literature, namely that of
the “Artistic Rendering” (AR) techniques, which concentrate on the rendering
of images for their artistic value, in styles like painting, drawing, cartoon or
something completely ad hoc. More specifically, we are interested in methods
that emphasize the role of the user in creating painted images.
Several recent studies present extensive surveys on this matter. Gooch
& Gooch write in great detail on the simulation of artistic media, including
painting [Gooch 01]. Baxter summarizes some of the history of paint systems
in a table containing a large selection of applications along with their most
distinct features [Baxter 04b]. A very detailed overview on AR techniques formulated by Collomosse includes a comprehensible chart that classifies artistic
rendering systems according to the level of user interaction, the dimension of
the input data and the dimension of the rendered output (either 2D, a 2D
image sequence or 3D) [Colomosse 04]. An illustration of the first generations
of digital paint systems in a more “story-telling” format is written by Smith
[Smith 01].
In this chapter we will not formulate another survey on NPR, but rather
highlight some parts of literature that are most relevant to the work in this
dissertation, namely that of the interactive digital paint systems.
3.2 Interactive Painting Applications
One cannot write a historical overview of digital painting without mentioning
the pioneering inventions of Dick Shoup and Alvy Ray Smith and their early
work on a sequence of applications called SuperPaint, Paint, BigPaint, Paint3
and BigPaint3.
SuperPaint was a revolutionary program and can be considered as the
ancestor of all modern paint programs (although it was not the very first1 ).
Shoup not only wrote the paint program software but also designed the pixel-based framebuffer hardware required to support the first 8-bit graphics application. Concepts like a digital palette, a colormap, a tablet and stylus, image
file input and output, in short: all the basic ingredients of a modern paint program, found their origins in SuperPaint. The subsequent applications mainly
extended SuperPaint with more bits per color and higher resolutions, but also
1 According to Smith, this honor goes to a nameless application on a 3-bit framebuffer at Bell Labs, as early as 1969 [Smith 01].
introduced some new techniques that are taken for granted nowadays, such as
airbrushing and image compositing.
A detailed story on the early days of digital painting and the course of
events that led to the creation of commercial applications like Photoshop, the
founding of Pixar Animation Studios and an Academy Award for “Pioneering
Inventions in Digital Paint Systems”, is told by Smith and Shoup themselves
in more recent work [Smith 01, Shoup 01].
In this chapter we are more interested in how painting tools evolved from
being the simple “put-that-color-there” applications, to complex systems that
rival the real-world painting experience. We start by investigating applications that move away from the automated rubber stamp procedure and its
variations, where paint movement ceases after placing repeated copies of some
static or simply derived pattern on a passive canvas, to canvas models that actively process the paint and brush models that genuinely respond to an artist’s
gestures.
3.2.1
From Automated Rubber Stamps to Active Canvas
The work of both Greene [Greene 85] and Strassmann [Strassmann 86] can be
considered as the starting point of a long line of attempts to create a more
realistic model for painting. Greene designs an input device for changing a
physical draw action into a digital stroke, commonly referred to as Greene’s
drawing prism [Greene 85]. Rather than trying to create a new brush model
in software, this special device processes input from real brushes. In fact, any
object can be used as a drawing tool. One face of the prism is used as the
drawing surface, which is viewed by a video camera from an angle such that
only those portions that come in contact with the surface are registered. The
resulting images serve as direct input for the framebuffer. Since then, a great
deal of effort has been dedicated to mimicking as realistically as possible the
behavior of brush, pigment and paper.
Strassmann is the first to present a physical model of the brush movement
on paper, with the purpose of creating traditional Japanese artwork using
black ink [Strassmann 86]. The brush is modeled as a one-dimensional array
of idealized bristles, each carrying an amount of ink. The actual creation of
a stroke involves the input of a number of control points with position and
pressure information from which the final stroke is rendered. The role of the
paper canvas itself, however, is limited to rendering the brush’s imprint. This
limitation, mostly due to a lack of available processing power at that time, is
common to all models up to this point.
Small identifies this obstacle [Small 90] and targets it in his work on watercolor simulation by exploring the actual behavior of pigment and water
when applied to paper. He is one of the first to put the computational power
inside the paper model instead of in the tools that operate on it. The system is designed as a two-dimensional grid of interacting cells. A small number of rules defining simple local behavior between a cell and its immediate neighbors results in complex global behavior. This is the principle of
the cellular automaton [Wolfram 02]. As we will see throughout this chapter, cellular automata play an important role in the work of many authors
[Cockshott 91, Curtis 97, Zhang 99, Yu 03].
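The cellular-automaton principle is easy to demonstrate: one local rule, applied to every cell and its four immediate neighbors simultaneously, yields complex global behavior. The sketch below is a generic illustration of the idea, not the rule set of any of the cited systems; the grid size and exchange rate are arbitrary assumptions.

```python
import numpy as np

def ca_diffusion_step(w, rate=0.25):
    """One cellular-automaton step: every cell exchanges water with its
    four immediate neighbors at a fixed rate (zero-flux boundaries)."""
    # Pad with edge values so no water leaves the grid.
    p = np.pad(w, 1, mode="edge")
    neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    # Discrete Laplacian: a purely local rule, applied to all cells at once.
    return w + rate * (neighbors - 4.0 * w)

# A single drop in the middle of a small canvas spreads out over time,
# while the total amount of water is conserved.
w = np.zeros((32, 32))
w[16, 16] = 1.0
for _ in range(50):
    w = ca_diffusion_step(w)
```

Because each update only reads a cell's direct neighborhood, such rules parallelize trivially, which is one reason cellular automata recur throughout this literature.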
In Small’s work, each cell of the automaton carries local paper information
like pigment concentration and color (in terms of cyan, magenta and yellow),
absorbency and water content. In addition, the simulation also takes into
account some global information like humidity, gravity, surface tension of water
and pigment weight. With these variables the effects of various forces on
pigment, water and paper are simulated by means of the predefined local rules.
The computational complexity of the model required an implementation on a
specialized parallel architecture: the Connection Machine II. This hardware
setup allows the mapping of an individual cell to one of the 16k processors,
each one with 64k of memory. A piece of paper with a resolution of 1024 × 1024
is reported to run between two and ten times slower than expected from a
real-world canvas.
At about the same time, Cockshott creates his own model out of dissatisfaction with the limited possibilities of traditional systems [Cockshott 91].
Being a trained artist himself, he identifies the main problem of existing paint
systems as being the lack of sparkle in the images they produce when compared
to those made by traditional methods and media. He claims this is caused by
the shallowness of painting models and a lack of understanding of the process of real painting and the behavior of paint. His own model, appropriately
named “Wet&Sticky ”, is also based on a cellular automaton and emphasizes
the physical and behavioral characteristics of paint rather than just its color
properties. It therefore includes rules accounting for surface tension, gravity
and diffusion. Again, the model is very computationally expensive, making the
speed of interaction a major issue. The original source code of his work was
released under GPL license, enabling us to recreate some of his results (figure
3.1(b)).
Figure 3.1: Some of the earliest results from substrate models that actively simulate paint media. (a) Strassmann’s hairy brush [Strassmann 86]; (b) paint dripping under gravity in “Wet&Sticky” [Cockshott 91]; (c) computer-generated watercolor [Curtis 97].
3.2.2
Physically-based Approaches
Cockshott’s observations on the shallowness of current painting models, along
with the increased availability of processing power, inspired several authors to
look for more physically-based models to simulate their artistic materials.
Curtis et al. [Curtis 97] adopt a sophisticated paper model and a complex
shallow layer simulation for creating watercolor images, incorporating the work
on fluid flows by Foster and Metaxas [Foster 96]. A painting consists of an
ordered set of translucent glazes or washes, the results of independent fluid
simulations, each with a shallow-water layer, a pigment-deposition layer and
a capillary layer. The individual glazes are rendered and composed using
the Kubelka-Munk equations to produce the final image [Kubelka 31]. Their
model is capable of producing a wide range of effects from both wet-in-wet
and wet-on-dry painting. Fluid flow and pigment dispersion is again realized
by means of a cellular automaton.
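The Kubelka-Munk model mentioned above maps a pigment’s absorption and scattering coefficients K and S (per unit thickness, per color band) to the reflectance and transmittance of a layer, and stacked glazes are then combined optically. The sketch below implements the standard single-band Kubelka-Munk formulas; the coefficient and thickness values are illustrative, not taken from Curtis et al.

```python
import math

def km_layer(K, S, d):
    """Kubelka-Munk reflectance R and transmittance T of one pigment layer
    with absorption K, scattering S and thickness d (single color band)."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    denom = a * math.sinh(b * S * d) + b * math.cosh(b * S * d)
    return math.sinh(b * S * d) / denom, b / denom

def km_composite(R1, T1, R2, T2):
    """Optically stack layer 1 on top of layer 2, accounting for light
    bouncing back and forth between the two layers."""
    denom = 1.0 - R1 * R2
    return R1 + T1 * T1 * R2 / denom, T1 * T2 / denom

# Two hypothetical glazes: a thin, weakly absorbing wash over a denser one.
R1, T1 = km_layer(K=0.2, S=1.0, d=0.4)
R2, T2 = km_layer(K=0.9, S=1.5, d=1.0)
R, T = km_composite(R1, T1, R2, T2)
```

As a sanity check, for a very thick layer R approaches the bulk reflectance a − b, which is the familiar closed form often quoted for opaque paint.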
The system is used to process input images and convert them to watercolor-style images (the image in figure 3.1(c) takes up to 7 hours to process), so this
work actually belongs to the literature on automatic rendering methods. It has
an interactive component, however, as the user can indicate the simulation’s
initial conditions. Apart from the automatic “watercolorization” of existing
images, it also supports 3D non-photorealistic rendering. We mention it here
because it has influenced the work of several authors, including ours, in the
creation of a real-time interactive paint system.
In fact, nearly all of the above work is more related to automatic rendering
than to the interactive painting experience, mostly because the computational
complexity did not allow real-time processing.
Baxter et al. are the first to present a fully interactive physically-based
paint simulation for thick oil-like painting medium [Baxter 04b]. This work
focuses on simulating a viscous, passive paint medium in real-time and how to
provide the artist with an efficient user interface that enhances the interactive
painting experience.
They present three applications with different trade-offs between speed and
“fidelity”. Their dAb application [Baxter 01], adopting a relatively simple 2D
model for older processors and laptop computers without high-end graphics
hardware, is positioned at one side of the spectrum. It mainly focuses on
complex footprint generation derived from contact with a deforming 3D brush
geometry, including the resulting bi-directional paint transfer between brush
and canvas. The canvas itself is structured in two layers: an active upper layer
for representing the canvas surface, where paint is moved by means of heuristic
algorithms, and the passive “deep” layer that keeps track of dried paint. As
figure 3.2(a) shows, dAb uses a simple color model that does not support
realistic glazing or mixing. Also, paint thickness is a result of visualization
with a bump mapping technique; it therefore has no effect on how the brush
behaves.
In contrast, their Stokes Paint incorporates a full 3D volumetric fluid
model for viscous paint [Baxter 04d]. Physically-based fluid algorithms based
on the Stokes Equation are used to simulate the paint behavior in multiple
layers of active (wet) paint. One of the drawbacks of such a complex model is
the fact that the simulation is limited to work at a relatively coarse resolution.
This means only large-scale physical behavior can be captured, while details
are lost.
A third system, IMPaSTo [Baxter 04e], is designed to be “the best of
both worlds”, using a combination of techniques of both the straightforward
dAb and the complex Stokes Paint applications. It exploits graphics hardware
for the physical simulation of paint flow as well as the rendering with an
interactive implementation of the Kubelka-Munk diffuse reflectance model.
The numerical effort is concentrated on the paint dynamics in the form of a
2D conservative advection scheme for moving paint. An image created with
IMPaSTo is depicted in figure 3.2(b).

Figure 3.2: Recent work on thick paint and Oriental ink. (a) Viscous, oil-like paint in “dAb” [Baxter 01]; (b) thick impasto paint in “IMPaSTo” [Baxter 04e]; (c) Oriental ink in “MoXi” [Chu 05].
3.2.3
Western and Oriental Painting
Although closely related in that they both use brush and water, a distinction
can be made in the goals pursued by Oriental ink and Western painting. The
former uses highly absorbent, thinner, and more textured paper types. The
pigment particles are much smaller, allowing them to move along with water
through the paper. Other apparent differences can be found in the brush types
and paint techniques. An extensive comparison of both techniques is given by
several authors [Lee 97, Lee 99, Guo 03].
According to the literature, a great many innovative results have been achieved
in research on more realistic Oriental painting models.
The structure of a canvas consists of a mesh of randomly distributed fibers.
In [Guo 91], the first software model of such a structure was proposed. Since
then a number of improvements were made, taking into account the paper’s
texture [Lee 99] and the fact that the diffusion of both water in paper and motion of solid particles in water should be treated differently, although related
[Kunii 95]. Kunii’s diffusion model is capable of handling the ink particle’s
Brownian motion2 by linking the local diffusion rate to temperature and density.
2 The term “Brownian motion” refers to the physical phenomenon that microscopic particles immersed in a fluid move around randomly.
The visual result of this technique is a dark border around the original
ink drop, dividing two distinct zones of higher and lower ink concentrations.
The recent work of Chu et al. on Oriental ink simulations [Chu 05] incorporates a fluid flow model based on the lattice Boltzmann equation to simulate
the diffusion within a substrate. It concentrates on the formation of various
ink effects such as branching patterns, boundary roughening and darkening,
and feathery patterns. They provide control over paint wetness, allowing dried
ink to flow again, or to fast-forward the drying process. An interesting option
in their system is the possibility to tinker with the pigment advection algorithm, creating magnetic-like effects where pigment can be pushed or pulled.
A “splash and spray” tool provides an alternative way of applying ink to the
canvas, by emitting a pattern of ink drops using some simple physics scheme.
Also notable in their work is the enhancement of output resolution using implicit modeling and image-based methods, combined in a technique they call
“boundary trimming”. In combination with their previous work on deformable
brush modeling [Chu 02], this system currently forms the state-of-the-art in
digital Oriental painting (figure 3.2(c)).
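The stream-and-collide structure that makes the lattice Boltzmann method attractive on parallel hardware can be sketched with a generic D2Q9 BGK update. This is the textbook scheme, not MoXi’s actual ink model; the grid size, relaxation time and periodic boundaries are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities per cell with the standard weights.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    """Equilibrium distributions for density rho and velocity (ux, uy)."""
    feq = np.empty((9,) + rho.shape)
    usq = ux * ux + uy * uy
    for i, (ex, ey) in enumerate(E):
        eu = ex * ux + ey * uy
        feq[i] = W[i] * rho * (1 + 3 * eu + 4.5 * eu * eu - 1.5 * usq)
    return feq

def lbm_step(f, tau=0.8):
    """One stream-and-collide step (BGK relaxation, periodic boundaries)."""
    # Streaming: each distribution moves one cell along its lattice velocity.
    for i, (ex, ey) in enumerate(E):
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    # Collision: relax toward local equilibrium; only local data is needed,
    # which is what makes the method map so well onto graphics hardware.
    rho = f.sum(axis=0)
    ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * E[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f

# A resting fluid with a small density bump relaxes while conserving mass.
rho0 = np.ones((16, 16))
rho0[8, 8] = 1.1
f = equilibrium(rho0, np.zeros_like(rho0), np.zeros_like(rho0))
for _ in range(20):
    f = lbm_step(f)
```

Note how every operation is either a fixed-offset shift or a purely per-cell computation, in contrast to the globally coupled Poisson solve of Navier-Stokes methods.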
3.2.4
Commercial Applications
The ability to create realistically painted images is also present in several
commercial paint systems. Most commercial programs are able to support
and mix many types of different media (and even let a user invent new ones)
because their relatively simple underlying models treat them all in the same
simplified manner.
Deep Paint, a freely available application, assigns thickness and shine attributes to paint [Deep Paint 05]. The thick paint is modeled by means of a
height field and relies on 3D dynamic lighting to get its realistic look. Project
DogWaffle has the capability to create animations [Ritchie 05]. Its strongest
features are the highly customizable brushes, which allow users to paint with
an image or even with an animation. In consequence, it is often described as
a tool designed for “unnatural painting”. Another application called ArtRage
[ArtRage 06] uses very similar techniques and provides, just like the previous
applications, a long list of different media such as watercolor, charcoal,
acrylics, crayons, pencils and chalk.
The underlying paint mechanics of Corel Painter IX support the concept
of “wet areas” and include a user-controllable diffusion process [Corel 04]. Additionally, a wide variety of brush categories is available. Because it lacks a
supporting model to actually simulate the medium, the program targets a realistic paint-like look and puts less emphasis on a lifelike painting process. Also,
mastering the application involves a steep learning curve: the complexity of
the user interface and the number of tools integrated in such applications are
overwhelming, making it virtually impossible for untrained artists to create
digital paintings with the same ease as in the real world. Complex paint effects
that occur naturally in real life need skills and knowledge of the available tools
to be digitally reproduced.
3.3
Discussion
The previous sections focused on important accomplishments realized in the
domain of painting applications. After more than thirty years it still remains
an active research domain, for both Western and Oriental painting. However,
regarding interactive real-time models, we noticed that systems that capture
the complexity, variety and richness of paint media only recently began to appear in literature, and that the commercial availability of such applications is
still very limited. Most modern systems take a physically-based approach
in order to get convincing painting behavior and, likewise, convincing results. In this chapter we already cited several prototypes proposed in the literature. In subsequent
chapters, we will frequently revisit them in order to analyze important features
and compare them to our own approaches.
The next chapter outlines our first contribution to the domain in the form
of a canvas model for watery paint.
Chapter 4

A Three-Layer Canvas Model

Contents

4.1 Introduction
4.2 Observing Watercolor Behavior
4.3 Objectives
4.4 An Overview of Fluid Dynamics in Painting Applications
4.5 The Fluid Layer: Adapting the Navier-Stokes Equation for Paint Movement
4.5.1 Updating the Velocity Vector Field
4.5.2 Updating Water Quantities
4.5.3 Evaporation of Water in the Fluid Layer
4.5.4 Updating Pigment Concentrations
4.5.5 Boundary Conditions
4.5.6 Emphasized edges
4.6 Surface Layer
4.7 Capillary Layer
4.7.1 Fiber Structure and Canvas Texture
4.7.2 Capillary Absorption, Diffusion and Evaporation
4.7.3 Expanding Strokes
4.8 Discussion
4.1
Introduction
This chapter presents a general canvas model for creating images with watery
paint, which will turn out to be the cornerstone of a complete painting environment. Subsequent chapters will highlight how this model was realized,
focusing on the implementation details of two different solutions. One solution
uses multiple processing nodes to perform a parallel simulation (chapter 5),
while another effectuates the same simulation on a single desktop computer
using recent graphics hardware (chapter 6).
The discussion in this chapter is limited to a single paint medium, namely
watercolor, but from chapter 6 onwards we will show that the model is indeed
general enough to handle other watery paint media like gouache and Oriental
ink, and that it is suitable for extension with several innovative digital painting
tools. Along the way, the canvas design is also complemented with a complex
brush model. First, some observations on paint behavior are made in order to
determine the necessary ingredients for the canvas model.
4.2
Observing Watercolor Behavior
Compared to most previous canvas models described in the literature, the tasks this
particular model has to fulfill are considerably more complex. Due to the
ambient behavior of watery paints, which run, drip and diffuse, specific rules
have to be incorporated to support this kind of conduct. Moreover, these
tasks have to be accomplished quite efficiently as we target the creation of a
real-time interactive application.
The complexity of paint media becomes clear when performing a basic
real-life paint experiment: applying a simple stroke of watercolor on a canvas.
This exposes the following typical1 trajectory of paint and its interaction with
the canvas:
First the stroke of watercolor, a mixture of pigment particles suspended in
water and a binder2 , sits in a thin layer of fluid on top of the canvas. Water
quantities and pigment particles are displaced inside the stroke as a result
of fluid velocities originating from the stroke creation, or as a result of fluid
1 Ignoring differences in canvas material, pigment properties and brush types for now.
2 Binder is the ingredient in paint which adheres the pigment particles to one another and to the ground. It creates uniform consistency, solidity and cohesion. In watercolor this is usually gum arabic.
diffusion. After a short period of time pigment particles begin to settle in
the irregularities of the canvas surface, possibly being uplifted back into the
fluid flow later on. Water disappears by evaporation or is absorbed by the
canvas material where its movement can be described in terms of diffusion.
The stroke boundaries are kept intact by surface tension but can progress
with the movement of the absorbed water below. Finally, all activity stops
when the stroke has dried and all water has evaporated. A dark outline has
formed around the stroke because water evaporated faster at the edge and
was replenished with water from inside the stroke, creating an outward flow
of pigment particles.
This observation reveals that one can roughly distinguish three different
states in the behavior of the paint:
1. Water, binder and pigment particles in a shallow layer of fluid on top of
the canvas,
2. Pigment particles deposited on the canvas surface,
3. Water and binder absorbed by the canvas material.

It provides the motivation for a model with three “active” layers, in each
of which different rules govern the movement of paint. We will refer to each
layer as being 1) the fluid layer, 2) the surface layer and 3) the capillary
layer. Together, the layers can be thought of as a discrete abstraction of a
real canvas3 represented by a stack of regular 2D grids of cells. We will refer
to these layers as being the set of active layers, as opposed to passive layers
that contain dried surface layers, which do not participate in the simulation
anymore except during the render step.
This design approach is not new. We described in a previous chapter the
work of several authors who use 2D grids of cells in combination with rules
that let cells interact with their neighbors at discrete time steps, otherwise
known as “cellular automata” [Wolfram 02].
The three-layer canvas model of Curtis et al. [Curtis 97] in particular has
had a profound effect on the architecture of our own model. Our procedure of
organizing the layers and assigning the rules is almost exactly the same, but
as theirs was intended for the automatic creation of watercolor-style images
from input images, as opposed to our goal of real-time interactive creation,
the underlying computational model needs to be very different.

3 Or more generally: a “substrate”, any material on which one can paint.

Figure 4.1: The three-layer canvas model with some basic activities.
Figure 4.1 presents a schematic view of the canvas model. In subsequent
sections we will elaborate on the details of each layer.
4.3
Objectives
In this chapter we introduce a general canvas model for the interactive simulation of watercolor in real time. It is the first model that provides a real-time
painting experience with watercolor paint, while comprising sufficient complexity to capture its complicated behavior and its interactions with the canvas. More specifically, the model allows faithful reproduction of the following
effects regarding paint-canvas interactions (figure 4.2):
Glazed washes A transparent wash of watercolor laid over a dry, previously
painted area, with the intention to adjust color or intensity of the underlying layer. This is the most distinctive property of watercolor painting.
Emphasized stroke edges In “wet-on-dry” painting, a dark outline appears
over time as the result of an outward fluid flow. This flow is caused by
faster evaporation at the static edges.
Feathery stroke edges When painting with wet paint onto a wet surface,
called “wet-in-wet” painting, or when painting on a highly absorbent
canvas, flow-like patterns appear.
Figure 4.2: Real-world strokes showing several typical watercolor effects: (a) emphasized edges; (b) feathery edges; (c) glazed washes. Copyright © 1998 Smith [Smith 98].
Stroke texture effects The appearance of a stroke is influenced by the attributes of the pigments used. Pigment with large grain accentuates the
irregularities in the canvas surface.
4.4
An Overview of Fluid Dynamics in Painting Applications
In its initial form, a stroke consists of a shallow layer of fluid that resides on
top of the canvas surface. The straightforward idea of adopting algorithms
from fluid dynamics literature to describe the fluid-like movement of paint on
canvas has already been used in the work of several authors. Traditionally,
these methods were too computationally expensive to be usable in an interactive
painting application.
Cockshott for example models an ad-hoc diffusion algorithm by letting
cells donate surplus paint to randomly chosen neighboring cells [Cockshott 91].
Given the hardware limitations at that time, this simple approach was fast and
good enough. These days, however, we can do much better as more processing
power is available, allowing us to turn our attention towards more physically-based models like the ones based on the Navier-Stokes equation.
The canvas model from Curtis et al. [Curtis 97] uses an algorithm proposed by Foster and Metaxas [Foster 96], based on a finite differencing of the
Navier-Stokes equation and an explicit ODE solver. The explicit time solver
in combination with a high diffusion factor is not a problem in their automatic
rendering application, as the simulation is performed off-line. This way, the
time step restriction is bypassed. In a real-time interactive system, however,
the time step restriction would be an issue, as it makes the simulation both
slow and unstable under certain circumstances.
More suitable for real-time simulation is the work of Stam, describing a
number of fast and stable procedures to simulate fluid flows using semi-Lagrangian
and implicit methods [Stam 99, Stam 03]. It allows a simulation to take larger
time steps and results in a faster simulation that never “blows up”.
Baxter et al. use similar approaches in their viscous paint applications
[Baxter 01, Baxter 04e, Baxter 04b]. The “Stokes Paint” application incorporates a 3D volumetric fluid model for viscous paint. The movement of paint
is described by Stokes’ formula, which drops the advection term from the
Navier-Stokes equation. This is a reasonable approximation for a fluid flow
that is heavily damped by viscous forces because the fluid’s inertia does not
have time to exert influence on the flow. This is the case for oil-like paints.
However, watercolor and other kinds of watery paint are very different from
oil-like paints, which is noticeable in the diffusive behavior, as well as the
more complex interaction of pigment and water with the canvas. The computational effort needed to simulate a 3D grid model requires a relatively coarse
resolution in order to maintain the framerate. Moreover, paint volume is not
preserved as a result of a semi-Lagrangian advection implementation.
The fluid dynamics in their IMPaSTo application aims to find a better
trade-off between simulation complexity and performance. The simulation grid
is downgraded to two dimensions, structured in layers, and a special-purpose
advection method ensures the preservation of paint volume. We proposed a
very similar conservative method to move paint [Van Laerhoven 04b], which
will be discussed later on in this chapter.
Recently, the work of Chu et al. reported on the adoption of a relatively
new computational fluid dynamics method in their MoXi application for the
creation of Oriental ink paintings [Chu 05]. The lattice Boltzmann equation
models fluid dynamics with a simplified particle kinetic model that describes
the movement of particles by a set of particle distribution functions, either
streaming or colliding. Simulation steps only require the execution of simple
local operations, giving it a performance advantage on a parallel architecture
like graphics hardware. This contrasts with the Navier-Stokes based methods,
which contain the globally-defined Poisson equation that comes from the diffusion component. As we will see later on, the Poisson equation requires an
iterative solver that could become a computational bottleneck in the simulation.
4.5
The Fluid Layer: Adapting the Navier-Stokes Equation for Paint Movement
The movement of the fluid flow through time can be described by the two-dimensional Navier-Stokes equation [Kundu 02]. In the literature we can find quite
a few numerical approaches to solve this problem, but we must keep in mind
some issues on this matter:
• The procedure of finding a solution must be both fast and stable.
• The fluid flow must be constrained by the stroke’s boundaries, mimicking
the effects of surface tension.
• We want the ability to add constraints on the level of individual cells:
for example, a cell’s water content may not violate predefined upper
and lower boundaries.
• The procedures must respect the law of mass conservation.
The Navier-Stokes equation actually describes how the state of a vector
field ~v changes through time. At any given time and position during the simulation, this state is given by equation 4.1.
∂~v/∂t = −(~v · ∇)~v + ν∇2~v + f~        (4.1)
The equation is given in many different forms in literature. The variant
in equation 4.1 is used by Stam [Stam 99], containing three components that
contribute to the movement of the flow: advection, diffusion and the sum of
external forces f~. As one may notice, it does not contain a scalar pressure field.
This is because Stam combines the original equation with the conservation of
mass constraint,
∇ · ~v = 0,        (4.2)
into a single equation, on condition that the final vector field is “projected”
onto its divergence-free part4.
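For reference, the projection implied by equation 4.2 is usually realized by solving a Poisson equation for a pressure-like scalar and subtracting its gradient (section 4.5.1 explains why this model can skip the step). A minimal sketch under illustrative assumptions: periodic boundaries, a Jacobi solver and unit grid spacing.

```python
import numpy as np

def project(vx, vy, iters=200):
    """Project a periodic velocity field onto its divergence-free part:
    solve a Poisson equation for a pressure-like scalar p with Jacobi
    iterations, then subtract the gradient of p. Forward-difference
    divergence and backward-difference gradient keep the operators
    consistent with the 5-point Laplacian below."""
    div = (np.roll(vx, -1, 0) - vx) + (np.roll(vy, -1, 1) - vy)
    p = np.zeros_like(vx)
    for _ in range(iters):  # the iterative solver referred to in the text
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
             + np.roll(p, 1, 1) + np.roll(p, -1, 1) - div) * 0.25
    vx = vx - (p - np.roll(p, 1, 0))
    vy = vy - (p - np.roll(p, 1, 1))
    return vx, vy

# A smooth test field; after projection its discrete divergence nearly vanishes.
n = 16
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
vx = np.sin(2 * np.pi * X / n)
vy = np.cos(2 * np.pi * Y / n)
vx2, vy2 = project(vx, vy)
```

The iteration count trades accuracy for speed, which is exactly why this global solve can become a bottleneck in an interactive setting.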
In order to adapt this formula for the purpose of moving paint fluid, we
first introduce the following variables for this layer:
• A 2D vector field ~v defining the fluid velocity. In discrete form we assign
to the center of each cell i, j velocity (vx , vy )i,j .
• Water quantity wi,j for each cell, measured in terms of height, and constrained within [wmin , wmax ].
• Amount of pigment p~ = {p1 , p2 , ...} for each cell, as a fraction of the cell
surface. Each pigment type is denoted by a unique index “idx”.
• Diffusion constants (kinematic viscosity) νw and νp for water and pigment respectively.
These variables map well to the observations made in section 4.2. The
vector field contains velocity values introduced at stroke creation time, most
likely by a brush sweeping on the canvas. Paint is represented as a mixture
of pigment and water, so both amounts are tracked. We omit the binder
ingredient because its tasks, adhering the pigment particles to one another
and to the ground, are divided over pigment and water attributes.
The amount of water at a cell is described in terms of height, while pigment
amounts are described in terms of percentage of covered cell-area, or equally,
the concentration of that particular pigment type in a cell. We make the
assumption that different pigment types do not interact, so mixing becomes
easy. Both amounts are constrained: pigment concentrations obviously within
the interval [0, 1], while water amount has user-defined bounds. The upper
bound makes sure water cannot “pile up” at one cell in the canvas, while the
lower bound avoids the unnatural behavior of all water being moved away
from an area of the stroke by mere fluid flow.
Finally, the diffusion constant or kinematic viscosity determines the “thickness” of the paint. For example, water is “thin”, having a low viscosity, while
oil is “thick”, having a high viscosity. The assumption is made that this value
is equal for every cell in the canvas, and is unrelated to the ratio of pigment
to water.
4 The projection operator is not shown in the equation.
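The per-cell variables above map naturally onto a stack of 2D arrays. The following sketch shows one possible representation with the per-cell constraints enforced by clamping; all names, sizes and default values are illustrative assumptions, not the dissertation’s actual data structures.

```python
import numpy as np

class FluidLayer:
    """Discrete state of the fluid layer on an n x m grid of cells."""
    def __init__(self, n, m, num_pigments, w_min=0.0, w_max=1.0,
                 nu_w=0.1, nu_p=0.05):
        self.v = np.zeros((2, n, m))             # velocity (vx, vy) per cell
        self.w = np.zeros((n, m))                # water height per cell
        self.p = np.zeros((num_pigments, n, m))  # pigment concentrations
        self.w_min, self.w_max = w_min, w_max    # user-defined water bounds
        self.nu_w, self.nu_p = nu_w, nu_p        # diffusion constants

    def clamp(self):
        """Enforce the per-cell constraints described in the text:
        water within [w_min, w_max], pigment concentrations within [0, 1]."""
        np.clip(self.w, self.w_min, self.w_max, out=self.w)
        np.clip(self.p, 0.0, 1.0, out=self.p)

layer = FluidLayer(64, 64, num_pigments=2)
layer.w += 1.5   # e.g. too much water deposited at once by a brush
layer.clamp()    # excess water is capped at w_max
```

Keeping each quantity in its own array also matches the per-field update equations 4.1, 4.3 and 4.4.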
updateVelocityField(v, v', dt)
    addNewVelocities(v, v')
    diffuseVelocities(v, dt, diff_rate)
    advectVelocities(v, dt)
    includeWaterDifferences(v, dt)

Table 4.1: Updating the velocity vector field.
Changes in velocity ~v', water w' and pigment ~p' can simply be added to the appropriate grid cells at the beginning of each time step.
Once the velocity field is updated according to equation 4.1, we can use it
to update each cell’s water quantity (equation 4.3) and pigment quantity for
each pigment (equation 4.4) [Stam 99]:
∂w/∂t = −(~v · ∇)w + νw∇²w    (4.3)

∂~p/∂t = −(~v · ∇)~p + νp∇²~p    (4.4)
In conclusion, this means a time step in the fluid layer requires the execution
of the following operations:
1. Add velocity values, water quantities and pigment concentrations.
2. Update velocity vector field ~v (equation 4.1).
3. Update scalar field of water quantities w (equation 4.3).
4. Update vector field of pigment concentrations ~p (equation 4.4).
Given these variables and equations, we can largely follow the method of solution described by Stam to update the state of a fluid flow. However, some modifications specific to our problem were made, as described in the following sections.
A Three-Layer Canvas Model
4.5.1 Updating the Velocity Vector Field
All steps in the update velocity routine are enumerated in table 4.1.
The state of a two-dimensional fluid flow at a given instant of time can be
modeled as a vector field that is sampled at the center of each cell of a 2D
grid. Alternatively, a “staggered grid” model where values are sampled at the
cell boundaries can be used [Foster 96].
Updating the velocity according to equation 4.1 amounts to resolving the two terms on the right-hand side of the equation [Stam 03], plus annotating the final vector field with differences in water quantity:
1. Self-advection −(~v · ∇)~v
2. Diffusion ν∇²~v
3. Inclusion of water differences.
Steps 1 and 2 are both performed exactly as described by Stam. We do not perform the projection step that deals with conservation of mass (equation 4.2), because this property is ensured by our advection mechanism presented in the next section. The visual impact of the projection step on flow movement is the formation of vortices that produce swirling motion. In a paint flow such vortices are practically non-existent, so the absence of the projection step has little effect on the resulting paint motion.
Self-advection This operation calculates how the values in the velocity field
are affected by the fluid’s internal forces. Stam describes an implicit method
that assigns a particle to each cell center, conveying that cell’s velocity value.
Intuitively, the particles are dropped in the velocity field and re-evaluated at
the resulting position. The implicit version looks at which particles end up at
the cell center by tracing them backwards in time starting from the cell center.
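The backtracing scheme can be illustrated with a minimal sketch (this is an illustrative Python rendition of Stam's implicit scheme, not the dissertation's actual code; the grid layout, boundary clamping and bilinear sampling are assumptions):

```python
# Semi-Lagrangian self-advection sketch: each cell traces a particle
# backwards through the velocity field and samples the old field at the
# departure point with bilinear interpolation.
def advect(field, vx, vy, dt):
    n = len(field)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # backtrace from the cell centre, clamped to the grid
            x = max(0.0, min(n - 1.0, i - dt * vx[i][j]))
            y = max(0.0, min(n - 1.0, j - dt * vy[i][j]))
            i0, j0 = int(x), int(y)
            i1, j1 = min(i0 + 1, n - 1), min(j0 + 1, n - 1)
            s, t = x - i0, y - j0
            out[i][j] = ((1 - s) * ((1 - t) * field[i0][j0] + t * field[i0][j1])
                         + s * ((1 - t) * field[i1][j0] + t * field[i1][j1]))
    return out
```

With a uniform velocity of one cell per time step, a value placed at one cell reappears one cell further along the flow after a single step.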
Diffusion This mechanism, caused by the second term at the right hand
side of equation 4.1, accounts for spreading of velocity values at a certain rate.
The problem takes the form of a Poisson equation, so any iterative method of
solution can be used, like the simple Jacobi method or its Gauss-Seidel variant
[Golub 89].
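A minimal Jacobi iteration for this step might look as follows (a sketch, assuming the implicit formulation (I − ν∆t∇²)x = field of Stam's stable solver; the Gauss-Seidel variant would update in place instead):

```python
# Jacobi iteration for the implicit diffusion step. Boundary cells reuse
# their own value for missing neighbours (a simple Neumann-like condition).
def diffuse(field, nu, dt, iters=20):
    n = len(field)
    a = nu * dt
    x = [row[:] for row in field]
    for _ in range(iters):
        x_new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                up    = x[i - 1][j] if i > 0 else x[i][j]
                down  = x[i + 1][j] if i < n - 1 else x[i][j]
                left  = x[i][j - 1] if j > 0 else x[i][j]
                right = x[i][j + 1] if j < n - 1 else x[i][j]
                x_new[i][j] = (field[i][j] + a * (up + down + left + right)) / (1 + 4 * a)
        x = x_new
    return x
```

A constant field is a fixed point of the iteration, while an isolated peak is smoothed towards its neighbours, which is the qualitative behaviour the diffusion term should produce.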
Include water differences Each cell tracks water quantities in terms of
height, so the water volume is modeled as a 2.5D fluid body. To account for
gravity, additional motion is added to the velocity field that tries to cancel
out differences in water quantities between neighboring cells. Therefore, we
calculate the velocities that are necessary to obtain equal water heights in all
cells. The resulting vector field ~vh is combined with the velocity field ~v we
already calculated:
~vnew = ωi~v + ωh~vh,    (4.5)
where ωi and ωh are weight factors and ωi + ωh = 1. In our results we use
ωh = 0.06.
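As an illustration, the blend of equation 4.5 can be sketched as follows (the discretisation of ~vh as a simple height difference between horizontal neighbours is a hypothetical choice for this sketch; only the weights ωi = 0.94 and ωh = 0.06 come from the text):

```python
# Blend the existing velocity field with a height-equalising component
# (equation 4.5). Here vh is taken proportional to the water difference
# towards the next row of cells -- an illustrative discretisation.
def include_water_differences(vx, w, w_i=0.94, w_h=0.06):
    n = len(w)
    out = [row[:] for row in vx]
    for i in range(n - 1):
        for j in range(n):
            vh = w[i][j] - w[i + 1][j]   # flow from higher towards lower water
            out[i][j] = w_i * vx[i][j] + w_h * vh
    return out
```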
The result of this approach is that the movement of pigment, which depends on the velocity field, will also be affected by differences in water quantities. If water is evaporated at a higher rate in the vicinity of the stroke edge,
this mechanism generates a flow towards the edge and creates the effect of
emphasized stroke edges (section 4.5.6).
4.5.2 Updating Water Quantities
The vector field with newly calculated velocities from the previous section can
now be used to move the contents of the two scalar fields of paint material:
one with water quantities and another with pigment concentrations.
Optionally, additional water quantities w' from an external source (most often a brush) are first added to the scalar field. Next, the two terms on the right-hand side of the equation are solved:
• Diffusion νw ∇2 w
• Advection −(~v · ∇)w
These equations are scalar counterparts of the equations in the previous section, so in practice the same methods could be used. This is the case for the diffusion step, but for advecting the water we develop our own algorithm, because we want to preserve the overall volume of water and constrain a cell's water content between its upper and lower bounds.
Water advection For each cell, we measure the volume of water that is
exchanged with all neighboring cells. As an example, we will calculate the
amount of water that flows from the center cell to its right neighbor, with
velocities ~vcenter and ~vright respectively, as shown in Figure 4.3(a).

updateWaterQuantities(w, w', dt)
    addWater(w, w')
    diffuseWater(w, dt, diff_rate)
    advectWater(w, dt)

Table 4.2: Updating water quantities.

Figure 4.3: Moving water to a right neighboring cell. The dark areas represent the volume of displaced water.

We assume
that the velocity is equal in all points along the border of both cells, and is
given by the average of the two cell velocities:
~vaverage = (~vcenter + ~vright) / 2    (4.6)
Any point along this border has velocity ~vaverage and travels within a given
time step ∆t a distance ∆x = (~vaverage )x ∆t in horizontal direction. Therefore,
the area covered by the volume of displaced water is given by the colored area
in Figure 4.3(a), and equals ∆x × cellheight.
Finally, from the amount of water in the cell, the total volume of displaced
water, ∆V = wi,j ∆x × cellheight, can be determined. The change in water
quantity is given by equation 4.7, and is also shown in Figure 4.3(b).
∆w = ∆V / (cellwidth × cellheight)    (4.7)
The same procedure is followed to calculate the water quantities exchanged with the three remaining neighbors. We add up the results and divide the sum by four, because each neighbor contributes exactly one quarter to the total flux (except at borders). Constraining the cell's upper and lower water quantities can be done by simply clamping each individual exchanged volume of water. This procedure also ensures conservation of mass.
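A one-dimensional sketch of this volume-exchange advection (equations 4.6-4.7, taking cellwidth = cellheight = 1) shows why conservation and clamping come for free; the upwind choice of donor cell is an assumption of this sketch:

```python
# Water moved across each boundary is computed once per neighbour pair and
# applied antisymmetrically, so total volume is conserved exactly;
# clamping keeps every cell inside [w_min, w_max].
def advect_water(w, v, dt, w_min=0.0, w_max=1.0):
    n = len(w)
    out = w[:]
    for i in range(n - 1):
        v_avg = 0.5 * (v[i] + v[i + 1])                   # boundary velocity (eq. 4.6)
        dw = w[i] * v_avg * dt if v_avg > 0 else w[i + 1] * v_avg * dt
        # clamp the exchange so neither cell leaves [w_min, w_max]
        dw = min(dw, out[i] - w_min, w_max - out[i + 1])
        dw = max(dw, -(out[i + 1] - w_min), -(w_max - out[i]))
        out[i] -= dw
        out[i + 1] += dw
    return out
```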
4.5.3 Evaporation of Water in the Fluid Layer
Water evaporation is modeled by removing at each time step a volume of water

∆Vtop = εshallow ∆t × (cellwidth × cellheight),    (4.8)

according to the cell's water surface and the evaporation rate εshallow.
Evaporation also occurs at the sides of cells that have neighbors without
water. This way we incorporate the fact that water evaporates faster at the
edges of a stroke, which forms an important component of the “emphasized
edges” effect discussed in section 4.5.6.
As an example, the cell (i, j) in figure 4.4 has a dry right neighbor, so in this case an additional volume of water

∆Vright,i,j = wi,j εshallow ∆t × cellheight    (4.9)

disappears.
From equations 4.8 and 4.9 we can determine the evaporated water quantities (∆wtop)i,j and (∆wright)i,j using equation 4.7.
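Combining equations 4.8 and 4.9 with unit cell dimensions gives a short sketch of one evaporation step (the function and grid layout are illustrative, not the dissertation's code):

```python
# Every wet cell loses eps_shallow*dt from its top surface (eq. 4.8),
# plus w*eps_shallow*dt for each dry neighbouring side (eq. 4.9).
def evaporate(w, eps_shallow, dt):
    n = len(w)
    out = [row[:] for row in w]
    for i in range(n):
        for j in range(n):
            if w[i][j] <= 0.0:
                continue
            loss = eps_shallow * dt                      # top surface
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n and w[ni][nj] <= 0.0:
                    loss += w[i][j] * eps_shallow * dt   # dry side
            out[i][j] = max(0.0, out[i][j] - loss)
    return out
```

An isolated wet cell, which has four dry neighbours, thus evaporates faster than a cell in the interior of a stroke, giving the edge-biased drying the text describes.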
4.5.4 Updating Pigment Concentrations
The velocity field also governs the movement of pigment concentrations. In this last stage the changes in pigment concentrations are calculated.
Table 4.3 gives the three basic steps taken when moving pigment concentrations in the fluid layer according to a given velocity field. It shows that this procedure is similar to moving water quantities: adding a vector field of new pigment quantities ~p' = {p'1, p'2, ...} and solving the two terms in equation 4.4:
• Diffusion νp∇²~p
• Advection −(~v · ∇)~p

Figure 4.4: Evaporation in the fluid layer on the edge of a stroke. The left figure shows the top view of a cell (i, j), which has no right neighbor. The volume of evaporated water (in grey) is shown in the right figure.
The pigment source term ~p' conveys amounts of pigment that are added by a brush. The pigment diffusion step is performed in exactly the same way as the water diffusion step, using νp as the pigment diffusion rate.
Pigment advection The advection, or movement, of pigment caused by the
velocity vector field relies on the algorithm for moving water. The outgoing
fraction pidx of pigment for a similar situation to the one depicted in Figure
4.3(a) is:
∆pidx = (pidx ∆x × cellheight) / (cellwidth × cellheight)    (4.10)

updatePigmentQuantities(p, p', dt)
    addPigment(p, p')
    diffusePigment(p, dt, diff_rate)
    advectPigment(p, dt)

Table 4.3: Updating pigment quantities.
4.5.5 Boundary Conditions
One issue we did not consider so far is the fact that the movement of paint
must respect the boundaries of a stroke. In our case, a boundary is defined
as the interface between the paint and the atmosphere. The boundaries can
move, however, as the stroke expands through capillary activity, as we will
discuss later.
At any given time step, no pigment or water is allowed to travel across
these boundaries. Fortunately, our water and pigment advection algorithms
implicitly guarantee this condition. Both algorithms define at each cell the
movement of substance to neighboring cells. If we know which cells belong to
the stroke, we can simply check if a neighboring cell lies within the stroke and
is allowed to receive material. Another consequence of a boundary is that it
influences the velocity vector field, making fluid flow along it. This is done by
setting the normal component of the velocity vector at boundary cells to zero
[Foster 96, Stam 03].
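The boundary condition can be sketched as follows (an illustrative Python sketch; the stroke mask, field names and in-place update are assumptions, and the "normal" direction is taken per axis from which neighbour lies outside the stroke):

```python
# Free-slip boundary: at cells on the stroke boundary the velocity
# component normal to the interface is set to zero, so flow continues
# along the boundary but not across it.
def enforce_boundaries(vx, vy, in_stroke):
    n = len(in_stroke)
    for i in range(n):
        for j in range(n):
            if not in_stroke[i][j]:
                continue
            # a neighbour outside the stroke makes that axis a boundary normal
            if (i + 1 < n and not in_stroke[i + 1][j]) or (i > 0 and not in_stroke[i - 1][j]):
                vx[i][j] = 0.0    # zero the x (normal) component
            if (j + 1 < n and not in_stroke[i][j + 1]) or (j > 0 and not in_stroke[i][j - 1]):
                vy[i][j] = 0.0    # zero the y (normal) component
    return vx, vy
```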
4.5.6 Emphasized Edges
The phenomenon of edge darkening is visible in dried coffee stains, where
the majority of the residue is concentrated along the perimeter of the stain,
forming a ring-like structure. In general, this phenomenon occurs in a drop of
liquid when two conditions are satisfied [Deegan 00]: contact line pinning and
evaporation at the edge of the drop.
In our case, the contact line is the edge of the stroke, which is being pinned
to the canvas as long as the stroke boundaries remain. The other condition
is simulated by the combination of two mechanisms we already discussed in
previous sections:
• Faster evaporation of water at the stroke edges,
• Inclusion of differences in water quantities within the velocity vector
field.
The vector field also directs the movement of pigment, so pigment will
build up at the edge of the stroke. Figure 4.2(a) shows an example of brush
strokes with dark edges.
The model of Curtis et al. simulates the mechanism of edge-darkening
by decreasing the water pressure near the edges of the strokes [Curtis 97].
An extra simulation step performs a Gaussian blur on a mask containing
wet/dry values, which are then used to update the scalar water pressure field.
While the results of this procedure are very similar, our method does not
require a separate step in the simulation but is the combined result of several
mechanisms.
4.6 Surface Layer
The last section discussed the activity of water and pigment at the fluid layer.
Pigment is initially dropped in the fluid layer, but eventually ends up being
deposited on the surface of the paper canvas. In the meantime, there is a
continuous transfer of pigment between the fluid layer and the surface layer.
The surface layer keeps track of the deposited amounts of pigment ~q = {q1, q2, ...} for each cell (i, j). Pigment is dropped down on, and lifted up from, the canvas according to equations 4.11 and 4.12 respectively:
↓idx = ∆t (pidx (1.0 − wfluid)(1.0 − h^γidx) δidx)    (4.11)

↑idx = ∆t (qidx wfluid (1.0 − (1.0 − h)^γidx) / βidx)    (4.12)
Both equations depend on the amount of water wfluid in the fluid layer as
a fraction of the maximum water allowed, the paper height fraction h at that
cell, and several properties of pigment idx represented by the following factors:
Pigment staining 0 ≤ βidx . The pigment’s resistance to being picked back
up by the fluid layer.
Pigment clumping 0 ≤ γidx ≤ 1. Most pigment particles tend to clump
into larger chunks, affecting the coarseness of the stroke’s texture.
Pigment weight 0 ≤ δidx ≤ 1. The mass affects how long pigment particles
stay in suspension in the fluid layer.
Formulas 4.11 and 4.12 were empirically constructed taking into account
the following observations:
• Pigment with a high staining factor resists being picked back up by the
fluid layer, so it scales the amount of pigment that is being uplifted.
• Pigment with high clumping factor will form chunks in the cavities of the
canvas, which are indicated by cells with a low height value. Therefore
the clumping factor relates cell height to the amount of pigment that
settles on the canvas.
• Heavy pigment will settle more quickly. So a high weight factor increases
the amount of settling.
• While drying, more pigment particles should settle on the canvas surface. Consequently, the water quantity in a cell influences the pigment
transfer.
A similar transfer algorithm was used by [Curtis 97], but ignored the effect
of water quantity.
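A direct transcription of equations 4.11-4.12 for a single pigment might look as follows (a sketch, under the reading that the staining factor βidx divides the lifted amount; β > 0 is assumed here, and all other inputs are fractions in [0, 1]):

```python
# Per-pigment transfer between fluid layer (concentration p) and surface
# layer (deposit q). down follows eq. 4.11, up follows eq. 4.12; pigment
# is only moved between the two layers, so p + q is conserved.
def pigment_transfer(p, q, w_fluid, h, beta, gamma, delta, dt):
    down = dt * p * (1.0 - w_fluid) * (1.0 - h ** gamma) * delta   # eq. 4.11
    up = dt * q * w_fluid * (1.0 - (1.0 - h) ** gamma) / beta      # eq. 4.12
    return p - down + up, q + down - up
```

Note how the observations above are reflected: a dry cell (w_fluid = 0) lifts nothing and settles maximally, a cavity (low h) settles more, and a heavy pigment (high δ) settles faster.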
4.7 Capillary Layer
The capillary layer represents the inner paper structure. Within the simulation
it is responsible for the movement of absorbed water, allowing a stroke to
spread across its original boundaries.
At this point in the simulation, water movement is governed by microscopic capillary effects that can be described in terms of diffusion. The paper
structure is represented as a two-dimensional grid of cells or “tanks” that exchange amounts of water. If each cell has a different capacity, water will spread
irregularly through the paper, which is the behavior we want to model. The
first thing we have to do is to assign capacities to each cell by generating a
paper-like texture.
4.7.1 Fiber Structure and Canvas Texture
The structure of the canvas affects the way fluid is absorbed and diffused in
the capillary layer. The canvas texture should also influence the way pigment is transported and deposited. Canvas typically consists of an irregular
porous fiber mesh, with the spaces between fibers acting as capillary tubes to
transport water. The main target in constructing a digital equivalent of such
a canvas is the creation of some irregularity in the paper surface and in the
capillary diffusion process.
Literature gives us a number of methods to generate a wide range of paper structures. The Oriental models for painting with black ink pay much
attention to this aspect, as the quality of their results highly depends on the
diffusion process [Lee 99, Zhang 99, Lee 01, Yu 03, Guo 03]. The generally acknowledged procedure here is to create a homogeneous random fiber network,
which then is used as a base for the calculation of inter-cell diffusion rates.
Figure 4.5: Enlarged parts of computer-generated paper textures. The texture
is used as a height field, representing the surface of the paper, and to calculate
the capillary capacity at each cell.
The type of paper used in watercolor differs from the former in that it is less
absorbent and less textured.
One method would be to extract structural information from a digitized paper sample [Sousa 00], but it provides little control over the generation process. Instead, we use the algorithm described in
[Worley 96] to produce a textured surface. The same strategy was also used by
Curtis et al. [Curtis 97]. The algorithm creates a procedural texture by means
of a basis function based on distance calculations to randomly placed feature
points. Translating the gray scale texture values to a height field 0 ≤ h ≤ 1
provides us with a paper texture. Similarly, a cell's capillary capacity can be calculated from the height field as c = cmin + h · (cmax − cmin), where cmin and cmax denote the minimum and maximum capillary water capacities respectively.
Figure 4.5 shows enlarged parts of several textures we generated with this
procedure.
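A minimal Worley-style texture can be sketched as follows (an illustrative reduction of [Worley 96]: the value at each cell is the distance to the nearest of a set of random feature points, normalised to a height field 0 ≤ h ≤ 1; the point count and normalisation are assumptions of this sketch):

```python
import random

# Worley-style basis: per-cell distance to the nearest random feature
# point, normalised to [0, 1] so it can serve as a height field h.
def paper_texture(n, n_points=8, seed=1):
    rng = random.Random(seed)
    pts = [(rng.uniform(0, n), rng.uniform(0, n)) for _ in range(n_points)]
    d = [[min(((i - x) ** 2 + (j - y) ** 2) ** 0.5 for x, y in pts)
          for j in range(n)] for i in range(n)]
    d_max = max(max(row) for row in d) or 1.0
    return [[v / d_max for v in row] for row in d]

# Capillary capacity from the height field, as in the text.
def capacity(h, c_min, c_max):
    return c_min + h * (c_max - c_min)
```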
4.7.2 Capillary Absorption, Diffusion and Evaporation
The water of a brush stroke will gradually be absorbed by the paper canvas.
At each time step an amount of water
∆w = α∆t,    (4.13)
is transferred from the fluid layer to the capillary layer, determined by the rate
of absorption α. This amount is clamped according to the amount of water
left in the fluid layer, and the available capillary space.
Water in the capillary layer moves to neighboring cells through a diffusion process like the one we described in the context of the fluid layer, and disappears by evaporation at a rate εcapillary.
The evaporation process removes at each time step an amount of water

∆w = εcapillary ∆t    (4.14)
from every cell in the capillary layer that has no water left in the fluid layer
above.
4.7.3 Expanding Strokes
Until now, a stroke applied to the canvas surface retained its original shape
as defined by the brush. However, absorbed water that diffuses through the
canvas structure has the ability to undermine the stroke boundaries at the
surface layer. In particular when painting using the “wet-in-wet” technique,
where a wash is laid on an area of the canvas that is still wet or damp from a
previous wash, the stroke edges are not well-defined and continue to spread.
In this case, there is no contact line to maintain.
To decide which cells belong to a stroke and need to be processed, a distinction is made between four cell states:
inactive A cell is inactive if it contains no water at both the fluid layer and
the capillary layer.
dry A cell is dry if it contains no water at the fluid layer and the amount
of water at the capillary layer does not exceed a predefined threshold.
(wfluid = 0 and wcap ≤ moist threshold).
moist A cell is moist if its amount of capillary water is greater than a predefined threshold (wcap > moist threshold).
wet A cell is wet if it contains water at the fluid layer, or if it is moist (wfluid >
0 or wcap > moist threshold).
At the fluid layer and the surface layer, only wet cells participate in the
algorithms we discussed in relation to these layers, and therefore are part of
the stroke. As soon as the capillary diffusion process, which processes all
the cells, transfers enough water to a previously dry cell, this particular cell
becomes part of the stroke. Using this technique, stroke expanding is possible
at a user-controllable rate.
4.8 Discussion
This chapter discussed a canvas model that targets watery paint media. Observations of real watercolor experiments provided the foundation of the model's three-layer design, in which each layer contains different rules that govern paint movement. A mix of physically-based and empirically-based algorithms is adopted for this purpose.
As stated in the beginning of the chapter, the objectives of the model
are real-time performance together with the capability of reproducing typical
watercolor effects. However, the model still lacks a concrete implementation
that validates its capabilities. The next two chapters will discuss two different
approaches in realizing such an implementation.
Other watery paint media that are related to watercolor are introduced in
chapter 6, together with the necessary modifications that enable the canvas
model to simulate them.
Chapter 5

A Distributed Canvas Model for Watercolor Painting
Contents

5.1 Introduction
5.2 Distributed Model
  5.2.1 Distributing Data
  5.2.2 Processing a Subcanvas
  5.2.3 Gathering Results
5.3 Brush Model
  5.3.1 Distributing Brush Actions
5.4 Implementation
5.5 Results
5.6 Discussion

5.1 Introduction
In this chapter we describe a first software implementation of the theoretical model for simulating watery paint outlined in the previous chapter. This implementation is intentionally kept simple, with limited functionality reflected in a basic brush model, support for watercolor only, and a straightforward color mixing scheme. The implementation serves as a proof-of-concept that demonstrates the feasibility of the canvas model, and can be considered a milestone on the way to the fully functional prototype introduced in the next chapter.
Figure 5.1: High-performance cluster (HPC) setup for processing a canvas that
is divided into four pieces. Each subcanvas is assigned to a separate node.
As described before, the canvas model consists of a simulation grid that is
worked upon by a set of algorithms. We assume a single simulation cell maps to
a single pixel in the resulting image, so the grid must be large enough in order
to obtain paintings with acceptable resolution. Post-processing techniques like image filtering can increase the output resolution, but without a sufficiently fine simulation grid, many details in paint behavior are lost.
Unfortunately, the nature of the algorithms described in the context of the canvas model makes it hard to maintain real-time rates if the canvas, and therefore the simulation grid resolution, is too large. For this reason we have chosen
to distribute the system by breaking up the whole grid in smaller subgrids
that are simulated separately at remote processing elements. This approach is
valid if the system is constructed as a set of stacked cellular automata, which
means that the processing of a single cell only requires knowledge of its four
immediate neighbors. A consequence is that the procedure is valid for the
distributed processing of cellular automata in general.
5.2 Distributed Model
The hardware setup, depicted in figure 5.1, consists of a high-performance
cluster (HPC) with up to six nodes connected through a high-speed gigabit
network. The configuration in this figure shows the specific case for a canvas
that is divided into four pieces, but the same approach can be taken to divide the canvas into even more pieces if more processing nodes are available.

Figure 5.2: A canvas divided in subgrids, each representing a subcanvas. Every subgrid has an extra border of cells, overlapping neighboring subgrids.
In this section we give a description of our approach, along with some
issues that had to be resolved.
5.2.1 Distributing Data
The grid that represents the whole canvas, and actually consists of the three
layers we described in the previous chapter, is divided into several subgrids or
subcanvasses as shown in figure 5.2.
Each subgrid needs an extra row of cells at its border, overlapping the
neighboring subgrids. This is necessary because the evaluation of cells in the
border needs information from the neighboring cells. A subgrid can be looked
upon as an isolated piece of canvas that can be processed separately, except
for the border cells, which we will deal with later.
Next, every subcanvas is sent to a simulation process that runs on a single
cluster node. In addition, a subcanvas receives palette information, initial cell
contents (in case a previously saved painting was loaded), and current brush
attributes. This concludes the initial setup phase. The basic activities in the
simulation loop are depicted in figure 5.3.
5.2.2 Processing a Subcanvas
The remote simulation of a subcanvas does not differ from that of a local
canvas, and consists of the activities we discussed in chapter 4. The main difference lies in the treatment of the border cells, whose contents are not calculated by the subcanvas itself but received from a neighboring subcanvas. At the beginning of each time step, a subcanvas exchanges border cells with its immediate neighbors. An example is shown in figure 5.4, which depicts the transfer between subcanvas (i, j) and its top neighbor (i, j + 1).

Figure 5.3: Distributed activity scheme showing one simulation step. Two separate threads are used on each processing node: a brush thread for passing brush activity to subcanvasses, and a simulation thread for performing the simulation algorithms.
At this point the algorithms defined at the fluid layer, surface layer and
capillary layer have sufficient information to process the whole grid.
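The border-cell exchange of figure 5.4 can be sketched serially (the dissertation's implementation does this with LAM/MPI messages between nodes; here the subgrids are plain lists in one process, and the one-cell halo layout is an assumption):

```python
# left and right are 2-D subgrids split along the x axis. Each carries a
# one-column halo on the shared edge: the last column of `left` and the
# first column of `right`. Before a time step, every halo column is
# overwritten with the neighbouring subgrid's adjacent interior column.
def exchange_borders(left, right):
    for j in range(len(left)):
        left[j][-1] = right[j][1]    # left's halo <- right's first interior column
        right[j][0] = left[j][-2]    # right's halo <- left's last interior column
```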
5.2.3 Gathering Results
Not only the simulation but also the rendering step is performed remotely. The results are sent back to the parent application, which collects the rendered results and places them in a single texture, ready for display.
An image is rendered remotely, in software, by determining the color of
each cell in the surface layer. Because each pigment is described in terms of
its red, green and blue components, and the concentration of pigment in a
cell is expressed as a fraction of the cell surface, the color of the cell can be
calculated by simply summing the fractions of each color component together and dividing the result by the total pigment amount.

Figure 5.4: Exchanging border cells between subcanvas (i, j) and its neighbor (i, j + 1) at each time step.

Such a linear weighted additive color mixing scheme gives predictable results but is not entirely faithful
to reality. Referring to this issue, Baxter noted that when using this procedure, a mix of saturated yellow (255, 255, 0) and saturated cyan (0, 255, 255)
gives a desaturated green color mix (128, 255, 128) instead of saturated green
as one would expect [Baxter 04b]. In a subsequent chapter we will adopt a
more truthful mixing scheme.
During the painting process, the pigment from the fluid layer is rendered
in the same way and blended with the color from the surface layer.
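The linear additive scheme described above can be written in a few lines (an illustrative sketch, with integer rounding as an assumption), and it reproduces Baxter's observation for yellow plus cyan:

```python
# Linear weighted additive mixing: per-channel average of pigment colours
# weighted by their concentrations.
def mix(colors, concentrations):
    total = sum(concentrations)
    return tuple(round(sum(c[k] * f for c, f in zip(colors, concentrations)) / total)
                 for k in range(3))

# Equal parts saturated yellow and saturated cyan give desaturated green:
# mix([(255, 255, 0), (0, 255, 255)], [0.5, 0.5]) -> (128, 255, 128)
```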
5.3 Brush Model
The brush plays a minor role at this point as we are mainly interested in
simulating canvas behavior, so it was intentionally kept simple. Touching the
canvas with a brush deposits an amount of water and pigment, with a certain velocity, into the fluid layer at each cell below the current brush position.
The actual shape of the brush’s imprint is determined by a map of cell
positions indicating which cells below the brush tip are affected. During brush
movement this pattern is applied repeatedly at interpolated positions between
sampled input positions. Interpolation is necessary because the position of a
fast moving brush, represented by mouse movement for example, is sampled
at positions that lie further apart than when the brush is moving slowly. The velocity assigned to a cell depends on how fast the user is moving the input device.

Figure 5.5: Overlapping brush patterns during brush movement: we keep track of cells overlapping with the previous pattern to avoid multiple applications of the brush pattern at a cell.
An issue that must be dealt with is the overlapping of patterns applied at
previous sampled points. If we just reapply the whole pattern at each new
position, cells will fill quickly and velocities will grow much too large. Instead,
we keep a copy of the previous pattern and make sure it is disjoint from the new one by masking overlapping cells. The masked cells are discarded when
applying water, pigment and velocities. Figure 5.5 shows the procedure for
two subsequent footprints of a brush with a square pattern.
Throughout this chapter, a simple circular pattern was used. Paint transfer
occurs in only one direction, and the brush’s paint reservoir is filled with an
unlimited supply of paint.
5.3.1 Distributing Brush Actions
Applying a brush to the canvas produces a stroke that consists of a set of
interpolated positions. At each of these positions water, pigment and velocity
values are assigned to a collection of cells that belong to the brush pattern. In
the distributed setup, these patterns have to be applied at a remote subcanvas.
All position data from the stroke is sent to the remote subcanvas to which
that particular canvas position belongs. Every subcanvas uses a copy of the
original brush to apply the pattern itself. This way, we limit the amount of
data that has to be sent. Traffic can be restricted even more by using a buffer
of stroke positions. Every time the buffer is full or when the stroke ends, the
buffer is flushed and a set of position data is sent to the appropriate subcanvas.
One difficulty that remains involves brush actions at subcanvas boundaries. A position belongs to just one subcanvas, but when the brush pattern
is applied, the cells of several subcanvasses could be affected. This is handled
by sending the position under consideration to each subcanvas that will be
affected by the application of the whole brush pattern.
5.4 Implementation
LAM/MPI [Squyres 03], an implementation of the Message Passing Interface
(MPI), is used as a software platform for our implementation. This C++ API
permits a collection of Unix computers hooked together by a network to be
used as a single large parallel computer.
All algorithms were implemented in software on a Linux platform. Several straightforward optimizations were made to speed up the calculations:
• A list of “active cells” is maintained, the complement of the set of “inactive cells” (section 4.7.3). It includes every cell that has water left at
the fluid layer or capillary layer, and therefore has to be processed. Only
these cells participate in the simulation algorithms.
• A list of “display cells” is maintained, including every cell that contains
an amount of pigment. Only these cells participate in the render step.
• Subcanvas borders only exchange “active cells”.
5.5 Results
The examples shown in this section were all painted on a grid that maps a
cell to a single pixel, without any post-processing. The painting process is
done in real-time at an initial speed comparable to the frame rate of video.
After drawing several strokes, this rate drops but stays within acceptable bounds.
Figure 5.9 shows the frame rate while drawing on a canvas of 256 × 256
cells that was simulated locally, in comparison to a canvas that was divided
Figure 5.6: A few examples of computer-generated strokes created with the distributed canvas model. (a) A computer-generated stroke showing the emphasized edge effect. (b) Two overlapping strokes; "pigment clumping" makes the canvas' structure visible in the dark overlapping area. (c) Three overlapping strokes.
Figure 5.7: An image created on a grid with dimension 400 × 400. Each cell
covers exactly one pixel.
in four and six subgrids. The tests were all performed on a cluster of Intel(R) Xeon(TM) 2.40GHz computers using scripted brush actions to make
the results comparable.
A brush position is defined as the application of a single footprint on the canvas. The graph shows that the frame rate stays within acceptable bounds on a distributed canvas, even after a considerable number of brush positions.

Figure 5.8: An image created on a grid with dimension 400 × 400. Each cell covers exactly one pixel.

Figure 5.9: The frame rate while drawing on a local canvas (red) and a canvas that was divided in 4 (green) and 6 (blue) pieces, and simulated remotely.
Figure 5.6 displays several isolated strokes. The stroke depicted in figure 5.6(a) shows the “emphasized edge” or “dark edge” effect explained in section 4.5.6. Figures 5.6(b) and 5.6(c) show overlapping strokes with different pigment types. The canvas’ structure is visible at the dark overlapping areas because “pigment clumping” (section 4.6) collects more pigment at cavities: cells with a lower canvas height value than their neighbors.
Figure 5.7 and figure 5.8 both show images created with the system, using a 400 × 400 sized grid that is distributed to six processing nodes. Circular brushes with varying sizes are used, as well as varying amounts of pigment and water. The palette consists of eight different pigment types, which can be selected one at a time by the brush. At this point, pigment mixing is only possible on the canvas, after applying the paint.
5.6 Discussion
We described the implementation of a real-time interactive painting system that uses the canvas model presented in the previous chapter. The computational load of the simulation is distributed over an HPC with processing nodes that are connected through a high-speed network.
Although the implementation has several limitations, it has proved useful
as a proof-of-concept.
Apart from the impractical hardware setup, the approach leaves little computational room for a complex brush model. Such a brush is indispensable in
creating more convincingly shaped strokes than the current circular imprints.
In practice, however, breaking up the canvas into smaller subcanvases does not guarantee adequate load balancing among the processing nodes. Except for broad washes that cover the whole surface, brush action within some period of time tends to be concentrated on a specific area of the canvas. In the worst case this particular area belongs to just one subcanvas on a node, which then performs the whole simulation while other nodes remain idle. Alternative parallel computing schemes that are based on, for example, dynamically breaking up and distributing the set of active cells, or computing each layer on a different node, would suffer from the overhead of synchronizing cell content. The next chapter presents a different implementation approach using graphics hardware, which in fact also performs a parallel simulation.
Chapter 6
WaterVerve: A Real-time Simulation Environment for Creating Images with Watery Paint
Contents

6.1 Introduction
6.2 Graphics Hardware Implementation
6.2.1 Simulating Paint
6.2.2 Optimization
6.2.3 Rendering the Canvas
6.3 Watery Paint Media
6.3.1 Watercolor
6.3.2 Gouache
6.3.3 Oriental Ink
6.4 User Interface
6.5 Discussion

6.1 Introduction
A second implementation of the canvas model is introduced in this chapter.
We build on our previous experience with a parallel implementation (chapter
5) and take a new approach using recent programmable graphics hardware.
WaterVerve¹ is the first model that provides real-time painting experience with watercolor paint on a single workstation, while comprising sufficient complexity to capture its complicated behavior, its interactions with the canvas, as well as its chromatic properties. It features the Kubelka-Munk diffuse reflectance model as a color mixing scheme, and the capability to produce paintings with thin paint media related to watercolor, like gouache and Oriental black ink.

¹ The name “WaterVerve” refers to the Dutch language in which “verven” means “painting”. Moreover, “WaterVerve” was a dance hit by DJ Mark van Dale in the late nineties.
6.2 Graphics Hardware Implementation
Recent work in computer graphics makes extensive use of programmable graphics hardware (the GPU) for more diverse applications than just the processing of images. The following quote colorfully illustrates the impact of recent advances in graphics hardware.
“Imagine that the appendix in primates had evolved. Instead of becoming a prehensile organ routinely removed from humans when it
became troublesome, imagine the appendix had grown a dense cluster of complex neurons to become the seat of rational thought, leaving the brain to handle housekeeping and control functions. That
scenario would not be far from what has happened to graphics processing units.”
From “The Graphics Chip as Supercomputer”, EE Times, December 13, 2004.
The work of Harris et al. adopts graphics hardware for the implementation of a physically-based cloud simulation with cellular automata [Harris 02]. Their work also provides a motivation for the use of graphics hardware in physical simulations in general.
simulations in general. Graphics hardware is optimized to process vertices and
fragments at high speed. It is an efficient processor of images, taking texture
image data as input, processing it, and rendering another image as output.
Most physically-based simulations represent the state of the simulation as a
grid of values. Images, which are in essence arrays of values, map well to the
values on such a grid. Harris concludes that this natural correspondence, the
programmability and the performance make the GPU an excellent candidate
for processing the simulation data and reducing the computational load on the
main CPU [Harris 03].
With these arguments we can also motivate a GPU implementation of the proposed canvas model. Two other such implementations appeared in the literature at about the same time as our model: “IMPaSTo” from Baxter et al. [Baxter 04e], with hardware-accelerated implementations of both the Stokes algorithm for moving viscous paint and the Kubelka-Munk rendering algorithm, and “Moxi” from Chu et al. [Chu 05], which uses the GPU for processing the lattice Boltzmann equations.
6.2.1 Simulating Paint
RGBA texture objects with floating-point precision were created for each of
the data sets in table ??. The velocity vector field and scalar fields for water
quantities in the fluid layer and the capillary layer are combined in a single
texture. A total of four textures carry the pigment concentrations in both
the fluid layer and the surface layer, so the current implementation limits the
number of pigments in each active cell to eight. The table excludes several
textures that were used to store intermediate results.
All simulation algorithms described in chapter 4 were implemented on graphics hardware as fragment shaders using NVIDIA’s high-level shading language Cg. The simulation loop relies on the framebuffer-object extension, allowing the results of a rendering pass to be stored in a target texture. Each operation in tables 4.1, 4.2 and 4.3 is mapped to one or more fragment programs.
6.2.2 Optimization
An optimization was made that makes sure only relevant parts of the canvas are updated, by using an overlay grid that keeps track of areas that were touched by the brush. Only these modified sub-textures, or dirty tiles, are processed in the following time steps. This effort is necessary because, in contrast to the software implementation from the previous chapter, it is in this case much harder to determine which cells of the canvas are “active”. Processing every single cell at each time step would slow down the simulation considerably.

The downside of this approach is that simulation speed depends on the size of the area touched by the brush. This behavior can be countered by annotating each tile with a time-to-live value, based on a fair estimation of the paint’s time to dry. When this value drops to zero, every cell in the tile is dry and the tile can be dropped from the simulation loop.
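A minimal sketch of this dirty-tile scheme (the tile size and the time-to-live value are illustrative assumptions, not figures from the implementation):

```python
# Sketch of the dirty-tile overlay grid. Tile size and TTL are illustrative.

TILE = 32        # cells per tile side
DRY_TTL = 100    # estimated number of time steps for paint to dry

dirty = {}       # tile coordinate -> remaining time-to-live

def touch(cell_x, cell_y):
    """Mark the tile under a brush position as dirty, resetting its TTL."""
    dirty[(cell_x // TILE, cell_y // TILE)] = DRY_TTL

def step():
    """Advance one time step: simulate only dirty tiles, age their TTL."""
    for tile in list(dirty):
        # ... run the simulation passes restricted to this sub-texture ...
        dirty[tile] -= 1
        if dirty[tile] == 0:   # every cell in the tile is now dry
            del dirty[tile]    # drop the tile from the simulation loop

touch(40, 10)                  # brush touches cell (40, 10) -> tile (1, 0)
for _ in range(DRY_TTL):
    step()
```

Touching a tile again before it dries simply resets its TTL, so actively painted regions stay in the loop while abandoned ones fall out.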
6.2.3 Rendering the Canvas
Baxter created an overview of several color mixing schemes used in literature,
such as additive color mixing, subtractive color mixing, average color mixing
and pigment mixing [Baxter 04b]. In the previous chapter we chose an additive mixing scheme for its simplicity, but concluded that it was unable to
adequately represent the resulting color of a pigment mix.
Instead of directly working with color values, the models classified under
“pigment mixing” try to capture the interaction of light with individual pigment particles, which includes subsurface scattering and absorption of light.
One of the simplest approaches in this category is the Kubelka-Munk diffuse
reflectance model [Kubelka 31]. It makes the assumption that a layer consists
of a homogeneous material and that it covers an infinite area. The model
describes the material properties in terms of two parameters: an absorption constant σa, and a scattering constant σs.²
Similar to many previous digital paint programs in literature, our canvas
is visualized using the Kubelka-Munk model. The real-time GPU implementation of Baxter et al., however, distinguishes itself by the fact that it uses an
eight-component space instead of a three-wavelength RGB space [Baxter 04e].
The eight wavelengths are sampled with a Gaussian quadrature numerical integration scheme from a set of 101 values that were obtained by measuring reflectance values of real paint samples using a spectroradiometer. Our implementation uses just three components, with values taken from the work of Curtis et al. [Curtis 97]. Table 6.1 shows the KM values we used for the watercolor palette.
The KM algorithm is translated to a Cg fragment program (table 6.3). It
iteratively composites every glaze, including the canvas reflection coefficients,
to produce the final image. The algorithm assumes that three parameters
describing the low-level scattering properties of each layer are known: the
layer’s thickness d, and the absorption and scattering coefficients σa and σs
of every pigment. From these attributes, the KM equations give us R, the fraction of light that is reflected, and T, the fraction that is transmitted through the layer.
The input variable d is derived on a per-cell level by measuring the total fraction of pigment in both the shallow fluid layer and the surface layer.
The absorption and scattering coefficients are passed to the shader as uniform
parameters. They are part of a user-defined palette, which contains the coefficients for each RGB color channel. Finally, repeated application of the KM
² In literature, these constants are sometimes denoted by K and S respectively.
Pigment name         σar     σag    σab    σsr       σsg       σsb
Indian Red           0.46    1.07   1.5    1.28      0.38      0.21
Quinacridone Rose    0.22    1.47   0.57   0.05      0.003     0.03
Cadmium Yellow       0.1     0.36   3.45   0.97      0.65      0.007
Hookers Green        1.62    0.61   1.64   0.01      0.012     0.003
Cerulean Blue        1.527   0.32   0.25   0.06      0.26      0.4
Burnt Umber          0.74    1.54   2.1    0.09      0.09      0.004
Cadmium Red          0.14    1.08   1.68   0.015     0.018     0.02
Brilliant Orange     0.13    0.81   3.45   0.009     0.007     0.01
Hansa Yellow         0.06    0.21   1.78   0.5       0.88      0.009
Phthalo Green        1.55    0.47   0.63   0.01      0.05      0.035
French Ultramarine   0.86    0.86   0.06   0.005     0.09      0.01
Interference Lilac   0.08    0.11   0.07   1.25      0.42      1.53
Black                2.5     2.5    2.5    0.000001  0.000001  0.000001

Table 6.1: The digital watercolor palette. The Kubelka-Munk values are obtained from the measurements used in the work of Curtis et al. [Curtis 97], except for “Black”, which is a fictional pigment.
Texture purpose                       Content
Canvas reflectance values             [Rr, Rg, Rb, unused]
Canvas composed reflectance values    [Rr, Rg, Rb, unused]
Background                            [r, g, b, a]

Table 6.2: Texture data used in the render step.
composition equation takes care of multiple, stacked layers. The resulting R
value is used as the pixel’s color. Additionally, bilinear interpolation of pixel
values during rendering reduces “blockiness” at lower resolutions.
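For reference, the per-channel KM computation and the compositing step can be sketched as follows; this mirrors the structure of the Cg program in table 6.3, but the function names and the sample values are our own.

```python
import math

# Per-channel Kubelka-Munk reflectance/transmittance and layer compositing,
# mirroring the structure of the Cg program in table 6.3. Names and sample
# values are ours, not from the thesis implementation.

def km_layer(sigma_a, sigma_s, d):
    """Reflectance R and transmittance T of a homogeneous layer of
    thickness d with absorption sigma_a and scattering sigma_s."""
    a = (sigma_s + sigma_a) / sigma_s
    b = math.sqrt(max(a * a - 1.0, 0.0))
    t = b * sigma_s * d
    denom = a * math.sinh(t) + b * math.cosh(t)
    return math.sinh(t) / denom, b / denom   # (R, T)

def km_composite(R, T, R_prev):
    """Composite a layer (R, T) over a substrate with reflectance R_prev."""
    return R + T * R_prev * T / (1.0 - R * R_prev)

# Red channel of one "Indian Red" glaze (values from table 6.1) over a
# bright substrate with reflectance 0.9:
R, T = km_layer(sigma_a=0.46, sigma_s=1.28, d=0.5)
R_total = km_composite(R, T, R_prev=0.9)
```

Repeated calls to `km_composite`, one per glaze, reproduce the stacking of layers described above.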
Table 6.2 summarizes the textures used in the render process. Instead of
keeping reflectance values for each dried layer in a separate texture object, they
are combined in the “composed reflectance texture”, which serves as the base
layer in the KM compositing algorithm. The “reflectance texture” contains
KM reflectance values for an empty canvas and is used to reset the composed
texture. The last texture contains a background that can serve as a reference during painting.
fragout_float main(vf30 IN,
                   uniform float4x3 K,  // absorption coeffs (sigma a)
                   uniform float4x3 S,  // scattering coeffs (sigma s)
                   uniform samplerRECT fluidpigment,  // pigment in fluid layer
                   uniform samplerRECT deppigment,    // pigment in surface layer
                   uniform samplerRECT Rcanvas)
{
    fragout_float OUT;
    half2 coord = IN.TEX0.xy;

    // bi-linear interpolation
    float4 sp = f4texRECTbilerp(fluidpigment, coord);
    float4 dp = f4texRECTbilerp(deppigment, coord);
    float3 Rprev = f4texRECTbilerp(Rcanvas, coord).xyz;

    // thickness d
    float4 sum = sp + dp;
    float d = sum.x + sum.y + sum.z + sum.w;

    if (d == 0.0f)  // no paint in this cell
        OUT.col = float4(Rprev, 0.0);
    else
    {
        // scale K and S according to pigment concentrations
        float4 s = (sp + dp) / d;
        float3 totalK = mul(s, K), totalS = mul(s, S);

        // calculate R (reflection) and T (transmission) coefficients
        float3 aa = (totalS + totalK) / totalS;
        float3 b = aa * aa - 1.0f.xxx;
        b = float3(sqrt(max(b.x, 0.0f)), sqrt(max(b.y, 0.0f)),
                   sqrt(max(b.z, 0.0f)));
        float3 t = b * totalS * d;
        float3 sh = float3(sinh(t.x), sinh(t.y), sinh(t.z));
        float3 ch = float3(cosh(t.x), cosh(t.y), cosh(t.z));
        float3 R = sh / (aa * sh + b * ch), T = b / (aa * sh + b * ch);

        // compose with canvas' reflectance
        float3 Rtot = R + T * Rprev * T / (1.0f - R * Rprev);
        OUT.col = float4(Rtot, 1.0);
    }
    return OUT;  // return current layer's reflection coefficient
}

Table 6.3: Cg code for Kubelka-Munk layer compositing of a single pigment set.
6.3 Watery Paint Media
The observations on watercolor behavior we made in section 4.2 are also applicable to some other types of paint media. This section outlines the procedure
for creating gouache and Oriental ink images with the canvas model, along
with several results.
All results in this chapter are created with our application on an Intel(R) Xeon(TM) 2.40 GHz system equipped with an NVIDIA GeForce FX 5950 graphics card. A Wacom tablet interface was used as a brush metaphor. In all cases the canvas measured 800 × 600 cells, with an overlay grid of 32 × 32 tiles. The frame rate of about 20 frames/sec is affected when a user covers a very large area within the same active layer and draws fast enough so that the drying process does not deactivate any tiles in the overlay grid. In this situation, user interaction is still possible at about half the normal frame rate.
6.3.1 Watercolor
The strokes in figure 6.1 show examples of typical watercolor paint effects. The figure also demonstrates the impact of varying the pigment properties introduced in section 4.6 (staining, clumping and weight) on paint behavior and stroke appearance.
Several watercolor images created with the system are depicted in figures
6.2 and 6.3.
6.3.2 Gouache
Gouache is watercolor to which an opaque white pigment has been added. This
results in stronger colors than ordinary watercolor. A layer of paint covers all
layers below, so paint is not applied in glazes. Also, gouache is not absorbed in
the canvas but remains on the surface in a thick layer, creating flat color areas.
These properties can be mapped to our model by replacing the KM optical
model with a simpler composition algorithm that blends layers together based
on local pigment concentrations. Using a higher viscosity factor and loading
more pigment in the brush results in thicker paint layers. Figure 6.4 shows an
example of computer-generated gouache.
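A hypothetical sketch of such a concentration-based compositing rule follows; the coverage mapping is an assumption for illustration, not the algorithm actually used:

```python
# Hypothetical concentration-based compositing for gouache: an opaque layer
# covers the color below in proportion to its local pigment concentration.
# The coverage mapping and all values are illustrative assumptions.

def gouache_blend(layer_rgb, concentration, below_rgb, full_cover=1.0):
    """Blend a gouache layer over the color beneath it."""
    alpha = min(concentration / full_cover, 1.0)  # opacity from pigment amount
    return tuple(alpha * c + (1.0 - alpha) * b
                 for c, b in zip(layer_rgb, below_rgb))

# Thick paint (full coverage) completely hides the color underneath:
opaque = gouache_blend((0.8, 0.1, 0.1), 1.0, below_rgb=(0.0, 0.5, 0.0))
# Thin paint lets the lower layer show through:
thin = gouache_blend((0.8, 0.1, 0.1), 0.25, below_rgb=(0.0, 0.5, 0.0))
```

This captures the flat, covering character of gouache without the subsurface scattering that the KM model computes for watercolor glazes.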
6.3.3 Oriental Ink
Figure 6.1: Strokes showing various watercolor effects: high and low pigment granulation causing a difference in accentuation of the canvas texture (a); wet-on-wet painting showing a feathery pattern in a computer-generated stroke (b) and a real stroke (c); the difference between pigment with high staining power, causing particles to remain stuck on the paper when painted over (d), and pigment with low staining power that is easily picked back up (e); strokes that are washed out with a wet brush, using pigment with high staining power (f) and low staining power (g); the difference between pigment with high density, which falls quickly to the surface (h), and low density, which stays longer in the shallow fluid layer (i); the “dark-edge” effect in a computer-generated stroke (j) and a real stroke (k); and “glazing”, achieved by adding thin layers of watercolor one over another (l).

Figure 6.2: A computer-generated watercolor image.

Figure 6.3: Several computer-generated watercolor images.

Figure 6.4: An example of a computer-generated gouache image.

Although the brushes and techniques used in Oriental paintings are very different from those in Western painting, the mechanics of pigment and water are quite similar. The canvas is generally more textured and more absorbent,
and the dense black carbon particles are smaller and able to diffuse into the
paper. The former property is easily obtained in our simulation by generating
a rougher canvas texture and using a higher absorption constant.
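Such a rougher, more absorbent canvas could be generated along these lines; the height-field model and all parameter values are illustrative assumptions:

```python
import random

# Sketch of generating a rougher canvas for Oriental ink: a per-cell height
# field with larger random variation, paired with a higher absorption
# constant. Parameter values are purely illustrative.

def make_canvas(width, height, roughness=0.8, absorption=0.9, seed=42):
    """Per-cell canvas heights, with variation scaled by 'roughness'."""
    rng = random.Random(seed)  # seeded for reproducibility
    heights = [[1.0 - roughness * rng.random() for _ in range(width)]
               for _ in range(height)]
    return heights, absorption

heights, absorption = make_canvas(8, 8)
```

A larger `roughness` deepens the cavities where “pigment clumping” collects ink, while the higher `absorption` constant moves water into the canvas faster.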
Despite the fact that our canvas model does not simulate pigment particles inside the canvas structure, ink diffusion can still be handled by the top layer. However, the creation of the typical feathery pattern that appears when pigment particles are blocked during diffusion needs an extra simulation component. This will be the task of the “diffusion controller” algorithm, discussed in chapter 8.
The palette consists of very dark pigment with high density. Figure 6.5
depicts a computer-generated painting in black ink, compared with the original “La Mort”. In both the real and the digital painting a canvas with low
absorbency was used.
6.4 User Interface
Figure 6.5: An image in Oriental black ink created with the system by Xemi Morales (b), based on the original “La Mort” by Marie-Ann Bonneterre (a).

Figure 6.6: The user interface of WaterVerve.

Users are provided with an intuitive interface, displaying the canvas and a default palette with different pigment types (figure 6.6). As shown in figure 6.7(a), a small area of the palette, the mix canvas, is reserved for the mixing of at most eight different pigments. The mix canvas simulates paint in software using a grid similar to the one proposed in chapter 5. The canvas has no surface
layer and capillary layer, however, and fluid velocities and water amounts are not tracked. The paint is passive; it does not interact with the canvas, and neighboring cells do not exchange contents. The only activity that takes place, and which causes the paint to be mixed, is the continuous transfer of pigment amounts between canvas cells and the cells of the circular brush from chapter 5. The active pigment mix is defined as the list of all pigment fractions in the center cell of the mix brush. The current mix color is shown next to the mix canvas.
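The transfer mechanism can be sketched as a symmetric per-pigment exchange between a brush cell and the canvas cell beneath it; the exchange rate and pigment names are illustrative assumptions:

```python
# Sketch of the passive mix canvas: each step, a brush cell and the canvas
# cell under it exchange part of their pigment amounts, so repeated dabbing
# gradually mixes the paints. The exchange rate is an assumption.

RATE = 0.25  # fraction of the difference exchanged per step

def transfer(brush_cell, canvas_cell):
    """Move pigment amounts toward each other, pigment type by type."""
    for pig in set(brush_cell) | set(canvas_cell):
        b = brush_cell.get(pig, 0.0)
        c = canvas_cell.get(pig, 0.0)
        delta = RATE * (c - b)       # flows from the richer to the poorer cell
        brush_cell[pig] = b + delta
        canvas_cell[pig] = c - delta

brush = {"cadmium_yellow": 1.0}
canvas = {"cerulean_blue": 1.0}
for _ in range(20):                  # keep dabbing the same spot
    transfer(brush, canvas)
```

Because each step halves the remaining difference, both cells converge to the same half-and-half mix while the total amount of each pigment is conserved.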
The user interface also includes basic operations, such as the possibility to
save and load intermediate results, and canvas operations like drying, clearing
and starting a new layer. Starting a new layer involves first drying the layer,
followed by turning it into a passive layer. Passive layers are not separately
tracked but combined into a single layer.
Figure 6.7: Palette and brush dialogs. (a) The palette for interactive color mixing. (b) Brush dialog for choosing brush type, pigment and water load, and brush size.
Other less intuitive simulation attributes, including fluid dynamics parameters, pigment properties and different views on simulation data, can also be
queried and manipulated through various dialog windows.
The canvas itself is presented to the user in three different ways:
• A 3D perspective view, mimicking the viewpoint of the user when painting on a horizontal surface (figure 6.6).
• A 2D orthogonal view showing the canvas from the top. The brush is
visualized only by its shadow. This approach clearly shows the current
brush orientation, without being intrusive to the painting process (figure
6.8(b)).
• A brush-following view that attaches the camera to the brush itself (figure 6.8(a)). This viewpoint is less useful during painting. It is mainly
used to get a clear view on brush deformation. The brush images from
chapter 7 are made from this viewpoint.
Figure 6.8: The brush view (figure 6.8(a)) fixes the camera to the brush, following its movements. The orthogonal view (figure 6.8(b)) only shows the brush’s non-intrusive shadow.
The user is able to control the camera’s position and orientation. This
way, an optimal viewpoint for painting can be obtained.
The hardware setup, showing a user while interactively painting with the
system, is shown in figure 6.9. A Wacom tablet interface offers 5 DOF input
for efficient brush control (chapter 7). A common mouse can also be used in
combination with the circular brush model, which does not require the missing
tilt and pressure information. Pressure is in this case translated to a circle
radius value by means of the slider in figure 6.7(b).
6.5 Discussion
A complete painting environment was presented in this chapter. Contrary to the parallel implementation presented in the previous chapter, the adoption of programmable graphics hardware makes it possible to perform the simulation on a single desktop computer. Moreover, enough computational capacity remains for the complex 3D brush model that will be presented in the next chapter. This kind of brush is a vital element in creating a believable painting experience. The circular brush that was used until this point provides predictable behavior and is easy to work with, but is unable to conceal the artificial-looking circular patterns in an image. This issue will be dealt with in the following chapter.
Results show that a wide range of typical watercolor effects can be achieved.
Figure 6.9: Hardware setup of the painting system. A Wacom tablet interface
offers 5 DOF input for efficient brush control.
Even the simulation of related paint media like gouache and Oriental ink is within the model’s capabilities.
One issue that is readily observable in the images created with this implementation is image resolution. The resolution of generated images is inherited
from the simulation grid’s resolution, which in turn is restricted by the simulation’s performance requirement. The “blocky” effect in the results is somewhat reduced using straightforward bi-linear filtering, and could possibly be
improved using other filtering techniques, but it is clear that other approaches
are needed to allow arbitrary image resolution. In chapter 9, we elaborate on
this issue.
Chapter 7
Brush Model
Contents

7.1 Introduction
7.2 Brush Models in Literature
7.2.1 Input Devices
7.3 Brush Dynamics
7.3.1 Kinematic Analysis
7.3.2 Energy Analysis
7.3.3 Constraints
7.3.4 Energy Optimization
7.4 Brush Geometry
7.4.1 Single Spine Model
7.4.2 Footprint Generation
7.4.3 Multi-Spine Models
7.5 Results
7.6 Discussion

7.1 Introduction
The paint brush is the medium that communicates an artist’s intentions onto
the canvas. Its importance is pointed out by an experienced artist as follows
[Wenz-Denise 01]:
“Painting is a messy business. You will find that most of the materials involved in painting are expendable; paint, rags, solvents, mediums. In fact, you may even ruin some clothes in the process. Just about the only tools worth protecting are your paint brushes.”

Figure 7.1: The three components of a paint brush: A. the tuft, B. the ferrule, C. the handle. Copyright © 1998 Smith [Smith 98].
A paint brush consists of three components, as shown in figure 7.1. The handle is of course what the artist holds when manipulating the brush. The point where the bristles and the handle meet is called the ferrule, made of nickel or some other non-reactive metal. Most importantly, the tuft carries and applies the paint on the canvas. Its material, natural hair, bristle or synthetic fiber, determines the quality (and value) of the brush. Apart from its material, the effects a brush can produce are also influenced by its shape and size attributes. Figure 7.2 depicts a small selection out of the wide range of available brush shapes, each accompanied by a sample stroke.
A 3D virtual counterpart of a real paint brush must be able to capture a user’s gestures and translate them to realistic and predictable strokes. Most digital paint programs ignore the nuances in brush motion and produce a uniform, analytical mark.
This chapter presents a new brush model that complements our previously introduced canvas model. It features the following characteristics:
• An efficient and generally applicable method to deform the brush tuft using free-form deformation.
• Anisotropic friction.
• Complex footprint generation.
7.2 Brush Models in Literature
Virtual brush models have improved considerably since Greene’s drawing prism and Strassman’s one-dimensional version, which we discussed in chapter 3.
Figure 7.2: Several commonly used real brushes, along with a representative created stroke. Copyright © 1998 Smith [Smith 98]. Chinese calligraphy brush 7.2(a); flat brush 7.2(b); round brush 7.2(c); rigger brush 7.2(d); fan brush 7.2(e); mop brush 7.2(f).
The first physically-based 3D brush model was given by Lee, who adopted
Hooke’s law to model a collection of elastic bristles for painting with black ink
[Lee 97, Lee 99].
Most of the previous work that discusses advanced brush models for interactive digital painting is intended for either producing paintings with Oriental ink or creating Chinese calligraphy. In fact, only the work of Baxter et al. explores the use of virtual brushes in Western painting [Baxter 01, Baxter 04a]. The deformable brush presented in their dAb painting system is the first to provide haptic feedback, enabling a user to actually feel how the brush deforms and therefore enhancing the sense of realism. A linear spring between the brush head and the canvas generates the forces that form the input for a PHANToM haptic feedback device. The dynamics of the brush head itself is handled by a semi-implicit method that integrates linear spring forces. These kinds of time-stepping integration techniques, however, are less suitable for use in the heavily damped system that is required to simulate the stiff bristles. Further limitations include the inability to handle bristle splitting.
Saito et al. introduced a more appropriate technique based on energy optimization [Saito 99, Saito 00]. The function that has to be minimized captures the total amount of energy in the system, a summation of bend energy from joints, potential and kinetic energy from the tuft mass, and frictional energy. The result is a very stiff dynamical system where the static equilibrium is found almost instantly, which is a good approximation of real bristle behavior. The brush geometry is constructed by a single spine that is traced by a circular disc.
Several authors extended this technique. Chu et al. added anisotropic friction, lateral spine nodes to control brush flattening and a bristle spreading technique based on a static alpha map [Chu 02]. Their system also takes into account “pore resistance”, which occurs when bristles become stuck in irregularities of the canvas, by setting up a vertical dynamic blocking plane that prevents a bristle from sliding. Plasticity, which accounts for shape deformations caused by internal friction of the wet tuft, is modeled by adjusting the target angle by a small value. The brush surface is again determined by an elliptical cross section that is traced along the spine. Therefore, this brush model is only suitable for the round brushes found in Oriental ink painting. In later work they enhance the splitting procedure by enabling the single tuft to generate smaller child tufts along its spine [Chu 04].
The model described in more recent work of Baxter et al. generalizes
the latter to a multi-spine architecture able to also model brushes used in
Western painting [Baxter 04c]. The brush geometry is built in two ways: as
a subdivision surface, or as individual bristles represented by thin polygonal
strips. The simulation of multiple bristles makes it possible to create all kinds
of brush shapes.
A completely different approach is used by Xu et al., who design the model
as a set of independent “writing primitives” described by NURBS surfaces,
each representing a collection of bristles [Xu 02]. The writing primitives can
split into several smaller versions whenever their inner stress reaches some
threshold. In later work, they organize the bristles in a two-level hierarchy to
reduce computational complexity. The brush dynamics is partially performed
off-line by first constructing a database containing a collection of samples from
real brush motion. At run-time, the current state of the brush is determined
through retrieval and interpolation of the closest matches in the database.
7.2.1 Input Devices
An important component in an interactive painting system is the input device that acts as a metaphor for a virtual paint brush and is used to translate the user’s gestures to painted strokes. In the work of Baxter et al., the first attempt is made to adopt the PHANToM haptic feedback device for this purpose [Baxter 01]. Although such a device has 6 degrees of freedom and allows the user to feel the brush’s response to interactions with the canvas, it needs calibration to indicate the position of the virtual canvas. Also, this kind of hardware has not yet reached consumer-level status. For these reasons, many researchers continue to use a tablet interface for controlling a virtual brush model.

The commonly used mouse seems too restrictive to serve as an input apparatus for the purpose of digital painting. With only two degrees of freedom, which are logically attributed to the brush’s position, the full range of possibilities of a 3D virtual brush cannot adequately be exploited.
7.3 Brush Dynamics
In our system, the dynamics that govern the behavior of a single bristle are virtually identical to the ones found in the work of Saito, Chu and Baxter [Saito 99, Chu 02, Chu 04, Baxter 04c]. Our system uses an energy optimization framework to compute the static equilibrium of the system. This approach results in very “snappy” bristle behavior, as the bristle almost instantaneously regains its shape when the brush is lifted from the canvas.
Before deriving the kinematic equations we first describe the bristle representation, which relies on the same notation as used in the work of Baxter et
al. [Baxter 04b].
7.3.1 Kinematic Analysis
A single bristle is represented as a kinematic chain, shown schematically in
figure 7.3. Each segment in the chain has a predefined length and two angles,
θ and φ, that determine the segment's orientation. We express the orientation
in the fixed XYZ angles representation because it is intuitive to work with
and has a compact form. We assume the final rotation, around the global
Z axis, is always zero, which corresponds to a bristle with zero twist.
The downside of this approach, and in fact of all Euler and fixed angle
representations, is gimbal lock, which occurs when two axes of rotation become
aligned [Parent 02]. With fixed XYZ angles this problem occurs when θ is 90°,
so this configuration should be avoided. Using quaternions as a
parametrization method would resolve this issue, but at the cost of additional
computational complexity.
Figure 7.3: Kinematic representation of a bristle.

With this representation, we can transform a vector ${}^{i}\vec{v} = (x, y, z)$
in the coordinate frame of segment $i$ to its parent coordinate frame $i-1$,
using the combined XY rotation matrix:

$$
{}^{i-1}\vec{v} = {}^{i-1}R_i \, {}^{i}\vec{v} = R_Y(\phi)\, R_X(\theta)\, {}^{i}\vec{v}
= \begin{pmatrix}
\cos\phi & \sin\phi\sin\theta & \sin\phi\cos\theta \\
0 & \cos\theta & -\sin\theta \\
-\sin\phi & \cos\phi\sin\theta & \cos\phi\cos\theta
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\tag{7.1}
$$
The notation used in this formula is similar to the one used in most
kinematics literature [Craig 89, Parent 02], where a vector ${}^{i}\vec{v}$ is
the vector $\vec{v}$ expressed in the $i$th coordinate frame, and rotation
matrix ${}^{j}R_i$ transforms a vector from coordinate frame $i$ to coordinate
frame $j$.
The actual bend angle $\beta$, the angle between two adjacent segments, is
determined by:

$$ \beta = \cos^{-1}(\cos\theta \cos\phi). \tag{7.2} $$
7.3.2 Energy Analysis
We adopt a rather unphysical approach to simulate the dynamics of a deformable
bristle, by optimizing a behavior function C that captures the total energy in
the system. The result is a bristle that immediately regains its shape, which
actually closely mimics the behavior of a real bristle.

In comparison, a physically-based method would convert the behavior function
to a force equation that models a generalized spring, "pulling" the system
into the desired state. Some explicit or implicit time-stepping algorithm
would then be used to integrate the equations, but would most likely produce
either inaccurate or unstable behavior [Baxter 01].
The total energy in a system containing a single bristle is the sum of the
deformation energy (the potential energy stored in the angular springs) and
the frictional energy:

$$ C = E_{\text{total}} = \sum_{\text{joints}} E_{\text{spring}} + E_{\text{friction}}. \tag{7.3} $$
The angular spring pulls the bend angle β between two adjacent segments
towards a rest angle. In our model we assume straight bristles, so the rest
angle is always 180°. An angular spring, modeled as a scalar potential energy
function, is based on Hooke's law:

$$ E_{\text{spring}} = \frac{k}{2}\,(180 - \beta)^2, \tag{7.4} $$

where $k$ is the spring constant for that particular spring.
The friction energy term captures the energy caused by the bristle being
dragged over the rough canvas surface. Both Chu and Baxter use a simple
Coulomb friction model for this purpose:

$$ E_{\text{friction}} = \mu \sum_{\text{contact joints}} |\vec{N}|\,\|\vec{d}\|, \tag{7.5} $$

with $\mu$ the kinetic friction coefficient, $\vec{N}$ the force normal to the
contact surface, and $\vec{d}$ the drag direction of the joint projected onto
the surface (figure 7.4), accumulated for every joint in contact with the
surface.
Figure 7.4: Finding the drag vector in a 2D scenario where the bristle tip
touches the canvas somewhere in the time interval [t0, t1]. The start point of
the drag vector is calculated by interpolation, while the end point is
approximated by projecting the joint position at t1 onto the canvas surface.

Both authors also add an additional component to this equation to account for
anisotropic friction. It models the fact that the direction of minimal
resistance is the "pull" direction of the bristle, as opposed to sideways
dragging or bristle pushing (when the bristle becomes stuck in the canvas
pores). We use the same approach as Baxter, who was inspired by the
Blinn-Phong formula for calculating the intensity of a specular highlight:
$$ E_{\text{friction}} = \mu \sum_{\text{contact joints}} (1 - \eta)\,|\vec{N}|\,\|\vec{d}\|, \tag{7.6} $$

with

$$ \eta = C_\eta \left( \max\left(0,\; \vec{d}_p \cdot \frac{\vec{d}}{\|\vec{d}\|}\right) \right)^{\!k}, \tag{7.7} $$

and $\vec{d}_p$ the preferred drag direction. The anisotropic constants
$0 \le C_\eta \le 1$ and $k$ determine the shape of the anisotropic cone. As
also $0 \le \eta \le 1$, this addition to the equation effectively scales the
friction energy in favor of the preferred direction, in which the anisotropic
component removes all friction.
An important advantage of equation 7.6 is that it has C1 continuity, which is
a requirement for the optimization method described in the next section.
However, this property is lost when $C_\eta$ is set to zero: the function then
becomes non-differentiable at $\vec{d} = 0$. This isotropic friction
configuration presents a problem for the optimizer, which we circumvent by
simply squaring the length of the drag vector.
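As a concrete illustration of equations 7.5 to 7.7, a Python sketch of the
per-joint friction term might look as follows. The function and parameter
names are our own, and the `isotropic_fix` flag implements the squared-length
workaround described above:

```python
import math

def friction_energy(mu, n_force, drag, d_pref, c_eta, k, isotropic_fix=False):
    """Friction energy for one contact joint (equations 7.5-7.7).

    drag and d_pref are 2D vectors (d_pref a unit vector). When c_eta == 0
    the term reduces to the isotropic Coulomb model of equation 7.5, which is
    non-differentiable at drag == 0; squaring the drag length restores C1.
    """
    d_len = math.hypot(drag[0], drag[1])
    if c_eta == 0.0:
        # isotropic case: optionally square the drag length to keep it smooth
        length_term = d_len * d_len if isotropic_fix else d_len
        return mu * abs(n_force) * length_term
    dot = drag[0] * d_pref[0] + drag[1] * d_pref[1]
    # eta in [0, 1]: the anisotropic attenuation of equation 7.7
    eta = c_eta * max(0.0, dot / d_len) ** k if d_len > 0.0 else 0.0
    return mu * (1.0 - eta) * abs(n_force) * d_len
```

With $C_\eta = 1$, dragging exactly along the preferred direction removes all
friction, while dragging perpendicular to it pays the full Coulomb cost.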
One assumption we make is that the normal force $\vec{N}$ is a constant,
approximated at each time step. This simplification is made because
calculating the actual normal force based on the configuration of all springs
is tedious.
7.3.3 Constraints
To model the impenetrable canvas surface, a simple inequality constraint for
each joint suffices:

$$ \text{Plane}_z - (\vec{p}\,)_z \ge 0. \tag{7.8} $$
The constraints force the joints to stay above the surface described by
z = Planez , with the Z-axis pointing down.
In practice, we add an extra equality constraint for every joint that violates
the non-penetration constraint during the current time step (equation 7.9).
This avoids the scenario where the optimizer decides that lifting the joint
from the canvas is a solution that requires less energy than undergoing the
larger frictional energy. In that case, the bristle would make a jump across
the canvas.
$$ \text{Plane}_z - (\vec{p}\,)_z = 0. \tag{7.9} $$

7.3.4 Energy Optimization
Having modeled the system's energy, we have enough information to find its
equilibrium: the state of balance in which all forces acting on the bristle
cancel out. This state is defined at the energy minimum.

For this purpose we created a software component that encapsulates the
"donlp2" optimization framework [Spellucci 04]. This software package
minimizes a (in general nonlinear) differentiable real function f subject to
(in general nonlinear) inequality and equality constraints g and h (equation
7.10). It accomplishes this using sequential quadratic programming (SQP), an
effective numerical method for nonlinearly constrained optimization
[Nocedal 99].
$$ \min_{x \in S} f(x), \qquad S = \{\,x \in \mathbb{R}^n : h(x) = 0,\; g(x) \ge 0\,\}. \tag{7.10} $$
Mapping these equations to our model, f is the energy function
$E_{\text{total}}$, and the constraint functions are those described in the
previous section. Both the energy function and the constraint functions are
nonlinear in the variables θ and φ.
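The structure of the optimization can be illustrated with a deliberately tiny
example: a single rigid segment of given length, rooted a fixed height above
the canvas, with spring energy $E = \frac{k}{2}\theta^2$ and the
non-penetration constraint that its tip stays above the canvas plane. The
sketch below uses projected gradient descent as a stand-in for donlp2's SQP
solver, purely for illustration; all names are our own:

```python
import math

def equilibrium_angle(k, length, height, steps=20000, lr=1e-3):
    """Toy static-equilibrium solve for one bristle segment.

    The segment hangs straight down at rest (theta = 0) from a root `height`
    above the canvas. We minimize the spring energy E = k/2 * theta^2 subject
    to the non-penetration constraint length*cos(theta) <= height, i.e. the
    tip must stay above the canvas. The analytic optimum is
    theta = acos(height/length) when the constraint is active, 0 otherwise.
    """
    theta_min = math.acos(min(1.0, height / length))  # constraint boundary
    theta = math.pi / 2                               # start fully bent
    for _ in range(steps):
        theta -= lr * k * theta          # gradient step on the spring energy
        theta = max(theta, theta_min)    # project back onto the feasible set
    return theta
```

In the real system the variables are all (θ, φ) pairs of every joint and the
constraints of equations 7.8 and 7.9, but the shape of the problem, energy
minimization over a constrained set, is the same.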
7.4 Brush Geometry
Simulating every single bristle of a brush with the technique described above is
not feasible, as a real paint brush can contain hundreds of bristles. Moreover,
bristle-bristle interaction was not considered so far.
Regarding this issue, Baxter et al. choose a Butterfly subdivision scheme,
and in later work a Catmull-Clark surface to tie a monolithic geometric model
to a single spine. This requires a special converter application that determines
the subdivision control vertices. They also describe a second approach in
which every single bristle is modeled as a quadrilateral strip. Only a few
bristles participate in the optimization step. The configuration of the other
strips is derived by interpolation.
The brush geometry in the work of Chu et al. is obtained by sweeping an
elliptical shape along a single spine.
7.4.1 Single Spine Model
We design a polygon mesh of a brush in an undeformed state in a 3D modeling
application such as the freely available open-source tool Blender
[Blender 06]. A free-form deformation lattice is then associated with the mesh
and deformed based on the movement of the kinematic chain from the previous
section. In the case of a single bristle, the bristle serves as the brush
spine. This setup is depicted in figure 7.5.
Figure 7.5: A single-spine round brush model. A free-form deformation grid,
shown in blue, is associated with the tuft's (partially textured) polygon
mesh. The grid is manipulated by the kinematic chain formed by the green
joints.

Note in figure 7.5 that the control points of the deformation volume are
allowed to penetrate the canvas. This is necessary because the brush hovers
slightly above the canvas surface, and only a small amount of tilt pushes the
control points below the surface. As a consequence, a very broad brush model
would not be suitable in combination with this single-spine method. The
polygon mesh of the thin round brush design in figure 7.5 is defined closely
around the spine, so the small amount of penetration is hardly noticeable.
Unfortunately, free-form deformation is a computationally expensive task when
used in combination with a detailed polygon model. For this reason,
programmable graphics hardware was used: a vertex shader deforms each polygon
vertex. The Cg code [NVI 06] for this shader is partially shown in table 7.1.
7.4.2 Footprint Generation
Determining the area on the canvas surface that is touched by the brush is
straightforward. A separate rendering pass on the stencil buffer from the
canvas' point of view, using an orthographic projection, results in a
black-and-white footprint. The front clipping plane is set just below the
canvas surface (we allow the brush to slightly penetrate the surface), while
the back clipping plane is placed just above the surface. All geometry that is
contained in this viewing volume contributes to the footprint.

Before handing this footprint over to the paint simulation, a Gaussian blur is
performed to approximate differences in pressure that will affect the amount
void main(
        in  float3 ipos : POSITION,         // initial position
        out float4 opos : POSITION,         // deformed position
        const uniform float3   obj_center,
        const uniform float3   s,
        const uniform float3   t,
        const uniform float3   u,
        const uniform half3    size,
        const uniform float4x4 mvp,         // projection and modelview matrix
        const uniform float3   lattice[27]) // lattice control points
{
    // transform to lattice coordinate frame
    float3 v = tolocal(ipos.xyz, obj_center, s, t, u);
    float3 v1, v2, res = float3(0.f, 0.f, 0.f);

    // lattice dimensions
    int xc = size.x, yc = size.y, zc = size.z;

    // free-form deformation
    for (int i = 0; i < xc; i++) {
        v2 = float3(0.f, 0.f, 0.f);
        for (int j = 0; j < yc; j++) {
            v1 = float3(0.f, 0.f, 0.f);
            for (int k = 0; k < zc; k++) {
                float3 l = lattice[xc*yc*i + xc*j + k];
                v1 += comb(zc-1, k) * pow(1.f-v[2], zc-1-k) * pow(v[2], k) * l;
            }
            v2 += comb(yc-1, j) * pow(1.f-v[1], yc-1-j) * pow(v[1], j) * v1;
        }
        res += comb(xc-1, i) * pow(1.f-v[0], xc-1-i) * pow(v[0], i) * v2;
    }

    float4 pos = float4(res - obj_center, 1.0); // transform to world
    /* texture & lighting */
    /* ... */
    opos = mul(mvp, pos); // output position to viewport
}
Table 7.1: Cg code for a vertex shader that performs free-form deformation
[NVI 06]. Texture and lighting code is omitted from this listing, as well as
the code for some supporting functions such as comb(n,k), which calculates the
binomial coefficient of k out of n, and pow(i,n), which calculates the nth
power of i.
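For reference, the same Bernstein-weighted deformation can be written on the
CPU. This Python sketch mirrors the triple loop of the shader; the naming is
our own, and it assumes the local coordinates already lie in [0,1]³:

```python
from math import comb

def ffd(v, lattice, nx, ny, nz):
    """Free-form deformation of a local point v = (u, w, t) in [0,1]^3.

    lattice[i][j][k] is the (x, y, z) control point of an nx*ny*nz grid; the
    weights are the Bernstein polynomials, as in the shader of table 7.1.
    """
    def bern(n, i, t):
        # i-th Bernstein polynomial of degree n evaluated at t
        return comb(n, i) * (1.0 - t) ** (n - i) * t ** i
    res = [0.0, 0.0, 0.0]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                w = (bern(nx - 1, i, v[0]) *
                     bern(ny - 1, j, v[1]) *
                     bern(nz - 1, k, v[2]))
                p = lattice[i][j][k]
                res = [res[a] + w * p[a] for a in range(3)]
    return res
```

A useful property for testing is the linear precision of Bézier volumes: with
control points placed on a uniform grid, the deformation is the identity.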
Figure 7.6: A flat brush model with two spines. The spines each manipulate
one side of the free-form deformation grid. The brush geometry itself consists
of a few hundred polylines.
of paint transferred between brush and canvas.
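The blur step just described can be sketched as a separable pass over a binary
footprint grid. The 3-tap kernel below is our own assumption; the dissertation
does not specify the kernel size:

```python
def gaussian_blur(grid, kernel=(0.25, 0.5, 0.25)):
    """Separable 3-tap Gaussian blur of a binary footprint, approximating
    the pressure falloff towards the edge of the stroke."""
    h, w = len(grid), len(grid[0])
    def blur_1d(g, horizontal):
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for o, k in zip((-1, 0, 1), kernel):
                    yy, xx = (y, x + o) if horizontal else (y + o, x)
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += k * g[yy][xx]
                out[y][x] = acc
        return out
    return blur_1d(blur_1d(grid, True), False)
```

The interior of a large footprint keeps full "pressure" while cells near the
silhouette fall off smoothly, which is the effect the paint simulation needs.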
7.4.3 Multi-Spine Models
The concept of associating a deformable spine, represented by a kinematic
chain, with a free-form deformation grid can be extended to work with multiple
spines. Figure 7.6 shows a flat brush design with two spines, each coupled to
one side of the deformation grid. If the user applies pressure to the brush,
the spines will spread the brush geometry.
In this particular example the brush was modeled by means of a polyline mesh,
which can be rendered using OpenGL line strips. Unfortunately, the width of an
OpenGL line strip can only be specified in screen space, as a (floating-point)
number of pixels. Bristle width is only important when creating the footprint,
however, as this directly influences the simulation. Based on the dimensions
of the orthographic view volume used in this process, it is possible to
calculate the necessary image-space width given a bristle width in object
space. The appearance of the brush as perceived by the user in the painting
environment (using a perspective view) is of less importance, so in this case
drawing the bristles in image space does not disturb the simulation.
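The conversion amounts to a single proportionality. A minimal sketch, assuming
the orthographic volume maps onto a square viewport (the names are ours):

```python
def object_to_screen_width(bristle_width, volume_width, viewport_pixels):
    """Convert an object-space bristle width to an OpenGL line width in
    pixels, given the width of the orthographic view volume used for
    footprint rendering and the viewport size in pixels."""
    return bristle_width / volume_width * viewport_pixels
```

For instance, a bristle 0.01 units wide rendered through a 1.0-unit-wide
orthographic volume onto a 512-pixel viewport needs a line width of about 5.12
pixels.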
Continuing this approach, adding more spines and increasing the resolution of
the deformation grid can produce more complex tuft deformation and bristle
spreading. Around ten spines can easily be simulated without a noticeable
performance drop, as this is the only complex task the CPU has to fulfill at
this point.
7.5 Results
A Wacom tablet interface provides 5DOF and enables a user to control the
position, pressure and tilt of the brush in an intuitive way.
The images in figures 7.7 and 7.8 show the results of various kinds of brush
movement using both watercolor paint and Oriental black ink. The strokes in
figure 7.7 were created using a single-spine round brush. The brush geometry
consists of a polygon mesh modeled with the open-source tool Blender
[Blender 06] and exported to the 3ds file format.
The flat brush model in figure 7.8 consists of about 30 bristles. Hundreds of
bristles can be simulated without affecting performance, but in order to
recreate the scratchy strokes of figure 7.8(a) the bristle count must be kept
relatively low.
In both brush models, the spring constants between consecutive segments
decrease towards the tip. This makes the tuft's tip more flexible than its
top, which is stiffer because the bristles are tightly packed in the ferrule.
Each kinematic chain consists of four segments with lengths decreasing towards
the tip.
The efficient balance between CPU load (performing energy optimization) and
GPU load (deforming and rendering the brush's geometry) ensures an overall
real-time painting simulation.
7.6 Discussion
This chapter outlined the construction of a 3D deformable brush design that
extends the previously introduced canvas model. The brush's deformation
depends on one or more kinematic chains, which participate in an optimization
framework that computes the system's static equilibrium. The geometry of the
brush consists of a 3D model description, either polygons or polylines. A
free-form deformation lattice, which encloses the geometry and is deformed
using the kinematic chains, ensures the geometry of the brush inherits the
deformation. Additionally, canvas friction is taken into account to enhance
the realism of brush movement.
(a) Watercolor stroke.
(b) Moving brush sideways.
(c) Swirly movement
(d) Black ink stroke.
Figure 7.7: Computer-generated sample strokes with a round brush.
(a) “Scratchy” strokes.
(b) Black ink stroke.
(c) Bristle spreading with black
ink.
Figure 7.8: Computer-generated sample strokes with a flat brush.
Two brush types were created using this approach: a round brush and a flat
brush. Strokes created with these brushes show that complex shapes can be
produced in combination with a 5DOF tablet interface.

There is still some room for improvement of this model, however. First of all,
the current brush implementation transfers paint in a single direction: from
the brush to the canvas. Moreover, the effects of plasticity are currently
ignored. For both of these issues, adequate solutions that can be incorporated
in our model already exist in the literature.
In the following chapter, other means of applying and manipulating the
paint on the canvas are presented.
Chapter 8

Digital Artistic Tools
Contents

8.1 Introduction and Motivation
8.2 Objectives
8.3 Real-world Artistic Tools
    8.3.1 Textured Tissue
    8.3.2 Diffusion Controller
    8.3.3 Masking Fluid
    8.3.4 Selective Eraser
    8.3.5 Paint Profile
8.4 Digital Artistic Tools
    8.4.1 Textured Tissue
    8.4.2 Diffusion Controller
    8.4.3 Masking Fluid
    8.4.4 Selective Eraser
    8.4.5 Paint Profile
8.5 Results
8.6 Discussion

8.1 Introduction and Motivation
While in real life brush, paint and canvas are the most evident utilities for
creating an artwork, many artists use more than just these elements. Besides
the vast assortment of brushes, with varying shapes, sizes and materials,
there are several other tools that are commonly used to apply paint to the
canvas. Watercolor pencils, for example, are softer than normal pencils and
produce thick, strongly textured strokes. Specialized plastic knives are used
to scrape into a wet area of paint, producing a cross-hatching effect. Even a
toothbrush turns out to be a useful object for splashing small drops of paint
on paper. Removing surplus paint can be done with any material that is
absorbent and textured, with the intention of leaving a distinct footprint in
the painted image. Preventing areas of the canvas from receiving paint is
accomplished with materials like masking fluid or masking tape, which
selectively cover the canvas during the painting process.
This shows that there really are no predefined rules telling us how painting
must be performed. All that matters is that the artist can achieve the effect
he/she had in mind. This chapter explores how our previously introduced
painting environment can be extended to include some of the above-mentioned
artistic tools.
As mentioned in an earlier chapter, one of the advantages of a digital
counterpart of real-life painting is the fact that we are not restrained by
physical laws. Until now, we mimicked these laws as closely as possible in
order to create predictable system behavior. As we will show in this chapter,
it is also possible to model behavior that is "non-physical" but still
predictable and consistent with the user's expectations.
In the literature, several authors have recognized that interactive painting
applications need supporting tools, complementing the procedure of putting
paint or ink on a canvas with some sort of brush model. Gooch & Gooch pointed
out that, once physical models of the materials (media, substrate and brush)
exist, high-level tools that empower the (non-)artist are of great interest
[Gooch 01].
Smith stressed that one should distinguish between a digital paint program and
a digital paint system [Smith 01]. Although both can be described as just
"applications processing pixels", the term "system" implies many more features
and much more capability than a "program" that simulates painting with a brush
on a canvas. A paint system might even encompass several other sub-programs.
Related to this matter, Baxter et al. pointed out that current research in the
area of painterly rendering mostly emphasizes the art of painting, the
appearance of the final product, while paying less attention to the craft of
painting [Baxter 01], the way an artist uses different materials to express
himself. Their paint system for painting with acrylic or oil combines a
deformable 3D brush model with a force-feedback input device to enhance the
sense of realism for the user, who can focus on the painting process itself.
With its minimalistic interface, however, it disregards some of the extra
digital benefits that could assist a user during the creation of an artwork.
8.2 Objectives
In this chapter, we extend our real-time interactive paint system with the
following digital tools:

Textured tissue A textured, absorbent piece of paper that is used to dab
pigment and water from the canvas, leaving a distinct pattern.

Diffusion controller A tool that gives the user more control over the
diffusion process. A user-defined pattern allows customizing or "steering" the
flow-like patterns paint makes while spreading on the canvas.

Masking fluid A special liquid that sits on the canvas, protecting paint in
all layers beneath it.

Selective eraser A utility for removing some or all pigment and/or water from
an active paint layer. It has the non-physical ability to selectively extract
a pigment from a paint mixture.

Paint profile A profile collects predefined palette and canvas parameters
under a single identifier, making it easy to switch between or even combine
different paint media and styles.
With these additions we are able to create some distinctive effects that were
previously difficult or impossible to produce in digital art. In fact, most of
these concepts are new to the domain of digital painting. First, let us take a
look at how these techniques are used in the "real world".
8.3 Real-world Artistic Tools

8.3.1 Textured Tissue
Figure 8.1: Applying a textured tissue to an area of wet Oriental ink. The
result is a print of the texture of the tissue.

Many common household products become useful painting tools when they are
used to dab or blot surplus paint from the canvas while leaving a clear
impression of their texture in the paint. Examples are wadding rods, cotton
rags, tissues, sponges, or even breadcrumbs. We will commonly refer to these
tools as "textured tissues", as this is the concept we introduce into the
paint system.
Applying a textured tissue can be done right after applying the paint, while
it is still wet, but also when the paint is already dry. In that case, the
dried paint is made wet again by rubbing over it with a wet brush. This way,
some paint pigments come loose again and can adhere to the absorbent piece of
material.
The motivation for using these materials is to create interesting visual
effects, although they can also be used to wipe some areas of the canvas
completely clean in case of an error that cannot easily be rectified.

Figure 8.1 shows how an artist uses an absorbent piece of paper on a wet spot
of Oriental ink.
8.3.2 Diffusion Controller
An artist can obtain strokes with feather-like edges and blooming colors using
a technique known as "wet-in-wet". The English artist Joseph Mallord William
Turner was the first to exploit the wet-in-wet technique extensively
[MacEvoy 05]. Combined with the transparency effects that are obtained with
glazing, the blooming soft edges can create vibrating visual effects.

Because the canvas is wet, paint can flow freely without stroke boundary
restrictions and create such soft edges. Distinct feathery patterns arise
because the particles are blocked in their path by irregularities in the
canvas. However, predicting the outcome of this highly random process is very
difficult and requires experience. In real life, the results depend on the
wetness of the paper, the canvas structure, the pigment characteristics and
the paint
quality. The direction and distance of flow can be influenced by tilting the
paper in the desired direction, because the flow is affected by gravity, and
by the amount of paint and water. A wash with soft edges is often used for
skies and water in particular, but also to obtain interesting effects of
atmosphere, light, and texture.
8.3.3 Masking Fluid
There are three reasons why an artist could use some sort of masking material
to prevent staining a specific area of the canvas. First, real watercolor does
not provide white pigment, so in order to include white areas in an image the
white of the canvas must be used. Covering or masking these areas is one way
to achieve this. Another possible application is the protection of previously
created layers from paint, especially when using broad washes. Finally,
masking fluid can also be used to obtain exact contours and sharp edges
between two adjoining washes in the painting.

Note that masking can be applied both in areas with and without paint. In
real-life painting, masking is generally realized with two different
materials: tearable adhesive tape and masking fluids. The former requires
delineating a certain area of the canvas with adhesive tape, which can be torn
to obtain specific shapes. This method, however, does not allow very precise
control, as it is difficult to obtain the desired shape.
The second and more precise method is applying masking fluid with a brush on
the canvas (figure 8.2). This fluid is a removable, colorless liquid made from
rubber latex that can be used to mask areas of work that need protection when
color is applied in broad washes. Once dry, these areas remain protected and
cannot be penetrated by color. Afterwards, the dry masking fluid can be
removed by rubbing it off with the hand.
8.3.4 Selective Eraser
This tool has no exact counterpart in real-world painting, where it is only
possible to remove part of the pigments that are deposited on the paper.
Precise control is not possible, and selectively removing a specific pigment
from a mixture of paint is not possible either. Sometimes artists wash out the
complete canvas to start over again. This is, however, a very cumbersome
operation that only serves as a last-resort strategy.
Figure 8.2: Applying masking fluid with a brush on the canvas. Copyright ©
1998 Smith [Smith 98].
8.3.5 Paint Profile
In real life, a popular technique is the adoption of "mixed media", which
means a painting is created using a combination of different media, for
example mixing Oriental ink, watercolor, and even gouache.

However, there are some limitations to this practice. For example, it is
impossible to switch from an absorbent canvas to a non-absorbent one during
the painting process, nor is it feasible to modify the surface texture. Again,
a computer-based model has the ability to overcome all of these restrictions.
8.4 Digital Artistic Tools

In this section we take a closer look at how the proposed utilities fit into
our digital painting application.
8.4.1 Textured Tissue
Apart from the available set of brushes, a user can select the textured tissue as
a canvas manipulation tool. In this case the input device becomes a metaphor
for the finger, applying pressure on the tissue as long as the tool touches the
canvas surface.
Figure 8.3: Removing water and pigment from the canvas surface with an
absorbent tissue.
Modeling the tissue-paint interaction resembles the way paint interacts with
the canvas. It is in fact much simpler, because we do not have to keep track
of any paint quantities on the tissue itself.
The tissue is implemented as a texture in which one color channel is used to
store a height field (the irregular surface of the tissue). A second color
channel marks transferred cells on the tissue. The latter is necessary to
prevent the tissue from continuously absorbing water and removing pigment from
the canvas.
We define a tissue operation as being the input of a continuous area of
pressure on the tissue, commencing when the finger touches the canvas and
ending when the tissue is lifted.
During a single tissue operation, water in the fluid layer of the canvas model
is absorbed by the tissue and, at the same time, an amount of pigment is
transferred from the canvas surface to the tissue. The amounts of material
removed are not stored individually by the tissue, but combined into the
single "mark" value in the second color channel of the tissue texture. This
procedure is shown in figure 8.3.
The operation is implemented using three fragment shaders. Two shaders remove
pigment and water from the canvas according to the tissue texture's x-channel,
which contains height field values between 0 and 1 (the irregular surface).
The fragment shader's Cg code for removing pigment is shown in table 8.1. A
third shader marks the transferred cells in the y-channel of the tissue
texture.
Figure 8.4: (left) A fragment of the textured tissue used in our system,
rendered with per-pixel bump mapping. (right) A real-life example showing the
resulting image after application of the textured tissue tool on a wet area of
black ink. A distinct print of the tissue is still visible. (middle) A
computer-generated look-alike, in which we tried to mimic the original as
closely as possible using the digital textured tissue.
The left side of figure 8.4 depicts a close-up of the tissue used in our
system. It was sampled from the piece of paper displayed in the center part of
figure 8.1. The final result of this operation is a footprint of the tissue,
as shown in the middle part of figure 8.4. This shows we are able to
faithfully reproduce the original real-life ink stain on the right side of the
figure.
In practice, a convenient way of visualizing this whole process is to show
only an outline of the pressure area. The tissue itself implicitly covers the
whole canvas but is not shown to the user, as this would be too intrusive to
the dabbing process.
8.4.2 Diffusion Controller
In order to create the erratic patterns of "wet-in-wet" painting, the
diffusion component of our simulation needs some additional ingredients. The
simulation step that deals with the diffusion of pigment densities is handled
by the diffusion component $\nu \nabla^2 \vec{p}$ of the Navier-Stokes
equation (chapter 4). Using the Jacobi iteration method to solve this Poisson
equation, we calculate the new pigment densities $p^{new}$ at cell $(i, j)$
each time step:

$$ p^{new}_{i,j} = \frac{\alpha\, p_{i,j} + p_{i+1,j} + p_{i,j+1} + p_{i-1,j} + p_{i,j-1}}{4 + \alpha}, \tag{8.1} $$

with $\alpha = \text{cellarea}/(\nu \Delta t)$ and $\nu$ the diffusion rate.
fragout_float main(vf30 IN,
        uniform float2      position, // finger position
        uniform float       radius,   // fingerprint radius
        uniform float4      fraction, // pigment fraction removed
        uniform samplerRECT pigment,  // pigment data
        uniform samplerRECT tissue)   // tissue texture
{
    fragout_float OUT;

    float2 pos = position - IN.TEX0.xy;

    // copy current value
    OUT.col = h4texRECT(pigment, (half2)IN.TEX0.xy);

    // cell has to be within circle of current fingerprint
    if (inCircle(pos, radius))
    {
        float2 pat = h4texRECT(tissue, (half2)IN.TEX0.xy).xy;

        // gaussian splat, ignoring marked cells
        OUT.col = clamp(
            OUT.col - gauss(pos, radius) * fraction * pat.x * (1.0f - pat.y),
            0.0f.xxxx,
            1.0f.xxxx );
    }
    return OUT;
}
Table 8.1: Cg code for removing pigment from the canvas with a tissue. The
amount of pigment removed is affected by the tissue texture's x-channel, which
contains height field values between 0 and 1 (the irregular surface). The
y-channel of the tissue texture marks cells already transferred during the
current tissue operation. The code for removing water from the canvas is very
similar.
Figure 8.5: The unmodified diffusion algorithm makes cells exchange contents
with neighboring cells until an equilibrium is reached.
Intuitively, all cells keep exchanging content, trying to cancel out
differences in concentration with neighboring cells (figure 8.5). When pigment
densities are dropped into a wet area on the canvas, this diffusion algorithm
therefore produces something similar to the top image in figure 8.6, which is
not what we want and certainly does not correspond to what happens in real
life (section 8.3.2).
The MoXi application of Chu et al. incorporates blocking values that
affect the streaming step of the lattice Boltzmann method (chapter 4). These
values are related to the canvas structure, the glue concentration and the
amount of alum¹ at that cell’s position.
Our “diffusion controller” tool introduces per-cell blocking values in the
simulation, forcing pigment densities to follow a predefined pattern during the
diffusion process. There is no real-world tool that allows this kind of paint
manipulation.
If we introduce a scale factor β_{i,j} at each point where cells exchange
content, we can control the amount of pigment density that is passed from one
cell to a neighbor while retaining conservation of mass:

p^{new}_{i,j} = (α p_{i,j} + (βp)_{i+1,j} + (βp)_{i,j+1} + (βp)_{i−1,j} + (βp)_{i,j−1}) / (β_sum + α)    (8.2)

with β_sum = β_{i+1,j} + β_{i,j+1} + β_{i−1,j} + β_{i,j−1}.
¹ “Alum” is a coating material that makes the canvas more water-resistant.
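A CPU sketch of one blocked Jacobi step of equation (8.2) (hypothetical names; in the thesis this runs as a fragment shader that looks up β in the blocking texture):

```python
import numpy as np

def blocked_diffusion_step(p, beta, nu, dt, cell_area=1.0):
    """Jacobi step with per-cell blocking factors (equation 8.2).

    beta[i, j] in [0, 1]: 0 cancels exchange with that cell entirely,
    1 leaves diffusion unmodified.
    """
    alpha = cell_area / (nu * dt)
    bp = np.pad(beta * p, 1, mode="constant")  # (beta * p) per cell
    b = np.pad(beta, 1, mode="constant")
    neigh_bp = bp[2:, 1:-1] + bp[:-2, 1:-1] + bp[1:-1, 2:] + bp[1:-1, :-2]
    beta_sum = b[2:, 1:-1] + b[:-2, 1:-1] + b[1:-1, 2:] + b[1:-1, :-2]
    return (alpha * p + neigh_bp) / (beta_sum + alpha)
```

Note that with β = 0 everywhere the expression reduces to p, i.e. no exchange at all, and with β = 1 a uniform field is again a fixed point.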
Figure 8.6: (left) Examples of blocking patterns used to “steer” pigment
in the diffusion process. Each row shows the effect of 80%, 90% and 100%
contribution of the pattern to the total blocking value. (right) A
computer-generated example of strokes drawn with the diffusion-controller
brush. The hand-drawn pattern was used to influence pigment diffusion, giving
the strokes their natural look.
A blocking factor β_{i,j} = 0.0 effectively cancels out pigment exchange with
neighboring cells, while β_{i,j} = 1.0 corresponds to no blocking at all. These
values are stored in a texture (the blocking texture) and used by the fragment
shader that implements the diffusion algorithm to look up per-cell blocking
values. Figure 8.6 shows a few examples of blocking patterns we used in
painting with black ink. The second pattern from the top is drawn by hand
and is mainly used in the results because of its natural appearance. Once
the brush tip touches the canvas such a pattern is written into the blocking
texture, locally “steering” the diffusion process in the next simulation steps.
In practice, we combine values of the height field of the canvas with values
from the blocking pattern to create a combined blocking value. This takes into
account the fact that the canvas surface also affects the way pigment diffuses.
The percentages in figure 8.6 indicate how much the pattern contributes to
the total blocking value.
This procedure is able to mimic the blooming effect in the brush’s footprint,
as described in the previous section. To extend this technique to create
similar effects in a stroke while the brush is dragged over the canvas, we
have to make sure the blocking pattern is applied at regular intervals.
Moreover, the pattern has to disappear after some time in order to avoid
interference with neighboring strokes. This issue is handled by assigning each
pattern in the blocking texture an age value: each time step a small value is
subtracted from the cells in the blocking texture, reducing them to zero over
time.
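A minimal CPU sketch of this aging scheme and of combining the pattern with the canvas height field (the names, the decay rate and the combination formula are assumptions for illustration; the thesis only specifies that a small value is subtracted each time step and that the pattern contributes a percentage of the total blocking value):

```python
import numpy as np

def age_patterns(strength, decay=0.02):
    """Fade the stored pattern strengths a little each simulation step so
    that old patterns stop interfering with neighboring strokes."""
    np.clip(strength - decay, 0.0, 1.0, out=strength)

def blocking_value(strength, height, contribution=0.9):
    """Combine the drawn pattern with the canvas height field into a
    per-cell beta: 1 means no blocking, 0 fully blocks exchange.
    The linear blend is a hypothetical choice."""
    pattern_block = contribution * strength + (1.0 - contribution) * height
    return 1.0 - pattern_block
```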
The right image in figure 8.6 demonstrates some strokes drawn in black
ink with the diffusion controller brush.
8.4.3 Masking Fluid
The masking fluid should form a protective, solid layer on top of the canvas,
securing all layers below. Consistent with how this process is performed in
real-life, we introduce a special masking pigment that represents the masking
fluid. While painting with this special kind of pigment, instead of adding
densities to the textures containing pigment data, we mark affected cells in a
mask layer that has no further role in the dynamics phase. The mask layer
is implemented as a texture containing masking values within [0, 1]. We will
use only values of 0 or 1, so either all or none of the applied paint reaches the
canvas.
The passive layers do not participate in the dynamics phase of the simulation, so we only have to consider the effect of the masking fluid on the three
active layers containing water quantities and pigment densities.
Similar to real masking fluid, it is possible to paint on top of it; the paint
just should not reach the canvas. This condition is assured by the ability to
remove all pigment in the masked area when starting a new layer, or when
removing the masking fluid.
When starting a new layer, all water quantities and pigment densities in
the active layers covering the mask pigment are first removed. The same
procedure is followed when removing the masking fluid.
The mask texture is drawn on top of the canvas; its strokes are distinguished
from strokes drawn with regular pigment by decorating them with white crosses
on red circles, as indicated in figure 8.7.
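The bookkeeping described above can be sketched as follows (a minimal NumPy sketch with hypothetical names; the actual implementation stores the mask in a texture and performs the clean-up on the GPU):

```python
import numpy as np

class MaskedCanvas:
    """Minimal sketch of the masking-fluid bookkeeping.

    mask[i, j] is 0 or 1; 1 means the cell is protected by masking fluid.
    water/pigment stand in for the active layers of the canvas model.
    """
    def __init__(self, shape):
        self.mask = np.zeros(shape)
        self.water = np.zeros(shape)
        self.pigment = np.zeros(shape)

    def apply_masking_fluid(self, cells):
        # mark cells instead of adding pigment densities
        self.mask[cells] = 1.0

    def start_new_layer(self):
        # remove water and pigment covering the mask before fixing the layer
        covered = self.mask > 0.0
        self.water[covered] = 0.0
        self.pigment[covered] = 0.0

    def remove_masking_fluid(self):
        self.start_new_layer()  # same clean-up, then clear the mask
        self.mask[:] = 0.0
```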
8.4.4 Selective Eraser
A more versatile tool than the classic eraser that wipes paint from the canvas,
found in most painting applications, is the “selective eraser”. It can reduce
Figure 8.7: Using the masking fluid, which was previously applied by a brush
(left), to cover the canvas beneath. To distinguish the masking fluid from
regular pigment, it is decorated with a distinctive pattern. After removing the
masking fluid the canvas remains untouched (right).
or remove densities of a specific pigment from a mixture, both from the fluid
layer and the surface layer. Similarly, it is possible to reduce or remove water
quantities from the fluid layer and the capillary layer.
Because all simulation data is stored in texture objects, with values within
[0, 1] for each color channel, the implementation of this tool translates to a
simple fragment shader that can be customized for the manipulation of one or
more specific color channels of the data texture. The fragment shader code is
shown in table 8.2.
Despite its simplicity, the selective eraser has proved its usefulness as an
error-correction tool.
8.4.5 Paint Profile
As shown in chapter 6, our paint system is capable of simulating
several related paint media: watercolor, gouache and Oriental ink. The
difference between the painting techniques is the result of an empirical
process in which every simulation parameter is chosen to imitate that
technique as closely as possible.
For example, Oriental ink requires a highly textured and absorbent canvas,
and the dense carbon particles are so heavy they can be carried inside the
canvas. Gouache on the other hand resembles watercolor but is applied in
thick opaque layers on the canvas, creating flat color areas. Its palette contains
fragout_float main(vf30 IN,
                   uniform float2      position,      // brush position
                   uniform float2      prev_position, // previous brush position
                   uniform float       radius,        // brush radius
                   uniform float       prev_radius,   // previous brush radius
                   uniform float4      fraction,      // fraction to be removed
                   uniform samplerRECT data)          // data texture
{
    fragout_float OUT;

    // fragment position within current brush footprint
    float2 pos = position - IN.TEX0.xy;
    // fragment position within previous brush footprint
    float2 prev_pos = prev_position - IN.TEX0.xy;

    // copy current value
    OUT.col = h4texRECT(data, (half2)IN.TEX0.xy);

    // don't affect cells from the previous brush action;
    // cell has to be within circle of current brush action
    if ( !inCircle(prev_pos, (prev_radius < radius) ? prev_radius : radius)
      && inCircle(pos, radius))
    {
        // gaussian splat
        OUT.col = clamp(
            OUT.col - fraction * gauss(pos, radius),
            0.0f.xxxx,
            1.0f.xxxx );
    }
    return OUT;
}
Table 8.2: Cg code for the “selective eraser” tool for a brush with a circular
footprint. Within the circle of the brush’s footprint, it subtracts an amount
fraction from each fragment of the selected data texture.
Figure 8.8: A computer-generated image in Oriental ink, based on the original
work of Marie-Anne Bonneterre. The distinct patterns on the flags are caused
by usage of the textured tissue in combination with the diffusion controller.
The black background was scanned from a real painting made with black ink.
a white pigment, which is not present in a watercolor palette. We therefore
have a large number of simulation settings that relate to the canvas, the
fluid flow, the palette and the brush model.
To cope with these different environments we introduce the notion of a
paint profile, which combines all such properties and relates them to a
specific profile identifier. As a result, we can easily switch to a different
paint medium with a single mouse click, and we can even mix different styles
in a single painting. At present our system supports three profiles:
watercolor, gouache and diffuse Oriental ink.
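A paint profile essentially maps an identifier to a parameter set. A minimal sketch (the parameter names and values below are invented for illustration; they are not the actual WaterVerve settings):

```python
# Hypothetical paint profiles: all simulation settings for a medium are
# grouped under a single identifier, so switching media is one lookup.
PROFILES = {
    "watercolor": {"canvas_absorbency": 0.4, "diffusion_rate": 0.5,
                   "opacity": 0.2, "has_white_pigment": False},
    "gouache": {"canvas_absorbency": 0.3, "diffusion_rate": 0.2,
                "opacity": 0.9, "has_white_pigment": True},
    "oriental_ink": {"canvas_absorbency": 0.9, "diffusion_rate": 0.8,
                     "opacity": 0.7, "has_white_pigment": False},
}

def select_profile(name):
    """Return all simulation settings for one medium; mixing styles in a
    single painting amounts to switching profiles between strokes."""
    return PROFILES[name]
```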
Figure 8.9: Applying masking fluid to “oranges”, a watercolor work in
progress (left), in order to place a message in a part of the orange that is
darkened (right).
8.5 Results
The computer-generated images presented in this chapter were drawn interactively and in real-time using the same hardware setup as described in section
6.3.
Figure 8.8, a painting in Oriental ink based on the work of Marie-Anne
Bonneterre, demonstrates the use of both the textured tissue and the diffusion
controller. The black background was taken from a real painting made with
black ink.
The two results in figure 8.9 show how the masking fluid can be used to
protect existing layers while painting a darker glaze on top.
8.6 Discussion
The tools proposed in this last chapter extended our painting environment
with methods of manipulating paint in a different way than with the familiar
brush. The ambition of extensions like the textured tissue and the masking
fluid was to recreate the effects that can be produced by their real-world
counterparts. The other tools are rather “non-physical” additions: the
diffusion controller enables customization of the patterns that arise during
diffusion while still retaining a natural look, and the selective eraser is a
useful and versatile error-correction tool. Finally, the introduction of a
“paint profile” enables mixing different media in a single painting.
The interactive paint system could benefit from many more additions. In
chapter 9 we suggest some future work on this matter.
Chapter 9: Conclusions
9.1 Summary
Our goal in this dissertation was the creation of an interactive paint system
that adopts physically-based techniques to produce life-like images with
watery paint media, in real time.
An extensible architecture for physically-based simulations The lightweight
component framework we first introduced curbs the complexity of dynamically
composing physically-based applications. As these kinds of applications
typically have high computational demands, C++ was used to implement both the
framework core layer and the components. The components are wired with
declarative scripting in XML, allowing for a uniform expression of the
system’s composition out of components, along with their coupling. Three
applications were built with the framework: two virtual environments
demonstrating the abstraction of 3D objects and their interactions, and the
virtual paint system that is discussed throughout this dissertation. The
design of the paint system relies on fairly coarse-grained components,
targeting optimal performance.
A three-layer canvas model The canvas model proposed in this dissertation
targets the simulation of watery paint media. It consists of three layers,
each a 2D grid of cells, in which different rules and algorithms govern paint
behavior. Algorithms based on fluid dynamics, with some necessary
modifications, effectively describe the movement of water and pigment in the
top fluid layer. A surface layer captures pigment particles that settle in
the irregularities of the canvas surface; this procedure is influenced by
several pigment attributes. The capillary layer, which embodies the internal
canvas structure, proves useful to expand a stroke across its original
boundaries. With this model, several typical watery-paint effects can be
reproduced, as was demonstrated in the subsequent chapters that discussed two
different implementations.
A first proof-of-concept implementation was presented in the form of a
parallel simulation that uses a high-performance cluster (HPC). The canvas
is partitioned into smaller subcanvases that are simulated and rendered
separately by remote processing units. Although impractical, this software
setup was required at the time to retain real-time simulation rates. We also
concluded that the performance is related to how, and more importantly where,
a user paints on the canvas.
The second implementation of the canvas model requires just a desktop PC
with fairly recent graphics hardware. The programmable (and parallel) nature
of graphics hardware allowed us to move the computational complexity of the
simulation to the GPU, as a set of efficient fragment shaders. Gouache
and Oriental ink can also be categorized as watery paint media, so with some
modifications the canvas model was able to simulate these as well. A basic
user interface shows the virtual paint environment, the palette and a tool
for mixing pigment, and is joined with a tablet interface that serves as an
input device. We noted that the main issue of the simulation is the
resolution of the generated images, which currently show “blocky” effects.
Brush model A 3D deformable brush model was added to the environment
in order to create complex stroke shapes in an intuitive way. It uses energy
optimization to simulate the behavior of one or more kinematic chains that
form the tuft spines. Canvas friction was also taken into account. The tuft
geometry consists of a 3D polygon mesh or polyline mesh that is embedded in a
free-form deformation lattice. The brush’s implementation relies on the CPU
for optimization, and the GPU for free-form deformation. We showed that
the two brushes we designed, a round brush and a flat brush, in combination
with a tablet interface, can effectively create brush strokes with a variety of
shapes. We also pointed out that some improvements could be made to the
model, including bi-directional paint transfer and plasticity.
Digital artistic tools We complemented the brush with several digital tools
for paint manipulation. Inspired by real-life tools, the textured tissue was
created to remove some of the wet paint and water from the canvas surface,
leaving a distinct imprint in the paint, and the masking fluid allows the
artist to paint a protective layer on top of the canvas. The diffusion
controller makes it possible to control the pattern of the branchy edges that
appear when painting with the wet-on-wet technique, or when using a highly
absorbent canvas. The selective eraser tool can be customized to remove
specific material from the canvas. Finally, as different paint media require
different simulation parameters, the paint profile was created to collect
such parameters under a single identifier.
9.2 Future Directions
This section suggests some topics for future research.
9.2.1 High Resolution Images
We already pointed out that the resolution of the generated output images is
an issue in the current implementation of WaterVerve (figure 9.1(a)). A
simulation grid with a resolution of at most 800 × 600 was used throughout
this dissertation. The resolution of the images created with this grid can be
enhanced slightly using filtering techniques, but only at the cost of an
overall blurring effect and the loss of sharp stroke edges.
Clearly, other methods are needed if we want to produce high-quality artwork
that can, for example, be printed in a magazine or magnified on screen
without loss of detail.
To some extent, the recent work of Chu et al. already addresses this
issue [Chu 05]. Their “boundary trimming” technique reconstructs an implicit
curve that represents the stroke’s edge, which is used to “trim” away pixels
at higher resolutions. The effect is shown in figure 9.1(b). However, this
approach does not prevent the texture of the stroke itself from retaining its
blurry appearance. Other solutions point towards the texture-synthesis
literature.
9.2.2 Painterly Animation
In the introductory chapter we suggested the possibility of producing
animations in a painterly style. Current methods for creating painterly
animations are mostly image-based or video-based: they analyze input images
in order to reconstruct information on how a painter could have drawn them.
In the case of a digital interactive application, we have this information
readily available if the system logs the user’s input.
The left part of figure 9.1 shows a collection of Hermite curves and their
control points that were recorded during an artist’s interactive painting
session. Additionally, at each control point a time stamp and information on
tilt, pressure, pigment and water is stored. This enables an automatic
painting server to replay the session. With a keyframing tool like
CreaToon [CreaToon 06] the curves can be manipulated to form a full
animation, of which each frame is replayed by the painting server. The right
part of figure 9.1 shows six frames from a full animation of fifty frames. It
took a few hours to create even this very short animation, because each
stroke is drawn separately and the drying time in between strokes is
included. Of course, the simulation clock can be accelerated, but clearly
other techniques are needed to make this approach feasible.
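The logged session data described here can be sketched as a simple record structure (field names hypothetical; the pigment and water information is omitted for brevity):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlPoint:
    """One logged control point of a Hermite curve: a time stamp plus
    stylus information, per the description in the text."""
    t: float         # time stamp of the sample
    x: float
    y: float
    tilt: float
    pressure: float

@dataclass
class Stroke:
    points: List[ControlPoint] = field(default_factory=list)

def replay_order(strokes):
    """Order logged strokes by the time stamp of their first control point,
    as a painting server replaying a session would."""
    return sorted(strokes, key=lambda s: s.points[0].t)
```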
9.2.3 Additional Artistic Tools
The set of tools used in real-world painting is not at all limited to the ones
for which we created digital counterparts in chapter 8. As pointed out at that
time, there are no predefined rules telling us how painting must be performed.
All that matters is that the artist can achieve the effect he/she had in mind.
Apart from the real-life tools for manipulating paint mentioned in the same
chapter, there is also a tendency to add various kinds of material to the
drying paint in order to create distinct effects, as shown in the close-ups
in figure 9.2. These materials affect the behavior of the paint, which could
be incorporated into the physically-based algorithms presented in this
dissertation.
The effects of gravity on paint movement are currently ignored; the canvas is
always considered to lie on a flat, horizontal surface. Tilting or even
deforming the canvas would result in dripping-paint effects that could be
incorporated into the simulation.
9.2.4 Brush Improvements
Several straightforward improvements to the brush model were already
presented at the end of chapter 7. Bi-directional paint transfer enables
on-canvas mixing of paint and the possibility of creating strokes that do not
consist of a uniform pigment mix. The inclusion of brush plasticity effects
would improve the overall realism of the brush behavior.
The current brush geometry describes a tuft with a flawless appearance, with
bristles that are perfectly straight. Incorporating the imperfections found
in real brushes into the brush model would have a considerable impact on the
shape of a stroke.
Finally, besides behavior and appearance, the brush control itself can be
enhanced using 6DOF input devices, optionally providing force feedback.
(a) An image from our system, showing a “blocky” effect. (b) Up-scaling
resolution using “boundary trimming”, from the work of Chu et al. [Chu 05].
Figure 9.1: Painterly animation using data from an interactive painting
session. Curves and control points that are logged (left) are used in
combination with a keyframing tool and a painting server that replays user
input to render animation frames (right).
Figure 9.2: Close-ups showing the (real-world) effects of different materials
on paint behavior: (a) alcohol, (b) salt, (c) water.
Appendices
Appendix A: Scientific Contributions and Publications
The following list of publications, presented at international scientific
conferences, contains work that is part of this dissertation:
[Van Laerhoven 02] Tom Van Laerhoven & Frank Van Reeth. The pLab project:
an extensible architecture for physically-based simulations. In Proceedings of
Spring Conference on Computer Graphics 2002, pages 129–135, Budmerice,
SL, April 2002
[Van Laerhoven 03] Tom Van Laerhoven, Chris Raymaekers & Frank Van Reeth.
Generalized object interactions in a component based simulation environment.
Journal of WSCG 2003, vol. 11, no. 1, pages 129–135, February 2003
[Van Laerhoven 04b] Tom Van Laerhoven, Jori Liesenborgs & Frank Van Reeth.
Real-time watercolor painting on a distributed paper model. In Proceedings of
Computer Graphics International 2004, pages 640–643. IEEE Computer Society
Press, June 2004
[Van Laerhoven 05b] Tom Van Laerhoven & Frank Van Reeth. Real-time simulation of thin paint media. In Conference Abstracts and Applications of ACM
SIGGRAPH 2005, July 2005
[Van Laerhoven 05c] Tom Van Laerhoven & Frank Van Reeth. Real-time simulation of watery paint. In Journal of Computer Animation and Virtual Worlds
(Special Issue CASA 2005), volume 16:3–4, pages 429–439. J. Wiley & Sons,
Ltd., October 2005
[Beets 06] Koen Beets, Tom Van Laerhoven & Frank Van Reeth. Introducing artistic
tools in an interactive paint system. In Proceedings of WSCG 2006, volume 14,
pages 47–54, Plzen, CZ, January 2006. UNION Agency–Science Press
A few technical reports accompanied the publications with some additional
background information:
[Van Laerhoven 04a] Tom Van Laerhoven, Jori Liesenborgs & Frank Van Reeth.
A paper model for real-time watercolor simulation. Technical Report TR-LUCEDM-0403, EDM/LUC, Diepenbeek, Belgium, 2004
[Van Laerhoven 05a] Tom Van Laerhoven, Koen Beets & Frank Van Reeth. Introducing artistic tools in an interactive paint system. Technical Report TR-UHEDM-0501, EDM/UH, Diepenbeek, Belgium, 2005
The following work is not part of this dissertation:
[Luyten 02] Kris Luyten, Tom Van Laerhoven, Karin Coninx & Frank Van Reeth.
Specifying User Interfaces for Runtime Modal Independent Migration. In Proceedings of CADUI 2002 International Conference on Computer-Aided Design
of User Interfaces, pages 238–294, Valenciennes, FR, May 2002
[Luyten 03] Kris Luyten, Tom Van Laerhoven, Karin Coninx & Frank Van Reeth.
Runtime transformations for modal independent user interface migration. Interacting with Computers, vol. 15, no. 3, pages 329–347, 2003
Appendix B: Summary (Dutch Summary)
B.1 Introduction
Thanks to the advent of the computer, it became possible to create images in
a completely different way than had been done for thousands of years before.
The extensive efforts that have since been made to create a realistic digital
version of the painting process are not surprising, since creative minds of
all ages have been fascinated by the creation of images for as long as anyone
can remember. It is, after all, a very accessible yet challenging activity.
The same audience can be found among digital painting applications. Bob Ross
in particular popularized painting through his educational TV show. Digital
“art” in general has received much attention in recent years through films
such as Shrek and Toy Story.
Problem statement Until now, artists have ignored applications that try to
imitate the painting process because they do not produce realistic results.
Nor do these applications offer any extra flexibility compared to the real
world.
An extensive range of applications that imitate the painting process has
already been proposed in the literature. It turns out, however, to be no easy
task to pursue both realism and real-time, interactive properties. Current
applications aim at only one of these goals; in this dissertation we
emphasize both.
Motivation A digital counterpart of the painting process as it happens in the
real world has a whole number of advantages:
• the material is free, durable and can easily be adapted to the needs of
the user;
• painting becomes even more accessible;
• creating images is less cumbersome;
• it is possible to save or load intermediate results, to edit results and
to make perfect copies;
• digital tools allow all kinds of “non-physical” operations, as well as
control over aspects such as drying time, correcting mistakes, removing
paint and water, . . . ;
• the creation of paint animations is a possible goal.
Contributions This dissertation consists of two parts. In the first part we
describe a component-based code framework that allows an application to be
composed dynamically by means of an XML document.
In the second part of this dissertation we go deeper into the details of the
paint application. We first develop a new canvas model for watery paint
(chapter 4), followed by two implementations: a parallel simulation that uses
a high-performance cluster (HPC) and an implementation that relies on recent
programmable graphics hardware (chapters 5 and 6). Several artistic tools,
which allow paint to be manipulated in a way other than with the classic
brush model, are proposed in chapter 8. Finally, a 3D deformable brush model
is added in chapter 7, after which we close with the conclusions and some
suggestions for future work in chapter 9.
B.2 An Extensible Component-based Framework
Introduction The use of physically-based simulations is widespread in domains
such as medical applications, scientific visualization, the creation of
(non-)photorealistic images, computer games and the film industry. They offer
the possibility of producing realistic and reliable results. Unfortunately,
physical simulations are usually characterized by their computationally heavy
requirements, and applications that want to use such simulations often have a
complex structure. If the architecture is designed monolithically, the
software quickly becomes difficult to maintain and extend. Furthermore, the
domain of physically-based simulations offers an extensive collection of
toolkits, each designed to solve one very specific problem. A good example
are the software libraries whose task is to handle collisions between objects
in a 3D environment. A flexible and extensible software framework is required
to keep the complexity of combining and exchanging such toolkits under
control.
Component-based development (CBD) offers an elegant solution to these
problems. The advantages of CBD, apart from the possibility of reusing
components, are mainly found in software engineering. Since the communication
between components is made explicit, it becomes easier to reduce and manage
the dependencies between components. Tasks such as debugging, maintenance and
the investigation of performance problems are also made easier.
We describe an architecture that uses components to encapsulate functionality
and to offer it to an application via plug-ins or shared software libraries.
The components can be viewed as building blocks that are connected to each
other via a descriptive XML document.
XML and component models The use of XML in component models is not new.
Technologies such as SOAP and XML-RPC are already used by numerous
applications. Within the domain of computer graphics, VRML97 and X3D are also
widespread. An extensive overview of these technologies is described in the
work of Ponder [Ponder 04].
Architecture Figure 2.1 shows an overview of the architecture of the
framework. The base layer mainly provides supporting services for the
extension layer and the application layer above it. Besides basic types and
interfaces, it contains a simple reflection mechanism and storage facilities,
and it takes care of the communication between the components. The reflection
mechanism makes it possible for a component to describe itself. In our model
we use a service for this purpose: an expression resembling the form of a
file name, which is associated with the functionality of a particular
component. A service provider is a supplier of services. The services are
kept in a central repository where they can be requested by other components.
The plug-ins deliver prototypes of a particular component type that can be
instantiated and parameterized.
The communication between the components is connection-oriented and happens
by means of commands, special service providers that intercept a function
call and pass it on to the target component.
The instantiation and parameterization as well as the coupling of the
components are described by an XML document.
Coupling components with XML Using XML to describe the dynamic coupling of
the components, as opposed to the static coupling of many traditional
frameworks, has the advantage that the composition of the application is
described in a uniform way. In addition, we have all the advantages of XML at
our disposal (an intuitive, readable and extensible format that delivers
reusable data and is flexible and portable).
As soon as a plug-in has been loaded, a number of component prototypes are
available. By providing the right XML argument with a component tag, a clone
of the prototype can be made. XML also allows the set of tags to be extended,
for example with the XGL specification to describe OpenGL elements.
The coupling of the instances happens, as mentioned before, by means of
commands that connect the required interface of one component to the provided
interface of another component. Furthermore, it is also possible to add
Python scripts to the XML and to couple them to the required interface of a
component.
Component-based applications The framework is used to dynamically compose
three component-based applications. The first application consists of a
simple simulator for 3D objects that are not restricted in their freedom of
movement. A 3D object is represented abstractly as a set of components:
Shape, State and SimObject. Coupling with the SimCore component ensures that
the component can participate in the simulation.
The second application builds on the previous one and adds 3D object
interactions. The interactions themselves are in turn represented by a set of
components. An example scene contains three interaction types: collision
detection/handling, dependent angular velocities and procedural scripting.
The last application is the interactive painting application that is
described further in this dissertation. Here we use relatively large
components, each encompassing a lot of functionality, in order to preserve
the frame-critical properties of the application and to limit the overhead
that the coupling of the components entails.
B.3 Digital Paint Applications
In contrast to the domain of photorealistic rendering, where the goal is to obtain results that are as realistic as possible, non-photorealistic techniques seek to depict images in ways other than a perfect imitation of the real world. Such images can be created purely for their artistic value, or as a form of communication. In this dissertation we focus mainly on the first kind, which belongs to the domain of "artistic rendering" (AR). Detailed overviews of NPR research can be found in the work of Colomosse [Colomosse 04], Baxter [Baxter 04b] and Smith [Smith 01]. Here we concentrate primarily on previous research into interactive digital paint systems.
Interactive paint applications The history of interactive paint applications begins with the research of Richard Shoup and Alvy Ray Smith and their pioneering applications (SuperPaint, Paint, BigPaint, Paint3 and BigPaint3), which can be considered the forerunners of modern digital painting packages [Smith 01, Shoup 01].
Greene's drawing prism and Strassmann's hairy brush took the first steps toward techniques aiming for more realistic results [Greene 85, Strassmann 86]. The lack of computational power, however, limited the behavior of the brush and the paint, so that images were created only with passive imprints on a surface. Small was the first to try to equip the canvas with algorithms that actively simulate the water and paint [Small 90], using cellular automata.
Cockshott's work describes a similar model, "Wet & Sticky" [Cockshott 91], and steered the research toward physically-based methods. Curtis et al. created a non-interactive application that converts existing images into a watercolor version using algorithms from fluid dynamics [Curtis 97]. Baxter et al. were the first to describe an interactive application for painting with oil paint. They developed three applications that each make a different trade-off between realism and speed: dAb, IMPaSTo and Stokes Paint [Baxter 04b].
A large part of the research on paint applications has been done in the context of Oriental ink. Although Oriental ink and Western paint have similar characteristics, there are a number of differences in the brush models, the canvas and the paint itself. The current state of the art in Oriental ink is the application of Chu et al.: "MoXi" [Chu 05].
Some commercial applications also support the creation of images with watercolor. However, they all contain a very simple underlying model that cannot reproduce the complex behavior of watery paint. Only Corel Painter IX uses the concept of "wet spots" on the canvas, which makes the "wet-on-wet" painting technique possible.
B.4 A Layered Canvas Model
This section describes a canvas model that consists of three layers. In the following sections we discuss two implementations: one that runs a parallel simulation on a high-performance cluster (HPC), and a second that uses programmable graphics hardware. For now we restrict ourselves to the simulation of watercolor.
The behavior of watercolor Watercolor exhibits very complex behavior, which makes it necessary to include specific algorithms in the model. These algorithms must be able to run in real time. A simple experiment in which watercolor is applied to the canvas shows that roughly three distinct stages can be distinguished in the paint:
• water and pigment particles in a thin layer on top of the canvas,
• pigment particles that have been deposited on the canvas surface,
• water that has been absorbed by the canvas.
This observation motivates a model with three layers: the fluid layer, the surface layer and the capillary layer, where each layer consists of a 2D grid of cells. A similar subdivision was made earlier by Curtis et al. [Curtis 97]. Figure 4.1 shows a schematic picture of the model.
Goals With the canvas model we try to obtain the following effects, which are typical of watery paint:
Translucent wash Overlapping paint layers whose transparency makes them influence each other's color intensities.
Dark edges The effect of pigment particles moving toward the edge of a blot during the drying process.
Feathery edges Complex patterns that arise at the edge of a brush stroke when painting wet-on-wet or on a strongly absorbent canvas.
Texture effects Effects caused by a combination of pigment properties and irregularities in the canvas surface.
Backrun effects A fractal-like effect that arises when a recent brush stroke comes into contact with a wet spot on the canvas.
Fluid dynamics in paint applications Using algorithms from fluid dynamics to approximate the behavior of watery paint is an obvious choice, and has been applied before in the literature. Cockshott's model mainly uses its own empirically determined algorithms, while the work of Curtis and Baxter relies on the Navier-Stokes equations. Stable implementations of these equations were described by Stam [Stam 99].
"MoXi", on the other hand, uses a more recent technique for its Oriental ink simulation [Chu 05]: the lattice Boltzmann equations. This technique has its origins in cellular automata and is therefore very well suited for implementation on graphics hardware.
The fluid layer In this layer the behavior of fluid and pigment is governed by an adapted version of the Navier-Stokes equations. The following considerations are important:
• The solution procedure must be both fast and stable.
• The fluid must stay within the boundaries of a brush stroke.
• We want the ability to impose per-cell constraints on the contents.
• The algorithms must respect the law of conservation of mass.
Besides an amount of water and a pigment concentration, each cell in the fluid layer also holds information about the velocity field at that location. Per time step the following procedures are executed:
• Add velocity values, water amounts and pigment concentrations to the cells.
• Compute the new velocity field v.
• Compute the new scalar field of water amounts w.
• Compute the new scalar field of pigment concentrations, for all pigments p.
We mainly use the method proposed by Stam [Stam 99], with a few modifications. When computing the new velocity field we take differences in water height into account, in order to obtain the "dark edges" effect. The advection (transport) step for water amounts uses a custom technique based on a cellular automaton: each cell exchanges an amount of water with its neighbors, depending on the value of the velocity field at that position. In addition, we add a procedure that simulates the evaporation of water, where water evaporates faster at the edges of a brush stroke than elsewhere.
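The cellular-automaton advection of water amounts can be illustrated with a one-dimensional sketch. The transfer rule below is a simplified stand-in for the dissertation's 2D rule: each cell hands a velocity-dependent fraction of its water to the neighbor the local velocity points at, so mass is conserved by construction.

```python
def advect_water(water, velocity):
    """One cellular-automaton advection step on a 1D row of cells.
    Each cell passes a fraction of its water to the neighbour its local
    velocity points at (illustrative; the real model works on a 2D grid)."""
    new = list(water)
    n = len(water)
    for i in range(n):
        v = velocity[i]
        j = i + (1 if v > 0 else -1 if v < 0 else 0)
        if 0 <= j < n and j != i:
            moved = min(abs(v), 1.0) * water[i]  # clamp the moved fraction
            new[i] -= moved
            new[j] += moved
    return new

# half the water in the middle cell drifts right with velocity 0.5
row = advect_water([0.0, 1.0, 0.0], [0.0, 0.5, 0.0])
```

Because every transfer is a matched subtraction and addition, the total water amount is preserved exactly, which matches the conservation requirement listed above.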
The surface layer The surface layer contains pigment that has settled into the irregularities of the canvas surface. Using a few empirical formulas we create pigment behavior that depends on three pigment properties:
Pigment stickiness Once pigment has been deposited on the canvas surface, it tends to stay stuck there.
Pigment granulation Pigment particles tend to clump together, which in combination with the canvas texture produces a grainy structure.
Pigment weight The mass of the pigment influences the speed at which pigment is deposited on the surface.
With these pigment properties it is possible to influence the behavior of pigment and its interaction with the canvas.
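A minimal sketch of how such empirical rules might look for a single cell. The formulas below are illustrative stand-ins, not the dissertation's actual expressions: heavier pigment settles faster, rough canvas spots capture more, and low stickiness lets a little pigment re-suspend.

```python
def deposit_pigment(fluid_pigment, surface_pigment,
                    weight, stickiness, canvas_height):
    """Move pigment between the fluid and surface layers of one cell.
    All coefficients are invented for illustration."""
    settled = weight * canvas_height * fluid_pigment     # deposition this step
    lifted = (1.0 - stickiness) * 0.1 * surface_pigment  # small re-suspension
    return (fluid_pigment - settled + lifted,
            surface_pigment + settled - lifted)

# a heavy, sticky pigment over a rough canvas spot settles quickly
f, s = deposit_pigment(1.0, 0.0, weight=0.5, stickiness=0.9, canvas_height=0.8)
```

Tuning the three properties per pigment type is what produces the grainy, texture-dependent looks described above.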
The capillary layer The capillary layer represents the internal structure of the canvas. In the simulation its task is to transport absorbed water, and it makes it possible for a brush stroke to expand. Each cell in this layer has a certain water capacity, assigned by a texture-generation method. Figure 4.5 shows a few textures generated with a procedural technique [Worley 96].
Per time step a certain amount of water is absorbed into the canvas and transported by a diffusion algorithm. A brush stroke can expand beyond its original boundaries once the underlying cells in the capillary layer contain a certain amount of water. The amount of water determines the cell status (active, inactive, dry, wet or soggy).
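The capacity-limited diffusion and the water-based cell states can be sketched in one dimension; the clamping rule and the state thresholds below are invented for illustration.

```python
def diffuse_capillary(water, capacity, rate=0.25):
    """One diffusion step in a 1D capillary layer: neighbouring cells
    exchange water toward equal levels, but no cell may exceed the
    capacity its canvas texture assigned to it (illustrative sketch)."""
    new = list(water)
    for i in range(len(water) - 1):
        flow = rate * (water[i] - water[i + 1])
        # clamp so the receiving cell never overflows its capacity
        if flow > 0:
            flow = min(flow, capacity[i + 1] - new[i + 1])
        else:
            flow = max(flow, new[i] - capacity[i])
        new[i] -= flow
        new[i + 1] += flow
    return new

def cell_state(w, capacity):
    """Classify a cell by its water content (simplified state set)."""
    if w <= 0.0:
        return "dry"
    return "soggy" if w >= capacity else "wet"

row = diffuse_capillary([1.0, 0.0], [1.0, 1.0])
```

In the full model, a stroke is allowed to expand onto a neighboring cell once that cell's state crosses the wetness threshold.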
B.5 A Distributed Canvas Model for Painting with Watercolor
The first implementation of the canvas model was deliberately kept simple, with limited functionality: a basic brush model, support for watercolor only, and a straightforward color scheme. The implementation is a prototype that serves as a proof of concept to demonstrate the feasibility of the canvas model.
We assume that every simulation cell is mapped to one pixel in the final result. The simulation demands a great deal of computational power, which forces us to partition and distribute the canvas, so that each part can be simulated separately by a different processor. This approach is workable because the system is built as a cellular automaton.
Distributed model The hardware setup is shown in figure 5.1 and consists of a high-performance cluster (HPC) whose computers are connected by a gigabit network.
A first step is to partition the canvas into several subgrids, or subcanvases. Since we use cellular automata, each subgrid needs an extra row of cells along its border. With this information a subgrid can be treated as a full canvas and thus simulated separately. Each subcanvas is sent to a node of the HPC, together with palette and brush information. The activities during a single time step are shown in figure 5.3. During this step each subcanvas exchanges its border cells with the neighboring subcanvases, after which the simulation step is executed using the algorithms from the previous section. Finally, the results are also rendered on the HPC node (in software) and sent back to the main application, which collects and draws all results.
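Splitting the canvas into subgrids padded with an extra border ("ghost") row can be sketched over a list of rows; a real run would ship each padded subcanvas to an MPI node and re-exchange the border rows every time step.

```python
def split_with_ghosts(canvas, parts):
    """Split a list of rows into `parts` subcanvases, each padded with a
    copy of its neighbour's border row: the extra row a cellular
    automaton needs to update its own border cells (illustrative
    sketch of the HPC partitioning; assumes len(canvas) % parts == 0)."""
    n = len(canvas)
    size = n // parts
    subs = []
    for p in range(parts):
        lo, hi = p * size, (p + 1) * size
        top = canvas[lo - 1:lo] if lo > 0 else []       # ghost row above
        bottom = canvas[hi:hi + 1] if hi < n else []    # ghost row below
        subs.append(top + canvas[lo:hi] + bottom)
    return subs

subs = split_with_ghosts([[0], [1], [2], [3]], 2)
```

After each local simulation step, every node would overwrite its ghost rows with its neighbors' freshly computed border rows before stepping again.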
Brush model The brush model plays a limited role at this point and was therefore deliberately kept simple. The imprint on the canvas is determined by a disc of cells. We do have to make sure that when the mouse moves, the cells in between are interpolated, and that we keep track of which cells have already been assigned. Brush operations are each sent to the corresponding subcanvas. For cells lying on the border of a subcanvas, the information is sent to all subcanvases involved.
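Interpolating the disc imprints between two sampled input positions keeps fast mouse moves from leaving gaps in the stroke; a sketch with an invented spacing parameter:

```python
import math

def stamp_positions(p0, p1, spacing=1.0):
    """Interpolate brush-stamp centres between two sampled input
    positions so fast mouse moves leave no gaps (illustrative sketch;
    `spacing` controls the distance between consecutive stamps)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dist = math.hypot(dx, dy)
    steps = max(1, int(dist / spacing))
    return [(p0[0] + dx * t / steps, p0[1] + dy * t / steps)
            for t in range(steps + 1)]

pts = stamp_positions((0.0, 0.0), (4.0, 0.0), spacing=1.0)
```

The bookkeeping of already-assigned cells then prevents a cell covered by two overlapping stamps from receiving paint twice in the same stroke segment.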
Implementation The implementation uses LAM/MPI, a C++ software library that allows an application to be executed in parallel on an HPC. A few obvious optimizations were added, such as keeping an "active" list of cells and a list of cells that need to be displayed.
The results show that the distributed model achieves a considerably higher frame rate than a (software) simulation on just a single computer. It is also clear, however, that the performance of the implementation depends on how, and especially where, one paints on the canvas. The next section describes an alternative implementation that uses recent graphics hardware.
B.6 WaterVerve: A Simulation Environment for Creating Images with Watery Paint
In this section we describe a second implementation of the canvas model: "WaterVerve". It uses programmable graphics hardware. WaterVerve is the first application that can reproduce the complex properties of watercolor in real time on a single computer. It uses the Kubelka-Munk color model, and besides watercolor it can also simulate gouache and Oriental ink.
Graphics hardware implementation Nowadays graphics hardware is used for much more than generating images. In the work of Harris et al., for example, a cellular automaton for a physically-based cloud simulation is computed on the graphics card. He also observes that the graphics card is extremely well suited to support any simulation that uses a grid structure. Our canvas model also uses a grid structure, and can therefore exploit the graphics hardware.
Table ?? shows the data needed for the implementation, in the form of textures. Each simulation step of the canvas model was translated into a fragment shader in the Cg programming language. The simulation was further optimized with an overlay grid, which ensures that during a time step not the entire texture has to be recomputed, but only the part manipulated by the brush.
The results are visualized with the Kubelka-Munk color model. This part also uses graphics hardware, by means of a fragment shader in Cg. Such a model produces more faithful colors than an RGB model, and is able to compose several paint layers in a realistic way.
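Kubelka-Munk layer composition follows standard equations (as given, for example, by Curtis et al. [Curtis 97]): a top layer with reflectance R1 and transmittance T1 over a layer with reflectance R2 and transmittance T2 composes as R = R1 + T1^2 * R2 / (1 - R1*R2) and T = T1 * T2 / (1 - R1*R2). A scalar sketch (the real shader evaluates this per wavelength sample):

```python
def km_composite(R1, T1, R2, T2):
    """Optically compose two paint layers with the Kubelka-Munk layer
    equations: layer 1 (reflectance R1, transmittance T1) lies on top
    of layer 2. Scalar version of the per-wavelength computation."""
    denom = 1.0 - R1 * R2        # accounts for inter-reflections
    R = R1 + T1 * T1 * R2 / denom
    T = T1 * T2 / denom
    return R, T

# a semi-transparent dark wash over a bright underlayer
R, T = km_composite(0.2, 0.5, 0.8, 0.1)
```

Iterating this pairwise composition from the bottom layer up yields the final reflectance of an arbitrary stack of washes, which is exactly what the translucent-wash effect requires.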
Watery paint All results were produced with a GeForce FX 5950 graphics card on a simulation grid of dimension 800 × 600, and an overlay grid of dimension 32 × 32. The frame rate stays at 20 fps at all times, unless the user paints a large area.
Figure 6.1 shows a number of watercolor effects that can be produced with the model. Figure 6.3 shows a number of watercolor images.
Besides watercolor, the canvas model also supports the simulation of gouache, a type of paint that closely resembles watercolor but allows painting with opaque layers, and includes a white pigment. By adapting the Kubelka-Munk parameters and using a higher viscosity value in the fluid simulation, we obtain the desired effect.
Finally, Oriental ink is simulated as well. Although the canvas model does not allow the small ink particles to penetrate the canvas and spread there, we can imitate this with a technique described in one of the following chapters. We further use a canvas with a very pronounced texture and a palette with a dark, heavy type of pigment. Figure 6.5 shows an image in Oriental ink.
User interface The paint application is provided with an intuitive, simple user interface. It contains a mixing palette with which colors can be composed, a number of dialogs for setting the simulation parameters, and some basic operations such as drying the canvas, starting a new layer, clearing the canvas, and saving/loading results. The canvas is also presented to the user in three different ways: in 3D perspective, in 2D orthogonal view, and through a camera attached to the brush. A user can control the brush with a Wacom tablet interface, with 5DOF input.
B.7 A Brush Model
A virtual counterpart of a paint brush must be able to translate the movements of a user into realistic and predictable brush strokes. Most existing applications ignore this aspect and produce only simple, uniform imprints. The brush model proposed in this section has the following properties:
• an efficient and generally applicable method to deform the brush head, using the free-form deformation technique;
• anisotropic friction;
• creation of complex imprints.
Brush models in the literature Virtual brush models have improved considerably since Greene's drawing prism and Strassmann's one-dimensional brush. Complex deformable models have been investigated especially in the context of Oriental ink and Chinese calligraphy. The work of Saito et al. introduced a technique based on energy minimization. The system describes a function that sums the total energy present in a kinematic chain and minimizes it. The geometry consists merely of an ellipse traced along the chain. Several authors have extended this technique. Chu et al. add, among other things, anisotropic friction and more complex geometry, while Baxter et al. describe the geometry using both subdivision surfaces and elongated strips that represent the individual bristles. A completely different approach was introduced in the work of Xu et al., where "writing primitives" represent a collection of bristles that are visualized with NURBS surfaces.
A virtual brush can be manipulated with a mouse. A tablet interface offers five degrees of freedom and thus more control. The work of Baxter et al. uses the PHANToM haptic feedback device, which also feeds the brush deformation back to the user.
Brush dynamics Our system also uses energy minimization. The result of this technique is a system that almost immediately tries to restore its original state, just like a real brush. A single bristle can be represented as a series of line segments connected by angular springs. The total energy in the system then consists of the sum of the energies of all these springs, plus the friction energy of the bristle against the canvas surface. This friction is anisotropic: movement along the length of the bristle produces less friction than sideways movement. The system can also impose constraints on the movement of the bristle, so that it stays above the canvas surface. The equilibrium of the system is found by minimizing the energy function, for which we use the "donlp2" framework.
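The energy-minimization idea can be sketched with a chain of angular springs relaxed by plain gradient descent. This is a toy stand-in: the actual system adds anisotropic friction terms and canvas constraints, and solves the constrained problem with donlp2 rather than the naive descent below.

```python
def hair_energy(angles, rest=0.0, k=1.0):
    """Total energy of a chain of angular springs: the sum of
    0.5*k*(theta - rest)^2 over all joints (friction and constraint
    terms omitted in this sketch)."""
    return sum(0.5 * k * (a - rest) ** 2 for a in angles)

def relax(angles, steps=100, lr=0.5, k=1.0, rest=0.0):
    """Drive the joint angles toward the energy minimum by gradient
    descent on the quadratic spring energy."""
    a = list(angles)
    for _ in range(steps):
        a = [x - lr * k * (x - rest) for x in a]
    return a

bent = [0.8, -0.4]          # a deformed brush hair, two bent joints
relaxed = relax(bent)
```

Because the unconstrained minimum is the rest pose, the relaxed hair springs back straight, which is the "restores its original state" behavior described above.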
Brush geometry Simulating every bristle with the method described above is not feasible. We therefore simulate only a few representative bristles, or spines. The movements of the spines can be coupled to the control points of a free-form deformation lattice, with which the geometry of the brush itself can then be deformed. We design two kinds of brushes. Figure 7.5 shows a round, elongated brush with a single spine; its geometry is a polygon mesh modeled in Blender. A second brush design, shown in figure 7.6, is a flat brush consisting of a collection of line segments and two spines. The results show that both brushes can generate an extensive set of brush strokes.
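Free-form deformation evaluates deformed positions as a Bernstein-weighted blend of lattice control points. A one-dimensional sketch (the brush uses a trivariate lattice whose control points are driven by the spines):

```python
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd_1d(t, controls):
    """Evaluate a 1D free-form deformation: a point with lattice
    parameter t in [0,1] is mapped through a Bezier blend of the
    control-point positions (1D sketch of the trivariate FFD)."""
    n = len(controls) - 1
    return sum(bernstein(i, n, t) * controls[i] for i in range(n + 1))

# control points on the identity line: the undeformed lattice
identity = [0.0, 1/3, 2/3, 1.0]
```

Displacing the control points near the brush tip (instead of leaving them on the identity line) bends every embedded mesh vertex smoothly, which is how a handful of spines can deform the whole polygon mesh.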
B.8 Digital Artistic Tools
Besides paint brush, canvas and paint, many real-life artists also use other tools during the painting process. Watercolor pencils, specialized painting knives, and even a toothbrush appear suitable for applying paint to the canvas. Excess paint can be removed with any material that is somewhat absorbent and has texture, so that it leaves a striking imprint. There is also special masking fluid that can be applied to the canvas as a protective layer. All this indicates that there really are no predefined rules for how one must paint. What matters is that users can translate their intentions to the canvas.
Goals In this section we extend the paint system with a number of artistic tools:
Textured cloth A cloth with a pronounced texture, used to remove pigment and water from the canvas surface.
Diffusion controller A tool that gives the user more control over the patterns formed during the diffusion process.
Masking fluid A special fluid applied to the canvas as a protective layer.
Selective eraser A tool for removing part or all of the pigment/water from an active paint layer.
Paint profile A profile that collects all predefined palette and canvas parameters under a single name, making it possible to switch quickly between paint types.
Artistic tools from the real world Almost any everyday product makes a handy painting tool if it can be used to dab excess paint off the canvas and leaves a striking imprint in the paint: cotton swabs, paper or cotton cloths, sponges, and even bread crumbs. The motivation is always the creation of striking visual effects.
With the wet-on-wet technique one paints on a part of the canvas that is still wet. The result is feathery edges and remarkable color patterns. Combined with transparency effects this can produce interesting visuals. Predicting how the paint will behave, however, is very difficult in this case, since it depends on a large number of parameters; the user has to estimate it from experience.
Masking fluid is used to protect certain parts of the canvas from a new paint layer. Since real watercolor contains no white pigment, the white of the canvas itself has to be used, and this is where masking fluid comes in handy. Note that the fluid can be used on both wet and dry parts. Afterwards the dried fluid can simply be removed. Besides the fluid there is also special tape that can be stuck onto the canvas; its disadvantage is the difficulty of creating the right shapes.
In the real world it is very difficult to remove paint completely from the canvas. Mistakes are usually corrected in some other way, although sometimes, as a last resort, the entire canvas is washed so one can start over.
A popular technique is the use of mixed paint media, for example a combination of Oriental ink, watercolor and gouache. In practice, however, there are many limitations; it is impossible, for instance, to change the texture of the canvas while painting.
Digital artistic tools When the textured cloth is used in the paint system, the input device becomes a metaphor for the finger rubbing over the canvas. The implementation of the cloth uses an extra texture, one channel of which serves as a height field, while a second channel marks cells that have already exchanged pigment and water with the paint layer. The interaction of the cloth with the canvas strongly resembles how the paint layer interacts with the canvas, and uses a fragment shader to move paint and water.
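A sketch of the cloth's per-cell rule: where the cloth's height field actually touches the canvas, a fraction of the paint is lifted. The threshold and pickup fraction below are invented; the real system runs this per cell in a fragment shader and also tracks already-visited cells in a second channel.

```python
def dab_cloth(paint, cloth_height, threshold=0.5, pickup=0.5):
    """Blot a row of cells with a textured cloth: only where the
    cloth's height field rises above `threshold` does it contact the
    canvas and lift a `pickup` fraction of the paint (illustrative)."""
    return [p * (1.0 - pickup) if h > threshold else p
            for p, h in zip(paint, cloth_height)]

# the cloth's bumps (0.9, 0.7) touch; the valley (0.1) leaves paint alone
left = dab_cloth([1.0, 1.0, 1.0], [0.9, 0.1, 0.7])
```

The height-field texture is what makes the removed paint carry the cloth's imprint rather than a uniform blot.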
To imitate the capricious shapes that arise during the diffusion process, we adapt the diffusion algorithm with a block pattern. During diffusion, cells exchange material in order to approach an equal amount everywhere. The block pattern, which can be specified by the user, ensures that this exchange does not happen uniformly in all directions, but takes the pattern into account, as well as the texture of the canvas. As a result, the diffusion process can be "steered".
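The steering idea can be sketched by gating each cell-to-cell exchange with the user's pattern (1D sketch with an invented gating rule; the real implementation also folds in the canvas texture):

```python
def guided_diffusion(water, pattern, rate=0.25):
    """Diffusion between neighbouring cells, scaled by a user-supplied
    block pattern: exchange across an edge is damped where the pattern
    value is low, letting the user steer the branching (illustrative)."""
    new = list(water)
    for i in range(len(water) - 1):
        gate = min(pattern[i], pattern[i + 1])   # pattern gates the edge
        flow = rate * gate * (water[i] - water[i + 1])
        new[i] -= flow
        new[i + 1] += flow
    return new

open_row = guided_diffusion([1.0, 0.0], [1.0, 1.0])  # pattern lets water pass
blocked  = guided_diffusion([1.0, 0.0], [1.0, 0.0])  # pattern blocks the edge
```

Painting the pattern texture thus shapes where the feathery branches are allowed to grow.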
The masking fluid can be implemented simply by providing an extra layer on top of the active layers, in which masked cells are marked. All cells underneath this layer take no part in the simulation.
The selective eraser again uses a fragment shader in Cg to remove an amount of water and pigment as soon as it is applied to the canvas.
Finally, the paint profile saves the user from having to set all canvas parameters, and choose the right palette, every time a particular paint medium is to be imitated. The profile groups all these properties under a recognizable name, so that one can easily switch from one profile to another. The system currently contains watercolor, gouache and Oriental ink profiles.
B.9 Conclusions
The goal of this dissertation was to create an interactive paint application that uses physically-based techniques to let a user make true-to-life images with watery paint. We first described a component-based framework that employs XML to manage the complexity of developing complex physically-based simulations. Next, a layered canvas model was introduced, followed by two implementations: a distributed version that uses an HPC, and an implementation that relies on programmable graphics hardware. The application was extended with a deformable brush model and, finally, with a number of digital artistic tools.
Future work Possible topics for future work include finding techniques to create images at higher resolutions, making paint animations, adding extra artistic tools, and improving the brush model.
Bibliography
[ArtRage 06] ArtRage v2.05 (Software package). Available at http:
//www.ambientdesign.com. Ambient Design, 2006.
[Baraff 97a] David Baraff & Andrew Witkin. Partitioned dynamics. Technical Report CMU-RI-TR-97-33, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA,
1997.
[Baraff 97b] David Baraff & Andrew Witkin. Physically-based modeling: principles and practice. Course 34, Course Notes of ACM SIGGRAPH 1997. ACM, Los Angeles, CA, August 1997.
[Baxter 01] William V. Baxter, Vincent Scheib, Ming C. Lin & Dinesh Manocha. dAb: interactive haptic painting with 3D
virtual brushes. In Eugene Fiume, editor, Proceedings
of ACM SIGGRAPH 2001, pages 461–468. ACM Press,
NY, USA, August 2001.
[Baxter 04a] William V. Baxter. Notes on brush simulation with optimization. Technical report, University of North Carolina
at Chapel Hill, Department of Computer Science, 2004.
[Baxter 04b] William V. Baxter. Physically-based modeling techniques
for interactive digital painting. PhD thesis, University of
North Carolina at Chapel Hill, Department of Computer
Science, 2004.
[Baxter 04c] William V. Baxter & Ming C. Lin. A versatile interactive
3D brush model. In Proceedings of the 12th Pacific Conference on Computer Graphics and Applications, pages
319–328, Seoul, Korea, October 2004. IEEE Computer
Society Press.
[Baxter 04d] William V. Baxter, Yuanxin Liu & Ming C. Lin. A viscous paint model for interactive applications. Computer Animation and Virtual Worlds, vol. 15, no. 3–4, pages 433–442, 2004.
[Baxter 04e] William V. Baxter, Jeremy Wendt & Ming C. Lin. IMPaSTo: A realistic model for paint. In Proceedings of the
3rd International Symposium on Non-Photorealistic Animation and Rendering, pages 45–46. ACM Press, NY,
USA, 2004.
[Beets 06] Koen Beets, Tom Van Laerhoven & Frank Van Reeth.
Introducing artistic tools in an interactive paint system.
In Proceedings of WSCG 2006, volume 14, pages 47–
54, Plzen, CZ, January 2006. UNION Agency–Science
Press.
[Blender 06] Blender v2.41 (Software package). Available at http:
//www.blender.org. Blender Foundation, 2006.
[Box 00] Don Box et al. Simple Object Access Protocol (SOAP) 1.1. W3C, http://www.w3.org/TR/SOAP, May 2000.
[Chu 02] Nelson S.H. Chu & Chiew-Lan Tai. An efficient brush
model for physically-based 3D painting. In Proceedings of
the 10th Pacific Conference on Computer Graphics and
Applications, page 413. IEEE Computer Society, 2002.
[Chu 04] Nelson S.-H. Chu & Chiew-Lan Tai. Real-time Painting
with an Expressive Virtual Chinese Brush. IEEE Computer Graphics and Applications, vol. 24, no. 5, pages
76–85, 2004.
[Chu 05] Nelson S.-H. Chu & Chiew-Lan Tai. MoXi: real-time ink dispersion in absorbent paper. In Proceedings of ACM SIGGRAPH 2005, volume 24(3). ACM Press, NY, USA, July 2005.
[Cockshott 91] Malcolm Tunde Cockshott. Wet and Sticky: A novel
model for computer-based painting. PhD thesis, Glasgow
University, 1991.
[Colomosse 04] John Philip Colomosse. Higher Level Techniques for the
Artistic Rendering of Images and Video. PhD thesis,
University of Bath, 2004.
[Conley 06] Gregory Conley. Watercolor Painting. Watercolor Painting Website, http://www.watercolorpainting.com/,
2006.
[Corel 04] Corel Painter IX (Software package), http://www.
corel.com/painterix. Corel, 2004.
[Craig 89] John C. Craig. Robotics. Addison-Wesley, New York,
1989.
[CreaToon 06] CreaToon v3 (Software package). Available at http://www.creatoon.com. Androme N.V., 2006.
[Curtis 97] Cassidy J. Curtis, Sean E. Anderson, Joshua E. Seims, Kurt W. Fleischer & David H. Salesin. Computer-generated watercolor. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 421–430. ACM Press/Addison-Wesley Publishing Co., NY, USA, 1997.
[Deegan 00] Robert D. Deegan, Olgica Bakajin, Todd F. Dupont,
Greg Huber et al. Pattern formation in drying drops.
Physical Review E, vol. 61, pages 475–485, 2000.
[Deep Paint 05] Deep Paint v2.0 (Software package). Available at http:
//www.righthemisphere.com. Right Hemisphere, 2005.
[Foster 96] Nick Foster & Dimitri Metaxas. Realistic animation of liquids. Graphical Models and Image Processing, pages 471–483, 1996.
[Gamma 99] Erich Gamma, Richard Helm, Ralph Johnson & John Vlissides. Design patterns: elements of reusable object-oriented software. Addison-Wesley, 1999.
[Golub 89] G. H. Golub & C. F. Van Loan. Matrix computations. Johns Hopkins University Press, Baltimore, MD, USA, second edition, 1989.
[Gooch 01] Amy Gooch & Bruce Gooch. Non-photorealistic rendering. A K Peters, Ltd., 2001.
[Gottschalk 99] S. Gottschalk, M. Lin, D. Manocha & E. Larsen. PQP the Proximity Query Package (Software package). Available at http://www.cs.unc.edu/geom/SSV/. 1999.
[Greene 85] Richard Greene. The drawing prism: a versatile graphic
input device. In Proceedings of the 12th annual conference on Computer graphics and interactive techniques,
pages 103–110. ACM Press, NY, USA, 1985.
[Gregory 99] A. Gregory, M. Lin, S. Gottschalk & R. Taylor. H-COLLIDE: A framework for fast and accurate collision
detection for haptic interaction. In Proceedings of Virtual Reality Conference 1999, 1999.
[Guo 91] Qinglian Guo & T. L. Kunii. Modeling the diffuse painting of sumie. IFIP Modeling in Computer Graphics,
1991.
[Guo 03] Qinglian Guo & Tosiyasu L. Kunii. “Nijimi” rendering algorithm for creating quality black ink paintings. In
Proceedings of Computer Graphics International 2003,
pages 152–161. IEEE Computer Society Press, July
2003.
[Harris 02] Mark J. Harris, Greg Coombe, Thorsten Scheuermann
& Anselmo Lastra. Physically-based visual simulation
on graphics hardware. In Proceedings of the ACM
SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, pages 109–118. Eurographics Association,
2002.
[Harris 03] Mark J. Harris. Real-time cloud simulation and rendering. PhD thesis, University of North Carolina at Chapel
Hill, Department of Computer Science, 2003.
[Hudson 97] T. C. Hudson, M. Lin, J. Cohen, S. Gottschalk &
D. Manocha. V-COLLIDE: Accelerated collision detection for VRML. In Proceedings of VRML 97: Second
Symposium on the Virtual Reality Modeling Language,
pages 117–124, New York City, NY, February 1997.
[Kay 00] Michael Kay. XSLT Programmer’s Reference, 2nd Edition. Wrox Press, 2000.
[Kubelka 31] Paul Kubelka & Franz Munk. An article on optics of
paint layers. Zeitschrift für technische Physik, 1931.
[Kundu 02] Pijush K. Kundu & Ira M. Cohen. Fluid mechanics.
Academic Press, second edition, 2002.
[Kunii 95] Tosiyasu L. Kunii, Gleb V. Nosovskij & Takafumi
Hayashi. A diffusion model for computer animation of
diffuse ink painting. In Proceedings of the Computer
Animation ’95, pages 98–102, 1995.
[Lee 97] Jintae Lee. Physically-based modeling of brush painting. In Proceedings of the fifth international conference
on computational graphics and visualization techniques
on Visualization and graphics on the World Wide Web,
pages 1571–1576. Elsevier Science Inc., 1997.
[Lee 99] Jintae Lee. Simulating oriental black-ink painting. IEEE
Computer Graphics and Applications, vol. 19, pages 74–
81, 1999.
[Lee 01] Jintae Lee. Diffusion rendering of black ink paintings using new paper and ink models. In Computers & Graphics, volume 25, pages 295–308, April 2001.
[Luyten 02] Kris Luyten, Tom Van Laerhoven, Karin Coninx &
Frank Van Reeth. Specifying User Interfaces for Runtime Modal Independent Migration. In Proceedings of
CADUI 2002 International Conference on Computer-Aided Design of User Interfaces, pages 238–294, Valenciennes, FR, May 2002.
[Luyten 03] Kris Luyten, Tom Van Laerhoven, Karin Coninx &
Frank Van Reeth. Runtime transformations for modal
independent user interface migration. Interacting with
Computers, vol. 15, no. 3, pages 329–347, 2003.
[MacEvoy 05] Bruce MacEvoy. Wet in wet. Handprint: watercolour website, http://www.handprint.com/HP/WCL/tech23a.html, 2005.
[Mirtich 96] Brian Mirtich. Impulse-based Dynamic Simulation of
Rigid Body Systems. PhD thesis, University of California, Berkeley, December 1996.
[Nocedal 99] Jorge Nocedal & Stephen J. Wright. Numerical optimization. Springer Science+Business Media, 1999.
[NVI 06] NVIDIA. Cg toolkit user’s manual, v1.4.1, January 2006.
[Parent 02] Rick Parent. Computer animation – algorithms and
techniques. Morgan Kaufmann, San Francisco, 2002.
[Ponder 04] Michal Ponder. Component-based methodology and development framework for virtual and augmented reality
systems. PhD thesis, École Polytechnique Fédérale De
Lausanne, 2004.
[Ritchie 05] Dan Ritchie. Project Dogwaffle v3.5 (Software package).
Available at http://www.squirreldome.com/cyberop.htm, 2005.
[Saito 99] S. Saito & M. Nakajima. Physics-based brush model for
painting. page 226, 1999.
[Saito 00] S. Saito & M. Nakajima. 3D Physics-based brush model
for interactive painting (in Japanese). Jyouhou-Shori
Gakkai Ronbushi (Japanese journal), vol. 41, no. 3,
pages 608–615, March 2000.
[Shoup 01] Richard Shoup. SuperPaint: An early frame buffer
graphics system. IEEE Annals of the History of Computing, vol. 23, no. 2, pages 32–37, 2001.
[Slusallek 98] Philipp Slusallek, Marc Stamminger, Wolfgang Heidrich,
Jan-Christian Popp & Hans-Peter Seidel. Composite
lighting simulations with lighting networks. IEEE Computer Graphics and Applications, vol. 18, no. 2, pages
22–31, 1998.
[Small 90] David Small. Modeling watercolor by simulating diffusion, pigment, and paper fibers. In Proceedings of SPIE,
volume 1460, February 1990.
[Smith 98] Stan Smith. The complete watercolour course. Collins
& Brown, second edition, 1998.
[Smith 01] Alvy Ray Smith. Digital paint systems: An anecdotal
and historical overview. IEEE Annals of the History of
Computing, vol. 23, no. 2, pages 4–30, 2001.
[Sousa 00] M.C. Sousa & J.W. Buchanan. Observational model of
graphite pencil materials. Computer Graphics Forum,
vol. 19, pages 27–49, 2000.
[Spellucci 04] Peter Spellucci. DONLP2 Users guide. Technical University at Darmstadt, Germany, 2004.
[Squyres 03] Jeffrey M. Squyres & Andrew Lumsdaine. A component
architecture for LAM/MPI. In Proceedings, 10th European PVM/MPI Users’ Group Meeting, number 2840
in Lecture Notes in Computer Science, Venice, Italy,
September 2003. Springer-Verlag.
[Stam 99] Jos Stam. Stable fluids. In Alyn Rockwood, editor,
Proceedings of ACM SIGGRAPH 1999, pages 121–128,
Los Angeles, 1999. Addison Wesley Longman.
[Stam 03] Jos Stam. Real-time fluid dynamics for games. In Proceedings of the Game Developer Conference, March 2003.
[Strassmann 86] Steve Strassmann. Hairy brushes. In Proceedings of
the 13th annual conference on Computer graphics and
interactive techniques, pages 225–232. ACM Press, NY,
USA, 1986.
[Szyperski 98] Clemens Szyperski. Component software: beyond
object-oriented programming. ACM Press and Addison-Wesley, New York, N.Y., 1998.
[van der Bergen 99] Gino van den Bergen. SOLID, Software Library for Interference Detection (Software package). Available at
http://www.win.tue.nl/cs/tt/gino/solid/, 1999.
[Van Laerhoven 02] Tom Van Laerhoven & Frank Van Reeth. The pLab
project: an extensible architecture for physically-based
simulations. In Proceedings of Spring Conference on
Computer Graphics 2002, pages 129–135, Budmerice,
SL, April 2002.
[Van Laerhoven 03] Tom Van Laerhoven, Chris Raymaekers & Frank
Van Reeth. Generalized object interactions in a component based simulation environment. Journal of WSCG
2003, vol. 11, no. 1, pages 129–135, February 2003.
[Van Laerhoven 04a] Tom Van Laerhoven, Jori Liesenborgs & Frank
Van Reeth. A paper model for real-time watercolor
simulation. Technical Report TR-LUC-EDM-0403, EDM/LUC, Diepenbeek, Belgium, 2004.
[Van Laerhoven 04b] Tom Van Laerhoven, Jori Liesenborgs & Frank
Van Reeth. Real-time watercolor painting on a distributed paper model. In Proceedings of Computer
Graphics International 2004, pages 640–643. IEEE Computer Society Press, June 2004.
[Van Laerhoven 05a] Tom Van Laerhoven, Koen Beets & Frank Van Reeth.
Introducing artistic tools in an interactive paint system. Technical Report TR-UH-EDM-0501, EDM/UH,
Diepenbeek, Belgium, 2005.
[Van Laerhoven 05b] Tom Van Laerhoven & Frank Van Reeth. Real-time simulation of thin paint media. In Conference Abstracts and
Applications of ACM SIGGRAPH 2005, July 2005.
[Van Laerhoven 05c] Tom Van Laerhoven & Frank Van Reeth. Real-time simulation of watery paint. In Journal of Computer Animation and Virtual Worlds (Special Issue CASA 2005),
volume 16:3–4, pages 429–439. J. Wiley & Sons, Ltd.,
October 2005.
[Wenz-Denise 01] Susan Wenz-Denise. The invaluable paintbrush. World
Wide Web, http://www.passionforpaint.com, 2001.
[Wolfram 02] Stephen Wolfram. A new kind of science. Wolfram Media, Inc., IL, USA, first edition, 2002.
[Worley 96] Steven Worley. A cellular texture basis function. In
Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 291–
294. ACM Press, NY, USA, 1996.
[XGL 01] XGL File Format Specification. XGL Working Group,
http://www.xglspec.org/, 2001.
[XML 01] Extensible Markup Language (XML). World Wide Web
consortium, http://www.w3.org/XML/, 2001.
[Xu 02] Songhua Xu, Min Tang, Francis M.C. Lau & Yunhe Pan.
A solid model based virtual hairy brush. In Proceedings
of Computer Graphics Forum, volume 21, September
2002.
[Yu 03] Young Jung Yu, Do Hoon Lee, Young Bock Lee &
Hwan Gue Cho. Interactive rendering technique for realistic oriental painting. Journal of WSCG 2003, vol. 11, no. 1, pages 538–545,
February 2003.
[Zhang 99] Qing Zhang, Youetsu Sato, Jun-ya Takahashi,
Kazunobu Muraoka & Norishige Chiba. Simple
cellular automaton-based simulation of ink behaviour
and its application to Suibokuga-like 3D rendering
of trees. In Journal of Visualization and Computer
Animation, pages 27–37, 1999.