(Disney Meets Darwin)
3: Approach
In the Disney tradition, animation is the illusion of life (Thomas, 81). Character
animation research has added to this the simulation of life. Animals are complex
beings which, according to Darwinism, have become what they are through evolution.
Artificial life researchers take this approach when modeling artificial organisms,
studying emergent self-organizing phenomena.
As the background research I have outlined indicates, the evolutionary techniques
central to artificial life research have also begun to influence character animation
research. I have taken this approach, with an added emphasis on creating funny
characters with personality by way of interactive techniques. One might infer from
the title of this thesis that the Character Evolution Tool is based on "survival of
the cutest." But it aims to be more than just this: the qualities one can extract
from a population span a great range of body language.
Two Levels of Evolution
In this thesis,
I consider expressivity to be the product of progressive refinement, which requires
that a human be in the GA loop. Figure 6 illustrates a behavior in which the user
has affected the course of automatic evolution via an overlay of interactive evolution.
Figure 6: The top panel illustrates a walking pattern that emerged in a population
under fitness pressures for locomotion and holding the head high. This character's
behaviors have been automatically optimized through these fitness pressures.
The bottom panel shows a character from the same population in which a user
affected the direction of evolution by favoring ancestors who walked with a
desirable style.
The top panel of the illustration shows a walking pattern that emerged in a
population evolved under fitness pressures for locomotion and holding the head
high. This character is optimized according to those fitness functions. The
bottom panel shows a character from the same population in which the user has
affected the direction of evolution by favoring ancestors who walked with a
particular style. This is accomplished by combining the automatic optimizing
capabilities of a GA with interactive tools, allowing a blend of automated and
user-guided evolution.
And perhaps most importantly, the proportion of user-guided vs. automated
evolution can vary.
This can be seen as the overall approach: a system that blends automatically
driven evolution with the critical vision of an interacting human. Essentially,
the source of evolution at any given time may not be entirely distinguishable
to the user. For instance, while an active objective fitness function encourages
locomotion through the GA, the user may also be encouraging behaviors that make
the locomotion look like swaggering, skipping, or shuffling. "Shuffling," then,
can be the label the user attaches to this behavior. In the final analysis, the
user may not care how much a final behavior was influenced by objective functions
versus his or her control. What counts is that a desirable behavior was achieved.
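The varying proportion of user-guided versus automated evolution described above can be pictured as a weighted blend of two fitness signals. The following is a minimal sketch, not the thesis implementation; the function name, the linear weighting scheme, and the example values are all illustrative assumptions.

```python
def blended_fitness(objective_score, user_rating, w):
    """Blend automatic and user-guided fitness (illustrative sketch).

    w = 1.0 -> purely objective fitness (e.g., a locomotion measure);
    w = 0.0 -> purely interactive fitness (the user's rating).
    Both scores are assumed to lie in [0, 1].
    """
    return w * objective_score + (1.0 - w) * user_rating

# Example: a character that locomotes well (0.9) but whose walking
# style the user rates poorly (0.2), with the blend weighted toward
# the objective function.
score = blended_fitness(0.9, 0.2, w=0.7)
print(round(score, 2))  # 0.7*0.9 + 0.3*0.2 = 0.69
```

Sliding `w` during a run lets the user hand control back and forth between the GA's objective pressures and his or her own aesthetic judgment, which is the blending the text describes.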
Gesturing
Lying at the conceptual center of this thesis is the gesture tool, which was
conceived to enhance the interactive level of genetic algorithms for character
animation. The idea was to design a tool that brings an interactive motion from
the user under the grasp of the genetic algorithm, which uses that motion in a
specialized fitness function. Although this is an experimental component of the
thesis, tests have been successful.
Here the notions of design and evolution are brought together in an experimental
marriage. The design part can be described as the line drawing that the user
gestures into the scene. The evolution part comes into play as that gesture
becomes a component in a specialized fitness function, which affects the
evolution of the population. In developing this technique, two kinds of
algorithms were implemented that interpret features of the gesture and relate
them to features of the characters as they move. The first algorithm compares
the absolute position of a 2D character's moving head to the absolute position
of a traveling point on the gesture. I found that this algorithm imposed too
harsh a constraint on the evaluation. The second algorithm compares the
direction and speed of the character's head motion to the direction and speed
of a moving point on the gesture. This algorithm was found to be more flexible
in that it allowed comparisons at a distance. These two algorithms are described
in more detail at the end of the following section.
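The difference between the two algorithms can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the thesis implementation: paths are lists of 2D points sampled at matching time steps, and the reciprocal error-to-fitness mapping is an assumption of this sketch.

```python
import math

def position_fitness(head_path, gesture_path):
    """First algorithm (sketch): compare the absolute position of the
    character's head at each time step to the absolute position of a
    point traveling along the gesture. Any spatial offset is penalized,
    which is why this constraint proved too harsh."""
    error = sum(math.dist(h, g) for h, g in zip(head_path, gesture_path))
    return 1.0 / (1.0 + error)

def velocity_fitness(head_path, gesture_path):
    """Second algorithm (sketch): compare the direction and speed of the
    head's motion to that of the traveling gesture point. Because only
    velocities are compared, matching can happen 'at a distance'."""
    def velocities(path):
        return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(path, path[1:])]
    error = sum(math.dist(hv, gv)
                for hv, gv in zip(velocities(head_path),
                                  velocities(gesture_path)))
    return 1.0 / (1.0 + error)

# A head path identical in shape to the gesture but offset by (5, 0):
gesture = [(0, 0), (1, 1), (2, 1), (3, 2)]
head = [(x + 5, y) for x, y in gesture]
print(position_fitness(head, gesture))  # low: positions never coincide
print(velocity_fitness(head, gesture))  # 1.0: the motions match exactly
```

The example shows the flexibility noted in the text: a head that traces the gesture's shape far from where it was drawn scores poorly under the position comparison but perfectly under the velocity comparison.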