(Disney Meets Darwin)
3 Background
Computer animation techniques that simulate the physics of interacting
bodies and the motor control systems of animals have created new ways of
generating motion, and have brought the concept of autonomy into the art
of animation.
In task-level animation systems, the
animator needs only to specify general goals; the character (as an autonomous agent)
takes care of the many details necessary to accomplish those goals
[Zeltzer, 91].
Many methods have been developed for generating
goal-directed behavior in characters represented as articulated figures
[Sims, 87], [Cohen, 92], [Ngo and Marks, 93], [van de Panne and Fiume, 93].
These figures are modeled in virtual physical worlds for realism, and their
internal motions can adapt, within the laws of that physical world,
to satisfy user-specified constraints for locomotion and
other explicit behaviors. This approach to generating motion is often
referred to as the spacetime constraints paradigm [Witkin and Kass, 88].
The spacetime constraints paradigm aims to automate some of the tasks of
the animator: the dynamics associated with the physics of motion are left
to a physically based model, and the motions of articulated bodies are
automatically optimized within the system. The animator tells the
character where to go and when to get there, for instance, and the
spacetime constraint system automatically computes the optimal motions of
the character for getting there at the right time. Traditional animation
concepts such as squash and stretch, anticipation, and follow-through
have been shown to emerge from the spacetime constraints approach.
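As a minimal sketch of the idea (a toy example of our own, not Witkin and Kass's actual formulation), the following treats a character's one-dimensional position over time as the unknown, pins the endpoints (where to go and when to get there), and minimizes total squared acceleration as a stand-in for "optimal motion":

```python
# Toy spacetime-constraints sketch. The trajectory x is the unknown;
# the endpoint values are the animator's constraints, and gradient
# descent on the free interior points minimizes the "effort".
N = 11
x = [0.0] * N     # x[0] = 0.0: the start position (a constraint)
x[-1] = 1.0       # x[N-1] = 1.0: the goal at the final time (a constraint)

def effort(x):
    # Sum of squared discrete accelerations (second differences).
    return sum((x[i-1] - 2.0*x[i] + x[i+1]) ** 2 for i in range(1, N-1))

for _ in range(30000):
    # Current accelerations, zero-padded at the ends.
    a = [0.0] + [x[i-1] - 2.0*x[i] + x[i+1] for i in range(1, N-1)] + [0.0]
    for i in range(1, N-1):                   # the endpoints never move
        x[i] -= 0.03 * 2.0 * (a[i-1] - 2.0*a[i] + a[i+1])
```

Since only the endpoints are constrained here, the optimum is simply the zero-acceleration straight-line motion; richer constraints (obstacles, joint limits, actuation bounds) make the solution, and the motion, far less trivial.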
The spacetime constraints approach assumes that the articulated
figure has some ability to change its internal structure (such as the angles
of its joints) in such a way as to effect goal-directed behavior. Thus, not
only must some physics model be used, but there must also be some model of
a motor control system in the character. There have been many kinds of motor
control used in this regard. Many, as one might expect, are biologically
inspired, and use theories from neurology and physiology. The virtual roach
developed by
McKenna [90], for instance, uses a gait controller employing a
series of oscillators that generate stepping patterns, analogous to
a system observed in actual roaches. The articulated figures developed
by Ngo and Marks [93], van de Panne and Fiume [93], and others, use
stimulus/response models which allow the figures to sense their
relation to the environment and to move their parts accordingly, each
action being the direct result of a stimulus. Typical senses used in these
models include senses of joint angles, and contact with the ground surface,
for a number of body parts. Responses typically involve changing various
joint angles. Once a stimulus/response model is created, determining
which responses should be activated by which sensors to produce
goal-directed behavior is a difficult design problem.
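The sense/act cycle can be sketched as follows (a hypothetical illustration, not the actual controllers of Ngo and Marks or van de Panne and Fiume; the joint names and thresholds are invented for the example):

```python
def sense(state):
    # Stimuli: a joint angle, and a ground-contact flag for one foot.
    return {
        "knee_angle": state["knee_angle"],
        "foot_on_ground": state["foot_height"] <= 0.0,
    }

def respond(stimuli):
    # Responses: changes to joint angles, chosen by simple fixed rules.
    delta = 0.0
    if stimuli["foot_on_ground"]:
        delta += 0.2                  # push off: extend the knee
    if stimuli["knee_angle"] > 1.2:
        delta -= 0.5                  # over-extended: flex back
    return {"knee_angle": delta}

def step(state):
    # One sense/act cycle: each action is a direct result of stimulus.
    for joint, d in respond(sense(state)).items():
        state[joint] += d
    return state
```

The hard part is not this cycle itself but choosing which rules, thresholds, and responses yield useful behavior, which is exactly the search problem the next section addresses.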
This is
where the genetic algorithm (GA) has proven useful: it is biologically
inspired, and it copes well with large search spaces, such as the space
of parameters for locomotion. The genetic algorithm
[Holland, 75], [Goldberg, 89] is a search and optimization technique
derived from the mechanics of Darwinian evolution. It operates on a
population of individuals (potential solutions to a problem), updating
the population in parallel over many generations. GAs have been used
by Ngo and Marks [93] for evolving locomotion and other behaviors in
articulated figures. Sims [94] has developed a system which uses a
genetic language for evolution of a variety of 3D morphologies and behaviors,
such as locomotion, reaching, and grabbing, through competition for
possession of an object. These figures exhibit a wide range of strategies,
and can be entertaining to watch, owing perhaps to the many unexpected strategies
which evolve.
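A minimal GA can be sketched as follows (an illustrative toy, not the system of any of the works cited; the target parameters and operators are our own assumptions). It evolves a small vector of joint parameters toward a fixed target, standing in for a locomotion fitness measure:

```python
import random

TARGET = [0.5, -0.2, 0.8, 0.1]   # hypothetical "ideal" joint parameters

def fitness(genes):
    # Higher is better: negative squared distance to the target.
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def mutate(genes, rate=0.1, sigma=0.1):
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in genes]

def crossover(a, b):
    point = random.randrange(1, len(a))      # one-point crossover
    return a[:point] + b[point:]

def evolve(pop_size=40, generations=60):
    random.seed(0)                           # deterministic demo
    pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

The whole population is evaluated each generation, so the update is naturally parallel; the selection, crossover, and mutation operators are where real systems differ most.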
Genetic algorithm-based systems which replace the
objective function with a human are typically called interactive evolution
systems. Such systems have been developed by Dawkins [86], Sims [91],
Todd and Latham [92], Baker [93], and others. What distinguishes these systems from
other uses of the GA is that they incorporate a human fitness function.
In these systems, visual forms are evolved through the selections of
favorable images by a user, through a number of generations of evaluations.
These systems support the notion of an 'aesthetic search' in which the
fitness criteria are based primarily on the visual response of the user.
Interactive evolution is useful when fitness is not measurable by way of
any known computational evaluation techniques.
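Such a loop can be sketched as follows (our own illustration). The selection step is just a callable, so either a human clicking on rendered forms or an automated stand-in for testing can drive it:

```python
import random

def interactive_evolve(select, genome_len=6, pop_size=8, generations=10):
    pop = [[random.uniform(0.0, 1.0) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        chosen = select(pop)              # the "fitness function" is a person
        pop = [[g + random.gauss(0.0, 0.05) for g in random.choice(chosen)]
               for _ in range(pop_size)]  # breed variations of the picks
    return pop

def auto_user(pop, keep=2):
    # Automated stand-in for a user who happens to favor "bright" forms
    # (high mean gene value); a real system would render each genome
    # and record mouse clicks instead.
    return sorted(pop, key=lambda g: sum(g) / len(g), reverse=True)[:keep]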
The ideas described in this paper follow up on a previous paper which
discusses the animator's contribution to evolution [Ventrella 94, 2].
Specifically it aims to complement the automatic evolution of standard
GA's with interactive evolution, as a way to intentionally encourage a
touch of style and humor to otherwise explicit goal-directed behaviors.
(go to beginning of document)