(Disney Meets Darwin)

2: Background and Related Work


Autonomous Character Animation

The fine art of character animation has evolved from what it was in the classic days of Disney, Warner Brothers, and Fleischer. The introduction of computer software into the animation studio has influenced the industry and the way we think about creating, and even watching, an animated character. Animation was originally a craft whose structure was based primarily on the animation cel - the 1/24th-of-a-second picture frame - as the main building block. This is a direct outcome of the physical properties of movie film.

The introduction of computational techniques which simulate the physics of interacting bodies and the motor control systems of animals has produced new systems for generating motion, and has brought the concept of autonomy into the art of animation. In these task-level animation systems, the animator needs only to specify general goals, and the character (as an autonomous agent) takes care of the many details necessary to accomplish the task (Zeltzer, 91). This is still more the realm of research than of applied use, but some predict that the art of animation may, in the future, look more like the direction of actors in a movie than the drawing of hundreds of picture frames.

Spacetime Constraints

In the field of character animation research, there have been a number of developments in methods for generating goal-directed behavior in characters represented as articulated skeletal figures (Sims, 87), (Cohen, 92), (Ngo and Marks, 93), (van de Panne and Fiume, 93). These figures are modeled in virtual physical worlds for realism, and their internal motions can adapt, within the constraints of that physical world, to obey user-specified constraints for locomotion and other specified behaviors. This approach to generating motion is often referred to as the spacetime constraints paradigm (Witkin and Kass, 88). It aims to automate some of the tasks of the animator: the dynamics associated with the physics of motion are left to a physically based model, and the motions of the articulated body are automatically optimized within the system. The animator tells the character where to go and when to get there, for instance, and the spacetime constraint system computes the optimal motions of the character for getting there at the right time. Traditional animation concepts such as squash and stretch, anticipation, and follow-through have been shown to emerge from spacetime constraint systems.
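The core idea can be sketched with a toy one-dimensional problem: a unit-mass particle must start at rest at x = 0 and arrive at x = 1 at time T, while the forces driving it are optimized to minimize total effort (sum of squared forces). This penalty-method sketch only illustrates the flavor of the paradigm; it is not the actual formulation of Witkin and Kass (88), and all numbers are invented.

```python
N, T = 10, 1.0   # number of time steps, total duration (illustrative values)
dt = T / N

def simulate(forces):
    """Integrate the unit-mass particle under a candidate force sequence."""
    x, v = 0.0, 0.0
    for f in forces:
        v += f * dt
        x += v * dt
    return x

def objective(forces, penalty=1000.0):
    """Effort plus a penalty for missing the spacetime constraint x(T) = 1."""
    effort = sum(f * f for f in forces) * dt
    miss = simulate(forces) - 1.0
    return effort + penalty * miss * miss

def optimize(steps=2000, lr=0.01, eps=1e-5):
    """Numeric gradient descent over the whole force sequence at once -
    the optimizer shapes the motion across all of spacetime, not frame
    by frame."""
    forces = [0.0] * N
    for _ in range(steps):
        grad = []
        for i in range(N):
            bumped = list(forces)
            bumped[i] += eps
            grad.append((objective(bumped) - objective(forces)) / eps)
        forces = [f - lr * g for f, g in zip(forces, grad)]
    return forces

forces = optimize()   # the particle now arrives very close to x = 1 at time T
```

The optimized force profile front-loads the effort (larger forces early, when they have more time to contribute to displacement), which is the kind of structure that, in full articulated-figure systems, shows up as anticipation and follow-through.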

Motion Control

The spacetime constraints approach assumes that the articulated figure has some ability to change its internal structure (such as the angles of its joints) so as to effect goal-directed behavior. Thus, not only must some physics model be used, but some motor control system must also be modeled in the character. Many kinds of motor control have been used in this regard. Many, as one might expect, are biologically inspired, drawing on theories from neurology and physiology. The virtual roach developed by McKenna (90), for instance, uses a gait controller which employs a series of oscillators which generate stepping patterns, analogous to a system observed in actual roaches. The articulated figures developed by Ngo and Marks (93), van de Panne and Fiume (93), and others use stimulus/response models which allow the characters to sense their relation to the environment, and to move their parts accordingly, each individual action being the direct result of a stimulus. Typical senses used in these models include proprioceptive senses of joint angles, and contact with the ground surface, for a number of body parts. Responses typically involve changing various joint angles. Once a stimulus/response model is created, determining exactly what kinds of responses should be activated by what sensors to produce goal-directed behavior is a difficult design task. This is where evolutionary algorithms have proven useful.
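A stimulus/response controller of this general kind can be sketched as follows. The sensors, the single rule, and the figure representation are invented for illustration - the published models wire up many more sensors and responses, and it is exactly that wiring which the evolutionary algorithms are used to find.

```python
# A minimal stimulus/response motion controller, in the spirit of the
# models described above. All names and the sensor/response wiring are
# hypothetical illustrations, not the published controllers.

def sense(figure):
    """Read the figure's sensors: proprioception and ground contact."""
    return {
        "knee_angle": figure["knee_angle"],       # proprioceptive sense
        "foot_contact": figure["foot_y"] <= 0.0,  # contact with the ground
    }

def respond(stimulus):
    """Map a stimulus to joint-angle changes (the response)."""
    # Example rule: extend the knee while the foot touches the ground,
    # flex it otherwise.
    if stimulus["foot_contact"]:
        return {"knee_angle": +0.1}
    return {"knee_angle": -0.05}

def step(figure):
    """One sense/respond cycle: each action is a direct result of stimulus."""
    for joint, change in respond(sense(figure)).items():
        figure[joint] += change
    return figure

figure = {"knee_angle": 1.2, "foot_y": 0.0}
figure = step(figure)  # foot is in contact, so the knee extends to 1.3
```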

The Genetic Algorithm

It may not be too surprising that the genetic algorithm should have entered into the domain of autonomous character animation research - it is biologically inspired, and it is good at dealing with large search spaces (such as finding the best parameters for locomotion). Below I will explain the primary concepts of the genetic algorithm.

The genetic algorithm (GA) (Holland, 75) (Goldberg, 89) is a search and optimization technique derived from the mechanics of Darwinian evolution. It operates on a population of individuals (potential solutions to a problem), updating the population in parallel, over many generations. In the case of this research, a good individual, or solution, is a setting of motion attributes which causes a character to perform some desired behavior. As the population is updated in the GA, the better (or more "fit") individuals pass on more of their characteristics to the individuals in the next generation, while the less fit individuals pass on fewer. Since individuals reproduce sexually, offspring have the potential to inherit good traits from two fit parents, taking the best of both worlds and thereby acquiring even better combinations of traits. The result is that the population as a whole progressively improves over a series of generations.

Individuals within a population are represented in the GA as strings (in the classic GA these are bit strings of 1's and 0's, but strings consisting of other elements, such as real numbers or integers, have also been used). A string is sometimes referred to as a chromosome, and the elements in the string are sometimes referred to as genes. The chromosome, taken as an encoded representation of an individual solution, is called the individual's genotype. Each gene in the genotype influences some attribute or attributes in the individual (with the whole set of attributes comprising the phenotype). Thus, the phenotype is the expression of the genotype. The fitness of an individual is determined by an objective fitness function - an evaluation criterion which measures some feature, not of the genotype, but of the phenotype. Figure 4 illustrates this concept - that the phenotype is the representation which is evaluated, and that the genotype is the representation which is affected by the evaluation.



Figure 4 Schematic illustrating a basic concept in genetic algorithms - that there are two components to a representation: the phenotype and the genotype. The genotype is an encoded set of parameters that determine attributes in the phenotype. The phenotype is evaluated by an objective fitness function or a human evaluator, and the genotype is affected by this evaluation, through the operators of the genetic algorithm.
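The genotype/phenotype distinction can be made concrete with a small sketch. The specific encoding - three real-valued genes controlling a character's step amplitude, frequency, and phase - and the fitness criterion are invented for illustration.

```python
# A genotype is an encoded string of genes; the phenotype is what those
# genes express; fitness is measured on the phenotype. All names, scale
# factors, and the fitness criterion are hypothetical.

genotype = [0.8, 0.3, 0.5]  # the chromosome: genes in the range 0..1

def express(genotype):
    """Decode a genotype into a phenotype: concrete motion attributes."""
    amplitude, frequency, phase = genotype
    return {"amplitude": amplitude * 2.0,   # scale genes into attribute ranges
            "frequency": frequency * 10.0,
            "phase": phase * 6.28}

def fitness(phenotype):
    """An objective fitness function measures the phenotype, not the
    genotype (here, a made-up criterion favoring large, slow steps)."""
    return phenotype["amplitude"] / (1.0 + phenotype["frequency"])

score = fitness(express(genotype))
```

The GA never inspects the phenotype directly; the evaluation of the phenotype feeds back only into which genotypes get to reproduce.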

Many varieties of the GA have been implemented, but they all have some basic things in common. A typical GA works like this:

     Initialize
         An initial population of genotypes is created with random settings 
         of gene values.  

     Evaluate
         Each genotype's resulting phenotype is evaluated for fitness.
 
     Select 	
         Pairs of genotypes are chosen randomly from the population for mating
         - with the chances of being chosen proportional to fitness.

     Crossover
         These genotypes mate via crossover: they produce one or two offspring
         genotypes, which inherit randomly sized, alternating "chunks" of 
         gene sequences from each parent.
  
     Mutate
         Each offspring's genes have a chance of being mutated (isolated 
         genes can be changed randomly - this encourages experimentation
         in the population, across generations).

     Update
         The offspring genotypes comprise the next generation, and replace
         the genotypes in the previous one. 

     Repeat
         The process is repeated, beginning with the Evaluate step, for
         a set number of generations, or until the average fitness of 
         the population reaches a desired level. 
   
Most GAs incorporate variations on this form, and experimenters use different settings for parameters such as population size, mutation rate, fitness scaling, and crossover rate (which affects the sizes of the parental chunks that offspring genotypes inherit).
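The steps above can be sketched as a minimal GA loop over classic bit-string genotypes. The toy fitness problem ("one-max": evolve a string of all 1's) and all parameter settings are illustrative placeholders, and the crossover here uses a single cut point rather than multiple alternating chunks.

```python
import random

def run_ga(fitness, genome_len=8, pop_size=20, generations=30,
           mutation_rate=0.05):
    """A minimal genetic algorithm following the steps in the text."""
    random.seed(1)  # fixed seed so the sketch is repeatable
    # Initialize: a population of random bit-string genotypes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate: score each genotype's phenotype.
        scores = [fitness(g) for g in pop]
        weights = [s + 1e-9 for s in scores]  # avoid all-zero weights
        offspring = []
        while len(offspring) < pop_size:
            # Select: parents chosen with probability proportional to fitness.
            mom, dad = random.choices(pop, weights=weights, k=2)
            # Crossover: the child inherits a chunk of each parent.
            cut = random.randint(1, genome_len - 1)
            child = mom[:cut] + dad[cut:]
            # Mutate: each gene has a small chance of flipping.
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            offspring.append(child)
        # Update: the offspring replace the previous generation.
        pop = offspring
    return max(pop, key=fitness)

best = run_ga(fitness=sum)  # evolves a genotype of mostly (or all) 1's
```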

Artificial Life

Artificial Life is the study of human-made systems which exhibit behaviors characteristic of natural organic processes. It complements the traditional biological sciences, which are largely concerned with analysis, by attempting to synthesize life-like systems (Langton, 91). Life, and the myriad systems characteristic of life, are seen as emergent behavior. Many artificial life researchers use genetic algorithms in modeling the evolution of self-organizing phenomena, employing a bottom-up methodology. Reproduction, predator/prey dynamics, primitive communication, locomotion, and functional morphology are among the phenomena that have been modeled. The influence of artificial life concepts can be seen in a growing number of software products for exploring evolution and emergent phenomena, including SimLife and other "Sim" products by Maxis.

Interactive Evolution

Genetic algorithm-based systems which replace the objective fitness function with a human are typically called interactive evolution systems. These have been developed by Dawkins (86), Sims (91), Latham (Todd, 92), Tolson (93), Baker (93), and others. What distinguishes these systems from other uses of the GA is that they incorporate a human fitness function. In these systems, abstract forms (typically) are evolved through a user's selection of favorable images over a number of generations. These systems support the notion of an 'aesthetic search,' in which the fitness criteria are based primarily on the visual response of the user. Interactive evolution is useful when fitness cannot be measured by any known computational evaluation technique. As an example of an interactive evolution system, Richard Dawkins' Blind Watchmaker is illustrated in Figure 5. The software was developed as an accompaniment to the book of the same title, as an interactive illustration of evolutionary principles. In the illustration, the top panel shows one generation of a population of biomorphs from which a user has chosen an individual to spawn a new generation, with genetic mutations for variation. By repeating this action a number of times, one can breed a desired biomorph. This system employs asexual reproduction (one parent per generation). Many other interactive evolution systems, including the Character Evolution Tool, also allow sexual reproduction, in which two or more individuals can be chosen from the current generation for mating to create the next.



Figure 5 Two views of Dawkins' Blind Watchmaker (with a schematic overlay) as an example of an interactive evolution system. The top panel shows a population of biomorphs from which a user has picked an individual to spawn a new population with genetic mutation, shown in the bottom panel.
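A single round of Blind Watchmaker-style (asexual) interactive evolution can be sketched as follows. The genotype format (a short list of real-valued genes) and the mutation scheme are invented for illustration; the essential point is that the user's pick stands in for the fitness function.

```python
import random

def interactive_generation(population, chosen_index, mutation_rate=0.3):
    """One round of interactive evolution: the user's chosen individual
    (not an objective fitness function) becomes the sole parent, and it
    spawns a new population of mutated variants."""
    parent = population[chosen_index]
    new_pop = []
    for _ in range(len(population)):
        # Each gene has a chance of drifting randomly from the parent's value.
        child = [g + random.uniform(-1.0, 1.0)
                 if random.random() < mutation_rate else g
                 for g in parent]
        new_pop.append(child)
    return new_pop

# The user views the nine rendered forms and picks, say, the fourth one:
pop = [[random.random() for _ in range(4)] for _ in range(9)]
next_pop = interactive_generation(pop, chosen_index=3)
```

Repeating this pick-and-spawn loop is the "aesthetic search": the population drifts toward whatever the user keeps choosing.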

Gesturing for Animation

Scripting by Enactment techniques have been developed for mapping motions from a human body to an animated 3D computer model (Ginsberg, 83). In these techniques, multiple cameras placed in a room sense diodes attached to a performer's body as he/she moves about - the system merges the camera views to synthesize the 3D model. One can use this technique for quick scripting of expressive movements in an animated character, and it requires no drawing.

Still, most animators (at least at present) are also graphic artists, highly skilled in the art of expression through drawing. Tools which complement this skill have been explored. In Computer-Aided Design research, algorithms which extract features from human-made gestures drawn on a computer display have been developed to enhance the communication between user and computer. Donoghue (93) has developed a system in which attributes of a gesture - such as rate of drawing, direction, 'tilt' of the stylus, pressure on the tablet, and other features - are parsed and then used for a variety of applications. These applications include rough prototyping for page layout design and quick scripting of animated character movements. This facet of her system is derived from earlier work by Baecker (69), which demonstrated the use of an input stylus to create a motion path specifying frame-by-frame positions of animated characters. For specifying character animation movements in Donoghue's system, a sketching grammar interprets geometric and temporal aspects of the sketch input to determine how the character should move. The Character Evolution Tool may be seen as extending Donoghue's work in using gestures to specify motions in characters. The added dimension is that the characters (as autonomously moving articulated figures) are able to optimize their internal motions to best approach the motion specified by features of the gesture - it adds adaptation.
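The kind of feature extraction described above can be sketched briefly. The point format (x, y, t) and the particular features are hypothetical illustrations, not Donoghue's actual sketching grammar.

```python
import math

def gesture_features(points):
    """Extract simple features from a drawn gesture, given as a sequence
    of (x, y, t) samples from a stylus. The feature set here (rate of
    drawing, overall direction, path length) is an invented example."""
    (x0, y0, t0), (x1, y1, t1) = points[0], points[-1]
    length = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                 for a, b in zip(points, points[1:]))
    duration = t1 - t0
    return {
        "rate": length / duration if duration > 0 else 0.0,  # drawing speed
        "direction": math.atan2(y1 - y0, x1 - x0),           # overall heading
        "length": length,
    }

# A quick horizontal stroke drawn in half a second:
stroke = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.25), (2.0, 0.0, 0.5)]
feats = gesture_features(stroke)
```

Features like these could then drive motion attributes - for instance, a fast stroke mapping to a fast-moving character - with the figure's internal motions left to adapt toward the specified path.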


APPROACH


