
Gesturing

The gesture tool offers a novel approach to defining a fitness function by bringing some element of motion "style" within the grasp of a genetic algorithm. The gesture tool is derived from the research of Karen Donoghue (92). In her work, gestures drawn on a computer display with a pressure/tilt-sensitive stylus are parsed for specific attributes, including direction, pressure, tilt, and rate of the drawing action - and these attributes, once parsed, directly determine the actions of animated characters. Unlike Donoghue's scheme, the characters in my system are encouraged to emulate features of the gesture, statistically, through evolutionary pressure. This variation on the spacetime constraints paradigm of saying what but not how gives the population some freedom to converge on the gesture in a variety of possible ways. Essentially, the trajectory traced out by each character's head motion is compared to the gesture to determine how well the head motion matches a feature of the gesture. The poorer the match between gesture and head motion, the lower the fitness value. Note that this technique is not bound to head tracking only - it can just as easily be applied to any other body part (consider evolving an Elvis Presley Hip).

The gesture tool was implemented for use with the 2D articulated figures. Problems of matching the 2D gesture with 3D motions were not tackled. The question of exactly which features of a gesture to compare to the motions of a character is not trivial. Although I have not investigated the capabilities of this technique thoroughly, experiments for this thesis have been successful and have laid some groundwork for further research. Two versions of the gesture tool have been implemented: 1) absolute distance differential, and 2) direction/speed differential. They are described below.

1) Absolute distance differential
The first algorithm developed for matching the gesture with the motion of a character's head was based on absolute coordinate comparisons. During the gesture-matching period, the absolute position of the head is compared to a point moving along the gesture as the character moves about. Figure 20 illustrates this technique.



Figure 20. The absolute distance differential technique used for matching the gesture with the head motion of an articulated figure. For a specified duration, the position of the head is compared to the position of a moving point along the gesture. During that time, the distances between the two paths are accumulated to determine an overall penalty for distance.
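The accumulation of a distance penalty over the gesture-matching period can be sketched as follows. This is a minimal illustration in Python, not the thesis implementation; the function name and the assumption that both paths are sampled in lockstep as equal-length point lists are mine:

```python
import math

def absolute_distance_penalty(head_path, gesture_path):
    """Absolute distance differential (sketch): sum the Euclidean
    distance between the head position and the corresponding point
    moving along the gesture, at each time step.

    Both paths are equal-length lists of (x, y) tuples; a larger
    return value means a worse match (lower fitness)."""
    penalty = 0.0
    for (hx, hy), (gx, gy) in zip(head_path, gesture_path):
        penalty += math.hypot(hx - gx, hy - gy)
    return penalty
```

Because the comparison is in absolute coordinates, a head trajectory with exactly the right shape but traced elsewhere on the screen still accumulates a large penalty - the limitation the second technique addresses.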


2) Direction/speed differential
This technique is an improvement over the first. Instead of comparing absolute Cartesian distances between points in the gesture and points in the head trajectory, it compares their directions and speeds at corresponding locations, as indicated in figure 21. These components represent higher-level features, in that each requires two consecutive points along the path to be calculated. This technique has proven more flexible because it imposes less constraint on the fitness evaluation, permitting the motion to be compared at a distance instead of using absolute proximity as a criterion. It measures similarity between motion and gesture regardless of the gesture's actual location in space.


Figure 21. The direction/speed differential technique used for matching the gesture with the head motion of an articulated figure. For a specified duration, the direction and the speed of the head motion are compared to the implied speed and direction determined by two adjacent points along the gesture. Differences in direction and in speed are accumulated to determine penalty.

While this way of measuring a character's similarity to the gesture is more flexible, preliminary tests still showed that characters whose motions could approximate the gesture in any way were few and far between - and motions in the population could not easily converge on the gesture. I identified two reasons for this:

1) Even if a character's head motion traced a trajectory that was similar to the gesture, it could not receive any credit if its trajectory was traced at a different rate than that of the gesture.

2) The character's trajectory may have matched the gesture, but only offset by some amount of time.

These are comparison problems that have to do with the way in which the gesture is read by the fitness function as it matches features to the character's motion.

Read my Motion...Now!
For these reasons, I chose to add to the population's genome two unique genes, which I call "motion scanning genes." These genes affect the way in which the fitness function compares the character's motion to the gesture, by causing it to read the gesture in a variety of ways, based on genetic variation. One gene controls the point in time at which the fitness function begins to compare the gesture to the motion. The other gene controls the rate at which the gesture is scanned as it is compared to the motion. The motion scanning genes can be described as messages from the characters to the fitness function, sounding something like: "hey fitness function, read my head motion now", or "read my head motion this fast", where now and this fast can vary among the population. The motion scanning genes give the fitness function a better chance at identifying similarities between character motions and the gesture, because individuals' motions are not all read at exactly the same time or at the same rate by the fitness function as it matches them to the gesture.
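One way to picture the effect of the two motion scanning genes is as a resampling of the gesture before comparison: the start gene shifts where reading begins, and the rate gene stretches or compresses the scan. The sketch below is my own illustration of that idea in Python (the function name and the linear-interpolation scheme are assumptions, not details from the thesis):

```python
def scanned_gesture(gesture, start_gene, rate_gene, n_samples):
    """Resample a gesture (list of (x, y) tuples) as directed by the
    two motion scanning genes: begin reading at index start_gene and
    advance rate_gene indices per motion sample, interpolating
    linearly between gesture points. The fitness function would then
    compare the character's head motion to this resampled gesture."""
    points = []
    t = start_gene
    for _ in range(n_samples):
        i = int(t)
        if i >= len(gesture) - 1:
            # Past the end of the gesture: hold the final point.
            points.append(gesture[-1])
        else:
            f = t - i  # fractional position between points i and i+1
            (x0, y0), (x1, y1) = gesture[i], gesture[i + 1]
            points.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
        t += rate_gene
    return points
```

Because the start and rate values are genes, individuals whose head motions happen to match the gesture late, or at half speed, can still be credited - and those reading parameters evolve along with the motion itself.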


SAVING AND LOADING GENETIC DATA

