(Animated Artificial Life)
The invention we call the image remains a standard vessel for perceivable content, one that is measured as being either "realistic," "not realistic," or something in between. This standard, against which the adjective "realism" is applied, has, in my opinion, misled developers of new computer-graphic media (such as artificial life and computer games) in their pursuit of virtual realities. A different, and very potent, modality of realism can be rendered.
Some visually-oriented readers may never reach this section because throughout the paper
there is a lack of rich visual illustration: the creatures are almost all stick figures.
That is because the domain I have been exploring doesn't require any more than skeletal
expressions of articulated form and motion. If these illustrations could be set in motion,
however, one would immediately see where the depth lies: not simply in the x, y, or z
dimensions, but in the four dimensions of space and time. One cannot see physics if
there is no movement. Also, a realistic, deep physics can make up for a lack of visual
detail. When this living motion is eventually clothed, the dynamism can still remain
within, as determined by the physical and biological laws employed.
Part of the methodology here is to take advantage of something that the eye-brain system
is good at: detecting movement generated by a living thing. Since viewers need high
frame rates to resolve physically based motion, and available computer speeds are
limited, sacrifices must be made elsewhere. I have chosen not to spend much
computational energy on heavyweight texture maps, lighting models, and surface
cosmetics, so that more computation can be spent on deep physics and on animation
speed (to show off the effects of subtle movements that have evolved).
No 2D images of Disneyesque eyes and ears are pasted onto the creatures.
Instead, as the simulation itself deepens (for instance, if light sensors
or vibration sensors are able to evolve on arbitrary parts of bodies,
as determined by evolution), primitive graphical elements visualizing these
phenotypic features will be rendered. They may be recognizably eye-like and ear-like,
or they may not. The important thing is to visualize what is there, rather
than what is not. With no cosmetics to obscure them, the essential dynamics remain in view.
3.5.1. Techniques Used
While I have chosen to compromise surface rendering for the sake of more direct visual expression and faster animation, a few techniques used to visualize the creatures are worth mentioning.
For all the articulated stick figures, black 3D lines are drawn to represent body parts. Line
occlusions are not dealt with: since the lines are all the same color, occlusion is not a
consideration. The interesting (and important) 3D visualization comes in the drawing of a shadow. This is the
most salient graphical element which gives the viewer a sense of the 3D geometry. The shadow
is simply a "flattened-out" copy of the collection of body lines (without the vertical
component and translated to the ground plane), projecting the figure vertically onto the
ground. The shadow is drawn with a thicker linewidth, and in a shade slightly darker
than the ground color. The shadow is drawn first, followed by the body part lines
(painter's algorithm). In some simulations, a series of shadow objects are drawn in
increasingly darker colors and increasingly thinner widths, to create a composite
shadow object which appears to have blurry boundaries.
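The flattening and layering steps above can be sketched as a simple coordinate transform. This is a minimal sketch, not the original implementation: it assumes y is the vertical axis, the ground plane lies at y = 0, and the function names and width/darkness values are illustrative choices of mine.

```python
# Sketch of the "flattened-out" shadow: project each 3D body line
# vertically onto the ground plane by discarding its vertical component.
# Assumes y is up and the ground plane is at y = 0 (not stated in the text).

def flatten_to_ground(segments, ground_y=0.0):
    """Map 3D line segments (x, y, z) to shadow segments on the ground plane."""
    return [((x1, ground_y, z1), (x2, ground_y, z2))
            for (x1, y1, z1), (x2, y2, z2) in segments]

def draw_creature(segments, draw_line):
    # Painter's algorithm: shadow layers first, body lines last.
    # Several shadow passes, each darker and thinner than the previous one,
    # compose a shadow that appears to have blurry boundaries.
    for width, darkness in [(9, 0.1), (6, 0.2), (3, 0.3)]:  # illustrative values
        for seg in flatten_to_ground(segments):
            draw_line(seg, width=width, darkness=darkness)
    for seg in segments:
        draw_line(seg, width=1, darkness=1.0)  # black body lines drawn last
```

Because the shadow is drawn strictly before the body, no depth buffering is needed; the painter's ordering alone keeps the figure on top of its shadow.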
The progression in designing increasingly complex bodies is accompanied by a similar
progression in rendering. When line-segment-based body parts were variably fattened,
the thicknesses of the lines were likewise increased, so that they appeared as
rectangles of varying widths, oriented along the lengths of the body parts. The
swimbots in Gene Pool are 2D examples of this technique.
Fig. 16. Representing body parts with spheres and cone sections.
In more advanced 3D bodies to be used in future simulations, each joint connecting body parts
possesses a unique radius. The shape of a body part is determined by the radii of the
connecting joints at either end of the part. Each body part is modeled as a portion of
a cone connecting two spheres of different radii, as shown in figure 16. To render these
parts, polygons oriented towards the viewpoint are drawn. Their shapes roughly correspond
to the projections of the solid parts onto the viewplane. Since spheres and connecting
cones can be expressed in parametric form, it is not difficult to generate
2D polygons representing their projections. In this illustration, spheres are shown
as disks of a darker color for clarity.
With this technique, one polygon per body part is drawn as a silhouette, along with
an associated joint-sphere silhouette, instead of many polygons showing a faceted surface.
This enables faster animation speeds. Custom shading techniques are currently being worked out.
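Under an orthographic projection, the silhouette of such a sphere-capped cone section reduces to finding the external tangent lines of the two projected circles. The following is a sketch under that assumption (the function name and the orthographic simplification are mine, not taken from the original):

```python
import math

def cone_silhouette(c1, r1, c2, r2):
    """Silhouette quad of a cone section joining two projected joint-spheres.

    c1, c2: 2D centers of the projected spheres; r1, r2: their radii.
    Returns the four external-tangent points of the two circles, which bound
    the one polygon drawn per body part. Assumes an orthographic projection
    and circles that are neither concentric nor nested (|r1 - r2| <= distance).
    """
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)        # direction of the part's axis
    phi = math.acos((r1 - r2) / d)    # angular offset of the tangent points

    def pt(c, r, a):
        return (c[0] + r * math.cos(a), c[1] + r * math.sin(a))

    return [pt(c1, r1, theta + phi), pt(c2, r2, theta + phi),
            pt(c2, r2, theta - phi), pt(c1, r1, theta - phi)]
```

For equal radii the offset angle is 90 degrees and the quad degenerates to a rectangle along the part's axis, matching the fattened-line rendering described earlier; each joint-sphere is then drawn as a disk over the quad's end.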