Brown Computer Graphics Group

Non-photorealistic Rendering of Dynamic Motion


Visualizing Physical Parameters


Background

Realistic animation of human motion has a wide variety of potential applications, ranging from entertainment to sports training and medicine.  Current techniques for animating human characters focus on the motion itself and typically output a rendering of the moving character. Many applications, however, especially those in the sciences and sports, could benefit from the visual display of supplementary information about the motion. For example, a visual display of weight distribution and active muscle groups could help a dancer to understand not just what a motion looks like, but how it is performed. Physically based approaches to animation such as that pursued by both Pollard and Hodgins make this type of supplementary information readily available. For example, we have developed techniques to scale simulated motions such as running and cycling to new individuals. Ground contact forces and joint torques are an intrinsic part of the simulated motions, and a user comparing the performance of different individuals would wish to compare these quantities across simulations. This information, however, is not currently provided to the user in an intuitive form.

We are using a Java3D environment, modeled after an in-house non-photorealistic rendering system developed at Brown (Markosian, SIGGRAPH 1997). This system is now in use by a substantial number of graduate and undergraduate students at Brown, and the Java3D implementation combines the system's intuitive camera manipulation with the latest Java3D benefits.

The physical parameters to be displayed in this project come directly from physically based simulations or from other sources such as motion capture data. When other motion sources are used, we will calculate the required information using techniques such as inverse dynamics, a method used in robotics to calculate the control torques needed to generate a desired motion.

This project is conducted in conjunction with ongoing research at Georgia Tech. Currently, the Georgia Tech group is exploring new techniques for emphasizing the overall dynamics of human motion. See below for more information.


Current Research (Summer 1999)

The first few weeks of this project were spent studying existing research on human animation, information design, and biomechanics.  Simplicity and elegance in motion visualization are crucial, and so we researched the techniques of Marey, Muybridge, and Tufte, scientists and artists who have analyzed motion.  Their work inspired certain design guidelines for this project:  clarity of presentation, visual honesty, and refined use of color.  Also during this time we ported existing motion playback code from C++ to Java3D, and included a Java trackball camera designed by other Brown students.

Currently, we have designed three devices to represent different aspects of a runner's motion, and have more in progress.  These devices include a center-of-mass shadow, ground contact force arrows, and joint trajectory tracers.  Each can be used on either one or multiple runners, for teaching or comparison purposes.
 



  Center of Mass Shadow
Center of Mass Shadows map the location of the runner's gravitational center onto the ground plane.  The Shadows' sizes change based on the distance between the center of mass and the ground.
A center of mass (COM) shadow is the projection of a runner's center of mass onto the ground plane. The circular shadow's radius varies linearly and inversely with the COM's height (z direction). As the runner's body shifts weight in the x-y plane, the shadow moves across the ground. When the COM's z coordinate decreases (i.e., the COM moves closer to the ground), the shadow's radius increases linearly. Similarly, when the COM moves away from the ground, the shadow's radius shrinks. Using motion tracks from the simulator developed by Hodgins and Pollard, we found that both an adult male model and a child model project their COMs onto the area between their feet.
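The shadow mapping above can be sketched in plain Java. This is a minimal illustration with hypothetical class and parameter names, not the project's actual Java3D scene-graph code:

```java
// Sketch of the center-of-mass shadow mapping (hypothetical names,
// not the project's actual Java3D code).
public class ComShadow {
    private final double groundRadius; // radius when the COM touches the ground
    private final double shrinkRate;   // radius lost per unit of COM height

    public ComShadow(double groundRadius, double shrinkRate) {
        this.groundRadius = groundRadius;
        this.shrinkRate = shrinkRate;
    }

    // The shadow follows the COM's x-y position directly.
    public double[] centerFor(double comX, double comY) {
        return new double[] { comX, comY };
    }

    // Radius grows linearly as the COM approaches the ground
    // and shrinks linearly as it rises, clamped at zero.
    public double radiusFor(double comHeight) {
        return Math.max(0.0, groundRadius - shrinkRate * comHeight);
    }
}
```

Each frame, the playback loop would query `centerFor` and `radiusFor` with the current COM position and update a flat disc in the scene.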

  Ground Contact Force Arrows
Ground Contact Force Arrows change in length and rotation based on the magnitude and direction of the force applied by the ground on the runner's foot.

Comparison between original child and machine-scaled child (MPEG) (QT)

Ground Contact Force Arrows display the magnitude and direction of the forces applied to runners' bodies as they hit the ground. The simulator generates force data for support-phase frames in the form of a reference point and a force vector. The Force Arrows disappear for non-support-phase frames, since no ground contact forces are applied then. We have found that at the end of the flight phase, the foot strikes the ground with a significant forward force. A Force Arrow shows the equal-but-opposite force applied by the ground on the foot. Then, as the foot stabilizes and prepares for lift-off, the Force Arrow changes in length and rotation, intuitively showing the effects of the runner's shifting weight.
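A Force Arrow's geometry can be derived from one frame of simulator output roughly as follows. This is a hedged sketch with hypothetical names and an assumed length-per-newton scale factor:

```java
// Sketch of how a Force Arrow could be derived from one frame of
// simulator output (hypothetical names and scale factor).
public class ForceArrow {
    // The arrow shows the ground's reaction: equal and opposite
    // to the force the foot applies to the ground.
    public static double[] reaction(double[] footForce) {
        return new double[] { -footForce[0], -footForce[1], -footForce[2] };
    }

    // Arrow length scales with force magnitude; the arrow vanishes
    // outside the support phase, when no contact force exists.
    public static double lengthFor(double[] force, boolean supportPhase,
                                   double metersPerNewton) {
        if (!supportPhase) return 0.0;
        double mag = Math.sqrt(force[0] * force[0]
                             + force[1] * force[1]
                             + force[2] * force[2]);
        return mag * metersPerNewton;
    }
}
```

The arrow's orientation would come from normalizing the reaction vector; anchoring it at the simulator's reference point places it under the supporting foot.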
 


Joint Trajectory Tracers
Joint Trajectory Tracers graph the path of a joint across time.  Here, the runner's right elbow follows a periodic path with low amplitude.
Tracers plot the movement of a joint over time. At a fixed frame interval, a Tracer appears in the Java3D universe, marking the location of the joint at that frame. Tracers emphasize the periodicity of a motion and highlight any differences between cycles. Tracers also enhance motion comparisons, displaying, for example, that at a particular frame, Runner A's elbow is much higher than Runner B's elbow.
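The sampling scheme behind the Tracers can be sketched as keeping every Nth joint position as a marker. The class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the tracer sampling scheme (hypothetical names): keep
// every Nth joint position as a marker in the scene.
public class JointTracer {
    public static List<double[]> sample(List<double[]> jointPositions, int stride) {
        List<double[]> markers = new ArrayList<>();
        for (int i = 0; i < jointPositions.size(); i += stride) {
            markers.add(jointPositions.get(i)); // one marker per sampled frame
        }
        return markers;
    }
}
```

A small stride gives a dense, nearly continuous trail; a larger stride spaces the markers out, which makes the spacing itself a cue to joint speed.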

Future Ideas
2D plotting of joint position over time - overlaying graphs from two runners for comparison
2D plotting of foot/arm height over time  - overlaying graphs from two runners for comparison
2D bar chart comparing torques on all body parts for a given frame
Horizontal lines to emphasize stride length


Related Reading

Information Design
Biomechanics

Related Projects


Brown Researchers

Artistic Renderings of Dynamic Actions (Georgia Tech)
http://www.cc.gatech.edu/gvu/animation/Areas/nonphotorealistic

Non-photorealistic rendering is a field of significant interest in the graphics community, with many recent papers on such topics as rendering with simulated watercolors, creating images in an impressionist style, and automatically extracting silhouettes. We are interested in expanding this repertoire by exploring techniques that allow moving figures to be rendered so that the dynamics of their movement are emphasized.

One style that has attempted to render figures showing movement is Italian Futurism (1909). Although tangled up in the politics and violence of the time, the artists of the Futurism movement tried to provide a portrayal of energy and speed through their paintings, photographs, and sculpture. Many of their stylistic techniques lasted beyond the period.

Often their figures appear to be off-balance, leaning or falling in a particular direction (Boccioni, Riot in the Galleria, 1910, and Raid, 1911). In Raid, an additional technique, "lines of force," is evident: rays of light form multiple lines to heighten the appearance of distress in the crowd (Tisdall, 1978). Another artist in the movement, Balla, used multiple images of a sparsely rendered subject to convey a feeling of motion (Balla, Leash in Motion, 1912, and Girl Running on a Balcony, 1912). And finally, Boccioni experimented with gross distortions of the shape of the muscles to illustrate the motion of a sculpted figure (Boccioni, Spiral Expansion of Speeding Muscles, 1913).

In this project, we will render motion data as animations, images, or textured statues. All the non-photorealistic rendering techniques that we develop will attempt to emphasize the motion in the scene.

Very Abstract Rendering

Changing the Motion
Changing the Model
Supplementary Information

We plan to explore two different sources for the motion: simulation and motion capture. Simulation has been part of the lab's research agenda for the past five years. With this approach, rigid body simulations are combined with control systems to compute the motion of animated human-like figures. Motion capture is a technique in which the motions of a human actor are captured by sensors and cameras so that the joint angles can later be played back through a graphical figure. Both approaches should yield motion appropriate for this project, although the behaviors that can be simulated are more limited, and motion capture data has characteristic flaws because of the kinematic mismatch between the human subject and the graphical character.

Gatech Researchers

