
Copyright

by

David Charles Thompson

2000
Feasibility of a Skeletal Modeler
for Conceptual Mechanical Design

by

David Charles Thompson, B.S.M.E., M.S.E.

Dissertation

Presented to the Faculty of the Graduate School of

The University of Texas at Austin

in Partial Fulfillment

of the Requirements

for the Degree of

Doctor of Philosophy

The University of Texas at Austin

December 2000
Feasibility of a Skeletal Modeler
for Conceptual Mechanical Design

Approved by
Dissertation Committee:

Richard H. Crawford

Kristin L. Wood

S. V. Sreenivasan

Ronald E. Barr

Chandrajit L. Bajaj
Dedicated to narcissus.
Acknowledgments
I would like to acknowledge all the people whose shoulders (and toes) I have
stood upon.
Since I am still impoverished, I will acknowledge these people by bestowing
upon them a multitude of brownie points: Rich Crawford, Kris Wood, and Uichung
Cho, saints of extreme patience; Matthew Thompson, for telling me to stick with
it against his better judgement; Margaret Thompson, for telling me to stick with it
because her son could do no wrong; David E. Thompson, for telling me about how
he'd typed his dissertation in one day and had it signed the next, so why couldn't
I?; Marc Compere, for excessive analogies between life and differential equations;
Seokmin Park, for making me look bad if I started to slack; Brad Jackson, for
pretending to listen to my running commentary on system disministration; Rodrigo
Ruizpalacios, for his pity upon me; Monty Greer, for being fascinated with physics
after I'd taken it for granted; Mike Vanwie, for reminding me engineers should get
their hands dirty once in a while; Matthew Green, for a healthy skepticism of my
priorities; Ranjit Deshmukh and Matthew Campbell, for grading a whole lot faster
than me; Dan McAdams, for BIG FONTS; Jeff Norrell, for teaching me how to
fight the bureaucracy; Irem Tumer, for turning on the light at the end of the tunnel;
Paul Koeneman, for his straightforward disappearing act; and Valerio Pascucci, for
a really good description of the tradeoffs of parallel isocontouring.
Finally, I would like to thank all the people that have written the free software
that I used to produce this dissertation, and that's a big honking lot of software:
Alpha shapes v2.2 (for inspiration and algorithms), Luis Velho's adaptive triangulation
code, Jules Bloomenthal's implicit.c, Hans Kohling Pedersen's imp (for inspiration),
Qt, libxml, QpThreads, Mesa3D, gmp, gdb, ElectricFence, Graphviz, Dia, Octave,
TeX, LaTeX, XFig, Ghostscript, pstoedit, gnuplot, gcc & g++, XFree86, Linux, and
a ton of other stuff (tcsh, sed, grep, awk, etc.)

David Charles Thompson

The University of Texas at Austin


December 2000

Feasibility of a Skeletal Modeler
for Conceptual Mechanical Design

Publication No.

David Charles Thompson, Ph.D.


The University of Texas at Austin, 2000

Supervisor: Richard H. Crawford

Even though much of the design process takes place before a product's geometry
is specified, solid modelers are most frequently used when the final shape of the
product is known. One reason for this is the amount of input required on the part
of designers to create even simple models. We propose a modeler requiring only
weighted points to be specified. The connectivity of the points is determined based
on proximity and the value of the weight at each point. The connected diagram, a
subcomplex of the regular triangulation of the input points known as an alpha shape,
serves as a skeleton for an offset surface which becomes the solid model. The offset
from the skeleton is restricted to lie inside a union of balls centered around the input
points with radii related to the weights of the input points. This restriction forces the
solid model and the skeleton to have the same homology. The homology groups are
easy to compute for the skeleton. In this way, a designer can impose both geometric
and topological constraints on a model. Also, the skeleton can be thought of as a
graph to which design information can be attached; for instance, we show how the
portion of the offset surface associated with each input point can be easily identified
and used in lumped parameter analysis for simulations. Functional representations
of a design might be attached to the skeleton as well. Finally, it is demonstrated
that the skeleton can serve as a generator for multiple offset surfaces that specify
a materially gradient solid model. The feasibility of the modeler is shown through
the design of a compliant bottle opener using the modeling techniques described.

Contents

Acknowledgments v

Abstract vii

List of Symbols xi

Chapter 1 The Need For A Conceptual Modeler 1


1.1 Current practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Process of Modeling . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Dissertation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 An example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Chapter 2 Partitioning Space 12


2.1 Voronoi diagrams and Delaunay triangulations . . . . . . . . . . . . 12
2.2 Power diagrams and regular triangulations . . . . . . . . . . . . . . . 14
2.3 Representing a regular triangulation . . . . . . . . . . . . . . . . . . 17
2.3.1 Vertex insertion algorithm . . . . . . . . . . . . . . . . . . . . 25
2.4 Topology of finite triangulations . . . . . . . . . . . . . . . . . . . . 26
2.4.1 A compact geometric notation . . . . . . . . . . . . . . . . . 27
2.4.2 Division ain't what it used to be . . . . . . . . . . . . . . 31
2.4.3 Homology groups . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4.4 Computing the Betti numbers . . . . . . . . . . . . . . . . . . 34

2.4.5 Algorithm and implementation . . . . . . . . . . . . . . . . . 37
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Chapter 3 The Skeletal Subcomplex 47


3.1 Alpha Shapes and the Space Filling Diagram . . . . . . . . . . . . . 47
3.2 Duality of the unions of balls and alpha shapes . . . . . . . . . . . . 49
3.3 Redundant vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.4 The example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Chapter 4 Fleshing the Skeleton 57


4.1 Literature survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Power cells and initial tetrahedra . . . . . . . . . . . . . . . . . . . . 61
4.3 Adaptive triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4 Parallel isocontouring . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.4.1 Safe blending . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.5 Algorithm and Measurements . . . . . . . . . . . . . . . . . . . . . . 69
4.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Analysis Using The Offset . . . . . . . . . . . . . . . . . . . . . . . . 76
4.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

Chapter 5 Non-Uniform Material Distributions 79

Chapter 6 Conclusions 85
6.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.2.1 Modeler internals . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.2.2 Applications for a topological modeler . . . . . . . . . . . . . 87

Vita 96

List of Symbols

Latin symbols
Bi . . . . . Either (1) elements of Ci formed by boundaries of (i+1)-simplices, or (2) a ball centered at s′i with radius √s″i.
Ci . . . . . Set of all possible sets of simplices of dimension i.
Hi . . . . . The ith homology group. Hi = Zi/Bi.
K . . . . . A simplicial complex of R(S).
K . . . . . A subset of K s.t. any simplex in the subset and its boundaries are members.
P . . . . . The power diagram of the input points.
R . . . . . The regular triangulation of the input points.
S . . . . . The set of all input vertices. S = {s0, s1, . . . , sp}.
Zi . . . . . All members of Ci that have no boundary (i.e., c ∈ Ci with ∂c = ∅).
ci . . . . . The material composition vector of the ith skeletal offset.
si . . . . . The ith input vertex. i may index S or σ. si = (s′i, s″i).
s′i . . . . . The position in space (R³) of the ith input vertex.
s″i . . . . . The weight of the ith input vertex.

Greek symbols
βi . . . . . The ith Betti number of a complex (the cardinality of Hi).
∂k . . . . . The k-dimensional boundary operator: ∂k[s0 s1 . . . sk] = Σ_{i=0}^{k} (−1)^i [s0 s1 . . . ŝi . . . sk], where ŝi denotes omission of si.
σ . . . . . A simplex composed of input vertices, usu. [s0 s1 . . . sk].
ρσ . . . . . The size of a simplex σ; the smallest weight of any point orthogonal to σ.
π(x, p) . . . . . The power distance between x and p: (x′ − p′) · (x′ − p′) − x″ − p″.
Chapter 1

The Need For A Conceptual Modeler

This dissertation describes the development of a new solid modeler aimed


specifically at aiding the early stages of the design process. You may think of the
dissertation as the early stages of the design of a solid modeler. This first chapter
describes the customer needs and how they are currently met. The rest of the
dissertation focuses on the feasibility analysis of one proposed solution to meet
these needs and the comparison of the proposed modeler to others' research.

1.1 Current practice

In the prevalent engineering design methodologies (Pahl & Beitz 1977, Ull-
man 1992, Ulrich & Eppinger 1995, Otto & Wood 2000), conceptual design is divided
into functional specification, concept generation, and concept selection, as shown in
Figure 1.1. Since designs are intended to be independent of geometric form during
functional specification, geometric modeling should not be required. However, the
input to the concept generation phase is the result of the functional specification:
a set of atomic functions, the flows that relate them, and engineering specifications

against which to measure feasibility. The output of concept generation is a set of
concept variants and initial calculations showing the variants as feasible (against
metrics in an engineering specification).
The methodologies mentioned divide concept generation into two steps: one
where each individual function is given a set of possible physical forms, followed
by a step where combinations of these forms are assembled into a complete concept
variant. Solutions for many different subfunctions are combined into overall
concepts that seem likely to produce useful designs. This is not always successful,
and sometimes combinations can be successful with small changes to an unworkable
solution. Designers need a tool that will let them try these changes out rapidly,
often removing one area of a part and replacing it with a different form. This is
similar to the engineering spreadsheet proposed by Ramaswamy and Ulrich (1993).
Compare this to what most commercial modelers can accomplish: current
modelers accept geometric relations (such as coincidence or tangency) and the de-
gree of a curve or surface that satisfies these relations. These modelers output a
boundary representation of a part's geometry. Parametric modelers allow one to
change some of these relations and recalculate the results. However, if the topology
of the output changes, these relations may not make sense. Some modelers, such as
Pro/ENGINEER¹, allow relations to be tied to features which are in turn related
to either the function or manufacture of the part (Shah & Mantyla 1995, Shah,
Mantyla & Nau 1994, Bezdek, Thompson, Crawford & Wood 1999). However, these
modelers still depend on particular faces of a solid model being present after a fea-
ture has been generated. Simple changes such as altering the location of a feature
on the part can prevent the modeler from identifying the correct face, even when
the feature's faces are locally unchanged.
To summarize: some modelers allow input based upon the functions from a
functional specification, but frequently do not allow changes in topology (the form of
¹ A product of Parametric Technology Corporation.

[Figure: flowchart of the design process: Define customer needs; Translate needs into engineering requirements; Describe the process of using the product; Identify primary product functions; Generate many solutions for each function; Combine solutions into concept variants; Evaluate feasibility of concept variants; Compare concept variants to each other; Embody the best concept. The label "Conceptual Modeler" brackets the concept generation and evaluation steps; "Current Solid Modelers" brackets concept embodiment.]

Figure 1.1: The mechanical design process.
Table 1.1: What designers need from a conceptual solid modeler. (D=Demand, W=Wish)

Generating concept variants
  D  Quickly change a solution principle for one subfunction while maintaining the remainder of a concept variant.
  D  Quickly change the geometric layout of modules (functionally independent parts of a product).
  W  Build concept variants from a library of standard solution principles.
  W  Attach functional description to geometric or topological regions of a model.

Evaluating concept variants
  D  Generate geometric values needed for lumped parameter models that represent the product.
  W  Generate a lumped-parameter analysis automatically.
  D  Allow lumped-parameter model to vary geometry (for optimization).

Incomplete geometry
  D  Specify topology of a model with little or no geometry.
  D  Constrain topology without geometric constraints.

the geometric relations) to be made. Furthermore, the output from most modelers
is not suitable for preliminary feasibility calculations; most models engineers create
early in the design process are lumped models (as opposed to finite element or finite
difference models). While finite element or difference models may produce more
accurate results, they also require much more information than may be available
during concept generation. Thus, engineers need a modeler that allows incomplete
geometry specification. By this, we mean that an engineer should be able to specify
the topology, or connectivity, of the regions of the model alone and have some
default geometry present until detail design is underway.
The differences between design requirements and modeler features lead to
a list of proposed needs (in Table 1.1) that designers have for a conceptual solid
modeler.

1.2 The Process of Modeling

At its most basic level, geometric modeling involves the designer identifying
points and then specifying geometric relationships between them. Conventional
modelers require designers to connect the points into a shape that is extruded,
revolved, or swept into a solid, designate the location of primitive solids which are
combined using boolean operations, or create loops of edges which form a solid
model (this is called surface modeling as opposed to solid modeling).
Popular solid modelers such as Solidworks and PRO/ENGINEER have two
basic processes through which geometry is modified:

1. A 2D drawing is created in a construction plane and then used to create a swept


volume². The volume is then turned into a boundary representation directly
or used in a boolean set operation whose result is a boundary representation
of the solid model.

2. Faces, edges, or vertices of the existing part geometry are selected and an
operation to perform on them is chosen. Usually, these operations modify
the boundary representation directly. Examples of these operations would be
drafts, fillets, and rounds.

Obviously, some initial geometry must be created using the first process before the
second modeling process can be used. This dissertation contends that the initial
process of sweeping volumes from 2D drawings is not ideal for conceptual design
because it is not how designers think at this stage. For this reason, a new process
will be developed. The goal of this process will be to reduce the amount of input
required. There are several ways to accomplish this:

- Reduce the dimension of the input required by the designer.

- Do not always require the designer to specify the connectivity between points.
² A swept volume is a generalization of revolution and extrusion operators and can be additive or subtractive.

- Provide a library of features common to many designs.

The first two, in particular, will be important to the new process. Conventional
modelers reduce the dimension of the input required by separating the input into
two steps; the first is to create a 2D cross-section. The second is to specify how
the section is swept. Instead, the conceptual modeler will associate a thickness,
or radius, with each input point. The radius is used to determine how the points
are connected as well as the final shape of the solid. Rather than sweep a set of
connected points, the connected set will serve as the skeleton of the part, with
the radius function at each vertex determining the distance of the solid's boundary
from the skeleton. Using weighted points is an idea originating from Edelsbrunner's
(1992) use of them in visualizing and analysing molecular models (Edelsbrunner,
Facello & Liang 1998, Facello 1996, Bajaj, Pascucci, Holt & Netravali 1998). The
functions that this conceptual modeler must perform are shown in Figure 1.2.
The function structure (FS) in Figure 1.2 is for a piece of software, not a
mechanical product, so some explanation is required. The solid lines represent data
that must be stored (and are thus similar to materials in a mechanical FS). The
dashed lines represent data that is used immediately and not stored (and are similar
to signals in mechanical FS).
When points are specified by the designer, the modeler must generate the
skeleton that they define. Additional operations allow the designer to specify when
topological changes are allowed. If they are not allowed and inserting or deleting
a point would change the topology, the modeler will attempt to insert points to
maintain the topology. Finally, the solid model must be created from the skeleton.
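As a concrete sketch of this input model, the designer-facing data could be as small as a list of weighted points. This is purely illustrative; the names below (WeightedPoint, SkeletalModel) are hypothetical stand-ins, not the dissertation's implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class WeightedPoint:
    """An input vertex: a position in R^3 plus a weight (a squared radius)."""
    x: float
    y: float
    z: float
    weight: float  # s''_i; the associated sphere has radius sqrt(weight)

@dataclass
class SkeletalModel:
    """The designer's only required input is a set of weighted points;
    connectivity (the skeleton) and the offset surface are derived."""
    points: list = field(default_factory=list)
    allow_topology_change: bool = True

    def insert(self, p: WeightedPoint) -> None:
        # A real modeler would update the regular triangulation here and,
        # if topology changes are disallowed, insert extra points to
        # preserve the current topology.
        self.points.append(p)

model = SkeletalModel()
model.insert(WeightedPoint(0.0, 0.0, 0.0, 1.0))
model.insert(WeightedPoint(2.0, 0.0, 0.0, 0.25))
print(len(model.points))  # 2
```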

1.3 Dissertation

Some sections of the function structure presented in the previous section


have been researched already. However, several challenges remain before these tools

[Figure: function structure diagram with blocks: Accept Points; Store Points; Generate Skeleton; Represent Skeleton; Maintain Topology; Generate Surface; Represent Surface; Accept Selection; Store Selection; Accept Operation; Perform Operation; Modify Point Weights; Modify Point Location; Modify Skeleton Blending; Aggregate Points Into Skeletal Element; Delete Points; Display Surface & Skeleton. Signals include "Allow Topology Change", "Regenerate", and "Topology Change Attempted".]

Figure 1.2: The function structure for a skeletal modeler.
can be useful for conceptual design. In particular, these will be addressed by this
dissertation:

1. Modeling operations for generating and manipulating skeletons.

2. Ensuring the topology of the offset surface matches the skeleton.

3. Finding ways to combine the representation of strict offsets and blobby blend-
ing models in a way that preserves topological information on the solid.

4. Quickly generating the offset surface from a skeleton.

The first problem of generating and manipulating skeletons includes methods


for placing and moving points, changing the radius function of the skeleton, adding
sharp edges and corners (without manually inserting skeletal elements), and con-
straining skeletal topology. Techniques that might be useful but will not be covered
include multiresolution editing of the skeleton and the use of one skeletal object to
deform or imprint another.
The second area to be addressed focuses on the techniques to transform the
skeleton into a solid model. Since work has already been done for the case of rolling
ball blends of unions of spheres (Bajaj et al. 1998), I will consider the case of strict
offsets and general implicit functions (i.e., blobby blending). The research issue is
finding an efficient way to generate an approximation to the surface for display and
interchange with other modelers.
The third goal, of maintaining the topology of the solid model, will be achieved
by allowing limited blending of neighboring skeletal segments that does not change
the model topology.
The last goal, to provide editing at interactive speeds, will be addressed
through the use of spatial data structures and parallel generation of the offset surface
from the skeleton. The power diagram of points sampling the skeleton divides space
into cells which contain portions of the offset surface. Dividing these cells among
multiple processors will provide for parallel surface generation.
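The cell-per-worker division of labor can be sketched with a toy harness. This is illustrative only: contour_cell is a stand-in (real cells would carry geometry, not integer ids), and the dissertation's implementation used its own threading via QpThreads:

```python
from concurrent.futures import ThreadPoolExecutor

def contour_cell(cell_id: int) -> str:
    """Stand-in for extracting the piece of the offset surface that lies
    inside one power cell; a real implementation would run isocontouring
    restricted to that cell's geometry."""
    return f"patch-{cell_id}"

cells = range(8)  # ids of the power cells that intersect the surface
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves cell order, so the patches can be stitched
    # back together deterministically
    patches = list(pool.map(contour_cell, cells))
print(patches)
```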

When these goals are complete, the result will not be a complete geometric
modeler for conceptual design. The key element missing will be the ability to relate
the skeleton to engineering models. But that is another story.
With these goals in mind, the primary functions from Figure 1.2 related to the
research goals are discussed and implemented in the following chapters. "Generate
Skeleton" and "Maintain Topology" are discussed in Chapter 2 (for strict offsets)
and generalized to include blending in Chapter 4. "Generate Surface" is embodied
in Chapter 4.

1.4 An example

This section introduces a running example of how the modeler will be used.
As Figure 1.1 shows, a designer will have some information from the first stages
of conceptual design before solid modeling is useful. Our example will cover the
design of a bottle opener for 20 oz to 3 L twist-cap soda bottles. The following is
information a designer would have available.
The important customer needs are that the bottle opener

- be small and portable,

- tighten and loosen caps,

- not damage the bottle or cap,

- not lose the cap,

- work with wet hands,

- work with one or no hands,

- use human force, and

- be inexpensive.

Table 1.2: The morphological matrix for the bottle opener.
Function          Solutions
Import Force      (sketched solutions)
Amplify Force     lever; wedge; impact
Couple to bottle  hose clamp; compliant jaw
Rotate cap        screw; spring wedge

In addition to the customer needs, the designer would recognize that since it is
intended for consuming mass quantities of carbo-bevs, the opener should not shake
the drink enough to cause fizzing. A function structure for the bottle opener is shown
in Figure 1.3 and some solutions to the primary functions are shown in Table 1.2.
With this information, the designer is prepared to combine solution principles into
overall concepts and a solid modeler becomes a meaningful tool.

[Figure: function structure diagram: flows of human force, bottle neck, and cap pass through the blocks Import Hand; Guide Hand; Import Force; Amplify Force; Stabilize Force; Store Force; Release Force; Import Neck; Couple to Neck; Decouple from Neck; Export Neck; Rotate Cap wrt Neck; Import Cap; Couple to Cap; Decouple from Cap; Export Cap. Signals include "Done" and "Cap is Loose".]

Figure 1.3: The function structure for a bottle opener.


Chapter 2

Partitioning Space

As we've discussed in the first chapter (cf. §1.2), whatever else gets specified,
designers must first specify points. One goal of the conceptual modeler is to reduce
the amount of input required beyond points. What makes this possible is the work
of two mathematicians: Voronoi and Delaunay.

2.1 Voronoi diagrams and Delaunay triangulations

Voronoi (Voronoi 1908) noticed that any set of points can be thought of as
dividing space into regions; given a set of points, S = {si}, i ∈ [0, p], the region of
space associated with a point si ∈ S is the set of points in space closer to si
than to sj for any j ≠ i. By closer, we mean that the length of the line from some
point, x, to si is less than the length of a line from x to any sj, j ≠ i.
We'll be fiddling with this definition of closer later on (§2.2), but for now,
consider the picture in Figure 2.1. The picture shows a two-dimensional Voronoi
diagram of a set of points. The main concern here is that each segment of a region's
border is a set of points equidistant from exactly two points, say s0 and s1 . Think
of the border as connecting s0 and s1 , rather than dividing the space between them.
They are connected by their proximity to each other. If another input point s5 was

Figure 2.1: An example of a Voronoi diagram in two dimensions. The Voronoi


regions at the edges are infinite so that all of R² is enclosed.

placed between s0 and s1 , their regions would no longer share a border. In fact, as
Figure 2.1 shows using red lines, connecting only input points that share borders
produces a graph. This graph is named the Delaunay triangulation, after the Russian
mathematician. The Delaunay triangulation and the Voronoi diagram are called
duals of each other because of the relationship between the Voronoi boundaries and
the Delaunay edges: each line segment of a Voronoi boundary has a corresponding
edge in the Delaunay triangulation.
Now, let's bring these diagrams back into the context of the conceptual mod-
eler. The Delaunay triangulation can eliminate the need for further input from a
designer by assigning a default connection to input points. Also, the Voronoi region,
or cell, associated with each input point corresponds to a lump around that input
point which can be used for lumped-parameter analysis.
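In the unweighted case, this default connectivity can be computed straight from the empty-circumcircle property: a triangle belongs to the Delaunay triangulation exactly when no other input point lies inside its circumcircle. A brute-force 2-D sketch (illustrative only and O(n^4); practical modelers use incremental algorithms, as discussed in §2.3):

```python
from itertools import combinations

def orient(a, b, c):
    """Twice the signed area of triangle abc; > 0 when abc is counterclockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def incircle(a, b, c, d):
    """> 0 iff d lies inside the circle through a, b, c (abc counterclockwise)."""
    rows = [(p[0] - d[0], p[1] - d[1]) for p in (a, b, c)]
    m = [(x, y, x * x + y * y) for x, y in rows]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def delaunay_edges(pts):
    """Edges of the Delaunay triangulation of a small 2-D point set."""
    edges = set()
    for i, j, k in combinations(range(len(pts)), 3):
        a, b, c = pts[i], pts[j], pts[k]
        if orient(a, b, c) < 0:
            b, c = c, b                      # make the triangle counterclockwise
        if orient(a, b, c) == 0:
            continue                         # skip collinear (degenerate) triples
        if all(incircle(a, b, c, pts[m]) <= 0
               for m in range(len(pts)) if m not in (i, j, k)):
            edges.update(tuple(sorted(p)) for p in ((i, j), (j, k), (i, k)))
    return edges

pts = [(0, 0), (4, 0), (2, 1), (2, -1)]
# The long diagonal (0, 1) fails the incircle test and is not an edge:
print(sorted(delaunay_edges(pts)))  # [(0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

Connecting only the pairs that survive the incircle test yields exactly the graph drawn in red in Figure 2.1.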
These graphs, which describe how the space around the input points is con-
nected, can be used for other purposes as well. In particular, section 2.4 describes
how the topology of a model can be related to the topology of the Delaunay trian-
gulation.

So far, we've shown which input points could be connected to each other
based on proximity. We haven't discussed whether those points should be connected,
though. Also, we haven't used all the information we proposed to gather from the
designer: namely, the radius specified at each input point. An extension of the
Voronoi diagram, called the power diagram will help with this.

2.2 Power diagrams and regular triangulations

The Voronoi diagram uses the standard distance metric to an input point
as the method for dividing space into regions associated with each point. However,
this assumes that each input point has the same importance. By associating a
radius or weight at each point, we give different importances to the points. The
regions associated with points will change depending on their relative weights. This
is accomplished by using a new distance metric,

d(p, si) = (p − s′i) · (p − s′i) − s″i

where si = (s′i, s″i) is an input point with weight s″i ∈ R and p is a test point (i.e.,
any point in R³). Note that d(p, si) is the square of the distance from point p to a
circle of radius √s″i centered at point s′i. Thus, from now on, if an input point is
referred to as a sphere, we're referring to the sphere centered at s′i with a radius of
√s″i. We also define the power distance, π(si, sj), as

π(si, sj) = (s′i − s′j) · (s′i − s′j) − s″i − s″j

where si and sj are input vertices. Cell boundaries occur when the power distance
between adjacent vertices is at a minimum.
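Both metrics read directly off the definitions; a small sketch with positions and weights stored as plain tuples (an illustrative representation, not the modeler's):

```python
def d(p, s):
    """Power of a test point p (a 3-tuple) w.r.t. a weighted point s = (pos, weight)."""
    pos, w = s
    return sum((a - b) ** 2 for a, b in zip(p, pos)) - w

def power_distance(si, sj):
    """pi(si, sj), the power distance between two weighted points."""
    (pi_pos, wi), (pj_pos, wj) = si, sj
    return sum((a - b) ** 2 for a, b in zip(pi_pos, pj_pos)) - wi - wj

s0 = ((0.0, 0.0, 0.0), 1.0)       # the unit sphere at the origin
print(d((2.0, 0.0, 0.0), s0))     # 3.0: outside the sphere, positive
print(d((0.5, 0.0, 0.0), s0))     # -0.75: inside the sphere, negative
print(power_distance(s0, ((3.0, 0.0, 0.0), 1.0)))  # 7.0
```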
Of course, this more complicated distance metric brings along complications;
we'll mention a few here. Properties of power diagrams are discussed in detail by
Aurenhammer (1987).

- Boundaries of power diagram cells (or just power cells for short) aren't
necessarily halfway between adjacent input points. If the two spheres intersect,
they occur where the hyperplane of intersection between the spheres intersects
the line joining the two points. This is shown in Figure 2.2, parts (a) and (b).

- An input point's cell may not contain the input point; this can occur when
an input point is inside the sphere of another input point. See Figure 2.2(c-d).

- Also, some input points may not have any region of space for which the power
distance is smaller for them than any other input point. This can happen when
one input point's sphere is contained in another's sphere. Such an input point
is called redundant. Figure 2.2(c-d) represent situations where a redundant
point can occur, but not all points in these configurations are redundant.

- Define point q′ as the intersection of the cell boundary with the line joining
points s′i and s′j. Let q = (q′, 0). Figure 2.2 shows four cases relating the
sphere configurations to the power distance. In (a) and (b), the power cell
boundary is between the two input points and π(si, q) < 0 if they intersect.
In (c) and (d), the power cell boundary is to one side of both input points and
π(si, q) > 0 only when one sphere completely contains another.
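The sign test at q can be made concrete: solving d(x, si) = d(x, sj) for x on the line through s′i and s′j gives q′, and the sign of π(si, q) with q = (q′, 0) separates intersecting from disjoint spheres. A sketch under those definitions, with the same illustrative tuple representation as before:

```python
def radical_point(si, sj):
    """q': where the power-cell boundary crosses the line from s'_i to s'_j."""
    (pi_pos, wi), (pj_pos, wj) = si, sj
    diff = [b - a for a, b in zip(pi_pos, pj_pos)]
    dd = sum(c * c for c in diff)            # |s'_j - s'_i|^2
    t = (dd + wi - wj) / (2.0 * dd)          # from d(q', si) = d(q', sj)
    return tuple(a + t * c for a, c in zip(pi_pos, diff))

def pi_at_q(si, sj):
    """pi(si, q) where q carries the position q' and zero weight."""
    q = radical_point(si, sj)
    pos, w = si
    return sum((a - b) ** 2 for a, b in zip(pos, q)) - w

overlapping = (((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0))
disjoint    = (((0.0, 0.0, 0.0), 1.0), ((4.0, 0.0, 0.0), 1.0))
print(pi_at_q(*overlapping))  # -0.4375: spheres intersect, q is inside sphere i
print(pi_at_q(*disjoint))     # 3.0: spheres are disjoint
```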

The dual of the power diagram is the regular triangulation. Using the prop-
erties above, we can select edges of the regular triangulation to be members of the
skeleton by examining the sign of the distance metric at point q. In fact, the skele-
ton will be composed of more than just edges of the regular triangulation: it will
be vertices, edges, triangles, and tetrahedra from the regular triangulation. For
although the regular triangulation might just appear to be a set of edges, it is also
a way to partition space like the power diagram. Look again at Figure 2.1 and you
will see that the Delaunay triangulation can be thought of as

[Figure: four configurations of two spheres and their power-cell boundaries: (a) Spheres intersect; (b) Spheres don't intersect; (c) Cell 0 doesn't contain x0; (d) Point x0 is not in its cell and may be redundant.]

Figure 2.2: How power cell boundaries are related to input point weights.
- a set of edges connecting vertices,

- a cyclic, undirected (and for the 2D case, planar) graph, or

- a division of R² (R³) into triangles (tetrahedra) for the 2D (3D) case.

So far, we've discussed the properties of power diagrams and regular triangulations,
but we haven't described the algorithms or data structures needed to represent
them. We'll take care of that in the next section. The section following that reviews
the mathematics used to examine the topology of a regular triangulation from the
point of view of the last item in the list above. This is followed by a more thorough
discussion of how the skeleton will be selected as a subset of the regular triangulation.

2.3 Representing a regular triangulation

We are interested mainly in representing the regular triangulation accurately,
since portions of the triangulation will be the skeleton of the designer's model. The
regular triangulation is a tricky thing to represent. There are many degenerate
conditions that can occur when input points are coplanar or cospherical and these
are not easily dealt with. Existing software packages that deal with combinato-
rial geometry make assumptions about the nature of the input and take different
approaches for handling degenerate conditions. Benouamer, Jaillon, Michelucci &
Moreau (1996) and Edelsbrunner & Mucke (1988) have good surveys of techniques
as well as proposed solutions.
As far as input is concerned, programs can

1. assume the input is in general position (i.e., free from degeneracies),

2. accept degeneracies and implement algorithms that detect them, or

3. perturb the input so that it is in general position.

The algorithms can then be implemented with either fixed-point math, exact math
(Edelsbrunner & Mucke 1988), or a hybrid technique, such as interval analysis (Snyder
July 1992, Benouamer et al. 1996). If fixed-point math is used, degeneracies can
occur due to machine truncation error even when the input is in general position.
We will use Edelsbrunner's technique since it has been applied to the problem of
constructing regular triangulations. Edelsbrunner's technique, named Simulation of
Simplicity (SoS), uses exact math and perturbs points so that no special cases need
be handled by geometric algorithms. Each geometric query performed on a set of
points is called a predicate, which returns true or false. For example, one might ask
"Is point x in the positive halfspace defined by p0, p1, and p2?" Each predicate is
implemented as a matrix determinant. If the determinant of the matrix is exactly
0, then point coordinates are perturbed in the order they were input until the
determinant's sign can be determined. The input points themselves are not perturbed;
their order alone determines which coordinate will make the determinant nonzero.
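A sketch of such a predicate, using Python's exact rational arithmetic in place of SoS's perturbation machinery (the function name is mine, not from the dissertation's implementation):

```python
from fractions import Fraction

def positive_halfspace(p0, p1, p2, x):
    """Sign of det([p1 - p0; p2 - p0; x - p0]): +1 if x lies in the positive
    halfspace of the plane through p0, p1, p2 (right-hand rule), -1 if it
    lies in the negative halfspace, and 0 if the points are coplanar -- the
    degenerate case that SoS would resolve by perturbation.  Exact rational
    coordinates mean the sign is never corrupted by truncation error."""
    (a, b, c), (d, e, f), (g, h, i) = (
        [Fraction(q) - Fraction(p) for p, q in zip(p0, pt)]
        for pt in (p1, p2, x))
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return (det > 0) - (det < 0)

assert positive_halfspace((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) == 1
assert positive_halfspace((0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 0)) == 0
```

SoS goes further than this sketch: rather than ever returning 0, it symbolically perturbs the points so every query has a definite sign.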
Now we can discuss how the regular triangulation is stored. Because we have
only discussed the triangulation so far as a graph of vertices connected by edges, the
most obvious way to represent the triangulation might seem to be a graph. However,
a graph won't allow us to ask questions about the proximity of more than a pair
of points at a time. In order to preserve all of the information about proximity in
the regular triangulation, we'll store it as a spatial subdivision. The graph divides
space with triangles into tetrahedra, so tetrahedra will be the basic building blocks
for our representation.
Note that you can consider a tetrahedron the simplest three-dimensional
shape possible, since it is specified with only four points. Similarly, a triangle for
two dimensions, an edge for one. This holds for any arbitrary dimension. We
call this class of k-dimensional convex shapes specified by k + 1 points simplices.
Simplices of dimension k are also called k-simplices. The Greek letter sigma (σ(k) =
[s0, s1, . . . , sk], where the si are the points defining the simplex) is usually a variable
name we'll reserve for a k-simplex. Any k-simplex with k > 0 has a boundary made
up of (k − 1)-simplices. A tetrahedron has four triangles as its boundary, for instance.
These boundaries are called facets of the simplex.
The regular triangulation will be a collection of simplices called a complex.
A complex, K, is a set of simplices such that

1. For any σ ∈ K, the facets of σ are also in K.

2. For any two distinct simplices, σa and σb, their intersection σa ∩ σb is either
null or a common facet, σc. This stipulation prevents simplices that intersect
from being present unless they share all of a common border.

A subcomplex is a subset of simplices from a complex that also meets the definition
of a complex. The skeleton will be a subcomplex of the regular triangulation of the
input points.
Since we are extending Edelsbrunner's regular triangulation algorithm, our
data storage will be similar. The triangulation consists of an ordered list of points
and a list of triangles that refer to those points by index. Each triangle, in addition
to referencing its vertices, contains information indicating whether the triangle is
on the convex hull of the input points and a list of 6 other triangles (referred to
by their position in the list of all triangles). Figure 2.3 shows how the referenced
triangles are related to the six directed edges of the current triangle. Edges can have
as many triangles attached as needed and each triangle need only refer to 6 others,
as shown in Figure 2.4. An example of the 6 entries is given for a simple set of 3
triangles in Figure 2.5. In Figure 2.3, edges 0, 2, and 4 refer to the triangle with its
normal pointing out of the page while edges 1, 5, and 3 refer to the triangle with its
normal pointing into the page. Once four input points are specified, the two possible
orientations of a triangle (the even orientation with edges 0, 2, and 4, or the odd
orientation with edges 1, 5, and 3) refer to different tetrahedra. If the triangle
is on the hull, we say that the orientation pointing outwards refers to a special

[Figure 2.3: Each oriented edge refers to the next CCW triangle. The edge table
in the figure maps edges 0-5 to next triangles A, B, C, F, E, D.]

[Figure 2.4: The edge-facet structure is a list of vertices and triangles.]

null tetrahedron representing all the space outside of the model. Tetrahedra are
not explicit in the edge-facet data structure, but will be stored in a data structure
discussed later.
[Figure 2.5: An example of the edge-facet data structure.]

Several operations, shown in Table 2.1, are defined for traversing the data
structure. This data structure can be used to describe any polygonal surface or
tetrahedral mesh. When representing a tetrahedral mesh, we require

enext(sym(fnext(enext(ef)))) = sym(fnext(sym(enext(fnext(ef)))))

to hold for all edges of any triangle not on the exterior boundary of the tetrahedral
mesh.
In order for the data structure to describe a regular triangulation, we must
develop an algorithm for determining which input points should be connected into

Table 2.1: Operations for traversing the edge-facet data structure.

sym()    Returns the edge-facet number with opposite orientation on the same
         triangle. Algorithm: a 6-entry hash table maps the edge index from
         [0,1,2,3,4,5] to [1,0,3,2,5,4].

enext()  Returns the next CCW edge from the given edge-facet number.
         Algorithm: a 6-entry hash table maps edge indices from [0,1,2,3,4,5]
         to [2,5,4,1,0,3].

fnext()  Returns the next face in the CCW direction on the same edge as the
         given edge-facet. Algorithm: the edge index is used as an index into
         the 6-vector of faces stored by the current triangle, labeled "next
         face" in Figure 2.5.
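The first two operators in Table 2.1 are pure table lookups, and fnext is an array access. A minimal sketch (the next_face attribute stands in for the per-triangle 6-vector of Figure 2.3; the names are mine):

```python
SYM   = [1, 0, 3, 2, 5, 4]   # opposite orientation, same triangle
ENEXT = [2, 5, 4, 1, 0, 3]   # next CCW edge of the same oriented triangle

def sym(edge):
    return SYM[edge]

def enext(edge):
    return ENEXT[edge]

def fnext(triangle, edge):
    """Next triangle CCW about `edge`; `triangle.next_face` is assumed to
    hold the 6-vector of neighbor references each triangle stores."""
    return triangle.next_face[edge]

# Each orientation's three edges form a 3-cycle under enext:
assert [enext(e) for e in (0, 2, 4)] == [2, 4, 0]   # even orientation
assert [enext(e) for e in (1, 5, 3)] == [5, 3, 1]   # odd orientation
assert all(sym(sym(e)) == e for e in range(6))      # sym is an involution
```

Because the tables are fixed, traversal costs a constant number of array accesses per step regardless of how many triangles share an edge.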

tetrahedra. Joe (1989) was the first to note that the 3D Delaunay triangulation of
a set of points could not be constructed by extending 2D techniques that randomly
connected points and then "flipped" connections until each one locally satisfied
the definition of a Delaunay triangulation. However, he went on to show that by
incrementally adding points to the triangulation and flipping connections to the
newly added points that did not meet the definition, a Delaunay triangulation would
result. His algorithm is O(n²) and worked by sorting input points before inserting
them. Facello (1993) developed an experimentally faster method that used the
history of flips, which allowed points to be inserted in any order. Mucke (1993)
improved upon this by eliminating the need for storing the history. This dissertation
uses Mucke's method but could easily be adapted to others. This may be necessary
to handle vertex removal.
Now, in order to describe how the triangulation is constructed, we need to
define the regular triangulation in a way that is independent of the power diagram.
Here it is: for each tetrahedron σ(k) = [a, b, c, d], we define a point (z′, z″) ∈
R³ × R called the orthogonal center of the tetrahedron abcd. If πz(s′m) > s″m
∀m ∈ [0, 1, . . . , p], m ≠ a, b, c, d, then the triangulation is globally regular (Facello 1993).
An orthogonal center is a unique point (z′, z″) ∈ R³ × R in some tetrahedron abcd

[Figure 2.6: Flips to achieve regularity of the triangulation.]

such that πa(z′) = πb(z′) = πc(z′) = πd(z′) = z″. Note that this would require
checking every tetrahedron against every input point not in the tetrahedron, which is
impractical. So, we also define a condition called local regularity which only requires
us to test each tetrahedron against the four tetrahedra on its border triangles.
If all of the input point weights are equal, then the orthogonal center of each
tetrahedron is the center of the sphere defined by the four input points. In that
case, the test for regularity degenerates to testing whether the sphere corresponding
to each orthogonal center contains any input points other than a, b, c, or d, and the
regular triangulation degenerates to the Delaunay triangulation.
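For equal weights the test is exactly the classical Delaunay empty-circumsphere check, which can be executed exactly. A brute-force sketch with my own function names (solving for the circumcenter by Cramer's rule rather than using a lifted determinant, which a production implementation would prefer):

```python
from fractions import Fraction

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def circumsphere(a, b, c, d):
    """Exact center and squared radius of the sphere through four points.
    Equidistance from the center z gives 2(p - a) . z = |p|^2 - |a|^2 for
    p in {b, c, d}; solve the 3x3 system by Cramer's rule."""
    A = [[2 * (Fraction(p[i]) - Fraction(a[i])) for i in range(3)]
         for p in (b, c, d)]
    rhs = [sum(Fraction(p[i]) ** 2 - Fraction(a[i]) ** 2 for i in range(3))
           for p in (b, c, d)]
    D = det3(A)
    z = [det3([row[:col] + [rhs[r]] + row[col + 1:]
               for r, row in enumerate(A)]) / D
         for col in range(3)]
    r2 = sum((z[i] - Fraction(a[i])) ** 2 for i in range(3))
    return z, r2

def violates_delaunay(a, b, c, d, x):
    """Equal-weight regularity test: is x strictly inside the circumsphere
    of tetrahedron abcd?"""
    z, r2 = circumsphere(a, b, c, d)
    return sum((z[i] - Fraction(x[i])) ** 2 for i in range(3)) < r2

tet = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
assert violates_delaunay(*tet, (2, 2, 2)) is False       # well outside
assert violates_delaunay(*tet, (Fraction(1, 4),) * 3)    # interior point
```

With unequal weights the same structure applies, except that the sphere through the four weighted vertices is replaced by the orthogonal center and the comparison uses the power distance.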
Upon insertion of a new vertex (shown as a vertex flip operation in Fig-
ure 2.6), some of the newly created tetrahedra may be nonregular. If so, they
must be flipped, i.e., exchanged for regular versions, as shown in Figure 2.6. When a
flip occurs, some of the tetrahedra neighboring the flipped tetrahedra may become
nonregular. These, too, must be flipped, leading to possibly more nonregular tetra-
hedra. However, it has been shown that only a finite number of flips are required
for any vertex insertion (Edelsbrunner & Shah 1992).
One important note is that the triangulation used by the modeler does not
match the exact definition of a regular triangulation. A regular triangulation does

not contain any edges connecting any point whose sphere is contained entirely in
the union of spheres of other input points. These points are called redundant,
since there is no region of space closest to that point using the power distance
metric. However, since we are dealing with an interactive modeler, we do not wish
to discard points input by the designer; they should be added to the skeleton. This
forces some extra computation when deciding what members of the complex will be
in the skeletal subcomplex and adds some extra logic to the generation of the offset
surface, but produces a more intuitive tool.

2.3.1 Vertex insertion algorithm

Algorithm 2.1: Inserting a vertex into the triangulation

InsertVertex(vector<Rational> x)
  int closestFacet ← LocatePoint(x)
  list<int> nonRegularFacets
  if IsFacetOnHull(closestFacet) then
    { x is outside the convex hull of the input points. }
    list<int> visibleFacets ← CollectVisibleFacets(closestFacet, x)
    TetraKey container
    for all facets, f, of visibleFacets do
      TetraFromFacetAndPoint(f, x, nonRegularFacets, container)
    end for
  else
    { x is inside the tetrahedron with face closestFacet. }
    list<int> visibleFacets ← TetrahedronBoundary(closestFacet)
    TetraKey container ← TetrahedronKey(closestFacet)
    for all facets, f, of visibleFacets do
      TetraFromFacetAndPoint(f, x, nonRegularFacets, container)
    end for
  end if
  FlipNonregularFacets(nonRegularFacets, container)

Since we allow redundant vertices, the vertex insertion algorithm is unchanged
from Mucke's; instead of testing for Delaunayhood, we test for regularity,
which is shown as Algorithm 2.1. In order to locate a point in the triangulation,

Mucke picks several tetrahedra at random and calculates the distance from each
vertex in the tetrahedron to the input point. Then, starting at the tetrahedron
with the smallest distance, we walk through the triangulation until the tetrahedron
containing the point is reached. If the final tetrahedron does not contain the input
point, then the input point is outside of the triangulation (and the closest facet is
on the convex hull of the triangulation).
The visible facets are the facets whose positive halfspaces contain the input
point. A good visual analogy is to think of each tetrahedron as a room and each
triangle as a two-sided wall (i.e., with an interior and an exterior side). All the walls
visible by a person floating at the input point are listed by visibleFacets; when
the closest facet is on the convex hull, these are outward-pointing triangles. If the
point is inside the triangulation, only the 4 inward-pointing facets of the containing
tetrahedron are visible.
Each of the visible facets has three vertices. Adding the input point to those
will form a tetrahedron. The TetraFromFacetAndPoint routine does this for each
visible facet by adding 0, 1, 2, or 3 facets to the model. Along the way, the two
tetrahedra on each side of new facets are tested for regularity. If they are not
regular, the facet is added to a list of nonregular facets. The last argument to
TetraFromFacetAndPoint is a key consisting of the vertices of the tetrahedron con-
taining the input point, if any. This key is used when updating the data structures
representing the topology of the skeleton, and is discussed in the next section.
After the triangles have been created, any nonregular facets are flipped until
the triangulation is locally regular.

2.4 Topology of finite triangulations

One of the goals of the modeler is to provide information about the topology
of a model. By using skeletons (which are subcomplexes of the regular triangulation),
this information becomes relatively easy to manage. However, the theory involves a

good deal of mathematical terminology, so put on your hip boots and let's wade in.
In the following discussion, we will take S to be a unique set of p + 1 input points
{s0 , s1 , . . . , sp }.

2.4.1 A compact geometric notation





 

Figure 2.7: A k-simplex is the convex hull of k + 1 points (geometrically) or a
vector of k + 1 point objects (algebraically).

There are two ways we'll interpret geometric objects: first, as points and
point sets in R³ and second, as combinatorial objects made of labels and combined
into vectors of labels. Consider Figure 2.7. The tetrahedron is described
geometrically as the set of all points satisfying

    Σi=0..3 αi si,   αi ≥ 0,   Σi=0..3 αi = 1

We may also consider the subscript, i, of each point, si, to be a label for that point.
The tetrahedron would then be the vector σ(3) = [s0 s1 s2 s3]. We'll call this the
combinatorial interpretation for lack of a better name.
Note that while the geometric interpretation of a k-simplex is just a point set
(the convex hull of k + 1 of the input points), the combinatorial interpretation also
indicates an order of the points. This order determines the orientation of a simplex.
Recall that a k-simplex represents a halfspace: for example, the three points of a
2-simplex define a plane which splits R³ in half. The orientation of a simplex indicates

which side of the halfspace is taken to be positive. For example, in Figure 2.7, the
triangle σ(2) = [s0 s1 s2] is oriented so that, using the right hand rule, its normal
points towards s3. This means that s3 is in the positive halfspace of [s0 s1 s2] and
so the tetrahedron [s0 s1 s2 s3] would be said to be positively oriented.
Simplices may also be oriented by multiplying the vector by a scalar value.
A positive value retains the orientation implied by the ordering and a negative value
inverts the orientation. This allows us to write the simplex as a p-vector, σ(k,p). For
example, σ(2) = [s0 s2 s3] = [s0 s̄1 s2 s3 s̄4 . . . s̄p], where s̄i indicates that si is not
present. Note that σ(k) = σ(k,p).
The boundary, ∂k σ, of a simplex, σ, is a (k − 1)-chain. For example, if σ
is a tetrahedron, its boundary is a set of 4 triangles. There is an extremely nifty
relationship between the k-vector of a simplex, σ = [s0 s1 . . . sk], and the boundary
chain:

    ∂k σ = Σi=0..k (−1)^i [s0 s1 . . . s̄i . . . sk],

which states that any combination of k vertices (from the k + 1 in the k-simplex)
forms a facet on the boundary of the k-simplex. The chain of all possible
combinations is then the complete boundary of the k-simplex. Figure 2.8 shows the ∂3
boundary operator applied to a 3-simplex on the top row. The next four rows show
the ∂2 operator applied to the triangles on the right hand side of the first row. Note
that all four triangles point outwards.
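The alternating-sign formula is short enough to execute directly. A sketch (simplices as tuples of vertex labels and chains as lists of (coefficient, simplex) pairs is my representation, chosen for illustration):

```python
def boundary(simplex):
    """Oriented boundary of a k-simplex [s0 ... sk]: the chain
    sum over i of (-1)^i [s0 ... si-hat ... sk], with si omitted."""
    return [((-1) ** i, simplex[:i] + simplex[i + 1:])
            for i in range(len(simplex))]

def boundary_of_chain(chain):
    """Boundary of a chain of simplices; opposite orientations cancel."""
    total = {}
    for coeff, simplex in chain:
        for c, facet in boundary(simplex):
            total[facet] = total.get(facet, 0) + coeff * c
    return {s: c for s, c in total.items() if c != 0}

tetra = (0, 1, 2, 3)
faces = boundary(tetra)                  # four consistently oriented triangles
assert len(faces) == 4
assert boundary_of_chain(faces) == {}    # the boundary of a boundary is empty
```

The final assertion is the computational counterpart of Figure 2.8: every edge appears in exactly two of the four triangles, once with each sign, so everything cancels.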
We denote the set of all simplices created from S as K. It is a finite collection
of finite, oriented simplices in R^d. Note that

1. For any σ ∈ K, its boundary, ∂k σ, is also a member of K, 0 ≤ k ≤ d.

2. For any σ1(k), σ2(k) ∈ K, either σ1 ∩ σ2 = ∅ or σ1 ∩ σ2 = σ3(k−1).
The underlying space of K is written |K|. Any point in |K| is contained in at
least one element of K. The underlying space is the geometric interpretation of a
combinatorial collection of simplices; that is, |K| is the union of all the point sets of
[Figure 2.8: Two applications of the boundary operator always yields ∅.]
simplices in K.
A k-chain of simplices is simply a union of k-simplices in the geometric
interpretation or a linear combination of the k-vectors in the combinatorial
interpretation:

    Σi ai σi = a0 σ0 ⊕ a1 σ1 ⊕ · · · ⊕ an σn

The addition operator, ⊕, represents the union of the simplices. The inverse of a
simplex is the same simplex with its orientation reversed, either by permuting the
elements of the k-vector or changing the sign of ai. When ai = 0, the simplex is not
a member of the chain. We can form a group using simplices as generators and ⊕ as
the operator. The group has identity ∅. Every element has an inverse as described
above, since a ⊕ (−a) = ∅ and a ⊕ ∅ = a. The group is abelian¹ since the union
operation is independent of order.
Ck is the set of all k-chains. This is a free abelian group. It is free since there
is a set of k-simplices that are independent of one another whose combinations (using
⊕) form the entire set. Chains are independent when they cannot be produced by a
sum of other chains in Ck . To prove Ck is free, we need only consider as a basis the
set of all k-chains with a single k-simplex in them. Since the definition of a complex
does not allow any k-simplex to contain any point of S in its interior, there is no
way to sum any of the k-chains in the basis to get any other. Ck is abelian because
the union operator is independent of order.
A k-cycle is a k-chain that has no boundary in the sense that every (k − 1)-
simplex in ∂k σ is also in ∂k σ′ for some σ′ ≠ σ. The four triangles that form the
boundary of the tetrahedron in Figure 2.8 form a 2-cycle since every edge is shared
by exactly two triangles. Note that ∂k−1(∂k σ(k)) = ∅. The figure illustrates this
with ∂2(∂3 [s0 s1 s2 s3]): adding the result of all the ∂2 operations results in the
null set.
We let Zk be the set of all k-cycles. Since this is {z | z ∈ Ck, ∂k z = ∅}, it is
obvious that Zk ⊆ Ck.

¹An abelian group is a group in which the group operator is commutative: a ⊕ b = b ⊕ a
Bk is the set of all k-cycles that are boundaries of a chain in the next highest
dimension; e.g., ∀z ∈ Ck+1, we insert an entry into Bk that is ∂k+1 z. Note that
because ∂k(∂k+1 z) = ∅ (as mentioned in (Delfinado & Edelsbrunner 1993)), Bk ⊆
Zk ⊆ Ck. In Figure 2.8, the chain of four triangles that are the boundary of the
tetrahedron would be members of B2 since it forms a cycle and its "parent" is
present in the complex. The chain of triangles would not be present in B2 if the
tetrahedron it bounded was not in the complex.
So far, we have constructed mathematical groups that represent curves (C1 ),
surfaces (C2 ), closed curves and surfaces (Z1 and Z2 ), and bounding curves and
surfaces (B1 and B2 ) on a complex or subcomplex. The next section is a short
discussion of the division operator for groups, and the section after that shows how
the division operator can be used to measure the topology of a complex.

2.4.2 Division ain't what it used to be

Dividing one member of a set by another is a familiar operation, taught in


elementary school. However, mathematicians have also defined a division operation
that divides a whole group by a subgroup of the group. This is not the same familiar
operation, but it is useful for studying the topology of a complex.
Just as one cannot divide any integer by zero, not every subset of some group,
G, can be a divisor. Only subgroups of G can divide G. Recall that a subgroup of
G is defined as a nonempty subset, H, such that

1. a, b ∈ H ⟹ a ⊕ b ∈ H, and

2. a ∈ H ⟹ a⁻¹ ∈ H.

As an example (taken from Hungerford (1990)), take G = Z with ⊕ being
conventional addition. H = {. . . , −6, −3, 0, 3, 6, 9, . . .} is then a subgroup of G.
We denote the division of a group, G, by a subgroup, K, as G|K. The result

Figure 2.9: The division operator divides a group, G, by subgroup K, into cosets
whose elements are all congruent to each other modulo K.

of this operation is another group whose members are sets of elements from G, not
elements of G themselves. These sets are called cosets or congruence classes of G
mod K. The cosets that make up G|K are defined like this: given a coset S ∈ G|K
with some element, a ∈ S, and another element b ∈ G, b ∈ S iff b ⊕ a⁻¹ ∈ K. A
property of the division operation is that no cosets of G|K share any elements.
As an example, consider L = Z|H. As shown in Figure 2.9, L contains three
cosets. Since any one element in a coset determines which other elements of G
will be in the coset, cosets are sometimes referred to by a single member enclosed
in brackets (to avoid confusing the coset with the element of G). In the example
L = Z|H, the cosets are labeled [0], [1], and [2]. They could just as easily have been
labeled [3], [4], and [5].
Note that one of the cosets in G|K is K itself. Hopefully, the example
illustrates how K is used to "divide" G into sets that are equivalent to one
another.
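The L = Z|H example can be checked mechanically (the helper below and its name are mine):

```python
def same_coset(a, b, modulus=3):
    """In G = Z with H = 3Z, b lies in the coset of a exactly when
    b + (-a) is a member of H, i.e. when b - a is divisible by 3."""
    return (b - a) % modulus == 0

assert same_coset(0, -6) and same_coset(1, 4) and same_coset(2, -1)
assert not same_coset(0, 1)          # distinct cosets share no elements
assert same_coset(4, 1)              # [4] labels the same coset as [1]
```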

2.4.3 Homology groups

Now that we have defined the division operation on groups, we can apply it
to our geometric groups. A homology group, Hk = Zk |Bk , is the set of congruence
classes modulo the boundaries of all (k+1)-cycles. You can think of this as a division
of the set of cycles around the holes in the complex. Each congruence class is a

Figure 2.10: A simplicial complex of a punctured sphere.

set of cycles that are equivalent to each other because any cycle (a curve or surface)
in the coset can be smoothly transformed into any other cycle in the coset without
leaving the surface or volume of the complex on which it's defined.
In Figure 2.10, any 1-cycle (edge loop) that contains [v0, v2] + [v2, v3] − [v0, v3]
is not congruent to any 1-cycle that is the boundary of a 2-chain. Thus, there is a
coset of 1-cycles in Z1 that is not in B1 = im(∂2 C2)². Since B1 is the kernel of H1,
this is a coset in Z1 that is not congruent to the "0" coset of H1. Now, although
the coefficients of all the edges in the cycle must be the same, that one value can be
any integer. Thus the coset is isomorphic to Z.
Finally, after all of these definitions, we get to quantities of interest. In order
to determine if the topology of a model has changed, we will look at some numbers
that are topological invariants: if the topology of the complex is the same, these
numbers will be the same. These numbers are called the Betti numbers. For finite
simplicial complexes in R^d, only β0, . . . , βd are nonzero. βk is simply the rank of the
k-th homology group, Hk. As an added benefit, the Betti numbers have physical
interpretations shown in Table 2.2.
²im is the image of a set produced by an operator. For an operator f : B → C, for some D ⊆ B,
im(D) is E = {e | e = f(d), d ∈ D}.
Table 2.2: Physical interpretations of the Betti numbers.

β0   The number of connected components.
β1   The number of independent holes. Holes are tunnels that pierce the
     object.
β2   The number of voids. Voids are holes with no access to the outside.
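For small complexes the Betti numbers can be cross-checked by brute force: βk = dim Zk − dim Bk, with both dimensions obtained as ranks of boundary matrices. The sketch below (my own, not the incremental technique of Delfinado & Edelsbrunner described in the next section) works over GF(2), which agrees with the integer ranks whenever the complex is torsion-free:

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank over GF(2) of a matrix whose rows are integer bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot                     # pivot's lowest set bit
        rows = [x for x in ((r ^ pivot) if (r & low) else r for r in rows) if x]
    return rank

def betti_numbers(maximal_simplices):
    """Betti numbers of the complex generated by the given simplices."""
    faces = set()
    for s in maximal_simplices:
        s = tuple(sorted(s))
        for k in range(1, len(s) + 1):
            faces.update(combinations(s, k))     # downward closure
    dim = max(len(s) for s in faces) - 1
    by_dim = {k: sorted(s for s in faces if len(s) == k + 1)
              for k in range(dim + 1)}
    index = {k: {s: i for i, s in enumerate(by_dim[k])}
             for k in range(dim + 1)}
    rank = {}
    for k in range(1, dim + 1):
        rows = []
        for s in by_dim[k]:
            bits = 0
            for facet in combinations(s, k):     # the k+1 facets of s
                bits |= 1 << index[k - 1][facet]
            rows.append(bits)
        rank[k] = rank_gf2(rows)                 # rank of the boundary map
    return [(len(by_dim[k]) - rank.get(k, 0)) - rank.get(k + 1, 0)
            for k in range(dim + 1)]

# The hollow tetrahedron of Figure 2.8: one component, no tunnels, one void.
assert betti_numbers([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]) == [1, 0, 1]
```

This direct computation is quadratic or worse in the number of simplices, which is exactly why an incremental method is attractive for an interactive modeler.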

2.4.4 Computing the Betti numbers

Delfinado & Edelsbrunner (1993) have developed a technique to determine


the Betti numbers of a sequence of simplicial complexes. Their paper is aimed at
molecular modeling and triangulating scattered point clouds that are commonly
obtained from laser range scanners. To pick which points to connect into lines and
triangles, they compute the regular triangulation of the point cloud and form a
sequence of subcomplexes of the triangulation. Each subcomplex in the sequence
is formed from the previous one and a single new simplex, starting with an empty
subcomplex. The simplices must be added in an order that will not violate the
definition of a subcomplex: for an edge to be added, its endpoint vertices must
already be members. The Betti numbers, along with other geometric quantities, are
displayed to help a person decide which subcomplex in the sequence is a good fit of
the data. Rather than recompute the homology groups at each step, an incremental
technique is used to determine the Betti number of each successive complex using
the Betti numbers of the previous subcomplex and two running data structures.
This research extends the technique to allow for deletions and insertions of vertices,
since the entire set of input points is not available ahead of time with an interactive
modeler.
The incremental computation of the Betti numbers depends on a relation
developed from a Mayer-Vietoris sequence. The definitions pertaining to sequences
and the derivation of the Mayer-Vietoris sequence are described in detail by Giblin
(1977) and Munkres (1984). We will accept the sequence without proof here since

the derivation is lengthy and not necessary for the use of the sequence. The following
paragraphs defining the sequence draw heavily from the two texts above.
A sequence is simply a list of groups with a map between each group and its
successor:

    · · · Si−1 --φi−1--> Si --φi--> Si+1 --φi+1--> · · ·

A sequence is said to be exact at Si if

    im φi−1 = ker φi
An exact sequence is one in which the sequence is exact at every group. A good
example of exactness would be our tetrahedron from Figure 2.8:

    C3 --∂3--> B2 --∂2--> 0

where C3 contains only the tetrahedron [s0 s1 s2 s3] and B2 is by definition the image
of ∂3. Since the boundary of any 2-cycle is ∅, the sequence is exact at B2.
Now, as mentioned earlier, Delfinado and Edelsbrunner construct a sequence
of complexes ordered such that the previous complex, K′, is always a subset of the
current complex, K. This can be written K = K′ ∪ K″, where K″ is a complex
containing only the simplex, σ, added to K′ to get K and all of the facets of σ (so
that K″ will meet the definition of a complex). The Mayer-Vietoris sequence is the
exact sequence

    0 → Hk(L) → Hk(K′) ⊕ Hk(K″) → Hk(K)
      → Hk−1(L) → Hk−1(K′) ⊕ Hk−1(K″) → Hk−1(K)
      → · · · → H0(L) → H0(K′) ⊕ H0(K″) → H0(K) → 0,

where L = K′ ∩ K″. Because this sequence is exact, we can relate the sizes of each
Table 2.3: Betti numbers as we close the hole in Figure 2.10.

β0(K′) = 1   One connected component.
β1(K′) = 0   No through-holes.
β2(K′) = 0   No voids.
β0(K″) = 1   One connected component.
β1(K″) = 0   No through-holes.
β2(K″) = 0   No voids.
β0(L) = 1    One connected component.
β1(L) = 1    The boundary of the triangle has a through-hole.
β2(L) = 0    No voids.
λ(N0) = 0    No vertices bound in K′.
λ(N1) = 1    One 1-cycle that bounds in both K′ and K″.
λ(N2) = 0    There are no 2-cycles in K″.
β0(K) = 1    One connected component.
β1(K) = 0    Still no through-holes.
β2(K) = 1    One void.
of the sets in each row of the sequence³:

    βk(K) = βk(K′) + βk(K″) − βk(L) + λ(Nk) + λ(Nk−1),    (2.1)

where Nk is the kernel of the map out of Hk(L) in the sequence above and corresponds
to the subgroup of Hk(L) defined by the k-cycles that bound in both K′ and K″.
Since K″ contains a single simplex and its facets, λ(Nk) will only be nonzero when

    dim(σ) = k + 1, and

    σ is in a (k + 1)-cycle in K and ∂k+1(σ) forms a boundary of K′.

For instance, if we were to start with K′ as the complex shown in Figure 2.10 and
add the triangle that closes the surface (i.e., σ is the triangle, a 2-facet, and K″
is σ along with its bordering edges and vertices) we would have the Betti numbers
shown in Table 2.3.

³See Munkres (1984) for a detailed development of the relationship.

Although Delfinado and Edelsbrunner only use Equation 2.1 for adding
members of the regular triangulation to the skeletal complex, the equation works just
as well when solving for βk(K′) in terms of βk(K). As we've mentioned before, this
is important for a modeler since the regular triangulation may change even when
only insertions take place.
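Equation 2.1 is simple bookkeeping. As a check, plugging in the values from Table 2.3 reproduces its bottom rows (a sketch; list indices 0-2 hold β0-β2, the variable names are mine, and λ(N−1) is taken to be zero):

```python
def betti_update(b_kp, b_kpp, b_l, lam, k):
    """Apply Equation 2.1: Betti number k of K = K' U K'' from the Betti
    numbers of K', K'', L = K' n K'', and the ranks lam[k] = lambda(N_k)."""
    return (b_kp[k] + b_kpp[k] - b_l[k]
            + lam[k] + (lam[k - 1] if k > 0 else 0))

# Closing the hole of Figure 2.10 with a triangle (values from Table 2.3):
bKp, bKpp, bL, lam = [1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]
assert [betti_update(bKp, bKpp, bL, lam, k) for k in range(3)] == [1, 0, 1]
```

The update for β2 comes entirely from the λ(N1) term: the new triangle's boundary was a 1-cycle bounding in both pieces, which is what seals the void.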

2.4.5 Algorithm and implementation

Delfinado & Edelsbrunner (1993) provide an algorithm that computes the


Betti numbers of a sequence of subcomplexes of a fixed complex. We will be com-
puting the Betti numbers of a sequence of subcomplexes of differing complexes.
Because the underlying triangulation of the complex changes, the Betti numbers
are not perfect gauges of whether the topology is invariant. However, they are a
good first approximation and future work would include the use of relative homology
and excision theory to find the cases that just examining Betti numbers would miss.
Edelsbrunner's algorithm incrementally computes the Betti numbers by adding
one simplex at a time to the subcomplex of interest. As each simplex is added, the
Betti numbers of the resultant subcomplex are updated using Equation 2.1. However,
the equation needs more information than the data structure for the complex
can provide: the calculation of λ(Nk) requires that we know whether σ is a member
of a (k + 1)-cycle in the initial and resulting complex. In particular, we must know
when σ is part of a 2-cycle if dim(σ) = 2, or a 1-cycle if dim(σ) = 1. Inserting
vertices and tetrahedra is simpler because all Nk will be zero in these cases.
To detect cycles of faces and edges, Delfinado and Edelsbrunner use a path-
compressed union-find data structure which performs fast queries to see if two nodes
in a graph are connected. Because the complex is not fixed in our application, we
must keep the entire graph and not a union-find data structure. The data structure
for the graph is shown in Figure 2.11. For detecting edge cycles, nodes in the graph
are input points and edges of the graph are edges of the triangulation. If, when

Figure 2.11: The graph structure used to detect cycles of edges or faces.

inserting a new edge into the subcomplex, the two vertices that are connected by
our new edge were already connected by other edges, we know that our new edge is
part of a new loop (i.e., a new coset in H1 ).
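The loop test itself needs nothing more than reachability. A minimal sketch with a plain adjacency-set graph (the class and its names are mine; the dissertation's structure additionally maintains the parent/child tree described below so that edges can also be removed efficiently):

```python
from collections import defaultdict

class EdgeCycleDetector:
    """Reports when a newly inserted skeleton edge closes a loop: if its two
    endpoints were already connected, the edge starts a new coset in H1."""

    def __init__(self):
        self.adj = defaultdict(set)

    def connected(self, u, v):
        """Depth-first search from u; True iff v is reachable."""
        seen, stack = {u}, [u]
        while stack:
            n = stack.pop()
            if n == v:
                return True
            unvisited = self.adj[n] - seen
            seen |= unvisited
            stack.extend(unvisited)
        return False

    def insert_edge(self, u, v):
        """Insert edge (u, v); return True iff it completes a 1-cycle."""
        closes = u in self.adj and v in self.adj and self.connected(u, v)
        self.adj[u].add(v)
        self.adj[v].add(u)
        return closes

g = EdgeCycleDetector()
assert not g.insert_edge(0, 1)
assert not g.insert_edge(1, 2)
assert g.insert_edge(0, 2)        # closes the triangle: a new loop
```

The depth-first query here is worst-case linear in the component size; the tree-based scheme below exists precisely to make that query cheap on average while still supporting deletions.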
As Figure 2.11 indicates, all the edges that are in the skeletal subcomplex are
in the graph. We store this graph instead of using the triangle-edge data structure
to gather the information because the edges in the skeletal subcomplex are not
efficiently attainable from the data structure: either the triangle-edge structure
must be marked as we traverse all of the (possibly large number of) triangles attached
to a vertex, or a list of visited triangles must be kept.
The graph is stored by placing at each vertex a record pointing to the vertex
at the other end of the edge. A subgraph of this graph forms a tree. When a new
edge is inserted into the graph, we determine whether the edge will be active (i.e.,
a member of the tree) or inactive (i.e., not a member of the tree) based on whether
the edge connects two components. If an edge is to be active, it is placed in the front
portion of the list of Children[]. Otherwise, it is placed at the end of Children[].
The first active edge is the Parent of a node unless that node is at the top of a tree.
When an edge serving as a Parent is removed, any inactive edge in Children[]
that would keep the node on that connected component is activated. Choosing the
inactive edge with the smallest number of parents that must be traversed to get to
the top of the tree means that the tree will be kept wide and shallow. A shallow
tree means faster comparisons to determine if vertices are on the same connected
component. The running time for this comparison is O(n) for the worst case of
a line of n vertices connected only to two neighbors. The best case is O(1) for
a completely connected graph. For a random graph, we would expect O(log n)
running time. Because we attach physical significance to the graph, we can expect
performance on average to be O(log m), where m is the number of vertices in the
largest connected component.
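The Parent-pointer tree described above can be sketched in a few lines (a minimal Python sketch; the names and the re-rooting strategy are illustrative, and the Children[] bookkeeping, inactive-edge reactivation, and the wide-and-shallow heuristic are omitted):

```python
class Node:
    """A vertex of the connectivity graph; `parent` follows its first active edge."""
    def __init__(self, name):
        self.name = name
        self.parent = None  # None means this node is at the top of its tree

def root(node):
    """Walk parent pointers to the top of the tree; cost grows with tree depth."""
    while node.parent is not None:
        node = node.parent
    return node

def reroot(node):
    """Reverse parent pointers so `node` becomes the top of its tree."""
    prev = None
    while node is not None:
        node.parent, prev, node = prev, node, node.parent

def insert_edge(a, b):
    """Insert edge (a, b): True means it joined two components (an active edge);
    False means the endpoints were already connected, i.e. the edge closes a loop."""
    if root(a) is root(b):
        return False
    reroot(b)
    b.parent = a
    return True
```

Inserting the third edge of a triangle returns False, which is exactly the event that increments β₁ in the edge-insertion algorithm later in this chapter.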
The situation for detecting 2-cycles is more complicated. We maintain a
second graph for detecting 2-cycles. Each node in this graph corresponds to a
tetrahedron in the complex that is not in the skeletal subcomplex. Edges in the graph
correspond to triangles that are also not in the skeletal subcomplex that border two
of these tetrahedra. Whenever a triangle inserted into the subcomplex disconnects
a node or nodes from this graph, it has closed off a void, and thus completed a
2-cycle. Figure 2.12 shows an example of this graph. Unlike the vertex-edge graph,
the tetrahedron-triangle graph can have duplicate edges. This occurs when more
than one triangle of a tetrahedron is on the convex hull. We must store both edges
because one of the triangles may become attached to another tetrahedron as vertices
are inserted into the triangulation. When this occurs, the first tetrahedron is still on
the same connected component (connected to the space surrounding the convex hull
of the input vertices), but is only attached through a single triangle. As Figure 2.11
shows, we store an array of integers marking the number of edges connecting two
nodes. While this adds space to the data structure, we can compensate for it.
Although the implementation in this dissertation uses the same data structure and
algorithm for both the vertex-edge graph and the tetrahedron-triangle graph, we
could eliminate duplicate edges from the vertex-edge graph. For the tetrahedron-
triangle graph, we know that each tetrahedron will have exactly four edges at all
times. This eliminates the need to store the total number of edges. The integer
used to store the total number of edges could then be used to store the number of
duplicates; since each edge can have a maximum of four duplicates, we need only
two bits per edge. With a maximum of four edges, only one byte is needed to store
the number of duplicates of each edge in the graph.
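The two-bits-per-edge packing can be sketched as follows (Python; the choice to store 0 to 3 duplicate copies per edge in each 2-bit field is an assumption about the encoding, which the text leaves open):

```python
def pack_duplicates(counts):
    """Pack four 2-bit duplicate counts (each 0-3) into a single byte."""
    assert len(counts) == 4 and all(0 <= c <= 3 for c in counts)
    byte = 0
    for i, c in enumerate(counts):
        byte |= c << (2 * i)  # two bits per edge
    return byte

def unpack_duplicates(byte):
    """Recover the four 2-bit counts from the packed byte."""
    return [(byte >> (2 * i)) & 0b11 for i in range(4)]
```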
When the regular triangulation does change, it is necessary to first remove
any simplices from the skeletal subcomplex that are in the neighborhood of the
change because the underlying space is about to be excised and replaced with a new
version. This means that the two graphs must be updated. As an example, if an
edge-facet flip is about to occur, any of the 3 tetrahedra, 9 triangles, 10 edges, and 5
vertices that are involved in the flip must be removed from the skeletal subcomplex
if they are present. Then, once the flip has occurred, we must check 2 tetrahedra, 7
triangles, 9 edges, and 5 vertices to see if they belong to the skeletal subcomplex.
Although, in theory, we should remove vertices involved in any flips, they do
not affect the graphs. The other 6 operations do require updating the graphs and
a single integer representing β₁, as shown in Algorithms 2.2 through 2.9. Although
we show a routine named DoVertexSpheresIntersect that returns whether all
the vertices passed to it intersect in a common point, the implementation caches
the results of this routine in data structures for edges, faces, and tetrahedra in
order to avoid the expense. Chapter 3 discusses the mathematics required for the
DoVertexSpheresIntersect algorithm, along with other details of how the skeletal
subcomplex is selected from the regular triangulation.
These algorithms are called by the routines TetraFromFacetAndPoint and
FlipNonregularFacets of Algorithm 2.1. The order in which the algorithms are
used and the state of the triangulation must be correct for the algorithms to work.
The order of the algorithms is determined by these requirements:

1. Any vertices must be added before an edge that connects them. Vertices must
be removed after edges connecting them are removed.

2. An edge must be added before a triangle containing the edge, and removed
after the triangle.

[Figure 2.12 depicts: a triangulation of a set of points (the triangulation is
embeddable in S³); the triangles of a highlighted tetrahedron, but not the
tetrahedron itself, selected as a subcomplex homeomorphic to a hollow sphere;
a graph of the vertices in the subcomplex that are connected by edges in the
subcomplex; and a graph whose vertices are tetrahedra in the triangulation but
not the subcomplex, with edges connecting tetrahedra when the triangles between
two tetrahedra are not in the subcomplex. In this example, the latter graph has
two components, one of which is homologous to 0; the second represents a void
inside the subcomplex.]

Figure 2.12: The relationship between the complex and the graph for detecting
2-cycles.

3. Both tetrahedra bordering a triangle must be added before the triangle, or the
triangle must be marked as on the convex hull if a bordering tetrahedron
has not yet been placed. Similarly, tetrahedra should not be removed before
the triangles on their boundary.

4. A tetrahedron must be added after all the vertices it contains.

So, for adding simplices, we should update the graphs by adding vertices, tetrahedra,
edges, and then triangles. When removing simplices, triangles, edges, and then
tetrahedra should be removed from the graphs.
Before Algorithms 2.2 through 2.5 are called, the triangulation must contain
the new simplices. Algorithms 2.6 through 2.9 must be called before any of the
simplices are removed from the triangulation. This is because the graph algorithms
use the triangulation. For instance, when deleting a triangle, the triangulation is
used to determine which tetrahedra are adjacent to the triangle so that those nodes
in the tetrahedron graph can be referenced. Deleting a triangle is also a special case
where even more information must be provided to the algorithm. In this case, if a
vertex is being inserted and the vertex is inside the convex hull of the existing input
points, it is contained in the container variable of Algorithm 2.1. Before the 4 new
tetrahedra can be added to the graph, the one they replace must be removed. This
is accomplished by invoking DelTriangle with container as the last argument for
each of the 4 triangles of the container tetrahedron.

Algorithm 2.2: Adding a vertex to the homology graphs.


NewVertex(int x)
InsertNodeIntoVGraph( x )

Algorithm 2.3: Adding an edge to the homology graphs.
NewEdge(int x, int y)
if DoVertexSpheresIntersect( x, y ) then
  { Add an edge to the vertex graph. }
  bool graphChanged ← InsertEdgeIntoVGraph( x, y )
  if graphChanged then
    { We've decreased the number of connected components. }
  else
    { We've increased the number of independent tunnels. }
    β₁ ← β₁ + 1
  end if
end if

Algorithm 2.4: Adding a triangle to the homology graphs.
NewTriangle( int x, int y, int z )
if IsEdgeInVGraph( x, y ) and IsEdgeInVGraph( y, z ) and IsEdgeInVGraph( z, x )
and DoVertexSpheresIntersect( x, y, z ) then
  { The triangle is skeletal; we've closed a tunnel. }
  β₁ ← β₁ − 1
else
  { The triangle isn't skeletal. }
  TetraKey T1 ← TetraFromFace(x, y, z)
  TetraKey T2 ← TetraFromFace(x, z, y)
  if T1 and T2 are both not members of the skeleton then
    if IsEdgeInTGraph(T1, T2) is false, or T1 or T2 is the null tetrahedron of
    2.3 then
      { Add an edge to the tetrahedron graph. }
      bool graphChanged ← InsertEdgeIntoTGraph(T1, T2)
      if graphChanged then
        { We've increased the number of independent tunnels. }
        β₁ ← β₁ + 1
      end if
    end if
  end if
end if

Algorithm 2.5: Adding a tetrahedron to the homology graphs.
NewTetrahedron( int x, int y, int z, int w )
if DoVertexSpheresIntersect( x, y, z, w ) then
  { The tetrahedron is skeletal. }
else
  { Add a node to the tetrahedron graph. }
  TetraKey T ← TetraFromVerts(x, z, y, w)
  InsertNodeIntoTGraph(T)
end if
{ Increase β₁: the complex now has an opportunity for an extra face loop. }
β₁ ← β₁ + 1

Algorithm 2.6: Removing a vertex from the homology graphs.
DelVertex(int x)
bool graphChanged ← RemoveNodeFromVGraph( x )
Ensure: graphChanged is false.

Algorithm 2.7: Removing an edge from the homology graphs.
DelEdge(int x, int y)
if DoVertexSpheresIntersect( x, y ) then
  { Remove the edge from the vertex graph. }
  bool graphChanged ← RemoveEdgeFromVGraph( x, y )
  if graphChanged then
    { We've increased the number of connected components. }
  else
    { We've decreased the number of independent tunnels. }
    β₁ ← β₁ − 1
  end if
end if

Algorithm 2.8: Removing a triangle from the homology graphs.
DelTriangle( int x, int y, int z, TetraKey container )
if IsEdgeInVGraph( x, y ) and IsEdgeInVGraph( y, z ) and IsEdgeInVGraph( z, x )
and DoVertexSpheresIntersect( x, y, z ) then
  { The triangle was skeletal; we've opened a tunnel. }
  β₁ ← β₁ + 1
else
  { The triangle isn't skeletal. }
  if container is present then
    TetraKey T1 ← container
  else
    TetraKey T1 ← TetraFromFace(x, y, z)
  end if
  TetraKey T2 ← TetraFromFace(x, z, y)
  if T1 and T2 are both not members of the skeleton then
    { Remove an edge from the tetrahedron graph. }
    bool graphChanged ← RemoveEdgeFromTGraph(T1, T2)
    if graphChanged then
      { We've decreased the number of independent tunnels. }
      β₁ ← β₁ − 1
    end if
  end if
end if

Algorithm 2.9: Removing a tetrahedron from the complex.
DelTetrahedron( int x, int y, int z, int w )
if DoVertexSpheresIntersect( x, y, z, w ) then
  { The tetrahedron was skeletal. }
else
  { Remove a node from the tetrahedron graph. }
  TetraKey T ← TetraFromVerts(x, z, y, w)
  RemoveNodeFromTGraph(T)
end if
{ Decrease β₁: the complex now has one less opportunity for a face loop. }
β₁ ← β₁ − 1
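Taken together, the edge algorithms amount to the following bookkeeping (a toy, self-contained Python sketch: connectivity is tested with breadth-first search instead of the tree structure used in the actual implementation, and the DoVertexSpheresIntersect guard is assumed to have already passed):

```python
from collections import deque

class VertexGraph:
    """Toy stand-in for the vertex-edge graph used by the edge add/remove
    algorithms; beta1 counts independent tunnels (loops)."""
    def __init__(self):
        self.adj = {}
        self.beta1 = 0

    def _connected(self, x, y):
        """Breadth-first search from x looking for y."""
        seen, queue = {x}, deque([x])
        while queue:
            v = queue.popleft()
            if v == y:
                return True
            for w in self.adj.get(v, ()):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    def new_vertex(self, x):
        self.adj.setdefault(x, set())

    def new_edge(self, x, y):
        # If x and y were already connected, the new edge closes a loop.
        if self._connected(x, y):
            self.beta1 += 1
        self.adj[x].add(y)
        self.adj[y].add(x)

    def del_edge(self, x, y):
        self.adj[x].discard(y)
        self.adj[y].discard(x)
        # If x and y remain connected, we destroyed an independent loop.
        if self._connected(x, y):
            self.beta1 -= 1
```

Adding the three edges of a triangle leaves beta1 at 1 once the loop closes; deleting one of them returns it to 0.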

2.5 Summary

This chapter has shown how, from just a set of input points, some notion
of connectivity and proximity can be developed and represented with a regular
triangulation. The modeler will use triangulations to form the skeleton of a part.
Next, the data structures necessary to represent a triangulation of the input points
were developed. Finally, given some subcomplex of that triangulation, we have shown
how to compute the Betti numbers of the subcomplex.

In the next chapter we'll relate the weights of the input points to which
simplices are selected from the triangulation to be part of the skeleton.

Chapter 3

The Skeletal Subcomplex

So far, we've discussed how to create a simplicial complex from a set of input
points. The next step is to select part of this complex to serve as the skeleton of the
model. If we think of each input point, p = (p′, p″), p′ ∈ ℝ^d, p″ ∈ ℝ, as a ball, B_p,
centered at p′ with radius √p″, the skeleton can be defined as all simplices contained
inside the union of all the input balls, ⋃_{p∈S} B_p. For convenience, let ⋃B_S denote
the union of balls of all points in S and ⋃B_σ denote the union of balls of all points
in some facet σ.
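The power distance test that drives this selection can be sketched directly (Python; the convention that a point's weight p″ is its squared radius matches the power-distance formulas used in this chapter, but is stated here as an assumption):

```python
def power_distance(p, x):
    """pi(p, x) = |x - p'|**2 - p'': squared distance to the center minus the weight."""
    center, weight = p
    return sum((xi - ci) ** 2 for xi, ci in zip(x, center)) - weight

def in_union_of_balls(points, x):
    """x lies inside (or on) the union of balls when some power distance is <= 0."""
    return any(power_distance(p, x) <= 0.0 for p in points)
```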

3.1 Alpha Shapes and the Space Filling Diagram

Edelsbrunner defines not just one subcomplex of the triangulation, but a
whole class of subcomplexes indexed by a real number, α ∈ ℝ. He calls a subcomplex
associated with some particular value of α an alpha shape. Our skeleton will be the
subcomplex equivalent to α = 0 in Edelsbrunner's work. The following is a summary
of his work.

An alpha shape is the underlying space of a subcomplex of R. Note that
adding the same value α to the weight of each input point does not change R since
the power distance to each point is offset by the same amount. So, the complex
which we are considering is independent of the value of α... but the subcomplex
that forms the skeleton is not. Given the input point set S, we will call S_{+α} the
input points with the final coordinate increased by α.
For any simplex, σ = (s_1, s_2, . . . , s_k), s_i ∈ S_{+α}, i ∈ [1, k], k ≤ d + 1, there is a
point y = (y′, y″) orthogonal to all s_i with minimum weight y″. Call μ_σ = y″ the
size of the simplex. There must be a point y because orthogonality to each vertex
in σ generates one equation (i.e., π(s_i, y) = 0). The equations represent a set of
dimension d − k + 1 (assuming there are no degeneracies). In this set, the power
distance to all the vertices in σ is minimum at a single point. It must have a single
minimum. When k = d + 1, this is trivial since the orthogonality equations have
only a single point as a solution. When k ≤ d, note that for all orthogonal points, s″_i
is constant. Thus, the Euclidean distance from s′_i to y′ determines the value of y″.
When the Euclidean distance |s′_i − y′| is at a minimum, then so is y″. Since we are
constrained (by orthogonality) to be on the surface where |s′_1 − y′| = |s′_2 − y′| = · · · =
|s′_k − y′|, there can be only one such point. See Figure 3.1 to visualize this condition.
The figure shows, for d = 3 and k = 1, 2, 3, the set of orthogonal points and how the
weight of y varies along a line inside the set of orthogonal points. Note that when
the line ab passes inside a ball, y″ becomes negative. This will be important later.

We can now define the alpha complex, K_α, as all simplices σ and their
boundaries such that μ_σ ≤ α and, for any point q ∈ S − σ, π(q, y) > 0. A simplex is
called alpha-exposed when there is a point y_T = (y′_T, α) ∈ ℝ^d × ℝ such that y_T is
orthogonal to all the vertices of the simplex. An alpha-exposed simplex is a member
of both K_α and ∂K_α (it must be in ∂K_α because alpha-exposure demands that its
vertices all be equidistant from some portion of ∂⋃B_S). The underlying space of
K_α is W_α, which is the alpha-shape of the input points. Again, this assumes there
are no degenerate or redundant input points.
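For a 1-simplex the size has a closed form, and testing size ≤ 0 gives a two-sphere intersection test in the spirit of DoVertexSpheresIntersect (a Python sketch under the squared-radius weight convention; note that when one ball strictly contains the other, their boundaries do not cross and the size comes out positive):

```python
def edge_size(p, q):
    """Size of the 1-simplex (p, q): the minimum weight y'' over points
    orthogonal to both weighted vertices.  p = ((x, y, z), w), w = radius**2."""
    d2 = sum((b - a) ** 2 for a, b in zip(p[0], q[0]))
    # Orthogonal points satisfy |p' - y'|**2 - p'' = |q' - y'|**2 - q'' = y''.
    # The minimizing y' lies on the line through the centers at parameter t.
    t = (d2 + p[1] - q[1]) / (2.0 * d2)
    return t * t * d2 - p[1]

def spheres_cross(p, q):
    """True when the two sphere boundaries intersect, i.e. the edge size is <= 0."""
    return edge_size(p, q) <= 0.0
```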
Figure 3.1: Obtaining the size, μ_σ, of facets of dimension 0, 1, 2, and 3.

3.2 Duality of the unions of balls and alpha shapes

We want to use the union of balls as bounds on the size of an offset from the
alpha shape. In order to do this, we must show that the alpha shape never intersects
the boundary of the union of balls and that any loop of the alpha shape has a
corresponding loop on the surface of the union of balls. We can do this by showing
that the boundaries of each are dual to each other.
Assume there are balls intersecting whose vertices, s_i = (s′_i, s″_i + α), do not
form a regular simplex. Say there are k balls. Then there will be an ℓ-sphere
(a point, arc, or spherical surface) common to all vertices. Call a point on this
manifold common to all vertices x′. Since the point is on the surface of each sphere,
|x′ − s′_i|² − s″_i = α for all k vertices. But this is the definition of regularity, which
contradicts our assumption. Therefore, the vertices must form a simplex that is
part of the regular triangulation of the input points, R. Furthermore, the simplex
must be a member of K_α since the existence of (x′, α) implies that the simplex is
alpha-exposed.
Now we have a good working definition of the skeletal subcomplex; given
input points, we construct R and select all the facets whose sizes are less than
or equal to 0. What this ignores is the possibility of redundant vertices. Unlike
molecular modeling and other applications where alpha shapes are mainly used to
analyze the topology of point sets, a redundant vertex can be intentional on the part
of the designer. The next section discusses some properties of redundant points and
is followed by a section describing how these points are handled.

3.3 Redundant vertices

The regular triangulation is an excellent tool that develops a notion of proximity
for a set of points. However, some input points are excluded from a regular
triangulation: the redundant points. This occurs when the redundant points are
always farther, in the power-distance sense, from any point in space than other input
points are. Even though these points may not have a region of space closest to them,
they reflect the designer's intent. For example, a designer may want to place small
features such as fins for heat dissipation on a large, flat surface. Vertices representing
the fins may be redundant, but they have significance in the design. If we insert
redundant points into the regular triangulation by forcing local regularity but not
global regularity, we must show that the homology groups of the skeleton and resulting
surface remain unaffected. First, let's note an important property of redundant points.

Theorem 3.1 No redundant point will be on the convex hull of the input points.

Proof Suppose there were a redundant point, p, on the convex hull. All of
the faces adjacent to p are on the hull. Since we have assumed that there
are no degeneracies, their normals are all unique, i.e., p is not coplanar
with any other 3 points. Thus the set of normals defines a spherical
polygon on the unit sphere that is not degenerate. Pick any point on the
unit sphere inside this polygon and use it as the unit normal, n, to a plane
P with base point p. Choose P such that no point in S, when projected
to P along n, is coincident to p. Such a plane exists since the spherical
polygon is not degenerate and we have a finite set of input points. As we
travel along n away from the convex hull, the square of the Euclidean
distance to p grows more slowly than the square of the Euclidean distance
to any other point in S. Thus, no matter how large the weight of any point
in S, the power distance to p will eventually be smaller than the power
distance to any other point in S. But if there are points closer to p
than to any point in S − p, the power cell of p is not null and thus p is
not redundant. □

Since no redundant point can be on the convex hull, we can always refer to
the d-simplex formed by the input points that contains a redundant point. We must
show that no redundant point changes the homology of the skeletal subcomplex.
Also, since we have shown that no vertex on conv(S) is redundant, we know
that redundant vertices never appear on the boundary of the union of balls, ∂⋃B_S.
This means no redundant vertex will be on ∂W_α.
With these facts out of the way, we can define a rule for selecting members
of the triangulation to be in our alpha complex, K_α. To make this section easier to
read, call a vertex significant if it is not redundant. Instead of selecting only facets
with μ_σ ≤ 0, we will

1. Select any facet (and its boundaries) whose size, μ_σ, is less than or equal to
0.

2. Select any facet (and its boundaries) containing only redundant vertices that
are contained in the same significant vertex ball and significant vertices whose
union of balls contains the redundant vertices in σ. In other words, no simplex
with redundant vertices should be included unless the significant vertices to
which they are related form a union of balls completely enclosing them.

With these rules, we need to ensure that the boundary of the union of balls is still
dual to the boundary of the alpha complex.

Theorem 3.2 No facet of K_α intersects ∂⋃B_S, nor does any void, open edge
loop, or connected component exist in K_α without a dual in ⋃B_S.

Proof First, examine the possibility that some facet of K_α intersects
∂⋃B_S. Assume there is such a facet, σ. σ cannot contain any
redundant vertices since we have included no facet in K_α containing
a redundant vertex and a combination of significant vertices that does
not contain it. Also, no facet with redundant vertices has μ_σ ≤ 0,
since y′ is outside σ. But if all vertices of σ are not redundant and
μ_σ ≤ 0, then (y′, μ_σ) is a point on ∂⋃B_σ which is either inside
⋃B_S or contained on ∂⋃B_S. Thus, there can be no facet intersecting
∂⋃B_S in K_α.

Now, assume there is some σ ∉ K_α such that ∂σ ⊂ K_α but that
|σ| ⊂ ⋃B_S. If all the vertices of σ were significant, then the
facet size of σ would be less than 0 and σ would be in K_α. So, σ
must contain at least one redundant vertex. Either a redundant
vertex is not enclosed by any significant vertex in σ or there are
multiple redundant vertices enclosed in distinct, non-intersecting
balls. In the case of a single redundant vertex, there must be 1-
facets on ∂σ containing the redundant vertex in combination with
each significant vertex in σ. But this 1-facet cannot be in K_α since
we, by definition, selected no facets with redundant vertices not
contained in ⋃B_σ unless they are the boundary of a facet we
selected. But we've said that σ ∉ K_α.

The same holds for simplices with multiple redundant vertices. The
last case we need to consider is simplices containing only redundant
vertices whose containing significant vertices differ. Again, no
simplex is selected joining vertices whose enclosing spheres do not
intersect. When comparing multiple redundant vertices, we simply
test whether their containing vertices' balls intersect. If they do
not, then the simplex could not have been selected by either of our
criteria.

This shows the union of balls and the underlying space of K_α are homologous
even when we insert redundant vertices. □

The proof above doesn't really give a feel for what redundant vertices do to
the structure of the complex; think of redundant vertices as thickening the skeleton.
Where the skeleton was an edge, it becomes triangular or tetrahedral. Where it
was triangular, it becomes tetrahedral. Where it was already tetrahedral, there are
more tetrahedra. Because the offset surface from the skeleton need not be a union
of balls, redundant vertices allow small features to exist on the skeleton. This is key
for representing shapes that have aspect ratios near 1.

3.4 The example

To illustrate the concept of the skeletal complex, let's take a look at the
bottle opener example. Figure 3.2 shows one concept variant for a compliant bottle
opener. On the left is the complex formed by the input points and on the right
is an offset surface produced from the skeleton. The spheres associated with each
point are shown along with the offset surface in Figure 3.3. Wherever the spheres
intersect, edges or triangles of the complex are selected for membership in the skeletal
subcomplex. Since the shape is simple, it is easy to see that the Betti numbers of
the model are β₀ = 1, β₁ = 1, and β₂ = 0.

(a) The entire complex of our example. Thick lines are members of the skeletal
subcomplex, while thin lines are not. Only shaded triangles are members of the
skeleton. There are many degenerate tetrahedra since the points are all planar.
(b) The offset of the skeleton.

Figure 3.2: An example model consisting of 16 points.

Figure 3.3: The vertex balls enclose the offset surface.

3.5 Summary

This chapter described how to select simplices from the triangulation of the
input points to be members of the skeletal subcomplex. Even when redundant
vertices are inserted into the triangulation, the skeleton remains homologous to
the union of balls, thanks to the duality of their respective boundaries. Now that
we have selected a skeleton, we can proceed by producing a solid model from the
skeleton.

Chapter 4

Fleshing the Skeleton

Flesh! Now that I have your attention, we can focus on generating the flesh
(the solid model) from the bones (the skeletal subcomplex). Isocontouring is usually
an expensive operation, but we will see that by taking advantage of the spatial
partition we've made from the input points, the offset can be quickly generated.
The spatial partition provides two ways to speed up isocontouring:

1. Tetrahedra used as starting points to triangulate the offset surface can be


chosen to intersect the offset in a predictable manner.

2. If we have access to several processors, the spatial partition is an efficient way


to perform the offset generation in parallel.

Isocontouring takes a scalar function, w(x), x ∈ ℝ^d, and generates a piecewise
surface where w(x) = C for some C ∈ ℝ. For a strict offset, we can let

w(x) = min_{u,v} ( |x − t′(u, v)|² − t″(u, v) )

where t = (t′, t″), t′(u, v) is some point on the skeleton, and t″(u, v) is a radius
function at t′(u, v). Because no point inside a tetrahedral member of the skeleton
will be closest to the boundary, the skeleton may be expressed as a set of parametric
patches (with parameters u and v). For our current modeler, the radius function
t″(u, v) is constrained to be a linear interpolation of radii specified at the input
points. The min function above effectively takes the union of the offsets of each
skeletal element.
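A sketch of this field over a skeleton made of edges (Python; it assumes the weight t″ acts as a squared radius and is interpolated linearly along each edge, which is one possible reading of the radius function described above):

```python
def offset_field(x, segments):
    """w(x): minimum over skeletal edges of |x - t'(u)|**2 - t''(u), where each
    segment is ((a, wa), (b, wb)) and t'' interpolates the weights linearly.
    The offset surface is the zero set w(x) = 0."""
    best = float("inf")
    for (a, wa), (b, wb) in segments:
        d = [bi - ai for ai, bi in zip(a, b)]
        xa = [xi - ai for xi, ai in zip(x, a)]
        dd = sum(c * c for c in d)
        # f(u) = |x - a - u d|**2 - wa - u (wb - wa) is quadratic in u;
        # set f'(u) = 0 and clamp the minimizer to the edge [0, 1].
        u = (2.0 * sum(ci * xi for ci, xi in zip(d, xa)) + (wb - wa)) / (2.0 * dd)
        u = min(1.0, max(0.0, u))
        dist2 = sum((xi - (ai + u * di)) ** 2 for xi, ai, di in zip(x, a, d))
        best = min(best, dist2 - (wa + u * (wb - wa)))
    return best
```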

4.1 Literature survey

Because the solid model may include blended surfaces which may be defined
implicitly, we cannot directly construct a boundary representation composed
of parametric patches as does Vermeer (1994). The blending discussed below lends
itself to a technique called isocontouring, which creates a piecewise approximation of
the solid model given a scalar function, w(x), defined over space, x ∈ ℝ³. This problem
was first approached computationally by Lorensen & Cline (1987), who used a
tabular function derived from medical data to extract bone and soft tissue surfaces.
They coined the term marching cubes to describe the algorithm, which works on a
regular mesh of rectangular cells. In this paper, each cell is visited and classified
according to whether the scalar field at each vertex is above or below the isovalue
requested. Then, along edges where vertices have opposite codes, the intersection
of the isosurface with the edge is approximated and facets are formed inside the cell
according to the classification of all the vertices.
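The cell-classification idea can be made concrete for a tetrahedral cell, in the spirit of the tetrahedral decompositions discussed below (a Python sketch; the 256-case cube table is avoided, and consistent triangle orientation is ignored):

```python
def lerp_edge(p0, p1, f0, f1, c):
    """Approximate where the isosurface w = c crosses the edge (p0, p1)."""
    t = (c - f0) / (f1 - f0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def contour_tetra(verts, vals, c):
    """Return the triangles (0, 1, or 2) of the isosurface w = c inside one
    tetrahedron with vertex positions `verts` and scalar values `vals`."""
    inside = [i for i in range(4) if vals[i] < c]
    outside = [i for i in range(4) if vals[i] >= c]
    if len(inside) in (0, 4):
        return []                      # no crossing
    if len(inside) == 3:               # symmetric to the one-inside case
        inside, outside = outside, inside
    if len(inside) == 1:
        i = inside[0]
        pts = [lerp_edge(verts[i], verts[j], vals[i], vals[j], c) for j in outside]
        return [tuple(pts)]            # one triangle
    # Two vertices on each side: the crossing is a quad, split into 2 triangles.
    a, b = inside
    p, q = outside
    q0 = lerp_edge(verts[a], verts[p], vals[a], vals[p], c)
    q1 = lerp_edge(verts[a], verts[q], vals[a], vals[q], c)
    q2 = lerp_edge(verts[b], verts[q], vals[b], vals[q], c)
    q3 = lerp_edge(verts[b], verts[p], vals[b], vals[p], c)
    return [(q0, q1, q2), (q0, q2, q3)]
```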
Bloomenthal (1988, 1994) has adapted marching cubes to implicit surfaces
and has implemented adaptive subdivision of cubes to handle regions of high
curvature. Once it is determined that a cube will not be subdivided further, it is
decomposed into tetrahedra. This avoids some ambiguity and reduces the number
of cases to be considered compared to a cube. Velho (1996) adaptively subdivides
edges of tetrahedra rather than cubes. The subdivision occurs based on curvature at
the edge, so there are no cracks present in output surfaces. Because the subdivision
decision is independent of neighboring cells, this technique is easily parallelized.
Wilhelms & van Gelder (1992) offer a revised marching cubes method based
on a compact octree that is used to store the isosurface vertices shared by cells on
the border of a spatial subdivision. This reduces the amount of replicated work
(otherwise the edge intersections must be done at least d times on interior cells).
This brings to light the importance of spatial coherence in a marching algorithm:
marching is efficient because the neighboring cell marched to does not need to
recalculate edge intersections along the shared face between the two cells.
Bloomenthal (1997) has a good survey of polygonization techniques used for implicit
surfaces, except that it does not address parallelism.
Marching cubes has also been done in parallel. Here, the important issue is
load balancing and there are two approaches: making an initial partition of the data
that will produce a balanced load (Miguet & Nicod 1995) or performing dynamic
load balancing, where processors are given small partitions of data as they become
available (Ellsiepen 1994). The partitions can take the shape of slices, shafts, or
slabs (Neumann 1994).
Miguet and Nicod (1995), for instance, split the volume data into slices along
the sweep direction in which the cells are stored. They balance the load by counting the
number of cells requiring triangle generation as they are read in. An estimate for
each slice (normal to the sweep direction) of the number of output vertices and faces
is made and used to partition the slices to processors. At the partition boundaries,
only one processor computes intersections; after face generation is performed, the
processors send the vertex data into a global array, against which all faces are
referenced.
Partitioning becomes especially important when dealing with unstructured
meshes (Ellsiepen 1994, Martin & Otto 1995, Kernighan & Lin 1970) since there
are no obvious boundaries, and partitions that split across fewer cells will have less
overlap (and thus lower communication costs (Krishnaswamy, Hasteer & Banerjee
1997)). Shephard et al. (1997) allow cells to move from one processor to another to
balance load.

In addition to marching methods for generating isosurfaces, particle-based
methods have been devised (Crossno & Angel 1997). Here, particles are placed at
random in the volume and attracted to the isosurface of interest. Once on the
isosurface, they repel each other with a spatially compact force inversely proportional
to the estimated curvature on the surface until equilibrium is attained. (Here,
spatially compact means that particles farther than a given distance exert no force
on each other, to avoid high communication costs.) Then, all particles that were
exerting forces on each other are attached if the created edge does not cross a gradient
of w(x). This is very similar to techniques developed for implicit shape modeling
(Stander & Hart 1997, Garland & Heckbert 1997, Szeliski & Tonnesen 1992) and
surface simplification (Turk 1992). Perhaps the most directly related of all the
rendering techniques is Blanding, Brooking, Ganter & Storti's (1999). Their skeletal
editor uses a Voronoi diagram to determine initial placement of particles on the
offset surface. However, their work does not provide information on the topology of
the skeleton and solid model. With particle methods, load balance is simple, since
it is proportional to the number of particles on a processor. However, in regions
where particles are dense, splitting connected particles across processors increases
communication costs.
Finally, there is a class of algorithms called "accelerated isocontouring,"
where the volume information is preprocessed to allow for multiple, fast isosurface
queries in which not all cells are touched. Bajaj, Pascucci & Schikore (1996)
and Cignoni, Marino, Montani, Puppo & Scopigno (1997) use interval, segment, or
k-d trees to store only "seed cells." A set of seed cells is a set of cells such that
at least one cell intersects every isolated segment of every possible isosurface. The
tree structures store seed cells according to the range of values spanned by w(x)
in the cell. This way, when an isovalue is given, the tree only returns seed cells
that intersect the desired isosurface. Traditional marching is used to generate the
isosurface given the seed cells. Optimally, only one seed cell would be present on
each surface segment for any isovalue (so that the overhead for storing the interval
tree would be small). Several methods for producing small seed sets are discussed.
Bajaj, Pascucci, Thompson & Zhang (1999) have adopted this technique to work in
parallel for rectilinear meshes.
Isocontouring has also been demonstrated with a more abstract mathematical
approach called continuation, which is a technique for constructing a piecewise
approximation of a continuous function defining a manifold. This work has been
both performed and reviewed by Allgower, Georg, and Gnutzmann (Allgower &
Georg 1990, Allgower & Gnutzmann 1991).
This research focuses on Velho's adaptive triangulation technique since it is
easily parallelized. Velho, along with other adaptive subdivision schemes, assumes
that the initial tetrahedra are chosen so that (Bloomenthal 1997)

• the solid does not intersect any tetrahedron's edge more than once, and

• the solid is completely enclosed by the initial tetrahedra.
The next section discusses how the initial tetrahedra are chosen. It is followed
by a section discussing the adaptive triangulation scheme and finally a section on
parallelism.

4.2 Power cells and initial tetrahedra

Recall that an input vertex, by definition, is a part of the skeleton and will
therefore be inside the solid or on its surface (for the case of a zero weight). Also,
the power cell of the vertex is a convex region surrounding the input vertex; its faces
are planar with linear bounding edges. By intersecting the power diagram with a
bounding box containing the solid, we are guaranteed that each face of any power
cell is finite. Figure 4.1 shows the power cells of the example, intersected with the
solid's bounding box.

Figure 4.1: Power cells for the example.

Also recall that when selecting elements from the complex, K, to be in the
skeletal subcomplex, S, we test for whether the spheres defined by the vertex weights
intersect, as shown in Figure 3.3.
The power cell can be decomposed into tetrahedra by connecting the input
vertex to either (1) a pair of consecutive edges along each power cell's face, or (2) a
point on a power cell face and consecutive edges of the face as shown in Figure 4.2.
For power cell faces that intersect skeletal edges, we will generate tetrahedra using
the second approach. For others, the first approach is used.
This simple scheme appears to meet the needs of the initial tetrahedra; with
no edges attached, the input vertex will form a sphere that will be triangulated
by the tetrahedra from the first case above. When a skeletal edge connected to
the input vertex leaves the power cell, the tetrahedra formed around it will clearly
capture topology of the resulting solid. However, there are more special cases to
consider: triangular and tetrahedral skeletal elements and the case when one input
vertex is contained in the power cell of another input vertex. Also, note that more

Figure 4.2: The power cell, its associated input vertex, and skeletal edges inter-
secting the cell form tetrahedra for the triangulation.

than one skeletal edge may intersect a power cell face. The rest of this section deals
with these special cases.
When a power cell face intersects more than one skeletal edge, the face is split
with an edge as shown in Figure 4.3. Clearly, when the skeletal edge is connected
to the input vertex at the center of the power cell, this will generate tetrahedra
that meet our requirements. However, it is possible for skeletal edges not associated
with the input vertex of the power cell to cross through the power cell. Consider
the simple case in Figure 4.4. It would appear that the tetrahedra generated for
power cells in these cases might intersect the solid surface more than once if the
skeletal edge is a member but the triangle connecting the edge to the input point
is not. However, the power cell boundary will never include such an edge because
it is also based on the intersection of input vertex spheres; if the edge is a member
of the skeletal subcomplex without the triangle, that implies that the spheres for
the three input points do not intersect in a common point. The point of minimum
power distance among the three points will be outside the three spheres (but inside
the triangle). Since this point forms the corner of the power cell and is inside the

Figure 4.3: A power cell showing how faces and edges are split to handle special
cases.

triangle, the power cell cannot contain the skeletal edge between the outside vertices.
Similarly, in three dimensions, the point of minimum power distance between four
vertices that determines a corner of a power cell will lie inside the tetrahedron
defined by the vertices and prevent a triangle not connected to the input vertex at
the center of a power cell from intersecting the power cell.
To ensure that skeletal triangles are correctly tessellated, we must also split
power cell edges into two wherever a skeletal triangle intersects an edge of the power
cell, as shown in Figure 4.3. Splitting the edge forces two tetrahedra to be generated
instead of one. The two tetrahedra each intersect the solid on one side of the triangle.
As we noted in the previous paragraph, it is possible for a skeletal triangle to be
partially contained in a power cell (when one of its edges passes completely through
the cell). When this occurs, there are edges of the power cell that would normally
have been split for the triangle that are not since they intersect the triangle's plane
outside of the skeletal triangle. These edges must be split.
Although it is possible to determine which edges need to be split by tracing
the power cell tetrahedra through which the skeletal edge passes, the implementation

Figure 4.4: Some skeletal edges pass through a power cell without connecting to
the input vertex.

for this dissertation simply splits any power cell edge wherever it intersects a plane
defined by a skeletal triangle. This is done not only for simplicity, but also because
it eliminates the need for communication between processors when the algorithm
executes in parallel; if a power cell edge is split for one cell, all the other power
cells that share the edge must also be split or the surface triangulation might have
cracks.
Finally, there are tetrahedra. Luckily, no further processing is required,
since their boundaries are triangles, which have already been discussed, and their
interiors lead to no new segments of the solid surface that might be missed by the
triangulation.

4.3 Adaptive triangulation

Now that we have initial tetrahedra, the adaptive triangulation is simple and
performed according to Velho's algorithm. His paper (Velho 1996) contains a more
thorough discussion of the algorithm and its merits. Below is a summary of the
algorithm.
To generate the isosurface approximation, we compute the value of w(x) at

(a) Initial triangulation. (b) One adaptive step allowed.

Figure 4.5: The initial mesh and a finer triangulation of the offset surface.

each vertex of a tetrahedron. If above zero, the point lies outside the solid. If below
zero, it lies inside the solid. On edges of the tetrahedron that connect vertices with
differing signs, we find the intersection of the solid surface with the edge. These
intersection points (of which there are either 3 or 4 per tetrahedron) are connected
into one or two triangles to generate the isosurface, as shown in Figure 4.2. If
we stopped here, we would have a simple but complete isosurface, like the one in
Figure 4.5(a).
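The per-tetrahedron step above can be sketched as follows. This is a minimal illustration with a hypothetical function name; the actual implementation refines the result adaptively as described next.

```python
# Sketch: sample w at the four vertices of a tetrahedron and linearly
# interpolate the w = 0 crossing on each edge whose endpoint values differ
# in sign.  The 3 or 4 crossing points form one or two surface triangles.

def edge_crossings(verts, w):
    """verts: four 3-D points; w: the four scalar values w(x) at them."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        if (w[i] < 0.0) != (w[j] < 0.0):      # sign change: surface crosses
            t = w[i] / (w[i] - w[j])          # linear interpolation parameter
            pts.append(tuple(verts[i][k] + t * (verts[j][k] - verts[i][k])
                             for k in range(3)))
    return pts

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [-0.5, 0.5, 0.5, 0.5]                  # one vertex inside the solid
print(edge_crossings(tet, vals))              # three crossing points
```

With one vertex inside, three edges change sign and the crossings form a single triangle; with two vertices inside, four edges change sign and the crossings form a quadrilateral split into two triangles.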
In order to get a better approximation we subdivide output edges where the
difference in surface normals at the endpoints is larger than some given tolerance.
The surface normal at a point is the normalized gradient vector of w(x), which is
obtained numerically by perturbing each input coordinate. Figure 4.6 shows the
case when 2 of the edges need to be subdivided. There are 4 possible cases which
generate between 1 and 4 triangles. Each output triangle edge is then tested to see
if subdivision should occur.

Figure 4.6: Deciding whether an output edge is to be subdivided.

4.4 Parallel isocontouring

Tasks are usually run in parallel to reduce execution time. So, in addition to
performing the task, the algorithm used must also fully utilize all processors while
minimizing the overhead required to coordinate the processors. Communication
between processors is typically part of the overhead that prevents an algorithm
on two processors from running twice as fast as on a single processor. So, good
algorithms minimize or even eliminate communication between processors. However,
minimizing communication often incurs other penalties. For instance, our algorithm
eliminates communication between processors at the expense of both extra memory
required to store points duplicated across processors and extra computation time
required in calculating these duplicate points.
This implementation assumes that the thread package will distribute threads
across processors to balance the load at that level. However, future work might
predict the amount of work required for a given input vertex by examining the

number of vertices in a power cell and the number of skeletal elements in the cell. The
number of vertices in the power cell determines the number of initial tetrahedra, and
the number of skeletal elements determines the number of distance calculations
that must be performed for each tetrahedron. This prediction could be used to assign
neighboring input vertices to the same processor so that output vertices could be
shared between power cells.

4.4.1 Safe blending

As we noted at the beginning of this section, implicit blending allows for


topology changes without any change of form in the surface equation. We must be
careful not to allow this to happen without changing the skeleton.
First, let's examine the blending equation. Rather than using w(x) from
above, we'll use

    z(x) = min( w(x), min_i [ 1 − Σ_{j∈Ki} e^(−bi wj(x)) ] )

where Ki is a subset of the skeletal complex K that the user has chosen and i
represents a coloring of skeletal elements. The user can create as many different
colors as desired at the expense of contouring time. Only skeletal elements marked
as sharing a color will be blended. All of the blends are then unioned with each
other and the strict offset, w(x), using the min function. wj(x) is the distance to
the strict offset of simplex j in Ki alone. bi is the blobbiness factor controlling how
sharp the corners are on the blend. The higher the value of bi, the sharper the
corners. As bi → ∞, the surface approaches the strict offset.
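One plausible reading of this blend can be sketched directly. The sign conventions here (negative inside, blobby terms filling gaps between colored elements) are an assumption reconstructed from the surrounding text, and the distance functions are made up for illustration; this is not code from the modeler.

```python
import math

# Sketch: per color i, the blobby term 1 - sum(exp(-b_i * w_j(x))) is
# negative inside the blended region, and min() unions the blends with the
# strict offset w(x).  Signs are an assumption, not verified source code.

def z(x, w, colored, b):
    """w: strict offset distance; colored[i]: per-simplex distance
    functions for color i; b[i]: blobbiness for color i."""
    val = w(x)
    for i, dists in colored.items():
        blend = 1.0 - sum(math.exp(-b[i] * d(x)) for d in dists)
        val = min(val, blend)
    return val

# Two unit circles whose centers are 2.5 apart: the strict offset leaves a
# gap between them, but the blob terms fill it in.
d1 = lambda x: math.hypot(x[0], x[1]) - 1.0
d2 = lambda x: math.hypot(x[0] - 2.5, x[1]) - 1.0
w = lambda x: min(d1(x), d2(x))

mid = (1.25, 0.0)
print(w(mid) > 0.0)                               # outside the strict offset
print(z(mid, w, {0: [d1, d2]}, {0: 2.0}) < 0.0)   # inside the blend
```

Raising the blobbiness value shrinks the filled-in region, consistent with the surface approaching the strict offset as the blobbiness grows without bound.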
Stander & Hart (1997) have shown how examining the critical points of the
blending function is one way to determine the topology. However, by placing some
restrictions on bi and our isocontouring technique, we can avoid tracking critical
points. Instead of explicitly tracking when

• critical points disappear as disjoint components of the blobby surface merge

or

• critical points are created where the blob function exhibits constructive
interference in a region of space near several blobby skeletal elements

we force the blobby model to have the same topology as the skeleton. This is
accomplished by

1. Disallowing blending between skeletal elements that are not on the same con-
nected component. This is a simple O(log(n)) check using our vertex graph.

2. Checking that vertex values on the initial tetrahedra used for isocontouring
have the expected sign. When a vertex does not have the expected sign, it is
due to a local maximum in the blob function that is not represented by any
input point. Rather than include the additional blobby component, we ignore
it by ensuring that the chosen intersection of the tetrahedron's edges with the
offset is the closest to the input point.

We choose to ignore these additional connected components because they do not


adhere to the principle of minimum astonishment.
An alternate technique would include monitoring critical points of the blobby
blending function and checking the function value at each minimum or maximum.
Each tetrahedron of the regular triangulation would have an expected set of critical
points depending on which facets were members of the skeletal subcomplex. While
this method does not use the structure of the spatial partition and would result in
overhead, it is less restrictive in the allowable values of bi .

4.5 Algorithm and Measurements

Now that we have described how the algorithm works, it should be clear in
the pseudocode form of Algorithms 4.1 through 4.4.

Algorithm 4.1 is run for each input vertex in concurrent threads. The calls
to LockSubComplex and UnlockSubComplex ensure that only one thread has write
access to the simplicial complex as some operations require the algorithm to mark
members as they are visited.
Algorithm 4.3 is called by Algorithm 4.1 to divide a power cell that contains
input points besides the generator of the power cell. The result is a series of power
cells each of which contains exactly one input vertex.
Algorithm 4.2 is called by Algorithm 4.1 or 4.3 once a power cell has been
constructed. It divides edges and faces where skeletal elements pass through the
power cell.
Finally, Algorithm 4.4 is called for each power cell to approximate the offset
surface. It steps along the edges of each power cell face to generate tetrahedra that
are partitions of the power cell. Each tetrahedron serves as an initial tetrahedron
for Velho's polygonization routine.
Figure 4.7 shows the amount of time the isocontouring took on a single
processor divided by the time required for two processors. The algorithm can perform
at near-theoretical speeds (with a speedup of 1.97 for 11 input vertices), but is slower
and less predictable for lower numbers of input points. This occurs when

• several input points that take longer than average are assigned to one
processor, or

• the number of input points is not divisible by the number of processors and
some processors must take extra load.

However, with small numbers of input vertices, the time required is small and is not
a concern.

Algorithm 4.1: Triangulating the offset surface.
TriangulateVertexCell(int vert)
if vert is not inside its own power cell then
return { This vertex will be handled later }
end if
LockSubcomplex()
PowerCell cell ← ComputePowerCell(vert)
vlist ← all vertices connected by an edge to vert.
skel ← all simplices in the skeleton containing vert.
cellVertices ← all input points contained in cell.
for all c ∈ cellVertices do
vlist += all vertices connected by an edge to c.
skel += all simplices in the skeleton containing c.
end for
UnlockSubcomplex()
remove all duplicate entries from skel and vlist.
otherVertex ← the vertex in vlist closest to vert.
if otherVertex is inside cell then
Remove otherVertex from vlist.
Plane p ← GetDividingPlane(vert, otherVertex)
DivideAndConquer(vert, otherVertex, cell, p, vlist, skel)
else
{ There are no vertices besides vert in cell. }
ImprintSkeleton(cell, skel)
PolygonizeCell(cell, vert, vlist, skel)
end if
SignalCompletion()

Algorithm 4.2: Imprinting the skeleton onto the cell boundary.
ImprintSkeleton(cell, vert, flist, elist, tlist)
for all e ∈ elist do
for all faces f ∈ cell intersecting e do
if f is marked with another edge then
Split f in two with one intersection point to either side.
else
Mark f with the intersection point.
end if
end for
end for
for all f ∈ flist do
for all edges e ∈ cell intersecting f do
Split e in two at the intersection point.
end for
end for

Figure 4.7: Speedup versus number of input vertices.

Algorithm 4.3: Dividing the cell into regions around each contained vertex.
DivideAndConquer(vert, otherVert, cell, p, vlist, skel)
cellCopy ← copy of cell.
cell ← cell − positive halfspace of plane p.
cellCopy ← cellCopy − negative halfspace of plane p.
for all v ∈ vlist do
if v is in positive halfspace of p then
Remove v from vlist and insert it into v2list
end if
end for
otherVertex ← the vertex in vlist closest to vert.
if otherVertex is inside cell then
Remove otherVertex from vlist.
Plane p ← GetDividingPlane(vert, otherVertex)
DivideAndConquer(vert, otherVertex, cell, p, vlist, skel)
else
{ There are no vertices besides vert in cell. }
ImprintSkeleton(cell, skel)
PolygonizeCell(cell, vert, vlist, skel)
end if
otherVertex ← the vertex in v2list closest to vert.
if otherVertex is inside cellCopy then
Remove otherVertex from v2list.
Plane p ← GetDividingPlane(vert, otherVertex)
DivideAndConquer(vert, otherVertex, cellCopy, p, v2list, skel)
else
{ There are no vertices besides vert in cellCopy. }
ImprintSkeleton(cellCopy, skel)
PolygonizeCell(cellCopy, vert, v2list, skel)
end if

Algorithm 4.4: Polygonizing the cell surrounding each vertex.
PolygonizeCell(cell, vert, vlist, skel)
v0 ← the input vertex of cell.
for all faces f ∈ cell do
if f is marked with a skeletal edge, s, passing through it then
v1 ← the intersection of s with f.
for all edges, e, in face f do
v2, v3 ← the vertices of e.
TriangulateTetrahedron(v0, v1, v2, v3, skel)
end for
else
for all triangles, t, in face f do
v1, v2, v3 ← the vertices of t.
TriangulateTetrahedron(v0, v1, v2, v3, skel)
end for
end if
end for

4.6 Example

These properties can also be used in tandem with other simple models to
perform feasibility analysis. The shape shown in Figures 3.2 through 4.5 represents
a simple compliant mechanism intended to help a person open a bottle with an
overtightened cap. We'll consider a concept variant with a slightly different geometry
(in Figure 4.8) as an attempt to handle bottles with different cap sizes.
For the skeletal model, we can treat each skeletal edge as a cantilever beam,
connected to other beams at the input vertices. Taking the mechanism to be verti-
cally symmetric and the vertex at the far left of Figure 4.8 to be fixed, we can sum
external forces to find the reaction at each consecutive vertex. The reaction required
at the vertex will include a moment, M , and a force F. Assuming that the material
is linearly elastic and resists both moment and axial loads at the input vertex, the


Figure 4.8: A simulation of the compliance of a bottle opener concept variant.

deflection of the vertices can be related to the deflection of the input vertices:

    M/(E I) − Δs = 0
    P L/(A E) − δ = 0

Here, E is the elasticity of the material, I is the moment of inertia of a cross-section
of the beam, A is the cross-sectional area of the beam, P is the axial portion of the
load F, L is the length of the beam, δ is the change in length of the beam taken
from the difference between the starting positions of the vertices, and Δs is the
change in slope of the skeletal segments. Both δ and Δs may be computed from the
geometry alone. These two equations can be written for each input vertex. At any
vertex, i, δ_i is simply the length of the i-th skeletal edge minus its original length.
Δs is approximated by the angle between the i-th and (i + 1)-th segments divided
by half of the length of each segment.
Solving for the zeros of this array of equations by varying the input vertex
positions yields the deflected shape of the compliant member for any given combi-
nation of external loads. These loads are specified at the handle and computed for
the bottle cap using a penalty function. The nib point of the triangular skeletal
element is handled by treating the triangle as a rigid body and applying the external

couple generated by the cap penalty function to the 5th vertex from the left.
For a 10 N (2.25 lbf) force clamping the grips together, approximately 3 N·m
(2.2 ft·lbf) of frictional torque can be produced. By varying the offset radius at
vertices along the skeleton, the sensitivity of the solution can be tested. The position
of the handles in their deflected state is extremely sensitive to small changes in the
cap penalty function, indicating that an important performance measure for this
concept variant is the range of bottle cap sizes it can accommodate.
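As a concrete sketch, the per-vertex residuals and the root search can be written as follows. The material and section values are hypothetical, and a one-dimensional bisection on a single axial equation stands in for the full search over vertex positions.

```python
# Sketch of the per-vertex residual equations, with assumed material and
# section values.  delta is the change in segment length and dslope the
# change in slope; both would come from candidate vertex positions.

E = 2.0e9      # elastic modulus, Pa (assumed)
A = 1.0e-4     # cross-sectional area, m^2 (assumed)
I = 1.0e-8     # second moment of area, m^4 (assumed)

def residuals(M, P, L, delta, dslope):
    return (M / (E * I) - dslope,     # bending: curvature balance
            P * L / (A * E) - delta)  # axial: elongation balance

# Solve the axial equation for delta by bisection, mimicking the root
# search over vertex positions (closed form: delta = P * L / (A * E)).
def solve_delta(P, L, lo=0.0, hi=1.0, tol=1e-12):
    f = lambda d: residuals(0.0, P, L, d, 0.0)[1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(abs(solve_delta(10.0, 0.1) - 10.0 * 0.1 / (A * E)) < 1e-9)  # True
```

The full simulation solves the coupled bending and axial residuals for all vertices simultaneously, but each equation has exactly this form.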

4.7 Analysis Using The Offset

One of the advantages of using a spatial partition to form the skeleton of the
model is that it can also be used to partition the solid model. It is a simple task to get
surface area, volume, and moments of inertia of portions of the model as well as the
whole. This is ideal for engineering estimates based on lumped parameter analysis.
Because the power cells were used as boundaries for dividing the offset surface
into regions to be distributed to different processors, we have the offset surface
partitioned into regions closest to the input points generating it, as illustrated in
Figure 4.9.
As the offset is generated, each triangle on the surface forms a tetrahedron
with the input point as its fourth vertex. This is labeled as a surface tetrahedron in
Figure 4.9. Edges that do not border two facets are edges that intersect the power
cell boundary. Each face of the power cell where such an edge exists is marked with
a point inside the solid. Using the input point, the marked point, and each boundary
edge, we can form the boundary tetrahedra shown in Figure 4.9. By summing the
surface area of each surface triangle, we get the surface area corresponding to the
vertex. If we sum the area of each triangle formed by the boundary edges and the
power cell mark points, we get the surface area of the boundary. These are useful
for estimating stresses (external force divided by surface area) and heat transfer.
Summing the volumes of all the tetrahedra yields the volume of the part associated

Figure 4.9: Obtaining mass properties associated with an input point.

with an input point. This information can be used to estimate the mass of a region
or the Biot number for a region of the part (to determine the relative significance
of convection to conduction heat transfer). The principal axes of the part can also
be obtained by combining the moments of inertia of all the tetrahedra. This data
can be used to further refine the simulation of the bottle opener by providing more
accurate moments of inertia for the beams.
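The volume accumulation can be sketched directly. This is a minimal illustration: tet_volume and region_volume are hypothetical names, and a unit square face stands in for the real surface triangles of a power cell region.

```python
# Sketch: sum the volumes of the tetrahedra formed by the input point and
# each surface triangle of its region (the "surface tetrahedra" above).

def tet_volume(a, b, c, d):
    """Unsigned volume of tetrahedron abcd via the scalar triple product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def region_volume(input_point, surface_triangles):
    return sum(tet_volume(input_point, *tri) for tri in surface_triangles)

# Input point at the origin; two triangles covering the unit-square face
# at x = 1 give the volume of the pyramid over that face: 1/3.
tris = [((1, 0, 0), (1, 1, 0), (1, 0, 1)),
        ((1, 1, 0), (1, 1, 1), (1, 0, 1))]
print(region_volume((0, 0, 0), tris))
```

Surface areas accumulate the same way over the triangles alone, and per-tetrahedron moments of inertia can be summed about a common origin to recover the principal axes of a region.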

4.8 Summary

In this chapter, we've shown how the regular triangulation can be used to
quickly isocontour the offset surface. This is accomplished by

1. Using the dual diagram of the triangulation to split each region into tetrahedra
that meet the requirements of isocontouring. In particular, the tetrahedra are
guaranteed to intersect the offset at most once along an edge and the triangles
bounding each tetrahedron will not intersect the offset surface without the
edges intersecting.

2. Using the spatial partition to

(a) distribute work to processors working in parallel and

(b) avoid calculating the distance function for skeletal elements that could
possibly intersect a given power cell.

A limited form of blending has been described to allow smooth joints between neigh-
boring simplices of the skeletal subcomplex. Finally, mass properties for a region
around each input vertex are extracted from the model and used for analysis.

Chapter 5

Non-Uniform Material
Distributions

Now that we've developed the techniques to represent a skeletal solid, let's
consider some applications of the modeler. In addition to our running example of
the bottle opener, consider a hip joint replacement, such as the one in Figure 5.1.
The design of these joints has several conflicting goals. In particular, porous ex-
ternal surfaces such as hydroxyapatite (Bajer, Maldyk & Kowalczewski 1999) or
calcium carbonate (CaCO3 ) are desirable since they are recognized by the body as
biocompatible and reduce the need for bone cement by encouraging the remaining
bone to grow into the pores of the implant. Also, ceramic ball joints will wear less
than metal joints, which is important since hip joint replacements are expected to
last approximately 10 years (University of Iowa College of Medicine 2000). The need for a high
strength member conflicts with the benefits of a ceramic surface. Although ceramics
are strong, they often fail without notice, which is an unacceptable liability for med-
ical practitioners. The loads on hip joints can be as high as 8 to 9 times a person's
body weight (Bergmann, Graichen & Rohlmann 1993). Thus the rod section of a
hip joint replacement is usually titanium (Ti).
By spatially varying material composition, we can eliminate these conflicting


Figure 5.1: A hip joint replacement has two parts, of which we are concerned with
the femoral component. This figure is taken after (University of Iowa
College of Medicine 2000).

goals. Because of the angle between the ball and the rod, there is a bending moment
at the turn in the neck of the joint. This is where the most flexion of the joint
replacement should occur. Placing only metal material in the region of maximum
joint deflection will help prevent cracks from propagating in the ceramic outer layer.
Luckily, the deflection occurs in a spot where little wear or bone/connective tissue
growth takes place.
A designer may specify material composition with a skeletal model of the
joint replacement by choosing more than one set of offset radii for each skeletal
element and setting the material composition on each resulting isosurface. Notice
that, like the specification of the skeleton itself, this reduces the dimensionality of
the problem from creating a 3-dimensional material gradient to a 2-dimensional one.
Figure 5.2 shows a simple skeletal model of a hip joint replacement with two surfaces.
The blue surface of the rod represents an isosurface of 100% Ti. The yellow surface
represents an isosurface of 100% CaCO3 .


Figure 5.2: The hip joint replacement represented as a skeleton with two sets of
offset radii (one in blue, one in yellow).

Material composition at any point in space, c(x), can be treated as a vector


whose components represent the fraction of a given material. If an object has 5
materials that are mixed to produce the composition at any point, c would have 5
components. We require ci ≥ 0 and |c| > 0 inside the solid. Note that we don't
require |c| = 1 in the solid. Instead, to determine composition, consider ĉ = c/|c|.
As long as |c| > 0 holds, ĉ exists and has unit magnitude. This extra step is required because
of the way blending between maps is accomplished and will be discussed later. The
components of c may refer to volume or density-based fractions of material present.
Take your pick.
Now, since the designer specifies composition over surfaces (i.e., c(u, v)) and
we want to know composition at any point in a volume (c(x)), we must develop the
relationship between the two. To find the composition at any point in the solid, we
rely on the order in which the tuples of (radius, composition) are specified, as shown
in Table 5.1. The earlier tuples take precedence. Note that this ordering can be
independent of any geometric relationship; the ordering determines which isosurface
takes precedence if there is any intersection. When two isosurfaces intersect, the
material composition is overspecified without this rule.

Order   Radius   Composition function
1       r1       c1
2       r2       c2
3       r3       c3
...     ...      ...

Table 5.1: Material composition, c, specified on a collection of different isosurfaces.

Order Radius Composition function


1 r1 = [ 12, 0, 0, 0, 3, 3, 3, 4, 4, 2 ] c1 = [ 1, 0 ]
2 r2 = [ 14, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] c2 = [ 0, 1 ]

Table 5.2: Material composition of the hip example, c, specified on two different
isosurfaces. The radius vector specifies the radii as labeled in Figure 5.2.

For the hip joint example, we'll have the Ti isosurface take precedence. So, for
each skeletal element we have the two radii and two material composition functions
shown in Table 5.2. The composition functions are constant in our case, meaning
that each isosurface is a surface of constant material composition. To define a more
complex function on the isosurfaces requires a map from the plane to the isosurface.
This problem has been addressed by Hans Kohling Pedersen (1995) and Zonenschein,
Gomes, Velho, de Figueiredo, Tigges & Wyvill (1998).
The radius vector in the table is a vector of the radii for all the skeletal
elements present. The numbering in Figure 5.2 shows which entries of the radius
vector correspond to which skeletal elements. The radius values are only illustrative,
but are to the scale of the model in the figures.
With the ordering determined, we can calculate the material composition
at any test point, x. The first step is to compute the distance to each of the
isosurfaces. Proceeding in the order of the tuples, compute the distance function
di(x). If d1 < 0 then the material composition is determined solely by the first
isosurface: the composition is the same as that of c1(y), where y is the closest
point to x on isosurface 1. Otherwise, for nondegenerate isosurfaces, there will
occur some point where di ≥ 0 and di+1 < 0. When this occurs the test point x lies


Figure 5.3: The material gradient in a planar slice of the hip joint replacement.

between the two surfaces and we wish to blend the material maps together to get
the composition at x. Here is the blending equation used:

    c(x) = ( |di+1(x)| ci(yi) + |di(x)| ci+1(yi+1) ) / ( |di(x)| + |di+1(x)| )        (5.1)

where yi is the closest point to x on the isosurface of tuple i. Finally, when all of
the distance functions are positive, di (x) > 0, we are outside of the solid.
Note that no singularities can occur in ĉ(x) since all components of all ci are
positive and the sums of the distance functions are all positive. This means that ĉ(x)
exists everywhere inside the solid. However, blending two vectors of unit magnitude
together does not yield a result of unit magnitude. This is why we normalize the
composition vector after blending; the components will not always sum to 1. Still,
the fact that each component of a composition vector is positive means that we
cannot introduce a zero-length vector by blending two vectors pointing away from
each other. This is important since otherwise we could not guarantee the existence
of ĉ.
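The blend and the normalization can be sketched together. The distances and surface compositions below are hypothetical values for the two-material hip example, and the function name is illustrative.

```python
# Sketch of the distance-weighted blend of Equation 5.1 followed by the
# normalization that recovers the unit composition vector c-hat.

def blend(d_i, d_next, c_i, c_next):
    """Blend compositions from consecutive isosurfaces at a point between
    them (d_i >= 0 outside surface i, d_next < 0 inside surface i+1)."""
    wi, wn = abs(d_next), abs(d_i)
    c = [(wi * a + wn * b) / (wi + wn) for a, b in zip(c_i, c_next)]
    mag = sum(v * v for v in c) ** 0.5   # |c| > 0 since all components >= 0
    return [v / mag for v in c]          # c-hat: unit-magnitude composition

# Halfway between a 100% Ti surface and a 100% CaCO3 surface:
print(blend(0.5, -0.5, [1.0, 0.0], [0.0, 1.0]))
```

Halfway between the surfaces the result is an equal-parts mixture with unit magnitude, and at either surface the result collapses to that surface's pure composition.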
Figure 5.3 shows the result of the blending algorithm applied to the hip joint
replacement. The region of the slice inside the first isosurface is dependent solely on
the composition on the first isosurface it is 100% Ti. Then, in the region between
the two isosurfaces, there is a smooth gradient from Ti to CaCO3 . If the designer

wanted a region of solid 100% CaCO3 , a third isosurface could be added with the
same composition as the second. The blend of the two would be a constant 100%
CaCO3 .
The discontinuity in the figure occurs because the isosurfaces intersect. Note
that, as we stipulated, the Ti surface takes precedence.

Chapter 6

Conclusions

From the previous chapters, it should be clear that a skeletal modeler based
on the input of weighted points is feasible for conceptual mechanical design; the areas
we checked for feasibility are based on demands from the customers (designers).
First, the input is smaller than that of current modelers, in both the dimension of the
input data (2D for a skeleton vs. 3D for a solid model) and the type of input (points
alone vs. points and connectivity). Second, the modeler provides information useful
to engineers performing lumped parameter analysis. Third, the topology of the solid
model can be measured and constrained. Finally, the generation of the offset can
be performed efficiently using parallel processing and adaptive triangulation.
The following section reviews the contributions of this research to the commu-
nity that were developed during the course of the feasibility analysis. It is followed
by suggestions for future work on the modeler.

6.1 Contributions

This dissertation develops several unique extensions to the state of the art
of solid modeling and computational geometry:

• We have shown how the Betti numbers may be calculated as the set of input
  vertices changes, rather than requiring knowledge of all the vertices a priori.

• We have developed a criterion for selecting simplices to be in a skeletal
  subcomplex such that the topology of the subcomplex still matches the topology
  of the union of balls.

• We have adapted isocontouring techniques to take advantage of the structure
  provided by the power diagram to

  – reduce the overhead associated with visiting tetrahedra that generate no
    output triangles and

  – divide the isocontour into regions that can be processed in parallel.

• We have employed the power diagram to partition the solid model into regions
  associated with each input vertex and shown how these regions can be used
  in lumped parameter analysis for feasibility calculations.
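To give the flavor of the first contribution, here is a minimal sketch of incremental topology tracking, restricted to β0 (the number of connected components) and maintained with a union-find structure as vertices and skeletal edges arrive one at a time. This is only an illustration in the spirit of the incremental approach of Delfinado & Edelsbrunner (1993), not the modeler's implementation; the class and method names are invented for the example.

```python
# Illustrative sketch (not the modeler's implementation): tracking the zeroth
# Betti number (number of connected components) incrementally with union-find
# as vertices and skeletal edges are added one at a time.

class IncrementalBetti0:
    def __init__(self):
        self.parent = {}
        self.beta0 = 0  # current number of connected components

    def add_vertex(self, v):
        if v not in self.parent:
            self.parent[v] = v
            self.beta0 += 1  # a new vertex is its own component

    def _find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def add_edge(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru != rv:
            self.parent[ru] = rv
            self.beta0 -= 1  # the new edge merges two components
        # if ru == rv, the edge closes a cycle and would raise beta1 instead

b = IncrementalBetti0()
for v in "abcd":
    b.add_vertex(v)       # four isolated vertices: beta0 == 4
b.add_edge("a", "b")
b.add_edge("c", "d")
print(b.beta0)            # prints 2: two components remain
```

Extending this to β1 and β2, and to deletions, requires the full simplicial-complex machinery described in the earlier chapters.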

Despite these contributions, there are many areas where more work is needed
to turn skeletal modeling into a useful tool. The next section describes some
possibilities.

6.2 Future Work

There are two paths to improving the modeler: altering the internals of the
modeler to address known performance problems and extending the modeler's
functionality to accommodate different applications. The two following sections
focus on these avenues.

6.2.1 Modeler internals

Although the topology can be constrained, the cost of doing so can be high when
thin geometric features are placed close to each other; the sampling that must occur
for the union of spheres to enclose the offset surface can produce large numbers of
sample points. One way to reduce this might be to implement a vertex coloring
scheme, in which only vertices sharing the same color can ever be connected by a
member of the skeleton. This would require either a guarantee from the user that
offsets of differently-colored vertices would never touch or a notice that the topo-
logical invariants would no longer be valid for the model. Because the isosurfacing
scheme relies on the topology of the skeleton and offset being identical, a different
technique would have to be developed for generating the offset from the skeleton.
Another area that needs further development is the use of negative blending,
i.e., skeletal elements that reduce the magnitude of the scalar function of space
that is contoured to produce the part from the skeleton. Until this is studied, the
modeler will not be very useful for parts with indentations that are small relative to
the size of nearby geometry. Small protrusions are handled well, but not dents or
cuts.
The modeler could also be much more flexible in the way geometry is offset
from the skeleton. As long as the union of spheres bounds the offset, the topology
of the skeleton and offset will match. So, for instance, the distance function could
vary with angle around a skeletal line to produce elliptical cross-sections. In fact,
any convex cross section that is bounded by the union of spheres can be used. The
difficulty with non-convex cross-sections is that voids and tunnels may be created
at junctions of elements.
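As a concrete (and purely illustrative) instance of an angle-dependent distance function, the sketch below evaluates the polar radius of an elliptical cross-section around a skeletal line and confirms that it stays inside the bounding sphere, as the topology argument requires. The function name and the sample radii are assumptions made for this example.

```python
# Hypothetical angle-dependent offset distance around a skeletal line, giving
# an elliptical cross-section. To keep the topology argument intact, the
# ellipse must stay inside the bounding sphere; here that means the semi-major
# axis a must not exceed the sphere radius r_max.
import math

def elliptical_offset(theta, a, b):
    """Polar radius of an ellipse with semi-axes a, b at angle theta."""
    return (a * b) / math.sqrt((b * math.cos(theta)) ** 2 +
                               (a * math.sin(theta)) ** 2)

r_max = 2.0      # radius of the bounding sphere from the union of balls
a, b = 2.0, 1.0  # semi-axes of the cross-section; a <= r_max

for k in range(8):  # sample the cross-section and confirm it is bounded
    r = elliptical_offset(k * math.pi / 4, a, b)
    assert r <= r_max + 1e-12

print(elliptical_offset(0.0, a, b))           # 2.0 along the major axis
print(elliptical_offset(math.pi / 2, a, b))   # 1.0 along the minor axis
```

Any convex profile bounded the same way could be substituted for the ellipse without changing the topological guarantee.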

6.2.2 Applications for a topological modeler

Having control of topology is useful in a variety of engineering problems
because it allows simplified (i.e., lumped) models to be associated with more realistic
geometry. This makes the task of developing a detailed design from a simple model
much easier. Examples of tasks to which this could be applied include:

• Pipe flow. Attaching a variable representing the length of a pipe to the length
  measure of an implicit blob allows the configuration to change and affect the
  mathematical model. The same can be done with area. These allow estimates
  of flow to be obtained without having exactly specified geometry between pipe
  ends.

• Kinematics. In addition to the obvious link position, velocity, and acceleration
  analysis, interference checks could be performed. During synthesis, topolog-
  ical checks could be used to generate link shapes that avoid contact over the
  range of motion. For analysis, given link shapes could be checked for interfer-
  ence very quickly by taking advantage of the spatial subdivision that a point
  arrangement provides.

• Compliant joints. A skeleton is easy to articulate, making it an interesting
  tool for visualizing joint compliance. Also, it helps to have a geometric model
  (even if it is approximate) to estimate the stiffness of the joint region.

• Heat transfer. Having a rough geometry early in the process can also allow
  preliminary heat transfer models, since quantities like surface area and vol-
  ume are available. For instance, one might compute the Biot number for an
  aluminum model or estimate form factors for radiative heat transfer.

• Manufacturing. Topology can be used to estimate manufacturing cost (number
  of slots for molds or number of material removal operations for machining).
  Edelsbrunner et al. (1998) identify accessibility for molecules using a skeleton.
  A similar analysis might be developed for manufacturing.

• Another interesting manufacturing tool might identify geometric sources of
  uneven heat flow in molds. By indicating portions of the skeleton where heat is
  added or removed, a minimum spanning graph (where the cost is proportional
  to the weight of a skeletal point) would indicate where heat flow was highest
  and lowest.

• Product Architecture. The model could be used to perform what-if analyses,
  changing topology to minimize the number of parts while maintaining device
  functionality.

• Design for X. Find ways to change visibility while maintaining topology. For
  example, how could the shape of a part be altered to allow easy access for as-
  sembly or maintenance while maintaining the topology required for its primary
  functionality?
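The pipe-flow item above can be made concrete with a small lumped model. The sketch below uses the Hagen-Poiseuille relation for laminar flow, with the pipe length and cross-sectional area standing in for the skeleton's length and area measures; the relation chosen, the function name, and the numerical values are all assumptions for illustration, not part of the modeler.

```python
# One plausible lumped model (an assumption, not the dissertation's): laminar
# pressure drop via Hagen-Poiseuille, dP = 8*mu*L*Q / (pi*r^4), where length L
# and cross-sectional area A would come from the skeleton's measures.
import math

def laminar_pressure_drop(length_m, area_m2, flow_m3_s, viscosity_pa_s):
    radius = math.sqrt(area_m2 / math.pi)  # treat the section as circular
    return 8.0 * viscosity_pa_s * length_m * flow_m3_s / (math.pi * radius ** 4)

# Water-like fluid (mu ~ 1e-3 Pa*s), 2 m pipe of 1 cm^2 area, 10 mL/s flow
dp = laminar_pressure_drop(2.0, 1e-4, 1e-5, 1e-3)
print(f"{dp:.1f} Pa")  # roughly 50 Pa
```

As the configuration changes, only the length and area inputs change, so the estimate updates without any detailed geometry.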
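The Biot-number estimate mentioned under heat transfer can be sketched directly from the rough geometry's volume and surface area, using the standard lumped-capacitance convention Bi = h·Lc/k with Lc = V/A. The convection coefficient and dimensions below are assumed values for illustration.

```python
# Sketch of the lumped-capacitance check: Bi = h * Lc / k with Lc = V / A,
# where volume V and surface area A come from the rough geometry. The
# convection coefficient and dimensions are assumed values.

def biot_number(h_conv_w_m2k, volume_m3, surface_area_m2, k_solid_w_mk):
    lc = volume_m3 / surface_area_m2   # characteristic length
    return h_conv_w_m2k * lc / k_solid_w_mk

# Small aluminum part (k ~ 237 W/m-K) in forced convection (h ~ 100 W/m^2-K)
bi = biot_number(100.0, 1e-5, 6e-3, 237.0)
print(bi < 0.1)  # True: a lumped thermal model is reasonable here
```

Because Bi is well under 0.1 in this example, internal temperature gradients could be neglected in a preliminary model.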
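The mold heat-flow idea above (a minimum spanning graph over the skeleton) might look like this in miniature, built with Prim's algorithm. The edge cost here is distance scaled by the mean endpoint weight, one loose reading of "cost proportional to the weight of a skeletal point"; the points, weights, and function names are made up for the sketch.

```python
# Hypothetical sketch: a minimum spanning tree over skeletal points, built
# with Prim's algorithm, as a crude stand-in for the heat-flow graph.
import heapq

def mst_edges(points, weight):
    def cost(u, v):
        (ux, uy), (vx, vy) = points[u], points[v]
        dist = ((ux - vx) ** 2 + (uy - vy) ** 2) ** 0.5
        return dist * (weight[u] + weight[v]) / 2.0

    start = next(iter(points))
    in_tree = {start}
    frontier = [(cost(start, v), start, v) for v in points if v != start]
    heapq.heapify(frontier)
    edges = []
    while frontier and len(in_tree) < len(points):
        c, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue  # stale entry; v was reached by a cheaper edge
        in_tree.add(v)
        edges.append((u, v, c))
        for w in points:
            if w not in in_tree:
                heapq.heappush(frontier, (cost(v, w), v, w))
    return edges

pts = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.0)}
wts = {"a": 1.0, "b": 0.1, "c": 1.0}  # "b" marks a thin skeletal region
tree = mst_edges(pts, wts)
print(len(tree))  # 2: a spanning tree on 3 points has 2 edges
```

Edges of extreme cost in such a tree would flag skeletal regions where heat flow is likely to be uneven.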

Bibliography

Allgower, E. L. & Georg, K. (1990), Numerical Continuation Methods: An Introduction, number 13 in Springer Series in Computational Mathematics, Springer-Verlag, Berlin.

Allgower, E. L. & Gnutzmann, S. (1991), Simplicial pivoting for mesh generation of implicitly defined surfaces, Computer Aided Geometric Design 8(4), 305–325.

Aurenhammer, F. (1987), Power diagrams: Properties, algorithms and applications, SIAM Journal on Computing 16(1), 78–96.

Bajaj, C. L., Pascucci, V., Thompson, D. & Zhang, X. (1999), Parallel accelerated
isocontouring for out-of-core visualization, in IEEE Parallel Symposium on
Visualization, San Francisco, CA.

Bajaj, C., Pascucci, V., Holt, R. & Netravali, A. (1998), Dynamic maintenance
and visualization of molecular surfaces, in Proceedings of the Ninth Canadian
Conference on Computational Geometry.

Bajaj, C., Pascucci, V. & Schikore, D. (1996), Fast isocontouring for improved interactivity, in Proceedings of 1996 Symposium on Volume Visualization, San Francisco, CA, pp. 39–46.

Bajer, C., Maldyk, P. & Kowalczewski, J. (1999), Hip joint implants - survey of numerical modeling, J. Theor. Appl. Mech. 37(3), 435–454.
URL: http://www.ippt.pan.pl/~cbajer/maldyk/index.html

Benouamer, M. O., Jaillon, P., Michelucci, D. & Moreau, J.-M. (1996), A lazy
solution to imprecision in computational geometry, IEEE Transactions? .

Bergmann, G., Graichen, F. & Rohlmann, A. (1993), Hip joint forces during walking and running, measured in two patients, J. Biomech. 26, 969–990. Web page notes that maximum force (which was during a stumble) reached 8 to 9 times the person's body weight.
URL: http://www.medizin.fu-berlin.de/biomechanik/Prohip1e.htm

Bezdek, J., Thompson, D., Crawford, R. & Wood, K. (1999), Direct Engineering: Toward Intelligent Manufacturing, Kluwer Academic Publishers, chapter Volumetric Feature Recognition for Direct Engineering, pp. 15–70.

Blanding, R., Brooking, C., Ganter, M. & Storti, D. (1999), A skeletal-based solid editor, in Proceedings of the Fifth Solid Modeling Symposium, ACM, pp. 141–150.

Bloomenthal, J. (1988), Polygonization of implicit surfaces, Comput. Aided Geom. Design 5(4), 341–355.

Bloomenthal, J. (1994), An implicit surface polygonizer, in P. Heckbert, ed., Graphics Gems IV, Academic Press, Boston, MA, pp. 324–349.

Bloomenthal, J. (1997), Introduction to Implicit Surfaces, The Morgan Kaufmann


Series in Computer Graphics and Geometric Modeling, Morgan Kaufmann,
chapter Surface Tiling.

Cignoni, P., Marino, P., Montani, C., Puppo, E. & Scopigno, R. (1997), Speeding up isosurface extraction using interval trees, IEEE Transactions on Visualization and Computer Graphics 3(2), 158–170.

Crossno, P. & Angel, E. (1997), Isosurface extraction using particle systems, in R. Yagel & H. Hagen, eds, Visualization '97 Proceedings, Phoenix, AZ, pp. 495–498.

Delfinado, C. & Edelsbrunner, H. (1993), An incremental algorithm for the topology of simplicial complexes, in Proceedings of the Ninth Annual Symposium on Computational Geometry, ACM.

Edelsbrunner, H. (1992), Weighted alpha shapes, Technical Report UIUCDCS-R-92-1760, University of Illinois at Urbana-Champaign.

Edelsbrunner, H., Facello, M. A. & Liang, J. (1998), On the definition and the construction of pockets in macromolecules, Discrete Appl. Math. 88, 83–102.

Edelsbrunner, H. & Mücke, E. P. (1988), Simulation of simplicity: A technique to cope with degenerate cases in geometric algorithms, in Proceedings of the 4th ACM Symposium on Computational Geometry, pp. 118–133.

Edelsbrunner, H. & Shah, N. R. (1992), Incremental topological flipping works for regular triangulations, in Eighth Annual Computational Geometry Symposium, ACM, Berlin, pp. 43–52.

Ellsiepen, P. (1994), Parallel isosurfacing in large unstructured datasets, in M. Göbel, H. Müller & B. Urban, eds, Proceedings of the Fifth Eurographics Workshop on Visualization in Scientific Computing, Springer-Verlag, pp. 9–23.

Facello, M. (1993), Constructing Delaunay and regular triangulations in three dimensions, Master's thesis, University of Illinois at Urbana-Champaign. Technical report UIUCDCS-R-93-1797.

Facello, M. (1996), Geometric Techniques for Molecular Shape Analysis, PhD thesis, University of Illinois at Urbana-Champaign. Technical report UIUCDCS-R-96-1967.

Garland, M. & Heckbert, P. (1997), Surface simplification using quadric error metrics, in Proceedings of the 1997 ACM SIGGRAPH annual conference on Computer Graphics, pp. 209–216.

Giblin, P. J. (1977), Curves, Surfaces, and Homology, Chapman and Hall (John Wiley & Sons). A most incredibly useful appendix.

Hungerford, T. W. (1990), Abstract Algebra: An Introduction, 2nd ed. edn, Harcourt


Brace.

Joe, B. (1989), Three dimensional triangulations from local transforms, SIAM Journal of Scientific and Statistical Computing 10, 718–741.

Kernighan, B. W. & Lin, S. (1970), An efficient heuristic procedure for partitioning


graphs, The Bell System Technical Journal 49(2).

Krishnaswamy, V., Hasteer, G. & Banerjee, P. (1997), Load balancing and work load minimization of overlapping parallel tasks, in Proceedings of the 1997 International Conference on Parallel Processing, Bloomington, IL, pp. 272–279.

Lorensen, W. E. & Cline, H. E. (1987), Marching cubes: A high resolution 3D surface construction algorithm, Computer Graphics 21, 163–169. SIGGRAPH '87 Proceedings, M. C. Stone, ed.

Martin, O. C. & Otto, S. W. (1995), Partitioning of unstructured meshes for load balancing, Concurrency: Practice and Experience 7(4), 303–314.

Miguet, S. & Nicod, J.-M. (1995), A load-balanced parallel implementation of the marching-cubes algorithm, Technical Report 95-24, École Normale Supérieure de Lyon.

Mücke, E. (1993), Shapes and Implementations in Three Dimensional Geometry, PhD thesis, University of Illinois at Urbana-Champaign.

Munkres, J. R. (1984), Elements of Algebraic Topology, Perseus Books, Reading,


Mass.

Neumann, U. (1994), Communication costs for parallel volume-rendering applications, IEEE Computer Graphics and Applications pp. 49–58.

University of Iowa College of Medicine (2000), Virtual hospital, HTML at http://www.vh.org/.
URL: http://www.vh.org/Patients/IHB/Ortho/HipReplace/HipReplace.html

Otto, K. N. & Wood, K. L. (2000), Product Design, Prentice-Hall.

Pahl, G. & Beitz, W. (1977), Engineering Design, rev. ed. edn, Springer-Verlag,
Berlin. also published by The Design Council, London.

Pedersen, H. K. (1995), Decorating implicit surfaces, in R. Cook, ed., SIGGRAPH


95 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Ad-
dison Wesley, pp. 291300. held in Los Angeles, California, 06-11 August 1995.

Ramaswamy, R. & Ulrich, K. (1993), A designer's spreadsheet, in T. K. Hight & L. A. Stauffer, eds, Design Theory and Methodology, Vol. DE-Vol. 53, ASME, Albuquerque, NM, pp. 105–113.

Shah, J. J. & Mäntylä, M. (1995), Parametric and Feature-Based CAD/CAM: Concepts, Techniques, and Applications, Wiley.

Shah, J. J., Mäntylä, M. & Nau, D. S., eds (1994), Advances in Feature Based Manufacturing, Vol. 20 of Manufacturing Research and Technology, Elsevier.

Shephard, M. S., Flaherty, J. E., Bottasso, C. L., de Cougny, H. L., Ozturan, C. & Simone, M. L. (1997), Parallel automatic adaptive analysis, Parallel Computing 23, 1327–1347.

Snyder, J. M. (July 1992), Interval analysis for computer graphics, Computer Graphics (Proceedings of SIGGRAPH '92) 26(2), 121–130. ISBN 0-201-51585-7. Held in Chicago, Illinois.

Stander, B. & Hart, J. C. (1997), Guaranteeing the topology of an implicit surface polygonization, in R. Cook, ed., SIGGRAPH '97 Conference Proceedings, Annual Conference Series, ACM SIGGRAPH, Addison Wesley, pp. 279–286.

Szeliski, R. & Tonnesen, D. (1992), Surface modeling with oriented particle systems, in Computer Graphics, Vol. 26, ACM SIGGRAPH, pp. 185–194.

Turk, G. (1992), Re-tiling polygonal surfaces, Computer Graphics 26(2), 55–64. SIGGRAPH '92 Proceedings.

Ullman, D. G. (1992), The Mechanical Design Process, McGraw-Hill Series in Me-


chanical Engineering, McGraw-Hill.

Ulrich, K. T. & Eppinger, S. D. (1995), Product Design and Development, McGraw-


Hill, New York.

Velho, L. (1996), Simple and efficient polygonization of implicit surfaces, Journal of Graphics Tools 1(2), 5–25.

Vermeer, P. J. (1994), Medial Axis Transform to Boundary Representation Conversion, PhD thesis, Purdue University.
URL: http://madison.wlu.edu/~vermeerp/

Voronoi, G. (1908), Nouvelles applications des paramètres continus à la théorie des formes quadratiques, J. reine angew. Math. 134, 198–287. This is the original paper on Voronoi diagrams.

Wilhelms, J. & van Gelder, A. (1992), Octrees for faster isosurface generation, ACM Transactions on Graphics 11(3), 201–227.

Zonenschein, R., Gomes, J., Velho, L., de Figueiredo, L. H., Tigges, M. & Wyvill, B. (1998), Texturing composite deformable implicit objects, in Proceedings of the XI International Symposium on Computer Graphics, Rio de Janeiro, pp. 246–353.

Vita

I was born[1] first, definitely. Then, I[2] went to school[3][4][5][6] for a long, long time.
Now that I'm done, I'm not sure what to do[7].

Permanent Address: 2210 A Lanier Dr.
Austin, TX 78757

This dissertation was typeset with LaTeX 2e[9] by the author.

[1] That implies parents: Margaret E. and David E. Thompson.
[2] Action defines the man: my hobbies include reading (mainly bad science fiction), mountain biking, car restoration, and What Not.
[3] University High School, Baton Rouge, La., 1988.
[4] B.S.M.E., L.S.U. 1993, Magna cum laude.
[5] M.S.E., U.T. Austin 1995.
[6] I started my Ph.D. work at U.T. Austin in August 1996.
[7] Have you a clue? Please let me[8] know.
[8] dc@thompson.cx
[9] LaTeX 2e is an extension of LaTeX. LaTeX is a collection of macros for TeX. TeX is a trademark of the American Mathematical Society. The macros used in formatting this dissertation were written by Dinesh Das, Department of Computer Sciences, The University of Texas at Austin.