LUT[k] = 100 * exp( -(k / BT)^n )

Where k is the index of the array, from -256 to 256, BT is the brightness
threshold, and n is either 2 or 6. This creates values from 0 to 100 that will
be the brightness values assigned to pixels with a brightness difference
corresponding to the one in the LUT.
Masks:
The algorithms can use two kinds of masks to analyse the area surrounding a
pixel. The default is a circular mask of 37 pixels, with the following shape:
XXX
XXXXX
XXXXXXX
XXX+XXX
XXXXXXX
XXXXX
XXX
The other type of mask is a square of 9 pixels, and is used when the program
is called with a special parameter. It provides a much faster evaluation.
XXX
X+X
XXX
The tests carried out showed that the larger mask gave better results for this
project, as it was able to consider a larger area around each pixel, and this
was especially useful because of the noise found in most of the samples.
Edge Detection:
The algorithm runs over the whole image; at each pixel, the brightness
difference between the pixel and each of the pixels in the mask is evaluated.
This difference is used as the index into the LUT, and the value returned is
added to those of the other pixels in the mask. The resulting sum is the new
value assigned to the pixel. If the end result is greater than a threshold, the
pixel is considered to be part of an edge, because that means the pixel and
those surrounding it have very large differences.
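A simplified sketch of this per-pixel evaluation, using the small 3x3 mask for brevity; the stand-in LUT function and the threshold handling (edge when the response g minus the mask sum is positive) are illustrative assumptions, not SUSAN's actual code.

```c
#include <assert.h>
#include <stdlib.h>

/* Crude stand-in for the LUT of the previous section: close to 100 for
 * similar brightness, 0 for very different brightness. */
static int lut100(int diff, int bt)
{
    return (abs(diff) < bt) ? 100 : 0;
}

/* Returns nonzero if the pixel at (x, y) is classed as an edge.
 * img is row-major, w x h; bt is the brightness threshold and g the
 * response threshold. */
int is_edge(const unsigned char *img, int w, int h,
            int x, int y, int bt, int g)
{
    int sum = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0) continue;
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            sum += lut100(img[ny * w + nx] - img[y * w + x], bt);
        }
    /* a small sum means large differences all around: an edge */
    return (g - sum) > 0;
}
```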
Smoothing:
The image is first scaled to make enough space for the mask to fit at the
edges. To achieve this, the rows at the top and bottom of the image are
repeated a number of times equal to half the size of the mask. The same is
done with the columns at the sides.
Once the image is scaled, the same process as for edge detection is followed.
The brightness difference values are computed for each pixel in the mask, and
an average is taken from all the values in the mask. The final average is
assigned as the new value for the current pixel.
27
Implementation
Edge Detection
5.2.3 Testing of the SUSAN algorithm
The usefulness of the SUSAN algorithms for the required purpose was
evaluated using simplified test images, running the edge detection under
various noise scenarios and with different thresholds.
The base images used for the tests were produced using the Gimp graphics
package on Linux. The first step was to create the images that the algorithm is
supposed to obtain: completely black images with only the white borders for
the edges that must be found (Figure 5-3, Figure 5-5). The space between the
borders was then filled with white, to simulate the skin tissue as it appears in
MRI images (Figure 5-4, Figure 5-6).
Figure 5-3 Squares base image
Figure 5-4 Squares sample image
Figure 5-5 Diamonds base image
Figure 5-6 Diamonds sample image
Figure 5-7 Stylised version of the human head
The most basic test image is a black image containing a white square, with
another black square inside the white one. The second is a simple black
image with a hollow white diamond in the middle. A more elaborate test
image is a stylised version of an MRI slice of a human head, including the
main features relevant to this project, such as the skin, bone, and other
internal tissues (Figure 5-7).
Noisy images were then created from the base samples by adding or
subtracting an arbitrary amount to the value of each pixel. The amount is
generated randomly between 0 and a noise factor, which is the largest
number that can be generated. The noise factor was varied in increments of
50, from the base case of 0 up to the maximum value for a pixel in the image.
In the images tested, the maximum value is 255, so the highest noise factor
used is 250, giving 6 levels of noise for the tests (Table 5-1). After a random
number has been added to each pixel, the final value is clamped to remain
within the limits of 0 and 255.
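The noise step could be sketched in C as follows; the function name and the random sign choice are illustrative assumptions.

```c
#include <assert.h>
#include <stdlib.h>

/* Each pixel gets a random offset in [-noise_factor, +noise_factor],
 * and the result is clamped to the valid 0..255 range. */
void add_noise(unsigned char *img, int npixels, int noise_factor)
{
    for (int i = 0; i < npixels; i++) {
        int amount = rand() % (noise_factor + 1);   /* 0 .. noise_factor */
        int v = img[i] + ((rand() & 1) ? amount : -amount);
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        img[i] = (unsigned char)v;
    }
}
```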
The SUSAN program was then run over these noisy data, using different
values for the brightness threshold. The resulting edge detections were
compared against the basic images that have only the outer edges. The
comparison is made pixel by pixel, counting the number of those that have
different values from the original image to the one obtained through
processing.
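The comparison metric described above can be sketched as a simple pixel count:

```c
#include <assert.h>

/* Share of pixels (as a percentage) whose values differ between the
 * base image and the processed result. */
double diff_percentage(const unsigned char *base,
                       const unsigned char *result, int npixels)
{
    int diff = 0;
    for (int i = 0; i < npixels; i++)
        if (base[i] != result[i])
            diff++;
    return 100.0 * diff / npixels;
}
```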
Table 5-1 Sample test images with varying levels of noise (noise factors 0,
50, 100, 150, 200 and 250)
5.2.4 Evaluation of other techniques
Some other methods were tested to try to improve the differentiation of the
diverse tissues in the head slices. The sample images were subjected to these
algorithms before doing the edge detection, to improve the results of the
SUSAN program.
Image smoothing
The smoothing functions incorporated in the SUSAN program were also
tested, in an effort to remove some of the noise in the input images. The
amount of smoothing is adjusted with the same parameter used for the
brightness threshold; values of 24, 9, 2 and 1 were attempted. The results
were not favourable: the processing time increased significantly, while the
images obtained were no more useful, since the smoothing algorithm
included with the SUSAN program does not preserve the edges, but makes
them blurry and difficult to identify.
Histogram equalization:
The histogram of an image is a graph that shows the probability that a pixel,
taken at random, has a certain brightness level (BL). It is obtained by creating
an array with one entry per brightness level, initialised to all zeros. The image
is then traversed pixel by pixel, using the BL of each pixel as an index into
the array and incrementing the entry at that location; as the image is
traversed, the histogram is formed.
Figure 5-8 MRI slice and its associated histogram
To equalize the histogram, an accumulated histogram is created by adding
the value of the previous entry to each entry in the array. The values are then
normalized to the maximum value allowed. This has the effect of
redistributing the probabilities of a pixel having a given BL [Sonka1999].
The perceived effect is that bright pixels become brighter, while dark pixels
appear darker. This is only a visual illusion, since the dark pixels do not
actually become darker, but appear so when contrasted with the brighter
pixels.
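Both steps, building the histogram and equalizing it, can be sketched in C as follows; the function name and in-place remapping are illustrative assumptions.

```c
#include <assert.h>

#define LEVELS 256

/* Build the brightness histogram, accumulate it, normalise to the
 * maximum level (255), then remap every pixel through the result. */
void equalize(unsigned char *img, int npixels)
{
    long hist[LEVELS] = {0};
    unsigned char map[LEVELS];

    for (int i = 0; i < npixels; i++)
        hist[img[i]]++;                          /* histogram */

    long cum = 0;
    for (int l = 0; l < LEVELS; l++) {
        cum += hist[l];                          /* accumulated histogram */
        map[l] = (unsigned char)((cum * (LEVELS - 1)) / npixels);
    }
    for (int i = 0; i < npixels; i++)
        img[i] = map[img[i]];                    /* redistribute the levels */
}
```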
The results obtained from applying histogram equalization to the available
MRI samples were not satisfactory, because of the noise contained in the
images. After applying the algorithm to the slice images, the noisy pixels in the dark
areas became more evident, and caused further difficulties for the edge
detection.
Bi-level thresholding
This technique increases the differences between areas of an image with
very large variations, and makes areas with similar values identical, by
increasing the contrast of the whole image.
The technique used is to shift all pixel brightness values above a certain
threshold to 255, and those below it to 0. This requires an additional
parameter. New tests would be necessary to determine the right threshold to
use, according to the noise in the images and their overall brightness.
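A minimal sketch of this operation:

```c
#include <assert.h>

/* Bi-level thresholding: everything above the threshold is forced to
 * white (255), everything at or below it to black (0). */
void bilevel(unsigned char *img, int npixels, unsigned char threshold)
{
    for (int i = 0; i < npixels; i++)
        img[i] = (img[i] > threshold) ? 255 : 0;
}
```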
This technique is very helpful in finding the edges for the skin and bone,
because the edges that must be found are only those where the values go
from near zero to higher values. Edges from a bright area to another of middle
intensity are of no interest for this project, and bi-level thresholding eliminates
these useless edges.
The images extracted by the edge detector after applying bi-level thresholding
provide a more accurate representation of the skull when used with samples of
good quality. The results are very poor, though, when used on very noisy
images, since the noise artefacts are enhanced and produce false edges.
5.3 Extraction of vertices
Once SUSAN has processed the image to highlight the edges, another
process is charged with filtering the information to obtain only the location of
the air-skin and skin-bone interfaces. This is achieved by finding the two
outermost edges of the image and discarding any inner edges found in
between.
To locate the outermost borders, each row of the image is analysed
independently. Two passes are made over the data: the first goes from left to
right, finding the position of the first two edges and continuing to the end of
the row to count the total number of edges. A second pass then goes in the
opposite direction; as soon as the two required edges are found, the pass
stops and the algorithm skips to the next row.
As each row is being processed, a flag is used to mark whether the pixels
being traversed belong to an edge. When the pixel value goes up from a zero
level, the flag is raised, and remains that way while traversing the several
pixels that represent an edge. Once the pixel values go down again, the flag
returns to show an empty space in the image.
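The flag-based scan of a single row could be sketched as follows; this shows only the left-to-right pass, and the function name is illustrative.

```c
#include <assert.h>

/* Record the starting position of each run of nonzero (edge) pixels in
 * a row, and return the total number of edges found. */
int scan_row(const unsigned char *row, int width,
             int *positions, int max_positions)
{
    int in_edge = 0;   /* the flag */
    int count = 0;
    for (int x = 0; x < width; x++) {
        if (row[x] > 0 && !in_edge) {
            in_edge = 1;                /* pixel value went up: edge starts */
            if (count < max_positions)
                positions[count] = x;
            count++;
        } else if (row[x] == 0) {
            in_edge = 0;                /* back to empty space */
        }
    }
    return count;
}
```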
The position of the edge in each row is stored as the X coordinate of the
point, while the row number is Y and the slice number is Z. Currently there is
no appropriate way of associating the scale of X and Y with Z, since the
sample data used does not contain any information about the separation of
the slices. To adjust the distance between slices, a parameter read from the
configuration file is used to multiply the slice number, creating a more
appropriate representation of the 3D head.
A problem was found when the row had only 2 edges, which happened at the
top and bottom of the test images, and would also happen at the top on real
images of a head. In these cases, the algorithm would not find any inner
edges, and would then consider the outer edge as the location for the inner
one. To avoid this problem, when the total number of edges found in the first
pass is equal to two, the locations of the two inner edges are set to zero, and
they are not considered as vertices for the 3D model.
As part of the implementation, the edges found in each row are stored in an
array of 4 elements. Here the locations of the edges are indexed in the order in
which they appear in the image when going from left to right, and will always
have the same index, regardless of the number of edges found in the row
(Figure 5-9). According to this, the vertices representing the skin will always be
those in the array positions 0 and 3, while the vertices corresponding to the
skull will be stored with indices 1 and 2.
Figure 5-9 Indices of the edges in the array
Having 4 points for every row of each image represents a very large number
of vertices, which would be very difficult to render. To improve the
performance of the display, sacrificing detail for speed, the program has the
option of reducing the number of vertices used to represent the layers; this is
done by taking vertices only every few lines of the input (Figure 5-10). The number
of lines to skip is a parameter given to the program through the configuration
file of the input data. This parameter modifies the results according to the
degree of precision required. To obtain the greatest detail, a value of 1 can be
given, which will make the program use the vertices found in every line, and
thus produce a very smooth 3D model. Larger values will produce a coarser
model, but it will be easier to render.
Figure 5-10 Vertices taken every 10 lines in the image
After the 4 points have been located, they are stored for their future use when
rendering the head. This is implemented using linked lists. For each image
slice, there are two linked lists, one for the outer edge, and another for the
inner. To make it simpler for the renderer to associate the points, they are
inserted into the lists from both ends, depending on the side of the head that
each point represents. While scanning each slice from top to bottom, the
points found on the left side of the head are inserted at the beginning of the
list, while the points for the right side are inserted at the end. With this, all the
points follow a sequence that traces the contour of the head (Figure 5-11).
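A sketch of this double-ended insertion, using a singly linked list with head and tail pointers; the structure and function names are illustrative, not the project's actual code.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Point {
    int x, y;
    struct Point *next;
} Point;

typedef struct {
    Point *head, *tail;
} Contour;

/* Points from the left side of the head go at the front of the list. */
void insert_front(Contour *c, int x, int y)
{
    Point *p = malloc(sizeof *p);
    p->x = x; p->y = y;
    p->next = c->head;
    c->head = p;
    if (!c->tail) c->tail = p;
}

/* Points from the right side of the head go at the back of the list. */
void insert_back(Contour *c, int x, int y)
{
    Point *p = malloc(sizeof *p);
    p->x = x; p->y = y;
    p->next = NULL;
    if (c->tail) c->tail->next = p; else c->head = p;
    c->tail = p;
}
```

Traversing the finished list front to back then yields the points in contour order, as Figure 5-11 shows.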
Figure 5-11 Sequence of the stored vertices (points for the left side are
inserted at the beginning of the list, points for the right side at the end)
Another list is used to store all the information corresponding to a sample
head. Each node in this list represents a slice, and contains the two lists with
the inner and outer edges.
Figure 5-12 Data structures for storing the vertices of the image slices (a list
of slice nodes, each holding an outer edge list and an inner edge list of
points)
5.4 Model generation
The rendering process uses the previously generated list of lists and OpenGL
instructions to produce a 3D model. The points stored in the lists are drawn
with glVertex calls, using the coordinates where the edges were found, and
translating them into 3D space.
The first test for a 3D drawing of the obtained vertices was simply to draw the
contour of a single slice from the samples. This was done by painting the
points and joining them using GL_LINES, which required each point to be
drawn twice, except for the first and the last one. Once a single slice had
been successfully drawn, a 3D head effect could be obtained by drawing all
of the slices in a data set. This is still not useful for the purpose of the project,
as there are no real surfaces to measure, but it permitted a visual evaluation
of the results of the image processing, edge detection and vertex extraction
stages.
This process was later simplified by using the GL_LINE_STRIP primitive,
which creates a line joining a list of points specified one after another. With it,
the requirement to duplicate vertices was removed, and the necessary code
became more readable.
The model produced is centred in the viewing space by using the dimensions
of the MRI images, and displacing the objects rendered by half of the sizes of
the image in X, Y and Z.
To create a polygon surface for the head and skull, the same idea was used.
A 3D surface was obtained by using the points of two contiguous slices as the
vertices of triangles. The points of the current list and the next one were
passed alternately as vertices to OpenGL, and then drawn using the
GL_TRIANGLE_STRIP primitive. Once again it was necessary to repeat
points; this time the points of each slice are drawn twice, to form the triangles
between the two adjacent slices.
In a triangle strip, only the first triangle defined requires 3 vertices, from then
on, all of the triangles are specified with a single new point, and using the last
two vertices from the previous triangle. OpenGL considers the whole strip as a
single primitive, and thus all the triangles have the same face, regardless of
their winding being in clockwise or counter clockwise direction. Because of the
order in which the vertices were extracted, the strip ended up facing inwards
towards the centre of the head. It was necessary to invert the front face of the
strips to obtain appropriate results. Figure 5-13 shows how a triangle strip is
formed from a list of vertices; the triangles produced are shown as dotted
lines, and the winding of each triangle is shown with an arrow.
Figure 5-13 Triangle strip formed with a list of points
When the number of vertices in a slice is different from those in the next one, it
is necessary to increase the size of the shorter list. The initial approach taken
to do this was to insert extra points into the list, either by computing averages
between two points in the list, or by simply repeating some of them. This
expansion of the lists was done when, during the rendering process, two lists
were found to have different sizes. By using only the two current lists of points,
it was not necessary to add a very large amount of points to the slice lists at
the sides of the head, which had only a few points, while those in the middle of
the head have several nodes. The disadvantage of inserting new averaged
points to a list was that it could produce discontinuities in the polygon surface,
as the newly created points were not employed the next time the slice
information was used to draw the next triangle strip.
The extra memory management for allocation of new structures did not show
any appreciable effect on the rendering while developing the system on Linux,
but did have a big impact on the framerate displayed when running the
program on Windows.
To solve this problem, another approach was taken to make the lists have the
same length. This time the process was done before the actual rendering of
the image. The new technique employed consisted of creating a single new
list by alternately inserting the points from both lists. When one of the lists is
shorter than the other, some of its points are repeated to make the sizes
equal, keeping the number of nodes equivalent for both slices. Having a
single list also solves some of the issues associated with using two lists to
draw a triangle strip.
The policy used to repeat the nodes of the shorter list is based on the
knowledge that the number of points representing the contours is always even,
and the Y coordinates of the series of points remains constant. Because of
this, it is possible to tell when a list does not have a direct correspondence with
the next one. In general, two lists must have two points for each value of Y.
The only places where a list can have points that the next list does not
contain are at the top of the head, which corresponds to the middle of the list,
and at the bottom of the head, which corresponds to the extremes of the list
(Figure 5-14). In these cases, when a point in the longer list does not have a
corresponding point in the other list, the nearest point in the shorter list is
repeated. In the case of Figure 5-14, the points that will be repeated are
marked as A, B, C and D.
Figure 5-14 Diagram of two contiguous slices having different list sizes
The use of GL_QUAD_STRIP was also explored, to increase the
performance of the rendering. The result was effectively faster, but at the
cost of some detail in the 3D model. The use of quads is also limited by the
fact that the points used to create them do not necessarily lie in the same
plane, which would create oddly shaped polygons.
5.4.1 Special Effects
To give a better 3D view to the models generated, lighting can be enabled in
the program. Without lighting simulation, the surfaces generated look entirely
flat and it is difficult to recognise the features of the face. Lighting requires
having a normal vector for every polygon, and this vector will be used to
compute the intensity of the light reflected off that polygon, according to the
angle from which the light is hitting the surface.
The light is set up to be a single source located away from the object, with
white light. There is also an ambient light factor to permit viewing sides of the
object hidden from the light.
The polygon mesh representing the skin is rendered with a transparency
effect, allowing the skull to be seen underneath. The implementation in
OpenGL was done using the GL_BLEND method, and using the parameters
for glBlendFunc: GL_ SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. The
material properties for the skin polygon were set to have an alpha value of
0.5f, making the skin half transparent. All the objects that must be seen
underneath the skin layer must be drawn before the transparent object to be
considered in the computations; otherwise, the skin will obscure them.
5.4.2 Computation of normal vectors
In order to incorporate lighting into the 3D mesh, it is necessary to have a
normal vector for every triangle in the model. The normal vector of a polygon is
a line perpendicular to the plane, and it is generally normalized, meaning that
the length of the vector must be equal to one.
The vector normal to a plane can be found using 3 vertices that lay on that
plane. These points can be used to form two vectors, and their cross product
will produce the vector normal to the plane. After getting the direction of the
normal vector, it can be normalized by dividing each of its components over
the total length of the vector.
Figure 5-15 Calculation of a normal vector [Source: Wright2000]
5.4.3 Display lists
The performance of the program when using very large datasets was very
poor at first. Initially all the commands necessary to create the meshes were
called in real time, and thus, the lists of points had to be traversed each time
the scene was redrawn. The method used to create the models was switched
to use OpenGL's display lists, giving a very notable improvement in the
framerate.
Display lists precompile the commands necessary to define an object in
OpenGL. All the instructions to specify vertices, normals, colours and
textures for an object are given before the actual rendering, and the list is
given a name. When the object must be drawn, a single command calls upon
the generated list to draw the model, without repeating the computations.
Before using display lists, the memory they will use must be reserved, using
the OpenGL function glGenLists(). The actual lists are defined by enclosing
all the necessary OpenGL commands for each object between glNewList()
and glEndList(). One of the parameters of glNewList defines the mode in
which the list will be created; the available modes are GL_COMPILE and
GL_COMPILE_AND_EXECUTE. The first one is used in this project, since
the lists are created before they need to be displayed. Finally, to draw a list,
the command used is glCallList(), which simply outputs the precompiled
object.
5.5 User interface
After all the processing and the generation of the 3D models, the program
must permit a user to select individual points on the surface of the skull to take
measurements.
The program allows the user to rotate and scale the generated model to have
a better view of the details in the face. It also permits several options to
visualize the data in different ways to facilitate the location of the landmark
points. The interaction is done using the mouse or the keyboard.
The interface is implemented independently of the operating system, so the
arrow keys cannot be used because they are handled differently in each
platform. For this reason, the keys used for the rotation are the characters e,
and d to rotate around the X-axis, and s and f to rotate around Y. The
rotation can also be achieved by using the mouse: pressing the left mouse
button while dragging the mouse rotates the scene with respect to the X and
Y-axis. Scaling is handled with the mouse, pressing the right button and
dragging, or using the keys: q and a.
The display of individual layers can be toggled on or off. This permits viewing
only the skull, or only the skin, using the keys i (for the inner layer, or the
skull) and o (for the outer layer, the skin).
Lighting can also be toggled on or off using the key l. The scene can also be
viewed in wireframe mode by pressing m, and the key n toggles the display
of the normal vectors at the vertices.
For picking points on the skull to measure their distance to the skin, it is
necessary to enter a selection mode, which is toggled with p.
5.5.1 Landmark point selection
OpenGL provides a method for interacting with the graphics generated, using
the mouse. This is generally known as picking. It consists of using alternate
rendering methods, and using the space transformation matrices to compute
the location of an object selected using the mouse pointer.
To pick, OpenGL must switch to one of two special render modes. While one
of these modes is selected, nothing is drawn to the screen, but the objects
that would normally be rendered are registered and counted as hits. By
altering the size of the pick matrix, the area to be rendered is limited to the
region near the current position of the mouse. In this way, the objects that
would be rendered under the pointer are counted.
The two available render modes are Feedback and Select:
Select only returns the name given to each of the individual primitives
that were drawn in this mode. The names for the primitives are numeric.
Feedback returns information about the objects being drawn, including
the type of primitive selected, and the vertices that compose it, in
screen coordinates. This is not useful for this project, since the program
is looking for coordinates in 3D space, to compute the real distances
from a point to a plane.
The picking is done using the Select mode. Unfortunately, OpenGL considers
a triangle strip to be a single primitive, and since the entire skull is rendered
using such primitives, it was not possible to correctly identify the point selected
using the precompiled display lists.
It was thus necessary to create another display list, to be used only for picking,
which creates the surface of the skull using individual triangles. When picking,
only the surface of the skull is rendered, without the layer of the skin or the
reference axis.
To stop the picking from returning the indices of triangles located on the
opposite side of the head, the drawing is done with culling enabled. This
presented another problem: drawing the triangles in the same order as the
triangle strip created triangles that alternately faced different sides. To
correct this, the order of the vertices used to generate each triangle is
changed while placing the vertices.
Vertex used for each triangle:

Triangle  V1  V2  V3
    1      1   2   3
    2      3   2   4
    3      3   4   5
    4      5   4   6
    5      5   6   7
    6      7   6   8

Figure 5-16 Order of vertices for triangles used for picking
To alternate the winding of the triangles, the order of the 3 vertices used for
each triangle is changed. When the total number of vertices so far is even,
vertex 1 becomes equal to vertex 3, otherwise, vertex 2 is equal to vertex 3.
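Following the table in Figure 5-16, the alternation can be sketched as below; this is an illustrative reading of the rule, expressed per triangle number rather than per vertex count.

```c
#include <assert.h>

/* Triangle t of the strip uses strip vertices t, t+1 and t+2, with the
 * first two swapped on every even triangle so that all triangles end up
 * with the same winding. */
void picking_triangle(int t, int *v1, int *v2, int *v3)
{
    if (t % 2 == 1) {          /* odd triangle: natural order */
        *v1 = t; *v2 = t + 1;
    } else {                   /* even triangle: swap to fix the winding */
        *v1 = t + 1; *v2 = t;
    }
    *v3 = t + 2;
}
```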
5.5.2 Distance measurement
Ratnam did some research on the algorithms available to compute the
distance from a point to a plane, along a vector, and found the method of fast
minimum storage ray/triangle intersection [Möller1997] to be both fast and
accurate. It consists of a series of progressive tests to discard planes that do
not intersect with a line; when a plane that intersects is finally found, the
location and distance of the intersection are returned.
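A sketch of that test, following the published Möller-Trumbore algorithm; the structure and function names are illustrative, not the project's actual code.

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } V3;

static V3 sub(V3 a, V3 b) { V3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static V3 cross(V3 a, V3 b) {
    V3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 and the hit distance *t when the ray orig + s*dir crosses
 * triangle (v0, v1, v2); each early return discards a non-intersecting
 * case, as the progressive tests described above. */
int ray_triangle(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, double *t)
{
    const double EPS = 1e-9;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;            /* ray parallel to the plane */
    double inv = 1.0 / det;
    V3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return 0;         /* outside the triangle */
    V3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return 0;     /* outside the triangle */
    *t = dot(e2, q) * inv;                    /* distance along the ray */
    return *t >= 0.0;
}
```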
Once a vertex has been selected, the distance from it to the nearest plane in
the skin surface is computed using the fast ray/plane intersection algorithm.
This also employs the previously computed normal at each vertex.
The search for the near planes is restricted to a number of skin slices around
the point located, to stop the algorithm from testing against all of the triangles
that compose the surface.
At the time of writing, the distance obtained is printed into the console from
which the program was run. This only works in Linux, since the Windows
version does not run from a command prompt.
5.6 Configuration file
The whole process has a few parameters that can be controlled by the user.
These parameters are:
Brightness threshold: used for the SUSAN edge detection, controls the number
of edges found, and the tolerance to noise.
Vertex separation: this parameter specifies how many horizontal lines of an
image separate the extracted vertices. Using a smaller number generates a
smoother and more accurate model, but is also more demanding on
computer resources. Using a value of 1 will make the program use the vertices for every
line, making the model as exact as possible. Numbers larger than one will
make the model have fewer vertices, and thus be not as exact.
These parameters, along with information about the input data, are passed to
the program through a configuration file. The values that are stored in the
configuration file must appear in the same order, and some of them are
restricted to a certain type. These are the parameters necessary:
Filename: it includes the path (absolute or relative) to the file, and name
of the input IMG file.
Number of slices: The integer number of slices into which to divide the
data in the file.
HorizontalSize VerticalSize: These appear in the same line, and are
integer values. All the slices are considered to have the same size.
Slice separation factor: This modifies the distance between the slices in
the sample data, and affects how the data is displayed, by altering the
distance in Z between the slices. This is a floating-point number.
Brightness threshold: Value passed to SUSAN, to adjust the minimum
difference in brightness for two points to be considered an edge. This is
an integer number, between 10 and 200.
Distance between the vertices: A positive integer, of at least 1, which
specifies how many lines in the image separate the edges taken as
vertices for the 3D model. It affects how detailed the generated
model is. A smaller number will produce a smoother model, but is also
more demanding on computer resources. A greater number yields a
blockier model, which is easier to handle.
Example configuration file:
../../Data/Romanowski1.img
160
256 256
1.7
120
5
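A reader for this fixed-order file could be sketched as follows; the struct and field names are assumptions for illustration, not the project's actual code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* One field per configuration value, in the order described above. */
typedef struct {
    char  filename[256];
    int   slices;
    int   width, height;
    float slice_sep;
    int   brightness_threshold;
    int   vertex_separation;
} Config;

/* Returns 1 on success, 0 when any field is missing or malformed. */
int read_config(FILE *f, Config *c)
{
    if (fscanf(f, "%255s", c->filename) != 1) return 0;
    if (fscanf(f, "%d", &c->slices) != 1) return 0;
    if (fscanf(f, "%d %d", &c->width, &c->height) != 2) return 0;
    if (fscanf(f, "%f", &c->slice_sep) != 1) return 0;
    if (fscanf(f, "%d", &c->brightness_threshold) != 1) return 0;
    if (fscanf(f, "%d", &c->vertex_separation) != 1) return 0;
    return 1;
}
```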
The location and name of the configuration file must be supplied as an
argument to the program in Linux. In Windows, the program reads a file called
default.cfg that must be located in the Config directory at the same level as
the executable program.
6 Results
The program obtained as a result of this project, although not entirely ready
for use in acquiring a new set of measures for the landmark points, shows
how the proposed techniques can be used to build an automated system that
extracts only the required information from MRI data for use in forensic
science.
The main parts where this project has achieved important results are the edge
detection and data extraction for the MRI images, the rendering of the 3D
model and the measuring of tissue depths.
6.1 Evaluation of image processing methods
Sets of tests were performed to evaluate the tolerance of the image processing
algorithms to noise on the source images. The results of using different
parameters for the Brightness Threshold with the SUSAN program were
compared to determine which would be more suitable for varying levels of
input noise.
Sample images with added noise were produced, as described in Section
5.2.3. These images were processed with the SUSAN algorithms using
different parameters, and the resulting images were compared with the base
images used to create the samples. In this way, it was possible to determine
how closely the edges obtained resembled the desired figures.
To compare the results of varying noise levels and thresholds, each image
was run through the process. The resulting PGM files were compared pixel by
pixel against a base image created with only the edges, and which was used
to create the samples with noise.
For each image, the number of pixels differing from the base case was
counted and converted into a percentage of the image that differed from the
original: the number of differing pixels divided by the total number of
pixels in the image.
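The comparison described above can be sketched as follows. This is a simplified stand-in for the comparator program, assuming both images are already loaded as binary edge maps of equal size (nonzero = edge):

```c
#include <stddef.h>

/* Counts false positives (edge in the result but not in the base) and
   false negatives (edge in the base but missing from the result), and
   returns the total difference as a percentage of the image, as in
   Tables 6-1 to 6-3.  A sketch, not the original comparator program. */
double compare_images(const unsigned char *base, const unsigned char *result,
                      size_t n_pixels, size_t *false_pos, size_t *false_neg)
{
    size_t fp = 0, fn = 0;
    for (size_t i = 0; i < n_pixels; i++) {
        int b = base[i] != 0, r = result[i] != 0;
        if (r && !b)      fp++;   /* spurious edge */
        else if (b && !r) fn++;   /* missed edge   */
    }
    *false_pos = fp;
    *false_neg = fn;
    return 100.0 * (double)(fp + fn) / (double)n_pixels;
}
```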
The differences are shown in three tables: Table 6-1 gives the total
percentage of differing pixels between the base-case image and those
obtained using edge detection, Table 6-2 shows only the False Positive
edges found, and Table 6-3 presents the False Negatives. In each table
there is a diagonal band containing the lowest differences. This is because
a given threshold performs well on an image with an equivalent level of
noise, but may not give good results when the image contains less noise, or
when the noise is greater than the threshold can tolerate.
Table 6-1 Different pixels. Testing with the square sample
Total Difference
Threshold Noise Factors
0 50 100 150 200 250 SUM AVERAGE
20 0.39% 1.73% 1.73% 1.73% 1.73% 1.73% 9.04% 1.51%
70 0.39% 0.39% 1.73% 1.73% 1.73% 1.73% 7.70% 1.28%
120 0.39% 0.39% 0.47% 1.73% 1.73% 1.73% 6.44% 1.07%
170 0.39% 0.43% 0.66% 0.79% 1.73% 1.73% 5.73% 0.96%
220 0.39% 0.81% 0.80% 0.79% 0.82% 1.73% 5.34% 0.89%
SUM 1.95% 3.75% 5.39% 6.77% 7.74% 8.65%
AVERAGE 0.39% 0.75% 1.08% 1.35% 1.55% 1.73%
Table 6-2 False positive edges
False Positives
Threshold Noise Factors
0 50 100 150 200 250 SUM AVERAGE
20 0.01% 0.98% 0.98% 0.98% 0.98% 0.98% 4.91% 0.82%
70 0.01% 0.01% 0.98% 0.98% 0.98% 0.98% 3.94% 0.66%
120 0.01% 0.01% 0.05% 0.98% 0.98% 0.98% 3.01% 0.50%
170 0.01% 0.03% 0.13% 0.16% 0.98% 0.98% 2.29% 0.38%
220 0.01% 0.15% 0.08% 0.06% 0.09% 0.98% 1.37% 0.23%
SUM 0.05% 1.18% 2.22% 3.16% 4.01% 4.90%
AVERAGE 0.01% 0.24% 0.44% 0.63% 0.80% 0.98%
Table 6-3 False negative edges
False Negatives
Threshold Noise Factors
0 50 100 150 200 250 SUM AVERAGE
20 0.38% 0.75% 0.75% 0.75% 0.75% 0.75% 4.13% 0.69%
70 0.38% 0.38% 0.75% 0.75% 0.75% 0.75% 3.76% 0.63%
120 0.38% 0.38% 0.42% 0.75% 0.75% 0.75% 3.43% 0.57%
170 0.38% 0.40% 0.53% 0.62% 0.75% 0.75% 3.43% 0.57%
220 0.38% 0.66% 0.72% 0.73% 0.72% 0.75% 3.96% 0.66%
SUM 1.90% 2.57% 3.17% 3.60% 3.72% 3.75%
AVERAGE 0.38% 0.51% 0.63% 0.72% 0.74% 0.75%
In these tests, the number of false negatives is always greater than
0.38%, because the tests were done using the square sample images (Figure 5-3,
Figure 5-4). The base image used for comparison contains the entire border of
both squares, including the sides, top and bottom. The algorithm, however,
finds only two points for the top and bottom rows, since joining the points
on the edges at the top renders the rest of the surface. This is not a problem
when testing with the diamond or round samples.
Table 6-4 shows the images produced by running the SUSAN algorithm on
the noisy versions of the stylised head image. The sample images are
processed using different brightness thresholds to detect the edges. The
images in the table are arranged with increasing noise from left to right. From
top to bottom, the threshold used for the edge detection is increased.
Table 6-4 Edge detection on noisy data. X-axis = noise factor, Y-axis = brightness threshold
[Image grid not reproduced. Columns: noise factors 0, 50, 100, 150, 200, 250. Rows: brightness thresholds 20, 70, 120, 170, 220.]
In these images it is also possible to observe the diagonal effect noted
above: a brightness threshold matching the amount of noise produces the best
edge detection results, but may not perform as well on images with much more
or much less noise.
The results of edge detection over the noisy data show that a low brightness
threshold produces images with very clearly defined borders, but detects a
large number of unnecessary edges and is very susceptible to noise in the
input image, as the algorithm identifies edges all over the image. These
incorrect findings are called false positives.
Having a high threshold provides a far greater tolerance to noise. The
drawback of a very high threshold is that some of the real edges are ignored,
creating false negatives.
In the experiments performed, a given threshold tolerated noise up to a
noise factor below that threshold. Thus, a brightness threshold of 200
presents a low number of false positives as long as the image contains noise
generated with a noise factor of less than 200.
Some tests were performed to analyse the noisy data with a threshold of 250,
but in these cases the SUSAN program would fail with a floating point
exception when the amount of noise was low. The program was unable to find
any edges in the sample images with noise level lower than 150.
The program allows the threshold to be specified by the user, since at this
stage it is not possible to make the program automatically determine which is
the best threshold to use according to the quality of the images being
analysed.
6.2 Information extraction
Images were generated during each stage of the process, to provide a visual
representation of the data at each point. The images in Table 6-5 show the
progression from the original MRI image to the 3D model obtained of the
contour of the head.
The first image is a slice extracted from the MRI scan, without any alterations.
The second has been adjusted by the use of bi-level thresholding, enhancing
the contrast of different areas, and thus making it easier to identify the skin and
the bone. The third image has been processed with the SUSAN algorithm, to
locate the edges where the brightness of the image changes from one level to
another. The next image is obtained after eliminating all of the internal edges,
and keeping only the most external ones. This provides the vertices to be used
for the 3D model. The fifth image is the actual drawing in OpenGL of the
borders in the slice, by joining the vertices with lines. Finally, a 3D model is
shown; the triangle strips were created using 9 contiguous slices.
These images show how the information contained in the MRI slices was
gradually filtered to extract only the location of the two outer layers on the
head, and from them produce a set of points to represent the head and the
skull in a 3D environment.
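The second stage, bi-level thresholding, can be sketched as below. The exact mapping the project used is not documented in this section, so the binary 0/255 output around a single threshold is an assumption:

```c
#include <stddef.h>

/* Minimal sketch of bi-level thresholding: every pixel is forced to one
   of two levels around a threshold, which sharpens the boundary between
   tissue types and background.  The single-threshold binary mapping is
   an assumption, not the project's documented implementation. */
void bilevel_threshold(unsigned char *pixels, size_t n, unsigned char threshold)
{
    for (size_t i = 0; i < n; i++)
        pixels[i] = (pixels[i] >= threshold) ? 255 : 0;
}
```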
Table 6-5 Images obtained after each stage of the processing
1. Original MRI slice
2. Bi-level Thresholding
3. Edge Detection
4. Vertex Extraction
5. Drawing of Vertices as a sequence
of points
6. 3D model generated using 9 slices
from the head
6.3 Rendering
Rendering is done by drawing the previously obtained vertices in 3D space,
and then joining them to form triangles. Each vertex is placed by using the
coordinates of the point within its slice as the X and Y coordinates in 3D
space, while the Z coordinate is the slice number multiplied by the slice
separation factor read from the configuration file.
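The vertex placement described above amounts to a one-line computation per point; a minimal sketch with hypothetical names:

```c
typedef struct { double x, y, z; } Vec3;

/* Places a 2D point from slice `slice_index` into 3D space: the
   in-slice coordinates become X and Y, and Z is the slice number
   scaled by the slice separation factor from the configuration file. */
Vec3 slice_to_world(int px, int py, int slice_index, double slice_separation)
{
    Vec3 v;
    v.x = (double)px;
    v.y = (double)py;
    v.z = (double)slice_index * slice_separation;
    return v;
}
```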
Figure 6-1 3D head generated using a sample of 109 slices
The rendering incorporates lighting simulation and transparency to make the
facial features easier to distinguish. The amount of detail on the model
depends on the source data used.
There are some artefacts that appear where the exact location of vertices was
not found correctly. In these locations, the surface obtained is deformed, and
can produce incorrect measurements later. This problem is most apparent at
the bottom part of the model, since the MRI images used were very blurry
below the nose level.
6.4 Distance Measuring
When a point is selected in the 3D interface using the picking option, the
distance from it to the nearest triangle of the outer layer is computed,
following the normal vector at the selected point on the skull layer. A line
is drawn from the selected point to the point of intersection, to show the
distance being measured. Figure 6-2 and Figure 6-3 show the application being
used to measure the depth on the test images of the squares and the diamonds.
The distances found are stored in the file select.txt under the Text folder.
The results of the measures taken in this case look like this:
Squares test:
Got 3 hits
Found a distance of: 50.00
Diamonds test:
Got 5 hits
Found a distance of: 16.97
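Finding where the normal ray hits a triangle of the outer layer can be done with the Möller-Trumbore algorithm cited in the references. The following is a self-contained sketch of that algorithm, not the project's actual code:

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)   { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}

/* Möller-Trumbore ray/triangle intersection.  Returns 1 and writes the
   distance `t` along the ray if the ray from `orig` in direction `dir`
   hits triangle (v0, v1, v2) in front of the origin. */
int ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double *t)
{
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;          /* ray parallel to triangle */
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;             /* first barycentric coord  */
    if (u < 0.0 || u > 1.0) return 0;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;           /* second barycentric coord */
    if (v < 0.0 || u + v > 1.0) return 0;
    *t = dot(e2, q) * inv;                  /* distance along the ray   */
    return *t > EPS;
}
```

A ray cast along a layer normal toward a parallel triangle 50 units away yields t = 50, matching the kind of result reported for the squares test above.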
Figure 6-2 Side view of the measuring vector
Figure 6-3 Measuring on the diamonds sample
To test the correctness of the measurements obtained from the program, the
results were compared to the original images used to create the samples. The
distance between the two edges in the base-case images (Figure 5-3, Figure
5-5) was measured using the Gimp graphic editor (Figure 6-4, Figure 6-5) at
points in different locations. The 3D interface was then used to obtain the
tissue depths on the generated models, and the results were compared to the
real distances. The comparison showed that very precise measurements could
be obtained from the program.
Figure 6-4 Measuring of the distance between
borders (squares sample)
Figure 6-5 Measuring of the distance between borders
(diamonds sample)
The samples used for testing had the drawback of having identical images for
all the slices in the 3D volume. Because of this, the nearest triangle intersected
on the skin layer would always lie on the same slice as the point on the inner
layer. It was not possible to test for the correct measurement when the normal
vector to a point intersects the skin at another slice of the head, since such
measures could not be taken directly from the input data.
The final test of comparing the results against the traditional dataset of
tissue depths could not be carried out, because the limited number of test
samples does not permit computing a reliable average of the results that
could be accurately compared.
6.5 Speed Improvements
During the development of the project, several changes were made to the
whole process to make the rendering faster and more usable. Some of these
improvements involved extracting the image information in a particular form,
or pre-processing the vertex information to avoid extra computations during
rendering.
Memory management during the rendering process caused a very noticeable
drop in the framerate, especially when running the program on Windows. The
initial versions of the program created new linked lists as they were
required during rendering, resulting in memory being allocated for very large
lists in real time. Reordering all the lists before entering the display
phase solved this problem. The lists for two contiguous slices were merged
into one that is used to draw a complete triangle strip. This required
forcing the two lists to be the same size by repeating some of the points in
the shorter list.
Generating this merged list prior to the actual rendering had two
advantages: it freed the computer's processor to handle the 3D graphics
without having to perform memory allocation, and it removed the need to
traverse the lists of two contiguous slices simultaneously, since the points
of both slices were now included in a single list.
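The merge of two contiguous slice lists into triangle-strip order can be sketched as follows, using arrays in place of the linked lists. Repeating the last point of the shorter list is one possible padding strategy, assumed here for illustration (the text only says some points are repeated):

```c
#include <stddef.h>

typedef struct { double x, y, z; } Vec3;

/* Interleaves the vertex lists of two contiguous slices into the order a
   single triangle strip expects (a0, b0, a1, b1, ...).  If one list is
   shorter, its last point is repeated.  `out` must hold 2*max(na, nb)
   points; the number of points written is returned.  A simplified
   sketch of the pre-rendering merge, done once before display. */
size_t merge_for_strip(const Vec3 *a, size_t na,
                       const Vec3 *b, size_t nb, Vec3 *out)
{
    size_t n = (na > nb) ? na : nb;
    for (size_t i = 0; i < n; i++) {
        out[2*i]     = a[i < na ? i : na - 1];  /* repeat last if short */
        out[2*i + 1] = b[i < nb ? i : nb - 1];
    }
    return 2 * n;
}
```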
Another significant improvement was achieved by using OpenGL's display
lists. These allow all the commands that produce an object to be specified
before it is drawn. With this approach it was no longer necessary to traverse
the lists during rendering every time the screen was refreshed; the lists are
now processed only once, before rendering. The only noticeable drawback is
that the generated model takes slightly longer to appear.
6.6 Further work
6.6.1 Interface
The user interface for the program is not very friendly at this point, and
there is a lot of room for improvement in this area. Some of the features
that could be added to the program are:
- An open dialog to load other files without terminating the program.
- Visual display of the toggle switches and options available in the
  program.
- Graphical display of the distances computed.
- Precise identification of each point selected, possibly by naming or
  numbering them.
- Storage of the distances obtained in a text file, with a name given by the
  user.
6.6.2 Data filtering
Currently, the various image-processing steps depend on threshold values
that determine whether the information is useful. Variations in these thresholds
can produce very different results, also depending on the quality of the source
data. Further tests should be done to identify the optimal values to use, and
determine possible scenarios when a set of values should be used, according
to the input data.
Another approach would be to make the program able to adjust itself upon
analysis of the MRI data, so that it can automatically use the most adequate
thresholds for the current sample. This could incorporate some learning
process into the system. The drawback for this is that the training of a learning
system requires a lot of samples, and there are not many available for the time
being.
The surfaces generated present many undesired peaks and imperfections,
mainly because of noise in the images. To make the program more tolerant to
noise, and produce smoother surfaces, it should be possible to average the
coordinates of points in an area, considering the points on nearby layers.
The process could be similar to the one used for image smoothing, but using
points in 3D space instead of pixels. A mask spanning several vertices could
be passed over the whole generated surface to locate the vertices whose
coordinates differ too much from those around them, and assign those
vertices a more adequate average value. Care must be taken when using this
technique to avoid losing detail on the surfaces.
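The smoothing idea above can be illustrated with a one-dimensional version of the mask, averaging each coordinate with its immediate neighbours around a closed ring of points; a real implementation would span several vertices and adjacent slices:

```c
#include <stddef.h>

/* Replaces each value in a closed ring with the average of itself and
   its two neighbours, flattening isolated spikes.  A 1D sketch of the
   3D vertex-smoothing mask proposed in the text; only one coordinate
   of the vertices is smoothed here. */
void smooth_ring(const double *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        size_t prev = (i + n - 1) % n;   /* wrap around the ring */
        size_t next = (i + 1) % n;
        out[i] = (in[prev] + in[i] + in[next]) / 3.0;
    }
}
```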
6.6.3 Reconstruction of new faces
To further increase the use of computer techniques for facial reconstruction,
the same techniques used in this project to display the MRI information could
also be employed to create reconstructions of new faces. Having a scanned
skull, and the information about its vertices and the normal vectors associated
to them, it should be possible to generate a new skin surface by adding the
tissue depths to the vertices of the skull, in the direction of their normals. This
would create something very similar to the dowels used in traditional
reconstruction, and from these, the values in other areas of the face could be
interpolated.
6.6.4 Alternate display of tissue depth measurements
James Edge proposed an alternative visualization for the tissue depth
distances, in a personal communication in August 2003. The idea is to place
the 3D model inside a cylinder, aligned with it, and then trace rays from the
centre of the cylinder in every direction. These rays perform two tasks:
first, find the tissue depth of the skin by computing the distance between
the point where the ray intersects the skull layer and the point where it
intersects the outer layer. Then, at the
point where the ray intersects the outlying cylinder, the thickness found would
be assigned as the value for a texture on the cylinder.
After doing this for every point, the cylinder can be unrolled, producing a plane,
and the texture on this plane would be an image composed of the tissue
depths at every vertex, stored as grey levels, where the greatest distance
would be encoded as white, and a distance of zero would be represented as a
black point. The grey levels for the locations between the sample points would
be interpolated between the known values. This might be useful in that it would
allow a quick and easy location of the points of interest, while still being able to
see the tissue depths of the entire face.
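The grey-level encoding described here is a linear mapping from depth to pixel value; a minimal sketch, where the maximum depth in the data is passed in by the caller:

```c
/* Encodes a tissue depth as a grey level for the unrolled-cylinder
   texture: zero depth maps to black (0), the greatest depth in the
   data maps to white (255), and values in between are scaled linearly
   (rounded to the nearest level). */
unsigned char depth_to_grey(double depth, double max_depth)
{
    if (max_depth <= 0.0 || depth <= 0.0) return 0;
    if (depth >= max_depth) return 255;
    return (unsigned char)(255.0 * depth / max_depth + 0.5);
}
```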
7 Conclusions
7.1 Application of computer science to reconstruction
There has already been a lot of research related to the reconstruction of faces
using computer graphics. Some of the previous projects have obtained very
good results in modelling faces that resemble those of the intended subjects.
Most of the research has been done on how to create the faces using the
already existing tissue depth data, but there are not a lot of projects trying to
obtain new information using more modern methods.
The previous work by Ratnam was expanded by testing the results obtained,
and by determining how different parameters given to the program can
accommodate imperfect data. The system was also built so that more sample
data can be analysed without altering the source program.
Something new in this project with respect to Ratnam's dissertation was the
use and testing of edge detection algorithms to find the surfaces of the skin
and bone. Several different techniques were evaluated to correctly extract
the required information. The best results were obtained by applying bi-level
thresholding to the original images, and then having the edge detection find
the borders.
Unlike Ratnam's approach, the process followed to produce the polygon mesh
did not employ the Marching Cubes algorithm, which takes longer to extract a
surface and produces a very large number of triangles. Obtaining the
locations of the vertices directly from the images was faster and produced
very similar results, while also permitting a fluid user interface.
7.2 User interface
Using several features of OpenGL, such as triangle strips, back-face
culling and display lists, permits the system to display the models at a
usable framerate, making them easier to navigate even when the models are
very complex.
The program created has a practical interface for the measurement of tissue
depths. The user interface for selection of points is friendly and permits the
location of the landmark points directly over the 3D skull generated, and then
automatically computes the distance normal to the point on the skull.
7.3 Image processing
The techniques used to extract information from the MRI images proved
useful, and show good potential for consistently extracting the locations of
the desired tissues, with some tolerance to noise.
While finding the surface of the skin presented few problems, several
shortcomings were found when extracting the location of the skull. This is due
in part to the nature of the data used as input. The MRI images do not present
bone tissue in a way clearly distinguishable from other tissues or void spaces
in the head. This is especially a problem in some areas of the face, such as
the eyes, the nose and the mouth. These areas are of great relevance when
identifying a person, but also present the most difficulties when measuring.
This same problem has been present since the beginnings of facial
reconstruction, and makes it necessary to guess the real shape of these
important areas of the face.
Another cause for incorrect identification of the skull was that most of the
sample data available had a very poor resolution for the purposes required,
and also contained large amounts of noise. It was also evident that the sample
MRI scans were obtained with a focus on the human brain, and thus had a
much greater detail level in the upper part of the head, while the rest of the
face, and particularly the area below the nose, were not captured with enough
contrast or appeared blurred.
Access to MRI samples is also an issue that should be considered: it was
limited because of legal issues concerning the distribution of patients'
information. To accomplish the objective of updating the tissue depth
measurements, it will be necessary to have a large number of MRI scans
specifically taken with this purpose.
Nevertheless, the experiments show that very usable results can be obtained
from sufficiently clear source images. Varying the thresholds used during the
filtering of information can also help in finding the skull under different
circumstances of noise or ambiguous data.
The program is not yet ready for practical use. A number of issues must
still be considered before the proposed techniques can be applied in a
real-world setting, but the results obtained show that, with further
research, they could prove very useful and reliable.
The use of computer technology and graphics for reconstruction can still be
broadened considerably. There are several possible applications of computers
throughout the process of reconstruction and identification, from the
acquisition of more accurate tissue depths, to the actual reconstruction of
the faces, and tools to edit the faces obtained: for example, a system that
could simulate the loss or gain of weight, or the aging process. All of
these objectives invite ongoing research in this area.
References
Archer1997 Archer, Katrina. (1997) Craniofacial Reconstruction Using
Hierarchical B-Spline Interpolation. Master of Applied Science,
University of British Columbia.
Attardi1999 G. Attardi, M. Betrò, M. Forte, R. Gori, A. Guidazzoli, S.
Imboden and F. Mallegni. (1999) 3D facial reconstruction and
visualization of ancient Egyptian mummies using spiral CT data.
Soft tissues reconstruction and textures application
Available [24/08/2003]:
http://medialab.di.unipi.it/Project/Mummia/SIGGRAPH99/
Bourke1997 Bourke, Paul. (1997). PPM / PGM / PBM image files.
Available [24/08/2003]:
http://astronomy.swin.edu.au/~pbourke/dataformats/ppm/
Bullock1999 Bullock, David. (1999) Computer Assisted 3D Craniofacial
Reconstruction. Master of Science, University of British
Columbia.
Cairns2000 Cairns, Matthew. (2000) An Investigation into the use of 3D
Computer Graphics for Forensic Facial Reconstruction. First
Year Report. University of Glasgow.
Available [24/08/2003]:
http://www.dcs.gla.ac.uk/~mc/1stYearReport/Contents.htm
Devine2003 Devine, Christophe (2003). OpenGL : Source Code
Available [24/08/2003]: http://www.cr0.net:8040/code/opengl/
Evison2000 Evison, Martin. (2000) Modelling Age, Obesity, and Ethnicity in a
Computerized 3-D Facial Reconstruction. Forensic Science
Communications. April 2001, Volume 3, Number 2.
Available [24/08/2003]:
http://www.fbi.gov/hq/lab/fsc/backissu/april2001/evison.htm
Hornak2002 Hornak, Joseph P. (2002) The basics of MRI.
Available [24/08/2003]:
http://www.cis.rit.edu/htbooks/mri/inside.htm
Jones2001 Jones, Mark. (2001) Facial Reconstruction Using Volumetric
Data. VMV. Stuttgart, Germany. November 21-23, 2001.
Available [24/08/2003]:
http://www.cs.swan.ac.uk/~csmark/PDFS/vmv01.pdf
Macleod2000 R.I. Macleod, A.R. Wright, J. McDonald and K. Eremin. (2000)
Historical Review, Mummy 1911-210-1. J. R. Coll. Surg. Edinb., 45,
April 2000, 85-92.
Available [24/08/2003]:
http://www.rcsed.ac.uk/Journal/vol45_2/4520005.htm
Möller1997 Möller, Tomas and Trumbore, Ben (1997). Fast, minimum
storage ray-triangle intersection. Journal of graphics tools,
2(1):21-28, 1997
Available [24/08/2003]:
http://www.acm.org/jgt/papers/MollerTrumbore97/
Prag1997 Prag, John and Neave, Richard. (1997) Making Faces. British
Museum Press.
Ratnam1999 Ratnam, Jonathan. (1999) Magnetic Resonance Biometry. MSc
Software Systems Technology, University of Sheffield.
Smith1995 Smith, Stephen M. (1995) SUSAN Low Level Image Processing
Available [24/08/2003]:
http://www.fmrib.ox.ac.uk/~steve/susan/susan/susan.html
Sonka1999 Sonka, Milan. (1999) Image processing, analysis, and machine
vision (2nd edition), London: PWS Publishing.
Summit1995 Summit, Steve (1995). comp.lang.c Frequently Asked
Questions.
Available [24/08/2003]: http://www.eskimo.com/~scs/C-
faq/faq.html
Vanezis2000 Vanezis, P., Vanezis, M., McCombe, G. and Niblett, T., (2000)
Facial reconstruction using 3-D computer graphics. Forensic Sci.
Int. 108, pp. 81-95
Watt1992 Watt, Alan and Watt, Mark (1992). Advanced Animation and
Rendering Techniques. Great Britain: Addison-Wesley
Publishing Company.
Watt2000 Watt, Alan (2000) 3D Computer Graphics (3rd edition), Essex:
Addison-Wesley
Wright2000 Wright, Richard and Sweet, Michael (2000). OpenGL Super
Bible. Second Edition, USA: Waite Group Press.
WWW1 Radiological Society of North America, Inc. (2003)
RadiologyInfo.
Available [24/08/2003]:
http://www.radiologyresource.org/content/menu-
central_nerve.htm
WWW2 Castle Island Co. (2002) Stereolithography.
Available [24/08/2003]:
http://home.att.net/~castleisland/sla_int.htm
WWW3 Malin, Gay. Facial Reconstruction.
Available [24/08/2003]: http://users.wsg.net/sculpture/facial.html
WWW4 DICOM Homepage. National Electrical Manufacturers
Association.
Available [24/08/2003]: http://medical.nema.org/
WWW5 OpenGL Architecture Review Board (1992). OpenGL Reference
Manual -- The Official Reference Document for OpenGL,
Release 1. Addison-Wesley
Available [24/08/2003]:
http://www.cs.rit.edu/usr/local/pub/wrc/graphics/doc/opengl/book
s/blue/
WWW6 NeHe Productions.
Available [24/08/2003]: http://nehe.gamedev.net/
WWW7 Ultimate Game Programming, OpenGL tutorials
Available [24/08/2003]:
http://www.ultimategameprogramming.com/OpenGLPage1.htm
WWW8 Volume Visualization Data Sets
Available [24/08/2003]:
http://www.siggraph.org/education/materials/vol-
viz/volume_visualization_data_sets.htm
Progress history
30/06/2003
----------
- Opened files from 109-slice sample.
- Split the original file into individual images in PGM format.
- Run edge detection on images using the SUSAN program.
- Images produced are not useful because of low resolution, which gives a very
  imprecise reading of the tissue depth.
09/07/2003
----------
- Compile and run DICOM Viewer.
- Analysis of the program to be able to produce individual PGM images.
- Obtained raw PGM images from the DICOM files. Output of the pixel data as
  soon as it is obtained from the DICOM format.
- Run "susan" edge detector over PGM files, with better results.
- Data available is not very good. The images provided by James Edge are not
  enough, and present the same problem as the previous data set: the lower part
  of the images is blurry, and will not allow appropriate edge detection.
15/07/2003
----------
- Generation of test images to assess the performance of the edge detection
  algorithm as the best possible case. Also useful to generate a basic 3D model.
- Generation of images with noise, to be used for edge detection. Random noise
  was added to the images, using different maximum deviations for the random
  numbers. The deviation is incremented by 50, so there are 6 levels of noise:
  0, 50, 100, 150, 200, 250. The noise is up to the maximum value allowed in the
  pgm images (255) and is rounded to either 0 or 255 when the boundaries are
  exceeded.
- Performance tests of the SUSAN program. Best performance obtained with the
  option: -t 60/80
- Bash shell program used to process multiple files at once.
- Increasing the threshold makes the program more tolerant to noise, but also
  reduces the performance when little noise is present.
- Having a low threshold, edges are more clearly detected when there is little
  noise, but as the level of noise increases, it becomes much harder to
  distinguish any features.
- A high threshold will allow noise tolerance up to a noise level less than the
  threshold used for the edge detection. In this way, it is possible to use a high
  threshold to eliminate noise when the noise factor is known.
- The program will not run with a threshold of 250, producing a "floating point
  exception" when analysing the images with a noise level lower than 150. The
  results obtained show that edges are almost unidentifiable with high noise
  levels. The results are thus useless when using this threshold.
- When comparing the files produced using the same threshold, the amount of
  error seems to be less for images of greater threshold, regardless of the
  amount of noise in them. This can be explained because the base case for
  comparison is the image with 0 noise, but obtained with the same threshold.
  This gives a base case that does not have a very large number of points in
  the edges, and the consecutive images also have very little effect due to the
  noise. Still remains to be proven if the edges detected at high thresholds can
  be useful for the generation of the 3D model.
16/07/2003
----------
- Creation of MRI images with noise, based on the samples from James Edge.
- Run the comparator program to test against the base case of the edge detection
  over the image with no noise, and threshold of 20.
- As the threshold increases, the error with respect to the base case
  decreases, mainly for the images with high noise. The error increases slightly
  for the images with little noise.
- The MRI images do not allow for good testing, due to the lack of detail from
  the nose down.
17/07/2003
----------
- Obtain results for the false positives and false negatives when running the
  edge detection algorithms on images with noise.
18/07/2003
----------
- Program to obtain the locations of the outermost edges, out of the images that
  have already been through the susan algorithm.
- Good results are obtained when using images with noise up to a factor of 100,
  analysed with a threshold of 120.
21/07/2003
----------
- Compute differences from the images obtained by the vertexFinder program.
- The resulting images have only the rim of the outermost edges.
- Getting this difference, it is possible to tell which of the thresholds is more
  tolerant to noise, while still permitting to find the important edges.
- The results show that the threshold of 120 has the least average error, both
  for false positives and false negatives.
22/07/2003
----------
- Obtained vertices from the borders of the images.
- Display vertices as points in OpenGL, with the PointDrawer program.
- Rewriting of the whole process, to make it simpler and straightforward.
- Extracted images from the Romanowski1 file. The images are very noisy, and
  not usable in their current state.
23/07/2003
----------
- Generated new program to view the vertices generated by the image processing.
- The new program uses OpenGL and is capable of running in both Linux and
  Windows.
- The program reads the points from a text file containing only the coordinates
  of the vertices. One point for every row.
- ### NOTE ### Currently, the vertexFinder puts the edges found in an array of
  size 4, because of the 4 edges sought. The order is tricky: because of
  the way vertices are searched, the order of the vertices is: 1 2 4 3
       / /      \ \
      | |        | |
      0 1        3 2
      | |        | |
       \ \      / /
- Points will be stored in 2 linked lists. One is for the inner edge, and the
  other for the outer. The lists will contain the points in order, from the
  bottom/front of the head to the bottom/back of the head. This is done by
  inserting items at the front or the back of the list, according to the side
  of the head they are found at. This will allow the points to be easily
  drawn in OpenGL using GL_LINES. Will have to duplicate the points when
  drawing.
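The double-ended insertion described above can be sketched in C as follows. The struct and function names are illustrative, not the project's actual code; the point is that pushing left-side points at the front and right-side points at the back yields the contour in order from bottom/front to bottom/back.

```c
#include <assert.h>
#include <stdlib.h>

/* Singly linked list with head and tail pointers, so points can be
 * appended at either end in O(1). Names are illustrative. */
typedef struct Node { int x, y; struct Node *next; } Node;
typedef struct { Node *head, *tail; } PointList;

void push_front(PointList *l, int x, int y)
{
    Node *n = malloc(sizeof *n);
    n->x = x; n->y = y;
    n->next = l->head;
    l->head = n;
    if (!l->tail) l->tail = n;      /* first node is also the tail */
}

void push_back(PointList *l, int x, int y)
{
    Node *n = malloc(sizeof *n);
    n->x = x; n->y = y;
    n->next = NULL;
    if (l->tail) l->tail->next = n;
    else l->head = n;               /* first node is also the head */
    l->tail = n;
}
```

Traversing head-to-tail then gives the vertices in the order needed for GL_LINES.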
- Program to convert PGM files into sample IMG files.
- Integration of all the programs so far into a single process that goes
  through the edge detection, vertex location, and rendering of a single slice.
24/07/2003
----------
- Corrected the array of vertices, to store the vertices in the same index
  according to whether they are inner or outer. This also fixes a bug when
  only 2 vertices are found in a row. The new order is:
       / /      \ \
      | |        | |
      0 1        2 3
      | |        | |
       \ \      / /
- Fixed bug that made the inner border equal to the outer border when there
  were only 2 edges in the row. This happened at the top and the bottom of the
  sample images: diamond, cell.
- Created a single list containing all the slices. For each slice, there is a
  node on this list, holding the lists for the outer and inner edges.
- All the programs in GLIntegral now use this new list.
25/07/2003
----------
- Fixed rendering, to make it able to draw all the slices in an IMG file at
  once. This produces a pseudo-3D version of the whole head.
- Fixed the vertexFinder function, to make it simpler and more understandable.
- Research on how to equalize the images before processing, to make them easier
  to analyse. Using histogram equalization, based on the Sonka1998 reference.
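Histogram equalization as described in Sonka et al. can be sketched for an 8-bit grayscale image as below. This is the standard cumulative-histogram mapping; the function name and in-place interface are illustrative, not the project's actual API.

```c
#include <assert.h>

/* Remap each pixel through the normalized cumulative histogram so the
 * output brightness levels are spread over the full 0..255 range. */
void equalize(unsigned char *img, int n)
{
    int hist[256] = {0};
    int cdf[256];
    int i, cum = 0;

    for (i = 0; i < n; i++)         /* build the brightness histogram */
        hist[img[i]]++;
    for (i = 0; i < 256; i++) {     /* cumulative distribution */
        cum += hist[i];
        cdf[i] = cum;
    }
    for (i = 0; i < n; i++)         /* remap every pixel */
        img[i] = (unsigned char)((cdf[img[i]] * 255L) / n);
}
```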
- Changed format of the configuration files for the input data. Eliminated the
  requirement for the maximum value, as it is difficult to know beforehand, and
  it is no longer necessary after the implementation of the normalization
  functions.
28/07/2003
----------
- Tried using smoothing before the edge detection, using thresholds 24, 9, 2, 1.
  The results are not favourable, as the output image is more distorted than the
  normal version.
- Compiled the GLIntegral program in Windows.
- Reran the tests for error in the vertexFinder, comparing with the simple.pgm
  file, and using the new thresholds 100, 110, 120, 130, 140, 150.
29/07/2003
----------
- Drawing of a surface, by drawing the vertices of 2 slices using
  GL_TRIANGLE_STRIP.
- Created function to add points to the list of a slice, so that lists of 2
  contiguous slices can be drawn together, having the same number of points.
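Once two contiguous slices have the same number of points, building the strip amounts to interleaving their vertices in the order GL_TRIANGLE_STRIP expects: a0 b0 a1 b1 ... A minimal sketch, using vertex indices instead of actual coordinates (the function is illustrative, not the project's code):

```c
#include <assert.h>

/* Interleave two equal-length slice contours into triangle-strip order.
 * 2n vertices produce 2n-2 triangles when drawn as GL_TRIANGLE_STRIP. */
int interleave(const int *a, const int *b, int n, int *out)
{
    int i, k = 0;
    for (i = 0; i < n; i++) {
        out[k++] = a[i];   /* vertex from the current slice */
        out[k++] = b[i];   /* matching vertex from the next slice */
    }
    return k;
}
```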
30/07/2003
----------
- Modified the OpenGL program "xetrev.c" to display the rendered head in the
  center of the screen, and to rotate it about an internal axis.
- Added lighting to the rendering, without noticeable drop in framerate.
31/07/2003
----------
- Removed the switching of the order of vertices. Now the triangle strips are
  always built using the current vertex list first, and then the next list.
- Added mouse control over the rotation of the 3D model, based on the OpenGL
  samples by Christophe Devine <devine@cr0.net>.
01/08/2003
----------
- Added transparency to the polygon mesh that represents the skin, to allow the
  skull to be seen through the outer layer.
- Switched 'gl_draw_point_list' to use GL_LINE_STRIP, so that it is not
  necessary to repeat vertices.
- Discovered problem with 'expandList' in Windows. It is too slow, and hurts
  the performance of the display.
- Changed the reordering of the vertices for drawing with GL_TRIANGLE_STRIP.
- The reordering is now done before the rendering process. It is wrong at the
  moment.
04/08/2003
----------
- Reordering fixed; there is now a single list to draw per triangle strip, and
  it is much faster both in Linux and in Windows.
- Fixed xetrev.c to draw the triangles using a single list.
- Modified the user interface, to allow rotation while the left mouse
  button is pressed, and scaling with the right button.
- Added scaling to the keyboard controls, as well as reset buttons for scale
  and rotation.
05/08/2003
----------
- Added normal computation for the triangles created with the triangle strip.
- Added lighting to the model.
- Adjusted thresholds used for doing bi-level thresholding, and vertex
  identification. The new values provide good results with the data provided by
  James Edge.
- New values are: contrast: 70
                  vertices: 30
- Downloaded code for fast minimum-storage ray/triangle intersection from:
  http://www.acm.org/jgt/papers/MollerTrumbore97/
  Tomas Möller and Ben Trumbore. Fast, minimum storage ray-triangle
  intersection. Journal of Graphics Tools, 2(1):21-28, 1997.
- Ran basic tests with the algorithm. Gives appropriate distances.
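The Möller-Trumbore algorithm cited above can be sketched as follows: it solves for the barycentric coordinates (u, v) and the ray parameter t directly, rejecting the ray as soon as a coordinate falls outside the triangle. This follows the published method, but the interface and helper names here are illustrative, not the downloaded code:

```c
#include <assert.h>
#include <math.h>

static void sub(const double a[3], const double b[3], double r[3])
{ r[0]=a[0]-b[0]; r[1]=a[1]-b[1]; r[2]=a[2]-b[2]; }
static void cross(const double a[3], const double b[3], double r[3])
{ r[0]=a[1]*b[2]-a[2]*b[1]; r[1]=a[2]*b[0]-a[0]*b[2]; r[2]=a[0]*b[1]-a[1]*b[0]; }
static double dot(const double a[3], const double b[3])
{ return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

/* Returns 1 and the distance *t along the ray when the ray from orig
 * in direction dir hits triangle (v0, v1, v2); 0 otherwise. */
int intersect_triangle(const double orig[3], const double dir[3],
                       const double v0[3], const double v1[3],
                       const double v2[3], double *t)
{
    double e1[3], e2[3], p[3], q[3], s[3], det, inv, u, v;

    sub(v1, v0, e1);
    sub(v2, v0, e2);
    cross(dir, e2, p);
    det = dot(e1, p);
    if (fabs(det) < 1e-12)          /* ray parallel to triangle plane */
        return 0;
    inv = 1.0 / det;
    sub(orig, v0, s);
    u = dot(s, p) * inv;            /* first barycentric coordinate */
    if (u < 0.0 || u > 1.0)
        return 0;
    cross(s, e1, q);
    v = dot(dir, q) * inv;          /* second barycentric coordinate */
    if (v < 0.0 || u + v > 1.0)
        return 0;
    *t = dot(e2, q) * inv;          /* distance along the ray */
    return 1;
}
```

The returned t is the distance the project needs for the skin-to-skull measurements.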
06/08/2003
----------
- Added use of Display Lists, to precompile the models used.
- The framerate has been considerably increased, making the program more
  usable. It only takes a little longer to begin drawing on the screen.
07/08/2003
----------
- Ported Display Lists version to Windows.
- Initial work with picking, using both Select and Feedback render modes.
- Feedback is not very useful, because it returns screen coordinates of the
  vertices, which would have to be converted back to 3D space.
- Proposed solution is to create another list with all the vertices and
  normals of the skull, and give names to each triangle. When the selection is
  done, it will be necessary to go through the whole list and look for the
  correct pair, then do the comparison against the whole list of vertices of
  the outer layer.
08/08/2003
----------
- Implemented selection of vertices using picking.
- There is a new function that draws the individual triangles of the skull
  layer. This function gives names to each of the triangles in the complete
  model, and the names are later used to identify the selected triangle.
- To locate the vertices of the picked triangle, the numeric name is used along
  with the number of vertices in the slice layers, to find the slice in which
  the selected point is located; the search then continues inside the
  individual slice.
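The name-to-slice lookup described above reduces to integer division once the triangles are named sequentially. A minimal sketch, assuming each pair of contiguous slices with n vertices contributes 2*(n-1) triangles to its strip (the naming layout is an assumption for illustration; the project's actual scheme may differ):

```c
#include <assert.h>

/* Map a picked triangle's sequential name to the slice pair it belongs
 * to and its offset inside that strip. */
void locate_triangle(int name, int verts_per_slice,
                     int *slice, int *offset)
{
    int per_strip = 2 * (verts_per_slice - 1); /* triangles per slice pair */
    *slice  = name / per_strip;                /* which pair of slices */
    *offset = name % per_strip;                /* triangle inside the strip */
}
```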
- To measure the distances, the algorithm is restricted to search within a
  short range of the slices surrounding it.