Real-Time Biomechanical Simulation of Volumetric Brain
Deformation for Image Guided Neurosurgery
Simon K. Warfield, Matthieu Ferrant, Xavier Gallez, Arya Nabavi,
Ferenc A. Jolesz and Ron Kikinis
Surgical Planning Laboratory (http://www.spl.harvard.edu), Department of Radiology,
Department of Neurosurgery, Brigham and Women's Hospital and Harvard Medical
School, 75 Francis Street, Boston, MA 02115
Telecommunications Laboratory, Centre for Systems Engineering and Applied Mechanics,
Université Catholique de Louvain, Belgium
Abstract
We aimed to study the performance of a parallel implementation of an intraoperative nonrigid
registration algorithm that accurately simulates the biomechanical properties of the brain and its
deformations during surgery. The algorithm was designed to allow for improved surgical navigation and quantitative monitoring of treatment progress in order to improve the surgical outcome
and to reduce the time required in the operating room. We have applied the algorithm to two
neurosurgery cases with promising results.
High performance computing is a key enabling technology that allows the biomechanical simulation to be executed quickly enough for the algorithm to be practical. Our parallel implementation was evaluated on a symmetric multi-processor and two clusters and exhibited similar performance characteristics on each. The implementation was sufficiently fast to be used in the operating
room during a neurosurgery procedure. It allowed a three-dimensional volumetric deformation to
be simulated in less than ten seconds.
1 Introduction
The key challenge for the neurosurgeon during brain surgery is to remove as much as possible of a
tumor without destroying healthy brain tissue. This can be difficult because the visual appearance of
healthy and diseased brain tissue can be very similar. It is also complicated by the inability of the
surgeon to see critical structures underneath the brain surface as it is being cut. It was our goal to
be able to rapidly and faithfully capture the deformation of the brain during neurosurgery, so as to
improve intraoperative navigation by allowing preoperative data to be aligned to volumetric scans of the
brain acquired intraoperatively.
Image guided surgery techniques are used in operating rooms equipped with special purpose imaging equipment. The development of image guided surgical methods over the past decade has provided
a major advance in minimally invasive therapy delivery.
Image guided therapy has largely been a visualization driven task. Quantitative assessment of
intraoperative imaging data has not been possible in the past, and instead qualitative judgements by
experts in the clinical domains have been relied upon. In order to provide the surgeon or interventional
radiologist with as rich a visualization environment as possible from which to derive such judgements,
previous work has primarily been concerned with image acquisition, visualization and registration of
intraoperative and preoperative data. Biomechanically accurate registration of brain scans acquired
during surgery, as proposed here, has the potential to be a significant aid to the automatic interpretation
of intraoperative images and to enable prediction of surgical changes.
0-7803-9802-5/2000/$10.00 (c) 2000 IEEE.
Early work (reviewed by Jolesz [1]) has established the importance and value of image guidance
through the better localization of lesions, the better determination of tumor margins, and the optimization of the surgical approach. Previous algorithm design has been a steady progression of improving
image acquisition and intraoperative image processing. This has included increasingly sophisticated
multimodality image fusion and registration. Clinical experience with image guided therapy in deep
brain structures and with large resections has revealed the limitations of existing rigid registration and
visualization approaches [1].
The changes in brain shape during neurosurgery are now widely recognized as nonrigid deformations. Suitable approaches to capture these deformations and to allow integrated visualizations of
preoperative data matched to the brain as it changes shape during the course of surgery are in active
development. Previous work in capturing brain deformations for neurosurgery can be categorized by
those that use some form of biomechanical model (recent examples include [2–4]) and those that apply
a phenomenological approach relying upon image-related criteria (recent examples include [5, 6]).
A fast surgery simulation method was described in [7] which achieved speed by converting a
volumetric finite element model into a model with only surface nodes. This work had the goal of
achieving interactive graphics speeds at the cost of accuracy of the simulation. Such a model is
applicable for computer graphics oriented visualization tasks but during neurosurgical interventions
on patients we aim for as high accuracy and robustness as possible and use parallel hardware to
achieve clinically compatible execution times.
A sophisticated biomechanical model for two-dimensional brain deformation simulation using a
finite element discretization was proposed in [2]. However, this work used the pixels of the two-dimensional image as the elements of the finite element mesh, and relied upon manually determined
correspondences. Unfortunately, two-dimensional results are not useful in clinical neurosurgical practice and such a discretization approach is extremely computationally expensive (even considering a
parallel implementation) if expanded to three spatial dimensions because of the large number of voxels
in a typical intraoperative MRI (256×256×60 ≈ 4×10^6 voxels) leading to a large number of equations to
solve. Instead, the use of a finite element model with an unstructured grid can allow a representation
that faithfully models key characteristics in important regions while reducing the number of equations
to solve by using mesh elements that cover several image pixels in other regions.
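The scale argument above is easy to make concrete. The sketch below is illustrative only: the voxel grid size is the one quoted above, and the unstructured-mesh node count is back-calculated from the 77511-equation system reported later in this paper.

```python
# Rough degrees-of-freedom comparison: a voxel-wise discretization versus an
# unstructured tetrahedral mesh, with three displacement unknowns per node.

def dof_count(num_nodes, dims=3):
    # Each mesh node contributes one displacement unknown per spatial dimension.
    return num_nodes * dims

voxel_nodes = 256 * 256 * 60   # one node per voxel of a typical intraoperative MRI
mesh_nodes = 77511 // 3        # node count implied by a 77511-equation system

print(dof_count(voxel_nodes))  # 11796480 unknowns for the voxel grid
print(dof_count(mesh_nodes))   # 77511 unknowns for the unstructured mesh
print(round(dof_count(voxel_nodes) / dof_count(mesh_nodes)))  # roughly 152x fewer equations
```

Even before considering solver cost per unknown, the unstructured mesh reduces the system size by two orders of magnitude.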
Most meshing software packages used in the medical domain do not allow meshing of multiple
objects [8, 9], and often work best with regular and convex objects, which is usually not the case
for anatomical structures. Therefore, we have implemented a tetrahedral mesh generator specifically
suited for labeled 3D medical images. The mesh generator can be seen as the volumetric counterpart
of a marching tetrahedra surface generation algorithm. A detailed description of the algorithm can be
found in [10]. The resulting mesh structure is built such that for images containing multiple objects, a
fully connected and consistent tetrahedral mesh is obtained for every cell. A segmentation of the image indicates the type of anatomical structure the cell belongs to. Therefore, different biomechanical
properties and parameters can easily be assigned to the different cells or objects composing the mesh.
Boundary surfaces of objects represented in the mesh can be extracted from the mesh as triangulated
surfaces, which is convenient for running an active surface algorithm, as described below.
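Since each mesh cell carries a tissue label, the per-cell property assignment can be bookkept with a simple lookup. The sketch below is hypothetical: the tissue names and elasticity values are placeholders rather than the parameters used in this work; only the conversion from Young's modulus and Poisson ratio to the Lamé constants is standard.

```python
# Hypothetical mapping from segmentation labels to elastic material parameters.
# The Young's modulus E (Pa) and Poisson ratio nu values are placeholders only.
MATERIALS = {
    "brain":     {"E": 3000.0, "nu": 0.45},
    "ventricle": {"E": 10.0,   "nu": 0.10},
}

def lame_parameters(E, nu):
    # Standard conversion from (E, nu) to the Lame constants (lambda, mu)
    # that enter the elasticity matrix of an isotropic material.
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def cell_parameters(label):
    # Look up the material assigned to a mesh cell's tissue label.
    m = MATERIALS[label]
    return lame_parameters(m["E"], m["nu"])
```

Because the mesh generator produces a consistent mesh over all labeled objects, assigning per-object properties reduces to this kind of label lookup.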
A number of imaging modalities have been used for image guidance. These include, amongst
others, digital subtraction angiography (DSA), computed tomography (CT), ultrasound (US), and
magnetic resonance imaging (MRI). Intraoperative MR imaging can acquire high contrast images of
soft tissue anatomy which has proven to be very useful for image-guided therapy [11]. Multi-modality
registration allows preoperative data that cannot be acquired intraoperatively, such as functional MRI
(fMRI) or nuclear medicine scans, such as Positron Emission Tomography (PET) or Single Photon
Emission Computed Tomography (SPECT) scans, or magnetic resonance angiography (MRA) to be
[Figure 1 flowchart: Preparation for Image Guided Therapy covers preoperative MRI, preoperative segmentation and spatial localization; During Image Guided Therapy covers intraoperative MRI, rigid registration, tissue classification, biomechanical simulation of brain deformation, and aligned preoperative and intraoperative data.]
Figure 1: Schema for Intraoperative Segmentation and Registration
visualized together with intraoperative data.
A system for intraoperative visualization has recently been developed [12]. This system allows
surface rendering of previously prepared triangle models and arbitrary interactive resampling of 3D
grayscale data. The system also allows for visualization of virtual surgical instruments in the coordinate system of the patient and patient image acquisitions. The system supports qualitative analysis
based on expert inspection of the image data and the surgeon's expectation of what should be present
(normal anatomy, pathology, current progress of the surgery, etc.). The ability to automatically capture
the deformation of the brain during neurosurgery would allow the augmentation of such a system
to enable the visualizations prepared preoperatively to be updated to follow the changes that occur
during surgery. For example, this might allow previously acquired functional MRI (which cannot be
acquired intraoperatively) to be transformed to place the functional information in alignment with intraoperatively acquired morphologic MRI, preserving the ability to interpret areas of the morphologic
MRI based upon the functional information.
We aimed to demonstrate that a volumetric three-dimensional biomechanical simulation of brain
deformation is possible even with the time constraints of neurosurgery and that such simulations
significantly add to the value of intraoperative imaging and hence improve surgical outcomes.
2 Method
In order to successfully capture brain deformation from intraoperative images we have developed a
set of image processing algorithms that take advantage of an existing preoperative MR acquisition and
segmentation. We have experimented for several years with a general image segmentation approach
that uses a 3D digital anatomical atlas to provide automatic local context for classification [13–16].
In this application the preoperative data acts as a patient-specific atlas which enables a robust and
reliable intraoperative segmentation of the brain surface of the patient using our previously described
real-time segmentation algorithm [17]. The brain segmentation then constitutes a reliable target for
the biomechanical simulation of the brain deformation. Figure 1 illustrates the processing steps that
take place before and during the therapy procedure.
Since preoperative data is acquired before surgery, the time available for segmentation is longer.
This means we can use segmentation approaches that are as robust and accurate as possible but are
time consuming and hence impractical to use in the operating room. In our laboratory, preoperative data is segmented with a variety of manual [12], semi-automated [18] or automated [15, 16]
approaches. We attempt to select the most robust and accurate approach available for a given clinical
application. Each segmented tissue class is then converted into an explicit 3D volumetric spatially
varying model of the location of that tissue class, by computing a saturated distance transform [19] of
the tissue class. This model is used to provide robust automatic local context for the classification of
intraoperative data in the following way.
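The idea of this spatially varying model can be sketched with a brute-force signed distance computation; real implementations use fast distance transform algorithms such as the one cited above, and the saturation value here is arbitrary.

```python
import numpy as np

def saturated_distance_map(mask, saturation=5.0):
    # Signed distance from each voxel to the nearest voxel of the opposite
    # class: positive inside the tissue mask, negative outside, clipped to
    # +/- saturation. Brute force O(n^2), for illustration only.
    mask = np.asarray(mask, dtype=bool)
    grid = np.argwhere(np.ones_like(mask)).astype(float)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)
    d_to_outside = np.linalg.norm(grid[:, None] - outside[None], axis=2).min(axis=1)
    d_to_inside = np.linalg.norm(grid[:, None] - inside[None], axis=2).min(axis=1)
    signed = np.where(mask.ravel(), d_to_outside, -d_to_inside)
    return np.clip(signed, -saturation, saturation).reshape(mask.shape)
```

Clamping (saturating) the distance keeps the feature bounded far from the tissue, so the classifier is not dominated by voxels deep inside or far outside a structure.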
During surgery, intraoperative data is acquired and the preoperative data (including any MRI,
fMRI, PET, SPECT, MRA that is appropriate, the tissue class segmentation and the spatial localization
model derived from it) is aligned with the intraoperative data using an MI based rigid registration
method [12, 20]. The intraoperative image data, together with the spatial localization model, then
forms a multichannel 3D data set. Each voxel of the combined data sets is then represented by
a vector having components from the intraoperative MR scan, the spatially varying tissue location
model and if relevant to the particular surgery, any of the other preoperative image data sets. For
the first intraoperative scan to be segmented a statistical model for the probability distribution of
tissue classes in the intensity and anatomical localization feature space is built. The statistical model
is encoded implicitly by selecting groups of prototypical voxels which represent the tissue classes
to be segmented intraoperatively (less than five minutes of user interaction). The spatial location
of the prototype voxels is recorded and is used to update the statistical model automatically when
further intraoperative images are acquired and registered. This multichannel data set is then segmented
with k-NN classification [14, 21], a standard classification method which computes the type of tissue
present at each voxel by comparing the signal of the voxel to classify with the signal of previously
selected prototype voxels of known tissue type.
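A minimal sketch of this classification step follows, assuming plain Euclidean distance in the joint intensity/localization feature space; the value of k and any feature weighting used in practice are not specified here.

```python
import numpy as np

def knn_classify(voxels, prototypes, labels, k=3):
    # Assign each voxel's feature vector the majority label among its k
    # nearest prototype voxels (Euclidean distance in feature space).
    prototypes = np.asarray(prototypes, dtype=float)
    labels = np.asarray(labels)
    out = []
    for v in np.atleast_2d(voxels):
        d = np.linalg.norm(prototypes - v, axis=1)
        nearest = labels[np.argsort(d)[:k]]
        values, counts = np.unique(nearest, return_counts=True)
        out.append(values[np.argmax(counts)])
    return np.array(out)
```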
The segmentation of the intraoperative data helps to establish explicitly the regions of tissues
that correspond in the preoperative and intraoperative data. In the past we have described an image-based nonrigid registration algorithm [22, 23] which has been successfully applied to capture shape
variation in schizophrenia [24]. However, our previous approach does not constitute an accurate
biomechanical simulation of the deformation, and hence it is not possible to effectively model the
different material properties of different structures in the head, and it is not possible to use such an
approach for quantitative prediction of brain deformation. These limitations are addressed by the
biomechanical simulation approach we describe below.
2.1 Method for Biomechanical Simulation of Three-Dimensional Volumetric
Brain Deformation
Our intraoperative image acquisition and processing identifies the region of the brain volume. However, its shape changes during surgery. The goal of simulating the intraoperative brain volume deformation is to project data aligned with the brain in a previous configuration onto the brain in its new
configuration following surgical changes.
We use a two step process to achieve this. In the first step, an active surface algorithm is used to
establish the correspondences between the surfaces of the brain data. In the second step the volumetric
brain deformation implied by the surface changes is computed using a biomechanical model of the
structure of the brain.
The deformation field obtained for the surfaces is used in conjunction with the biomechanical
volumetric model to infer the deformation field inside and outside the surfaces. The key concept is
to apply forces to the volumetric model that will produce the same displacement field at the surfaces
as was obtained with the active surface algorithm. The biomechanical model will then compute the
deformation throughout the volume.
2.1.1 Correspondence Detection with an Active Surface
The active surface algorithm iteratively deforms the surface of the first brain volume to match that of
the second volume. This is done iteratively by applying forces derived from the volumetric data to
an elastic membrane model of the surface. The derived forces are a decreasing function of the data
gradients, so as to be minimized at the edges of objects in the volume. To increase robustness and the
convergence rate of the process, we have included prior knowledge about the expected gray level and
gradients of the objects being matched. This algorithm is fully described in [25].
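One common choice of such a decreasing function is sketched below; this is only an illustration, since the actual force model in [25] also folds in the expected gray levels and gradients of the objects being matched.

```python
import numpy as np

def edge_stopping_weight(image):
    # A weight that decreases with gradient magnitude, so that forces
    # derived from it vanish near object edges and the active surface
    # settles on those edges.
    grads = np.gradient(np.asarray(image, dtype=float))
    gmag = np.sqrt(sum(g * g for g in grads))
    return 1.0 / (1.0 + gmag)
```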
2.1.2 Biomechanical Simulation of Volumetric Brain Deformation
Assuming a linear elastic continuum with no initial stresses or strains, the potential energy of an
elastic body submitted to externally applied forces can be expressed through a finite element model
as [26]:
"!$#&%(')%+*-,
(1)
is the vector representing the forces applied to the elastic
body
where
.
!/#&%0'1(forces
%+*2,
per unit volume, surface forces or forces concentrated
at the nodes of the mesh), the
displacement vector field we wish
to compute, and the body on which one is working described by
a mesh of tetrahedral elements. is the strain vector and the stress vector, linked to the strain vector
by the material’s constitutive equations.
In !436
the
52%(36case
78%(3:9of
%(;<5+linear
78%(;=7(9%>;<elasticity,
5?9?, with no initial stresses or strains, this relation is described as
A@
, where B is the elasticity matrix characterizing the material’s
properties [26].
each element of the mesh is defined as
The continuous displacement field everywhere within
DC
a function of the displacement
C !/GD, at the element’s nodes E weighted by the element’s shape functions
(interpolating functions) F E
,
H!$GD,
S
JI:KML4NPORQ
F
EUT
E
C !$G&,> C
E.V
(2)
C\]
The elements we use to represent volume data are tetrahedral (FXWZY\[ _^ ), with linear interpolation of the displacement field. Hence, the shape function of node ` of tetrahedral element a is defined
as
C !$GD,
C f C #gh C 'i C *kj
C)d0e E
E
E
E
(3)
F E
b2c
c
C
The computation of the volume of the element
and the interpolation coefficients are detailed in
[26, pages 91–92].
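For concreteness, the shape-function coefficients of a linear tetrahedron can be obtained by inverting a small matrix built from the element's vertex coordinates. In this sketch the 1/(6V^e) factor is folded into the returned coefficients, so that each shape function evaluates to 1 at its own node and 0 at the other three.

```python
import numpy as np

def tet_shape_functions(vertices):
    # Coefficients of the linear shape functions N_n(x) = a_n + b_n*x +
    # c_n*y + d_n*z of a tetrahedron, plus its volume. The condition
    # N_n(vertex_m) = delta_nm means M @ C = I for M with rows [1, x, y, z],
    # so the coefficient rows are the rows of inv(M) transposed.
    v = np.asarray(vertices, dtype=float)        # shape (4, 3)
    M = np.hstack([np.ones((4, 1)), v])          # rows [1, x, y, z]
    volume = abs(np.linalg.det(M)) / 6.0
    coeffs = np.linalg.inv(M).T                  # row n -> (a_n, b_n, c_n, d_n)
    return coeffs, volume
```

On the unit tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1) this recovers the familiar functions 1 - x - y - z, x, y and z, and a volume of 1/6.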
The volumetric deformation of the brain is found by solving for the displacement field that minimizes the energy described by Equation 1, after fixing the displacements at the surface to match those
generated by the active surface model. We solve the system of equations with the Portable, Extensible Toolkit for Scientific Computation (PETSc) package [27, 28] using the Generalized Minimal
Residual (GMRES) solver with block Jacobi preconditioning.
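The structure of such a solve can be illustrated with a small self-contained GMRES using a diagonal (Jacobi) preconditioner. This is only a stand-in for the parallel PETSc solver, which operates on distributed sparse matrices with block Jacobi preconditioning.

```python
import numpy as np

def gmres_solve(A, b, tol=1e-10, max_iter=None):
    # Unrestarted GMRES with a diagonal (Jacobi) preconditioner: build an
    # Arnoldi basis of the preconditioned Krylov space and pick the iterate
    # minimizing the residual norm over that space.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    max_iter = max_iter or n
    Minv = 1.0 / np.diag(A)              # Jacobi preconditioner
    pA = Minv[:, None] * A               # left-preconditioned operator
    pb = Minv * b
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    beta = np.linalg.norm(pb)
    Q[:, 0] = pb / beta
    x = np.zeros(n)
    for k in range(max_iter):
        w = pA @ Q[:, k]                 # Arnoldi step
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        # Least-squares solve of the small Hessenberg system.
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(pA @ x - pb) < tol or H[k + 1, k] < 1e-14:
            break
        Q[:, k + 1] = w / H[k + 1, k]
    return x
```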
2.2 Hardware for Intraoperative Image Acquisition and Parallel Computation
Figure 2 shows an open-configuration magnetic resonance scanner optimized for imaging during surgical procedures [1, 11]. Operations take place with the patient on the bed inside the toroidal magnets
and the surgeon standing in the gap between them.
Figure 2: Intraoperative magnetic resonance imaging scanner in which image guided therapy takes
place. The operating room consists entirely of equipment safe to use around a 0.5T magnetic field.
For neurosurgery cases the patient is typically placed lying inside the two toroidal magnets and the
surgeon stands in the gap between them.
Item          Description
CPU           Compaq Alpha 21164A (ev56) 533MHz w/ 8KB+8KB L1 and 96KB L2 on-chip caches
Motherboard   Microway Screamer LX w/ 2MB L3 9ns SRAM cache and a 128-bit wide 83MHz memory bus (based on Compaq AlphaPC 164 LX motherboard design)
Memory        768 MB, 128-bit ECC unbuffered SDRAM 100MHz (1.3 GBytes/sec peak transfer rate)
Hard disk     2.1 GB Seagate Medalist 2132 (ST32132A) IDE
Network card  Compaq DE500 Ethernet 10/100Mbps RJ45 full duplex
OS            RedHat Linux 6.1
Figure 3: Specifications of workstations in the Deep Flow cluster.
Measurements of parallel performance were made on three different parallel architectures. The
first of the three architectures was a parallel cluster, called “Deep Flow”, consisting of a cluster of
Alpha-based workstations running Linux connected by 100Mbps full duplex Fast Ethernet. The specifications of this cluster are given in Figure 3. The second architecture was a Sun Microsystems
Ultra HPC 6000 symmetric multi-processor machine with 20 250MHz UltraSPARC-II (4MB Ecache)
CPUs and 5 GB of RAM. The third architecture was a cluster of two symmetric multi-processor Ultra 80 workstations, each with four 450MHz UltraSPARC-II (4MB Ecache) CPUs and 2GB RAM
networked with 100Mbps Fast Ethernet.
3 Results
In this section illustrative visualizations of biomechanical simulation of intraoperative brain shift are
presented along with an analysis of the performance of our parallel implementation. During surgical
procedures in the brain, intraoperative MRI (IMRI) data was acquired and stored. Segmentation and
nonrigid registration were applied to this data after therapy delivery in order to allow us to assess the
robustness, accuracy and time requirements of the approach. In the future we intend to carry out
segmentation, nonrigid registration and visualization using the approach described here during the
interventional procedures with the goal of improving image guided therapy outcomes.
3.1 Biomechanical Simulation of Volumetric Brain Deformation
In each neurosurgery case several volumetric MRI scans were carried out during surgery. The first
scan was acquired at the beginning of the procedure before any changes in the shape of the brain
took place, and then over the course of surgery other scans were acquired as the surgeon checked the
progress of tumor resection. The final scan in each sequence exhibits significant nonrigid deformation
and loss of tissue due to tumor resection. In order to test our biomechanical simulation approach each
subsequent scan was aligned to the first by rigid registration using maximization of mutual information. This method computes a global alignment accounting for positioning differences in the scan
coordinates but does not attempt to correct for nonrigid deformation. The first scan was manually
segmented to act as an individualized anatomical model. The last scan in each sequence was then segmented with our intraoperative segmentation approach. Our algorithm for biomechanical simulation
of the brain deformation was then executed to compute the volumetric deformation between the two
data scans.
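The mutual information criterion used for the rigid alignment can be sketched from the joint intensity histogram of the two images; the registration then searches over rigid transforms for the pose maximizing this score. The histogram size below is arbitrary.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    # Mutual information of two overlapping images, estimated from their
    # joint intensity histogram: MI = sum p(a,b) log(p(a,b) / (p(a) p(b))).
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
    nz = pxy > 0                             # skip empty histogram cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

An image is maximally informative about itself, so the score is highest when the two scans are well aligned and drops toward zero as they decorrelate.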
In order to check the quality of the registration, the deformation of the first MRI scan was visually
compared with the MRI to which it was matched. In each case the registered brain closely matched the
expected location. Figure 4 is a visualization of the accuracy of the recovered volumetric deformation
illustrated with two-dimensional slices.
Figure 4(a) shows the brain in its initial configuration, with the skin bright, the brain gray and the lateral ventricles dark. This defines the coordinate system in which preoperative data is aligned and visualized.
Figure 4(b) is the corresponding slice of the second 3D scan which forms the target to which we
wish to match the first volume. Significant sinking of the surface of the brain due to the surgery is
apparent. Figure 4(c) illustrates the result of simulating the brain deformation in order to match the
first scan to the second scan. A 3D visualization of this match is shown in Figure 5. Figure 4(d)
shows the difference between the simulated deformation of the brain of Figure 4(a) and the MRI scan
of Figure 4(b) to which it is matched. The closeness of the match of the simulated deformation to
the actual deformation can be judged by the very small intensity differences at the boundary of the
simulated deformed brain and the air gap inside the skull of the target image. Some small intensity
differences are expected because intrinsic MR scanner intensity variability causes a small variation
in the observed voxel intensities from scan to scan. A small misregistration of the lateral ventricles
on the side opposite the surgical resection can be observed. This occurs because our biomechanical
model treats the brain as a homogeneous material, but the cerebral falx (a stiff membrane between the
two hemispheres) and the cerebrospinal fluid inside the lateral ventricles are not well approximated
by this homogeneous model. This misregistration is not particularly relevant for a surgical resection
on the opposite side of the brain.
Figure 5 is a visualization of the simulated deformation of the first intraoperative scan volume onto
the second intraoperative scan volume. The 3D surface rendering indicates the simulated deformation
of the brain in the first scan and the cross-sectional slices show the MRI of the second scan. The
skin appears bright in the MRI, and the large dark region between the skin and the brain surface is
part of the deformation due to the surgery. The close match of the simulation and the actual brain
deformation is clear. The color coding indicates the magnitude of the deformation at every point on
the surface of the deformed volume and the blue arrows indicate the magnitude and direction of the
deformation by showing the initial and final position of points on the surface undergoing deformation.
3.2 Performance Analysis of Parallel Implementation
The results here focus upon the biomechanical simulation of brain deformation because in the past
obtaining sufficiently accurate results in a clinically compatible small amount of time has been seen
as extremely difficult. Often less accurate but fast models of deformation have been used. In order to
provide a context for the biomechanical simulation amongst the other image analysis and acquisition
tasks that must occur intraoperatively, a timeline for these actions is shown in Figure 6.
Ultimately the ability to use these intraoperative image analysis methods relies upon them being
sufficiently robust to provide accurate results for typical clinical cases, and critically, to be sufficiently
fast to provide feedback to the surgeon at a rate that can be practical to use during neurosurgery. We
collected performance results on three different architectures in order to assess the absolute performance and the scaling behavior of our parallel implementation.
Figure 7 shows the timing results for a biomechanical simulation of brain deformation on the
Deep Flow cluster. The simulated deformation determined by solving this set of equations is shown
in Figure 4 and Figure 5. This result demonstrates that a biomechanical simulation of volumetric
brain deformation can be carried out in less than ten seconds.
The slow scaling of the implementation is attributed to imbalance in the matrix assembly and
(a) A single slice from the first 3D intraoperative MRI scan. (b) The corresponding slice in the second 3D MRI scan. (c) The matched slice of the first volume after simulation of the brain deformation. (d) Magnitude of the difference between the simulated deformation and the corresponding slice of the second scan, showing the closeness of the alignment of the brain.
Figure 4: Two dimensional slices through three-dimensional data, showing the match of the simulated
deformation of the initial brain onto the actual deformed brain. The quality of the match is significantly better than can be obtained through rigid registration alone. The closeness of the match of the
simulated deformation to the actual deformation can be judged by the very small intensity differences
at the boundary of the simulated deformed brain and the air gap inside the skull of the target image. Some small intensity differences are expected because intrinsic MR scanner intensity variability
causes a small variation in the observed voxel intensities from scan to scan. A small misregistration of
the lateral ventricles on the side opposite the surgical resection can be observed. This occurs because
our biomechanical model treats the brain as a homogeneous material, but the cerebral falx (a stiff
membrane between the two hemispheres) and the cerebrospinal fluid inside the lateral ventricles are
not well approximated by this homogeneous model. This misregistration is not particularly relevant
for a surgical resection on the opposite side of the brain.
Figure 5: Visualization of deformation of first intraoperative scan volume onto second intraoperative
scan volume. The 3D surface rendering indicates the simulated deformation of the first scan and the
cross-sectional slices show the MRI of the second scan. The skin appears bright in the MRI, and the
large dark region between the skin and the brain surface is part of the deformation due to the surgery.
The close match of the simulation and the actual brain deformation is clear. The color coding
indicates the magnitude of the deformation at every point on the surface of the deformed volume and
the blue arrows indicate the magnitude and direction of the deformation showing the initial and final
position of points on the surface undergoing deformation.
[Figure 6 timeline: before surgery, preoperative segmentation; during surgery, intraoperative MRI, rigid registration, tissue classification, surface displacement, biomechanical simulation, visualization, surgical progress.]
Figure 6: A timeline of typical intraoperative image acquisition and analysis tasks.
[Figure 7 plot: log-scale time in seconds versus number of CPUs (1–16), with curves for assembling the system of equations, solving the system of equations, and the sum of initialization, assembly and solve times.]
Figure 7: Timing results for assembling, solving, and the sum of initialization, assembling and solving
time for a system of 77511 equations simulating the biomechanical deformation of the brain on a
cluster of 16 Compaq Alpha 21164A 533MHz CPU-based workstations networked with Fast Ethernet.
[Figure 8 plots: log-scale time in seconds versus number of CPUs (1–20), with curves for assembling and solving the system of equations. (a) Sun Microsystems Ultra HPC 6000 with 20 250MHz CPUs. (b) Two Sun Microsystems Ultra 80 servers, each with 4 450MHz CPUs, networked with Fast Ethernet.]
Figure 8: Timing results for assembling and solving the system of 77511 equations simulating the
biomechanical deformation of the brain.
matrix solve processes. Our parallel decomposition for the matrix assembly is based on sending
approximately equal numbers of mesh nodes to each CPU. However, in our unstructured grid different
mesh nodes can have different connectivity, and hence require a different amount of work in order to
interpolate values to the mesh nodes. A second load imbalance affects the scaling of the solving
time. Given an initially balanced set of equations, the surface displacements are applied as boundary
conditions, substituting known values for equations in the original system, reducing the number of
unknowns that must be solved for. This has the effect of creating some imbalance, as the distribution
of surface displacements is not equal across CPUs.
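The boundary-condition substitution described above can be sketched for a dense stiffness matrix; a per-process sparse version behaves analogously, and the index sets here are illustrative.

```python
import numpy as np

def apply_surface_displacements(K, f, fixed_idx, fixed_vals):
    # Impose known surface displacements as Dirichlet conditions: move their
    # contribution to the right-hand side and drop the corresponding
    # rows/columns, leaving a smaller system in the free unknowns.
    fixed_idx = np.asarray(fixed_idx)
    fixed_vals = np.asarray(fixed_vals, dtype=float)
    free = np.setdiff1d(np.arange(len(f)), fixed_idx)
    f_reduced = f[free] - K[np.ix_(free, fixed_idx)] @ fixed_vals
    K_reduced = K[np.ix_(free, free)]
    return K_reduced, f_reduced, free
```

Because surface nodes are not spread evenly across the per-CPU partitions, the number of rows eliminated differs from process to process, which is the source of the imbalance discussed above.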
For intraoperative use, the absolute time required for initialization, assembly and solving is acceptable. Time required for initialization can be overlapped with earlier image processing. For display
of the simulated deformation we need to resample a data set according to the computed deformation,
which requires approximately 0.5 seconds. Our current implementation simply writes the
deformation to disk which imposes a fixed I/O cost (not included in the timing results) that will not
be necessary in practical application during surgery.
Figure 8 shows timing results for assembling and solving the system of 77511 equations simulating the biomechanical deformation of the brain on a Sun Microsystems Ultra HPC 6000 symmetric
multi-processor with 20 250MHz CPUs and on a pair of Sun Microsystems Ultra 80 servers each with
4 450MHz CPUs networked with Fast Ethernet. These results indicate scaling performance similar to
that obtained on the Deep Flow cluster, despite the differences in architectures.
In the future an improved biomechanical model could aim to better model different structures in
the brain. This may necessitate a higher resolution mesh, and hence a larger number of equations
to solve. Figure 9 shows timing results for assembling and solving a system of 253308 equations to
simulate the biomechanical deformation of the brain on a Sun Microsystems Ultra HPC 6000 with
20 250MHz CPUs. The timing results indicate that we can assemble and solve a system of equations
more than three times larger than that necessary to obtain excellent results with our current model in a clinically
compatible time frame.
[Figure 9 plot: log-scale time in seconds versus number of CPUs (1–20), with curves for assembling and solving the system of equations.]
Figure 9: Timing results for assembling and solving a system of 253308 equations simulating the
biomechanical deformation of the brain on a Sun Microsystems Enterprise Server 6000 with 20
250MHz CPUs.
4 Discussion and Conclusion
Our early experience with two neurosurgery cases indicates that our algorithm for intraoperative biomechanical simulation of brain deformation is a robust and reliable method for capturing the changes in brain shape that occur during neurosurgery. The registration algorithm requires no user interaction, and the parallel implementation is sufficiently fast to be used intraoperatively.
Intraoperative registration adds significantly to the value of intraoperative imaging. It provides quantitative monitoring of therapy application, including the ability to compare quantitatively against a preoperatively determined treatment plan, and it enables preoperative data to be aligned with the current configuration of the patient's brain, allowing 3D interactive visualization of the fused multi-modality data.
The contribution of this work is twofold: the evaluation of a parallel implementation of an algorithm for intraoperative registration by biomechanical simulation of brain deformation, and the demonstration that high fidelity simulations of brain deformation are not too computationally expensive for intraoperative use. High performance computing is a key enabling technology that allows the biomechanical simulation to be executed quickly enough for the nonrigid registration algorithm to be practical in clinical use during neurosurgery. Our parallel implementation shows good speedup with increasing CPU count and a wall-clock time compatible with the requirements of neurosurgical intervention. We have evaluated our algorithm on two neurosurgery cases with promising results. Further clinical validation with a larger number of cases will be necessary to determine whether our algorithm improves intraoperative navigation and intraoperative therapy delivery, and hence therapy outcomes.
In the future, the parallel implementation described here could be improved by addressing the load imbalances in order to achieve better scaling. A tetrahedral mesh with a more regular connectivity pattern would allow better scaling of the matrix assembly. The parallel decomposition of the system of equations could be modified to account for the distribution of known displacements, improving the scaling of the solver. Registration accuracy could be further improved by a more sophisticated model of the material properties of the brain (for example, more accurate modelling of the cerebral falx and the lateral ventricles). An intraoperative segmentation capability able to identify these structures would be necessary before such a model could be applied routinely in surgery.
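The displacement-aware decomposition suggested above can be sketched as follows. This is a hypothetical illustration, not our implementation: each mesh block is weighted by the unknowns it contributes (nodes with known displacements contribute none), and blocks are assigned greedily to the least loaded CPU. A production code would instead pass such weights to a graph partitioner.

```python
# Greedy, weight-aware assignment of mesh blocks to CPUs, balancing by
# unknown count rather than node count. Block weights are invented.
import heapq

def balance_by_unknowns(block_unknowns, n_cpus):
    """block_unknowns: unknown count contributed by each mesh block.
    Returns the sorted per-CPU total unknowns after greedy assignment."""
    heap = [(0, cpu) for cpu in range(n_cpus)]  # (current load, cpu id)
    heapq.heapify(heap)
    for w in sorted(block_unknowns, reverse=True):  # largest blocks first
        load, cpu = heapq.heappop(heap)             # least loaded CPU
        heapq.heappush(heap, (load + w, cpu))
    return sorted(load for load, _ in heap)

# Blocks with very different unknown counts (interior-heavy blocks carry
# many unknowns; surface-heavy blocks carry few).
loads = balance_by_unknowns([900, 850, 820, 400, 380, 350, 120, 100], 4)
print(loads)
```

Balancing on unknowns rather than nodes directly targets the solver-side imbalance observed in our timing results.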
Acknowledgements: This investigation was supported by NIH NCRR P41 RR13218, NIH P01 CA67165 and NIH R01 RR11747. Access to the Deep Flow parallel cluster (http://www.meca.ucl.ac.be/mema/deepflow), funded through the ARC 97/02-210 project of the Communauté Française de Belgique, was kindly granted to us by Roland Keunings.
References
[1] F. Jolesz, “Image-guided Procedures and the Operating Room of the Future,” Radiology,
vol. 204, pp. 601–612, May 1997.
[2] A. Hagemann, K. Rohr, H. Stiel, U. Spetzger, and J. Gilsbach, “Biomechanical modeling of the
human head for physically based, non-rigid image registration,” IEEE Transactions on Medical
Imaging, vol. 18, no. 10, pp. 875–884, 1999.
[3] O. Skrinjar and J. Duncan, “Real time 3D brain shift compensation,” in IPMI’99, pp. 641–649,
1999.
[4] M. Miga, K. Paulsen, J. Lemery, A. Hartov, and D. Roberts, “In vivo quantification of a homogeneous brain deformation model for updating preoperative images during surgery,” IEEE
Transactions on Medical Imaging, vol. 47, pp. 266–273, February 1999.
[5] D. Hill, C. Maurer, R. Maciunas, J. Barwise, J. Fitzpatrick, and M. Wang, “Measurement of
intraoperative brain surface deformation under a craniotomy,” Neurosurgery, vol. 43, pp. 514–
526, 1998.
[6] N. Hata, Rigid and deformable medical image registration for image-guided surgery. PhD thesis,
University of Tokyo, 1998.
[7] M. Bro-Nielsen, “Surgery Simulation Using Fast Finite Elements,” in Visualization in Biomedical Computing, pp. 529–534, 1996.
[8] W. Schroeder, K. Martin, and B. Lorensen, The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics. Prentice Hall PTR, New Jersey, 1996.
[9] B. Geiger, “Three dimensional modeling of human organs and its application to diagnosis and
surgical planning,” Tech. Rep. 2105, INRIA, 1993.
[10] M. Ferrant, S. K. Warfield, C. R. G. Guttmann, R. V. Mulkern, F. A. Jolesz, and R. Kikinis, “3D
Image Matching Using a Finite Element Based Elastic Deformation Model,” in MICCAI 99:
Second International Conference on Medical Image Computing and Computer-Assisted Intervention (C. Taylor and A. Colchester, eds.), pp. 202–209, Springer-Verlag, Heidelberg, Germany,
1999.
[11] P. M. Black, T. Moriarty, E. Alexander, P. Stieg, E. J. Woodard, P. L. Gleason, C. H. Martin,
R. Kikinis, R. B. Schwartz, and F. A. Jolesz, “The Development and Implementation of Intraoperative MRI and its Neurosurgical Applications,” Neurosurgery, vol. 41, pp. 831–842, April
1997.
[12] D. Gering, A. Nabavi, R. Kikinis, W. Grimson, N. Hata, P. Everett, F. Jolesz, and W. Wells, “An
Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and
Interventional Imaging,” in MICCAI 99: Proceedings of the Second International Conference on
Medical Image Computing and Computer Assisted Intervention, pp. 809–819, Springer Verlag,
1999.
[13] S. Warfield, J. Dengler, J. Zaers, C. R. Guttmann, W. M. Wells III, G. J. Ettinger, J. Hiller, and
R. Kikinis, “Automatic identification of Grey Matter Structures from MRI to Improve the Segmentation of White Matter Lesions,” Journal of Image Guided Surgery, vol. 1, no. 6, pp. 326–
338, 1995.
[14] S. K. Warfield, M. Kaus, F. A. Jolesz, and R. Kikinis, “Adaptive Template Moderated Spatially
Varying Statistical Classification,” in MICCAI 98: First International Conference on Medical
Image Computing and Computer-Assisted Intervention, pp. 231–238, Springer-Verlag, Heidelberg, Germany, October 11–13 1998.
[15] M. R. Kaus, S. K. Warfield, A. Nabavi, E. Chatzidakis, P. M. Black, F. A. Jolesz, and R. Kikinis,
“Segmentation of MRI of meningiomas and low grade gliomas,” in MICCAI 99: Second International Conference on Medical Image Computing and Computer-Assisted Intervention (C. Taylor
and A. Colchester, eds.), pp. 1–10, Springer-Verlag, Heidelberg, Germany, 1999.
[16] S. K. Warfield, M. Kaus, F. A. Jolesz, and R. Kikinis, “Adaptive, Template Moderated, Spatially
Varying Statistical Classification,” Medical Image Analysis, vol. 4, no. 1, pp. 43–55, 2000.
[17] S. K. Warfield, F. A. Jolesz, and R. Kikinis, “Real-Time Image Segmentation for Image-Guided
Surgery,” in Supercomputing 1998, pp. 1114:1–14, November 1998.
[18] R. Kikinis, M. E. Shenton, G. Gerig, J. Martin, M. Anderson, D. Metcalf, C. R. G. Guttmann,
R. W. McCarley, W. E. Lorenson, H. Cline, and F. Jolesz, “Routine Quantitative Analysis of
Brain and Cerebrospinal Fluid Spaces with MR Imaging,” Journal of Magnetic Resonance Imaging, vol. 2, pp. 619–629, 1992.
[19] I. Ragnemalm, “The Euclidean distance transform in arbitrary dimensions,” Pattern Recognition
Letters, vol. 14, pp. 883–888, 1993.
[20] W. M. Wells, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, “Multi-modal volume registration by maximization of mutual information,” Medical Image Analysis, vol. 1, pp. 35–51, March
1996.
[21] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. John Wiley & Sons, Inc.,
1973.
[22] J. Dengler and M. Schmidt, “The Dynamic Pyramid – A Model for Motion Analysis with
Controlled Continuity,” International Journal of Pattern Recognition and Artificial Intelligence,
vol. 2, no. 2, pp. 275–286, 1988.
[23] S. K. Warfield, A. Robatino, J. Dengler, F. A. Jolesz, and R. Kikinis, “Nonlinear Registration
and Template Driven Segmentation,” in Brain Warping (A. W. Toga, ed.), ch. 4, pp. 67–84,
Academic Press, San Diego, USA, 1999.
[24] D. V. Iosifescu, M. E. Shenton, S. K. Warfield, R. Kikinis, J. Dengler, F. A. Jolesz, and R. W.
McCarley, “An Automated Registration Algorithm for Measuring MRI Subcortical Brain Structures,” NeuroImage, vol. 6, pp. 12–25, 1997.
[25] M. Ferrant, O. Cuisenaire, and B. Macq, “Multi-Object Segmentation of Brain Structures in 3D
MRI Using a Computerized Atlas,” in SPIE Medical Imaging ’99, vol. 3661-2, pp. 986–995,
1999.
[26] O. C. Zienkiewicz and R. L. Taylor, The Finite Element Method: Basic Formulation and Linear
Problems. McGraw Hill Book Co., New York, 4th ed., 1994.
[27] S. Balay, W. D. Gropp, L. C. McInnes, and B. F. Smith, “Efficient management of parallelism in
object oriented numerical software libraries,” in Modern Software Tools in Scientific Computing
(E. Arge, A. M. Bruaset, and H. P. Langtangen, eds.), pp. 163–202, Birkhauser Press, 1997.
[28] S. Balay, W. D. Gropp, L. C. McInnes, and B. F. Smith, “PETSc 2.0 users manual,” Tech. Rep.
ANL-95/11 - Revision 2.0.28, Argonne National Laboratory, 2000.