poster - Ophthalmic Image Analysis (OPTIMA)

Transcription

Automatic segmentation of the posterior vitreous
boundary in retinal optical coherence tomography
Alessio Montuoro1, Sebastian M. Waldstein1, Ana-Maria Glodan1, Dominika Podkowinski1, Bianca S. Gerendas1, Georg Langs2, Christian Simader1, Ursula Schmidt-Erfurth1
5275
Christian Doppler Laboratory for Ophthalmic Image Analysis (OPTIMA)
1 Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Austria
2 Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
Financial disclosures: None
Introduction
Disorders of the vitreomacular interface such as vitreomacular traction and
macular hole formation have recently been made accessible to pharmacologic
treatment by the introduction of enzymatic vitreolysis. However, this therapeutic
option is only efficacious in a subset of patients with strictly defined patterns of
vitreous adhesions. Moreover, posterior vitreous detachment has been
demonstrated to impact the efficacy of intravitreally administered antiangiogenic
agents. Therefore, precise and reproducible detection, quantification and classification of the posterior vitreous boundary and its adhesions at the macula are of major importance.
The aim of this study was to develop a fully automatic method to segment the posterior vitreous boundary in Spectral Domain Optical Coherence Tomography (SD-OCT) scans.
Data
A set of 88 macula-centered Heidelberg Spectralis SD-OCT volume scans from patients available at the Vienna Reading Center was included.
The posterior vitreous boundary was manually annotated in 337 B-scans and the automatic Inner Limiting Membrane (ILM) and Retinal Pigment Epithelium (RPE) segmentations were extracted.
These annotations were used to assign each voxel in the SD-OCT volume to one of 5 classes:
• Vitreous
• Vitreous cortex
• Volume between vitreous cortex and ILM
• Between the ILM and the RPE
• Below the RPE
The vitreous cortex class comprised the manually annotated voxel and the 3 voxels above it; the 20 voxels above that were excluded from the training set.
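To make the class assignment concrete, here is a minimal numpy sketch; the class IDs, array names and the handling of the cortex/excluded bands are illustrative assumptions, not taken verbatim from the poster:

```python
import numpy as np

# Hypothetical class IDs (illustrative only).
VITREOUS, CORTEX, CORTEX_TO_ILM, ILM_TO_RPE, BELOW_RPE, EXCLUDED = range(6)

def label_ascan(depth, pvd, ilm, rpe, cortex_px=4, excluded_px=20):
    """Assign every voxel of one A-scan (image column) to a class.

    depth         -- number of voxels along the A-scan
    pvd, ilm, rpe -- row indices (top = 0) of the annotated posterior vitreous
                     boundary, the ILM and the RPE for this A-scan
    """
    z = np.arange(depth)
    labels = np.full(depth, VITREOUS, dtype=np.int8)
    labels[(z > pvd) & (z <= ilm)] = CORTEX_TO_ILM           # vitreous cortex .. ILM
    labels[(z > ilm) & (z <= rpe)] = ILM_TO_RPE              # ILM .. RPE
    labels[z > rpe] = BELOW_RPE                               # below the RPE
    # Vitreous cortex: the annotated voxel and the voxels directly above it.
    labels[max(pvd - cortex_px + 1, 0):pvd + 1] = CORTEX
    # Buffer above the cortex that is excluded from the training set.
    labels[max(pvd - cortex_px - excluded_px + 1, 0):max(pvd - cortex_px + 1, 0)] = EXCLUDED
    return labels
```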
Methodology

Estimate local image orientation

Raw image moment:
$$M_{pq} = \sum_{x}\sum_{y} x^{p}\, y^{q}\, I[x, y]$$

Centroid:
$$\bar{x} = \frac{M_{10}}{M_{00}}, \qquad \bar{y} = \frac{M_{01}}{M_{00}}$$

Central image moment:
$$\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^{p}\, (y - \bar{y})^{q}\, I[x, y]$$

Normalized central moment:
$$\mu'_{pq} = \frac{\mu_{pq}}{M_{00}}$$

Covariance matrix:
$$\operatorname{cov}(I) = \begin{pmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{pmatrix}$$

The eigenvectors of this covariance matrix correspond to the major/minor axes; the orientation is given by the angle of the eigenvector with the largest eigenvalue.

Image orientation:
$$\varphi(I) = \tfrac{1}{2} \arctan\!\left(\frac{2\,\mu'_{11}}{\mu'_{20} - \mu'_{02}}\right)$$

Masked local image moments:
$$M_{pq} = \sum_{x}\sum_{y} x^{p}\, y^{q}\, I_{M}[x, y]\, mask[x, y]$$
$$\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^{p}\, (y - \bar{y})^{q}\, I_{M}[x, y]\, mask[x, y]$$
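A compact numpy sketch of this orientation estimate, assuming a 2D grayscale patch `I` and an optional binary `mask` (names are illustrative):

```python
import numpy as np

def local_orientation(I, mask=None):
    """Dominant orientation of a 2D patch from its (masked) image moments."""
    Iw = I if mask is None else I * mask
    y, x = np.mgrid[0:I.shape[0], 0:I.shape[1]]             # pixel coordinates

    def M(p, q):                                             # raw image moment
        return np.sum(x**p * y**q * Iw)

    x_bar, y_bar = M(1, 0) / M(0, 0), M(0, 1) / M(0, 0)      # centroid

    def mu(p, q):                                            # normalized central moment
        return np.sum((x - x_bar)**p * (y - y_bar)**q * Iw) / M(0, 0)

    # Angle of the eigenvector with the largest eigenvalue of the covariance
    # matrix; arctan2 is the quadrant-aware form of the closed-form expression.
    return 0.5 * np.arctan2(2.0 * mu(1, 1), mu(2, 0) - mu(0, 2))
```

Applying this to a window around every pixel gives the per-pixel orientation used to align the patches in the next step.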
Extract patches and compute eigenfeatures
1. Extract 21 x 21 patches around each pixel in the training set
2. Rotate according to the local image orientation
3. Compute a Principal Component Analysis
4. Use the resulting eigenvectors as filters for feature generation
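A sketch of this step under the same assumptions (numpy/scipy only; the number of retained eigenvectors and all helper names are illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

PATCH = 21                                    # 21 x 21 patches

def aligned_patch(image, row, col, angle_rad):
    """Extract the patch around (row, col) and rotate it to the local orientation."""
    half = PATCH // 2
    padded = np.pad(image, half, mode="reflect")
    patch = padded[row:row + PATCH, col:col + PATCH]
    return rotate(patch, np.degrees(angle_rad), reshape=False, mode="nearest")

def eigenfilters(train_patches, n_filters=16):
    """PCA over the aligned training patches; the eigenvectors act as filters."""
    X = np.stack([p.ravel() for p in train_patches])
    X = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    top = eigvec[:, ::-1][:, :n_filters]      # strongest components first
    return top.T.reshape(n_filters, PATCH, PATCH)

def eigenfeatures(patch, filters):
    """Project one aligned patch onto the eigenfilters -> rotation-invariant features."""
    return np.array([np.sum(patch * f) for f in filters])
```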
Train Random Forest classifier and predict
We use a 3D graph cut approach to find a preliminary segmentation.
The poor results are due to the fact that the classes have similar appearance (and therefore similar feature representations). Therefore, additional features are needed:
→ distance from the VMI, ILM and RPE
→ local spatial context
[Figure] Ground truth vs. preliminary segmentation, with the per-voxel classes (vitreous, vitreous cortex, between vitreous cortex and ILM, between ILM and RPE, below RPE, excluded).
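A hedged sketch of the classification step using scikit-learn's RandomForestClassifier (the feature matrix layout and the excluded-label handling are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_classifier(features, labels, excluded_label=5):
    """Train a Random Forest on per-voxel eigenfeatures.

    features -- (n_voxels, n_features) eigenfeature matrix
    labels   -- (n_voxels,) class labels, with a marker for excluded voxels
    """
    keep = labels != excluded_label                   # drop the excluded buffer
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(features[keep], labels[keep])
    return clf

def predict_volume(clf, features, volume_shape):
    """Predict a class for every voxel and reshape back to the SD-OCT volume."""
    return clf.predict(features).reshape(volume_shape)
```

The per-class probabilities (clf.predict_proba) could then serve as unary costs for the 3D graph cut that produces the preliminary segmentation.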
Train Random Forest classifier with additional features
By repeating this step, an iterative refinement of the segmentation results can be achieved.
[Figure] Ground truth vs. segmentation after 1 iteration.
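A rough sketch of this auto-context style refinement; `context_features` is a hypothetical helper that turns the previous prediction into extra per-voxel features (e.g. distances to the predicted VMI, ILM and RPE, and local label statistics):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def iterative_refinement(appearance, labels, context_features, n_iterations=5):
    """Retrain the classifier several times, feeding the previous prediction back in.

    appearance       -- (n_voxels, n_features) eigenfeatures
    labels           -- (n_voxels,) training labels
    context_features -- callable: previous prediction -> (n_voxels, n_context) array
    """
    prediction = None
    for _ in range(n_iterations):
        if prediction is None:
            feats = appearance
        else:
            feats = np.hstack([appearance, context_features(prediction)])
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(feats, labels)
        prediction = clf.predict(feats)   # simplified: in practice, held-out
                                          # predictions would be used here
    return clf, prediction
```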
Results

Comparison with expert annotation
The pixel distance between the automatic ILM segmentation and the manual posterior vitreous boundary annotation was used as ground truth. This was compared to the pixel distance between the ILM and the automatic segmentation result.
The test set consists of 11 randomly chosen SD-OCT volumes.
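A small sketch of this comparison, assuming per-A-scan row indices for the ILM and for the manual and automatic posterior vitreous boundary (names are illustrative):

```python
import numpy as np

def boundary_error(ilm, manual_pvd, automatic_pvd):
    """Per-A-scan error between manual and automatic vitreous boundary.

    All inputs are arrays of row indices with one entry per A-scan.
    Distances are taken relative to the ILM, as in the evaluation above.
    """
    gt_distance = ilm - manual_pvd          # ground truth: ILM to manual boundary
    auto_distance = ilm - automatic_pvd     # ILM to automatic boundary
    error = auto_distance - gt_distance     # signed error in pixels
    return np.mean(np.abs(error)), error
```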
[Figure] Top: map of the distance between the posterior vitreous boundary and the inner limiting membrane. Left: 3D visualization of the posterior vitreous boundary (gray), ILM (green) and RPE (red).
[Figure] Top: logarithmic plot of manual vs. automatic segmentation. Far left: segmentation accuracy increase after context iterations. Left: segmentation error histogram after 5 iterations. Bottom: iterative segmentation refinement and final segmentation after 5 iterations.
Conclusion & Future Work
We have presented a method for the automatic segmentation of the posterior vitreous boundary in retinal optical coherence tomography using rotation-invariant eigenfeatures. By using an iterative refinement, the spatial context of the classes could be learned automatically from training data.
A similar approach has been used for retinal vessel segmentation in color fundus images (see [1]), showing that this approach can be applied to a variety of segmentation tasks.
The current system is limited to SD-OCT scans acquired with the Heidelberg Spectralis scanner; furthermore, the ILM and RPE segmentations used for training are computed automatically and are not guaranteed to be correct.
Manual annotations of scans from different vendors and of the ILM and RPE surfaces are currently being performed, which should overcome these limitations.
[1] Alessio Montuoro, Christian Simader, Georg Langs, Ursula Schmidt-Erfurth, "Rotation invariant eigenvessels and auto-context for retinal vessel detection," Proc. SPIE 9413, Medical Imaging 2015: Image Processing, 94131F (March 20, 2015); doi:10.1117/12.2081918.
[email protected]
Department of Ophthalmology | http://optima.meduniwien.ac.at
Financially supported by the Austrian Federal Ministry of Science, Research and Economy and the National Foundation for Research, Technology and Development.