Software developments in CT scanning

NB Jopson and CA Glasbey
A brief bit of history…
• We’ve come a long way…
– It took around a minute to reconstruct an image
– Some CT scanners used PDP-11s
– Images were stored on magnetic tapes or 8-inch floppy disks
– Every manufacturer used a proprietary image format
– A fast PC was a 386 with 4MB of RAM and a 40MB hard drive
Where we are today
• Only a few seconds to reconstruct an image
• Scanners use Unix or NT workstations
• Manufacturers have an agreed format (ACR-NEMA1, ACR-NEMA2, DICOM)
• A fast PC is 3GHz+, with 1GB+ RAM and 80GB+ of disk storage
Early issues
• How do we read and display the images?
– Image information
– Image header
• Finding image analysis systems not linked to expensive hardware (e.g. dedicated workstations, framegrabbers)
Specific issues with CT
• Absolute scale – the Hounsfield Unit (HU)
– Air = -1000 HU
– Water = 0 HU
• 12-bit greyscale images (4096 grey levels)
• Pixels are voxels – partial volume averaging
• Streak and ring artefacts, reference detectors, beam hardening
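The absolute HU scale comes from rescale parameters carried in the image header. As a minimal sketch (assuming Python with the pydicom and NumPy packages, and an illustrative file name), converting the stored 12-bit values to HU might look like this:

```python
import numpy as np
import pydicom

# Read one CT slice (file name is illustrative) and convert the stored
# 12-bit pixel values to Hounsfield Units using the DICOM rescale tags.
ds = pydicom.dcmread("slice_001.dcm")
raw = ds.pixel_array.astype(np.int16)

slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
hu = raw * slope + intercept          # now on the absolute HU scale

print(hu.min(), hu.max())             # air should sit near -1000 HU
```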
Image analysis aims
• Main issues are speed and automation
• Three main processes in collecting measurements
– Image enhancement
– Segmentation
– Processing
Image enhancement
• Not a big issue in CT
– Absolute unit
– Uniform ‘illumination’ over entire image
• Examples of use include:
– Microscopy or video images where illumination varies
– Particle separation through successive erosion/dilation cycles
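As a rough sketch of the erosion/dilation idea (assuming Python with SciPy's ndimage module; watershed methods are the more usual alternative), touching particles in a binary image can be split by eroding, labelling the fragments, and growing the labels back within the original mask:

```python
import numpy as np
from scipy import ndimage

def separate_particles(binary, n_cycles=3):
    """Erode a binary mask until touching particles split, label the
    fragments, then dilate the labels back toward the original footprint."""
    eroded = binary.copy()
    for _ in range(n_cycles):
        eroded = ndimage.binary_erosion(eroded)
    labels, n = ndimage.label(eroded)
    # Grow the labels back, but never outside the original mask.
    for _ in range(n_cycles):
        grown = ndimage.grey_dilation(labels, size=3)
        labels = np.where((labels == 0) & binary, grown, labels)
    return labels, n
```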
Segmentation
• Detection of boundaries
– Object from background
– Subgroups within the main object
• The most important issue in CT image analysis, e.g.
– Separation of animal from background
– Separation of muscle, fat and bone
– Separation of carcass from non-carcass
Processing the image
• The actual measurement step
– In CT, this mainly relates to pixel counting to calculate areas, and means and variances for the intensity values
– May include other statistical techniques, e.g. mixture distributions
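The measurement step is mostly counting pixels and summarising their intensities. A minimal sketch (Python/NumPy, with hypothetical array and parameter names) for one segmented tissue:

```python
import numpy as np

def region_stats(hu, mask, pixel_area_mm2):
    """Basic CT measurements for one segmented tissue:
    pixel count, area, and the mean/variance of the HU values."""
    values = hu[mask]
    return {
        "pixels": values.size,
        "area_mm2": values.size * pixel_area_mm2,
        "mean_hu": float(values.mean()),
        "var_hu": float(values.var()),
    }
```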
Software
• Developed in-house, e.g. Catman, CTTools, AutoCAT, STAR and others
• A variety of commercial and freeware packages, e.g. NIH Image (Scion Image), ImageJ, 3D-Doctor
• Software languages, toolkits and libraries
Segmentation: muscle, fat and bone
• Thresholds work very well for animals
• Classify pixels as belonging to one of the three classes by setting the boundaries
– Fat range: -200 to -18 HU
– Lean range: -17 to 120 HU
– Bone: 121 HU+
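Expressed as code, a minimal NumPy sketch of this threshold classification (using the HU boundaries quoted above) might be:

```python
import numpy as np

FAT_RANGE  = (-200, -18)   # HU boundaries quoted on the slide
LEAN_RANGE = (-17, 120)
BONE_MIN   = 121

def classify_tissue(hu):
    """Label each pixel 0 = background/other, 1 = fat, 2 = lean, 3 = bone."""
    labels = np.zeros(hu.shape, dtype=np.uint8)
    labels[(hu >= FAT_RANGE[0]) & (hu <= FAT_RANGE[1])] = 1
    labels[(hu >= LEAN_RANGE[0]) & (hu <= LEAN_RANGE[1])] = 2
    labels[hu >= BONE_MIN] = 3
    return labels
```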
Mixture distribution: f(x1, sd1, x2, sd2, p)
[Figure: fitted mixture density; annotations: K-means, truncation point]
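As an illustration of fitting the five-parameter mixture f(x1, sd1, x2, sd2, p), here is a minimal EM sketch in Python/NumPy (an assumption of this example: K-means, as labelled on the slide, is commonly used only to initialise such a fit):

```python
import numpy as np

def fit_two_normal_mixture(x, n_iter=100):
    """EM fit of f(x) = p*N(x1, sd1) + (1-p)*N(x2, sd2) to a 1-D
    sample of HU values (e.g. fat and lean pixels pooled together)."""
    x1, x2 = np.percentile(x, [25, 75])   # crude starting values
    sd1 = sd2 = x.std()
    p = 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each pixel
        d1 = p * np.exp(-0.5 * ((x - x1) / sd1) ** 2) / sd1
        d2 = (1 - p) * np.exp(-0.5 * ((x - x2) / sd2) ** 2) / sd2
        r = d1 / (d1 + d2 + 1e-300)
        # M-step: update the five parameters
        p = r.mean()
        x1 = np.average(x, weights=r)
        x2 = np.average(x, weights=1 - r)
        sd1 = np.sqrt(np.average((x - x1) ** 2, weights=r))
        sd2 = np.sqrt(np.average((x - x2) ** 2, weights=1 - r))
    return x1, sd1, x2, sd2, p
```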
Segmentation: carcass/non-carcass
• Not easy to automate
• Manual dissection on the screen
– Human eye is very good at pattern recognition
– Most labour intensive, but most flexible
• STAR software includes an automated system for specific image locations
– Dynamic programming to detect boundaries (sketched below)
Polar transformed image
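The idea of dynamic programming on a polar transformed image can be sketched roughly as follows. This is an illustrative minimal version, not the actual STAR code, and it assumes a cost image (radius by angle) in which low values mark the boundary:

```python
import numpy as np

def trace_boundary(cost, max_step=2):
    """Dynamic programming: pick one radius per angle so the summed cost
    is minimal and the boundary moves by at most `max_step` pixels
    between neighbouring angles."""
    n_r, n_a = cost.shape
    total = cost.astype(float).copy()
    back = np.zeros((n_r, n_a), dtype=int)
    for a in range(1, n_a):
        for r in range(n_r):
            lo, hi = max(0, r - max_step), min(n_r, r + max_step + 1)
            prev = total[lo:hi, a - 1]
            best = prev.argmin()
            total[r, a] += prev[best]
            back[r, a] = lo + best
    # Backtrack from the cheapest end point.
    path = np.empty(n_a, dtype=int)
    path[-1] = total[:, -1].argmin()
    for a in range(n_a - 2, -1, -1):
        path[a] = back[path[a + 1], a + 1]
    return path  # boundary radius for each angle
```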
Image density information
• Used to estimate physical density, so volume can be converted to a mass
• Used in estimation of CT tissue weights
• Can be an endpoint in itself, e.g. bone density
• Beam hardening is an issue here
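One simple way to turn a segmented volume into a mass is sketched below. It uses the common soft-tissue approximation density ≈ 1 + HU/1000 g/cm³ (an assumption of this example, and one that beam hardening will bias):

```python
import numpy as np

def tissue_mass_g(hu, mask, voxel_volume_cm3):
    """Convert a segmented tissue volume to a mass, using the common
    approximation that soft-tissue density (g/cm^3) ~ 1 + HU/1000."""
    density = 1.0 + hu[mask] / 1000.0
    return float(density.sum() * voxel_volume_cm3)
```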
Mixture distributions
• Up to this point, we have been assigning pixels to classes, or using mean values for a defined tissue
• What happens when all pixels are mixed?
• Bayesian approach probably best
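For a voxel that is a fat/lean mixture, one simple (non-Bayesian) illustration treats its HU value as a linear blend of the two pure-tissue means; a full Bayesian treatment would also model the noise and spatial context. A hypothetical sketch, with illustrative tissue means:

```python
import numpy as np

FAT_MEAN, LEAN_MEAN = -90.0, 60.0   # illustrative pure-tissue means (HU)

def fat_fraction(hu):
    """Fraction of fat in a mixed voxel, assuming its HU value is a
    linear blend of the two pure-tissue means."""
    frac = (LEAN_MEAN - hu) / (LEAN_MEAN - FAT_MEAN)
    return np.clip(frac, 0.0, 1.0)
```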
3D/CAD
• Most new CT scanners have spiral CT
– Image volumes easy to collect
• Programs like SolidWorks can be used to model the shape of a solid
• Virtual design and testing of, e.g., automated boning machines
Techniques
• Cavalieri
• Reduced scan sets for specific commercial applications
– Carcass
– Primal cuts
• Spiral datasets for 3D
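The Cavalieri method estimates a volume from a small set of equally spaced slices, which is what makes reduced scan sets workable. A one-function sketch (hypothetical example values in the comment):

```python
def cavalieri_volume_cm3(slice_areas_cm2, slice_spacing_cm):
    """Cavalieri estimator: volume ~ sum of the cross-sectional areas of
    equally spaced slices, multiplied by the spacing between them."""
    return sum(slice_areas_cm2) * slice_spacing_cm

# e.g. three carcass slices 4 cm apart:
# cavalieri_volume_cm3([210.0, 245.0, 230.0], 4.0) -> 2740.0 cm^3
```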
Summary
• A number of good software programs have been developed to deal with our current image analysis needs
– More automation desirable
• Most of the software performs quite basic tasks
• Plenty of scope for new developments
– Better segmentation procedures
– Use of 3D capability