L. Yaroslavsky. Fundamentals of Digital Image Processing. Course 0555.3230

Transcription

Lecture 1. IMAGING AND IMAGING DEVICES
Vision in Live Creatures
First artificial imaging systems and techniques
- Magnifying glass and spectacles
- Painting art
- Camera-obscura (pinhole camera)
Optical microscope and telescope
Photography
X-ray imaging
Radiography
Electron microscopy
Electronic television
Acoustic imaging
Scanned proximity probe microscopes
Linear tomography and laminography
Interferometry; fringe methods of active vision
Crystallography
Coded Aperture (multiplexing techniques)
Synthetic aperture radars
Computed tomography
Magnetic resonance imaging
Holography
Digital holography and image processing
Homework:
- List and briefly describe biomedical image processing applications you know
- Find and briefly describe imaging devices other than those described in the lecture
References
1. "History of the Light Microscope" (http://www.utmem.edu/personal/thjones/hist/hist_mic.htm)
2. E. Ruska, The Development of the Electron Microscope and of Electron Microscopy, Nobel Lecture,
Dec. 8, 1986.
3. G. Binning, H. Rohrer, Physica ,127B, 37, 1984
4. R. Wiesendanger and H.-J. Güntherodt, Introduction, Scanning Tunneling Microscopy I, General
Principles and Applications to Clean and Adsorbate-Covered Surfaces, Springer Verlag, Berlin, 1994
5. http://stm2.nrl.navy.mil/how-afm/how-afm.html
6. R. Bracewell, Two-dimensional Imaging, Prentice Hall, 1995
7. L.P. Yaroslavsky, The Theory of Optimal Methods for Localization of Objects in Pictures, In:
Progress in Optics, Ed. E. Wolf, v.XXXII, Elsevier Science Publishers, Amsterdam, 1993
8. D. Gabor, A New Microscopic Principle, Nature, v. 161, 777-778, 1948, Nobel Prize
9. E.N. Leith, J. Upatnieks, New techniques in Wavefront Reconstruction, JOSA, v. 51, 1469-1473,
1961
10. Yu. N. Denisyuk, Photographic Reconstruction of the Optical Properties of an Object in its Own
Scattered Radiation Field, Dokl. Akad. Nauk SSSR, v. 144, 1275-1279, 1962
11. L. Yaroslavsky, N. Merzlyakov, Methods of Digital Holography, Plenum Press, N.Y., 1980
12. L. Yaroslavsky, M. Eden, Fundamentals of Digital Optics, Birkhauser, Boston, 1995
13. L. Yaroslavsky, From Photo-graphy to *-graphies, Lecture notes,
http://www.eng.tau.ac.il/~yaro/lectnotes/index.html
14. L. Yaroslavsky, Digital Holography and Image Processing, Kluwer, Boston, 2003
Vision in Live Creatures
The first imaging systems were invented by Nature.
A range of invertebrate eyes: (a) nautilus pinhole eye; (b) marine snail; (c) bivalve mollusc; (d) abalone; (e) ragworm
Cup eyes from around the animal kingdom: (a) flatworm; (b) bivalve mollusc; (c) polychaete worm; (d) limpet
(From: Richard Dawkins, Climbing Mount Improbable, W. W. Norton & Co., New York, 1998)
Fly eye
Optics and nervous system of the human eye
(From: J. S. Lim, Two-dimensional Signal and Image Processing, Prentice Hall, Englewood Cliffs, N.J., 1990)
Eye retina: rods and cones
(Adapted from H. Hofer, D. R. Williams, The Eye's Mechanisms for Autocalibration, Optics and Photonics News, January 2002, p. 34-39, and R. C. Gonzalez, R. E. Woods, Digital Image Processing, Prentice Hall, 2002)
Density of cones in the area of the highest acuity (fovea) is ~100,000 elements/mm². The number of cones in this area is about 300,000. The resolving power of human vision
is about 1'.
Natural imaging systems (vast majority)
- are discrete
- are shift invariant
- involve image processing (object tracking, stereo image processing, color constancy,
etc.)
The very first artificial imaging devices:
Magnifying glass and spectacles (Graeco-Roman times). Pliny the Elder (23-79 A.D.) wrote:
"Emeralds are usually concave so that they may concentrate the visual rays.
The Emperor Nero used to watch in an Emerald the gladiatorial combats."
Spectacles were invented (re-invented) around 1280-1285 in Florence, Italy. It is
uncertain who the inventor was; some give credit to a nobleman named Amati
(Salvino degli Armati, 1299). It has been said that he made the invention, but told
only a few of his closest friends. (adapted from [1])
(Figure: object, image, image plane)
Painting art
A woodcut by Albrecht Dürer showing the relationship between a scene, a center of
projection and the picture plane
Camera obscura (pinhole camera) (Ibn al-Haytham, X century):
Images were regarded as point-by-point projections of objects onto an image plane.
Invention of optical microscope.
The inventor of the optical microscope is not known. Credit for the first microscope is
usually given to the Dutch spectacle-maker Joannes Jansen of Middleburg, Holland, and
his son Zacharias. While experimenting with several lenses in a tube, they discovered
(around the year 1595) that nearby objects appeared greatly enlarged (partly adapted
from [1]). That was the forerunner of the compound microscope and of the telescope.
The father of microscopy, Anthony van Leeuwenhoek of Holland (1632-1723), started
as an apprentice in a dry goods store where magnifying glasses were used to count the
threads in cloth. He taught himself new methods for grinding and polishing tiny lenses
of great curvature which gave magnifications up to 270, the finest known at that time.
These led to the building of his microscopes and the biological discoveries for which
he is famous. He was the first to see and describe bacteria, yeast plants, the teeming
life in a drop of water, and the circulation of blood corpuscles in capillaries.
Microscope of Hooke (R. Hooke, Micrographia, 1665)
Robert Hooke, the English father of microscopy, re-confirmed Anthony van
Leeuwenhoek's discoveries of the existence of tiny living organisms in a drop of water.
Hooke made a copy of Leeuwenhoek's microscope and then improved upon his design.
Modern Zeiss microscope
Telescope (Zacharias Joannides Jansen of Middleburg,
1590)
In 1609, Galileo, father of modern physics and astronomy,
heard of these early experiments, worked out the principles
of lenses, and made a much better instrument with a
focusing device.
Huygens (“Dioptrica, de telescopiis”) held the view that
only a superhuman genius could have invented the telescope
on the basis of theoretical considerations, but the frequent
use of spectacles and lenses of various shapes over a period
of 300 years contributed to its chance invention.
The scientific impetus produced by the great discoveries made with the telescope can
be gauged from the enthusiastic manner in which Huygens in the "Dioptrica" speaks of
these discoveries. He describes how Galileo was able to see the mountains and valleys
of the moon, to observe sun-spots and determine the rotation of the sun, to discover
Jupiter's satellites and the phases of Venus, to resolve the Milky Way into stars, and to
establish the differences in apparent diameter of the planets and fixed stars (after
E. Mach, The Principles of Physical Optics, Dover Publ., 1926).
(Figures: Newton's telescope; refractor; Hubble space telescope)
Photographic camera: a revolutionary step
In the 19th century, scientists began to explore ways of "fixing" the image
thrown by a glass lens (N. Niépce, 1826; L. Daguerre, 1836; W. H. F. Talbot, 1844).
The first method of light writing was developed by the French commercial
artist Louis Jacques Mandé Daguerre (1787-1851). The daguerreotype was made on a
sheet of silver-plated copper, which could be inked and then printed to produce
accurate reproductions of original works or scenes. The surface of the copper was
polished to a mirrorlike brilliance, then rendered light sensitive by treatment with
iodine fumes. The copper plate was then exposed to an image sharply focused by the
camera's well-ground, optically correct lens. The plate was removed from the camera
and treated with mercury vapors to develop the latent image. Finally, the image was
fixed by removal of the remaining photosensitive salts in a bath of hyposulfite and
toned with gold chloride to improve contrast and durability. Color, made of powdered
pigment, was applied directly to the metal surface with a finely pointed brush.
Daguerre's attempt to sell his process (the daguerreotype) through licensing
was not successful, but he found an enthusiastic supporter in François Arago, an
eminent member of the Académie des Sciences in France. Arago recommended that
the French government compensate Daguerre for his considerable efforts, so that the
daguerreotype process could be placed at the service of the entire world. The French
government complied, and the process was publicized by F. Arago at a meeting of
L'Institut, Paris, on August 19, 1839, as a gift to the world from France.
Astronomers were among the first to employ the new imaging technique. In
1839-1840, John W. Draper, professor of chemistry at New York University, made
the first photographs of the Moon, in the first application of daguerreotypes to
astronomy. The photoheliograph, a device for taking telescopic photographs of the
sun, was unveiled in 1854.
Fundamental innovation:
Imaging optics was supplemented with photosensitive recording material. Image
formation and image display were separated. The photographic plate/film combines
three functions: image recording, image storage and image display.
Fast progress of photographic techniques
In 1840, optical means were used to reduce daguerreotype exposure times to 3-5 min.
In 1841, William Henry Fox Talbot patented a new process involving the creation of
paper negatives. By the end of the 19th century, photography had become an important
means of scientific research and also a commercial item that entered people's everyday
life. It retained this status until very recently.
X-ray imaging
The next milestones in the evolution of imaging techniques were X-ray imaging and
radiography.
X-rays were discovered by Wilhelm Conrad Röntgen on Nov. 8, 1895, at the Institute
of Physics, University of Würzburg, while he was experimenting with cathode rays
(the first Nobel Prize in Physics, 1901).
Wilhelm Conrad Röntgen (1845 - 1923)
One of the first medical X-ray images (a
hand with small shots in it)
Fluorography 1907
Fluorography 2000
Photography had played a decisive role in
the discovery of radioactivity as well.
In 1896, Antoine Henri Becquerel
accidentally discovered radioactivity
while investigating phosphorescence in
uranium salts.
This discovery eventually led, along with others, to new imaging techniques such as
radiography.
Modern gamma-camera:
Gamma-ray collimator + gamma-ray-to-light converter + photosensitive array + CRT
as a display
The collimator separates rays coming from different object points; this is achieved at
the expense of energy losses.
Next milestone: electronic imaging.
Electron microscopy (1931, Ernst Ruska; Nobel Prize, 1986)
Electron optics + luminescent screen or electron sensitive array + CRT display
Scanning electron microscope image (from
http://www.sst.ic.ac.uk/intro/AFM.htm )
Transmission Electron Microscope:
Atoms of gold (Au clusters) on MoS2.
Electronic television
A bit of history
~1910, Boris Lvovich Rosing, St. Petersburg, Russia, suggested CRT as a display device
~1920-25, Rosing's student, Vladimir Kozmich Zworykin (1889-1982) – iconoscope & kinescope
Jan. 1929, V. Zworykin met David Sarnoff of RCA and obtained financial support from him
~1935: first regular TV broadcasting, Britain and USA
An important step: image discretization.
Originally it was dictated by the need to transmit 2-D images over 1-D
communication channels
Modern CCD and CMOS cameras
Radar (~1935), Sonar:
beam forming antenna + space scanning mechanism + CRT as a display
Acoustic microscope (1950s; after R. Bracewell, Two-dimensional Imaging, Prentice Hall, 1995):
(Figure: electric oscillator; receiver; piezo-electric transducer (niobium titanate); sapphire (Al2O3) rod; movable specimen immersed in a liquid)
A monochromatic sound pulse can be focused to a point on the solid surface of an
object by a lens (a sapphire rod), and the reflection will return to the lens to be
gathered by a receiver. The strength of the reflection depends on the acoustical
impedance looking into the solid surface relative to the impedance of the propagating
medium. If the focal point performs a raster scan over the object, a picture of the
surface impedance is formed. The acoustic impedance of a medium depends on its
density and elastic rigidity. Acoustic energy that is not reflected at the surface but
enters the solid may be only lightly attenuated and then reflect from surface
discontinuities to reveal an image of the invisible interior. With such a device,
resolution comparable to that of optical microscopy can be achieved. A major
application is in the semiconductor industry for inspecting integrated circuits.
The idea of focusing an acoustic beam was originally suggested by Rayleigh. The
application of scanning acoustic microscopes goes back to 1950.
A scanning optical microscope can also be made on the same principle. It has value as
a means of imaging an extended field without aberrations associated with a lens.
Scanned-proximity probe (SPP) microscopes.
SPP- microscopes work by measuring a local property - such as height, optical
absorption, or magnetism - with a probe or "tip" placed very close to the sample. The
small probe-sample separation (on the order of the instrument's resolution) makes it
possible to take measurements over a small area. To acquire an image the microscope
raster-scans the probe over the sample while measuring the local property in question.
Scanned-probe systems do not use lenses, so the size of the probe rather than
diffraction effects generally limits their resolution.
Scanning tunneling microscope (1980s; Nobel Prize, 1986)
Schematic of the physical principle and initial technical realization of the Scanning
Tunneling Microscope. (a) shows the apex of the tip (left) and the sample surface
(right) at a magnification of about 10^8. The solid circles indicate atoms, the dotted
lines electron density contours. The path of the tunnel current is given by the arrow.
(b) Scaled down by a factor of 10^4. The tip (left) appears to touch the surface (right).
(c) STM with rectangular piezo drive X, Y, Z of the tunnel tip at left and "louse" L
(electrostatic "motor") for rough positioning (µm to cm range) of the sample S (from
G. Binning, H. Rohrer, Physica 127B, 37, 1984).
Scanning tunneling microscope image of a silicon surface. The image shows two
single-layer steps (the jagged interfaces) separating three terraces. Because of the
tetrahedral bonding configuration in the silicon lattice, dimer row directions are
orthogonal on terraces joined by a single-layer step. The area pictured is 30x30 nm.
A conductive sample and a sharp metal tip, which acts as a local probe, are brought
within a distance of a few ångstroms, resulting in a significant overlap of the
electronic wave functions (see the figure). With an applied bias voltage (typically
between 1 mV and 4 V), a tunneling current (typically between 0.1 nA and 10 nA) can
flow from the occupied electronic states near the Fermi level of one electrode into the
unoccupied states of the other electrode. By using a piezo-electric drive system for the
tip and a feedback loop, a map of the surface topography can be obtained. The
exponential dependence of the tunneling current on the tip-to-sample spacing has
proven to be the key to the high spatial resolution which can be achieved with the
STM. Under favorable conditions, a vertical resolution of hundredths of an ångstrom
and a lateral resolution of about one ångstrom can be reached. Therefore, the STM can
provide real-space images of surfaces of conducting materials down to the atomic
scale. (From R. Wiesendanger and H.-J. Güntherodt, Introduction, Scanning Tunneling Microscopy I,
General Principles and Applications to Clean and Adsorbate-Covered Surfaces, Springer Verlag,
Berlin, 1994)
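A rough numerical illustration of this exponential dependence (the decay constant below is a typical textbook figure for a metal work function of a few eV, not a value quoted in the lecture): the tunneling current varies with the gap width $d$ approximately as
$$ I \propto V \exp(-2\kappa d), \qquad \kappa \approx 1\ \mathrm{\AA}^{-1}, $$
so reducing the gap by just 1 Å increases the current by a factor of about $e^{2} \approx 7.4$, close to an order of magnitude. This steepness is what lets the feedback loop hold the tip height to within hundredths of an ångstrom.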
Atomic force microscope (after http://stm2.nrl.navy.mil/how-afm/how-afm.html).
The atomic force microscope is one of about two dozen types of scanned-proximity
probe microscopes.
Figure 1. Concept of AFM and the optical lever: (left) a cantilever touching a sample;
(right) the optical lever. Scale drawing; the tube scanner measures 24 mm in diameter,
while the cantilever is 100 µm long.
AFM operates by measuring attractive or repulsive forces between a tip and
the sample. In its repulsive "contact" mode, the instrument lightly touches a tip at the
end of a leaf spring or "cantilever" to the sample. As a raster-scan drags the tip over
the sample, some sort of detection apparatus measures the vertical deflection of the
cantilever, which indicates the local sample height. Thus, in contact mode the AFM
measures hard-sphere repulsion forces between the tip and sample. In noncontact
mode, the AFM derives topographic images from measurements of attractive forces;
the tip does not touch the sample.
AFMs can achieve a resolution of 10 pm, and unlike electron microscopes, can
image samples in air and under liquids. To achieve this, most AFMs today use the
optical lever. The optical lever (Figure 1) operates by reflecting a laser beam off the
cantilever. Angular deflection of the cantilever causes a twofold larger angular
deflection of the laser beam. The reflected laser beam strikes a position-sensitive
photodetector consisting of two side-by-side photodiodes. The difference between the
two photodiode signals indicates the position of the laser spot on the detector and thus
the angular deflection of the cantilever. Image acquisition time is about one minute.
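A back-of-the-envelope illustration of the optical lever's gain (the cantilever length is taken from Figure 1; the cantilever-to-detector distance is an assumed, typical value): a vertical deflection $\Delta z$ of a cantilever of length $L$ tilts its end by roughly $\theta \approx \Delta z / L$, the reflected beam turns by $2\theta$, and over a distance $D$ to the photodetector the spot moves by
$$ \Delta s \approx 2\theta D = \frac{2D}{L}\,\Delta z . $$
With $L = 100\ \mu\mathrm{m}$ and an assumed $D = 5\ \mathrm{cm}$, a 1 nm cantilever deflection moves the laser spot by about 1 µm, a mechanical amplification of roughly 1000, which the split photodiode easily resolves.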
2.5 x 2.5 nm simultaneous topographic and
friction image of highly oriented pyrolytic graphite
(HOPG). Each bump represents one carbon atom.
As the tip moves from right to left, it bumps into
an atom and gets stuck behind it. The scanner
continues to move and lateral force builds up until
the tip slips past the atom and sticks behind the
next one.
“Steps”
Atomic force microscope, University of Konstanz, May 1991
The ability of AFM to image at atomic resolution, combined with its ability to image
a wide variety of samples under a wide variety of conditions, has created a great deal
of interest in applying it to the study of biological structures. Images have appeared in
the literature showing DNA, single proteins, structures such as gap junctions, and
living cells.
Linear tomography (~1930s)
(Figure: moving X-ray point source; object layers O1, O2, O3; focal plane; images 1-3; moving stage with an X-ray sensor)
Schematic diagram of linear tomography. Due to the synchronous movement of the
X-ray source and the X-ray sensor, a certain plane cross-section of the object is always
projected onto the same place on the sensor, while others are projected with a
displacement and therefore appear blurred in the resulting image.
Application in dentistry
Laminography
The principle of laminography (http://lca.kaist.ac.kr/Research/2000/X_lamino.html)
An X-ray point source moving in the source plane over a circular trajectory projects
the object onto the X-ray detector plane. The detector moves synchronously with the
source in such a way that a specific object layer is projected onto the same place on
the detector array for any position of the source. The plane of this selected layer is
called the "focal plane". Projections of other object layers located above or beneath
the "focal plane" will, for different positions of the source, be displaced. Therefore, if
one sums up all projections obtained for different positions of the source, projections
of the focal-plane layer will accumulate coherently, producing a sharp image of this
layer, while other layers, projected with different displacements in different
projections, will produce a blurred background image. The more projections are
available, the lower the contribution of this background to the high-frequency
components of the output image.
Illustration of restoration of different layers of a printed circuit board
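The shift-and-add principle described above is easy to check numerically. Below is a minimal sketch in Python/NumPy (an illustrative toy model with an assumed parallel geometry, object and shift values, not code from the course):

import numpy as np

rng = np.random.default_rng(0)
n_layers, size = 5, 64
obj = (rng.random((n_layers, size, size)) < 0.02).astype(float)  # sparse 3-D test object (assumed)

focal = 2                                  # index of the layer to be kept sharp ("focal plane")
shifts = [-4, -2, 0, 2, 4]                 # detector displacement per source position, in pixels

acc = np.zeros((size, size))
for s in shifts:
    proj = np.zeros((size, size))
    for z in range(n_layers):
        # layers away from the focal plane are displaced in proportion
        # to their distance from it (simple parallel-geometry model)
        proj += np.roll(obj[z], s * (z - focal), axis=1)
    acc += proj
acc /= len(shifts)
# In `acc` the focal layer adds up coherently, while the other layers are
# smeared over several columns, i.e. appear as a blurred background.

Increasing the number of source positions spreads the out-of-focus layers over more positions, which is exactly the statement above about the background contribution decreasing with the number of projections.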
Optical interferometry and moiré (fringe) techniques
(Figure: semitransparent mirrors, mirrors, photosensitive medium)
Schematic diagram of shape measurement by means of structured light illumination
(1 - fringe image; 2 - image sensor; 3 - illumination source; 4 - support; 5 - object)
Schematic diagram of optical interferometry
Object’s profile
Interferograms without (left) and with (right) spatial carrier
2. TRANSFORM IMAGING TECHNIQUES
The main advantages of direct image plane imaging:
- It allows generating images that can be directly perceived by human vision
- It allows direct interpretation of a priori knowledge about images in terms of
knowledge about objects
Fundamental drawbacks of direct image plane imaging techniques:
- They require access to individual points (locations) of objects
- They require high sensitivity of the sensor to the available signal energy
Probably the very first example of an indirect imaging method was X-ray
crystallography (Max von Laue, 1912; Nobel Prize, 1914).
In 1912, Max von Laue and two students (Walter Friedrich and Paul Knipping)
demonstrated the wave nature of X-rays and the periodic structure of crystals by
observing the diffraction of X-rays from crystals of zinc sulfide. The discovery of
X-ray diffraction had a decisive influence on the development of physics and biology
in the XX century. One of the most remarkable scientific achievements based on
X-ray crystallography was the discovery by J. Watson and F. Crick of the double-helix
structure of DNA (1953; Nobel Prize, 1962).
"Coded" aperture (multiplexing) techniques (1970s)
The pinhole camera (camera obscura) has a substantial advantage over lenses: it has
infinite depth of field, and it does not suffer from chromatic aberration. Because it
does not rely on refraction, the pinhole camera can be used to form images from X-ray
and other high-energy sources, which are normally difficult or impossible to focus.
(Figure: source of irradiation; pinhole camera; image plane detector a(x, y))
The biggest problem with pinholes is that they let very little light through to the film
or other detector. This problem can be overcome to some degree by making the hole
larger, which unfortunately leads to a decrease in resolution. The smallest feature
which can be resolved by a pinhole is approximately the same size as the pinhole
itself. The larger the hole, the more blurred the image becomes. Using multiple, small
pinholes might seem to offer a way around this problem, but this gives rise to a
confusing montage of overlapping images. Nonetheless, if the pattern of holes is
carefully chosen, it is possible to reconstruct the original image with a resolution
equal to that of a single hole.
(Figure: coding mask m(x, y); detector array b(x, y))
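A hedged numerical sketch of the multiplexing idea described above, in Python/NumPy (the random binary mask and the simple balanced decoder are illustrative assumptions; practical systems use specially designed hole patterns such as uniformly redundant arrays):

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
size = 64
obj = np.zeros((size, size)); obj[20:28, 30:34] = 1.0        # toy radiating object a(x, y)

mask = (rng.random((size, size)) < 0.5).astype(float)        # coding mask m(x, y): pattern of open holes
detector = fftconvolve(obj, mask, mode="same")                # b(x, y): overlapping pinhole images

decoder = 2.0 * mask - 1.0                                    # simple balanced decoding pattern
recon = fftconvolve(detector, decoder[::-1, ::-1], mode="same")  # decoding by correlation
# `recon` peaks where the object is bright; the attainable resolution is
# set by the size of a single hole, as stated above.

For a random mask the reconstruction is only approximate; carefully chosen patterns make the correlation of the mask with its decoding pattern close to a delta function, which is what allows recovery with a resolution equal to that of a single hole.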
Transform imaging:
Synthetic aperture radar (C. Wiley, USA, 1951):
Side-looking radar: direct imaging in the "range" coordinate and transform imaging in
the "azimuth" coordinate
Radar map of Venus
If the thick clouds covering Venus were removed, how would the surface appear?
Using an imaging radar technique, the Magellan spacecraft was able to lift the veil
from the Face of Venus and produce this spectacular high resolution image of the
planet's surface. Red, in this false-color map, represents mountains, while blue
represents valleys. This 3-kilometer resolution map is a composite of Magellan
images compiled between 1990 and 1994. Gaps were filled in by the Earth-based
Arecibo Radio Telescope. The large yellow/red area in the north is Ishtar Terra
featuring Maxwell Montes, the largest mountain on Venus. The large highland regions
are analogous to continents on Earth. Scientists are particularly interested in exploring
the geology of Venus because of its similarity to Earth.
Principles of reconstructive tomography
The x-ray-based computerized tomography (CT) was introduced by Hounsfield in
1973 (Nobel Prize, 1979).
(Figure: a parallel X-ray beam illuminates the object Obj(x, y); the projection Proj(ϑ, ξ) is measured by an X-ray sensitive line array)
Schematic diagram of parallel beam projection tomography
In computed tomography, a set of the object's projections taken at different observation
angles is measured and used for subsequent reconstruction of the object.
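A minimal numerical sketch of this measure-then-reconstruct cycle (a toy parallel-beam model in Python/NumPy with SciPy; the object, the angle set and the unfiltered back-projection are illustrative simplifications, not the course's reconstruction algorithm):

import numpy as np
from scipy.ndimage import rotate

obj = np.zeros((128, 128)); obj[40:80, 50:90] = 1.0          # toy object Obj(x, y)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)          # observation angles theta
projections = [rotate(obj, a, reshape=False, order=1).sum(axis=0)   # Proj(theta, xi)
               for a in angles]

recon = np.zeros_like(obj)
for a, p in zip(angles, projections):
    smear = np.tile(p, (obj.shape[0], 1))                     # spread each projection back along its rays
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)
# `recon` is a blurred estimate of Obj(x, y); filtering each projection with a
# ramp filter before back-projection (filtered back-projection) sharpens it.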
Schematic diagram of micro-tomography
Surface rendering of a fly head reconstructed using a SkyScan micro-CT scanner
Model L1072 (Advanced imaging, July 2001, p. 22)
Magnetic resonance (nuclear magnetic resonance, NMR; MRI) tomography
(Figure: magnet and "gradient" coils producing a strong magnetic field; RF inductor and sensor surrounding the object; RF impulse generator; RF receiver; reconstruction and display device)
Schematic diagram of NMR imaging
MRI is based on the principles of nuclear magnetic resonance (NMR), a spectroscopic
technique used by scientists to obtain microscopic chemical and physical information
about molecules. NMR is an effect observed when an atomic nucleus is exposed to
radio waves in the presence of a magnetic field. A strong magnetic field causes the
magnetic moment of the nucleus to precess around the direction of the field, only
certain orientations being allowed by quantum theory. A transition from one
orientation to another involves the absorption or emission of a photon, the frequency
of which is equal to the precessional frequency. With the magnetic field strengths
customarily used, the radiation is in the radio-frequency band. If radio-frequency
radiation is supplied to the sample from one coil and is detected by another coil, while
the magnetic field strength is slowly changed, radiation is absorbed at certain field
values, which correspond to the frequency difference between orientations. An NMR
spectrum consists of a graph of field strength against detector response. This provides
information about the structure of molecules and the positions of electrons within
them, as the orbital electrons shield the nuclei and cause them to resonate at different
field strengths. (The Macmillan Encyclopedia 2001, © Market House Books Ltd 2000)
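As a numerical check that the radiation indeed falls in the radio-frequency band (standard textbook figures, not values taken from the lecture): the precession (Larmor) frequency is
$$ f = \frac{\gamma}{2\pi}\, B, $$
and for hydrogen nuclei $\gamma/2\pi \approx 42.58$ MHz/T, so a typical clinical field of $B = 1.5$ T gives $f \approx 64$ MHz, squarely in the radio-frequency band.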
Magnetic resonance imaging (MRI) is an imaging technique used primarily in medical
settings to produce high quality images of the inside of the human body. MRI started
out as a tomographic imaging technique that produced an image of the NMR signal in
a thin slice through the human body. MRI has since advanced beyond a tomographic
imaging technique to a volume imaging technique.
The brief history of MRI
Felix Bloch and Edward Purcell, both of whom were awarded the Nobel Prize
in 1952, discovered the magnetic resonance phenomenon independently in 1946. In
the period between 1950 and 1970, NMR was developed and used for chemical and
physical molecular analysis.
In 1971 Raymond Damadian showed that the nuclear magnetic relaxation
times of tissues and tumors differed, thus motivating scientists to consider magnetic
resonance for the detection of disease.
In 1973 the x-ray-based computerized tomography (CT) was introduced by
Hounsfield. This date is important to the MRI timeline because it showed hospitals
were willing to spend large amounts of money for medical imaging hardware.
Magnetic resonance imaging was first demonstrated on small test tube samples that
same year by Paul Lauterbur. He used a back projection technique similar to that used
in CT.
In 1975 Richard Ernst proposed magnetic resonance imaging using phase and
frequency encoding, and the Fourier Transform. This technique is the basis of current
MRI techniques. A few years later, in 1977, Raymond Damadian demonstrated MRI
of the whole body. In this same year, Peter Mansfield developed the echo-planar
imaging (EPI) technique. This technique will be developed in later years to produce
images at video rates (30 ms / image).
By 1986, the imaging time was reduced to about five seconds, without
sacrificing too much image quality. The same year people were developing the NMR
microscope, which allowed approximately 10 µm resolution on approximately one cm
samples. In 1987 echo-planar imaging was used to perform real-time movie imaging
of a single cardiac cycle. In this same year Charles Dumoulin was perfecting magnetic
resonance angiography (MRA), which allowed imaging of flowing blood without the
use of contrast agents.
In 1991, Richard Ernst was rewarded for his achievements in pulsed Fourier
Transform NMR and MRI with the Nobel Prize in Chemistry. In 1993, functional MRI
(fMRI) was developed. This technique allows the mapping of the function of the
various regions of the human brain. Six years earlier, many clinicians had thought that
echo-planar imaging's primary application would be real-time cardiac imaging. The
development of fMRI opened up a new application for EPI in mapping the regions of
the brain responsible for thought and motor control.
In 2003, the Nobel Prize in Physiology or Medicine was awarded to Paul C.
Lauterbur and Sir Peter Mansfield for their discoveries concerning magnetic
resonance imaging.
Holography
The invention of holography by D. Gabor was motivated by the desire to improve the
resolving power of the electron microscope, which was limited by fundamental
limitations of electron optics. The term "holography" originates from the Greek word
"holos" (ὅλος, "whole"). By this, the inventor of holography intended to emphasize
that in holography the full information about the light wave, both amplitude and
phase, is recorded by means of interference of two beams, the object and the reference
one. Since sources of coherent electron radiation were not available at that time,
Gabor carried out model optical experiments to demonstrate the feasibility of the
method. However, powerful sources of coherent light were also not available at the
time, and holography remained an "optical paradox" until the invention of lasers. The
very first implementations of holography were demonstrated in 1961-1962 by the
radio engineers E. Leith and J. Upatnieks and by the optician Yu. Denisyuk.
- D. Gabor, A New Microscopic Principle, Nature, v. 161, 777-778, 1948 (Nobel Prize)
- E.N. Leith, J. Upatnieks, New Techniques in Wavefront Reconstruction, JOSA, v. 51, 1469-1473, 1961
- Yu. N. Denisyuk, Photographic Reconstruction of the Optical Properties of an Object in its Own Scattered Radiation Field, Dokl. Akad. Nauk SSSR, v. 144, 1275-1279, 1962
The basic principle of holography is illustrated in the figure.
(Figure: a source of coherent light illuminates the object; the object beam $A_{obj}\exp(i2\pi\Phi_{obj})$ and, via a mirror, the reference beam $A_{ref}\exp(i2\pi\Phi_{ref})$ interfere on the recording medium)
Recording hologram:
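Written out with the notation above, the recorded intensity is the standard two-beam interference expression (stated here for completeness):
$$ H = \left| A_{obj} e^{i2\pi\Phi_{obj}} + A_{ref} e^{i2\pi\Phi_{ref}} \right|^{2} = A_{obj}^{2} + A_{ref}^{2} + 2 A_{obj} A_{ref} \cos\!\big(2\pi(\Phi_{obj} - \Phi_{ref})\big), $$
so both the amplitude and the phase of the object beam are encoded in the contrast and in the position of the interference fringes.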
Reconstructing hologram:
(Figure: the source of coherent light and a mirror recreate the reference beam, which illuminates the hologram; the scattered reference beam reproduces the object beam, forming a "real" and an "imaginary" virtual object)
Schematic diagram of hologram reconstruction
Reflection (Denisyuk type) hologram
(Figure: laser beam; mirror; reference beam; object; object beam; photographic plate)
Schematic diagram of hologram recording
(Figure: point white light source; hologram; virtual object)
Hologram playback
Digital holography: synthesis and analysis of holograms by digital processing
Synthesis:
(Figure: illumination; light scattered by the object; observation surface)
Scheme of object visual observation
Mathematical model of the signal – computation of the hologram – recording of the synthesized hologram
(Figure: point source of coherent light; computer-generated hologram)
Scheme for visual observation of a computer-generated hologram
Recording a computer-generated hologram is a specific digital-to-analog conversion process.
Digital reconstruction of holograms. Holographic microscopy
(Figure: hologram sensor; computer performing hologram reconstruction and image processing)
Optical hologram and its digital reconstruction (L. Yaroslavsky, N. Merzlyakov, Methods of
Digital Holography, Consult. Bureau, New York, 1980)
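A minimal sketch in Python/NumPy of what the computer does in such a scheme, for the common off-axis geometry (the simulated object, the carrier frequency and the sideband filter width are illustrative assumptions, not the method of the cited book):

import numpy as np

N = 256
x = np.arange(N)[None, :]                                     # horizontal pixel coordinate

obj_amp = np.zeros((N, N)); obj_amp[100:156, 110:150] = 1.0   # toy object amplitude
obj_wave = obj_amp * np.exp(1j * 0.3 * np.pi * obj_amp)       # object beam A_obj*exp(i*phi_obj)

fc = 48                                                       # carrier frequency, cycles per frame
ref_wave = np.exp(2j * np.pi * fc * x / N)                    # tilted (off-axis) reference beam

hologram = np.abs(obj_wave + ref_wave) ** 2                   # intensity recorded by the hologram sensor

spectrum = np.fft.fftshift(np.fft.fft2(hologram))
cx = N // 2 - fc                                              # column of the sideband carrying the object term
sideband = np.zeros_like(spectrum)
sideband[:, cx - 20:cx + 20] = spectrum[:, cx - 20:cx + 20]   # crude sideband filter
recon = np.fft.ifft2(np.fft.ifftshift(np.roll(sideband, fc, axis=1)))
# |recon| approximates the object amplitude; np.angle(recon) carries its phase.

The zero order and the conjugate (twin) image occupy other parts of the spectrum, which is why the off-axis carrier of Leith and Upatnieks makes this numerical separation possible.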
(Figure: laser; beam spatial filter; collimator; object table; microscope lens; digital photographic camera; computer)
Digital holographic microscopy
Digital processing of optical and similar signals:
New qualities that are brought to imaging systems by digital computers and
processors:
- flexibility and adaptability. The most substantial advantage of digital computers as
compared with analog electronic and optical information processing devices is
that no hardware modifications are necessary to reprogram digital computers to
solve different tasks. With the same hardware, one can build an arbitrary
problem solver by simply selecting or designing an appropriate code for the
computer. This feature makes digital computers also an ideal vehicle for
processing optical signals adaptively since, with the help of computers, they can
adapt rapidly and easily to varying signals, tasks and end user requirements.
- digital computers integrated into imaging systems enable them to perform not only
element-wise and integral signal transformations, such as the spatial and temporal
Fourier analysis, signal convolution and correlation that are characteristic of
analog optics, but any operations needed (see the sketch after this list). This
removes the major limitation of optical information processing and makes optical
information processing, integrated with digital signal processing, virtually
unlimited in scope.
- acquiring and processing quantitative data contained in optical signals, and
connecting optical systems to other informational systems and networks is most
natural when data are handled in digital form. In the same way as currency is a
general equivalent in economics, digital signals are the general equivalent in
information handling. A digital signal within the computer that represents an
optical one is, so to say, purified information carried by the optical signal and
deprived of its physical integument. Thanks to its universal nature, the digital
signal is an ideal means for integrating different informational systems.
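A small illustrative sketch, in Python/NumPy, of the kind of operation meant above (an assumed example, not code from the course): a smoothing convolution implemented through the spatial Fourier transform, followed by an element-wise contrast stretch, all on the same general-purpose hardware.

import numpy as np

img = np.random.default_rng(2).random((256, 256))             # stand-in for a digitized image

# Gaussian low-pass filter defined directly in the spatial-frequency domain.
fy = np.fft.fftfreq(img.shape[0])[:, None]
fx = np.fft.fftfreq(img.shape[1])[None, :]
H = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))

smoothed = np.fft.ifft2(np.fft.fft2(img) * H).real            # convolution via the FFT
stretched = (smoothed - smoothed.min()) / np.ptp(smoothed)    # element-wise contrast stretch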
Basic problems:
- Digital representation of signals
- Digital representation of signal transforms
- Development of adaptive algorithms to achieve potential quality limits
- Efficient computational algorithms