2011 Issue - The Journal of Undergraduate Research at

Transcription

2011 Issue - The Journal of Undergraduate Research at
Journal of Undergraduate
Research
A Refereed Journal for Undergraduate Research in the Pure & Applied Sciences,
Mathematics, and Engineering
March 2011
Editor: Robert F. Klie
http://jur.phy.uic.edu/
Volume 4
Number 1
On the Cover (from left to right):
1. SEM images of Ni nanoflower growth (see F. Lagunas
et al., page 57);
2. The trigger board experiment before being placed in the
magnet. Shown are the trigger board, telescope board,
test board, and ROCs (see E. Stachura et al., page 48);
3. Scanning fluorescence phase microscope images of a copper nanofiber mat (see Chan et al., page 43).
Journal of Undergraduate Research
A refereed journal for undergraduate research in the pure & applied sciences, mathematics and engineering.
Founding Editor: Robert F. Klie
Department of Physics
University of Illinois at Chicago
845 W Taylor Street, M/C 273
Chicago, IL 60607
email: [email protected]
312-996-6064
The journal can be found online at: http://jur.phy.uic.edu/
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
R. F. Klie
The Design and Preparation of a Model Spectrin Protein: βII-Spectrin L2079P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
N. Palmer, A. Antoniou, and L.W.M. Fung
Localized and Automated Chemical and Oxygen Delivery System for Microfluidic Brain Slice Devices . . . . . . . . . . . . . . . . . 5
G. Yu, A.J. Blake, and D.T. Eddington
Microfluidic Bandage for Localized Oxygen-Enhanced Wound Healing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Z.H. Merchant, J.F. Lo, and D.T. Eddington
Comprehensive JP8 Mechanism for Vitiated Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
K.M. Hall, X. Fu, and K. Brezinsky
TEM Study of Rhodium Catalysts with Manganese Promoter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
A. Merritt, Y. Zhao, and R.F. Klie
Selective Atomic Layer Deposition (SALD) of Titanium Dioxide on Silicon and Copper Patterned Substrates . . . . . . . . . . 29
K. Overhage, Q. Tao, G. Jursich, and C.G. Takoudis
Solvent Selection and Recycling for Carbon Absorption in a Pulverized Coal Power Plant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
R. Reed, P. Kotechs, and U. Diwekar
Temperature-Dependent Electrical Characterization of Multiferroic BiFeO3 Thin Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
D. Hitchen and S. Ghosh
Hydrodynamics of Drop Impact and Spray Cooling through Nanofiber Mats. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Y. Chan, F. Charbel, S.S. Ray, A.L. Yarin
General Purpose Silicon Trigger Board for the CMS Pixel Read Out Chips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
E. Stachura, C.E. Gerber, and R. Horisberger
Characterization of Nickel Assisted Growth of Boron Nanostructures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
F. Lagunas, B. Sorenson, P. Jash, and M. Trenary
Introduction
Dear Colleagues,
welcome to the fourth edition of the Journal of Undergraduate Research at the University of Illinois at Chicago
(UIC). After more than six months of hard work from our authors, referees and the journal’s editorial staff,
we have finally completed this edition, which contains 11 outstanding papers from undergraduate students who
performed their research over the last year at UIC. Several papers are part of the National Science Foundation
(NSF) Research Experience for Undergraduates (REU) site in the Departments of Chemical and Biomedical
Engineering. I would especially like to thank Professor C. G. Takoudis and Dr. G. Jursich for heading this effort
here at UIC.
Furthermore, I am also very happy to announce an increasing number of submissions from undergraduate students
performing research here at UIC outside the NSF-REU site. Many thanks to the faculty advisors, graduate
students and post-docs for helping with the preparation and revision of the submitted manuscripts. Finally, a
big thanks to all the faculty reviewers of the submitted manuscripts. I know that your work is invaluable to the
success of this journal and to the undergraduate student research reported within the papers.
Since the inaugural issue in December 2007, many things have changed behind the scenes at the Journal. Foremost
is our new website http://jur.phy.uic.edu/, and the growing exposure of the work being published. The success
of the journal is, of course, due to the great research that is being performed by our undergraduate students in the
Colleges of Liberal Arts & Sciences, as well as Engineering. To further increase the awareness of our Journal in
the Science and Engineering community not only here at UIC, but also nationwide, we invite every undergraduate
student performing research during the semester or over the summer, to submit his/her work to the Journal for
publication.
Last but not least, I also want to thank the editorial assistant, Kyle Klages, for his outstanding work and help in
putting together the fourth volume of this Journal. Finally, I am very grateful for the financial support from the
College of Liberal Arts & Sciences at the University of Illinois at Chicago.
Robert F. Klie
Nanoscale Physics Group
March 2011
Journal of Undergraduate Research 4, 1 (2011)
The Design and Preparation of a Model Spectrin Protein: βII-Spectrin L2079P
N. Palmer
Department of Chemical Engineering, University of Illinois, Urbana, IL 61801
A. Antoniou and L.W.M. Fung
Department of Chemistry, University of Illinois at Chicago, Chicago, IL 60607
Spectrin isoforms are cytoskeletal proteins that give stability to cells. Site directed mutagenesis
was used to replace residue 2079 in brain spectrin βII from leucine to proline, the corresponding
amino acid in red blood cell spectrin βI. We have shown previously that, in spectrin βI, the region
downstream of the proline residue is unstructured, whereas the corresponding region in spectrin
βII (downstream of a leucine residue) appears to be helical. This structural difference has been
suggested to be responsible for binding specific proteins to each β-spectrin isoform, with G5 only
to βI-spectrin and F11 only to βII-spectrin. Thus, it is possible that the mutation from leucine to
proline in βII-spectrin may lead to a conformational change in βII, from helical to unstructured. In
this study, a recombinant protein consisting of a fragment of βII-spectrin, with L2079P mutation,
has been designed and prepared.
Introduction
Spectrin isoforms are common cytoskeletal proteins
that give stability and a unique shape to many
cells. Spectrin isoform of the brain cells (spectrin II)
plays a critical role in neuronal growth and secretion.
Spectrin isoform of the red blood cells (spectrin I) provides deformability in red blood cells. Both spectrin
I and spectrin II consist of two subunits, α-spectrin
and β-spectrin, to form αβ heterodimers. Two heterodimers associate at the N-terminus of α-spectrin and
the C-terminus of β-spectrin to form a functional (αβ)2
tetramer.1 In forming spectrin tetramers, the affinity between the αIIβII heterodimers is much greater than that
of αIβI heterodimers. Consequently, the brain spectrin
forms a stable network for complex neurological functions, and the red blood cell spectrin forms a flexible
network to allow red blood cells to deform and to pass
through small capillaries. In studies with tetramerization
site model proteins, erythroid (red blood cell) alpha spectrin consisting of residues 1-368 (αI-N3), non erythroid
(brain) alpha spectrin consisting of residues 1-359 (αII-N3), erythroid beta spectrin consisting of residues 1898-2083 (βI-C1), and non-erythroid beta spectrin consisting of residues 1906-2091 (βII-C1), the αIβI association
exhibits equilibrium dissociation constants (Kd ) in µM
range and αIIβII association in nM range.2,3 However, it
is found that the difference in the affinity is largely due
to structural differences in αI- and αII-spectrin, since the
Kd values for αIβI association are about the same as those
for the αIβII association.4
Despite their 80% sequence homology and similar affinity to α-spectrin isoforms, βI- and βII-spectrin selectively
bind to proteins G5 and F11, respectively. G5 and F11
were identified as β-spectrin interacting proteins in a
study using phage display methods to screen a singlechain-variable-fragment library.4 G5 binds to an unstructured region downstream of the residue 2071 (proline) of
βI-spectrin (Figure 1A). However, the corresponding region in βII-spectrin assumes a helical conformation downstream of the corresponding residue 2079 (leucine) (Figure 1B). βII does not bind G5, instead it binds F11.
In this study, βII-spectrin model protein, βII-C1, which
consisted of residues 1906-2091, was used as the wild type as well as the template to prepare the L2079P mutant.
The mutation of βII from leucine to proline at residue
2079 may disrupt the helical conformation beyond this
point to give a conformation more similar to that of βI
and thus function more similar to βI than βII. However,
mutation at this site should not disrupt the association
with αII-spectrin.
FIG. 1: Proposed C-terminal structures of βI and βII
spectrin (from reference 3). (A) In βI spectrin the
C-terminal region downstream of residue P2071 is
unstructured, and (B) the corresponding region in βII
spectrin downstream of corresponding residue L2079 is
helical. Mutation L2079P may change the helix into
unstructured conformation to resemble the structure of
βI in this region.
Materials and Methods
Standard method using primer-mediated site-directed
mutagenesis procedures was used to introduce mutation
L2079P. To design the primers, we used the wild type
deoxyribose nucleic acid (DNA) sequence (gene code:
NM_003128) for amino acid residues 2075 - 2083. The
DNA sequence is 5' GCC CTG GAA AGG CTG ACT ACA TTG GAG 3', with the fifth codon (CTG, underlined in the original) being the leucine codon. A primer with the following sequence was designed to introduce the L2079P mutation: 5' GCC CTG GAA AGG CCT ACT ACA TTG GAG 3', with the fifth codon (CCT, double underlined in the original) being the proline codon. The nucleotide sequence AGG CCT (in bold in the original) is the StuI recognition site. A specific restriction site was introduced for analysis purposes, since a successful restriction enzyme digestion indicates a successful introduction of the mutated
sequence. The reverse complementary primer was also
designed. This pair of primers was then ordered from
UIC Research Resources Center (RRC). A glutathione
S-transferase (GST) fusion protein plasmid, pGEX-2T∆,
previously modified4 to contain the wild type sequence
for βII-spectrin consisting of residues 1906 - 2091, was
used as the parent template for polymerase chain reactions (PCR) in the presence of the designed primers to
generate DNA with the mutation. PCR was performed
in a thermal cycler using these primers and the parent
template. The PCR product was subjected to DpnI
restriction enzyme digestion to remove the methylated
parent template.
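As a quick consistency check on the primer design described above (added here for illustration only; it is not part of the original work), the short Python sketch below translates the wild-type and mutant coding sequences quoted in the text and confirms that the single codon change both converts leucine 2079 to proline and creates a StuI recognition site.

# Illustrative sketch: verify that the mutant primer changes codon 5
# (residue 2079) from Leu to Pro and introduces a StuI site (AGGCCT).

CODON_TABLE = {  # subset of the genetic code sufficient for these primers
    "GCC": "A", "CTG": "L", "GAA": "E", "AGG": "R",
    "CCT": "P", "ACT": "T", "ACA": "T", "TTG": "L", "GAG": "E",
}

WILD_TYPE = "GCC CTG GAA AGG CTG ACT ACA TTG GAG".replace(" ", "")
MUTANT    = "GCC CTG GAA AGG CCT ACT ACA TTG GAG".replace(" ", "")
STUI_SITE = "AGGCCT"  # StuI recognition sequence

def translate(dna: str) -> str:
    """Translate a coding-strand DNA string into one-letter amino acids."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

if __name__ == "__main__":
    wt, mut = translate(WILD_TYPE), translate(MUTANT)
    print("wild type :", wt)   # residues 2075-2083
    print("mutant    :", mut)  # L2079P at position 5
    assert wt[4] == "L" and mut[4] == "P"
    assert STUI_SITE not in WILD_TYPE and STUI_SITE in MUTANT
    print("StuI site introduced at nucleotide", MUTANT.find(STUI_SITE) + 1)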
This modified pGEX-2T plasmid was transformed into
DH5α competent E. coli cells (Clontech, Mountain
View, CA), and the cells were grown on agar plates
with LB medium and ampicillin at 37◦ C overnight. The
colonies were then used to inoculate a liquid culture (4
mL LB media) for 37◦ C overnight growth. The plasmid
was extracted from the cells and digested with StuI and
BamHI restriction enzymes. The digestion product was
applied to a 1.3% agarose gel for electrophoresis analysis. The agarose gel was prepared by dissolving 2 g of
agarose in 150 mL of Tris-acetate-EDTA (TAE) buffer.
The mixture was heated to dissolve and poured into a gel
caster to make the 1.3% agarose gel. A "Low Mass DNA standard" (NEB, Ipswich, MA) was used as a reference for DNA size (in base pairs and kilobases). Six trials were
done with one negative control. The plasmid DNA was
also submitted to UIC RRC for DNA sequencing.
The plasmid with the correct sequence was then transformed into BL21 competent E. coli cells (Clontech,
Mountain View, CA) for protein expression.
Protein over-expression was induced by isopropyl β-D-1-thiogalactopyranoside (IPTG; from Gold Biotechnology, St. Louis, MO). A small amount of cells was first
grown for whole cell electrophoresis analysis to ensure
proper protein expression. Electrophoresis was performed with a 16% polyacrylamide gel in sodium dodecyl
sulfate (SDS) solution. With a positive whole cell electrophoresis result, a large scale preparation of GST-βII-
FIG. 2: Electrophoresis of PCR products after DpnI
digestion on an agarose gel (1.3%). A DNA marker
sample was loaded and labeled as Standard to show the
mobility of DNA fragments, in kilobases (kb). Samples
with varying DNA template-to-primer ratios were
loaded to Lanes 2-6. A band at about 7 kb was
observed suggesting that the DNA plasmid was
amplified. Lane 7 is that of a negative control, showing
no DNA amplification.
C1 L2079P protein was done with the BL21 cells grown
in LB media (2 L) at 37◦ C in a flask (4 L), placed in a
temperature controlled shaker (Lab line, Melrose Park,
IL). After about 3 hr. growth, with optical density measured at 600 nm (OD600 ) ∼ 0.3, IPTG (0.5 mM) was
added, followed by another 3 hr growth at 27◦ C in the
temperature controlled shaker. The cells were dissolved
in 4 mL of 1% Triton lysis buffer, followed by centrifugation at 4600g for 20 min. The supernatant was then
loaded onto a column packed with GST affinity resin
(Sigma Aldrich, St. Louis, MO), pre-washed extensively
with a 5 mM phosphate buffer with 150 mM NaCl at pH
7.4 (PBS). The GST-βII-C1 L2079P fusion protein was
immobilized on the resin while the rest of the E. coli proteins were eluted off the column with the buffer. GST-βII-
C1 L2079P protein was then eluted using PBS containing freshly added glutathione (Sigma Aldrich, St. Louis,
MO). Electrophoresis in SDS solution was performed on
the fusion protein fractions. SigmaGel 1.0 Software (Jandel Scientific, San Rafael, CA) was used to analyze the gel
to determine the protein purity. The protein was submitted to UIC RRC for molecular mass determination using
mass spectrometry.
The same procedure was used to obtain αII model protein consisting of residues 1 - 359 (αII-N3) and the wild
type βII-C1.
Isothermal titration calorimetry (ITC) measurements
were done using a VP-ITC (MicroCal, LLC, Northampton, MA) at 25◦ C. The proteins were dialyzed extensively
in PBS buffer overnight at 4◦ C.
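To relate the dissociation constants reported in the results below to the protein concentrations used in the ITC cell, the exact 1:1 binding relation can be evaluated directly. The Python sketch below is illustrative only and is not the fitting routine used by the MicroCal software; the assumed equimolar titrant concentration is a placeholder for illustration.

# Illustrative sketch: exact solution for A + B <-> AB with dissociation
# constant kd (all quantities in the same units, e.g. micromolar).
import math

def bound_complex(a_total: float, b_total: float, kd: float) -> float:
    """Equilibrium concentration of the AB complex."""
    s = a_total + b_total + kd
    return (s - math.sqrt(s * s - 4.0 * a_total * b_total)) / 2.0

# 2.1 uM beta(II)-C1 L2079P in the cell (as quoted in Figure 4); Kd ~ 0.2 uM
# for the mutant versus ~ 0.01 uM for the wild type.  An equimolar amount of
# alpha(II)-N3 is assumed here purely for illustration.
for label, kd in [("L2079P mutant", 0.2), ("wild type", 0.01)]:
    ab = bound_complex(2.1, 2.1, kd)
    print(f"{label}: {100 * ab / 2.1:.1f}% of beta-spectrin bound at 1:1 stoichiometry")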
Results
The gel electrophoresis of PCR products shows the modified and amplified DNA plasmid (about 7 kb) (Figure 2, Lanes 2-6). The negative control lane shows no DNA band (Lane 7). DNA sequencing results of the PCR products clearly indicate that the L2079P mutation has been introduced.
FIG. 3: Electrophoresis of protein samples in SDS buffer
on a polyacrylamide gel (16%). Lane 1 is the βII-C1
L2079P with 93% purity, Lane 2 is the βII-C1 wild type
with 92% purity, and Lane 3 is αII-N3 with 95% purity.
From cell growth using cells with this modified plasmid
and 2 L medium, we obtained ∼ 1 g of cells harboring the
L2079P protein, and about 27 mg of GST-βII-C1 L2079P
protein at 93% purity, as shown in Figure 3, Lane 1.
The purity of the GST-βII-C1 wild-type is ∼ 92% (Lane
2). The purity of the GST-αII-N3 is ∼ 95% (Lane 3).
Mass spectrometry analysis indicates correct mass for the
mutant protein.
ITC titration results (Figure 4) show that βII-C1
L2079P associated with αII-N3 protein with a Kd value
of ∼ 200 nM for the complex.
FIG. 4: ITC results indicate that the βII-C1 L2079P
mutant is still functional since it associates with αII-N3,
but with lower affinity than the wild type. 2.1 µM
βII-C1 L2079P was used in the sample cell and 35 µM
αII-N3 was used in the titrating syringe. Fusion
proteins were used. The Kd from the titration was 200
nM.
Discussion

The recombinant protein βII-C1 L2079P was successfully prepared in large quantity and in high purity. ITC results of αII-N3 and βII-C1 L2079P show that the protein associates with its binding partner αII-N3. The Kd value of the complex is larger than that of the wild type complex (Kd ∼ 10 nM), suggesting that the mutation induces a conformational change in βII-C1 to give a reduced affinity with αII-N3. Thus, this model protein can now be used for further structural studies to determine its conformational changes and its affinity with G5 and F11 proteins.

Acknowledgements

This work was supported, in part, by grants from the National Institutes of Health (GM68621 to LWMF), the Department of Defense (DOD), and the National Science Foundation (EEC-NSF Grant # 0755115).

Abbreviations

• αI-N3 - erythroid (red blood cell) alpha spectrin consisting of residues 1-368
• αII-N3 - non-erythroid (brain) alpha spectrin consisting of residues 1-359
• βI-C1 - erythroid (red blood cell) beta spectrin consisting of residues 1898-2083
• βII-C1 - non-erythroid (brain) beta spectrin consisting of residues 1906-2091
• DNA - deoxyribose nucleic acid
• GST - glutathione S-transferase
• ITC - isothermal titration calorimetry
• kb - kilobase
• OD600 - optical density measured at 600 nm
• SDS - sodium dodecyl sulfate
• PCR - polymerase chain reaction
• TAE - tris-acetate-EDTA
• PBS - 5 mM phosphate buffer with 150 mM NaCl
at pH 7.4
1. D. W. Speicher, T. M. Desilva, K. D. Speicher, J. A. Ursitti, P. Hembach, and L. Weglarz, J. Biol. Chem. 268, 4227 (1993).
2. P. A. Bignone and A. J. Baines, Biochem. J. 374, 613 (2003).
3. F. Long, D. McElheny, S. Jiang, S. Park, M. S. Caffrey, and L. W.-M. Fung, Protein Sci. 16, 2519 (2007).
4. Y. Song, C. Antoniou, A. Memic, B. K. Kay, and L. W.-M. Fung (2010), manuscript in progress.
Journal of Undergraduate Research 4, 5 (2011)
Localized and Automated Chemical and Oxygen Delivery System for Microfluidic
Brain Slice Devices
G. Yu, A.J. Blake, and D.T. Eddington
Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607
To better study in vitro models of the brain, a localized delivery system is necessary due to the
region specific functionality of the brain. The proposed system allows drugs and oxygen of controllable concentrations to be delivered. The delivery system is integrated into a polydimethylsiloxane
microfluidic brain slice device and uses valves controlled by the LabVIEW programming language.
Delivery is controlled by adjusting the opening/closing frequencies of the valves. Fluorescein isothiocyanate, a fluorescent dye, was used to characterize the delivery with and without brain tissue
(∼300 µm). A linear relationship was found correlating the valve frequencies and the intensity,
showing how easily controlled concentrations can be delivered. A delivery system to automatically
mix and deliver oxygen concentrations between 0% and 21% was developed. Accurate and precise outputs were obtained. Combined, these two delivery systems will allow controllable drug and
oxygen concentrations to be tested at defined regions of the brain.
Introduction
Aristotle believed that the heart controlled perception
and thought. After two millennia of research, it is now
widely accepted that the brain is the real heart of the
matter. The complexity of the brain and its astounding
ability to process and relay all the signals and information of the body has made it exceedingly difficult to understand the functional relationship of areas in the brain.
The complex relationship presents a barrier to better understanding and treating neurological and trauma related
disorders.
To better understand the functional relationship of
individual areas of the brain, electrophysiologists use
a combination of recording/stimulating electrodes and
chemicals. In vivo experiments are often challenging as
the stimulation of one region can be dictated by a number
of interconnected pathways that may also activate other
regions of the brain. Additionally, the in vivo environment makes it difficult to locally deliver and
remove chemicals in a controlled manner. Alternatively,
brain slice preparations have proven invaluable as a physiological tool for investigating intrinsic cellular mechanisms of brain circuitry.1 In combination with brain slice
perfusion chambers, in vitro brain slice models allow researchers to determine the exact nature of each region by
isolating a specific neural network, thereby reducing the
input pathway activity from other connected areas.
It is necessary to test the response of specific locations
of the brain due to the brain’s spatial organization of
its processing centers.2 Therefore, most brain slice perfusion chambers use an open top bath design to provide
nutrients to the brain slice and access for electrophysiology tools.3 A micro-injector pipette is typically utilized
to inject a solution in a site on the brain slice. However, the design of typical perfusion systems makes testing drugs and chemicals on highly localized areas of the
brain very difficult. The open bath design causes the
chemical to diffuse outward away from its intended tar-
get unless the pipette is very close to the tissue, but even
then it is not guaranteed that the chemical will diffuse
all the way through the tissue. Additional precaution
must also be taken to prevent the tissue from being disturbed as the flow rate of the perfusate and/or puffing of
chemicals from the pipette may cause the tissue to move
out of place or compress the tissue at the target site.
Moving the tissue out of place will cause any electrophysiological sensors that are set to record at the specific
location to take measurements in the incorrect location.
Compression of the tissue will cause unwanted mechanical stimulation to contaminate the electrophysiological
effects of the drug itself.4 Furthermore, pipette systems
also cause the microfluidic brain slice device (µBSD) to
become crowded, making placement of the electrophysiological sensors difficult, as well as interfering with
any microscope being used.5 Whenever the drug needs to
be applied to a new area, either the pipette system needs
to be moved, or another one must be placed in that new
area. If multiple areas are to be tested, then the open
bath area can quickly become crowded.
Here we present a type of perfusion chamber, the
µBSD, for maintaining the viability of brain slices and
controlling the spatiotemporal delivery of solutions to
specific areas of the brain slice. The µBSD was constructed using poly(dimethylsiloxane) (PDMS) microfluidic technology, as it is an inexpensive and flexible platform for rapidly prototyping modifications until an optimal design is achieved.6 The µBSD allows a brain tissue
slice to be placed inside a microfluidic chamber through
which oxygen and nutrients can be delivered to maintain the brain slice’s viability.4 The microscale dimensions of the fluid chamber significantly reduce the perfusate volume, thereby promoting a faster exchange of
oxygen and nutrients at the tissue-fluid interface. Consequently, lower flow rates can sustain the viability of
thick tissue slices.7 Perfusion chambers are important in
order to model and study processes such as ischemia and
epilepsy, as well as study protein expression.3 A partic-
ular application we would like to observe is inducing a
physical trauma like stroke. Forcing a specific area of
the brain to undergo hypoxic, or low oxygen, conditions
can create a stroke model that can be observed using
a number of electrophysiology tools. The µBSD can be
integral in the prevention or treatment of stroke by measuring the effectiveness of different drugs being applied
to hypoxic areas.
The proposed µBSD design in this experiment utilizes
VIAs (through-put channels) that can deliver the chemical or drug from underneath the brain slice. The flexibility in the design and manufacturing of the µBSD can
also allow specific locations of the brain to be targeted
and control the spatiotemporal delivery of solutions. By
having the VIA delivery ports placed at the bottom, the
open bath remains uncluttered to allow more access for
electrophysiological tools and microscopes to the slice.
Mechanical solenoid valves, which are compatible with
most chemicals, are utilized to deliver the solutions in
a controlled manner and prevent the tissue from being
disturbed. To automatically regulate delivery, a digital
signal sequence programs the valves through a LabVIEW program. By varying the VIA diameter and the
frequency at which the valves open and close, we can
manipulate the concentration profile of the solution being delivered.
FIG. 1: The valve and tubing set up for the oxygen
delivery system is depicted. The valves would mix
different oxygen concentrations together by opening and
closing at different frequencies to form a final oxygen
concentration output. The two valves came from the
Lee Micro Dispensing VHS starter kits. One valve was
connected to a 0% oxygen gas tank, and the other was
connected to a 21% oxygen tank. A y-connector
combined their outputs into the Cole-Parmer EW
06498-62 tube which was 40.1cm long to allow sufficient
time for the different oxygen concentration gases to
mix. Not shown in the diagram was the NeoFox FOXY
sensor that would detect the output concentration.
Materials and Methods
Automated Oxygen Delivery System
Equipment Set-Up
In addition to laying the groundwork for the automated delivery of solutions, we have automated a delivery system for premixing 0% and 21% oxygen concentrations to a desired concentration. The main conception
of the automated delivery system is automation and programmability: after defining parameters such as the oxygen level and the duration of exposure, an experiment can be run without needing further
attendance for the oxygen. Both delivery systems share
the same type of valve as well as the LabVIEW based interface. A graphical user interface (GUI) was developed
for each delivery system to allow the user to program the
quantity that is delivered. For both delivery systems,
it was hypothesized that there would be a linear relationship between the manner in which the valves were
oscillated to release their contents and the concentration
of the output. A linear relationship would allow a simple
and controllable method for delivering various concentrations. Other features were implemented for each delivery system as well in order to facilitate the automation
of each system and will be detailed later in the paper.
In summary, the final chemical and oxygen delivery system
should feature high spatial resolution, interfacing with
other sensors, non-interference with the tissue, and automation.
The oxygen delivery system (Figure 1) consists of
two Lee VHS micro-dispensing starter kits containing
voltage-controlled micro-nozzle valves.8 A 0% oxygen
tank feeds into one valve while a 21% oxygen tank feeds
into the other. Regulators are placed on these lines to
adjust and ensure equal flow rates to each valve. A
y-connector combines the outflow of each valve, and
the total combined output travels through a 15.8 inch
long Tygon tube with an inner diameter of 1/16 inch
(Cole-Parmer EW 06408-62, Vernon Hills, Illinois) before reaching a FOXY (fiber optic oxygen) probe which
senses the percent oxygen level. The VHS starter kits
come with a microcontroller box which can be fed signals from a National Instruments DAQ card. Through
the DAQ card, LabVIEW is able to output digital signals
to open and close the valves.
Calibrating the Oxygen Levels
Before any oxygen levels could be set, the FOXY
(Ocean Optics, Inc.) software had to be calibrated to the
standards. This calibration had to be performed
FIG. 2: The manner in which the valves were
programmed to open and close is shown here. After
opening for a specific duration, the valves would remain
closed until their next opening time. This way, only one
valve would be open at a given time. They would
repeat this cycle for as long as the desired oxygen
concentration needs to be applied. By increasing and
decreasing the open times for the 0% oxygen valve and
the 21% oxygen valve, different oxygen concentrations
could be produced.
FIG. 3: The LabVIEW GUI was developed for use with
the automated oxygen delivery system. The
components will be briefly explained clockwise starting
from the upper left corner. The total elapsed time
displays the length of time that has passed since the
program was initiated using the Start/Stop button. The
target time reached light activates once the total
elapsed time matches the total delivery time. The
testing section allows the user to manually and
independently open and close the valves to make sure
the correct oxygen tank is connected to the valves. The
intervals section allows you to choose the different
oxygen intervals that will be cycled through
sequentially and repeatedly. Each oxygen interval is
defined by an oxygen concentration and a duration.
The Start/Stop button starts the program and can stop
it at any point. The total delivery time can be set such
that the program will stop once the time has been
reached. The oxygen cycling section contains a more
detailed version of the oxygen intervals.
periodically, every 30 minutes, to ensure accurate results. An
Ocean Optics NeoFox Oxygen sensor system was used
to measure the oxygen concentration. This sensor detects the fluorescence level at the tip of a probe which is
quenched by oxygen concentration. A spectrometer measures the degree of quenching which can be used to determine the output of oxygen. The standards used were
the 0% (5% carbon dioxide and balanced nitrogen) and
21% oxygen gases which are typically carried by gas distribution companies. Afterwards, equal flow rates were
set through each valve by adjusting the regulators. For
validation that equal flow rates were achieved, the regulators were slowly manipulated to read the same output.
Then further tunings were applied until 10.5% was detected by the FOXY sensor. If both gases were flowing
at the same rate, then the combined outflow should consist of equal amounts of the 0% and 21% gas resulting
in an average output of 10.5%. Through LabVIEW, the
valves were coded to sequentially open and close such
that only one valve was open at any given time (Figure
2). This method generated more stable results than keeping one valve open constantly while adjusting the opening/closing frequency of the other, which introduced too much oscillation in the output mixture. The sequential valve control system allowed packets of 0% and 21%
to diffuse and mix with each other resulting in a more
uniform and consistent output.
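As an illustration of the mixing arithmetic implied by this description (equal flow rates, only one valve open at a time), the sketch below computes the per-cycle open times needed for a target concentration. This is a Python sketch added for clarity, not the authors' LabVIEW code; the cycle length is an assumed placeholder, since the paper does not state one.

# Sketch of the duty-cycle mixing arithmetic: with equal flow rates and only
# one valve open at a time, the delivered O2 fraction is set by the fraction
# of each cycle during which the 21% valve is open.

def valve_open_times(target_pct: float, cycle_ms: float = 100.0,
                     low_pct: float = 0.0, high_pct: float = 21.0):
    """Return (ms open for the 0% tank, ms open for the 21% tank) per cycle."""
    if not (low_pct <= target_pct <= high_pct):
        raise ValueError("target outside the 0-21% range")
    duty_high = (target_pct - low_pct) / (high_pct - low_pct)
    return cycle_ms * (1.0 - duty_high), cycle_ms * duty_high

# Example: the 10.5% calibration check corresponds to equal open times.
print(valve_open_times(10.5))   # (50.0, 50.0)
print(valve_open_times(5.0))    # approximately (76.2, 23.8)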
LabVIEW GUI for Oxygen Delivery System

The main purpose of the GUI (see Figure 3) was to be able to output an integer oxygen concentration between 0% and 21%. The user is able to use a slider to input the desired concentration or simply type the number in. When the program is activated with the Start/Stop button, LabVIEW would use the calibrations that were discovered to output that oxygen concentration. The GUI also allows the user to define a set time for which the oxygen concentration would be outputted. A total elapsed time begins and is displayed once the Start/Stop button is pressed. Pressing the button again will stop the timer and close all valves. If the defined delivery time is reached, the valves are closed, the timer is stopped, and the user is notified via a bright green light on the GUI. For calibration purposes, each valve can be manually and independently opened and closed using toggles to ensure that the correct gas lines are connected to the valves. The final feature of the GUI is the ability of the user to program different intervals of delivery. Each interval consists of a user-defined oxygen concentration as well as a duration for which the concentration will be outputted. The user can specify up to 5 intervals. If the sum of the durations of all of the activated intervals does not exceed the user-specified total delivery time, then the intervals will repeat again from the beginning interval. In this manner, experiments can be performed where the brain is subject to different oxygen environments automatically.
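The interval-cycling behaviour described above can be written out explicitly. The Python sketch below is an illustrative reconstruction only (the actual controller is a LabVIEW program), and the set_concentration callback is a hypothetical placeholder for the valve duty-cycle control.

# Illustrative sketch of the interval-cycling logic described for the GUI.
import itertools
import time

def run_oxygen_schedule(intervals, total_delivery_s, set_concentration):
    """intervals: list of (oxygen_percent, duration_s), at most 5 entries.
    set_concentration: hypothetical callback that retargets the valves.
    Intervals repeat from the beginning until total_delivery_s elapses."""
    if not 1 <= len(intervals) <= 5:
        raise ValueError("the GUI accepts between 1 and 5 intervals")
    start = time.monotonic()
    for pct, duration in itertools.cycle(intervals):
        remaining = total_delivery_s - (time.monotonic() - start)
        if remaining <= 0:
            break
        set_concentration(pct)
        time.sleep(min(duration, remaining))
    set_concentration(None)  # close all valves once the target time is reached

# Example: 150 s at 5% O2 followed by 150 s at 21%, cycled for 10 minutes.
# run_oxygen_schedule([(5, 150), (21, 150)], 600, my_valve_controller)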
FIG. 5: A model of the microchannel layer of the µBSD
was constructed. The 3 microchannel design is shown
with a close up of the channel openings. DI water and
the chemical enter through the inlet ports and are
delivered to the tissue via the channel openings. Any
excess DI water and chemical is transported to the
outlet port to be removed via vacuum line. The
microchannels are 150µm in width and height. The
channel opening in this iteration of the device is 50µm
thick.
FIG. 4: This is a bird’s eye view of the basic µBSD that
was tested. The bottom layer is the microchannel layer
which contains the microchannel. The reservoir layer
sits on top and contains an outlet reservoir and the
main bath chamber. The outlet port delivers any
unused chemical to an area outside the main bath
chamber where it can be removed via a vacuum line.
The output reservoir prevents the vacuum’s suction
from disturbing the bath chamber. The bath chamber is
where the tissue would be placed. A channel opening is
placed in the bath chamber so that the chemical can be
delivered to the tissue. A T channel sits on top of the
reservoir layer. The T channel’s top opening is
connected to a DI water line which acts as a transport
medium for the delivered chemical. The side branch is
connected to the valve through which the chemical is
delivered.
Microfluidic Brain Slice Device (µBSD) with Chemical Delivery System

Description of the µBSD

The device consisted of 2 polydimethylsiloxane layers as well as one T channel manifold (Figure 4). The bottom layer, or the microchannel layer, contained the microchannels (150 µm in width), each having an inlet port but a combined outlet port on the other end (Figure 5). The top layer, or the reservoir layer, was a block of PDMS with a reservoir which would hold the tissue and the bath reservoir. A channel would run through this layer starting at the inlet ports of the microchannels at the bottom of the reservoir layer to the top of the layer. The T channel manifold was placed on the channel that emerged from the reservoir layer. A constant stream of de-ionized (DI) water would flow into an input of the T-shaped manifold (Figure 6). A Lee valve could then be inserted into the remaining branch to allow delivery of a chemical into that constant DI water stream. At the outlet port of the microchannel, a vacuum line would be placed to remove the influx of DI water. However, the chemical to be delivered does not reach the tissue through the outlet port. Between the inlet port and the outlet port is a channel opening through which the chemical will travel to interact with the bathing fluid and tissue. The width of the opening matched the width of the channel (150 µm), but the height of the opening was variable, as is detailed later. Finally, a line from a syringe pump was placed to flow a constant stream of water over the slice to wash away any chemical that would diffuse through the tissue. The excess chemical would be directed toward the outlet port for the vacuum pump to remove.

Creating the µBSD: Soft Lithography9,10

The primary material of the µBSD was polydimethylsiloxane (PDMS). PDMS was chosen because of its biocompatibility, flexibility, optical translucency, inexpensive cost, and easy use.4,10 In order to create the microchannel layer of the µBSD, a master needed to be made upon which PDMS could be poured to form one layer of the µBSD. Photolithography was used to construct the µBSD master. A 250 µm layer of SU8, a negative photoresist, was spun onto a silicon wafer. The layer was then baked on a metal hotplate to establish cross-linking of the SU8 into a more solid structure. A mask containing the µBSD design of the particular layer was laid over the SU8, and the wafer was exposed to ultraviolet (UV) light to degrade any SU8 that was not protected by the mask. After another bake to further strengthen and cross-link the SU8 that wasn't UV-exposed, the wafer
FIG. 7: The chemical delivery GUI is depicted. It is
currently set up to control up to three valves. An
elapsed time since delivery began is displayed, and a
total duration can be set to stop delivery upon reaching
the set time. When the Go/Stop button is pressed, only
the valves activated by their corresponding toggles run
through their programming. The milliseconds open and
closed control the pulses, and the repeat window allows
the pulse series to be delivered. A full dose delivery has
a repeat of 1 while the pulsing dose method has multiple
repeats. The Every X Seconds option allows the
delivery to repeat every X seconds. The color coding of
the aforementioned windows matches that below the
other valves, and the colors dictate the function of the
number control of that window. A camera system
allows visualization of the slice area with control of the
brightness, gain, and shutter speed. The calibrate
valves toggle allows the user to click on the location of
the valve openings which is saved as an overlay. The
overlay switch can turn the overlay on or off.
FIG. 6: A side view of the T channel manifold better
visualizes where the DI water line and valve are
connected. The actual T channel has been highlighted
in blue. The DI water line flows through the top
opening. The bottom opening is connected to a channel
in the reservoir layer which leads to the inlet port of the
microchannel. The valve is connected to the side branch
so that the chemical can be injected into the constant
DI stream and carried to the channel opening.
was subjected to a developer’s solution to eliminate the
UV-exposed SU8. In this manner, the master was constructed. From the master, the actual µBSD was constructed by pouring a PDMS mixture onto the master.
The PDMS mixture was created by combining the PDMS
curing agent and silicone elastomer in a 1:10 ratio. The
master with the PDMS was spun to form a 250µm layer,
and the entire construct was baked until the PDMS hardened and peeled easily away from the master. This procedure was performed to create the bottom layer of the
µBSD.
To form the reservoir layer, PDMS was poured into a
petri dish and baked. The resulting shape was then cut
using a razor blade to the appropriate dimensions before
a hole was punched using a cylinder to form the reservoir.
The T channel manifolds were constructed similarly except that small cubes of PDMS were cut from the main
batch. A syringe was used to punch a main channel out of
the cube. Then a side branch was created using the syringe. Special care was given to remove any PDMS from
the channels after the holes were punched so the flow
through the channels would not be obstructed.
LabVIEW GUI for Chemical Delivery System

The LabVIEW GUI for the chemical delivery system is shown in Figure 7. Starting and stopping the program is controlled with a Go/Stop button. There are three toggles, one for each of the valves that would lead to a different microchannel. The programming that dictates their delivery style only runs when they are armed by setting their toggle to "on." Delivery will then proceed when the Go/Stop button is activated. Again, a total delivery time can be set which will stop the program once the user-defined time has been reached. Delivery can be prematurely terminated by clicking the Go/Stop button again, which closes all valves. The valve delivery can be manipulated by changing the pulses (Figure 8). A pulse is defined by a specified amount of time the valve is open in seconds and a specified amount of time the valve is closed. The pulses can be sequentially repeated in a delivery known as a series of pulses or, simply, series. These series are repeated until the total delivery time is reached or the program is stopped. The interval at which the series are repeated can also be set by the user.

One problem with the delivery system is that after placing the tissue into the µBSD, the locations of the channel openings will be covered. The actual target area of the channel opening will be unknown, so the user will not be able to know exactly where the drug is being delivered.
FIG. 9: The experimental set up for testing the
chemical delivery system is shown. The µBSD is set on
the stage of a microscope which is integrated with a
camera that can measure fluorescence. FITC is
injected by the valve into the T channel. DI water lines
flow into the T channel and over the surface of the bath
chamber. A vacuum line rests near the outlet chamber
to remove waste and prevent overflooding of the
chamber. A brain harp is depicted which is placed over
the tissue slice in the bath chamber to prevent it from
moving once the microscope has been set.
FIG. 8: The different delivery methods tested in the
experiment are displayed. The full dose method has one
opening time and one closing time. The pulsing dose
method opened the valve for an equivalent amount of
time as the full dose method. However, they were
spread out in smaller pulses. A pulse was defined as one
opening and closing time. Essentially, a full dose
method was just one long pulse. The closing time of the
pulse is also known as the wait time, and a train of
sequential pulses was known as a pulse series.
Thus, the GUI can display the image that a camera can obtain through the microscope. A channel opening location calibration toggle is in place such that, when activated, the locations of the channel openings can be clicked upon before the tissue is placed. These locations are saved and can be overlaid on the camera image even after the tissue has been placed. In this way, the locations of the channel openings can still be known, assuming that the camera's field of view has not changed and the µBSD has not been moved. This overlay can be toggled on and off as well. The camera's gain, shutter speed, and brightness can be controlled via the GUI, and the frame rate of the camera is displayed as well.

Characterizing the Chemical Delivery System

Fluorescein isothiocyanate (FITC) was used to characterize the delivery system because the delivered concentration could be related to the intensity that is recorded by the camera (Figure 9). The microscope camera used to record the images was manufactured by Hamamatsu, and the recording software was Wasabi. This characterization was performed with and without tissue. The reservoir was allowed to be filled with DI water. Flow of the DI water and FITC was driven by gravity. Regulators were placed on the lines to ensure a constant flow rate of 60 mL/hr for the FITC and 20 mL/hr for the water. The Lee valves were used for this delivery as well.

When tissue was used, approximately 300 µm brain slices were obtained from adult black 6 mice. Slices were obtained using the Vibratome Series 1000. Because the tissue was needed only to study the diffusion characteristics of FITC through the slice, artificial cerebral spinal fluid was not used to maintain the tissue's viability. The slice was placed over the channel openings in the reservoir and was held in place using a brain harp which was strung with strands of nylon. Nylon was chosen because of the thinness of each strand as well as its inertness with respect to the brain tissue.1 The brain harp was required to keep the tissue in place while the flow of DI water passed over it from the syringe pump.

Delivery Methods: Full Dose and Pulsing Dose

Two different delivery styles were compared: the full dose method and the pulsing dose method. For this report, "pulses" described the pulses of FITC being delivered by the valves. The output was referred to as the bolus. The full dose method involved opening the valve for a set amount of time before closing the valve. This would constitute one full dose. A pulsing dose opened the valve for a small increment of time before closing it, but instead of increasing the time the valve was left open, the number of times the pulses were repeated in succession was increased. A full dose was defined by
the length of time the valve was open, and the pulsing
dose was defined by the number of pulses that were repeated. However, in choosing the trials, the open valve
times and the number of pulse repeats were chosen such
that the total amount of time the valve was left open
was the same for both methods. For example, if a full
dose of 10ms was chosen, then a pulsing dose of 2 repeats
of 5ms was performed. The two 5ms pulses sum to a total open valve time of 10ms. Without tissue, each bolus was injected every 6 seconds. However, this interval
was changed to 10 seconds when the tissue was in place
because saturation occurred.
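The two delivery styles can be written down explicitly as pulse lists. The short Python sketch below is illustrative only (not the authors' LabVIEW code) and reproduces the equal-total-open-time comparison from the example above; the 125 ms wait time used here is the value the authors ultimately favoured.

# Sketch of the two delivery styles as lists of (open_ms, closed_ms) pulses;
# a full dose is a single long pulse, a pulsing dose is a series of short ones.

def full_dose(open_ms: float, wait_ms: float):
    """One pulse: valve open for open_ms, then closed for wait_ms."""
    return [(open_ms, wait_ms)]

def pulsing_dose(pulse_open_ms: float, pulse_wait_ms: float, repeats: int):
    """A series of identical short pulses."""
    return [(pulse_open_ms, pulse_wait_ms)] * repeats

def total_open_time(series):
    return sum(open_ms for open_ms, _ in series)

# Example from the text: a 10 ms full dose versus 2 repeats of 5 ms pulses.
full = full_dose(10.0, 125.0)
pulsed = pulsing_dose(5.0, 125.0, 2)
assert total_open_time(full) == total_open_time(pulsed) == 10.0
print("full dose:", full)
print("pulsing dose:", pulsed)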
FIG. 10: A scatter plot of the output of the oxygen
delivery system was made. The desired output at each
step was an integer value between 0% and 21%. Each
step was run for 150 seconds, but there exists oscillation
in the output. However, the average output from each
step is very close to the desired output. The oscillations
stem from the error in the NeoFox FOXY sensor used to
detect the oxygen concentration as well as the sequential
nature of the oxygen mixing technique (Fig. 2).
Exposure Time: 15.005ms
Gain: 125
Offset: 55
Channel Opening Height: 50µm
All of the characterization data was processed in the
form of intensity profiles. Intensity profiles were generated by defining a region in the movies (in .avi format)
taken by the camera. ImageJ, an image analysis software, then could measure the intensity values within the
region. When analyzing the comparison between the full
dose and pulsing dose method without tissue, a rectangular region was defined through which the analysis of each
bolus of a certain setting was performed. Thus, an area
of effect could be seen as well as the spatial distribution
of the intensity of the bolus.
The analysis for the full dose and pulsing dose comparison with tissue used a line to generate the results.
A time lapse profile was created showing the average intensity in the affected area of tissue over time. This was
performed to discover the average concentration of FITC
that would be delivered to the tissue over a period of time
at the given settings. Thus, the decay of delivery could
be studied as the concentration decreased with time.
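The region-averaging described here was performed in ImageJ. As an illustration, an equivalent calculation is sketched below in Python/NumPy, assuming the recorded movie has already been loaded as a (frames, height, width) array; the loading step and the region coordinates are hypothetical placeholders, not values from the experiment.

# Illustrative sketch of averaging intensity in a region, frame by frame.
import numpy as np

def mean_intensity_trace(stack, y0, y1, x0, x1):
    """Average pixel intensity inside a rectangular region, frame by frame."""
    return stack[:, y0:y1, x0:x1].mean(axis=(1, 2))

def line_profile(frame, row):
    """Intensity along a horizontal line crossing the channel in one frame."""
    return frame[row, :]

# Example with a synthetic stack standing in for the recorded movie.
stack = np.zeros((100, 256, 256), dtype=float)   # placeholder, not real data
trace = mean_intensity_trace(stack, 100, 150, 100, 150)
profile = line_profile(stack[0], row=128)
print(trace.shape, profile.shape)   # (100,) (256,)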
Optimizing the Pulsing Dose

The effect of different closing times between each pulse was tested. This test was performed by injecting a series of 6 pulses with different wait times between each pulse. One bolus would be formed by this series. It was hypothesized that different pulse closing times would change the shape of the bolus delivered. Pulse intervals of 50ms, 100ms, 125ms, and 175ms were tested. Multiple boluses were recorded and averaged together to generate an average bolus profile. The ImageJ intensity profile was based on the intensity that passed through a line crossing the channel width. This way, the duration of the bolus could be monitored over time.

Exposure Time: 7.528ms
Gain: 255
Offset: 0
Channel Opening Height: 50µm

Results and Discussion

Automated Oxygen Delivery

Figure 10 depicts the output of the dual valve system after the calibrations had been made. A sequential step up can be seen from 1% to 21%, in which the average value of each plateau resides around an integer between 0 and 21 percent. Each oxygen level was programmed to run for 150 seconds. For certain oxygen levels, predominantly those between 11 and 21 percent, rapid and high-amplitude oscillations of the detected oxygen level cause thick lines to appear. The absolute difference between the average output and the desired oxygen percent was calculated rather than performing relative error analysis. Because of the weighting of the numbers, deviations in the lower oxygen levels would translate into more relative error than higher oxygen levels. Therefore, only a difference was calculated.

The data collected for Table I suggest that the calibrations for the oxygen levels are fairly accurate because the average values are very close to the desired oxygen levels. The absolute difference ranges from 0.009847% to 0.255671%. The standard deviation is also very low (0.038476% to 0.171212%), suggesting that the calibration outputs are very precise. However, the calibrations can be further refined to output more accurate and more precise results, and this will be performed in the future. Complete accuracy and precision cannot be guaranteed, though. The design of the FOXY sensor introduces variability in the data collected. For example, the 21% data, which comes from a standard gas tank set to 21%, does not have the lowest standard deviation nor does it have the lowest absolute difference from the desired oxygen level. However, part of this error may be caused by drift in the FOXY sensor calibration as the experiment proceeds. Further error may be caused by the manufacturing of the standard gas tanks, because it cannot be guaranteed that the oxygen level contained within is exactly 0% or 21%.
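For clarity, the summary statistics reported in Table I (mean output, absolute difference from the set point, and standard deviation) correspond to the simple calculation sketched below in Python. The sketch is illustrative, and the example readings are placeholders rather than measured data.

# Sketch of the per-step statistics behind Table I.
import statistics

def summarize_step(readings, desired_pct):
    avg = statistics.mean(readings)
    return {
        "desired (%)": desired_pct,
        "avg (%)": avg,
        "absolute difference (%)": abs(avg - desired_pct),
        "standard deviation (%)": statistics.stdev(readings),
    }

example_trace = [5.02, 5.11, 5.06, 5.13, 5.08]   # placeholder FOXY readings
print(summarize_step(example_trace, desired_pct=5.0))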
Desired Oxygen Level (%)   Avg. (%)   Absolute Difference (%)   Standard Deviation (%)
 1.0                        1.141      0.142                     0.057
 2.0                        2.013      0.013                     0.055
 3.0                        3.084      0.084                     0.111
 4.0                        4.083      0.083                     0.145
 5.0                        5.079      0.080                     0.038
 6.0                        6.010      0.010                     0.066
 7.0                        6.973      0.027                     0.076
 8.0                        8.108      0.108                     0.113
 9.0                        9.016      0.016                     0.0836
10.0                        9.981      0.019                     0.0643
11.0                       11.041      0.042                     0.091
12.0                       11.979      0.020                     0.118
13.0                       13.046      0.046                     0.075
14.0                       14.159      0.160                     0.135
15.0                       15.084      0.084                     0.163
16.0                       16.014      0.015                     0.171
17.0                       17.036      0.036                     0.095
18.0                       18.047      0.047                     0.171
19.0                       18.985      0.015                     0.156
20.0                       20.026      0.026                     0.105
21.0                       21.256      0.256                     0.070
FIG. 11: The wait times for a pulse series were varied.
FITC was tested, and the temporal profile was
generated using the relative intensity. Measurements
were taken from within the microchannel. The wait
times are identified by the ”closed” label and are
defined to be the time that the valve is closed in between
each pulse in a series of pulses. 50ms, 100ms, 125ms,
and 175ms wait times were tested for a pulse series
consisting of 6 pulses.
TABLE I: The average output of each step from Figure
10 was assembled in this table. The standard deviation
was also calculated to obtain the precision of each
result. An absolute difference was taken as a measure of
accuracy rather than calculating relative error because
the weighting of the desired values would underestimate
the relative error for higher concentrations.
each pulse is only 50ms, the pulses stack and compress
each other to form this shape. Finally, this wait time
resulted in the largest maximum.
Waiting 100ms between each pulse results in a wider
bolus meaning the concentration is delivered more slowly.
The shape of the bolus at the front is irregular meaning
that the concentrations are not as uniformly delivered.
Each pulse adds to the overall concentration until a peak
is reached. Therefore, a step is seen before the peak
which again shows that the delivery is not homogenous.
The peak also does not stay constant for a long duration,
approximately 35ms.
An almost symmetrical distribution is seen with the
125ms wait time. Comparing the build up to the peak
intensity and the decay, the two are similar. However, instead of reaching a constant peak, three local maxima are
reached, the middle one being a global maximum. Contrary to the 100ms wait time, a step is not seen before
the peak region arrives. This may be due to the faster
pulsing of the 100ms. For the 100ms wait time, the first
four pulses may have compressed and grouped together
forming that first step similar to the 50ms bolus. However, the final two were grouped and added to the first 4
pulses to form the maximum peak.
The 175ms wait time clearly distinguishes the different
peaks due to the 6 pulses used to form the bolus. Each
pulse sequentially builds off the intensity of the preceding
pulse causing higher and higher peaks. Thus, a homogenous delivery is not achieved because there is no appre-
ability in the data collected. For example, the 21% data,
which comes from a standard gas tank set to 21%, does
not have the lowest standard deviation nor does it have
the lowest absolute difference from the desired oxygen
level. However, part of this error may be caused by the
drift in the FOXY sensor calibration as the experiment
proceeds. Further error may be caused by the manufacturing of the standard gas tanks because it cannot be
guaranteed that the oxygen level contained within is exactly 0% or 21%.
Chemical Delivery System
Pulse Separation Times
The 50ms wait time formed a shape with a steep, positive slope at the forefront (Figure 11). The initial increase in intensity follows a fairly smooth shape. This
was followed by a rapid, relative to the other wait times,
decline towards the base level. The resulting bolus is
thin meaning high concentrations are delivered in a very
short amount of time. Because the wait time between
12
c
2011
University of Illinois at Chicago
Journal of Undergraduate Research 4, 5 (2011)
of 1320µm results from the pulsing dose method. Therefore, the full dose method would allow a higher specificity in the location of application because it can affect
a smaller radius of tissue. The difference in bolus radius
is attributed to the pulsing method delivering a small
amount of FITC multiple times. Because small quantities
are released, the bolus would diffuse out more. The way
that the pulses are released may cause the subsequent
pulses to collide with and spread out previous pulses resulting in a greater area of effect. A tighter radius is seen
with the full dose method because the valve is open for
a singular amount of time which would cause the FITC
to emerge more like a jet which would lessen radial diffusion. This jet also explains the higher maximum that
the full dose method achieves.
Both methods show an increase in maximum and overall intensity as their variable factor increases. For the
full dose method, higher intensities are outputted as the
total valve time increases, and the intensities are also
shown to increase as the number of repeats increases for
the pulsing dose method. However, a closer study upon
the linearity of these increases was performed because
a more linear relationship would make it easier for the
user to control the concentration delivered. A trend line
was fitted through the maximum intensities in order to
determine this linearity. The full dose method resulted
in a linear equation with an R2 value of 0.880 while the
pulsing dose method resulted in an R2 value of 0.99744.
The full dose method also seems to better fit an exponential equation rather than a linear equation. However,
it is clear that the pulsing dose method has a linear relationship between the number of times the pulses are
repeated and the intensity that is produced based on the
trend line’s R2 value and the distribution of its maximum
intensity points.
FIG. 12: The spatial profile of boluses delivered by full
dose and pulsing dose methods were calculated without
tissue using FITC. The graphs on the left are the
average boluses resulting from each delivery method
and setting. On the x-axis, the left graphs have the
width dimension of the boluses showing that the pulsing
dose method resulted in a wider spread of FITC. The
maximum intensities were then plotted versus valve
open time for the full dose method and number of
pulses for the pulsing dose method. A linear trend line
was calculated resulting in a higher R2 value for the
pulsing dose method.
ciable amount of time in which a constant intensity is
delivered.
The ideal delivery is described as homogeneous, meaning that the pulse style delivers a constant, consistent bolus. As the wait time between delivered pulses is increased, the width of the bolus naturally increases because the pulses are spread out in time. The most favorable wait time, however, was 125 ms: the outputs of all the other styles never stabilized to a single concentration, whereas the 125 ms wait time produced three peaks that returned to a relatively common local minimum. The symmetrical distribution over time and the large width of the profile also suggest a more controlled form of delivery. A slower release of concentration prevents sudden or violent discharges from the channel opening that could damage the tissue. Therefore, the 125 ms wait time was chosen for the pulsing dose method in the subsequent trials.
Full Dose vs. Pulsing Dose: With Tissue
A time lapse was performed measuring the intensity at
the affected tissue region over two minutes (Figure 13).
The timing of the delivery was set such that each delivery would occur 10 seconds apart over the time period.
The full dose delivery was less stable in that the concentration oscillated with greater amplitude. Conversely, the pulsing dose method produced a more consistent concentration level, with less variability than the full dose delivery. The higher-amplitude behavior of the full dose method stems from its single open time, which results in a large concentration of
FITC being delivered at once. The sudden introduction
of FITC causes the higher spikes to occur. The equally
sudden closing of the valve causes the concentration levels to drop, which is why there is a slower, curved negative slope after each spike peak. Pulsing the delivery spreads out the duration over which the FITC is applied. This leads to a more gradual increase in intensity as well as a
Full Dose vs. Pulsing Dose: Without Tissue
The results depict an average intensity profile compiled
from multiple boluses that emerged from the channel
opening without tissue (Figure 12). These profiles were
obtained 1.2 seconds after the FITC began to emerge.
The figures show that the full dose method outputs a
smaller bolus with a maximum width of 860µm. A width
variation seen by the 21% standard could be due to
the changing pressurization of the tank as oxygen is released. The small pulses of the delivery system enable
a more controlled release of gas from each tank resulting in greater precision. However, additional tweaks to
the calibration can be made to increase the accuracy of
the outputs even though the absolute difference between
the average output and the desired output is already
very minute. In biological systems, it is impossible to
have constant oxygen environment conditions, so the acceptable range of fluctuations could include up to ±1%.
Besides the use of regulators and the 10.5% check, another method needs to be developed in order to ensure
the robustness of the calibrations. The better solution
to ensure equal flow rates would be to have actual flow
rate monitors in each gas line. If flow rate monitors were in place so that each flow rate is known, the calibrations could be more easily standardized among different setups.
FIG. 13: The temporal profiles of boluses delivered by the full dose and pulsing dose methods, measured with tissue using FITC. Boluses for both delivery methods
were delivered periodically resulting in different baseline
oscillations depending on the valve open time for the
full dose method or the number of pulses for the pulsing
dose method. The average or baseline intensities were
calculated and plotted against their independent
variable. A linear fit was calculated resulting in a
higher R² value for the pulsing dose method.
Future Work
Application of this delivery system to the µBSD would be simple. The output line would bypass the T-channel manifold and instead connect
directly into the channel at the top of the reservoir layer.
Thus, the desired oxygen concentration can be bubbled
through to the specific location on the tissue slice being
tested. The current system consists only of two valves
and outputs a range from 0% to 21%. By adding another valve connected to 100% oxygen gas, the output
range could be increased to 0% to 100%. If an output
concentration between 0% and 21% were desired, the 0%
and 21% tanks would be used. Above that, the 21%
and 100% tanks would be used. However, the larger the difference between the two gas concentrations, the more difficult it is to maintain a stable output, meaning that outputs around 60.5% oxygen would suffer.
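A minimal sketch of the proposed tank-selection logic, assuming ideal linear mixing of the two source gases; the function name and the mixing model are illustrative assumptions rather than part of the system described here.

def choose_tanks_and_fraction(target_o2):
    """Pick the bracketing tank pair (0/21/100% O2) and the fraction of flow
    needed from the higher-concentration tank, assuming ideal linear mixing."""
    if not 0.0 <= target_o2 <= 100.0:
        raise ValueError("target must lie between 0 and 100% O2")
    low, high = (0.0, 21.0) if target_o2 <= 21.0 else (21.0, 100.0)
    fraction_high = (target_o2 - low) / (high - low)   # share of the high tank
    return low, high, fraction_high

# Example: a 60.5% O2 target falls midway between the 21% and 100% tanks
print(choose_tanks_and_fraction(60.5))   # (21.0, 100.0, 0.5)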
more gradual return to lower intensity levels. However,
both methods output an intensity that oscillates about
some dc offset level. The average output would correspond to this offset and is related to the valve open time
and number of pulses.
Again, the delivery methods were compared to test for
a linear relationship between the delivered intensity and
the delivery style. When plotting the average intensities
against the different valve open times, a linear trend line
with an R² value of 0.898 was calculated. However, the pulsing dose method resulted in a linear equation with an R² value of 0.972. As with the delivery without tissue,
the pulsing dose method proved to have a more linear
relationship with the outputted intensity.
Conclusion

Chemical Delivery: Optimizing Pulse Width
A wait time of 125 ms was found to create the optimal bolus for the pulsing dose method. The bolus generated had an almost symmetrical distribution through time and a longer, though fluctuating, peak region compared to the other wait times.
The wait times tested were spaced in large increments; a better wait time might be found with finer steps than the 25 ms and 50 ms increments used here. Averaging more boluses would also give a more representative bolus for each wait time. Because a pulse was defined by both an opening time and a closing time, additional tests could examine changing the 5 ms opening time that was held constant through all of the trials. Logically, a longer opening time
Automated Oxygen Delivery
Robustness
The automated oxygen delivery system produced precise and accurate outputs that were very close to the desired oxygen concentration, with low-amplitude oscillations. When the standard 21% oxygen was used, the detected oxygen varied by approximately ±0.26%, while Table 1 shows that the output from the delivery system oscillated by at most ±0.16%. This demonstrates the precision of the delivery system. The
should just yield higher intensities, but the wait time may also need to be adjusted for any changes to the opening time as well.

Chemical Delivery System: Full Dose vs Pulsing Dose

Without tissue, the full dose method was able to deliver boluses that were smaller in width (approximately 860 µm), meaning that a higher spatial resolution could be achieved with this delivery style. Channel openings could be placed closer together without having the chemical affect unwanted regions. However, the relationship between its maximum intensity and open valve time was not as linear. The pulsing dose method achieved a very linear relationship between the maximum intensity and the number of pulses, at the cost of spatial resolution: it delivered a wider bolus (approximately 1320 µm). With tissue, the pulsing dose method achieved a more constant concentration than the full dose method, and it also had a stronger linear relationship between the average concentration at the tissue and the number of pulses. Therefore, although it offers less spatial resolution, the pulsing method is the preferred method of delivery due to its linearity and its more homogeneous delivery over time. Because of the linearity, the delivered concentration is easier to control, allowing the user greater flexibility when delivering varying concentrations from the same valve.

However, the bolus width of 1320 µm delivered by the pulsing dose method is too large to target the small brain structures of the mouse. Decreasing the bolus width is desirable in order to increase the spatial resolution of the device. Therefore, additional variables need to be tested, such as the size of the channel opening. The shape of the bolus could also be altered by changing the shape of the channel opening. The intensity plots in Figure 12 show an almost Gaussian distribution along one axis for the full dose method and a less Gaussian, more linear distribution for the pulsing dose method. A spherical bolus was intended due to the circular shape of the channel opening. Because of the laminar flow delivery and diffusion, a bolus with constant intensity throughout cannot realistically be made, so the Gaussian distribution is acceptable. By decreasing the distance traveled from the channel opening to the tissue, the effects of diffusion can be reduced. Figure 12 also shows that the location of peak intensity is not exactly in the center; this can be attributed to the flow generated by the vacuum line. Further experiments could also help determine how to optimize the removal of accumulating drug in the bath as well as how to limit its effect on the delivered bolus.

The relationship between the full dose method's open valve time and maximum intensity in the no-tissue experiment seems to be exponential rather than linear, and this may also be true for the pulsing dose method. To better understand the relationship between the intensities and their respective variables, more tests should be performed with more repeats or different open valve times. However, if the linear relationship for the pulsing dose method holds true, then this chemical delivery system can be tested using actual drugs on brain slices; for example, dopamine could be delivered and quantified using cyclic voltammetry.4 The proposed method of delivery would be to load a known concentration of drug into the device, which would serve as the maximum concentration. The various pulses would then deliver a percentage of that maximum concentration depending on the number of repeats in the pulse series. The linear relationship would allow an easy calculation of what is actually delivered based on that maximum concentration.
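If the linear pulse-count calibration holds, the delivered concentration could be estimated as sketched below; the slope, intercept, full-delivery pulse count, and loaded concentration are hypothetical placeholders used only to show the calculation.

def delivered_concentration(n_pulses, slope, intercept, n_pulses_full, loaded_conc):
    """Estimate the delivered drug concentration as a fraction of the loaded
    (maximum) concentration, using a linear intensity-vs-pulse-count calibration."""
    intensity = slope * n_pulses + intercept              # predicted intensity
    intensity_full = slope * n_pulses_full + intercept    # intensity of a full delivery
    return loaded_conc * intensity / intensity_full

# Hypothetical calibration: trend-line slope/intercept, 20 pulses taken as a full dose
print(delivered_concentration(10, slope=40.0, intercept=5.0,
                              n_pulses_full=20, loaded_conc=100e-6))  # ~50 uM delivered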
Acknowledgements

The authors thank the National Science Foundation and the Department of Defense for funding these experiments, and the REU director and co-director, Professors Takoudis and Jursich, for organizing the REU. Dr. D. Eddington is thanked for allowing the research to be performed in his lab, Dr. A. Blake for his mentorship in guiding the project, and G. Mauleon for providing insight on obtaining and preparing brain tissue slices.

1. T. Tyler, The introduction of brain slices to neurophysiology (Basel: Karger, 1987), chap. Brain Slices: Fundamentals, Applications and Implications, pp. 1-9.
2. J. Mohammed, H. Caicedo, C. Fall, and D. Eddington, Lab Chip 8, 1048 (2008).
3. R. Dingledine, J. Dodd, and J. Kelly, Neurosci. Methods 2, 323 (1980).
4. A. Blake, T. Pearce, N. Rao, S. Johnson, and J. Williams, Neurosci. Methods 7, 842 (2007).
5. K. Rambani, J. Vukasinovic, A. Glezer, and S. Potter, Journal of Neuroscience Methods 180, 243 (2009).
6. D. Beebe, J. Moore, Q. Yu, R. Liu, M. Kraft, B. Jo, and C. Devadoss, Proc Natl Acad Sci USA 97, 13488 (2000).
7. T. E. N. Hajos, R. Zemankovics, E. Mann, R. Exley, S. Cragg, T. Freund, and O. Paulsen, Lab on a Chip 97, 319 (2009).
8. P. Resto, B. Mogen, E. Berthier, and J. Williams, Eur J Neurosci 10, 23 (2009).
9. J. S. Mohammed, H. Caicedo, C. Fall, and D. Eddington, JoVE 8 (2007).
10. G. Whitesides, E. Ostuni, S. Takayama, X. Jiang, and D. Ingber, Annu. Rev. Biomed. Eng 3, 335 (2001).
Microfluidic Bandage for Localized Oxygen-Enhanced Wound Healing
Z.H. Merchant
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
J.F. Lo and D.T. Eddington
Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607
An oxygen-enhanced, microfluidic bandage was fabricated out of polydimethylsiloxane (PDMS)
and contains a 100 µm thick gas-permeable membrane that allows rapid diffusion of oxygen directly
to the wound bed. The microfluidic bandage was characterized by measuring the effect of modulating
oxygen concentrations, calculating the degree of localization in oxygen delivery when subjected to
a non-planar platform, and determining the extent of oxygen penetration below the tissue surface.
The concentration of the diffused oxygen (0.02 ± 0.73 to 99.2 ± 4.46%) was shown to rapidly
equilibrate (∼30 seconds) to the modulating input oxygen concentration (0 to 100%). The device
also maintained localized oxygen delivery to a specified area when a non-planar irregularity was
introduced. Finally, the extent of oxygen penetration was found to decrease as the thickness of
tissue increased (>75% at 0.8 mm thick). These experiments demonstrate that this microfluidic
bandage can be a viable tool for oxygen-enhanced wound healing.
Introduction
Clinical evidence has demonstrated that adequate oxygenation is important in the process of wound healing.1–5
In the linear phase progression of acute wounds, oxygen
has importance in the inflammation, cell migration and
proliferation, and tissue remodeling phases.6 In the inflammation phase, oxygen is converted to Reactive Oxygen Species (ROS), which is a key step in the wound
healing pathway at low concentrations and is required
for bactericidal activity.3 Topically applied oxygen has
also been shown to induce the production of Vascular
Endothelial Growth Factor (VEGF), which accelerates
the angiogenesis of the cell migration and proliferation
phase.7 During the tissue remodeling phase, oxygen induces collagen deposition, important for regeneration of
the extra-cellular matrix and maintenance of the tensile
strength of skin.6 Oxygen may also trigger the differentiation of fibroblasts into myofibroblasts - cells which are
mechanistically similar to smooth muscle cells and are
responsible for wound area contraction.6
Current oxygen-enhanced wound healing techniques
include the Hyperbaric Oxygen Therapy (HBOT) and
the Topical Oxygen Therapy (TPOT).8 HBOT involves
placement of the patient in a sealed chamber pressurized to 2-3 atm of 100% O2 .8 However, one pitfall of
HBOT includes subjection of the patient to high atmospheric pressure in a small, enclosed space - conditions
that may result in extreme discomfort and claustrophobia. Another problem is the high level of oxygen inhaled
can cause neurotoxic conditions and oxidative damage in
non-wounded tissues.5 One solution to HBOT is TPOT,
which is the direct topical application of oxygen onto the
wound. TPOT is generally applied at 1 atm of 100%
O2 for 1-2 hours a day.5 Benefits of TPOT include localized delivery of oxygen, prevention of oxygen toxicity,
and a more open and comfortable environment than can
be found in HBOT.5 However, current TPOT techniques
are still quite expensive and are not portable. In order
for future oxygen-enhanced devices to be effective, they
must be portable to allow treatment at home, be localized
to the wound site, be inexpensive, have no risk of multiorgan oxygen toxicity, allow moisture retention, and allow for rapid diffusion of oxygen through the tissue.5
To meet these criteria, a microfluidic bandage was developed to diffuse oxygen directly to the wound. The device was fabricated for a three-year mouse study to measure the effect of oxygen on the rate of wound healing.
The device, shown in Figure 1, was fabricated from polydimethylsiloxane (PDMS) and consists of two oxygen-filled chambers 10.0 mm in diameter with microfluidic
channels 300 µm wide. The bandage is designed to be
connected to medical grade oxygen tanks and permits
oxygen to flow into the device. Under each chamber is
a 100 µm thick PDMS membrane that allows for rapid
diffusion of oxygen from the chamber to the wound, moisture retention, and a degree of elasticity to accommodate
any irregularities and unevenness of the skin.
In the characterization of this device, three experiments were performed to understand the effect of modulating oxygen concentration, the extent of localization
of oxygen delivery, and the extent of oxygen penetration
into the tissue.
Materials and Methods
Fabrication of Microfluidic Bandage
The oxygen-enhanced microfluidic bandage was fabricated using standard soft-lithography techniques in a four
part process: microfluidic chamber and channels, PDMS
membrane for gas-diffusion, chamber cap, and ports as
seen in Figure 1(b).
for 1-2 hours to further strengthen the bonding.
Leakage Test
To locate any leaks in the bandage, the device was
submerged in water, and a gas line was connected to the
ports. Any bubbles that formed indicated a leaky device
to be discarded.
Oxygen Concentration Validation
An oxygen-sensing chip coated with a ruthenium fluorescent dye (FOXY slide, OceanOptics) was used to
quantify the oxygen concentration that diffused through
the gas-permeable PDMS membrane. Because the fluorescence of the ruthenium dye is quenched in the presence
of oxygen, the concentration of oxygen can be calculated
by measuring the fluorescent intensity. By capturing a
time-lapse image of the fluorescence using a fluorescence-equipped inverted Olympus IX71 microscope and the MetaMorph software package, the change in oxygen concentration was followed over time. The fluorescent intensities
were empirically fit to a Stern-Volmer model, which was
used to convert the intensities to oxygen concentration.9
All images were acquired at 38◦ C (physiological temperature) using a FOXY-compatible fluorescent filter with excitation wavelength of 475 nm and emission wavelength
of 600 nm.
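For reference, the Stern-Volmer relation used here has the standard form I0/I = 1 + Ksv·[O2], which can be inverted to recover the oxygen concentration from a measured intensity; the calibration constants in the sketch below are placeholders, not the values fitted in this work.

def oxygen_from_intensity(intensity, intensity_0, k_sv):
    """Invert the Stern-Volmer relation I0/I = 1 + Ksv*[O2] to obtain [O2].
    intensity_0 is the fluorescence at 0% O2 and k_sv the fitted quenching
    constant (per % O2)."""
    return (intensity_0 / intensity - 1.0) / k_sv

# Placeholder calibration: I0 = 1000 counts at 0% O2, Ksv = 0.04 per % O2
print(oxygen_from_intensity(550.0, intensity_0=1000.0, k_sv=0.04))  # ~20.5% O2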
FIG. 1: a) The chamber and the ports are indicated on
a microfluidic, oxygen-enhanced bandage. The device
was fabricated from polydimethylsiloxane (PDMS),
which is a moldable, biocompatible, and gas-permeable
material; b) an exploded model of the device showing
all of the parts; c) a cross-section schematic of the one
chamber positioned over the wound; Oxygen is shown to
enter the chamber and diffuse onto the wound through
the 100 µm thick PDMS membrane; d) the microfluidic
device attached to a hairless mouse (strain STZ) for a
three-year animal study to analyze the effect of this
oxygen-enhanced bandage on the rate of wound healing.
The microfluidic channels and chamber were designed
using AutoCAD and printed onto a 16k dpi photomask.
SU-8-2150 photoresist (MicroChem, Newton, MA) was
spun to 100 µm thick according to the manufacturer’s
protocol. The SU-8 was then placed under the photomask and exposed to ultraviolet light, causing the SU-8 to selectively crosslink only at the uncovered areas. After post-exposure baking and further crosslinking of the exposed areas, the
uncrosslinked areas were removed with SU-8 developer
(MicroChem, Newton, MA), leaving a positive master
mold. Once the master was fabricated, premixed 10:1
ratio of PDMS prepolymer and curing agent was poured
onto the master until the PDMS was 1.0 mm thick. The
PDMS mixture was cured for 2 hours at 90◦ C. Holes were
then made for the chamber and ports.
The 100 µm thick gas-permeable PDMS membrane
was made by spinning PDMS on a silicon wafer using
a precision spinner. The wafer was spun at 500 RPM for 10 seconds to spread the PDMS droplet, and then at 800 RPM for 30 seconds. The membrane was cured for
5 minutes at 85◦ C.
The chamber cap was fabricated by dropping ∼150 µl
of PDMS onto a silicon wafer heated to 120◦ C and then
removing the PDMS once cured. Once all parts of the
device were fabricated, every part surface was treated for
30 seconds under a corona plasma device (STP, Inc) prior
to bonding. The completed device was baked at 100◦ C
Characterization: Modulation of Oxygen
Concentrations
Equilibration of the microfluidic bandage to 0% oxygen
concentration was achieved by pumping a mixture of 5%
CO2 and 95% N2 into the device for 10 minutes. After
0% equilibration, the input concentration was switched
to 100% O2 , and the concentration of the diffused oxygen
through the 100 µm thick PDMS membrane was measured. This process was repeated, measuring the change
in oxygen concentration after switching the input concentration from 0 to 10.5, 21, and 60.5% O2 . This experiment was repeated at each input concentration on three
different devices.
Characterization: Localization of Oxygen Delivery
in Conformal and Non-Conformal Devices
This experiment was conducted to demonstrate the improved sealing of the device design. Because our chamber
has a large width to height ratio, standard engineering
design suggests that the chamber should have interior
pillars to prevent collapse. However, with the pillars, the
device is unable to accommodate the uneven topology
17
c
2011
University of Illinois at Chicago
Journal of Undergraduate Research 4, 16 (2011)
FIG. 3: A range of gas concentrations were rapidly
diffused (<30 s equilibrium) through the PDMS
membrane. Modulation of oxygen was achieved by first
equilibrating the device to 0% O2 (5% CO2 , 95% N2 )
and then switching the input concentration to 10.5, 21,
60.5, or 100% O2 .
FIG. 2: a) The conformal microfluidic bandage lacks interior pillars to support the chamber; b) the non-conformal microfluidic bandage was fabricated with interior pillars to prevent collapse; c) a schematic cross-section side view of the conformal chamber. The PDMS membrane is shown to wrap around the PDMS obstacle; d) a side view of the non-conformal chamber. The interior pillars prevent the PDMS membrane from conforming around the obstacle.
Input Concentration (% O2)   Diffused Concentration at Equilibrium (average ± standard deviation, % O2)
10.5                         8.88 ± 1.81
21.0                         22.2 ± 1.50
60.5                         58.9 ± 3.89
100                          99.2 ± 4.46

TABLE I: Modulation of Oxygen Concentrations
that is common to a healing wound, causing delocalization of the oxygen delivery. To measure the extent of
oxygen localization with and without the pillars, two devices were fabricated for this experiment: one in which
the PDMS membrane is attached only to the circumference of the chamber (Figures 2(a), 2(c)), and one in which
the flexibility of the PDMS membrane is impeded by pillars in the chamber (Figures 2(b), 2(d)). The chamber of
each type of device was placed on a custom made disk of
PDMS 8 mm in diameter and 0.5 mm thick. Each type
of device was equilibrated to 0% O2 and then the input
concentration was switched to 100% O2 . The diffused
oxygen concentration was measured. Three trials were
conducted for each type of device.
Results
Characterization: Modulation of Oxygen
Concentrations
The change in concentration of the diffused oxygen through the PDMS membrane was recorded at each switch of the input concentration from 0% to 10.5, 21, 60.5, and 100% O2, as shown in Figure 3. The average diffused concentration at equilibrium with 0% input was 0.02 ± 0.73%. Equilibration of the diffused oxygen concentration was achieved within 30 seconds. For each setting, the diffused output oxygen concentration approximately equilibrated to its respective input concentration (Table I).
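One way to extract the ~30 second equilibration time from a recorded concentration trace is to find the first time after which the signal stays within a tolerance of its plateau value; the 5% criterion and the synthetic trace below are assumptions for illustration, not the analysis actually used.

import numpy as np

def equilibration_time(t, conc, tol=0.05):
    """Return the first time after which `conc` stays within `tol` (fractional)
    of its final plateau value, estimated from the tail of the trace."""
    t, conc = np.asarray(t, dtype=float), np.asarray(conc, dtype=float)
    final = conc[-10:].mean()
    inside = np.abs(conc - final) <= tol * abs(final)
    for i in range(len(t)):
        if inside[i:].all():                 # remains inside the band from here on
            return t[i]
    return None

# Synthetic trace: exponential rise from 0 to 21% O2 with a 7 s time constant
t = np.linspace(0.0, 60.0, 121)
conc = 21.0 * (1.0 - np.exp(-t / 7.0))
print(equilibration_time(t, conc))           # about 21 s for a 5% tolerance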
Characterization: Extent of Oxygen Penetration
Oxygen penetration was measured by calculating the
concentration of oxygen after the oxygen diffused through
a PDMS membrane and a phantom tissue. The phantom
tissue, consisting of 3% agar (Fisher Scientific), was sliced
to desired thicknesses ranging from 0.2 to 1.0 mm. The
chamber was placed directly on top of a sliced phantom
tissue of selected thickness. The microfluidic bandage
was equilibrated to 0% O2 , and then the input concentration was switched to 100% O2 . At each thickness,
three trials were conducted using the same device.
Characterization: Localization of Oxygen Delivery
in Conformal and Non-Conformal Devices
The change in the diffused oxygen concentration under the chamber and exterior to the device was mapped
in Figures 4(a) and 4(b) for the conformal and nonconformal devices, respectively, when they were placed
over the PDMS obstacle. For the conformal device, the
FIG. 5: Penetration of oxygen at varying depths of 3%
agar phantom tissue at 1, 3, and 5 minutes after input
O2 was changed from 0 to 100%.
O2 input, concentration at 0.8mm thickness was 75.8 ±
5.38% O2 .
Discussion and Conclusions
Characterization: Modulation of Oxygen
Concentrations
The microfluidic bandage is able to precisely deliver
oxygen directly to the wound, with concentrations ranging from 0.02 ± 0.73 to 99.5 ± 4.40%. Equilibration for
all input concentrations tested was achieved within 30
seconds. Because of this high degree of precision, the
user of the device is able to provide oxygen to the wound
at any desired concentration. The short equilibration
time allows the user to quickly cycle between two or more
concentrations in any protocol that may require hypoxic,
hyperoxic, or intermittent hypoxic conditions.
FIG. 4: a) The diffused concentration in the conformal
device was measured when the input concentration was
changed from 0 to 100% O2 . Localization was achieved
by providing 100% O2 only to directly under the
chamber; b) a catastrophic failure is seen in the
non-conformal device, as the diffused oxygen
concentration did not rise significantly above ambient at
100% O2 input.
Characterization: Localization of Oxygen Delivery
in Conformal and Non-Conformal Devices
diffused oxygen concentration rapidly changed from 0 to
99.5 ± 4.40% O2 (average concentration at equilibrium),
while the oxygen concentration exterior to the device
was maintained at ambient oxygen concentrations (∼21%
O2 ). In the non-conformal bandage, the average concentration achieved at equilibrium with 100% O2 input was
22.61 ± 0.44% O2 with exterior normoxic conditions.
The conformal device without the interior pillars had
a high degree of localization and sealing - this device will
be able to conform around non-planar irregularities of
the skin, allowing delivery of 100% O2 only to the wound
under the chamber, while leaving the part of the skin exterior to the chamber at ambient oxygen concentrations.
The non-conformal device with the interior pillars did not
deliver the oxygen locally, with the diffused oxygen concentration barely reaching above 21%. It was expected
that mixing of the diffused 100% with ambient 21% at
the exterior of the device would yield the equilibration
of diffused oxygen concentration at about 60%. However, it may be deduced that the interior pillars in the
non-conformal device caused so much inelasticity that the
PDMS obstacle may have blocked diffusion of oxygen to
Characterization: Extent of Oxygen Penetration
The penetration of oxygen at 1, 3, and 5 minutes of
100% O2 input through increasing thicknesses of 3% agar
is displayed in Figure 5. In general, as the depth of
the phantom tissue increased, the oxygen concentration
through that depth decreased. At 5 minutes of 100%
the sensor. Thus, the removal of pillars in the conformal bandage is critical for accommodating non-planar features such as scabbing and scarring during wound healing.
Future Work
This oxygen enhanced micro-fluidic bandage will be
used in a three-year mouse study to assess the effect of
oxygen delivered by this device on the rate of wound healing. Specifically, we will measure the change in the rate
of wound closure with bandage under hypoxic and hyperoxic conditions. We will also test the rate of collagen
deposition and the levels of VEGF with the device.
Characterization: Extent of Oxygen Penetration
As expected, the diffused oxygen concentration decreased as the thickness of the phantom tissue increased.
However, even after 5 minutes of 100% O2 input, the concentration at 0.8 mm depth was above 75%. This is important
because 0.8 mm lies within the range of thickness of the
human epidermis (0.4-1.5 mm).10 Measuring the extent
of oxygen penetration is necessary because it allows the
user of the oxygen-enhanced bandage to not only control
the oxygen concentration delivered to the surface of the
wound, but also the oxygen concentration delivered to
the epidermis and dermis. Thus, this experiment demonstrates that consistently high concentrations of oxygen
can be distributed beneath the surface of the skin when
100% O2 is delivered to the surface. Note that while
the trend seems to be linear in the range of thicknesses
tested, we might expect that at greater thicknesses, an
exponential decay should be more apparent, following an
expected diffusion-based trend.
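A minimal sketch of the anticipated diffusion-limited fall-off, modeled here as a simple exponential attenuation C(x) = C_surface·exp(-x/λ); the decay length is a placeholder chosen only to be roughly consistent with the >75% value observed at 0.8 mm, not a fitted parameter.

import numpy as np

def penetration_profile(depth_mm, surface_conc=100.0, decay_length_mm=3.0):
    """Exponential attenuation C(x) = C_surface * exp(-x / lambda), a simple
    stand-in for the diffusion-limited decay expected at larger depths."""
    return surface_conc * np.exp(-np.asarray(depth_mm, dtype=float) / decay_length_mm)

depths_mm = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # the tested phantom thicknesses
print(penetration_profile(depths_mm))             # % O2 expected at each depth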
Acknowledgments

This project was supported financially by the National Science Foundation and the Department of Defense, EEC-NSF Grant # 0755115. The experiments and analysis were conducted at the University of Illinois at Chicago in the Biological Microsystems Laboratory. The author (Zameer Merchant) would like to thank Dr. Christos Takoudis and Dr. Greg Jursich for their leadership and guidance in this project.

1. R. Fries, W. Wallace, S. Roy, P. Kuppusamy, V. Bergdall, G. Gordillo, W. Melvin, and C. Sen, Mutation Research - Fundamental and Molecular Mechanisms of Mutagenesis 579, 172 (2005).
2. S. C. Davis, A. L. Cazzaniga, C. Ricotti, P. Zalesky, L.-C. Hsu, J. Creech, W. H. Eaglstein, and P. M. Mertz, Archives of Dermatology 143, 1252 (2007).
3. G. Gordillo and C. Sen, American Journal of Surgery 186, 259 (2003).
4. H. Said, J. Hijjawi, N. Roy, J. Mogford, and T. Mustoe, Archives of Surgery 140, 998 (2005).
5. L. Kalliainen, G. Gayle, R. Schlanger, and C. Sen, Pathophysiology (2003).
6. M. Franz, in Current Diagnosis & Treatment: Surgery (2006).
7. G. M. Gordillo, S. Roy, S. Khanna, R. Schlanger, S. Khandelwal, G. Phillips, and C. K. Sen, Clinical and Experimental Pharmacology and Physiology 35, 957 (2008).
8. C. K. Sen, Wound Repair and Regeneration 17, 1 (2009).
9. A. Vollmer, R. Probstein, R. Gilbert, and T. Thorsen, Lab on a Chip 5, 1059 (2005).
10. D. H. Chu, in Fitzpatrick's Dermatology in General Medicine, 7th ed. (2008).
Comprehensive JP8 Mechanism for Vitiated Flows
K. M. Hall
Chemical and Biomolecular Engineering, University of Pennsylvania
X. Fu and K. Brezinsky
Mechanical and Industrial Engineering, University of Illinois at Chicago
With the intent of optimizing the combustion process of complex hydrocarbon liquid fuels such
as JP8 in internal combustion jet engines and their afterburners, simpler surrogate hydrocarbon
compounds were used in a counterflow diffusion flat flame burner to validate the chemical kinetic
modeling process. The combustion products sampled from the flame produced during the burning
of the validation fuels methane and n-heptane were analyzed using a Varian CP3800 gas chromatograph. The effects of sampling with a 350 micron outer diameter (OD) fused-silica tube were
compared to those of a 3.5 mm quartz probe in order to minimize sampling effect on the flame.
Simulations of the sampled species were performed using the OPPDIF package of CHEMKIN with
chemistry models provided by UIC. Concentrations of major species (e.g. CO, CH4 , CO2 , O2 ) were
found to be well simulated with the models, with the best fit occurring for methane and n-heptane,
and wider variation occurring with some species in all validation fuels.
Introduction
With the increasing demand for green energy and efficient, environmentally friendly fuels, the combustion of
complex hydrocarbon liquid fuels such as JP8 in internal
combustion jet engines and their afterburners becomes
increasingly important. However, with the benefits that
come from burning these fuels come enormous environmental impacts. In addition to the desired energy, combustion products of these fuels include harmful pollutants
such as soot, carbon monoxide, unburned hydrocarbons,
and others. In order to optimize the combustion process
of these fuels for maximum efficiency and minimum negative environmental and health impacts, it is necessary
to develop a comprehensive chemical kinetic model of the
process.
JP8 is a liquid fuel mixture of various hydrocarbons
ranging in size from C4 to C16 , which makes it a prohibitively complex task to accurately and completely
model its combustion.1 Hence, in order to validate the
experimental protocols and establish a standard for modeling, simpler surrogate fuels, including m-xylene, n-propylbenzene, decane, and n-heptane, are used.
Preliminary combustion models have been developed
for m-xylene, n-propylbenzene, and n-heptane and been
found to correlate within experimental uncertainty with
the predictions generated by the computer simulation.
Materials and Methods
The experimental apparatus is a counterflow diffusion
flat flame burner, into which the oxidizer gases are injected from the top and the prevaporized fuel is injected
from the bottom with a syringe pump. This type of
burner, shown in Figures 1(a) and 1(b), consists of two opposing streams, a fuel stream and an oxidizer stream, that
FIG. 1: Methane experimental setup. Counterflow
diffusion flame burner.
FIG. 3: Sampling devices. Above, fused-silica tube;
below, quartz probe.
FIG. 2: Geometry of the axisymmetric opposed flow
diffusion flame which enables 1D modeling. From
OPPDIF Application User Manual.
of each fuel.
The large size of the quartz probe was found to interfere with the flame’s flow and introduce additional error into the measurement of species. For the n-heptane
flame sampling and later experiments, the probe was replaced with a smaller fused-silica column with an outer diameter of 300 microns, retaining the same inner diameter as the quartz probe, 250 microns. The upcoming experiments include repeating the process using this setup
with methane as the fuel. Figure 3 shows the difference
in outer diameter of the two sampling devices.
Due to the relatively few species produced during its
combustion and the predictability of the concentration
profile, methane has been used as a validation fuel to
test and optimize the apparatus.
Simulations of methane and n-heptane flames are performed using the OPPDIF package of CHEMKIN with
chemistry models provided by UIC, and compared to the
data obtained with the flame apparatus. The simulation uses the UIC m-xylene model3 and the GRImech
model4 for the methane experiments, and Paolo Berta’s
n-heptane combustion model for the heptane experiments. By entering methane as the only fuel in the UIC
model, the simulation is forced to bypass the xylene and
larger molecule chemistry in its prediction.
run opposite to one another and create a flame between
the two inlets, simulating the flow of fuel from an afterburner against the oxidizing gases in the atmospheric
air.2 This setup, as shown in Figure 2, enables the formation of a stable stagnation plane and flat diffusion flame,
which greatly simplifies the geometry and enables onedimensional modeling of the flame structure due to the
relatively high strain rate. In order to simulate the conditions of the jet engine afterburner, the fuel is heated to
300◦ C and the oxidizer gases are heated to 700◦ C prior
to injection. A quartz probe with an outer diameter of
3.5 mm is used to sample the combustion products and
attached to a gas chromatograph to measure the mole
fractions of different species present in the flame. After
a 6-minute equilibration period during which the flame
stabilizes, a gas sample is withdrawn and injected into
the GC for analysis.
A type K thermocouple is used to measure flame
temperature. Because this type of thermocouple cannot withstand the high temperatures of the flame, it is
used to measure the temperature of the exit fuel (350 to
360◦ C) and oxidizer gases (650 to 710◦ C). Future work
includes use of Pt-Pt/13%Rh thermocouples to obtain
complete temperature profiles of the flames.
A nitrogen shield is run from bottom to the top of the
apparatus to prevent the combustion products from mixing with the environmental air and maintain an accurate
sampling of the concentration profile of the components
within the burner.
The burner is placed on an adjustable platform, which
is moved up and down relative to the stationary sampling
probe in order to adjust the distance from the fuel inlet
in the burner to the probe. This sampling process is
repeated, increasing the distance from the fuel inlet to
the probe in 0.5 mm intervals spanning the 1.44 cm total
distance from the fuel inlet to oxidizer nozzle in order to
create a complete 1D profile of the combustion products
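For orientation, the sampling sweep described above amounts to roughly 30 probe heights; the short sketch below simply enumerates a 0.5 mm grid across the 14.4 mm inlet-to-nozzle gap (whether the end point at the nozzle itself is sampled is an assumption).

import numpy as np

# Probe heights above the fuel inlet (mm): 0.5 mm steps across the 14.4 mm gap
positions_mm = np.arange(0.0, 14.4, 0.5)          # 0.0, 0.5, ..., 14.0
positions_mm = np.append(positions_mm, 14.4)      # optionally include the nozzle end point
print(len(positions_mm), positions_mm[:4])        # 30 sampling heights in total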
Results and Discussion
In the methane experiment with the original quartz
probe setup, the observed concentrations of CO, CH4 ,
CO2 , N2 , and O2 were found to agree highly with the
calculated concentrations from CHEMKIN, while other
species (H2, C2H2, C2H4) showed much larger deviation from the simulation, as shown in Figures 4(a)-4(c). In a new series of simulations, it was found that the GRI mechanism provides a better fit to the experimental data than the UIC m-xylene model, as shown in Figures 5(a) and 5(b), due to differences in kinetics and additional
chemical species in the models.
Because the large quartz probe used for sampling was
FIG. 5: Comparison of UIC m-xylene model, GRImech
model and experiment. Conditions: CH4 0.4 L/min,
N2 (fuel side) 1.6 L/min, O2 0.7 L/min, N2 1 L/min.
period. In order to correct for this, the equilibration period in the flame was altered to allow the flame to burn
for 4 minutes rather than 6 prior to insertion, and then
allow the tube to spend 2 minutes in the flame before
sampling.
The use of this tubing showed much promise for precision in species measurement in the n-heptane combustion, and lowered the experimental barriers to precise and accurate measurement of the species in an actual
afterburner. As predicted, the OPPSMOKE modified
OPPDIF simulation, using n-heptane combustion chemistry provided by Paolo Berta,1 shows excellent agreement with the n-heptane experimental data, as shown in
Figures 6(a) and 6(b).
FIG. 4: Comparison of the UIC m-xylene model methane flame simulation with experiment. Dots: experimental data; lines: simulation data. Conditions: CH4 0.4 L/min, N2 (fuel side) 1.6 L/min, O2 0.7 L/min, N2 1 L/min.
found to affect the flow in the flame, it was replaced
by much smaller, less-invasive fused-silica tubing. However, the smaller tubing of the less invasive probe cannot
withstand the high temperatures of this experiment, and
has been observed to melt when exposed to the hightemperature flame of the hotter-burning fuels for a long
Although some discrepancies still exist between the
simulation and experimental data, the fused-silica tube
setup shows great promise for increasingly accurate and
consistent experimental measurements that will provide
confidence in the validation of these and future models
for combustion.
Future Work
In order to truly validate these models, the methane
and heptane experiments must be repeated to provide
proof of repeatability and consistency of data. Sampling
of the methane flame should be done using the new probe
setup, in order to quantify the comparison in accuracy
between the quartz and fused-silica probes and verify the
heptane results. Other hydrocarbon fuels should also be
simulated and sampled for further validation, as well as
simple surrogate mixtures such as a methane/heptane
mixture.
Acknowledgements
The authors would like to thank the National Science
Foundation and Department of Defense for financial support from EEC-NSF Grant #0755115, as well as for sponsoring the Research Experience for Undergraduates program at the University of Illinois at Chicago. KMH would
also like to extend gratitude to the directors of the 2010
REU in Novel Advanced Materials, Professor Gregory
Jursich and Professor Christos Takoudis of UIC.
FIG. 6: Species of n-heptane flame simulation and
experiment.
1. P. Berta, Ph.D. thesis, University of Illinois at Chicago, Chicago, IL, USA (2005).
2. R. Seiser, L. Truett, D. Trees, and K. Seshadri, Proceedings of the Combustion Institute 27, 649 (1998).
3. R. Sivaramakrishnan et al., UIC m-xylene model: includes Sivaramakrishnan's toluene oxidation model with updated pyrolysis steps, m-xylene thermochemistry from Dagaut's m-xylene model, methylcyclopentadiene reactions from Lifshitz et al., and updated cyclopentadiene reactions from Burcat et al.
4. G. P. Smith, D. M. Golden, M. Frenklach, N. W. Moriarty, B. Eiteneer, M. Goldenberg, C. T. Bowman, R. K. Hanson, S. Song, W. C. G. Jr., et al., Tech. Rep., University of California, Berkeley, http://www.me.berkeley.edu/gri_mech/ (2010).
TEM Study of Rhodium Catalysts with Manganese Promoter
A. Merritt
Department of Physics, Purdue University, West Lafayette, Indiana, 47907
Y. Zhao and R.F. Klie
Department of Physics, University of Illinois at Chicago, Chicago, Illinois, 60607
The focus of this research is on studying the effects of a manganese promoter on rhodium particles
for the purposes of ethanol catalysation from syngas. Through TEM imaging, the particle size has
been studied both before and after reduction with and without a manganese promoter. For pure
rhodium on silica, the average particle size before reduction was 3.1 ± 0.8 nm and 3.1 ± 0.8 nm
after reduction. For rhodium with a manganese promoter on silica, the average particle size before
reduction was 2.3 ± 0.5 nm and 2.4 ± 0.7 nm after reduction. These results point to a clear effect
of manganese on the particle sizes of rhodium, but an insufficient effect on particle size to fully
explain all effects of manganese promotion on rhodium catalysts. Further research will be focusing
on using a JEOL-2010F to conduct electron energy loss spectroscopy (EELS) and Z-contrast imaging
structural studies.
Introduction
In the modern world, doubt over a consistent
petroleum supply has led to increased research on
alternative fuel sources. One such alternative is ethanol,
a short-chain alcohol with the molecular formula C2H5OH. The most popular method for ethanol
production is fermentation of carbohydrates, a process
that has been in use for thousands of years to produce
alcoholic beverages, but has only comparatively recently
been adapted for industrial ethanol production. This
process has several drawbacks, the most significant
of which are the relative impurity of the end product
and the low rate of production. Catalysation is an
alternative ethanol production method; through the use
of the Fischer-Tropsch (FT) process, syngas (a mixture
of H2 and CO) can be converted into ethanol, syngas
itself being derived from various feedstocks such as
coal gasification or organic gas (biogas). This process
offers several advantages over traditional fermentation, most significantly higher purity and production capacity.1
Nonetheless, an effective catalyst is needed to ensure
the usefulness of this process. Most importantly for
industrial applications, a catalyst must have high
selectivity, activity and longevity. As well, the usage of
a promoter, which is a material that affects the characteristics of a catalyst without being a catalyst itself, can
improve all of these characteristics. However, contemporary research on catalyst effectiveness and promoters is
sparse, consisting mostly of empirical studies1 . Results
from previous studies are that rhodium syngas catalysts
with manganese promoters have increased selectivity for
ethanol over methane as well as increased activity2 . A
fundamental understanding of catalysation mechanics
and catalyst-promoter interaction is important to further
develop the field.
The focus of this research project is on rhodium catalysts with a manganese promoter. Empirical research
shows that rhodium is an ineffective catalyst of syngas
for ethanol production, but that manganese acts as a
promoter, improving the selectivity and activity of the
rhodium.1,3 H. Trevino reports that the addition of
Mn to Rh catalysts on zeolite NaY support increases
the selectivity of oxygenates, namely ethanol and ethyl
acetate, without an increase in activity during immersion in NaOH solution.4 F. van den Berg et al. report that RhMnMo exhibits greatly increased selectivity for ethanol as well as increased activity compared to pure Rh on a silica support.5 Wilson et al. report, in contrast, that the addition of Mn to Rh catalysts in a silica gel has no significant impact on selectivity but leads to a tenfold increase in activity.6 More recent research by T. Feltes confirms the increase in both selectivity and activity of Mn-promoted Rh catalysts for ethanol catalysation from syngas.2 These studies all point to a need
for a better fundamental understanding of the role of
Mn in the promotion of Rh catalysts with respect to
particle size.
In order to explore the interaction, transmission electron microscopy (TEM) will be used to study the effects
of manganese loading on rhodium particle size and distribution on a silica (SiO2 ) substrate. The usage of highenergy electrons to image the samples provides the capability to measure the sizes of rhodium particles down
to approximately 1 nm in diameter. A study of rhodium
particle size is expected to improve understanding of the
impact of manganese promotion on rhodium particle size,
from which conclusions can be extended to the impact of
the particle size on syngas catalysation.
Method
Various methods exist to load rhodium onto a silica
support, and then add the manganese promoter; a
discussion of these techniques is beyond the scope of this
report, but previous authors can offer insight into this
process2 . An important step in the preparation process
is the calcination and reduction of the sample. After
the rhodium and manganese are loaded, the sample is
calcined by heating in air to 350◦ C for four hours, and
then reduced by heating to 300◦ C for 2 hours under an
H2 flow, in the end leaving a pure catalyst particle on the
support; it is at this stage that the focus of this research lies: studying the effect of a manganese promoter on the outcome of the calcination-reduction process for rhodium. This process is essential for rendering the
catalyst usable2 , and so is of great interest to the
scientific and industrial communities.
FIG. 1: TEM image of Rh particles on SiO2 substrate.
Rhodium particles are dark, the gray is the silica
support, the bottom right is empty space.
The powdered samples are prepared by taking bulk
silica and adding rhodium and manganese through the
dry impregnation process, whereby just enough metal
solution is used to fill the pore volume of the silica
support2 . This bulk sample is then ground with mortar
and pestle to a fine powder. A small part (<1 gram) of
this powdered sample is then mixed with approximately
20 mL of DI water and sonicated for 20 minutes to
reduce the average silica particle size. A holey copper
grid is immersed in this solution twice, and allowed
to dry in air after each immersion. The final product
has enough sample deposits for the purposes of this study.
a Gatan 1k × 1k CCD Camera on autoexposure, and
interpreted using the Gatan Digitalmicrograph program.
This program uses setup information stored in the image file to, among other features, convert pixel distances into actual distances, allowing lengths in the image to be measured.
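The pixel-to-length conversion performed by the software amounts to multiplying a measured pixel span by the calibrated pixel size; a minimal sketch, using the 0.11 nm/pixel figure quoted later for ×300k magnification and a placeholder pixel count.

def pixels_to_nm(pixel_span, nm_per_pixel=0.11):
    """Convert a measured span in pixels to nanometers using the image
    calibration (about 0.11 nm/pixel at x300k magnification for this setup)."""
    return pixel_span * nm_per_pixel

# Placeholder measurement: a particle spanning 28 pixels is roughly 3.1 nm across
print(pixels_to_nm(28))   # 3.08 nm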
Data and Analysis
For the TEM work, a JEOL-3010 TEM was used.
This instrument is capable of 2 Å resolution. For
the purposes of this study, phase contrast imaging
was used, whereby plane electron waves are distorted
slightly by an incident material’s structure, producing
a phase difference between the diffracted electrons and
the undiffracted ones and a change in intensity at the
imaging point. The fact that the rhodium particles
have a crystalline structure versus the amorphous silica
support makes this technique effective, as the rhodium
particles appear as dark spots on a gray background.
In addition, the normal mass-thickness contrast imaging
aided recognition of the heavier rhodium particles when
the phase difference was insufficient.
For each preparation method, the ten best images are
selected out of all those taken and used to obtain an
average particle size. Ten particles are used from each
image for 100 particles in total per preparation method.
For pure rhodium on silica, the average particle size was
3.1 ± 0.8 nm before reduction and 3.1 ± 0.8 nm after
reduction. For rhodium with a manganese promoter on
silica, the average particle size was 2.3 ± 0.5 nm before
reduction and 2.4 ± 0.7 nm after reduction. For the in
situ heated promoted catalysts, the average particle size
was 2.6 ± 0.9 nm before reduction and 2.4 ± 0.7 nm after
reduction. A histogram representative of the particle size
distribution is shown in Figure 2. The particle size results
are compared in Table I.
For the study, the promoted rhodium unreduced
sample was used for an in situ reduction study. The
sample was heated in the vacuum of the microscope,
approximately 10−7 torr, in order to drive off the oxide
layer. After two hours, the sample was imaged and
analyzed. The sample was allowed to cool to room
temperature over two hours, and imaged again. This
was done to compare reduction processes as a control.
Sample                    Average Particle Size (nm)   Standard Deviation (nm)
RhOx (unreduced)          3.1                          0.8
Rh+Mn Ox (unreduced)      2.3                          0.5
Rh+Mn                     2.4                          0.7
Rh+Mn (in situ heating)   2.6                          0.9
Rh+Mn (after cooling)     2.4                          0.6

TABLE I: Particle size results.

A typical TEM image for this project is shown below (Fig. 1). Images are taken at ×300k magnification using
port material, as the current amount of silica underneath
or above the rhodium particles significantly impacts the
level of contrast attainable using the TEM.
Chromatic aberration in the thick samples is caused by electron scattering in the amorphous silica, which induces incoherent phase shifts in the plane electron wave impinging on the specimen. The reduced coherence of
the electron beam then influences the diffraction of the
constituent electrons through the rhodium particles,
resulting in some electrons being randomly scattered.
This produces a pronounced blurring and graying effect
in the images, and contributes significantly to read error.
FIG. 2: Unreduced rhodium particle size distribution as
a histogram.
Note that the minimum particle size that can be measured is approximately 1.5 nm, at which point imaging
is almost entirely through mass-contrast imaging; however, particles smaller than this do seem to appear in
the TEM images, but it is impossible to reliably measure
their diameter, and so they are not counted. Switching to
a higher magnification may improve the ability to count
these particles, but this would be difficult due to stability issues. The impact of ignoring these particles is most
likely small, however, due to the very limited number
appearing in images.
Particle Measurement Errors
Errors in using the Digitalmicrograph program can be
divided into two categories: orientation and line-laying
errors. The former refers to taking the length measurement along different axes of a particle, due to the imperfectly spherical nature of a rhodium particle, and was
found to be an average of 0.2 nm. The latter error refers
to the imperfect boundaries of the particle in the image,
resulting in imperfect starting and ending points for a
length measurement; it averaged 0.2 nm after testing.
The error due to taking a chord instead of a diameter is
assumed to be contained in the line-laying error, and so
is not counted separately. As well, the minimum pixel
distance at ×300k magnification is 0.11 nm, but this is subsumed in the line-laying error. The total read error per particle is then 0.4 nm, which for the purposes of the average reduces to 0.04 nm with 100 test particles, since δa = δ/√N; this is far less than the standard deviation.
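Restating the error propagation above as a quick numerical check: the 0.4 nm per-particle read error averaged over 100 particles gives a 0.04 nm standard error.

import math

per_particle_error_nm = 0.4                 # combined orientation + line-laying read error
n_particles = 100
mean_error_nm = per_particle_error_nm / math.sqrt(n_particles)
print(mean_error_nm)                        # 0.04 nm, far below the 0.5-0.9 nm standard deviations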
Conclusion
For pure rhodium on silica, the average particle size
was 3.1 ± 0.8 nm before reduction and 3.1 ± 0.8 nm after
reduction. For rhodium with a manganese promoter
on silica, the average particle size was 2.3 ± 0.5 nm
before reduction and 2.4 ± 0.7 nm after reduction. This
points to a clear effect of the manganese on particle size
and distribution. However, the particle size difference
does not fully explain all phenomena associated with
promotion of rhodium catalysts with manganese. In situ
reduction establishes a control for prior reduction, and
the two processes produce comparable results.
Discussion
The largest problems occur in imaging the catalyst
specimens. As the rhodium particles are only a few
nanometers in diameter, drift of portions of a nanometer
in the time it takes to capture an image (on the order
of a second) can seriously impede progress. Thus,
improvements could be made to minimize drift and so
improve the contrast and resolution of the images, and
thus increase the accuracy of particle size measurements.
Future work will be focused on analyzing the samples with a JEOL-2010F capable of EELS and Z-contrast
imaging. EELS enables the analysis of electronic characteristics of the specimen, such as oxidation state densities. This, combined with the better resolution available
through Z-contrast imaging, a different imaging technique, will allow studies of the density of certain oxidation states (notably Rh2 O3 and RhO2 ) in the rhodium
particles, both with and without the manganese promoter. Further studies should reveal any differences in
the spatial density of these states, as well as any interfacial interactions between the rhodium particles and the
silica support.
Secondary imaging concerns are hydrocarbon contamination during imaging and poor contrast; these have
been mitigated through decreased exposure time and
usage of DI water for the former and decreased aperture
size and improved focusing for the latter.
The current specimen preparation method has proven
satisfactory as far as deposit distribution is concerned.
However, work is needed to reduce the thickness of sup-
Acknowledgments

The authors would like to thank the National Science Foundation and the Department of Defense for funding the Research Experience for Undergraduates (REU) program at the University of Illinois at Chicago under EEC-NSF Grant # 0755115. As well, the authors would like to thank Professors Takoudis and Jursich of UIC as the REU organizers, Ke-Bin Low of the UIC Research Resource Center East for his training and help with the JEOL-3010 TEM, and the Research Resource Center at UIC for their TEM expertise.
1. J. J. Spivey and A. Egbebi, Chem. Soc. Rev. 36, 1514 (2007).
2. T. Feltes, Ph.D. thesis, University of Illinois at Chicago (2010).
3. G. C. Bond, in Heterogeneous Catalysis: Principles and Applications (Oxford Science Publications, 1987).
4. H. Trevino, Ph.D. thesis, Northwestern University (1997).
5. F. van den Berg, J. Glezer, and W. Sachtler, J. of Catal. 93, 340 (1985).
6. T. Wilson, P. Kasai, and P. Ellgen, J. of Catal. 69, 193 (1981).
7. V. Subramani and S. K. Gangwal, Energy Fuels 22, 814 (2008).
Selective Atomic Layer Deposition (SALD) of Titanium Dioxide on Silicon and
Copper Patterned Substrates
K. Overhage
Department of Chemical Engineering, Purdue University, Indiana
Q. Tao
Department of Chemical Engineering, University of Illinois at Chicago, Illinois
G. Jursich
Department of Bioengineering and Department of Mechanical and
Industrial Engineering, University of Illinois at Chicago, Illinois
C. G. Takoudis
Departments of Chemical Engineering and Department of Bioengineering, University of Illinois at Chicago, Illinois
Atomic Layer Deposition (ALD) of TiO2 has potential applications in the micro- and nanoelectronics industry such as in the formation of copper barrier layers. In this paper, TiO2 deposition
on silicon and copper substrates is studied with a focus on the initial growth and nucleation period
on different substrates. Silicon with about 1.5 nm-thick native oxide, silicon with reduced oxide
thickness (i.e., < 1 nm-thick), and copper patterned silicon substrates are used for TiO2 deposition
within the ALD temperature window over which the film deposition rate is independent of the
substrate temperature. The obtained results are used and discussed in the context of selective TiO2
deposition on the silicon part of copper-patterned silicon substrates. Selective ALD is found to be
possible on the silicon of these substrates by taking advantage of the 15-20 cycle TiO2 nucleation
period on copper, therefore allowing a film ∼ 2.5 nm-thick to grow on silicon while less than 1-2 monolayers grow on copper. These findings can be used to further investigate TiO2 selective
deposition on copper patterned silicon substrates.
Introduction
As the microelectronics industry has evolved, chip
components and systems have become progressively
smaller. In some cases, current fabrication technologies
have reached the limits of component materials, and a
need has developed for a higher class of materials that
can withstand the demands of microscale technology applications and future evolution. A specific example of
this can be found in copper barrier layer technology. The
International Technology Roadmap for Semiconductors
predicted that the standard copper barrier layer would
decrease in thickness from 12 nm in 2003 to 2.5 nm in
2016,1 placing a high demand on researchers to provide
manufacturers with materials and processes capable of
producing effective ultra-thin barrier layers in this realm.
Several materials have emerged as potential candidates
for use in barrier layers, such as HfO2 , Ta2 O5 , Al2 O3 ,
and TiO2 .2–4 These materials were chosen based on their
high dielectric constants, their long term stability in a
variety of conditions, and their ability to bond to a silicon substrate without reacting with or diffusing through
it.
A process is also necessary which will deposit these
materials in a manner conducive to proper barrier layer
function - the deposited barrier layer should have a thickness of about 2.5 nm while still providing full, even coverage over a variety of contours.2 Atomic Layer Deposition
provides a viable production method for forming these
thin films, as it satisfies these characteristics due to the
distinctive nature of the deposition process. ALD is a
self-limiting surface reaction process in which the first
gaseous precursor, followed by the second, is pulsed over
the substrate with a purging session in between pulses.
The cycle is repeated many times to deposit a film with
the desired thickness. Unlike Chemical Vapor Deposition (CVD), which introduces both precursors together
in the vapor phase and may result in undesired side reactions, ALD allows very precise thickness control because
the precursors are introduced individually to ensure the
formation of identical monolayers at the atomic scale.
The limiting factor in most ALD reactions is time, as
the process is lengthy compared to other film deposition
procedures.
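The pulse-purge sequence described above can be expressed as a short control loop. The following Python sketch is purely illustrative: the helper functions, precursor labels, and cycle count are hypothetical placeholders, not the actual reactor control software used in this work.

def ald_run(n_cycles, pulse, purge):
    # One ALD cycle: self-limiting pulse of the first precursor, purge,
    # pulse of the second (oxidizing) precursor, purge; repeat n_cycles times.
    for _ in range(n_cycles):
        pulse("metal precursor")   # first precursor saturates the surface
        purge()                    # inert purge removes excess precursor and byproducts
        pulse("H2O")               # water vapor completes the (sub)monolayer of oxide
        purge()                    # purge again before the next cycle

# Example with print-based stand-ins for valve control (3 cycles shown).
ald_run(3, pulse=lambda name: print("pulse", name), purge=lambda: print("purge N2"))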
One of the key aspects of film deposition and production is that a film is often desired only in certain areas of the substrate. This can be achieved by depositing the film over the entire substrate and then removing it from the unwanted areas; however, masking and etching processes are often time consuming, technically challenging, and costly. Recently,
efforts have been made in order to pattern film deposition, either through self-assembled monolayer (SAM)
masking or direct selective growth of the film material on
one substrate over another.4,5 The latter, known as Selective Atomic Layer Deposition (SALD), allows the desired
patterned film deposition to occur and does not require
subsequent etching step(s). Unlike using SAM masking,
it does not require any additional materials, and therefore simplifies the post-mask etching process. SALD in
this study relies solely on the preference of the growing film material on one substrate over another based
on differences in material and surface chemistry along
with reaction engineering. Since ALD is a surface reaction, the surface chemistry of the substrate is critical to
film growth - a film material may require different induction periods for seed nucleation on particular substrates
depending on the surface chemistry. Based on this concept, our studies of SALD have been carried out for the
achievement of selective coatings of titanium dioxide on
silicon over copper surfaces, for copper patterned silicon
substrates. This has potential applications not only for
the copper barrier in the semiconductor sector but also in
a variety of industries, such as integrated circuit metallization, gate electrodes, and very large scale integration
multilevel interconnects.6–8
SALD of HfO2 on silicon (100) substrates patterned
with copper has been studied previously.4 In that work, the copper was deposited by electron-beam evaporation at 10 kV and 175 mA, resulting in a deposition rate of 0.24 nm/sec. In this
manner, a ∼ 200 nm-thick copper coating was deposited
on the patterned substrates over a portion of the silicon
substrate whereas the other portion (about one-half of
the silicon substrate) was masked during the evaporation
process in order to prepare the partially copper coated
silicon substrates; next, approximately 3 nm-thick HfO2
was deposited on the silicon portion without any trace
amount of HfO2 on the copper surfaces of the copper
patterned silicon substrate. In that study, the nucleation
period for HfO2 on copper was found to be approximately
the first 25 ALD cycles, which enabled the deposition of
about 3 nm of HfO2 on silicon with a growth rate from
0.11-0.12 nm/cycle.
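As a rough consistency check on these numbers (an illustrative calculation, not one reported in that study), the film thickness grown during the nucleation period on copper follows from the per-cycle growth rate on silicon:

d ≈ g × N ≈ 0.12 nm/cycle × 25 cycles ≈ 3 nm,

which agrees with the ∼ 3 nm of HfO2 reported on silicon.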
In the present study, the early growth period and selective deposition potential of TiO2 is investigated with
the goal of introducing new feasible materials and processes into the microelectronics industry. Different surface treatment methods were employed, and findings were
applied to achieve the desired selective deposition of TiO2
on the silicon portion of copper patterned silicon substrates.
Because the copper barrier layer necessitates a thickness of ≤ 2.5 nm,1 the nucleation period of early film
growth is of utmost concern. Typically, the period of
early growth is not commented on, perhaps because the
later constant growth period has been of interest. However, in order to achieve selective deposition, one must
focus on and study the initial growth and film nucleation
period.

Materials and Methods

This research hinges on the ability to test deposition on substrates with different surface chemistries, so the successful fabrication of such substrates was imperative.
Three different substrates were used: silicon (100) with
native oxide, silicon (100) with reduced oxide, and a copper patterned silicon substrate with likely native oxides
on both surfaces.
Silicon (100) substrates with approximately 1.5 nm-thick native oxide were prepared by rinsing with DI water
and drying with nitrogen gas.
Silicon (100) substrates with reduced/negligible oxide
were prepared by using an RCA-1 clean followed by a 2%
HF etch for 20 seconds, rinsing with DI water and drying
with nitrogen gas. The oxide present from re-oxidation
was less than 1 nm-thick. This is an estimated value, because the oxide thickness was below the resolution level
of the Spectral Ellipsometer (SE) normally used to measure film thickness. Etched substrates are hydrophobic
and the water rolls off of the surface, whereas un-etched
substrates are hydrophilic and water has to be blown off
of the surface with nitrogen. It was assumed etching
was complete if the substrate was hydrophobic after 20
s. Later, it was found that much of the re-oxidation was
a result of storing the substrates in DI water in between
etching and deposition (a period of no more than a few
hours). Omitting the DI water rinse and storage would
result in a significantly smaller oxide layer from the oxygen/humidity in the ambient air.
Patterned substrates were produced by etching a silicon wafer in a 2% HF solution for 20 seconds, rinsing
with DI water and drying with nitrogen gas. One half
of the substrate was then covered with a scrap piece of
silicon and taped in place with thermal tape. The copper
pattern was created using an electron-beam evaporation
procedure described earlier,4 which produced a copper
film approximately 200 nm-thick on the uncovered half
of the silicon substrate. The native oxide layer on the
patterned substrates was approximately 1.5 nm-thick on
silicon and 2 nm-thick on copper. Prior to ALD, the patterned substrates were rinsed with DI water and dried
with nitrogen gas.
The ALD reactor setup consisted of a hot-walled reactor, three metal precursors and DI water in an ice bath
to provide water vapor as the oxidizing precursor. Nitrogen was supplied as both purging and carrier gas, and
FIG. 1: Temperature dependence of TiO2 films grown
on silicon substrates after 50 cycles at 0.18 torr.
a vacuum pump evacuated the ALD chamber down to
0.01 torr before deposition in order to remove any gas or
moisture residues left in the chamber. Further details of
the setup can be found elsewhere.9 The titanium precursor used was tetrakis(diethylamino)titanium (TDEAT)
provided by Air Liquide. At least one test sample of TiO2
on silicon (100) with native oxide was taken every day before experimental runs to ensure the proper operation of
the reactor and to flush out any possible contaminants
accrued during the night.
The temperature-independent deposition window for
TiO2 was tested by subjecting silicon (100) samples with
native oxide to 50 cycles at temperatures ranging from
125 to 225 ◦ C, in increments of 25 ◦ C. Film thickness
was measured with a spectral ellipsometer (J. A. Woollam Co., Inc., model M-44); after a sample is measured,
a model is constructed to describe the sample (the model is used to calculate the predicted response from the Fresnel equations, which describe each material with a thickness
and optical constants). For each thickness determination, 3 measurements across the film were made with
mean values representing the film thickness. Thin film
deposition runs were carried out within the temperature-independent window and at a pressure of ∼ 0.18 torr.
The reactor had recently been modified; therefore, a verification of the long-term growth rate of TiO2 on silicon was necessary. TiO2 was deposited on silicon (100) with native oxide for 50, 100, and 150 cycles - after this many cycles, the nucleation period is complete and growth has entered the constant region. The reactor successfully produced TiO2 films at a rate of ∼ 0.11 nm/cycle.
Once it was determined that deposition was proceeding normally, attention shifted to the early growth and
nucleation period of TiO2 on silicon. TiO2 was deposited
on silicon (100) substrates with native oxide and with reduced oxide at 200 ◦ C and tested after 0, 5, 10, 15, 30
and 50 cycles. Film thickness was measured using the
spectral ellipsometer (SE), and composition was probed
using X-ray Photoelectron Spectroscopy (XPS). Model
information for the XPS and SE can be found elsewhere.4
Patterned substrates were tested after 15, 20, 25 and
30 cycles of TiO2 deposition at 175 ◦ C. Copper and silicon portions were measured for each cycle number and
compared. SE and XPS were employed to analyze the
resulting films.
The SE used one of three computer models to calculate film thickness, depending on which substrate was
used for deposition. Films on silicon (100) with native
oxide were measured with a model for TiO2 / SiO2 / Si
(three layers). Films on silicon (100) with very little native oxide were measured using a model for TiO2 / Si.
For the measurement of TiO2 on substrates consisting of
approximately 200 nm of copper on silicon, a specially
calibrated Cauchy model (Cauchy/Cu) was required to
achieve proper fit. This model was designed to measure
films on conductive metal substrates, because the optical properties of metals differ from those of silicon. In this case, the
thickness calibration was not capable of distinguishing
FIG. 2: The early TiO2 growth period on Si (100) with
native oxide and with negligible native oxide is shown.
TiO2 deposition rates are ∼0.11 nm/cycle and 0.10
nm/cycle, respectively, while the nucleation time is
negligible for both surfaces. Deposition temperature is
175 ◦ C.
copper oxide from titanium dioxide. This required that
the copper oxide be measured before deposition, and the
thickness of TiO2 after ALD was calculated by subtracting the initial oxide thickness from the final total film
thickness.
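The thickness bookkeeping described in the preceding two paragraphs can be summarized in a few lines. This is only a sketch of the logic: the dictionary keys, function names, and numerical values below are hypothetical placeholders, not data or software from this study.

# Choice of ellipsometer model by substrate, and TiO2 thickness on copper
# obtained by subtracting the pre-deposition copper oxide thickness.
SE_MODELS = {
    "Si with native oxide": "TiO2 / SiO2 / Si",   # three-layer model
    "Si with reduced oxide": "TiO2 / Si",         # two-layer model
    "Cu-patterned Si (Cu side)": "Cauchy/Cu",     # calibrated model for metal substrates
}

def tio2_thickness_on_cu(total_film_nm, initial_cu_oxide_nm):
    # The Cauchy/Cu fit cannot distinguish copper oxide from TiO2,
    # so the initial oxide thickness is subtracted from the final total.
    return total_film_nm - initial_cu_oxide_nm

# Hypothetical example values:
print(SE_MODELS["Cu-patterned Si (Cu side)"], tio2_thickness_on_cu(2.3, 2.0), "nm TiO2")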
Results and Discussion
The optimum temperature window for TiO2 deposition
is from 150 to 200 ◦ C - in this temperature range, the film
deposition rate is independent of reactor/substrate temperature (Fig. 1). Temperatures below 150 ◦C result in a larger growth rate, likely due to excess precursor adsorption onto the surface. Substrate temperatures above 200 ◦C result in a lower growth rate, perhaps because chemical
bonds are unstable at those higher temperatures, causing re-evaporation of the precursors from the substrate
surface.10
ALD of TiO2 on silicon (100) substrates with native
oxide resulted in an average growth of 0.11 nm/cycle,
while no nucleation time was observed (Fig. 2). Similarly, silicon (100) substrates with reduced oxide produced films at a rate of 0.10 nm/cycle and no observed
nucleation time. The difference in initial growth between
silicon (100) with native oxide and with reduced oxide is
within the experimental uncertainty of the experiments.
At very low cycle numbers (film thickness less than 1 nm, i.e., after about 5-10 cycles), the SE results alone could not be used effectively for analysis, because the measurements were near the detection limit of the SE. Indeed, the data points from 0 to 10 cycles in Fig. 2 may not represent the actual film thickness, and growth rates are therefore determined from the data for 15-50 cycles.
XPS results for all samples showed Ti 2p orbitals with
the standard line separation of 5.7 eV, corresponding to
titanium in the Ti4+ oxidation state. This confirms the formation of TiO2 films.
The deposition of TiO2 on patterned substrates showed preferential deposition on silicon and not on copper for the first 15-20 cycles - a very thin layer of TiO2 was detected by XPS after 15 cycles on copper, but was too
thin to be detected by the SE and was likely less than
1-2 monolayers thick (Fig. 3). The XPS signal did not
increase between 15 and 20 cycles, indicating the nucleation period on copper is approximately 15-20 ALD cycles. In contrast, the TiO2 film on the silicon portion of
the substrates was approximately 2.5 nm thick after 15
cycles. Selective deposition of TiO2 on silicon over copper was indeed achieved at the conditions used in this
study. This degree of selective growth could likely satisfy
the requirement set forth for the copper barrier layer.
Conclusions
Based on results from ellipsometry and XPS, the nucleation time for growing TiO2 by ALD on silicon (100) is
negligible, regardless of the presence of native oxide. The
initial growth rates are ∼ 0.11 nm/cycle on silicon with
native oxide and 0.10 nm/cycle on silicon with negligible native oxide. Selective deposition of TiO2 thin films
on copper patterned silicon substrates with preference
on silicon over copper was achieved. This selective ALD
occurs for the first 15-20 cycles of deposition; after that, a film begins to grow on copper. During the
first 15-20 cycles, a minute amount of TiO2 may form on
copper, with a thickness of less than 1-2 monolayers (<
0.3 nm-thick).
Future experiments will involve probing the patterned
substrate surfaces with Scanning Electron Microscopy
(SEM) and applying effective surface treatments to substrates in order to completely remove the oxide layer
while minimizing reoxidation. A greater degree of selectivity may be expected by applying the proper surface
treatment prior to deposition.
FIG. 3: a) XP spectra of Ti 2p on the silicon portion of
the copper-patterned silicon substrate are shown. The
signal is indicative of Ti4+ , showing successful formation
of TiO2 . The signal steadily increases with the number
of cycles, showing the increasing growth of film on the
silicon portion of the substrate. Deposition temperature
is 175 ◦ C; b) Ti 2p XP spectra on the copper side of the
patterned silicon substrate. These also indicate TiO2
formation. However, the signal is very small and it does
not change between 15 and 20 cycles. The Ti 2p signal
after 30 cycles on silicon is included for magnitude
comparison. The horizontal shift in signal is due to the
different substrates. Deposition temperature is 175 ◦ C.
Acknowledgements
The authors wish to thank the National Science Foundation and the Department of Defense for funding this
research (EEC-NSF Grant # 0755115 and CMMI-NSF
Grant # 1016002). They are also grateful to Air Liquide
for providing the titanium precursor.
1. International Technology Roadmap for Semiconductors, Semiconductor Industry Association, San Jose, CA (2001).
2. P. Alén, M. Vehkamäki, M. Ritala, and M. Leskelä, J. Electrochem. Soc. 153, G304-G308 (2006).
3. P. Majumder, R. Katamreddy, and C. Takoudis, J. Cryst. Growth 309, 12 (2007).
4. Q. Tao, G. Jursich, and C. Takoudis, Appl. Phys. Lett. 1, 96 (2010).
5. X. Jiang and S. Bent, J. Phys. Chem. C 41, 17614 (2009).
6. J. Carlsson, Crit. Rev. Solid State Mater. Sci. 3, 161 (1990).
7. M. Tuominen, M. Leinikka, and H. Huotari, Selective deposition of noble metal thin films (2010).
8. S. Rang, R. Chow, R. Wilson, B. Gorowitz, and A. Williams, J. Electron. Mater. 3, 213 (1988).
9. P. Majumder, Ph.D. thesis, University of Illinois at Chicago (2008).
10. A. Kueltzo, Q. Tao, M. Singh, G. Jursich, and C. G. Takoudis, Journal of Undergraduate Research 3, 1 (2010).
Solvent Selection and Recycling for Carbon Absorption in a Pulverized Coal Power
Plant
R. Reed
Department of Chemical Engineering, Kansas State University
P. Kotecha and U. Diwekar
Department of Chemical Engineering, University of Illinois at Chicago, Chicago, IL 60607
Simulated Annealing is used to optimize the solvent selection and recycling conditions for a carbon
dioxide absorber in a pulverized coal power plant. The project uses Aspen Plus V7.1 to model a
pulverized coal power plant and the carbon capture system. Simulated Annealing is introduced
via the CAPE OPEN feature in Aspen Plus to find the best combination to absorb the most
carbon dioxide while using the least amount of power for carbon absorption. With this optimal
configuration, retrofitting carbon absorption into current power plants will cause a smaller drop in
efficiency than that of the current practice. This project will lead to improved sustainability for fossil fuel power plants by reducing their emissions without a significant reduction in efficiency.
Introduction
Sustainability has become a focus of our efforts in the United States. The goal is to avoid using up all of the natural resources and polluting the world before future generations
have a chance to see it. One of the goals of the sustainability projects is to capture carbon dioxide emissions or
to eliminate them altogether from power plants, cars, etc.
With the present technology, we cannot eliminate all of
the carbon emissions and still meet the energy demand
for the population. Coal-fired power plants produce and
release tons upon tons of carbon dioxide into the atmosphere daily. In order to become more sustainable, these
emissions need to be reduced, and with the present technology, it is possible to capture the carbon dioxide from
the flue gas. However, this comes with costs to efficiency.
The focus of this study therefore is on optimizing the performance of the absorption of carbon dioxide from the
flue gas of a pulverized coal power plant.
Three main types of fossil fuel power plants exist today: integrated gasification combined cycle (IGCC), pulverized coal (PC), and natural gas combined cycle (NGCC). Each of these processes varies in its efficiency and plant/operating costs. Before introducing a carbon dioxide absorber, NGCC is the most efficient and maintains the lowest startup and operating costs. IGCC and PC are roughly equal in terms of efficiency and plant cost, but on average, the IGCC plants are slightly more efficient and cost less than PC plants. However, when a carbon dioxide absorber is introduced into each of these types of plants, the efficiencies decrease and costs increase. Studies show that the efficiency dropped by 5 to 12 percent in each plant type upon introducing the absorber system, but the PC plant had the largest drop in efficiency. The costs increased by 20 to 40% for each plant when the absorber was introduced. The cost component included the initial cost of the equipment as well as the cost of operation.1 The absorber was optimized to remove at least 90 percent of the carbon dioxide from the flue gas of the power plants.

PC power plants are the focus of this article, which may seem strange, as they have the lowest efficiency and the highest cost to produce and operate. So, why study carbon absorption in them? The answer is that the vast majority of power plants in operation today are pulverized coal plants. Thus, it is ideal to find a way to retrofit the old plants with a carbon dioxide absorption system. This would improve the sustainability of the current plants, while avoiding the need to build new ones.

PC Power Plant
FIG. 1: Graphical representation of the carbon
absorption section of a PC power plant produced by
Aspen Plus. Material streams are shown as solid lines. The
main components are the two absorbers, which are
aligned vertically on the left side, and the four
strippers, which are aligned vertically in the center of
the graphic. These components absorb carbon dioxide
using solvent, and regenerate that solvent respectively.
ods, while effective in certain conditions, are not appropriate for a PC power plant.
Finally, in post-combustion CO2 absorption, carbon
dioxide is separated from the flue gas. The power plant
would operate as normal, but have one additional component at the end of the process for removing the carbon
dioxide before exiting as stack gas. This method is the
easiest to implement into an existing power plant. Generally, the carbon absorption is done with chemical solvents that pull the unwanted molecules from the flue gas, similar to how other unwanted species (nitrogen oxides, sulfur oxides, mercury, etc.) are currently removed. The solvent
used depends heavily on the concentration of the flue gas
components, but theoretically, a solvent could be used for
any fuel if the waste concentrations are known. Each of
these methods varies in its implementation and operational costs. The efficiency of the plant will also decrease
upon implementing one of these systems. This means
that each one should be fully considered before implementing one into the plant. However, the focus of this
study is on the retrofitting of a carbon dioxide system to
current plants. The post-combustion process is ideal for
this purpose, thus it is used in the modeling efforts for
the project.
Post-combustion processes use solvents to absorb the
carbon dioxide, but that can be done in two different
ways: physical or reactive. Physical absorption is used
when the species to be separated exists in a relatively
high concentration in the flue gas. It typically uses water to dissolve the gas from the process stream, and then
pressure is reduced to remove the gas from the solvent
to recycle it. Reactive absorption uses a chemical reaction between the carbon dioxide and the solvent to pull
carbon from the flue gas. This method works best with
relatively low partial pressures of the species to be separated, which is the case with carbon dioxide in a PC
power plant. Reactive absorption3 is the only type considered in this study, thus only solvents capable of reacting with carbon dioxide on some level are considered.
The solvents themselves will be diluted with water to test
several different concentrations of solvent.
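For a primary amine such as monoethanolamine (MEA), the solvent considered later in this work, the reactive absorption step is commonly written as carbamate formation (a representative textbook reaction, not one taken from the flowsheet model):

CO2 + 2 HOC2H4NH2 → HOC2H4NHCOO− + HOC2H4NH3+.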
PC plants can operate as two different types depending on the type of steam utilized: sub-critical steam or
supercritical steam. When carbon capture components
are included in the plant design, the supercritical plants
cost slightly less and are slightly more efficient than the
sub-critical PC plants.1 Thus, supercritical steam PC plants were chosen for this study. The design of a PC plant consists of three
main parts: the boiler, the steam cycle, and the flue gas
treatment. Coal is first pulverized and then fed into a
boiler, where it is combusted producing carbon dioxide
among other gases. The heat generated by this combustion reaction is transferred to a cycling water reactor,
which heats water to supercritical steam to turn a turbine, thus generating power. The flue gas from the boiler
is treated before being released to the atmosphere in order to remove sulfur, mercury, and any other harmful
gases.
Carbon Capture
Carbon can be removed from processes in four main
ways: pre-combustion, oxyfuel, industrial processes, and post-combustion.2 Each of these types removes carbon at a different part of the plant’s cycle or
through different conditions within the process. These
methods are generally used in combustion processes involving carbon such as coal-fired power plants because
the carbon source is non-mobile and relatively concentrated in a single stream; thus, they are not appropriate
when the carbon source is small or mobile, such as a car.
In pre-combustion processes, the fuel, which is normally some coal derivative, is partially oxidized to form
carbon monoxide and hydrogen. Then steam is added to
the carbon monoxide to convert it into carbon dioxide.
Thus, the fuel is converted into pure carbon dioxide and
hydrogen before the combustion process. At this point,
the carbon dioxide is removed using solvents, and the hydrogen is combusted in the boiler producing only water
as the byproduct.
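Idealizing the fuel as carbon, the chemistry described above can be summarized by a partial oxidation step followed by the water-gas shift reaction:

C + 1/2 O2 → CO,
CO + H2O → CO2 + H2,

after which the CO2 is removed by solvents and the H2 is burned.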
In oxyfuel processes, the fuel is burned in almost pure
oxygen. This generates a much higher boiler temperature, which causes the flue gas to be comprised of carbon
dioxide, water, and some excess oxygen. The flue gas
can be easily cooled to remove the water vapor by condensation, producing an essentially pure carbon dioxide
stream, which can easily be collected. The downside to
this method is that the materials used in the boiler must
be specially designed to withstand the more extreme operating conditions.
In industrial processes, the carbon is removed by various means. Membranes can be used to remove carbon dioxide selectively from a gaseous stream; however,
they require a slower moving stream than is typically
found in power plants. Another method is cryogenic cooling, which physically removes the carbon species. This
method requires a large amount of energy. These meth-
Optimization
Optimizing the carbon dioxide absorption process is a very complex problem. There are several different
variables in the model. Each of these affects the absorption and costs in different ways. Some of these variables
include the operation conditions in the absorber (temperature, pressure, etc.), the solvent(s), the concentration of
solvent and water, and even the height of the separation
column itself. The work of this project is focused on
solvent selection as well as solvent cycling. These two
focus areas lead to a complex problem, which is impossible
to solve by hand. This creates a need for a computer
program or method to assist with the calculations.
Gradient-based methods (based on the first derivative)
manually varying a variable and running the flowsheet.
The variables explored in this way were the number of
trays in the strippers (which is where the solvent is regenerated), the concentration of solvent in the recycle
stream, and the reflux ratio in the strippers. The results
of these tests are presented later. Before the results, it
is important to gain an understanding of the Aspen Plus
flowsheet used for the project.
Description of the Aspen Model
FIG. 2: Ideally, the energy requirement for the absorption section should be at a minimum. For this data,
the reflux ratio was set to 1, the feed plate to 2, and the
concentration of solvent (MEA) was set to 0.3 for all
data points. Clearly, as the number of trays is
increased, the energy requirement decreases. The
flowsheet incorporated a design specification of 95%
absorption of carbon dioxide.
Similar to a PC power plant, the Aspen Plus model can
be split into three distinct parts: the boiler, the steam
cycle, and the carbon absorbers. The focus of this article
is on the carbon absorption section, but the other parts
are included for completeness. For a detailed description
of the power plant components, modeled using Aspen
Plus, consult Bhown,7 where stream compositions
as well as block descriptions can be found.
are effective for well-behaved functions with a single minimum or maximum.4 However, this problem introduces
many local minima into the function, which would cause a derivative-based method to become "stuck" in a local minimum and not find the global, or best, minimum
value. In order to combat this, a method called Simulated Annealing is utilized. This method is probabilistic
in nature.
Simulated Annealing is based on the annealing of metals, which causes the molecules to arrange themselves
in the optimum configuration to increase strength.4 The
method generates a starting value. Then it generates a
move and compares the two. If the move has a lower value
(or higher if a maximum is sought), it is accepted and replaces the starting value. If the move is higher, however, it is accepted with a probability that decreases the longer the program runs. This means there is a chance that
even if the program finds a non-global minimum, it can
escape the ”well.”4
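The accept/reject logic described above can be captured in a short, generic sketch. The code below is a minimal Metropolis-style simulated annealing loop for illustration only; it is not the CAPE OPEN implementation coupled to Aspen Plus, and the objective function and move generator are hypothetical stand-ins for a flowsheet evaluation.

import math
import random

def simulated_annealing(objective, move, x0, t0=1.0, cooling=0.95, n_iter=1000):
    # Generic minimization: accept worse moves with a probability that shrinks
    # as the "temperature" is lowered, allowing escapes from local minima.
    x, fx, temp = x0, objective(x0), t0
    for _ in range(n_iter):
        x_new = move(x)
        f_new = objective(x_new)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if f_new < fx or random.random() < math.exp((fx - f_new) / temp):
            x, fx = x_new, f_new
        temp *= cooling  # cooling schedule: uphill moves become rarer over time
    return x, fx

# Hypothetical one-dimensional stand-in for a flowsheet evaluation.
best_x, best_f = simulated_annealing(
    objective=lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x),
    move=lambda x: x + random.uniform(-0.5, 0.5),
    x0=0.0,
)
print(best_x, best_f)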
Simulated Annealing is ideal for this case because the
program will intelligently sift through the different combinations of variables to find the global optimum for the
complex function presented, which will be the best settings to maximize carbon absorption and minimize the
costs.5 The Aspen Plus V7.1 program is used to model
the PC power plant entirely. Utilizing the CAPE OPEN
capability in Aspen Plus, other functions can be introduced. This capability will provide a means to use Simulated Annealing and Aspen Plus together to find the
solution.6
Boiler
The boiler is modeled using a coal stream and three
different air streams feeding into a mixer. This mixer
breaks down each of the streams into their elemental
components. This is needed because coal is reported
to industry as an elemental breakdown, not in terms
of molecules. Thus, there is no way to model the combustion reaction using coal molecules. The stream then
flows into the boiler, allowing a combustion reaction that
produces heat, which is transferred to the steam cycle.
FIG. 3: The carbon curve is located above the legend,
while the power is located below in the figure. It is ideal
for absorption to be a maximum, while power is at a
minimum. For this data, the reflux ratio was set to 1,
the feed plate to 2, and the number of trays was set to
20 for all data points. Notice the power consumption
for 0 solvent is 0, which is expected because that is the
power required to regenerate the solvent. The optimum
concentration at these conditions appears to be between
0.3 and 0.5 by mass of MEA.
Discussion
The first objective for the project was to establish how
some of the variables affect the carbon captured and the
power requirements of the power plant. This was done by
The boiler also has a second component, which removes
the fly ash, then other particulates. In real boilers, this
is done at the same time as the combustion, but Aspen
Plus requires it to be done in separate processes. Following the boiler, there are several components used to
remove mercury, sulfur, and nitrous products from the
flue gas.
Steam Cycle
FIG. 4: For this data, the number of trays was set to
20, feed plate at 2, and the concentration of solvent
(MEA) was set to 0.3 for all data points. The optimum
in this case appears to be around a reflux ratio of 1.
The flowsheet incorporated a design specification of
95% absorption of carbon dioxide.
The steam cycle is modeled by utilizing the heat produced by the boiler section to heat water to super critical
conditions. That steam is then used to turn ten turbines:
three high pressure, two mid pressure, and five low pressure. The final steam product is condensed and recycled
to the heat exchanger from the boiler for reheating. The
steam section of the PC power plant is the most visually
complex part of the Aspen Plus chart, which makes it
difficult to graphically display. There are several work
streams utilized to combine the power generated from
the turbines in a single block, which calculates the total
power produced.
Carbon Absorption
FIG. 5: For this data, the number of trays was set to
20, the reflux ratio to 1, and the concentration of
solvent (MEA) was set to 0.3 for all data points. This
shows the effect of the feed plate to the strippers on the
thermal power requirement.
considered were the number of trays in the strippers, the
concentration of solvent into the absorber, the feed plate
(or inlet location) for the strippers, and the reflux ratio
in the strippers. All of the simulations also incorporate
a design specification to absorb as close to 95% of the
carbon dioxide as possible. For this reason, the carbon
capture percentage is not shown on most figures because
the change is minimal in those cases.
In each of the figures, the energy requirement is reported as the summation of the heating and cooling requirements of each column. By examining the figures,
one can gain an appreciation of how strongly the considered variable affects the efficiency of the design. Figure
2 considers the number of trays in each stripper. As the
number of trays increases, the power requirements of the
column are decreased. Figure 4 looks roughly inverted
from Figure 2, which is because increasing the reflux ratio decreases the minimum number of trays needed. As such, the reflux ratio and the number of trays are highly
intertwined.
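Written out explicitly (the symbols below are introduced here only for illustration), the plotted energy requirement is the sum of the reboiler (heating) and condenser (cooling) duties over the stripping columns:

E_total = Σ_i (Q_reboiler,i + Q_condenser,i).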
Figure 3 represents the effect of the concentration of
Procedure
In order to develop the dependence curves for different
variables, the flow sheet must be fully run at each new
condition. The variables tested in this way were the solvent concentration, the number of trays in the strippers,
and the reflux ratio in the strippers. After running the
chart several pieces of data were collected while preparing the chart for the next run including the condenser
and reboiler duty in each stripper as well as the amount
of carbon removed from the absorber.
Results
Each of the figures, excluding Figure 1, have been prepared using data collected by running Aspen Plus simulations. There were four variables considered, and the
data was collected while varying one of four variables
and keeping the other three constant. The four variables
the solvent(s), the number of trays in the strippers, the
reflux ratio in each stripper, and many more. This problem cannot be optimized by hand. Rather, it requires the
technique Simulated Annealing, which is a probabilistic
method. This allows the method to escape local minima and find the global minimum, unlike gradient-based
methods.
It is clear from the results section that the amount of
carbon dioxide and the power requirements for solvent
regeneration depend on the selection for reflux ratio, solvent concentration, feed plate location, and number of
trays. It is also theorized that there will be a dependence
on many more such selections, such as a second solvent. The
optimal selection for each variable would lead to the highest absorption and the lowest requirement of power for
solvent regeneration. Simulated Annealing will intelligently sift through the many combinations and find the
best choices for the variables. This method is introduced
into Aspen Plus using the CAPE OPEN capabilities.
Once the optimal configuration is discovered, it can
be used to reduce the impact of retrofitting carbon absorption into power plants. Power plants each have a
lower efficiency when operated with carbon absorption
than without it. The optimal solution for the carbon absorption section would cause the lowest drop in efficiency.
This project has been working on optimizing the carbon
absorption in a PC power plant because the majority of
power plants in operation are of this type. This type of
absorption has been selected as a way to retrofit the current plants with carbon absorption to make them more
sustainable. If we are to avoid the ill effects of releasing
massive amounts of carbon dioxide into the atmosphere,
it is important to introduce this technology into all current and future fossil fuel plants.
solvent being fed into the stripping section. This is the
only variable that also shows the percent absorbance of
carbon dioxide. This is because the concentration was
the only variable that could not always meet the design requirement of 95% absorption of carbon dioxide.
The concentration of MEA in the solvent stream has
the strongest effect on the overall performance of the
absorbance section. It is important to feed in enough
solvent to perform the absorption to the design specification, but if too much solvent is fed in (measured by
concentration), the power requirement increases rapidly.
The ideal concentration of solvent appears to be between
30 and 50 percent by mass.
The final considered variable was the placement of the
flue gas feed; the results of which are shown in Figure 5.
The absorption became more efficient the lower the feed was placed.
This makes sense because the liquid should be fed at the
top of the absorber, while the gas should be fed at the
bottom for the most efficient absorption. An important
point is that each of these figures can be produced at
many combinations of the other three variables, thus the
figures only begin to show the complexity of the optimization problem. Clearly, an optimization method is
required for this purpose, and Simulated Annealing has
been chosen.
Ongoing and Future Work
At this point, the flowsheet is being prepared for Simulated Annealing. This will provide the optimal configuration of the four considered variables. Additionally, a
second solvent called DEA is being introduced into the
flowsheet. Data will be generated for how the efficiency
changes with different mixes of DEA with MEA as well
as different concentrations of DEA as the only solvent.
Finally, Simulated Annealing will be used again to determine the optimum configuration for the mixture of
solvents.
Acknowledgements
Funding provided by The National Science Foundation
and Department of Defense, EEC-NSF Grant # 0755115,
research opportunity provided by the University of Illinois at Chicago (UIC), and guidance provided by Drs.
Jursich, Takoudis and Salazar at UIC.
Conclusions and Recommendations
The optimization of carbon dioxide absorbers in a PC
power plant involves many different variables. These include the type of solvent(s) used, the concentration of
1. Research and Development Solutions, LLC, Tech. Rep., Department of Energy (2007).
2. Intergovernmental Panel on Climate Change, Carbon Dioxide Capture and Storage (Cambridge University Press, 2005).
3. E. Kenig and P. Seferlis, Chemical Engineering Progress 1, 65 (2009).
4. U. Diwekar, Introduction to Applied Optimization (Kluwer Academic Publishers Group, 2003), ISBN 1-4020-7456-5.
5. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983).
6. U. Diwekar, J. Salazar, and P. Kotecha, Tech. Rep., National Energy Technology Laboratory (2009).
7. A. Bhown, Tech. Rep., Electric Power Research Institute (2010).
Temperature-Dependent Electrical Characterization of Multiferroic BiFeO3 Thin
Films
D. Hitchen
Department of Electrical and Computer Engineering, Rutgers University, New Brunswick, NJ 08901
S. Ghosh
Department of Electrical and Computer Engineering,
University of Illinois-Chicago, Chicago, Illinois 60607
The polarization hysteresis and current leakage characteristics of bismuth ferrite, BiFeO3 (BFO)
thin films deposited by pulsed laser deposition were measured while varying the temperature from
80 - 300 K in increments of 10 K, to determine the feasibility of BFO for capacitive applications in
memory storage devices. Data is compared to the performance of prototypic ferroelectric barium
strontium titanate, Bax Sr1−x TiO3 (BST) under similar conditions. Finding contacts on the BFO
samples that exhibited acceptable dielectric properties was challenging, and once identified, the
polarization characteristics between them varied greatly. However, the non-uniformity among the
contact points within each sample suggests that either the samples were defective (by contamination or growth process), or that the deposition process of the contacts may have undermined the
functionality of the devices. Subjected to increasing temperatures, BFO’s polarization improved,
and though its polarizability was shown to be inferior to BST, the dielectric loss was less.
Introduction
Multiferroic materials have lately been a subject of interest in material science due to the unique properties
that they can possess simultaneously; in fact, a material
is called “multiferroic” if it exhibits two or more of the following characteristics: ferroelectricity, ferromagnetism,
or ferroelasticity. Multiferroics are rare, and rarer still is
a multiferroic that performs at room temperature. The
potential applications that would open up if one were found that could also be made cheaply and be compatible with existing technology make multiferroics worthy of further investigation.
The desire to improve upon the existing technology
that enables our everyday devices such as smart cards,
flash drives, and computers has led to the investigation
of higher-performance materials with regard to memory
storage capacity and write/erase efficiency.1 Ferroics are
uniquely capable of adapting to a variety of tasks within
information storage technology due to their ability to exhibit hysteresis-a quality by which they are able to retain a switchable, permanent polarization (ferroelectric),
magnetization (ferromagnetic), or deformation (ferroelastic) when once exposed to an electric or magnetic field,
or mechanical stress. The polarization that can be induced in the material can function as binary for storage
in a non-volatile memory device that can easily be rewritten when exposed to another field, or stored indefinitely.
Ferroelectric devices can be tuned, or adjusted, simply by subjecting them to an electric field: a very precise,
cheap, and contact-less technology.2 There is a remarkable range of applications in addition to information storage that exploit the piezoelectric and pyroelectric qualities that all ferroelectric materials possess.3 Microactuators and transducers can be created because mechanical
stress induces charge in the material (piezoelectric), and
infrared sensors as well as thermal sensors and imagers are
possible because ferroelectrics detect heat (pyroelectric).
Bismuth ferrite, BiFeO3 (BFO), a perovskite crystal
that is multiferroic at room temperature4 , has been identified as a possible alternative to barium strontium titanate, Bax Sr1−x TiO3 (BST), a known ferroelectric that
is currently used in industry. Though BST has the advantage of a higher polarizability at room temperature than
BFO, it is not multiferroic, and researchers are hoping
to cultivate BFO’s ferromagnetic properties to produce a
higher-performance device. However, it is the ferroelectric properties of BFO that are the concern of the present
study, in particular its pyroelectric qualities. BFO’s ability to induce charge under varying thermal and electric
fields is the chief object of this research, and is what
makes bismuth ferrite a valuable material, and possible
alternative to BST, for a wide range of commercial applications.
Synthesis of Bismuth Ferrite
There are several ways to grow thin-film BFO, one of
which is by a pulsed laser deposition process, in which a
laser is focused onto the surface of a solid body to remove
the material by evaporation or sublimation processes, after which the particles organize onto a substrate.5 Chemical vapor deposition is another method to deposit thin
films in which the constituent elements of the desired material are introduced as gases in the vicinity of a heated
substrate, onto which they combine. Finally, there is
the molecular-beam epitaxy process, in which “beams of
atoms or molecules in an ultra-high vacuum environment
are incident upon a crystal that has previously been processed to
FIG. 1: The probe station houses our sample in a
pressure and temperature-controlled chamber.
FIG. 2: The probe station: a) material analyzer; b)
semiconductor parameter analyzer.
produce a nearly atomically clean surface. The arriving
constituent atoms form a crystalline layer in registry with
the substrate. . . .”6
Our BFO was grown by pulsed laser deposition to a
thickness of 300 nm on a substrate of strontium titanate,
a material that was chosen because its lattice structure is
a close match to most perovskite crystals. During fabrication, gold and platinum contacts (thickness of 100 nm
and 50 nm respectively) with an area of 0.25 cm2 were
deposited on the sample by an electron beam using a
shadow mask. Before measurements could begin it was
necessary to ground our sample on a solid gold plate using
a silver paste adhesive to ensure that proper conduction
would occur between the sample and the ground.
The probe station was connected to a material analyzer which provided information about polarization (input parameters being contact area and material thickness), while supplying a voltage varying between -4 and
4 V to the sample.
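As a rough illustration of what such an analyzer computes from those inputs (a sketch under simple parallel-plate assumptions, not the instrument's actual software), the electric field follows from the applied voltage and the film thickness, and the polarization is the switched charge per unit contact area:

# Illustrative conversion of raw quantities to the plotted ones (assumed relations,
# not the material analyzer's internal algorithm).
def electric_field_kv_per_cm(voltage_v, thickness_nm=300.0):
    # E = V / d, converted from V per nm of film to kV/cm
    return voltage_v / (thickness_nm * 1e-7) / 1000.0

def polarization_uc_per_cm2(charge_uc, contact_area_cm2=0.25):
    # P = Q / A, in microcoulombs per square centimeter
    return charge_uc / contact_area_cm2

# Hypothetical numbers: 4 V across a 300 nm film, 0.1 µC of switched charge.
print(electric_field_kv_per_cm(4.0), polarization_uc_per_cm2(0.1))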
A semiconductor parameter analyzer, also connected
to the probe station, was used to plot current as a function of electric field in order to evaluate the current leakage characteristics of the sample. Before any temperature measurements could begin, it was necessary that
all of the contacts on each sample be probed to discover
their polarizability at room temperature. Because we
were interested in the dielectric behavior of BFO, only
contacts exhibiting a hysteresis were selected for our measurements (see Figures 3 and 4). Realizing that any significant pressure from the probe onto the sample could
influence polarity due to BFO’s piezoelectric properties,
it was important that undue pressure be avoided while
probing. However, while too much pressure on the sample was undesirable, too little pressure would not ensure
sufficient conduction to the probe. As a result, much care
was taken to avoid both extremes.
Each of the gold contacts on the BFO samples was sus-
Procedure
To observe the variations in polarizability and current
leakage characteristics of BFO while changing temperature, it was necessary to house our samples in a pressure
and temperature-controlled probe station. Liquid nitrogen was used to vary the temperature between 80 K and
300 K in increments of 10 K. Pressure was kept at 4
mTorr.
Expectations
It has been documented that the crystalline structure
of bismuth ferrite undergoes a phase change at 140 K and
200 K,7 possibly due to spin-reorientation transitions;8 however, it is unknown exactly how the temperature-induced phase shift influences polarity and dielectric
leakage. Leakage current is expected to improve at lower
temperatures due to the smaller number of charge carriers available. In semiconducting materials, whose lattice
structure is not altered by temperature, this should also
improve the polarity of our material: less current leaking
into the circuit means that more charge is kept separate
and contained in the capacitor. However, because BFO’s
lattice does change, it is unknown how polarizability is
affected after these critical temperatures.
Ideally, relative uniformity in dielectric performance
is expected between contacts; however, some variation
can be explained by the crystal’s morphology. A single crystal sample should be uniform throughout because
of its homogeneous arrangement. If our sample is polycrystalline, there will be groupings of similar contacts within an area that are distinguishable from their neighbors, whereas an amorphous sample will not appear to
have any particular unity among its contact points.
FIG. 3: Polarization versus electric field curves for nine
capacitive contacts on BFO sample. They displayed
widely varying polarization at room temperature.
Experimental Results
We have examined two samples of bismuth ferrite, each
of which has approximately twenty-three contacts. Only
nine contacts exhibited a capacitive hysteresis on one
sample, and two in the other. The other sites were purely
resistive. The nine capacitive contacts were not similar in
their polarization, however, as exhibited by the hysteresis loops in Figure 3. It is clear that some of the devices
have a much larger polarization under the same electric
field than others, as evidenced by their wider loops. Both
Figures 3 and 4 are measurements taken at room temperature, and the colored loops in both graphs represent the
same contact point.
Because several of the capacitive contacts appeared
in spatial proximity to each other on the sample, this
might indicate a polycrystalline lattice structure in our
BFO; however, the predominance of resistive over capacitive contacts is a puzzling find that could possibly be
attributed to defects in the samples resulting from inefficiencies in the laser fluence during fabrication, or in the
deposition of contact points that fail to achieve proper
conduction with the material.
Current leakage characteristics are summarized in Figure 4. There does not appear to be any correlation between polarization and current leakage in the data. Only
two of the nine capacitive contacts on the first sample
retained a capacitive hysteresis after redeposition of the
contacts. The others exhibited purely resistive behavior.
The second sample lost both of its capacitive sites, and
functioned purely as a resistor.
FIG. 4: Current density versus electric field curves for
nine capacitive contacts on BFO sample. The nine
contacts display similar current leakage characteristics,
excepting one particularly leaky device (shown in red).
ceptible to scratching by the probe. Much of this could
not be helped; in fact, it was not possible to tell that contact had been made with the probe unless a small scratch was
visible on the gold. Over the course of the experiment,
however, too much of the gold was scratched from the
surface of the contacts, altering the area of each contact,
and thus the software parameters of the material analyzer. This necessitated a redeposition of the contacts,
but it was discovered that the redeposited contacts did
not behave on the sample as they had previously.
FIG. 5: Polarization versus electric field curves for one
BFO contact described over ranging temperatures.
Higher temperatures are associated with wider
hysteresis loops.
FIG. 6: Remanent polarization versus temperature for
one BFO contact at -1.5 V. Polarization markedly
increases after 200 K.
Figure 5 is temperature-controlled data taken from
one of the two capacitive sites. The hysteresis loops
shown there seem to suggest a correlation between increasing temperatures and greater polarizability. This
occurrence may be explained by the predicted lattice
structural changes at 140 K and 200 K. Further research
that details the changing lattice of the crystal should
be conducted on BFO samples under a high-resolution
electron microscope in order to better understand this
phenomenon.
In comparison to current data of BST (composed as
Ba0.75 Sr0.25 TiO3 ) grown by pulsed laser deposition and
of the same thickness as our samples (300 nm),9 it is clear
that our BFO's dielectric performance is inferior. At approximately 300 K (25 ◦C) the BST thin film's hysteresis loop ranges between -5 and 5 µC/cm2, while the loop in Figure 5 is roughly between -1.5 and 1.5 µC/cm2. It should be
noted that while the polarization of BST was measured
between -5 and 5 V (as opposed to -4 and 4 V for our
BFO), the comparison still demonstrates a more effective
polarizability in BST within the same parameters.
In order to better illustrate the association of BFO’s
polarizability with temperature, the remanent polarization was plotted for data associated to -1.5 V (see Figure
6). A fairly constant polarization is visible between 0.4 µC/cm2 and 0.5 µC/cm2 until approximately 200 K, at which it
sharply increases. The current leakage characteristics behave as expected over a varying temperature. The influx
of charge carriers that exists at higher temperatures allows more current to leak from the device, an occurrence
which Figure 8 appears to confirm.
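Figures 6 and 8 amount to reading one value off each temperature's measured loop at a fixed bias. A minimal sketch of that reduction, assuming each loop branch is available as monotonic arrays (the array names and numbers below are hypothetical):

import numpy as np

def value_at_bias(voltages, values, bias_v=-1.5):
    # Interpolate a single monotonic branch of a measured loop at a fixed bias.
    return float(np.interp(bias_v, voltages, values))

# Hypothetical loop branch for one temperature (illustrative numbers only).
v = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
p = np.array([-1.4, -0.9, 0.2, 1.0, 1.5])   # polarization in µC/cm2
print(value_at_bias(v, p))                   # the value plotted versus temperature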
Compared to current data10 , BFO is considerably
less leaky than Ba0.6Sr0.4TiO3 (BST). This is evidenced by the fact that non-annealed BST's leakage at room temperature
FIG. 7: Current density versus electric field for one
BFO contact described over ranging temperatures.
Current leakage in the dielectric increases at higher
temperatures.
is shown to be on the order of milliamps, while BFO's
leakage is a fraction of a microampere.
Conclusion
The samples of BFO that were used in this research
were not characterized by dielectric behavior. Selecting
contacts that displayed appropriate capacitive characteristics was difficult; careful probing was necessary due
to the piezoelectricity of the material, and upon redeposition of the contacts most of their capacitive functionality was lost. Prior to redeposition, there were approximately nine capacitive contacts whose polarizability at room temperature varied greatly. However, the contact that was selected for the temperature-varying measurements displayed a marked increase in polarization with increasing temperatures. While more experiments utilizing high-resolution electron microscopy should be conducted to observe the structural changes that occur within the lattice of the crystal, it appears that there is a discernible change in dielectric as well as leakage characteristics after approximately 200 K. Under similar growth and testing conditions, BFO does not appear to polarize as effectively as Ba0.75Sr0.25TiO3, although its dielectric loss was less than that of Ba0.6Sr0.4TiO3.

FIG. 8: Remanent current density versus temperature for one BFO contact at -1.5 V. The absolute value of the current increases with increasing temperatures.

Acknowledgements

The authors would like to thank the United States Department of Defense as well as the National Science Foundation (EEC-NSF Grant # 0755115 and CMMI-NSF Grant # 1016002) for funding this work, and the directors of the REU program, Drs. Christos Takoudis and Greg Jursich. Thank you also to Koushik Banerjee, Tsu Bo, Jun Huang, and Khaled Hassan for the discussions and assistance with the lab equipment.
1. R. Zambrano, Materials Science in Semiconductor Processing 5, 305 (2003).
2. W. Kim, M. F. Iskander, and C. Tanaka, in Electronics Letters (2004), vol. 40.
3. J. F. Scott, Science 315, 954 (2007).
4. R. Ranjith, U. Luders, and W. Prellier, Journal of Physics and Chemistry of Solids 71, 1140 (2010).
5. M. M. Kuzma, B. L. Pyziak, I. Stefaniuk, and I. Virt, Applied Surface Science 168, 132 (2000).
6. J. R. Arthur, Surface Science 500, 189 (2002).
7. S. A. T. Redfern, C. Wang, J. W. Hong, G. Catalan, and J. F. Scott, Journal of Physics: Condensed Matter 20, 452205 (2008).
8. M. K. Singh, R. S. Katiyar, and J. F. Scott, Journal of Physics: Condensed Matter 20, 252203 (2008).
9. H. Z. Xu, K. Hashimoto, T. Kiyomoto, T. Mukaigawa, R. Kubo, Y. Yoshino, M. Noda, Y. Suzuki, and M. Okuyama, Vacuum 59, 628 (2000).
10. J. Li and X. Dong, Materials Letters 59, 2863 (2008).
11. Precision LC material analyzer (2010), URL www.ferrodevices.com.
12. Precision semiconductor parameter analyzer (2010), URL www.hp.com.
Hydrodynamics of Drop Impact and Spray Cooling through Nanofiber Mats
Y. Chan
Department of Chemical Engineering, University of Massachusetts Amherst, Amherst, MA 01003
F. Charbel
Department of Mechanical Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801
Y. Zhang, S.S. Ray and A.L. Yarin
Department of Mechanical and Industrial Engineering,
University of Illinois at Chicago, Chicago, IL 60607
Spray cooling is one of the most promising technologies for thermal management of microelectronic systems and server rooms. The focus of this research is to increase the heat flux rate from a hot surface by applying a metal-coated electrospun polymer nanofiber mat. Samples were prepared from a copper plate substrate coated with an electrospun polymer nanofiber mat and electroplated with one of three different metals: nickel, copper, or silver. Experiments were performed in which samples were subjected to impact from water droplets falling from a height of 17.95 cm at various temperatures. The behaviors of droplet impact and subsequent evaporation were observed in order to evaluate and compare the heat transfer characteristics of the different sample types. Silver-plated samples were found to provide the highest heat flux rate, followed by copper and finally nickel. However, silver was not usable at 200 °C and above due to its tendency to oxidize and degrade at those temperatures.
Introduction
Drop impact on dry surfaces is a key phenomenon encountered in many technical applications, including spray printing, rapid spray cooling of hot surfaces, and ice accumulation on power lines and aircraft.
The drop diameter, surface tension, surface roughness,
and drop impact velocity play important roles in the hydrodynamics of droplet impact on a dry surface.1 Spray
cooling of hot surfaces using liquid sprays offers a very
effective means of localized cooling in small areas. It
is considered a key heat removal technology in many
potential applications. Semiconductor chips, microelectronic devices, and server rooms demand high heat flux
rates to operate. There are spray cooling technologies
for server rooms, such as modules placed inside the servers that spray coolant mist directly onto the central processing units (CPUs), or ink-jet pumps that spray coolant onto
chips. A formidable design challenge in microelectronic
systems, particularly with the progression of miniaturization, is the ability to provide adequate cooling and maintain low operating temperatures. For example, silicon-based dice in modern integrated circuits typically have a maximum operating temperature of around 125 °C. Spray cooling is attractive for cooling electronic elements
because the spray can directly contact the elements and
remove large amounts of heat continuously by evaporation. However, a serious obstacle in spray cooling is the
limited contact between the liquid and the hot surface
due to the Leidenfrost effect. It is a phenomenon occurring when a liquid drop contacts a surface having a temperature greater than the Leidenfrost point of the liquid.
It causes a thin insulating layer of vapor between the
hot surface and the liquid droplet that greatly reduces
contact and heat transfer. Application of a nanofiber
mat coating to a surface has been shown to promote
droplet spreading and adhesion.2 It has also been proposed and shown that if a metal surface is coated with a polymer nanofiber mat, the mass loss can be greatly reduced,3 improving our ability to remove heat through spray cooling economically and effectively. The thermal and structural properties of four
different polymer nanofiber mats were measured.3 Based
on the demonstrated enhancement of heat transfer using
polymer nanofiber mats, this new research investigated
the use of metal-coated nanofiber mats to achieve even
greater heat flux rates. The metal-coated nanofiber mat
allows drops to evaporate completely inside the mat and
avoid the receding, splashing, bouncing, and Leidenfrost
effect when water sprays on the heated surface.
Experiment
Procedure
The materials used to make these coatings on copper
substrates were pure silver, pure copper, pure nickel, and
PAN (poly-acrylonitrile). Metal-coated nanofiber mats
were made by heating electrospun polymer nanofibers
that contained metal atoms in a reducing atmosphere.4
The electrospinning process was performed at room temperature (∼20 ◦ C) and produced polymer nanofiber on a
copper substrate plate (Figure 1). A high voltage power
supply (Extech Instruments Regulated DC Power Supply 382-210) was used to charge a solution of 20 wt%
PAN fed from a syringe fitted with a hypodermic needle. A potential difference of 15 kV was applied between the spinneret at the needle tip and the grounded copper collector plate. The polymer solution was fed at 1 mL/hr from a height of 15 cm above the collector. Upon application of the electric field, the polymer bead formed the Taylor cone geometry and developed a fluid jet. The electrospinning process was run for 5 minutes to apply a PAN nanofiber mat coating to each copper substrate sample die. Sample dice with the PAN nanofiber mat coating were then heated for several hours and sensitized separately in preparation for electroplating. The coated sample dice were heated and annealed in a reducing atmosphere to coat the fibers with silver, copper, or nickel atoms separately: three sample dice were plated with copper-coated, nickel-coated, or silver-coated PAN nanofibers, and one sample die was left with plain PAN nanofibers. Two top-view images of the copper nanofiber mat, taken with a scanning fluorescence phase microscope (Olympus Model BX51TRF), are shown in Figures 2(a) and 2(b).

FIG. 1: Diagram of the electrospinning process setup.

FIG. 2: Two top-view scanning fluorescence phase microscope images of the copper nanofiber mat.

FIG. 3: Experimental setup 1 in the laboratory.

Experiment 1

For observations in this experiment, a high speed camera (Redlake MotionPro) was used to take images at a frame rate of 2000 frames per second and a shutter speed of 1/1000-1/2000 second. The experimental setup is shown in Figure 3. A small water drop of about 2 mm in diameter was produced by a drop generator pumping at 1 mL/hr and impacted onto a vertical target plate at room temperature. The needle was fixed; each drop was accelerated by gravity and impacted onto the copper substrate or the copper nanofiber mat. The distance between the needle and the surface was varied from 15.88 cm to 28.7 cm, and the drop impact velocity ranged from 1.76 m/s to 2.72 m/s.
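For reference, the quoted impact velocities are close to what a simple free-fall estimate gives for the stated needle heights. The short sketch below is only illustrative; it assumes negligible air drag and g = 9.81 m/s², neither of which is stated in the paper.

    import math

    g = 9.81  # gravitational acceleration in m/s^2 (assumed value)

    def impact_velocity(height_m: float) -> float:
        """Free-fall impact velocity v = sqrt(2*g*h), neglecting air drag."""
        return math.sqrt(2.0 * g * height_m)

    # Needle heights mentioned in the text, in meters.
    for h in (0.1588, 0.1795, 0.2870):
        print(f"h = {h*100:.2f} cm -> v = {impact_velocity(h):.2f} m/s")
    # Prints about 1.77, 1.88 and 2.37 m/s; the first matches the reported 1.76 m/s,
    # while drag and release conditions will shift the real values for larger heights.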
Experiment 2

The setup for this experiment is shown in Figure 4. One high speed camera (Redlake MotionPro) was used to take images at a frame rate of 2000 frames per second and a shutter speed of 1/1000-1/2000 second, and one CCD camera (Pulnix TM-7EX) was used to record the evaporation taking place. The copper substrate and the different nanofiber mat plates were placed on a hot plate and heated to temperatures of 125 °C, 150 °C, and 200 °C. A small water drop of 2 mm in diameter was produced by a drop generator at a rate of 1 mL/hr and fell from a height of 17.95 cm onto the copper substrate or the nanofiber-mat-coated plates heated at the various temperatures.

FIG. 4: Experimental setup 2 in the laboratory.

FIG. 5: Plot of radius ratio for drop impact on the copper substrate at room temperature with different impact velocities.

FIG. 6: Plot of radius ratio for drop impact on the copper nanofiber mat at room temperature with different impact velocities.

Results and Discussion

Contact area analysis (Experiment 1)

According to the images taken by the high speed camera, the water drop impacting the copper substrate and the copper nanofiber mat deposited in a spherical shape. The water drop hit the surface of the copper substrate and shrank due to the water surface tension. On the copper nanofiber mat, however, receding of the water drop is prevented: the drop deposited with a larger contact area between the water and the surface than on the copper substrate and stayed almost the same size afterward. For this experiment, the radii of drops impacting the copper substrate and the copper nanofiber mat were measured using Adobe Photoshop. Figures 5 and 6 show how the radius of the spreading area of an impacting drop depends on its diameter and impact velocity. The spreading area radius of drops impacting the copper nanofiber mat was larger than on the copper substrate, as shown in Table I. The overall droplet radius normalized by the pre-impact droplet radius ranged from 2.5 to 2.8 for the copper nanofiber mat, with an average of 2.7, while on the copper substrate it ranged from 1.7 to 2.4 with an average of 2.0. The data show that the spreading area on the copper nanofiber mat was about one quarter larger than on the copper substrate and can therefore provide better heat transfer. Heat conduction is described by Fourier's law:

∂Q/∂t = -k · A · ∂T/∂x     (1)

where ∂Q/∂t is the amount of heat transferred per unit time, A is the area normal to the direction of heat flow, k is the thermal conductivity, and ∂T/∂x is the temperature gradient along the path of heat flow. Droplets impacting the copper substrate and the copper nanofiber mat spread to cover almost the same area and then started to shrink. However, after shrinking, the droplet on the copper substrate had a smaller contact area than on the copper nanofiber mat. The larger contact area on the nanofiber mat leads to a rise in the heat transferred per unit time due to the increased area normal to the direction of heat flow.
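As a quick illustration of equation (1), the sketch below shows how a larger contact area alone raises the conductive heat flow proportionally. All numbers here (the thermal conductivity, temperature gradient, and contact radii) are made-up placeholders, not measurements from this work.

    import math

    k = 0.6              # W/(m*K), rough thermal conductivity of water (assumed)
    dT_dx = 5.0e4        # K/m, assumed temperature gradient across the liquid film
    r_substrate = 2.0e-3 # m, hypothetical contact radius on bare copper
    r_mat = 2.7e-3       # m, hypothetical contact radius on the nanofiber mat

    def heat_flow(radius_m: float) -> float:
        """Magnitude of conductive heat flow dQ/dt = k * A * dT/dx for a circular contact area."""
        area = math.pi * radius_m ** 2
        return k * area * dT_dx

    print(heat_flow(r_substrate), heat_flow(r_mat))
    # The mat case is (2.7/2.0)^2, roughly 1.8 times larger, purely from the larger area.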
B. Thermal and mass loss analysis (Experiment 2)
Image data was analyzed using Adobe Photoshop. The
base diameter of drop impact on the heated surface and
mass losses during the evaporation were measured, and the heat flux rates were calculated with the equation

q̇ = ρ (4/3) r₀³ L δ / (π d₀² Δt)     (2)

where L is the latent heat of water evaporation (2260 J/g), ρ is the density of water (1 g/cm³), δ is a correction factor accounting for volume loss due to spattering (1 - cumulative mass loss - 0.02), r₀ is the pre-impact droplet radius, d₀ is the average normalized base diameter, and Δt is the evaporation time. Because a small amount of water always remained inside the mats, a correction factor of 0.02 was subtracted from the cumulative mass loss.

FIG. 7: Plot of mass loss for drop impact on the nanofiber mats and the copper substrate at 125 °C.

FIG. 8: Plot of mass loss for drop impact on the nanofiber mats at 150 °C.

It is clearly seen from Fig. 7 that the accumulated volume loss due to spattering on the copper substrate is much higher than on the other surfaces. The time needed for complete drop evaporation, the mass loss during evaporation, and the heat flux rates for the different nanofiber mats and the copper substrate are shown in Table II; Fig. 8 depicts the accumulated volume loss due to spattering for the copper, nickel, and silver nanofiber mats at 150 °C. As shown in Table II, the silver nanofiber mat had the highest heat flux rate at 125 °C, followed by the copper nanofiber mat and the copper substrate. The nickel and PAN nanofiber mats had lower heat flux rates than the copper substrate because their thermal conductivities are much lower than those of copper and silver. Also, the mass loss during evaporation on the copper substrate was about 26%, much higher than on the nanofiber mats (below 10%). This demonstrates the higher cooling potential of the nanofiber mats compared to the copper substrate: with less mass lost, the latent heat of water evaporation is more fully exploited.
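A minimal numerical sketch of equation (2) as written above is given below. The droplet radius, base diameter, evaporation time, and mass-loss figure used here are placeholders chosen only to show the arithmetic; they are not values reported in this work.

    import math

    L_latent = 2260.0  # J/g, latent heat of water evaporation (from the text)
    rho = 1.0          # g/cm^3, density of water (from the text)

    def heat_flux(r0_cm, d0, dt_s, cumulative_mass_loss):
        """Heat flux rate per equation (2): q = rho*(4/3)*r0^3*L*delta / (pi*d0^2*dt)."""
        delta = 1.0 - cumulative_mass_loss - 0.02  # spattering/retention correction (from the text)
        return rho * (4.0 / 3.0) * r0_cm**3 * L_latent * delta / (math.pi * d0**2 * dt_s)

    # Hypothetical inputs: 1 mm pre-impact radius, normalized base diameter 2.7,
    # 0.4 s evaporation time, 4% cumulative mass loss.
    print(heat_flux(0.1, 2.7, 0.4, 0.04))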
              Height (cm)  Impact velocity (m/s)  Average radius/initial radius ratio
                                                  Copper substrate  Copper nanofiber mat
Height 1      15.877       1.764                  2.03              2.56
Height 2      18.85        1.922                  1.84              2.5
Height 3      21.55        2.055                  1.77              2.66
Height 4      23.85        2.162                  2.36              2.71
Height 5      26.25        2.268                  1.78              2.76
Height 6      28.70        2.272                  2.26              2.81
Average                                           2.01              2.67
Several of the images from the 200 °C experiments had poor visibility of the droplet base on the mat surface. This made accurate measurement problematic, and heat flux rates could not be calculated properly. Also, drop impacts onto the copper substrate at 150 °C and 200 °C formed partial and complete rebounds, respectively, characteristic of the Leidenfrost effect. In the experiments, the Leidenfrost effect did not occur when the drops fell onto the metal-coated nanofiber mats, but it did occur on the PAN nanofiber mat and the copper substrate: the surface temperature was above the Leidenfrost point, and the metal-coated nanofiber mats prevented the Leidenfrost effect from developing. As shown in Figures 7 and 8, silver nanofiber mats had the best heat flux rate and the least mass loss, but the copper nanofiber mat is the more practical choice for coating a heated surface to minimize mass loss during spattering, given its high thermal conductivity and lower cost. Silver also oxidizes quickly in air and water, and the silver nanofiber mat degraded easily; when a drop of water made contact with the silver nanofiber mat at 200 °C, the water vapor generated inside the mat broke the mat apart.
TABLE I: Average radius ratio of copper substrate and
copper nanofiber mat at room temperature.
                                 Silver nanofiber mat  Copper nanofiber mat  Nickel nanofiber mat  PAN nanofiber mat  Copper substrate
At 125 °C
Time for complete evaporation    ~0.3 s                ~0.4 s                ~0.4 s                ~4 s               ~3 s
Mass loss during evaporation     <0.01%                ~4%                   ~4%                   <10%               ~26%
Heat flux rate (W/cm²)           257.94                203.80                168.58                9.50               180.21
At 150 °C
Time for complete evaporation    ~0.3 s                ~0.4 s                ~0.4 s                N/A                N/A
Mass loss during evaporation     <0.06%                ~8%                   ~19%                  <6%                ~100%
Heat flux rate (W/cm²)           613.85                107.87                198.17                N/A                0
At 200 °C
Time for complete evaporation    ~0.2 s                ~0.3 s                ~0.4 s                N/A                N/A
Mass loss during evaporation     <2%                   ~20%                  ~15%                  <6%                ~100%

TABLE II: Data for the different nanofiber mats and the copper substrate at temperatures of 125, 150 and 200 °C in experiment 2.
Conclusion

In this research, experiments were performed to observe the hydrodynamics of water drop impact onto a copper substrate and onto copper substrates coated with different nanofiber mats. The efficiency of spray cooling a heated surface depends on the heat flux rate through the contact area between the water and the hot surface. The results of experiment 1 show that the copper nanofiber mat coating increased the contact area between the hot surface and the water; use of the copper nanofiber mat coating yielded a contact radius about 25% greater than the bare copper substrate. The experiments introduced a novel idea for improving spray cooling by utilizing a metalized electrospun polymer nanofiber mat coating. Experiment 2 compared water evaporation and mass loss through the different nanofiber mats and the copper substrate. The image data from the experiments revealed that use of the metalized nanofiber mat coating greatly reduced undesirable phenomena including rebounding (Leidenfrost effect), splashing, and receding. It was also found that silver nanofiber mats had the highest heat flux rate, while nickel and PAN nanofiber mats had lower heat flux rates than the bare copper substrate. Experimental results demonstrated that metalized nanofiber mat coatings on copper substrates improve heat flux rates and avoid the Leidenfrost effect. Although the heat flux rates for silver-plated mats were high, the tendency of silver to oxidize and degrade easily made copper-plated mats the more practical option. This investigation of copper-plated nanofiber mats may lead to a breakthrough in the development of a new generation of spray cooling for microelectronic systems, radiological elements, and server rooms.

Acknowledgements

The author would like to thank the National Science Foundation (NSF-REU) and the Department of Defense (DoD-ASSURE), which funded the REU program through EEC-NSF Grant #0755115 and CMMI-NSF Grant #1016002, for their financial support. Special thanks to Alex Kolbasou for his guidance and support throughout the project. Additional thanks to Professor Christos Takoudis and Professor Gregory Jursich for organizing and running the REU program, and to Runshen Xu and Qian Tao for organizing tutorial and social events.

1. A. Yarin, Annual Review of Fluid Mechanics 38, 159 (2006).
2. A. Lembach, Y. Zhang, and A. Yarin, Langmuir 26, 9516 (2010).
3. R. Srikar, T. Gambaryan-Roisman, C. Steffes, P. Stephan, C. Tropea, and A. Yarin, International Journal of Heat and Mass Transfer 52, 5814 (2009).
4. D. Reneker and A. Yarin, Polymer 49, 2387 (2008).
General Purpose Silicon Trigger Board for the CMS Pixel Read Out Chips
E. Stachura and C.E. Gerber
Department of Physics, University of Illinois-Chicago, Chicago, IL 60607
R. Horisberger
Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
A semester research project was completed at Eidgenössiche Technische Hochschule Zürich (ETH
Zürich) and the Paul Scherrer Institut (PSI) in the spring of 2010. A new kind of trigger based on
silicon pixel sensors was developed for the commissioning of the current Compact Muon Solenoid
(CMS) pixel detector. Prior to this trigger there was no silicon sensor based trigger that used the
same technology as the pixel detector. The current trigger systems involve cumbersome photomultiplier tubes and Nuclear Instrument Module (NIM) crates to process the signals. To improve on these trigger systems, it was decided to develop a trigger using pixel technology in the form of a printed circuit board that integrates the signal processing circuitry. The board worked well, although there were limitations (e.g., crosstalk occurred, so copper shielding was needed). A second
generation trigger board currently exists. It fixes many of the problems encountered with the first
board.
Introduction
The Large Hadron Collider (LHC) at CERN in
Geneva, Switzerland, is a proton-proton accelerator. The
Compact Muon Solenoid (CMS) experiment is one of the
general purpose experiments set up along the LHC, and
this is the experiment of interest. The Super Proton
Synchrotron (SPS) is one of the accelerator machines at
CERN (see Figure 1). It accelerates protons up to 450 GeV before they are injected into the LHC. This beam can also be used for other experiments such as sensor irradiation. The proton beam is incident upon a target, located at the North Area of the SPS, that produces a beam of
pions. At the end of the pion beam line, there is a site
where experiments may be placed inside a 3T superconducting magnet. This site is at the North Area in Figure
1.
The purpose of this experiment was to simulate degradation of the readout electronics and sensors of the CMS
Pixel Detector after a number of years of being exposed
to radiation. For example, a chip with fluence 6 × 1014
neq /cm2 was tested.5 This simulates electronics used for
2 years in the 4 cm layer of CMS at a luminosity of L
= 1034 cm−2 s−1 .
The idea for this project was to build a trigger for
the experiment. This trigger alerts the computer as to
when charged particles arrive at the experiment so data
taking may begin. This trigger was needed because of
the magnetic field.
Introduction to the Trigger Board
The trigger board detects passing charged particles and
signals the test board to begin taking data. This board
is the first device the charged particles encounter. There
is a silicon sensor attached to it via a gold plated board.
FIG. 1: The CERN accelerator complex.
This sensor, with dimensions 10 x 10 mm, is similar to
the ones bump bonded to the Read Out Chips (ROC)
in the CMS Pixel Detector. The trigger board sensor
is a diode while the sensors on the ROCs are pixelated.
Alignment was a key issue in this test beam. A wire
chamber was used to find the beam, and the beam was
bent until a high peak was seen through the sensors. The
setup was initially installed by visual approximation; that
is, collimators were used to adjust the beam as necessary
after the setup was installed inside the magnet.
Scintillators are usually used to create triggers. Scintillators are large and require photo multiplier tubes
(PMT). Since there was a 3T magnetic field, and PMTs
do not always function as expected in magnetic fields,
this project was especially useful. This magnetic field
was needed to ensure charge sharing between pixels. The
other chips in the test setup used to reconstruct particle
tracks were deliberately placed not in parallel with each
other so charge sharing would occur. Scintillator triggers
also use Nuclear Instrumentation Modules (NIM) to process the input from the PMT and generate an output.
This is not advantageous since the NIM crate must be
set up outside the experimental area, and hence there
will be a delay that needs to be taken into account. Silicon sensors are beneficial since they can be tuned to
trigger on particles with certain energy, momentum, etc.
While this trigger has approximately the same speed as
a scintillator, the size advantage is much greater. The
compactness of the trigger board provides ease in installation and transportation. There was an attempt to
measure efficiency, yet this was not exactly the trigger
efficiency. The efficiency recorded was the percentage of
events that recorded hits. However, there was an unknown timing problem, so this efficiency cannot be taken
as the efficiency of the trigger.
The output of the trigger board is connected to an oscilloscope and a NIM crate in the control room. The output pulse can be observed there6 . The NIM Crate counts
the number of triggers sent by the board. It also inverts
the signal received from the trigger board. The signal
comes from the inverted output on the trigger board, so
it is necessary to invert the signal again so the test board
sees a positive signal. The NIM crate output signal is
sent to the test board. Once the test board is triggered,
the Token Bit Manager (TBM)1 is notified to start reading out the signals from all the ROCs (there is a more
explicit description of the experiment in Section 5). The
trigger board can be seen in Figure 3. There is copper shielding placed on the board to prevent crosstalk.
There is also aluminum foil over the sensor to prevent
background photons from hitting the sensor. The trigger board without any components can be seen in Figure
2(a).
The trigger board itself is a two layer Printed Circuit
Board (PCB). On the second layer there is a ground
plane. On the first layer there are two heat sinks surrounding voltage regulators. Since there are a number of
unconnected pins on the voltage regulators, it was realized a heat sink would be the best way to connect these
pins. The circuit schematic can be found in Appendix A.
FIG. 2: a) The trigger board without components. The heat sinks are readily visible. b) The sensor and gold PCB are protected by a cap.

Creation of the pulse

When particles hit the sensor on the trigger board, some of their energy is absorbed. This energy is used to create electron-hole pairs, which induce signals that can then be read out. A reverse bias voltage of +100 V is applied to the sensor. The pn-junction of the sensor with the applied bias voltage is responsible for the electric field: electrons are attracted towards the positive potential and the holes are repelled away. The electric field creates a depletion zone in the pn-junction by removing all the free charges. Holes are created when charged particles pass through this depletion zone. The holes then induce a current that is output to the electronics of the trigger board. From this current a signal can be measured; this signal is then shaped by the trigger electronics. The entire setup is contained within an electromagnet, so the magnetic field must be taken into account. The drift of the electrons and holes is subject to the Lorentz force2

F = q(E + v × B)     (1)
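As a small numerical illustration of equation (1), the sketch below evaluates the Lorentz force on a drifting charge carrier. Only the 3 T field strength comes from the text; the electric field and drift velocity values are arbitrary placeholders.

    import numpy as np

    q = 1.602e-19                    # C, elementary charge
    E = np.array([0.0, 0.0, 1.0e5])  # V/m, assumed field across the depletion zone
    v = np.array([0.0, 0.0, 1.0e4])  # m/s, assumed carrier drift velocity
    B = np.array([0.0, 3.0, 0.0])    # T, the 3 T field of the magnet

    # Lorentz force F = q (E + v x B); the v x B term deflects the drift path.
    F = q * (E + np.cross(v, B))
    print(F)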
The sensor on the trigger board is not bump bonded onto the board like the ROC sensors. The sensor is glued onto a gold plated PCB. Both the gold PCB and the sensor are attached to the trigger board by vias (holes drilled into the board allowing passage to the other side of the board). There are wire bonds connecting the sensor to the trigger board. The orientation of the sensor is significant: the high voltage is sent to the sensor, and if the sensor were oriented differently, the high voltage could be sent to the output of the sensor instead.

A positive 100 V is applied to the sensor via a resistor and a high voltage capacitor. Two potentiometers are also used in the circuit: one adjusts the threshold of the comparator and the other adjusts the width of the pulse. The path of the pulse from the sensor to the test board can be seen in Figure 5.

FIG. 3: The trigger board is 82.3 mm x 63.5 mm. The foil is to prevent background photons from hitting the sensor. The copper shielding is to prevent crosstalk.

FIG. 4: Preamplifier stage with low pass filter.

FIG. 5: The path the pulse takes. The initial plan was to connect the comparator output to the test board, but there were difficulties with this, which are described in the text.

The circuit

The first component the pulse encounters on the board is a preamplifier. This is an inverting amplifier (see Figure 4): the pulse enters through the negative input of the preamplifier, while the other input is grounded. The pulse then continues to a shaper, which inverts the pulse and amplifies it.

The pulse then goes through a gain stage, where the polarity of the pulse stays positive and its magnitude is amplified, before reaching the comparator. The output of the comparator is connected to the NIM module in the control room via a long coaxial cable and is then sent to the test board via another cable.

A positive voltage regulator and a negative voltage regulator supply voltage to the amplifiers and the comparator; each supplies ±5 V to the circuit. It was necessary to supply more than 5 V to the negative voltage regulator because it was old; the working range was measured to be between 6.5 and 11 V, and seven volts was chosen for the experiment.

There are two outputs of the comparator. The non-inverted output is connected to the test board, and the inverted output is connected to the NIM crate. The initial purpose of this connection was to analyze the output, but eventually the non-inverted output proved too noisy to be used, so the inverted output was used instead.
Design Process

Circuit simulations

Circuit simulation was done with Simetrix.7 The circuit was designed in stages, and analysis of each stage was completed before moving on to the next. The first circuit analyzed was the amplifying stages: the preamplifier, the shaper, and the accompanying resistors and capacitors. A number of simulations were conducted to optimize the signal to noise ratio without losing signal quality; this was done by adjusting the values of the feedback components. Components were added step by step until the circuit was completely understood. One of the first circuits analyzed can be seen in Figure 6.

FIG. 6: One of the first circuits simulated. This is the preamplifier and shaper stage.

Noise analysis

The first study compared the simulated noise to the noise calculated analytically; in principle these two values should agree. The equations used for the analytical calculations are discussed below. A measurement of the simulated noise for the preamplifier stage was made by varying the feedback resistance while keeping the feedback capacitance constant. The results can be found in Table I. The conclusion reached was that the feedback resistor at the preamplifier stage had a negligible effect on the total noise for low resistances, which allowed adjustment of other feedback components without drastically increasing the noise.

TABLE I: Noise results for fixed C = 10 pF. The calculated noise and the simulated noise are similar until high resistances are reached.

R [kΩ]    v_sim [µV]    v_analytic [µV]
1         20.365        20.352
100       20.365        20.351
10⁴       20.325        20.352
10⁶       16.333        20.321

Each stage's contribution to the total noise was simulated in order to identify which components were most sensitive. The preamplifier stage was the only stage analyzed in depth because of the aggressive development schedule for the test beam. Simulations give the ideal behavior of the circuit, which reality does not match perfectly; there was not enough time to fully compare simulations to the observed results, so rather ad hoc solutions were found as time allowed.8

A number of equations were used in the noise analysis. For a low pass filter, the cutoff frequency is

f_c = 1 / (2πRC)     (2)

where R and C are the values of the feedback resistor and capacitor, respectively. For R2 the feedback resistor and R1 the input resistor, the gain is

A = -R2 / R1.     (3)

For the inverting amplifiers, the output voltage can be calculated via the gain equation

V_out = (V+ - V-) A.     (4)

The total noise caused by filtering was calculated analytically using

|v²_total| = KT / Q     (5)

where K is the Boltzmann constant,9 T is the temperature, and Q is the electron charge. Noise in the time domain was evaluated using both analytic equations and SPICE, via

V_RMS = sqrt( ∫ |v|² dt ).     (6)

There are a number of voltage dividers in the circuit. The voltage after a voltage divider is

V_out = V_in · Z2 / (Z1 + Z2)     (7)

where Z1 and Z2 are impedances (see reference 3 for more). Since there is also a resistive divider at the gain stage, the output voltage there is

V_out = V_in · R2 / (R1 + R2)     (8)

where R1 and R2 are resistors; for a more comprehensive description, see reference 4. By calculating the gain after the amplifiers and considering equations 7 and 8, it is possible to calculate the output voltage at each stage of the circuit.
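As a rough cross-check of equations (2), (3), and (8), the sketch below plugs in the feedback values quoted later for the preamplifier and shaper stages (47 kΩ / 1.8 pF and 4.7 kΩ / 5.6 pF) and the gain-stage divider (R1 = 47 kΩ, R2 = 1 kΩ). This is only an illustrative calculation, not part of the original analysis, and the input-resistor value used for the gain formula is a made-up placeholder.

    import math

    def cutoff_frequency(R_ohm, C_farad):
        """Low pass filter cutoff, equation (2): fc = 1/(2*pi*R*C)."""
        return 1.0 / (2.0 * math.pi * R_ohm * C_farad)

    def inverting_gain(R2_ohm, R1_ohm):
        """Inverting amplifier gain, equation (3): A = -R2/R1."""
        return -R2_ohm / R1_ohm

    def divider_ratio(R1_ohm, R2_ohm):
        """Resistive divider, equation (8): Vout/Vin = R2/(R1+R2)."""
        return R2_ohm / (R1_ohm + R2_ohm)

    print(cutoff_frequency(47e3, 1.8e-12))   # preamplifier feedback: about 1.9 MHz
    print(cutoff_frequency(4.7e3, 5.6e-12))  # shaper feedback: about 6.0 MHz
    print(inverting_gain(47e3, 1e3))         # -47 with a placeholder 1 kOhm input resistor
    print(divider_ratio(47e3, 1e3))          # gain-stage divider: about 0.021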
FIG. 7: Sample voltage divider. A number of them were used in the circuit.

FIG. 8: Sample resistive divider. There is one used in the circuit.

The layout

Work on the layout of the board began "by hand".10 This was the first step before using CAD to design the board, and the process took quite some time. EAGLE11 software was then used for the final design of the board. The final layout of the board can be found in Appendix B.

Circuit Stages

The best way to discuss the circuit is to break it down into stages. The preamplifier stage, the shaper stage, the gain stage, and the comparator stage are considered separately.

The preamplifier stage

The first stage of the circuit is the preamplifier stage. This stage consists of a preamplifier and its feedback components. The amplifier is inverting, and there is also a low pass filter.12 The preamplifier stage was seen previously in Figure 4. There is a resistor immediately following the preamplifier; its purpose is to reduce signal reflection. This undesired reflection can be seen in Figure 9. There is a capacitor between the preamplifier stage and the shaper stage. This capacitor is used for DC blocking; that is, it only allows the AC characteristics of the signal to be passed on. A similar capacitor is between the shaper and gain stages. The optimal values for the feedback components were found to be R = 47 kΩ and C = 1.8 pF.

FIG. 9: Small signal reflection (circled region) as seen on an oscilloscope. This still occurred after the optimum resistor was chosen, most likely because of an unoptimized layout design.

The shaper stage

The pulse then encounters the shaper stage. This stage consists of another operational amplifier with a low pass filter, and its output is an inverted positive pulse. The optimal values found for the feedback components are R = 4.7 kΩ and C = 5.6 pF.

The gain stage

Upon inspection of the circuit it was realized that there was insufficient pulse amplification, so another amplification stage was required. This stage is non-inverting. There is a resistive divider between the negative input and the output instead of a low pass filter. The optimal values for the resistive divider were found to be R1 = 47 kΩ and R2 = 1 kΩ.

The comparator

The pulse then reaches the comparator. The negative input of the comparator is attached to a potentiometer, which controls the threshold. At first only
one output of the comparator was connected; the other output was eventually connected as well. There is another potentiometer attached between the latch pin13 and the output of the comparator; this potentiometer adjusts the width of the pulse. There is also a Schottky diode.14 There is a low voltage drop across the diode terminals when current flows, which allows for better system efficiency. The comparator stage can be seen in Figure 10.

FIG. 10: The comparator stage. There are two potentiometers used, as well as a Schottky diode.

Power distribution

A cluster of capacitors can be found along the power distribution lines. These capacitors distribute power to each of the components.15 There are four capacitors each for the positive and negative power supplies, placed as close as possible to the component pins.

The Components

Most of the components used in the circuit were new. Some of the older components caused unwanted issues; for example, the non-inverted output of the comparator was too noisy to be used. These problems were mostly fixable.

Ultra low noise amplifiers are used, and much attention is given to reducing noise since the amplifiers are fast; the peaking time is approximately 8 ns, as can be seen in Figure 11.

FIG. 11: The rise time of the amplifiers used.

The voltage regulators were supplied with 7 V. They were not in the software library for the layout, so it was necessary to construct them in EAGLE with the dimensions given in the data sheet.16

The comparator is also fast (10 ns). It is an 8-pin component and has a complementary TTL output.17 There are two input pins: the positive input is connected to the output of the gain stage, and the negative input is connected to the potentiometer. The potentiometer acts as a variable voltage divider; in this case a screw is turned to adjust the threshold for the comparator. There are two power supply pins as well, and there is also a "Latch" pin, which can keep the input data at the output when in the high state.18 See Appendix D for a full list of components.

Experimental Setup

The experimental setup can be seen in Figure 12. There are four ROCs with sensors bump bonded to them to reconstruct the tracks of incoming particles. There is one ROC for testing located in between the other ROCs. This ROC is kept in a cold box in order to prevent annealing (for the irradiated chips); annealing can change the electrical properties of the ROC, which is an undesirable effect. The angle of the test ROC with respect to the beam could be adjusted by means of a large pole reaching from near the control room to the ROC itself; this is done to measure the Lorentz drift. The tested ROC is kept cold by means of a Peltier cooler, which creates a temperature difference between the two sides of a device by means of a voltage (see [2] for a more in-depth discussion). The heat is removed via cooling liquid hooked up to a chiller.

The entire setup is secured inside a superconducting 3 T magnet. Tests are run with and without the magnet on.19

The telescope board is the board with all the ROCs attached to it. The test board (used to program the chips) is located underneath the telescope board. The PSI46 test board normally used in radiation hardness experiments was modified in several minor ways; for example, a magnetic switch was removed since the apparatus is located inside a 3 T magnet.
The TBM starts the readout sequence for the recorded hits. It does this via tokens.20 The test board sends a token to the telescope board and the ROCs to read out the data. The token is passed from ROC to ROC: once one finishes the readout process, it passes the token on to the next ROC. The output signal contains a header and a trailer at the beginning and end, respectively, as well as the address of the pixel that recorded the hit.

Now that the experiment is over, analysis is being done on the data obtained. A satisfactory amount of data was taken despite the problems encountered.

FIG. 12: The experiment before being placed in the magnet. There is the trigger board, telescope board, test board, and ROCs.

Problems encountered

There were a number of problems in the design process and at the test beam site. These problems were fixable for the most part; the non-inverted output of the comparator, however, was too noisy to be used at all.

Design problems

There was only one month to design this board, so ultimately there were a number of flaws in the design. One complication was the physical layout of the components. Some of the tracks21 were too close together, so crosstalk transpired: the signal passing through the preamplifier was affecting the signal at the input of the shaper. Copper shielding was placed over some components to remedy this. A few components (namely, the potentiometers) were old, so placing them onto the board was challenging. There was also an issue with pad sizes on the board. Some of the pads were not the correct size for the components, and a number of components had to be created in the layout using the data sheets for exact specifications. Some specifications were still not exact on the layout, but this was fixable: all that was needed was a wire to connect the pad to the component.

There was also an issue with amplifier oscillation. Some traces were rather close together; in principle, spacing these tracks out would reduce the noise and oscillation. A number of bypass capacitors were added as well to reduce oscillation.

The board needed to be tested before assembling it with the rest of the experiment. It was necessary to adjust the values of the feedback and input components for the best signal to noise ratio; simulations were run, but it was still necessary to make adjustments.

Problems at the test beam site

The first issue encountered was the comparator threshold. It was set too high during calibration with a radioactive source at PSI and needed to be adjusted via the potentiometer. This was rather inconvenient, since the potentiometer is sensitive and the trigger board was already attached to the rest of the experiment; it was necessary to go inside the magnet to adjust the threshold, and the magnet took 1 hour to ramp down and then 2 hours to ramp back up. For the next generation trigger board an external pin will be added so the threshold can be defined by an external voltage.

The initial plan was to attach the output of the comparator straight to the test board. When this was attempted there was too much oscillation, so the inverted output of the comparator was sent to the control room instead; a NIM module then inverted the signal and sent it to the test board.

Another obstacle was one of the lemo connectors on the trigger board, which had to be replaced after it partially broke off.

Further work

An upgrade of the trigger board is planned to fix all associated problems now that the tests are complete. The first issue to fix is the pad sizes on the board; all the pads need to be fitted perfectly to the components. Another issue that needs to be addressed is the crosstalk: some tracks and components need to be spaced out more. There will be an external pin to adjust the threshold instead of using the potentiometer. An analog output from the gain stage will also be added to explore problems if the comparator fails. Amplifier oscillation will also be addressed.

This board will likely be used at the next test beam in Fall 2010. Some boards may also be used in the University of Illinois at Chicago High Energy Physics (UIC HEP) department. Currently, progress is being made to upgrade the board.
Acknowledgements

This work was supported in part by grant 0730173 from the National Science Foundation, PIRE: Collaborative research with the Paul Scherrer Institute and Eidgenoessische Technische Hochschule on Advanced Pixel Silicon Detectors for the CMS detector.

The authors would like to thank Dr. Jose Lazo-Flores for all his help; without his patience and assistance, this project would not have been possible. We also would like to wholeheartedly thank Mr. Beat Meier, whose patience and tremendous insight were crucial in supporting our work and ensured that the project was completed in time. We also acknowledge Dr. Jose Lazo-Flores' incredibly valuable revisions of this paper.
1. S. Dambach, CMS Barrel Pixel Module Qualification.
2. L. Rossi, P. Fischer, T. Rohe, and N. Wermes, Pixel Detectors: From Fundamentals to Applications (Springer, 2006).
3. J. Segura and C. F. Hawkins, CMOS Electronics: How It Works, How It Fails (Wiley Interscience, 2004).
4. R. Horisberger (2009), notes from Herbstsemester 2009 at ETH Zurich.
5. The fluence here is given in neutron equivalents per square centimeter.
6. This was not the original plan (see later).
7. See http://www.simetrix.co.uk/ for more.
8. Specifically, the simulations did not consider crosstalk between the electronics, which turned out to be a significant issue.
9. K = 1.3806503 × 10⁻²³ m² kg s⁻² K⁻¹; it is often easier to remember that KT ≈ 27 meV at 300 K.
10. That is, components were physically laid out on paper, with tracks drawn with a pencil.
11. See http://www.cadsoftusa.com/ for more.
12. Low pass filters allow low frequency signals to pass through while making it difficult for high frequency signals to pass.
13. See the section on components for a better description of the functionality of this pin.
14. A Schottky diode is a special semiconductor diode with a low forward voltage drop.
15. That is, the power is sent through the capacitors instead of directly to the circuit.
16. See the appendix for the list of components.
17. TTL functions with a 0 V to 5 V supply, while NIM operates with a voltage from -1 to 0 V and is inverted.
18. See www.linear.com and the LT1016 data sheet for a more thorough description of this comparator.
19. This is the oldest superconducting magnet at CERN. It took approximately 2 hours to reach 3 T.
20. A token is a signal sent from the FPGA to the Token Bit Manager (TBM) to read out the data from the readout chips. See [1] for more.
21. By tracks it is meant the conducting traces on the board, rather than the reconstructed tracks in CMS.
Appendix

Schematic

FIG. 13: Schematic of the circuit used for the trigger board.

Layout

FIG. 14: Layout of trigger board Version 1.0.

List of Components

• TOREX XC6202 P502PR High Voltage Positive Voltage Regulator (1)
• MAXIM 4106/4107 350 MHz Ultra-Low-Noise Op Amp (3)
• National LM 79L05 ACM 3-Terminal Negative Voltage Regulator (1)
• Agilent Surface Mount RF Schottky Barrier Diode BAS40-04 (1)
• LINEAR LT1016 UltraFast Precision 10 ns Comparator (1)
• 1 kΩ trimmer potentiometers (2)
• SMD resistors, size 1206
• SMD capacitors, size 0805
• High voltage capacitor, size 1206 (1)
• 3-pronged power connector (1)
• EPL.00.250.NTN Lemo connector (4)
Characterization of Nickel Assisted Growth of Boron Nanostructures
F. Lagunas and B. Sorenson
Department of Chemistry, Northeastern Illinois University, Chicago IL 60625
P. Jash
Department of Chemistry, University of Illinois at Chicago, Chicago, IL 60607 and
Department of Chemistry, Northeastern Illinois University, Chicago IL 60625
M. Trenary
Department of Chemistry, University of Illinois at Chicago, Chicago, IL 60607
Boron nanostructures were synthesized by the vapor-liquid-solid mechanism using nickel as a
catalyst. Two types of catalyst deposition methods were used: thermal evaporation and solution
dispersion of Ni nanopowder. Also, the effect of synthesis temperature on the shapes of the nanostructures formed is reported here. The nanostructures were primarily characterized by Scanning
Electron Microscopy (SEM). Further qualitative analyses were done with Transmission Electron
Microscopy (TEM) and High Resolution Transmission Electron Microscopy (HRTEM). For quantitative analyses Energy Dispersive X-ray spectroscopy (EDX) and Electron Energy Loss Spectroscopy
(EELS) were used. These results confirmed that 1) high purity Ni assisted boron nanostructures
grow by pyrolysis of diborane, and that 2) oxide assisted growth of the nanostructures did not take
place as carbon and oxygen were present only as surface contamination. Selected Area Electron
Diffraction (SAED) patterns showed that the nanostructures were mainly crystalline. By decreasing the amount of nickel catalyst deposited by thermal evaporation, the diameters of the nanowires were reduced. Also, the use of nickel nanopowder as the catalyst instead of a Ni film resulted in a significant reduction in wire diameter; the diameter of these boron nanowires is about 36 nm. Along with nanowires, other types of nanostructures were formed with either type of deposition. At the lower reaction temperature, the formation of nanosheets was observed.
Introduction
The increased demand for energy calls for the development of new approaches towards useable energy sources.
Hydrogen is one of the most abundant energy resources
that can be converted to both thermal and electrical energy. Several hydrogen storage methods such as carbonfiber-reinforced high-strength containers, liquid hydrogen, chemical hydrides, and carbon nanotubes have been
suggested.1 The demand for storage materials for hydrogen is the motivation for our research on the synthesis
of boron nanostructures. Boron, because of its unique
semi-metallic properties, allows for optimal efficiency in
a storage unit, and is convenient for transportation because of its light weight.
Boron also forms the second largest number of compounds with hydrogen (after carbon). Thus, research towards
the production, storage, and usage of hydrogen is crucial
as it provides for another energy carrier that may become
both economically and environmentally favorable. Using
hydrogen in such a way minimizes greenhouse gas emissions and acts as a cleaner energy alternative. Nanostructures are studied instead of bulk powder or crystalline
materials because of the increased surface-to-volume ratio. This is beneficial as more hydrogen can be stored per
weight of the storage material. Also, the use of nanostructures is favorable because they are known to have an
increased diffusion rate of the adsorbed material, which
leads to more efficiency in the delivery of the stored hydrogen.
The objective of the current research is to observe the
effect of catalyst deposition method and temperature on
the diameter of nanowires. By the spillover mechanism2
hydrogen is proposed to be stored in carbon nanostructures. In this mechanism the dissociation of diatomic
hydrogen molecules over a metal catalyst particle, which
is on top of a support, takes place and then the hydrogen
"spills" over and onto the surface. It is hypothesized that
the spillover mechanism will apply to boron nanowires,
with nickel acting as the catalyst and the boron network
acting as the support.
Different types of boron nanostructures have been synthesized using magnetron sputtering, laser ablation and
chemical vapor deposition method. Nanowire-nanotube
hybrid structures are also synthesized using iron as a
catalyst. Researchers have also used an array of catalysts such as gold, platinum, and palladium, resulting in one-dimensional structures, and concluded that
the use of nickel as a catalyst is ineffective.3 However,
in this work we have observed growth of boron nanostructures not only with a nickel catalyst deposited thermally, but also starting from nickel nanopowder. At relatively low temperatures (800-1000◦ C) the pyrolysis reaction B2 H6 → 2B + 3H2 occurs in the presence of a
Ni catalyst and different types of boron nanostructures
are formed. We further have observed the effect of catalyst deposition method and the reaction temperature on
the types of nanostructures formed. In the following sections, detailed descriptions of the synthesis method and
materials are discussed.
Experimental Details

Materials

For the synthesis, the Low Pressure Chemical Vapor Deposition (LPCVD) method (specifically, the Vapor-Liquid-Solid (VLS) sub-method4) was used with a home-built apparatus. The CVD method is often used to produce solid materials at a high purity level, and the products are often used in the semiconductor industry in applications such as electronic devices. The apparatus used for the synthesis in this lab is discussed in detail elsewhere.5 The substrates used in this experiment were silicon wafers with a one micron thick thermally grown layer of silicon oxide, each cut into rectangles of approximately 1 by 1.5 centimeters. Precipitation of the desired product occurs on the substrates, while byproducts are pumped out of the reaction chamber. The wafers were ultrasonically cleaned, first with a 1:2 mixture of 20% hydrogen peroxide and sulfuric acid to remove all inorganic materials, and then with a 1:1 solution of methanol and acetone to remove all organic contaminants. Each solution was allowed to sit in the sonicator for approximately 15 minutes. The wafers were then placed on a slide and prepared for deposition of the catalyst by either thermal evaporation or solution dispersion.

To characterize the synthesized samples qualitatively, Scanning Electron Microscopy (SEM, JSM 6320F) and Transmission Electron Microscopy (TEM) were used. SEM was first used to identify the basic structures of the samples; from the SEM analysis it could be determined whether or not the desired nanostructures had formed. After identification of the nanostructures, TEM analyses were performed, which show the exact diameters of the nanowires. Energy Dispersive X-ray (EDX) spectroscopy was then used to determine the elements present in the synthesized nanowires, and Electron Energy Loss Spectroscopy (EELS) was used to determine the relative atomic percentage of each element present. This is important because the amount of oxygen present in a sample indicates whether or not the product is oxidized.

FIG. 1: Substrates placed in the reaction chamber after deposition of the catalyst.

Nickel catalyst deposition methods

Thermal Evaporation Method

Nickel foil was heated in a JEOL LTD (model # JEE4X/5B) vacuum evaporator. First, to prevent contamination of the substrates, the nickel was ultrasonically cleaned with methanol and acetone, and then weighed. The thermal evaporator was first coated in a trial run by evaporating a small amount of Ni to eliminate any possible contamination of the wafers. Nickel catalyst masses ranging from 1.3 to 1.7 mg were deposited on the three wafers. After the wafers were retrieved from the evaporator, a slight color change was observed.

FIG. 2: Boron hybrid structures formed by using 1.7 mg Ni.

Solution Dispersion Method

Nickel nanopowder was deposited on silicon wafers by "solution dispersion". In our experiment, about 6.5 mg of nanopowder (99%), 20-40 nm in size, was dispersed in 6.5 mL of propanol and sonicated for fifteen minutes. A 2 µL aliquot of the solution was then dispensed on each substrate using a micropipette and allowed to dry before placement in the CVD apparatus.
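For orientation, the nominal nanopowder loading per substrate implied by these numbers is easy to estimate. This is only a back-of-the-envelope sketch, and it assumes the dispersion stays uniform during pipetting, which the paper does not state.

    mass_mg = 6.5     # mg of Ni nanopowder dispersed (from the text)
    volume_ml = 6.5   # mL of propanol (from the text)
    aliquot_ul = 2.0  # µL dispensed per substrate (from the text)

    concentration_mg_per_ml = mass_mg / volume_ml               # 1.0 mg/mL
    ni_per_substrate_ug = concentration_mg_per_ml * aliquot_ul  # mg/mL * µL gives µg
    print(ni_per_substrate_ug)  # about 2 µg of Ni per substrate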
In both cases, thermal evaporation and solution dispersion, the substrates were then placed in a quartz boat and inserted into the quartz tube
reaction chamber of the CVD apparatus (the arrangement of the wafers can be seen in Figure 1). A continuous flow of high purity argon gas was introduced to the chamber for 45 minutes at 5 sccm, and the center-of-furnace temperature was set to 925 °C. The pressure in the reaction chamber at this time was approximately 200 mTorr. Diborane gas at 1.08% in argon, from Matheson Tri-Gas Products Inc., was then introduced to the reaction chamber at a flow rate of 20 sccm for 120 min; at this time the chamber pressure was 340 mTorr. At the end of the synthesis, the chamber was cooled down under 5 sccm argon flow. Grey deposits were visible by eye on the substrates.

FIG. 3: a) SEM image of nanowires; b) TEM shows a diameter of 175 nm; c) SAED pattern confirms the crystal structure; d) HRTEM shows that the spacing between atomic layers is 7 Å; e) An EDX spectrum collected from the wire.

FIG. 4: TEM and SAED of an amorphous nanowire 105 nm in diameter.

FIG. 5: a) SEM image of a nanoflower bundle; b) HRTEM of a nanoflower; c) SAED; d) HRTEM image of a nanoflower showing crystalline structure.

Results and Discussion

Thermal evaporation method of nickel deposition

From the SEM characterization of samples from numerous synthesis trials, it was observed that wafers 1 and 2, which are located closest to the gas flow (that is, in the lower temperature area), have more boron deposition and nanostructure growth than observed on wafer 3. This growth pattern has been consistent throughout all syntheses conducted, regardless of catalyst deposition method. For the thermal evaporation deposition technique, the amount of nickel catalyst deposited onto the substrate was varied. It is observed from the SEM images that by changing the amount of Ni deposited thermally from 1.7 to 1.3 mg, the diameters of the boron nanowires obtained were almost halved. The boron nanostructures obtained using 1.7 mg of Ni catalyst are similar to the nanostructures obtained by Xu et al.3 using a Au catalyst; these are "tube-catalytic particle-wire" hybrid structures. Figure 2 shows the growth of boron nanowires with diameters ranging from 0.8 to 1.2 µm, compared on the right to an SEM image published by Xu et al.3

By decreasing the amount of Ni to 1.3 mg, the
tapered structures were seen. Most nanostructures had
lengths greater than 14 µm. Figure 3 shows the products
that were synthesized. The EDX spectrum shows that
the elements present are boron, carbon, oxygen, silicon,
nickel and copper. As boron and carbon peak positions
are not well resolved, quantitative analysis was not possible. Oxygen comes from contamination while the copper
peak is from the Cu grid that was used for the TEM.
FIG. 7: a) SEM images of nanowire formation in an island; b) nanoflower growth.
By lowering the quantity of nickel, the diameters of the boron nanostructures were decreased, which indicates that the size of the metal droplet is the key factor determining the nanowire diameter. The minimum radius of the metal droplet can be estimated from4–6

R_min = 2 V_l σ_lv / (R T ln(s)),

where V_l is the molar volume of the droplet liquid, σ_lv is the liquid-vapor surface energy, s is the degree of supersaturation of the vapor, R is the gas constant, and T is the temperature. This relation therefore sets a lower limit on the diameter of the droplet, and hence of any crystal that can be grown from it. In addition, overcrowding of catalyst nucleation sites may hinder the growth of the nanostructures in both diameter and length.
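To give a sense of the magnitudes this relation implies, the sketch below evaluates R_min numerically. The material parameters are rough, assumed values for a liquid nickel droplet, together with a few assumed supersaturation levels; none of these numbers are taken from the present work, and the sketch is intended only to illustrate the 1/ln(s) scaling.

    import math

    # Illustrative evaluation of R_min = 2 * V_l * sigma_lv / (R * T * ln(s)).
    # The material parameters below are assumed, order-of-magnitude values for a
    # liquid Ni droplet; they are not taken from this paper.
    R_GAS = 8.314          # gas constant, J/(mol*K)
    T = 925 + 273.15       # center-of-furnace temperature used here, K
    V_L = 7.5e-6           # assumed molar volume of the liquid droplet, m^3/mol
    SIGMA_LV = 1.8         # assumed liquid-vapor surface energy, J/m^2

    for s in (1.5, 2.0, 5.0, 10.0):          # assumed degrees of supersaturation
        r_min = 2.0 * V_L * SIGMA_LV / (R_GAS * T * math.log(s))
        print(f"s = {s:5.1f}  ->  R_min ~ {r_min * 1e9:.1f} nm")

With such assumptions the thermodynamic lower bound comes out in the nanometre range, well below the diameters observed here, which is consistent with the practical nanowire diameters being governed by the actual size of the catalyst droplets.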
Figure 4 shows an amorphous nanowire. Besides nanowires, nanoflowers (Figure 5) with thicknesses between 50 and 200 nm are also observed. The flowers have a highly ordered crystalline morphology, as confirmed by the SAED pattern. HRTEM also revealed a 7.5 Å spacing between atomic layers in the nanoflowers.
Figure 6 shows the formation of a 60 nm wide nanowire from the nucleation site. A series of EDX spectra collected at different positions not only supports nickel-catalyst-assisted growth, but also allows us to conclude that the boron remains unoxidized during synthesis.
FIG. 6: a), b) EDX spectra showing boron nanostructures and the presence of nickel; c) TEM and SEM images: (top left) SAED pattern showing the amorphous structure of the nanowire; (bottom right) SEM image of the nanowire between nanoflower bundles (circled in white); (middle) TEM images of the 60 nm diameter nanowire and its nucleation site.
Starting from nickel nanopowder
Utilization of Ni nanopowder with diameters between 20 and 40 nm is found to be an effective way to overcome the minimum metal droplet threshold that exists with thermal evaporation. Both root and tip growth mechanisms are observed; the catalytic particle can be identified at the base of the structure on the substrate as well as at the tip of the wire. Xu et al.3 have provided two hypotheses regarding root versus tip growth and the residence of the catalytic particle within the structure; these hypotheses helped us to interpret our observations.
FIG. 8: a) TEM image of nanowire with Ni particle on
wall; b) TEM image of nanowire with nucleation site; c)
EDX spectrum collected from the area that is displayed
in (a); d) EELS spectrum collected from the area that is
displayed in (a).
FIG. 9: a) SEM images of nanowires in an island; b) TEM images of nanowires with a Ni particle at the tip; c), d) EDX and EELS spectra collected from the same area as the TEM images shown in Figure 10.
As shown in Figure 7, island formation of the Ni catalyst is observed in the SEM images, similar to that seen for the thermally deposited nickel catalyst. Initial nanostructure growth is seen within the catalyst network. In Figure 8, the TEM and SAED images show that 45 nm diameter nanowires are formed. It was challenging to collect HRTEM from this area because of hydrocarbon contamination. The small spherical feature on the nanowire, seen on the left side of the TEM image in Figure 8b, is likely due to electron beam damage. By changing the synthesis temperature from 925 to 875 °C, it was found that, along with nanowires, some nanoribbons or nanosheets are also formed. Figures 9 and 10 present a detailed analysis of the products formed. One of the nanowires shown here has a diameter of 42 nm and a relatively short length (200 nm). The EDX spectrum confirms the expected presence of B, Ni, and Si; the Cu peak originates from the Cu grid. The presence of C and O as surface contamination is also confirmed by the EELS spectrum. The nanowires are observed to be crystalline with a 4.7 Å atomic spacing, and the nanostructures are composed of 98% boron with oxygen on the surface.
Conclusion
Boron nanostructures are successfully synthesized using a nickel catalyst. Various types of nanostructures, such as nanowires, nanoflowers, and nanoribbons, are observed.
As the diameters of the nanowires depend on the catalyst particle size, decreasing the amount of thermally deposited catalyst reduced the nanowire diameters. Through the use of nickel nanopowder with diameters between 20 and 40 nm, even thinner boron nanowires are obtained (between 36 and 175 nm). With a center-of-furnace temperature of 875 °C, nanosheets about 10 µm wide are formed. The nanostructures are mainly crystalline, with a small amount of amorphous morphology. EDX and EELS spectra confirm that the nanostructures are not oxidized and that contamination is restricted to the surface.
FIG. 10: (right) SEM images of bundles of nanoribbons and a single ribbon; (top right) TEM and HRTEM images of one of the nanostructures showing its 4.7 Å atomic layer spacing; (bottom right) EELS spectrum revealing the absence of oxygen.

Acknowledgements
The authors would like to thank the Student Center for Science Engagement of Northeastern Illinois University for funding. This work was also supported by NSF grant # CHE 1012201. The authors would also like to thank the Research Resources Center, UIC.
1. Basic Research Needs for the Hydrogen Economy (2003).
2. A. J. Lachawiec, G. Qi, and R. Yang, Langmuir 21, 11418 (2005).
3. T. Xu, A. Nicholls, and R. Ruoff, Brief Reports and Reviews 1, 1 (2006).
4. B. Bhushan, in Springer Handbook of Nanotechnology (Springer-Verlag, Berlin, 2007).
5. J.-T. Wang, Nonequilibrium Nondissipative Thermodynamics: With Application to Low-Pressure Diamond Synthesis (Springer-Verlag, Berlin, 2002).
6. M. Huang, Y. Wu, H. Feick, N. Tran, E. Weber, and P. Yang, Adv. Mater. 13, 113 (2001).