CHAPTER 1

SAMPLE PREPARATION: AN ANALYTICAL PERSPECTIVE

SOMENATH MITRA AND ROMAN BRUKH
Department of Chemistry and Environmental Science,
New Jersey Institute of Technology, Newark, New Jersey
1.1. THE MEASUREMENT PROCESS
The purpose of an analytical study is to obtain information about some
object or substance. The substance could be a solid, a liquid, a gas, or a
biological material. The information to be obtained can be varied. It could
be the chemical or physical composition, structural or surface properties,
or a sequence of proteins in genetic material. Despite the sophisticated arsenal of analytical techniques available, it is not possible to find every bit of
information on even a very small number of samples. For the most part, the
state of current instrumentation has not evolved to the point where we
can take an instrument to an object and get all the necessary information.
Although there is much interest in such noninvasive devices, most analysis is
still done by taking a part (or portion) of the object under study (referred to
as the sample) and analyzing it in the laboratory (or at the site). Some common steps involved in the process are shown in Figure 1.1.
The first step is sampling, where the sample is obtained from the object
to be analyzed. This is collected such that it represents the original object.
Sampling is done with variability within the object in mind. For example,
while collecting samples for determination of Ca²⁺ in a lake, it should be
kept in mind that its concentrations can vary depending on the location, the
depth, and the time of year.
The next step is sample preservation. This is an important step, because
there is usually a delay between sample collection and analysis. Sample
preservation ensures that the sample retains its physical and chemical characteristics so that the analysis truly represents the object under study. Once
Sample Preparation Techniques in Analytical Chemistry, Edited by Somenath Mitra
ISBN 0-471-32845-6 Copyright © 2003 John Wiley & Sons, Inc.
Figure 1.1. Steps in a measurement process: sampling, sample preservation, sample preparation, and analysis.
the sample is ready for analysis, sample preparation is the next step. Most
samples are not ready for direct introduction into instruments. For example, in the analysis of pesticides in fish liver, it is not possible to analyze
the liver directly. The pesticides have to be extracted into a solution, which
can be analyzed by an instrument. There might be several processes within
sample preparation itself. Some steps commonly encountered are shown in
Figure 1.2. However, they depend on the sample, the matrix, and the concentration level at which the analysis needs to be carried out. For instance,
trace analysis requires more stringent sample preparation than major component analysis.
Once the sample preparation is complete, the analysis is carried out by an
instrument of choice. A variety of instruments are used for different types of
analysis, depending on the information to be acquired: for example, chromatography for organic analysis, atomic spectroscopy for metal analysis,
capillary electrophoresis for DNA sequencing, and electron microscopy for
small structures. Common analytical instrumentation and the sample preparation associated with them are listed in Table 1.1. The sample preparation
depends on the analytical techniques to be employed and their capabilities.
For instance, only a few microliters can be injected into a gas chromatograph. So in the example of the analysis of pesticides in fish liver, the ultimate product is a solution of a few microliters that can be injected into a gas
chromatograph. Sampling, sample preservation, and sample preparation are
Figure 1.2. Possible steps within sample preparation: homogenization/size reduction, extraction, concentration, cleanup, and analysis.
all aimed at producing those few microliters that represent what is in the
fish. It is obvious that an error in the first three steps cannot be rectified by
even the most sophisticated analytical instrument. So the importance of the
prior steps, in particular the sample preparation, cannot be overstated.
1.1.1. Qualitative and Quantitative Analysis
There is seldom a unique way to design a measurement process. Even an
explicitly defined analysis can be approached in more than one way. Different studies have different purposes and different financial constraints, and are
carried out by staff with different expertise and personal preferences. The
most important step in a study design is the determination of the purpose,
and at least a notion of the final results. It should yield data that provide
useful information to solve the problem at hand.
The objective of an analytical measurement can be qualitative or quantitative. For example, the presence of pesticide in fish is a topic of concern.
The questions may be: Are there pesticides in fish? If so, which ones? An
analysis designed to address these questions is a qualitative analysis, where
the analyst screens for the presence of certain pesticides. The next obvious
question is: How much pesticide is there? This type of analysis, quantitative
analysis, not only addresses the presence of the pesticide, but also its concentration. The other important category is semiquantitative analysis. Here
Table 1.1. Common Instrumental Methods and the Necessary Sample Preparation Steps Prior to Analysis

Analytes | Sample Preparation | Instrument(a)
Organics | Extraction, concentration, cleanup, derivatization | GC, HPLC, GC/MS, LC/MS
Volatile organics | Transfer to vapor phase, concentration | GC, GC-MS
Metals | Extraction, concentration, speciation | AA, GFAA, ICP, ICP/MS
Metals | Extraction, derivatization, concentration, speciation | UV-VIS molecular absorption spectrophotometry, ion chromatography
Ions | Extraction, concentration, derivatization | IC, UV-VIS
DNA/RNA | Cell lysis, extraction, PCR | Electrophoresis, UV-VIS, fluorescence
Amino acids, fats, carbohydrates | Extraction, cleanup | GC, HPLC, electrophoresis
Microstructures | Etching, polishing, reactive ion techniques, ion bombardment, etc. | Microscopy, surface spectroscopy

(a) GC, gas chromatography; HPLC, high-performance liquid chromatography; MS, mass spectroscopy; AA, atomic absorption; GFAA, graphite furnace atomic absorption; ICP, inductively coupled plasma; UV-VIS, ultraviolet–visible molecular absorption spectroscopy; IC, ion chromatography.
the concern is not exactly how much is there but whether it is above or
below a certain threshold level. The prostate-specific antigen (PSA) test
for the screening of prostate cancer is one such example. A PSA value of
4 ng/mL or higher implies a higher risk of prostate cancer. The goal here is
to determine whether the PSA is higher or lower than 4 ng/mL.
Once the goal of the analyses and target analytes have been identified, the
methods available for doing the analysis have to be reviewed with an eye to
accuracy, precision, cost, and other relevant constraints. The amount of
labor, time required to perform the analysis, and degree of automation can
also be important.
1.1.2. Methods of Quantitation
Almost all measurement processes, including sample preparation and analysis, require calibration against chemical standards. The relationship between a detector signal and the amount of analyte is obtained by recording
the response from known quantities. Similarly, if an extraction step is involved, it is important to add a known amount of analyte to the matrix and
measure its recovery. Such processes require standards, which may be prepared in the laboratory or obtained from a commercial source. An important consideration in the choice of standards is the matrix. For some analytical instruments, such as x-ray fluorescence, the matrix is very important,
but it may not be as critical for others. Sample preparation is usually matrix
dependent. It may be easy to extract a polycyclic aromatic hydrocarbon
from sand by supercritical extraction but not so from an aged soil with a
high organic content.
Calibration Curves
The most common calibration method is to prepare standards of known
concentrations, covering the concentration range expected in the sample.
The matrix of the standard should be as close to the samples as possible. For
instance, if the sample is to be extracted into a certain organic solvent, the
standards should be prepared in the same solvent. The calibration curve is a
plot of detector response as a function of concentration. A typical calibration curve is shown in Figure 1.3. It is used to determine the amount of
analyte in the unknown samples. The calibration can be done in two ways,
best illustrated by an example. Let us say that the amount of lead in soil is
being measured. The analytical method includes sample preparation by acid
extraction followed by analysis using atomic absorption (AA).

Figure 1.3. Typical calibration curve: signal versus analyte concentration, showing the limit of detection (LOD, 3 × S/N), the limit of quantitation (LOQ, 10 × S/N), and the limit of linearity.

The standards can be made by spiking clean soil with known quantities of lead. Then
the standards are taken through the entire process of extraction and analysis.
Finally, the instrument response is plotted as a function of concentration.
The other option assumes quantitative extraction, and the standards are
used to calibrate only the AA. The first approach is more accurate; the latter
is simpler. A calibration method that takes the matrix effects into account is
the method of standard addition, which is discussed briefly in Chapter 4.
1.2. ERRORS IN QUANTITATIVE ANALYSIS:
ACCURACY AND PRECISION
All measurements are accompanied by a certain amount of error, and an
estimate of its magnitude is necessary to validate results. The error cannot
be eliminated completely, although its magnitude and nature can be characterized. It can also be reduced with improved techniques. In general,
errors can be classified as random and systematic. If the same experiment is
repeated several times, the individual measurements cluster around the mean
value. The di¤erences are due to unknown factors that are stochastic in
nature and are termed random errors. They have a Gaussian distribution and
equal probability of being above or below the mean. On the other hand,
systematic errors tend to bias the measurements in one direction. Systematic
error is measured as the deviation from the true value.
1.2.1. Accuracy
Accuracy, the deviation from the true value, is a measure of systematic error.
It is often estimated as the deviation of the mean from the true value:
accuracy = (mean − true value) / true value
The true value may not be known. For the purpose of comparison, measurement by an established method or by an accredited institution is accepted as the true value.
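As a brief sketch, the relative-deviation definition above can be computed directly; the measured and reference values below are hypothetical, purely for illustration.

```python
def accuracy(mean, true_value):
    """Systematic error as relative deviation: (mean - true value) / true value."""
    return (mean - true_value) / true_value

# Hypothetical example: replicate lead results averaging 9.6 mg/kg against
# an accepted reference value of 10.0 mg/kg.
bias = accuracy(9.6, 10.0)   # -0.04, i.e., a 4% negative bias
```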
1.2.2. Precision
Precision is a measure of reproducibility and is affected by random error.
Since all measurements contain random error, the result from a single measurement cannot be accepted as the true value. An estimate of this error is
necessary to predict within what range the true value may lie, and this is done
by repeating a measurement several times [1]. Two important parameters, the
average value and the variability of the measurement, are obtained from this
process. The most widely used measure of average value is the arithmetic
mean, x̄:

x̄ = (Σ xᵢ)/n

where Σ xᵢ is the sum of the replicate measurements and n is the total
number of measurements. Since random errors are normally distributed, the
common measure of variability (or precision) is the standard deviation, σ.
This is calculated as

σ = √[Σ(xᵢ − x̄)²/n]    (1.1)
When the data set is limited, the mean is often approximated as the true
value, and the standard deviation may be underestimated. In that case, the
unbiased estimate of σ, which is designated s, is computed as follows:

s = √[Σ(xᵢ − x̄)²/(n − 1)]    (1.2)

As the number of data points becomes larger, the value of s approaches
that of σ. When n becomes as large as 20, the equation for σ may be
used. Another term commonly used to measure variability is the coefficient
of variation (CV) or relative standard deviation (RSD), which may also be
expressed as a percentage:

RSD = s/x̄   or   % RSD = (s/x̄) × 100    (1.3)

Relative standard deviation is the parameter of choice for expressing precision in the analytical sciences.
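Equations (1.2) and (1.3) can be computed in a few lines with the Python standard library; the replicate values below are hypothetical.

```python
import statistics

# Hypothetical replicate measurements of one sample
replicates = [10.1, 9.8, 10.3, 9.9, 10.0]

mean = statistics.fmean(replicates)
s = statistics.stdev(replicates)        # unbiased s: n - 1 in the denominator
rsd_percent = (s / mean) * 100          # equation (1.3)
```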
Precision is particularly important when sample preparation is involved.
The variability can also affect accuracy. It is well known that the reproducibility of an analysis decreases disproportionately with decreasing concentration [2]. A typical relationship is shown in Figure 1.4, which shows
that the uncertainty in trace analysis increases exponentially compared to
that in major and minor component analysis. Additional deviations from this
curve are expected if sample preparation steps are added to the process. It
may be prudent to assume that uncertainty from sample preparation would
also increase with decrease in concentration.

Figure 1.4. Reproducibility (relative standard deviation) as a function of concentration during analytical measurements: the RSD grows from major components through minor components to trace analysis, with pharmaceuticals, drugs in feeds, pesticide residues, and aflatoxins as examples over concentrations from 1 to 10⁻¹². (Reproduced from Ref. 3 with permission from LC-GC North America.)

Generally speaking, analytical instruments have become quite sophisticated and provide high levels of
accuracy and precision. On the other hand, sample preparation often remains a rigorous process that accounts for the majority of the variability.
Going back to the example of the measurement of pesticides in fish, the
final analysis may be carried out on a modern computer-controlled gas
chromatograph/mass spectrometer (GC-MS). At the same time, the sample
preparation may involve homogenization of the liver in a grinder, followed
by Soxhlet extraction, concentration, and cleanup. The sample preparation
might take days, whereas the GC-MS analysis is complete in a matter of
minutes. The sample preparation also involves several discrete steps that
involve manual handling. Consequently, both random and systematic errors
are higher during sample preparation than during analysis.
The relative contribution of sample preparation depends on the steps in
the measurement process. For instance, typically two-thirds of the time in an
analytical chromatographic procedure is spent on sample preparation. An
example of the determination of olanzapine in serum by high-performance
liquid chromatography/mass spectroscopy (HPLC-MS) illustrates this point
[3]. Here, samples were mixed with an internal standard and cleaned up in a
solid-phase extraction (SPE) cartridge. The quantitation was done by a calibration curve. The recovery was 87 ± 4% for three assays, whereas the repeatability of 10 replicate measurements was only 1 to 2%. A detailed error
analysis [3] showed that 75% of the uncertainty came from the SPE step and
the rest came from the analytical procedure. Of the latter, 24% was attributed to uncertainty in the calibration, and the remaining 1% came from the
variation in serum volume. It is also worth noting that improvement in the
calibration procedure can be brought about by measures that are significantly simpler than those required for improving the SPE. The variability in
SPE can come from the cartridge itself, the washing, the extraction, the
drying, or the redissolution steps. There are too many variables to control.
Some useful approaches to reducing uncertainty during sample preparation are given below.
Minimize the Number of Steps
In the example above, the sample preparation contributed 75% of the error.
When multiple steps such as those shown in Figure 1.2 are involved, the
uncertainty is compounded. A simple dilution example presented in Figure
1.5 illustrates this point. A 1000-fold dilution can be performed in one step:
1 mL to 1000 mL. It can also be performed in three steps of 1 : 10 dilutions
each. In the one-step dilution, the uncertainty is from the uncertainty in the
volume of the pipette and the flask. In the three-step dilution, three pipettes
and three flasks are involved, so the volumetric uncertainty is compounded
that many times. A rigorous analysis showed [3] that the uncertainty in the
one-step dilution was half of what was expected in the three-step process.
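A rough propagation-of-uncertainty sketch shows how the per-step volumetric errors combine: for independent steps, relative variances add in quadrature. The per-pipette and per-flask tolerances below are hypothetical, and the exact ratio between the two schemes depends on the actual glassware used.

```python
import math

def chain_rel_uncertainty(rel_errors):
    """Combined relative uncertainty of independent volumetric steps
    (relative variances add in quadrature)."""
    return math.sqrt(sum(e ** 2 for e in rel_errors))

pipette, flask = 0.002, 0.001       # hypothetical 0.2% and 0.1% tolerances

one_step = chain_rel_uncertainty([pipette, flask])        # 1 pipette + 1 flask
three_step = chain_rel_uncertainty([pipette, flask] * 3)  # 3 pipettes + 3 flasks

# With equal tolerances at every step, the three-step chain is sqrt(3) = 1.7
# times more uncertain than the one-step dilution.
```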
If and when possible, one or more sample preparation steps (Figure 1.2)
should be eliminated. The greater the number of steps, the more errors there
are. For example, if a cleanup step can be eliminated by choosing a selective
extraction procedure, that procedure should be adopted.
Use Appropriate Techniques
Some techniques are known to provide higher variability than others. The
choice of an appropriate method at the outset can improve precision. For
example, a volume of less than 20 μL can be measured more accurately and
precisely with a syringe than with a pipette. Large volumes are amenable
to precise handling but result in dilution that lowers sensitivity. The goal
should be to choose a combination of sample preparation and analytical
instrumentation that reduces both the number of sample preparative steps
and the RSD. Automated techniques with less manual handling tend to have
higher precision.
Figure 1.5. Examples of single (1 mL diluted to 1000 mL) and multiple (three sequential 1 mL to 10 mL) dilution of a sample. (Reproduced from Ref. 3 with permission from LC-GC North America.)
1.2.3. Statistical Aspects of Sample Preparation
Uncertainty in a method can come from both the sample preparation and
the analysis. The total variance is the sum of the two factors:
σT² = σs² + σa²    (1.4)
The subscript T stands for the total variance; the subscripts s and a stand for
the sample preparation and the analysis, respectively. The variance of the
analytical procedure can be subtracted from the total variance to estimate
the variance from the sample preparation. This could have contributions
from the steps shown in Figure 1.2:

σs² = σh² + σex² + σc² + σcl²    (1.5)

where σh relates to homogenization, σex to extraction, σc to concentration,
and σcl to cleanup. Consequently, the overall precision is low even when
a high-precision analytical instrument is used in conjunction with low-precision sample preparation methods. The total variance can be estimated
by repeating the steps of sample preparation and analysis several times.
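The variance partition of equation (1.4) lets the sample preparation variance be estimated by difference. A minimal sketch, with hypothetical standard deviations (a total from repeated preparation-plus-analysis, and an analytical one from repeated analysis of a single prepared extract):

```python
# Equation (1.4): s_T^2 = s_s^2 + s_a^2, so s_s^2 = s_T^2 - s_a^2
s_total = 4.0        # SD of results repeated through the whole procedure
s_analysis = 1.5     # SD of repeated analysis of one prepared extract

var_prep = s_total ** 2 - s_analysis ** 2   # sample preparation variance
s_prep = var_prep ** 0.5                    # sample preparation SD
```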
Usually, the goal is to minimize the number of samples, yet meet a specific level of statistical certainty. The total uncertainty, E, at a specific confidence level is selected. The value of E and the confidence limits are determined by the measurement quality required:
E = zσ/√n    (1.6)
where σ is the standard deviation of the measurement, z the percentile of the
standard normal distribution, depending on the level of confidence, and n
the number of measurements. If the variance due to sample preparation, σs²,
is negligible and most of the uncertainty is attributed to the analysis, the
minimum number of analyses per sample is given by

na = (zσa/Ea)²    (1.7)
The number of analyses can be reduced by choosing an alternative
method with higher precision (i.e., a lower σa) or by using a lower value of z,
which means accepting a higher level of error. If the analytical uncertainty is
negligible (σa → 0) and sample preparation is the major issue, the minimum
number of samples, ns, is given by

ns = (zσs/Es)²    (1.8)
Again, the number of samples can be reduced by accepting a higher uncertainty or by reducing σs. When σa and σs are both significant, the total error
ET is given by

ET = z(σs²/ns + σa²/na)^(1/2)    (1.9)
This equation does not have a unique solution. The same value of error,
ET, can be obtained by using different combinations of ns and na. Combinations of ns and na should be chosen based on scientific judgment and the
cost involved in sample preparation and analysis.
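Equation (1.9) can be explored numerically to confirm that different (ns, na) combinations give the same total error; the standard deviations below are hypothetical.

```python
import math

def total_error(z, s_s, n_s, s_a, n_a):
    """Equation (1.9): E_T = z * sqrt(s_s^2/n_s + s_a^2/n_a)."""
    return z * math.sqrt(s_s ** 2 / n_s + s_a ** 2 / n_a)

z = 1.96                                # 95% confidence level

e1 = total_error(z, 2.0, 8, 1.0, 2)     # 8 samples, 2 analyses per sample
e2 = total_error(z, 2.0, 5, 1.0, 5)     # 5 samples, 5 analyses per sample

# Both combinations give E_T = 1.96, so the cheaper one can be chosen.
```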
A simple approach to estimating the number of samples is to repeat the
sample preparation and analysis to calculate an overall standard deviation,
s. Using Student’s t distribution, the number of samples required to achieve
a given confidence level is calculated as
n = (ts/e)²    (1.10)
where t is the t-statistic value selected for a given confidence level and e is
the acceptable level of error. The degrees of freedom that determine t can
first be chosen arbitrarily and then modified by successive iterations until the
number chosen matches the number calculated.
Example
Relative standard deviation of repeat HPLC analysis of a drug metabolite
standard was between 2 and 5%. Preliminary measurements of several serum
samples via solid-phase extraction cleanup followed by HPLC analyses
showed that the analyte concentration was between 5 and 15 mg/L and the
standard deviation was 2.5 mg/L. The extraction step clearly increased
the random error of the overall process. Calculate the number of samples
required so that the sample mean would be within ±1.2 mg/L of the population mean at the 95% confidence level.
Using equation (1.10), assuming 10 degrees of freedom, and referring to
the t-distribution table from a statistics textbook, we have t = 2.23, s = 2.5,
and e = 1.2 mg/L, so n = (2.23 × 2.5/1.2)² = 21.6, or 22. Since 22 is significantly larger than 10, a correction must be made with the new value of t
corresponding to 21 degrees of freedom (t = 2.08): n = (2.08 × 2.5/1.2)² =
18.8, or 19. Since 19 and 22 are relatively close, approximately that many
samples should be tested. A higher level of error, or a lower confidence level,
may be accepted for the reduction in the number of samples.
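The iteration in the example can be reproduced directly; the t values are those read from a standard two-sided 95% t-distribution table.

```python
def n_required(t, s, e):
    """Equation (1.10): n = (t * s / e) ** 2."""
    return (t * s / e) ** 2

s, e = 2.5, 1.2     # standard deviation and acceptable error, mg/L

# First pass: assume 10 degrees of freedom (t = 2.23 at 95% confidence)
n1 = n_required(2.23, s, e)   # 21.6, so 22 samples

# 22 >> 10, so repeat with t for 21 degrees of freedom (t = 2.08)
n2 = n_required(2.08, s, e)   # 18.8, so 19 samples
```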
1.3. METHOD PERFORMANCE AND METHOD VALIDATION
The criteria used for evaluating analytical methods are called figures of
merit. Based on these characteristics, one can predict whether a method
meets the needs of a certain application. The figures of merit are listed in
Table 1.2. Accuracy and precision have already been discussed; other important characteristics are sensitivity, detection limits, and the range of
quantitation.
Table 1.2. Figures of Merit for Instruments or Analytical Methods

No. | Parameter | Definition
1 | Accuracy | Deviation from true value
2 | Precision | Reproducibility of replicate measurements
3 | Sensitivity | Ability to discriminate between small differences in concentration
4 | Detection limit | Lowest measurable concentration
5 | Linear dynamic range | Linear range of the calibration curve
6 | Selectivity | Ability to distinguish the analyte from interferences
7 | Speed of analysis | Time needed for sample preparation and analysis
8 | Throughput | Number of samples that can be run in a given time period
9 | Ease of automation | How well the system can be automated
10 | Ruggedness | Durability of measurement, ability to handle adverse conditions
11 | Portability | Ability to move the instrument around
12 | Greenness | Ecoefficiency in terms of waste generation and energy consumption
13 | Cost | Equipment cost + cost of supplies + labor cost
1.3.1. Sensitivity
The sensitivity of a method (or an instrument) is a measure of its ability to
distinguish between small differences in analyte concentrations at a desired
confidence level. The simplest measure of sensitivity is the slope of the calibration curve in the concentration range of interest. This is referred to as the
calibration sensitivity. Usually, calibration curves for instruments are linear
and are given by an equation of the form
S = mc + Sbl    (1.11)

where S is the signal at concentration c and Sbl is the blank (i.e., the signal in
the absence of analyte). Then m is the slope of the calibration curve and
hence the sensitivity. When sample preparation is involved, the recovery of
these steps has to be factored in. For example, during an extraction, only a
fraction proportional to the extraction efficiency r is available for analysis.
Then equation (1.11) becomes

S = mrc + Stbl    (1.12)
Now the sensitivity is mr rather than m. The higher the recovery, the
higher the sensitivity. Near 100% recovery ensures maximum sensitivity. The
blank is also modified by the sample preparation step; Stbl refers to the blank
that arises from the total contribution of sample preparation and analysis.
Since the precision decreases at low concentrations, the ability to distinguish between small concentration differences also decreases. Therefore,
sensitivity as a function of precision is measured by the analytical sensitivity,
which is expressed as [4]

a = mr/ss    (1.13)

where ss is the standard deviation based on sample preparation and analysis.
Due to its dependence on ss, the analytical sensitivity varies with concentration.
1.3.2. Detection Limit
The detection limit is defined as the lowest concentration or weight of analyte that can be measured at a specific confidence level. So, near the detection limit, the signal generated approaches that from a blank. The detection limit is often defined as the concentration where the signal/noise
ratio reaches an accepted value (typically, between 2 and 4). Therefore, the
smallest distinguishable signal, Sm, is

Sm = X̄tbl + k·stbl    (1.14)

where X̄tbl and stbl are the average blank signal and its standard deviation.
The constant k depends on the confidence level; the accepted value is 3,
at a confidence level of 89%. The detection limit can be determined experimentally by running several blank samples to establish the mean and standard deviation of the blank. Substitution of equation (1.11) into (1.14) and
rearranging shows that

Cm = (Sm − X̄tbl)/m    (1.15)

where Cm is the minimum detectable concentration and Sm is the signal
obtained at that concentration. If the recovery in the sample preparation
step is factored in, the detection limit is given as

Cm = (Sm − X̄tbl)/(mr)    (1.16)
Once again, a low recovery increases the detection limit, and a sample
preparation technique should aim at 100% recovery.
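Equations (1.14) and (1.16) combine into a short blank-based detection limit estimate; the blank signals, calibration slope, and recovery below are hypothetical.

```python
import statistics

blanks = [0.021, 0.025, 0.019, 0.023, 0.022]   # repeated blank signals
m = 0.50    # calibration slope, signal per (mg/L)
r = 0.80    # fractional recovery of the sample preparation step

# S_m = mean + 3 * s_blank, so C_m = (S_m - mean)/(m * r) = 3 * s_blank/(m * r)
s_blank = statistics.stdev(blanks)
detection_limit = 3 * s_blank / (m * r)        # in mg/L
```

Note that a lower recovery r inflates the detection limit, which is the point made in the text.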
1.3.3. Range of Quantitation
The lowest concentration level at which a measurement is quantitatively
meaningful is called the limit of quantitation (LOQ). The LOQ is most often
defined as 10 times the signal/noise ratio. If the noise is approximated as the
standard deviation of the blank, the LOQ is 10 × stbl. Once again, when
the recovery of the sample preparation step is factored in, the LOQ of the
overall method increases by 1/r.
For all practical purposes, the upper limit of quantitation is the point
where the calibration curve becomes nonlinear. This point is called the limit
of linearity (LOL). These can be seen from the calibration curve presented in
Figure 1.3. Analytical methods are expected to have a linear dynamic range
(LDR) of at least two orders of magnitude, although shorter ranges are also
acceptable.
Considering all this, the recovery of the sample preparation method is an
important parameter that affects quantitative issues such as the detection limit,
sensitivity, LOQ, and even the LOL. Sample preparation techniques that
enhance performance (see Chapters 6, 9, and 10) result in a recovery (r)
larger than 1, thus increasing the sensitivity and lowering detection limits.
1.3.4. Other Important Parameters
There are several other factors that are important when it comes to the
selection of equipment in a measurement process. These parameters are
items 7 to 13 in Table 1.2. They may be more relevant in sample preparation
than in analysis. As mentioned before, very often the bottleneck is the sample preparation rather than the analysis. The former tends to be slower;
consequently, both measurement speed and sample throughput are determined by the discrete steps within the sample preparation. Modern analytical instruments tend to have a high degree of automation in terms of
autoinjectors, autosamplers, and automated control/data acquisition. On
the other hand, many sample preparation methods continue to be labor-intensive, requiring manual intervention. This prolongs analysis time and
introduces random/systematic errors.
A variety of portable instruments have been developed in the last decade.
Corresponding sample preparation, or online sample preparation methods,
are being developed to make integrated total analytical systems. Many
sample preparation methods, especially those requiring extraction, require
solvents and other chemicals. Used reagents end up as toxic wastes, whose
disposal is expensive. Greener sample preparation methods generate less
spent reagent. Last but not least, cost, including the cost of equipment,
labor, and consumables and supplies, is an important factor.
1.3.5. Method Validation
Before a new analytical method or sample preparation technique is to be
implemented, it must be validated. The various figures of merit need to be
determined during the validation process. Random and systematic errors are
measured in terms of precision and bias. The detection limit is established
for each analyte. The accuracy and precision are determined at the concentration range where the method is to be used. The linear dynamic range is
established and the calibration sensitivity is measured. In general, method
validation provides a comprehensive picture of the merits of a new method
and provides a basis for comparison with existing methods.
A typical validation process involves one or more of the following steps:
Determination of the single operator figures of merit. Accuracy, precision,
detection limits, linear dynamic range, and sensitivity are determined.
Analysis is performed at different concentrations using standards.
Analysis of unknown samples. This step involves the analysis of samples whose concentrations are unknown. Both qualitative and quantitative measurements should be performed. Reliable unknown samples
are obtained from commercial sources or governmental agencies as
certified reference materials. The accuracy and precision are determined.
Equivalency testing. Once the method has been developed, it is compared to similar existing methods. Statistical tests are used to determine
if the new and established methods give equivalent results. Typical tests
include Student’s t-test for a comparison of the means and the F-test for
a comparison of variances.
Collaborative testing. Once the method has been validated in one laboratory, it may be subjected to collaborative testing. Here, identical
test samples and operating procedures are distributed to several laboratories. The results are analyzed statistically to determine bias and
interlaboratory variability. This step determines the ruggedness of the
method.
Method validation depends on the type and purpose of analysis. For
example, the recommended validation procedure for PCR, followed by capillary gel electrophoresis of recombinant DNA, may consist of the following
steps:
1. Compare precision by analyzing multiple (say, six) independent replicates of reference standards under identical conditions.
2. Data should be analyzed with a coefficient of variation less than a
specified value (say, 10%).
3. Validation should be performed on three separate days to compare
precision by analyzing three replicates of reference standards under
identical conditions (once again the acceptance criterion should be a
prespecified coefficient of variation).
4. To demonstrate that other analysts can perform the experiment with
similar precision, two separate analysts should make three independent
measurements (the acceptance criterion is once again a prespecified
RSD).
5. The limit of detection, limit of quantitation, and linear dynamic range
are to be determined by serial dilution of a sample. Three replicate
measurements at each level are recommended, and the acceptance
criterion for calibration linearity should be a prespecified correlation
coefficient (say, an r² value of 0.995 or greater).
6. The molecular weight markers should fall within established migration
time ranges for the analysis to be acceptable. If the markers are outside this range, the gel electrophoresis run must be repeated.
1.4. PRESERVATION OF SAMPLES
The sample must be representative of the object under investigation. Physical, chemical, and biological processes may be involved in changing the
composition of a sample after it is collected. Physical processes that may
degrade a sample are volatilization, diffusion, and adsorption on surfaces.
Possible chemical changes include photochemical reactions, oxidation, and
precipitation. Biological processes include biodegradation and enzymatic
reactions. Once again, sample degradation becomes more of an issue at low
analyte concentrations and in trace analysis.
The sample collected is exposed to conditions different from the original
source. For example, analytes in a groundwater sample that have never been
exposed to light can undergo significant photochemical reactions when
exposed to sunlight. It is not possible to preserve the integrity of any sample
indefinitely. Techniques should aim at preserving the sample at least until
the analysis is completed. A practical approach is to run tests to see how
long a sample can be held without degradation and then to complete the
analysis within that time. Table 1.3 lists some typical preservation methods.
These methods keep the sample stable and do not interfere with the analysis.
Common steps in sample preservation are the use of proper containers,
temperature control, addition of preservatives, and the observance of recommended sample holding time. The holding time depends on the analyte of
interest and the sample matrix. For example, most dissolved metals are
Table 1.3. Sample Preservation Techniques

Sample | Preservation Method | Container Type | Holding Time
pH | None | Plastic or glass | Immediately on site
Temperature | None | Plastic or glass | Immediately on site
Inorganic anions:
  Bromide, chloride, fluoride | None | Plastic or glass | 28 days
  Chlorine | None | Plastic or glass | Analyze immediately
  Iodide | Cool to 4°C | Plastic or glass | 24 hours
  Nitrate, nitrite | Cool to 4°C | Plastic or glass | 48 hours
  Sulfide | Cool to 4°C, add zinc acetate and NaOH to pH 9 | Plastic or glass | 7 days
Metals:
  Dissolved | Filter on site, acidify to pH 2 with HNO3 | Plastic | 6 months
  Total | Acidify to pH 2 with HNO3 | Plastic | 6 months
  Cr(VI) | Cool to 4°C | Plastic | 24 hours
  Hg | Acidify to pH 2 with HNO3 | Plastic | 28 days
Organics:
  Organic carbon | Cool to 4°C, add H2SO4 to pH 2 | Plastic or brown glass | 28 days
  Purgeable hydrocarbons | Cool to 4°C, add 0.008% Na2S2O3 | Glass with Teflon septum cap | 14 days
  Purgeable aromatics | Cool to 4°C, add 0.008% Na2S2O3 and HCl to pH 2 | Glass with Teflon septum cap | 14 days
  PCBs | Cool to 4°C | Glass or Teflon | 7 days to extraction, 40 days after
Organics in soil | Cool to 4°C | Glass or Teflon | As soon as possible
Fish tissues | Freeze | Aluminum foil | As soon as possible
Biochemical oxygen demand | Cool to 4°C | Plastic or glass | 48 hours
Chemical oxygen demand | Cool to 4°C | Plastic or glass | 28 days
DNA | Store in TE (pH 8) under ethanol at −20°C; freeze at −20 or −80°C | — | Years
RNA | Deionized formamide at −80°C | — | Years
Solids unstable in air for surface and spectroscopic characterization | Store in argon-filled box; mix with hydrocarbon oil | — | —
stable for months, whereas Cr(VI) is stable for only 24 hours. Holding time
can be determined experimentally by making up a spiked sample (or storing
an actual sample) and analyzing it at fixed intervals to determine when it
begins to degrade.
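A minimal sketch of such a holding-time experiment, with hypothetical stability data and an arbitrary 10% acceptance window around the day-0 value:

```python
days = [0, 7, 14, 28, 60]
conc = [50.0, 49.1, 48.6, 46.2, 38.5]  # hypothetical results, e.g., ug/L

def holding_time(days, conc, tolerance=0.10):
    """Last sampling day at which the result is still within
    the stated tolerance of the initial (day-0) value."""
    c0 = conc[0]
    last_ok = days[0]
    for d, c in zip(days, conc):
        if abs(c - c0) / c0 <= tolerance:
            last_ok = d
        else:
            break
    return last_ok
```

Here the 28-day result (down 7.6%) still passes, while the 60-day result (down 23%) fails, so analyses would be scheduled for completion within 28 days.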
1.4.1. Volatilization
Analytes with high vapor pressures, such as volatile organics and dissolved
gases (e.g., HCN, SO2) can easily be lost by evaporation. Filling sample
containers to the brim so that they contain no empty space (headspace) is
the most common method of minimizing volatilization. Solid samples can be
topped with a liquid to eliminate headspace. The volatiles cannot equilibrate
between the sample and the vapor phase (air) at the top of the container.
The samples are often held at low temperature (4°C) to lower the vapor
pressure. Agitation during sample handling should also be avoided. Freezing
liquid samples causes phase separation and is not recommended.
1.4.2. Choice of Proper Containers
The surface of the sample container may interact with the analyte. The surfaces can provide catalysts (e.g., metals) for reactions or just sites for irreversible adsorption. For example, metals can adsorb irreversibly on glass
surfaces, so plastic containers are chosen for holding water samples to be
analyzed for their metal content. These samples are also acidified with
HNO3 to help keep the metal ions in solution. Organic molecules may also
interact with polymeric container materials. Plasticizers such as phthalate
esters can di¤use from the plastic into the sample, and the plastic can serve
as a sorbent (or a membrane) for the organic molecules. Consequently, glass
containers are suitable for organic analytes. Bottle caps should have Teflon
liners to preclude contamination from the plastic caps.
Oily materials may adsorb strongly on plastic surfaces, and such samples
are usually collected in glass bottles. Oil that remains on the bottle walls
should be removed by rinsing with a solvent and be returned to the sample.
A sonic probe can be used to emulsify oily samples to form a uniform suspension before removal for analysis.
1.4.3. Absorption of Gases from the Atmosphere
Gases from the atmosphere can be absorbed by the sample during handling,
for example, when liquids are being poured into containers. Gases such as
O2 , CO2 , and volatile organics may dissolve in the samples. Oxygen may
oxidize species, such as sulfite or sulfide to sulfate. Absorption of CO2 may
change conductance or pH. This is why pH measurements are always made
at the site. CO2 can also bring about precipitation of some metals. Dissolution of organics may lead to false positives for compounds that were actually
absent. Blanks are used to check for contamination during sampling, transport, and laboratory handling.
1.4.4. Chemical Changes
A wide range of chemical changes are possible. For inorganic samples, controlling the pH can be useful in preventing chemical reactions. For example,
metal ions may oxidize to form insoluble oxides or hydroxides. The sample
is often acidified with HNO3 to a pH below 2, as most nitrates are soluble,
and excess nitrate prevents precipitation. Other ions, such as sulfides and
cyanides, are also preserved by pH control. Samples collected for NH3
analysis are acidified with sulfuric acid to stabilize the NH3 as (NH4)2SO4.
Organic species can also undergo changes due to chemical reactions.
Storing the sample in amber bottles prevents photooxidation of organics
(e.g., polynuclear aromatic hydrocarbons). Organics can also react with dissolved gases; for example, organics can react with trace chlorine to form
halogenated compounds in treated drinking water samples. In this case, the
addition of sodium thiosulfate can remove the chlorine.
Samples may also contain microorganisms, which may degrade the sample biologically. Extreme pH (high or low) and low temperature can minimize microbial degradation. Adding biocides such as mercuric chloride or
pentachlorophenol can also kill the microbes.
1.4.5. Preservation of Unstable Solids
Many samples are unstable in air. Examples of air-sensitive compounds are
alkali metal intercalated C60 , carbon nanotubes, and graphite, which are
usually prepared in vacuum-sealed tubes. After completion of the intercalation reaction in a furnace, the sealed tubes may be transferred directly to a
Raman spectrometer for measurement. Since these compounds are photosensitive, spectra need to be measured using relatively low laser power densities. For x-ray di¤raction, infrared, and x-ray photoelectron spectroscopy
(XPS), the sealed tubes are transferred to an argon-filled dry box with less
than 10 parts per million (ppm) of oxygen. The vacuum tubes are cut open
in the dry box and transferred to x-ray sampling capillaries. The open ends
of the capillaries are carefully sealed with soft wax to prevent air contamination after removal from the dry box. Samples for infrared spectroscopy
are prepared by mixing the solid with hydrocarbon oil and sandwiching a
small amount of this suspension between two KBr or NaCl plates. The edges
of the plates are then sealed with soft wax. For the XPS measurements, the
powder is spread on a tape attached to the sample holder and inserted into a
transfer tube of the XPS spectrometer, which had previously been introduced
into the dry box. Transferring unstable compounds into the sampling chambers of transmission and scanning electron microscopes is difficult. The best
approaches involve preparing the samples in situ for examination.
1.5. POSTEXTRACTION PROCEDURES
1.5.1. Concentration of Sample Extracts
The analytes are often diluted in the presence of a large volume of solvents
used in the extraction. This is particularly true when the analysis is being
done at the trace level. An additional concentration step is necessary to
increase the concentration in the extract. If the amount of solvent to be
removed is not very large and the analyte is nonvolatile, the solvent can be
vaporized by a gentle stream of nitrogen gas flowing either across the surface
or through the solution. This is shown in Figure 1.6. Care should be taken
that the solvent is lost only by evaporation. If small solution droplets are lost
as aerosol, there is the possibility of losing analytes along with it. If large
volume reduction is needed, this method is not efficient, and a rotary vacuum evaporator is used instead. In this case, the sample is placed in a round-bottomed flask in a heated water bath. A water-cooled condenser is attached
at the top, and the flask is rotated continually to expose maximum liquid
surface to evaporation. Using a small pump or a water aspirator, the pressure inside the flask is reduced. The mild warming, along with the lowered
pressure, removes the solvent efficiently, and the condensed solvent distills
into a separate flask. Evaporation should stop before the sample reaches
dryness.
Figure 1.6. Evaporation of solvent by nitrogen (N2 dispersed as small bubbles through the solution).
For smaller volumes that must be reduced to less than 1 mL, a Kuderna–
Danish concentrator (Figure 1.7) is used. The sample is gently heated in a
water bath until the needed volume is reached. An air-cooled condenser
provides reflux. The volume of the sample can readily be measured in the
narrow tube at the bottom.
1.5.2. Sample Cleanup
Sample cleanup is particularly important for analytical separations such as
GC, HPLC, and electrophoresis. Many solid matrices, such as soil, can
contain hundreds of compounds. These produce complex chromatograms,
where the identification of analytes of interest becomes difficult. This is
especially true if the analyte is present at a much lower concentration than
the interfering species. So a cleanup step is necessary prior to the analytical
measurements. Another important issue is the removal of high-boiling
materials that can cause a variety of problems. These include analyte
adsorption in the injection port or in front of a GC or HPLC column, false
positives from interferences that fall within the retention window of the
analyte, and false negatives because of a shift in the retention time window.
Figure 1.7. Kuderna–Danish sample concentrator (air-cooled condenser mounted above the sample tube).
In extreme cases, instrument shutdown may be necessary due to the accumulation of interfering species.
Complex matrices, such as soil, biological materials, and natural products,
often require some degree of cleanup. Highly contaminated extracts (e.g.,
soil containing oil residuals) may require multiple cleanup steps. On the
other hand, drinking water samples are relatively cleaner (as many large
molecules either precipitate out or do not dissolve in it) and may not require
cleanup [5].
The following techniques are used for cleanup and purification of
extracts.
Gel-Permeation Chromatography
Gel-permeation chromatography (GPC) is a size-exclusion method that uses
organic solvents (or buffers) and porous gels for the separation of macromolecules. The packing gel is characterized by pore size and exclusion range,
which must be larger than the analytes of interest. GPC is recommended for
the elimination of lipids, proteins, polymers, copolymers, natural resins, cellular components, viruses, steroids, and dispersed high-molecular-weight
compounds from the sample. This method is appropriate for both polar and
nonpolar analytes. Therefore, it is used for extracts containing a broad range
of analytes. Usually, GPC is most efficient for removing high-boiling materials that condense in the injection port of a GC or the front of the GC column [6]. The use of GPC in nucleic acid isolation is discussed in Chapter 8.
Acid–Base Partition Cleanup
Acid–base partition cleanup is a liquid–liquid extraction procedure for the
separation of acid analytes, such as organic acids and phenols from base/
neutral analytes (amines, aromatic hydrocarbons, halogenated organic
compounds) using pH adjustment. This method is used for the cleanup of
petroleum waste prior to analysis or further cleanup. The extract from
the prior solvent extraction is shaken with water that is strongly basic. The
basic and neutral components stay in the organic solvent, whereas the acid
analytes partition into the aqueous phase. The organic phase is concentrated
and is ready for further cleanup or analysis. The aqueous phase is acidified and extracted with an organic solvent, which is then concentrated (if
needed) and is ready for analysis of the acid analytes (Figure 1.8).
Solid-Phase Extraction and Column Chromatography
The solvent extracts can be cleaned up by traditional column chromatography or by solid-phase extraction cartridges. This is a common cleanup
method that is widely used in biological, clinical, and environmental sample
preparation. More details are presented in Chapter 2. Some examples
include the cleanup of pesticide residues and chlorinated hydrocarbons, the
separation of nitrogen compounds from hydrocarbons, the separation of
aromatic compounds from an aliphatic–aromatic mixture, and similar
applications for use with fats, oils, and waxes. This approach provides efficient cleanup of steroids, esters, ketones, glycerides, alkaloids, and carbohydrates as well. Cations, anions, metals, and inorganic compounds are also
candidates for this method [7].
The column is packed with the required amount of a sorbent and loaded
with the sample extract. Elution of the analytes is effected with a suitable
solvent, leaving the interfering compounds on the column. The packing
material may be an inorganic substance such as Florisil (basic magnesium
silicate) or one of many commercially available SPE stationary phases. The
eluate may be further concentrated if necessary. A Florisil column is shown
in Figure 1.9. Anhydrous sodium sulfate is used to dry the sample [8].
These cleanup and concentration techniques may be used individually, or
in various combinations, depending on the nature of the extract and the
analytical method used.
Figure 1.8. Acid–base partition cleanup: after sampling and solvent extraction, extraction with a basic solution separates the aqueous phase (acids and phenols) from the basic and neutral fraction; the aqueous phase is acidified and extracted with an organic solvent, and each fraction is concentrated for analysis.
1.6. QUALITY ASSURANCE AND QUALITY CONTROL DURING
SAMPLE PREPARATION
As mentioned earlier, the complete analytical process involves sampling,
sample preservation, sample preparation, and finally, analysis. The purpose
of quality assurance (QA) and quality control (QC) is to monitor, measure,
and keep the systematic and random errors under control. QA/QC measures
are necessary during sampling, sample preparation, and analysis. It has been
stated that sample preparation is usually the major source of variability in
a measurement process. Consequently, the QA/QC during this step is of
utmost importance. The discussion here centers on QC during sample preparation.
Figure 1.9. Column chromatography for sample cleanup: eluting solvent is added at the top, above a layer of anhydrous sodium sulfate for drying and the magnesium silicate (Florisil) packing.
Quality assurance refers to activities that demonstrate that a certain
quality standard is being met. This includes the management process that
implements and documents effective QC. Quality control refers to procedures that lead to statistical control of the different steps in the measurement
process. So QC includes specific activities such as analyzing replicates,
ensuring adequate extraction efficiency, and contamination control.
Some basic components of a QC system are shown in Figure 1.10. Competent personnel and adequate facilities are the most basic QC requirements. Many modern analytical/sample preparation techniques use sophisticated instruments that require specialized training. Good laboratory
practice (GLP) refers to the practices and procedures involved in running
a laboratory. E‰cient sample handling and management, record keeping,
and equipment maintenance fall under this category. Good measurement
practices (GMPs) refer to the specific techniques in sample preparation and
analysis. On the other hand, GLPs are independent of the specific techniques
and refer to general practices in the laboratory. An important QC step is to
have formally documented GLPs and GMPs that are followed carefully.
Figure 1.10. Components of quality control: good documentation, evaluation samples, equipment maintenance and calibration, standard operating procedures (SOPs), good laboratory practice (GLP), good measurement practices (GMPs), suitable and well-maintained facilities, and well-trained personnel.
Standard operating procedures (SOPs) are written descriptions of the procedures or methods being followed. The importance of SOPs cannot be
overstated when it comes to methods being transferred to other operators
or laboratories. Strict adherence to the SOPs reduces bias and improves
precision. This is particularly true in sample preparation, which tends to
consist of repetitive processes that can be carried out by more than one
procedure. For example, extraction efficiency depends on solvent composition, extraction time, temperature, and even the rate of agitation. All these
parameters need to be controlled to reduce variability in measurement.
Changing the extraction time will change the extraction efficiency, which
will increase the relative standard deviation (lower precision). The SOP
specifies these parameters. They can come in the form of published standard
methods obtained from the literature, or they may be developed in-house.
Major sources of SOPs are protocols obtained from organizations, such as
the American Society for Testing and Materials and the U.S. Environmental
Protection Agency (EPA).
Finally, there is the need for proper documentation, which can be in
written or electronic forms. These should cover every step of the measurement process. The sample information (source, batch number, date), sample
preparation/analytical methodology (measurements at every step of the
process, volumes involved, readings of temperature, etc.), calibration curves,
instrument outputs, and data analysis (quantitative calculations, statistical
analysis) should all be recorded. Additional QC procedures, such as blanks,
matrix recovery, and control charts, also need to be a part of the record
keeping. Good documentation is vital to prove the validity of data. Analytical data that need to be submitted to regulatory agencies also require
detailed documentation of the various QC steps.

Table 1.4. Procedures in Quality Control

QC Parameters | Procedure
Accuracy | Analysis of reference materials or known standards
Precision | Analysis of replicate samples
Extraction efficiency | Analysis of matrix spikes
Contamination | Analysis of blanks
The major quality parameters to be addressed during sample preparation
are listed in Table 1.4. These are accuracy, precision, extraction efficiency
(or recovery), and contamination control. These quality issues also need to
be addressed during the analysis that follows sample preparation. Accuracy
is determined by the analysis of evaluation samples. Samples of known concentrations are analyzed to demonstrate that quantitative results are close to
the true value. The precision is measured by running replicates. When many
samples are to be analyzed, the precision needs to be checked periodically to
ensure the stability of the process. Contamination is a serious issue, especially in trace measurements such as environmental analysis. The running of
various blanks ensures that contamination has not occurred at any step, or
if it has, indicates where it occurred. As mentioned before, the detection limits,
sensitivity, and other important parameters depend on the recovery. The
efficiency of sample preparation steps such as extraction and cleanup must
be checked to ensure that the analytes are being recovered from the sample.
1.6.1. Determination of Accuracy and Precision
The levels of accuracy and precision determine the quality of a measurement. The data are as good as random numbers if these parameters are not
specified. Accuracy is determined by analyzing samples of known concentration (evaluation samples) and comparing the measured values to the
known. Standard reference materials are available from regulatory agencies
and commercial vendors. A standard of known concentration may also be
made up in the laboratory to serve as an evaluation sample.
Effective use of evaluation samples depends on matching the standards
with the real-world samples, especially in terms of their matrix. Take the
example of extraction of pesticides from fish liver. In a real sample, the pesticide is embedded in the liver cells (intracellular matter). If the calibration
standards are made by spiking livers, it is possible that the pesticides will
be absorbed on the outside of the cells (extracellular). The extraction of
extracellular pesticides is easier than real-world intracellular extractions.
Consequently, the extraction efficiency of the spiked sample may be significantly higher. Using this as the calibration standard may result in a negative
bias. So matrix effects and matrix matching are important for obtaining high
accuracy. Extraction procedures that are powerful enough not to have any
matrix dependency are desirable.
Precision is measured by making replicate measurements. As mentioned
before, it is known to be a function of concentration and should be determined at the concentration level of interest. The intrasample variance can be
determined by splitting a sample into several subsamples and carrying out
the sample preparation/analysis under identical conditions to obtain a measure of RSD. For example, several aliquots of homogenized fish liver can be
processed through the same extraction and analytical procedure, and the
RSD computed. The intersample variance can be measured by analyzing
several samples from the same source. For example, different fish from the
same pond can be analyzed to estimate the intersample RSD.
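With hypothetical concentration data, the two RSDs can be computed as:

```python
import statistics as st

def rsd(values):
    """Relative standard deviation, in percent."""
    return 100 * st.stdev(values) / st.mean(values)

# Intrasample: subsamples of one homogenized fish liver, identical workup
subsamples = [4.1, 4.3, 4.0, 4.2, 4.2]   # hypothetical results, mg/kg
# Intersample: different fish from the same pond
fish = [3.2, 5.1, 4.4, 6.0, 3.8]

intra = rsd(subsamples)  # variability of preparation and analysis only
inter = rsd(fish)        # also includes real variation between fish
```

The intersample RSD is normally the larger of the two, since it contains the intrasample variability plus the true sample-to-sample variation.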
The precision of the overall process is often determined by the extraction
step rather than the analytical step. It is easier to get high-precision analytical results; it is much more difficult to get reproducible extractions. For
example, it is possible to run replicate chromatographic runs (GC or HPLC)
with an RSD between 1 and 3%. However, several EPA-approved methods
accept extraction efficiencies anywhere between 70 and 120%. This range
alone represents variability as high as 75%. Consequently, in complex analytical methods that involve several preparative steps, the major contributor
to variability is the sample preparation.
1.6.2. Statistical Control
Statistical evidence that the precision of the measurement process is within
a certain specified limit is referred to as statistical control. Statistical control does not take the accuracy into account. However, the precision of the
measurement should be established and statistical control achieved before
accuracy can be estimated.
Control Charts
Control charts are used for monitoring the variability and to provide a
graphical display of statistical control. A standard, a reference material of
known concentration, is analyzed at specified intervals (e.g., every 50 samples). The result should fall within a specified limit, as these are replicates.
The only variation should be from random error. These results are plotted
on a control chart to ensure that the random error is not increasing or that a
Figure 1.11. Control chart: responses plotted against successive measurements, with the centerline at the mean x̄, warning limits, and upper and lower control limits at x̄ ± 3σ.
systematic bias is not taking place. In the control chart shown in Figure
1.11, replicate measurements are plotted as a function of time. The centerline is the average, or expected value. The upper (UCL) and lower (LCL)
control limits are the values within which the measurements must fall. Normally, the control limits are ±3σ, within which 99.7% of the data should lie.
For example, in a laboratory carrying out microwave extraction on a daily
basis, a standard reference material is extracted after a fixed number of
samples. The measured value is plotted on the control chart. If it falls outside the control limit, readjustments are necessary to ensure that the process
stays under control.
Control charts are used in many different applications besides analytical
measurements. For example, in a manufacturing process, the control limits
are often based on product quality. In analytical measurements, the control
limits can be established based on the analyst’s judgment and the experimental results. A common approach is to use the mean of select measurements as the centerline, and then a multiple of the standard deviation is used
to set the control limits. Control charts often plot regularly scheduled analysis of a standard reference material or an audit sample. These are then
tracked to see if there is a trend or a systematic deviation from the centerline.
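The bookkeeping behind a control chart is a few lines of arithmetic. The sketch below, with hypothetical data, sets the centerline and ±3σ limits from initial runs of a reference material and flags later scheduled runs that fall outside them:

```python
import statistics as st

# Baseline runs of the reference material, used to set the limits
baseline = [50.1, 49.8, 50.3, 49.9, 50.2, 50.0, 49.7, 50.1]
center = st.mean(baseline)
sigma = st.stdev(baseline)
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

# Subsequent scheduled runs, checked against the limits
new_runs = [50.2, 49.9, 51.2, 50.0]
out_of_control = [x for x in new_runs if not (lcl <= x <= ucl)]
```

Any value appearing in `out_of_control` signals that readjustments are needed before the process can be considered under control again.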
Control Samples
Different types of control samples are necessary to determine whether a
measurement process is under statistical control. Some of the commonly
used control standards are listed here.
1. Laboratory control standards (LCSs) are certified standards obtained
from an outside agency or commercial source to check whether the
data being generated are comparable to those obtained elsewhere. The
LCSs provide a measure of the accuracy and can be used as audits. A
source of LCSs is standard reference materials (SRMs), which are certified standards available from the National Institute of Standards and
Technology (NIST) in the United States. NIST provides a variety of solid,
liquid, and gaseous SRMs which have been prepared to be stable and
homogeneous. They are analyzed by more than one independent
method, and their concentrations are certified. Certified standards
are also available from the European Union Community Bureau of
Reference (BCR), government agencies such as the EPA, and from
various companies that sell standards. These can be quite expensive.
Often, samples are prepared in the laboratory, compared to the certified standards, and then used as secondary reference materials for
daily use.
2. Calibration control standards (CCSs) are used to check calibration.
The CCS is the first sample analyzed after calibration. Its concentration may or may not be known, but it is used for successive comparisons. A CCS may be analyzed periodically or after a specified number
of samples (say, 20). The CCS value can be plotted on a control chart
to monitor statistical control.
1.6.3. Matrix Control
Matrix Spike
Matrix effects play an important role in the accuracy and precision of a
measurement. Sample preparation steps are often sensitive to the matrix.
Matrix spikes are used to determine their effect on sample preparation and
analysis. Matrix spiking is done by adding a known quantity of a component that is similar to the analyte but not present in the sample originally.
The sample is then analyzed for the presence of the spiked material to
evaluate the matrix effects. It is important to be certain that the extraction
recovers most of the analytes, and spike recovery is usually required to be
at least 70%. The matrix spike can be used to accept or reject a method.
For example, in the analysis of chlorophenol in soil by accelerated solvent
extraction followed by GC-MS, deuterated benzene may be used as the
matrix spike. The deuterated compound will not be present in the original
sample and can easily be identified by GC-MS. At the same time, it has
chemical and physical properties that closely match those of the analyte of
interest.
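The recovery computation itself is simple; a sketch with hypothetical numbers:

```python
def spike_recovery(spiked_result, unspiked_result, amount_added):
    """Percent recovery of a matrix spike: the measured increase
    over the unspiked sample, relative to the amount added."""
    return 100 * (spiked_result - unspiked_result) / amount_added

# Hypothetical results in consistent units (e.g., ug/kg)
rec = spike_recovery(spiked_result=18.6, unspiked_result=4.2, amount_added=16.0)
accept = rec >= 70  # acceptance threshold cited above
```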
Often, the matrix spike cannot be carried out at the same time as the
analysis. The spiking is carried out separately on either the same matrix or
on one that resembles the samples. In the example above, clean soil can
be spiked with regular chlorophenol and then the recovery is measured.
However, one should be careful in choosing the matrix to be spiked. For
instance, it is easy to extract different analytes from sand, but not so if
the analytes have been sitting in clay soil for many years. The organics in
the soil may provide additional binding for the analytes. Consequently, a
matrix spike may be extracted more easily than the analytes in real-world
samples. The extraction spike may produce quantitative recovery, whereas
the extraction efficiency for real samples may be significantly lower. This is
especially true for matrix-sensitive techniques, such as supercritical fluid extraction.
Surrogate Spike
Surrogate spikes are used in organic analysis to determine if an analysis has
gone wrong. They are compounds that are similar in chemical composition
and have similar behavior during sample preparation and analysis. For
example, a deuterated analog of the analyte is an ideal surrogate during
GC-MS analysis. It behaves like the analyte and will not be present in the
sample originally. The surrogate spike is added to the samples, the standards, the blanks, and the matrix spike. The surrogate recovery is computed
for each run. Unusually high or low recovery indicates a problem, such as
contamination or instrument malfunction. For example, consider a set of
samples to be analyzed for gasoline contamination by purge and trap. Deuterated toluene is added as a surrogate to all the samples, standards, and
blanks. The recovery of the deuterated toluene in each is checked. If the
recovery in a certain situation is unusually high or low, that particular
analysis is rejected.
1.6.4. Contamination Control
Many measurement processes are prone to contamination, which can occur
at any point in sampling, sample preparation, or analysis. It can occur in
the field during sample collection, during transportation, during storage, in
the sample workup prior to measurement, or in the instrument itself. Some
common sources of contamination are listed in Table 1.5. Contamination
becomes a major issue in trace analysis: the lower the concentration, the
more pronounced the effect of contamination.

Table 1.5. Sources of Sample Contamination

Measurement Step               Sources of Contamination
Sample collection              Equipment; sample handling and preservation;
                               sample containers
Sample transport and storage   Sample containers; cross-contamination from
                               other samples
Sample preparation             Sample handling, carryover in instruments;
                               dilutions, homogenization, size reduction;
                               glassware and instrument; ambient
                               contamination
Sample analysis                Carryover in instrument; instrument memory
                               effects; reagents; syringes
Sampling devices themselves can be a source of contamination. Contamination may come from the material of construction or from improper
cleaning. For example, polymer additives can leach out of plastic sample
bottles, and organic solvents can dissolve materials from surfaces, such as
cap liners of sample vials. Carryover from previous samples is also possible.
Say that a sampling device was used where the analyte concentration was at
the 1 ppm level. A 0.1% carryover represents a 100% error if the concentration of the next sample is at 1 part per billion (ppb).
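The carryover arithmetic in this example can be worked through explicitly. The sketch below reproduces the numbers from the text: 0.1% carryover from a 1 ppm sample equals 1 ppb, which is a 100% error when the next sample is itself at 1 ppb.

```python
# Carryover error from the example above. Concentrations are expressed
# in ppb so that 1 ppm = 1000 ppb.

def carryover_error(prev_conc, carryover_fraction, next_conc):
    """Relative error (%) that carryover from the previous sample
    contributes to the next sample's result."""
    carried = prev_conc * carryover_fraction
    return carried / next_conc * 100.0

err = carryover_error(prev_conc=1000.0,       # previous sample: 1 ppm
                      carryover_fraction=0.001,  # 0.1% carryover
                      next_conc=1.0)             # next sample: 1 ppb
print(err)  # 100.0
```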
Contamination can occur in the laboratory at any stage of sample
preparation and analysis. It can come from containers, from reagents, or from
the ambient environment itself. In general, contamination can be reduced by
avoiding manual sample handling and by reducing the number of discrete
processing steps; sample preparation procedures that involve many manual
steps are especially prone to contamination. Contaminating sources can also
be present in the instrument. For instance, compounds left over from a
previous analysis can contaminate incoming samples.
Blanks
Blanks are used to assess the degree of contamination in any step of the
measurement process. They may also be used to correct for relatively constant,
unavoidable contamination. Blanks are samples that do not contain any
(or contain a negligible amount of) analyte. They are made to simulate the
sample matrix as closely as possible. Different types of blanks are used,
depending on the procedure and the measurement objectives; some common
blanks are listed in Table 1.6. Blank samples from the laboratory and the
field are required to cover all the possible sources of contamination. We
focus here on those blanks that are important from a sample preparation
perspective.

Table 1.6. Types of Blanks

Blank Type           Purpose                           Process
System or            Establishes the baseline of an    Determine the background
instrument blank     instrument in the absence of      signal with no sample
                     sample                            present
Solvent or           To measure the amount of the      Analytical instrument is
calibration blank    analytical signal that arises     run with solvents/
                     from the solvents and reagents;   reagents only
                     the zero solution in the
                     calibration series
Method blank         To detect contamination from      A blank is taken through
                     reagents, sample handling, and    the entire measurement
                     the entire measurement process    procedure
Matched-matrix       To detect contamination from      A synthetic sample that
blank                field handling, transportation,   matches the basic matrix
                     or storage                        of the sample is carried
                                                       to the field and is
                                                       treated in the same
                                                       fashion as the sample
Sampling media       To detect contamination in the    Analyze samples of unused
                     sampling media, such as filters   filters or traps to
                     and sample adsorbent traps        detect contaminated
                                                       batches
Equipment blank      To determine contamination of     Samples of final equipment
                     equipment and assess the          cleaning rinses are
                     efficiency of equipment           analyzed for contaminants
                     cleanup procedures
System or Instrument Blank. This is a measure of system contamination: the
instrumental response in the absence of any sample. When the background
signal is constant and measurable, the usual practice is to consider that
level to be the zero setting. It is generally used for analytical instruments
but is also applicable to instruments used for sample preparation.
The instrument blank also identifies memory effects or carryover from
previous samples, which may become significant when a low-concentration
sample is analyzed immediately after a high-concentration sample. This is
especially true where preconcentration and cryogenic steps are involved. For
example, during the purge-and-trap analysis of volatile organics, some
components may be left behind in the sorbent trap or at a cold spot in the
instrument, so it is common practice to run a deionized water blank
between samples. These blanks are critical in any instrument where sample
components may be left behind only to emerge during the next analysis.
Solvent/Reagent Blank. A solvent blank checks the solvents and reagents that
are used during sample preparation and analysis. Sometimes a blank
correction or zero setting is made based on the reagent measurement. For
example, in atomic or molecular spectroscopy, the solvents and reagents
used in sample preparation are used to provide the zero setting.
Method Blank. A method blank is carried through all the steps of sample
preparation and analysis as if it were an actual sample; it is the most
important blank from the sample preparation perspective. The same solvents/
reagents that are used with the actual samples are used here. For example, in
the analysis of metals in soil, a clean soil sample may serve as a method
blank. It is put through the extraction, concentration, and analysis steps
encountered by the real samples. The method blank thus accounts for
contamination that may occur during sample preparation and analysis, whether
it arises from the reagents, the glassware, or the laboratory environment.
Other types of blanks may be employed as the situation demands. It
should be noted that blanks are effective only in identifying contamination;
they do not account for other errors that might exist. Blanks are
seldom used to correct for contamination. More often, a blank above a
predetermined value is used to reject the associated analytical data, making
reanalysis and even resampling necessary. The laboratory SOPs should
identify the blanks necessary for contamination control.
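The reject-above-a-threshold rule for blanks can be sketched as a simple screening function. The threshold and results below are hypothetical; the actual cutoff would be set in the laboratory SOP, typically relative to the reporting limit.

```python
# Blank screening -- a sketch of the "reject data if the blank exceeds a
# predetermined value" rule. Threshold and results are hypothetical.

def batch_acceptable(blank_result, threshold):
    """A batch is reportable only if its blank is below the threshold."""
    return blank_result < threshold

threshold = 0.5  # ug/L, hypothetical SOP-defined cutoff
print(batch_acceptable(0.1, threshold))  # True  -> report the data
print(batch_acceptable(0.8, threshold))  # False -> reanalyze, or resample
```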