measure
NCSL INTERNATIONAL
The Journal of Measurement Science
Vol. 1 No. 3 • September 2006

In This Issue:
• Practical Approach to Minimizing Magnetic Errors in Weighing
• An Accurate Pulse Measurement System for Real-Time Oscilloscope Calibration
• Metrology: Who Benefits and Why Should They Care?
• Weights and Measures in the United States
WELCOME to NCSLI measure, a metrology journal published by NCSL International (NCSLI), for the benefit of its membership.
Contents

Features

2007 NCSLI Workshop & Symposium: See Page 19

TECHNICAL PAPERS

22  An Accurate Pulse Measurement System for Real-Time Oscilloscope Calibration
    David I. Bergman

30  Metrology: Who Benefits and Why Should They Care?
    Fiona Redgrave and Andy Henson

38  Fiber Deflection Probe Uncertainty Analysis for Micro Holes
    Bala Muralikrishnan and Jack Stone

46  Reduction of Thermal Gradients by Modifications of a Temperature Controlled CMM Lab
    Hy D. Tran, Orlando C. Espinosa, and James F. Kwak

REVIEW PAPERS

52  Weights and Measures in the United States
    Carol Hockert

60  Legal and Technical Measurement Requirements for Time and Frequency
    Michael A. Lombardi

TECHNICAL TIPS

70  Practical Approach to Minimizing Magnetic Errors in Weighing
    Richard Davis

74  Stopwatch Calibrations, Part III: The Time Base Method
    Robert M. Graham

Departments

3   Letter from the Editor
5   International NMI News
15  Metrology News
77  New Products
79  Advertiser Index
80  Classifieds

CONTACT NCSLI
Business Office: Craig Gulka, Business Manager
NCSL International
2995 Wilderness Place, Suite 107
Boulder, CO 80301-5404 USA
Phone: 303-440-3339
Fax: 303-440-3384
Email: [email protected]
NCSLI measure Information:
www.ncsli.org/measure/
NCSLI Member Benefit
in the Spotlight
Join the people who are doing the inside work –
NCSLI Committees
NCSLI has established discipline-specific and special interest
working committees that support the needs and interests of
member organizations. These committees discuss current
issues, develop and publish recommended practices, and
organize technical sessions at the Annual Workshop & Symposium. There are committees from a variety of industry groups,
such as pharmaceutical, healthcare, automotive, airlines, petrochemicals, utilities, and many others.
Committee participation enables members to meet practitioners with common interests and similar challenges in order to
develop solution strategies specific to their industry. NCSLI
Committees meet at our Annual Workshop & Symposium, and
committee members continue to network, dialogue, and interact throughout the year in order to carry forward essential work
necessary to achieve and excel in their respective disciplines.
Some of the NCSLI Committees include:
• Accreditation Resources
• Airline Metrology
• Automatic Test and Calibration Systems
• Automotive Committee
• Benchmarking Programs
• Calibration/Certification Procedures
• Chemical and Bio Defense
• Education Systems
• Equipment Management Forum
• Facilities
• Glossary
• Healthcare Metrology
• International Measurements Coordination
• Intrinsic and Derived Standards
• Laboratory Evaluation Laboratory
• Measurement Comparison Program
• Metrology Practices
• Personnel Training Requirement
• Small Business Initiative
• Standards Writing Committee
• Training Resources
• Utilities
NCSLI measure (ISSN #1931-5775) is a metrology journal published by NCSL International (NCSLI). The journal's primary audience is calibration laboratory personnel, from laboratory managers to project leaders to technicians. measure provides NCSLI members with practical and up-to-date information on calibration techniques, uncertainty analysis, measurement standards, laboratory accreditation, and quality processes, as well as providing timely metrology review articles. Each issue will contain technically reviewed metrology articles, new products/services from NCSLI member organizations, technical tips, national metrology institute news, and other metrology information.

Information for potential authors, including paper format, copyright form, and a description of the review process, is available at www.ncsli.org/measure/ami.cfm. Information on contributing Technical Tips, new product/service submissions, and letters to the editor is available at www.ncsli.org/measure/tc.cfm. Advertising information is available at www.ncsli.org/measure/ads.cfm.
Managing Editor:
Richard B. Pettit, Sandia National Laboratories (Retired), 7808 Hendrix NE, Albuquerque, NM 87110. Email: [email protected]

NMI/Metrology News Editor:
Michael Lombardi, NIST, Mailcode 847.00, 325 Broadway, Boulder, CO 80305-3328. Email: [email protected]

New Product/Service Announcements:
Jesse Morse, Fluke Corp., MS: 275-G, P.O. Box 9090, Everett, WA 98206. Email: [email protected]

Technical Support Team:
Norman Belecki, Retired, 7413 Mill Run Dr., Derwood, MD 20855-1156. Email: [email protected]
Belinda Collins, National Institute of Standards and Technology (NIST),
USA
Salvador Echeverria, Centro Nacional de Metrologia (CENAM), Mexico
Andy Henson, National Physical Laboratory (NPL), United Kingdom
Klaus Jaeger, Jaeger Enterprises, USA
Dianne Lalla-Rodrigues, Antigua and Barbuda Bureau of Standards,
Antigua and Barbuda
Angela Samuel, National Measurement Institute (NMI), Australia
Klaus-Dieter Sommer, Landesamt für Mess- und Eichwesen Thüringen (LMET), Germany
Alan Steele, National Research Council (NRC), Canada
Pete Unger, American Association for Laboratory Accreditation (A2LA),
USA
Andrew Wallard, Bureau International des Poids et Mesures (BIPM),
France
Tom Wunsch, Sandia National Laboratories (SNL), USA
Production Editor:
Mary Sweet, Sweet Design, Boulder, CO 80304
Email: [email protected]
Copyright © 2006, NCSL International. Permission to quote excerpts or to reprint
any figures or tables should be obtained directly from an author. NCSL International, for its part, hereby grants permission to quote excerpts and reprint figures
and/or tables from this journal with acknowledgment of the source. Individual
teachers, students, researchers, and libraries in nonprofit institutions and acting for
them are permitted to make hard copies of articles for use in teaching or research,
provided such copies are not sold. Copying of articles for sale by document delivery services or suppliers, or beyond the free copying allowed above, is not permitted. Reproduction in a reprint collection, or for advertising or promotional purposes,
or republication in any form requires permission of one of the authors and written
permission from NCSL International.
Letter from the Editor
In this third issue of measure, we have several very interesting technical articles, as well as
some metrology information, that you should not miss.
First, I want to point out the special technical article written especially for measure by
Richard Davis, BIPM, titled “Practical Approach to Minimizing Magnetic Errors in Weighing.” This paper presents very practical information for dealing with magnetic errors in
weighing operations. First it discusses the recently published OIML Recommendation R-111
(2004) and why these specifications are necessary for standard weights. It then presents how
a weight can be tested to verify that it is in compliance with the OIML Recommendation.
Finally, the paper suggests strategies that can be used to minimize weighing errors due to
magnetic effects; each laboratory can then use this information to decide what level of testing
is warranted to support its weighing operations.
An article by David Bergman, NIST, titled “An Accurate Pulse Measurement System for
Real-Time Oscilloscope Calibration,” discusses a new calibration service offered by NIST for
the calibration of oscilloscopes. The system was designed to calibrate a digitizing oscilloscope for pulse voltage characteristics to an uncertainty of 0.2 % at 1 µs after the pulse transition for pulses with amplitudes up to 100 V. More information on the new NIST calibration
service can be obtained at the NIST web site, http://ts.nist.gov/ts/htdocs/230/233/calibrations/Electromagnetic/Pulse-waveform.htm, or by contacting David at [email protected].
Be sure to read the final Tech Tip in a series of three about the calibration of stopwatches.
The article in this edition by Robert Graham, Sandia, is titled “Stopwatch Calibrations, Part
III: The Time Base Method.” If you have any technical tips, please pass them along for publication.
There are two Review Articles in this issue: The first by Carol Hockert, NIST, describes
the US weights and measures program, including the role of NIST, state governments, the
National Conference on Weights and Measures (NCWM), laboratory accreditation, traceability, and the relevant documentary standards used throughout the US. You will also be
amazed by the photos accompanying the article which show the progress achieved in these
measurements in the past 80 years.
The second Review Article, by Michael Lombardi, NIST, presents the legal and technical
measurement requirements for the use of time and frequency, from everyday metrology applications to advanced applications. Areas covered include law enforcement, musical pitch,
wireless telephone networks, radio and television broadcast stations, the electrical power
grid, and radionavigation systems. I am sure that you will be amazed by the broad impact
of time and frequency on your daily life, as well as in the laboratory.
Finally, you should not miss the item under Metrology News that points out a new song
written about metrology called “The Measurement Blues.” Martin Rowe, Senior Technical
Editor at Test & Measurement World, has both written and recorded the song. If you listen
to it and have some special comments, please pass them along. I thought it was very accurate! I would give it a k = 8!
Richard Pettit
Managing Editor
Sandia National Laboratories (Retired)
HOW TO REACH US: MAIL letters to: NCSLI
measure Journal, 2995 Wilderness Pl., Ste 107, Boulder, CO
80301-5404 USA
FAX letters to: 303-440-3384 E-MAIL letters to: [email protected]
Flexibility
Comes
Standard
Presenting the Mensor Series 600
Automated Pressure Calibrator
Okay. While we can’t claim our Mensor Series 600 Automated
Pressure Calibrator (APC) can do yoga, it certainly offers
capabilities stretching well beyond that of the competition.
Why? With two independent precision pressure regulating
channels—each of which can have up to two interchangeable
transducers, and those—in turn—can have two calibrated
pressure ranges for a total of eight ranges, the Series 600
is just about the most flexible unit available today.
Mensor’s innovative transducer modules can be quickly
removed for calibration or service. Calibration data is stored
on each transducer module, allowing you to interchange
one transducer with another of the same, or a different, range.
Optional spare modules can be interchanged with modules
in the Series 600 to virtually eliminate down time during calibration cycles. How’s that for expanding your productivity?
The Series 600 comes standard with RS-232, Ethernet
and IEEE-488 interfaces. Emulation of gauge or
absolute modes can be achieved using an optional
barometric reference.
Not only that, you’ll find the Series 600, complete
with color-touch screen interface and menus available
in 13 languages, well within reach of your budget.
Want to know more? Call us today at 800.984.4200.
We’ll bend over backwards to show you how well the
Series 600 will s-t-r-e-t-c-h your capabilities!
201 Barnes Drive, San Marcos, Texas 78666
Phone: 512.396.4200 Toll free: 800.984.4200 Fax: 512.396.1820
Web site: www.mensor.com E-mail: [email protected]
NMI NEWS
Do You Know What Time It Is? NMIs Do
and They Display Time on the Web!
When the rock band then known as The Chicago Transit
Authority posed their famous musical question back in 1969
(“Does Anybody Really Know What Time It Is?”) it was some
25 years before the invention of the Internet web browser.
Today, thanks to a number of national metrology institutes
(NMIs) who keep the official time for their respective countries,
everybody knows what time it is! Anybody with an Internet
connection can get the time using an ordinary web browser, accurate to within a fraction of a second. And yes, in response to the
band’s follow up question, some people really do care.
One of the best known NMI web clocks is maintained by the
NIST time and frequency division in Boulder, Colorado.
Accessed through either nist.time.gov or time.gov, the site is
ranked amongst the top 10,000 most visited web sites, according to data from Alexa.com. Visitors click on their time zone,
and the current time is displayed. The site even provides an estimate of the displayed time’s accuracy, with a resolution of 0.1
seconds. Another well known web clock that estimates accuracy to within 0.01 seconds (Coordinated Universal Time only)
can be found on the BIPM's home page at www.bipm.org.
Other NMI web clocks aren't quite as easy to find, but they are well worth visiting. For example, Singapore's national web clock
(pictured) features an animated map that lets you zoom in and
display the time for any part of the world. The site also includes
both a digital and an analog clock display. Canada’s national
web clock simultaneously displays the time for all of the
country’s time zones. Web addresses for these clocks and other
NMI web clocks are provided in the table.
Keep in mind that NMI web clocks only display the time; they don't synchronize your computer clock. That job is left to the network time protocol (NTP) servers operated by NIST and other NMIs.
For information about how to synchronize your computer
clock to a NIST time server, visit: tf.nist.gov/service/its.htm
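As a rough illustration of what those NTP servers provide, here is a minimal SNTP query in Python; the server name and the language are illustrative choices, not anything prescribed by NIST. The sketch sends a 48-byte client request over UDP and reads the server's transmit timestamp, converting from the NTP epoch (1900) to the Unix epoch (1970).

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01):
# 70 years of 365 days plus 17 leap days, times 86400 s.
NTP_TO_UNIX_OFFSET = 2208988800


def sntp_time(server="time.nist.gov", port=123, timeout=5.0):
    """Query an NTP server with a minimal SNTP (RFC 4330) request and
    return the server's clock reading as whole Unix seconds."""
    # First byte: LI = 0, version = 3, mode = 3 (client); rest zeroed.
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, port))
        reply, _ = sock.recvfrom(48)
    # The Transmit Timestamp starts at byte 40; its first 4 bytes are
    # whole seconds since 1900, big-endian.
    ntp_seconds = struct.unpack("!I", reply[40:44])[0]
    return ntp_seconds - NTP_TO_UNIX_OFFSET
```

A production client would also use the fractional-seconds field and compensate for network round-trip delay, which is one reason synchronizing through the operating system's NTP client is preferable to rolling your own.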
National Metrology Institute   Country     Web Clock Address
ONRJ                           Brazil      pcdsh01.on.br/ispy.htm
NRC                            Canada      time5.nrc.ca/webclock_e.shtml
NTSC                           China       www.time.ac.cn
SIC                            Colombia    200.91.231.204
PTB                            Germany     www.ptb.de/en/zeit/uhrzeit.html
NICT                           Japan       www2.nict.go.jp/cgi-bin/JST_E.pl
CENAM                          Mexico      www.cenam.mx/dme/HoraExacta.asp
SPRING                         Singapore   www.singaporestandardtime.org.sg
NIST                           USA         time.gov, nist.time.gov
NRC Celebrates 90th Anniversary
June 6, 2006 marked the 90th anniversary of Canada’s National
Research Council (NRC). Employing nearly 4,000 people
located across Canada, NRC is composed of over 20 institutes
and national programs that span a wide variety of disciplines
and offer a broad array of services.
Established in 1916, NRC has now been Canada’s leading
R&D organization for 90 years. In its early years, NRC functioned mainly as an advisory body to government, a role that
changed greatly in the early 1930s when new laboratories were
built in Ottawa. During the Second World War, NRC grew
rapidly as it performed R&D to benefit the Allied effort.
As a result of this growth, NRC played a major role during the
explosion of basic and applied research in science and engineering during the post-war period and into the 1960s. Key accomplishments during this period included the invention of the
Pacemaker (1940s), the development of Canola (1940s), the
Crash Position Indicator (1950s), and the Cesium Beam Atomic
Clock (1960s).
NRC continued to offer cutting-edge research in support of industry throughout the 1970s and 1980s, a tradition maintained to this day. Key successes included the development of Computer Animation Technology (1970s) and the Canadarm (NASA's Shuttle Remote Manipulator System, 1980s). NRC's recent history has
focused on developing partnerships with
private and public-sector organizations in
Canada and internationally, with the goal
of driving technology and stimulating the
creation of wealth.
A branch of NRC, the Institute for National Measurement Standards (INMS), serves as Canada's national metrology institute (NMI). INMS is located in Ottawa and operates physical metrology programs that develop, maintain, improve, and disseminate standards
for the base quantities of mass, length, time, electricity, temperature and luminous intensity, as well as a number of derived
measurement standards. The chemical metrology program
develops and maintains world-class capabilities in selected areas
of organic and inorganic trace analysis, and provides certified
reference materials.
For more information about NRC-INMS,
visit: inms-ienm.nrc-cnrc.gc.ca
Internet Portal on Thermal Metrology
The procedures and phenomena around temperature and heat
are omnipresent and have an influence on nearly all technical
and scientific development. However, detailed knowledge in this field is spread over many places and is sometimes neither accessible nor usable for those who need it. The newly founded Virtual Institute for Thermal Metrology is intended to remedy this problem. Its website has been created within the scope of the EU project EVITHERM.
In most industrial processes, the use of thermal technologies
and metrology play a significant role. However, many industrial
users have access to only limited knowledge in this field. The
consequences of this lack of knowledge include production
processes which are inefficient, unnecessarily complicated, or
environmentally polluting. The knowledge of thermal technologies is not evenly distributed and not easily accessible everywhere, which explains why industry cannot make better use of it.
More than 40 project partners from 12 European countries,
under the auspices of the National Physical Laboratory (UK),
the Laboratoire National d’Essais (France), the Istituto di
Metrologia G. Colonetti (Italy), the ARC Seibersdorf (Austria),
and the Physikalisch-Technische Bundesanstalt (Germany)
have established the Virtual Institute for Thermal Metrology in
order to remedy this deficiency. The core of the project is an
Internet site, which is now available, where existing expert
knowledge, requirements, and experience have been pooled.
The aim of the Virtual Institute is to gather the information
and expertise on thermal technologies and thermal metrology in
one place, to link it and to evaluate it, as far as possible. Materials data and measuring techniques, standards, service and
training, directories of suppliers of thermal equipment, etc. are
components of EVITHERM. Special importance was attached
to the fast and simple access to data and expert knowledge. The
contents were compiled in a practice-oriented way, especially for
users from industry. Except for the databases of thermophysical properties, the website can be used free of charge.
The website of the Virtual Institute for Thermal Metrology is
www.evitherm.org.
Further information can be obtained from J. Fischer,
[email protected].
Still More Accurate After All These Years
Researchers at the National Institute of Standards and Technology (NIST) have developed an improved method for measuring
basic properties of complex fuel mixtures like gasoline or jet
fuel. The new apparatus for measuring distillation properties
produces significantly more detailed and accurate data needed
to better understand each fuel and its sample-to-sample variation. The data are valuable in tailoring fuels for high performance and low emissions, and in designing new fuels, engines, and emission controls.
Petroleum-based fuels, with few exceptions, are highly
complex mixtures of hundreds of distinct components from
light butanes to increasingly heavy oils. For decades, distillation
curves have been one of the most widely accepted ways of characterizing a fuel. The curve charts the percentage of the total
mixture that has evaporated as the temperature of a sample is
slowly heated. The curve holds a wealth of information—not
just the basic makeup of the fuel, but also indicators as to how
it will perform. Engine starting ability, fuel system icing, vapor
lock, fuel injection scheduling, fuel auto-ignition, hot- and cold-weather performance, and exhaust emissions all have been correlated with features of the distillation curve. The data are
important both for quality control at refineries and the design
of specialty high-performance fuels.
For all its utility, there are serious problems with the common
method for measuring a distillation curve in industry, based on
an old ASTM standard called D-86. The method is subject to
large uncertainties and systematic errors that make it difficult or
impossible to relate the test results to thermodynamic theory
used in developing modern fuels and engines. NIST researchers
added an additional temperature sensor and made other modifications, decreasing the random uncertainty in the temperature
measurement and control from a few degrees to 0.05 degree and
eliminating a number of systematic errors. They also added the
capability to do a composition analysis of each boiling “fraction,” which can provide vital insights into fuel behavior and
pinpoint batch-to-batch differences to help diagnose production
problems.
Technical Contact: Thomas J. Bruno, [email protected]
Photo (© Geoffrey Wheeler): NIST chemists Thomas Bruno and Beverly Smith analyze complex fuel mixtures with the new advanced distillation curve apparatus.

Measurements May Help Show If Natural Constants Are Changing

Physicists at JILA (a joint institute of the National Institute of Standards and Technology and the University of Colorado at Boulder) have performed the first-ever precision measurements using ultracold molecules, in work that may help solve a long-standing scientific mystery: whether so-called constants of nature have changed since the dawn of the universe.

The research, reported in the April 14 issue of Physical Review Letters,1 involved measuring two phenomena simultaneously (electron motion, and rotating and vibrating nuclei) in highly reactive molecules containing one oxygen atom and one hydrogen atom. The researchers greatly improved the precision of these microwave frequency measurements by using electric fields to slow down the molecules, providing more time for interaction and analysis.

Compared to the previous record, set more than 30 years ago, the JILA team improved the precision of one frequency measurement 25-fold and another 10-fold. This was achieved by producing pulses of cold molecules at various speeds, hitting each group with a microwave pulse of a selected frequency, and then measuring how many molecules were in particular energy states. The apparatus and approach were similar to those used in the NIST-F1 cesium atomic fountain clock, the nation's primary time standard, raising the possibility of designing a clock that keeps time with molecules instead of atoms.

The JILA team's ability to make two molecular measurements at once enables scientists to apply mathematical calculations to probe the evolution over time of fundamental natural properties, such as the fine structure constant, which is widely used in research to represent the strength of electromagnetic interactions. Another research group at the National Radio Astronomy Observatory plans to make similar frequency measurements soon of the same molecules produced in distant galaxies, which are so far from Earth that they represent a window into ancient history. By comparing precision values for the fine structure constant on Earth and in distant parts of the universe, scientists hope to determine whether this constant has changed over 10 billion years. Because the fine structure constant is used in so many fields of physics, these measurements are a way to test the consistency of existing theories. The JILA measurements could enable any change in the fine structure constant over time to be determined with a precision of one part per million.

The work at JILA is supported by the National Science Foundation, NIST, the Department of Energy, and the Keck Foundation.

Technical Contact: Jun Ye, [email protected]

Beyond the Kilogram: Redefining the International System of Units
The world’s official standard for mass—a 115-year-old cylinder
of metal—will likely join the meter bar as a museum piece in the
near future. Will the standards for electric current, temperature,
and amount of substance soon follow?
Measurement experts long have planned to replace the kilogram standard—its mass actually fluctuates slightly—with a definition based on an invariable property of nature. The next
logical step in the quest for the most precise, consistent, and
accessible measurements possible is to redefine several more
units of the International System of Units (SI), according to a
new paper by five eminent scientists from three countries.
The paper, published April 6, 2006, in Metrologia,2 advocates
redefining not only the kilogram, but also three more base units
of the SI that are not currently linked to true invariants of
nature—the ampere, kelvin, and mole (used to measure electric
current, thermodynamic temperature, and amount of substance,
respectively). The paper suggests that all four units be redefined
Continued on page 10
1 E.R. Hudson, H.J. Lewandowski, B.C. Sawyer, and J.Ye, “Cold Mole-
cule Spectroscopy for Constraining the Evolution of the Fine Structure
Constant,” Phys. Rev. Letters, vol. 96, no., 14, p. 143004, 2006.
2 I.M. Mills, P.J. Mohr, T.J. Quinn, B.N. Taylor and E.R. Williams,
“Redefinition of the Kilogram, Ampere, Kelvin and Mole: A Proposed
Approach to Implementing CIPM Recommendation 1 (CI-2005),”
Metrologia, vol. 43, pp. 227-246, 2006. Available online at the BIPM
website.
www.ncsli.org
RUNNING HEAD GOES HERE
Vol. 1 No. 3 • September 2006
MEASURE
|
9
NMI NEWS
in terms of four different fundamental constants or atomic properties to which precise values would be assigned. A property of
nature is, by definition, always the same and can in theory be
measured anywhere. (See chart.)
The paper represents the collective opinions of the authors,
including one from the University of Reading in the United
Kingdom, who heads an influential international metrology
committee, as well as three scientists from the U.S. National
Institute of Standards and Technology (NIST) and the former
director of Bureau International des Poids et Mesures (BIPM)
near Paris.
The paper does not represent the official policy position of
any of the authors’ three institutions. However, much of the
paper echoes, and suggests, a practical strategy for implementing an October 2005 recommendation by the International
Committee for Weights and Measures (CIPM).
If implemented, the proposed changes would affect measurements requiring extreme precision and reduce the uncertainty in
values for numerous fundamental constants, not only the four
constants named in the redefinitions, but also many others
because of their interrelationships. Physical constants are widely
used by scientists and engineers to make many types of calculations, and also are used in designing and calibrating quantum-based measurement systems.
“Our general conclusion is that the changes we propose here
would be a significant improvement in the SI, which would be
to the future benefit of all science and technology,” the authors
state in the paper. “We believe that these changes would have
the widespread support of the metrology community, as well as
the broader scientific community.”
The proposed SI system would enable scientists to independently determine measurement standards without the need to
refer to a particular object, the kilogram artifact, which is kept
at BIPM and has been made available for comparisons on only
two occasions since 1889. Further, in the new system, measurements made today could be compared to measurements made
far in the future with no ambiguity. For example, the new SI
system would provide the basis for precise electrical measurements, without the use of approximate values assigned to two
fundamental constants related to resistance and voltage, as is
necessary today. Voltmeters then could be calibrated with high
accuracy in SI units, which is not possible now.
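The two approximate values alluded to here are the 1990 conventional values of the Josephson and von Klitzing constants, which underpin practical voltage and resistance standards. Stating them makes the point concrete; the relations and values below are standard electrical-metrology facts, not taken from the Metrologia paper itself:

```latex
K_J = \frac{2e}{h}, \quad K_{J\text{-}90} = 483\,597.9\ \mathrm{GHz/V};
\qquad
R_K = \frac{h}{e^2}, \quad R_{K\text{-}90} = 25\,812.807\ \Omega .
```

With exactly fixed values assigned to h and e, as the paper proposes, both constants would become exactly known in SI units, which is what would allow voltmeters to be calibrated with high accuracy directly in the SI.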
At the same time, the authors note that ripple effects from
such changes in the SI system would be too small to have a negative effect on everyday commerce, industry, or the public.
The international metrology community has been moving for
years toward redefining the kilogram, and last year began considering the ampere, kelvin, and mole, as recorded in the recent
CIPM recommendation. The committee’s action was prompted
by an April 2005 paper3 by the same five authors, which advocated a quicker redefinition of the kilogram than had previously
been planned. The 2005 paper stimulated extensive discussions
in the international metrology community, which is also the authors' hope for the new paper.

3 I.M. Mills, P.J. Mohr, T.J. Quinn, B.N. Taylor and E.R. Williams, "Redefinition of the Kilogram: A Decision Whose Time Has Come," Metrologia, vol. 42, pp. 71-80, April 2005. Available online at the BIPM website.

Possible New Definitions

Kilogram (linked to the Planck constant): The mass of a body whose energy is equal to that of a number of photons (the smallest particles of light) whose frequencies add up to a particular total.

Ampere (linked to the elementary charge): The electric current in the direction of the flow of a certain number of elementary charges per second.

Kelvin (linked to the Boltzmann constant): The change of thermodynamic temperature that results in a change of thermal energy by a specific amount.

Mole (linked to the Avogadro constant): The amount of substance that contains exactly [set value for Avogadro's constant] specified elementary entities, such as atoms, molecules, electrons, or other particles or groups of particles.
Any decisions about when and how to redefine the SI are
made by an international group, the International Committee
for Weights and Measures, and ratified by a General Conference
on Weights and Measures, which meets every four years. The
new paper suggests that the redefinitions could be ratified at the
conference meeting in 2011.
1. Background
The SI is founded on seven base units—the meter, kilogram,
second, ampere, kelvin, mole, and candela (corresponding to
the seven base quantities of length, mass, time, electric current,
thermodynamic temperature, amount of substance, and luminous intensity).
Of the seven units, only the second and the meter are directly
related to true invariants of nature. The kilogram is still defined
in terms of a physical artifact—a cylinder of platinum-iridium
alloy about the size of a plum—and the definitions of the
ampere, the mole, and the candela depend on the definition of
the kilogram. The kelvin is based on the thermodynamic state
of water, which is a constant, but it depends on the composition
and purity of the water sample used.
The new Metrologia paper lays out a roadmap for implementing CIPM Recommendation 1 (CI-2005), which calls for linking
the kilogram, ampere, kelvin, and mole to exactly known values
of fundamental constants. As a model, consider the meter,
which was once equal to the length of a metal bar that was
prone to shrinking and growing slightly with changes in temperature; the meter is now defined as the distance light travels in
vacuum in a prescribed time. In a similar way, the mass of the
physical kilogram changes slightly depending on trace levels of
dirt or on polishing; scientists plan to replace it with a definition
based on a quantity of light or the mass of a certain number of
specific atoms.
If the changes proposed in the paper were carried out, then
six of the seven base units of the SI (the exception being the
candela) would be related to fundamental constants or atomic
properties, which are true invariants of nature. The proposed
changes are outlined briefly below and in the accompanying
table.
2. The Kilogram
The paper suggests redefining the kilogram by selecting a fixed
value for the Planck constant, which is widely used in physics
to describe the sizes of “quanta,” or units of energy. Quanta are
the building blocks of the theory of quantum mechanics, which
explains the behavior of the smallest particles of matter and
light.
A possible new definition might be something like: The kilogram is the mass of a body whose energy is equal to that of a
number of photons (the smallest particles of light) whose frequencies add up to a particular total. The Planck constant is tied
into this definition because the energy of a photon is the product
of the Planck constant and its frequency, and the relation
between energy and the corresponding mass follows from Einstein's famous equation E = mc².
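As a short worked sketch of this reasoning (not part of the original news item; the constants are rounded 2006-era values and the numerical result is approximate):

```latex
% Photon energy and mass-energy equivalence:
E = h\nu, \qquad E = mc^{2}
\;\Rightarrow\; m = \frac{h\,\nu_{\mathrm{total}}}{c^{2}} .
% Fixing h therefore fixes the kilogram: 1 kg corresponds to photons
% whose frequencies sum to
\nu_{\mathrm{total}} = \frac{(1\,\mathrm{kg})\,c^{2}}{h}
  \approx \frac{(2.998\times 10^{8}\,\mathrm{m/s})^{2}}{6.626\times 10^{-34}\,\mathrm{J\,s}}
  \approx 1.36\times 10^{50}\,\mathrm{Hz}.
```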
The new definition could, in principle, be realized using either
one of the two leading approaches for redefining the kilogram.
One method is the “watt balance,” currently being refined by
NIST and other metrology laboratories in England, Switzerland,
and France. This method relies on selecting a fixed value for the
Planck constant. The alternative method involves counting the
number of atoms of a specific atomic mass that equal the mass
of 1 kilogram. This method depends on selecting a fixed value
for the Avogadro constant, which describes the number of
atoms or molecules in a specified amount of a substance. The
new paper suggests this constant should be the basis of a new
definition of the mole instead (see below).
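To make the scale of this counting concrete, here is a rough back-of-the-envelope calculation (the constants are rounded illustrative values; silicon-28 is the isotope used in the actual Avogadro-constant experiments):

```python
# Rounded constants, for illustration only
ATOMIC_MASS_UNIT_KG = 1.66054e-27   # 1 unified atomic mass unit, in kg
SI28_MASS_U = 27.9769               # mass of one silicon-28 atom, in u

# Number of silicon-28 atoms whose combined mass is 1 kilogram
atoms_per_kg = 1.0 / (SI28_MASS_U * ATOMIC_MASS_UNIT_KG)
print(f"{atoms_per_kg:.3e}")  # about 2.15e+25 atoms
```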
Although the proposed re-definition of the kilogram (using
the Planck constant) would be more directly implemented using
the watt balance, the alternative method could still be used if
additional calculations were made using the theoretical relationship between the Planck and Avogadro constants. This relationship depends on having accurate values for a number of other
fundamental constants. Researchers are working on improving
both methods, which have not yet met consensus goals for precision and also produce slightly different results.
3. The Ampere
The ampere is used widely in electrical engineering, for example, to design electrical devices and systems. It is now defined in terms of a current that, if maintained in two straight parallel conductors of specific sizes and positions, would produce a certain amount of [magnetic] force between the conductors. The ampere is extremely difficult to realize in practice.
The paper suggests linking the ampere to a specific value for the elementary charge, which is the electric charge carried by a single proton, a particle with a positive charge in an atomic nucleus. The ampere might be defined, for example, as the electric current in the direction of the flow of a certain number of elementary charges per second.
4. The Kelvin
The kelvin is used in scientific experiments to represent temperature. Conveniently, absolute zero, the point at which no more heat can be removed from an entity, is 0 K. The kelvin is now defined as a fraction of the thermodynamic temperature of the "triple point" of water (the temperature and pressure at which the gas, liquid, and solid phases coexist in a stable way). The kelvin is extremely difficult to realize, as it requires special thermometers, and attempts to define it have led to new temperature scales.
The paper suggests redefining the kelvin as the change of thermodynamic temperature that results in a change of thermal energy by a specific amount. This has the effect of fixing the value of the Boltzmann constant, which relates temperature to energy. This constant, together with the Avogadro constant, is used in, for example, studies of gases and semiconductors, and serves as a link between the everyday and microscopic worlds. This suggested definition would be easier to realize over a broad range of temperatures than the existing definition.
5. The Mole
Chemists often use the mole to describe sample sizes. The mole is now defined as an amount that contains as many elementary entities (such as atoms, molecules, or electrons) as there are atoms in 0.012 kilograms of a particular type of carbon.
The new paper proposes a definition that sets a specific value for Avogadro's number. This is a very large constant used in chemistry and physics, currently representing the number of atoms in 12 grams of carbon. The number is so huge (6.022… × 10^23) that it would take a computer billions of years to count that high.
The new definition of the mole would be something like: The amount of substance that contains exactly [set value for the Avogadro constant] specified elementary entities, such as atoms, molecules, electrons, or other particles or groups of particles.
Reprinted from the NIST News Site:
www.nist.gov/public_affairs/newsfromnist_beyond_the_kilogram.htm
More NMI News on page 14
NMI NEWS
22nd Asia Pacific Metrology Programme
(APMP) General Assembly to be held in
New Delhi, India
The 22nd General Assembly of APMP 2006 (APMP-06) in conjunction with the 6th International Conference on Advances in
Metrology (AdMet-06) will be organized by National Physical
Laboratory, New Delhi, India (NPLI) in December 2006. The
venue of these conferences is India Habitat Centre, New Delhi.
AdMet-06, which includes symposia on Physical, Electrical,
Environmental and Other Issues; Pressure & Vacuum; Time &
Frequency; Chemical Metrology, will be held during December
11-13, 2006. This will be followed by APMP GA-06, covering
the General Assembly and related meetings during December
13-16, 2006.
Laboratory visits to NPLI will also be arranged as a part of the
program for APMP-06. It is hoped that the delegates of APMP-06
will also be able to take advantage of AdMet-06 by participating in this conference.
New Delhi, the capital of India, is rich in the architecture of
its monuments. Diverse cultural elements absorbed into the
daily life of the city have enriched its character. Exploring the
city can be a fascinating and rewarding experience. The city
will also provide pleasant surroundings for the APMP GA-06 and
AdMet-06 participants to work and stay, and there are numerous
other tourist destinations within easy reach of Delhi to explore.
For more information, please visit: www.apmp2006.org.in
NIST / UM Program To Support Nanotech
Development
The National Institute of Standards and Technology (NIST) and
the University of Maryland (UM) have joined in a $1.5 million
cooperative program that will further NIST’s efforts to develop
measurement technology and other new tools designed to
support all phases of nanotechnology development, from discovery to manufacture. The competitively awarded grant,
renewable for up to five years, also will accelerate the scale-up
of NIST’s new Center for Nanoscale Science and Technology
(CNST), launched in March 2006. UM research associates will
work on jointly defined projects aligned with the center’s
mission to develop the knowledge and technical infrastructure
that underpins nanotechnology development. They also will collaborate with visiting researchers who come to the CNST to use
measurement instruments and other advanced equipment in its
Nanofabrication Facility, a national resource available to collaborators and outside users.
For more information, see www.nist.gov/public_affairs/releases/
nistgrant_toumd.html
METROLOGY NEWS
Annual Meeting of Council on Ionizing
Radiation Measurements and Standards
(CIRMS)
The Council on Ionizing Radiation Measurements and Standards (CIRMS) will hold its 15th annual meeting at the National
Institute of Standards and Technology in Gaithersburg, Maryland, October 23 to 25, 2006. CIRMS is an open forum that
promotes dialog among its three main constituencies: Industry;
Academia; and Government. The theme for this year’s meeting
is the Implications of Uncertainties in Radiation Measurements
and Applications. Travel grants to attend this meeting are available for students on a competitive basis. Information on CIRMS
can be found at www.cirms.org, which has links to the presentations from last year’s annual meeting and to the fourth
issuance of the CIRMS triennial report on “Needs in Ionizing
Radiation Measurements and Standards.” This report also contains information on the history and background of CIRMS, its
mission and objectives, as well as detailing many specific areas
requiring program work in radiation measurements.
For more information, please visit www.cirms.org, or contact
the CIRMS executive secretary, Katy Nardi at (770) 622-0026,
email: [email protected]
Workshop on Flexible
Scope of Accreditation
On May 15, 2006, the SP Swedish National Testing and Research Institute hosted a workshop on "Flexible Scope of Accreditation," organised on behalf of EA, EUROLAB and EURACHEM. Flexible scope is advantageous, and even necessary, when there are a multitude of similar methods or when a general methodology is applicable to a spectrum of products. The objective of the workshop was to present experiences from the different stakeholders and to give constructive input to a road map for the future development of flexible scope of accreditation.
Additional information is available at http://www.sp.se/eng/
U.S. and Singapore Act
To Simplify Telecom Trade
On June 2, new, streamlined regulatory approval procedures
came into effect in the United States and Singapore, allowing
U.S. makers of telecommunication equipment to certify their
products at home and ship directly to the $1.3 billion Asian
market, and eliminating the need for often-duplicative testing.
The delay-ending, cost-saving simplification is the latest bilateral step in carrying out a 1998 trade agreement among
members of APEC, the Asia-Pacific Economic Cooperation. The
National Institute of Standards and Technology (NIST) designated four U.S. organizations as “certification bodies,” and they
now have been recognized by the Singapore government as
qualified to determine whether shipments of telecommunications products—including wireless equipment—comply with
that country’s required standards.
In a parallel action, the Federal Communications Commission
(FCC) has recognized a certification body designated by the
Infocomm Development Authority of Singapore. This permits
Singapore telecommunication exports to be tested and certified
as conforming to FCC regulations before shipment to the United
States. The FCC is the U.S. regulator of interstate and international communications. Two-way trade of telecommunication
products between the two nations totaled about $1.1 billion in
2005.
The joint action nearly completes the second phase of the
1998 APEC Mutual Recognition Arrangement on Telecommunication Equipment, intended to reduce technical barriers to
markets. Since 2001, under the first phase, manufacturers could
furnish test results from approved U.S. laboratories as evidence
of compliance, but Singapore officials continued to perform the
final evaluation and certification of products. Before then, procedures for certifying U.S. telecommunications exports were
performed entirely by Singapore organizations.
The four Singapore-approved certification bodies include the
Bay Area Compliance Laboratory Corp. (Sunnyvale, Calif.);
Underwriters Laboratories, Inc. (San Jose, Calif.); CKC Certification Services (Mariposa, Calif.); and Compliance Certification Services (Morgan Hill, Calif.).
After Canada, Singapore is only the second APEC member
with which the United States has progressed to full implementation of the MRA. The first phase has been implemented with
Australia, Canada, Chinese Taipei (Taiwan), Hong Kong, Korea
and Singapore.
U.S. certification bodies are evaluated by the NIST-recognized accreditation services of the American National Standards Institute (ANSI). After an audit and review in 2005, NIST recognized ANSI as the accreditor of U.S. "certification bodies" for evaluating telecommunications equipment for compliance with Singapore requirements.
Further information on the Singapore-approved laboratories
and certification bodies, and the MRA can be found at
http://ts.nist.gov/ts/htdocs/210/gsig/mra.htm
More Metrology News on page 17
First Calibration Lab in Canada to Achieve
ISO/IEC 17025:2005 Accreditation
Bob Jameson (right), President of CISG, shown here with Jim Genge, CISG's Quality Manager.
The calibration laboratory of the Canadian Instrumentation Services Group (CISG) Ltd. announced on May 31, 2006 that it had recently achieved ISO/IEC 17025:2005 accreditation for the calibration of electrical measurement and test equipment.
"It was a very methodical process. Fortunately, we have a dedicated and skilled staff who were extremely supportive - this was definitely a team effort," says Jim Genge, CISG's quality manager. "We built our infrastructure based on a strong commitment to quality and service. Our processes are well documented and ingrained in how we do business. We are the first Canadian calibration lab to receive this prestigious accreditation to the new 2005 revision." CISG president Bob Jameson added, "This is a demonstration of our staff's commitment to quality. It strengthens our ability to compete in the global marketplace."
CISG maintains a fully equipped calibration laboratory located in Peterborough, Ontario. The lab has a history dating back to the early 1900s, and performs calibrations of electrical, electronic, dimensional and physical instrumentation with traceability to national standards.
For more information, visit cisg.net, or contact Michelle O'Neill at (705) 741-9819.
Keithley Receives Award for
Semiconductor Manufacturing
Measurement Product
Keithley Instruments, Inc. has received the prestigious Editors’
Choice Best Product Award, presented annually by Semiconductor International magazine, for its Model 4200-SCS Semiconductor Characterization System with Pulse I-V (PIV) Package.
This award recognizes Keithley’s achievement in “developing a
product that is truly making a difference in semiconductor manufacturing,” according to the magazine. The Model 4200-SCS
PIV package supplies instrumentation, connections, and software that allow semiconductor engineers to take ultra-short
pulse measurements on tiny transistors while they are still on an
integrated circuit wafer.
“Pulse measurements, which involve testing electronic
devices using pulsed current in nanosecond-length bursts
instead of a constant flow, are becoming increasingly important
in designing and fabricating semiconductor devices,” said Mark
Hoersten, Keithley Vice President, Business Management. “This
new measurement capability is vital for materials researchers
and device engineers who are struggling to test increasingly
fragile and miniaturized devices, since they can be damaged by
overheating when subjected to traditional electronic test
methods. In addition, advanced semiconductor materials may
not be accurately characterized by older test techniques.”
In a ceremony on July 12 in San Francisco at the international
SEMICON trade show, Semiconductor International
announced all the winners for 2006. “Advances in semiconductor technology are only possible because of the kinds of products being honored in this year’s Editors’ Choice Best Product
Awards program,” said Pete Singer, Editor-in-Chief of Semiconductor International. “Chipmakers rely on these products to
create electronics that are smarter, smaller, faster, less expensive
and more reliable. We congratulate the people and the companies that have had the insight and fortitude to bring these products to the market.”
For more information on Keithley’s semiconductor test
systems, visit www.keithley.com/pr/054
Do you ever get the Measurement Blues?
We all know the feeling. There’s a measurement that we needed
to finish yesterday, but nothing is going right. The instruments
aren’t working or else somebody has borrowed them, your computer keeps crashing, and you can tell at a glance that your
results aren’t even close to being right. And to make matters
worse, the budget for new equipment is zero! To make us all
feel better (or maybe worse), Martin Rowe, Senior Technical
Editor at Test & Measurement World, has written and recorded
a song called “The Measurement Blues.” It’s worth a listen after
a hard day of struggling with metrology problems.
You can download the MP3 and read the lyrics at:
www.tmworld.com/blues
From Coast to Coast, We Always Make Perfect Time.
It takes a perfect team of time specialists to create the perfect primary frequency standards. Symmetricom has two teams of technologists who have been in the forefront of atomic clock technology for decades—one in Santa Clara, California, the other in Beverly, Massachusetts.
Prior to August of 2005, each team of technologists challenged the other, as peers and competitors. Today, under the Symmetricom umbrella, they challenge each other as peers and co-workers. Working together as one team, they are taking advantage of increased investment in research and development, and sharing their knowledge and expertise in cesium technology to meet an even wider range of mission-critical timing needs in metrology, laboratory standards, calibration, timekeeping, SATCOM terminals and telecommunications.
Symmetricom's Santa Clara team of timing specialists is responsible for the 5071A Primary Frequency Standard, which provides unsurpassed cesium accuracy, stability and reliability for the most demanding laboratory and timekeeping applications.
Symmetricom's Beverly team of timing specialists has recently introduced the Cs4000 Cesium Frequency Standard, an advanced digital cesium that provides exceptional performance in a configurable 3U rack mount chassis designed to provide standard and custom outputs.
To learn more about Symmetricom's precision frequency references, visit us online at www.SymmTTM.com or call 1-707-528-1230.
Perfect Timing. It's Our Business.
2007
NCSL International Workshop & Symposium
Metrology's Impact on Products and Services
JULY 29 – AUGUST 2
Saint Paul RiverCentre
Saint Paul, Minnesota
Every product and service that consumers use is highly dependent on metrology. From the fit and finish of our vehicles to weights and volumes of products purchased in the grocery, we are impacted at every level.
Metrology laboratories calibrate equipment used to create compatible component parts used in commercial and consumer products. A sound and cohesive metrology and quality system, from the National Metrology Institute to the end consumer, impacts the quality of life for everyone.
www.ncsli.org/conference
[email protected]
303-440-3339
NCSL International would like to thank its 2006 Workshop & Symposium Sponsors
for their outstanding contributions to the overall success of this year’s Conference event in Nashville, Tennessee.
Your support has been key to the successful expansion and promotion of
educational and professional measurement science activities and opportunities for
the advancement of global understanding of the impact of metrology on society.
May this be your best conference to date!
GOLD SPONSORS
Thank You!
SILVER SPONSORS
NCSLI PRESIDENT’S RECEPTION SPONSOR
2007 Sponsorship opportunities are now
available. For further information, contact
Craig Gulka, NCSL International, at 303-440-3339 or by email: [email protected]
TECHNICAL PAPERS
An Accurate Pulse Measurement
System for Real-Time
Oscilloscope Calibration
David I. Bergman
Abstract: An accurate sampling system for calibrating the pulse response of real-time digitizing oscilloscopes up to
100 V is described. The measurement system is the result of ongoing efforts at the National Institute of Standards and
Technology (NIST) to establish and maintain capability in waveform sampling metrology. A low-noise sampling probe
in conjunction with a frequency-compensated resistive attenuator measures repetitive pulses with attainable amplitude uncertainty less than 0.2 % of the pulse amplitude at 1 µs following the pulse transition. The probe and attenuator are calibrated against a wideband sampling probe and 50 Ω attenuator combination that serves as a reference
standard requiring only a dc calibration. The method used to calibrate the low-noise probe and attenuator is described
along with a tally of error sources. The biggest contributor to Type B uncertainty is the tuning of the attenuator's frequency compensation, achieved through a digital filter.
1. Introduction
A waveform sampling system has been developed at the
National Institute of Standards and Technology to support accurate voltage and current waveform metrology for signals in the
frequency range from dc to 6 GHz. Dubbed the NIST Sampling
Waveform Analyzer (SWA), the system supports measurement
applications such as ac voltage (including phase), distorted
power, pulse settling, and pulse energy [1]. The system can also
serve as a check for an ac Josephson voltage standard. The SWA
offers excellent performance for gain flatness and settling error
and combines 16 bits of digitizing resolution with bandwidth as
high as 6 GHz. Dynamic accuracy expressed as Total Harmonic
Distortion at 1 GHz is –32 dB and can be corrected to a level
of –46 dB. The SWA can also serve the needs of lower frequency
applications using a NIST-developed low-bandwidth (f3dB =
20 MHz) probe designed for higher accuracy and low noise.[2]
David I. Bergman
National Institute of Standards and Technology
100 Bureau Drive, MS 8172
Gaithersburg, MD 20899 USA
Email: [email protected]
A measurement system built around the SWA and this low-noise probe to characterize the pulse response of real-time digitizing oscilloscopes has recently been developed for the Sandia
National Laboratories under their sponsorship. Experiments
performed at Sandia frequently require the measurement of
high voltage pulses. For this purpose, high voltage capacitive or
resistive dividers are commonly used in conjunction with digitizing oscilloscopes that measure the divider's output or low-voltage side. While calibration procedures are in place for the
high voltage dividers themselves, there is currently no capability for measuring the pulse voltage characteristics of the digitizing oscilloscope to the required accuracy of 0.2 % at 1 µs after the
pulse transition for pulses with amplitudes up to 100 V. The system described here was developed to meet this measurement need.
2. Description of the Sampling
Waveform Analyzer (SWA)
The NIST SWA consists of a sampling mainframe unit connected through an umbilical electrical harness to a sampling
comparator probe, hereafter referred to as a sampling probe.
Together, the mainframe and sampling probe form a successive
approximation analog-to-digital converter (ADC) that samples in equivalent-time. The sampling probe is typically connected in close physical proximity to the signal source being measured and serves as the comparator portion of the ADC while also performing the sampling function. The timebase, digital-to-analog converter (DAC), and successive approximation register (SAR) logic reside within the sampling mainframe. This arrangement achieves the best possible measurement because the critical and sole wideband component of the measurement process – the comparator – is placed in close proximity to the signal source being measured.
Operating in equivalent-time, a single waveform sample is obtained through a successive approximation sequence in which each comparison occurs on a different cycle of the waveform being digitized. Fig. 1 illustrates this process. As each sample acquisition is completed, the timebase delay is increased by one sample period, allowing the next point on the waveform to be sampled. It is also possible to improve upon this method by using a timebase capable of producing more than one sampling strobe per signal period. Such a scheme can greatly decrease data acquisition time when measuring low-frequency signals.[3]

Figure 1. Equivalent-time, successive approximation digitization. Comparator probe connects to sampling mainframe through umbilical harness.

At least one commercially-available instrument has used this technique, packaging the comparator and companion circuitry neatly in a pencil-type probe.[4] A wideband sampling comparator integrated circuit has also been developed at NIST featuring a bandwidth of 2.3 GHz and excellent settling performance.[5] In addition to serving as a reference standard for the measurement system described in this paper, the wideband sampling probe supports a NIST Special Test measurement service for step settling error. The service is available to customers seeking settling uncertainty as low as 0.2 % at 2 ns or 0.02 % at 10 ns.[6] Fig. 2 shows a picture of the SWA hardware including a low-noise sampling probe (with a 100 V attenuator) and a wideband sampling probe.
A SAR-type digitizer sampling in equivalent-time can achieve –3 dB bandwidths in excess of 1 GHz with digitizing resolution of 16 bits or more. A drawback of this approach is that in order to sample in equivalent-time, the waveform being sampled must be repetitive, and a synchronous trigger signal must be available. Implicit to this design philosophy is the fact that although multi-comparator digitizing schemes achieve higher throughput rates than single-comparator implementations, their accuracy is limited by the input offset mismatch among the individual comparators. In contrast, a single-comparator SAR approach has only a single offset, and it can be calibrated. The offset does not limit digitizing accuracy, which through signal averaging can surpass the limit imposed by the noise floor of the sampling system.

1 Contribution of the National Institute of Standards and Technology. Not subject to copyright in the U.S.
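To make the equivalent-time, successive-approximation process of Fig. 1 concrete, the following Python sketch models one conversion per timebase delay setting. All names, the 16-bit resolution, and the test waveform are illustrative assumptions, not details of the NIST hardware.

```python
import math

def sar_sample(signal, t, n_bits=16, vref=1.0):
    """One successive-approximation conversion of signal(t).

    In the real instrument each comparator decision occurs on a
    different cycle of the repetitive waveform; in this model we
    simply re-evaluate signal(t) for each trial bit.
    """
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        # DAC level for the trial code, spanning [-vref, +vref)
        dac_level = -vref + 2.0 * vref * trial / (1 << n_bits)
        if signal(t) >= dac_level:   # comparator decision
            code = trial
    return -vref + 2.0 * vref * code / (1 << n_bits)

def equivalent_time_digitize(signal, n_points, t_step):
    """Increase the timebase delay by one sample period per acquisition."""
    return [sar_sample(signal, k * t_step) for k in range(n_points)]

# Example: digitize one period of a repetitive 1 MHz sine wave with
# an equivalent-time step of 10 ns (100 points per period).
wave = lambda t: 0.5 * math.sin(2.0 * math.pi * 1.0e6 * t)
record = equivalent_time_digitize(wave, n_points=100, t_step=1.0e-8)
```

With a noiseless, perfectly repetitive input, every point of the record agrees with the waveform to within one LSB of the converter.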
3. Description of the Oscilloscope
Measurement System
A turnkey system for characterizing the pulse performance of
digitizing oscilloscopes has been developed using a commercially available pulse generator, the SWA, and custom application software written in LabVIEW2. The pulse generator
delivers repetitive, programmable voltage pulses up to 100 V
peak to the inputs of the oscilloscope under test and the SWA.
Figure 2. Picture of SWA system showing mainframe unit, wideband sampling probe (smaller case), and the low-noise sampling probe with 100 V attenuator attached.

2 The identification of a commercial product does not imply recommendation or endorsement by the National Institute of Standards and Technology or that the item identified is necessarily the best available for the purpose.
The SWA digitizes the pulse signal and makes the data available
for processing by a personal computer via the IEEE 488 bus.
3.1 System Components

Parameter | Specification | Comments
Analog bandwidth | 20 MHz | f3dB
Pass-band flatness | 200 µV/V | dc to 1 MHz
Accuracy | 0.2 % | After calibration
RMS Noise | 3 mV (0.0008 % of 400 V FSR) | Referred to attenuator input
Input Impedance | 1 MΩ, 5 pF | Parallel RC model
Timebase range | 1 ps to 0.1 s per sample | Linearity error < 5 ps

Table 1. SWA Performance Specifications Using Low-Noise Sampling Probe

Salient measurement specifications for the SWA are listed in Table 1. The complete system includes the following components:
1. A commercial programmable Pulse Generator.
2. A two-channel, NIST-developed Sampling Waveform Analyzer (SWA) to support NIST sampling comparator probes
and to provide sampled, digitized data records.
3. A NIST-developed sampling comparator probe and attenuator with 100 V peak voltage capability, input resistance
≥ 1 MΩ, input capacitance ≥ 5 pF, bandwidth approx.
20 MHz.
4. A NIST-developed wideband sampling comparator probe
with 2 V peak voltage capability, 50 Ω input impedance,
bandwidth > 2 GHz, and settling to 0.02 % in 10 ns.
5. A wideband, 40 dB, 50 Ω attenuator.
6. Software written in LabVIEW to control the measurement
system and to acquire and process the measurement data
from the system under test and the NIST reference system.

Figure 3. LabVIEW front panel for the voltage pulse measurement system.

The programmable pulse generator supplies 100 V peak voltage pulses into a 50 Ω load. Pulse durations range from 5 µs to 50 µs
with 10 % to 90 % transition duration of approximately 10 ns.
Pulse repetition rates range from 100 Hz to 1 kHz. At these
rates, pulse duty factors are held below 1 % to minimize
thermal problems and to closely approximate single-shot transient pulse conditions. While pulse aberrations may reach 5 %,
the effects of pulse distortion are minimized by normalizing the
response of the oscilloscope under test with that of the reference
SWA. The SWA’s high input impedance allows it to measure the
pulse signal delivered to the oscilloscope in parallel with the
oscilloscope while presenting minimal additional load to the
generator.
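The duty-factor constraint can be checked with simple arithmetic; the following lines are an illustrative calculation, not part of the published system software:

```python
def duty_factor(pulse_duration_s, repetition_rate_hz):
    """Fraction of each repetition period during which the pulse is high."""
    return pulse_duration_s * repetition_rate_hz

# A 50 us pulse at 100 Hz, or a 5 us pulse at 1 kHz, gives a 0.5 % duty
# factor; the longest pulse at the highest rate would reach 5 %, so that
# combination would not be used.
assert abs(duty_factor(50e-6, 100) - 0.005) < 1e-12
assert abs(duty_factor(5e-6, 1000) - 0.005) < 1e-12
assert duty_factor(50e-6, 1000) > 0.01
```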
The SWA provides simultaneous sampling of both channels
synchronous with the test signal. Equivalent-time resolution of
a picosecond is possible, so the nominal sampling rate of the test
oscilloscope can be easily matched. The oscilloscope output
data are first normalized by the reference output data to correct
for aberrations in the voltage pulse and to refer the measurements to NIST-traceable standards. This correction is made as
follows: An ideal rectangular pulse is numerically constructed to
have an amplitude equal to the mean value of the 20 data points
preceding the pulse trailing edge in the SWA data record. The
oscilloscope data record, PSCOPE, is amplitude-corrected by
adding to it the difference between this ideal pulse, PIDEAL, and
the SWA data record, PSWA. Thus, we have
PCORRECT = PSCOPE + PIDEAL – PSWA .    (1)
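In code, the correction of equation (1) might be implemented as in the sketch below. The function name, the zero baseline of the ideal pulse, and the assumption that the pulse top occupies the record up to the trailing edge are illustrative simplifications of the description above.

```python
import numpy as np

def correct_scope_record(p_scope, p_swa, edge_idx, n_avg=20):
    """Apply eq. (1): P_CORRECT = P_SCOPE + P_IDEAL - P_SWA.

    The ideal rectangular pulse amplitude is the mean of the n_avg
    SWA points immediately preceding the pulse trailing edge; the
    baseline after the edge is taken as zero in this sketch.
    """
    amplitude = np.mean(p_swa[edge_idx - n_avg:edge_idx])
    p_ideal = np.zeros_like(p_swa)
    p_ideal[:edge_idx] = amplitude   # flat pulse top up to the trailing edge
    return p_scope + p_ideal - p_swa

# A perfect oscilloscope (p_scope identical to p_swa) would yield
# exactly the ideal rectangular pulse after correction.
```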
Once corrected, selected performance measures are computed along with their uncertainties. Measured oscilloscope performance parameters include sensitivity (gain accuracy) in units
of Vin/Vout, transition duration, settling error, tilt, and input
impedance. The expanded measurement uncertainty (k = 2) performance goal for the sensitivity of the test unit, averaged over
at least a 1 µs interval and at least 1 µs following the 50 % reference level instant, is within the larger of V × 0.2 % or
50/V × 0.02 %, where V is the applied peak voltage.
Fig. 3 shows the LabVIEW user interface front panel. The
controls allow the user to configure the parameters of the pulse
generator and the setup of the oscilloscope under test. The user
can also specify the number of data record averages to be
acquired from both the oscilloscope and the SWA. The oscilloscope’s raw data record, the SWA reference data, and the normalized oscilloscope data are displayed together in the waveform
graph. The data may also be saved to a user-specified file.
3.2 Attenuator Frequency Compensation Filter
The low-noise sampling probe's input range of ±10 V is extended for signals with amplitudes up to several hundred volts with a suitable voltage attenuator at the probe's signal input. To minimize loading of the source and power dissipation in the attenuator and associated thermal errors, the attenuator should use relatively high resistance values. However, unless compensated, a high output resistance from the attenuator will form a low-pass filter with the input capacitance of the sampling probe, reducing the effective bandwidth. To maintain bandwidth, a conventional frequency compensation scheme may be used, as shown in Fig. 4. When R1C1 = R2Ceff (where Ceff equals the parallel combination of C2 and Cprobe), the input-to-output transfer function of the network is purely real and constant.

Figure 4. Basic frequency-compensated attenuator and its impulse response.

In practice, it is difficult to match the attenuator impedances (including the probe input capacitance) to better than about 1 %. For applications requiring uncertainties better than 1 %, the attenuator's frequency response may be compensated further using a digital filter on the sampled data. The digital filter compensates for mismatch between the high-side and low-side impedances of the attenuator by providing a transfer function that is the reciprocal of the attenuator's transfer function. The transfer function of the frequency-compensated attenuator may be written as

H(s) = (a0 s + a1) / (b0 s + b1),  (2)

where a0 = R1R2C1, a1 = R2, b0 = R1R2(C1 + Ceff), and b1 = R1 + R2.
The discrete-time impulse response of the filter corresponding to the reciprocal of H(s) has the form

hr(kT) = w1 δ(kT) + w2 (w0 − w1) T e^(−w2 kT) u(kT),  (3)

where w0 = (R1 + R2)/R2, w1 = (C1 + Ceff)/C1, and w2 = 1/(R1C1) are the three governing parameters of the attenuator response; T is the sample period; δ(kT) is the unit delta function; and u(kT) is the unit step function. Numerical compensation of the attenuator is achieved by computing hr(kT) for a given sample rate and record size and then convolving probe data with this impulse response. Alternatively, the reciprocal filter can be implemented with an equivalent two-step process. The exponential component of equation (3) acts upon the data in a manner that can be represented by a first-order infinite impulse response (IIR) filter,

zk = e^(−w2 T) zk−1 + w2 (w0 − w1) T xk,  (4a)

and the impulsive component of equation (3) is accounted for by adding to the intermediate result, zk, of equation (4a) the input data, xk, appropriately weighted,

yk = zk + w1 xk.  (4b)
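The two-step implementation of equations (4a) and (4b) can be sketched in a few lines of Python (an illustrative sketch only; the function name and the scalar sample loop are assumptions, not the system's actual software):

```python
import math

def reciprocal_filter(x, w0, w1, w2, T):
    """Two-step reciprocal (compensation) filter of equations (4a)/(4b)."""
    a = math.exp(-w2 * T)        # pole of the first-order IIR section
    g = w2 * (w0 - w1) * T       # weight of the exponential component
    z, y = 0.0, []
    for xk in x:
        z = a * z + g * xk       # eq. (4a): z_k = e^(-w2 T) z_(k-1) + w2 (w0-w1) T x_k
        y.append(z + w1 * xk)    # eq. (4b): y_k = z_k + w1 x_k
    return y
```

Note that for a perfectly matched attenuator (R1C1 = R2Ceff) we have w0 = w1, the exponential term vanishes, and the filter reduces to a constant gain of w0, the reciprocal of the attenuator's dc division ratio R2/(R1 + R2).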
3.3 Calibration of the Attenuator
The sampling probe and 100 V attenuator are calibrated
through a procedure that compares the pulse response of the
probe and attenuator combination to that of a wideband sampling probe with a 40 dB, 50 Ω attenuator connected to its
input. The two probes are connected to channels A and B of the
SWA respectively. For the purpose of calibrating the sampling
probe and attenuator, the wideband probe and its attenuator
may be considered a reference standard. Its response has been measured independently and found to contribute negligible error over the frequency range of interest here. The wideband probe and attenuator combination is calibrated for dc offset and gain using a 6.5 digit digital multimeter whose calibration is traceable to NIST standards. Following the dc calibration of the wideband probe and attenuator, an ac (pulsed) calibration of the sampling probe and attenuator is made against the corrected wideband probe and attenuator.
Fig. 5 illustrates a method for obtaining the filter's coefficients. A 100 V pulse generator simultaneously drives the 1 MΩ attenuator that is to be corrected and a 40 dB wideband attenuator comprising part of the reference channel. The 1 MΩ attenuator is already nominally frequency-compensated and is connected to a low-noise sampling probe on mainframe channel A. The wideband attenuator is connected to a wideband sampling probe on channel B.

Figure 5. Measurement setup for determining filter coefficients to correct residual mismatch in the high-side and low-side impedances of the low-noise probe's attenuator.

Data collected through the low-noise probe channel are filtered by the attenuator reciprocal filter using initial estimates for the filter coefficients. A nonlinear least-squares fitting algorithm fits the filtered data to the reference channel data iteratively until an appropriate stopping criterion is reached. We define an error function, err(w0, w1, w2, offset), as

err(w0, w1, w2, offset) = Σk [chB(kT) − (hr * chA)(kT) − offset]²,  (5)

where * denotes the convolution operation. The function, err(w0, w1, w2, offset), is the difference between the desired output as measured by the reference channel B and the corrected (reciprocal filtered) probe data measured on channel A, with allowance made for an offset. Since the error function is nonlinear in the filter parameters, we use Newton's method to iteratively find a minimum in err. Note that in the reference channel data, pulse transition times are much smaller than in the low-noise probe data record because the wideband probe and 50 Ω attenuator combination has much higher bandwidth. Because we are attempting to correct only the transfer function of the 1 MΩ attenuator, and not the difference in bandwidth between the low-noise sampling probe and the wideband sampling probe, for pulse measurements we exclude points in the vicinity of the transitions from the fit. Note also that because the crossover frequency of the attenuator is much less than the bandwidth of the probe, the inverse filter is fitted entirely to the response of the attenuator and not to the response of the probe, which is almost completely flat in the vicinity of the crossover frequency.
Fig. 6 demonstrates the ability of digital filtering to compensate for non-optimal trimming of the attenuator's fixed components. The topmost portion of a 50 V, 20 µs wide pulse measured with the probe and attenuator nominally compensated is shown. Also shown in the figure is the pulse after digital correction, overlaid upon the pulse measured with the wideband probe as a reference. Agreement between the corrected waveform and the reference waveform is better than 100 µV/V. The lower noise of the low-bandwidth sampling probe compared to the reference standard is also clearly evident.

Figure 6. Illustration of the reciprocal filter's ability to fine-tune the frequency compensation of the attenuator.

The same method used to calibrate the attenuator can be used to determine the oscilloscope's input impedance. If a parallel RC combination is placed in series with the oscilloscope input, an RC network like that in Fig. 4 is formed, with the R2C2 impedance corresponding to the input impedance of the oscilloscope. If R1 is known, it can be shown that

Rin = R1 / (w0 − 1)  (6a)

and

Cin = (w1 − 1) C1,  (6b)

where w0 and w1 are the filter coefficients computed from a minimum solution to equation (5), and Rin and Cin in parallel are the oscilloscope's input impedance. In this case, the chA data come from the oscilloscope, and the chB data come from a data record acquired with the low-noise probe acting as a reference.
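The coefficient-fitting loop just described can be sketched as follows (a minimal sketch under stated assumptions: a plain Gauss-Newton update stands in for the paper's Newton iteration, the channel records are NumPy arrays, and all names are illustrative):

```python
import numpy as np

def reciprocal_filter_np(x, w0, w1, w2, T):
    """Equations (4a) and (4b) applied to a sampled record x."""
    a, g = np.exp(-w2 * T), w2 * (w0 - w1) * T
    z = np.empty_like(x)
    z[0] = g * x[0]
    for k in range(1, len(x)):
        z[k] = a * z[k - 1] + g * x[k]     # eq. (4a)
    return z + w1 * x                       # eq. (4b)

def fit_filter_coefficients(ch_a, ch_b, p0, T, iters=25):
    """Gauss-Newton minimisation of the squared error of equation (5):
    err = sum_k [ch_b[k] - (h_r * ch_a)[k] - offset]^2."""
    p = np.asarray(p0, dtype=float)         # parameters [w0, w1, w2, offset]

    def residual(q):
        return ch_b - reciprocal_filter_np(ch_a, q[0], q[1], q[2], T) - q[3]

    for _ in range(iters):
        r = residual(p)
        # Finite-difference Jacobian of the residual, one column per parameter
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dq = np.zeros_like(p)
            dq[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (residual(p + dq) - r) / dq[j]
        # Gauss-Newton step: minimise ||r + J s||^2, then update p <- p + s
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + s
        if np.linalg.norm(s) < 1e-10 * (1.0 + np.linalg.norm(p)):
            break
    return p
```

As in the paper's procedure, points near the pulse transitions would be excluded from the residual in practice; the sketch omits that masking for brevity.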
4. Uncertainty Budget
Sources of error and uncertainty that give rise to Type B [7]
uncertainty components in the calibration of the sampling probe
and attenuator are listed below. These effects are believed to be
inclusive of all salient systematic effects that produce combined
errors and uncertainties within an order of magnitude of the
performance goals of the measurement system.
1. Uncertainty in the offset and gain measurements
2. Wideband attenuator self heating
3. Wideband attenuator voltage coefficient
4. Wideband attenuator linear response
5. Sampling probe attenuator frequency compensation
6. Sampling probe thermal tail error
Item 1 is a component of uncertainty arising from random
measurement noise during the gain and offset calibration of the
wideband probe and its attenuator. Items 2-4 are intrinsic errors
in the wideband probe and attenuator reference standard. Items
5 and 6 address how well the sampling probe and frequency
compensated attenuator agree with the reference standard and
are therefore regarded as errors.
Random noise in the sampling probe is believed to be the only
source of uncertainty relevant to Type A [7] evaluation of uncertainty. When the software reports Type A uncertainty, it includes
the effects of random errors in the oscilloscope measurement.
Table 2 summarizes the constituent uncertainty components
resulting from the Type A and Type B evaluation of uncertainties for the measurement of pulse amplitude. The Type A uncertainty shown in the table includes only noise contributed by the
reference measurement system, i.e., by the sampling probe.
Fig. 7 presents a gallery of plots that collectively place an empirical bound on the error in the numerical frequency-compensation of the sampling probe attenuator. The worst case error is
seen to be approximately 0.1 %.
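As a cross-check on the Type B combined value in Table 2, the listed components for a 100 V pulse combine in root-sum-square fashion (a short illustrative computation; the component values follow the table entries):

```python
import math

# Type B components (relative, in units of 10^-6) for a 100 V pulse, per Table 2:
components = {
    "offset measurement (404 V/Vin)": 404 / 100,   # 4.04 for Vin = 100 V
    "gain measurement": 20,
    "wideband attenuator self heating": 200,
    "wideband attenuator voltage coefficient": 20,
    "wideband attenuator linear response": 0,
    "attenuator frequency compensation": 1000,     # 1 x 10^-3
    "thermal tail (1000 V/Vin)": 1000 / 100,       # 10 for Vin = 100 V
}

# Root-sum-square (RSS) combination of the relative expanded components
rss = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"Type B combined: {rss:.1f} x 10^-6")   # approximately 1020 x 10^-6 = 1.020 x 10^-3
```

The dominant term is clearly the 1 × 10⁻³ frequency-compensation error bounded empirically in Fig. 7; the remaining components contribute only about 2 % more in quadrature.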
Table 2. Summary of Evaluation of Relative Expanded Uncertainties (k = 2)

Type B components (relative expanded uncertainty; comment):
  1a. Uncertainty in the offset measurement: (404 V/Vin) × 10⁻⁶ (Vin = pulse amplitude)
  1b. Uncertainty in the gain measurement: 20 × 10⁻⁶
  2. Wideband attenuator self heating: 200 × 10⁻⁶
  3. Wideband attenuator voltage coefficient: 20 × 10⁻⁶
  4. Wideband attenuator linear response: 0
  5. Sampling probe attenuator frequency compensation: 1 × 10⁻³
  6. Sampling probe thermal tail error: (1000 V/Vin) × 10⁻⁶ (after 1 µs)
Type B Combined Uncertainty (RSS Total): 1.020 × 10⁻³ (Vin = 100 V); 1.026 × 10⁻³ (Vin = 10 V)
Type A component:
  Sampling probe noise: (1000 V/(Vin √(Mn))) × 10⁻³ (n data record averages)
Combined Expanded Uncertainty (RSS Total of Type A and Type B): 1.022 × 10⁻³ (Vin = 100 V, no averaging); 1.220 × 10⁻³ (Vin = 10 V, no averaging)

5. Uncertainty in Reported Performance Parameters for an Oscilloscope Under Test
The error sources just described give rise to uncertainties in the waveform parameters measured by the system. Here we give a brief discussion of how Type B waveform parameter uncertainties depend on these measurement uncertainties. Type A uncertainties are determined during the measurement process based on repeated measurements.

5.1 Sensitivity (Gain Accuracy)
Uncertainty in sensitivity depends on the uncertainty in the SWA's estimate of the pulse amplitude. If only a single SWA data record is collected, uncertainty in pulse amplitude consists of the Type B uncertainty component and the Type A component (with n = 1) divided by the square-root of the number of data samples, M, used in the computation. For example, consider the case of a 100 V, 10 µs wide pulse measured over a waveform epoch [8] of 25 µs with a record size of 500 samples. In this case, the sample period will be 25 µs / 500 samples = 50 ns. If
Figure 7. Corrected low-noise sampling probe and wideband probe for (a) positive and (b) negative-going pulses of different amplitudes and durations. The heavy blue line is the low-noise sampling probe and attenuator. The lighter black line is the wideband probe and wideband attenuator.
sensitivity is measured over a 5 µs wide window positioned around the center of the pulse, then M = 100. In this example, the combined relative expanded uncertainty of sensitivity is

U = √(UB² + UA²/M),  (7)

where UB is the Type B combined relative expanded uncertainty and UA is the single-record (n = 1) Type A relative expanded uncertainty from Table 2. The Type A component of uncertainty for sensitivity is computed by the system software based on the statistical variance of the data records used in the averaging process. A coverage factor obtained from Student's t distribution is used so that uncertainties are reported with 95.45 % confidence. The expanded uncertainty for a given measurement is therefore the Type A component of uncertainty reported by the software combined with the Type B component of uncertainty in equation (7). It is noted that if multiple data records are collected and averaged, the second term under the radical in equation (7) may be omitted and treated as a Type A uncertainty, estimated through direct measurement.
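The combination in equation (7) can be sketched numerically (illustrative Python; the Type A value used in the example call is a placeholder, since in practice it is reported by the system software):

```python
import math

def sensitivity_uncertainty(u_type_b, u_type_a_single, M):
    """Equation (7): combine the Type B component in quadrature with the
    single-record (n = 1) Type A component reduced by sqrt(M), where M is
    the number of samples in the measurement window."""
    return math.sqrt(u_type_b ** 2 + (u_type_a_single / math.sqrt(M)) ** 2)

# Example: 100 V pulse, 5 us window at 50 ns per sample -> M = 100 samples.
# The Type A value 1.0e-3 below is a placeholder, not a value from the paper.
u = sensitivity_uncertainty(1.020e-3, 1.0e-3, M=100)
```

With 100 samples in the window, the placeholder Type A term is reduced by a factor of ten and the Type B component dominates the combined result.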
5.2 Tilt
Uncertainty in tilt equals the uncertainty in the SWA data record
due to the normalization operation. This uncertainty consists of
the Type B uncertainty component and the Type A component
divided by the square-root of 20, since the 20 data points preceding the pulse trailing edge were used to compute the ideal
pulse amplitude. Thus, for a 100 V pulse measured with no
averaging, the combined expanded uncertainty of tilt is
Utilt = √(UB² + UA²/20).  (8)

Again it is noted that if multiple data records are collected and averaged, the second term under the radical in equation (8) may be omitted and treated as a Type A uncertainty, estimated through direct measurement. Note also that uncertainty in the time component of the estimated slope is considered to be negligibly small.

5.3 Transition Duration
For most measurements, the dominant source of uncertainty in the transition duration measurement is time quantization error, which is ±1 sample period, T. For example, measuring a pulse whose width is 10 µs requires a waveform epoch of approximately 20 µs. If the data record length is 1000 samples, then the sample period, T, equals 20 ns. This number constitutes an uncertainty that is considerably larger than any uncertainty component that might arise from timebase errors in the measurement system or from additive jitter in the waveform.
The determination of transition duration involves two measurements, one that estimates the 10 % reference level instant and one that estimates the 90 % reference level instant of the waveform. A worst-case estimate of uncertainty for each of these measurements is uTD = T, so the combined and expanded uncertainty of transition duration is

UTD = 2T.  (9)

5.4 Settling Error
Uncertainty in settling error equals the combined uncertainty of the pulse data and the uncertainty of the settling reference level. Each of these uncertainties equals the uncertainty in the oscilloscope data record due to the normalization operation. This uncertainty consists of the Type B uncertainty component and the Type A component divided by the square-root of 20, since the 20 data points preceding the pulse trailing edge were used to compute the settling reference level. Thus, for a 100 V pulse measured with no averaging, the combined expanded uncertainty of settling error is

USE = √2 · √(UB² + UA²/20).  (10)

6. Conclusion
A system for characterizing the pulse response of real-time digitizing oscilloscopes has been developed specifically for use by the Sandia National Laboratories in their Primary Calibration Laboratory. The system is built around a Sampling Waveform Analyzer System developed at NIST and sponsored in part by the Department of Energy and the Department of Defense. A sampling mainframe unit, a sampling comparator probe, and a 100 V resistive attenuator form a complete sampling and digitizing system with accuracy that surpasses that of commercially available oscilloscopes. The system is calibrated against a reference wideband sampling probe which, in conjunction with a wideband 50 Ω attenuator, is capable of serving as a reference standard.

7. References
[1] T.M. Souders, B.C. Waltrip, and O.B. Laug, "A Wideband Sampling Voltmeter," IEEE Trans. Instrum. Meas., vol. 46, no. 4, pp. 947-953, 1997.
[2] D.I. Bergman and B.C. Waltrip, "A Low Noise Latching Comparator Probe for Waveform Sampling Applications," IEEE Trans. Instrum. Meas., vol. 52, no. 4, pp. 1107-1113, 2002.
[3] B.C. Waltrip, O.B. Laug, and G.N. Stenbakken, "Improved Time-Base for Waveform Parameter Estimation," IEEE Trans. Instrum. Meas., vol. 50, pp. 981-985, 2001.
[4] C. Gyles, "Repetitive Waveform High Frequency, High Precision Digitizer," IEEE Trans. Instrum. Meas., vol. 38, pp. 917-920, 1989.
[5] O.B. Laug, T.M. Souders, and D.R. Flach, "A Custom Integrated Circuit Comparator for High-Performance Sampling Applications," IEEE Trans. Instrum. Meas., vol. 41, pp. 850-855, 1992.
[6] "Calibration Services User's Guide," NIST Special Publication SP250, pp. 189-193, 1998.
[7] "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results," NIST Technical Note 1297, 1994.
[8] "IEEE Standard on Transitions, Pulses, and Related Waveforms," IEEE Std 181-2003.
Metrology: Who Benefits
and Why Should They Care?
Fiona Redgrave and Andy Henson1
Abstract: The National Metrology Institutes (NMIs) push the boundaries of metrological capability to ever-greater
heights, spurred on by advances in science and technology, the demands of industry and the needs of society. Many
new products and processes, new science and technology, indeed new markets and the legislation that governs them,
depend on good metrology. It would therefore seem logical for metrology and measurement to be intrinsic elements in planning the processes on which they impact, yet often they are not routinely addressed, or at least not in a timely way.
The NMI mission includes delivering benefits to the national economy or quality of life for our citizens by working to
overcome this inertia. However, in a global economy we increasingly need to rethink our definitions of national impact,
as nowadays many drivers and their consequent effects are no longer confined within national boundaries. This paper
will reflect on the mechanisms by which metrology impacts our past, present and future and the interplay between national and global perspectives, and will suggest new approaches for embedding metrology “upstream” into our economies and lives.
International Office
National Physical Laboratory
Hampton Road, Teddington, Middlesex TW11 0LW
United Kingdom
E-mail: [email protected]
1.0 Introduction
Many new products and processes, new science and technology, and indeed new markets and the legislation that governs them, depend on good metrology. On the one hand innovation can drive metrological requirements, whilst on the other hand metrology can provide researchers with the necessary technology and techniques to develop and convert innovative ideas and knowledge into practical products. This virtuous circle is essential in a knowledge-led economy, which must innovate but also ensure the quality of life of its citizens. This paper examines the mechanisms by which metrology impacts our lives and considers new approaches for embedding metrology “upstream” into our economies and lives.

1 © Crown copyright 2005. Reproduced by permission of the Controller of HMSO and Queen's Printer for Scotland.
2.0 Historical Context
Recognition of the need for measurement dates back thousands
of years, at least to the times when measurements were essentially driven by the need to trade locally commodities such as
wheat, oil, wine etc. However, both the Egyptians and later the
Romans used their measurement capability to spectacular effect
in the construction of impressive monuments, buildings and
structures, many of which survive to this day. Local commerce
was also the driver in the UK in the Middle Ages, where the
need for consistent measurement was enshrined in the 35th
clause within King John’s Magna Carta [1] of 1215, which
declared that [2]: “There shall be standard measures of wine,
ale, and corn (the London quarter), throughout the kingdom.
There shall also be a standard width of dyed cloth, russett, and
haberject, namely two ells within the selvedges. Weights are to
be standardised similarly.”
Five hundred years later the US Constitution, Article I,
section 8 [3], vested the US Congress with the power to
“… coin money, regulate the value thereof, and of foreign coin,
and fix the standard of weights and measures; …”
The basic need to ensure fair trade of commodities within a
nation remains a primary requirement and is still encapsulated
in modern legal metrology legislation, including the recent EU
Measuring Instruments Directive (Directive 2004/22/EC) [4].
The Renaissance and subsequent scientific discoveries, by
people such as Galileo, Huygens, Newton, Boyle, Hooke and
others, led to the development of scientific measuring instruments and hence improvements in measurement capability.
Trade expanded beyond commodities, but the production of
manufactured goods was still limited in part by measurement
capability, with components specifically manufactured and
fitted on an individual basis. The US Springfield rifle, which helped shape history, was one of the first examples of components manufactured to sufficient accuracy and quality to enable interoperability, thus initiating mass production.
The Industrial Revolution was fuelled by developments in
measurement capability, which for the first time enabled a wide
range of standardised components to be made of appropriate
quality for interoperability allowing mass production, increasing volume and thus reducing prices, enabling greater accessibility and market growth. Nowadays standardisation and
interoperability are part of our everyday life from the small scale
to examples like the Airbus A380, where major structural parts
are manufactured independently in four different countries but
assembled in one, with the absolute confidence that all components will fit together.
By the mid nineteenth century the increase in international
trade, and the developments in transportation and communications, together with greater volume markets, led to the need to formalise the metrological infrastructure not just within but also between countries.
3.0 Technical Framework
Providing the technical basis for promoting innovation or
making sound decisions on regulations and trade requires a
framework for ensuring the quality of measurements. Reliance
on quality measurement data in the regulatory and trade arenas
also depends on the mutual acceptance by trading partners and
by regulators and those being regulated. This extra dimension
requires transparency in the various pathways to traceability,
evidence of comparability among realisations of the SI at the top
of respective traceability chains, and mutual acceptance of
quality system assessment schemes.
Since the mid nineteenth century, recognition of this need has
driven the metrology community to develop a number of
regional and international arrangements and agreements in
order to improve the comparability of measurements internationally, to increase confidence in calibration and measurement
capabilities and test certificates issued by National Metrology
Institutes (NMIs) and accredited laboratories, and to provide a
formal basis for acceptance of certificates issued by other countries.
3.1 Metre Convention
The Convention of the Metre (Convention du Mètre) [5] was
signed in Paris in 1875 by representatives of seventeen nations.
The Convention is a diplomatic treaty that gives authority to the
General Conference on Weights and Measures (Conférence
Générale des Poids et Mesures, CGPM), the International Committee for Weights and Measures (Comité International des
Poids et Mesures, CIPM) and the International Bureau of
Weights and Measures (Bureau International des Poids et
Mesures, BIPM) to act in matters of world metrology. In addition to founding the BIPM, the Metre Convention established a
permanent organizational structure for member governments to
act in common accord on all matters relating to units of measurement. The Convention, modified slightly in 1921, remains
the basis of international agreement on units of measurement
and currently has fifty-one Member States and twenty Associates, including all the major industrialised countries. The Convention effectively led to the establishment of National
Metrology Institutes within many of the signatory countries.
3.2 SI System
In 1960 the 11th General Conference on Weights and Measures
adopted the name Système International d'Unités (International
System of Units, international abbreviation SI) [6], for the recommended practical system of units of measurement. The 11th
CGPM laid down rules for the prefixes, the derived units, and
other matters. The base units are seven well-defined units which
by convention are regarded as dimensionally independent: the
metre, the kilogram, the second, the ampere, the kelvin, the
mole, and the candela. Derived units are those formed by combining base units according to the algebraic relations linking the
corresponding quantities.
3.3 CIPM MRA
In 1999, the directors of the National Metrology Institutes of 38 countries and two international organisations signed the Mutual Recognition of national measurement standards and of calibration and measurement certificates [7] (the CIPM Mutual Recognition Arrangement, or CIPM MRA). By the end of 2005, NMIs from a further 26 countries had signed the CIPM MRA. The CIPM MRA is a response to the drivers from trade and regulation, and currently around 90 % of world trade in merchandise exports is between CIPM MRA participant nations [8].
The CIPM MRA can be thought of as being supported by three pillars that require the participating NMIs:
1. To take part in appropriate comparisons: the key and supplementary comparisons;
2. To implement and demonstrate an appropriate quality system; and
3. To declare and subject their Calibration and Measurement Capabilities (CMCs) to extensive peer review.
After an internal review within the NMI, all CMCs undergo peer review by other NMIs within the local Regional Metrology Organisation (RMO), i.e. EUROMET, APMP, SIM, COOMET and SADCMET, followed by a sample peer review by all other RMOs. CMCs are then submitted for review to the Joint Committee of the RMOs and the BIPM (JCRB). Once agreed by the JCRB, they populate the BIPM key comparison database (KCDB) on the BIPM web site.
Thus for the first time the CIPM MRA provides end users with validated data in a harmonised form that are supported by international comparisons, subject to extensive peer review, and declared by NMIs that are obliged to operate an appropriate quality system. End users are now able to make a realistic comparison of the services and uncertainties offered by the various NMIs. The arrangement provides the basis for acceptance by an end user in one country of a certificate issued by an NMI in another country, provided that the NMI is a signatory to the CIPM MRA, meets the requirements of the Arrangement, and that the technical needs of the end user are met.

3.4 OIML
The International Organisation of Legal Metrology (OIML) [9] was established in 1955 on the basis of a convention in order to promote the global harmonisation of legal metrology procedures. OIML is an intergovernmental treaty organisation with 58 member countries, which participate in technical activities, and 51 corresponding member countries that join the OIML as observers. OIML collaborates with the Metre Convention and BIPM on the international harmonisation of legal metrology. A worldwide technical structure provides members with metrological guidelines for the elaboration of national and regional requirements concerning the manufacture and use of measuring instruments for legal metrology applications. The OIML develops model regulations and issues international recommendations that provide members with an internationally agreed basis for the establishment of national legislation on various categories of measuring instruments.

3.5 Accreditation
The vast majority of calibrations performed to underpin industry, trade and regulation are not undertaken directly against a primary standard held by a National Metrology Institute. For many end users their source of traceability to the SI is through accredited calibration laboratories, and it is therefore important that an end user can have confidence in all parts of the traceability chain through a transparent, validated and documented process. The non-acceptance of either accreditation status or certificates and reports issued by accredited calibration and test laboratories, or a demand for multiple accreditations, is therefore a hindrance to international trade, particularly for those products which have to undergo re-testing or re-calibration upon entry to importing countries.
The International Laboratory Accreditation Cooperation (ILAC) is an international cooperation among the various laboratory accreditation schemes operated throughout the world. The ILAC Arrangement [10], which came into effect on 31 January 2001, provides technical underpinning to international trade by promoting cross-border stakeholder confidence and mutual acceptance of accredited laboratory data and calibration and test certificates. The purpose of the ILAC Arrangement is to develop a global network of accredited testing and calibration laboratories that are assessed and recognised as competent by ILAC Arrangement signatory accreditation bodies.
4.0 Metrology as an Enabler for Everyday Life
Metrology influences, drives and underpins much of what
we do and experience in our everyday lives, though often unseen
and beyond our awareness. Industry, trade, regulation, legislation, quality of life, science and innovation all rely on metrology
to some extent. It is estimated that in Europe today we measure
and weigh at a cost equivalent to 2 % to 7 % of the gross
domestic product (GDP) [11], so metrology forms a natural and
vital part of everyday life.
4.1 Metrology and industry
Industry relies on good measurements to manufacture products that meet specifications, customers' requirements and documentary standards; to comply with regulations and legislation, whether national, European or international; to enable exports; to
ensure compatibility and interoperability; to enable efficient
manufacture and assembly to the required quality; to improve
production processes and techniques, reduce scrap, meet deadlines; and to cut costs or improve cost effectiveness.
4.2 Metrology for trade and commerce
Global trade and commerce relies on a growing number of international standards and technical regulations, with some estimates indicating that 80 % of traded goods are based on
documentary standards and regulations where conformity
assessments, and hence measurements, may be required. International Organisations (ILAC, International Accreditation
Forum – IAF, Metre Convention, OIML, ISO) have developed
interdependent procedures with the aim of "One standard, one
test, accepted everywhere." Equipment is often sold with calibration or test certificates so it is essential that these documents
and tests are accepted and considered reliable not only in the country of origin but also at the final destination. This has become more important in recent years as trade outside individual countries and regions has increased dramatically, and with the reduction in fiscal barriers the potential for non-tariff barriers to become visible has increased. Two of the major trading blocs, the EU and the USA, each account for around one fifth of the other's trade, amounting to some €1 billion a day. In 2002, exports of EU goods to the USA amounted to €240 billion (24.2 % of total EU exports), whilst exports from the USA to the EU amounted to €175 billion (17.7 % of total EU imports) [12]. Likewise, in 2001 imports to the EU from Japan accounted for 7.4 % of the total EU import market, whilst 4.6 % of EU exports were destined for Japan. As many manufactured goods comprise components made in a multitude of countries, standardisation and interoperability are crucial.
4.3 Metrology for regulation and legislation
Regulations and legislation are now developed not just at a
national level, but also on a European and international basis.
Metrology is often a prerequisite for effective legislation covering product safety, compatibility and interoperability, health and safety, the environment and healthcare, to name just a few areas.
Metrology is required not just to enable an effective assessment
of compliance, but also in the development of effective regulation and as an input to the data underpinning the rationale for
the legislation. In a survey undertaken by the Joint Research
Center – Institute for Reference Materials and Measurements
(JRC-IRMM) on behalf of the European Commission [13], it
was estimated that 20 % of all EU legislation has a significant
science and technology basis and one third of outstanding EU
legislation relies on measurements.
A number of drivers for improvements in the way measurement is addressed in regulation have arisen, particularly as a result of the lowering of technical limits for environmental pollutants and substances hazardous to health. Take, for example, the area of emissions trading, where substantial amounts of money are tied to levels of emissions. CO2 emissions quota trading is dependent on simple measurements of the quantity of fuel burnt. If emissions trading is to address other greenhouse gases, whose emissions depend on the combustion conditions, rigorous and continuous measurement of the actual flue gas emissions will be required – a much tougher proposition. There are yet other reasons raising the profile of measurements made in a regulatory context. The EU Integrated Pollution Prevention and Control Directive (IPPC – Directive 96/61/EC) [14] requires data to be reported to Brussels to enable EU governments to make the tough choices related to global warming that will affect the whole of Europe. Thus data related to a factory's emissions are no longer just a local enforcement issue, and quality and consistency take on a wider meaning.
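The contrast drawn above can be made concrete with a short sketch of fuel-based CO2 accounting, where only the quantity of fuel burnt is measured. The emission factors below are illustrative assumptions, not values taken from any regulation:

```python
# Fuel-based CO2 accounting: emissions are estimated from the measured
# quantity of fuel burnt, multiplied by a fuel-specific emission factor.
# The factors below are illustrative assumptions only.
EMISSION_FACTORS = {       # tonnes CO2 per tonne of fuel (assumed)
    "coal": 2.4,
    "fuel_oil": 3.15,
}

def co2_emissions_tonnes(fuel: str, tonnes_burnt: float) -> float:
    """Estimate CO2 emissions from the measured quantity of fuel burnt."""
    return tonnes_burnt * EMISSION_FACTORS[fuel]

# A plant burning 10 000 t of coal would report:
print(co2_emissions_tonnes("coal", 10_000))  # 24000.0
```

Accounting for other greenhouse gases would instead require continuous measurement of concentrations and flows in the flue gas itself, which is why it is the tougher proposition.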
4.4 Metrology for quality of life
Measurements are essential for ensuring the quality of life of citizens, be it health and safety, food safety, healthcare and medical treatment, or environmental monitoring. Doctors require measurements to diagnose illness (e.g. temperature, blood samples, blood pressure, X-rays, ultrasound scans), to monitor patients and to deliver treatment such as radiotherapy, chemotherapy and drugs. Many modern advances in medical treatment, such as increased doses in cancer treatment, are only possible because improvements in measurement techniques and accuracy ensure that the treatment is effectively targeted at the diseased area, minimising damage to healthy tissue. Similarly, improvements to or degradation in the environment can only be assessed through effective monitoring and measurement. Recent scares have significantly raised public awareness and concern regarding food safety, be it chemical or microbiological contamination or the presence of genetically modified organisms.
4.5 Metrology for science and innovation
On the one hand, new breakthroughs in science enable improvements in measurement techniques and metrological capability; for example, the discovery of the Josephson effect led to new voltage standards. On the other hand, improved metrological capability provides new tools for scientists, researchers and innovators in all fields; for example, advances in atomic clocks enabled the development of the Global Positioning System. Measurements are required to test hypotheses and verify theories, to establish consistency of results, to determine fundamental constants, and to investigate the susceptibility of phenomena to external influences. Developments in metrology in one field frequently require metrological developments in other areas; the watt balance and the Avogadro project, two approaches to replacing the artefact mass standard, are examples. The ability to move a scientific breakthrough or development, for example carbon nanotubes, from an interesting scientific phenomenon to an industrial application often relies on metrological capability and advances.
5.0 Challenges
5.1 International and national perspectives
An NMI's mission includes delivering benefits to the national economy and quality of life for its country's citizens. Historically the drivers underpinning priority setting have been nationally generated, but increasingly these drivers are influenced by external factors such as regional or international legislation or trade. Regulators, for example, tend to be mandated nationally, but they operate within, and impact on, a global economy. In many cases it is therefore no longer wise to consider benefits, drawbacks, needs and impact from a purely national perspective. Indeed, as a result of mergers, acquisitions and expansion, for larger businesses and industries the concept of a national company is somewhat outdated. Whilst this generates a number of challenges, it also provides the potential for benefits through collaboration with other countries with similar needs.
5.2 Challenges for trade, regulation and innovation
Before the advent of modern best practice in industry, measurement issues had to be faced and addressed on a case-by-case basis. Quality management standards such as the ubiquitous ISO 9001:2000 [15] and ISO/IEC 17025:2000 [16] addressed the need for equipment to be calibrated and operators
to be trained, and for many companies compliance with this
requirement has become a matter of course. However, good calibration practice is only one aspect of good measurement practice. Often “good results” depend on an appropriately planned
and executed approach to the measurement and testing regime.
The value of metrology is wider and deeper than the calibration of equipment, and includes collaborative research, the development of measurement techniques, and expert advice; hence the challenge remains to ensure that these broader aspects are explored and appropriately addressed.
Two studies carried out on behalf of the European Commission (EC), the ACCEPT project [17] and the Hogan and Hartson review [18], highlighted a number of instances of measurement- and testing-related barriers to trade, and more recent examples have been identified in the EC RegMet [19] and MetroTrade [20] projects. Two specific examples of measurement-related technical barriers to trade concern fish exports to the EU from Africa [21] and the export of frozen shrimps to the EU [22]. The African fish exports highlighted the necessity for an integrated Standards, Quality, Accreditation and Metrology (SQAM) infrastructure and demonstrated the benefits of its implementation, whilst the case of the frozen shrimps shows the problems that can occur when legislation or the associated mandated standards include technical limits which are ambiguous, and where the assessment of compliance or otherwise can depend on the equipment and technology used for the assessment. In both cases the monetary value of the exports alone was significant: around €100 million in lost exports per year for the East African countries, and €1 million for a single consignment of frozen shrimps. There is a similar example from the Caribbean region, which highlights the negative impact on exports, and the significantly reduced market access, that result from not having a functional national measurement infrastructure.
The regulatory community, for example, faces a wide range of metrological challenges in the development and enforcement of regulatory legislation [23], including:
• Regulatory requirements that are difficult to test in practice;
• Documentary standards which are not sufficiently specific, or which allow the use of a range of methods that have not been cross-validated and that give different results (EMC testing);
• Insufficient reliable data to undertake scientifically rigorous
risk assessments (genetically modified organisms);
• A lack of understanding of the impact of uncertainty of
measurement on the setting of technical limits and the
assessment of compliance;
• Legislation or documentary standards that do not specify
the maximum permissible level in an unambiguous manner
(antibiotic residues);
• Specified limits that are very close to the physical limits of
detection (residue of genetically modified organisms,
mercury in water and conductivity of solutions);
• A lack of suitable certified reference materials and the difficulty in producing stable reference materials (particularly
for some chemical, food and microbiological testing, where
achieving traceability in the strictest interpretation can be
exceedingly difficult);
• A lack of technically robust cost-effective measurement
methods and equipment;
• Requirements for dynamic and real-time measurements
(environmental monitoring); and
• The need to operate in a rapidly changing global environment.
The causes of these problems may be found in:
1. Limitations in technical capabilities and practical realisations.
2. Incomplete, inadequate or diverse sources of information.
3. Inconsistent recognition of materials supplied by diverse
commercial producers.
4. Extreme ranges of physical quantities.
5. Differences in regulations, legislation and mandated standards.
6. Differences in the implementation of existing legislation.
7. De facto requirements or historical practice for traceability
to national standards in a specified country or institute.
8. Historical practices.
9. Differences between metrological standards in different
countries.
10. Variations in technology between countries.
11. Lack of harmonisation of test and calibration procedures.
12. Political and economic factors.
13. The belief that metrological and technical issues will be
dealt with ‘downstream’ of the formulation of regulations.
14. Lack of understanding (particularly in developing
economies) of the impact of measurement on development
and quality of life.
Some of these challenges arise because the measurement
aspects are not always addressed at an appropriate place in the
regulatory process or in a timely manner, in some cases because
the need for metrology and measurement capability is not
apparent at the time. Whilst measurement is often addressed
well at the enforcement stage of the regulatory process, it is not
always recognised that the data underpinning the rationale for
the legislation and the establishment of technical limits may also
rely crucially on measurement aspects. Unless the measurement
aspects are identified at each stage of the process and in advance
of need, it may be difficult to ensure that appropriate measurement capability is available when required and the legislation
may be less effective as a result. If the measurement issues have
not been adequately addressed, the quality of the underpinning
data may be impaired thus impacting on the rationale for the
legislation. For example, the uncertainties of the measurements
will impact on the quality of the underpinning data, the cost of
compliance with the legislation, the cost effectiveness of the legislation and the reliability of assessments of compliance. Regulatory limits that are comparable to the uncertainties will and do
give rise to a host of problems, as illustrated by the disqualification and subsequent court challenges of Olympic athletes following failed doping tests.
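The problem of limits comparable to the measurement uncertainty can be illustrated with a guard-banded compliance decision. This is a generic sketch of a common decision rule in conformity assessment, not a rule prescribed by any particular regulation, and the numbers are purely illustrative:

```python
def compliance_decision(measured: float, limit: float, U: float) -> str:
    """Classify a measurement against an upper regulatory limit,
    taking the expanded uncertainty U (k = 2) into account.

    Guard-banded rule: only results whose whole uncertainty interval
    falls on one side of the limit give a clear decision.
    """
    if measured + U <= limit:
        return "compliant"        # even the worst case is below the limit
    if measured - U > limit:
        return "non-compliant"    # even the best case exceeds the limit
    return "indeterminate"        # the interval straddles the limit

# With a limit of 10.0 and U = 1.5, a result of 9.0 cannot be declared
# compliant or non-compliant with confidence:
print(compliance_decision(9.0, 10.0, 1.5))   # indeterminate
print(compliance_decision(8.0, 10.0, 1.5))   # compliant
print(compliance_decision(12.0, 10.0, 1.5))  # non-compliant
```

When the limit itself is of the same order as U, most results fall in the indeterminate band, which is exactly the situation that generates disputes and court challenges.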
In practice many of the issues above are of concern not just to regulators but also to industry, innovators and scientists in general. Likewise, in ensuring that a scientific breakthrough can develop into an industrial application, or that new products can be developed effectively, there is frequently a need for the development of metrological capability, which is not always recognised at an early stage in the process.

1. Checklist – Twelve questions with associated issues for consideration, to guide the regulator through measurement-related issues likely to be encountered.
2. Guidance – More detailed background information, including reference to some sector-specific examples of the role of measurement in regulation.
3. Definitions – Definitions of terms commonly encountered in measurement, plus a list of standards, measurement and accreditation organisations.

Figure 1. Three-tier structure of the template.
6.0 The Way Forward
The cost to businesses and industry of a particular piece of legislation can depend strongly on the technical limits set and on the technology required to implement, and to ensure compliance with, the legislation. An assessment of the measurement requirements at all stages of the process, before the legislation is developed, is therefore essential.
The EC RegMet project [19] found that many of the measurement issues facing the regulatory community were common across sectors. Some generic guidance on metrological issues within regulation was therefore developed, in both hard copy and CD-based HTML format. The guidance, known as the 'Template,' was developed as a three-tier document as shown in Fig. 1. The questions, and the sets of accompanying bullet points, are designed to draw the regulator's attention to relevant issues for consideration. The twelve top-level questions summarise the issues (see below), whilst the bullet points list more in-depth points for consideration.
• Q1 What is the driver for the regulation?
• Q2 What consequences will the regulation have for international trade?
• Q3 Is the rationale for the regulation fully supported by
data of appropriate quality and reliability?
• Q4 Are new data required to underpin the regulation?
• Q5 What parameters or quantities will need to be measured?
• Q6 What measurement and accreditation infrastructure
exists for the quantities identified in Q5?
• Q7 Is suitable measurement technology available for the
relevant quantities?
• Q8 Can regulatory limits be set which provide an appropriate balance of risk and cost?
• Q9 What, if any, new research does the regulation
require?
• Q10 How much detail will be prescribed in the regulation?
• Q11 How will the effectiveness of the regulation be
enhanced by feedback from surveillance in the marketplace?
• Q12 How might future technical or bureaucratic developments impact the regulation?
Although this guidance was developed with the regulatory
community in mind, a number of questions (particularly the
associated issues) are relevant to industry, innovation, the development of new products and scientific research. For example,
Q12 recognises the benefit of revisiting metrological and technological solutions at a later date to identify scope for development and improvement. Metrology and technology move on, and these developments may provide more cost-effective, time-efficient, accurate and reliable solutions; needs and requirements, of course, also change with time.
Other measures can also be utilised, such as those below:
• Identification of potential measurement and testing requirements at an early stage during research, innovation or
development, so that measurement solutions can be developed in a timely manner and metrological expertise encapsulated and built on within the process.
• Development of mechanisms to ensure that appropriate pre-normative research is conducted, so that science-based standards (which are much more likely to be accepted internationally) are developed in a timely way.
• Conducting underpinning science collaboratively across the
trade regions to bolster confidence in the results.
• Developing, producing and certifying reference materials
on a cooperative basis.
• Use of existing standards, accreditation and metrology
infrastructure, including reliance on widely accepted
generic standards for quality.
• Strengthening international standardisation through the use
and development of international rather than national documentary standards.
• Identification of regulatory and trade related measurement
and testing requirements at an early stage in the process.
• Ensuring that all regulatory bodies whose spheres of
responsibility encompass the need for measurements adopt
a specific policy to avoid unnecessary technical barriers to
trade based on:
– Use of the CIPM MRA and the ILAC Arrangement as the basis for acceptance of traceability and of calibration and test results.
– Consultation on issues associated with technical limits at an early stage, including assessment of the impact of advances in technology.
– Avoidance of technical limits based on ‘no detectable
limit’, ‘zero’ and ‘below detectable limits.’
Increasingly, companies and other organisations, both multinationals and small and medium enterprises, are realising added value from the national metrological investment by tapping the deep well of existing expertise through dialogue and by working in partnership with their NMI. Many NMIs provide small consultancies and advice either free of charge or on a marginal-cost basis. Innovation is enhanced, and metrological solutions are developed, through joint research projects, secondments (both formal and informal), consultancy and the provision of advice. Some countries have developed formal schemes to facilitate this process, such
as the US Advanced Technology Program operated by the National Institute of Standards and Technology (NIST) and the UK’s
Department of Trade and Industry Measurement for Innovators
scheme operated by the National Physical Laboratory (NPL).
7.0 Conclusions
Industry, trade, quality of life and innovation all depend on sound measurements whose traceability can be clearly demonstrated.
The value of good calibration practice has been recognised through documentary standards such as ISO 9001:2000 and ISO/IEC 17025:2000 and is implemented by many organisations as a matter of course. There is, however, a risk of "out of sight, out of mind." The scope of metrology's interface with the wider world is much greater than just calibrations, and "good results" often depend on an appropriately planned and executed approach to the measurement and testing regime. The challenge remains to ensure that the broader aspects are explored and appropriately addressed in a timely manner. Quality management standards do not cover the approach to the measurement and testing regime, but this is an important aspect and in many cases needs to be addressed at a policy level. Metrology and measurement needs change with time, so there is much to be gained by revisiting solutions to ensure that best use is made of metrological and technological developments.
On the one hand, innovation can drive metrological requirements, whilst on the other, metrology can provide researchers
with the necessary technology and techniques to develop innovative ideas and knowledge into practical products. There is
therefore benefit to be gained through enhanced interaction
between NMIs, industry and researchers.
NOTE: This paper was presented at the 2005 NCSLI Workshop and Symposium and won Best Paper award in the Quality
& Management category.
8.0 Acknowledgements
The authors gratefully acknowledge the financial support of the UK Department of Trade and Industry (National Measurement System Directorate).

9.0 References
[1] Magna Carta: www.bl.uk/collections/treasures/magna.html
[2] Magna Carta (English translation): www.bl.uk/collections/treasures/magnatranslation.html
[3] US Constitution: www.house.gov/Constitution/Constitution.html
[4] Directive 2004/22/EC of the European Parliament and of the Council of 31 March 2004 on measuring instruments: http://europa.eu.int/scadplus/leg/en/lvb/l21009b.htm
[5] Convention of the Metre: www.bipm.fr/en/convention/
[6] The International System of Units (SI): www.bipm.fr/en/si/
[7] "CIPM Mutual Recognition Arrangement (MRA) for national measurement standards and for calibration and measurement certificates issued by national metrology institutes": www.bipm.org/enus/8_Key_Comparisons/mra.html
[8] KPMG Consulting, "Potential economic impact of the CIPM Mutual Recognition Arrangement," Final Report, April 2002.
[9] The International Organisation of Legal Metrology (OIML): www.oiml.org/
[10] ILAC Arrangement: www.ilac.org
[11] G. Williams, University of Oxford, "The Assessment of the Economic Role of Measurements and Testing in Modern Society," Final Report, European Commission DG Research, Contract G6MA-2000-20002, July 2002.
[12] DG Trade website: http://europa.eu.int/comm/trade/issues/bilateral/countries/usa/index_en.htm
[13] A. Herrero, private communication, UK, 16 February 2004.
[14] Council Directive 96/61/EC of 24 September 1996 concerning integrated pollution prevention and control [Official Journal L 257 of 10.10.1996]: http://europa.eu.int/scadplus/leg/en/lvb/l28045.htm
[15] BS EN ISO 9001:2000, "Quality management systems – Requirements," 2000.
[16] BS EN ISO/IEC 17025:2000, "General requirements for the competence of testing and calibration laboratories," 2000.
[17] "Mutual Acceptance of Calibration Certificates between EUROMET and NIST," Final Report to the Commission of the European Communities, under contract SMT4-CT97-8001.
[18] Hogan and Hartson L.L.P., "Study of the applications and implications of an agreement between the EU and the U.S. on the mutual recognition of calibration and testing certificates," Report to the Commission of the European Communities, October 2000.
[19] F. Redgrave and A. Henson, "Improving Dialogue in Europe between the Regulatory Bodies and the National Metrology Institutes through the RegMet Project," Proc. 2003 NCSL International Workshop and Symposium, NCSLI, Boulder, CO, USA.
[20] L. Erard, "Metrological support to international trade – The METROTRADE project," Proc. 11th International Metrology Congress, Toulon, France, 20–24 October 2003.
[21] A.K. Langat and B. Rey, EC Fisheries Bulletin, vol. 12, no. 2–3, p. 11, 1999.
[22] H. Kallgren, M. Lauwaars, B. Magnusson, L. Pendrill, and P. Taylor, "Role of measurement uncertainty in conformity assessment in legal metrology and trade," Accred. Qual. Assur., vol. 8, pp. 541–547, 2003.
[23] F. Redgrave and A. Henson, "Improving Dialogue in Europe between the Regulatory Bodies and the National Metrology Institutes through the RegMet Project," Proc. 11th International Metrology Congress, Toulon, France, 20–24 October 2003.
Fiber Deflection Probe
Uncertainty Analysis for Micro Holes
Bala Muralikrishnan, Jack Stone
Abstract: We have recently reported on a new probe, the Fiber Deflection Probe (FDP), for diameter and form measurement of large aspect ratio micro-holes (100 µm nominal diameter, 5 mm deep). In this paper, we briefly review the measurement principle of the FDP. Then, we discuss different error sources and present an uncertainty budget for diameter measurements. Some error sources are specific to our fiber probe, such as imaging uncertainty, uncertainty in determining the calibration factor, and misalignment of the two optical axes. Other sources of error are common to traditional coordinate metrology, such as master ball diameter error, tilt in the hole's axis, and temperature effects. Our analysis indicates an expanded uncertainty of only 0.07 µm on diameter.
Bala Muralikrishnan
Department of Mechanical Engineering & Engineering Science
University of North Carolina at Charlotte
9201 University City Blvd, Charlotte NC 28223 USA
(Guest Researcher, National Institute of Standards and Technology)
Email: [email protected]

Jack Stone
Precision Engineering Division
National Institute of Standards and Technology
100 Bureau Drive, MS 8211, Gaithersburg, MD 20899 USA

1. Fiber Deflection Probing
The stylus and probing system of a traditional Coordinate Measuring Machine (CMM) is limited at the low end of its measurement range by large stylus diameters and high contact forces. In order to measure micro features and small holes on the order of 100 µm diameter, novel low-force probing technologies are required. Several such systems are reported in the literature and are summarized in a recent review by Weckenmann et al. [1]
At the National Institute of Standards and Technology (NIST), we have developed a new probing system for measuring holes of 100 µm diameter. We refer to this technique as Fiber Deflection Probing (FDP); it is based on imaging a thin fiber stem using simple optics. The advantages of this technique are the large aspect ratio achievable (5 mm depth in a 100 µm hole), an inexpensive probe that can be easily replaced, large over-travel protection of approximately 1 mm before probe damage, extremely low contact force (≈ 1 µN) to avoid part damage, and extremely small uncertainties (0.07 µm, k = 2, on diameter).
The measurement principle is shown in Fig. 1(a). A thin fiber (50 µm diameter, 20 mm long), with a microsphere (80 µm diameter) bonded on the end, serves as the probe. The deflections of the stem upon contacting a surface are detected by optically imaging the stem a few millimeters below the ball.

Figure 1. (a) Measurement principle. (b) Optical setup showing fiber and two axes. Key: 1. Fiber; 2. Source; 3. Collimating lens; 4. Objective; 5. Fold mirror; 6. Eyepiece; 7. Thin mirror to split pixel array; 8. Camera.

The optical setup used is shown in Fig. 1(b). The stem of this fiber
is illuminated from two orthogonal directions to detect deflections in X and Y. The resulting shadows are imaged using objectives and a camera. Upon contact with a test surface, the fiber
deflects and also bends. By determining the position of the fiber
in the deflected state and also in the free state, and using a previously determined scale factor (in units of µm/pixel) that
accounts for both the bending and deflection, we can correct the
machine’s final coordinates to determine surface coordinates.
The probing system is currently a heavy prototype that is placed
on the bed of the machine, with the probe pointing upwards. All
measurements are carried out on the Moore M48 [2] measuring
machine at NIST. The machine is used primarily as a fine three-axis positioning stage; its Movamatic probing system is removed from the ram to allow the placement of the test artifacts. A detailed description of the technique, along with validation results and small hole measurement data, can be found in
[3]. Here, we discuss different error sources involved and
provide an uncertainty budget for diameter measurements.
2. Error Sources Overview
We provide an overview of the different sources of error in
measuring artifacts such as a small hole. In subsequent sections,
we describe them in greater detail and tabulate an uncertainty
budget.
Sources of error that are specific to our fiber probe:
1. As mentioned in the previous section, we determine any
coordinate on a surface from knowledge of the fiber’s free
and deflected state and the machine coordinates at the
deflected state. There is an uncertainty in determining both
the machine’s position at the deflected state and the fiber’s
position (imaging uncertainty). This contributes to an
uncertainty in determining every coordinate in space and
consequently impacts diameter.
2. In order to determine the magnitude of the fiber’s deflection
in units of length, we require a scale factor that converts the
fiber’s deflection in pixels to micrometers. Uncertainty in
determining the scale factor will contribute to an uncertainty in part diameter, not directly but in combination with
other factors as described in Sections 3.2 and 4.
3. The two optical axes of the fiber probe are not necessarily aligned with the machine's axes. This non-orthogonality/misalignment introduces an error in diameter. We typically compensate for this term in software, but a small residual can remain.
Other general sources of error:
1. As with any traditional coordinate measurement process,
we have to calibrate the probe ball diameter (and form)
using a master ball of known diameter (and form). The
uncertainty in master ball diameter is therefore another
term in our budget.
2. Uncertainty in determining the equatorial plane of the
master ball and tilt angle of the test hole contribute to an
uncertainty in final diameter.
3. Temperature effects are not significant for dimensional
measurement of small holes, but may impact master ball
diameter measurement.
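Standard uncertainty components of the kind listed above are ultimately combined in quadrature, assuming they are uncorrelated. A minimal GUM-style sketch, using made-up component values rather than the paper's, is:

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style root-sum-of-squares combination of uncorrelated
    standard uncertainty components (all in the same units)."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative (not the paper's) component values, in micrometers:
components_um = [0.010, 0.025, 0.015]
u_c = combined_standard_uncertainty(components_um)
U = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2
print(round(u_c, 3), round(U, 3))  # 0.031 0.062
```

The paper's quoted 0.07 µm (k = 2) expanded uncertainty on diameter is the result of exactly this kind of combination over the terms tabulated in its budget.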
For purposes of this error budget, we consider a 3000 µm
nominal diameter ruby sphere as the master ball and a 100 µm
nominal diameter ceramic hole as the test artifact. All results are based on a Least Squares (LS) algorithm with 16 points sampled along the surface. The nominal probe ball diameter is assumed to be 80 µm.
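As an illustration of the fitting step, a least-squares circle fit to 16 sampled points can be sketched as follows. This uses a generic algebraic (Kåsa-style) fit and synthetic data; it is not necessarily the exact algorithm used in the paper:

```python
import numpy as np

def fit_circle_ls(x, y):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 = A*x + B*y + C in the least-squares sense;
    the center is (A/2, B/2) and the radius sqrt(C + cx^2 + cy^2).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    M = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
    cx, cy = A / 2.0, B / 2.0
    r = np.sqrt(C + cx**2 + cy**2)
    return cx, cy, r

# 16 points on a nominally 100 um diameter hole (radius 50 um),
# centered at (10, 20) um, with small synthetic probing errors:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
noise = rng.normal(scale=0.01, size=t.size)          # 10 nm rms radial noise
px = 10.0 + (50.0 + noise) * np.cos(t)
py = 20.0 + (50.0 + noise) * np.sin(t)
cx, cy, r = fit_circle_ls(px, py)
print(round(2.0 * r, 2))  # recovered diameter, close to 100.00 um
```

In the actual measurement the fitted radius must additionally be corrected by the calibrated probe ball radius, which is where the master ball terms in the budget enter.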
3. Uncertainty in Determining a Coordinate
in Space – u(coordinates)
3.1 Errors in Determining Fiber Center by Shadow Imaging
While an uncertainty budget for diameter measurements is presented later on, we discuss the uncertainty in determining the
fiber center due to imaging here. This term, along with the
machine’s positioning repeatability, is used later to determine
the uncertainty in obtaining any coordinate in space. Fig. 2(a)
shows two thin white bands of light that represent a portion of
the fiber stem viewed from two orthogonal directions. (The
glass fiber behaves as a cylindrical lens and focuses light on the
image plane to produce the bands; we monitor the position of
these bands instead of the outer boundaries of the shadow. One
band corresponds to motion along X, another to motion along Y.) These bands after image processing are shown in Fig. 2(b). We determine a center position for the stem (in pixel coordinates) in each direction by least-squares fitting (using data from an edge-finding routine applied to each row) and averaging (left and right edges for each band).

Figure 2. (a) Image as recorded by the camera. (b) Binary image after processing.

We use a 640 by 480 pixel array Charge Coupled Device (CCD) camera, where the width of each pixel is 8500 nm. With an optical magnification of 35, the pixel resolution is 8500/35 = 243 nm. Therefore, the center can lie between −122 nm and +122 nm with equal probability. Assuming a rectangular distribution, the standard uncertainty is 122/√3 = 70 nm. This is the uncertainty in determining the center using just one row of pixels. We average over 400 rows (out of the total possible 480 rows; a few are discarded because of outliers) to reduce this uncertainty. In the absence of noise, a slight tilting of the fiber relative to the field of view is needed to average over the quantization error of discrete pixels; this is a very standard imaging technique that has close analogs in other fields such as electronics [4]. The mathematical details differ slightly from one implementation of the technique to the next, depending on the averaging algorithm employed (for us, the least-squares fit). The reduction in uncertainty due to averaging is a complex function of the angle of the fiber relative to the pixel array. If the fiber is misaligned relative to the pixel array so that it crosses more than three pixels in the horizontal direction, then the error due to pixel resolution is reduced below ±0.04 pixels (±10 nm). Assuming a rectangular distribution of errors, this ±10 nm range of possible errors corresponds to a standard uncertainty of 6 nm. For some angles the uncertainty might be considerably smaller, but as long as the angle is large enough that at least three horizontal pixels are crossed, the uncertainty will not exceed this value. Also, because we measure both the left and right edges of the band, the uncertainty is reduced to a value on the order of (6/√2) nm = 4 nm. Thus we might hope to see roughly a 4 nm uncertainty (equivalent to about 0.015 pixels at a nominal scale factor of 300 nm/pixel) in detecting the position of the probe in space under ideal conditions. We have carried out measurements indicating that this small uncertainty for the imaging system is probably attainable, but under realistic conditions our uncertainties are much larger, with the imaging uncertainty contributing negligibly to the overall uncertainty budget. Although the 4 nm uncertainty might be improved further by sophisticated sub-pixel interpolation, there is no practical advantage to doing so.
40
|
MEASURE
The coordinate of any point on the surface is determined from
knowledge of the fiber center in both the free state and in the
deflected state. We know the machine’s coordinates at the
deflected state and the magnitude of the fiber’s deflection. From
these, we can infer the coordinates of the center of the probe tip
when it is in contact with the surface. Thus, the final coordinate
(X, Y) on the surface after correcting for the fiber’s deflection
is given by:
X = (Px − Pxo)·SFx + Xo ,   (1)

Y = (Py − Pyo)·SFy + Yo ,   (2)
where (Xo, Yo) are the CMM readings in micrometers at the
deflected state of the fiber, (Px,Py) are the fiber centers in pixels
at the deflected position, (Pxo, Pyo) are the fiber centers in pixels
at the free undeflected state and SFx and SFy are the scale (or
calibration) factors in µm/pixel along X and Y. The uncertainty
in any coordinate (X,Y), given by (u(X), u(Y)), is therefore a
function of uncertainties in each of the quantities on the right
hand side of Eqs. (1) and (2) and is given by
u²(X) = (∂X/∂Px)²·u²(Px) + (∂X/∂Pxo)²·u²(Pxo) + (∂X/∂SFx)²·u²(SFx) + (∂X/∂Xo)²·u²(Xo) ,   (3)

u²(Y) = (∂Y/∂Py)²·u²(Py) + (∂Y/∂Pyo)²·u²(Pyo) + (∂Y/∂SFy)²·u²(SFy) + (∂Y/∂Yo)²·u²(Yo) ,   (4)
where the coefficients are the partial derivatives as described in
the US Guide to the Expression of Uncertainty in Measurement.[5]
Before we proceed with the evaluation of the different uncertainties in the right hand side of Eqs. (3) and (4), we make the
following observations/assumptions:
• First, we assume that the uncertainties are not directionally
dependent. Therefore, u(Px) = u(Py), u(Pxo) = u(Pyo) and
u(Xo) = u(Yo). This simplifies our discussion to only terms
on the right hand side of Eq. 3.
• Second, we observe that the uncertainties in scale factors,
u(SFx) and u(SFy), have only a very small effect on typical
measurements, where the measurements are performed at
nearly the same deflection as used when the probe is calibrated. If the measured scale factor is smaller than the true
value, the master ball diameter appears smaller, resulting in
a smaller value for the fiber probe ball diameter. Because
we use the same scale factor for test artifact (small hole)
measurement, the smaller scale factor, combined with a
smaller probe ball diameter produces the correct hole diameter, in essence canceling out the effect of u(SFx) and
u(SFy). Some error will remain because the probe deflection is not exactly the same for calibration and for test artifact measurement, but typically this is a small error. We
discuss this source in Section 4.
• Third, determining the free state of the fiber is not critical
because this term only serves to translate the center coordinates and does not influence diameter or form. Therefore,
we can ignore the free state in all computations and simply report a coordinate as X = Px·SFx + Xo.
From this, the uncertainty in determining a coordinate can be simplified as:

u²(X) = SFx²·u²(Px) + u²(Xo) .   (5)

Using SFx = 300 nm/pixel (nominal scale factor value), ∂X/∂Xo = 1, u(Px) = 0.015 pixels (from the previous section), and u(Xo) = 35 nm, the combined standard uncertainty in determining the X (and Y) coordinate of any point in space using the
fiber probing technique is 35 nm. The uncertainty is dominated
by u(Xo), and the value given here for u(Xo) was determined
experimentally as the standard deviation of repeated measurements of a point on a surface. This lack of repeatability is large
relative to other sources of uncertainty. The source of repeatability errors is still under investigation, but it is likely that they
arise primarily from CMM positioning errors.
Note that we do not include probe non-orthogonality in Eq.
(1) and (2) because we compensate for this error in software
(see Section 9). We also do not treat non-orthogonality in CMM
axes separately. The Moore M48 is well characterized and error
mapped. Therefore, we do not separately treat its errors. Instead
we lump motion related errors into one term: its single point
repeatability of 35 nm. For a more detailed discussion of error
sources and uncertainty budgets for the NIST Moore M48
CMM, we refer to [6].
3.3 Contribution of Uncertainty in Coordinates
to Uncertainty in Diameter
The contribution of this term to diameter uncertainty is determined using Monte Carlo Simulation (MCS).[7] With 35 nm
standard deviation Gaussian noise, and using 16 sampling
points with a LS fitting routine, we determine the standard
uncertainty in diameter to be 18 nm. This term is the largest
contributor to the overall uncertainty budget, and affects every
coordinate measured using the fiber probe. Therefore this term
affects both the calibration and test artifact measurement.
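The Monte Carlo estimate described above can be sketched in a few lines. The paper specifies only the 35 nm Gaussian coordinate noise, 16 sampling points, and an LS fitting routine; the algebraic (Kasa) circle fit, trial count, and seed below are our own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitted_diameter(x, y):
    # Algebraic least-squares (Kasa) circle fit:
    # x^2 + y^2 = 2*a*x + 2*b*y + c, with radius R = sqrt(a^2 + b^2 + c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return 2.0 * np.sqrt(a * a + b * b + c)

R = 50_000.0                               # 100 um nominal hole -> 50 um radius, in nm
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
sigma = 35.0                               # per-coordinate standard uncertainty, nm

diameters = [
    fitted_diameter(R * np.cos(theta) + rng.normal(0.0, sigma, 16),
                    R * np.sin(theta) + rng.normal(0.0, sigma, 16))
    for _ in range(20_000)
]
print(np.std(diameters))                   # ~18 nm, as quoted in the text
```

With 16 equally spaced points and isotropic noise, the radius estimate behaves like an average of 16 radial residuals, so the diameter spread comes out near 2 × 35/√16 ≈ 18 nm.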
4. Uncertainty in Scale Factor Combined
with Unequal Fiber Deflection – u(SF)
As mentioned in Section 3.2, the uncertainty in the scale factor
will not directly impact the final diameter if we use the same
scale factor value for both the calibration and test artifact measurement. This is true under the circumstance that the fiber
deflects by the same nominal amount at all angular positions
(sampling locations) of both the master ball and test artifact. In
reality, the fiber will not deflect by identical amounts at all
angular positions of any artifact because of centering and part
form error. Assuming a 2 µm centering error in the test artifact
(the master ball is assumed to be well centered), a nominal scale
factor of 300 nm/pixel, 0.5 nm/pixel standard uncertainty in the
scale factor, and 15 µm nominal deflection, the uncertainty in
diameter is 1 nm. Also, the fiber will not necessarily deflect by
the same nominal amount for both the calibration and test artifact measurement. Assuming typical nominal deflections are
held to within a 2 µm range between the calibration and test
artifact measurement, the uncertainty in diameter is 7 nm.
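The 7 nm figure is consistent with a first-order sketch (our own back-of-envelope, assuming the scale-factor error acts only on the portion of the deflection correction that differs between calibration and test, at two opposite contact points of a diameter):

```python
# First-order sketch of the 7 nm estimate above (assumed model: the
# scale-factor error goes uncompensated only over the 2 um of deflection
# mismatch between calibration and test measurement).
SF_nominal = 300.0                  # nm/pixel nominal scale factor
u_SF = 0.5                          # nm/pixel scale-factor standard uncertainty
dP_nm = 2000.0                      # nominal-deflection mismatch, nm
dP_px = dP_nm / SF_nominal          # mismatch expressed in pixels (~6.7)
err_diameter = 2.0 * dP_px * u_SF   # two opposite contact points
print(err_diameter)                 # ~6.7 nm, consistent with the 7 nm quoted
```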
5. Master Ball Diameter Uncertainty – u(master)
For purposes of calibrating the diameter (and form) of the
probe ball, we use a 3 mm nominal diameter ruby sphere
mounted on a stem (a CMM stylus), as the master ball. The
diameter of this master ball is determined to be 3000.79 µm
with a standard uncertainty of 5 nm using interferometry at
NIST. The master ball diameter uncertainty was determined by measuring two-point diameters at different locations, and therefore it samples some form error as well.
6. Uncertainty in Determining Equatorial Plane
of Master Ball – u(height)
Determining the equatorial plane of a sphere is important
during calibration to obtain an accurate diameter of the probe
ball. The equatorial plane is found iteratively as follows. We first
determine the approximate center of the circle at some arbitrary
plane near the equatorial plane. Using this center, we determine
the location of the pole point along Z, and then evaluate the new
location of the equatorial plane from knowledge of the ball’s
diameter. We repeat this process several times to refine the location of the equatorial plane. The error in determining the Z location of the equator is ±1.5 µm from this method. The standard
uncertainty in determining calibration artifact diameter is therefore 1 nm.
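The 1 nm figure follows from chord geometry: measuring the circle a height dz away from the equator shortens the measured diameter by roughly dz²/R. A quick check of our own, using the values quoted above:

```python
import math

# Chord geometry behind the ~1 nm figure: measuring dz away from the
# equator of the 3 mm master ball shortens the measured diameter.
R_um, dz_um = 1500.0, 1.5                     # ball radius and Z error, um
chord_um = 2.0 * math.sqrt(R_um**2 - dz_um**2)
err_nm = (2.0 * R_um - chord_um) * 1000.0
print(err_nm)  # ~1.5 nm worst case, hence ~1 nm standard uncertainty
```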
7. Temperature Effects – u(temperature)
Temperature effects are typically not significant for dimensional
measurement of small objects. If temperature can be controlled
to within ±0.05 °C, the change in diameter is 0.8 nm for the
master ball. The radial expansion of the probe tip and the test
hole are negligible. Therefore, assuming a rectangular distribution, the standard uncertainty in determining master ball diameter because of non-standard temperature is 0.5 nm.
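These numbers are consistent with a simple linear-expansion estimate. The ruby CTE used below is our assumption (a typical literature-class value), not a figure from the text:

```python
import math

# Back-of-envelope for the thermal term; the ruby CTE is our assumption.
CTE_ruby = 5.4e-6      # 1/degC, assumed coefficient of thermal expansion
D_nm = 3.0e6           # 3 mm master ball diameter, in nm
dT = 0.05              # degC temperature control band
delta = D_nm * CTE_ruby * dT       # ~0.8 nm worst-case diameter change
u_temp = delta / math.sqrt(3.0)    # rectangular distribution -> ~0.5 nm
print(delta, u_temp)
```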
Figure 3. (a) Axis misalignment schematic. Optical axes 1 and 2 are misaligned by θ1 and θ2 with the corresponding machine axis. (b) Readings of the active and non-active axes as the probe is cycled (note the different scales for the active and non-active axis), moving first toward a surface and then back out from the surface. Positive θ is as shown in the figure. The sign conventions shown for each optical axis refer to pixel coordinates; that is, deflection of the fiber to the right of optical axis 1 is considered positive for that axis, and deflection to the left of optical axis 2 is considered positive.
8. Uncertainty in Aligning Hole Axis
with Machine’s Z axis – u(tilt)
The tilt angle of a hole’s axis affects final diameter values.
Assuming tilt can be controlled to within ±0.5°, the standard uncertainty in diameter is 1 nm.
9. Uncertainty in Aligning Optical Axis
with Machine’s X & Y axes – u(AM)
9.1 Introduction to Axis-misalignment
The two optical axes of the probe measurement system are not
necessarily aligned with the machine’s axes. This misalignment
introduces an error in diameter (and form), which if uncompensated can be a significant portion of the total uncertainty
budget. We discuss this error source and our approach to compensating it. A residual error will remain; it is itemized in the
uncertainty budget. It is worthwhile to emphasize that these
alignment errors, which are usually of only minor importance
for measurements of typical engineering metrology artifacts,
take on much greater significance when probe deflections are
comparable in magnitude to machine motions, such as, for
example, when measuring the inside diameter of a 100 µm hole
with an 80 µm diameter probe.
Figure 3(a) shows a schematic of the measurement system.
The fiber probe stem (top view) is shown at the origin, with the
optical axis 1 misaligned with the machine’s Y axis by θ1, and
the optical axis 2 misaligned with the machine’s X axis by θ2.
When the machine deflects the probe along the X axis, optical
axis 1 (which is aligned with the Y axis) senses the displacement
and is therefore the X-axis sensor. The ‘+ve’ and ‘–ve’ signs
show the sign convention in pixel coordinates as explained in
the figure caption.
In a typical measurement process, the test part (either the
master ball or the hole) is brought in contact with the fiber at a
certain angle (α) and further translated by P along the same
direction. If the two optical axes are perfectly aligned, axis 1
(that is, the X-axis sensor) senses a displacement of Pcosα,
while axis 2 senses a displacement of Psinα (the sign conventions for the two optical axes are shown in Fig. 3). These displacements (Pcosα, Psinα) are then corrected from the machine
coordinates at that location to determine the coordinates on the
surface. However, if the optical axes are misaligned as shown in Fig. 3(a), axis 1 senses a displacement of P cos(α − θ1), while axis 2 senses a displacement of P sin(α − θ2). The displacement corrections are therefore incorrect, resulting in errors in part diameter
and form.
Figure 3(b) shows experimental evidence of the presence of
this error. As the part is brought in contact with the fiber along
the machine’s X axis and displaced back and forth in steps of
1.5 µm over a travel of 15 µm (active axis – optical axis 1 readings), optical axis 2 (non-active axis) records a motion of
approximately 0.4 µm, indicating that optical axis 2 is not
aligned with the machine’s X axis.
9.2 Understanding its Impact
Axis misalignment can potentially be a large component of the
overall error budget, if left uncompensated. In order to understand its impact, we consider two cases. If θ1 = θ2, the resulting
coordinates after displacement correction are rotated to a new
point, either inside or outside the true surface. Thus, the impact
is only on diameter, not on form. (Form errors will occur, however, if there are variations in the magnitude of the probe displacement from point to point.) If θ1 ≠ θ2, the resulting coordinates after displacement correction are not only rotated but also stretched and compressed along two orthogonal axes (causing an apparent ovality), resulting in errors in diameter and form.
Most of the misalignment induced errors cancel between the
master ball and test hole measurement. There is however a
residue, which is not insignificant, as shown here. Although
probe errors that are strictly along the radial direction are independent of the diameter of the object being measured, errors
along a direction tangent to the measurement direction have
much greater influence on the calculated diameter when probe
deflections become comparable to machine motions; in a diameter measurement, the tangential errors represent second-order
cosine errors of negligible magnitude when measuring a circle
of large radius but become much larger when measuring a very
small circle. This effect is particularly important when the calibration artifact is macroscopic and the test artifact, a hole, is
only slightly larger than the probe diameter. For a 5° misalignment angle in one axis and no misalignment in the other, the error
in diameter when measuring a 3 mm ball (80 µm probe tip with
15 µm nominal deflection) is – 57 nm (diameter appears to be
smaller for outer diameter features). For the same conditions,
the error in diameter for a 100 µm hole is 121 nm (diameter
appears to be larger for inner diameter features). Thus, if the
3 mm ball measurement is used to calibrate the probe tip diameter prior to a measurement of the diameter of the 100 µm hole,
the net diameter error is 64 nm. Because the axes are not
orthogonal, there is also a residual out-of-roundness error of
approximately 86 nm. For a 0.5° misalignment in one axis, these
numbers are much smaller. The residual errors in diameter and
out-of-roundness are only 1 nm. Thus, if we can estimate axis
misalignment angles to within 0.5°, our compensation will significantly reduce the contribution of this term to the overall
budget.
Typically observed misalignment angles are between – 5° and
+ 5° in both axes (note that while these angles may seem large,
these angles represent a combination of physical and optical
misalignment). It is therefore necessary to compensate diameter and form for axis misalignment. We discuss next a procedure
to evaluate the magnitude of this misalignment. Then, we
discuss our approach to correcting for it.
9.3 Estimating Axis Misalignment Angles
Our procedure for estimating axis misalignment angles involves
monitoring both optical axes while deflecting the fiber along
two of the machine’s principal directions. As mentioned earlier,
if the optical axes are well aligned with the machine’s axes, and
the fiber is deflected along the machine’s X axis, optical axis 1
senses all of the deflection while optical axis 2 senses no deflection at all. The same is true for deflections along the machine’s
Y axis, where optical axis 1 senses no deflection and optical axis
2 senses the complete deflection.
If however, the optical axes are aligned as shown in Fig. 3,
then we follow the procedure outlined here to estimate θ1 and
θ2. First, we let the test part contact (at point O, the origin) and
deflect the probe (to point A) as the part moves along the
machine’s positive X direction. Let the deflection of the probe,
OA, be P. Let the magnitude of the observed probe deflections
by optical axis 1 and optical axis 2 be XA and YA pixels. Also,
let the scale factors in X and Y be Sx and Sy, expressed in units
of µm/pixel if the deflection P is measured in micrometers.
Then,

XA·Sx = P cos θ1 ,   YA·Sy = P sin θ2 .   (6)
We then contact the probe and displace it to point B along the
positive Y direction, again by P. Let the magnitude of the
observed deflection seen by optical axis 1 and optical axis 2 be
XB pixels and YB pixels. Then,
XB·Sx = P sin θ1 ,   YB·Sy = P cos θ2 .   (7)
From Eqs. 6 and 7, we get:

θ1 = tan⁻¹(XB/XA) ,   θ2 = tan⁻¹(YA/YB) .   (8)
Sx and Sy can also be obtained from these equations. Similar
equations can be written for deflections in the opposite directions yielding another set of values for θ1, θ2. The results can
then be averaged to obtain axis misalignment angles.
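With the magnitudes defined above, the estimation reduces to θ1 = tan⁻¹(XB/XA) and θ2 = tan⁻¹(YA/YB). A minimal sketch, with hypothetical pixel readings chosen purely for illustration:

```python
import math

def misalignment_angles(XA, YA, XB, YB):
    """Estimate theta1, theta2 (radians) from the magnitudes of the sensor
    readings for equal-magnitude deflections along machine X (XA, YA)
    and along machine Y (XB, YB)."""
    return math.atan(XB / XA), math.atan(YA / YB)

# Hypothetical readings (pixels): deflection along X, then along Y
t1, t2 = misalignment_angles(XA=49.8, YA=1.7, XB=4.4, YB=49.9)
print(math.degrees(t1), math.degrees(t2))  # ~5.0 and ~2.0 degrees
```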
9.4 Compensating Axis Misalignment Error
After the angles are determined, we can estimate the magnitude
of the correction as described here. Let the fiber be deflected by
some distance at an arbitrary angle α. Let a and b be the
observed readings (in pixels) of optical axis 1 and optical axis
2 respectively. Let u and v be the true deflections along the X
and Y directions. Then u and v can be determined from the following system of equations:
a·Sx = u cos θ1 + v sin θ1 ,
b·Sy = u sin θ2 + v cos θ2 .   (9)
Thus, from the observed deflection (a, b) at every angle α, we
can determine the true deflection (u, v) and compensate for axis
misalignment.
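The compensation amounts to solving a 2 × 2 linear system for (u, v) from the observed readings (a, b). The sketch below round-trips a case with made-up angles, scale factors, and deflection (all hypothetical illustration values), using the sensing geometry described in this section:

```python
import numpy as np

def true_deflection(a, b, Sx, Sy, th1, th2):
    """Solve for the true deflections (u, v) in um, given sensor
    readings a, b (pixels) and misalignment angles (radians)."""
    M = np.array([[np.cos(th1), np.sin(th1)],
                  [np.sin(th2), np.cos(th2)]])
    return np.linalg.solve(M, np.array([a * Sx, b * Sy]))

# Round-trip check with hypothetical values: deflect P = 15 um at alpha = 30 deg
th1, th2 = np.radians(5.0), np.radians(-2.0)
Sx = Sy = 0.3                                   # um/pixel
P, alpha = 15.0, np.radians(30.0)
u0, v0 = P * np.cos(alpha), P * np.sin(alpha)   # true deflections
a = (u0 * np.cos(th1) + v0 * np.sin(th1)) / Sx  # what optical axis 1 reads
b = (u0 * np.sin(th2) + v0 * np.cos(th2)) / Sy  # what optical axis 2 reads
u, v = true_deflection(a, b, Sx, Sy, th1, th2)
print(u, v)                                     # recovers (u0, v0)
```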
10. Other Miscellaneous Errors
Hertzian deformations of the probe tip and workpiece are negligible because measurement forces are only 0.16 µN when the
probe is deflected by 20 µm. We have therefore not discussed
this error source. A complete accounting of errors would also
include a component due to incomplete sampling of the part
form errors; for purposes of our discussion here we ignore this
potential complication.
The emphasis in this paper has been on the fiber probe, and
therefore we have not explicitly discussed CMM scale and positioning errors. For our M48 CMM, these errors (interferometric scale related and other machine errors) are primarily
manifested as part of the 35 nm repeatability discussed previously. Previous studies of the M48 show that other positioning
errors that would affect these small-scale diameter measurements (errors such as hysteresis or, more likely on the M48,
errors of short spatial period associated with the roller bearings)
might contribute as much as 20 nm uncertainty to a two-point
diameter measurement at a particular spot on the table. This
uncertainty should be reduced to 14 nm for a four-point diameter measurement that samples independent errors associated
with measurements along the x and y axes. For two-artifact
measurement (master ball and test hole), this translates to
an effective uncertainty of about 20 nm in diameter.
Finally, there are errors associated with dust settling on
either the test part or the master ball. Dust is a persistent
problem when using low-force probing outside of a clean
room. Most often, a particle of dust will produce a large,
obvious error, and can be corrected by cleaning, but if a very
small piece of dust produced a radial error under 50 nm, this
error might go undetected. However, it is unlikely that this
would occur at more than 1 of the 16 measurement points,
and therefore the resulting diameter error would be less
than 3 nm.
Experimentally, we have determined the standard uncertainty in diameter to be of the order of 20 nm. This repeatability samples the different error sources we have outlined
in previous sections. It is however possible that there are
other sources we have not sampled, such as those described
in this section and any other unknown sources. To account
for these, we itemize a 20 nm uncertainty in diameter in our
budget.
11. Summary: Overall Uncertainty Budget
Finally, we tabulate in Table 1 the contributions of the different sources towards the uncertainty in diameter for a 100
µm hole. From Table 1, the combined standard uncertainty
in diameter is 34 nm. Thus, the expanded uncertainty is 0.07
µm (k = 2) on diameter.
Note that the uncertainty in diameter will be smaller than
the uncertainty in determining a position (35 nm) because
of the averaging involved. We sample 16 points along the
circumference of a circle. The uncertainty in each coordinate
is (±35 nm, ±35 nm). As explained in section 3.3, the uncertainty in diameter (based on 16 sampling points, LS best fit)
is reduced to only 18 nm. Adding in other terms as shown in
the uncertainty budget in Table 1, the final combined standard uncertainty in diameter is 34 nm.
12. Conclusions
We have discussed different error sources involved in measuring the diameter of 100 µm nominal diameter holes using a new
fiber deflection probe for CMMs. The probing uncertainty,
which is the imaging term, is of the order of 4 nm. Experimentally determined single point repeatability using the fiber probe on a CMM is approximately 35 nm. A substantial portion of this
rather large difference is attributable to the machine’s positioning repeatability. However, we are still investigating the presence of any other systematic effects that might contribute to this
loss in performance. Overall, our analysis indicates expanded
uncertainty of only 0.07 µm (k = 2) on diameter. This value is
amongst the smallest reported uncertainties in the literature for
micro holes measured using a CMM. Our current focus is on
expanding the technique to 3D and profile measurements and
in understanding the error sources involved therein.
Error Source        Description                                                      Uncertainty (nm)
---------------------------------------------------------------------------------------------------
ucal(Coordinates)   Uncertainty in probe ball diameter due to uncertainty in                18
                    determining coordinates (X, Y) of probing points. This is
                    primarily because of imaging uncertainty and machine
                    repeatability.
u(Coordinates)      Same as above, but on the test artifact.                                18
u(SF)               Uncertainty in scale factor combined with centering error.               1
u(SF)               Uncertainty in scale factor combined with unequal nominal                7
                    deflections between master ball and test artifact measurement.
ucal(Height)        Error in determining the equatorial plane (Z height) on the              1
                    master ball.
ucal(Master)        Uncertainty in master ball diameter and form.                            5
ucal(T)             Uncertainty in diameter due to nonstandard temperature. This             1
                    affects the calibration sphere primarily because of its larger
                    nominal diameter; the test artifact is much smaller and its
                    temperature effects are ignored.
u(Tilt)             Error in determining the tilt angle on the test artifact.                1
u(AM)               Probe axis misalignment introduces an error in diameter, some            1
                    of which cancels out when measuring the cal-ball and later the
                    test artifact. Also, most of this error is software corrected;
                    the residual error is tabulated here.
u(Other Sources)    Contribution from machine positioning and other sources.                20

Table 1. Error sources contributing to uncertainty in diameter. Expanded uncertainty is 0.07 µm (k = 2) on diameter. Note that the subscript 'cal' indicates the calibration process.
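The root-sum-square combination of the Table 1 contributions can be checked directly:

```python
import math

# Root-sum-square of the Table 1 contributions (values in nm)
contributions = {
    "ucal(Coordinates)": 18, "u(Coordinates)": 18,
    "u(SF), centering": 1, "u(SF), unequal deflection": 7,
    "ucal(Height)": 1, "ucal(Master)": 5, "ucal(T)": 1,
    "u(Tilt)": 1, "u(AM)": 1, "u(Other Sources)": 20,
}
u_c = math.sqrt(sum(v**2 for v in contributions.values()))
print(round(u_c), round(2 * u_c))  # 34 nm combined; ~67 nm, i.e. 0.07 um expanded (k = 2)
```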
13. References
[1] A. Weckenmann, T. Estler, G. Peggs, and D. McMurtry, “Probing
Systems in Dimensional Metrology,” Annals of the CIRP, vol. 53,
pp. 1-28, 2004.
[2] Commercial equipment and materials are identified in order to
adequately specify certain procedures. In no case does such identification imply recommendation or endorsement by the National
Institute of Standards and Technology, nor does it imply that the
materials or equipment identified are necessarily the best available
for the purpose.
[3] B. Muralikrishnan, J. Stone, S. Vemuri, C. Sahay, A. Potluri, and
J. Stoup, “Fiber Deflection Probe for Small Hole Measurements,”
Proc. of the ASPE Annual Meeting, pp. 24-27, 2004.
[4] M.F. Wagdy, “Effects of Various Dither Forms on Quantization
Errors of Ideal A/D Converters,” IEEE Trans. Instrum. Meas., vol.
38, pp. 850-855, 1989.
[5] “US Guide to the Expression of Uncertainty in Measurement,”
ANSI/NCSL Z540-2-1997.
[6] John Stoup and Ted Doiron, “Measurement of Large Silicon
Spheres using the NIST M48 Coordinate Measuring Machine,”
Proc. of the SPIE, vol. 5190, pp. 277-288, 2003.
[7] M.G. Cox, M.P. Dainton and P.M. Harris, “Software Support for
Metrology, Best Practice Guide No. 6: Uncertainty and Statistical
Modeling,” ISSN 1471–4124, NPL, UK, March 2001.
Reduction of Thermal Gradients
by Modifications of a
Temperature Controlled CMM Lab
Hy D. Tran, Orlando C. Espinosa, and James F. Kwak
Abstract: The Sandia Primary Standards Lab Coordinate Measuring Machine Lab (CMM Lab) was built in 1994. Its
temperature controls were designed to be state of the art at 20.00 ± 0.01 °C and relative humidity 36 ± 4 %. Further
evaluation demonstrated that while the control achieved the desired average air temperature stability of 10 mK at a
single point, the CMM Lab equipment had vertical temperature gradients on the order of 500 mK. We have made inexpensive minor modifications to the lab in an effort to reduce thermal gradients. These modifications include partitioning temperature sensitive equipment from operators and other heat sources; increasing local and internal air circulation
at sensitive equipment with fans; and concentrating the flow of this circulated air into the HVAC control sensor. We
report on the performance improvements of these modifications on machine temperature gradients during normal operation, and on the robustness of the improved system.
Hy D. Tran
Orlando C. Espinosa
James F. Kwak
Primary Standards Lab
Sandia National Laboratories 1
P.O. Box 5800, MS-0665
Albuquerque, NM 87185-0665
e-mail: [email protected]

1. Introduction

Temperature control is one of the keys in achieving accuracy in dimensional metrology [1]. The dimension of mechanical objects is referenced to a 20 °C
measurement. Measuring an object at a
different temperature than 20 °C, and
correcting for thermal expansion introduces uncertainty due to uncertainty in
the coefficient of thermal expansion
(CTE)[2]. In addition, temperature gradients in the measurement apparatus can
introduce unknown distortions in the
measurement equipment. Consider a
100 mm part made from a 300-series
stainless steel. The CTE could be as low
as 14 ppm/°C or as high as 19 ppm/°C.
If the measurement is made at 21 °C, the
correction for CTE would range from
–1.9 µm to –1.4 µm. This situation is
exacerbated if the measurement equipment is made from a variety of materials.
A typical example is a CMM with granite
and aluminum ways, but with glass
scales. In order to minimize measurement uncertainties with dimensional measurements and calibrations, it is therefore important to control the temperature of the measurement environment to 20 °C.

The Sandia Primary Standards Lab building was built in 1994. Included in that building is a lab designed to house a Moore M48 CMM (the CMM Lab). The CMM Lab has separate air conditioning equipment, designed to maintain the room within ±0.010 °C of its setpoint (nominally 20.00 °C) and to maintain the relative humidity at 36 ± 4 %. Evaluations of the CMM Lab showed that the air conditioning equipment maintained the air temperature at a single point within specification; however, the air at various points around the CMM exhibited variations of as much as 0.5 °C from each other [3].

Our approach to reducing the temperature gradients in the lab and on the CMM is twofold: (1) partition the CMM from the control electronics and the operator; and (2) increase air circulation around (or in) the CMM.

1 Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

2. Instrumentation

Six YSI 46043 series thermistors (±0.05 °C interchangeability; 2252 Ω at 25 °C; low drift) were used, with a bird cage air probe style (package 050) having a 1 second time constant in flowing air [4]. They were connected to an Instrulab 3312A system for data logging. The Instrulab was set up using its default Steinhart and Hart coefficients, and an offset programmed for each different thermistor. The reported temperature followed this equation:

T = 1 / [A + B·ln(R) + C·(ln(R))³] − 273.15 + Offset ,   (1)

where A, B, and C are the Steinhart and Hart coefficients, 1.4733 × 10⁻³, 2.372 × 10⁻⁴, and 1.074 × 10⁻⁷ respectively; R is the thermistor resistance; and T is the reported temperature in degrees Celsius [5]. The individual offsets were measured by placing the thermistors in an aluminum block, together with a calibrated standard platinum resistance thermometer (SPRT). The block is then placed on a large cast iron surface plate. It is assumed that after some period of time, all temperatures within the aluminum block are the same within a standard uncertainty of ±0.003 °C, and the offsets are calculated to reflect this.

Fig. 1 shows a photograph of the aluminum fixture used to calibrate the thermistors against the SPRT. The thermistors exhibit a resolution of 0.001 °C, and the measurement uncertainty (k = 2) is ±0.008 °C between 19 °C and 21 °C. This measurement uncertainty includes the uncertainty in the reference SPRT and the readout electronics. It should be noted that while the measurement uncertainty is an order of magnitude greater than the resolution, when measuring the same object the thermistors will all read within ±0.001 °C of each other.

Figure 1. Calibration fixture used for the thermistors. The aluminum block helps equilibrate all the thermistors to the same temperature, where they are calibrated to the SPRT.

When measuring air temperature or CMM temperature, the thermistors are fixtured with duct tape. If the bead is exposed, a piece of aluminum foil is used to shield the sensor from lighting.

3. Temperature Control in the CMM Lab

The layout of the CMM Lab, as originally described in [3], is sketched in Fig. 2. As can be seen, the control electronics and the operator are in the same room as the CMM and the artifact to be measured. While it is desirable to place the operator and servo electronics in another room, this is not always feasible. The laboratory air conditioning controls both temperature and humidity. Air enters the lab from registers on the sides of the wall, both at foot and at waist level; the return is through a perforated false ceiling. The sensor for the HVAC controls is hung from a pipe tree approximately in the center of the room.

Figure 2. Layout of the CMM Lab. The interferometer wavelength tracker is not illustrated; it is mounted at the rear of the CMM. Note that the electronics are adjacent to the CMM. It is impractical to move the electronics to a different room.
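For reference, the thermistor conversion of Eq. (1) can be sketched in a few lines, using the default coefficients quoted in Section 2 and evaluating at the thermistors' nominal 2252 Ω point (the offset is set to zero here):

```python
import math

# Steinhart-Hart coefficients (Instrulab defaults quoted in Section 2)
A, B, C = 1.4733e-3, 2.372e-4, 1.074e-7

def reported_temperature(R_ohm, offset=0.0):
    """Eq. (1): thermistor resistance (ohms) -> temperature (deg C)."""
    lnR = math.log(R_ohm)
    return 1.0 / (A + B * lnR + C * lnR**3) - 273.15 + offset

print(reported_temperature(2252.0))  # ~25 degC at the nominal 2252-ohm point
```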
Figure 3. HVAC control system for the CMM Lab. The air is fed into the room via registers at the floor and at waist height. The air flows up into a plenum, where it is chilled, then reheated to the temperature setpoint. Humidity is also added to reach the setpoint level.
Fig. 3 shows a schematic of the air conditioning control system. The air conditioning system incorporates both temperature and humidity control. A portion of the air in the room is discharged to the outside, with makeup air added. The air is chilled, then reheated as needed prior to being blown into the room via the inlet registers sketched in Fig. 2. Humidity is also removed or added as needed. By pre-chilling and reheating the air, a greater degree of temperature control can be achieved. The HVAC system uses a commercial proportional-integral-derivative (PID) controller operated by the laboratory's facilities and utilities department.
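A discrete PID loop of the kind used for the reheat control can be sketched as follows (a generic illustration only; the gains, sample time, and setpoint are arbitrary values, not the facilities department's actual tuning):

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    Gains below are arbitrary illustration values."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, measured):
        err = self.setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example: drive reheat power toward a 20.0 deg C setpoint.
pid = PID(kp=50.0, ki=0.5, kd=10.0, setpoint=20.0, dt=1.0)
print(pid.update(19.8))  # positive output: room is cold, so add reheat
```

Poorly chosen gains in such a loop produce exactly the underdamped oscillation described below; retuning Kp, Ki, and Kd changes the damping of the response.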
The temperature control performance of the system is very good (measured near the air temperature and humidity feedback sensor). Fig. 4 shows a typical plot of temperature taken over a span of two weeks. While the temperature exhibits a short-term underdamped response (with a period of approximately two hours), the overall average over the workday or several workdays generally meets the desired ±0.01 °C stability.

In spite of this very good temperature control, there are temperature gradients in the lab, due to the basic design of the room (air flow coming in from registers at the floor and sides; air return at the ceiling). Fig. 5 is reproduced from [3], showing the air temperature at the measuring surface of the CMM, at the laser wavelength tracker, and on top of the Z-axis ram. There is a difference of nearly 0.6 °C between the top of the CMM and the typical object to be measured at the bottom of the CMM workspace (for example, a step gage on the CMM table).

Figure 4. Temperature stability in the CMM Lab, measured next to the HVAC sensor (September 25 to October 9, 2004).

Figure 5. Temperature at various locations in the CMM Lab. Note the temperature difference between the top of the CMM (ram) and the temperature at the table (step gage on table). (From reference [3].) This CMM has a wavelength tracker for its interferometers (labeled "Tracker Air") attached to the rear of the main casting. This is typical performance in the lab, with the most recent measurements in September 2004.
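To put a gradient of this size in perspective, the standard thermal-expansion relation ΔL = L·α·ΔT estimates the length error it could cause. A sketch, assuming a 500 mm steel step gage and a typical steel expansion coefficient of 11.5 × 10⁻⁶/°C (both illustrative values, not taken from the paper):

```python
def expansion_error_um(length_mm, alpha_per_c, delta_t_c):
    """Length change (micrometres) of an artifact for a temperature
    offset delta_t_c from the 20 deg C reference: dL = L * alpha * dT.
    The inputs below are illustrative, not values from the paper."""
    return length_mm * alpha_per_c * delta_t_c * 1000.0  # mm -> um

# A 500 mm steel gage (alpha ~ 11.5e-6 per deg C) at 0.6 deg C off nominal:
print(round(expansion_error_um(500.0, 11.5e-6, 0.6), 2))  # 3.45 um
```

Several micrometres of apparent length change is large for a primary-standards CMM, which is why reducing the gradient matters even when the average room temperature is well controlled.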
We surveyed two other facilities which
have Moore M48 CMMs: the National
Institute of Standards and Technology
(NIST) in Gaithersburg, MD, and the
Oak Ridge Metrology Center (ORMC) in
Oak Ridge, TN. At NIST’s Advanced
Measurements Lab (AML), the power
electronics and operator console were in
a different room than the CMM. At
ORMC, a horizontal laminar flow system
was used, which separated air flowing
over the CMM from air flowing over the
power electronics. At both facilities, air
was also ducted through the internal
castings of the CMM.
Due to space constraints, it is impractical to move our CMM electronics and operator console outside of the CMM Lab. However, based on observations of other facilities and conversations with NIST and ORMC metrologists, we arrived at some low-budget modifications of the CMM Lab to reduce the existing temperature gradients.
Figure 6. A curtain was added to partition the CMM Lab. Only one of the flexible ducts circulating air through the casting and into the HVAC control sensor is shown.
4. Modifications of the CMM Lab
and Results
The key ideas that we wanted to implement were: partitioning the operator and the power electronics from the CMM machine itself, and increasing air circulation around the CMM workspace. To partition the operator and power electronics from the CMM, a simple unistrut frame was built, with a vinyl curtain installed between the CMM and the electronics and console. To increase air circulation around the CMM, we installed fans to draw air out of the CMM and blow this air into the HVAC sensor system. The idea is to more closely couple the HVAC control system to the critical equipment, which is the CMM.
Fig. 6 is a sketch of the modified CMM Lab. The added fans and flexible ductwork are not shown in the sketch. The fans were electronic cabinet cooling fans (Sunon, 78 cfm), connected to 4-inch diameter flexible vinyl ducts. The ducts were attached to the CMM castings and to the CMM laser interferometer cabinet. The fan air flow direction was to pull ambient air into the casting, through the duct, through the fan, and to the HVAC sensor. The installation of the curtain did not affect the stability of the air conditioning system. Fig. 7 shows the air temperature in the room (only the curtain had been installed, not the fans). Costs for installing the curtain partition were a few hundred dollars for materials and two days' labor of a general contractor. The cost of the fans and ducts was also a few hundred dollars.

Figure 7. Typical laboratory temperature at the HVAC sensor after the curtain has been installed (December 1 to 14, 2004).

Installing the fans to circulate air through the CMM and into the HVAC sensor did not alter the stability of the room air. This is shown in Fig. 8.
Fig. 9 shows the air temperature at various locations in the CMM Lab with the curtain in place and the circulating fans on. The CMM started running a program at the beginning of the time period shown and was stopped after six hours of operation.
As can be seen in Fig. 9, the vertical gradient between the CMM tabletop (where the artifact is installed) and the top of the ram has been reduced from about 0.6 °C to about 0.3 °C. In addition, running or stopping the CMM axes does not appear to introduce significant temperature disturbances in the lab. Note that the air temperature at the HVAC feedback sensor is approximately 1 °C higher than the air temperature around the X casting or Y bridge. This is due to the energy being added to the air in the circulation ducts by the fans. This is not a problem, because the HVAC set-
point is adjusted so that the CMM tabletop/artifact area is maintained at 20 °C.

Figure 8. Temperature in the CMM Lab at the HVAC sensor. The curtain has been installed; the additional fans are on. (February 2 to February 16, 2005.)

[Figure 9 plot area: temperature traces with the fans running, for the outlet of the fans/HVAC sensor, air near ceiling, top of Z-axis ram, wavelength tracker, CMM tabletop, air at CMM Y bridge, and air below CMM table, over 25 hours spanning CMM moving and CMM stopped periods.]

Figure 9. Temperatures around the CMM in the CMM Lab. The curtain is installed and the circulation fans are running. Test taken April 27-28, 2005.
The figure also shows an additional
sensor mounted next to the HVAC
sensor, near the ceiling. This sensor is not
in the air stream of the circulation ducts.
5. Conclusions
Relatively inexpensive modifications
were successful in reducing the temperature gradients from 0.6 °C to 0.3 °C in
the CMM Lab. The key issues considered
were: isolating sources of disturbances
(operator, electronics) from the critical
equipment (the CMM), and forcing additional air circulation at the critical equipment. Furthermore, air temperature
measurements show almost no discernible heating transients from operating the CMM.
There is always room for improvement. The HVAC control loops are not tuned optimally; they show an underdamped response to transient disturbances. Working with the facilities and physical plant departments, we may be able to adjust the gains in the HVAC system. There is
still a temperature difference between
the top of the machine and the working
area. This is most likely due to insufficient airflow in the room and heating
from the lighting. It is impractical to
operate the CMM with all the room
lights off; therefore, additional air circulation may reduce this temperature difference.
Finally, it is interesting to note that we could not discern heating transients in operating the CMM. We have been operating the CMM servos at 20 % of their rated speed to avoid unnecessary heating of the machine. Moore M48s are typically operated at reduced speeds in order to minimize heating. A very interesting experiment would be to determine the maximum traverse speeds while keeping parasitic temperature changes minimized. Another important experiment would be to determine the differences between the CMM tabletop temperature and nearby ambient air temperatures, and to optimize air duct positioning and settings.
References
[1] J.B. Bryan, E.R. McClure, W. Brewer, and J.W. Pearson, "Thermal Effects in Dimensional Metrology," ASME Paper 65-PROD-13, June 1965.
[2] T. Doiron and J. Stoup, "Uncertainty and Dimensional Calibrations," J. Res. Natl. Inst. Stand. Technol., vol. 102, no. 6, pp. 647-675, December 1997.
[3] J.F. Kwak, "Temperature Studies in the New Sandia Length and Mass Primary Standards Lab," in NCSL Workshop and Symposium, Monterey, California, August 1996.
[4] YSI Inc., YSI Thermistors and Probes, Catalogue T3-02 0504, www.ysi.com, 2004.
[5] Instrulab 3312 Operations Manual, Instrulab Inc., Dayton, OH, 1993.
REVIEW PAPERS
Weights and Measures
in the United States
Carol Hockert
Abstract: What does the weights and measures system in the United States look like, and what impact does it have on commerce? Every state in the United States has its own weights and measures program, and many states have county- and city-run programs within their own jurisdiction. More importantly, each of these programs has sovereignty within its jurisdiction. There are over 650 independent regulatory jurisdictions in the United States. How, then, can laws and regulations be applied uniformly? How can U.S. commerce be assured of accurate measurement and consistent application?

The National Conference on Weights and Measures (NCWM) was created by the National Bureau of Standards (NBS) in 1905 to bring together stakeholders in the weights and measures system in areas such as enforcement, manufacturing, and industry, in order to establish and modify laws governing weights and measures. Once adopted through the NCWM standards development process, the model laws and regulations are published and disseminated by the National Institute of Standards and Technology (NIST), but adopted and enforced by the representative jurisdictions. The basis for any weights and measures program must start with accurate measurements. In addition to publishing and disseminating model laws and procedures, the Weights and Measures Division (WMD) at NIST provides training and support to state, county and industry metrology laboratories and weights and measures field officials to ensure traceability of measurements in commerce. This paper discusses the makeup of the weights and measures system in the United States, how numerous separate weights and measures programs are able to provide uniformity, and the

Carol Hockert
National Institute of Standards and Technology
Weights and Measures Division
100 Bureau Drive, M/S 2600
Gaithersburg, MD 20877 USA
Email: [email protected]
1. Introduction
Historically, the development of weights
and measures in the United States began
with the writing of the Constitution. A
fundamental role of the Federal Government was “to fix the standards of weights
and measures.” [1] The Office of Weights
and Measures was created in 1836 and
was a main component of the original
National Bureau of Standards (NBS),
formed in 1901 and now called the
National Institute of Standards and Technology (NIST). In creating the National
Conference on Weights and Measures
(NCWM) four years later, NBS began the
process of striving for uniformity in
weights and measures across the nation
that continues today. Through the
NCWM, state and local weights and
measures officials meet with manufacturers and industry representatives, federal
agency representatives, and other stakeholders to set the standards for weights
and measures. These standards are adopted into law and enforced by the states,
or by regions within the states. The
intent of a weights and measures system
is to ensure a fair and equitable marketplace for both the consumer and for
competing industries. When operating
properly, a weights and measures system
is invisible to the average person, yet
affects almost every aspect of their lives.
The legal metrology system, of which
the weights and measures system is a
part, encompasses a broad range of
measuring devices used in law enforcement, the medical industry and other
applications, and is beyond the scope of
this discussion. The current paper lays
out the various functions of the U.S.
weights and measures system and defines
the roles and responsibilities of the key
players. These include NIST, NCWM,
state and local officials, other federal
agencies and industry.
2. NIST
The technical basis for a weights and
measures system begins with the national
metrology institute, which in the United
States is NIST. NIST contributes to the
weights and measures system by providing access to traceability, training, education, and accreditation services, by
participating in the development of
national and international documentary
standards and by publishing the model
laws and regulations that are adopted
into state law.
Traceability is a fundamental part of
all legal metrology systems, and is
achieved through a combination of steps.
NIST laboratories provide calibration
services to state laboratories and to
industry, who continue the unbroken
chain of calibration that is part of the
traceability requirement. The state laboratory program at NIST Weights and
Measures Division (WMD) publishes
calibration procedures (NIST Handbook
145/NIST IR 6969) [2] and management
system requirements (NIST Handbook
143) [3] for state laboratories. WMD
also provides training to state metrologists on these procedures and assesses
state laboratories to the published
quality system requirements. Finally,
WMD oversees a proficiency testing
program to assure the quality of the
results of calibrations performed by the
state laboratories. State laboratories are
encouraged to seek accreditation to
ISO/IEC 17025 [4], and are subsidized
by NIST/WMD if they are accredited
through the National Voluntary Laboratory Accreditation Program (NVLAP),
which is part of the Standards Services
Division of NIST. Currently, there are 16
NVLAP accredited state laboratories,
and 45 state labs that have been assessed
according to Handbook 143 [3] and
found to be compliant.
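Proficiency testing results of this kind are conventionally scored with the normalized error statistic En = (x_lab − x_ref)/√(U_lab² + U_ref²), with |En| ≤ 1 regarded as satisfactory. A sketch of that conventional check (the formula is the general interlaboratory-comparison convention, not a procedure quoted from this paper; the numbers are invented):

```python
import math

def e_n(x_lab, x_ref, u_lab, u_ref):
    """Normalized error: difference between the lab and reference results,
    divided by the root-sum-square of the two expanded uncertainties."""
    return (x_lab - x_ref) / math.sqrt(u_lab**2 + u_ref**2)

# Hypothetical 1 kg mass calibration (values in kg): the lab reads about
# 4 ug above the reference value, with expanded uncertainties of
# 5 ug (lab) and 2 ug (reference).
score = e_n(1000.000004, 1000.000000, 5e-6, 2e-6)
print(abs(score) <= 1.0)  # True -> satisfactory result
```

A score outside ±1 would flag the lab's result as inconsistent with its claimed uncertainty and trigger investigation.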
Defining the requirements used in
laws and regulations in each jurisdiction
is another essential component in any
weights and measures system. In the
United States, the model laws and regulations are developed through a partnership between NIST and NCWM. NIST
participates in the development of, publishes, and disseminates a number of
model laws and regulations agreed to by
the NCWM and used in weights and
measures programs across the country.
Staff of WMD provide technical expertise on a range of subjects from load cells
to grain moisture meters. NIST Handbook 44 [5] is adopted by all 50 states
and is the standard for specifications and
tolerances of commercial weighing and
measuring devices. Some states incorporate exceptions to Handbook 44 into
their law, or they use a previous version
of the Handbook. It is important to the
states that they retain sovereignty over
weights and measures law. NIST Handbook 130 [6] is the model for uniform
laws and regulations for legal metrology
and engine fuel quality. NIST Handbook
133 [7] is the guideline that describes the
methods for checking the net contents of
packaged goods. A series of NIST Handbooks (105-1 through 105-8) [8-15]
provide standard requirements for the
design of commercial test equipment,
from mass standards and volume measures to thermometers. NIST Handbook
112 [16] documents the procedures
used in testing commercial devices in the
field. Together, these documents provide
the framework for weights and measures
laws used throughout the United States.
Once the proper tools are in place,
officials must be trained to use them in
order to be effective. In addition to the
metrology laboratory training mentioned
previously, NIST provides training at all
levels in the weights and measures
system. WMD conducts training of field
staff both on-site and around the country.
Input is sought before the training schedule for the coming year is developed.
One way NIST/WMD collects input is by
conducting periodic Administrators
Workshops, where the chief Weights and
Measures Administrator from every state
is invited to participate, with typical participation of 15 to 20 Administrators.
These workshops are useful in determining what training is needed at the state
level. Training may include all types of
device testing, package inspection
methods, and contents of published
Handbooks, as well as unique topics
such as understanding audit trails.
Because it was identified as a need in the
national weights and measures system,
WMD recently developed and conducted
training on balance and scale uncertainties. NIST also conducts weights and
measures tutorials at various conferences
throughout the year. WMD responds to
hundreds of questions directed at NIST
about weights and measures each year.
The metric program, which is a part of
WMD, conducts outreach to educate the
public about the metric system and legal
metrology issues in general. In addition
to World Metrology Day, the weights and
measures community in the United
States celebrates Weights and Measures
Week every March.
In order to meet the needs of U.S.
industry and commerce in the global
marketplace, it is especially important
that the United States interact with the
international legal metrology community
and contribute to the development of
international legal metrology standards.
NIST/WMD represents the U.S., on
behalf of the U.S. Department of State,
in an international treaty organization
known as the International Organization
of Legal Metrology (OIML). OIML’s
primary function is to harmonize legal
metrology standards and practices
worldwide in order to foster confidence
in global trade and commerce. OIML
covers areas of legal metrology that
include weights and measures, but also
areas of human health and safety, and
environmental protection and monitoring. WMD staff assemble National
Working Groups (NWGs) of U.S. stakeholders to provide consensus U.S. positions on the development, review and
revision of these international standards.
WMD staff also coordinate U.S. representation in OIML, including serving as
Secretariat of several OIML Technical
Committees and Subcommittees, and
work closely with the NCWM in this
regard.
A number of other federal agencies
play a significant role in the legal metrology system for the United States. These
include the Food and Drug Administration, the Federal Trade Commission, the
Environmental Protection Agency, the
Treasury Department and the Department of Agriculture. Many of these
agencies participate in the standards
development process and work closely
with WMD and other stakeholders.
There are specific cases of federal preeminence over state law, where states
must conform and enforce the same laws
as enacted at the federal level. For the
most part, these are packaging and labeling laws designed to facilitate trade
across state borders.
3. NCWM
The National Conference on Weights and
Measures was created in 1905 to bring
together stakeholders in the weights and
measures system in areas such as
enforcement, manufacturing and industry, in order to establish and modify laws
governing weights and measures.
Changes to the weights and measures
handbooks are proposed, debated and
adopted within the NCWM. NCWM
also manages the National Type Evaluation Program (NTEP), ensuring that the
design of commercial devices is appropriate for their intended use. By providing a forum for numerous stakeholders
to communicate and interact, NCWM
contributes to the overall uniformity of
both law and enforcement in the United
States weights and measures system.
Four independent regional weights
and measures associations, the Northeast, Central, Western and Southern
Weights and Measures Associations,
exist in addition to NCWM. Members
from these regions meet, conduct business and vote on proposals that are forwarded on to the NCWM for
consideration. As in the case of the
NCWM, both weights and measures
officials and industry take part in the
regional associations.
NCWM membership consists of state
and local weights and measures officials,
industry representatives and representatives from various federal agencies. This
unique combination of stakeholders provides a balanced forum for discussion
and debate of weights and measures
issues.
Handbooks 44, 130 and 133 form the
backbone by which weights and measures law is promulgated across the
United States. Changes to these standards are made through a process that
begins with a written proposal, usually
submitted to a regional association. Each
association and the NCWM have committees whose responsibilities are to
review proposals and make recommendations to the association or to NCWM.
In some cases, a proposal may be
referred to a technical working group for
further development. Both the working
groups and the committees are made up
of members representing industry and
government. The NCWM holds an
Interim Meeting where items are classified as “Informational,” “Under Development,” “Withdrawn,” or “Ready for
Voting.” At its annual conference, the
NCWM conducts a voting process and
formally adopts changes to the Handbooks. The changes are incorporated and
new editions are published by NIST and
then adopted by states and localities.
The voting process consists of a series
of open hearings, where all proposals are
discussed and debated by all NCWM
members, followed by a voting session.
During the voting session, the membership is divided into three bodies. One
individual from each state or territory
makes up the House of State Representatives, additional weights and measures
officials make up the House of Delegates, and other industry members and
federal officials form the House of
General Membership. During a vote to
make technical changes to a Handbook,
members from the House of General
Membership are not allowed to vote. It is
also important to note that only NCWM
members are allowed to vote; thus a state
representative in attendance may not
vote if not registered as a member of
NCWM. While industry is welcome to
attend NCWM meetings and to voice
their opinions during open hearings,
industry members are considered associate members and are not allowed a vote
on technical issues during the voting
session.
The NCWM also manages the National
Type Evaluation Program (NTEP), which
evaluates weighing and measuring
devices intended for commercial use,
issuing certificates of conformance to
those meeting NTEP requirements. Many
states require commercial devices to
have an NTEP certificate as part of
overall compliance with their weights
and measures law. Manufacturers of
commercial devices submit new products
to NTEP laboratories in the United
States for testing where they are more
stringently evaluated for full compliance
to NIST HB 44 than is possible in field
testing conditions.
The NCWM, along with the regional
weights and measures associations, facilitates communication between local
jurisdictions, industry, other federal
agencies and NIST. With the number and
variety of legal metrology programs in
existence around the United States, it is
important for there to be close interaction and continuous communication
between all parties. The result of communication among the states and regions
is increased uniformity in weights and
measures across the country. This interaction also provides opportunities for
joint investigations and surveys. A survey
may be conducted periodically to determine the rate of compliance in a specific
area or with a specific commodity. A
joint investigation may be initiated when
there is an indication of widespread noncompliance with a specific device or
product.
4. State and Local Jurisdictions
States are responsible for adopting and
enforcing weights and measures laws and
maintaining a weights and measures
program for their jurisdiction. The size
and scope of each state’s weights and
measures program varies widely for a
variety of reasons. A number of states
have additional weights and measures
jurisdictions in the cities and/or counties
within them. States with larger populations, with unique products, or whose commerce is largely agricultural all have different needs and have designed programs to fit those needs. A significant
portion of a state or local weights and
measures official’s time is spent testing
and inspecting commercial devices that
are used in determining cost per unit of
measure, such as scales or meters. Most
states require that a device be tested
prior to its use in commercial transactions, and then periodically retested to
assure continued proper function.
Devices that fail inspection are normally
removed from service until repair or
replacement is completed, but officials
may take additional enforcement action
as necessary and as allowed by law.
In order for an enforcement action to
be upheld in a court of law, proof of
measurement traceability is required.
The state weights and measures official
assures traceability by first using traceable field standards when testing devices.
In addition to using proper equipment,
the official must follow documented
procedures for testing devices, and must
be able to provide evidence that they
have been properly trained both on the
use of the equipment and the procedure.
States maintain metrology laboratories
for, at a minimum, calibrating field
equipment used in legal metrology. This
equipment may be owned by the state or
by a private company that installs,
repairs or owns commercial devices.
States are ultimately responsible for
the training of the weights and measures
officials within their state. This includes
state field staff, local (city or county)
officials, and may include privately
employed agents that may be licensed to
test, inspect, install or repair commercial
devices. Additionally, some states may
provide training to device owners in
order to help them understand and
comply with the law. Training may be
done in a number of ways. For state
employees, initial training is conducted
upon hire, followed by periodic supplemental training at group meetings. The
initial training period may range from
three to six months. States conduct
training courses for city and county officials and may bring in a NIST/WMD
trainer for some types of instruction.
Some jurisdictions require officials to
pass a test or a series of tests before
being allowed to inspect and test commercial devices. Similarly, registered
agents may also be required to pass a test
before being granted authority to install
or repair a device and then place it into
service.
In most states, state law provides for
some type of enforcement of weights and
measures law; however, authority varies
by state so that there is not a single procedure for how weights and measures
law is enforced. Some states have authority to write warnings, citations, or issue
fines. Others may prosecute violations in
court. In states where private individuals
or companies are licensed to act on
behalf of the government, the state may
be responsible for the oversight of the
licensees’ actions. State and local jurisdictions also respond to consumer complaints, and take action as warranted.
In addition to enforcement, state and
local officials participate in NCWM
through membership, by serving on committees or the board, and by attending
the interim and annual meetings.
Because state and local weights and
measures officials can identify needs that
may be unique to their region or an
industry within their region, they often
provide valuable input to the NCWM
standards development process. For this
reason, the majority of the members of
NCWM and regional association
working groups and technical committees are state and local officials. Some
states have their own weights and measures associations where training is conducted and potential changes to weights
and measures law are discussed. State and
local weights and measures officials also
participate in other standards development forums, such as the American
Society for Testing and Materials
(ASTM) and OIML.
5. Licensed Agents
In order to assure fair competition,
many states allow private agents to place
a commercial device into service after
installing or repairing it. Most private
agents work for scale or meter repair
companies, but some operate independently or work for manufacturers and
others in industry. These agents are normally registered or licensed with the
state and must uphold the same laws that
the state or county officials follow.
Because there are many more private
agents operating within a state than there
are state or county weights and measures
officials, they can be more responsive
than a government official. Thus, a business requiring a new commercial device
will be able to use the device to conduct
business much more quickly if a private
agent is allowed to place the device in
service as opposed to waiting for the
state official to test and inspect it. States
that license private agents will normally
conduct a follow-up inspection within 30
days to make sure that the device was
installed properly and is operating correctly.
Agents who are licensed to place commercial devices in service must use calibrated test equipment, similar to the
equipment used by the legal metrology
officials in order to assure traceability of
measurement in commerce. This equipment must meet specifications contained
in the NIST Handbook 105 series. It is
important that the agent using the test
equipment be knowledgeable regarding
its proper design, care and use. The
agent must also know and understand
Handbook 44 and the laws specific to the
state where they do business. This is especially important for agents working in multiple states. Knowledgeable and well-trained agents contribute to the legal
metrology system by educating device
owners in addition to assuring that accurate measurements are made. These
agents are an important link between
commerce and government because they
interact with both. Agents may introduce
new technology to the state legal metrology official or ask for advice on interpreting the law.
6. Industry
Manufacturers of commercial devices
and industries that package goods sold
by measure or count are both links in the
traceability chain that begins with the
International System of Units (SI) and
ends with the consumer. Industry is
dependent on a robust and well-functioning legal metrology system and, as a
result, is often involved in the standards
development process. It is critical to have
adequate industry representation during
the standards adoption process. Industry
depends on the creation and implementation of weights and measures laws to
assure a level playing field for fair competition. Harmonization of standards
facilitates commerce and international
trade, thereby benefiting industry and
consumers.
7. Consumers
The ultimate goal of a legal metrology
program is to assure accurate measurements between buyer and seller, and to
facilitate transactions between parties.
The consumer can be many things, from
the purchaser of retail products, to the
patient at the hospital, to the company
buying electricity. Consumers expect protection from unfair practices in commerce. This means assured accuracy of
measurements in commercial transactions, but it also means the ability to
make value comparisons. For example, if
someone tried to sell gasoline by the kilogram, customers would not be able to
compare the value of their purchase to
that of the competitor selling by the
gallon or liter. It is the role of a legal
metrology system to define how a commodity may be sold, so that value comparisons may be made on the basis of
common units.
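The gasoline-by-the-kilogram scenario can be made concrete with a short sketch. The prices and the density figure below are illustrative assumptions, not values from the article:

```python
# Value comparison only works in a common unit. Sketch: convert two
# gasoline offers -- one priced per kilogram, one per US gallon -- to a
# common price per liter. The density is an assumed typical value
# (gasoline is roughly 0.74 kg/L), used here only for illustration.
GASOLINE_DENSITY_KG_PER_L = 0.74   # assumed
LITERS_PER_US_GALLON = 3.785411784

def price_per_liter_from_kg(price_per_kg):
    """Convert a per-kilogram price to a per-liter price."""
    return price_per_kg * GASOLINE_DENSITY_KG_PER_L

def price_per_liter_from_gallon(price_per_gal):
    """Convert a per-US-gallon price to a per-liter price."""
    return price_per_gal / LITERS_PER_US_GALLON

# A $1.06/kg offer and a $2.97/gal offer turn out to be nearly the
# same price once both are expressed per liter:
offer_a = price_per_liter_from_kg(1.06)      # ~0.78 $/L
offer_b = price_per_liter_from_gallon(2.97)  # ~0.78 $/L
```

Without the legal metrology system defining a common unit of sale, the consumer would have to carry out exactly this kind of conversion, density assumption and all, to compare the two offers.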
8. Enforcement and Compliance
Because enforcement of weights and
measures law takes place at the state and
local level, there is not a consistent
method of enforcement. Compliance
with weights and measures law is also
highly variable. It stands to reason that a
state that has an effective legal metrology
system in place will have greater compliance with weights and measures law. So
what are the components of an effective
weights and measures system? First, the
system must provide traceability of measurement from SI to consumer to assure
consistent application. Second, proper
documentary standards, laws and regulations must be in place for the official to
use. Third, there must be adequate training of legal metrology officials, and
proper equipment must be available.
Perhaps the most difficult step in implementing an effective program is careful
planning to identify potential problems
and then allocating resources to address
them. Finally, there must be a thoughtful
approach to interaction with the regulated customer. A combination of education and public outreach with enforcement
and consequence of violation is necessary. The legal metrology official must
develop a relationship with the regulated
customer in order to achieve the optimal
program.
Compliance rates with weights and
measures laws are not available for all
states and all types of regulated transactions, but an example of successful compliance is in retail motor fuel dispensers
(gas pumps). In 2002, the overall U.S.
compliance rate for gas pumps was
greater than 93 %, where the required
accuracy was less than a 0.5 % error.
[17] This equates to an error of no more
than one cup or 250 ml for a typical purchase of 12 gallons or 45 liters of gasoline. The failures may have been the
result of an inaccurate meter, or from a
number of other causes for rejection,
such as improper labeling, a leaking hose, or a malfunctioning price display.

  Inspection Class      Rejection Rate
  Retail Scales          8 %
  Industrial Scales     16 %
  Large Scales          24 %
  Gas Pumps              9 %
  High Volume Pumps     23 %
  Terminal Meters        2 %
  LP Meters             19 %
  Package Inspection    14 %

Table 1. Compliance data from one state (2002).
Table 1 provides typical compliance data from a state program for a variety of device types.
Keep in mind that a device may be
rejected for reasons other than accuracy.
In the case of gas pumps, approximately
half of the rejections are due to tolerance
failure.
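The one-cup figure quoted earlier for gas pumps is easy to verify; a quick sketch (purchase sizes from the text, cup size approximate):

```python
# Check of the gas-pump example: a 0.5 % tolerance on a typical
# 12 gallon (about 45 liter) purchase amounts to roughly one cup.
LITERS_PER_US_GALLON = 3.785411784
ML_PER_US_CUP = 236.6  # approximate

def max_error_ml(purchase_liters, tolerance=0.005):
    """Largest in-tolerance error, in milliliters, for a purchase."""
    return purchase_liters * tolerance * 1000.0

error_12_gal = max_error_ml(12 * LITERS_PER_US_GALLON)  # ~227 ml
error_45_l = max_error_ml(45.0)                         # 225 ml
```

Both values fall just under the 250 ml (one cup) figure cited in the text.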
9. Impact on Commerce
It is estimated that legal metrology
affects over $6 trillion in commerce. This
is not surprising when you look at the
scope of impact. From transportation
and the petroleum industry, to agriculture, to manufacturing and retail sales,
legal metrology touches every facet of the
commercial marketplace.
As an example, let’s examine the petroleum industry. The chain of custody goes
something like this: Crude oil is
processed by the refiner, transported by
the pipeline, distributed by the terminal,
stored and further distributed by the
bulk plant and sold at the retail station.
The retail sale may also occur at the terminal, the bulk plant or even at an
airport or marina. Digging deeper, there
are variable octane and cetane ratings for
gasoline and fuel oils on which price may
be based. Petroleum products must also
comply with specifications published in
ASTM standards. To further complicate
the issue, alternative fuels may be
blended with petroleum products, changing the specifications. The legal metrology system must address each component
and enforce laws relevant to each level of
this sub-system. In 2002, retail petroleum sales at gas stations only were $250
billion in the United States. [18] Table 2
below provides a breakdown of petroleum sales in 2002.
  Commodity                  Barrels sold (millions)   Sales (millions)
  Gasoline                   3 230                     $188 025
  Distillate Fuel (Diesel)   1 378                     $76 338
  Aviation Fuel              596                       $26 402

Table 2. U.S. petroleum product sales 2002. [19]

A similar breakdown could be done for other sub-systems within the commercial marketplace. In agriculture, from grain and produce to livestock and aquaculture, there are numerous scenarios for weights and measures to affect commerce. The transportation industry frequently charges by weight for a load, whether shipping by truck, barge, air or railway. Other industries notably impacted by the effectiveness of the legal metrology system include mining and forestry. Table 3 [20] presents commodity sales data that demonstrate the impact weights and measures law can have on commerce.

  Commodity                  Sales (millions)
  Grain                      $85 593
  Livestock                  $7 095
  Sand, Gravel, Stone        $3 146
  Trucking                   $164 219
  Building Materials         $215 641
  Garden Supplies/Nursery    $33 900
  LP Gas                     $9 286

Table 3. Retail and wholesale sales data 2002.

A specific example of the impact that weights and measures has on the marketplace comes from the market studies of packaged milk in 1997 and 1998. [21] In 1997, over $8 billion of milk was sold in the United States. The first study found a 46 % failure rate due to shortage of product. The average shortage was 0.76 %, which amounted to a $28 million shortage. Once the problem was identified and effort was made to correct it, a follow-up study was conducted. In 1998, the data showed a decline to a 19 % failure rate, with an average shortage of 0.71 %. This amounted to a savings of $17 million to consumers and competitors. This is only one of thousands of packaged products sold by measure, but it demonstrates in tangible dollars the amount of money directly affected by accuracy of measure.
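The milk-study dollar figures can be reproduced from the quoted rates (a sketch; the $8 billion market size is treated as a round number):

```python
# Reproducing the milk-study arithmetic from [21]: total shortage is
# (market size) x (fraction of packages short) x (average shortage).
MILK_SALES_USD = 8e9  # approximate 1997 U.S. milk sales

def shortage_usd(failure_rate, avg_shortage):
    """Dollar value of product shortage across the whole market."""
    return MILK_SALES_USD * failure_rate * avg_shortage

short_1997 = shortage_usd(0.46, 0.0076)  # ~$28 million
short_1998 = shortage_usd(0.19, 0.0071)  # ~$11 million
savings = short_1997 - short_1998        # ~$17 million
```

The computed 1997 shortage and the 1997-to-1998 savings match the $28 million and $17 million figures reported in the studies.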
10. Summary
In the United States, only $0.25 per
person is spent on weights and measures
annually, and yet it has been estimated
that over 50 % of the Gross Domestic
Product (GDP is $12.7 trillion in 2006
[22]) is impacted by weights and measures regulations on transactions. To
industry, a robust weights and measures
system means a fair market and reduced
production costs. To the average consumer, it means getting what they paid
for in a transaction. NIST and NCWM
work hard to ensure that the laws, regulations, procedures and knowledge are in
place to provide equity in commerce for
both buyer and seller.
11. References
[1] United States Constitution, Article I,
Section 8. Available at the website:
archives.gov/national-archives-experience/charters/constitution.html.
[2] NIST Handbook 145/NISTIR 6969,
“Selected Laboratory and Measurement Practices, and Procedures to
Support Basic Mass Calibrations,”
United States Department of Commerce, Technology Administration,
NIST, 2003.
[3] NIST Handbook 143, “State Weights
and Measures Laboratories Program
Handbook,” United States Department of Commerce, Technology
Administration, NIST, 2003.
[4] ISO/IEC 17025: 2005, “General
Requirements for the Competence of
Testing and Calibration Laboratories,”
International Organization for Standardization, 2005.
[5] NIST Handbook 44, “Specifications,
Tolerances, and Other Technical
Requirements For Weighing and Measuring Devices – 2006 Edition,”
United States Department of Commerce, Technology Administration,
NIST, 2006.
[6] NIST Handbook 130 – 2006 Edition,
“Uniform Laws and Regulations in the
Areas of Legal Metrology and Engine
Fuel Quality,” United States Department of Commerce, Technology
Administration, NIST, 2006.
[7] NIST Handbook 133 – Fourth
Edition, “Checking the Net Contents
of Packaged Goods,” United States
Department of Commerce, Technology Administration, NIST, 2005.
[8] NIST Handbook 105-1, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 1. Specifications and
Tolerances for Field Standard Weights
(NIST Class F),” United States
Department of Commerce, Technology Administration, NIST, 1990.
[9] NIST Handbook 105-2, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 2. Specifications and
Tolerances for Glass Flasks,” United
States Department of Commerce,
Technology Administration, NIST,
1997.
[10] NIST Handbook 105-3, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 3. Specifications and
Tolerances for Graduated Neck Type
Volumetric Field Standards,” United
States Department of Commerce,
Technology Administration, NIST,
1997.
[11] NIST Handbook 105-4, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 4. Specifications and
Tolerances for Liquefied Petroleum
Gas and Anhydrous Ammonia Liquid
Volumetric Provers,” United States
Department of Commerce, Technology Administration, NIST, 1997.
[12] NIST Handbook 105-5, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 5. Specifications and
Tolerances for Field Standard Stopwatches,” United States Department
of Commerce, Technology Administration, NIST, 1997.
[13] NIST Handbook 105-6, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 6. Specifications and
Tolerances for Thermometers,"
United States Department of Commerce, Technology Administration,
NIST, 1997.
[14] NIST Handbook 105-7, “Specifications and Tolerances for Reference
Standards and Field Standard Weights
and Measures, 7. Specifications and
Tolerances for Dynamic Small Volume
Provers,” United States Department
of Commerce, Technology Administration, NIST, 1997.
[15] NIST Handbook 105-8, "Specifications and Tolerances for Reference Standards and Field Standard Weights and Measures, 8. Specifications and Tolerances for Field Standard Weight Carts," United States Department of Commerce, Technology Administration, NIST, 2003.
[16] NIST Handbook 112 – 2002 Edition, "Examination Procedure Outlines for Commercial Weighing and Measuring Devices – A Manual for Weights and Measures Officials," United States Department of Commerce, Technology Administration, NIST, 2002.
[17] W.J. White, B. Rowe, A.C. O'Connor, and A. Rogozhin, "National Weights and Measures Benchmarking and Needs Assessment Survey Final Report," RTI International, February 2005.
[18] United States Census Bureau, 2002 Economic Census, Retail Trade by sub-sector. Available at the website: census.gov/econ/census02/
[19] United States Department of Energy, Energy Information Administration, Annual Energy Review, Table 5.11, 2004. Available at the website: eia.doe.gov/emeu/aer/pdf/aer.pdf
[20] U.S. Census Bureau, 2002 Economic Census, Retail Trade by sub-sector, Wholesale Trade by sub-sector, and Transportation and Warehousing by sub-sector data. Available at the website: census.gov/econ/census02/
[21] Reports by the Federal Trade Commission, Food and Consumer Service of the U.S. Dept. of Agriculture, Office of Weights and Measures of the NIST, and Office of Food Labeling of the U.S. Food and Drug Administration, "Milk: Does It Measure Up?," July 17, 1997 and August 13, 1998. Available at the websites: ftc.gov/reports/milk/index.html and ftc.gov/reports/milk2/milk2.htm
[22] Bureau of Economic Analysis, News Release: "Gross Domestic Product and Corporate Profits," March 30, 2006. Available at the website: bea.gov/bea/newsrelarchive/2006/gdp405f.htm
Legal and Technical Measurement
Requirements for Time and Frequency
Michael A. Lombardi
Abstract: This paper discusses various technologies and applications that rely on precise time and frequency, and
explores their legal and technical requirements for measurement uncertainty. The technologies and applications discussed include financial markets, the wired and wireless telephone networks, radio and television broadcast stations,
the electrical power grid, and radionavigation systems. Also discussed are the legal and technical requirements for
“everyday” metrology situations, including wristwatches, commercial timing devices, and radar devices used by law
enforcement officers.
Michael A. Lombardi
Time and Frequency Division
National Institute of Standards and Technology
325 Broadway
Boulder, CO 80305-3328, USA
Email: [email protected]

1. Introduction

Time and frequency measurements occupy a special place, and possess a certain mystique, in the world of metrology. The unit of time interval, the second
(s), and its reciprocal unit of frequency,
the hertz (Hz), can each be measured
with more resolution and less uncertainty than any other physical quantity.
NIST and a handful of other national
metrology laboratories can currently
realize the second to uncertainties measured in parts in 10^16 [1], and NIST has
experimental standards already in place
that promise uncertainties at least one or
two orders of magnitude smaller.[2]
These uncertainties represent the pinnacle of the metrology world, and have a
“gee whiz” quality that attracts media
attention and captures the public's imagination. These tiny uncertainties are also of interest to scientists and design engineers, because history has shown that as time and frequency uncertainties get smaller, new technologies are enabled and new products become possible.

1 This paper is a contribution of the United States government and is not subject to copyright. The illustrations of commercial products and services are provided only as examples of the technology discussed, and this neither constitutes nor implies endorsement by NIST.

  Parking Meter
    Overregistration requirement: none (uncertainty NA)
    Underregistration requirement: 10 s per minute, 5 minutes per half hour, or 7 minutes per hour (uncertainty 11.7 % to 16.7 %)
  Time clocks and time recorders
    Overregistration requirement: 3 s per hour, not to exceed 1 minute per day (uncertainty 0.07 % to 0.08 %)
    Underregistration requirement: 3 s per hour, not to exceed 1 minute per day (uncertainty 0.07 % to 0.08 %)
  Taximeters
    Overregistration requirement: 3 s per minute (uncertainty 5 %)
    Underregistration requirement: 6 s per minute (uncertainty 10 %)
  Other timing devices
    Overregistration requirement: 5 s for any interval of 1 minute or more (uncertainty NA)
    Underregistration requirement: 6 s per minute (uncertainty 10 %)

Table 1. Legal requirements of commercial timing devices.
For metrologists, however, it can be
difficult to place the tiny uncertainties of
state-of-the-art time and frequency measurements into their proper context. Most
metrology work is performed in support
of “real world” systems that require their
measuring instruments and standards to
be within a specified tolerance in order
for the system to perform as designed.
Thus, metrologists are concerned with
questions such as: What type of frequency uncertainty is required so that a
police officer knows that a measurement
of vehicle speed is valid? How close
does a radio station’s carrier frequency
need to be controlled so that it does not
interfere with another station? What frequency tolerance does a telephone
network need in order to avoid dropping
calls? These questions are answered by
looking at both the legal and technical
requirements of time and frequency
metrology, the topics of this paper. These
topics are covered by first looking at the
requirements for “everyday” metrology,
and then examining the requirements for
advanced applications.
2. Requirements for
“Everyday” Metrology
In “everyday” life, we check our wristwatches for the correct time, pay for time
on parking meters and other commercial
timing devices, play and listen to musical
instruments, and drive our cars at a safe
speed that is at or below the posted
speed limit. The modest time and frequency requirements of these activities
are described in this section.
2.1 Wristwatches
Wristwatches are unique devices, the
only metrological instruments that we
actually wear. Most wristwatches contain
a tiny quartz oscillator that runs at a
nominal frequency of 32 768 Hz. There
are no legally required uncertainties for
wristwatches, but at least one major
manufacturer specifies their watches as
accurate to within 15 s per month, or
about 0.5 s per day, a specification that
seems to be typical for the quartz watch
industry. This translates to an allowable
frequency uncertainty of about 0.2 Hz, or
a dimensionless uncertainty near 6 × 10^-6.
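The watch specification converts to frequency terms as follows (a back-of-the-envelope sketch assuming a 30-day month):

```python
# The 15 s/month wristwatch specification expressed as a frequency
# uncertainty, following the arithmetic in the text.
SECONDS_PER_MONTH = 30 * 86400  # 30-day month assumed
WATCH_CRYSTAL_HZ = 32768        # nominal quartz watch frequency

fractional_uncertainty = 15 / SECONDS_PER_MONTH  # ~5.8e-6, "near 6e-6"
frequency_uncertainty_hz = fractional_uncertainty * WATCH_CRYSTAL_HZ  # ~0.19 Hz
```

The result, about 0.19 Hz on a 32 768 Hz crystal, is the "about 0.2 Hz" quoted above.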
2.2 Commercial Timing Equipment
and Field Standard Stopwatch
Commercial timing equipment includes
devices such as parking meters, taxicab
meters, and coin operated timers used in
laundries and car washes. NIST Handbook 44 [3], which is used by all 50
states as the legal basis for regulating
commercial weighing and measuring
devices, uses the terms overregistration
and underregistration when defining the
legal requirements of commercial timing
devices. Overregistration means that the
consumer received more time than they
paid for; underregistration means that
they received less time than they paid for.
The laws are intended to protect consumers, and underregistration is of much
greater concern. For example, a person
who pays for 10 minutes on a parking
meter is legally entitled to receive close
to 10 minutes before the meter expires,
but no law is broken if the meter runs for
more than 10 minutes. Table 1 summarizes the legal requirements of commercial timing devices. [3]
Commercial timing devices are often
checked with field standard stopwatches
since they cannot be easily moved to a
calibration laboratory. Most modern
stopwatches are controlled by quartz
oscillators, and they typically meet or
exceed the performance of a quartz
wristwatch (as discussed above). Stopwatches are sometimes calibrated using a
universal counter and a signal generator
(see Fig. 1), or with a device designed to
measure the frequency of their time base
oscillator. However, most stopwatch calibrations are still made by manually
starting and stopping the device under
test while listening to audio timing
signals from NIST radio station WWV or
a similar source. For this type of calibration, the longer the time interval measured, the less impact human reaction
time will have on the overall measurement uncertainty.[4] To avoid unreasonably long calibration times, the legally
required measurement uncertainty is
typically 0.01 % or 0.02 % (1 or 2 parts
in 10^4). NIST Handbook 44 [3] specifies
15 s for a 24 hour interval, or 0.017 %.
Some states and municipalities have
their own laws that list similar requirements. For example, the state of Pennsylvania code [5] states that an electronic
stopwatch shall comply with the following standards:
(i) The common crystal frequency shall be 32 768 Hz with a measured frequency within plus or minus 3 Hz, or approximately 0.01 % of the standard frequency.
(ii) The stopwatch shall be accurate to the equivalent of plus or minus 9 seconds per 24-hour period.

Figure 1. A stopwatch calibration that employs the totalize function of a universal counter (courtesy of Sandia National Laboratories).
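The reaction-time argument above can be quantified. The 0.3 s figure below is an assumed, illustrative start/stop timing uncertainty, not a value from Handbook 44:

```python
# Why longer measurements help in manual stopwatch calibration: the
# human reaction-time contribution is roughly fixed, so its fractional
# effect shrinks as the measured interval grows. The 0.3 s combined
# start/stop uncertainty is an assumption for illustration.
REACTION_UNCERTAINTY_S = 0.3  # assumed

def fractional_impact(interval_s):
    """Fraction of the measured interval consumed by reaction time."""
    return REACTION_UNCERTAINTY_S / interval_s

impact_1_min = fractional_impact(60)     # 0.5 %, far too coarse
impact_1_hour = fractional_impact(3600)  # ~0.008 %, inside 0.01-0.02 %
```

Under this assumption a one-minute measurement cannot meet a 0.01 % requirement, while an hour-long one comfortably can, which is why audio-signal calibrations run over long intervals.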
2.3 Musical Pitch
The pitch of a musical tone is a function
of the speed at which air has been set in
motion. The speed is measured as the
number of complete vibrations – backwards and forwards – made by a particle
of air in one second. When pitch is produced by a vibrating column of air, the
pitch of the same length of pipe varies
with temperature: for a 1 °F difference,
pitch will vary by 0.001 Hz. [6]
The international standard for musical
pitch was first recognized in 1939, and
reaffirmed by the International Organization for Standardization in 1955 and
1975. [6, 7] It defined international
standard pitch as a system where A
above “middle” C (known as A4) is
tuned to 440 Hz. A 440 Hz tone is broadcast by NIST radio stations WWV and
WWVH for use as a musical reference. [8]
The ability of the human ear to discriminate between differences in pitch
depends upon many factors, including
the sound volume, the duration of the
tone, the suddenness of the frequency
change, and the musical training of the
listener. However, the just noticeable difference in pitch is often defined as
5 cents, where 1 cent is 1/100 of the
ratio between two adjacent tones on a
piano’s keyboard. Since there are 12
tones in a piano’s octave, the ratio for a
frequency change of 1 cent is the 1200th
root of 2. Therefore, raising a musical
pitch by 1 cent requires multiplying by
the 1200th root of 2, or 1.00057779. By
doing this five times starting at 440 Hz,
we can determine that 5 cents high is
about 441.3 Hz, or high in frequency by
about 0.3 %. [8] Some studies have shown
that trained musicians can distinguish
pitch to within 2 or 3 cents, or to within
0.1 % or better. Thus, frequency errors
of 0.1 % or larger can change the way
that music sounds for some listeners.
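The cent arithmetic in this section can be checked directly:

```python
# One cent is the 1200th root of 2; five cents above A440 lands near
# 441.3 Hz, a fractional change of about 0.3 %, as stated in the text.
CENT = 2 ** (1 / 1200)  # 1.00057779...

def shift_pitch(freq_hz, cents):
    """Raise (or lower, for negative cents) a pitch by a number of cents."""
    return freq_hz * CENT ** cents

five_cents_high = shift_pitch(440.0, 5)          # ~441.27 Hz
fractional_change = five_cents_high / 440.0 - 1  # ~0.003, i.e. ~0.3 %
```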
2.4 Law Enforcement
Law enforcement officers use radar
devices to check vehicle speed. These
devices are normally calibrated by pointing them at tuning forks whose oscillations simulate vehicle speed. For
example, a radar device might be calibrated by checking it with a tuning fork
labeled 30 mph (miles per hour) to test
the low range, and another fork labeled
90 mph to test the high range. The
nominal frequency of the tuning fork
varies depending upon the radar device
being used; a K-band tuning fork labeled
30 mph will oscillate at a higher frequency than an X-band fork with the
same label.
To meet legal requirements that vary
from state to state, tuning forks must be
periodically calibrated, often with a frequency counter or an oscilloscope. A frequency uncertainty of 0.1 % (1 × 10^-3) is
sufficient for tuning fork calibrations.
Although this seems like a coarse
requirement, a frequency uncertainty of
0.1 % translates directly to a speed uncertainty (for example, 0.03 mph at 30 mph, 0.09 mph at 90 mph) for either X-band or K-band radar devices. This is
insignificant when you consider that
speeding tickets are seldom issued unless
a motorist exceeds the posted speed limit
by at least several miles per hour. [9]
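Because Doppler radar derives speed linearly from the measured frequency shift, a fractional frequency uncertainty carries over directly to speed; a minimal sketch:

```python
# The tuning-fork calibration requirement expressed as a speed
# uncertainty: speed scales linearly with the Doppler frequency shift,
# so a 0.1 % frequency uncertainty is a 0.1 % speed uncertainty.
def speed_uncertainty_mph(speed_mph, fractional_freq_uncertainty=1e-3):
    """Speed uncertainty produced by a tuning-fork frequency uncertainty."""
    return speed_mph * fractional_freq_uncertainty

low_range = speed_uncertainty_mph(30)   # 0.03 mph
high_range = speed_uncertainty_mph(90)  # 0.09 mph
```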
3. Requirements for
Financial Markets
To protect investors from securities fraud
and to ensure that financial transactions
occur in an orderly fashion that can be
audited if necessary, financial markets
often require all recorded events to be
time tagged to the nearest second. For
example, after an August 1996 settlement with the Securities and Exchange Commission (SEC) involving stock market
fraud related to the improper execution
of trades, the National Association of
Securities Dealers (NASD) needed a way
to perform surveillance of the NASDAQ
market center. As a result, the NASD
developed an integrated audit trail of
order, quote, and trade information for
NASDAQ equity securities known as
OATS (Order Audit Trail System).
OATS introduced many new rules for
NASD members, including requiring all
members to synchronize their computer
system and mechanical clocks every
business day before the market opens to
ensure that recorded order event time
stamps are accurate. To maintain clock
synchronization, clocks should be
checked against the standard clock and
resynchronized, if necessary, at predetermined intervals throughout the day, so that the time kept by all clocks can always be trusted. NIST time was chosen as the official time reference for NASDAQ transactions.

Figure 2. An OATS-compliant clock used to time stamp financial transactions (courtesy of the Widmer Time Recorder Company).
NASD OATS Rule 6953, Synchronization of Member Business Clocks, applies
to all member firms that record order,
transaction, or related data to synchronize all business clocks. In addition to
specifying NIST time as the reference, it
requires firms to keep a copy of their
clock synchronization procedures onsite. One part of the requirements [10]
reads as follows:
All computer system clocks and mechanical time stamping devices must be synchronized to within three seconds of the
National Institute of Standards and
Technology (NIST) atomic clock. Any
time provider may be used for synchronization, however, all clocks and time
stamping devices must remain accurate
within a three-second tolerance of the
NIST clock. This tolerance includes all of
the following:
• The difference between the NIST standard and a time provider's clock;
• Transmission delay from the source;
and
• The amount of drift of the member
firm’s clock.
For example, if the time provider’s clock
is accurate to within one second of the
NIST standard, the maximum allowable
drift for any computer system or mechanical clock is two seconds.
Prior to the development of OATS, brokerage houses often used clocks and time
stamp devices that recorded time in
decimal minutes with a resolution of 0.1
minutes (6 s). The new OATS requirements forced the removal of these clocks.

Figure 3. A mobile calibration van that tests whether or not a transmitter is within tolerances specified by the FCC (courtesy of dbK Communications, Inc.).
Fig. 2 shows an OATS-compliant clock
that synchronizes to NIST time via the
Internet. Clocks such as this one are synchronized to the nearest second, but up
to 3 seconds of clock drift are allowed
between synchronizations.
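The three-second OATS budget can be expressed as a simple check; the component values below are illustrative, not from the rule:

```python
# The OATS tolerance as a budget: the provider's offset from NIST, the
# transmission delay, and the firm's clock drift must together stay
# within 3 s. Sample component values are made up for illustration.
OATS_TOLERANCE_S = 3.0

def within_oats_tolerance(provider_offset_s, transmission_delay_s, drift_s):
    """True if the combined time error stays inside the OATS tolerance."""
    return provider_offset_s + transmission_delay_s + drift_s <= OATS_TOLERANCE_S

ok = within_oats_tolerance(1.0, 0.1, 1.5)        # 2.6 s total: passes
too_much = within_oats_tolerance(1.0, 0.5, 2.0)  # 3.5 s total: fails
```

This mirrors the rule's own example: a provider clock within one second of NIST leaves roughly two seconds of budget for everything downstream.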
4. Requirements for Broadcasting
Unlike time metrology, which has origins
that date back thousands of years, frequency metrology was not generally discussed until about 1920, when commercial
radio stations began to appear. Radio
pioneers such as Marconi, Tesla, and
others were not aware of the exact frequencies (or even the general part of the
spectrum) that they were using. However, when the number of radio broadcasters began to proliferate, keeping
stations near their assigned frequencies
became a major problem, creating an
instant demand for frequency measurement procedures and for frequency standards.[11] Today, with stable quartz
and atomic oscillators readily available,
keeping broadcasters “on frequency” is
relatively easy, but all broadcasters must
provide evidence that they follow the
Federal Communications Commission
(FCC) regulations as described in
Section 4.1. Fig. 3 shows a mobile calibration van that makes on-site visits to
transmitter sites to check their frequency.
4.1 FCC Requirements for Radio
and Television Broadcasting
The FCC specifies the allowable carrier
frequency departure tolerances for AM
and FM radio stations, television stations, and international broadcast stations.[12] These tolerances are specified
as a fixed frequency across the broadcast
band of ±20 Hz for AM radio, ±2000 Hz
for FM radio, and ±1000 Hz for the
audio and video television carriers, and
as a dimensionless tolerance of
0.0015 % for international shortwave
broadcasters. The allowable uncertainties are converted to scientific notation
and summarized in Table 2.
4.2 Frequency Requirements for
Color Television Subcarriers
For historical design reasons, the chrominance subcarrier frequency on analog
color televisions is 63/88 multiplied by
5 MHz, or about 3.58 MHz. To ensure
adequate picture quality for television
viewers, federal regulations specify that
the frequency of this subcarrier must
remain within ±10 Hz of its nominal
value, and the rate of frequency drift
must not exceed 0.1 Hz per second. [13] This corresponds to an allowable tolerance of ±0.044 Hz for the 15 734.264 Hz horizontal scanning frequency, a dimensionless frequency uncertainty near 3 × 10^-6.
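The NTSC numbers in this section hang together arithmetically; the 227.5 subcarrier-to-line-frequency ratio used below is standard NTSC arithmetic, not stated in the text:

```python
# NTSC color arithmetic: the chrominance subcarrier is 63/88 x 5 MHz,
# the horizontal scanning frequency is the subcarrier divided by 227.5
# (an assumption drawn from the NTSC standard, not from this article),
# and scaling the 10 Hz subcarrier tolerance by the same ratio gives
# the 0.044 Hz scanning-frequency tolerance quoted above.
subcarrier_hz = 63 / 88 * 5e6                     # 3 579 545.45... Hz
scan_hz = subcarrier_hz / 227.5                   # 15 734.264 Hz
scan_tolerance_hz = 10 / subcarrier_hz * scan_hz  # ~0.044 Hz
fractional = 10 / subcarrier_hz                   # ~2.8e-6, "near 3e-6"
```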
  Broadcast      Tolerance   Low-End Carrier     Uncertainty    High-End Carrier     Uncertainty
  AM radio       ±20 Hz      530 kHz             3.8 × 10^-5    1710 kHz             1.2 × 10^-5
  FM radio       ±2000 Hz    88 MHz              2.3 × 10^-5    108 MHz              1.9 × 10^-5
  Television     ±1000 Hz    55.25 MHz           1.8 × 10^-5    805.75 MHz           1.2 × 10^-6
                             (channel 2 video)                  (channel 69 audio)
  International  0.0015 %    3 MHz               1.5 × 10^-5    30 MHz               1.5 × 10^-5

Table 2. FCC requirements for broadcast carrier frequency departure.
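The dimensionless values in Table 2 are simply the fixed tolerance divided by the carrier frequency at each band edge:

```python
# Deriving the dimensionless uncertainties in Table 2: a fixed tolerance
# in Hz, divided by the carrier frequency at each edge of the band.
def dimensionless_uncertainty(tolerance_hz, carrier_hz):
    """Fractional frequency uncertainty for a fixed tolerance."""
    return tolerance_hz / carrier_hz

am_low = dimensionless_uncertainty(20, 530e3)        # ~3.8e-5
am_high = dimensionless_uncertainty(20, 1710e3)      # ~1.2e-5
tv_high = dimensionless_uncertainty(1000, 805.75e6)  # ~1.2e-6
```

Note the pattern: a fixed tolerance in hertz is hardest to meet at the low end of the band, where it is the largest fraction of the carrier.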
5. Requirements for Electric
Power Distribution
The electric power system in North
America consists of many subsystems
that interconnect into several massive
grids that span the continent. The system
delivers the 60 Hz AC frequency to many
millions of customers by matching power
generation levels to transmission capability and load patterns. The entire power
system relies on time synchronization,
and synchronization problems can lead
to catastrophic failures. For example, the
massive August 2003 blackout in the
eastern regions of the United States and
Canada was at least partially caused by
synchronization failures.[14]
The timing requirements of the power
industry vary (Table 3), because different
parts of the system were designed at different times, and the entire system has
evolved over many years. The older
parts of the system have less stringent
timing requirements because they were
designed using technologies that predated the Global Positioning System
(GPS). The newer parts of the system
rely on the ability of GPS to provide
precise time synchronization over a large
geographic area.
Since electrical energy must be used as
it is generated, generation must be constantly balanced with load, and the alternating current produced by a generator
must be kept in approximate phase with
every other generator. Generation
control requires time synchronization of
about 10 ms. Synchronization to about 1
ms is required by event and fault
recorders that supply information used
to correct problems in the grid and
improve operation. Stability control
schemes prevent unnecessary generator
shutdown, loss of load, and separation of
the power grid. They require synchronization to about 46 µs (±1° phase angle
at 60 Hz), and networked controls have
requirements one order of magnitude
lower, or to 4.6 µs (±0.1° phase angle at
60 Hz). Traveling wave fault locators
find faults in the power grid by timing
waveforms that travel down power lines
at velocities near the speed of light.
  System Function                  Measurement                Time Requirement
  Generation Control               Generator phase            10 ms
  Event Recorders                  Time tagging of records    1 ms
  Stability Controls               Phase angle, ±1°           46 µs
  Networked Controls               Phase angle, ±0.1°         4.6 µs
  Traveling wave fault locators    300 meter tower spacing    1 µs
  Synchrophasor measurements       Phase angle, ±0.022°       1 µs

Table 3. Time synchronization requirements for the electric power industry.

Because the high voltage towers are
spaced about 300 meters apart, the
timing requirement is 1 µs, or the period
of a 300 meter wavelength [15]. Newer
measurement techniques, such as synchronized phasor measurements, require
time synchronization to Coordinated
Universal Time (UTC) to within 1 µs,
which corresponds to a phase angle accuracy of 0.022° for a 60 Hz system. A
local time reference must be applied to
each phasor measurement unit, and GPS
is currently the only system that can meet
the requirements of synchrophasor measurements.[16] Commercial phasor measurement units that receive GPS signals
are shown in Fig. 4.
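The phase-angle entries in Table 3 convert to time as a fraction of the 60 Hz cycle:

```python
# Phase-angle tolerances converted to time at 60 Hz: one full cycle is
# 1/60 s, so a tolerance of d degrees maps to (d / 360) x (1/60) s.
def phase_to_time_us(phase_deg, line_hz=60):
    """Time equivalent, in microseconds, of a phase-angle tolerance."""
    return phase_deg / 360 / line_hz * 1e6

stability = phase_to_time_us(1.0)        # ~46 us (stability controls)
networked = phase_to_time_us(0.1)        # ~4.6 us (networked controls)
synchrophasor = phase_to_time_us(0.022)  # ~1 us (synchrophasors)
```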
The 60 Hz frequency delivered to consumers is sometimes used as the resonator for low priced electric clocks and
timers that lack quartz oscillators. The
legally allowable tolerance for the 60 Hz
frequency is only ±0.02 Hz, or 0.033 %
[17], but under normal operating conditions the actual tolerance is much tighter.
6. Requirements for
Telecommunication Systems
Telecommunication networks make use
of the stratum hierarchy for synchronization as defined in the ANSI T1.101 standard. [18] This hierarchy classifies
clocks based on their frequency accuracy,
which translates into time accuracy relative to other clocks in the network. The
best clocks, known as Stratum 1, are
defined as autonomous timing sources
that require no input from other clocks,
other than perhaps a periodic calibration. Stratum-1 clocks are normally
atomic oscillators or GPS disciplined
oscillators (GPSDOs), and have an accuracy specification of 1 × 10^-11. Clocks at
strata lower than level 1 require input
and adjustment from another network clock.

Figure 4. Phasor Measurement Systems receive time signals from the GPS satellites (courtesy of ABB, Inc.).

The specifications for stratum
levels 1, 2, 3, and 3E are shown in
Table 4. The “pull-in range” determines
what type of input accuracy is required
to synchronize the clock. For example, a
“pull-in-range” of ±4#10-6, means that
the clock can be synchronized by another
clock with that level of accuracy.
6.1 Requirements for
Telephones (land lines)
The North American T1 standard for
telecommunications consists of a digital
data stream clocked at a frequency of
1.544 MHz. This data stream is divided
into 24 voice channels, each with 64 kb/s
of bandwidth. Each voice channel is
sampled 8000 times per second, or once
every 125 µs. When a telephone connection is established between two voice
channels originating from different
clocks, the time error needs to be less
than one half of the sample period, or
62.5 µs. Half the period represents the worst case, which exists when
two clocks of the same stratum are
moving in opposite directions. If the time
error exceeds 62.5 µs, a cycle slip occurs,
resulting in loss of data, noise on the line,
or in some cases, a dropped call. The use
of Stratum-1 clocks throughout a
network ensures that cycle slips occur
no more often than once every 72.3 days (62.5 µs
divided by 0.864 µs of time offset per
day). In contrast, Stratum-3 clocks could
produce cycle slips as often as every
169 s (Table 4), an unacceptable condition. Thus, if resources allow, the use of
Stratum-1 clocks is certainly desirable
for network providers.
Stratum Levels                                   | Stratum-1 | Stratum-2  | Stratum-3E  | Stratum-3
Frequency accuracy, adjustment range             | 1 × 10⁻¹¹ | 1.6 × 10⁻⁸ | 1 × 10⁻⁶    | 4.6 × 10⁻⁶
Frequency stability                              | NA        | 1 × 10⁻¹⁰  | 1 × 10⁻⁸    | 3.7 × 10⁻⁷
Pull-in range                                    | NA        | 1.6 × 10⁻⁸ | 4.6 × 10⁻⁶  | 4.6 × 10⁻⁶
Time offset per day due to frequency instability | 0.864 µs  | 8.64 µs    | 864 µs      | 32 ms
Interval between cycle slips                     | 72.3 days | 7.2 days   | 104 minutes | 169 s

Table 4. Stratum timing requirements for clocks in telecommunication networks.
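The daily time offsets and cycle-slip intervals in Table 4 follow directly from the fractional frequency values; a quick sketch of the arithmetic:

```python
# Daily time offset accumulated by a clock with a given fractional
# frequency offset, and the resulting interval between 62.5 µs cycle slips.
SECONDS_PER_DAY = 86400.0
SLIP_THRESHOLD_S = 62.5e-6  # half of the 125 µs T1 sample period

def time_offset_per_day(fractional_frequency):
    """Accumulated time offset per day, in seconds."""
    return fractional_frequency * SECONDS_PER_DAY

def days_between_slips(fractional_frequency):
    """Days until the accumulated offset reaches the cycle-slip threshold."""
    return SLIP_THRESHOLD_S / time_offset_per_day(fractional_frequency)

print(time_offset_per_day(1e-11))    # 8.64e-07 s = 0.864 µs (Stratum-1)
print(days_between_slips(1e-11))     # about 72.3 days (Stratum-1)
print(days_between_slips(3.7e-7) * SECONDS_PER_DAY)  # about 169 s (Stratum-3)
```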
6.2 Requirements for Mobile Telephones

Mobile telephone networks depend upon
precise time and frequency. Code division multiple access (CDMA) networks
have the most stringent requirements.
CDMA networks normally comply with
the TIA/EIA IS-95 standard [19], which
defines base station time using GPS time
as a benchmark. Thus, nearly all CDMA
base stations contain GPSDOs (more
than 100,000 CDMA base stations are
equipped with GPS in North America).
The time requirement is ±10 µs, even if
GPS is unavailable for up to 8 hours.
During normal operation, base stations
are synchronized to within 1 µs. The frequency requirement is 5 × 10⁻⁸ for the
transmitter carrier frequency, but the
carrier is normally derived from the same
GPSDO as the time, and is usually much
better than the specification. Fig. 5
shows a cellular telephone tower containing a large variety of antennas.
Several small GPS antennas near the
base of the tower are used to obtain the
CDMA time reference (one antenna is
shown in the inset).

Figure 5. Cellular telephone towers contain a myriad of antennas, often including GPS
antennas used to obtain a CDMA time reference.

Although not yet as popular as CDMA
in the United States, the Global System
for Mobile Communications (GSM) is
the most popular standard for mobile
phones in the world, currently used by
over a billion people in more than 200
countries. GSM is a time division multiple access (TDMA) technology that
works by dividing a radio frequency into
time slots and then allocating slots to
multiple calls. Unlike CDMA, GSM has
no time synchronization requirement
that demands GPS-level performance, but the
uncertainty requirement for the frequency source is 5 × 10⁻⁸, generally
requiring a rubidium or a high quality
quartz oscillator to be installed at each
base station. [20] Unlike CDMA subscribers, GSM subscribers won't necessarily have the correct time-of-day
displayed on their phones; the base
station clock is sometimes (but not
always) synchronized to the central
office master clock system.

6.3 Requirements for Wireless Networks

Although they operate at much higher
frequencies than those of the radio and
television stations discussed earlier, wireless networks based on the IEEE
802.11b and 802.11g standards have a similar
acceptable tolerance for carrier frequency departure, ±2.5 × 10⁻⁵. The
specifications call for the transmit frequency and the data clock to be derived
from the same reference oscillator. [21]
Figure 6. The measurement system supplied to subscribers to the NIST Frequency
Measurement and Analysis Service. It
makes frequency measurements traceable
to the NIST standard by using GPS as a
transfer standard.
7. Requirements for
Calibration Laboratories
Calibration laboratories with an accredited capability in frequency usually maintain a rubidium oscillator, a cesium oscillator, or a
GPSDO as their primary frequency standard. This frequency standard is used to
calibrate time base oscillators in test
equipment such as counters and signal
generators. The test equipment is generally calibrated in accordance with the manufacturer's specifications, which typically
range from a few parts in 10⁶ for low-priced devices with non-temperature-controlled quartz oscillators to parts in
10¹¹ for devices with rubidium time
bases. Therefore, a frequency standard
with an uncertainty of 1 × 10⁻¹² allows a
laboratory to calibrate nearly any piece
of commercial test equipment and still
maintain a test uncertainty ratio that
exceeds 10:1. For these reasons, calibration laboratories seldom have a frequency uncertainty requirement of less
than 1 × 10⁻¹². Laboratories that require
monthly certification of their primary
frequency standard can subscribe to the
NIST Frequency Measurement and
Analysis Service (Fig. 6), and continuously measure their standard with an
uncertainty of 2 × 10⁻¹³ at an averaging
time of one day. [22] Laboratories that do
not need certification can often meet a
1 × 10⁻¹² uncertainty requirement by using a
GPSDO and a frequency measurement
system with sufficient resolution.
7.1 Requirements for
Voltage Measurements
The uncertainty in voltage measurement
in a Josephson voltage standard (JVS) is
proportional to the uncertainty in frequency measurement. Typical high level
direct comparisons of JVS systems at 10
V are performed at uncertainties of a few
parts in 10¹¹. Therefore, each laboratory
involved in a JVS comparison requires a
frequency standard with an uncertainty
of 1 × 10⁻¹¹ or less at an averaging time of
less than 10 minutes to ensure proper
voltage measurement results. [23] This
frequency requirement is generally met
by using either a cesium oscillator or a
GPSDO. Fig. 7 shows the NIST JVS
system with a GPSDO located at the top
of the equipment rack.

Figure 7. Josephson voltage standards require a high performance frequency reference
(courtesy of Yi-hua Tang, NIST).
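The proportionality between frequency and voltage uncertainty follows from the Josephson relation V = n·f/K_J, so the fractional frequency uncertainty maps directly onto the voltage. A minimal sketch using the conventional value K_J-90; the 75 GHz bias frequency is an illustrative choice:

```python
# Josephson relation: a junction biased at frequency f on step n produces
# V = n * f / K_J, so dV/V = df/f (frequency uncertainty becomes
# voltage uncertainty one-for-one).
K_J_90 = 483597.9e9  # conventional Josephson constant, Hz/V

def junction_voltage(n_step, f_hz):
    return n_step * f_hz / K_J_90

v = junction_voltage(1, 75e9)   # one junction, 75 GHz bias (illustrative)
print(v)                        # about 155 µV per junction
```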
7.2 Requirements for Length Measurements

Since 1983, the meter has been defined
as "the length of the path traveled by
light in a vacuum during a time interval
of 1 / 299 792 458 of a second." Thus,
the definition of length is dependent
upon the prior definition of time interval,
and time and length metrology have a
close relationship. Until recently, the best
physical realizations of the meter had
uncertainties several orders of magnitude
larger than the uncertainty of the second,
due to the techniques used to derive the
meter. [24] However, the optical frequency standards [2] now being developed at national metrology institutes can
also serve as laser wavelength standards
for length metrology. As a result, the
uncertainties of the best physical realizations of the second and the meter will
probably track very closely in future
years. [25]

7.3 Requirements for Flow Measurements

Flow metrology normally involves collecting a measured amount of gas or
liquid in a tank or enclosure over a measured time interval, which is known as the
collection time. Thus, uncertainties in
the measurement of the collection time
can contribute uncertainty to the flow
measurement. However, they are generally insignificant if they can be held, for
example, to a few tenths of a second over
a 100 s interval. Nearly any commercial
time interval counter can exceed this
requirement by at least two orders of
magnitude, but most collection time
uncertainty is introduced by delay variations in the signals used to start and stop
the counter. These delay variations need
to be measured against a time interval reference, and included in the uncertainty
analysis of a flow measurement. [26]
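The fractional contribution described above can be sketched as follows; the 0.3 s figure is an illustrative stand-in for "a few tenths of a second," not a value prescribed by the text:

```python
# Fractional flow uncertainty contributed by collection-time uncertainty
# alone (the measured flow scales as 1/collection_time).
def collection_time_fraction(time_uncertainty_s, collection_time_s):
    return time_uncertainty_s / collection_time_s

print(collection_time_fraction(0.3, 100.0))  # 0.003, i.e. 0.3 %
```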
8. Requirements for
Radionavigation
Radionavigation systems, such as the
ground-based LORAN-C system and the
satellite-based GPS system, have very
demanding time and frequency requirements. The precise positioning uncertainty of these systems is entirely
dependent upon precise time kept by
atomic oscillators. In the case of GPS, the
satellites carry on-board atomic oscillators that receive clock corrections from
earth-based control stations just once
during each orbit, or about every 12
hours. The maximum acceptable contribution from the satellite clocks to the
positioning uncertainty is generally
assumed to be about 1 m. Since light
travels at about 3 × 10⁸ m/s, the 1 m
requirement is equivalent to about a
3.3 ns ranging error. This means that the
satellite clocks have to be stable enough
to keep time (without the benefit of corrections) to within about 3.3 ns for about
12 hours. That translates to a frequency
stability specification near 6 × 10⁻¹⁴,
which was the specified technical
requirement during a recent GPS space
clock procurement. [27]
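The chain of reasoning above (1 m positioning contribution → ranging error → 12-hour stability) can be sketched as follows:

```python
# Convert the ~1 m satellite-clock contribution to positioning uncertainty
# into a ranging (time) error, then into the fractional frequency
# stability needed to hold that error for 12 hours between corrections.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

ranging_error_s = 1.0 / SPEED_OF_LIGHT      # about 3.3 ns
seconds_between_corrections = 12 * 3600.0   # one clock update per orbit
stability = ranging_error_s / seconds_between_corrections

print(ranging_error_s)  # about 3.34e-9 s
print(stability)        # parts in 1e14, the same order as the 6e-14 spec
```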
9. Requirements for
Remote Comparisons
of the World’s Best Clocks
The current primary time and frequency
standard for the United States is the
cesium fountain NIST-F1, with uncertainties that have dropped below 1 × 10⁻¹⁵
[1]. To determine that a clock is accurate
to within 1 × 10⁻¹⁵ relative to another
clock, the time transfer technique used to
compare the clocks needs to reach uncertainties lower than 1 × 10⁻¹⁵ in a reasonably short interval.

Application or Device                    | Required Uncertainty (Time) | Required Uncertainty (Frequency)
Wristwatches                             | 0.5 s per day               | 6 × 10⁻⁶
Parking Meters                           | 7 minutes per hour          | 11.7 %
Time Clocks and Recorders                | 1 minute per day            | 7 × 10⁻⁴
Taximeters                               | 6 s per minute              | 10 %
Field Standard Stopwatches               | 9 s per day                 | 1 × 10⁻⁴
Musical Pitch                            | NA                          | 1 × 10⁻³
Tuning forks used for radar calibration  | NA                          | 1 × 10⁻³
Stock Market time stamp                  | 3 s absolute accuracy       | NA
AM Radio Carrier frequency               | NA                          | 1.2 × 10⁻⁵
FM Radio Carrier frequency               | NA                          | 1.9 × 10⁻⁵
TV Carrier Frequency                     | NA                          | 1.2 × 10⁻⁶
Shortwave Carrier Frequency              | NA                          | 1.5 × 10⁻⁵
Color TV subcarrier                      | NA                          | 3 × 10⁻⁶
Electric Power Generation                | 10 ms                       | NA
Electric Power Event Recorders           | 1 ms                        | NA
Electric Power Stability Controls        | 46 µs                       | NA
Electric Power Network Controls          | 4.6 µs                      | NA
Electric Power Fault Locators            | 1 µs                        | NA
Electric Power Synchrophasors            | 1 µs                        | NA
Telecommunications, Stratum-1 clock      | NA                          | 1 × 10⁻¹¹
Telecommunications, Stratum-2 clock      | NA                          | 1.6 × 10⁻⁸
Telecommunications, Stratum-3E clock     | NA                          | 1 × 10⁻⁶
Telecommunications, Stratum-3 clock      | NA                          | 4.6 × 10⁻⁶
Mobile Telephones, CDMA                  | 10 µs                       | 5 × 10⁻⁸
Mobile Telephones, GSM                   | NA                          | 5 × 10⁻⁸
Wireless Networks, 802.11g               | NA                          | 2.5 × 10⁻⁵
Frequency Calibration Laboratories       | NA                          | 1 × 10⁻¹²
Josephson Array Voltage Standard         | NA                          | 1 × 10⁻¹¹
GPS Space Clocks                         | NA                          | 6 × 10⁻¹⁴
State-of-the-art time transfer           | < 1 ns                      | parts in 10¹⁶

Table 5. Summary of legal and technical time and frequency requirements.

NIST-F1 is routinely
compared to the world’s best clocks
using time transfer techniques that
involve either common-view measurements of the GPS satellites, or two-way
time transfer comparisons that require
the transmission and reception of signals
through geostationary satellites. Currently, both the carrier-phase GPS and
the two-way time transfer techniques can
reach uncertainties of about 2 × 10⁻¹⁵ at
one day, reaching parts in 10¹⁶ after
about 10 days of averaging [28]. There
are practical limits to the length of these
comparisons, because it is often not possible to continuously run NIST-F1 and
comparable standards for more than 30
to 60 days. Although these time transfer
requirements might seem staggeringly
high, keep in mind that the uncertainties
of the world’s best clocks will continue to
get smaller [2] and time transfer requirements will become even more stringent
in the coming years.
10. Summary and Conclusion

As we have seen, the world of time and
frequency metrology is extensive, supporting applications that range from the
everyday to the state-of-the-art. It has
legal and technical uncertainty requirements that cover an astounding 15
orders of magnitude, from the parts per
hundred (percent) uncertainties required
by coin operated timers, to the parts in
10¹⁶ uncertainties required for remote
comparisons of the world's best clocks.
Table 5 summarizes the requirements for
the applications discussed in this paper
(listed in the order that they appear in
the text).

11. References

[1] T.P. Heavner, S.R. Jefferts, E.A. Donley, J.H. Shirley, and T.E. Parker, "NIST-F1: recent improvements and accuracy evaluations," Metrologia, vol. 42, pp. 411-422, September 2005.
[2] S.A. Diddams, J.C. Bergquist, S.R. Jefferts, and C.W. Oates, "Standards of Time and Frequency at the Outset of the 21st Century," Science, vol. 306, pp. 1318-1324, November 19, 2004.
[3] T. Butcher, L. Crown, R. Suitor, and J. Williams, editors, "Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices," National Institute of Standards and Technology Handbook 44, 329 pages, December 2003.
[4] J.C. Gust, R.M. Graham, and M.A. Lombardi, "Stopwatch and Timer Calibrations," National Institute of Standards and Technology Special Publication 960-12, 60 pages, May 2004.
[5] State of Pennsylvania Code, 67 § 105.71(2), 2005.
[6] L. Cavanagh, "A brief history of the establishment of international standard pitch a = 440 Hz," WAM: Webzine about Audio and Music, 4 pages, 2000.
[7] International Organization for Standardization, "Acoustics – Standard tuning frequency (Standard musical pitch)," ISO 16, 1975.
[8] M.A. Lombardi, "NIST Time and Frequency Services," National Institute of Standards and Technology Special Publication 432, 80 pages, January 2002.
[9] U.S. Department of Transportation, "Speed-Measuring Device Performance Specifications: Down the Road Radar Module," DOT HS 809 812, 72 pages, June 2004.
[10] NASD, "OATS Reporting Technical Specifications," 281 pages, September 12, 2005.
[11] J.H. Dellinger, "Reducing the Guesswork in Tuning," Radio Broadcast, vol. 3, pp. 241-245, December 1923.
[12] Code of Federal Regulations 47 § 73.1545, 2004.
[13] Code of Federal Regulations 47 § 73.682, 2004.
[14] U.S.–Canada Power System Outage Task Force, "Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations," April 2004. Available at: www.nerc.com/~filez/blackout.html
[15] K.E. Martin, "Precise Timing in Electric Power Systems," Proceedings of the 1993 IEEE International Frequency Control Symposium, pp. 15-22, June 1993.
[16] Power System Relaying Committee of the IEEE Power Engineering Society, "IEEE Standard for Synchrophasors for Power Systems," IEEE Standard 1344-1995 (R2001), 36 pages, December 1995, reaffirmed March 2001.
[17] North American Electric Reliability Council, "Generation, Control, and Performance," NERC Operating Manual, Policy 1, Version 2, October 2002.
[18] American National Standard for Telecommunications, "Synchronization Interface Standards for Digital Networks," ANSI T1.101, 1999.
[19] "Mobile Station-Base Station Compatibility Standard for Wideband Spread Spectrum Cellular Systems," TIA/EIA Standard 95-B, Arlington, VA: Telecommunications Industry Association, March 1999.
[20] European Telecommunications Standards Institute (ETSI), "GSM: Digital cellular telecommunication system (Phase 2+); Radio subsystem synchronization (GSM 05.10 version 8.4.0)," ETSI TS 100 912, 1999.
[21] LAN/MAN Standards Committee of the IEEE Computer Society, "IEEE Standard for Information technology – Telecommunications and information exchange between systems – Local and metropolitan area networks – Specific requirements – Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications – Amendment 4: Further Higher Data Rate Extension in the 2.4 GHz Band," IEEE Standard 802.11g, 2003.
[22] M.A. Lombardi, "Remote frequency calibrations: The NIST Frequency Measurement and Analysis Service," National Institute of Standards and Technology Special Publication 250-29, 90 pages, June 2004.
[23] Y. Tang, M.A. Lombardi, and D.A. Howe, "Frequency uncertainty analysis for Josephson voltage standard," Proceedings of the 2004 IEEE Conference on Precision Electromagnetic Measurements, pp. 338-339, June 2004.
[24] B.W. Petley, "Time and Frequency in Fundamental Metrology," Proceedings of the IEEE, vol. 79, no. 7, pp. 1070-1076, July 1991.
[25] J. Helmcke, "Realization of the metre by frequency-stabilized lasers," Measurement Science and Technology, vol. 14, pp. 1187-1199, July 2003.
[26] J.D. Wright, A.N. Johnson, M.R. Moldover, and G.M. Kline, "Gas Flowmeter Calibrations with the 34 L and 677 L PVTt Standards," National Institute of Standards and Technology Special Publication 250-63, 72 pages, January 2004.
[27] T. Dass, G. Freed, J. Petzinger, J. Rajan, T.J. Lynch, and J. Vaccaro, "GPS Clocks in Space: Current Performance and Plans for the Future," Proceedings of the 2002 Precise Time and Time Interval Meeting, pp. 175-192, December 2002.
[28] T.E. Parker, S.R. Jefferts, T.P. Heavner, and E.A. Donley, "Operation of the NIST-F1 cesium fountain primary frequency standard with a maser ensemble, including the impact of frequency transfer noise," Metrologia, vol. 42, pp. 423-430, September 2005.
TECHNICAL TIPS
Practical Approach to Minimizing
Magnetic Errors in Weighing
Richard Davis
Abstract: OIML Recommendation R-111 (2004), now publicly available, specifies the magnetic properties of standard
weights as a function of class and nominal value. This note aims to show why these specifications are necessary and
how a manufactured weight can, in principle, be tested to verify that it is in compliance. A risk analysis can then help
individual laboratories decide what level of testing is warranted.
1. Magnetization and Mass Standards
How are the magnetic properties of mass standards specified,
and what do these specifications mean? In the International Organization of Legal Metrology (OIML) Recommendation R-111
(2004) [1], the magnetic properties of a weight are specified by
two parameters: (1) the volume magnetic susceptibility (symbol
χ, dimensionless in the SI) and (2) the magnetic polarization
(symbol µ0M, SI unit: tesla). The quantity M is called the permanent magnetization, or just the magnetization in this note,
and µ0 is the magnetic constant, 4π × 10⁻⁷ N/A². It can be
shown that the units of M, apparently T·A²/N, reduce to A/m.
The important point for this discussion is that we will assume
that magnetization and polarization differ only by a multiplicative constant. Metals handbooks often refer to relative permeability instead of susceptibility; it is worth knowing that relative
permeability equals 1 + χ.
Susceptibility is a measure of the ability of a material object,
like a stainless steel weight, to concentrate within itself the
ambient magnetic fields in which it is placed. Thus a uniform
magnetic field, such as that due to the Earth within a laboratory,
will be concentrated to an extent proportional to χ for χ << 1;
but the consequences of this are generally benign. [The Earth’s
magnetic field points in different directions depending on both
latitude and longitude but its magnitude is generally about 40
A/m (field strength) or 50 µT (induction). For our purposes, we
assume that ‘field strength’ and ‘induction’ always differ by a
factor of µ0.] A non-uniform field, such as the stray field from
the servocontrol magnet of a balance, will also be concentrated
within the weight and may lead to a significant force between
the magnet and the weight.[2] The concentration of ambient
magnetic fields within a non-magnetic material is not permanent. It becomes zero if the ambient magnetic field is zero. As
a consequence, it is not possible to measure magnetic susceptibility without exposing the test sample to a magnetic field. But
since a weakly magnetic weight can become permanently magnetized in an external magnetic field, any measurement of susceptibility should be made with care, following the recommendations found in Section B.6 of [1].

Richard Davis
Bureau International des Poids et Mesures
Pavillon de Breteuil
Sèvres, Cedex 92312
France
Email: [email protected]
Polarization is a measure of a weight’s permanent magnetization. Magnetization of a weight may also lead to a significant
force between the servocontrol magnet and the weight and this
force, under certain conditions, is proportional to the magnitude
of the polarization.[2] As the name implies, the permanent magnetization remains in a material even when the ambient magnetic field is reduced to zero. It is thus possible to determine the
permanent magnetization without exposing the test mass to an
additional magnetic field.
Susceptibility and polarization are distinct quantities. Take
the extreme example of pure iron: an un-magnetized iron rod
nevertheless has a very high susceptibility so that, for instance,
it can attract the closer tip of a compass needle but cannot itself
be used as a compass. By contrast, if the rod were magnetized
it could either attract or repel the tip and could itself be used as
a compass needle. In fact, the common compass shows us that
magnetic interactions can produce torques as well as forces. We
can ignore torques in this note but their existence is an indication that a full analysis of magnetic interactions is complicated.
Figure 1 shows a simple model which, nevertheless, illustrates
the effects that have been mentioned. When a weight is placed
on a balance pan, vertical magnetic forces will add to or subtract
from the force of gravity on the weight. Since it is virtually
impossible to correct for the magnetic forces, they must be
made insignificant.
2. Stepping Back
What was the situation prior to the specifications of R-111
(2004)? For hundreds of years, precision balances were purely
mechanical devices and the best commercial mass standards
were made of brass (an alloy of copper and zinc). OIML class
F weights may still be constructed of this alloy. Sometimes the
standards were plated with gold, rhodium or nickel. Except for
nickel, all of these materials are fundamentally non-magnetic, and
a thin coating of nickel, if properly applied, does not degrade the
magnetic properties. Brass may contain iron impurities, rendering the alloy weakly magnetic, but sufficiently pure brass could
be obtained. With few exceptions, balances did not themselves
contain magnetic components. Starting in about 1950, ‘nonmagnetic’ stainless steel alloys gradually replaced brass as the
material of choice for the best commercial weights. One obvious
advantage of stainless steel is that its surface does not need to
A cylindrical weight of nominal mass N, density ρ, and height L is placed
on a balance pan. The magnetic environment of the balance itself is such
that the ambient vertical magnetic induction is Btop over the top surface of
the weight and Bbot over the bottom surface. In this example, the vertical
induction vectors point downward and have different magnitudes, i.e. Btop
> Bbot. (A fundamental property of magnetism requires this model to
include horizontal induction components as well, indicated by white
arrows. But these components can be ignored.) The SI unit of magnetic
induction is the tesla.

The susceptibility, χ, of the weight leads to a vertical force, Fχ. Materials
with positive susceptibility, such as stainless steel, are attracted to regions
of more concentrated magnetic induction (hence an upward force on the
weight in the example shown here). The formula shown at the right applies
if χ << 1.

If the weight is uniformly polarized in the vertical direction, then there will
be an additional vertical force, Fµ0M. In the example shown here, this force
will be upward if the polarization vector is in the same direction as the dark
blue arrows.

Note that the terms in blue are common to both force calculations. There
can be no vertical magnetic forces if there is no vertical gradient in the
ambient magnetic induction (i.e. no forces if Btop = Bbot).

Figure 1. The simple model shown here illustrates the magnetic forces that are considered in OIML R-111.
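The Fig. 1 model can be sketched numerically. The prefactors below follow the weak-susceptibility force expressions in Ref. [2]; since the figure's own formulas are not reproduced here, treat the exact factors, and all numerical inputs, as illustrative assumptions rather than values prescribed by R-111:

```python
import math

# Illustrative sketch of the Fig. 1 model (prefactors assumed from the
# weak-susceptibility treatment in Ref. [2]; valid only for chi << 1).
MU0 = 4e-7 * math.pi  # magnetic constant, N/A^2

def vertical_forces(nominal_kg, density_kg_m3, height_m,
                    b_top_t, b_bot_t, chi, polarization_t):
    """Return (F_chi, F_mu0M) in newtons for a cylindrical weight.

    polarization_t is mu0*M in tesla; a vertical difference in ambient
    induction between the top and bottom faces is assumed.
    """
    area = nominal_kg / (density_kg_m3 * height_m)  # cross-section, m^2
    f_chi = (chi / (2 * MU0)) * area * (b_top_t**2 - b_bot_t**2)
    f_pol = (polarization_t / MU0) * area * (b_top_t - b_bot_t)
    return f_chi, f_pol

# A 1 kg stainless steel weight at the OIML E1 limits (chi = 0.02,
# mu0*M = 2.5 µT) in an assumed 100 µT top-to-bottom induction difference:
f_chi, f_pol = vertical_forces(1.0, 8000.0, 0.08, 150e-6, 50e-6, 0.02, 2.5e-6)
print(f_chi, f_pol)  # both of order 1e-7 N, i.e. tens of micrograms of force
```

Note that both forces vanish when b_top_t equals b_bot_t, reproducing the figure's observation that a vertical gradient is required.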
be plated. Of course the recipes for stainless steel alloys specify
large fractions of iron and nickel, both ferromagnetic materials
at room temperature, but the high-temperature structure of
certain stainless steel alloys becomes non-magnetic (the nonmagnetic structure is called austenite, the usual magnetic structure is called ferrite) and this desirable property is ‘frozen in’ as
the alloy is cooled back to room temperature. However,
improper heat treatment of the alloy or subsequent cold
working during the fabrication of mass standards can sometimes degrade the desired magnetic properties. This was soon
noted by mass metrologists but, as long as the balances used
remained purely mechanical and free of magnetic parts, there
was still no problem.
Today, many modern balances and scales use electro-magnetic
servocontrol and, in some cases, electro-magnetic motors for
opening doors, changing weights etc. Short of perfect magnetic
shielding, these components produce stray magnetic fields in the
vicinity of the weighing pan. The weighing of a weakly magnetic
‘non-magnetic’ weight then becomes problematic. To make
matters worse, the magnetic environment at the weighing pan
of a balance may have additional local perturbations, for
example from iron reinforcement bars in concrete supports.
(Steel or cast iron weights present special problems that are
dealt with in Reference [3] and in Section B.6 of [1].)
Have weighing errors actually been observed using inferior
non-magnetic stainless steel weights on modern balances? Plentiful anecdotal evidence suggests that they have. Established
standards laboratories have had the opportunity to use their
oldest stainless steel weights on servocontrolled balances. Some
of these weights are seen to be unstable compared to their
behavior on the old mechanical balances. Anecdotal evidence
Vol. 1 No. 3 • September 2006
and our own experience at the BIPM also suggest that the magnetic properties of weights have improved as manufacturers
responded to current metrological needs. We also have many
stainless steel weights made in the 1950s with excellent magnetic properties. It is difficult to generalize.
3. Strategies
What strategies are used to minimize weighing errors due to
magnetic effects? The balance manufacturer, the weight manufacturer and the user must all play their part to ensure that mass
calibrations are free of magnetic errors. The manufacturer
designs balances so that the magnetic servocontrol mechanism
and auxiliary motors are ‘not too close’ to the weighing pan and
the stray magnetic fields coming from these components are
‘minimized.’ The weight manufacturer selects alloys having
‘appropriate’ magnetic properties. The user works in a laboratory that keeps extraneous sources of magnetic fields ‘to a
minimum.’ The phrases in quotations are all open to broad
interpretation and neither metrologists nor manufacturers can
be satisfied with such vague advice. Clearly the criteria must be
different for OIML class E1 and class M3 weights. Since the
magnetic effects discussed above depend on the volume of the
weight, we might expect that the magnetic requirements within
a weight class might depend on the ratio of nominal value to
maximum permissible error. The following strategy has been
adopted in [1] in order to derive quantitative specifications.
First, it is tacitly assumed that the user’s laboratory has a magnetically clean environment. Just as it is up to the user to ensure
that balances are not placed in direct sunlight or directly in air
drafts, a precise balance should not be placed on an iron table
or on a table that has iron re-enforcement bars. Next, it is
OIML weight class | E1   | E2   | F1  | F2
m ≤ 1 g           | 0.25 | 0.9  | 10  | –
2 g ≤ m ≤ 10 g    | 0.06 | 0.18 | 0.7 | 4
20 g ≤ m          | 0.02 | 0.07 | 0.2 | 0.8

Table 1. Maximum susceptibility, χ [1].
assumed that the ambient magnetic induction and its gradient
present at the weighing pan have a particular form and certain
worst-case values, based on a survey of various commercial balances. These assumptions then allow a specification of the
maximum magnetic susceptibility of standard weights (see
Table 1) such that their magnetic errors will be approximately
1/3 the typical measurement uncertainty for the corresponding
weight class. [3]
Unfortunately, permanent magnetization (polarization) is not
as simple to characterize. It will always be negligible if the stainless steel is 100 % austenitic, but this is not the case for finished
products made of many common non-magnetic alloys of stainless steel. The state of polarization depends on the exposure of
the weight to magnetic fields during its lifetime, including tests
of magnetic susceptibility as mentioned above. Thus ‘permanent
magnetization’ might not be permanent. In fact, a weight can
sometimes be demagnetized or ‘degaussed’ by exposure to a
strong magnetic field that is reversed periodically in sign while
gradually being reduced in magnitude. (Degaussing can also
have unintended effects. Perhaps for this reason degaussing is
not mentioned in R-111.) Unlike susceptibility, polarization is
not just a number. In reality, it is a non-uniform vector field. If
we think of a magnetized weight as divided into small, equal
volumes, the polarization of each volume points in a certain
direction. In general, the polarizations from these volumes have
different strengths and point in different directions. To deal with
this severe complication of real weights, one test recommended
in R-111 consists of measuring the magnetic field external to the
weight along its central axis at a point near the surface.[1,4]
Any non-zero effect due to the presence of the weight is attributed to a uniform axial polarization and limits to this polarization are established in the same way as for the susceptibility (see
Table 2). Because the model of uniform axial polarization is only
an approximation—and often a poor one—some weights with
significant polarization might pass this test. Nevertheless, the
test has proven to be useful in various trials. It will indeed catch
the majority of badly magnetized weights. One consequence of
the vector nature of polarization is that magnetized weights may
produce different weighing errors in a balance depending on
whether the weight is right-side up or upside down, as observed
in [5]. This can be seen in the simple model presented in Fig. 1.
Note that for class M weights, the largest of which are usually
made of gray cast iron, polarization is considered to be the
major risk. The high susceptibility of these weights is taken for
granted. However, for χ >> 1 the force due to susceptibility
reaches a plateau, thus becoming independent of the actual
value of χ.

OIML weight class | E1  | E2 | F1 | F2 | M1  | M1-2 | M2  | M2-3  | M3
Polarization (µT) | 2.5 | 8  | 25 | 80 | 250 | 500  | 800 | 1 600 | 2 500

Table 2. Maximum polarization, µ0M, in µT [1].
Section B.6 of R-111 lists several methods for measuring the
magnetic susceptibility and the polarization of weights.
However, such measurements can add significant overhead to
a calibration laboratory. An alternative is to trust that the manufacturer has met the magnetic specification for the weights,
similar to the trust that is often accorded to the density specification, and only seek to test those few weights that are behaving badly. Indeed R-111 recommends that for some weights,
depending on nominal value and class, one should rely on handbook values for susceptibility or the manufacturer’s specifications of magnetic properties ([1], Section B.6).
If it is impractical to test for both magnetic susceptibility and polarization, many laboratories have given the latter the higher priority. The polarization test can be performed with a Hall probe gaussmeter or a fluxgate magnetometer as described in R-111. Ref. [4] provides additional insight into this method and compares it to the use of a susceptometer, which has the advantage that it may be used to determine both polarization and susceptibility.
NOTE: Although the BIPM and the OIML enjoy a cordial
working relationship, they are independent organizations.
4. References
[1] International Organization of Legal Metrology, “Weights of
classes E1, E2, F1, F2, M1, M1–2, M2, M2–3 and M3. Part 1: Metrological and technical requirements,” Recommendation OIML
R-111-1 (2004). [This document may be downloaded at no cost
from oiml.org/publications/.]
[2] R.S. Davis, “Determining the magnetic properties of 1 kg mass
standards,” J. Res. National Institute of Standards and Technology,
vol. 100, pp. 209-225, 1995; Errata, vol. 109, p. 303, 2004.
[3] M. Gläser, “Magnetic interactions between weights and weighing
instruments,” Meas. Sci. Technol., vol. 12, pp. 709-715, 2001;
R.S. Davis and M. Gläser, “Magnetic properties of weights, their
measurements and magnetic interactions between weights and balances,” Metrologia, vol. 40, pp. 339-355, 2003.
[4] R.S. Davis, “Magnetization of Mass Standards
as Determined by Gaussmeters, Magnetometers
and Susceptometers,” NCSLI 2003 Conference
Proceedings, Catalog Number CP-C03-R-158.
[5] R.S. Davis and J. Coarasa, “Errors due to magnetic effects in 1 kg primary mass comparators,” Measurement, in press (already available online).
MEASURE | Vol. 1 No. 3 • September 2006 | www.ncsli.org
TECHNICAL TIPS
Stopwatch Calibrations, Part III:
The Time Base Method
Robert M. Graham
Abstract: The Time Base Method for calibrating stopwatches and/or timers involves determining the frequency of the
device’s time base using calibrated, traceable standards. Two non-contact methods for this measurement are discussed:
an ultrasonic acoustic pickup of the internal oscillator’s frequency or an inductive pickup of the electric field signal.
An easy way to implement these techniques is through the use of a commercial stopwatch calibrator.
1. Introduction
In previous issues of NCSLI measure ‘Technical Tips,’ two
methods for calibrating stopwatches and/or timers were presented: the Direct Comparison Method (Part I – [1]) and the
Totalize Method (Part II – [2]). In this paper, we will discuss a
third calibration method, the Time Base Method.
2. The Time Base Method
A stopwatch is made up of four distinct parts: (1) the power source; (2) the time base; (3) a technique or system for counting the time base; and (4) a method to display the elapsed time. [3] Because the uncertainty of the stopwatch corresponds directly to the uncertainty of its time base, measuring the frequency offset of the time base provides a value for the best uncertainty with which a stopwatch or timer can measure elapsed time (although that value does not include the effects of the stopwatch operator's reaction times; those effects must be measured and included separately). Unfortunately, you cannot connect a counter or digital oscilloscope directly to the time base and take a reading. The quartz crystals in most digital stopwatches are very small and delicate, and trying to probe the connections will usually damage or destroy the crystal. Therefore, a non-contact method must be used to measure the frequency of the time base crystal. The two most common methods to calibrate a digital, quartz-crystal stopwatch are to use either (1) an ultrasonic acoustic pickup to measure the crystal oscillator frequency (32.768 kHz for nearly all digital stopwatches), or (2) an inductive pickup to sense the electrical field oscillations of the crystal (see Fig. 1). One issue with these methods that must be overcome is the fact that the signals are very weak, and therefore the sensing electronics require a significant amount of signal amplification to boost the signal to levels that can be measured accurately by a frequency counter or digitizer. It can take anywhere from 60 to 120 dB of amplification to increase the signal to a measurable level. Once the signal has been adequately amplified, it is easily measured with a frequency counter, and any offset from nominal is calculated using the formula:

    offset = (FreqMeasured - FreqNominal) / FreqNominal ,    (1)

where FreqMeasured is the measured frequency and FreqNominal is the nominal time base frequency for the stopwatch under test (normally 32.768 kHz for a modern liquid crystal display stopwatch). A similar system can be used to calibrate mechanical stopwatches; however, in this case, a microphone is used to pick up the five ticks per second that is standard for most mechanical stopwatches.

[Figure 1. Typical time base method calibration setup]

A very easy way to utilize this calibration method is to use a commercially available stopwatch calibrator. These units have all of the required circuitry built in (pickup, amplifier, and display), and are relatively fast and easy to use. One such unit is shown in Fig. 2. After placing the stopwatch or timer on the unit's sensor module (shown to the left of the display unit in Fig. 2), the calibrator measures the stopwatch's crystal frequency and displays the offset from nominal in seconds per day (s/day). The calibrator can be used to measure mechanical stopwatches and older LED (light-emitting diode) stopwatches (with a crystal frequency of 4.19 MHz), as well as modern stopwatches designed to operate at a frequency of 32.768 kHz. The benefits of using a commercial stopwatch calibrator are that it is fast, easy to use, and can give very accurate results (the unit in Fig. 2 has an uncertainty of ±0.05 seconds/day, or ±0.000 058 %). Also, because the stopwatch's time base is being measured directly, the operator's reaction time does not contribute to the uncertainty of the stopwatch calibration process (but the reaction time does need to be included in any measurements made using the calibrated stopwatch). Finally, using the Time Base Method, a typical stopwatch calibration takes only minutes, rather than the several hours necessary for the other two methods. [1, 2] The primary disadvantage of this method is that it requires specialized equipment, and the equipment requires periodic calibration using traceable standards. However, since the uncertainty of the stopwatch calibrator is significantly better than the specification of commercially manufactured stopwatches, the overall uncertainty in the calibration process is given by the measured stopwatch offset.

[Figure 2. Commercial stopwatch calibrator]

Regardless of how the time base of a stopwatch is measured, it is sometimes necessary to convert the measurement from one format to another to allow the calibration results to be evaluated properly. Manufacturers normally state their specifications in one of two ways: as a percentage of reading (% rdg) or in seconds per day (s/day). It is therefore necessary to be able to convert from one unit to the other. To convert from s/day to % rdg, divide the s/day by the number of seconds in one day (86 400 s). For example, to convert an offset of +1.00 s/day, the percentage offset would be (1.00/86 400) x 100 = 0.0012 %. Conversely, to convert from % rdg to s/day, multiply the % rdg by the number of seconds in one day. Therefore, a specification of ±0.01 % would be 0.01 % of 86 400 (86 400 x 0.0001), or 8.64 s/day.

3. Conclusions
This series of Tech Tips has presented three different methods for calibrating stopwatches and timers: the Direct Comparison Method, the Totalize Method, and the Time Base Method. Depending on the number of calibrations a laboratory performs each year, the level of uncertainty required, and the available resources, one of these three methods should meet a laboratory's requirements. Each method has its own advantages and disadvantages, so select the method that works best for your applications.

4. References
[1] R.M. Graham, “Stopwatch Calibrations, Part I: The Direct Comparison Method,” NCSLI measure, vol. 1, no. 1, pp. 72-73, March 2006.
[2] R.M. Graham, “Stopwatch Calibrations, Part II: The Totalize Method,” NCSLI measure, vol. 1, no. 2, pp. 72-73, June 2006.
[3] J.C. Gust, R.M. Graham, and M.A. Lombardi, “Stopwatch and Timer Calibrations,” NIST Special Publication 960-12, May 2004. (Available free of charge from http://tf.nist.gov/timefreq/general/pdf/1930.pdf)

Robert M. Graham
Primary Standards Laboratory
Sandia National Laboratories (1)
Albuquerque, NM 87185-0665 USA
Email: [email protected]

(1) Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

(2) Certain commercial equipment, instruments, or materials are identified in this paper in order to adequately describe the experimental procedure. Such identification does not imply recommendation or endorsement by the author or NCSL International, nor does it imply that the materials or equipment identified are the only or best available for the purpose.
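Equation (1) and the unit conversions described in the article are straightforward to script. The following sketch uses our own function names (not from the paper) and assumes the common 32.768 kHz nominal frequency:

```python
# Sketch of Eq. (1) and the s/day <-> % rdg conversions described above.
SECONDS_PER_DAY = 86_400

def frequency_offset(freq_measured_hz: float,
                     freq_nominal_hz: float = 32_768.0) -> float:
    """Fractional frequency offset per Eq. (1)."""
    return (freq_measured_hz - freq_nominal_hz) / freq_nominal_hz

def offset_in_s_per_day(freq_measured_hz: float,
                        freq_nominal_hz: float = 32_768.0) -> float:
    """Time base error accumulated over one day, in s/day."""
    return frequency_offset(freq_measured_hz, freq_nominal_hz) * SECONDS_PER_DAY

def s_per_day_to_percent(s_per_day: float) -> float:
    """Convert an offset in s/day to a percentage of reading."""
    return s_per_day / SECONDS_PER_DAY * 100.0

def percent_to_s_per_day(percent_rdg: float) -> float:
    """Convert a percentage-of-reading specification to s/day."""
    return percent_rdg / 100.0 * SECONDS_PER_DAY

# Worked examples from the text:
print(round(s_per_day_to_percent(1.00), 4))   # 0.0012 (% rdg)
print(round(percent_to_s_per_day(0.01), 2))   # 8.64 (s/day)
```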
NEW PRODUCTS
Fluke Enhances its Family of Pressure Calibrators

Fluke Corporation has added nine new products and enhanced features in its family of Fluke pressure calibrators. The Fluke 718 Pressure Calibrators are designed to provide a compact, lightweight, total pressure calibration solution for transmitters, gauges and switches. They feature a new design that protects the built-in pneumatic calibration pump from fluid contamination, allowing it to be serviced in the field for reduced service expense and overall cost of ownership. Measuring less than nine inches in length and weighing just over two pounds, the rugged Fluke 718 is available in 1, 30, 100 and 300 psi models. It features pressure source and milliamp measurement, with mA accuracy of 0.015 %, percent error calculation, and switch test and Min/Max/Hold capability. The Fluke 718 can also measure pressure using any of the 29 Fluke 700Pxx Pressure Modules to cover applications up to 10,000 psi.

For more information on the Fluke 717 and 718 Pressure Calibrators, visit www.fluke.com/processtools, or contact Fluke Corporation: [email protected]
Radian Research Introduces
the RD-33 Multifunction
Three-Phase Electrical
Reference Standard
The RD-33 Three-Phase Power and Energy Reference Standard is designed to provide extremely accurate and precise measurements, while also providing a multitude of advanced power quality features. Accuracy is 0.01 % worst-case for all measurements, with a current input range of 20 mA – 200 A and a voltage input range of 60 V ac – 600 V ac. The RD-23 single-phase model is also available.
For information about the RD-33
Reference Standard call 1-765-449-5500 or
visit www.radianresearch.com
Masy Systems Receives A2LA
Accreditation for Calibrations
Masy Systems, Inc. has announced ISO/IEC 17025:2005 accreditation for calibrations by the American Association for Laboratory Accreditation (A2LA). Masy Systems' calibration laboratory is A2LA accredited for specific temperature, voltage, resistance, and frequency measurements. This milestone allows Masy Systems to perform 17025 accredited calibrations of dataloggers, temperature standards, and drywell temperature baths. The full accreditation certificate and scope are available on Masy Systems' website. Masy Systems is involved in inter-laboratory proficiency testing through Quametec Proficiency Testing Services, and is engaged in inter-laboratory comparisons with other facilities that demonstrate the highest levels of competence in thermodynamic testing.
For additional information on Masy
Systems, our calibration services or other
capabilities, please contact John Masiello
or visit www.masy.com
Absorbance Microplate
Recalibration Service
Stranaska LLC is pioneering a centralized facility for timely, affordable, and NIST-traceable recalibrations of UV/VIS absorbance microplate standards from the leading commercial instrument manufacturers. This unique measurement services program now provides a viable option to every life sciences company, especially those that own several microplate reader instruments from multiple vendors. Companies can now have all of their absorbance microplate standards recalibrated solely by a single independent and reputable analytical metrology company.

Contact information: Stranaska LLC; 4025 Automation Way, Building A, Fort Collins, CO 80525 USA; Tel: 970-282-3840; Email: [email protected]; Web: www.stranaska.com
PPCH-G™ Opens New Doors in Automated High Gas Pressure Calibration and Test Applications

DHI's PPCH-G™ is a pressure controller/calibrator for gas pressure operation from 1 to 100 MPa (150 to 15 000 psi). PPCH-G's emphasis is on high-end performance, minimizing measurement uncertainty, and maintaining precise control over a very wide pressure range.
PPCH-G can be configured with individually characterized, quartz reference
pressure transducer (Q-RPT) modules,
resulting in increased precision and
reduced measurement uncertainty. The
AutoRange™ feature supports infinite
ranging, automatically optimizing all
aspects of operation for the specific
range desired. A special control mode is
included to handle large and/or leaky test
volumes. PPCH-G is loaded with all the
features, including pressure “ready/not
ready” indicator with user adjustable criteria; intelligent AutoZero™ function;
16 SI and US pressure units; automatic
fluid pressure head correction; on-board,
programmable calibration sequences
with DUT tolerance testing; and FLASH
memory for simple and free embedded
software upgrades.
Contact DHI at 602-431-9100 or go to
www.dhinstruments.com for more
information
New Fluke 9640A RF
Reference Source
The NEW Fluke 9640A Reference
Source is the first RF calibrator to
combine level precision, dynamic range
and frequency capability in a single
instrument. It can be used to calibrate a
broad range of RF test equipment including spectrum analyzers, modulation
meters and analyzers, RF power meters
and sensors, measurement receivers, frequency counters and attenuators. With
built-in signal leveling and attenuation,
the Fluke 9640A provides the frequency
range and precision required to replace
many commonly used RF calibration
devices including level generators, RF
signal generators, power meters and
sensors, step attenuators and function
generators. The Fluke 9640A is supported by a range of common RF workload procedures within the Fluke
MET/CAL® Plus Calibration Measurement Software. The Fluke 9640A has a
best level accuracy of ±0.05 dB from 10
Hz up to 4 GHz and features integrated
signal leveling and attenuation, eliminating the need to use separate step attenuators. The leveling head delivers signals directly to the unit under test (UUT),
maintaining signal precision and noise
immunity throughout a +24 dBm to –130
dBm dynamic range to minimize losses,
noise, mismatch errors and to maintain
the calibrated integrity of the signal. The
Fluke 9640A comes with a 50 ohm leveling head and has an option to add a 75
ohm head.
For more information on the Fluke 9640A, visit www.fluke.com, or e-mail [email protected]

Major Upgrade to MudCats Metrology Software

Edison ESI, a Southern California Edison Company, has released a major upgrade to their MudCats Metrology Suite. The Cal Process Manager (CPM) Module is revolutionary software that integrates with the MudCats suite of modules or other commercial calibration management software, as well as operates effectively as a standalone program. The CPM Module is used for on-the-bench calibrations to collect and store calibration data for up to eight instruments simultaneously. CPM Module 2.1 brings more automation capability and the integration of the powerful new COM-Server object command class. This new capability can use CPM to communicate with external Object Linking and Embedding (OLE) automation objects. Internal CPM commands allow the system to share data with external applications, and to be used as a conduit to pass data back and forth between these applications through a read or write interface.

For more information contact Edison MudCats Sales at 714-895-0440 or visit their website at www.edisonmudcats.com

Dynamic Technology, Inc.

Dynamic Technology, Inc. is pleased to announce the acquisition of Metroplex Metrology Lab (MML) of Fort Worth, Texas. The acquisition is strategic to the overall growth strategy of DTI in Texas and helps meet the strategies set forth by corporate, which include growing the business, diversifying into new markets and geographies, and continuing to provide the best possible quality service available anywhere.

For more information, contact the Dynamic Technology, Inc., website at www.dynamictechnology.com
Endevco Sensor Calibration
Lab Offers Three Day
Turnaround
Endevco Corp. has renovated its calibration services laboratory and can now provide three-day turnaround on accelerometer, pressure transducer, microphone, and shock sensor calibrations for domestic U.S. customers. The new calibration area provides NIST-traceable calibrations over the frequency
range 0.01 Hz to 500 Hz, as well as POP
calibrations up to 10,000 g. Endevco can
also provide high-frequency calibration
up to 20 kHz with resonance search up
to 50 kHz, leading the industry in accuracy and reliability. The new lab utilizes
the Endevco Automated Accelerometer
Calibration System (AACS), which provides NIST-traceable calibrations over a
wide frequency range with uncertainties
down to 1.2 %. The AACS is regarded as one of the highest performing calibration systems in the world. A new environmental control system, as well as process
workflow and ancillary equipment
upgrades, were also integral to the renovation. In addition, Endevco can provide
custom calibrations including special
temperature/pressure environments and
system level calibrations.
For further information, contact Endevco
Corp. at www.endevco.com
New Fluke 5320A
Multifunction Electrical Tester
Calibrator
Fluke Corporation has announced the
Fluke 5320A Multifunction Electrical
Tester Calibrator, which is designed to
simplify calibration processes and
improve efficiency by incorporating in a
single, easy-to-use instrument the functionality required to calibrate and verify
a wide range of electrical test tools.
Designed for intuitive operation, the
Fluke 5320A has a large, bright color
display, a graphical interface that shows
users how to make terminal connections
between the unit-under-test and the calibrator, and includes graphical help
menus with calibration information. It
features three standard interfaces for
remote control, and supports MET/CAL® Plus Calibration Measurement Software for automating the calibration process and managing
calibration laboratory inventory. The
Fluke 5320A Multifunction Electrical
Tester Calibrator enables users to verify
and calibrate the following electrical
testers: Insulation Resistance Testers;
Continuity Testers and Earth Resistance
Testers; Loop/line Impedance Testers
and Ground Bond Testers; RCD (or
GFCI) Testers; Earth and Line Leakage
Current Testers; Voltmeters; and Hipot
Testers.
For more information on the Fluke 5320A,
visit www.fluke.com/5320A, or contact
Fluke at (888) 308-5277 or email [email protected]
Product information is provided as a
reader service and does not constitute
endorsement by NCSLI. Contact
information is provided for each
product so that readers may make
direct inquiries.
ADVERTISER INDEX

Andeen-Hagerling, Inc.  www.andeen-hagerling.com
AssetSmart  www.assetsmart.com
Blue Mountain Quality Resources  www.coolblue.com
Cal Lab  www.callabmag.com
Data Proof  www.dataproof.com
DH Instruments, Inc.  www.dhinstruments.com
DH-Budenberg Inc.  www.dh-budenberginc.com
Dynamic Technology, Inc.  www.dynamictechnology.com
Essco Calibration  www.esscolab.com
Fluke / Hart Scientific Corporation  www.hartscientific.com
Gulf Calibration Systems  www.gcscalibration.com
Holt Instrument  www.holtinstrument.com
Integrated Sciences Group  www.isgmax.com
Laboratory Accreditation Bureau  www.l-a-b.com
Masy Systems, Inc.  www.masy.com
Mensor Corporation  www.mensor.com
Morehouse Instrument Co.  www.mhforce.com
NCSLI Training Center  www.ncsli.com
Northrop Grumman Corporation  www.northropgrumman.com
Ohm-Labs  www.ohm-labs.com
Process Instruments Inc.  www.procinst.com
Quametec Corporation  www.quametec.com
Spektra  www.spektra-usa.com
Symmetricom  www.SymmTTM.com
Sypris Test and Measurement  www.calibration.com
TAC/Tour Andover Controls  www.tac.com/pe
The American Association for Laboratory Accreditation (A2LA)  www.a2la.org
Thunder Scientific Corporation  www.thunderscientific.com
Vaisala Inc.  www.vaisala.com
CLASSIFIEDS
measure
NCSL INTERNATIONAL
NCSLI Training Center
The NCSLI Training Center is designed to
provide a state-of-the-art training facility
with full electronic and staff support.
NCSLI member organizations are charged
a reduced rate. The facility offers:
• 900 Square Feet of Meeting Space
• State-of-the-art Audiovisual Equipment
• Access to Full Catering Services and Kitchen
• Holds 45 Theater; 30 Classroom; 24 Square;
and 20 U-Shaped
• Pricing (NCSLI Members):
1-2 day – $350/day; 3 day – $275/day
• Audiovisual: Screen;
Flip charts/markers;
Electronic board;
Overhead and LCD
projection.
• High Speed Internet
• Ample Free Parking
Located at: NCSL International Headquarters
2995 Wilderness Place, Suite 107 • Boulder, CO 80301
303-440-3339
[Image: covers of previous issues of NCSLI measure, Vol. 1 No. 1, Fall 2005]
Advertising
Opportunities
All advertising is reserved
exclusively for NCSLI Member
Organizations. For pricing, specifications, publication calendar, and deadlines, see www.ncsli.org/measure/ads.cfm
or contact Craig Gulka, 303-440-3339,
[email protected]
To contribute a Technical Article,
Tech Tip, Special Report, Review
Article, or Letter to the Editor, contact
Dr. Richard Pettit, 505-292-0789,
[email protected]
For information on submitting
announcements of new products
or services from NCSLI Member
Organizations, see
www.ncsli.org/measure/psa.cfm