
measure

Vol. 9 No. 3 September 2014

IN THIS ISSUE:
Electrical Units in the New SI:
Saying Goodbye to the 1990 Values
Evaluation of Proficiency Testing Results
with a Drifting Artifact
Calibration of Ultrasonic Flaw Detectors
An Uncertainty Model and Analyzer
for a Space Environmental Test Facility

CALIBRATE YOUR TEST BUDGETS...

SMART!
...with AssetSmart
Test Equipment
Management Software.

TEST EQUIPMENT & INSTRUMENT TRACKING


CALIBRATION & MAINTENANCE
EQUIPMENT POOL SUPPORT
TEST SUITE PLANNING
SYSTEM COMPONENT TRACKING
BARCODE SCANNING
AUTOMATIC CATALOGING
SEARCH BY PERFORMANCE SPECIFICATIONS

Do More With Less. Now That's SMART!

By PMSC
2800 28th Street, Santa Monica, California 90405 USA 800.755.3968 info@assetsmart.com www.assetsmart.com


CONTENTS

Welcome to NCSLI Measure, a metrology journal published by NCSL International for the benefit of its membership.

SPECIAL FEATURE
22 Software Analysis and Protection for Smart Metering
Charles B. do Prado, Davidson R. Boccardo, Raphael C. S. Machado,
Luiz F. R. da Costa Carmo, Tiago M. do Nascimento, Lucila M. S. Bento,
Rafael O. Costa, Cristiano G. de Castro, Sérgio M. Câmara, Luci Pirmez,
and Renato Oliveira

TECHNICAL PAPERS
30 Electrical Units in the New SI: Saying Goodbye to the 1990 Values
Nick Fletcher, Gert Rietveld, James Olthoff, Ilya Budovsky,
and Martin Milton
36 Realization and Dissemination of the International Temperature Scale of 1990 (ITS-90) above 962 °C
Andrew D. W. Todd and Donald J. Woods
42 Evaluation of Proficiency Testing Results with a Drifting Artifact
Chen-Yun Hung, Pin-Hao Wang, and Cheng-Yen Fang
48 A 40 GHz Air-Dielectric Cavity Oscillator with Low Phase Modulation Noise
Archita Hati, Craig W. Nelson, Bill Riddle, and David A. Howe
56 A Calibration System for Reference Radiosondes that Meets GRUAN Uncertainty Requirements
Hannu Sairanen, Martti Heinonen, Richard Högström, Antti Lakka,
and Heikki Kajastie
62 Calibration of Ultrasonic Flaw Detectors
Samuel C. K. Ko, Aaron Y. K. Yan, and Hing-wah Li
70 An Uncertainty Model and Analyzer for a Space Environmental Test Facility
Mihaela Fulop

DEPARTMENTS
3 Letter From the Editor
4 NMI News
16 Metrology News
80 Advertisers Index

NCSL International Craig Gulka, Executive Director
2995 Wilderness Place, Suite 107, Boulder, CO 80301 (303) 440-3339


NCSLI Measure J. Meas. Sci. |

NCSLI Measure (ISSN #1931-5775) is a metrology journal published by NCSL International (NCSLI).
The journal's primary audience consists of practitioners and researchers in the field of metrology,
including laboratory managers, scientists, engineers, statisticians, and technicians. NCSLI
Measure provides NCSLI members with practical and up-to-date information on calibration
techniques, uncertainty analysis, measurement standards, laboratory accreditation, and quality
processes, as well as metrology review articles. Each issue contains peer-reviewed technical
papers, technical notes, national metrology institute news, and other metrology information.
Author instructions are available at www.ncsli.org. If you are interested in purchasing advertising,
please visit www.ncsli.org for more information.

Managing Editor:
Michael Lombardi, National Institute of Standards and Technology (NIST), USA,
lombardi@ncsli.org

Associate Editors:
Jeff Gust, Fluke Corporation, jeff.gust@flukecal.com
Dr. Klaus Jaeger, Jaeger Enterprises, jaegerenterprises@comcast.net
Dr. Leslie R. Pendrill, SP Technical Research Institute of Sweden, leslie.pendrill@sp.se
Dr. James Salsbury, Mitutoyo Corporation, jim.salsbury@mitutoyo.com
Dr. Alan Steele, National Research Council of Canada, alan.steele@nrc-cnrc.gc.ca

NMI/Metrology News Editor:
Dr. Richard B. Pettit, Sandia National Laboratories (retired), randepettit@comcast.net

Advertising Sales:
Linda Stone, NCSL International, 2995 Wilderness Place, Suite 107, Boulder, CO
80301-5404 USA, lstone@ncsli.org

Technical Support Team:
Norman Belecki, NIST (Retired), USA
Carol Hockert, National Institute of Standards and Technology (NIST), USA
Dr. James K. Olthoff, National Institute of Standards and Technology (NIST), USA
Dr. Salvador Echeverria-Villagomez, Centro Nacional de Metrologia (CENAM), MX
Dr. Seton Bennett, National Physical Laboratory (NPL), UK
Dianne Lalla-Rodrigues, Antigua/Barbuda Bureau of Standards, Antigua W.I.
Dr. Angela Samuel, National Measurement Institute (NMI), Australia
Peter Unger, American Association for Laboratory Accreditation (A2LA), USA

Copyright © 2014, NCSL International. Permission to quote excerpts or to reprint any
figures, tables, and/or text from articles (Special Reports/Features, Technical Papers, Review
Papers, or Technical Notes) should be obtained directly from the author. NCSL International, for
its part, hereby grants permission to quote excerpts and reprint figures and/or tables from articles
in this journal with acknowledgment of the source. Individual teachers, students, researchers,
and libraries in nonprofit institutions and acting for them are permitted to make hard copies of
articles for use in teaching or research, provided such copies are not sold. Copying of articles
for sale by document delivery services or suppliers, or beyond the free copying allowed above, is
not permitted. Reproduction in a reprint collection, or for advertising or promotional purposes, or
republication in any form requires permission from one of the authors and written permission from
NCSL International.


Letter From the Editor


The vast number of topics relevant to the
field of metrology never ceases to amaze
me. This issue of Measure samples that
vastness by exploring a wide variety
of metrological topics. The issue opens
with a special feature about the use
of smart meters in the electric power
industry, followed by seven noteworthy
technical papers.
The special feature comes to us from Brazil, written by Charles
do Prado of Inmetro, the national metrology institute of Brazil, along
with numerous colleagues. Their paper, entitled "Software Analysis
and Protection for Smart Metering," describes how the security of
smart electricity meters has become a major topic of concern, and
how new advances in software technology can improve the situation.
Smart meters operate in an exposed environment, making critical
measurements that are often the target of fraud and manipulation. This
problem is of particular concern in Brazil, the world's fifth-largest
nation in terms of both area and population. The authors tell us that
some 60 million consumer electricity meters are located in Brazil,
operated by 59 different power companies, and that the total losses
due to fraud are about $1 billion USD per year. The work being done
by Inmetro to make smart meters more secure should have a huge
positive impact on the Brazilian economy.
The parade of technical papers is led by Nick Fletcher of the
Bureau International des Poids et Mesures (BIPM), assisted by some
distinguished colleagues. Their paper, "Electrical Units in the New
SI: Saying Goodbye to the 1990 Values," is an in-depth discussion
about the proposed changes to the International System of Units (SI)
that will likely occur in 2018. These changes will redefine four of the
SI's seven base units of measurement. Fletcher skillfully explains the
science behind the change, how the change will be implemented, and
the impact of the change on electrical metrology. The original version
of this manuscript was selected as the best overall paper at the 2014
NCSLI Workshop and Symposium in Orlando, Florida.
Explaining how to realize the ITS-90 temperature scale above
962 °C is a topic not often explored in the metrology literature. However,
Andrew Todd and Donald Woods of NRC, the national metrology
institute of Canada, do exactly that. Their excellent paper, "Realization
and Dissemination of the International Temperature Scale of 1990 (ITS-90)
above 962 °C," explains how NRC realizes its high temperature
scale, how it can extrapolate the scale to temperatures higher than
2500 °C, and provides an uncertainty analysis across the entire range.
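For context, ITS-90 defines temperatures above the silver point through the ratio of spectral radiances given by Planck's law. The Python sketch below inverts that defining ratio for a single wavelength; it is a simplified illustration of the definition, not NRC's realization procedure:

```python
import math

C2 = 1.4388e-2        # second radiation constant, m*K (ITS-90 value)
T_AG = 1234.93        # silver freezing point, K (an ITS-90 fixed point)

def its90_temperature(ratio, wavelength, t_ref=T_AG):
    """Invert the ITS-90 radiance-ratio definition above the silver point.

    ratio      -- measured L(T)/L(T_ref) at one wavelength
    wavelength -- radiometer wavelength in metres

    Solves  ratio = (exp(c2/(lam*T_ref)) - 1) / (exp(c2/(lam*T)) - 1)  for T.
    """
    denom = (math.exp(C2 / (wavelength * t_ref)) - 1.0) / ratio
    return C2 / (wavelength * math.log(denom + 1.0))

# Example: a radiance ratio of 10 at 650 nm relative to the silver point
# corresponds to roughly 1417 K.
t = its90_temperature(10.0, 650e-9)
```

In practice a radiation thermometer measures the ratio and the equation is solved numerically or, as here, in closed form for a monochromatic approximation.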
If you have participated in proficiency tests that involve
interlaboratory comparisons, you have likely had questions about how
to deal with a drifting artifact. Chen-Yun Hung and her colleagues at
CMS/ITRI in Taiwan provide a detailed solution to this problem in
their paper, "Evaluation of Proficiency Testing Results with a Drifting
Artifact." The paper combines theory with practice, providing the
requisite data analysis, but also drawing on the experience that
CMS/ITRI has gained in more than ten years of conducting proficiency
tests for Taiwan's calibration laboratories.
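The drift-correction idea is straightforward to sketch: model the reference value as a straight line through the pilot laboratory's measurements, then score each participant against the drift-corrected reference with the usual En ratio. The following Python sketch is a generic illustration with made-up numbers, not the authors' actual algorithm:

```python
# Hypothetical illustration of proficiency scoring with a drifting artifact.
# Generic approach only; the paper's method and numbers may differ.

def fit_drift(times, ref_values):
    """Least-squares straight line through the pilot lab's measurements."""
    n = len(times)
    tbar = sum(times) / n
    vbar = sum(ref_values) / n
    slope = sum((t - tbar) * (v - vbar) for t, v in zip(times, ref_values)) \
            / sum((t - tbar) ** 2 for t in times)
    return slope, vbar - slope * tbar

def e_n(x_lab, u_lab, x_ref, u_ref):
    """E_n score; |E_n| <= 1 is normally considered satisfactory."""
    return (x_lab - x_ref) / (u_lab ** 2 + u_ref ** 2) ** 0.5

# Pilot lab measures the artifact on days 0 and 120 (made-up values):
slope, intercept = fit_drift([0, 120], [10.0002, 10.0014])
x_ref_day60 = intercept + slope * 60          # drift-corrected reference value
score = e_n(10.0011, 0.0008, x_ref_day60, 0.0004)
```

The reference uncertainty fed to e_n would, in a real scheme, also include a component for the drift-model residuals.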
A new type of microwave oscillator (40 GHz) is the topic presented
by Archita Hati and her colleagues at NIST in Boulder, Colorado. Her
paper, "A 40 GHz Air-Dielectric Cavity Oscillator with Low Phase
Modulation Noise," describes a reference oscillator that can be used in
millimeter wave data communication and radar systems. Phase noise
metrology is essential for characterizing this type of oscillator, which
not only has to produce a stable and accurate frequency, but also must
produce a signal that is spectrally pure. Details of both the oscillator
design and the measurement apparatus are provided.
Radiosondes are instrument packages carried into the atmosphere,
usually by weather balloons, to measure environmental parameters
such as upper air humidity. Hannu Sairanen and his colleagues at
MIKES, the national metrology institute of Finland, have developed
a system to calibrate reference radiosondes. Their paper, entitled "A
Calibration System for Reference Radiosondes that Meets GRUAN
Uncertainty Requirements," proves that there is indeed a connection
between metrology and meteorology (something we have all been
asked about at one time or another), by presenting a system that allows
meteorologists to establish traceability to the SI.
"Calibration of Ultrasonic Flaw Detectors" is the topic
explored by Samuel Ko and his colleagues at SCL, the national
metrology institute of Hong Kong. An ultrasonic flaw detector is
an instrument that can find defects under the surface of a material,
including steel materials and welding joints. This interesting paper
provides a calibration procedure for these instruments, accompanied
by a full uncertainty analysis.
Mihaela Fulop of SGT Metrology Services in Ohio wraps up the
issue by describing a software tool designed to model and analyze
measurement uncertainty at NASA's Spacecraft Propulsion Research
Facility. Her paper, "An Uncertainty Model and Analyzer for a Space
Environmental Test Facility," describes this ambitious project in
detail, presenting the tool's development, how it was implemented at
a large organization, and how it provides significant time savings for
the technical staff.
As always, we hope you enjoy this issue of Measure.
Sincerely,

Michael Lombardi
Managing Editor
lombardi@ncsli.org

HOW TO REACH US
NCSLI Measure, 2995 Wilderness Place, Suite 107, Boulder, CO 80301-5404 USA
www.ncsli.org measure@ncsli.org


NMI NEWS
New NPL Strategic Partnership with Strathclyde and Surrey
Following a formal competitive process, the universities of Strathclyde and Surrey have been selected to develop a strategic partnership with the government and the National Physical Laboratory (NPL) of the United Kingdom. This new partnership will help to provide future leadership of NPL.
The partnership will strengthen both fundamental research and
engagement with business by applying measurement science to
support innovation and growth. The goals are to:
• Bring greater expertise and intellectual flexibility to strengthen the laboratory's science;
• Make better use of the existing facilities by strengthening the laboratory's links with its academic partners, through new and existing collaborations with academia and industry;
• Encourage greater interaction with business, driven by closer integration of existing innovation infrastructure and commercial activity;
• Make better use of the site at Teddington by granting partners access to its spare capacity; and
• Allow for the formation of a dedicated applied science postgraduate institute.
The strategic partnership offers exciting prospects to enhance the
reach and impact of NPLs science and commercial activities. NPL will
continue to work with a wide range of academic and industrial partners
both across the UK and internationally. In the new arrangement, the
Department for Business, Innovation and Skills will own the operating
company, NPL Management Ltd. Until now, NPL has been operated
under a government-owned, contractor-operated arrangement.

VSL Develops Reference Set-up for High Voltage Power Grids
The liberalization of the energy market and the increased use of renewable energy sources have raised interest in metering the electricity flows between the parties exploiting the electricity grid. Such grid metering must be performed with high accuracy, since small errors correspond to large amounts of money.
Driven by the economic importance of correct revenue metering in high voltage (HV) grids, the Dutch national metrology institute, VSL, has developed a reference set-up for validating existing revenue metering systems in the HV power grid. The original aim was an uncertainty of better than 0.1 %, at least five times more accurate than existing grid revenue metering systems. The VSL reference set-up is built around custom-made current and voltage transformers (CTs and VTs) and a three-phase reference power/energy meter. With this set-up, power and energy can be measured
in three-phase high voltage lines, at 110 kV and 150 kV, with currents
up to 5 kA.

Validation of the complete VSL HV revenue metering set-up at NRC, showing the CTs, VTs, and reference power meter.
After calibrating the individual components in the VSL reference
system, an overall validation was performed at the National Research
Council (NRC) Canada (see photograph). While an agreement of around 100 parts per million (ppm) was expected, the actual agreement between the VSL and NRC systems was better than 25 ppm at a power level of 200 MW. Based on these results, a VSL system uncertainty of better than 300 ppm is estimated for actual on-site measurements, three times better than the original goal.
The VSL system has been ready for on-site measurements since early 2014, just in time for the power plant owners and large industrial electricity consumers that have already contacted VSL for on-site verification of their revenue metering systems.
For more information, contact Gert Rietveld: grietveld@vsl.nl
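A back-of-envelope calculation shows why ppm-level accuracy matters at these power levels. The energy price used below is an assumed round number, not a figure from VSL:

```python
# Rough illustration of why ppm-level accuracy matters in revenue metering.
# The energy price is a made-up assumption, not a figure from the article.
POWER_W = 200e6            # 200 MW transfer, as in the VSL/NRC validation
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.05       # assumed wholesale price per kWh

def annual_error_cost(fractional_error):
    """Money mis-billed per year for a given fractional metering error."""
    energy_kwh = POWER_W / 1000 * HOURS_PER_YEAR
    return energy_kwh * PRICE_PER_KWH * fractional_error

cost_300ppm = annual_error_cost(300e-6)    # VSL's estimated on-site uncertainty
cost_1000ppm = annual_error_cost(0.1e-2)   # the 0.1 % original design goal
```

Even at the 300 ppm level, a continuously running 200 MW line mis-metered by the full uncertainty would mis-bill tens of thousands per year at this assumed price.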

NIST Forms Forensic Science Standards Board
The new National Institute of Standards and Technology (NIST) Organization of Scientific Area Committees (OSAC) is in the process of selecting members from the forensic science, criminal justice, and academic research communities to serve as committee and subcommittee members. NIST needs between 500 and 600 subject matter experts, representing a balance of experience and perspectives, to serve on OSAC. An OSAC term will be three years.
NIST is establishing OSAC to support the development of forensic
science documentary standards, and to ensure the accuracy of methods
and practices in the nation's crime laboratories. The OSAC will also
help determine each forensic discipline's research and measurement
standards needs and ensure that a sufficient scientific basis exists for
each discipline. OSAC will consist of a Forensic Science Standards
Board (FSSB), three resource committees, five Scientific Area Committees (SACs), and 23 discipline-specific subcommittees. The five
SACs are:
(1) Biology/DNA
(2) Chemistry/Instrumental Analysis
(3) Crime Scene/Death Investigation
(4) Information Technology/Multimedia
(5) Physics/Pattern

As of July 11, 2014, NIST and the Department of Justice (DOJ) have appointed 17 members to the new organization. The NIST-DOJ
membership selection team is currently looking at applications to fill
the remaining OSAC positions.
For more information, visit: www.nist.gov/forensics/osac.cfm

PTB Develops Reference Standard for Nano-Dimensional Metrology

Physikalisch-Technische Bundesanstalt (PTB) has developed a reference method for measuring the dimensions of nanostructures (e.g., on semiconductor chips) with high accuracy. The method combines atomic force microscope (AFM) measurements with atomic-resolution measurements performed using a transmission electron microscope (TEM).
The term critical dimension (CD) is often used synonymously with structure width, and CD metrology plays an essential role in process control in the semiconductor industry, ensuring reliable manufacturing of micro- and nano-structures on silicon wafers and photomasks. Due to the progressive miniaturization of the fabrication process (today down to structure widths of 22 nm), the requirements placed on the measurement uncertainties of CD metrology, down to the sub-nanometer range, are becoming ever more demanding. Industry has an urgent need to verify and characterize the diverse CD measuring instruments used in semiconductor production lines, such as optical scatterometers.
An aberration-corrected, high-resolution TEM is capable of measuring nano-structures on thin, single-crystal layers with atomic resolution. It offers the best accuracy in calibrating the width of a feature on the cross-section polish of line structures by using the atomic spacing in the feature as an internal ruler. In this way, the CD can be directly linked to the atomic spacing in the crystal lattice, which can be traceably calibrated by a combined optical and x-ray interferometer. For example, the lattice spacing d111, which is the distance between the (111) crystal planes of the silicon isotope 28Si, has been determined to be (0.31356011 ± 0.00000017) nm.
Before the TEM measurements, two separate structures on the wafer are measured with the AFM. After that, one of the structures is carefully detached and thinned down to less than 100 nm by means of a focused ion beam (FIB) for the TEM measurements. Unfortunately, this area of the sample is no longer available for further CD measurements. However, the TEM measurements allow any inherent systematic errors of the AFM method, such as the probe diameter, to be detected and corrected. As a result, reference CD values can finally be determined on the intact structure with an estimated combined standard measurement uncertainty of 0.81 nm.
This result has been confirmed through five investigations of the CD of a reference structure, carried out independently of each other on different TEMs. In addition, the new reference method has been successfully used to measure different structure characteristics of an EUV (extreme ultraviolet) photomask. With these results, measurements carried out by means of synchrotron radiation at PTB's EUV scatterometer were confirmed. Integrating the FIB and TEM measurements results in a more accurate traceability value for the final structure.

3D representation of a CD AFM image, measured on a group of five features.

For more information, contact Gaoliang Dai at: gaoliang.dai@ptb.de

BIPM Publishes Supplement to 8th Edition of SI Brochure

Supplemental changes to the 8th edition of the SI Brochure, published in 2006, were approved at the 103rd meeting of the International Committee for Weights and Measures (CIPM), held at the Bureau International des Poids et Mesures (BIPM) on March 12-13, 2014.
The new supplement includes the new definition of the astronomical unit of length adopted by the XXVIII General Assembly of the International Astronomical Union (IAU) in 2012. In addition, the following tables were updated:
• Table 3. Coherent derived units in the SI with special names and symbols.
• Table 4. Examples of SI coherent derived units whose names and symbols include SI coherent derived units with special names and symbols.
• Table 6. Non-SI units accepted for use with the International System of Units.
• Table 7. Non-SI units whose values in SI units (the natural unit of speed excepted) must be determined experimentally.
Several paragraphs were also updated in Sections 1.2 and 5.3.1, based on changes to ISO and IEC standards (the ISO/IEC 80000 series, Quantities and units). Finally, changes were made to Section 5.3.5 (Expressing the measurement uncertainty in the value of a quantity) due to updates to the Guide to the Expression of Uncertainty in Measurement (GUM) and the use of the CODATA 2010 values.
To obtain the SI Supplement 2014, visit: www.bipm.org/en/si/si_brochure/

NIST Develops a Gold Standard for Hall Resistance
Researchers at the National Institute of Standards and Technology (NIST) have developed a novel method of fabricating graphene-based microdevices that may hasten a new generation of standards for electrical resistance. The new design can be adjusted to produce a wide range of electronic properties.
Since 1990, the internationally accepted means of realizing the ohm
has been based on the quantum Hall effect (QHE), in which resistance is
exactly quantized in increments dictated by constants of nature. The QHE
is measured using electrical contacts placed along the sides of a rectangular, cryogenically cooled, current-bearing conductor (the Hall bar) in
which the charge carriers behave like a two-dimensional (2D) gas.
The widely used standards for such measurements are based on GaAs/AlGaAs heterostructure devices and require high magnetic field strengths in the range of 5 to 15 tesla (T), typically obtainable only with expensive superconducting magnets. The QHE plateaus can be observed in graphene at lower magnetic field strengths and higher temperatures than in semiconductor devices. In general, there are three ways to obtain monolayer graphene sheets suitable for that task: the sticky-tape exfoliation method used in 2004 to isolate the material for the first time; chemical vapor deposition on copper or another material; and growth on an insulating silicon carbide substrate, which the PML researchers employ.

Configuration of the QHE device showing dimensions. The blue-gray rectangle in the center is the open face of the Hall bar. The locations of graphene components are outlined by white lines. The source and drain are at the left and right ends of the bar, while electrical contacts are both above and below the bar.
The NIST fabrication method involves coating a sheet of graphene
on a section of silicon carbide wafer with about 15 nanometers of
gold before any lithography. Patterns are developed using traditional
photolithography to remove any unwanted gold-coated graphene.
Then, the areas that will be the Hall bar contacts get a thicker coating
of gold, so that they will make good connections for wires used in
electrical measurements. In the last step, the gold layer over the area
of graphene that will serve as the Hall bar is removed with dilute aqua
regia, a mixture of nitric acid, hydrochloric acid, and deionized water,
leaving the graphene almost completely clean.
The aqua regia etching produces helpful p-type doping in the graphene: molecules from the acids remain on the surface, reducing the carrier density and improving the mobility of the electrons that remain.
that remain. Low carrier density is important because the higher the
density of charge-carriers in the Hall bar, the higher the magnetic field
strength required to observe the critical QHE plateaus.
The new devices have carrier densities in the range of 3 × 10¹⁰ per cm² to 3 × 10¹¹ per cm², allowing observation of clearly defined resistance quantization at magnetic field strengths of less than 4 T. The
p-type molecular doping effect can be reduced by heating in argon
gas, and is restored by dipping in aqua regia.
For more information, contact Rand Elmquist at: randolph.elmquist@nist.gov
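The connection between carrier density and required field can be sketched with two textbook QHE relations: the quantized resistance R_K/i and the field B = nh/(ie) at which filling factor i is reached. The Python sketch below uses the carrier densities quoted in the article, not NIST's specific device data:

```python
# Quantized Hall resistance values and the field needed to reach a plateau.
# Standard textbook QHE relations, not NIST's particular device parameters.
H = 6.62607015e-34       # Planck constant, J*s
E = 1.602176634e-19      # elementary charge, C

R_K = H / E**2           # von Klitzing constant, ~25812.807 ohm

def plateau_resistance(i):
    """Hall resistance on the i-th QHE plateau."""
    return R_K / i

def field_for_plateau(carrier_density_per_m2, i=2):
    """Magnetic field B (tesla) at which filling factor i is reached: n = i*e*B/h."""
    return carrier_density_per_m2 * H / (i * E)

# Carrier densities quoted in the article, converted from cm^-2 to m^-2:
b_low = field_for_plateau(3e10 * 1e4)    # ~0.62 T at the low-density end
b_high = field_for_plateau(3e11 * 1e4)   # ~6.2 T at the high-density end
```

At the low end of the quoted density range, the i = 2 plateau is reached well below 1 T, which is why low carrier density lets these devices avoid expensive high-field magnets.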


United Arab Emirates (UAE) Launches New Metrology Institute
The Emirates Metrology Institute (EMI) was formally launched at the
Abu Dhabi Quality Forum 2014 held in April. EMI will be a research
and educational institution which aims to establish a solid quality
infrastructure through the provision of all measurement references
and standards to ensure compliance with international measurement
specifications and standards.
EMI will seek to strengthen cooperation and partnership with relevant
institutions to ensure accuracy and precision in measurements and
compliance with international standards. In addition, EMI will provide
technical requirements for measuring instruments, serve as a national
reference for standardization laboratories, enable quality measurement
systems in the UAE to receive international accreditations, provide
consultancy for regulators and other relevant bodies, offer training
courses in standards and conformity assessment, and spread awareness
about the importance of metrology and its various applications.
The EMI is supported in this effort by a Memorandum of
Understanding (MOU) with the Dutch National Metrology Institute,
VSL. EMI/QCC is developing measurement capabilities in mass,
volume, flow, pressure/vacuum, force, torque, temperature, humidity,
dimensions, electrical, and time and frequency. All capabilities are
expected to be at the top level in the Gulf region.
For more information, visit the Gulf Association for Metrology (GULFMET) website: www.gulfmet.org/gulfmet/gulfmet/members/uae

8 Critical Questions
Is Your Humidity Calibration Lab Competent?
To properly and accurately calibrate and adjust relative humidity instruments is no simple task.
• Is the lab accredited?
• Does it know and understand the uncertainty of the calibration?
• Are the reference instruments really reference instrument level?
Download the paper "Is Your Humidity Calibration Lab Competent?"
content.rotronic-usa.com/Calibration-Questions
sales@rotronic-usa.com


Discover the
Blue Box
Difference
8000B Automated Precision Voltage Measurement System

Calibration of Fluke 57xx series

Traceability to 10V Zener Reference

1200V Range

Automated Binary Voltage Divider

Bipolar Measurements

Accuracy as low as 0.05 ppm

Built in 20 Channel Scanner

Self Calibration

www.mintl.com


NRC Improves Ultimate Accuracy of a Single-Ion Clock

NRC's atomic optical clock, in which a strontium ion is loaded into the trap. Next, laser beams cool the ion to about 0.002 K above absolute zero to reduce its motion to a minimum. The strontium ion is then ready for interrogation by an ultra-stable laser system designed to find the center of a special optical transition called the clock transition. The laser radiation referenced to the clock transition provides the signal for keeping time.

Single isolated atomic ions, trapped using electrodynamic rf fields and laser-cooled to extremely low kinetic temperatures, are one of nature's closest approximations to an isolated and unperturbed quantum system.


Such systems have been proposed as near-ideal atomic frequency
references which can significantly outperform the current definition
of the International System of Units (SI) second based on a cesium
hyperfine transition. Researchers at the National Research Council
(NRC) of Canada report a substantial improvement toward this ultimate
goal by exploiting a unique intrinsic property of their strontium ion
optical frequency standard.
The effect utilizes the atom's internal structure together with
the dynamics of the single ion in the trapping field. Knowing that the
driven motion of the ion (micromotion) creates two opposing shifts,
one due to the Stark effect and the other to the relativistic time-dilation
effect, the NRC researchers were able to fine-tune the shifts using the
trap drive frequency so that they could effectively cancel them to a high
degree. The cancellation effect is only possible when the reference or
clock transition has a negative differential polarizability as in the case
of the strontium ion.
From the measured cancellation frequency, it was also possible to
improve on another critical atomic parameter, the differential scalar
polarizability. This parameter plays an essential role in determining the
blackbody radiation shift, the largest source of uncertainty for this ion
system and many of the other proposed next-generation optical atomic
clocks. The accurate value of the polarizability reduces the uncertainty
caused by the blackbody coefficient to below the 10⁻¹⁸ level. It also
allows a reduction of the driven motion shifts by a factor of 200.
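The cancellation condition itself can be sketched from textbook expressions: both micromotion shifts scale with the square of the RF field, so their sum vanishes at a single trap drive frequency whatever the field amplitude. The Python sketch below reproduces that order-of-magnitude estimate; the polarizability value is approximate and the derivation is simplified, so treat the result as illustrative only:

```python
import math

# Order-of-magnitude sketch of the micromotion-shift cancellation for 88Sr+.
# Textbook expressions only; the polarizability below is approximate, not the
# paper's published value, and factors of order unity are simplified.
H = 6.62607015e-34        # Planck constant, J*s
E_CH = 1.602176634e-19    # elementary charge, C
C = 2.99792458e8          # speed of light, m/s
M = 88 * 1.66053907e-27   # approximate 88Sr+ ion mass, kg
NU = 4.447792e14          # 674 nm clock-transition frequency, Hz
D_ALPHA = -4.8e-40        # differential static polarizability, J*m^2/V^2 (approx.)

# The Stark shift (+|d_alpha|*E0^2/(4h), positive because d_alpha < 0) and the
# time-dilation shift (-nu*e^2*E0^2/(4*m^2*Omega^2*c^2)) have opposite signs
# and the same E0^2 dependence; setting their sum to zero gives:
omega_magic = (E_CH / (M * C)) * math.sqrt(H * NU / abs(D_ALPHA))
f_magic_mhz = omega_magic / (2 * math.pi) / 1e6   # cancellation drive frequency, MHz
```

Because E0 drops out, the cancellation depends only on the drive frequency, and it exists only when the differential polarizability is negative, exactly as the article states for the strontium ion.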
These results pave the way for a clock evaluation two orders of magnitude below that of the best cesium clocks that currently define the SI second. This work is expected to contribute to new advances for applications such as timekeeping, relativity, astronomy, navigation, geodesy, space exploration, and tests of fundamental physics postulates.
The results were recently published: Pierre Dubé, Alan A. Madej, Maria Tibbo, and John E. Bernard, "High-Accuracy Measurement of the Differential Scalar Polarizability of a 88Sr+ Clock Using the Time-Dilation Effect," Phys. Rev. Lett., vol. 112, 173002, 2014.
For more information, contact Pierre Dubé at: pierre.dube@nrc-cnrc.gc.ca

OHM-LABS PRECISION CURRENT SHUNTS
HIGH ACCURACY
LOW TEMPERATURE COEFFICIENT
STABLE OVER TIME
OPTIONAL TEMPERATURE SENSOR
INCLUDES ACCREDITED CALIBRATION
CALIBRATION SERVICE TO 1000 A

MODEL     ACCURACY     MODEL      ACCURACY
CS-0.1    < 0.005%     CS-100     < 0.01%
CS-1      < 0.005%     CS-200     < 0.02%
CS-5      < 0.01%      CS-300     < 0.05%
CS-10     < 0.01%      CS-500     < 0.02%
CS-20     < 0.01%      CS-1000    < 0.05%
CS-50     < 0.01%      MCS        MULTIPLE

STANDARD MODELS LISTED; CUSTOM VALUES AVAILABLE. SEE WWW.OHM-LABS.COM FOR DETAILS.
611 E. CARSON ST., PITTSBURGH, PA 15203 TEL 412-431-0640 FAX 412-431-0649 WWW.OHM-LABS.COM

VSL Develops New Calibration Facility for
Power Quality Parameters

A new facility has been developed at VSL, the Dutch
Metrology Institute, to calibrate and test single or
three-phase power analyzers and calibrators for power
quality (PQ) parameters, such as frequency, time base,
voltage, current, power with arbitrary phase angle,
harmonics and total harmonic distortion (THD) for
voltage and current, voltage dips and swells, voltage
fluctuations (flicker), and unbalance.
The new sampling system allows for flexibility in the generation
and analysis of PQ signals. Control and analysis software has been
written to generate the signals and to analyze the measurement results
in full accordance with the relevant standards, in particular
IEC 61000-4-30. Hence, in order to have reliable and comparable results,
the PQ parameters are determined in the same way as the device under
test does. Consequently, the aim is not necessarily the lowest possible
calibration uncertainty, but the most valuable calibration result for the
customer instead.

Schematic diagram of the VSL reference setup used to calibrate
a PQ analyzer. Traceability is obtained by calibration of the
resistive divider, the shunt resistor, and the analog-to-digital
converters (ADCs).
For several parameters, the PQ analyzer under test can be verified
to conform to written standards such as IEC 61000-3-3, IEC

61000-4-30, and EN 50160. For example, instantaneous, short-term,
and long-term flicker severity can be tested with square or sinusoidal
modulation, for signals with or without harmonic distortion. Another
example is the 1.5 s time constant of the internal filter, which can be
tested by measuring the response to a step change in the harmonic
distortion of the mains signal. In all cases, test measurements and
calibration parameters can be configured to meet the customer's
requirements.
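Determining a parameter "the same way the device under test does" can be illustrated with a minimal total harmonic distortion (THD) computation over an FFT. This is only a sketch, not VSL's software: it omits the windowing and harmonic grouping that IEC 61000-4-7 prescribes, and assumes an integer number of fundamental cycles:

```python
import numpy as np

def thd(samples, fs, f1=50.0, nharm=40):
    # Amplitude spectrum over an integer number of fundamental cycles,
    # so every harmonic falls exactly on one FFT bin (no leakage).
    n = len(samples)
    spec = np.abs(np.fft.rfft(samples)) * 2.0 / n
    df = fs / n                                # bin spacing in Hz
    mags = [spec[round(k * f1 / df)] for k in range(1, nharm + 1)]
    # THD = RSS of harmonics 2..nharm relative to the fundamental
    return np.sqrt(sum(m * m for m in mags[1:])) / mags[0]

# Ten cycles of a 50 Hz signal with 5 % third and 3 % fifth harmonic
fs = 10000.0
t = np.arange(0, 0.2, 1.0 / fs)
v = (np.sin(2 * np.pi * 50 * t)
     + 0.05 * np.sin(2 * np.pi * 150 * t)
     + 0.03 * np.sin(2 * np.pi * 250 * t))
print(f"THD = {100 * thd(v, fs):.2f} %")       # sqrt(0.05^2 + 0.03^2) ~ 5.83 %
```

The point of the comparability requirement is that both the reference system and the device under test evaluate exactly this kind of defined algorithm, so differences reflect hardware, not arithmetic.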

For more information, contact Helko van den Brom at:
hvdbrom@vsl.nl

NPL's Beginner's Guide to Measurement in
Electronic & Electrical Engineering

The National Physical Laboratory (NPL) of the
United Kingdom and the Institution of Engineering and Technology
have jointly produced a guide for students, Good Practice Guide
No. 132, titled "Beginner's Guide to Measurement in Electronic and
Electrical Engineering." The 51-page guide was published in
March 2014. The topics covered include:

How a measurement mix-up led to the loss of the Mars Climate Orbiter.
The best ways to record measurement results.
The difference between accuracy and precision, and the importance of calibration and traceability.
How to calculate uncertainty.
How to read instrument specifications and their impact on uncertainty calculations.
Some examples of measurement challenges faced by electronic and electrical engineers.

To obtain a copy, visit: www.npl.co.uk/publications/guides/
beginners-guide-to-measurement-in-electronic-and-electrical-engineering

New NPL Mobile Environmental
Measurements Laboratory

NPL's Differential Absorption Lidar facility is a mobile
atmospheric testing system.

Calibration Services You Can Count On

Your laboratory standards are the basis for your
organization's most accurate measurements.

Examples of Standards Calibrated
Multi-Product Calibrators
Measuring Bridges
Reference Multimeters
High Voltage Dividers
Voltage Standards
Standard Resistors
Standard Capacitors
Standard Inductors
SPRTs/PRTs/RTDs
Metrology Temperature Wells
Super Micrometer
Gage Blocks
Dead Weight Testers
Torque Standards
Force Standards
Two-Pressure Humidity Standards
Height Gages
Thread Set Plugs
Measuring Wires
Plain Plug & Ring Gages

Essco is an extraordinarily diverse commercial laboratory serving
many industries as well as other laboratories. Our capabilities far
surpass those of most commercial laboratories.

Super Micrometer is a trademark of Pratt & Whitney.

Accredited to ISO/IEC 17025:2005 by NVLAP (Lab Code 200972-0). Over 35 metrology technicians on staff.

www.esscolab.com/capabilities/standards
800.325.2201  Ask about online reporting via EsscoNet

The Environmental Measurement Group and the Centre for Carbon
Measurement at the National Physical Laboratory (NPL) in the
United Kingdom have launched a new mobile laboratory to detect
and measure emissions that are harmful to the environment.
The Differential Absorption Lidar (DIAL) facility is a
sophisticated laser-based system that provides rapid, accurate
measurements of airborne emissions. It is a completely mobile
laboratory that can be shipped, or driven, to wherever it is needed.
NPL's current DIAL system has been used successfully by both
regulators and companies to measure leaks from industrial sites,
e.g. monitoring oil and gas emissions in Norway, benzene emissions
in the Port of Rotterdam, and methane leaks from landfill sites for
Defra in the UK.
The new, next-generation facility has been built thanks to an over
£1 million investment by NPL and support for the underpinning
science from the National Measurement Office (NMO), an
executive agency of the Department for Business, Innovation and
Skills (BIS). It will be used to quantify and visualize emissions from
industrial sites, and to support research and policy developments.
Innovations include greater detection sensitivities that offer more
accurate results, and a flexible system that enables operators to
switch between the types of pollutants being measured.
This unique service that NPL offers globally creates 3D emission
maps of various pollutants, like airborne hydrocarbons. With faster
deployment, as well as more efficient data manipulation and usage
through an improved software system, it delivers quicker results
for customers. Identifying emission leaks and doing something
to prevent them will reduce environmental impacts and will also
have commercial benefits; e.g., methane has an economic value,
so preventing methane leaks means there is more gas to sell.
For more information, visit: www.npl.co.uk/environmental-measurement

Visitors to KCDB Database are
Primarily NMIs

Between January 24th and March 20th 2014, questionnaires were made
available on two pages of the Key Comparison Data Base (KCDB)
in order to obtain information about visitors to the database. The
two pages were: (1) Key and Supplementary Comparisons and (2)
Calibration and Measurement Capabilities (CMCs).
It was found from the information collected that the section on
Calibration and Measurement Capabilities (CMCs) is visited twice
as often as the section on comparisons. In addition, the two sections
of the KCDB website are visited mainly by National Metrology
Institutes (NMIs) and Designated Institutes (DIs). However, the
proportion of visits from accreditation bodies, calibration and testing
laboratories, and industrial companies are far from negligible. In
particular, calibration and testing laboratories account for 15 % of
the total.
As of June 2014, the KCDB included a total of 24 700 Calibration
and Measurement Capabilities (CMCs), with about 400 new CMCs
published over the last year. Of the total number of published CMCs,

23 % cover the field of electricity/magnetism, 22 % chemistry, and
16 % ionizing radiation. In addition, the database covered a total of
880 key comparisons and 397 supplementary comparisons.
For more information, visit: www.bipm.org/en/db

Force Calibration Service

The uncertainty of the instrument calibrated is directly influenced
by the measurement certainty of the calibration standard.
Morehouse force calibrations are performed using
standards with the highest level of measurement certainty:

Dead Weights with accuracy of 0.002% of applied force used for
calibrations through 120,000 lbf
United States National Institute of Standards & Technology
(NIST) calibrated standards
Calibrations performed in our laboratory to 2,250,000 lbf in
compression and 1,200,000 lbf in tension, and equivalent kgf
and newtons
Calibrations performed in accordance with the American
Society for Testing and Materials (ASTM) E74, ISO 376, and
other specifications
Calibration for Proving Rings, Load Cells, Crane Scales,
Force Gauges and other force measuring instruments
ISO 17025 Accredited
American Association for Laboratory
Accreditation Calibration Cert 1398.01

120,000 lbf Morehouse Dead Weight Machine
2,250,000 lbf Morehouse Universal Calibrating Machine

Torque calibrations accurate to 0.002% of applied torque to 2,000 N·m also available

MOREHOUSE FORCE & TORQUE CALIBRATION LABORATORIES
INSTRUMENT COMPANY, INC.
1742 Sixth Avenue, York, PA USA
Phone: 717-843-0081 / Fax: 717-846-4193 / www.mhforce.com / e-mail: hzumbrun@mhforce.com

NIST Guide on Using SRMs for the Analysis
of Foods and Dietary Supplements

Several laws require that food and dietary
supplement manufacturers analyze their products
for various reasons, and the National Institute
of Standards and Technology (NIST) frequently
receives questions about the appropriate use of
Standard Reference Materials (SRMs) in these
analyses.
Since the mid-1970s, NIST has been producing
food-matrix SRMs. The early materials were characterized solely for
elements. In the 1990s, NIST began providing food-matrix SRMs with
values assigned for vitamins, fatty acids, and other organic nutrients, and
in 2006, they began providing SRMs for dietary supplement analysis.
Recommendations and the statistical equations (and R code) necessary
for use of these and other natural-matrix SRMs as quality assurance
tools are discussed in NIST Special Publication 260-181, from selecting
an appropriate material to validating analytical methods, characterizing
in-house quality control materials, and establishing traceability.

Dr. Willie May Appointed as NIST Acting Director

Dr. Willie May, who has been leading
the National Institute of Standards
and Technology (NIST) while Patrick
Gallagher served as acting deputy
secretary of the Commerce Department,
officially took over as acting director
of NIST when Gallagher stepped down
from the position on June 13, 2014.
Dr. May is a 42-year veteran of NIST.
In his most recent position, as associate director for laboratory
programs, he was responsible for the operations
of NIST's seven laboratories. A chemist by training, May led
analytic chemistry research at NIST for 20 years. He began his
scientific career at the Oak Ridge Gaseous Diffusion Plant. Dr.
May takes over an agency responsible for establishing security
standards for federal information systems, voluntary cybersecurity guidelines for the private sector, and measurement and technical standards for a range of scientific, manufacturing, industrial
and technological areas.

To obtain SP 260-181, visit: www.nist.gov/srm/upload/SP260-181.pdf

Three phases,
one calibrator

Calibrating three-phase
power has never been
this simple, until now.
The Fluke Calibration 6003A Three Phase
Electrical Power Calibrator delivers three
independent power phases in a single, cost-effective
instrument that you can use in the
lab, or wheel on a cart into the factory. Use it
to calibrate power quality analyzers, energy
meters, and energy loggers, as well as power
quality instrumentation and transducers.

Find out how to simplify power


calibration in your organization:

www.flukecal.com/6003A_simplify
Fluke Calibration. Precision, performance, confidence.
Electrical  RF  Temperature  Pressure  Flow  Software

©2014 Fluke Calibration. Ad 60022474_EN

MEASUREMENT TRAINING PROVIDED BY NCSL INTERNATIONAL

February 11-12, 2015
Raleigh Marriott Crabtree Valley Hotel
Raleigh, North Carolina

What is the Technical Exchange?


The Technical Exchange measurement training, developed by
NCSL International (NCSLI), is an educational event designed
to provide you with regional access to low-cost, high-quality
measurement training solutions.
At this two-day event you will receive metrology training
covering several fields of measurement taught by industry
subject matter experts.
The NCSLI Technical Exchange provides a forum for
exchanging ideas, measurement techniques, best practices
and innovations with others interested in metrology industry
trends. This forum builds and enhances specific hands-on
skills in measurement techniques and teaches best practices.

TECHNICAL
EXCHANGE
TRAINING
PROGRAM
An Introduction to Instrument
Control and Calibration
Automation in LabVIEW

Logan Kunitz, National Instruments

ISO/IEC 17025 Laboratory


Accreditation

Rob Knake, American Association


for Laboratory Accreditation (A2LA)

Statistical Analysis of
Metrology Data

Dilip Shah, E = mc3 Solutions

Thermocouple Theory and


Practical Application
Ken Sloneker, ASL U.S.

8508A Intermediate and


Advanced Measurement
Principles

Jack Somppi, Fluke Calibration

TECHNICAL EXCHANGE TRAINING PROGRAM

Dynamic Sensors and Calibration
Eric Seller, The Modal Shop

Process Calibration
Jim Shields, Fluke Calibration

Proficiency Testing
Chuck Ellis, National Association for Proficiency Testing

Thermocouple Use and Calibration
Ken Sloneker, ASL U.S.

Dimensional Metrology
Hy Tran, Sandia National Laboratories

Basic Electronics
Jack Somppi, Fluke Calibration

Pressure and Vacuum Calibration


and Measurements
Jon Sanders, Additel Corporation

Temperature Measurement

Tom Wiandt, TrueCal Metrology, LLC

Pipet Calibrations and


Measurements

Julie Smith, Calibrate Inc.

Introduction to Measurement
Uncertainty
Dilip Shah, E = mc3 Solutions

Raleigh Marriott
Crabtree Valley Hotel

REGISTER TODAY

1-888-236-2427

10% discount for companies registering 3 or more attendees.


REGISTRATION
Registration pricing includes lunch both days.

            NCSLI MEMBER    NON-MEMBER
HALF DAY    $180            $205
ONE DAY     $360            $410
TWO DAY     $720            $820

EXHIBITOR REGISTRATION  $800

Join NCSLI as a Group Member and receive member prices
for this event, and a coupon for a half-day tutorial class at
the NCSLI Workshop & Symposium in Texas, July 2015.

4500 Marriott Drive


Raleigh, NC 27612
$134/night
Attendees, check this out:
Free shuttle service to and
from Raleigh-Durham
International Airport from
7:00 AM - 10:00 PM
High-Speed Internet Access:
Complimentary wireless in
guest rooms, lobby, public
areas, and meeting rooms

For registration questions and answers, please call the
NCSLI business office at 303-440-3339 or visit ncsli.org

NCSL International

2995 Wilderness Place, Suite 107, Boulder CO 80301


Phone (303) 440-3339 Fax (303) 440-3389

ncsli.org


METROLOGY NEWS
Japan Unveils Plans for Centimeter-Resolution GPS

In Tokyo, Japan, Global Positioning System (GPS) navigation is stymied
by low resolution and a blocked view of the sky. However, engineers at
Mitsubishi Electric Corp. report that they're on track to start up the first
commercial, nationwide, centimeter-scale satellite positioning technology
by 2018. The technology will also usher in a variety of innovative new
applications. Named the Quasi-Zenith Satellite System (QZSS), it is
designed to augment Japan's use of the U.S.-operated Global Positioning
System (GPS) satellite service. By precisely correcting GPS signal errors,
QZSS can provide more accurate and reliable positioning, navigation, and
timing services.
Today's GPS receivers track the distance to four or more GPS satellites
to calculate the receiver's position. However, various errors inherent in the
GPS system limit the accuracy to several meters. In using the data from
QZSS to correct the measured distance from each satellite, the accuracy
of the calculated position is narrowed down to the centimeter scale.
The Japan Aerospace Exploration Agency (JAXA) started with the
launch of QZS-1 in September 2010. Three additional satellites are
slated to be in place by the end of 2017, with a further three launches
expected sometime later to form a constellation of seven satellites,
enough for sustainable operation and some redundancy. The government
has budgeted about US $500 million for the three new satellites. It also
apportioned an additional $1.2 billion for the ground component of the
project, which is made up of 1200 precisely surveyed reference stations.
The four satellites will follow an orbit that, from the perspective
of a person in Japan, traces an asymmetrical figure eight in the sky.
While the orbit extends as far south as Australia at its widest arc, it is
designed to narrow its path over Japan so that at least one satellite is
always in view high in the sky; hence the name quasi-zenith. This will
enable users in even the shadowed urban canyons of Tokyo to receive
the system's error-correcting signals.
To correct the errors, a master control center compares the satellites'
signals received by the reference stations with the distance between the
stations and the satellites' predicted locations. These corrected components
are compressed from an overall 2 megabit/s data rate to 2 kilobits/s and
transmitted to the satellite, which then broadcasts them to users' receivers.
The centimeter-scale precision promises to usher in a number of greatly
improved applications beyond car and personal navigation. Besides
providing improved mapping and land surveying, precision farming and
autonomous tractor operations will become possible. Unmanned aerial
vehicles and autonomous vehicles in general will also find centimeter-level
positioning valuable in maintaining and assuring separation from other
vehicles and fixed obstacles. In addition, the Japanese government plans
to use the service to broadcast short warning messages in times of disaster,
when ground-based communication systems may be damaged.
For more information, visit: http://qzss.jaxa.jp

Four QZSS satellites in the Pacific Constellation will orbit in
such a way that at least one is always directly over Japan. Three
reserves will hang at the equator. Illustration: Erik Vrielink.
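The basic position calculation from ranging to four or more satellites can be sketched as an iterative least-squares solve for position and receiver clock bias. The satellite coordinates, receiver location, and bias below are invented for illustration; a real receiver also applies the many corrections (ionosphere, troposphere, relativity) that QZSS helps refine and that are ignored here:

```python
import numpy as np

def position_fix(sats, pranges, iters=10):
    # Gauss-Newton on the pseudorange model  rho_i = |sat_i - x| + b,
    # where x is the receiver position and b its clock bias (in meters).
    x, b = np.zeros(3), 0.0
    for _ in range(iters):
        vecs = sats - x
        dists = np.linalg.norm(vecs, axis=1)
        residuals = pranges - (dists + b)
        # Jacobian rows: [-unit vector toward satellite, 1]
        H = np.hstack([-vecs / dists[:, None], np.ones((len(sats), 1))])
        step, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x, b = x + step[:3], b + step[3]
    return x, b

# Invented geometry: four satellites at roughly GPS-like altitudes
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [13000e3, 18000e3, 15000e3]])
truth = np.array([3185.5e3, 1911.3e3, 5096.8e3])   # arbitrary surface point
bias = 300.0                                        # ~1 microsecond of clock error
pranges = np.linalg.norm(sats - truth, axis=1) + bias
x, b = position_fix(sats, pranges)
print(np.linalg.norm(x - truth), b - bias)          # both should be ~ 0
```

With noiseless pseudoranges the solve recovers the exact point; the meter-scale errors of real GPS come from the range errors themselves, which is exactly what the QZSS corrections attack.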

Navy Patented METBENCH System Goes
Commercial

The United States Navy has licensed
its award-winning Metrology Bench
Top (METBENCH) automation
system, marking the Navy's first
cross-licensing patent agreement of
its type. The system helps the Navy
perform some 10,000 calibrations each year. The historic patent
licensing agreement creates a unique, two-way sharing arrangement
that allows improvements driven by the system's commercialization to
be incorporated into the Navy calibration system.
American Technical Services, Inc. will commercialize Naval
Surface Warfare Center (NSWC) Coronas automated calibration
system for industrial use. In turn, the Navy gets a license to use ATSs
software, worth nearly $1 million, that will add new capability to
its Metrology Bench Top system. NSWC Corona is the designated
metrology and calibration agent for the Navy and Marine Corps.
NSWC Coronas engineering team created METBENCH to meet
fleet needs, receiving a patent for it in 2007. The system takes once-lengthy manual procedures and shortens them using automation. For
example, one calibration procedure for a piece of test equipment
went from 120 minutes down to 15 minutes, nearly a 90 % reduction.
Currently, METBENCH is installed on 160 Navy surface ships,
28 submarines and multiple Navy shore calibration laboratories,
saving the Navy more than $50 million by 2017. The innovative
system has been so effective that the METBENCH team received a
Navy Information Management/Information Technology Excellence
Award in 2011. METBENCH has introduced more than $10 million
worth of efficiencies at shore installations.
For more information about METBENCH, see the NCSLI Measure
J. Meas. Sci., December 2009 issue. For more information about
NSWC, visit: www.navy.mil/local/nswccorona


CMS Introduces Level-Two Certification for
Portable CMMs

The Coordinate Metrology Society (CMS) has announced the


availability of its CMS Level-Two Certification, a hands-on
performance examination for users of portable CMMs (coordinate
measuring machines). Both Level-One and Level-Two Certification
examinations were conducted during the 30th annual Coordinate
Metrology Systems Conference (CMSC), which was held
July 21-25, 2014.
Applicants for the CMS Level-One Certification must meet
eligibility requirements, sign the CMS code of ethics, and pass a
peer review. Applicants for the CMS Level-Two Performance
Certification must have a Level-One Certification, two years' basic
experience (minimum 400 hours) on an articulating arm, and submit
an application with two references who can attest to their hands-on
expertise. Qualifying candidates were notified and scheduled for an
examination seat at CMSC 2014.
The Level-One Certification examination is a proctored, online
assessment consisting of about 200 multiple choice questions covering
foundational theory and practice common to most portable 3D
Metrology devices. The Level-Two Certification exam on a portable
CMM is a performance assessment conducted by an authorized
proctor. The candidate uses the metrology instrument to collect a series

of measurements on an artifact, and then analyzes specific features
of that artifact. The proctor evaluates the applicant's measurement
techniques, accuracy of the results, and overall performance on
the portable CMM. The Level-Two Certification program was
professionally designed and developed in cooperation with the CMS.
Certification program guidelines and application forms are
available at: https://www.cmsc.org/cms-certification

Blackett Laboratory Recognized as
Historic Site

The Blackett Laboratory at Imperial College London, home
of the Department of Physics, was designated an historic site by
the European Physical Society (EPS). The prestigious recognition
is bestowed by the EPS upon sites in Europe that hold national
or international significance to physics and its history and was
commemorated with the unveiling of a plaque.
The Blackett Lab, which has housed the Department of Physics
since its completion in 1961, joins the National Physical Laboratory
at Bushy Park as one of only two such recognised sites in the UK.
It was selected by the EPS for its role as the home of pioneering
advances in the fields of theoretical and experimental physics over
the past five decades including in particle physics, quantum physics
and ultrafast laser development.
The building was the site of Mohammed Abdus Salam's work on
the unification of the weak and electromagnetic forces, for which

Thunder's Calibration Laboratory Offers
Accredited Humidity Calibration Services

Thunder Scientific provides
instrument calibration for
virtually any humidity
measurement device or
dew-point hygrometer, with
"as found" and "as left" data
with uncertainties.

NVLAP Lab Code 200582-0

Humidity Parameter                    Range                Uncertainty*
Volume ratio, V (PPM)                 0.1 to 3 PPM         4.0% of value
                                      3 to 200 PPM         2.0% of value
                                      200 to 400000 PPM    0.1% of value
Dew/Frost Point Temperature           -90 to -70 °C        0.2 °C
                                      -70 to -20 °C        0.1 °C
                                      -20 to 70 °C         0.05 °C
Relative Humidity (0 °C to 70 °C)     0% to 99%            0.3% of reading

*Represents an expanded uncertainty using a coverage factor,
k=2, at an approximate level of confidence of 95%.

Model 1200 Humidity Generation System
Mini/Mobile "Two-Pressure" Humidity Generator.
Self-contained humidity calibration standard.
NIST traceable certificate of calibration,
certified at 0.5% RH uncertainty*.

Model 2500 Humidity Generation System
Benchtop/Mobile "Two-Pressure" Humidity Generator.
Self-contained humidity calibration standard.
NIST traceable certificate of calibration,
certified at 0.5% RH uncertainty*.
ControLog automation software.

Model 3900 Low Humidity Generation System
"Two-Pressure Two-Temperature" Low Humidity Generator.
NIST traceable certificate of calibration,
certified at 0.1 °C frost point,
over the range of -95 °C FP to +10 °C DP.

Sales  Service  Support
800-872-7728
Ph: 505-265-8701  Fax: 505-266-6203
www.thunderscientific.com
sales@thunderscientific.com
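The asterisked footnote above follows the standard GUM recipe: independent standard uncertainty components combine in quadrature, and the coverage factor k = 2 then gives an approximate 95 % level of confidence. A minimal sketch, with an invented %RH budget that is not Thunder Scientific's actual budget:

```python
import math

def expanded_uncertainty(components, k=2.0):
    # Combine independent standard uncertainties in quadrature
    # (root sum of squares), then apply the coverage factor.
    u_c = math.sqrt(sum(u * u for u in components))
    return k * u_c

# Hypothetical %RH budget for a two-pressure humidity generator
budget = [0.15,   # pressure-ratio measurement
          0.12,   # saturator temperature
          0.10,   # enhancement-factor correction
          0.08]   # generator stability
U = expanded_uncertainty(budget)
print(f"U = {U:.2f} %RH (k=2, ~95 % confidence)")
```

Quadrature combination is what makes one dominant component control the total: halving a small term barely moves U, which is why budgets focus effort on the largest contributors.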

The Blackett Laboratory, located at Imperial College London.

he was awarded the Nobel Prize in 1979, as well as Professor Tom
Kibble's work, which defined the mechanism by which gauge bosons
acquire mass via the Higgs field. This research helped lead the way
to the discovery of the Higgs boson particle in 2012. More recently,
the Blackett Lab has been home to Sir John Pendry and his work
(with David Smith and others) on the creation of metamaterials, and
his research on invisibility cloaking and the theory of the perfect lens.

European Parliament and Council of the
European Union Adopt EMPIR

The European Parliament has adopted the Innovation Investment
Package with all of its individual elements, including the European
Metrology Programme for Innovation and Research (EMPIR). As
part of this ambitious and inclusive program, to be implemented over
a 10-year period (2014-2024) by 28 participating states, EMPIR
will focus on innovation and industrial exploitation, on research for
standardization and regulatory purposes, and on capacity building.
The 28 participating states include Austria, Belgium, Bosnia and
Herzegovina, Bulgaria, Croatia, the Czech Republic, Denmark,
Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy,
Norway, Spain, the Netherlands, Poland, Portugal, Romania, Serbia,
Slovenia, Slovakia, Sweden, Switzerland, Turkey and the United
Kingdom.
The European Union's financial contribution to EMPIR will be
up to €300 million. Financial contributions from the participating
states shall consist of both contributions through institutional funding
of the national metrology institutes (NMIs) and the designated
institutes (DIs) participating in EMPIR projects, and through
financial contributions to the administrative costs of EMPIR.
For more information, visit:
www.euramet.org/index.php?id=about_empir

8th Symposium on Frequency Standards and
Metrology in 2015

The 8th Symposium on Frequency Standards and Metrology will be
held October 12-16, 2015, at the Seminaris Seehotel
in Potsdam, Germany. The symposium serves as an international
discussion forum on precision frequency standards. The Symposium
Chairman is Fritz Riehle, of the Physikalisch-Technische Bundesanstalt
(PTB) in Germany.

The focus of the symposium is on the fundamental scientific aspects


of the latest ideas, results, and applications in relation to frequency
standards. Significant progress has occurred since the last symposium
was held seven years ago. This progress includes optical atomic
clocks with uncertainties in the 10⁻¹⁸ regime. These optical clocks can
help answer questions about the possible variations of fundamental
constants, enable emerging fields such as relativistic geodesy, and
should lead to a new definition of the second, the base unit of time
interval.
Following the format of previous meetings, the symposium will
consist of a series of invited talks (no parallel sessions) and poster
presentations. In addition, participation in the 8th Symposium on
Frequency Standards and Metrology will be limited to about 170
attendees. The proceedings will be published in a book or e-book.
For more information, visit: www.ptb.de/8fsm2015

Seven Accreditation Bodies Sign APLAC MRA


The Scope of the Asia Pacific Laboratory Accreditation Cooperation
(APLAC) Mutual Recognition Arrangement (MRA) was extended to
include the accreditation of proficiency testing providers (PTPs) on
June 26, 2014 at the APLAC Technical Meetings and 20th General
Assembly held in Guadalajara, Mexico. Seven accreditation bodies
(ABs) from six economies became the inaugural signatories to the
APLAC MRA for the accreditation of proficiency testing providers.
The APLAC MRA is the first international agreement to be extended
to include the accreditation of PTPs.
The US signatories include the American Association for
Laboratory Accreditation (A2LA), Assured Calibration and
Laboratory Accreditation Select Services (ACLASS), and Forensic
Quality Services (FQS).
The accreditation bodies that are signatories to the APLAC MRA
for proficiency testing provider accreditation use the international
standard ISO/IEC 17043 to accredit PTPs, ensuring a uniform approach
to assessing PTP competence. This consistency allows economies to
establish MRAs based on mutual evaluation and acceptance of each
other's PTP accreditation systems.

3rd European Flow Measurement Workshop to
be Held in March 2015

The 3rd European Flow Measurement Workshop will be held
in the Netherlands from March 17-19, 2015. The conference
will take place at the Grand Hotel Huis ter Duin in Noordwijk.
Note that previous workshops were held in Portugal.
With the change of venue, the workshop will be able to accommodate
more sponsors and exhibitors. More information on sponsors and
exhibitors will be featured on a new website that will be launched
later. For more information, contact: rvdberg@vsl.nl
More news and updates can be obtained by following the workshop
on LinkedIn and Twitter @EFMWS.

2015 NCSL INTERNATIONAL
WORKSHOP & SYMPOSIUM

Measurement Science and the Quality of Life
NCSL International | 2995 Wilderness Place, Suite 107 | Boulder, CO 80301 | ncsli.org | Phone (303) 440-3339 | Email info@ncsli.org

Call for Papers

2015

......................................................................................

The theme for NCSL International's 2015 Workshop & Symposium
is "Measurement Science and the Quality of Life." Have you considered
what impact Measurement Science has had on the quality of your
life? If you take time to consider it, you can appreciate what a large
impact it has.
Think, for example, how your quality of life may
have improved over what your great grandparents
experienced a hundred years ago. Innovations and
conveniences that we enjoy today would have been
hard to imagine in their day. Modern automobiles,
air conditioning, appliances, commercial air travel,
computers, GPS navigation systems, cell phones, TVs,
radios, medical improvements including MRIs, organ
transplants, and joint replacements are all examples
that contribute to our quality of life. Now consider the
role of Measurement Science. Without the ability to
measure and control critical parameters, these kinds
of advances would only be science fiction.

Measurement Science professionals are encouraged


to get involved and submit an abstract for the 2015
Workshop & Symposium, which will be held July
19-23, at the Gaylord Texan Resort and Convention
Center in Grapevine, Texas.
Share your measurement experience and expertise
with other measurement professionals by attending
and presenting a paper. We welcome papers from
any measurement science professional, including
engineers, metrologists, lab assessors, lab
managers, quality managers, researchers, scientists,
statisticians, technologists and more.

Submissions can relate to a variety of measurement and process topics including research and development,
manufacturing and service related fields, new test and measurement techniques, measurement standards and
traceability, statistical process and evaluation, measurement accuracy and uncertainty analysis, laboratory
management and accreditation and new advances in measurement science. Topics can include:
Acceleration
Automation
Chemical
Dimensional
Electrical
Force
Flow
Fundamental Units
Inspection
Humidity
Lab Accreditation
Management Issues
Mass
Optical
Pressure
Quality Topics
RF/Microwave
Standards
Temperature
Time & Frequency
Vacuum
Other Measurement Topics

NCSLI invites you to present your work at this exciting conference
as we consider Measurement Science and the Quality of Life.

Gaylord Texan Resort and Convention Center | Grapevine, Texas | July 19-23, 2015

CALL FOR PAPERS

EXHIBIT SALES: exhibits@ncsli.org
SPONSORSHIP PROGRAMS: larcher@ncsli.org
ADVERTISING OPPORTUNITIES: lstone@ncsli.org

Vol. 9 No. 3 September 2014 | www.ncsli.org | NCSLI Measure J. Meas. Sci. | 21

SPECIAL FEATURE

Software Analysis and Protection for Smart Metering

Charles B. do Prado, Davidson R. Boccardo, Raphael C. S. Machado, Luiz F. R. da Costa Carmo,
Tiago M. do Nascimento, Lucila M. S. Bento, Rafael O. Costa, Cristiano G. de Castro, Sérgio M. Câmara,
Luci Pirmez, and Renato Oliveira

Abstract: Smart meters are devices equipped with embedded software that are capable of processing complex digital data. These devices are now a reality in most areas of metrology. They enable a number of new applications, but they also introduce new challenges with regard to their validation. This paper describes initiatives developed by Inmetro, the national metrology institute of Brazil, and Eletrobrás, a Brazilian power utility company, that are designed to support the validation of smart electricity meters.
1. Introduction

The advanced metering infrastructure (AMI) adds new features to the traditional metering infrastructure, seeking to achieve performance improvement, energy resources optimization, energy efficiency, alternative storage, automatic utility billing, and telemetry. These features are made possible by the introduction of smart meters, which are electrical meters with embedded software. Despite the many technological advantages of the AMI, it is important to characterize the threats that smart meters may bring. Their security is a challenge because smart meters operate in an exposed environment and are subject to capture, reverse engineering, and manipulation.
In Brazil, the development and implementation of an AMI will enable more effective monitoring of the electricity consumption of distribution utilities, leading to better expansion planning. It also provides a more effective way to combat electricity fraud. It is
estimated that the Brazilian market encompasses approximately 60 million consumer units, 40 % of which are classified as low-income (up to 220 kWh of monthly consumption). In this scenario, Brazil faces a very distinct situation, in which non-technical losses may reach 20 % in the distribution areas of some utilities [1]. This scenario is an inheritance of a model based on public control of energy distribution, tolerant of energy theft, which persisted until the second half of the 1990s. It is estimated that the Brazilian non-technical loss rate reaches
17 % of the total energy production, or about 7,428,000 MWh. Considering the fraud losses suffered by the whole set of 59 electric energy utilities across Brazil, the total loss is almost $1 billion USD every year. When costs
from fraud control initiatives are added, these
numbers exceed $1.3 billion USD per year.
Energy theft always implies high energy waste; when consumers do not have to pay the bill, they have no incentive to save energy or to purchase equipment with better energy efficiency. In fact, even though energy theft is more common at low income levels, the energy consumption of those who participate in energy theft is similar to that of high-income consumers.
During the privatization process of electricity distribution companies in the mid-1990s, the Brazilian Electricity Regulatory Agency (ANEEL) made the utility companies responsible for the non-technical losses, forcing the utilities to find solutions to reduce them. The utilities immediately began to seek alternatives to resolve the issue of energy theft. One alternative was the introduction of an AMI equipped with a set of components to detect tampering [2, 3]. However, equipping the meters with anti-tampering features is not enough; it is still necessary for regulatory authorities to assess smart meter security and the mechanisms employed against unauthorized reverse engineering. In Brazil, this assessment is the responsibility of the Brazilian National Institute of Metrology, Quality and Technology (Inmetro).

Currently, this scenario has been investigated in the area of cyber security applied to Smart Grids [4]. Cybersecurity, in this context, refers to the application of methodologies that make it impossible (or almost impossible) for an unauthorized person to access any service or information inherent in a smart meter, thus protecting the meter from being tampered with or used for improper purposes.
Therefore Inmetro, in partnership with the Eletrobrás Distributor of Rondônia (CERON), has been developing a research project titled Cyber Security in Smart Metering. In the present work, we describe the initiatives to achieve better smart meter validation and present the following research avenues:
- Software analysis, including methodologies to discover vulnerabilities and to address software traceability, i.e., the correspondence of source code with its compiled version;
- Software protection, including methodologies to perform embedded software integrity verification, guaranteeing that the software embedded in a smart meter corresponds to a version that was previously validated by the manufacturer and authorities; software obfuscation, making the software code harder to understand to protect against reverse engineering; and software fingerprinting, to discourage code leakage through the insertion of an identifier into the software code with the goal of making it traceable later; and
- A measurement confidence chain, including methodologies that allow end users to be confident that measurements have not been tampered with.

Authors

Charles B. do Prado, cbprado@inmetro.gov.br
Davidson R. Boccardo, drboccardo@inmetro.gov.br
Raphael C. S. Machado, rcmachado@inmetro.gov.br
Luiz F. R. da Costa Carmo, lfrust@inmetro.gov.br
Tiago M. do Nascimento (1,2), tmnascimento@inmetro.gov.br
Lucila M. S. Bento (1,2), lmbento-eletrobras@inmetro.gov.br
Rafael O. Costa (1,2), rocosta@inmetro.gov.br
Cristiano G. de Castro, cgcastro@inmetro.gov.br
Sérgio M. Câmara (1,2), smcamara@inmetro.gov.br
Luci Pirmez, luci@nce.ufrj.br
Renato Oliveira, renato.oliveira@ceron.com.br

(1) Inmetro, National Institute of Metrology, Quality and Technology, Santa Alexandrina St, 416, Rio Comprido, Rio de Janeiro, RJ, Brazil, 20261-232

(2) Federal University of Rio de Janeiro (UFRJ), Av. Pedro Calmon, 550, Cidade Universitária, Rio de Janeiro, RJ, Brazil, 21941-901

Ceron, Eletrobras Distribution Rondônia, Av. Presidente Vargas, 409/13º andar, Centro, Rio de Janeiro, RJ, Brazil, 20071-003

2. Software Analysis

This research avenue addresses the need to improve the process of software analysis of a smart meter. It explores techniques for analyzing code to audit its correctness, to check the absence of faults, to trace a binary, to check for consistency with design specifications, and to check for compliance with established security requirements. The goal of code auditing, in its source or binary form, is to discover vulnerabilities or security holes that attackers can exploit.
Before explaining the software analysis process conducted by Inmetro, it is important to review the compilation process and the differences between source code and binary code (the compiled version of the source code). The compiler checks the syntactic and semantic correctness of the code when generating the binary code.
The advantage of conducting software analysis on source code is the direct identification of high-level language structures (loops, procedures, and classes), enabling a better understanding of how the code is structured. However, analysis of source code has its peculiarities, such as the need to trust the build environment of the developer and third-party libraries, which can be potentially vulnerable. This kind of problem does not occur when the analysis is conducted on binary code. Furthermore, there are other reasons to apply the analysis to the binary code; one of them is the lack of information when the symbol and debugging information tables have been removed, a practice commonly used to protect intellectual property. However, the analysis of binary code is much more complex, since procedures are not clearly defined and there is no distinction between code and data.
There are various proposals for identifying vulnerabilities in embedded applications [5]. These proposals can be classified as either black box analysis, which assumes no knowledge of how the source code or binary code was developed, or white box analysis, which requires knowledge of the source code of the application [6]. The black box approach is more widely applicable but usually much less efficient, given the natural limitation on the number of analyzable pathways. On the other hand, the white box approach can be conducted through a simple inspection of the source code, even if it is not practical to consider the whole code.

In Brazil, smart meters are regulated by Inmetro, which specifies the set of requirements and also conducts the type approval and evaluation procedures. The approval process involves the following validations:
- Validation of the smart meter architecture and its operations;
- Validation of the legally relevant software, i.e., the software that can potentially change measurement information; and
- Validation of the software protection mechanisms, which verifies how sensitive information is protected and how software integrity is guaranteed. These aspects are handled in Section 3.
The validation of the smart meter architecture and its operations is based on documentation analysis. The objective is to understand
the smart meter algorithms and protocols,
to identify the communication interfaces, to
check the whole set of commands, to identify
the legally relevant software, and to evaluate
the test cases.
The validation of legally relevant software is done by white box analysis, and thus requires the source code. The legally relevant variables are tracked to verify that they are correctly manipulated, common vulnerabilities are scanned based on the CWE and CERT-C standards, and the consistency of the commands, protocols, and algorithms with the documentation is reviewed. However, when a source assessment is performed, the following question arises: How do we ensure that the executable code embedded in an electricity meter was actually generated from the source code provided by the manufacturer and previously evaluated by Inmetro?
This problem is called software traceability. The goal is to verify whether the compiled executable code corresponds to the source code (which can be written in any programming language). For this problem, two immediate approaches are possible, but they may not be practical.
A simple and direct way to test for software traceability is to reproduce the software development environment and to compile the approved source code, verifying whether the generated binary code is as expected. It can be complex and expensive, however, to maintain several software development environments. Another way to test for software traceability is to audit the environment of the software developer. This step is performed as the final procedure of the type approval process. Here, after the evaluation of the source code by Inmetro, it is compiled in the environment of the software developer and embedded into the smart meter. However, such an approach is likely to be ineffective when dealing with a malicious developer.

Figure 1. A centralized measurement system and its components.
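The first approach, rebuilding the approved source and comparing the result bit for bit with the deployed firmware, reduces to a digest comparison. A minimal sketch in Python; the file paths are illustrative, and a bit-exact match assumes a fully reproducible build (pinned compiler version, flags, and timestamps):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def binaries_match(approved_build: str, deployed_firmware: str) -> bool:
    """Bit-exact comparison; only meaningful when the build is reproducible."""
    return sha256_of(approved_build) == sha256_of(deployed_firmware)
```

Any difference in toolchain version or build options breaks the bit-exact match, which is precisely why maintaining many development environments is expensive.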
To identify a malicious developer, the strategy involves verifying whether two program codes written in different languages (typically source code and executable code) exhibit the same behavior. This strategy uses an artificial neural network, fed with properties collected from the two program codes, in order to discover their degree of similarity. Preliminary results using artificial neural networks with trivial properties, such as the number of edges of the control flow graphs, show that a strong correlation exists between the source code and the executable code [7-9].
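As a toy illustration of this strategy, trivial structural properties of a control flow graph can be collected into feature vectors and compared. The features and the cosine-similarity threshold below are simplified stand-ins for the trained artificial neural network of [7-9]:

```python
import math

def features(cfg: dict) -> list:
    """Trivial structural features of a control flow graph, given as an
    adjacency dict: node count, edge count, and branch-node count."""
    nodes = len(cfg)
    edges = sum(len(succ) for succ in cfg.values())
    branches = sum(1 for succ in cfg.values() if len(succ) > 1)
    return [nodes, edges, branches]

def cosine_similarity(u: list, v: list) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def same_behavior(source_cfg: dict, binary_cfg: dict,
                  threshold: float = 0.99) -> bool:
    """Flag the pair as corresponding when the two feature vectors
    are nearly parallel (stand-in for the neural-network decision)."""
    return cosine_similarity(features(source_cfg), features(binary_cfg)) >= threshold
```

A real traceability check would extract the source-level CFG from the compiler front end and the binary-level CFG from a disassembler, and learn the decision boundary instead of fixing a threshold.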
Another avenue of research is based on black-box analysis and smart meter reverse engineering, and involves the identification of sections of the firmware that can inappropriately change the behavior of the measuring instrument. Here, we assume that it is possible to dump the firmware embedded in the smart meter and interact with it through JTAG (Joint Test Action Group) pins. Through the dump, it is possible to obtain the entire firmware code and all of its execution pathways. By debugging the output for specific inputs (according to the manufacturer's documentation), we can identify code sections that are not executed during the common operations of the instrument. These common operations represent the usual behavior of the instrument, described by the manufacturer's documentation of the complete set of commands accepted by the meter. Non-running code sections are potentially malicious (for example, a backdoor) and could compromise the functionality of the measuring instrument. After the non-running code sections are identified, they are more carefully reviewed to determine whether they can be triggered by the measuring instrument inputs or at a certain time.
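The core of this analysis, diffing the set of basic blocks found in the dump against the blocks actually reached while exercising the documented command set, can be sketched as follows; the address values and trace format are hypothetical:

```python
def suspicious_regions(all_blocks: set, traces) -> list:
    """Flag firmware basic blocks never reached while exercising the
    documented command set.

    all_blocks: start addresses of every basic block found in the dump;
    traces: one set of executed block addresses per documented command."""
    executed = set()
    for trace in traces:
        executed |= set(trace)
    return sorted(all_blocks - executed)
```

The blocks returned are not proof of a backdoor, only candidates for the careful manual review described above.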
One example of a validated Brazilian architecture model that provides smart grid features is the Centralized Measurement System. This architecture was motivated by the need to optimize the process of reading individual meters distributed throughout a large building. The model consists of allocating a set of meters, associated with a set of end users, within a single housing called a hub. This system reduces production costs, because a single box accommodates multiple meters with just one power supply. Because multiple meters can be read at a single point, it also simplifies the reading process.
Although this model was developed for use in large buildings, meter suppliers and electric utilities soon realized that it could provide a solution to reduce the high levels of power theft in some Brazilian states. The

idea was to isolate the meter from the end-user location by moving it to the distribution line post, which makes fraud more difficult because of the risk of electric shock. However, the isolation of the meters has made automated meter reading (AMR) necessary. The systems developed for AMR have also incorporated two-way communication features, which enable additional capabilities, including the ability to remotely disconnect the energy supply.
Figure 1 shows an example of a validated Centralized Measurement System and its components. The hub performs the following tasks: electrical energy measurement, the transmission of electricity consumption data to an end-user display, and communication with the utility company. A hub comprises a meter, a cutoff relay, a remote communication unit (RCU), a local communication unit (LCU), and a central processing unit (CPU). The repeater contains the same components as a hub, with the exception of the RCU. The CPU represents the core of the system, being responsible for capturing all meter data, executing external and internal commands, and monitoring all statuses. The LCU sends consumption information by radio transmission to an end-user display. The RCU includes a modem that uses cellular and packet radio transmissions to handle the bidirectional communications between the hub and the utility company.
3. Software Protection

This research avenue explores techniques for transforming code for software protection. A software protection technique can be defined as a set of procedures to hinder an attacker or opponent who, motivated by financial reasons, attempts to sabotage or tamper with the software or to obtain sensitive information from it. This threat has greater impact in scenarios that may affect national infrastructure, such as the AMI of the electricity sector. An attack in this scenario involves the capture of a smart meter deployed in a physically accessible environment, and reverse engineering of the embedded software in order to take possession of sensitive information (cryptographic keys) that may help propagate other attacks, such as sending erroneous information to the central control system, causing a blackout in a segment of an electrical grid.
Advances in program analysis and software engineering technologies have led to
improved tools for secure software development. These tools allow software to be more efficient and as safe and
secure as possible. However, these same advances have increased the capacity for reverse engineering with the goal of discovering vulnerabilities. For example, before attackers exploit vulnerabilities in a system, they first have to identify them. Similarly, to change the code embedded in a device, an attacker first has to analyze how to tamper with the code without affecting the system functionality and without triggering an anti-tampering mechanism.
Software protection techniques aim to hinder reverse engineering and tampering with embedded software, to ensure that it will execute as expected, and to track possible illegal distribution [10]. The protection techniques that can be used for these purposes are software obfuscation, tamper proofing, and watermarking. Software obfuscation techniques hinder reverse engineering through syntactic changes that make the code harder to understand but do not alter its original behavior. Software integrity verification techniques ensure that the software performs as expected, even if tampering attempts are made. Finally, fingerprinting techniques, a special type of watermarking, refer to the act of embedding a unique identifier in an object with the aim of making it traceable later.
3.1 Software Obfuscation

Even though complete security through software obfuscation should be considered impossible, as theoretically proven in [11], it is still used in practice because it is a valuable technique to deter naïve attackers and slow down dedicated attackers. Software obfuscation in smart meters can strengthen code privacy and data confidentiality. For example, once a smart meter is captured, an attacker can reverse engineer its embedded software in order to take possession of the cryptographic key required to send messages to the AMI. With this information in hand, the attacker may send false information to the AMI to cause a blackout. Or, if the attacker knows where the calibration constant used to calculate energy consumption is stored, they can adjust the constant to produce fraudulent measurements.
Software obfuscation is based on a simple concept: the more difficult it is to understand the software, the more difficult it is to change it. Some proposals use code obfuscation to make reverse engineering tools generate incorrect information, requiring more time and effort from the attacker to understand the code and find sensitive data [12]. There are works that use obfuscation to protect instructions [13, 14]. LeDoux et al. propose a technique to protect instructions by creating new instructions that embed the actual instructions as their operands [13]. Balachandran and Emmanuel [14], on the other hand, propose an algorithm to obfuscate instructions, replacing them with others and hiding the vital information in the data segment of the program. Similarly, our work proposes the use of code obfuscation to protect sensitive information of smart meter devices instead of code instructions [15]. More specifically, the idea is to use obfuscation techniques and control flow manipulation so that reverse engineering tools will incorrectly translate the sensitive data embedded in the software of smart meters as program instructions.
Some obfuscation techniques are based on replacing or inserting instructions in the program in order to break the compiler conventions used by reverse engineering tools [12]. Examples of such conventions are the sequences of instructions used to call or return from procedures. When the compiler conventions are not followed, the reverse engineering tools can be induced to incorrectly translate a binary code.
For example, if the instruction responsible for calling a function is replaced by other instructions, the reverse engineering tools will not be able to identify the start address of the function being called. This code transformation is known as call obfuscation. On the other hand, if an instruction responsible for returning the control flow from a function is inserted, that point will be identified as the address of the function's end. This code transformation is known as false return obfuscation.
We combine these obfuscation techniques with control flow manipulation in order to create spots in the code segment that protect sensitive data of the smart meters, such as calibration constants, measurement data, and cryptographic keys. In call obfuscation, the return address can be changed by any displacement in the code segment, opening the possibility of creating an address space that will never be executed, known as a dead execution spot. In false return obfuscation, we also create the dead execution spot by manipulating the return address on the stack before the false return instruction. This return address can have any displacement needed to accommodate the set of addresses required by the sensitive data and to keep the software semantics intact.
The reverse engineering tools will translate the data contained in the dead execution spots as program instructions, which serves our purpose. However, although the dead execution spots are never executed, it is necessary to ensure that all references to the sensitive data stored in these spots are updated, because the data will be stored at addresses different from those of the original code. The benefit of hiding sensitive data in dead execution spots is that the attacker must spend more time and effort to find the sensitive data, because sensitive data is normally found in the data segment.
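The effect of a dead execution spot can be demonstrated on a toy instruction set (the three opcodes below are invented for illustration and bear no relation to any real meter firmware): a jump skips over secret bytes embedded in the code segment, so execution never touches them, while a naive linear-sweep disassembler happily decodes them as instructions.

```python
# Toy ISA (invented for illustration): 0x01 n = JMP +n, 0x02 = NOP, 0x03 = HALT.
SECRET = b"\x01\x2a"  # sensitive bytes hidden in the code segment

# JMP over the secret, then NOP, HALT: execution never touches the secret.
CODE = bytes([0x01, len(SECRET)]) + SECRET + bytes([0x02, 0x03])

def execute(code: bytes) -> list:
    """Run the toy program, recording which byte offsets were executed."""
    pc, touched = 0, []
    while pc < len(code):
        touched.append(pc)
        op = code[pc]
        if op == 0x01:              # JMP +n: skip the next n bytes
            pc += 2 + code[pc + 1]
        elif op == 0x02:            # NOP
            pc += 1
        elif op == 0x03:            # HALT
            break
        else:                       # would mean data reached the CPU
            raise ValueError("illegal opcode")
    return touched

def linear_disassemble(code: bytes) -> list:
    """Naive linear sweep: decodes every byte in order, data included."""
    pc, listing = 0, []
    mnemonics = {0x01: "JMP", 0x02: "NOP", 0x03: "HALT"}
    while pc < len(code):
        op = code[pc]
        listing.append(mnemonics.get(op, "DB 0x%02x" % op))
        pc += 2 if op == 0x01 else 1
    return listing
```

The listing shows the secret bytes swallowed as a spurious JMP, exactly the mistranslation the technique relies on; a recursive-traversal disassembler that follows the real jump target would not be fooled as easily.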
Protecting embedded software against reverse engineering is essential for the security of an AMI, even though any means of protection, by software or hardware, can potentially be defeated. We propose a software-based protection approach in order to add layers of difficulty to reverse engineering attempts. However, it is possible for software and hardware protection schemes to coexist and further increase the degree of protection.
3.2 Software Integrity Verification

Software integrity verification refers to the process of verifying that the software under execution in a given device is undoubtedly the same one that was previously approved by a competent authority.
Among all software integrity verification methods, the simplest and easiest to conduct is direct access to the code in execution in the device, that is, a complete dump of the memory area where the program in execution is stored. Such an approach is simple and direct, but it has the disadvantage that the execution code is available to anyone with access to the device. This can compromise the intellectual property of the software developer and may reduce the security of the device.¹ Moreover, the auditing process can be inconvenient, since the auditor must have direct access to the storage device or chip that stores the software, and this is not always practical or even possible. A more sophisticated approach is the use of special hardware that offers the functionality of returning a cryptographic digest (often by application of a hash function) of the software code that the chip stores. Such functionality represents a great evolution in intellectual property protection. On the other hand, there is still the need to have direct access to the chip. Moreover, these chips are still rare, expensive, and not fully available in the commercial market. In TPM-based architectures, the software under evaluation is external to the TPM chip; thus, the software could forge any digest and send it to the TPM to sign.

¹ Clearly, security should not be based on obscurity. However, there are cases where secret/private keys are stored in the same memory as the software binary code. Hence, granting public access to the binary code would also grant public access to those secret/private keys.

Figure 2. Tampered firmware (memory replication attack).

Figure 3. Integrity verification method. The numbers represent the challenge-response sequence.

Recently, an alternative approach has begun to be considered. In this approach, based on the introspection concept [16-20], a series of verification commands is sent to the program under verification, allowing an authorized person to check the integrity of the software by its behavior in response to these commands. The advantages of this approach are evident: the program code remains protected from public access, it is possible to send verification commands through the usual communication interface of the device under verification, and it is not necessary to remove the chip where the software is stored from the device.
More specifically, the idea of introspection-based software verification is to request that the software return fresh messages, which are calculated based on its own code. In practice, the only technical requirement for introspection is that the device that executes the verification routine has instructions that allow access to its own memory code. This approach works well when it is assumed that there is no free memory and that the code is not compressible. If those assumptions are not valid (see, for example, [21]), it is easy for malicious software to keep a copy of the original software and to perform each operation of the verification routines by reading the instructions from the program memory area where the copy of the original program is stored (Fig. 2). We propose a software integrity verification method that asks the software under verification to return Message Authentication Codes (MACs) that depend on its own code. A MAC algorithm is a function whose input consists of an arbitrarily long message and a secret key, producing a fixed-length result, called an authentication tag, as output. Figure 3 shows the numbers 1 to 5, representing the challenge-response sequence of the integrity verification procedure.
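The challenge-response sequence of Fig. 3 can be sketched with a standard HMAC as the MAC function. The firmware image, function names, and in-process "protocol" below are illustrative assumptions; on a real meter the device side runs on the microcontroller and reads its own program memory:

```python
import hashlib
import hmac
import os

FIRMWARE = b"\x90" * 64 + b"approved metering code v1.0"  # stand-in image

def device_respond(firmware: bytes, challenge_key: bytes) -> bytes:
    """Device side (introspection): MAC computed over its own program
    memory with the fresh key just received from the verifier."""
    return hmac.new(challenge_key, firmware, hashlib.sha256).digest()

def verify(device, reference_firmware: bytes) -> bool:
    """Verifier side: send a fresh random key and compare the device's
    answer with a MAC over the approved reference image."""
    key = os.urandom(16)  # unpredictable challenge
    expected = hmac.new(key, reference_firmware, hashlib.sha256).digest()
    return hmac.compare_digest(device(key), expected)
```

A genuine device answers correctly for any fresh key, while tampered firmware, unable to predict the key, must either keep a copy of the original image or fail the check.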
The previously described technique provides not only software integrity verification, which is achieved through the dependency of the software behavior on its own code, but also software content protection, which is achieved through the one-way property of the MAC function, so that the software code cannot be determined from the set of answers. The main reason for applying cryptographic operations over the program code is exactly the protection of this code. An intruder who sends a set of verification commands (and receives the answers to these commands) would need to calculate several hash pre-images to determine the program code. Also, we can impose time-based restrictions for calculating one single hash pre-image, making it impractical to determine the whole software content.
Since the software under verification cannot guess a priori which key will be requested, there are only two possible strategies for malicious software to trick the verification process. The first possible strategy would be to store each possible answer, that is, one for each possible key, in the malicious software and to simply return the desired answer. Such an approach is clearly impractical, as any standard MAC has a key space of at least 2¹²⁸ keys.
The second strategy that malicious software could use to trick our verification process would be to keep a copy of the original program and perform all MAC calculations on the copy (instead of on the tampered program), then returning the expected answer. The first and simplest countermeasure to this strategy is to ensure that no extra space is left for malicious software to keep a copy of the original program in the memory code. This can easily be done by simply filling the unused program memory with a random, incompressible bit sequence. A last attack could be considered by a very motivated attacker: to compress both the original and tampered binary codes and use on-the-fly decompression to execute the malicious code and to compute the MAC over the original code. Besides the fact that this approach is impractical when there is only about 10 kB of available program memory, we observe that such software corruption would be detected from a simple observation of the timing characteristics of the device under verification (the overhead due to the decompression process would be easily detected).
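The first countermeasure, filling unused program memory with a random bit sequence, works because random filler is incompressible, leaving an attacker no space to reclaim for a hidden copy. A sketch that uses `zlib` as a stand-in measure of compressibility; the memory size and image contents are illustrative:

```python
import os
import zlib

def pad_firmware(firmware: bytes, memory_size: int) -> bytes:
    """Fill the unused tail of program memory with random, incompressible
    bytes so that no free space is left to hide a copy of the original."""
    free = memory_size - len(firmware)
    if free < 0:
        raise ValueError("firmware larger than program memory")
    return firmware + os.urandom(free)

def compression_gain(image: bytes) -> float:
    """Fraction of the image an attacker could reclaim by compressing it."""
    return 1.0 - len(zlib.compress(image, 9)) / len(image)
```

Since the verifier's MAC covers the whole memory, filler included, replacing the filler with a compressed copy of the original firmware changes the answers.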
We developed a software integrity verification tool (SIVT) that employs the proposed introspection technique for smart meters. The SIVT provides a single interface for an operator responsible for performing the integrity verification of measuring instruments with embedded software in the field. The tool also manages the coexistence of different software versions, each containing a unique identifier. This is performed by matching the identifier with the correlated set of MACs retrieved from a web server.
We have presented an approach to improve the degree of protection of the intellectual property of measurement device software under evaluation by regulators. This approach hides the complexity of performing software integrity verification of electricity meters from the operator, making it easier to disseminate and train for the software integrity verification task. The ideas exposed are simple and easily implementable even on cheap microcontrollers. This approach was put into practice for the integrity verification of Brazilian smart energy meters.
3.3 Software Fingerprinting

The term fingerprinting refers to the act of embedding an identifier


into an object with the goal of making it traceable later. Considering the model of meter validation that involves software analysis, we
propose fingerprinting within a security protocol for leakage code
identification. This security protocol employs fingerprinting based on
graphs (structures that are naturally associated with the flow of execution of a program) that uniquely identify the owner, which in our case
will be an accredited laboratory.
Inmetro currently conducts the software analysis stage, but this approach is not scalable. A possible solution is to outsource the software
analysis stage to accredited laboratories. On the other hand, software
analysis outsourcing can make it difficult to track and punish the parties
responsible for any software leakage, i.e., for the illegal distribution of the software under analysis.
Given this scenario, our proposed protocol uses a fingerprinting
technique based on graphs [22-27] that makes possible the unambiguous identification of the third parties involved in the software analysis
stage, therefore allowing the tracking of those responsible for possible leaks. In this protocol, the regulator assumes the role of a trusted
third party (TTP), responsible for uniquely identifying each software
module that will be outsourced for analysis. Identifying these modules
is done with the use of fingerprints that are embedded in the software
and are not easily removable.
A fingerprint is inserted into a program by an embedder and it can
be recovered by means of a recognizer algorithm. Formally, a fingerprinting scheme is a function f : P × W × K → P. The input consists
of a software code p ∈ P, an information w ∈ W, and a secret parameter k ∈ K. The output consists of a fingerprinted code q = f(p, w, k) with the
following conditions:
• q has the same functional semantics as p;
• from almost any code equal or close enough to q, it is still possible to recover the information w, knowing k.
It is important to mention that the function f need not be secret, i.e.,
any individual may be able to generate q from p, w, and k. On the other hand, the second condition above tells us that, knowing the secret
k, it is possible to retrieve the information w from the fingerprinted
code even if it has been reasonably modified. This means that it is difficult
for a malicious adversary to remove or circumvent the identification
provided by the fingerprint.
Through the use of fingerprints, it is possible to uniquely identify software modules that are distributed to a third party for analysis.
Consider the scenario where a developer has the code p to be analyzed
by the regulator, and the regulator will outsource this analysis. We
propose the following protocol:
1. The regulator generates a cryptographic digest (or hash) of the
whole code to be analyzed together with evaluator identification.
2. The regulator signs the hash. The set (signed hash + evaluator
identification) is the information w to be embedded in the code.
3. The regulator encodes w as a graph using the algorithm
presented in [27], and embeds it in the original program p (at
a secret location determined by k).
It is important to understand the motivation for each stage.
Stage 1 is important for the uniqueness of the information, w, that
is correlated to the code and the evaluator. Additional information
can be added, for example, the time of the fingerprinting insertion.
Stage 2 ensures that only the regulator will be able to embed valid
information since it is signed, otherwise, any individual would be
able to add a fake identifier to the code. Stage 3 ensures that only
the designated evaluator will have the modified code with a given
fingerprint. In this way, after the code delivery, any leaks from that
modified code can be traced by the fingerprint.
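Stages 1 and 2 can be sketched as below. All names are illustrative, an HMAC keyed with a regulator-only secret stands in for a real digital signature, and stage 3 (the graph encoding of [27]) is outside the scope of this sketch:

```python
import hashlib
import hmac
import json

REGULATOR_KEY = b"regulator-private-secret"  # stands in for a real signing key

def build_fingerprint_info(code: bytes, evaluator_id: str, issued: int) -> dict:
    """Stage 1: digest of the code together with the evaluator
    identification (plus a timestamp, as suggested in the text)."""
    digest = hashlib.sha256(code + evaluator_id.encode()).hexdigest()
    return {"digest": digest, "evaluator": evaluator_id, "issued": issued}

def sign_info(info: dict) -> str:
    """Stage 2: only the regulator, holding the key, can produce a valid tag."""
    payload = json.dumps(info, sort_keys=True).encode()
    return hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()

def make_w(code: bytes, evaluator_id: str, issued: int) -> dict:
    """The information w to be embedded: signed hash + evaluator identification."""
    info = build_fingerprint_info(code, evaluator_id, issued)
    return {"info": info, "signature": sign_info(info)}
```

Because the digest covers both the code and the evaluator identification, each accredited laboratory receives a distinct, regulator-authenticated w, which is what makes a leaked copy traceable.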
4. Measurement Confidence Chain

This research avenue explores ways of providing trust in the devices approved by Inmetro. One of the main objectives is to
allow Smart Grid users to trust information within the grid, both
in terms of its origin (integrity, authenticity, and non-repudiation) and its
generation (correctness and accuracy).
Authenticating the consumption values is of obvious importance
to the consumer. For this reason, a consumption authenticator
should be shown on the meter's display, changing for every new
consumption value update, or on the billing statement (based on the
last consumption value of the period). This makes the information
available to the consumer and allows further validation by the
metrology authority.

Figure 4. (a) Trusted module positioned on the meter's internal circuit. (b) Consumption authenticator shown on the display.
We present two approaches for constructing a consumption authenticator for data that originates from the smart meter: (i) digital signature
based mechanism and (ii) one-time password (OTP) based mechanism.
The digital signature based mechanism provides a custom
cryptographic module, acting as a root of trust, that is coupled to
the beginning of the legally relevant chain of the smart meter. This
module stores measurements from the meter sensors and passes them
forward (digitally signed) to the meter's firmware, where the data can
be freely manipulated (Fig. 4.a). The set of measurements along with
its digital signature, the consumption authenticator, is shown
on the meter's display (Fig. 4.b). The mechanism's trust resides in the
strength of the chosen digital signature scheme and in the fact that
any malicious manipulation of the data after the signing moment
can be traced back and discovered by a verification process.
The work in [28] describes this mechanism in a scenario of
time-of-use tariffs and shows how the size of the authenticator
can be shortened.
On the other hand, digital signatures may be too large to be
transcribed from the display. For this reason, a symmetric key
cryptography approach is also considered: the one-time password
based mechanism. This mechanism provides a consumption
authenticator that validates the measurements at a given time.
For this, the user must input (i) the total consumption (kWh),
(ii) the authenticator, and (iii) the meter's unique identifier
into a trusted verifying system (e.g., a website) provided by the
metrology authority.
The mechanism relies on the Time-based One-Time Password
(TOTP) algorithm as its underlying function, to be able to
combine time and consumption values. The authenticator at a
given time i is composed as

Authenticatori = Truncate(n, Hash(ACi, TOTP(K, Ti))),   (1)

where Ti is the Unix time at time i; K is the shared secret
between the smart meter and the verifying system; ACi is the
value of accumulated consumption at time i; and Truncate()
keeps a number of digits, n, to compose the authenticator (e.g.,
n = 6). In this way, the OTP-based mechanism could be suitable for
our purposes in terms of size, but it still lacks some properties
of asymmetric cryptography, such as non-repudiation and easier
maintenance of cryptographic keys.
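A minimal sketch of Eq. (1) is given below, using the HMAC-based core of RFC 6238 for the TOTP value. The key, time step, and encoding choices are illustrative assumptions, not the scheme actually deployed in the meters:

```python
import hashlib
import hmac
import struct

def totp_value(key: bytes, unix_time: int, step: int = 30) -> bytes:
    """Simplified TOTP core: HMAC over the number of elapsed time steps."""
    counter = struct.pack(">Q", unix_time // step)
    return hmac.new(key, counter, hashlib.sha1).digest()

def authenticator(ac_kwh: int, key: bytes, unix_time: int, n: int = 6) -> str:
    """Eq. (1): Truncate(n, Hash(AC_i, TOTP(K, T_i))), reduced to n decimal
    digits so it is easy to transcribe from the display."""
    d = hashlib.sha256(str(ac_kwh).encode() + totp_value(key, unix_time))
    return str(int(d.hexdigest(), 16) % 10 ** n).zfill(n)
```

The meter and the verifying system, sharing K, compute the same n-digit value for the same accumulated consumption and time step, so the consumer-transcribed authenticator can be checked server-side.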
5. Conclusions

The advancement of computing and microelectronics has led to more
smart devices in various fields. In metrology, mechanical-only meters
are being replaced by microcontrollers with embedded software
dedicated to the measurements. The advent of smart electricity
meters has created a number of challenges with regard to their
evaluation by regulators.
This paper has shown that the validation of these meters can
only be effective if the regulator invests heavily in scientific
knowledge, the development of specialized tools, and the training of
human resources. We have presented the initiatives of Inmetro in
partnership with Eletrobrás Distributor of Rondônia aiming at these
purposes by discussing software analysis, software protection, and the
measurement confidence chain. We have described the joint actions
of Inmetro, meter manufacturers, and energy distributors to
develop systems and devices that increase the level of security of
smart energy meters.
6. Acknowledgements

The authors would like to acknowledge Eletrobrás Distributor of
Rondônia for the financial support of the research project
DR/069/2012.

7. References

[1] R. Vidinich, Furto de Energia e suas consequências, VII Encontro Nacional de Conselhos de Consumidores de Energia Elétrica, 2005 (in Portuguese).
[2] J. Mateus and P. Cuervo, Transmission Cost and Loss Allocation Method through Linearized Distribution Factors, Proceedings of 3rd International Conference: The European Electricity
Market (EEM-06), Warsaw, Poland, pp. 413-420, May 2006.
[3] J. Mateus and P. Franco, Transmission Loss Allocation through
Equivalent Bilateral Exchanges and Economical Analysis,
IEEE T. Power Syst., vol. 20, no. 4, pp. 1799-1807, 2005.
[4] NIST, NIST Issues Expanded Draft of Smart Grid Cyber
Security Strategy for Public Review and Comment,

(www.nist.gov/smartgrid/smartgrid_020310.cfm), last accessed,
August 2014.
[5] NIST, SAMATE - Software Assurance Metrics and Tool Evaluation, (samate.nist.gov/index.php/Main_Page.html), last accessed, August 2014.
[6] Y. Huang, F. Yu, C. Hang, C. Tsai, D. Lee, and S. Kuo, Securing
web application code by static analysis and runtime protection,
Proceedings of 13th International Conference on World Wide
Web, New York, New York, USA, pp. 40-52, 2004.
[7] D. Boccardo, T. Nascimento, R. Machado, C. Prado, and L.
Carmo, Traceability of Executable Codes Using Neural Networks, Lect. Notes Comp. Sci., vol. 6531, pp. 241–253, 2011.
[8] T. Nascimento, C. Prado, R. Machado, L. Carmo, and D. Boccardo, Program Matching through Code Analysis and Artificial
Neural Networks, Int. J. Softw. Eng. Know., vol. 22, no. 2, pp.
225–241, 2012.
[9] T. Nascimento, C. Prado, D. Boccardo, L. Carmo, and R. Machado, Program Equivalence Using Neural Networks, in Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Springer Berlin
Heidelberg, vol. 87, pp. 637–650, 2012.
[10] C. Collberg and J. Nagra, Surreptitious Software: Obfuscation,
Watermarking, and Tamperproofing for Software Protection,
Addison Wesley, 2010.
[11] B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai,
S. Vadhan, and K. Yang, On the (im)possibility of obfuscating programs, J. ACM, vol. 59, no. 2, art. 6, April 2012.
[12] C. Linn and S. Debray, Obfuscation of Executable Code to Improve Resistance to Static Disassembly, Proceedings of ACM
Conference on Computer and Communication Security (CCS),
Washington, DC, USA, 10 p., 2003.
[13] C. LeDoux, M. Sharkey, B. Primeaux, and C. Miles, Instruction Embedding for Improved Obfuscation, Proceedings of
ACM Southeast Regional Conference, Tuscaloosa, Alabama,
USA, pp. 130–135, 2012.
[14] V. Balachandran and S. Emmanuel, Software code obfuscation by hiding control flow information in stack, Proceedings
of IEEE International Workshop on Information Forensics and
Security, pp. 1-6, 2011.

[15] R. Costa, L. Pirmez, D. Boccardo, L. Carmo, and R. Machado, TinyObf: Code Obfuscation Framework for Wireless Sensor Networks, Proceedings of International Conference on Wireless Networks, Las Vegas, Nevada, USA, 2012.
[16] V. de Sá, D. Boccardo, L. Rust, and R. Machado, A tight bound for exhaustive key search attacks against Message Authentication Codes, RAIRO – Theor. Inf. Appl., vol. 27, no. 2, pp. 171-180, 2013.
[17] D. Spinellis, Reflection as a Mechanism for Software Integrity Verification, ACM Transactions on Information and System
Security, vol. 3, no. 1, pp. 51-62, 2000.
[18] A. Seshadri, A. Perrig, L. van Doorn, and P. Khosla, Swatt:
Software-based attestation for embedded devices, In 2004
IEEE Symposium on Security and Privacy, p. 272, Los Alamitos, California, USA, 2004.
[19] A. Seshadri, M. Luk, E. Shi, A. Perrig, L. van Doorn, and P.
Khosla, Pioneer: verifying code integrity and enforcing
untampered code execution on legacy systems, SIGOPS Oper.
Syst. Rev., vol. 39, no. 5, pp. 1-16, 2005.
[20] A. Seshadri, M. Luk, A. Perrig, L. van Doorn, and P. Khosla,
Externally verifiable code execution, Commun. ACM, vol. 49,
no. 9, pp. 45-49, 2006.
[21] F. Douglis, The Compression Cache: Using On-Line Compression to Extend Physical Memory, Proceedings of Winter USENIX Conference, pp. 519–529, 1993.
[22] C. Collberg, A. Huntwork, E. Carter, G. Townsend, and M.
Stepp, More on graph theoretic software watermarks: Implementation, analysis and attacks, Inform. Software Tech., vol.
51, no. 1, pp. 56–67, 2009.
[23] C. Collberg and C. Thomborson, Software Watermarking:
Models and Dynamic Embeddings, Proceedings of 26th ACM
SIGPLAN-SIGACT on Principles of Programming Languages,
San Antonio, Texas, USA, pp. 311–324, 1999.
[24] C. Collberg, C. Thomborson, and G. Townsend, Dynamic
graph-based software fingerprinting, ACM T. Progr. Lang. Sys.,
vol. 29, no. 6, pp. 1–67, 2007.
[25] J. Zhu, Y. Liu, and K. Yin, A Novel Dynamic Graph Software
Watermark Scheme, Proceedings of First International Workshop on Education Technology and Computer Science (ETCS
09), vol. 3, pp. 775-780, 2009.
[26] L. Bento, D. Boccardo, R. Machado, V. Pereira de Sá, and J. L.
Szwarcfiter, Towards a provably robust graph-based watermarking scheme, Proceedings of the 39th International Workshop on
Graph Theoretic Concepts in Computer Science (WG 2013), Lecture Notes in Computer Science 8165, pp. 50-63, 2013.
[27] M. Chroni and S. Nikolopoulos, Efficient Encoding of Watermark Numbers as Reducible Permutation Graphs, Computing
Research Repository (CoRR), 2011.
[28] S. Camara, R. Machado, and L. Carmo, A Consumption
Authenticator Based Mechanism for Time-of-Use Smart Meter
Measurements Verification, Appl. Mech. Mater., vol. 241-244,
pp. 218–222, 2013.

TECHNICAL PAPERS

Electrical Units in the New SI: Saying Goodbye to the 1990 Values
Nick Fletcher, Gert Rietveld, James Olthoff, Ilya Budovsky, and Martin Milton

Abstract: The proposed redefinition of several International System (SI) base units is a topic that has been on the metrology
agenda for the last decade. Recent progress on several determinations of the fundamental constants means that we now have a
good idea of the defined numerical values that will be given in the new system to the Planck constant, h, and the elementary charge,
e. This is especially relevant to electrical metrology as new numerical values for the von Klitzing and Josephson constants, given
by the relations RK = h/e² and KJ = 2e/h, will replace the existing 1990 conventional values, RK90 and KJ90. The implementation
of the new system cannot be done without introducing small step changes into the sizes of the electrical units that are disseminated
using Josephson and quantum Hall intrinsic standards. At the time of writing it looks likely that the relative change from KJ90
to KJ will be of the order of 1 × 10⁻⁷, and that from RK90 to RK will be approximately 2 × 10⁻⁸. This paper discusses the practical
impact of these changes on electrical metrology and highlights the long term benefits that will come from the updated system.
The CCEM (Consultative Committee for Electricity and Magnetism) of the International Committee for Weights and Measures
is now taking the first steps to ensure a smooth implementation, most probably in 2018.
1. Introduction

Discussions on a possible revision of the International System of Units (SI) [1] have been
ongoing for over a decade. After an initial
concentration on the stability of the kilogram
[2, 3], a consensus has emerged for a major
revision centered around the redefinition of
four of the seven base units, that brings fundamental constants to the fore [4]. Recent
progress towards this goal [5] indicates that
such a redefinition is now a real possibility
for 2018. This paper explores what this upcoming change means for how we realize and
disseminate the SI electrical units.
As we will see, no change to working
practices or traceability routes is required; the
new SI effectively formalizes what is already
standard practice in electrical metrology
laboratories.
In fact, the main change
brought about by the new SI is that we will
no longer need to worry about the distinction
between the representations of the volt and
the ohm maintained in the laboratory and the
presently inaccessible "true" SI units. The
new SI will make the present representations
of the volt and the ohm equal to the "true"
SI units. This will change very little for

most users of electrical calibrations, but the improvement in the overall consistency of the
SI is considerable. In the following sections,
we review the difficulties with the present
situation, before considering the impact of
the required changes.
1.1 The Quantum Hall and Josephson Effects

Electrical metrology has had the good fortune to benefit from two remarkable macroscopic quantum effects – the type of
macroscopic quantum effects the type of
physical phenomena that enable us to make
the link between the world of fundamental
constants and that of everyday calibrations.
The quantum Hall effect gives us a quantum
standard for electrical resistance RQHE [6], via
the relation

RQHE = RK/n,   (1)

and similarly the Josephson effect gives us a
quantum standard for voltage VJos [7], via the
relation

VJos = n f/KJ.   (2)

Here n is an integer and f the frequency of

the radiation driving the Josephson device;
RK and KJ are the von Klitzing constant and
Josephson constant, respectively.
It is unnecessary to review here more details of these effects or their successful application; however, we note that this success has
been a major driver in shaping the proposed
revisions of the SI. Two simple relations give
the von Klitzing constant and the Josephson
constant in terms of the Planck constant, h,
and the elementary charge, e,
RK = h/e², and   (3)

KJ = 2e/h.   (4)

Presently we use internationally agreed
values (known as the conventional 1990
values, discussed next) in place of the
"true" SI values of RK and KJ. In the revised
SI, we will instead have defined numerical
values for the fundamental constants h and
e, and hence, via Eqs. (3) and (4), we will
also have defined numerical values for RK
and KJ. This will make the standards based
on the quantum Hall effect and Josephson
effect direct realizations of the SI ohm and
volt, respectively.
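Equations (3) and (4) are straightforward to evaluate numerically. The sketch below uses the CODATA 2010 values of h and e purely for illustration (the exact numerical values to be fixed in the new SI were not settled at the time of writing); the computed offsets from the 1990 conventional values match those discussed later in the paper.

```python
# CODATA 2010 values of the constants, used here for illustration only
h = 6.62606957e-34    # Planck constant, J s
e = 1.602176565e-19   # elementary charge, C

R_K = h / e**2        # von Klitzing constant, Eq. (3), in ohms
K_J = 2 * e / h       # Josephson constant, Eq. (4), in Hz/V

# 1990 conventional values
R_K90 = 25812.807     # ohm
K_J90 = 483597.9e9    # Hz/V

# relative offsets that would appear as step changes in the disseminated units
dR = (R_K - R_K90) / R_K90   # about +1.7e-8
dK = (K_J - K_J90) / K_J90   # about -6.2e-8
print(f"R_K = {R_K:.4f} ohm, offset {dR:.1e}")
print(f"K_J = {K_J / 1e9:.3f} GHz/V, offset {dK:.1e}")
```

With any consistent pair (h, e), the identity KJ·√(h·RK) = 2 also holds, which is the relation used later (Eq. (6)) to convert measured values of h into values of KJ.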

1.2 Electrical Traceability Today: the Conventional 1990 Values

The conventional 1990 values for RK and KJ
were introduced to solve a problem posed
by the success of the quantum Hall and the
Josephson effects. Resistance and voltage
standards based on these effects proved to be
extremely precise, repeatable, and internationally available, but the body of experimental results assessed by the Committee on Data for Science and Technology (CODATA) Task Group on Fundamental Constants did not allow them to be tied into the SI with anywhere near the same accuracy [8].
The limitation on the accuracy with which the
present SI electrical units can be realized is
due to the mechanical definition of the base
unit, the ampere:
The ampere is that constant current
which, if maintained in two straight
parallel conductors of infinite length,
of negligible circular cross-section,

Authors
Nick Fletcher
Bureau International des Poids
et Mesures (BIPM)
Pavillon de Breteuil, 92312
Sèvres Cedex, France
nick.fletcher@bipm.org

Gert Rietveld
Van Swinden Laboratorium (VSL)
P.O. Box 654
2600 AR Delft, Netherlands
grietveld@vsl.nl

James Olthoff
National Institute of Standards and
Technology (NIST)
100 Bureau Drive
Gaithersburg, MD 20899-8171
james.olthoff@nist.gov

Ilya Budovsky
National Measurement Institute
Australia (NMIA)
Bradfield Road, West Lindfield
NSW 2070, Australia
ilya.budovsky@measurement.gov.au

Martin Milton
Bureau International des Poids
et Mesures (BIPM)
Pavillon de Breteuil, 92312
Sèvres Cedex, France
martin.milton@bipm.org


and placed 1 m apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length. [1]
In 1990, the world was not yet ready for a
revision of the SI that would abandon this mechanical definition of the ampere, but the new
quantum electrical standards were already being widely used due to their near-ideal properties. The practical solution chosen was to agree
on international fixed values for the constants
RK and KJ, known as RK90 and KJ90. Whilst this
decision ensured that electrical measurements
would be consistent world-wide, it meant that
units derived from these quantum standards
became effectively decoupled from the SI.
Details of the considerations leading up to the
adoption of RK90 and KJ90 and the choice of
their values can be found in [8, 9].
The upside to this solution was an enormous gain in the uniformity of primary electrical standards between different national
metrology institutes (NMIs). Previous differences in national units of up to a few parts
in 10⁶ were eliminated in a single stroke.
The downside of the practical unit realizations being inconsistent with the SI has not
turned out to be a major difficulty over the
past two decades. It only shows up in comparisons with experiments that have a link to
the mechanically defined SI electrical units – essentially watt balances and calculable
capacitors. There have probably never been
any problems seen in practical calibration
work. Still, the discrepancies between the
1990 units and the SI are clearly not ideal
and thus have been the subject of research
work, as reviewed in the regular adjustments
of the CODATA recommended values for the
fundamental constants.
The choice of values for RK90 and KJ90 turns
out in retrospect to have been sound. The
guiding principle at the time was: "The values
should be so chosen that they are unlikely to
require significant change in the foreseeable
future" [8]. After 25 years, we are now ready
to put these 1990 values into retirement, and
bring the quantum electrical standards fully
into the SI. The new SI represents the definitive solution to the problem which the 1990
values temporarily covered over.
The present system of conventional 1990
values also includes additional uncertainties
for the rare occasions where use of the "true"
SI is required. In practice, however, these are
not often encountered (they are omitted from

Calibration and Measurement Capabilities


(CMCs), for example). The assigned relative
standard uncertainties are 1 × 10⁻⁷ for the use
of RK90 [10] and 4 × 10⁻⁷ for the use of KJ90
[11]. As we shall see in the following sections, the changes we are considering here are
substantially smaller than these uncertainties,
underlining the validity of the decisions made
in 1990.
2. Progress in Knowledge of the Fundamental Constants since 1990

Since 1990, experimental progress has continued on many determinations of fundamental constants and is regularly reviewed in
adjustments published by CODATA [12]. In
this section, we review what the last 20 years
have brought for our knowledge of the constants RK and KJ.
2.1 An Updated Value for RK

The CODATA value for RK is dominated by
experiments that determine the fine structure
constant, α, due to the relation

RK = μ₀c/(2α),   (5)

where both the magnetic constant, μ₀, and the speed of light, c, have defined numerical
values in the present SI. These experiments
have improved dramatically in both accuracy
and diversity since 1990, leading to the
excellent present knowledge of RK. Table 1
shows the successive best estimates over
recent years, and these figures are plotted
in Fig. 1. We see both an improvement
in uncertainty as well as a stable value to
within 1 part in 10⁸, and thus can predict with
reasonable confidence the change on adoption
of the new SI. The relative offset from the
value of RK90 is around 17 × 10⁻⁹, and is now
believed to be known to better than 1 × 10⁻⁹.
The graph in Fig. 1 clearly shows one of
the challenges of the CODATA adjustments
of fundamental constants, namely that
new values based on the latest experiments
may significantly deviate from previous
values (this is likely because uncertainties
in experiments have been underestimated).
In this specific case, the relative difference
between the 2006 and 2010 CODATA values
of RK is 5 × 10⁻⁹, whereas both values have
a relative uncertainty of less than 1 × 10⁻⁹.
All four of the most recent CODATA values
of RK do however agree very well within the
level of 1 × 10⁻⁸, so this discrepancy does not
significantly affect the present discussion.

              RK (Ω)                 u (Ω/Ω)        Δ1990 (Ω/Ω)
1990 Value    25 812.807             ----           ----
1998 CODATA   25 812.807 572 (95)    3.7 × 10⁻⁹     +22 × 10⁻⁹
2002 CODATA   25 812.807 449 (86)    3.3 × 10⁻⁹     +17 × 10⁻⁹
2006 CODATA   25 812.807 557 (18)    0.7 × 10⁻⁹     +22 × 10⁻⁹
2010 CODATA   25 812.807 4434 (84)   0.32 × 10⁻⁹    +17 × 10⁻⁹

Table 1. Evolution of CODATA values of RK, including the relative standard uncertainty, u, and the difference from the 1990 value.

KJ = 2/√(h RK).   (6)

For the purpose of analyzing the contributions to the value of KJ, we can assume the
uncertainty on RK is negligible, and can use
Eq. (6) to convert the reported experimental h
values to individual values of KJ. The results
thus obtained for the most important contributing experiments are shown in Fig. 3. The
values are plotted as relative differences from
KJ90, in parts in 10⁸. The uncertainty bars are
standard uncertainties.
We plot both results that were included in the
latest (2010) CODATA adjustment and those
published since [13, 14]. The convergence of
recent results predicts a value of KJ approximately 10 parts in 10⁸ below the 1990 value.
Although the picture is not yet completely finalized and critical results have only become available within the last six months, it has become
clear that we will have to deal with a small, but
significant, offset from the 1990 value.
3. Implementation of the New SI – Changes for Electricity

Figure 1. Data from Table 1: successive CODATA values of the von Klitzing
constant RK published since the adoption of the 1990 value.

              KJ (GHz/V)           u (rel.)       Δ1990 (rel.)
1990 Value    483 597.9            ----           ----
1998 CODATA   483 597.898 (19)     3.9 × 10⁻⁸     -0.4 × 10⁻⁸
2002 CODATA   483 597.879 (41)     8.5 × 10⁻⁸     -4.3 × 10⁻⁸
2006 CODATA   483 597.891 (12)     2.5 × 10⁻⁸     -1.9 × 10⁻⁸
2010 CODATA   483 597.870 (11)     2.2 × 10⁻⁸     -6.2 × 10⁻⁸

Table 2. Evolution of CODATA values of KJ, including the relative standard uncertainty, u, and the difference from the 1990 value.

2.2 An Updated Value for KJ

The situation for KJ is not as clear as for RK.
Table 2 gives the evolution of CODATA
values of KJ since 1990 in a similar way to
those shown above for RK, and these figures
are also plotted in Fig. 2. The data clearly
show that there is little improvement in
uncertainty, and, more importantly, that the
value is not yet reliable at the 10⁻⁸ level.
The critical experiments in the CODATA
adjustments of KJ are watt balance (WB)
and silicon sphere (²⁸Si) based Avogadro
measurements (see [12] for more details).
These are often compared via the effective
values of h that the individual experiments
give. From Eqs. (3) and (4), we can see that
KJ is linked to h and RK via

The ampere will remain the base unit for
electricity after the proposed revision, and as
at present, the dissemination of the electrical
units will continue to be based on standards
for resistance and voltage using the quantum
Hall and Josephson effects. The details of this
implementation for the electrical units have
been laid out in a draft document, known as
the mise en pratique, that has been available
since 2009 [15]. This document (along with
equivalents for other areas of metrology)
gives the details of how to implement the
abstract SI unit definitions in practical
realizations. It contains very little of surprise
to the electrical metrologist familiar with the
present SI. All that has really changed are the
two reference values used for RK and KJ.
There will be an inevitable step change in
the electrical units realized from quantum
standards when this change is implemented,
and the numerical values KJ90 and RK90 that
have been in use for more than 20 years are
abrogated and replaced by the new values of
KJ and RK based on the latest experiments.
To understand the impact of this change,
we must consider the uncertainties that
are achievable in both routine and state of
the art measurements today. We consider
separately below dc resistance and dc
voltage metrology, and finally the wider
spectrum of electrical quantities.

3.1 Impact for Resistance Measurements

Whilst quantum Hall resistance (QHR) systems can be compared to the level of 1 part in 10⁹ [16], calibrations of travelling
part in 109 [16], calibrations of travelling
resistance standards rarely have relative
uncertainties of less than 2 × 10⁻⁸ due to the
limitations of the standards themselves [17].
Consequently, a step change of 0.02 µΩ/Ω
in assigned resistor values due to the change
from RK90 to RK should only be seen on the
top level working standards maintained
within NMIs. Commercial QHR systems
have been available for more than 10 years,
but have remained relatively complex and
expensive, and have not been widely adopted
outside of NMIs. This may change in the
next few years, as graphene technology
promises to significantly simplify QHR
equipment [18], but we are not quite there
yet. Coordination of the change is thus
restricted to NMI experts, and even the most
demanding users of resistance traceability
will probably be unaffected.

Figure 2. Successive CODATA values of the Josephson constant KJ published since the adoption of the 1990 value.

3.2 Impact for Voltage Measurements

Josephson voltage standards have reached a mature level of technological development, and commercial systems are widely deployed in industrial calibration laboratories.
We can get a good idea of the state of the
art in dc voltage metrology from the North
American 10 V Josephson interlaboratory
comparison (sponsored by NCSL International). This has been running since 2001,
with the latest (9th iteration) completed in
2011. The results and a review of the experiences of 10 years of measurements are
reported in [19]. The analysis shows three
distinct levels of uncertainty obtainable
in voltage comparisons via different techniques. The following figures quoted from
[19] are all expanded uncertainties at 95%
confidence, given in nV relative to 10 V; the
0.1 µV/V relative change we are considering
for KJ is equivalent to 1000 nV in 10 V.
Firstly, direct comparisons of two Josephson systems without any intervening secondary standards can give uncertainties as low as
3 nV. This finding is in line with the experience of the ongoing Bureau International des
Poids et Mesures (BIPM) series of on-site Josephson comparisons, conducted world-wide
amongst NMIs [20]. Secondly, two Josephson
systems in the same lab can be used alternately to measure a common Zener reference. In
this case, only the short-term instability of the
secondary standard is important, but the uncertainty still rises significantly to the level of 20 nV.

Figure 3. Values of major experiments contributing to the value of the Josephson constant KJ before (green squares) and after (red diamonds) the last CODATA adjustment (blue dot).

Finally, one or more Zener standards
can be used as travelling artifacts to compare
Josephson systems in separate laboratories.
Even with well-characterized Zeners, the
transport shocks, the inevitable drift during
the time of exchange, and the necessary corrections for conditions of humidity and atmospheric pressure, increase the uncertainty by another factor of 10 to the level of 200 nV.
This final level of uncertainty is also consistent with the CMCs of NMIs offering calibrations of Zener voltage standards directly
against Josephson systems, which can indeed
be as low as 0.02 µV/V [21]. A step change of five times the uncertainty (at the 95% confidence level) is clearly something that should concern the metrologist.

Figure 4. An example of a stable Zener standard drifting slowly against a Josephson reference; the size of a step change of 0.1 µV/V is shown for comparison purposes.
However, when we put the change of
0.1 µV/V in the context of the longer term
stability limits of Zeners (the best available
secondary standards), this change becomes
less worrisome. The results presented in
[19] show details of the long-term behaviour
of four individual Zener standards over the
10 years during which they have been used
for comparisons. They all show drifts in
time of order 10 µV/year (not linear over
more than a few months, but well fitted by
an exponential function). More importantly, the residuals from the fit show rapid and
unpredictable variations of amplitude 1 µV.
Figure 4 shows a selected stable Zener that
has not travelled, but has been maintained in
stable conditions in the BIPM laboratories for
the last 10 years. Even here, we see that the
step change introduced by the update to the
reference value for KJ is of a similar size to
normal variations in the medium term, and
will thus quickly be lost in the drift line.
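The point can be illustrated with a small simulation; all the numbers below are illustrative assumptions in the spirit of the figures quoted above (a drift of order 10 µV/year on a 10 V output and roughly 1 µV of unpredictable variation), not data from [19]:

```python
import random
import statistics

random.seed(1)

# Simulated daily Zener readings, as offsets in µV from a nominal 10 V output:
# a saturating exponential drift plus ~1 µV of unpredictable variation.
days = range(365)
drift = [20.0 * (1.0 - 2.718281828 ** (-d / 400.0)) for d in days]
readings = [mu + random.gauss(0.0, 1.0) for mu in drift]

# The step change from the updated KJ: 0.1 µV/V on a 10 V output = 1 µV.
step_uv = 1.0

# Scatter of the residuals about the drift line.
noise_uv = statistics.stdev(r - m for r, m in zip(readings, drift))

# The step is of the same order as the medium-term noise, so it is soon
# absorbed into the fitted drift line.
print(step_uv / noise_uv < 2.0)
```

With these assumed magnitudes, the step is comparable to the residual scatter of a single well-behaved Zener, which is the sense in which it is "quickly lost in the drift line."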
3.3 Effect beyond Primary Intrinsic Standards

The impact of the new SI and the resulting step change in resistance and voltage measurements will be very marginal beyond the
primary intrinsic standards. The most significant effect is a step change of the order
of 0.1 µV/V in voltage, as outlined in the
previous section. High-end digital voltmeters have specifications of a few parts in 10⁶. Even though they behave better in the
well-defined environment of a qualified (national) metrology laboratory, the effect of
the step change will still remain unnoticed,
swamped by the noise and instability of the
internal Zener-diode voltage reference used
in the instrument. Other areas of electrical
metrology will be essentially unaffected by
the envisaged change in voltage, given their
uncertainty levels. In the demanding area of
primary power measurements, the achieved
expanded relative uncertainties of around
2 × 10⁻⁶ [22, 23] are still an order of magnitude larger than the change in reference value
for KJ. Capacitance standards are often directly traceable to resistance standards, and
in these cases will also see a step change of
0.02 µF/F on the introduction of the updated
value for RK. However, even the best calibration uncertainties are larger than this, and the
effect will not be visible to end users.
In conclusion, for the wider field of electrical metrology there will be no need for the
type of large scale program of education and
recalibration that was undertaken for the introduction of the 1990 values (see e.g. [24]).
3.4 Implementing the Change

There are a few practical aspects when it comes to implementing the change brought about by
the introduction of the new SI. This includes
updating the values of RK and KJ in measurement and data analysis software as well as
updating analyses of top level resistance and
voltage standards based on their history charts.

For the users of Josephson and QHR systems, implementing the new SI in principle will be as simple as changing one
reference number used in the calculation
of the measurement results. In practice
this can still present some difficulties. The
equipment concerned may be a commercial
Josephson or QHR system running software
supplied by the manufacturer, for which the
end-user does not have the source code. A
certain amount of time and effort will be
required for making these software updates,
and that needs planning and coordination. To
avoid discrepancies, it is important that the
updated software is available at the time of
the introduction of the new SI.
As explained above, the step changes
introduced into the history graphs of resistors
or Zeners will quickly fade into insignificance,
but close to the time of redefinition, care must
be taken not to mix old and new values.
Drift rates of standards are not affected by
the new SI, but in practice, when determining
drift rates from measurement data for very
stable standards, it is again important not to
mix old and new values; the old values
must be corrected for the step change caused
by the implementation of the new SI.
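As a sketch of that bookkeeping (the function name and the size and sign of the fractional step are placeholders; the exact corrections will only be fixed at redefinition), pre-redefinition values can be rescaled onto the new footing before a drift line is fitted:

```python
def harmonize_history(history, t_switch, rel_step):
    """Rescale pre-redefinition values by the fractional step change so that
    old and new values can be mixed in a single drift analysis.

    history: list of (time, value) pairs; rel_step: e.g. ~1e-7 for the
    anticipated ~0.1 µV/V step in voltage (placeholder value).
    """
    return [(t, v * (1.0 + rel_step)) if t < t_switch else (t, v)
            for t, v in history]

# Hypothetical 10 V Zener history chart spanning a redefinition in 2018:
history = [(2016.0, 9.9999990), (2017.0, 10.0000010), (2019.0, 10.0000040)]
adjusted = harmonize_history(history, t_switch=2018.0, rel_step=1e-7)
```

The post-redefinition points are left untouched; only the older points are mapped onto the new reference value, so a subsequent least-squares drift fit sees a single consistent series.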
The industrial impact of redefinition has
been considered previously [25], but we
note that the expected change for voltage is
now five times larger than considered at that
time. Up until now, the details and timing
for redefinition have not been sufficiently
well developed to start communicating
effectively to end users. We can expect this
to change over the next few years, with the
implementation of the new SI most probably
occurring in 2018. The National Institute
of Standards and Technology (NIST) in the
United States and other NMIs around the
world will take up the work of ensuring a
smooth transition within industry, and will
assess the impact on national calibration
infrastructures.
4. Conclusions

The revised SI represents a significant step forward for electrical metrology. The
electrical units in the new system will be
directly linked to the fundamental constants
of nature h and e via proven practical quantum
standards based on the quantum Hall effect
and the Josephson effect. However, in
order to get to this position, the widely used
conventional 1990 values KJ90 and RK90 must
be abandoned and the new values of RK and KJ have to be used. The unavoidable step change introduced on doing
so will be at the limits of visibility for the most demanding users of
resistance and voltage calibrations. The most significant impact will
be in the area of voltage, where a step change of around 0.1 µV/V is
foreseen, which will be clearly visible in comparisons of Josephson
systems, such as those organized by NCSL International.
This paper is part of the work of a task group of the Consultative
Committee on Electricity and Magnetism (CCEM), created to address
the implementation of the new SI. The details given here will
continue to be updated with new experimental evidence, and the exact
changes to be applied will not be known until just prior to the date
of implementation (most likely 2018). However, the recent progress
means we now have a good picture, and that we can start to prepare
for the necessary changes.
5. References

[1] BIPM, "The International System of Units (SI)," 8th edition, Bureau International des Poids et Mesures, 2006.
[2] B. Taylor and P. Mohr, "On the redefinition of the kilogram," Metrologia, vol. 36, no. 1, pp. 63-64, 1999.
[3] I. Mills, P. Mohr, T. Quinn, B. Taylor, and E. Williams, "Redefinition of the kilogram: a decision whose time has come," Metrologia, vol. 42, no. 2, pp. 71-80, 2005.
[4] I. Mills, P. Mohr, T. Quinn, B. Taylor, and E. Williams, "Redefinition of the kilogram, ampere, kelvin and mole: a proposed approach to implementing CIPM recommendation 1 (CI-2005)," Metrologia, vol. 43, no. 3, pp. 227-246, 2006.
[5] M. Milton, R. Davis, and N. Fletcher, "Towards a new SI: a review of progress made since 2011," Metrologia, vol. 51, no. 3, p. R21, 2014.
[6] B. Jeckelmann and B. Jeanneret, "The quantum Hall effect as an electrical resistance standard," Rep. Prog. Phys., vol. 64, no. 12, pp. 1603-1655, 2001.
[7] C. Hamilton, "Josephson Voltage Standards," Rev. Sci. Instrum., vol. 71, no. 10, pp. 3611-3623, 2000.
[8] B. Taylor and T. Witt, "New International Electrical Reference Standards Based on the Josephson and Quantum Hall Effects," Metrologia, vol. 26, no. 1, pp. 47-62, 1989.
[9] CCEM, Report of the 18th Meeting, 1988.
[10] CIPM, Procès-Verbaux des Séances du CIPM, vol. 56, p. 45, 1988 (amended by vol. 68, p. 101, 2000).
[11] CIPM, Procès-Verbaux des Séances du CIPM, vol. 56, p. 44, 1988.
[12] P. Mohr, B. Taylor, and D. Newell, "CODATA recommended values of the fundamental physical constants: 2010," Rev. Mod. Phys., vol. 84, no. 4, pp. 1527-1605, 2012.
[13] C. Sanchez, B. Wood, R. Green, J. Liard, and D. Inglis, "A determination of Planck's constant using the NRC watt balance," Metrologia, vol. 51, no. 2, p. S5, 2014.
[14] S. Schlamminger, D. Haddad, F. Seifert, L. Chao, D. Newell, R. Liu, R. Steiner, and J. Pratt, "Determination of the Planck constant using a watt balance with a superconducting magnet system at the National Institute of Standards and Technology," Metrologia, vol. 51, no. 2, p. S15, 2014.
[15] CCEM, "Mise en pratique for the ampere and other electric units in the International System of Units (SI)," Draft #1, CCEM/09-05, 6 p., 2011.
[16] F. Delahaye, T. Witt, E. Pesel, B. Schumacher, and P. Warneke, "Comparison of quantum Hall effect resistance standards of the PTB and the BIPM," Metrologia, vol. 34, no. 3, pp. 211-214, 1997.
[17] B. Schumacher, "Final report on EUROMET.EM-K10: Key comparison of resistance standards at 100 Ω," Metrologia, vol. 47, 01008, 2010.
[18] T. Janssen, N. Fletcher, R. Goebel, J. Williams, A. Tzalenchuk, R. Yakimova, S. Kubatkin, S. Lara-Avila, and V. Fal'ko, "Graphene, universality of the quantum Hall effect and redefinition of the SI system," New J. Phys., vol. 13, 093026, 2011.
[19] H. Parks, Y. Tang, P. Reese, J. Gust, and J. Novak, "The North American Josephson Voltage Interlaboratory Comparison," IEEE T. Instrum. Meas., vol. 62, no. 6, pp. 1608-1614, 2013.
[20] S. Solve and M. Stock, "BIPM direct on-site Josephson voltage standard comparisons: 20 years of results," Meas. Sci. Technol., vol. 23, no. 12, 124001, 2012.
[21] BIPM, Key Comparisons Database, Appendix C, (http://kcdb.bipm.org/AppendixC/default.asp).
[22] W. Ihlenfeld, E. Mohns, and K. Dauke, "Classical nonquantum AC power measurements with uncertainties approaching 1 µW/VA," IEEE T. Instrum. Meas., vol. 56, no. 2, pp. 410-413, 2007.
[23] B. Waltrip, T. Nelson, E. So, and D. Angelo, "A bilateral comparison between NIST quantum-based power standard and NRC current-comparator-based power standard," IEEE Conference on Precision Electromagnetic Measurements (CPEM) Digest, Washington, DC, USA, pp. 203-204, 2012.
[24] N. Belecki, R. Dziuba, B. Field, and B. Taylor, "Guidelines for implementing the new representations of the Volt and Ohm effective January 1, 1990," NIST Technical Note 1263, 1989.
[25] J. Gust, "The Impact of the New SI on Industry," Proceedings of the NCSL International Workshop and Symposium, National Harbor, Maryland, 2011.


TECHNICAL PAPERS

Realization and Dissemination of the International Temperature Scale of 1990 (ITS-90) above 962 °C

Andrew D. W. Todd and Donald J. Woods

Abstract: Above 962 °C, the ITS-90 is realized at the National Research Council (NRC) of Canada by a standard radiation
thermometer with a known spectral responsivity and a silver freezing-point blackbody. Together with Planck's law, the temperature
scale can be extrapolated to temperatures in excess of 2500 °C, albeit with uncertainties that increase with higher temperatures.
This realization can then be disseminated to other radiation thermometers via a variable-temperature blackbody for use in, for
example, calibration laboratories. In the future, it is expected that new high-temperature fixed points with transition temperatures
exceeding 3000 °C will allow an interpolated high-temperature realization and lower uncertainties.
1. Introduction

The International Temperature Scale of 1990, ITS-90, is a practical approximation to the
true, thermodynamic temperature [1]. It is a
defined scale and so the temperature reference
points (i.e. fixed points) are defined with
transition temperatures with zero uncertainty
and are frozen in time. This temperature scale
approach facilitates conformity over time and
space. A history of, and more information
about, temperature scales can be found in [2].
The ITS-90 specifies the temperature-defining fixed point(s), interpolating/
extrapolating instruments (thermometers),
and the method of interpolating/extrapolating
between/beyond the fixed-point(s) and is
divided into a number of ranges and sub-ranges. Above the freezing point of silver
(961.78 °C), the scale is defined by one of
the freezing temperatures of silver, gold, or
copper (the temperature-defining fixed point),
a radiation thermometer (the extrapolating
instrument), and the Planck radiation law (the
extrapolating function) [1]. Since this method
relies on extrapolation, the uncertainty in the
measurement of temperature increases rapidly
for temperatures further away from the
defining fixed point. This temperature range
differs from the other sub-ranges in the ITS-90 since it is extrapolated from one defining
fixed point, and not interpolated between a
number of fixed points, as is the case in the
lower temperature ranges. This is because, at the time the ITS-90 was developed, there were no stable, low-uncertainty fixed
points available above the freezing point of
copper. Because there is only one defining
fixed point, additional information about
the radiation thermometer is required to
measure temperature which would not be
required with an interpolated scale. The
additional information is the relative spectral
responsivity (or, alternatively, the central
wavelength and the bandwidth) of the
radiation thermometer system.
In this paper, the way the ITS-90 is defined
and realized above 962 °C at NRC will be
discussed, and current limitations will be
presented. We set out to improve our ability
to realize a temperature scale above the
freezing point of silver with uncertainties that
are comparable to the best national metrology
institutes (NMIs) in the world. To do so, we
have been actively researching new, high-temperature fixed points. We believe that
these fixed points will enable us to reduce
uncertainties and that they will enable better
scale comparisons between NMIs.
At NRC, we rely on an LP3 radiation thermometer (KE Technologie, GmbH) that uses an interference filter with a nominal center wavelength of 650 nm and a full-width-half-maximum (FWHM) bandwidth
of ~10 nm. This instrument uses a silicon
detector and, while its focus is adjustable,
a source-to-lens distance of 70 cm is

maintained during calibration and use. The


silver freezing point is used as the ITS-90
reference point.
2. Realization of the ITS-90 above 962 °C

As stated above, the realization of the ITS-90 above 962 °C relies on the Planck radiation law (in ratio form) to extrapolate from the defining fixed point at a temperature, T90(X), to the unknown temperature, T90:
    L(T90) / L[T90(X)] = {exp[c2/(λ T90(X))] - 1} / {exp[c2/(λ T90)] - 1}    (1)

where
T90(X)    is the freezing temperature of either silver (T90(Ag) = 1234.93 K), gold (T90(Au) = 1337.33 K), or copper (T90(Cu) = 1357.77 K);
L(T90)    is the spectral radiance of a blackbody at a wavelength, λ, and a temperature, T90;
L[T90(X)] is the spectral radiance of a blackbody at a wavelength, λ, and a temperature, T90(X);
λ         is the wavelength (in vacuum); and
c2        is 0.014388 m·K (note that the value of the second radiation constant has changed since the ITS-90 was published, but the constant used to realize the ITS-90 remains as it was defined).

Authors
Andrew D. W. Todd
andrew.todd@nrc-cnrc.gc.ca
Donald J. Woods
don.woods@nrc-cnrc.gc.ca
National Research Council of Canada
1200 Montreal Road
Ottawa, Ontario, Canada K1A 0R6

Figure 1. The relative responsivity of the NRC standard radiation thermometer.

Figure 2. A picture of a disassembled NRC Ag fixed-point cell.

Using Eq. (1), the defined temperature of one of the fixed points
of Ag, Au or Cu (T90(X)), and knowledge of the wavelength, the
unknown temperature (T90) can be determined. Since only one fixed
point is used this realization requires extrapolation and so the farther
the unknown temperature is from the defining fixed point, the higher
the uncertainty in the unknown temperature. In fact, the uncertainty
increases with the square of the temperature.
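The quadratic dependence can be seen from Wien's approximation, under which the detected signal behaves as S(T) ∝ exp[-c2/(λT)] (a sketch, not the exact Planck form):

```latex
S(T) \propto e^{-c_2/(\lambda T)}
\quad\Rightarrow\quad
\frac{\mathrm{d}S}{S} = \frac{c_2}{\lambda T^{2}}\,\mathrm{d}T
\quad\Rightarrow\quad
u(T) \approx \frac{\lambda T^{2}}{c_2}\,\frac{u(S)}{S}.
```

For λ = 650 nm, the sensitivity coefficient λT²/c2 at 2500 °C is about (2773/1273)² ≈ 4.7 times larger than at 1000 °C, so the same relative signal uncertainty maps to an almost five-fold larger temperature uncertainty.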
Since there is no such thing as a perfectly monochromatic radiation
thermometer, some additional knowledge about the operating
wavelength of the radiation thermometer is required. We must also
introduce the emissivities of the T90(X) fixed point and the target at the
temperature being measured. Then, Eq. (1) becomes:
    r = S(T90) / S[T90(X)]
      = ∫ ε(λ, T90) R(λ) L(λ, T90) dλ / ∫ ε[λ, T90(X)] R(λ) L[λ, T90(X)] dλ    (2)

(with the integrals taken over λ from 0 to ∞), where
r            is the ratio of the signals measured at T90 and T90(X);
R(λ)         is the relative spectral responsivity of the radiation thermometer;
S(T90)       is the signal measured at T90;
S[T90(X)]    is the signal measured at T90(X);
ε(λ, T90)    is the emissivity of the target at T90; and
ε(λ, T90(X)) is the emissivity of the ITS-90 defining fixed point at T90(X).

At NRC we measure the relative spectral responsivity of our LP3 radiation thermometer and Fig. 1 shows this responsivity for the 650 nm filter.
Equation (2) cannot be explicitly solved for the temperature, T90,
and the integrals need to be solved numerically. Chapter 6 of the
Supplementary Information for the International Temperature Scale of 1990 [3] suggests the use of the mean effective wavelength approach
to help solve Eq. (2). However, with the current speed of computers,
the direct numerical calculation is easily achieved.
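A direct numerical solution can be sketched in a few lines. Here the responsivity is a hypothetical Gaussian (650 nm centre, 10 nm FWHM) standing in for the measured R(λ) of Fig. 1, and the emissivities in Eq. (2) are taken as unity:

```python
import numpy as np

C2 = 0.014388  # m*K, second radiation constant as fixed in the ITS-90

# Hypothetical relative responsivity: Gaussian, 650 nm centre, 10 nm FWHM
lam = np.linspace(600e-9, 700e-9, 2001)          # wavelength grid, m
sigma = 10e-9 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
R = np.exp(-0.5 * ((lam - 650e-9) / sigma) ** 2)

T_AG = 1234.93  # K, ITS-90 silver freezing point

def planck(T):
    # Spectral radiance shape; the first radiation constant cancels in the ratio
    return 1.0 / (lam ** 5 * (np.exp(C2 / (lam * T)) - 1.0))

def signal_ratio(T):
    # r = S(T90)/S[T90(Ag)], Eq. (2) with unit emissivities; on a uniform
    # grid the wavelength step cancels between numerator and denominator
    return float(np.sum(R * planck(T)) / np.sum(R * planck(T_AG)))

def solve_t90(r, lo=1235.0, hi=3500.0):
    # signal_ratio increases monotonically with T, so bisection converges
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if signal_ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

r = signal_ratio(2773.15)      # simulate a blackbody at 2500 degC
print(round(solve_t90(r), 2))  # recovers 2773.15
```

The same bisection recovers T90 from any measured photocurrent ratio once R(λ) and S[T90(Ag)] are known, which is exactly the measurement model of Eq. (2).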
At NRC, a silver fixed-point blackbody is used to link the radiation
thermometer to the ITS-90 [4]. The silver fixed point is realized in a
sodium heat-pipe furnace to achieve the lowest possible temperature
gradient. The fixed point has an aperture 13 mm in diameter and
contains approximately 775 g of silver. The freezing point is realized
and the photocurrent of the LP3 is recorded; this photocurrent
corresponds to S[T90(X)] (where X = Ag) in Eq. (2). Fig. 2 shows an
image of a disassembled Ag fixed-point cell.
With a known spectral responsivity and the value of the LP3 output
signal at the silver freezing point, the temperature corresponding to
any other photocurrent measured by the LP3 can be determined. The
uncertainties [5] (all k = 1) associated with realizing T90 in this way
include components due to: the Ag fixed point [including impurities in
the Ag fixed point (2.5 mK), emissivity of the Ag fixed point (32 mK),
cavity temperature drop (28 mK), identification of the plateau (Ag
point) (17 mK), and repeatability of the Ag plateau (10 mK)]; the
uncertainty in the wavelength; calibration of the reference detector;
scattering and polarization; out-of-band transmittance; interpolation;
size-of-source effect (SSE) that varies with source size; non-linearity;
and gain ratio. The total uncertainty in the realization of the ITS-90
that can be obtained is shown in Fig. 3 as a function of temperature
and the components are indicated in Table 1 for temperatures of
1000 °C and 2500 °C.
3. Dissemination of the ITS-90 above 962 °C

To disseminate the ITS-90 in this range, the calibrated LP3 is used as a standard and a variable-temperature blackbody is used to transfer the scale from the LP3 to the radiation thermometer to be calibrated. For temperatures above 962 °C, NRC currently uses an electrically-heated, graphite-tube blackbody furnace with a 50.8 mm diameter blackbody
cavity. In this configuration, the furnace can reach 2500 °C. To transfer
the scale from the LP3 to the radiation thermometer being calibrated,
the blackbody furnace is stabilized at a particular temperature. Then,
the LP3 determines the temperature of the blackbody and the other
radiation thermometer is moved into place to measure the blackbody.

Uncertainty Component          u (k = 1)    u (k = 1) at 1000 °C (mK)   u (k = 1) at 2500 °C (mK)
Ag point                       46 mK        51                          236
Wavelength calibration         0.1 nm       11                          546
Reference detector             0.1 nm       11                          532
Scattering and polarization    0.0009 nm
Out-of-band transmittance      0.065 nm                                 355
Interpolation                  0.0065 nm                                35
Non-linearity                  0.0005
Gain Ratio                     0.0001
Total (k = 1)                               54                          874
Total (k = 2)                               107                         1749

Table 1. Uncertainties in the realization of the ITS-90 above 962 °C at NRC.

The LP3 is then moved back in front of the blackbody to determine if the temperature has drifted. This process is then repeated at chosen
temperature intervals to cover the range required. Figure 4 shows a
picture of the LP3 and high-temperature blackbody.
Of course, the uncertainty in the dissemination of the ITS-90 is
larger than the uncertainty in the realization of the ITS-90 described
in Section 2. The radiation thermometer to be calibrated has a SSE
uncertainty, a different field-of-view than the LP3 (which requires
that the homogeneity of the blackbody be known), and a stability
characteristic, and all of these contribute to the uncertainty. These
are typically taken to be included in the manufacturer's uncertainty
specification and this is left to the end user. Because of this,
radiation thermometer calibration uncertainties will be larger than the
uncertainty in the realization of the scale.

Figure 3. The k = 1 uncertainty in the realization of the ITS-90 above 962 °C using the NRC LP3.

4. Future Improvements/Advanced Realization

It is clear from Fig. 3 that the extrapolation from the Ag point to temperatures near 2500 °C results in uncertainties that increase with
temperature. Any difference in the ITS-90 temperature from the true

Figure 4. The LP3 and the electrically heated, variable high-temperature blackbody.

thermodynamic temperature of the fixed point used to define the
ITS-90 (e.g. Ag, Au or Cu) will be magnified. If suitable high-temperature fixed points were available with their thermodynamic temperature known, then interpolation between the fixed points would be possible. Scale realizations in this temperature range using a set of high-temperature fixed points have been shown to have uncertainties
comparable to the ITS-90 but with more robustness and a better link
to thermodynamic temperature. A detailed comparison of extrapolated
versus multiple fixed-point schemes is given in [6].
In order to use a radiation thermometer to realize the ITS-90 as
it is written, the relative spectral responsivity needs to be known
in addition to the measurement of the Ag, Au or Cu freezing point.
Measuring the relative responsivity of a radiation thermometer is
time consuming and out of reach of most users. In a scheme where
three or more fixed points are used, the relative responsivity is not
required to be known and therefore offers some practical advantages
for many users.
In recent years, a great deal of effort has been expended to develop
high-temperature fixed points based on metal-carbon eutectic
materials [7, 8]. These metal-carbon alloys have been shown to exhibit
stable melting transitions and cover temperatures from ~1153 °C
(Fe-C) to 3185 °C (HfC-C). While the determination of the transition
temperatures is still an active area of study at many NMIs, Fig. 5 [8]
shows the approximate melting temperatures of the metal-carbon and
metal carbide-carbon fixed points.
At present, the transition temperatures of these high-temperature
fixed points are not well known, but an international effort to measure
the absolute melting temperatures of the Co-C, Pt-C, and Re-C fixed
points is underway [9, 10]. At NRC we have constructed metal-carbon fixed points of Co-C [11], Pt-C and Re-C [12] for radiation thermometry.
We have also measured TiC-C fixed points which have a transition
temperature near 2758 °C. NRC is also actively working to determine
the thermodynamic temperature of the fixed points of Co-C, Pt-C and
Re-C. Results of preliminary measurements have been presented in
[13] and measurements that will contribute to the final assignment
of the transition temperatures have been completed (but are being
kept unpublished until measurements at all the participating NMIs
have been completed). In this final work our determinations of the
thermodynamic temperatures of Co-C (~1324 °C), Pt-C (~1735 °C),
and Re-C (~2474 °C) were made with k = 2 uncertainties of 0.281 °C,
0.502 °C, and 0.831 °C, respectively.
It is anticipated that by using three or more fixed points an
interpolated realization of high temperatures by radiation thermometry
can be achieved with uncertainties comparable to, or better than,
current ITS-90 realizations.
In addition to providing the ability to realize an interpolated high-temperature scale, these fixed points can be used as artifacts to be
circulated among different organizations to determine equivalence
of the locally-realized scales. Using fixed points for this purpose
has a number of advantages over other options for probing a high
temperature scale: they are more robust, less prone to drift, and less
sensitive to emissivity than lamps and are smaller, more robust, and
easier to transport than a radiation thermometer.
5. Summary

The ITS-90 is a defined temperature scale and, above 962 °C, is realized via a radiation thermometer with known relative spectral
Figure 5. Approximate melting temperatures of the metal-carbon and metal carbide-carbon fixed points. The melting temperature and the uncertainty in the melting temperature of these fixed points are not well known at this time.

responsivity, Planck's law in ratio form, and the measurement of the freezing point of one of the fixed points of Ag, Au, or Cu. With
a single temperature reference point (for a given realization), it is
then extrapolated to higher temperatures. At NRC, the scale in this
range is currently realized using an LP3 radiation thermometer and
an Ag freezing point. It can then be disseminated to clients via a
stable, variable temperature blackbody whose temperature has been
determined by the LP3.
NRC is also developing and measuring the thermodynamic
temperature of high-temperature fixed points in the range from 1324 °C
to over 2500 °C. In the future, these fixed points could be used to
realize an interpolated scale with a number of advantages. Firstly,
an interpolated scale with three or more fixed points (all with well-known temperatures) obviates the need to know the relative spectral
responsivity of the radiation thermometer. Secondly, with more than
one fixed point, the realization is more robust. Thirdly, because the
high temperature fixed-points will have their true, thermodynamic
temperature measured, a scale realized in this way would be closer to
the thermodynamic temperature than the ITS-90 is currently. Fourthly,
they will enable high accuracy comparisons of temperature scales in
this temperature range.
6. Acknowledgments

The authors thank Ken Hill for his helpful ideas and perspective.
7. References

[1] H. Preston-Thomas, "International Temperature Scale of 1990 (ITS-90)," Metrologia, vol. 27, no. 1, pp. 3-10, 1990.
[2] K. Hill and A. Steele, "The International Temperature Scale: Past, Present, and Future," NCSLI Measure J. Meas. Sci., vol. 9, no. 1, pp. 60-67, March 2014.
[3] http://www.bipm.org/en/publications/mep_kelvin/its-90_supplementary.html
[4] K. Hill and D. Woods, "Characterizing the NRC Blackbody Sources for Radiation Thermometry from 150 °C to 962 °C," Int. J. Thermophys., vol. 30, no. 1, pp. 105-123, 2009.
[5] http://www.bipm.org/cc/CCT/Allowed/22/CCT03-03.pdf
[6] G. Machin, P. Bloembergen, K. Anhalt, J. Hartmann, M. Sadli, P. Saunders, E. Woolliams, Y. Yamada, and H. Yoon, "Practical Implementation of the Mise en Pratique for the Definition of the Kelvin Above the Silver Point," Int. J. Thermophys., vol. 31, no. 8-9, pp. 1779-1788, 2010.
[7] Y. Yamada, H. Sakate, F. Sakuma, and A. Ono, "Radiometric observation of melting and freezing plateaus for a series of metal-carbon eutectic points in the range 1330 °C to 1950 °C," Metrologia, vol. 36, no. 3, pp. 207-209, 1999.
[8] E. Woolliams, G. Machin, D. Lowe, and R. Winkler, "Metal (carbide)-carbon eutectics for thermometry and radiometry: a review of the first seven years," Metrologia, vol. 43, no. 6, pp. R11-R25, 2006.
[9] G. Machin, K. Anhalt, P. Bloembergen, M. Sadli, Y. Yamada, and E. Woolliams, "Progress Report for the CCT-WG5 High Temperature Fixed Point Research Plan," in Temperature: Its Measurement and Control in Science and Industry, vol. 8, ed. C.W. Meyer, AIP Conference Proceedings 1552, pp. 317-322, 2013.
[10] G. Machin, J. Engert, R. Gavioso, M. Sadli, and E. Woolliams, "The Euramet Metrology Research Programme Project: Implementing the New Kelvin (InK)," Int. J. Thermophys., vol. 35, no. 3-4, pp. 405-416, 2014.
[11] A. Todd and D. Woods, "Comparison of three Co-C fixed points constructed using different crucible lining materials," in Temperature: Its Measurement and Control in Science and Industry, vol. 8, ed. C.W. Meyer, AIP Conference Proceedings 1552, pp. 369-373, 2013.
[12] A. Todd, D. Lowe, W. Dong, and D. Woods, "Comparison of realizations of Re-C fixed points filled and measured at NPL and NRC," in Temperature: Its Measurement and Control in Science and Industry, vol. 8, ed. C.W. Meyer, AIP Conference Proceedings 1552, pp. 797-801, 2013.
[13] A. Todd and D. Woods, "Thermodynamic temperature measurements of the melting temperatures of Co-C, Pt-C and Re-C fixed points at NRC," Metrologia, vol. 50, no. 1, pp. 20-26, 2013.


TECHNICAL PAPERS

Evaluation of Proficiency Testing Results with a Drifting Artifact

Chen-Yun Hung, Pin-Hao Wang, and Cheng-Yen Fang

Abstract: Proficiency testing (PT) is an evaluation of participants' performance against pre-established criteria by means of
interlaboratory comparisons. The normalized error, En, is the most widely used performance statistic for determining the measurement capability of a calibration laboratory. One of the variables in the En equation is Uref, which is the expanded uncertainty
of the reference laboratory's assigned value. To evaluate a participant's performance effectively, if any effects of the PT scheme
are significant, the additional uncertainties should be combined with the reference laboratory's reported expanded uncertainty
to estimate Uref. Among such uncertainties, the stability of artifacts is an important uncertainty component in the PT scheme,
especially for a calibration laboratory. Based on practical PT experience, most artifacts can be regarded as sufficiently stable if
the difference between three reference laboratory measurements is small. In such cases, the median of the three measurements
is usually chosen as the assigned value, and its reported expanded uncertainty is used as the Uref value. However, some artifacts,
such as standard resistors, drift over time. This leads to questions about how to accurately determine the assigned values and
expanded uncertainties of these artifacts. This paper presents a PT scheme for standard resistors that demonstrates the evaluation
of PT results with a drifting artifact.
1. Introduction

Proficiency testing (PT) is an evaluation of participants' performance against pre-established criteria by means of interlaboratory comparisons. It is an effective way to verify a laboratory's measurement capability. Proficiency testing helps participants understand the differences between their measurement capabilities and those of other laboratories, and it often leads to improvement in measurement competence and quality control. In Taiwan, the Taiwan Accreditation Foundation (TAF) is the only accreditation body that is a signatory to the mutual recognition arrangements (MRAs) of both the International Laboratory Accreditation Cooperation (ILAC) and the Asia Pacific Laboratory Accreditation Cooperation (APLAC).
To ensure quality, TAF only accepts PT
results provided by accredited PT providers that meet the requirements of ISO/IEC
17043:2010 [1] or a designated reputable
organization. The Center for Measurement
Standards/Industrial Technology Research
Institute (CMS/ITRI) is one of the designated organizations.
The PT performed by CMS/ITRI that is presented in this paper focuses on calibration laboratories. It involves the measurement of a drifting artifact, in this case a standard resistor. Figure 1 shows the flowchart
of the PT scheme. The paper also presents
the standard resistor measurement capability
of each participating calibration laboratory
in Taiwan.
2. Stability Testing

Under Section 4.4.3 of ISO/IEC 17043:2010, criteria for suitable stability shall be established and based on the effect that instability will have on the evaluation of the participants' performance. It must be
demonstrated that PT items are sufficiently
stable to ensure that they will not undergo any
significant changes during the PT process.
If this is not possible, the stability should
be quantified and regarded as an additional
component of the uncertainty associated
with the assigned value of the PT item, and
considered in the evaluation criteria. The
next section provides an example of stability
evaluation performed by CMS/ITRI.

2.1 Example of a Drifting Artifact: Standard Resistor

A PT of a standard resistor, numbered PT2012-KF01, was performed from October 2012 to February 2013. A total of 16 laboratories participated in this test. Two standard resistors with nominal values of 1 Ω and 10 kΩ were chosen as PT items (Table 1). Approximately two months was spent on stability testing before the PT scheme was initiated.

Authors
Chen-Yun Hung
hungcy@itri.org.tw

Pin-Hao Wang
pin-hao@itri.org.tw

Cheng-Yen Fang
fangcy@itri.org.tw
Center for Measurement Standards
Industrial Technology Research Institute
321 Kuang Fu Rd., Sec. 2
Hsinchu, Taiwan 30011, R.O.C.

NCSLI Measure J. Meas. Sci. www.ncsli.org

TECHNICAL PAPERS
Because standard resistors drift over time, the evaluation criterion should be selected carefully. If the standard uncertainty of stability testing is too large, some participants may receive action and warning signals because the assigned values are inaccurate. Therefore, based on the guidelines for limiting the uncertainty of the assigned value in ISO 13528 [2], the criterion for an acceptable stability uncertainty is

    u_sta ≤ 0.3 √(u²_lab,min + u²_ref) ,   (1)

where
u_sta is the standard uncertainty of stability testing;
u_lab,min is the minimum standard uncertainty among the participants' results; and
u_ref is the standard uncertainty of the reference laboratory's result.
When Eq. (1) is satisfied, the ratio √[(u²_lab,min + u²_ref) / (u²_lab,min + u²_ref + u²_sta)] will fall in the range

    0.96 ≤ √[(u²_lab,min + u²_ref) / (u²_lab,min + u²_ref + u²_sta)] ≤ 1.00 .   (2)

Figure 1. Flowchart of the PT scheme.

Thus, the stability uncertainty will not affect the evaluation of the participants' performance. However, if Eq. (1) is not satisfied, the PT items should be replaced, or more than one assigned value should be provided to limit the effect of the uncertainty on the performance evaluation.
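The criterion in Eq. (1) and the ratio in Eq. (2) can be checked numerically. The following Python sketch is illustrative only and is not part of the published scheme; the uncertainty values are those reported for the 1 Ω resistor in Table 2.

```python
import math

def stability_ok(u_sta, u_lab_min, u_ref):
    """Eq. (1): u_sta must not exceed 0.3 * sqrt(u_lab_min**2 + u_ref**2)."""
    return u_sta <= 0.3 * math.hypot(u_lab_min, u_ref)

def uncertainty_ratio(u_sta, u_lab_min, u_ref):
    """Eq. (2): combined uncertainty without vs. with the stability term.
    When Eq. (1) holds, this ratio lies between 0.96 and 1.00."""
    base = u_lab_min**2 + u_ref**2
    return math.sqrt(base / (base + u_sta**2))

# Values for the 1-ohm resistor from Table 2 (in ohms).
u_sta, u_lab_min, u_ref = 0.00000020, 0.000008, 0.00000015
print(stability_ok(u_sta, u_lab_min, u_ref))            # True
print(uncertainty_ratio(u_sta, u_lab_min, u_ref))       # very close to 1.0
```

Because u_sta here is roughly 40 times smaller than u_lab,min, the ratio is nearly unity and the stability term is negligible in the evaluation.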
Figures 2 and 3 show the stability testing results for the 1 Ω and 10 kΩ standard resistors in PT2012-KF01, respectively. The dotted lines represent the maximum and minimum measured values. The value of u_sta is determined by calculating the difference in the measurement results of stability testing with a triangular distribution [3]. The value u_lab,min is the minimum of the participants' reported standard uncertainties. The value u_ref is the reported standard uncertainty of the reference laboratory. Table 2 shows each of these three values for both the 1 Ω and 10 kΩ standard resistors. As the values satisfy the criterion in Eq. (1), the resistors can be regarded as sufficiently stable relative to the evaluation criterion in the PT scheme. During the PT process, the resistors were returned to the reference laboratory three times to ensure their stability.
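As a sketch of that calculation: half the spread between the extreme measured values is taken as the half-width of a triangular distribution, whose standard uncertainty is the half-width divided by √6. The max/min values below are read off Fig. 2 and the result is illustrative only.

```python
import math

def u_sta_triangular(v_max, v_min):
    """Standard uncertainty of stability: the half-range of the measured
    values treated as the half-width of a triangular distribution."""
    half_range = (v_max - v_min) / 2.0
    return half_range / math.sqrt(6.0)

# Extremes of the 1-ohm stability run (Fig. 2), in ohms.
print(u_sta_triangular(0.9991793, 0.9991784))  # about 1.8e-7 ohm
```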

Item | Nominal Value | Manufacturer | Model | Serial No.
Standard resistor | 1 Ω | iET | SRL-1 | B2-9425105
Standard resistor | 10 kΩ | iET | SRL-10k | B2-9425117

Table 1. Details of the PT items.

Nominal Value | u_sta | u_lab,min | u_ref
1 Ω | 0.00000020 Ω | 0.000008 Ω | 0.00000015 Ω
10 kΩ | 0.0000004 kΩ | 0.00005 kΩ | 0.0000015 kΩ

Table 2. Standard uncertainties of the 1 Ω and 10 kΩ standard resistors in Eq. (1).


3. Performance Evaluation

This section discusses the determination of assigned values, discusses performance statistics, and presents the results of a PT for standard resistors, which are drifting artifacts.
3.1 Assigned Value and Its Standard Uncertainty

The determination of assigned values should be done in a way that fairly evaluates all participants and encourages agreement among test or measurement methods.

Various procedures can be used to determine the assigned values. The measured values of the National Measurement Laboratory (NML) of the Republic of China are used as the assigned values in most PTs performed by CMS/ITRI. To ensure the stability of the PT items during a round of testing, they are calibrated by NML in the opening, middle, and closing stages of the transfer schedule. The transfer model, called the butterfly model, is shown in Fig. 4. If the differences between the three measured values are within

the expanded uncertainty, the median of the three measurements is used as the assigned value, and its reported expanded uncertainty is the value of U_ref. Otherwise, the average of the measured values is deemed to be the assigned value, and the reported expanded uncertainty is combined with the stability uncertainty to estimate the value of U_ref.

Figure 2. Stability testing results of the 1 Ω standard resistor, measured from 2012/8/1 to 2012/9/30 (max = 0.9991793 Ω, min = 0.9991784 Ω).

Figure 3. Stability testing results of the 10 kΩ standard resistor, measured from 2012/8/1 to 2012/9/30 (max = 10.000753 kΩ, min = 10.000751 kΩ).

Figure 4. The butterfly transfer model.

Figure 5. Calibration results of the 1 Ω standard resistor (reports E120519A on 2012/10/01, E120641A on 2012/12/07, and E130072A on 2013/03/08).

Figure 6. Calibration results of the 10 kΩ standard resistor (reports E120520A on 2012/10/01, E120642A on 2012/12/07, and E130073A on 2013/03/08).

3.2 Example of a Drifting Artifact: Standard Resistor


The example described in Section 2.1 is used to demonstrate the performance evaluation of a PT scheme with a drifting artifact. The measured values of NML were used as the assigned values in this PT scheme. Figures 5 and 6 show the calibration results of the PT items, which were calibrated three times by NML during a round of testing. In Fig. 5, the differences between the three measured values were not within the expanded uncertainty. Although the PT item can be regarded as sufficiently stable relative to the evaluation criterion in Eq. (1), the average was conservatively chosen as the assigned value of the 1 Ω standard resistor. To more accurately estimate the difference between a participant's result and the assigned value, we divided the participants into two groups and provided different assigned values: one was the average of the first two measured values, and the other was the average of the last two measured values. In this case, U_ref should include the reported uncertainty of the reference laboratory and the stability uncertainty of the PT item. The standard uncertainty associated with stability was half of the difference between the two measured values divided by the square root of six, due to the triangular distribution. In Fig. 6, the differences between the three measured values were all


within the expanded uncertainty. Therefore, the median was chosen as the assigned value of the 10 kΩ standard resistor, and its reported expanded uncertainty was used as the value of U_ref. Table 3 shows the assigned values for the 1 Ω and 10 kΩ standard resistors in PT2012-KF01.

Nominal Value | Group I Assigned Value | Group I Uref | Group II Assigned Value | Group II Uref
1 Ω | 0.9991795 Ω | 0.0000004 Ω | 0.9991801 Ω | 0.0000005 Ω
10 kΩ | 10.000752 kΩ | 0.000003 kΩ | 10.000752 kΩ | 0.000003 kΩ

Table 3. Assigned values of the 1 Ω and 10 kΩ standard resistors.

3.3 Performance Statistic

To facilitate interpretation and allow comparison with defined objectives, the PT results should be transformed into a performance statistic. The purpose is to measure the deviation from the assigned value in a manner that allows comparison with the performance criteria. Performance statistics for quantitative results are detailed in ISO/IEC 17043:2010, Section B.3.1.3. The PT provider should choose the performance statistic that is appropriate for the type of PT scheme. As mentioned earlier, in the calibration field, the En number is the most widely used performance statistic for determining a calibration laboratory's measurement capability. The value of En is calculated as

    En = (x − X) / √(U²_lab + U²_ref) ,   (3)

where
x is the participant's result;
X is the assigned value;
U_lab is the expanded uncertainty of the participant's result; and
U_ref is the expanded uncertainty of the reference laboratory's assigned value.

The criteria for performance evaluation are as follows:
|En| ≤ 1.0 indicates satisfactory performance and generates no signal;
|En| > 1.0 indicates unsatisfactory performance and generates an action signal.

Table 4 shows the PT results of the participants in PT2012-KF01. If |En| > 1, the participant's result is deemed unsatisfactory and is labeled with the symbol #. The scatter diagrams of all the participants' results are shown in Figs. 7, 8, and 9. The dotted lines represent the assigned values, and the error bars are set according to each participant's expanded uncertainty. However, CMS/ITRI only provided the PT results and suggestions to the participants, and did not judge whether the participants were qualified laboratories.

Lab Code | 1 Ω | 10 kΩ
A | 0.00 | 0.09
B | 0.07 | 0.02
C | 0.01 | 0.14
D | 0.07 | 0.04
E | 0.02 | 0.01
F | 0.00 | 0.22
G | 3.85# | 3.36#
H | 0.08 | 0.01
I | 0.67 | 0.07
J | NA | 0.07
K | 0.18 | 0.01
L | 0.03 | 0.00
M | 0.00 | 0.02
N | 0.27 | 0.26
O | 0.19 | 0.30
P | 0.17 | 0.14

Note: NA means the value is not available, and # designates an unsatisfactory result.

Table 4. The PT results of the participants (En number).

4. Confidentiality

According to Section 4.10.1 of ISO/IEC 17043:2010, the identity of participants in a PT scheme shall be confidential and known only to the persons involved in the operation of the PT scheme. To ensure the confidentiality of a PT performed by CMS/ITRI, each laboratory is given a code that is mailed to the laboratory in a sealed envelope. Thus, each participant knows only its own laboratory code.

The confidentiality of a participant's information should especially be considered when the assigned value of a drifting artifact is determined. If the artifact drifts over time, linear regression can be used to predict the best estimate of the assigned value on a given day. However, in this case, it would be possible to identify the participants by comparing a specific day on a regression line with the transfer schedule. Therefore, linear regression should only be used in a PT scheme with a drifting artifact if the transfer schedule is not publicly available, and when the artifact is transferred to the participants by the PT provider.
5. Conclusions

CMS/ITRI has performed proficiency tests in Taiwan for more than ten years, with a focus on calibration laboratories. To ensure the quality of PTs, ensuring the stability of artifacts has always been given a high priority. The criteria used to determine the stability of artifacts are based on expert judgment, instrument specifications, and statistical methods. This paper has presented an example of a PT scheme for a standard resistor, PT2012-KF01, with a quantified criterion for stability testing using statistical methods, to share the experi-


ence of CMS/ITRI. We have demonstrated that when an artifact drifts over time, more than one value can be assigned in order to increase the accuracy of the difference between a participant's result and the assigned value. If necessary, the stability uncertainty should be combined with the reference laboratory's reported expanded uncertainty to estimate U_ref. However, to ensure the confidentiality of the participants' information, the number of assigned values should be controlled.

The results of PT2012-KF01 have also been presented in this paper. An analysis of these results indicates that most of the standard resistor calibration laboratories in Taiwan have good measurement competence. We have suggested to participants whose performance was not satisfactory that they first investigate the root cause of the problem, and then actively implement corrective actions.

Figure 7. Measurement results of the 1 Ω standard resistor in Group I (assigned value = 0.9991795 Ω).

Figure 8. Measurement results of the 1 Ω standard resistor in Group II (assigned value = 0.9991801 Ω).

Figure 9. Measurement results of the 10 kΩ standard resistor (assigned value = 10.000752 kΩ).

6. References

[1] ISO/IEC, "Conformity assessment - General requirements for proficiency testing," ISO/IEC 17043, 2010.
[2] ISO, "Statistical methods for use in proficiency testing by interlaboratory comparisons," ISO 13528, 2005.
[3] J. Gust, "A Discussion of Stability and Homogeneity Issues in Proficiency Testing for Calibration Laboratories," 4th ILAC Proficiency Testing Working Group Meeting, 13 p., Vienna, Austria, 2007.
[4] ISO/IEC, "Uncertainty of measurement - Part 3: Guide to the expression of uncertainty in measurement (GUM:1995)," ISO/IEC Guide 98-3, 2008.

Register Today!

2014 Quality Summit
Quality for Emerging Technologies
OEMs and suppliers agree that technology in vehicles is advancing at an
increasing rate. The new technologies pose new challenges in relation to
testing, measurement, and quality assurance.
This conference, Quality for Emerging Technologies will showcase how
companies are assuring quality, reliability and customer satisfaction in
their latest electronics and software products, including both adaptations
of existing tools and development of new quality methods and tools.
Don't miss this exciting opportunity to hear from our Keynote Speaker, Mr. Dino Triantafyllos, Vice President, Quality Division, Toyota Motor Engineering & Manufacturing North America, Inc.
The 2014 Quality Summit will be of interest to both suppliers and
customers in the automotive industry. Register early and save!

WHEN:
September 24 - 25, 2014
Registration opens: 7:30 AM
WHERE:
Suburban Collection Showplace
46100 Grand River Avenue
Novi, MI 48374
REGISTRATION:
Early Registration Deadline:
July 31, 2014
Regular Registration
Deadline: September 10, 2014
Type | Early | Regular | Late
Member | $575 | $700 | $775
Non-Member | $775 | $900 | $975

TO REGISTER:
Visit www.aiag.org and click
the Events tab or call
248-358-3003.

Sponsors:

2014 AIAG | 26200 Lahser Road, Suite 200 | Southfield, MI 48033 | Tel: 248.358.3003 | Fax: 248.799.7995 | www.aiag.org



A 40 GHz Air-Dielectric Cavity Oscillator with Low Phase Modulation Noise

Archita Hati, Craig W. Nelson, Bill Riddle, and David A. Howe

Abstract: We describe a 40 GHz cavity stabilized oscillator (CSO) that uses an air-dielectric cavity resonator as a frequency
discriminator to reduce the phase modulation (PM) noise of a commercial 10 GHz dielectric resonator oscillator (DRO) frequency
multiplied by four. Low PM noise and small size were the main design goals. Single sideband (SSB) PM noise equal to
-128 dBc/Hz at a 10 kHz offset from the carrier frequency is achieved for the CSO. In addition, we report on the PM noise of
several Ka-band components.
1. Introduction

Phase noise metrology is essential for the development and characterization of low noise, spectrally pure oscillators. This paper focuses on the need for reference oscillators and measurements in the Ka-band (26.5 to 40 GHz), a large portion of which is authorized for U.S. military use. This requirement extends to millimeter wave data-communications and multistatic radar systems that place very stringent phase-stability requirements on system oscillators and components. Designing oscillators that operate at these high frequencies is challenging due to the frequency limitations of active devices.
One approach for generating a millimeter-wave reference signal is to simply multiply the frequency of a high quality factor (Q) quartz oscillator that is designed to operate at a lower sub-multiple frequency. However, multiplication techniques have their drawbacks. For example, the best low frequency oscillators are often bulky, costly, and vibration sensitive. Also, frequency multipliers can increase PM noise at offset frequencies far from the carrier. When the signal from a low-noise oscillator is multiplied, its noise increases by 20 log10(N) dB, where N is the multiplication factor. The noise of the frequency-multiplied signal is usually higher than the multiplier noise at offset frequencies close to the carrier, but lower at offset frequencies far from the carrier. Therefore, due to the inherent noise of the multiplier, the low PM noise of an oscillator cannot be retained at higher frequencies by upconverting through frequency multiplication.
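The 20 log10(N) multiplication penalty is easy to tabulate; the short sketch below (illustrative noise levels only) shows the ideal, noiseless-multiplier case for the 10 GHz to 40 GHz step used in this paper.

```python
import math

def multiplied_noise_db(l_dbc_hz, n):
    """Ideal (noiseless-multiplier) PM noise after frequency
    multiplication by n: the noise rises by 20*log10(n) dB."""
    return l_dbc_hz + 20.0 * math.log10(n)

# 10 GHz -> 40 GHz is n = 4, i.e. about a 12 dB penalty before the
# multiplier's own residual noise is even added.
print(round(20.0 * math.log10(4), 2))   # 12.04
print(multiplied_noise_db(-140.0, 4))   # a hypothetical -140 dBc/Hz source
```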
Some low noise microwave oscillators
employ frequency locking to a high-Q
resonance cavity to improve the broadband
PM noise [1-5]. In these oscillators, the
cavity resonator is used primarily as a
frequency discriminator to improve the
PM noise of the oscillator with a feedback
control system. Any improvement of the
discriminator phase-shift sensitivity directly
translates to lowering the oscillator PM noise.
There are several key aspects of controlling
the cavity discriminator sensitivity, and the
most important of these involves increasing
the cavity Q [2, 6]. An effective method
of increasing discriminator sensitivity is to
suppress both the carrier signal reflected from
the cavity, and the amplification of the residual
noise [2, 3]. The suppression reduces the
effective noise temperature of the nonlinear
mixer, which acts as a phase detector with
enhanced sensitivity. The amount of carrier
suppression can be increased by making
the effective coupling coefficient into the
cavity approach its critical value of unity
[2], and also by use of interferometric
signal processing [4, 5]. The discriminator
sensitivity is proportional to the power of the
oscillator signal incident into the cavity [7].
Thus, by increasing the power of the carrier
signal, the discriminator sensitivity can be
improved as long as the resonator remains

linear, meaning that the resonant frequency does not shift with the incident power level. The purpose of this paper is to study the performance of an oscillator based on an air-dielectric cavity resonator that can be used as a measurement reference. These design considerations not only work in the development of a cavity-stabilized oscillator (CSO) of high spectral purity at 40 GHz, but have notable advantages when compared to the usual 10 GHz case.
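The dependence of discriminator sensitivity on cavity Q can be illustrated with the standard resonator relation (not a formula from this paper): near resonance, the phase slope of a resonator is approximately 2Q_L/f0 rad/Hz, so a higher loaded Q gives a steeper, more sensitive discriminator. The numbers below are illustrative only.

```python
def discriminator_slope_rad_per_hz(q_loaded, f0_hz):
    """Approximate phase slope of a resonator near resonance,
    d(phi)/df ~ 2*Q_L/f0 (rad/Hz); a higher loaded Q gives a steeper
    slope and hence a more sensitive frequency discriminator."""
    return 2.0 * q_loaded / f0_hz

# Illustrative numbers only: an assumed loaded Q of 10 000 at 40 GHz.
print(discriminator_slope_rad_per_hz(10_000, 40e9))  # 5e-07 rad/Hz
```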
Support for this work was provided by the Army
Research Laboratory (ARL). For the purpose of
technical description, commercial products are
mentioned in this paper, but no endorsement by
NIST is intended or implied.

Authors
Archita Hati

archita.hati@nist.gov

Craig W. Nelson

craig.nelson@nist.gov

Bill Riddle

bill.riddle@nist.gov

David A. Howe

david.howe@nist.gov
Time and Frequency Division (Hati, Nelson, Howe)
Electromagnetics Division (Riddle)
National Institute of Standards
and Technology
325 Broadway
Boulder, Colorado, 80305

In Section 2 of this paper, we first provide
the phase-noise performance of an assortment
of active components at 40 GHz, including data that most manufacturers omit from
their specifications. In Section 3, we discuss
the design of our compact dielectric-cavity
resonator. The CSO and its PM noise performance are described in Sections 4 and 5,
respectively. Section 6 presents a summary.
2. PM Noise of Active Components at 40 GHz

The PM noise of individual components must be understood before they are included in a larger system, because the system performance is affected if a noisy component is selected.

This section provides PM noise results for a few selective commercial components at Ka-band, since little or no information is available from the manufacturers. We measured the noise of several amplifiers, dividers, and multipliers prior to the design of our 40 GHz oscillator, which is discussed in Section 3. Figure 1 provides images of the components: three 40 GHz amplifiers (Amp1, Amp2, and Amp3, including the A-1844 and JS4-18004000-40-5A models), Divider-1, Multiplier-1, and Multiplier-2.

Figure 1. Images of commercial products used for PM noise measurement at 40 GHz.

A single channel PM noise measurement system (Fig. 2) is used to measure the noise of a pair of devices under test (DUTs) [8]. It uses a reference oscillator, a double balanced mixer (DBM), and a phase shifter. The oscillator signal is split into two to drive the input of each DUT. The outputs of the DUTs are connected to the LO and RF ports of a DBM that acts as a phase detector (PD). A phase shifter is used to establish the phase quadrature (90°) between the two signals at the PD inputs that is required for phase noise measurement. The PD output produces voltage fluctuations at baseband that are proportional to the difference between the phase fluctuations of DUT-1 and DUT-2. If the path lengths from the reference oscillator to the LO and RF ports are matched, the oscillator noise that is common mode cancels, to a large degree. The output of the PD after amplification is analyzed with a fast-Fourier-transform (FFT) spectrum analyzer.

Figure 2. Block diagram of a single-channel system for measuring the PM noise of a pair of devices under test (DUTs).

The voltage output of the IF amplifier is given by

    v_n(t) = k_d G φ(t) ,   (1)

where k_d is the PD's phase-to-voltage conversion factor, G is the gain of the IF amplifier, and φ(t) is the difference between the phase fluctuations of DUT-1 and DUT-2. Taking the Fourier transform of Eq. (1) provides

    S_φ(f) = PSD[v_n(t)] / (k_d G)² = S_φ,DUT(f) + S_φ,NF(f) + α² S_a(f) ,   (2)

where S_φ(f) is the measured PM noise, S_φ,DUT(f) is the actual PM noise of a pair of DUTs, S_φ,NF(f) is the measurement system noise floor comprised of PD and IF amplifier noise, S_a(f) is the measured AM noise, α is the AM-to-PM conversion ratio, and PSD denotes the power spectral density.

Figure 3. PM noise for a sample of commercial amplifiers (carrier frequency = 40 GHz).
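The scaling in Eq. (2) from a measured voltage PSD to a phase PSD can be sketched as follows. This is an illustrative calculation only; the voltage PSD, k_d, and G values are hypothetical, and the noise-floor and AM-to-PM terms of Eq. (2) are neglected.

```python
import math

def s_phi_from_voltage_psd(s_v, k_d, g):
    """Scale a voltage PSD measured at the IF amplifier output to a
    phase PSD, S_phi(f) = PSD[v_n]/(k_d*G)**2, in rad^2/Hz (Eq. (2)).
    The noise-floor and AM-to-PM terms are neglected in this sketch."""
    return s_v / (k_d * g) ** 2

def ssb_dbc_hz(s_phi):
    """Single-sideband PM noise, L(f) = S_phi(f)/2, expressed in dBc/Hz."""
    return 10.0 * math.log10(s_phi / 2.0)

# Hypothetical values: 1e-12 V^2/Hz at the FFT analyzer,
# k_d = 0.5 V/rad, IF gain G = 100 (40 dB).
s_phi = s_phi_from_voltage_psd(1e-12, 0.5, 100.0)
print(round(ssb_dbc_hz(s_phi), 1))
```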

The calibration factor is obtained by measuring the voltage change (Δv) for a known static phase shift (ΔΦ). The fixed phase shift is first measured with a calibrated vector network analyzer (VNA). Thus, k_d can be obtained from the following equation,

    k_d = Δv / ΔΦ   (volts/radian) .   (3)

In the case of amplifiers, only one DUT can be used if an equal amount of delay is introduced to substitute for DUT-2. This prevents incomplete cancellation of the reference oscillator noise at higher offset frequencies.

Figure 3 shows the single-sideband PM noise, L(f), of three commercially available amplifiers. Note that Amp1 and Amp3 have approximately the same gain and 1 dB compression point, but that there is a variation of almost 20 dB in the PM noise performance. A similar variation in noise occurs for the two dividers shown in Fig. 4. We compared the noise of two dividers: one

Figure 4. Input-referenced (i.e., 40 GHz) PM noise for a pair of dividers. Divider-1 is a commercial divider shown in Fig. 1. The regenerative divider is custom-built with two divide-by-2 stages in series. The input frequency is 40 GHz and the output frequency is 10 GHz.

Figure 5. Output-referenced PM noise for a pair of multipliers. Multiplier-1 and Multiplier-2 are shown in Fig. 1. The input frequency is 10 GHz and the output frequency is 40 GHz.

Figure 6. (a) Air-dielectric cylindrical aluminum cavity resonator at 40 GHz. The diameter (2a) and length (d) of the cavity are approximately 2 cm. The SMA connectors on the circuit board can be used as a size reference. (b) Measured |S11| data of a 40 GHz cavity. (c) Measured |S21| data of a 40 GHz cavity. (d) Measured phase of S11 for a 40 GHz cavity.
ports
arePD
matched,
the oscillator
noise
that
is
common
mode and
cancels,
to
t )a(FFT)
(
ge
conversion
factor,
G
is
the
gain
of
the
IF
amplifier
is
the
commercial
(pictured
in Fig. 1) and the other
voltage
output
of
the IF amplifier
is given
by
vn =
(FFT)
DUT-1
difference
the
phase
fluctuations
DUT-2. Taking the Fourier transform of Eq.
( t ) kof
( t ) , andratio,
(0)
d Gthe
(0)
he PD after
amplification
isbetween
analyzed
with
a fast-Fourier-transform
conversion
and PSD is the power spectral a custom regenerative divide-by-four (two
ations
of
the
DUT-1
and
DUT-2.
Taking
the
Fourier
transform
of
Eq.
(1)
provides
PSD byv n ( t )
ge output of the IF amplifier is given
2 measurement method is accurate
This
( t )gain
SIF( famplifier
S ( f ) , factor, G is the(0)
series)
)=
) + S(0), NFdensity.
( f ) + conversion
( t ) is the[9, 10]. In addi,
PDs( fphase-to-voltage
gain of the IFdivide-by-two
amplifier and in
where
k = isSthe, DUT
GGisthe
of the
and
2 (dt ) isthe
k
G
(
)
as
long
as
theofPM
noise
of the
DUT
pairsTaking
is at the
nd( t )DUT-2.
dtransform
,
(0)
difference
between
the
phase
fluctuations
the
DUT-1
and
DUT-2.
Fourier
transform
Eq. of two frequention,
we
compared
theofnoise
PSD
v
t

Taking
the
Fourier
of
Eq.
) and ( t ) is the
se-to-voltage conversion factor, G is the gain of the
n (amplifier
IF
2

S ( f(1)
f ) +10
SdB
f ) +than
Sthe
f ) ,noise floor of the(0) cy multipliers (Fig. 5). The large differences
= provides
= S , DUT (least
hase
higher
PM
)
(
(
NF

2
the
DUT-1
andgain
DUT-2.
Taking
transform
2 Fourier
the
S ( f of
Sthe
( t ) isactual
-voltage
conversion
G is the
of the
IF amplifier
and
theof Eq.
the
measured
PM
noise,
PM noise of a pair of DUTs, S NF ( f ) is
) isfactor,
( f ) is(2)
Sfluctuations
= where,
, DUT( f ) + S , NF ( f ) + S ( f()kd, G ) DUT
(0) measurement system and leakage AM
noise.
fluctuations
of the DUT-1
and DUT-2.
Takingcomprised
the Fourier of
transform
of IF
Eq.amplifier noise, S ( f ) is the measured AM in noise performance between the various
the measurement
system
noise floor
PD and

The
noise
floor
of
the
measurement
system
is devices indicate the importance of selecting
PSD v t actual PM noise of a pair of 2DUTs,
measured
PM
noise,
S ( f ) isisthethe
S DUT ( fn)(is)the
where,
measured
PM
noise,
2 iswhere

S
f
S
f
S
f
S
f )S, NF ( f ) is
=
=
+
+
noise,

the
AM-to-PM
conversion
ratio,
and
PSD
is
the
power
spectral
density.
This
measurement
(
)
(
)
(
)
(
PSD
v
t

f
S
f
+
(
)
(
)
(
)
DUT
NF

,
,
n
(0)
obtained
simply
by
replacing
the
DUTs
with

NF
2 of a pair of DUTs, S NF2( f ) is
oise, S DUT= S( f ) is the
PM
noise
,( f )actual
the correct
PM
f )the
Snoise
f ) floor
+isSlong
+(0)
the
noise
of
acomprised
pairDUT
of((0)
(
(
the
measurement
system
of
PD
and
IF
amplifier
noise,
is
the
measured
AM components when designing a

, DUT
, NFactual
k
G
S
f
(
)
)
,
method
is
accurate
as
as
PM
noise
of
the
pairs
is
at
least
10
dB
higher
than
the
PM
noise
2

d
v n( k( t )G )
coaxial
cables
or
waveguide.
However,
it
is
r comprised
of
PD
IF (amplifier
the measured AM
S ( f ) issystem
low-noise system for this frequency band.
d = S
S noise,
f ) conversion
DUTs,
( f ) +and
) +is 2the
(measurement
, DUT
,
(0) and PSD is the power spectral density. This measurement
noise,
Sis, NF
the fAM-to-PM
ratio,
2
important
toSkeep
the
power
level
at the
LO
andof a pair
G
S
f
f
where,
is
the
measured
PM
noise,
is
the
actual
PM
noise
of DUTs, S
)
(
)
(
)
( f ) is
3
fractional
S
f
actual
PM
noise
of
a
pair
of
DUTs,
is

DUT

(
)
on
ratio,
and
PSD
is
the
power
spectral
density.
This
measurement
floor
comprised
of
PD
amplifier
S DUT
f ) is
noise
NF
sured PM noise,noise
the actualasPM
ofthe
aIF
pair
ofnoise
DUTs,ofSthe
( is
( f ) is pairs is at least 10 dB higher than the PMThe
method
accurate
long
asand
PM
noise 2-sigma NF combined
NF DUT
RF
ports
at the same
level
forIFboth
the noise
M
noise
of
the
DUT
pairs
is
at
least
10
dB
higher
than
the
PM
noise
uncertainties
of
the
PM
noise
measurement
the
measurement
system
noise
floor
comprised
of
PD
and
amplifier
noise,
is
the
measured
AM
S
f
(
)
noise,
measured
noise
floorScomprised
amplifier
IF amplifier
noise,
thethe
measured
AM
Sof
f ) is the
S measured
f and
PM
noise,
PM
noise
ofnoise,
a pair

(DUTs,
( f ) isofSthe
( actual
) isisIF
amplitude
DUT
NF ( f ) is AM
PD
floor3and the DUT measurements.
system discussed
above is approximately
M
conversion
ratio,
PSDIF
isamplifier
the
power
spectral
This measurement
modulation
(AM)
isSthe
is the measured
AM
e floor
comprised
of and
PD
and
noise,
noise,

is
conversion
ratio, and PSD is the power spectral density.
This measurement
( fdensity.
) AM-to-PM
the
power
spectral
density.
This
measurement
3of the DUT pairs is at least 10 dB higher than the PM noise
ng
as is
theatratio,
PM
noise
method
isdensity.
accurate
as measurement
long as the PM noise of the DUT pairs is at least 10 dB higher than the PM noise
nversion
and
the power
spectral
pairs
least
10PSD
dB ishigher
than
the PM
noise This
the PM noise 50
of the |DUTNCSLI
pairs is Measure
at least 10 dB
the PM noise
J. higher
Meas.than
Sci. www.ncsli.org
3
3

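Equations (1) and (2) are simple scalings of the detector voltage, so the calibration arithmetic can be sketched directly. The values of k_d and G below are hypothetical, chosen only for illustration:

```python
# Hypothetical detector constants (illustrative only, not from the paper):
k_d = 0.5    # PD phase-to-voltage conversion factor, V/rad
G = 100.0    # IF amplifier voltage gain

def phase_from_voltage(v_n):
    """Invert Eq. (1): delta_phi(t) = v_n(t) / (k_d * G)."""
    return v_n / (k_d * G)

def s_phi_from_psd(psd_v):
    """Eq. (2): scale the voltage PSD (V^2/Hz) by (k_d * G)^2
    to obtain the phase PSD S_phi(f) in rad^2/Hz."""
    return psd_v / (k_d * G) ** 2

print(phase_from_voltage(0.05))   # 1e-3 rad
print(s_phi_from_psd(2.5e-9))     # 1e-12 rad^2/Hz
```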
TECHNICAL PAPERS

Figure 7. Block diagram of the cavity stabilized oscillator (CSO) at 40 GHz.

Figure 8. Typical error signal from the double-balanced mixer (DBM) in the cavity discriminator of Fig. 7. The inset corresponds to a slope of 100 mV/kHz, approximately, for the resonator discriminator curve.

1.4 dB, which is obtained by combining the individual uncertainties, evaluated with both Type A and Type B methods [11, 12]. The contribution to uncertainty from a Type A evaluation is 1.2 dB, due to the number of FFT averages used for PM and AM noise measurements and measurement repeatability. The contributors to uncertainty evaluated with the Type B method include the calibration of kd, the measurement of IF amplifier gain and its frequency response, AM-to-PM conversion at the PD, and error in the estimated measurement bandwidth of the PSD function. The contribution to uncertainty from components evaluated with the Type B method is conservatively estimated as approximately 0.7 dB.
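The quoted 1.4 dB combined uncertainty is consistent with a root-sum-of-squares combination of the 1.2 dB Type A and 0.7 dB Type B contributions; a minimal check (treating the dB-valued contributions as directly combinable, as the text does):

```python
import math

u_type_a = 1.2   # dB, from FFT averaging and measurement repeatability
u_type_b = 0.7   # dB, conservative Type B estimate

# Root-sum-of-squares combination, as in the GUM [11]:
u_combined = math.sqrt(u_type_a ** 2 + u_type_b ** 2)
print(round(u_combined, 1))  # 1.4
```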
3. Compact Air-Dielectric Cavity Resonator

An important goal of microwave oscillator design is to achieve
significant reductions in size, weight, and power (so-called SWaP)
without a noise penalty. This paper investigates one strategy of
reducing SWaP while maintaining state-of-the-art spectral purity. The
basic approach used in the past at the National Institute of Standards
and Technology (NIST) consisted of improving the PM noise of a
Vol. 9 No. 3 September 2014

Figure 9. PM noise of a cavity stabilized oscillator (CSO); the results are normalized to 40 GHz. Notice the significant noise reduction of the free-running DRO noise out to beyond the 100 kHz offset frequency.

Figure 10. Experimental set-up of CSO PM noise measurement (SLCO = sapphire loaded cavity oscillator, DPNMS = digital phase noise measurement system).

10 GHz voltage-controlled oscillator (DRO, yttrium iron garnet (YIG) oscillator, etc.) by use of a high-Q, highly linear air-dielectric 10 GHz
cavity as a discriminator [7]. The unloaded Qs (Qu) of 50,000 to
70,000 are attained for TE023 or TE025 modes, but achieving this
moderately high Qu resulted in a fairly large cavity diameter and
a height of approximately 8 cm. With only a minimal increase in
noise, we can substitute a significantly smaller 40 GHz highly-linear
air-dielectric cavity as a discriminator. The air-dielectric cylindrical
cavity used in the CSO design is operating at TE015 mode, and its
inside dimensions are approximately 2 cm x 2 cm (about one-fourth
the size of NIST's 10 GHz cavity resonator). The resonant frequency
of this cylindrical cavity is given by [13]
\[ f_{\mathrm{res}} = \frac{c}{2\pi} \sqrt{ \left( \frac{3.832}{a} \right)^{2} + \left( \frac{5\pi}{d} \right)^{2} } , \tag{4} \]

where a and d are the radius and length of the cylindrical cavity, respectively. For 2a ≈ d ≈ 2 cm, the resonant frequency is approximately
40.08 GHz.
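Equation (4) can be checked numerically. With the rounded nominal dimensions a = 1.0 cm and d = 2.0 cm the formula gives roughly 41.7 GHz; the quoted 40.08 GHz corresponds to the cavity's exact dimensions, which the text gives only approximately. A sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def f_res_te015(a, d):
    """Eq. (4) for the TE015 mode: Bessel root chi'_01 = 3.832 and five
    half-wavelengths along the length d; a is the radius, both in meters."""
    return (C / (2 * math.pi)) * math.sqrt((3.832 / a) ** 2 + (5 * math.pi / d) ** 2)

print(f_res_te015(0.01, 0.02) / 1e9)  # about 41.7 (GHz) for these nominal dims
```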
The cavity is made of aluminum (Al), and the inside surface of
the cylinder and end caps are electro-silver plated and polished to

Offset frequency (Hz) | CSO | Micro-optoelectronic oscillator [22] | Phase locked dielectric resonator oscillator [23] | Voltage controlled oscillator [24] | Commercial frequency synthesizer [25]
100 | -78 | -54 | -61 | +8 | -83
1000 | -105 | -84 | -96 | -28 | -105
10000 | -128 | -107 | -103 | -71 | -117
100000 | -135 | -119 | -106 | -96 | -115

Table 1. PM noise comparison of different Ka-band oscillators at 40 GHz.

an industry standard of less than 0.2 µm root mean squared, considered to be a mirror finish, with a plate thickness of 50 µm.
The cavity used for the CSO design is
shown in Fig. 6(a). The signal is coupled to the
magnetic field in and out of the cavity by use
of coupling probes (loops) on the end plates
with their planes aligned with the radial plane
of the cylindrical cavity. A VNA is used to
measure both Qu and the loaded quality factor,
QL, and also to characterize the S-parameters
of the resonator. For the measurement of Qu
both the input and output coupling probes are
loosely coupled, whereas for QL the coupling
probes are adjusted to obtain nearly critical
input and loose output couplings. The Qu and
QL are calculated from the ratio of the resonant
frequency to the 3 dB bandwidth. The measured
values of these quantities for the 40 GHz
resonator are approximately 30,000 and 17,000.
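The quality factors follow from the ratio of the resonant frequency to the 3 dB bandwidth, as described above. For example, a loaded Q of about 17,000 corresponds to a 3 dB bandwidth of roughly 2.4 MHz at 40.08 GHz:

```python
def q_from_bandwidth(f_res, bw_3db):
    """Quality factor as the ratio of resonant frequency to 3 dB bandwidth."""
    return f_res / bw_3db

print(40.08e9 / 17_000 / 1e6)              # about 2.36 (MHz): implied 3 dB bandwidth
print(q_from_bandwidth(40.08e9, 2.36e6))   # about 17,000
```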
The typical formulas for the reflection (S11) and transmission (S21) coefficients in terms of input (β1) and output (β2) coupling coefficients at the cavity resonance frequency are [7, 14, 15],

\[ S_{11} = \frac{\beta_1 - 1 - \beta_2}{\beta_1 + 1 + \beta_2} , \qquad S_{21} = \frac{2\sqrt{\beta_1 \beta_2}}{\beta_1 + 1 + \beta_2} . \tag{5} \]

For β1 and β2 equal to 0.94 and 0.01, the reflected and the transmitted signals out of
the cavity are suppressed by approximately
32 dB and 22 dB, respectively, as shown in
Fig. 6. The nearly critical coupling coefficient
(0.94) is chosen because it provides a steeper
discriminator curve (higher discriminator
sensitivity).
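With the textbook resonance expressions of Eq. (5), the quoted couplings give suppressions in the neighborhood of the measured values (the text quotes approximately 32 dB and 22 dB from measurement; this idealized calculation differs by a few dB):

```python
import math

def s_params_at_resonance(b1, b2):
    """Two-port cavity S-parameters at resonance from the input (b1)
    and output (b2) coupling coefficients, per Eq. (5); returns dB."""
    s11 = (b1 - 1 - b2) / (b1 + 1 + b2)
    s21 = 2 * math.sqrt(b1 * b2) / (b1 + 1 + b2)
    return 20 * math.log10(abs(s11)), 20 * math.log10(s21)

print(s_params_at_resonance(0.94, 0.01))  # roughly (-29, -20) dB
```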
Any improvement to the
discriminator phase-shift sensitivity directly
translates to a reduction of oscillator PM
noise. An effective method of increasing
discriminator sensitivity is to suppress the
carrier signal reflected from the cavity, which
can be achieved by making the effective
coupling coefficient into the cavity approach

its critical value of unity. An output coupling
of 0.01 was chosen because with high incident
power of almost 1 W to the cavity, it is possible
to obtain reasonable output power (nearly 10
dBm) for the final stabilized oscillator directly
from the transmission port of the cavity
without further degrading the cavity Q.
4. Description of the CSO

Figure 7 is a block diagram of the CSO at
40 GHz. It consists of a DRO (dielectric resonator oscillator) at 10 GHz whose
free-running PM noise is approximately
-112 dBc/Hz at a 10 kHz offset frequency. The
output of the DRO is first multiplied by four
and then amplified to 1 W, by use of a power
amplifier. The amplified signal is then applied
to the input coupling port of the discriminator
cavity through a circulator. The reflected signal out of the cavity exits port c of the circulator and is already highly suppressed because
the cavity coupling is nearly critical. A portion
of the input signal is added out of phase with
the reflected signal to further suppress the carrier (to about -50 dBm). This constitutes the
so-called interferometric signal processing.
The suppressed-carrier signal is then amplified
by use of a low-noise amplifier (gain = 44 dB,
noise figure = 2.8 dB) before being applied to
one port of a DBM that acts as a phase detector. Due to the high level of carrier suppression, the amplifier's flicker noise contribution
is significantly reduced. The other port of the
DBM is a directionally-coupled portion of the
input signal, adjusted to be in phase quadrature with the reflected signal. By placing the
amplifier before the mixer, the effective noise
contribution from the mixer is suppressed by
the amplifier gain. The output of the DBM is
the error voltage that tracks the frequency fluctuations of the DRO relative to the cavity. This
error voltage is applied to the voltage-control
tuning input of the DRO through the servo amplifier to stabilize its frequency.

Figure 8 shows a typical error signal at
the output of the DBM versus the frequency
difference between the resonance frequency
of the cavity and the 10 GHz DRO signal
multiplied by four. This slope, which is at
the mid-point of the resonator discriminator
curve, is approximately 100 mV/kHz.
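Near the mid-point of the discriminator curve, the error voltage maps linearly to frequency offset through the 100 mV/kHz slope. A sketch with a hypothetical 50 mV error signal:

```python
SLOPE_V_PER_HZ = 100e-3 / 1e3  # 100 mV/kHz expressed in V/Hz

def freq_error_from_voltage(v_error):
    """Frequency offset from cavity resonance inferred from the DBM
    error voltage, assuming operation near mid-slope."""
    return v_error / SLOPE_V_PER_HZ

print(freq_error_from_voltage(0.050))  # about 500 Hz for a 50 mV error signal
```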
The dimensions of the prototype CSO are approximately 30 cm x 30 cm x 10 cm, and we
expect to further reduce the size in our final
design by replacing the connectorized components with a microstrip layout. The 10 GHz
reference oscillator weighs over 15 kg and
is assembled inside a 6U chassis with 43 cm
depth in a standard rack mount.
5. PM Noise Results

Figure 9 shows the PM noise of a 40 GHz CSO
constructed with an aluminum air-dielectric
cavity designed for the candidate mode TE015
with an unloaded Q of about 30,000. The
noise is measured at 10 GHz at the input of
the ×4 multiplier due to the unavailability of a
40 GHz reference oscillator, which has either
comparable or lower PM noise than the CSO.
The PM noise measurement scheme
(Fig. 10) utilizes a direct-digital phase noise
measurement system (DPNMS) [16, 17]. The
DPNMS (1) directly measures relative phase
and does not require a phase-locked reference
at the same frequency, and (2) contains a dual-channel, cross-correlation technique to reduce
DPNMS-system random noise and low-level
digitally generated artifacts, or spurs. A
DPNMS requires a reference oscillator with
noise below the test oscillator. The operating
frequency range of the DPNMS used here is
1 MHz to 400 MHz. In order to measure the
PM noise of the CSO at 10 GHz, this signal
is mixed with a 10.001 GHz signal from a
sapphire loaded cavity oscillator (SLCO) to
down-convert the 10 GHz signal to within the
operational frequency range of the DPNMS.
The down-converted signal is then compared
against a low PM noise signal obtained
from a 5 MHz quartz crystal oscillator. The
noise floor of the measurement system is
determined by replacing the CSO with a
second SLCO. The noise floor ranges from
15 to 20 dB lower than the PM noise of the
CSO at a 10 GHz output.
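The down-conversion step can be sanity-checked: mixing the 10 GHz CSO output with the 10.001 GHz SLCO produces a 1 MHz beat, just inside the 1 MHz to 400 MHz operating range of the DPNMS:

```python
def beat_frequency(f1, f2):
    """Difference frequency produced by mixing two signals."""
    return abs(f1 - f2)

f_if = beat_frequency(10.000e9, 10.001e9)
assert 1e6 <= f_if <= 400e6  # within the DPNMS operating range
print(f_if)  # 1 MHz
```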
The final results shown in Fig. 9 are
normalized to 40 GHz. The PM noise of
the free-running DRO is shown along with
the CSO that demonstrates a 30 to 35 dB
reduction in the PM noise of the free-running
DRO. The origin of random-walk noise (f⁻⁴)

and spurious close-to-the carrier noise is due to the temperature and
vibration sensitivities of the resonator. The source of flicker frequency noise (f⁻³) between 100 Hz and 2 kHz is due to flicker phase noise (f⁻¹) originating inside the discriminator, likely from the circulator, DBM, and carrier suppression amplifier [7], and also possibly from AM-to-PM conversion in the DBM. Above 2 kHz the noise of the CSO is consistent with and clearly limited by the ×4 multiplier noise, or the
bottom-most noise. The broad structure around 500 kHz offset is due
to the discriminator servo loop.
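Normalizing PM noise measured at one carrier frequency to another assumes ideal (noiseless) frequency multiplication, which adds 20 log10 of the frequency ratio; from 10 GHz to 40 GHz this is about 12 dB:

```python
import math

def normalize_pm_noise(l_dbc_hz, f_meas, f_ref=40e9):
    """Scale PM noise measured at carrier f_meas to carrier f_ref,
    assuming ideal multiplication (adds 20*log10(f_ref/f_meas) dB)."""
    return l_dbc_hz + 20 * math.log10(f_ref / f_meas)

print(round(normalize_pm_noise(-140.0, 10e9), 1))  # -128.0
```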
Note that the multiplier PM noise shown in Fig. 9 is measured with
the analog measurement system already discussed in Section 2 (Fig. 5). The
DPNMS is used to measure noise at offset frequencies from 10 Hz to
100 kHz. Above a 100 kHz offset, a photonic delay-line measurement
system (PDLMS) [18, 19] is utilized because it has less instrument noise
above 100 kHz than the DPNMS. It is common practice to use such
hybrid schemes to cover a large range of offset frequencies. Providing
a complete accounting of the measurement uncertainties is beyond the
scope of this paper, but we can summarize by stating that the measurement
uncertainty of the DPNMS is less than 1 dB and the measurement
uncertainty of the PDLMS is less than 2 dB. Both systems were calibrated
against NIST's PM/AM secondary noise standard [20, 21].
There are two drawbacks of measuring the PM noise at the input
of the multiplier at 10 GHz. First, the measured noise at 10 GHz is
limited by the ×4 multiplier noise. Any corrections to the 40 GHz
signal that are lower than the multiplier noise cannot be detected
at the 10 GHz output. Second, any improvements to the 40 GHz
signal at frequencies far from the carrier due to the passive filtration
of the cavity cannot be observed. There are strategies that can be
used to reduce the ×4 multiplier noise, hence reduce the DRO noise, given that the multiplier's output is phase stabilized by the overall
CSO scheme. The simplest strategy would be to simply replace the
DRO ×4 with a single low-noise 40 GHz voltage-controlled
oscillator (VCO). However, candidates for a sufficiently low-noise
phase-lockable oscillator are not commonly available at a 40 GHz
center frequency. Table 1 shows the performance of the CSO with
respect to a few commercially available oscillators in the Ka-band. The
PM noise of all oscillators is normalized to 40 GHz. This comparison
is done only in terms of PM noise; their size, cost and power
consumption are not taken into consideration.
6. Conclusions

We measured the PM noise of several commercially available Ka-band
components and observed wide variations in the noise performance
among devices. These results indicate how important it is to correctly
select components when designing a low-noise oscillator in the Ka
frequency band. We also reported performance of a low-PM noise
40 GHz CSO using an air-dielectric cavity resonator as a frequency
discriminator. The cavity in TE015 mode has an unloaded Q of
30,000. The PM noise of the CSO at 10 kHz offset is -128 dBc/Hz and
is entirely limited by the multiplier noise. In the future we plan

to use a 40 GHz VCO instead of a 10 GHz DRO and a ×4 multiplier,
to control the cavity temperature and use vibration isolation to
reduce close-to-the carrier noise, and
to use an ultra-stiff ceramic cavity resonator to improve the
vibration sensitivity of the oscillator.


7. Acknowledgements

The authors thank Justin Lanfranchi for the construction and noise
measurement of the 40 GHz regenerative divide-by-four circuit, and
Stefania Römisch and Jeff Jargon for useful discussions and suggestions.
We also thank Danielle Lirette and David Smith for carefully reading
and providing comments on this manuscript.
8. References

[1] F. Walls, C. Felton, and T. Martin, High Spectral Purity X-Band
Source, Proceedings of 1990 IEEE International Frequency
Control Symposium, Baltimore, Maryland, pp. 542-548, May 1990.
[2] D. Santiago and G. Dick, Microwave Frequency Discriminator
with a Cooled Sapphire Resonator for Ultra-Low Phase Noise,
Proceedings of 1992 IEEE International Frequency Control
Symposium, Hershey, Pennsylvania, pp. 176-182, May 1992.
[3] D. Santiago and G. Dick, Closed Loop Tests of the NASA
Sapphire Phase Stabilizer, Proceedings of 1993 IEEE
International Frequency Control Symposium, Salt Lake City,
Utah, pp. 774-778, June 1993.
[4] E. Ivanov, M. Tobar, and R. Woode, Advanced Phase Noise
Suppression Technique for Next Generation of Ultra Low-Noise Microwave Oscillators, Proceedings of 1995 IEEE
International Frequency Control Symposium, San Francisco,
California, pp. 314-320, May 1995.
[5] E. Ivanov, M. Tobar, and R. Woode, Applications of
interferometric signal processing to phase-noise reduction in
microwave oscillators, IEEE T. Microw. Theory, vol. 46, no.
10, pp. 1537-1545, October 1998.
[6] M. Tobar, E. Ivanov, R. Woode, and J. Searls, Low Noise
Microwave Oscillators Based on High-Q Temperature Stabilized
Sapphire Resonators, Proceedings of 1994 IEEE International
Frequency Control Symposium, Boston, Massachusetts, pp.
433-440, June 1994.
[7] A. Gupta, D. Howe, C. Nelson, A. Hati, F. Walls, and J. Nava,
High spectral purity microwave oscillator: design using
conventional air-dielectric cavity, IEEE T. Ultrason. Ferr., vol.
51, no. 10, pp. 1225-1231, 2004.
[8] F. Walls and E. Ferre-Pikal, Measurement of frequency, phase
noise and amplitude noise, Wiley Encyclopedia of Electrical
and Electronics Engineering, vol. 12, pp. 459-473, June 1999.
[9] R. Miller, Fractional-frequency generators utilizing regenerative
modulation, P. IRE, vol. 27, no. 7, pp. 446-457, 1939.
[10] E. Rubiola, M. Olivier, and J. Groslambert, Phase noise in the
regenerative frequency dividers, IEEE T. Instrum. Meas., vol.
41, no. 3, pp. 353-360, June 1992.
[11] JCGM, Evaluation of measurement data Guide to the
expression of uncertainty in measurement, JCGM 100, 2008.
[12] B. Taylor and C. Kuyatt, Guidelines for Evaluating and
Expressing the Uncertainty of NIST Measurement Results,
NIST Technical Note 1297, 1994.
[13] D. Pozar, Microwave Engineering, 3rd ed., John Wiley & Sons,
2009.
[14] E. Ginzton, Microwave Measurements, McGraw-Hill Book
Company Inc., 1957.
[15] L. Chen, C. Ong, C. Neo, V. Varadan, and V. Varadan, Microwave
Electronics: Measurement and Materials Characterization,
John Wiley & Sons, 2004.
[16] J. Grove, J. Hein, J. Retta, P. Schweiger, W. Solbrig, and S.
Stein, Direct-digital phase-noise measurement, Proceedings
of 2004 IEEE International Frequency Control Symposium,
Montreal, Canada, pp. 287-291, August 2004.
[17] C. Nelson and D. Howe, A Sub-Sampling Digital PM/AM
Noise Measurement System, NCSLI Measure J. Meas. Sci.,
vol. 7, no. 3, pp. 70-73, 2012.
[18] E. Rubiola, E. Salik, S. Huang, N. Yu, and L. Maleki, Photonic-delay technique for phase-noise measurement of microwave
oscillators, J. Opt. Soc. Am. B, vol. 22, no. 5, pp. 987-997,
2005.
[19] http://www.oewaves.com/phase-noise-measurement
[20] F. Walls, Secondary standard for PM and AM noise at 5, 10 and
100 MHz, IEEE T. Instrum. Meas., vol. 42, no. 2, pp.136-143,
1993.
[21] A. Hati, C. Nelson, N. Ashby, and D. Howe, Calibration
uncertainty for the NIST PM/AM noise standards, NIST
Special Publication 250-90, 33 p., July 2012.
[22] http://www.oewaves.com/products/item/85-micro-optoelectronic-oscillator-oeo.html (http://www.fh-microwave.com/
produits-products/photonics/)
[23] https://miteq.com/docs/MITEQ-PLDRO40000.PDF
[24] https://www.hittite.com/products/view.html/view/HMC-C073
[25] http://cp.literature.agilent.com/litweb/pdf/5989-0698EN.pdf


A Calibration System for Reference


Radiosondes that Meets GRUAN
Uncertainty Requirements
Hannu Sairanen, Martti Heinonen, Richard Högström, Antti Lakka, and Heikki Kajastie

Abstract: A new International System (SI) traceable calibration set-up for reference radiosondes is presented here with a preliminary uncertainty analysis. By meeting the GRUAN requirements, this development fulfils the needs of the meteorological
community for disseminating SI traceability to upper air humidity measurements with reduced uncertainty. The set-up was designed for calibrations from laboratory temperature down to 183 K and in terms of dew-point temperature from 193 K to 283 K.
To enable rapid changes in humidity and to shorten the time needed for a single calibration, the set-up utilizes a hybrid humidity generator method in which two air flows with known water vapor concentrations are mixed. According to a preliminary uncertainty analysis, the relative expanded uncertainty (k = 2) of the set-up is less than 2 % expressed in terms of mixing ratio.
1. Introduction

Accurate and reliable weather observations
are necessary for transport, industry and everyday living. Along with ground observations, upper air observations provide data for forecasts and for climate change studies. However, the quality of upper air measurements does not yet fulfil the requirements of climatologists [1]; improved methods and procedures are needed.
To enhance the quality of weather observations, the Global Climate Observing System
(GCOS) has established the GCOS Reference Upper-Air Network (GRUAN), which
is comprised of about 40 stations that will
provide reference observation data for the
global radiosonde station network. GRUAN
has specified targets and their priority for
measurements of all important parameters in the upper troposphere and lower stratosphere [2].
Water vapor pressure is one of the first-priority parameters, with an uncertainty requirement of 2 % in terms of mixing ratio over the measuring range from 0.1 to 90 000 ppm [3].
In order to meet the water vapor accuracy requirements set by GRUAN, traceable calibrations for radiosondes are needed. Due to the short lifetime of a sonde and a calibration cost that is high compared to its price, GRUAN aims to ensure radiosonde stability, traceability and uniformity by standards [4]. Nevertheless, GRUAN requires calibration and traceability to the SI for each radiosonde in order for it to be accepted into GRUAN [4].
This work presents a new apparatus for
reference radiosonde calibrations that meets
the GRUAN uncertainty requirements. The
apparatus was designed to meet the requirements and still to be quick enough for practical calibration use. Applying a hybrid humidity generator method [5] with two saturators, the high accuracy of a single pressure
generator and the short stabilization time of
a flow mixing generator are achieved in a
single apparatus. The operation range covers
dew-point temperatures from 183 K to 283 K
and air temperatures from 183 K to 293 K.
This paper presents the design of the apparatus with a preliminary uncertainty analysis.

Authors

Hannu Sairanen
hannu.sairanen@mikes.fi

Martti Heinonen
martti.heinonen@mikes.fi

Richard Högström
richard.hogstrom@mikes.fi

Antti Lakka
antti.lakka@mikes.fi

Heikki Kajastie
heikki.kajastie@mikes.fi

Centre for Metrology and Accreditation MIKES
Tekniikantie 1, FI-02150 Espoo, Finland

2. Principle of Operation

In humid air, the water vapor pressure (ew), dew-point temperature (td), air pressure (p) and water amount fraction (xw) are related to each other according to the well-known equation,

\[ x_w = \frac{f(p, t_d) \, e_w(t_d)}{p} , \tag{1} \]

where f is the water vapor enhancement factor. Air with accurately known water vapor pressure can be generated with a dew-point generator. However, the transition time between stable measurement points is long with this method due to the slowness of the saturator temperature control. Also, adsorption/desorption effects in the saturator outlet increase the transition time significantly at low temperatures. To obtain relatively short transition times without losing the high accuracy, we connect two dew-point generators in parallel to a measurement chamber. By switching the inlet of the chamber from one generator to another, we can induce a step change in water vapor pressure in the chamber without changing the air temperature.
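Equation (1) can be evaluated with a saturation vapor pressure formula. The sketch below assumes the Sonntag (1990) formulation over water, as referenced in the text as [6], and for simplicity takes the enhancement factor f ≈ 1:

```python
import math

def e_w_sonntag(T):
    """Saturation water vapor pressure over water in Pa (Sonntag 1990);
    T is the thermodynamic temperature in kelvin."""
    return math.exp(-6096.9385 / T
                    + 21.2409642
                    - 2.711193e-2 * T
                    + 1.673952e-5 * T ** 2
                    + 2.433502 * math.log(T))

def x_w(t_d, p, f=1.0):
    """Water amount fraction per Eq. (1): x_w = f * e_w(t_d) / p.
    The enhancement factor f is set to 1 here as a simplification."""
    return f * e_w_sonntag(t_d) / p

print(e_w_sonntag(273.15))    # about 611 Pa at 273.15 K
print(x_w(273.15, 101325.0))  # roughly 0.006, i.e. about 6000 ppm
```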

The versatility of the system can be further improved by introducing accurate mass flow control to the dew-point generators. This allows us to induce quick changes in the water vapor pressure (within the limits set by the saturator temperatures of the dew-point generators) without any changes to the temperature control settings. This mixing flow generator principle, however, introduces more uncertainty sources than a single pressure generator, which can be seen in the equation below, derived from Eq. (1) for the water amount fraction of the air after mixing,
\[ x_w = \frac{\dot{m}_{G1}}{\dot{m}_{G1} + \dot{m}_{G2}} \cdot \frac{f_{G1} \, e_{wG1}}{p_{G1}} + \frac{\dot{m}_{G2}}{\dot{m}_{G1} + \dot{m}_{G2}} \cdot \frac{f_{G2} \, e_{wG2}}{p_{G2}} . \tag{2} \]

Here, ṁ is the mass flow rate through a dew-point generator, and subscripts G1 and G2 refer to the dew-point generators 1 and 2, respectively. The saturation water vapor pressure and enhancement factor in air can be calculated according to Sonntag [6] and Hardy [7], respectively. The mixing ratio rw can be calculated as

\[ r_w = \frac{M_w}{M_g} \cdot \frac{x_w}{1 - x_w} , \tag{3} \]

where Mw and Mg are the molar masses of water and air, respectively.

Figure 1. Process chart of the designed apparatus. MFC = mass flow controller.
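The flow-weighted mixing of amount fractions and the mixing-ratio conversion of Eq. (3) reduce to a few lines. The mixing function below is a simplification (direct mass-flow weighting of amount fractions), and the molar masses are nominal values:

```python
def mix_amount_fraction(m1, x1, m2, x2):
    """Flow-weighted water amount fraction after mixing two streams,
    in the spirit of Eq. (2) (a simplified mass-flow weighting)."""
    return (m1 * x1 + m2 * x2) / (m1 + m2)

def mixing_ratio(x_w, M_w=18.015, M_g=28.965):
    """Eq. (3): r_w = (M_w / M_g) * x_w / (1 - x_w)."""
    return (M_w / M_g) * x_w / (1.0 - x_w)

# Equal 0.01 g/s flows of 10 ppm and 1000 ppm air:
x = mix_amount_fraction(0.01, 10e-6, 0.01, 1000e-6)
print(x)                # about 5.05e-4
print(mixing_ratio(x))  # about 3.1e-4
```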
3. GRUAN Requirements

GRUAN determines and prioritizes requirements for upper-air
measurements. It has identified temperature, pressure and water vapor
pressure as the most important measurands. Accuracy of water vapor
pressure is determined in terms of mixing ratio and the required level
is 2 % at the whole measurement range from 0.1 to 90 000 ppm. In
terms of frost-point temperature the range is from 186 K to 324 K.
The relative uncertainty of 2 % in mixing ratio is equivalent to 0.1K
in the frost-point temperature at 183 K. Absolute uncertainty in terms
of frost-point temperature decreases as the frost-point temperature
increases. Thus, the low end of the measurement range is the most
challenging from an uncertainty point of view.
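The stated equivalence (2 % in mixing ratio corresponding to 0.1 K in frost point at 183 K) can be illustrated with a rough Clausius-Clapeyron estimate; the latent heat of sublimation used here is an approximate constant:

```python
L_SUB = 2.83e6  # approximate latent heat of sublimation of ice, J/kg
R_V = 461.5     # specific gas constant of water vapor, J/(kg K)

def rel_vapor_pressure_error(dT, T):
    """Relative vapor pressure (and mixing ratio) error implied by a
    frost-point error dT: de/e ~ L / (R_v * T^2) * dT."""
    return L_SUB / (R_V * T ** 2) * dT

print(rel_vapor_pressure_error(0.1, 183.0))  # about 0.018, close to the 2 % level
```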
To simulate upper-air conditions, calibrations should be carried
out at temperatures down to 170 K and at absolute pressures down to
1 hPa. GRUAN also hopes that radiosondes could be tested or even calibrated in changing environments (e.g. a freezing test, where water vapor is frozen on the radiosonde). One of the key parameters under changing conditions is the response time [3].
3.1 Requirements for the Calibration Set-Up

When operating at low temperatures and in the trace moisture region,
adsorption/desorption effects between water molecules and inner
walls of the flow path usually dominate the time of stabilization in a
calibration system. Measurement periods of several days or weeks are
often needed to obtain stable conditions, which is not practical for routine calibrations. To reduce the time constant of the system down to
a practical level, the volume of the measurement chamber and tubing
need to be minimized, and the wall surfaces should be well polished.
Also, the use of two dew-point generators with mixing enhances the
usability of the system.
4. Design of the Calibration Apparatus

The calibration apparatus was designed on the basis of a hybrid
humidity generator applying single pressure and mixing flow
Vol. 9 No. 3 September 2014

Figure 2. VCR sealing with a 1 mm hole.

generator principles. Two separate air flows are saturated at selected


temperatures and then mixed to achieve the final mixing ratio. The
mixing ratio is controlled through the flow rate control and the dewpoint generators.
The traceability to the SI for the apparatus is established by dewpoint generators, mass flow controllers, and pressure measurements.
The process chart of the designed apparatus is shown in Fig. 1.
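The mixing-flow principle can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes the standard mixing-ratio relation r = (Mw/Ma) f e / (p - f e) for each saturator and mass-flow-weighted averaging of the two streams; the numerical inputs are the estimates that appear in the uncertainty tables of Section 5.

```python
# Minimal sketch (assumed relations, not the authors' code): mixing ratio
# of one saturated stream, r = (Mw/Ma) * f * e / (p - f * e), and
# mass-flow-weighted mixing of the two generator outputs.

M_W = 18.02   # molar mass of water, g/mol
M_A = 28.96   # molar mass of dry air, g/mol

def mixing_ratio(e_w, f, p):
    """Mixing ratio of one saturated stream.
    e_w: saturation water vapor pressure, Pa
    f:   water vapor enhancement factor
    p:   total saturator pressure, Pa
    """
    return (M_W / M_A) * f * e_w / (p - f * e_w)

def mixed(r1, q1, r2, q2):
    """Mass-flow-weighted mixing ratio of two streams (flows q in g/s)."""
    return (q1 * r1 + q2 * r2) / (q1 + q2)

# Estimates from Table 2: generator 1 at 183.15 K, generator 2 at 193.15 K.
r1 = mixing_ratio(9.67e-3, 1.0085, 103800.0)   # ~5.84e-8, matching Table 1
r2 = mixing_ratio(5.47e-2, 1.0076, 104000.0)
r  = mixed(r1, 1.09e-2, r2, 1.09e-2)           # equal flows -> ~1.94e-7
```

With the table estimates, equal flows through the two generators reproduce the Table 2 mixing-ratio estimate of about 1.94 x 10^-7.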
4.1 Requirements for the Calibration Set-Up

To enable humidity calibrations with high accuracy at 193 K, a
dew-point generator that operates at temperatures down to 183 K is
needed. This is achieved by immersing a saturator in a commercially
available liquid bath. By introducing a stable bath temperature with
low temperature gradients, a saturator with high efficiency, and
high-quality temperature measurements, we can achieve an uncertainty
level that meets the GRUAN requirements. The dew-point generator can
be constructed by applying the knowledge presented in [8-11].
4.2 Second Dew-Point Generator with Pressure Reducer

The second dew-point generator is immersed, together with a measurement
chamber, in another thermostatic liquid bath. Thus, this saturator is
maintained at the same temperature as the chamber containing the
radiosonde under calibration. After the second saturator, the air
pressure is reduced by a couple of hundred pascals. This decreases the
dew-point temperature and avoids condensation on the walls of the tubing
and the measurement chamber. The pressure drop can be achieved, for
example, by a closed sealing with a tiny hole drilled through it in a
standard vacuum coupling radiation (VCR) connection (Fig. 2). The
target pressure drop is achieved with a hole diameter of approximately
1 mm at a mass flow rate of 0.02 g s^-1.

TECHNICAL PAPERS

4.3 Flow Control

To obtain stable mixing, thermal mass flow controllers (MFCs) are used
to control the air flows through both generators. The MFCs are
assembled before the dew-point generators to reduce adsorption/desorption
effects. To minimize the stabilization time, electropolished stainless
steel tubes are used to connect the generators to the measurement
chamber. The inner diameter of the tubing with the selected design
parameters is 4 mm. The air flows are mixed in a standard T-joint with
standard VCR fittings.

4.4 Measurement Chamber

The measurement chamber design is crucial when considering the time
needed to obtain stable humidity conditions for the device under
calibration. After a change of flow mixing, the time needed to reach
stable humidity inside the measurement chamber is highly dependent on
the flow profile, the geometry, and the surface quality of the chamber.
By minimizing the volume and by optimizing the shape of the chamber it
is possible to achieve feasible characteristics, as presented by Lakka,
et al., in [12].

To prevent diffusion and adsorption/desorption effects between the
measurement air and the air around the electronics of the radiosonde
under calibration, the chamber is divided into two parts: the
measurement chamber and the chamber for the electronics. In Fig. 1 and
Fig. 3 the measurement part of the chamber is the smaller one below the
part reserved for the electronics. In the upper part of the chamber,
tiny leaks of water vapor through the radiosonde wiring may increase
the humidity. The stabilization time for the humidity is also longer
than in the lower part of the chamber due to the wiring, electronics,
and uneven flow. The air flows from the lower to the upper part of the
chamber through a leaking seal assembled between the measurement
chamber and the radiosonde, and through tiny holes drilled right next
to the connecting screws (see Fig. 3). The flow path is designed to
prevent the back-diffusion of water vapor to the lower part.

Figure 3. A photograph of the constructed measurement chamber is on the
left; the drawing on the right shows the inside of the chamber. A
radiosonde is screwed tightly on a leaking sealing and only the actual
sensor is in the measurement chamber. The narrow line around the sensor
illustrates the removable shield.

5. Uncertainty Analysis

A preliminary uncertainty analysis for the determination of the mixing
ratio according to Eq. (3) was carried out for the designed apparatus
according to [13]. This theoretical analysis is based on the
uncertainties of existing devices. In this study, the relative standard
uncertainties of the flow measurements are assumed to be 1 %. The
standard uncertainties of the pressure measurements are estimated to be
20 Pa. The uncertainties of the generated dew-point temperatures are
assumed to be 0.05 K. These uncertainty assumptions are based on
experience from MIKES calibrations and tests of relevant instruments.
The estimate of the air pressure is chosen to be slightly above
atmospheric pressure. The molar masses used in the calculations were
obtained from [14].

A summary of the uncertainty analysis at the low end of the humidity
range, i.e. at the frost-point temperature of 183 K, is presented in
Table 1.

Quantity | Estimate | Uncertainty of input quantity | Sensitivity coefficient | Uncertainty contribution to mixing ratio
Dew-point temperature | 183.15 K | 0.05 K | 1.03 x 10^-8 K^-1 | 7.76 x 10^-10
Water vapor pressure | 9.67 x 10^-3 Pa | 2.90 x 10^-5 Pa | 6.04 x 10^-6 Pa^-1 | 9.99 x 10^-11
Pressure | 103800 Pa | 20 Pa | 5.63 x 10^-13 Pa^-1 | 6.42 x 10^-12
Enhancement factor | 1.0085 | 1.01 x 10^-3 | 5.79 x 10^-8 | 3.33 x 10^-11
Molar mass of air | 28.96 g mol^-1 | 2.55 x 10^-5 g mol^-1 | 2.02 x 10^-9 mol g^-1 | 1.55 x 10^-13
Molar mass of water | 18.02 g mol^-1 | 2.54 x 10^-5 g mol^-1 | 3.24 x 10^-9 mol g^-1 | 2.47 x 10^-13
Mixing ratio | 5.84 x 10^-8 | | | 5.48 x 10^-10 (combined)
Relative standard uncertainty (k = 1) | | | | 0.94 %
Relative expanded uncertainty (k = 2) | | | | 1.88 %

Table 1. Summary of an uncertainty analysis at the low end of the
humidity range, i.e. when operating with only the low dew-point
generator.

To show the effect of mixing when operating with both dew-point
generators, Table 2 summarizes the uncertainty analysis

at the saturator temperatures of 183 K and 193 K with equal flow rates
through them. In this case, the pressure in generator 2 is estimated to
be 200 Pa higher than in generator 1 due to the pressure reduction.

The uncertainty was also investigated at different temperatures and
flow rate ratios. The maximum uncertainty occurs at the largest
temperature difference between the dew-point generators (183 K and
243 K) and at a large (97 %) flow rate through the low-temperature
generator. In that case, the relative expanded uncertainty reaches a
value of 3 % and is dominated by the mass flow measurements. In Fig. 4,
the expanded relative uncertainties are shown as functions of the mass
flow ratio at three different temperatures of the second dew-point
generator.

Quantity | Estimate | Uncertainty of input quantity | Sensitivity coefficient | Uncertainty contribution to mixing ratio
Dew-point generator 1:
Dew-point temperature | 183.15 K | 0.05 K | 5.16 x 10^-9 K^-1 | 2.58 x 10^-10
Water vapor pressure | 9.67 x 10^-3 Pa | 2.90 x 10^-5 Pa | 3.02 x 10^-6 Pa^-1 | 8.77 x 10^-11
Gas flow rate | 1.09 x 10^-2 g s^-1 | 1.09 x 10^-4 g s^-1 | 6.23 x 10^-6 s g^-1 | 6.78 x 10^-10
Pressure | 103800 Pa | 20 Pa | 2.81 x 10^-13 Pa^-1 | 5.63 x 10^-12
Enhancement factor | 1.0085 | 1.01 x 10^-3 | 2.90 x 10^-8 | 2.92 x 10^-11
Dew-point generator 2:
Dew-point temperature | 193.15 K | 0.05 K | 2.61 x 10^-8 K^-1 | 1.31 x 10^-9
Water vapor pressure | 5.47 x 10^-2 Pa | 1.64 x 10^-4 Pa | 3.01 x 10^-6 Pa^-1 | 4.95 x 10^-10
Gas flow rate | 1.09 x 10^-2 g s^-1 | 1.09 x 10^-4 g s^-1 | 6.23 x 10^-6 s g^-1 | 6.78 x 10^-10
Pressure | 104000 Pa | 20 Pa | 1.59 x 10^-12 Pa^-1 | 3.17 x 10^-11
Enhancement factor | 1.0076 | 1.01 x 10^-3 | 1.64 x 10^-7 | 1.65 x 10^-10
Other sources:
Molar mass of air | 28.96 g mol^-1 | 2.55 x 10^-5 g mol^-1 | 6.70 x 10^-9 mol g^-1 | 1.71 x 10^-13
Molar mass of water | 18.02 g mol^-1 | 2.54 x 10^-5 g mol^-1 | 1.08 x 10^-8 mol g^-1 | 2.74 x 10^-13
Mixing ratio | 1.94 x 10^-7 | | | 1.73 x 10^-9 (combined)
Relative standard uncertainty (k = 1) | | | | 0.89 %
Relative expanded uncertainty (k = 2) | | | | 1.78 %

Table 2. Summary of an uncertainty analysis in a case with equal flow
mixing.

Figure 4. Relative expanded uncertainty of the mixing ratio as a
function of the mass flow ratio through the low-temperature generator
at 183.15 K. The three symbols stand for second-generator temperatures
of 193 K, 218 K, and 243 K, respectively.

6. Discussion and Conclusions

A new apparatus for reference radiosonde calibrations was designed on
the basis of the hybrid humidity generator principle. Commercially
available components as well as self-designed saturators and a
measurement chamber will be used in constructing the apparatus.
According to the preliminary uncertainty analysis, the apparatus meets
the GRUAN requirements for humidity, as shown in Tables 1 and 2.
However, the apparatus must be assembled, properly tested, and
validated before starting calibrations of radiosondes. In particular,
the effect of adsorption/desorption of water molecules on the walls of
the flow path will be carefully studied, as it is a major limiting
factor for system stabilization. The validation will be finalized by
carrying out an intercomparison with another system operating within
the same measurement range.

More research is needed to validate the temperature and pressure
characteristics at the level required by GRUAN. Achieving low pressures
at low temperatures with low uncertainty, as required by GRUAN, is
especially challenging. Additionally, temperature gradients due to the
heat dissipated by a radiosonde in the measurement chamber may cause
problems, particularly at low pressures, and may introduce additional
uncertainties to the measurements.

Our new apparatus also allows us to test responses to changing
environments. With the two separate generators it is possible to
measure the response time to slower and faster changes in humidity over
a large relative humidity range. The relationship between the air
temperature and the response time can also be studied.
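The combination of the individual contributions into the stated relative uncertainties follows the root-sum-of-squares rule of the GUM [13]. A sketch, with the contribution values copied from Table 2 and the mixing-ratio estimate used to express the result in relative terms:

```python
import math

# Uncertainty contributions to the mixing ratio from Table 2 (equal flow mixing).
contributions = [
    2.58e-10, 8.77e-11, 6.78e-10, 5.63e-12, 2.92e-11,   # dew-point generator 1
    1.31e-9, 4.95e-10, 6.78e-10, 3.17e-11, 1.65e-10,    # dew-point generator 2
    1.71e-13, 2.74e-13,                                  # molar masses
]
mixing_ratio = 1.94e-7  # Table 2 estimate

u_c = math.sqrt(sum(u ** 2 for u in contributions))  # combined standard uncertainty
rel_std = u_c / mixing_ratio                         # ~0.89 % at k = 1
rel_exp = 2.0 * rel_std                              # ~1.78 % at k = 2
```

Running the numbers reproduces the combined standard uncertainty of about 1.73 x 10^-9 and the 0.89 % / 1.78 % relative values quoted in Table 2.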
7. Acknowledgments

This work was supported by the European Metrology Research


Programme (EMRP) jointly funded by the EMRP participating
countries within EURAMET and by the European Union.
8. References

[1] K. Rosenlof, How Water Enters the Stratosphere, Science,


vol. 302, no. 5651, pp. 1691-1692, December 2003.
[2] GCOS, Implementation Plan for the Global Climate Observing
System Reference Upper Air Network, 2009-2013, GCOS
Report 134, July 2009.
[3] GCOS, GCOS Reference Upper-Air Network (GRUAN):
Justification, requirements, siting and instrumentation options,
GCOS Report 112, April 2007.
[4] GCOS, The GCOS Reference Upper-Air Network (GRUAN)
GUIDE, GCOS Report 171, March 2013.
[5] C. Meyer, W. Miller, D. Ripple, and G. Scace, Performance
and Validation Tests on the NIST Hybrid Humidity Generator,
Int. J. Thermophys., vol. 29, no. 5, pp. 1606-1614, October 2008.


[6] D. Sonntag, The History of Formulations and Measurements


of Saturation Water Vapour Pressure, Third International
Symposium on Humidity & Moisture, vol. 1, pp. 93-102,
London, England, April 1998.
[7] B. Hardy, ITS-90 Formulations for Vapour Pressure, Frostpoint
Temperature, Dewpoint Temperature, and Enhancement Factors
in the Range -100 to +100 C, Third International Symposium
on Humidity & Moisture, vol. 1, pp. 214-222, London, England,
April 1998.
[8] B. Choi, J. Kim, and S. Woo, Uncertainty of the Kriss Low
Frost-point Humidity Generator, Int. J. Thermophys., vol. 33,
no. 8-9, pp. 1559-1567, September 2012.
[9] D. Zvizdic, M. Heinonen, and D. Sestan, New Primary Dew-point
Generators at HMI/FSB-LPM in the Range from -70 C to +60 C,
Int. J. Thermophys., vol. 33, no. 8-9, pp. 1536-1549,
September 2012.
[10] G. Mamontov, Application of the Phase Equilibrium Method for
Generation of -100 C of Humid Gas Frost-point Temperature,
Meas. Sci. Technol., vol. 11, no. 6, pp. 818-827, 2000.
[11] G. Scace and J. Hodges, Uncertainty of the NIST Low Frostpoint Humidity Generator, Proceedings of 8th International
Symposium on Temperature and Thermal Measurements in
Industry and Science (TEMPMEKO), pp. 597-602, Berlin,
Germany, June 2001.
[12] A. Lakka, H. Sairanen, M. Heinonen, and R. Hgstrm,
Comsol-Simulations as a Tool in Validating a Measurement
Chamber, Proceedings of 12th International Symposium on
Temperature and Thermal Measurements in Industry and Science
(TEMPMEKO), Madeira, Portugal, October 2013.
[13] JCGM, Evaluation of Measurement Data Guide to the
Expression of Uncertainty in Measurement, JCGM 100, 2008.
[14] M. Wieser, Atomic Weights of the Elements 2005 (IUPAC
Technical Report), Pure Appl. Chem., vol. 78, no. 11, pp. 2051-2066,
2006.


Calibration of Ultrasonic Flaw Detectors


Samuel C. K. Ko, Aaron Y. K. Yan, and Hing-wah Li

Abstract: A calibration system for ultrasonic flaw detectors has been developed at the Government of the Hong Kong Special Administrative Region Standards and Calibration Laboratory (SCL) in accordance with the international standard EN12668-1:2010.
The calibration covers all the periodic and repair tests (the Group 2 tests) required in the standard for checking the performance of
ultrasonic instruments, including their stability, transmitter pulse parameters, receiver response parameters, and time-base linearity.
During the calibration, the ultrasonic flaw detector's transmitter is connected to a combination of a delay generator and a function
generator, which simulates a delayed version of the transmitted signal as the reflected signal. The simulated reflected signal is then
fed to the receiver of the ultrasonic flaw detector. The stability of the received waves over time and supply-voltage variation is measured,
and the receiver frequency response is obtained. Other performance parameters of the receiver, such as gain accuracy and linearity,
are calibrated by comparing gain steps with step attenuators. Lastly, a burst of pulse waves is generated by an arbitrary waveform
generator to simulate a burst of reflected waves to check the linearity of the time base.
1. Introduction

Ultrasonic flaw detectors are used in various


sectors for checking defects under the
surface of steel materials and welding joints
[1, 2]. An ultrasonic flaw detector emits
short ultrasonic pulse-waves with center
frequencies ranging from 0.1 to 15 MHz
into materials to detect internal flaws by
measuring the amplitude and the arrival time
of the reflected waves. The two parameters

must be calibrated in order to locate the


flaws accurately. Figure 1 illustrates that the
arrival time of the reflected wave is reduced
when a flaw is present in a steel block.
The flaw detection can be described by a
simple equation of motion, 2d = s t, where d
is the depth of the flaw beneath the surface,
s is the speed of ultrasound in the material,
and t is the arrival time of the reflected signal.
Any deviation in the time measurement will
cause an error in the measured distance to
the flaw.
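The depth arithmetic can be sketched directly; the sound speed below is an assumed typical longitudinal value for steel, not a figure from the paper:

```python
# Sketch of the flaw-location arithmetic: 2d = s * t, so d = s * t / 2.
SPEED_STEEL = 5900.0  # m/s, assumed typical longitudinal sound speed in steel

def flaw_depth(arrival_time_s, speed=SPEED_STEEL):
    """Depth of the flaw below the surface from the round-trip arrival time."""
    return speed * arrival_time_s / 2.0

# An echo arriving 20 us after the transmitted pulse puts the flaw 59 mm deep.
depth = flaw_depth(20e-6)
```

The factor of two accounts for the round trip: the pulse travels down to the flaw and back to the transducer.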
The international standard EN12668-1:2010 [3] for the calibration of ultrasonic
flaw detectors was released in 2010 to
supersede EN12668-1:2000 [4]. In the
newer standard, the transducer is replaced by
a signal generator during the calibrations. To
meet the electrical performance requirements
of the standard, an ultrasonic instrument shall

be verified using two groups of tests. Tests
performed at the manufacturer on a representative
sample of the ultrasonic instruments produced
are categorized as Group 1 tests. The Group
2 tests shall be performed on every ultrasonic
instrument prior to its shipment by the manufacturer,
every 12 months during the lifetime of the
instrument, and immediately after every
repair of the instrument.

Authors
Samuel C. K. Ko, samuel.ko@itc.gov.hk
Aaron Y. K. Yan, ykyan@itc.gov.hk
Hing-wah Li, scl_rf@itc.gov.hk
The Government of the Hong Kong Special Administrative Region
Standards and Calibration Laboratory
36/F Immigration Tower, 7 Gloucester Road, Wan Chai, Hong Kong

Figure 1. Working principle of an ultrasonic flaw detector.
SCL provides the Group 2 tests which
include stability, transmitter pulse parameters,
amplifier frequency response, and linearity
of time base. The tests performed are
summarized in Table 1. The measurement
setup and techniques are detailed in Section
2 of the paper. Section 3 discusses the
measurement uncertainties of the tests.

EN12668-1:2010 Standard | Title of Test
Clause 9.3 | Stability after warm-up time; Display jitter; Stability against voltage variation
Clause 9.4 | Transmitter voltage, rise time, duration and reverberation
Clause 9.5 | Amplifier frequency response; Equivalent input noise; Accuracy of calibrated attenuator/gain; Linearity of vertical display
Clause 9.6 | Linearity of time base
Clause 8.8.2 | Linearity of time base for digital ultrasonic instruments

Table 1. Summary of Group 2 test items.

2. Measurements

The measurement setup at SCL and the
measurement block diagram are shown in
Figs. 2 and 3, respectively. The transmitter
pulse of the unit under test (UUT) is delayed
by a delay/pulse generator to simulate the
propagation delay of the reflected waves.
The function generator generates an output
with a frequency ranging from 0.1 to 15 MHz
to simulate an ultrasonic transducer. This
simulated ultrasonic signal is then fed to the
receiver of the ultrasonic flaw detector. A
measurement example of the amplitude and
time position of the received echo signal is
shown in Fig. 4: the amplitude of the received
echo signal is at 80 % of full screen height
and its time position is at 50 % of full
screen width.
2.1 Stability After Warm-up Time

The stability of the amplitude and the time


position of the echoed signal are recorded
at 10 minute intervals over a period of 30
minutes. According to the standard [3], the
signal amplitude shall not vary by more than
2 % of full screen height and the maximum
shift along the time base shall be less than
1 % of full screen width.
2.2 Display Jitter

During the stability measurement described


in Section 2.1, the fluctuation in amplitude
and the time position of the echo signal are
recorded as the display jitter. The signal
amplitude shall not vary by more than 2 %
of the full screen height and the position of
the signal shall not vary by more than 1 %
of the full screen width.

1. Power amplifier for gain accuracy test
2. Arbitrary waveform generator for time base linearity test
3. Function/arbitrary waveform generator to generate the simulated ultrasonic signal
4. Digital delay/pulse generator to simulate the propagation delay of the ultrasonic signal in steel
5. Power supply for the ultrasonic flaw detector (UUT)
6. Step attenuator with a step resolution of 1 dB
7. Oscilloscope to test the transmitter output pulse signal
8. Step attenuator for the gain accuracy test with a step resolution of 10 dB
9. Ultrasonic flaw detector (UUT)
10. Computer for equipment automation

Figure 2. Measurement setup for performance tests of ultrasonic flaw
detectors in SCL.
2.3 Stability Against Voltage Variations

The power supply voltage is reduced slowly


from the maximum working voltage of the
UUT until the UUT ceases to operate properly.
The amplitude and position of the signal shall
be stable within the limits specified by the
manufacturer. The operation of automatic cutoff or low battery warning (if fitted) shall occur
before the reference signal amplitude varies by
more than 2 % of the full screen height or the
range changes by more than 1 % of the full
screen width from the initial setting.

2.4 Transmitter Voltage, Rise Time,


Reverberation, and Duration

The measurement setup is shown in Fig. 5.


The output voltage of the transmitted pulse
generated from an ultrasonic flaw detector
is attenuated by 40 dB and connected to an
oscilloscope to capture the pulse waveform.
SCL has developed software to measure the
50 ohm loaded pulse voltage, V50, rise time,
tr, duration, td, and the amplitude of any
reverberation, Vr, as defined in the standard.


Figure 3. Block diagram of the measurement set-up. The arbitrary
waveform generator is used only in the linearity of time base test.

Figure 6. An example of a captured transmitter pulse waveform.

The standard [3] requires that:

i. the transmitter loaded pulse voltage shall be within 10 % of the
value quoted in the manufacturer's specification,
ii. the pulse rise time shall be less than the maximum value quoted
in the manufacturer's specification,
iii. the pulse duration shall be within 10 % of the value quoted in
the manufacturer's specification, and
iv. any pulse reverberation shall be less than 4 % of the peak-to-peak
transmitter pulse voltage.

An example of a captured transmitter pulse waveform is shown in
Fig. 6. The figure shows V50 = 2.114 x 92.79 = 196.16 V, where 92.79
is the 39.5 dB attenuation expressed as a ratio, tr = 3.61 ns, and
td = 104.3 ns; no reverberation was observed.
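The attenuation-to-ratio conversion used in the V50 example is the standard voltage-dB relation; a sketch of that conversion (note that 20 log10(92.79) evaluates to about 39.35 dB):

```python
import math

def db_to_ratio(att_db):
    """Convert a voltage attenuation in dB to a linear voltage ratio."""
    return 10.0 ** (att_db / 20.0)

def ratio_to_db(ratio):
    """Inverse conversion: linear voltage ratio to dB."""
    return 20.0 * math.log10(ratio)

# Scaling the 2.114 V waveform seen at the oscilloscope back through the
# attenuation ratio of 92.79 reproduces V50 = 196.16 V.
v50 = 2.114 * 92.79
```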
2.5 Amplifier Frequency Response

The amplifier frequency response of the receiver is characterized by its
center frequency and 3 dB bandwidth. The calibration procedure using
the measurement setup (Fig. 3) is:

i. Vary the frequency of the function generator to obtain the maximum
signal amplitude displayed on the UUT screen. This frequency is
recorded as fmax.
ii. Adjust the amplitude of the function generator until the signal
amplitude displayed on the UUT is at 80 % of full screen height.
iii. Decrease the external step attenuator by 3 dB.
iv. Increase and then decrease the frequency from fmax to obtain
the upper and lower 3 dB points at which the displayed signal
amplitude returns to 80 % of full screen height. The higher
frequency point is called the upper 3 dB frequency, fu, and the
lower frequency point is called the lower 3 dB frequency, fl.
Figure 7 illustrates the relationship between fmax, fu, and fl.

Figure 4. Display of the received echo signal.

Figure 5. Set-up for transmitter voltage, rise time, duration and
reverberation tests.

In normal usage, the selected frequency band must match the
installed ultrasonic transducer. Any drift in the center frequency
or bandwidth may degrade the performance of the flaw detector;
therefore:


i. The center frequency, fc, given by fc = (fu + fl)/2, shall be within
5 % of the value stated in the manufacturer's specification.
ii. The 3 dB bandwidth, B, given by B = fu - fl, shall be within
10 % of the bandwidth stated in the manufacturer's specification.

Figure 7. Amplifier frequency response of the receiver.

External step attenuator setting (dB) | Target amplitude on screen (% of screen height) | Acceptable amplitude (% of screen height)
1 | 90 | 88 to 92
2 | 80 | (Reference line)
4 | 64 | 62 to 66
6 | 50 | 48 to 52
8 | 40 | 38 to 42
12 | 25 | 23 to 27
14 | 20 | 18 to 22
20 | 10 | 8 to 12
26 | 5 | 3 to 7

Table 2. Test specifications of the linearity of vertical display.
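The target amplitudes in Table 2 follow from the 80 % reference at the 2 dB attenuator setting: a setting of a dB offsets the displayed level by (2 - a) dB. A sketch of that arithmetic (my reading of the table, not the authors' code):

```python
# Expected display amplitude in the vertical-display linearity test.
# The reference is 80 % of screen height at a 2 dB attenuator setting,
# so a setting of a dB shifts the displayed level by (2 - a) dB.

REF_AMPLITUDE = 80.0  # % of full screen height
REF_SETTING = 2.0     # dB

def target_amplitude(setting_db):
    return REF_AMPLITUDE * 10.0 ** ((REF_SETTING - setting_db) / 20.0)

targets = {a: round(target_amplitude(a)) for a in (1, 4, 6, 8, 12, 14, 20, 26)}
# -> {1: 90, 4: 64, 6: 50, 8: 40, 12: 25, 14: 20, 20: 10, 26: 5}
```

Rounding each result to the nearest percent reproduces the target column of Table 2.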

2.6 Equivalent Input Noise

The gain is set to maximum with the receiver input disconnected, and the
noise level on the UUT screen is observed and recorded. Then the gain
is reduced by 40 dB and the UUT is connected to a signal generator. The
peak-to-peak amplitude of the signal generator, Vin, is adjusted until the
signal amplitude is at the same level as the previously recorded mean noise
level. The equivalent input noise voltage, Vein, is estimated by
Vein = Vin / 100 (the 40 dB gain reduction corresponds to a factor of 100),
and the equivalent input noise per root bandwidth, nin, is
nin = Vein / sqrt(B), where B is the 3 dB bandwidth described in Section 2.5.
The noise per root bandwidth shall be less than 80 x 10^-9 V/sqrt(Hz)
for each frequency band according to the standard.
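The noise arithmetic can be sketched directly; the generator amplitude and bandwidth below are hypothetical example values, not measured data:

```python
import math

def equivalent_input_noise(v_in_pp, bandwidth_hz):
    """Equivalent input noise voltage and noise per root bandwidth.
    v_in_pp:      generator peak-to-peak amplitude matching the noise level, V
    bandwidth_hz: receiver 3 dB bandwidth from Section 2.5, Hz
    """
    v_ein = v_in_pp / 100.0             # gain was reduced by 40 dB = factor 100
    n_in = v_ein / math.sqrt(bandwidth_hz)
    return v_ein, n_in

# Hypothetical example: 2 mVpp generator level and a 7.5 MHz bandwidth.
v_ein, n_in = equivalent_input_noise(2e-3, 7.5e6)
# n_in is ~7.3e-9 V/sqrt(Hz), below the 80e-9 V/sqrt(Hz) limit
```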

Figure 8. A burst of 11 regularly spaced signals.

2.7 Accuracy of Calibrated Attenuator/Gain

The gain accuracy of the receiver amplifier is calibrated by comparison
with the RF step attenuators in the measurement setup. The calibration
procedure is:
i. Set the gain of the UUT to its middle value, for example,
50 dB for a 100 dB gain range. Set the external step attenuator to
0 dB and adjust the amplitude of the function generator until the
signal amplitude is at 80 % of the full screen height.
ii. Increase the external step attenuator by 10 dB.
iii. Increase the UUT gain by minimum gain steps until the gain is
increased by a total of 10 dB.
iv. Observe the signal amplitude and record it as a percentage of the
full screen height.
v. Repeat the steps until the UUT maximum gain is reached.

An external power amplifier is connected as shown in Fig. 3 to
calibrate the lower gain range of the UUT settings, for example,
from 0 to 50 dB; otherwise the signal may be too weak to display on
the UUT screen. Then the remaining steps are performed:
vi. Set the UUT gain to its middle value. Set the external step
attenuator to 50 dB and adjust the amplitude of the function generator
until the signal amplitude is at 80 % of the full screen height.
vii. Decrease the UUT gain by minimum gain steps until the gain is
decreased by a total of 10 dB. Observe the signal amplitude; it
should be monotonically decreasing.
viii. Decrease the external step attenuator by 10 dB and record the
signal amplitude as a percentage of the full screen height.
ix. Repeat the steps until the UUT minimum gain is reached.

According to the standard [3], the cumulative error in the fine
attenuator/gain shall not exceed 1 dB in any successive 20 dB span, or
across the full range, whichever is smaller. In addition, the cumulative
error in the coarse attenuator/gain shall not exceed 2 dB in any
successive 60 dB span, or across the full range, whichever is smaller.

2.8 Linearity of Vertical Display

This test measures the accuracy of the vertical grid lines of the UUT
screen. The external attenuator is set to 2 dB in the measurement setup
(Fig. 3). The amplitude of the signal generator and the UUT gain are
adjusted so that the signal is at 80 % of the full screen height (Fig. 4).
Then the external step attenuator is switched to the values of (1, 4, 6,
8, 12, 14, 20 and 26) dB to verify the vertical grid line accuracy on the
display. At each frequency band and step attenuator setting, the measured
signal amplitude shall be within the tolerances given in Table 2.

Figure 9. The calibration procedure for the linearity of time base for
digital instruments: the delayed signal is aligned in turn with the
0 %, 5 %, 10 %, ..., 100 % lines of the UUT horizontal scale.
2.9 Linearity of Time Base

This test measures the accuracy of the horizontal grid line of the
UUT screen. An arbitrary waveform generator is used instead of
a delay/pulse generator (Fig. 3) such that a burst of 11 pulses is
generated to trigger the signal generator. As a result, 11 regularly
spaced signals should be observed on the UUT screen (Fig. 8). The
standard [3] specifies that the deviation of the reference signals from
their ideal positions shall be within 1 % of the full screen width.
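Checking the 11 signals against their ideal positions is simple bookkeeping; a sketch with hypothetical readings:

```python
# Deviation of the 11 regularly spaced signals from their ideal positions
# (0 %, 10 %, ..., 100 % of full screen width). Readings are hypothetical.
observed = [0.0, 10.2, 19.9, 30.1, 40.0, 50.3, 59.8, 70.0, 80.1, 89.9, 100.0]
ideal = [10.0 * i for i in range(11)]

max_deviation = max(abs(o - t) for o, t in zip(observed, ideal))
within_limit = max_deviation <= 1.0   # standard limit: 1 % of full screen width
```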
2.10 Linearity of Time Base for Digital Ultrasonic Instruments

Some parameters of digital ultrasonic instruments are not applicable
to analog ultrasonic instruments, for example, the digitization of
the A-scan and the algorithm used to produce the A-scan display.
Therefore an additional test is required for the calibration of digital
ultrasonic instruments.

In this test, a pulse/delay generator is used instead of an arbitrary
waveform generator in the measurement setup shown in Fig. 3
to generate a single delayed pulse. The delay is adjusted so that
the leading edge of the signal aligns with the 0 % line of the UUT
horizontal scale. Then, the pulse/delay generator is controlled to
generate another delayed pulse, aligned with the next line of the UUT
horizontal scale. The process is repeated until the last line of the
UUT horizontal scale has been tested (Fig. 9). Finally, the location
on the ultrasonic instrument screen is plotted against the delay. A
best-fit curve to the measured values is generated by software so that
the error of each measurement can be found. The time base non-linearity
shall be within 0.5 % of the full screen width.
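The best-fit evaluation for the digital time base test can be sketched with a least-squares line; the data below are synthetic, with a deliberate 0.3 % deviation injected at the 50 % line:

```python
import numpy as np

# Synthetic data: delays programmed on the pulse/delay generator and the
# screen positions (% of full width) read back, with a 0.3 % bump injected.
delays = np.linspace(0.0, 100e-6, 11)      # s
positions = np.linspace(0.0, 100.0, 11)    # % of full screen width
positions[5] += 0.3                        # simulated deviation at the 50 % line

# Best-fit line of position vs. delay; residuals give the per-point error.
slope, intercept = np.polyfit(delays, positions, 1)
residuals = positions - (slope * delays + intercept)
nonlinearity = np.max(np.abs(residuals))   # % of full screen width
passes = nonlinearity <= 0.5               # standard limit: 0.5 %
```

Note that the fit absorbs part of the injected bump into its intercept, so the reported non-linearity (about 0.27 %) is slightly smaller than the raw 0.3 % deviation.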
3. Measurement Uncertainties

The standard EN12668-1:2010 [3] does not address measurement
uncertainties. In this section, we present an evaluation of the sources
of measurement uncertainty in accordance with [5].
3.1 Stability After Warm-up Time

The measurement model for stability after warm-up time is

S = A1 - A0 ,   (1)

where
S is the amplitude or time base stability of the echo signal,
A1 is the amplitude or position of the echo signal after 10 min, 20 min
or 30 min, and
A0 is the initial amplitude or position of the echo signal.
The uncertainties of A1 and A0 are from the UUT discrimination of
the final reading and initial reading, respectively. The standard

NCSLI Measure J. Meas. Sci. www.ncsli.org

where
S is the amplitude or time base stability of the echo signal,
A1 is the maximum amplitude or position of the echo signal, and
A0 is the minimum amplitude or position of the echo signal.

The uncertainties of A1 and A0 are from the UUT discrimination of the final reading and the initial reading, respectively. The standard measurement uncertainties of A1 and A0 are equal to 0.5 x LSD / √3, where LSD is the least significant digit of the display. The typical value of LSD equals 1 % of full scale.

3.2 Display Jitter
The measurement model for display jitter is

J = A1 - A0,    (2)

where
J is the amplitude or time base jitter of the echo signal,
A1 is the maximum amplitude or position of the echo signal, and
A0 is the minimum amplitude or position of the echo signal.

The uncertainties of A1 and A0 are from the UUT discrimination of the final reading and the initial reading, respectively. The standard measurement uncertainties of A1 and A0 are equal to 0.5 x LSD / √3.

3.3 Stability Against Voltage Variations
The measurement model for stability against voltage variations is

S = A1 - A0,    (3)

where
S is the amplitude or time base stability of the echo signal,
A1 is the maximum amplitude or position of the echo signal, and
A0 is the minimum amplitude or position of the echo signal.

Similarly, the uncertainties of A1 and A0 are from the UUT discrimination of the final reading and the initial reading, respectively, and the standard measurement uncertainties of A1 and A0 are equal to 0.5 x LSD / √3.

3.4 Transmitter Voltage, Rise Time, Duration and Reverberation
The measurement model for transmitter voltage, rise time, duration and reverberation is

vm = a x v,    (4)

where
vm is the measured transmitter pulse voltage or reverberation,
a is the attenuation in voltage ratio of the fixed attenuator, and
v is the delta reading of the voltage cursors of the oscilloscope.

The standard uncertainty of vm is due to the standard uncertainties of a and v. The relative standard measurement uncertainty, u(a), is assessed from the attenuation calibration results of the fixed attenuator, which are traceable to SCL's attenuation measurement. The relative standard measurement uncertainty, u(v), is obtained from the voltage measurement performed with an oscilloscope.

The voltage measurement accuracy is calculated as the sum of the resolution and the voltage measurement uncertainty of the oscilloscope calibration. It is evaluated as a Type B uncertainty with a rectangular distribution. The assigned voltage measurement uncertainty is assessed from the vertical deflection calibration of the oscilloscope. The digital oscilloscope is 8-bit in voltage measurement, so the resolution equals 1 / 256 of full scale, where full scale equals eight vertical divisions. As a result, the resolution is expressed as a percentage and calculated as 1 / 256 x 8 x (V / div setting) / measured voltage x 100 %.

3.5 Amplifier Frequency Response
The measurement model for amplifier frequency response is

fo = (fu + fl) / 2,    (5)
Δf = fu - fl,

where
fo is the center frequency of the frequency band,
fu is the upper 3 dB frequency,
fl is the lower 3 dB frequency, and
Δf is the 3 dB bandwidth.

The uncertainties of fu and fl are from the 3 dB attenuator accuracy, which is assessed from its calibration (performed by the National Physical Laboratory (NPL) in the United Kingdom).

3.6 Equivalent Input Noise
The measurement model for equivalent input noise is

nin = Vein / √Δf,    (6)
Vein = Vin / 10^(40/20) = Vin / 100,

where
nin is the noise per root bandwidth in V / √Hz,
Vin is the input signal amplitude in volts (peak-to-peak),
Vein is the equivalent input noise in volts, and
Δf is the 3 dB bandwidth.

The standard uncertainty of nin is due to the standard uncertainties of Vin and Δf. The standard measurement uncertainty, u(Δf), is assessed from the bandwidth measurement in Section 3.5. The standard measurement uncertainty, u(Vin), is obtained through a Type B evaluation. It consists of two components: a voltage measurement using the oscilloscope (Section 3.4), and the UUT reading fluctuation, which is assessed to have a rectangular distribution with limits of the maximum fluctuation of the UUT noise level reading.

3.7 Accuracy of Calibrated Attenuator / Gain
The measurement model for the accuracy of the calibrated attenuator/gain is

E = 20 x log10 (R / Ro),    (7)

where
R is the UUT amplitude as a percentage of full screen height at different attenuator/gain settings,
Ro is the reference signal amplitude as a percentage of full screen height, and
E is the measurement error in decibels.
The standard uncertainty of E is due to the display resolution, dr, and the 10 dB step attenuator accuracy, a. The standard measurement uncertainty for display resolution, u(dr), is calculated as 0.5 x LSD / √3, where LSD is the resolution or fluctuation of the UUT signal amplitude reading. The standard measurement uncertainty due to step attenuator accuracy, u(a), is assessed from the results of attenuation measurements.
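As a worked illustration of Eq. (7) and the two uncertainty contributions just described, here is a minimal Python sketch. All numerical values are hypothetical, and converting the display-resolution term into decibels through the sensitivity coefficient 20 / (R ln 10) is our assumption, not a step stated in the paper:

```python
import math

def attenuator_error_db(r, r0):
    """Eq. (7): measurement error E = 20 * log10(R / Ro) in decibels."""
    return 20.0 * math.log10(r / r0)

# Hypothetical readings: UUT amplitude R and reference amplitude Ro,
# both as a percentage of full screen height (a nominal 6 dB step).
R, Ro = 40.0, 80.0
E = attenuator_error_db(R, Ro)              # about -6.02 dB

# Combined standard uncertainty of E from the two contributions above.
LSD = 0.5                                   # hypothetical display resolution, % FSH
u_dr = 0.5 * LSD / math.sqrt(3)             # rectangular distribution, % FSH
u_dr_db = 20.0 / (R * math.log(10)) * u_dr  # converted to dB via dE/dR (assumption)
u_a = 0.02                                  # hypothetical step attenuator accuracy, dB
u_E = math.sqrt(u_dr_db ** 2 + u_a ** 2)
print(round(E, 2), round(u_E, 4))
```

The root-sum-square combination assumes the two contributions are uncorrelated, as is usual for a Type B budget of this kind.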
3.8 Linearity of Vertical Display
The measurement model for the linearity of the vertical display is

E = A1 - A0 = A1 - 80 x 10^((2 - a) / 20),    (8)

where
A0 is the target amplitude as a percentage of full screen height,
a is the attenuation of the step attenuator in decibels,
A1 is the UUT amplitude reading as a percentage of full screen height,
and
E is the measured error as a percentage of full screen height.
The standard uncertainty of E is due to the standard uncertainty
of a and A1. The standard measurement uncertainty due to the step
attenuator accuracy, u(a), is assessed from the results of attenuation
measurements. The standard measurement uncertainty, u(A1), is
calculated as 0.5 x LSD / √3, where the LSD is the resolution of the
UUT.
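A short Python sketch of Eq. (8) and a GUM-style propagation of u(a) and u(A1) may help here. The numerical values are hypothetical; the sensitivity coefficient dE/da = A0 ln(10) / 20 follows from differentiating Eq. (8):

```python
import math

def vertical_linearity_error(a1, a_db):
    """Eq. (8): target amplitude A0 = 80 x 10**((2 - a)/20) % FSH and
    measured error E = A1 - A0, both in % of full screen height."""
    a0 = 80.0 * 10.0 ** ((2.0 - a_db) / 20.0)
    return a1 - a0, a0

# Hypothetical point: step attenuator set to 8 dB, UUT reads 40.5 % FSH.
A1, a = 40.5, 8.0
E, A0 = vertical_linearity_error(A1, a)     # A0 is about 40.1 % FSH

# u(E)^2 = u(A1)^2 + (dE/da)^2 * u(a)^2, with dE/da = A0 * ln(10) / 20.
LSD = 0.5                                    # hypothetical UUT resolution, % FSH
u_A1 = 0.5 * LSD / math.sqrt(3)
u_a = 0.03                                   # hypothetical attenuator uncertainty, dB
u_E = math.sqrt(u_A1 ** 2 + (A0 * math.log(10) / 20.0 * u_a) ** 2)
print(round(E, 2), round(u_E, 3))
```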
3.9 Linearity of Time Base
The measurement model for the linearity of the time base is

D = P0 - P1,    (9)

where
D is the deviation from the reference position,
P0 is the reference position, and
P1 is the observed signal position on screen.

The standard uncertainty of D is due to the standard uncertainties of P0 and P1. The standard measurement uncertainty, u(P1), is calculated as 2 x 0.5 x LSD / √3, where the LSD is the discrimination of the UUT horizontal scale for two markers as a percentage of the full screen width. The standard measurement uncertainty due to the reference marker timing accuracy of the arbitrary waveform generator, u(P0), is obtained from its calibration referenced to the SCL time standard.

3.10 Linearity of Time Base for Digital Ultrasonic Instruments
The measurement model for the linearity of the time base for digital ultrasonic instruments is

L = (D1 - D0) / D x 100 %,    (10)

where
L is the linearity error as a percentage of full scale,
D1 is the delay time setting of the delay/pulse generator,
D0 is the delay time from a linear best fit curve, and
D is the delay time of full scale.

The standard uncertainty of L is the combined uncertainties of D0 and D1. The standard measurement uncertainty, u(D0 / D), is calculated as 2 x 0.5 x LSD / √3, where the LSD is the horizontal scale discrimination of initial and final markers of the UUT as a percentage of the full screen width. The standard measurement uncertainty due to the delay time generation accuracy of the digital delay/pulse generator, u(D1 / D), is assessed from its calibration referenced to the SCL time standard.

Table 3. The CMCs of SCL for ultrasonic flaw detector calibrations.

Tests                                               Calibration and Measurement Capability (CMC)
Clause 9.3 Stability
  Stability after warm-up time                      1 %
  Display jitter                                    1 %
  Stability against voltage variation               1 %
Clause 9.4 Transmitter pulse parameter
  Pulse voltage (50 V to 450 V)                     3 %
  Rise time (5 ns to 50 ns)                         3 %
  Reverberation                                     3 %
  Duration (50 ns to 2 μs)                          1 %
Clause 9.5 Receiver
  Amplifier frequency response: center frequency    2 %
  Amplifier frequency response: 3 dB bandwidth      3 %
  Equivalent input noise (10 nV/√Hz to 100 nV/√Hz)  3 %
  Internal attenuator/gain (0 dB to 110 dB)         0.3 dB
  Linearity of vertical display                     1 %
Clause 9.6
  Linearity of time base (3 μs to 7 ms)             1 %
Clause 8.8.2
  Linearity of time base for digital ultrasonic
  instruments (3 μs to 7 ms)                        1 %

The Calibration and Measurement Capabilities (CMCs) of SCL for ultrasonic flaw detector calibrations are summarized in Table 3.

4. Conclusions

SCL has developed a calibration service for ultrasonic flaw detectors in accordance with the standard EN12668-1:2010 [3], which
includes stability, transmitter pulse parameters, amplifier frequency
response, and linearity of time base tests. The calibration methods,
measurement setup, and measurement uncertainty of the calibration
service have been detailed in this paper.
5. References

[1] R. Halmshaw, Introduction to the Non-Destructive Testing of Welded Joints, Abington Publishing, 2nd edition, 1996.
[2] C. Hellier, Handbook of Nondestructive Evaluation, McGraw-Hill, 2nd edition, 2013.
[3] BSI, Non-destructive testing - Characterization and verification of ultrasonic examination equipment, Part 1: Instruments, BS EN12668-1, 2010.
[4] BSI, Non-destructive testing - Characterization and verification of ultrasonic examination equipment, Part 1: Instruments, BS EN12668-1, 2000.
[5] JCGM, Evaluation of measurement data - Guide to the expression of uncertainty in measurement, JCGM 100, 2008.


TECHNICAL PAPERS

An Uncertainty Model and Analyzer for a Space Environmental Test Facility
Mihaela Fulop

Abstract: This paper introduces a measurement uncertainty model and analyzer tool being developed for one of the world's largest space environmental test facilities, the Spacecraft Propulsion Research Facility (B2) located at NASA Glenn Research Center's Plum Brook Station near Sandusky, Ohio. The B2 is the world's only facility capable of testing full-scale upper-stage launch vehicles and rocket engines under simulated high-altitude conditions.
Developing an uncertainty tool for the data acquisition of a test facility of this scale presents unique metrology challenges. Not only must the uncertainty analyzer tool be versatile enough to accommodate a wide range of disciplines and measurement requirements (such as temperature, pressure, strain, vacuum, and acceleration), but it must provide a user-interactive platform for evaluating system measurement uncertainty based on customer-chosen measurement scenarios ranging from the simplest tests to the most complex ones. The uncertainty analyzer tool, which was developed in Microsoft's Visual Basic for Applications (VBA) in Excel, will serve multiple purposes, including aiding in the optimal selection of measuring and test equipment, communicating capabilities to customers, and supporting all decisions based on measurements. Although the analysis tool was developed for the data acquisition system in B2, it can be easily sized to fit other data acquisition systems at the site utilizing similar measurement methods. This paper outlines the methodology followed, the features of this tool, and how the tool can be applied to the measurement processes of different facilities.
1. Introduction

NASA Glenn Research Center's Calibration Laboratory was tasked with providing a Measurement Uncertainty Analysis (MUA) tool for the data acquisition system at the Spacecraft Propulsion Research Facility (B2) at Glenn's Plum Brook Station campus. The MUA tool would need to accommodate all of the types of measurements required for typical tests in the B2, including voltage, temperature, strain, accelerometer, and pressure. In the first phase, Glenn's Calibration Laboratory would provide the MUA tool for the measurement disciplines used most frequently for facility tests: temperature and pressure.
It is essential to understand the measurement path for which the analysis was developed in order to ensure the correctness of each measurement uncertainty analysis. Therefore, this paper begins with a short presentation of the measurement path for the B2 data acquisition system. Because different propellants can be used during tests and
because of their known hazards, the measurement path of the B2 data acquisition system
is located in three different places that are all
interconnected during tests.
In the first location, referred to as the test building, the test article is loaded in a thermal vacuum chamber lined with a liquid nitrogen cold wall capable of maintaining -195.5 °C; the sensors also are located in the vacuum chamber. The second location, which is immediately next to the test building, is referred to as the data room. The data room contains the rest of the measuring and test equipment. The connections between the sensors and the rest of the measurement path are made through cables and chamber feedthroughs. To protect personnel during the tests, NASA controls all tests from the third location, a control room located about 790 m from the test site.
The data acquisition system has a matrix of 18 subsystems, and each subsystem has 32 channels, for a total of 576 available channels. In general, each channel path comprises a transducer (sensor), signal conditioners (SC), analog-to-digital convertors (ADCs), and the interconnecting cables.
The tests are run by software that can sample and calculate averages and standard deviations for each measurement. These standard
deviations are considered here as the terms
accounting for random variation and will be
used as the Type A contributors to uncertainty
in the analysis.
For simplicity, the uncertainty tool estimates were based only on the effects that
were considered to be systematic (also called

Author
Mihaela Fulop
mihaela.fulop-1@nasa.gov
SGT, Inc., Metrology Services
NASA Glenn Research Center
Calibration Laboratory
MS 217, 21000 Brookpark Road
Cleveland, OH 44135

NCSLI Measure J. Meas. Sci. www.ncsli.org

TECHNICAL PAPERS

Error source | Specifications | Error limits, V | Error containment probability, % | Distribution | Divisor | Standard uncertainty, ui, V | Sensitivity coefficient, ci | Product of standard uncertainty and sensitivity coefficient, V
DC (gain, offset error) | ±0.10 % | 0.0001000 | 95 | Normal | 1.96 | 0.00005 | 100.000 | 0.00510
Quantization error | ±0.00015 V | 0.0001526 | 95 | Normal | 1.96 | 0.00008 | 1.000 | 0.00008
Common voltage error | -100 dB | 0.0000005 | 95 | Normal | 1.96 | 0.00000 | 100.000 | 0.00003
Crosstalk | -110 dB | 0.0000002 | 100 | Rectangular | 1.73 | 0.00000 | 100.000 | 0.00001
Nonlinearity | | 0.0000000 | 100 | Rectangular | 1.73 | 0.00000 | 100.000 | 0.00000
uADC: 0.00510 | Expanded UADC (95 % confidence): 0.01001

Table 1. Example of a module accuracy file.

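The tabular GUM workflow behind Table 1 (each error limit divided by its distribution divisor, multiplied by its sensitivity coefficient, then root-sum-squared) can be sketched in a few lines. This is an illustrative Python sketch, not the VBA tool itself; the row values are taken from Table 1.

```python
import math

# Each row of Table 1: (error limit in V, divisor, sensitivity coefficient)
rows = [
    (0.0001000, 1.96, 100.0),  # DC (gain, offset) error, normal, 95 %
    (0.0001526, 1.96, 1.0),    # quantization error, normal, 95 %
    (0.0000005, 1.96, 100.0),  # common voltage error, normal, 95 %
    (0.0000002, 1.73, 100.0),  # crosstalk, rectangular, 100 %
]

# standard uncertainty (limit / divisor) times sensitivity, per row
products = [limit / divisor * ci for limit, divisor, ci in rows]

# combined standard uncertainty of the module: root sum of squares
u_adc = math.sqrt(sum(p ** 2 for p in products))
```

The combined value is dominated by the DC gain/offset row, matching the uADC of about 0.0051 V shown in the table.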
the static MUA, or measurement capability). To find the expanded uncertainty for the measurement process, we combine the static MUA values with the Type A uncertainty components obtained as described earlier, following the rules in the Guide to the Expression of Uncertainty in Measurement (GUM) [1] and in the U.S. version of the guide [2].
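For uncorrelated components, that combination reduces to a root-sum-square with a Welch-Satterthwaite effective-degrees-of-freedom calculation. A minimal Python sketch, with illustrative function name and sample values not taken from the tool:

```python
import math

def combine(components):
    """components: (standard uncertainty, degrees of freedom) pairs.
    Returns the combined standard uncertainty and the Welch-Satterthwaite
    effective degrees of freedom, per the GUM."""
    u_c = math.sqrt(sum(u ** 2 for u, _ in components))
    nu_eff = u_c ** 4 / sum(u ** 4 / nu for u, nu in components)
    return u_c, nu_eff

# hypothetical static (systematic) components plus one Type A component
u_c, nu_eff = combine([(0.0051, 200), (0.0021, 30)])
```

The expanded uncertainty is then u_c multiplied by a coverage factor taken from the t-distribution at nu_eff for the chosen confidence level.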
2. Project Approach

The concepts and methods used to develop the MUA tool were consistent with NASA policies and with those found in the GUM. These established best practices are illustrated in the NASA Measurement Quality Assurance Handbook, Annex 3: Measurement and Uncertainty Analysis Principles and Methods [3].
The MUA task was divided into measurement disciplines (e.g., temperature, pressure, and vacuum). Each discipline was then divided into methods of measurement. For example, in the temperature discipline, the work can be divided into measurements using resistance temperature detector (RTD) transducers and measurements using thermocouples (TCs). For each method of measurement, there is a further hierarchy composed of three levels: systems, modules (nomenclature or class), and models or types. Figure 1 shows the hierarchy for the temperature discipline.
Figure 1. Measurement Uncertainty Analysis (MUA) work hierarchy for the temperature discipline (TC is thermocouple, RTD is resistance temperature detector, SC is signal conditioner, and ADC is analog-to-digital convertor).

A system is composed of several modules arranged in a linear sequence and connected by transfer functions. The modules have similarities that are given by nomenclature or class (e.g., TC, ADC, and "Cold junction"). The lowest hierarchy level is model/type. Models are members of the module family. Models or types are defined by the manufacturer, model, and type: for example, Hy-Cal Engineering 401B; Precision Filters, Inc., model 28608; and type E.
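The discipline/method/module/model hierarchy maps naturally onto a nested data structure. A hypothetical Python sketch (the tool itself keeps this organization in Excel template and accuracy files, not in code):

```python
# Hypothetical nested-dict encoding of the MUA work hierarchy
hierarchy = {
    "temperature": {                                  # discipline
        "thermocouple": {                             # method of measurement
            "modules": ["TC", "Cold junction", "SC", "ADC"],
            "models": {                               # models per module
                "TC": ["type E"],
                "Cold junction": ["Hy-Cal Engineering 401B"],
                "SC": ["Precision Filters 28608"],
            },
        },
    },
}

modules = hierarchy["temperature"]["thermocouple"]["modules"]
```

Walking this structure top-down mirrors how a user configures a system in the template: pick a discipline, a method, then a model for each module.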
The MUA tool is composed of several Excel files. The hierarchy just discussed dictates the organization of these files into two sets of Excel spreadsheet files: one set of files that matches the discipline and method of measurement and another set of files that structures data for modules and models. The first set of files contains the templates, and the latter set contains the module accuracy files. Since a module has different models, the module accuracy file contains separate worksheets for each model/type that could be used in a specific module. For example, the TC sensor accuracy file has different worksheets for the different TC types. Each of these worksheets contains the systematic error source contributions and other pertinent information for each model or


Figure 2. Screen capture of example temperature/thermocouple (TC) template.

Summary points

Point | Temperature, °C | Uncertainty, 95 % confidence, °C | Temperature of cold junction, Tref, °C
1 | 200 | ±3.881 | 24
2 | 201 | ±3.880 | 24
3 | 203 | ±3.877 | 24

Table 2. Screen capture of a record in the Measurement Uncertainty Analysis (MUA) tool.

type used. The accuracy files for the modules are kept in a different network location so that the information can be used by more than one main template configuration interface. Only super users (administrators with advanced knowledge of MUA theory) have the authority to change the module accuracy files; regular users cannot alter these files. The module accuracy file worksheets use a tabular form that follows the GUM [1] (Table 1). The expanded uncertainty for each model is found in a predefined cell.
The template file is where the system is defined and automatic calculations are performed. Users make entries in these files only and can mix and match different models in multiple permutations depending on the system architecture. A user can type specific inputs and environmental conditions and can pick the specific model or type for each module from drop-down lists. The tool provides full flexibility to optimize the system for minimum uncertainty and gives failure warnings if selections are not compatible with all modules in the system. For consistency, all templates look similar. At the beginning of each template there is a block diagram of all modules comprising the system, indicating each module's input and output. Immediately below each module in the block diagram there is a drop-down list that identifies all of the models for that module.
Template files have predefined tables for each module's uncertainty and error contributions. These tables are referred to as "template module uncertainty tables" and have columns similar to the tables in the module accuracy file. The rows in the template module uncertainty tables will be populated automatically based on customer entries as described below. The template module uncertainty tables have predefined cells for the module's output and for the module's expanded uncertainty. Examples of template module uncertainty tables are Tables 5, 6, 8, and 9. Figure 2 shows an example of a template.
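The failure warnings described above amount to validating each chosen model against a catalog of models allowed for its module. A simplified Python sketch of such a check (the names and the one-rule catalog are illustrative; the production tool is VBA in Excel):

```python
def check_selection(system_modules, selections, catalog):
    """Return warning strings for chosen models that are not cataloged
    for their module. catalog maps module name -> valid model names."""
    warnings = []
    for module in system_modules:
        model = selections.get(module)
        if model not in catalog.get(module, []):
            warnings.append(f"{module}: '{model}' is not a valid model choice")
    return warnings

catalog = {"TC": ["type E"], "SC": ["Precision Filters 28608"]}
msgs = check_selection(["TC", "SC"],
                       {"TC": "type E", "SC": "model X"}, catalog)
```

Here msgs contains a single warning, for the incompatible SC selection, while the valid TC choice passes silently.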
The template files also contain a few command buttons that activate code written in VBA for Excel. After the choices are set, a user can click the "Link with module accuracy files" command button. When this command button is activated, data from the module accuracy files and from the selected model worksheets are automatically copied into the template. Furthermore, activating this button causes an exchange of data between the selected accuracy files and the main page of the template file. As such, input data entered by the user in the template file are transferred automatically to the accuracy files, and the selected model's expanded uncertainty, effective degrees of freedom, and sensitivity coefficients for each module are automatically populated into the template module uncertainty tables. At the same time, the MUA template calculates the static expanded uncertainty and the degrees of freedom for one specific input set at a time for the whole system.
A record of the static expanded uncertainty and the corresponding degrees of freedom for different input sets is created when another command button is activated (Table 2). Thus, the tool enables users to view and graph the MUA versus different inputs for different analyses.
3. Updating the Measurement
Uncertainty Analysis Tool for New
Equipment Models

New equipment is commonly purchased, and existing systems are continually being updated. The MUA tool can be easily updated to account for a new model purchase for a module. Because modules are similar instruments, they should have similarities in their defined performance characteristics and in their manufacturers' specifications. Therefore, the easiest way to update the MUA tool is to copy an existing worksheet into the module accuracy files. For the "Link with module accuracy files" command button to work properly, the name of the new worksheet should match the name in the module/model drop-down list in the template spreadsheet. After the worksheet is named correctly, super users must change the error contributions to match the manufacturer's specifications for the new model. Care should be taken to ensure that all units of measure match those in the original spreadsheets.
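Because the link step matches worksheet names against drop-down entries, a quick name-consistency check can catch a mis-named worksheet before the button fails. A hypothetical Python sketch (the production code is VBA; the names are illustrative):

```python
def unmatched_names(worksheet_names, dropdown_entries):
    """Worksheets with no matching drop-down entry, and drop-down
    entries with no matching worksheet."""
    sheets, entries = set(worksheet_names), set(dropdown_entries)
    return sorted(sheets - entries), sorted(entries - sheets)

orphan_sheets, orphan_entries = unmatched_names(
    ["type E", "type K"], ["type E", "type T"])
```

Anything returned in either list indicates a worksheet or drop-down entry that would break the automated link.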
4. Applying the Measurement
Uncertainty Analysis Tool to Similar
Measurement Processes at Different
Facilities

For the same measurement disciplines and measurement methods, facilities often have similar data acquisition systems composed of similar modules. The differences usually lie in the specific models or types. Therefore, the MUA tool template can be sized to fit any similar measurement system by correcting all the module accuracy worksheets with all the models available at the site. This task should be done by super users, and care should be taken to ensure that all units of measure match those in the original spreadsheets. Because the "Link with module accuracy files" command button will automatically open the accuracy file workbooks from their network locations, the VBA code for this command button will also require minimal modifications to match the network locations of all the accuracy files at the new facility.
5. Validation of the Measurement
Uncertainty through Repeatable
Measurements

After the uncertainty is determined, it is critical to ensure its validity. By definition, the uncertainty is a range of values that is expected to contain the true value with a specified level of confidence. If the uncertainty is estimated correctly, measurement results obtained at different times should be consistent with the reported uncertainty. This ensures a reliable and repeatable measurement quality system. The estimated standard uncertainty will be compared with the standard deviation of a series of measurements for a test item so that the uncertainty of the measurement can be validated. Because of the costs involved in this endeavor, the series of measurement data will be mined from historical data for similar tests. Statistical process control will also be employed to make sure that the measuring system remains in control. Statistical process control and validation of the measurement uncertainty represent the second phase of this project and are not presented in this paper.
6. Conclusions

Although the Measurement Uncertainty Analysis (MUA) tool cannot be used as-is in every situation, the already developed templates and files provide significant time savings for all technical personnel responsible for estimating and reporting measurement uncertainty. The tool can be easily employed at any site that has data acquisition systems used for similar measurement disciplines and with similar methods of measurement, resulting in report standardization and avoiding duplication of work.
7. References

[1] JCGM, "Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement," JCGM 100:2008, 2008.
[2] ANSI/NCSLI, "U.S. Guide to the Expression of Uncertainty in Measurement," ANSI/NCSL Z540.2-1997 (R2012), 2012.

Figure 3. Thermocouple (TC) data acquisition system in the B2 (SC is signal conditioner and ADC is analog-to-digital convertor).

Analog-to-digital convertor gain, GADC | 100
Signal conditioner gain, G | 2
Delta temperature in the room during test, ΔT, °C | 5.5

Table 3. Specific conditions used.

[3] NASA, "Measurement Uncertainty Analysis Principles and Methods," Annex 3 of NASA Measurement Quality Assurance Handbook, NASA-HDBK-8739.19-3, 2010.
[4] G. Burns, M. Scroger, G. Strouse, M. Croarkin, and W. Guthrie, "Temperature-Electromotive Force Reference Functions and Tables for the Letter-Designated Thermocouple Types Based on the ITS-90," National Institute of Standards and Technology (NIST) Monograph 175, 1993.
[5] J. Nakos, "Uncertainty Analysis of Thermocouple Measurements Used in Normal and Abnormal Thermal Environment Experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site," Sandia National Laboratories, SAND2004-1023, 2004.
[6] ASTM, "Standard Specification and Temperature-Electromotive Force (emf) Tables for Standardized Thermocouples," ASTM E230, 2012.
8. Bibliography

EURAMET, "Calibration of Thermocouples," EURAMET cg-8, Version 2.0, European Association of National Metrology Institutes, October 2011.
NASA, "NASA Measurement Quality Assurance Handbook," NASA-HDBK-8739.19, 2010.
NCSLI, "Recommended Practices: Determining and Reporting Measurement Uncertainty," NCSL International RP-12, 2013.

A. Appendix: Measurement Uncertainty Analysis for Thermocouple Measurement at the B2 Facility

This appendix describes the measurement uncertainty analysis for a measurement of type-E thermocouples (TCs) with the data acquisition system at the Spacecraft Propulsion Research Facility (B2) at Glenn's Plum Brook Station campus. The analysis in this appendix follows the system approach in "Measurement Uncertainty Analysis Principles and Methods," Annex 3 of a NASA handbook [3]. According to Annex 3, a system is composed of modules arranged in series. Because of this series arrangement, the output of any module and its associated uncertainty represents the input of the next module.
The TC measurement path determined with NASA's approach [3] for the data acquisition system in the B2 is captured in Fig. 3. The uncertainty analysis is documented for a system input of 200 °C, a reference temperature of 24 °C, and the specific conditions listed in Table 3. It is assumed that the mounting of the TC does not introduce any errors in the circuit. Table 4 is a list of the components in the B2 TC measurement system. The modules for the B2 type-E TC measurement system are described in Sections A.1 to A.4.
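The series arrangement can be made concrete with the gains of Table 3: each module's nominal output is the next module's input. A Python sketch of the error-free chain, using the M1 output of 11.987 mV derived later in this appendix (the function is illustrative, not part of the tool):

```python
# Gains from Table 3
G_SC, G_ADC = 2, 100

def chain(y1_mV):
    """Nominal (error-free) B2 TC signal path, M1 output through M4 input."""
    y2 = G_SC * y1_mV / 1000            # M2 output: SC gain applied, mV -> V
    y3 = G_ADC * y2                     # M3 output: digitized voltage, V
    y3_mV = 1000 * y3 / (G_SC * G_ADC)  # M4 input: back to mV, gains removed
    return y2, y3, y3_mV

y2, y3, y3_mV = chain(11.987)
```

The computed y2 of 0.023974 V matches the M2 module output reported in Table 6, and the M4 input recovers the original 11.987 mV for the temperature polynomial.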
A.1. Seebeck Temperature Module (M1)

Because of the physics behind it, the TC measurement is a differential measurement governed by the Seebeck effect. Therefore, it makes sense to summarize all the components that are part of the Seebeck effect equation in a unique module, named for this analysis the Seebeck Temperature Module.


Element component | Manufacturer | Model | Notes
Type-E TC | OMEGA Engineering Inc. | TCPVCE24180, -320 to 250 °F |
Type-E extension cable | OMEGA | | 25 ft long
TC connector block on chamber platform | Marlin Manufacturing Corporation | Type-E female barrel connector |
Type-E extension cable | | | 50 ft long; 4 cables; 12 pairs in each cable
Type-E vacuum feedthrough | | |
Type-E extension cable | | | 100 ft long; 4 cables; 12 pairs of wires in each cable
TC cold junction reference | Hy-Cal Engineering | 401B |
27-pair copper wire | | | 40 ft long
Copper terminals | | |
Copper wires | | S12/S2/C | #20; 200 ft long
Signal-conditioning/filter amplifier | Precision Filters | 28608B; gain = 1 |
Analog-to-digital convertor | DSPCon | 9843001004, digitizer | Gain = 100
Calculator channel processor | DSPCon | CCP002 TEFLT |

Table 4. B2 thermocouple (TC) data acquisition system.

Figure 4. Seebeck Temperature Module (Y1 = measured voltage).

For the B2 TC measurement system, the Seebeck Temperature Module (M1) comprises a type-E TC, a cold reference junction, the alloy extension cables, the connectors, and the copper cables to the SC input. The cold junction reference is a 401B Hy-Cal uniform-temperature reference block. The Seebeck effect equation shows that if the wires are homogeneous, the measured temperature T is a function of the measured voltage Y1, the Seebeck coefficient of the wire S, and the temperature of the reference junction Tref. Figure 4 shows a block diagram of the Seebeck Temperature Module.
The input for the M1 module is T and the output is Y1. The M1 output equation (transfer function) is

    Y_1 = S (T + e_T + e_connectors + e_extension wires) - S_ref (T_ref + e_ref) ,    (A.1)

where
Y_1 is the measured voltage, mV;
S is the average Seebeck coefficient, µV/°C, at T;
S_ref is the Seebeck coefficient, µV/°C, at the cold junction temperature T_ref;
T is the measured temperature, °C; and
T_ref is the temperature of the cold junction, °C.

The components of e_T (error due to the thermocouples), e_connectors (error due to connectors), and e_extension wires (error due to extension wires) are found in the second column of Table 5. The thermoelectric voltage E of each type of TC is a function of temperature as in NIST Ref. [4],

    E = Σ_{i=0..n} c_i T^i ,    E_ref = Σ_{i=0..n} c_i T_ref^i ,    (A.2)

where the c_i are coefficients defined in [4]. The Seebeck coefficients are determined by taking the first derivatives of Eq. (A.2) with respect to temperature,

    S = ∂E/∂T ,    S_ref = ∂E_ref/∂T_ref .    (A.3)

The error equation for the M1 module can be written as

    δY_1 = c_T e_T + c_Tref e_Tref + c_connectors e_connectors + c_extension wires e_extension wires ,    (A.4)

where the c_i are the sensitivity coefficients obtained in Eqs. (A.5) to (A.8):

    c_T = ∂Y_1/∂T = S ,    (A.5)
    c_Tref = ∂Y_1/∂T_ref = -S_ref ,    (A.6)
    c_connectors = ∂Y_1/∂e_connectors = S ,    (A.7)
    c_extension wires = ∂Y_1/∂e_extension wires = S .    (A.8)

The uncertainty for the M1 module is obtained by applying the variance operator to Eq. (A.4). If it is assumed that there are no correlations between error sources, the uncertainty u_Y1 in the M1 module output is

    u_Y1 = [ (c_T u_T)^2 + (c_Tref u_Tref)^2 + (c_connectors u_connectors)^2 + (c_extension wires u_extension wires)^2 ]^{1/2} .    (A.9)

A.2. Signal Conditioner Module (M2)

The second module (M2) in the TC measurement system for the B2 is a Precision Filters model 28608B signal conditioner (SC). The input for this module is Y1 and its output is Y2. The output equation (transfer function) for this module is

    Y_2 = G (Y_1 + e_SC) ,    (A.10)

where
G is the gain, V/mV; and
Y_2 is the SC output, V.

Here, e_SC is the SC combined error expressed as refer to input (RTI). Because this error was expressed as RTI, it needs to be multiplied by G to convert it to refer to output (RTO). The e_SC components are captured in the second column of Table 7.
The error equation for this module is

    δY_2 = c_Y1 e_Y1 + c_SC e_SC ,    (A.11)

and the sensitivity coefficients for Eq. (A.11) are

    c_Y1 = ∂Y_2/∂Y_1 = G ,    c_SC = ∂Y_2/∂e_SC = G .    (A.12)

The uncertainty for the M2 module output also is obtained by applying the variance operator to the error in Eq. (A.11). If we assume that there are no correlations between errors, this yields

    u_Y2 = [ (c_Y1 u_Y1)^2 + (c_SC u_SC)^2 ]^{1/2} .    (A.13)

A.3. Analog-to-Digital Convertor/Data Processor Module (M3)

The third module (M3) in the TC measurement system for the B2 is a DSPCon model 9843, 16-bit, ±10 V reference ADC and data processor. The ADC converts the continuous analog output signal coming from the SC to a digital signal (binary code). Then, the data processor converts the number of counts to a voltage. The input for this module is Y2 and its output is Y3. The M3 module transfer function is

    Y_3 = G_ADC Y_2 + e_ADC ,    (A.14)

where
G_ADC is the ADC gain;
Y_2 is the SC output, V;
Y_3 is the output (V) corresponding to ADC counts for an ADC input of Y_2; and
e_ADC is the error due to the ADC; the components of this error are captured in the second column of Table 1.

The error equation for the M3 module is

    δY_3 = c_Y2 e_Y2 + c_ADC e_ADC ,    (A.15)

where the sensitivity coefficients are

    c_Y2 = ∂Y_3/∂Y_2 = G_ADC ,    c_ADC = ∂Y_3/∂e_ADC = 1 .    (A.16)

There is no correlation between error sources; therefore, the uncertainty in the M3 output is

    u_Y3 = [ (c_Y2 u_Y2)^2 + (c_ADC u_ADC)^2 ]^{1/2} .    (A.17)

A.4. Data Processor Module (M4)

Following the measuring path, the last module in the B2 system is the Data Processor Module (M4). This module takes the quantized M3 output, converts it to millivolts, scales it down by the total gain, and computes the temperature value by using a polynomial regression and regression coefficients from NIST Ref. [4]. The input for this module is Y3 converted to millivolts and divided by the total gain, and its output is T_computed.
The computed temperature in degrees Celsius is obtained as

    T_computed = a_0 + a_1 [y_3/(G G_ADC)] + a_2 [y_3/(G G_ADC)]^2 + ... + a_n [y_3/(G G_ADC)]^n + e_regression ,    (A.18)

where y_3 is Y_3 converted to millivolts and a_0, a_1, ..., a_9 are obtained from [4].
The module error equation is

    δT_computed = c_y3 e_Y3 + c_regression e_regression ,    (A.19)

and the sensitivity coefficients are

    c_y3 = ∂T_computed/∂y_3 = [ a_1 + 2 a_2 y_3/(G G_ADC) + 3 a_3 (y_3/(G G_ADC))^2 + ... + 9 a_9 (y_3/(G G_ADC))^8 ] / (G G_ADC) ,
    c_regression = ∂T_computed/∂e_regression = 1 .    (A.20)

The uncertainty in the M4 module output is

    u_Tcomputed = [ (c_y3 u_Y3)^2 + (c_regression u_regression)^2 ]^{1/2} .    (A.21)

The following subsections describe the error sources for each identified module in the TC measurement path for the B2.
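As a cross-check of Eq. (A.9), the error limits and Seebeck sensitivities of Table 5 reproduce the M1 module numbers quoted in this appendix (standard uncertainty near 93.14 µV, expanded near 182.55 µV). An illustrative Python sketch:

```python
import math

# (error limit in degC / divisor, Seebeck sensitivity in uV/degC),
# values taken from Table 5 of this appendix
components = [
    (1.700 / 1.96, 74.03),  # TC accuracy at 200 degC
    (0.621 / 1.96, 60.86),  # cold junction reference at 24 degC
    (0.200 / 1.96, 74.03),  # temperature gradient across connectors
    (1.700 / 1.96, 74.03),  # extension-cable accuracy
]

# Eq. (A.9): root sum of squares of (standard uncertainty x sensitivity)
u_y1 = math.sqrt(sum((u * c) ** 2 for u, c in components))  # uV
U_y1 = 1.96 * u_y1  # expanded (95 %), uV
```

The zero-valued contributors of Table 5 (inhomogeneity, mounting, cable length) are omitted because they add nothing to the sum.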

MUA Results for Seebeck Temperature Module (M1)
Module 1 (TC/alloy wires/connectors/cold junction)

Error contributor | Error source | Error limits | Containment, % | Distribution | Divisor | Standard uncertainty, ui | Sensitivity coefficient, ci | Product of ui and ci, µV
TC (e_T) | TC accuracy | ±1.700 °C | 95 | Normal | 1.96 | 0.867 °C | 74.030 µV/°C | 64.2106
TC (e_T) | Inhomogeneity | 0.000 °C | 100 | Rectangular | 1.73 | 0.000 °C | 74.030 µV/°C | 0.0000
TC (e_T) | Error from TC mounting | 0.000 µV | 95 | Normal | 1.96 | 0.0000 V | 1.0000 | 0.0000
Cold (reference) junction | Cold junction expanded uncertainty | ±0.621 °C | 95 | Normal | 1.96 | 0.317 °C | 60.860 µV/°C | 19.2870
TC connectors (e_connectors) | Maximum temperature gradient across connectors | ±0.200 °C | 95 | Normal | 1.96 | 0.102 °C | 74.030 µV/°C | 7.5542
TC extension cables (e_extension wires) | Accuracy | ±1.700 °C | 95 | Normal | 1.96 | 0.867 °C | 74.030 µV/°C | 64.2106
TC extension cables (e_extension wires) | Accuracy due to length of extension wires | 0.000 °C | 95 | Normal | 1.96 | 0.000 °C | 74.030 µV/°C | 0.0000
TC extension cables (e_extension wires) | Inhomogeneity | 0.000 °C | 100 | Rectangular | 1.73 | 0.000 °C | 74.030 µV/°C | 0.0000
M1 module uncertainty (95 %, 2σ): 182.5544 µV | M1 module output, Y1: 11.987090 mV

Table 5. Uncertainty, error components, and output for M1 (TC is thermocouple).

Figure 5. Error contributors for M1 module.

A.5. Error Sources in Seebeck Temperature Module (M1)

The error sources in the Seebeck Temperature Module are divided into those due to the TCs, those due to the connectors and cables, and those due to the cold junction reference. These errors are captured in the fishbone diagram in Fig. 5. The uncertainty and output for the M1 module are captured in Table 5 and explained in the following subsections.

A.5.1 Errors Due to Thermocouple

For temperatures above 0 °C, the manufacturer's limits of error for a standard option type-E TC are ±1.7 °C or ±0.5 % of the reading, whichever is greater. For temperatures below 0 °C, the limits of error are ±1.7 °C or ±1 % of the reading, whichever is greater. Other errors related to the TCs are the TC installation errors and inhomogeneity, which are both assumed here to be negligible. For a temperature of 200 °C, the corresponding thermoelectric voltage of a type-E TC is 13.42 mV. For a temperature of 24 °C, the corresponding thermoelectric voltage of a type-E TC is 1.43 mV. Using Eq. (A.3), we obtained a sensitivity coefficient of 74.03 µV/°C. Therefore, the error limits (1σ) due to TC accuracy in units of voltage are computed to be 64.2106 µV.

MUA Results for M2 Module (Module 2)

Error contributor | Error source | Error limits | Containment, % | Distribution | Divisor | Standard uncertainty, ui | Sensitivity coefficient, ci | Product of ui and ci, µV
Signal conditioner | Signal conditioner expanded uncertainty | ±0.2267 mV | 95 | Normal | 1.96 | 0.1156 mV | 2 | 231.3807
M1 module | M1 module expanded uncertainty | ±182.5544 µV | 95 | Normal | 1.96 | 93.1417 µV | 2 | 186.2834
Noise | | | 95 | Normal | 1.00 | | 1 |
M2 module uncertainty (95 %, 2σ): 582.2175 µV | M2 module output, Y2: 0.02397 V

Table 6. Uncertainty, error components, and output for M2 (DOF is degrees of freedom).

Where the errors come from | Error source | Specifications | Error limits, mV | Containment probability, % | Distribution | Divisor | Standard uncertainty, ui, mV | Sensitivity coefficient, ci | Product of ui and ci, mV
Components from RTI | DC accuracy, % of setting | ±0.20 % | 0.0240 | 95 | Normal | 1.96 | 0.0122 | 1.00 | 0.0122
Components from RTI | Temperature coefficient | ±0.008 %/°C | 0.0053 | 95 | Normal | 1.96 | 0.00267 | 1.00 | 0.0027
Components from RTI | Noise RTI | 2.80 µV | 0.0196 | 95 | Normal | 1.96 | 0.0100 | 1.00 | 0.0100
Components from RTI | Offset drift RTI | 1 µV/°C | 0.0055 | 100 | Rectangular | 1.73 | 0.0032 | 1.00 | 0.0032
Components from RTO | Offset drift RTO | 0 µV/°C | 0.0000 | 100 | Rectangular | 1.73 | 0.0000 | 0.50 | 0.0000
Components from RTO | Noise RTO | 60.00 µV | 0.4200 | 95 | Normal | 1.96 | 0.2143 | 0.50 | 0.1071
Components from RTO | All-hostile crosstalk | -90 dB | 0.1581 | 95 | Normal | 1.96 | 0.0403 | 0.50 | 0.0202
uSC = 0.1102 mV | Expanded USC (95 % confidence) = 0.2267 mV

Table 7. Uncertainty contributions of the signal conditioner (RTI, refer to input; RTO, refer to output; DOF, degrees of freedom; SC, signal conditioner).

A.5.2 Errors Due to Connectors

The approximate error due to the connector is the same as the temperature difference ΔT across the connector [5]. These limits are assumed to be normally distributed with a confidence level of 95 %. For this analysis the total uncertainty was considered to be ΔT = ±0.2 °C.

A.5.3 Errors Due to Cables (Alloy and Copper)

The extension wires have tolerances in accordance with ASTM E230 [6]. According to this document, the standard tolerance is ±1.7 °C for a type-E extension wire. These limits are assumed to be normally distributed with a confidence level of 95 %. Very long wires, such as those used in the B2 data acquisition system, are also susceptible to noise. The system incorporates a low-pass filter with a low cut-off frequency and uses shielded wires to make sure unwanted noise is eliminated.

A.5.4 Errors Due to Cold Junction Reference

The cold junction is a 401B Hy-Cal uniform-temperature reference block. Manufacturer specifications are ±0.556 °C and are considered to be normally distributed with a confidence level of 95 %. Another contributor from the cold junction is a nonuniformity of ±0.278 °C, which is also considered to be normally distributed with a confidence level of 95 %. The expanded uncertainty for the cold junction reference with a confidence level of 95 % is ±0.621 °C.
Eq. (A.3) was used to obtain a sensitivity coefficient of 60.86 µV/°C. Therefore, the error limits (1σ) due to the cold junction reference in units of voltage were computed to be 19.287 µV.

A.6. Error Sources in the Signal Conditioner Module (M2)

The SC is a Precision Filters model 28608B. Table 6 lists the uncertainty and output of the M2 module. One contributor to the error in the M2 module is the uncertainty for the M1 module; this contribution comes from the uncertainty calculations for that module (Table 5). The contributions due to the SC itself are summarized in Table 7 and explained in the following subsections. For the SC used, some manufacturer specifications are expressed as RTI and some as RTO. An error expressed as RTI needs to be multiplied by the gain to convert it to RTO units.

A.6.1 DC Accuracy Error of the Signal Conditioner

The error limits for the direct-current (DC) accuracy were calculated from the manufacturer specification of ±0.2 % of the input voltage to the SC. For an input voltage of 11.9870 mV, this error is calculated to be ±0.024 mV. This error was considered to be normally distributed with a confidence level of 95 %.

A.6.2 Temperature Coefficient Error of the Signal Conditioner

Error due to the temperature coefficient was calculated from the manufacturer specification of ±0.008 %/°C. The error limits for a ΔT of 5.5 °C and an input of 11.9870 mV were calculated to be ±0.0053 mV. This was considered to be normally distributed with a confidence level of 95 %.

A.6.3 Noise RTI Error of the Signal Conditioner

The SC has a specification of 2.8 µV for the root-mean-square (RMS) noise at the input stage amplified by gain. According to manufacturer information, a crest factor of seven should be assumed for RTI noise. Therefore, the peak-to-peak maximum RTI noise was calculated to have an error limit of 2.8 µV x 7. This was assumed to be normally distributed with a confidence level of 95 %.

A.6.5 Offset Drift RTI Error of the Signal Conditioner

The error limit due to the offset drift RTI was calculated from manufacturer specifications of 1 µV/°C. For a change in room temperature of ΔT = 5.5 °C, this was calculated to be ±0.0055 mV. This error is affected by gain and was assumed to be normally distributed with a confidence level of 95 %.

A.6.6 Crosstalk Error of the Signal Conditioner

The manufacturer declared the error due to the crosstalk specifications to be -90 dB from DC to 100 kHz. The error due to crosstalk at the output of the SC was calculated by

    e_crosstalk = 5 V x 10^(-90/20) .    (A.22)

According to Table 3, the signal conditioner gain used in this analysis was 2. For this gain the maximum input voltage is 5 V. For a maximum input voltage of 5 V, the crosstalk error was calculated to be 0.1581 mV. This error is RTO and was assumed to be normally distributed with a confidence level of 95 %. To convert this error into an equivalent RTI error, it needs to be divided by the applied gain.
The SC also has a specification for the offset drift RTO, but this offset was corrected by calibration. Therefore, this error had no contribution to the SC uncertainty analysis.

A.7. Error Sources in the Analog-to-Digital Module (M3)

Table 8 lists the uncertainty, error components, and output for M3. As seen in this table, one error contributor to the error in the M3 module is the uncertainty for M2. The error contribution due to M2 comes from the uncertainty calculations for this module and is captured in Table 6. The contributions due to the ADC itself are summarized in Table 1 and explained in the following subsections.

A.7.1 DC Error of the Analog-to-Digital Convertor

The error limit due to the DC was calculated from manufacturer specifications of ±0.1 %. This error was assumed to be normally distributed with a confidence level of 95 %.

A.7.2 Quantizing Error of the Analog-to-Digital Convertor

The ADC used was a DSPCon digitizer model 9843001004. The resolution limit for the quantization error of the ±10 V ADC with a gain of 100 was 3.05 µV (10 V/(100 x 2^15)). The quantization error limit was half the resolution, or 1.52 µV. These errors were considered to be uniformly distributed with a confidence of 100 %.

A.7.3 Common Mode Rejection of the Analog-to-Digital Convertor

The common mode rejection ratio (CMRR) was specified as 100 dB (typical specification). It was defined as

    CMRR in dB = 20 log(CMRR) = 20 log( e_CMV G_ADC / error_CMV ) ,    (A.23)

where e_CMV is the common mode voltage. If assumptions of a maximum e_CMV = 0.05 V and a G_ADC = 100 are made, the estimated common mode voltage error is

    error_CMV = e_CMV G_ADC / 10^((CMRR in dB)/20) .    (A.24)

For e_CMV = 0.05 V and G_ADC = 100, error_CMV = 0.00005 V.

A.6.4 Noise RTO Error of the Signal Conditioner

Similarly, the SC has a specification of 60 V of the RMS noise RTO.


According to manufacturer information, a crest factor of seven should
be assumed for RTO noise. Therefore, the peak-to-peak maximum
RTO noise was calculated to have an error limit of 60 V x 7. This
was assumed to be distributed normally with a confidence of 95 %. To
convert this error into an equivalent RTI error, we need to divide its
value by the applied gain.

input voltage

A.7.4 Crosstalk, Channel to Channel

This specification is stated as 110 dB @1 kHz into 50 input impedance. If the maximum difference between channels is assumed to be 50
mV, the crosstalk error Vcrosstalk for a gain of 100 is 0.000912 V. These
limits are assumed to be distributed normally with a confidence of 95 %.
The only DC accuracy specifications provided by the manufacturer are for a
gain setting of 100.
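The quantization figures in Section A.7.2 follow directly from the range, gain, and bit depth. A quick evaluation of the quoted expression 10 V/(100 × 2^15) is sketched below; note that it comes out in microvolts:

```python
v_range = 10.0                              # converter range, V
gain = 100                                  # ADC gain setting
codes = 2 ** 15                             # resolution term used in the text
resolution_V = v_range / (gain * codes)     # ~3.05e-6 V, i.e. 3.05 uV referred to input
quant_error_V = resolution_V / 2            # half the resolution, ~1.52 uV
```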

NCSLI Measure J. Meas. Sci. www.ncsli.org

Table 8. Uncertainty, error components, and output for M3 (ADC, analog-to-digital convertor; DOF, degrees of freedom).

| Error contributor | Error source | Error limits, V | Error containment, % | Distribution | Divisor | Standard uncertainty ui, V | Sensitivity coefficient ci | ui × ci, V | DOF |
|---|---|---|---|---|---|---|---|---|---|
| ADC | ADC expanded uncertainty | 0.0100 | 95 | Normal | 1.96 | 0.0051 | 1.00 | 0.0051 | ∞ |
| M2 module | M2 output module uncertainty | 0.0006 | 95 | Normal | 1.96 | 0.0003 | 100.0 | 0.0297 | ∞ |

M3 module output, Y3: 2.39742 V. M3 output module uncertainty (95 %, 2σ): 0.0591 V.

Table 9. Uncertainty, error components, and output for M4.

| Error contributor | Error source | Error limits | Error containment, % | Distribution | Divisor | Standard uncertainty, ui | Sensitivity coefficient, ci | ui × ci, °C | DOF |
|---|---|---|---|---|---|---|---|---|---|
| Data processor | Conversion of M3 uncertainty | 0.2953 mV | 95 | Normal | 1.96 | 0.1507 mV | 13.5 °C/mV | 2.035 | ∞ |
| Regression | Regression | 0.06 °C | 95 | Normal | 1.96 | 0.03 °C | 1.00 | 0.031 | ∞ |

M4 module output, Tcomputed: 200.001 °C. M4 module uncertainty (95 %, 2σ): 3.990 °C.
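The module output uncertainties in Tables 8 and 9 combine by root-sum-square of the ui × ci products, expanded by 1.96 for 95 % coverage. A sketch using the Table 8 values:

```python
import math

products_V = [0.0051, 0.0297]   # ui * ci for the ADC and M2 rows of Table 8, V
u_combined = math.sqrt(sum(p * p for p in products_V))   # combined standard uncertainty
U_95 = 1.96 * u_combined        # ~0.0591 V, the M3 output module uncertainty
```

The same combination applied to the Table 9 products reproduces the 3.990 °C M4 uncertainty.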


A.8. Error Sources in the Data Processor Module (M4)

Table 9 lists the uncertainty, error components, and output for M4.
The error contributions in Table 9 are the error due to regression and
the error propagated from the modified output of the M3 module. The M3
output was converted to millivolts and scaled by dividing it by the
total gain. The regression errors were taken from [4] and are assumed
to be normally distributed with a confidence level of 95 %.
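Scaling the M3 output uncertainty back to input-referred millivolts, as described above, reproduces the Table 9 error limit if a total gain of 200 is assumed. That gain value is inferred here from the ratio of the M3 output (2.39742 V) to the SC input (11.9870 mV), not stated in this section:

```python
U_M3_V = 0.0591                           # expanded M3 output uncertainty, V (Table 8)
total_gain = 200                          # assumed total gain (2.39742 V / 11.9870 mV)
U_input_mV = U_M3_V / total_gain * 1e3    # ~0.2955 mV, cf. 0.2953 mV in Table 9
```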
A.9. Computing System Output Uncertainty

According to [3], the system output uncertainty is equal to the
output uncertainty of the final module, which carries the uncertainty
components from all the previous modules, as described in Section
A.5. For T = 200 °C, Tref = 24 °C, and the conditions in Table 3, the
output standard uncertainty is approximately 2.0 °C (1σ).

The system output uncertainty and the degrees of freedom can be used
to find the confidence limits that are expected to contain the true
value with the specified confidence level:

    Tcomputed ± u_Tcomputed × t_(α/2,ν),    (A.25)

where u_Tcomputed is the standard uncertainty in the system output (1σ)
and t_(α/2,ν) is the Student's t-distribution value, with α = 1 − p,
p the confidence-level probability, and ν the degrees of freedom.

For this analysis, a confidence level of 95 % (p = 0.95) was used
in accordance with GUM guidelines. The degrees of freedom are
infinite (ν = ∞), so t_(α/2,ν) = 1.96. With these assumptions, the
confidence limits were computed to be 200 ± 3.9 °C.
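The confidence-limit computation in this section reduces to a one-line expansion. The sketch below uses only values quoted in the text; with ν = ∞, the 95 % t-value is 1.96:

```python
T = 200.0                         # computed temperature, deg C
U_95 = 3.990                      # expanded uncertainty (95 %, 2 sigma), deg C
t_95 = 1.96                       # Student's t for p = 0.95, nu = infinity
u_std = U_95 / t_95               # ~2.04 deg C standard output uncertainty
limits = (T - U_95, T + U_95)     # ~(196.0, 204.0) deg C confidence interval
```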

