Physics in Canada / La Physique au Canada (Transcription)
Vol. 64 No. 2
Physics in Canada
La Physique au Canada
APRIL-JUNE (SPRING)
AVRIL À JUIN (PRINTEMPS)
HIGH PERFORMANCE COMPUTING (HPC)
LE CALCUL DE HAUTE PERFORMANCE (CHP)
GUEST EDITORS: MARK WHITMORE, P.PHYS., U. MANITOBA AND
GORDON DRAKE, P.PHYS., U. WINDSOR
2008
2008
Serving the Canadian
physics community
since 1945 /
Servant la communauté
de physique
depuis 1945
Canadian Association
of Physicists /
Association canadienne
des physiciens et
physiciennes
www.cap.ca
Vol. 64 No. 2 (April-June (Spring) 2008 / avril à juin (printemps) 2008)
FEATURES / ARTICLES DE FOND
39  Foreword - "High Performance Computing (HPC)", by M. Whitmore and G. Drake
40  Préface - "Le calcul de haute performance (CHP)", par M. Whitmore et G. Drake
41  High Performance Computing Technologies, by Rob Simmonds
47  Astrophysical Jets, by David A. Clarke et al.
55  Finite Element Analysis in Solid Mechanics: Issues and Trends, by Nader G. Zamani
59  Supercooled Liquids and Supercomputers, by Ivan Saika-Voivod and Peter H. Poole
67  Quantum Monte Carlo Methods for Nuclei, by Robert B. Wiringa
75  The Next Canadian Regional Climate Model, by Ayrton Zadra et al.
85  High Performance Computing in Canada: The Early Chapter, by Allan B. MacIsaac and Mark Whitmore

EDUCATION / ÉDUCATION
46  PhD Physics Degrees Awarded at McMaster University, Dec. 2006 to Nov. 2007 (cont'd from Jan.-Apr. 08 PiC) / Doctorats en physique décernés à l'Université McMaster, déc. 2006 à nov. 2007 (suite du PaC de jan. à avr. 08)
46  Congratulations : Paul Corkum, Art McDonald, Barth Netterfield, and Carl Svensson
Advertising Rates and Specifications (effective January 2008) can be found on the PiC website
(www.cap.ca - Physics in Canada).
Les tarifs publicitaires et dimensions (en vigueur depuis janvier 2008) se trouvent sur le site internet de
La Physique au Canada (www.cap.ca - La Physique au Canada).
Cover / Couverture :
Example of a Global Environmental Multiscale (GEM) grid with a higher-resolution region within the thick black lines. Inset: the seven GEM limited-area model domains used in the regional climate simulation proposed in the InterContinental-scale Transferability Study (ICTS) (for details see Zadra et al., p. 74-83).
Exemple d’une grille du modèle de l’environnement mondial à échelles multiples (GEM) avec une résolution plus élevée dans la région bordée d’une ligne noire épaisse. Carte en cartouche : les sept zones GEM à surface limitée utilisées dans le projet de simulation du climat régional qui s’appelle ICTS (InterContinental-scale Transferability Study, étude de transférabilité des données à l’échelle intercontinentale) (pour plus de détails, voir Zadra et al., p. 74-83).
DEPARTMENTS / DÉPARTEMENTS
54  Congratulations : André Bandrauk and Paul Corkum
74  News and Congratulations
    - New Director for Canada's Perimeter Institute
    - Dr. Richard Taylor inducted into Canada's Science and Engineering Hall of Fame
    - Raymond Laflamme honoured with "Premier's Discovery Award"
84  Departmental, Sustaining, and Corporate-Institutional Members / Membres départementaux, de soutien, et corporatifs-institutionnels
89  In Memoriam : Ralph Nicholls (1926-2008), Tapan Kumar Bose (1938-2008), Yoginder Joshi (1938-2008), Barry Wallbank (n/a - 2008)
92  Books Received / Livres reçus
93  Book Reviews / Critiques de livres
94  Employment Opportunities / Postes d'emploi
The Journal of the Canadian Association of
Physicists
La revue de l'Association canadienne des physiciens et physiciennes
ISSN 0031-9147
EDITORIAL BOARD / COMITÉ DE RÉDACTION
Editor / Rédacteur
Béla Joós, PPhys
Physics Department, University of Ottawa
150 Louis Pasteur Avenue
Ottawa, Ontario K1N 6N5
(613) 562-5758; Fax:(613) 562-5190
e-mail: [email protected]
Associate Editor / Rédactrice associée
Managing / Administration
Francine M. Ford
c/o CAP/ACP
Book Review Editor / Rédacteur à la critique de livres
Richard Hodgson, PPhys
c/o CAP / ACP
Suite.Bur. 112, Imm. McDonald Bldg., Univ. of / d' Ottawa,
150 Louis Pasteur, Ottawa, Ontario K1N 6N5
(613) 562-5800 x6964; Fax: (613) 562-5190
Email: [email protected]
Advertising Manager / Directeur de la publicité
Greg Schinn
EXFO Electro-Optical Engineering Inc.
400 av. Godin
Quebec (QC) G1M 2K2
(418) 683-0913 ext. 3230
e-mail: [email protected]
Board Members / Membres du comité :
René Roy, phys
Département de physique, de génie physique et d’optique
Université Laval
Cité Universitaire, Québec G1K 7P4
(418) 656-2655; Fax: (418) 656-2040
Email: [email protected]
David J. Lockwood, PPhys
Institute for Microstructural Sciences
National Research Council (M-36)
Montreal Rd., Ottawa, Ontario K1A 0R6
(613) 993-9614; Fax: (613) 993-6486
Email: [email protected]
Tapash Chakraborty
Canada Research Chair Professor, Dept. of Physics and Astronomy
University of Manitoba, 223 Allen Building
Winnipeg, Manitoba R3T 2N2
(204) 474-7041; Fax: (204) 474-7622
Email: [email protected]
Canadian Association of Physicists (CAP)
Association canadienne des physiciens et physiciennes (ACP)
The Canadian Association of Physicists was founded in 1945 as a non-profit association
representing the interests of Canadian physicists. The CAP is a broadly-based national
network of physicists working in Canadian educational, industrial, and research settings. We are a strong and effective advocacy group for support of, and excellence in,
physics research and education. We represent the voice of Canadian physicists to government, granting agencies, and many international scientific societies. We are an enthusiastic sponsor of events and activities promoting Canadian physics and physicists,
including the CAP's annual congress and national physics journal. We are proud to offer
and continually enhance our web site as a key resource for individuals pursuing careers
in physics and physics education. Details of the many activities of the Association can be
found at http://www.cap.ca . Membership application forms are also available in the membership section of that website.
L'Association canadienne des physiciens et physiciennes a été fondée en 1946 comme
une association à but non-lucratif représentant les intérêts des physicien(ne)s canadien(ne)s. L’ACP est un vaste regroupement de physiciens oeuvrant dans les milieux canadiens de l'éducation, de l'industrie et de la recherche. Nous constituons un groupe de
pression solide et efficace, ayant pour objectif le soutien de la recherche et de l'éducation
en physique, et leur excellence. Nous sommes le porte-parole des physiciens canadiens
face au gouvernement, aux organismes subventionnaires et à plusieurs sociétés scientifiques internationales. Nous nous faisons le promoteur enthousiaste d'événements et
d'activités mettant à l'avant-scène la physique et les physiciens canadiens, en particulier
le congrès annuel et la revue de l'Association. Nous sommes fiers d'offrir et de développer continuellement notre site Web pour en faire une ressource-clé pour ceux qui poursuivent leur carrière en physique et dans l'enseignement de la physique. Vous pouvez
trouver les renseignements concernant les nombreuses activités de l’ACP à
http://www.cap.ca. Les formulaires d’adhésion sont aussi disponibles dans la rubrique
“Adhésion” sur ce site.
Normand Mousseau
Chaire de recherche du Canada, Département de physique
Université de Montréal, C.P. 6128, Succ. centre-ville
Montréal, Québec H3C 3J7
(514) 343-6614; Fax: (514) 343-2071
Email: [email protected]
Michael Steinitz, PPhys
Department of Physics
St. Francis Xavier University, P.O. Box 5000
Antigonish, Nova Scotia B2G 2W5
(902) 867-3909; Fax: (902) 867-2414
Email: [email protected]
ANNUAL SUBSCRIPTION / ABONNEMENT ANNUEL :
$40.00 Cdn + GST or HST (Cdn addresses),
$40.00 US (US addresses)
$45.00 US (other/foreign addresses)
Advertising, Subscriptions, Change of Address/
Publicité, abonnement, changement d'adresse:
Canadian Association of Physicists /
Association canadienne des physiciens et physiciennes,
Suite/Bureau 112, Imm. McDonald Bldg., Univ. of/d' Ottawa,
150 Louis Pasteur, Ottawa, Ontario K1N 6N5
Phone/ Tél: (613) 562-5614; Fax/Téléc. : (613) 562-5615
e-mail/courriel : [email protected]
Website/Internet : http://www.cap.ca
Canadian Publication Product Sales Agreement No. 0484202/
Numéro de convention pour les envois de publications canadiennes :
0484202
© 2008 CAP/ACP
All rights reserved / Tous droits de reproduction réservés
WWW.CAP.CA
(select Physics in Canada /
Option : La Physique au Canada)
ÉDITORIAL
HIGH PERFORMANCE COMPUTING (HPC)
Any reader of Physics in Canada is well aware of
the incredible advances in computing and communications that have occurred during our lifetimes, and how they have revolutionized, not
only science, but our daily lives as well. There are many examples that illustrate this. In less than 40 years, our common
tools have changed from slide rules to massively parallel
computer systems, connected across the country by networks with unimaginable capacity and speeds. In barely
20 years, memory costs have gone from roughly $1,000
for a 20 MB disk to $10 for a 2 GB stick, a reduction in
per unit cost of a factor of 10,000, or even more if you
choose to incorporate general inflation in the calculation.
We are all familiar with these kinds of numbers, but it is
still worth pausing from time to time to reflect on just how
incredible they are -- and to speculate on how far and fast
these trends will continue.
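For a quick sanity check on that factor (our own back-of-envelope arithmetic, not part of the original text), the per-megabyte costs quoted above work out as
\[
\frac{\$1000 / 20\,\mathrm{MB}}{\$10 / 2048\,\mathrm{MB}} \;\approx\; \frac{\$50\ \text{per MB}}{\$0.005\ \text{per MB}} \;\approx\; 10^{4},
\]
consistent with the factor of 10,000 cited.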
Our interest in this issue of Physics in Canada is in the
realm of the big systems, or high performance computing
(HPC) as it has come to be known in recent years. Our primary goals are to summarize the computing systems and
technologies that are available to us now, and to illustrate
some of the breadth of the physics that is being done with
them. Our additional goals are to describe the Canadian
journey in HPC, with emphasis on the process and
achievements of the past decade, and point potential users
to the access points for the resources that are now available.
Our issue opens with an overview of HPC technologies,
their components, and how to choose different ones for
different applications. It includes a description of grid
services used to coordinate access to computers distributed around the world. Our issue closes with the story of
HPC in Canada to date, how our various resources are
organized and available nationally, and the emergence of a
new era and a new organization, Compute Canada. This
story has been one of probably unparalleled cooperation amongst interested scientists across the country, and
tremendous support from organizations including
NSERC, the National Research Council of Canada, the
Canada Foundation for Innovation, CANARIE, provincial
funding organizations, and numerous computer manufacturers who have provided generously of both financial
support and time. Above all else, the key to success has
been the cooperation across all these sectors.
Between these two "bookends" of this issue, we have a collection of five papers illustrating the techniques of HPC, and applications to different sciences. They vary in
both style and content, but they have important common
and complementary features. They are all intended to
appeal to a broad audience of physicists with different
areas of expertise. They range from the enormous scales of
astrophysics, to the minute scales of nuclei. They cover
condensed matter physics, equilibrium and non-equilibrium physics, the finite-element analysis of engineering,
astrophysics, nuclear physics, and the ever-improving climate modeling, in particular the Canadian Regional
Climate Model. Some emphasize technique, some emphasize results, and some illustrate the place of simulations
within the broader spectrum from "theory" to "experiment".
A major goal of all academic work is not just to predict our
future as a planet and society, but to control our future.
Computers play a major role in this enterprise because
they enable an ever more detailed modeling of the physical world in which we find ourselves, and a cataloguing of
its rich diversity (such as the human genome). Recent advances are analogous to suddenly having at our disposal a microscope or telescope that is orders of magnitude more powerful than what was previously available, and so
we can see further and do more to control our own destiny
than was previously possible. The Canadian Regional
Climate Model is an obvious example of this, but all the
articles illustrate in one way or another how computers are
playing an increasingly important role as an extension of
our own senses to monitor and control the world in which
we live. The pace of development will only quicken in the
future, and the Canadian academic community has an
important role to play at the leading edge of these developments.
We hope that you find this collection of articles enjoyable
and informative. For those of you who have not been part
of the exciting Canadian HPC story, we invite you to
reflect on what a diverse community can accomplish when
it works tirelessly and cooperatively towards a common
cause.
Mark Whitmore, University of Manitoba
Gordon Drake, University of Windsor
Guest Editors, Physics in Canada
Comments of readers on this editorial are more than
welcome.
Mark Whitmore <[email protected]>, Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, R3T 2N2.
Gordon Drake <[email protected]>, Department of Physics, University of Windsor, Windsor, Ontario, N9B 3P4.
The contents of this journal, including the views expressed above, do not necessarily represent the views or policies of the Canadian Association of Physicists. Le contenu de cette revue, ainsi que les opinions exprimées cidessus, ne représentent pas nécessairement les opinions et les politiques de l’Association canadienne des physiciens et des physiciennes.
EDITORIAL
LE CALCUL DE HAUTE PERFORMANCE (CHP)
Tout lecteur de La Physique au Canada est bien au fait
des incroyables progrès survenus en informatique et
en communication pendant notre vie, et de la manière
dont ces progrès ont révolutionné non seulement la
science, mais aussi notre vie quotidienne, ce que tant d’exemples illustrent. En moins de 40 ans, nos outils courants ont
changé, passant des règles à calcul aux systèmes d’ordinateurs
ultraparallèles qui forment à l’échelle du pays des réseaux aux
capacités et aux vitesses inimaginables. En 20 ans à peine, le
coût de la mémoire est passé d’environ 1 000 $ pour un disque
de 20 Mo à 10 $ pour une clé USB de 2 Go, soit un facteur de diminution de 10 000 du coût de l’unité, ou même plus si l’on tient compte de l’inflation générale. Nous connaissons
tous de tels chiffres, mais il vaut quand même la peine de s’arrêter de temps à autre pour se rendre compte à quel point ils
sont incroyables et pour conjecturer quant à l’aboutissement de
ces tendances et au rythme auquel elles nous y mènent.
Le thème du présent numéro de La Physique au Canada est le
domaine des gros systèmes, ou le calcul de haute performance
(CHP), comme on en est venu à l’appeler ces dernières années.
Nos buts principaux sont de résumer les systèmes et technologies informatiques dont nous disposons aujourd’hui et d’illustrer une certaine ampleur de la physique à laquelle on les
emploie. Nous voulons aussi décrire la démarche canadienne
en CHP, en mettant l’accent sur le processus et les réalisations
de la dernière décennie, et orienter les utilisateurs éventuels
vers les points d’accès des ressources dont on dispose actuellement.
Ce numéro s’ouvre sur un aperçu des technologies de CHP, de
leurs composantes et de la façon d’en choisir certaines pour des
applications différentes. Il inclut une description des services
de réseau servant à coordonner l’accès aux ordinateurs disséminés autour du globe. Le dernier article du numéro décrit
l’histoire du CHP au Canada à ce jour, le mode d’organisation
de nos diverses ressources et leur disponibilité à l’échelle
nationale, ainsi que l’aube d’une nouvelle ère et d’un nouvel
organisme : Calcul Canada. Cet article décrit la coopération,
probablement inégalée, entre des scientifiques intéressés à
l’échelle du pays et le soutien extraordinaire de divers organismes, dont le Conseil de recherches en sciences naturelles et
en génie, le Conseil national de recherches du Canada, la
Fondation canadienne pour l’innovation, CANARIE, des
organismes de financement provinciaux et bien des fabricants
d’ordinateurs qui ont contribué généreusement à la fois un soutien financier et leur temps. Plus que toute autre chose, la clé de
la réussite a été la coopération entre tous ces secteurs.
Les premier et dernier articles de ce numéro encadrent une
série de cinq documents illustrant les techniques de CHP et leur
application à différentes sciences. Ceux-ci varient tant par le
style que par le contenu, mais ils comportent d’importants caractères courants et complémentaires. Ils visent tous à séduire
un vaste auditoire de physiciens de différents domaines d’expertise. Ils vont de l’échelle immense de l’astrophysique à
celle, infime, des noyaux. Ils traitent de physique de la matière
condensée, de physique en équilibre et hors d’équilibre,
d’analyse par éléments finis du génie, d’astrophysique, de
physique nucléaire et de modélisation climatique en constante
amélioration, notamment du Modèle régional canadien du climat. Certains articles mettent en lumière la technique et
d’autres les résultats, et d’autres encore illustrent la place des
simulations dans le vaste spectre d’activités allant de la
« théorie » à « l’expérimentation ».
L’un des grands objectifs de toute recherche académique n’est
pas uniquement de prédire l’avenir de notre planète et de notre
société, mais de le contrôler. L’ordinateur joue un rôle de premier plan dans cette entreprise, car il permet de modéliser de
façon de plus en plus détaillée le monde physique qui nous
entoure et d’en cataloguer la riche diversité (tel le génome
humain). Les progrès récents s’apparentent soudainement à
disposer d’un microscope ou d’un télescope d’une puissance
supérieure de plusieurs ordres de grandeur à ce dont on disposait antérieurement, de sorte que nous pouvons mieux voir et
faire plus que ce qui était possible auparavant pour contrôler
notre propre destinée. Le Modèle régional canadien du climat
en est un exemple évident, mais tous les articles illustrent d’une
façon ou d’une autre comment l’ordinateur joue un rôle de plus
en plus grand comme prolongement de nos propres sens pour
surveiller et contrôler le monde où nous vivons. Le rythme du
progrès ne pourra que s’accélérer dans l’avenir et la collectivité universitaire canadienne a un rôle important à jouer à l’avant-plan de ce progrès.
Nous espérons que vous trouverez cette série d’articles à la fois
agréables à lire et instructifs. Pour ceux d’entre vous qui n’ont
pas eu part à la passionnante histoire du CHP au Canada, nous vous invitons à réfléchir à ce que peut réaliser une collectivité
diverse qui travaille sans relâche et dans la coopération à une
cause commune.
Mark Whitmore, Université du Manitoba
Gordon Drake, Université de Windsor
Rédacteurs invités, La Physique au Canada
Les commentaires de nos lecteurs au sujet de cet éditorial sont
bienvenus.
NOTE: Le genre masculin n’a été utilisé que pour alléger le
texte.
ARTICLE DE FOND
HIGH PERFORMANCE COMPUTING TECHNOLOGIES
BY ROB SIMMONDS
High performance computing systems are used for solving a wide range of scientific problems. Some problems can be solved serially on a single processor, and some larger problems can be split into smaller tasks each of which can be run sequentially. With problems that cannot be partitioned in this way, the only effective way to solve them is to create parallel programs that can utilise a large number of processors concurrently. Depending on the problems that need to be solved, different types of computers might be most suitable.

This paper describes the types of systems that are currently used to run both serial and parallel HPC applications. It also describes grid middleware that is used to enable international projects. An excellent example of a project using grid technologies is the ATLAS physics experiment [1] that has many Canadian participants.
There are a number of programming models for parallel
applications. The message passing model is most scalable
and has the advantage of being usable on the widest range
of computing systems. With the message passing model, a
set of independent processes perform calculations and
exchange information using messages when required.
Today, most such programs use the Message Passing
Interface (MPI) libraries which were created to make it
easier to port code between systems. MPI has a large number of functions that enable point-to-point communication
between processes, and collective operations that distribute or collect data from large numbers of processes.
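By way of illustration, a minimal message passing program in C might look like the following sketch (our example, assuming an MPI implementation such as Open MPI or MPICH and the usual mpicc/mpirun tool chain; the partial-sum calculation is made up for the demonstration).

    /* Minimal MPI sketch: each process computes a partial value and the
     * results are combined with a collective reduction.
     * Compile: mpicc sum.c    Run: mpirun -np 4 ./a.out               */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        /* Each process owns a piece of the work; here just one number. */
        double local = (double)(rank + 1);
        double total = 0.0;

        /* Collective operation: sum the local values onto rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %g\n", size, total);

        MPI_Finalize();
        return 0;
    }

The same structure scales from a laptop to thousands of cluster nodes; only the number of processes launched changes.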
Other programming models rely on having all processors
being able to access a shared memory address space.
Multi-threaded programs use many threads of execution
coordinated by mutual exclusion primitives that provide
ordered access to data items accessed by more than one
thread. In programs using the OpenMP programming
model, concurrency is enabled between pairs of directives
inserted into the code. In general, writing programs using these models is easier than using MPI, but the programs created are limited to running on shared memory computers.
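For comparison, a minimal OpenMP sketch in C is given below (again our illustration, assuming an OpenMP-aware compiler such as gcc with -fopenmp); the directive marks the loop as parallel and the reduction clause combines the threads' partial sums safely.

    /* Minimal OpenMP sketch: a shared-memory parallel loop.
     * Compile: gcc -fopenmp sum_omp.c                                  */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double total = 0.0;
        int i;

        /* The directive below asks for the loop iterations to run on
         * several threads; reduction(+:total) gives each thread a private
         * partial sum that is added into total at the end, avoiding
         * explicit locking.                                             */
        #pragma omp parallel for reduction(+:total)
        for (i = 0; i < n; i++)
            total += 1.0 / (double)n;

        printf("total = %g (using up to %d threads)\n",
               total, omp_get_max_threads());
        return 0;
    }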
SUMMARY
This paper gives an overview of technologies currently used in high performance
computing (HPC) environments. The different programming models used for HPC
applications are outlined and the types of
computer systems currently available
explained. The paper also provides a
description of grid computing services that
are employed to coordinate access to computers distributed around the world, and the
wide area networking technologies required
to use these distributed environments effectively.
Most computers use scalar processors, and parallel execution is enabled by using multiple processors concurrently.
Another form of parallelism comes from using vector
processors that perform operations on multiple data elements simultaneously. Only a small number of vendors
sell vector computers, and these are currently only cost
effective for solving a small set of problems.
The rest of the paper is laid out as follows. The second
section describes current computer systems. These include
clusters (1), large shared memory computers (2), and systems designed for running very large message passing programs (3). The next section describes specific system
components: multi-core processors (1), accelerators (2),
cluster interconnects (3), and storage systems (4). The
fourth section describes grid computing services, and the
fifth section describes wide area networking issues. The
paper is summarised in the final section.
COMPUTER SYSTEMS
There are a number of different types of computer systems
used for high performance computing. As already mentioned, vector systems provide high performance for some
applications. Systems that use scalar processors are
described in this section. These include cluster systems,
large shared memory computers and systems designed
specifically for running large message passing applications. A ranking of the fastest 500 computers in the world
is kept at www.top500.org; this ranking is based on the
performance of a single benchmark.
R. Simmonds <[email protected]>, WestGrid Chief Technology Officer, WestGrid, BioSciences 530, University of Calgary, 2500 University Dr. NW, Calgary, Alberta, T2N 1N4
Cluster Computers
Cluster computers have become increasingly popular and now dominate the top 500 list. Depending on the components used, a cluster may be suitable for running large numbers of serial jobs or, if a high bandwidth, low latency interconnect is employed, for running large message passing parallel programs. A decade
ago, groups built their own clusters in order to create cost
effective systems [12], but cost effective clusters are now sold
by many vendors.
Some clusters are designed specifically for HPC environments
where price/performance has been the major factor in purchasing decisions. These systems are built using compute nodes
with fewer fault tolerance features than are in systems designed
for enterprise data centres. However, the power and cooling
requirements of recent processors have increased dramatically.
Due to this, more advanced systems are gaining popularity for
HPC applications. Many of these are blade systems that have
many components including power supplies, network switches
and cooling fans built into a chassis. Compute blades that hold
processors, memory and local hard drives are plugged into
these chassis. The use of shared power supplies and carefully
engineered airflow through the chassis leads to lower power
consumption and reduced load on external air conditioners.
Shared Memory Systems
For large scale shared memory processing, there are fewer
choices of system. Some vendors sell computers with
32 processor sockets; only SGI (www.sgi.com) sells computers
that scale to hundreds of processor sockets sharing memory.
The SGI systems can be configured with different combinations of processor and memory modules as required. These are
useful for running parallel programs that are difficult to program using the message passing model, or that need huge
amounts of memory in a single address space. However, no
shared memory systems scale to the extent that distributed
memory systems can. The large shared memory systems can be
used as cluster nodes, though nodes with small numbers of
processor sockets are far more cost effective when large shared
address spaces are not required. This is why nodes with two
sockets are the most common building blocks for clusters.
Large Systems
Cluster computers are very scalable, but they might not provide
the best solutions for running very large message passing parallel programs. There are several reasons for this. One is that
clusters usually run a general purpose operating system (OS)
on each node, often some variant of Linux. If each instance of
the OS is running independently without synchronisation, the
timing of I/O interrupts at different nodes can cause jitter in
parallel programs leading to lower performance [10]. Also,
these OS instances are often running services that are not needed by the parallel programs and these add jitter. Another problem is that any component failure on a node running part of the
parallel program can cause the whole program to fail. The more
nodes employed, the greater the chance that a failure will
occur.
There are systems that are built specifically for running very
large message passing jobs. Currently, an IBM BlueGene system heads the top 500 computer listings. Each BlueGene has a
large number of PowerPC processors, all running at reduced
clock speeds. This makes the processors more reliable, and also
increases the thermal efficiency, resulting in less cooling being
required. It does this at the cost of serial performance, although
this is not considered so important in a massively parallel system. A BlueGene has multiple interconnection networks for
different types of communication, with one of the networks
used to synchronise the processors. Cray (www.cray.com) and
SGI also sell cluster based systems with features designed to
reduce jitter.
SYSTEM COMPONENTS
This section provides information about some of the components used in HPC computing systems. It starts by describing
the current multi-core processors. Then, accelerators that can
be used to boost the performance of some programs are introduced. Finally, interconnection networks and storage systems
are described.
Multi-core Processors
For many years, the speed of scalar processors doubled roughly every 18 months. This is no longer possible due to physical
constraints on the materials used to construct processors, so
this doubling of serial processing speed will not continue.
Currently, the primary technique being used to increase performance of general purpose processors is multi-core. With
this, each processor has multiple processing cores, each of
which contains the logic required to perform scalar processing.
Current multi-core processors have between 2 and 8 cores.
The multi-core processors have several caches, along with
memory and I/O controllers. In all designs, each core has a
level 1 (L1) cache that consists of fast memory used just to hold data and instructions that are used by the individual core. They
also have an L2 cache that is larger, but with slightly greater
access latency. The L2 cache might be for use by a single core,
but is more likely to be shared by more than one core. Some
designs also have an L3 cache that is shared between all of the
cores in the processor.
It is important to understand the design of the processors, since
this has implications on how to get the best performance from
a program using the processor. Since all the cores in the processor share a connection to the computer’s memory bus, processes that need high bandwidth might perform best if they do not
share the same socket with other processes. Processes that
communicate through shared memory might perform best if
they run in the same multi-core processor so that they share
cache structures. These are issues that have been addressed in
large shared memory computers in the past, but now they are
affecting all computers. Currently, the writers of cluster scheduling systems are working to adapt their software to be aware
of these issues when deciding where to place the processes that
are part of a parallel program.
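As a small illustration of the placement issue (our sketch, not from the article; the core number chosen is arbitrary), a process on Linux can be pinned to a particular core with sched_setaffinity, which is essentially the control that batch schedulers exercise on the application's behalf when mapping processes onto multi-core nodes.

    /* Sketch: restrict the calling process to one core on Linux. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(2, &mask);   /* request core number 2 (arbitrary choice) */

        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("process now restricted to core 2\n");
        return 0;
    }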
Accelerators
There is currently a trend to add accelerators to compute nodes.
An accelerator is a special purpose unit capable of running
some calculations faster than a general purpose processor.
Ideally, the accelerator is used to offload parts of the computation, with the main processor continuing to execute instructions. The idea of using accelerators is not new. For example,
in the 1980s, the INMOS Transputer was used to accelerate
workstations. Accelerators are gaining popularity at the
moment since they can provide performance that greatly
exceeds that of general purpose processors for specific tasks.
Current accelerators include IBM’s Cell processor [8], the
ClearSpeed card (www.clearspeed.com) and Field
Programmable Gate Arrays (FPGAs). Both the Cell and
ClearSpeed cards provide superior floating point performance.
FPGAs perform well when it is advantageous to work with
variable word lengths. Another type of accelerator that is gaining popularity is general purpose graphics cards (GPGPUs)
that offer very high floating point performance while being relatively inexpensive. One issue with current GPGPUs is that
they do not support the same floating point arithmetic standards as general purpose processors, although this may be
addressed in future units. Another problem with card based
accelerators is the limited bandwidth between the main processor and the card. Due to this, these accelerators only give good
performance if the ratio of work done to I/O required is large.
Current scalar processors have two to eight cores, but future
processors could have many more; Intel has demonstrated a
prototype processor with 80 cores. The term many-core has
been used to describe processors where adding or removing a
few cores will make little difference, since the performance
will be limited by the socket bandwidth. It is likely that these
future processors will have accelerators in the same package as
the scalar processing cores. Economics will dictate which
accelerators are added to processors.
Cluster Interconnects
Nodes in a cluster computer are linked using one or more networks. These networks are used to access the nodes, for the nodes to access networked storage, and for message passing communications. Ethernet provides good bandwidth, but generally high latency, largely due to the complex IP software stacks used to access it. One gigabit per second (Gb/s) Ethernet is used most often; 10 Gb/s Ethernet components are available, but are expensive compared to other high performance networks at this time. Low latency interconnects such as Myrinet (www.myrinet.com) and Quadrics (www.quadrics.com) are specifically designed for use with HPC clusters and provide low latency communications suitable for running scalable message passing programs. Many new clusters use Infiniband networks. Infiniband is designed to handle multiple protocols simultaneously, and provides good price performance compared to other high performance interconnects.

Storage
As the size and performance of computer systems have increased, so has the need for greater performance from storage. Also, the volume of data that is used by modern applications is increasing massively. The size of individual disk drives has increased considerably in the past few years, and new technologies have led to lower priced drives. The speed of disks has not increased greatly, so modern storage systems need to access many disks concurrently to get high performance. This is done either by using hardware controllers that stripe across many disks, or by using cluster file-systems that access large numbers of disk servers in parallel. The Lustre file-system (www.lustre.org) is an example of a highly scalable cluster file-system that is used on many of the largest Linux HPC clusters.

Large disk arrays are configured in redundant configurations called RAID sets so that a file-system can survive a drive failure. With small disk drives, RAID level 5, where one drive holds parity information for a set of striped drives, was adequate. However, with larger disks, the increased time for a disk array to rebuild itself after a disk failure greatly increases the chance that a second drive could fail before that process is complete. To reduce the chance of losing data, most large HPC disk arrays now use RAID level 6 that allows more than one disk to fail in any RAID set. However, RAID 6 uses additional space for parity information.
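The parity idea behind RAID 5 is easy to state in code. The sketch below (our illustration, with made-up block contents) computes a parity block as the XOR of the data blocks and then rebuilds one "failed" block from the survivors; RAID 6 adds a second, independently computed syndrome so that two simultaneous failures can be tolerated.

    /* Illustrative sketch of RAID-5 style parity: parity = XOR of the
     * data blocks, so any single missing block can be rebuilt.         */
    #include <stdio.h>

    #define NDRIVES 4      /* data drives in the stripe          */
    #define BLOCK   8      /* bytes per block, tiny for the demo */

    int main(void)
    {
        unsigned char data[NDRIVES][BLOCK] = {
            "drive00", "drive01", "drive02", "drive03"
        };
        unsigned char parity[BLOCK] = {0};

        /* Compute the parity block across the stripe. */
        for (int d = 0; d < NDRIVES; d++)
            for (int b = 0; b < BLOCK; b++)
                parity[b] ^= data[d][b];

        /* Pretend drive 2 failed: rebuild it from the others plus parity. */
        unsigned char rebuilt[BLOCK] = {0};
        for (int b = 0; b < BLOCK; b++) {
            rebuilt[b] = parity[b];
            for (int d = 0; d < NDRIVES; d++)
                if (d != 2)
                    rebuilt[b] ^= data[d][b];
        }
        printf("rebuilt block: %.7s\n", rebuilt);   /* prints "drive02" */
        return 0;
    }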
DISTRIBUTED COMPUTING
Previous sections have described computer systems that are situated in a single machine room. However, many large projects
have collaborators in different countries and use computing
resources distributed globally. One of the best known large
scale collaborations is the ATLAS project [1] that will process
data collected at CERN at sites around the world. The following sections describe some of the technologies used to facilitate
these projects.
Grid Computing
The concept of grid computing was developed at the end of the
1990s [6]. This aims to provide coordinated access to computing resources that are geographically separated and belong to
different administrative domains. A well known grid environment is the CERN Large Computing Grid (LCG) that will be
used to distribute data and jobs for the ATLAS project.
A number of different grid middleware toolkits exist, with the
Globus Toolkit (GT) [2] being the most widely used. The GT
provides the basic components needed to build a grid environment. These handle authorisation and authentication, resource
discovery, launching jobs, and the efficient movement of data
between geographically separated systems. Current versions of
the GT include other higher level services; additional services
are usually needed to create a complete grid environment. The
Virtual Data Toolkit (VDT) package includes GT along with
other services useful for setting up a grid environment on
Linux based systems.
Large scale grid computing environments, such as LCG, need
to achieve a number of things. One is to distribute data so that
it can be accessed at sites separated by huge distances. Another
is to send jobs to sites with resources available to run them.
There is often a need to coordinate the distribution of data and
jobs since there is no point sending a job to a site that does not
have the required data.
Data management in grid environments often uses data discovery and replication services [5]. These can use tools such as the
Meta Data Catalog Service (MCS) to provide relational indexing of data files based on their contents. The indexes are
referred to as Logical File Names (LFNs). Then, a scalable
information service such as the Replica Location Service
(RLS) can be used to find all of the copies of the file that exist.
Each physical copy of a file is referred to by its Physical File Name (PFN). A data replication service can be
used to find a “close” copy of a particular file and move it to a
site where it will be needed by one or more jobs. In a Globus
environment, GridFTP [4] is used to provide high performance
data transfer.
Jobs in a grid environment are managed using meta-schedulers
such as Condor-G [7]. A meta-scheduler is a tool that can send
jobs to distributed resources, each of which could have its own
scheduling system to manage local job starting policy. In a
Condor-G/Globus environment, each local resource runs a
Globus Resource Allocation Manager (GRAM) service. Each
GRAM instance may provide an interface to a local cluster
with thousands of processors, or it could provide an interface
to another meta-scheduler that itself submits jobs to many clusters, each controlled by its own local job scheduler. This provides a highly scalable way of accessing large numbers of distributed compute resources.
The Globus Monitoring and Discovery Service (MDS) can be
used to publish and discover information about resources. This
information could be displayed in a human readable form using
tools such as WebMDS, or it could be used directly by other
tools such as meta-schedulers. In this way, a new resource can
be added to the set of resources available automatically. Once
discovered, a data management system can replicate data to the
site, and a meta-scheduler can start sending jobs to the new
site. Rules can be set up to prevent the meta-scheduler submitting jobs to a site until the required data is available.
The Globus tools use the Grid Security Infrastructure (GSI) to
handle authorisation and authentication. This uses a Certificate
Authority (CA) model that allows users to be added to a system by a resource provider without any secret information,
such as a password, being exchanged between the resource
provider and the user. GSI also enables single sign-on using
short lived credentials generated from a user’s CA signed certificate and private key. This enables resources in multiple
administrative domains to be accessed using a single credential. The mapping schemes used with GSI mean that it is not
important if the usernames at different sites are not the same.
Each site has control over which users can use its resources; in
a simple Globus environment, this is done by adding a string
called the Distinguished Name (DN) that identifies a user’s
credential to a mapping file.
In a grid environment with resources distributed across many
administrative domains, adding DNs to mapping files at each
site is likely to be difficult to coordinate. To deal with this,
Virtual Organisation (VO) management tools such as
VOMS [11] can be employed; a VO describes a group of users
working on a common problem. If VOMS is used, rather than
adding each person in a VO to a mapping file, a resource
provider can simply add the VO. The resource provider has
control of how the users of the VO are mapped to the resource.
One way to configure this is so that, each time a new user from
the VO accesses the resource, he or she is mapped to a new
local account. Since it is not important to grid users what their
local usernames are, a random username can be allocated.
Using VOMS, a resource provider can add the VOs they trust
to their configuration and let the VO managers control which
of their users can access the resource.
Web Services Based Tools
Grid tools have not been universally adopted by all collaborating research groups. In particular, some groups that are not
concerned about cross domain authorisation or tightly coupled
coordination of resources have adopted simpler Web service
based tools. Web service workflow managers such as Kepler
(www.kepler-project.org) and Taverna (taverna.sourceforge.
net) provide powerful graphical interfaces and have been widely adopted by some areas of science such as Bioinformatics.
WIDE AREA NETWORKING
In order to distribute data around large grid environments, high
performance wide area networks are critical; large capacity
research networks link systems around the world. In Canada,
these research networks are operated by CANARIE
(www.canarie.ca). Current research networks are a mix of IP
networks where data streams share links between high performance routers, dedicated links between sites that communicate often, and dynamically configured lightpath networks. A
lightpath is a point-to-point optical circuit linking two sites,
and software services such as UCLP [3] can be employed to
construct lightpaths as required.
To achieve high performance data transfer over wide area networks, it is vital to use tools designed specifically for this task.
Tools that use the TCP protocol need to be able to allocate large
buffers and ideally be able to move data over parallel streams.
An example of such a tool is GridFTP [4] from the Globus
Toolkit. Other tools employ protocols specifically designed for
wide area data transfer, such as UDT [9], that avoid the performance issues encountered by TCP data retransmission algorithms on large round trip time (RTT) networks.
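As a small illustration of the buffer tuning mentioned above (our sketch, not code taken from GridFTP or UDT), a C program can request large TCP socket buffers with setsockopt; the operating system may cap the request at its configured limits.

    /* Sketch: request large TCP send/receive buffers so that a single
     * stream can keep a long, fat network path full.                   */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        int bufsize = 16 * 1024 * 1024;   /* ask for 16 MB buffers */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("SO_SNDBUF");
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("SO_RCVBUF");

        /* Read back what the kernel actually granted. */
        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &granted, &len);
        printf("send buffer granted: %d bytes\n", granted);

        close(sock);
        return 0;
    }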
SUMMARY
This paper has given an overview of technologies currently
employed in high performance computing environments.
These include large scale computing systems employing powerful processors, high performance communication networks,
and large storage systems. The paper also included information
about grid and Web service based software that enables coordinated access to computing systems distributed around the
world. Finally, some technologies needed for high performance
wide area networking were discussed.
Computing technologies are evolving rapidly, and very large systems can now be purchased for a few million dollars. Because of this, the computer processing power available to researchers in years to come will dwarf what was available just a few years ago. It will be fascinating to see what research advances these new computer systems will enable.
REFERENCES
1. The ATLAS Project. http://atlas.web.cern.ch.
2. The Globus Toolkit. http://www.globus.org.
3. User Controlled Lightpaths. http://www.canarie.ca/canet4/uclp/.
4. B. Allcock, L. Liming, and S. Tuecke. GridFTP: A Data Transfer Protocol for the Grid.
5. E. Deelman, J. Blythe, Y. Gil, and C. Kesselman. Pegasus: Planning for Execution in Grids. Technical Report TR-2002-20, ISI, November 2002.
6. I. Foster and C. Kesselman. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 2004.
7. J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke. Condor-G: A computation management agent for multi-institutional grids. In Proceedings of the Tenth IEEE Symposium on High Performance Distributed Computing (HPDC), pages 7-9, San Francisco, California, August 2001.
8. M. Gschwind. The Cell Broadband Engine: Exploiting multiple levels of parallelism in a chip multiprocessor. International Journal of Parallel Programming, 35(3), June 2007.
9. Y. Gu and R.L. Grossman. UDT: UDP-based Data Transfer for High-Speed Wide Area Networks. Computer Networks, 51, May 2007.
10. A. Hoisie, G. Johnson, D.J. Kerbyson, M. Lang, and S. Pakin. A performance comparison through benchmarking and modeling of three leading supercomputers: BlueGene/L, Red Storm, and Purple. In SC '06: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, page 74, New York, NY, USA, 2006. ACM.
11. R. Alfieri et al. From gridmap-file to VOMS: Managing authorization in a Grid environment. Future Generation Computer Systems, 21(4):549-558, 2005.
12. T. Sterling, D. Savarese, D.J. Becker, J.E. Dorband, U.A. Ranawake, and C.V. Packer. BEOWULF: A parallel workstation for scientific computation. In Proceedings of the 24th International Conference on Parallel Processing, pages 11-14, Oconomowoc, WI, 1995.
CONGRATULATIONS

TWO PROMINENT CANADIAN PHYSICISTS INDUCTED INTO ORDER OF CANADA AS OFFICERS (APR. 11/08)

PAUL CORKUM is one of Canada's leading experts on lasers and their applications. For more than 30 years, this National Research Council scientist (now also Canada Research Chair and Professor of Physics at the University of Ottawa) has been developing and advancing concepts needed to understand how intense laser light pulses can be used to study the structure of matter. He is known as the father of the attosecond pulse, which is so rapid that it has allowed him to capture the first image of an electron orbiting an atom. Recognized for his innovative research and for his contribution to physics, he is the recipient of the 2006 Killam Prize for natural sciences, the 2007 NSERC Polanyi Prize (see page 54 for more details) and is a member of both the Royal Society of London and the Royal Society of Canada.

An eminent scientist and administrator, ARTHUR MCDONALD has greatly contributed to the physics community and to Canada's reputation for excellence. A former professor at Princeton University, he joined Queen's University in 1989, and was instrumental in spearheading an international research project studying tiny particles emitted from the sun. At the Sudbury Neutrino Observatory, where he is director, researchers found that neutrinos changed into different varieties on their way to earth. Hailed as one of the world's top scientific breakthroughs in recent years, the finding has changed the laws of physics and provided remarkable insight into the structure of the universe. Over the years, several scientific institutions and organizations have benefited from his valuable guidance.
TWO CAP MEMBERS RECEIVED NSERC STEACIE FELLOWSHIPS (MAR. 17/08)

BARTH NETTERFIELD, University of Toronto, likes to play with balloons, but not just any balloons. Sporting names like BOOMERANG, BLAST and SPIDER, his toys travel far up into the stratosphere carrying sophisticated telescopes that gather data about the origins of the universe nearly 14 billion years ago. In collaboration with colleagues from around the world, his balloon experiments study such phenomena as the process of star formation and the characteristics of the cosmic microwave background (CMB), which is the leftover radiation signature of the Big Bang. Dr. Netterfield is one of the top experimental cosmologists in the world, and his work on these types of astronomical phenomena has earned him an NSERC E.W.R. Steacie Fellowship.

CARL SVENSSON, University of Guelph, is searching for a previously unknown force of nature, and if he finds it, he will stand some of the current laws of physics on their head. That's because the force he's looking for behaves in a fundamentally different way from the ones we already know -- gravity, electromagnetism, and the strong and weak nuclear forces -- whose effects do not depend on the direction of time. This new force would explain the lack of symmetry between matter and anti-matter in the universe. Dr. Svensson has carved out an enviable reputation in the rarefied world of subatomic physics for both his experimental work and his leadership in designing and building the tools needed to probe the inner workings of atoms. His contributions have earned him an NSERC E.W.R. Steacie Fellowship.

The winners receive additional funding to support their research, and their universities receive a salary contribution to fund a replacement for the Fellow's teaching and administrative duties, thus allowing the winners to focus on their research for two years. (For more information : www.nserc.ca/news/2008/)
PHD PHYSICS DEGREES AWARDED AT MCMASTER UNIVERSITY; Dec. 2006 to Nov. 2007 (cont'd from Jan.-Apr. 08 PiC) /
DOCTORATS EN PHYSIQUE DÉCERNÉS À L'UNIVERSITÉ MCMASTER; déc. 2006 à nov. 2007 (suite du PaC de jan. à avr. 08)

Steven J. Bickerton, "A Search for Kilometer-Sized Kuiper Belt Objects with the Method of Serendipitous Stellar Occultations" (D.L. Welch), June 2007
Kevin F. Lee, "Controlling Molecular Alignment" (P. Corkum), June 2007
David Lepischak, "High-Amplitude delta Scuti Variables in the LMC" (D.L. Welch), June 2007
Michael V. Massa, "Studies of Polymer Crystal Nucleation in Droplet Ensembles" (K. Dalnoki-Veress), June 2007
Soko Matsumura, "Planet Formation and Migration in Evolving Protostellar Disks" (R. Pudritz), June 2007
ARTICLE DE FOND
ASTROPHYSICAL JETS
BY DAVID A. CLARKE, NICHOLAS R. MACDONALD, JON P. RAMSEY AND MARK RICHARDSON
Astrophysical jets are long, collimated, supersonic flows of plasma emanating from compact
celestial objects such as protostellar objects
(PSO) that become stars once thermonuclear
reactions begin, and active galactic nuclei (AGN) that are
supermassive black holes (10^8 to 10^9 M☉, where M☉ = 2 × 10^30 kg is the mass of the sun) at the cores of nascent
galaxies. Examples of jets are shown in Fig. 1. “Dead
stars” (white dwarfs, neutron stars, and stellar black holes)
can also exhibit jets (e.g., the famous microquasar
SS433 [1]), but we shall concentrate here on jets from
“young” objects such as PSO and AGN.
The term jet was first applied to an astronomical object by
Baade & Minkowski [2] to the “protrusion” out of the core
of the nearby galaxy M87. In fact, this feature was first
observed by H. Curtis in 1918 [3], the same Curtis of the
famous “Curtis-Shapley debate” [4]. By the late 1980s,
dozens of extragalactic radio jets had been extensively
studied (e.g., http://www.jb.man.ac.uk/atlas), most with
radio interferometers such as the National Radio
Astronomy Observatory’s Very Large Array, although several have been observed optically with the Hubble Space
Telescope (HST) [5] and a few in x-rays (e.g., Wilson et
al. [6]).
What are now known as PSO jets can be traced to
S. Burnham’s discovery in 1890 [7] of what he called “faint
nebulosities”. Burnham’s objects were first interpreted as
faint stars, but later identified as a separate class of objects
by G. Herbig and G. Haro in the 1940s, for whom
these objects are now named. Herbig-Haro (HH) objects
were not widely understood as jets until the early 1980s
when observations first revealed their narrow and collimated nature (e.g., Snell et al. [8] use the term “streams”,
and don’t go quite so far as to call them “jets”). Hundreds
of HH objects are now known, many of them associated
with jets (http://casa.colorado.edu/hhcat).
As protostars and protogalaxies form, the surrounding gas,
dust and, in the case of the latter, whole stars are drawn in
gravitationally which, by necessity, possess some initial
angular momentum. As collapse ensues, conservation of
angular momentum requires that the rotation speed of the
in-falling material increases until it reaches a point (the so-called centrifugal barrier) where it can no longer
move toward the rotation axis. Instead, material may only
SUMMARY
Nature has devised numerous mechanisms
by which the universe could become self-aware, and where humanity could spring
forth from the ashes of ancient supernovæ
and gaze back upon the heavens to contemplate its origins. Astrophysical jets are one
such mechanism. To an astronomer, a jet is a
long, collimated, supersonic flow of gas
emanating from a condensed object collapsing under its own weight. But to a forming
star, a jet is the “arm” by which angular
momentum is removed from the rapidly
rotating object, allowing it to evolve. Without
this mechanism, the spin of a protostar
would prevent it from collapsing enough to
trigger thermonuclear fusion, and we would
not be here to talk about it.
In this contribution, we introduce the reader
to astrophysical jets, and discuss how
supercomputing allows us to investigate the
physics of these “hand-brakes of nature”.
Fig. 1
Three “inverted palette” images of jets, where black
represents the highest brightness. a) Very Large
Telescope image of HH 34 in Orion (courtesy, the
European Southern Observatory). The jet is the narrow
feature emanating from the protostar in the top left corner and disappears from view before reaching its terminus at the right where it excites a bow shock in the interstellar medium; b) Very Large Array (VLA) image of
the western jet in Cygnus A (from [9]). The jet moves
from the AGN at the bottom left corner to the right filling a giant radio lobe of hot plasma (see [10] for an
excellent overview of this prototypical AGN); c) VLA
image of the “naked” jet from the quasar 1007+417. The
quasar is the dark spot at the top of the image, and the
jet is the series of knots ending at the small, off-axis
lobe at the bottom.
D.A. Clarke
<[email protected]>,
N.R. MacDonald,
J.P. Ramsey, and
M. Richardson,
Institute for
Computational
Astrophysics,
Department of
Astronomy & Physics,
Saint Mary’s
University, Halifax,
Nova Scotia, Canada,
B3H 3C3
“fall” parallel to the axis and on to the equatorial plane, forming an accretion disc. Were there no way for the disc to lose any
of its angular momentum, such a configuration would remain
stable for æons, rotating in a pseudo-solid body/Keplerian
fashion much like the Milky Way does, without forming a compact object (the star or AGN) at its centre. Planets would not
form from the debris of the disc, and we would not be here to
discuss it.
But Nature has found a way to rid the system of its angular
momentum: jets. As shown in Fig. 2, a small fraction of the disc material is thrown off in long, collimated, oppositely-directed supersonic outflows carrying with them most of the
disc’s angular momentum. This allows material in the disc to
drift inward and collect at the gravitational “bottom” of the system (e.g., Königl & Pudritz [11]; Shu et al. [12]). Remarkably
this process has been directly observed. Table 1 gives recent
results from Woitas et al. [13] for the PSO RW Auriga. In these
HST observations, the jet and accretion disc are sufficiently
resolved to measure rotational speeds from Doppler-shifted
emission lines. From these data and previous measurements of
mass flow rates, angular momentum fluxes are deduced. The
conclusion: the jets in RW Aur transport less than ten percent
of the accreted material but more than 2/3 of the angular
momentum away from the disc.
TABLE 1
Measured rates at which mass and angular momentum are accumulated on the accretion disc (inflow) and then transported away by jets
(outflow) for RW Aur (from [13]). An AU (Astronomical Unit) is 1.5 × 10^8 km, the distance between the Earth and the sun.
Even before remarkable observations such as Woitas et al.
were possible, jets were known to be associated with most star
formation regions as well as young galaxies and quasars, and it
has been widely believed for some time that virtually all stars
and AGNs pass through a “jet phase” (e.g., see [14] for a recent
and very digestible review of jets). A typical AGN is 10^8 times more massive than a PSO, and thus the scales of their associated jets are much greater. Jets from PSOs are about a light year (10^16 m) in length and travel at 100–200 km/s, while those from AGNs can be longer than 10^6 light years and travel at 90% or more the speed of light. The jet phase might last 10^5 yr for a PSO and 10^8 yr for an AGN; in either case, a very small
fraction of their total lifetime.
Besides their size, one of the most important differences
between jets from PSOs and AGNs is their density relative to
their surrounding media. The density of a typical galactic jet is
~10^4 particles per cm^3 (an excellent laboratory vacuum is 10^4 times denser), similar to that of the surrounding interstellar
medium (ISM) for a density ratio of ~1. Conversely, a typical
extragalactic jet may have a density of only 1 m^−3 (that's per cubic metre) while that of the surrounding inter-galactic medium (IGM) is ~1 cm^−3, for a density ratio of 10^−6. It is this striking
difference in density ratios that is thought to be responsible for
many of the morphological differences between galactic and
extragalactic jets.
Prior to 1985, a prevailing model for launching outflows posited that a thick accretion disc (one whose "vertical" thickness is an appreciable fraction of its equatorial radius) formed about
a compact object with a deep “funnel” reaching down to the
object’s surface. A radiation-driven wind from the compact
object would be blocked by the disc in the equatorial plane, but
allowed to escape along the rotation axis and within the evacuated funnel. Indeed, the funnel would help collimate the outflow into the narrow jets observed [15,16]. However, in 1985
Papaloizou & Pringle [17] showed that such a funnel is hydrodynamically unstable in 3-D, rendering the mechanism impotent.
It was Blandford & Payne [18] who first described the jet-launching model that would stand the test of time. They showed that a jet can be launched magneto-centrifugally from the surface of an accretion disc, much like a bead on a wire accelerates outward if the wire is slanted far enough from the rotation axis and then spun about. In this model, the “bead” is a mass of hot, ionised plasma and the “wire” is a magnetic field line. As ionised matter cannot cross field lines, it is obliged to follow them and thus form a collimated jet (Fig. 2).
Fig. 2. Artist’s rendition of the Blandford & Payne model, in which the rotating accretion disc wraps up magnetic field lines into a helical pattern, launching a jet from the surface of the disc [courtesy NASA/ESA and Ann Feild (STScI)].
This paper inspired other theoretical discussions (e.g. [19,20]),
and numerous computational investigations (e.g., [21–27]) to
study various aspects of this theory. While each of these works
examined the problem from a different angle, all agree on one
main point: In a rotating, magnetised plasma collapsing under
its own gravity, jets are unavoidable.
THEORY AND METHODOLOGY
Jet Physics
It is often said that 98% of the universe is in the plasma state,
with the Earth, in its mostly solid state, representing an anomaly. Plasma physics is enormously complicated to solve properly, as it requires tracking every particle (ion) under their
mutual electromagnetic and, in astrophysics, gravitational
fields. Fortunately, if one can assume charge neutrality over
small volumes of space and that relative speeds among the particles are comfortably sub-light, a simpler system of
equations, those of magnetohydrodynamics (MHD), prevails.
MHD was almost single-handedly developed by the Swedish
physicist Hannes Alfvén (1908–1995) who won the Nobel
prize for his efforts in 1970 (http://public.lanl.gov/alp/plasma/
people/alfven.html). His was often the story of “the lone voice
in the wilderness”, where scoffs such as ‘Were such a thing
possible, Maxwell himself would have discovered it!’ were
reportedly heard at his seminars. It wasn’t until Enrico Fermi,
having attended one of Alfvén’s lectures, pronounced his theory to be sound that Alfvén’s ideas started to be taken seriously.
And when his most controversial prediction (that a form of electromagnetic waves, later to be known as Alfvén waves, could propagate through a conducting medium) was confirmed
in the lab in the late 1950s, his ideas finally became mainstream.
In their primitive form, the equations of ideal (infinite conductivity) MHD are:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}\,) = 0; \qquad (1)$$

$$\frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\,\vec{v} = -\frac{1}{\rho}\nabla p - \nabla \phi + \frac{1}{\mu_0 \rho}(\nabla \times \vec{B}) \times \vec{B}; \qquad (2)$$

$$\frac{\partial p}{\partial t} + \vec{v} \cdot \nabla p = -\gamma p\, \nabla \cdot \vec{v}; \qquad (3)$$

$$\frac{\partial \vec{B}}{\partial t} = \nabla \times (\vec{v} \times \vec{B}), \qquad (4)$$
where ρ is the density, v is the fluid velocity, p ∝ ρ^γ is the thermal pressure, γ is the ratio of specific heats (5/3 for a plasma), φ is the gravitational potential satisfying Poisson's equation (∇²φ = 4πGρ), and B is the magnetic induction which, by virtue of equation (4), satisfies the solenoidal condition, ∇·B = 0, for
all time so long as it is imposed as an initial condition.
These equations reduce to those of ordinary fluid dynamics
when B = 0. They describe how fluid (ionised gas in this case) flows under the influence of pressure gradients, gravity,
and magnetic forces. It is an extremely rich and non-linear system of equations permitting four types of waves, both longitudinal and transverse. Except for the simplest of situations, they
are almost completely inscrutable by pen and paper, and computational methods are necessary to make progress in realistic
systems. Introductions to the subject can be found in the texts
by Davidson [28] and Biskamp [29].
Jets are a classic astrophysical application of MHD. The MHD
approximation (isotropic pressure, local charge neutrality, sub-light interparticle speeds) is widely believed to be valid, and
there is much evidence of the association of jets with diffuse
gases and magnetic fields [14]. We therefore use equations
(1)–(4) as our starting point to build our jet models.
Computational Methods
We use ZEUS-3D, a computer program under development by
the authors at the ICA and publicly available on the ICA web
site (http://www.ica.smu.ca/zeus3d). A comprehensive user
manual and a gallery of 1-D, 2-D and 3-D simulations are also
available from this site, and inquiries on its use should be
directed to DAC.
ZEUS-3D is an example of a finite-volume code in which a
region of interest (say the portion of the ISM or IGM through which a jet propagates) is divided into a number of small
zones, typically 100 million or more in 3-D. Equations (1)–(4),
or at least their conservative variations (e.g. [30]), are integrated over each zone volume and/or zone face and the resulting
difference equations are advanced in time by a small time-step
using a time-centred and conservative procedure. This process
is repeated for as many time-steps as required (e.g., tens of
thousands) until the system has evolved to the desired state. As
for any system of partial differential equations, initial and
boundary conditions define the problem and for us, these are
set to launch a jet from the surface of an accretion disc, or propagate a jet through a quiescent medium. The truly dedicated
reader can find a complete description of the numerical methods in Clarke [31].
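To make the finite-volume idea concrete, here is a minimal sketch in Python (emphatically not ZEUS-3D itself, which is a far more elaborate MHD code): volume-averaged densities in a 1-D row of zones are advanced through many small, stability-limited time-steps by differencing fluxes evaluated at the zone faces, here for the continuity equation alone. The grid size, advection speed, and simple upwind flux are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D finite-volume sketch: d(rho)/dt + d(rho*v)/dx = 0 on a uniform,
# periodic grid.  Each zone stores a volume-averaged density; fluxes are
# evaluated at zone faces (simple upwinding) and the averages are advanced by
# many small, CFL-limited time-steps.
nx, L, v = 200, 1.0, 1.0          # number of zones, domain length, advection speed
dx = L / nx
rho = np.ones(nx)
rho[nx // 4: nx // 2] = 10.0      # a dense "slug" to advect
cfl = 0.5

t, t_end = 0.0, 0.3
while t < t_end:
    dt = cfl * dx / abs(v)                               # stability-limited time-step
    flux = v * np.where(v > 0, np.roll(rho, 1), rho)     # upwind flux at each left face
    rho += dt / dx * (flux - np.roll(flux, -1))          # conservative zone update
    t += dt

print("total mass (conserved by construction):", rho.sum() * dx)
```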
THE SIMULATIONS
Magneto-centrifugally Launched Jets
Figure 3 illustrates our initial conditions for launching a jet
from the surface of an accretion disc (inner radius
r_i ~ 0.05 AU). An atmosphere in hydrostatic equilibrium (ρ_atm ∝ r^(−1/(γ−1))) is established about a 1 M_⊙ central mass (PSO) that provides the gravity. A hydrostatic disc in Keplerian rotation about the PSO is maintained as left boundary conditions, with ρ_disc/ρ_atm = 100 and p_disc/p_atm = 1 (pressure balance). A uniform magnetic field [B_z² = 0.05 μ_0 p(r_i)] perpendicular to the
disc permeates both the atmosphere and the disc. At t = 0, the
disc is suddenly set into rotation and a rather knotty jet is the
result (Fig. 4). The initial Bz is wrapped around the rotation
axis creating the helical field needed for the Blandford &
Payne mechanism to accelerate material along the z (horizontal) axis.
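As a rough illustration of the quoted initial conditions (a sketch in code units only, not the actual ZEUS-3D problem setup; the normalisation of the atmospheric pressure is assumed):

```python
import numpy as np

# Sketch of the launched-jet initial values quoted above (illustrative only;
# the real setup uses full 2-D arrays and staggered magnetic field components).
mu0   = 4e-7 * np.pi                            # vacuum permeability [SI]
gamma = 5.0 / 3.0
r_i   = 0.05                                    # disc inner radius [AU]

r       = np.linspace(r_i, 20 * r_i, 400)       # radius [AU]
p_atm0  = 1.0                                   # atmospheric pressure at r_i (assumed code units)
rho_atm = (r / r_i) ** (-1.0 / (gamma - 1.0))   # hydrostatic atmosphere, rho ~ r^(-1/(gamma-1))
p_atm   = p_atm0 * rho_atm ** gamma             # p proportional to rho^gamma

rho_disc = 100.0 * rho_atm[0]                   # rho_disc / rho_atm = 100 at the boundary
p_disc   = p_atm[0]                             # pressure balance, p_disc / p_atm = 1
B_z      = np.sqrt(0.05 * mu0 * p_atm[0])       # uniform field with B_z^2 = 0.05 mu_0 p(r_i)

print(rho_disc, p_disc, B_z)
```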
Fig. 3
Schematic diagram showing the initialisation of the magneto-centrifugally launched jet problem with all boundary types labeled. The disc is the gray band along the left side, with the inner radius of the disc, r_i, shown.
Fig. 4. A 2-D axisymmetric simulation of a jet flowing from left to
right and launched from an accretion disc maintained as
boundary conditions on the left side. The top half shows density and the bottom half Bφ, with black indicating high values,
white low values. Note that the density “knots” correspond to
regions of relatively low Bφ.
As tempting as the association may be between the knotty
appearance of the jet in HH 34 (Fig. 1a) and the knots (actually “rings” in axisymmetry) in the simulated jet (Fig. 4), it is
unlikely these two phenomena are related, at least directly.
Figure 4 represents the first 8 AU (~ 1 light-hour) of the jet,
whereas the HH 34 jet is about 1 light-year in length. Instead,
(magneto)hydrodynamical instabilities within the jet itself
(e.g., shocks) are likely the cause of the HH 34 knots, although
they could well begin as the knots seen in Fig. 4. Indeed, this
points to a current limitation of all simulations performed to
date: No single simulation has yet been able to resolve the jet
launching region and follow the jet to observable scale lengths.
This is a very difficult computational problem, and one that is
at the core of the Ph.D. dissertation of JPR.
Still, the origin of the knots in the simulation is of interest
(e.g. [32]), as they point to an important role played by the protostellar atmosphere not considered in the original Blandford &
Payne model. Figure 5 shows the density distribution, magnetic field lines, velocity vectors, and acceleration vectors of the
inner-most region of our simulation (0.4 × 0.5 AU) at a time
that best illustrates how the knots are formed.
Because the disc has an inner radius, r_i, the Blandford & Payne
mechanism is capable of launching material from the disc at
r > ri only. As material begins moving away from the disc, the
hydrostatic balance of the atmosphere is disturbed and, at least
at first, gravity wins out. Material, particularly near the axis, is
drawn toward the centre (e.g., some of the white velocity vectors along the axis in Fig. 5 are pointing inward), replenishing
the central core with material. The momentum of the inwardly
falling material actually adds more matter to the core than
needed to restore balance, and the imbalance in hydrostatic
equilibrium now favours the outward pressure gradient. This
drives material away from the core but, owing to the still
inwardly moving material close to the axis, is redirected to
lower latitudes (upward in Fig. 5), forming the rings, or knots.
This time, momentum carries too much material away from the
core so that gravity is again dominant, and the process repeats
like a damped, driven oscillator continuing for as long as the
simulation is run. From the local accelerations (black vectors in
Fig. 5) and densities, we can deduce the period of oscillation
Fig. 5. A blow-up of the inner 8r_i × 10r_i (0.4 × 0.5 AU) of the
launched jet, showing details of how the knots are generated.
Greyscale is density with black indicating high values, black
lines are magnetic field lines, white arrows indicate velocities, while black arrows show accelerations. The disc is located along x_1 = 0 and x_2 ≥ 1, where x_1 and x_2 are the axial (z)
and radial (r) coordinates respectively. The putative protostar
is located at the origin.
and, with the local velocities, we can determine the expected
spacings between the knots. We find this agrees with the measured spacings in the simulations to within 10–20%.
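A hedged sketch of that estimate: the article does not spell out how the period follows from the local accelerations and densities, so the harmonic-oscillator form T ≈ 2π√(d/a) below is an assumption made purely for illustration; the final step, knot spacing ≈ flow speed × period, is the relation the text describes. All numbers are placeholders, not simulation output.

```python
import math

# Placeholder estimate of the knot spacing from a local oscillation period.
a_local = 5.0e-2     # local acceleration near the core (code units, assumed)
d_local = 1.0e-1     # characteristic displacement of the oscillation (assumed)
v_flow  = 2.0        # outflow speed past the knot generator (assumed)

T_osc   = 2.0 * math.pi * math.sqrt(d_local / a_local)   # assumed oscillator period
spacing = v_flow * T_osc                                  # expected knot separation

print(f"period ~ {T_osc:.2f}, knot spacing ~ {spacing:.2f} (code units)")
```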
Of course, the role of the magnetic field is critical for all
aspects of this calculation, including the condensation of the
knots. Much of the knot material originates from near the axis
where Bφ is the lowest (the φ-component of any vector must go
to zero toward a symmetry axis). As this material is pushed to
lower latitudes, the paucity of Bφ within the knot means its
combined magnetic and thermal pressure is lower than that of
its new surroundings, and it is compressed into the high density, low Bφ features we see in Fig. 4.
The knots represent a significant modification to the Blandford
& Payne model, one that only numerical simulations could
reveal. While these knots may not be directly related to the
observed knots (though we won’t know this for sure until we
do the calculations), they do have quantitative implications on
the mass and momentum fluxes transported by the jet which, in
these simulations, are still too low by a factor of two to ten
compared with measured fluxes from a typical stellar jet.
Reasons for this discrepancy may include:
1. the disc we impose as left boundary conditions may be unrealistic;
2. starting the simulation off with the disc suddenly rotating at t = 0 rather than allowing the disc and jet to co-evolve may have unrealistic consequences;
3. our initial atmosphere, which we know plays a critical role in the oscillatory nature of the knot generator, is probably unrealistic;
4. our disc extends only to 1 AU and yet in our own solar system, the solar disc extended at least to Neptune (30 AU), and probably much further.
It is for these reasons that we are taking the next step, and
doing a simulation to include the jet launching region, the
entire disc, and more realistic initial atmospheric and magnetic
configurations on a grid that will extend to observed scale
lengths (10^4–10^5 AU). This should give us a great deal more
insight into the nature of stellar jets and what, if anything, their
observed properties can tell us about the conditions where they
were launched.
Propagating Jets
We are also studying the effects of magnetism on an extragalactic jet propagating through the IGM. As shown in Fig. 6,
we initialise a uniform, quiescent, unmagnetised ambient medium and inject a light (ρ_jet/ρ_amb ≡ η = 0.02), supersonic (M = 10 is the Mach number, the ratio of the jet speed and local sound
speed) and magnetised jet, as indicated by the arrow at the bottom left corner of the figure. We do not concern ourselves here
with how the jet is launched, just how the jet, once launched,
interacts with its surroundings.
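For concreteness, a sketch of the inlet values implied by these parameters (code units, γ = 5/3, and a pressure-matched jet are assumptions of this sketch; whether M is defined with the jet or the ambient sound speed is a choice left open here):

```python
import numpy as np

# Sketch of the inlet ("orifice") state for the propagating-jet problem: a light
# (eta = rho_jet/rho_amb = 0.02), Mach-10 jet injected into a uniform ambient medium.
gamma   = 5.0 / 3.0
rho_amb = 1.0
p_amb   = 1.0
eta     = 0.02
mach    = 10.0

rho_jet = eta * rho_amb
p_jet   = p_amb                              # pressure-matched jet (assumed)
c_s_jet = np.sqrt(gamma * p_jet / rho_jet)   # sound speed in the jet material
v_jet   = mach * c_s_jet                     # M = v_jet / c_s

print(f"rho_jet = {rho_jet}, v_jet = {v_jet:.2f} (code units)")
```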
Questions to address include: What is the nature of the
Fig. 6
Schematic diagram showing the initialisation of the propagating jet problem (r_j = jet radius, v_j = jet velocity) with all
boundary types labeled. The inset indicates the scale of the
elongated zones used for both the launched jet and propagating jet problems, where the elongation is in the direction of
flow.
“hotspots” in AGN lobes such as Cygnus A (Fig. 1b)? How
does the magnetic field affect the appearance of the jet-lobe
system? What is the origin of the filaments observed in
Cygnus A and many other AGN lobes? Some of these questions are fairly well understood, some are still open. In either
case, this is an area where numerical simulations have taken the
lead role.
The early 2-D axisymmetric simulations of Norman et al. [33]
contributed two very important pieces of insight. First, the hot
spots at the lobe extremities mark a strong shock where the
supersonic jet flow is decelerated to subsonic speeds. Kinetic
energy is converted to magnetic and thermal energy, and these
increase the synchrotron emissivity (the primary emission
Fig. 7. a) Density distribution for a 2-D axisymmetric jet with a
very weak magnetic field; b) Density (upper) and Bφ (lower)
for a 2-D jet with a strong Bφ; c) Density (upper) and Aφ
(lower) for a 2-D jet with a strong Bp. In all cases, black indicates high values, white zero.
mechanism for extragalactic jets) dramatically in the hot spots
and lobes. Second, they found that dense jets (η > 1) are
“naked” while light jets (η < 1) enshroud themselves in a
cocoon or lobe. This computational observation along with the
astronomical observation that most AGN jets feed extended
“fluffy” lobes is the primary reason why it is believed that
η << 1 for most extragalactic jets.
Figure 7 shows three M = 10, η = 0.02 jets at the same evolutionary time that differ only in the nature of the magnetic field
they transport. Panel (a) depicts a jet with a weak magnetic
field, (b) a jet with a strong toroidal magnetic field
(β_tor ≡ 2μ_0 p/B_φ² = 0.2), and (c) a jet with a strong poloidal magnetic field (β_pol ≡ 2μ_0 p/B_p² = 0.2, where B_p² = B_r² + B_z²).
Panel (b) includes both density (upper) and Bφ (lower), whereas panel (c) includes both density (upper) and the φ-component
of the vector potential, Aφ (lower), whose contours follow magnetic field lines.
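The plasma beta quoted here is simply β = 2μ_0 p/B² evaluated with the toroidal or poloidal field strength; a small helper (with placeholder values, not values taken from the simulations) might look like:

```python
import numpy as np

def plasma_beta(p, B, mu0=4e-7 * np.pi):
    """Plasma beta as defined in the text: beta = 2*mu0*p / B**2."""
    return 2.0 * mu0 * p / B**2

# beta_tor from the toroidal component, beta_pol from B_p^2 = B_r^2 + B_z^2.
# The numbers below are placeholders for illustration only.
p, B_phi, B_r, B_z = 1.0e-9, 7.9e-8, 4.0e-8, 6.0e-8
beta_tor = plasma_beta(p, B_phi)
beta_pol = plasma_beta(p, np.sqrt(B_r**2 + B_z**2))
print(beta_tor, beta_pol)
```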
All flow is from left to right, with the actual jet confined to near
the symmetry axis. In Fig. 7a, for example, the jet streams
along the bottom 1/20 of the image and is characterised by several condensations (grey knots) along the axis from which
oblique shocks (grey streaks pointing to the left) are anchored.
The large light-grey and turbulent region above the jet is the
cocoon filled with exceedingly hot (~10^9 K) jet material that
passed through the Mach disc at the jet terminus. The bow
shock leading the jet and surrounding the cocoon is in the
ambient medium, and the sharp separation between the light
grey cocoon and the dark grey shocked ambient medium is the
contact discontinuity. This is known to be Kelvin-Helmholtz
unstable, whence the many undulations and “fingers” of ambient medium reaching into the cocoon.
Consider first the jet with the weak magnetic field in Fig. 7a. If
this jet were ballistic, it would have propagated eleven times
the distance shown. Instead, with η = 0.02, this jet resembles a
jet of compressed air in water; it manages to advance, but
results in a lot of backflow, internal shocks, and an extended
and turbulent cocoon [33].
On the other hand, the hoop-stress of a strong toroidal magnetic field provides rigidity even to a light jet, allowing the jet to
present a “sharper” leading point as it propagates [34]. Thus, the
jet in Fig. 7b advances further with less back-flowing material
and a narrower cocoon. The opposite is true with the poloidal
field jet. Large poloidal flux loops form at the head of the jet
and “peel away” from the axis giving the jet a “blunter” presentation than even the hydrodynamical jet. Progress is correspondingly slower and more material is deflected into the
cocoon. In Fig. 7c, the cocoon is as narrow as it is because of
the “transparent” boundary conditions above the jet inlet
(Fig. 6) and most of the cocoon material flows off the grid at
the left side. Were reflecting conditions imposed, this material
would remain on the grid and the cocoon would be substantially larger.
Finally, Fig. 8 shows line-of-sight integrations of the synchrotron emissivity of two 3-D simulations, and represents how a radio telescope might observe such objects. Figures 8a and 8b are, respectively, jets with a weak (β_φ = 10^5) and strong (β_φ = 0.2) toroidal magnetic field. Unlike 2-D, in 3-D the
toroidal field is free to move off the initial symmetry axis, and
the field is mostly poloidal in the cocoon. Both jets have
M = 10 and η = 0.1, with η chosen higher than the 2-D jets to
reduce computational time. Even still, each jet took several
days on a 16-core, 2.4 GHz cluster to complete.
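A hedged sketch of how such a synthetic radio map can be built: integrate an emissivity proxy through a 3-D box along the chosen line of sight. The proxy ε ∝ p B_⊥² used below is one common simplistic assumption of the kind alluded to later in the text, and the random arrays merely stand in for simulation data.

```python
import numpy as np

# Line-of-sight "radio map" sketch: sum a simple synchrotron emissivity proxy
# (pressure times the squared field component perpendicular to the line of
# sight) through a 3-D box along the z axis.
rng = np.random.default_rng(0)
nx = ny = nz = 64
p  = rng.random((nx, ny, nz))          # thermal pressure (placeholder data)
Bx = rng.normal(size=(nx, ny, nz))     # magnetic field components (placeholder data)
By = rng.normal(size=(nx, ny, nz))
Bz = rng.normal(size=(nx, ny, nz))

B_perp2 = Bx**2 + By**2                # component perpendicular to a z line of sight
emissivity = p * B_perp2               # simplistic emissivity proxy (assumption)
radio_map = emissivity.sum(axis=2)     # integrate along the line of sight

print(radio_map.shape)                 # a 64 x 64 synthetic "image"
```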
The most striking aspect of Fig. 8a is the filamentary nature of
the cocoon, and its similarity with the Cygnus A lobe in
Fig. 1b. In the simulations, the filaments are a result of turbulent eddies in the lobe wrapping the weak magnetic field into
bundles. This is such a fundamental property of weak-field turbulence, that one might expect all extragalactic radio lobes to
be filamentary, and indeed all well-resolved (and thus near-by)
radio lobes are (e.g., Pictor A, M87, 3C 219, Centaurus A, to
name the best-known).
However, some of the more distant radio lobes do not resemble
Cygnus A. The lobe associated with the quasar 1007+417
(Fig. 1c) is not filamentary and is confined to the head of the
jet. Superficially, it resembles Fig. 8b, suggesting 1007+417
may be an example of a jet transporting a strong toroidal magnetic field. Could qualities such as “extended and filamentary”
vs. “confined and smooth” be indicators of local magnetic field
strength?
Unfortunately, things are never so simple. Other 2-D and 3-D
simulations show that a jet with a strong magnetic field can still
have an extended, filamentary lobe provided it has a low
enough η and high enough Mach number, M. On crossing the
terminal jet shock (Mach disc), jumps in density and magnetic
field asymptote to finite values as M → ∞, while the jump in
thermal pressure is unbounded. Thus, no matter how strong the
field transported by the jet may be, a sufficiently strong shock
can render the post-shock magnetic field dynamically weak,
and the formation of an extended filamentary cocoon can
result.
Further progress requires lots of computing power and an accurate MHD solver including as much physics as practical and
possible. This includes a model for the emissivity that can
exploit all the information the observations can yield. For
example, synchrotron emission depends on the magnetic field
and the relativistic electrons embedded in the fluid. The former
we have, but we have had to make overly simplistic assumptions about the latter in creating Fig. 8. As part of his M.Sc. thesis, NRM is modifying ZEUS-3D to account for the energy
gains and losses of the underlying electron population as it
experiences the MHD effects of the overlying fluid. This will
improve our ability to compare simulations with observations
and untangle the competing physical effects responsible for the
nature of extragalactic jets.
ACKNOWLEDGEMENTS
Fig. 8. Line-of-sight integrations of the synchrotron emissivity for
two 3-D jet simulations with a) a dynamically weak magnetic field (β >> 1), and b) with a dynamically strong toroidal
magnetic field at the orifice. As with Fig. 1, these images are
shown with inverted palettes.
Support from NSERC through its DG, PGA, and USRA programmes is gratefully acknowledged. Some simulations were
performed on facilities provided by the Atlantic Computational
Excellence Network (ACEnet), funded by the CFI, ACOA, the
provinces of Nova Scotia, Newfoundland & Labrador, and
New Brunswick, and SUN Microsystems.
REFERENCES
1. K.M. Blundell and M.G. Bowler, ApJ, 616, L159 (2004).
2. W. Baade and R. Minkowski, ApJ, 119, 215 (1954).
3. H.D. Curtis, Pub. Lick Obs., 13, 31 (1918).
4. H. Shapley and H.D. Curtis, Bull. of the Nat. Research Council, 2, 171 (1921).
5. W.B. Sparks, J.A. Biretta and F. Macchetto, ApJS, 90, 909 (1994).
6. A.S. Wilson, A. Young and P.L. Shopbell, ApJ, 547, 740 (2001).
7. S. Burnham, MNRAS, 51, 94 (1890).
8. R.L. Snell, R.B. Loren and R.L. Plambeck, ApJ, 239, L17 (1980).
9. R.A. Perley, J.W. Dreher, and J.J. Cowan, ApJ, 285, L35 (1984).
10. C.L. Carilli and P.D. Barthel, Astron. Astrophys. Rev., 7, 1 (1996).
11. A. Königl and R.E. Pudritz, in Protostars and Planets IV, ed. V. Mannings et al., Tucson: Univ. Arizona Press, 759 (2000).
12. F.H. Shu, J.R. Najita, H. Shang and Z.-Y. Li, in Protostars and Planets IV, ed. V. Mannings et al., Tucson: Univ. Arizona Press, 789 (2000).
13. J. Woitas, F. Bacciotti, T.P. Ray, A. Marconi, D. Coffey and J. Eislöffel, A&A, 432, 149 (2005).
14. E.M. de Gouveia Dal Pino, Adv. in Space Research, 25, 908 (2005).
15. D. Lynden-Bell, Phys. Scripta, 17, 185 (1978).
16. M.J. Rees, M.C. Begelman, R.D. Blandford, and E.S. Phinney, Nature, 295, 17 (1982).
17. J.C.B. Papaloizou and J.E. Pringle, MNRAS, 213, 799 (1985).
18. R.D. Blandford and D.G. Payne, MNRAS, 199, 883 (1982).
19. R.E. Pudritz and C.A. Norman, ApJ, 301, 571 (1986).
20. F.H. Shu, S. Lizano, S.P. Ruden and J. Najita, ApJ, 328, L19 (1988).
21. G.V. Ustyugova, A.V. Kolboda, M.M. Romanova, V.M. Chechetkin and R.V.E. Lovelace, ApJ, 439, 39 (1995).
22. R. Ouyed and R.E. Pudritz, ApJ, 484, 794 (1997).
23. R. Krasnopolsky, Z.-Y. Li, and R.D. Blandford, ApJ, 526, 631 (1999).
24. C. Fendt and D. Elstner, A&A, 363, 208 (2000).
25. R. Ouyed, D.A. Clarke, and R.E. Pudritz, ApJ, 582, 292 (2003).
26. J.M. Anderson, Z.-Y. Li, R. Krasnopolsky, and R.D. Blandford, ApJ, 630, 945 (2005).
27. R. Banerjee and R.E. Pudritz, ApJ, 641, 949 (2006).
28. P.A. Davidson, An Introduction to Magnetohydrodynamics, Cambridge University Press: Cambridge, ISBN 0 521 79487-0 (2001).
29. D. Biskamp, Nonlinear Magnetohydrodynamics, Cambridge University Press: Cambridge, ISBN 0 521 59918-0 (1997).
30. W. Dai and P.R. Woodward, J. Comput. Phys., 142, 331 (1998).
31. D.A. Clarke, ApJ, 457, 291 (1996).
32. R. Ouyed and R.E. Pudritz, MNRAS, 309, 233 (1999).
33. M.L. Norman, L. Smarr, K.-H. Winkler, and M.D. Smith, A&A, 113, 285 (1982).
34. D.A. Clarke, M.L. Norman, and J.O. Burns, ApJ, 311, L63 (1986).
CONGRATULATIONS

CREATORS OF ATTOSECOND SCIENCE, TWO CAP MEMBERS, WIN 2007 NSERC POLANYI AWARD (MAR. 3/08)
Two Canadian researchers have caused a revolution in molecular science by developing most of the main concepts of a new field known as "attosecond science." Attosecond science fuses chemistry and physics to arrive at the innovative idea of using intense, ultra-short laser pulses to image and even ultimately control molecules.

The achievements by ANDRÉ BANDRAUK, Canada Research Chair in Computational Chemistry and Molecular Photonics at the Université de Sherbrooke, and PAUL CORKUM, Senior Scientist at the National Research Council's Steacie Institute and Professor of Physics at the University of Ottawa, have earned them this year's $250,000 John C. Polanyi Award from the Natural Sciences and Engineering Research Council of Canada.

The researchers have combined the power of supercomputing and the latest laser technology to control and manipulate matter with lasers at the molecular level, both spatially and temporally. Their research could lead to advances in materials technology, bio-photonics and high-bandwidth telecommunications.

Drs. Bandrauk and Corkum are known for developing the science of molecules exposed to intense laser light and for exploiting this new science to generate and measure shorter light pulses and new methods for imaging and controlling molecules.

A theoretical chemist, Dr. Bandrauk began a productive collaboration in laser science with Dr. Corkum, an experimental physicist, in the mid-1980s. Their work over two decades has built on Dr. Polanyi's seminal work through Dr. Corkum's advanced laser experiments and Dr. Bandrauk's sophisticated state-of-the-art supercomputer simulations.

Their first work used chirped picosecond pulses, those with time-varying frequencies, to control molecular dissociation. Now, six orders of magnitude faster, they have shown that intense chirped attosecond pulses can monitor and control the electron and its complex quantum wave motion in matter, thus reflecting a major breakthrough in molecular science. An attosecond is a billionth of a billionth of a second – one thousand times faster than the femtosecond that was previously used as the measure for the shortest controlled light pulse. For example, an athlete winning a race by an attosecond would be ahead by less than the width of an atom.

Investigations of the inner workings of a molecule by taking atoms apart and putting them back together have led to a better understanding of the quantum nature of the smallest bits of matter in the universe. A conceptual new model was first introduced by Dr. Corkum and confirmed by joint theoretical and experimental collaboration. This allowed an electron to orbit and re-collide with an atom or molecule under the influence of an intense laser pulse, resulting in a revolutionary method that emits a light pulse faster than ever before.

Attosecond pulses open up new avenues for time-domain studies and control of multi-electron dynamics in atoms, molecules, plasmas and solids on their natural, quantum mechanical time scale and at dimensions shorter than molecular and even atomic scales. These capabilities promise to change the understanding of matter where the quantum wave nature of matter dominates.

Previously, chemists thought that intense light pulses would destroy molecules rather than produce interesting science. Dr. Bandrauk showed, through theory and supercomputer modelling, that new molecules could be created with intense short laser pulses. The original investigation by the two researchers of the nonlinear, nonperturbative reaction of atoms and molecules to intense laser pulses resulted in the major discovery of the generation of attosecond pulses.

As well, The Economist reported last year that attosecond science has brought the world's scientific community into "an era of control of the quantum world." It said further: "If such processes (electronic) could be manipulated, then it would have applications in fields as far apart as computing and medicine." That is the next challenge for the researchers: the application of attosecond pulses to controlling electrons inside molecules. These advances in visualizing, understanding and ultimately controlling the wave nature of one of the most mysterious objects in science – the electron itself – have put the Bandrauk-Corkum group of researchers at the forefront of this new area of science. It also highlights Canada's eminence in a new field of research that can be referred to as "dynamic imaging."

NOTE: At the May 28 meeting of DAMOP-APS (Penn State U) Paul Corkum and André Bandrauk will be elected Fellows of the APS.
ARTICLE DE FOND
FINITE ELEMENT ANALYSIS IN SOLID MECHANICS: ISSUES AND TRENDS
BY
NADER G. ZAMANI
The area of Finite Element Analysis (FEA) has
become a standard tool in the numerical solution
of the field equations governing physical problems. These field equations arise in diverse areas
such as: solid mechanics, fluid dynamics, heat transfer,
electrostatics, and electromagnetism. Generally speaking,
these are coupled, nonlinear, and time dependent partial
differential equations which describe some form of conservation law. Depending on the nature of the field, the
conservation of momentum, energy, and charge are normally taken into account.
The concepts behind these field equations have been
known to the science community since the early 1800s.
These concepts are attributed to prominent physicists/mathematicians such as Euler, Lagrange, Laplace,
Bernoulli, and Fourier to name a few. Some of the approximate numerical schemes which are the basis of the FEA
approach are due to well known scientists such as
Rayleigh, Ritz, and Galerkin. The breakthrough, however,
came about due to the development of high speed digital
computers. At that point, the early numerical schemes
were altered and modified making them efficient, accurate, and feasible for implementation on computers.
The first comprehensive numerical solution which
embraced FEA concepts in its modern form is attributed to
Courant [1] where he used piecewise linear polynomials on
a triangular mesh to solve the Laplace equation. Although
this work set the wheel in motion, it was not until the early
1960s that serious development in FEA started. It is not
surprising that this activity coincided with the development of high speed computers in the private sector. One of
the early projects along this path was the development of
the NASTRAN program which was created by NASA for
structural analysis [2]. This code, which still exists, has gone
through continuous updating and improvements and prob-
SUMMARY
This expository paper discusses the area of
Finite Element Analysis (FEA) as pertaining
to the subject of solid mechanics. FEA as a
computational tool has evolved rapidly in the
past fifty years and continues to do so with
technological advances in the computer
industry. The paper briefly presents a historical background together with the current
status of the field, and the future trends.
ably is the most widely used FEA software for structures.
Due to the NASA connection, the NASTRAN code is in
the public domain and the source code can be acquired at
no cost to the user.
APPROPRIATE ELEMENT SELECTION
To avoid total generality, the rest of this expository article
focuses on the FEA formulation for solid mechanics applications. Other areas such as fluids, heat transfer, and electromagnetism follow the same track. In structural analysis,
the primary variable of interest is the displacement vector.
Once the displacements are determined, strains can be
computed, and based on the material response, the stresses are evaluated. For the sake of illustration, assume a linear behavior in kinematics and constitutive law.
Depending on the topological nature of the structure, the
three most common elements are solid, shell, and beam
elements which are symbolically displayed in Fig. 1. The
number of nodes is one of the factors determining the
accuracy of the results [3].
Fig. 1
(a) solid, (b) shell, (c) beam elements
If the topology is one dimensional (or a composite of one-dimensional parts) such as the frame of a building, or a communications tower, the beam elements have to be used. On
the other hand, if the topology is two-dimensional such as
a pressure vessel or aircraft wing, the shell elements covering surfaces are the most appropriate. Finally, for a
bulky object with no specific topological characteristics,
solid elements are commonly used. In principle, every
structure can be modeled with solid elements, but the
demands on resources make it impractical even with
today’s computing power. All these elements have to be
modified in one form or other to be able to handle special
situations such as: material incompressibility, material
plasticity and viscoelasticity.
N.G. Zamani
<zamani@uwindsor.
ca>, Department of
Mechanical,
Automotive and
Materials Engineering
University of
Windsor, Windsor,
Ontario, Canada
LINEAR VS. NONLINEAR RESPONSE
In a finite element analysis, there are three sources of nonlinearity. These are labeled as geometric, material, and
contact [4]. For the special case of strictly linear problems,
the details of the code may vary, but all FEA codes basically
give the same results. This is assuming that the same elements
are used and the same numerical integration algorithm is
employed. The minor differences are due only to code implementation.
A geometric nonlinearity refers to the case of large displacements, large rotations, and large strains. These are considered
to be mild nonlinearities which can easily be handled with a
good iterative solver. All nonlinearities require an iteration
approach for the numerical solution. Such algorithms are variations of the Newton-Raphson method or its secant implementation. For a mnemonic of this behavior, see Fig. 2(a).
The material response is also known as the constitutive law.
This represents the relationship between the stress and the
strain (or force and deflection). Most materials display a linear
response in a very narrow range. To give the reader a better
idea, consider the stretching of a rubber band. For small forces,
the relationship between the applied force and the resulting
stretch is linear. However, very quickly this linearity is lost and
a rather complicated path is traversed. This is an example of a
nonlinear elastic response. The situation in plasticity is considerably more complicated but falls into the category of nonlinear constitutive response. Material nonlinearity is also considered to be a mild nonlinearity. For a mnemonic of this behavior, see Fig. 2(b).
Presently, the majority of commercial FEA codes are capable
of handling both geometric and material nonlinearities. The
degree of their performance (in terms of efficiency and accuracy) varies from code to code. Furthermore, some codes have a
vast database of material properties which could be preferred
by the users.
Fig. 2
(a) nonlinear geometry, (b) nonlinear material, (c) nonlinear
contact
The most severe type of nonlinearity is generally due to contact
condition. Basically, any type of metal forming application
such as forging, stamping, and casting require contact calculations, see Fig. 2(c). The mathematical tools for handling contact algorithms involve the Lagrange multiplier and/or constrained optimization. A poor formulation often leads to lack of
convergence and other numerical difficulties.
STATIC VS. DYNAMIC RESPONSE
Technically speaking, all loads are dynamic (time dependent)
in a real world environment. The main issue is whether the
inertia effect (mass times acceleration) is significant compared
to other loads. In a nutshell, this has to do with the duration of
the applied load compared to the natural periods (inverse of the
natural frequencies) of the structure [5]. For example, an impact
load on many occasions leads to a substantial inertia effect. In
terms of dynamic analysis, commercial FEA packages give the
user several options for carrying out the calculations. This is
schematically represented by Fig. 3. For linear problems, the
modal superposition is generally available. The user can select
the number of modes and therefore control the accuracy of the
results.
One can also use the full history analysis by numerically integrating the equations of motion in time. The term full time history analysis refers to the fact that the governing field equation
is a time dependent differential equation. Therefore, the
unknowns are also time dependent. This system of differential
equations has to be solved numerically by an approximate integration in time. Clearly, in nonlinear problems, this is the
default approach. A variety of integration routines is available
but the most common ones are the central differencing and the
Newmark method. The former is conditionally stable whereas
the latter is unconditionally stable.
Fig. 3
(a) Static response for a slowly varying load, (b) Dynamic
response for a fast varying load
IMPLICIT VS. EXPLICIT FORMULATION
There are two finite element methodologies in solid mechanics.
These are known as the Explicit and Implicit methodologies [6].
The term Explicit refers to the fact that when numerical integration in time is carried out, a predicted entity can be written
directly in terms of the past values without actually solving a
system of algebraic equations. Whereas in the Implicit
approach, in order to calculate a predicted value, one is bound
to solve a system of equations and therefore more computation
is involved. Both are designed to solve (integrate) the equations
of motion in time. The equation of motion for the linear case
can be stated as [M]{ẍ}+[C]{ẋ}+[K]{x}={F(t)}. Here [M], [C],
and [K] are the mass, damping and stiffness matrices respectively. The vector {x} represents the displacement vector and
{F(t)} is the vector of applied loads.
The methodology used depends on the nature of the application
being considered. Generally speaking, short duration events
such as metal forming, crashworthiness, and detonation require
explicit codes. In such codes, the mass matrix is approximated
to become diagonal, and the central differencing method is
used for time integration. The stiffness matrix is not stored in
its entirety at every time step and no iterations are carried at
each time step. However, the conditional stability of central
differencing requires an extremely small time step selection.
There are very few explicit FEA codes and they require considerable computing resources.
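As an illustration of the explicit idea (a sketch only, not any particular commercial code), the central-difference update below advances M{ẍ}+C{ẋ}+K{x}={F(t)} for a two-degree-of-freedom system with a lumped (diagonal) mass matrix: no system of equations is solved at any step, but the time-step must respect the stability limit. The matrices and the applied load are made up for the example.

```python
import numpy as np

# Explicit central-difference time integration with a lumped mass matrix.
M = np.diag([1.0, 1.0])                       # lumped (diagonal) mass
C = 0.02 * np.eye(2)                          # light damping
K = np.array([[ 4.0, -2.0],
              [-2.0,  4.0]])                  # stiffness
F = lambda t: np.array([0.0, np.sin(5.0 * t)])

# Keep dt well below the stability limit ~ 2/omega_max of central differencing.
dt = 0.01 * 2.0 / np.sqrt(np.max(np.linalg.eigvalsh(K)) / np.min(np.diag(M)))
x_prev = np.zeros(2)                          # displacement at t - dt
x      = np.zeros(2)                          # displacement at t
t = 0.0
for _ in range(2000):
    v_approx = (x - x_prev) / dt              # backward-difference velocity estimate
    accel = (F(t) - C @ v_approx - K @ x) / np.diag(M)   # trivial "solve": M is diagonal
    x_next = 2.0 * x - x_prev + dt**2 * accel # central-difference update, no matrix solve
    x_prev, x, t = x, x_next, t + dt

print(x)
```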
The majority of the existing commercial FEA codes however
are based on an implicit formulation. This is not surprising as
the bulk of the design problems in engineering and product
developments can satisfactorily be handled with the implicit
FEA formulation. There is another important difference
between the implicit and explicit codes. In nonlinear problems,
implicit codes require substantial iteration steps. If the conditions are not realistic, the solution usually diverges and the user
is informed. However, in explicit calculations, since no iteration is involved, the software always arrives at a solution. The
difficulty is that this may not be the solution to the problem
under consideration.
It is worth mentioning that problems which ordinarily can be
solved with an implicit code can also be solved with an explicit one. However, the extremely small time step will dictate an
unreasonable solution time. The remedy is referred to as the
mass scaling option that is available in explicit codes. In this
option, the density of the material is artificially changed to
result in an attainable run time. One should carefully check the
energy history to make sure that the results are not contaminated by non-physical effects.
MESH ADEQUACY AND REFINEMENT
One of the most common questions when dealing with finite
element analysis is how small a mesh should be used for a
desirable accuracy. The user should be reminded that one cannot provide an answer without performing a mesh convergence
study. Basically, making a single run regardless of how small
the elements are, provides no information on the accuracy. The
key is in making a sequence of runs with decreasing element
size and comparing the differences in the results. Of course the
refinement should be performed in the critical regions and the
comparison of the results should also be made in the critical
locations. When the percentage change is to the user’s satisfaction, the mesh is assumed to be satisfactory. It is well known
that displacements converge faster than the stresses; however,
the latter entities are more important. Therefore, the convergence should be based on the stress variable.
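A sketch of such a convergence study follows (the solver call is hypothetical; run_fea_model() stands in for whatever FEA package is in use, and its made-up return values merely mimic a peak stress that converges as the mesh is refined):

```python
# Mesh-convergence loop: refine, re-solve, and stop when the peak stress at the
# critical location changes by less than a chosen tolerance.
def run_fea_model(element_size):
    """Placeholder: return the peak von Mises stress for a given element size."""
    # Made-up value that converges with refinement (illustration only).
    return 250.0 + 40.0 * element_size**1.5

tolerance = 0.02            # accept a 2% change between successive refinements
element_size = 8.0          # initial (coarse) element size, in model units
previous = run_fea_model(element_size)

while True:
    element_size /= 2.0                      # refine the mesh in the critical region
    current = run_fea_model(element_size)
    change = abs(current - previous) / abs(current)
    print(f"h = {element_size:5.2f}  stress = {current:7.2f}  change = {change:.3%}")
    if change < tolerance:
        break
    previous = current
```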
The strategy above is known as “h” refinement. There are two
other strategies known as the “r” and “p” methods [7]. In the
“p” method, the mesh is fixed but the degree of the approximating polynomial is increased. Although the “p” strategy displays promising results in linear problems, it is not available in
most commercial codes.
The final refinement strategy is the so called “r” method.
There, the number of nodes (and elements) is fixed. However,
their locations are adaptively changed to reduce the error estimate. This method has also been implemented for the
Boundary Element Method for linear problems [8]. Currently,
most commercial FEA codes have an adaptive (automatic)
mesh refinement capability for solving linear problems.
Sophisticated error estimators are used to perform the refinement strategies [9].
SOURCES OF ERROR IN AN FEA
CALCULATION
Understanding the sources of error in a finite element calculation is vital to obtaining good results [10]. In this section, these
sources are briefly described. The most obvious source is the
mathematical model that is expected to represent a physical
phenomenon. This source is beyond the control of the typical
user. Engineers and physicists are primarily responsible for
arriving at an accurate model. The second source is the approximation of the physical domain with the finite element model.
If the boundaries of the domain are curved surfaces, finite elements may only approximately represent this domain as shown
in Fig. 4(a). The use of higher order elements can reduce this
error. Mesh refinement will also reduce this error. The interpolation error is displayed in Fig. 4(b). The nature of the shape
functions dictates how well the finite element functional variation approximates the exact solution. Higher order elements
approximate the exact solution more accurately.
The error in numerical integration is also a critical factor in
controlling the error. This is symbolically displayed in Fig. 4(c)
where the area under a curve represented by an integral is
approximated by the area of the trapezoid. There are however
circumstances where some error is intentionally introduced in
the integration process. This eliminates the possibility of unrealistically stiff structures [11]. The final source of error is the
mathematical round-off which could dramatically affect the
results. There are different reasons for this undesirable effect.
Among the reasons are the single precision calculations,
extreme mesh transition, and hard/soft regions being present [12].
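To illustrate the integration-error point, the fragment below compares one-point ("reduced") and two-point Gauss quadrature on an arbitrary quadratic integrand: the two-point rule is exact, while the one-point rule is deliberately not, which is the kind of intentional under-integration mentioned above.

```python
import numpy as np

# Compare reduced (1-point) and full (2-point) Gauss quadrature on [-1, 1].
f = lambda xi: 3.0 * xi**2 + 2.0 * xi + 1.0      # exact integral on [-1, 1] is 4

for n in (1, 2):
    pts, wts = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre points and weights
    approx = np.sum(wts * f(pts))
    print(f"{n}-point Gauss: {approx:.4f}  (exact = 4.0)")
```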
Fig. 4
(a) physical domain approximated by the finite element
domain, (b) exact solution approximated by the finite element solution, (c) area under curve approximated by area
under line
OPTIMIZATION
The primary role of a commercial finite element package is to
perform analysis. However, the ability to perform analysis naturally leads to the idea of optimization. In this situation, the
objective function, constraints and design variables are defined
first. A sequence of analyses is performed which systematically updates the design variables such that the objective function
is optimized [13]. The optimization calculations can be based on
the gradient methods or more recent approaches such as the
Genetic algorithm [14]. Most recent commercial codes have an
optimization module. To give a concrete example of how optimization is used, consider the design of a loaded part to have a
LA PHYSIQUE AU CANADA / Vol. 64, No. 2 ( avr. à juin (printemps) 2008 ) C 57
FINITE ELEMENT ANALYSIS (ZAMANI)
minimum weight, where the von Mises stress is to remain
below the yield strength of the material.
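A sketch of that example using a generic optimizer (scipy's SLSQP) in place of a commercial optimization module: the single design variable, the closed-form weight and stress models, and all numerical values are hypothetical stand-ins for a real FEA-driven loop.

```python
import numpy as np
from scipy.optimize import minimize

# Minimise the weight of a loaded part subject to the von Mises stress staying
# below the yield strength.  A hypothetical closed-form stress model of one
# design variable (wall thickness t, in mm) stands in for the FEA solver.
density, yield_strength = 7.85e-6, 250.0        # kg/mm^3 and MPa (illustrative)

def weight(t):                                   # objective: mass of the part
    return density * 1.0e5 * t[0]                # area * thickness (placeholder geometry)

def von_mises(t):                                # hypothetical stress vs. thickness
    return 1200.0 / t[0]                         # thinner wall -> higher stress

constraints = [{"type": "ineq", "fun": lambda t: yield_strength - von_mises(t)}]
result = minimize(weight, x0=[10.0], bounds=[(1.0, 50.0)],
                  constraints=constraints, method="SLSQP")

print(f"optimal thickness = {result.x[0]:.2f} mm, weight = {result.fun:.3f} kg")
```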
VECTORIZATION FOR MULTIPROCESSING
The mainframe supercomputers appeared in the market about
twenty-five years ago. This prompted the FEA software companies to revise and reexamine their codes to run efficiently on
these machines. It mainly consisted of vectorizing their codes
to utilize the multiprocessor nature of the supercomputers. The
multiprocessing capability has recently been introduced in the
personal computer (PC) market. Currently, the major FEA software packages have separate installations which allow them to use a number of processors. Naturally, the licensing cost for such versions is higher than for a single-processor version.
PRE AND POSTPROCESSING CAPABILITIES
It was not very long ago that commercial FEA software relied solely on third-party pre- and post-processors. The finite element software companies primarily put their efforts into the
solver module. This caused a great deal of inconvenience for
the user who needed to invest additional time to train in separate software. This was particularly troublesome when FEA
software was marketed to operate on personal computers.
The turnaround solution was achieved by two approaches.
Some FE software packages were modified to have a proprietary pre- and post-processor written from scratch to handle the needs,
while others incorporated third party codes and integrated them
with the solver module. This allowed the user to seamlessly
perform the pre- and post-processing, and run the finite element analysis simultaneously. Currently, all commercial FEA
software packages have their own pre- and post-processors.
They also have the flexibility of transferring data to and from
third party software.
Mesh generation still remains a challenging issue in a preprocessor. This is particularly the case when the geometry
under consideration is complicated. As an example, one can
visualize the meshing of a full automobile engine block.
Creating a free mesh using tetrahedron elements is now feasible regardless of the complexity of the geometry. However, if
all hexahedral elements are needed, the situation is not completely satisfactory. Mesh generation remains an active
research area in applied mathematics.
FEA AND CAD SOFTWARE INTEGRATION
A large number of general-purpose commercial FEA software packages have been developed and made publicly available since the early 1970s. This is also the case with CAD packages, which are widely used in industry. Clearly, the spectrum of CAD software is rather wide, depending on capabilities. These packages are traditionally used by so-called designers who are not formally trained in physics or
engineering. Their experience is gained by on-the-job training
and they usually act as the interface between the production
(fabrication) and the engineering divisions.
The global trend is the elimination of such positions and
replacing them with qualified engineers or physicists. This has
prompted the integration of FEA modules in standalone CAD
packages. The numbers of CAD and FEA software packages
have dwindled in the past decade and now there are a handful
of fully integrated CAD/FEA packages which are referred to as
CAE software. In this context, CAE also embraces
Computational Fluid Dynamics (CFD) modules. Therefore, the
analysis capabilities are seamlessly integrated with CAD features. The cycle does not end at this stage; in most cases the analysis is directly linked to the Computer Aided Manufacturing (CAM) area, which is the end of the product development cycle. It is expected that this trend will continue, with only a few fully integrated CAE software packages handling the entire design
process.
CLOSING REMARKS
One of the important points raised in this expository article is that not all FEA software packages are the same. The user should clearly identify the needs of his/her analysis. The decision should also factor in the CAD requirements. The cost of the software is directly linked to the capabilities acquired. Another factor which should be seriously taken into account is the type and level of technical support available for the CAE software. One should not
assume that the software’s documentation is sufficient and well
enough written for an average reader. This could be a major
issue, as training courses can be extremely expensive, and in
some cases not even available. Online searches and sharing
information with other users can be of great value to decide
which software fits the users’ needs.
REFERENCES
1. R. Courant, Bull. Am. Math. Soc. 49, 1 (1943).
2. R.H. MacNeal, NASTRAN Theoretical Manual, NASA-SP, 221, 3 (1976).
3. R.D. Cook, Finite Element Modelling for Stress Analysis, John Wiley and Sons (1995).
4. K.J. Bathe, Finite Element Procedures, Prentice Hall (1996).
5. T.J. Hughes, Linear Static and Dynamic Finite Element Analysis, Prentice Hall (1987).
6. T. Belytschko, W.K. Liu, and B. Moran, Nonlinear Finite Elements for Continua and Structures, John Wiley and Sons (2000).
7. J.N. Reddy, Introduction to the Finite Element Method, McGraw Hill (2006).
8. N.G. Zamani and W. Sun, IJNME, 44, 3 (1991).
9. B. Szabo and I. Babuska, Finite Element Analysis, John Wiley and Sons (1991).
10. P.G. Ciarlet, The Finite Element Method for Elliptic Problems, North Holland (1978).
11. O.C. Zienkiewicz and R.L. Taylor, The Finite Element Method, McGraw Hill (1988).
12. G. Strang and G. Fix, An Analysis of the Finite Element Method, Prentice Hall (1973).
13. S. Moaveni, Finite Element Analysis, Theory and Applications with ANSYS, Prentice Hall (2008).
14. G. Lindfield and J. Penny, Numerical Methods Using MATLAB, Prentice Hall (2000).
ARTICLE DE FOND
SUPERCOOLED LIQUIDS AND SUPERCOMPUTERS
BY IVAN SAIKA-VOIVOD AND PETER H. POOLE

SUMMARY: High performance computing is now central to efforts to resolve long-standing questions on the nature of cold, dense liquids, and on how they transform to crystalline and amorphous solids. We review several current examples.
The liquid state provides condensed matter physics with some of its longest standing and most perplexing questions [1,2]. By comparison, the fundamental nature of the other conventional phases, gas and crystal, is much better understood. Our understanding of how the properties of gases and crystals arise
from interactions at the molecular level was (and continues to be) facilitated by the availability of idealized but
exactly-solvable limiting models as tractable starting
points for theory, and by systematic techniques for extending these ideal models to recover the properties of realistic systems. For example, for gases, we can start with the
ideal gas, and then make progress toward a real gas by
adding terms to a virial expansion. For crystals, the
Einstein crystal provides an idealized starting point, and
successive improvements can be made by progressing
through e.g. the Debye model, to better reveal the thermodynamics of crystals; or Bloch’s theorem, to provide a
starting point for understanding electronic properties.
For liquids, the situation is different. Liquids lack the discrete symmetry and long-range molecular order of crystals, and so the simplifications exploited in much of solid
state physics simply do not apply. Superficially, the structure of a liquid seems to have more in common with that
of the gas phase, at least from the standpoint of symmetry.
However, the vast difference in density between a gas
(well above the critical temperature) and a liquid (near the
freezing temperature) precludes the use of the ideal gas as
a starting point for any practical approach to studying the
liquid. The densities of most liquids near freezing are
within ten percent of the corresponding crystal density.
Hence, many-body molecular interactions play a dominant
role in determining the properties of the liquid state,
whereas in a typical real gas these can usually be treated
as perturbations to the ideal gas.
In addition, many of the most interesting questions about liquids have to do with their evolution in time, in particular their dynamical behaviour in equilibrium (e.g. diffusion), and with how they behave when they are out of equilibrium (e.g. during the transformation of a liquid to a solid). Liquids consequently present us with a theoretical
“perfect storm”: we face all the complexity of a dense, disordered, strongly interacting, many-body system; we are
denied simplifications based on symmetry, as in crystals;
and we must not only treat the physics of a disordered
structure, but also how the structure changes with time. Of
course, liquids are not the only physical system to present
such barriers to understanding. Indeed, much of modern
statistical physics is focussed on systems where time-varying disorder is a central feature (e.g. granular matter, frustrated magnetic systems). Liquids are simply a commonly
encountered, and historically important, case.
In this article, we will focus our attention on one regime
where these challenges come strongly to the fore: in the
supercooled liquid state [1]. By this, we mean the liquid
state that can be observed if a liquid is cooled to a temperature T below the crystal melting temperature Tm. While
the supercooled state is a metastable one, in the sense that
the crystal has a lower free energy than the liquid, almost
all liquids can be studied for some range of T below Tm on
a time scale long enough for a metastable liquid-state
equilibrium to be established. Interest in this regime
derives in large measure from the fact that solid matter,
whether crystalline or amorphous, can be formed from the
supercooled liquid, in the former case by nucleation, and
in the latter via the glass transition. These two solidification mechanisms, while starkly different, depend sensitively on the nature of the supercooled liquid in which
they begin, and high performance computing has proven
to be an indispensable tool for progress in this area. We
will illustrate this by discussing results from our own
research and also highlight related studies, especially
those carried out in Canada.
COMPUTERS AND LIQUID STATE PHYSICS
Computers have had a singular impact on the development
of liquid state physics in general [2,3]. This is due to the
fact that, although the analytical challenges of the liquid
state are severe, the physical ingredients of a classical liquid at the molecular level are easy to specify, making an
algorithmic approach attractive [3]. The system’s potential
energy is usually well-approximated as a sum over a specified two-body interaction function, and Newton’s laws
suffice to determine the trajectories of the molecules in
space, starting from some given initial condition. This
specification is straightforward to implement as a computer algorithm, and is known as molecular dynamics (MD).
I. Saika-Voivod <[email protected]>, Department of Physics and Physical Oceanography, Memorial University of Newfoundland, St. John's NF, A1B 3X7, Canada, and P.H. Poole, Department of Physics, St. Francis Xavier University, Antigonish, Nova Scotia B2G 2W5, Canada
If one is interested only in structural or thermodynamic properties, the time evolution of the system can be replaced by a stochastic exploration of the system’s configuration space, using
Monte Carlo (MC) methods.
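To make this prescription concrete, the following is a schematic sketch (ours, in reduced Lennard-Jones units; not the production codes used in the work discussed here) of the two ingredients of an MD program: a pairwise force evaluation with periodic boundaries, and a velocity-Verlet time step that advances the trajectory.

    # Schematic MD for a Lennard-Jones liquid in reduced units.
    import numpy as np

    def lj_forces(pos, box, rc=2.5):
        """Pairwise Lennard-Jones forces with the minimum-image convention."""
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n - 1):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)            # minimum image
            r2 = np.sum(d * d, axis=1)
            mask = r2 < rc * rc
            inv6 = (1.0 / r2[mask]) ** 3
            fmag = (24.0 * (2.0 * inv6 ** 2 - inv6) / r2[mask])[:, None]
            fij = fmag * d[mask]                    # force on the neighbour j
            forces[i] -= np.sum(fij, axis=0)
            forces[i + 1:][mask] += fij
        return forces

    def velocity_verlet(pos, vel, box, dt=0.005, steps=100):
        """Advance positions and velocities with Newton's equations (unit mass)."""
        f = lj_forces(pos, box)
        for _ in range(steps):
            vel += 0.5 * dt * f
            pos = (pos + dt * vel) % box            # drift, wrapped into the periodic box
            f = lj_forces(pos, box)
            vel += 0.5 * dt * f
        return pos, vel

    # 64 atoms on a simple cubic lattice, with random initial velocities
    n_side, box = 4, 6.0
    grid = (np.arange(n_side) + 0.5) * box / n_side
    pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
    rng = np.random.default_rng(0)
    vel = rng.normal(0.0, 1.0, size=pos.shape)
    vel -= vel.mean(axis=0)                         # remove centre-of-mass drift
    pos, vel = velocity_verlet(pos, vel, box)
    print("kinetic energy per particle:", 0.5 * np.mean(np.sum(vel ** 2, axis=1)))

An MC code replaces the integrator with trial particle displacements accepted or rejected according to the Boltzmann factor for the same pair potential.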
A computational approach to the liquid state thereby opens up
the opportunity to model the behaviour of the system at the
molecular level, and study how bulk properties arise from
microscopic interactions. Complete knowledge of the atomic positions as a function of time is generated in MD simulations,
making it possible to evaluate any property accessible in experiments, and many that are not accessible, or at least not yet.
The principal challenges of a molecular-level approach to modelling the liquid state are the limitations on the system sizes and
time scales that are accessible using a given generation of computing hardware. In MD simulations of classical liquids, the
fundamental time step by which the molecular trajectories are
advanced forward in time is usually on the scale of 1 fs. For
simulations on a modern single processor of a system having
10^3 molecules, a time scale on the order of 100 ns can be
reached if one is willing to wait several weeks for the results.
So long as the internal equilibrium relaxation time of the liquid
is less than this, reliable results can be obtained. However,
there are many processes in supercooled liquids (e.g. crystal
nucleation) that may occur only on much longer time scales.
Since the liquid-state relaxation slows down as T decreases,
this upper time scale also sets a bound on the lowest T that can
be reached in equilibrium.
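The arithmetic behind these limits is simple; the short script below assumes an illustrative single-processor throughput (the steps-per-second figure is ours, not a benchmark).

    # Back-of-the-envelope estimate of the wall-clock cost of a 100 ns trajectory.
    dt_fs = 1.0                        # time step, femtoseconds
    target_ns = 100.0                  # desired trajectory length, nanoseconds
    steps = target_ns * 1.0e6 / dt_fs  # 1 ns = 10^6 fs  ->  1e8 steps
    steps_per_second = 50.0            # assumed throughput for ~10^3 molecules on one core
    weeks = steps / steps_per_second / (3600 * 24 * 7)
    print(f"{steps:.1e} steps -> about {weeks:.1f} weeks of wall-clock time")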
Constraints on the system size depend in part on the time scale
that a given study must access: for long time scales, the smallest system sizes are chosen. For short time scale phenomena,
system sizes as large as 10^9 atoms have been realized. While
computationally impressive, such systems are a long way from
the macroscopic regime. In all simulations of bulk liquids, periodic boundary conditions are used to minimize surface effects.
In spite of this, finite-size effects remain a serious challenge for
even the most modern simulations. An instructive example is
the recent work of Sokolovskii, Thachuk and Patey at UBC [4].
This study examines the influence of system size on the evaluation of tracer diffusion in a hard sphere liquid. The estimation
of the diffusion constant of an infinite sized system from finite
sized simulations is a long-standing problem, and this work
shows how state-of-the-art computing power and careful analysis have finally evolved to the point where a reliable answer
can be obtained.
The twin constraints of size and time scale are affected differently by computer hardware and software developments. For
large system sizes, parallel computing algorithms are desirable,
dividing the work of simulating a single large system over
many processors. At the same time, the largest accessible time
scale of any simulation, whether serial or parallel, depends on
processor speed. The advent in the last decade of very large
clusters of fast, inexpensive processors, interconnected by
high-bandwidth, low-latency networking, has thus benefitted
the simulation of both large systems and long time scales. In
addition, these clusters facilitate studies in which a large number of independent single-processor liquid state simulations are
run under different conditions, e.g. of temperature and density
in order to evaluate a liquid’s equation of state. Such “embarrassingly parallel” parameter-space exploration studies have
flourished in the last decade, and have stimulated the development of new computational methods, such as parallel tempering [5].
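The pattern is easy to express: one independent simulation per state point, farmed out to separate processes. In the sketch below, simulate_state_point is a placeholder for a full MD or MC run (the toy expression it returns is not a real equation of state).

    # Sketch of an "embarrassingly parallel" sweep over (T, rho) state points.
    from multiprocessing import Pool
    from itertools import product

    def simulate_state_point(args):
        T, rho = args
        pressure = rho * T * (1.0 + 0.5 * rho)   # toy stand-in for a simulation result
        return (T, rho, pressure)

    if __name__ == "__main__":
        temperatures = [0.7, 0.8, 0.9, 1.0]
        densities = [0.75, 0.80, 0.85]
        with Pool(processes=4) as pool:
            eos = pool.map(simulate_state_point, product(temperatures, densities))
        for T, rho, P in eos:
            print(f"T={T:.2f}  rho={rho:.2f}  P={P:.3f}")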
The above discussion has avoided the question of where one
gets the molecular interaction potential required as input to an
MD or MC liquid simulation. This is a large and complex topic
by itself, and space precludes a full discussion here. For classical simulations (to which we restrict ourselves), these potentials are developed either by fitting the parameters of a model
function to a selection of experimentally-known properties
(e.g. an atomic radial distribution function, or a melting temperature), or by fitting a model function to a potential energy
surface determined quantum mechanically, usually for a small
molecular cluster.
The alternative is to implement a fully quantum mechanical
approach, i.e. a quantum molecular dynamics (QMD) simulation. In QMD, electronic degrees of freedom are modeled
explicitly, and so the molecular interactions are evaluated within the algorithm from first principles. In this way, QMD realizes a qualitatively higher degree of realism and, in addition,
allows for the evaluation of the electronic properties of liquids,
which are not available from classical simulations. For certain
problems, such as the notable work of Stanimir Bonev and coworkers at Dalhousie University on the high pressure properties of liquids such as hydrogen and nitrogen [6,7], QMD is the
only reliable way to proceed. However, the computational
demands of QMD are much higher than for classical MD.
Currently, a large QMD liquid simulation would be of a system
of a few hundred molecules over a time scale of tens of ps. At
present, this makes QMD unsuitable for most problems related
to large length/time scale phenomena in supercooled liquids,
e.g. crystal nucleation. However, there is no question that, as
computational power continues to progress, QMD studies will
systematically displace classical simulations of the liquid state
in the years to come.
THERMODYNAMICS AND PHASE DIAGRAMS
As stated in the Introduction, crystal and glass formation
depend sensitively on the interplay of both thermodynamic and
dynamical properties of the supercooled liquid state. In this
section, we focus on some of the thermodynamic aspects. One
of the most fundamental thermodynamic descriptors of a liquid, an equation of state (e.g. the pressure P as a function of
temperature T and density ρ) is readily evaluated from simulations. However, it is often crucial to determine the thermodynamic relationship of the liquid phase to the crystal. For example, the nucleation rate is a strong function of degree of supercooling, which can only be stated if the coexistence temperature for the liquid and crystal phases is determined.
A suite of methods has therefore been developed to evaluate
the free energy of both liquid and crystal phases from simulations, to use this information to locate phase coexistence conditions and, ultimately, to build complete phase diagrams for
model substances. Daan Frenkel and coworkers have played a
leading role in the development of these methods, and the
recent text by Frenkel and Smit is an invaluable resource for
any researcher in this area [5].
To evaluate the free energy of any phase, one general approach
is to identify a state of the system for which the free energy is
known exactly, and then numerically carry out a “thermodynamic integration” (TI) to the particular state of interest in
order to find the free energy difference between the exactly
known and the desired state. For gases, the low density, ideal
gas limit is the natural starting point, followed by a TI along a
path that (say) first changes the density to the desired value,
and then the temperature. For liquids, the difference between
the free energy of the liquid and the ideal gas can be determined by integrating the excess pressure along an isotherm
from low densities, where the simulated system’s pressure is
well described by a low order virial expansion. As long as no
discontinuous phase transitions occur along the chosen path,
the absolute free energy can be computed to arbitrary precision.
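A minimal numerical version of this isothermal integration is shown below; the pressures are synthetic, generated from an assumed virial form, and simply stand in for values that would be measured in a series of simulations along the isotherm.

    # Excess free energy per particle from thermodynamic integration along an isotherm:
    # d(F_ex/N)/d(rho) = (P - rho*kT)/rho^2, integrated up from low density.
    import numpy as np

    kT = 1.0
    B2, B3 = -1.5, 0.8                                # assumed virial coefficients of a toy fluid
    rho = np.linspace(1.0e-4, 0.8, 400)               # densities along the isotherm
    P = kT * (rho + B2 * rho**2 + B3 * rho**3)        # "simulation" pressures

    integrand = (P - rho * kT) / rho**2
    F_ex = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rho))   # trapezoid rule
    F_exact = kT * (B2 * 0.8 + 0.5 * B3 * 0.8**2)     # analytic answer for the toy fluid
    print(F_ex, F_exact)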
In computer simulations, the path along which the integration
occurs need not be a function of macroscopic variables. For
example, it could occur along a path where a parameter in the
system Hamiltonian is changing. This approach is exploited to
find the free energy of a crystal. For a crystal, a natural starting
point is the Einstein crystal of the desired structure; i.e. a crystal in which the molecules do not explicitly interact with each
other, but are held near their ideal positions by harmonic
springs. The free energy of this system can be computed exactly. A parameter in the system Hamiltonian is then varied so as
to “morph” the system from this ideal potential to the real one
(usually, at fixed T and ρ), while computing the free energy
change along the way. Free energy changes from this state to
other T-ρ points can be computed by conventional TI.
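The corresponding λ-integration can be sketched as follows: couple the real potential U_real to the Einstein crystal U_Ein through H(λ) = (1 − λ)U_Ein + λU_real and integrate dF/dλ = ⟨U_real − U_Ein⟩_λ from 0 to 1. The averages in the sketch below are placeholders for values that would be measured in a separate equilibrium run at each coupling, and the Einstein-crystal free energy is likewise an assumed number.

    # Sketch of a Frenkel-Ladd-style coupling-constant integration.
    import numpy as np

    def mean_dU(lam):
        """Placeholder for <U_real - U_Ein> measured in a simulation at coupling lam."""
        return -50.0 + 20.0 * lam - 5.0 * lam**2      # smooth toy data (energy units)

    x, w = np.polynomial.legendre.leggauss(10)        # Gauss-Legendre nodes on [-1, 1]
    lam, weights = 0.5 * (x + 1.0), 0.5 * w           # mapped to [0, 1]

    F_einstein = -300.0                               # assumed exact Einstein-crystal free energy
    F_crystal = F_einstein + np.sum(weights * np.array([mean_dU(l) for l in lam]))
    print("estimated crystal free energy:", F_crystal)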
Such methods, combined with the common availability of
computing clusters with hundreds of processors, have made
possible the evaluation of complete equations of state and free
energy surfaces for a number of important model systems.
These data can then be used to construct extensive phase diagrams. Fig. 1 shows the result of our own work to determine
the phase diagram of a commonly studied model of silica, the
so-called “BKS” model [8]. The simulated phase diagram, compared to that known from experiment (also shown [9]), both
reveals the inadequacies of the model (and thus provides clues
for how to improve the model), and clearly identifies regimes
of interest, e.g. at what range of T and P the liquid is supercooled, so that crystal nucleation can be studied. As we discuss
below, our knowledge of the phase diagram for BKS silica
facilitated our subsequent study of the nucleation of the
stishovite crystal from the supercooled melt.
Phase diagrams have been determined for a wide range of
model materials using these techniques, including those for
Fig. 1: (a) Experimentally determined coexistence lines of silica in the P-T plane. Stability fields for the stishovite (S), coesite (C), β-quartz (Q) and liquid (L) phases are shown. Both stable (solid) and metastable (dashed) coexistence lines are shown. The inset shows the stability fields of cristobalite and tridymite. Adapted from Ref. [9]. (b) Phase diagram of BKS silica in the P-T plane, evaluated from simulations as described in Ref. [8]. Solid lines are stable coexistence lines. Dotted lines show error estimates for the crystal-liquid coexistence lines. Metastable coexistence lines (dashed) are also shown that meet at the metastable S-L-Q triple point. The locations of the S-C (filled square) and C-Q (filled circle) coexistence boundaries at T = 0 are also shown.
several models of water [10]. The thermodynamic properties of
supercooled water have been a sustained source of interest for
several decades, and simulations have made significant contributions by providing information on states where experiments
are challenging: e.g. in the deeply supercooled limit, where fast
crystal nucleation pre-empts observation of liquid-state behaviour, and in the regime of negative pressure, where only a few
experimental studies have successfully ventured. For example,
simulation results were the basis of the proposal that a first
order liquid-liquid phase transition occurs in supercooled
water [11], the influence of which on surrounding states can
explain many of water’s unusual properties. Evidence now
exists for analogous liquid-liquid transitions in a range of substances (e.g. liquid silicon) and simulations have been central
to efforts to elucidate this phenomenon [12]. Similarly, for several years, simulation studies of supercooled water have pointed to the possibility that a minimum of the density (in contrast
to the density maximum that occurs at 4° C) occurs in the
supercooled regime [13,14]. Guided in part by these simulation
results, experimental evidence for the occurrence in supercooled water of this extremely rare phenomenon has recently
been reported [15]; see Fig. 2.
Fig. 2: Comparison of density vs. temperature curves at ambient pressure for bulk liquid D2O (open triangles), confined liquid D2O (filled circles) from Ref. [15], D2O ice Ih (filled squares), and MD simulations of liquid TIP5P-E water (open diamonds) from Ref. [13]. The density values for the TIP5P-E model (which is a model of H2O) have been multiplied by 1.1 to facilitate comparison with the behaviour of D2O. Both a maximum and a minimum of the density occur in simulations and experiment.
DYNAMICS NEAR THE GLASS TRANSITION
With the exception of quantum liquids (i.e. liquid He), there are
only two possible fates for a supercooled liquid as T decreases:
it will either undergo a first-order phase transition to a crystalline solid, or it will form an amorphous solid, or glass, at the
glass transition temperature, Tg [16]. Superficially, the glass
transition seems to be a purely dynamical transition, unrelated
to any thermodynamic process. The viscosity of a liquid
increases rapidly as T decreases, and in the absence of crystallization, the time scale for liquid-like structural relaxation
eventually exceeds typical observation times. The value of Tg
is (somewhat arbitrarily) taken as the T at which the viscosity
exceeds 10^13 poise. Below Tg, the system retains a disordered
liquid-like structure, but the mechanical properties become
solid-like.
This simple picture of the glass transition was notably critiqued
in a 1948 paper by Walter Kauzmann [17]. The paper describes
what has since become known as the “Kauzmann paradox”.
Kauzmann pointed out that the heat capacity of a liquid is generally higher than that of the crystal to which it freezes and, as
a consequence, the entropy decreases more rapidly in the liquid
than in the crystal as T decreases into the supercooled regime.
For a wide range of liquids, this results in the thermodynamic
behaviour shown schematically in Fig. 3. The liquid entropy,
extrapolated to arbitrarily low T, would not only meet the crystal entropy, but would even threaten to become zero at finite T. In
practice, this “entropy catastrophe” is avoided because the
glass transition seems to always intervene, knocking the liquid
out of equilibrium, and putting a halt on the further decrease of
entropy. The paradox is this: If the glass transition is a purely
dynamical phenomenon, how can it be invoked to resolve a
purely thermodynamic problem (the entropy catastrophe)?
Kauzmann’s paradox suggests that thermodynamics must play
a role, along with dynamical behaviour, in the physics that
underlies the glass transition. This conceptual tension,
between the dynamical and thermodynamic underpinnings of
glass formation, persists to the present day.
Fig. 3: Schematic behaviour of the entropy for a typical liquid and crystal of the same substance. Tm is the crystal melting temperature. At Tg the liquid falls out of equilibrium and becomes a glass.
In 1995, P.W. Anderson wrote: “The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition” [18]. While
theories of the glass transition abound, it continues to be true
that none is commonly accepted to have “solved” the problem,
in the sense of accounting for the complex range of observed
behaviour in a unified way. In this context, computer simulations have played a central role in testing theories, and in providing clues for the development of new theories. A prominent
example in the 1990’s was the simulation work of Kob and
Andersen [19], who used extensive MD simulations of a binary
Lennard-Jones liquid to confirm many of the predictions of the
mode-coupling theory (MCT) of the glass transition that had
been developed previously by Goetze [20]. This stimulated a
great deal of work, both through simulations and experiments,
to test the range of applicability of MCT.
The interrelationship of dynamics and thermodynamics in
glass-forming liquids has also been explored with much success using simulations [16,21]. For example, a number of theories connecting entropy and diffusion have been proposed,
stimulated by the ideas of Adam and Gibbs (AG) in 1965 [22].
Simulations provide a helpful testing ground for such theories
because both thermodynamic and transport properties can be
evaluated from a single set of runs; in experiments, widely different apparatus are required to access these observables, making systematic studies challenging. Several studies have
demonstrated the validity of the AG theory in simulations. In
our own work with F. Sciortino on liquid silica, we were also
able to draw specific connections to the Kauzmann paradox [23]. We showed that the AG relation is obeyed in liquid silica, and at the same time the T dependence of the configurational entropy exhibits an inflection point that provides the
mechanism for this system to avoid Kauzmann’s entropy catastrophe. This result from simulation awaits experimental confirmation, since the relevant behaviour occurs in an extremely
challenging experimental regime between 3000 and 4000 K.
There has also been ongoing interest in the possibility of finding molecular-level structural features in the liquid associated
with the approach to the glass transition [24]. Supercooled liquids are notably homogeneous in a structural sense as they
approach Tg. For example, they typically lack any growth of
density fluctuations as T decreases, precluding the possibility
of thinking of the approach to the glass transition as the
approach to a conventional critical point. However, numerous
experiments and simulations provide evidence that significant
spatial heterogeneities of dynamical properties arise and grow
in liquids as T → Tg. These results indicate that the dynamics in
a supercooled liquid does not slow down uniformly in space.
Rather, correlated groups of relatively mobile and immobile
molecules emerge and grow in size as T decreases. These are of
course transient mobility fluctuations, which appear and disappear on the time scale of structural relaxation in the liquid.
While in most cases experiments only provide indirect evidence of such “dynamical heterogeneity” (DH), simulations
are able to image this phenomenon directly. Simulations carried out by S.C. Glotzer and coworkers have provided particularly clear views of DH [25,26]. In these studies, careful analysis
of very long equilibrium MD runs of the binary Lennard Jones
liquid showed that a molecule that is significantly more mobile
than the average has a higher probability of occurring close to
another similarly mobile molecule. These mobile molecules
tend to form quasi-one-dimensional “strings” in which molecules move one after another, like dancers in a conga line.
These results were subsequently confirmed in experiments on
colloids, in which the trajectories of individual colloid particles
were recorded and analyzed via confocal microscopy [27].
Much recent work has focussed on how the emergence of the
mobility correlations of DH can be incorporated into a broad
theory of glass formation.
Fig. 4: Dynamical heterogeneity in liquid water as imaged in simulations using the isoconfigurational ensemble. Larger spheres represent molecules that have a greater propensity to remain immobile on the time scale of structural relaxation; smaller spheres have a greater propensity to be mobile. These propensities are evaluated as averages for each molecule, starting from the same initial configuration, averaged over randomly chosen initial momenta. These results were obtained from simulations of N = 1728 ST2 water molecules at ρ = 0.83 g/cm^3 for T = 350 K (top panel) and 270 K (bottom panel). Note how the characteristic size of the dynamically correlated regions increases as T decreases. Details may be found in Ref. [29]. Note that only O atoms are shown.
More recently, Harrowell and coworkers developed a novel simulation approach that showed that, despite the absence of an obvious and growing structural heterogeneity in glass-forming liquids, the origins of DH can be ascribed, at least in part, to configurational properties of the liquid state [28]. They define an "isoconfigurational ensemble" of MD simulation trajectories, each starting from an identical equilibrium liquid configuration, but in which the molecular velocities are assigned randomly from the Maxwell-Boltzmann distribution. By averaging the displacement of a particular molecule at a given time
over all the MD trajectories, the configurationally-induced
“propensity” for molecular mobility as a functional of spatial
position in the starting configuration can be assessed. In simulations of both 2D soft spheres and 3D liquid water, the resulting spatial maps of “dynamic propensity” affirm the picture of
glass-forming liquids becoming progressively more heterogeneous as T 6 Tg [29]; see Fig. 4. The appearance of DH even
after this kind of isoconfigurational averaging also suggests
that a comprehensive theory of glass formation must be based
on both dynamical and configurational ingredients. From the
standpoint of computing, the practicality of using an approach
such as isoconfigurational averaging is made possible only by
the existence of large computing clusters.
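The averaging itself is simple to express. In the sketch below, run_md is a placeholder (free streaming) for a real MD engine; the point is only the structure of the calculation: many trajectories from one configuration, differing only in their Maxwell-Boltzmann initial velocities, with each particle's displacement averaged over the set.

    # Sketch of an isoconfigurational average of particle mobility ("propensity").
    import numpy as np

    def maxwell_boltzmann_velocities(n, kT, mass, rng):
        return rng.normal(0.0, np.sqrt(kT / mass), size=(n, 3))

    def run_md(positions, velocities, t_relax):
        """Placeholder for a real MD run of length t_relax."""
        return positions + velocities * t_relax       # free streaming stands in for real dynamics

    def dynamic_propensity(positions, n_runs=100, kT=1.0, mass=1.0, t_relax=1.0, seed=1):
        rng = np.random.default_rng(seed)
        dr2 = np.zeros(len(positions))
        for _ in range(n_runs):
            v0 = maxwell_boltzmann_velocities(len(positions), kT, mass, rng)
            final = run_md(positions, v0, t_relax)
            dr2 += np.sum((final - positions) ** 2, axis=1)
        return dr2 / n_runs                           # per-particle mean squared displacement

    config = np.random.default_rng(0).uniform(0.0, 10.0, size=(256, 3))
    print(dynamic_propensity(config)[:5])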
CRYSTAL NUCLEATION
In order for a liquid to freeze, nucleation of the new crystalline
phase must occur first. We will restrict our discussion to homogeneous nucleation, which takes place within the bulk of the
supercooled liquid [1]. Fluctuations in local structure give rise
to portions of the liquid that have a high degree of crystalline
order. These ordered pockets can be thought of as embryos or
nuclei from which the new phase arises. Perhaps surprisingly at
first glance, these small embryos tend to shrink and vanish, for
although the bulk crystal has a lower free energy than that of
the liquid, the interface created between the crystalline embryo
and the surrounding liquid makes embryo growth unfavourable
from a free energy standpoint.
The idea of the competition between bulk and surface contributions to the free energy of embryo formation (i.e. the work
required to form an embryo) is a main ingredient of Classical
Nucleation Theory (CNT), a phenomenological theory in
which an embryo has a well defined interface with the surrounding liquid. There is no generally accepted microscopic
theory of nucleation and so, despite dating back to the 1920’s,
CNT forms the theoretical basis for quantitatively understanding nucleation.
In CNT, the work required to assemble an embryo composed of
n particles is given by
∆G(n) = −∆µ n + aγ n^(2/3) = −kBT ln(Nn/N) ,          (1)

where ∆µ is the difference in the chemical potential between
the bulk liquid and the bulk crystal, γ is the surface tension, a
is a factor that depends on the shape and density of the
embryos, Nn is the equilibrium number of embryos of size n
present in the liquid, and N is the total number of liquid particles. The generic shape of ∆G(n) is shown in Fig. 5, where we
see a maximum at n*, the critical embryo size. Embryos must
overcome a free energy barrier of height ∆G(n*) before it is
thermodynamically favourable for them to grow.
Fig. 5: ∆G(n) obtained from Nn from simulations of high pressure silica for a set of temperatures. The curves are obtained from parallel simulations described in Ref. [30]. The solid curves are fits to the n dependence given by Eq. 1.

The rate of nucleation, or the rate at which embryos cross the barrier per unit volume, is given by

J = R exp[ −∆G(n*) / kBT ] ,          (2)

where R is a kinetic prefactor that depends on the dynamics of the supercooled liquid.
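For illustration, Eqs. (1) and (2) can be evaluated directly once ∆µ, the surface term and the kinetic prefactor are specified; the numbers in the sketch below are assumed, purely illustrative values (in units of kBT), not fitted to any particular liquid.

    # Worked example of Eqs. (1) and (2): critical size, barrier height and rate.
    import numpy as np

    kT = 1.0            # energies measured in units of k_B T
    dmu = 0.4           # assumed chemical potential difference (liquid minus crystal)
    a_gamma = 2.0       # assumed combined shape/surface-tension factor a*gamma
    R = 1.0e25          # assumed kinetic prefactor per unit volume

    def dG(n):
        return -dmu * n + a_gamma * n ** (2.0 / 3.0)     # Eq. (1)

    n = np.arange(1, 500)
    n_star = n[np.argmax(dG(n))]                         # critical embryo size
    barrier = dG(n_star)                                 # dG(n*)
    J = R * np.exp(-barrier / kT)                        # Eq. (2)
    print(f"n* = {n_star},  dG(n*) = {barrier:.2f} kT,  J = {J:.3e}")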
The study of nucleation seems ideally suited to computer simulations. One would think that the microscopic level of detail
in a MC or MD simulation should enable researchers to systematically peel apart the nucleation process. This is true, but
there are challenges nonetheless. Nucleation of a post-critical
embryo typically occurs in a metastable supercooled liquid as
a rare event, particularly if ∆G(n*) is large. For small to moderate supercooling, it may not be feasible to witness even a single nucleation event in even the longest simulations.
Another basic difficulty lies in discerning the embryo from the
surrounding liquid particles. In a simple supercooled liquid, the
neighbours of a given particle form a fairly ordered environment and, as mentioned earlier, the density mismatch between
the liquid and crystal phases is not large. Determining which
particles are crystal-like and which are not becomes a subtle
task. Nonetheless, satisfactory criteria have been worked out to
define local crystalline order, with the help of spherical harmonics. By looking at how this order is correlated between
neighbouring particles, it becomes possible to identify embryos
and the number of particles they contain. Fig. 6 shows snapshots of embryos taken from our simulations of silica [30].
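A common way of quantifying this local crystalline order is through Steinhardt-style bond-order parameters built from spherical harmonics. The sketch below computes a local q6 for a single particle from the vectors to its nearest neighbours; it is our illustration of the general idea, not necessarily the specific criterion used in Ref. [30].

    # Local Steinhardt q6 order parameter for one particle.
    import numpy as np
    from scipy.special import sph_harm   # renamed sph_harm_y in newer SciPy releases

    def local_q6(bond_vectors):
        """q6 for one particle, given the vectors to its nearest neighbours."""
        b = np.asarray(bond_vectors, dtype=float)
        r = np.linalg.norm(b, axis=1)
        polar = np.arccos(np.clip(b[:, 2] / r, -1.0, 1.0))   # angle from the z axis
        azimuth = np.arctan2(b[:, 1], b[:, 0])                # angle in the x-y plane
        l = 6
        q6m = np.array([np.mean(sph_harm(m, l, azimuth, polar)) for m in range(-l, l + 1)])
        return np.sqrt(4.0 * np.pi / (2 * l + 1) * np.sum(np.abs(q6m) ** 2))

    # An octahedral (simple-cubic-like) shell of six neighbours vs. a random shell
    shell = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    print("q6 (ordered shell):", local_q6(shell))
    print("q6 (random shell): ", local_q6(np.random.default_rng(0).normal(size=(6, 3))))

Correlating the q6m vectors of neighbouring particles then provides a connectivity criterion for deciding which particles belong to the same embryo.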
With the embryos identified, the next step is to be able to drive
the system to nucleate. This can be accomplished through
biased sampling MC. In this technique, an order parameter, like
the size of the largest cluster in the system, is identified. Then,
a potential energy term that is a function of the order parameter is added to the model Hamiltonian, and is often taken to be
a parabola centred upon a particular cluster size, n0. The new
addition to the Hamiltonian biases the system to be in a state
containing a cluster of size n0. Thus, through biased sampling, we can study at leisure the system when it is in an otherwise improbable state, in our case, states in which large and/or critical embryos are present. By simulating the system at values of n0 ranging from small to critical to post-critical, we can calculate Nn, which through Eq. 1 determines ∆G(n).
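A minimal sketch of such a biased Metropolis step is given below; largest_cluster_size and trial_move are placeholders for the model-specific routines, and the bias constant, window centre and toy free energy are assumed values chosen only to mimic the shape of Eq. (1).

    # Umbrella-biased Monte Carlo step: Metropolis rule applied to energy plus bias.
    import math, random

    k_bias = 0.1      # assumed bias spring constant (units of kT)
    n0 = 40           # target cluster size for this window
    beta = 1.0        # 1/kT

    def bias(n):
        return 0.5 * k_bias * (n - n0) ** 2

    def metropolis_biased_step(state, energy, largest_cluster_size, trial_move):
        new_state = trial_move(state)
        dE = energy(new_state) - energy(state)
        dW = bias(largest_cluster_size(new_state)) - bias(largest_cluster_size(state))
        if dE + dW <= 0.0 or random.random() < math.exp(-beta * (dE + dW)):
            return new_state
        return state

    # Toy usage: the "state" is just the cluster size, with a CNT-like free energy.
    def toy_energy(n):        return -0.4 * n + 2.0 * n ** (2.0 / 3.0)
    def toy_cluster_size(n):  return n
    def toy_move(n):          return max(1, n + random.choice((-1, 1)))

    state, samples = n0, []
    for _ in range(20000):
        state = metropolis_biased_step(state, toy_energy, toy_cluster_size, toy_move)
        samples.append(state)
    print("mean sampled cluster size:", sum(samples) / len(samples))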
To determine ∆G(n) for several temperatures, we would run many simulations, each with a different T and n0. Perhaps these would all be running at the same time if a computing cluster is available. However, the equilibration of these systems can be greatly sped up by using a technique dubbed parallel tempering. In this scheme, simulations running in parallel are allowed to exchange configurations. The probability with which two processors commit to an exchange is precisely determined by the Boltzmann distribution. Qualitatively, the increase in computational efficiency comes from allowing slow states at low T to benefit from occasional visits to high temperatures, where kinetic barriers are more easily overcome.
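The exchange criterion itself is compact: two replicas at inverse temperatures βi and βj, with instantaneous potential energies Ei and Ej, swap configurations with probability min{1, exp[(βi − βj)(Ei − Ej)]}, which preserves the Boltzmann distribution in each replica.

    # Replica-exchange (parallel tempering) acceptance test.
    import math, random

    def swap_accepted(beta_i, E_i, beta_j, E_j):
        delta = (beta_i - beta_j) * (E_i - E_j)
        return delta >= 0.0 or random.random() < math.exp(delta)

    # Example: a cold replica (beta = 2.0) and a hot one (beta = 1.0)
    print(swap_accepted(2.0, -105.0, 1.0, -98.0))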
Liquid configurations with embryos of critical size can be selected from the biased/tempered MC simulations in order to study their dynamic properties with good statistical sampling. In particular, the rate at which particles attach themselves to a critical embryo is used to calculate the dynamic prefactor in Eq. 2, thus completing the calculation of quantities required by CNT to predict the rate of nucleation.

The development and application of these techniques to nucleation is mostly attributable to Daan Frenkel and coworkers [31]. They, and now others, have used simulation to drive our understanding of nucleation in numerous systems (argon, hard sphere colloids, NaCl, silica, and carbon, among others), and under various influences, e.g. in the presence of metastable critical points, near interfaces, or under extreme pressure. For example, contrary to recent suggestions, Frenkel was able to show that diamonds are not likely to nucleate in the carbon-rich middle layers of Uranus [32].

In our work, we have used biased/tempered MC to study nucleation in a model of silica for which we worked out the phase diagram earlier. We focussed on a high pressure regime where nucleation occurs fairly easily, i.e. on a reasonable time scale for simulations. We showed that the form given by CNT for ∆G(n) holds reasonably well even when the barrier becomes fairly small at large supercooling (see Fig. 5). Additionally, we reached a point where the picture of nucleation begins to change qualitatively, and the idea of a limit to liquid metastability may be required to make sense of some of the free energy profiles we calculated.

Fig. 6: Crystal-like Si atoms in liquid silica. Top left: sample critical nucleus at 3300 K containing 10 Si atoms. Top right: a snapshot of the growing crystal embryo from a dynamic crystallization simulation at 3000 K when it contains 23 Si atoms. Bottom: sample end configuration of a crystallization simulation.
A number of other efforts are ongoing in Canada to simulate
nucleation and crystal growth. For example, Peter Kusalik (formerly at Dalhousie, now at U. Calgary) and coworkers were
the first to simulate ice nucleation in the presence of a strong
electric field [33], and more recently have done notable work
simulating the interface of a crystal surface as it progresses into
the liquid phase [34]. Nucleation is also being studied in liquid
nanoclusters in the group of R.K. Bowles [35] at the University
of Saskatchewan. Freezing of clusters differs from that of bulk
liquids in that there is an inherent inhomogeneity in the system,
i.e. a significant portion of the particles are on the surface, as
well as the fact that there are typically several different structures to which the cluster may freeze at a given T. The frozen
cluster structures are not bound to be true periodic crystals, and
may have (for example) icosahedral or decahedral structure.
It is intriguing to think about possible connections between
nucleation and the glass transition. What impact do dynamical
heterogeneities have on the nucleation process? Are the heterogeneities themselves a result of subtle ordering connected
with embryo formation? Is the liquid trying to order locally to
a structure that cannot fill space? We are engaged in exploring
some of these questions, and are encouraged by some hints on
the subject now appearing in the literature pertaining to glass
formers [36].
OUTLOOK AND CONCLUSION
Computer simulation has played, and will continue to play, a valuable
role in developing our understanding of the supercooled liquid
state, the glass transition and crystal nucleation. Simulations
have been instrumental in testing microscopic theories of the
glass transition, in linking thermodynamics to the glass transition, in calculating material properties from microscopic interactions, and in testing nucleation theories. Despite this
progress, fundamental questions remain. For example, there is
no generally accepted theory that can tell us why one liquid
should form a glass, while another crystallizes easily. It is
LA PHYSIQUE AU CANADA / Vol. 64, No. 2 ( avr. à juin (printemps) 2008 ) C 65
SUPERCOOLED LIQUIDS (SAIKA-VOIVOD AND POOLE)
inevitable that simulations will play a role in clarifying such
questions.
In addition, those contemplating research in this area will benefit from paying attention not only to scientific trends, but also
to the technological trends of computing hardware and software. For example, the explosive growth in single-processor
speed over the last several decades presently allows us to study
classical liquids over nearly 8 orders of magnitude in time, up
to almost the μs time scale. More recent improvements in parallel architectures and algorithms have now also allowed the
sizes of systems studied to grow dramatically. The focus of current development in processor technology is now shifting to
multi-core processors, offering more potential for parallelism
(and lower power consumption), but with the speed of single
cores not increasing as dramatically as in the past. On its own,
this would shift the advantage to the simulation of larger systems (via parallelism), but would slow the increase of the maximum accessible time scale. In another direction, the advent of
accelerator cards (e.g. "GPGPUs" based on graphics coprocessors) offers the potential for tremendous speed increases
with some algorithms.
Finally, we note that our discussion in this article has focussed
on simple and network-forming liquids such as water, but the
basic ideas behind nucleation and glassy dynamics provide the
underpinning for understanding more complex phenomena
such as phase transitions and gelation in colloids, macromolecular assembly, protein folding and crystallization, and
nanoparticle self-assembly. These are currently studied via
simulation and experiments in research groups around the
world, and the proliferation of high performance computing
facilities will continue to advance simulation as an effective
means to transfer our knowledge of basic phenomena to these
more complex systems.
ACKNOWLEDGEMENTS
We thank our many collaborators who have provided us with so much insight on the liquid state over the years. These include C.A. Angell, R. Bowles, S.C. Glotzer, P. Kusalik, G. Matharoo, M.S.G. Razul, S. Sastry, F. Sciortino, H.E. Stanley and F. Starr. PHP and ISV are both supported by
NSERC and ACEnet. PHP also acknowledges the support of
the CRC program.
REFERENCES
1. P.G. Debenedetti, Metastable Liquids: Concepts and Principles (Princeton University Press, Princeton, 1997).
2. J.-P. Hansen and I.R. McDonald, Theory of Simple Liquids (Academic Press, London, 2006).
3. M. Allen and D. Tildesley, Computer Simulation of Liquids (Oxford University Press, Oxford, 1989).
4. R.O. Sokolovskii, M. Thachuk and G.N. Patey, J. Chem. Phys. 125, 204502 (2006).
5. D. Frenkel and B. Smit, Understanding Molecular Simulation (Academic Press, London, 2002).
6. J.-Y. Raty, E. Schwegler and S.A. Bonev, Nature 449, 448 (2007).
7. S.A. Bonev, E. Schwegler, T. Ogitsu and G. Galli, Nature 431, 669 (2004).
8. I. Saika-Voivod, F. Sciortino, T. Grande and P.H. Poole, Phys. Rev. E 70, 061507 (2004).
9. Silica: Physical Behavior, Geochemistry and Materials Applications, edited by P. Heaney, C. Prewitt, and G. Gibbs, Reviews in Mineralogy Vol. 29 (Mineralogical Society of America, Washington, D.C., 1994).
10. E. Sanz, C. Vega, J.L.F. Abascal and L.G. MacDowell, Phys. Rev. Lett. 92, 255701 (2004).
11. P.H. Poole, F. Sciortino, U. Essmann and H.E. Stanley, Nature 360, 324 (1992).
12. P.F. McMillan, M. Wilson, M.C. Wilding, D. Daisenberger, M. Mezouar and G.N. Greaves, J. Phys.: Condens. Matter 19, 415501 (2007).
13. D. Paschek, Phys. Rev. Lett. 94, 217802 (2004).
14. P.H. Poole, I. Saika-Voivod and F. Sciortino, J. Phys.: Condens. Matter 17, L431 (2005).
15. D. Liu, Y. Zhang, C.-C. Chen, C.-Y. Mou, P.H. Poole and S.-H. Chen, Proc. Natl. Acad. Sci. USA 104, 9570 (2007).
16. P.G. Debenedetti and F.H. Stillinger, Nature 410, 259 (2001).
17. W. Kauzmann, Chem. Rev. 43, 219 (1948).
18. P.W. Anderson, Science 267, 1615 (1995).
19. W. Kob and H.C. Andersen, Phys. Rev. E 51, 4626 (1995); W. Kob and H.C. Andersen, Phys. Rev. E 52, 4134 (1995).
20. W. Götze, in Liquids, Freezing and Glass Transition, edited by J.P. Hansen, D. Levesque, and J. Zinn-Justin (North Holland, Amsterdam, 1991), p. 287.
21. D. Wales, Energy Landscapes (Cambridge University Press, New York, 2003).
22. G. Adam and J.H. Gibbs, J. Chem. Phys. 43, 139 (1965).
23. I. Saika-Voivod, P.H. Poole and F. Sciortino, Nature 412, 514 (2001).
24. M.D. Ediger, Annu. Rev. Phys. Chem. 51, 99 (2000).
25. C. Donati, J.F. Douglas, W. Kob, S.J. Plimpton, P.H. Poole and S.C. Glotzer, Phys. Rev. Lett. 80, 2338 (1998).
26. S.C. Glotzer, Journal of Non-Crystalline Solids 274, 342 (2000).
27. E.R. Weeks, J.C. Crocker, A.C. Levitt, A. Schofield and D.A. Weitz, Science 287, 627 (2000).
28. A. Widmer-Cooper, P. Harrowell and H. Fynewever, Phys. Rev. Lett. 93, 135701 (2004).
29. G.S. Matharoo, M.S.G. Razul and P.H. Poole, Phys. Rev. E 74, 050502(R) (2006).
30. I. Saika-Voivod, P.H. Poole and R.K. Bowles, J. Chem. Phys. 124, 224709 (2006).
31. P.R. ten Wolde, M.J. Ruiz Montero, and D. Frenkel, J. Chem. Phys. 104, 9932 (1996); S. Auer and D. Frenkel, J. Chem. Phys. 120, 3015 (2004); C. Valeriani, E. Sanz, and D. Frenkel, J. Chem. Phys. 122, 194501 (2005).
32. L.M. Ghiringhelli, C. Valeriani, E.J. Meijer, and D. Frenkel, Phys. Rev. Lett. 99, 055702 (2007).
33. I.M. Svishchev and P.G. Kusalik, Phys. Rev. Lett. 73, 975 (1994).
34. M.S. Gulam Razul, J.G. Hendry and P.G. Kusalik, J. Chem. Phys. 123, 204722 (2005).
35. E. Mendez-Villuendas and R.K. Bowles, Phys. Rev. Lett. 98, 185503 (2007).
36. T. Kawasaki, T. Araki, and H. Tanaka, Phys. Rev. Lett. 99, 215701 (2007).
ARTICLE DE FOND
QUANTUM MONTE CARLO METHODS FOR NUCLEI
BY ROBERT B. WIRINGA

SUMMARY: Quantum Monte Carlo methods are applied to the ab initio calculation of the structure and reactions of light nuclei.
A major goal in nuclear physics is to understand
how nuclear binding, structure, and reactions
can be described from the underlying interactions between individual nucleons [1,2]. We want
to compute the properties of an A-nucleon system as an A-body problem with free-space nuclear interactions that describe nucleon-nucleon (NN) scattering and the two-nucleon bound state. Properties of interest for a given
nucleus include the ground-state binding energy, excitation spectrum, one- and two-nucleon density and momentum distributions, electromagnetic moments and transitions. We also wish to describe the interactions of nuclei
with electrons, neutrinos, pions, nucleons, and other
nuclei. Such calculations can provide a standard of comparison to test whether sub-nucleonic effects, such as
explicit quark degrees of freedom, must be invoked to
explain an observed phenomenon. They can also be used
to evaluate nuclear matrix elements needed for some tests
of the standard model, and to predict reaction rates that are
difficult or impossible to measure in the laboratory. For
example, all the astrophysical reactions that contribute to
the Big Bang or to solar energy production should be
amenable to such ab initio calculations.
To achieve this goal, we must both determine reasonable
Hamiltonians to be used and devise reliable many-body
methods to evaluate them. Significant progress has been
made in the past decade on both fronts, with the development of a number of potential models that accurately
reproduce NN elastic scattering data, and a variety of
advanced many-body methods. In practice, to reproduce
experimental energies and transitions, it appears necessary
to add many-nucleon forces to the Hamiltonian and electroweak charge and current operators beyond the basic
single-nucleon terms. While testing our interactions and
currents against experiment, it is also important to test the
many-body methods against each other to ensure that any
approximations made are not biasing the results.
For s-shell (3- and 4-body) nuclei, a number of accurate
many-body methods have been developed; a benchmark
test in 2001 compared seven different calculations of the binding energy of the 4He nucleus using a semi-realistic test Hamiltonian and obtained agreement at ≈0.1% [3]. Multiple few-body methods also agree quite well on low-energy three-nucleon (3N) scattering, and progress is being made on larger scattering problems [4,5]. For p-shell (5 ≤ A ≤ 16) and larger nuclei, three methods that are being
developed and checked against each other are the no-core
shell model (NCSM) [6], coupled-cluster expansion
(CCE) [7], and quantum Monte Carlo (QMC) [2]. This article will focus on the quantum Monte Carlo method as an
example of modern ab initio nuclear theory. We will
describe the nature of the problem, a method of solution,
and present some of the successes that have been achieved
as well as future challenges that must be faced.
NUCLEAR HAMILTONIAN
At present we have to rely on phenomenological models
for the nuclear interaction; a quantitative understanding of
the nuclear force based on non-perturbative quantum chromodynamics (QCD) is still some distance in the future.
We consider nuclear Hamiltonians of the form:
H = ∑i Ki + ∑i<j νij + ∑i<j<k Vijk .
Here Ki is the kinetic energy, νij is an NN potential, and
Vijk is a 3N potential. Realistic NN potentials fit a large
scattering database; models such as the Nijm I, Nijm II,
and Reid93 potentials of the Nijmegen group [8],
Argonne ν18 (AV18) [9], and CD Bonn [10], fit more than
4,000 elastic data at laboratory energies ≤ 350 MeV with a χ²/datum ~ 1. These potentials are all based on pion
exchange at long range, but inevitably are more phenomenological at shorter distances. Their structure is complicated, including spin, isospin, tensor, spin-orbit, quadratic
momentum-dependent, and charge-independence-breaking terms, with ~40 fitted parameters. However, these
sophisticated NN models are generally unable to reproduce the binding energy of few-body nuclei such as 3H
and 4He without the assistance of a 3N potential [1].
R.B. Wiringa <[email protected]>, Physics Division, Argonne National Laboratory, Argonne, Illinois 60439
Multi-nucleon interactions can arise because of the composite nature of the nucleon and its corresponding excitation spectrum, particularly the strong ∆(1232) resonance
seen in πN scattering. The expectation value of 3N potentials is much smaller than that for NN forces, but due to the
large cancellation between one-body kinetic and two-body
potential energies, they can provide significant corrections
to nuclear binding. Fortunately, four-nucleon potentials
appear small enough to ignore at present. Models for the basic
two-pion-exchange 3N potential date from the 1950s [11]; more
sophisticated models have followed, including the Tucson-Melbourne [12], Urbana [13], and Illinois [14] models. In principle, the 3N potential could have a far more complicated dependence on the spins, isospins, and momenta of the nucleons than has been studied to date, but there is limited information by which to constrain the models. Three-nucleon scattering data provide some information, but very little partial-wave analysis has been done which would help unravel the structure. Energies and excitation spectra of light nuclei provide the best current constraints for 3N potentials, especially isospin T = 3/2
interactions, which are particularly important for neutron stars.
Hamiltonians based on chiral effective theories are under
development that should provide a more consistent picture of
both NN and many-nucleon forces, while also making closer
connections to the underlying symmetries of QCD [15].
An important additional ingredient for the evaluation of electroweak interactions of nuclei is a consistent set of charge and
current operators. The standard impulse approximation (IA)
single-nucleon contributions need to be supplemented by
many-nucleon terms that again can be understood as arising
from the composite nature of the nucleons and the meson
exchanges that mediate the interactions between them. It is
important to use currents that satisfy the continuity equation
with the Hamiltonian. In practice, two-nucleon operators give
the bulk of the correction to the IA terms; they can be 20% corrections for magnetic moments and transitions, although generally much less for electric transitions and weak decays [1].
For the present article, we consider a Hamiltonian containing
the AV18 NN potential and Illinois-2 (IL2) 3N potential. The
AV18 model can be written in an operator format as:
νij = ∑p=1,22 νp(rij) O^p_ij ,
O^p_ij (p = 1–14) = [1, σi·σj, Sij, L·S, L², L²σi·σj, (L·S)²] ⊗ [1, τi·τj],
O^p_ij (p = 15–22) = [1, σi·σj, Sij, L·S] ⊗ [Tij, (τzi + τzj)].
Here σ (τ) is the Pauli spin (isospin) operator, L (S) is the pair orbital (spin) angular momentum operator, and Sij = 3(σi·r̂ij)(σj·r̂ij) − σi·σj is the tensor operator, which can exchange spin and orbital angular momenta. The first fourteen terms are isoscalar, or charge-independent, i.e., they do not mix isospin states. The first eight of these terms, up through spin-orbit, are the most important. A good semi-realistic model designated AV8′ has been constructed (and used in the 4He benchmark paper mentioned above) using just these operators; it reproduces S- and P-wave NN scattering phase shifts and the two-nucleon bound state (deuteron) very well. The central, spin-isospin, and tensor-isospin components are shown in Fig. 1 by solid lines with the left-hand scale; the central potential has its maximum of ≈2000 MeV at r = 0.
The six terms quadratic in L are smaller, but are needed to fit
higher partial waves in NN scattering. The last eight terms
break charge-independence, being either isovector (τzi + τzj) or isotensor (Tij = 3τzi τzj − τi·τj) in character; they are generally small, and differentiate between pp, np and nn forces. Their origin is in the electromagnetic (Coulomb, magnetic moment, etc.) interaction, and the strong interaction (mπ0 − mπ± effects, ρ-ω meson mixing, etc.).

Fig. 1: Important potential terms and corresponding VMC correlations for 4He.
The IL2 3N potential includes a long-range two-pion-exchange
piece, three-pion-exchange ring terms, and a phenomenological short-range repulsion. The spin- and isospin-dependence is
fixed by the rules of πN interactions, while the overall strength
is characterized by four parameters determined by a fit to
~20 nuclear energy levels when used with AV18 in the calculations described below.
QUANTUM MONTE CARLO METHODS
The many-body problem with the full Hamiltonian described
above is uniquely challenging. We want to solve the manybody Schrödinger equation
H Ψ(r1, r2, ..., rA; s1, s2, ..., sA; t1, t2, ..., tA) = E Ψ(r1, r2, ..., rA; s1, s2, ..., sA; t1, t2, ..., tA),
where si = ±1/2 are nucleon spins, and ti = ±1/2 are nucleon isospins (proton or neutron). This is equivalent to solving, for an A-body nucleus with Z protons, 2^A × (A choose Z) complex coupled second-order differential equations in 3A dimensions. For 12C,
this number is 3,784,704 coupled equations in 36 variables! (In
practice, for many nuclei, symmetry considerations can reduce
the number by an order of magnitude.) The coupling is quite
strong; the expectation value ⟨νtτ⟩, corresponding to the tensor-isospin operator Sij τi·τj, is ≈ 60% of ⟨νij⟩. This is a direct consequence of the pion-exchange nature of nuclear forces (and indirectly, the approximate chiral symmetry of QCD). Furthermore, ⟨νtτ⟩ = 0 if there are no tensor correlations in the
wave function, so we cannot perturbatively introduce these
couplings.
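The number quoted above for 12C follows directly from this counting (2^A spin components times the binomial isospin factor); a one-line check in Python:

    import math
    print(2**12 * math.comb(12, 6))   # -> 3784704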
The first application of Monte Carlo methods to nuclei interacting with realistic potentials was a variational (VMC) calculation by Pandharipande and collaborators [16], who computed
upper bounds to the binding energies of 3H and 4He in 1981.
Six years later, Carlson [17] improved on the VMC results by
using the Green’s function Monte Carlo (GFMC) algorithm,
obtaining essentially exact results (within Monte Carlo statistical errors of 1%). Reliable calculations of light p-shell nuclei
started to become available in the mid 1990s and are reviewed
in [2]; the most recent results for A = 9,10 nuclei and 12C can
be found in [18,19].
A VMC calculation finds an upper bound EV to an eigenenergy
E0 of the Hamiltonian by evaluating the expectation value of H
in a trial wave function, ΨT:
EV = ⟨ΨT | H | ΨT⟩ / ⟨ΨT | ΨT⟩ ≥ E0 .

Parameters in ΨT are varied to minimize EV, and the lowest value is taken as the approximate energy.
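The essence of this variational procedure can be seen in a toy setting. The sketch below (ours; the nuclear calculation involves the full spin-isospin machinery described next) applies VMC to a one-dimensional harmonic oscillator with ħ = m = ω = 1 and a Gaussian trial function ψa(x) = exp(−a x²): configurations are sampled from |ψa|² with the Metropolis algorithm and the local energy Hψa/ψa = a + x²(1/2 − 2a²) is averaged.

    # Toy VMC: 1D harmonic oscillator with a Gaussian trial function.
    import numpy as np

    def local_energy(x, a):
        return a + x * x * (0.5 - 2.0 * a * a)

    def vmc_energy(a, n_samples=200000, step=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x, energies = 0.0, []
        for i in range(n_samples):
            x_new = x + step * rng.uniform(-1.0, 1.0)
            # Metropolis acceptance for |psi_a|^2 = exp(-2 a x^2)
            if rng.uniform() < np.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
                x = x_new
            if i > n_samples // 10:               # discard equilibration
                energies.append(local_energy(x, a))
        return np.mean(energies)

    for a in (0.3, 0.5, 0.7):
        print(f"a = {a:.1f}   E_V = {vmc_energy(a):.4f}")

The minimum, EV = 1/2 at a = 1/2, recovers the exact ground-state energy; the same logic, applied with the trial function below and Metropolis sampling in 3A dimensions, underlies the nuclear VMC calculations.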
A good trial function is [2]
| Ψ T = 1 +



∑U
i< j<k



ijk 



 S ∏ (1 + U ij )  | Ψ J ,
 i< j

where Uij and Uijk are non-commuting two- and three-body
correlation operators induced by νij and Vijk , respectively, S is
a symmetrizer, and the Jastrow wave function ΨJ is
| Ψ_J ⟩ = Π_{i<j} f_c(r_ij) | Φ_A(J^π; T) ⟩ .
Here the single-particle A-body wave function ΦA(J π;T) is
fully antisymmetric and has the total spin, parity, and isospin
quantum numbers of the state of interest, while the product
over all pairs of the central two-body correlation fc(rij) keeps
nucleons apart to avoid the strong short-range repulsion of the
interaction. The long-range behavior of fc and any single-particle radial dependence in ΦA (which is written using coordinates
relative to the center of mass or to a sub-cluster CM to ensure
translational invariance) control the finite extent of the nucleus.
The two-body correlation operator has the structure
U_ij = Σ_{p=2,6} u_p(r_ij) O_ij^p ,
where the O_ij^p are the leading spin, isospin, and tensor operators
in νij . The fc(r) and up(r) are obtained by numerically solving
a set of six Schrödinger-like equations: two single-channel for
S=0, T=0 or 1, and two coupled-channel for S=1, T=0 or 1, with
the latter producing the important tensor correlations [20].
These equations contain the bare νij and parametrized
Lagrange multipliers to impose long-range boundary conditions of exponential decay and tensor/central ratios.
The central, spin-isospin, and tensor-isospin correlations
obtained for 4He are shown in Fig. 1 as dashed lines measured
by the right-hand scale. The f_c is small at short distances, to reduce contributions from the repulsive core of ν_c, and has a maximum near where ν_c is most attractive, while the long-range
decrease keeps the nucleus confined. The uστ and utτ are small
and have signs opposite to νστ and νtτ as expected from perturbation theory.
Perturbation theory is also used to motivate the three-body correlation U_ijk = −ε V_ijk(r̃_ij, r̃_jk, r̃_ki), where r̃ = y r, with y a scaling parameter and ε a (small negative) strength parameter.
Consequently, Uijk has the same spin, isospin, and tensor
dependence that Vijk contains.
The ΨΤ is a vector in the spin-isospin space of the A nucleons,
each component of which is a complex-valued function of the
positions of all A nucleons. The tensor correlations mix spin
and spatial angular momenta, so that all 2^A spin combinations appear. Because the nuclear force is mostly isoscalar, the conservation of isospin results in fewer isospin possibilities, somewhat less than (A choose Z). For MJ = 0 states there is an additional factor of 2 reduction. The total numbers of components in the vectors for 4He, 6Li, 8Be, 10B, and 12C are 16, 160, 1792, 21504,
and 270336, respectively.
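These component counts can be reproduced with standard angular-momentum multiplicity counting. The sketch below is only an illustration (it is not how the production codes store the vectors, and the multiplicity formula is the textbook one, assumed here): 2^A spin states, halved for MJ = 0, times the number of total-isospin T = 0 states of A nucleons.

from math import comb

def isospin_states(A, T):
    # textbook multiplicity of total isospin T built from A isospin-1/2 nucleons
    return comb(A, A // 2 - T) - comb(A, A // 2 - T - 1)

def components(A, T=0):
    # 2^A spin states, reduced by a factor 2 for M_J = 0, times the isospin-T states
    return (2**A // 2) * isospin_states(A, T)

for name, A in [("4He", 4), ("6Li", 6), ("8Be", 8), ("10B", 10), ("12C", 12)]:
    print(name, components(A))

The printed values, 16, 160, 1792, 21504 and 270336, match the numbers quoted in the text.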
Constructing the trial function with the pair spin and isospin
operators in Uij requires P = A(A - 1)/2 sparse-matrix operations on this vector (more if Uijk triples are used). Acting on the
trial function with νij then requires a sum of P additional operations for each spin or isospin term in the potential. Kinetic
energy contributions are evaluated by finite differences, i.e., by
reconstructing ΨΤ at 6A slightly shifted positions and taking
appropriate differences. Quadratic momentum-dependent L² and (L·S)² terms in ν_ij require additional derivatives, but various tricks can be used to reduce the number of operations,
including rotation to a frame where fewer differences are needed, and Monte Carlo sampling these relatively short-ranged
terms when the two nucleons are far apart. Evaluating Vijk
requires additional operations, but these terms can also be sampled when the nucleons are far apart. The 3A-dimensional spatial integration is carried out by a standard Metropolis Monte
Carlo algorithm [21] with sampling controlled by a weight function W(R) ∝ |Ψ_T|², where R = r1, r2, ..., rA specifies the spatial
configuration. Thus more (less) time is spent evaluating the
integral where the trial function is large (small).
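As a schematic of the Metropolis step with weight W ∝ |Ψ_T|² (a one-dimensional toy problem with a single variational parameter, not the nuclear code), the sketch below samples a Gaussian trial function for a harmonic oscillator (ħ = m = ω = 1) and averages the local energy:

import math, random

def local_energy(x, a):
    # E_L = -(1/2) psi''/psi + x^2/2 for the trial function psi = exp(-a x^2)
    return a + (0.5 - 2.0 * a**2) * x**2

def vmc_energy(a, steps=200_000, step_size=1.0, seed=1):
    rng = random.Random(seed)
    x, esum = 0.0, 0.0
    for _ in range(steps):
        xp = x + step_size * (2.0 * rng.random() - 1.0)
        # Metropolis test with weight W(x) proportional to |psi(x)|^2 = exp(-2 a x^2)
        if rng.random() < math.exp(-2.0 * a * (xp**2 - x**2)):
            x = xp
        esum += local_energy(x, a)
    return esum / steps

print(vmc_energy(0.5))   # close to the exact ground-state energy 0.5
print(vmc_energy(0.3))   # a poorer trial function gives a higher (upper-bound) energy

More (less) time is spent where |ψ|² is large (small), exactly as described above, and the sampled energy is a variational upper bound.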
VMC calculations produce upper bounds to binding energies
that are ≈2% above exact results for A = 3,4 nuclei. However,
as A increases, our present trial functions get progressively
worse and are unstable against breakup into sub-clusters. For
example, our 7Li trial function is more bound than 6Li, but less
bound than 4He plus 3H. Because any wave function can be
expanded in the complete set of exact eigenfunctions, the inadequacy of the trial function can be attributed to contamination
by excited state components in ΨΤ .
The Green’s function Monte Carlo method provides a way of
systematically improving on the VMC trial state by removing
such contamination and approaching the true lowest-lying
eigenstate of given (J π;T) quantum numbers [2]. GFMC projects out the lowest-energy eigenstate from ΨΤ by a propagation
in imaginary time:
Ψ(τ) = exp[−(H − Ẽ_0)τ] Ψ_T = e^{−(E_0 − Ẽ_0)τ} [ Ψ_0 + Σ_i α_i e^{−(E_i − E_0)τ} Ψ_i ] ,   lim_{τ→∞} Ψ(τ) ∝ Ψ_0 ,

where Ẽ_0 is a guess for the exact E_0. If sufficiently large τ is reached, the eigenvalue E_0 is calculated exactly, while other expectation values are generally calculated neglecting terms of order |Ψ_0 − Ψ_T|² and higher. In contrast, the error in the variational energy, E_V, is of order |Ψ_0 − Ψ_T|², and other expectation values calculated with Ψ_T have errors of order |Ψ_0 − Ψ_T|.
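The effect of the propagation can be seen on a small matrix "Hamiltonian" (a generic numerical sketch, not the GFMC algorithm itself): excited-state admixtures decay as e^{−(Ei−E0)τ}, and the mixed energy ⟨Ψ(τ)|H|Ψ_T⟩/⟨Ψ(τ)|Ψ_T⟩ approaches E_0 from above.

import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
H = 0.5 * (H + H.T)                         # small symmetric stand-in Hamiltonian
E, U = np.linalg.eigh(H)
E0 = E[0]

psi_T = U[:, 0] + 0.3 * U[:, 1] + 0.2 * U[:, 2]   # trial state with some contamination

dtau = 0.05
G = U @ np.diag(np.exp(-(E - E0) * dtau)) @ U.T   # exp[-(H - E0~) dtau], with E0~ = E0 here

psi = psi_T.copy()
for n in range(1, 201):
    psi = G @ psi                                  # Psi(tau) = G^n Psi_T
    if n % 50 == 0:
        E_mix = (psi @ H @ psi_T) / (psi @ psi_T)  # mixed energy estimate
        print(f"tau = {n * dtau:5.2f}   E_mix = {E_mix:.6f}   (E0 = {E0:.6f})")

Here the eigendecomposition is used only to build the exact propagator of the toy problem; in the nuclear case the propagator must itself be approximated, as described next.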
The evaluation of e^{−(H−Ẽ_0)τ} is made by introducing a small time step, Δτ = τ/n (typically Δτ = 0.0005 MeV⁻¹),

Ψ(τ) = [ e^{−(H−Ẽ_0)Δτ} ]^n Ψ_T = G^n Ψ_T ,
where G is the short-time Green's function. Again, Ψ(τ) is a vector function of R, and the Green's function G_αβ(R′,R) is a matrix function of R and R′ in spin-isospin space:

G_αβ(R′, R) = ⟨ R′, α | e^{−(H−Ẽ_0)Δτ} | R, β ⟩ ,

where α, β denote the spin-isospin components. The repeated operation of G_αβ(R′,R) in coordinate space results in a multidimensional integral over 3An (typically more than 10,000) dimensions. This integral is also done by a Metropolis Monte Carlo algorithm.
The short-time propagator is approximated as a symmetrized
product of exact two-body propagators and includes the 3N
potential to first-order. The G_αβ(R′,R) can be evaluated with
leading errors of order (∆τ)3, which can be made arbitrarily
small by reducing ∆τ (and increasing n correspondingly). In the
benchmark calculation [3] of 4He, the GFMC energy had a statistical error of only 20 keV and agreed with the other best
results to this accuracy (< 0.1%). Various tests indicate that the
GFMC calculations of p-shell binding energies have errors of
1–2%.
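The (Δτ)³ accuracy is the usual property of a symmetrized splitting; a generic check on two small non-commuting matrices (stand-ins only, not the nuclear propagator) is:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = 0.5 * (A + A.T)
B = rng.standard_normal((4, 4)); B = 0.5 * (B + B.T)

for dtau in (0.1, 0.05, 0.025):
    exact = expm(-(A + B) * dtau)
    split = expm(-A * dtau / 2) @ expm(-B * dtau) @ expm(-A * dtau / 2)   # symmetrized product
    print(f"dtau = {dtau:6.3f}   one-step error = {np.linalg.norm(exact - split):.3e}")

Halving Δτ reduces the one-step error by roughly a factor of eight, i.e. it is O(Δτ³), so it can be made arbitrarily small at the cost of more steps.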
For more than four nucleons, GFMC calculations suffer significantly from the well-known fermion sign problem; the G_αβ(R′,R) is a local operator that does not know about global antisymmetry. Consequently it can mix in boson solutions that are generally (much) lower in energy. This results in exponential growth of the statistical errors as one propagates to larger τ, or as A is increased. For A ≥ 8 the resulting limit on τ is too small to allow convergence of the energy. This problem is solved by using a constrained-path algorithm [22], in which configurations with small or negative Ψ(τ)†·Ψ_T are discarded such that the average over all discarded configurations of Ψ(τ)†·Ψ_T is 0. Thus, if Ψ_T were the true eigenstate, the discarded configurations would contribute nothing but noise to ⟨H⟩. In practice, a final few (10–20) unconstrained steps are made, before evaluating the energy, to eliminate any bias from the constraint.
Expectation values with GFMC wave functions are evaluated as "mixed" estimates

⟨O(τ)⟩_Mixed = ⟨Ψ(τ) | O | Ψ_T⟩ / ⟨Ψ(τ) | Ψ_T⟩ .
The desired expectation values would have Ψ(τ) on both sides,
but if the starting trial wave function is reasonably good, we
can write Ψ(τ) = ΨT + δΨ(τ), and neglecting terms of order
[δΨ(τ)]², we obtain the approximate expression

⟨O(τ)⟩ = ⟨Ψ(τ) | O | Ψ(τ)⟩ / ⟨Ψ(τ) | Ψ(τ)⟩ ≈ ⟨O(τ)⟩_Mixed + [ ⟨O(τ)⟩_Mixed − ⟨O⟩_V ] ,

where ⟨O⟩_V is the variational expectation value. More accurate
evaluations of ⟨O(τ)⟩ are possible, essentially by measuring the observable at the mid-point of the path. However, such estimates require a propagation twice as long as the mixed estimate and require separate propagations for every ⟨O⟩ to be
evaluated.
In practice, the operator in the mixed estimate acts on the
explicitly antisymmetric ΨT, which helps project out boson
contamination in Ψ(τ) and is particularly convenient for evaluating operators with derivatives. The expectation value of the
Hamiltonian is a special case, because half of the propagator
can be commuted to the other side of the mixed expectation
value, giving Ψ(τ/2) on either side; consequently the energy
has a variational upper bound property and converges to the
eigenenergy from above.
As described above, the number of spin-isospin components in
Ψ_T grows rapidly with the number of nucleons. Thus, a calculation of a state in 8Be involves about 30 times more floating-point operations than one for 6Li, and 10B requires 25 times more than 8Be. Calculations of the sort being described are currently feasible for A ≤ 10. A few runs for the ground state of 12C have been made; these require ~100,000 processor hours on modern massively parallel computers, or ~10^17 floating-point
operations for a single state.
RESULTS
The imaginary-time evolution of GFMC calculations for the
first three (J π; T=0) states in 6Li is shown in Fig. 2. The energy is evaluated after every 40∆τ propagation steps and is
shown by the solid symbols with error bars for the Monte Carlo
statistical errors. The EV = E(τ = 0) for the 1+ ground state is at
−28 MeV, above the threshold for breakup into separated α(4He)
and deuteron (2H) clusters. However, the energy drops quite
rapidly and is already stable against breakup after only a few
propagation steps. The final energy and statistical error are obtained by averaging over E(τ) once the energy is stable. The rapid drop in E(τ) for small τ indicates that the Ψ_T has a small contamination of very high (>100 MeV) excitation; GFMC is particularly efficient at filtering out such errors.

Fig. 2  GFMC propagation for three states of 6Li.

Fig. 3  GFMC energy level calculations for various nuclei using the AV18 (blue) and AV18+IL2 (red) Hamiltonians, compared with experiment (green). As can be seen, the AV18+IL2 results are consistently lower than AV18 alone, and are in much better agreement with experiment.
The 3+ excited state is actually unstable against cluster
breakup, but is physically narrow (decay width Γ=0.024 MeV)
and the GFMC energy is stable. However, the 2+ excited state
is physically wide (Γ=1.3 MeV) and after an initial rapid drop
from the −24 MeV starting energy, it continues to drift lower,
so a straight average is not reasonable. In principle, if the calculation is carried to large enough τ, the energy should converge to the sum of α and deuteron energies. In this case we
extrapolate linearly back to the end of the initial drop as
marked by the open star in Fig. 2 to estimate the energy of the
state.
Figure 3 compares GFMC calculations of energy levels of
selected nuclei with the experimental values (right bars). The
calculations use just the AV18 NN potential alone (left bars) or
with the IL2 3N potential (middle bars). The figure shows that
calculations with just a NN potential underbind the A ≥ 3 nuclei, with the underbinding getting worse as A increases. In addition, many spin-orbit splittings, such as that of the 5/2⁻ − 7/2⁻ levels in 7Li, are too small. The addition of IL2 corrects these errors and results in good agreement with the data; for 53 levels in 3 ≤ A ≤ 10 nuclei the rms deviation from experiment is only 740 keV. The case of 10B is particularly interesting, as the calculation with just AV18 incorrectly produces a 1+ ground state
instead of the correct 3+. As the figure shows, including IL2
reverses the order of the two levels and produces the correct
ground state. This result has been confirmed by NCSM calculations using different realistic NN and 3N potentials [23].
Many of the states shown in Fig. 3 are strong-stable, i.e., they
can decay only by electromagnetic or weak transitions, if at all.
Others are actually resonant states that decay by nucleon or α
emission. As discussed above, good energies can still be
obtained for resonant states that are physically narrow by the
techniques discussed above, but for wide states with decay
widths Γ > 0.2 MeV, a true scattering calculation is more
appropriate.
A nuclear GFMC calculation was recently completed for the
case of n+α scattering [24]. The basic technique is to confine the
nucleons in a box, with a radius large enough so that a nucleon
at the edge is far enough away from the others (inside the α)
that it is in the asymptotic scattering regime. A logarithmic
boundary condition is imposed on the trial function, and a
GFMC propagation is made that preserves the boundary condition while finding the energy of the system. The combination
of energy and logarithmic derivative at the boundary radius
gives a phase shift δ(E). A number of calculations are made for
different boundary conditions to map out δ(E), from which partial-wave cross sections can be calculated and resonance poles
and widths extracted.
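The step from (energy, logarithmic derivative at the boundary radius) to a phase shift can be illustrated for s-wave scattering from a simple attractive square well (a textbook stand-in with ħ = m = 1 and arbitrary well parameters, not the n+α system): outside the interaction the radial function is u(r) ∝ sin(kr + δ), so matching u′/u at r = R gives δ.

import math

def swave_phase_shift(E, logderiv, R, mass=1.0, hbar=1.0):
    # outside the box: u(r) = sin(kr + delta)  =>  u'/u at R equals k cot(kR + delta)
    k = math.sqrt(2.0 * mass * E) / hbar
    delta = math.atan2(k, logderiv) - k * R
    return (delta + 0.5 * math.pi) % math.pi - 0.5 * math.pi   # phase shift modulo pi

# attractive square well of depth V0 and radius R: inside u ~ sin(Kr), K^2 = 2m(E + V0)
V0, R = 2.0, 1.0
for E in (0.1, 0.5, 1.0, 2.0):
    K = math.sqrt(2.0 * (E + V0))
    print(f"E = {E:4.1f}   delta = {swave_phase_shift(E, K / math.tan(K * R), R):+.3f} rad")

Repeating this for a set of boundary conditions maps out δ(E), from which cross sections and resonance parameters follow, in the spirit of the procedure described above.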
This is illustrated in Fig. 4, where n+α scattering in the 1/2⁻, 1/2⁺, and 3/2⁻ channels, calculated with AV18+IL2, is plotted (solid symbols) and compared to an R-matrix analysis of experimental data (solid lines). The agreement is quite encouraging, but this is by far the simplest of many scattering cases we would like to study.

Fig. 4  GFMC calculation of n + α scattering in partial-wave cross sections for the AV18+IL2 Hamiltonian. Experimental data are shown by solid curves.

Fig. 5  Point proton and neutron densities for helium isotopes.
In addition to energies of nuclear states, we calculate a variety
of other properties, such as one- and two-nucleon density and
momentum distributions. An example is shown in Fig. 5 where
the point proton and neutron densities of 4,6,8He are shown. The
α is extremely compact and has essentially identical proton and
neutron densities. In the halo nuclei 6,8He (so-called because of
the weakly bound valence neutrons and consequently extended
neutron distribution) the α core is only slightly distorted.
However, the valence neutrons drag the center of mass of the α
around and thus spread out the proton density. Recent neutral
atom trapping experiments that measure the isotope shift of
atomic transitions in these nuclei, combined with extremely
accurate atomic theory, have determined the charge radius differences among the helium isotopes [25]. GFMC calculations of
these charge radii are quite challenging because the neutron
separation energies are only 1−2 MeV, so absolute energies of
the 4,6,8He nuclei must be calculated to much better than our
standard 1−2% accuracy. By using variations of the AV18+IL2
Hamiltonian for the GFMC propagator, it is possible to map
out the dependence of the charge radius on separation energy,
and then read off a prediction from the experimental separation
energy. The resulting radii agree with atom trap experiments at
the 1-2% level [26].
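The "map and read off" step is essentially an interpolation; a sketch with invented (separation energy, radius) pairs standing in for the GFMC results:

import numpy as np

# hypothetical GFMC points: neutron separation energy (MeV) vs. point-proton radius (fm)
sep_energy = np.array([0.6, 1.0, 1.4, 1.8])
radius = np.array([2.10, 1.98, 1.90, 1.84])

slope, intercept = np.polyfit(sep_energy, radius, 1)   # linear map r(S_n)
S_exp = 0.975                                          # illustrative "experimental" separation energy
print(f"radius read off at S_n = {S_exp} MeV: {slope * S_exp + intercept:.3f} fm")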
CONCLUSIONS
The VMC and GFMC quantum Monte Carlo methods discussed here have established a new standard of comparison for
the ab initio study of light nuclei using realistic interactions.
There are many calculations of interest beyond those discussed
here. These include the study of the small isovector and isotensor terms in the nuclear Hamiltonian, which contribute to the
energy difference between “mirror” nuclei like 3H–3He and
7Li–7Be. The microscopic origin of these forces is not fully
understood, so the ability to test interaction models against
experimental energies is an important tool.
Electromagnetic and weak transitions between different nuclear states, and the response of nuclei to scattered electrons, neutrinos, and pions, are also of considerable interest. The first GFMC calculations of magnetic dipole (M1), electric quadrupole (E2), and weak Fermi (F) and Gamow-Teller (GT) transitions in light nuclei are just becoming available [27]. VMC calculations have been made for the electromagnetic elastic and
transition form factors in 6Li [28] and for spectroscopic factors
in the 7Li(e,e′p) [29] reaction, which are in good agreement with
experiment. Calculations of spectroscopic amplitudes such as
⟨8Li(J) + n(j) | 9Li(J′)⟩, where the nuclei are in a number of different possible excited states, are being used as input to DWBA
analyses of radioactive beam experiments, such as
2H(8Li,p)9Li [30]. There have also been VMC studies of astrophysically interesting radiative capture reactions such as
d(α,γ)6Li, t(α,γ)7Li, and 3He(α,γ)7Be [31]. GFMC calculations
of such reactions should be feasible in the next few years.
The chief drawback of the present VMC and GFMC methods
is the exponential growth in computational requirements with
the number of nucleons. It will be some time before A=11,12
calculations become routine. One of the challenges in moving
to larger nuclei is the need to transition from parallel computers with hundreds of processors, to the next generation of massively parallel machines with tens of thousands of processors.
With present machines tens of configurations reside on each
processor, but in future one configuration might be spread over
ten processors, which will require some major programming
adjustments.
To reach larger nuclei, a new quantum Monte Carlo method,
auxiliary field diffusion Monte Carlo (AFDMC), is under
development and has already been used for larger nuclei like
16O and 40Ca using slightly simpler interactions [32]. The chief
advantage of this method is that, by linearizing the problem
with the introduction of auxiliary fields, spins and isospins can
be effectively sampled, rather than completely summed over.
The other ab initio nuclear many-body methods, NCSM and
CCE, are also pushing on to larger nuclei, and we expect continued rapid progress in this field.
ACKNOWLEDGMENTS
The quantum Monte Carlo work described here has been done
by many researchers including J. Carlson, K.M. Nollett,
V.R. Pandharipande, M. Pervin, S.C. Pieper, B.S. Pudliner,
R. Schiavilla, and K. Varga. This work is supported by the U.S.
Department of Energy, Office of Nuclear Physics, under contract DE-AC02-06CH11357.
REFERENCES
1. J. Carlson and R. Schiavilla, Rev. Mod. Phys. 70, 743-841 (1998), and references therein.
2. S.C. Pieper and R.B. Wiringa, Annu. Rev. Nucl. Part. Sci. 51, 53-90 (2001), and references therein.
3. H. Kamada et al., Phys. Rev. C 64, 044001 (2001).
4. A. Kievsky et al., Phys. Rev. C 58, 3085 (1998).
5. A. Deltuva et al., Phys. Rev. C 71, 064003 (2005).
6. P. Navrátil, J.P. Vary and B.R. Barrett, Phys. Rev. Lett. 84, 5728 (2000); P. Navrátil and W.E. Ormand, Phys. Rev. Lett. 88, 152502 (2002), and references therein.
7. G. Hagen et al., Phys. Rev. C 76, 044305 (2007), and references therein.
8. V.G.J. Stoks et al., Phys. Rev. C 49, 2950 (1994).
9. R.B. Wiringa, V.G.J. Stoks, and R. Schiavilla, Phys. Rev. C 51, 38 (1995).
10. R. Machleidt, F. Sammarruca, and Y. Song, Phys. Rev. C 53, R1483 (1996); R. Machleidt, Phys. Rev. C 63, 024001 (2001).
11. J. Fujita and H. Miyazawa, Prog. Theor. Phys. 17, 360 (1957); 17, 366 (1957).
12. S.A. Coon et al., Nucl. Phys. A317, 242 (1979).
13. J. Carlson, V.R. Pandharipande, and R.B. Wiringa, Nucl. Phys. A401, 59 (1983).
14. S.C. Pieper et al., Phys. Rev. C 64, 014001 (2001).
15. E. Epelbaum, Prog. Part. Nucl. Phys. 57, 654 (2006), and references therein.
16. J. Lomnitz-Adler, V.R. Pandharipande, and R.A. Smith, Nucl. Phys. A361, 399 (1981).
17. J. Carlson, Phys. Rev. C 36, 2026 (1987).
18. S.C. Pieper, K. Varga, and R.B. Wiringa, Phys. Rev. C 66, 034611 (2002).
19. S.C. Pieper, Nucl. Phys. A751, 516c (2005).
20. R.B. Wiringa, Phys. Rev. C 43, 1585 (1991).
21. N. Metropolis et al., J. Chem. Phys. 21, 1087 (1953).
22. R.B. Wiringa et al., Phys. Rev. C 62, 014001 (2000).
23. E. Caurier et al., Phys. Rev. C 66, 024314 (2002).
24. K.M. Nollett et al., Phys. Rev. Lett. 99, 022502 (2007).
25. L.-B. Wang et al., Phys. Rev. Lett. 93, 142501 (2004); P. Mueller et al., Phys. Rev. Lett. 99, 252501 (2007).
26. S.C. Pieper, Proceedings of the "Enrico Fermi" Summer School Course CLXIX (to be published); arXiv:0711.1500v1 [nucl-th].
27. M. Pervin, S.C. Pieper, and R.B. Wiringa, Phys. Rev. C 76, 064319 (2007).
28. R.B. Wiringa and R. Schiavilla, Phys. Rev. Lett. 81, 4317 (1998).
29. L. Lapikás, J. Wesseling, and R.B. Wiringa, Phys. Rev. Lett. 82, 4404 (1999).
30. A.H. Wuosmaa et al., Phys. Rev. Lett. 94, 082502 (2005).
31. K.M. Nollett, Phys. Rev. C 63, 054002 (2001).
32. S. Fantoni, A. Sarsa, and K.E. Schmidt, Phys. Rev. Lett. 87, 181101 (2001).
NEWS AND CONGRATULATIONS
NEW DIRECTOR FOR CANADA'S PERIMETER INSTITUTE
May 9, 2008 - The Board of
Directors of Canada’s Perimeter
Institute for Theoretical Physics
(PI) announced the appointment
of Dr. Neil Turok to the position of
Executive Director, replacing
Dr. Howard Burton who left PI in
mid-2007 after having served in
this role since the institute was
first created in the summer of
1999 (Dr. Robert Myers from
CITA had been acting as
Executive Director in the interim). The appointment is
effective October 1st.
Commenting on his appointment, Dr. Turok says, “I
am thrilled and honored to serve as the next
Executive Director of Perimeter Institute, or PI. The
Institute’s innovative approach, its flexibility and its
determination to tackle the most basic questions are
already attracting the world’s most brilliant students
and researchers to Canada. Working with the excellent PI team, I hope to strengthen these developments so that PI becomes a world epicenter for theoretical physics, catalysing major scientific breakthroughs.”
About Dr. Turok
Neil Geoffrey Turok was born in Johannesburg, South
Africa, and educated in England with a PhD from
Imperial College. After stays at Santa Barbara,
Fermilab, and a Professorship at Princeton University,
he took the Chair of Mathematical Physics at
Cambridge University in 1997. He is a noted cosmologist and mathematical physicist who worked on the cosmic background radiation and the cosmological constant, and developed, with Stephen Hawking, the so-called Hawking-Turok instanton solutions, which can describe the birth of an inflationary universe. With Paul Steinhardt of Princeton University, he has been developing a cyclic model for the universe. Turok is not only an award-winning scientist but also a profoundly engaged one: he is the founder of the African Institute for Mathematical Sciences in Muizenberg -- a postgraduate educational centre supporting the development of mathematics and science across the African continent.
DR. RICHARD TAYLOR INDUCTED INTO
CANADA’S SCIENCE AND ENGINEERING
HALL OF FAME
Dr. Richard E. Taylor is a Nobel
laureate for his work in physics.
He and his colleagues provided
the first physical evidence for
quarks, now recognized as the
building blocks of 99 percent of
all matter. He is a Distinguished
University professor in the
Department of Physics at the
University of Alberta and on the
Board of Trustees of Canada's National Institute for
Nanotechnology (NINT). Born in Medicine Hat,
Alberta, Dr. Taylor is a Companion of the Order of
Canada and a Fellow of the Royal Society of Canada.
The Canadian Science and Engineering Hall of Fame is a central
part of the Innovation Canada exhibition. This is where the
Museum honours individuals whose outstanding scientific or
technological achievements have made a significant contribution
to Canadian society. There are currently 40 scientists, engineers,
and researchers recognized in the Hall of Fame, including John
Polanyi, Maude Abbot, Sir Sandford Fleming and Joseph Armand
Bombardier. Each spring, new members are added to the permanent gallery at the Canada Science and Technology Museum and
on its website sciencetech.technomuses.ca.
RAYMOND LAFLAMME HONOURED
WITH "PREMIER'S DISCOVERY
AWARD"
Dr. Raymond Laflamme of the
Perimeter Institute for Theoretical Physics (PI) and the Institute
of Quantum Computing (IQC) at
the University of Waterloo has
been honoured with a special
Premier's Discovery Award for
his contributions in natural sciences and engineering.
The award, presented April 29th in Toronto, acknowledges Dr. Laflamme's individual achievements and
demonstrated leadership in the globally competitive
arena of scientific research.
Dr. Laflamme is this year’s speaker at the Herzberg Memorial
Public Lecture held on June 8th in Quebec City as part of the
2008 CAP Congress. He will be speaking on “Harnessing the
Quantum World”.
ARTICLE DE FOND
THE NEXT CANADIAN REGIONAL CLIMATE MODEL
BY
AYRTON ZADRA, DANIEL CAYA, JEAN CÔTÉ, BERNARD DUGAS, COLIN JONES,
RENÉ LAPRISE, KATJA WINGER, AND LOUIS-PHILIPPE CARON
The provision of regional climate-change projections to support climate impact and adaptation
studies is a growing field that places heavy
demands on the modelling systems used in this
production process. Impact groups require increasingly
higher-resolution climate information, commensurate with
the spatial and temporal scales at which they work (e.g. at
the scale of individual river catchments, the scale of a forest ecosystem, etc.). Furthermore, potential changes in climate means, such as shifts in seasonal mean rainfall or
temperature, are being recognized as of secondary importance with respect to impacts on many climate-sensitive
sectors. Rather, information on potential changes in infrequent, extreme events (e.g. high-impact weather events
such as wind-storms, ice-storms, flooding, droughts etc) is
often more important to properly support impact and adaptation work. Many of these extreme events are more accurately simulated in higher resolution modelling systems.
The ability to identify changes in extreme events related to
anthropogenic forcing, versus slow changes in the climate
system associated with natural variability, also places an
enormous demand on our models. In particular, as the
emphasis shifts towards rare (extreme) events, a large
ensemble of regional climate simulations is required, so
that a probability of occurrence can be attached to those
changes in extreme events that are identified to be forced
by increasing greenhouse gas (GHG) concentrations.
SUMMARY
The new generation of the Canadian
Regional Climate Model is being developed
within the framework of the existing Global
Environmental Multiscale (GEM) model
presently used for global and regional
numerical weather prediction at the
Meteorological Service of Canada. GEM supports a number of model configurations
within a single system, e.g. a regular latitude-longitude global model, a variable-resolution global model and a limited-area model
(LAM), allowing a flexible approach to model
development. Within the Canadian Regional
Climate Modelling and Diagnostics Network,
the LAM version of GEM will be used and further developed into a flexible and computationally efficient regional climate model.
The dual requirements of a significant increase in model
resolution and an increased number of climate simulations demand a highly efficient modelling system that
can capitalize on the recent advances in parallel computing architectures. In short, Regional Climate Modelling
groups need to use accurate, physically based, models that
are optimally designed for application at high-resolution
and that integrate in the minimal time possible. In the context of the next 5-10 years, high-resolution means an RCM
resolution of ~10km, with specific, targeted integrations at
~2km resolution over a limited geographic domain. The
limited-area version of Global Environmental Multiscale
(GEM), a model originally developed for Numerical
Weather Prediction (NWP) [1-3], supports these combined
needs, particularly the requirement to operate at high resolution in a highly parallelised computational environment. GEM has therefore been targeted as the dynamical
core of the next-generation of the Canadian Regional
Climate Model (CRCM5).
In the following sections, we present a brief description of
the history, properties and behaviour of the GEM model,
as well as an overview of the past, present and future of
climate modelling in Canada.
NUMERICAL WEATHER PREDICTION IN
CANADA
Analysing and forecasting the weather using numerical models requires huge computing and data-processing
power. The process that leads to the production of weather forecasts at the Canadian Meteorological Centre (CMC)
located in Dorval is an endless cycle of computer tasks
performed many times per day and that can be summarized as follows:
i. Data ingest: acquisition, decoding, re-formatting and
quality control of large amounts of meteorological
data from many sources worldwide.
ii. Data assimilation: an accurate three-dimensional
depiction of atmospheric winds, pressure, moisture,
and temperature, called objective analysis. It is a
careful weighting between a preceding forecast and
recently acquired observational data.
iii. Forecast: prediction of the time evolution of the
atmosphere, from a few hours up to a couple of weeks
into the future.
A. Zadra1,3 <ayrton.[email protected].ca>, D. Caya2,3, J. Côté1,3, B. Dugas1,3, C. Jones3, R. Laprise3, K. Winger3 and L.-P. Caron3

1 Meteorological Research Division, Environment Canada, Québec, Canada
2 Ouranos Consortium, Montréal, Québec, Canada
3 Centre ESCER pour l'Étude et la Simulation du Climat à l'Échelle Régionale, Université du Québec à Montréal, Québec, Canada
The above steps are then followed by post-processing (statistically-based adjustments for the automatic generation of forecast products) and dissemination to weather offices and other
clients.
NWP became a reality after the Second World War with the
development of civil aviation and the first computers.
Canadian scientists have been at the forefront of the development of numerical methods for data assimilation and weather
forecasting. Some of the methods developed in Dorval are at
the foundation of current operational models worldwide: spectral method, semi-implicit time integration scheme, coupling of
the semi-Lagrangian method to the semi-implicit time scheme,
fully elastic non-hydrostatic modelling, variable resolution,
and objective analysis. As resolution and computing power
increased, more sophisticated parameterizations of subgrid-scale physical effects were developed (henceforth referred to as
the “physics”).
The GEM model is the current NWP model in Canada. A global stretched-grid configuration at 15-km resolution is used for
regional short-range forecasts and a global uniform-resolution
configuration at approximately 33-km resolution for global
medium-range forecasts. The medium-range (global) data
assimilation is a 4-dimensional variational scheme [4], while the
short-range (regional) data assimilation is a more restricted 3-dimensional scheme. More details on the GEM model dynamics and physics are provided later in this article.
CLIMATE MODELLING IN CANADA
History of global climate modelling
Climate modelling was initiated in the early 1970s at the Canadian
Climate Centre which later became the Canadian Centre for
Climate modelling and analysis (CCCma). The first Canadian
General Circulation Model (GCM) was based on the global
NWP model, which used the spectral transform technique, a
Eulerian semi-implicit scheme and very rudimentary physics
consisting of surface fluxes of heat, moisture and momentum,
a soft moist convective adjustment scheme, solar and terrestrial radiative transfers using prescribed cloud and ozone distributions. Sea surface temperature and sea ice distributions were
prescribed from climatology and held constant. Over land, surface temperature was calculated using a simple thermal inertia
scheme and a simple bucket method handled the water content.
In the 1980’s, the model was upgraded to allow an option for
triangular truncation and hybrid vertical coordinate system [5].
The standard horizontal resolution of the atmospheric component remained T32 truncation for some time, with 10 levels in
the vertical; recent integrations however are being performed
with T47 and T63 with 31 levels and an uppermost level at
1 hPa. A lot of effort went into the development of suitable
physics in the atmosphere, including: turbulent vertical fluxes
of heat, moisture and momentum in the planetary boundary
layer and the free atmosphere following Monin-Obukhov similarity theory; orographic wave drag; convective fluxes of heat
and moisture; clouds and precipitation; solar and terrestrial
radiative energy transfers interacting with clouds that are diagnostically derived from local relative humidity (CGCM2 [6];
CGCM3 [7-9]).
Interactions between the atmosphere and the underlying surface through exchanges of energy and water constitute an
essential element of a climate model. The atmospheric component is therefore coupled to a three-dimensional dynamical
ocean model with thermodynamic sea ice [10]; the more recent
version employs an isopycnal eddy-stirring mixing and sea ice
is treated with a dynamical cavitating-fluid scheme. CGCM3
employs the CLASS scheme [11], a three-layer soil model with
explicit treatment of snow and vegetation, and fractional surface types: bare soil, vegetation, snow over bare soil and snow
with vegetation. Several transient GHG and aerosols
(GHG&A) experiments have been realised with CGCM2
(IPCC Third Assessment Report) and CGCM3 (IPCC Fourth
Assessment Report).
History of regional climate modelling
The development of the Canadian Regional Climate Model
(CRCM) was initiated in 1991 at the Université du Québec à
Montréal and has since then been pursued within the Canadian
Regional Climate Modelling (and Diagnostics) Network. In
2002, the Ouranos Consortium was created and its Climate
Simulations Team (CST) became responsible for the development of the operational versions of the CRCM and to carry out
the climate-change projections. The Ouranos CST got strongly
involved in the development of later versions of the model.
The dynamical kernel of CRCM evolved from the computationally efficient fully elastic semi-implicit semi-Lagrangian
nested model of Tanguay et al. [12] in which orography was
implemented through a terrain-following scaled-height coordinate. CRCM employs one-way nesting following Davies [13]:
lateral boundary conditions for winds, air temperature, water
vapour and pressure are interpolated (either from reanalyses or
CGCM simulations) and progressively imposed over a 10 grid
point ribbon around the domain perimeter. An option also
exists for weakly nudging the large scales within the interior of
the domain [14]. The typical horizontal grid mesh has been 45
km for some time, with a 15-min timestep, and the number of
levels in the vertical has increased from 10 to 29. The standard
domain has been enlarged from 100×70 to 201×193 grid
points, and it now covers most of North America and adjacent
portions of Atlantic, Pacific and Arctic Oceans.
The physics package of CGCM2 was initially employed in
CRCM [15]. Later the moist convective adjustment was
replaced with a deep cumulus convection scheme [16]. A recent
version of CRCM developed by the Ouranos CST uses the
CGCM3 physics package including CLASS, which contributed
to a substantial reduction of the excessive water recycling in summer [17]. The Great Lakes are handled with a mixed-layer
model developed by Goyette et al. [18]. Sea surface temperatures and sea ice distributions are prescribed by interpolating
analyses or CGCM simulations; an option exists for coupling
with the regional ocean model developed by Saucier et al. [19]
over the Gulf of St-Lawrence and Hudson-James Bays.
Climate simulations and climate-change projections were realized, driven by time slices of CGCM2, for transient GHG&A
scenarios [20-21].
DESCRIPTION OF THE GEM MODEL

CRCM5 is the version of the model that is developed by the CRCMD Network, a collaboration among scientists at UQAM, Environment Canada (RPN and CCCma) and Ouranos. The dynamical kernel of this version is the Local Area Model (LAM) version of GEM, described below.

Hydrostatic pressure and vertical coordinate

The nonhydrostatic formulation of GEM uses hydrostatic pressure [22] as the basis for its vertical coordinate. The hydrostatic pressure (π) is defined as a pressure field in hydrostatic balance with the mass field

∂π/∂φ = −ρ    (2.1a)

where ρ is the density and φ the geopotential height. True pressure (p) is represented as a perturbation from π,

p = π exp(q′)  ⇒  ln p = ln π + q′ ,   q′_T = 0    (2.1b)

where the prime denotes a perturbation quantity and the subscript T denotes evaluation at the model top. Departure from hydrostatic balance is defined by the "nonhydrostatic index"

μ ≡ ∂p/∂π − 1    (2.2)

Hydrostatic pressure varies monotonically with height and can be used to define a terrain-following vertical coordinate

η ≡ (π − π_T) / (π_S − π_T)    (2.3)

where the subscript S refers to evaluation at the surface of the model. To allow for a more general non-linear relation between η and π, the terrain-following vertical coordinate of the model, denoted by Z, is taken to be π*(η), the reference pressure profile, obtained from η(π, π_S, π_T) by replacing the fields π, π_S and π_T by reference(1) values Z ≡ π*, Z_S ≡ π*_S and Z_T ≡ π*_T respectively.

(1) The reference state is motionless and isothermal with temperature T*. The reference potential temperature and geopotential are, respectively, θ* = T* (Z/p00)^(−Rd/cpd) and φ* = −Rd T* ln(Z/Z_S), where Rd and cpd are the gas constant and specific heat of dry air at constant pressure, and p00 = 1015 hPa.

Governing equations

The governing equations are the forced nonhydrostatic primitive equations [3]:

dV_H/dt + Rd T_v ∇ ln p + (1 + μ) ∇φ + f (k × V_H) = F_H    (2.4)

∂Ż/∂Z + D + ds/dt = 0    (2.5)

d ln θ/dt ≡ d/dt ln(θ/θ*) + Ż d ln θ*/dZ = d/dt [ ln(T_v/T*) − κ ln(p/Z) ] − κ Ż/Z = F_θ    (2.6)

δ_H dw/dt − gμ = δ_H F_V    (2.7)

dφ/dt − gw = d(φ − φ*)/dt − Rd T* Ż/Z − gw = 0    (2.8)

dq_v/dt = F_qv    (2.9)

∂φ/∂Z = −(1/ρ) ∂π/∂Z    (2.10)

where

Ż ≡ dZ/dt ,   p = ρ Rd T_v    (2.11)

and

d/dt ≡ ∂/∂t + V_H · ∇_Z + Ż ∂/∂Z    (2.12)
Here, V_H is horizontal velocity, D is horizontal divergence, T_v is virtual temperature, s = ln[(π_S − π_T)/(Z_S − Z_T)] is the mass variable, q_v is specific humidity of water vapour, f is the Coriolis parameter, k is a unit vertical vector, g is the vertical acceleration due to gravity, and F_H, F_θ, F_V and F_qv are parameterized physical forcings. Eqs. (2.4)-(2.10) are respectively the horizontal momentum, mass continuity, thermodynamic, vertical momentum, vertical velocity, water vapour and hydrostatic pressure equations, and (2.11) is the equation of state, taken here to be the ideal gas law for moist air. The switch δ_H in (2.7) controls whether the model operates in hydrostatic or nonhydrostatic mode.
Boundary conditions consist of periodic conditions in the horizontal for global grids, or are provided by nesting data for limited-area grids, and homogeneous conditions Ż = 0 at the top and bottom of the model atmosphere.
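As a minimal numerical sketch of how the vertical coordinate is built (a synthetic, isothermal column with illustrative constants; not the GEM code), one can integrate (2.1a) downward from the model top to get π(φ) and then form η from (2.3):

import numpy as np

g, Rd, T0 = 9.81, 287.0, 250.0                 # illustrative constants (SI units)
phi = np.linspace(30_000.0 * g, 0.0, 61)       # geopotential, from ~30 km down to the surface
pi = np.empty_like(phi)
pi[0] = 1_000.0                                # hydrostatic pressure at the model top (Pa), illustrative

for i in range(1, phi.size):
    rho = pi[i - 1] / (Rd * T0)                # ideal-gas density of the column, taking p ~ pi
    pi[i] = pi[i - 1] - rho * (phi[i] - phi[i - 1])   # d(pi) = -rho d(phi), Eq. (2.1a)

eta = (pi - pi[0]) / (pi[-1] - pi[0])          # Eq. (2.3): 0 at the model top, 1 at the surface
print(f"surface pressure ~ {pi[-1] / 100:.0f} hPa;  eta runs from {eta[0]:.1f} to {eta[-1]:.1f}")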
Time and space discretisations
In GEM, equations (2.4)-(2.10) are first integrated in the
absence of forcing. The parameterised forcing terms appearing
on the right-hand sides of those equations are then added using
a fractional-step time method. The time discretisation used to
integrate the frictionless adiabatic equations of the first step is
(almost) fully implicit semi-Lagrangian. A prognostic equation
of the form
dF/dt + G = 0    (2.13)
is discretised as
(F^n − F^{n−1})/Δt + [(1/2 + ε) G^n + (1/2 − ε) G^{n−1}] = 0    (2.14)
where ψ^n = ψ(x, t), ψ^{n−1} = ψ(x(t − Δt), t − Δt), ψ = {F, G},
t = n∆t, x = (r,Z) is the 3-dimensional position vector, r is the
position vector on the sphere of radius a, and a trajectory is
determined by an approximate solution to
dr/dt = V_H(r, Z, t) ,   d²r/dt² = −r |V_H|²/a² ,
dZ/dt = Ż(r, Z, t) ,   d²Z/dt² = 0 .    (2.15)
The scheme (2.14) is decentred along the trajectory to avoid
spurious response arising from a centred approximation in the
presence of orography. Cubic interpolation is used everywhere
for upstream evaluations, except for the trajectory computations in (2.15) where linear interpolation is used instead.
Grouping terms at the new time on the left-hand side and
known quantities on the right-hand side, (2.14) may be rewritten as
F^n/τ + G^n = F^{n−1}/τ − [(1 − 2ε)/(1 + 2ε)] G^{n−1} ≡ R^{n−1} ,   τ = (1 + 2ε)Δt/2 .    (2.16)
A variable-resolution discretisation on an Arakawa C grid is
used in the horizontal. The scalar grid, where scalar fields are
defined, is described by giving a list of longitudes and latitudes, excluding the poles. The grid points of the zonal (meridional) wind image are located at the same (longitudes) latitudes
as the scalar grid points but at longitudes (latitudes) situated
halfway between those of the scalar grid. The horizontal discretisation of equations is centred almost everywhere and,
hence, is almost everywhere second order in space. There is no
staggering of variables in the vertical, and the equations where
a vertical derivative appears are discretised layer by layer with
a centred approximation.
Spatial discretisation of (2.16) yields a set of coupled nonlinear equations. Terms on the right-hand side, which involve upstream interpolation, are first evaluated. The coupled set is rewritten as a linear one (where coefficients depend on the basic state), plus a nonlinear perturbation that is placed on the right-hand side. The set is then solved iteratively using the linear terms as a kernel, and re-evaluating the nonlinear terms on the right-hand side at each iteration. Two iterations, the minimum for stability, have been found sufficient for practical convergence [3]. The linear set can be algebraically reduced to the solution of a 3-dimensional elliptic boundary-value problem, from which the other variables are obtained by back-substitution.

A great amount of flexibility is available in configuring the model's horizontal mesh (Figure 1). Namely, the simulated domain (i) may have uniform or variable latitude and/or longitude grid point distributions; (ii) may be of global or limited-area extent; and finally, (iii) its rotation poles may or may not be co-located with the true geographical poles. The last option is used to place an experiment's area of interest over its domain equator, where a latitude-longitude grid will most closely resemble a plane surface.

Fig. 1  Examples of GEM grids: (a) non-rotated global uniform latitude-longitude grid; (b) rotated, variable-resolution grid with highest-resolution domain (thick black line) centred over Europe; (c) rotated, uniform limited-area grid over Europe (dashed lines indicate nesting and sponge zones). Red lines indicate the position of the (rotated) equator. For clarity, only every 5th grid point of the original grids is shown.

Another attractive property of GEM is that it can operate in highly parallelised computational environments. Figure 2 shows some results from computational speed-up experiments for an increasing number of processors.
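How such speed-up curves are read can be shown with a few lines of Python (the wall-clock times below are hypothetical, not the measurements behind Figure 2): speed-up is the reference (24-CPU) time divided by the measured time, and efficiency compares it with perfect linear scaling.

timings = {24: 40.0, 48: 21.0, 96: 11.5, 192: 6.8, 384: 4.5}   # hypothetical hours per simulated month

ref_cpus, ref_time = 24, timings[24]
for cpus, t in sorted(timings.items()):
    speedup = ref_time / t                 # speed-up relative to the 24-CPU reference run
    ideal = cpus / ref_cpus                # perfect (linear) scaling
    print(f"{cpus:4d} CPUs: speed-up {speedup:5.2f} (ideal {ideal:5.2f}), efficiency {speedup / ideal:.0%}")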
Fig. 2  Computational speed-up rate for the parallelised GEM model: The thick red line shows the measured speed-up rate as a function of the number of processors (CPUs), with respect to a reference simulation with 24 processors. The thin straight line indicates the theoretical (perfect) speed-up curve.
RESULTS FROM REGIONAL CLIMATE
SIMULATIONS WITH GEM
Configurations, physical parameterisations and forcing
data
A set of present-day climate simulations exploring various
model configurations has been performed over the last few
years. Some were limited-area simulations for the period of
December 1959 to December 2000, using the European Centre
for Medium Range Weather Forecast (ECMWF) ERA40
reanalyses as lateral boundary conditions. Other simulations
were performed, with uniform- or variable-resolution global
grids from January 1978 to March 2004.
For all simulations, sea-surface temperature and sea-ice surface
boundary conditions were interpolated from the AMIP2
(Atmospheric Model Intercomparison Project v2) observed,
1º×1º monthly mean values, as obtained online from the
Lawrence-Livermore National Laboratory (LLNL) Program
for Climate Model Diagnosis and Inter-comparison (PCMDI).
The following parameterisations were used: (i) deep and shallow moist convective processes [23]; (ii) large-scale condensation [24]; (iii) correlated-K solar and terrestrial radiations [25];
(iv) ISBA land-surface scheme [26]; (v) subgrid-scale orographic gravity-wave drag [27]; (vi) low-level orographic blocking [28].
GEM has already participated in several international climate
experiments. Figure 1b shows one of the domains used in
GEM’s contribution to the Stretched Grid Model Intercomparison Project (SGMIP) [29]. Results from these experiments show
that large-scale climatological features, such as the tropospheric and lower stratospheric temperatures and winds, are well reproduced by the model, with biases generally smaller than 2°C
and 3 m/s, respectively. GEM also contributed to the InterContinental-scale Transferability Study (ICTS) [30], where participating models were asked to simulate the 2000-2004 period,
with the same model physics and resolution, over seven distinct
limited-area domains distributed over many continents
(Figure 3). Each model had to provide simulated data at specific locations where observations (station data) were available,
and energy and water budgets were considered at different
timescales.
The appeal in the LAM modelling approach is that it can be
used to downscale information, i.e. to generate small-scale features consistent with the large scales provided at the lateral
boundaries. In this approach, computer resources are no longer
used to simulate conditions over the entire globe, which allows
the focussing of resources on the domain of interest. Lateral
conditions for the LAM domain are taken from available
lower-resolution simulations or analysed data. Several such
databases are available nowadays and provide the best current
knowledge of the atmospheric state on a regular grid. The previously mentioned ERA40 is an example and provides global
data from 1957 to 2000. Several North-American (NA) and
European (EU) LAM experiments driven by ERA40 have been
performed. Figure 4 shows some results from the NA experiment, where the 500-hPa geopotential height field is used as a
proxy of the large-scale information content. Differences with
respect to the driving analyses are contoured. Results indicate
that winter results are closer to the analysis than the summer
ones, this summer bias being caused by warmer-than-observed
temperatures throughout the simulated atmosphere. Lower precipitation and a drier surface are also found in the winter simulations. The differences between the two seasons can be
explained by the relative strength of the driving winds [31].
Much stronger mean westerly tropospheric winds occur in winter, allowing for a greater control of the LAM by lateral boundary conditions. In the summer, climatological winds are only
half as strong, the lateral control is therefore weaker allowing
the LAM to drift more from the analysed climate.
Results from these climate experiments indicate that GEM performs rather well, for a model originally developed for NWP.
Some deficiencies can be traced to physical parameterisations
and surface processes, and plans to improve their representation are described in the last section of this article. In the following section, results from a recent study on the simulation of extreme events by GEM are presented.

Fig. 3  The seven GEM-LAM domains (black lines) used in the regional climate simulations proposed by the ICTS experiment. Red X's indicate the positions of observation stations.

Fig. 4  (a) Winter mean and (b) summer mean of the 500-hPa geopotential height GZ (in dm) generated by a GEM-LAM regional climate simulation over the North American grid shown in (c). Differences with respect to the corresponding means from ERA40 reanalyses are shown with solid contours (colour interval in units of 10 dm).
A study on tropical cyclone activity
The landfall of hurricane Katrina in 2005 is a prime example of
the devastation a tropical cyclone (TC) can yield on coastal
populations and infrastructures. This destructive power arises
from a combination of extreme winds, storm surges and flooding, torrential rains and mudslides. Since 1995, tropical
cyclone activity in the Atlantic Ocean has increased markedly
in contrast to the quieter period of the 1970s and 1980s. Recent
years have seen many records broken in the Atlantic and the
accumulated cyclone energy index has been above the 1951-2000 median for all years from 1995-2005, except in 1997 and
2002 [32], years during which an El Niño, known to suppress
TC activity in the Atlantic, was observed. Whether this
upswing in activity is due to a multi-decadal natural variability, to a long-term rising trend caused by anthropogenically
forced global warming or to a combination of the two is still
unclear. This uncertainty has its root in the relatively limited
number of years of TC data available and the reliability of these
historical data [33].
Climate models offer an alternative to observations by which
TC activity and the factors controlling interannual variability
can be explored. However, so far, CGCM studies of future
tropical cyclone activity have shown widely different conclusions. One major cause of this is the low resolution of CGCMs
and their inability to
simulate the important processes controlling
tropical
cyclone genesis and
intensification. The
physical realism of
these simulated tropical cyclones clearly
improves
with
increasing model resolution [34]. Running
a
high-resolution
model over the whole
globe is still not feasible except on the Fig. 5 Grid used in the hurricane study.
The high-resolution area covers
most powerful superthe entire TC track in the
computers and only
Atlantic, while the rest of the
for limited simulation
globe is run at a lower resolution,
time. An alternative
typical of today's GCM.
approach to achieving
locally enhanced resolution (e.g. over the tropical Atlantic) is to run a model in variable-resolution mode (GVAR). In this study, we exploit this
option in GEM using a 2° global domain with telescoping up to
0.3° over the entire tropical Atlantic TC track (Figure 5).
Initially we concentrate on the ability of GEM to simulate past
observed Atlantic TC activity. In the first step of this evaluation, GVAR GEM has been integrated for the period 1979-2004
using observed sea surface temperatures (SSTs). A comparison
of simulated TC activity/intensities between the GVAR run and
observations allows a direct comparison of TC statistics on climate timescales. Figure 6 shows the relative monthly distribution for the period 1979-2004 for both observed and simulated
TCs. These storms have been selected based on physical characteristics typical of TC, such as a wind speed threshold of
63 km/h. The majority of Atlantic tropical storms are observed
during the period August-October, with a peak in September.
We notice that GEM reproduces fairly well the intra-annual
distribution of TCs, with a maximum during September.
However, its overall distribution is biased toward a too large
proportion of TCs at the end of the year to the detriment of the
beginning of hurricane season. In absolute numbers, GEM
tends to systematically overestimate the monthly production of
TCs; the reasons for this are currently under investigation.

Fig. 6  Normalised intra-annual distribution of Atlantic tropical cyclones for the period 1979-2004.
Fig. 7  Normalised maximum wind speed distribution of Atlantic tropical cyclones for the period 1979-2004.
A recurrent problem with low-resolution GCMs when studying TCs is the low intensity of the systems produced: 2° GCMs produce systems that are reminiscent of TCs but too weak to be considered so, as the wind speed threshold of 63 km/h is rarely reached with low-resolution CGCMs. By increasing the resolution to 0.3°, we witness the formation of tropical storms and Category 1 hurricanes (threshold of 119 km/h). Figure 7 shows
that GEM still comes short of producing the most intense
storms (Category 3+); further increases in resolution seem to
be necessary for simulating these most destructive storms. This
is not entirely surprising since 0.3° appears an insufficient resolution for eye development, a key process in the development
of extremely high wind speeds in TCs.
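The selection criterion mentioned above amounts to simple thresholding on the peak wind speed (63 km/h for a tropical storm and 119 km/h for a Category 1 hurricane, as quoted in the text); a schematic, with invented storm records:

def classify_storm(max_wind_kmh):
    # thresholds from the text: 63 km/h (tropical storm), 119 km/h (Category 1 hurricane)
    if max_wind_kmh >= 119.0:
        return "hurricane (Category 1 or stronger)"
    if max_wind_kmh >= 63.0:
        return "tropical storm"
    return "below tropical-storm strength"

for wind in (45.0, 70.0, 105.0, 130.0):       # hypothetical peak winds (km/h) of tracked vortices
    print(f"{wind:5.1f} km/h -> {classify_storm(wind)}")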
FUTURE PLANS FOR GEM AS A REGIONAL
CLIMATE MODELLING TOOL
One of the primary aims of the CRCMD Network is to prepare
CRCM5 for use as the next-generation operational Canadian
Regional Climate Model. The primary goal of this network is
to evaluate and further develop CRCM5 for application at
~10km resolution as a pan-North American regional climate
model. In this concluding section we briefly discuss a few of
the present and planned projects within the Network that are
contributing towards this goal.
Assessing a single parameterization package for GCM
and RCM resolutions
An important aspect in the operational application of RCMs is
compatibility of physical parameterisations across the interface
between a driving GCM and client RCM. The capacity of the
physics to be used across the resolution range spanned by
GCMs and RCMs, depends on the resolution sensitivity of a
few key schemes (e.g. convection and cloud parameterizations). Recent analysis of a large ensemble of RCM simulations challenges the conventional wisdom that commonality of
physics increases the ability to reproduce large-scale variability as defined by the GCM results over the RCM domain [35].
Furthermore, in the case of the parameterization of convection,
schemes have been specifically developed for the mesoscale
range (~10-50 km model resolutions). These schemes likely
perform better at present and planned RCM resolutions than
schemes designed with GCM resolutions in mind.
To address these questions, the 4th version of the Canadian
GCM physics (AGCM4) is being implemented into the GEM
dynamical core. The resolution sensitivity of this package will
then be evaluated within the CRCMD Network. Parallel to this,
MSC scientists will assess the performance of the AGCM4
physics in their coupled GCM configuration. Depending on the
results obtained in both assessments, a common physics package in a single dynamical framework applicable for both GCM
and RCM modelling will become available.
Implementing Canada-specific Earth System Modelling
features into CRCM5
Some process modules, which play a key role in defining
regional climate differences across Canada and become
increasingly important as model resolution increases, are
presently being implemented into CRCM5. Some of the key
processes include:
i. An interactive lake model representing important lake
processes (e.g. lake surface temperatures, lake-ice fractional coverage, and surface evaporation). A number of
one-dimensional lake models are presently being tested.
These models can provide an accurate representation of
the surface and subsurface lake temperatures. Through
use of a coupled ice module, the lake-ice fraction can
also be calculated and act as a lower boundary condition
for the model atmosphere. One of these models will subsequently be coupled with the land-surface parameterization in CRCM5.
ii. An interactive permafrost model will be implemented into CRCM5. Permafrost conditions are relatively widespread in Arctic Canada and exhibit significant sensitivity to a warming climate, in particular to a shorter and milder winter season [36]. Changes in permafrost characteristics can have a large impact on transport and building infrastructure in the Canadian Arctic. The latest version of the CLASS land-surface scheme [11] is suitable for permafrost studies owing to its deeper, multi-layer vertical structure. This version of CLASS will be coupled with a permafrost model and introduced as an interactive component in CRCM5.
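As a rough illustration of the lake-module interface sketched in item (i), the toy Python column below shows how an atmosphere-supplied surface heat flux can be turned into an updated temperature profile and a lake-ice flag usable as a lower boundary condition. It is a minimal sketch under assumed constants and names, not CRCM5 code nor any of the one-dimensional lake models under test.

# Toy 1-D lake column (illustrative only; not the CRCM5 lake module).
import numpy as np

def step_lake_column(temp, dz, dt, surface_flux,
                     kappa=1.0e-6, rho_cp=4.18e6, t_freeze=273.15):
    """Advance layer temperatures temp (K) by one time step dt (s).

    dz           -- layer thickness (m), uniform for simplicity
    surface_flux -- net downward heat flux from the atmosphere (W m^-2)
    kappa        -- assumed effective thermal diffusivity (m^2 s^-1)
    Returns (new_temp, surface_temperature, ice_fraction).
    """
    temp = np.array(temp, dtype=float)
    # Explicit vertical diffusion between adjacent layers (energy conserving).
    interface_flux = -kappa * rho_cp * np.diff(temp) / dz   # W m^-2, downward
    temp[:-1] -= dt * interface_flux / (rho_cp * dz)
    temp[1:] += dt * interface_flux / (rho_cp * dz)
    # Atmospheric forcing enters through the top layer.
    temp[0] += dt * surface_flux / (rho_cp * dz)
    # Crude ice diagnostic: full cover once the surface layer reaches freezing
    # (a real module would carry ice thickness and a fractional cover).
    ice_fraction = 1.0 if temp[0] <= t_freeze else 0.0
    if ice_fraction:
        temp[0] = t_freeze
    return temp, temp[0], ice_fraction

# Example: a 10-layer, 1 m column losing 200 W m^-2 for one hour.
profile = np.full(10, 277.0)
for _ in range(60):
    profile, t_sfc, ice = step_lake_column(profile, dz=1.0, dt=60.0,
                                           surface_flux=-200.0)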
Developing a coupled Arctic Regional Climate
Modelling System based on CRCM5
The Arctic is particularly sensitive to anthropogenic climate
change, with most Global Climate Models indicating a significant Arctic amplification of any predicted global mean temperature increase, in response to increasing levels of greenhouse
gases [37]. The 4th Assessment Report of the IPCC [38] indicates
the potential for widespread impacts of climate change on
Arctic communities, infrastructure and ecology. Many of the
key processes occurring in the Arctic climate system are coupled atmosphere-ocean-sea ice phenomena, often occurring on
relatively small spatial scales. Sea-ice often acts as an integrating surface quantity between the atmosphere and ocean, and is
therefore a critical component to simulate accurately in Arctic
climate models [39]. The third major new model component being coupled into CRCM5 is therefore an Arctic regional ocean and sea-ice model. Present efforts centre on coupling the
Rossby Centre Ocean Model (RCO) [40] to CRCM5.
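At its core, such a coupled system alternates the components while exchanging fluxes and surface state at each coupling interval. The sketch below is schematic only, not the CRCM5-RCO coupling infrastructure; the component interfaces and field names are assumptions made for the example, and a real coupler would also handle regridding, time averaging and conservation.

# Schematic atmosphere / ocean-ice coupling loop (illustrative only).
from dataclasses import dataclass

@dataclass
class SurfaceFields:
    sst: float            # sea-surface temperature (K)
    ice_fraction: float   # sea-ice cover (0-1)

@dataclass
class AtmosphericFluxes:
    heat: float           # net downward heat flux (W m^-2)
    momentum: float       # wind stress (N m^-2)
    freshwater: float     # precipitation minus evaporation (kg m^-2 s^-1)

def couple(atmosphere, ocean_ice, n_steps, dt):
    """Alternate the components, exchanging fluxes and surface state.

    Assumed interfaces: atmosphere.step(surface, dt) -> AtmosphericFluxes,
    ocean_ice.step(fluxes, dt) -> SurfaceFields,
    ocean_ice.initial_state() -> SurfaceFields.
    """
    surface = ocean_ice.initial_state()
    for _ in range(n_steps):
        fluxes = atmosphere.step(surface, dt)   # atmosphere sees SST and ice
        surface = ocean_ice.step(fluxes, dt)    # ocean/ice sees the fluxes
    return surface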
ACKNOWLEDGEMENTS
This research was carried out as part of the scientific research
programmes of the Canadian Regional Climate Modelling
(CRCM) Network, funded by the Canadian Foundation for
Climate and Atmospheric Sciences (CFCAS), the Ouranos
Consortium on Regional Climatology and Adaptation to
Climate Change, and Mathematics of Information Technology
and Complex Systems (MITACS). This work was partly supported by the Office of Science (BER), U.S. Department of
Energy, Grant No. DE-FG02-01ER63199.
REFERENCES
1. J. Côté, S. Gravel, A. Méthot, A. Patoine, M. Roch and A. Staniforth, “The operational CMC-MRB global environmental multiscale (GEM) model. Part I: Design considerations and formulation”, Mon. Wea. Rev., 126, 1373-1395 (1998).
2. J. Côté, J.-G. Desmarais, S. Gravel, A. Méthot, A. Patoine, M. Roch and A. Staniforth, “The operational CMC-MRB global environmental multiscale (GEM) model. Part II: Results”, Mon. Wea. Rev., 126, 1397-1418 (1998).
3. K.-S. Yeh, J. Côté, S. Gravel, A. Méthot, A. Patoine, M. Roch and A. Staniforth, “The operational CMC-MRB global environmental multiscale (GEM) model. Part III: Non-hydrostatic formulation”, Mon. Wea. Rev., 130, 339-356 (2002).
4. P. Gauthier, M. Tanguay, S. Laroche, S. Pellerin and J. Morneau, “Extension of 3D-Var to 4D-Var: Implementation of 4D-Var at the Meteorological Service of Canada”, Mon. Wea. Rev., 135, 2339-2354 (2007).
5. R. Laprise and C. Girard, “A spectral general circulation model using a piecewise-constant finite-element representation on a hybrid vertical coordinate system”, J. Climate, 3(1), 32-52 (1990).
6. N.A. McFarlane, G.J. Boer, J.-P. Blanchet and M. Lazare, “The CCC second-generation GCM and its equilibrium climate”, J. Climate, 5(10), 1045-1077 (1992).
7. G.J. Zhang and N.A. McFarlane, “Sensitivity of climate simulations to the parameterization of cumulus convection in the Canadian Climate Centre General Circulation Model”, Atmos.-Ocean, 33(3), 407-446 (1995).
8. H. Barker and Z. Li, “Improved Simulation of Clear-Sky Shortwave Radiative Transfer in the CCC-GCM”, J. Climate, 8, 2213-2223 (1995).
9. K. Abdella and N. McFarlane, “Parameterization of surface-layer exchange coefficients for atmospheric models”, Bound. Layer Meteor., 80, 223-248 (1996).
10. G.M. Flato, G.J. Boer, W.G. Lee, N.A. McFarlane, D. Ramsden, M.C. Reader and A.J. Weaver, “The Canadian Centre for Climate Modelling and Analysis global coupled model and its climate”, Clim. Dyn., 16, 451-467 (2000).
11. L.D. Verseghy, “The Canadian Land Surface Scheme (CLASS): Its History and Future”, Atmos.-Ocean, 38, 1-13 (2000).
12. M. Tanguay, A. Robert and R. Laprise, “A semi-implicit semi-Lagrangian fully compressible regional forecast model”, Mon. Wea. Rev., 118(10), 1970-1980 (1990).
13. H.C. Davies, “A lateral boundary formulation for multi-level prediction models”, Quart. J. Roy. Meteor. Soc., 102, 405-418 (1976).
14. S. Riette and D. Caya, “Sensitivity of short simulations to the various parameters in the new CRCM spectral nudging”, Research Activities in Atmospheric and Oceanic Modelling, edited by H. Ritchie, WMO/TD - No. 1105, Report No. 32, 7.39-7.40 (2002).
15. D. Caya and R. Laprise, “A Semi-Implicit Semi-Lagrangian Regional Climate Model: The Canadian RCM”, Mon. Wea. Rev., 127(3), 341-362 (1999).
16. P. Bechtold, E. Bazile, F. Guichard, P. Mascart and E. Richard, “A mass flux convection scheme for regional and global models”, Quart. J. Roy. Meteor. Soc., 127, 869-886 (2001).
17. B. Music and D. Caya, “Investigation of the sensitivity of water cycle components simulated by the Canadian Regional Climate Model to the land surface parameterization, the lateral boundary data and the internal variability”, J. Hydromet. (submitted) (2007).
18. S. Goyette, N.A. McFarlane and G.M. Flato, “Application of the Canadian Regional Climate Model to the Laurentian Great Lakes Region: Implementation of a Lake Model”, Atmos.-Ocean, 38(3), 481-503 (2000).
19. F. Saucier, S. Senneville, S. Prinsenberg, F. Roy, G. Smith, P. Gachon, D. Caya and R. Laprise, “Modelling the sea ice-ocean seasonal cycle in Hudson Bay, Foxe Basin and Hudson Strait, Canada”, Clim. Dyn., 23(3-4), 303-326 (2004).
20. R. Laprise, D. Caya, M. Giguère, G. Bergeron, H. Côté, J.-P. Blanchet, G.J. Boer and N.A. McFarlane, “Climate and Climate Change in Western Canada as Simulated by the Canadian Regional Climate Model”, Atmos.-Ocean, 36(2), 119-167 (1998).
21. D.A. Plummer, D. Caya, A. Frigon, H. Côté, M. Giguère, D. Paquin, S. Biner, R. Harvey and R. de Elía, “Climate and climate change over North America as simulated by the Canadian Regional Climate Model”, J. Climate, 19(13), 3112-3132 (2006).
22. R. Laprise, “The Euler Equations of Motion with Hydrostatic Pressure as an Independent Variable”, Mon. Wea. Rev., 120, 197-207 (1992).
23. J.S. Kain and J.M. Fritsch, “A one-dimensional entraining/detraining plume model and application in convective parameterization”, J. Atmos. Sci., 47, 2784-2802 (1990).
24. H. Sundqvist, E. Berge and J.E. Kristjansson, “Condensation and Cloud Parameterization Studies with a Mesoscale Numerical Weather Prediction Model”, Mon. Wea. Rev., 117, 1641-1657 (1989).
25. J. Li and H.W. Barker, “A radiation algorithm with correlated-k distribution. Part I: Local thermal equilibrium”, J. Atmos. Sci., 62, 286-309 (2005).
26. S. Bélair, L.-P. Crevier, J. Mailhot, B. Bilodeau and Y. Delage, “Operational implementation of the ISBA land surface scheme in the Canadian regional weather forecast model. Part I: Warm season results”, J. Hydromet., 4, 352-370 (2003).
27. N.A. McFarlane, “The effect of orographically excited gravity-wave drag on the circulation of the lower stratosphere and troposphere”, J. Atmos. Sci., 44, 1775-1800 (1987).
28. A. Zadra, M. Roch, S. Laroche and M. Charron, “The subgrid scale orographic blocking parametrization of the GEM model”, Atmos.-Ocean, 41, 155-170 (2003).
29. M.S. Fox-Rabinovitz, J. Côté, B. Dugas, M. Déqué and J.L. McGregor, “Variable resolution general circulation models: Stretched-grid models intercomparison project (SGMIP)”, J. Geophys. Res., 111, D16104 (2006).
30. E.S. Takle, J. Roads, B. Rockel, W.J. Gutowski Jr., R.W. Arritt, I. Meinke, C.G. Jones and A. Zadra, “Transferability intercomparison: An opportunity for new insight on the global water cycle & energy budget”, Bull. Amer. Meteor. Soc., 88(2), 375-384 (2007).
31. P. Lucas-Picher, D. Caya, R. de Elía and R. Laprise, “Investigation of regional climate models’ internal variability with a ten-member
ensemble of ten years over a large domain”, Clim. Dyn. (accepted subject to minor modifications) (2008).
32. G.D. Bell and M. Chelliah, “Leading tropical modes associated with interannual and multidecadal fluctuations in North Atlantic hurricane activity”, J. Climate, 19, 590-612 (2006).
33. C.W. Landsea, “Hurricanes and global warming”, Nature, 438, E11-E13 (2005).
34. L. Bengtsson, K.I. Hodges and M. Esch, “Tropical cyclones in a T159 resolution global climate model: comparison with observations
and re-analyses”, Tellus, 59A, 396-416 (2007).
35. G. Lenderink, A.P. Siebesma, S. Cheinet, S. Irons, C.G. Jones, P. Marquet, F. Muller, D. Olmeda, J. Calvo, E. Sanchez and P.M.M.
Soares, “The diurnal cycle of shallow cumulus clouds over land: a single column model intercomparison study”, Quart. J. Roy.
Meteorol. Soc., 123, 223-242 (2005).
36. L. Sushama, R. Laprise, D. Caya, D. Verseghy and M. Allard, “An RCM projection of soil thermal and moisture regimes for North
American permafrost zones”, Geophys. Res. Lett., 34, L20711 (2007).
37. M.C. Serreze and J.A. Francis, “The Arctic amplification debate”, Climatic Change, 76, 241-264 (2006).
38. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (Eds.), IPCC AR4, 2007:
Intergovernmental Panel on Climate Change, Fourth Assessment Report, Working Group I. “Climate Change 2007: The Physical
Science Basis”, Cambridge Univ. Press (http://ipcc-wg1.ucar.edu/wg1/wg1-report.html).
39. K. Wyser, C.G. Jones, P. Du, E. Girard, U. Willén, J. Cassano, J.H. Christensen, J.A. Curry, K. Dethloff, J.-E. Haugen, D. Jacob,
M. Køltzow, R. Laprise, A. Lynch, S. Pfeifer, A. Rinke, M. Serreze, M.J. Shaw, M. Tjernström and M. Zagar, “An Evaluation of Arctic
Cloud and Radiation processes during the SHEBA year: Simulation results from 8 Arctic Regional Climate Models”, Clim. Dyn., 110,
D09207 (2007).
40. H.E.M. Meier, R. Döscher and T. Faxén, “A multiprocessor coupled ice-ocean model for the Baltic Sea: Application to salt inflow”.
J. Geophys. Res., 108: C8, 3273 (2003).
DEPARTMENTAL MEMBERS / MEMBRES DÉPARTEMENTAUX
(as at 2008 April 1 / au 1er avril 2008)
- Physics Departments / Départements de physique -
Acadia University
Bishop's University
Brandon University
Brock University
Carleton University
Collège François-Xavier-Garneau
Collège Montmorency
Concordia University
Dalhousie University
École Polytechnique de Montréal
Lakehead University
Laurentian University
McGill University
McMaster University
Memorial Univ. of Newfoundland
Mount Allison University
Okanagan University College
Queen's University
Royal Military College of Canada
Ryerson University
Saint Mary’s University
Simon Fraser University
St. Francis Xavier University
Trent University
Université de Moncton
Université de Montréal
Université de Sherbrooke
Université du Québec à Trois-Rivières
Université Laval
University of Alberta
University of British Columbia
University of Calgary
University of Guelph
University of Lethbridge
University of Manitoba
University of New Brunswick
University of Northern British Columbia
University of Ontario Inst. of Technology
University of Ottawa
University of Prince Edward Island
University of Regina
University of Saskatchewan (and Eng. Phys.)
University of Toronto
University of Toronto (Medical Biophysics)
University of Victoria
University of Waterloo
University of Western Ontario
University of Windsor
University of Winnipeg
Wilfrid Laurier University
York University
SUSTAINING MEMBERS / MEMBRES DE SOUTIEN
(as at 2008 April 1/ au 1er avril 2008)
A. John Alcock
Thomas K. Alexander
J. Brian Atkinson
C. Bruce Bigham
Allan I. Carswell
See L. Chin
Walter Davidson
M. Christian Demers
Fergus Devereaux
Marie D'Iorio
Gerald Dolling
Gordon W.F. Drake
David J.I. Fry
Elmer H. Hara
Akira Hirose
Thomas Jackman
Béla Joós
James D. King
Ron M. Lees
Louis Marchildon
J.S.C. (Jasper) McKee
David B. McLay
Jean-Louis Meunier
J.C. Douglas Milton
Michael Morrow
Michael Kevin O'Neill
Allan Offenberger
A. Okazaki
Shelley Page
Roger Phillips
Beverly Robertson
Robert G.H. Robertson
Pekka K. Sinervo
Alec T. Stewart
G.M. Stinson
Boris P. Stoicheff
Eric C. Svensson
Louis Taillefer
John G.V. Taylor
Andrej Tenne-Sens
Michael Thewalt
Greg J. Trayling
William Trischuk
Sreeram Valluri
Henry M. Van Driel
Paul S. Vincett
Erich Vogt
Andreas T. Warburton
CORPORATE-INSTITUTIONAL MEMBERS /
MEMBRES CORPORATIFS-INSTITUTIONNELS
(as at 2008 April 1 / au 1er avril 2008)
The Corporate and Institutional Members of the Canadian Association
of Physicists are groups of corporations, laboratories, and institutions
who, through their membership, support the activities of the Association. The entire proceeds of corporate membership contributions are
paid into the CAP Educational Trust Fund and are tax deductible.
CORPORATE / CORPORATIFS
CANADA ANALYTICAL & PROCESS TECH.
BUBBLE TECHNOLOGY INDUSTRIES
CANBERRA CO.
GLASSMAN HIGH VOLTAGE INC.
Les membres corporatifs et institutionnels de l'Association canadienne des
physiciens et physiciennes sont des groupes de corporations, de laboratoires
ou d'institutions qui supportent financièrement les activités de l'Association.
Les revenus des contributions déductibles aux fins d'impôt des membres
corporatifs sont entièrement versés au Fonds Educatif de l'ACP.
JOHNSEN ULTRAVAC INC.
KURT J. LESKER CANADA INC.
NEWPORT INSTRUMENTS
PLASMIONIQUE INC.
VARIAN CANADA INC.
The Canadian Association of Physicists cordially invites interested corporations and institutions to make application for Corporate or Institutional
membership. Address all inquiries to the Executive Director.
INSTITUTIONAL / INSTITUTIONNELS
ATOMIC ENERGY OF CANADA LIMITED
CANADIAN LIGHT SOURCE
INSTITUTE FOR QUANTUM COMPUTING
PERIMETER INSTITUTE FOR THEORETICAL PHYSICS
TRIUMF
L'Association canadienne des physiciens et physiciennes invite cordialement corporations et institutions à faire partie des membres corporatifs
ou institutionnels. Renseignements auprès de la directrice exécutive.
CANADIAN ASSOCIATION OF PHYSICISTS / ASSOCIATION CANADIENNE DES PHYSICIENS ET PHYSICIENNES
Bur. Pièce 112, Imm. McDonald Bldg., Univ. of/d’Ottawa, 150 Louis Pasteur, Ottawa, Ontario K1N 6N5
Phone / Tél : (613) 562-5614; Fax / Téléc : (613) 562-5615 ; Email / courriel : [email protected]
INTERNET - HTTP://WWW.CAP.CA
ENSEIGNEMENT
HIGH PERFORMANCE COMPUTING IN CANADA: THE EARLY CHAPTERS
BY ALLAN B. MACISAAC AND MARK WHITMORE

SUMMARY

The HPC story in Canada provides an excellent example of what a coordinated, cooperative national initiative can achieve. We present here a summary of this story, with an intentionally nostalgic flavour appropriate to the closing of one successful chapter in this story, as a new one begins. Without providing all of the details, we try to present the keys to the success of this initiative, some idea of parallel paths followed in different areas of the country, and a table of the current HPC academic consortia.

This article was intended to be a history of high
performance computing (HPC) in Canada, and the
current environment. However, some major and
exciting events took place during its preparation,
prompting us to add a somewhat nostalgic flavour to this
history and to focus on the most recent aspects of that history. These events are the fulfillment of the main mission
of an organization known as C3.ca, and its formal conclusion to allow for the emergence of a new organization,
Compute/Calcul Canada, with a new and ambitious mandate. The formal end of C3.ca was recommended by its
Board of Directors in October, 2007, and subsequently
approved by the organization’s membership on November
16, 2007. For the purposes of this article, we think of this
as the opening of Chapter 3 in the Canadian HPC story. As
we celebrate this new chapter, we should also celebrate
those chapters which came before, and reflect on the elements that have made this story so successful.
THE EARLY YEARS

The start of Canadian HPC is probably the purchase and installation of “Ferut” at the University of Toronto as a joint initiative of the University of Toronto and the National Research Council in 1952. This, of course, was back in the days when you could just say computing, since all computing was “high performance”. Ferut was a Ferranti Mark I purchased for about $300,000. There is probably more computing power in the toys in a McDonald's Happy Meal today than was available from Ferut, but it was very impressive at the time. In fact, it was powerful enough that it was used to draw the international boundary between Canada and the USA along the St. Lawrence Seaway, as the US had no non-military computing suitable to the task.

Chapter 1 of our story runs from the time of the purchase of Ferut to the emergence of a nationally coordinated HPC strategy, and the national organization C3.ca, over 40 years later. The very early years were interesting, but the modern period really begins during the 1980's, and it is on this period that we wish to focus. Canada's HPC efforts were sporadic during this chapter. For example, a Cyber 205 supercomputer was installed in Calgary in 1985, but it was gone by 1991. A Cray XMP-4 was installed at the Ontario Centre for Large Scale Computing at the University of Toronto in 1988, but the machine and the Centre lasted only until about 1992. Finally, a Fujitsu VPX240 was installed in Calgary with a largely commercial mandate, but was also made available to university researchers. It disappeared in 1996. [This is the favorite machine of one of the authors, ABM, as much of the work on his Master's thesis was done on it.]

All three of these machines were, without doubt, world class, but their stories highlight the problems with HPC in Canada in the past. They were single-generation facilities, with at best minor upgrades to the hardware before they disappeared; they had no national mandate to support and develop HPC throughout the country; and they never had an opportunity to develop and maintain a staff to support the Canadian user community. With one outstanding exception, these three things were lacking in all HPC initiatives in Canada during this time period.

The exception was the computing resource maintained by the Meteorological Service of Canada, subsequently known as Environment Canada (EC). In 1962, that organization acquired the first of its facilities, which have been used ever since to run weather predictions. That first machine was a Bendix G20, with less computing power than a modern cell phone. Environment Canada has maintained a first-rate facility and staff since then, with regular updates. Their most recent hardware is a supercomputer provided by IBM, which was installed in 2003 and updated in 2005. This continual updating and long-term maintenance of a qualified support staff differentiates this facility from the other initiatives in Canada throughout the 40-year period of Chapter 1 of our story. However, Environment Canada did not have, nor has it ever had, a mandate to support general research requiring HPC resources across Canada.
A DECADE OF ACHIEVEMENT
Before 1997
As is often the case, the transition from one chapter to
another can be continuous. We choose to begin this one
with the emergence of a national project, largely led by
Brian Unger of the University of Calgary. It started with
the submission of an NSERC Major Facilities Access
grant application called HPCnet. This application had 49
signatories from 11 Canadian universities, spanning the
country from Victoria, BC to St. John’s, NL. It was intended to support access to existing HPC resources, to develop new tools for using and accessing the facilities, and to
foster collaborations. HPCnet was awarded 3 years of
funding at the level of $175,000 per year, beginning in
1996. Illustrative of the challenges facing the community,
the Fujitsu VPX240 at Calgary was shutting down just as
this award was made.
A number of critical steps followed this award. A group of
academic researchers came together to administer the
grant, and to award funding for support personnel and
software development projects. The personnel team was
the fore-runner of the national Technical Analyst Support
Program (TASP) team that now spans the country. The
project brought visibility and cohesion, and a sense of success. A broad community joined together, with members
from the university, government and private sectors. It was
supported administratively by the organization WURCnet
based in Alberta, and the National Research Council
(NRC).
The national community also set forth on an important
visioning and planning mission, culminating in the creation of a new organization, C3.ca, and the publication of
“A Business Case for the Establishment of an Advanced
Computational Infrastructure for Canada”, both in 1997.
This was almost exactly 10 years ago.
The lack of existing resources was certainly a short-term
impediment. However, the community was determined to
share what it had. As an example, one of a number of initiatives was the provision of an AlphaServer 4100, by
Digital Equipment Corporation (now HP Canada), located at Memorial University of Newfoundland (MUN).
Digital and MUN made a commitment to make this facility available nationally. Thus, the community began to
demonstrate that it could successfully share resources
across the country.
The 1997 Business Case presented a plan for a national
HPC infrastructure of hardware, software and personnel,
joined by the high-speed national network CANARIE,
“…applied to national needs and opportunities for
research and innovation in the sciences, engineering, and
the arts.” It presented a notional 7-year budget of approximately $225 Million, covering all aspects. Some of the
research community thought this was wildly optimistic, but they turned out to be delightfully wrong.
As we reflect on the elements of the success of C3.ca, it
is inspiring to recall that those 1997 plans included three
phases:
1. Engaging the Community, 1997-98
2. Building Regional Infrastructure, 1999-2002
3. Collaboration for Competitiveness, 2003-05
With small adjustments for timing, the HPC community
has evolved very much in line with those phases. Another
of the intriguing aspects of this initiative was the people
involved. It was driven and run by researchers, but supported by professionals and organizations, in particular the
NRC and WURCnet.
1997 and Beyond
The year 1997 was an extraordinary one. In the early days
of HPCnet and the early work towards the Business Case,
there were no apparent sources of funding of the magnitude contemplated in those plans. However, in that same
year, the Government of Canada created the Canada
Foundation for Innovation. Suddenly, there was a real
opportunity for significant funding. In one year, the community had the Business Case, an unprecedented funding
opportunity, national cooperation, and momentum.
The year 1998 was, perhaps, even more extraordinary. It
culminated in the submission of an application for renewal of the NSERC MFA grant, submission of a separate
MFA proposal from universities in Quebec, and 11 parallel CFI applications, all under the umbrella of C3.ca and
with a commitment to sharing resources.
A critical step in this process was the commitment to sharing. Participants and leaders from many of the CFI projects met at York University during the proposal phase, and formally agreed to provide external users with access to any facility that was funded via CFI, at the level of 20%.
Specific wording was agreed to that would be included as
a commitment in all CFI projects. The larger (non-Quebec) MFA proposal listed all 11 CFI projects with a
short description of each. An important feature of the two
MFA proposals was that each one referenced and complemented, but did not compete with, the other one.
In the end, 1998 and 1999 were years of great success.
Both MFA grants were funded. The larger one was funded
at $300,000 for the first year, and $600,000 for each of the
successive years. Similarly, the other one was funded at
$100,000 for the first year, and $200,000 for each of the
next two. In June of 1999, five of the CFI projects were
funded, with total CFI funding of $16.4 M, and “project
costs” totaling over $40 M. But better things were yet to come!
Birth of the Consortia
In 1993, CANARIE had been formed to create a leading-edge national network for Canadian researchers and, in 1997, the Canada Foundation for Innovation (CFI) was
announced. These were the initial conditions needed for
the true emergence of the Canadian HPC community on to
the modern world HPC scene. If CANARIE's research
network was the information super-highway, then the HPC
centres were to be the cities at the ends where much of the
information was to be manufactured and from which it
would be distributed.
Many of the early CFI proposals were multi-institutional,
with various internal organizational structures. These were
all the fore-runners of the current HPC “consortia”. Across
Canada there came to be seven regional consortia; ACEnet
in the Atlantic, RQCHP in Eastern Quebec, CLUMEQ in
Western Quebec, HPCVL in Eastern Ontario, SCINET in
the greater Toronto area, SHARCNET in Western Ontario,
and WESTGRID in the western provinces. Between 1999
and 2004 these consortia were awarded no fewer than 12
major CFI awards totaling over $100 M (project costs in
excess of $250 M). Each consortium award was the result
of local need, desire and initiative, but each of these
awards owed part of its success to the efforts of C3.ca and
the members’ lobbying of, and consultation with, the CFI.
One of the strengths of C3.ca in these early days was that,
while the success of the consortia was in part the result of
the efforts of C3.ca, the organization itself was not tied to
the success or even the existence of any consortia. This
left C3.ca free to carry out its primary mission of promoting the need to fund HPC research in Canada.
The authors of this article were each involved in the creation of one of the seven consortia. While the ends were
similar, the paths differed. ACEnet had its genesis as a
very loose organization which submitted a Memorial-centered CFI proposal in the first CFI competition, under the
informal name of Atlantic Computing Consortium, or
AC3, with an obvious parallel to the name C3. Members
of AC3 across Atlantic Canada worked cooperatively for a
number of years, submitting CFI proposals that were
quasi-independent of each other, but that referenced and
supported other members of AC3 and their systems. This
arrangement fitted the four-province, two-island geography of the times. However, in 2003, the community was
ready to make the leap, and submitted an integrated CFI
proposal, and ACEnet was born. Of course, the funding
situation remains complex, with matching funding coming
from numerous provincial and related organizations. In
contrast, SHARCNET was largely born as a regional project and was driven to be so by the provincial funding
organizations within Ontario at that time. While the
Compaq-Western Centre for Computational Research
might be considered its UWO-centric forerunner, SHARCNET truly began as a consortium much as it currently
exists, although it has grown from seven institutions initially to its current seventeen members.
To give some perspective on what the creation of the consortia means to researchers, we'll consider what SHARCNET has meant to an HPC researcher at the University of Western Ontario. In 1998 the entire University was largely served by an 8-processor Cray SV1, certainly a very good machine at the time, and it was running at full load
constantly. Today, SHARCNET provides over 8000
processors available from 15 different machines, with
massive amounts of memory and disk storage. Better still,
the Cray SV1 was supported by a single dedicated technical support person, while today SHARCNET employs
over 18 technical support personnel. And this is only at
SHARCNET; like all members of all consortia, UWO scientists can access any system, and consult any of the personnel, at any of the other consortia as well.
The technical support staff within SHARCNET are another legacy of C3.ca, as the first of these were hired in part
with funds from the NSERC MFA grants. More importantly, many of the SHARCNET personnel are part of a national support group known as TECC. The creation of TECC (Technical Experts in Compute/Calcul Canada) is
a further example of the legacy of C3.ca which, interestingly, was created before Compute/Calcul Canada existed.
TECC, founded in large part due to the efforts of a group
of TASP-supported staff at RQCHP, is a group of HPC
experts who have agreed to work together to better support
the Canadian HPC community and to provide guidance to
national and regional groups which require access to personnel with high level technical skills. Members of this
group work together, developing standards and exchanging information in order to maintain the hardware and
software used by computational researchers across
Canada.
The model of coordinated national efforts was the foundation of another project carried out under the auspices of
C3.ca, the creation of The Long Range Plan for HPC in
Canada (LRP). Published in 2005, the LRP provided a
vision of a sustained and internationally competitive
infrastructure for computationally-based research in
Canada. It was one of the enablers of Chapter 3 in our
story.
THE FUTURE
At the writing of this article, the HPC research community finds itself, yet again, at the edge of another exciting journey. In 2007, after extensive consultation with C3.ca, the consortia and the universities, the CFI created the National Platforms Fund, targeted initially at HPC, to “provide generic research infrastructure, resources, services, and facilities that serve the needs of many research subjects and disciplines, and that require periodic reinvestments because of the nature of the technologies.” The CFI invited a single, national proposal on HPC. The consortia responded,
rolling up their sleeves, and creating such a proposal, with
a structure that reflects the value and critical role that each
consortium plays, and a management and governance structure to ensure that it is truly a national platform. As a result of this new CFI award, another $150 M of infrastructure will be installed, new support funds are being awarded as an outgrowth of the earlier MFA awards, a new organization, Compute/Calcul Canada, is being formed, and C3.ca, one of Canadian science's true success stories, is passing the torch after a job well done.

EPILOGUE

C3.ca was an organization whose successes were, in some sense, intangible. It was the members who won the CFI and NSERC grants, not the organization. But it was the organization, an organization of the members, which guided those members' cooperation, helped manage the national project, coordinated some of the major proposals, and provided the advocacy that was critical to the creation and maintenance of the vibrant, and tremendously successful, HPC community in Canada. On that front C3.ca has been an unparalleled success. As we all celebrate the future, we should also celebrate the past, and always remember what the keys to our success have been.

We must also remember the essential role our many partners have played. Major computer vendors made major investments, and have played critical roles. Provincial governments have provided essential matching funds and, in many cases, funds for personnel. And CANARIE has provided the essential connections, without which the national infrastructure simply could not be used effectively and efficiently.

Allan B. MacIsaac <[email protected]>, Department of Applied Mathematics, Middlesex College, University of Western Ontario, London, Ontario, N6A 5B7, and Mark Whitmore <[email protected]>, Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, R3T 2N2

APPENDIX 1

Important Events in Chapter 2 of the HPC Story

1995: Submission of the MFA Grant, HPCnet
1996: Awarding of the MFA Grant, HPCnet, 3 years @ $175,000 per year
      Formation of the organization HPCnet, and start of national planning
      Closing of the Fujitsu VPX240 at Calgary
      Donation of AlphaServer 4100 by Digital Equipment Corporation
1997: Creation of C3.ca
      Publication of “A Business Case for the Establishment of an Advanced Computational Infrastructure for Canada” (December)
      Creation of the Canada Foundation for Innovation
1998: Formal commitment to a policy on external sharing and access
      Submission of NSERC MFA for renewal, listing all CFI projects
      Submission of parallel MFA from Quebec universities
      Submission of 11 parallel CFI proposals
1999: Awarding of both MFA grants
      Funding of 5 CFI projects, at a total of $16.4 M (project costs of $41 M)
      Emergence of the consortium model
2005: Publication of “A Long Range Plan for HPC in Canada”
2007/08: Award of CFI National Platforms Grant
      Folding of C3.ca
      Creation of Compute/Calcul Canada

APPENDIX 2

HPC Consortia in Canada, Current Membership

WESTGRID: U. Victoria, UBC, Simon Fraser, UNBC, U of Alberta, U of Calgary, U of Lethbridge, Banff Centre, Athabasca U, U of Saskatchewan, U of Regina, Brandon U, U of Winnipeg, U of Manitoba
SHARCNET: McMaster U, U of Western Ontario, U of Guelph, U of Windsor, Wilfrid Laurier U, Fanshawe College, Sheridan College, U of Waterloo, Brock U, York U, UOIT, Trent U, Laurentian U, Lakehead U, Perimeter Institute, Ontario College of Art & Design, Nipissing U
HPCVL: Queen's U, Royal Military College, Carleton U, U of Ottawa, Ryerson U, Seneca College, Loyalist College
SCINET: U of Toronto
CLUMEQ: Laval U, Université du Québec (UQAM, ETS, INRS, UQAC, UQTR, UQAR, UQAT, UQO), McGill U
RQCHP: U de Montréal, Concordia U, U de Sherbrooke, Bishop's, École Polytechnique
ACEnet: Memorial U, St. Francis Xavier U, U of New Brunswick, Saint Mary's U, UPEI, Dalhousie U, Mount Allison U, Acadia U, Cape Breton U
IN MEMORIAM
RALPH NICHOLLS - (1926-2008)
Ralph W. Nicholls (OC, FRSC), Distinguished Research Professor Emeritus at York University in Toronto, died peacefully in his sleep on January 25, 2008, at the age of 81. Nicholls was born in Surrey, England in 1926 and graduated from Imperial College, London, where he obtained his Ph.D. and D.Sc. degrees and for a time (1945-48) served as Senior Lecturer. In 1948 he was appointed to the Physics Department at the University of Western Ontario (UWO), and one of his first acts must have been to join the CAP, as he remained a supportive member from that year on. At UWO he established a theoretical and experimental group focused on the determination of transition probabilities in molecular systems. In 1950 he gave a paper on this work to a very small Saturday morning audience of the American Physical Society in Cleveland, and after his talk one of the members of the audience, Nate Gerson of the Air Force Cambridge Research Laboratories, asked him if he would like a contract to extend the scope of his work. Gerson had a mandate to establish auroral research in North America, and recognized the value of the transition probability research to this enterprise. He implemented contracts to other Canadian universities as well, as he described in Physics in Canada 40, 308, 1984. Nicholls in turn wrote an obituary for Gerson in PiC, the March/April issue, 2002. This link not only provided Nicholls with ample funds to build up a thriving group, but also broadened his outlook beyond "classical spectroscopy" to its many application areas, of which the upper atmosphere and the aurora became the first. One of Nicholls' first tasks under this contract was to organize an international Auroral Physics Conference, where in 1951 the leading scientists in the field worldwide, many from the Scandinavian countries, were brought to the UWO campus. This brought him to the forefront on this subject, and established his reputation as a scientist and organizer.

In 1965 he was enticed to move to the new York University campus on Keele Street, Toronto, where he established the Department of Physics and the Physics Graduate Programme, acting as Chair for both from 1965-1969. His wife Doris joined and helped to build the Biology Department. During this time he appointed all of the original physics faculty members, and designed the Petrie Science Building. However, he recognized that the space age had begun in 1957, with the launch of Sputnik 1, and "space" became his new focus. Thus he founded two partner entities, the Graduate Programme in Experimental Space Science and the Centre for Research in Experimental Space Science (CRESS). In his early years at York he launched rockets to measure the ultraviolet spectrum of the aurora, and added faculty members with space interests, building up an internationally recognized research centre.

In 1971, when the Canada Centre for Remote Sensing was created, Ralph Nicholls recognized another important application of spectroscopy, soon making this a part of CRESS. At about the same time, York University began a program in Earth Science, and CRESS was adapted to this by changing its name to the Centre for Research in Earth and Space Science; Nicholls continued as its Director until 1992. In 1986 he initiated discussions within this group that led to the formation of an Ontario Centre of Excellence on the York University campus, the Institute for Space and Terrestrial Sciences. From 1985 onwards he was a member of a number of space mission teams, including SPEAM 1, which was operated by Marc Garneau during his first flight. In more recent years his interest in radiative transfer increased, and he with his collaborators, including those at Defence Research Establishment Valcartier, created spectral synthesis code and spectroscopic atlases. The latter occupied his time to the end of his career.

His enormous energy extended well beyond the York University campus. He held visiting professorships at the US National Bureau of Standards and at Stanford University, and was a visiting lecturer at the NASA Ames Research Center. He served on many national and international committees. Internationally he was involved with the International Astronomical Union, the International Union of Geodesy and Geophysics and the American Physical Society, and was the Canadian Observer on the NASA Space and Earth Sciences Working Group on the Scientific Uses of the Space Station. Nationally he chaired the NRC joint sub-committee on Space Astronomy (1974-80), the NRC Associate Committee on Space Research (1984-85) and the Canadian Advisory Committee on the Scientific Uses of Space Station. He was Editor of the Canadian Journal of Physics from 1986-1992. He also received numerous honours, including the Fellowship in the Royal Society of Canada (1978), the Queen's Golden Jubilee Medal (2002), and the Order of Canada in 1997.

Ralph Nicholls' door was always open, especially to young scientists, and he worked hard to improve their security in the universities. He offered his advice and support freely, and in so doing created enduring relationships. The number of lives he influenced is enormous. He will be remembered with respect and affection by all.

Gordon Shepherd <[email protected]>, Centre for Research in Earth and Space Science (C.R.E.S.S.), York University, 4700 Keele Street, Toronto, ON, M3J 1P3
IN MEMORIAM
TAPAN KUMAR BOSE - (1938-2008)
Tapan Kumar Bose, professeur
émérite à l'Université du Québec à
Trois-Rivières et fondateur de
l'Institut de recherche sur l'hydrogène (IRH), est décédé subitement le 24 janvier 2008.
Récipiendaire de la médaille du
Gouverneur général du Canada en
1993, il a aussi reçu, entre autres, le
Meritorius Service Award de la
National Hydrogen Association (USA) de même que la
médaille de reconnaissance de l'Association canadienne
de l'hydrogène, et il a été admis en 2005 au cercle d'excellence de l'Université du Québec.
Après avoir obtenu un baccalauréat et une maîtrise en
physique de l'Université de Calcutta, le professeur Bose a
complété son doctorat à l'Université de Louvain en 1965.
Il a ensuite effectué des recherches postdoctorales au laboratoire Kamerlingh Onnes à Leiden ainsi qu'à
l'Université Brown, au sein de l'équipe de R. H. Cole. Prêt
à entreprendre une carrière universitaire, il n'hésita pas, en
1969, à se joindre à une institution toute nouvelle,
l'UQTR. Il y amorcerait, avec quelques autres, une solide
tradition de recherche, en plus de renouer des amitiés
faites à Louvain.
À Brown, le professeur Bose avait réalisé des mesures
précises des coefficients du viriel de la permittivité et de la
pression de différents gaz purs ou mélanges. Il poursuivit
ces travaux à l'UQTR, avec ses étudiants et collaborateurs,
ne cessant toutefois de les étendre. De la permittivité statique, il passa à l'indice de réfraction, pour ensuite s'intéresser aux propriétés d'absorption du rayonnement électromagnétique dans les micro-ondes et dans l'infrarouge.
Son laboratoire permettait donc d'étudier la réponse de différentes substances à une excitation électromagnétique sur une très large gamme de fréquences. En 1980, il fondait le groupe de recherche sur les diélectriques, avec le collaborateur de toute sa carrière, Jean-Marie St-Arnaud.
Avec un laboratoire bien pourvu et une
solide expérience des mesures précises,
le professeur Bose s'attaqua à l'étude du
comportement critique, sujet chaud des
années '80. Il mit en lumière l'anomalie
de la permittivité et de l'indice de réfraction de mélanges près du point critique
de démixtion. Mais à la même époque,
différents chercheurs notaient que les méthodes qu'il avait
développées pour la détermination de la permittivité et de
l'indice de réfraction pouvaient servir à la mesure précise
de la quantité de gaz circulant dans un gazoduc.
Sans interrompre ses travaux fondamentaux, le professeur
Bose amorça alors une toute nouvelle problématique de
recherche, de nature appliquée. C'est alors que son remarquable talent d'organisateur se révéla véritablement.
Rapidement, son groupe de recherche grandit, et il réussit
à obtenir d'importantes subventions et commandites pour
l'étude du gaz naturel d'abord, et de l'hydrogène ensuite. Il
fonde l'IRH en 1994 et, l'année suivante, reloge son équipe
dans un édifice moderne érigé au moyen de fonds qu'il a
lui-même trouvés. Comptant une cinquantaine de
chercheurs, assistants techniques et étudiants, l'IRH
devient vite un centre de réputation internationale. On y
étudie, en particulier, le stockage d'hydrogène par adsorption dans des micropores ou nanotubes de carbone, ou par
réfrigération magnétique; son utilisation dans des piles à
combustible ou dans des moteurs à explosion; la modélisation de son comportement en présence de flammes; et
son usage dans des systèmes autosuffisants, alimentés
entre autres par l'éolienne qui est devenue la marque du
campus de l'UQTR. Malgré toutes ses responsabilités,
dont celles de président de l'Association canadienne de
l'hydrogène et de président du comité technique ISO TC
197 sur les technologies de l'hydrogène, et malgré ses
nombreux déplacements, le professeur Bose s'investit dans
chacun de ces projets. À sa retraite, en 2005, l'édifice de
l'IRH est renommé " Pavillon Tapan K. Bose ". Il poursuivra, dans ses dernières années, ses efforts auprès de l'industrie et des gouvernements pour l'utilisation de l'hydrogène et des sources d'énergie sans émission de carbone.
Aucun de ceux qui ont bien connu Tapan ne restait indifférent à sa forte personnalité. Grand travailleur, doué
d'une inépuisable énergie, il savait aussi savourer de bons
moments de détente avec ses collègues et amis. Il adorait
la discussion, piquant souvent son interlocuteur en exprimant une opinion à laquelle l'autre ne pouvait que réagir.
Fonceur, soutenant indéfectiblement ses collaborateurs, il
oubliait vite les conflits et n'en gardait jamais rancune. Il
avait une inébranlable confiance en son intuition, et un
pouvoir de conviction à peu près irrésistible. Né en Inde
à l'époque coloniale, il n'en a pas moins conservé un profond respect des institutions britanniques, et est toujours
resté le produit de deux cultures. Il rêvait de faire de sa
région d'adoption la " Vallée de l'hydrogène ". Il aura laissé à l'UQTR la marque d'un véritable bâtisseur.
Louis Marchildon <[email protected]>, Département de physique, Université du Québec à Trois-Rivières, 3351 boul. des Forges, Trois-Rivières, Québec, G9A 5H4
IN MEMORIAM
YOGINDER JOSHI - (1938-2008)

The St. Francis Xavier University (StFX) community was saddened by the death on April 2 of Dr. Yoginder Joshi, Senior Research Professor of Physics. Dr. Joshi was 70.

Dr. Joshi was born in the Punjab in India. After completing a master's degree at Punjab University and teaching in India for two years, he came to Canada in 1961. He earned his Ph.D. from the University of British Columbia in 1964. He joined the physics department at StFX in 1965 after having taught at St. Dunstan's University (now the University of Prince Edward Island) for one year. He was chair of our physics department from 1990-96. Upon his retirement from the university in 2001 he was appointed a Senior Research Professor and continued his research and some teaching until his death.

Yogi established a world-renowned laboratory in atomic spectroscopy at StFX and created a world-wide network of collaborators. He had active collaborations with physicists in the Netherlands, Russia, France, Italy, U.S.A., India, Egypt and Canada. His research was funded continuously by the National Research Council of Canada and the Natural Sciences and Engineering Research Council of Canada for the 43 years he was at StFX. He published over 240 papers, including five in the past few months while recuperating from cancer treatment.

In 1993 Dr. Joshi was elected a Fellow of the Optical Society of America, and in 1994 was elected an Associate Member of the Institute of Atomic Spectroscopy of the Russian Academy of Sciences. He received the St. Francis Xavier University Research Award in 1996.

BARRY WALLBANK - (N/A-2008)

The St. Francis Xavier University community was deeply saddened and shocked by the death on May 22 of Dr. Barry Wallbank, Chair of the Department of Physics.

Dr. Wallbank was born in England. He earned his Ph.D. from the University of Liverpool in 1974 and was a research fellow at the University of British Columbia before joining the physics department at StFX in 1982 as a research associate. He joined our faculty in 1988. He was a dedicated teacher and researcher, achieving the rank of Full Professor in 2000 and assuming the role of Chair of our Physics department in 2005. As Chair, he devoted himself generously to the enhancement of the Physics department's programs and was particularly attentive to the intellectual growth and general well-being of all the students in the program, as indeed he was of all the students whom he encountered.

Professor Wallbank was internationally known for his expertise in laser-assisted electron scattering from elemental gases. He had numerous undergraduate honours students and postdoctoral fellows assisting in this research, which was just recently relocated to a custom-designed laser research laboratory in our new Physical Sciences building. He published many research papers in internationally recognized peer-adjudicated journals with co-authors from around the world. His focused, professional approach to his research will be missed by the faculty and students alike.

Dr. Wallbank and his wife, Dr. Denise Wallbank of the StFX Chemistry Department, have been an important part of the scientific and academic community of St. Francis Xavier University. They have two children, Andrew and Sarah, and one grandchild, whom they recently greeted on a trip West.

Michael Steinitz <[email protected]>, Department of Physics, St. Francis Xavier University, Antigonish, Nova Scotia, B2G 2W5
BOOKS
BOOK REVIEW POLICY
Books may be requested from the Book Review Editor, Richard Hodgson, by using the online book request form at http://www.cap.ca.
CAP members are given the first opportunity to request books. Requests from non-members will only be considered one month after the distribution date of
the issue of Physics in Canada in which the book was published as being available (e.g. a book listed in the January/February issue of Physics in Canada will
be made available to non-members at the end of March).
The Book Review Editor reserves the right to limit the number of books provided to reviewers each year. He also reserves the right to modify any submitted
review for style and clarity. When rewording is required, the Book Review Editor will endeavour to preserve the intended meaning and, in so doing, may find
it necessary to consult the reviewer. Beginning with this issue of PiC, the text of the book reviews will no longer be printed in each issue, but will be available
on the CAP website.
LA POLITIQUE POUR LA CRITIQUE DE LIVRES
Si vous voulez faire l’évaluation critique d’un ouvrage, veuillez entrer en contact avec le responsable de la critique de livres, Richard Hodgson, en utilisant le
formulaire de demande électronique à http://www.cap.ca.
Les membres de l'ACP auront priorité pour les demandes de livres. Les demandes des non-membres ne seront examinées qu'un mois après la date de distribution du numéro de la Physique au Canada dans lequel le livre aura été déclaré disponible (p. ex., un livre figurant dans le numéro de janvier-février de la
Physique au Canada sera mis à la disposition des non-membres à la fin de mars).
Le Directeur de la critique de livres se réserve le droit de limiter le nombre de livres confiés chaque année aux examinateurs. Il se réserve, en outre, le droit de
modifier toute critique présentée afin d'en améliorer le style et la clarté. S'il lui faut reformuler une critique, il s'efforcera de conserver le sens voulu par l'auteur
de la critique et, à cette fin, il pourra juger nécessaire de le consulter. Commençant par cette revue de PaC, le texte des critiques de livre ne
sera plus imprimé dans chaque revue, mais sera disponible sur la page Web de l'ACP.
BOOKS RECEIVED / LIVRES REÇUS
The following books have been received for review. Readers are
invited to write reviews, in English or French, of books of interest to
them. Books may be requested from the book review editor,
Richard Hodgson by using the online request form at
http://www.cap.ca.
Les livres suivants nous sont parvenus aux fins de critique. Celles-ci peuvent être faites en anglais ou en français. Si vous êtes
intéressé(e)s à nous communiquer une revue critique sur un
ouvrage en particulier, veuillez vous mettre en rapport avec le
responsable de la critique des livres, Richard Hodgson par internet
à http://www.cap.ca.
GENERAL INTEREST
LIE GROUPS, PHYSICS AND GEOMETRY, Robert Gilmore,
Cambridge University Press, 2008; pp. 319; ISBN: 978-0-521-88400-6
(hc); Price: $80.00.
A list of ALL books available for review, books out for review, and copies of book reviews published since 2000 are available online; see the “Physics in Canada” section of the CAP's website: http://www.cap.ca.
FINAL THEORY, Mark Alpert, Simon & Schuster Canada, 2008; pp.
356; ISBN: 978-1-4165-7287-9; Price: $29.95.
UNDERGRADUATE TEXTS
A STUDENT'S GUIDE TO MAXWELL'S EQUATIONS, Daniel Fleisch,
Cambridge University Press, 2008; pp. 134; ISBN: 978-0-521-70147-1
(pbk); 978-0-521-87761-9 (hc); Price: $28.99/$80.
Il est possible de trouver électroniquement une liste de livres
disponibles pour la revue critique, une liste de livres en voie de
révision, ainsi que des exemplaires de critiques de livres publiés
depuis l'an 2000, en consultant la rubrique "La Physique au Canada"
de la page Web de l'ACP : www.cap.ca.
GRADUATE TEXTS AND PROCEEDINGS
THE PHYSICS OF THE Z AND W BOSONS, R. Tenchini and C.
Verzegnassi, World Scientific Publishing Co., 2008; pp. 419; ISBN:
978-9-812-707024 (hc); Price: $89.00
BOOK REVIEWS / CRITIQUES DE LIVRES
Book reviews for the following books have been received and posted to the Physics in Canada section of the CAP's website: http://www.cap.ca. Review summaries submitted by the reviewer are included; otherwise, the full review can be seen at the URL listed with
the book details. [NOTE: Short reviews received for books listed in the January to September 2007 issues are included as well.]
Des revues critiques ont été reçues pour les livres suivants et ont été affichées dans la section “La Physique au Canada” de la page web de
l’ACP : http://www.cap.ca
A SHORT INTRODUCTION TO QUANTUM INFORMATION AND QUANTUM COMPUTATION, Michel Le Bellac, Cambridge University Press, 2006; pp. 167; ISBN: 0-521-86056-3 (hc); Price: $60.00. [Review by Michael Underwood, Institute for Quantum Information Science, U of C; posted 5/7/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev838_601.pdf]
BASIC VACUUM TECHNOLOGY, 2ND EDITION, A. Chambers, R.K. Fitch and B.S. Halliday, Institute of Physics Publishing, 1998; pp. 184; ISBN: 0-7503-0495-2; Price: $45.00 (hc). [Review by Jake Bobowski, University of British Columbia; posted 5/7/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev212_604.pdf]
In the preface, the authors identify the need for a
modern book that covers a broad range of topics
relevant to vacuum technology that is suitable for
readers who need to become experts in the field.
These authors attempt to fill the void with their
book entitled Basic Vacuum Technology. In this
reviewer's opinion, too much has been sacrificed
in the effort to make this book affordable.
This book will be easily accessible to anyone
with a solid foundation in undergraduate thermodynamics. The topics covered include gases and
gases in vacuum, the pumping process, the various types of pumps and gauges used to measure
pressure, vacuum materials and the maintenance
of these materials, leak detection, and archetypical vacuum systems.
This broad range of topics is covered in a mere 160 pages, with 20+ pages of appendices. As a result, none of the topics are covered in great detail. The text of the book is accompanied by plenty of black and white figures. Unfortunately, the figure captions are kept to a bare minimum, forcing the reader to search through the main text for a detailed description.
This book does succeed in introducing all of the
major topics to be considered when designing a
functional vacuum system. However, those looking to become experts in the field will be better
served by a more advanced text. Those who are
faced with designing a specific vacuum system,
be it a UHV system or a gas handling system for
a dilution refrigerator, will certainly benefit from
a more specialized text.
Jake Bobowski
University of British Columbia
GENERAL RELATIVITY: AN INTRODUCTION FOR PHYSICISTS, M.P. Hobson, G. Efstathiou and A.N. Lasenby, Cambridge University Press, 2006; pp. 554; ISBN: 0-521-82951-8 (hc); Price: $70.00. [Review by Lance Parsons, Physics Dept., Memorial University; posted 5/7/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev821_605.pdf]
STRING THEORY AND M-THEORY: A MODERN INTRODUCTION, K. Becker, M. Becker and J. Schwarz, Cambridge University Press, 2006; pp. 739; ISBN: 0-521-86069-5 (hc); Price: $80.00. [Review by Henry Ling, University of British Columbia; posted 5/7/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev871_584.pdf]
SUPERFRACTALS, Michael F. Barnsley, Cambridge University Press, 2006; pp. 452; ISBN: 0-521-84493-2 (hc); Price: $35.00. [Review by Collin Carbno, SaskTel; posted 5/7/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev854_615.pdf]
I enjoy programming to produce fractals, so naturally I enthusiastically dug into Barnsley's book.
Chapters one to four are spent laying a detailed
mathematical foundation for the concepts needed
to explain what a superfractal is. The key foundational concept is that of a fractal top. Barnsley's
precise mathematical explanation for a fractal top
is "an addressing function for the set attractor of
IFS such that each point on the attractor has a
unique address, even in the overlapping case".
A fractal top is, roughly, what is formed by taking a transformation of a picture and putting it on top of the original picture, then taking another transformation (perhaps of a different picture) and putting it on top of the resulting picture, and so on. The final picture depends on the exact sequence of overlays (tops) used to create it and on the generating pictures used.
A superfractal is, roughly, a fractal top created by applying, either deterministically or randomly, a collection of different transformations to one or more generative pictures, where one iterates in generations so that the output of one generation becomes the input of the next. Under suitable conditions, the limit of this process is a superfractal.
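The construction is easy to play with in code. Below is a minimal Python sketch of the deterministic version of this iteration; it is my own illustration, not code from Barnsley's book. Three contractive affine maps (the familiar maps that generate a Sierpinski-style attractor, chosen purely for illustration) are applied to a binary picture, the transformed copies are overlaid, and the output of each generation becomes the input of the next. The function names and parameters are my own; the random variant described above would simply choose which maps, or which generative picture, to use at each generation.

# Minimal sketch of the "generations" idea (illustration only, not Barnsley's code):
# apply a set of contractive affine maps to a binary picture, overlay the
# transformed copies, and feed the output of one generation into the next.
import numpy as np

def apply_map(picture, sx, sy, tx, ty):
    """Map pixel coordinates (x, y) -> (sx*x + tx*n, sy*y + ty*n) by
    nearest-neighbour resampling; crude, but adequate for a sketch."""
    n = picture.shape[0]
    out = np.zeros_like(picture)
    ys, xs = np.nonzero(picture)
    new_x = np.clip((sx * xs + tx * n).astype(int), 0, n - 1)
    new_y = np.clip((sy * ys + ty * n).astype(int), 0, n - 1)
    out[new_y, new_x] = 1
    return out

def next_generation(picture, maps):
    """Overlay the transformed copies of `picture`; logical OR plays the
    role of putting one picture on top of another."""
    out = np.zeros_like(picture)
    for m in maps:
        out |= apply_map(picture, *m)
    return out

if __name__ == "__main__":
    n = 512
    picture = np.ones((n, n), dtype=np.uint8)   # the initial generative picture
    sierpinski = [(0.5, 0.5, 0.00, 0.0),        # scale by 1/2 and translate:
                  (0.5, 0.5, 0.50, 0.0),        # three maps of a Sierpinski IFS
                  (0.5, 0.5, 0.25, 0.5)]
    for _ in range(9):                          # nine generations
        picture = next_generation(picture, sierpinski)
    print(picture.sum(), "pixels remain set after 9 generations")

Starting from a filled square, each generation leaves three half-scale copies of the previous picture overlaid on one another, and the sequence of pictures converges toward the set attractor of the chosen maps.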
If you are looking for a book that puts superfractals on a solid mathematical foundation, then this book is perfect. However, if you are more interested in getting a feel for superfractals, how to create them, how they work, and how to actually use them in a graphics application, this book may disappoint you.
Collin Carbno
SaskTel
THE CHRONOLOGERS' QUEST: THE SEARCH FOR THE AGE OF THE EARTH, Patrick Wyse Jackson, Cambridge University Press, 2006; pp. 291; ISBN-10: 0-521-81332-8; Price: $30 (US). [Review by Louis Marchildon, Université du Québec à Trois-Rivières; posted 5/20/2008; To read the detailed review, please see http://www.cap.ca/brms/reviews/Rev900_620.pdf ]
EMPLOYMENT OPPORTUNITIES
Laurentian University
Université Laurentienne
Postdoctoral Research Positions at SNOLAB for the SNO+ and DEAP Experiments
Three Postdoctoral Research positions are available in the experimental particle astrophysics group at Laurentian University, for two experiments under development at SNOLAB in Sudbury, Ontario, Canada. SNOLAB is Canada's new state-of-the-art international facility for particle astrophysics and an expansion of the
highly successful Sudbury Neutrino Observatory (SNO), located two kilometers underground at Vale Inco's Creighton Mine. For information on the laboratory
and the experimental program please see www.snolab.ca.
The SNO+ experiment will refill the SNO detector with a custom liquid scintillator to extend SNO's solar neutrino measurements to lower energies, as well as to study geo-neutrinos and reactor neutrinos. It also plans to load the scintillator with neodymium to search for neutrinoless double beta decay with high sensitivity. Two postdoctoral associates would lead research on the SNO+ topics planned at the SNOLAB site and at Laurentian University. These include establishing purification and radio-assay techniques for the SNO+ metal-loaded liquid scintillator and studying detector backgrounds. Other topics include developing the SNO+ supernova neutrino burst trigger, Monte Carlo modeling, and other DAQ and data analysis tools.
The third postdoctoral associate will participate in the DEAP/CLEAN experimental program, which uses single-phase liquid argon as the detecting medium to search for WIMP dark matter. The collaboration is developing detectors at several mass scales, including DEAP-1, a 7 kg detector; MiniCLEAN, a 360 kg detector; and DEAP/CLEAN, a 3600 kg detector. DEAP-1 is currently operating underground at SNOLAB, while MiniCLEAN and DEAP/CLEAN are proposed to be installed at SNOLAB in 2009 and 2010, respectively. The successful candidate will take a lead role in the operation and analysis of data from DEAP-1 and will participate in the development of DEAP/CLEAN, including the implementation of the calibration systems.
We seek applicants with a PhD in experimental particle astrophysics, nuclear or particle physics, or a closely related field. The candidates should have a demonstrated ability to lead efforts in hardware development and data analysis. These positions are based at the SNOLAB site in Sudbury and administered through Laurentian University. The initial appointments will be for two years. Salary will be commensurate with qualifications and experience. Applicants should send a detailed CV and a statement of research interests, and arrange for three letters of reference to be forwarded (please include the reference "SNO+/DEAP application"), to:
Ms. Shari Moss
SNOLAB Project Office
P.O. Box 159, Lively ON Canada P3Y 1M3, or by e-mail to [email protected]
A review of applications will begin on July 27, 2008, but applications will be accepted until the positions are filled. We thank all who express interest in these positions and advise that only those selected for an interview will be contacted. For further information contact Dr. Clarence Virtue ([email protected]).
Laurentian University is committed to equity in employment and encourages applications from all qualified applicants including women, aboriginal peoples,
members of visible minorities and persons with disabilities.
Director of SNOLAB
SNOLAB is Canada's national underground research facility for particle astrophysics, located in Sudbury, Ontario, at the Creighton Mine operated by Vale Inco Ltd. SNOLAB is the deepest laboratory in the world, located 2 km underground, and will provide space for a number of international experiments in an ultra-clean environment starting this year. The principal scientific topics under investigation at SNOLAB are the detection of low-energy solar neutrinos, neutrinoless double beta decay, cosmic dark matter, and supernova neutrinos. SNOLAB employs approximately 35 scientists, engineers, technicians, and general staff, and expects hundreds of scientists from institutions world-wide to participate in experiments. It is operated by the SNO Institute, formed by a consortium of Canadian universities, and receives capital and operating support from the Canada Foundation for Innovation, NSERC, NOHFC, FedNor and the Ontario Ministry of Research and Innovation.
The Director will have overall responsibility for the scientific program of SNOLAB and for its operation and development, as well as the authority for critical decisions concerning the securing and management of operating funds, the safety of all workers, and the development and implementation of policies, internal systems and programs.
The successful candidate will have an advanced degree in a physics-related discipline, demonstrated leadership abilities, and scientific insight and vision. He/she will have an outstanding international research record, with more than 10 years' experience in a senior role. The successful candidate will have achieved international stature in the fields of particle and/or nuclear physics and will have a proven track record of attracting operational and capital funding for research projects. Experience with the administrative and financial matters associated with large-scale science projects is required, along with strong communication, interpersonal, negotiating and relationship-building skills.
The Director position will be a five-year initial appointment associated with one of the member institutions and is renewable. This position is available
January 1, 2009.
Salary will be commensurate with that of a senior Full Professor at a Canadian University. The position is open to all qualified applicants. Please note that in the
case of equal qualifications, preference may be given to a Canadian Citizen or Permanent Resident.
Applicants should forward a detailed CV and arrange to have at least three letters of reference sent to:
Dr. Nigel Lockyer, Chair
SNOLAB Director Search Committee
c/o Ms. S. Moss, SNOLAB Project Office
P.O. Box 159, Lively, Ontario, Canada P3Y 1M3
Consideration of applications will begin August 1, 2008 and will continue until a suitable candidate is found. Please direct any queries to
[email protected]
SNOLAB is committed to employment equity and diversity in the workplace and welcomes applications from women, visible minorities, aboriginal people, persons with disabilities, and persons of any sexual orientation or gender identity.
Postdoctoral Research Position at SNOLAB in Experimental Astroparticle Physics
A Postdoctoral Research position in experimental astroparticle physics is available at SNOLAB. Now in the final stages of construction, SNOLAB is Canada's new state-of-the-art facility for astroparticle physics and is an expansion of the highly successful Sudbury Neutrino Observatory (SNO), located near Sudbury, Ontario. With its completion this year, SNOLAB will be the deepest underground ultra-clean facility in the world and a leading location for conducting frontier experiments in astroparticle physics. The successful applicant would be expected to play a major role in one of the SNOLAB programs, which include searches for dark matter (DEAP/CLEAN, MiniCLEAN, PICASSO, SuperCDMS); searches for neutrinoless double beta decay (SNO+ with neodymium and Gas EXO); studies of low-energy solar and geo-neutrinos (SNO+); and a supernova watch (HALO). For more details about the laboratory and the experimental program, please see www.snolab.ca.
We are seeking applicants with a PhD in experimental astroparticle physics, nuclear or particle physics, or a closely related field. The candidates should have a demonstrated ability to lead efforts in hardware development and data analysis. The position will be based at the SNOLAB site in Sudbury, Ontario, and will be administered through Queen's University. The initial appointment will be for two years. Salary will be commensurate with qualifications and experience. Applicants should include a detailed CV and a brief statement of research interests, and arrange to have at least three letters of reference forwarded to:
Ms. S. Moss, SNOLAB Project Office, P.O. Box 159, Lively, Ontario, Canada P3Y 1M3, or by e-mail to: [email protected]
SNOLAB thanks all who express an interest and advises that only those selected for an interview will be contacted. A review of
the applications will begin on June 15, 2008 but applications will be accepted until the position is filled. SNOLAB is committed
to employment equity and diversity in the workplace and welcomes applications from women, visible minorities, aboriginal people, persons with disabilities, and persons of any sexual orientation or gender identity.
MEDICAL PHYSICISTS
Three permanent positions available in the Department of Radiation Oncology
The Montréal Jewish General Hospital, a 637-bed hospital complex, was created in 1934 by the Jewish community to serve the population without distinction of race, religion or financial means. All medical services and activities take place in French as well as in English. The Jewish General Hospital enjoys an enviable reputation for the quality of its patient relations and for the spirit of camaraderie and mutual support that reigns among its staff.
Advantages:
• Financial support for geographical relocation
• A generous individual budget for continuing education
• Free services of a professional coach to facilitate your integration
• A complete benefits package
• Access to the hospital parking lot at the preferential rate of $42 per month
• Easy access by public transportation
• Health services exclusive to employees
• Daily opportunities to practice both French and English
To apply, you must hold a Master's or PhD in Medical Physics.
For information: Tel.: (514) 328-1091 or [email protected]
PHYSICIEN(NE) MÉDICAL(E)
3 postes permanents disponibles dans le département de radio-oncologie
L’Hôpital Général Juif de Montréal, un centre hospitalier à vocation universitaire de 637 lits, a été créé en 1934
par la communauté juive pour desservir la population du Québec, sans distinction de race, de religion ou de
moyens financiers. Tous les services médicaux et toutes les activités s’y déroulent aussi bien en français qu’en
anglais. L’Hôpital Général Juif bénéficie d’une réputation enviable pour la qualité de ses relations avec les
patients et pour l’esprit de camaraderie et d’entraide qui règne au sein de son personnel.
Avantages :
• Un soutien financier à la relocalisation géographique
• Un généreux budget individuel pour les activités de perfectionnement
• Les services gratuits d'un coach professionnel pour faciliter votre intégration
• Un plan complet d'avantages sociaux
• L'accès au stationnement de l'hôpital au tarif préférentiel de 42 $ par mois
• Un accès facile par les transports en commun
• Un service de santé à l'usage exclusif des employés
• Des opportunités quotidiennes de pratiquer le français et l'anglais
Pour soumettre votre candidature, vous devez détenir une maîtrise ou un doctorat en physique médicale.
Pour information : Tél. : (514) 328-1091 ou [email protected]
ALL UNDELIVERABLE COPIES IN CANADA / TOUTE CORRESPONDANCE NE POUVANT ÊTRE LIVRÉE AU CANADA should be returned to / devra être retournée à :
Canadian Association of Physicists / l'Association canadienne des physiciens et physiciennes
Suite/bur. 112, McDonald Bldg. / Imm. McDonald, Univ. of/d'Ottawa, 150 Louis Pasteur, Ottawa, Ontario K1N 6N5
Canadian Publications Product Sales Agreement No. 40036324 / Numéro de convention pour les envois de publications canadiennes : 40036324