Articles from Nature magazine (PDF file)

Transcription

quently from Krishnan as from her nemeses.
But after the first 90 pages, apart from a
mildly interesting sub-plot involving sperm
stealing, the novel takes a more mundane
turn and becomes a description of the
biotechnology business plan to capitalize on
Krishnan’s discoveries. The interesting
human aspects of the characters, the inner
conflicts of scientists succumbing to competitive drives and the temptations of commercialization, become secondary to the
not-so-suspenseful fate of their stock
options.
Overall, the reader is most likely to be
gripped by the well-researched biology of
NO and the “jouissance” derived from reading about the science of sex, a term that,
according to Djerassi, was fashionable
among undergraduates at Wellesley College,
Massachusetts, circa 1970.
Frances M. Brodsky is in the Departments of
Biopharmaceutical Sciences, Pharmaceutical
Chemistry and Microbiology and Immunology at
the University of California, 513 Parnassus
Avenue, San Francisco, California 94143-0552,
USA. She is the author of the scientific mystery
Principal Investigation by B. B. Jordan (Berkley
Prime Crime, 1997).
From chaos to
complexity
Chaos Theory Tamed
by Garnett P. Williams
Taylor & Francis: 1997. Pp. 499. £19.95,
$34.95
Dynamics of Complex Systems
by Yaneer Bar-Yam
Addison-Wesley: 1997. Pp. 848. $56
Michael F. Shlesinger
Chaos is no longer a new field. It has already
been 35 years, and several generations of
students, since Edward Lorenz discovered
the strange attractor. Neither is chaos a fad
or a dead end. It is based on the rock-solid
foundation of physics, Newton’s laws, and
on tackling the nonlinear, non-integrable
equations whose solution had to wait for an
appreciation of unstable behaviour, new
mathematical tools and the advent of computer visualization.
The thesis of Garnett Williams’s Chaos
Theory Tamed is that enough wisdom has
accumulated to give an account of chaos
theory mostly in words and pictures, without resorting to deep and sophisticated
mathematics.
Williams is careful to focus on standard
theoretical topics related to low-dimensional dissipative systems, and opts not to
broach the rich, complex subject of Hamiltonian systems, thereby omitting topics
such as the three-body problem and the
strange kinetics associated with the fractal
orbits-within-orbits of the standard and
Zaslavsky maps.

Out of the chaos: clockwise from top left are a Lyapunov space (used to study how enzymes break down
carbohydrates), the Lorenz Attractor and fractal images entitled Overseer and Scorpio’s Tail.
Williams succeeds in his goal, with a
carefully written, thoughtful exposition of
standard topics (such as the logistic map,
strange attractors, routes to chaos and
Poincaré sections) and tools of the trade
(including attractor reconstruction, Kolmogorov–Sinai entropy, fractal dimensions
and Lyapunov exponents). Equations are
included, but they are developed using
careful discussion, rather than detailed
mathematics. The first 158 pages of his
book cover background information,
including vectors, Fourier analysis, probability theory and time series. A good deal of
discussion is given to the logistic map as a
simple model to demonstrate many ideas of
chaos. Those unfamiliar with the basic topics of nonlinear dynamics in dissipative systems would do well to study this friendly,
self-contained book.
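A reader who wants to experiment alongside the text can do so in a few lines. The sketch below is an illustration of the standard construction, not code from the book: it iterates the logistic map x → rx(1 − x) and estimates its Lyapunov exponent, whose sign separates stable cycles from chaos; the parameter values are conventional textbook choices.

import math

def logistic_orbit(r, x0=0.4, n=100_000, burn_in=500):
    """Iterate the logistic map x -> r*x*(1 - x), discarding a transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    for _ in range(n):
        x = r * x * (1 - x)
        yield x

def lyapunov_exponent(r, n=100_000):
    """Average of log|f'(x)| = log|r*(1 - 2x)| along the orbit.
    Positive values indicate chaos; negative values, a stable cycle."""
    return sum(math.log(abs(r * (1 - 2 * x)))
               for x in logistic_orbit(r, n=n)) / n

for r in (2.8, 3.2, 3.5, 4.0):  # fixed point, 2-cycle, 4-cycle, chaos
    print(f"r = {r}: lambda ~ {lyapunov_exponent(r):+.3f}")

For r = 4.0 the estimate converges to ln 2 ≈ 0.693, the exact value for the fully chaotic map.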
The success of nonlinear dynamics in
handling chaos in systems with few degrees
of freedom has led some to believe that
these methods can be extended all the way
to understanding complex social systems,
such as economics, war strategy, psychology and city planning. But the ultimate success of the ideas of chaos in physics has been
based on experimental verification of the
existence of nonlinear instabilities and
behaviours in well-controlled, repeatable
experiments — in other words, the scientific method. Like the proverbial river into
which one cannot step twice, one cannot
repeat a social experiment, because the first
experiment changes the conditions under
which it was executed. But much can be
learned, and a similar limitation has not
deterred cosmologists.
Yaneer Bar-Yam’s intriguing Dynamics
of Complex Systems goes beyond chaos theory to the broader field of complex systems.
He does not define complexity, but considers mainly systems with a large number of
interacting parts, and seeks to discover pervading themes, such as memory, adaptation, evolution and self-organization, and
then to model these phenomena.
The book begins with a 294-page introduction — a veritable book within a book
— covering basic topics such as iterative
maps, Monte Carlo techniques, random
walks, phase transitions, activated processes and fractals. These topics form an extensive toolkit, providing the reader with the
means to characterize, model and simulate
aspects of complex systems.
In the body of the book, Bar-Yam begins
with neural networks, then moves up the
scale of complexity to protein folding, evolution, developmental biology and, finally,
human civilization. The book does not try
to have the last word on these vast fields, but
introduces the reader to aspects that can be
modelled and explored. Throughout, questions and their answers are folded into the
text, and the many mathematical techniques and arguments are clearly presented. This book is an excellent place to start
exploring the concepts and techniques of
complex systems and provides an effective
springboard to further studies.
Michael F. Shlesinger is in the Office of Naval
Research, Physical Sciences Division 331,
800 North Quincy Street, Arlington, Virginia
22217-5660, USA.
progress
A surprising simplicity to protein folding
David Baker
Department of Biochemistry, University of Washington, J567 Health Sciences Building, Box 357350, Seattle, Washington 98195, USA
The polypeptide chains that make up proteins have thousands of atoms and hence millions of possible inter-atomic interactions. It
might be supposed that the resulting complexity would make prediction of protein structure and protein-folding mechanisms
nearly impossible. But the fundamental physics underlying folding may be much simpler than this complexity would lead us to
expect: folding rates and mechanisms appear to be largely determined by the topology of the native (folded) state, and new
methods have shown great promise in predicting protein-folding mechanisms and the three-dimensional structures of proteins.
Proteins are linear chains of amino acids that adopt unique three-dimensional structures (‘native states’) which allow them to carry
out intricate biological functions. All of the information needed to
specify a protein's three-dimensional structure is contained within
its amino-acid sequence. Given suitable conditions, most small
proteins will spontaneously fold to their native states1.
The protein-folding problem can be stated quite simply: how do
amino-acid sequences specify proteins' three-dimensional structures? The problem has considerable intrinsic scientific interest: the
spontaneous self-assembly of protein molecules with huge numbers
of degrees of freedom into a unique three-dimensional structure
that carries out a biological function is perhaps the simplest case of
biological self-organization. The problem also has great practical
importance in this era of genomic sequencing: interpretation of the
vast amount of DNA sequence information generated by large-scale
sequencing projects will require determination of the structures and
functions of the encoded proteins, and an accurate method for
protein structure prediction could clearly be vital in this process.
Since Anfinsen's original demonstration of spontaneous protein
refolding, experimental studies have provided much information
on the folding of natural proteins2–4. Complementary analytical and
computational studies of simple models of folding have provided
valuable and general insights into the folding of polymers and the
properties of folding free-energy landscapes5–7. These studies of
idealized representations of proteins have inspired new models,
some described here, which attempt to predict the results of
experimental measurements on real proteins.
Because the number of conformations accessible to a polypeptide
chain grows exponentially with chain length, the logical starting point
for the development of models attempting to describe the folding of
real proteins is experimental data on very small proteins (fewer than
100 residues). Fortunately, there has been an explosion of information about the folding of such small proteins over the last ten years3.
For most of these proteins, partially ordered non-native conformations are not typically observed in experiments, and the folding
reactions can usually be well modelled as a two-state transition
between a disordered denatured state and the ordered native state.
In contrast, the folding kinetics of larger proteins may in some cases
be dominated by escape from low-free-energy non-native conformations. The folding of larger proteins is also often facilitated by
‘molecular chaperones’8, which prevent improper protein aggregation.
To pass between the unfolded and native low-free-energy states,
the protein must pass through a higher-free-energy transition state.
In the unfolded state the protein can take up any one of many
conformations, whereas in the native state it has only one or a few
distinct conformations. The degree of heterogeneity of conformations in the transition state has thus been the subject of much
discussion9–11. For example, one of the main differences between the
Box 1
Dependence of folding mechanisms on topology
The structures of folding transition states are similar in proteins with
similar native structures. The distribution of structure in the transition
state ensemble can be probed by mutations at different sites in the chain;
mutations in regions that make stabilizing interactions in the transition
state ensemble slow the folding rate, whereas mutations in regions that
are disordered in the transition state ensemble have little effect4. For
example, in the structures of the SH3 domains of src18 (a) and spectrin17
(b), and the structurally related proteins ADA2h (ref. 37; c) and acyl
phosphatase16 (d), the colours code for the effects of mutations on the
folding rate. Red, large effect; magenta, moderate effect; and blue, little
effect. In the two SH3 domains, the turn coloured in red at the left of the
structures appears to be largely formed, and the beginning and end of
the protein largely disrupted, in the transition state ensemble. (To facilitate
the comparison in c and d, the average effect of the mutations in each
secondary structure element is shown.) This dependence of folding rate
on topology has been quantified by comparing folding rates and the
relative contact order of the native structures. The relative contact order is
the average separation along the sequence of residues in physical
contact in a folded protein, divided by the length of the protein. e, A low-
and high-contact-order structure for a four-strand sheet. In f, black
circles represent all-helical proteins, green squares sheet proteins and
red diamonds proteins comprising both helix and sheet structures. The
correlation between contact order and folding rate (kf) is striking,
occurring both within each structural subclass and within sets of proteins
with similar overall folds (proteins structurally similar to the α/β protein
acyl phosphatase16 are indicated by blue triangles).
‘old’ and ‘new’ views of protein folding is that the ‘new’ view allows
for a much more heterogeneous transition state — really a transition
state ensemble — than the ‘old’ view, which concentrated on a
single, well-defined folding ‘pathway’.
The primary measurements that can be made experimentally of
the highly cooperative folding reactions of small proteins are:
the folding rate; the distribution of structures in the transition
state ensemble, inferred from the effects of mutations on the folding
rate (Box 1); and the structure of the native state. Here I focus on
recent progress in predicting these three features.
Topology determines folding mechanisms
Are simple models likely to be able to account for the overall features
of the folding process, given the many possible inter-atomic interactions
in even a small protein? Recent data indicate that the
fundamental physics underlying the folding process may be simpler
than was previously thought.
The complexity of protein structure emerges from the details of
how individual atoms in both a protein's peptide backbone and its
amino-acid residues interact. However, the general path that the
polymer chain takes through space — its topology — can be very
similar between proteins. Three independent lines of investigation
indicate that protein-folding rates and mechanisms are largely
determined by a protein's topology rather than its inter-atomic
interactions12.
First, large changes in amino-acid sequence, either experimental13,14
or evolutionary15, that do not alter the overall topology of a protein
usually have less than a tenfold effect on the rate of protein folding15.
This suggests that evolution has not optimized protein sequences
for rapid folding, an encouraging result for simple model
development.
Second, using the consequences of mutations on folding kinetics
to probe the transition states of proteins with similar structures but
very different sequences has shown that the structures of these
transition states are relatively insensitive to large-scale changes in
sequence16–18. For example, Box 1 shows two examples of pairs
of structurally related proteins with little or no sequence similarity
that have very similar folding transition-state ensembles.
Box 2
Prediction of protein-folding mechanisms
Munoz and Eaton24 computed folding rates by solving the diffusion
equation of motion on the one-dimensional free-energy profiles that
result from projection of the full free-energy landscape onto a reaction
coordinate corresponding to the number of ordered residues. a shows
the accuracy of their prediction by plotting computed folding rates (kcalc)
against experimentally measured rates (kexp). To predict folding transition
state structure, the lowest free-energy paths to the native state can be
identified. For example, a β-hairpin (b) has two possible paths to the
native state, beginning at the hairpin (pathway 1) or at the free ends
(pathway 2; ordered residues only are indicated; L is loop length). The
Table gives the contributions to the free energy of each configuration
(total free energy is the sum of the first three columns). Plotting the free
energy as a function of the number of ordered residues (c) shows that the
transition state for both pathways consists of configurations with two of
the residues ordered. Calculations on real proteins (d–f) have considered
all possible paths: the folding rate and transition state structure are
determined from the lowest free-energy paths. Galzitskaya and
Finkelstein25 and Alm and Baker26 predicted the folding transition state
structure of CheY (f), and CI-2 (d) and barnase (e), respectively. They
identified the transition-state ensemble by searching for the highest
free-energy configurations on the lowest free-energy paths between unfolded
and folded states. The effects of mutations on the folding rate were
predicted on the basis of the contribution of the interactions removed by
the mutations to the free energy of the transition state ensemble, or by
directly determining the change in folding rate. The predicted effects of
mutations on the folding rates are shown on the native structure (left); the
measured effects, on the right (the colour scheme is as in Box 1; grey,
regions not probed by mutations; experimental results for CI-2 and
barnase, ref. 4; CheY, ref. 38).
[Figure: a, computed against measured folding rates (log kcalc versus
log kexp) for a set of small two-state proteins; b, the two ordering pathways
of the hairpin, with a table listing, for each configuration, the contact
energy (–8 per contact), ordering entropy (3 per residue), loop entropy
(8 + 1.5 ln(L)) and total free energy; c, free energy against number of
ordered residues for pathways 1 and 2; d–f, predicted (left) and measured
(right) effects of mutations for CI-2, barnase and CheY.]
Third, the folding rates of small proteins correlate with a property
of the native state topology: the average sequence separation
between residues that make contacts in the three-dimensional
structure (the ‘contact order’; Box 1). Proteins with a large fraction
of their contacts between residues close in sequence (‘low’ contact
order) tend to fold faster than proteins with more non-local
contacts (‘high’ contact order)12,19. This correlation holds over a
million-fold range of folding rates, and is remarkable given the large
differences in the sequences and structures of the proteins compared. Simple geometrical considerations appear to explain much
of the difference in the folding rates of different proteins.
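The contact-order statistic itself is easy to compute from a native structure. The sketch below follows the definition quoted in Box 1; the 6 Å heavy-atom cutoff is a common but assumed convention, not necessarily the exact protocol of refs 12 and 19.

def relative_contact_order(residue_atoms, cutoff=6.0):
    """Relative contact order: the mean sequence separation |j - i| over
    all pairs of atoms in contact, divided by the chain length.

    residue_atoms: one entry per residue, in sequence order; each entry
    is a list of (x, y, z) heavy-atom coordinates for that residue.
    """
    n = len(residue_atoms)
    total_sep = n_contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            for x1, y1, z1 in residue_atoms[i]:
                for x2, y2, z2 in residue_atoms[j]:
                    if (x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2 <= cutoff**2:
                        total_sep += j - i
                        n_contacts += 1
    if n_contacts == 0:
        return 0.0
    return total_sep / (n_contacts * n)

A mostly helical protein, whose contacts are dominated by i, i+4 pairs, scores low and is predicted to fold fast; a sheet protein rich in long-range pairings scores high and is predicted to fold slowly.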
The important role of native-state topology can be understood by
considering the relatively large entropic cost of forming non-local
interactions early in folding. The formation of contacts between
residues that are distant along the sequence is entropically costly,
because it greatly restricts the number of conformations available to
the intervening segment. Thus, interactions between residues close
together in sequence are less disfavoured early in folding than
interactions between widely separated residues. So, for a given
topology, local interactions are more likely to form early in folding
than non-local interactions. Likewise, simple topologies with
mostly local interactions are more rapidly formed than those with
many non-local interactions. More generally, the amount of configurational entropy lost before substantial numbers of favourable
native interactions can be made depends on the topology of the
native state. The importance of topology has also been noted in
studies of computational models of folding20–23.
As proteins' sequences determine their three-dimensional structures, both protein stability and protein-folding mechanisms are
ultimately determined by the amino-acid sequence. But whereas
stability is sensitive to the details of the inter-atomic interactions
(removal of several buried carbon atoms can completely destabilize
a protein), folding mechanisms appear to depend more on the low-resolution geometrical properties of the native state.
Predicting folding mechanism from topology
The results described above indicate that simple models based on
the structure of the native state should be able to predict the coarse-grained features of protein-folding reactions. Several such models
have recently been developed, and show considerable promise for
predicting folding rates and folding transition-state structures.
Three approaches24–26 have attempted to model the trade-off
between the formation of attractive native interactions and the
loss of configurational entropy during folding. Each assumes that
the only favourable interactions possible are those formed in the
native state. This neglect of non-native interactions is consistent
with the observed importance of native-state topology in folding,
and dates back to the work of Go on simple lattice models27.
Although the approaches differ in detail, the fundamental ideas
are similar. All use a binary representation of the polypeptide
chain in which each residue is either fully ordered, as in the native
state, or completely disordered. To limit the number of possible
configurations, all ordered residues are required to form a small
number of segments, continuous in sequence. Attractive interactions are taken to be proportional to the number of contacts, or
the amount of buried surface area, between the ordered residues
in the native structure, and non-native interactions are completely
ignored. The entropic cost of ordering is a function of the
number of residues ordered and the length of the loops between
the ordered segments. Folding kinetics are modelled by allowing
only one residue to become ordered (or disordered) at a time. As the
number of ordered residues increases, the free energy first increases,
owing to the entropic cost of chain ordering, and then decreases,
as large numbers of attractive native interactions are formed.
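A minimal sketch of such a model makes this barrier picture concrete. It uses the single-sequence approximation (one contiguous ordered segment), arbitrary illustrative energy and entropy parameters, and omits loop-closure entropy, so it is a caricature in the spirit of the published models24–26 rather than any one of them.

import math

def free_energy_profile(n_res, native_contacts, eps=2.0, s_per_res=0.6):
    """Free energy F(n) for n ordered residues in a toy topology-based model.

    F(segment) = -eps * (native contacts inside the segment)
                 + s_per_res * (segment length)   # entropic ordering cost
    F(n) is minimized over all contiguous segments of length n; the
    maximum of the profile over n estimates the folding barrier.
    native_contacts: set of (i, j) residue pairs, i < j, in the native state.
    """
    profile = []
    for length in range(n_res + 1):
        best = math.inf
        for start in range(n_res - length + 1):
            end = start + length
            n_c = sum(1 for i, j in native_contacts if start <= i and j < end)
            best = min(best, -eps * n_c + s_per_res * length)
        profile.append(best)
    return profile

# Toy 8-residue hairpin with native contacts (0,7), (1,6), (2,5):
# the profile rises (ordering costs entropy before contacts pay off),
# peaks at the transition state, then falls to the native minimum.
profile = free_energy_profile(8, {(0, 7), (1, 6), (2, 5)})
barrier = max(profile)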
Such simple models can potentially be used to predict experimentally measurable quantities such as the folding rate, which
depends on the height of the free-energy barrier, and the effects of
mutations on the folding rate, which depend on the region(s) of the
protein ordered near the top of the barrier. Predictions of both
Box 3
Ab initio structure predictions
Blind ab initio structure predictions for the CASP3 protein structure
prediction experiment. For each target, the native structure is shown on
the left with a good prediction on the right (predictions by Baker39 (a, c),
Levitt40 (b) and Skolnick41 (d) and colleagues; for more information see
http://predictioncentre.llnl.gov/ and Proteins Suppl. 3, 1999). Segments
are colour coded according to their position in the sequence (from blue
(amino terminus) to red (carboxy terminus)). a, DnaB helicase42. This
protein had a novel fold and thus could not be predicted using standard
fold-recognition methods. Not shown are N- and C-terminal helices,
which were positioned incorrectly in the predicted
structure. b, Ets-1 (ref. 43). c, MarA44. This prediction had potential for
functional insights; the predicted two-lobed structure suggests the
mechanism of DNA binding (left, X-ray structure of the protein±DNA
complex). d, L30. A large portion of this structure was similar to a protein
in the protein databank but the best ab initio predictions were competitive
with those using fold-recognition methods. The three approaches that
produced these predictions used reduced-complexity models for all or
almost all of the conformational search process.
folding rates and folding transition-state structures using these
simple models are quite encouraging (Box 2; other recent models
have also yielded good results28±33).
The success of these models in reproducing features of real
folding reactions again supports the idea that the topology of the
native state largely determines the overall features of protein-folding
reactions and that non-native interactions have a relatively minor
role. Incorporation of sequence-speci®c information into these
models, either in the inter-residue interactions or in the freeenergy costs of ordering different segments of the chain, should
improve their accuracy to the point where they may be able to
account for much of the experimental data on the folding of small
proteins.
Ab initio structure prediction
Predicting three-dimensional protein structures from amino-acid
sequences alone is a long-standing challenge in computational
molecular biology. Although the preceding sections suggest that
the only significant basin of attraction on the folding landscapes of
small proteins is the native state, the potentials used in ab initio
structure-prediction efforts have not had this property, and until
recently such efforts met with little success. The results of an
international blind test of structure prediction methods (CASP3;
ref. 34) indicate, however, that significant progress has been
made35,36.
As with the models for protein-folding mechanisms, most of the
successful methods attempt to ignore the complex details of the
inter-atomic interactions — the amino-acid side chains are usually
not explicitly represented — and instead focus on the coarse-grained
features of sequence–structure relationships. Problems in which the
full atomic detail of interactions in the native state is important —
such as the design of novel stable proteins, and the prediction of
stability and high-resolution structure — will almost certainly
require considerably more detailed models.
Some of the most successful blind ab initio structure predictions
made in CASP3 are shown in Box 3. In several of these predictions
the root-mean-square deviation between backbone carbon atoms in
the predicted and experimental structures is below 4.0 Å over segments
of up to 70 residues. Several of these models can compete with more
traditional fold-recognition methods. At least one case (MarA) gave
a model capable of providing clues about protein function39.
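For reference, the figure quoted here is a backbone root-mean-square deviation: the square root of the mean squared distance between corresponding atoms after optimal rigid-body superposition. The sketch below assumes the superposition step (for example, by the Kabsch algorithm) has already been applied.

import math

def backbone_rmsd(pred, expt):
    """RMSD between corresponding, already-superimposed backbone atoms.

    pred, expt: equal-length sequences of (x, y, z) coordinate tuples.
    """
    if len(pred) != len(expt):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((xp - xe)**2 + (yp - ye)**2 + (zp - ze)**2
             for (xp, yp, zp), (xe, ye, ze) in zip(pred, expt))
    return math.sqrt(sq / len(pred))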
The predictions are an encouraging improvement over those
achieved in the previous structure-prediction experiment (CASP2),
but improvements are still needed to the accuracy and reliability of
the models. Improvements in ab initio structure prediction may
allow these methods to generate reliable low-resolution models of
all the small globular proteins in an organism's genome.
Emerging simplicity
The experimental results and predictions discussed here indicate
that the fundamental physics underlying folding may be simpler
than previously thought and that the folding process is surprisingly
robust. The topology of a protein's native state appears to determine
the major features of its folding free-energy landscape. Both protein
structures and protein-folding mechanisms can be predicted, to
some extent, using models based on simpli®ed representations of
the polypeptide chain. The challenge ahead is to improve these
models to the point where they can contribute to the interpretation
of genome sequence information.
■
1. Anfinsen, C. B. Principles that govern the folding of protein chains. Science 181, 223–230 (1973).
2. Baldwin, R. L. & Rose, G. D. Is protein folding hierarchic? II. Folding intermediates and transition states. Trends Biochem. Sci. 24, 26–33 (1999).
3. Jackson, S. E. How do small single-domain proteins fold? Fold. Des. 3, R81–91 (1998).
4. Fersht, A. Structure and Mechanism in Protein Science: A Guide to Enzyme Catalysis and Protein Folding (Freeman, New York, 1999).
5. Chan, H. S. & Dill, K. A. Protein folding in the landscape perspective: chevron plots and non-Arrhenius kinetics. Proteins 30, 2–33 (1998).
6. Bryngelson, J. D., Onuchic, J. N., Socci, N. D. & Wolynes, P. G. Funnels, pathways, and the energy landscape of protein folding: a synthesis. Proteins 21, 167–195 (1995).
7. Dobson, C. M. & Karplus, M. The fundamentals of protein folding: bringing together theory and experiment. Curr. Opin. Struct. Biol. 9, 92–101 (1999).
8. Horwich, A. L. Chaperone rings in protein folding and degradation. Proc. Natl Acad. Sci. USA 96, 11033–11040 (1999).
9. Shakhnovich, E. I. Folding nucleus: specific or multiple? Insights from lattice models and experiments. Fold. Des. 3, R108–111 (1998).
10. Pande, V. S., Grosberg, A. Y., Tanaka, T. & Rokhsar, D. Pathways for protein folding: is a new view needed? Curr. Opin. Struct. Biol. 8, 68–79 (1998).
11. Thirumalai, D. & Klimov, D. K. Fishing for folding nuclei in lattice models and proteins. Fold. Des. 3, R112–118 (1998).
12. Alm, E. & Baker, D. Matching theory and experiment in protein folding. Curr. Opin. Struct. Biol. 9, 189–196 (1999).
13. Riddle, D. S. et al. Functional rapidly folding proteins from simplified amino acid sequences. Nature Struct. Biol. 4, 805–809 (1997).
14. Kim, D. E., Gu, H. & Baker, D. The sequences of small proteins are not extensively optimized for rapid folding by natural selection. Proc. Natl Acad. Sci. USA 95, 4982–4986 (1998).
15. Perl, D. et al. Conservation of rapid two-state folding in mesophilic, thermophilic and hyperthermophilic cold shock proteins. Nature Struct. Biol. 5, 229–235 (1998).
16. Chiti, F. et al. Mutational analysis of acylphosphatase suggests the importance of topology and contact order in protein folding. Nature Struct. Biol. 6, 1005–1009 (1999).
17. Martinez, J. C. & Serrano, L. The folding transition state between SH3 domains is conformationally restricted and evolutionarily conserved. Nature Struct. Biol. 6, 1010–1016 (1999).
18. Riddle, D. S. et al. Experiment and theory highlight role of native state topology in SH3 folding. Nature Struct. Biol. 6, 1016–1024 (1999).
19. Plaxco, K. W., Simons, K. T. & Baker, D. Contact order, transition state placement and the refolding rates of single domain proteins. J. Mol. Biol. 277, 985–994 (1998).
20. Shea, J. E., Onuchic, J. N. & Brooks, C. L. III Exploring the origins of topological frustration: design of a minimally frustrated model of fragment B of protein A. Proc. Natl Acad. Sci. USA 96, 12512–12517 (1999).
21. Onuchic, J. N., Nymeyer, H., Garcia, A. E., Chahine, J. & Socci, N. D. The energy landscape theory of protein folding: insights into folding mechanism and scenarios. Adv. Protein Chem. 53, 87–152 (2000).
22. Micheletti, C., Banavar, J. R., Maritan, A. & Seno, F. Protein structures and optimal folding from a geometrical variational principle. Phys. Rev. Lett. 82, 3372–3375 (1999).
23. Abkevich, V., Gutin, A. & Shakhnovich, E. Specific nucleus as the transition state for protein folding: evidence from the lattice model. Biochemistry 33, 10026–10036 (1994).
24. Munoz, V. & Eaton, W. A. A simple model for calculating the kinetics of protein folding from three-dimensional structures. Proc. Natl Acad. Sci. USA 96, 11311–11316 (1999).
25. Galzitskaya, O. V. & Finkelstein, A. V. A theoretical search for folding/unfolding nuclei in three-dimensional protein structures. Proc. Natl Acad. Sci. USA 96, 11299–11304 (1999).
26. Alm, E. & Baker, D. Prediction of protein-folding mechanisms from free-energy landscapes derived from native structures. Proc. Natl Acad. Sci. USA 96, 11305–11310 (1999).
27. Go, N. Theoretical studies of protein folding. Annu. Rev. Biophys. Bioeng. 12, 183–210 (1983).
28. Portman, J. J., Takada, S. & Wolynes, P. G. Variational theory for site resolved protein folding free energy surfaces. Phys. Rev. Lett. 81, 5237–5240 (1998).
29. Debe, D. A. & Goddard, W. A. III First principles prediction of protein-folding rates. J. Mol. Biol. 294, 619–625 (1999).
30. Li, A. J. & Daggett, V. Identification and characterization of the unfolding transition state of chymotrypsin inhibitor 2 by molecular dynamics simulations. J. Mol. Biol. 257, 412–429 (1996).
31. Lazaridis, T. & Karplus, M. ‘New view’ of protein folding reconciled with the old through multiple unfolding simulations. Science 278, 1928–1931 (1997).
32. Sheinerman, F. B. & Brooks, C. L. III A molecular dynamics simulation study of segment B1 of protein G. Proteins 29, 193–202 (1997).
33. Burton, R. E., Myers, J. K. & Oas, T. G. Protein folding dynamics: quantitative comparison between theory and experiment. Biochemistry 37, 5337–5343 (1998).
34. Moult, J., Hubbard, T., Fidelis, K. & Pedersen, J. T. Critical assessment of methods of protein structure prediction (CASP): round III. Proteins (Suppl.) 3, 2–6 (1999).
35. Orengo, C. A., Bray, J. E., Hubbard, T., LoConte, L. & Sillitoe, I. Analysis and assessment of ab initio three-dimensional prediction, secondary structure, and contacts prediction. Proteins (Suppl.) 3, 149–170 (1999).
36. Murzin, A. G. Structure classification-based assessment of CASP3 predictions for the fold recognition targets. Proteins (Suppl.) 3, 88–103 (1999).
37. Villegas, V., Martinez, J. C., Aviles, F. X. & Serrano, L. Structure of the transition state in the folding process of human procarboxypeptidase A2 activation domain. J. Mol. Biol. 283, 1027–1036 (1998).
38. Lopez-Hernandez, E. & Serrano, L. Structure of the transition state for folding of the 129 aa protein CheY resembles that of a smaller protein, CI-2. Fold. Des. 1, 43–55 (1996).
39. Simons, K. T., Bonneau, R., Ruczinski, I. & Baker, D. Ab initio protein structure prediction of CASP III targets using ROSETTA. Proteins (Suppl.) 3, 171–176 (1999).
40. Samudrala, R., Xia, Y., Huang, E. & Levitt, M. Ab initio protein structure prediction using a combined hierarchical approach. Proteins (Suppl.) 3, 194–198 (1999).
41. Ortiz, A. R., Kolinski, A., Rotkiewicz, P., Ilkowski, B. & Skolnick, J. Ab initio folding of proteins using restraints derived from evolutionary information. Proteins (Suppl.) 3, 177–185 (1999).
42. Weigelt, J., Brown, S. E., Miles, C. S. & Dixon, N. E. NMR structure of the N-terminal domain of E. coli DnaB helicase: implications for structure rearrangements in the helicase hexamer. Structure 7, 681–690 (1999).
43. Slupsky, C. M. et al. Structure of the Ets-1 pointed domain and mitogen-activated protein kinase phosphorylation site. Proc. Natl Acad. Sci. USA 95, 12129–12134 (1998).
44. Rhee, S., Martin, R. G., Rosner, J. L. & Davies, D. R. A novel DNA-binding motif in MarA: the first structure for an AraC family transcriptional activator. Proc. Natl Acad. Sci. USA 95, 10413–10418 (1998).
autumn books
ly enjoyable activity decline with advancing
years, as does recorded pleasure in the experience when it does occur. But perhaps willingness to participate enthusiastically in the
business of rating, on a 10-point scale, the
pleasure experienced by being tickled also
declines markedly with age.
The irritating thing about what is, in principle, an attractive scholarly enterprise, is the
sheer unevenness of the treatment. One
might almost think it an unhappy collaborative effort. After some truly dire attempts at
humour (“premature ejokulation”, “laftus
interruptus”), the reader is pleasantly surprised by stretches of good, pacey exposition
of plausible science and intriguing insights
from primatology and the study of autism.
But then Chapter 9, “Laughing Your Way to
Health”, feels like an editorial imposition, and
comes to no worthwhile conclusions at all.
The following chapter, “Ten Tips for Increasing Laughter”, solemnly advises the reader to
“stage social events” and “provide humorous
materials”. We need a neurobiologist for this
kind of advice?
What bothers me most is that a professor
who, presumably, has spent many hours on
his feet engaging the attention of eager youth
on matters scientific feels it worth proposing
that, as a public speaker evokes laughter from
an audience, “the brains of speaker and audience are locked into a dual-processing mode”
(author’s italics). Classical manuals of
rhetoric have more insight to offer. “Laughter is about relationships” — but only in the
sense that Life Is About Relationships, a
sense that does little to inform and nothing
to explain.
Humour can be very culture-specific:
recognition laughter is comprehensible only
in terms of a set of expectations and experiences, the humour of incongruity only in
terms of what would count as congruent, and
neither yields much to this analysis. As for
irony, well, that is perhaps in any case a
peculiarly British taste, and possibly one of
the great barriers to shared laughter between
nations. I came to wonder eventually just
how much the author’s sense of humour has
been sidetracked by his professional interest
in laughter. One is left with the feeling that, in
his view, laughter is funny peculiar rather
than funny ha-ha, and that putting this book
together was rather less fun than he would
like us to believe.
■
Steve Blinkhorn is at Psychometric Research
and Development Ltd, Brewmaster House,
The Maltings, St Albans AL1 3HT, UK.
The gene is dead;
long live the gene
The Century of the Gene
by Evelyn Fox Keller
Harvard University Press: 2000. 192 pp.
$22.95, £15.95
Jerry A. Coyne
Gregor Mendel’s work was rediscovered in
1900 and Wilhelm Johannsen coined the
word ‘gene’ in 1909. Since then, genetics has
progressed from T. H. Morgan’s work on the
fruitfly Drosophila to the genome projects of
today. In retrospect, it seems appropriate to
dub the twentieth century, at least in scientific terms, ‘the century of the gene’. But
despite the title of her book, Evelyn Fox
Keller disagrees.
The Century of the Gene is, in fact, a jihad
against our notion of the gene. Keller insists
that the gene is neither the stable, self-replicating entity we thought it was, nor a repository of information about development. To
Keller, ‘gene’ is simply an outmoded term, a
semantic straitjacket signifying something
that can’t be defined. Were she less constrained by publishing convention, I suspect
her book would have been called The Century of that Nebulous, Ill-Defined Entity Formerly Known as ‘The Gene’.
Keller, a philosopher and historian of
science, is best known for A Feeling for the
Organism (W. H. Freeman, 1983), her biography of the geneticist Barbara McClintock,
which was written for a general audience.
Given the high technical level of discussion,
The Century of the Gene is, however, clearly
aimed at professional biologists.
Unfortunately, the book is long on complaint and short on substance, and ultimately fails to make its case against the primacy of
the gene. Despite her repeated claims that the
recent history of genetics is replete with
“major reversals”, “serious provocations”
and “radical modifications”, the gene
emerges unscathed. Many of the alleged
problems highlighted by Keller turn out to be
semantic issues likely to be of little interest to
either working biologists or serious philosophers of science. Moreover, the level of analysis is disturbingly superficial: Keller seems
more interested in forcing genetics into the
Procrustean bed of her thesis than in presenting a balanced argument.
She claims, for example, that the idea of
the gene as a unit of structure or function is
outmoded because some bits of DNA do not
produce proteins, but instead regulate genes,
because some genes can be spliced or read in
alternative ways, and because the products of
some genes perform several functions.
Although it is true that genes are often complex, the word gene is still a perfectly good
working term for biologists, especially when
defined as a piece of DNA that is translated
into messenger RNA. Farmers are still called
farmers even though their job is far more
complex than that of their predecessors.
Keller asserts that DNA is not a ‘self-replicating’ molecule because enzymes are needed for replication. She also claims that genes
do not direct development because gene
activation depends on many different factors
(such as chromatin structure, egg cytoplasm
and local differences in the cellular environment which turn on different genes in different tissues). Again, these are pseudo-problems: replication enzymes and many inducers of development are themselves products
of genes. One might as well argue that political candidates are not self-promoting
because they hire others to do that job for
them. Certainly, non-genetic factors influence development, but ultimately we differ
from chimpanzees because of our genes, not
our environments.
The supposed non-autonomy and complexity of genes lead Keller to suggest that we
should replace a reductionist approach to
genetics with a more holistic programme
that incorporates trendy concepts such as
developmental networks and self-organization. But she does not specify how this
approach would work. In fact, history shows
clearly that the greatest triumphs of genetics
have been born of reductionism: progress
nearly always comes by first studying single
genes and then examining their interactions
with others. The remarkable advances in
understanding the developmental genetics
of Drosophila, for example, confirm the
value of reductionism in molecular biology.
An example of Keller’s one-sided treatment of more substantive issues is her discussion of ‘evolvability’. A recent buzzword
in evolutionary genetics, evolvability refers
to the idea that, in some species, natural
selection may favour traits that increase the
likelihood of future evolution. There is considerable controversy about whether and
how this could occur, but Keller ignores these
disputes. Instead, she promotes a particular
form of evolvability that, she claims, is both
ubiquitous and a radical challenge to modern Darwinian theory. She is wrong on both
counts.
Keller argues that species have evolved
ways of increasing their mutation rates to
generate genetic variation — the raw material for further evolution — and that this evolution undermines the idea that genes are
stable. Her evidence for ‘adaptive mutability’
is the observation that, in some microorganisms, various forms of environmental stress
(such as starvation, ultraviolet light or
extreme temperature) appear to activate
genetic systems that increase the mutation
rate. Although most mutations are harmful,
some may be useful, and genetic linkage
between ‘mutator genes’ and their adaptive
products may drive mutators to high frequencies. Permanently increasing the output of new variants could accelerate future
evolution. There are, however, serious problems with this argument.
Keller’s prime example of adaptive mutability is the SOS repair system, a mechanism
for DNA repair best characterized in the
bacterium Escherichia coli. When pervasive,
stress-induced damage overwhelms normal
repair mechanisms, the SOS system comes
into play. This system reverses many mutations, but in so doing introduces a few others.
Keller suggests, as do some microbiologists,
that the SOS system is an adaptation for
increasing the mutation rate under stress.
But as this system acts to repair mutations, a
more parsimonious explanation is that it
evolved simply as a second line of defence
against DNA damage and, like many adaptations, is imperfect.
Unfortunately, Keller mentions neither
this alternative explanation nor the continuing debate about the nature and meaning of
stress-induced mutability. Moreover, she
fails to note that selection for higher mutation rates via linkage does not work in sexually reproducing organisms. In such cases
mutator genes will be separated from their
adaptive products by recombination and
then eliminated by natural selection.
Finally, such inducible mutator systems
can yield an adaptive response only to factors
that impinge directly on DNA molecules. In
multicellular organisms with separate
germ cells, most forms of selection do not
work this way. The presence of lions on the
savanna does not increase the mutation rates
in gazelles.
Some individual genes, such as those encoding vertebrate antibodies, have apparently evolved
new ways of generating variation as an adaptive response to constantly changing selection. But this, as well as any selection for
inducible mutation in bacteria, can be completely explained by evolutionary genetics.
By unwarranted extrapolation from bacteria
to all organisms, Keller grossly exaggerates
the challenge of evolvability to both Darwinism and genetics.
Keller concludes that “gene talk”, the
argot of geneticists, is passé because of
“accumulating inadequacies of an existing
lexicon in the face of new experimental findings”. Gene talk persists, she says, because it is
an easy way for biologists to communicate,
and because it helps geneticists get grants and
biotechnology companies make profits. Her
remedy is to call for a new vocabulary that
incorporates concepts from engineering and
computer science. Sadly, she fails to suggest
what words or concepts we need. Although
my enthusiasm for neologisms is limited,
they can be useful, as in physicists’ distinction
between ‘mass’ and ‘weight’. But the notion
that geneticists are semantically challenged is
simply silly. There is not the slightest evidence
that future advances in genetics will be stalled
by an outmoded lexicon. What we need is
more work, not more words.
The physicist Richard Feynman, famous
for his one-liners, supposedly said that the
philosophy of science is as useful to scientists
as ornithology is to birds. His criticism is
overstated, because philosophy can give
scientists intellectual perspective on their
work. The Century of the Gene, however,
ranks as opinionated and poorly informed
ornithology. The gene is no albatross.
■
Jerry A. Coyne is in the Department of Ecology and
Evolution, University of Chicago, Chicago, Illinois
60637, USA.
With a hammer
and passion
Trilobite! Eyewitness to Evolution
by Richard Fortey
Knopf/HarperCollins: 2000. 288 pp.
$26/£15.99
Philippe Janvier
Palaeontology is one of the rare areas of science that teenagers can tackle by themselves,
especially if they live in a fossil-rich area. All
that is needed is a hammer and passion. Such
an early training certainly helps the happy
few who finally manage to become professionals, as was the case for Richard Fortey. I
empathize with him, as I too had a youthful
passion for fossils.
In this book, Fortey gives a passionate and
often lyrical account of his life with trilobites,
a group of extinct marine arthropods related
to the living horseshoe crabs and spiders,
which lived from 540 to 260 million years
ago. Trilobites are indeed fascinating. They
look a bit like large woodlice, but show an
amazing diversity of morphologies. What’s
more, their preserved anatomy is complex
enough to allow scientists to reconstruct
the group’s relationships and evolutionary
history.
In a vivid, popular style, full of didactic
metaphors and anecdotes, Fortey writes
about how his passion for trilobites arose
when he was 14, how he was trained by
his mentor Harry Whittington, how he
discovered new trilobites in the rocks of
Spitsbergen, China, Thailand and Australia,
and his life with colleagues in the small community of trilobite specialists. He also
recounts the history of trilobite research,
how early palaeontologists gradually
revealed the most intimate details of trilobite
anatomy: the amazing structure of their eyes,
and their long-elusive appendages and gills.
Fortey uses trilobites to explain how
palaeontologists work, from basic fieldwork
and identifying and describing species to far-reaching generalizations about evolution.
Seen in this way, the book is an excellent
introduction to the basic practice of
palaeontology and systematics.
Trilobites have been in at the birth of
several theories about the process of evolution. For example, there is Niles Eldredge
and Stephen Jay Gould’s ‘punctuated equilibria’, an evolutionary pattern where a
species shows a long period of stability
(equilibrium) and is then suddenly replaced
by its closest related species (punctuation).
Or there is McNamara’s ‘heterochronism’,
an evolutionary process involving shifts in
the timing of the development of certain
organs, and hence shifts in the morphology
of the entire organism. Trilobites are also
there at the Cambrian explosion — the
period, 540 million years ago, when most
major animal groups appear suddenly in the
fossil record. Fortey uses trilobite examples
concepts
The artistry of nature
Eshel Ben-Jacob and Herbert Levine
The endless array of patterns and shapes
in nature has long been a source of joy
and wonder to laymen and scientists
alike. Discovering how such patterns
emerge spontaneously from an orderless
and homogeneous environment has been a
challenge to researchers in the natural sciences throughout the ages. In 1610, the
astronomer Johannes Kepler was already
captivated by the beautiful shapes of
snowflakes, perhaps the most striking
examples of pattern in inorganic azoic
(non-living) systems. But the origins of
their six-fold beauty eluded him — Kepler
lived too early to know about atoms and
molecules. He did have the insight, though,
that the symmetry of snowflakes resulted
from an inherent power in matter, which he
dubbed the “facultas formatrix”. Kepler was
not alone in his inability to explain those
graceful forms.
Only during the past two decades have
the principles of transfer of the microscopic,
molecular information to the macroscopic
level of the visible flakes been deciphered. In
this case, we now understand how nature
chooses one pattern over the other. But
what about other pattern-forming systems?
Indeed, many diverse out-of-equilibrium
processes result in the emergence of patterns,
such as spiral waves in the Belousov–
Zhabotinsky redox reaction, Liesegang rings
in reaction–diffusion systems, Rayleigh–
Bénard convection cells in heated fluids and
disordered-branching patterns during electrochemical deposition. We believe that
underlying these disparate patterns there is a
set of overarching principles that can lend a
unified perspective to this field of study.
Recent progress towards such a perspective
hints at the possibility of obtaining radically
new insights into the even harder problem of
pattern formation in living systems.
Patterning via competition
Pure substances at equilibrium (closed
systems) usually assume a homogeneous
(patternless) state or, at most, a simple
periodic one. Back in the early 1950s, Alan
Turing understood that complex patterns
would emerge only in a system driven out
of equilibrium (open systems), where there
exists competition between various tendencies. For example, in snowflake formation the competition is between the
diffusion of water molecules towards the
flake and the microscopic dynamics of
crystal growth at the solid–vapour interface.
The diffusion kinetics tend to drive the
system towards decorated and irregular
shapes, with maximal interfacial area. The
microscopic dynamics, giving rise to surface
tension, surface kinetics and growth
anisotropy, compete with this tendency and
thereby impose characteristic length scales
and overall symmetries on the resultant
patterns. In other examples, competition
between short-range activation and longrange inhibition, or between macroscopic
heat transfer versus short-range viscous dissipation, has a corresponding role.
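The diffusion side of this competition can be isolated in a few lines of code: diffusion-limited aggregation, in which random walkers stick on first contact and no stabilizing interface dynamics act, yields the ramified branches seen in electrochemical deposition. The lattice model below is the standard textbook construction, not a model drawn from this essay.

import math
import random

def dla_cluster(n_particles=500, launch_radius=60, seed=1):
    """Diffusion-limited aggregation on a square lattice.

    Walkers are released on a circle around a seed particle and random-walk
    until they touch the cluster; with no surface tension to smooth the
    interface, growth amplifies tip instabilities into disordered branches.
    (launch_radius must stay comfortably larger than the growing cluster.)
    """
    random.seed(seed)
    cluster = {(0, 0)}
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        theta = random.uniform(0.0, 2.0 * math.pi)
        x = round(launch_radius * math.cos(theta))
        y = round(launch_radius * math.sin(theta))
        while True:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
            if x * x + y * y > (2 * launch_radius)**2:  # wandered off: relaunch
                theta = random.uniform(0.0, 2.0 * math.pi)
                x = round(launch_radius * math.cos(theta))
                y = round(launch_radius * math.sin(theta))
            if any((x + sx, y + sy) in cluster for sx, sy in steps):
                cluster.add((x, y))
                break
    return cluster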
Micro–macro singular interplay
The aforementioned competition often
gives rise to a two-way transfer of information between the microscopic and macroscopic scales. This is most obvious in the
snowflake, in which the six-fold symmetry
of the underlying lattice is manifest in the
dendritic branches of the flake on the
macroscopic (observed) level. At present,
we understand that whenever the microscopic dynamics act as a singular perturbation (stabilizing competitor), details at the
microstructural scale, such as preferred
growth directions, will be amplified in
effect by the macroscopic process. Chirality,
the difference between left- and right-handed shapes, can act in a similar manner. By
the same token, the macroscopic dynamics
can reach down and affect the microstructure; changing the macroscopic conditions
can force the small-scale structure to
change, by favouring a particular growth
mode over other possibilities. In other
words, the macro-level and the micro-level
organization must be determined in a self-consistent manner.
Patterns in non-living systems. Photos of ‘captured’ real snowflakes, taken by Wilson A. Bentley
(Jericho Historical Society). Note the high level of (six-fold) symmetry together with the complexity of
the patterns. Inset, ‘metal leaves’ produced during the electrochemical deposition of ZnSO4; picture
taken using an electron microscope (magnification 2400). Both pictures show dendritic patterns and
demonstrate that similar patterns can be seen in different systems and over very different length scales.
Morphology diagrams
The micro–macro interplay varies as the
control parameters are changed. Because of
the large degree of cooperativity in the pattern-forming process, we expect in general
that there will be sharp transitions between
different ‘essential’ shapes. Each of these
shapes, or morphologies, represents a different
balance between the various competing
tendencies leading to the formation of the
pattern. Lending support to this notion is the
well-studied example of diffusion-controlled
growth. The same morphologies appear
repeatedly in different systems exhibiting this
underlying pattern-forming competition,
with length scales ranging from micrometres
to metres. This perspective brings to mind
the idea of a morphology diagram, by
analogy with a phase diagram for systems
in equilibrium. In equilibrium, for given
conditions, the phase that minimizes the
free energy is selected and observed. The
existence of an equivalent principle for
dynamic non-equilibrium (open) systems is
the most profound unsolved question in the
study of pattern formation.

Complex patterns exhibited during colonial
cooperative self-organization of Paenibacillus
dendritiformis (top) and Paenibacillus vortex
(bottom) show chiral asymmetry (all the
branches have a twist with the same handedness).
This observed chirality on the macroscopic
(colonial) level results (via singular interplay)
from the chirality of the flagella of the bacteria
(the micro level). P. vortex shows organization of
vortices (dots) composed of thousands of
bacteria, which all circulate around a common
centre. The delicate balance between order and
chaos persists over many length scales, lending an
unmistakable aesthetic quality to these images.
The power of cooperation
Among non-equilibrium systems, living
organisms are the most challenging ones
scientists can study. Although pattern formation
exists throughout the biological
world, cooperative microbial behaviour
seems a natural choice of a starting point
to apply the lessons learned from azoic
systems to living ones. Bacteria are the
simplest organisms, yet a wealth of beautiful patterns are formed during colonial
development of various bacterial strains.
Some of the observed spatio-temporal patterns are reminiscent of those observed in
non-living systems. Others exhibit an even
richer behaviour, reflecting the additional
layers of complexity involved in colonial
development.
As in non-living systems, patterns
emerge from the singular interplay between
the micro level (the individual cell) and the
macro level (the colony). That is, there must
be an internal consistency between the
microscopic interactions brought about by
single-cell behaviour and the overall macroscopic organization of the colony as a whole.
The building blocks of the colonies are themselves living systems, each having its own
autonomous self-interest and internal
degrees of freedom. At the same time, efficient adaptation of the colony to adverse
growth conditions requires self-organization
on higher levels — function follows form —
and this can be achieved only via cooperative
behaviour by the individual cells.
Thus, bacteria have developed sophisticated cooperative behaviour and intricate
communication capabilities, including:
direct cell-to-cell physical interactions via
membrane-bound polymers; the collective
production of extracellular ‘wetting’ fluid
for movement on hard surfaces; long-range
chemical signalling, such as quorum sensing and chemotactic signalling; the collective activation and deactivation of genes; and
even the exchange of genetic material.
The communication capabilities enable
each bacterial cell to be both an actor and a
spectator (using Niels Bohr’s expression)
during the complex patterning. The single-cell dynamics determine the macroscopic
pattern even as they are themselves shaped by that selfsame pattern. For researchers in the pattern-formation field, the communication, regulation and control mechanisms that ultimately
control the observable morphologies offer a
modelling challenge that far surpasses that
considered to date within the context of non-living processes. It should be evident to
microbiologists that colonies have sophisticated capabilities for coping with hostile
environmental conditions, capabilities that
cannot be studied by focusing exclusively on
the behaviour of the single bacterium.
Clues about complexity
Understanding pattern formation is intimately related to understanding the notion
of complexity in open systems. Complexity
is an oft-used word that still lacks any precise definition. Structural complexity might
refer to patterns with repeating yet variable
units; in this sense, completely disordered
structures are as simple as perfectly repeating ones. Functional complexity might
be related to systems whose dynamic properties are not simply explained by detailed
understanding of the constituent parts,
perhaps because of feedback from the
macro level. Unfortunately, neither of these
intuitive notions has led to an objective
operational measure.
An essential question in complex systems
is the extent to which one can formulate
theories that permit sensible predictions of
the macroscopic behaviour of open systems
without having to simulate in mind-numbing detail all the microscopic degrees of
freedom. In physical systems in equilibrium,
we are typically confronted with this
question in the context of a two-level
micro–macro interplay. We deal with this
via the introduction of the entropy as an
additional variable on the macro level. The
entropy is a measure of (the logarithm of)
the number of possible microscopic states
for a given macro state of the system.
Hence, it can be viewed as either our lack of
information about the micro-level (looking
from the macro level) or as the freedom in
the microdynamics for given imposed
macroscopic conditions (looking from the
micro level).
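In symbols, this is Boltzmann's relation (our notation; W is the number of microstates compatible with a given macrostate):

```latex
% Entropy as (the logarithm of) the number of accessible microstates;
% k_B is Boltzmann's constant.
S = k_{\mathrm{B}} \ln W
```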
Might complexity, properly defined,
replace entropy as a fundamental property
of macroscopic open systems? It is certainly
intriguing that such systems tend to evolve
in the direction of increased complexity
as they are driven further from equilibrium.
Future work on patterns, especially in
living organisms, will no doubt offer some
needed clues.
■
Eshel Ben-Jacob is in the School of Physics
and Astronomy, Tel Aviv University,
69978 Tel Aviv, Israel.
Herbert Levine is in the Department of Physics,
University of California, San Diego, La Jolla,
California 92093, USA.
FURTHER READING
Ball, P. The Self-made Tapestry: Pattern Formation in
Nature (Oxford Univ. Press, 1999).
Kessler, D. A., Koplik, J. & Levine, H. Pattern selection
in fingered-growth phenomena. Adv. Phys. 37, 255
(1988).
Ben-Jacob, E. & Garik, P. The formation of patterns
in non-equilibrium growth. Nature 343, 523–530
(1990).
Ben-Jacob, E. & Levine, H. The artistry of
microorganisms. Sci. Am. 279(4), 82–87 (1998).
Ben-Jacob, E., Cohen, I. & Levine, H. The cooperative
self-organization of microorganisms. Adv. Phys. 49,
395–554 (2000).
Correction: In the Millennium Essay “A cellular cornucopia”
(Nature 408, 773; 2000), the lizard was mistakenly cited in place of
the newt in the context of limb regeneration.
questions such as “what day of the week was 18 April 1720?”. Hermelin says that they use the rules and regularities of the calendar, and that musical savants extract the “grammar” of music.

So Hermelin believes that savants apply the same rule-based strategies as do trained people of normal intelligence. How do they learn these strategies? She believes that the rules of linear perspective used in drawing are extracted from posters and illustrations. As for the other skills, she believes savants advance from a focus on specific details (say, numbers) to the whole picture (say, the Eratosthenes algorithm).

But savant skills can emerge suddenly after a person is hit on the head, so it seems possible that these skills are in us all without training, but cannot normally be accessed. Recent evidence suggests that they might even be switched on by using magnetic pulses to switch off part of the brain, as our work had indicated.

Hermelin’s is a highly readable book. She goes well beyond merely presenting a scientific account. Rather, she conveys something about who these people really are. She weaves a tapestry of their personal lives, especially their difficulties in confronting life as we normally know it. The book works well at all levels.

Anyone who has interacted with autistic individuals will appreciate the magnitude of Hermelin’s contribution. Her findings are a giant step forward in unravelling the treasures of our minds. To extract the core explanation for savant skills, it might be necessary to test savant prodigies when their skill first emerges because, with maturity, autistic savants often acquire concepts and knowledge which inevitably become incorporated into their skill base. Such research remains a herculean task for future investigators.
■
Allan Snyder is at the Centre for the Mind, Australian National University, Canberra, ACT 0200, and University of Sydney, Main Quadrangle, Sydney, New South Wales 2006, Australia.

Spandrels or selection?

The Evolutionists: The Struggle for Darwin’s Soul
by Richard Morris
W. H. Freeman: 2001. 272 pp. $22.95, £18.99

Dawkins vs. Gould: Survival of the Fittest
by Kim Sterelny
Icon: 2001. 160 pp. £5.99, $9.95 (pbk)

Michael A. Goldman

The spandrels of San Marco’s basilica, symbols of a long-running debate on evolution.

Nature or nurture? Chance or necessity? These dichotomies embody a controversy that has raged among the top thinkers in evolutionary biology. The question is: does adaptation by natural selection explain everything in nature, including human
behaviour, or is the situation more complicated? The problem is that no one really
believes the first proposition, but the second
does not constitute a useful scientific
hypothesis. And except as the impetus for a
spate of books and articles, and lots of acrimonious debate, it may not matter much.
The contemporary debate started in
1979, when Stephen Jay Gould and Richard
C. Lewontin published an article entitled
“The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme”. This became the focus
for the conflict between two lines of evolutionary thought. On one side are Richard
Dawkins and like-minded evolutionary
biologists, who believe that natural selection
is adequate to explain virtually every observation in evolutionary biology. On the other
are Gould and his followers, who believe that
natural selection is a very important force in
evolution, but not the only one. The most
heated controversy arises when we attempt
to apply our knowledge of evolutionary
biology to the origin of human behaviour.
In The Evolutionists and Dawkins vs.
Gould, Richard Morris and Kim Sterelny,
respectively, recount this controversy in
excruciating detail. Sterelny gets almost to
the heart of the matter, and Morris’s engaging style makes the history, politics and
political motivations fun to read. Unfortunately, neither author really brings us any
closer to a resolution, and neither really
explains why the controversy may never
be resolved.
Both try to dissect the argument into its
component parts. They agree that Gould
departs from “Darwinian fundamentalists”
in his belief that evolution occurs by periods
of stasis followed by periods of rapid evolution (“punctuated equilibria” or, as his
detractors quip, a theory of “evolution by
jerks”). Palaeontologist Gould sees evidence
for rapid transitions, catastrophic extinctions and spectacular radiations in the fossil
record, and thinks that a model of slow,
steady change by natural selection acting on
genetic variation is not adequate to explain
history. In particular, Gould’s notion of contingency in evolution may be important in
understanding the origin of new species and
higher taxa, and aspects of the broad pattern
of evolutionary history that have never
been fully explained by the neodarwinian
synthesis.
Another area of disagreement concerns
Gould and Lewontin’s concept of ‘spandrels’
in evolution. Named after an architectural
feature that is a by-product of the construction, evolutionary spandrels are biological
structures or traits that are accidental byproducts of history, not the results of
natural selection. However, natural selection
can clearly mould a spandrel into a useful
structure. Spandrels, Morris and Sterelny
agree, don’t much change our understanding of anatomical evolution. But the issue
becomes very heated where sociobiology or
evolutionary psychology are concerned.
Gould believes that many human behavioural traits are spandrels — by-products of the
brain we evolved in the African savannah; the
ability to read Nature is a spandrel, not a
product of natural selection. The Dawkins
party tends to think of the brain as a collection of traits moulded by natural selection.
Morris gives an elaborate recipe, and even
some preliminary data, for deciding between
these two views by examining whether the
brain is composed of isolated functions or
parts or is an interacting whole. But anyone
who thinks a brain is composed of interacting parts, whereas a body is not, has never
suffered a stiff neck as a result of limping with
a sprained ankle, yet no one is arguing that
ankles and necks aren’t largely products of
natural selection.
Morris devotes a chapter to complexity
theory, providing a lucid and enlightening
explanation. Complexity scientist Stuart
Kauffman “believes that, although natural
selection is important, it is not the sole
cause of evolutionary change”; evolution, in this view, is a
“marriage between self-organization and
selection”. Complexity science indicates that
there are “emergent properties” that could
not be predicted by a reductionist approach,
a view that pleases Gould.
Why is it so important that we know
whether human behavioural traits are spandrels, and whether human brains have emergent properties? The answer lies more in politics and philosophy than it does in science.
Sterelny explains most of the conflict
between Dawkins and Gould in terms of
two distinct ideologies. “In short,” Sterelny
contends, “Dawkins, but not Gould, thinks
of science as a unique standard-bearer of
enlightenment and rationality.” Dawkins
views the entire world in reductionist terms,
and is dedicated to the scientific method as
the only valid mode of analysis. Gould, as
he has written elsewhere, sees science and
religion as “non-overlapping magisteria”, as
spheres in which different sorts of reasoning
apply. He is probably right. But he also views
some of science itself as outside the realm of
investigation. Dawkins thinks that modern
evolutionary theory provides a good model
for the exposition of a natural system of
morality, whereas Gould insists that morality is beyond the realm of science.
Morris and Sterelny both miss the opportunity to give us a bottom line on this argument. Gould, Lewontin and their followers
believe that we should not take the application of evolutionary theory and genetics to
human behaviour seriously, for otherwise
we will see a resurgence of eugenics reminiscent of the Holocaust. Their fears may be
correct. But no data on brain physiology,
no studies on parallel evolution or rapid
speciation, and no computer modelling
of complex systems will ever change such
perceptions.
These two slim and readable books target a
well-defined problem in evolutionary theory.
Sterelny could have accomplished more with
an index, and both books could have profited
from a thorough and organized bibliography.
I found it easier to dig up my 20-year-old
photocopy of the “Spandrels” article than to
find a complete reference to it in either book.
Both authors promise an unbiased summary
of the arguments, but both come down predominantly in favour of Dawkins’ perspective. Morris and Sterelny are on the cusp of an
insightful analysis but never quite get to it. But
the aficionado of evolutionary theory and the
intense debate it engenders would do well
to read both accounts. Whereas Morris stresses the divergent approaches of complexity
and reductionism, Sterelny emphasizes
other issues, such as common descent (or
cladistics), which concerns Dawkins, and
morphological similarity, which, to Gould, is
of paramount importance. Longer, and with
an elementary introduction to evolutionary
science, Morris’s book provides more of a
stand-alone account and is suited to the
non-specialist.
We have created an icon in Darwin, a god
whose every printed word is canon. But Darwin knew that not everything he said would
stand the test of time and new data in every
detail. Darwin would be puzzled over the
struggle for his soul, because the soul, like
science, derives its strength not from rigidity
but from fluidity. While some of today’s
most brilliant thinkers grope for the soul of
Darwin, it is fortunate that so many experimental evolutionary biologists have decided
not to wait for the resolution.
■
Michael A. Goldman is in the Department of
Biology, San Francisco State University,
San Francisco, California 94132-1722, USA.
Preaching to the
chemical converts
Stories of the Invisible:
A Guided Tour of Molecules
by Philip Ball
Oxford University Press: 2001. 195 pp.
£11.99 (pbk)
John Emsley

Science Year has been launched in the United Kingdom. Hallelujah! But will it lead to a revival in interest in chemistry? The irony is that, although the public accepts the tangible benefits that chemistry delivers, it shows little desire to understand how it works its magic, or to encourage the young to join the faith — a bit like religion, perhaps. In a society of chemical agnostics, it is a brave missionary who tries to reveal its mysteries, but that is what the author of Stories of the Invisible has attempted to do — and done remarkably well.

Philip Ball has taken upon himself the task of explaining to the layperson the theories that determine the behaviour of molecules — not just simple molecules, such as water and alcohol, but the complex array of molecules that make up the living cell and those that affect human behaviour. Ball is the right person to write this gospel, and it joins a canon of his successful popular works, the last one of which was the widely acclaimed H₂O: A Biography of Water.

Ball knows how to grab people’s attention — witness his popular bangs-and-flashes lectures — but a book is different. Can he hold the reader’s attention while he explains the intricacies of covalent bonding, stereochemistry, entropy, polypeptides and neurotransmitters? Maybe — the reward for hacking through the thicket of theory is reaching the tree of knowledge, and the attentive reader will manage this feat.

Ball begins with the analogy of letters combining to make words to explain how atoms combine to make molecules. This analogy cannot be stretched too far, but he has found one of the best ways of introducing the concepts of isomerism to a non-chemical audience. The book then moves on to subjects that are bound to capture the reader’s attention: what is life? What makes it possible? What keeps a cell alive? How is a cell controlled and how does it store and use information?

These topics are clearly explained, but the book is not solely devoted to explaining the chemistry of living cells and organisms — it also deals with more conventional areas of chemistry. For example, the chapter on energy has a section on explosives, the chapter on organized molecular motion touches on nanotechnology, and that on molecular messengers covers the different types of painkiller and how they work. At no point does Stories of

Molecular necessity: model of a water molecule.
Will the real Golgi please stand up
It was discovered more than a century ago, but cell biologists are still
debating whether the Golgi complex is an autonomous entity.
Erika Check profiles an organelle in identity crisis.
Ever since the Golgi complex was first
described in 1898, this embattled cellular organelle has struggled to secure
an identity for itself. Using his Nobelwinning method of staining cells with silver
salts, the Italian biologist Camillo Golgi
spotted that neurons contain a stack of flattened, membrane-bound sacs. But although
the same structure is found in all nucleated
cells, from humans to amoebae, naysayers
argued for decades that it was merely an
artefact of Golgi’s staining technique.
In the 1950s the electron microscope
finally proved that the Golgi was no artefact.
And the organelle gained further legitimacy
when researchers revealed its crucial function: processing and packaging proteins for
export from the cell.
But now, the Golgi’s integrity is again in
doubt. At issue is the question of whether it is
a truly independent organelle, persisting
through cell division in skeletal form and
being rebuilt from this template. One camp
of cell biologists, led by Graham Warren at
Yale University in New Haven, Connecticut,
subscribes to this view. But others, championed by Jennifer Lippincott-Schwartz at the
National Institute of Child Health and
Human Development (NICHD) in
Bethesda, Maryland, argue that
the Golgi is just a fleeting aggregation of proteins and lipid
membrane that constantly
assembles and disassembles.
Dynamic cells
The Golgi question plays into a
wider debate about how cells are
built. The classic view is that cells
make new parts by assembling them
on static frameworks, rather as a skyscraper is built around a steel scaffold. But
some biologists argue that many intracellular structures are constantly forming and
disappearing in a flexible process that does
not depend on underlying templates — like
the dynamic equilibrium between condensation and evaporation in a billowing cloud.
“This idea of dynamic self-organization is
becoming more and more popular in many
areas of cell biology,” says Ben Glick, who
studies the Golgi complex at the University
of Chicago.
The Golgi’s current identity crisis
stems from the late 1980s, when Lippincott-Schwartz was working in Richard Klausner’s lab at the NICHD as a postdoc. She gave cells a shot of brefeldin A, an antibiotic from fungi that blocks the transport of proteins from the endoplasmic reticulum (ER) to the Golgi. The ER is another complex of intracellular membranes; it receives proteins directly from protein-building ribosomes attached to its surface.

Graham Warren thinks that ‘matrix’ proteins (stained green) persist throughout cell division and act as templates for the Golgi’s reassembly.
When transport from the ER to the Golgi
was blocked, the Golgi disintegrated, and
proteins associated with its membranes
were rapidly redistributed to the ER. And
when Lippincott-Schwartz removed the
antibiotic, the Golgi reappeared1.
To Lippincott-Schwartz and her colleagues, this indicated that the Golgi is
constantly recycled to and from the ER in a
dynamic equilibrium2. “This was the first
crack in the theory that the Golgi is a stable,
pre-existing entity,” she says.
More recently, her group studied what
happens to the Golgi during cell division,
when its structure temporarily breaks down.
By labelling Golgi proteins with green
fluorescent protein, the researchers showed
that these proteins fled to the ER. After cell
division was complete, the ER spat out the
Golgi proteins, and the structure rebuilt
itself. But rebuilding could be prevented by
expressing a mutant version of a gene called
Sar1, which — like brefeldin A — blocks the
transport of proteins from the ER3.
In the meantime, Warren and his colleagues had been looking more closely at cells
disturbed by brefeldin A. Like Lippincott-Schwartz’s team, they found that a dose of the
antibiotic caused Golgi proteins to migrate to
the ER. But not all of them. While working at
the Imperial Cancer Research Fund’s laboratories in London in the mid-1990s, Warren
showed that one protein, called GM130, stayed behind4. “This gave us the idea that such proteins might be the structure underlying the framework of the Golgi,” says Warren. “Taking an extreme view, you can argue that this is the Golgi apparatus itself.”

Although GM130 and other matrix proteins disperse through the cytoplasm when cells are dosed with brefeldin A, Warren has found that they still form Golgi-like structures in cells expressing a mutant Sar1 gene, in which other Golgi proteins — the enzymes that process proteins destined for export from the cell — become confined to the ER5.

Tracking the template
In February this year, Warren and his colleagues reported on their efforts to track the matrix proteins during cell division. Again using brefeldin A or mutant Sar1 to confine Golgi enzymes to the ER, they showed that matrix proteins were partitioned into the daughter cells in a manner reminiscent of the entire organelle. Using microscopic magnetic beads labelled with a fluorescent antibody that captures GP130, the researchers also found that this matrix protein remained distinct from the ER6.

This convinces Warren that matrix proteins are the template from which the Golgi is rebuilt after cell division. “Our take on this is that the Golgi is an independent organelle responsible for its own partitioning and the endoplasmic reticulum doesn’t play a part in this,” he says.

Lippincott-Schwartz interprets the results differently. She accepts that the matrix proteins stay separate from the main body of the ER in cells treated with brefeldin A. But she argues that the matrix proteins travel to a specialized part of the ER called its ‘exit site’, from which membranes pinch off and carry their cargo of proteins to the Golgi. It is this portion of the ER, Lippincott-Schwartz believes, that directs the Golgi’s reassembly.

She bases this claim on studies of dividing cells treated with brefeldin A, in which her team labelled the ER exit sites and matrix proteins using different fluorescent dyes. The two labels overlapped, and the labelled matrix proteins moved away from the Golgi before the other proteins in the organelle. This contradicts the idea that the matrix proteins stay behind after the Golgi unravels, Lippincott-Schwartz argues. Her team has also used a different mutant of Sar1 that prevents the production of ER exit sites, and found that the Golgi completely fails to reform after treatment with brefeldin A7.

Mixing of Golgi (green) and endoplasmic reticulum (red) before a cell divides makes Jennifer Lippincott-Schwartz question the Golgi’s autonomy.

For now, the debate over whether the Golgi is an autonomous entity remains unresolved. Although Lippincott-Schwartz’s results suggest that the organelle is much more mutable than was once thought, other cell biologists say it is possible that we just do not know enough about the Golgi to be sure about what is going on when it reassembles. Perhaps the matrix proteins that direct the process are yet to be discovered, they suggest. “People are trying to define the Golgi based on four or five markers,” observes Vivek Malhotra of the University of California, San Diego. “What if there is a complex of proteins we don’t know about, and that is serving as a sort of nucleation site?”
Self-sufficiency
In addition, say some experts, we need to
find out more about what happens to the
Golgi during and after cell division, in the
absence of any experimental disruption. If
the Golgi reforms very quickly after cells
divide, it would support the idea that it is
being rebuilt from a residual template,
rather than being recycled from the ER and
assembling completely from scratch.
But the idea of dynamic self-organization
has attractions that extend well beyond the
Golgi. The reassembly of the nucleus from
pre-existing skeletal structures might be
the exception rather than the rule. If cellular
structures were mostly assembled through
dynamic and self-organizing protein interactions, it would help explain cells’ tremendous flexibility in responding to changes in
their environment.
“Self-organization makes a lot of sense
when you think about what a cell has to
do,” says Tom Misteli, a cell biologist at the
National Cancer Institute in Bethesda. “It
allows a cell to be very stable, but on the
other hand something terrible can happen to
a cell at any moment, and it has to be able to
respond to that.”
Seen in this light, the Golgi’s identity
crisis seems less neurotic than noble. Far
from being an isolated search for legitimacy,
its resolution might provide fundamental
insights into the way cells are built.
■
Erika Check is Nature’s Washington biomedical
correspondent.
1. Lippincott-Schwartz, J., Yuan, L. C., Bonifacino, J. S. & Klausner,
R. D. Cell 56, 801–813 (1989).
2. Klausner, R. D., Donaldson, R. G. & Lippincott-Schwartz, J.
J. Cell Biol. 116, 1071–1081 (1992).
3. Zaal, K. J. M. et al. Cell 99, 589–601 (1999).
4. Nakamura, N. et al. J. Cell Biol. 131, 1715–1726 (1995).
5. Seemann, J., Jokitalo, E., Pypaert, M. & Warren, G. Nature 407,
1022–1026 (2000).
6. Seemann, J., Pypaert, M., Taguchi, T., Malsam, J. & Warren, G.
Science 295, 848–851 (2002).
7. Ward, T. H., Polishchuk, R., Caplan, S., Hirschberg, K. &
Lippincott-Schwartz, J. J. Cell Biol. 155, 557–570 (2001).
The bigger picture
Tamas Vicsek
If a concept is not well defined, it can be
abused. This is particularly true of complexity, an inherently interdisciplinary
concept that has penetrated a range of
intellectual fields from physics to linguistics,
but with no underlying, unified theory.
Complexity has become a popular buzzword
that is used in the hope of gaining attention
or funding — institutes and research networks associated with complex systems grow
like mushrooms.
Why and how did this vague notion
become such a central motif in modern science? Is it only a fashion, a kind of sociological
phenomenon, or is it a sign of a changing paradigm of our perception of the laws of nature
and of the approaches required to understand them? Because almost every real system
is inherently complicated, to say that a system
is complex is almost an empty statement —
couldn’t an Institute for Complex Systems
just as well be called an Institute for Almost
Everything? Despite these valid concerns, the
world is indeed made of many highly interconnected parts on many scales, the interactions of which result in a complex behaviour
that requires separate interpretations of each
level. This realization forces us to appreciate
the fact that new features emerge as one
moves from one scale to another, so it follows
that the science of complexity is about revealing the principles that govern the ways in
which these new properties appear.
In the past, mankind has learned to
understand reality through simplification
and analysis. Some important simple systems are successful idealizations or primitive
models of particular real situations — for
example, a perfect sphere rolling down an
absolutely smooth slope in a vacuum. This is
the world of newtonian mechanics, and it
ignores a huge number of other, simultaneously acting factors. Although it might
sometimes not matter that details such as the
motions of the billions of atoms dancing
inside the sphere’s material are ignored, in
other cases reductionism may lead to incorrect conclusions. In complex systems,
we accept that processes that occur simultaneously on different scales or levels are
important, and the intricate behaviour of the
whole system depends on its units in a nontrivial way. Here, the description of the entire
system’s behaviour requires a qualitatively
new theory, because the laws that describe its
behaviour are qualitatively different from
those that govern its individual units.
Take, for example, turbulent flows and
the brain. Clearly, these are very different
systems, but they share a few remarkable features, including the impossibility of predicting the rich behaviour of the whole by merely
extrapolating from the behaviour of its units.
Who can tell, from studying a tiny drop or
a single neuron, what laws describe the
intricate flow patterns in turbulence or the
patterns of electrical activity produced by the
brain? Moreover, in both of these systems
(and in many others), randomness and
determinism are both relevant to the system’s overall behaviour. Such systems exist
on the edge of chaos — they may exhibit
almost regular behaviour, but also can
change dramatically and stochastically in
time and/or space as a result of small changes
in conditions. This seems to be a general
property of systems that are capable of producing interesting (complex) behaviour.
Knowledge of the physics of elementary
particles is therefore useless for interpreting
behaviour on larger scales. Each new level
or scale is characterized by new, emergent
laws that govern it. When creating life,
nature acknowledged the existence of these
levels by spontaneously separating them
into molecules, macromolecules, cells, organisms, species and societies. The big question
is whether there is a unified theory for the
ways in which elements of a system organize
themselves to produce a behaviour that is
typical of large classes of systems.
Interesting principles have been proposed
in an attempt to provide such a unified theory.
These include self-organization, simultaneous existence of many degrees of freedom,
self-adaptation, rugged energy landscapes,
and scaling (for example, power-law dependence) of the parameters and the underlying
network of connections. Physicists are learning how to build relatively simple models
that can produce complicated behaviour,
whereas those who work on inherently very
complex systems (such as biologists and
economists) are uncovering ways to interpret their subjects in terms of interacting,
well-defined units (such as proteins).
What we are witnessing in this context is
a change of paradigm in attempts to understand our world as we realize that the laws
of the whole cannot be deduced by digging
deeper into the details. In a way, this change
has been wrought by the development
of instruments. Traditionally, improved
microscopes or bigger telescopes are built to
gain a better understanding of particular
problems. But computers have allowed new
ways of learning. By directly modelling a
system made of many units, one can observe,
manipulate and understand the behaviour of
the whole system much better than before, as
in the cases of networks of model neurons
and virtual auctions by intelligent agents, for
example. In this sense, a computer is a tool
that improves not our sight (as does the microscope or telescope), but rather our insight
into mechanisms within complex systems.
Many scientists implicitly assume that
we understand a particular phenomenon if
we have a (computer) model that provides
results that are consistent with observations
and that makes correct predictions. Yet such
models make it possible to simulate systems
that are far more complex than the simplest
newtonian ones that allow deterministic,
accurate predictions of future events. In contrast, models of complex systems frequently
result in a new conceptual interpretation of
the behaviour. The aim is to capture the principal laws behind the exciting variety of new
phenomena that become apparent when the
many units of a complex system interact. ■
Tamas Vicsek is in the Department of Biological
Physics, Eötvös University, Budapest,
Pázmány Stny 1A, H-1117 Hungary.
FURTHER READING
Waldrop, M. M. Complexity (Simon & Schuster,
New York, 1993).
Gell-Mann, M. Europhys. News 33, 17 (2002).
www.comdig.org
Nature Insight: Complex Systems. Nature 410, 241–284 (2001).
Development
Weaving life’s pattern
Melvin Konner
Psychologists like to stress that what
happens in early life — what zoologists
call the juvenile phase — is not just
growth, but development. The implication
is twofold. First, ‘growth’ suggests mere
augmentation, either through increasing
cell size (hypertrophy) or successive mitotic
divisions (hyperplasia). But a system such
as the brain could not emerge so simply.
Second, ‘growth’ implies an autonomous
process, governed from within, and (given
minimal input such as oxygen and nutrients)
under fairly tight genetic control. An
alternative term, maturation, suggests that
the transformations of early life transcend
hypertrophy and hyperplasia, yet still follow
a preset programme. But this neglects to
consider the environment’s shaping role,
which, in the nervous system at least,
includes learning.
So development is not just more than
growth — it is more than maturation,
requiring constant negotiation with the
environment. Sometimes this truth has led
to a refusal to try to tease out the different
roles of maturation and learning. In Jean
Piaget’s theory of mental development, for
example, the contributions of learning and
of a tacitly assumed preset programme are
deliberately obscured. In another model,
really a metaphor, maturation and learning
are viewed as the warp and woof — one blue,
one yellow — that give a swatch of cloth a
green colour. The claim is that attempting to
separate the two contributions destroys the
unique product of their interaction.
Of course, a thicker, denser blue warp
makes the cloth a bluer green. These features of the warp, not to mention the design and technique of weaving, help to explain the outcome.

Two of a kind? From before birth, chance and the environment conspire to make twins differ.

In the 1950s, the prescient psychologist Anne Anastasi saw that the real
question is not “which?” or “how much?”,
but “how?” Advances in genetics and brain
science now leave us in no doubt that we can
answer all three. But how do we address the
“how” question?
In embryology, development always
entails interaction, although the interactions
often take place inside the organism. In the
classic account, the dimpling of the vertebrate eye from a blob-like to a cup-like shape
— the formation of the retina — occurs in
response to a chemical signal from the overlying ectoderm. Soon after, the lens is formed
from ectoderm when the brand-new eye cup
sends back another signal.
Later, as neurons form and migrate
around the brain, they are attracted, rerouted and stopped by molecular signals.
Many of these come from other cells that
guide or challenge the migrators in a kind
of immunochemistry. Recognition of cells
and surfaces, and ultimately adhesion to
them, determine the fates of neurons, and
subsequently those of their extensions.
These patterns become the wiring plan of
the brain.
But this line of thinking comes up against
Changeux’s paradox: how do 30,000 human
genes determine 10¹¹ cells with 10¹⁵ connections? Obviously they can’t do it in the same
way that the roundworm’s 18,000 genes govern its 959 cells. There are several solutions.
First, pioneer cells and axons pave the way
for thousands or tens of thousands of others
to track their guidance, offering lots of
hook-ups for the price of one. Second, the
mammalian brain forms many more cells
and connections than it needs, subsequently
pruning back around half of them. Some of
this occurs through programmed cell death,
but much depends on activity — meaning
that spontaneous and reactive fetal movements shape the brain. Third, small groups
of neurons may form under strict genetic
control — creating small, deterministically
wired systems similar to the roundworm’s
brain — and then compete for incoming
stimulation and outgoing actions.
These processes have been called darwinian, but this is only a partial analogy. The
cells of the embryo are genetically identical,
and they produce no offspring, thus undermining two pillars of Darwin’s theory —
variation and inheritance. Still, the processes
involve competition, which is resolved by
environmental, adaptive selection. And the
cells are not quite genetically identical — the
same set of genes is always there, but only
some are switched on. Which switch on and
which off in any given cell — and when,
and how, and why — determine the cell’s
character and function. A main key to development is this on–off pattern, a pulsing,
embryo-wide light show that turns genetic
instructions into animals.
Elucidating the control of these switches
— by signals inside the cell, beyond it, or
even outside the body — is the main task of
biology in the twenty-first century. And the
switches are not flipped just in early life
— genes that confer Huntington’s and
Alzheimer’s diseases are switched on decades
after the die is cast. But of course, in a complex animal, much is left to chance. Chaos in
the formal sense — exquisite sensitivity to
variations in starting conditions — cumulatively amplifies small differences. This
embryonic butterfly effect gives identical
twins different brains within weeks of conception. Such unpredictable paths help to
explain why twins differ before we even
consider their environmental influences.
Less certain is the role of emergence in
development, but if self-organizing processes occur in non-living solutions, why
not in a minuscule protoplasmic pool or an
early, inchoate blob of cells? In computer
models of embryos, self-organization looks
to be adequate for certain tasks. We need to
learn more about these less deterministic
routes to life’s complexity.
One thing is certain. The sequencing of
the genome will soon look like the easiest
thing that biologists ever did. And what
sequencers euphemistically call “annotation” and the rest of us call development —
what the genes actually do — constitutes the
real code of living systems. To crack that code
will take centuries, but getting there will be
more than half the fun.
■
Melvin Konner teaches at Emory University, Atlanta,
Georgia, USA. He is the author of the completely
revised edition of The Tangled Wing: Biological
Constraints on the Human Spirit.
FURTHER READING
Anastasi, A. Psychol. Rev. 65, 197–208 (1958).
Changeux, J.-P. Neuronal Man: The Biology of Mind
(trans. Garey, L.; Princeton Univ. Press, 1997).
Edelman, G. M. Neural Darwinism: The Theory of
Neuronal Group Selection (Basic, New York, 1987).
Wolpert, L. The Triumph of the Embryo (Oxford Univ.
Press, 1991).
All together now

From wobbly bridges to new speech-recognition systems, the concept of synchrony seems to pervade our world. Steve Nadis reports on attempts to understand it, and the applications that may be on the horizon.

Cycling club: synchronizing systems in both natural and technological settings. Left to right: pedestrians make London’s Millennium Bridge wobble; crickets and fireflies synchronize their chirps and flashes; an audience claps in sync; and the electric currents through Josephson junctions oscillate as one.
Steven Strogatz’s curriculum vitae is
more eclectic than most. He has
investigated how crickets come to
chirp in harmony, and why applauding
audiences spontaneously clap in unison.
The theme behind such studies — the way
in which systems of multiple units achieve
synchrony — is so common that it has kept
him busy for over two decades. “Synchrony,” says Strogatz, a mathematician at
Cornell University in Ithaca, New York, “is
one of the most pervasive phenomena in
the Universe.”
When a mysterious wobble forced engineers to close London’s Millennium Bridge
shortly after it opened in 2000, for example, an
unforeseen synchronizing effect was responsible: walkers were responding to slight movements in the bridge and inadvertently adjusting their strides so that they marched in time.
But synchrony can provide benefits too:
researchers working on new radio transmitters and drug-delivery systems are harnessing
the phenomenon to impressive effect. “It
occurs on subatomic to cosmic scales and at
frequencies that range from billions of oscillations per second to one cycle in a million
years,” says Strogatz. “It’s a way of looking at the
world that reveals some amazing similarities.”
The study of synchronous systems cuts
across the disciplines of modern science. But
the underlying phenomenon was first documented over three centuries ago. In 1665,
Dutch physicist Christiaan Huygens lay ill in
bed, watching the motions of two pendulum
clocks he had built. To his surprise, he detected an “odd kind of sympathy” between the
clocks: regardless of their initial state, the two
pendulums soon adopted the same rhythm,
one moving left as the other swung right.
Elated, Huygens announced his finding
at a special session of the Royal Society of
London, attributing this synchrony to tiny
forces transmitted between the clocks by the
wooden beam from which they were suspended. But rather than inspiring his peers
to seek other examples of self-synchrony,
his study was largely ignored. The heir to
Huygens’ idea was not a seventeenth-century
scientist, but Arthur Winfree, a theoretical
biologist who began in the 1960s to study
coupled oscillators1 — groups of interacting
units whose individual behaviours are confined to repetitive cycles.
Jungle rhythms
The blinking of fireflies is one behaviour that
Winfree studied. As night falls on the jungles
of Southeast Asia, fireflies begin to flicker,
each following its own rhythm. But over
the next hour or so, pockets of synchrony
emerge and grow. Thousands of fireflies
clustered around individual trees eventually
flash as one, switching on and off every
second or two to create a stunning entomological light show.
How does such synchrony come about?
In this case, each firefly has its own cycle of
flashes, but that rhythm can be reset when
the fly sees a flash from a neighbour. Pairs of
flies become synchronized in this way, and
the effect gradually spreads until large
groups are linked. In general, oscillating
units communicate by exchanging signals
that prompt other units to alter their timing.
Synchronization occurs if these ‘coupling’
signals are influential enough to overcome
the initial variation in individual frequencies.
“Below a threshold, anarchy prevails; above
it, there is a collective rhythm,” Winfree
wrote in a review article published shortly
after his death in November 2002 (ref. 2).
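Pulse coupling of this kind is easy to caricature in code. The sketch below is our own toy (numbers invented, in the spirit of Winfree's coupled oscillators and the later Mirollo–Strogatz model), not anything from the article: each 'firefly' advances its phase at its own rate, flashes on reaching phase 1, and every flash nudges the others closer to flashing.

```python
import random

# Toy pulse-coupled oscillators: phases advance at slightly different
# rates; a flash (phase reaching 1) resets the flasher to 0 and nudges
# every other oscillator's phase toward 1.
N, steps, dt, nudge = 50, 20000, 0.001, 0.03
rates = [1.0 + 0.1 * random.random() for _ in range(N)]
phase = [random.random() for _ in range(N)]

for _ in range(steps):
    for i in range(N):
        phase[i] += rates[i] * dt
    flashed = {i for i in range(N) if phase[i] >= 1.0}
    if flashed:
        for i in range(N):
            if i in flashed:
                phase[i] = 0.0                         # flash, then restart the cycle
            else:
                phase[i] = min(1.0, phase[i] + nudge)  # pulled toward flashing

print(sorted(round(p, 2) for p in phase))  # phases bunch up as clusters absorb stragglers
```

The absorbing nudge is what grows the 'pockets of synchrony': once two fireflies flash together, their next flashes nearly coincide, and their joint flash is more likely to capture a third.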
Winfree’s attempts to create a detailed
mathematical model of coupled oscillators
were stymied by the difficulty of solving
nonlinear differential equations — the
mathematical tools used to describe such
systems. But a crucial breakthrough came in
1975, when Yoshiki Kuramoto, a physicist at
the University of Kyoto in Japan, produced a
simplified model of the kind of system that
Winfree was interested in. Kuramoto’s system,
in which the oscillators are nearly identical
and are joined by weak links to all of the
others, can be described by a set of largely
solvable equations3.
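The article states no equations, but the standard form of Kuramoto's model is worth writing down (our transcription of the textbook form): each of N oscillators runs at its natural frequency and feels a weak, identical pull from every other.

```latex
% Kuramoto model: phase theta_i, natural frequency omega_i,
% global coupling strength K shared equally among all N oscillators.
\frac{d\theta_i}{dt} = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin\!\left(\theta_j-\theta_i\right),
\qquad i = 1,\dots,N.
```

Its tractability comes from the mean-field form of the sum: above a critical coupling K, a macroscopic fraction of the oscillators locks to a common rhythm while the rest drift.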
Kuramoto did not assume that his
abstract model would necessarily relate to
real physical systems. But that changed in
1996 when Strogatz, together with physicists
Kurt Wiesenfeld of the Georgia Institute of
Technology in Atlanta and Pere Colet, then at
the Institute of Material Structures in
Madrid, produced a mathematical description of an array of superconducting devices
called Josephson junctions4. These consist of
an insulating layer, so thin that electrical
current can actually cross it, sandwiched
between two superconducting metals. Once
the current across the junction exceeds a
certain level, the direction of flow oscillates
very rapidly, sometimes exceeding 100 billion
cycles per second.
According to Wiesenfeld and his colleagues, an array of junctions will come to
oscillate in sync as connections between the
junctions nudge the devices into phase. Electrical engineers, who hoped that Josephson
junctions could be used to drive a new breed
of faster computers, were intrigued by the
idea. What’s more, in the same paper, the trio
also showed that their theoretical description is equivalent, in mathematical terms, to
Kuramoto’s model. The finding kick-started
interest in synchronized systems, capturing
the attention of researchers from across the
scientific spectrum.
John Hopfield, a theoretical physicist at
Princeton University in New Jersey who pioneered studies of artificial neural networks,
is one example. Computer simulations of
networks of simplified model neurons are
known to be well suited to certain tasks, such
as pattern and face recognition. But Hopfield
is now working with both real and simulated
networks of units that behave more like actual
neurons. Each neuron in his network emits
voltage pulses at regular intervals, which are
relayed to other parts of the network. Like the
fireflies, a neuron’s firing cycle can be reset
by an incoming signal, allowing groups of
neurons to synchronize their outputs.
In 2001, Hopfield described how this
synchrony could be exploited to create a
speech-recognition device5. He simulated a
network of 650 biologically realistic neurons
with only weak couplings between them,
initially using conventional sound-analysis
software to divide spoken words into 40
‘channels’. Each channel corresponds to a
particular range of sound frequencies and
one of three key events: the time at which the
sound of that frequency began, when it
peaked, and when it stopped. Each thus has
a time associated with it, which states when
a particular frequency turned on, off or
peaked. Neurons in Hopfield’s network are
connected to one or more of these channels,
firing off a series of regular pulses when they
receive the time signal. The frequency of this
firing decreases with time, and although this
rate varies between neurons, all eventually
fall silent.
One to think about
So how does such a set-up recognize sounds?
Neurons are activated at different times, but
because their firing frequencies fall off at different rates, some of them will momentarily
fall into sync with each other before drifting
out of phase again. In a first trial run, Hopfield
fed the word ‘one’ into the network and
tracked the firing of the neurons until he
spotted a group that moved into phase. He
then strengthened the coupling between these
neurons. When the word ‘one’ was presented a
second time, this coupling was sufficient to
prompt a burst of synchronous and easily
detectable firing when the neurons drifted
into phase. Other words did not cause this
subset of neurons to come into phase, and
hence did not prompt synchronous firing.
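A toy version of that mechanism can be sketched in a few lines (ours, not Hopfield's 650-neuron simulation; the names and numbers are invented): each channel event triggers a neuron whose firing rate then decays, and 'recognition' is the moment two such decaying rates coincide.

```python
import math

# Toy sketch of transient synchrony through decaying firing rates.
# rate(): instantaneous firing rate of a neuron triggered at time `onset`,
# starting at r0 spikes per second and decaying with time constant `decay`.
def rate(r0, onset, decay, t):
    return 0.0 if t < onset else r0 * math.exp(-(t - onset) / decay)

# Onset times would come from the sound analysis (on/peak/off events);
# initial rates and decay constants are fixed properties of the neurons.
neurons = [(100.0, 0.00, 0.5), (80.0, 0.10, 0.7)]

for step in range(100):
    t = step * 0.01
    r = [rate(r0, on, d, t) for (r0, on, d) in neurons]
    # When the decaying rates momentarily agree, spikes line up: synchrony.
    if abs(r[0] - r[1]) < 0.5 and min(r) > 1.0:
        print(f"rates cross at t = {t:.2f} s: momentary synchrony")
        break
```

Strengthening the couplings among exactly the neurons that coincide for a given word is what turns this fleeting alignment into the detectable burst of synchronous firing described above.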
The network could speed up speech
recognition, as detecting synchronous firing
is much quicker than identifying a word by
analysing each channel. “If you take a system
that can spontaneously synchronize, you
immediately get an answer: it’s in sync or it’s
not,” says Hopfield. He suggests that the
approach could be useful for answering
questions in tasks such as face recognition,
“where you have lots of information coming
in and all you really want to know is yes or no”.
At the University of Pennsylvania in
Philadelphia, bioengineer Kwabena Boahen
has created real systems, each consisting of a
network of thousands of circuits that mimic
the behaviour of neurons. Theoretical studies
of these networks suggest that their synchronous firing could be put to good use6.
Boahen’s circuits can be trained to recognize a
particular pattern of inputs. By measuring the
proportion of neurons that fire in sync, an
observer can judge the degree of certainty
associated with the decision. An input that
causes 90% of neurons to fire in sync, for
example, is more likely to have been recognized than one that causes 80% to synchronize. “This shows you can answer more than
just yes/no questions,” Hopfield comments.
“Instead, you can ask what is the degree of
confidence that this face belongs to ‘Joe’?”
While Hopfield and Boahen are pursuing
computational methods inspired by neural
circuits, other investigators hope to exploit
synchrony at the level of genes and proteins.
Nancy Kopell, Jim Collins and their colleagues
at Boston University in Massachusetts are
trying to construct a synthetic regulatory
network in the bacterium Escherichia coli
that turns genes on and off on a periodic
basis. Last year, they described a theoretical cell7 that contains genes for two proteins, X and Y. X activates the genes that encode both itself and Y, and this positive feedback causes levels of X and Y to rise. But in the Kopell–Collins model, Y also degrades X, so that levels of X fall as Y builds up. This in turn reduces the activity of the gene for Y. With less Y around, X levels increase and the cycle repeats itself.
Each oscillating set of genes can be coupled by introducing a third protein, A, which
diffuses between cells. The gene for A is activated by X, and A in turn activates X, so levels
of A and X rise together. As these levels
increase, molecules of A diffuse from the cell
and boost levels of X in neighbouring cells.
This resets the cycle of fluctuating X levels in
neighbouring cells, bringing them into line
with the cell from which A originally diffused.
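To make the X–Y loop concrete, here is a rough numerical sketch. Because the published equations are not given in the article, the code deliberately swaps in the classic FitzHugh–Nagumo activator–inhibitor oscillator for the X–Y kinetics, with a mean-field term standing in for the diffusing protein A; every constant is illustrative, none comes from the Boston group's model.

```python
import numpy as np

# 'x' plays the role of the activator X, 'y' of the inhibitor Y.
# The coupling term stands in for protein A leaking between cells.
N, dt, D = 1000, 0.05, 0.3              # cells, Euler step, coupling strength
rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, N)           # random initial 'phases'
y = rng.uniform(-1.0, 1.0, N)

for _ in range(20000):
    coupling = D * (x.mean() - x)       # diffusive nudge toward the population mean
    dx = x - x**3 / 3.0 - y + 0.5 + coupling
    dy = 0.08 * (x + 0.7 - 0.8 * y)     # slow inhibitor builds up and pulls x down
    x, y = x + dt * dx, y + dt * dy

# If the cells have synchronized, the spread of x across the population is small.
print("spread of activator levels:", float(x.std()))
```

Setting D = 0 leaves the cells oscillating out of step (the spread stays large), which is exactly the contrast the planned experiments are designed to expose.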
Theoretical analysis of a population of
1,000 cells based on biologically plausible
rates of diffusion suggests that they will all
fall into synchronization within a matter of
minutes, even when the simulation begins
with cells distributed at random points in
their cycle. In experiments set to begin later
this year, the Boston University team will find
out whether this idea holds up in the lab. If
it does, the levels of one of the proteins
produced by the cell will peak around once
an hour, although this frequency could be
adjusted. In the long term, they hope to use a
similar strategy to produce therapeutic substances at regular intervals, to form part of a
drug-delivery system for use inside the body.
Evidence that this approach could work
in practice comes from a 2000 paper by theoretical physicists Michael Elowitz and Stanislas Leibler, then both at Princeton University.
Elowitz and Leibler created an oscillating
three-gene network in E. coli 8, in which the
protein produced by the first gene suppresses
the activity of the second gene; the second
protein suppresses the third gene; and the
third protein suppresses the first. In this way,
levels of the three proteins successively rise
and fall over a period of two to three hours.
Collins and Kopell hope to build on this
achievement, establishing oscillations such
as this in many cells and then getting the
oscillations to synchronize.
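For reference (the article does not reproduce it), the Elowitz–Leibler ring is usually written in dimensionless form, with m_i and p_i the mRNA and protein of repressor i, and p_j the protein that represses gene i:

```latex
% Repressilator: each protein p_j represses transcription of the next
% gene in the three-membered ring; alpha_0 is leaky expression and
% beta the ratio of protein to mRNA decay rates.
\frac{dm_i}{dt} = -m_i + \frac{\alpha}{1 + p_j^{\,n}} + \alpha_0,
\qquad
\frac{dp_i}{dt} = -\beta\left(p_i - m_i\right),
```

with (i, j) cycling through (lacI, cI), (tetR, lacI) and (cI, tetR); steep enough repression (large Hill coefficient n) puts the ring into sustained oscillation.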
Other examples of research into self-synchronizing systems abound. Neuroscientists are debating how synchronous neural
activity within the brain influences attention,
and perhaps even consciousness. Studies of
the breakdown of synchronous beating
among heart-muscle cells could lead to a
better understanding of cardiac arrhythmias.
And in 2001, Wiesenfeld and his Georgia Tech
colleagues repeated Huygens’ experiment
under more rigorous conditions9, tracking
the pendulums’ movements with lasers, as a
means of generating data for Wiesenfeld’s
theoretical studies of synchrony.

Swinging time: a Georgia Tech researcher recreates Christiaan Huygens’ twin pendulum experiment.
Wider view
Meanwhile, Strogatz is interested in expanding the range of systems that are studied
under the banner of synchrony. “We’ve gone
far by limiting our focus to repetitive behaviour,” says Strogatz, whose new book on
synchronization will be published next
month10. But the time is ripe to loosen the
shackles of the Kuramoto model, he suggests,
and entertain more general conditions.
The biological circuits studied by Kopell
and Collins are one example, as the signalling
between the cells is stronger than the coupling that Kuramoto built into his model.
Work by Robert York, an electrical engineer
at the University of California, Santa Barbara, represents another step away from simplified oscillator networks.
York has constructed a string of ten radio transmitters11 — the frequency of radio waves that each emits is determined by the oscillating current that is fed into it. The circuits that produce these currents are linked, and fall into sync with each other less than a nanosecond after they are turned on.

Steve Strogatz: it’s time to study more systems.

In York’s system, each transmitter is coupled only
to its nearest neighbour. But this doesn’t prevent the array from synchronizing. What’s
more, it also allows York to control the frequency at which the array synchronizes,
simply by adjusting the oscillator circuits for
the antennae at each end of the array.
A group headed by Brian Meadows, a
physicist at the US Navy’s Space and Naval
Warfare Systems Command in San Diego, is
scaling up this idea, preparing to build a square
array of 900 radio antennae to see whether the
same approach works in two dimensions. Such
systems are attractive, as they are more flexible
than a single large antenna and can be packed
more tightly than a conventional array. If
Meadows’ array works, it could yield a wide
variety of applications, such as compact systems
for ships, airliners and satellites. “Normally
you can’t put antennae too close, because
coupling becomes a problem,” says Meadows.
“For us, this coupling is essential and we take
full advantage of it.”
But the biggest challenge may be understanding systems containing oscillators that
are far from identical. “In physics, we’re used
to dealing with things like electrons and
water molecules that are all the same,” says
Strogatz. “But no one knows how to deal
mathematically with the tremendous diversity that biology presents.” He wants to
replace idealized oscillators with real biological elements such as genes and cells, but considers the task daunting. “Biologists are used
to collecting as many details as possible,” he
says. “For someone like me, the trick is to see
which details we really need. But there’s no
guarantee that simplification will work in
our efforts to model cellular processes.”
Strogatz is nevertheless convinced that
such studies will one day bear fruit. “Virtually
all of the major unsolved problems in science
today concern complex, self-organizing systems, where vast numbers of components
interact simultaneously, with each shift in one agent influencing the other,” he says. Huygens had a similarly strong conviction that he had stumbled into something big, which was sufficient to rouse him from his sickbed, even if he
could not have fathomed its full significance at
the time. Only now are we getting a glimpse of
how enduring his legacy may be.
■
Steve Nadis is a freelance writer in Boston.
1. Winfree, A. T. J. Theor. Biol. 16, 15–42 (1967).
2. Winfree, A. T. Science 298, 2336–2337 (2002).
3. Kuramoto, Y. Int. Symp. Math. Problems in Theor. Phys. (ed.
Araki, H.) 420–422 (Springer, Heidelberg, 1975).
4. Wiesenfeld, K., Colet, P. & Strogatz, S. H. Phys. Rev. Lett. 76,
404–407 (1996).
5. Hopfield, J. J. & Brody, C. D. Proc. Natl Acad. Sci. USA 98,
1282–1287 (2001).
6. Hynna, K. & Boahen, K. Neural Networks 14, 645–656 (2001).
7. McMillen, D., Kopell, N., Hasty, J. & Collins, J. J. Proc. Natl Acad.
Sci. USA 99, 679–684 (2002).
8. Elowitz, M. B. & Leibler, S. Nature 403, 335–338 (2000).
9. Bennett, M., Schatz, M. F., Rockwood, H. & Wiesenfeld, K. Proc.
R. Soc. Lond. A 458, 563–579 (2002).
10. Strogatz, S. H. Sync: The Emerging Science of Spontaneous Order
(Hyperion, New York, in the press).
11. Liao, P. & York, R. A. in IEEE MTT-S International Microwave
Symposium Digest 1235–1238 (Inst. Elec. Electron. Engin., San
Diego, 1994).
Animal behaviour
How self-organization evolves
P. Kirk Visscher
Figure 1 Honeybee swarm in search of a new nest site.

Self-organized systems can evolve by small parameter shifts that produce large changes in outcome. Concepts from mathematical ecology show how the way swarming bees dance helps to achieve unanimous decisions.

In work published in Proceedings of the Royal Society, Mary Myerscough1 has taken a novel approach to the modelling of group decision-making by honeybee swarms when they are in search of a new home. Bees ‘waggle dance’ to communicate locations of food in foraging, and of potential nest sites when a colony moves during swarming. Myerscough treats the scout bees dancing for alternative sites as populations, and models their growth and extinction with the tools of mathematical ecology. From this approach it is evident how a slight difference in the way the dance-language ‘recruitment’ of other bees is structured in foraging and house-hunting influences the outcome of each process.

The choice of a new home site by a swarm of honeybees is a striking example of group decision-making. When a swarm clusters after leaving its natal colony (Fig. 1), scouts search the countryside for cavities with the appropriate volume and other characteristics2. They then return to the swarm, and communicate the distance to and direction of the sites that they have found with waggle dances3, just like those used for communicating locations of food sources in foraging4. Usually, the scouts find and report several sites, but in time dances cease for all but one of them, and finally the swarm flies to the selected cavity. Self-organizing processes such as this, in which a complex higher-order pattern (here, the development of a consensus on the best site) arises from relatively simple responses of individuals with no global view of the situation, are receiving increasing attention as biological mechanisms for elaborating complexity5.

The population-biology metaphor is appropriate for analysing honeybee dance information. Bees recruited by dances for a particular site may visit it and in turn dance for new recruits, so dances reproduce. But nest-site scouts may cease dancing before they recruit at least one other dancer: the population of dancers for that site then declines, and may become extinct. Myerscough’s approach incorporates key aspects of the dynamics of nest-site recruitment, and can accommodate differences that are specific to the nest site or the individual bee. The populations of dancers have ‘age structure’ in the sense that some dances are a scout’s first dance for a nest site, others follow a second trip, and so on. This is similar to population growth with discrete generations, which can be represented in a standard tool of mathematical ecology: a Leslie matrix. The ‘age structure’ patterns can also incorporate an important difference in dance-language use between nectar foraging and house-hunting. In foraging, the number of waggle runs that a bee performs when returning with food increases and then levels off with successive dances by that bee (Fig. 2a). In contrast, in house-hunting, the number of waggle runs (which initially depends on the quality of the site) generally declines with each successive dance (Fig. 2b), and each scout soon ceases dancing entirely. This gives different patterns of ‘age-specific fecundity’ to the dancing bee populations.

Figure 2 Different patterns of dance-language performance in nectar foragers and nest-site scouts. These graphs plot the number of waggle runs (mean ± standard error) in the recruitment dances performed after each return trip to the colony for successive instances where each individual bee danced10. a, Nectar foragers continue to dance for many trips. (Here, 93% of 40 foraging bees in 3 colonies danced on more than 8 trips; most danced on more than 50 trips.) b, Nest-site scouts, searching for a new home following swarming, perform dances with more waggle runs at first, but soon cease to dance entirely. (Here, fewer than 5% of 86 bees in 3 swarms performed more than 8 dances.) Myerscough’s analysis1 suggests that this difference in dance performance underlies the difference in outcome: in foraging, it is desirable to recruit new foragers for several sites; in swarming, unanimity for a single site must be reached.
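The flavour of this machinery is easy to capture numerically. The sketch below (in Python, with purely illustrative fecundity and survival numbers, not Myerscough’s fitted model) builds a Leslie matrix whose top row holds the recruits gained per dance at each ‘age’ and whose subdiagonal holds the chance that a dancer dances again; the dominant eigenvalue then predicts whether the population of dancers for a site grows or dies out.

    import numpy as np

    def leslie(f, s):
        # f[i]: mean number of recruits per dancer on dance i ('age-specific fecundity')
        # s[i]: probability that a dancer performs a further dance after dance i
        n = len(f)
        L = np.zeros((n, n))
        L[0, :] = f                                    # recruits enter as first-time dancers
        L[np.arange(1, n), np.arange(n - 1)] = s[:-1]  # the rest advance one 'age' class
        return L

    f_scout = np.array([0.5, 0.3, 0.1, 0.0, 0.0])      # house-hunting: recruitment declines, then stops
    f_forager = np.array([0.3, 0.6, 0.9, 0.9, 0.9])    # foraging: recruitment rises and levels off
    s = np.full(5, 0.9)

    for name, f in [("nest-site dancers", f_scout), ("forager dancers", f_forager)]:
        growth = max(abs(np.linalg.eigvals(leslie(f, s))))
        print(name, "growth factor:", round(float(growth), 2))

With these illustrative numbers the nest-site dancers decline (growth factor below one) while the forager dancers multiply; the difference in ‘age-specific fecundity’ does exactly the work described above.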
Because the mathematical theory of
models of this type is well developed,
Myerscough’s approach has an analytical payoff. It is straightforward to predict whether a
population of dancers for a site will increase or
decline. But this is a dynamic process, because
only a limited number of scouts can be
recruited. As a result, whether dancers for a
particular site increase or decrease in number
depends both on the quality of the site and on
the populations of other dancers. The dancing
for a site may increase while competing
dances are rare, but then decline in favour of
other sites with greater ‘fecundity’ (that is,
those that elicit a greater number of waggle
runs of dancing per trip by scouts). Such
dynamics are typical of swarms3,6,7, with the
outcome that the highest-quality site among
those discovered is usually selected8.
The most striking result of this approach
is that it shows how certain special features
of the dance in the context of house-hunting
ensure that one, and usually only one, of the
populations of nest-site dancers ends up with
all available recruits. This finding is of wide
interest, because it shows how natural selection can shape a self-organizing process. In
both foraging and nest-site scouting, global
patterns of allocation of bees among alternative resources arise from interactions of bees
responding to their own experience, without
a global view of the pattern of allocation or
direct knowledge of the characteristics of
alternative sources9.
However, the contexts of nectar foraging
and nest-site decision-making differ in one
key respect. In foraging it is usually desirable
for the bee colony to use several food sources
simultaneously, especially if they are similar in
quality; in house-hunting the colony has to
settle on just one of multiple sites, even
if they differ little in quality. The dance
language is used to recruit bees in both
settings, but certain aspects of how the dance is
performed are different. Myerscough shows
it is just these parameters that determine the
outcome. Attrition in dances in the Leslie
matrix models mathematically ensures that
one resource will always dominate in nest-site
selection (unless stochastic differences intervene, which may account for the occasional
failure of swarms to achieve unanimity). But
in foraging there is no advantage to doing this,
and attrition does not occur.
A common misconception about self-organization in biological systems is that it
represents an alternative to natural selection5.
This example illustrates how natural selection presumably evolves such mechanisms:
slight modifications of key components
shape the parameters of the self-organizing
system, and shift the ensuing large-scale
patterns to achieve different ends.
■
P. Kirk Visscher is in the Department of
Entomology, University of California, Riverside,
California 92521, USA.
e-mail: [email protected]
1. Myerscough, M. Proc. R. Soc. Lond. B published online
3 February 2003 (doi:10.1098/rspb.2002.2293).
2. Seeley, T. D. & Morse, R. A. Behav. Ecol. Sociobiol. 49, 416–427
(1978).
3. Lindauer, M. Z. Vergl. Physiol. 37, 263–324 (1955).
4. von Frisch, K. The Dance Language and Orientation of
Honeybees (Harvard Univ. Press, 1967).
5. Camazine, S. et al. Self-Organization in Biological Systems
(Princeton Univ. Press, 2001).
6. Seeley, T. D. & Buhrmann, S. C. Behav. Ecol. Sociobiol. 45, 19–31
(1999).
7. Camazine, S., Visscher, P. K., Finley, J. & Vetter, R. S.
Insectes Soc. 46, 348–360 (1999).
8. Seeley, T. D. & Buhrman, S. C. Behav. Ecol. Sociobiol. 49,
416–427 (2001).
9. Camazine, S. & Sneyd, J. J. Theor. Biol. 149, 547–571 (1991).
10. Beering, M. A Comparison of the Patterns of Dance Language
Behavior in House-hunting and Nectar-foraging Honey Bees
(Apis mellifera L.). Thesis, Univ. California, Riverside (2001).
Electronics
Polymers light the way
Andrew Holmes
Using the methods of polymer deposition that are employed in making
integrated circuits, light-emitting polymers can be patterned for application
in flat-screen, full-colour displays.
Liquid-crystal devices dominate the
market for the flat-panel displays used
in laptops, personal organizers and
mobile telephones. They have their drawbacks, however, and light-emitting polymers
are showing great promise as a complementary technology. Processing such polymers
to produce a colour, pixelated display is one
of the challenges. As they describe on page
829 of this issue, it is a challenge that Müller
et al.1 have tackled in a new way.
The disadvantage of liquid-crystal devices
is that the light must pass through various
colour and polarizing filters before it reaches
the eye. So, as everyone knows who has
travelled in an aircraft with personal video
screens, they can be viewed conveniently only
if the screen is at right angles to the viewer.
For flat-panel screens, one solution is to use
organic fluorescent materials, which are
themselves the actual light source and in
principle visible from a much wider range of
viewing angles. The emissive material can
be a thin film of either an organic molecule
or a polymer; fluorescence (electroluminescence) is induced by the injection of
charge into a film of the emitter sandwiched
between oppositely charged electrodes
(ideally) powered by a small battery. Good
red, green and blue electroluminescent
materials are now available, and car radio
and multicolour mobile telephone displays
using small-molecule ‘organic light-emitting
diodes’ (OLEDs) are on the market2. The
drawback is that such materials can only
be deposited using vacuum (sublimation)
deposition techniques, in combination with
a shadow mask to control where the molecule
is deposited. This presents a problem of scale
in large-area displays, although prototype
television screens have been fabricated.
By contrast, fluorescent polymer light-emitting diodes (PolyLEDs) can be assembled by deposition from solution. Here the
problems are to avoid impurities (in the
polymer and the solvent) and not to dissolve
away a film during deposition of another
layer. One elegant method of delivering a
polymer droplet of the right colour to a small
dot (pixel) in the display is to use ink-jet
printing3, and rapid progress has been made
towards television-size prototype displays
using ink-jet printing onto specially prepared wafers of polysilicon (Fig. 1). Simple
monochrome PolyLED products are now
also on the market, as demonstrated by the
display in the electric shaver used by Pierce
Brosnan in the latest James Bond movie
Die Another Day. Müller et al.1 now describe
a completely different way of solution-processing coloured displays, one that involves a
clever chemical crosslinking method.
Electroluminescent devices operate by
forcing positive and negative charges from
oppositely charged electrodes into a
sandwich device containing a thin film of the
fluorescent organic or polymeric material4.
The charges migrate in opposite directions
through the material until they annihilate
and cause fluorescence from the excited
state. One of the most powerful families
of stable light-emitting polymers is the
polyfluorenes, which can conveniently be
prepared in good yield and high molecular
weight by the Suzuki reaction. Generically,
this involves carbon–carbon bond formation between an aryl halide and a boron compound. In
the case of producing polyfluorenes, it is the
palladium-mediated polycondensation of
a bis-boronate ester with an appropriate
dibromo-substituted aromatic compound5.
The reaction schemes used by Müller et
al. are outlined in Fig. 1 of their paper on
page 830. They obtained the three primary
polymers (red, green and blue) by ‘tuning’
the Suzuki copolymerization6,7 of the bis-boronate monomer with the comonomer
containing reactive oxetane end-groups
and various dibromo-substituted aromatic
comonomers. To form a patterned device,
each polymer was crosslinked using the
standard photoresist techniques that are
employed to make integrated-circuit patterns on silicon chips. Thus, solution deposition of the first polymer onto a transparent
electrode (precoated with a conducting
polythiophene derivative) in the presence of
the photo-acid generator, followed by irradiation of the film through a shadow mask
(diffraction grating), released photochemically generated acid in the regions under
irradiation.
The acid released in the film caused the
strained-ring oxetane end-group to undergo
a ring-opening cationic polymerization,
leading to crosslinked material. Washing
with solvent removed the material that had
not become crosslinked, and further gentle
baking left the polymer in a well-defined
pattern. The two remaining layers of emissive
polymers were then deposited in the same
way, followed by vacuum deposition of the
top electrode, to give a device that showed
good resolution and characteristics.
It might have been expected that release
of acid and crosslinking would adversely
affect the performance of the light-emitting
news and views
be called a fudge factor, in this case of 10 or 15
million years — close to 20%.
The precise date of major genome duplications (measured by a molecular clock)
can thus be compared with major events in
evolutionary history (generally measured
by a different molecular clock), using one or
more calibration points (fossils). The error
of the estimate is high, so correlations are
difficult, if not impossible, to demonstrate
rigorously.
Langkjaer et al.1 and Bowers et al.2 circumvent this problem by using relative time.
Bowers et al. compare pairs of genes in Arabidopsis with those in cabbage (Brassica; from
the same family), cotton (from a different
family), pine (a seed plant, but not a flowering plant) and moss (a very distant relative),
and for each gene they compute an evolutionary tree — the gene’s pedigree. From the
pattern of the evolutionary tree, they can
determine when a duplication occurred
relative to the evolutionary origin of other
species (Fig. 1). The evolutionary tree (see
Fig. 2b on page 436) shows a clear duplication
event, affecting many genes in the genome,
that occurred before the Brassica/Arabidopsis
split, and before the members of the family
Brassicaceae started to diverge. Similarly,
Langkjaer et al. show that the yeast genome
was duplicated before the divergence of
Saccharomyces and Kluyveromyces.
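The topology test itself is simple enough to state in a few lines of code. In the Python toy below, the gene names and the two-copy trees are hypothetical, chosen only to illustrate the logic: if the two Arabidopsis copies, A1 and A2, each group with a Brassica counterpart rather than with each other, the duplication must predate the Arabidopsis/Brassica split.

    # Hypothetical gene trees: A1 and A2 are Arabidopsis duplicates,
    # B1 and B2 their Brassica counterparts.
    dup_before_split = "((A1,B1),(A2,B2))"  # duplication, then speciation
    dup_after_split = "((A1,A2),(B1,B2))"   # speciation, then duplication

    def duplication_precedes_split(newick):
        # In this two-clade toy, the duplication predates the split
        # exactly when no clade contains both Arabidopsis copies.
        clades = newick.strip("()").split("),(")
        return not any("A1" in c and "A2" in c for c in clades)

    print(duplication_precedes_split(dup_before_split))  # True
    print(duplication_precedes_split(dup_after_split))   # False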
After duplication, one copy of many of
the genes in a duplicated genome segment is
lost. Once duplicate segments have been
identified, comparisons between the two
allow the gene composition of the common
ancestor to be estimated (Fig. 2). Having
done this, duplicated regions that are even
more ancient become apparent — pairs of
genes and gene regions that were not initially
identified because too many puzzle pieces
were missing. At the same time, it is possible
to identify the pattern and relative rate of
gene loss. Repeating their evolutionary
analysis for the newly identified duplicated
segments, Bowers et al. were able to identify a
more ancestral duplication event early in the
evolution of the flowering plants, after the
Figure 2 Duplicated chromosomal segments, showing some gene pairs, and the ancestral chromosomal segment inferred from them. This pattern of duplication suggests that all seven genes may have been present and in the same order in the common ancestor.
ancestor of cotton and Arabidopsis (which
are both dicotyledonous plants) diverged
from the ancestor of rice and maize (which
are monocotyledons). Another round of
analyses revealed a duplication that was still
more ancient, possibly occurring before the
origin of the seed plants.
A historian, trying to dissect cause and
effect, needs to know the relative times of
battles and treaties. Similarly, the biologist
needs to know the relative times of gene
duplications, speciation events, major
species diversifications, and events of Earth
history. Approaches that involve the construction of evolutionary trees are designed
specifically to assess relative time. Incorporating such an approach into future genome
studies will undoubtedly lead to a clearer
picture of the role of gene and genome
duplication in the evolutionary process. By
increasingly dense sampling of evolutionary
trees, even without complete genome
sequences for every species, it is possible
to distinguish single-gene duplications from
whole-genome duplication. So the approach
holds the promise of dissecting the dynamic
processes by which genes and genomes
evolve.
■
Elizabeth A. Kellogg is in the Department of Biology,
University of Missouri-St Louis, 8001 Natural
Bridge Road, St Louis, Missouri 63121, USA.
e-mail: [email protected]
1. Langkjaer, R. B., Cliften, P. F., Johnston, M. & Piskur, J. Nature
421, 848–852 (2003).
2. Bowers, J. E., Chapman, B. A., Rong, J. & Paterson, A. H. Nature
422, 433–438 (2003).
3. Gu, X., Wang, Y. & Gu, J. Nature Genet. 31, 205–209 (2002).
4. McLysaght, A., Hokamp, K. & Wolfe, K. H. Nature Genet. 31,
200–204 (2002).
5. Wolfe, K. H. & Shields, D. C. Nature 387, 708–713 (1997).
6. Jacobs, B. F., Kingston, J. D. & Jacobs, L. L. Ann. Missouri Bot.
Garden 86, 590–643 (1999).
Nonlinear dynamics
Synchronization from chaos
Peter Ashwin
It isn’t easy to create a semblance of order in interconnected dynamical
systems. But a mathematical tool could be the means to synchronize
systems more effectively — and keep chaos at bay.
Chaos and control are often seen as
opposite poles of the spectrum. But the
theory of how to control dynamical
chaos is evolving, and, in Physical Review
Letters, Wei, Zhan and Lai1 present a welcome
contribution.
Chaos is a feature in all sciences: from
lasers and meteorological systems, to chemical reactions (such as the Belousov–Zhabotinsky reaction) and the biology of
living organisms. In most deterministic
dynamical systems that display chaotic
behaviour, selecting the initial conditions
carefully can drive the system along a trajectory towards much simpler dynamics, such
as equilibrium or periodic behaviour. But
sensitive dependence on initial conditions
— the well-known ‘butterfly effect’ — and
the effects of noise in the system mean that in
practice this is not so easy to do.
The aim of chaos control is to be able to
perturb chaotic systems so as to ‘remove’ or at
least ‘control’ the chaos. For example, in a
spatially extended system, the aim may be
to achieve regular temporal and/or spatial
behaviour. Techniques introduced2,3 and
developed by several researchers over the past
decade have sought to make unstable behaviour robust against both noise and uncertainties in initial conditions by stabilizing the
system (using feedback3, for instance) close to
dynamically unstable trajectories. These techniques have been very successful in controlling
chaos, at least for low-dimensional systems.
Synchronization is a good example of a
chaos-control problem: synchronizing an
array of coupled (interdependent) systems
— such as the coherent power-output from
an array of lasers — is of interest for technological applications. In biology, synchronization of coupled systems is a commonly
used model4, and the presence, absence or
degree of synchronization can be an important part of the function or dysfunction of
a biological system. For example, epileptic
seizures are associated with a state of the
brain in which too many neurons are synchronized for the brain to function correctly.
In the simplest case, synchronization
of two identical coupled systems (such as
periodic oscillators) can be achieved through
their coupling as long as it is strong enough
to overcome the divergence of trajectories
within either individual system. The required strength is indicated by the most
positive Lyapunov exponent of the system: a
Lyapunov exponent is an exponential rate of
convergence or divergence of trajectories of
a dynamical system, and the most positive
Lyapunov exponent measures the fastest
possible rate of divergence of trajectories. In
particular, the fact that the individual systems have chaotic dynamics before they are
coupled together means that the most positive Lyapunov exponent is greater than zero,
and there is always a threshold below which
synchronization cannot be achieved.
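This threshold is easy to see numerically. The Python sketch below couples two Lorenz systems diffusively; the coupling scheme and parameter values are illustrative, not taken from the work discussed here. With the standard Lorenz parameters the most positive Lyapunov exponent is roughly 0.9, and for couplings too weak to beat it the two trajectories never come together.

    import numpy as np

    def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
        x, y, z = v
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

    def mean_sync_error(c, steps=40000, dt=0.005):
        u = np.array([1.0, 1.0, 1.0])
        w = np.array([1.1, 0.9, 1.2])      # a slightly different starting point
        err = 0.0
        for i in range(steps):
            du = lorenz(u) + c * (w - u)   # diffusive coupling of strength c
            dw = lorenz(w) + c * (u - w)
            u, w = u + dt * du, w + dt * dw
            if i > steps // 2:             # average only after transients
                err += np.linalg.norm(u - w)
        return err / (steps // 2)

    for c in [0.0, 0.3, 1.0, 3.0]:
        print(f"coupling {c}: mean separation {mean_sync_error(c):.4f}")

Below the threshold the two trajectories stay chaotically far apart; above it, their separation collapses towards zero.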
Synchronization in more general arrays
can be done similarly, although with local
coupling this can only be achieved with a
coupling strength that grows with system
size. This synchronization can be achieved
without forcing the dynamics to become,
for example, periodic. Hence, the problem of spatial control of coupled dynamics, although it still involves stabilizing dynamics that are inherently unstable, is easier than forcing chaotic dynamics into simple dynamics. Control of synchronization can
usually be achieved by careful design of the
coupling, rather than resorting to feedback
techniques. What then remains is to try to
minimize the level of coupling required to
achieve synchronization.
This is the problem that Wei, Zhan and
Lai1 have tackled. They have come up with a
novel way of reducing the necessary coupling
in an array by using wavelet decomposition of
the matrix of coupling coefficients. Wavelets
are mathematical functions that have been
developed over the past decade or so as a
powerful tool for signal-processing and
numerical analysis. Wavelet analysis involves
reducing a signal into a series of coefficients
that can be manipulated, analysed or used to
reconstruct the signal. Wei et al. make a small
change to the low-frequency components
in the wavelet-transformed matrix, before
applying an inverse transform to obtain a
modified coupling matrix. This turns out to
be an efficient strategy for achieving synchronization at much lower coupling strengths.
Wei et al. test their method by synchronizing a ring of coupled Lorenz systems. The Lorenz system is a set of three nonlinear differential equations showing chaotic behaviour. In this proof of principle, a ring of Lorenz systems is coupled together linearly,
their relations to each other represented by a
matrix of coupling coefficients. A small
change in this matrix (less than 2% for 64
coupled systems), through the wavelet transform, produces a much lower threshold of
coupling to achieve synchronization. The
authors show that their technique is robust
even if the symmetry of nearest-neighbour
coupling is broken.
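The mechanics of the manipulation can be sketched with a one-level Haar transform, the simplest wavelet. The Python toy below applies an arbitrary 20% boost to the low-frequency block of a nearest-neighbour ring-coupling matrix and transforms back; the decomposition level, the size of the change and its effect on the synchronization threshold are all tuned carefully in the actual work, so the numbers here are illustrative only.

    import numpy as np

    def haar2(M):                              # one-level 2-D Haar transform
        a = (M[0::2] + M[1::2]) / 2.0          # row pairs: averages...
        d = (M[0::2] - M[1::2]) / 2.0          # ...and differences
        T = np.vstack([a, d])
        a = (T[:, 0::2] + T[:, 1::2]) / 2.0    # then the same for column pairs
        d = (T[:, 0::2] - T[:, 1::2]) / 2.0
        return np.hstack([a, d])

    def ihaar2(W):                             # exact inverse of haar2
        h = W.shape[0] // 2
        a, d = W[:, :h], W[:, h:]
        T = np.empty_like(W)
        T[:, 0::2], T[:, 1::2] = a + d, a - d
        a, d = T[:h], T[h:]
        M = np.empty_like(W)
        M[0::2], M[1::2] = a + d, a - d
        return M

    n = 64                                     # a ring of 64 coupled systems
    C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    C[0, -1] = C[-1, 0] = 1                    # close the nearest-neighbour ring

    W = haar2(C)
    W[:n // 2, :n // 2] *= 1.2                 # boost the low-frequency block
    C2 = ihaar2(W)
    print("relative change:", np.linalg.norm(C2 - C) / np.linalg.norm(C))

The point is that a small, targeted change in the transformed matrix alters the coupling globally once it is transformed back.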
It will be interesting to see if this method
can be extended to more general arrays of
coupled systems, to better understand control of spatial patterns. It may be that the
work by Wei et al.1 will suggest new techniques and structures for the design of local
and global coupling in such systems.
■
Peter Ashwin is in the School of Mathematical
Sciences, University of Exeter, Exeter EX4 4QE, UK.
e-mail: [email protected]
1. Wei, G. W., Zhan, M. & Lai, C.-H. Phys. Rev. Lett. 89, 284103
(2002).
2. Ott, E., Grebogi, C. & Yorke, J. A. Phys. Rev. Lett. 64, 1196–1199
(1990).
3. Pyragas, K. Phys. Lett. A 170, 421–428 (1992).
4. Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization:
A Universal Concept in Nonlinear Sciences (Cambridge Univ.
Press, 2001).
Neurobiology
Ballads of a protein quartet
Mark P. Mattson
The fate of neurons in the developing brain and in Alzheimer’s disease may
lie with a four-protein complex that regulates the cleavage of two molecules
spanning the cell membrane. The role of each protein is now being unveiled.
Scientific discoveries often originate in
surprising places. Some years ago, for
instance, researchers looking at how
the brain develops received help from an
unexpected quarter: studies of patients with
Alzheimer’s disease. This disease is characterized in part by the abnormal accumulation,
in the brain, of a protein called amyloid β-peptide (Aβ), which is a fragment of a larger protein, the amyloid precursor protein (APP), that sits across the outer membrane of nerve cells. Two enzymatic activities are involved in precisely snipping APP to produce Aβ, which is then shed into the brain. Curiously, one of these activities — dubbed γ-secretase1 — was later discovered also to
cleave Notch, a receptor protein that lies on
the cell surface, and thereby to affect the way
in which Notch regulates gene expression
during normal development2. On page 438 of
this issue, Takasugi and colleagues3 add to our
understanding of how APP and Notch are
processed. Using genes and cells from flies
and humans, and the powerful new technology of RNA interference, these authors
establish specific roles for four different
proteins underlying γ-secretase activity.
For many years, much of the research into
Alzheimer’s disease has concentrated on
identifying and characterizing the protein
(or proteins) that generate Aβ. In the first step of this process, APP is cleaved at a specific point by a so-called β-secretase activity; the protein responsible for this activity was identified some four years ago. Cleavage by the γ-secretase activity then produces Aβ —
but here the molecules at fault have been
harder to pin down. An early hint came from
the finding that mutations in a gene encoding the presenilin-1 protein occur in several
families with inherited Alzheimer’s disease;
it was quickly shown that these mutations
cause increased cleavage of APP to produce
Aβ. So presenilin-1 was assumed to be the γ-secretase.
A surprising link to brain development
was then discovered when researchers
knocked out the presenilin-1 gene in mice
(reviewed in ref. 2). The animals died as
embryos, and had severe defects in brain
development that were indistinguishable
from the defects in mice lacking Notch. This
is because presenilin-1 is required not only
to cleave APP and generate Aβ, but also to
cleave Notch after Notch has detected and
bound a partner protein. An intracellular
fragment of Notch is then released, and
regulates gene expression in the neuronal
nucleus. It has been suggested4 that an
intracellular fragment of APP, generated by
γ-secretase, likewise moves to the nucleus
and regulates gene expression.
But it soon became clear that presenilin-1
cannot work alone to cleave APP and Notch,
and a search began for other proteins that
might be involved. APP and Notch have been
highly conserved during evolution, which
not only attests to their physiological importance, but also means that molecular-genetic
analyses of fruitflies and worms can be used
to investigate their cleavage. Such studies
have found that four proteins seem to
contribute to γ-secretase activity; these are
presenilin-1, nicastrin, APH-1 and PEN-2
(Fig. 1, overleaf)5–7. It has just been shown that γ-secretase activity can be fully reconstituted with only these four proteins8.
But what exactly do these proteins do?
To begin to understand this, Takasugi and
co-workers3 first generated fruitfly cells that
expressed different combinations of fruitfly
nicastrin, APH-1 and PEN-2 and determined the effects on cleavage of presenilin-1
(this event having been previously associated
with γ-secretase activity). They found that
overexpression of APH-1 — or APH-1 plus
nicastrin — stabilized the four-protein
complex and simultaneously reduced presenilin-1 cleavage, suggesting that APH-1
inhibits the ability of γ-secretase to cleave any of its target proteins. They then showed that, indeed, APH-1 reduces the γ-secretase
cleavage of APP as well.
To determine the role of PEN-2 in the
γ-secretase quartet, the authors used RNA
interference to target and degrade the
messenger RNA encoding PEN-2, thereby
reducing production of the protein, in fruitfly cells, mouse and human brain neurons,
and human tumour cells. This resulted in
decreased γ-secretase activity. Further experiments in which a fragment of APP was added
confirmed that APH-1 inhibits, whereas
PEN-2 promotes, the production of Aβ.
These findings advance our understanding
of an enzyme activity that is important in
both brain development and Alzheimer’s
disease, and identify new protein targets
for drugs to prevent or treat this disorder.
But the results also raise new questions,
and reveal further hurdles to treating
Alzheimer’s disease.
One general question is whether the
books and arts
fulfil this role are none other than those discovered by Fuster and Niki. If their persistent
activity in the absence of a sensory cue is
indeed the step of calculating a single
decision variable based on information
from several sources, then neurophysiologists have actually watched neurons making
up the monkey’s mind. What determines the
moment of decision is not yet known, but
just as ‘decide’ once meant to cut off, or bring
to an end, so these neurons do indeed stop
their activity when the decision is made.
There is a strong argument that we have
made such great progress in understanding
the neural basis of cognition only because
neurons, and the networks that they form,
compute in an analogue style. We can get
an idea of the underlying computations by
measuring the activity of single neurons, or
the strength of the functional magnetic resonance imaging signal. It seems fantastic, but
Fuster’s progress report dares us to believe
that the patterns woven by Sherrington’s
“enchanted loom”, the cerebral cortex, are
now well on the way to being understood. ■
Kevan Martin is at the Institute of
Neuroinformatics, University of Zurich/ETH,
Winterthurerstrasse 190, 8057 Zurich, Switzerland.
Suffocated or shot?
When Life Nearly Died: The
Greatest Mass Extinction of
All Time
by Michael Benton
Thames and Hudson: 2003. 336pp.
£16.95, $29.95
Peter J. Bowler
Whatever hit the Earth at the end of the
Permian period certainly struck hard, killing
90% of living species. Compared with this,
the extinction at the end of the Cretaceous
period was comparatively minor, with only
a 50% death rate. Yet the latter event is much
better known, because among that 50%
were the last of the dinosaurs. Partly for this
reason, Michael Benton uses the event at the
end of the Cretaceous as an introduction to
his account of the Permian extinction — he
wants us to realize how limited it was in comparison with what he intends to describe.
But there is a deeper reason for linking the
two episodes: Benton wants to show us how
the catastrophist perspective has re-emerged
in modern geology and palaeontology. He
argues that the theory of catastrophic mass
extinctions was widely accepted in the early
nineteenth century, but was then driven
underground by the gradualist perspective
of Charles Lyell’s uniformitarian geology
and Darwin’s theory of evolution. Only in the
1970s was catastrophism revived, through
the claim that the dinosaurs were wiped out
when an asteroid hit the Earth. Benton shows
us how in the 1990s the evidence began to
emerge that the species replacements marking the Permian–Triassic transition were also
sudden, and hence were probably caused by
some environmental trauma. He is describing both a geologically sudden event and a
rapid transformation in our ideas about the
Earth’s past.

Exit stage right — even though Lystrosaurus survived the extinction at the end of the Permian.
As a result, the book is partly historical
in nature. It describes how the British geologist R. I. Murchison (himself a catastrophist)
defined the Permian rocks of Russia in about
1840, and how Lyell and Darwin challenged
the idea of mass extinctions by arguing that
apparently sudden transitions in the fossil
record were the result of gaps in the evidence,
which created illusory jumps between one
system of rocks and the next.
The triumph of darwinism ensured that
catastrophist explanations were marginalized until they were revived by the asteroid-impact theory for the end of the Cretaceous.
Even then, many palaeontologists resisted,
arguing that the dinosaurs were declining
anyway, so the impact only finished a job
that had already been started by gradual environmental changes. At the time, knowledge
of the Permian–Triassic transition was so
limited that gradualism still seemed plausible
here, too. Benton provides a graphic account
of how more recent evidence has piled up,
including his own experiences fossil hunting
in Russia, making a catastrophic explanation
inescapable.
There is one important twist in the story,
however: Benton finds little support for the
possibility that the Permian extinction was
caused by an extraterrestrial agent. Wild
theories about periodic bombardments by
asteroids have not stood the test of time: the
Permian event was probably triggered by
massive volcanism, which injected poisonous
gases into the atmosphere, both directly and
by triggering the release of methane from
deep-sea hydrates. Some geologists think
that volcanism also played a role at the
end of the Cretaceous. Significantly, Benton
concludes by considering the implications of
the latest, man-made mass extinction, asking
what light the earlier events can throw on the
potential for survival of modern species.
The historical aspect of Benton’s book
raises some intriguing questions. Many early
catastrophists postulated the involvement of
extraterrestrial agents — a comet was sometimes invoked as the cause of Noah’s flood.
But such ideas went out of fashion in
the mid-nineteenth century, and later
catastrophists, including Murchison,
favoured explanations based on the
supposedly more intense geological activity in the young Earth. The asteroid-impact theory of dinosaur extinctions
seems to parallel some of the earliest
speculations, but Benton has redressed the
balance by favouring internal causes.
My one criticism of his account is that
he accepts too readily the assumption that
Lyell and Darwin marginalized all support
for discontinuity in the Earth’s history.
There were few outright catastrophists left
by around 1900, but many still believed that
the history of life had been punctuated by
environmental transitions far more rapid
than anything observed in the recent past.
The real triumph of gradualism came
with the modern darwinian synthesis of
the mid-twentieth century, and even then it
was confined to the English-speaking world.
Benton notes that British and US palaeontologists of the 1950s ignored the catastrophism of Otto Schindewolf. But we need to
recognize that German palaeontologists
such as Schindewolf were continuing a
long-standing tradition that had proved far
more robust than our modern, Darwin-centred histories acknowledge. The fact
that modern catastrophists do not see a link
back to that tradition tells us about the
effectiveness of the neo-lyellian interlude of
the mid-twentieth century.
■
Peter J. Bowler is in the Department of Social
Anthropology, Queen’s University Belfast,
Belfast BT7 1NN, UK.
Hooke, life
and thinker
London’s Leonardo: The Life and
Work of Robert Hooke
by Jim Bennett, Michael Cooper,
Michael Hunter & Lisa Jardine
Oxford University Press: 2003. 240 pp.
£20, $35
David R. Oldroyd
Some devotees of Robert Hooke have
regarded him as Britain’s greatest scientific
genius of the seventeenth century, the range
of his interests and achievements being
hard to conceive. He is a fruitful subject for
historical enquiry as he left behind him a
large archival trail, and, with his polymathic
interests, he has attracted much attention.
A good general overview, Robert Hooke
by Margaret ’Espinasse (Heinemann), was
published in 1956. Since then, studies of
Hooke have expanded greatly to the point
where we have a detailed knowledge of the
man, although not all within the pages of a
single volume. London’s Leonardo contains
four highly competent and complementary
essays, which go a long way towards providing a definitive account of Hooke, while
leaving open the road (or preparing the way)
for a full intellectual biography.
Hooke was wealthy at his death, much
of his money having come from his work
helping to resurvey London after the Great
Fire of 1666. In his essay, Michael Cooper
describes this work pleasantly and informatively. That Hooke should have embarked
on it when he was already fully occupied
with his scientific work for the Royal Society
is remarkable and bespeaks his devotion
to London and its inhabitants. There were
many problems. With street widening,
residents had to be compensated fairly for
the land they were to lose. Buildings had
different owners on different floors, and
some structures had ‘interleaved’ with their
neighbours. An accurate survey was needed,
and it relied on instruments, some devised
by Hooke, that were an integral part of the
‘scientific revolution’. Hooke’s contributions
to the survey were substantial.
Jim Bennett’s fine paper, which is profusely
illustrated, deals with Hooke’s instruments
and inventions more generally, revealing
their extraordinary range and ingenuity:
time-pieces, air pumps, telescopes and microscopes, meteorological and oceanographic
instruments, the universal joint and many
other items. Hooke believed in the use of
instruments to enhance the senses, as can be
seen from his controversy with the Polish
astronomer Johannes Hevelius, who still
advocated naked-eye instruments for astronomy. Hooke was clearly on the winning side.
Everyone knew that optical instruments had
imperfections, and Hooke applied himself
to the endless task of their improvement.
Michael Hunter writes about Hooke’s
philosophy of nature and his ideas on scientific method. Regarding the latter, Hooke was
not a baconian inductivist (nor, indeed, was
Bacon), but rather a hypothetico-deductivist.
Although Hooke made some use of baconian
tables of ‘presence’, ‘absence’ and ‘degrees’, he
gave a clear example of the formulation and
testing of hypotheses in science. He proposed
the idea of pole-wandering to account for
cyclical interchanges of the levels of land and
sea (to explain the presence of inland fossils).
Such movements in the position of the
geographic poles, if they occurred, would,
over time, produce changes in the direction
of the meridian at any given locality. Hooke
then suggested astronomical methods for
the accurate determination of the meridian,
which should be measured over a period of
years to look for changes. A first attempt at
determination failed because of poor weather
and the idea was not pursued, being pushed
aside by Hooke’s manifold other activities,
but the hypothetico-deductive method was
clearly enunciated.

Instrumental to his success: Hooke relied on optical devices such as this compound microscope.
This example, in a way, renders superfluous historians’ worries about what Hooke
meant by what he mysteriously called ‘philosophical algebra’, presumably some kind
of ‘routinizable’ procedure for conducting
science. Of course, knowing about the ‘form’
of scientific method tells us little about how
Hooke’s creative process worked. Hunter,
unlike another Hooke aficionado, Steve
Shapin, eschews discussion of the significance of Hooke’s social status for his scientific
practice. Rather, Hunter gives an excellent
exposition of Hooke’s Micrographia, which
links back to the discussion of instruments,
and further illustrates his procedures.
Art
Science in site

Taking issue: Happy Hour by Fernando Arias (left) examines AIDS treatments; Daniel Lee focused on evolution for Cheetaman (middle); and Annie Cattrell’s Capacity was inspired by the breath of life.

The website scicult.com is a science-related contemporary art gallery — and an act of love. The small group of ‘sci-art’ specialists who launched it earlier this year are idealists, committed to promoting a quality marriage of art and science.
The group has already signed up 20 significant artists, including Annie Cattrell and Fernando Arias, some of whose work is shown here. The art is exhibited in the online gallery, and some pieces will eventually be available for sale.
But scicult.com is more than a gallery. It publishes an expanding range of intelligent features about contemporary sci-art, and has longer-term plans to develop an ‘introduction service’ for scientists and artists who seek collaborating partnerships. It is also in the process of acquiring a permanent, real-world gallery in which it can exhibit more experimental works.
The website is attractive and functional. Artworks are well displayed against a dark-grey background and can be enlarged with a click of the mouse. The features are timely and well-written, but suffer the plague of many web pages designed primarily for visual impact: the text, reversed out white on dark grey, is a strain to read on the screen.
Alison Abbott
➧ www.scicult.com
Lisa Jardine’s paper is less precisely
focused than the other three. She explicates
details of Hooke’s relations with Robert
Boyle, and writes about Hooke’s work on
pressures, the magnitude of subterranean
gravitational attraction and geology. But
she is chiefly interested in his health and his
self-medication (recorded in his diary), which
eventually more or less killed him. Hooke left
no will, and his family fell on his fortune after
he died. They were not interested in preserving his name, so for many years he was
a rather forgotten figure (Jardine suggests).
But his time has come: the comprehensive
bibliography of London’s Leonardo shows
just how many works have been written
about him since ’Espinasse’s biography.
This prompts a thought. People’s interests
can often be judged by their libraries. Hooke’s
printed library sale catalogue survived, and
some years ago I attempted an approximate
classification of his books. The number of
literary items (languages, grammar, philology, poetry, plays, epigrams and biographical
works) easily exceeded the number in any
of the categories of mathematics, astronomy,
logic, physics, architecture, machines and so
on. Is there perhaps another Hooke to be
explored: the man of letters?
■
David R. Oldroyd is in the School of History and
Philosophy, University of New South Wales,
Sydney 2052, Australia.
Evolution in population dynamics
Peter Turchin
In their study of predator–prey cycles, investigators have assumed that they
do not need to worry about evolution. The discovery of population cycles
driven by evolutionary factors will change that view.
Figure 1 Predator: the rotifer Brachionus
calyciflorus1.
Figure 2 Phase shifts between prey (green curve) and predators (red curve). Such shifts yield a hint about whether the oscillations are driven by the classical predator–prey mechanism. Time plots of population densities: a, no shift; b, a shift of one-quarter of a cycle; c, a shift of half a cycle. Phase plots (prey density against predator density) corresponding to the time plots: d, no shift; e, a shift of one-quarter of a cycle; f, a shift of half a cycle. The rotifer–algal system studied by Yoshida et al.1 exhibited the out-of-phase oscillations seen in c and f, which implies that the cycles must be driven by a factor other than the classical predator–prey interaction. The authors identify evolutionary change in the prey as that factor.
Ecologists studying population dynamics prefer not to bother with the possibility of evolutionary change affecting
their study organisms. This is sensible,
because understanding the results of interactions between, for example, populations
of predators and prey is already a complicated task. Making the assumption that
evolutionary processes are too slow on ecological scales greatly eases the task of
modelling the commonly observed population oscillations. But an elegant study by
Yoshida et al.1 (page 303 of this issue) decisively demonstrates that this simplification
might no longer be tenable.
The story begins at Cornell University
when two members of the group —
ecologist Nelson Hairston Jr and theoretician
Stephen Ellner — teamed up to study the
population dynamics of rotifers (Fig. 1),
microscopic aquatic animals that feed on unicellular green algae. According to ecological
theory, the interaction between predators
(such as rotifers) and prey (algae) has an inherent propensity to oscillate2. Predators eat prey
and multiply, causing prey numbers to crash,
which in turn leads to a decline in the starving
predator population, allowing prey to
increase, and so on. Indeed, when the
investigators placed rotifers and algae in a
‘chemostat’ (a laboratory set-up with continuous inflow of nutrients and outflow of waste)
they observed population cycles3. But the
phase shift between predator and prey cycles
was completely ‘wrong’ — predators peaked
when prey were at the minimum and vice versa, resulting in almost perfectly out-of-phase oscillations. This is a subtle but important point, which requires an explanation.
Suppose we observe three ecosystems
containing predators and their prey. These
three systems are in all ways identical, except
in the phase shift between predators and prey:
no shift (Fig. 2a), a shift of one-quarter of a
cycle (Fig. 2b), and a shift of half a cycle (Fig.
2c). Clearly, there is some sort of dynamical
connection between the two populations in
all cases, but in which case are cycles driven by
the predator–prey interaction? To answer
this question we replot each trajectory in the
‘phase space’ — two-dimensional euclidean
space in which each variable (prey and predator density) is represented with its own axis
(Fig. 2d–f). When oscillations are synchronous, the trajectory goes back and forth along the same path, so that for each value of prey density (say, N1) there is just one corresponding value of predator density (P1). This means
that if we already know the level at which
prey is, knowledge of predator numbers gives
us no additional information. In a differential
equation describing prey dynamics we can
replace all terms containing P with N by
using the relationship depicted in Fig. 2d,
leaving us with a single differential equation
for N. But mathematical theory tells us that
such single-equation models cannot generate cycles4. In other words, simply by noting
that prey and predators oscillate in synchrony
we have disproved the hypothesis that cycles
are driven by the classical predator–prey
mechanism described in the previous paragraph. Some other factor must be involved in
producing the oscillations.
The same logic applies to the case of perfectly out-of-phase oscillations (Fig. 2c, f).
However, in the case in which predators trail
prey by a quarter of a cycle there are two values of P for each N (Fig. 2e). So a single equation for prey does not suffice; we must know
what predators are doing. If predators are at
the low point (P1), prey will increase, but if
predator numbers are high (P2), prey numbers will crash. The full model for the system will have two equations, one for prey and one for predators, and we know that such two-dimensional models are perfectly capable of displaying cyclic behaviour.
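The phase-shift signature itself is easy to check numerically. The Python sketch below integrates the classical Lotka–Volterra equations (with illustrative parameters, nothing to do with the rotifer–algal system) and measures how far predator peaks trail prey peaks; the answer is roughly a quarter of a cycle, the fingerprint of oscillations genuinely driven by the predator–prey interaction.

    import numpy as np

    a, b, c, d, dt = 1.0, 0.5, 0.5, 0.2, 0.001
    n, p = 0.5, 2.0                        # start near the equilibrium (d/c, a/b)
    ns, ps = [], []
    for _ in range(100000):
        n += dt * (a * n - b * n * p)      # prey: growth minus predation
        p += dt * (c * n * p - d * p)      # predator: conversion minus death
        ns.append(n)
        ps.append(p)
    ns, ps = np.array(ns), np.array(ps)

    def peaks(x):                          # indices of local maxima
        return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

    prey_pk, pred_pk = peaks(ns), peaks(ps)
    period = np.mean(np.diff(prey_pk))
    lag = np.mean([pred_pk[pred_pk > i][0] - i
                   for i in prey_pk if (pred_pk > i).any()])
    print(f"predators trail prey by {lag / period:.2f} of a cycle")  # about 0.25

In this quarter-cycle case each prey density pairs with two different predator densities, which is exactly why two equations are needed to generate the cycle.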
We now see why the observation of the perfectly out-of-phase dynamics demonstrates that the rotifer–algal cycles could not be driven by the classical predator–prey mechanism. So what is the actual explanation? The path taken by the Cornell group to answer this question is an almost textbook example of how science is supposed to be done. First they advanced four competing hypotheses, suggested by various known features of algal and rotifer biology. Next they translated the hypotheses into mathematical models and contrasted model predictions with data. Only one model, based on the ability of algae to evolve rapidly in response to predation, successfully matched such features of the data as period lengths and phase relationships5. This is a convincing result, and if we dealt with a natural system we would have to stop there, because we cannot usually manipulate the genetic structure of field populations.

In the laboratory, however, such an experiment is possible, and the successful test reported by Yoshida et al.1 provides the final and most decisive evidence for the rapid-evolution hypothesis. Thus, the out-of-phase cycles result from the following sequence of observed events: under intense predation, the prey population becomes dominated by clones that are resistant to predators; when most prey are resistant, the predators remain at low numbers even though prey abundance recovers; low predation pressure allows non-resistant clones to outcompete resistant ones; so predators can increase again, leading to another cycle.

The experimental demonstration that rapid evolution can drive population cycles means that ecologists will have to rethink several assumptions. To give just one example, there is a long-standing debate in population ecology on whether natural populations can exhibit chaotic dynamics. Chaos (in the mathematical sense) is irregular dynamical behaviour that looks as though it is driven by external random factors, but in fact is a result of the internal workings of the system. Before the discovery of chaos, ecologists thought that all irregularities in observed population dynamics were due to external factors such as fluctuations of climate. Now we realize that population interactions (including those between predators and prey) can also result in erratic-looking — chaotic — dynamics. Incidentally, the chaos controversy was the main reason why the Cornell group decided to study rotifer population cycles.

Some ecologists have argued that chaotic dynamics cause populations to crash to very low densities at which the probability of extinction is high, and that natural selection should therefore cause evolution away from chaos6. Since this argument was advanced, at least two examples of chaotic behaviour have been discovered: in the dynamics of the incidence of childhood diseases such as measles7, and of the population numbers of rodents such as voles and lemmings8. What is more important, however, is that the argument assumes that evolution occurs on much longer timescales than oscillations. But the results of Yoshida et al.1 show that evolution can be an intrinsic part of oscillations, raising the exciting possibility that some populations might rapidly evolve both towards and away from chaos. Perhaps this is the explanation of the puzzling observation that some Finnish vole populations shift from a stable regime to oscillations, whereas others do precisely the reverse9.

This is rank speculation, however, and will have to remain so because we cannot test it experimentally in natural systems. But in the laboratory much more is possible, as the study by Yoshida et al. shows. We can hope that in the near future we will see an experimental investigation of the possibility of rapid evolution to and away from chaos. ■
Peter Turchin is in the Department of Ecology and Evolutionary Biology, University of Connecticut, Storrs, Connecticut 06269, USA.
e-mail: [email protected]
1. Yoshida, T., Jones, L. E., Ellner, S. P., Fussmann, G. F. & Hairston, N. G. Jr Nature 424, 303–306 (2003).
2. May, R. M. in Theoretical Ecology: Principles and Applications 2nd edn (ed. May, R. M.) 5–29 (Sinauer, Sunderland, Massachusetts, 1981).
3. Fussmann, G. F., Ellner, S. P., Shertzer, K. W. & Hairston, N. G. Jr Science 290, 1358–1360 (2000).
4. Edelstein-Keshet, L. Mathematical Models in Biology (Random House, New York, 1988).
5. Shertzer, K. W., Ellner, S. P., Fussmann, G. F. & Hairston, N. G. Jr J. Anim. Ecol. 71, 802–815 (2002).
6. Berryman, A. A. & Millstein, J. A. Trends Ecol. Evol. 4, 26–28 (1989).
7. Tidd, C. W., Olsen, L. F. & Schaffer, W. M. Proc. R. Soc. Lond. B 254, 257–273 (1993).
8. Turchin, P. Complex Population Dynamics: A Theoretical/Empirical Synthesis (Princeton Univ. Press, 2003).
9. Hanski, I. et al. Ecology 82, 1505–1520 (2001).

Accelerator physics
In the wake of success
Robert Bingham

Particle accelerators tend to be large and expensive. But an alternative technology, which could result in more compact, cheaper machines, is proving its viability for the acceleration of subatomic particles.

Since the construction of the first particle accelerator in 1932, high-energy collisions of accelerated ions or subatomic particles (such as electrons and their antimatter counterpart, positrons) have proved a useful tool in physics research. But the escalating size and cost of future machines mean that new, more compact acceleration techniques are being sought. In Physical Review Letters, Blue et al.1 report results from a test facility at the Stanford Linear Accelerator Center (SLAC), California, that have great significance for the future of particle accelerators. Their success heralds an entirely new type of technology, the plasma wake-field accelerator.

When charged particles such as electrons or positrons pass across a gradient of electric field, they are accelerated — how much depends on the steepness of the gradient. In conventional accelerators, a radiofrequency electric field is generated inside metal (often superconducting) accelerator cavities. But the gradient can be turned up only so far before
Figure 1 The wake created by a boat is a familiar image, but it is also the inspiration for a new type of
particle accelerator. Blue et al.1 have demonstrated that waves in a hot, ionized plasma of gas can create
a rippling electric field in their wake, and that this ‘wake field’ can accelerate subatomic particles.
news and views
protein synthesis, and the inhibition of DNA
replication following stress-induced release
of the protein nucleolin8.
There has been a remarkable convergence
of recent evidence — including the Rubbi and
Milner paper1 — suggesting that nucleoli are
important in monitoring cellular stress. The
health of the nucleolus is an excellent surrogate
for the health of the cell, and conditions that
lead to nucleolar disruption are unlikely to be
safe for continued cell proliferation. The
notion that intact nucleoli are necessary to
hold the p53 response in check provides an
attractive model in which a default pathway of
p53 induction and inhibition of cell growth is
overcome only by the maintenance of nucleolar well-being. These ideas reinforce the growing realization that the nucleolus — long
regarded as a mere factory for assembling ribosomal subunits — is a vital command unit in
monitoring and responding to stress.
■
Henning F. Horn and Karen H. Vousden are at the
Beatson Institute for Cancer Research, Switchback
Road, Glasgow G61 1BD, UK.
e-mail: [email protected]
1. Rubbi, C. P. & Milner, J. EMBO J. 22, 6068–6077 (2003).
2. Leonardo, A. D., Linke, S. P., Clarkin, K. & Wahl, G. M. Genes
Dev. 8, 2540–2551 (1994).
3. Siegel, J., Fritsche, M., Mai, S., Brandner, G. & Hess, R. D.
Oncogene 11, 1363–1370 (1995).
4. Lu, X. & Lane, D. P. Cell 75, 765–778 (1993).
5. Sherr, C. J. & Weber, J. D. Curr. Opin. Genet. Dev. 10,
94–99 (2000).
6. Colombo, E., Marine, J.-C., Danovi, D., Falini, B. & Pelicci, P. G.
Nature Cell Biol. 4, 529–533 (2002).
7. Tsai, R. Y. & McKay, R. D. Genes Dev. 16,
2991–3003 (2002).
8. Daniely, Y., Dimitrova, D. D. & Borowiec, A. Mol. Cell. Biol. 22,
6014–6022 (2002).
9. Blander, G. et al. J. Biol. Chem. 274, 29463–29469 (1999).
10. Lohrum, M. A. E., Ludwig, R. L., Kubbutat, M. H. G.,
Hanlon, M. & Vousden, K. H. Cancer Cell 3, 577–587 (2003).
11. Zhang, Y. et al. Mol. Cell. Biol. 23, 8902–8912 (2003).
12. Mazumder, B. et al. Cell 115, 187–198 (2003).
Developmental biology
Asymmetric fixation
Nick Monk
Computer simulations and laboratory experiments have shed light on how
an asymmetric pattern of gene expression is fixed in vertebrate embryos
— an early step towards asymmetric development of the internal organs.
As judged by external appearances, the left and right sides of vertebrate bodies are (more or less) identical. There are, however, consistent left–right differences in the structure and placement of the internal organs. The heart, for instance,
usually forms on the left, the liver on the
right. In recent years, researchers have
uncovered several different molecular
events that are involved in establishing this
left–right asymmetry as embryos develop1.
But the picture that has emerged from these
Figure 1 Fixing asymmetry in vertebrates.
According to convention, embryos are viewed
from the ‘front’ — so the left-hand side of the
embryo appears on the right of this diagram.
An early manifestation of asymmetry in chick
embryos is the expression of the Nodal gene
on the left of the ‘node’ (oval). Raya et al.2 put
forward a model for how this occurs. It was
known from studies in mice that Nodal
expression depends on the Notch pathway,
which is in turn activated by Dll1 and Srr1.
a, At stage 5 of development (19–22 hours after
fertilization), Dll1 expression extends further
towards the head (the anterior) on the left than
on the right. This is the earliest indication that
Notch activity is higher on the left (as Dll1 is a
target of Notch activity). b, During stage 6
(23–25 hours after fertilization), the Dll1 and
Srr1 expression domains are symmetrical. But,
as the fifth pulse of expression of the Lfng gene
sweeps up the embryo, it moves further to the
anterior on the left. Nodal expression is then
induced around the boundary between Dll1 and
Srr1 expression. This occurs only on the left,
where the Ca2+ concentration is high; this might
enhance the affinity of Notch for its ligands.
Note that the node ‘regresses’ posteriorly
between stages 5 and 6.
NATURE | VOL 427 | 8 JANUARY 2004 | www.nature.com/nature
©2004 Nature Publishing Group
studies contains significant gaps. The paper
by Raya et al.2 on page 121 of this issue goes
some way towards completing this picture,
revealing an explicit link between an early,
temporary asymmetry and later, stable
patterns of asymmetric gene expression.
The events that lead to the initial breaking
of left–right symmetry in vertebrate
embryos are not fully understood, but they
are believed to provide only weak transient
biases3. So additional mechanisms must exist
to amplify these biases, converting them into
stable and heritable asymmetric patterns of
gene expression1. The earliest detected feature of left–right asymmetry that is common
to all vertebrates studied is the expression of
the secreted growth-factor protein Nodal on
the left side of the ‘node’. This region, located
on the midline of the embryo, acts as an organizing centre during development. In mice,
Nodal expression has been shown to depend
on a second signalling pathway, centred on
the cell-surface-located receptor Notch4,5.
But how the Notch pathway becomes activated to a sufficient degree to trigger Nodal
expression only on the left side of the node
remains an open question.
Raya et al.2 use a combination of modelling and experimentation to address this
problem in chick embryos. Having determined the patterns of expression of various
key genes around the node, the authors capitalize on this information to construct a
mathematical model of the network of molecular interactions underlying Notch activation and Nodal expression. As Nodal
enhances its own production, it can act as an
on–off switch: only a transient increase in
activity of the Notch pathway is required to
induce stable Nodal expression. Raya et al.
find that the simplest way to achieve this in
their model is to enhance the affinity of
Notch for its activating partners (ligands) —
the Delta-like 1 (Dll1) and Serrate 1 (Srr1)
proteins. So the model suggests that a transient lateral bias in this affinity should be
enough to convert the initially symmetric
pattern of gene expression into one that is
manifestly asymmetric.
The authors carry out a range of experiments that show that this is indeed the case.
In doing so, they uncover a chain of events
that lead from a left–right asymmetry in the
electrochemical potential across the membranes of cells around the node, to the left-specific expression of Nodal. The first step in
this cascade is a previously described left-sided reduction in the activity of a membrane-spanning ion pump (the H+/K+-ATPase); this reduction results in membrane
depolarization6. Raya et al. find that this
depolarization leads to a transient increase
in the extracellular concentration of Ca2+
ions on the left of the node. And this in turn is
necessary for left-sided Nodal expression —
suggesting that it could be Ca2+ that modulates the affinity of Notch for its ligands. In
Plant development
The flowers that bloom in the spring
Deciding when to flower is of crucial
importance to plants; every season
has advantages and disadvantages,
and different plant species adopt
different strategies. Elsewhere in
this issue, Sibum Sung and Richard
M. Amasino (Nature 427, 159–164;
2004) and Caroline Dean and
colleagues (Nature 427, 164–167;
2004) investigate how such
decisions are made at the molecular
level. They uncover a mechanism
that prevents the model plant
Arabidopsis thaliana (pictured) from
blooming until the coming of spring.
Plants take a variety of
environmental factors into account
when choosing when to flower, such
as the length of the day, the plant’s
age and the requirement for an
extended cold period (a process
called vernalization). All of these
factors work in part through the
gene FLOWERING LOCUS C (FLC),
whose protein product blocks
flowering by repressing numerous
genes required for flower
development. During a prolonged
cold spell, for example, the normally
high levels of expression of FLC are
lowered, remaining low even after
warm weather returns.
Several genes are needed for
vernalization: Dean and colleagues
studied two of these, VRN1 and
VRN2, whereas Sung and Amasino
identified another, VIN3. All three
encode proteins with counterparts
in animals that either bind DNA
directly, or change the structure
of the chromatin into which DNA
is packaged.
Following this lead, the two
groups found that vernalization
induces changes in histone proteins
(components of chromatin) in the
vicinity of the FLC gene — and that
VRN1, VRN2 and VIN3 mediate these
support of this, the authors discover that
ligand-dependent activation of Notch in cultured cells is sensitive to Ca2+ concentrations
in the range observed around the chick node.
These findings provide a convincing picture of how Notch can trip the Nodal switch
asymmetrically. The Nodal gene is, however,
expressed only in a restricted region immediately neighbouring the node (Fig. 1),
whereas the Ca2+ concentration increases in
a much broader domain. Raya et al. show that
this spatial restriction depends on a second
input to the Notch pathway. The Notch ligands Dll1 and Srr1 are expressed on both the
left and right of the node, in regions that abut
at an interface that lies roughly perpendicular to the embryo’s head-to-tail axis. It is
around this interface on the left of the node
— where Ca2+ levels are high — that Nodal is
expressed (Fig. 1). This is not a coincidence:
Raya et al. find that experimentally disrupting this interface results in loss of left-sided
Nodal expression.
A third input is required to determine the
time at which the Notch pathway turns on
Nodal expression. Raya et al. show that the
Lunatic fringe (Lfng) protein is an essential
component of this input. The expression of
this protein is highly dynamic — several
short pulses of Lfng expression sweep up the
embryo from tail to head7. Raya and colleagues’ findings suggest that, as these pulses
cross the Dll1–Srr1 interface, they enhance
Notch activation. On the left of the node,
where Notch activity is already higher than
on the right because of the asymmetry in
changes. Specifically, cold causes
the loss of acetyl groups from
particular lysine amino acids in
histone H3. Such patterns of
deacetylation mark genes that are
permanently inactivated or silenced.
The researchers found that whereas
VIN3 is needed to deacetylate H3
during a cold snap, VRN1 and VRN2
are required afterwards, to maintain
the silenced state.
Ca2+ levels, the fifth wave of Lfng expression
raises Notch activity to a high enough level to
allow Nodal to be expressed (Fig. 1).
This work represents a significant
advance in our understanding of how
left–right asymmetry is established. It shows
for the first time how transient non-genetic
biases can become fixed in stable asymmetric
patterns of gene expression. It also provides a
concrete example of a patterning mechanism that is driven by the spatial modulation
of a kinetic parameter (the affinity of Notch
for its ligands)8. A central role is played by the
Notch pathway, which acts as a robust signal
integrator and amplifier, using three disparate inputs to ensure that Nodal is
expressed at the correct time and place. Raya
and colleagues’ approach illustrates the
benefits that can be gained by exploiting the
complementarity of theoretical and experimental approaches, especially in systems as
complex as vertebrate embryos.
There are, of course, a few gaps yet to fill.
Most obviously, how is left–right symmetry
broken in the first place? In mice, an attractive candidate for the symmetry-breaking
event is the right-to-left flow of extracellular
fluid seen around the node9. The motile cilia
that generate this flow have been observed in
several different vertebrates before left-sided
Nodal expression is established, prompting
speculation that fluid flow has an evolutionarily conserved role in generating left–right
asymmetry10,11. But expression of Notch
around the node and fluid flow (or its consequences) appear to be largely independent of
Interestingly, these changes in
histone acetylation are confined to
a region of the FLC gene that was
recently shown to contain a binding
site for the FLOWERING LOCUS D
(FLD) protein (Y. He et al. Science
302, 1751–1754; 2003). FLD is
related to a component of the
human histone deacetylase
complex, and is also involved in
promoting flowering by silencing
FLC. Plants lacking FLD show
both high levels of histone
acetylation and a considerable
reluctance to flower.
Silencing is an effective
means of controlling long-term gene
expression, as it persists even after
cells divide. In animals, switching
silencing on or off is a well-known
way to control development. It
seems that plants share this system,
using it to preserve the memory of
winter’s passing. Christopher Surridge
each other4,5. It is intriguing that fluid flow
also generates a brief increase in Ca2+ levels
to the left of the node — although this rise
is intracellular rather than extracellular12.
Perhaps these seemingly parallel mechanisms are somehow integrated at the level of
Nodal expression.
There are further issues. How does the
juxtaposition of Dll1 and Srr1 expression
enhance Notch activity? How is this potentiated by Lfng? And are there parallels with
the activation of Notch at Fringe-demarcated
boundaries in fruitflies? The dramatic
progress made in recent studies has opened
up many new fronts on which to explore
these fascinating questions.
■
Nick Monk is at the Centre for Bioinformatics and
Computational Biology and in the Department of
Computer Science, University of Sheffield, Regent
Court, 211 Portobello Street, Sheffield S1 4DP, UK.
e-mail: [email protected]
1. Hamada, H., Meno, C., Watanabe, D. & Saijoh, Y. Nature Rev.
Genet. 3, 103–113 (2002).
2. Raya, A. et al. Nature 427, 121–128 (2004).
3. Mercola, M. J. Cell Sci. 116, 3251–3257 (2003).
4. Krebs, L. T. et al. Genes Dev. 17, 1207–1212 (2003).
5. Raya, A. et al. Genes Dev. 17, 1213–1218 (2003).
6. Levin, M., Thorlin, T., Robinson, K. R., Nogi, T. & Mercola, M.
Cell 111, 77–89 (2002).
7. Jouve, C., Iimura, T. & Pourquié, O. Development 129,
1107–1117 (2002).
8. Page, K. M., Maini, P. K. & Monk, N. A. M. Physica D 181,
80–101 (2003).
9. Nonaka, S. et al. Cell 95, 829–837 (1998).
10. Essner, J. J. et al. Nature 418, 37–38 (2002).
11. McGrath, J. & Brueckner, M. Curr. Opin. Genet. Dev. 13,
385–392 (2003).
12. McGrath, J., Somlo, S., Makova, S., Tian, X. & Brueckner, M.
Cell 114, 61–73 (2003).
Engineering complex systems
J. M. Ottino
The emergent properties of complex systems are far removed from the traditional preoccupation of engineers with design and purpose.
Complex systems can be identified by
what they do (display organization
without a central organizing authority
— emergence), and also by how they may or
may not be analysed (as decomposing the
system and analysing sub-parts do not necessarily give a clue as to the behaviour of the
whole). Systems that fall within the scope of
complex systems include metabolic pathways,
ecosystems, the web, the US power grid and
the propagation of HIV infections.
Complex systems have captured the
attention of physicists, biologists, ecologists,
economists and social scientists. Ideas about
complex systems are making inroads in
anthropology, political science and finance.
Many examples of complex networks that
have greatly impacted our lives — such as
highways, electrification and the Internet —
derive from engineering. But although engineers may have developed the components,
they did not plan their connection.
The hallmarks of complex systems are
adaptation, self-organization and emergence — no one designed the web or the
metabolic processes within a cell. And this is
where the conceptual conflict with engineering arises. Engineering is not about letting
systems be. Engineering is about making
things happen, about convergence, optimum design and consistency of operation.
Engineering is about assembling pieces that
work in specific ways — that is, designing
complicated systems.
It should be stressed that ‘complex’ is
different from ‘complicated’. The most elaborate mechanical watches are appropriately
called très compliqué; the Star Caliber Patek Philippe, for example, has over 1,000 parts.
The pieces in complicated systems can be
well understood in isolation, and the whole
can be reassembled from its parts. The components work in unison to accomplish a
function. One key defect can bring the entire
system to a halt; complicated systems do not
adapt. Redundancy needs to be built in when
system failure is not an option.
How can engineers, who have developed
many of the most important complex
systems, stay connected with their subsequent development? Complexity and engineering seem at odds — complex systems are
about adaptation, whereas engineering is
about purpose. However, it is on questions of robustness and failure that both camps merge.
Consider the recent debate about the balance
between performance and risk. Many systems
More than the sum of its parts: complex systems,
such as highways, are constantly evolving.
self-organize to operate in a state of optimum performance, in the face of effects that may potentially destroy them. However, the optimal state is a high-risk state — good returns at the price of possible ruin. Most engineers are risk
averse, and would prefer to eliminate the
probability of catastrophic events. Recent
work borrows concepts from economic
theories (risk aversion, subjective benefit of
outcomes) and argues that one can completely remove the likelihood of total ruin with only a minor loss of performance. This falls
squarely in the realm of engineering, but the
discussion has been driven by physics.
Engineers might also learn from social
scientists. In social sciences, there is no such
luxury as starting de novo — systems are
already formed, one has to interpret and
explain. Many engineering systems, such as
the web or the US power grid, also fall into
this category. How will they behave? How
robust are they? How might they fail?
Although systems where self-organization
has already happened present challenges,
there are also opportunities in situations
where self-organization can be part of the
design. Could we intelligently guide systems
that want to design themselves? Is it possible
to actually design systems that design themselves in an intelligent manner? Self-organization and emergence have been part of
materials science and engineering for quite some time; after all, lasers and superconductivity depend on collective phenomena.
Emergent properties should strike a chord in
materials processing, and also in the
nanoworld. At larger scales, there is already
NATURE | VOL 427 | 29 JANUARY 2004 | www.nature.com/nature
work in directed self-assembly and complex
dissipative systems, which organize when
there is energy input. However, practical processing by self-assembly is still not a
reality, and there is work here for engineers.
But the choice need not be just between
designing everything at the outset and letting
systems design themselves. Most design
processes are far from linear, with multiple
decision points and ideas ‘evolving’ before
the final design ‘emerges’. However, once
finished, the design itself does not adapt. Here,
engineers are beginning to get insight from
biology. The emergence of function — the
ability of a system to perform a task — can be
guided by its environment, without imposing
a rigid blueprint. For example, just like the
beaks of Darwin’s finches, a finite-element model of a component shape such as an
airfoil can evolve plastically through a continuum of possibilities under a set of constraints,
so as to optimize the shape for a given function.
Engineers calculate, and calculation
requires a theory, or at least an organized
framework. Could there be laws governing
complex systems? If by ‘laws’ one means
something from which consequences can be
derived — as in physics — then the answer
may be no. But how about a notch below, such
as discovering relationships with caveats, as
in the ideal gas ‘law’, or uncovering power-law
relationships? Then the answer is clearly yes.
Advances will require the right kinds of
tools coupled with the right kind of intuition.
However, current engineering courses do not teach self-organization, and few
cover computer modelling experiments.
Despite significant recent advances in our
understanding of complex systems, the field
is still in flux, and there is still a lack of
consensus as to where the centre is — for
some, it is exclusively cellular automata; for
others it is networks. However, the landscape
is bubbling with activity, and now is the time to
get involved. Engineering should be at the
centre of these developments, and contribute new theory and tools. ■
J. M. Ottino is at the R. R. McCormick School of
Engineering and Applied Sciences, Northwestern
University, Evanston, Illinois 60208, USA.
FURTHER READING
Ball, P. Critical Mass (Heinemann, Portsmouth, 2004).
Barabási, A.-L. Linked: The New Science of Networks
(Perseus Publishing, Cambridge, 2002).
Hartwell, L. H. et al. Nature 402 (suppl.), C47–C52 (1999).
Center for Connected Learning and Computer-Based
Modeling
➧ http://ccl.northwestern.edu/netlogo
sheets is necessarily a slow process, limited by
the transfer of moisture through the atmosphere, and it appears likely that this process
initially limited the rate of climatic cooling.
Then, approximately 114,000 years ago,
with temperatures having dropped less than
halfway to typical full glacial values, the first
rapid climate changes began — as documented here for the first time. The timing
and characteristics of these events offer an
invaluable subject for climate modellers; the
mechanisms underlying rapid climate change
are still being debated, and climate models
have not yet convincingly predicted them.
There is much work yet to be done on
the NGRIP core, especially examining the
high-resolution characteristics of the record,
quantifying the temperature history, and
investigating the biogeochemical changes
that accompanied the transition to glacial
climate. The overview presented in this
issue1 is sufficient to demonstrate that it is a
valuable and remarkable core. Yet the
NGRIP project has not achieved its primary
goal: a reasonably complete record of climate
during the last interglacial. How warm did
this period get? Were any parts of it climatically unstable? Such information is crucial
for evaluating climate models of a warmer
world, and for understanding sea-level
changes induced by melting of the Greenland
ice sheet. Analysis of basal ices gives direct
and compelling evidence that the ice sheet
retreated significantly during this period9.
There is only one way to fill this gap. A
new ice core will have to be extracted from
the dry regions of north-central Greenland,
but at a safe distance from the heat-flow
anomaly discovered at the NGRIP site. The
cost and effort of such a project are trivial
compared with the possible impact of a
rise in sea level, and maybe even rapid
climate change, induced by warming of the
Arctic region.
■
Kurt M. Cuffey is in the Department of Geography,
507 McCone Hall, University of California,
Berkeley, California 94720-4740, USA.
e-mail: [email protected]
1. North Greenland Ice Core Project members Nature 431,
147–151 (2004).
2. Hammer, C., Mayewski, P. A., Peel, D. & Stuiver, M. (eds)
J. Geophys. Res. 102 (C12), 26317–26886 (1997).
3. Severinghaus, J. P. & Brook, E. J. Science 286, 930–934 (1999).
4. Chappellaz, J., Brook, E., Blunier, T. & Malaize, B. J. Geophys.
Res. 102, 26547–26557 (1997).
5. Greenland Ice-Core Project members Nature 364, 203–208
(1993).
6. Fahnestock, M., Abdalati, W., Joughin, I., Brozena, J. &
Cogineni, P. Science 294, 2338–2342 (2001).
7. Marshall, S. J. & Cuffey, K. M. Earth Planet. Sci. Lett. 179, 73–90
(2000).
8. Committee on Abrupt Climate Change Abrupt Climate Change:
Inevitable Surprises (National Academies Press, Washington
DC, 2002).
9. Koerner, R. M. Science 244, 964–968 (1989).
Evolutionary biology
Early evolution comes full circle
William Martin and T. Martin Embley
Biologists use phylogenetic trees to depict the history of life. But
according to a new and roundabout view, such trees are not the best
way to summarize life’s deepest evolutionary relationships.
Charles Darwin described the evolutionary process in terms of trees, with
natural variation producing diversity
among progeny and natural selection shaping that diversity along a series of branches
over time. But in the microbial world things
are different, and various schemes have been
devised to take both traditional and molecular approaches to microbial evolution into
account. Rivera and Lake (page 152 of this
issue1) provide the latest such scheme, based
on analysing whole-genome sequences, and
they call for a radical departure from conventional thinking.
Unknown to Darwin, microbes use two
mechanisms of natural variation that disobey the rules of tree-like evolution: lateral
gene transfer and endosymbiosis. Lateral
gene transfer involves the passage of genes
among distantly related groups, causing
branches in the tree of life to exchange bits
of their fabric. Endosymbiosis — one cell
living within another — gave rise to the
double-membrane-bounded organelles of
eukaryotic cells: mitochondria (the powerhouses of the cell) and chloroplasts (of no
further importance here). At the endosymbiotic origin of mitochondria, a free-living
proteobacterium came to reside within an
archaebacterially related host — see Fig. 1 for
terminology. This event involved the genetic
union of two highly divergent cell lineages,
causing two deep branches in the tree of life
to merge outright. To this day, biologists
cannot agree on how often lateral gene transfer and endosymbiosis have occurred in life’s
history; how significant either is for genome
evolution; or how to deal with them mathematically in the process of reconstructing
evolutionary trees. The report by Rivera and
Lake1 bears on all three issues. And instead of
a tree linking life’s three deepest branches
(eubacteria, archaebacteria and eukaryotes),
they uncover a ring.
The ring comes to rest on evolution’s
sorest spot — the origin of eukaryotes. Biologists fiercely debate the relationships between
eukaryotes (complex cells that have a nucleus
Prokaryotes Cells lacking a true nucleus.
Gene transcription occurs in the cytoplasm.
Archaebacteria Prokaryotes with a plasma
membrane of isoprene ether lipids. Protein
synthesis occurs on distinctive,
archaebacterial-type ribosomes. Synonymous
with Archaea.
Eubacteria Prokaryotes with a plasma
membrane of fatty acid ester lipids. Protein
synthesis occurs on distinctive, eubacterial-type ribosomes. Synonymous with Bacteria.
Eukaryotes Cells possessing a true nucleus
(lacking in prokaryotes), separated from the
cytoplasm by a membrane contiguous with
the endoplasmic reticulum (also lacking in
prokaryotes). Include double-membrane-bounded cytoplasmic organelles derived from
eubacterial endosymbionts11–13. The plasma
membrane consists of fatty acid ester lipids.
Protein synthesis occurs on ribosomes related
to the archaebacterial type. Synonymous with
Eucarya.
Proteobacteria A name introduced for the
group that includes the purple bacteria and
relatives18. The endosymbiotic ancestor
of mitochondria was a member of the
proteobacteria as they existed more than
1.4 billion years ago.
Figure 1 Who’s who among microbes. In 1938,
Edouard Chatton coined the terms prokaryotes
and eukaryotes for the organisms that biologists
still recognize as such3. In 1977 came the report
of a deep dichotomy among prokaryotes19 and
designation of the newly discovered groups as
eubacteria and archaebacteria. In 1990, it was
proposed2 to rename the eukaryotes, eubacteria
and archaebacteria as eucarya, bacteria and
archaea. Although widely used, the latter
names left the memberships of these groups
unchanged, so the older terms have priority.
and organelles) and prokaryotes (cells that
lack both). For a decade, the dominant
approach has involved another intracellular
structure called the ribosome, which consists
of complexes of RNA and protein, and is present in all living organisms. The genes encoding an organism’s ribosomal RNA (rRNA)
are sequenced, and the results compared with
those for rRNAs from other organisms. The
ensuing tree2 divides life into three groups
called domains (Fig. 2a). The usefulness of
rRNA in exploring biodiversity within the
three domains is unparalleled, but the proposal for a natural system of all life based on
rRNA alone has come increasingly under fire.
Ernst Mayr3, for example, argued forcefully that the rRNA tree errs by showing
eukaryotes as sisters to archaebacteria, thereby obscuring the obvious natural division
between eukaryotes and prokaryotes at the
level of cell organization (Fig. 2b). A central
concept here is that of a tree’s ‘root’, which
defines its most ancient branch and hence the
relationships among the deepest-diverging
lineages. The eukaryote–archaebacteria sister-grouping in the rRNA tree hinges on the
position of the root (the short vertical line at
the bottom of Fig. 2a). The root was placed on
the eubacterial branch of the rRNA tree based
on phylogenetic studies of genes that were
duplicated in the common ancestor of all
life2. But the studies that advocated this placement of the root on the rRNA tree used, by
today’s standards, overly simple mathematical models and lacked rigorous tests for
alternative positions4.
One discrepancy is already apparent in
analyses of a key data set used to place the
root, an ancient pair of related proteins,
called elongation factors, that are essential
for protein synthesis5. Although this data set
places the root on the eubacterial branch, it
also places eukaryotes within the archaebacteria, not as their sisters5. Given the
uncertainties of deep phylogenetic trees
based on single genes4, a more realistic view is
that we still don’t know where the root on the
rRNA tree lies and how its deeper branches
should be connected.
A different problem with the rRNA tree,
as Ford Doolittle6 has argued, is that lateral
gene transfer pervades prokaryotic evolution. In that view, there is no single tree of
genomes to begin with, and the concept of a
natural system with bifurcating genome
lineages should be abandoned (Fig. 2c).
Added to that are genome-wide sequence
comparisons showing eukaryotes to possess
far more eubacteria-like genes than archaebacteria-like genes7,8, in diametric opposition to the rooted rRNA tree, which accounts
for only one gene. Despite much dissent, the
rRNA tree has nonetheless dominated biologists’ thinking on early evolution because of
the lack of better alternatives.
Rivera and Lake’s ring of life1 (Fig. 2d)
includes the analysis of hundreds of genes,
not just one. It puts prokaryotes in one bin
and eukaryotes in another3; it allows lateral
gene transfer to be used in assessing genome-based phylogeny7; and it recovers the connections between prokaryote and eukaryote
genomes as no single gene tree possibly
could. Their method — ‘conditioned reconstruction’ — uses shared genes as a measure
of genome similarity but does not discriminate between vertically and horizontally
inherited genes. This method does not uncover all lateral gene transfer in all genomes.
But it does uncover the dual nature of
eukaryotic genomes7,8, which in the new
scheme sit simultaneously on a eubacterial
branch and an archaebacterial branch. This
is what seals the ring.
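Conditioned reconstruction itself is considerably more subtle, but its underlying currency — the presence or absence of shared genes — is easy to illustrate. The toy Python sketch below (with invented gene-family names, not real genome data) scores genome similarity by the fraction of shared gene families, without asking whether each gene was inherited vertically or transferred laterally:

```python
def shared_gene_similarity(genome_a, genome_b):
    """Score two genomes by the fraction of gene families they share
    (a Jaccard index over presence/absence), making no distinction
    between vertically and laterally inherited genes."""
    a, b = set(genome_a), set(genome_b)
    return len(a & b) / len(a | b)

# Invented, purely illustrative gene-family inventories:
eukaryote = {"rpl3", "ef1a", "atpA", "recA", "gyrB"}
archaebacterium = {"rpl3", "ef1a", "atpA", "flaB"}
eubacterium = {"recA", "gyrB", "atpA", "ftsZ"}

# The eukaryote scores equally against both prokaryotic groups --
# the dual ancestry that lets the 'tree' close into a ring.
print(shared_gene_similarity(eukaryote, archaebacterium))  # 0.5
print(shared_gene_similarity(eukaryote, eubacterium))      # 0.5
```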
As the simplest interpretation of the ring,
Rivera and Lake1 propose that eukaryotic
chromosomes arose from a union of archaebacterial and eubacterial genomes. They suggest that the biological mechanism behind
that union was an endosymbiotic association
between two prokaryotes. The ring is thus at
Figure 2 Four schemes of natural order in the microbial world. a, The three-domain proposal based
on the ribosomal RNA tree, as rooted with data from anciently duplicated protein genes. b, The two-empire proposal, separating eukaryotes from prokaryotes and eubacteria from archaebacteria. c, The
three-domain proposal, with continuous lateral gene transfer among domains. d, The ring of life,
incorporating lateral gene transfer but preserving the prokaryote–eukaryote divide. (Redrawn from
refs 2, 3, 6 and 1, respectively.)
odds with the view of eukaryote origins by
simple Darwinian divergence9,10, but is consistent with symbiotic models of eukaryote
origins, variants of which abound11. Some
symbiotic models suggest that an archaebacterium–eubacterium symbiosis was followed
by the endosymbiotic origin of mitochondria; others suggest that the host cell in
which mitochondria settled was an archaebacterium outright.
Rivera and Lake’s findings do not reveal
whether a symbiotic event preceded the
mitochondrion. But — importantly — they
cannot reject the mitochondrial endosymbiont as the source of the eubacterial genes
in eukaryotes. The persistence of the mitochondrial compartment, especially in anaerobic eukaryotic lineages12,13, among which
the most ancient eukaryote lineages have
traditionally been sought, provides phylogeny-independent evidence that the endosymbiotic origin of mitochondria occurred
in the eukaryotic common ancestor. Phylogeny-independent evidence for any earlier
symbiosis is lacking. So the simpler, hence
preferable, null hypothesis is that eubacterial
genes in eukaryotes stem from the mitochondrial endosymbiont.
Rejecting that null hypothesis will
require improved mathematical tools for
probing deep phylogeny. Indeed, it is not
clear if conditioned reconstruction alone
is sensitive enough to do this — analyses
of individual genes are still needed. But
NATURE | VOL 431 | 9 SEPTEMBER 2004 | www.nature.com/nature
eukaryotes are more than 1.4 billion years
old14 and such time-spans push current
tree-building methods to, and perhaps well
beyond, their limits15.
Looking into the past with genes is like
gazing at the stars with telescopes: it involves
a lot of mathematics16, most of which the
stargazers never see. With better telescopes
we can see more details further back in
time, but nobody knows for sure how good
today’s gene-telescopes really are. Mathematicians have a well-developed theory for
building trees from recently diverged gene
sequences17, but mathematical methods for
recovering ancient mergers in the history
of life are still rare. Rivera and Lake’s ring
depicts the eukaryotic genome for what it is
— a mix of genes with archaebacterial and
eubacterial origins.
■
William Martin is at the Institut für Botanik III,
Heinrich-Heine Universität Düsseldorf,
40225 Düsseldorf, Germany.
e-mail: [email protected]
T. Martin Embley is in the School of Biology,
The Devonshire Building, University of Newcastle
upon Tyne, Newcastle upon Tyne NE1 7RU, UK.
e-mail: [email protected]
1. Rivera, M. C. & Lake, J. A. Nature 431, 152–155 (2004).
2. Woese, C., Kandler, O. & Wheelis, M. L. Proc. Natl Acad. Sci.
USA 87, 4576–4579 (1990).
3. Mayr, E. Proc. Natl Acad. Sci. USA 95, 9720–9723 (1998).
4. Penny, D., Hendy, M. D. & Steel, M. A. in Phylogenetic Analysis
of DNA Sequences (eds Miyamoto, M. M. & Cracraft, J.)
155–183 (Oxford Univ. Press, 1991).
5. Baldauf, S., Palmer, J. D. & Doolittle, W. F. Proc. Natl Acad. Sci.
USA 93, 7749–7754 (1996).
6. Doolittle, W. F. Science 284, 2124–2128 (1999).
7. Rivera, M. C., Jain, R., Moore, J. E. & Lake, J. A. Proc. Natl Acad.
Sci. USA 95, 6239–6244 (1998).
8. Esser, C. et al. Mol. Biol. Evol. 21, 1643–1660 (2004).
9. Kandler, O. in Early Life on Earth (ed. Bengston, S.) 152–160
(Columbia Univ. Press, New York, 1994).
10. Woese, C. R. Proc. Natl Acad. Sci. USA 99, 8742–8747 (2002).
11. Martin, W., Hoffmeister, M., Rotte, C. & Henze, K. Biol. Chem.
382, 1521–1539 (2001).
12. Embley, T. M. et al. IUBMB Life 55, 387–395 (2003).
13. Tovar, J. et al. Nature 426, 172–176 (2003).
14. Javaux, E. J., Knoll, A. H. & Walter, M. R. Nature 412, 66–69 (2001).
15. Penny, D., McComish, B. J., Charleston, M. A. & Hendy, M. D.
J. Mol. Evol. 53, 711–723 (2001).
16. Semple, C. & Steel, M. A. Phylogenetics (Oxford Univ. Press,
2003).
17. Felsenstein, J. Inferring Phylogenies (Sinauer, Sunderland, MA,
2004).
18. Stackebrandt, E., Murray, R. G. E. & Trüper, H. G. Int. J. Syst.
Bact. 38, 321–325 (1988).
19. Woese, C. R. & Fox, G. E. Proc. Natl Acad. Sci. USA 74,
5088–5090 (1977).
Neurobiology
Feeding the brain
Claire Peppiatt and David Attwell
In computationally active areas of the brain, the blood flow is increased
to provide more energy to nerve cells. New data fuel the controversy
over how this energy supply is regulated.
Like all tissues, our brains need energy
to function, and this comes in the
form of oxygen and glucose, carried in
the blood. The brain’s information-processing capacity is limited by the amount of
energy available1, so, as has been recognized
for more than a century, blood flow is
increased to brain areas where nerve cells
are active2. This increase in flow provides
the basis for functional magnetic resonance
imaging of brain activity2, but exactly how
the flow is increased is uncertain. On page
195 of this issue, Mulligan and MacVicar3
reveal a previously unknown role for non-neuronal brain cells called astrocytes in
controlling the brain’s blood flow. Intriguingly, the new data contradict a previous
suggestion for how astrocytes regulate flow.
Figure 1 shows recent developments in
our understanding of how the blood flow in
the brain is controlled. Glucose and oxygen
are provided to neurons through the walls of
capillaries, the blood flow through which is
controlled by the smooth muscle surrounding precapillary arterioles. Dedicated neuronal networks in the brain signal to the
smooth muscle to constrict or dilate arterioles and thus decrease or increase blood
flow2; for example, neurons that release the
neurotransmitter molecule noradrenaline
constrict arterioles. In addition, the neuronal
activity associated with information processing increases local blood flow. This is in part
due to neurons that release the transmitter
glutamate, which raises the intracellular
concentration of Ca2+ ions in other neurons,
thereby activating the enzyme nitric oxide
(NO) synthase and leading to the release of
NO. This in turn dilates arterioles4.
A radical addition to this scheme came
with the claim of Zonta et al.5 that glutamate
also works through astrocytes in the brain to
dilate arterioles. Glutamate raises the Ca2+
concentration in astrocytes, and thus activates the enzyme phospholipase A2, which
produces a fatty acid, arachidonic acid. This
is converted by the enzyme cyclooxygenase
into prostaglandin derivatives, which dilate
arterioles. An attractive aspect of a role for
astrocytes in controlling blood flow is that,
although most of their cell membrane surrounds neurons and so can sense neuronal
glutamate release, they also send out an extension, called an endfoot, close to blood vessels:
thus, astrocyte anatomy is ideal for regulating
blood flow in response to local neuronal
activity6. In this scheme, a rise in the Ca2+ levels in astrocytes, just like in neurons, would
dilate arterioles and increase local blood flow.
The new data contradict these results.
Mulligan and MacVicar3 inserted a ‘caged’
form of Ca2+ into astrocytes in brain slices
taken from rats and mice. By using light to
suddenly uncage the Ca2+, they found that
an increase in the available Ca2+ concentration within astrocytes produces a constriction of nearby arterioles that could powerfully decrease local blood flow (the 23%
decrease in diameter seen would increase the
local resistance to blood flow threefold, by
Poiseuille’s law).
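The arithmetic behind that threefold figure is worth making explicit. Poiseuille resistance scales as the inverse fourth power of vessel radius, so a 23% reduction in diameter gives:

```latex
% Resistance of a cylindrical vessel under Poiseuille flow scales as r^{-4}.
R \propto \frac{1}{r^{4}}
\qquad\Rightarrow\qquad
\frac{R_{\text{constricted}}}{R_{\text{rest}}}
  = \left(\frac{0.77\,r_{\text{rest}}}{r_{\text{rest}}}\right)^{-4}
  = 0.77^{-4} \approx 2.8 .
```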
They show that this constriction results
from Ca2+ activating phospholipase A2 to
generate arachidonic acid, as above; the twist
is that this arachidonic acid is then processed
by a cytochrome P450 enzyme (CYP) into a
constricting derivative. The authors propose
that this derivative is 20-hydroxyeicosatetraenoic acid (20-HETE),formed by CYP4A
in the arteriole smooth muscle7 (but the
high concentration of CYP4A blocker used
to deduce this might also block other
enzymes8). The authors also found that
noradrenaline evoked a rise in astrocyte Ca2+
concentration and arteriole constriction.
Unexpectedly, therefore, it seems that rather
than noradrenaline-producing neurons signalling directly to smooth muscle, as is
conventionally assumed, much of their constricting action may be mediated indirectly
by astrocytes. In fact, this is consistent with the
finding that many noradrenaline-release sites
on neurons are located near astrocytes9.
Is it possible to reconcile the new data3 (a
rise in astrocyte Ca2+ levels constricts arterioles) with those of Zonta et al.5 (a rise in Ca2+
dilates arterioles)? A likely solution is that the
increased concentration of Ca2+ in astrocytes
leads to the production of both constricting
Figure 1 Controlling blood flow in the brain. Computationally active neurons release glutamate
(top left). This activates neuronal NMDA-type receptors, Ca2+ influx through which leads to
nitric oxide synthase (NOS) releasing NO, which works on smooth muscle to dilate arterioles.
This increases the supply of oxygen and glucose to the brain. Glutamate also spills over to astrocyte
receptors (mGluRs), which raise the Ca2+ levels in astrocytes and generate arachidonic acid (AA)
via phospholipase A2 (PLA2). Cyclooxygenase-generated derivatives of AA (PGE2) dilate arterioles5,
whereas, as Mulligan and MacVicar show3, the CYP4A-generated derivative 20-HETE constricts
them. Astrocyte Ca2+ levels can also be raised by noradrenaline — released from dedicated neurons
that control the circulation — which works through α1 receptors (bottom left). Dotted lines show
messengers diffusing between cells. The detailed anatomy of synapses and astrocytes is not portrayed.
Now you see it, now you don't
Cell doctrine: modern biology and medicine see the cell as the fundamental building block of living
organisms, but this concept breaks down at different perspectives and scales.
Neil D. Theise

Complexity theory, which describes emergent self-organization of complex adaptive systems, has gained a prominent position in many sciences. One powerful aspect of emergent self-organization is that scale matters. What appears to be a dynamic, ever changing organizational panoply at the scale of the interacting agents that comprise it, looks to be a single, functional entity from a higher scale. Ant colonies are a good example: from afar, the colony appears to be a solid, shifting, dark mass against the earth. But up close, one can discern individual ants and describe the colony as the emergent self-organization of these scurrying individuals. Moving in still closer, the individual ants dissolve into myriad cells.

Scale up: hundreds of individual ants form a superorganism.

Cells fulfill all the criteria necessary to be considered agents within a complex system: they exist in great numbers; their interactions involve homeostatic, negative feedback loops; and they respond to local environmental cues with limited stochasticity (‘quenched disorder’). Like any group of interacting individuals fulfilling these criteria, they self-organize without external planning. What emerges is the structure and function of our tissues, organs and bodies.

This view is in keeping with cell doctrine — the fundamental paradigm of modern biology and medicine whereby cells are the fundamental building blocks of all living organisms. Before cell doctrine emerged, other possibilities were explored. The ancient Greeks debated whether the body’s substance was an endlessly divisible fluid or a sum of ultimately indivisible subunits. But when the microscopes of Theodor Schwann and Matthias Schleiden revealed cell membranes, the debate was settled. The body’s substance is not a fluid, but an indivisible box-like cell: the magnificently successful cell doctrine was born.

But a complexity analysis presses for consideration of a level of observation at a lower scale. At the nanoscale, one might suggest that cells are not discrete objects; rather, they are dynamically shifting, adaptive systems of uncountable biomolecules. Do biomolecules fulfill the necessary criteria for agents forming complex systems? They obviously exist in sufficient quantities to generate emergent phenomena; they interact only on the local level, without monitoring the whole system; and many homeostatic feedback loops govern these local interactions. But do their interactions display quenched disorder; that is, are they somewhere between being completely random and rigidly determined? Analyses of individual interacting molecules, and the recognition that at the nanoscale quantum effects may have a measurable impact, suggest that the answer is yes. In particular, the behaviours of increasing numbers of biomolecular ‘machines’ are seen to rely on brownian motion of the watery milieu in which they are suspended. Previously it was thought that binding of adenosine triphosphate (ATP) and hydrolysis releases the energy that drives these tiny machines. Now, it seems that this energy is too small to move the molecular machine mechanically, but is large enough to constrain the brownian-driven mechanics to achieve the required movement. This constrained movement is neither completely stochastic (that is, brownian), nor rigidly determined (by structure or by consumption of ATP). Examples of such phenomena include actin/myosin sliding, the activation of receptors by ligand binding, and the transcription of DNA to messenger RNA.

So, at the nanoscale, cells cease to exist, in the same way that the ant colony vanishes at the perceptual level of an ant. On one level, cells are indivisible things; on another they dissolve into a frenzied, self-organizing dance of smaller components. The substance of the body becomes self-organized fluid-borne molecules, which know nothing of such delineating concepts as ‘intracellular’ and ‘extracellular’. The other side of the ancient argument seems to hold: the body is a fluid continuum.

Is this merely poetic description? I suggest not. The fragility of the cell as the fundamental unit has been described before as ‘cellular uncertainty’, akin to the Heisenberg uncertainty principle: any attempt to examine a cell inevitably disrupts its microenvironment, thereby changing the state of the cell. But are cells fundamentally ‘uncertain’ or is it possible to conceive of a technology — a perfect MRI machine, if you will — that could collect the data to describe a cell completely without altering it? Complexity analysis suggests that no machine could ever achieve this. The cell as a definable unit exists only on a particular level of scale. Higher up, the cell has no observational validity. Lower down, the cell as an entity vanishes, having no independent existence. The cell as a thing depends on perspective and scale: “now you see it, now you don’t,” as a magician might say.

This analysis also allows for hypothesis-based investigations of phenomena considered outside the bounds of ‘traditional’ biology. A prime example is acupuncture, wherein application of stimuli to special points (meridians) on the body accomplishes remote physiological effects. The meridians do not correspond to identifiable anatomical subunits. So acupuncture, although testable and useful, cannot be explained by cell doctrine and conventional anatomy.

The validity of cell doctrine depends on the scale at which the body is observed. To limit ourselves to the perspective of this model may mean that explications of some bodily phenomena remain outside the capacity of modern biology. It is perhaps time to dethrone the doctrine of the cell, to allow alternative models of the body for study and exploitation in this new, postmodern era of biological investigation. ■

Neil D. Theise is at the Division of Digestive Diseases, Beth Israel Medical Center, First Avenue at 16th Street, New York, New York 10003, USA.

FURTHER READING
Theise, N. D. & d’Inverno, M. Blood Cells Mol. Dis. 32, 17–20 (2004).
Theise, N. D. & Krause, D. S. Leukemia 16, 542–548 (2002).
Kurakin, A. Dev. Genes Evol. 215, 46–52 (2005).
50 YEARS AGO
With the appearance of a new
journal, Virology (pp. 140. New
York: Academic Press, Inc.; 9
dollars per vol.), this useful, but
ugly, word of doubtful parentage
presumably takes its place as the
official designation of the study
of viruses.
From Nature 9 July 1955.
100 YEARS AGO
Even with things as they are,
Oxford and Cambridge, though
much injured by competitive
examinations, have been far less
injured than England in general;
and this they owe to the
residential system. Little thought
of, perhaps neglected, by the
builders, the head-stone of the
educational edifice is here to be
found. Where mind meets mind
in the free intercourse of youth
there springs from the contact
some of the fire which, under our
present system, is rarely to be
obtained in any other way; and
not only this, but many other
priceless advantages in the battle
for life are also conferred. To these
influences we owe in large part all
that is best in the English character,
and so valuable are the qualities
thus developed, or at least greatly
strengthened, that we regard
residential colleges as essential
to the success and usefulness of
the newer universities.
ALSO:
An Angler’s Hours. By H. T.
Sherringham. Mr. Sherringham
deserves the thanks of all anglers
who have an idle hour and no
fishing for having re-published his
essays in book form, and he who
is forced by sad circumstance to
enjoy his fishing vicariously will
find his time well spent in our
scribe’s company... he despairs
of nothing, but finds good in all;
if there are no fish he can study
nature, and if there is no water
he can shrewdly meditate on the
ways of fish and men; an hour
with him and his rod by a troutless
tarn is as good as an hour by the
Kennet in the mayfly time… A
word of praise is also due to the
publishers, who have produced
a book the size and print of which
add to its convenience as an
adjunct to a pipe, an easy chair,
and idleness.
From Nature 6 July 1905.
Figure 1 | Arion lusitanicus — conservation agent.
grassland sown with rye grass (Lolium perenne)
and white clover (Trifolium repens) on a former
arable field that contained its own residual seed
bank of weed and other plant species.
The surface soil was thoroughly mixed to
avoid local patchiness in the seed bank, and a
series of experimental 2 × 2-m plots was established, each surrounded by a slug-proof fence.
Local slugs were placed in selected plots at a
density of 22 individuals per plot during the
first year, with an additional 10 slugs in subsequent years; this represents a high but realistic
concentration of the molluscs. Wooden slug
shacks provided shelter for these easily desiccated creatures in times of drought. The control plots were treated with molluscicide to
prevent any inadvertent slug invasion. Analysis of the vegetation composition over the
following three years provided the data needed
to determine the effect of slug grazing.
In the first two years, the species richness
and the diversity were lower in the slug-grazed
plots than in controls. (Species richness is the
number of species per plot; diversity also takes
into account the proportions of different
species, and is measured by the Shannon
diversity index.) This result confirms the
expectation that slug selection of seedlings
would reduce the number of
species from the local seed bank
that become established. In the
third year of the experiment, however, species richness in the grazed
plots was 23% higher than in the
controls.
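For reference, writing p_i for the proportion of individuals belonging to species i and S for the number of species (the richness), the Shannon diversity index is:

```latex
% Shannon diversity index over S species with proportions p_i.
H' = -\sum_{i=1}^{S} p_i \ln p_i .
```

H' increases both with the number of species present and with the evenness of their proportions, which is why holding back the dominant grasses raises it.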
This enhancement of richness and diversity in
the more mature stages can be
attributed to the consistent removal of biomass by the slugs. The
yield from primary productivity
was reduced by around 25% as a
result of slug grazing (comparable
to the removal of biomass by sheep
in a grazed pasture4). Holding back the development of dominance by fast-growing species
provided an opportunity for the germination
and establishment of less-competitive species,
including annual plants. In other words, slug
grazing permits the establishment of plant
species that might otherwise find it difficult to
maintain populations in developing grassland.
So, on this account at least, slugs are good
for diversity.
Slugs will never act as sheep substitutes by
creating a pastorally idyllic landscape and
inspiring poets. But they could well be an
answer to the conservationist’s prayer —
silently grazing beneath our feet, they provide
an alternative way to mow a meadow.
■
Peter D. Moore is in the Division of Life Sciences,
King’s College London, Franklin–Wilkins Building,
150 Stamford Street, London SE1 9NH, UK.
e-mail: [email protected]
1. Buschmann, H., Keller, M., Porret, N., Dietz, H.
& Edwards, P. J. Funct. Ecol. 19, 291–298 (2005).
2. Tansley, A. G. (ed.) Types of British Vegetation (Cambridge
Univ. Press, 1911).
3. Grime, J. P. Plant Strategies, Vegetation Processes, and
Ecosystem Properties (Wiley, Chichester, 2001).
4. Perkins, D. F. in Production Ecology of British Moors and
Montane Grasslands (eds Heal, O. W. & Perkins, D. F.)
375–395 (Springer, Heidelberg, 1978).
NONLINEAR DYNAMICS
When instability makes sense
Peter Ashwin and Marc Timme
Mathematical models that use instabilities to describe changes of weather
patterns or spacecraft trajectories are well established. Could such principles
apply to the sense of smell, and to other aspects of neural computation?
Dynamical stability is ubiquitous — and more often than not it is desirable.
Travelling down a straight road, a cyclist with
stable dynamics will continue in more or less a
straight line despite a gust of wind or a bumpy
surface. In recent years, however, unstable
dynamics has been identified not only as being
present in diverse processes, but even as being
beneficial. A further exciting candidate for
this phenomenon is to be found in the realm
of neuroscience — mathematical models1–3
now hint that instabilities might also be
advantageous in representing and processing
information in the brain.
A state of a system is dynamically stable when
it responds to perturbations in a proportionate
way. As long as the gust of wind is not too
strong, our cyclist might wobble, but the
Figure 1 | Stable and unstable dynamics in ‘state space’. a, A stable state with stationary dynamics. The system returns to the stable fixed point in response to small perturbations. b, An unstable saddle state is abandoned upon only small perturbations. The paths indicating possible evolutions of this system (solid lines) may pass close by such a state but will typically then move away. Only some of the exceptional paths come back to the saddle state (dashed lines pointing inwards). c, A collection of saddles linked by ‘heteroclinic’ connections (dashed lines). The system evolves close to the heteroclinic connections between different saddles, lingering near one saddle state before moving on to the next. It is this last type of dynamics that several studies1–3,6,7 find in models of neural computation.
direction and speed of the cycle will soon return
to their initial, stable-state values. This stable
state can be depicted in ‘state space’ (the collection of all possible states of the system) as a sink
— a state at which all possible nearby courses
for dynamic evolution converge (Fig. 1a).
By contrast, at unstable states of a system,
the effect of a small perturbation is out of all
proportion to its size. A pendulum that is held
upside-down, for example, although it can in
theory stay in that position for ever, will in
practice fall away from upright with even the
smallest of disturbances. On a state-space
diagram, this is depicted by paths representing
possible evolutions of the system running
away from the state, rather than towards it. If
the unstable state is a ‘saddle’ (Fig. 1b), typical
evolutions may linger nearby for some time
and will then move away from that state. Only
certain perturbations, in very specific directions, may behave as if the state were stable and
return to it.
There is, however, nothing to stop the
pendulum from coming back very close to
upright if frictional losses are not too great.
This is indicated on a state-space diagram by
a path travelling close to what is known as a
heteroclinic connection between two saddles.
Heteroclinic connections between saddle
states (Fig. 1c) occur in many different systems
in nature. They have, for example, been implicated in rapid weather changes that occur after
long periods of constant conditions4. Engineers planning interplanetary space missions5
routinely save enormous amounts of fuel by
guiding spacecraft through the Solar System
using orbits that connect saddle states where
the gravitational pulls of celestial bodies
balance out.
Several studies1–3,6,7 have raised the idea that
this kind of dynamics along a sequence of
saddles (Fig. 1c) could also be useful for processing information in neural systems. Many
traditional models of neural computation share
the spirit of a model8 devised by John Hopfield,
where completion of a task is equivalent to the
system becoming stationary at a stable state.
Rabinovich et al.1 and, more recently, Huerta
et al.2 have shown that, in mathematical
models of the sense of smell, switching among
unstable saddle states — and not stable-state
dynamics — may be responsible for the generation of characteristic patterns of neural activity, and thus information representation. In
creating their models, they have been inspired
by experimental findings in the olfactory
systems of zebrafish and locusts9 that exhibit
reproducible odour-dependent patterns.
Huerta et al.2 model the dynamics in two
neural structures known as the antennal lobe
and the mushroom body. These form staging
posts for processing the information provided
by signals coming from sensory cells that are
in turn activated by odour ingredients.
Whereas activity in the mushroom body is
modelled by standard means using stable
dynamics, the dynamics of the antennal lobe is
modelled in a non-standard way using networks that exhibit switching induced by instabilities. In these models, the dynamics of the
neural system explores a sequence of states,
generating a specific pattern of activity that
represents one specific odour. The vast number of distinct switching sequences possible in
such a system with instabilities could provide
an efficient way of encoding a huge range of
subtly different odours.
Both Rabinovich et al.1 and Huerta et al.2
interpret neural switching in terms of game
theory: the neurons, they suggest, are playing
a game that has no winner. Individual states
are characterized by certain groups of neurons
being more active than others; however,
because each state is a saddle, and thus intrinsically unstable, no particular group of
neurons can eventually gain all the activity
and ‘win the game’. The theoretical study1 was
restricted to very specific networks of coupled
neurons, but Huerta and Rabinovich have now
shown3 that switching along a sequence of
saddles occurs naturally, even if neurons are
less closely coupled, as is the case in a biological system.
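This 'game with no winner' can be made concrete in a few lines of simulation. The sketch below (Python; the cyclic competition matrix and all parameters are illustrative assumptions, not the published equations of refs 1–3) integrates a generalized Lotka–Volterra system of the winnerless-competition type: activity lingers near each single-group saddle before switching to the next, so the dominant group keeps rotating.

# Winnerless competition among three neural groups: generalized
# Lotka-Volterra dynamics with asymmetric, cyclic inhibition (assumed
# parameters, chosen to give an attracting heteroclinic cycle).
rho = [[1.0, 2.0, 0.5],
       [0.5, 1.0, 2.0],
       [2.0, 0.5, 1.0]]          # competition matrix
x = [0.6, 0.3, 0.1]              # initial activities of the three groups
dt = 0.01

for step in range(40000):        # simple forward-Euler integration
    dx = [x[i] * (1.0 - sum(rho[i][j] * x[j] for j in range(3)))
          for i in range(3)]
    x = [max(x[i] + dt * dx[i], 1e-12) for i in range(3)]
    if step % 8000 == 0:         # sample occasionally: dominance rotates
        print("t = %5.0f  x = %s" % (step * dt, [round(v, 3) for v in x]))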
Similar principles of encoding by switching
along a sequence of saddles have also been
investigated in more abstract mathematical
models (see refs 6, 7 for examples) that pinpoint possible mechanisms for directing the
switching processes. One problem with these
proposals from mathematical modelling1–3,6,7
is that there is no clear-cut experimental
evidence of their validity in any real olfactory
system. Nevertheless, all of the mathematical
models rely on the same key features —
saddles that are never reached but only visited
in passing, inducing non-stationary switching
— that have been shown to be relevant in other
natural systems4,5. In biology, the detection of
odours by populations of neurons could be
only one example.
Much remains to be done in fleshing out
this view of natural processes in terms of
dynamics exploiting saddle instabilities. Then
we will see just how much sense instability
really makes.
■
Peter Ashwin is at the School of Engineering,
Computer Science and Mathematics, University
of Exeter, Exeter, Devon EX4 4QE, UK.
Marc Timme is at the Max Planck Institute for
Dynamics and Self-Organization, and the
Bernstein Center for Computational
Neuroscience, Bunsenstraße 10,
37073 Göttingen, Germany.
e-mails: [email protected];
[email protected]
1. Rabinovich, M. et al. Phys. Rev. Lett. 87, 068102 (2001).
2. Huerta, R. et al. Neural Comput. 16, 1601–1640 (2004).
3. Huerta, R. & Rabinovich, M. Phys. Rev. Lett. 93, 238104
(2004).
4. Stewart, I. Nature 422, 571–573 (2003).
5. Taubes, G. Science 283, 620–622 (1999).
6. Hansel, D., Mato, G. & Meunier, C. Phys. Rev. E 48,
3470–3477 (1993).
7. Kori, H. & Kuramoto, Y. Phys. Rev. E 62, 046214 (2001).
8. Hopfield, J. J. Proc. Natl Acad. Sci. USA 79, 2554–2558
(1982).
9. Laurent, G. Nature Rev. Neurosci. 3, 884–895 (2002).
CORRECTION
In the News and Views article “Granular matter: A tale
of tails” by Martin van Hecke (Nature 435, 1041–1042;
2005), an author's name was misspelt in
reference 9. The correct reference is Torquato, S.,
Truskett, T. M. & Debenedetti, P. G. Phys. Rev. Lett.
84, 2064–2067 (2000).
Vol 436|4 August 2005
BOOKS & ARTS
Cool is not enough
There’s more to life than the second law of thermodynamics.
Into the Cool: Energy Flow,
Thermodynamics and Life
by Eric D. Schneider & Dorion Sagan
University of Chicago Press: 2005. 362 pp.
$30, £21
J. Doyne Farmer
The level of organization in even the simplest
living systems is so remarkable that many, if
not most, non-scientists believe that we need
to go outside science to explain it. This belief
is subtly reinforced by the fact that many scientists still think the emergence of life was a
fortuitous accident that required a good roll of
the molecular dice, in a place where the conditions are just so, in a Universe where the laws
of physics are just right.
The opposing view is that matter tends to
organize itself according to general principles,
making the eventual emergence of life
inevitable. Such principles would not require
any modifications of the laws of physics, but
would come from a better understanding of
how complex behaviour arises from the interaction of simple components.
Complex organization is not unique to living
systems: it can be generated by very simple
mathematical models, and is observed in many
non-living physical systems, ranging from
fluid flows to chemistry. Self-organization
in non-living systems must have played a key
role in setting the stage for the emergence
of life. Many scientists have argued that certain
principles of complex systems could explain
the emergence of life and the universal properties of form and function in biology, and
perhaps even provide insights for social science. The problem is that these principles have
so far remained undiscovered.
In their book Into the Cool, Eric Schneider
and Dorion Sagan claim that non-equilibrium
thermodynamics provides the key principle
that has been lacking. They review its application to topics ranging from fluid dynamics
and meteorology to the origin of life, ecology,
plant physiology, and evolutionary biology,
and even speculate about its relevance to
health, economics and metaphysics. The book
contains a wealth of good references and is
worth buying for this reason alone.
When the discussion sticks to applications
where thermodynamics is the leading actor,
such as the energy and entropy flows of the
Earth, or the thermodynamics of ecological
systems, it is informative and worthwhile, but
it is repetitive and seems disorganized in places.

A complex problem: can a need to reduce energy gradients help to drive the evolution of forests?
The book is less successful as an exposition
of a grand theory. It gets off to a bad start on
the dust-jacket, which says: “If Charles Darwin
shook the world by showing the common
ancestry of all life, so Into the Cool has a similar power to disturb — and delight.” While it
may be wise to stand on the shoulders of
giants, it is not advisable to stand back to back
with one and call for a tape measure.
The authors’ central thesis is that the broad
principle needed to understand self-organization is already implicit in the second law of
thermodynamics, and so has been right under
our noses for a century and a half. Although
the second law is a statement about increasing
disorder, they argue that recent generalizations
in non-equilibrium thermodynamics make it
clear that it also plays a central role in creating
order. The catchphrase they use to summarize
this idea is “nature abhors a gradient”. Being
out of equilibrium automatically implies a gradient in the flow of energy from free energy to
heat. For example, an organism takes in food,
which provides the free energy needed to do
work to perform its activities, maintain its
form and reproduce. The conversion of free
energy to entropy goes hand in hand with the
maintenance of organization in living systems.
The twist is to claim that the need to reduce
energy gradients drives a tendency towards
increasing complexity in both living and nonliving systems. In their words: “Even before
natural selection, the second law ‘selects’, from
the kinetic, thermodynamic, and chemical
options available, those systems best able to
reduce gradients under given constraints.” For
example, they argue that the reason a climax
forest replaces an earlier transition forest is
that it is more efficient at fixing energy from
the Sun, which also reduces the temperature
gradient. They claim that the competition to
reduce gradients introduces a force for selection, in which less effective mechanisms to
reduce gradients are replaced by more effective
ones. They argue that this is the fundamental
reason why both living and non-living systems
tend to display higher levels of organization
over time.
This is an intriguing idea but I am not convinced that it makes sense. The selection
process that the authors posit is never clearly
defined, and they never explain why, or in what
sense, it necessarily leads to increasing complexity. No one would dispute that the second
law of thermodynamics is important for understanding the functioning of complex systems.
Being out of equilibrium is a necessary condition for a physical phenomenon to display
interesting complex behaviour, even if ‘interesting’ remains difficult to define. But the authors’
claim that non-equilibrium thermodynamics
explains just about everything falls flat. For
example, consider a computer. No one would dispute that a power supply is essential. Even for a perfectly efficient computer, thermodynamics tells us that it takes at least kT ln2 energy units to erase a bit, where T is the temperature and k is the Boltzmann constant. But the need for power tells us nothing about what makes a laptop different from a washing machine. To understand how a computer works, and what it can and cannot do, requires the theory of computation, which is a logical theory that is disconnected from thermodynamics. The power supply can be designed by the same person who designs them for washing machines.
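The number involved is extraordinarily small, as a quick calculation shows (a Python sketch; the Boltzmann constant is the standard SI value, while the temperature and the bit rate in the comment are assumptions chosen for illustration).

import math

k = 1.380649e-23                 # Boltzmann constant, J/K (exact SI value)
T = 300.0                        # room temperature, K (assumed)

energy_per_bit = k * T * math.log(2)     # Landauer limit, kT ln 2
print("kT ln 2 at %g K = %.3e J per bit" % (T, energy_per_bit))
# About 2.87e-21 J per bit: erasing 1e9 bits per second would dissipate
# only ~3e-12 W at this floor, far below any real machine's power draw.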
The key point is that, although the second law is necessary for the emergence of complex order, it is far from sufficient. Life is inherently an out-of-equilibrium phenomenon, but then so is an explosion. Something other than non-equilibrium thermodynamics is needed to explain why these are fundamentally different. Life relies on the ability of matter to store information and to implement functional relation-
ships, which allow organisms to maintain their
form and execute purposeful behaviours that
enhance their survival. Such complex order
depends on the rules by which matter interacts.
It may well be that many of the details are not
important, and that there are general principles
that might allow us to determine when the
result will be organization and when it will be
chaos. But this cannot be understood in terms
of thermodynamics alone.
Understanding the logical and physical
principles that provide sufficient conditions
for life is a fascinating and difficult problem
that should keep scientists busy for at least a
millennium. Thermodynamics clearly plays
an essential part, and it is appropriate that the
authors stress this — many accounts of the
origin of life are easily rebutted on this point.
But it isn’t the principal actor, just one of many.
The others remain unknown.
■
J. Doyne Farmer is at the Santa Fe Institute,
1399 Hyde Park Road, Santa Fe, New Mexico
87501, USA.
Russia’s secret weapons
Biological Espionage: Special Operations of
the Soviet and Russian Foreign Intelligence
Services in the West
by Alexander Kouzminov
Greenhill: 2005. 192 pp. £12.99, $19.95
Jens H. Kuhn, Milton Leitenberg
& Raymond A. Zilinskas
In 1992, President Boris Yeltsin admitted that
the former Soviet Union had supported a secret
biological-warfare programme, in violation of
the Biological Toxin and Weapons Convention,
which the Soviet Union ratified in 1975. Some
of the researchers and officials who operated
the programme, such as Ken Alibek, Igor
Domaradskii and Serguei Popov, have provided
personal accounts that shed light on the clandestine system. However, the compartmentalization and secrecy so prevalent in the former
Soviet Union mean that such accounts describe
only a fraction of the nation’s bioweapons programme. Almost nothing is known about the
biological-warfare activities of the Soviet ministries of defence, health and agriculture, the
security agencies and the national academies.
As a result, any new information on the roles
of these agencies in the Soviet bioweapons
programme is welcomed by those who are
concerned about whether Russia is continuing
with its bioweapons programme. This is the
backdrop to the publication of a book by
Alexander Kouzminov, a former KGB agent,
who claims to provide new and important
information about the role of the KGB in the
Soviet bioweapons programme. So, what do
we learn from it?
Kouzminov describes himself as a former
employee of the top-secret Department 12 of
Directorate S, the élite inner core of the KGB
First Chief Directorate, which was responsible
for operations abroad. One of the responsibilities of this department was to oversee ‘illegals’
— Russian intelligence operatives masquerading as Western nationals. Illegals were
deployed to spy on Western biodefence activities, procure microbiological agents of interest
for Soviet bioweapons research and development, and to perform acts of bioterrorism and
sabotage. Kouzminov was a case handler for
several illegals, including some that allegedly
worked in a UK institute and at the World
Health Organization (WHO). He repeatedly
asserts that these illegals provided the Soviet
Union with "significant" information.

In the dark: the bioweapons programme run from KGB headquarters has remained largely secret.
Kouzminov does provide some information
on his agency’s work. He describes how Westerners were targeted for recruitment by the
KGB, and discusses the recruitment process
and the means whereby data collected by
agents and illegals were transported from the
West to the Soviet Union. These procedures
have previously been described by defectors
and students of the Soviet intelligence system,
and Kouzminov’s book adds little to the story
already in the public domain. Disappointingly,
it provides almost no information on how the
KGB transformed the data into intelligence,
and how this was then used.
According to Kouzminov, individuals were
deployed in the West and given numerous
objectives related to spying on national programmes. For example, he describes a husband-and-wife team who, while operating a mock
medical practice in Germany, were told by
the KGB “to establish the locations of all
NATO installations; their command personnel…air-force bases, and cruise-missile and
rocket sites”. It is doubtful that two individuals
could accomplish all this. And Kouzminov’s
explanation that the KGB placed agents in the
WHO to obtain information about the “development of vaccines against the most dangerous human and animal viral diseases” seems
rather lame, given that anyone could obtain
this information simply by telephoning WHO
representatives.
The author further alleges that around 1980
a KGB agent was placed inside the US Army
Medical Research Institute of Infectious Diseases at Fort Detrick, Maryland, and that
another agent was employed by an unnamed
British institute (probably the National Institute
for Biological Standards and Control, which
was not engaged in biodefence). What did these
agents do? Did they provide information about
US and UK defensive efforts that might be used
by the Soviet bioweapons programme? Did
they inform their superiors that neither country
actually had an offensive programme? Perhaps
they provided information on the development
of vaccines that might have been useful to the
Soviet defensive programme?
In fact, Kouzminov provides little information on the accomplishments of these and other
agents in the biological field. Nor does he identify the Soviet research institutes with which
the KGB allegedly collaborated in an effort to
create more potent bioweapons, despite the fact
that many of them are known today to Western
security and academic communities.
Kouzminov describes himself as a biophysicist with a microbiological background, so
it is surprising how many technical mistakes
he makes. For example, he misidentifies the
bacteria Bacillus anthracis and rickettsiae as
viruses, and misspells agents such as Francisella tularensis and Yersinia pestis.
NEWS FEATURE
NATURE|Vol 438|3 November 2005
Personal effects
Living things from bacteria to humans change their environment, but the
consequences for evolution and ecology are only now being understood,
or so the ‘niche constructivists’ claim. Dan Jones investigates.
In the Negev Desert of Israel, small organisms can have a big impact. Take the
cyanobacteria that live in the soil. Some
species secrete sugary substances that form
a crust of sand and soil, protecting the bacterial colonies from the effects of erosion. When
the rains come, the crusty patches divert water
into pools in which wind-borne seeds can germinate. These plants in turn make the soil
more hospitable for other plants. Thanks in
part to these bacteria, patches of vegetation
can be found where they might not otherwise
exist. The action of the bacteria, together with
local climate change, could lead to the greening of large parts of the desert.
The Negev cyanobacteria, and organisms
like them, are also having an impact on evolutionary biologists these days. Examples of creatures altering their environment abound —
from beavers that dam streams and earthworms that enrich the soil to humans who irrigate deserts. But too little attention has been
given to the consequences of this, say advocates
of niche construction. This emerging view in
biology stresses that organisms not only adapt
to their environments, but also in part create
them. The knock-on effects of this interplay
between organism and environment, say niche
constructivists, have generally been neglected
in evolutionary models. Despite pointed criticism from some prominent biologists, niche
construction has been winning converts.
“What we’re saying is not only novel, but
also slightly disturbing,” says Kevin Laland, an
evolutionary biologist at the University of
St Andrews in Fife, UK, and one of the authors
of the idea1. “If we’re right, it requires rethinking evolution.”
The conventional view of evolution sees
natural selection as shaping organisms to fit
their environment. Niche construction, by
contrast, accords the organism a much
stronger role in generating a fit by recognizing
the innumerable ways in which living things
alter their world to suit their needs. From this
perspective, the road between organism and
environment is very much a two-way street.
The intellectual stirrings of niche construction date back to the early 1980s, when Har-
vard University geneticist Richard Lewontin
turned to differential equations — stock in
trade for population biologists — to look at
evolution from two different perspectives2. He
created one set of equations to describe the
conventional view of evolution, the one-way-street version. A second set of equations,
which he felt better described real evolutionary
processes, depicted evolution as a continual
feedback loop, in which organisms both adapt
to their environments and alter them in ways
that generate new selective pressures. Although
Lewontin’s equations provided a broad perspective rather than a detailed model, he
helped to kick-start the niche-constructivist
approach, says Laland. “He really put the idea
on the map.”
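The contrast can be sketched in symbols (a conventional rendering of the idea, not Lewontin's exact notation), with O describing the state of the organisms and E their environment:

dO/dt = f(O, E),   dE/dt = g(E)       (conventional view: organisms adapt,
                                       but the environment changes on its own)
dO/dt = f(O, E),   dE/dt = g(O, E)    (niche construction: organisms also
                                       reshape the environment, closing the loop)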
Sons of soil
But it has taken years for biologists to begin to
incorporate niche construction into more
detailed models of evolution and ecology, in
part because organism–environment interactions can be so complex. Earthworms, for
instance, not only aerate the soil by tunnelling, as any gardener knows, but they also
alter its chemical composition by removing
calcium carbonate, adding their mucus and
excrement, and pulling leaves down into the
soil to decay. All of this produces a more
favourable environment for worms to live
in. Yet classical evolutionary models have
typically failed to consider how this transformation alters the selective pressures on the
worms and other soil inhabitants, say niche-construction advocates.
Back in the Negev Desert there are further
examples of dramatic niche construction. At
least three species of snail feed on lichens that
live just below the surface of porous rocks. To
get at the lichens, the snails have to literally eat
through the rock, which they then excrete, creating soil around the rock in the process. This
might sound insignificant, but it has been calculated that the combined action of these
snails could generate enough soil to affect the
whole desert ecosystem3. By transferring
nitrogen in rocks to the soil, where plants use
it for growth, the snails contribute substantially to sustaining local biodiversity.

From dissolving desert rocks to building dams, all organisms mould their environment to a certain extent.
Bigger picture
In extreme cases, niche-constructing activities
can affect the whole world. The classic example from early evolutionary history is that of
oxygen-producing cyanobacteria, which
helped to set the stage for the evolution of animals and plants. Today, niche construction by
humans threatens to affect practically all life, as
we pump large amounts of carbon dioxide into
the atmosphere.
Critics are quick to point out that such cases
have been well known to biologists for some
time. “Darwin realized that organisms can
change their environments in ways that affect
their own evolution,” says Laurent Keller, an
evolutionary biologist at the University of Lausanne in Switzerland. “There are already many
cases of niche construction by animals and
especially humans,” he says.
But advocates of niche construction counter
that previous attempts to include these effects
in evolutionary models have not gone nearly
far enough. “People hadn’t thought through
the consequences of these effects, either for
evolution or ecology,” says John Odling-Smee,
a biological anthropologist at the University of
Oxford, UK.
To encourage people to consider the issue,
Odling-Smee and Laland have taken a two-pronged approach. First, they have catalogued
hundreds of examples, involving thousands of
species such as the Negev Desert organisms, to
drive home the point that niche construction
is a widespread phenomenon. In addition,
they have developed mathematical models
that capture the bidirectional nature of the
niche-constructivist view, to show how these
processes can actually be modelled.
Traditional ecological models typically distinguish between living things and their
physical environment, but it is hard to model
both elements at the same time. To find a way
around this, Laland and Odling-Smee
teamed up with Marcus Feldman, an evolutionary biologist and mathematical modeller
at Stanford University in California. They
found that they could look at niche construction by treating both living and non-living
components of a niche as environmental factors that are both affected by, and feed back
to, all the organisms in the ecosystem. They
presented their results in a 2003 book1, whose
purpose, they say, was in part to convince
other scientists to take niche construction
into account in their research.
Perhaps the most direct way an organism
can alter the challenges it must face is by
selecting where it lives, says Robert Holt, an
ecologist at the University of Florida in
Gainesville. Such habitat selection defines the
future context for the evolution of the new
residents and their progeny. By choosing to
live in places to which they are already
adapted, organisms can short-circuit the
selective forces that ordinarily lead to evolutionary change. In this way, habitat selection
can lead to niche conservatism, which is the
tendency not to adapt to new environments,
and may explain the evolutionary stasis often
seen in the fossil record.
Organisms can also shape their interaction
with the world in more subtle ways. Developmental biologists know, for instance, that the
mature form of many organisms varies
depending on the environment in which they
grow up. This is known as phenotypic plasticity. Although some creatures, such as beavers
and cyanobacteria, alter their environment
directly, others niche construct by modifying
themselves, says Sonia Sultan, a botanist at
Wesleyan University in Middletown, Connecticut. Sultan defines a niche according to
the way an organism experiences the world —
its niche is the sum of its experiences, rather
than its immediate physical surroundings.
Some plants, for example, can grow smaller or
larger leaves, depending on whether they happen to be growing in a sunny or shady spot.
So this is a form of niche construction, claims
Sultan, because the plant is altering its own
experience of sunlight.
Although phenotypic plasticity has been well
studied by a number of researchers, it has yet to
be incorporated into the core of evolutionary
theory. “Niche construction weaves together a
number of themes in ecology and evolution
that have typically been studied in isolation,”
Sultan says. Rethinking evolution in light of
plasticity and other issues raised by niche construction could contribute to an updating of
evolutionary theory, Sultan suggests.
An update is precisely what Laland and his
colleagues have proposed in what they have
dubbed extended evolutionary theory. In
classical theory, genetic inheritance is the only
link through time between generations. Niche
construction requires that a second form of
inheritance, termed ecological inheritance, be
taken into account.
Inherit the earth
According to this view, many of the physical
features that a creature encounters, and the
kinds of problem it has to solve, are inherited
from the activities of the previous generation.
Forest fires, for example, which help to distribute the seeds of some plant species, might be
thought to rely solely on the chance of a lightning strike. But the plants in the forest can
themselves increase the odds of a fire by
secreting flammable oils and retaining dry
dead wood in the canopy4. Similarly, every
earthworm inherits an environment more
suited to its lifestyle thanks to the activities of
its forebears. Ecological inheritance means
that the effects of genes on the environment
are, a little like the genes, passed down through
the generations.
The notion that genes reach beyond the
bounds of the organism is often referred to as
the ‘extended phenotype’, a term coined by
Richard Dawkins, an evolutionary biologist at the University of Oxford, in his 1982 book of the same name. So it might come as something of a surprise that Dawkins has written a highly critical commentary accusing niche constructivists of a serious conceptual blunder5.

Dam fools
Dawkins's classic example of an extended phenotype is the beaver dam. These remarkable structures dramatically alter the surrounding ecosystem. Trees are felled to make the dam, which in turn floods the area, providing a new environment for species from frogs to fish. If the beaver's footprint on its environment is viewed as an example of ecological inheritance, it would seem that the extended phenotype and niche construction should make natural bedfellows.

But guess again. Although Dawkins says he recognizes the importance of organism-induced effects on the world, he believes that niche construction conflates two distinct kinds of effects. Dam-building certainly counts as an organism engineering its environment, he says, but other effects, such as the oxygenation of the atmosphere by cyanobacteria, are mere coincidental by-products of life. These types of effects, which Dawkins calls niche changing, are too loosely connected to the success of the organisms that cause them to count as genuine niche construction.

Kevin Laland (left) thinks the power of niche construction is being underestimated, but Laurent Keller is not convinced.

Dawkins is not alone in this view. Kim Sterelny, a philosopher of biology at the Victoria University of Wellington, New Zealand, says that niche construction "lumps too many things together". This matters, because the two kinds of effects, construction versus mere changing, generate different feedback loops between the organism and the environment, which can lead to different evolutionary dynamics, Sterelny says.

Laland says he is sympathetic to the distinction, but is concerned that the term 'mere' associated with 'niche changing' downplays its evolutionary importance. For Laland, niche changing is as important to evolution as beaver-like niche construction. When you get down to doing the models it often doesn't help much to make the distinction, says Laland. The effects of organisms can have evolutionary consequences regardless of whether they are produced by adaptations.

Although the philosophical debates continue, other researchers are busily incorporating the ideas of niche construction into their work. Sultan, for instance, finds the concept useful in thinking about invasive species, whose potentially destructive power is a key issue in conservation biology.

Invasive species, such as weeds, often experience a time lag between arriving in a new niche and colonizing it. It may take a while for successful genetic variants of the invader to arise and spread, for instance. But if a species arrives that has sufficient phenotypic plasticity to thrive in the new environment, the take-over might be much more rapid. Sultan believes that explicitly adopting niche-constructivist views on phenotypic plasticity could help scientists to devise appropriate strategies for combating conservation problems: it could give them, for example, more accurate tools for projecting the rate of spread of an invasive plant.

Culture club
Others are pioneering ways to study perhaps the ultimate niche constructors — us. In many obvious ways, humans have utterly transformed otherwise inhospitable parts of the world to suit our needs, from ranks of houses in the desert to skyscrapers. Perhaps a less obvious example of niche construction is human culture. Culture itself can be seen as a niche that we inhabit, and just as we shape our culture, our culture shapes us. One example of this is the emergence over several thousand years of lactose tolerance in European adults, which has followed the cultural practice of drinking cow's milk6.

Construction workers: humans create towns from deserts, but how do we and our niches interact?

Now a number of anthropologists are scrutinizing how culture can put selective pressure on our genetic make-up. In the past, many have been reluctant to tackle such questions, in part because of fears of being associated with genetic determinism, but also because of the daunting mathematics of modelling gene–culture interactions. But that seems to be changing, says Joe Henrich, an anthropologist at Emory University in Atlanta, Georgia. "The study of cultural evolution is expanding rapidly within scientific anthropology," he says.

One of the hottest areas at the moment is the puzzle of human sociality — why we are so often willing to cooperate with unrelated people, even when it is not in our immediate self-interest7. Whether or not genes promoting sociality flourish depends in part on the social environment in which they find themselves, which in turn is affected by culture. "We have shown that culture can evolve to change the selective environment faced by genes favouring cooperation. This opens up a whole evolutionary vista unavailable to non-cultural species," says Henrich.

Niche-construction advocates are passionate about their new view of ecological and evolutionary processes, whether they study bacteria or humans, but it is too soon to say whether the approach will yield insights that might otherwise have been missed. Still, Laland fully accepts the challenge. "The onus is on us to show that this is going to be useful," he says.
■
Dan Jones is a copy editor for Nature Reviews Drug Discovery.

1. Odling-Smee, J., Laland, K. & Feldman, M. Niche Construction: The Neglected Process in Evolution (Princeton Univ. Press, Princeton, 2003).
2. Lewontin, R. C. in Evolution From Molecules to Men (ed. Bendall, D. S.) 273–285 (Cambridge Univ. Press, Cambridge, 1983).
3. Shachak, M., Jones, C. G. & Brand, S. Adv. Geoecol. 28, 37–50 (1995).
4. Schwilk, D. W. Am. Nat. 162, 725–733 (2003).
5. Dawkins, R. Biol. Phil. 19, 377–396 (2004).
6. Beja-Pereira, A. et al. Nature Genet. 35, 311–313 (2003).
7. Hammerstein, P. (ed.) Genetic and Cultural Evolution of Cooperation (MIT Press, Cambridge, 2003).
Vol. 438|22/29 December 2005
COMMENTARY
Barriers to progress in systems biology
For the past half-century, biologists have been uncovering details of countless molecular events. Linking these
data to dynamic models requires new software and data standards, argue Marvin Cassman and his colleagues.
The field of systems biology is lurching forwards, propelled by a mixture of faith, hope and even charity. But if it is to become a true discipline, several problems with core infrastructure (data and software) need to be addressed. In our view, they are too critical to be left to ad hoc developments by individual laboratories.

Systems biology has been defined in many ways, but has at its root the use of modelling and simulation, combined with experiment, to explore network behaviour in biological systems — in particular their dynamic nature. The need to integrate the profusion of molecular data into a systems approach has stimulated growth in this area over the past five or so years, as worldwide investments in the field have increased. However, this early enthusiasm will need to overcome several barriers to development.

A recent survey carried out by these authors — conducted by the World Technology Evaluation Center (WTEC) in Baltimore, Maryland, and funded by seven US agencies — compared the activities of systems biologists in the United States, Europe and Japan1. The survey reveals that work on quantitative or predictive mathematical modelling that is truly integrated with experimentation is only just beginning. Progress is limited, therefore, and major contributions to biological understanding are few. The survey concludes that the absence of a suitable infrastructure for systems biology, particularly for data and software standardization, is a major impediment to further progress.

Come together
The WTEC survey confirmed that vital software is being developed at many locations worldwide. But these endeavours are highly localized, resulting in duplicated goals and approaches. Tellingly, one Japanese group called their software YAGNS, for 'yet another gene network simulator'. There are many reasons for this cottage industry: the need to accommodate local data; the requirements of collaborators to visualize data; and limited knowledge of what is already available. In general, however, it is a terrible waste of time, money and effort. Most software remains inaccessible to external users, even when the developers are willing to release it, because supporting documentation is so poor.

For software developers and skilled users these problems are not insurmountable. But sharing of the benefits of systems biology more widely will occur only when working biologists, who are not themselves trained to develop and modify such software, can manipulate and use these techniques. Unfortunately, the translation of systems biology into a broader approach is complicated by the innumeracy of many biologists. Some modicum of mathematical training will be required, reversing the trend of the past 30 years, during which biology has become a discipline for people who want to do science without learning mathematics.

A reasonable set of expectations is that different pieces of shared software should work together seamlessly, be transparent to the user, and be sufficiently documented so that they can be modified to suit different circumstances. Funding agencies would be unwise to support software development without also investing in the infrastructure needed to preserve and enhance the results. One way to do this would be to create a central organization that would serve both as a software repository and as a mechanism for validating and documenting each program, including standardizing of the data input/output formats.

As with centralized databases, having a shared resource with appropriate software-engineering standards should encourage users to reconfigure the most useful tools for increasingly sophisticated analysis. A group sponsored by the US Defense Advanced Research Projects Agency, and involving one of us (M.C.), has developed a proposal for such a resource2. This repository would serve as a central coordinator to help develop uniform standards, to direct users to appropriate online resources, and to identify — through user feedback — problems with the software. The repository should be organized through consultation with the community, and will require the support of an international consortium of funding agencies.
Diverse data
The problems with software diversity are
mirrored by the diversity of ways that data are
collected, annotated and stored. Such issues are
even worse than those faced by the DNA-sequencing community, because experimental
data in systems biology is highly context
dependent. For data to be useful outside the
laboratory in which they were generated, they
must be standardized, presented using a uniform and systematic vocabulary, and annotated
so that the specific cell type, growing conditions
and measurements made — from metabolite- and messenger-RNA-profiling to kinetics and
thermodynamics — are reproducible.
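In practice, that means each dataset travels with a machine-readable description of how it was produced. A minimal sketch of such a record follows (Python; every field name and value here is an illustrative invention, not any published standard):

# Illustrative annotation record; all field names are hypothetical.
measurement_record = {
    "organism": "Saccharomyces cerevisiae",
    "cell_type": "haploid, strain BY4741",
    "growth_conditions": {
        "medium": "YPD",
        "temperature_C": 30.0,
        "growth_phase": "mid-log",
    },
    "assay": "messenger-RNA profiling",
    "units": "transcripts per cell",
    "values": {"GAL1": 0.4, "ACT1": 95.0},   # example measurements
    "protocol": "doi:10.0000/placeholder",   # hypothetical identifier
}

# A reviewer or another laboratory can then recover the exact context.
print(measurement_record["growth_conditions"]["temperature_C"])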
Easy access to data and software is not a
luxury; it is essential when results undergo
peer review and publication. For the scientific
community to evaluate the increasingly
complex data types, the increasingly sophisticated analysis tools, and the increasingly
incomplete papers (that cannot include all
information because of the very complexity of
the experiments and tools), it is vital that it has
access to the source data and methods used.
Dealing with these complex infrastructure
issues will require a focused effort by
researchers and funding agencies. We propose
that the annual International Conferences on
Systems Biology would be an appropriate venue
for initial discussions. Whatever the occasion, it
must be done soon.
■
Marvin Cassman lives in San Francisco,
California, USA.
Co-authors are Adam Arkin of the Bioengineering
Department, University of California, Berkeley;
Fumiaki Katagiri of the Department of Plant
Biology, University of Minnesota, St Paul;
Douglas Lauffenburger of the Biological
Engineering Division, Massachusetts Institute of
Technology, Cambridge; Frank J. Doyle III of the
Department of Chemical Engineering, University
of California, Santa Barbara; and Cynthia L. Stokes
who is at Entelos, Foster City, California.
1. Cassman, M. et al. Assessment of International Research and
Development in Systems Biology (Springer, in the press)
www.wtec.org/sysbio
2. Cassman, M., Sztipanovits, J., Lincoln, P. & Shastry, S. S.
Proposal for a Software Infrastructure in Systems Biology
www.csl.sri.com/users/lincoln/SystemsBiology/SI.doc
CORRESPONDENCE
NATURE|Vol 441|4 May 2006
Computing: report leaps
geographical barriers but
stumbles over gender
SIR — As senior researchers in computer
science, we were interested in both the report
Towards 2020 Science, published by the
Microsoft Corporation, and your related set
of News Features and Commentaries (Nature
440, 398–405 and 409–419; 2006). The vision
of advanced computational techniques being
tightly integrated with core science is an
exciting and promising one, which we are
glad to see being carefully explored and
presented to the broader community.
We are, however, concerned that, of the
41 participants and commentators brought
together by Microsoft, not one was female,
with the same being true of the nine authors
of the related articles in Nature. The report
notes that the participants in the 2020
Science Group were geographically diverse,
representing 12 nationalities, coming
“from some of the world’s leading research
institutions and companies [and]… elected
for their expertise in a principal field”.
Women have earned between 13% and 18%
of all PhDs awarded in computer science and
engineering in the United States during the
past two decades. Women also work at
leading research institutions, and also have
expertise in the relevant fields. In most other
scientific fields represented in the report, an
even higher percentage of PhDs is female.
That the omission of women from the 2020
Science Group was doubtless unintentional
does not lessen the negative message
conveyed. The future of computing will be
defined by the efforts of female as well as
male computer scientists.
Martha E. Pollack Computer Science and Engineering, University of Michigan, 2260 Hayward Street, Ann Arbor, Michigan 48109, USA
Laura Dillon Michigan State University, USA
Susanne E. Hambrusch Purdue University, USA
Carla Schlatter Ellis Duke University, USA
Barbara J. Grosz Harvard University, USA
Kathleen McKeown Columbia University, USA
Mary Lou Soffa University of Virginia, USA
Jessica Hodgins Carnegie Mellon University, USA
Ruzena Bajcsy University of California, Berkeley, USA
Carla E. Brodley Tufts University, USA
Luigia Carlucci Aiello Università di Roma La Sapienza, Italy
Maria Paola Bonacina Università degli Studi di Verona, Italy
Lori A. Clarke University of Massachusetts, Amherst, USA
Julia Hirschberg Columbia University, USA
Manuela M. Veloso Carnegie Mellon University, USA
Nancy Amato Texas A&M University, USA
Liz Sonenberg University of Melbourne, Australia
Elaine Weyuker AT&T Labs, USA
Lori Pollock University of Delaware, USA
Mary Jane Irwin Penn State University, USA
Lin Padgham RMIT University, Australia
Barbara G. Ryder Rutgers University, USA
Tiziana Catarci Università di Roma La Sapienza, Italy
Kathleen F. McCoy University of Delaware, USA
Maria Klawe Princeton University, USA
Sandra Carberry University of Delaware, USA

Computer 'recycling' builds garbage dumps overseas
SIR — Your Editorial “Steering the future of
computing” (Nature 440, 383; 2006) explores
the future potential of the computing
industry. Interesting though this is, I am
concerned by the millions of tonnes of
electronic waste generated by the computer
industry in the United States and other
developed countries each year, much of
which is being shipped for recycling in
developing countries such as India, China,
Bangladesh and Pakistan.
Cheap labour and weak environmental
standards and law enforcement in developing
countries attract high-tech garbage-dumping
in the name of recycling. Old computers
are being dumped or burned in irrigation
canals and waterways across Asia, where
they are releasing toxic substances such as
lead, mercury, cadmium, beryllium and
brominated flame retardants that pose
serious health hazards to local people and
the natural environment.
The 1989 Basel Convention, restricting
the transfer of hazardous waste, has been
ratified by all developed countries except
the United States — which, according to the
environmentalist report Exporting Harm
(see www.svtc.org/cleancc/pubs/technotrash.htm), exports 50–80% of its computer waste.
Many nations, including the European
Union, have gone further and ratified an
amendment banning all export of hazardous
waste to developing countries. Those who
have not should do more towards finding
solutions for the safe disposal of accumulated
hazardous waste on their own territory.
G. Agoramoorthy
Department of Pharmacy, Tajen University,
Yanpu, Pingtung 907, Taiwan
A logical alternative for
biological computing
SIR — Roger Brent and Jehoshua Bruck, in
their Commentary article “Can computers
help to explain biology?” (Nature 440,
416–417; 2006), draw a firm distinction
between von Neumann computers —
the usual computer as we know it — and
biological systems. But there are many
alternative models of computation. A Prolog
(logic programming) computer, in particular,
does not seem to exhibit several of the
differences singled out.
A Prolog computation, like its biological
counterpart, does not need an order of
execution. Any partial ordering of the
major components, known as clauses, is
determined by a dynamic succession of
pattern-matching operations. Within these
clauses, the execution of logic expressions is
unordered: A and B is the same as B and A,
and it does not matter whether we deal first
with the truth of A or the truth of B (although
computational constraints sometimes impose
a partial ordering). A key for biological
modelling would be to impose only those
sequence constraints that have analogues
in biological systems.
A second distinction highlighted by
Brent and Bruck is that biological systems
do not have a separate ‘output’ component.
Again, Prolog does not conform to the norm.
Often the important reason for executing
a Prolog program is to find out what
‘bindings’ occur en route to a true outcome,
in other words, what values are bound to
what variables.
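Both properties, order-independent conjunction and answers read off as variable bindings, can be seen in a miniature interpreter (a Python sketch of the idea, not real Prolog; the facts and names are invented for illustration).

from itertools import permutations

# Tiny fact base; variables in queries start with '?'.
FACTS = [
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
]

def unify(goal, fact, binding):
    """Match one goal against one fact, extending the bindings or failing."""
    binding = dict(binding)
    for g, f in zip(goal, fact):
        if g.startswith("?"):
            if binding.setdefault(g, f) != f:   # variable: bind or check
                return None
        elif g != f:                            # constant: must match
            return None
    return binding

def solve(goals, binding=None):
    """Find every set of bindings that satisfies all the goals."""
    if binding is None:
        binding = {}
    if not goals:
        return [binding]
    results = []
    for fact in FACTS:
        if len(fact) == len(goals[0]):
            b = unify(goals[0], fact, binding)
            if b is not None:
                results += solve(goals[1:], b)
    return results

# 'A and B' behaves like 'B and A': which ?x is a grandparent of ann?
query = [("parent", "?x", "?y"), ("parent", "?y", "ann")]
for ordering in permutations(query):
    print(solve(list(ordering)))   # both orders: [{'?x': 'tom', '?y': 'bob'}]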
It is perhaps relevant that Stephen H.
Muggleton, in his companion Commentary
article “Exceeding human limits” (Nature
440, 409–410; 2006), encourages the
development of new formalisms within
computer science that integrate mathematical
logic and probability calculus.
Prolog may not be a perfect computational
model for biological systems, but it
exemplifies a system that could be
a better fit for biological modelling.
Derek Partridge
School of Engineering, Computer Science and
Mathematics, Harrison Building, University of
Exeter, Exeter EX4 4QF, UK
Colossus was the first
electronic digital computer
SIR — Your timeline (“Milestones in
scientific computing” Nature 440, 401–405;
2006) starts in 1946 with ENIAC, “widely
thought of as the first electronic digital
computer”. But that title should arguably be
held by the British special-purpose computer
Colossus (1943), used during the Second
World War in the secret code-breaking centre
at Bletchley Park.
Modern computing history starts even
earlier, in 1941, with the completion of the
first working program-controlled computer
Z3 by Konrad Zuse in Berlin. Zuse used
electrical relays to implement switches,
whereas Colossus and ENIAC used tubes.
But the nature of the switches is not essential
— today’s machines use transistors, and the
future may belong to optical or other types
of switches.
Jürgen Schmidhuber
Dalle Molle Institute for Artificial Intelligence,
Galleria 2, 6928 Manno-Lugano, Switzerland, and
Institut für Informatik, TUM, Boltzmannstraße 3,
D-85748 Garching bei München, Germany
Vol 444|2 November 2006
BOOKS & ARTS
Beautiful models
The dynamics of evolutionary processes creates a remarkable picture of life.
Evolutionary Dynamics: Exploring the
Equations of Life
by Martin Nowak
Belknap Press: 2006. 384 pp. $35, £22.95,
€32.30
Sean Nee
Martin Nowak is undeniably a great artist,
working in the medium of mathematical
biology. He may be a great scientist as well:
time will tell, and readers of this book can
form their own preliminary judgement.
In his wanderings through academia’s
firmament — from Oxford, through Princeton’s Institute for Advanced Study to his
apotheosis as professor of biology and mathematics at Harvard — Nowak has seemingly
effortlessly produced a stream of remarkable theoretical explorations into areas as diverse as the evolution of language, cooperation, cancer and the progression from HIV infection to AIDS. Evolutionary Dynamics, based on a course he gives at Harvard, is a comprehensive summary of this work. Although Nowak certainly displays his own oeuvre to great advantage, this book is not purely self-indulgent. His final chapter is an annotated bibliography of other work in the many fields he discusses that is both fair and scholarly: in other words, he cites me.

Weaving a spell: Martin Nowak models cooperators and defectors to create patterns like Persian rugs.

Many entities replicate. HIV replicates in people's bodies, as do cancer cells. Our genes replicate when we reproduce. Replication may occur with errors as mutation. Natural selection occurs when entities with different properties replicate at different rates, and random chance may also intervene to dilute the action of selection. These are the basic elements of the evolutionary process: if you doubt that such simplicity can produce anything interesting, look around you. Evolutionary dynamics is the mathematical modelling of these processes in a variety of biological scenarios.

A good work of art should stimulate, challenge and, usually, be aesthetically pleasing. Some of Nowak's work in evolutionary dynamics is, literally, visually appealing. But all his work has a beautiful elegance. In time we will see which parts of it become embedded in our way of understanding the various phenomena that inspired him.

Consider, for example, the course of HIV infection. After infection, the virus is initially kept under control by the host's immune system. Over time, mutant virus appears that can escape control by immune cells and multiply until, in turn, this new 'strain' also comes under their control. Nowak's model of the dynamics of this interplay between the virus and the immune system shows a long period during which the virus is under control until a threshold number of strains exist and the immune system collapses. Indeed, the behaviour of the mathematical model elegantly mimics the course of progression from initial infection to AIDS: how could something so beautiful not be true?

For me, the highlight of the book is the chapter on evolutionary graph theory. This is based on a simple reconsideration of the simplest model of evolution, which is that, at successive points in time, an individual in the population dies and is replaced by the progeny of another individual, according to whatever rules of natural selection are being considered. We can visualize this in terms of a graph in which one node can be replaced by a copy of a node connected to it. This is an idea that could have occurred to any of us, but most of us would not have seen how to develop it further. In Nowak's hands, the idea is a springboard: he's off! He designs graphs that amplify, and others that hinder, the efficacy of natural selection compared to the entropic force of random chance — there are bursts, stars, superstars, funnels and metafunnels (see Fig. 3 in Nature 433, 312–316; 2005). We get new theorems, such as the isothermal theorem, which tells us what kind of graph can alter the power of natural selection. The chapter fizzes with breathtaking brio. Is the work relevant to anything? Who knows? Who cares? It's a riot.
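That simplest model is easy to run. Below is a minimal sketch (Python; the ring graph, population size and fitness advantage are assumptions for illustration, not Nowak's own formulation): a random node dies and is replaced by the progeny of a neighbour chosen in proportion to fitness, repeated until one type takes over the graph.

import random

def moran_step(neighbours, types, fitness):
    """One update: a random node dies; a neighbour, chosen in proportion
    to fitness, places a copy of itself on the vacated node."""
    dead = random.randrange(len(types))
    nbrs = neighbours[dead]
    weights = [fitness[types[n]] for n in nbrs]
    parent = random.choices(nbrs, weights=weights)[0]
    types[dead] = types[parent]

N = 20                                        # population on a ring (assumed)
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
population = [0] * N
population[0] = 1                             # a single mutant
fitness = {0: 1.0, 1: 1.1}                    # 10% advantage (assumed)

while len(set(population)) > 1:               # run until fixation or loss
    moran_step(ring, population, fitness)
print("mutant fixed" if population[0] == 1 else "mutant lost")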
Nowak takes the view that ideas in evolutionary biology should be formulated
mathematically. An easy retort would be
the observation that Darwin managed
quite well without mathematics. But, in
fact, Darwin did not realize the enormous
potential potency of natural selection until
he absorbed Thomas Malthus’ exposition
of the counterintuitive consequences of
exponential growth — a fundamentally
mathematical insight. Certainly, some ideas
that are essentially quantitative must be
explored mathematically. But there are plenty
of other interesting theoretical areas. Consider genomic imprinting, whereby genes
in a fetus are expressed differently depending on whether they come from the father
or mother. Nowak’s Harvard colleague
David Haig has explained this phenomenon
in terms of evolutionary conflicts between
parents about investment in the fetus, an
explanation that is fascinating, predictive,
falsifiable and entirely verbal.
Nowak is much younger and more successful than me. Also, he did not have the modesty
to put a question mark after the book’s subtitle.
So I wanted to hate this book and pen poison to
hurt him. I could, for example, chortle that he
goes from the most basic model of predator–
prey dynamics to Andrey Kolmogorov’s eight
mathematical conditions for limit cycles in
a single page. I could cackle that he assumes
that readers know the concept of ‘measure’
from advanced analysis, and then wonder how
many readers he is writing for. But after each
mathematical excursion, Nowak provides a
perfectly clear and intuitive verbal explanation
of what has just happened.
I therefore have no choice but to end positively. This is a unique book. It should be on
the shelf of anyone who has, or thinks they
might have, an interest in theoretical biology.
And if you want to have a punt about what
might be considered important new science
in the future, this would be a much better buy
than another recent book, generously illustrated with pictures of cellular automata but
with the much grander aim of revolutionizing
science, by another wunderkind who also trod
the Oxford–Princeton trail.
■
Sean Nee is at the Institute of Evolutionary
Biology, University of Edinburgh, Ashworth
Laboratories, King’s Buildings,
Edinburgh EH9 3JT, UK.
BOOKS & ARTS
separating them. Can the gulf be bridged? One example of creativity at the interface of rationalism and spirituality in the biological realm is conservation biologist Aldo Leopold's A Sand County Almanac (Oxford University Press, 1949).

Silver would view these as spirituality-based
statements, yet we could do worse than accept
Leopold’s wisdom and the creatively combined
rationalism and spiritualism informing it.
Most left-brained people will love this book.
It may annoy right-brained people, but their
response to it will enhance the creative, democratic dialogue so badly needed on the issues
addressed.
■
James T. Bradley is in the Department of
Biological Sciences, Auburn University, Auburn,
Alabama 36849, USA. He is currently writing
a book for non-scientists called Twenty-First
Century Biotechnologies and Human Values.
Biology’s big idea
In the Beat of a Heart: Life, Energy, and
the Unity of Nature
by John Whitfield
Joseph Henry Press: 2006. 261 pp. $27.95
David Robinson
D’Arcy Wentworth Thompson is the hero with
whom John Whitfield begins and ends his
engaging book, In the Beat of a Heart. Teaching "at a provincial university in a coarse, industrial Scottish city" in the early twentieth century, Thompson made the search for principles to unify the diversity of life his obsession. This is
the springboard for Whitfield’s lively account
of more recent attempts to answer questions
that Thompson posed.
Thompson’s 1917 book On Growth and Form
famously depicted his (often incorrect) ideas
about how organisms are as much the products
of physics as of natural selection. A polymath
of astonishing accomplishment, Thompson
was better equipped than most to appreciate
how physical simplifications of nature can
reveal things about the living world that traditional approaches cannot uncover. He believed
that however much evolution causes animals
or plants to vary in delightful ways, feathers
and foliage hide universal features of structure
or function that reflect unbreakable physical
laws. Identifying those features was Thompson’s goal. His work needed bold generalizations, and he was unafraid to look at nature in a
different way from everyone else. Eighty years
after On Growth and Form first appeared, its
unfashionable philosophy was emulated by the
main protagonists of Whitfield’s book.
In 1997, physicist Geoffrey West and two
ecologists, Jim Brown and Brian Enquist,
developed a theory to explain why many familiar biological patterns vary as quarter-powers
of body mass. For example, a mammal’s heart
rate varies, on average, as its body mass to the
power –1/4; an animal’s lifespan varies as its
body mass to the power 1/4; tree height varies
as body mass to the power 1/4, but tree density
in a forest varies as body mass to the power
–3/4, and so on. These patterns suggest that
there is an underlying order to the living world,
but how could such order possibly arise among
organisms and processes so diverse?

Did Jonathan Swift use quarter-power scaling to decide Gulliver's food intake in Gulliver's Travels?
West, Brown and Enquist answered this
question by explaining why metabolic rate
tends to scale as body mass to the power 3/4,
one of the most fundamental and enigmatic
of biological relationships (and, Whitfield tells
us in passing, one that was implied in Gulliver's Travels). With Thompsonian economy and
elegance, West and his colleagues specified the
kind of branched vessels needed to transport
blood efficiently around an idealized organism,
worked out how those vessels could be packed
optimally into bodies of different sizes, and predicted how the organism’s metabolic rate would
then vary with its mass. The resulting algebra
yielded the magic number 3/4. From this initial triumph, the theory has since evolved spectacularly to account for many broad features of
metabolism, ecosystem processes, life histories,
developmental rates, community structure, the
global carbon cycle, tumour growth and so on,
and, somewhat improbably, even makes predictions about human fertility and the wealth
of nations. Like evolution by natural selection
and the DNA double helix, this theory explains
so much with so little. It is breathtaking in its
ambition and scope.
Any new theory that is apparently so omniscient will attract as many grumbles of doubt as
gasps of admiration, and this one is no exception. Its fans accept as strengths its physical
simplifications, its neglect of biological detail
and its mathematical reasoning, all of which
leave critics uncomfortable. But for many, the
clincher is this: within the limits of its assumptions and the bending of its rules by biological
variation, West, Brown and Enquist’s theory
accurately predicts an extraordinary range of
phenomena. No comparable idea yet matches
it, despite its inevitable limitations.
Whitfield does a fine job of describing the
logic behind the theory and its antecedents.
He unpacks its key assumptions and describes
what the fractal plumbing system responsible
for quarter-power scaling would look like. No
armchair pundit, Whitfield interviewed the
theory’s authors and their colleagues, censused
trees in Costa Rican forests with Enquist’s team
of students and postdocs, and spent a few less
arduous hours having his own metabolism
measured in London. His first-hand experiences at the subject’s coalface are vividly
readable. Whitfield’s later chapters consider
how metabolism relates to biodiversity and
biogeography, and how it might dovetail with
genetics. They also dwell on how these grand
ideas might apply, or not, to the largest part of
the tree of life: microbes. Overall, Whitfield’s
book provides the best available introduction
to West, Brown and Enquist’s big idea.
But is the big idea correct and so universally applicable? Whitfield does not ignore
its critics, but they get relatively thin coverage
despite their prominence in the pages of specialist ecology journals. This is understandable in a book of this type, which sets out to
popularize as much as inform, but it implies
that the theory itself is virtually home and dry.
The most explicit cautionary note comes from
West himself: “If it’s wrong, it’s wrong in some
really subtle way.”
West and his colleagues have been almost as
vigorous in defending their idea as they have
in using it to attack ever more diverse biological problems, and strong personalities on
both sides of the debate have generated robust
exchanges. To his credit, Whitfield resists the
temptation to overdramatize the disputes that
often accompany important scientific developments. Instead he focuses on the power of
a beguilingly simple idea about how the living
world might work, and on the remarkable men
who conceived it.
■
David Robinson is at the School of Biological
Sciences, University of Aberdeen,
Aberdeen AB24 3UU, UK.
BOOKS & ARTS
NATURE|Vol 445|8 February 2007
A big bite of the past
By standing up for themselves between 3 million
and 4 million years ago, Lucy and her fellow
Australopithecus afarensis caused quite a stir. But
bipedalism is just one factor in the rich mix of
human evolution, as amply shown in the revised,
updated and expanded From Lucy to Language
(Simon & Schuster, $65).
Donald Johanson, who discovered Lucy, and his
co-writer Blake Edgar have added the big finds
since 1996 to their brilliant overview, including the
Indonesian ‘hobbit’ Homo floresiensis. And as this
snap of A. afarensis teeth from Ethiopia reveals, the
expanded range of photos — many at actual size — remains jaw-droppingly spectacular.
B.K.
Back to basics
Darwinian Reductionism: Or, How to Stop
Worrying and Love Molecular Biology
by Alex Rosenberg
University of Chicago Press: 2006. 272 pp.
$40, £25.50
Bruce H. Weber
The understanding we have gained about the
molecular basis of living systems and their
processes was a triumph of twentieth-century
science. Since the structure of DNA was elucidated in 1953, molecular biologists have been
deepening our insights into a wide range of biological phenomena. It has been a heady time:
it seemed that mendelian genetics would be
reduced to the macromolecular chemistry of
nucleic acids, with biology set to become a
mature science in the same way as physics and
chemistry. The emerging field of the philosophy of biology inherited the reductionist
framework of logical empiricism. But as our
knowledge of molecular biology deepened,
many philosophers of biology, including David
Hull, Philip Kitcher, Elliott Sober, Evelyn Fox Keller and Paul Griffiths, saw that the reductionist approach faced serious problems.
There is no simple correlation between the
mendelian gene and the increasingly complex
picture provided by molecular genetics. To
make matters worse, the theory to be reduced
was presumably the population-genetic version
of darwinian natural selection, which had from
the start excluded developmental phenomena and their possible link to evolutionary
dynamics. Given this absence, Ernst Mayr, a
founder of the modern evolutionary synthesis,
argued that, although biological systems did
not violate the laws of chemistry and physics,
evolving biological systems have properties
that cannot be reduced to such laws. The crux
of the issue as Mayr saw it was that, whereas the
physical sciences deal only in proximate explanations, the biological sciences also deal with
ultimate explanations relating to evolutionary
descent and the action of selection to produce
adapted function. This, Mayr argued, resulted
in the autonomy of biology with respect to
the physical sciences. Alex Rosenberg’s book
Darwinian Reductionism is a response to the
anti-reductionist position in contemporary
philosophy of biology and to the autonomist
stance of some biologists.
Rosenberg’s thesis is that biological phenomena, including their functional aspects, are best
understood at the level of their macromolecular constituents and their interactions in cellular environments that are themselves made
up of other molecules. This has been, and continues to be, he argues, a successful, progressive
research programme. He focuses in particular
on the great advances in our understanding
of developmental molecular biology, which
teaches us how the genes that are involved in
development function, interact and work with
chemical gradients, for example, to produce
morphology. Rosenberg provides an accessible
review of current ideas on the ‘wiring’ of such
gene complexes and the way they help account
for morphological evolution. He is one of the
first philosophers to consider the implications
of ‘evo-devo’ (evolutionary developmental
biology), and seizes the opportunity to promote
a reductionist interpretation that was simply
not possible with population genetics.
He shows a good grasp of the scientific
details of developmental molecular biology,
but it is unfortunate that in the introduction
he gets the molecular details of sickle-cell
anaemia wrong and then describes a resulting arterial blockage, rather than the lysis of
red blood cells. This should not have survived
the reviewing and editing process, but it is the
only serious lapse. When he returns to the issue
of mutant haemoglobins later in the book,
he gets the molecular details for sickle-cell
haemoglobin correct.
To bridge Mayr’s gap between ultimate (natural selection) causes and proximate (structural and functional) causes, Rosenberg cites
Theodosius Dobzhansky’s dictum that nothing
in biology makes sense except in the light of
evolution. The various molecules in cells and
the gene sequences of the macromolecules are
products of previous selection by which their
proximately causal (structural and functional)
properties were screened. In bringing causality to bear on explanation, he makes use of the
distinction between ‘how possible’ and ‘why
necessary’ explanations. Ultimate historical
explanations of current biological structures
and functions are ‘how possible’ in type. But
why particular molecular arrangements were
selected in the past has the force of ‘why necessary’ explanation. This removes the burden
from selectional dynamics of having to be predictive in order to be reductionist.
Rosenberg realizes that theory reductionism
requires the theory of darwinian natural selection to be grounded in, or reduced to, a principle of natural selection at the level of chemical
systems in which both stability and replicability are selected for. In effect, he produces a
scenario in which biological selection can be
reduced to chemical selection during the origin
of life. This crucial move needs more careful
analysis than Rosenberg provides. He gives,
in effect, a ‘how possible’ explanation for the
emergence of life and biological selection, but
not a ‘why necessary’ one. For that he would
need to deal with the literature of the origin of
life and the more general recent work on complexity. Such an investigation would show that
phenomena in these areas are more emergent
than Rosenberg believes, and that there is a
need to develop a theory of organization and
emergence. Research on emergent complexity
is still a work in progress, but it may undercut
Rosenberg’s thesis by providing a fully naturalistic, non-reductionist account of emergence.
Such a non-reductionist account would not be
anti-reductionist in the sense Rosenberg uses
the term, but would offer a ‘why necessary’
explanation of the emergent phenomena. ■
Bruce H. Weber is emeritus professor in the
Department of Chemistry and Biochemistry,
California State University, Fullerton, and in
the Division of Science and Natural Philosophy,
Bennington College, Bennington, Vermont, USA.
ESSAY
NATURE|Vol 445|8 February 2007
A clash of two cultures
Putting the pieces together
Physicists come from a tradition of looking for all-encompassing laws, but is this the best approach to use when probing complex biological systems?
Evelyn Fox Keller
Biologists often pay little attention to
debates in the philosophy of science. But
one question that has concerned philosophers is rapidly coming to have direct relevance to researchers in the life sciences: are
there laws of biology? That is, does biology
have laws of its own that are universally
applicable? Or are the physical sciences the
exclusive domain of such laws?
Today, biologists are faced with an avalanche of data, made available by the successes of genomics and by the development
of instruments that track biological processes in unprecedented detail. To unpack
how proteins, genes and metabolites operate as components of complex networks,
modelling and other quantitative tools that
are well established in the physical sciences
— as well as the involvement of physical
scientists — are fast becoming an essential part of biological practice. Accordingly,
questions about just how much specificity
needs to be included in these models,
about where simplifying assumptions are
appropriate, and about when (if ever) the
search for laws of biology is useful, have
acquired pragmatic importance — even
some urgency.
In the past, biologists have been little
concerned about whether their findings
might achieve the status of a law. And
even when findings seem to be so general
as to warrant thinking of them as a law,
the discovery of limits to their generality
has not been seen as a problem. Think,
for example, of Mendel’s laws, the central
dogma or even the ‘law’ of natural selection. Exceptions to these presumed laws
are no cause for alarm; nor do they send
biologists back to the drawing board in
search of better, exception-free laws. They
are simply reminders of how complex biology is in reality.
Physical scientists, however, come from
a different tradition — one in which the
search for universal laws has taken high
priority. Indeed, the success of physics has
led many to conclude that such laws are the
sine qua non of a proper science, and provide the meaning of what a ‘fundamental
explanation’ is.
Physicists’ and biologists’ different attitudes towards the general and the particular have coexisted for at least a century
in the time-honoured fashion of species
dividing their turf. But today, with the eager
recruitment of physicists, mathematicians,
computer scientists and engineers to the
life sciences, and the plethora of institutes,
departments and centres that have recently
sprung up under the name of ‘systems biology’, such tensions have come to the fore.
Perhaps the only common denominator
joining the efforts currently included under
the systems-biology umbrella is their
subject: biological systems with large
numbers of parts, almost all of which
are interrelated in complex ways. But
although methods, research strategies and goals vary widely, they
can roughly be aligned with
one or the other of the attitudes
I’ve described.
For example, a rash of studies has reported the generality of 'scale-free networks' in biological systems. In such networks, the distribution of nodal connections follows a power law (that is, the frequency of nodes with connectivity k falls off as k^(−α), where α is a constant);
furthermore, the network architecture is
assumed to be generated by ‘growth and
preferential attachment’ (as new connections form, they attach to a node with a
probability proportional to the existing
number of connections). The scale-free
model has been claimed to apply to complex systems of all sorts, including metabolic and protein-interaction networks.
Indeed, some authors have suggested that
scale-free networks are a ‘universal architecture’ and ‘one of the very few universal
mathematical laws of life’.
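To make the mechanism concrete, here is a minimal Python sketch (with invented parameters) of growth with preferential attachment; sampling attachment targets from a degree-weighted list is the standard trick:

    import random
    from collections import Counter

    def preferential_attachment(n_nodes, m=2, seed=0):
        random.seed(seed)
        edges = [(0, 1), (1, 2), (0, 2)]   # small seed clique
        # Each node appears once per unit of degree, so a uniform draw
        # from 'targets' picks nodes in proportion to their connectivity.
        targets = [u for e in edges for u in e]
        for new in range(3, n_nodes):
            chosen = set()
            while len(chosen) < m:          # attach m edges per new node
                chosen.add(random.choice(targets))
            for t in chosen:
                edges.append((new, t))
                targets.extend([new, t])
        return edges

    degree = Counter(u for e in preferential_attachment(10000) for u in e)
    for k, count in sorted(Counter(degree.values()).items())[:8]:
        print(k, count)    # counts fall off roughly as a power of k

The essay's point survives the demonstration: the power law emerges from this particular generative story, but nothing in an observed degree distribution certifies that this story, rather than another, produced it.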
But such claims are problematic on two
counts: first, power laws, although common, are not as ubiquitous as was thought;
second, and far more importantly, the
presence of such distributions tells us
nothing about the mechanisms that give
rise to them. ‘Growth and preferential
attachment’ is only one of many ways of
generating such distributions, and seems
to be characterized by a performance so
poor as to make it a very unlikely product
of evolution.
How appropriate is it to look for allencompassing laws to describe the properties of biological systems? By its very
nature, life is both contingent and particular, each organism the product of eons of
tinkering, of building on what had accumulated over the course of a particular
evolutionary trajectory. Of course, the laws
of physics and chemistry are crucial. But,
beyond such laws, biological generalizations (with the possible exception of natural selection) may need to be provisional
because of evolution, and because of
the historical contingencies on
which both the emergence
of life and its elaboration
depended.
Perhaps it is time to face the issues head on, and ask just when it is useful to simplify, to generalize, to search for unifying principles, and when it is not. There is also a question of appropriate analytical tools. Biologists clearly recognize their need for new tools; ought physical scientists entering systems biology to consider that they too might need
different methods of analysis — tools better suited to the importance of specificity
in biological processes? Finally, to what
extent will physicists’ focus on biology
demand a shift in epistemological goals,
even the abandonment of their traditional
holy grail of universal ‘laws’? These are
hard questions, but they may be crucial to
the forging of productive research strategies in systems biology. Even though we
cannot expect to find any laws governing
the search for generalities in biology, some
rough, pragmatic guidelines could be very
useful indeed.
■
Evelyn Fox Keller is at the Massachusetts
Institute of Technology, 77 Mass Avenue,
E51-185, Cambridge, Massachusetts
02139, USA, and a Blaise Pascal chair in
Paris, France.
FURTHER READING
Barabási, A. L. & Bonabeau, E. Sci. Am. 288, 50–59
(2003).
Beatty, J. in Concepts, Theories and Rationality in the
Biological Sciences (eds Lennox, J. G. & Wolters, G.)
45–81 (Univ. Pittsburgh Press, Pittsburgh, 1995).
Keller, E. F. BioEssays 27, 1060–1068 (2005).
Keller, E. F. Making Sense of Life: Explaining Biological
Development with Models, Metaphors, and Machines
(Harvard Univ. Press, Cambridge, MA, 2002).
For other essays in this series, see http://
nature.com/nature/focus/arts/connections/
index.html
ESSAY
NATURE|Vol 446|8 March 2007
Control without hierarchy
Putting the pieces together
Understanding how particular natural systems operate without central control will reveal whether such systems share general properties.
Deborah M. Gordon
Because most of the dynamic systems that
we design, from machines to governments,
are based on hierarchical control, it is difficult to imagine a system in which the
parts use only local information and the
whole thing directs itself. To explain how
biological systems operate without central
control — embryos, brains and social-insect colonies are familiar examples
— we often fall back on metaphors from
our own products, such as blueprints and
programmes. But these metaphors don’t
correspond to the way a living system works, with parts
linked in regulatory networks
that respond to environment
and context.
Recently, ideas about complexity, self-organization,
and emergence — when the
whole is greater than the sum
of its parts — have come into
fashion as alternatives for
metaphors of control. But
such explanations offer only
smoke and mirrors, functioning merely to provide names
for what we can’t explain; they
elicit for me the same dissatisfaction I feel when a physicist
says that a particle’s behaviour
is caused by the equivalence
of two terms in an equation.
Perhaps there can be a general theory of
complex systems, but it is clear we don’t
have one yet.
A better route to understanding the
dynamics of apparently self-organizing
systems is to focus on the details of specific
systems. This will reveal whether there are
general laws. I study seed-eating ant colonies in the southwestern United States. In
each ant colony, the queen is merely an
egg-layer, not an authority figure, and no
ant directs the behaviour of others. Thus
the coordinated behaviour of colonies
arises from the ways that workers use local
information.
If you were the chief executive of an ant
colony, you would never let it forage in the
way that harvester ant colonies do. Put
down a pile of delicious mixed bird-seed,
right next to a foraging trail, and the ants
will walk right over it on their way to search
for buried shreds of seeds 10 metres further on. This behaviour makes sense only
as the outcome of the network of interactions that regulates foraging behaviour.
Foraging begins early in the morning
when a small group of patrollers leave the
nest mound, meander around the foraging area and eventually return to the nest.
A high rate of interactions with returning
patrollers is what gets the foragers going,
and through chemical signals the patrollers
determine the foragers’ direction of travel.
Foragers tend to leave in the direction that
the patrollers return from. If a patroller
can leave and return safely, without getting
blown away by heavy wind or eaten by a
horned lizard, then so can a forager.
Once foraging begins, the number of
ants that are out foraging at any time is
regulated by how quickly foragers come
back with seeds. Each forager travels away
from the nest with a stream of other foragers, then leaves the trail to search for food.
When it finds a seed, it brings it directly
back to the nest. The duration of a foraging trip depends largely on how long the
forager has to search before it finds food.
So the rate at which foragers bring food
back to the nest is related to the availability
of food that day. Foragers returning from
successful trips stimulate others to leave
the nest in search of food.
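A toy caricature (my construction, not Gordon's model, and all rates are invented) shows how this feedback alone can tune the number foraging to food availability:

    import random

    def simulate(find_prob, colony=200, give_up=0.2, gain=1.5,
                 steps=500, seed=1):
        random.seed(seed)
        nest, out, successes = colony, 0, 0
        for _ in range(steps):
            # Departures: a patroller-driven trickle, plus roughly 'gain'
            # new trips stimulated per seed brought home last step.
            departures = min(nest, 2 + round(gain * successes))
            nest -= departures
            out += departures
            # Each forager out either finds a seed and carries it home...
            successes = sum(random.random() < find_prob for _ in range(out))
            # ...or eventually gives up and returns empty-handed.
            give_ups = sum(random.random() < give_up
                           for _ in range(out - successes))
            out -= successes + give_ups
            nest += successes + give_ups
        return out

    for p in (0.05, 0.15, 0.3):   # poor, middling and rich foraging days
        print(p, simulate(p))

No ant assesses anything: more seeds simply mean quicker successful returns, quicker returns mean more departures, and the number out foraging settles at a level set by the food supply.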
But why do foragers walk right past seed
baits? We learned recently that during a day,
each forager keeps returning to the same
patch to search for seeds. Once a forager’s
destination for the day is set, apparently
by the first find of the day, even a small
mountain of seeds is not enough to change
it. In this system, the success of a forager
in one place, returning quickly to the nest
with a seed, stimulates another forager to
travel to a different place. A good day for
foraging in one place usually means a good
day everywhere; for example, the morning
after a heavy rain, seeds buried in the soil
are exposed and can be found quickly.
The regulation of foraging in harvester
ants does not use recruitment, in which
some individuals lead others to a place with
abundant food. Instead, without requiring
any ant to assess anything or direct others,
a decentralized system of interactions rapidly tunes the numbers foraging to current
food availability.
It is difficult to resist the idea that general principles underlie non-hierarchical
systems, such as ant colonies
and brains. And because organizations without hierarchy are
unfamiliar, broad analogies
between systems are reassuring. But the hope that general
principles will explain the regulation of all the diverse complex
dynamical systems that we find
in nature can lead to ignoring
anything that doesn’t fit a preexisting model.
When we learn more about
the specifics of such systems,
we will see where analogies
between them are useful and
where they break down. An
ant colony can be compared to
a neural network, but how do
colonies and brains, both using
interactions among parts that
respond only to local stimuli, each solve
their own distinct set of problems?
Life in all its forms is messy, surprising
and complicated. Rather than look for perfect efficiency, or for another example of
the same process observed elsewhere, we
should ask how each system manages to
work well enough, most of the time, that
embryos become recognizable organisms,
brains learn and remember, and ants cover
the planet.
■
Deborah M. Gordon is in the Department
of Biological Science, Stanford University,
Stanford, California 94305-5020, USA.
FURTHER READING
Gordon, D. M. Ants at Work (W. W. Norton and Co., New
York, 2000).
Haraway, D. J. Crystals, Fabrics, and Fields: Metaphors of
Organicism in Twentieth-Century Developmental Biology
(Yale Univ. Press, New Haven, 1976).
Lewontin, R. C. The Triple Helix: Gene, Organism and
Environment (Harvard Univ. Press, Cambridge, 2000).
For other essays in this series, see http://
nature.com/nature/focus/arts/connections/
index.html
ESSAY
NATURE|Vol 446|22 March 2007
Frontier at your fingertips
Putting the pieces together
Between the nano- and micrometre scales, the collective behaviour of matter can give rise to startling emergent properties that hint at the nexus between biology and physics.
Piers Coleman
The Hitchhiker’s Guide to the Galaxy
famously features a supercomputer, Deep
Thought, that after millions of years spent
calculating “the answer to the ultimate
question of life, the Universe and everything”, reveals it to be 42. Douglas Adams’s
cruel parody of reductionism holds a
certain sway in physics today. Our 42 is
Schroedinger’s many-body equation: a set
of relations whose complexity balloons so
rapidly that we cannot trace its full consequences up to macroscopic scales. All is
well with this equation, provided we want
to understand the workings of isolated
atoms or molecules up to sizes of about
a nanometre. But between the nanometre and the micrometre wonderful things
start to occur that severely challenge our
understanding. Physicists have borrowed
the term ‘emergence’ from evolutionary
biology to describe these phenomena,
which are driven by the collective behaviour of matter.
Take, for instance, the pressure of a gas
— a cooperative property of large numbers
of particles that is not anticipated from the
behaviour of one particle alone. Although
Newton’s laws of motion account for it,
it wasn’t until more than a century after
Newton that James Clerk Maxwell developed the statistical description of atoms
necessary for understanding pressure.
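Maxwell's move can be compressed into one line (a textbook summary, not a formula from this essay): for molecules of mass m at number density n with mean-square speed \langle v^2 \rangle, Newtonian collisions with the walls give

    \[ p = \tfrac{1}{3}\, n\, m\, \langle v^{2} \rangle = n k_{\mathrm{B}} T, \]

where the second equality uses the statistical definition of temperature. The pressure belongs to the ensemble, not to any single trajectory.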
The potential for quantum matter to
develop emergent properties is far more
startling. Atoms of niobium and gold, individually similar, combine to form crystals
that, kept cold, show dramatically different
properties. Electrons roam free across gold
crystals, forming the conducting fluid that
gives gold its lustrous metallic properties.
Up to about 30 nanometres, there is little
difference between gold and niobium.
It’s beyond this point that the electrons
in niobium start binding together into
the coupled electrons known as ‘Cooper
pairs’. By the time we reach the micrometre scale, these pairs have congregated
in their billions to form a single quantum state, transforming the crystal into
an entirely new metallic state — that of a
superconductor, which conducts without
resistance, excludes magnetic fields and
has the ability to levitate magnets.
Superconductivity is only the start. In
assemblies of softer, organic molecules, a
tenth of a micrometre is big enough for the
emergence of life. Self-sustaining microbes
little more than 200 nanometres in size
have recently been discovered. Although
we understand the principles that govern the superconductor, we have not yet
grasped those that govern the emergence
of life on roughly the same spatial scale.
In fact, we are quite some distance from
this goal, but it is recognized as the far
edge of a frontier that will link biology and
physics. Condensed-matter physicists have
taken another cue from evolution, and
believe that a key to understanding more
complex forms of collective behaviour in
matter lies in competition not between
species, but between different forms of
order. For example, high-temperature
superconductors — materials that develop
superconductivity at liquid-nitrogen temperatures — form in the presence of a
competition between insulating magnetic
behaviour and conducting metallic behaviour. Multi-ferroic materials, which couple
magnetic with electric polarization, are
found to develop when magnetism competes with lattice-distorting instabilities.
A related idea is ‘criticality’ — the concept that the root of new order lies at the
point of instability between one phase and
another. So, at a critical point, the noisy
fluctuations of the emergent order engulf
a material, transforming it into a state of
matter that, like a Jackson Pollock painting,
is correlated and self-similar on all scales.
Classical critical points are driven by thermal noise, but today we are particularly
interested in ‘quantum phase transitions’
involving quantum noise: jigglings that
result from Heisenberg’s uncertainty principle. Unlike its thermal counterpart, quantum noise leads to diverging correlations
that spread out not just in space, but also in
time. Even though quantum phase transitions occur at absolute zero, we’re finding
that critical quantum fluctuations have a
profound effect at finite temperatures.
For example, ‘quantum critical metals’
develop electrical resistivity with a strange, almost linear temperature dependence, and a marked predisposition towards developing superconductivity.
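Schematically (a standard shorthand, not a formula from this essay), writing FL for an ordinary Fermi-liquid metal and QC for a quantum critical one,

    \[ \rho_{\mathrm{FL}}(T) \approx \rho_0 + A T^{2}, \qquad \rho_{\mathrm{QC}}(T) \approx \rho_0 + A' T, \]

the quadratic law being the fingerprint of conventional metallic behaviour.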
The space-time aspect of quantum phase
transitions gives them a cosmological flavour and there do seem to be many links,
physical and mathematical, with current
interests in string theory and cosmology.
Another fascinating thread here is that
like life, these inanimate transformations
involve the growth of processes that are
correlated and self-sustaining in time.
Some believe that emergence implies an
abandonment of reductionism in favour of
a more hierarchical structure of science,
with disconnected principles developing
at each level. Perhaps. But in almost every
branch of physics, from string theory to
condensed-matter physics, we find examples of collective, emergent behaviour that
share common principles. For example, the
mechanism that causes a superconductor
to weaken and expel magnetic fields from
its interior is also responsible for the weak
nuclear force — which plays a central role
in making the Sun shine. Superconductors
exposed general principles that were used
to account for the weak nuclear force.
To me, this suggests that emergence
does not spell the end for reductionism,
but rather indicates that it must be realigned to
embrace collective behaviour as an integral
part of our Universe. As we unravel nature
by breaking it into its basic components,
avoiding the problem of ‘42’ means we
also need to seek the principles that govern collective behaviour. Those include
statistical mechanics and the laws of evolution, certainly, but the new reductionism that we need to make the leap into the
realm between nano and micro will surely
demand a new set of principles linking
these two extremes.
■
Piers Coleman is in the Department
of Physics and Astronomy, Rutgers
University, 136 Frelinghuysen Road,
Piscataway, New Jersey 08854-8019,
USA.
FURTHER READING
Anderson, P. W. Science 177, 393 (1972).
Laughlin, R. B. A Different Universe (Basic Books, 2005).
Davis, J. C. http://musicofthequantum.rutgers.edu
(2005).
Coleman P. & Schofield, A. J. Nature 433, 226–229
(2005).
For other essays in this series, see http://
nature.com/nature/focus/arts/connections/
index.html
Vol 446|29 March 2007
BOOKS & ARTS
All systems go
Three authors present very different views of the developing field of systems biology.
Life: An Introduction to Complex
Systems Biology
by Kunihiko Kaneko
Springer: 2006. 383 pp. £61.50, $99
An Introduction to Systems Biology:
Design Principles of Biological Circuits
by Uri Alon
Chapman & Hall: 2006. 320 pp. £28.99
Systems Biology: Properties of
Reconstructed Networks
by Bernhard Palsson
Cambridge University Press: 2006.
334 pp. £35, $75
Eric Werner
The authors of three books profess to give an
introduction to systems biology, but each takes
a very different approach. Such divergence
might be expected from a field that is still
emerging and broad in scope. Yet systems biology is not as new as many of its practitioners
like to claim. It is a mutated soup of artificial
life, computational biology and computational
chemistry, with a bit of mathematics, physics
and computer science thrown in. Because it is
so broad and has few recognized boundaries
and plenty of funding, it is attractive to anyone
who has ever thought about life and has some
relevant technical expertise.
The discovery that dynamic systems can
exhibit complex, chaotic and self-organizing
behaviour made many scientists see analogies
with living systems. In Life, Kunihiko Kaneko
attempts to describe living organisms as complex systems akin to those seen in chemistry
and physics. The problem is that the theory
of dynamic complex systems used in physics
and chemistry may have little to do with biological organisms and the way they grow and
function.
For instance, Kaneko views differentiation
from a group of uniform cells as resulting from
slight stochastic perturbations that are gradually amplified by intracellular and intercellular
interactions. After a while, these become fixed,
resulting in a pattern of different cell types.
One problem with this theory is that it gives no
account of how differentiation repeats itself so
consistently in the development of organisms.
It fails to explain why identical twins remain
identical, and why horse embryos develop into
horses, not chimpanzees.
Kaneko also claims that stem cells are fundamentally unstable and that this leads to different cell types. But stem cells are not unstable: their activity is determined by complex interactions governed by a range of control signals. Rather, when stimulated by signals or by their own genetic clock, they start a very precise process of differentiation that is dependent on internal and external control signals.
There is one big player missing from the
dynamic-systems account: the genome. For
this reason, it seems to me that dynamic-systems theory fails to give sufficient insight
into biological processes. Cells are highly
complex agents containing a vast amount of
control information that cannot be reduced
to a few simple rules (or even sophisticated
mathematical functions) that attempt to
describe cell dynamics and cell interactions
externally without recourse to the information
contained in the genome. A similar problem
lies at the heart of the failure of Turing-like
models to describe embryonic development.
Kaneko provides a good summary of the standard weaknesses of Turing’s theory of development, but fails to see that some of the same
weaknesses apply to his own ideas as well.
Kaneko assumes that because complex
patterns can form from simple interacting
physical elements, such interactions can also
generate arbitrary complexity. Even a simple
counting algorithm that sequentially generates
every integer will generate every complex state
(binary sequences), but no algorithm can generate any particular number or state and stop
without having the information contained in
that number or state. Moreover, any process
that generates a complex structure and stops
must contain the information required to generate that structure. This is why cells need the
vast amount of information encoded in their
genome. Kaneko and many others who have
fallen for the myth of interactionism, complex-systems theory or Turing-like models are in
fundamental conflict with the complexity
conservation principle, which states that a
space-time event generated by a set of agents
cannot be more complex than the information available to the agents.
Evolution gets round this principle by the stochastic generation of new states. Stochastic processes can be random, so they can generate arbitrary complexity, within physical and chemical constraints, because random strings or structures are maximally complex.
Uri Alon's An Introduction to Systems Biology is a superb, beautifully written and organized work that takes an engineering approach to systems biology (see also Connections, page 497). Alon provides nicely written appendices to explain the basic mathematical and biological concepts clearly and succinctly without interfering with the main text. He starts with a mathematical description of transcriptional activation and then describes some basic transcription-network motifs (patterns) that can then be combined to form larger networks.
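That starting point can be caricatured in a few lines of Python: a gene whose production rate follows a Hill input function, balanced against first-order degradation (the parameter values here are invented for illustration):

    def hill_activation(u, beta=1.0, K=0.5, n=2):
        # Production rate of protein X given activator concentration u.
        return beta * u**n / (K**n + u**n)

    def steady_state(u, alpha=0.1, dt=0.1, t_end=200.0):
        # Euler-integrate dX/dt = production - alpha * X from X(0) = 0.
        x, t = 0.0, 0.0
        while t < t_end:
            x += dt * (hill_activation(u) - alpha * x)
            t += dt
        return x

    for u in (0.1, 0.5, 2.0):               # weak, half-maximal, saturating
        print(u, round(steady_state(u), 2)) # approaches beta/alpha = 10

Motifs such as the feed-forward loop are then built by wiring two or three such input functions together.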
The elegance and simplicity of Alon's book might lead the reader to believe that all the basics of the control of living systems have been worked out. It only remains, it seems, to combine the network motifs to get a total understanding of networks in the dynamics and development of living systems.
All is fine except that on the very first page of the book, Alon defines networks as functions that map inputs to protein production. In other words, the meaning of genomic transcription networks is restricted to the production of proteins or cell parts. Granted, some of these proteins are transcription factors that in turn activate other genes and, thereby, are a key part of the network itself. But this prejudices the enterprise by presupposing that protein states are all there is to understanding life. Such a view is bottom-up in the extreme.
What's missing is a relation between higher-level organizational, functional states and networks. This is indicative of a more fundamental problem. Because Alon focuses on very basic low-level circuits, the global organization and its effects are largely ignored.
In some ways, Bernhard Palsson's Systems Biology is a more practical book for those wishing to understand and analyse actual biological data and systems. It directly relates chemistry to networks, processes and functions in living systems. The book's main focus is on metabolic networks of single cells such as bacteria. Palsson argues that classical modelling using differential equations requires complete information about the state of the system. Such data, however, are not available for complex biological systems. Palsson's response is to accept biological uncertainty. The approach is to describe a space of all the possible states of a system or network (relative to a set of dimensions of interest) and then use biological and chemical data to constrain this space. This is similar to the process of entropy reduction described in statistical thermodynamics.
Specifically, Palsson espouses a mathematically ingenious method of formalizing metabolic reactions, pathways and networks, and
uses this to formalize uncertainty about biological chemical states. This space of possibilities can then be systematically constrained by
high- and low-level information. In this way,
he manages to formalize states of uncertainty
in a biological system so he can extract useful
predictive information about it, despite the
fact that many of its parameters and values of
variables are unknown.
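This constraint-based style of calculation is easiest to see in a toy example (mine, not Palsson's: a flux-balance calculation over an invented three-reaction 'network', using SciPy's linear-programming routine):

    from scipy.optimize import linprog

    # Reactions: v1 imports metabolite A; v2 converts A to B;
    # v3 drains B into biomass.
    S = [[1, -1,  0],     # steady state for A: produced by v1, used by v2
         [0,  1, -1]]     # steady state for B: produced by v2, used by v3
    bounds = [(0, 10),    # a measured uptake capacity caps v1
              (0, None),  # v2 otherwise unconstrained
              (0, None)]  # v3 otherwise unconstrained
    # Maximize biomass flux v3 (linprog minimizes, so negate it) subject
    # to S v = 0: the constraints, not a full kinetic model, fix the answer.
    result = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    print(result.x)       # -> [10. 10. 10.]

Nothing kinetic was specified, yet the feasible space collapses to a useful prediction, which is the spirit of the approach.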
Unfortunately, Palsson’s book is a difficult
read. It is not well organized and refers the
reader to later chapters to explain concepts
needed in earlier ones, and vice versa. Often
no explanation of basic concepts is provided;
additional appendices would have been helpful.
Palsson admits that he had help writing some
of the chapters, and the book does feel like
the work of a committee. However, it brings
together many of Palsson’s contributions to
metabolic network formalization and analysis
and, for this reason, deserves to be part of a
systems-biology curriculum. I look forward to
improvements in the promised future editions.
Of the three books, Palsson’s is the most
practical and immediately relevant to modelling low-level metabolic networks. Alon
investigates networks at a higher level, including genomic regulatory networks. He does
an excellent job of explaining and motivating a useful toolbox of engineering models
and methods using network-based controls.
Kaneko’s book is conceptually deep but further
removed from Palsson’s chemical networks
and even from Alon’s more abstract regulatory networks. Even though I am critical of
his approach, the book is filled with insights
and useful criticisms of some of the standard
models and theories used in systems biology,
and in biology generally. All three books will
be valuable and non-overlapping additions to
a systems-biology curriculum.
■
Eric Werner is in the Department of Physiology,
Anatomy and Genetics, University of Oxford,
Parks Road, Oxford OX1 3PT, UK.
A little movement
Middle World: The Restless Heart of
Matter and Life
by Mark Haw
Macmillan Science: 2006. 256 pp. £16.99,
$24.95
Tom McLeish
The fascinating tale of brownian motion has
been looking for a story-teller for a long time.
The tangled threads knot together, rather than
begin, in the nineteenth century with botanist
Robert Brown’s original observations of the
random, ceaseless motion of particles in pollen
grains of Clarkia pulchella. The threads lead
back in time to medieval theories of matter
that tangled physics with theology — a pattern
that ran deep through the work of Galileo
and Newton — and further back still to the
Epicureans. Going forwards from Brown, they
twist through the nineteenth century’s ambivalence towards molecular theory and the thermodynamics of Sadi Carnot and Lord Kelvin.
Weaving through the kinetic theory of James
Clerk Maxwell and the statistical mechanics
of Ludwig Boltzmann that finally grasped
the physics of randomness, they lead to the
complementary beauties of Einstein’s theory
of brownian motion and Jean Baptiste Perrin’s
experiments that led to modern soft-matter
physics and a new understanding of the role
of brownian dynamics in molecular biology.
This is a remarkable story of science and scientists that leaves no major science untouched
and summons onto the stage a colourful and
eminent cast from centuries of endeavour.
In Middle World, Mark Haw provides an
accessible and racy account that succeeds in
opening up technical ideas without losing
momentum. Haw is not insensitive to dramatic
irony, and makes a satisfying conclusion out of the return of brownian
motion to illuminate dynamical processes in
biology, where it originated, after spending a
century wandering the worlds of physics and
physical chemistry. We fleetingly visit the role
of brownian motion in polymer physics, oxygen capture by myoglobin, the protein-folding
problem and the question of how molecular
motors (the cell’s cargo transporters) can possibly execute controlled and directed motion
in a turbulent brownian world. It’s not quite
T. S. Eliot, but we are almost back where we
began, yet knowing for the first time.
Although it is a fitting window onto a selection of hot topics in current science, the final
‘contemporary’ section drops the connected
storyline of the preceding historical material.
ESSAY
NATURE|Vol 446|19 April 2007
Putting the pieces together
Rules of engagement
Complex engineered and biological systems share protocol-based architectures that make them robust and evolvable, but with hidden fragilities to rare perturbations.
John Doyle and Marie Csete
Chaos, fractals, random graphs and power
laws inspire a popular view of complexity in which behaviours that are typically
unpredictable and fragile ‘emerge’ from
simple interconnections among like components. But applied to the study of highly
evolved systems, this attractively simple
view has led to widespread confusion. A
different, more rewarding take on complexity focuses on organization, protocols
and architecture, and includes the ‘emergent’ as an extreme special case within a
much richer dynamical perspective.
Engineers can learn from biology.
Biological systems are robust and evolvable in the face of even large changes in
environment and system components, yet
can be extremely fragile to small perturbations. Such universally robust yet fragile
(RYF) complexity is found wherever we
look. Take the evolution of microbes into
humans (robustness of lineages on long
timescales) punctuated by mass extinctions (extreme fragility). Or diabetes and
cancer, conditions resulting from faulty
biological control mechanisms, normally
so robust as to go unnoticed.
But RYF complexity is not confined to
biology. The complexity of technology
is exploding around us, but in ways that
remain largely hidden. Modern institutions
and technologies facilitate robustness and
accelerate evolution, but also enable major
catastrophes, from network crashes to climate change. Such RYF complexity presents
a major challenge to engineers, physicians
and, increasingly, scientists. Understanding RYF means understanding architecture
— the most universal, high-level, persistent
elements of organization — and protocols.
Protocols define how diverse modules
interact, and architecture defines how sets
of protocols are organized.
So biologists can learn from engineering. The Internet is an obvious example of
how a protocol-based architecture facilitates evolution and robustness. If you are
reading this on the Internet, your laptop
hardware (display, keyboard and so on)
and software (web browser) both obey
sets of protocols for exchanging signals
and files. Subject to protocol-driven constraints, you can access an incredible diversity of hardware and software resources.
But it is the architecture of TCP/IP
(Transmission Control and Internet
Protocols) that is more fundamental. The
hourglass protocol ‘stack’ has a thin, hidden ‘waist’ of universally shared feedback
control (TCP/IP) between the visible
upper (application software) and lower
(hardware) layers. Roughly, IP controls
the routes for packet flows and thus, available bandwidth. Applications split files
into packets, and TCP controls their rates
and guarantees delivery. This allows ‘plugand-play’ between modules that obey
shared protocols; any set of applications
that ‘talks’ TCP can run transparently and
robustly on any set of hardware that talks
IP, accelerating the evolution of TCP/IPbased networks.
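The plug-and-play point can be made literal in a dozen lines of Python (the port number and message are arbitrary): any client that speaks TCP reaches any server that speaks TCP, whatever sits underneath.

    import socket
    import threading

    ready = threading.Event()

    def echo_server(host="127.0.0.1", port=5007):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()                      # signal that we are listening
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))    # echo one message back
        conn.close()
        srv.close()

    threading.Thread(target=echo_server, daemon=True).start()
    ready.wait()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 5007))
    client.sendall(b"hello across the waist of the hourglass")
    print(client.recv(1024))
    client.close()

Neither endpoint knows or cares what the other's display, operating system or network card looks like; the shared protocol is the whole interface.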
Similarly, microbial genes that talk transcription and translation protocols can move from one microbe to another by horizontal gene transfer, also accelerating evolution in a kind of bacterial internet. But as with the technological Internet, the newly acquired proteins work better when they can use additional shared protocols such as group transfers. Thus selection acting at the protocol level could evolve and preserve shared architecture, essentially evolving evolvability.
All life and advanced technologies rely
on protocol-based architectures. The evolvability of microbes and IP-based networks
illustrates how dramatic, novel, dynamic
changes on all scales of time and space can
also be coherent, responsive, functional
and adaptive. New genes and pathways,
laptops and applications, even whole networks, can plug-and-play, as long as they
obey protocols. Biologists can even swap
gene sequences over the Internet in a kind
of synthetic horizontal gene transfer.
Typical behaviour is fine-tuned with this
elaborate control and thus appears boringly
robust despite large internal and external
perturbations. As a result, complexity and
fragility are largely hidden, often revealed
only by catastrophic failures. Because components come and go, control systems that
reallocate network resources easily confer
robustness to outright failures, whereas
violations of protocols by even small
random rewiring can be catastrophic. So
programmed cell (or component) ‘death’
is a common strategy to prevent local
failures from cascading system-wide.
The greatest fragility stemming from
a reliance on protocols is that standardized interfaces and building blocks can
be easily hijacked. So that which enables
horizontal gene transfer, the web and email
also aids viruses and other parasites. Large
structured rearrangements can be tolerated, whereas small random or targeted
changes that subtly violate protocols can
be disastrous.
By contrast, in the popular view of complexity described at the beginning, modelling and analysis are both simplified because tuning, structure and details are minimized, as is environmental uncertainty; and superficial patterns in ensemble averages (not protocols) define modularity. An unfortunate clash of cultures arises because architecture-based RYF complexity is utterly bewildering when viewed from this popular perspective. But the search for a deep simplicity and unity remains a common goal.
Fortunately, our growing need for
robust, evolvable technological networks
means the tools for engineering architectures and protocols are becoming more
accessible. These will bring rigour and
relevance to the study of complexity generally, but not at the expense of structure and
detail. Quite the contrary: both architectures and theories to study them are most
successful when they facilitate rather than
ignore the inclusion of domain-specific
details and expertise.
■
John Doyle is at the California Institute of Technology, Pasadena, California 91125-8100, USA; Marie Csete is at Emory University, Atlanta, Georgia 30322, USA.
FURTHER READING
Doyle et al. Proc. Natl Acad. Sci. USA 102, 14497–14502
(2005).
Moritz, M. A. et al. Proc. Natl Acad. Sci. USA 102, 17912–
17917 (2005).
For other essays in this series, see http://
nature.com/nature/focus/arts/connections/
index.html
NEWS & VIEWS
be fine-tuned for specific purposes (although
the ‘green’ credentials of ionic liquids have
been questioned by reports that some of these
compounds are toxic5). It was, therefore, inevitable that new applications would emerge from
the growing number of scientific and technological disciplines studying these liquids.
Ionic liquids are known for their distinct
physical properties (such as low or non-volatility, thermal stability and large ranges of
temperatures over which they are liquids6),
chemical properties (such as resistance to degradation, antistatic behaviour, chirality and high
energy density) and biological activities (such
as antimicrobial and analgesic properties7). But
what is less appreciated is that these properties
in individual ionic liquids can be combined in
composite materials to afford multifunctional
designer liquids. It is therefore refreshing to see
a study2 that focuses on the unique attributes
and uses of ionic liquids, rather than on whether
they are green or toxic.
Borra et al.2 use an ionic liquid to solve a
problem in making liquid mirrors for telescopes (Fig. 1). Liquid mirrors have several
advantages over traditional mirrors for such
applications — for example, they have excellent optical properties and their surfaces form
perfectly smooth parabolas. It has been proposed that a telescope on the Moon with a large
liquid mirror (20–100 metres across) could
provide unprecedented views of deep optical
fields, so advancing our knowledge of the early
Universe.
A major roadblock to the implementation
of a liquid-mirror telescope is finding a stable
liquid support for the reflective coating that
can resist the extreme environment of space.
Thus, the support must have high viscosity, a
very low melting point or glass transition temperature, and no vapour pressure. Borra et al.2
used vacuum vaporization to coat silver onto
several liquids, including silicone oil, a block
copolymer and an ionic liquid. Of the liquids
tested, the ionic liquid came closest to having
the desired physical properties, and also yielded
the most reflective material with a stable coating of silver. Furthermore, the coating process
could be improved by depositing chromium on
the ionic liquid before the silver, which provided
a surface with even better optical quality than
silver alone. Further improvements to the ionic
liquid will be necessary before it can be used
in a space telescope. Nevertheless, this report2
surpasses most descriptions of these liquids
because the application depends completely
on the physical and chemical properties of
the ionic liquid — in fact, it seems that only an
ionic liquid will do.
The approach taken by Borra et al.2 was first
to define the properties needed for an ideal
liquid-mirror support, and then to identify an
ionic liquid as being suited for that purpose.
They focused mainly on the physical properties
of the liquid, but its chemical properties should
also be carefully considered — for example, the
solubility and reactivity of the reflecting metal
(or metallic colloid) with the liquid. Such considerations may lead to improved methods of
metal deposition, or to new forms of liquid
mirrors.
One problem for the future is finding exactly
the right ionic liquid for the job, even though
the properties required for a liquid-mirror
material are known. Given the vast number of
possible ionic liquids to choose from, and the
fact that few rules exist for customizing them
(other than rules of thumb), the selection of an
appropriate ionic liquid is arduous and often
hit-and-miss. Anyone developing ionic liquids for technological applications faces this
challenge, and there is always the danger that
a competitor will chance upon a better choice.
Hope lies in the major efforts now being made
to model and predict the properties of ionic
liquids, although such predictive methods will
take time to develop.
In the meantime, a knowledge base of interdisciplinary data is rapidly being generated
for ionic liquids. This should fuel innovative
ideas and applications that will take these
liquids far beyond the realm of mere solvents.
The idea that ionic liquids could pave the way
for exciting fundamental science has yet to be
recognized. Nonetheless, the potential power
of these materials is clear: one need only look
in the mirror.
■
Robin D. Rogers is in the Department of
Chemistry and Center for Green Manufacturing,
The University of Alabama, Tuscaloosa, Alabama
35487, USA.
e-mail: [email protected]
1. Wasserscheid, P. & Welton, T. (eds) Ionic Liquids in
Synthesis (Wiley-VCH, Weinheim, 2003).
2. Borra, E. F. et al. Nature 447, 979–981 (2007).
3. Walden, P. Bull. Acad. Sci. St Petersburg 405–422 (1914).
4. Fremantle, M. Chem. Eng. News 76 (30 March), 32–37 (1998).
5. Nature 10.1038/news051031-8 (2005).
6. Deetlefs, M., Seddon, K. R. & Shara, M. Phys. Chem. Chem.
Phys. 8, 642–649 (2006).
7. Pernak, J., Sobaszkiewicz, K. & Mirska, I. Green Chem. 5,
52–56 (2003).
EVOLUTIONARY BIOLOGY
Re-crowning mammals
Richard L. Cifelli and Cynthia L. Gordon
The evolutionary history of mammals is being tackled both through
molecular analyses and through morphological studies of fossils. The
‘molecules versus morphology’ debate remains both vexing and vibrant.
On page 1003 of this issue, Wible and co-authors1 announce the discovery of a well-preserved mammal from Mongolia dated to between 71 million and 75 million years ago. The fossil, dubbed Maelestes gobiensis, is
noteworthy in its own right: finds of this sort
are exceptional in view of the generally poor
record of early mammals.
More interesting, though, is what this fossil and others from the latter part of the age
of dinosaurs (the Cretaceous period, about
145 million to 65 million years ago) have to
say about the rise of mammalian varieties that
populate Earth today. The authors have gone
much further than describing an ancient fossil specimen, and present a genealogical tree
depicting relationships among the main groups
of living and extinct mammals. Here, all Cretaceous fossil mammals are placed near the base
of the tree, as dead ‘side branches’, well below
the major tree ‘limbs’ leading to living mammals. These results differ strikingly from those
of other recent palaeontological studies2,3.
Chronologically speaking, this new analysis1
is eye-popping because it places direct ancestry of today’s mammals near the Cretaceous–
Tertiary (K/T) boundary about 65 million years
ago. This is much younger than dates based on
molecular biology — for example, a recent and
comprehensive analysis by Bininda-Emonds
et al.4 pushed that ancestry back more than
twice as far into the geological past, to some
148 million years ago. The conflicting results of
these palaeontological1 and molecular4 studies
have profound implications for understanding
the evolutionary history of mammals, and for
understanding the pace and nature of evolution generally.
Three main groups of living mammal are
recognized: the egg-laying monotremes such
as the platypus; marsupials (kangaroos, koalas,
opossums and so on); and placentals, which
constitute the most varied and diverse group,
including everything from bats to whales and
accounting for more than 5,000 of the 5,400
or so living mammals. Fossils can be placed
within one of these three ‘crown’ groups only
if anatomical features show them to be nested
among living species5.
The placental crown group, which is of primary interest here, represents the living members of a more encompassing group, Eutheria,
which includes extinct allied species, the oldest
of which dates to about 125 million years ago6.
Herein lies a central problem: because of inadequate preservation and/or non-comparability
with living species, the affinities of many early
mammals have been contentious. Certain Cretaceous fossils have been previously recognized
as members of the placental crown group;
some analyses suggest the presence of placental
superorders in the Cretaceous2,3, but referral of
such ancient fossils to living orders is dubious5.
For context, placentals encompass four major
divisions, or superorders, each containing
one to six orders, such as Cetacea (whales),
Primates and Rodentia.
The study by Wible et al.1 is ground-breaking
because it brings a wealth of new data into play:
it includes every informative Cretaceous fossil
and is based on comparison of more than 400
anatomical features. Palaeontologically, the
authors’ evolutionary tree is iconoclastic in
demoting many previously recognized members of the placental crown group to the status
of ‘stem’ species, or generalized eutherians. In
this scheme, the oldest-known placental is a
rabbit-like mammal from Asia, dated to about
63 million years ago.
Of more general interest are the implications
of this tree for dating mammalian evolutionary radiations, and the factors that may have
affected them. Following extinction of non-avian dinosaurs at the K/T boundary, the fossil
evidence shows that eutherians underwent significant radiations in the Palaeocene (between
65 million and 55 million years ago), and that
most of the modern groups appeared and
flourished later. One cannot help but notice an
analogy between this ‘bushy’ radiation and the
initial explosion of complex life-forms some
500 million years ago. In both cases, the explosion is followed by the extinction of lineages
that presumably represent failed evolutionary
experiments, with the concomitant emergence
and radiation of modern types7.
By coincidence, the appearance of Wible and
colleagues’ paper1 comes hard on the heels of
that by Bininda-Emonds et al.4, which was
published in March. The two studies — one
based on anatomy (emphasizing fossils) and
the other on molecular biology (living species
only) — come to very different conclusions
about the timing of mammalian evolution. As
such, they represent the latest volleys in the
‘molecules versus morphology’ debate5.
Previous studies have identified three
models for the origin and diversification of
placental mammals8: ‘explosive’, in which divergence of most superorders and orders occurred
near and following the K/T boundary;
‘long fuse’, differing in the significantly earlier
diversification of superorders; and ‘short fuse’,
which calls for diversification of both superorders and orders well back in the Cretaceous.
The study by Bininda-Emonds et al.4, which
integrates results of about 2,500 subtrees that
collectively include 99% of living mammal
species, is the most comprehensive of its kind
to date9. It yields support for both the short-fuse (groups including at least 29 living species)
and the long-fuse (less diverse groups) models, with a lull in diversification of placentals
per se following the K/T boundary (Fig. 1a).
Figure 1 | Two views (simplified) of the diversification of the major orders of modern placental
mammals. a, The picture provided by the molecular analyses of Bininda-Emonds et al.4. In this,
inter-ordinal diversification of the four main placental superorders occurred in the mid-Cretaceous,
with intra-ordinal diversification happening soon thereafter (although this is not the case for all
lineages). ‘Placentalia’ is equivalent to Eutheria, as used elsewhere1,8. b, The picture arising from the
morphological (fossil) studies of Wible et al.1. Here, the modern orders of placentals did not appear
and diversify until after the K/T boundary, with many Cretaceous mammals (such as Maelestes1) being
relegated to evolutionary dead-ends. These fossils near the base of the tree are included in the broader
group Eutheria, whose living representatives are the placentals. The placental superorders are the
Xenarthra (sloths and armadillos, for example), Afrotheria (elephants, sea cows), Euarchontoglires
(primates, rodents) and Laurasiatheria (whales, bats, carnivores, shrews). No genealogical
relationships are implied in either tree. †, extinct group.
By contrast, Wible and colleagues’ morphological work1 strongly supports the explosive
model (Fig. 1b).
These results1,4 show a widening rather than
a narrowing of the gap between the conclusions
drawn from morphological and molecular
studies. Why the difference? The two studies
are based on independent lines of evidence,
each with its own shortcomings. The fossil
record is notorious for its incompleteness,
thereby leaving open the possibility of new discoveries that radically alter the picture. Some
studies suggest, however, that the existing fossil
record is complete enough to be taken at face
value8,10. The principal issue with molecular
studies has to do with assumptions about the
‘molecular clock’ and variations in the rates of
gene substitution on which such research is
based. Yet there are also important points of
congruence among the results, notably in the
geometry of the evolutionary trees, suggesting
that neither type of data has an exclusive claim
to validity.
Where do we go from here? For palaeontologists, the answer lies in filling the gaps in the
fossil record. One new fossil, such as a Cretaceous giraffe, could send Wible and co-authors
scrambling back to the drawing-board. And
those involved in molecular studies must continue to develop more sophisticated methods
to account for gene-substitution rates that vary
according to lineage, geological time interval,
body size and other factors11.
For the onlooker, however, the big question
is whether the floodgates of mammalian evolution were ecologically opened by dinosaur
extinctions at the K/T boundary. The answer
seems to be ‘yes’, at least in part12. Evolutionary
trees are essential, but further levels of analysis
are needed to interpret changes in terrestrial
ecosystems and the assemblages of mammals
that inhabited them. In this context, perhaps
attention has been too narrowly focused on
crown placentals4,9: other eutherians, marsupials and mammalian varieties were also present
during this exciting time. Ultimately, interpreting the dynamics of mammalian evolution will
depend on integrating genealogical investigations — both palaeontological and molecular
— with complementary studies of palaeoecology and of the role that each species played in
its respective community.
■
Richard L. Cifelli is at the Sam Noble Oklahoma
Museum of Natural History, 2401 Chautauqua
Avenue, Norman, Oklahoma 73072, USA.
Cynthia L. Gordon is in the Department of
Zoology, University of Oklahoma, Norman,
Oklahoma 73019, USA.
e-mails: [email protected]; [email protected]
1. Wible, J. R., Rougier, G. W., Novacek, M. J. & Asher, R. J.
Nature 447, 1003–1006 (2007).
2. Archibald, J. D., Averianov, A. O. & Ekdale, E. G. Nature 414,
62–65 (2001).
3. Kielan-Jaworowska, Z., Cifelli, R. L. & Luo, Z.-X. Mammals
from the Age of Dinosaurs: Structure, Relationships, and
Paleobiology (Columbia Univ. Press, New York, 2004).
4. Bininda-Emonds, O. R. et al. Nature 446, 507–512
(2007).
5. Benton, M. J. BioEssays 21, 1043–1051 (1999).
6. Ji, Q. et al. Nature 416, 816–822 (2002).
7. Gould, S. J. Wonderful Life: The Burgess Shale and the Nature
of History (Norton, New York, 1989).
8. Archibald, J. D. & Deutschman, D. H. J. Mammal. Evol. 8,
107–124 (2001).
9. Penny, D. & Phillips, M. J. Nature 446, 501–502 (2007).
10. Foote, M., Hunter, J. P., Janis, C. M. & Sepkoski, J. J. Science
283, 1310–1314 (1999).
11. Springer, M. S., Murphy, W. J., Eizirik, E. & O’Brien, S. J. Proc.
Natl Acad. Sci. USA 100, 1056–1061 (2003).
12. Wilson, G. P. Thesis, Univ. California (2004).
BIOPHYSICS
Proteins hunt and gather
David Eliezer and Arthur G. Palmer III
Some proteins do not fold fully until they meet their functional partners. Folding in concert with binding allows an efficient stepwise search for the proper structure within the final complex.
In higher organisms, many proteins, including some involved in critical aspects of
biological regulation and signal transduction,
are stably folded only in complex with their
specific molecular targets. On page 1021 of
this issue, Sugase et al.1 elucidate a three-step
mechanism by which one such ‘intrinsically
disordered’ protein binds to its cognate folded
protein target. This mechanism indicates
a bipartite strategy for this class of protein
in optimizing the search for partner molecules. An initial encounter complex, formed
through weak, nonspecific interactions,
facilitates the formation of a partially structured state, which makes a subset of the final
contacts with the target. This intermediate
conformation allows an efficient search for the
final structure adopted by the high-affinity
complex.
Previous work by these authors2,3 described
the conformational preferences of an intrinsically disordered polypeptide that constitutes
part of the gene transcription factor, CREB;
this polypeptide is known as the phosphorylated kinase-inducible activation domain
(pKID). When found in a high-affinity complex with the KIX domain of the CREB-binding protein, pKID forms two α-helices (A and
B) in its amino- and carboxy-terminal regions,
respectively. Helix B makes intimate contacts
with a hydrophobic groove on the KIX surface,
whereas helix A forms a less extensive interface
with KIX (ref. 2). In the absence of KIX, pKID
is largely, but not completely, disordered. Its
amino-terminal region intermittently forms
helix A, but its carboxy-terminal region is more
unstructured3,4.
The different molecular species formed during pKID binding to KIX interconvert kinetically; hence, neither the encounter complex
nor the intermediate complex can be isolated
and studied directly. To characterize these
species, Sugase and colleagues used techniques that rely on the exquisite sensitivity
of the resonance frequencies observed in
nuclear magnetic resonance (NMR) spectroscopy. In particular, time-dependent changes in
local chemical environments and molecular
structures modify the resonance frequencies
by altering the magnetic fields experienced by
individual atomic nuclei.
The specific effects of environmental and
structural changes on NMR spectra depend
on whether the kinetic transition rate constants
linking different molecular states are larger
than, comparable to or smaller than the
differences in the resonance frequencies of
these states. These three regimes are termed
fast, intermediate and slow exchange, respectively. For example, in the fast-exchange limit,
the observed frequency of a resonance signal
(which in NMR spectroscopy is called the
chemical shift) is the population-weighted
average of individual resonance frequencies
for different states. The width of the resonance
signal (which is proportional to the transverse relaxation rate constant for the nuclear
magnetization) depends on the variation in
individual resonance frequencies and on the
transition rates.
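To make the fast-exchange limit concrete, the sketch below computes the population-weighted shift and the approximate exchange contribution to the transverse relaxation rate for a two-state system. It is our illustration, not code from Sugase et al., and every number in it (populations, shifts, exchange rate, spectrometer frequency) is an assumption chosen for the example.

```python
# Sketch: two-state fast exchange in NMR (illustrative values only).
# In the fast-exchange limit (k_ex much larger than the frequency
# difference), the observed shift is the population-weighted average,
# and exchange adds roughly R_ex = p_a * p_b * d_omega**2 / k_ex to the
# transverse relaxation rate.

import math

def fast_exchange(p_a, shift_a, shift_b, k_ex, larmor_mhz=60.8):
    """p_a: population of state A; shifts in p.p.m.; k_ex in s^-1;
    larmor_mhz: assumed 15N Larmor frequency in MHz."""
    p_b = 1.0 - p_a
    shift_obs = p_a * shift_a + p_b * shift_b      # averaged shift (p.p.m.)
    d_omega = 2 * math.pi * larmor_mhz * abs(shift_a - shift_b)  # rad s^-1
    r_ex = p_a * p_b * d_omega**2 / k_ex           # extra line broadening
    return shift_obs, r_ex

# Example: 90% free, 10% bound, shifts 2 p.p.m. apart, k_ex = 5,000 s^-1.
print(fast_exchange(0.9, 120.0, 122.0, 5000.0))
```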
The approach developed by Sugase et al.
will probably be widely applicable to the
study of other protein–protein binding reactions. Using established techniques, known as
1H–15N single-quantum correlation (HSQC) and 15N transverse relaxation dispersion, the
authors monitored changes in chemical shifts
and relaxation rate constants as a function of
the concentration ratio of the two interacting
proteins.
The HSQC technique yields highly sensitive and well-resolved NMR spectra that allow
detailed monitoring of the chemical shifts for
the 1H and 15N nuclei of amide groups in proteins. The relaxation dispersion experiment
measures the transverse relaxation rate constants for the amide 15N nuclei in the presence
of applied radiofrequency fields, as strong
effective fields partially suppress the relaxation caused by transitions between molecular
states with different resonance frequencies.
These two techniques allow the identification
and structural characterization of weakly
populated, or rare, conformational states that
arise during coupled binding and folding
processes. They also allow quantification of
the kinetic rate constants linking the different steps along the reaction pathway.

Figure 1 | A complex encounter between disorder and order. Interaction between the pKID domain of the gene transcription factor CREB and the KIX domain of the CREB-binding protein occurs in the cell nucleus to regulate gene expression. By elucidating the three-step binding reaction between pKID and KIX using NMR spectroscopy, Sugase et al.1 identified four states along the reaction pathway. Initially, the highly disordered, free state of pKID partially populates helix A (αA). In the encounter complex with KIX, pKID is tethered by nonspecific hydrophobic contacts in its helix B region (αB). The intermediate state is characterized by a specifically bound and largely configured helix A. Finally, in the high-affinity, bound conformation, both helices are fully structured.
The HSQC spectra of 15N-labelled pKID
revealed continuous changes in 1H and 15N
chemical shifts during titration with subequivalent quantities of KIX (1:0 to 1:0.5
pKID:KIX concentration ratios). This observation indicates a fast-exchange, reversible
interaction between the two proteins, which
was confirmed by competition with another
peptide that binds to KIX and by mutation of
a key amino-acid residue in KIX. The NMR
spectrum that is predicted by extrapolating the
chemical-shift changes to a 1:1 ratio of these
Evolutionary dynamics on graphs
Erez Lieberman1,2, Christoph Hauert1,3 & Martin A. Nowak1
1Program for Evolutionary Dynamics, Departments of Organismic and Evolutionary Biology, Mathematics, and Applied Mathematics, Harvard University, Cambridge, Massachusetts 02138, USA
2Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
3Department of Zoology, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
Evolutionary dynamics have been traditionally studied in the
context of homogeneous or spatially extended populations1–4.
Here we generalize population structure by arranging individuals on a graph. Each vertex represents an individual. The
weighted edges denote reproductive rates which govern how often individuals place offspring into adjacent vertices. The homogeneous population, described by the Moran process3, is the special case of a fully connected graph with evenly weighted edges. Spatial structures are described by graphs where vertices are connected with their nearest neighbours. We also explore evolution on random and scale-free networks5–7. We determine the fixation probability of mutants, and characterize those graphs for which fixation behaviour is identical to that of a homogeneous population7. Furthermore, some graphs act as suppressors and others as amplifiers of selection. It is even possible to find graphs that guarantee the fixation of any advantageous mutant. We also study frequency-dependent selection and show that the outcome of evolutionary games can depend entirely on the structure of the underlying graph. Evolutionary graph theory has many fascinating applications ranging from ecology to multi-cellular organization and economics.
Figure 1 Models of evolution. a, The Moran process describes stochastic evolution of a
finite population of constant size. In each time step, an individual is chosen for
reproduction with a probability proportional to its fitness; a second individual is chosen for
death. The offspring of the first individual replaces the second. b, In the setting of
evolutionary graph theory, individuals occupy the vertices of a graph. In each time step, an
individual is selected with a probability proportional to its fitness; the weights of the
outgoing edges determine the probabilities that the corresponding neighbour will be
replaced by the offspring. The process is described by a stochastic matrix W, where w_ij denotes the probability that an offspring of individual i will replace individual j. In a more
general setting, at each time step, an edge ij is selected with a probability proportional to
its weight and the fitness of the individual at its tail. The Moran process is the special case
of a complete graph with identical weights.
Evolutionary dynamics act on populations. Neither genes, nor cells, nor individuals evolve; only populations evolve. In small populations, random drift dominates, whereas large populations
are sensitive to subtle differences in selective values. The tension
between selection and drift lies at the heart of the famous dispute
between Fisher and Wright8–10. There is evidence that population
structure affects the interplay of these forces11–15. But the celebrated
results of Maruyama16 and Slatkin17 indicate that spatial structures
are irrelevant for evolution under constant selection.
Here we introduce evolutionary graph theory, which suggests a
promising new lead in the effort to provide a general account of how
population structure affects evolutionary dynamics. We study the
simplest possible question: what is the probability that a newly
introduced mutant generates a lineage that takes over the whole
population? This fixation probability determines the rate of evolution, which is the product of population size, mutation rate and
fixation probability. The higher the correlation between the
mutant's fitness and its probability of fixation, ρ, the stronger the
effect of natural selection; if fixation is largely independent of
fitness, drift dominates. We will show that some graphs are
governed entirely by random drift, whereas others are immune to
drift and are guided exclusively by natural selection.
Consider a homogeneous population of size N. At each time step
an individual is chosen for reproduction with a probability proportional to its fitness. The offspring replaces a randomly chosen
individual. In this so-called Moran process (Fig. 1a), the population
size remains constant. Suppose all the resident individuals are
identical and one new mutant is introduced. The new mutant has
relative fitness r, as compared to the residents, whose fitness is 1. The
fixation probability of the new mutant is:
$$\rho_1 = \frac{1 - 1/r}{1 - 1/r^{N}} \qquad (1)$$
This represents a specific balance between selection and drift:
advantageous mutations have a certain chance—but no guarantee—of fixation, whereas disadvantageous mutants are likely—but again, no guarantee—to become extinct.
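As a numerical check on equation (1), the sketch below (ours, not code from the paper) compares the closed form with a direct simulation of the Moran process; the fitness and population size are arbitrary.

```python
# Sketch: fixation probability in the Moran process. Compares the
# closed form rho_1 = (1 - 1/r) / (1 - 1/r^N) of equation (1) with a
# direct birth-death simulation; parameters are illustrative.

import random

def rho_1(r, n):
    """Closed-form fixation probability of a single mutant of fitness r."""
    return (1 - 1 / r) / (1 - 1 / r**n)

def moran_fixation(r, n, runs=20000):
    """Fraction of runs in which a single mutant takes over the population."""
    fixed = 0
    for _ in range(runs):
        m = 1                                   # current number of mutants
        while 0 < m < n:
            # Reproduce proportional to fitness; a random individual dies.
            birth = random.random() < m * r / (m * r + (n - m))
            death = random.random() < m / n
            m += birth - death
        fixed += (m == n)
    return fixed / runs

print(rho_1(1.1, 20))           # ~0.107
print(moran_fixation(1.1, 20))  # should agree within sampling error
```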
We introduce population structure as follows. Individuals are labelled i = 1, 2, …, N. The probability that individual i places its offspring into position j is given by w_ij. Thus the individuals can be thought of as occupying the vertices of a graph. The matrix W = [w_ij] determines the structure of the graph (Fig. 1b). If w_ij = 0 and w_ji = 0 then the vertices i and j are not connected. In each iteration, an individual i is chosen for reproduction with a probability proportional to its fitness. The resulting offspring will occupy vertex j with probability w_ij. Note that W is a stochastic matrix, which means that all its rows sum to one. We want to calculate the fixation probability ρ of a randomly placed mutant.
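For an arbitrary W there is no closed form in general, but ρ can be estimated directly by simulating this update rule. A minimal sketch (ours), with the complete graph as a check case:

```python
# Sketch: estimate the fixation probability rho of a randomly placed
# mutant on an evolutionary graph with stochastic matrix W (rows sum to
# one), following the update rule described in the text. Assumes either
# fixation or extinction is eventually reached.

import random

def graph_fixation(w, r, runs=5000):
    n = len(w)
    fixed = 0
    for _ in range(runs):
        mutant = [False] * n
        mutant[random.randrange(n)] = True       # randomly placed mutant
        count = 1
        while 0 < count < n:
            fitness = [r if m else 1.0 for m in mutant]
            i = random.choices(range(n), weights=fitness)[0]  # reproducer
            j = random.choices(range(n), weights=w[i])[0]     # replaced vertex
            count += mutant[i] - mutant[j]
            mutant[j] = mutant[i]
        fixed += (count == n)
    return fixed / runs

# Complete graph with even weights: should reproduce the Moran value.
n = 10
w = [[0.0 if i == j else 1.0 / (n - 1) for j in range(n)] for i in range(n)]
print(graph_fixation(w, 1.1))
```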
Imagine that the individuals are arranged on a spatial lattice that
can be triangular, square, hexagonal or any similar tiling. For all
such lattices ρ remains unchanged: it is equal to the ρ1 obtained for the homogeneous population. In fact, it can be shown that if W is symmetric, w_ij = w_ji, then the fixation probability is always ρ1. The
graphs in Fig. 2a–c, and all other symmetric, spatially extended
models, have the same fixation probability as a homogeneous
population17,18.
There is an even wider class of graphs whose fixation probability
is ρ1. Let T_i = Σ_j w_ji be the temperature of vertex i. A vertex is 'hot' if it is replaced often and 'cold' if it is replaced rarely. The 'isothermal theorem' states that an evolutionary graph has fixation probability ρ1 if and only if all vertices have the same temperature. Figure 2d
gives an example of an isothermal graph where W is not symmetric.
Isothermality is equivalent to the requirement that W is doubly
stochastic, which means that each row and each column sums to
one.
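Isothermality is mechanical to test: every row and every column of W must sum to one. A minimal sketch (ours):

```python
# Sketch: test whether an evolutionary graph W is isothermal, i.e.
# doubly stochastic (every row and every column sums to one).

def is_isothermal(w, tol=1e-9):
    n = len(w)
    rows_ok = all(abs(sum(row) - 1.0) < tol for row in w)
    cols_ok = all(abs(sum(w[i][j] for i in range(n)) - 1.0) < tol
                  for j in range(n))
    return rows_ok and cols_ok

# A directed cycle is isothermal even though W is not symmetric.
cycle = [[1.0 if j == (i + 1) % 4 else 0.0 for j in range(4)] for i in range(4)]
print(is_isothermal(cycle))   # True
```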
If a graph is not isothermal, the fixation probability is not given
blue. The ‘hot’ downstream vertices, which change often, are coloured in orange. The
type of the upstream root determines the fate of the entire graph. h, Small upstream
populations with large downstream populations yield suppressors. i, In multirooted
graphs, the roots compete indefinitely for the population. If a mutant arises in a root then
neither fixation nor extinction is possible.
© 2005 Nature Publishing Group
313
letters to nature
by ρ1. Instead, the balance between selection and drift tilts; now to
one side, now to the other. Suppose N individuals are arranged in a
linear array. Each individual places its offspring into the position
immediately to its right. The leftmost individual is never replaced.
What is the fixation probability of a randomly placed mutant with
fitness r? Clearly, it is 1/N, irrespective of r. The mutant can only
reach fixation if it arises in the leftmost position, which happens
with probability 1/N. This array is an example of a simple population structure whose behaviour is dominated by random drift.

Figure 2 Isothermal graphs, and, more generally, circulations, have fixation behaviour identical to the Moran process. Examples of such graphs include: a, the square lattice; b, hexagonal lattice; c, complete graph; d, directed cycle; and e, a more irregular circulation. Whenever the weights of edges are not shown, a weight of one is distributed evenly across all those edges emerging from a given vertex. Graphs like f, the 'burst', and g, the 'path', suppress natural selection. The 'cold' upstream vertex is represented in blue. The 'hot' downstream vertices, which change often, are coloured in orange. The type of the upstream root determines the fate of the entire graph. h, Small upstream populations with large downstream populations yield suppressors. i, In multirooted graphs, the roots compete indefinitely for the population. If a mutant arises in a root then neither fixation nor extinction is possible.
More generally, an evolutionary graph has fixation probability
1/N for all r if and only if it is one-rooted (Fig. 2f, g). A one-rooted
graph has a unique global source without incoming edges. If a graph
has more than one root, then the probability of fixation is always
zero: a mutant originating in one of the roots will generate a lineage
which will never die out, but also never fixate (Fig. 2i). Small
upstream populations feeding into large downstream populations
are also suppressors of selection (Fig. 2h). Thus, it is easy to
construct graphs that foster drift and suppress selection. Is it
possible to suppress drift and amplify selection? Can we find
structures where the fixation probability of advantageous mutants
exceeds ρ1?
The star structure (Fig. 3a) consists of a centre that is connected
with each vertex on the periphery. All the peripheral vertices are
connected only with the centre. For large N, the fixation probability
of a randomly placed mutant on the star is $\rho_2 = (1 - 1/r^{2})/(1 - 1/r^{2N})$. Thus, any selective difference r is amplified to r². The star acts as an evolutionary amplifier, favouring advantageous mutants and
inhibiting disadvantageous mutants. The balance tilts towards
selection, and against drift.
The super-star, funnel and metafunnel (Fig. 3) have the amazing
property that for large N, the fixation probability of any advantageous mutant converges to one, while the fixation probability of
any disadvantageous mutant converges to zero. Hence, these population structures guarantee fixation of advantageous mutants, however small their selective advantage. In general, we can prove that for
sufficiently large population size N, a super-star of parameter K
satisfies:
$$\rho_K = \frac{1 - 1/r^{K}}{1 - 1/r^{KN}} \qquad (2)$$
Numerical simulations illustrating equation (2) are shown in Fig. 4a.
Similar results hold for the funnel and metafunnel. Just as one-rooted structures entirely suppress the effects of selection, super-star
structures function as arbitrarily strong amplifiers of selection and
suppressors of random drift.

Figure 3 Selection amplifiers have remarkable symmetry properties. As the number of 'leaves' and the number of vertices in each leaf grows large, these amplifiers dramatically increase the apparent fitness of advantageous mutants: a mutant with fitness r on an amplifier of parameter K will fare as well as a mutant of fitness r^K in the Moran process. a, The star structure is a K = 2 amplifier. b–d, The super-star (b), the funnel (c) and the metafunnel (d) can all be extended to arbitrarily large K, thereby guaranteeing the fixation of any advantageous mutant. The latter three structures are shown here for K = 3. The funnel has edges wrapping around from bottom to top. The metafunnel has outermost edges arising from the central vertex (only partially shown). The colours red, orange and blue indicate hot, warm and cold vertices.
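Equation (2) is easy to tabulate. The short sketch below (ours) shows how quickly the fixation probability of a mildly advantageous mutant approaches certainty as the amplifier parameter K grows; the population size and fitness are illustrative.

```python
# Sketch: equation (2), rho_K = (1 - 1/r^K) / (1 - 1/r^(K*N)).
# K = 1 is the Moran process; K = 2 the star; larger K the super-star.

def rho_k(r, n, k):
    return (1 - 1 / r**k) / (1 - 1 / r**(k * n))

for k in (1, 2, 3, 4):
    print(k, round(rho_k(1.1, 100, k), 3))
# 1 0.091, 2 0.174, 3 0.249, 4 0.317: amplification toward certain fixation
```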
Scale-free networks, like the amplifier structures in Fig. 3, have
most of their connectivity clustered in a few vertices. Such networks
are potent selection amplifiers for mildly advantageous mutants (r close to 1), and relax to ρ1 for very advantageous mutants (r >> 1) (Fig. 4b).
Further generalizations of evolutionary graphs are possible.
Suppose in each iteration an edge ij is chosen with a probability
proportional to the product of its weight, w_ij, and the fitness of the
individual i at its tail. In this case, the matrix W need not be
stochastic; the weights can be any collection of non-negative real
numbers.
Here the results have a particularly elegant form. In the absence of
upstream populations, if the sum of the weights of all edges leaving
the vertex is the same for all vertices—meaning the fertility
is independent of position—then the graph never suppresses
selection. If the sum of the weights of all edges entering a vertex is
the same for all vertices—meaning the mortality is independent of
position—then the graph never suppresses drift. If both these
conditions hold then the graph is called a circulation, and the
structure favours neither selection nor drift. An evolutionary graph
has fixation probability ρ1 if and only if it is a circulation (see
Fig. 2e). It is striking that the notion of a circulation, so common in
deterministic contexts such as the study of flows, arises naturally in
this stochastic evolutionary setting. The circulation criterion completely classifies all graph structures whose fixation behaviour is
identical to that of the homogeneous population, and includes the
subset of isothermal graphs (the mathematical details of these
results are discussed in the Supplementary Information).
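In this generalized setting the circulation criterion reduces to two sum checks over the weight matrix. A minimal sketch (ours), assuming W is given as a plain matrix of non-negative weights:

```python
# Sketch: the circulation criterion. A weighted graph is a circulation
# when every vertex has the same total outgoing weight (fertility
# independent of position) and the same total incoming weight
# (mortality independent of position).

def is_circulation(w, tol=1e-9):
    n = len(w)
    out_sums = [sum(w[i]) for i in range(n)]
    in_sums = [sum(w[i][j] for i in range(n)) for j in range(n)]
    same = lambda xs: max(xs) - min(xs) < tol
    return same(out_sums) and same(in_sums)
```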
Let us now turn to evolutionary games on graphs18,19. Consider, as
before, two types A and B, but instead of having constant fitness,
their relative fitness depends on the outcome of a game with payoff
matrix:
$$\begin{array}{c|cc} & A & B \\ \hline A & a & b \\ B & c & d \end{array}$$
In traditional evolutionary game dynamics, a mutant strategy A can
invade a resident B if b > d. For games on graphs, the crucial
condition for A invading B, and hence the very notion of evolutionary stability, can be quite different.
As an illustration, imagine N players arranged on a directed cycle
(Fig. 5) with player i placing its offspring into i + 1. In the simplest case, the payoff of any individual comes from an interaction with one of its neighbours. There are four natural orientations. We discuss the fixation probability of a single A mutant for large N.
(1) Positive symmetric: i interacts with i + 1. The fixation probability is given by equation (1) with r = b/c. Selection favours the mutant if b > c.
(2) Negative symmetric: i interacts with i − 1. Selection favours the mutant if a > d. In the classical Prisoner's Dilemma, these dynamics favour unconditional cooperators invading defectors.
(3) Positive anti-symmetric: mutants at i interact with i − 1, but residents with i + 1. The mutant is favoured if a > c, behaving like a resident in the classical setting.
(4) Negative anti-symmetric: mutants at i interact with i + 1, but residents with i − 1. The mutant is favoured if b > d, recovering the traditional invasion criterion.
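These four conditions can be tabulated mechanically. A small sketch (ours), using standard Prisoner's Dilemma payoffs as the example:

```python
# Sketch: invasion conditions for a rare A mutant against resident B on
# a directed cycle, for the four orientations described in the text.

def cycle_invasion(a, b, c, d):
    return {
        "positive symmetric (b > c)":      b > c,
        "negative symmetric (a > d)":      a > d,
        "positive anti-symmetric (a > c)": a > c,
        "negative anti-symmetric (b > d)": b > d,
    }

# Prisoner's Dilemma payoffs (A = cooperate): a=3, b=0, c=5, d=1.
print(cycle_invasion(3, 0, 5, 1))
# Only the negative symmetric orientation favours cooperators (a > d),
# while the traditional condition b > d fails, matching the text.
```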
Remarkably, games on directed cycles yield the complete range of
pairwise conditions in determining whether selection favours the
mutant or the resident.

Figure 4 Simulation results showing the likelihood of mutant fixation. a, Fixation probabilities for an r = 1.1 mutant on a circulation (black), a star (blue), a K = 3 super-star (red), and a K = 4 super-star (yellow) for varying population sizes N. Simulation results are indicated by points. As expected, for large population sizes, the simulation results converge to the theoretical predictions (broken lines) obtained using equation (2). b, The amplification factor K of scale-free graphs with 100 vertices and an average connectivity of 2m with m = 1 (violet), m = 2 (purple), or m = 3 (navy) is compared to that for the star (blue line) and for circulations (black line). Increasing m increases the number of highly connected hubs. Scale-free graphs do not behave uniformly across the mutant spectrum: as the fitness r increases, the amplification factor relaxes from nearly 2 (the value for the star) to circulation-like values of unity. All simulations are based on 10^4–10^6 runs. Simulations can be explored online at http://www.univie.ac.at/virtuallabs/.

Figure 5 Evolutionary games on directed cycles for four different orientations. a, Positive symmetric. The invading mutant (red) is favoured over the resident (blue) if b > c. b, Negative symmetric. Invasion is favoured if a > d. For the Prisoner's Dilemma, the implication is that unconditional cooperators can invade and replace defectors starting from a single individual. c, Positive anti-symmetric. Invasion is favoured if a > c. The tables are turned: the invader behaves like a resident in a traditional setting. d, Negative anti-symmetric. Invasion is favoured if b > d. We recover the traditional invasion of evolutionary game theory.
Circulations no longer behave identically with respect to games.
Outcomes depend on the graph, the game and the orientation. The
vast array of cases constitutes a rich field for future study. Furthermore, we can prove that the general question of whether a
population on a graph is vulnerable to invasion under frequency-dependent selection is NP (nondeterministic polynomial time)-hard.
The super-star possesses powerful amplifying properties in the case of games as well. For instance, in the positive symmetric orientation, the fixation probability for large N of a single A mutant is given by equation (1) with r = (b/d)(b/c)^{K−1}. For a super-star with large K, this r value diverges as long as b > c. Thus, even a dominated strategy (a < c and b < d) satisfying b > c will expand from a single mutant to conquer the entire super-star with a probability that can be made arbitrarily close to 1. The guaranteed fixation of this broad class of dominated strategies is a unique feature of evolutionary game theory on graphs: without structure, all dominated strategies die out. Similar results hold for the super-star in other orientations.
Evolutionary graph theory has many fascinating applications.
Ecological habitats of species are neither regular spatial lattices nor
simple two-dimensional surfaces, as is usually assumed20,21, but
contain locations that differ in their connectivity. In this respect, our
results for scale-free graphs are very suggestive. Source and sink
populations have the effect of suppressing selection, like one-rooted
graphs22,23.
Another application is somatic evolution within multicellular
organisms. For example, the hematopoietic system constitutes an
evolutionary graph with a suppressive hierarchical organization;
stem cells produce precursors which generate differentiated cells24.
We expect tissues of long-lived multicellular organisms to be
organized so as to suppress the somatic evolution that leads to
cancer. Star structures can also be instantiated by populations of
differentiating cells. For example, a stem cell in the centre generates
differentiated cells, whose offspring either differentiate further, or
revert back to stem cells. Such amplifiers of selection could be used
in various developmental processes and also in the affinity maturation of the immune response.
Human organizations have complicated network structures25–27.
Evolutionary graph theory offers an appropriate tool to study
selection on such networks. We can ask, for example, which networks are well suited to ensure the spread of favourable concepts. If
a company is strictly one-rooted, then only those ideas that originate from the root (the CEO) will prevail. A selection amplifier,
like a star structure or a scale-free network, will enhance the spread
of favourable ideas arising from any one individual. Notably,
scientific collaboration graphs tend to be scale-free28.
We have sketched the very beginnings of evolutionary graph
theory by studying the fixation probability of newly arising mutants.
For constant selection, graphs can dramatically affect the balance
between drift and selection. For frequency-dependent selection,
graphs can redirect the process of selection itself.
Many more questions lie ahead. What is the maximum mutation
rate compatible with adaptation on graphs? How does sexual
reproduction affect evolution on graphs? What are the timescales
associated with fixation, and how do they lead to coexistence in
ecological settings29,30? Furthermore, how does the graph itself
change as a consequence of evolutionary dynamics31? Coupled
with the present work, such studies will make increasingly clear
the extent to which population structure affects the dynamics of
evolution.
Received 10 September; accepted 16 November 2004; doi:10.1038/nature03204.
1. Liggett, T. M. Stochastic Interacting Systems: Contact, Voter and Exclusion Processes (Springer, Berlin,
1999).
2. Durrett, R. & Levin, S. A. The importance of being discrete (and spatial). Theor. Popul. Biol. 46,
363–394 (1994).
3. Moran, P. A. P. Random processes in genetics. Proc. Camb. Phil. Soc. 54, 60–71 (1958).
4. Durrett, R. A. Lecture Notes on Particle Systems & Percolation (Wadsworth & Brooks/Cole Advanced
Books & Software, Pacific Grove, 1988).
5. Erdös, P. & Renyi, A. On the evolution of random graphs. Publ. Math. Inst. Hungarian Acad. Sci. 5,
17–61 (1960).
6. Barabasi, A. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
7. Nagylaki, T. & Lucier, B. Numerical analysis of random drift in a cline. Genetics 94, 497–517 (1980).
8. Wright, S. Evolution in Mendelian populations. Genetics 16, 97–159 (1931).
9. Wright, S. The roles of mutation, inbreeding, crossbreeding and selection in evolution. Proc. 6th Int.
Congr. Genet. 1, 356–366 (1932).
10. Fisher, R. A. & Ford, E. B. The “Sewall Wright Effect”. Heredity 4, 117–119 (1950).
11. Barton, N. The probability of fixation of a favoured allele in a subdivided population. Genet. Res. 62,
149–158 (1993).
12. Whitlock, M. Fixation probability and time in subdivided populations. Genetics 164, 767–779 (2003).
13. Nowak, M. A. & May, R. M. The spatial dilemmas of evolution. Int. J. Bifurcation Chaos 3, 35–78
(1993).
14. Hauert, C. & Doebeli, M. Spatial structure often inhibits the evolution of cooperation in the snowdrift
game. Nature 428, 643–646 (2004).
15. Hofbauer, J. & Sigmund, K. Evolutionary Games and Population Dynamics (Cambridge Univ. Press,
Cambridge, 1998).
16. Maruyama, T. Effective number of alleles in a subdivided population. Theor. Popul. Biol. 1, 273–306
(1970).
17. Slatkin, M. Fixation probabilities and fixation times in a subdivided population. Evolution 35,
477–488 (1981).
18. Ebel, H. & Bornholdt, S. Coevolutionary games on networks. Phys. Rev. E. 66, 056118 (2002).
19. Abramson, G. & Kuperman, M. Social games in a social network. Phys. Rev. E. 63, 030901(R) (2001).
20. Tilman, D. & Karieva, P. (eds) Spatial Ecology: The Role of Space in Population Dynamics and
Interspecific Interactions (Monographs in Population Biology, Princeton Univ. Press, Princeton, 1997).
21. Neuhauser, C. Mathematical challenges in spatial ecology. Not. AMS 48, 1304–1314 (2001).
22. Pulliam, H. R. Sources, sinks, and population regulation. Am. Nat. 132, 652–661 (1988).
23. Hassell, M. P., Comins, H. N. & May, R. M. Species coexistence and self-organizing spatial dynamics.
Nature 370, 290–292 (1994).
24. Reya, T., Morrison, S. J., Clarke, M. & Weissman, I. L. Stem cells, cancer, and cancer stem cells. Nature
414, 105–111 (2001).
25. Skyrms, B. & Pemantle, R. A dynamic model of social network formation. Proc. Nat. Acad. Sci. USA
97, 9340–9346 (2000).
26. Jackson, M. O. & Watts, A. On the formation of interaction networks in social coordination games.
Games Econ. Behav. 41, 265–291 (2002).
27. Asavathiratham, C., Roy, S., Lesieutre, B. & Verghese, G. The influence model. IEEE Control Syst. Mag.
21, 52–64 (2001).
28. Newman, M. E. J. The structure of scientific collaboration networks. Proc. Natl Acad. Sci. USA 98,
404–409 (2001).
29. Boyd, S., Diaconis, P. & Xiao, L. Fastest mixing Markov chain on a graph. SIAM Rev. 46, 667–689
(2004).
30. Nakamaru, M., Matsuda, H. & Iwasa, Y. The evolution of cooperation in a lattice-structured
population. J. Theor. Biol. 184, 65–81 (1997).
31. Bala, V. & Goyal, S. A noncooperative model of network formation. Econometrica 68, 1181–1229
(2000).
Supplementary Information accompanies the paper on www.nature.com/nature.
Acknowledgements The Program for Evolutionary Dynamics is sponsored by J. Epstein. E.L. is
supported by a National Defense Science and Engineering Graduate Fellowship. C.H. is grateful to
the Swiss National Science Foundation. We are indebted to M. Brenner for many discussions.
Competing interests statement The authors declare that they have no competing financial
interests.
Correspondence and requests for materials should be addressed to E.L. ([email protected]).
Self-similarity of complex networks
Chaoming Song1, Shlomo Havlin2 & Hernán A. Makse1
1Levich Institute and Physics Department, City College of New York, New York, New York 10031, USA
2Minerva Center and Department of Physics, Bar-Ilan University, Ramat Gan 52900, Israel
Complex networks have been studied extensively owing to their
relevance to many real systems such as the world-wide web, the
Internet, energy landscapes and biological and social networks1–5.
A large number of real networks are referred to as ‘scale-free’
because they show a power-law distribution of the number of
links per node1,6,7. However, it is widely believed that complex
networks are not invariant or self-similar under a length-scale
transformation. This conclusion originates from the 'small-world' property of these networks, which implies that the
number of nodes increases exponentially with the ‘diameter’ of
the network8–11, rather than the power-law relation expected for a
self-similar structure. Here we analyse a variety of real complex
networks and find that, on the contrary, they consist of self-repeating patterns on all length scales. This result is achieved by
the application of a renormalization procedure that coarse-grains the system into boxes containing nodes within a given
‘size’. We identify a power-law relation between the number of
boxes needed to cover the network and the size of the box,
defining a finite self-similar exponent. These fundamental properties help to explain the scale-free nature of complex networks
and suggest a common self-organization dynamics.
Two fundamental properties of real complex networks have
attracted much attention recently: the small-world and the scale-free properties. Many naturally occurring networks are 'small world'
because we can reach a given node from another one, following the
path with the smallest number of links between the nodes, in a very
small number of steps. This corresponds to the so-called ‘six degrees
of separation’ in social networks10. It is mathematically expressed by
the slow (logarithmic) increase of the average diameter of the network, $\bar{l}$, with the total number of nodes N, $\bar{l} \approx \ln N$, where l is the shortest distance between two nodes and defines the distance metric in complex networks6,8,9,11. Equivalently, we obtain:

$$N \approx e^{\bar{l}/l_0} \qquad (1)$$

where $l_0$ is a characteristic length.
A second fundamental property in the study of complex networks
arises with the discovery that the probability distribution of
the number of links per node, P(k) (also known as the degree
distribution), can be represented by a power-law (‘scale-free’) with a
degree exponent γ that is usually in the range 2 < γ < 3 (ref. 6):

$$P(k) \approx k^{-\gamma} \qquad (2)$$
These discoveries have been confirmed in many empirical studies of
diverse networks1–4,6,7.
With the aim of providing a deeper understanding of the
underlying mechanism that leads to these common features, we
need to probe the patterns within the network structure in more
detail. The question of connectivity between groups of interconnected nodes on different length scales has received less attention.
But many examples exhibit the importance of collective behaviour,
such as interactions between communities within social networks,
links between clusters of websites of similar subjects, and the highly
modular manner in which molecules interact to keep a cell alive.
Here we show that real complex networks, such as the world-wide
web (WWW), social, protein–protein interaction networks (PIN)
and cellular networks are invariant or self-similar under a length-scale transformation.
This result comes as a surprise, because the exponential increase in equation (1) has led to the general understanding that complex networks are not self-similar, since self-similarity requires a power-law relation between N and l.
How can we reconcile the exponential increase in equation (1) with self-similarity, or (in other words) an underlying length-scale-invariant topology? At the root of the self-similar properties that we unravel in this study is a scale-invariant renormalization procedure that we show to be valid for dissimilar complex networks.
Figure 1 The renormalization procedure applied to complex networks. a, Demonstration of the method for different l_B. The first column depicts the original network. We tile the system with boxes of size l_B (different colours correspond to different boxes). All nodes in a box are connected by a minimum distance smaller than the given l_B. For instance, in the case of l_B = 2, we identify four boxes that contain the nodes depicted with colours red, orange, white and blue, each containing 3, 2, 1 and 2 nodes, respectively. Then we replace each box by a single node; two renormalized nodes are connected if there is at least one link between the unrenormalized boxes. Thus we obtain the network shown in the second column. The resulting number of boxes needed to tile the network, N_B(l_B), is plotted in Fig. 2 versus l_B to obtain d_B as in equation (3). The renormalization procedure is applied again and repeated until the network is reduced to a single node (third and fourth columns for different l_B). b, The stages in the renormalization scheme applied to the entire WWW. We fix the box size to l_B = 3 and apply the renormalization for four stages. This corresponds, for instance, to the sequence for the network demonstration depicted in the second row in panel a. We colour the nodes in the web according to the boxes to which they belong. The network is invariant under this renormalization, as explained in the legend of Fig. 2d and the Supplementary Information.
To demonstrate this concept we first consider a self-similar
network embedded in euclidean space, of which a classical example
would be a fractal percolation cluster at criticality12. To unfold the
self-similar properties of such clusters we calculate the fractal
dimension using a ‘box-counting’ method and a ‘cluster-growing’
method13.
In the first method we cover the percolation cluster with N_B boxes of linear size l_B. The fractal dimension or box dimension d_B is then given by14:

$$N_B \approx l_B^{-d_B} \qquad (3)$$
In the second method, the network is not covered with boxes.
Instead one seed node is chosen at random and a cluster of nodes
centred at the seed and separated by a minimum distance l is
calculated. The procedure is then repeated by choosing many seed
nodes at random and the average ‘mass’ of the resulting clusters
(⟨M_c⟩, defined as the number of nodes in the cluster) is calculated as a function of l to obtain the following scaling:

$$\langle M_c \rangle \approx l^{d_f} \qquad (4)$$

defining the fractal cluster dimension d_f (ref. 14). Comparing equations (4) and (1) implies that d_f = ∞ for complex small-world networks.
For a homogeneous network characterized by a narrow degree
distribution (such as a fractal percolation cluster) the box-counting
method of equation (3) and the cluster-growing method of
equation (4) are equivalent, because every node typically has the
same number of links or neighbours. Equation (4) can then be
derived from equation (3) and d_B = d_f, and this relation has been
regularly used.
The crux of the matter is to understand how we can calculate a
self-similar exponent (analogous to the fractal dimension in euclidean space) in complex inhomogeneous networks with a broad
degree distribution such as equation (2). Under these conditions
equations (3) and (4) are not equivalent, as will be shown below. The
application of the proper covering procedure in the box-counting
method (equation (3)) for complex networks unveils a set of self-similar properties such as a finite self-similar exponent and a new set
of critical exponents for the scale-invariant topology.
Figure 1a illustrates the box-covering method using a schematic
network composed of eight nodes. For each value of the box size lB,
we search for the number of boxes needed to tile the entire network
such that each box contains nodes separated by a distance l < l_B.
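Finding the true minimum number of boxes is a hard combinatorial problem, so in practice a heuristic covering is used. The sketch below (ours, not the authors' published algorithm) implements a simple greedy covering consistent with the rule that all nodes in a box lie at pairwise distance smaller than l_B.

```python
# Sketch: greedy box covering for equation (3). A box may hold any set
# of nodes whose pairwise distances are all smaller than l_B; N_B(l_B)
# is the number of boxes needed to cover the whole network. Greedy
# covering only approximates the NP-hard minimum.

from collections import deque

def distances_from(adj, source):
    """BFS distances from source; adj maps node -> set of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def box_count(adj, l_b):
    dist = {u: distances_from(adj, u) for u in adj}   # all-pairs distances
    uncovered = set(adj)
    n_boxes = 0
    while uncovered:
        seed = next(iter(uncovered))
        box = {seed}
        for u in uncovered - box:
            if all(dist[u].get(v, float("inf")) < l_b for v in box):
                box.add(u)
        uncovered -= box
        n_boxes += 1
    return n_boxes

# Example: an 8-node path; at l_B = 3 a box may span nodes within
# distance 2, so roughly three boxes are needed.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 8} for i in range(8)}
print(box_count(path, 3))
```

Fitting log N_B against log l_B over a range of box sizes then estimates d_B.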
This procedure is applied to several different real networks: (1) a
part of the WWW composed of 325,729 web pages that are
connected if there is a URL link from one page to another6
(http://www.nd.edu/~networks); (2) a social network where the
nodes are 392,340 actors linked if they were cast together in at least
one film15; (3) the biological networks of protein–protein interactions found in Escherichia coli (429 proteins) and Homo sapiens
(946 proteins) linked if there is a physical binding between them
(database available via the Database of Interacting Proteins16,17,
other PINs are discussed in the Supplementary Information), and
(4) the cellular networks compiled by ref. 18 using a graph-theoretical representation of all the biochemical pathways based on the WIT integrated-pathway genome database19 (http://igweb.integratedgenomics.com/IGwit) of 43 species from Archaea, Bacteria and Eukarya. Here we show the results for Archaeoglobus
fulgidus, E. coli and Caenorhabditis elegans18; the full database is
analysed in the Supplementary Information. It has been previously
determined that the WWW and actor networks are small-world and
scale-free, characterized by equation (2) with γ = 2.6 and 2.2,
respectively1. For the PINs of E. coli and H. sapiens we find
γ = 2.2 and 2.1, respectively. All cellular networks are scale-free with average exponent γ = 2.2 (ref. 18). We confirm these values
and show the results for the WWW in Fig. 2.
Figure 2a and b shows the results of N_B(l_B) according to equation (3). They reveal the existence of self-similarity in the WWW, actors, and E. coli and H. sapiens PINs, with self-similar exponents d_B = 4.1, d_B = 6.3, and d_B = 2.3 and d_B = 2.3, respectively. The cellular networks are shown in Fig. 2c and have d_B = 3.5.

Figure 2 Self-similar scaling in complex networks. a, The upper panel shows a log-log plot of N_B versus l_B, revealing the self-similarity of the WWW and actor network according to equation (3). The lower panel shows the scaling of s(l_B) versus l_B according to equation (9). The error bars are of the order of the symbol size. b, Same as a but for two PINs: H. sapiens and E. coli. Results are analogous to a but with different scaling exponents. c, Same as a for the cellular networks of A. fulgidus, E. coli and C. elegans. d, Invariance of the degree distribution of the WWW under the renormalization for different box sizes, l_B. We show the data collapse of the degree distributions, demonstrating the self-similarity at different scales. The inset shows the scaling of k′ = s(l_B)k for different l_B, whence we obtain the scaling factor s(l_B). Moreover, we also apply the renormalization for a fixed box size, for instance l_B = 3 as shown in Fig. 1b for the WWW, until the network is reduced to a few nodes, and find that P(k) is invariant under these multiple renormalizations as well, for several iterations (see Supplementary Information).
We now elaborate on the apparent contradiction between the two
definitions of self-similar exponents in complex networks. After
performing a renormalization at a given l_B, we calculate the mean mass of the boxes covering the network, ⟨M_B(l_B)⟩, to obtain:

$$\langle M_B(l_B) \rangle \equiv N / N_B(l_B) \approx l_B^{d_B} \qquad (5)$$
which is corroborated by direct measurements for all the networks,
and shown in Fig. 3a for the WWW.
On the other hand, the average obtained from the cluster-growing method (for this calculation we average over single boxes without tiling the system) gives rise to an exponential growth of the mass:

$$\langle M_c(l_B) \rangle \approx e^{l_B / l_1} \qquad (6)$$

with l_1 ≈ 0.78, in accordance with the small-world effect of equation (1), as seen in Fig. 3a.
The topology of scale-free networks is dominated by several
highly connected hubs—the nodes with the largest degree—implying that most of the nodes are connected to the hubs via one or very
few steps. Therefore the average obtained in the cluster-growing
method is biased; the hubs are overrepresented in equation (6)
because almost every node is a neighbour of a hub. By choosing the
seed of the clusters at random, there is a very large probability of
including the hubs in the clusters. On the other hand, the box-covering method is a global tiling of the system, providing a flat
average over all the nodes: that is, each part of the network is covered
with an equal probability. Once a hub (or any node) is covered, it
cannot be covered again. We conclude that equations (3) and (4)
are not equivalent for inhomogeneous networks with topologies
dominated by hubs with a large degree.
The biased sampling of the randomly chosen nodes is clearly
demonstrated in Fig. 3b. We find that the probability distribution of
the mass of the boxes for a given l_B is very broad and can be approximated by a power-law, $P(M_B) \approx M_B^{-2.2}$, in the case of the WWW and l_B = 4. On the other hand, the probability distribution
of M c is very narrow and can be fitted by a log-normal distribution
(see Fig. 3b). In the box-covering method there are many boxes with
very large and very small masses, in contrast to the peaked
distribution in the cluster-growing method, thus showing the biased
nature of the latter method in inhomogeneous networks. This
biased average leads to the exponential growth of the mass in
equation (6) and it also explains why the average distance is
logarithmic with N, as in equation (1).
The box-counting method provides a powerful tool for further
investigations of network properties because it enables a renormalization procedure, revealing that the self-similar properties and the
scale-free degree distribution persist irrespective of the amount of
coarse-graining of the network.
Subsequent to the first step of assigning the nodes to the boxes we
create a new renormalized network by replacing each box by a single
node. Two boxes are then connected, provided that there was at least
one link between their constituent nodes. The second column of the
panels in Fig. 1a shows this step in the renormalization procedure
for the schematic network, while Fig. 1b shows the results for the
same procedure applied to the entire WWW for ℓ_B = 3.
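In code, this renormalization step is a quotient of the graph by its box partition, and networkx's quotient_graph connects two blocks exactly when at least one edge runs between them, matching the rule just described. A minimal sketch (ours, not the authors' implementation, reusing the greedy-colouring box covering from the previous sketch):

    import networkx as nx

    def boxes_from_colouring(G, l_B):
        # Partition G into boxes of diameter < l_B via greedy colouring
        # of the auxiliary graph (see the previous sketch).
        nodes = list(G)
        dist = dict(nx.all_pairs_shortest_path_length(G))
        aux = nx.Graph()
        aux.add_nodes_from(nodes)
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if dist[u].get(v, float("inf")) >= l_B:
                    aux.add_edge(u, v)
        groups = {}
        for node, colour in nx.greedy_color(aux, strategy="largest_first").items():
            groups.setdefault(colour, set()).add(node)
        return list(groups.values())

    def renormalize(G, l_B):
        # Replace each box by a single node; two box-nodes are linked if
        # any edge joined their constituent nodes in the original graph.
        boxes = boxes_from_colouring(G, l_B)
        return nx.Graph(nx.quotient_graph(G, boxes, relabel=True))

    if __name__ == "__main__":
        G = nx.barabasi_albert_graph(300, 2, seed=7)
        for step in range(3):  # iterate the coarse-graining
            print(step, G.number_of_nodes(), G.number_of_edges())
            G = renormalize(G, l_B=3)

Comparing the degree distribution before and after each step gives a qualitative version of the invariance test in equation (7), with the caveat that a small synthetic graph only loosely mimics the WWW data.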
The renormalized network gives rise to a new probability distribution of links, P(k′), which is invariant under the renormalization:

P(k) → P(k′) ≈ (k′)^{−γ}    (7)
Figure 2d supports the validity of this scale transformation by
showing a data collapse of all distributions with the same γ
according to equation (7) for the WWW.
Further insight arises from relating the scale-invariant properties
(equation (3)) to the scale-free degree distribution (equation (2)).
Plotting (see inset in Fig. 2d for the WWW) the number of links k′
of each node in the renormalized network versus the maximum
number of links k in each box of the unrenormalized network
exhibits a scaling law:
k → k′ = s(ℓ_B) k    (8)

This equation defines the scaling transformation in the connectivity distribution. Empirically we find that the scaling factor s (<1) scales with ℓ_B with a new exponent d_k:

s(ℓ_B) ≈ ℓ_B^{−d_k}    (9)

as shown in Fig. 2a for the WWW and actor networks (with d_k = 2.5 and d_k = 5.3, respectively), in Fig. 2b for the protein networks (d_k = 2.1 for E. coli and d_k = 2.2 for H. sapiens) and in Fig. 2c for the cellular networks with d_k = 3.2.

Figure 3 Different averaging techniques lead to qualitatively different results. a, Mean value of the box mass in the box-counting method, ⟨M_B⟩, and the cluster mass in the cluster-growing method, ⟨M_c⟩, for the WWW. The solid lines represent the power-law fit for ⟨M_B⟩ and the exponential fit for ⟨M_c⟩ according to equations (5) and (6), respectively. b, Probability distribution of M_B and M_c for ℓ_B = 4 for the WWW. The curves are fitted by a power law and a log-normal distribution, respectively.
Equations (8) and (9) shed light on how families of hierarchical
sizes are linked together. The larger the families, the fewer links exist.
Surprisingly, the same power-law relation exists for large and small
families represented by equation (2).
From equation (7) we obtain n(k)dk = n′(k′)dk′, where n(k) = NP(k) is the number of nodes with k links and n′(k′) = N′P(k′) is the number of nodes with k′ links after the renormalization (N′ is the total number of nodes in the renormalized network). Using equation (8), we obtain n(k) = s^{1−γ}n′(k). Then, upon renormalizing a network with N total nodes we obtain a smaller number of nodes N′ according to N′ = s^{γ−1}N. The total number of nodes in the renormalized network is the number of boxes needed to cover the unrenormalized network at any given ℓ_B, so we have N′ = N_B(ℓ_B). Then, from equations (3) and (9) we obtain the relation between the three indexes:

γ = 1 + d_B/d_k    (10)
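As a quick arithmetic illustration (ours, using only exponents quoted in this Letter; exact agreement is not expected, since each exponent carries its own fitting uncertainty), the cellular-network values d_B = 3.5 and d_k = 3.2 give

γ = 1 + 3.5/3.2 ≈ 2.1

reasonably close to the directly measured γ = 2.2 quoted above for the cellular networks.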
Equation (10) is confirmed for all the networks analysed here (see Supplementary Information). In all cases the calculation of d_B and d_k and equation (10) gives rise to the same γ exponent as that obtained in the direct calculation of the degree distribution. The significance of this result is that the scale-free properties characterized by γ can be related to a more fundamental length-scale invariant property, characterized by the two new indexes d_B and d_k.
Received 4 August; accepted 30 November 2004; doi:10.1038/nature03248.
1. Albert, R. & Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97
(2002).
2. Dorogovtsev, S. N. & Mendes, J. F. F. Evolution of Networks: From Biological Nets to the Internet and the
WWW (Oxford Univ. Press, Oxford, 2003).
3. Pastor-Satorras, R. & Vespignani, A. Evolution and Structure of the Internet: a Statistical Physics
Approach (Cambridge Univ. Press, Cambridge, 2004).
4. Newman, M. E. J. The structure and function of complex networks. SIAM Rev. 45, 167–256
(2003).
5. Amaral, L. A. N. & Ottino, J. M. Complex networks—augmenting the framework for the study of
complex systems. Eur. Phys. J. B 38, 147–162 (2004).
6. Albert, R., Jeong, H. & Barabási, A.-L. Diameter of the World Wide Web. Nature 401, 130–131
(1999).
7. Faloutsos, M., Faloutsos, P. & Faloutsos, C. On power-law relationships of the Internet topology.
Comput. Commun. Rev. 29, 251–262 (1999).
8. Erdös, P. & Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5, 17–61
(1960).
9. Bollobás, B. Random Graphs (Academic, London, 1985).
10. Milgram, S. The small-world problem. Psychol. Today 2, 60–67 (1967).
11. Watts, D. J. & Strogatz, S. H. Collective dynamics of ‘small-world’ networks. Nature 393, 440–442
(1998).
12. Bunde, A. & Havlin, S. Fractals and Disordered Systems Ch. 2 (eds Bunde, A. & Havlin, S.) 2nd edn
(Springer, Heidelberg, 1996).
13. Vicsek, T. Fractal Growth Phenomena 2nd edn, Part IV (World Scientific, Singapore, 1992).
14. Feder, J. Fractals (Plenum, New York, 1988).
15. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512
(1999).
16. Xenarios, I. et al. DIP: the database of interacting proteins. Nucleic Acids Res. 28, 289–291 (2000).
17. Database of Interacting Proteins (DIP) <http://dip.doe-mbi.ucla.edu> (2000).
18. Jeong, H., Tombor, B., Albert, R., Oltvai, Z. N. & Barabási, A.-L. The large-scale organization of
metabolic networks. Nature 407, 651–654 (2000).
19. Overbeek, R. et al. WIT: integrated system for high-throughput genome sequence analysis and
metabolic reconstruction. Nucleic Acid Res. 28, 123–125 (2000).
Supplementary Information accompanies the paper on www.nature.com/nature.
Acknowledgements We are grateful to J. Brujić for many discussions. This work is supported by
the National Science Foundation, Materials Theory. S.H. thanks the Israel Science Foundation
and ONR for support.
Competing interests statement The authors declare that they have no competing financial
interests.
Correspondence and requests for materials should be addressed to H.A.M. ([email protected]).
..............................................................
Strong polarization enhancement
in asymmetric three-component
ferroelectric superlattices
Ho Nyung Lee, Hans M. Christen, Matthew F. Chisholm,
Christopher M. Rouleau & Douglas H. Lowndes
Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge,
Tennessee 37831, USA
.............................................................................................................................................................................
Theoretical predictions—motivated by recent advances in epitaxial engineering—indicate a wealth of complex behaviour
arising in superlattices of perovskite-type metal oxides. These
include the enhancement of polarization by strain1,2 and the
possibility of asymmetric properties in three-component superlattices3. Here we fabricate superlattices consisting of barium
titanate (BaTiO3), strontium titanate (SrTiO3) and calcium
titanate (CaTiO3) with atomic-scale control by high-pressure
pulsed laser deposition on conducting, atomically flat strontium
ruthenate (SrRuO3) layers. The strain in BaTiO3 layers is fully
maintained as long as the BaTiO3 thickness does not exceed the
combined thicknesses of the CaTiO3 and SrTiO3 layers. By
preserving full strain and combining heterointerfacial couplings,
we find an overall 50% enhancement of the superlattice global
polarization with respect to similarly grown pure BaTiO3, despite
the fact that half the layers in the superlattice are nominally non-ferroelectric. We further show that even superlattices containing
only single-unit-cell layers of BaTiO3 in a paraelectric matrix
remain ferroelectric. Our data reveal that the specific interface
structure and local asymmetries play an unexpected role in the
polarization enhancement.
Oxide heterostructures with atomically abrupt interfaces, defined
by atomically flat surface terraces and single-unit-cell steps, can now
be grown on well-prepared single-stepped substrates4–7. This
advance has encouraged theoretical investigations that have led
to predictions of new artificial materials1–3,8–10. The atomic-scale
control of the combining of dissimilar materials is expected to
produce striking property enhancements as well as new combinations of desired properties. Here we discuss the experimental
realization of one of these predictions, the strain enhancement of
ferroelectric polarization. The challenge associated with fabricating
such strained structures—the deliberate and controlled deposition
of up to hundreds of individual layers—remains a formidable task,
for which the principal technique used has been high-vacuum
molecular beam epitaxy5,11. However, many insulators do not
yield the correct oxide stoichiometry (or expected resulting physical
properties) when grown by molecular beam epitaxy. Furthermore, a
shortage of electrically conducting oxide substrates and our still-limited understanding of the stability and growth mechanisms
of conducting-film electrodes have hindered the electrical characterization of oxide superlattices.
To address these challenges, we have recently shown that atomically flat, electrically conducting SrRuO3 electrodes can be grown
with a surface quality that mimics that of the substrate (Fig. 1a)7.
Pulsed laser deposition (PLD) has long been regarded as an effective
method for synthesizing various oxide heterostructures12–15, but
obtaining atomically sharp interfaces has been difficult in the
comparatively high-pressure processes needed to maintain oxygen
stoichiometry. Here we demonstrate the growth by a high-pressure
PLD technique of hundreds of individual perovskite layers of
BaTiO3, SrTiO3 and CaTiO3. These superlattices were grown with
layer-by-layer control, yielding as-grown samples with compositionally abrupt interfaces, atomically smooth surfaces, and excellent ferroelectric behaviour, indicative of correct oxygen stoichiometry.
Vol 440|30 March 2006|doi:10.1038/nature04546
REVIEWS
Eukaryotic evolution, changes and
challenges
T. Martin Embley1 & William Martin2
1School of Biology, The Devonshire Building, University of Newcastle upon Tyne, Newcastle NE1 7RU, UK. 2Institute of Botany III, University of Düsseldorf, D-40225 Düsseldorf, Germany.

The idea that some eukaryotes primitively lacked mitochondria and were true intermediates in the prokaryote-to-eukaryote transition was an exciting prospect. It spawned major advances in understanding anaerobic and parasitic
eukaryotes and those with previously overlooked mitochondria. But the evolutionary gap between prokaryotes and
eukaryotes is now deeper, and the nature of the host that acquired the mitochondrion more obscure, than ever before.
New findings have profoundly changed the ways in which we
view early eukaryotic evolution, the composition of major
groups, and the relationships among them. The changes
have been driven by a flood of sequence data combined with
improved—but by no means consummate—computational methods
of phylogenetic inference. Various lineages of oxygen-shunning or
parasitic eukaryotes were once thought to lack mitochondria and
to have diverged before the mitochondrial endosymbiotic event.
Such key lineages, which are salient to traditional concepts about
eukaryote evolution, include the diplomonads (for example, Giardia),
trichomonads (for example, Trichomonas) and microsporidia (for
example, Vairimorpha). From today’s perspective, many key groups
have been regrouped in unexpected ways, and aerobic and anaerobic
eukaryotes intermingle throughout the unfolding tree. Mitochondria
in previously unknown biochemical manifestations seem to be
universal among eukaryotes, modifying our views about the nature
of the earliest eukaryotic cells and testifying to the importance of
endosymbiosis in eukaryotic evolution. These advances have freed
the field to consider new hypotheses for eukaryogenesis and to weigh
these, and earlier theories, against the molecular record preserved in
genomes. Newer findings even call into question the very notion of a
‘tree’ as an adequate metaphor to describe the relationships among
genomes. Placing eukaryotic evolution within a time frame and
ancient ecological context is still problematic owing to the vagaries
of the molecular clock and the paucity of Proterozoic fossil eukaryotes
that can be clearly assigned to contemporary groups. Although the
broader contours of the eukaryote phylogenetic tree are emerging
from genomic studies, the details of its deepest branches, and its root,
remain uncertain.
The universal tree and early-branching eukaryotic lineages
The universal tree based on small-subunit (SSU) ribosomal RNA1 provided a first overarching view of the relationships between the different types of cellular life. The relationships among eukaryotes recovered from rRNA2, backed up by trees of translation elongation factor (EF) proteins3, provided what seemed to be a consistent, and hence compelling, picture (Fig. 1). The three protozoa at the base of these trees (Giardia, Trichomonas and Vairimorpha), along with Entamoeba and its relatives, were seen as members of an ultrastructurally simple, paraphyletic group of eukaryotes called the Archezoa4. Archezoa were thought to primitively lack mitochondria, having split from the main trunk of the eukaryotic tree before the mitochondrial endosymbiosis: all other eukaryotes contain mitochondria because they diverged after this singular symbiotic event5. Therefore, Archezoa were interpreted as contemporary descendants of a phagotrophic, nucleated, amitochondriate cell lineage that included the host for the mitochondrial endosymbiont6. The apparent agreement between molecules and morphology depicted the relative timing of the mitochondrial endosymbiosis (Fig. 1) as a crucial, but not ancestral, event in eukaryote phylogeny.

Figure 1 | The general outline of eukaryote evolution provided by rooted rRNA trees. The tree has been redrawn and modified from ref. 92. Until recently, lineages branching near the root were thought to primitively lack mitochondria and were termed Archezoa4. Exactly which archezoans branched first is not clearly resolved by rRNA data2, hence the polytomy (more than two branches from the same node) involving diplomonads, parabasalids and microsporidia at the root. Plastid-bearing lineages are indicated in colours approximating their respective pigmentation. Lineages furthest away from the root, including those with multicellularity, were thought to be the latest-branching forms and were sometimes misleadingly (see ref. 60) called the ‘crown’ groups.

Chinks in the consensus
Mitochondrial genomes studied so far encode less than 70 of the proteins that mitochondria need to function5; most mitochondrial proteins are encoded by the nuclear genome and are targeted to
mitochondria using a protein import machinery that is specific to
this organelle7. The mitochondrial endosymbiont is thought to have
belonged to the α-proteobacteria, because some genes and proteins
still encoded by the mitochondrial genome branch in molecular
trees among homologues from this group5,8. Some mitochondrial
proteins, such as the 60- and 70-kDa heat shock proteins (Hsp60,
Hsp70), also branch among α-proteobacterial homologues, but the
genes are encoded by the host nuclear genome. This is readily
explained by a corollary to endosymbiotic theory called endosymbiotic gene transfer9: during the course of mitochondrial genome
reduction, genes were transferred from the endosymbiont’s genome
to the host’s chromosomes, but the encoded proteins were reimported into the organelle where they originally functioned. With
the caveat that gene origin and protein localization do not always
correspond9, any nuclear-encoded protein that functions in mitochondria and clusters with α-proteobacterial homologues is most
simply explained as originating from the mitochondrion in this
manner.
By that reasoning10, the discovery of mitochondrial Hsp60 in
E. histolytica was taken as evidence that its ancestors harboured
mitochondria. A flood of similar reports on mitochondrial Hsp60
and Hsp70 from all key groups of Archezoa ensued11, suggesting that
their common ancestor also contained mitochondria. At face value, those findings falsified the central prediction of the archezoan concept. However, suggestions were offered that lateral gene transfer (LGT) in a context not involving mitochondria could also account for the data. But that explanation, apart from being convoluted, now seems unnecessary: the organisms once named Archezoa for lack of mitochondria not only have mitochondrial-derived proteins, they have the corresponding double-membrane-bounded organelles as well.

Figure 2 | Enzymes and pathways found in various manifestations of mitochondria. Proteins sharing more sequence similarity to eubacterial than to archaebacterial homologues are shaded blue; those with converse similarity pattern are shaded red; those whose presence is based only on biochemical evidence are shaded grey; those lacking clearly homologous counterparts in prokaryotes are shaded green. a, Schematic summary of salient biochemical functions in mitochondria5,88, including some anaerobic forms16,17. b, Schematic summary of salient biochemical functions in hydrogenosomes14,19. c, Schematic summary of available findings for mitosomes and ‘remnant’ mitochondria32–34,93. The asterisk next to the Trachipleistophora and Cryptosporidium mitosomes denotes that these organisms are not anaerobes in the sense that they do not inhabit O2-poor niches, but that their ATP supply is apparently O2-independent. UQ, ubiquinone; CI, mitochondrial complex I (and II, III and IV, respectively); NAD, nicotinamide adenine dinucleotide; MCF, mitochondrial carrier family protein transporting ADP and ATP; STK, succinate thiokinase; PFO, pyruvate:ferredoxin oxidoreductase; PDH, pyruvate dehydrogenase; CoA, coenzyme A; Fd, ferredoxin; HDR, iron-only hydrogenase; PFL, pyruvate:formate lyase; ASC, acetate-succinate CoA transferase; ADHE, bi-functional alcohol acetaldehyde dehydrogenase; FRD, fumarate reductase; RQ, rhodoquinone; Hsp, heat shock protein; IscU, iron–sulphur cluster assembly scaffold protein; IscS, cysteine desulphurase; ACS (ADP), acetyl-CoA synthase (ADP-forming).

Mitochondria in multiple guises
The former archezoans are mostly anaerobes, avoiding all but a trace of oxygen, and like many anaerobes, including various ciliates and fungi that were never grouped within the Archezoa, they are now known to harbour derived mitochondrial organelles—hydrogenosomes and mitosomes. These organelles all share one or more traits in common with mitochondria (Fig. 2), but no traits common to them all, apart from the double membrane and conserved mechanisms of protein import, have been identified so far. Mitochondria typically—but not always (the Cryptosporidium mitochondrion lacks DNA12)—possess a genome that encodes components involved in oxidative phosphorylation5. With one notable exception13, all hydrogenosomes
and mitosomes studied so far lack a genome. The organisms in which
they have been studied generate ATP by fermentations involving
substrate-level phosphorylations, rather than through chemiosmosis
involving an F1/F0-type ATPase12,14,15. Entamoeba, Giardia and
Trichomonas live in habitats too oxygen-poor to support aerobic
respiration14, while others, like Cryptosporidium and microsporidia,
have drastically reduced their metabolic capacities during adaptation
to their lifestyles as intracellular parasites12,15.
Between aerobic mitochondria, which use oxygen as the terminal
electron acceptor of ATP-producing oxidations, and Nyctotherus
hydrogenosomes, which (while retaining a mitochondrial genome)
use protons instead of oxygen13, there are a variety of other anaerobically functioning mitochondria. They occur in protists such as
Euglena, but also in multicellular animals such as Fasciola and
Ascaris, which typically excrete acetate, propionate or succinate,
instead of H2O or H2, as their major metabolic end-products16,17.
Hence, mitochondria, hydrogenosomes and mitosomes are viewed
most simply as variations on a single theme, one that fits neatly
within the framework provided by classical evolutionary theory18.
They are evolutionary homologues that share similarities because of
common ancestry, but—like forelimbs in vertebrates—differ substantially in form and function across lineages owing to descent with
modification.
Hydrogen-producing mitochondria
Hydrogenosomes oxidize pyruvate to H2, CO2 and acetate, making
ATP by substrate-level phosphorylation19 that they export to the
cytosol using a mitochondrial-type ADP/ATP carrier20,21. They have
been identified in trichomonads, chytridiomycetes and ciliates13,22;
their hydrogen excretion helps to maintain redox balance14 in
these organisms. Important similarities between Trichomonas
hydrogenosomes and mitochondria include the use of common
protein import pathways23, conserved mechanisms of iron–sulphur-cluster assembly24, conserved mechanisms of NAD+ regeneration25,
and conservation of a canonical ATP-producing enzyme of the
mitochondrial Krebs cycle—succinate thiokinase26. On the basis of
electron microscopy and ecology, additional, and diverse, eukaryotic
lineages are currently suspected to contain hydrogenosomes27,28, but
hydrogen production—the defining characteristic of hydrogenosomes19 —by those organelles has not yet been shown.
In contrast to most mitochondria, hydrogenosomes typically
contain pyruvate:ferredoxin oxidoreductase (PFO) and iron [Fe]
hydrogenase. Common among anaerobic bacteria, these enzymes
prompted the early suggestion that trichomonad hydrogenosomes
arose from a Clostridium-like endosymbiont29. In a recent rekindling
of that idea30,31, trichomonad hydrogenosomes were suggested to be
hybrid organelles, derived from an endosymbiotic anaerobic bacterium (the source of PFO and hydrogenase genes), a failed mitochondrial endosymbiosis (the source of nuclear genes for mitochondrial
Hsp60 and Hsp70), plus LGT from a mitochondrially related (but
non-mitochondrial) donor (the source of NADH dehydrogenase).
However, independent work suggested a mitochondrial, rather than
hybrid, origin of the Trichomonas NADH dehydrogenase25. Furthermore, the hybrid hypothesis fails to account for the presence of [Fe]hydrogenase homologues in algal chloroplasts, PFO homologues in
Euglena mitochondria, or the presence of either enzyme and hydrogenosomes in other eukaryotic lineages25; hence, a single common
ancestry of mitochondria and hydrogenosomes sufficiently accounts
for current observations.
Mitochondria reduced to bare bones
Mitosomes were discovered in Entamoeba32 as mitochondrion-derived organelles that have undergone more evolutionary reduction
than hydrogenosomes. They are also found in Giardia33 and microsporidia34. Mitosomes seem to have no direct role in ATP synthesis
because, so far, they have been found only among eukaryotes
whose core ATP synthesis occurs in the cytosol14 or among energy
parasites15. Mitosomes import proteins in a mitochondrial-like
manner35–37, and Giardia mitosomes contain two mitochondrial
proteins of Fe–S cluster assembly—cysteine desulphurase (IscS)
and iron-binding protein (IscU)33. Fe–S clusters are essential for
life: they are cofactors of electron transfer, catalysis, redox sensing
and ribosome biogenesis in eukaryotes38. Fe–S cluster assembly is
an essential function of yeast mitochondria38 and it has been
widely touted as a potential common function for mitochondrial
homologues15,22. It is the only known function of Giardia mitosomes,
which, like Trichomonas hydrogenosomes24,37, promote assembly of
[2Fe–2S] clusters into apoferredoxin in vitro33. By contrast, and (so
far) uniquely among eukaryotes, Entamoeba uses two proteins of
non-mitochondrial ancestry for Fe–S cluster assembly39; the location
of this pathway in Entamoeba is currently unknown.
Branch migrations and evolutionary models
The discovery of mitochondrial homologues in Giardia, Trichomonas
and microsporidians, which had been the best candidates for
eukaryotes that primitively lacked mitochondria, has pinned the
timing of the mitochondrial origin to the ancestor of all eukaryotes
studied so far. But that does not mean that the basal position of these
groups in the SSU rRNA tree (Fig. 1) and EF trees3 is necessarily
incorrect. That issue hinges on efforts to construct reliable rooted
phylogenetic trees depicting ancient eukaryotic relationships: a
developing area of research that is fraught with difficulties. The
tempo and mode of sequence evolution is far more complicated than
is assumed by current mathematical models that are used to make
phylogenetic trees40. In computer simulations, where the true tree is
known, model mis-specification can produce the wrong tree with
strong support41.
Different sites in molecular sequences evolve at different rates, and
failure to accommodate this rate variation, something early methods
failed to do, can lead to strongly supported but incorrect trees owing
to a common problem called ‘long-branch-attraction’42. This occurs
when branches that are long or ‘fast evolving’, relative to others in the
tree, cluster together irrespective of evolutionary relationships. The
molecular sequences of Giardia, Trichomonas and microsporidia
often form long branches in trees and thus are particularly prone
to this problem25,43,44. The traditional models that placed microsporidia deep within trees2,3 assumed that all sequence sites evolved at
the same rate, even though they clearly do not. In these trees, the
long-branch microsporidia are next to the long branches of the
prokaryotic outgroups. More data and better models have produced
trees that agree in placing microsporidia with fungi45,46, suggesting
that the deep position of microsporidia in early trees was indeed an
artefact.
The position of Giardia and Trichomonas sequences at the base of
eukaryotic molecular trees is also suspect, given that they also form
long branches in the trees that place them in this way, and because
other trees and models place them together as an internal branch of a
rooted eukaryotic tree47. Resolving which position is correct is
particularly important, because Giardia and Trichomonas are still
commonly referred to as ‘early-branching’ eukaryotes. Given the
evident uncertainties of such phylogenies, and the importance of the
problem, the onus is on those who would persist in calling these
species ‘early branching’ to show that trees placing them deep explain
the data significantly better than trees that do not.
The root of the eukaryotic tree
The usual way to root a phylogenetic tree is by reference to an
outgroup; the rRNA and EF trees used prokaryotic sequences to root
eukaryotes on either the Giardia, Trichomonas or microsporidia
branch (Fig. 1), but these rootings have not proved robust43–45. The
sequences of outgroups are often highly divergent compared to those
of the ingroup, making it difficult to avoid model mis-specification
and long-branch-attraction44,48.
An alternative method of rooting an existing tree is to look for rare
changes in a complex molecular character where the ancestral state
can be inferred. This method was used49 to infer that the root of the
eukaryotic tree lies between the animals, fungi and amoebozoa
(together called unikonts) on the one side, and plants, algae and
most protozoa (bikonts) on the other. In fungi and animals, the genes
for dihydrofolate reductase (DHFR) and thymidylate synthase (TS)
are separate44, as they are in prokaryote outgroups; but they are fused
in the bikonts sampled so far. Assuming that the fusion occurred
only once and that its subsequent fission did not occur at all, the
DHFR–TS fusion would be a derived feature uniting bikonts,
suggesting that the eukaryote root lies outside this group49. The
coherence of animals, fungi and various unicellular eukaryotes
(together called opisthokonts) is supported by phylogenetic trees
and other characters50. The presence of a type II myosin in opisthokonts and amoebozoa unites them to form the unikonts51. If both
unikonts and bikonts are monophyletic groups, and together they
encompass extant eukaryotic diversity, then the root of eukaryotes
would lie between them.
Placing the eukaryote root between unikonts and bikonts would
help to bring order to chaos, if it is correct. However, it assumes that
the underlying tree—over which the rooting character is mapped—is
known, when in fact the relationships—especially for bikonts and
many enigmatic protistan lineages52—remain uncertain. The rooting
also depends upon a single character of unknown stability sampled
from only a few species. An additional caveat is that Giardia and
Trichomonas lack both DHFR and TS—parasites relinquish genes of
various biosynthetic pathways, stealing the pathway products from
their hosts instead. Hence, the missing fusion character does not
address their position in the tree.
New hypotheses of eukaryotic relationships
New data and analyses from many laboratories have been used to formulate a number of hypotheses of eukaryotic relationships (Fig. 3) that fundamentally differ from those in the SSU rRNA tree. It is apparent that hydrogenosomes and mitosomes appear on different branches; the absence of traditional mitochondria and presence of a specialized anaerobic phenotype are neither rare nor ‘primitive’, as once thought. Mitochondria with a genome encoding elements of the respiratory pathway also appear on both sides of the tree (Fig. 3), suggesting that this pathway has been retained since earliest times; although, as modern examples attest16,17, it need not have always used oxygen as the sole terminal electron acceptor. On the basis of the unfolding tree, it would seem entirely possible—if not likely—that aerobic and anaerobic eukaryotes, harbouring mitochondrial homologues of various sorts, have co-existed throughout eukaryote history.

The relationships between major groups of eukaryotes are uncertain because of the lack of agreement between different proteins and different analyses; this uncertainty is depicted as a series of polytomies in Fig. 3. Most groups are still poorly sampled for species and molecular sequences—factors that impede robust resolution53. It has been suggested54 that the lack of resolution in deeper parts of the eukaryotic tree stems from an evolutionary ‘big bang’ or rapid radiation for eukaryotes, perhaps driven by the mitochondrial endosymbiosis54. However, both theory and computer simulations40,41 suggest that a lack of resolution at deeper levels is to be expected given sparse data, our assumptions about sequence evolution, and the limitations of current phylogenetic methods. Thus, loss of historical signal provides a simple null hypothesis for the observed lack of resolution in deeper parts of the eukaryotic tree.

Figure 3 | Schematic tree of newer hypotheses for phylogenetic relationships among major groups of eukaryotes. The composite tree is based on work from many different laboratories and is summarized elsewhere52; no single data set supports all branches. Polytomies indicate uncertainty in the branching order between major groups. The naming of groups follows current popular usage52,60. The current debate that the root of the tree may split eukaryotes into bikonts and unikonts is discussed in the text. Lineages containing species with comparatively well-studied hydrogenosomes (H) or mitosomes (M) are labelled. The depicted distribution of hydrogenosomes and mitosomes is almost certainly conservative, as relatively few anaerobic or parasitic microbial eukaryotes have been studied in sufficient detail to characterize their organelles. The strict coevolution of host nuclear and algal nuclear plus plastid genomes within the confines of a single cell in the wake of secondary endosymbiosis (28), irrespective of whether or not the secondary nucleus or plastid has persisted as a separate compartment, is indicated by doubled branches. Diversity of pigmentation among photosynthetic eukaryote lineages is symbolized by different coloured branches.

More good theories for eukaryotic origins than good data
Eukaryotic cell organization is more complex than prokaryotic, boasting, inter alia, a nucleus with its contiguous endoplasmic reticulum, Golgi, flagella with a ‘9+2’ pattern of microtubule arrangement, and organelles surrounded by double membranes. There are no obvious precursor structures known among prokaryotes from which such attributes could be derived, and no intermediate cell types known that would guide a gradual evolutionary inference between the prokaryotic and eukaryotic state. Accordingly, thoughts on the topic are diverse, and new suggestions appear faster than old ones can be tested.

Biologists have traditionally derived the complex eukaryotic state from the simpler prokaryotic one. In recent years, even that has been
called into question, as some phylogenies have suggested that
prokaryotes might be derived from eukaryotes55. However, the
ubiquity of mitochondrial homologues represents a strong argument
that clearly polarizes the prokaryote-to-eukaryote transition: because
the common ancestor of contemporary eukaryotes contained a
mitochondrial endosymbiont that originated from within the
proteobacterial lineage, we can confidently infer that prokaryotes
arose and diversified before contemporary eukaryotes—the only
ones whose origin requires explanation—did. This view is consistent
with microfossil and biogeochemical evidence56.
Current ideas on the origin of eukaryotes fall into two general
classes: those that derive a nucleus-bearing but amitochondriate cell
first, followed by the origin of mitochondria in a eukaryotic host57–61
(Fig. 4a–d), and those that derive the origin of mitochondria in a
prokaryotic host, followed by the origin of eukaryotic-specific
features62–64 (Fig. 4e–g). Models that derive a nucleated but amitochondriate cell as an intermediate (Fig. 4a–d) have suffered a
substantial blow with the demise of Archezoa. Models that do not
entail amitochondriate intermediates have in common that the host
assumed to have acquired the mitochondrion was an archaebacterium, not a eukaryote; hence, the steep organizational grade between
prokaryotes and eukaryotes follows in the wake of radical chimaerism involving mitochondrial origins (Fig. 4e–g). A criticism facing all ‘archaebacterial host’ models is that phagotrophy (the ability to engulf bacteria as food particles) was once seen as an absolute prerequisite for mitochondrial origins60. This argument has lost some of its strength with the discovery of symbioses where one prokaryote lives inside another, non-phagocytotic prokaryote65.

Figure 4 | Models for eukaryote origins that are, in principle, testable with genome data. a–d, Models that propose the origin of a nucleus-bearing but amitochondriate cell first, followed by the acquisition of mitochondria in a eukaryotic host. e–g, Models that propose the origin of mitochondria in a prokaryotic host, followed by the acquisition of eukaryotic-specific features. Panels a–g are redrawn from refs 57 (a), 58 (b), 59 (c), 60 and 61 (d), 62 (e), 63 (f) and 64 (g). The relevant microbial players in each model are labelled. Archaebacterial and eubacterial lipid membranes are indicated in red and blue, respectively.

The elusive informational ancestor
With the exception of the neomuran hypothesis, which views both eukaryotes and archaebacteria as descendants of Gram-positive eubacteria60,61 (Fig. 4d), most current theories for eukaryotic origins (Fig. 4) posit the involvement of an archaebacterium in that process. The archaebacterial link to eukaryote origins was first inferred from shared immunological and biochemical similarities of their DNA-dependent RNA polymerases66. Tree-based studies of entire genomes67,68 extended this observation: most eukaryotic genes for replication, transcription and translation (informational genes) are related to archaebacterial homologues, while those encoding biosynthetic and metabolism functions (operational genes) are usually related to eubacterial homologues8,67,68.

The rooted SSU rRNA tree1 depicts eukaryotes and archaebacteria as sister groups, as in the neomuran (Fig. 4d) hypothesis60,61. By
contrast, the eocyte (Fig. 4c) hypothesis69,70 proposes that eukaryotic
informational genes originate from a specific lineage of archaebacteria called the eocytes, a group synonymous with the Crenarchaeota1. In the eocyte tree, the eukaryotic genetic machinery is
descended from within the archaebacteria. Although the rooted
rRNA tree is vastly more visible to non-specialists, published data
are equivocal: for every analysis of a eukaryotic informational gene
that recovers the neomuran topology, a different analysis of the same
molecule(s) has recovered the eocyte tree70–74, with the latter being
favoured by more sophisticated phylogenetic analyses69,73,74 and by a
shared amino-acid insertion in eocyte and eukaryotic elongation
factor 1-α70.
More recently, genome trees based on shared gene content have
been reported. These methods are still new, and—just like gene
trees—give different answers from the same data, recovering for
informational genes either eukaryote–archaebacterial sisterhood75,
the eocyte tree76 or a euryarchaeote ancestry77. The dichotomy of
archaebacteria into euryarchaeotes and eocytes/crenarchaeotes1
remains unchallenged. The issue, so far unresolved, is the relationship
of eukaryotic informational genes to archaebacterial homologues:
inheritance from a common progenitor (as in the neomuran
hypothesis) or a direct descendant; and if by direct descent, from
eocytes/crenarchaeotes like Sulfolobus76, or euryarchaeotes such as
Thermoplasma64,78, Pyrococcus77 or methanogens58,62. The problems
associated with the phylogenetic relationships discussed above
are exacerbated at such deep levels, and there is currently
neither consensus on this issue nor unambiguous evidence that
would clarify it.
The vexing operational majority
Of those eukaryotic genes that have detectable prokaryotic homologues, the majority67, perhaps as much as 75%8, are eubacterial and
correspond to the operational class. Here arises an interesting point.
Although individual analyses of informational genes arrive at
fundamentally different interpretations76,77, no one has yet suggested
that more than one archaebacterium participated in eukaryote
origins. The situation is quite different with operational genes,
where differing phylogenies for individual genes are freely interpreted as evidence for the participation of more than one eubacterial
partner. The contribution of gene transfers from the ancestral
mitochondrion to nuclear chromosomes has been estimated as
anywhere from 136–157 (ref. 77) to ~630 genes79, depending on
the method of analysis. An issue that still requires clarification
concerns the origin of thousands of eukaryotic operational genes
that are clearly eubacterial, but not specifically α-proteobacterial, in
origin8 (disregarding here the cyanobacterial genes in plants80).
There are currently four main theories that attempt to account for
those genes. (1) In the neomuran hypothesis (Fig. 4d), they are
explained through a direct inheritance from the Gram-positive
ancestor60,61; however, few eukaryote genes branch with Gram-positive homologues. (2) In hypotheses entailing more than one
eubacterial partner at eukaryote origins (Fig. 4a–c), they are
explained as descending from the non-mitochondrial eubacterium;
however, these genes branch all over the eubacterial tree, not with any
particular lineage. (3) In models favouring widespread LGT from
prokaryotes to eukaryotes, they are explained as separate acquisitions
from individual donors81; although some LGT clearly has occurred82,
the jury is still out on its extent because of a lack of detailed large-scale analyses of individual genes using reliable methods. (4) In
single-eubacterium models (Fig. 4e–g), they are either not addressed,
or explained as acquisitions from the mitochondrial symbiont, with a
twofold corollary8 of LGT among free-living prokaryotes since the
origin of mitochondria, and phylogenetic artefact.
LGT among prokaryotes83 figures into the origin of eukaryotic
operational genes in a fundamental manner that is often overlooked.
Most claims of outright LGT to ancestral eukaryotes (that is, from
donors distinct from the mitochondrion) implicitly assume a static
chromosome model in which prokaryotes do not exchange genes
among themselves; finding a eukaryotic gene that branches with a
group other than α-proteobacteria is taken as evidence for an origin
from that group (the vagaries of deep branches notwithstanding).
But if we embrace a fluid chromosome model for prokaryotes, as
some interpretations of the data suggest we should84, then the
expected phylogeny for a gene acquired from the mitochondrion
would be common ancestry for all eukaryotes, but not necessarily
tracing to α-proteobacteria, because the ancestor of mitochondria
possessed an as yet unknown collection of genes.
The timing and ecological context of eukaryote origins
Diversified unicellular microfossils of uncertain phylogenetic affinity
(acritarchs), but widely accepted as eukaryotes, appear in strata of
~1.45 billion years (Gyr) of age85, providing a minimum age for the group. Bangiomorpha, a fossilized multicellular organism virtually indistinguishable in morphology from modern bangiophyte red algae, has been found in strata of ~1.2 Gyr of age86, placing a lower bound on the age of the plant kingdom. A wide range of molecular clock estimates of eukaryote age have been reported, but these are still uncertain, being contingent both on the use of younger calibration points and on the phylogenetic model and assumed tree87. At present, a minimum age of eukaryotes at ~1.45 Gyr and a minimum age of the plant kingdom at ~1.2 Gyr seem to be criteria that the molecular
clock must meet.
The classical view of early eukaryote evolution posits two main
ecological stages: (1) the early emergence and diversification of
anaerobic, amitochondriate lineages, followed by (2) the acquisition
of an oxygen-respiring mitochondrial ancestor in one lineage thereof
and the subsequent diversification of aerobic eukaryotic lineages78.
Concordant with that view, mitochondrial origins have traditionally
been causally linked to the global rise in atmospheric oxygen levels at
~2 Gyr ago and an assumed ‘environmental disaster’ for cells lacking
the mitochondrial endosymbiont63,88, providing a selective force
(oxygen detoxification) for the acquisition of the mitochondrion63,88.
Two observations challenge this model.
First, it is now clear that the contemporary anaerobic eukaryotes
did not branch off before the origin of mitochondria. Second, new
isotope studies indicate that anaerobic environments persisted
locally and globally over the past 2 Gyr. That oxygen first appeared
in the atmosphere at ~2 Gyr ago is still generally accepted, but it is
now thought that, up until about 600 Myr ago, the oceans existed in
an intermediate oxidation state, with oxygenated surface water
(where photosynthesis was occurring), and sulphide-rich (sulphidic)
and oxygen-lacking (anoxic) subsurface water89,90. Hence, the ‘oxygen event’ in the atmosphere should be logically decoupled from
anoxic marine environments, where anaerobic eukaryotes living on
the margins of an oxic world could have flourished, as they still do
today27.
Outlook
In the past, phylogenetic trees have produced a particular view of
early eukaryote history that was appealing, but turned out to be
wrong in salient aspects. Simply testing whether a model used to
make a tree actually fits the data40 would do much to restore
confidence in the merits of deep phylogenetic analyses. The fact
that monophyly of plants can be recovered using molecular
sequences91, an event that should predate 1.2 Gyr, suggests that
ancient signal can be extracted, but how far back we might expect to
be able to go is uncertain. The persistence of mitochondrially derived
organelles in all eukaryotes, and plastids in some lineages, provides
phylogeny-independent evidence for the occurrence of those symbiotic events. But independent evidence for the participation of other
prokaryotic endosymbionts is lacking. Analysis of mitochondria in
their various guises has revealed that their unifying trait is neither
respiration nor ATP synthesis; the common essential function, if
any, for contemporary eukaryotes remains to be pinpointed by
comparative study. It may still be that a eukaryote is lurking out there
that never possessed a mitochondrion—a bona fide archezoan—in
which case prokaryote-host models (Fig. 4e–g) for eukaryogenesis
can be abandoned. However, morphological studies and environmental sequencing efforts performed so far from the best candidate
habitats to harbour such relics—anaerobic marine sediments—have
not uncovered new, unknown and more-deeply branching lineages;
rather, they have uncovered a greater diversity of lineages with
affinities to known mitochondriate groups28,61. The available phylogenetic findings from genomes are not fully consistent with any
current hypothesis for eukaryote origins, the underlying reasons
for which—biological, methodological or both—are as yet
unclear. Genomes must surely bear some testimony to eukaryotic
origins, but new approaches and more rigorous attention to the
details of phylogenetic inference will be required to decipher the
message.
1. Woese, C. R., Kandler, O. & Wheelis, M. L. Towards a natural system of organisms: Proposal for the domains Archaea, Bacteria, and Eucarya. Proc. Natl Acad. Sci. USA 87, 4576–4579 (1990).
2. Leipe, D. D., Gunderson, J. H., Nerad, T. A. & Sogin, M. L. Small subunit ribosomal RNA of Hexamita inflata and the quest for the first branch in the eukaryotic tree. Mol. Biochem. Parasitol. 59, 41–48 (1993).
3. Hashimoto, T., Nakamura, Y., Kamaishi, T. & Hasegawa, M. Early evolution of eukaryotes inferred from the amino acid sequences of elongation factors 1α and 2. Arch. Protistenkd. 148, 287–295 (1997).
4. Cavalier-Smith, T. in Endocytobiology II (eds Schwemmler, W. & Schenk, H. E. A.) 1027–1034 (De Gruyter, Berlin, 1983).
5. Gray, M. W., Lang, B. F. & Burger, G. Mitochondria of protists. Annu. Rev. Genet. 38, 477–524 (2004).
6. Cavalier-Smith, T. in Endocytobiology II (eds Schwemmler, W. & Schenk, H. E. A.) 265–279 (De Gruyter, Berlin, 1983).
7. Pfanner, N. & Geissler, A. Versatility of the mitochondrial protein import machinery. Nature Rev. Mol. Cell Biol. 2, 339–349 (2001).
8. Esser, C. et al. A genome phylogeny for mitochondria among α-proteobacteria and a predominantly eubacterial ancestry of yeast nuclear genes. Mol. Biol. Evol. 21, 1643–1660 (2004).
9. Timmis, J. N., Ayliffe, M. A., Huang, C. Y. & Martin, W. Endosymbiotic gene transfer: Organelle genomes forge eukaryotic chromosomes. Nature Rev. Genet. 5, 123–135 (2004).
10. Clark, C. G. & Roger, A. J. Direct evidence for secondary loss of mitochondria in Entamoeba histolytica. Proc. Natl Acad. Sci. USA 92, 6518–6521 (1995).
11. Roger, A. J. Reconstructing early events in eukaryotic evolution. Am. Nat. 154, S146–S163 (1999).
12. Abrahamsen, M. S. et al. Complete genome sequence of the apicomplexan, Cryptosporidium parvum. Science 304, 441–445 (2004).
13. Boxma, B. et al. An anaerobic mitochondrion that produces hydrogen. Nature 434, 74–79 (2005).
14. Müller, M. in Molecular Medical Parasitology (eds Marr, J. J., Nilsen, T. W. & Komuniecki, R. W.) 125–139 (Academic, Amsterdam, 2003).
15. Katinka, M. D. et al. Genome sequence and gene compaction of the eukaryote parasite Encephalitozoon cuniculi. Nature 414, 450–453 (2001).
16. Tielens, A. G., Rotte, C., van Hellemond, J. J. & Martin, W. Mitochondria as we don’t know them. Trends Biochem. Sci. 27, 564–572 (2002).
17. Komuniecki, R. W. & Tielens, A. G. M. in Molecular Medical Parasitology (eds Marr, J. J., Nilsen, T. W. & Komuniecki, R.) 339–358 (Academic, Amsterdam, 2003).
18. Darwin, C. The Origin of Species Reprint edn (Penguin Books, London, 1968).
19. Müller, M. The hydrogenosome. J. Gen. Microbiol. 139, 2879–2889 (1993).
20. van der Giezen, M. et al. Conserved properties of hydrogenosomal and mitochondrial ADP/ATP carriers: A common origin for both organelles. EMBO J. 21, 572–579 (2002).
21. Tjaden, J. et al. A divergent ADP/ATP carrier in the hydrogenosomes of Trichomonas gallinae argues for an independent origin of these organelles. Mol. Microbiol. 51, 1439–1446 (2004).
22. Embley, T. M. et al. Hydrogenosomes, mitochondria and early eukaryotic evolution. IUBMB Life 55, 387–395 (2003).
23. Dyall, S. D. et al. Presence of a member of the mitochondrial carrier family in hydrogenosomes: Conservation of membrane-targeting pathways between hydrogenosomes and mitochondria. Mol. Cell. Biol. 20, 2488–2497 (2000).
24. Sutak, R. et al. Mitochondrial-type assembly of FeS centers in the hydrogenosomes of the amitochondriate eukaryote Trichomonas vaginalis. Proc. Natl Acad. Sci. USA 101, 10368–10373 (2004).
25. Hrdy, I. et al. Trichomonas hydrogenosomes contain the NADH dehydrogenase module of mitochondrial complex I. Nature 432, 618–622 (2004).
26. Schnarrenberger, C. & Martin, W. Evolution of the enzymes of the citric acid cycle and the glyoxylate cycle of higher plants. A case study of endosymbiotic gene transfer. Eur. J. Biochem. 269, 868–883 (2002).
27. Fenchel, T. & Finlay, B. J. Ecology and Evolution in Anoxic Worlds (eds May, R. M.
& Harvey, P. H.) (Oxford Univ. Press, Oxford, 1995).
28. Roger, A. J. & Silberman, J. D. Cell evolution: Mitochondria in hiding. Nature
418, 827–-829 (2002).
29. Whatley, J. M., John, P. & Whatley, F. R. From extracellular to intracellular: The
establishment of mitochondria and chloroplasts. Proc. R. Soc. Lond. B 204,
165–-187 (1979).
30. Dyall, S. D., Brown, M. T. & Johnson, P. J. Ancient invasions: From
endosymbionts to organelles. Science 304, 253–-257 (2004).
31. Dyall, S. D. et al. Non-mitochondrial complex I proteins in a hydrogenosomal
oxidoreductase complex. Nature 431, 1103–-1107 (2004).
32. Tovar, J., Fischer, A. & Clark, C. G. The mitosome, a novel organelle related to
mitochondria in the amitochondrial parasite Entamoeba histolytica.
Mol. Microbiol. 32, 1013–-1021 (1999).
33. Tovar, J. et al. Mitochondrial remnant organelles of Giardia function in iron–sulphur protein maturation. Nature 426, 172–-176 (2003).
34. Williams, B. A., Hirt, R. P., Lucocq, J. M. & Embley, T. M. A mitochondrial
remnant in the microsporidian Trachipleistophora hominis. Nature 418, 865–-869
(2002).
35. Regoes, A. et al. Protein import, replication and inheritance of a vestigial
mitochondrion. J. Biol. Chem. 280, 30557–-30563 (2005).
36. Chan, K. W. et al. A Novel ADP/ATP transporter in the mitosome of the
microaerophilic human parasite Entamoeba histolytica. Curr. Biol. 15, 737–-742
(2005).
37. Dolezal, P. et al. Giardia mitosomes and trichomonad hydrogenosomes share a
common mode of protein targeting. Proc. Natl Acad. Sci. USA 102,
10924–-10929 (2005).
38. Lill, R. & Muhlenhoff, U. Iron–-sulfur-protein biogenesis in eukaryotes. Trends
Biochem. Sci. 30, 133–-141 (2005).
39. Ali, V., Shigeta, Y., Tokumoto, U., Takahashi, Y. & Nozaki, T. An intestinal
parasitic protist, Entamoeba histolytica, possesses a non-redundant nitrogen
fixation-like system for iron–-sulfur cluster assembly under anaerobic
conditions. J. Biol. Chem. 279, 16863–-16874 (2004).
40. Penny, D., McComish, B. J., Charleston, M. A. & Hendy, M. D. Mathematical
elegance with biochemical realism: The covarion model of molecular evolution.
J. Mol. Evol. 53, 711–-723 (2001).
41. Ho, S. Y. W. & Jermiin, L. S. Tracing the decay of the historical signal in
biological sequence data. Syst. Biol. 53, 623–-637 (2004).
42. Felsenstein, J. Cases in which parsimony or incompatibility methods will be
positively misleading. Syst. Zool. 25, 401–-410 (1978).
43. Stiller, J. W. & Hall, B. D. Long-branch attraction and the rDNA model of early
eukaryotic evolution. Mol. Biol. Evol. 16, 1270–-1279 (1999).
44. Philippe, H. et al. Early-branching or fast-evolving eukaryotes? An answer
based on slowly evolving positions. Proc. R. Soc. Lond. B 267, 1213–-1221
(2000).
45. Hirt, R. P. et al. Microsporidia are related to fungi: Evidence from the largest
subunit of RNA polymerase II and other proteins. Proc. Natl Acad. Sci. USA 96,
580–-585 (1999).
46. Keeling, P. J., Luker, M. A. & Palmer, J. D. Evidence from beta-tubulin
phylogeny that microsporidia evolved from within the fungi. Mol. Biol. Evol. 17,
23–-31 (2000).
47. Arisue, N., Hasegawa, M. & Hashimoto, T. Root of the Eukaryota tree as
inferred from combined maximum likelihood analyses of multiple molecular
sequence data. Mol. Biol. Evol. 22, 409–-420 (2005).
48. Penny, D. Criteria for optimising phylogenetic trees and the problem of
determining the root of a tree. J. Mol. Evol. 8, 95–-116 (1976).
49. Stechmann, A. & Cavalier-Smith, T. The root of the eukaryote tree pinpointed.
Curr. Biol. 13, R665–-R666 (2003).
50. Steenkamp, E. T., Wright, J. & Baldauf, S. L. The protistan origins of animals
and fungi. Mol. Biol. Evol. 23, 93–-106 (2006); published online 8 September
2005 (doi:10.1093/molbev/msj011).
51. Richards, T. A. & Cavalier-Smith, T. Myosin domain evolution and the primary
divergence of eukaryotes. Nature 436, 1113–-1118 (2005).
52. Adl, S. M. et al. The new higher level classification of eukaryotes with emphasis
on the taxonomy of protists. J. Eukaryot. Microbiol. 52, 399–-451 (2005).
53. Graybeal, A. Is it better to add taxa or characters to a difficult phylogenetic
problem? Syst. Biol. 47, 9–-17 (1998).
54. Philippe, H. & Adoutte, A. in Evolutionary Relationships Among Protozoa
(eds Coombs, G. H., Vickerman, K., Sleigh, M. A. & Warren, A.) 25–-56 (Kluwer
Academic, Dordrecht, 1998).
55. Forterre, P. & Philippe, H. Where is the root of the universal tree of life?
Bioessays 21, 871–-879 (1999).
56. Knoll, A. H. Life on a Young Planet: The First Three Billion Years of Evolution on
Earth (Princeton Univ. Press, 2003).
57. Margulis, L., Dolan, M. F. & Whiteside, J. H. “Imperfections and oddities” in the
origin of the nucleus. Paleobiology 31, 175–-191 (2005).
58. Moreira, D. & Lopez Garcia, P. Symbiosis between methanogenic archaea and
d-proteobacteria as the origin of eukaryotes: The syntrophic hypothesis. J. Mol.
Evol. 47, 517–-530 (1998).
59. Lake, J., Moore, J., Simonson, A. & Rivera, M. in Microbial Phylogeny and
Evolution Concepts and Controversies (ed. Sapp, J.) 184–-206 (Oxford Univ.
Press, Oxford, 2005).
© 2006 Nature Publishing Group
629
REVIEWS
NATURE|Vol 440|30 March 2006
Acknowledgements We thank M. Müller, J. Archibald, R. Hirt, K. Henze and
L. Tielens, and members of our laboratories, for discussions.
Author Information The authors declare no competing financial interests. Correspondence should be addressed to T.M.E. ([email protected]) or W.M. ([email protected]).