RETI NEURALI CELLULARI E CIRCUITI NONLINEARI
NUOVI RISULTATI ED APPLICAZIONI
(CELLULAR NEURAL NETWORKS AND NONLINEAR
CIRCUITS, NEW RESULTS AND APPLICATIONS)
Gabriele Manganaro
UNIVERSITÀ DEGLI STUDI DI CATANIA
FACOLTÀ DI INGEGNERIA
DIPARTIMENTO ELETTRICO, ELETTRONICO E SISTEMISTICO

Ph.D. Thesis in Electrical Engineering (10th Cycle):
“Reti Neurali Cellulari e Circuiti Nonlineari, Nuovi Risultati ed Applicazioni
(Cellular Neural Networks and Nonlinear Circuits, New Results and Applications)”

Author: Gabriele Manganaro
February 1998

Advisor: Prof. Luigi Fortuna
External Advisor: Prof. Dr. Jose Pineda de Gyvez
Coordinator: Prof. Alfio Consoli
To my parents.
Table of Contents

Abstract . . . ix
Acknowledgements . . . xi

Part I: Circuit Theory and Applications . . . 1

1 CNN's basics . . . 3
  1.1 The CNN of Chua and Yang . . . 4
    1.1.1 The Cell . . . 4
    1.1.2 The CNN Array . . . 5
    1.1.3 More about Templates . . . 7
    1.1.4 Multilayer CNN's . . . 9
    1.1.5 The CNN as an analog processor . . . 10
  1.2 Main Generalizations . . . 11
    1.2.1 Nonlinear CNN's and delay CNN's . . . 12
    1.2.2 Non-uniform processor CNN's and multiple neighborhood size CNN's . . . 13
    1.2.3 Discrete-time CNN's . . . 14
    1.2.4 The CNN Universal Machine . . . 16
  1.3 A Formal Definition . . . 17
    1.3.1 The cells and their coupling . . . 18
    1.3.2 Boundary conditions . . . 20
  1.4 Conclusions . . . 21

2 Some applications of CNN's . . . 22
  2.1 CNN-based Image Pre-Processing for the Automatic Classification of Fruits . . . 23
    2.1.1 The pre-filtering . . . 25
  2.2 Processing of NMR Spectra . . . 26
    2.2.1 2D NMR Spectra . . . 28
    2.2.2 NMR Spectra Processing Via CNN . . . 29
    2.2.3 Description of the dual algorithm . . . 31
  2.3 Air quality modeling . . . 34
    2.3.1 Models . . . 35
    2.3.2 CNN's for air quality modeling . . . 35
    2.3.3 Examples . . . 39
  2.4 Conclusions . . . 41

3 The CNN as nonlinear dynamics generator . . . 43
  3.1 The State Controlled CNN Model . . . 45
    3.1.1 Discrete components realization of SC-CNN cells . . . 46
  3.2 Chua's Oscillator dynamics generated by the SC-CNN . . . 48
    3.2.1 Main Result . . . 48
    3.2.2 Experimental Results . . . 50
  3.3 Chaos of a Colpitts Oscillator . . . 53
  3.4 Hysteresis Hyperchaotic Oscillator . . . 57
  3.5 n-Double Scroll Attractors . . . 59
    3.5.1 A new realization for the n-Double Scroll family . . . 60
    3.5.2 n-Double Scrolls in SC-CNN's . . . 67
  3.6 Nonlinear dynamics Potpourri . . . 68
    3.6.1 A Non-autonomous Second Order Chaotic Circuit . . . 70
    3.6.2 A Circuit with a Nonlinear Reactive Element . . . 70
    3.6.3 Canards and Chaos . . . 73
    3.6.4 Multimode Chaos in Coupled Oscillators . . . 75
    3.6.5 Coupled Circuits . . . 78
  3.7 General Case and Conclusions . . . 79
    3.7.1 Theoretical Implications . . . 81
    3.7.2 Practical Implications . . . 81

4 Synchronization . . . 83
  4.1 Background . . . 83
    4.1.1 Pecora-Carroll approach . . . 84
    4.1.2 Inverse System approach . . . 85
  4.2 Experimental Signal Transmission Using Synchronized SC-CNN . . . 86
    4.2.1 Circuit Description . . . 86
    4.2.2 Synchronization: Experimental and simulation results . . . 88
    4.2.3 Non-ideal channel effects . . . 91
    4.2.4 Effects of additive noise and disturbances onto the channel . . . 99
  4.3 Chaotic System Identification . . . 103
    4.3.1 Description of the algorithm . . . 103
    4.3.2 Identification of Chua's oscillator . . . 105
    4.3.3 Examples . . . 106
  4.4 Conclusions . . . 112

5 Spatio-temporal Phenomena . . . 114
  5.1 Analysis of the Cell . . . 114
    5.1.1 Fixed Points . . . 116
    5.1.2 Limit Cycle and bifurcations . . . 118
    5.1.3 Slow-fast dynamic . . . 120
    5.1.4 Some simulation results . . . 121
  5.2 The Two-Layer CNN . . . 124
  5.3 Traveling Wavefronts . . . 126
    5.3.1 Autowaves . . . 126
    5.3.2 Labyrinths . . . 130
  5.4 Pattern Formation . . . 132
    5.4.1 Condition for the existence of Turing Patterns in arrays of coupled circuits . . . 132
    5.4.2 Turing Patterns in the two-layer CNN . . . 134
    5.4.3 Simulation Results . . . 136
  5.5 Sensitivity to Parametric Uncertainties and Noise . . . 139
    5.5.1 Spiral wave: Parametric Uncertainty . . . 139
    5.5.2 Spiral Waves: Presence of Noise on the Initial Conditions . . . 140
    5.5.3 Patterns: Parametric uncertainties . . . 141
  5.6 Conclusions . . . 143

Part II: Implementation and Design . . . 145

6 A Four Quadrant S²I Switched-current Multiplier . . . 146
  6.1 Detailed analysis of the S²I memory cell . . . 147
  6.2 The Multiplier's Architecture . . . 153
  6.3 Analysis and Design of the S²I multiplier . . . 156
    6.3.1 Circuit Analysis of the multiplier . . . 158
    6.3.2 Circuit Design . . . 161
  6.4 Experimental Performance Evaluation . . . 164
  6.5 Conclusions . . . 168

7 A 1-D Discrete-Time Cellular Neural Network Chip for Audio Signal Processing . . . 170
  7.1 System Architecture . . . 170
  7.2 The Tapped Delay Line . . . 172
  7.3 CNN cells . . . 174
    7.3.1 Multiplier and Ancillary circuitry . . . 174
  7.4 Cell Behavior and Hardware Multiplexing . . . 178
  7.5 Results and example . . . 182
  7.6 Conclusions . . . 184

A General Mathematical Background . . . 187
  A.1 Topology . . . 187
  A.2 Operations and Functions . . . 188
  A.3 Matrices . . . 189
  A.4 Dimension . . . 189

B Dynamical Systems . . . 190
  B.1 Basic Definitions . . . 191
  B.2 Steady-state behaviour . . . 192
    B.2.1 Classification of asymptotic behaviors . . . 193
  B.3 Stability . . . 195
    B.3.1 Stability of equilibrium points . . . 195
    B.3.2 Stability of limit cycles . . . 199
    B.3.3 Lyapunov exponents . . . 202
  B.4 Topological Equivalence and Conjugacy, Structural Stability and Bifurcations . . . 203
  B.5 Šilnikov Method . . . 204
  B.6 Particular Results for Two-Dimensional Flows . . . 205

Bibliography . . . 207
Abstract
This Ph.D. thesis discusses some new theoretical results, applications and implementation issues of Cellular Neural Networks (CNN's). Many experimental results are provided to complement the more theoretical aspects.

The thesis is organized in two parts. In the first, after the introduction of some new applications to industrial problems, the main attention is devoted to the study of CNN's within nonlinear circuit theory. The so-called State Controlled CNN model is proposed as a primitive for the generation of a wide class of circuit dynamics. This peculiarity has one of its immediate consequences in the field of chaos-based transceivers. An experimental study of the advantages and limits of some of these techniques is presented. Moreover, a new method to identify the parameters of synchronized nonlinear circuits using Genetic Algorithms is included. A study of spatio-temporal phenomena in a traditional two-layer CNN ends the first part.

The second part deals with some of the implementation issues of CNN's. An integrated 1-D discrete-time CNN designed using the switched-current technique is presented. Programmability is achieved by the use of multipliers implementing the cells' coupling. A thorough study of the behavior and design of a new multiplier is presented. Finally, the circuit implementation of the CNN is discussed. Efficient use of the hardware is achieved through a multiplexing technique.
Acknowledgements
I would like to thank Luigi Fortuna, my supervisor, for his suggestions and guidance, especially during the first years of this research, carried out at the University of Catania (Italy). I am also much indebted to Jose Pineda de Gyvez for his advice and support throughout the second part of my Ph.D. course, spent at Texas A&M University (College Station, Texas, U.S.A.).

A word of thanks also goes to Edgar Sánchez-Sinencio (Texas A&M Univ.), Paolo Arena and Salvo Baglio (University of Catania), Riccardo Caponetto (SGS-Thomson Microelectronics) and to the many friends who helped me in different ways: Fikret Dülger, Sez Günay, Giovanna Oriti, Benoit Provost, Jan Michael Stevenson, ...
Of course, I am grateful to my parents for their patience and love. Without them this work
would never have come into existence (literally).
Finally, I wish to thank the following: Aashit, Alex, Andrew, Antonio, Anna, Carmelo, Carolyn,
Eric, Francesco, Gianluca, Giuseppe, Laura, Louyi, Pankaj, Santi, Uwe... (for all the good and bad
times we had together); and my brother Massimo (because he always put up with me).
Dallas, Texas
February 25, 1998
Gabriele Manganaro
Part I: Circuit Theory and Applications
Cellular Neural Networks (CNN’s) constitute a family of nonlinear circuits. This first part of the
dissertation discusses new theoretical results regarding CNN’s as well as some of their applications.
The fundamental concepts and definitions of this fascinating area of nonlinear circuit theory are
reviewed in Chapter 1.
Chapter 2 discusses some new applications of CNN's to industrial problems. In particular, two different cases in which a CNN can be used to perform pre-filtering of two-dimensional arrays of data are considered. Finally, some CNN-based models for environmental modeling and simulation are reported.
In Chapter 3 it is shown that a so-called State-Controlled CNN is able to reproduce the dynamics of a wide class of nonlinear circuits, including several well-known chaotic and hyperchaotic ones. The idea that such a circuit can be considered a building block for nonlinear dynamics generation is discussed.
The ability to generate many different dynamics from a single architecture finds immediate application in the field of chaos-based transceivers. Some of the advantages and limits of synchronization of chaotic circuits are examined through an experimental study reported in Chapter 4. Moreover, a new method for the identification of the circuit parameters of nonlinear circuits, based on the principles of synchronization, is presented.
Self-organizing patterns, autowaves and other spatio-temporal phenomena have recently been observed in CNN arrays composed of coupled Chua's oscillators and degenerate Chua's oscillators.
Chapter 5 shows how many of these phenomena can equally be obtained in a traditional and simpler two-layer Chua and Yang CNN model. A thorough analytic study of the fundamental second-order circuit used as the cell sets up the basis to generate and understand the phenomena considered.
Chapter 1
CNN’s basics
The concept of the Cellular Neural Network (CNN, also called Cellular Nonlinear Network) was introduced in 1988 with the seminal papers by Leon O. Chua and Lin Yang [1, 2, 3]. Chua proposed the idea of using an array of essentially simple, non-linearly coupled dynamic circuits to process large amounts of information in real time [4]. The concept was inspired by the architectures of Cellular Automata [5, 6] and Neural Networks [7, 8]. Chua showed that his new architecture was able to efficiently perform time-consuming tasks, such as image processing and the solution of partial differential equations, and that it was suitable for VLSI implementation.

The new architecture rapidly attracted the attention of the scientific community, especially from the circuit theory and analog VLSI areas. Indeed, a vast number of applications was proposed within just a couple of years, together with the first IC implementations [9].

This tremendous interest is attested by the publication of several special issues of international scientific journals (Int. J. Circuit Theory and Applicat. [10, 11], IEEE Transactions on Circuits and Systems [12, 13, 14]) as well as by the institution of a dedicated biennial international meeting (the IEEE Int. Workshop on Cellular Neural Networks and Their Applications [15, 16, 17, 18, 19]), currently at its fifth edition.

Over the years the original CNN model proposed by Chua and Yang has been generalized, so that successive formal definitions [2, 20, 21] have been given to include the many extensions under a common general framework.
Figure 1.1: The basic cell
In this chapter the basic concepts and definitions are reviewed. The model of Chua and Yang is discussed first. An illustrative description of the main generalizations of this model is then presented. Finally, a more general and formal definition of the CNN is given.
1.1 The CNN of Chua and Yang
The original model introduced by Chua and Yang [1, 2, 3] in 1988 is discussed here. In spite of the various generalizations introduced since, it remains the most widely used, thanks to a good compromise between simplicity and versatility, and to being the easiest to implement.

In this section, formalism will be traded for clarity. Once an intuitive understanding of the concepts involved has been developed, it will be possible to discuss a more general and rigorous theory.
1.1.1 The Cell
The fundamental building block of the CNN is the so-called cell, a lumped circuit containing linear and nonlinear elements, shown in Fig. 1.1. The cell C(i, j) has one input u_ij, one output y_ij and one state variable x_ij (represented in Fig. 1.1 by the node voltages v_uij, v_yij and v_xij respectively). The output y_ij is a memoryless nonlinear function of the state x_ij:

    y_ij = f(x_ij) = \frac{1}{2} \left( |x_ij + 1| - |x_ij - 1| \right)        (1.1.1)

as depicted in Fig. 1.2.

Figure 1.2: The output nonlinearity

The elements of the cell are all linear except for the voltage-controlled current source (VCCS) I_yx = (1/R_y) f(v_xij). The core of the cell consists of a capacitor C connected in parallel with a resistor R_x, an independent current source I called the bias, and a group of VCCS's I_xu(i, j; k, l), ..., I_xy(i, j; k, l) that will be discussed later. The cell's dynamics can be formally described by a single state equation (essentially the node equation for the grounded capacitor C). It is assumed that |x_ij(0)| ≤ 1 (initial condition constraint) and that the input, obtained from the independent voltage source E_ij, is constant with |u_ij| ≤ 1 (input constraint).
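Numerically, the nonlinearity (1.1.1) is just a unit saturation of the state to the interval [−1, 1]; a minimal Python sketch (illustrative, not part of the original circuit description):

```python
def f(x):
    """Piecewise-linear CNN output nonlinearity of Eq. (1.1.1):
    the identity for |x| <= 1, saturating at +/-1 outside."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))
```

In the linear region the cell output simply tracks the state, while any state that leaves the unit interval is clamped, which is what makes binary (black/white) outputs possible.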
1.1.2 The CNN Array

A Cellular Neural Network is an array of cells. Each cell is coupled only to its neighboring cells. Adjacent cells can interact directly with each other, while cells farther apart can influence one another only indirectly, by propagation. For instance, a two-dimensional 4 × 4 CNN is depicted in Fig. 1.3. The squares represent the cells while the links represent the direct coupling. It is possible to define CNN's of any dimension, but in the following the attention will be focused on two-dimensional arrays only.

Let us consider a two-dimensional array of M × N identical cells arranged in M rows and N columns. This is an M × N Cellular Neural Network. The generic cell placed on the ith row and jth column will be denoted by C(i, j). Moreover, it is assumed that all the cells in the CNN have the same parameters (the CNN is space invariant). Let us define the neighborhood of C(i, j).

Definition 1.1.1 (r-neighborhood set). The r-neighborhood of C(i, j) is

    N_r(i, j) = \{ C(k, l) \mid \max\{|k - i|, |l - j|\} \le r,\; 1 \le k \le M,\; 1 \le l \le N \}        (1.1.2)

where r ∈ ℕ − {0} is the radius.
Figure 1.3: A 4 × 4 CNN.
Figure 1.4: The Nr (i, j) sets for r = 1, 2, 3 respectively.
Examples of neighborhoods of the same cell (highlighted in the center) for r = 1, 2, 3 are shown in Fig. 1.4. It is also common practice to speak of a '3 × 3 neighborhood', a '5 × 5 neighborhood', a '7 × 7 neighborhood' and so forth.
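Definition 1.1.1 translates directly into code. The sketch below enumerates N_r(i, j) using the same 1-based row/column indexing as the text; it is an illustration, not a construct from the original:

```python
def neighborhood(i, j, r, M, N):
    """Cells in the r-neighborhood N_r(i, j) of Definition 1.1.1.
    Rows are indexed 1..M and columns 1..N, as in the text; the
    min/max clipping implements the 1 <= k <= M, 1 <= l <= N bounds."""
    return [(k, l)
            for k in range(max(1, i - r), min(M, i + r) + 1)
            for l in range(max(1, j - r), min(N, j + r) + 1)]
```

An inner cell with r = 1 has (2r + 1)² = 9 cells in its neighborhood (including itself), while a corner cell of the array retains only 4 of them.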
The coupling between C(i, j) and the cells belonging to its neighborhood N_r(i, j) is obtained by means of the linear VCCS's I_xu(i, j; k, l), ..., I_xy(i, j; k, l) mentioned above. In fact, the input and output of any cell C(k, l) ∈ N_r(i, j) influence the state x_ij of C(i, j) by means of two VCCS's, in C(i, j), defined by the equations:

    I_xy(i, j; k, l) = A(i, j; k, l) \cdot v_ykl        (1.1.3a)
    I_xu(i, j; k, l) = B(i, j; k, l) \cdot v_ukl        (1.1.3b)

where the coupling coefficients A(i, j; k, l), B(i, j; k, l) ∈ ℝ are the feedback template coefficient and the control template coefficient respectively.
It is important to emphasize that the coupling among the cells in the CNN is only local. This restriction is extremely important for the feasibility of implementation. As previously pointed out, however, cells that do not belong to the same neighborhood can still affect each other indirectly because of the propagation effects of the continuous-time dynamics of the network.

At this point, performing a nodal analysis¹, it is possible to write the circuit equations of the CNN:

    C \frac{dv_xij}{dt} = -\frac{1}{R_x} v_xij(t) + \sum_{C(k,l) \in N_r(i,j)} A(i,j;k,l)\, v_ykl(t) + \sum_{C(k,l) \in N_r(i,j)} B(i,j;k,l)\, v_ukl + I        (1.1.4)

    1 ≤ i ≤ M; 1 ≤ j ≤ N
It must be noted that there are essentially two classes of cells: inner cells and boundary cells. An inner cell is a cell which has (2r + 1)² neighbor cells; all the other cells are boundary cells. For the latter class it is assumed that the missing cells in the neighborhood contribute zero input, state and output. Other possible boundary conditions will be discussed in the following. As expected, boundary conditions affect the behavior of the boundary cells and, by virtue of the indirect propagation, can also affect the dynamics of the whole array [22].
1.1.3 More about Templates
It is now clear that the template coefficients² completely define the behavior of the network for a given input and initial condition. It is worth remembering that all the cells in the CNN have been assumed to have equal parameters and hence equal templates (space invariance). The term cloning templates is used to emphasize this property of invariance. This means that a set of 2 · (2r + 1)² + 1 real numbers A(i, j; k, l), B(i, j; k, l) and I completely determines the behavior of an arbitrarily large two-dimensional CNN (these numbers define the dynamic rules of the network).

Besides, since each template set corresponds to a different behavior, any choice of the template set corresponds to a particular processing of the inputs and initial conditions, and so it can be thought of as a primitive instruction for an analog processor (the CNN). More about this will be discussed later.

¹ It is quite evident that this kind of network is well suited to nodal analysis. However, for large CNN arrays, extremely sparse matrices must be expected because of the local connectivity.
² The bias I is considered a template coefficient as well.
The templates are often expressed in compact form by means of tables or matrices. For instance, the following two square matrices are used for a CNN with r = 1:

    A = \begin{pmatrix}
        A(i,j;i-1,j-1) & A(i,j;i-1,j) & A(i,j;i-1,j+1) \\
        A(i,j;i,j-1)   & A(i,j;i,j)   & A(i,j;i,j+1)   \\
        A(i,j;i+1,j-1) & A(i,j;i+1,j) & A(i,j;i+1,j+1)
    \end{pmatrix}

    B = \begin{pmatrix}
        B(i,j;i-1,j-1) & B(i,j;i-1,j) & B(i,j;i-1,j+1) \\
        B(i,j;i,j-1)   & B(i,j;i,j)   & B(i,j;i,j+1)   \\
        B(i,j;i+1,j-1) & B(i,j;i+1,j) & B(i,j;i+1,j+1)
    \end{pmatrix}        (1.1.5)
This form enables us to rewrite the state equations of the CNN in a more compact form by means of the two-dimensional convolution operator ∗.

Definition 1.1.2 (Convolution Operator). For any cloning template T:

    T ∗ v_ij = \sum_{C(k,l) \in N_r(i,j)} T(k - i, l - j)\, v_kl        (1.1.6)

where T(m, n) denotes the entry in the mth row and nth column of the cloning template, m, n = −1, 0, 1.
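A minimal sketch of the operator of Definition 1.1.2 for r = 1, assuming zero boundary conditions for the missing neighbors of boundary cells and 0-based array indexing (both assumptions of this illustration, not requirements of the definition):

```python
def conv(T, v, i, j):
    """Template 'convolution' T * v_ij of Eq. (1.1.6) for r = 1.
    T is a 3x3 template indexed T[m + 1][n + 1] for m, n in {-1, 0, 1};
    v is an M x N array (list of lists); out-of-range neighbors
    contribute zero (zero boundary conditions)."""
    M, N = len(v), len(v[0])
    acc = 0.0
    for m in (-1, 0, 1):
        for n in (-1, 0, 1):
            k, l = i + m, j + n
            if 0 <= k < M and 0 <= l < N:
                acc += T[m + 1][n + 1] * v[k][l]
    return acc
```

With a template whose only nonzero entry is the center, T ∗ v_ij reduces to v_ij itself, which is a convenient sanity check.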
Therefore, the state equations (1.1.4) can be rewritten as follows:

    C \frac{dv_xij}{dt} = -\frac{1}{R_x} v_xij(t) + A ∗ v_yij(t) + B ∗ v_uij + I, \qquad 1 ≤ i ≤ M;\; 1 ≤ j ≤ N        (1.1.7)

C, R_x and R_y can be conveniently chosen by the designer. CR_x determines the rate of change³ of the dynamics of the circuit and is usually chosen in the range 10⁻⁸ to 10⁻⁵ s. The circuit parameters, however, can be scaled or normalized for convenience. For instance, the dynamics can simply be rescaled in time by changing the value of C only. Similar changes of scale can be applied to the currents and voltages to fit the actual design specifications. In practice, then, it is very often convenient to describe the dynamics of the array by the following normalized, dimensionless representation:

    \frac{dx_ij}{dt} = -x_ij(t) + A ∗ y_ij(t) + B ∗ u_ij + I, \qquad 1 ≤ i ≤ M;\; 1 ≤ j ≤ N        (1.1.8)

and to adopt a suitable scaling of the circuit parameters when the network is to be implemented.

It is easily understood that the templates determine the stability properties of the network. Several classes of templates have been defined and studied; they are classified according to the signs of the coefficients, the structure of the template matrices and so on [23, 24]. Here only a short mention of the so-called reciprocal templates is given.

Definition 1.1.3 (Symmetric/Reciprocal Templates). A template is symmetric or reciprocal if:

    A(i, j; k, l) = A(k, l; i, j), \qquad 1 ≤ i, k ≤ M;\; 1 ≤ j, l ≤ N        (1.1.9)

It has been proved [2] that CNN's with reciprocal templates are completely stable⁴.

Another important and useful result for the Chua and Yang model is:

Theorem 1.1.1 (Sufficient condition for binary outputs [2]). Let A(i, j; i, j) be the normalized self-feedback coefficient (i.e. R_x = 1, C = 1). Then:

    A(i, j; i, j) > 1 \;\Rightarrow\; \lim_{t \to \infty} |y_ij| = 1 \quad \forall\, i, j        (1.1.10)

³ The term time constant is not really appropriate for a nonlinear network.
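Theorem 1.1.1 can be illustrated numerically. The sketch below integrates the normalized equation (1.1.8) by forward Euler (an illustrative numerical choice, not prescribed by the text) on a small array with B = 0, I = 0 and a template whose only nonzero entry is the self-feedback coefficient A(i, j; i, j) = 2 > 1; every output then settles to ±1, with the sign of its initial state:

```python
def f(x):
    # output nonlinearity (1.1.1)
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def simulate(x0, A, I=0.0, dt=0.01, steps=2000):
    """Forward-Euler integration of the normalized CNN equation (1.1.8)
    with zero input (B = 0) and zero boundary conditions; A is a 3x3
    feedback template (r = 1). Returns the final output array."""
    M, N = len(x0), len(x0[0])
    x = [row[:] for row in x0]
    for _ in range(steps):
        y = [[f(v) for v in row] for row in x]
        xn = [[0.0] * N for _ in range(M)]
        for i in range(M):
            for j in range(N):
                acc = 0.0
                for m in (-1, 0, 1):
                    for n in (-1, 0, 1):
                        k, l = i + m, j + n
                        if 0 <= k < M and 0 <= l < N:
                            acc += A[m + 1][n + 1] * y[k][l]
                xn[i][j] = x[i][j] + dt * (-x[i][j] + acc + I)
        x = xn
    return [[f(v) for v in row] for row in x]

# Self-feedback-only template with A(i,j;i,j) = 2 > 1
A = [[0, 0, 0], [0, 2, 0], [0, 0, 0]]
x0 = [[0.3, -0.4], [0.7, -0.1]]
Y = simulate(x0, A)
```

With this template each cell evolves independently as dx/dt = −x + 2f(x), whose only stable equilibria are x = ±2, so the outputs saturate to ±1 as the theorem predicts.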
1.1.4 Multilayer CNN's

In the single-layer model considered in the previous sections, each cell contributes one state variable only. This can be generalized by increasing the system order (the number of state variables) of the cell. A Multilayer CNN (MCNN) is composed of cells having several state variables, one for each layer. Moreover, the interaction between the state variables of the same cell can be complete, while the cell-to-cell interaction remains local (restricted to neighbors). To picture a multilayer CNN, it can be thought of as composed of several single-layer arrays, stacked one on top of the other, in which full layer-to-layer interaction is possible.
A formal model is easily obtained by making use of the previous definitions. The state equations for an MCNN can then be expressed in compact vector form:

    \frac{d\mathbf{x}_ij}{dt} = -\mathbf{x}_ij(t) + \mathbf{A} ∗ \mathbf{y}_ij(t) + \mathbf{B} ∗ \mathbf{u}_ij + \mathbf{I}, \qquad 1 ≤ i ≤ M;\; 1 ≤ j ≤ N        (1.1.11a)

where:

    \mathbf{A} = \begin{pmatrix}
        A_{11} & 0      & \cdots & 0      \\
        \vdots & \ddots &        & \vdots \\
        A_{m1} & \cdots & \cdots & A_{mm}
    \end{pmatrix}, \qquad
    \mathbf{B} = \begin{pmatrix}
        B_{11} & 0      & \cdots & 0      \\
        \vdots & \ddots &        & \vdots \\
        B_{m1} & \cdots & \cdots & B_{mm}
    \end{pmatrix},

    \mathbf{I} = (I_1, \ldots, I_m), \quad \mathbf{x}_ij = (x_{1ij}, \ldots, x_{mij}), \quad \mathbf{y}_ij = (y_{1ij}, \ldots, y_{mij}), \quad \mathbf{u}_ij = (u_{1ij}, \ldots, u_{mij})        (1.1.11b)

⁴ See Def. B.3.9 in Appendix B for a definition of a completely stable system.
and where m denotes the number of state variables in the multilayer cell circuit or, in other words, the number of layers. Here the convolution operator ∗ between a matrix and a vector is to be interpreted like a matrix multiplication, but with the operator ∗ inserted between each entry of the matrix and the corresponding entry of the vector. Observe that A and B are block triangular matrices.

Any layer can be used to perform a different processing and, of course, every layer works in parallel with the others.

As a final remark, the capacitors corresponding to the various layers can be sized to different values. In this way it is possible to have layers with different time rates. As a limiting case, the capacitors of some layers can be zero (or so small compared to the others as to be considered zero for all practical purposes), thereby yielding a mixed set of differential and algebraic equations.
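The "matrix times vector with ∗ inserted between entries" interpretation of (1.1.11a) can be sketched as follows, again assuming r = 1, zero boundary conditions and 0-based indexing (choices of this illustration only); block_conv applies an m × m matrix of templates to m layers at one cell location:

```python
def tconv(T, v, i, j):
    """Scalar template convolution T * v_ij (Definition 1.1.2, r = 1,
    zero boundary conditions)."""
    acc = 0.0
    for m in (-1, 0, 1):
        for n in (-1, 0, 1):
            k, l = i + m, j + n
            if 0 <= k < len(v) and 0 <= l < len(v[0]):
                acc += T[m + 1][n + 1] * v[k][l]
    return acc

def block_conv(Tmat, layers, i, j):
    """'Matrix-vector' convolution of Eq. (1.1.11a): Tmat is an m x m
    matrix of 3x3 templates, layers is a list of m M x N arrays (one per
    layer); returns the m coupling terms for cell (i, j)."""
    m = len(Tmat)
    return [sum(tconv(Tmat[p][q], layers[q], i, j) for q in range(m))
            for p in range(m)]
```

For a block lower-triangular template matrix, as in (1.1.11b), layer 1 evolves independently while layer 2 also receives a contribution from layer 1.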
1.1.5 The CNN as an analog processor
A CNN is mainly used as a real-time processor for arrays of data. Different approaches are possible
and here only the simplest ones are mentioned; other alternatives will be presented in the following.
Let us suppose, for example, that we want to process a gray-scale image composed of M × N
pixels [25, 26]. This image can be represented as an array of M × N voltages normalized into the
allowed input range [−1, 1]. Then it can be fed at the M × N inputs of the CNN. Provided that
the network parameters are such that the CNN is completely stable, the state will settle to an
equilibrium point. This will correspond to an array of M × N outputs in the range [−1, 1]. This
output array/image represents the result of the processing, performed by the CNN, according to
the dynamic rule fixed by the templates. As a matter of fact, the network works as a map that
associates an input U ∈ [−1, 1]MN to an output Y ∈ [−1, 1]MN .
Depending on the templates, this mapping may, or may not, depend on the initial condition
X(0) ∈ [−1, 1]MN of the CNN.
Therefore, in some cases, it can be appropriate to take X(0) = U or X(0) = 0. Alternatively it
is possible to operate with U = 0 and to feed the input image as initial condition X(0).
These approaches are considered when the desired processing is applied to a single operand.
When, instead, as in a binary operation5 , two different operands (e.g. two different M ×N images) are
11
processed to give a combined result, they can be provided as input and initial condition respectively.
Several other approaches are still possible. For instance, the designer may decide to consider the
state X of the network at a fixed instant τ to be the result of the desired operation (relaxing in this
way the hypothesis of complete stability).
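The settling behaviour just described can be illustrated numerically. What follows is a minimal sketch, not taken from the thesis, of a single uncoupled Chua and Yang cell used as a threshold operator; the template values A(i, j; i, j) = 2, B(i, j; i, j) = 1 and I = 0 are hypothetical choices made for illustration only:

```python
# Minimal sketch: one uncoupled Chua-Yang cell integrated to equilibrium.
# Each input pixel u in [-1, 1] is mapped to a saturated output in {-1, +1}.

def f(x):
    """Piecewise-linear output function y = 0.5 * (|x + 1| - |x - 1|)."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def settle(u, a=2.0, b=1.0, bias=0.0, dt=0.05, steps=2000):
    """Forward-Euler integration of dx/dt = -x + a*f(x) + b*u + I,
    using the convention X(0) = U discussed in the text."""
    x = u
    for _ in range(steps):
        x += dt * (-x + a * f(x) + b * u + bias)
    return f(x)

image = [-0.6, -0.1, 0.2, 0.9]        # a 1x4 "gray-scale image"
outputs = [settle(u) for u in image]
print(outputs)                        # every pixel saturates to -1 or +1
```

Since a = 2 > 1, each cell is bistable and the state always leaves the linear region, so the output saturates to ±1; this is the complete-stability scenario assumed above.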
From these simple examples it is understood how a CNN is able to process arrays of data
according to a chosen template and to criteria on the representation of the data involved. Hence the
CNN can be thought of as an analog processor, the template representing the instruction of
this processor. Besides, from an alternative point of view, in analogy with systolic arrays [27], the cell
could be considered the elementary processor while the CNN would become an array of processors.
Indeed, a vast library of cloning templates performing the most disparate operations has been
developed by several research groups over the years.
In this framework the natural following step is to realize that, if some data are processed by a
CNN with a particular template, then the corresponding results can be further processed by changing
the template, and so on. In this way it is possible to perform algorithms. In this context the term
dual algorithm is used [28, 29].
Image processing is an application field cited above just for the purpose of illustration. Applications of CNN's have been developed in remarkably distant and different disciplines: image processing,
partial differential equation solution, physical system simulation, nonlinear phenomena modeling,
generation of nonlinear dynamics, neurophysiology, robotics and biology, to cite a few [9].
To conclude this section, it is worth mentioning that several important results about the stability
and the dynamic behavior of CNN's have been reported in the literature [2, 3, 30, 15, 16, 17, 18]. Moreover,
from the above discussion, the impact that these theoretical results have in terms of applications
and implementations is apparent. However, a discussion of these results is beyond the scope of
this section.
1.2 Main Generalizations
As previously pointed out, several extensions have been brought to the model of Chua and Yang
discussed in section 1.1. The purpose of these generalizations has been to enhance the capabilities
5 Trivial examples of binary operations are the sum, the multiplication and so on.
of the CNN’s, broadening the field of their applications or improving the efficiency of existing ones
[31, 21, 30, 20, 29].
Some of the most common generalizations are reviewed in this section. A more general and
rigorous definition including most of them as particular cases will be given in the following section.
1.2.1 Nonlinear CNN's and delay CNN's
The CNN model of Chua and Yang is indeed a nonlinear circuit because of the output function
(1.1.1). However, some authors [21, 30] refer to this model as the linear CNN to emphasize the
linearity of the VCCS’s determining the coupling, as in equations (1.1.3a-1.1.3b).
Let us now substitute the relationships (1.1.3a-1.1.3b) with the following ones:

I_xy(i, j; k, l) = Â_{ij;kl}(v_{y,kl}, v_{y,ij}) + A^τ_{ij;kl} · v_{y,kl}(t − τ)   (1.2.1a)

I_xu(i, j; k, l) = B̂_{ij;kl}(v_{u,kl}, v_{u,ij}) + B^τ_{ij;kl} · v_{u,kl}(t − τ)   (1.2.1b)

where Â_{ij;kl}(·, ·), B̂_{ij;kl}(·, ·) : ℝ × ℝ → ℝ are continuous (i.e. they are real-valued continuous functions of,
at most, two variables) while A^τ_{ij;kl}, B^τ_{ij;kl} ∈ ℝ, τ ∈ [0, ∞). Namely, a nonlinear coupling is introduced
by Â_{ij;kl}(·, ·) and B̂_{ij;kl}(·, ·), while a delayed (functional) dependence is introduced by A^τ_{ij;kl} and B^τ_{ij;kl}.
Equations (1.1.3a-1.1.3b) are a particular case of (1.2.1a-1.2.1b) and the state equation (1.1.8) is
substituted by the following one:

dx_ij/dt = −x_ij(t) + Â ∗ y_ij(t) + A^τ ∗ y_ij(t − τ) + B̂ ∗ u_ij(t) + B^τ ∗ u_ij(t − τ) + I   (1.2.2)

1 ≤ i ≤ M; 1 ≤ j ≤ N
with the consequent extension for the convolution operator. Moreover, let us observe that one of
the hypotheses, namely the assumption on the time-invariance of the inputs, has been relaxed.
We will talk about a nonlinear CNN when A^τ_{ij;kl} = B^τ_{ij;kl} = 0, while we will talk about a delay-type
CNN when Â_{ij;kl} = B̂_{ij;kl} = 0.
Moreover, other hypotheses can be relaxed, for example the time-invariance of the templates or
their space-invariance.
The nonlinear templates allow more sophisticated and efficient ways of processing data [31, 20].
Besides, they make it possible to mimic some biological functions, such as those of the retina [32, 33, 34].
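The delayed terms of a delay-type template can be handled in simulation with a history buffer of past outputs. The following is a minimal single-cell sketch of such a delayed self-feedback; all coefficient values are illustrative assumptions, not templates from the text:

```python
# Forward-Euler integration of  dx/dt = -x + A*f(x) + At*f(x(t - T)) + I,
# a single-cell caricature of a delay-type coupling.
from collections import deque

def f(x):
    """Piecewise-linear output function."""
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def simulate(x0=0.3, A=1.2, At=-0.5, I=0.0, T=1.0, dt=0.01, t_end=20.0):
    # ring buffer holding the cell outputs over the last T seconds
    hist = deque([f(x0)] * int(round(T / dt)))
    x = x0
    for _ in range(int(round(t_end / dt))):
        y_delayed = hist.popleft()            # f(x(t - T))
        x += dt * (-x + A * f(x) + At * y_delayed + I)
        hist.append(f(x))
    return x

print(simulate())
```

With these gains (A + At = 0.7 < 1) the only equilibrium is the origin and the cell settles with a damped oscillation; stronger delayed feedback can instead sustain oscillations, which is what makes delay templates sensitive to changes over time.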
Figure 1.5: Nonlinear output functions; (a) unity gain with saturation, (b) high gain with saturation,
(c) inverse gaussian, (d) sigmoid, (e) inverse sigmoid, (f) gaussian.
Delay-type templates find application, for instance, in the detection of moving objects [35, 36, 37].
Other extensions regard the memoryless output equation (1.1.1). An arbitrary bounded nonlinear
function f : ℝ → ℝ can be used in place of (1.1.1). Actually, practical design considerations imply
further restrictions such as continuity. Some typical output functions are shown in Fig.1.5.
Moreover, a dynamic output function can be considered:

v̇_{y,ij} = −a · v_{y,ij} + f(v_{x,ij})   (1.2.3)
An easy way to obtain it is to connect a capacitor in parallel with Ry in Fig.1.1.
Another generalization, involving the topology of the cell, consists in substituting the circuit shown
in Fig.1.1 with a completely different one. Very important is the case in which Chua's oscillator
[38] is used as the cell. This case will be considered in detail in the next chapters.
1.2.2 Non-uniform processor CNN's and multiple neighborhood size CNN's
Motivated partly by neurobiological structures, other generalizations involve non-uniform grid CNN’s,
having more than one type of cell and/or more than one size of neighborhood. These are called
Non-uniform processor CNN’s (NUP-CNN) and multiple-neighborhood-size CNN’s (MNS-CNN) respectively. Examples of grids are reported in Fig.1.6. The adoption of a grid alternative to the
14
Figure 1.6: Examples of grids; (a) rectangular, (b) triangular, (c) hexagonal.
Figure 1.7: A NUP-CNN with two kinds of cells
rectangular implies a formal modification in the definition of the neighbor set and in the way in
which cell’s positions are specified.
An example of a NUP-CNN with two different kinds of cells (drawn in black and white, respectively)
is shown in Fig.1.7. An example of a MNS-CNN is shown in Fig.1.8. In this multilayer CNN there
are neighbor sets of two different sizes: one layer (white cells) with a fine grid (r = 1) and another
one (black cells) with a coarse grid (r = 3). This kind of architecture reflects some characteristic
structures found in living visual systems [31, 33].
1.2.3 Discrete-time CNN's
A very important class of CNN's is that of the Discrete-time CNN's (DTCNN) [39]. As the name
implies, these are essentially the discrete-time version of the already discussed continuous-time
models. Actually, in the original definition given in [39], some further differences are found (e.g.
the output function is the so-called threshold nonlinearity and implies only binary outputs).
Figure 1.8: A MNS-CNN with two kinds of neighbor sets
A DTCNN is described by the state map:

x_ij(n) = Σ_{C(k,l)∈N_r(i,j)} a(i, j; k, l) y_kl(n) + Σ_{C(k,l)∈N_r(i,j)} b(i, j; k, l) u_kl + i_ij   (1.2.4a)

y_ij(n) = f(x_ij(n − 1)) = { +1 if x_ij(n − 1) > 0;  −1 if x_ij(n − 1) < 0 }   (1.2.4b)

1 ≤ i ≤ M; 1 ≤ j ≤ N
or, in compact form, adopting the Einstein summation^6 convention:

x^c(n) = a^c_d y^d(n) + b^c_d u^d(n) + i^c   (1.2.5a)

y^c(n) = f(x^c(n − 1)) = { +1 if x^c(n − 1) > 0;  −1 if x^c(n − 1) < 0 }   (1.2.5b)

where c is the generic cell of the DTCNN, d ∈ N_r(c), while a^c_d, b^c_d and i^c are the cloning templates.
Moreover, this latter form is also valid with non-rectangular grids. All of the above-mentioned
generalizations can be applied to this kind of CNN. Note that the output function is not defined at
the origin; in practice, however, there is always some noise, so this is not a problem.
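The update law (1.2.4)-(1.2.5) is easy to prototype. The sketch below runs one synchronous DTCNN step on a one-dimensional binary array with fixed white (−1) boundary cells; the r = 1 templates used (a = 0, b = (1, 1, 1), i = 2) are a hypothetical choice, not taken from the text, that dilates the black region by one pixel:

```python
# One synchronous DTCNN update on a 1-D binary array (r = 1 neighborhood).

def f(x):
    """Threshold output nonlinearity (x = 0 excluded; noise resolves it)."""
    return 1 if x > 0 else -1

def dtcnn_step(y, u, a, b, bias):
    """Compute x(n) from outputs y and inputs u, return y(n+1) = f(x(n))."""
    n = len(y)
    nxt = []
    for j in range(n):
        s = bias
        for k in (-1, 0, 1):
            yk = y[j + k] if 0 <= j + k < n else -1   # fixed white boundary
            uk = u[j + k] if 0 <= j + k < n else -1
            s += a[k + 1] * yk + b[k + 1] * uk
        nxt.append(f(s))
    return nxt

u = [-1, -1, 1, 1, -1]                       # black pixels at positions 2, 3
y1 = dtcnn_step(u, u, a=(0, 0, 0), b=(1, 1, 1), bias=2)
print(y1)                                    # black region grown by one pixel
```

Iterating dtcnn_step with a nonzero feedback template a reproduces the propagating behaviours discussed for the continuous-time model.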
It is immediately worth mentioning, however, that DTCNN's have many practical features not
shared by a conventional continuous-time CNN. Among these, systematic procedures to design the
templates needed for a well-defined task do exist [39, 40, 41]. A simple example explains how
this is possible: just consider that, due to the output nonlinearity, the templates can be found
6 See def. A.2.5 in appendix A.
by solving a system of inequalities obtained by imposing a desired state transition on the update
equation (1.2.5).
Moreover DTCNN’s have some appreciable robustness features. In fact, if the templates are such
that:
∆ = min |acd y d (n) + bcd ud (n) + ic |
c,k
(1.2.6)
is large enough, then the algorithm (1.2.5) is relatively insensitive to parameters tolerance of acd ,bcd
or ic smaller than ∆, because individually they cannot cause any changes in the output states [39].
Finally, the speed of the network is easily controlled by adjusting the clock frequency of the update
law (1.2.5). This is advantageous in terms of testability.
On the other hand, as always happens when discrete-time systems and continuous-time systems
are compared [42], some of the features of the continuous CNN are lost in the DTCNN. For instance,
in a DTCNN, a symmetric template does not imply complete stability. Indeed, examples of such
systems admitting stable limit cycles do exist [39].
1.2.4 The CNN Universal Machine
As pointed out in section 1.1.5, the CNN can be used as a programmable processing device where
the instructions are represented by the templates. This is the idea behind the so-called CNN Universal Machine (CNNUM), a further evolution of the CNN architecture, proposed by T. Roska and
L.O. Chua [29, 28], in which both analog and digital circuitry coexist.
The adjective universal must be intended in the sense of Turing. This statement hides a rather
complex definition, but the essential meaning is that a Turing (universal) machine is able to realize
any conceivable algorithm (recursive function). In fact, it has been proved that both the CNN [43]
and the CNNUM (which we are going to discuss briefly) [44] are universal in this sense. In simple
terms, it is theoretically proved that they are able to perform any possible algorithm. This is an
important existence result: it does not say how the CNN/CNNUM can perform a desired algorithm.
That is instead a tough open problem, known as the learning and design problem^7.
7 The term design is used when the desired task can be translated into a set of local dynamic rules, while the
term learning is used when the templates must be obtained so that given pairs of inputs and outputs correspond
(a relationship which may be far too complicated for the explicit formulation of local rules) [45].
On this basis, the CNNUM is a CNN in which every (analog) cell is augmented by the introduction
of additional (local) analog and digital blocks. Moreover, further blocks are added to perform global
tasks.
The local blocks essentially consist of memories, storing the data to be processed and the sequences of template values that perform the desired processing, plus some local logic to govern
the operation. This, in analogy with the cache and pipeline techniques used in many common microprocessors, improves the throughput of the network in performing template-based
algorithms.
Moreover, global circuitry controls the overall behaviour (loading and retrieving information) of
the CNN nucleus, as well as the interaction of the CNNUM with the external world.
The coexistence of analog and digital sections, working together without any data converter
placed between them, suggested the introduction of the term analogic computing (from the
contraction of “analog” and “logic”), or dual computing. As a consequence, the CNNUM
executes analogic and dual algorithms.
Roska and Chua also identified the bottleneck of the CNN, namely the input/output with the external
world. In fact, it is easily understood that while the number of cells, in an integrated realization,
grows as N², the corresponding number of pins can only grow linearly. This forces a
sequential input/output from the chip.
Given that many applications of CNN's deal with the processing of sensory information, they
proposed the integration of the array of sensors/antennas/coils directly on the same chip as the CNN
itself [29]. The idea is as appealing as it is technologically challenging.
1.3 A Formal Definition
The previous sections essentially followed the actual evolution of the CNN paradigm from its introduction to the present. It is apparent how multiple contributions and interactions between electrical
engineering and other disciplines led to different mutations and variations on the basic model of
Chua and Yang.
This led to successive formal definitions of the CNN (first in [2], then in [20] and, more recently,
in [21]).
In this section, the latest and most general definition of a Cellular Neural Network is reported.
For the definitions of vector field, map, dynamical system and so on, the reader can refer to the
appendices.
1.3.1 The cells and their coupling
Definition 1.3.1 (Cellular Neural Network). A Cellular Neural Network (CNN) is a high-dimensional dynamic nonlinear circuit composed of locally coupled, spatially recurrent circuit units
called cells. The resulting net may have any architecture, including rectangular, hexagonal, toroidal,
spherical and so on. The CNN is defined mathematically by four specifications:
1. Cell dynamics.
2. Synaptic law.
3. Boundary conditions.
4. Initial conditions.
Definition 1.3.2 (Cell dynamics). The internal circuit core of the Cell can be any dynamical
system. The Cell dynamics is defined by an evolution equation. In the case of continuous-time
lumped circuits, the dynamics is defined by the state equation:
ẋ_α = −g(x_α, z_α, u_α(t), I^s_α)   (1.3.1)

where x_α, z_α, u_α ∈ ℝ^m are the state vector, threshold (DC bias) and input vector of the cell n_α at
position α, respectively. I^s_α is a synaptic law and g : ℝ^m × ℝ^m × ℝ^m × ℝ^m → ℝ^m is a vector field.
For a discrete-time circuit, the dynamics is defined by the state update law:

x_α(n + 1) = −G(x_α(n), z_α, u_α(n), I^s_α)   (1.3.2)

where G : ℝ^m × ℝ^m × ℝ^m × ℝ^m → ℝ^m is a map.
Note the explicit dependence of ẋ_α on x_α itself (self-feedback), on the threshold z_α and on the inputs
u_α(t), besides the synaptic law defined next.
Definition 1.3.3 (Sphere of influence). In this section, the sphere of influence S_α of the cell n_α
coincides with the previously defined neighbor set N_r(n_α) without n_α itself:

S_α := N_r(n_α) − {n_α}   (1.3.3)
Definition 1.3.4 (Synaptic law). The synaptic law defines the coupling among the considered
cell n_α and all the cells n_{α+β} within a prescribed sphere of influence S_α of n_α itself:

I^s_α = Â^β_α x_{α+β} + A^β_α ∗ f_β(x_α, x_{α+β}) + B^β_α ∗ u_{α+β}(t)   (1.3.4)

using the Einstein summation rule.
The first term Â^β_α x_{α+β} is the linear feedback of the states of the neighboring cells n_{α+β}; Â^β_α is the
state template^8.
The second term A^β_α ∗ f_β(x_α, x_{α+β}) defines the arbitrary nonlinear coupling; A^β_α is the nonlinear
feedback template.
The last term B^β_α ∗ u_{α+β}(t) accounts for the contribution of the external inputs; B^β_α is the feedforward or
control template.
Before proceeding any further it is worth making a few remarks with regard to the common case in
which a two-dimensional rectangular-grid CNN is considered. In this circumstance the contributions
shown in equations (1.3.1-1.3.2) and (1.3.4) can be written in explicit form. First of all, the notation
regarding the sphere of influence has to be intended as follows:
kl ∈ S_α ⇐⇒ k, l ∈ {−r, . . . , 0, . . . , r}, (k, l) ≠ (0, 0)   (1.3.5)
The simplest example for (1.3.1) is the first-order cell:

ẋ_ij = −g(x_ij) + I^s_ij   (1.3.6)

with g : ℝ → ℝ.
The state contribution is written as:

Â^β_α x_{α+β} = Σ_{kl∈S_α} â_{ij;kl} x_{i+k,j+l}   (1.3.7)
The nonlinear (output) coupling expresses an arbitrary nonlinear interaction law among the cells
in a neighborhood. The same is valid for the feedforward template. Therefore both the nature of f_β
and of the nonlinear feedback A^β_α and feedforward B^β_α templates is discretionary. Some illustrative
examples for the nonlinear feedback follow:
A^β_α ∗ f_β(x_α, x_{α+β}) = Σ_{kl∈S_α} a_{ij;kl} f(x_{i+k,j+l})   (1.3.8a)

A^β_α ∗ f_β(x_α, x_{α+β}) = Σ_{kl∈S_α} a_{ij;kl} f(x_{i+k,j+l}(t − T) − x_{i,j}(t − T))   (1.3.8b)

A^β_α ∗ f_β(x_α, x_{α+β}) = Σ_{kl∈S_α} a_{ij;kl} ∫_0^t h(t − τ) · (x_{i+k,j+l}(τ) − x_{i,j}(τ)) dτ   (1.3.8c)

A^β_α ∗ f_β(x_α, x_{α+β}) = Σ_{kl∈S_α} a_{ij;kl} Σ_{n=1}^{M} ∫_0^t · · · ∫_0^t h_n(t − τ_1, t − τ_2, . . . , t − τ_n) · x_{i+k,j+l}(τ_1) · · · x_{i+k,j+l}(τ_n) dτ_1 dτ_2 · · · dτ_n   (1.3.8d)
8 I am proud to specify that the direct contribution from the state itself was first introduced by the group composed
of Paolo Arena, Salvatore Baglio, Luigi Fortuna and Gabriele Manganaro in [46].
The reader can recognize the memoryless output coupling, the nonlinear delay, the convolution and
the Volterra series operator respectively.
1.3.2 Boundary conditions
In the previous sections the importance of the boundary conditions has been pointed out. In fact,
these define the way the boundary cells work and so, by indirect propagation, may affect
the whole network behavior. It is worth noticing that in some cases (especially in some theoretical
derivations, because of the formal simplification that this hypothesis sometimes implies) the CNN
is assumed to extend to infinity.
Some of the most common boundary conditions are now defined. In order to give mathematical
definitions, a formal artifice is introduced.
Definition 1.3.5 (Missing cell). Let C be the set of cells of a finite-size CNN and C^∞ its extension
to infinity (C ⊂ C^∞). Let n_α ∈ C be a boundary cell and S_α its sphere of influence. A missing cell
(of n_α) n_{α+β} is defined as:
n_{α+β} ∈ C^∞ : n_{α+β} ∈ S_α and n_{α+β} ∉ C   (1.3.9)
Definition 1.3.6 (Fixed (Dirichlet) Boundary Condition). Let n_{α+β} be a missing cell and
E^x_{α+β}, E^u_{α+β} ∈ ℝ^m two constant vectors. Then the Dirichlet condition is defined by:

x_{α+β} = E^x_{α+β} ,   u_{α+β} = E^u_{α+β}   (1.3.10)
In other words, the CNN is clamped at its ends to some fixed (possibly space-invariant) potential.
For instance, this is the case of the CNN model of Chua and Yang, where the potential is uniformly
at ground.
Definition 1.3.7 (Zero-flux (Neumann) Boundary Condition). Let n_{α+β} be a missing cell of
n_α. The Zero-flux condition is defined by:

x_{α+β} = x_α ,   u_{α+β} = u_α   (1.3.11)
In this case the state and input of the missing cell follow the ones of the boundary cell.
Definition 1.3.8 (Periodic (Toroidal) Boundary Condition). Let n_α and n_γ be two boundary cells placed at the corresponding opposite ends of a CNN. Let n_{α+β} be the missing cell of n_α lying
on the symmetry axis uniquely defined by n_α and n_γ. The Toroidal condition is defined by:

x_{α+β} = x_γ ,   u_{α+β} = u_γ   (1.3.12)

In this case the network behaves as if it were closed onto itself, forming a torus.
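Operationally, the three boundary conditions amount to prescribing the value a boundary cell reads from a missing neighbor. A minimal one-dimensional sketch follows (the function name and interface are illustrative, not from the text):

```python
# Value seen at an out-of-range index for the three boundary conditions
# of section 1.3.2, on a 1-D array of cell states.

def missing_value(x, idx, kind, E=0.0):
    """Return the state read at (possibly out-of-range) index idx."""
    n = len(x)
    if 0 <= idx < n:
        return x[idx]            # a real cell, no boundary rule needed
    if kind == "dirichlet":      # clamped to a fixed potential E (1.3.10)
        return E
    if kind == "neumann":        # copy the nearest boundary cell (1.3.11)
        return x[0] if idx < 0 else x[-1]
    if kind == "toroidal":       # wrap around to the opposite end (1.3.12)
        return x[idx % n]
    raise ValueError(kind)

x = [0.2, -0.7, 1.0]
print(missing_value(x, -1, "dirichlet"))   # 0.0
print(missing_value(x, -1, "neumann"))     # 0.2
print(missing_value(x, -1, "toroidal"))    # 1.0
print(missing_value(x, 3, "toroidal"))     # 0.2
```

The Chua and Yang model corresponds to the Dirichlet case with E = 0 (cells clamped to ground).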
1.4 Conclusions
In this chapter the basic concepts and definitions of Cellular Neural Networks have been reviewed.
The original Chua and Yang model and its most important generalizations have been summarized,
introducing the reader to this continuously evolving field.
Finally, a broad and formal definition of the CNN has been given.
This chapter builds the basis for the following ones, where the original contributions of this dissertation are discussed.
Chapter 2
Some applications of CNN’s
In this chapter some original applications of Cellular Neural Networks to real-world problems are
discussed.
In section 2.1 a new image processing technique based on CNN's for improving the automatic
classification of fruits (in particular, oranges) is introduced. It allows the digitized orange images
to be processed in order to highlight some peculiarities of the fruits. In this way the subsequent classification step is greatly simplified and improved. Moreover, the real-time processing capability
of CNN's is a major advantage over the traditional computing resources commonly used in
this kind of processing. The proposed task is accomplished by the choice of suitable templates in a
simple Chua and Yang CNN model. These templates are described and some examples are reported.
In section 2.2 a new method to filter 2D NMR spectra against uncertainties arising from experiments and data acquisition machinery is proposed. The method is explained and
applied to the filtering of a real 2D NMR spectrum of a protein.
In section 2.3 the simulation of some environmental models for air quality is accomplished by
suitable CNN’s. These have been derived from the partial differential equations describing the
concentrations of pollutants under the wind action. In particular three different cases have been
considered: the one-dimensional advection, the one-dimensional diffusion and the two-dimensional
advection.
For each one, the proper CNN structure is derived. Moreover, some examples are presented.
2.1 CNN-based Image Pre-Processing for the Automatic Classification of Fruits
One of the main problems in the automation of modern farming production regards the selection
of good fruits from the whole crop. This is necessary for commercial reasons and because a bad
(rotten) fruit placed among good ones can deteriorate the whole lot. In particular we are interested
in the selection of oranges. Currently this task is mainly done manually.
Some attempts at automatic classification using traditional computing resources and algorithms
have been made. Unfortunately the average classification time for each orange using this equipment is
too long for an efficient real-time application. It is easily understood that the classification of fruits
based on their digitized images can be improved and simplified if redundant details are removed
from these images. This can be accomplished by preliminary image filtering. Again, this has been
attempted by digital computers but it has proved to be too time-consuming to be actually applied.
In [47] an alternative solution that uses Cellular Neural Networks (CNN’s) for a fast processing
of orange images is proposed. The choice of suitable templates allows the desired pre-filtering to be
achieved even by a simple Chua and Yang CNN model.
A few words are in order about automatic orange image classification. For each orange six
different views are available (top, bottom, etc.). The basis of the classification is the recognition of
possible spots on the surface of the orange: if the fruit presents a spot it must be rejected. For a
correct operation it is important to distinguish correctly between spots and other parts of the fruit
such as the stalk.
If, in fact, the latter is recognized as a spot, every fruit will be rejected. However, stalks and
spots present some peculiarities that can be exploited in order to distinguish between them. More
specifically, stalks present a star-like shape while spots present an almost elliptical shape (see Fig.
2.1). Some stalks, nevertheless, have a rounded shape so they can be confused with spots. In this
case the irregular surface of stalks can be used to distinguish them from the smooth surface of spots.
Figure 2.1: (a) Two spots on the orange's surface. (b) Two stalks.
2.1.1 The pre-filtering
It is now clear that, roughly speaking, the classification is actually based on the presence/absence of
spots in the orange being examined and that the main requirement is the ability to avoid confusion
between spots and stalks. Therefore the required filtering strategy must be able to clean the images
of useless details such as shadows, small irregularities on the surface of the fruit, noise and so
on, leaving just the interesting elements, i.e. spots and stalks. Moreover, the classification will be
enhanced if the different peculiarities of these two elements are further stressed.
Let us consider the following template:

A =
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 2 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

B =
+1 −1 −1 +1 −1 −1 +1
−1 +1 −1 +1 −1 +1 −1
−1 −1 +1 +1 +1 −1 −1
+1 +1 +1 +2 +1 +1 +1
−1 −1 +1 +1 +1 −1 −1
−1 +1 −1 +1 −1 +1 −1
+1 −1 −1 +1 −1 −1 +1
   (2.1.1a)

I = 0.2   (2.1.1b)
In (2.1.1a-2.1.1b) we can immediately note some peculiarities:
1. A(i, j; i, j) = 2 > 1, so steady-state output saturation is guaranteed, according to what was
stated in Theorem 1.1.1.
2. the feedback template is reciprocal, so the convergence (complete stability) is guaranteed as
well, in agreement with what was mentioned in section 1.1.3.
Let us note its resemblance to some well-known templates used to extract particular connected
configurations in images [15]. On the other hand, one of the design guidelines was the necessity to
extract the above-mentioned configurations of spots and stalks. In particular let us consider the
control template B. It contains the value 1.0 along the eight principal directions (north, south, etc.)
and the value -1.0 in the remaining positions in order to reveal the possible connected components
along these directions.
The bias current acts as a threshold while the feedback template drives the system to the steady
state.
It is worth noting that a small neighborhood radius implies poor efficiency in the connected-line
extraction task, because it reveals thin lines but runs into problems with wide and/or broken lines.
A large radius implies a complex practical realization and is not congruent with the local interactions
of CNN's.
A trade-off between complexity and efficiency was found with an r = 3 radius in the proposed
templates.
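As described above, the control template contains +1 along the eight principal directions and −1 elsewhere, so the whole family (for any radius) can be generated programmatically. A small sketch follows; the central value defaults to +2, the self-input coefficient B(i, j; i, j):

```python
# Generate the star-shaped control template: +1 along the rows, columns and
# diagonals through the center, -1 elsewhere, and a tunable central value.

def star_template(r=3, center=2.0):
    size = 2 * r + 1
    B = [[-1.0] * size for _ in range(size)]
    for k in range(-r, r + 1):
        B[r + k][r] = 1.0          # vertical (north-south) direction
        B[r][r + k] = 1.0          # horizontal (east-west) direction
        B[r + k][r + k] = 1.0      # main diagonal
        B[r + k][r - k] = 1.0      # anti-diagonal
    B[r][r] = center
    return B

for row in star_template():
    print(" ".join(f"{v:+.0f}" for v in row))
```

Changing r trades implementation complexity for line-extraction efficiency, as discussed above; varying center explores the family of templates mentioned at the end of this section.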
In Fig. 2.1 some images of spots and stalks are shown, while Fig. 2.2 gives the results of the
proposed processing.
From these figures it can be noted that, as anticipated, just the required elements have been
preserved. Moreover, the inner irregularities of the spots have been removed, while the circular
irregularities inside the central part of the stalks and their star-like structure have been further
highlighted.
By varying the bias value I and the central control coefficient B(i, j; i, j), some peculiarities of
the resulting image processing (e.g. noise selectivity, resolution etc.) can be enhanced according to
particular requirements. This leads to a family of templates.
2.2 Processing of NMR Spectra
The problem of noise removal from corrupted images is a topic of great interest in several research
areas, ranging from robotics and automation to medicine and rehabilitation. In particular, as regards the biological and medical field, experiments via Nuclear Magnetic Resonance (NMR) spectra
are very helpful because of the meaningful information that derives from this kind of experiments
performed on living tissues. Suitable filtering of these spectra is required but quite difficult to
obtain, owing to disturbances arising from the fact that the sample is analyzed while moving in solution. Moreover, additional noise sources deriving from data acquisition equipment and instrument
Figure 2.2: The results of the proposed processing on the images shown in Fig. 2.1.
resolution make the filtering process even harder.
In this section the problem of noise removal in 2D NMR spectra is efficiently solved by using a
CNN approach. This is achieved in four steps, where the CNN is used in each phase as an analog
processor whose operation is fixed by a proper choice of its templates.
2.2.1 2D NMR Spectra
Among the wide variety of applications of 2D NMR spectra, they are used in the biochemical field
to study and reconstruct protein structure, i.e. it is possible to identify which amino acids make up
a protein, their sequence within the protein and how they are located in three-dimensional space
[48]. The power of NMR processing lies in its capability to analyze biopolymers in solution, i.e. in a
conformational state very close to their living conditions. Actually, owing to NMR's peculiarity of
analyzing proteins in solution, NMR data referring to the same molecule may vary a lot, owing to the
vibrations and conformational motions of the molecule, which is free to move around an equilibrium
position. Such a situation causes some problems in spectra analysis. The presence of noise makes
the spectra interpretation more difficult.
The proposed procedure was applied to filter a particular 2D NMR spectrum, the TOCSY (TOtal
Correlation SpectroscopY) spectrum [49], though it is quite general and can be applied to any kind
of 2D NMR spectrum.
The TOCSY spectrum allows us to identify the correlations among the spins constituting the
protein being examined by means of the presence/absence of resonance peaks in some particular
positions on a two-dimensional plane. For our aims, the presence of a peak is more important than
its elevation so the three-dimensional spectra surface can be reduced to a two-dimensional array
called a map. The peaks are ordered in sets called lists. Every list is a row or a column of peaks
that permits a particular residue in the protein to be characterized.
The processing of these maps (in order to identify the protein) is a time-consuming task, especially
if it is done in a traditional way, i.e. by human experts. Some artificial intelligence tools have been
developed in order to improve and to automate this job [50]. Due to the presence of noise, the peaks
belonging to a list are not exactly aligned in a straight line as they should be in the ideal case. This
is nothing special for a human expert because he/she expects to find the peaks around particular
Figure 2.3: (a) The TOCSY spectrum, (b) the selected window and (c) its corresponding hydrogen’s
peaks.
regions, but it can obviously be an important problem for an automatic recognition method. It
is clear that if the map is appropriately pre-filtered (rebuilt), before the classification process, the
simplicity and the reliability of the identification is greatly improved.
2.2.2 NMR Spectra Processing Via CNN
Let us show how a simple reconstruction of the noisy lists can be achieved by a dual CNN algorithm.
This algorithm is composed of a sequence of four cloning templates for the model of Chua and Yang.
It is assumed that the execution of any algorithm step is completed when the network reaches the
steady-state which always consists of settling at an equilibrium point.
In Fig. 2.3 an example of a TOCSY bidimensional spectrum is shown.
This is a very wide array (typically 1024 × 1024 or 800 × 800) so for a practical application
the proposed processing must be repeated by windows, i.e. the CNN is used to process a small
rectangular region (a window) of the map; subsequently the window is moved to another position
and the processing is repeated and so on.
The processed windows should not overlap and, possibly, they can be processed concurrently. In
this work, 61 × 61 windows were considered.
As can be seen in Fig. 2.3, the spectrum is symmetric with respect to the diagonal of the
hydrogen resonance peaks. Moreover, the lists are horizontal in the region to the right of the
diagonal, while they are vertical in the region above the diagonal. We concentrated on the former
region. Coherently with the above definitions, the map is actually an array of pixels. A black pixel
represents the presence of a peak while a white pixel represents its absence.
An important and useful point is that the peaks in the diagonal determine the straight horizontal
line identifying the lists. More precisely, every list has a hydrogen resonance peak as an element in
the diagonal [48]. Therefore a reliable way to reconstruct the lists is to exploit the virtual lines
determined by the peaks in the diagonal; so for any processed window it is necessary to consider a
new window containing the corresponding hydrogen peaks (as is shown in Fig. 2.3).
In order to describe the proposed algorithm it is necessary to describe the four instructions/templates
used. A name is assigned to each template, as with computer instructions. The first instruction is called
STRETCH. Its effect is to add one new black pixel below any black one formerly present, and the corresponding template is:

A =
0 0 0
0 2 0
0 0 0
, B =
0 1 0
0 0 0
0 0 0
, I = 1   (2.2.1)
STRETCH requires that data are fed into the inputs and processed as the initial state of the CNN.
The second one is called LINE and it generates a horizontal line of black pixels in any row containing
a black pixel. The chosen template is:

A = [ 0 0 0 ; 1 2 1 ; 0 0 0 ],   B = 0,   I = 1.9        (2.2.2)
LINE requires its own data as the initial state (the inputs are insignificant). The third one is the
well-known template AND. This is the only already-known template and its effect is the logical AND
between the inputs and the initial state.

A = [ 0 0 0 ; 0 1.5 0 ; 0 0 0 ],   B = [ 0 0 0 ; 0 1.5 0 ; 0 0 0 ],   I = −1        (2.2.3)
The last instruction is SHRINK. Its effect is to delete any black pixel that has another black pixel
above it. The corresponding template is:

A = [ 0 0 0 ; 0 1.5 0 ; 0 0 0 ],   B = [ 0 −1 0 ; 0 1 0 ; 0 0 0 ],   I = −1        (2.2.4)
SHRINK requires its own data in the inputs and initial state.
2.2.3 Description of the dual algorithm
In this section the algorithm for filtering 2D NMR spectra is described and applied to an experiment
performed on the BPTI protein. It consists of the following steps:
1. choose the window; assign the value 1.0 to the black pixels and −1.0 to the white pixels; set
these values in the corresponding inputs and initial state of the CNN; execute STRETCH (see
Fig. 2.4);
2. analogously, set the corresponding “hydrogen window” in the initial state of the CNN and
execute LINE (see Fig. 2.5);
3. set the result of step 1 (STRETCH) in the inputs and set the result of step 2 (LINE) in the initial
state; execute AND (see Fig. 2.6);
4. set the result of step 3 (AND) in the inputs and initial state and execute SHRINK (see Fig.
2.7).
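The four steps above can also be sketched in software. The following is a minimal illustrative sketch (not the code actually used in this work) that emulates the settled, steady-state effect of each template on a boolean peak map, with the convention black = True; the function names are assumptions made here for clarity.

```python
import numpy as np

def stretch(img):
    # STRETCH: add one black pixel below every black pixel
    out = img.copy()
    out[1:, :] |= img[:-1, :]
    return out

def line(img):
    # LINE: blacken every row that contains at least one black pixel
    return np.repeat(img.any(axis=1, keepdims=True), img.shape[1], axis=1)

def shrink(img):
    # SHRINK: delete any black pixel that has a black pixel above it
    out = img.copy()
    out[1:, :] &= ~img[:-1, :]
    return out

def rebuild(window, hydrogen):
    # the four-step dual algorithm: STRETCH, LINE, AND, SHRINK
    return shrink(stretch(window) & line(hydrogen))
```

Each function reproduces only the final output of the corresponding template, not the transient of the CNN.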
It is worth noting that steps 1 and 2 are independent so they can be executed concurrently. Now let
us compare the ideal window (i.e. how the peak should be in the ideal case when there is no noise)
shown in Fig. 2.8a, the noisy window (i.e. the window just processed) shown in Fig. 2.8b and the
result of the CNN processing (rebuilt window) shown in Fig.2.8c. Fig. 2.9 makes this comparison
simpler.
Figure 2.4: The execution of STRETCH.
Figure 2.5: The execution of LINE.
Figure 2.6: The execution of AND.
Figure 2.7: The execution of SHRINK.
Figure 2.8: (a) the ideal window, (b) the noisy window, (c) the rebuilt window.
Figure 2.9: (a) Overlapping of the ideal and noisy windows. (b) Overlapping of the ideal and rebuilt
windows. The white peaks belong to the ideal window while the black peaks belong to the other
overlapped windows. Therefore, in (a) 19 peaks are in the wrong position while in (b) just 1 peak is
in the wrong position.
In Fig.2.9a the ideal and noisy window have been overlapped while in Fig. 2.9b the ideal and
rebuilt window have been overlapped. The ideal window contains 23 peaks. The noisy window
differs from the ideal one by 19 peaks (i.e. the error is 19/23 ≈ 83%) while the rebuilt window
differs from the ideal one by only one peak (i.e. the error is 1/23 ≈ 4%).
The error has been significantly reduced. However, the efficiency of this filtering varies. A
number of different cases were examined applying the proposed algorithm and, in the worst case, a
maximum final error of around 15% was obtained, even with very noisy windows (more than 80%
initial noise). Nevertheless the reconstruction is often perfect (0% error). This happens when the
lists are less crowded than in the presented example (in particular, it happens if the result of LINE
is a set of well-separated thin stripes).
It is worth considering that if the lists are too close, even a human expert will find it difficult to
decide if a peak is not in its exact position and, if this is the case, to decide the right list to assign
it to.
2.3 Air quality modeling

Great interest is currently devoted to environmental problems; among these, air quality modeling
deserves particular attention. Ozone air quality modeling, for example, is of major concern, especially
in the United States [51]. Air quality models are mathematical descriptions of atmospheric transport,
diffusion and chemical reaction of pollutants [51]. Usually, the concentrations of chemical species in
the air are the unknown variables. These models are used to predict how peak concentrations will
change in response to prescribed changes in meteorology and in the source of pollution.
In this section the considered models are represented by partial differential equations (PDE’s)
and suitable CNN’s are introduced in order to solve these PDE’s1 [52, 53]. In this way simulations
and predictions of the advection and diffusion of chemical pollutants in the air can be efficiently
accomplished. Three cases have been considered: advection and diffusion along a single dimension,
and advection in a two-dimensional plane.
¹ In this section, some of the symbols (e.g. x, y, i, j and so on) are reused with different meanings. This is due
to the fact that these are well-established notations in the specialized literature of the different disciplines involved.
Changing one or the other would create confusion for some readers with no real improvement in the accessibility
of the document. Therefore it has been preferred to keep the original nomenclature; the meaning is fully clear
from the context.
2.3.1 Models
Air quality models can be divided into two classes: diagnostic models and prognostic models. The
former are statistical descriptions of the observed data, while the latter are based on physicochemical
principles governing air pollution [51]. In this section, only prognostic models are taken into account.
The general model, describing pollution in a 3-D time-dependent domain Ωt, is a coupled system
of nonlinear parabolic partial differential equations [51]:

∂ci/∂t + ∇ · (u ci) = ∇ · (K ∇ci) + fi(c1, . . . , cp),    x ∈ Ωt, t > 0, 1 ≤ i ≤ p        (2.3.1)

where u is the air velocity field (wind field), K is a diffusion matrix, fi is the chemical formation
(or depletion) rate of species i, ci are the concentrations of the chemical species, and model (2.3.1) is
derived as the conservation of mass equations for these species in Ωt.
Here the concentrations of pollutants are unknown while the wind field and diffusion matrix are
known. Of course, the equation (2.3.1) must be completed by the initial conditions:
ci(x, 0) = ci0(x), for x ∈ Ω0, 1 ≤ i ≤ p        (2.3.2)
and the boundary conditions:

a ci + b ∂ci/∂ν = gi,    x ∈ ∂Ωt, t > 0, 1 ≤ i ≤ p        (2.3.3)

where a, b, g : (x, t) → R and a ≥ 0, b ≥ 0, a² + b² > 0, ∀(x, t) ∈ Ωt × R+; while ν is the outward
unit normal to ∂Ωt.
Important particular cases derived from this general model are advection and diffusion. In
the former case the diffusion matrix K vanishes (or its contribution is negligible with respect to wind
effects), that is, the pollutants are transported by the wind without diffusion.
In the diffusion case K cannot be neglected.
2.3.2 CNN's for air quality modeling
These PDE’s are traditionally solved by finite differences numerical methods [51]. Of course it is
often time-consuming because a huge amount of elementary computations are needed. However,
CNN’s have been often used in the solution of many PDE’s [54, 21, 55, 56]. In this section a new
approach for air quality prediction is presented [52, 53]. The attention has been focused on the
diffusion and transport of a single pollutant (e.g. ozone), ignoring the various underlying
processes.
In particular the following cases have been considered:
1. 1D advection;
2. 1D diffusion;
3. 2D advection.
Let us begin with the one-dimensional advection. Let us suppose that the wind field is uniform and
that diffusion can be neglected (kij = 0). For the sake of simplicity the reference system can be
chosen so that u = (U, 0, 0) with U ∈ R. With these hypotheses the model (2.3.1) is simplified to
the one-dimensional model:

∂c/∂t + U ∂c/∂x = 0        (2.3.4)
It can be proved that with this model and with the above hypothesis on wind field, the peak of an
initial distribution of the pollutant cannot grow as the system evolves [51]. The shape of the initial
distribution will be simply transported by the wind field.
In order to obtain a CNN model, let us introduce a uniform discretization grid for the spatial
variable x. Moreover the spatial derivative can be approximated by the simple formula:

∂c/∂x |x=j∆x ≅ (cj − cj−1)/∆x,   where cj ≡ c(j∆x, t)        (2.3.5)

Substituting (2.3.5) into (2.3.4) the following model is obtained:

∂cj/∂t = −(U/∆x) [cj − cj−1]        (2.3.6)
As ∆x tends to zero this model approaches the model (2.3.4). Therefore it can easily be proved
that if the discretization length ∆x is sufficiently small then, again, the initial peak value of c cannot
grow but will just be translated.
This consideration allows us to map this equation into the following linear CNN model:

dxj/dt = −xj + A ∗ yj        (2.3.7a)

with:

A = [ U/∆x   (1 − U/∆x)   0 ]        (2.3.7b)
where xj corresponds to cj and differs just by a scale factor that must be chosen such that the initial
concentration peak corresponds to 1 (or less). In this way yj will always correspond to xj because
saturation will never occur.
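As a quick numerical check of this behaviour, the CNN (2.3.7) can be integrated with a forward-Euler scheme. The sketch below is illustrative only (it is not the simulator used for the figures of this section) and assumes U/∆x = 1, a 50-cell array, and a zero left boundary:

```python
import numpy as np

def simulate_advection(x0, U_over_dx=1.0, dt=0.01, steps=2000):
    # Forward-Euler integration of the linear CNN (2.3.7):
    # dx_j/dt = -x_j + (U/dx) y_{j-1} + (1 - U/dx) y_j
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        y = np.clip(x, -1.0, 1.0)              # cell output nonlinearity
        left = np.roll(y, 1)
        left[0] = 0.0                          # y_{j-1} with zero boundary
        x += dt * (-x + U_over_dx*left + (1.0 - U_over_dx)*y)
    return x

cells = np.arange(50)
x0 = np.exp(-((cells - 10) / 3.0)**2)          # Gaussian initial distribution
xT = simulate_advection(x0)
assert xT.max() <= x0.max() + 1e-9             # the peak cannot grow
assert np.argmax(xT) > np.argmax(x0)           # the shape is translated downwind
```

Since the states are scaled so that |x| ≤ 1, the saturation never activates and the CNN behaves as the linear upwind discretization.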
Let us now consider the one-dimensional diffusion. In this case, the hypothesis on the wind field
is the same as in the previous case but now diffusion along the x axis direction is present. More precisely:

u = (U, 0, 0), U ∈ R,   K = [ k 0 0 ; 0 0 0 ; 0 0 0 ]        (2.3.8)

So the general model (2.3.1) is reduced to the one-dimensional model:

∂c/∂t + U ∂c/∂x = k ∂²c/∂x²        (2.3.9)

In this case the initial distribution of the pollutant will be transported by the wind field and will
diffuse at the same time. For these reasons it is evident that the concentration peak cannot grow
but will possibly decrease.
Analogously to the previous case, let us introduce a uniform discretization grid for the one-dimensional
spatial variable and let us approximate the spatial derivatives with the following formulas:

∂c/∂x |x=j∆x ≅ (cj − cj−1)/∆x,    ∂²c/∂x² |x=j∆x ≅ (cj+1 − 2cj + cj−1)/∆x²        (2.3.10)
Therefore the model (2.3.9) is approximated by the following spatio-discrete model:

∂cj/∂t = (U/∆x + k/∆x²) cj−1 − (U/∆x + 2k/∆x²) cj + (k/∆x²) cj+1        (2.3.11)
That, again, can be mapped onto a simple linear CNN model with:

dxj/dt = −xj + A ∗ yj        (2.3.12a)

where:

A = [ U/∆x + k/∆x²   (1 − U/∆x − 2k/∆x²)   k/∆x² ]        (2.3.12b)
which differs from (2.3.7a)-(2.3.7b) just in the template. It is evident that yj cannot saturate in
this case either.
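Two small sanity checks on the template (2.3.12b): with k = 0 it must reduce to the pure-advection template (2.3.7b), and its entries always sum to 1, so that a spatially uniform concentration is an equilibrium of (2.3.12a). An illustrative sketch:

```python
import numpy as np

def diffusion_template(U, k, dx):
    # feedback template A of the 1D advection-diffusion CNN, eq. (2.3.12b)
    return np.array([U/dx + k/dx**2, 1.0 - U/dx - 2.0*k/dx**2, k/dx**2])

# with k = 0 the template reduces to the pure-advection template (2.3.7b)
assert np.allclose(diffusion_template(1.0, 0.0, 1.0), [1.0, 0.0, 0.0])
# the entries sum to 1: -x_j + A * y_j vanishes on uniform, unsaturated states
assert abs(diffusion_template(0.5, 0.2, 1.0).sum() - 1.0) < 1e-12
```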
The third case, the two-dimensional advection, is surely the most interesting. Here, the wind
field is no longer uniform but depends on the spatial position. However, as in the first considered
case, the diffusion can be disregarded. So the general model (2.3.1) reduces to the following one:

∂c/∂t + ∂(U c)/∂x + ∂(V c)/∂y = 0        (2.3.13)

where the wind field is stationary and has been described by u = (U(x, y), V(x, y)), x and
y being the spatial coordinates. This is a nonlinear PDE and can be rewritten as:

∂c/∂t + U ∂c/∂x + V ∂c/∂y + c ∂U/∂x + c ∂V/∂y = 0        (2.3.14)
Similarly to the previous cases, a 2D spatial grid (i∆x, j∆y) is introduced and the spatial derivatives are
approximated by finite difference terms:

∂c/∂x |(i,j) ≅ (cij − c(i−1)j)/∆x,    ∂c/∂y |(i,j) ≅ (cij − ci(j−1))/∆y,
∂U/∂x |(i,j) ≅ (Uij − U(i−1)j)/∆x,   ∂V/∂y |(i,j) ≅ (Vij − Vi(j−1))/∆y        (2.3.15)

leading to a time-continuous, spatio-discrete model. For the sake of simplicity it can be supposed
that ∆x = ∆y = h and, with trivial algebra, analogously to the two previous cases, the model can
be represented by the following nonlinear three-layer CNN:
dxij/dt = −xij + A ∗ yij,    xij = [ Uij ; Vij ; cij ],

A = [ A11 0 0 ; 0 A22 0 ; 0 0 A33 ],    A11 = A22 = [ 0 0 0 ; 0 1 0 ; 0 0 0 ],

A33 = (1/h) [ 0  Uij  0 ;  Vij  (h − 2Uij − 2Vij + U(i−1)j + Vi(j−1))  0 ;  0 0 0 ]        (2.3.16)
It can be seen from these that A11 and A22 ensure the time-invariance of the wind field, while A33 is
not a constant-coefficient matrix: its elements are functions of the state variables of layers
one and two, so the CNN of equation (2.3.16) is a nonlinear CNN.
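A minimal numerical sketch of the c-layer of (2.3.16) can make the behaviour concrete. The code below is illustrative only (it assumes, for simplicity, a uniform wind and forward-Euler integration; it is not the simulator used for Fig. 2.14):

```python
import numpy as np

def simulate_advection_2d(c0, U, V, h=1.0, dt=0.005, steps=1000):
    # Forward-Euler integration of the c-layer of the CNN (2.3.16):
    # dc_ij/dt = [U_ij c_(i-1)j + V_ij c_i(j-1)
    #             + (-2U_ij - 2V_ij + U_(i-1)j + V_i(j-1)) c_ij] / h
    c = c0.astype(float).copy()
    Un = np.roll(U, 1, axis=0); Un[0, :] = 0.0   # U_(i-1)j (stationary wind)
    Vw = np.roll(V, 1, axis=1); Vw[:, 0] = 0.0   # V_i(j-1)
    for _ in range(steps):
        cn = np.roll(c, 1, axis=0); cn[0, :] = 0.0   # c_(i-1)j
        cw = np.roll(c, 1, axis=1); cw[:, 0] = 0.0   # c_i(j-1)
        c += dt * (U*cn + V*cw + (-2*U - 2*V + Un + Vw)*c) / h
    return c

n = 25
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
c0 = np.exp(-((i - 12)**2 + (j - 12)**2) / 8.0)     # Gaussian centred in (12, 12)
U = np.full((n, n), 0.5); V = np.full((n, n), 0.5)  # uniform wind at 45 degrees
cT = simulate_advection_2d(c0, U, V)
assert cT.max() <= c0.max() + 1e-9                  # the peak cannot grow
ci, cj = np.unravel_index(np.argmax(cT), cT.shape)
assert ci > 12 and cj > 12                          # the blob moved with the wind
```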
Figure 2.10: Initial pollutant concentration distribution.
2.3.3 Examples
Here two examples of prediction using the CNN models introduced above are reported.
In the first example the case of the 1D advection is examined. A one-dimensional CNN array of
50 cells has been used. For simplicity U/∆x = 1 has been chosen, so the feedback template simply
reduces to A = [ 1 0 0 ].
In Fig. 2.10 the initial distribution of the pollutant (represented by means of the state variables)
is shown. A classic gaussian shape has been chosen. In Fig. 2.11 the distribution is reported at
different time instants; it is clear how the shape is simply translated in the wind direction.
However, the most interesting case is the 2D advection, which is the topic of the last example, where
a 25 × 25 CNN is considered. It has been supposed that:
1. the initial distribution of the pollutant is gaussian and centered in (12, 12); it is shown in Fig.
2.12;
2. the stationary wind field has unitary magnitude but is not uniformly distributed; it is shown
in Fig. 2.13.
In these conditions the pollutant is transported by the wind. The initial distribution of c is deformed
by the different directions of the wind in the various parts of its spatial distribution.
Figure 2.11: The distribution at successive time instants.
Figure 2.12: Initial pollutant concentration distribution.
Figure 2.13: The wind field.
Fig. 2.14 shows the results of the simulation of the CNN in (2.3.16) at different time-instants.
It shows how a constant level section of the pollutant distribution is transported and modified by
the wind field.
2.4 Conclusions
In this Chapter some new applications of CNN’s to real-world problems have been presented.
In Section 2.1 a new image processing technique for filtering orange images has been presented.
It has been developed in order to enhance the automatic classification of these fruits. In fact, from
the above considerations it follows that recognition and classification are greatly simplified, freeing
the artificial classifier from redundant information.
Moreover, the approach introduced only requires the simplest CNN model. This is also important
in view of possible future hardware implementations.
In Section 2.2 a novel strategy to filter 2D NMR spectra efficiently has been introduced. The
example reported shows the efficiency of the noise removal. The particular spectrum taken into
consideration belongs to the class of the so-called TOCSY spectra and represents resonance peaks
corresponding to BPTI protein residues. However, this strategy can be directly applied to other 2D
Figure 2.14: Contour of the pollutant concentration surface transported and modified by the wind at
five different time instants, obtained by the simulation of the multi-layer CNN of equations (2.3.16).
NMR spectra. This method is applied in a phase prior to the true analysis of the spectrum. Clearly,
if the filtering phase is accurate then the following recognition step is made easier and more reliable.
Residue recognition has already been performed employing artificial neural networks [57].
In Section 2.3 some CNN models for the prediction of air quality are proposed. These are analytically
derived from traditional nonlinear PDE models. Moreover, simulation results for some cases
are reported. The CNN approach has many advantages over the traditional finite-difference methods;
among these are the time-continuity of the CNN models and the natural computing parallelism of this
analogic architecture.
Chapter 3
The CNN as nonlinear dynamics generator
Many researchers state nowadays that the main scientific topics that revolutionized Science in
this century are Einstein's Theory of Relativity, Quantum Mechanics and Chaos Theory.
Indeed, some argue that the chaotic behavior of nonlinear dynamical systems, although not explicitly
mentioned, was already known in its essence by Henri Poincaré and other mathematicians at the
end of the past century [58, 59].
Nevertheless a rapidly growing interest in chaotic and complex systems has involved several scientific
disciplines in the last two or three decades, including, of course, Electrical and Electronic Engineering [60].
Indeed, as engineers, having acknowledged that chaotic oscillations show up in strongly nonlinear
systems such as electronic circuits [61, 62, 60], we are interested in their study in view of applications.
To cite a few, chaos has been applied to broaden the capture range of phase-locked loops, in
pseudo-random number generators, in secure communication systems, and to suppress the injection of
harmonics back into the power distribution network by switched-mode power supplies [62, 60].
Moreover, in some cases, it is of interest to control a chaotic system so that it follows a desired
(e.g. periodic) trajectory [63].
Cellular Neural Networks are nonlinear dynamic circuits, so it is natural to expect that
chaotic motions can show up. In fact, F. Zou and J. A. Nossek reported [64] and studied [65, 66] the
first chaotic attractor in a CNN composed of 3 cells¹.
On this basis, however, given that a CNN is a programmable circuit² and so extremely flexible,
it is natural to ask: “what kind of nonlinear dynamics can be obtained from a CNN?”. This question
is harder to answer than might be expected at first sight.
The reasons for these difficulties are numerous, but just two of them are enough to give the
reader a glimpse of the challenge that this question implies:
1. It is well known that general methods for the study of nonlinear systems do not exist, and the
analytic study of a third-order system³ can often be extremely hard if not an intractable
problem [58, 67, 68, 59].
2. Any cell in a CNN increases the system order by one.
As previously stated however, several electronic circuits have been found to behave in a chaotic
way under certain choices of their parameters. Moreover, analytic studies concerning the nature of
their dynamics and strange attractors have been done for some of them.
A more accessible question could be then: “is it possible to obtain the dynamics belonging to
many different circuits and systems on a Cellular Neural Network?”.
What would be the consequence of a positive answer to this question? First of all, a theoretical
consequence: the CNN would represent a general model for the class of circuits for which it
reproduces the dynamics. In other words, the cell would be the primitive of a wide class of dynamic
circuits.
Secondly, but not less important, a practical consequence: given that chaos is nowadays actively
used in engineering applications, being able to obtain several different dynamics from the same unique
circuit would boost the capabilities of existing applications and open the way to new and powerful
ones [69].
In this Chapter it is proved that a CNN is able to exactly reproduce the dynamics of several
well-known nonlinear oscillators. Main emphasis is given to chaotic and hyperchaotic dynamics but
more exotic behaviors, such as the so-called canards, have been also considered. In particular, eight
¹ Specifically, a Chua and Yang model.
² By changing its templates.
³ The minimum order required to have chaos in a continuous-time system.
different cases are considered, including both theoretical and experimental results. Finally, a
general analysis of the conditions under which the dynamics of a circuit can be reproduced by such a
CNN is reported.
A wider discussion of the consequences of this result will be given in the conclusion of this
Chapter, while some of the applications will be discussed in a later Chapter.
3.1 The State Controlled CNN Model
In the original definition of the CNN of Chua and Yang, discussed in Section 1.1 of Chapter 1, the coupling
of the cells is obtained through the inputs and nonlinear outputs only.
In [46], however, this restriction has been removed by the introduction of a direct dependence
on the neighbors' state.
In Section 1.3.1, Definition 1.3.4, it has been emphasized that a direct dependence of the cell
dynamics on the state vector of its neighbors is introduced by the linear feedback template (also
called state template) Â.
Strictly speaking, it can easily be proved that this dependence is implicitly obtained even without
the state template if the output nonlinearity y = f(x) is such that ∂f/∂x ≠ 0 almost everywhere⁴.
Regardless of this it is useful to introduce a particular sub-class of Cellular Neural Networks.
Definition 3.1.1 (SC-CNN[46, 70]). A State-Controlled Cellular Neural Network (SC-CNN5 ) is
a CNN with non-zero linear feedback template Â.
In this Chapter we will consider SC-CNN's with a small number of cells. Therefore, we will often
refer to the simple one-dimensional linear SC-CNN with the following state equation:

ẋi = −xi + Σ_{C(j)∈Nr(i)} ( Âi;j xj + Ai;j yj + Bi;j uj ) + I,    1 ≤ i, j ≤ N        (3.1.1)

or to its multi-layer version.
The theoretical propositions of this Chapter will be complemented by experimental results. To
this purpose, given that the order of the considered circuits is relatively low, simple implementations
using off-the-shelf discrete components have been assembled.

⁴ This is not true in the Chua and Yang model.
⁵ The acronym SC has no relation to the homonymous abbreviation used for Switched Capacitor circuits.

Figure 3.1: First circuit realization for a SC-CNN cell.
3.1.1 Discrete components realization of SC-CNN cells
If a circuit prototype for an SC-CNN with a limited number of cells is needed, an op-amp-based
circuit is often a good choice on account of its simplicity and reliability. Here two possible circuit
realizations for a SC-CNN cell are given. The first one is shown in Fig. 3.1; it essentially consists
of three blocks. The block B1 implements the output nonlinear function by exploiting the natural
output saturation of amplifiers. It basically consists of an inverting amplifier stage in which the
gain has to be chosen so that the output saturates when the input voltage reaches the breakpoints
(i.e. when |xij | ≥ 1). This amplifier is followed by a voltage divider that is used to scale the output
voltage in the [-1,1] range. From these considerations the following design equations hold:
R8/R7 = VsatA/Vsatx        (3.1.2a)

R7/R8 = R10/(R9 + R10)        (3.1.2b)
where VsatA is the output saturation voltage of A1, while Vsatx is its corresponding input voltage
(i.e., in our case Vsatx = 1). The input and output impedances of B1 are R7 and the parallel of R9
and R10 , respectively.
Figure 3.2: Second circuit realization for a SC-CNN cell.
The second block, B2, is a simple unity gain inverting amplifier (so R5 = R6 ) with an input
impedance equal to R5 .
The block B3 is the fundamental core of the cell and is constituted by an inverting summing amplifier
followed by a simple RC network. The impedances seen from the two inputs −V1 and −V2 are R1
and R2 . If the parallel of the input impedances of blocks B2 and B1 is very high, compared with the
output impedance of block B3 (that is, R4/(1 + jωR4Cj)), then blocks B2 and B1 do not significantly
influence the capacitor voltage. If this hypothesis is satisfied the following state equation holds:
Cj ẋj = −xj/R4 + (R3/(R1 R4)) V1 + (R3/(R2 R4)) V2        (3.1.3)
This equation is formally equivalent (apart from a constant multiplying coefficient) to the SC-CNN
model (3.1.1); in fact the inputs can be fed by the signals corresponding to contributions from
the other cells.
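For instance, equation (3.1.3) can be used to size the input resistors from the desired template gains. The helper below is hypothetical: the function name and the default values are only illustrative (loosely inspired by the part values of Table 3.1), not the design procedure actually followed.

```python
def size_cell(gains, R3=20e3, R4=20e3, Cj=100e-9):
    # Hypothetical sizing helper based on eq. (3.1.3), rewritten as
    #   R4*Cj*dxj/dt = -xj + (R3/R1)*V1 + (R3/R2)*V2 + ...
    # Given the desired input gains g_i = R3/R_i, it returns the input
    # resistors R_i and the cell time constant R4*Cj.
    resistors = [R3/g for g in gains]
    return resistors, R4*Cj

rs, tau = size_cell([2.0, 0.5])
assert rs == [10e3, 40e3]          # R1 = 10 kOhm, R2 = 40 kOhm
assert abs(tau - 2e-3) < 1e-12     # 2 ms cell time constant
```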
However, this circuit can be simplified reducing the number of op-amps needed, if an algebraic
summing amplifier is used instead of the summing inverting amplifier of block B3[71]. In this way
the inverting amplifier of block B2 can be avoided because each of the possible signs for the desired
gains can be obtained easily. This circuit is shown in Fig.3.2 and is made up of just two blocks:
the algebraic summing amplifier of block B2 and the non-inverting amplifier block B1 used to
implement the nonlinear output function (so, again, this is designed to have a gain corresponding
to the VsatA /Vsatx ratio). Here again, the input impedance of B1 is chosen so that it is very high with
respect to the output impedance of B2. Both blocks have been completed with offset compensation
resistors (R5 and R13) that must be chosen in accordance with the following relations:

1/R5 = 1/R11 + 1/R1 + 1/R2 − 1/R3 − 1/R4        (3.1.4a)

1/R13 = 1/R8 + 1/R7 − 1/R6        (3.1.4b)

Figure 3.3: Chua's Oscillator.
The corresponding state equation for the single cell is therefore:

Cj ẋj = −xj/R12 + (R11/(R1 R12)) V1 + (R11/(R2 R12)) V2 − (R11/(R3 R12)) V3 − (R11/(R4 R12)) V4        (3.1.5)
Both the schemes presented can easily be extended to an arbitrary number of inputs.
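The compensation relations (3.1.4a)-(3.1.4b) share the same reciprocal-sum form, so a single helper can evaluate both. The sketch below is hypothetical (the function name and the example values are illustrative):

```python
def offset_resistor(plus, minus):
    # solves 1/R = sum of 1/Ri over `plus` minus sum of 1/Ri over `minus`,
    # the common form of the compensation relations (3.1.4a) and (3.1.4b)
    g = sum(1.0/r for r in plus) - sum(1.0/r for r in minus)
    return 1.0/g

# quick check: 1/R = 1/1 + 1/1 - 1/4 = 1.75, hence R = 4/7
assert abs(offset_resistor([1.0, 1.0], [4.0]) - 4.0/7.0) < 1e-12
```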
3.2 Chua's Oscillator dynamics generated by the SC-CNN

3.2.1 Main Result
The Chua’s oscillator6, shown in Fig. 3.3, is the simplest autonomous third-order nonlinear electronic
circuit with a rich variety of dynamical behaviours including chaos, stochastic resonance, 1/f noise
spectrum and chaos-chaos intermittency[38, 72]. Its state equation is:
dv1
1
=
[G(v2 − v1 ) − g(v1 )]
dτ
C1
dv2
1
=
[G(v1 − v2 ) + i3 ]
dτ
C2
1
di3
= − [v2 + R0 i3 ]
dτ
L
6 Also
known as the unfolded Chua’s circuit.
(3.2.1a)
49
where:
g(v1 ) = Gb v1 + 0.5 · (Ga − Gb ) · [|v1 + E| − |v1 − E|]
(3.2.1b)
The PieceWise-Linear resistor Nr of i − v-characteristic g(v) is known as the Chua’s diode.
Taking:

x = v1/E;   y = v2/E;   z = i3/(E G);
t = (τ G)/C2;   α = C2/C1;   β = C2/(L G²);   γ = (C2 R0)/(G L);        (3.2.2)
m0 = (Ga/G) + 1;   m1 = (Gb/G) + 1;

equation (3.2.1) can be rewritten in the more convenient dimensionless form:

ẋ = α [y − h(x)]
ẏ = x − y + z        (3.2.3a)
ż = −β y − γ z

where:

h(x) = m1 x + 0.5 · (m0 − m1) · [|x + 1| − |x − 1|]        (3.2.3b)
x, y, and z being the state variables and α, β, γ, m0, m1 the system parameters.
Chua proved that state equation (3.2.3), which is uniquely determined by 5 parameters, is topologically
conjugate⁷ to a 21-parameter family of continuous odd-symmetric piecewise-linear equations
in R³ [72]. On account of this result Chua's oscillator is considered the canonical circuit for studying chaos.
A zoo of more than 30 strange attractors generated by these equations can be found in [72]. It is
nevertheless worth noting that the only strange attractor admissible in Chua's oscillator with positive
circuit elements is the so-called Double Scroll. All of the other attractors have been obtained with
negative capacitors/inductors.
Let us now consider the following result[46].
Proposition 3.2.1 (Chua's Oscillator dynamics in a SC-CNN). The dynamics of Chua's Oscillator with state equation (3.2.3) can be obtained in a SC-CNN with state equation (3.1.1) ∀ α, β, γ, m0, m1 ∈ R.

⁷ Refer to Def. B.4.2 in Appendix B.
Proof. Chua’s Oscillator is a third-order nonlinear dynamic system therefore if an equivalent dynamic
system is to be obtained by using SC-CNN cells, three cells are needed: one for each state variable
of the given circuit.
Let us assume the correspondence x1 = x, x2 = y, x3 = x.
Observe (3.2.3) and notice that the first state equation (associated with x) does not contain any
contribution from z; the second one (associated with y) contains contributions from x (the previous
one) and z (the following one); finally, the third equation (associated with z) does not contain any
contribution from x. This implies that a unity neighborhood radius r can be chosen. Moreover,
Chua’s Oscillator is an autonomous circuit so the control templates in (3.1.1) are equal to zero.
So, finally, by direct comparison of (3.2.3) and (3.1.1), the following identities provide the templates for the SC-CNN (3.1.1) and prove the equivalence of the two systems:

A1;2 = A1;3 = A2;2 = A2;3 = A3;2 = A3;3 = 0;   A2;1 = A3;1 = 0;
Â1;3 = Â3;1 = Â2;2 = 0;   I1 = I2 = I3 = 0;
A1;1 = α · (m1 − m0);   Â1;1 = 1 − α · m1;   Â1;2 = α;        (3.2.4)
Â2;1 = Â2;3 = 1;   Â3;2 = −β;   Â3;3 = 1 − γ;
It can be seen, however, that the SC-CNN with templates (3.2.4) is not space-invariant, because each
of the three cells has its own template set. This is fully compatible with the definition of CNN and
does not create technical problems because of the rather limited size of the network. Although no
further analytic proof is needed, it is worth noting that a three-layer linear SC-CNN is also
able to reproduce the above dynamics.
Hence a simple three-layer linear SC-CNN is able to fully reproduce the dynamics of a CNN
having the Chua's Oscillator as cell, a quite common case in the literature [20, 14, 73, 74].
3.2.2 Experimental Results
The SC-CNN provided by Proposition 3.2.1 can be immediately implemented by using one of the
approaches discussed in Section 3.1.1 for instance.
Let us consider a few examples.
A double scroll attractor is observed in Chua's circuit dynamics if α = 9, β = 14.286, γ = 0,
m0 = −1/7 and m1 = 2/7. The simulated phase portrait in the x − y plane for the Chua's
oscillator equation (3.2.3) is shown in Fig. 3.4. Substituting these parameter values in (3.2.4),
the corresponding SC-CNN is obtained. A circuit implementing it is shown in Fig. 3.5 [71]. The
Figure 3.4: The Double Scroll Attractor in the x − y plane.
Figure 3.5: A SC-CNN implementation for the Double Scroll dynamics.
Figure 3.6: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN.
corresponding experimental phase portrait in the x1 − x2 plane is shown in Fig. 3.6, while the part
list is reported in Table 3.1. It is interesting to note that with this approach, given that the templates
have no restriction on signs, the SC-CNN can implement in a straightforward way any of the strange
attractors that in Chua's Oscillator would have required negative elements⁸.
In this regard let us consider two examples. In the first one the following set of parameters
is considered: α = −4.08685, β = −2, γ = 0, m0 = −1/7, m1 = 2/7. The simulated attractor
⁸ Implementations of negative resistances, capacitances and inductances do exist, but they involve complex and
often band-limited additional circuitry.
Cell 1: R1 = 4K; R2 = 13.2K; R3 = 5.7K; R4 = 20K; R5 = 20K; R6 = 1K; R7 = 75K; R8 = 75K; R9 = 1M; R10 = 1M; R11 = 12.1K; R12 = 1K; C1 = 100n.
Cell 2: R13 = 51.1K; R14 = 100K; R15 = 100K; R16 = 100K; R17 = 100K; R18 = 1K; C2 = 100n.
Cell 3: R19 = 8.2K; R20 = 100K; R21 = 100K; R22 = 7.8K; R23 = 1K; C3 = 100n.

Table 3.1: Part list for the circuit shown in Fig. 3.5.
Figure 3.7: Strange Attractor for the first set of parameters in the x − y plane.
obtained by system (3.2.3) is shown in Fig. 3.7. The corresponding experimental phase portrait
obtained with the SC-CNN is shown in Fig. 3.8.
In the second example, instead, the following
set is considered: α = −6.69191, β = −1.52061, γ = 0, m0 = −1/7, m1 = 2/7. Correspondingly,
Fig. 3.9 shows the simulation obtained by the Chua's Oscillator while Fig. 3.10 shows the
experimental phase portrait.
3.3 Chaos of a Colpitts Oscillator
It has been recently shown that a Colpitts Oscillator, for a particular choice of its parameters, shows
a strange attractor in its dynamics [75]. Besides, it has been proved that this attractor is actually
topologically conjugate to a member of the Chua's circuit family [76, 77].
A Piece-Wise Linear (PWL)⁹ model of the Colpitts oscillator is [75]:

C dVCE/dτ = IL − IC;
C dVBE/dτ = −(VEE + VBE)/REE − IL − IB;        (3.3.1a)
L dIL/dτ = VCC − VCE + VBE − IL RL;

⁹ See Def. A.2.4 of Appendix A.
Figure 3.8: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN for the first set of
parameters.
Figure 3.9: Strange Attractor for the second set of parameters in the x − y plane.
Figure 3.10: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN for the second set
of parameters.
with:

IB = 0 if VBE ≤ VTH;   IB = (VBE − VTH)/RON if VBE > VTH        (3.3.1b)

IC = βF IB        (3.3.1c)

where VCE, VBE and IL are the state variables. The strange behavior is observed in the dynamics of
the circuit if the following values are assumed for the model parameters:

VTH = 0.75 V; RON = 200 Ω; RL = 35 Ω; L = 98.5 µH;
C = 54 nF; REE = 400 Ω; βF = 256; VEE = −5 V; VCC = 5 V.        (3.3.2)
The phase portrait obtained by PSPICE simulation of the Colpitts Oscillator is depicted in Fig.
3.11. This strange attractor can be reproduced in a SC-CNN composed of three cells [78, 79].
Proposition 3.3.1. The dynamics of the Colpitts Oscillator with state equation (3.3.1) can be obtained in a SC-CNN with state equation (3.1.1).
Proof. The following normalized variables are assumed to represent the system given in (3.3.1):
x1 = VCE/VCC ;   x2 = (aVBE + b)/k ;   x3 = RL IL/VCC ;   t = τ/(RL C) ;          (3.3.3a)
Figure 3.11: PSPICE simulation of the Colpitts attractor in the Vce − Vbe plane.
with:
k = aVTH + b
(3.3.3b)
a and b being suitable real constants used to obtain the required nonlinearity (a possible choice is
a = b = 4). Moreover, the nonlinear function (3.3.1b) can be substituted with the following function
without changes in the system behaviour:
IB = [k/(aRON)] [x2 − 0.5(|x2 + 1| − |x2 − 1|)]          (3.3.4)
because the independent variable x2 never assumes values less than −1, so, in the range of values
actually assumed by x2 and with the linear transformation described above, the two functions are
equivalent.
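As a numerical sanity check (not part of the thesis), the following Python sketch compares the original diode law (3.3.1b) with the saturation-based replacement (3.3.4) on the range where x2 ≥ −1, using the parameter values of (3.3.2) and the suggested choice a = b = 4:

```python
RON, VTH = 200.0, 0.75     # diode on-resistance and threshold from (3.3.2)
a = b = 4.0                # constants suggested in the text
k = a * VTH + b            # k = a*VTH + b, from (3.3.3b); here k = 7

def ib_diode(vbe):
    """Original PWL diode law (3.3.1b)."""
    return 0.0 if vbe <= VTH else (vbe - VTH) / RON

def ib_sat(vbe):
    """Saturation-based replacement (3.3.4) in the normalized variable x2."""
    x2 = (a * vbe + b) / k
    y2 = 0.5 * (abs(x2 + 1) - abs(x2 - 1))
    return k / (a * RON) * (x2 - y2)

# the two laws agree wherever x2 = (a*vbe + b)/k >= -1, i.e. vbe >= -(k + b)/a
for i in range(200):
    vbe = -(k + b) / a + i * 0.03
    assert abs(ib_diode(vbe) - ib_sat(vbe)) < 1e-12
```

Below the threshold x2 = −1 the two functions diverge, but that region is never visited by the state, as the text observes.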
Therefore the following dimensionless state representation is obtained:

ẋ1 = −x1 + x1 − [βF kRL/(aRON VCC)] x2 + x3 + [βF kRL/(aRON VCC)] y2 ;
ẋ2 = −x2 + (1 − RL/REE − RL/RON) x2 − (aVCC/k) x3 + (RL/RON) y2 + RL(b − aVEE)/(kREE) ;          (3.3.5a)
ẋ3 = −x3 − (RL²C/L) x1 + [kRL²C/(aLVCC)] x2 + (1 − RL²C/L) x3 + RL²C/L − bRL²C/(aLVCC) ;

where:

y2 = 0.5 · (|x2 + 1| − |x2 − 1|).          (3.3.5b)
Figure 3.12: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN for the Colpitts
dynamics.
But this is the equation of a SC-CNN with 3 cells and the following template coefficients:

Â1;1 = 1 ;   Â1;2 = −βF kRL/(aRON VCC) ;   Â1;3 = 1 ;   A1;2 = βF kRL/(aRON VCC) ;
Â2;2 = 1 − RL/REE − RL/RON ;   Â2;3 = −aVCC/k ;   A2;2 = RL/RON ;
Â3;1 = −RL²C/L ;   Â3;2 = kRL²C/(aLVCC) ;   Â3;3 = 1 − RL²C/L ;          (3.3.6)
I2 = RL(b − aVEE)/(kREE) ;   I3 = RL²C/L − bRL²C/(aLVCC)
This proves the proposition.
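As a quick numerical illustration (an addition to the text, assuming the suggested choice a = b = 4), a few of the template coefficients (3.3.6) can be evaluated for the parameter set (3.3.2) in Python:

```python
# Parameter set (3.3.2) of the Colpitts oscillator
BETA_F, V_TH, R_ON, R_L = 256.0, 0.75, 200.0, 35.0
L, C, R_EE, V_EE, V_CC = 98.5e-6, 54e-9, 400.0, -5.0, 5.0
a = b = 4.0
k = a * V_TH + b                                  # k = 7

A12_hat = -BETA_F * k * R_L / (a * R_ON * V_CC)   # state template of x2 on cell 1
A22_hat = 1 - R_L / R_EE - R_L / R_ON             # state template of x2 on cell 2
A33_hat = 1 - R_L**2 * C / L                      # state template of x3 on cell 3
I2 = R_L * (b - a * V_EE) / (k * R_EE)            # bias of cell 2

assert abs(A12_hat + 15.68) < 1e-9
assert abs(A22_hat - 0.7375) < 1e-9
assert abs(A33_hat - 0.328426396) < 1e-6
assert abs(I2 - 0.3) < 1e-9
```

Such numeric values are what an op-amp realization of the three cells must reproduce through its resistor ratios.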
The experimental phase portrait obtained by the SC-CNN is shown in Fig. 3.12.
3.4
Hysteresis Hyperchaotic Oscillator
In continuous-time systems, when the system order is higher than 3, it is possible to have more than one positive Lyapunov Exponent10. If this is the case, the corresponding behavior is known as Hyperchaos.
T. Saito recently introduced a four-dimensional nonlinear autonomous circuit that shows hyperchaos [80]. It is called the Saito Hysteresis Chaos Generator (for simplicity we will refer to it as the SHCG).

10 See Section B.3.3 in Appendix B.
Figure 3.13: Saito Hysteresis Chaos Generator.
The dimensionless state equation of the SHCG is:

ẋ = −z − w
ẏ = γ(2δy + z)          (3.4.1a)
ż = ρ(x − y)
ε ẇ = x − h(w)

where:

h(w) = w − (|w + 1| − |w − 1|)          (3.4.1b)

x, y, z and w being the state variables and γ, δ, ρ and ε the system parameters. The SHCG is shown in Fig. 3.13. Nr is a nonlinear resistor and is responsible for h(w). Letting ε → 0, the nonlinear resistor Nr and the inductor L0 originate the jump phenomenon and hysteresis [81].
Moreover, by varying the parameters of the SHCG it is possible to generate a wide variety of
dynamic behaviours (periodic solutions, quasi-periodic solutions, chaos, hyperchaos).
This fourth-order oscillator is, again, a member of the SC-CNN family. In fact, the following
proposition holds [82]:
Proposition 3.4.1. The dynamics of the SHCG circuit with state equation (3.4.1) can be obtained in a SC-CNN with state equation (3.1.1), ∀γ, δ, ρ, ε ∈ R.
Proof. The proof is immediately achieved if in equation (3.1.1), for a SC-CNN composed of 4 cells,
59
Figure 3.14: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN for the SHCG
hyperchaotic attractor.
the following template coefficients are taken into consideration:

Ak;k = 0 for k = 1, 2, 3 ;   Aj;k = 0 for j, k = 1, . . . , 4 (j ≠ k) ;
Â1;2 = Â2;1 = Â2;4 = Â3;4 = Â4;2 = Â4;3 = 0 ;   Ij = 0 for j = 1, . . . , 4 ;
Â1;1 = 1 ;   Â1;3 = Â1;4 = −1 ;   Â2;2 = 1 + 2γδ ;   Â2;3 = γ ;          (3.4.2)
Â3;1 = ρ ;   Â3;2 = −ρ ;   Â3;3 = 1 ;   A4;4 = 2/ε ;   Â4;1 = 1/ε ;   Â4;4 = 1 − 1/ε ;
and assuming the correspondence x1 = x, x2 = y, x3 = z and x4 = w.
The experimentally observed hyperchaotic attractor, corresponding to the parameter set γ = 1, ρ = 14, δ = 1 and ε → 0 (in practice ε = 10⁻² is sufficient), obtained in the four-cell SC-CNN is reported in Fig. 3.14.
As an aside, this is the very first example of a true hyperchaotic attractor ever observed in a CNN [82].
3.5
n-Double Scroll Attractors
In Section 3.2 we already encountered the well-known Double-Scroll attractor11 of the Chua’s Oscillator. Indeed, this attractor and its geometric structure are probably among the most extensively
studied [83, 84, 38] in Electrical Engineering.
In [85] J.A.K.Suykens and J.Vandewalle introduced a new family of circuits derived from the
traditional Chua’s circuit modifying the characteristic of the Chua’s diode. The corresponding
11 That has been proved to be a true chaotic attractor in the sense of Šilnikov [83]. See also Section B.5 in Appendix B.
attractors of this generalized Chua's circuit have been called n-Double Scroll attractors. One of these looks like several double-scroll attractors of different sizes, nested one inside another like Russian matryoshka dolls. In this framework, the classic Double Scroll corresponds to the 1-Double Scroll.
The generalized Chua's Diode of J.A.K. Suykens and J. Vandewalle, however, had a quite involved i−v characteristic, presenting intervals with polynomial nonlinearity and singular points. Indeed, a circuit implementation of that specific family has never been presented.
In this Section it will be shown that n-Double Scroll attractors12 can also be obtained if a PWL
continuous i − v-characteristic is chosen for the Chua’s Diode.
Moreover, the obtained n-Double Scroll family will be realized by a SC-CNN with appropriate
output nonlinearity.
3.5.1
A new realization for the n-Double Scroll family
A general expression for an (n + 1)-segment scalar PWL function f : R → R is given by:

f(x) = a0 + a1 x + Σ_{j=1}^{n} bj |x − Ej|          (3.5.1)

where E1 < E2 < · · · < En are the n break-points and a0, a1, b1, b2, . . . , bn ∈ R are related to the segment slopes and to f(0) by the following expressions:

a1 = (1/2)(m0 + mn) ,   bj = (1/2)(mj − mj−1) ,   a0 = f(0) − Σ_{j=1}^{n} bj |Ej|          (3.5.2)
in which m0 is the first linear segment slope and mj is the (j + 1)-th linear segment slope (see Fig.
3.15).
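The mapping (3.5.2) from segment slopes to the coefficients of (3.5.1) can be sketched in Python; the slope and break-point values below are illustrative choices, not taken from the thesis:

```python
def pwl_coeffs(slopes, breaks, f0=0.0):
    """Map segment slopes m0..mn and break points E1<...<En to the
    coefficients of f(x) = a0 + a1*x + sum_j bj*|x - Ej| (eqs. 3.5.1-3.5.2)."""
    n = len(breaks)
    a1 = 0.5 * (slopes[0] + slopes[n])
    bs = [0.5 * (slopes[j] - slopes[j - 1]) for j in range(1, n + 1)]
    a0 = f0 - sum(bj * abs(Ej) for bj, Ej in zip(bs, breaks))
    return a0, a1, bs

def pwl_eval(x, a0, a1, bs, breaks):
    return a0 + a1 * x + sum(bj * abs(x - Ej) for bj, Ej in zip(bs, breaks))

# sanity check: numerically recover the slope of each segment
slopes, breaks = [-1/7, 2/7, -4/7, 2/7], [-1.0, 1.0, 2.15]
a0, a1, bs = pwl_coeffs(slopes, breaks, f0=0.0)
h = 1e-6
for m, x in zip(slopes, [-2.0, 0.0, 1.5, 3.0]):  # one probe point per segment
    d = (pwl_eval(x + h, a0, a1, bs, breaks) - pwl_eval(x - h, a0, a1, bs, breaks)) / (2 * h)
    assert abs(d - m) < 1e-6
```

With n break points the function has n + 1 segments, so `slopes` carries one more entry than `breaks`.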
Let us then introduce the following generalized Chua's oscillator:

Definition 3.5.1 (PWL n-Double Scroll Family [86]). The state equation:

ẋ = α [y − h(x)]
ẏ = x − y + z          (3.5.3a)
ż = −βy − γz

where:

h(x) = m2n−1 x + (1/2) Σ_{k=1}^{2n−1} (mk−1 − mk) [|x + bk| − |x − bk|]          (3.5.3b)

12 That, most likely, are topologically conjugated with the n-Double Scrolls in [85].
Figure 3.15: A generic PWL continuous function.
bk are the 2n − 1 break points and mk are the 2n corresponding slopes, define a family of circuits, systems or vector fields that will be referred to as the PWL n-Double Scroll Family.
Observe that b1 = 1, h(0) = 0, h(x) = −h(−x), and the restriction of (3.5.3) to −b2 ≤ x ≤ b2 coincides with the classic Chua's circuit, but with the nonlinear resistor modified while maintaining its PWL nature.
Moreover, (3.5.3) is invariant under the transformation s : R³ → R³:

s(x, y, z) = (−x, −y, −z)          (3.5.4)

and so the geometry of the state space of (3.5.3) has a symmetry about the origin.
Moreover, to understand what follows it is important to recall a few facts regarding the 1-Double Scroll.

1. γ = 0;
2. The state space X of the Chua's Oscillator can be partitioned into three subspaces:

D0 = {[x y z] ∈ X : |x| ≤ 1}          (3.5.5a)
D1 = {[x y z] ∈ X : x > 1}          (3.5.5b)
D−1 = {[x y z] ∈ X : x < −1}          (3.5.5c)

3. There exist three fixed points (saddle foci):

x0 = [0 0 0] ∈ D0 ,   x1 = [κ 0 −κ] ∈ D1 ,   x−1 = [−κ 0 κ] ∈ D−1          (3.5.6a)

being:

κ .= (m1 − m0)/m1          (3.5.6b)
4. The eigenvalues of the linearized systems corresponding to these fixed points are:

λ(x0) = {γ0, σ0 ± jω0} ;   γ0 > 0 ; σ0 < 0 ; ω0 > 0 ;
λ(x1) = λ(x−1) = {γ1, σ1 ± jω1} ;   γ1 < 0 ; σ1 > 0 ; ω1 > 0.          (3.5.7)

and the dimensions of the corresponding stable and unstable eigenspaces are:

dim Es(x±1) = dim Eu(x0) = 1
dim Eu(x±1) = dim Es(x0) = 2          (3.5.8)
5. Roughly speaking, the shape of the double scroll consists of two outer discs (due to x±1), linked by inner (almost) straight trajectories (due to x0).

The proof in [83] of the true chaotic nature of the Double Scroll is based on the existence of an odd-symmetrically related pair of homoclinic trajectories H∓ based at the origin x0 and on the consequent application of the Šilnikov Theorem.
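The sign pattern of fact 4 can be checked numerically (an addition to the text) by linearizing (3.5.3) in D0 and D±1, where h′(x) equals m0 and m1 respectively, for the classic parameters α = 9, β = 14.286, γ = 0:

```python
import numpy as np

alpha, beta, m0, m1 = 9.0, 14.286, -1/7, 2/7   # classic double-scroll parameters (gamma = 0)

def jacobian(slope):
    # Linearization of (3.5.3) in a region where h'(x) = slope
    return np.array([[-alpha * slope, alpha, 0.0],
                     [1.0, -1.0, 1.0],
                     [0.0, -beta, 0.0]])

lam0 = np.linalg.eigvals(jacobian(m0))   # around x0 (region D0)
lam1 = np.linalg.eigvals(jacobian(m1))   # around x+-1 (regions D+-1)

real0 = sorted(l.real for l in lam0)
real1 = sorted(l.real for l in lam1)
# x0: one positive real eigenvalue, complex pair with negative real part
assert real0[2] > 0 and real0[0] < 0 and real0[1] < 0
# x+-1: one negative real eigenvalue, complex pair with positive real part
assert real1[0] < 0 and real1[1] > 0 and real1[2] > 0
```

This confirms the stable/unstable eigenspace dimensions stated in (3.5.8).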
Let us now discuss an algorithm to generate n-Double Scrolls from (3.5.3). From a geometrical point of view, once the double-scroll order n is fixed, the goal is to find the proper set of slopes for (3.5.3b) such that

1. the origin x0 keeps its saddle-focus nature;
2. a new pair of homoclinic trajectories Hn∓ (still based at x0) is formed.

Fortunately, a sufficient condition for the first requirement is to keep m0 of (3.5.3b) at the value known for the 1-Double Scroll (i.e. m0 = −1/7). As for the second condition, it can be noticed that the horseshoe chaos, according to the Šilnikov Theorem, is structurally stable.
Exploiting the above cited symmetry, only the sub-space for x > 0 is considered.
Let us first obtain the 2-Double Scroll, starting from the 1-Double Scroll. To this purpose, it is
necessary to add two new discs to the sides of the classical double scroll.
So let us add two new break points (b2 and b3 respectively) to the right hand side of b1 in order to
add a new segment with negative slope and another one with positive slope. The same modification
will be made on the negative side in order to maintain the odd-symmetry of h(x) (see Fig. 3.16).
The new segments' slopes are m2 and m3.
As in the case of Chua's circuit, it is useful to partition the state space into different regions determined by the breakpoints of the PWL nonlinearity:
R1 = {[x y z] ∈ X : x ≤ −b3}
R2 = {[x y z] ∈ X : −b3 < x ≤ −b2}
R3 = {[x y z] ∈ X : −b2 < x ≤ −b1}
R4 = {[x y z] ∈ X : −b1 < x < b1}
R5 = {[x y z] ∈ X : b1 ≤ x < b2}
R6 = {[x y z] ∈ X : b2 ≤ x < b3}
R7 = {[x y z] ∈ X : x > b3}          (3.5.9)
What we want to do is to introduce the new break points b2 and b3 close enough to the focus x1 so that some of the external trajectories in R5, following an outward spiral path, can encounter the boundary imposed by b2. When this happens, these trajectories are "snatched" because, due to m2,
Figure 3.16: h(x) for the 2-Double Scroll.
the vector field changes direction in R6. Hence, in R6, the vector field accelerates the trajectories towards the new boundary b3. At this point, the state enters R7, where a new unstable focus (the stability of which depends on m3), of the same nature as x1, forces an outward spiral motion. This creates the desired outer disc. These growing spiral trajectories will eventually hit again the boundary fixed by b3. Depending on the region of this intersection, they are redirected back inside the outer region or accelerated toward the inner regions (and so toward the inner double scroll again).
This simple and intuitive strategy suggests some guidelines:

1. choose the break point b2 near the right edge of the 1-scroll disc, i.e. around x = 2;
2. choose the slope of the outer segment (m3 in this case), corresponding to the new disc, equal to the slope associated with the inner disc, i.e. m3 = m1;
3. if the slope m2 is chosen equal to m0 then too few trajectories will be snatched from the inner disc, therefore a steeper slope is necessary;
4. choose the remaining parameters equal to the Chua's double scroll parameters, i.e. α = 9, β = 14.286, γ = 0, m0 = −1/7 and m1 = 2/7.
Figure 3.17: The 2-Double Scroll attractor.
With the above criteria in mind a feasible choice is rapidly obtained by simulation. For instance:
b2 = 2.15;
b3 = 3.6;
m2 = −4/7;
m3 = 2/7;
(3.5.10)
With these parameters the attractor of Fig. 3.17 is obtained. The same attractor, together with the 7 regions of (3.5.9), is shown in Fig. 3.18.
If a 3-Double Scroll is desired, the above outlined strategy can be repeated. In fact, starting from the 2-Double Scroll, two further break points b4 and b5 must be added to h(x), with corresponding negative and positive slopes.
The 3-Double Scroll is shown in Fig. 3.19. It has been obtained choosing b4 = 8.2, b5 = 13, m4 = m2 = −4/7 and m5 = m3 = 2/7.
It is now clear that, with this procedure, the n-Double Scroll is obtained from the (n − 1)-Double Scroll. To simplify the tuning even further, it is suggested to take the negative and positive slopes equal to −4/7 and 2/7 respectively, as in the case of the 3-Double Scroll; the break-point values can then be found using the above algorithm.
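The nonlinearity (3.5.3b) with the choice (3.5.10) can be sketched and sanity-checked in Python (odd symmetry, h(0) = 0, and the inner slope kept at m0 = −1/7); the code is an illustration added here, not from the thesis:

```python
def h(x, slopes, breaks):
    """Nonlinearity (3.5.3b): slopes m0..m_{2n-1}, break points b1 < ... < b_{2n-1}."""
    m = slopes
    val = m[-1] * x                     # leading term m_{2n-1} * x
    for k, bk in enumerate(breaks, start=1):
        val += 0.5 * (m[k - 1] - m[k]) * (abs(x + bk) - abs(x - bk))
    return val

# 2-double scroll: b = (1, 2.15, 3.6) and m = (-1/7, 2/7, -4/7, 2/7) as in (3.5.10)
slopes = [-1/7, 2/7, -4/7, 2/7]
breaks = [1.0, 2.15, 3.6]

assert abs(h(0.0, slopes, breaks)) < 1e-12                             # h(0) = 0
assert abs(h(2.0, slopes, breaks) + h(-2.0, slopes, breaks)) < 1e-12   # odd symmetry
eps = 1e-6
d_in = (h(eps, slopes, breaks) - h(-eps, slopes, breaks)) / (2 * eps)
assert abs(d_in - (-1/7)) < 1e-9       # innermost slope stays at m0
d_out = (h(5 + eps, slopes, breaks) - h(5 - eps, slopes, breaks)) / (2 * eps)
assert abs(d_out - 2/7) < 1e-9         # outermost slope is m3 = 2/7
```

Extending `slopes` and `breaks` with (b4, b5, m4, m5) as above yields the 3-Double Scroll nonlinearity in the same way.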
Figure 3.18: The 2-Double Scroll attractor and the seven regions in which the state space can be
partitioned.
Figure 3.19: The 3-Double Scroll attractor.
Figure 3.20: The output nonlinearity of the SC-CNN admitting a 2-Double Scroll attractor.
3.5.2
n-Double Scrolls in SC-CNN’s
Once n-Double Scrolls have been obtained by means of the family of PWL systems (3.5.3), the
circuit implementation by using SC-CNN is straightforward.
A modification of the output function y = f (x) is however required. The classic saturation
function (1.1.1) is replaced by its PWL generalization:
yj = (1/2) Σ_{k=1}^{2n−1} nk (|xj + bk| − |xj − bk|)          (3.5.11)
where bk are the above fixed break-points and nk are related to the slopes of the segments composing
(3.5.11). This function is shown in Fig. 3.20 for the 2-Double Scroll case.
The template set for this three-cell SC-CNN will be the same as the one seen for the case of the Chua's Oscillator (refer to equation (3.2.1) in Proposition 3.2.1), with the only exception of Â1;1, now equal to 1 − α m2n−1.
The coefficients nk in (3.5.11) are given by:
nk = α(mk − mk−1 ),
k = 1, . . . , 2n − 1
(3.5.12)
As for the circuit implementation, again, the only difference with respect to the other cases regards the circuit for the output function (3.5.11). The circuit is shown in Fig. 3.21. This equation is
Figure 3.21: A circuit implementation for the output nonlinearity of the SC-CNN admitting a 2-Double Scroll attractor.
obtained by a weighted sum of three nonlinear terms. These terms can be separately realized by
inverting/non-inverting amplifiers whose outputs are saturated when their inputs reach the desired
break-point (B1, B2 and B3). Afterwards, these three outputs are added with appropriate weights
by a summing amplifier (B4). Consider the blocks B1, B2, B3. If b1 , b2 and b3 are the corresponding
desired break points then the following equations hold:
1 + R2/R1 = Esat/b1 ,   R4/R3 = Esat/b2 ,   1 + R6/R5 = Esat/b3          (3.5.13)
where Esat is the saturation voltage of the differential amplifiers. The following summing stage is
easily designed for the proper weights, taking into account that the various outputs of B1, B2, B3
are saturated when xj reaches the corresponding break points. Further details about the actual
implementation and the values of the components can be found in [86]. The experimental phase
portraits of the 2-Double Scroll obtained with this SC-CNN are shown in Fig. 3.22.
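The design relations (3.5.13) fix only resistor ratios, not absolute values. A small Python sketch of the computation follows; the saturation voltage Esat = 13 V used below is a hypothetical value, not taken from [86]:

```python
def gain_ratios(esat, b1, b2, b3):
    """Resistor ratios for the break-point amplifiers B1-B3 from (3.5.13):
    1 + R2/R1 = Esat/b1,  R4/R3 = Esat/b2,  1 + R6/R5 = Esat/b3."""
    return esat / b1 - 1.0, esat / b2, esat / b3 - 1.0

# hypothetical Esat; break points of the 2-Double Scroll from (3.5.10)
ESAT = 13.0
r21, r43, r65 = gain_ratios(ESAT, 1.0, 2.15, 3.6)

assert abs((1 + r21) * 1.0 - ESAT) < 1e-12    # B1 output saturates at x = b1
assert abs(r43 * 2.15 - ESAT) < 1e-12         # B2 output saturates at x = b2
assert abs((1 + r65) * 3.6 - ESAT) < 1e-12    # B3 output saturates at x = b3
```

Each amplifier's gain is chosen so that its output reaches saturation exactly when the input crosses the corresponding break point, which is what makes the summed output reproduce (3.5.11).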
3.6
Nonlinear dynamics Potpourri
From what it has been shown until this point it may appear that only autonomous circuits, with
nonlinear resistors, whose i − v characteristic is a function of a single state variable, belong to the
Figure 3.22: Observed phase portraits of the 2-Double Scroll obtained in the SC-CNN. (a) x1 − x2 ,
(b) x2 − x3 , (c) x1 − x3 .
SC-CNN family.
In reality this limitation does not hold. In fact, in this Section, a further group of nonlinear circuits with particular features is realized by SC-CNN's. In particular, two non-autonomous circuits (one of which includes a nonlinear inductor), a circuit admitting a Canard, and a circuit with a nonlinearity that is a function of the sum of two state variables are considered.
3.6.1
A Non-autonomous Second Order Chaotic Circuit
A second-order non-autonomous (driven) circuit described in [87] is now taken into account. The state equations of the circuit are:

ε ẋ = y − f(x) ;
ẏ = −x + u + b ;          (3.6.1a)

with ε > 0; k > 0; b > 0; and

f(x) = x + 0.5(1 + k)(|x − 1| − |x + 1|) ;          (3.6.1b)
where x and y are the state variables and u is an external input. These two state equations can be
realized by a two-cell SC-CNN as stated by the following proposition [70].
Proposition 3.6.1. The forced oscillator (3.6.1) can be realized by a two-cell non-autonomous linear SC-CNN with the following templates:

Â1;1 = 1 − 1/ε ;   Â1;2 = 1/ε ;   A1;1 = (1 + k)/ε ;
Â2;1 = −1 ;   Â2;2 = 1 ;   B2;2 = 1 ;   I2 = b ;          (3.6.2)

the remaining coefficients being zero.
The behaviour of the system (3.6.1) is very sensitive to parameter variations. In particular, the case of a sinusoidal input signal u = a · cos(ωt) with the parameter choice ε = 0.2, a = 0.2, ω = 2.0π, k = 0.885, varying b within the narrow interval [1, 1.0154], was taken into consideration [87]. One of the attractors obtained with the SC-CNN based circuit is shown in Fig. 3.23.
3.6.2
A Circuit with a Nonlinear Reactive Element
The SC-CNN dynamic realization of a non-autonomous circuit containing a nonlinear inductor [88] is
considered. The electrical scheme is reported in Fig. 3.24. In [88] the following state representation
Figure 3.23: Observed phase portrait of the non-autonomous SC-CNN in x1 − x2 .
Figure 3.24: Circuit with nonlinear inductor.
has been given:

V̇1 = V2 ;
C (dφ(i)/di) V̇2 = −CRV2 − V1 + E cos ωt ;          (3.6.3a)

with:

φ(i) = { L0 i, if i ≤ i0 ;  L1 i + (φm − L1 i0), if i > i0 }          (3.6.3b)
where V2 is defined as V2 ≡ i/C, i and C being the inductor current and the capacitor value respectively. A strange behaviour appears if the circuit parameters are assumed as [88]: k = 0.032, ν = 0.7, B = 0.725, α → 0, being α = L1/L0, ν = √(L0 C) ω, k = R √(C/L0), B = E √(L0 C)/φm. From this it results:

L0 = 1 H ; L1 = 0.5 · 10⁻² H ; ω = 1 rad/s ;
E = 1 V ; C = 0.49 F ; φm = 0.9655 Wb ; R = 4.571 · 10⁻² Ω          (3.6.4)
However, using the model (3.6.3) the SC-CNN realization is not so straightforward. Therefore, an
alternative equivalent mathematical description is required.
Proposition 3.6.2. The circuit shown in Fig. 3.24 can be described by the following state representation:

φ̇ = es − R i(φ) − V1 ;
V̇1 = (1/C) i(φ) ;          (3.6.5a)

with:

i(φ) = { L0⁻¹ φ, if φ ≤ φm ;  L1⁻¹ (φ − φm) + i0, if φ > φm }          (3.6.5b)

and this system can be realized by a single-layer two-cell non-autonomous SC-CNN with the following templates: Â1;1 = 1 − R L1⁻¹; Â1;2 = −1/φm; B1;1 = 1/φm; A1;1 = R(L1⁻¹ − L0⁻¹); Â2;2 = 1; Â2;1 = φm/(C L1); A2;1 = φm (L0⁻¹ − L1⁻¹)/C; the remaining ones being zero.
Proof. The alternative state representation (3.6.5) is obtained by choosing the capacitor voltage V1
and the magnetic induction flux φ as state variables. With this choice, the circuit can be described
by the following equations:
es = Ri + φ̇ + V1 ;
C V˙1 = i;
(3.6.6)
73
and: φm = L0 i0; es = E · cos(ωt). From these relationships (3.6.5) is obtained. However, taking into account that real inductor nonlinearities are odd-symmetric, this nonlinear characteristic can be substituted with the following one:

i(φ) = L1⁻¹ φ + [(L0⁻¹ − L1⁻¹)/2] (|φ + φm| − |φ − φm|)          (3.6.7)
This substitution does not affect the dynamics of the system, because the values actually assumed by φ make the two characteristics equivalent. It is now clear that the model (3.6.5) with (3.6.7) can be suitably realized by a two-cell linear non-autonomous SC-CNN. In fact, taking x1 = φ/φm; x2 = V1; u1 = es as cell state variables and input, and with the following template coefficients:

Â1;1 = 1 − R L1⁻¹ ;   Â1;2 = −1/φm ;   B1;1 = 1/φm ;   A1;1 = R(L1⁻¹ − L0⁻¹) ;
Â2;2 = 1 ;   Â2;1 = φm/(C L1) ;   A2;1 = φm (L0⁻¹ − L1⁻¹)/C ;          (3.6.8)

the desired dynamic can be reproduced as in the other propositions reported.
The particular case in which parameter set (3.6.4) has been adopted can obviously be reproduced with the SC-CNN realization; the experimental phase portrait observed in the x1−x2 plane is shown in Fig. 3.25.
Let us note that (3.6.5) is an alternative model of (3.6.3), both describing the same physical system, so there exists a bijective correspondence among the orbits of the two representations. Moreover, the nonlinear functions involved are all invertible and differentiable almost everywhere. Finally, the same time variable is chosen. Therefore, a straightforward consequence of Proposition 3.6.2 is that:
Corollary 3.6.3. The strange attractors of systems (3.6.5) and (3.6.3) are topologically conjugate.
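The claimed equivalence between the branch characteristic (3.6.5b) and its odd-symmetric replacement (3.6.7) on the range φ ≥ −φm can be checked numerically with the parameter set (3.6.4); the Python sketch below is an added illustration:

```python
L0, L1, PHI_M = 1.0, 0.5e-2, 0.9655   # parameter set (3.6.4)
I0 = PHI_M / L0                        # since phi_m = L0 * i0

def i_branch(phi):
    """Branch characteristic (3.6.5b)."""
    return phi / L0 if phi <= PHI_M else (phi - PHI_M) / L1 + I0

def i_pwl(phi):
    """Odd-symmetric replacement (3.6.7)."""
    return phi / L1 + (1 / L0 - 1 / L1) / 2 * (abs(phi + PHI_M) - abs(phi - PHI_M))

# the two coincide for phi >= -phi_m, the range actually visited by the state
for i in range(400):
    phi = -PHI_M + i * 0.01
    assert abs(i_branch(phi) - i_pwl(phi)) < 1e-9
```

For φ < −φm the replacement mirrors the saturated branch instead of continuing the L0 slope, which is exactly the odd-symmetric behavior expected of a real inductor.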
3.6.3
Canards and Chaos
Canards are nonlinear phenomena that can be observed in so-called slow-fast systems [89, 90]. They are certain singular solutions of these nonlinear systems which at first move with a slow dynamic; then the state goes through the trajectory faster, subsequently slows down again, and so on until the solution vanishes while an irregular oscillation or a cycle appears. The main feature of Canards is that they are extremely sensitive to parameter variations; in fact, they are structurally unstable.
Our attention has been devoted to the Bonhoeffer-Van der Pol (BVP) system [89], mathematically
Figure 3.25: The Experimental phase portrait in x1 − x2 obtained by the SC-CNN.
described by the following state space equations:

ẋ = −[y + f(x)]/ε ;
ẏ = x − αy − β ;          (3.6.9a)

with α > 0; ε > 0, ε ∼ 0; β = 2.8 and:

f(x) = (x³ − 27x)/18 ;          (3.6.9b)
Moreover, the previous circuit has been generalized, leading to a third-order autonomous PWL system with the following state representation [90]:

ẋ = z − y ;
ẏ = α(x + y) ;          (3.6.10a)
ż = −[x + n(z)]/ε ;

with:

n(z) = βz + 0.5(γ − β)(|z + 1| − |z − 1|) ;          (3.6.10b)
and α > 0; β > 0; γ < 0. It is worth noting that in the BVP system (3.6.9) the nonlinear function f(x) in (3.6.9b) is a cubic continuous nonlinearity, while in the third-order generalized BVP (3.6.10) the nonlinearity n(z) in (3.6.10b) is a PWL one. It has been proved that the BVP system has Canard trajectories, while the third-order BVP cannot possess them because of the nature of its nonlinearity [89, 90].
Both of the previously assumed systems can be realized with SC-CNN's as already shown. While for the three-cell SC-CNN realizing the third-order BVP system a linear SC-CNN can be used, in the two-cell realization of the BVP system a linear SC-CNN with a smooth output function (instead of the stiff PWL saturation function) must be used. This smooth output nonlinearity was obtained by using a circuit with an appropriate diode network. The diode nonlinearity permits us to obtain the required smoother shape.
Proposition 3.6.4. The state equations of the BVP system (3.6.9) can be realized by a two-cell SC-CNN with the following templates: Â1;1 = 1; Â1;2 = −1/ε; A1;1 = −1/ε; Â2;1 = 1; Â2;2 = 1 − α; I2 = −β; the other template coefficients being zero, and taking the nonlinear function f(x) = (x³ − 27x)/18 as the output nonlinearity.
Proposition 3.6.5. The state equations of the third-order generalized BVP system (3.6.10) can be realized by a three-cell linear SC-CNN with the following templates: Â1;1 = 1; Â1;2 = −1; Â1;3 = 1; Â2;1 = α; Â2;2 = 1 + α; Â3;1 = −1/ε; Â3;3 = 1 − β/ε; A3;3 = (β − γ)/ε; the other template coefficients being zero.
The experimentally observed phase portraits, referring to the SC-CNN realizations, are shown in Figs. 3.26a and 3.26b.
3.6.4
Multimode Chaos in Coupled Oscillators
In [91] some interesting nonlinear phenomena were observed in coupled oscillators. The state equations of a single oscillator are:

L1 di1/dτ = −v − R i1 − vd(i1 + i2) ;
L2 di2/dτ = −vr(i2) − vd(i1 + i2) ;          (3.6.11a)
C dv/dτ = i1 ;
where i1, i2 and v are the state variables and vd and vr are nonlinear functions defined as follows:

vr(i2) = { V − r i2, if i2 > JA ;  [Rd rd/(2Rd + rd) − r] i2, if |i2| ≤ JA ;  −V − r i2, if i2 < −JA }          (3.6.11b)
Figure 3.26: (a) The canard observed in the SC-CNN realization of the BVP system. (b) The chaotic
attractor observed in the SC-CNN realization of the generalized BVP system.
where JA = [(2Rd + rd)/(Rd rd)] · V, and:

vd(i1 + i2) = { 3 · V, if i1 + i2 > JB ;  (3/2) rd (i1 + i2), if |i1 + i2| ≤ JB ;  −3 · V, if i1 + i2 < −JB }          (3.6.11c)

where JB = 2 · V / rd.
This circuit becomes chaotic if the following choice is made for its parameters: L1 = 100mH; L2 =
200mH; C = 0.068µF ; R = 10Ω; Rd = 560Ω; r = 1kΩ; rd = 15M Ω; V = 0.7V .
It can be noted that, in this case (see (3.6.11c)), a nonlinear function of two state variables is
present.
Proposition 3.6.6. The nonlinear dynamic system (3.6.11) is topologically conjugated to a three-cell linear SC-CNN with the following templates:

Â1;1 = 1 − R/L1 ;   Â1;2 = −(R/L1 + r/L2) (2Rd + rd)/(2V Rd) ;   Â1;3 = −rd/(2V L1) ;
A1;1 = −(1/L1 + 1/L2)(3rd/2) ;
Â2;2 = 1 − r/L2 ;   A2;1 = −3V Rd rd/[L2 (2Rd + rd)] ;   A2;2 = −V Rd rd/[L2 (2Rd + rd)] ;          (3.6.12)
Â3;3 = 1 ;   Â3;1 = 2V/(C rd) ;   Â3;2 = −2V/(C rd) ;
all the remaining template coefficients being zero.
Proof. Let us consider the following linear state transformation for system (3.6.11):
x1 = rd (i1 + i2)/(2 · V) ;   x2 = Rd rd i2/(2Rd + rd) ;   x3 = v ;          (3.6.13)
Therefore, (3.6.11) can be rewritten as follows:

ẋ1 = −(R/L1) x1 − (R/L1 + r/L2) [(2Rd + rd)/(2V Rd)] x2 − [rd/(2V L1)] x3 − (1/L1 + 1/L2)(3rd/2) y1 ;
ẋ2 = −(r/L2) x2 − {V Rd rd/[L2 (2Rd + rd)]} y2 − {3V Rd rd/[L2 (2Rd + rd)]} y1 ;          (3.6.14)
ẋ3 = −x3 + [2V/(C rd)] (x1 − x2) + x3 ;
These state equations represent the model of a three-cell linear SC-CNN if the template coefficients
(3.6.12) are assumed. This proves the proposition.
Figure 3.27: The attractor of the single oscillator (3.6.11) obtained by a SC-CNN and then mapped
into i1 − i2 .
Once the SC-CNN realization is accomplished, the original variables of the classical realization can be obtained by inverting the transformation13 (3.6.13). In this case the phase portrait observed, referring to the variables i1 and i2, is shown in Fig. 3.27.
3.6.5
Coupled Circuits
Two of the previously considered oscillators can be coupled by using a capacitor in series with the existing ones [91]. From a mathematical point of view this implies that the first equation in system (3.6.11) is modified as follows:
L1 di1/dτ = −v1 − R i1 − vd(i1 + i2) − v0 ;          (3.6.15a)

being:

C0 v0 = C v1 + C v2 ;          (3.6.15b)
where v0 is the branch voltage of the coupling capacitor C0, and v1 and v2 are the branch voltages of the two capacitors in the two Nishio-Ushida chaotic circuits.

13 Any linear transformation and/or its inverse can be obtained in a straightforward way by means of op-amp based arrays of algebraic adders.
Figure 3.28: A gallery of attractors from the six cells SC-CNN.
A six-state-equation system for the coupled oscillators is derived after some algebra on the expressions (3.6.15b) and (3.6.15a). This system can be realized using six SC-CNN cells in the same way as proved above for the autonomous circuit.
The attractors observed, corresponding to a variety of parameter sets (mainly varying the capacitors of the six cells), are shown in Fig. 3.28.
3.7
General Case and Conclusions
As seen in the previous Sections, a wide variety of well-known nonlinear circuits belong to the family of SC-CNN's. The list of examples could continue, but it is preferable to seek a general answer.
To this purpose the following sufficient condition is presented.
Proposition 3.7.1. Let us consider an n-th order circuit, system or vector field with state representation:

ẋ = Hx + G f(Lx)          (3.7.1a)

where x ∈ Rn, f : Rn → Rn, H, G, L ∈ Rn×n and L nonsingular (i.e. ∃L−1). Moreover:

∂fi/∂xj = { 0, if |i − j| > r ;  ⋛ 0, otherwise }          (3.7.1b)

where r is the neighbor radius. Then (3.7.1a) is topologically conjugated to the SC-CNN family (3.1.1).
Proof. Let us consider the nonsingular state variable linear transformation z = Lx. Hence x = L−1 z and ẋ = L−1 ż. By substitution in (3.7.1a) the following system is obtained:

L−1 ż = HL−1 z + G f(z)          (3.7.2)

namely:

ż = LHL−1 z + LG f(z)          (3.7.3)

This last equation is nothing else than the matrix form of (3.1.1), where LHL−1 defines the state templates and LG defines the feedback templates. Condition (3.7.1b) is necessary to satisfy the definition of local interaction imposed by the synaptic law (1.3.4) in Def. 1.3.4. In this regard it is important to observe that re-ordering the scalar equations composing system (3.7.1a) (and so re-numbering the components of x or z) can help satisfy (3.7.1b).
Condition (3.7.1b) must not be confused with the one for co-operative systems14 and it is actually
less restrictive.
Corollary 3.7.2. If all the hypotheses of Prop. 3.7.1 are satisfied and, moreover, f(x) is such that:

1. it is Piece-Wise Linear and
2. ∀i, j = 1, . . . , n and ∀x̃ ∈ Rn constant vector:

fi(x)|x=[x̃1 ,..., xj ,..., x̃n] = a0 + a1 xj + (1/2) a2 (|xj + bk| − |xj − bk|)          (3.7.4)

then system (3.7.1a) is realized by a SC-CNN with saturation output functions.
The extension of these results to non-autonomous circuits and systems is trivial.
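The change of variables used in the proof can be sketched numerically: for arbitrary (randomly generated) H, G and a nonsingular L, the trajectory derivative computed from (3.7.1a) matches the template form ż = LHL⁻¹z + LG f(z). The matrices and the elementwise tanh nonlinearity below are illustrative assumptions, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
H = rng.normal(size=(n, n))
G = rng.normal(size=(n, n))
L = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # close to I, hence nonsingular

f = np.tanh          # elementwise, so df_i/dx_j = 0 for i != j (satisfies 3.7.1b)
x = rng.normal(size=n)
z = L @ x

Linv = np.linalg.inv(L)
lhs = L @ (H @ x + G @ f(L @ x))                # L x' = z'
rhs = (L @ H @ Linv) @ z + (L @ G) @ f(z)       # SC-CNN template form
assert np.allclose(lhs, rhs)
```

Here LHL⁻¹ would be read off as the state templates and LG as the feedback templates, exactly as in the proof.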
Let us now discuss some of the many implications of the results of this Chapter.
14 See Def. B.3.13 in Appendix B.
3.7.1
Theoretical Implications
First of all, if it is true that the Chua's Circuit is the simplest electronic circuit able to show a wide variety of nonlinear dynamic behaviors, then from what has been seen in Section 3.2 it can be stated that the primitive circuit for obtaining such behaviors is the SC-CNN cell.
Moreover, CNN arrays based on the Chua's Oscillator as basic cell can actually be substituted by Multi (3) Layer SC-CNN's with a simpler and more homogeneous circuit topology.
The results shown provide a common electrical model for the study of a wide class of nonlinear circuits. The introduced approach unifies the field of "conventional" CNN's and that of nonlinear oscillators.
The SC-CNN becomes a general programmable generator of nonlinear dynamics.
3.7.2
Practical Implications
It is worth noticing that in a SC-CNN:

1. there are no inductors;
2. all the capacitors are identical;
3. the actual absolute value of the capacitors does not really matter15, but the ratio of the capacitors is important;
4. saturation-type nonlinearity is a natural characteristic of any amplifier.
These features are of paramount importance for VLSI implementation. In fact [92, 93, 94]:
1. inductors are very difficult to realize in IC technology;
2. the tolerance on the absolute value of integrated capacitors is very high (on the order of ±20%
or more);
3. but, on the other side, the ratio among capacitors on the same chip can be very accurate (±1%
or even ±0.1%) if supported by careful layout work;
15 It affects the whole time-scale of the dynamics but not its shape or nature.
4. amplifiers with saturation-type nonlinearity are ubiquitous;
5. feedback amplifiers allow very accurate gains to be realized;
At this point it is important to stress that although chaos and other nonlinear behaviors are relatively robust to parameter variations, the need for accuracy cannot be underestimated.
This means, for example, that a straight16 IC implementation of a traditional Chua's oscillator, a SHCG or one of the other circuits reported above17 is bound to fail.
A way to keep control of (or to recover from) parameter variations is absolutely indispensable18. Therefore the SC-CNN approach offers a natural solution to all of these problems.
Moreover it provides all the different dynamics in one circuit and at complete user demand.
Some good reasons for providing a programmable chaos generator (PCG) can be found in applications such as pseudo-random sequence/number generators [60] and secure communication/chaos-based cryptography [95, 96, 97, 98]. In all of these applications, roughly speaking, any chaotic attractor corresponds to a "seed", a code or a "key". A PCG offers a fast re-programmable multi-key system [69].
More on this subject will be discussed in Chapter 4.
16. Component by component, without any device able to control the value of the circuit elements.
17. In fact, realized with discrete components.
18. Namely, the introduction of feedback circuitry is compulsory.
Chapter 4
Synchronization
One recent and interesting research topic in the circuit area is the synchronization of nonlinear circuits (SNC) [95, 97, 99, 96, 100, 98]. Some techniques have been developed to force two (or more) identical nonlinear dynamic circuits, starting from different initial conditions, to synchronize, namely to follow identical trajectories (at least asymptotically). This is particularly interesting when the circuits behave in a chaotic way, because of their sensitive dependence on initial conditions. However, if certain conditions are satisfied [95, 96], then even such circuits can be successfully synchronized.
Synchronization principles have been applied to realize analog masking systems for secure communication [96, 98].
In this Chapter an experimental study of a chaotic transceiver using SC-CNN's and non-ideal communication channels is reported [101, 102]. Moreover, a new method to identify the parameters of a chaotic circuit using synchronization and Genetic Algorithms is discussed [103, 104, 105].
4.1 Background
In this Section we briefly recall the concepts of synchronization that are strictly necessary for what follows. A good introduction to this subject can be found in [96].
The synchronization of nonlinear systems is defined as follows [96].
Definition 4.1.1 (Synchronization). Let us consider two (or more) nonlinear systems (N ≥ 2):

ẋi = fi(xi)    (4.1.1a)

where xi ∈ R^n, fi : R^n → R^n and 1 ≤ i ≤ N; if:

lim_{t→∞} |xi(t) − xj(t)| = 0    (4.1.1b)

with i ≠ j, then the N systems are synchronized.
In this Chapter just two systems will always be considered (N = 2). In order to obtain synchronization, many different approaches are possible. When the two systems are coupled in such a way that the first one is independent of the other, while the dynamics of the second one is influenced by the first one (one-way coupling), then we have a master-slave configuration1.
4.1.1 Pecora-Carroll approach
Let us consider a dynamic system [95]:

u̇ = f(u)    (4.1.2)

and partition it into two subsystems, u = (v, w):

v̇ = g(v, w)
ẇ = h(v, w)    (4.1.3)

where v = [u1, .., um], g = [f1(u), .., fm(u)], w = [um+1, .., un], h = [fm+1(u), .., fn(u)].
The partition is arbitrary, since the equations can be reordered beforehand. A new system w′ can now be considered: it is created by duplicating the w subsystem and substituting the set of variables v with the corresponding master variables v:

ẇ′ = h(v, w′)    (4.1.4)

In this way the w′ system is forced by the u system by means of the v variables; w′ is called the response system. If, as time elapses, w′(t) → w(t), then the master and slave synchronize.
To this purpose the difference between these two systems can be considered. In other words, coherently with Def. 4.1.1, we want ∆w(t) = w′(t) − w(t) to converge to zero as t → ∞. This leads to the variational equation2:

d∆w/dt = Dw h(v, w) ∆w + O(‖∆w‖²)    (4.1.5)

1. Where, obviously, the master is the independent system.
2. See Section B.3.2 in Appendix B.
where Dw h is the Jacobian of the w subsystem. In the limit of small ∆w, the higher-order terms can be neglected and the variational equation for the w subsystem remains.
The Lyapunov exponents resulting from this variational equation (i.e. the Lyapunov exponents of the difference between w and w′) are the conditional Lyapunov exponents. If all the conditional Lyapunov exponents are less than zero, then the response system will synchronize with the master.
This synchronization scheme can be further extended as follows. The v subsystem can also be reproduced, creating the v′ subsystem and driving it with the w′ variables. The complete slave system is therefore:

v̇′ = g(v′, w′)
ẇ′ = h(v, w′)    (4.1.6)

Again, if all the corresponding conditional Lyapunov exponents are negative, then the master (4.1.3) and the slave (4.1.6) will synchronize and, in particular, v′ → v as t → ∞. This last set-up is known as the cascaded synchronization scheme [97].
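The Pecora-Carroll scheme can be sketched numerically. In the minimal Python sketch below, the dimensionless Chua's oscillator (with the double-scroll parameter values later used in Section 4.3.3) plays the role of the master; its (y, z) subsystem is duplicated as the response system and driven by the master's x, and the residual synchronization error is printed. The forward-Euler integrator, step size and variable names are our own illustrative choices, not a fixed prescription.

```python
import numpy as np

# Dimensionless Chua oscillator, double-scroll parameters (cf. Sec. 4.3.3)
ALPHA, BETA, GAMMA, M0, M1 = 9.0, 14.286, 0.0, -0.142857, 0.285714

def h(x):  # piecewise-linear nonlinearity of Chua's diode
    return M1 * x + 0.5 * (M0 - M1) * (abs(x + 1) - abs(x - 1))

def master(u):  # full vector field, u = (x, y, z)
    x, y, z = u
    return np.array([ALPHA * (y - h(x)), x - y + z, -BETA * y - GAMMA * z])

def slave(w, x_drive):  # (y', z') response subsystem, driven by master's x
    yp, zp = w
    return np.array([x_drive - yp + zp, -BETA * yp - GAMMA * zp])

dt = 1e-3
u = np.array([0.1, 0.0, 0.0])   # master initial condition
w = np.array([1.0, -1.0])       # response subsystem, different initial condition
for _ in range(200_000):        # forward Euler, a deliberately simple sketch
    u, w = u + dt * master(u), w + dt * slave(w, u[0])

err = np.linalg.norm(u[1:] - w)  # |(y, z) - (y', z')| after the transient
print("residual synchronization error:", err)
```

Since the driven (y′, z′) subsystem is linear with negative conditional Lyapunov exponents for these parameter values, the error contracts to numerical precision even though the master trajectory is chaotic.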
An interesting result about synchronization is that the two systems may synchronize even in the presence of noise, or when the driving signal has been altered by a filter [97].
4.1.2 Inverse System approach
Consider another master-slave set-up. Let the master be forced by an external signal s(t), and let y(t) be the corresponding response. s(t) can have a regular waveform while y(t), assuming that the master is a chaotic circuit, is supposed to be chaotic3.
The equations of the master and its output can be written as:

ẋ = f(x, s)
y = g(x, s)    (4.1.7)

where x ∈ R^n is the state, f : R^{n+1} → R^n the vector field, s ∈ R the forcing signal and g : R^{n+1} → R the output function.

3. Although an external forcing signal can, in some circumstances, cause a bifurcation, so that the system can lose its chaotic behaviour.
The output signal y(t) is used to drive the slave system:

x̂˙ = f̂(x̂, y)
ŝ = h(x̂, y)    (4.1.8)

where x̂ ∈ R^n is the state, f̂ : R^{n+1} → R^n the vector field, ŝ ∈ R the response to y and h : R^{n+1} → R the output function.
If, for equal initial conditions x(0) = x̂(0), it happens that ŝ(t) = s(t), ∀t ≥ 0, then the slave is said to work as an inverse system.
However, given that there is no real control over the initial conditions4, a practical inverse-system set-up must be able to synchronize (at least asymptotically) regardless of the initial conditions.
4.2 Experimental Signal Transmission Using Synchronized SC-CNN
In this Section a simple architecture, based on two State Controlled Cellular Neural Networks, is proposed for transmitting signals by applying the chaos synchronization principles. In particular, the inverse system approach is considered.
Experimental and simulation results are reported. Moreover, an experimental study of the consequences of introducing a non-ideal transmission channel between transmitter and receiver is included.
4.2.1 Circuit Description
The circuit considered is composed of the four blocks shown in Fig. 4.1.
Blocks B1 and B2 are two identical three-cell SC-CNN's realizing the double-scroll dynamics, as shown in Chapter 3. The corresponding circuit schematic, already seen in Fig. 3.5, is shown again in Fig. 4.2 for convenience.
Block B3 (see Fig. 4.3) is used to convert the voltage signal V3, representing the message to be encrypted, into the current im. This current is injected into node A (see Fig. 4.1) and B1 responds with a chaotically modulated signal x1 (the transmitted signal). Two buffers have been placed between the

4. And this is particularly critical in the case of chaotic circuits because of the sensitive dependence on initial conditions.
Figure 4.1: Block diagram of the master-slave set-up
Figure 4.2: SC-CNN realization of the Chua’s Oscillator.
Figure 4.3: Voltage to current converter realizing block B3.
channel input and the chaotic modulator B1, and between the channel output and the demodulator B2, in order to avoid possible loading of the SC-CNN circuits.
The synchronization signal is imposed on the node corresponding to x4 (the homologue of x1 for the slave system B2) and B2 responds by drawing the current is. It was recalled in Section 4.1.2 that if B1 and B2 are synchronized then is = im. The current is is transformed into the voltage Vout by the block B4 shown in Fig. 4.4.
4.2.2 Synchronization: Experimental and simulation results
Let us consider the ideal case in which the master and slave are directly coupled. Fig. 4.5 shows the uncoded5 and decoded messages overlapped.
It can be seen that, apart from a brief transient due to the different initial conditions, the two waveforms are in good agreement. This case will be taken as the reference for the following discussion on the introduction of a non-ideal channel.
A triangular waveform is considered (as the message) instead of a sinusoidal one because it is well known [96] that the behaviour of this system depends on the frequency and on the amplitude of the tone.
The corresponding experimentally observed waveforms are shown in Fig. 4.6b.

5. The original message before the chaotic cryptography.
Figure 4.4: Current sensing stage realizing block B4.
Figure 4.5: Ideal case: Transmitted (V3) and decoded (Vout) signals with directly coupled circuits.
Figure 4.6: Experimentally observed waveforms. (a) x2 (second variable of the master) vs x5 (second variable of the slave); (b) Vout and V3 overlapped; (c) observed phase portrait in the x1 − x2 plane when the forcing signal is applied.
The quality of the experimental synchronization can be appreciated by looking at Fig. 4.6a, which shows the variable x2 versus the variable x5, the corresponding variables of the master and slave systems. Finally, the observed phase portrait in the x1 − x2 plane when the forcing signal is applied is reported in Fig. 4.6c. In Fig. 4.6b, the presence of a small ripple superimposed on the decoded signal can be observed.
This could lead to the wrong conclusion that the two circuits are not exactly synchronized; however, Fig. 4.6a excludes this possibility. The correct explanation is immediately obtained by measuring the actual current fed by the block B3: this current is slightly corrupted by the master itself, due to the non-ideal features of the realized current source.
Apart from the triangular wave, several other waveforms have also been taken into account (e.g. sine waves, square waves, speech and musical signals and so on) with successful results.
Finally, it has been noted that the above-cited ripple is always present, independently of the nature of the transmitted signal. Therefore it could be significantly reduced by using a better current source or by low-pass filtering the decoded signal.
4.2.3 Non-ideal channel effects
In this Section the effects of coupling the transmitter and the receiver with a commercial coaxial cable are discussed. In particular, the model RG6/U with characteristic impedance Z0 = 75 Ω has been considered. Two different cases have been taken into account: in the first one the transmission channel is matched with its characteristic impedance, while in the second one the line is unmatched.
The circuit referring to the latter case is the one shown in the above figures. If, instead, the channel has to be matched, then a 75 Ω resistor must be inserted between the output of the buffer U1A and the input of the line T1 in Fig. 4.1; furthermore, another 75 Ω resistor must be inserted between the positive input of buffer U3A and ground in the same figure.
Moreover, due to the voltage divider effect at the input of the line T1, in this case the buffer U3A must be replaced by a non-inverting stage with voltage gain G = 2.
Let us refer to the ideal case of directly coupled circuits considered in the previous Section. The spectrum of the synchronization signal x1, when the master is not forced by any message, is reported in Fig. 4.7.
It can be seen that the majority of the signal power is located in a band below 50 kHz; therefore, in the following it will be assumed that the signal is band-limited to this range.
The Bode plots of the transfer function of the matched line for a 1 km long cable and a 10 km long cable are reported in Fig. 4.8 and 4.9 respectively.
Let us first consider the case of the matched channel with a 1 km long line. The corresponding transmitted and decoded waveforms are reported in Fig. 4.10, while Fig. 4.11 depicts the case of a 10 km line.
While in the first case there is still good agreement between the two signals, in the case of the 10 km cable the degradation due to the line is excessive.
If the line is unmatched, the corresponding waveforms are the ones reported in Figs. 4.12-4.13.
From these it follows that, while for the 1 km long cable the degradation is still acceptable (and comparable with that of the matched line), in the 10 km case the distortion of the decoded message is greater than that observed with the matched line.
Figure 4.7: Spectrum of the coupling signal x1 .
Figure 4.8: Bode plot of the transmission line for 1 km.
Figure 4.9: Bode plot of the transmission line for 10 km.
Figure 4.10: Transmitted and decoded signal with a 1 km long line.
Figure 4.11: Transmitted and decoded signal with a 10 km long line.
Figure 4.12: Transmitted and decoded signal with a 1 km long line and unmatched channel.
Figure 4.13: Transmitted and decoded signal with a 10 km long line and unmatched channel.
Figure 4.14: Block scheme for the analysis of the effects of noise.
4.2.4 Effects of additive noise and disturbances on the channel
Here, the effects of additive noise and disturbances on the signal x1, and hence on the "quality" of the synchronization, are considered. An ideal coupling is assumed.
The block diagram shown in Fig. 4.14 represents the experimental set-up. In particular, block B5 represents the noise generator embedded in the spectrum analyzer used for the measurements (HP 35665A).
Three different situations have been taken into consideration. The first one deals with white noise, whose power spectrum for 1 Vpk (rms) is depicted in Fig. 4.15. In the second case pink noise was added; its spectrum for 1 Vpk (rms) is depicted in Fig. 4.16.
In the third case sinusoidal disturbances with different frequencies and amplitudes have been added. In order to evaluate the quality of the synchronization, the cross-correlation between the state variables x2 (master) and x5 (slave) has been considered:

Rx2,x5(τ) = lim_{T→∞} (1/T) ∫_T x2(t) x5(t + τ) dt    (4.2.1)
Of course the cross-correlation decreases when the slave is badly synchronized. Moreover, in order to take into account the increased energy supplied to the slave by the noise and/or disturbance, the cross-correlation functions have been normalized with respect to their maximum
Figure 4.15: Power spectrum of the white noise.
Figure 4.16: Power spectrum of the pink noise.
Additive signal    Vpk (rms)/f (Hz)    E(R̃x2,x5) (Joule)
No noise           0                   49.3387
White noise        2                   43.4927
White noise        3                   41.0844
White noise        5                   32.5768
Pink noise         2                   45.7839
Pink noise         3                   39.6112
Sine               1/1K                31.8274
Sine               1/10K               36.0531
Sine               1/100               45.0874
Sine               2/1K                31.1263
Sine               1/560               34.1867

Table 4.1: Measurements in the presence of additive noise and sinusoidal disturbances.
values at the origin. The normalized cross-correlation is thus defined as:

R̃x2,x5(τ) = Rx2,x5(τ) / Rx2,x5(0)    (4.2.2)

The results are summarized in Table 4.1; in particular, the energy E(R̃x2,x5) of this normalized cross-correlation is reported for the various cases.
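On finite sampled records, (4.2.1)-(4.2.2) reduce to sample averages over integer lags. The Python sketch below illustrates the computation; the function and variable names are ours, and the sinusoidal test records are purely illustrative stand-ins for the measured x2 and x5.

```python
import numpy as np

def xcorr(a, b, k):
    """Sample estimate of R_ab(k) = <a(t) b(t + k)> at integer lag k."""
    if k >= 0:
        return float(np.mean(a[:len(a) - k] * b[k:]))
    return float(np.mean(a[-k:] * b[:k]))

def normalized_xcorr_energy(x2, x5, max_lag, dt):
    """Energy of R_tilde(tau) = R(tau)/R(0), cf. (4.2.1)-(4.2.2)."""
    R = np.array([xcorr(x2, x5, k) for k in range(-max_lag, max_lag + 1)])
    R_tilde = R / R[max_lag]            # the entry at index max_lag is R(0)
    return float(np.sum(R_tilde ** 2) * dt)

# Illustrative records: a clean tone and a noise-corrupted copy of it
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x2 = np.sin(2 * np.pi * 5 * t)
x5 = x2 + 0.1 * np.random.default_rng(0).normal(size=t.size)

e_clean = normalized_xcorr_energy(x2, x2, 200, dt)
e_noisy = normalized_xcorr_energy(x2, x5, 200, dt)
print(e_clean, e_noisy)
```

The normalization by R(0) mirrors the one used for Table 4.1, so the returned energy is insensitive to the overall signal power.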
The reference values for the following comparisons are those reported in the first row of the table, where an undisturbed transmission is considered. The following three rows consider the case of additive white noise of increasing power.
As expected, the energy of the normalized cross-correlation decreases as the noise power increases. Analogously, a similar decrease is observed in the case of pink noise (fifth and sixth rows).
The last five rows illustrate the effect of a sinusoidal disturbance. It is interesting to note that a tone around 1 kHz can be more harmful, with respect to the synchronization, than a similar tone at a different frequency. This can be understood by observing the spectrum of x1: the 1 kHz tone is located in the frequency range in which the coupling signal has the majority of its power. Besides, it can be conjectured that such an external signal could drive the system to a bifurcation point.
It is worth noting, from the table, how the synchronization remains relatively insensitive to the additive noise.
4.3 Chaotic System Identification
In this section the new approach to chaotic system parameter identification is described. It is based on the Pecora-Carroll cascaded synchronization approach. The identification procedure is fairly general; it has been applied to Chua's Oscillator as an example.
In the new approach, the chaotic circuit whose parameters have to be estimated is considered as a master system, and (at least) one of its state variables is used as a driving signal for an identical circuit used as a slave system. The slave system responds to the driving signal by following a state trajectory that depends on this input signal and on its own parameters, which, in general, are different from the master's unknown parameters. The distance between the master's state variable used to drive the slave and the corresponding slave state variable is used to define a performance index.
When the master and the slave have the same parameter set they synchronize, so their corresponding state variables are asymptotically equal. This means that, in this case, the performance index reaches its global minimum. The optimization of the performance index is performed by Genetic Algorithms (GA) [106, 107].
4.3.1 Description of the algorithm
Let us consider an autonomous6 nonlinear circuit or system whose mathematical model is supposed to be known, while its parameters are unknown and have to be estimated.
A state representation of this system is considered:

ẋ = f(x)    (4.3.1)

with x ∈ R^n. As explained in Section 4.1.1, this system can be partitioned into two subsystems, x = (v, w):

v̇ = g(v, w)
ẇ = h(v, w)    (4.3.2)

6. This hypothesis is not restrictive, because it is well known that a non-autonomous system can be described by an autonomous model, augmenting its original model with suitable additional variables and equations, as seen in Appendix B.
Figure 4.17: The adopted synchronization scheme
where v = [x1, .., xm], g = [f1(x), .., fm(x)], w = [xm+1, .., xn], h = [fm+1(x), .., fn(x)]. This partitioned system can be used to realize a cascaded synchronization scheme, as discussed above.
Equations (4.3.2) represent the master system, while the slave x′ = (v′, w′), driven by v, is:

v̇′ = g(v′, w′)
ẇ′ = h(v, w′)    (4.3.3)

This setup is shown in the block diagram of Fig. 4.17.
In the slave system, the r unknown parameters p = [p1, .., pr] can, at first, be arbitrarily assigned. Let us suppose that v are the only variables available from the master system and that:

{v̂(k)}, k = 0, 1, .., M    (4.3.4)

is the corresponding sampled time series (δ being the sampling time). Analogously, {v̂′(k)} is the time series corresponding to v′. Let us introduce the following index.

Definition 4.3.1. Let us define the distance between v̂ and v̂′ as:

I(p) = Σ_{k=0}^{M} [ (v̂1 − v̂′1(p))² + .. + (v̂m − v̂′m(p))² ]    (4.3.5)

It is clear that this definition is independent of the nature of the time series, so their possible chaotic features are not a problem.
Our identification problem can therefore be formulated as an optimization problem. In fact, the
master and the slave will synchronize if they have identical parameters; in this case, from Definition 4.1.1, the index (4.3.5) attains its global minimum.
Therefore the parameters p must be changed toward this goal; GA's have been used to achieve it. In particular, a GA in standard form has been adopted [106]. Besides reproduction, one-point crossover and mutation, the elitist strategy has been utilized.
The time series (4.3.4) is used to drive a simulated slave system whose parameters are chosen by a GA-based program. It is worth noting that the time discretization, which is necessary for the computer simulation of the slave, inevitably introduces a numerical error that is an increasing function of the discretization step size. This aspect has to be taken into account when evaluating the quality of the obtained results.
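The GA loop just described can be sketched as follows. To keep the example self-contained, the synchronization index I(p) is replaced by a cheap surrogate with its minimum at a known parameter vector; in the actual procedure, index_I would integrate the driven slave and accumulate the mismatch (4.3.5). The binary coding, one-point crossover, bit-flip mutation and elitism follow the standard GA form cited above, but the function names, bounds and numeric settings are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate for I(p): global minimum at a known p* (illustrative only).
P_TRUE = np.array([9.0, 14.286, 0.0, -0.142857, 0.285714])
BOUNDS = np.array([[0, 20], [0, 20], [-1, 1], [-1, 0], [0, 1]], float)

def index_I(p):
    return float(np.sum((p - P_TRUE) ** 2))

BITS, NPAR = 16, len(P_TRUE)   # 16-bit binary coding per parameter

def decode(chrom):
    """Map a binary chromosome to a real parameter vector within BOUNDS."""
    ints = chrom.reshape(NPAR, BITS) @ (2 ** np.arange(BITS - 1, -1, -1))
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    return lo + ints / (2 ** BITS - 1) * (hi - lo)

def evolve(popsize=100, generations=100, pc=0.6, pm=0.003):
    pop = rng.integers(0, 2, (popsize, NPAR * BITS))
    best, best_fit = pop[0].copy(), np.inf
    for _ in range(generations):
        fit = np.array([index_I(decode(c)) for c in pop])
        if fit.min() < best_fit:
            best_fit, best = float(fit.min()), pop[fit.argmin()].copy()
        w = 1.0 / (1.0 + fit)                        # fitness-proportional
        parents = pop[rng.choice(popsize, popsize, p=w / w.sum())]
        for i in range(0, popsize - 1, 2):           # one-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, NPAR * BITS)
                ti, tj = parents[i, cut:].copy(), parents[i + 1, cut:].copy()
                parents[i, cut:], parents[i + 1, cut:] = tj, ti
        flip = rng.random(parents.shape) < pm        # bit-flip mutation
        pop = np.where(flip, 1 - parents, parents)
        pop[0] = best                                # elitist strategy
    return decode(best), best_fit

p_hat, I_min = evolve()
print("estimated parameters:", np.round(p_hat, 3), " I(p) =", I_min)
```

The elitist step guarantees that the best index value is non-increasing across generations, which is the property exploited when comparing runs with different GA settings.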
4.3.2 Identification of Chua's oscillator
The above procedure is now applied to the case of Chua's oscillator.
The driving signal can be chosen among x, y and z. The proper choice depends on the conditional Lyapunov exponents and hence on the considered system parameters. This means that, in order to obtain synchronization, some parameter choices require the x variable as the driving signal while others require y or z. In the following, the case in which x is used as the driving signal is discussed. However, the procedure is quite general, and an example in which the z variable is used to drive the slave system is presented in the following section. The state equations (3.2.3) of Chapter 3 can be partitioned into two subsystems: the first one composed of x only, and the second one composed of y and z. So, using the above terminology, v = x while w = (y, z); the slave (driven by x) is then described as:

ẋ′ = α(y′ − h(x′))
ẏ′ = x − y′ + z′    (4.3.6)
ż′ = −βy′ − γz′

where the r = 5 parameters are p = (α, β, γ, m0, m1) and the objective index is:

I(p) = Σ_{k=0}^{M} (x̂ − x̂′(p))²    (4.3.7)

The slave system has been simulated using a fixed-step fourth-order Runge-Kutta algorithm; the step size has been chosen equal to the sampling time δ of the driving time series x̂.
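A minimal sketch of how I(p) in (4.3.7) can be evaluated is given below: a recorded series x̂ drives the slave (4.3.6), integrated with a fixed-step fourth-order Runge-Kutta routine of step δ. Here the "recorded" series is itself generated by simulation, and the drive is held constant within each integration step (a zero-order hold), which is one concrete instance of the discretization error mentioned above; all function names and initial conditions are our own assumptions.

```python
import numpy as np

def chua_h(x, m0, m1):  # piecewise-linear nonlinearity
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def rk4_step(f, u, dt):  # one fixed-step fourth-order Runge-Kutta step
    k1 = f(u); k2 = f(u + dt / 2 * k1); k3 = f(u + dt / 2 * k2); k4 = f(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def master_series(p, u0, delta, M):
    """Integrate the full Chua oscillator; return the sampled series x_hat(k)."""
    a, b, g, m0, m1 = p
    def f(u):
        x, y, z = u
        return np.array([a * (y - chua_h(x, m0, m1)), x - y + z, -b * y - g * z])
    u, xs = np.array(u0, float), []
    for _ in range(M + 1):
        xs.append(u[0])
        u = rk4_step(f, u, delta)
    return np.array(xs)

def index_I(p, x_hat, delta):
    """I(p) of (4.3.7): run the slave (4.3.6) driven by x_hat, sum mismatches.
    The drive is held constant over each step (zero-order hold)."""
    a, b, g, m0, m1 = p
    v, I = np.array([x_hat[0], 0.0, 0.0]), 0.0
    for k in range(len(x_hat) - 1):
        xd = x_hat[k]
        def fs(u):
            xp, yp, zp = u
            return np.array([a * (yp - chua_h(xp, m0, m1)),
                             xd - yp + zp,          # y' equation driven by x
                             -b * yp - g * zp])
        v = rk4_step(fs, v, delta)
        I += (x_hat[k + 1] - v[0]) ** 2
    return I

p_true = (9.0, 14.286, 0.0, -0.142857, 0.285714)
x_hat = master_series(p_true, (0.1, 0.0, 0.0), 0.1, 2000)
i_true = index_I(p_true, x_hat, 0.1)
i_wrong = index_I((6.0, 14.286, 0.0, -0.142857, 0.285714), x_hat, 0.1)
print("I(p_true) =", i_true, " I(alpha=6) =", i_wrong)
```

Even at the true parameters the index is not exactly zero, because of the discretization of the drive; what matters for the identification is that a wrong parameter set yields a markedly larger value.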
Master              Slave
α = 9               α = 9.77
β = 14.286          β = 14.619
γ = 0               γ = 0.157
m0 = -0.142857      m0 = -0.122
m1 = 0.285714       m1 = 0.277
I(p) = 1.5721
GA parameters: popsize = 100, generations = 100, prob. cross. = 0.6, prob. mut. = 0.003

Table 4.2: Parameters of the double-scroll case
4.3.3 Examples
In this section three different examples of the application of the new method to the Chua's oscillator case are reported; the parameters of the Chua's oscillator used as the master circuit have been fixed to known values. The identification procedure has been applied in order to recover these values from the given time series alone.
In the first case the parameters have been chosen to obtain a double-scroll attractor, that is: α=9, β=14.286, γ=0, m0=-0.142857, m1=0.285714. The sampling time has been chosen as δ=0.1 and M=2000 samples have been considered. x has been used as the synchronization signal. The results of the identification are shown in Table 4.2 together with the parameters of the adopted genetic algorithm.
In order to evaluate the obtained results, the attractor generated by the master system and the one obtained with the estimated parameters, when the systems are disconnected, have been overlapped in Fig. 4.18, while Fig. 4.19 shows the synchronization signals x and x′ together with the synchronization error e = x − x′.
In the second example the parameters have been chosen to obtain the attractor shown in Fig. 4.20, that is: α=-4.088685, β=-2, γ=0, m0=-0.142857, m1=0.285714. Let us refer to this attractor as no. 2.
In this case the sampling time has been chosen as δ = 0.1 with M = 2000 samples, and the results
Figure 4.18: Overlapped attractors of the separated master and slave systems in x − y plane. The
dotted line refers to the master while the solid one is used for the slave.
Figure 4.19: Upper trace: Overlapped variables x and x′ of the master and slave systems for the double scroll. The dotted line refers to the master while the solid one is used for the slave. Lower trace: Synchronization error e = x − x′.
Figure 4.20: Phase portrait of attractor no.2 in the x − y plane
are shown in Table 4.3.
In this second example the synchronization can only be accomplished by using the z variable as the driving signal. Again, the original attractor and the one corresponding to the estimated parameter set have been overlapped, as shown in Fig. 4.21, while Fig. 4.22 shows the synchronization signals z and z′ together with the synchronization error e = z − z′.
In the last case the parameters have been chosen to obtain attractor no. 3, shown in Fig. 4.23, that is: α=6.579, β=10.898, γ=-0.0447, m0=-0.18197, m1=0.3477.
The sampling time has been chosen as δ=0.01 and M=6000 samples have been considered. x has been used as the synchronization signal. The results of the identification are shown in Table 4.4 together with the parameters of the adopted genetic algorithm.
In order to evaluate the obtained results, the attractor generated by the master system and the one obtained with the estimated parameters, when the systems are disconnected, have been overlapped in Fig. 4.24, while Fig. 4.25 shows the synchronization signals x and x′ together with the synchronization error e = x − x′.
The accuracy of the estimation process can be increased if more samples, smaller step sizes and more generations are used. Of course this implies increased computational costs, so a trade-off is
Master              Slave
α = -4.088685       α = -4.113
β = -2              β = -2.157
γ = 0               γ = -0.013
m0 = -0.142857      m0 = -0.121
m1 = 0.285714       m1 = 0.267
I(p) = 1.4922
GA parameters: popsize = 100, generations = 140, prob. cross. = 0.6, prob. mut. = 0.006

Table 4.3: Parameters of the second example
Figure 4.21: Overlapped attractors of the separated master and slave systems in x-y plane for the
attractor no.2. The dotted line refers to the master while the solid one is used for the slave.
Figure 4.22: Upper trace: Overlapped variables z and z′ of the master and slave systems for attractor no. 2. The dotted line refers to the master while the solid one is used for the slave. Lower trace: Synchronization error e = z − z′.
Figure 4.23: Phase portrait of attractor no.3 in the x − z plane
Master              Slave
α = 6.579           α = 6.195
β = 10.898          β = 10.695
γ = -0.0447         γ = -0.041
m0 = -0.18197       m0 = -0.179
m1 = 0.3477         m1 = 0.35
I(p) = 2.0539
GA parameters: popsize = 80, generations = 200, prob. cross. = 0.6, prob. mut. = 0.001

Table 4.4: Parameters of the third case
Figure 4.24: Overlapped attractors of the separated master and slave systems in x-y plane for the
attractor no.3. The dotted line refers to the master while the solid one is used for the slave.
Figure 4.25: Upper trace: Overlapped variables x and x′ of the master and slave systems for attractor no. 3. The dotted line refers to the master while the solid one is used for the slave. Lower trace: Synchronization error e = x − x′.
necessary. Moreover, some dynamics are more difficult to estimate; that is, the parameters of the slave must be very close to those of the master in order to obtain correct synchronization. This feature is related to the structural stability7 of the considered dynamics, namely to the property of the system of retaining its qualitative behavior under small perturbations of the parameters or of the model.
4.4 Conclusions
In Section 4.2 the experimental study of the synchronization of chaotic SC-CNN's has been considered. The inverse system approach has been applied in order to realize a chaos-based transceiver.
An experimental analysis of the effects of a non-ideal channel on the synchronization set-up has been presented. Particular attention has been devoted to the use of a coaxial line as the coupling medium. Moreover, the effects of additive noise and sinusoidal disturbances have been evaluated.
From these last measurements it emerges that a tone with a suitable frequency can be disastrous. The

7. See Section B.4 in Appendix B.
collected data represent the starting point for the design of an optimal communication channel with respect to the length and the equalization of the line.
To this aim it can easily be argued that for short distances (less than a few kilometers), as in the case of a local communication system (e.g. within a building), there may be no need for equalization or even line impedance matching. However, this becomes mandatory for longer communication channels.
In Section 4.3 a new method to identify the parameters of nonlinear circuits has been presented. The procedure has been formulated as a global optimization problem, which has been tackled by using a Genetic Algorithm.
It has been applied to estimate the five dimensionless parameters of the chaotic Chua's Oscillator, and three experimental examples have been reported.
Besides, the accuracy of the method has been discussed. With the proposed approach a circuit model for a chaotic behaviour can be obtained. In fact, many different attractors have been observed in nonlinear circuits, and the introduced strategy represents a useful tool to determine the parameters of a circuit model that best fits a chaotic time series. Moreover, it should be noticed that the above method could potentially be used to eavesdrop on communications coded in chaos. In fact, it could be used to estimate the parameters of the nonlinear circuit used as the modulator in a chaotic-carrier cryptography system.
Chapter 5
Spatio-temporal Phenomena
In this chapter some spatio-temporal phenomena arising in arrays of coupled nonlinear circuits are taken into consideration. In particular, it is shown that many well-known phenomena, already observed in arrays of coupled Chua's oscillators, can be similarly obtained in a two-layer Chua and Yang CNN model, which has a much simpler circuit topology.
5.1 Analysis of the Cell
In order to consider the complex spatio-temporal behaviors described in the following sections, it is necessary to develop a thorough analysis of the single cell.
F. Zou and J.A. Nossek extensively studied [108, 64, 65] the following two-cell Chua and Yang CNN model:

ẋ1 = −x1 + (1 + µ)y1 − sy2
ẋ2 = −x2 + sy1 + (1 + µ)y2    (5.1.1)

which, for suitable choices of the parameters µ, s ∈ R, originates a stable limit cycle, symmetric with respect to the origin of the phase plane.
However, studies on the different active media in which the phenomena we are going to consider exist have shown that systems with non-symmetric oscillations, bi-stable and slow-fast regimes are appropriate candidates [109, 73, 14]. Therefore it is necessary to modify the above model.
The nonlinear state equations of the cell in which we are interested have the form:

ẋ = f(x)    (5.1.2)

obtained from (5.1.1) by introducing two non-zero constant bias terms i1, i2 ∈ R [110, 111]:

ẋ1 = −x1 + (1 + µ)y1 − sy2 + i1
ẋ2 = −x2 + sy1 + (1 + µ)y2 + i2    (5.1.3a)

with:

yi = (1/2)(|xi + 1| − |xi − 1|)    (5.1.3b)
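Before studying (5.1.3) analytically, the biased cell can be explored numerically. The sketch below integrates it with forward Euler; the parameter values and the bias pair are purely illustrative choices of ours, and the coupling term s·y1 in the second equation follows the Zou-Nossek form recalled above.

```python
import numpy as np

def y(x):  # PWL output nonlinearity (5.1.3b)
    return 0.5 * (abs(x + 1) - abs(x - 1))

def cell(x, mu, s, i1, i2):  # biased two-cell vector field (5.1.3a)
    x1, x2 = x
    return np.array([-x1 + (1 + mu) * y(x1) - s * y(x2) + i1,
                     -x2 + s * y(x1) + (1 + mu) * y(x2) + i2])

mu, s, i1, i2 = 0.7, 1.0, -0.3, 0.3   # illustrative non-symmetric bias
x, dt = np.array([0.1, 0.1]), 1e-3
traj = np.empty((50_000, 2))
for n in range(traj.shape[0]):        # forward Euler, a simple sketch
    x = x + dt * cell(x, mu, s, i1, i2)
    traj[n] = x

# outside saturation the field is dx/dt = -x + const, so orbits stay bounded
print("final state:", traj[-1], " x1 range:", traj[:, 0].min(), traj[:, 0].max())
```

Sweeping µ, s and the bias terms in such a script is a quick way to spot the non-symmetric oscillatory and bi-stable regimes discussed next.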
For this model, some of the results derived by F. Zou and J.A. Nossek still hold. However, many others are no longer valid, because the introduction of the bias terms leads to the disappearance of some fixed points and to a substantial modification of the limit cycle.
Hence, the study of the planar vector field (5.1.3a) must be repeated almost from scratch.
In order to analyze the dynamic behavior of (5.1.3a), it is useful to partition the phase plane into regions in which (5.1.3a) can be locally studied as a linear1 vector field. In these regions, the Hartman-Grobman theorem2 gives us thorough information on the system trajectories.
Definition 5.1.1. The phase plane R² can be partitioned into the following nine regions:

1. Linear region:
   D0 = { x ∈ R² : |xi| < 1, i = 1, 2 }    (5.1.4)

2. Saturation regions:
   Ds++ = { x ∈ R² : x1 ≥ 1, x2 ≥ 1 }
   Ds+− = { x ∈ R² : x1 ≥ 1, x2 ≤ −1 }
   Ds−− = { x ∈ R² : x1 ≤ −1, x2 ≤ −1 }
   Ds−+ = { x ∈ R² : x1 ≤ −1, x2 ≥ 1 }    (5.1.5)

3. Partial saturation regions:
   Dp+,l = { x ∈ R² : x1 ≥ 1, |x2| < 1 }
   Dpl,− = { x ∈ R² : |x1| < 1, x2 ≤ −1 }
   Dp−,l = { x ∈ R² : x1 ≤ −1, |x2| < 1 }
   Dpl,+ = { x ∈ R² : |x1| < 1, x2 ≥ 1 }    (5.1.6)

It is obvious that all of these regions are disjoint and that their union is R².
For convenience it is also useful to define:

Ds = Ds++ ∪ Ds+− ∪ Ds−− ∪ Ds−+
Dp = Dp+,l ∪ Dpl,− ∪ Dp−,l ∪ Dpl,+    (5.1.7)

1. In fact as an affine, not an exactly linear, vector field.
2. See Theorem B.3.1 in Appendix B.
5.1.1 Fixed Points
The fixed points are obtained by solving the system of nonlinear algebraic equations:

f(x) = 0    (5.1.8)

Depending on the choice of the parameters s, µ, i1, i2 ∈ R, some or all of the candidate solutions of (5.1.8) can be real fixed points or virtual fixed points [81]. Nonetheless, virtual fixed points give as much information on the dynamics as the real equilibria.

Theorem 5.1.1. In any of the regions defined in Def. 5.1.1 there can be, at most, one equilibrium point.

Proof. The proof follows straightforwardly if we consider that system (5.1.8) is locally affine within any of the regions defined above (5.1.4-5.1.6). Therefore it can admit one, none or infinitely many solutions. However, the latter case happens iff the two equations composing the system geometrically correspond to the same straight line. This case is not structurally stable, and so it is not considered here.
Specifically, the following candidate solutions are obtained:

P0l,l = [ −i1/µ + (s/µ) · (s·i1 − µ·i2)/(µ² + s²) ,  (s·i1 − µ·i2)/(µ² + s²) ]    (5.1.9a)

candidate to belong to D0,
Ps++ = [1 + µ − s + i1 , 1 + µ + s + i2 ] ,
Ps+− = [1 + µ + s + i1 , −1 − µ + s + i2 ] ,
Ps−− = [−1 − µ + s + i1 , −1 − µ − s + i2 ] ,
Ps−+ = [−1 − µ − s + i1 , 1 + µ − s + i2 ]
(5.1.9b)
candidates to belong to Ds++ , Ds+− , Ds−− , Ds−+ respectively,

Ppl,+ = [ (s − i1)/µ , 1 + µ + (s/µ) · (s − i1) + i2 ] ,
Pp+,l = [ 1 + µ + (s/µ) · (s + i2) + i1 , −(s + i2)/µ ] ,
Ppl,− = [ −(s + i1)/µ , −1 − µ − (s/µ) · (s + i1) + i2 ] ,
Pp−,l = [ −1 − µ − (s/µ) · (s − i2) + i1 , (s − i2)/µ ]    (5.1.9c)

candidates to belong to Dpl,+ , Dp+,l , Dpl,− , Dp−,l respectively.
If a candidate belongs to the corresponding region then it represents a real fixed point. Otherwise it is a virtual fixed point conditioning the behavior of the vector field in the corresponding region.
For the sake of clarity let us consider a simple example. If P0l,l ∉ D0 then it is a virtual equilibrium. Moreover, let us assume that, from the extension to R² of the linearization valid in D0, it comes out that P0l,l would be a stable equilibrium. All the trajectories in D0 are the restriction of the ones obtained from the extension to R². But this means that all the trajectories entering D0 will eventually go out, attracted by the virtual fixed point P0l,l.
Similar conclusions can be formulated for the other cases.
The output nonlinearity is continuous and PWL, and hence differentiable almost everywhere:

∂yi/∂xi = { 0 if |xi| > 1 ;  1 if |xi| < 1 } ,   ∂yi/∂xj = 0 for i ≠ j    (5.1.10)
Hence we can consider the Jacobian of (5.1.3a):

J = Df = | −1 + (1 + µ)·∂y1/∂x1    −s·∂y2/∂x2            |
         | s·∂y1/∂x1               −1 + (1 + µ)·∂y2/∂x2  |    (5.1.11)
and the corresponding characteristic equation:

|λI − J| = λ² + λ·[ 2 − (1 + µ)·(∂y1/∂x1 + ∂y2/∂x2) ]
           + [ s² + (1 + µ)² ]·(∂y1/∂x1)·(∂y2/∂x2) − (1 + µ)·(∂y1/∂x1 + ∂y2/∂x2) + 1    (5.1.12)
(real/virtual) fixed points      eigenvalues          µ > 0             µ < 0          µ = 0
P0l,l                            λ1,2 = µ ± js        unstable focus    stable focus   center
Ps++ , Ps+− , Ps−+ , Ps−−        λ1,2 = −1            stable node       stable node    stable node
Ppl,+ , Pp+,l , Ppl,− , Pp−,l    λ1 = µ, λ2 = −1      saddle            stable node    marginally stable node

Table 5.1: Summary of the stability of the (real/virtual) fixed points.
that provides the eigenvalues of the linearized systems corresponding to the candidate fixed points (5.1.9).
The corresponding results are summarized in Table 5.1.
From this table it is clearly seen that for µ = 0 non-hyperbolic points appear (so the Hartman-Grobman theorem does not hold). This value delimits the boundary between two qualitatively different behaviours of the candidate fixed points. Indeed a bifurcation is observed.
Instead, for µ < 0, if at least one of the nine candidates is a real fixed point, then the system is completely stable. This is certainly true if |i1| < 1 and |i2| < 1 because, in this case, we would have at least P0l,l ∈ D0. Moreover, analogously to what was done in [65], it can be proved by Bendixson's Criterion³ that a limit cycle cannot exist.
However, the case in which we are interested is the one for µ > 0. In this case a Hopf-like bifurcation causes the appearance of a stable limit cycle.
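The classification in Table 5.1 follows mechanically from the Jacobian (5.1.11), since within each type of region the derivatives ∂yi/∂xi are either 0 or 1. A minimal numerical check of the three region types (a sketch; the function name and layout are not from the original):

```python
import numpy as np

def region_eigenvalues(mu, s):
    """Eigenvalues of the Jacobian (5.1.11) in each type of region.

    Within a region, dyi/dxi is 1 on the linear side (|xi| < 1) and 0 in
    saturation, so the Jacobian is constant there.
    """
    def jac(p, q):   # p = dy1/dx1, q = dy2/dx2
        return np.array([[-1 + (1 + mu) * p, -s * q],
                         [s * p, -1 + (1 + mu) * q]])
    return {
        "D0 (p = q = 1)": np.linalg.eigvals(jac(1, 1)),
        "Ds (p = q = 0)": np.linalg.eigvals(jac(0, 0)),
        "Dp (p = 1, q = 0)": np.linalg.eigvals(jac(1, 0)),
    }

for name, eig in region_eigenvalues(mu=0.7, s=1.0).items():
    print(name, np.round(eig, 3))
```

For µ = 0.7, s = 1 this reproduces the first column of Table 5.1: an unstable focus in D0 (λ = 0.7 ± j), a stable node in the saturation regions (λ = −1 double) and a saddle in the partial saturation regions (λ = 0.7, −1).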
5.1.2 Limit Cycle and Bifurcations
From now on, let us assume µ > 0, |i1| < 1 and |i2| < 1. In other words there is always at least one fixed point (i.e. P0l,l ∈ D0).
Let us investigate the local bifurcation at µ = 0.
The Hopf theorem⁴ cannot be directly applied because P0l,l is a center for µ = 0 and, moreover, f ∉ C^k, k ≥ 4. Nevertheless, looking at Table 5.1, it can be seen that all the conditions for the birth of a limit cycle are present.
In fact the following theorem holds.
³ See Theorem B.6.2 in Appendix B.
⁴ See Theorem B.6.4 in Appendix B.
Theorem 5.1.2. Let us consider the following real numbers:
s++ = max{s − i1 , −s − i2 }
(5.1.13a)
s+− = max{−s − i1 , s + i2 }
(5.1.13b)
s−− = max{s + i1 , i2 − s}
(5.1.13c)
s−+ = max{i1 − s, s − i2}
(5.1.13d)
s∗ = min{s++ , s+− , s−− , s−+ }
(5.1.13e)
and the dynamic system (5.1.3a) with |i1 | < 1 and |i2 | < 1. Then µ = 0 is a bifurcation point and
for µ ∈ (0, s∗ ) P0l,l ∈ D0 is an unstable focus surrounded by a stable limit cycle Γ.
Proof. From Table 5.1 it is known that for µ > 0 the only candidate stable fixed points are Ps++, Ps+−, Ps−+, Ps−−. If all of these are virtual fixed points then there will not be any stable fixed point in the phase plane. Moreover, if we draw a ring-like region Ψ with outer radius R ≫ 1 + µ + s + max{i1, i2}, not including the remaining unstable fixed points, then the vector field (5.1.3a) can be approximated as:

ẋ1 ≈ −x1 ,   ẋ2 ≈ −x2    (5.1.14)

near the outer boundary of this simply-connected region.
This means that all the trajectories cross the outer boundary of Ψ from outside to inside. On the other side, because of the instability of P0l,l, all the trajectories also cross the inner boundary of Ψ entering the ring. Therefore, according to the Poincaré-Bendixson Theorem⁵, there exists a limit cycle Γ inside Ψ.
It only remains to prove that Ps++, Ps+−, Ps−+, Ps−− are all virtual equilibria. For this to be true, all of them must be located outside their corresponding saturation regions Ds++, Ds+−, Ds−+, Ds−−.
Let us consider Ps++ = [1 + µ − s + i1 , 1 + µ + s + i2]: if either its first component or its second component is smaller than 1, then it is located outside Ds++. It is easily verified that this is true if 0 < µ < s++. Repeating the same reasoning for the other three points, similar inequalities are derived using s+−, s−− and s−+. Hence, to have all of them simultaneously outside their respective regions we must have 0 < µ < s∗.
This proves the thesis.
Corollary 5.1.3. If µ > s∗ then system (5.1.3a) is completely stable, and so µ = s∗ is another bifurcation value.
Proof. Under this hypothesis, at least one point among Ps++, Ps+−, Ps−+, Ps−− belongs to a saturation region and is a stable node. It attracts any trajectory entering that saturation region, and so Γ vanishes.
The above results reduce to the ones proved in [65] when i1 = i2 = 0, the latter being just a particular case of the present ones.
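The thresholds (5.1.13) are easy to evaluate numerically. The sketch below uses the corrected expression s−+ = max{i1 − s, s − i2}, consistent with the second component of Ps−+ derived from (5.1.3a); for i1 = i2 = 0 it reduces to s∗ = s, matching the unbiased case of [65]:

```python
def s_star(s, i1, i2):
    """Bifurcation threshold s* of Theorem 5.1.2 (a sketch)."""
    spp = max(s - i1, -s - i2)   # s++ of (5.1.13a)
    spm = max(-s - i1, s + i2)   # s+- of (5.1.13b)
    smm = max(s + i1, i2 - s)    # s-- of (5.1.13c)
    smp = max(i1 - s, s - i2)    # s-+ (corrected form of (5.1.13d))
    return min(spp, spm, smm, smp)

print(s_star(1.0, -0.3, 0.3))  # parameters of Section 5.1.4 -> 0.7
print(s_star(1.0, 0.0, 0.0))   # unbiased case of [65] -> s = 1.0
```

Note that for the parameters of Section 5.1.4 this gives s∗ = 0.7, i.e. the value of µ used there sits exactly at the boundary of the interval (0, s∗), which is precisely the near-boundary configuration exploited in the slow-fast discussion below.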
⁵ See Theorem B.6.1 in Appendix B.
5.1.3 Slow-Fast Dynamics
The limit cycle considered in [65] for µ < s is symmetric. Moreover, when µ = s > 0, the stable equilibria of the saturation regions coincide with the unstable equilibria of the partial saturation regions. Therefore the trajectories inside the saturation regions are attracted to these fixed points, while they are repelled from them inside the partial saturation regions. Besides, the respective manifolds intersect in such a way as to create a symmetric heteroclinic orbit connecting the four equilibria. This is obtained as a degeneration of the stable symmetric limit cycle.
However, when µ > s > 0, the structurally unstable heteroclinic orbit disappears, all four fixed points of the saturation regions appear simultaneously, and the system becomes completely stable.
In our case, thanks to the introduction of the non-zero biases i1 and i2, it is possible to have:

1. all of Ps++, Ps+−, Ps−+, Ps−− outside the respective saturation regions but
2. only one of them (say Ps−−, for example) close to the boundary with the corresponding saturation region (Ds−− in the considered example).

Let us consider this case for Ps−− ∉ Ds−−, Ps−− ∈ Dpl,− but close to the boundary between Ds−− and Dpl,−. In this condition, the asymmetric limit cycle Γ is distorted in the proximity of the virtual equilibrium Ps−−. In fact, in the saturation region Ds−− the trajectories behave as if attracted by the virtual point Ps−−. Moreover, in Ds−− the flow tends to it exponentially in time:

x(t) = e^(−t) x0 + (1 − e^(−t)) Ps−− ,   x(t), x0 ∈ Ds−−    (5.1.15)

Conversely, no stable equilibria are present in the partial saturation region; hence, once entered in Dpl,−, the trajectories are accelerated out of the region.
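Solution (5.1.15) can be checked numerically against the affine form of the field inside Ds−−, where y1 = y2 = −1 and hence ẋ = −x + Ps−−. A quick sketch with the parameters of Section 5.1.4:

```python
import numpy as np

mu, s, i1, i2 = 0.7, 1.0, -0.3, 0.3
Ps_mm = np.array([-1 - mu + s + i1, -1 - mu - s + i2])   # virtual point Ps--

def rhs_in_Dsmm(x):
    """Field (5.1.3a) restricted to Ds--, where y1 = y2 = -1 (affine form)."""
    return -x + Ps_mm

def closed_form(t, x0):
    """Candidate solution (5.1.15)."""
    return np.exp(-t) * x0 + (1 - np.exp(-t)) * Ps_mm

# central-difference check: d/dt of the closed form equals the affine field
x0, t, h = np.array([-2.0, -3.0]), 0.5, 1e-6
num_deriv = (closed_form(t + h, x0) - closed_form(t - h, x0)) / (2 * h)
print(np.allclose(num_deriv, rhs_in_Dsmm(closed_form(t, x0)), atol=1e-6))  # True
```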
This means that the flow undergoes a sudden variation of the rate of change of the state variables in the proximity of the virtual fixed point. In fact, let x10 = −1 − µ + s + i1 and x20 = −1 − µ − s + i2 be the two coordinates of Ps−− = [x10, x20]; then:

lim_{x1→x10, x1∈Dpl,−∩Γ} y1(t) = x1(t) ,   lim_{x1→x10, x1∈Ds−−∩Γ} y1(t) = −1    (5.1.16)

with y2(t) = −1. Hence the change of rate is clearly shown by the following limits:

lim_{x→Ps−−, x∈Ds−−∩Γ} ẋ1(t) = 0 ,                      lim_{x→Ps−−, x∈Ds−−∩Γ} ẋ2(t) = 0
lim_{x→Ps−−, x∈Dpl,−∩Γ} ẋ1(t) = (1 + µ)(s + i1 − µ) ,   lim_{x→Ps−−, x∈Dpl,−∩Γ} ẋ2(t) = s(s + i1 − µ)    (5.1.17)

If similar situations do not occur in the neighborhoods of the other regions, the remaining part of Γ is covered without significant variations in speed.
This implicitly proves that:
Proposition 5.1.4. The vector field (5.1.3a) can exhibit a slow-fast dynamics for an appropriate choice of its parameters.
In practice, if desired, a homoclinic orbit can be obtained when:

1. only one of the four stable virtual equilibria Ps++, Ps+−, Ps−+, Ps−− is placed on the boundary of its saturation region and
2. the equilibrium corresponding to the adjacent partial saturation region is placed on it, creating a single saddle point.
5.1.4 Some Simulation Results
After the general analysis carried out in the previous paragraphs, we have all the results that enable us to choose the parameters for the desired dynamics.
In particular, the following parameters will be taken into consideration: µ = 0.7, s = 1, i1 = −0.3 and i2 = 0.3. Fig. 5.1 shows the modification of x1(t) as the bias terms are introduced, showing the slow-fast dynamics discussed earlier.
Fig. 5.2 shows the corresponding limit cycle modification. From the last figure it can be directly seen how, while without any bias term the limit cycle is covered at almost constant speed, with the addition of the biases the limit cycle is no longer centered at the origin, but at the new equilibrium point, and the speed of traversal is faster in some parts than in others.
In particular, in Fig. 5.3 the modified limit cycle is reported together with the loci f1(x) = 0 and f2(x) = 0. It can be seen that the speed variation of the state variables, while running along the limit cycle, occurs in a neighborhood of the two points Q1(−3, 1) and Q2(−1, −2.4), which correspond, for our choice of the parameters, to Pp−,l and Ps−−, respectively.
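The oscillatory regime predicted by Theorem 5.1.2 can be reproduced with a few lines of numerical integration. The sketch below uses forward Euler and µ = 0.5, a value strictly inside the interval (0, s∗) = (0, 0.7) obtained from the corrected expressions (5.1.13) for these biases (the µ = 0.7 of the figures sits exactly at the boundary value s∗); all numerical choices here are illustrative, not from the original:

```python
import numpy as np

def f(x, mu=0.5, s=1.0, i1=-0.3, i2=0.3):
    """Vector field (5.1.3a) of the single second-order cell."""
    y = np.clip(x, -1.0, 1.0)   # PWL output y = 0.5(|x + 1| - |x - 1|)
    return np.array([-x[0] + (1 + mu) * y[0] - s * y[1] + i1,
                     -x[1] + s * y[0] + (1 + mu) * y[1] + i2])

dt, steps = 0.01, 30000
x = np.array([0.1, 0.1])
traj = np.empty((steps, 2))
for n in range(steps):
    traj[n] = x
    x = x + dt * f(x)            # forward Euler (illustrative, not a stiff solver)

tail = traj[steps // 2:]         # discard the transient
print("x1 range on the cycle:", float(tail[:, 0].min()), float(tail[:, 0].max()))
```

Starting near the unstable focus P0l,l, the trajectory spirals outward and settles on the asymmetric limit cycle, whose non-uniform traversal speed corresponds to the slow-fast behaviour of Fig.s 5.1-5.4.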
Figure 5.1: State variable trend modification with the bias addition.
Figure 5.2: Limit cycle modification with the bias addition.
Figure 5.3: The limit cycle (solid line) and the contours of the p.w.l. equations f1 (x) = 0 and
f2 (x) = 0 (dashed lines). (hor. x1 , ver. x2 ).
Figure 5.4: Time evolution of x1 (solid line) and x2 (dashed line).
The time evolution of the two state variables x1 and x2 , is depicted in Fig. 5.4.
5.2 The Two-Layer CNN
Many nonlinear reaction-diffusion partial differential equations (PDEs) have shown self-organizing patterns [14, 109], and in [21] the concept of reaction-diffusion CNN has been formalized in order to reproduce similar behaviours in CNN's. Here, a new CNN with constant templates, with a circuit topology simpler than that of the CNN's reported in the literature⁶ [14], is introduced by suitably coupling the second-order cells introduced above.

⁶ Until now, all the spatio-temporal phenomena that we are going to show have always been reported in CNN's composed of coupled Chua's Oscillators.
Let us consider the following state equations:

ẋ1;i,j = −x1;i,j + (1 + µ)y1;i,j − s·y2;i,j + D1 · (y1;i+1,j + y1;i−1,j + y1;i,j−1 + y1;i,j+1 − 4y1;i,j) + i1
ẋ2;i,j = −x2;i,j + s·y1;i,j + (1 + µ)y2;i,j + D2 · (y2;i+1,j + y2;i−1,j + y2;i,j−1 + y2;i,j+1 − 4y2;i,j) + i2    (5.2.1)

1 ≤ i ≤ M ;   1 ≤ j ≤ N
It can be seen that, while the two layers interact within each single cell generating the oscillatory, slow-fast behaviour described above, the interaction with the neighboring cells is obtained separately by means of two circulant diffusion templates with diffusion coefficients D1 (for the first layer) and D2 (for the second one), respectively [14].
Namely, there is no direct interaction between layer 1 of a cell C(i, j) and layer 2 of its neighbors, and vice versa. Moreover, it can be noted that, while in the examples reported in [21] the Laplacian templates weighted the state variables of the neighboring cells, in (5.2.1), coherently with the multi-layer CNN definition of Chapter 1, the Laplacian template is a feedback template that weighs the outputs.
Hence, with a slight modification with respect to (5.2.1), the new CNN is formally defined as follows:

Definition 5.2.1. The state model of the new two-layer CNN with constant templates is:

ẋij = −xij + A ∗ yij + B ∗ uij + I    (5.2.2a)

where xij = [x1;i,j x2;i,j]ᵀ, yij = [y1;i,j y2;i,j]ᵀ and uij = [u1;i,j u2;i,j]ᵀ are the state, the output and the input of the CNN respectively, while A, B and I are the feedback, control and bias templates respectively. The cloning templates are:

A = | A11  A12 | ;   B = 0 ;   I = | i1 |    (5.2.2b)
    | A21  A22 |                   | i2 |

where:

A11 = | 0.5D1   D1             0.5D1 |        A12 = | 0   0   0 |
      | D1      −5D1 + µ + 1   D1    |  ;           | 0  −s   0 |  ;
      | 0.5D1   D1             0.5D1 |              | 0   0   0 |

A21 = | 0  0  0 |        A22 = | 0.5D2   D2             0.5D2 |
      | 0  s  0 |  ;           | D2      −5D2 + µ + 1   D2    |    (5.2.2c)
      | 0  0  0 |              | 0.5D2   D2             0.5D2 |

with Zero-Flux (Neumann) boundary conditions.
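A compact way to simulate such an array is to apply the diffusion through a discrete Laplacian with replicated (zero-flux) borders. The sketch below integrates the simpler 5-point form (5.2.1) rather than the full 9-point template of (5.2.2c); the array size, initial excitation and step size are illustrative choices, not from the original:

```python
import numpy as np

def lap5(y):
    """5-point discrete Laplacian with zero-flux (Neumann) boundaries,
    obtained by replicating the border cells before differencing."""
    p = np.pad(y, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * y

def step(x1, x2, dt=0.05, mu=0.7, s=1.0, i1=-0.3, i2=0.3, D1=0.1, D2=0.1):
    """One forward-Euler step of the two-layer CNN (5.2.1)."""
    y1, y2 = np.clip(x1, -1, 1), np.clip(x2, -1, 1)
    dx1 = -x1 + (1 + mu) * y1 - s * y2 + D1 * lap5(y1) + i1
    dx2 = -x2 + s * y1 + (1 + mu) * y2 + D2 * lap5(y2) + i2
    return x1 + dt * dx1, x2 + dt * dx2

x1 = np.zeros((20, 20)); x1[9:11, 9:11] = 1.0   # localized excitation, layer 1
x2 = np.zeros((20, 20))
for _ in range(200):
    x1, x2 = step(x1, x2)
print(float(x1.min()), float(x1.max()))          # field statistics after 200 steps
```

With suitable initial conditions (such as the ones of the following Section), the same loop produces the wavefronts, spirals and labyrinth experiments reported below.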
5.3 Traveling Wavefronts
Reaction-diffusion systems can be considered as an ensemble of a large number of identical subsystems coupled to each other by diffusion. Such systems of coupled cells are often encountered in living structures where transport processes take place, such as living neural tissues, physiological systems and eco-systems, as well as in chemical reactions or in combustion [112, 113]. Traditionally, the local subsystems are defined through a set of nonlinear differential equations. In this context, CNN's represent a powerful tool for their modeling and real-time simulation.
In this Section the CNN defined in Def. 5.2.1 will be used to generate spatio-temporal phenomena with the following choice of the parameters:

µ = 0.7 ;  s = 1 ;  i1 = −0.3 ;  i2 = 0.3 ;  D1 = D2 = 0.1    (5.3.1)

5.3.1 Autowaves
The term autowave was introduced for the first time by R. V. Khokhlov, to indicate "autonomous waves" [109, 113]. Autowaves represent a particular class of nonlinear waves which propagate, without forcing functions, in strongly nonlinear active media [114, 115]. Their propagation takes place at the expense of the energy stored in the active medium; such energy is used to trigger the process in adjacent regions.
This phenomenon is often encountered in combustion waves and chemical reactions, as well as in many biological processes, like propagation in nerve fibers or heart excitation.
Autowaves possess some typical characteristics, fundamentally different from those of classical waves in conservative systems. Their shape remains constant during propagation, and reflection and interference do not take place, while diffraction is a property common to both classical waves and autowaves.
In Fig. 5.5 the formation of two propagating fronts of an autowave can be observed. A 44 × 44
array has been considered. The grey levels in the figures represent the output values: the black
colour represents +1 while the white one represents −1.
The snapshots in the upper part of the figure represent the outputs of the first layer of the
above described CNN at particular time periods. The corresponding snapshots for the outputs of
the second layer are reported in the lower part of Fig. 5.5. The initial conditions for the two layers
are reported in the first two snapshots at the left-hand side of the figure.
Figure 5.5: Generation of two autowave fronts.
From this figure the main properties of autowaves can be easily observed: the wavefront shape
remains unchanged during propagation and no reflection takes place.
Fig. 5.6 shows the mechanism of initiation of the most important type of autowave source: the Reverberator. It consists of a rotating vortex similar to an Archimedean spiral. It spontaneously arises in inhomogeneous media, in which autowaves can break while propagating.
The snapshots in Fig. 5.6 represent the evolution of the first layer of the CNN array. The second layer outputs are not reported, being the opposite of the corresponding first layer outputs. The first snapshot on the left-hand side represents the initial condition of the first layer. It simulates the tail of the autowave front which, due to the medium inhomogeneity, is somewhat longer in the middle of the picture. The second wavefront cannot propagate and two wave-breaks take place. The broken waves propagate more slowly than the preceding wavefront: from the consecutive wavefront positions it is observed that the wave-breaks begin to curl and rotate, forming two spiral waves.
Since more than one spiral wave exists in the medium, the wavefronts emitted by each spiral annihilate when colliding, rather than penetrating one another; therefore no interference takes place. All of the characteristics of autowaves have therefore been verified.
Moreover, since the wave breaking does not take place exactly in the middle of the array (as can be seen from the initial condition), one spiral rotates at a higher frequency. This one, eventually, suppresses all of the other spirals, as can be seen in the last snapshot of Fig. 5.6.
Figure 5.6: Generation of a Reverberator from an autowave break.
Slightly different initial conditions lead to the formation of two spiral waves in the second wavefront, and to the onset of a circular wavefront in the first one, as can be seen in Fig. 5.7.
Just like in the experiments carried out in chemical active media [109], the reverberator suppresses the concentric wave. Depending on the level of inhomogeneity of the medium, the wave emitted by a rotating reverberator can produce new reverberators, showing their ability to reproduce.
This property, usually met in experiments, has been successfully simulated in Fig. 5.8.
The simulations carried out in this section show the main properties of spirals in active media: their ability to arise at inhomogeneities during propagation, to suppress other wave sources and to reproduce themselves. Such considerations are very important in order to gain further insight into the basic mechanisms of ventricular fibrillation, as well as of a dangerous type of cardiac arrhythmia in which the onset of reverberators suppresses the normal heart pacemaker, causing a dramatic increase in the cardiac rate, as seen in experiments on animals [14].
Figure 5.7: The Reverberator annihilates a circular wave, as seen from experiments in various active
media.
Figure 5.8: Onset of a Reverberator from medium inhomogeneity and its reproduction.
Figure 5.9: Inputs of the two Layers for the labyrinth experiment.
5.3.2 Labyrinths

In this Section, an experiment similar to the one by V. Perez-Munuzuri et al. [73], but accomplished with lower-order circuits, is reported. Traveling wavefronts travel all over a labyrinth. This example is interesting for potential engineering purposes, since it represents a suitable application of autowaves to autonomous robot path planning, PCB routing and so on.
Inputs have been used to define the labyrinth, forcing both layers. The control templates are not zero anymore:

B = | B11  B12 | ;   B11 = B22 = | 0  0  0 | ;   B21 = B12 = 0    (5.3.2)
    | B21  B22 |                 | 0  1  0 |
                                 | 0  0  0 |
The input values of the two layers are the same; they are reported in Fig. 5.9 where, as usual, black pixels represent +1 while the others are all zero.
In Fig.s 5.10 and 5.11 the outputs of the two layers are reported. In this case the output of the second layer is more interesting than in the other cases presented because, at the end of the whole diffusion process, it reproduces the entire labyrinth that has been passed through by the wavefronts of layer 1. As in the experiments reported in [73], the wavefronts of layer 1 propagate throughout the medium with a constant speed, breaking into different arms at each fork. These features could be used to find the shortest path between two points in the labyrinth.
Figure 5.10: Outputs of Layer 1 for the labyrinth experiment.
Figure 5.11: Outputs of Layer 2 for the labyrinth experiment.
5.4 Pattern Formation
Autocatalytic chemical reactions coupled with molecular diffusion can generate patterns in biological, chemical and biochemical systems, following Alan Turing's model of morphogenesis [112].
Such phenomena arise from the interaction between different chemicals (also called morphogens), which react with each other and diffuse spatially over the chemical medium, until a steady-state spatial concentration pattern has completely developed.
A typical reaction-diffusion model shows the so-called activator-inhibitor mechanism suggested by Gierer and Meinhardt [112]:

∂A/∂t = F1(A, I) + DA·∇²A
∂I/∂t = F2(A, I) + DI·∇²I    (5.4.1)

A and I being the chemical concentrations of the activator and the inhibitor, respectively, F1(A, I) and F2(A, I) nonlinear functions, and DA and DI the diffusion coefficients.
The diffusion phenomenon takes place spatially: in particular, the activator is responsible for the initial instability in the medium, from which the pattern formation starts. Once this phase is completed, the inhibitor supplies stability. A necessary condition for such a phenomenon to take place is that DA ≪ DI.
In this Section it will be shown that, with a slight modification of the above two-layer CNN, pattern structures develop based on a mechanism similar to the reaction-diffusion phenomenon studied by Turing. Therefore, patterns obtained with this method will be referred to as Turing patterns.
5.4.1 Conditions for the existence of Turing Patterns in arrays of coupled circuits
Goraş et al. [116, 117, 118] dealt with the problem of obtaining Turing Patterns in arrays of coupled
circuits. In particular, in order for such a generalized CNN to generate Turing patterns, the following
conditions have to be satisfied:
1. the cells of the two-dimensional grid CNN should be at least of second order (the aim is to
simulate the interaction between, at least, two chemical species);
2. the isolated cells have a stable equilibrium point;
3. when the cells are coupled, the equilibrium point, which corresponds to a homogeneous pattern,
becomes unstable, and a non-uniform pattern corresponding to another equilibrium point
emerges.
The single cell has the following state model:

ẋ = g(x)    (5.4.2)

where x ∈ R², g : R² → R², with Jacobian matrix:

Dg = | g11  g12 |    (5.4.3)
     | g21  g22 |
With no loss of generality let us assume that (5.4.2) has a fixed point at the origin. For this equilibrium to be stable, the following conditions on the Jacobian elements, in a neighborhood of the origin, must hold:

g11 + g22 < 0
g11·g22 − g12·g21 > 0    (5.4.4)
Circuit (5.4.2) is then coupled to its neighbors. The coupling is obtained by augmenting (5.4.2) with Laplacian templates. The equations of the array are then:

ẋij = g(xij) + D·∇²xij    (5.4.5)

where xij ∈ R² is the state of the cell C(i, j), D = diag(D1, D2) is the diagonal matrix of the diffusion coefficients, and ∇² is the two-dimensional discretized approximation of the Laplacian operator [21]:

∇²xij = | 0   1  0 |
        | 1  −4  1 | ∗ xij    (5.4.6)
        | 0   1  0 |
Let us consider again the origin of the single-cell state space, and its linearization in a neighborhood of the origin, in this new situation. A necessary condition for the origin to become unstable upon coupling is [117]:

D2·g11 + D1·g22 > 0
(D2·g11 − D1·g22)² + 4·D1·D2·g12·g21 > 0    (5.4.7)
This condition is necessary to have a band of unstable spatial modes in the array [117]. Indeed the complete meaning of this second condition is more complex than what has been mentioned; a complete discussion can be found in the cited bibliography and is outside the scope of this Section.
The domain in the parameter space for which the conditions for Turing instability are fulfilled will be called the Turing space of the CNN.
Besides (5.4.4) and (5.4.7), the following conditions should be satisfied too [118]:
1. inside the band of unstable modes at least one spatial mode should exist,
2. the initial conditions should be such that at least one of the unstable modes is activated,
3. the nonlinearity should be such that the final pattern is bounded and stationary.
5.4.2 Turing Patterns in the two-layer CNN

In order to obtain Turing Patterns from the two-layer CNN, some modifications must be made to the cell's equations [119, 120]. First of all, the bias coefficients can be set to zero again. Secondly, the parameter µ, defining the self-feedback of the cell, has to be perturbed by adding a new parameter ε. This is discussed in the following proposition.
Proposition 5.4.1. The M × N two-layer CNN with equations:

ẋ1;ij = −x1;ij + (1 + µ + ε)y1;ij − s·y2;ij + D1·∇²y1;ij
ẋ2;ij = −x2;ij + s·y1;ij + (1 + µ − ε)y2;ij + D2·∇²y2;ij    (5.4.8)

and Zero-Flux boundary conditions satisfies the necessary conditions to admit Turing Patterns for an appropriate choice of its parameters.

Proof. First of all, it is trivially verified that (5.4.8) admits the origin as a fixed point, regardless of the parameter values.
In order to check condition (5.4.4), let us consider (5.4.8) in a neighborhood of the origin:

ẋ1;ij = (µ + ε)x1;ij − s·x2;ij + D1·∇²x1;ij
ẋ2;ij = s·x1;ij + (µ − ε)x2;ij + D2·∇²x2;ij    (5.4.9)
and its Jacobian matrix:

Dg = | g11  g12 | = | µ + ε   −s    |    (5.4.10)
     | g21  g22 |   | s       µ − ε |

So (5.4.4) reduces to:

µ < 0
µ² + s² > ε²    (5.4.11)
In order to have a band of spatially unstable modes, the second necessary condition (5.4.7) must be met. This reduces to:

ε > −µ·(D2 + D1)/(D2 − D1)
[µ·(D2 − D1) + ε·(D2 + D1)]² > 4·s²·D1·D2    (5.4.12)
Conditions (5.4.11)-(5.4.12) define the Turing space of (5.4.8).
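Membership in the Turing space is a direct evaluation of conditions (5.4.4) and (5.4.7) on the Jacobian (5.4.10). A sketch:

```python
def in_turing_space(mu, s, eps, D1, D2):
    """Necessary conditions (5.4.11)-(5.4.12), i.e. (5.4.4) and (5.4.7)
    evaluated on the Jacobian (5.4.10) of the perturbed cell (a sketch)."""
    g11, g12, g21, g22 = mu + eps, -s, s, mu - eps
    stable_cell = (g11 + g22 < 0) and (g11 * g22 - g12 * g21 > 0)
    unstable_band = (D2 * g11 + D1 * g22 > 0) and \
                    ((D2 * g11 - D1 * g22) ** 2 + 4 * D1 * D2 * g12 * g21 > 0)
    return stable_cell and unstable_band

print(in_turing_space(mu=-0.1, s=2, eps=2, D1=0.01, D2=1))  # parameters (5.4.16) -> True
print(in_turing_space(mu=-0.1, s=2, eps=2, D1=1, D2=1))     # equal diffusion -> False
```

The second call illustrates the classical fact that strongly unequal diffusion coefficients (here D1 ≪ D2, the analogue of DA ≪ DI) are needed for the diffusion-driven instability.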
The other above-mentioned conditions still need to be met. Let us look for a solution of (5.4.8) by de-coupling it into M·N decoupled systems of two first-order linear differential equations, considering the M·N orthonormal space-dependent eigenfunctions ΦMN(m, n, i, j) of the discrete Laplacian operator:

∇²ΦMN(m, n, i, j) = ΦMN(m, n, i+1, j) + ΦMN(m, n, i−1, j) + ΦMN(m, n, i, j+1) + ΦMN(m, n, i, j−1) − 4·ΦMN(m, n, i, j) = −k²mn·ΦMN(m, n, i, j)    (5.4.13)

where the k²mn are the corresponding spatial eigenvalues.
In particular, for the zero-flux boundary conditions, the spatial eigenfunctions and eigenvalues assume the following form:

ΦMN(m, n, i, j) = cos( (2i + 1)mπ / 2M ) · cos( (2j + 1)nπ / 2N )
k²mn = 4·[ sin²(mπ / 2M) + sin²(nπ / 2N) ]    (5.4.14)

with m = 0, . . . , M − 1; n = 0, . . . , N − 1.
From the relations which arise between the spatial eigenfunctions and the temporal eigenvalues, it comes out that the connection between diffusing cells can generate patterns if:
1. at least one of the temporal eigenvalues (as a function of k²mn) has a positive real part, and
2. inside the band of unstable modes Bu, at least one spatial mode exists.
The bandwidth depends on the cell parameters as well as on the diffusion coefficients. In particular, the limits k²1, k²2 of Bu are:

k²1,2 = [ (D2·g11 + D1·g22) ± √( (D2·g11 − D1·g22)² + 4·D1·D2·g12·g21 ) ] / (2·D1·D2)    (5.4.15)
Let us choose the following set of parameters:

µ = −0.1 ;  s = 2 ;  ε = 2 ;  D2 = 1 ;  D1 = 0.01    (5.4.16)

Using such a set of parameters, the band Bu of unstable modes turns out to be k²1 ≤ k²mn ≤ k²2, with k²1 = 0.0564 and k²2 = 187.33.
Because of the zero-flux boundary conditions, and restricting the analysis to the case M = N = 2 (with no loss of generality), the following four spatial eigenvalues can be found from relations (5.4.14):

k²00 = 0 ;  k²01 = k²10 = 2 ;  k²11 = 4    (5.4.17)
Figure 5.12: Patterns arising at the first layer output when M = N = 2
Therefore, for the above parameter choice, 3 spatial eigenvalues are contained in the band of unstable modes. Thus it is possible to obtain 3 pattern configurations, each one with 2 available polarities: in conclusion, a whole set of six different patterns.
This proves the thesis.
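The spatial eigenvalues (5.4.14) and the count of unstable modes can be reproduced directly (a sketch; the dictionary layout is an implementation choice):

```python
import numpy as np

def spatial_eigenvalues(M, N):
    """Zero-flux spatial eigenvalues k2_mn of (5.4.14)."""
    return {(m, n): 4 * (np.sin(m * np.pi / (2 * M)) ** 2 +
                         np.sin(n * np.pi / (2 * N)) ** 2)
            for m in range(M) for n in range(N)}

k2 = spatial_eigenvalues(2, 2)
band = (0.0564, 187.33)           # unstable band computed in the text
unstable = [mn for mn, v in sorted(k2.items()) if band[0] <= v <= band[1]]
print(k2)         # approximately {(0,0): 0, (0,1): 2, (1,0): 2, (1,1): 4}
print(unstable)   # three unstable modes, hence 3 pattern configurations
```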
5.4.3 Simulation Results
In the following simulations, particular care must be taken over the accuracy of the numerical integration. Besides the boundary conditions, which have been set to zero-flux, the integration time step used was δ = 0.01. Inaccurate integration can cause the appearance of spurious steady-state conditions.
The M = N = 2 Case. Let us consider the case M = N = 2. Starting from zero initial conditions on both layers, some of the four cells of the first layer are excited with a constant input randomly selected between −0.1 and 0.1. As can be seen from Fig. 5.12, six different patterns have been obtained at the first layer output, as analytically derived in the previous section.
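This experiment can be sketched by integrating (5.4.8) with forward Euler and a small random excitation of the first layer (seed, step count and the integration scheme are illustrative choices, not from the original):

```python
import numpy as np

def lap5(y):
    """5-point Laplacian with zero-flux boundaries (replicated border)."""
    p = np.pad(y, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * y

def simulate(M=2, N=2, mu=-0.1, s=2.0, eps=2.0, D1=0.01, D2=1.0,
             dt=0.01, steps=5000, seed=0):
    """Forward-Euler integration of the Turing CNN (5.4.8) (a sketch)."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(-0.1, 0.1, (M, N))   # small random excitation, layer 1
    x2 = np.zeros((M, N))
    for _ in range(steps):
        y1, y2 = np.clip(x1, -1, 1), np.clip(x2, -1, 1)
        x1, x2 = (x1 + dt * (-x1 + (1 + mu + eps) * y1 - s * y2 + D1 * lap5(y1)),
                  x2 + dt * (-x2 + s * y1 + (1 + mu - eps) * y2 + D2 * lap5(y2)))
    return x1, x2

x1, x2 = simulate()
print(np.sign(x1))   # sign pattern of the final layer-1 state
```

Different seeds activate different unstable modes and therefore select different members of the six patterns of Fig. 5.12; larger M, N reproduce the stripe and spot experiments below.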
The M, N > 2 Case. Hereafter, zero initial conditions are represented in white, while black represents an initial state set to one. Conversely, for steady-state conditions black and white represent +1 and −1 respectively.
Fig. 5.13 shows the pattern formation phenomenon.
The initial conditions are depicted in Fig.s 5.13a,c,e, while the final patterns are shown in Fig.s 5.13b,d,f, respectively. Fig.s 5.13a,b show the case of an 11 × 1 CNN array; Fig.s 5.13c,d an 11 × 3 CNN, and Fig.s 5.13e,f an 11 × 5 CNN. It is seen that the number of cells available for propagation determines the final pattern geometry.
Figure 5.13: Patterns arising from different initial conditions at the first layer output for various M
and N sizes.
Figure 5.14: Another set of patterns arising from various initial conditions (first layer output) when
M = N > 2.
Figure 5.15: Pattern arising from the propagation of a horizontal strip (initial condition).
This is also seen in Fig. 5.14. In particular, the pictures on the left-hand side show the initial conditions, while the ones on the right side show the corresponding final configurations. Fig.s 5.14a and b show the case of an 11 × 11 array. In Fig.s 5.14c and d the size is increased to 21 × 21, with a modification of the initial condition. Instead, in Fig.s 5.14e,f and 5.14g,h the same kind of initial condition used in 5.14a is considered, while the corresponding sizes are 21 × 21 and 44 × 44 respectively.
Fig. 5.15 shows the propagation of a single horizontal strip (initial condition) towards a steady-state condition in which the strip propagates through the available 11 × 10 array (Fig.s 5.15a,b), or a 21 × 10 array (Fig.s 5.15c,d).
A slightly different initial condition (Fig. 5.15e) in the 21 × 21 array leads to the formation of a new pattern (Fig. 5.15f).
The final pattern configuration is shown to depend, once again, on the available array dimensions and on the initial conditions.
It is interesting to note how the results of these simulations resemble the rearrangement of skin stripes and spots observed in many animals during growth. As a matter of fact, mathematical models based on reaction-diffusion PDE's have been reported in the literature [112, 121]. The same initial condition can give stripe or spot formation according to the different part of the animal skin.
Figure 5.16: Formation of a spiral in presence of uncertainties on the capacitors.
5.5 Sensitivity to Parametric Uncertainties and Noise
Let us now consider how the behaviour of the introduced two-layer CNN is affected by parametric uncertainties and noise. The case of a spiral wave obtained with the CNN discussed in Section 5.2 is considered first. Then, the case of Turing Patterns is taken into account.
5.5.1 Spiral Wave: Parametric Uncertainty
Consider the case in which all the pairs of capacitors C1;i,j and C2;i,j associated with the two layers of each cell C(i, j) of the CNN array are affected by uncertainty, namely:

C1;i,j = C0 + δC1;i,j ;  C2;i,j = C0 + δC2;i,j    (5.5.1)

where C0 is the nominal value (always implicitly assumed to be equal to unity) of the capacitors, while δC1;i,j and δC2;i,j are their uncertainties. In this condition, the cells' capacitors have different values, randomly distributed within a bounded range. This can be considered as a model for an inhomogeneous active medium.
Fig. 5.16 shows the snapshots corresponding to the case in which δC1;i,j and δC2;i,j vary in the range
[−0.2, +0.2] (20 percent of the nominal value). It is seen that, in spite of the significant
amplitude of the uncertainties, the spiral is only distorted.
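The uncertainty model of (5.5.1) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the simulator actually used here; the function name, array size, and random seed are arbitrary:

```python
import numpy as np

def perturbed_capacitors(rows, cols, c0=1.0, delta=0.2, seed=0):
    """Model of (5.5.1): each of the two layer capacitors equals the
    nominal value c0 plus an independent uniform uncertainty drawn
    from [-delta, +delta], cell by cell."""
    rng = np.random.default_rng(seed)
    c1 = c0 + rng.uniform(-delta, delta, size=(rows, cols))
    c2 = c0 + rng.uniform(-delta, delta, size=(rows, cols))
    return c1, c2

# In the state equations C1 dx1/dt = f1, C2 dx2/dt = f2 the uncertainty
# simply rescales the right-hand sides element-wise at every step:
c1, c2 = perturbed_capacitors(21, 21, delta=0.2)   # 20% of nominal
```

Each cell thus integrates its own slightly different dynamics, which is what models the inhomogeneous active medium.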
Figure 5.17: Formation of a spiral in the presence of uncertainty on the feedback template.
Let us now take into account the case in which only the cloning templates are affected by
uncertainty. Namely, all the elements of the feedback template matrices reported in Def. (5.2.1)
have been randomly perturbed within bounded ranges. The template coefficients obtained in this
way are then used for all the cells of the array.
The results obtained for an uncertainty of 1% of the nominal values are depicted in Fig. 5.17.
It can be seen that the introduced uncertainty causes the generation of undesired wavefronts.
These collide with one another and with the rising spiral, annihilating one another and the outer part of
the spiral. Nevertheless, observing the subsequent snapshots it is seen that, because the
inner part of the spiral is protected by its external wave fronts, the spiral is able to develop (even if it
takes longer).
5.5.2
Spiral Waves: Noise on the Initial Conditions
As discussed above, the initial conditions of the CNN are in the range [−1, +1], and they determine
the formation of spirals, patterns and so on. Here, noise has been added to the matrices of initial
conditions that lead to the formation of a spiral. The considered noise is uniformly distributed in
the range [−δ, +δ] over the entire CNN lattice.
In the following examples it is shown that even with significant values of δ the noise is rapidly
filtered by the diffusion effect and it does not change the qualitative behaviour of the spiral.
Figure 5.18: Formation of a spiral in the presence of noise on the initial conditions (δ = 0.4).
Fig. 5.18 shows the case in which δ = 0.4. It is worth noting that, due to the limited
number of colors used to represent the output values, the noise peaks are not immediately visible in
all the pictures. However, their effects on the spiral can be observed in the snapshots. As anticipated,
the noise is rapidly filtered and the qualitative behaviour is maintained.
Analogously, Fig. 5.19 shows the case with δ = 0.8. In this case, because the
noise peaks are comparable with the amplitude of the noise-free initial conditions, some spurious
wave fronts can be generated. In the reported case these are annihilated by the rising spiral and the
last snapshot shows a fully developed spiral. In general, however, some undesired spirals could be
generated by the noise and they would not necessarily be annihilated.
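The noisy initial conditions used in these experiments can be generated as in the following sketch. It is illustrative only; in particular, clipping the noisy state back into [−1, +1] is an assumption made here, not something stated in the text:

```python
import numpy as np

def noisy_initial_conditions(x0, delta, seed=0):
    """Add zero-mean uniform noise in [-delta, +delta] to the
    noise-free initial state x0 over the whole CNN lattice, then
    clip back to the standard state range [-1, +1] (an assumption)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-delta, delta, size=x0.shape)
    return np.clip(x0 + noise, -1.0, 1.0)

x0 = np.zeros((21, 21))        # placeholder noise-free initial state
x_noisy = noisy_initial_conditions(x0, delta=0.4)
```

The same routine with delta = 0.8 reproduces the second experiment, where the noise peaks become comparable with the noise-free amplitudes.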
5.5.3
Patterns: Parametric Uncertainties
According to the theory, the presence of noise in Turing patterns affects only the geometry of the
final configuration. Therefore it will not be considered. It is, instead, interesting to examine to what
extent an uncertainty in the CNN parameters affects the stability of the final configuration.
Again, let us first consider the case of uncertainties on the capacitors. Fig. 5.20a shows the initial
conditions and Fig. 5.20b depicts the final pattern in the disturbance-free case; Figs. 5.20c,d,e,f report
the final patterns when an uncertainty of 10%, 20%, 40% and 50% of the nominal value, respectively,
has been added to the cell capacitors. Namely, δC1;i,j and δC2;i,j vary in the range [−0.1, +0.1],
Figure 5.19: Formation of a spiral in the presence of noise on the initial conditions (δ = 0.8).
Figure 5.20: Pattern arising from particular initial conditions (upper left corner picture) with uncertainties on the capacitors.
Figure 5.21: Pattern evolution with uncertainties on the template coefficients.
[−0.2, +0.2] and so on. Higher uncertainty levels prevent the steady state condition from being reached.
It can be noted that uncertainty, in this case, has the effect of varying the geometry of the final
pattern.
Let us now consider the case of uncertainties on the feedback templates. Fig. 5.21 shows the final
patterns when the template values have been perturbed by 0%, 10%, 20% and 40% of their nominal
value, respectively.
Fig. 5.22 shows the results obtained when the capacitors have been perturbed by amounts ranging
from 10% to 50% and, at the same time, the templates have been perturbed by amounts ranging
from 10% to 60%. From these figures it can be observed that the phenomena are fairly robust with
respect to disturbances.
5.6
Conclusions
In this Chapter a novel and fascinating field of application of CNN's has been discussed. The
considered spatio-temporal phenomena are characteristic of nonlinear active media, such as living
structures, usually driven by reaction-diffusion dynamics. Such behaviours are classified as complex
phenomena and include traveling wavefronts as well as morphogenetic and self-organizing patterns.
Many of the considered phenomena had already been observed in CNN's composed of arrays of
Chua's oscillators or degenerate Chua's circuits. Here it has been shown that they can actually be
obtained with a simple Chua and Yang CNN model.
A thorough analytic study of the proposed cell has been performed. The cases of traveling
Figure 5.22: Pattern evolution with uncertainties on both the template coefficients and the capacitor values.
wavefront and Turing pattern formation have been considered. Finally, simulation results have
been presented for the purpose of evaluating the effect of perturbations.
It appears that CNN's can represent a powerful tool to investigate Complexity in many of its
various aspects, some of which are not yet fully developed and completely understood.
Part II: Implementation and
Design
CNN implementations can be subdivided into programmable and fixed-template CNN's. In fixed-template
CNN's the values of the cloning templates are set by the designer and cannot be
changed once the chip has been fabricated. Conversely, programmability means that the user can
set the template values within an allowed range.
Although fully digital CNN implementations have been presented in the literature [122], most of
the CNN chips are either fully analog or mixed-signal architectures [10, 11, 13]. Among these, it is
possible to distinguish between digitally-programmable and analogically-programmable CNN's. The
former class includes all of those chips for which the desired template is chosen by a digital word7
[123, 124, 125, 126]. In the latter class, instead, both the template and the weighted signal vary in a
continuous range [127, 128, 129, 130].
In Chapter 7 an analogically-programmable CNN is presented. Several choices are possible for
implementing the programmable templates. Among these, the most common and appropriate are
the use of Operational Transconductance Amplifiers (OTA's) [127, 128, 131], tunable resistors
[132, 133], current mirrors [134] and analog multipliers [135, 136, 137].
A new multiplier has been realized to implement the programmable synapses of the CNN discussed
in Chapter 7. The analysis, design and implementation of this S²I switched-current8 multiplier
[138, 139] is the topic of Chapter 6.
7 Within a discrete-finite set of values.
8 The name S²I comes from the fact that the input current is sampled in two phases instead of one only.
Chapter 6
A Four-Quadrant S²I Switched-Current Multiplier
Switched-current (SI) circuits [140] represent a feasible alternative to switched-capacitor (SC)
[141, 93] circuits, especially because of their compatibility with digital technology.
Only one switched-current (SI) multiplier has been presented so far [142]. The complexity of this
circuit required one extra clock phase besides the normal clock phases used in common
2nd generation SI cells, two explicit capacitors to cope with clock feedthrough problems, and a
regulated-cascode architecture to deal with channel-length modulation effects. The multiplication of any
two given currents x and y is accomplished by evaluating their quadratic terms:
(x + y)² − x² − y² = 2xy      (6.0.1)
One smart aspect of Leenaerts' [142] multiplier is that every quadratic term is evaluated by the
same squarer circuit in different clock phases, avoiding in this way the need for precisely matched
squarer circuits, as is the case in some continuous-time current multipliers [143].
This Chapter presents an alternative implementation of the latter approach by using the S²I
switched-current technique, which has already proved effective in filtering and data-conversion
applications [144, 145, 146, 147].
One of the most important features of the S²I technique is that it compensates for the analog errors
that are mainly due to charge injection from the MOS switches. Essentially, the signal-dependent
clock feedthrough is sampled separately and added to the corrupted sampled signal to dramatically
reduce the error due to the charge injection. This makes additional "replica circuitry" for
offset compensation unnecessary.
Another improvement in the proposed design is that no cascode transistors have been used to
minimize errors due to the finite gm /g0 ratio. Instead, these errors were minimized by varying the
current gain of a current mirror. Moreover, thanks to the adoption of the S²I technique, no capacitors
have been used, making the implementation more suitable for a standard digital technology.
The multiplier has been implemented using a 2µm n-well CMOS technology. Experimental
results are in agreement with the theoretical findings.
The following are brief highlights of the measurement results:
1. 0.425 million multiplications per second,
2. 1.7% total harmonic distortion for a 35mA (50Hz) sinusoidal input,
3. 206KHz of bandwidth,
4. 50dB of Signal-to-Noise Ratio (SNR) and
5. 0.3mW zero-input power consumption for a ±3V power supply.
A complete set of detailed experimental results is provided at the end of this Chapter.
6.1
Detailed analysis of the S²I memory cell
One of the most severe problems affecting SI circuits is the charge injection coming from the MOS
switches. This is the well-known problem of clock feedthrough [140, 148, 149]. Many different
approaches have been developed to cope with this problem, including replica circuits [148, 150, 151],
dummy switches [140, 127], algorithmic approaches [140, 152] and so on [153]. The S²I technique
[144, 145, 154, 155] deals with this problem in a very effective way, as will be shown in this Section.
Essentially, the signal-dependent clock feedthrough that corrupts the input signal is sampled
and stored in the initial sampling phase of the current-copier operation and then algebraically
added to the corrupted current to minimize the corresponding error. Furthermore, more sophisticated
circuit techniques have been applied to this elementary circuit architecture [146, 147] in order to
cope with other problems like switch IR drops and charge injection coming from the drain-gate and
drain-substrate capacitances.
Figure 6.1: S²I memory cell and clock waveforms.
Only the basic cell has been considered in the design of the multiplier.
The general circuit of an S²I cell and the associated clock phases are shown in Fig. 6.1.
Let us now perform a detailed small-signal analysis of the cell. In all cases refer to Fig. 6.2,
which depicts the equivalent circuits for the various clock phases. During phase φ1a of a given clock
period, arbitrarily called period (n − 1), the NMOS transistor MC is diode-connected while the
PMOS transistor MF behaves as a current source. This is depicted in Fig. 6.2a. The input current
ii(n − 1) is stored into MC by means of i1a:
i1a = [gmC/(gmC + gdsF + gdsC)] · ii(n − 1)      (6.1.1)
When the switch that realizes the diode connection of the NMOST is opened, it inevitably injects
an undesired charge into the NMOST due to the clock feedthrough, thus corrupting the stored
current. Unfortunately, this charge depends on the input signal ii itself. The NMOST
is referred to as the coarse memory. The input resistance of the memory cell during this phase is
ri = (gmC + gdsC + gdsF)⁻¹ ≅ gmC⁻¹.
During φ1b the current error ∆i just introduced is sampled and stored (by difference with the
stored current and the current ii still present at the input) by diode-connecting the PMOST, as
depicted in Fig. 6.2b. In this case the PMOST is referred to as the fine memory. Here, CC is
the total capacitance between the gate and the source of MC that stores the current i1a plus the
undesired current error. Moreover, due to the drain-gate capacitance CdgC, an additional current
error is fed back to the gate of MC.
Figure 6.2: Small signal analysis for the S²I memory cell. Phase: (a) φ1a; (b) φ1b; (c) φ2.
This is commonly modeled by augmenting the total output conductance g0C of MC by an additional
contribution [140]:
g0C = gdsC + [CdgC/(CC + CdgC)] · gmC      (6.1.2)
So the current i1b stored in the fine memory MF is:
i1b = [ii(n − 1) − i1a − ∆i] · gmF/(g0C + gdsF + gmF)      (6.1.3)
It is worth noting that, because ii(n − 1) ≅ i1a, only the coarse-memory error is stored into the
fine memory. Moreover, this also means that most of the input current ii(n − 1) is nulled, making
the actual input impedance smaller than in the previous sub-phase.
In order to evaluate the input resistance ri shown in the small-signal equivalent circuit on the
right-hand side of Fig. 6.2b, the node equation can be written neglecting the contribution ∆i due
to the feedthrough:
ii(n − 1) − i1a = v (gmF + g0C + gdsF)      (6.1.4)
Substituting (6.1.1) into (6.1.4):
ii(n − 1) − [gmC/(gmC + gdsF + gdsC)] · ii(n − 1) = v (gmF + g0C + gdsF)      (6.1.5)
Therefore:
ri = v/ii(n − 1) = (gdsF + gdsC) · [(gmC + gdsF + gdsC) · (gmF + g0C + gdsF)]⁻¹ ≅ (gdsF + gdsC)/(gmC gmF) = (1/Ψ) · (1/gmF)      (6.1.6)
It is seen that the dimensionless constant Ψ is the voltage gain of a CMOS inverter in which MF
works as a current source while MC works as the driver.
Finally, in phase φ2, the currents stored in the two transistors are added up at the drain node
and supplied to the generic load gL. There is, however, a fixed output current offset, δI, due to the
last, signal-independent, charge injection; this is of minor concern with respect to the compensated
signal-dependent error. The final output current i0(n − 1/2) is:
i0(n − 1/2) = −[∆i + i1a + i1b + δI] · gL/(g0C + g0F + gL)      (6.1.7)
where δI is the offset current due to the signal-independent clock feedthrough in MF and g0F is the
total output conductance of MF augmented by the feedback effect due to its drain-gate capacitance.
Notice that during this phase the output resistance is z0 = (g0C + g0F)⁻¹.
Substituting relationships (6.1.1)-(6.1.3) into (6.1.7) and grouping common terms, the following
expression is obtained for the output current:
i0(n − 1/2) = −ii(n − 1) · [gmF/(gmF + g0C + gdsF)] · [gL/(g0C + g0F + gL)] − ∆i α − δI · gL/(g0C + g0F + gL)      (6.1.8)
where:
α = [1 − gmF/(gmF + gdsF + g0C)] · gL/(g0C + g0F + gL)      (6.1.9)
Notice that α is composed of two factors: the first one, in brackets, is close to 0 while the other one
is less than 1. Therefore, in (6.1.8), the second and third terms can be neglected. Indeed, this
proves the clock-feedthrough cancellation property of the S²I cell. Let us further simplify (6.1.8)
under some assumptions. First of all, assume that all the MOSTs have the same value of small-signal
transconductance gm. Two different cases have to be distinguished at this point: if the load
consists of a diode-connected transistor (e.g. another current-mode stage) then gL = gm. If, instead,
the load is another S²I memory cell, then φ2 is partitioned into φ2a (during which gL = gm) and φ2b
(during which gL ≅ (gmC gmF)/(gdsF + gdsC) = Ψgm).
Let us first consider the case of a diode-connected transistor as load. An underestimate of the
information retrieved from the memory is obtained by setting gdsF = g0F in (6.1.8). Hence
(6.1.8) can be simplified to:
i0(n − 1/2) ≅ −ii(n − 1) · [gm/(g0C + g0F + gm)]² − δI · gm/(g0C + g0F + gm)      (6.1.10)
Taking g0 = g0F = g0C, (6.1.10) can be approximated as:
i0(n − 1/2) ≅ −ii(n − 1) · 1/(1 + 4g0/gm) − δI · 1/(1 + 2g0/gm) = −ii(n − 1) · 1/(1 + 4g0/gm) + ioffset      (6.1.11)
where ioffset represents the output current offset. Given that the ideal transfer function of an S²I
cell is Hi(z) = I0(z)/Ii(z) = −z^(−1/2), we can take the z-transform:
I0(z) ≅ Ii(z) · Hi(z)/(1 + 4g0/gm) + Ioffset(z)      (6.1.12)
It is seen that an attenuation due to the finite conductance ratio has to be expected on the retrieved
signal. A less detailed analysis, taking into consideration only the effect of the channel-length
modulation of MF and MC during the last phase (φ2), would give a perfect signal-dependent clock-feedthrough
cancellation. The corresponding final expression is:
I0(z) ≅ Ii(z) · Hi(z)/(1 + g0T/gm) + Ioffset(z)      (6.1.13)
where g0T = g0F + g0C is the total output conductance of the memory cell.
When, instead, the load of the memory cell is constituted by another S²I cell, the load
changes to gL ≅ Ψgm during φ2b. Therefore gL dominates over the output conductance
of the S²I cell, allowing a complete transfer of the output current i0(n − 1/2) to the next stage. This
implies that (6.1.10) is replaced by:
i0(n − 1/2) ≅ −ii(n − 1) · [gm/(g0C + g0F + gm)] · Ψgm/(g0C + g0F + Ψgm) − δI · Ψgm/(g0C + g0F + Ψgm)      (6.1.14)
Notice that the fractional term multiplying δI on the right-hand side of (6.1.14) is very close to 1.
Moreover, taking g0 = g0F = g0C, this can be approximated as:
i0(n − 1/2) ≈ −ii(n − 1) · 1/(1 + 2g0/gm) − δI      (6.1.15)
In other words, considering that the total output conductance is 2g0, the final expression
describing the behavior of the cell is formally equal to (6.1.13). If, however, as commonly assumed in
SI circuit analysis [140], the effect of the devices' output conductance (due to the channel-length
modulation effect) is neglected during the sampling phase (namely during φ1a and φ1b) and considered only
during φ2 (analogously to the procedure used to obtain (6.1.13)), then: (a) there is no current attenuation
during the sampling phase and (b) the Norton equivalent of the S²I cell (with output conductance
equal to 2g0) transfers the retrieved current to a load with conductance equal to Ψgm during φ2b.
So the following final expression is obtained:
I0(z) ≅ Ii(z) · Hi(z)/(1 + 2g0/(Ψgm)) + Ioff(z)      (6.1.16)
This analytically proves that the S²I cell not only achieves the feedthrough cancellation but
also reduces the error due to the finite gm/g0 ratio as compared with common 2nd generation cells
[144, 146].
This analytic result is in complete agreement with the simulation [144] and experimental results
[146, 147] obtained by Hughes et al.
The sampling frequency in common 2nd generation cells is limited by the settling time of the
cell sampler [140]. In the S²I case the cell treats the coarse-phase φ1a settling error like the other
errors and so attempts to cancel it during the fine phase φ1b. This means that the bandwidth is the
same as that of a standard 2nd generation cell.
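The benefit can be quantified with a quick numeric comparison of the gain-error terms, taking the attenuation denominators of (6.1.13) and (6.1.16) at face value and using illustrative conductance values:

```python
# Gain error of a conventional 2nd-generation SI cell versus the S2I
# cell driving another S2I cell.  Values are examples only.
gm, g0 = 100e-6, 1e-6
psi = gm / (2 * g0)          # inverter gain Psi of (6.1.6)
g0T = 2 * g0                 # total output conductance of the cell

err_conventional = 1 - 1 / (1 + g0T / gm)          # from (6.1.13)
err_s2i = 1 - 1 / (1 + 2 * g0 / (psi * gm))        # from (6.1.16)
print(err_conventional / err_s2i)   # improvement roughly equal to Psi
```

For these numbers the conventional error is about 2% while the S²I error drops below 0.05%, i.e. an improvement on the order of the inverter gain Ψ.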
6.2
The Multiplier’s Architecture
Starting from the algorithm introduced by Leenaerts et al. [142], an alternative circuit implementation
is now presented. The product of two currents x and y fed at the inputs of the multiplier is
accomplished by evaluating the left-hand side of (6.2.1):
(x + y)² − x² − y² = 2xy      (6.2.1)
A fundamental requirement is that these inputs be kept constant during the evaluation
process. In order to obtain the square of a current, the simple squarer circuit shown in Fig. 6.3 has
been considered [156].
For this circuit, a relationship including the input/output offset errors gives:
io = Ib + e1 + (ii + e2)²/(4Ib)      (6.2.2)
where io and ii are the output and input currents respectively, Ib is a constant current related to the
bias voltage Vbias, e1 is the output offset error and e2 is the input offset error.
Figure 6.3: The current squarer.
Figure 6.4: Block diagram of the S 2 I multiplier.
The approach introduced in [142] consists of using the same squarer circuit several times while
storing intermediate results in the SI memory cells. In this way the problem of matching several
squarer circuits (a problem of quarter-square multipliers [143], for example) is avoided.
The block diagram of the whole system realizing the above algorithm is depicted in Fig. 6.4.
The algorithm consists of four main steps described by means of four main clock phases: φ1,
φ2, φ3, φ4. The circuitry consists of three building blocks: the current squarer, an adjustable
current mirror and two S²I memory cells. From this figure it is seen that, analogously to [142],
a complementary version of the squarer circuit shown in Fig. 6.3 is used. In this way, since the
adopted technology is n-well CMOS, it is possible to avoid the bulk effect on M2
by connecting its source to the bulk. Moreover, by using the mirror and the current source Ib, it
is possible to remove the DC contribution from the output of the squarer and to store only small
signals in the S²I memory cells.
Let us examine the algorithm. During φ1 both x and y are summed at the input node of the
current squarer. The squared result is stored in memory 1. During φ2 only x is fed in; the result of
the squaring operation is added at node A to the previous result, fed back (and sign-inverted) by
memory 1. This sum is stored in memory 2. During φ3 no input is presented to the squarer, which
feeds its residual to node A. At the same time the latter partial sum is fed (and sign-inverted) to
node A by memory 2, and the total current is stored into memory 1. Finally, during φ4 the current
y is squared and added to the previous sign-inverted subtotal given by memory 1.
The total current Iout is provided at the output. Applying this algorithm to (6.2.1) by using
(6.2.2), and assuming ideal behavior of the circuits and devices (infinite output/input impedance
ratio of the building blocks constituting the circuit), the following first-order relationship is obtained:
iout = [Ib + e1 + (x + y + e2)²/(4Ib)] − [Ib + e1 + (x + e2)²/(4Ib)] + [Ib + e1 + (e2)²/(4Ib)] − [Ib + e1 + (y + e2)²/(4Ib)] = xy/(2Ib)      (6.2.3)
It is seen that, thanks to the third term, in which no input to the squarer exists, the errors due to the
offsets in the squarer are canceled out and the final result is proportional to the product of x and
y. Because the transfer function of any single S²I memory cell is of inverting type, the order in
which the various terms are evaluated and the intermediate results are retrieved is important, and
it can help avoid the use of additional current-inverter circuits.
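A behavioral model of the four-phase sequence makes the offset cancellation of (6.2.3) easy to verify. The sketch below reuses a single squarer function implementing (6.2.2), with arbitrary illustrative offset values, and assumes ideal building blocks:

```python
# Behavioral model of the four-phase multiplication algorithm: the same
# squarer (6.2.2), with input offset e2 and output offset e1, is reused
# in every phase, and the alternating signs cancel every offset term.
Ib, e1, e2 = 50e-6, 2e-6, 1e-6   # illustrative values (amperes)

def squarer(ii):
    """Current squarer of (6.2.2)."""
    return Ib + e1 + (ii + e2) ** 2 / (4 * Ib)

def multiply(x, y):
    # phase signs: +squarer(x+y), -squarer(x), +squarer(0), -squarer(y)
    return squarer(x + y) - squarer(x) + squarer(0.0) - squarer(y)

x, y = 10e-6, -7e-6
print(multiply(x, y), x * y / (2 * Ib))   # both equal xy/(2 Ib)
```

The constant terms Ib + e1 cancel in pairs, and the e2 cross-terms cancel thanks to the zero-input third phase, leaving exactly xy/(2Ib) as in (6.2.3).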
As recalled in Section 6.1, the output node of an S²I cell can be seen as a virtual ground. This
minimizes the channel-length modulation effect, and thus the problem of the finite gm/g0 ratio, in
comparison with other SI cells not utilizing cascode or feedback-based approaches.
However, as seen in Section 6.1, an attenuation in the retrieved contents of the memory cells is
expected. Among the four steps, the biggest unbalance occurs during φ4, because the squared input
signal is directly added to the rest of the previously calculated result without an equal attenuation.
Therefore, during this last phase the ratio of the mirror is changed in order to compensate for this
unbalance. The mirror has unity gain during all the phases but φ4. Strictly speaking, similar
problems are also expected in phases φ2 and φ3. Therefore, for better accuracy, a similar ratio
tuning could also be applied in those phases with only a small increase in the complexity of the
system. However, simulation results have clearly shown that this is not necessary for a satisfactory
accuracy (nonlinearity of around ±1% FS). The analysis of this behavior is reported in the next
Section. The circuit schematic of the complete multiplier is shown in Fig. 6.5.
In this figure it is clearly seen how the current gain of the mirror is slightly changed during φ4
by adding a small-area diode-connected MOST in parallel to the existing one. This, however, causes
an extra bias current to flow out of the right branch of the mirror due to the unbalance in
Ib. This adds to the constant offset error present at the output of the S²I cells and causes
a constant current offset on Iout.
This undesired offset can be canceled in two ways. If the multiplier is going to be used alone, an
extra current can be added during φ4 by using an additional current source and a current-steering
switch. This is realized by M10 as shown in Fig. 6.5, and it is what has been done
in the fabricated chip whose experimental results are discussed in the following. If, instead, many
multipliers are going to be used on the same chip (as in the case of a neural network, adaptive
filters and so on), then a suitable alternative might consist of realizing another multiplier
(namely a replica multiplier) without inputs. The output current offset of this replica is added
to the outputs of all the other multipliers by using current mirrors. In this way the output offset
of the other multipliers can be drastically reduced in spite of process variations.
Seven of the nine control signals used to drive the switches are depicted in Fig. 6.6. The other
two signals are φ2 + φ3 (the complement of φ1 + φ4) and φ3 + φ4 (the complement of φ1 + φ2).
The nine control signals are obtained as combinations of the various "master" phases. It can be
noticed that, while the sub-phases used to control the internal switches of the memory cells are
non-overlapping (namely φ1a + φ3a, φ1b + φ3b, φ2a, φ2b), the current-steering switches are controlled
by signals with overlapping rising and falling edges (φ1 + φ4, φ1 + φ2, φ4). This minimizes the
generation of spikes without interfering with the operation of the circuit.
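The complement relation between these composite signals can be illustrated with a small Boolean model of one clock period. This sketches the timing logic only, not the actual clock-generator circuit, and the four-slot representation is an assumption made here:

```python
# Boolean sketch of one clock period, split into four master-phase
# slots; phi_k is high only in slot k.
phases = {k: [slot == k for slot in (1, 2, 3, 4)] for k in (1, 2, 3, 4)}

def phase_or(a, b):
    """OR of two phase waveforms, slot by slot."""
    return [x or y for x, y in zip(a, b)]

phi23 = phase_or(phases[2], phases[3])   # phi2 + phi3
phi14 = phase_or(phases[1], phases[4])   # phi1 + phi4
phi34 = phase_or(phases[3], phases[4])   # phi3 + phi4
phi12 = phase_or(phases[1], phases[2])   # phi1 + phi2
# each composite signal is the complement of its partner over the period
print([not v for v in phi14] == phi23)   # True
```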
6.3
Analysis and Design of the S²I multiplier
In this Section the behavioral analysis of the S²I multiplier is carried out. The presented approach
takes into account the non-idealities of the circuits and devices. Finally, an algorithm for the
circuit design of the multiplier is discussed.
Figure 6.5: Circuit schematic of the S²I multiplier.
Figure 6.6: Control signals for the switches.
6.3.1
Circuit Analysis of the multiplier
In [144, 146, 147] the behavior and some of the applications of the S²I cell have been discussed.
However, in order to analyze the proposed multiplier, the effect of the finite conductance ratio in
the memory cells, as well as in the other building blocks, has to be taken into account. These topics
are discussed in the following paragraphs.
In the following, for the sake of simplicity, it will be assumed that all the gm's are equal and all
the gds's are equal.
Strictly speaking, a complete analysis would require analyzing the behavior of the multiplier in
seven phases (φ1a, φ1b, φ2a, φ2b, φ3a, φ3b and φ4). However, taking advantage of the analysis of
the S²I memory cell carried out in Section 6.1, we can consider its small-signal equivalent circuit
as follows. During φ1, φ2 and φ3, a memory cell working in the sampling phase (composed of
the two sub-phases "a" and "b") has a small-signal equivalent circuit consisting of a conductance
equal to Ψgm. The current stored in the cell is the one flowing into this conductance. During the
retrieval phase the small-signal circuit is composed of an ideal current source, supplying the current
stored in the previous phase, in parallel with a conductance equal to 2gds. Notice that, although the
cells supply a fixed output offset when the stored current is retrieved, we can implicitly
consider this offset as part of the squarer offset e1. During φ4, instead, the circuit load is assumed
to be the generic conductance gL.
Therefore the analysis is carried out in the four main phases φ1, φ2, φ3 and φ4.
Let us now refer to the complete circuit schematic shown in Fig. 6.5 and to the small signal
equivalents shown in Fig. 6.7.
The current source I1 depicted in Fig. 6.5 is implemented by a single PMOST supplying
a constant current equal to Ib. According to the discussion in Section 6.2, in phase φ1 the mirror
M4a − M5 supplies the following current:
i1 = −[e1 + (x + y + e2)²/(4Ib)]      (6.3.1)
The corresponding small-signal equivalent is shown in Fig. 6.7a. Here g0M represents the output
conductance of the right-hand branch of the current mirror. Therefore, following the previous
discussion, g0M = 2gds. Only cell 1 is connected to node A and the input conductance of
the memory cell is Ψgm. So the current stored into it is:
is1 = [Ψgm/(Ψgm + 2gds)] · i1      (6.3.2)
Figure 6.7: Small signal equivalent circuits for the S²I multiplier. (a) phase φ1; (b) phase φ2; (c) phase φ3; (d) phase φ4.
In phase φ2 the current i2 supplied by the mirror is added to is1, retrieved from cell 1, and the result
is stored in cell 2. Again, the actual current stored in cell 2 is the one flowing into Ψgm.
Hence, from the small-signal equivalent circuit shown in Fig. 6.7b:
i2 = −[e1 + (x + e2)²/(4Ib)] ,   is2 = [Ψgm/(Ψgm + 4gds)] · (i2 − is1)      (6.3.3)
In phase φ3 the mirror supplies i3, which is added to is2 supplied by cell 2, and the result is3 is stored
into cell 1. So, considering the small-signal equivalent circuit depicted in Fig. 6.7c:
i3 = −[e1 + (e2)²/(4Ib)] ,   is3 = [Ψgm/(Ψgm + 4gds)] · (i3 − is2)      (6.3.4)
Finally, in phase φ4 the mirror changes its ratio. This is accomplished by inserting M4b in parallel
with M4a. The ratio changes from 1 : 1 to 1 : β with β < 1. However, because the bias current in the
left branch of the mirror is not changed, the extra current (1 − β)Ib is supplied by the right-hand
branch of the mirror. This represents an output offset that can be canceled as discussed in the
previous Section.
With regard to the small-signal analysis, the mirror supplies i4, which is added to is3 supplied by
cell 1. The result i0 flows into the output load gL. So, from the small-signal circuit of Fig. 6.7d:
i4 = −[e1 + (y + e2)²/(4Ib)] · β ,   i0 = [gL/(gL + 4gds)] · (i4 − is3)      (6.3.5)
Substituting (6.3.2), (6.3.3) and (6.3.4) into (6.3.5), the following expression is obtained:
i0 = {i4 − [Ψgm/(Ψgm + 4gds)] · (i3 − [Ψgm/(Ψgm + 4gds)] · (i2 − [Ψgm/(Ψgm + 2gds)] · i1))} · gL/(gL + 4gds)      (6.3.6)
Defining:
η = Ψgm/(Ψgm + 4gds) ,   ρ = gL/(gL + 4gds)      (6.3.7)
the relationship (6.3.6) can be approximated as:
i0 ≅ [i4 − (i3 − (i2 − η i1) η) η] ρ = (i4 − i3 η + i2 η² − i1 η³) ρ = i ρ      (6.3.8)
where i is simply a current proportional to the actual output current i0. Let us then analyze i by
substituting i1, i2, i3 and i4 from (6.3.1)-(6.3.5) into (6.3.8). After some algebra the following
relationship is obtained:
i = e1 (η³ − η² + η − β) + (e2²/(4Ib)) (η³ − η² + η − β) + (x²/(4Ib)) η² (η − 1) + (y²/(4Ib)) (η³ − β) + (x e2/(2Ib)) η² (η − 1) + (y e2/(2Ib)) (η³ − β) + (η³/(2Ib)) x y      (6.3.9)
The first two terms of the right-hand side represent an offset, the last term is the desired result, while
the remaining terms constitute the nonlinear distortion. Note that η is very close to, but less than, 1.
Therefore the third and fifth terms, being multiplied by the factor η²(η − 1), can be neglected. So
relationship (6.3.9) can be rewritten as:
i = ioff + (y²/(4Ib)) (η³ − β) + (y e2/(2Ib)) (η³ − β) + (η³/(2Ib)) x y ,
ioff = e1 (η³ − η² + η − β) + (e2²/(4Ib)) (η³ − η² + η − β)      (6.3.10)
where ioff is the offset current. The nonlinear error is canceled by setting the current mirror ratio
as:
β = β0 = η³ = 1/(1 + 4gds/(Ψgm))³      (6.3.11)
The final expression is therefore:
i = e1 (−η² + η) + (e2²/(4Ib)) (−η² + η) + (η³/(2Ib)) x y      (6.3.12)
It is worth noting that, because η ≅ 1, the offset is almost canceled as well. So, finally:
i0 ≅ (η³ ρ/(2Ib)) x y      (6.3.13)
The analysis carried out in this section demonstrates the main advantage of the proposed architecture: the nonlinearity can be effectively canceled by selecting the appropriate mirror gain.
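The cancellation mechanism of (6.3.10)–(6.3.11) can be checked numerically. The sketch below uses illustrative element values (the conductances, bias current, inverter gain Ψ and offset currents are assumptions, not the fabricated values) and compares a unity mirror gain against the compensated choice β = η³:

```python
# Numeric sketch of the cancellation predicted by (6.3.10)-(6.3.11).
# All element values are illustrative assumptions, not the fabricated ones.
gm, gds, gL = 50e-6, 0.5e-6, 1e-3    # assumed small-signal conductances
Psi = 50.0                            # assumed inverter voltage gain gm/(2*g0)
Ib = 20e-6                            # squarer bias current
x, y = 10e-6, 10e-6                   # multiplier inputs
e1, e2 = 0.2e-6, 0.2e-6               # assumed offset currents

eta = Psi * gm / (Psi * gm + 4 * gds)

def i_of(beta):
    """Current i of (6.3.10) for a given mirror gain beta."""
    i_off = (e1 + e2**2 / (4 * Ib)) * (eta**3 - eta**2 + eta - beta)
    return (i_off
            + y**2 / (4 * Ib) * (eta**3 - beta)
            + y * e2 / (2 * Ib) * (eta**3 - beta)
            + eta**3 / (2 * Ib) * x * y)

ideal = eta**3 / (2 * Ib) * x * y     # desired output term
err_unity = abs(i_of(1.0) - ideal)    # naive unity mirror gain
err_comp = abs(i_of(eta**3) - ideal)  # compensated beta = eta^3 per (6.3.11)

# With beta = eta^3 the (eta^3 - beta) distortion terms vanish exactly;
# only the small residual offset proportional to (eta - eta^2) remains.
assert err_comp < err_unity
```

With these numbers the compensated error is more than an order of magnitude below the uncompensated one, mirroring the analytical result.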
6.3.2
Circuit Design
Let us consider the circuit diagram shown in Fig. 6.5 and, for the sake of clarity, assume a symmetric power supply (Vdd = −Vss). Consider the two memory cells composed of M6–M9 and the corresponding switches. The quiescent drain voltages of the MOSTs are chosen so as to stay at ground. To prevent the drain voltages of these transistors from jumping when the switches turn on or off, M6–M9 are designed for:
|VGS | = |VDS | = |Vdd |
(6.3.14)
Therefore Vref = 0V. The bias current ID can be chosen to be a safe value for the adopted technology.
Alternatively, it can be chosen to satisfy other possible requirements on gm and/or gds or for a desired
S/N ratio. From these considerations the aspect ratios are determined as:
(W/L)NMOST = 2ID / (KN (Vdd − VTN)² (1 + λN Vdd)),   (W/L)PMOST = 2ID / (KP (Vdd − |VTP|)² (1 + λP Vdd))
(6.3.15)
There is still a degree of freedom that allows us to choose either W or L. Two alternatives are possible. If the output impedance of the cell is of major concern, then L is fixed and W is determined accordingly. The second alternative concerns the area WL: larger areas minimize the clock feedthrough, while smaller areas are necessary for high speed.
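The sizing step of (6.3.14)–(6.3.15) can be sketched as below. The process parameters KN, VTN and λN are hypothetical placeholders, not those of the actual 2µm run:

```python
# Illustrative sizing per (6.3.14)-(6.3.15).  KN, VTN and lambdaN are
# hypothetical process parameters, not those of the actual 2um technology.
def aspect_ratio_nmos(ID, Vdd, KN=50e-6, VTN=0.8, lamN=0.02):
    """W/L of an NMOST biased at drain current ID with |VGS| = |VDS| = Vdd."""
    return 2 * ID / (KN * (Vdd - VTN)**2 * (1 + lamN * Vdd))

wl = aspect_ratio_nmos(ID=20e-6, Vdd=3.0)   # +/-3V supply as in the text

# The remaining degree of freedom fixes either W or L: here a long channel
# is chosen for high output impedance, and W follows from the ratio.
L = 30e-6
W = wl * L
```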
Both transistors M4 and M5 constitute a current mirror. Hence, to minimize the current error
due to the channel modulation effect, the quiescent drain voltage of M4a is chosen to stay at ground.
M1 , M2 and M3 constitute the squarer circuit and it is assumed that all of them are equal [156].
Hence, from the above considerations it follows that the quiescent drain voltage of M1 stays at Vdd /2.
The square law holds for |ii | < 2Ib , [156]. Therefore, a minimum value for Ib is fixed because the
maximum value of |ii | is essentially given by |xmax | + |ymax |.
Taking into consideration the above assumptions for the node voltages and the fact that M1 ,
M2 and M3 are biased for a drain current equal to Ib /2, while M4a and M5 are biased for a drain
current equal to Ib , the design of these transistors is then straightforward.
In (6.3.11) the mirror ratio (current gain) has been obtained. Assuming that (W/L)5 = (W/L)4a
, the aspect ratio of M4b is obtained as follows:
W
L
=
4b
W
L
4a
1−β
β
(6.3.16)
However, the final design of M4b also depends on the actual voltage drop on the switch in series with
M4b . A common measure of the accuracy is given by the difference between the multiplier’s actual
and ideal output, at full scale, as a percentage of the full scale itself. This is known as internal trim
error [157]. This includes the effect of offset, feedthrough, nonlinearity and scale-factor errors. A
plot of versus β , obtained by HSPICE simulation for a 100KHz clock frequency and large signals
is shown in Fig. 6.8.
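The internal trim error figure can be stated compactly as follows; the "measured" value used here is a made-up placeholder chosen only to reproduce a ~1%FS result:

```python
# Hypothetical illustration of the internal trim error [157]: the deviation
# of the actual full-scale output from the ideal one, as a percentage of
# full scale.  The "measured" value below is a made-up placeholder.
def trim_error_percent(i_actual_fs, i_ideal_fs):
    return abs(i_actual_fs - i_ideal_fs) / abs(i_ideal_fs) * 100.0

i_ideal = 5000.0 * 35e-6 * 35e-6   # ideal Iout = 5000 A^-1 * Ix * Iy = 6.125 uA
i_actual = 6.064e-6                # assumed measured full-scale output
err = trim_error_percent(i_actual, i_ideal)   # about 1 %FS
```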
Let us now consider the switches. The control voltages for the switches go from rail to rail while
the terminals of the switches stay at a voltage close to ground during the whole operation of the
circuit. Therefore the switches can easily be designed using either minimum-size NMOS transistors or CMOS transmission gates. However, for the adopted technology, no appreciable difference has been noticed when substituting the NMOSTs with CMOS switches; thus, single NMOSTs have been used in the considered implementation. There is one exception: because the voltage at
the drain of M1 is Vdd /2 the two switches steering the currents x and y at the input of the squarer
are implemented using CMOS switches instead of simple NMOSTs.
A summary of the adopted transistor sizes is reported in Table 6.1.
Figure 6.8: Percentage trim error versus mirror correction factor (β). Total error correction is obtained for the optimal value β0 = 0.5161 in the considered example.
Squarer:
M1 = 36/4, M2 = 36/4, M3 = 36/4
Adj. Mirror:
M4b = 4/8, M4a = 16/30, M5 = 16/30
Memory 1:
M6 = 28/16, M7 = 16/30
Memory 2:
M8 = 28/16, M9 = 16/30
Add.Cur.Source: M10 = 4/18
Switches: 4/4
Table 6.1: Sizes of the Transistors for the circuit shown in Fig. 6.5. All the quantities are in µm.
Output function (ideal):              Iout = 5000 A⁻¹ Ix · Iy
Input range:                          ±35µA
Max output current:                   ±6µA
Internal trim error (full scale):     1.0% at fc = 400KHz; 1.5% at fc = 1.7MHz
THD (Ix = 35µA, Iy = 35µA sin(2π50t)): 1.73%
Accuracy vs. supply:                  0.2%/%
SNR:                                  50 dB
Output offset:                        200nA
x-feedthrough:                        < 200nA
y-feedthrough:                        200nA
−3dB small-signal bandwidth:          200KHz at fc = 1.7MHz
Full power response:                  150KHz at fc = 1.7MHz
Max clock frequency fc:               1.7MHz
Max throughput:                       0.425 MOPS at fc = 1.7MHz
Die area:                             225 × 250 µm²
Power consumption:                    0.3mW
Power supply:                         ±3V
Table 6.2: Performance of the tested chip
6.4
Experimental Performance Evaluation
A prototype of the proposed multiplier has been fabricated in MOSIS Orbit N-well 2µm technology.
In this Section the experimental results obtained by testing the chip are discussed. A summary of
the measured parameters is reported in Table 6.2.
The control signals for the switches have been generated on-chip by means of digital circuitry [158,
159] driven by an external master clock of frequency fc . The multiplier performs fc /4 multiplications
per second since the frequency of the four phases φ1 to φ4 is fc /4. The nominal clock frequency is
fc = 400KHz. Indeed, it has been experimentally verified that the maximum frequency at which the
circuit works without appreciable performance degradation (1.5% FS) is fc = 1.7MHz, corresponding
to 0.425 million multiplications per second. The input currents have been provided by means of off-chip V-I converters. In particular, two Howland circuits have been used [160]. The output current
of the multiplier is measured by feeding a 2.2KΩ grounded resistor. The power supply is ±3V and
the power consumption with zero inputs is 0.3mW (essentially constant until fc = 1.7MHz).
The multiplier’s ideal transfer characteristic is Iout = 5000A−1 Ix · Iy . The measured input
Figure 6.9: Total Harmonic Distortion versus input currents. It is worth remembering that the full-scale value is 35µA.
current ranges from −35µA to 35µA with a maximum output current of ±6µA. It has also been verified that the multiplier still works for an input range of around ±40µA, but with reduced linearity, as shown in Fig. 6.9.
The experimental transfer characteristic of the multiplier is shown in Fig. 6.10.
Due to the inherently discrete-time nature of the circuit, the output current is a pulse train and
its envelope corresponds to the result of the multiplication. Thus, in order to trace the curves of Fig.
6.10, the voltage swing at the output resistor has been amplified and filtered by using a low-pass
filter that separates the carrier from the envelope. The curve has been traced by using an HP 4145B
curve tracer and the corresponding currents are reported close to the original instrument scales.
As previously pointed out, in this realization the constant offset current is internally compensated
by inserting during phase φ4 a current source in node A (realized by the NMOST). However, due to
unavoidable process variations, an offset current of 200nA has been measured at the output. This
Figure 6.10: Measured transfer characteristic. Output current I0 versus input current Ix . The
original scales are in volts (see text). The corresponding currents are reported.
can be easily zeroed by a suitable shift of the voltage level of the second terminal of the output
resistor (less than 1mV in our case).
An internal trim error of 1.0%FS has been measured at a clock frequency of fc = 400KHz. This degrades to 1.5%FS at fc = 1.7MHz. More complete information on the nonlinearity is given by the total harmonic distortion (THD%) for a sinusoid at 50Hz. The trim error, in fact, refers only to the accuracy at full scale, while the THD involves the entire range of operation of the circuit.
In order to measure THD, a 50Hz sinusoid is fed at one input while the other input is held
constant. From the analysis carried out in the previous Section it turns out that the input current
that mainly affects the linearity is Iy . Therefore, a worst case result is obtained if the sinusoidal
signal is fed on the input y. This is the case herewith considered. The THD has been measured by
acquiring the spectrum of the output current using an HP 3588A spectrum analyzer.
A plot of the THD for different values of the sinusoid amplitude and of Ix is shown in Fig. 6.9.
The plot includes values outside the normal range of operation as well. Measurements obtained with
negative Iy are analogous to the one shown in Fig. 6.9.
Another important performance parameter is the input feedthrough [157]. In particular, the x-feedthrough refers to the case in which the y input is zero. Conversely, the y-feedthrough refers to
the case in which the x input is zero. The maximum values of feedthrough obtained by keeping one
of the inputs at zero and varying the other input through the full allowed range are depicted in Fig.
6.11 as a function of the clock frequency.
The −3dB small-signal bandwidth, measured at a clock frequency of fc = 1.7MHz, is close to 200KHz. It is worth mentioning that, in practice, higher clock frequencies are hardly usable, since the performance of the circuit then degrades rapidly. The full power bandwidth, at fc = 1.7MHz, is 150KHz.
Finally, the percentage of the variation of the THD at full scale for a variation of the power
supply is 0.213%/% (i.e. −13dB %/%) while the Signal to Noise Ratio (SNR) is 50dB (again, as
well as for the measurement of the THD, this is the worst case: the signal is fed to y while x fixes
the gain).
Some die micro-photographs of the multiplier prototype are shown in Fig. 6.12. The whole area,
including the references for biasing the circuit is 225 × 250µm2 .
Figure 6.11: x-feedthrough (dashed line) and y-feedthrough (solid line) versus clock frequency
6.5
Conclusions
The design, analysis and experimental results of an S²I switched-current multiplier have been presented. A comprehensive analysis to understand the sources of offsets and nonlinearities of our circuit has been performed.
It has been found that by appropriately setting the current-mirror gain the nonlinearity can effectively
be canceled. An IC prototype was fabricated using MOSIS n-well 2µm technology. Experimental
results are consistent with theoretical findings.
This kind of multiplier is used to realize the programmable synapses of the Cellular Neural
Network discussed in Chapter 7.
Figure 6.12: Die micro-photographs of the multiplier prototype. (a) The whole chip, (b) detail of 4
identical multipliers, (c) one multiplier with references and bias circuitry.
Chapter 7
A 1-D Discrete-Time Cellular
Neural Network Chip for Audio
Signal Processing
Although most of the CNN applications and corresponding VLSI implementations regard two-dimensional arrays, one-dimensional Cellular Neural Networks have recently received increasing attention. For the latter, quite different applications have been reported, ranging from 1-D
signal processing [161, 162, 133] to instrumentation and control [163]. In particular a 1-D CNN
architecture able to emulate the behavior of FIR filters and to perform the Daubechies Wavelet
Transform [164] has been thoroughly discussed in [161, 162].
Besides, unlike the case of 2-D CNNs for image processing, the applications reported for 1-D CNNs require only a small number of cells [161, 162, 133]. This Chapter presents a VLSI
implementation of a new one-dimensional discrete-time programmable CNN suitable for the above
mentioned applications [165]. It is based on the well-known S 2 I technique.
Moreover, the introduction of a time-multiplexing scheme allowed a more efficient use of the
hardware.
7.1
System Architecture
A block diagram of the proposed architecture is shown in Fig. 7.1.
The main blocks are an analog shift register (i.e. a tapped delay line) and a set of locally
connected cells. A cascade of eight delay elements (sr0-sr7) composes the shift register. Input data
Figure 7.1: Block diagram of the proposed architecture.
enters the shift register on the left side (through sr0) and the samples are shifted to the right, passing
from sr0 to sr1 and so on. The cells of the CNN receive their inputs from the shift register and from
the outputs of the neighbors. The cloning templates are provided externally making the proposed
architecture programmable. The state equation describing the behavior of the cell at position c is:
xc(n + 1) = Σ_{d∈N(c)} [Ac,d yd(n) + Bc,d ud(n)]
(7.1.1)
where xc(n + 1) is the updated state of the cell, yc(n) = f(xc(n)) is its output, uc(n) is the output of the shift register srC (e.g. u1 is the output of sr1), Ac,d are the feedback templates, Bc,d are the control templates, and N(c) is the neighbor set of cell c, defined as follows:
N(c) = {d : |d − c| ≤ 1}
(7.1.2)
As shown in Fig. 7.1, the proposed architecture includes six CNN cells and eight delay cells (the
self-feedback is implicit in the cells of Fig. 7.1 and it has not been shown to avoid clutter).
In practice, in the real implementation, the first delay stage sr0 is not necessary and the input (Data In in Fig. 7.1) can be provided directly in its place.
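The data flow of the update rule (7.1.1)–(7.1.2) can be illustrated with a functional sketch (plain Python, not a model of the chip); the templates and shift-register snapshot below are arbitrary example values:

```python
# Functional sketch (plain Python, not the chip) of the update rule (7.1.1)
# with the neighborhood (7.1.2).  Templates and data are arbitrary examples.
def f(x):
    # output nonlinearity (7.3.1)
    return 0.5 * (abs(x + 1) - abs(x - 1))

def dtcnn_step(x, u, A, B):
    """One update x_c(n+1) = sum over d in N(c) of A*y_d(n) + B*u_d(n)."""
    n_cells = len(x)
    y = [f(v) for v in x]
    x_new = []
    for c in range(n_cells):
        s = 0.0
        for d in range(max(0, c - 1), min(n_cells, c + 2)):   # |d - c| <= 1
            s += A[d - c + 1] * y[d] + B[d - c + 1] * u[d]
        x_new.append(s)
    return x_new

# With A = 0 the array degenerates to a feedforward 3-tap FIR filter on the
# shift-register contents, as in the filtering applications cited above.
A = [0.0, 0.0, 0.0]
B = [0.25, 0.5, 0.25]
u = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]   # snapshot of the shift register
x1 = dtcnn_step([0.0] * 6, u, A, B)  # impulse response reveals the FIR taps
```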
Figure 7.2: Analog Shift Register Cell.
7.2
The Tapped Delay Line
A cascade of S 2 I delay cells1 composes the analog shift register. A full delay is obtained by cascading
two S 2 I half-delay cells whose circuit diagram and corresponding switch control signals can be found
in Fig. 6.1 of Chapter 6.
A replica of the current at the output of the cell is needed for the next delay cell and for any
other circuit requiring it as input. Therefore, a current mirror with multiple output branches is
placed in-between any full-delay block. In particular, cascode current mirrors have been used in
order to obtain very high output impedance. The circuit of a shift register cell is shown in Fig. 7.2.
An impedance is placed between the output and the input node of the half-delay cells in Fig. 7.2. It is an NMOST, identical to the ones used for the switches, with its gate connected to the positive power supply. It compensates for the IR drop due to the output switch [146].
In Section 6.1 of Chapter 6 it was shown that the transfer function of an S²I half-delay cell, when loaded by another identical one, is approximately:
Io(z) = Ii(z) Hi(z)/(1 + 2g0/(Ψgm)) + Ioff1(z)
(7.2.1)
where Hi(z) = −z^(−1/2) is the ideal transfer function of a half-delay cell, Ioff1(z) represents a constant
¹See Section 6.1 in Chapter 6.
output offset, and Ψ = gm/(2g0) is the voltage gain of an inverter in which MC acts as driver while MF
acts as current source.
However, when the S 2 I half-delay cell is loaded by a conductance equal to gm then the transfer
function is approximately given by:
Io(z) = Ii(z) Hi(z)/(1 + 2g0/gm) + Ioff2(z)
(7.2.2)
where Ioff2 (z) represents a constant output offset very close to Ioff1 (z). Therefore, due to the sign
inversion, when two of these cells are cascaded in order to obtain a full delay, the two offsets
practically cancel each other. In our case, the first half-delay cell is loaded by the second one, while
the current mirror loads the second one. Therefore the total transfer function of the two blocks in
cascade is given by:
G(z) = z⁻¹ / (1 + (2g0/gm)(1 + 1/Ψ) + 4g0²/(Ψgm²)) ≅ z⁻¹ / (1 + 2g0/gm)
(7.2.3)
This implies that the delay cell slightly attenuates the delayed signal.
This attenuation could be minimized by using cascode transistors in the half-delay cell [166]. However, in the proposed design, it has been preferred to compensate for it by choosing a suitable current gain for the current mirror following the delay cell. In fact, in a Monte Carlo analysis, the latter solution has shown better robustness to parameter variations. Finally, in order to compensate
a possible residual output offset current, another delay cell with zero input has been created. This cell's output current is the residual offset. This is copied and subtracted from the output of all the other delay cells by means of cascode mirrors (it is added to the intermediate node of the current mirrors using the terminal Offc as shown in Fig. 7.2).
This is very similar to the well-known replica circuit technique [151].
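The compensation strategy can be illustrated numerically; the conductances and inverter gain below are assumed values, not those of the fabricated cells:

```python
# Back-of-the-envelope check of the per-stage attenuation predicted by
# (7.2.3) and of its compensation by the mirror gain.  The conductances
# and inverter gain are assumed values, not those of the fabricated cells.
g0, gm, Psi = 1e-6, 50e-6, 25.0

gain = 1.0 / (1.0 + (2 * g0 / gm) * (1 + 1 / Psi) + 4 * g0**2 / (Psi * gm**2))
mirror_gain = 1.0 / gain     # current gain that restores unity per stage

# Over N cascaded stages the uncompensated signal decays as gain**N,
# while the compensated cascade stays at (gain * mirror_gain)**N ~ 1.
N = 8
uncompensated = gain ** N
compensated = (gain * mirror_gain) ** N
```

Even a small per-stage loss compounds noticeably over the eight delay elements, which is why the mirror-gain compensation was preferred.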
The aspect ratios of the transistors used for the S 2 I half-delay cells are (W/L)NMOS = 16µ/30µ
and (W/L)PMOS = 28µ/16µ for a quiescent current of 20µA. The aspect ratios of the transistors
of the first mirror are (W/L)NMOS = 5µ/4µ and (W/L)PMOS = 23µ/4µ for the first one and
(W/L)NMOS = 4µ/4µ and (W/L)PMOS = 18µ/4µ for the second one. The quiescent current is
approximately 5µA and imposes the limits of the dynamic range. The final output branch is identical
to the output branch of the first current mirror.
All the switches are NMOST’s with aspect ratio (W/L)NMOS = 4µ/4µ.
The large sizes adopted are imposed by the reliability of the available technology. The layout of
the tapped delay line is shown in Fig. 7.3. The linear current-to-voltage converters are shown on the right-hand side; they provide off-chip observation points.
7.3
CNN cells
The cell’s behavior is characterized by the state equation (7.1.1). The output of the cell is:
yc(n) = f(xc(n)) = (1/2)(|xc(n) + 1| − |xc(n) − 1|)
(7.3.1)
This is easily obtained with the two cascode current mirrors shown in Fig. 7.4.
The bias current Idd of the NMOSTs working as cascode current sources fixes the saturation
current. Therefore:
1. if |ii | < Idd then i0 = ii ,
2. if ii ≥ Idd the transistors of the input branch of the first current mirror turn off and so i0 = Idd ,
3. if ii ≤ −Idd the transistors of the input branch of the second current mirror will now turn off, making i0 = −Idd.
The sizes of the transistors are analogous to the ones used for the cascode current mirrors placed in-between the shift register cells. However, it is worth noting that, while in the case of the analog delay line only one variable is passed through the stages, here the value of the state can be much larger in magnitude because of the many contributions arising from (7.1.1). This means that the mirrors in the shift register will never clip the signal, provided that the input range fulfills the allowed limits. Conversely, the output function will saturate when the bound of Idd = 5µA is exceeded, as desired.
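The three operating regions listed above amount to a simple clamp; the behavioral sketch below (not a transistor-level model) captures them, with the negative clip level following from the symmetry of the two mirrors:

```python
# Behavioral sketch of the clipping in the output stage of Fig. 7.4 (not a
# transistor-level model): the cascode mirrors copy the input current until
# the bias Idd is exceeded, after which the output saturates.  By symmetry
# the negative clip level is -Idd.
Idd = 5e-6   # saturation current, as in the text

def output_stage(ii):
    if ii >= Idd:
        return Idd    # input branch of the first mirror turns off
    if ii <= -Idd:
        return -Idd   # input branch of the second mirror turns off
    return ii         # linear region: i0 = ii

assert output_stage(2e-6) == 2e-6     # copies small currents
assert output_stage(9e-6) == Idd      # clips at +Idd
assert output_stage(-9e-6) == -Idd    # clips at -Idd
```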
7.3.1
Multiplier and Ancillary circuitry
The multiplier used to implement the synaptic connections is the one discussed in Chapter 6. However, the actual implementation has a much smaller area. Specifically, the transistors constituting the S²I memory cells are four times smaller in area (each W and L is half of the value reported in Chapter 6).
Figure 7.3: Layout of the delay line.
Figure 7.4: Nonlinear output function circuit.
This means that the aspect ratios, and hence the quiescent currents, are unchanged. However, a decrease in accuracy has to be expected, since the smaller cell is more sensitive to process variations, size inaccuracies and uncompensated charge injection².
On the other hand, the total available area of the chip (1.8mm × 1.8mm) fixes a limit on the number of modules that can be placed on it. A trade-off is implied.
To avoid confusion, in the remainder of this Chapter we will refer to the four main phases of the multiplier (called φ1–φ4 in Chapter 6) as θ1–θ4.
In order to bias the current squarer, the multiplier’s input node must be kept at Vdd /2 = 1.5V.
Therefore a common gate amplifier has been used as level shifter. Moreover, to have the same order
of magnitude for the two operands of the multiplier, a current amplifier is placed at the signal input.
The layout of one complete multiplier, the corresponding level shifter and other circuitry is shown
in Fig. 7.5.
²Which, although effectively minimized through the S²I technique, can never be fully canceled.
Figure 7.5: Layout of the multiplier and its ancillary circuitry.
On-chip linear I-V converters allow the external observation of the data in the delay line. These
ones are based on the square-law characteristic of the MOS transistor in saturation [156].
The phases and control signals for the switches are obtained by on-chip digital circuitry. In particular, an eight-stage dynamic ring counter generates eight non-overlapping phases. These are eventually fed to combinatorial and sequential logic (including logic gates, J-K master-slave flip-flops and other cross-coupled latches) to obtain the required signals. The latter are
driven throughout the chip’s digital buses and re-generated at regular intervals by line-drivers. The
layout of the whole digital part of the chip is shown in Fig. 7.6.
The final layout of the chip is shown in Fig. 7.7. The long horizontal strip at the top is the digital circuitry. The long vertical strip on the right-hand side is the tapped delay line. The large rectangular area in the middle contains the abutted multipliers of the six cells. Finally, the nonlinearities and the additional S²I memories completing the cells are on the left-hand side.
7.4
Cell Behavior and Hardware Multiplexing
From what was discussed in the above Sections, two remarks can be made. First of all, the shift register cells allow new data to enter and the stored data to be retrieved only during φ1.
Data is internally exchanged between the two half-delay cells on φ2 . The full delay cell is
completely isolated by the rest of the system on this phase. During this second phase the rest
of the hardware would be essentially idle. These two phases determine the time synchronization
represented by the time index of the state equation (7.1.1).
Secondly, the result of the multiplication process is required before the end of a clock cycle. Therefore, a strategy that exploits the available hardware during the idle phase and saves area is outlined next.
Let us refer to the block diagram shown in Fig. 7.8 depicting a CNN cell.
The programmable synaptic connections using the multipliers are drawn on the left-hand side of
the figure. Those multipliers accept the outputs of the shift register ud on φ1 with the corresponding
weights (namely the control templates Bc,d ) provided by off chip currents.
Figure 7.6: Digital circuits for the generation of the control phases.
Figure 7.7: Final Layout of the Cellular Neural Network chip.
Figure 7.8: Cell block diagram.
On φ2, instead, the neighbor outputs yd are fed to the inputs of the multipliers in place of ud, while the feedback template Ac,d is fed instead of the control template Bc,d³.
In other words, during φ1 the weighted sum of the shift register outputs Σ_{d∈N(c)} Bc,d ud(n + 1) is present on the summing node at the output of the multipliers.
This sum enters the full delay cell. At the same time, the previous value of this sum is present
at the output of the delay cell (because it entered the delay cell during the previous period n).
Moreover, let us assume that the output of the half-delay cell depicted below the full delay cell is providing the weighted sum Σ_{d∈N(c)} Ac,d yd(n) at its output (this assumption will be proven shortly).
Therefore, the state xc (n + 1) is obtained at the summing node of the two previous outputs,
in accordance to the state equation (7.1.1). This one enters the second half-delay cell. Its output
instead is zero and so is the output of the nonlinearity.
On φ2 the weighted sum Σ_{d∈N(c)} Bc,d ud(n + 1) is stored in the full delay cell. It is passing from
the first half-delay cell to the second half-delay cell that constitutes the full delay cell itself. It is
therefore completely isolated from the rest of the system. The cell outputs yd and the feedback templates Ac,d are fed to the synapses. So the weighted sum Σ_{d∈N(c)} Ac,d yd(n + 1) is fed to the
³The alternate inputs are obtained by on-chip current-steering switches controlled by φ1 and φ2.
input of the half-delay cell on the left-hand side of the figure. This value is available at its output
on φ1 of the next clock period (namely on period n + 2).
This proves the above assumption on the output of this half-delay cell.
On φ2 there are no currents at the summing node on the right of the full delay cell. The half-delay
cell on the left hand side releases its stored value (namely xc (n + 1)) and so, due to the nonlinear
block, the output yc (n + 1) of the cell is available. This is consistent with the fact that the outputs
yd(n + 1) are provided as inputs of the cells during this phase. The above approach allows us to
1. exploit the rest of the hardware during the idle phase φ2 of the shift register and
2. use only three multipliers instead of six, saving in area and power.
To obtain all of this, only two half-delay cells have been added to the classical CNN cell architecture. From this scheme it can be seen that, while the outputs of the delay cells are available during the whole corresponding phases, the sampling is accomplished only during φ1a–φ1b (or φ2a–φ2b for the other half-delay cells), corresponding to θ4. A whole multiplication cycle is performed during φ1 and another one during φ2.
7.5
Results and example
HSPICE simulation results, at transistor and functional level, are presented here. Let us first consider
some transistor level simulation results.
In a first case a sinusoidal input is fed at the input of the tapped delay line and shifted along it.
The outputs of the on-chip linear I-V converters corresponding to the delay stages sr1, sr2, sr3 and
sr7 respectively, are shown in Fig. 7.9.
The initial values present in the delays before the first sample reaches the stage can also be
noticed.
The two input currents of one of the multipliers are shown in Fig. 7.10. One of the two inputs
is fed alternatively with B and A while the other one is fed with u and y. The different signals on
the distinct phases can be distinguished. It is seen that on phase φ1 the corresponding B template
coefficient is provided at one of the inputs while the output current coming from the delay line (u)
is fed at the other one.
Figure 7.9: Tapped delay line outputs for sr1,sr2, sr3 and sr7
Besides, the four phases θ1–θ4 of the multiplier can be distinguished by the fact that one of the inputs (the template coefficient) enters on θ1 + θ2 while the other input is accepted on θ1 and θ4. Conversely, the template coefficient A is provided on phase φ2 together with the corresponding output of the cell.
A practical example of the functionality of the proposed architecture is now given. The voice
of the author has been sampled at 8KHz. White noise has been added, obtaining a noisy signal (4000 samples) with signal-to-noise ratio SNR = 1.1965.
A SPICE macromodel[167] of the chip has been simulated to process a wavelet decomposition of
this signal (top of Fig. 7.11) according to the algorithm described in [161, 162].
The wavelet coefficients are φ = [d0 d1 d2 d3] with d0 = (1 + √3)/(4√2), d1 = (3 + √3)/(4√2), d2 = (3 − √3)/(4√2), d3 = (1 − √3)/(4√2).
The corresponding control templates for the algorithm are B1 = [d0 d1 0] and B2 = [d2 d3 0] (see
[162] for the detailed description of the algorithm).
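These are the standard Daubechies-4 coefficients; their usual identities (normalization, unit energy, double-shift orthogonality) can be checked directly:

```python
# Quick check of the standard Daubechies-4 identities for the coefficients
# quoted above: normalization, unit energy, and double-shift orthogonality.
from math import sqrt

d0 = (1 + sqrt(3)) / (4 * sqrt(2))
d1 = (3 + sqrt(3)) / (4 * sqrt(2))
d2 = (3 - sqrt(3)) / (4 * sqrt(2))
d3 = (1 - sqrt(3)) / (4 * sqrt(2))

assert abs((d0 + d1 + d2 + d3) - sqrt(2)) < 1e-12           # sum = sqrt(2)
assert abs((d0**2 + d1**2 + d2**2 + d3**2) - 1.0) < 1e-12   # unit energy
assert abs(d0 * d2 + d1 * d3) < 1e-12                        # orthogonality
```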
The filtered signal (bottom of Fig. 7.11) has SNR = 6.4288, an improvement of 14.6043 dB.
In Fig. 7.11 the time shift between the two signals cannot be seen at this level of magnification.
The noise reduction, instead, is visible, particularly where the low-frequency components of the vocal
signal are dominant (at the beginning, in the middle and at the end).
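As a numeric cross-check of the figures above, the following Python sketch (illustrative only, not part of the chip design flow) reproduces the Daubechies-4 coefficient values and the quoted dB improvement from the two SNR figures:

```python
import math

# Daubechies-4 scaling coefficients, as given in the text
s2, s3 = math.sqrt(2.0), math.sqrt(3.0)
d = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]

# sanity check: the coefficients have unit energy (sum of squares = 1)
energy = sum(c * c for c in d)

# SNR improvement from 1.1965 to 6.4288, on the 20*log10 amplitude-ratio scale
gain_db = 20.0 * math.log10(6.4288 / 1.1965)
```

The 14.6043 dB figure reported above is thus the amplitude-ratio (20·log10) measure of the SNR gain.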
7.6 Conclusions
The VLSI implementation of a discrete-time one-dimensional Cellular Neural Network has been
discussed. One of the peculiarities of the proposed architecture is a hardware-multiplexing strategy, which allows the hardware to be used efficiently, halving the number of multipliers and storing the
intermediate results in temporary memories.
Simulation results at transistor and functional level have been reported. A CMOS N-well MOSIS
Orbit 2µ chip is currently in fabrication.
Figure 7.10: Inputs of one of the multipliers
Figure 7.11: Noisy vocal signal (top) and filtered output (bottom).
Appendix A
General Mathematical Background
Mathematicians are like Frenchmen: whenever you say something to them, they translate
it into their own language, and at once it is something entirely different
—Goethe, Maxims and Reflections (1829)
This appendix summarizes mathematical background material and may serve as a convenient source for
repeated reference to standard facts. In the following, the notation “iff” stands for “if and only if”.
Moreover, the open ball with centre x and radius ε > 0 is denoted by Bε(x), or by V(x) when no
explicit mention of the radius is needed.
A.1 Topology
Definition A.1.1 (Topological space). A topological space is a set X together with a collection
of subsets of X, called open sets, satisfying the axioms:
1. The empty set ∅ and X are open sets.
2. The union of any family of open sets is an open set.
3. The intersection of finitely many open sets is an open set.
Definition A.1.2 (Closed set). A subset A of a topological space X is called closed if the complement X − A is open.
Definition A.1.3 (Interior). The interior of a subset A ⊂ X, denoted Å, is the largest (possibly
empty) open subset of X which is contained in A. A is open iff A = Å.

Definition A.1.4 (Boundary). The boundary of a subset A ⊂ X, denoted ∂A, is the set of points
of A which are not in the interior of A. Namely ∂A ≐ A − Å.
Definition A.1.5 (Closure). The closure of A in X, denoted by Ā, is the smallest closed subset
of X which contains A. A is closed iff A = Ā.
Definition A.1.6 (Dense subset). A subset A ⊂ X is called dense in X if Ā ≡ X.
Definition A.1.7 (Hausdorff space). A space X is called Hausdorff if:

∀x1, x2 ∈ X, x1 ≠ x2 ⇒ ∃V(x1), V(x2) : V(x1) ∩ V(x2) = ∅    (A.1.1)
Definition A.1.8 (Covering). Let A ⊂ X. A collection C of subsets of X is said to be a covering
of A if A is contained in the union of the elements of C.
Definition A.1.9 (Compact). The subset A ⊂ X is called compact if every covering of A by open
subsets of X contains a finite sub-collection covering A.
Every closed subset of a compact space is compact. Every compact subset of a Hausdorff space
is closed.
A.2 Operations and Functions
Definition A.2.1 (Vector field). A vector field f : U → Rn is a vector function defined on some
subset U ⊆ Rm; n, m ∈ N.
Definition A.2.2 (Lipschitz functions). A vector field f is Lipschitz if:

∃k ∈ R, 0 < k < ∞ : ‖f(x) − f(x′)‖ ≤ k ‖x − x′‖, ∀x, x′ ∈ Rn    (A.2.1)
Definition A.2.3 (Affine function). A function f : U ⊆ Rm → Rn that can be written as f (x) =
Ax + b with A ∈ Rn×m ,b ∈ Rn is called affine. It is a nonlinear function1 .
Definition A.2.4 (Piece-Wise-Linear (PWL) Continuous Function). A Piece-Wise-Linear (PWL)
continuous function is a continuous function that is locally affine.
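A familiar example in the CNN context is the Chua-Yang output nonlinearity, which is a PWL continuous function in the sense of Definition A.2.4; a minimal Python sketch:

```python
# The Chua-Yang CNN output nonlinearity y = f(x) = 0.5*(|x+1| - |x-1|)
# is PWL continuous: it is affine on each of the three pieces
# x <= -1 (y = -1), -1 <= x <= 1 (y = x) and x >= 1 (y = 1).
def pwl(x):
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))
```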
Definition A.2.5 (Einstein Summation Rule). Let S ⊆ Ω be a sphere of influence with centre
c ∈ Ω and radius r > 0. The Einstein summation rule is defined as:

a_c^d y_d ≐ Σ_{∀d∈S} a(d; c) y_d    (A.2.2)

where a(d; c) are the weights corresponding to the couples (d, c) ∈ S².
Definition A.2.6 (C k Functions). A function is C k if it is k-times differentiable.
Definition A.2.7 (Diffeomorphism). A C^k diffeomorphism f : M → N is a mapping f which is
one-to-one, onto, and has the property that both f and f⁻¹ are k-times differentiable.

Definition A.2.8 (Homeomorphism). A homeomorphism is a C^0 diffeomorphism, i.e. a continuous mapping f : M → N with a continuous inverse.
1 In fact, it is easy to verify that it does not satisfy the superposition principle.
A.3 Matrices
Definition A.3.1 (Irreducible matrix). A = [Aij] ∈ Rn×n is irreducible if:

∀i ≠ j, ∃ a chain of indices i = k0, . . . , km = j : A_{kr, k_{r−1}} ≠ 0, r = 1, 2, . . . , m    (A.3.1)

A.4 Dimension
Only metric dimensions are considered here.
Definition A.4.1 (Capacity or Fractal dimension). Let us consider a covering C of A ⊂ Rn.
Let N(ε) be the minimum number of n-dimensional cubes of side length ε needed to cover A. The
capacity or fractal dimension2 of A is defined as:

Dcap ≐ lim_{ε→0} ln N(ε) / ln(1/ε)    (A.4.1)

if the limit exists.
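As an illustration of Definition A.4.1, the following Python sketch (a textbook example, not taken from the thesis) estimates Dcap for the middle-third Cantor set, whose capacity dimension is known to be ln 2 / ln 3 ≈ 0.6309:

```python
import math

# Level-k construction of the middle-third Cantor set: at each level every
# interval is replaced by its two outer thirds.
def cantor_intervals(k):
    segs = [(0.0, 1.0)]
    for _ in range(k):
        segs = [piece for (a, b) in segs
                for piece in ((a, a + (b - a) / 3.0), (b - (b - a) / 3.0, b))]
    return segs

k = 10
eps = 3.0 ** (-k)
# N(eps): each surviving level-k interval occupies exactly one box of side eps
boxes = {round(a / eps) for (a, _) in cantor_intervals(k)}
d_cap = math.log(len(boxes)) / math.log(1.0 / eps)
```

Here N(ε) = 2^k for ε = 3^(−k), so the estimate equals k·ln 2 / (k·ln 3) exactly.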
Definition A.4.2 (Hausdorff dimension). Let C be a covering of A ⊂ Rn by n-dimensional cubes
of variable edge lengths {εi}. Define the quantity l_d(ε) by:

l_d(ε) ≐ inf_C Σ_i εi^d    (A.4.2)

where d ∈ R+ is still to be specified. Now let:

l_d = lim_{ε→0} l_d(ε)    (A.4.3)

The Hausdorff dimension of A is the value d = DH above which l_d = 0 and below which l_d = ∞.

2 In case Dcap is not integer.
Appendix B
Dynamical Systems
What we have to learn to do we learn by doing
—Aristotle, Ethica Nicomachea II (c. 325 B.C.)
A lumped 1 circuit containing resistive (resistors, voltage and current sources) and energy storage
elements (capacitors and inductors) may be modeled as a continuous-time deterministic dynamical
system.
Besides, discrete-time deterministic systems occur in electrical engineering as models of switched
capacitors / switched currents circuits, digital filters, sampled phase locked loops etc.
Unless explicitly mentioned, deterministic systems are always considered in this dissertation.
In this appendix some definitions and results on nonlinear dynamical system theory are briefly
summarized for the purpose of quick reference. This is not a tutorial and it is far from
complete. It is assumed that the reader is fairly familiar with this material. Moreover, linear system
theory is assumed to be known and will not be recalled.
These topics are covered in detail in many textbooks on Ordinary Differential Equations (ODE's)
as well as in the specialized literature [58, 67, 168, 42, 169, 68, 170].
1 A lumped circuit is one whose physical dimensions are small compared to the wavelengths of its voltage and
current waveforms [81].
B.1 Basic Definitions
Definition B.1.1 (State equation of a Continuous-time System). A continuous time dynamical system can be described by a system of ordinary differential equations:
dx/dt ≐ ẋ = f(x)    (B.1.1)
where x = x(t) ∈ Rn (called the state) is a vector valued function of an independent variable (usually
time) and f is a smooth vector field defined on some subset U ⊆ Rn . We say that the vector field
f generates a flow φt : U → Rn , where φt (x) = φ(x, t) is a smooth function defined ∀x ∈ U and t in
some interval I = (a, b) ⊆ R, and φ satisfies (B.1.1) in the sense that:
(d/dt) φ(x, t)|_{t=τ} = f(φ(x, τ)),  ∀x ∈ U, τ ∈ I    (B.1.2)
It can be noted that, in its domain of definition, φt satisfies the group properties (i) φ0 =id, and
(ii) φt+s = φt ◦ φs .
In the following, unless stated otherwise, the vector field f will be assumed to be smooth, namely
C∞.
Definition B.1.2 (Autonomous/Non-autonomous Systems). Systems of the form (B.1.1), in
which the vector field does not contain time explicitly, are called autonomous.
Otherwise they are called non-autonomous.
Definition B.1.3 (Trajectory). Often an initial condition x(0) = x0 ∈ U is given. In this case a
solution φ(x0 , t) such that φ(x0 , 0) = x0 is sought. This solution is often written as x(x0 , t) or simply
x(t). The function φ(x0 , ·) : I → Rn defines a solution curve, trajectory, or orbit of the differential
equation (B.1.1) based at x0 .
Since, for an autonomous system, the vector field f is invariant with respect to translations in
time, solutions based at times t0 ≠ 0 can always be translated to t0 = 0.
Conversely, a non-autonomous n-dimensional system can be transformed to an (n+1)-dimensional
autonomous system by augmenting it with an additional dummy state variable θ(t) = t:

ẋ(t) = f(x(t), θ(t)),  θ̇(t) = 1    (B.1.3)
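As an illustration of (B.1.3), the hypothetical scalar system ẋ = −x + cos t (not an example from the thesis) can be integrated as the autonomous pair (x, θ); a minimal Python sketch with a standard RK4 step:

```python
import math

# Non-autonomous x' = -x + cos(t), rewritten as the autonomous pair
# (x, theta)' = (-x + cos(theta), 1), as in (B.1.3).
def f_aug(s):
    x, theta = s
    return (-x + math.cos(theta), 1.0)

def rk4_step(s, h):
    k1 = f_aug(s)
    k2 = f_aug((s[0] + h/2*k1[0], s[1] + h/2*k1[1]))
    k3 = f_aug((s[0] + h/2*k2[0], s[1] + h/2*k2[1]))
    k4 = f_aug((s[0] + h*k3[0], s[1] + h*k3[1]))
    return (s[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

s, h = (0.0, 0.0), 0.01
for _ in range(5000):        # integrate up to t = 50
    s = rk4_step(s, h)
# the steady state is x(t) = (cos t + sin t)/2; the transient decays as e^{-t}
```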
Definition B.1.4 (State equation of a Discrete-time System). A discrete time dynamical system can be defined by a system of difference equations:
x(k + 1) = G(x(k)),  or  x_{k+1} = G(x_k),  or  x → G(x),  k ∈ N    (B.1.4)

where G(·) is a nonlinear vector-valued function, called a map, x(k) = x_k ∈ Rn is the state, and x(k0) = x0
is the initial condition.
Sometimes it is assumed k ∈ Z.
In analogy with the continuous-time counterparts, it is possible to define the flow φk and the orbit2
φk(x0). Finally, it is possible to distinguish between autonomous and non-autonomous systems
depending on the explicit independence/dependence of G on k, respectively.
The kth iterate of a point p under G is indicated as G^k(p):

G^k(p) ≐ G(G^{k−1}(p)) = G(G(G(· · · (G(x)) · · · )))  (k times)    (B.1.5)
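A minimal Python sketch of the kth iterate (B.1.5), using the logistic map as a standard illustrative example (not from the thesis):

```python
# k-th iterate G^k(p) of a map, as in (B.1.5)
def iterate(G, p, k):
    for _ in range(k):
        p = G(p)
    return p

G = lambda x: 4.0 * x * (1.0 - x)   # logistic map, chaotic at parameter 4
```

For instance, x = 0.75 is a fixed point of G, so every iterate leaves it unchanged.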
Definition B.1.5 (Contracting and expanding flow). φt is contracting if:

‖φt(x0) − φt(x̂0)‖ < ‖x0 − x̂0‖,  ∀x0 ≠ x̂0, ∀t > 0    (B.1.6)

It is called expanding otherwise.
B.2 Steady-state behaviour
A trajectory based at x0 settles, possibly after some transient, onto a set of points called a limit set.
A number of general definitions are now given for the purpose of identifying the possible asymptotic
behaviours of nonlinear systems.
Definition B.2.1 (Invariant set). An invariant set S for a flow φt or map G on Rn is a subset
S ⊂ Rn such that:

φt(x) ∈ S (or G(x) ∈ S), ∀x ∈ S, ∀t ∈ R    (B.2.1)
Definition B.2.2 (Non-wandering states). A state x ∈ Rn of the flow φt (resp. the map G) is
called non-wandering if:

∀Bε(x), ∀T > 0, ∃t > T : φt(Bε(x)) ∩ Bε(x) ≠ ∅    (B.2.2a)
∀Bε(x), ∀K > 0, ∃k > K : G^k(Bε(x)) ∩ Bε(x) ≠ ∅    (B.2.2b)

respectively.
Conversely, a point x that is not non-wandering is called wandering.
Definition B.2.3 (Non-wandering set). The set Ω of all the non-wandering states is the nonwandering set.
Definition B.2.4 (ω-Limit Point). A point p is an ω-limit point of x if:

∃ {φ_{ti}(x)} : φ_{ti}(x) → p and ti → ∞    (B.2.3)
2 The term trajectory is more commonly used in the context of continuous time systems, while the term orbit is
appropriate to discrete time maps
Definition B.2.5 (α-Limit Point). A point q is an α-limit point of x if:

∃ {φ_{ti}(x)} : φ_{ti}(x) → q and ti → −∞    (B.2.4)
For maps G the ti are integers.
The α− (resp. ω−) limit sets α(x), ω(x) are the sets of α and ω limit points of x.
Definition B.2.6 (Attracting Set). A closed invariant set A ⊂ Rn is called an attracting set if:

∃Bε(A) : φt(x) ∈ Bε(A), ∀t ≥ 0  and  φt(x) → A as t → ∞, ∀x ∈ Bε(A)    (B.2.5)

Definition B.2.7 (Domain/Basin of Attraction). The set ∪_{t≤0} φt(Bε(A)) is the domain of attraction of A.
When all the trajectories, and not just those ones which start in a neighborhood, converge to the
attracting set, then the attracting set is a global attracting set. A repelling set is defined analogously,
replacing t by −t.
Definition B.2.8 (Attractor). An attractor is an attracting set which contains a dense orbit.
In an asymptotically stable linear system the limit set is unique and independent of the initial
condition, so it makes sense to talk about the steady-state behaviour. A nonlinear system,
instead, can have different limit sets and, therefore, can show different asymptotic behaviours;
which one takes place depends on the initial condition.
It is almost superfluous to mention that, since non-attracting limit sets cannot be experimentally
observed in physical systems, the asymptotic behavior of a real circuit corresponds to a motion
on an attracting limit set.
B.2.1 Classification of asymptotic behaviors
Definition B.2.9 (Equilibria). An equilibrium point or stationary point (resp. fixed point) of a
vector field f (resp. a map G) is a state xq such that f(xq) = 0 (resp. G(xq) = xq).
In the state space, the limit set of an equilibrium consists of the single non-wandering point xq. An
equilibrium is said to have dimension zero.
Definition B.2.10 (Periodic point). A state p is periodic if ∃0 < T < ∞ : φT (p) = p.
An alternative, equally important, definition for a periodic solution is the following one:
Definition B.2.11 (Periodic solution). φt(p) is a periodic solution if:

φt(p) = φ_{t+T}(p), ∀t ∈ R    (B.2.6)

for some minimal period T > 0.
Definition B.2.12 (Cycle). A periodic orbit which is not a stationary point is called a cycle.
Definition B.2.13 (Limit cycle). A limit cycle Γ is an isolated periodic orbit.
The limit cycle Γ is then a closed curve in the state space where any point (all are non-wandering
ones) is periodically visited with period T . It has dimension one.
Definition B.2.14 (Period-K solution). A subharmonic periodic solution or period-K orbit of
a discrete time system is the set:

{x_k, 1 ≤ k ≤ K | x_k = G^K(x_k)}    (B.2.7)
Definition B.2.15 (Quasi-periodic solution). An N-frequency quasi-periodic solution φt(x) is
one that can be written as a function of N independent variables, periodic in each of these
variables, with incommensurate frequencies:

φt = h(t, t, . . . , t),  h(t1, t2, . . . , ti + Ti, . . . , tN) = h(t1, t2, . . . , ti, . . . , tN),  1 ≤ i ≤ N    (B.2.8a)

and:

Ωi ≐ 2π/Ti,  1 ≤ i ≤ N    (B.2.8b)

where m1Ω1 + m2Ω2 + · · · + mNΩN = 0 does not hold for any set of integers m1, m2, . . . , mN (negative integers are allowed)3.
The limit set of a quasi-periodic motion has an integer dimension greater than 1.
The last behavior admitted by a nonlinear dynamical system is the so-called chaos. A strict
mathematical definition of chaotic behavior is still debated. Chaos is a bounded, low-dimensional,
non-wandering motion which exhibits both “randomness” and “order” [59]. It can be defined by
negation: namely, it is an asymptotic behaviour that is neither an equilibrium, nor a periodic orbit, nor a
quasi-periodic motion.
Chaos is characterized by some peculiar features such as:
1. Continuous, broad-band, noise-like spectrum of any component of the state vector x(t).
2. Sensitive dependence on initial condition.
3 except for the trivial case m1 = m2 = · · · = mN = 0. In other words, the frequencies Ωi are linearly independent.
3. Fractional dimension4 limit set.
The term “strange attractor” is often used in the presence of chaotic (or supposedly chaotic) behaviour. However, it has to be noted that it is very difficult to show that a dense orbit exists. In
fact, many of the observed “strange attractors” may not be true attractors but merely attracting
sets, since they may contain stable periodic orbits [58]. More on chaos and the features enumerated
above will be discussed in Section B.3.3.
B.3 Stability
Let us first consider the stability of fixed points of vector fields. Maps will be considered later.
B.3.1 Stability of equilibrium points
Definition B.3.1 (Stable equilibrium point). A fixed point xq of f is said to be stable if:

∀V(xq) ⊂ U, ∃V1 ⊂ V : x0 ∈ V1 ⇒ φt(x0) ⊂ V, ∀t > 0    (B.3.1)
Definition B.3.2 (Asymptotically stable equilibrium point). A stable equilibrium xq is asymptotically stable if, in addition, V1 can be chosen such that:

lim_{t→∞} x(t) = xq    (B.3.2)
The two above definitions are local because they concern only the behavior near the fixed point
xq. In this case the linearization method can be applied. Let:

ξ̇ = Df(xq)ξ,  ξ ∈ Rn    (B.3.3)

be the linearized system, where Df = [∂fi/∂xj] is the Jacobian matrix of f and x = xq + ξ, |ξ| ≪ 1.
Therefore the linearized flow Dφt(xq)ξ arising from ẋ = f(x) at the fixed point xq is obtained
by integration of (B.3.3):

Dφt(xq)ξ = e^{tDf(xq)} ξ    (B.3.4)
Let us briefly recall that, in a linear system, depending on the eigenvalues associated with
the fixed point, six possible classes of equilibria can be distinguished:

1. attracting sink: stable equilibrium with (some) complex eigenvalues; all eigenvalues have negative
real part; it corresponds to a stable under-damped response.
4 Commonly called fractal.
2. repelling sink: unstable equilibria with (some) complex eigenvalues; at least one eigenvalue
has positive real part; it corresponds to an unstable under-damped response.
3. attracting node: stable equilibria with (all) real negative eigenvalues; it corresponds to a stable
over-damped response.
4. repelling node: unstable equilibria with (all) real positive eigenvalues; it corresponds to an
unstable exponentially growing response.
5. saddle: unstable equilibria with (all) real (some positive, some negative) eigenvalues; it corresponds to an unstable exponentially growing response for some, but not all, the components.
6. center: marginally stable5 equilibria with (all) zero and/or imaginary eigenvalues; it corresponds to undamped sustained oscillations that must not be confused with a limit cycle6 .
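The six classes above can be told apart mechanically from the eigenvalues of the linearization; the following Python sketch (using the labels of the list above, and covering only the hyperbolic cases plus centers) illustrates the decision rule:

```python
# Classify an equilibrium from the eigenvalues of Df(xq), following the
# six classes listed in the text (hyperbolic cases plus centers assumed).
def classify(eigs, tol=1e-9):
    eigs = [complex(z) for z in eigs]
    re = [z.real for z in eigs]
    if all(abs(r) <= tol for r in re):
        return "center"                  # all zero and/or imaginary eigenvalues
    if any(abs(z.imag) > tol for z in eigs):
        return "attracting sink" if max(re) < -tol else "repelling sink"
    if max(re) < -tol:
        return "attracting node"
    if min(re) > tol:
        return "repelling node"
    return "saddle"                      # real eigenvalues of both signs
```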
The same classes of qualitative behaviors can be distinguished in nonlinear systems. Of course,
for a nonlinear system itself it does not make sense to talk about eigenvalues; it makes sense, however,
to consider the eigenvalues of the linearized system (B.3.3) near the fixed point, in particular
because of the following theorem.
Theorem B.3.1 (Hartman-Grobman). If Df(xq) has no zero or purely imaginary eigenvalues
then there is a homeomorphism h defined on some neighborhood V(xq) ⊂ Rn locally taking orbits of
the nonlinear flow φt of ẋ = f(x) to those of the linear flow e^{tDf(xq)} of (B.3.3). The homeomorphism
preserves the sense of orbits and can also be chosen to preserve parametrization by time.
Definition B.3.3 (Hyperbolic equilibrium). The equilibrium xq is called hyperbolic or non-degenerate
if Df(xq) has no eigenvalues with zero real part.
The stability of a degenerate equilibrium point cannot be determined by linearization. Hence, a
center in a nonlinear system, being a degenerate fixed point, cannot simply be identified
by linearization.
Definition B.3.4 (Local manifolds). The local stable and unstable manifolds W^s_loc(xq) and
W^u_loc(xq) of the equilibrium xq are the sets:

W^s_loc(xq) ≐ {x ∈ U | lim_{t→∞} φt(x) = xq and φt(x) ∈ U, ∀t ≥ 0}
W^u_loc(xq) ≐ {x ∈ U | lim_{t→−∞} φt(x) = xq and φt(x) ∈ U, ∀t ≤ 0}    (B.3.5)

where U ⊂ Rn is a neighborhood of the fixed point xq.
5 Generally acknowledged to be considered unstable.
6 The amplitude of a limit cycle oscillation does not depend on the initial condition; that of a center does.
The invariant manifolds W^s_loc(xq) and W^u_loc(xq) provide nonlinear analogues of the flat stable and
unstable eigenspaces E^s, E^u of linear systems.

Theorem B.3.2 (Stable Manifold Theorem for a Fixed Point). Let xq be a hyperbolic fixed point
of ẋ = f(x). Then there exist local stable and unstable manifolds W^s_loc(xq), W^u_loc(xq), of the same
dimensions ns, nu as those of the eigenspaces E^s, E^u of the linearized system (B.3.3), and tangent
to E^s, E^u at xq. W^s_loc(xq), W^u_loc(xq) are as smooth as the function f.

The local invariant manifolds W^s_loc(xq), W^u_loc(xq) have global analogues W^s(xq), W^u(xq), obtained
by letting points in W^s_loc(xq) flow backward in time and those in W^u_loc(xq) flow forward.
Definition B.3.5 (Global manifolds). The global stable and unstable manifolds W^s(xq) and
W^u(xq) of the equilibrium xq are the sets:

W^s(xq) ≐ ∪_{t≤0} φt(W^s_loc(xq))
W^u(xq) ≐ ∪_{t≥0} φt(W^u_loc(xq))    (B.3.6)
Existence and uniqueness of solutions of the initial value problem ẋ = f (x), x(0) = x0 ensure
that two stable (or unstable) manifolds of distinct fixed points cannot intersect, nor can W s (xq ) (or
W u (xq )) intersect itself. However, intersections of stable and unstable manifolds of distinct fixed
points or the same fixed point can occur.
Definition B.3.6 (Homoclinic orbit). An E-homoclinic orbit (or, simply, homoclinic orbit) is a
trajectory Γ which is asymptotic to a fixed point xq in both positive and negative time.
Definition B.3.7 (Heteroclinic connection). If a nonconstant solution is asymptotic to xi in
negative time, and to x_{i+1} in positive time, then we have a heteroclinic connection Γi ≐ W^u(xi) ∩ W^s(x_{i+1}).
Definition B.3.8 (Heteroclinic orbit). If there is a loop which connects m fixed points x1, . . . , xm
in one direction, the common set:

Λ ≐ ∪_{i=1}^{m} (Γi ∪ xi)    (B.3.7)

is called a heteroclinic orbit.
Theorem B.3.3 (Lyapunov Stability). Let xq be a fixed point for ẋ = f(x) and V : W → R
a differentiable function defined on some neighborhood W ⊆ U of xq such that:

1. V(xq) = 0 and V(x) > 0 if x ≠ xq; and
2. V̇(x) ≤ 0 in W − {xq};

then xq is stable. Moreover if

3. V̇(x) < 0 in W − {xq};

then xq is asymptotically stable.
Here:

V̇(x) = Σ_{j=1}^{n} (∂V/∂xj) ẋj = Σ_{j=1}^{n} (∂V/∂xj) fj(x)    (B.3.8)

is the derivative of V along solution curves of ẋ = f(x).
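As a small worked instance of Theorem B.3.3 and (B.3.8), take the illustrative damped system ẋ = y, ẏ = −x − y (not an example from the thesis) with the candidate V(x, y) = x² + y²; then V̇ = 2xy + 2y(−x − y) = −2y² ≤ 0, so the origin is stable. A Python sketch:

```python
# Damped system x' = y, y' = -x - y with Lyapunov candidate V = x^2 + y^2
def f(x, y):
    return (y, -x - y)

def V_dot(x, y):
    # (B.3.8): sum over j of (dV/dx_j) * f_j(x); here dV/dx = 2x, dV/dy = 2y
    fx, fy = f(x, y)
    return 2.0 * x * fx + 2.0 * y * fy   # simplifies to -2*y**2 <= 0
```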
Definition B.3.9 (Completely Stable Systems). Let X be the state space of the dynamical
system ẋ = f(x) and x̂ ∈ X a constant vector. The system ẋ = f(x) is completely stable or
convergent iff:

lim_{t→∞} x(t) = x̂,  lim_{t→∞} ẋ(t) = 0,  ∀x(0) ∈ X    (B.3.9)
Definition B.3.10 (Globally Stable System). The system ẋ = f(x) is said to be globally asymptotically stable or globally convergent iff:

∀x(0) = x0 ∈ Rn,  lim_{t→∞} x(t) = x̂ ∈ Rn (and x̂ is unique),  lim_{t→∞} ẋ(t) = 0    (B.3.10)
Observe that, in a globally stable system, the fixed point x̂ is unique and independent of the
initial condition. In other words, its basin of attraction is the whole state space.
Definition B.3.11 (Stability almost everywhere). The system ẋ = f(x) is said to be stable almost
everywhere or almost convergent if the set of initial values from which the system does not converge
to a fixed point has zero Lebesgue measure.
Definition B.3.12 (Complete instability almost everywhere). The system
ẋ = f(x) is said to be completely unstable almost everywhere if, ∀x(0) = x0 (except possibly a set of
zero Lebesgue measure), the trajectory does not converge to an equilibrium.
Definition B.3.13 (Cooperative systems). The system ẋ = f(x), f ∈ C¹, is said to be cooperative iff the off-diagonal elements of the Jacobian matrix J = Df(x) are non-negative, i.e.:
Jij = ∂fi/∂xj ≥ 0, i ≠ j.
Definition B.3.14 (Irreducible systems). The system ẋ = f (x), f ∈ C 1 is said to be irreducible
iff the Jacobian matrix J = Df (x) is irreducible ∀x.
Let us now consider discrete time maps. All the concepts discussed for vector fields can
easily be generalized to maps with little formal modification. It is almost redundant to recall that,
as far as the eigenvalues corresponding to fixed points of linear systems are concerned, the stable region
of the complex plane is, for maps, the unit circle centered at the origin. Therefore it is not the
sign of the real part of the eigenvalues that determines the stability but their modulus.
Hence, for example, the definition of hyperbolic fixed point (B.3.3) is modified as
follows:
Definition B.3.15 (Hyperbolic fixed point). A fixed point xq for G (i.e. G(xq) = xq) is said
to be hyperbolic if DG(xq) has no unit-modulus eigenvalues.
Analogously, there exists a theory for diffeomorphisms parallel to that for flows. In particular there
are analogous Hartman-Grobman and invariant-manifold theorems.

Theorem B.3.4 (Hartman-Grobman). Let G : Rn → Rn be a (C¹) diffeomorphism with a hyperbolic fixed point xq. Then there exists a homeomorphism h defined on some neighborhood U(xq)
such that h(G(ξ)) = DG(xq)h(ξ), ∀ξ ∈ U.
Definition B.3.16 (Local manifolds). The local stable and unstable manifolds
W^s_loc(xq) and W^u_loc(xq) of the fixed point xq are the sets:

W^s_loc(xq) ≐ {x ∈ U : lim_{n→+∞} G^n(x) = xq, and G^n(x) ∈ U, ∀n ≥ 0}
W^u_loc(xq) ≐ {x ∈ U : lim_{n→+∞} G^{−n}(x) = xq, and G^{−n}(x) ∈ U, ∀n ≥ 0}    (B.3.11)

where U ⊂ Rn is a neighborhood of the fixed point xq.
Theorem B.3.5 (Stable manifold theorem). Let G : Rn → Rn be a (C¹) diffeomorphism with
a hyperbolic fixed point xq. Then there are local stable and unstable manifolds W^s_loc(xq), W^u_loc(xq),
tangent to the eigenspaces E^s_{xq}, E^u_{xq} of DG(xq) at xq and of corresponding dimensions.
W^s_loc(xq), W^u_loc(xq) are as smooth as the map G.
Definition B.3.17 (Global manifolds). The global stable and unstable manifolds W^s(xq) and
W^u(xq) of the fixed point xq are the sets:

W^s(xq) ≐ ∪_{n≥0} G^{−n}(W^s_loc(xq))
W^u(xq) ≐ ∪_{n≥0} G^n(W^u_loc(xq))    (B.3.12)
Note that flows and maps differ in that, while the trajectory φt(p) of a flow
is a curve in Rn, the orbit {G^n(p)} of a map is a sequence of points. Thus, while the invariant
manifolds of flows are composed of unions of solution curves, those of maps are unions of discrete
orbit points.
Similarly, it is possible to talk about homoclinic and heteroclinic orbits.
B.3.2 Stability of limit cycles
The study of the stability of limit cycles can be converted into the study of the stability of fixed
points by means of the Poincaré sections and maps.
Let γ be a periodic orbit of the flow φt ∈ Rn arising from a nonlinear vector field f (x).
Definition B.3.18 (Poincaré Section). Let us consider a local (n − 1)-dimensional hypersurface
Σ ⊂ Rn. Let n(x) be the unit normal to Σ at x. Let Σ be a transverse cross-section of f, i.e.:
f(x) · n(x) ≠ 0, ∀x ∈ Σ. Σ is called a Poincaré Section.
Definition B.3.19 (Poincaré Map). Let p be the (unique) intersection point of γ and Σ (p =
γ ∩ Σ). Let U ⊂ Σ be some neighborhood of p7. The first return or Poincaré map P : U → Σ is defined
for a point q ∈ U by:

P(q) ≐ φ_τ(q)    (B.3.13)

where τ = τ(q) is the time taken for φt(q) to first return to Σ.
Note that τ generally depends upon q and need not be equal to T = T(p), the period of γ,
although lim_{q→p} τ = T.
Clearly, p is a fixed point for the map P and the stability of p reflects the stability of γ for the
flow φt . If p is hyperbolic, and DP (p), the linearized map, has ns eigenvalues with modulus less
than one and nu with modulus greater than one (ns + nu = n − 1), then dim W s (p) = ns , and
dim W u (p) = nu for the map. Since the orbits of P lying in W s and W u are formed by intersections
of orbits (solution curves) of φt with Γ, the dimensions of W s (γ) and W u (γ) are each one greater
than those for the map.
Analogously, closed orbits {x*_k}_{k=1}^{K} of P with period K correspond to Kth-order sub-harmonics
of the underlying dynamical system.
Let x̂(t) = x̂(t + T ) be a solution lying on the closed orbit γ, based at x(0) = p ∈ Σ. Consider
the linearization around γ:
ξ̇ = Df(x̂(t))ξ    (B.3.14)
where Df (x̂(t)) is an n × n, T -periodic matrix. The corresponding Poincaré map P (x) has the fixed
point p. The linearized map corresponding to (B.3.14) and Σ has the form:
Φ̇ = Df(φt(x0))Φ    (B.3.15)

with solution Φt(x0) and Φ_T(p) = DP(p) = D_{x0}φ_T(p)8. Equation (B.3.15) is called the variational
equation. The solution matrix of (B.3.14) is:
X(t) = Z(t)e^{tR},  Z(t) = Z(t + T)    (B.3.16)

7 If γ has multiple intersections with Σ, then shrink Σ until there is only one intersection.
8 Observe, in the last passage, the partial derivative with respect to the initial condition.
where X, Z, R ∈ R^{n×n}. In particular we can choose X(0) = Z(0) = I, hence:

X(T) = Z(T)e^{TR} = Z(0)e^{TR} = e^{TR}    (B.3.17)

It follows that the behaviour of the solutions in the neighborhood of γ is determined by the eigenvalues of the constant matrix e^{TR}. These are also the eigenvalues of the solution Φt(x0) of
the variational equation (B.3.15). These eigenvalues, m1, . . . , mn, are called the characteristic (Floquet) multipliers of γ; the eigenvalues µ1, . . . , µn of R are the characteristic exponents of γ. One
of the Floquet multipliers is always unity9. The moduli of the remaining n − 1, if none are unity10,
determine the stability of γ. They are independent of the chosen Poincaré map.
In a non-autonomous system, according to (B.1.3) and in complete coherence with Definition
B.3.19, the Poincaré map can be obtained by a periodic sampling along t of the system trajectory.
Besides, in this case, the definition is global and not local (as in the case of Def. B.3.19). Let S¹ = R (mod T) be the circular component reflecting the periodicity11 of the vector field f in θ. The global
cross-section is then:

Σ = {(x, θ) ∈ Rn × S¹ : θ = θ0}    (B.3.18)

while the global Poincaré map P : Σ → Σ is:

P(x0) = π · φ_T(x0, θ0)    (B.3.19)

where φt : Rn × S¹ → Rn × S¹ is the flow of (B.1.3) and π denotes projection onto the first factor.
In the case of Kth-order sub-harmonics the discussion can be repeated by considering the eigenvalues of the solution DP^K(p1) (p1 being any one of the K periodic points) of the variational equation
corresponding to the (Kth-iterate) Poincaré map P^K.
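A minimal Python sketch of the global (stroboscopic) Poincaré map (B.3.19), for the illustrative forced oscillator x'' + 0.4x' + x = cos t (a hypothetical example, not from the thesis): iterates of P converge to its fixed point, the sample of the T-periodic steady state x(t) = 2.5 sin t.

```python
import math

# Forced oscillator x'' + 0.4 x' + x = cos t, as first-order state (x, v)
def f(s, t):
    x, v = s
    return (v, -x - 0.4 * v + math.cos(t))

def rk4_step(s, t, h):
    k1 = f(s, t)
    k2 = f((s[0] + h/2*k1[0], s[1] + h/2*k1[1]), t + h/2)
    k3 = f((s[0] + h/2*k2[0], s[1] + h/2*k2[1]), t + h/2)
    k4 = f((s[0] + h*k3[0], s[1] + h*k3[1]), t + h)
    return (s[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

T = 2.0 * math.pi           # forcing period

def poincare(s, n=200):
    # global Poincare map: advance the flow by exactly one forcing period T
    h, t = T / n, 0.0
    for _ in range(n):
        s = rk4_step(s, t, h)
        t += h
    return s

p = (0.0, 0.0)
for _ in range(25):         # iterates of P converge to the fixed point (0, 2.5)
    p = poincare(p)
```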
Definition B.3.20 (P-Homoclinic orbit). If xq happens to be a homoclinic fixed point of a discrete map (e.g. a Poincaré map) corresponding to a periodic orbit γ in the flow φt, then the homoclinic
orbit Γ is a P-homoclinic orbit.

9 Not true in the non-autonomous case.
10 If there is at least one that lies on the unit circle, then an exception similar to the case of the Hartman-Grobman theorem with non-hyperbolic points applies. Hence the stability cannot simply be determined by the Floquet multipliers.
11 Usually, T is the period of the forcing input or an integer multiple of it.
B.3.3 Lyapunov exponents

Lyapunov exponents are a generalization of the eigenvalues and of the Floquet multipliers. They
make it possible to characterize the stability of any type of steady-state behavior, including quasi-periodic and chaotic
motion. Let Φt(x0) be the solution of the variational equation.

Definition B.3.21 (Lyapunov Exponents). Let {mi(t)}_{i=1}^{n} be the eigenvalues of Φt(x0). The
Lyapunov exponents are:

λi ≐ lim_{t→∞} (1/t) ln |mi(t)|,  i = 1, . . . , n    (B.3.20)

if the limit exists12.
Given that the definition involves a limit t → ∞, any point belonging to the basin of attraction
of an attractor has the same Lyapunov exponents13. Moreover, it can be seen that the Lyapunov
exponents (LE's) reduce to the real parts of the eigenvalues in the case of a fixed point, while the
following relationship holds with the Floquet multipliers m1, . . . , mn of a limit cycle:

λi = (1/T) ln |mi|,  i = 1, . . . , n    (B.3.21)

Of course, one of the LE's is zero14. In general, for any bounded attractor of an autonomous system,
except an equilibrium point, one LE is always zero.
The Lyapunov exponents give the average rate of contraction (λi < 0) or expansion (λi > 0) in a
particular direction near a particular trajectory. Since, for the trajectory to remain bounded, the
contraction must outweigh expansion, the following property holds:

Σ_{i=1}^{n} λi < 0    (B.3.22)
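For a one-dimensional map the definition reduces to the time average of ln |G′(x)| along an orbit; a Python sketch for the logistic map x → 4x(1 − x) (a standard illustrative example, not from the thesis), whose largest Lyapunov exponent is known to be ln 2:

```python
import math

# Largest Lyapunov exponent of the logistic map x -> 4x(1-x), estimated as
# the orbit average of ln|G'(x)|, with G'(x) = 4 - 8x.
def lyapunov(x0, n=100000):
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov(0.2)    # expected to approach ln 2 ~ 0.693
```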
Ignoring the case of non-hyperbolic attractors, the following classification based on the LE's
holds (λ1 ≥ λ2 ≥ · · · ≥ λn):

1. Stable fixed point: λi < 0, ∀i.
2. Stable limit cycle: λ1 = 0, λi < 0 for i = 2, . . . , n.

12 lim can be replaced by lim sup to guarantee existence. However, the interpretation of the Lyapunov exponents is correct only when the limit exists.
13 This is true for almost every point in the case of some strange attractors.
14 The one corresponding to the unity multiplier.
3. Stable torus15: λ1 = λ2 = 0, λi < 0 for i = 3, . . . , n.
4. Stable K-torus16: λ1 = · · · = λK = 0, λi < 0 for i = K + 1, . . . , n.
5. Chaos: λ1 > 0, Σi λi < 0.
6. Hyperchaos: λ1, . . . , λK > 0, Σi λi < 0.

B.4 Topological Equivalence and Conjugacy, Structural Stability and Bifurcations
Definition B.4.1 (Equivalent Maps). Two C r maps F , G are C k equivalent or C k conjugate
(k ≤ r) if there exists a C k diffeomorphism h such that h ◦ F = G ◦ h. C 0 equivalence is called
topological equivalence.
Definition B.4.2 (Equivalent Vector Fields). Two C r vector fields f ,g are said to be C k equivalent (k ≤ r) if there exists a C k diffeomorphism h which takes orbits φft (x) of f to orbits φgt (x) of
g, preserving sense but not necessarily parametrization by time. If h does preserve parametrization
by time, then it is called a conjugacy.
Definition B.4.3 (Structural stability). A map F ∈ C r (R) (resp. a C r vector field f ) is structurally stable if there exists ε > 0 such that all C 1 ε-perturbations of F (resp. f ) are topologically equivalent to F (resp. f ).
It is clear that a vector field (or map) that has a non-hyperbolic fixed point cannot be structurally stable, because any perturbation can remove it or turn it into a hyperbolic one. The same observation clearly applies to periodic orbits. It follows that having all fixed points and closed orbits hyperbolic is a necessary but not sufficient condition for structural stability. Homoclinic and heteroclinic orbits are not structurally stable.
When, changing one of the system parameters, the dynamic system undergoes an abrupt qualitative change of behavior (e.g. a stable fixed point becomes unstable and a stable limit cycle appears), we say that the system undergoes a bifurcation. More precisely, the value µ = µ0 of the parameter for which the system is not structurally stable is called the bifurcation value of µ, µ being the so-called bifurcation parameter.
There are a certain number of known bifurcations: Hopf, Saddle-node, Period-doubling. These
are called local bifurcations because they may be understood by linearization [58].
15 Quasi-periodic motion.
16 Quasi-periodic motion as well.
B.5    Šilnikov Method
In this Section, the Šilnikov Method is considered in its restriction to three-dimensional dissipative continuous systems with homoclinic trajectories.
Definition B.5.1 (Šilnikov map). A Poincaré map defined in a neighborhood U of a homoclinic trajectory H based at the fixed point xq , and such that xq ∉ U , is called a Šilnikov map.
Theorem B.5.1 (Šilnikov Theorem). Consider the third-order autonomous system:

ẋ = f (x)        (B.5.1)

where f is a C 2 vector field on R3 . Let xq be a fixed point for (B.5.1) and suppose that:

1. xq is a saddle focus, whose eigenvalues of the corresponding linearized system are of the form:

γ, σ ± jω,    γ, σ, ω ∈ R        (B.5.2a)

with ω ≠ 0, and satisfy the Šilnikov inequality:

|γ| > |σ| > 0        (B.5.2b)

2. There exists a homoclinic trajectory H based at xq .
Then:
1. The Šilnikov map defined in a neighborhood of H possesses a countable number of Smale horseshoes in its discrete dynamics.
2. For any sufficiently small C 1 -perturbation g of f , the perturbed system:
ẋ = g(x)        (B.5.3)
has at least a finite number of Smale horseshoes in the discrete dynamics of the Šilnikov map
defined near H.
3. Both the unperturbed system (B.5.1) and the perturbed system (B.5.3) exhibit horseshoe chaos.
Some important remarks deserve mention. First of all, results (1) and (2) imply the structural stability of the horseshoe chaos: the chaos persists in spite of perturbations, although homoclinic trajectories are not structurally stable.
Secondly, if |γ| ≤ |σ| then the chaos is extinguished; therefore |γ| = |σ| is the bifurcation point between regular and chaotic behavior. Perhaps the most difficult aspect of this method, instead, is the proof of the existence of H.
Finally, the Šilnikov method has been extended to PWL C 2 vector fields for which:
1. xq is in the interior of one of the domains in which the state space can be partitioned according
to the PWL nonlinearity;
2. H is bounded away from all other fixed points and it is not tangent to any of the boundary
surfaces.
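For illustration, the saddle-focus hypothesis and the Šilnikov inequality (B.5.2b) can be checked directly from the eigenvalues of the linearization at xq; the eigenvalues below are made-up values, not those of a specific circuit:

```python
# Hypothetical eigenvalues of the linearization at the saddle focus x_q:
# one real eigenvalue gamma and a complex pair sigma +/- j*omega.
gamma, sigma, omega = 2.5, -0.5, 3.0   # assumed example values

# saddle focus: genuinely complex pair, real parts of opposite sign
is_saddle_focus = (omega != 0.0) and (gamma * sigma < 0.0)

# Silnikov inequality (B.5.2b): |gamma| > |sigma| > 0
silnikov_inequality = abs(gamma) > abs(sigma) > 0.0

# both conditions hold, so hypothesis 1 of Theorem B.5.1 is satisfied;
# the hard part in practice remains proving the homoclinic trajectory H
```
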
B.6    Particular Results for Two-Dimensional Flows
The nature of admissible solutions for two-dimensional (planar) flows is rather limited. In fact, the choice of possible limit sets is restricted to fixed points, cycles, and homoclinic and heteroclinic orbits only. This inherent relative simplicity allows some further general results to hold. The topic, however, is far from trivial: for instance, Andronov and co-workers [170] or Hirsch and Smale [68] have written well over a thousand pages on this subject.
Theorem B.6.1 (Poincaré–Bendixson Theorem). A nonempty compact ω-limit or α-limit set of a planar flow, which contains no fixed points, is a closed orbit.
The following well-known criterion is also very useful.
Theorem B.6.2 (Negative Bendixson’s Criterion). If on a simply connected region D ⊆ R2 the divergence of the vector field:

∂f1 /∂x1 + ∂f2 /∂x2        (B.6.1)

is not identically zero and does not change sign, then the system ẋ = f (x) has no closed orbits lying entirely in D.
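A minimal numerical sketch of the criterion, using the damped linear oscillator ẋ1 = x2, ẋ2 = −x1 − c x2 as an assumed example (its divergence is the constant −c, so no closed orbit can lie entirely in any simply connected region):

```python
def f(x1, x2, c=0.3):
    # damped oscillator: x1' = x2, x2' = -x1 - c*x2  (c > 0 assumed)
    return (x2, -x1 - c * x2)

def divergence(x1, x2, h=1e-5):
    # central-difference estimate of  d f1/d x1 + d f2/d x2
    df1 = (f(x1 + h, x2)[0] - f(x1 - h, x2)[0]) / (2.0 * h)
    df2 = (f(x1, x2 + h)[1] - f(x1, x2 - h)[1]) / (2.0 * h)
    return df1 + df2

# sample the divergence on a grid over D = [-1, 1]^2
samples = [divergence(-1.0 + 0.2 * i, -1.0 + 0.2 * j)
           for i in range(11) for j in range(11)]
no_closed_orbits = all(s < 0.0 for s in samples)  # divergence is -c everywhere
```

For this linear field the divergence is exactly −c, so the grid check is trivially conclusive; for a general field such sampling only suggests, and does not prove, sign-definiteness.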
A generalization of Bendixson’s criterion is the following:

Theorem B.6.3 (Negative Dulac’s Criterion). If there exists a continuous function B(x) with continuous derivatives such that on a simply connected region D ⊆ R2 the expression:

∂(Bf1 )/∂x1 + ∂(Bf2 )/∂x2        (B.6.2)

is not identically zero and does not change sign, then the system ẋ = f (x) has no closed orbits lying entirely in D.
A limit cycle can appear as a consequence of a Hopf Bifurcation:
Theorem B.6.4 (Hopf bifurcation theorem). Let us consider the dynamic system:
ẋ = f (x, µ)        (B.6.3)
where x ∈ R2 , f ∈ C k with k ≥ 4, while µ ∈ R is a system parameter. Suppose that (B.6.3) has an equilibrium point at the origin ∀µ, and that the eigenvalues λ1 (µ) and λ2 (µ) are purely imaginary for µ = µ0 . If the real part of the eigenvalues, Re{λ1 (µ)}, satisfies:

(d/dµ) Re{λ1 (µ)}|µ=µ0 > 0        (B.6.4)

and the origin is an asymptotically stable equilibrium point for µ = µ0 , then:

1. µ = µ0 is a bifurcation point of the system;

2. for µ ∈ (µ1 , µ0 ), for some µ1 < µ0 , the origin is a stable focus;

3. for µ ∈ (µ0 , µ2 ), for some µ2 > µ0 , the origin is an unstable focus surrounded by a stable limit cycle whose size increases with µ.
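The transversality condition (B.6.4) can be illustrated on the standard planar example ẋ = µx − y, ẏ = x + µy (an assumed textbook example, not from this thesis), whose eigenvalues are µ ± j, so Re{λ1(µ)} = µ crosses zero at µ0 = 0:

```python
def real_part(mu):
    # Jacobian at the origin is [[mu, -1], [1, mu]]: eigenvalues mu +/- j
    return mu  # Re{lambda_1(mu)} for this example

mu0 = 0.0
h = 1e-6
# central-difference estimate of d Re{lambda_1}/d mu at mu = mu0
transversality = (real_part(mu0 + h) - real_part(mu0 - h)) / (2.0 * h)

# transversality = 1 > 0: the eigenvalue pair crosses the imaginary axis
# transversally at mu0, as condition (B.6.4) requires
```
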
Bibliography
[1] Leon O.Chua and Lin Yang, “Cellular neural networks,” in IEEE International Symposium
on Circuits and Systems, 1988, vol. 2, pp. 985–988.
[2] Leon O.Chua and Lin Yang, “Cellular neural networks: Theory,” IEEE Transactions on
Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
[3] Leon O.Chua and Lin Yang, “Cellular neural networks: Applications,” IEEE Transactions on
Circuits and Systems, vol. 35, no. 10, pp. 1273–1290, 1988.
[4] Leon O.Chua and Lin Yang, “Cellular neural networks,” US Patent no. 5140670, 1992.
[5] S.Wolfram, “Computation theory of cellular automata,” Communications in Mathematical
Physics, vol. 96, pp. 15–57, 1984.
[6] T.Toffoli and N.Margolus, Cellular Automata Machines: A New Environment for Modeling,
MIT Press, 1987.
[7] J.J.Hopfield, “Neural networks and physical systems with emergent computational abilities,”
Proc. Natl. Acad. Sci. USA., vol. 79, pp. 2554–2558, 1982.
[8] Carver Mead, Analog VLSI and Neural Systems, Addison-Wesley, 1989.
[9] P.Arena, S.Baglio, L.Fortuna and G.Manganaro,
“Cellular neural networks: A survey,”
in Proc. of 7th IMACS-IFAC Symposium on Large Scale Systems Theory and Applications
(LSS’95), 1995, vol. 1, pp. 53–58.
[10] T.Roska and J.Vandewalle, Eds., “Special issue on CNNs,” International Journal on Circuit
Theory and Applications, vol. 20, no. sept./oct., 1992.
[11] T.Roska and J.Vandewalle, Eds., “Special issue on CNNs,” International Journal on Circuit
Theory and Applications, vol. 24, no. 3, 1996.
[12] T.Roska and J.A.Nossek, Eds., “Special issue on CNNs,” IEEE Transactions on Circuits and
Systems–Part I: Fundamental Theory and Applications, vol. 40, no. 3, 1993.
[13] T.Roska and J.A.Nossek, Eds., “Special issue on CNNs,” IEEE Transactions on Circuits and
Systems–Part II: Analog and Digital Signal Processing, vol. 40, no. 3, 1993.
[14] L.O.Chua, Ed., “Special issue on nonlinear waves, patterns and spatio-temporal chaos in
dynamic arrays,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory
and Applications, vol. 42, no. 10, 1995.
[15] T.Roska, Ed., Proc. Int. Workshop Cellular Neural Networks and Their Applications. IEEE,
1990.
[16] J.A.Nossek, Ed., Proc. Int. Workshop Cellular Neural Networks and Their Applications. IEEE,
1992.
[17] V.Cimagalli, Ed., Proc. Int. Workshop Cellular Neural Networks and Their Applications.
IEEE, 1994.
[18] T.Roska, Ed., Proc. Int. Workshop Cellular Neural Networks and Their Applications. IEEE,
1996.
[19] V.Tavsanoglu, Ed., Proc. Int. Workshop Cellular Neural Networks and Their Applications.
IEEE, 1998.
[20] Leon O.Chua and Lin Yang, “Cellular neural networks: Applications,” IEEE Transactions
on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 40, no. 3, pp.
147–156, 1993.
[21] Leon O.Chua, Martin Hasler, George S.Moschytz and Jacques Neirynck, “Autonomous cellular
neural networks: A unified paradigm for pattern formation and active wave propagation,”
IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications,
vol. 42, no. 10, pp. 559–577, 1995.
[22] Patrick Thiran, “Influence of boundary conditions on the behavior of cellular neural networks,”
IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications,
vol. 40, no. 3, pp. 207–212, 1993.
[23] Leon O.Chua and Chai Wah Wu, On the universe of stable Cellular Neural Networks, pp.
59–79, In T.Roska and J.Vandewalle [30], 1993.
[24] Leon O.Chua and Tamás Roska, “Stability of a class of nonreciprocal cellular neural networks,”
IEEE Transactions on Circuits and Systems, vol. 37, no. 12, pp. 1520–1527, 1990.
[25] Rafael C.Gonzalez, Digital Image Processing, Addison-Wesley, 1992.
[26] William K.Pratt, Digital Image Processing, John Wiley & Sons, 1978.
[27] D.P.Bertsekas and J.N.Tsitsiklis, Parallel and distributed computations, Numerical methods,
Prentice Hall, 1989.
[28] Tamás Roska and Leon O.Chua, “Reprogrammable CNN and supercomputer,” US Patent no. 5355528, 1994.
[29] Tamás Roska and Leon O.Chua, “The CNN universal machine: An analogic array computer,”
IEEE Transactions on Circuits and Systems–Part II: Analog and Digital Signal Processing,
vol. 40, no. 3, pp. 163–173, 1993.
[30] T.Roska and J.Vandewalle, Eds., Cellular Neural Networks, John Wiley & Sons, 1993.
[31] Tamás Roska and Leon O.Chua, Cellular Neural Networks with non-linear and delay-type
template elements and non-uniform grids, pp. 31–43, In T.Roska and J.Vandewalle [30], 1993.
[32] W.Heiligenberg and T.Roska, “On biological sensory information processing principles relevant to dually computing CNN’s,” Rep. DNS-4-1992, Dual and Neural Computing Systems
Laboratory, Hungarian Academy of Science, 1992.
[33] T.Roska, J.Hámori, E.Lábos, K.Lotz, L.Orzó, J.Takács, P.L.Venetianer, Z.Vidnyánszky,
A.Zarándy, “The use of CNN models in the subcortical visual pathway,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 40, no. 3,
pp. 182–195, 1993.
[34] F.Werblin and A.Jacobs, “Using CNN to unravel space-time processing in the vertebrate
retina,” In V.Cimagalli [17], pp. 33–40.
[35] Valerio Cimagalli, “A neural network architecture for detecting moving objects II,” In T.Roska
[15], pp. 124–126.
[36] T.Roska, T.Boros, P.Thiran and L.O.Chua, “Detecting simple motion using cellular neural
networks,” In T.Roska [15], pp. 127–138.
[37] T.Roska, T.Boros, A.Radványi, P.Thiran, L.O.Chua, Detecting moving and standing objects
using cellular neural networks, pp. 175–190, In T.Roska and J.Vandewalle [30], 1993.
[38] R.Madan, Ed., Chua’s circuit: a paradigm for chaos, World Scientific, 1993.
[39] Hubert Harrer and Josef A.Nossek, Discrete-time Cellular Neural Networks, pp. 15–29, In
T.Roska and J.Vandewalle [30], 1993.
[40] C.Ho and S.Mori, “A systematic design method for discrete-time cellular neural networks,”
in European Conference on Circuit Theory and Design, 1993, pp. 693–698.
[41] H.Magnussen and J.A.Nossek, “Global learning algorithms for discrete-time cellular neural
networks,” In V.Cimagalli [17], pp. 165–170.
[42] John J.D’Azzo and Constantine H.Houpis, Linear Control System Analysis and Design: Conventional and Modern, Electrical and Electronic Engineering Series. McGraw-Hill, 3rd edition,
1988.
[43] Leon O.Chua, Tamás Roska and Peter L.Venetianer, “The CNN is universal as the Turing
machine,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and
Applications, vol. 40, no. 4, pp. 289–291, 1993.
[44] Kenneth R.Crounse and Leon O.Chua, “The CNN universal machine is as universal as a
Turing machine,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory
and Applications, vol. 43, no. 4, pp. 353–355, 1996.
[45] Josef A.Nossek, “Design and learning with cellular neural networks,” In V.Cimagalli [17], pp.
137–146.
[46] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Chua’s circuit can be generated by CNN
cells,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42, no. 2, pp. 123–125, 1995.
[47] P.Arena, L.Fortuna, G.Manganaro and S.Spina, “CNN image processing for the automatic
classification of oranges,” In V.Cimagalli [17], pp. 463–467.
[48] K.Wuthrich, NMR of proteins and nucleic acids, John Wiley & Sons, 1986.
[49] H.Kessler, M.Gehrke and C.Griesinger, Adv. Chem. Int. Ed. Engl., vol. 27, pp. 490–536,
1988.
[50] P.Catasti, E.Carrara and C.Nicolini, “PEPTO: an expert system for automatic peak assignment of two-dimensional nuclear magnetic resonance spectra of proteins,” Journal of
Computational Chemistry, 1992.
[51] A.Friedman, Mathematics in Industrial Problems, vol. 31 of IMA Vol. Math. Appl., SpringerVerlag, 1990.
[52] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Air quality modeling with CNN’s,” in
European Conference on Circuit Theory and Design, 1995, vol. 2, pp. 885–888.
[53] L.Fortuna, S.Graziani, G.Manganaro and G.Muscato, “The CNN’s as innovative computing
paradigm for modeling,” in Proc. IDA World Congress on Desalination and Water Sciences,
1995, vol. 4, pp. 399–409.
[54] T.Kozek and T.Roska, “A double time-scale CNN for solving 2-D Navier-Stokes equations,”
In V.Cimagalli [17], pp. 267–272.
[55] Tamás Roska, Leon O.Chua, Dietrich Wolf, Tibor Kozek, Ronald Tetzlaff and Frank Puffer,
“Simulating nonlinear waves and partial differential equations via CNN-part I: Basic techniques,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42, no. 10, pp. 807–815, 1995.
[56] Tibor Kozek, Leon O.Chua, Tamás Roska, Dietrich Wolf, Ronald Tetzlaff, Frank Puffer and
Karoly Lotz, “Simulating nonlinear waves and partial differential equations via CNN-part II:
Typical examples,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory
and Applications, vol. 42, no. 10, pp. 816–820, 1995.
[57] Paolo Arena, Reti neurali per modellistica e predizione in circuiti e sistemi, Ph.D. thesis,
University of Catania - Italy, 1994, ing. elettrotecnica VI ciclo, in italian.
[58] John Guckenheimer and Philip Holmes, Nonlinear Oscillations, dynamical systems, and bifurcations of vector fields, vol. 42 of Applied Mathematical Sciences, Springer-Verlag, 1983.
[59] Thomas S.Parker and Leon O.Chua, Practical numerical algorithms for chaotic systems,
Springer-Verlag, 1989.
[60] M.A.van Wyk and W.-H.Steeb, Chaos in Electronics, vol. 2 of Mathematical Modelling, Theory
and Applications, Kluwer Academic Publishers, 1997.
[61] S.Baglio, R.Cristaudo, L.Fortuna and G.Manganaro, “Complexity in an industrial flyback
converter,” International Journal of Circuits, Systems and Computers, vol. 5, no. 4, pp.
627–633, 1995.
[62] Salvatore Baglio, comportamenti non lineari e fenomeni caotici nei circuiti e nei sistemi
dinamici, Ph.D. thesis, University of Catania - Italy, 1994, ing. elettrotecnica VI ciclo, in
italian.
[63] E.Ott, C.Grebogi and J.A.Yorke, “Controlling chaos,” Physical Review Letters, vol. 64, no.
11, pp. 1196–1199, 1990.
[64] Fan Zou and Josef A.Nossek, “A chaotic attractor with Cellular Neural Networks,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 38,
no. 7, pp. 811–812, 1991.
[65] Fan Zou and Josef A.Nossek, “Bifurcation and chaos in Cellular Neural Networks,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 40,
no. 3, pp. 166–173, 1993.
[66] Fan Zou, Axel Katérle and Josef A.Nossek, “Homoclinic and heteroclinic orbits of the three-cell
Cellular Neural Network,” IEEE Transactions on Circuits and Systems–Part I: Fundamental
Theory and Applications, vol. 40, no. 11, pp. 843–848, 1993.
[67] Edward Ott, Chaos in dynamical systems, Cambridge University Press, 1993.
[68] M.W.Hirsch and S.Smale, Differential equations, dynamical systems and linear algebra, Academic Press, 1974.
[69] Gabriele Manganaro, Mario Lavorgna, Matteo Lo Presti and Luigi Fortuna, “Cellular neural
network to obtain the so-called unfolded Chua’s Circuit,” European Patent, , no. 96830137.42201, 1996.
[70] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “State controlled CNN: A new strategy for
generating high complex dynamics,” IEICE Transactions on Fundamentals of Electronics,
Communications and Computer Sciences, vol. E79-A, no. 10, pp. 1647–1657, 1996.
[71] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “A simplified scheme for the realisation of
the chua’s oscillator by using SC-CNN cells,” IEE Electronics Letters, vol. 31, no. 21, pp.
1794–1795, 1995.
[72] Leon O.Chua, “Global unfolding of Chua’s circuit,” IEICE Transactions on Fundamentals of
Electronics, Communications and Computer Sciences, vol. E76-A, pp. 704–734, 1993.
[73] Vincente Pérez-Muñuzuri, Vincente Pérez-Villar and Leon O.Chua, “Autowaves for image
processing on a two-dimensional CNN array of excitable nonlinear circuits: flat and wrinkled
labyrinths,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and
Applications, vol. 40, no. 3, pp. 174–181, 1993.
[74] Alberto Pérez-Muñuzuri, Vincente Pérez-Muñuzuri, Vincente Pérez-Villar and Leon O.Chua,
“Spiral waves on a 2-D array nonlinear circuits,” IEEE Transactions on Circuits and Systems–
Part I: Fundamental Theory and Applications, vol. 40, no. 11, pp. 872–877, 1993.
[75] M.P.Kennedy, “Chaos in the Colpitts oscillator,” IEEE Transactions on Circuits and Systems–
Part I: Fundamental Theory and Applications, vol. 41, pp. 771–774, 1994.
[76] M.P.Kennedy, “On the relationship between the chaotic Colpitts oscillator and Chua’s Oscillator,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42, pp. 376–379, 1995.
[77] G.Sarafian and B.Z.Kaplan, “Is the Colpitts oscillator a relative of Chua’s Circuit?,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42,
pp. 373–376, 1995.
[78] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “A CNN to generate the chaos of the Colpitts
oscillator,” in International Symp.on Nonlinear Theory and its Applications (NOLTA’95),
1995, vol. 2, pp. 689–694.
[79] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “How state controlled CNN cells generate
the dynamics of the Colpitts-like oscillator,” IEEE Transactions on Circuits and Systems–Part
I: Fundamental Theory and Applications, vol. 43, no. 7, pp. 602–605, 1996.
[80] T.Saito, “An approach toward higher dimensional hysteresis chaos generators,” IEEE Transactions on Circuits and Systems, vol. 37, pp. 399–409, 1990.
[81] L.O.Chua, C.A.Desoer and E.S.Kuh, Linear and nonlinear circuits, McGraw-Hill, 1987.
[82] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Hyperchaos from cellular neural networks,”
IEE Electronics Letters, vol. 31, no. 4, pp. 250–251, 1995.
[83] L.O.Chua, M.Komuro and T.Matsumoto, “The double scroll family,” IEEE Transactions on
Circuits and Systems, vol. 33, no. 11, pp. 1072–1118, 1986.
[84] A.I.Mees and P.B.Chapman, “Homoclinic and heteroclinic orbits in the double scroll attractor,” IEEE Transactions on Circuits and Systems, vol. 34, no. 9, pp. 1115–1120, 1987.
[85] J.A.K.Suykens and J.Vandewalle, “Generation of n-double scrolls (n = 1, 2, 3, 4, . . . ),” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 40,
pp. 861–867, 1993.
[86] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Generation of n-double scrolls via cellular
neural networks,” International Journal on Circuit Theory and Applications, vol. 24, no. 3,
pp. 241–252, 1996.
[87] T.Katagiri, T.Saito and M.Komuro, “Lost solution and chaos,” in IEEE International Symposium on Circuits and Systems, 1993, pp. 2616–2619.
[88] K.Hat’Ta and T.Saito, “Chaos and bifurcation from a serial resonant circuit including a
saturated inductor,” in IEEE International Symposium on Circuits and Systems, 1993, pp.
2612–2615.
[89] M.Itoh and R.Tomiyasu, “Canards and irregular oscillations in a nonlinear circuit,” in IEEE
International Symposium on Circuits and Systems, 1991, pp. 850–853.
[90] M.Itoh and L.O.Chua, “Canards and chaos in nonlinear systems,” in IEEE International
Symposium on Circuits and Systems, 1992, pp. 2789–2792.
[91] Y.Nishio and A.Ushida, “Multimode chaos in two coupled chaotic oscillators with hard nonlinearities,” in IEEE International Symposium on Circuits and Systems, 1994, vol. 6, pp.
109–112.
[92] Adel S.Sedra and Kenneth C.Smith, Microelectronic Circuits, Electrical Engineering Series.
Oxford University Press, 3rd edition, 1991.
[93] Randall L.Geiger, Phillip E.Allen and Noel R.Strader, VLSI: design techniques for analog and
digital circuits, Electronic Engineering Series. McGraw-Hill, 1990.
[94] Kenneth R.Laker and Willy M.C.Sansen, Design of Analog Integrated Circuits and Systems,
Electrical and Computer Engineering. McGraw-Hill, 1994.
[95] L.M.Pecora and T.L.Carroll, “Synchronization in chaotic systems,” Physical Review Letters,
vol. 64, pp. 821–824, 1990.
[96] Martin Hasler, Synchronization principles and applications, pp. 314–327, In Circuits and
Systems Series [99], 1996.
[97] T.L.Carroll, “Communicating with use of filtered, synchronized, chaotic signal,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42, pp.
105–110, 1995.
[98] M.P.Kennedy and M.J.Ogorzałek, Eds., “Special issue on chaos synchronization and control:
theory and applications,” IEEE Transactions on Circuits and Systems–Part I: Fundamental
Theory and Applications, vol. 44, no. 10, 1997.
[99] Chris Toumazou, Nick Battersby and Sonia Porta, Circuits and Systems Tutorials ’94, Circuits
and Systems Series. IEEE Press, 1996.
[100] K.M.Cuomo, A.V.Oppenheim and S.H.Strogatz, “Synchronization of Lorenz-based chaotic
circuits with applications to communications,” IEEE Transactions on Circuits and Systems–
Part II: Analog and Digital Signal Processing, vol. 40, no. 10, pp. 626–633, 1993.
[101] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Experimental signal transmission by using
synchronized state controlled cellular neural networks,” IEE Electronics Letters, vol. 32, no.
4, pp. 362–363, 1996.
[102] R.Caponetto, L.Fortuna, M.Lavorgna, G.Manganaro and L.Occhipinti, “Experimental study
on chaotic synchronization with non-ideal transmission channel,” in IEEE Global Telecommunications Conference (GLOBECOM’96), 1996, vol. 3, pp. 2083–2087.
[103] R.Caponetto, L.Fortuna, G.Manganaro and M.G.Xibilia, “Chaotic system identification via
genetic algorithm,” in First IEE/IEEE Int.Conference on Genetic Algorithms in Engineering
Systems: Innovations and Applications (GALESIA’95), 1995, pp. 170–174.
[104] R.Caponetto, L.Fortuna, G.Manganaro and M.G.Xibilia, “Synchronization-based nonlinear
chaotic circuit identification,” in SPIE’s International Symposium on Information, Communications and Computer Technology, Applications and Systems, ”Chaotic Circuits for Communication”, 1995, number 2612, pp. 48–56.
[105] R.Caponetto, L.Fortuna, G.Manganaro and M.G.Xibilia,
Chaotic systems identification,
vol. 55 of IEE Control Engineering, chapter 6, pp. 118–133, Peter Peregrinus publishing,
London, UK, 1997.
[106] D.Goldberg, Genetic Algorithms in Search, optimization and machine learning, AddisonWesley, 1989.
[107] Mitsuo Gen and Runwei Cheng, Genetic algorithms and engineering design, Engineering
Design and Automation. John Wiley & Sons, 1997.
[108] Fan Zou and Josef A.Nossek, “Stability of Cellular Neural Networks with opposite-sign templates,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 38, no. 6, pp. 675–677, 1991.
[109] V.I.Krinsky, Ed., Self-organization: autowaves and structures far from equilibrium, SpringerVerlag, 1984.
[110] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Complexity in a two-layer CNN,” In
T.Roska [18], pp. 127–132.
[111] P.Arena, S.Baglio, L.Fortuna and G.Manganaro, “Self-organization in a two-layer CNN,”
IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications,
to appear.
[112] J.D.Murray, Ed., Mathematical biology, Springer-Verlag, 1989.
[113] G.Nicolis and I.Prigogine, Eds., Exploring complexity: an introduction, W.H.Freeman Publishing, 1989.
[114] Ladislav Pivka, “Autowaves and spatio-temporal chaos in CNN’s - part I: A tutorial,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42,
no. 10, pp. 638–649, 1995.
[115] Ladislav Pivka, “Autowaves and spatio-temporal chaos in CNN’s - part II: A tutorial,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42,
no. 10, pp. 650–664, 1995.
[116] Liviu Goraş, Leon O.Chua and Domine W.Leenaerts, “Turing patterns in CNN’s – part I:
once over lightly,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory
and Applications, vol. 42, no. 10, pp. 602–611, 1995.
[117] Liviu Goraş and Leon O.Chua, “Turing patterns in CNN’s – part II: equations and behaviors,”
IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications,
vol. 42, no. 10, pp. 612–626, 1995.
[118] Liviu Goraş, Leon O.Chua and Ladislav Pivka, “Turing patterns in CNN’s - part III: computer
simulation results,” IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory
and Applications, vol. 42, no. 10, pp. 627–637, 1995.
[119] P.Arena, L.Fortuna and G.Manganaro, “A CNN cell for pattern formation and active wave
propagation,” in European Conference on Circuit Theory and Design, 1997, vol. 1, pp. 371–376.
[120] P.Arena, R.Caponetto, L.Fortuna and G.Manganaro, “Cellular Neural Networks to explore
complexity,” Soft Computing, vol. 1, no. 3, pp. 120–136, 1997.
[121] S.Kondo and R.Asai, “A reaction-diffusion wave on the skin of the marine angelfish Pomacanthus,” Nature, vol. 376, pp. 765–768, 1995.
[122] Sunjung Park, Joonho Lim and Soo-Ik Chae, “Discrete-time cellular neural networks using
distributed arithmetic,” IEE Electronics Letters, vol. 31, no. 21, pp. 1851–1852, 1995.
[123] K.Halonen, V.Porra, T.Roska and L.O.Chua, Programmable analogue VLSI CNN chip with
local digital logic, pp. 135–144, In T.Roska and J.Vandewalle [30], 1993.
[124] G.C.Cardarilli and F.Sargeni, “Very efficient VLSI implementation of CNN with discrete
templates,” IEE Electronics Letters, vol. 29, no. 14, pp. 1286–1287, 1993.
[125] Fausto Sargeni and Vincenzo Bonaiuto, “A fully digitally programmable CNN chip,” IEEE
Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications, vol. 42,
no. 11, pp. 741–745, 1995.
[126] Mario Salerno, Fausto Sargeni and Vincenzo Bonaiuto, “6 × 6 DPCNN: a programmable mixed
analogue-digital chip for cellular neural networks,” In T.Roska [18], pp. 451–456.
[127] Angel Rodríguez-Vázquez, Servando Espejo, Rafael Domínguez-Castro, Jose L.Huertas and
Edgar Sánchez-Sinencio, “Current-mode techniques for the implementation of continuousand discrete-time cellular neural networks,” In IEEE Transactions on Circuits and Systems–
Part II: Analog and Digital Signal Processing [13], pp. 132–146.
[128] Joseph E.Varrientos, Edgar Sánchez-Sinencio and Jaime Ramírez-Angulo, “A current-mode
cellular neural network implementation,” In IEEE Transactions on Circuits and Systems–Part
II: Analog and Digital Signal Processing [13], pp. 147–155.
[129] Ari Paasio, Adam Dawidziuk, Kari Halonen and Veikko Porra, “Fast and compact 16 by 16
cellular neural network implementation,” Analog Integrated Circuits and Signal Processing,
vol. 12, pp. 59–70, 1997.
[130] G.F.Dalla Betta, S.Graffi, Zs.M.Kovács and G.Masetti, “CMOS implementation of an analogically programmable cellular neural network,” In IEEE Transactions on Circuits and Systems–
Part II: Analog and Digital Signal Processing [13], pp. 206–215.
[131] José M.Cruz and Leon O.Chua, Design of High-speed, High-density CNN’s in CMOS technology, pp. 117–134, In T.Roska and J.Vandewalle [30], 1993.
[132] Josef A.Nossek and Gerhard Seiler, Cellular Neural Networks: theory and circuit design, pp.
95–115, In T.Roska and J.Vandewalle [30], 1993.
[133] Sa H.Bang and Bing J.Sheu, “A neural network for detection of signals in communication,”
IEEE Transactions on Circuits and Systems–Part I: Fundamental Theory and Applications,
vol. 43, no. 8, pp. 644–655, 1996.
[134] Peter Kinget and Michiel Steyaert, “Analogue CMOS VLSI implementation of cellular neural
networks with continuously programmable templates,” in IEEE International Symposium on
Circuits and Systems, 1994, vol. 6, pp. 367–370.
[135] Mancia Anguita, Francesco J.Pelayo, Alberto Prieto and Julio Ortega, “Analog CMOS implementation of a discrete time CNN with programmable cloning templates,” In IEEE Transactions on Circuits and Systems–Part II: Analog and Digital Signal Processing [13], pp. 215–218.
[136] G.C.Cardarilli, R.Lojacono, M.Salerno and F.Sargeni, “VLSI implementation of a cellular neural network with programmable control operator,” in IEEE Midwest Symposium on Circuits
and Systems, 1993, pp. 1089–1092.
[137] Gunhee Han, Jose Pineda de Gyvez and Edgar Sánchez-Sinencio, “Optimal manufacturable
CNN array size for time multiplexing schemes,” In T.Roska [18], pp. 387–392.
[138] Gabriele Manganaro and Jose Pineda de Gyvez, “Design and implementation of an algorithmic
S 2 I switched current multiplier,” in IEEE International Symposium on Circuits and Systems,
1998, to appear.
[139] Gabriele Manganaro and Jose Pineda de Gyvez, “A four quadrant S 2 I switched-current
multiplier,” IEEE Transactions on Circuits and Systems–Part II: Analog and Digital Signal
Processing, to appear.
[140] C.Toumazou, J.B.Hughes and N.C.Battersby, Switched-currents an analogue technique for
digital technology, vol. 5 of Circuits and Systems series, Peter Peregrinus, 1993.
[141] Rolf Unbehauen and Andrzej Cichocki, MOS switched-capacitor and continuous-time integrated circuits and systems, vol. 13 of Communications and Control Engineering, SpringerVerlag, 1989.
[142] D.M.W. Leenaerts, G.H.M. Joordens and J.A. Hegt, “A 3.3 V 625 kHz switched-current multiplier,” IEEE Journal of Solid-State Circuits, vol. 31, no. 9, pp. 1340–1343,
1996.
[143] Gunhee Han and Edgar Sánchez-Sinencio, “CMOS continuous multipliers: A tutorial,” submitted for publication.
[144] John B. Hughes and Kenneth W. Moulding, “S 2 I: a switched-current technique for high
performance,” IEE Electronics Letters, vol. 29, no. 16, pp. 1400–1401, 1993.
[145] Geir E.Sæter, Chris Toumazou, Gaynor Taylor, Kevin Eckersall and Ian M.Bell, “Concurrent
self test of switched current circuits based on the S 2 I technique,” in IEEE International
Symposium on Circuits and Systems, 1995, vol. 2, pp. 841–844.
[146] John B. Hughes and Kenneth W. Moulding, “Enhanced S 2 I switched-current cells,” in IEEE
International Symposium on Circuits and Systems, 1996.
[147] M. Bracey, W. Redman-White, J.Richardson and J.B. Hughes, “A full Nyquist 15 MS/s 8-b differential switched-current A/D converter,” IEEE Journal of Solid-State
Circuits, vol. 31, no. 7, pp. 945–951, 1996.
[148] Terri S.Fiez, Guojin Liang and David J.Allstot, “Switched-current circuit design issues,” IEEE
Journal of Solid-State Circuits, vol. 26, no. 3, pp. 192–201, 1991.
[149] Terri S.Fiez and David J.Allstot, “CMOS switched-current ladder filters,” IEEE Journal of Solid-State Circuits, vol. 25, no. 6, pp. 1360–1367, 1990.
[150] Minkyu Song, Yongman Lee and Wonchan Kim, “A clock feedthrough reduction circuit for
switched-current systems,” IEEE Journal of Solid-State Circuits, vol. 28, no. 2,
pp. 133–137, 1993.
[151] B. Jonsson and S. Eriksson, “New clock feedthrough compensation scheme for switched-current circuits,” IEE Electronics Letters, vol. 29, no. 16, pp. 1446–1447, 1993.
[152] Markus Helfenstein and George S. Moschytz, “Clock feedthrough compensation technique for switched-current circuits,” IEEE Transactions on Circuits and Systems–Part II: Analog and Digital Signal Processing, vol. 42, no. 3, pp. 229–231, 1995.
[153] H.-K. Yang and E. I. El-Masry, “Clock feedthrough analysis and cancellation in current sample/hold circuits,” IEE Proceedings–Circuits, Devices and Systems, vol. 141, no. 6, pp. 510–516, 1994.
[154] Rudy J. van de Plassche, Willy M. C. Sansen and Johan H. Huijsing, Eds., Analog Circuit Design, Kluwer Academic Publishers, 1995.
[155] Johan H. Huijsing, Rudy J. van de Plassche and Willy M. C. Sansen, Eds., Analog Circuit Design, Kluwer Academic Publishers, 1996.
[156] Klaas Bult and Hans Wallinga, “A class of analog CMOS circuits based on the square-law characteristic of an MOS transistor in saturation,” IEEE Journal of Solid-State Circuits, vol. 22, no. 3, pp. 357–365, 1987.
[157] Daniel H. Sheingold, Nonlinear circuits handbook, Analog Devices, Inc., 1976.
[158] Fikret Dülger, “A programmable fuzzy controller emulator chip,” M.S. thesis, Istanbul Technical University, 1996.
[159] N. Weste and K. Eshraghian, Principles of CMOS VLSI design, Addison-Wesley, 2nd edition, 1993.
[160] Sergio Franco, Design with operational amplifiers and analog integrated circuits, McGraw-Hill,
1988.
[161] Oscar Moreira-Tamayo, Analog Systems for Spectral Analysis and Signal Processing, Ph.D.
thesis, Texas A&M University, 1996.
[162] O. Moreira-Tamayo and J. Pineda de Gyvez, “Filtering and spectral processing of 1-D signals using cellular neural networks,” in IEEE International Symposium on Circuits and Systems, 1996, vol. 3, pp. 76–79.
[163] Bruno Andò, Salvatore Baglio, Salvatore Graziani and Nicola Pitrone, “A novel analog modular sensor fusion architecture for developing smart structures,” in Proc. of IEEE IMTC '97, 1997.
[164] C. Sidney Burrus, Ramesh A. Gopinath and Haitao Guo, Wavelets and Wavelet transforms: A primer, Prentice Hall, 1998.
[165] Gabriele Manganaro and Jose Pineda de Gyvez, “1-D discrete time CNN with multiplexed
template hardware,” in IEEE International Symposium on Circuits and Systems, 1998, to
appear.
[166] C. Toumazou, J. B. Hughes and D. M. Pattullo, “Regulated cascode switched-current memory cell,” IEE Electronics Letters, vol. 26, no. 5, pp. 303–305, 1990.
[167] J. Alvin Connelly and Pyung Choi, Macromodeling with SPICE, Prentice Hall, 1992.
[168] Uwe Helmke and John B. Moore, Optimization and dynamical systems, Communications and Control Engineering Series, Springer-Verlag, 1994.
[169] A. Cichocki and R. Unbehauen, Neural Networks for optimization and signal processing, John Wiley & Sons, 1993.
[170] A. A. Andronov, A. A. Vitt and S. E. Khaikin, Theory of oscillators, Pergamon Press, 1966.