Linköping studies in science and technology. Thesis.
No. 1512
Topics in Robustness Analysis
Sina Khoshfetrat Pakazad
Division of Automatic Control
Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden
http://www.control.isy.liu.se
[email protected]
Linköping 2011
This is a Swedish Licentiate’s Thesis.
Swedish postgraduate education leads to a Doctor’s degree and/or a Licentiate’s degree.
A Doctor’s Degree comprises 240 ECTS credits (4 years of full-time studies).
A Licentiate’s degree comprises 120 ECTS credits,
of which at least 60 ECTS credits constitute a Licentiate’s thesis.
Linköping studies in science and technology. Thesis.
No. 1512
Topics in Robustness Analysis
Sina Khoshfetrat Pakazad
[email protected]
www.control.isy.liu.se
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping
Sweden
ISBN 978-91-7393-014-7
ISSN 0280-7971
LiU-TEK-LIC-2011:51
Copyright © 2011 Sina Khoshfetrat Pakazad
Printed by LiU-Tryck, Linköping, Sweden 2011
To My Parents and Brothers
Abstract
In this thesis, we investigate two problems in robustness analysis of uncertain
systems with structured uncertainty. The first problem concerns the robust finite
frequency range H2 analysis of such systems. Classical robust H2 analysis methods are based on upper bounds for the robust H2 norm of a system which are
computed over the whole frequency range. These bounds can be overly conservative, and therefore, classical robust H2 analysis methods can produce misleading
results for finite frequency range analysis. In the first paper in the thesis, we
address this issue by providing two methods for computing upper bounds for
the robust finite-frequency H2 norm of the system. These methods utilize finite-frequency Gramians and frequency partitioning to calculate upper bounds for
the robust finite-frequency H2 norm of uncertain systems with structured uncertainty. We show the effectiveness of these algorithms using both theoretical and
practical experiments.
The second problem considered in this thesis is on distributed robust stability
analysis of interconnected uncertain systems with structured uncertainty. Distributed analysis methods are useful when a centralized solution for the problem
is not possible, which can be due to computational or structural constraints in
the problem. Under this topic, we study robust stability analysis of large scale
weakly interconnected systems using the so-called µ-analysis method, which involves solving convex feasibility problems. By exploiting the structure imposed
by the interconnection of subsystems, these feasibility problems can be decomposed into smaller and simpler problems that are coupled. We propose tailored
projection-based methods for solving the resulting convex feasibility problems,
and we discuss how these algorithms can be implemented in a distributed manner. Finally, our numerical results show that these methods outperform the conventional projection-based algorithms for such problems.
Popular Science Summary (Populärvetenskaplig sammanfattning)
Many model-based methods for designing controllers do not take inaccuracies in the given models into account in the design process. It is therefore important to verify stability and performance with respect to the potential deficiencies that may be present in the model, for example uncertainty in model parameters. In this thesis we study how model uncertainties at the time of controller design can affect the stability and performance of the closed-loop system. This is called robustness analysis. In this thesis we focus on robustness analysis over a finite frequency range, as well as distributed robust stability analysis of interconnected uncertain systems.
The importance of robustness analysis methods for finite frequency ranges becomes apparent when classical methods for infinite frequency ranges give overly conservative results. In this thesis we present robustness analysis methods for finite frequency ranges for models with uncertainties, and show that these improve the results in comparison with the classical methods.
Robust stability analysis of interconnected systems arises in many applications, for example smart power grids, partial differential equations, etc. For these problems it can, due to their size or structural constraints, be difficult or unsuitable to perform robust stability analysis in a centralized fashion. To get around this, to some extent, one can use a distributed approach. In this thesis we show how the structure of the couplings between the systems can be exploited to decompose the problem, and we give a distributed algorithm for solving it.
Acknowledgments
First and foremost I would like to thank my supervisor Prof. Anders Hansson,
for his persistent guidance, contribution, inspiration and remarkable patience
throughout the process leading to this thesis. I am truly grateful. I am also
very thankful for the additional support and guidance that I received from my
co-supervisors, Prof. Torkel Glad and Dr. Anders Helmersson.
Thank you Prof. Lennart Ljung for granting me the possibility to be a member
of the impressive Automatic Control group in Linköping, which I am sure will
continue to be under the passionate guidance of Prof. Svante Gunnarsson as the
newly appointed head of the division. My gratitude extends to Ninna Stensgård
and her predecessors Åsa Karmelind and Ulla Salaneck for their constant support
and help through these years.
Special thanks go to Dr. Daniel Ankelhed, Lic. Daniel Petersson and Dr. Martin
S. Andersen for their very constructive comments and much needed help to finish
this thesis.
I thank all the extraordinary people in the Automatic Control department for
providing such a warm and friendly environment. Thank you Lic.1 André Carvalho Bittencourt, Dr. Daniel Ankelhed, Dr. Emre Özkan, Lic. Fredrik Lindsten,
Dr. Henrik Ohlsson, Lic. Zoran Sjanic, Lic.1 Patrik Axelsson and Ylva Jung for
being such great friends, remarkable travel companions and endless sources of
fun and energy. Also I extend my gratitude to Amin Shahrestani Azar, Behbood
Borghei, Farham Farhangi, Farzad Irani, Roozbeh Kianfar and Shirin Katoozi for
their support through all the good and not so good times. I really cherish your
friendship.
I would like to take this opportunity to also thank my mom and dad for always
being there for me and supporting me all the way to this point. You have always
been a source of inspiration to me and always will be. Also I am indebted to my
brothers, Soheil and Saeed, for always encouraging and helping me through all
my endeavors in my life.
Finally, for financial support, I would like to thank the European Commission
under contract number AST5-CT-2006-030768-COFCLUO and the Swedish government under the ELLIIT project, the strategic area for ICT research.
Linköping, November 2011
Sina Khoshfetrat Pakazad
1 Soon to be.
Contents

Notation . . . xv

I Background

1 Introduction . . . 3
  1.1 Motivation . . . 3
  1.2 Publications and Contributions . . . 4
  1.3 Thesis Outline . . . 5

2 Optimization . . . 7
  2.1 General Description . . . 7
  2.2 Convex Optimization . . . 8
    2.2.1 Convex sets . . . 8
    2.2.2 Convex functions . . . 8
    2.2.3 Definition of a convex optimization problem . . . 9
    2.2.4 Generalized inequalities . . . 10
    2.2.5 Semidefinite programming . . . 10
  2.3 Primal and Dual Problems . . . 11
  2.4 Decomposition Methods . . . 12
    2.4.1 Primal decomposition . . . 12
    2.4.2 Dual decomposition . . . 13
  2.5 Matrix Sparsity . . . 14
    2.5.1 Possibilities in sparsity in semidefinite programming . . . 15

3 Uncertain Systems and Robustness Analysis . . . 17
  3.1 Linear Systems . . . 17
    3.1.1 Continuous time systems . . . 17
    3.1.2 H∞ and H2 norms . . . 18
  3.2 Uncertain Systems . . . 19
    3.2.1 Structured uncertainties and LFT representation . . . 19
    3.2.2 Robust H∞ and H2 norms . . . 20
    3.2.3 Nominal and robust stability and performance . . . 20
  3.3 µ-Analysis . . . 21
    3.3.1 Structured singular values . . . 21
    3.3.2 Structured robust stability and performance analysis . . . 22

Bibliography . . . 25

II Publications

A Robust Finite-Frequency H2 Analysis of Uncertain Systems . . . 31
  1 Introduction . . . 33
    1.1 Notation . . . 34
  2 Problem formulation . . . 35
    2.1 H2 norm of a system . . . 35
    2.2 Robust H2 norm of a system . . . 35
  3 Mathematical preliminaries . . . 37
    3.1 Finite-frequency observability Gramian . . . 37
    3.2 An upper bound on the robust H2 norm . . . 38
  4 Gramian-based upper bound on the robust finite-frequency H2 norm . . . 43
  5 Frequency gridding based upper bound on the robust finite-frequency H2 norm . . . 43
  6 Numerical examples . . . 47
    6.1 Theoretical Example . . . 47
    6.2 Comfort Analysis Application . . . 51
  7 Discussion and General remarks . . . 54
    7.1 The observability Gramian based method . . . 54
    7.2 The frequency gridding based method . . . 54
  8 Conclusion . . . 55
  A Appendix . . . 55
    A.1 Proof of Lemma 2 . . . 55
    A.2 Proof of Theorem 1 . . . 56
    A.3 Proof of Theorem 3 . . . 56
    A.4 Proof of Theorem 4 . . . 57
  Bibliography . . . 58

B Decomposition and Projection Methods for Distributed Robustness Analysis of Interconnected Uncertain Systems
  1 Introduction . . . 63
  2 Decomposition and projection methods . . . 66
    2.1 Decomposition and convex feasibility in product space . . . 66
    2.2 Von Neumann’s alternating projection in product space . . . 67
    2.3 Convergence . . . 67
  3 Convex minimization reformulation . . . 68
    3.1 Solution via Alternating Direction Method of Multipliers . . . 69
  4 Distributed implementation . . . 71
    4.1 Feasibility Detection . . . 71
    4.2 Infeasibility Detection . . . 73
  5 Robust Stability Analysis . . . 74
  6 Numerical Results . . . 77
  7 Conclusion . . . 79
  Bibliography . . . 83
Notation

Used Notations

  Notation    Meaning
  N           Set of natural numbers
  Rn          Set of n-dimensional real vectors
  R+n         Set of n-dimensional positive real vectors
  Cn          Set of n-dimensional complex vectors
  Sn          Set of n × n symmetric matrices
  S+n         Set of n × n symmetric positive semidefinite matrices
  dom f       Domain of f
  int S       Interior of S
  A′          Transpose of matrix A
  A∗          Conjugate transpose of matrix A
Part I
Background
1
Introduction
1.1
Motivation
Many control design methods are model driven, e.g., [Glad and Ljung, 2000, Bequette,
2003, Åström and Wittenmark, 1990, Åström and Hägglund, 1995], and as a result, stability and performance provided by controllers designed using such methods are affected by the quality of the models. One of the major issues with models
is their uncertainty, and some of the model-based design methods take this uncertainty into consideration, e.g., see [Zhou and Doyle, 1998, Skogestad and Postlethwaite,
2007, Zhou et al., 1997, Doyle et al., 1989]. However, many of the model-based
design methods, especially the ones employed in industry, neglect the model uncertainty. Hence, it is important to address how model uncertainty affects
the performance or stability of the closed loop system. To be more precise, it is essential to check whether there is any chance for the closed loop system, under the
designed controller, to lose stability or desired performance under any possible
uncertainty. This type of analysis is called robustness analysis and is extremely
important in applications where the margin for errors is very small, e.g., flight
control design.
Analysis and control of high dimensional and large scale interconnected systems
have been of interest in the field of automatic control, e.g., see [Papachristodoulou and Peet,
2006, Mbarushimana and Xin, 2011, Zhou et al., 2010]. Robustness analysis of
such systems poses different challenges. These can be purely computational or due
to structural constraints in the problem. Computational challenges can, for instance, appear in the analysis of systems with large state dimension, as in the analysis of spatially discretized partial differential equations, e.g., see [Krstic, 2009,
Papachristodoulou and Peet, 2006]. Also, as examples of complicating structural
constraints, one can point to physical separation or privacy requirements over a
network, e.g., see [Budka et al., 2010, Zhou et al., 2010]. These challenges can be
addressed, to some extent, by devising distributed robustness analysis algorithms
which provide the possibility of solving the analysis problem over a network of
agents. By this, we distribute the computational burden over the network and
can satisfy underlying structural requirements in the problem, such as privacy.
Motivated by this, robustness analysis is the main subject of this thesis, and specifically the following topics are addressed in the thesis.
• Robust finite-frequency H2 analysis of uncertain systems.
In contrast to conventional robustness analysis methods where the whole
frequency range is considered, we here investigate the problem of robustness analysis over a finite frequency range. For instance, this can be relevant
either due to the fact that the system operates within certain frequency intervals, or due to the fact that the provided models are only valid up to a
certain frequency. In either case, performing the analysis over the whole
frequency range may result in misleading conclusions. For example, this
was the case for the model provided by AIRBUS in [Garulli et al., 2011]. When
analyzing this model, unexplainable peaks were observed in the frequency
response of the model, above the frequency of 15 rad/s, which were not
present in the actual physical system. As a result, the model was deemed to
be valid only for frequencies up to 15 rad/s.
• Distributed robust stability analysis of weakly interconnected uncertain
systems.
Within this topic we investigate robust stability analysis of a collection of
weakly interconnected uncertain systems, and propose computational algorithms for solving this problem in a distributed manner. This approach is
relevant in situations where each of the subsystems is reluctant to share
sensitive information regarding its model or controller, but is willing to share certain information for the collective good. Also, considering
the recent development of multi core and multi processor platforms, these
methods can also be employed for distributing the computational burden
of robustness analysis of systems with large state dimension, [Krstic, 2009,
Papachristodoulou and Peet, 2006].
1.2
Publications and Contributions
This thesis is based on the following papers
Paper A
A. Garulli, A. Hansson, S. Khoshfetrat Pakazad, A. Masi, and R. Wallin.
Robust finite-frequency H2 analysis of uncertain systems. Technical Report LiTH-ISY-R-3011, Department of Electrical Engineering,
Linköping University, SE-581 83 Linköping, Sweden, May 2011.
Paper B
S. Khoshfetrat Pakazad, A. Hansson, M. S. Andersen, and A. Rantzer.
Decomposition and projection methods for distributed robustness analysis of interconnected uncertain systems. Technical Report LiTH-ISYR-3033, Department of Electrical Engineering, Linköping University,
SE-581 83 Linköping, Sweden, Nov. 2011.
In paper A we investigate the first topic in Section 1.1 and propose two algorithms for finite-frequency robustness analysis. This paper was written by the
author of the thesis; however, the algorithm labeled Gramian based in the paper
has been entirely developed by the other authors of the paper, [Masi et al., 2010].
Parts of this paper have also been presented in [Pakazad et al., 2011].
In paper B we propose tailored projection based methods for solving convex feasibility problems where none of the constraints in the problem depend on the
whole optimization vector. The proposal of these methods was motivated by the
problem of robustness analysis of weakly interconnected uncertain systems.
The following publication by the author of this thesis has not been included in
the thesis.
R. Wallin, S. Khoshfetrat Pakazad, A. Hansson, A. Garulli, and A. Masi.
Optimization Based Clearance of Flight Control Laws, Ch. Applications of IQC based analysis techniques for clearance. Springer, 2011.
1.3
Thesis Outline
The thesis consists of two main parts. Part I covers the theoretical background,
with Chapter 2 reviewing some basic definitions and concepts in optimization
and Chapter 3 providing some preliminaries on uncertain systems and robustness analysis of uncertain systems with structured uncertainty. Note that the
information in Part I is far from the full treatment of any of the mentioned topics.
For a more detailed introduction to convex optimization and distributed computing refer to [Boyd and Vandenberghe, 2004, Bertsekas and Tsitsiklis, 1997]. Also
for a more complete treatment of uncertain systems and robust control refer to
[Skogestad and Postlethwaite, 2007, Zhou et al., 1997]. Finally, Part II consists of
papers A and B that discuss the topics presented in Section 1.1.
2
Optimization
Optimization is one of the most important tools in different fields of engineering.
In this chapter some of the basic concepts in optimization are reviewed. The outline for this chapter is as follows. First, Section 2.1 describes how to define an
optimization problem. Then, Section 2.2 introduces convex optimization problems. Sections 2.3 and 2.4 review the definitions of primal and dual optimization
problems and describe some of the basic decomposition methods for these problems, respectively. Section 2.5 concludes the chapter by discussing the definition
of sparsity in optimization problems and investigating some of the opportunities
made feasible by exploiting this structure in the problem.
2.1
General Description
There are different ways of defining an optimization problem. In this thesis and
in the appended papers, we consider the following definition of an optimization
problem, taken from [Boyd and Vandenberghe, 2004]
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m,        (2.1)
            hi(x) = 0,  i = 1, . . . , p,

where f0(x) is the cost function, fi(x), for i = 1, . . . , m, and hi(x), for i = 1, . . . , p,
are the inequality and equality constraint functions, respectively. The goal is to
minimize the cost function while satisfying the constraints in the problem.
Next a class of optimization problems referred to as convex optimization problems is described, and the requirements on the constraints and cost functions for
this class of problems are discussed.
2.2
Convex Optimization
In order to describe the characteristics of the cost function and constraint functions in a convex optimization problem, we need to define the concept of convexity for sets and functions.
2.2.1
Convex sets
This section reviews some definitions on sets, which are essential in the upcoming
sections. We start by defining an affine set.
Definition 2.1 (Affine set, Boyd and Vandenberghe [2004]). A set C ⊆ Rn is
affine if for any x1 , x2 ∈ C,
x = θx1 + (1 − θ)x2 ∈ C,
(2.2)
for all θ ∈ R.
An affine set is a special case of convex sets which are defined as follows.
Definition 2.2 (Convex set, Boyd and Vandenberghe [2004]). A set C ⊆ Rn is
convex if for any x1 , x2 ∈ C,
x = θx1 + (1 − θ)x2 ∈ C,
(2.3)
for all 0 ≤ θ ≤ 1.
Another important subclass of convex sets, are convex cones which are defined
as below.
Definition 2.3 (Convex cone, Boyd and Vandenberghe [2004]). A set C ⊆ Rn is
a convex cone if for any x1 , x2 ∈ C,
x = θ1 x1 + θ2 x2 ∈ C,
(2.4)
for all θ1 , θ2 ≥ 0.
Also, a convex cone is called proper if
• It is closed, i.e., it contains its boundary.
• It has a nonempty interior.
• It does not contain any line.
For instance, the sets S+n and R+n which represent the symmetric positive semidefinite n × n matrices and positive real n dimensional vectors, respectively, are both
proper cones. The next section reviews the definition of convex functions and
provides some important examples of this type of functions.
2.2.2
Convex functions
The concept of convex functions is fundamental in the definition of convex optimization problems. Definition 2.4, defines this class of functions.
Definition 2.4 (Convex functions, Boyd and Vandenberghe [2004]). A function f : Rn 7→ R is convex, if dom f is convex and for all x, y ∈ dom f and
0 ≤ θ ≤ 1,
f (θx + (1 − θ)y) ≤ θf (x) + (1 − θ)f (y).
(2.5)
Also, a function is strictly convex if the inequality in (2.5) holds strictly for all x ≠ y and 0 < θ < 1.
Some of the most widely used convex functions in general and in this thesis are
listed below
• Affine functions.
• Norms.
• Distance to a convex set.
• Indicator function for convex sets,
where the indicator function for a set C is defined as

g(x) = 0 for x ∈ C,  g(x) = ∞ for x ∉ C.        (2.6)
Note that convexity of all the functions mentioned above can be established using
Definition 2.4.
2.2.3
Definition of a convex optimization problem
Having defined convex sets and functions, we can define a convex optimization
problem.
Definition 2.5 (Convex optimization problem, Boyd and Vandenberghe [2004]).
Consider the optimization problem defined in (2.1). If
• f 0 (x) is a convex function,
• f i (x), for i = 1, . . . , m are all convex functions,
• hi (x), for i = 1, . . . , p, are all affine functions,
then the optimization problem in (2.1) is a convex optimization problem.
Definition 2.5 is not the only definition for a convex optimization problem;
there are other definitions which only consider the convexity of the cost function
and the feasible set as the required conditions for convexity of the problem, e.g.,
see [Bertsekas, 2009].
If the cost function in the optimization problem in (2.1) is set to zero, or is chosen
to be independent of x, then the problem can be viewed as the following problem
find        x
subject to  fi(x) ≤ 0,  i = 1, . . . , m,        (2.7)
            hi(x) = 0,  i = 1, . . . , p.
This problem is referred to as a feasibility problem, and correspondingly, if the
equality and inequality constraint functions in (2.7) satisfy the conditions in Definition 2.5, this problem is referred to as a convex feasibility problem. This type of
problem is the topic of one of the papers appended to this thesis, [Khoshfetrat Pakazad et al., 2011].
Another subclass of convex optimization problems, which plays a pivotal role
in control, is the class of semidefinite programming (SDP) problems, [Boyd et al., 1994,
Boyd and Vandenberghe, 2004], which will be briefly reviewed in Section 2.2.5.
2.2.4
Generalized inequalities
In this section an extension to the notion of inequality is introduced, which is
based on the definition of proper cones, [Boyd and Vandenberghe, 2004]. Let K
be a proper cone. Then for x, y ∈ Rn,

x ⪰K y ⇔ x − y ∈ K,        (2.8)
x ≻K y ⇔ x − y ∈ int K.
Note that component-wise inequalities and matrix inequalities are special cases
of the inequalities in (2.8). This can be seen by choosing K to be R+n or S+n . Definition 2.4 can also be generalized using proper cones. A function f is said to be
convex with respect to a proper cone K, i.e., K-convex, if for all x, y ∈ dom f and
0 ≤ θ ≤ 1,
f(θx + (1 − θ)y) ⪯K θf(x) + (1 − θ)f(y).        (2.9)
Also, f is strictly K-convex if the inequality in (2.9) holds strictly.
2.2.5
Semidefinite programming
An SDP problem is a convex optimization problem and is defined as below
minimize    c′x
subject to  F0 + Σ_{i=1}^{n} xi Fi ⪰ 0,        (2.10)
            Ax = b,
where c ∈ Rn , x ∈ Rn , Fi ∈ Sm , for i = 0, . . . , n, A ∈ Rp×n and b ∈ Rp . They
appear in many problems in automatic control, [Boyd et al., 1994], and are used
extensively in the methods presented in this thesis for performing robust finite-frequency H2 analysis and distributed robustness analysis. The role of SDP problems in robustness analysis will be described in Chapter 3.
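For intuition, the LMI constraint in (2.10) can be checked pointwise: fixing x makes F0 + Σ xi Fi a constant symmetric matrix whose positive semidefiniteness is testable. The following toy one-variable sketch does this by gridding (the matrices are hypothetical examples of ours, and gridding is for illustration only; real SDPs are solved with interior-point methods):

```python
# Grid-search sketch of the one-variable SDP
#   minimize c*x  subject to  F0 + x*F1 >= 0 (positive semidefinite),
# using the fact that a symmetric 2x2 matrix [[a, b], [b, c]] is PSD
# iff a >= 0, c >= 0 and a*c - b*b >= 0.

def is_psd_2x2(m):
    (a, b), (_, c) = m
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def lmi(x, f0, f1):
    # Evaluate F0 + x*F1 elementwise.
    return [[f0[i][j] + x * f1[i][j] for j in range(2)] for i in range(2)]

F0 = [[1.0, 0.0], [0.0, 1.0]]
F1 = [[1.0, 0.0], [0.0, -1.0]]
c = 1.0

# F0 + x*F1 = diag(1 + x, 1 - x) is PSD exactly for -1 <= x <= 1,
# so the minimizer of c*x over the feasible set is x = -1.
grid = [i / 1000.0 - 2.0 for i in range(4001)]   # grid on [-2, 2]
feasible = [x for x in grid if is_psd_2x2(lmi(x, F0, F1))]
x_opt = min(feasible, key=lambda x: c * x)
print(round(x_opt, 3))   # -> -1.0
```

The diagonal choice of F0 and F1 makes the feasible interval visible by inspection, which is the point of the sketch; any symmetric matrices could be substituted.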
2.3
Primal and Dual Problems
Consider the problem in (2.1). The Lagrangian function for this problem is defined as

L(x, λ, υ) = f0(x) + Σ_{i=1}^{m} λi fi(x) + Σ_{i=1}^{p} υi hi(x),        (2.11)

where λ = [λ1 · · · λm]′ ∈ Rm and υ = [υ1 · · · υp]′ ∈ Rp are the so-called dual variables.
Let the collective domain of the optimization problem in (2.1) be denoted as D = (∩_{i=0}^{m} dom fi) ∩ (∩_{i=1}^{p} dom hi). Then the corresponding dual function for this problem is defined as

g(λ, υ) = inf_{x ∈ D} L(x, λ, υ).        (2.12)
Note that for λ ⪰ 0 and any feasible solution x̄ for the problem in (2.1),

f0(x̄) ≥ L(x̄, λ, υ) ≥ g(λ, υ).        (2.13)
As a result, if the optimal cost function value for the problem in (2.1) is denoted
as p∗, then p∗ ≥ g(λ, υ). In other words, g(λ, υ) constitutes a lower bound for the
optimal value of the original problem for all υ ∈ Rp and λ ⪰ 0. Then the best
lower bound for p∗ can be computed using the following convex optimization
problem
maximize_{λ, υ}   g(λ, υ)
subject to        λ ⪰ 0.        (2.14)
This problem is the corresponding dual problem for the original problem in (2.1).
The original problem in (2.1) is also called the primal problem.
Depending on the optimization problem, the lower bound calculated using (2.14)
can be arbitrarily tight or loose. However, under certain conditions, it can be guaranteed that this lower bound is equal to p∗. In this case, it is said that strong duality
holds.
Guaranteeing strong duality for general nonconvex problems is not straightforward. However, if the primal problem is convex and strictly feasible, i.e., there exists a feasible solution for the primal problem such that all inequality constraints
hold strictly, strong duality holds. This set of conditions is called Slater’s constraint qualification, [Boyd and Vandenberghe, 2004]. Note that if strong duality
holds, then the optimal solution for the primal problem can be obtained by solving the dual problem, [Boyd and Vandenberghe, 2004, Bertsekas, 2009].
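As a simple numerical illustration of weak and strong duality, consider minimizing x² subject to 1 − x ≤ 0, whose optimum is p∗ = 1 at x = 1. The example, including the closed-form dual function, is ours and not from the thesis:

```python
# Toy illustration of weak and strong duality for
#   minimize x^2  subject to  1 - x <= 0   (i.e. x >= 1),
# whose optimum is p* = 1 at x = 1.  The dual function
# g(lambda) = lambda - lambda^2 / 4 comes from minimizing the
# Lagrangian x^2 + lambda*(1 - x) over x (minimizer x = lambda/2).

def g(lam):
    return lam - lam * lam / 4.0

p_star = 1.0

# Weak duality, as in (2.13): every dual point gives a lower bound on p*.
for lam in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert g(lam) <= p_star + 1e-12

# Strong duality (Slater holds: x = 2 is strictly feasible), so the
# best lower bound equals p*; it is attained at lambda = 2.
grid = [i / 1000.0 for i in range(5001)]         # lambda in [0, 5]
d_star = max(g(lam) for lam in grid)
print(round(d_star, 6))   # -> 1.0
```

Here the dual problem (2.14) is one-dimensional, so maximizing over a grid suffices to see that the duality gap is zero.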
2.4
Decomposition Methods
There are many cases when solving an optimization problem in a distributed or
decentralized manner can improve the performance of the optimization procedure. For instance this is the case when the structure or scale of the problem is
such that the problem cannot be solved in its original form.
Decomposition methods provide the possibility to divide the original optimization problem into several subproblems. Note that the gain from decomposing the
original problem depends on the generated subproblems and how computationally demanding they are to solve in comparison to the original problem.
There are different decomposition methods described in the literature, e.g., primal, dual, proximal, etc., see [Bertsekas and Tsitsiklis, 1997, Conejo et al., 2006,
Boyd et al., 2011]. In this section, we will only investigate the dual and primal decomposition methods, which are similar to the approach taken in [Khoshfetrat Pakazad et al.,
2011].
Consider the following optimization problem
minimize_{xi, i=1,...,N, y}   Σ_{i=1}^{N} fi(xi)
subject to                    xi ∈ Ci,    i = 1, . . . , N,        (2.15)
                              xi = Ei y,  i = 1, . . . , N,
where the Ci are convex sets that can be described by inequalities and equalities
with K-convex and affine functions, respectively. Also, y ∈ Rm , xi ∈ Rni and
Ei ∈ Rni ×m , for i = 1, . . . , N . In this formulation, the variable y and the constraints
xi = Ei y are referred to as the consistency variables and constraints, respectively.
These are used for describing the couplings between different components in vectors xi , for i = 1, . . . , N . Consistency constraints provide the possibility to decompose the optimization problem into N subproblems. Next, we discuss how
the problem in (2.15) can be decomposed and solved, using its primal and dual
formulations.
2.4.1
Primal decomposition
Primal decomposition methods make use of the primal formulation of the problem. Consider the problem in (2.15). If we let the consistency variables be
fixed, this problem breaks down to solving the following N subproblems
minimize_{xi}   fi(xi)
subject to      xi ∈ Ci,        (2.16)
                xi = Ei y,

for i = 1, . . . , N. Let xi∗(y) and gi(y) denote the optimal solution and optimal
value for these problems as functions of the consistency variables. Then, the
consistency variables could be updated by solving the following optimization
problem

minimize_{y}   Σ_{i=1}^{N} gi(y).        (2.17)
Note that depending on the problems in (2.15) and (2.17), this minimization can
be performed using different methods, e.g., gradient or subgradient descent methods, [Bertsekas and Tsitsiklis, 1997, Bertsekas, 1999, Conejo et al., 2006, Nedic et al.,
2010]. In such algorithms, it is neither necessary to solve the problem in (2.17)
exactly, nor is it required to calculate the optimal values and solutions of the
problems in (2.16) explicitly as a function of consistency variables.
Having calculated the new updates for the consistency variables, the optimization problems in (2.16) are solved using the new updates. By repeating this procedure, convergence to the optimal solution may be achieved.
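The alternation between the subproblems (2.16) and the master problem (2.17) can be sketched on a toy instance; the instance, the scalar choice Ei = 1, Ci = R, and the step size are ours, purely for illustration:

```python
# Primal decomposition sketch on a two-subproblem toy instance of (2.15):
#   minimize (x1 - 1)^2 + (x2 + 2)^2  subject to  x1 = y, x2 = y,
# with Ci = R and Ei = 1.  Fixing y makes each subproblem trivial
# (the constraint pins xi = y), and the master problem minimizes
# g1(y) + g2(y) by gradient descent.

targets = [1.0, -2.0]                 # f_i(x) = (x - t_i)^2

def subproblem(y, t):
    """With y fixed, the constraint xi = Ei*y = y pins xi; return the
    optimal xi and the optimal value g_i(y)."""
    xi = y
    return xi, (xi - t) ** 2

def master_gradient(y):
    # d/dy sum_i (y - t_i)^2 = sum_i 2*(y - t_i)
    return sum(2.0 * (y - t) for t in targets)

y = 0.0
step = 0.1
for _ in range(200):                  # gradient descent on (2.17)
    y -= step * master_gradient(y)

print(round(y, 4))                    # -> -0.5  (the consensus value)
print(round(sum(subproblem(y, t)[1] for t in targets), 4))   # -> 4.5
```

Each subproblem only sees its own data (its target ti), while the master step only needs the subproblems' (sub)gradient information, which is what makes the scheme distributable.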
2.4.2
Dual decomposition
As was mentioned in Section 2.3, if strong duality holds it is possible to obtain
the optimal solution for the primal problem by solving the dual problem. Dual
decomposition methods approach the problem through its dual formulation and
decompose the minimization and maximization procedures in (2.12) and (2.14).
Consider the partial Lagrangian for the problem in (2.15), i.e., the Lagrangian for
the problem in (2.15) excluding the constraints xi ∈ Ci ,
L(x, y, λ) = Σ_{i=1}^{N} fi(xi) + Σ_{i=1}^{N} λi′(xi − Ei y),        (2.18)

where λi ∈ Rni and λ = [λ1′ · · · λN′]′.
In order to calculate the dual function, we should minimize the Lagrangian with
respect to the variables xi, for i = 1, . . . , N, and y. However, for the Lagrangian
to be bounded from below, it is required that Σ_{i=1}^{N} Ei′λi = 0. By this,
minimizing the Lagrangian can be written as follows
    d(λ) = minimize_{x_i ∈ C_i, i=1,…,N}   ∑_{i=1}^{N} { f_i(x_i) + λ_i′ x_i }
                                                                   (2.19)
         = ∑_{i=1}^{N}  minimize_{x_i ∈ C_i} { f_i(x_i) + λ_i′ x_i }.
As can be seen from (2.19), this problem can be solved by solving the following N subproblems,

    d_i(λ_i) := minimize_{x_i ∈ C_i}  f_i(x_i) + λ_i′ x_i,          (2.20)
for i = 1, …, N. Let x_i^*(λ_i) denote the optimal solutions of the problems in (2.20) for fixed values of λ_i. Following (2.14), the dual variables can be updated by solving the following optimization problem
    maximize_λ   ∑_{i=1}^{N} d_i(λ_i)
                                                                   (2.21)
    subject to   ∑_{i=1}^{N} E_i′ λ_i = 0.
Then, under certain conditions, e.g., strict convexity of the functions f_i, it is possible to generate the primal and dual optimal solutions by solving the problems in (2.20) and (2.21) iteratively until convergence is reached.
Note that, similar to the primal decomposition method, the updates for the dual variables can also be obtained using different methods, such as gradient or subgradient methods, [Bertsekas and Tsitsiklis, 1997, Boyd et al., 2011, Conejo et al., 2006, Nedic and Ozdaglar, 2001], and it is not necessary to solve the problem in (2.21) exactly.
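A matching sketch of dual decomposition for the same hypothetical quadratic subproblems (all data below are illustrative assumptions): the subproblems (2.20) have closed-form solutions, and the dual variables are updated by projected gradient ascent onto the subspace where the sum of the λ_i vanishes, which is the dual feasibility condition when E_i = 1.

```python
import numpy as np

# Primal: minimize sum_i (x_i - a_i)^2 subject to x_i = y, with E_i = 1,
# so dual feasibility reads sum_i lam_i = 0.
a = np.array([1.0, 4.0, 7.0])   # illustrative subproblem data
lam = np.zeros(3)               # dual variables
step = 0.5

for _ in range(500):
    # Subproblems (2.20): x_i*(lam_i) = argmin_x (x - a_i)^2 + lam_i x.
    x = a - lam / 2.0
    # Gradient of d_i(lam_i) is x_i*(lam_i); ascend and project onto the
    # subspace sum_i lam_i = 0 by subtracting the mean.
    lam = lam + step * x
    lam -= lam.mean()

print(x)  # each entry close to mean(a) = 4.0
```

At convergence all subproblem solutions agree, recovering the consensus value that primal decomposition found directly.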
2.5 Matrix Sparsity
A matrix is sparse if the number of nonzero elements in the matrix is considerably smaller than the number of zero elements. The sparsity pattern of a matrix describes where the nonzero elements are located in the matrix. Let A ∈ C^{n×n} be a sparse matrix; then the sparsity pattern of this matrix can be described using the following set

    S = { (i, j) | A_ij ≠ 0 },                                     (2.22)
where A_ij denotes the element in the i-th row and j-th column. This set is also referred to as the aggregate sparsity pattern. Since the applications in this thesis mainly involve symmetric matrices, from now on we only study sparsity patterns of symmetric matrices. Note that if the matrix is symmetric and (i, j) ∈ S, then (j, i) ∈ S.
The sparsity pattern of a sparse symmetric matrix can also be described through the associated undirected graph of the matrix. Let G(V, E) denote an undirected graph with vertex or node set V and edge set E. Then the associated graph of the matrix A is a graph with V = {1, …, n} and an edge between nodes i and j if A_ij ≠ 0 and i ≠ j. Note that, by this, the associated graph does not include any self-loops and does not represent any of the diagonal entries of the matrix.
Special structures of the associated graph often point to properties of the matrix that can be exploited in different applications. One of the important structures for graphs, particularly in this thesis, is the chordal structure, which is defined as follows.
Definition 2.6 (Chordal Graphs, Fukuda et al. [2000]). A graph G(V , E) is chordal
if every cycle of length ≥ 4 has a chord, i.e., an edge between two nonconsecutive
nodes in the cycle.
If the associated graph of a sparse matrix is chordal, the matrix is said to have a chordal sparsity pattern. Exploiting chordal sparsity in the matrices that describe an optimization problem often enables us to solve the problem more efficiently.
2.5.1 Possibilities in sparsity in semidefinite programming
The aim of exploiting sparsity in optimization problems is to reduce the computational burden of solving the problem. This can be achieved either by reducing the dimension of the optimization problem or by decomposing the problem into smaller problems that are easier to solve.
Exploiting chordal sparsity in SDP problems has been studied with promising results, see [Fukuda et al., 2000, Kim et al., 2010, Klerk, 2010]. In order to discuss the opportunities provided by exploiting chordal sparsity in the problem, we need to introduce some basic concepts and notation.
Let X be an n × n symmetric matrix variable, and let V = {1, …, n} denote the set of its row and column indices. Assume that the entries of X specified by the set F ⊂ V × V are fixed.
Definition 2.7 (Positive semidefinite completion, Fukuda et al. [2000]). It is said that the symmetric n × n matrix X̄ is a symmetric positive (semi)definite completion of X if, for all (i, j) ∈ F, X̄_ij = X_ij and X̄ ≻ 0 (X̄ ⪰ 0).
Definition 2.8 (Complete graphs, Fukuda et al. [2000]). A graph is said to be complete if there exists an edge between every two of its nodes.
Definition 2.9 (Cliques and maximal cliques, Kim et al. [2010]). A clique of a graph G(V, E) is any complete subgraph with vertex set C ⊂ V; if its vertex set is not a proper subset of the vertex set of another clique, it is called a maximal clique.
Let V̄ ⊂ V, and let X_V̄ denote the submatrix of X with entries specified by V̄ × V̄. As an example, consider the following 3 × 3 matrix and its submatrix specified by V̄ = {1, 3},

    X = [ 1 2 3 ; 2 5 6 ; 3 6 9 ],    X_V̄ = [ 1 3 ; 3 9 ].        (2.23)
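The submatrix notation X_V̄ of (2.23) corresponds directly to fancy indexing in numpy (shown here purely as an illustration; note Python's 0-based indices, so V̄ = {1, 3} becomes [0, 2]):

```python
import numpy as np

# The 3x3 matrix from (2.23) and its submatrix for Vbar = {1, 3}.
X = np.array([[1, 2, 3],
              [2, 5, 6],
              [3, 6, 9]])
Vbar = [0, 2]                     # 0-based indices for {1, 3}
X_sub = X[np.ix_(Vbar, Vbar)]     # rows and columns in Vbar x Vbar
print(X_sub)                      # [[1 3], [3 9]]
```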
Using the definitions and notation introduced above, the following theorem provides the main result of this section.
Theorem 2.1 (Fukuda et al. [2000]). Let the entries of X belonging to F be fixed and specified, and let G(V, E) be its associated graph, such that F = E ∪ {(i, i) | i ∈ V}. Also assume that G(V, E) is chordal with l maximal cliques with vertex sets C_k ⊂ V for k = 1, …, l. Then the matrix X has a symmetric positive (semi)definite completion if and only if X_{C_k} ≻ 0 (X_{C_k} ⪰ 0) for all k = 1, …, l.
As can be seen, this theorem provides a preliminary guideline on how to decompose large SDP problems with chordal sparsity into smaller problems that are easier to solve. However, the new set of semidefinite constraints does not constitute a standard SDP problem. This is due to the fact that if there exist m, p ∈ {1, …, l} such that C_m ∩ C_p ≠ ∅, there are common variables among the constraints X_{C_m} ⪰ 0 and X_{C_p} ⪰ 0 introduced in Theorem 2.1. The presence of such common variables is addressed by introducing auxiliary variables.
As an example, consider the simple constraint X ∈ S^n, X ⪰ 0, with the aggregate sparsity pattern F = {(i, j) | |i − j| ≤ 1}. The associated graph for this constraint, as defined in Theorem 2.1, is chordal. The maximal cliques of this chordal graph are C_k = {k, k + 1} for k = 1, …, n − 1. As a result, using Theorem 2.1, X ⪰ 0 is equivalent to the following set of constraints

    [ X_11      X_12      ; X_21      Z_1  ] ⪰ 0,

    [ Y_{k−1}   X_{k,k+1} ; X_{k+1,k} Z_k  ] ⪰ 0,  for k = 2, …, n − 2,
                                                                   (2.24)
    [ Y_{n−2}   X_{n−1,n} ; X_{n,n−1} X_nn ] ⪰ 0,

    Z_k − Y_k = 0,  for k = 1, …, n − 2,

where the equality constraints and the auxiliary variables Z_k and Y_k, for k = 1, …, n − 2, are introduced to rectify the issue with the common variables. Note that depending on how chordal sparsity appears in the problem, there are different approaches to decomposing and reducing the dimension of the original problem, e.g., see [Kim et al., 2010].
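The clique condition of Theorem 2.1 for this tridiagonal pattern is easy to check numerically. The following Python sketch (the random matrix is an illustrative assumption, not from the thesis) verifies the necessity direction: restricting a genuinely positive semidefinite matrix to the pattern F always yields clique submatrices X_{C_k} ⪰ 0.

```python
import numpy as np

def clique_blocks_psd(X):
    """Check X_{C_k} >= 0 for all maximal cliques C_k = {k, k+1} of the
    path graph associated with a tridiagonal sparsity pattern."""
    n = X.shape[0]
    for k in range(n - 1):
        block = X[k:k + 2, k:k + 2]
        if np.min(np.linalg.eigvalsh(block)) < -1e-10:
            return False
    return True

# Project a full PSD matrix onto the pattern |i - j| <= 1: the specified
# entries then admit a PSD completion (the original matrix itself), so the
# clique condition of Theorem 2.1 must hold.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X_full = B @ B.T                               # PSD by construction
X_partial = np.triu(np.tril(X_full, 1), -1)    # keep only the band |i-j| <= 1
print(clique_blocks_psd(X_partial))            # True
```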
3
Uncertain Systems and Robustness Analysis
This chapter touches upon some of the basic concepts in robustness analysis of
uncertain systems. In Section 3.1, a description of linear systems is reviewed. Section 3.2 extends this description to linear uncertain systems. Lastly, in Section 3.3
we briefly review a method for analyzing robust stability and performance of uncertain systems with structured uncertainties. Note that this chapter follows the
ideas from [Zhou et al., 1997] and it is recommended to consult this reference for
more details on the discussed topics.
3.1 Linear Systems

In this thesis the main focus is on robustness analysis of linear systems, and this section reviews some of the basics of such systems.
3.1.1 Continuous time systems
Consider the following description of a linear time invariant system
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),
(3.1)
where x ∈ R^n, A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} and D ∈ R^{p×m}. This is referred to as the state space representation of the system. The corresponding transfer function matrix for this system is defined as G(s) = C(sI − A)^{−1}B + D, with s ∈ C, and is also denoted as follows

    G(s) := [ A  B ; C  D ].                                       (3.2)
This system is said to be stable if all the eigenvalues λ_i of the matrix A satisfy Re λ_i < 0. Given an initial state x(t_0) and an input u(t), the system responses x(t) and y(t) are given by

    x(t) = e^{A(t−t_0)} x(t_0) + ∫_{t_0}^{t} e^{A(t−τ)} B u(τ) dτ,
                                                                   (3.3)
    y(t) = Cx(t) + Du(t).

3.1.2 H∞ and H2 norms
The H2 norm of the system described in (3.1) is defined as follows

    ‖G‖_2^2 = (1/2π) ∫_{−∞}^{∞} Tr{G(jω)^∗ G(jω)} dω.              (3.4)

Considering this definition, for a system to have a finite H2 norm it is required that D = 0. In that case, and if the system is stable, the H2 norm of the system can be calculated as follows

    ‖G‖_2^2 = Tr{B′QB} = Tr{CPC′},                                 (3.5)

where P and Q are the controllability and observability Gramians, respectively. These are the unique solutions to the following Lyapunov equations,

    AP + PA′ + BB′ = 0,
    A′Q + QA + C′C = 0.                                            (3.6)
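A short numerical sketch of (3.5)–(3.6) using SciPy's Lyapunov solver (the system matrices below are illustrative assumptions); both Gramian formulas give the same H2 norm:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # illustrative stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observability Gramian: A' Q + Q A + C' C = 0.
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
# Controllability Gramian: A P + P A' + B B' = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)

h2_obs = np.sqrt(np.trace(B.T @ Q @ B))   # Tr{B' Q B}
h2_ctr = np.sqrt(np.trace(C @ P @ C.T))   # Tr{C P C'}
print(h2_obs, h2_ctr)                     # the two formulas agree
```

Note that `solve_continuous_lyapunov(a, q)` solves a·x + x·aᴴ = q, hence the transposed arguments and negated right-hand sides above.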
In case the system in (3.1) is stable, the H∞ norm of the system is defined as below

    ‖G‖_∞ = sup_ω σ̄{G(jω)},                                        (3.7)

where σ̄ denotes the maximum singular value of a matrix.
In this thesis, methods for computing the H∞ norm are not discussed; we just state that this quantity can be computed by solving a semidefinite programming problem, [Boyd et al., 1994], or by iterative algorithms using bisection, [Boyd et al., 1988, Zhou et al., 1997].
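As an illustration of the bisection approach (the system matrices and tolerances below are illustrative assumptions, and the sketch assumes D = 0 and a stable A): γ exceeds ‖G‖∞ exactly when the associated Hamiltonian matrix has no eigenvalues on the imaginary axis, which gives the test inside the bisection loop.

```python
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # illustrative stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

def has_imag_eig(gamma, tol=1e-8):
    # Hamiltonian test for D = 0: gamma > ||G||_inf iff H has no
    # eigenvalues on the imaginary axis.
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-C.T @ C, -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < tol)

lo, hi = 1e-6, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if has_imag_eig(mid):
        lo = mid          # gamma is below the norm
    else:
        hi = mid          # gamma is above the norm
print(hi)  # approx ||G||_inf, which is 1.25 for this example
```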
[Figure 3.1: Uncertain system with structured uncertainty — the uncertainty block ∆ in feedback with the partitioned system matrix [M11 M12; M21 M22], with signals q(t), p(t), w(t) and z(t).]
3.2 Uncertain Systems

There are different methods to express uncertainty in a system. These descriptions fall mainly into the two categories of structured and unstructured uncertainties. In this section, we only discuss structured uncertainties.

3.2.1 Structured uncertainties and LFT representation
In case of a bounded uncertainty and a rational dependence of the system on the uncertainty, it is possible to describe the uncertain system as a feedback connection of the uncertainty and a time invariant system, as illustrated in Figure 3.1, where ∆ collects the uncertainties extracted from the system and the transfer function matrix

    M = [ M11  M12 ; M21  M22 ]

is the coefficient or system matrix. Let M ∈ C^{(d+l)×(d+m)}, M11 ∈ C^{d×d}, M12 ∈ C^{d×m}, M21 ∈ C^{l×d} and M22 ∈ C^{l×m}. The coefficient or system matrix can always be constructed such that ∆ has the following block diagonal structure

    ∆ = diag( δ_1 I_{r_1}, …, δ_L I_{r_L}, ∆_{L+1}, …, ∆_{L+F} ),  (3.8)

where δ_i ∈ C for i = 1, …, L, ∆_{L+j} ∈ C^{m_j×m_j} for j = 1, …, F, and

    ∑_{i=1}^{L} r_i + ∑_{j=1}^{F} m_j = d,

and all blocks have an induced ∞-norm less than or equal to 1, i.e., |δ_i| ≤ 1 for i = 1, …, L, and ‖∆_i‖_∞ ≤ 1 for i = L + 1, …, L + F. Systems with an uncertainty structure as in (3.8) are referred to as uncertain systems with structured uncertainties.
The representation of the uncertain system in Figure 3.1 is also referred to as the Linear Fractional Transformation (LFT) representation of the system, [Magni, 2006, Hecker et al., 2005]. Using this representation it is possible to describe the mapping between w(t) and z(t) as below

    (∆ ∗ M) := M22 + M21 ∆ (I − M11 ∆)^{−1} M12,                   (3.9)

where this transfer function matrix is referred to as the upper LFT with respect to ∆. This type of representation of uncertain systems is used extensively in the upcoming papers and sections.
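As a small numerical illustration of (3.9), the upper LFT can be evaluated directly; all matrix values below are illustrative assumptions, with d = 2 scalar uncertainty blocks (L = 2, r_1 = r_2 = 1).

```python
import numpy as np

M11 = np.array([[0.2, 0.1], [0.0, 0.3]])  # illustrative blocks of M
M12 = np.array([[1.0], [0.5]])
M21 = np.array([[0.4, 0.6]])
M22 = np.array([[0.1]])

def upper_lft(delta):
    """(Delta * M) = M22 + M21 Delta (I - M11 Delta)^{-1} M12, eq. (3.9)."""
    d = M11.shape[0]
    return M22 + M21 @ delta @ np.linalg.solve(np.eye(d) - M11 @ delta, M12)

# Nominal system: Delta = 0 recovers M22.
print(upper_lft(np.zeros((2, 2))))        # [[0.1]]
# A perturbation in the unit ball: Delta = diag(delta_1, delta_2).
print(upper_lft(np.diag([0.5, -0.5])))
```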
3.2.2 Robust H∞ and H2 norms

The robust H2 and H∞ norms for uncertain systems with structured uncertainties are defined as

    sup_{∆∈B∆} ‖∆ ∗ M‖_2^2 = sup_{∆∈B∆} ∫_{−∞}^{∞} Tr{(∆ ∗ M)^∗ (∆ ∗ M)} dω/2π,   (3.10)

    sup_{∆∈B∆} ‖∆ ∗ M‖_∞ = sup_{∆∈B∆} sup_ω σ̄{(∆ ∗ M)},            (3.11)

respectively, where B∆ represents the unit ball in the induced ∞-norm for the uncertainty structure in (3.8).
These quantities are of great importance: in case w(t) and z(t) are viewed as a disturbance acting on the system and the controlled output of the system, respectively, these norms quantify the worst-case effect of the disturbance on the performance of the system. The computation of these quantities is not discussed in this section; for more information on the calculation of upper bounds for these norms, refer to [Paganini and Feron, 2000, Paganini, 1997, 1999, Doyle et al., 1989, Zhou et al., 1997].
3.2.3 Nominal and robust stability and performance

Consider the interconnection of a controller with an uncertain system with structured uncertainty as in Figure 3.2, where K is the controller and ∆ represents the uncertainty in the model. Since we do not discuss the synthesis problem in this section, we assume that a controller has been designed such that the nominal system, with ∆ = 0, together with the controller is stable and satisfies certain requirements on the performance of the closed-loop system. The system is then called nominally stable and is said to have nominal performance.
Under this assumption, if the process P and the controller are combined, the setup in Figure 3.2 transforms into the one presented in Figure 3.1. In this case, if the resulting system together with the uncertainty is stable and satisfies the nominal requirements on its behavior, the system is said to be robustly stable and to have robust performance.
[Figure 3.2: Uncertain system with a structured uncertainty and a controller — the uncertainty ∆ and the controller K in feedback with the process P(s), with disturbance w(t) and controlled output z(t).]
3.3 µ-Analysis
As was mentioned in Chapter 1, it is important to evaluate the robustness of the proposed controllers with respect to the uncertainty in the model, specifically if the controller is designed using model-based design methods that do not take the uncertainty into account explicitly. This type of analysis should examine the possibility of loss of stability or performance when the uncertainty in the model is considered. In the upcoming subsections, a framework for analyzing robust stability and performance of uncertain systems with structured uncertainty is reviewed.
3.3.1 Structured singular values
Consider the system in Figure 3.1. As was mentioned in Section 3.2.1, the transfer function matrix between w(t) and z(t) can be described using the upper LFT with respect to ∆, as in (3.9). For this relation to exist, it is required that det(I − M11∆) ≠ 0 for all frequencies and all ∆ ∈ B∆. This motivates the definition of the structured singular value of a matrix.
Definition 3.1 (Structured Singular Values, Zhou et al. [1997]). Let G ∈ C^{n×n} and ∆ ∈ B∆. Then the structured singular value of G is defined as

    µ∆(G) := 1 / min{ σ̄(∆) | ∆ ∈ B∆, det(I − G∆) = 0 },            (3.12)

and if there does not exist any ∆ ∈ B∆ such that det(I − G∆) = 0, then µ∆(G) := 0.
From Definition 3.1, it can be seen that µ∆(G) quantifies the smallest perturbation, as measured by σ̄(∆), that causes det(I − G∆) to become zero. However, in general it is not possible to compute the exact value of the structured singular value, and instead upper bounds for this quantity are used. From the definition of µ∆(G) it follows that µ∆(G) ≤ σ̄(G), [Zhou et al., 1997]. Let X = {X | X ≻ 0, X = X^∗, X∆ = ∆X}. Considering the fact that µ∆(G) = µ∆(XGX^{−1}) for all X ∈ X, it holds that

    µ∆(G) ≤ σ̄(XGX^{−1}),    ∀X ∈ X.                                (3.13)

As a result, this upper bound can be tightened by minimizing the right-hand side of the inequality in (3.13) with respect to the scaling X. Hence, a tightened upper bound is given by

    inf_{X∈X} σ̄(XGX^{−1}).                                         (3.14)

The structured singular value and the upper bound in (3.14) play an important role in structured robust stability and performance analysis, and this is discussed in the following section.
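A sketch of minimizing the right-hand side of (3.13) for a simple diagonal scaling structure (the matrix G, the parametrization, and the solver choice are illustrative assumptions): the scalings are parametrized as X = diag(e^{t_1}, e^{t_2}) to keep them positive definite, and a derivative-free optimizer searches over t.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative matrix with two scalar uncertainty blocks, so X is diagonal.
G = np.array([[1.0, 10.0],
              [0.1, 1.0]])

def scaled_sv(t):
    # sigma_max(X G X^{-1}) with X = diag(exp(t_1), exp(t_2)).
    X = np.diag(np.exp(t))
    Xinv = np.diag(np.exp(-t))
    return np.linalg.norm(X @ G @ Xinv, 2)   # largest singular value

res = minimize(scaled_sv, np.zeros(2), method="Nelder-Mead")
print(np.linalg.norm(G, 2))   # unscaled bound sigma_max(G)
print(res.fun)                # tightened upper bound (3.14)
```

For this G the unscaled bound σ̄(G) ≈ 10.1 is tightened substantially by the optimal scaling; only the ratio of the two diagonal entries matters, which is why the objective is flat along t_1 + t_2.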
3.3.2 Structured robust stability and performance analysis

Consider the system in Figure 3.1. The following theorem provides a tool for robust stability analysis of uncertain systems with structured uncertainty.
Theorem 3.1 (Zhou et al. [1997]). Let ∆ ∈ B∆. Then the system in Figure 3.1 is robustly stable if and only if

    sup_ω µ∆(M11(jω)) < 1.                                         (3.15)

Proof: See [Zhou et al., 1997].
This theorem can be modified using the upper bound in (3.14) as follows.
Corollary 3.1. Let ∆ ∈ B∆. Then the system in Figure 3.1 is robustly stable if for all ω

    inf_{X(ω)∈X} σ̄( X(ω) M11(jω) X(ω)^{−1} ) < 1.                  (3.16)

Proof: This follows from Theorem 3.1 and the upper bound on the structured singular value in (3.14).
For each frequency ω_0, the condition in (3.16) can be examined using the following convex feasibility problem

    find        X
    subject to  X ∈ X,
                M11(jω_0)^∗ X M11(jω_0) − X ≺ 0,                   (3.17)

which is an SDP problem. Considering the fact that the problem in (3.17) should be solved for infinitely many frequencies, the whole problem constitutes an infinite-dimensional problem. As a result, in practice the robust stability analysis is often performed only over a grid of sufficiently many frequency points.
The robust performance analysis for the system can also be addressed in a similar way by introducing an augmented uncertainty block. Recall the system matrix defined in Section 3.2.1, see Figure 3.1. Define the augmented uncertainty block

    ∆_A = [ ∆  0 ; 0  ∆_p ],                                       (3.18)

where ∆_p ∈ C^{m×l} with ‖∆_p‖ ≤ 1 and ∆ ∈ B∆ ⊂ C^{d×d}. Then, by replacing ∆ with ∆_A and M11 with M in Theorem 3.1 and Corollary 3.1, the robust performance of the system can be analyzed in the same manner as the robust stability. For more detailed discussions on this topic, the reader is referred to [Zhou et al., 1997].
Bibliography
B. W. Bequette. Process control - Modeling, design and simulation. Pearson
Education, 2003.
D. P. Bertsekas. Nonlinear programming. Athena Scientific, 1999.
D. P. Bertsekas. Convex optimization theory. Athena Scientific, 2009.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press,
2004.
S. Boyd, V. Balakrishnan, and P. Kabamba. On computing the H∞ norm of a
transfer matrix. Proceedings of the American Control Conference, 1988.
S. Boyd, L. ElGhaoui, E. Feron, and V. Balakrishnan. Linear matrix inequalities
in system and control theory. Society for Industrial and Applied Mathematics,
1994.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers.
Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
K. Budka, J. Deshpande, J. Hobby, Young-Jin Kim, V. Kolesnikov, Wonsuck Lee, T. Reddington, M. Thottan, C.A. White, Jung-In Choi, Junhee Hong, Jinho Kim, Wonsuk Ko, Young-Woo Nam, and Sung-Yong Sohn. GERI - Bell Labs smart grid research focus: Economic modeling, networking, and security & privacy. In First IEEE International Conference on Smart Grid Communications (SmartGridComm), pages 208–213, Oct. 2010.
A. J. Conejo, E. Castillo, R. Miguez, and R. Garcia-Bertrand. Decomposition techniques in mathematical programming. Springer, 2006.
J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831–847, Aug. 1989.
M. Fukuda, M. Kojima, K. Murota, and K. Nakata. Exploiting sparsity in semidefinite programming via matrix completion I: General framework. SIAM Journal on Optimization, 11:647–674, 2000.
A. Garulli, A. Hansson, S. Khoshfetrat Pakazad, A. Masi, and R. Wallin. Robust
finite-frequency H2 analysis of uncertain systems. Technical Report LiTH-ISYR-3011, Department of Electrical Engineering, Linköping University, SE-581
83 Linköping, Sweden, May 2011.
T. Glad and L. Ljung. Control theory, multivariable and nonlinear methods. CRC
Press, 2000.
S. Hecker, A. Varga, and J. Magni. Enhanced LFR-toolbox for Matlab. Aerospace
Science and Technology, 9(2):173 – 180, 2005.
S. Khoshfetrat Pakazad, A. Hansson, M. S. Andersen, and A. Rantzer. Decomposition and projection methods for distributed robustness analysis of interconnected uncertain systems. Technical Report LiTH-ISY-R-3033, Department of
Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden,
Nov. 2011.
S. Kim, M. Kojima, M. Mevissen, and M. Yamashita. Exploiting sparsity in linear
and nonlinear matrix inequalities via positive semidefinite matrix completion.
Mathematical Programming, pages 1–36, 2010.
E. Klerk. Exploiting special structure in semidefinite programming: A survey of
theory and applications. European Journal of Operational Research, 201(1):1 –
10, 2010.
M. Krstic. Control of an unstable reaction-diffusion PDE with long input delay.
In Proceedings of the IEEE Conference on Decision and Control, pages 4452–
4457, 2009.
J. Magni. User Manual of the Linear Fractional Representation Toolbox Version
2.0. Technical report, ONERA, Systems Control and Flight Dynamics Department, Feb. 2006.
A. Masi, R. Wallin, A. Garulli, and A. Hansson. Robust finite-frequency H2 analysis. Proceedings of 49th IEEE Conference on Decision and Control. 2010.
A. Mbarushimana and A. Xin. Load management using smart supervisory in a distributed smart grid. In 4th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), pages 1113–1120, July 2011.
A. Nedic and A. Ozdaglar. Convex Optimization in Signal Processing and Communications, Ch. Cooperative distributed multi-agent optimization. Cambridge University Press, 2001.
A. Nedic, A. Ozdaglar, and P.A. Parrilo. Constrained consensus and optimization in multi-agent networks. IEEE Transactions on Automatic Control, 55(4):922–938, April 2010.
F. Paganini. State space conditions for robust H2 analysis. Proceedings of the American Control Conference, 2:1230–1234, Jun. 1997.
F. Paganini. Convex methods for robust H2 analysis of continuous-time systems. IEEE Transactions on Automatic Control, pages 239–252, Feb. 1999.
F. Paganini and E. Feron. Advances in linear matrix inequality methods in control, Ch. Linear matrix inequality methods for robust H2 analysis: A survey
with comparisons. Society for Industrial and Applied Mathematics, 2000.
S. Khoshfetrat Pakazad, A. Hansson, and A. Garulli. On the calculation of the robust finite frequency H2 norm. Proceedings of the 18th IFAC World Congress.
2011.
A. Papachristodoulou and M.M. Peet. On the analysis of systems described by classes of partial differential equations. In 45th IEEE Conference on Decision and Control, pages 747–752, Dec. 2006.
S. Skogestad and I. Postlethwaite. Multivariable feedback control. Wiley, 2007.
K. J. Åström and T. Hägglund. PID controllers - Theory, design, and tuning. Instrument Society of America, 1995.
K. J. Åström and B. Wittenmark. Computer controlled systems - Theory and
design. Prentice Hall, 1990.
R. Wallin, S. Khoshfetrat Pakazad, A. Hansson, A. Garulli, and A. Masi. Optimization Based Clearance of Flight Control Laws , Ch. Applications of IQC
based analysis techniques for clearance. Springer, 2011.
K. Zhou and J. C. Doyle. Essentials of robust control. Prentice Hall, 1998.
K. Zhou, J. C. Doyle, and K. Glover. Robust and optimal control. Prentice Hall,
1997.
X. Zhou, L. Cui, and Y. Ma. Research on smart grid technology. In International Conference on Computer Application and System Modeling (ICCASM), volume 3, pages V3-599–V3-603, Oct. 2010.
Part II
Publications
Paper A
Robust Finite-Frequency H2 Analysis
of Uncertain Systems
Authors:
Andrea Garulli, Anders Hansson, Sina Khoshfetrat Pakazad† , Alfio
Masi and Ragnar Wallin
Published as
A. Garulli, A. Hansson, S. Khoshfetrat Pakazad, A. Masi, and R. Wallin.
Robust finite-frequency H2 analysis of uncertain systems. Technical Report LiTH-ISY-R-3011, Department of Electrical Engineering, Linköping
University, SE-581 83 Linköping, Sweden, May 2011.
† Corresponding author
Robust Finite-Frequency H2 Analysis of Uncertain Systems

Andrea Garulli∗, Anders Hansson∗∗, Sina Khoshfetrat Pakazad†∗∗, Alfio Masi∗ and Ragnar Wallin∗∗

∗ Dipartimento di Ingegneria dell’Informazione, Università degli Studi di Siena, Siena, Italy
[email protected], [email protected]

∗∗ Dept. of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden
[email protected], [email protected], [email protected]
Abstract

In many applications, design or analysis is performed over a finite-frequency range of interest. The importance of the H2/robust H2 norm highlights the necessity of computing this norm accordingly. This paper provides different methods for computing upper bounds on the robust finite-frequency H2 norm for systems with structured uncertainties. An application of the robust finite-frequency H2 norm to a comfort analysis problem for an aero-elastic model of an aircraft is also presented.
1 Introduction
The H2/robust H2 norm has been one of the pivotal design and analysis criteria in many applications, such as structural dynamics, acoustics, colored noise disturbance rejection, etc., [Zattoni, 2006, Marro and Zattoni, 2005, Caracciolo et al., 2005]. Due to the importance of the H2/robust H2 norm, there has been a substantial amount of research on computation, analysis and design based on these measures, much of which considers the use of Linear Matrix Inequalities (LMIs) and Riccati equations for this purpose, e.g., [Doyle et al., 1989, Stoorvogel, 1993, Boyd et al., 1994, Paganini, 1997, 1999b, Chesi et al., 2009, Sznaier et al., 2002]. A survey of recent methods in robust H2 analysis is provided in [Paganini and Feron, 2000].
Most of the methods presented in the literature consider the whole frequency
range for calculating the H2 /robust H2 norm. However, in some applications
it is beneficial to concentrate only on a finite-frequency range of interest and
calculate the design/analysis measures accordingly. This can be due to different reasons, e.g., the model is only valid for a specific frequency range or the design is targeted at a specific frequency interval. This motivates the importance of computing the (robust) finite-frequency H2 norm.
In [Gawronski, 2000], a method for calculating the finite-frequency H2 norm for
systems without uncertainty is presented, where the key step is to compute the
finite-frequency observability Gramian. This is accomplished by first computing
the regular observability Gramian and then scaling it by a system dependent
matrix.
This paper introduces two methods for calculating an upper bound on the robust finite-frequency H2 norm for systems with structured uncertainties. The first method combines the notion of finite-frequency Gramians, introduced in [Gawronski, 2000], with convex optimization tools [Boyd and Vandenberghe, 2004] commonly used in robust control, and calculates the upper bound by solving an underlying optimization problem [Masi et al., 2010]. The second method provides a computationally cheaper algorithmic way of calculating the desired upper bound. In contrast to the first approach, the second method performs frequency gridding and breaks the original problem into smaller problems, which are possibly easier to solve. It then uses the ideas presented in [Roos and Biannic, 2010] on computing upper bounds on structured singular values for solving the smaller problems. The results of the smaller problems are then combined to compute the upper bound on the whole desired frequency range, [Pakazad et al., 2011].
This paper is structured as follows. First, some of the notation used throughout the paper is presented. Section 2 introduces the problem formulation. Mathematical preliminaries are presented in Section 3, which covers the notion of finite-frequency Gramians and reviews the calculation of upper bounds on the robust H2 norm. Sections 4 and 5 provide the details of the two methods for calculating upper bounds on the robust finite-frequency H2 norm. In Section 6 numerical examples are presented. Section 7 provides more insight into the proposed methods by investigating their advantages and disadvantages, and finally Section 8 concludes the paper with some final remarks.
1.1 Notation

The notation in this paper is standard. min and max represent the minimum and maximum of a function or a set, and similarly sup represents the supremum of a function. The symbols ⪰ and ≺ denote inequality relations between matrices. A transfer matrix in terms of state-space data is denoted

    [ A  B ; C  D ] := C(jωI − A)^{−1}B + D.                       (1)

With ‖·‖_2 we denote the Euclidean or 2-norm of a vector, or the norm of a matrix induced by the 2-norm. Furthermore, RH∞ represents real rational functions bounded on Re(s) = 0, including ∞. For the sake of brevity of notation, unless necessary, we drop the dependence of functions on frequency.

2 Problem formulation

2.1 H2 norm of a system
Consider the following system in state space form

    ẋ = Ax + Bu,
    y = Cx,                                                        (2)

and define G(s) as the corresponding transfer function. Then the H2 norm of the system in (2) is defined as follows

    ‖G‖_2^2 = ∫_{−∞}^{∞} Tr{G(jω)^∗ G(jω)} dω/2π.
This can also be written as

    ‖G‖_2^2 = Tr ∫_0^∞ B^T e^{A^T t} C^T C e^{A t} B dt
            = Tr{ B^T ( ∫_0^∞ e^{A^T t} C^T C e^{A t} dt ) B }     (3)
            = Tr{ B^T W_o B },                                     (4)
where W_o is the observability Gramian of the system. Similarly, the finite-frequency H2 norm of (2) is defined as

    ‖G‖_{2,ω̄}^2 = ∫_{−ω̄}^{ω̄} Tr{G(jω)^∗ G(jω)} dω/2π.            (5)
2.2 Robust H2 norm of a system
Consider the uncertain state space system

    ẋ = Ax + B_q q + B_w w,
    p = C_p x + D_pq q,
    z = C_z x + D_zq q,                                            (6)
    q = ∆p,

where x ∈ R^n, w ∈ R^m, z ∈ R^l and p, q ∈ R^d. Also ∆ ∈ C^{d×d}, which represents the uncertainty present in (6), has the following structure

    ∆ = diag( δ_1 I_{r_1}, …, δ_L I_{r_L}, ∆_{L+1}, …, ∆_{L+F} ),  (7)
[Figure 1: Uncertain system with structured uncertainty.]
where δ_i ∈ R for i = 1, …, L, ∆_{L+j} ∈ C^{m_j×m_j} for j = 1, …, F, and

    ∑_{i=1}^{L} r_i + ∑_{j=1}^{F} m_j = d.

In addition, and without loss of generality, it is assumed that ∆ ∈ B∆, where B∆ is the unit ball for the induced 2-norm. This structure of ∆ can represent both real parametric uncertainties (δ_i I_{r_i}) and un-modeled system dynamics (∆_{L+j}).
The transfer matrix for the uncertain system in (6) is defined as below, see Figure 1,

    M(jω) = [ M11  M12 ; M21  M22 ] = [ A  B_q  B_w ; C_p  D_pq  0 ; C_z  D_zq  0 ],   (8)

where M ∈ C^{(d+l)×(d+m)}, M11 ∈ C^{d×d}, M12 ∈ C^{d×m}, M21 ∈ C^{l×d} and M22 ∈ C^{l×m}. The following partitioning of this transfer matrix will also be used later in the upcoming sections

    M(jω) = [ M1  M2 ] = [ A  B_q  B_w ; C  D  0 ],                (9)

where M1 ∈ C^{(d+l)×d}, M2 ∈ C^{(d+l)×m} and

    C = [ C_p ; C_z ],    D = [ D_pq ; D_zq ].                     (10)
In analysis of uncertain systems, the transfer function between the signals w(t) and z(t) is of interest. This transfer function is given by the upper LFT representation

    (∆ ∗ M) = M22 + M21 ∆ (I − M11 ∆)^{−1} M12.                    (11)
Given (11), the robust H2 norm of the system in (6) is defined as below

    sup_{∆∈B∆} ‖∆ ∗ M‖_2^2 = sup_{∆∈B∆} ∫_{−∞}^{∞} Tr{(∆ ∗ M)^∗ (∆ ∗ M)} dω/2π.   (12)
Similarly the robust finite-frequency H2 norm of the system in (6) is defined as
sup k∆ ∗
∆∈B∆
Mk22,ω̄
Zω̄
Tr {(∆ ∗ M)∗ (∆ ∗ M)}
= sup
∆∈B∆
dω
.
2π
(13)
−ω̄
This paper proposes methods for calculating upper bounds on (13).
3 Mathematical preliminaries

3.1 Finite-frequency observability Gramian
As was mentioned in Section 1, one of the ways of calculating the H2 norm of the system in (2) is by using the observability Gramian of the system, see (4). The observability Gramian can be computed by solving the following Lyapunov equation

    A^T W_o + W_o A + C^T C = 0,                                   (14)

where W_o ∈ R^{n×n} is the observability Gramian. Using Parseval's identity and (4), the observability Gramian can also be expressed as

    W_o = ∫_{−∞}^{∞} H(jω)^∗ C^T C H(jω) dω/2π,                    (15)

where H(jω) = (jωI − A)^{−1}. To continue with the study of the robust finite-frequency H2 problem, the notion of finite-frequency observability Gramian is introduced next, as proposed in [Gawronski, 2000]. The finite-frequency observability Gramian is defined as

    W_o(ω̄) = ∫_{−ω̄}^{ω̄} H(jω)^∗ C^T C H(jω) dω/2π.               (16)
The next lemma provides a way to compute Wo(ω̄) in terms of the observability Gramian, Wo.

Lemma 1. The finite-frequency observability Gramian can be computed as

    Wo(ω̄) = L(A, ω̄)^∗ Wo + Wo L(A, ω̄),    (17)
Paper A — Robust finite-frequency H2 analysis of uncertain systems
where Wo is defined by (14) or equivalently (15) and

    L(A, ω̄) = ∫_{−ω̄}^{ω̄} H(jω) dω/2π = (j/2π) ln[(A + jω̄I)(A − jω̄I)^{−1}].    (18)
Proof: See [Gawronski, 2000, page 100].
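A sketch of Lemma 1 in NumPy/SciPy (assuming the principal matrix logarithm is the correct branch, which holds for stable A in the cases tried here):

```python
import numpy as np
from scipy.linalg import logm, solve_continuous_lyapunov

def L_matrix(A, wbar):
    """L(A, wbar) = (j/2pi) * ln[(A + j*wbar*I)(A - j*wbar*I)^{-1}], cf. (18)."""
    n = A.shape[0]
    I = np.eye(n)
    return (1j / (2 * np.pi)) * logm((A + 1j * wbar * I) @ np.linalg.inv(A - 1j * wbar * I))

def ff_obsv_gramian(A, C, wbar):
    """Finite-frequency observability Gramian via (17)."""
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    L = L_matrix(A, wbar)
    return L.conj().T @ Wo + Wo @ L

# Scalar check, A = -1, C = 1: Wo(wbar) = arctan(wbar)/pi, i.e. 1/4 at wbar = 1.
W1 = ff_obsv_gramian(np.array([[-1.0]]), np.array([[1.0]]), 1.0)
```

Comparing this against a direct quadrature of (16) is a simple way to validate the branch of the logarithm on a given problem.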
3.2 An upper bound on the robust H2 norm

Let X represent Hermitian, block-diagonal positive definite matrices that commute with ∆, i.e. every X ∈ X has the following structure

    X = diag(X1, ..., XL, x_{L+1} I_{m1}, ..., x_{L+F} I_{mF}),    (19)

where the blocks in X have compatible dimensions with their corresponding blocks in ∆. The following condition plays a central role throughout this section.
Condition 1. Consider the system in (6). There exist X(ω) ∈ X, where X ⊆ C^{d×d}, a Hermitian Y(ω) ∈ C^{m×m} and ǫ > 0 such that

    M(jω)^∗ [X(ω) 0; 0 I] M(jω) − [X(ω) 0; 0 Y(ω)] ⪯ [−ǫI 0; 0 0].    (20)
The elements of the set X are often called scaling matrices. In many cases it is customary to use constant scaling matrices to make the problem easier to handle. However, the results achieved based on constant scaling matrices can be conservative. One of the ways to reduce the conservativeness while keeping the computational complexity reasonable is to use special classes of dynamic scaling matrices. This will be investigated in more detail later in this section.
Next, two methods for computing upper bounds on the robust H2 norm of systems with structured uncertainties are presented. The first method explicitly defines
Y (ω) in Condition 1 and uses Y (ω) to construct the upper bound on the robust
H2 norm of the system. This method will be referred to as explicit upper bound
calculation. The second method calculates the upper bound through computing
the observability Gramian via solving a set of LMIs. This method is referred to as
Gramian based upper bound calculation.
Explicit upper bound calculation

Consider Condition 1. This condition can be restated as follows.

Lemma 2. If there exists X(ω) ∈ X such that

    M11^∗ X(ω) M11 + M21^∗ M21 − X(ω) ≺ 0,    (21)
then Condition 1 is satisfied if and only if there exists Y(ω) = Y(ω)^∗ such that

    M12^∗ X(ω) M12 + M22^∗ M22 − (M12^∗ X(ω) M11 + M22^∗ M21) ×
    (M11^∗ X(ω) M11 + M21^∗ M21 − X(ω))^{−1} ×
    (M12^∗ X(ω) M11 + M22^∗ M21)^∗ ⪯ Y(ω).    (22)
Proof: See Appendix A.1.
Using Condition 1 and Lemma 2, the following theorem provides an upper bound on the gain of the system at every frequency; it will then be used to provide an upper bound on the robust H2 norm for systems with structured uncertainty.
Theorem 1. If there exists X(ω) ∈ X such that (21) is satisfied ∀ω, and we define Y(ω) as below

    Y(ω) = M12^∗ X(ω) M12 + M22^∗ M22 − (M12^∗ X(ω) M11 + M22^∗ M21) ×
           (M11^∗ X(ω) M11 + M21^∗ M21 − X(ω))^{−1} (M12^∗ X(ω) M11 + M22^∗ M21)^∗,    (23)

then (∆ ∗ M)(jω)^∗ (∆ ∗ M)(jω) ⪯ Y(ω) ∀ω.
Proof: See Appendix A.2.
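A small numerical sketch of Theorem 1 (scalar blocks, X = 1; the values are illustrative and chosen only so that (21) holds):

```python
import numpy as np

def explicit_Y(M11, M12, M21, M22, X):
    """Y(w) of (23) for one frequency; inputs are complex matrices, X from the scaling set."""
    c11 = M11.conj().T @ X @ M11 + M21.conj().T @ M21 - X   # must be negative definite, cf. (21)
    c21 = M12.conj().T @ X @ M11 + M22.conj().T @ M21
    return (M12.conj().T @ X @ M12 + M22.conj().T @ M22
            - c21 @ np.linalg.solve(c11, c21.conj().T))

M11, M12, M21, M22 = (np.array([[v]]) for v in (0.2, 0.5, 0.3, 0.4))
Y = explicit_Y(M11, M12, M21, M22, np.eye(1))
# For every |delta| <= 1, the LFT gain squared should stay below Y:
gains = [abs(0.4 + 0.3 * d / (1 - 0.2 * d) * 0.5) ** 2 for d in np.linspace(-1, 1, 41)]
```

For these numbers Y ≈ 0.4656 while the worst sampled gain squared is ≈ 0.3452, so the bound holds with some slack, as the theorem only guarantees an upper bound.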
Corollary 1. If there exist X(ω) ∈ X and a frequency interval centered at ωi, I(ωi) = [ωi + ωmin, ωi + ωmax], such that

    M11^∗ X M11 + M21^∗ M21 − X ≺ 0    ∀ω ∈ I(ωi),    (24)

and we consider Y(ω) as defined in (23) for the mentioned frequency interval, then

    ∫_{ω∈I(ωi)} Tr{(∆ ∗ M)^∗ (∆ ∗ M)} dω/2π ≤ ∫_{ω∈I(ωi)} Tr{Y(ω)} dω/2π,    (25)

for all ∆ ∈ B∆, and specifically if I(ωi) covers all frequencies,

    sup_{∆∈B∆} ‖∆ ∗ M‖²₂ ≤ ∫_{−∞}^{∞} Tr{Y(ω)} dω/2π.    (26)
Gramian-based upper bound calculation

In this section a class of dynamic scaling matrices with the following structure will be considered

    X(ω) = ψ(jω) X ψ(jω)^∗ = [Cψ (jωI − Aψ)^{−1}  I] X [Cψ (jωI − Aψ)^{−1}  I]^∗,    (27)

where Aψ ∈ R^{nψ×nψ} and Cψ ∈ R^{d×nψ} are fixed matrices with appropriate dimensions such that Aψ is stable and (Cψ, Aψ) is observable. Also note that X ∈ R^{(d+nψ)×(d+nψ)} is a free basis for the parameters such that X(s) ∈ X. As shown in [Giusto, 1996], using this class of scaling matrices Condition 1 can be rewritten as follows.

Lemma 3. Consider the partitioning M = [M1 M2], defined in (9), for the transfer matrix of the system in (6). By replacing X(ω) with X(ω)^{−1} in (20), it can be restated as
    [ M1(jω) X(ω) M1(jω)^∗ − [X(ω) 0; 0 I]    M2(jω);
      M2(jω)^∗                                −Y(ω) ] ⪯ 0.    (28)
Proof: See [Giusto, 1996, Lemma 1].
The upper left block of (28) can be expressed, up to its sign, as

    C11 := [X(ω) 0; 0 I] − M1(jω) X(ω) M1(jω)^∗
         = [ψ 0; 0 I] [X 0; 0 I] [ψ 0; 0 I]^∗ − [M11 ψ; M21 ψ] X [M11 ψ; M21 ψ]^∗
         = [M11 ψ  ψ  0; M21 ψ  0  I] diag(−X, X, I) [M11 ψ  ψ  0; M21 ψ  0  I]^∗.    (29)
By introducing the following transfer matrix

    C̃ (jωI − Ã)^{−1} B̃q + D̃ = [M11 ψ  ψ; M21 ψ  0],    (30)

and setting Γ = [0  I]^T, (29) can be reformulated as

    C11 = ( [C̃ (jωI − Ã)^{−1}  I] [B̃q 0; D̃ Γ] ) diag(−X, X, I) ( [C̃ (jωI − Ã)^{−1}  I] [B̃q 0; D̃ Γ] )^∗,    (31)

where

    Ã = [A  Bq Cψ  0;  0  Aψ  0;  0  0  Aψ],    B̃q = [0  Bq  0  0;  I  0  0  0;  0  0  I  0],

    C̃ = [C   D Cψ   [Cψ; 0]],                  D̃ = [0   D   0   [I; 0]],    (32)
where Ã ∈ R^{ñ×ñ}, B̃q ∈ R^{ñ×d̃}, C̃ ∈ R^{(l+d)×ñ} and D̃ ∈ R^{(l+d)×d̃}, with ñ = 2nψ + n and d̃ = 2nψ + 2d. Let Π(X, B̃q, D̃) be an affine function of X defined as below

    Π(X, B̃q, D̃) = [Π11 Π12; Π12^T Π22] = [B̃q 0; D̃ Γ] diag(−X, X, I) [B̃q 0; D̃ Γ]^T,    (33)
where Π11 ∈ R^{ñ×ñ}, Π12 ∈ R^{ñ×(l+d)} and Π22 ∈ R^{(l+d)×(l+d)}. The following theorem, taken from [Paganini, 1997], can be used to calculate upper bounds on the robust H2 norm.
Theorem 2. If there exist a matrix X such that X(ω) in (27) satisfies X(ω) ∈ X, and Hermitian matrices P−, P+ ∈ R^{ñ×ñ}, Q ∈ R^{nψ×nψ}, W̃o ∈ R^{ñ×ñ}, such that

    P−, Q ≻ 0,

    [Aψ Q + Q Aψ^T   Q Cψ^T;   Cψ Q   0] − X ≺ 0,

    [Ã P− + P− Ã^T   P− C̃^T;   C̃ P−   0] − Π(X, B̃q, D̃) ≺ 0,

    [Ã P+ + P+ Ã^T   P+ C̃^T;   C̃ P+   0] − Π(X, B̃q, D̃) ≺ 0,    (34)

    [W̃o   I;   I   P+ − P−] ≻ 0,

    Tr{ [Bw; 0]^T W̃o [Bw; 0] } < γ²,

then X(ω) satisfies (28) and the system (∆ ∗ M) defined in (11) has robust H2 norm less than γ².

Proof: See [Paganini, 1997].
Theorem 2 includes the problem with constant scaling matrices as a special case. Let

    Â = A,   B̂q = [Bq  0],   Ĉ = C,   D̂ = [D  [I_d; 0]].    (35)

Then the following corollary is a restatement of Theorem 2 for constant scaling matrices, i.e. X(ω) = X.
Corollary 2. If there exist a matrix X ∈ X and symmetric matrices P−, P+, Z ∈ R^{n×n} such that

    P−, X ≻ 0,

    [Â P− + P− Â^T   P− Ĉ^T;   Ĉ P−   0] − Π(X, B̂q, D̂) ≺ 0,

    [Â P+ + P+ Â^T   P+ Ĉ^T;   Ĉ P+   0] − Π(X, B̂q, D̂) ≺ 0,    (36)

    [Z   I;   I   P+ − P−] ≻ 0,

    Tr{Bw^T Z Bw} < γ²,

then X(ω) = X satisfies (28) and the system (∆ ∗ M) defined in (11) has robust H2 norm less than γ².
Proof: See [Paganini, 1997].
Aside from upper bounds on the robust H2 norm of the system, Theorem 2 also provides additional information that will be used in the upcoming sections. This additional information is highlighted in the following lemma.
Lemma 4. Let P−, P+, X, Q and W̃o satisfy (34), and let C11 be defined as in (29). Then C11 ≻ 0 with spectral factor Ñ such that Ñ, Ñ^{−1} ∈ RH∞, i.e. C11 = Ñ Ñ^∗. Also let the scaled M be defined as

    M̂ = [X^{1/2}(ω) 0; 0 I] M [X^{−1/2}(ω) 0; 0 I],    (37)

and be partitioned as M̂ = [M̂1 M̂2]. Then ‖Ñ^{−1} M̂2‖²₂ < γ². A state-space realization for Ñ^{−1} M̂2 is given by

    Ñ^{−1} M̂2 = [ Ã − (Π12 − P− C̃^T) Π22^{−1} C̃  |  B̃w;   Π22^{−1/2} C̃  |  0 ],    (38)

where B̃w = [Bw; 0]. Moreover W̃o is the observability Gramian of Ñ^{−1} M̂2.
Proof: See [Paganini, 1997].
4 Gramian-based upper bound on the robust finite-frequency H2 norm

In this section the first method for calculating an upper bound on the robust finite-frequency H2 norm of the system in (6) is presented. The following theorem combines the ideas presented in Section 3.1, regarding the finite-frequency observability Gramian, with the results of Section 3.2, and computes the upper bound on the robust finite-frequency H2 norm for (6). Hereafter this method is referred to as Method 1.
Theorem 3. Let P−, P+, X, Q and W̃o be a solution to (34). Then

    sup_{∆∈B∆} ‖∆ ∗ M‖²_{2,ω̄} ≤ Tr{ [Bw; 0]^T ( L(Ā, ω̄)^∗ W̃o + W̃o L(Ā, ω̄) ) [Bw; 0] },    (39)

where L(·, ω̄) is defined in (18) and Ā = Ã − (Π12 − P− C̃^T) Π22^{−1} C̃.
Proof: See Appendix A.3.
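Once (34) has been solved by an SDP solver, evaluating the right-hand side of (39) is cheap. A hedged sketch, reusing the matrix logarithm of (18) and scalar stand-in data rather than an actual solution of (34):

```python
import numpy as np
from scipy.linalg import logm

def ff_h2_upper_bound(Wo_t, Abar, Bw_t, wbar):
    """Right-hand side of (39): Tr{Bw_t^T (L^* Wo_t + Wo_t L) Bw_t} with L = L(Abar, wbar)."""
    n = Abar.shape[0]
    I = np.eye(n)
    L = (1j / (2 * np.pi)) * logm((Abar + 1j * wbar * I) @ np.linalg.inv(Abar - 1j * wbar * I))
    return float(np.real(np.trace(Bw_t.T @ (L.conj().T @ Wo_t + Wo_t @ L) @ Bw_t)))

# Stand-in data: Abar = -1, Wo_t = 1/2, Bw_t = 1 reproduce arctan(wbar)/pi, i.e. 1/4 at wbar = 1.
bound = ff_h2_upper_bound(np.array([[0.5]]), np.array([[-1.0]]), np.array([[1.0]]), 1.0)
```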
As was mentioned in Section 3.2, by using dynamic scaling matrices and increasing their order, it is possible to reduce the conservativeness of the results. In order to further reduce the conservativeness of the bounds and improve the numerical properties of the optimization problems, it is useful to perform uncertainty partitioning. In this approach, the upper bound on the robust finite-frequency H2 norm of the system is computed for each of the uncertainty partitions, and the maximum of these bounds is taken as the final result.
5 Frequency gridding based upper bound on the robust finite-frequency H2 norm
In this section the second method to compute upper bounds on the robust finite-frequency H2 norm is presented. The following corollary to Theorem 1 plays a central role in the proposed algorithm.

Corollary 3. Let I(ωi) for i = 1, ..., m be disjoint frequency intervals such that ∪_{i=1}^{m} I(ωi) = [−ω̄, ω̄]. Also let the constant matrices Xi for i = 1, ..., m be scaling matrices for which M11^∗ Xi M11 + M21^∗ M21 − Xi ≺ 0 ∀ω ∈ I(ωi). Then it holds that

    sup_{∆∈B∆} ‖∆ ∗ M‖²_{2,ω̄} ≤ sup_{∆∈B∆} Σ_{i=1}^{m} ∫_{ω∈I(ωi)} Tr{(∆ ∗ M)^∗ (∆ ∗ M)} dω/2π
                               ≤ Σ_{i=1}^{m} ∫_{ω∈I(ωi)} Tr{Yi(ω)} dω/2π,    (40)

where Yi(ω) is defined as in (23), with X(ω) = Xi.
Corollary 3 provides a sketch for calculating upper bounds on the robust finite-frequency H2 norm via frequency gridding. However, calculating a suitable scaling matrix Xi requires checking M11^∗ Xi M11 + M21^∗ M21 − Xi ≺ 0 for an infinite number of frequencies in I(ωi). Next a method is proposed to resolve this issue.
Consider the following two LMIs

    M11(jω)^∗ X(ω) M11(jω) + M21(jω)^∗ M21(jω) − X(ω) ≺ 0,    (41)

    [M11(jω) 0; M21(jω) 0]^∗ X̄(ω) [M11(jω) 0; M21(jω) 0] − X̄(ω) ≺ 0.    (42)

Then X̄i = [Xi 0; 0 I] ∈ R^{(d+l)×(d+l)} satisfies (42) for ω = ωi if and only if Xi satisfies (41) for the same frequency.
The following theorem, taken from [Roos and Biannic, 2010], resolves the infinite dimensionality of the problem in Corollary 3 by providing a way to extend the validity of a scaling matrix that satisfies M11^∗ Xi M11 + M21^∗ M21 − Xi ≺ 0 for a single frequency, e.g. ω = ωi, to a frequency interval I(ωi).

Theorem 4. Let M̃ = [M11 0; M21 0] have a state-space realization [Ã B̃; C̃ D̃], and let D = X̄i^{1/2}, where X̄i satisfies the LMI in (42). Define

    G = AX − BX DX^{−1} CX,    (43)

where

    AX = [AG  0;  −CG^∗ CG  −AG^∗],   BX = [−BG;  CG^∗ DG],
    CX = [DG^∗ CG   BG^∗],            DX = I − DG^∗ DG,    (44)

in which

    [AG BG; CG DG] = [Ã − jωi I   B̃ D^{−1};   D C̃   D D̃ D^{−1}],    (45)

and define ωlow and ωhigh as

    ωlow  = −ωi,                               if jG has no positive real eigenvalue,
            max{λ ∈ R−: det(λI + jG) = 0},     otherwise,    (46)

    ωhigh = ∞,                                 if jG has no negative real eigenvalue,
            min{λ ∈ R+: det(λI + jG) = 0},     otherwise.    (47)

Then X̄i satisfies (42) ∀ω ∈ Ī(ωi) = [ωi + ωlow, ωi + ωhigh].
Proof: See Appendix A.4.
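det(λI + jG) = 0 exactly when λ is an eigenvalue of −jG, so (46)–(47) reduce to scanning the real eigenvalues of −jG. A sketch (the tolerance used to decide whether an eigenvalue is real is an assumption):

```python
import numpy as np

def valid_freq_range(G, wi, tol=1e-9):
    """Return (wlow, whigh) of (46)-(47) for a given matrix G and center frequency wi."""
    lam = np.linalg.eigvals(-1j * G)            # roots of det(lambda*I + jG) = 0
    real = lam.real[np.abs(lam.imag) < tol]
    neg, pos = real[real < 0], real[real > 0]
    wlow = neg.max() if neg.size else -wi
    whigh = pos.min() if pos.size else np.inf
    return wlow, whigh

# Example: G = diag(2j, -3j) gives eig(-jG) = {2, -3}, so wlow = -3 and whigh = 2.
wlow, whigh = valid_freq_range(np.diag([2j, -3j]), wi=10.0)
```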
Using Corollary 3 and Theorem 4, the following algorithm can be used for calculating an upper bound on the robust finite-frequency H2 norm. This algorithmic method is referred to as Method 2.

Algorithm 1. (Computation of an upper bound on the robust finite-frequency H2 norm)

(I) Divide the frequency interval of interest into a desired number of disjoint partitions, I(ωi), where ωi is the center of the respective partition.

(II) For each of the partitions, compute Xi such that it satisfies (41) for ω = ωi. In case there exists a partition for which there is no feasible solution, the system is not robustly stable and this method cannot be applied.

(III) Construct X̄i from the Xi achieved in (II).

(IV) Using Theorem 4, calculate the valid frequency range for the LMIs mentioned in (II). If the achieved frequency range does not cover the respective frequency partition, i.e. I(ωi) ⊄ Ī(ωi), go back to (I) and choose a finer partitioning of the frequency interval of interest.

(V) Define Yi(ω) using (23) with X(ω) = Xi.

(VI) Use numerical integration to calculate ∫_{ω∈I(ωi)} Tr{Yi(ω)} dω/2π.

(VII) By Corollary 3, calculate the upper bound by summing up the integrals computed in (VI).
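Steps (VI)–(VII) amount to a sum of numerical quadratures over the partitions. A minimal sketch using np.trapz, where the callable tr_Y stands in for ω ↦ Tr{Yi(ω)}:

```python
import numpy as np

def summed_bound(partitions, tr_Y, pts=2001):
    """Sum over partitions of the integral of tr_Y(w) dw/(2*pi), cf. steps (VI)-(VII)."""
    total = 0.0
    for (a, b) in partitions:
        w = np.linspace(a, b, pts)
        total += np.trapz([tr_Y(x) for x in w], w) / (2 * np.pi)
    return total

# Sanity check: a constant integrand 1 over [-1, 1] integrates to 1/pi.
val = summed_bound([(-1.0, 0.0), (0.0, 1.0)], lambda w: 1.0)
```

In practice tr_Y would evaluate (23) with the partition's Xi; the trapezoid rule is only one choice of quadrature, and finer grids are needed where Y(ω) peaks.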
The second step of Algorithm 1 requires computation of constant scaling matrices that satisfy (41) for ω = ωi for each of the partitions. This can be accomplished through different approaches. However, considering the expression in (40) and the importance of Tr{Yi(ω)} for the quality (closeness to the actual value) of the proposed upper bound in (40), it is intuitive to calculate the scaling matrices while aiming at minimizing Tr{Yi(ωi)}. The following two approaches utilize this in the process of computing suitable scaling matrices.
Approach 1. Compute Xi in Step (II) of Algorithm 1 as the solution of the following optimization problem

    minimize_{Xi, Yi}   Tr{Yi}
    subject to          (20) with ω = ωi.    (48)
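(48) is a standard SDP; the examples in Section 6 solve it with YALMIP/SDPT3. Independently of the solver used, a candidate Xi is easy to verify against (21)/(41) at a given frequency. A sketch:

```python
import numpy as np

def satisfies_41(M11, M21, Xi, margin=0.0):
    """Check the LMI (41) at one frequency: M11^* Xi M11 + M21^* M21 - Xi < -margin*I."""
    S = M11.conj().T @ Xi @ M11 + M21.conj().T @ M21 - Xi
    return np.linalg.eigvalsh((S + S.conj().T) / 2).max() < -margin

M11, M21 = np.array([[0.2]]), np.array([[0.3]])
ok = satisfies_41(M11, M21, np.eye(1))           # 0.04 + 0.09 - 1 < 0: feasible
bad = satisfies_41(M11, M21, 0.05 * np.eye(1))   # 0.002 + 0.09 - 0.05 > 0: infeasible
```

Such a check is also what step (IV) of Algorithm 1 guards against: feasibility at ωi alone does not imply feasibility over the whole partition.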
Remark 1. The idea of frequency gridding was also presented in [Paganini, 1999a], where the authors consider the H2 performance problem for discrete-time systems. In that paper, an optimization problem similar to (48) is formulated for frequencies 0 = ω0, ..., ωN = 2π, and the integral ∫_0^{2π} Tr{Y(ω)} dω/2π is then approximated by the following Riemann sum expression

    (1/2π) Σ_{i=1}^{N} Tr{Yi} (ωi − ωi−1),    (49)

where 0 = ω0 ≤ ... ≤ ωN = 2π. However, this approach does not necessarily provide a guaranteed upper bound on the robust H2 norm of the system.
For any Xi satisfying the LMI in (41) for ω = ωi, let

    f(α) = Tr{ M12^∗ αXi M12 + M22^∗ M22 − (M12^∗ αXi M11 + M22^∗ M21) ×
               (M11^∗ αXi M11 + M21^∗ M21 − αXi)^{−1} (M12^∗ αXi M11 + M22^∗ M21)^∗ }.    (50)

This function is convex with respect to α. Next, following the same objectives as in Approach 1, an alternative method for calculating suitable scaling matrices is introduced.
Approach 2. Compute Xi in Step (II) of Algorithm 1 using the following sequential method:

(I) Find Xi such that it satisfies the LMI in (41) for ω = ωi.

(II) Minimize f(α) in (50), with the achieved Xi, with respect to all α such that αXi still satisfies the LMI in (41) for ω = ωi.

Denote by α∗ the minimizing α. Then α∗ Xi is used within the remaining steps of Algorithm 1. In order to assure that α∗ Xi satisfies (41), the search for α should be subject to the constraint α > αmin, where

    αmin = 1 / min eig( [Λ^{−1/2} 0; 0 I] U (−M11^∗ Xi M11 + Xi) U^∗ [Λ^{−1/2} 0; 0 I] ),    (51)

in which U, a unitary matrix, and Λ are defined by the singular value decomposition

    M21^∗ M21 = U^∗ [Λ 0; 0 0] U.
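Since f is convex in α, step (II) is a one-dimensional convex search over (αmin, ∞). A sketch for scalar blocks (the numerical values are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

M11, M12, M21, M22, Xi = 0.2, 0.5, 0.3, 0.4, 1.0

def f(alpha):
    """Scalar version of f(alpha) in (50)."""
    c11 = M11 * alpha * Xi * M11 + M21 * M21 - alpha * Xi   # negative on the feasible set
    c21 = M12 * alpha * Xi * M11 + M22 * M21
    return M12 * alpha * Xi * M12 + M22 * M22 - c21 ** 2 / c11

# Feasibility of alpha*Xi in (41): 0.04*alpha + 0.09 - alpha < 0, i.e. alpha > 0.09375,
# which plays the role of alpha_min in (51) for this scalar example.
res = minimize_scalar(f, bounds=(0.094, 100.0), method="bounded")
```

Any one-dimensional convex solver works here; the bounded Brent method is simply a convenient stock choice.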
It is important to note that for some problems it might be required to perform many iterations between the first and the fourth steps of Algorithm 1. One of the ways to alleviate this issue, and even calculate better upper bounds, is to modify the proposed approaches by augmenting new constraints for other frequencies from the partition under investigation. In this case the cost function can also be modified accordingly. As an example, Approach 1 can be modified as follows
    minimize_{Xi, Yi}   Tr{Yi}
    subject to          M(jω)^∗ [Xi 0; 0 I] M(jω) − [Xi 0; 0 Yi] ⪯ 0
                        for ω = ωj ∈ I(ωi), j = 1, ..., Ni,    (52)

or alternatively as

    minimize_{Xi, Yi^j, j=1,...,Ni}   Σ_{j=1}^{Ni} Tr{Yi^j}
    subject to                        M(jω)^∗ [Xi 0; 0 I] M(jω) − [Xi 0; 0 Yi^j] ⪯ 0
                                      for ω = ωj ∈ I(ωi), j = 1, ..., Ni.    (53)
Similar to Method 1, uncertainty partitioning improves the quality of the calculated upper bound for this method too.

Remark 2. Note that although the calculated value of the upper bound using Algorithm 1 has a decreasing trend with respect to the number of partitions, this trend is not necessarily monotonic. This is due to the fact that the calculated upper bound depends not only on the number of partitions but also on the quality of the calculated scaling matrices and on how they affect the numerical integration procedure.
6 Numerical examples

In this section the proposed methods are tested on theoretical and practical examples. The chosen theoretical example can be solved analytically, i.e. the robust H2 norm for this example can be computed via routine calculations. The achieved results for this example are reported in Section 6.1.

As a practical example, an application to the comfort analysis problem for a civil aircraft model is discussed. Due to its more complex uncertainty structure, this example is computationally more challenging. Section 6.2 presents the analysis results for this example. It should be pointed out that all the computations, for both examples, are conducted using the Yalmip toolbox [Löfberg, 2004] with the SDPT3 solver [Toh et al., 1999]. The platform used for the simulations has a Dual-Core AMD Opteron™ Processor 270 as the CPU and 4 GB of RAM.
6.1 Theoretical Example

Consider the uncertain system in (6) with the following system matrices
Figure 2: Gain plot versus different values for the uncertain parameter.

−2.5
 0


A =  0

 0
0
 
0
5
 
Bw = 0 ,
 
0
5
0.5
−1
−0.5
0
0
0
−50
0.5
0
0
0
0
−5
0 −100
# 1

Cp
= 0
C=

Cz
1
"


0 
0.25

 0
0 



0  , Bq =  0


 0
100

0
0
0 0 0
0 0 0
0 0 0

0
0 ,

0

−0.5

0 
0  ,

0 
0
# 0

Dpq
D=
= 1
Dzq

0
"

0
0 .

0
(54)
In this example ∆(δ) = δI2 with −1 ≤ δ ≤ 1. This system is known to have robust H2 norm, as defined in (12), equal to 1.5311, which is attained for δ = 0.25.

Figure 2 illustrates the gain plots of the system for different values of the uncertain parameter. The aim is to calculate the robust finite-frequency H2 norm of the system and avoid the peak occurring at 100 rad/s. This is motivated by Figure 3, which presents the calculated finite-frequency H2 norm of the system in (54) with respect to different values of the uncertain parameter and frequency bounds. As can be seen from this figure and the jump at ω̄ = 100 rad/s, the contribution of this peak to the robust finite-frequency H2 norm cannot be neglected. In order to avoid this peak, the frequency bound that has been considered for this example is ω̄ = 50 rad/s. The actual value of the robust finite-frequency H2 norm for (54) with this frequency bound is 0.8919.
Method 1, presented in Section 4, utilizes the following class of dynamic scaling
Figure 3: Finite-frequency H2 norm versus different values of the uncertain parameter and frequency bounds.
matrices

    ψ(s) = [ I2/(s − p)^{nψ},  I2/(s − p)^{nψ−1},  ...,  I2/(s − p),  I2 ],    (55)
with p = 150, and via Theorem 3 it calculates the upper bound on the robust finite-frequency H2 norm of the system. For this particular example, dynamic scaling matrices of order higher than 3 do not produce any better upper bounds, so only scaling matrices up to order 3 are considered.

Method 2, presented in Section 5, has been applied to this example with Approaches 1 and 2. The number of frequency partitions is increased until either the performance matches that of Method 1 or the improvement in the computed upper bound is no longer discernible.

Figure 4 illustrates the achieved upper bounds for different frequency bounds, ω̄. The curve marked with the solid line reports the actual values of the robust finite-frequency H2 norm of the system. The dashed lines present the achieved upper bounds using Method 1. As can be seen from the figure, as the order of the dynamic scaling matrices increases, the computed upper bound becomes tighter. Note that the upper bounds computed using scaling matrices with nψ ≥ 1 are practically indistinguishable. The bounds presented with the dashed-dotted lines are the results achieved by applying Method 2 to this example. As can be seen from Figure 4, Method 2 with Approach 1 can produce better upper bounds than the second approach and can match the performance of Method 1 with 40 partitions. Figure 5 illustrates the calculated upper bounds on the system's gains, Y(ω), for different frequencies. Table 1 presents a summary of the achieved results.
Figure 4: Robust finite-frequency H2 norm and the computed upper bounds on it for different frequency bounds. The solid line illustrates the actual value of the robust finite-frequency H2 norm. The dashed and dashed-dotted lines represent the achieved upper bounds using Methods 1 and 2 for different orders of scaling matrix and different numbers of frequency partitions, respectively.
Figure 5: Magnitudes of ‖∆ ∗ M‖²₂ for different uncertainty values (solid lines), and the calculated upper bound at each frequency point. The dashed and dashed-dotted lines represent the achieved upper bounds using Methods 1 and 2 for different orders of scaling matrix and different numbers of frequency partitions, respectively.
So far the presented results have been achieved without any uncertainty partitioning. In order to illustrate the effect of uncertainty partitioning on the performance of the proposed methods, Method 1 and Method 2 with Approach 1 are applied to this example with uncertainty partitioning. Figures 6 and 7 present the achieved upper bounds on the robust finite-frequency H2 norm of the system with ω̄ = 50 rad/s using Methods 1 and 2, respectively. These figures illustrate the upper bound with respect to the number of uncertainty partitions and the order of dynamic scaling matrices, for Method 1, and the number of frequency grid points, for Method 2. As can be seen from the figures, and considering the actual robust finite-frequency H2 norm of the system, the computed upper bounds using both methods are extremely tight. A summary of the results from this analysis is presented in Tables 2 and 3.

As can be observed from Tables 2 and 3, although both methods produce equally tight upper bounds, Method 1 achieves this goal with lower computation time.
6.2 Comfort Analysis Application

The problem considered in this section involves a model of a civil aircraft, including both rigid and flexible modes. This model is to be used to evaluate the effect of wind turbulence on different points of the aircraft, which is referred to as comfort analysis. This problem can be reformulated as an H2 performance analysis problem for an extended system, including the model of the aircraft, a Von Karman filter modeling the wind spectrum, and an output filter accounting for the turbulence field [Papageorgiou et al., 2011]. In the provided aircraft model the uncertain parameter δ corresponds to the level of fullness of the fuel tanks and is normalized to vary within the range [−1, 1]. The overall extended system is presented in LFT form, as in (6), with n = 21 states and an uncertainty block of size d = 14.
The provided aircraft model is valid for frequencies up to 15 rad/s; beyond that it does not have any physical meaning [Roos, 2009]. This motivates performing finite-frequency H2 performance analysis limited to this frequency range.
Table 1: Numerical results for the theoretical example.

    Method                     Estimated upper bound    Elapsed time [sec]
    M.1, nψ = 0                1.2609                   11
    M.1, nψ = 1                1.1972                   10
    M.1, nψ = 2                1.1944                   12
    M.1, nψ = 3                1.1911                   13
    M.2, App. 1, npar = 40     1.189                    44
    M.2, App. 1, npar = 200    1.186                    144
    M.2, App. 2, npar = 200    1.3184                   552
Figure 6: The achieved upper bound on the robust finite-frequency H2 norm with ω̄ = 50 rad/s with respect to the number of uncertainty partitions and the order of dynamic scaling matrices.
Figure 7: The achieved upper bound on the robust finite-frequency H2 norm with ω̄ = 50 rad/s with respect to the number of uncertainty partitions and the number of frequency grid points.
Figure 8: Gain plot versus different values for the uncertain parameter.
Figure 8 illustrates the gain plots of the system under consideration as a function of frequency. Different curves in this figure correspond to different uncertainty values. As can be seen from the figure, the frequency bound at 15 rad/s is necessary to avoid the peak at approximately 20 rad/s, which is outside the validity range of the model.

The methods considered for performing comfort analysis are Methods 1 and 2, with the use of constant scaling matrices and Approach 1, respectively. Tables 4 and 5 summarize the achieved results using Methods 1 and 2, respectively. As can be seen from the tables, both methods are equally accurate in estimating the robust finite-frequency H2 norm of the system. However, in contrast to the example in Section 6.1, Method 2 is faster in calculating the upper bound with equal accuracy.

Similar to Section 6.1, it is possible to improve the computed upper bounds via uncertainty partitioning. This can be observed from Tables 4 and 5.
Table 2: Numerical results for the theoretical example using Method 1.

    nψ    No. uncer. par.    Estimated upper bound    Elapsed time [sec]
    2     1                  1.1944                   12
    2     20                 0.8928                   434
Table 3: Numerical results for the theoretical example using Method 2.

    No. freq. grids    No. uncer. par.    Estimated upper bound    Elapsed time [sec]
    20                 1                  1.1945                   30
    20                 20                 0.8924                   532
Table 4: Numerical results for the comfort analysis example using Method 1.

    nψ    No. uncer. par.    Estimated upper bound    Elapsed time [h]
    0     50                 1.2434                   8.62
    0     450                0.7970                   59.24
7 Discussion and general remarks

This section highlights the advantages and disadvantages of the proposed methods and provides insight on how to improve their performance considering the characteristics of the problem at hand.
7.1 The observability Gramian based method

This method considers the frequency interval of interest as a whole and calculates an upper bound on the robust finite-frequency H2 norm of the system in one shot, i.e. by solving a single SDP. However, the dimension of this optimization problem grows rapidly with the number of states and/or the size of the uncertainty block. This limits the capabilities of this method in handling medium or large sized problems, i.e. analysis of systems with a high number of states or large uncertainty blocks.
The most apparent possibility for improving the accuracy of the computed upper bound using this method is to increase the order of the dynamic scaling matrices. This comes at the cost of rapidly increasing the number of optimization variables in the underlying SDP, and affects the computational tractability of the method. Another way of improving the computed upper bound is to perform uncertainty partitioning, which proved to be effective in the examples presented in Section 6. However, this improvement comes at the cost of a much higher computational burden, see Table 4.
7.2 The frequency gridding based method

This method starts with an initial partitioning of the desired frequency interval and calculates the upper bound on the robust finite-frequency H2 norm by solving the corresponding SDP for each of the partitions.

The size of the underlying SDPs in this method is smaller than in the previous
Table 5: Numerical results for the comfort analysis example using Method 2.

    No. freq. grids    No. uncer. par.    Estimated upper bound    Elapsed time [h]
    80                 1                  1.2382                   0.5611
    80                 10                 0.7911                   4.25
method, and is mainly dependent on the size of the uncertainty block. Consequently, this method can handle larger problems. However, for large problems the algorithm might require some iterations between steps (IV) and (I) to be able to produce consistent results. Another issue with this method is the requirement to perform numerical integration of a rational function in step (VI) of the algorithm. This can become slightly problematic for high order systems.

There are two main ways to improve the computed upper bounds using this method, namely increasing the number of partitions, and augmenting the SDP for each partition with more constraints for other frequency points in the partition and/or adding more variables to the SDPs corresponding to the partitions. This proved to scale better in terms of computation time as compared to Method 1, see Table 5.
8 Conclusion

This paper has provided two methods for calculating upper bounds on the robust finite-frequency H2 norm. Throughout the paper, different guidelines for improving the performance of the proposed methods have been presented, and their effectiveness has been illustrated using both a theoretical and a practical example. The proposed methods consider different formulations for calculating a consistent upper bound on the robust finite-frequency H2 norm. Due to this, although both methods can produce equally tight upper bounds, they have different computational properties. Method 1 is more suitable for small-sized problems and produces results faster than the second method for this type of problem. On the other hand, Method 2 can handle larger problems and produces results more rapidly for this class of problems.
A Appendix

A.1 Proof of Lemma 2

Let

    C11 = M11^∗ X(ω) M11 + M21^∗ M21 − X(ω),
    C12 = M11^∗ X(ω) M12 + M21^∗ M22,
    C21 = M12^∗ X(ω) M11 + M22^∗ M21,
    C22 = M12^∗ X(ω) M12 + M22^∗ M22.    (56)
Then the left hand side of Condition 1 can be written as

    M(jω)^∗ [X(ω) 0; 0 I] M(jω) − [X(ω) 0; 0 Y(ω)] = [C11  C12;  C21  C22 − Y(ω)].    (57)

Now if we assume that there exists X(ω) ∈ X such that C11 ≺ 0, then Lemma 2 is the direct outcome of Schur's lemma.
A.2 Proof of Theorem 1

If the assumptions of the theorem are satisfied, then by Lemma 2, Condition 1 is valid, i.e. (20) holds. Define

    M̂ = [X^{1/2}(ω) 0; 0 I] M [X^{−1/2}(ω) 0; 0 I].    (58)

Then (20) can be rewritten as

    M̂^∗ M̂ − [I 0; 0 Y(ω)] ⪯ [−ǫI 0; 0 0].    (59)

As a result

    M̂^∗ M̂ ⪯ [I 0; 0 Y(ω)].    (60)

Define q̄(jω) = X(ω)^{1/2} q(jω) and p̄(jω) = X(ω)^{1/2} p(jω). By pre- and post-multiplying both sides of (60) by [q̄(jω)^∗ w(jω)^∗] and [q̄(jω); w(jω)], respectively, we have

    |z(jω)|² + |p̄(jω)|² ≤ |q̄(jω)|² + w(jω)^∗ Y(ω) w(jω).    (61)

For all frequencies ∆ commutes with X(ω)^{−1/2}, and hence q̄ = X^{1/2} q = X^{1/2} ∆ X^{−1/2} p̄ = ∆ p̄. Considering the fact that ∆ ∈ B∆, it now follows from (11) and (61) that

    |z(jω)|² = w(jω)^∗ (∆ ∗ M)(jω)^∗ (∆ ∗ M)(jω) w(jω) ≤ w(jω)^∗ Y(ω) w(jω),    (62)

which completes the proof.
A.3 Proof of Theorem 3
Let P_−, P_+, X, Q and W̃_o satisfy (34). Define

Ỹ = (Ñ^{-1} M̂_2)* (Ñ^{-1} M̂_2) = M̂_2* C11^{-1} M̂_2 ⪰ 0,   (63)

where Ñ and M̂_2 are defined in Lemma 4. From (63), C11 ≻ 0. If we set Y = M̂_2* C11^{-1} M̂_2, by Schur's lemma it follows that

[−C11 M̂_2; M̂_2* −Y] ⪯ 0.   (64)

By replacing X(ω) with X(ω)^{-1} in (58) and using Lemma 3, (64) is equivalent to (59). In other words

M(jω)* [X(ω)^{-1} 0; 0 I] M(jω) − [X(ω)^{-1} 0; 0 Y(ω)] ⪯ [−ǫI 0; 0 0].   (65)

By (65) and the same arguments as in the proof of Theorem 1, (∆ ∗ M)(jω)* (∆ ∗ M)(jω) ⪯ Y(ω) for all ω and all ∆ ∈ B∆. As a result, by using Lemmas 1 and 4, (39) follows.
A.4 Proof of Theorem 4
Consider the LMI in (42) with X̄(ω) = X̄_i. This LMI can be rewritten as

X̄_i^{-1/2} M̃* X̄_i^{1/2} X̄_i^{1/2} M̃ X̄_i^{-1/2} − I ≺ 0.   (66)

Let G(jω) = X̄_i^{1/2} M̃(j(ω + ω_i)) X̄_i^{-1/2}. It now follows that G has a state-space realization G = [A_G B_G; C_G D_G]. In this theorem we are looking for the largest frequency interval for which the LMI in (66) is valid. On the boundary of this interval I − G(jω)* G(jω) becomes singular, i.e. det(I − G(jω)* G(jω)) = 0.

By (44) and (45), I − G(jω)* G(jω) = [A_X B_X; C_X D_X]. Using Sylvester's determinant lemma and some simple matrix manipulations we have

det(I − G(jω)* G(jω)) = 0 ⇔
det(I + D_X^{-1/2} C_X (jωI − A_X)^{-1} B_X D_X^{-1/2}) = 0 ⇔
det(I + (jωI − A_X)^{-1} B_X D_X^{-1} C_X) = 0.   (67)

By using the matrix determinant lemma and the definition of G, it is also straightforward to establish the equivalence between the following expressions

det(I + (jωI − A_X)^{-1} B_X D_X^{-1} C_X) = 0 ⇔
det(jωI − (A_X − B_X D_X^{-1} C_X)) = 0 ⇔ det(ωI + j(A_X − B_X D_X^{-1} C_X)) = 0,   (68)

which completes the proof.
Bibliography
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. Society for Industrial and Applied Mathematics, 1994.

R. Caracciolo, D. Richiedei, A. Trevisani, and V. Zanotto. Robust mixed-norm position and vibration control of flexible link mechanisms. Mechatronics, 15(7):767–791, 2005.

G. Chesi, A. Garulli, A. Tesi, and A. Vicino. Homogeneous Polynomial Forms for Robustness Analysis of Uncertain Systems. Springer, 2009.

J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831–847, Aug. 1989.

A. Garulli, A. Hansson, S. Khoshfetrat Pakazad, A. Masi, and R. Wallin. Robust finite-frequency H2 analysis of uncertain systems. Technical Report LiTH-ISY-R-3011, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, May 2011.

W. Gawronski. Advanced Structural Dynamics and Active Control of Structures. Springer Verlag, 2000.

A. Giusto. H∞ and H2 robust design techniques for static prefilters. In Proceedings of the 35th IEEE Conference on Decision and Control, volume 1, pages 237–242, Dec. 1996.

J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. 2004. URL http://users.isy.liu.se/johanl/yalmip.

G. Marro and E. Zattoni. H2-optimal rejection with preview in the continuous-time domain. Automatica, 41(5):815–821, 2005.

A. Masi, R. Wallin, A. Garulli, and A. Hansson. Robust finite-frequency H2 analysis. In Proceedings of the 49th IEEE Conference on Decision and Control, 2010.

F. Paganini. State space conditions for robust H2 analysis. In Proceedings of the American Control Conference, volume 2, pages 1230–1234, Jun. 1997.

F. Paganini. Frequency domain conditions for robust H2 performance. IEEE Transactions on Automatic Control, 44(1):38–49, Jan. 1999a.

F. Paganini. Convex methods for robust H2 analysis of continuous-time systems. IEEE Transactions on Automatic Control, pages 239–252, Feb. 1999b.

F. Paganini and E. Feron. Linear matrix inequality methods for robust H2 analysis: A survey with comparisons. In Advances in Linear Matrix Inequality Methods in Control. Society for Industrial and Applied Mathematics, 2000.

S. Khoshfetrat Pakazad, A. Hansson, and A. Garulli. On the calculation of the robust finite frequency H2 norm. In Proceedings of the 18th IFAC World Congress, 2011.

Ch. Papageorgiou, R. Falkeborn, and A. Hansson. IQC-based analysis techniques of clearance. In Optimization Based Clearance of Flight Control Laws: A Civil Aircraft Application. Springer Verlag, 2011.

C. Roos. Generation of flexible aircraft LFT models for robustness analysis. In Proceedings of the 6th IFAC Symposium on Robust Control Design, 2009.

C. Roos and J. Biannic. Efficient computation of a guaranteed stability domain for a high-order parameter dependent plant. In Proceedings of the American Control Conference, pages 3895–3900, Jun. 2010.

A. A. Stoorvogel. The robust H2 control problem: a worst-case design. IEEE Transactions on Automatic Control, 38(9):1358–1371, Sep. 1993.

M. Sznaier, T. Amishima, P. A. Parrilo, and J. Tierno. A convex approach to robust performance analysis. Automatica, 38(6):957–966, 2002.

K. C. Toh, M. J. Todd, and R. H. Tütüncü. SDPT3: a MATLAB software package for semidefinite programming. Optimization Methods and Software, 11:545–581, 1999.

E. Zattoni. Detection of incipient failures by using an H2-norm criterion: Application to railway switching points. Control Engineering Practice, 14(8):885–895, 2006.
Paper B
Decomposition and Projection
Methods for Distributed Robustness
Analysis of Interconnected Uncertain
Systems
Authors:
Sina Khoshfetrat Pakazad, Martin S. Andersen, Anders Hansson and
Anders Rantzer
Published as
S. Khoshfetrat Pakazad, A. Hansson, M. S. Andersen, and A. Rantzer. Decomposition and projection methods for distributed robustness analysis of interconnected uncertain systems. Technical Report LiTH-ISY-R-3033, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, Nov. 2011.
Decomposition and Projection Methods for
Distributed Robustness Analysis of
Interconnected Uncertain Systems
Sina Khoshfetrat Pakazad*, Martin S. Andersen*, Anders Hansson* and Anders Rantzer**

*Dept. of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

**Dept. of Electrical Engineering, Lund University, Lund, Sweden
[email protected]
Abstract
We consider a class of convex feasibility problems where the constraints
that describe the feasible set are loosely coupled. These problems
arise in robust stability analysis of large, weakly interconnected systems. To facilitate distributed implementation of robust stability analysis of such systems, we propose two algorithms based on decomposition and simultaneous projections. The first algorithm is a nonlinear
variant of Cimmino’s mean projection algorithm, but by taking the
structure of the constraints into account, we can obtain a faster rate
of convergence. The second algorithm is devised by applying the alternating direction method of multipliers to a convex minimization
reformulation of the convex feasibility problem. We use numerical results to show that both algorithms require far fewer iterations than the
accelerated nonlinear Cimmino algorithm.
1 Introduction
Distributed and parallel solutions for control problems are of great importance.
Prioritizing such approaches over centralized solutions is mainly motivated
by structural or computational reasons, [Axehill and Hansson, 2011, Langbort et al.,
2004, Venkat et al., 2008, Sandell et al., 1978, Dochain et al., 2003]. Such solutions are common in the control of distributed interconnected systems, which
appear in many practical applications, [Dochain et al., 2003, Bayen et al., 2004,
Jovanovic et al., 2007, Hansen et al., 2006, Cantoni et al., 2007, Stewart et al., 2003,
Langbort et al., 2004]. Considering the fact that many of the control design methods are model based, they are vulnerable to model uncertainty. Some of these design methods account for the model uncertainty, but many, especially the ones employed in industry, neglect it. As a result, for such methods it is important to address the stability of the closed-loop system with respect to the model uncertainty.
Methods for robust stability analysis of uncertain systems have been studied thoroughly, e.g., see [Zhou et al., 1997, Zhou and Doyle, 1998, Skogestad and Postlethwaite, 2007, Megretski and Rantzer, 1997]. One such method is the so-called µ-analysis, an approximation of which solves a Convex Feasibility Problem, CFP, for
each frequency on a finite grid of frequencies. For networks of interconnected uncertain systems, each CFP is a global feasibility problem that involves the entire
network. In other words, each CFP depends on all the local models, i.e., the models associated with the subsystems in the network, as well as the network topology. In networks where all the local models are available to a central unit and the
size of the problem is not computationally prohibitive, each feasibility problem
can be solved efficiently in a centralized manner. However, distributed or parallel
methods are desirable in networks, for instance, where the interconnected system dimension is large or where the local models are either private or available
to only a small number of subsystems in the network, [Fang and Antsaklis, 2008,
Rice and Verhaegen, 2009a, VanAntwerp et al., 2001, Grasedyck et al., 2003]. In
[Fang and Antsaklis, 2008, Rice and Verhaegen, 2009a, VanAntwerp et al., 2001],
for specific network structures, the system matrix is diagonalized and decoupled using decoupling transformations. This decomposes the analysis and design problems and makes it possible to devise distributed and parallel
solutions. An overview of such algorithms for analysis of systems governed by
Partial Differential Equations, PDEs, is given in [Rice and Verhaegen, 2009b].
In this paper, we use decomposition methods to facilitate distributed robust stability analysis of large networks of weakly interconnected uncertain systems. Decomposition allows us to reformulate a single global feasibility constraint x ∈ C ⊂
Rn involving m uncertain systems as a set of N < m loosely coupled constraints
x ∈ C_i ≡ {x ∈ R^n | f_i(x) ⪯_{K_i} 0},   i = 1, . . . , N   (1)

where f_i(x) ⪯_{K_i} 0 is a linear conic inequality with respect to a convex cone K_i.
We will assume that f i depends only on a small subset of variables {xk | k ∈ Ji },
Ji ⊆ {1, . . . , n}. The number of constraints N is less than the number of systems m
in the network, and hence the functions f i generally involve problem data from
more than one subsystem in the network. We will henceforth assume that each
f i is described in terms of only a small number of local models such that the
Euclidean projection PCi (x) of x onto Ci involves just a small group of systems
in the network. This assumption is generally only valid if the network is weakly
interconnected.
One algorithm that is suited for distributed solution of the CFP is the nonlinear Cimmino algorithm, [Censor and Elfving, 1981, Cimmino], which is also known as the Mean Projection Algorithm. This algorithm is a fixed-point iteration which takes as the next iterate x^(k+1) a convex combination of the projections P_{C_i}(x^(k)), i.e.,

x^(k+1) := Σ_{i=1}^{N} α_i P_{C_i}(x^(k))   (2)

where Σ_{i=1}^{N} α_i = 1 and α_1, . . . , α_N > 0. Notice that each iteration consists of two
steps: a parallel projection step which is followed by a consensus step that can
be solved by means of distributed weighted averaging, e.g.,[Nedic and Ozdaglar,
2009, Tsistsiklis, 1984, Tsitsiklis et al., 1986, Lin and Boyd, 2003, Rajagopalan and Shah,
2011]. Iusem & De Pierro [Iusem and De Pierro, 1986] have proposed an accelerated variant of the nonlinear Cimmino algorithm that takes as the next iterate a
convex combination of the projections of x^(k) onto only the sets for which x^(k) ∉ C_i.
This generally improves the rate of convergence when only a few constraints are
violated. However, the nonlinear Cimmino algorithm and its accelerated variant
may take unnecessarily conservative steps when the sets Ci are loosely coupled.
We will consider two algorithms that can exploit this type of structure, and both
algorithms are closely related to the nonlinear Cimmino in that each iteration
consists of a parallel projection step and a consensus step.
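The mean projection iteration (2) can be sketched as follows; a minimal numpy example with Euclidean projections onto two halfspaces and uniform weights (the sets and weights here are illustrative, not from the paper):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {y | a^T y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def cimmino(x0, projections, weights, iters=200):
    """Nonlinear Cimmino iteration (2): x = sum_i alpha_i * P_{C_i}(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = sum(w * P(x) for w, P in zip(weights, projections))
    return x

# toy problem: two halfspaces x1 <= 1 and x2 <= 1, uniform weights
P1 = lambda x: proj_halfspace(x, np.array([1.0, 0.0]), 1.0)
P2 = lambda x: proj_halfspace(x, np.array([0.0, 1.0]), 1.0)
x_star = cimmino(np.array([5.0, 5.0]), [P1, P2], [0.5, 0.5])
```

Each sweep performs all projections in parallel and then averages, which is exactly the two-step (projection plus consensus) structure described above.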
The first algorithm that we consider is equivalent to the von Neumann Alternating Projection (AP) algorithm, [von Neumann, 1950, Bregman, 1965], in a product space E = R^{|J_1|} × · · · × R^{|J_N|} of dimension Σ_{i=1}^{N} |J_i|. As a consequence, this
algorithm converges at a linear rate under mild conditions, and its behavior is
well-understood also when the CFP is infeasible [Bauschke and Borwein, 1994].
Using the ideas from [Boyd et al., 2011, Bertsekas and Tsitsiklis, 1997], we also
show how this algorithm can be implemented in a distributed manner.
A CFP can also be formulated as a convex minimization problem which can
be solved with distributed optimization algorithms; see e.g. [Nedic et al., 2010,
Boyd et al., 2011, Bertsekas and Tsitsiklis, 1997, Tsistsiklis, 1984]. The second
algorithm that we consider is the Alternating Direction Method of Multipliers,
ADMM, [Glowinski and Marroco, 1975, Gabay and Mercier, 1976, Boyd et al., 2011,
Bertsekas and Tsitsiklis, 1997], applied to a convex minimization formulation of
the CFP. Unlike AP, ADMM also makes use of dual variables, and when applied
to the CFP, it is equivalent to Dykstra’s alternating projection method, [Dykstra,
1983, Bauschke and Borwein, 1994], in the product space E. Although there exist problems for which Dykstra’s method is much slower than the classical AP
algorithm, it generally outperforms the latter in practice.
Outline
The paper is organized as follows. In Section 2, we present a product space formulation of CFPs with loosely coupled sets, and we propose an algorithm based
on von Neumann AP method. In Section 3, we consider a convex minimization
reformulation of the CFP, and we describe an algorithm based on the ADMM.
We discuss distributed implementation of both algorithms in Section 4, and in
Section 5, we consider distributed solution of the robust stability analysis problem. We present numerical results in Section 6, and we conclude the paper in
Section 7.
Notation
We denote with Np the set of positive integers {1, 2, . . . , p}. Given a set J ⊂ Nn , the
matrix EJ ∈ R|J|×n is the 0-1 matrix that is obtained from an identity matrix of
order n by deleting the rows indexed by Nn \ J. This means that EJ x is a vector
with the components of x that correspond to the elements in J. The distance from
a point x ∈ Rn to a set C ⊆ Rn is denoted as dist(x, C), and it is defined as
dist(x, C) = inf x − y .
(3)
y∈C
2
Similarly, the distance between two sets C1 , C2 ⊆ Rn is defined as
dist(C , C ) =
inf x − y .
1
2
2
y∈C1 ,x∈C2
(4)
The relative interior of a set C is denoted relint(C), and D = diag(a1 , . . . , an ) is a
diagonal matrix of order n with diagonal entries Dii = ai . The column space of a
matrix A is denoted C(A).
2 Decomposition and projection methods

2.1 Decomposition and convex feasibility in product space
Given N loosely coupled sets C_1, . . . , C_N, as defined in (1), we define N lower-dimensional sets

C̄_i = {s_i ∈ R^{|J_i|} | E_{J_i}^T s_i ∈ C_i},   i = 1, . . . , N   (5)

such that s_i ∈ C̄_i implies E_{J_i}^T s_i ∈ C_i. With this notation, the standard form CFP can be rewritten as

find s_1, s_2, . . . , s_N, x
subject to s_i ∈ C̄_i, i = 1, . . . , N
           s_i = E_{J_i} x, i = 1, . . . , N   (6)

where the equality constraints are the so-called coupling constraints that ensure that the variables s_1, . . . , s_N are consistent with one another. In other words, if the sets C_i and C_j (i ≠ j) depend on x_k, then the kth component of E_{J_i}^T s_i and E_{J_j}^T s_j must be equal.

The formulation (6) decomposes the global variable x into N coupled variables s_1, . . . , s_N. This allows us to rewrite the problem as a CFP with two sets

find S
subject to S ∈ C, S ∈ D   (7)

where

S = (s_1, . . . , s_N) ∈ R^{|J_1|} × · · · × R^{|J_N|}
C = C̄_1 × · · · × C̄_N
D = {Ēx | x ∈ R^n}
Ē = [E_{J_1}^T · · · E_{J_N}^T]^T.
The formulation in (7) can be thought of as a “compressed” product space formulation of a CFP with the constraints in (1), and it is similar to the consensus
optimization problems described in [Boyd et al., 2011, Sec. 7.2].
2.2 Von Neumann's alternating projection in product space

The problem (7) can be solved using von Neumann's AP method. Given a CFP with two sets

find x
subject to x ∈ A, x ∈ B   (8)

and a starting point y^(0) = x_0, von Neumann's AP method computes the two sequences [Bauschke and Borwein, 1993]

z^(k+1) = P_A(y^(k)),   (9a)
y^(k+1) = P_B(z^(k+1)).   (9b)

If the CFP is feasible, i.e., A ∩ B ≠ ∅, then both sequences converge to a point in A ∩ B. We discuss infeasible problems at the end of this section. If we define X = Ēx, applying the von Neumann AP algorithm to the CFP (7) results in the update rule

S^(k+1) = P_C(X^(k)) = (P_{C̄_1}(E_{J_1} x^(k)), . . . , P_{C̄_N}(E_{J_N} x^(k)))   (10a)
X^(k+1) = Ē (Ē^T Ē)^{-1} Ē^T S^(k+1) = Ē x^(k+1),   (10b)

where (10a) is the projection onto C, and (10b) is the projection onto the column space of Ē. The projection onto the set C can be computed in parallel by N computing agents, i.e., agent i computes s_i^(k) = P_{C̄_i}(E_{J_i} x^(k)), and the second projection can be interpreted as a consensus step that can be solved via distributed averaging.
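The update rule (10) can be sketched as follows; a minimal numpy example in which box sets stand in for the C̄_i (the index sets J_i and the sets are toy data, not from the paper):

```python
import numpy as np

# two coupled constraints on x in R^3: C1 acts on (x1, x2), C2 on (x2, x3)
J = [np.array([0, 1]), np.array([1, 2])]      # index sets J_i (0-based)
E = [np.eye(3)[Ji, :] for Ji in J]            # selector matrices E_{J_i}
Ebar = np.vstack(E)

def proj_box(s, lo, hi):
    """Projection onto a box, standing in for the local sets."""
    return np.clip(s, lo, hi)

x = np.array([10.0, -10.0, 10.0])
for _ in range(100):
    # (10a): parallel projections onto the local sets
    S = np.concatenate([proj_box(E[0] @ x, 0.0, 1.0),
                        proj_box(E[1] @ x, 0.5, 2.0)])
    # (10b): consensus step, projection onto the column space of Ebar
    x = np.linalg.solve(Ebar.T @ Ebar, Ebar.T @ S)
```

Since Ē^T Ē is diagonal (see (31) below in the paper), the consensus step amounts to averaging the local copies of each shared variable.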
2.3 Convergence

Gubin et al. [Gubin et al., 1967] showed that a sequence of cyclic projections converges linearly to some x* ∈ C = ∩_i C_i ≠ ∅ if the set {C_1, . . . , C_N} is boundedly linearly regular, i.e., for every bounded set B in R^n there exists θ_B > 0 such that

dist(x, C) ≤ θ_B max_{1≤i≤N} {dist(x, C_i)},   ∀x ∈ B.   (11)

It was later shown by Bauschke et al. [Bauschke et al., 1999] and Beck & Teboulle [Beck and Teboulle, 2003] that Slater's condition for the CFP implies bounded linear regularity, i.e., (11) holds and hence the cyclic projection algorithm converges linearly if

(∩_{i=1}^{k} C_i) ∩ (∩_{i=k+1}^{N} relint(C_i)) ≠ ∅   (12)

where C_1, . . . , C_k are polyhedral sets and C_{k+1}, . . . , C_N are general closed, convex sets. For two sets, cyclic projections is equivalent to alternating projections, and when {A, B} is boundedly linearly regular, it follows from [Beck and Teboulle, 2003, Thm. 2.2] that

dist(x^(k+1), C) ≤ γ_B dist(x^(k), C)   with   γ_B = sqrt(1 − 1/θ_B^2)   (13)

where C = A ∩ B and θ_B is a positive constant that depends on the starting point x^(0) and that satisfies (11) with B = {x | ‖x − z‖ ≤ ‖x^(0) − z‖} for any z ∈ C.

For infeasible problems (i.e., A ∩ B = ∅), the iterates satisfy

y^(k) − z^(k), y^(k) − z^(k+1) → v,   ‖v‖_2 = dist(A, B).   (14)

Moreover, the sequences y^(k) and z^(k) converge if the distance dist(A, B) is attained, and otherwise ‖y^(k)‖_2 → ∞ and ‖z^(k)‖_2 → ∞ [Bauschke and Borwein, 1994, 1993]. This means that we can use the difference v^(k) = y^(k) − z^(k) to detect infeasibility. If the set C in the CFP in (7) is a closed set, it is possible to use the statement in (14) for detecting the infeasibility of the CFP in (6). Note that this is the case for the problem described in Section 5.
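The behavior in (14) can be illustrated with two disjoint intervals on the real line (a toy example, not from the paper), where the gap v settles at the distance between the sets:

```python
import numpy as np

# A = [-2, -1] and B = [1, 2] are disjoint, with dist(A, B) = 2
def P_A(x):
    return np.clip(x, -2.0, -1.0)

def P_B(x):
    return np.clip(x, 1.0, 2.0)

y = 5.0
for _ in range(50):
    z = P_A(y)          # (9a)
    y = P_B(z)          # (9b)
v = y - z               # converges, with |v| = dist(A, B)
```

Monitoring this difference is the practical infeasibility test used later in Section 4.2.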
3 Convex minimization reformulation

The CFP (6) is equivalent to the convex minimization problem

minimize Σ_{i=1}^{N} g_i(s_i)
subject to s_i = E_{J_i} x, i = 1, . . . , N,   (15)

with variables S and x, where

g_i(s_i) = { ∞, s_i ∉ C̄_i;  0, s_i ∈ C̄_i }   (16)

is the indicator function for the set C̄_i. In the following, we apply the ADMM to the problem (15).
3.1 Solution via Alternating Direction Method of Multipliers
Consider the following equality constrained convex optimization problem

minimize F(x)
subject to Ax = b   (17)

with variable x ∈ R^n, and where A ∈ R^{m×n}, b ∈ R^m, and F : R^n → R. The augmented Lagrangian for this problem can be expressed as

L_ρ(x, λ) = F(x) + λ^T(Ax − b) + (ρ/2)‖Ax − b‖_2^2,   (18)

where ρ > 0 and λ ∈ R^m. This can also be rewritten in the normalized form

L̄_ρ(x, λ̄) = F(x) + (ρ/2)‖Ax − b + λ̄‖_2^2 − (ρ/2)‖λ̄‖_2^2,   (19)

where L̄_ρ(x, λ̄) is referred to as the normalized augmented Lagrangian, and λ̄ = λ/ρ is the so-called normalized Lagrange variable.

A general convex optimization problem of the form

minimize F(S)
subject to S ∈ A   (20)

can be cast as a problem of the form (17) by means of the indicator function g(S) for the set A, i.e.,

minimize F(S) + g(q)
subject to S − q = 0,   (21)

where q is an auxiliary variable. The normalized augmented Lagrangian for this problem is given by

L̄_ρ(S, q, λ̄) = F(S) + g(q) + (ρ/2)‖S − q + λ̄‖_2^2 − (ρ/2)‖λ̄‖_2^2.   (22)

The problem in (21) can be solved by applying the ADMM to (22); see e.g. [Boyd et al., 2011] and [Bertsekas and Tsitsiklis, 1997, Sec. 3.4]. This results in the following iterative algorithm

S^(k+1) = argmin_S { F(S) + (ρ/2)‖S − q^(k) + λ̄^(k)‖_2^2 },
q^(k+1) = argmin_q { g(q) + (ρ/2)‖S^(k+1) − q + λ̄^(k)‖_2^2 } = P_A(S^(k+1) + λ̄^(k)),
λ̄^(k+1) = λ̄^(k) + S^(k+1) − q^(k+1).
The problem (15) can be solved using a similar approach. The normalized augmented Lagrangian for this problem is given by

L̄_ρ(S, x, λ̄) = Σ_{i=1}^{N} [ g_i(s_i) + (ρ/2)‖s_i − E_{J_i}x + λ̄_i‖_2^2 − (ρ/2)‖λ̄_i‖_2^2 ],   (23)

where λ̄_i ∈ R^{|J_i|} and λ̄ = (λ̄_1, . . . , λ̄_N). If we apply the ADMM to (23), we obtain the following iterative method

S^(k+1) = argmin_S { Σ_{i=1}^{N} ( g_i(s_i) + (ρ/2)‖s_i − E_{J_i}x^(k) + λ̄_i^(k)‖_2^2 ) },
x^(k+1) = argmin_x { Σ_{i=1}^{N} (ρ/2)‖s_i^(k+1) − E_{J_i}x + λ̄_i^(k)‖_2^2 },   (24)
λ̄_i^(k+1) = λ̄_i^(k) + (s_i^(k+1) − E_{J_i}x^(k+1)).

The update of S can be decomposed into N subproblems

s_i^(k+1) = argmin_{s_i} { g_i(s_i) + (ρ/2)‖s_i − E_{J_i}x^(k) + λ̄_i^(k)‖_2^2 } = P_{C̄_i}(E_{J_i}x^(k) − λ̄_i^(k)),   (25)

and the update of x can be computed as

x^(k+1) = (Ē^T Ē)^{-1} Ē^T (S^(k+1) + λ̄^(k)).   (26)

Using the definition X = Ēx, we can write the update of X as

X^(k+1) = Ē (Ē^T Ē)^{-1} Ē^T (S^(k+1) + λ̄^(k)).   (27)

Furthermore, given the starting point x^(0) = x_0 and λ̄^(0) = 0, we obtain

λ̄^(k) = Σ_{i=1}^{k} (S^(i) − X^(i)).   (28)

The matrix Ē (Ē^T Ē)^{-1} Ē^T in (27) defines an orthogonal projection onto C(Ē), and hence Ē (Ē^T Ē)^{-1} Ē^T λ̄^(k) = 0 for all k ≥ 0. This allows us to simplify (27) as

X^(k+1) = Ē (Ē^T Ē)^{-1} Ē^T S^(k+1).   (29)

To summarize, the update rules for the ADMM, applied to (15), are given by

S^(k+1) = P_C(X^(k) − λ̄^(k)),   (30a)
X^(k+1) = Ē (Ē^T Ē)^{-1} Ē^T S^(k+1),   (30b)
λ̄^(k+1) = λ̄^(k) + (S^(k+1) − X^(k+1)).   (30c)
We remark that this algorithm is a special case of the algorithm developed in [Boyd et al., 2011, Sec. 7.2] for consensus optimization. We also note that this algorithm is equivalent to Dykstra's alternating projection method for two sets where one of the sets is affine [Bauschke and Borwein, 1994]. Moreover, it is possible to detect infeasibility in the same way that we can detect infeasibility for the AP method, i.e., X^(k) − S^(k) → v where ‖v‖_2 = dist(C, D) [Bauschke and Borwein, 1994]. Note that, unlike the AP method, the iteration (30) does not necessarily converge with a linear rate when the feasible sets satisfy (11).
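The updates (30) can be sketched on a toy problem with box sets standing in for the C̄_i (all data here are illustrative, not from the paper); note that in this scaled form no explicit ρ appears, since both minimizations reduce to projections:

```python
import numpy as np

J = [np.array([0, 1]), np.array([1, 2])]      # toy index sets
E = [np.eye(3)[Ji, :] for Ji in J]
Ebar = np.vstack(E)
Pls = np.linalg.solve(Ebar.T @ Ebar, Ebar.T)  # (Ebar^T Ebar)^{-1} Ebar^T

proj = [lambda s: np.clip(s, 0.0, 1.0),       # stand-in for P over C1-bar
        lambda s: np.clip(s, 0.5, 2.0)]       # stand-in for P over C2-bar

x = np.array([10.0, -10.0, 10.0])
X = Ebar @ x
lam = np.zeros(4)
for _ in range(100):
    # (30a): blockwise projection S = P_C(X - lambda)
    S = np.concatenate([proj[i]((X - lam)[2 * i:2 * i + 2])
                        for i in range(2)])
    x = Pls @ S                               # averaged global variable
    X = Ebar @ x                              # (30b)
    lam = lam + S - X                         # (30c)
```

At a fixed point S = X, so the local copies agree and each satisfies its local constraint, which is the feasibility certificate discussed in Section 4.1.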
4 Distributed implementation
In this section, we describe how the algorithms can be implemented in a distributed manner using techniques that are similar to those described in [Boyd et al.,
2011] and [Bertsekas and Tsitsiklis, 1997, Sec. 3.4]. Specifically, the parallel projection steps in the algorithms expressed by the update rules in (10) and (30)
are amenable to distributed implementation. We will henceforth assume that a
network of N computing agents is available.
Let I_i = {k | i ∈ J_k} denote the set of constraints that depend on x_i. Then it is easy to verify that

Ē^T Ē = diag(|I_1|, . . . , |I_n|)   (31)

and consequently, the jth component of x in the update rules (10b) and (30b) is of the form

x_j^(k+1) = (1/|I_j|) Σ_{q ∈ I_j} (E_{J_q}^T s_q^(k+1))_j.   (32)

In other words, the agents in the set I_j must solve a distributed averaging problem to compute x_j^(k+1). Let x_{J_i} = E_{J_i} x. Since the set C̄_i associated with agent i involves the variables indexed by J_i, agent i should update x_{J_i} by performing the update in (32) for all j ∈ J_i. This requires agent i to communicate with all agents in the set

Ne(i) = {j | J_i ∩ J_j ≠ ∅},   (33)

which we call the neighbors of agent i. Each agent j ∈ Ne(i) shares one or more variables with agent i. Distributed variants of the algorithms presented in the previous sections are summarized in Algorithms 1 and 2.
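The sets I_j and Ne(i) and the averaging step (32) can be computed directly from the index sets J_i; a minimal Python sketch (the index sets and values are toy data; here Ne(i) is taken to exclude agent i itself):

```python
# index sets J_i for three agents (0-based; toy values, not from the paper)
J = [{0, 1}, {1, 2}, {2, 3}]
n = 4

# I_j: the agents whose constraint depends on variable x_j
I = {j: {i for i, Ji in enumerate(J) if j in Ji} for j in range(n)}

# Ne(i): agents sharing at least one variable with agent i
Ne = {i: {k for k, Jk in enumerate(J) if k != i and J[i] & Jk}
      for i in range(len(J))}

def average_step(local):
    """Consensus update (32): x_j is the average of the local copies held
    by the agents in I_j; local[i][j] is agent i's value for x_j, j in J_i."""
    return [sum(local[q][j] for q in I[j]) / len(I[j]) for j in range(n)]

x = average_step([{0: 1.0, 1: 2.0}, {1: 4.0, 2: 6.0}, {2: 8.0, 3: 10.0}])
```

Only agents in I_j contribute to x_j, so the averaging involves exactly the neighbors sharing that variable.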
4.1 Feasibility Detection

For strictly feasible problems, Algorithms 1 and 2 converge to a feasible solution. We now discuss different techniques for checking feasibility.
Algorithm 1 Alternating projection algorithm in product space
1: Given x^(0); for all i = 1, . . . , N, each agent i should
2: repeat
3:   s_i^(k+1) ← P_{C̄_i}(x_{J_i}^(k)).
4:   Communicate with all agents belonging to Ne(i).
5:   for all j ∈ J_i do
6:     x_j^(k+1) = (1/|I_j|) Σ_{q ∈ I_j} (E_{J_q}^T s_q^(k+1))_j.
7:   end for
8:   k ← k + 1
9: until forever

Algorithm 2 ADMM based algorithm
1: Given x^(0) and λ̄^(0) = 0; for all i = 1, . . . , N, each agent i should
2: repeat
3:   s_i^(k+1) ← P_{C̄_i}(x_{J_i}^(k) − λ̄_i^(k)).
4:   Communicate with all agents belonging to Ne(i).
5:   for all j ∈ J_i do
6:     x_j^(k+1) = (1/|I_j|) Σ_{q ∈ I_j} (E_{J_q}^T s_q^(k+1))_j.
7:   end for
8:   λ̄_i^(k+1) ← λ̄_i^(k) + (s_i^(k+1) − x_{J_i}^(k+1))
9:   k ← k + 1
10: until forever
Global Feasibility Test

Perhaps the easiest way to check feasibility is to directly check the feasibility of x^(k) with respect to all the constraints. This can be accomplished by explicitly forming x^(k), which requires a centralized unit that receives the local variables from the individual agents.
Local Feasibility Test

It is also possible to check feasibility locally. Instead of sending the local variables to a central unit, each agent declares its feasibility status with respect to its local constraint. This type of feasibility detection is based on the following lemmas.

Lemma 1. If x_{J_i}^(k) ∈ C̄_i for all i = 1, . . . , N, then using these vectors a feasible solution x can be constructed for the original problem, i.e., x ∈ ∩_{i=1}^{N} C_i.
Proof: The update rule for each component of the variable x is merely the average of all the corresponding local iterates from the agents. Hence, all iterates of each component of x are equal across agents. As a result, based on the definition of C̄_i, by having x_{J_i}^(k) ∈ C̄_i for all i = 1, . . . , N, it is possible to construct a feasible vector x^(k) from the iterates x_{J_i}^(k).

Lemma 2. If ‖X^(k+1) − X^(k)‖_2 = 0 and ‖S^(k+1) − X^(k+1)‖_2 = 0, then a feasible solution x can be constructed for the original problem, i.e., x ∈ ∩_{i=1}^{N} C_i.

Proof: Consider the update rules for Algorithm 1. The conditions stated in the lemma imply that s_i^(k+1) = x_{J_i}^(k). As a result, for all i = 1, . . . , N

s_i^(k+1) = P_{C̄_i}(x_{J_i}^(k)) = x_{J_i}^(k),   (34)

which implies that x_{J_i}^(k) ∈ C̄_i. Therefore, similar to Lemma 1, it is possible to generate a feasible solution from the vectors x_{J_i}^(k).

Now consider the update rules in Algorithm 2 for the ADMM based algorithm. Then the conditions stated above imply that

S^(k+1) = P_C(X^(k) − λ̄^(k)) = X^(k),   (35a)
X^(k+1) = Ē (Ē^T Ē)^{-1} Ē^T S^(k+1) = X^(k),   (35b)
λ̄^(k+1) = λ̄^(k) + S^(k+1) − X^(k+1) = λ̄^(k).   (35c)

Hence any solution that satisfies the conditions in the lemma constitutes an equilibrium point of the update rule in (30), and as a result

P_{C̄_i}(x_{J_i}^(k)) = x_{J_i}^(k),   (36)

which in turn provides the possibility to construct a feasible solution.
Remark 1. For Algorithm 1, the conditions in Lemmas 1 and 2 are equivalent. However, this is not the case for Algorithm 2; for this algorithm, satisfaction of the conditions in Lemma 2 implies the conditions in Lemma 1.

Remark 2. Feasibility detection using Lemma 1 requires additional computations for Algorithm 2, namely the local feasibility check of the iterates x_{J_i}. Note that this check does not incur any additional cost for Algorithm 1.
4.2 Infeasibility Detection
Recall from Section 2 that if the CFP is infeasible, then the sequence kS (k) − X (k) k2
will converge to a nonzero constant kvk2 = dist(C, D). Therefore, in practice, it is
possible to detect infeasible problems by limiting the number of iterations, i.e.,
if kS (k) − X (k) k2 is not sufficiently small after the maximum number of iterations,
the problem is considered to be infeasible.
Figure 1: A chain of m uncertain systems with problem variables x_1, . . . , x_m and auxiliary (coupling) variables w_i, y_i, z_i, i = 1, . . . , m − 3. Each box corresponds to an uncertain system, and the dashed lines separate the m − 2 groups of systems that are associated with the sets C̄_1, . . . , C̄_{m−2}.
Figure 2: An uncertain system. (The interconnection of M(s) and ∆, with input U(s) and output Y(s).)
5 Robust Stability Analysis
Robust stability of uncertain large-scale weakly interconnected systems with structured uncertainty can be analyzed through the so-called µ-analysis framework [Fan et al., 1991]. This leads to a CFP which is equivalent to a semidefinite programming (SDP) problem involving the system matrix.

Consider the following system description

Y(s) = M(s)U(s),   (37)

where M(s) is an m × m transfer function matrix, and let

U(s) = ∆Y(s),   (38)

where ∆ = diag(δ_i), with δ_i ∈ R, |δ_i| ≤ 1 for i = 1, . . . , m, represents the uncertainties in the system. The system is said to be robustly stable if there exist a diagonal positive definite X(ω) and 0 < µ < 1 such that

M(jω)* X(ω) M(jω) − µ^2 X(ω) ≺ 0,   (39)
for all ω ∈ R. Note that this problem is infinite dimensional, and in practice, it is
often solved approximately by discretizing the frequency variable.
In the following, we investigate only a single frequency point, so that the dependence on the frequency can be dropped. Moreover, for the sake of simplicity, we assume that M is real-valued; the extension to complex-valued M is straightforward. As a result, feasibility of the following CFP is a sufficient condition for robust stability of the system

find X
subject to M′XM − X ⪯ −ǫI
           x_i ≥ ǫ, i = 1, . . . , m   (40)

for ǫ > 0 and where x_i are the diagonal elements of X.
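For a fixed candidate scaling X, the conditions in (40) reduce to an eigenvalue check; finding X itself is an SDP and would require a solver. A minimal numpy sketch (the matrix M and the choice X = I below are illustrative, not from the paper):

```python
import numpy as np

def robust_lmi_holds(M, x, eps=1e-6):
    """Check the CFP (40) for a candidate X = diag(x):
    M' X M - X <= -eps*I and x_i >= eps."""
    X = np.diag(x)
    H = M.T @ X @ M - X + eps * np.eye(len(x))
    return bool(np.all(np.asarray(x) >= eps) and
                np.linalg.eigvalsh(H).max() <= 0)

# a strictly contractive M satisfies (40) with the trivial scaling X = I
M = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.1, 0.3]])
ok = robust_lmi_holds(M, np.ones(3))   # True for this M
```

Since H is symmetric, `eigvalsh` gives its eigenvalues, and H ⪯ 0 is equivalent to its largest eigenvalue being nonpositive.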
A large scale network of weakly interconnected uncertain systems can also be
represented in the form (37)-(38). In the case of weakly interconnected system,
the system matrix M that relates the input and output signals is sparse. As an example, we consider a chain of systems, see Figure 3, which leads to a tri-diagonal
system matrix


0
0 
g1 h1 0

 f
0
0 
 2 g2 h2


..
..


.
.
0  .
M =  0 f 3
(41)


..



. gm−1 hm−1 
 0 0


0 0 0
fm
gm
This system matrix is obtained if the input-output relation for the underlying
systems are given by
p 1 = g 1 q 1 + h 1 z2
z1 = q 1
q1 = δ 1 p 1 ,
(42)
p m = gm qm + f m zm−1
zm = qm
q m = δ1 p m ,
(43)
and
p i = gi qi + f i zi−1 + hi zi+1
zi = qi
qi = δi p i ,
(44)
for i = 2, . . . , m − 1. The tri-diagonal structure in the system matrix implies that
the LMI defined in (40) becomes banded. This is a special case of a so-called
chordal sparsity pattern, and these have been exploited in SDP by several authors;
see [Andersen et al., 2010, Fujisawa et al., 2009, Kim et al., 2010, Fukuda et al.,
2000].
Positive semidefinite matrices with chordal sparsity patterns can be decomposed into a sum of matrices that are positive semidefinite [Kim et al., 2010, Kakimura, 2010, Sec. 5.1]. For example, a positive semidefinite band matrix can be decomposed into a sum of positive semidefinite matrices, as illustrated in Figure 4. Note that the nonzero blocks marked in the matrices on the right-hand side in Figure 4 are structurally equivalent to the block on the left-hand side, but the numerical values are generally different. Using this decomposition technique, the problem in (40) can be reformulated as in (7).

Paper B Decomposition and Robustness analysis of Interconnected Systems

Figure 3: An example of a weakly interconnected uncertain system.

Figure 4: A positive semidefinite band matrix of order n with half-bandwidth p can be decomposed as a sum of n − p positive semidefinite matrices.
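The band case of this decomposition can be sketched by successive rank-one peeling (a simple Cholesky-style construction). This is an illustration of the splitting in Figure 4, not the code used in the paper, and it assumes the band matrix is strictly positive definite:

```python
import numpy as np

def band_psd_split(A, p):
    """Split a positive definite band matrix with halfbandwidth p (order n)
    into n - p positive semidefinite terms, each supported on p + 1
    consecutive indices, by peeling off rank-one pieces."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    terms = []
    for i in range(n - p - 1):
        v = A[:, i] / np.sqrt(A[i, i])  # pivot positive since A > 0
        B = np.outer(v, v)              # PSD, supported on rows/cols i..i+p
        terms.append(B)
        A = A - B                       # Schur-complement step: stays PSD and banded
    terms.append(A)                     # trailing (p+1)-by-(p+1) block
    return terms

# tridiagonal (p = 1) positive definite example
n = 5
A0 = 4 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
terms = band_psd_split(A0, 1)
print(len(terms))                                                # 4 = n - p
print(np.allclose(sum(terms), A0))                               # True
print(all(np.linalg.eigvalsh(t).min() > -1e-9 for t in terms))   # True
```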
By introducing auxiliary variables w, y, z ∈ R^(m−3) and letting q = (x, w, y, z), the five-diagonal matrix M′XM − X can be decomposed as (matrices are written row-wise, with rows separated by semicolons)

f1(q) = [g1 f2; h1 g2; 0 h2] [x1 0; 0 x2] [g1 f2; h1 g2; 0 h2]′
      − [x1 0 0; 0 x2−w1 −y1; 0 −y1 −z1],
(45a)

fm−2(q) = [fm−1 0; gm−1 fm; hm−1 gm] [xm−1 0; 0 xm] [fm−1 0; gm−1 fm; hm−1 gm]′
      − [wm−3 ym−3 0; ym−3 zm−3+xm−1 0; 0 0 xm],
(45b)

and

fi(q) = [fi+1; gi+1; hi+1] [xi+1] [fi+1; gi+1; hi+1]′
      − [wi−1 yi−1 0; yi−1 zi−1+xi+1−wi −yi; 0 −yi −zi],
(45c)
for i = 2, . . . , m − 3. Notice that (45a) and (45b) depend on data from two subsystems whereas (45c) depends on data from only one subsystem. This dependence
is also illustrated in Figure 1. The right-hand side of the LMI in (40) can be decomposed in a similar manner
F1 = [ǫ 0 0; 0 ǫ 0; 0 0 0],   Fm−2 = [0 0 0; 0 ǫ 0; 0 0 ǫ],
Fi = [0 0 0; 0 ǫ 0; 0 0 0],   i = 2, . . . , m − 3.
Figure 5: Algorithm 1: Number of violated constraints in the CFP after each update of the global variables. This figure illustrates the results for 50 randomly generated problems.
With this decomposition, we can reformulate the LMI in (40) as a set of m − 2 LMIs

q ∈ Ci = {q | fi(q) ⪯ −Fi},   i = 1, . . . , m − 2,

or equivalently, as the constraints

si ∈ C̄i = {EJi q | fi(q) ⪯ −Fi} ⊆ R^|Ji|,   si = EJi q,
(46)

for i = 1, . . . , m − 2. Here Ji is the set of indices of the entries of q that are required to evaluate fi. Notice that (46) is of the form (6). The CFP defined by the constraints in (46) can be solved over a network of m − 2 agents. The connectivity of this network and the coupling variables among the agents are illustrated in Figure 1.
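The decomposition can be checked numerically: embedding each 3×3 term fi(q) at indices {i, i+1, i+2} and summing should reproduce M′XM − X for arbitrary values of the auxiliary variables. A sketch with random (hypothetical) data, using 1-based arrays to match the indexing in (45):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
pad = lambda a: np.r_[0.0, a]                 # 1-based indexing helper
g, fv, hv = (pad(rng.standard_normal(m)) for _ in range(3))
x = pad(rng.random(m) + 0.1)
w, y, z = (pad(rng.standard_normal(m - 3)) for _ in range(3))

M = np.zeros((m + 1, m + 1))                  # tridiagonal M as in (41)
for k in range(1, m + 1):
    M[k, k] = g[k]
    if k >= 2: M[k, k - 1] = fv[k]
    if k <= m - 1: M[k, k + 1] = hv[k]

def f_block(i):                               # the 3x3 term f_i(q) in (45)
    if i == 1:
        V = np.array([[g[1], fv[2]], [hv[1], g[2]], [0.0, hv[2]]])
        D = np.array([[x[1], 0, 0],
                      [0, x[2] - w[1], -y[1]],
                      [0, -y[1], -z[1]]])
        return V @ np.diag([x[1], x[2]]) @ V.T - D
    if i == m - 2:
        V = np.array([[fv[m-1], 0.0], [g[m-1], fv[m]], [hv[m-1], g[m]]])
        D = np.array([[w[m-3], y[m-3], 0],
                      [y[m-3], z[m-3] + x[m-1], 0],
                      [0, 0, x[m]]])
        return V @ np.diag([x[m-1], x[m]]) @ V.T - D
    v = np.array([fv[i+1], g[i+1], hv[i+1]])
    D = np.array([[w[i-1], y[i-1], 0],
                  [y[i-1], z[i-1] + x[i+1] - w[i], -y[i]],
                  [0, -y[i], -z[i]]])
    return x[i+1] * np.outer(v, v) - D

T = np.zeros((m + 1, m + 1))                  # sum of embedded terms
for i in range(1, m - 1):
    T[i:i+3, i:i+3] += f_block(i)

X = np.diag(x)
print(np.allclose(T, M.T @ X @ M - X))        # True
```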
Remark 3. As was mentioned above, the CFP in (40) is often solved over a frequency
grid. If these frequency points are chosen sufficiently close, it is probable that the CFPs
for adjacent frequencies are similar. As a result, it is very likely that a solution to a CFP
for one frequency is either a solution to or close to a solution to the CFP for the adjacent
frequencies. In this case, warm starting the projection-based algorithms using solutions
to CFPs at adjacent frequency points may significantly reduce the computational cost.
6 Numerical Results
In this section, we apply Algorithms 1 and 2 to a family of random problems
involving a chain of uncertain systems. These problems have the same band
structure as described in Section 5. We use the local feasibility detection method
Figure 6: Algorithm 1: ‖X^(k) − X^(k−1)‖₂² and ‖S^(k) − X^(k)‖₂² with respect to the iteration number. This figure illustrates the results for 50 randomly generated problems.
introduced in Section 4.1. Note that in order to avoid numerical problems and
unnecessarily many projections, the projections are performed for slightly tighter
bounds than those used for feasibility checks of the local iterates. This setup is
the same for all the experiments presented in this section.
Figures 5 and 6 show the behavior of Algorithm 1 for 50 randomly generated
problems with m = 52. These problems decompose into 50 subproblems which
50 agents solve collaboratively. Figure 5 shows the number of agents that are
locally infeasible at each iteration. This can be used to detect convergence to
a feasible solution by checking the satisfaction of the conditions in Lemma 1.
The figure demonstrates that the global variables satisfy the constraints of the
original problem after at most 22 iterations. Considering Remark 1, the same performance should also be observed by checking the conditions in Lemma 2.
This is confirmed by the results illustrated in Figure 6.
We performed the same experiment with Algorithm 2, and the results are illustrated in Figures 7 and 8. As can be seen from these figures, the feasibility detection based on Lemmas 1 and 2 requires at most 7 and 27 iterations to detect
convergence to a feasible solution, respectively. In our experiments, Algorithm 2
is faster when feasibility detection is based on the conditions in Lemma 1, and
Algorithm 1 is faster when the condition in Lemma 2 is used for feasibility detection. It is also worth mentioning that with the accelerated nonlinear Cimmino
algorithm, more than 1656 iterations were needed to obtain a feasible solution.
In the next experiment, we investigate the performance of the algorithms as a function of the number of systems in the chain. The results in Figures 9 and 10 indicate that the problem parameter m does not affect the performance of Algorithm 1 much; the number of iterations required to reach consensus increases only slightly. Figures 11 and 12 verify that Algorithm 2 behaves in a similar manner.

Figure 7: Algorithm 2: Number of violated constraints in the CFP after each update of the global variables. This figure illustrates the results for 50 randomly generated problems.
7 Conclusion
In this paper, we have shown that it is possible to solve CFPs with loosely coupled constraints efficiently in a distributed manner. We have proposed two algorithms. One is based on von Neumann's AP method, and hence it enjoys the linear convergence rate that characterizes this algorithm when applied to strictly feasible problems. The other method is based on the ADMM, and it generally outperforms the AP method in practice in terms of the number of iterations required to obtain a feasible solution. Both methods can detect infeasible problems. For structured problems that arise in robust stability analysis of large-scale weakly interconnected uncertain systems, our numerical results show that both algorithms outperform the classical projection-based algorithms.
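As a minimal illustration of the projection machinery underlying both algorithms, the following sketch runs von Neumann's alternating projections on two convex sets; the halfspace and box below are toy examples, not the constraint sets C̄i of the paper:

```python
import numpy as np

# toy feasibility problem: find v in C1 ∩ C2 with
# C1 = {v : a·v <= b} (halfspace) and C2 = [0, 1]^2 (box)
a, b = np.array([1.0, 2.0]), 1.0

def proj_C1(v):                       # Euclidean projection onto the halfspace
    viol = a @ v - b
    return v - max(viol, 0.0) * a / (a @ a)

def proj_C2(v):                       # Euclidean projection onto the box
    return np.clip(v, 0.0, 1.0)

v = np.array([5.0, 5.0])              # infeasible starting point
for _ in range(100):                  # alternating projections
    v = proj_C2(proj_C1(v))

print(np.all(v >= 0) and np.all(v <= 1) and a @ v <= b + 1e-9)  # True
```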
Figure 8: Algorithm 2: ‖X^(k) − X^(k−1)‖₂² and ‖S^(k) − X^(k)‖₂² with respect to the iteration number. This figure illustrates the results for 50 randomly generated problems.
Figure 9: Algorithm 1: Number of violated constraints in the CFP after each update of the global variables. This figure illustrates the results for problems with different numbers of constraints.
Figure 10: Algorithm 1: ‖X^(k) − X^(k−1)‖₂² and ‖S^(k) − X^(k)‖₂² with respect to the iteration number. This figure illustrates the results for problems with different numbers of constraints.
Figure 11: Algorithm 2: Number of violated constraints in the CFP after each update of the global variables. This figure illustrates the results for problems with different numbers of constraints.
Figure 12: Algorithm 2: ‖X^(k) − X^(k−1)‖₂² and ‖S^(k) − X^(k)‖₂² with respect to the iteration number. This figure illustrates the results for problems with different numbers of constraints.
Bibliography
M. S. Andersen, J. Dahl, and L. Vandenberghe. Implementation of nonsymmetric
interior-point methods for linear optimization over sparse matrix cones. Mathematical Programming Computation, 2010.
D. Axehill and A. Hansson. Towards parallel implementation of hybrid MPC
– A survey and directions for future research. In Rolf Johansson and Anders
Rantzer, editors, Distributed Decision Making and Control, Lecture Notes in
Control and Information Sciences. Springer Verlag, 2011.
H. H. Bauschke and J. M. Borwein. On the convergence of Von Neumann’s alternating projection algorithm for two sets. Set-Valued Analysis, 1:185–212,
1993.
H. H. Bauschke and J. M. Borwein. Dykstra’s alternating projection algorithm for
two sets. Journal of Approximation Theory, 79(3):418–443, 1994.
H. H. Bauschke, J. M. Borwein, and W. Li. Strong conical hull intersection property, bounded linear regularity, Jameson’s property (g), and error bounds in
convex optimization. Mathematical Programming, 86(1):135–160, 1999.
A.M. Bayen, R.L. Raffard, and C.J. Tomlin. Adjoint-based constrained control of
Eulerian transportation networks: Application to air traffic control. Proceedings of the American Control Conference, 6:5539–5545, 2004.
A. Beck and M. Teboulle. Convergence rate analysis and error bounds for projection algorithms in convex feasibility problems. Optimization Methods and
Software, 18(4):377–394, 2003.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers.
Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
L. Bregman. The method of successive projection for finding a common point of
convex sets. Soviet Math. Dokl., 162:487–490, 1965.
M. Cantoni, E. Weyer, Y. Li, S.K. Ooi, I. Mareels, and M. Ryan. Control of large-scale irrigation networks. Proceedings of the IEEE, 95(1):75–91, 2007.
Y. Censor and T. Elfving. A nonlinear Cimmino algorithm. Technical Report
Report 33, Department of Mathematics, University of Haifa, 1981.
G. Cimmino. Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. La Ricerca Scientifica, 1:326–333, 1938.
D. Dochain, G. Dumont, D.M. Gorinevsky, and T. Ogunnaike. IEEE transactions
on control systems technology special issue on control of industrial spatially
84
Paper B Decomposition and Robustness analysis of Interconnected Systems
distributed processes. IEEE Transactions on Control Systems Technology, 11
(5):609 – 611, Sep. 2003.
R. L. Dykstra. An algorithm for restricted least squares regression. Journal of the
American Statistical Association, 78(384):837–842, Dec. 1983.
M.K.H. Fan, A.L. Tits, and J.C. Doyle. Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics. IEEE Transactions on Automatic
Control, 36(1):25 –38, Jan. 1991.
H. Fang and P.J. Antsaklis. Distributed control with integral quadratic constraints. IFAC Proceedings Volumes, 17(1), 2008.
K. Fujisawa, S. Kim, M. Kojima, Y. Okamoto, and M. Yamashita. User's manual for SparseCoLO: Conversion methods for SPARSE COnic-form Linear Optimization. Technical report, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo, 2009.
M. Fukuda, M. Kojima, K. Murota, and K. Nakata. Exploiting sparsity in semidefinite programming via matrix completion I: General framework.
SIAM Journal on Optimization, 11:647–674, 2000.
D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics
with Applications, 2(1):17–40, 1976.
R. Glowinski and A. Marroco. Sur l’approximation, par éléments finis d’ordre un,
et la résolution, par pénalisation-dualité d’une classe de problèmes de dirichlet
non linéaires. Revue française d’automatique, informatique, recherche opérationnelle, 9(2):41–76, 1975.
L. Grasedyck, W. Hackbusch, and B.N. Khoromskij. Solution of large scale algebraic matrix Riccati equations by use of hierarchical matrices. Computing, 70(2):121–165, 2003.
L. G. Gubin, B. T. Polyak, and E. V. Raik. The method of projections for finding the common point of convex sets. USSR Computational Mathematics and
Mathematical Physics, 7(6):1–24, 1967.
A.D. Hansen, P. Sorensen, F. Iov, and F. Blaabjerg. Centralized power control of
wind farm with doubly fed induction generators. Renewable Energy, 31(7):
935–951, 2006.
A. N. Iusem and A. R. De Pierro. Convergence results for an accelerated nonlinear
Cimmino algorithm. Numerische Mathematik, 49:367–378, 1986.
M.R. Jovanovic, M. Arcak, and E.D. Sontag. Remarks on the stability of spatially
distributed systems with a cyclic interconnection structure. Proceedings of the
American Control Conference, pages 2696–2701, 2007.
N. Kakimura. A direct proof for the matrix decomposition of chordal-structured
positive semidefinite matrices. Linear Algebra and its Applications, 433(4):819
– 823, 2010.
S. Khoshfetrat Pakazad, A. Hansson, M. S. Andersen, and A. Rantzer. Decomposition and projection methods for distributed robustness analysis of interconnected uncertain systems. Technical Report LiTH-ISY-R-3033, Department of
Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden,
Nov. 2011.
S. Kim, M. Kojima, M. Mevissen, and M. Yamashita. Exploiting sparsity in linear
and nonlinear matrix inequalities via positive semidefinite matrix completion.
Mathematical Programming, pages 1–36, 2010.
C. Langbort, R.S. Chandra, and R. D’Andrea. Distributed control design for systems interconnected over an arbitrary graph. IEEE Transactions on Automatic
Control, 49(9):1502 – 1519, Sep. 2004.
X. Lin and S. Boyd. Fast linear iterations for distributed averaging. In Proceedings
42nd IEEE Conference on Decision and Control, volume 5, pages 4997 – 5002
Vol.5, Dec. 2003.
A. Megretski and A. Rantzer. System analysis via integral quadratic constraints.
IEEE Transactions on Automatic Control, 42(6):819 –830, Jun 1997.
A. Nedic and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48 –61, Jan. 2009.
A. Nedic, A. Ozdaglar, and P.A. Parrilo. Constrained consensus and optimization
in multi-agent networks. IEEE Transactions on Automatic Control, 55(4):922
–938, April 2010.
S. Rajagopalan and D. Shah. Distributed averaging in dynamic networks. IEEE
Journal of Selected Topics in Signal Processing, 5(4):845 –854, Aug. 2011.
J. Rice and M. Verhaegen. Robust control of distributed systems with sequentially
semi-separable structure. The European Control Conference, 2009a.
J.K. Rice and M. Verhaegen. Distributed control: A sequentially semi-separable
approach for spatially heterogeneous linear systems. IEEE Transactions on
Automatic Control, 54(6):1270 –1283, Jun. 2009b.
N. Sandell, Jr., P. Varaiya, M. Athans, and M. Safonov. Survey of decentralized
control methods for large scale systems. IEEE Transactions on Automatic Control, 23(2):108 – 128, Apr. 1978.
S. Skogestad and I. Postlethwaite. Multivariable feedback control. Wiley, 2007.
G.E. Stewart, D.M. Gorinevsky, and G.A. Dumont. Feedback controller design for
a spatially distributed system: The paper machine problem. IEEE Transactions
on Control Systems Technology, 11(5):612–628, 2003.
J. N. Tsitsiklis. Problems in decentralized decision making and computation.
PhD thesis, MIT, 1984.
J. Tsitsiklis, D. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on
Automatic Control, 31(9):803 – 812, Sep 1986.
J. G. VanAntwerp, A. P. Featherstone, and R. D. Braatz. Robust cross-directional
control of large scale sheet and film processes. Journal of Process Control, 11
(2):149 – 177, 2001.
A.N. Venkat, I.A. Hiskens, J.B. Rawlings, and S.J. Wright. Distributed MPC strategies with application to power system automatic generation control. IEEE
Transactions on Control Systems Technology, 16(6):1192–1206, 2008.
J. von Neumann. The Geometry of Orthogonal Spaces, volume 2 of Functional
Operators. Princeton University Press, 1950.
K. Zhou and J. C. Doyle. Essentials of robust control. Prentice Hall, 1998.
K. Zhou, J. C. Doyle, and K. Glover. Robust and optimal control. Prentice Hall,
1997.
Licentiate Theses
Division of Automatic Control
Linköping University
P. Andersson: Adaptive Forgetting through Multiple Models and Adaptive Control of Car
Dynamics. Thesis No. 15, 1983.
B. Wahlberg: On Model Simplification in System Identification. Thesis No. 47, 1985.
A. Isaksson: Identification of Time Varying Systems and Applications of System Identification to Signal Processing. Thesis No. 75, 1986.
G. Malmberg: A Study of Adaptive Control Missiles. Thesis No. 76, 1986.
S. Gunnarsson: On the Mean Square Error of Transfer Function Estimates with Applications to Control. Thesis No. 90, 1986.
M. Viberg: On the Adaptive Array Problem. Thesis No. 117, 1987.
K. Ståhl: On the Frequency Domain Analysis of Nonlinear Systems. Thesis No. 137, 1988.
A. Skeppstedt: Construction of Composite Models from Large Data-Sets. Thesis No. 149,
1988.
P. A. J. Nagy: MaMiS: A Programming Environment for Numeric/Symbolic Data Processing. Thesis No. 153, 1988.
K. Forsman: Applications of Constructive Algebra to Control Problems. Thesis No. 231,
1990.
I. Klein: Planning for a Class of Sequential Control Problems. Thesis No. 234, 1990.
F. Gustafsson: Optimal Segmentation of Linear Regression Parameters. Thesis No. 246,
1990.
H. Hjalmarsson: On Estimation of Model Quality in System Identification. Thesis No. 251,
1990.
S. Andersson: Sensor Array Processing; Application to Mobile Communication Systems
and Dimension Reduction. Thesis No. 255, 1990.
K. Wang Chen: Observability and Invertibility of Nonlinear Systems: A Differential Algebraic Approach. Thesis No. 282, 1991.
J. Sjöberg: Regularization Issues in Neural Network Models of Dynamical Systems. Thesis
No. 366, 1993.
P. Pucar: Segmentation of Laser Range Radar Images Using Hidden Markov Field Models.
Thesis No. 403, 1993.
H. Fortell: Volterra and Algebraic Approaches to the Zero Dynamics. Thesis No. 438,
1994.
T. McKelvey: On State-Space Models in System Identification. Thesis No. 447, 1994.
T. Andersson: Concepts and Algorithms for Non-Linear System Identifiability. Thesis
No. 448, 1994.
P. Lindskog: Algorithms and Tools for System Identification Using Prior Knowledge. Thesis No. 456, 1994.
J. Plantin: Algebraic Methods for Verification and Control of Discrete Event Dynamic
Systems. Thesis No. 501, 1995.
J. Gunnarsson: On Modeling of Discrete Event Dynamic Systems, Using Symbolic Algebraic Methods. Thesis No. 502, 1995.
A. Ericsson: Fast Power Control to Counteract Rayleigh Fading in Cellular Radio Systems.
Thesis No. 527, 1995.
M. Jirstrand: Algebraic Methods for Modeling and Design in Control. Thesis No. 540,
1996.
K. Edström: Simulation of Mode Switching Systems Using Switched Bond Graphs. Thesis
No. 586, 1996.
J. Palmqvist: On Integrity Monitoring of Integrated Navigation Systems. Thesis No. 600,
1997.
A. Stenman: Just-in-Time Models with Applications to Dynamical Systems. Thesis
No. 601, 1997.
M. Andersson: Experimental Design and Updating of Finite Element Models. Thesis
No. 611, 1997.
U. Forssell: Properties and Usage of Closed-Loop Identification Methods. Thesis No. 641,
1997.
M. Larsson: On Modeling and Diagnosis of Discrete Event Dynamic systems. Thesis
No. 648, 1997.
N. Bergman: Bayesian Inference in Terrain Navigation. Thesis No. 649, 1997.
V. Einarsson: On Verification of Switched Systems Using Abstractions. Thesis No. 705,
1998.
J. Blom, F. Gunnarsson: Power Control in Cellular Radio Systems. Thesis No. 706, 1998.
P. Spångéus: Hybrid Control using LP and LMI methods – Some Applications. Thesis
No. 724, 1998.
M. Norrlöf: On Analysis and Implementation of Iterative Learning Control. Thesis
No. 727, 1998.
A. Hagenblad: Aspects of the Identification of Wiener Models. Thesis No. 793, 1999.
F. Tjärnström: Quality Estimation of Approximate Models. Thesis No. 810, 2000.
C. Carlsson: Vehicle Size and Orientation Estimation Using Geometric Fitting. Thesis
No. 840, 2000.
J. Löfberg: Linear Model Predictive Control: Stability and Robustness. Thesis No. 866,
2001.
O. Härkegård: Flight Control Design Using Backstepping. Thesis No. 875, 2001.
J. Elbornsson: Equalization of Distortion in A/D Converters. Thesis No. 883, 2001.
J. Roll: Robust Verification and Identification of Piecewise Affine Systems. Thesis No. 899,
2001.
I. Lind: Regressor Selection in System Identification using ANOVA. Thesis No. 921, 2001.
R. Karlsson: Simulation Based Methods for Target Tracking. Thesis No. 930, 2002.
P.-J. Nordlund: Sequential Monte Carlo Filters and Integrated Navigation. Thesis No. 945,
2002.
M. Östring: Identification, Diagnosis, and Control of a Flexible Robot Arm. Thesis
No. 948, 2002.
C. Olsson: Active Engine Vibration Isolation using Feedback Control. Thesis No. 968,
2002.
J. Jansson: Tracking and Decision Making for Automotive Collision Avoidance. Thesis
No. 965, 2002.
N. Persson: Event Based Sampling with Application to Spectral Estimation. Thesis
No. 981, 2002.
D. Lindgren: Subspace Selection Techniques for Classification Problems. Thesis No. 995,
2002.
E. Geijer Lundin: Uplink Load in CDMA Cellular Systems. Thesis No. 1045, 2003.
M. Enqvist: Some Results on Linear Models of Nonlinear Systems. Thesis No. 1046, 2003.
T. Schön: On Computational Methods for Nonlinear Estimation. Thesis No. 1047, 2003.
F. Gunnarsson: On Modeling and Control of Network Queue Dynamics. Thesis No. 1048,
2003.
S. Björklund: A Survey and Comparison of Time-Delay Estimation Methods in Linear
Systems. Thesis No. 1061, 2003.
M. Gerdin: Parameter Estimation in Linear Descriptor Systems. Thesis No. 1085, 2004.
A. Eidehall: An Automotive Lane Guidance System. Thesis No. 1122, 2004.
E. Wernholt: On Multivariable and Nonlinear Identification of Industrial Robots. Thesis
No. 1131, 2004.
J. Gillberg: Methods for Frequency Domain Estimation of Continuous-Time Models. Thesis No. 1133, 2004.
G. Hendeby: Fundamental Estimation and Detection Limits in Linear Non-Gaussian Systems. Thesis No. 1199, 2005.
D. Axehill: Applications of Integer Quadratic Programming in Control and Communication. Thesis No. 1218, 2005.
J. Sjöberg: Some Results On Optimal Control for Nonlinear Descriptor Systems. Thesis
No. 1227, 2006.
D. Törnqvist: Statistical Fault Detection with Applications to IMU Disturbances. Thesis
No. 1258, 2006.
H. Tidefelt: Structural algorithms and perturbations in differential-algebraic equations.
Thesis No. 1318, 2007.
S. Moberg: On Modeling and Control of Flexible Manipulators. Thesis No. 1336, 2007.
J. Wallén: On Kinematic Modelling and Iterative Learning Control of Industrial Robots.
Thesis No. 1343, 2008.
J. Harju Johansson: A Structure Utilizing Inexact Primal-Dual Interior-Point Method for
Analysis of Linear Differential Inclusions. Thesis No. 1367, 2008.
J. D. Hol: Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors.
Thesis No. 1370, 2008.
H. Ohlsson: Regression on Manifolds with Implications for System Identification. Thesis
No. 1382, 2008.
D. Ankelhed: On low order controller synthesis using rational constraints. Thesis
No. 1398, 2009.
P. Skoglar: Planning Methods for Aerial Exploration and Ground Target Tracking. Thesis
No. 1420, 2009.
C. Lundquist: Automotive Sensor Fusion for Situation Awareness. Thesis No. 1422, 2009.
C. Lyzell: Initialization Methods for System Identification. Thesis No. 1426, 2009.
R. Falkeborn: Structure exploitation in semidefinite programming for control. Thesis
No. 1430, 2010.
D. Petersson: Nonlinear Optimization Approaches to H2 -Norm Based LPV Modelling and
Control. Thesis No. 1453, 2010.
Z. Sjanic: Navigation and SAR Auto-focusing in a Sensor Fusion Framework. Thesis
No. 1464, 2011.
K. Granström: Loop detection and extended target tracking using laser data. Thesis
No. 1465, 2011.
J. Callmer: Topics in Localization and Mapping. Thesis No. 1489, 2011.
F. Lindsten: Rao-Blackwellised particle methods for inference and identification. Thesis
No. 1480, 2011.
M. Skoglund: Visual Inertial Navigation and Calibration. Thesis No. 1500, 2011.