Dissertation for Doctor of Philosophy
Decentralized Control of Multi-agent Systems:
Theory and Applications
Kwang-Kyo Oh
School of Information and Mechatronics
Gwangju Institute of Science and Technology
2013
Dedicated to my family.
Ph.D/SM
20092109
Kwang-Kyo Oh. Decentralized Control of Multi-agent Systems: Theory and
Applications. School of Information and Mechatronics. 2013. 196p. Advisor:
Prof. Hyo-Sung Ahn.
Abstract
Multi-agent systems have recently attracted a significant amount of research interest
due to their advantages such as flexibility, scalability, robustness, and cost-effectiveness.
Advances in control, computation and communication provide the enabling technology for
the construction of such systems.
In this dissertation, we focus on formation control and consensus in multi-agent systems.
We study various formation control problems based on local and partial measurements. First, we propose a decentralized formation control law based on inter-agent distances for single-integrator modeled agents in the plane and show that a rigid formation is locally asymptotically stable under the proposed law. We also show that a general n-dimensional rigid formation of double-integrator modeled agents is locally asymptotically stable under a gradient-like control law. Second, we propose a formation control strategy based on orientation estimation for single-integrator modeled agents in the plane under the assumption that the agents maintain their own local reference frames, whose orientations are not aligned with each other. The proposed strategy is applicable to network localization. Third, we propose a formation control strategy based on position estimation for single- and double-integrator modeled agents in n-dimensional space. Under the proposed strategy, the position is estimated up to translation in general. The proposed control strategy is successfully applied to formation control of unicycles. Finally, we propose a formation control strategy based on orientation and position estimation for single-integrator modeled agents in the plane. The proposed strategy allows the agents to overcome the lack of information by sharing their local information.
We also study consensus of a class of heterogeneous linear agents and disturbance attenuation in a consensus network of identical linear agents. First, we derive a sufficient condition for consensus of linear agents under the assumption that the dynamics of each agent contains a single integrator and the remaining part of the dynamics is positive real. Based on this sufficient condition, the stability of the load frequency control network of a power grid can be analyzed. Second, we study disturbance attenuation problems in a consensus network of identical linear agents. Under the assumption that the agents are subject to exogenous disturbances, we take the H∞ norm of the transfer function from the disturbances to the disagreement vector of the network as a metric for the disturbance attenuation performance. We show that the disturbance attenuation performance is enhanced by increasing the algebraic connectivity when the agents satisfy a certain property. Procedures to design the interconnection topology and the decentralized controller for the agents are provided.
© 2013 Kwang-Kyo Oh
Kwang-Kyo Oh. Decentralized Control of Multi-agent Systems: Theory and Applications. School of Information and Mechatronics. 2013. 196p. Advisor: Prof. Hyo-Sung Ahn.
Abstract (Korean)
Multi-agent systems have recently been the subject of extensive research owing to advantages such as flexibility, scalability, robustness, and cost-effectiveness. Moreover, advances in control, computation, and communication technologies are making the implementation of multi-agent systems increasingly practical.
Against this background, this dissertation focuses on formation control and consensus, which are representative control problems in multi-agent systems. Regarding formation control, we propose decentralized formation control laws under several different assumptions and analyze the stability of the desired formation under the proposed laws. First, we propose a new control law for the distance-based formation control problem and show that the desired formation is asymptotically stable under the proposed law. We also show that a rigid formation is locally asymptotically stable under the gradient-based control law, which has been widely used for distance-based formation control; unlike existing results, we further show that formations of double-integrator modeled systems in general n-dimensional space are also stable. Second, we propose a scheme that turns the distance-based formation control problem into a displacement-based one by having each agent estimate the orientation of its own local reference frame so that all the agents share a common sense of orientation. The proposed scheme can be applied to localization of sensor networks equipped with distance and orientation sensors. Third, for the case in which each agent measures the relative positions of its neighboring agents, we propose a position estimation scheme and a formation control law based on it. The proposed scheme is applicable to practical mobile agents modeled as unicycles. Fourth, we propose a scheme that simultaneously estimates the orientation and position of the local reference frame, together with a formation control law based on it. Under the proposed scheme, the agents overcome the very limited measurements assumed in the distance-based formation control problem by sharing their information with each other.
Regarding consensus, we present a condition under which heterogeneous linear agents with a particular structure reach consensus, and we propose design methods for attenuating the effect of disturbances in a network of identical linear agents. First, we show that the agents reach consensus when the dynamics of each agent is represented, in the frequency domain, by a single integrator and a positive real transfer function. This result is meaningful in that most existing consensus results assume identical linear agent models. Based on the presented condition, the stability of the load frequency control network of a power system can also be analyzed. Next, under the assumption that identical linear agents are subject to disturbances, we propose methods for designing the interconnection graph of the agents and the decentralized control law so as to attenuate the effect of the disturbances. In particular, we show that when each agent satisfies a certain property, the effect of the disturbances can be reduced by increasing the connectivity of the agents.
© 2013 Kwang-Kyo Oh
Contents

Abstract (English) . . . . . i
Abstract (Korean) . . . . . iii
List of Tables . . . . . ix
List of Figures . . . . . xi

1 Introduction . . . . . 1
  1.1 Background . . . . . 1
  1.2 Literature review . . . . . 5
    1.2.1 Formation control . . . . . 5
    1.2.2 Consensus . . . . . 11
  1.3 Contributions . . . . . 13
  1.4 Outline . . . . . 16

2 Preliminaries . . . . . 19
  2.1 Algebraic graph theory . . . . . 19
  2.2 Graph rigidity . . . . . 24

3 Distance-based formation control considering inter-agent distance dynamics . . . . . 27
  3.1 Introduction . . . . . 27
  3.2 Formation control considering inter-agent distance dynamics . . . . . 29
    3.2.1 Problem statement . . . . . 29
    3.2.2 Three-agent case . . . . . 31
    3.2.3 N-agent case . . . . . 37
  3.3 Simulation results . . . . . 45
  3.4 Conclusion . . . . . 47

4 Distance-based formation under the gradient control law . . . . . 49
  4.1 Introduction . . . . . 49
  4.2 Undirected formations of single-integrators . . . . . 51
    4.2.1 Problem statement . . . . . 51
    4.2.2 Gradient control law . . . . . 52
    4.2.3 Stability analysis . . . . . 53
  4.3 Undirected formations of double-integrators . . . . . 59
    4.3.1 Problem statement . . . . . 59
    4.3.2 Gradient-like law and stability analysis . . . . . 60
  4.4 Simulation results . . . . . 63
  4.5 Conclusion . . . . . 64

5 Formation control based on orientation estimation . . . . . 67
  5.1 Introduction . . . . . 67
  5.2 Preliminaries . . . . . 69
  5.3 Formation control based on orientation estimation . . . . . 70
    5.3.1 Problem statement . . . . . 70
    5.3.2 Control strategy and stability analysis: static graph case . . . . . 72
    5.3.3 Control strategy and stability analysis: switching graph case . . . . . 79
  5.4 Application to network localization . . . . . 82
    5.4.1 Problem statement . . . . . 82
    5.4.2 Network localization based on orientation estimation . . . . . 83
  5.5 Simulation results . . . . . 85
  5.6 Conclusion . . . . . 86

6 Formation control based on position estimation . . . . . 89
  6.1 Introduction . . . . . 89
  6.2 Formation control based on position estimation: single-integrator case . . . . . 92
    6.2.1 Problem statement . . . . . 92
    6.2.2 Control strategy and stability analysis . . . . . 94
    6.2.3 Application to unicycle-like mobile robots . . . . . 98
  6.3 Formation control based on position estimation: double-integrator case . . . . . 101
    6.3.1 Problem statement . . . . . 101
    6.3.2 Control strategy and stability analysis . . . . . 103
    6.3.3 Reduced-order position estimation . . . . . 106
  6.4 Simulation results . . . . . 108
  6.5 Conclusion . . . . . 112

7 Formation control based on orientation and position estimation . . . . . 113
  7.1 Introduction . . . . . 113
  7.2 Formation control based on orientation and position estimation . . . . . 115
    7.2.1 Problem statement . . . . . 115
    7.2.2 Control strategy and stability analysis . . . . . 118
  7.3 Simulation results . . . . . 127
  7.4 Conclusion . . . . . 129

8 Consensus of networks of a class of linear agents . . . . . 131
  8.1 Introduction . . . . . 131
  8.2 Preliminaries . . . . . 133
  8.3 Main result . . . . . 135
  8.4 Examples . . . . . 142
    8.4.1 Illustrating examples . . . . . 142
    8.4.2 Load frequency control network of synchronous generators . . . . . 145
  8.5 Conclusion . . . . . 149

9 Disturbance attenuation in consensus networks . . . . . 151
  9.1 Introduction . . . . . 151
  9.2 Preliminaries . . . . . 154
  9.3 Problem statement . . . . . 157
    9.3.1 Graph design problem . . . . . 157
    9.3.2 Controller design problem . . . . . 159
  9.4 Graph design for disturbance attenuation . . . . . 161
    9.4.1 Decomposition of the consensus network . . . . . 162
    9.4.2 Graph design . . . . . 165
  9.5 Controller design for disturbance attenuation . . . . . 168
    9.5.1 Design of decentralized control networks . . . . . 169
    9.5.2 Design of distributed control networks . . . . . 173
  9.6 Examples . . . . . 174
    9.6.1 Graph design example . . . . . 175
    9.6.2 Controller design example . . . . . 177
  9.7 Conclusion . . . . . 180

10 Conclusion . . . . . 183
  10.1 Summary of results . . . . . 183
  10.2 Future works . . . . . 184

Bibliography . . . . . 187
List of Tables

9.1 Result for the graph design example . . . . . 176
9.2 Result for the decentralized controller design example . . . . . 178
List of Figures

1.1 Typical self-organized collective behavior found in biological systems . . . . . 2
3.1 Projection of kd bi /4 onto the column space of Ai . . . . . 38
3.2 Simulation result of the three agents under (3.19) . . . . . 46
3.3 The interaction graph and the formation trajectory of the ten agents under (3.19) . . . . . 46
4.1 Sensing graph for five agents . . . . . 64
4.2 Simulation result for five single-integrators . . . . . 64
4.3 Simulation result for five double-integrators . . . . . 65
5.1 Measurement of relative orientation angle . . . . . 71
5.2 The interaction graph for the six single-integrators . . . . . 85
5.3 Simulation result of the six single-integrators under (5.34) and (5.38): static interaction graph case . . . . . 86
5.4 Simulation result of the six single-integrators under (5.34) and (5.38): switching interaction graph case . . . . . 87
6.1 Block diagram for formation control based on a position estimator . . . . . 91
6.2 Unicycle-like mobile robot . . . . . 99
6.3 The interaction graph for the six agents . . . . . 109
6.4 Simulation result of the six single-integrators under (6.5) and (6.7) . . . . . 110
6.5 Simulation result of the six single-integrators under the existing displacement-based formation control law . . . . . 110
6.6 Simulation result of the six double-integrators under (6.20) and (6.22) . . . . . 111
6.7 Simulation result of the six double-integrators under (6.24) and (6.26) . . . . . 111
6.8 Simulation result of the six unicycles under (6.5) and (6.12) . . . . . 112
7.1 Measurement of relative orientation angle . . . . . 117
7.2 The interaction graph for the simulation . . . . . 128
7.3 Simulation result of six single-integrator modeled agents having the static interaction graph . . . . . 128
8.1 Interconnection of two first-order and two second-order systems (systems 1 and 2 are first-order and systems 3 and 4 are second-order) . . . . . 144
8.2 Consensus of two first- and two second-order systems . . . . . 144
8.3 Node model for LFC network of synchronous generators . . . . . 147
8.4 Nyquist plots for Gi(s) of synchronous generators with typical parameters . . . . . 149
9.1 Four types of graphs . . . . . 175
Chapter 1
Introduction
1.1 Background
For decades, scientists have revealed that the collective behavior discovered in various fields is based on relatively simple mechanisms. Such collective behavior is considered self-organized in the sense that it emerges from interactions among neighboring individuals rather than from the intervention of a central coordinator or an external command, and thus it is achieved in a decentralized and parallel way. Self-organized collective behavior is particularly ubiquitous in biological systems (Strogatz, 2003). One typical example is the flocking behavior of birds (Figure 1.1a), fish (Figure 1.1b), penguins, ants, and bees.
It is remarkable that such visually complex flocking behavior arises from simple rules followed by the individuals in flocks. For instance, fireflies scattered over an extensive area flash in unison, stimulated by the sight of their neighbors' flashing (Buck, 1935). Winfree (1967) showed that phase-dependent mutual influences among the insects give rise to such striking synchronization. Based on time-series analysis of frames of videotaped samples, Partridge (1981) revealed that the behavior of a school of saithe, a kind of fish, is governed by several simple rules. According to Partridge (1981), the individuals of the fish school tend to match both the swimming direction and speed of their neighbors, while they are not greatly affected by the average velocity of the school.
(a) A flock of birds.
(b) A school of fish.
Figure 1.1: Typical self-organized collective behavior found in biological systems.
Those scientific results indicate that local interaction, i.e., interaction among neighboring individuals, rather than all-to-all or global interaction, underlies self-organized collective behavior. This idea has also been verified by simulation. Reynolds (1987) proposed a simple model having a set of basic rules, i.e., separation, alignment, and cohesion, for the simulation of self-organized flocking behavior. With these three simple rules, the flock moves in an extremely realistic way, creating complex motion and interaction that would be extremely hard to create otherwise. Another model is found in Vicsek et al. (1995), who proposed a discrete-time model of autonomous dynamical systems, i.e., point masses, which all move in the plane with the same speed but with different heading angles. The heading angle of each system is updated by means of a local rule based on the average of its own heading angle and the angles of its neighbors, where the neighbors of a system are those systems lying within a pre-specified circle centered at the system. Though no system knows the average heading angle of the collection of all the systems, it has been revealed that the heading angles of the systems eventually align.
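The heading-averaging step of the Vicsek model is easy to state in code. The following is an illustrative sketch, not code from the dissertation: the function and parameter names are ours, the angle average is computed through unit vectors to avoid wrap-around, and the noise term of the original model is optional here.

```python
import numpy as np

def vicsek_step(pos, theta, r, v, dt=1.0, rng=None, eta=0.0):
    """One update of the Vicsek heading rule: each point mass adopts the
    average heading of all neighbors within radius r (itself included)."""
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        nbrs = np.linalg.norm(pos - pos[i], axis=1) <= r  # includes i itself
        # Average the neighbor angles via their unit vectors.
        new_theta[i] = np.arctan2(np.sin(theta[nbrs]).mean(),
                                  np.cos(theta[nbrs]).mean())
    if eta > 0 and rng is not None:
        # Optional uniform angular noise, as in the original model.
        new_theta += rng.uniform(-eta / 2, eta / 2, n)
    # All masses move with the same speed v along their new headings.
    vel = v * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return pos + vel * dt, new_theta
```

When the interaction radius covers the whole group and there is no noise, a single step already aligns all headings to the common average direction.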
Obviously, the results in Reynolds (1987); Vicsek et al. (1995) seem to indicate the possibility of implementing artificial systems that achieve collective behaviors. While various issues have yet to be resolved, we may expect the following benefits from such artificial systems:
• Scalability: Such a system is scalable because each individual system has its own local rule based on interaction with neighboring systems. Due to the absence of a centralized coordinator, the number of individual systems is not restricted.
• Cost effectiveness: A centralized coordinator would require huge computation capability and high communication capacity, while interaction among neighboring individual systems can be implemented cost-effectively.
• Robustness: Even when some individual systems malfunction, the overall system might still work, since the individual systems behave based on interactions among their neighbors.
Inspired by the self-organized collective behavior discovered in natural systems, a significant amount of research effort has recently been focused on the implementation of engineering systems that are capable of achieving global tasks based on local control laws, opening a new research field: multi-agent systems. Advances in control, computation, and communication make it possible to construct such multi-agent systems.
Multi-agent systems are studied in various disciplines such as computer science and social science, but there is no universally accepted definition of a multi-agent system (Wooldridge, 2002). In this dissertation, from a control theoretic viewpoint, a multi-agent system is understood as a collection of dynamical systems interacting with each other. Accordingly, an agent is understood as a dynamical system.
There is a variety of research topics on multi-agent systems from a control theoretic point of view. The research topics in the literature can be categorized as follows (Mesbahi & Egerstedt, 2010; Ren & Cao, 2011):
• Consensus/synchronization: Consensus/synchronization means stabilizing a certain quantity of interest, which depends on the states of all the agents, to a common value. In consensus, the emphasis is on the interaction topology of the multi-agent system, while in synchronization, the emphasis is on the dynamics of the individual agents rather than the interaction topology. The two terms are regarded as the same in this dissertation.
• Formation control: Formation control refers to the stabilization of the state of a multi-agent system to form a certain geometrical configuration through local interaction among agents. In a general problem formulation, formation control covers consensus/synchronization; for instance, rendezvous, which is a special case of formation control, can be regarded as consensus/synchronization.
• Distributed estimation: Global information is often crucial for the control of multi-agent systems. Distributed estimation refers to estimating such global information in a cooperative manner.
• Distributed task assignment: For a sensor/robotic network, local tasks should be allocated to each individual agent in a distributed fashion. Distributed task assignment includes task/resource allocation, coverage control, and scheduling.
• Etc.: There is a variety of other research topics on multi-agent systems. Analysis and prediction of social networks and epidemics are typical examples.
In this dissertation, we mainly focus on formation control and consensus in multi-agent systems, because these problems cover general control problems in multi-agent systems.
1.2 Literature review

1.2.1 Formation control
Consider the following N agents:

ẋi = fi(x, ui), i = 1, . . . , N, (1.1)

where xi ∈ Rn, x = [xT1 · · · xTN ]T, ui ∈ Rr, and fi : RnN × Rr → Rn. For the multi-agent system, the global task is defined by the following M constraints:

gi(x) = gi(x∗), i = 1, . . . , M, (1.2)

where gi : RnN → R, for some x∗ ∈ RnN. Then the formation control problem is generally stated as follows:

Problem 1.2.1 (A general formation control problem) For the multi-agent system (1.1), design a control law u = [uT1 · · · uTN ]T such that the set

Ex∗ = {x ∈ RnN : gi(x) = gi(x∗), i = 1, . . . , M} (1.3)

becomes asymptotically stable under the control law.
As already mentioned, consensus can be regarded as a special sort of formation control. For instance, Problem 1.2.1 reduces to a consensus problem when the formation constraints (1.2) are given by

x1 = x2 = · · · = xN. (1.4)

Problem 1.2.1 is also called a rendezvous problem when the formation constraints are given by (1.4).
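For the single-integrator case fi(x, ui) = ui, the standard consensus law ui = Σj∈Ni (xj − xi), i.e., ẋ = −Lx with L the Laplacian of the interaction graph, solves the rendezvous problem above. The following is an illustrative simulation sketch (function names and gains are ours, not from the dissertation):

```python
import numpy as np

def laplacian(edges, n):
    """Graph Laplacian L = D - A of an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def simulate_consensus(x0, edges, dt=0.01, steps=2000):
    """Euler simulation of x_dot = -L x, i.e. u_i = sum_j (x_j - x_i)."""
    L = laplacian(edges, len(x0))
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * (L @ x)
    return x

# A path graph 0-1-2-3 is connected, so the scalar states converge to
# the average of the initial conditions.
x = simulate_consensus([1.0, 2.0, 3.0, 6.0], [(0, 1), (1, 2), (2, 3)])
```

For an undirected connected graph the states converge to the initial average, which is the average-consensus property discussed below.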
The assumption that there is no centralized controller for the multi-agent system (1.1) raises primarily two issues regarding Problem 1.2.1. The first issue is associated with the information available to the agents, which can be summarized by the following two questions (Summers et al., 2011):
• What variables are measured?
• What variables are controlled?
Depending on the sensed and controlled variables, a variety of formation control problems can be formulated. Formation control problems in the literature can be mainly categorized into the following classes of approaches:
• Position-based approaches: The position of each agent is controlled based on sensed positions and relative displacements among agents.
• Displacement-based approaches: Relative displacements among agents are controlled based on relative displacements sensed with respect to the global reference frame.
• Distance-based approaches: Inter-agent distances are controlled based on relative displacements sensed with respect to the local reference frames of the agents, which are not necessarily aligned with each other.
• Etc.: angle-based approaches and pure distance-based approaches.
Besides these classes of approaches, one may formulate further formation control problems based on appropriate assumptions on the sensed and controlled variables.
The second issue is the interaction topology among the agents. That is, once the sensed and controlled variables are determined, which agents sense and control those variables? The interaction topology assigns local tasks to the agents. An appropriately designed interaction topology ensures consistency between the global task of the multi-agent system and the local tasks of the agents, i.e., the global task is achieved by the completion of all the local tasks.
In the following, we review existing results on displacement- and distance-based approaches in the literature.
Displacement-based approaches
We consider the multi-agent system described by (1.1). Under the displacement-based problem setup, each agent senses and controls the relative displacements of its neighboring agents with respect to the global reference frame. That is, the sensed variables for agent i are given as

xji = xj − xi (1.5)

for all its neighboring agents j.
The global task for the multi-agent system is to stabilize x to satisfy the formation constraints given by desired relative displacements,

xj − xi = x∗j − x∗i (1.6)

for some x∗ and for all i, j = 1, . . . , N. The local task of agent i is automatically defined from the global task, i.e., the stabilization of xi to satisfy xj − xi = x∗j − x∗i for all its neighboring agents j.
Graphs have turned out to be quite useful for modeling multi-agent systems. The multi-agent system (1.1) is modeled as a graph on N vertices. Each edge in the graph represents the existence of interaction between the two agents corresponding to the vertices incident to the edge; that is, at least one of the two agents senses and controls the relative displacement between them. Since this graph describes the interaction topology of the multi-agent system, we refer to it as the interaction graph of the multi-agent system. An appropriate interaction topology for the multi-agent system is conveniently characterized by properties of the graph. It is known that the existence of a spanning tree, which is formally defined in Chapter 2, guarantees the consistency between the local tasks of the agents and the global task.
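For undirected interaction graphs, the spanning-tree condition reduces to connectivity, which can be checked numerically from the Laplacian spectrum: the second-smallest eigenvalue (the algebraic connectivity) is positive if and only if the graph is connected. An illustrative sketch of this standard test (names are ours):

```python
import numpy as np

def has_spanning_tree_undirected(edges, n):
    """An undirected graph contains a spanning tree iff it is connected,
    i.e. its algebraic connectivity (second-smallest Laplacian eigenvalue)
    is positive, equivalently rank(L) = n - 1."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    eig = np.sort(np.linalg.eigvalsh(L))  # L is symmetric
    return bool(eig[1] > 1e-9)
```

For directed graphs the condition is instead the existence of a rooted directed spanning tree, which this symmetric-Laplacian test does not cover.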
Formation control problems reduce to consensus problems if the dynamics of the agents are linear, so results on consensus apply directly to formation control problems. Olfati-Saber & Murray (2004) have provided a necessary and sufficient condition on the interaction topology of single-integrator modeled agents for average consensus, i.e., xi(t) → (1/N) ∑Nk=1 xk(0) as t → ∞ for all i = 1, . . . , N. Similar problems have also been addressed in Jadbabaie et al. (2003); Lin et al. (2004). Ren et al. (2004) have shown that, for single-integrator modeled agents, the existence of a spanning tree in the interaction graph is a necessary and sufficient condition for consensus, i.e., xi(t) − xj(t) → 0 as t → ∞ for all i, j = 1, . . . , N. Moreau (2004) has shown that a condition on the interaction graph, called uniform connectivity, is sufficient for consensus of single-integrator modeled agents. Consensus of double-integrator modeled agents has been studied in Ren & Atkins (2007), according to which the existence of a spanning tree in the interaction graph of the double-integrators is necessary but not sufficient for consensus. Consensus of identical linear time-invariant systems has been addressed in Scardovi & Sepulchre (2009). Displacement-based formation control problems for nonlinear agents have also been studied in Dimarogonas & Kyriakopoulos (2008); Dong & Farrell (2008); Lin et al. (2005); Ren & Atkins (2007).
Distance-based approaches
For the multi-agent system (1.1), under the distance-based problem setup, each agent senses the relative displacements of its neighboring agents with respect to its own local reference frame. The local reference frames of the agents are not aligned with each other, and each agent maintains its own local reference frame. That is, the sensed variables of agent i are given as

x^i_ji = x^i_j − x^i_i, (1.7)

for all neighboring agents j of agent i, where the superscript i denotes that the variables are expressed with respect to the local reference frame of agent i.
The global task for the multi-agent system is to stabilize x to satisfy the formation constraints given by desired inter-agent distances,

‖xj − xi‖ = ‖x∗j − x∗i‖, (1.8)

where ‖·‖ denotes the Euclidean norm, for some x∗ and for all i, j = 1, . . . , N. The local task of agent i is then defined from the global task, i.e., the stabilization of xi to satisfy ‖x^i_j − x^i_i‖ = ‖xj − xi‖ = ‖x∗j − x∗i‖ for all its neighboring agents.
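The frame-dependence of (1.7) can be made concrete: if agent i's local frame is rotated by θi relative to the global frame, then under one common convention (assumed here, not stated in the text above) the measurement is x^i_ji = R(θi)ᵀ(xj − xi). The measured vector depends on θi, but its norm, the inter-agent distance, does not, which is why distance-based constraints are compatible with unaligned local frames. A small illustrative check:

```python
import numpy as np

def to_local_frame(theta_i, x_i, x_j):
    """Relative displacement of agent j as seen in agent i's local frame,
    whose orientation differs from the global frame by theta_i.
    Convention assumed here: multiply the global vector by R(theta_i)^T."""
    c, s = np.cos(theta_i), np.sin(theta_i)
    R_T = np.array([[c, s], [-s, c]])  # transpose of the rotation R(theta_i)
    return R_T @ (x_j - x_i)

x_i, x_j = np.array([0.0, 0.0]), np.array([3.0, 4.0])
v_local = to_local_frame(0.7, x_i, x_j)
# v_local differs from x_j - x_i, but its norm is still the distance 5.
```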
It is known that when the interaction graph of the agents is undirected, rigidity (Asimow & Roth, 1979; Laman, 1970) of the interaction graph ensures consistency between the local tasks and the global task. That is, if the interaction graph is rigid, the formation of the agents is uniquely determined up to congruence, at least locally, by the stabilization of each inter-agent distance to its desired value. When the interaction graph is directed, the notion of graph rigidity is generalized to the notion of persistence (Hendrickx et al., 2007). Persistence includes rigidity while requiring an additional condition called constraint consistency.
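Rigidity of a given planar framework can be tested numerically: a framework on N ≥ 2 points in the plane is infinitesimally rigid if and only if its rigidity matrix has rank 2N − 3. The sketch below implements this standard test (it is not code from the dissertation; Chapter 2 gives the formal definitions):

```python
import numpy as np

def rigidity_matrix(p, edges):
    """Rigidity matrix R of a planar framework: one row per edge (i, j),
    with p_i - p_j in the columns of vertex i and p_j - p_i in those of j."""
    n = p.shape[0]
    R = np.zeros((len(edges), 2 * n))
    for k, (i, j) in enumerate(edges):
        d = p[i] - p[j]
        R[k, 2 * i:2 * i + 2] = d
        R[k, 2 * j:2 * j + 2] = -d
    return R

def is_infinitesimally_rigid(p, edges):
    """Infinitesimal rigidity test in the plane: rank R = 2N - 3."""
    return np.linalg.matrix_rank(rigidity_matrix(p, edges)) == 2 * p.shape[0] - 3

# A triangle is rigid; removing one edge leaves a flexible two-bar linkage.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
```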
A noticeable early work on distance-based formation control problems is found in Baillieul & Suri (2003), who proposed a gradient-based control law and analyzed the stability of acyclic minimally persistent formations. Later on, the gradient-based control law has been adopted in most existing results on distance-based formation control. Krick et al. (2009) have proved local asymptotic stability of undirected rigid formations and acyclic persistent formations under the gradient-based control law, based on center manifold theory. Dimarogonas & Johansson (2010) have shown that undirected/directed tree formations are globally asymptotically stable, whereas such formations are not uniquely determined up to congruence by the completion of the local tasks. Local asymptotic stability of cyclic and acyclic minimally persistent formations has been addressed in Summers et al. (2011) and Yu et al. (2009), respectively. Global asymptotic stability of triangular formations has been studied in Cao et al. (2007, 2011, 2008); Dörfler & Francis (2010).
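A commonly used form of the gradient law is derived from the potential φ(x) = (1/4) Σ(i,j) (‖xj − xi‖² − d∗ij²)², which gives ẋi = Σj∈Ni (‖xj − xi‖² − d∗ij²)(xj − xi). The sketch below simulates this law for a triangle; it is illustrative only (gains, step size, and names are ours, and the dissertation's own analysis appears in Chapters 3 and 4):

```python
import numpy as np

def gradient_law_step(x, edges, dstar, gain=1.0, dt=0.005):
    """One Euler step of the distance-based gradient control law
    x_i' = sum_j (||x_j - x_i||^2 - d*_ij^2)(x_j - x_i)."""
    u = np.zeros_like(x)
    for (i, j), d in zip(edges, dstar):
        e = x[j] - x[i]
        w = e @ e - d ** 2          # squared-distance error on edge (i, j)
        u[i] += gain * w * e        # i moves toward j if the edge is too long
        u[j] -= gain * w * e
    return x + dt * u

# Rigid triangle with all desired distances 1; start near the target shape,
# since the gradient law is only locally asymptotically stable.
edges, dstar = [(0, 1), (1, 2), (0, 2)], [1.0, 1.0, 1.0]
x = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 0.9]])
for _ in range(4000):
    x = gradient_law_step(x, edges, dstar)
```

After the loop, each inter-agent distance is close to its desired value, while the final position and orientation of the triangle depend on the initial condition (the formation is determined only up to congruence).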
1.2.2 Consensus
For the past decade, consensus of multiple systems has attracted a significant amount of research interest due to its broad applications in various areas (see Olfati-Saber et al. (2007); Ren et al. (2007) and the references therein). Theoretical study on consensus has particularly focused on linear system networks. Olfati-Saber and Murray have provided a necessary and sufficient condition on the underlying graph topology of single-integrator modeled agents to achieve average consensus (Olfati-Saber & Murray, 2004). Ren et al. have shown that, for single-integrator modeled agents, the existence of a spanning tree in the underlying graph is a necessary and sufficient condition for consensus (Ren et al., 2004). Moreau has shown that uniform connectivity of the underlying graph is sufficient to achieve consensus for single-integrator modeled agents (Moreau, 2004). Consensus of double-integrator modeled agents has been studied in Ren & Atkins (2007), according to which the existence of a spanning tree in the interaction graph of the double-integrators is necessary but not sufficient for consensus. Consensus of identical linear time-invariant systems has been addressed in Fax & Murray (2004); Scardovi & Sepulchre (2009); Tuna (2009).
Recently, Wang & Elia (2010) have studied a single-integrator network interconnected by dynamic edges and provided a sufficient condition for consensus based on the diagonal dominance of the complex interconnection matrix. Since the edges of the network have dynamics, one may call such networks dynamic consensus networks. Another work on dynamic consensus networks is found in Moore et al. (2011). Motivated by thermal processes in a building, Moore et al. (2011) have proposed a general single-integrator dynamic consensus network model and provided a sufficient condition for consensus, also based on a diagonal dominance condition. A load frequency control (LFC) network of synchronous generators can be described as a dynamic consensus network of single-integrators. In an LFC network, each generator control system can be represented as a node whose variable is the phase variation of its voltage, and the power exchange among the generators can be represented as the interconnection of the network. Though the generator control systems are nonidentical high-order systems and the interconnection is diffusive (Tuna, 2009), in that the power exchange between two nodes is proportional to the difference between the phases of the voltages of the nodes (Kundur et al., 1994), we may address the LFC network as a dynamic consensus network of single-integrators by transforming the network dynamics appropriately.
While the great majority of the existing results are based on the ideal assumption that there are no external disturbances to the individual systems, physical systems are usually affected by disturbances in reality. Liu et al. have studied the design of a dynamic output feedback controller for an undirected network of identical linear time-invariant systems under exogenous disturbances (Liu & Jia, 2010; Liu et al., 2009). Considering the transfer function matrix from the disturbance vector to the disagreement vector of the network, they have formulated an H∞ suboptimal problem. Based on the symmetry of the Laplacian of the undirected graph, they have decomposed the overall equation for the network into independent systems with the same system order as that of the individual systems, and then provided an LMI condition for the decomposed systems to solve the H∞ suboptimal problem. Li et al. have considered an undirected network of identical linear time-invariant systems under disturbances, assuming that some individual systems can measure their own states, and then provided LMI conditions to find state feedback controllers solving the H2 and H∞ suboptimal problems (Li et al., 2011). Meanwhile, most of the existing results have mainly focused on the design of decentralized feedback control gain matrices to ensure the disturbance attenuation performance in a consensus network. Though the disturbance attenuation property depends not only on the feedback controller of the network but also on the graph associated with the network, less attention has been paid to the graph design.
1.3 Contributions
The contributions of this dissertation are summarized as follows. First, we propose a new decentralized formation control law under the distance-based setup. While most existing control laws are gradient-based laws, which focus on the dynamics of the agents, the proposed law focuses on the edge dynamics. Accordingly, the proposed law, which is designed by considering the inter-agent distance dynamics, shows performance comparable to the existing control laws. In particular, the proposed law shows a desirable property when applied to three-agent formations.
Second, we present local asymptotic stability analysis results for undirected and directed formations of single-integrator modeled agents, and show that undirected formations of double-integrator modeled agents are also locally asymptotically stable based on the topological equivalence of a Hamiltonian system and a gradient-like system, which is an extension of the existing results. While many systems including vehicles can be modeled as single-integrators, vehicles are inherently double-integrator systems, which underscores the importance of the proposed stability analysis.
Third, we propose a displacement-based formation control law via orientation alignment under the distance-based setup. Though agents maintain their own local reference frames under the distance-based setup and thus exploit some directional information, the directional information of each agent cannot be used to achieve global tasks because the agents have nonidentical orientation angles. This lack of information underlies the difficulties of formation control under the distance-based setup. To overcome the obstacle, we propose an orientation alignment law, which allows the agents to align their local reference frames and utilize their directional information to stabilize their states. That is, the lack of information is overcome by means of cooperation among the agents. Under the proposed control law, the conditions on the interaction topology for achieving desired formations are relaxed and the region of attraction is clearly provided.
Fourth, we propose a position-based formation control law via a distributed observer under the displacement-based setup. Obviously, desired formations would be far more readily achievable if each agent knew its own state. Though it is not possible for the agents to estimate their exact states, they can estimate their states up to translation based on cooperation among the agents. The translated states of the agents can then be utilized for achieving the desired formation. It is shown that the performance of formation control is enhanced under the proposed control law, especially when the distributed observer reaches its steady state.
Fifth, by combining the proposed orientation alignment law and distributed observer, we propose a position-based control law under the distance-based setup. That is, by means of cooperation, the agents obtain state information as well as directional information under the distance-based setup, where only relative distances and locally meaningful directional information are available.
Sixth, we investigate conditions for the consensus of nonidentical linear agents interconnected by diffusive coupling. We present sufficient conditions for the consensus of a network of positive real (PR) agents multiplied by a single-integrator. Individual agents of the network may have nonidentical dynamics and different system orders. It is shown that a connected network of weakly strictly positive real (WSPR) agents multiplied by a single-integrator reaches consensus. Further, a condition for the consensus of a connected network of PR agents multiplied by a single-integrator is also provided. Based on these conditions, we can check the consensusability of an output diffusively coupled linear agent network and of an LFC network of synchronous generators. Since many load frequency controllers in the literature have been designed without consideration of the stability of the overall network, the sufficient conditions presented in this dissertation might be useful for the design of load frequency controllers.
Finally, we study two problems related to the disturbance attenuation of undirected consensus networks of identical linear agents under exogenous disturbances. We address the H∞ suboptimal problem for a given identical linear agent network to ensure the disturbance attenuation performance under the assumption that the topology of the network is given but the edge weights are variables belonging to a convex set. We show that the H∞ suboptimal problem, which is the design of the edge weights of the graph, is solved by maximizing the second smallest eigenvalue of the graph Laplacian under a condition that can be readily checked by solving an LMI feasibility problem. Since the disturbance attenuation performance is a highly nonlinear function of the edge weights of the graph, it is generally intractable to solve the H∞ suboptimal problem directly. In this regard, the proposed approach might be useful in practice. We also consider an identical linear agent network with an existing interconnection, which might be regarded as a physical interconnection. For this consensus network, we formulate two H∞ suboptimal problems based on decentralized and distributed controllers, respectively, and provide algorithms for the design of decentralized and distributed controllers to ensure a given disturbance attenuation performance. When the network has certain properties, the decentralized controller is readily designed by solving an LMI feasibility problem and maximizing the second smallest eigenvalue of the graph Laplacian. The distributed controller is also designed by solving an LMI feasibility problem.
1.4 Outline
The outline of the remainder of this dissertation is as follows. In Chapter 2, we review the mathematical background used throughout the dissertation. Basic notions on graphs, algebraic graph theory, and graph rigidity are summarized.
From Chapter 3 to Chapter 7, we study decentralized formation control problems under various assumptions on the information available to the agents. In Chapter 3, a new decentralized formation control law is proposed under the distance-based setup, and the stability of undirected formations is analyzed under the proposed control law.
In Chapter 4, we analyze stability of formations under existing gradient-based control
laws. First, we present another proof for local asymptotic stability of undirected formations
of single-integrator modeled agents under the existing laws, based on the Lyapunov direct
method. Second, we provide a proof for local asymptotic stability of undirected formations
of double-integrator modeled agents under the existing laws, based on the topological equivalence of a Hamiltonian system and a gradient-like system. Third, we present a new proof
for local asymptotic stability of cycle-free persistent formations of single-integrator modeled
agents under the existing laws.
In Chapter 5, we propose a displacement-based formation control law via orientation
alignment under distance-based setup. We propose an orientation alignment law based on
cooperation among the agents, and then show that a displacement-based formation control
law is effectively applied to stabilizing the formation of the agents.
In Chapter 6, we propose a position-based control law via distributed observer under
displacement-based setup. Then the formation of the agents is stabilized based on the estimated states.
In Chapter 7, combining the results of Chapters 5 and 6, we propose a position-based control law via distributed observer and orientation alignment under the distance-based setup. Under the proposed control law, the agents align their orientations while estimating their positions, based on cooperation. The agents eventually obtain translated and rotated versions of their states, and the formation of the agents is stabilized based on the estimated states.
Chapters 8 and 9 mainly focus on consensus problems. In Chapter 8, we investigate conditions for consensus of a class of heterogeneous linear agents. We assume that the dynamics of each agent contains a single-integrator and that the rest of the dynamics is positive real.
In Chapter 9, we study disturbance attenuation in a consensus network of identical linear
agents. We show that the disturbance attenuation performance is enhanced by increasing the
algebraic connectivity of the network when the agents satisfy a certain condition.
In Chapter 10, we conclude this dissertation with a discussion of future work.
Chapter 2
Preliminaries
In this chapter, we provide mathematical notions used throughout this dissertation. First, basic graph notions and algebraic graph theory are summarized. Graphs have turned out to be useful for the description of multi-agent systems. Details of algebraic graph theory are found in Godsil & Royle (2001); Merris (1994). Then, we provide notions of graph rigidity, which lies at the foundation of distance-based control. Details of graph rigidity are found in Asimow & Roth (1979); Laman (1970).
2.1 Algebraic graph theory
Suppose that a group of agents interact with each other by means of sensing and/or communication. In such a case, the interaction among the agents is naturally modeled by a graph: each vertex of the graph represents an agent of the group, and each edge between two vertices represents the interaction between the corresponding agents, with its weight representing the interaction strength.
A weighted directed graph G is defined as a triple (V, E, W), where V denotes the set of vertices, E ⊆ V × V denotes the set of edges, and W : E → R+ denotes the mapping assigning positive real numbers to the edges. Self-edges, i.e., (i, i) for some i ∈ V, are not allowed in this dissertation since graphs are used to model the interaction among agents. A weighted undirected graph is understood as a special type of weighted directed graph: a weighted directed graph is called undirected if, for all (i, j) ∈ E, (j, i) ∈ E and wij = wji, where wij and wji denote the weights assigned to the edges (i, j) and (j, i), respectively. When all the weights of a weighted directed graph G = (V, E, W) are one, we often denote the graph by G = (V, E).
We next summarize basic notions on graphs. Though these notions are defined for a weighted directed graph, they can be defined for a weighted undirected graph analogously.
Definition 2.1.1 (Basic notions on graphs I) Consider a weighted directed graph G = (V, E, W).
• Neighbor: Vertex j is a neighbor of vertex i if (i, j) ∈ E.
• Neighborhood: The neighborhood Ni of vertex i is the set of all its neighbors.
• Parent: Vertex j is a parent of vertex i if (i, j) ∈ E.
• Child: Vertex i is a child of vertex j if (i, j) ∈ E.
Definition 2.1.2 (Basic notions on graphs II)
• Directed path: A directed path is a sequence of edges of the form (i1, i2), (i2, i3), . . ..
• Directed cycle: A directed cycle is a directed path that starts and ends at the same vertex.
• Connectedness: A directed graph is connected if there is a directed path between any pair of vertices.
• Strong connectedness: A directed graph is strongly connected if there is a directed
path from every vertex to every other vertex.
• Completeness: A directed graph is complete if there is an edge from every vertex to
every other vertex.
• Tree: A directed tree is a directed graph in which every vertex has exactly one parent
except for one vertex called the root of the graph.
• Subgraph: A graph G′ = (V′, E′, W′) is a subgraph of G = (V, E, W) if V′ ⊆ V, E′ ⊆ E, and W′ ⊆ W.
• Directed spanning tree: A directed graph G′ = (V′, E′, W′) is a directed spanning tree of a directed graph G = (V, E, W) if G′ is a subgraph of G, G′ is a directed tree, and V′ = V.
Suppose that a directed graph G = (V, E, W) has N vertices and M edges. The adjacency matrix W = [wij] ∈ R^{N×N} of the graph G is defined as

wij := { wij, (i, j) ∈ E,
         0,   (i, j) ∉ E,

where wij is the weight assigned to edge (i, j). Note that wii = 0 for all i ∈ V since self-edges are not allowed. The in-degree d_i^in and out-degree d_i^out of a vertex i are defined as

d_i^in := Σ_{j=1}^N wij,   d_i^out := Σ_{j=1}^N wji.
The directed graph G = (V, E, W) is balanced if d_i^in = d_i^out for all i ∈ V. The degree matrix D = [dij] ∈ R^{N×N} of G is defined as

dij := { Σ_{k=1}^N wik, i = j,
         0,             i ≠ j.
Then the graph Laplacian matrix L = [lij] of G is defined as

L := D − W.

The elements of the graph Laplacian matrix L are given by

lij = { Σ_{k∈Ni} wik, i = j,
        −wij,         i ≠ j.
Graph Laplacian matrices have the following properties:
Theorem 2.1.1 (Properties of graph Laplacian matrices I) Consider a graph Laplacian matrix L ∈ R^{N×N}. Then the following statements are true.
• The row sums of L are all zero.
• There is a zero eigenvalue of L with the associated eigenvector 1N.
• The matrix L is diagonally dominant, i.e., lii ≥ Σ_{j=1, j≠i}^N |lij| for all i = 1, . . . , N.
• All nonzero eigenvalues of L lie in the open right-half complex plane.
• The matrix exponential e^{−Lt} has row sums equal to one for all t, i.e., e^{−Lt} is a stochastic matrix.
A graph Laplacian matrix of a connected graph has the following properties:
Theorem 2.1.2 (Properties of graph Laplacian matrices II) Suppose that L ∈ R^{N×N} is the graph Laplacian matrix of a connected graph. Then the following statements are true.
• The zero eigenvalue of L is algebraically simple.
• There is a zero eigenvalue of L with the associated eigenvector 1N.
• For a vector x = [x1 · · · xN]^T ∈ R^N, the solution of ẋ = −Lx satisfies xi → x* for all i = 1, . . . , N, for some constant x*.
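The properties in Theorems 2.1.1 and 2.1.2 can be checked numerically. The sketch below uses a small hypothetical weighted digraph (the edge list is illustrative, not from the text) and the row-sum degree convention, so that L = D − W has zero row sums; it then integrates ẋ = −Lx by forward Euler to observe consensus.

```python
import numpy as np

# Illustrative weighted directed graph on 4 vertices (0-indexed);
# entry W[i, j] = w_ij is the weight of edge (i, j).
N = 4
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (3, 0): 0.5, (1, 3): 1.0}

W = np.zeros((N, N))
for (i, j), w in edges.items():
    W[i, j] = w

D = np.diag(W.sum(axis=1))  # degree matrix with d_ii = sum_k w_ik
L = D - W                   # graph Laplacian

# Theorem 2.1.1: zero row sums, eigenvector 1_N for eigenvalue 0.
assert np.allclose(L.sum(axis=1), 0)
assert np.allclose(L @ np.ones(N), 0)

# Nonzero eigenvalues lie in the open right-half complex plane.
eigvals = np.linalg.eigvals(L)
assert all(ev.real > 1e-9 for ev in eigvals if abs(ev) > 1e-9)

# Theorem 2.1.2: the graph above is strongly connected, so forward-Euler
# integration of xdot = -L x drives all states to a common value.
x = np.array([1.0, -2.0, 3.0, 0.5])
dt = 0.01
for _ in range(5000):
    x = x - dt * (L @ x)

assert np.max(x) - np.min(x) < 1e-6  # consensus reached
```

The small step size keeps the explicit Euler iteration stable; any step satisfying dt · max Re(λ(L)) < 2 would also work for this graph.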
Consider a directed graph G = (V, E), where |V| = N and |E| = M. The incidence matrix H = [hij] ∈ R^{M×N} of G is defined as

hij := { 1,  if vertex j is the sink of edge i,
         −1, if vertex j is the source of edge i,
         0,  otherwise.

The outgoing edge matrix O = [oij] ∈ R^{N×M} of G is defined as

oij := { −1, if vertex i has outgoing edge j,
         0,  otherwise.

The edges of a directed graph G = (V, E) can be partitioned such that E = Ed ∪ E+ ∪ E−, where Ed, E+, and E− are disjoint and (i, j) ∈ E+ implies (j, i) ∈ E−. We define Ē as

Ē := Ed ∪ E+.

The incidence matrix of the graph is then written as H = [Hd^T, H+^T, −H+^T]^T and the outgoing edge matrix as O = [Od, O+, O+ − H+^T], where H+ and O+ are the incidence and outgoing edge matrices corresponding to E+. We define H̄ and Ō as

H̄ := [Hd^T, H+^T]^T,   Ō := [Od, O+].
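As an illustration of these definitions, the sketch below constructs H and O from an edge list; the three-edge directed cycle used here is a hypothetical example chosen for demonstration.

```python
import numpy as np

# Illustrative directed graph: 3 vertices, edges listed as (source, sink).
# Vertex and edge indices are 0-based here for convenience.
edges = [(0, 1), (1, 2), (2, 0)]
N, M = 3, len(edges)

# Incidence matrix H (M x N): per edge, +1 at the sink, -1 at the source.
H = np.zeros((M, N))
# Outgoing edge matrix O (N x M): -1 where vertex i is the source of edge j.
O = np.zeros((N, M))
for e, (src, snk) in enumerate(edges):
    H[e, src] = -1.0
    H[e, snk] = 1.0
    O[src, e] = -1.0

# Each row of H contains one +1 and one -1, so H @ 1_N = 0; quantities of
# the form (H kron I_n) p are therefore invariant under a common translation
# of all vertex positions.
assert np.allclose(H @ np.ones(N), 0)
assert np.allclose(O.sum(axis=1), -np.ones(N))  # each vertex has one outgoing edge here
```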
2.2 Graph rigidity

Given an undirected graph G = (V, E), where V = {1, . . . , N} and E = {1, . . . , 2M}, let pi ∈ R^n be assigned to i ∈ V. Then p = (p1, . . . , pN) ∈ R^{nN} is called a realization of G in n-dimensional space. Further, the pair (G, p) is called a framework of G in n-dimensional space.
Two frameworks (G, p) and (G, q) in n-dimensional space are equivalent if ‖pi − pj‖ = ‖qi − qj‖ for all (i, j) ∈ E; they are congruent if ‖pi − pj‖ = ‖qi − qj‖ for all i, j ∈ V.
The edges of the graph G can be partitioned such that E = E+ ∪ E−, where E+ and E− are disjoint and (i, j) ∈ E+ implies (j, i) ∈ E−. The incidence matrix of the graph is then written as H = [H+^T, −H+^T]^T and the outgoing edge matrix as O = [O+, O+ − H+^T], where H+ and O+ are the incidence and outgoing edge matrices corresponding to E+.
By ordering all edges in E+ in some way, an edge function gG : R^{nN} → R^M associated with (G, p) is defined as

gG(p) := (1/2)(. . . , ‖pi − pj‖^2, . . .), ∀(i, j) ∈ E+,   (2.1)

where M is the cardinality of E+. The rigidity of frameworks is then defined as follows (Asimow & Roth, 1979):
Definition 2.2.1 A framework (G, p) is rigid if there exists a neighborhood Up of p ∈ R^{nN} such that gG^{−1}(gG(p)) ∩ Up = gK^{−1}(gK(p)) ∩ Up, where K is the complete graph on N vertices.
In other words, the framework (G, p) is rigid if there exists a neighborhood Up of p ∈ R^{nN} such that, for any ξ ∈ Up, if gG(ξ) = gG(p), then (G, ξ) is congruent to (G, p). Further, if Up can be taken to be all of R^{nN}, then the framework (G, p) is globally rigid.
As mentioned in Dörfler & Francis (2009), the link e = (e1, . . . , eM) ∈ R^{nM} of a framework (G, p) is obtained as

e := (H+ ⊗ In)p.   (2.2)

Note that e1, . . . , eM are not independent but lie in the column space Im(H+ ⊗ In). The space Im(H+ ⊗ In) is referred to as the link space associated with the framework (G, p). We define a function vG : Im(H+ ⊗ In) → R^M as

vG(e) := (1/2)(‖e1‖^2, . . . , ‖eM‖^2),

which corresponds to the edge function gG parameterized in the link space. That is, gG(p) = vG((H+ ⊗ In)p). Defining D as

D(e) := diag(e1, . . . , eM),

we obtain

∂gG(p)/∂p = (∂vG(e)/∂e)(∂e/∂p) = [D(e)]^T (H+ ⊗ In).
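The Jacobian ∂gG(p)/∂p = [D(e)]^T(H+ ⊗ In) is the rigidity matrix of the framework. As a numerical illustration (the triangle coordinates below are hypothetical), the sketch assembles the rigidity matrix row by row, with (pi − pj)^T in the columns of vertex i and −(pi − pj)^T in the columns of vertex j, and checks the standard planar rank condition rank = 2N − 3 for infinitesimal rigidity of a framework on N ≥ 2 vertices.

```python
import numpy as np

def rigidity_matrix(p, edges):
    """Rigidity matrix: one row per edge (i, j), carrying (p_i - p_j)^T in
    the n columns of vertex i and its negative in the columns of vertex j."""
    N, n = p.shape
    R = np.zeros((len(edges), N * n))
    for e, (i, j) in enumerate(edges):
        d = p[i] - p[j]
        R[e, n * i:n * (i + 1)] = d
        R[e, n * j:n * (j + 1)] = -d
    return R

# A non-degenerate triangle in the plane (illustrative coordinates).
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]
R = rigidity_matrix(p, edges)

# Infinitesimally rigid in the plane: rank R = 2N - 3 = 3.
assert np.linalg.matrix_rank(R) == 2 * p.shape[0] - 3

# A collinear realization loses rank (its rows become linearly dependent).
p_collinear = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
assert np.linalg.matrix_rank(rigidity_matrix(p_collinear, edges)) == 2
```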
Let G = (V, E) be a directed graph. Suppose that for a framework (G, p), desired squared distances d*ij for all (i, j) ∈ E are given. An edge (i, j) ∈ E is active if ‖pi − pj‖^2 = d*ij. A position pi of a vertex i ∈ V is fitting for the desired distances if there is no p*i ∈ R^2 such that

{(i, j) ∈ E : ‖pi − pj‖^2 = d*ij} ⊂ {(i, j) ∈ E : ‖p*i − pj‖^2 = d*ij}.   (2.3)

This means that pi is one of the best positions to satisfy the constraints on edge lengths. The framework (G, p) is fitting for the desired distances if all the vertices of G are at fitting positions for the desired distances. The persistence of frameworks is then defined as follows (Hendrickx et al., 2007):
Definition 2.2.2 Let G = (V, E) be a directed graph. A framework (G, p) is persistent if there exists ε > 0 such that every realization p′ that is fitting for the distance set induced by p and satisfies d(p, p′) < ε, where d(p, p′) = max_{i∈V} ‖pi − p′i‖, is congruent to p.
That is, if (G, p) is persistent, then there exists a neighborhood of p in which every realization p′ fitting for the distances induced by p is congruent to p.
As a generalization of rigidity, which is defined for undirected graphs, to directed graphs, persistence formalizes not only rigidity but also constraint consistency. Intuitively, constraint consistency means that if each vertex satisfies its outgoing edge-length constraints, then the congruency of the framework is achieved.
If G contains no cycles and (G, p) is persistent, then the framework is cycle-free persistent. Cycle-free persistent frameworks are also referred to as leader/follower structured rigid frameworks in the literature (Cao et al., 2011). For simplicity, we assume that the vertices of a cycle-free framework are ordered as follows: j ∈ Nk ⇒ j < k. It is known that a cycle-free persistent framework can be constructed by a Henneberg sequence (Hendrickx et al., 2007).
Chapter 3
Distance-based formation control considering inter-agent distance dynamics
In this chapter, we propose a formation control strategy based on inter-agent distances for
single-integrator modeled agents in the plane. Attempting to directly control the inter-agent
distances, we derive a control law from the inter-agent distance dynamics. The proposed
control law achieves the local asymptotic stability of infinitesimally rigid formations. Further, we show that a triangular infinitesimally rigid formation asymptotically converges to
the desired formation under the proposed control law if the initial and the desired formations
are not collinear, with all squared inter-agent distance errors exponentially and monotonically converging to zero. As an extension of the existing results, the stability analysis in this chapter reveals that any distance-based formation control law related to the gradient law by multiplication by a positive definite matrix ensures the local asymptotic stability of infinitesimally rigid formations.
3.1 Introduction
A considerable amount of research effort has recently been focused on formation control of mobile agents based on local and partial information. In such works, displacement- and distance-based approaches have primarily been employed in the literature, depending on the availability of a common sense of orientation to the agents. While several effective control laws have been proposed in displacement-based approaches due to the existence of a common sense of orientation (Dimarogonas & Kyriakopoulos, 2008; Dong & Farrell, 2008; Lin et al., 2005; Ren & Atkins, 2007), various issues remain unsolved in the realm of distance-based control approaches.
The majority of distance-based formation control approaches have employed gradient-based laws (Cao et al., 2011; Dörfler & Francis, 2009, 2010; Krick et al., 2009). Krick et al. (2009) have shown that a gradient law achieves the local asymptotic stability of infinitesimally rigid formations by using the center manifold theory. Dörfler & Francis (2009) have proved the local asymptotic stability of minimally infinitesimally rigid formations by means of Lyapunov-based stability analysis. While only local stability has been investigated for general rigid formations, it is known that the gradient law achieves the asymptotic convergence of a triangular formation to the desired formation if the initial and the desired formations are not collinear (Cao et al., 2011; Dörfler & Francis, 2010).
We address the control of infinitesimally rigid formations of single-integrator modeled agents in the plane. We attempt to achieve desired formations by the direct control of inter-agent distances and derive the decentralized formation control law for the agents from the inter-agent distance dynamics. The contributions of this chapter can be summarized as follows. First, we propose a new strategy for distance-based formation controller design. Since each agent has its own local subtask in decentralized control, our strategy, which focuses on the local subtasks, is simple and straightforward. The proposed design strategy provides a novel angle on distance-based formation controller design. Second, we show that the proposed control law achieves the local asymptotic stability of infinitesimally rigid formations. Moreover, a triangular infinitesimally rigid formation asymptotically converges to the desired formation if the initial and the desired formations are not collinear, with all squared inter-agent distance errors exponentially and monotonically converging to zero. Third, we extend an existing Lyapunov-based stability analysis result to more general formations. The stability analysis result in this chapter covers not only minimally but also general infinitesimally rigid formations, whereas the result in Dörfler & Francis (2009) is valid for minimally rigid formations. Finally, as an extension of the existing results in Cao et al. (2007, 2008, 2011); Dörfler & Francis (2009); Krick et al. (2009), the result in this chapter shows that any control law related to the gradient law by multiplication by a positive definite matrix achieves local asymptotic stability of infinitesimally rigid formations.
3.2 Formation control considering inter-agent distance dynamics

3.2.1 Problem statement
We consider the following N single-integrator modeled agents in the plane:
ṗi = ui, i = 1, . . . , N,   (3.1)
where pi ∈ R2 and ui ∈ R2 denote the position and the control input, respectively, of agent
i.
We assume that each agent measures the relative positions of its neighboring agents. In this case, the sensing topology of the agents is modeled by an undirected graph G = (V, E), which we refer to as the interaction graph of the agents. Thus it is assumed that agent i ∈ V measures the relative positions of its neighbors:
pji := pj − pi, j ∈ Ni, i ∈ V,   (3.2)
where Ni is the set of all neighbors of agent i. Due to the absence of a common sense of
orientation, the directional information contained in pji cannot be used in a global sense.
The distance-based control laws, thus, generally adjust the norm of pji rather than pji itself,
while the displacement-based control laws directly adjust pji .
Then, given a realization p* = [p1*^T · · · pN*^T]^T ∈ R^{2N}, the desired formation is defined as

Ep* := {p ∈ R^{2N} : ‖pi − pj‖ = ‖p*i − p*j‖, i, j ∈ V}.   (3.3)

That is, Ep* is the set of all formations congruent to p*. We assume that (G, p*) is infinitesimally rigid.
Thus the overall task for the agents is the control of p to satisfy the following condition:

‖pi − pj‖ = ‖p*i − p*j‖, i, j ∈ V.

Then the subtask of each agent i can be assigned as the control of its position pi to satisfy the following condition:

‖pi − pj‖ = ‖p*i − p*j‖, j ∈ Ni, i ∈ V.   (3.4)

The consistency between the overall task for the agents and the subtasks of the agents is crucial, i.e., it is required that the achievement of all subtasks implies the completion of the overall task. The rigidity of (G, p*) is linked to this consistency. Based on Definition 2.2.1, it follows from the rigidity of (G, p*) that there exists a neighborhood Uq of q ∈ Ep* such that if p ∈ Uq, then the desired formation is achieved by satisfying (3.4), i.e., gG^{−1}(gG(p*)) ∩ Uq ⊆ Ep*. Note that if (G, p*) is not rigid, then gG^{−1}(gG(p*)) ∩ Uq ⊄ Ep*.
The formation control problem is then formulated as follows:
Problem 3.2.1 For the N single-integrator modeled agents (3.1), suppose that (G, p∗ ) is
infinitesimally rigid. Design a decentralized formation control law such that the desired
formation Ep∗ in (3.3) is asymptotically stable under the control law, based on measurements
(3.2).
3.2.2 Three-agent case
To provide the basic idea for the controller design, we need some preliminaries. A matrix D = [dij] ∈ R^{N×N} is a Euclidean distance matrix (EDM) if and only if there exists a realization p = [p1^T · · · pN^T]^T ∈ R^{nN}, where pi ∈ R^n, i = 1, . . . , N, such that dij = ‖pi − pj‖^2 for i, j = 1, . . . , N. Then p is said to be a realization of the EDM D. The triangle inequality holds for the elements of an EDM:

√dij ≤ √dik + √dkj

for any distinct i, j, k = 1, . . . , N. The dimension n of the affine span of the points p1, . . . , pN is said to be the embedding dimension of D. An EDM D is invariant under translation and rotation of p: if p is a realization of D, then any matrix [(Qp1 + b)^T · · · (QpN + b)^T]^T ∈ R^{nN} is also a realization of D, where Q ∈ R^{n×n} and b ∈ R^n denote an orthogonal matrix and a translation vector, respectively. Details on EDMs are found in Dattorro (2005).
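A minimal numerical illustration of these EDM properties, with hypothetical points, is sketched below: it builds the EDM of a planar realization, checks the triangle inequality on the square roots of its entries, and verifies invariance under a rigid-body motion.

```python
import numpy as np

def edm(p):
    """Matrix of squared inter-point distances of a realization p, shape (N, n)."""
    diff = p[:, None, :] - p[None, :, :]
    return np.sum(diff ** 2, axis=-1)

# Illustrative realization of 4 points in the plane.
p = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5], [3.0, 2.0]])
D = edm(p)

# The square roots of the entries satisfy the triangle inequality.
S = np.sqrt(D)
N = len(p)
for i in range(N):
    for j in range(N):
        for k in range(N):
            assert S[i, j] <= S[i, k] + S[k, j] + 1e-12

# An EDM is invariant under rotation and translation of the realization.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(edm(p @ Q.T + np.array([5.0, -1.0])), D)
```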
For an undirected graph G = (V, E), where V = {1, . . . , N}, a pair (G, d), where d = [· · · dij · · ·]^T with {i, j} ∈ E, is said to be realizable in n-dimensional space if there exist points p1, . . . , pN in that space such that dij = ‖pi − pj‖^2 for all {i, j} ∈ E. That is, (G, d) is realizable in n-dimensional space if and only if gG^{−1}(d) is not empty. The partial matrix D′ induced by (G, d), i.e., d′ij = dij if {i, j} ∈ E and all the other elements unspecified, has an EDM completion D if and only if (G, d) is realizable.
For the N agents under the assumption of Problem 3.2.1, focusing on the subtask (3.4), we attempt to directly control dij for all {i, j} ∈ E. The time-derivative of dij is written as

ḋij = (d/dt)(‖pi − pj‖^2)
    = (∂(‖pi − pj‖^2)/∂pi)(∂pi/∂t) + (∂(‖pi − pj‖^2)/∂pj)(∂pj/∂t)
    = 2(pi − pj)^T (ui − uj).   (3.5)

We define the distance control input uij as

uij := 2(pi − pj)^T (ui − uj)   (3.6)

for all {i, j} ∈ E. That is, uij is the virtual input for the control of dij. We then aim to design uij to stabilize dij to d*ij, and derive ui and uj from (3.6).
However, it is not generally possible to independently control every inter-agent distance dij for all {i, j} ∈ E while maintaining the realizability of (G, d) in the plane, because arbitrary adjustment of inter-agent distances may cause (G, d) not to be realizable in the plane (Dattorro, 2005). For instance, for a triangular formation, the three sides of the formation are under the constraint of the triangle inequality. For an N-agent formation, there exists a more complicated constraint, which is known as the Schoenberg criterion (Dattorro, 2005).
While it is not possible to arbitrarily design uij, the following theorem (Havel et al., 1983) is useful for the design of uij.
Theorem 3.2.1 (Havel et al., 1983) A convex combination of two given N × N EDMs that are realizable in m1- and m2-dimensional Euclidean spaces, respectively, is realizable in at most (m1 + m2)-dimensional Euclidean space.
For the three agents under the assumption of Problem 3.2.1, let D0 and D* be the EDMs generated by the initial realization p0 and the desired realization p*, respectively. According to Theorem 3.2.1, the convex combination

D(t) := α(t)D0 + (1 − α(t))D*, t ≥ 0,

where α : [0, ∞) → [0, 1], is an EDM realizable in at most four-dimensional space. Moreover, since any triangle is realizable in the plane, D(t) is realizable in the plane for all t ≥ 0 (Dattorro, 2005).
Motivated by this fact, we design the distance control input uij as

uij = −kd(dij − d*ij), {i, j} ∈ E,   (3.7)

where kd > 0, which leads to

dij(t) = e^{−kd t} d0ij + (1 − e^{−kd t}) d*ij, {i, j} ∈ E.   (3.8)
Note that (G, d(t)), where d(t) = [· · · dij · · · ]T for all {i, j} ∈ E, is realizable in the plane
for all t ≥ 0 because dij evolves along time as (3.8) for any {i, j} ∈ E.
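As a numerical sanity check, the sketch below (with arbitrary example distances, not data from the text) verifies that (3.8) solves the edge dynamics induced by (3.7), and that a convex combination of two triangle EDMs stays realizable in the plane, i.e., the combined side lengths always satisfy the triangle inequality:

```python
import numpy as np

# Arbitrary example values for one edge (assumed for illustration).
kd, d0, dstar = 1.0, 9.0, 4.0

def dij(t):
    # Closed-form solution (3.8) of the edge dynamics d'_ij = -kd (d_ij - d*_ij).
    return np.exp(-kd * t) * d0 + (1.0 - np.exp(-kd * t)) * dstar

# Finite-difference check of the ODE (3.7).
t, h = 0.7, 1e-6
assert abs((dij(t + h) - dij(t)) / h - (-kd * (dij(t) - dstar))) < 1e-4

# EDMs (squared distances) of a 3-4-5 triangle and an equilateral triangle.
D0 = np.array([[0., 9., 16.], [9., 0., 25.], [16., 25., 0.]])
Ds = np.array([[0., 4., 4.], [4., 0., 4.], [4., 4., 0.]])
for a in np.linspace(0.0, 1.0, 11):
    s = np.sqrt(a * D0 + (1.0 - a) * Ds)            # side lengths at "time" a
    sides = sorted([s[0, 1], s[0, 2], s[1, 2]])
    assert sides[2] <= sides[0] + sides[1] + 1e-12  # triangle inequality holds
```

The second loop is exactly the triangle-case content of Theorem 3.2.1: since both EDMs are planar, every convex combination remains planar.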
Then, based on (3.6) and (3.7), we obtain constraints on the control inputs of the individual agents as follows:

    u_{ij} = 2(p_i - p_j)^T (u_i - u_j) = -k_d \tilde{d}_{ij},   (3.9)

where \tilde{d}_{ij} = d_{ij} - d^*_{ij}, for all {i, j} ∈ E. Among the possible solutions of (3.9), we consider the solution of the following equations:

    (p_j - p_i)^T u_i = \frac{k_d}{4}\tilde{d}_{ij},   (3.10a)
    (p_i - p_j)^T u_j = \frac{k_d}{4}\tilde{d}_{ij}   (3.10b)
for all {i, j} ∈ E. Since each agent has exactly two neighbors in this case, the control input of each agent is subject to exactly two constraints. Thus, from (3.9) and (3.10), the constraints on the control law u_i are arranged as the following system of linear equations:

    \underbrace{\begin{bmatrix} (p_j - p_i)^T \\ (p_k - p_i)^T \end{bmatrix}}_{A_i :=} u_i = \frac{k_d}{4} \underbrace{\begin{bmatrix} \tilde{d}_{ij} \\ \tilde{d}_{ik} \end{bmatrix}}_{b_i :=}, \quad i \in V,   (3.11)
where j = i + 1 (mod 3) and k = i + 2 (mod 3). Then the control law for agent i can be derived as

    u_i = \frac{k_d}{4} A_i^{-1} b_i   (3.12)

for all i ∈ V under the assumption that A_i is nonsingular.
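As an illustration (not the author's code), the law (3.12) can be simulated with forward-Euler integration; the initial and desired realizations below are those used in Section 3.3, while the gain k_d = 1 and the step size are assumptions of this sketch:

```python
import numpy as np

kd = 1.0
p = np.array([[2.0, 2.0], [-5.0, 0.0], [5.0, 0.0]])                 # initial p^0
p_star = np.array([[0.0, np.sqrt(75.0)], [-5.0, 0.0], [5.0, 0.0]])  # desired p*

def dsq(q, a, b):
    # Squared inter-agent distance d_ab.
    return float(np.sum((q[a] - q[b]) ** 2))

def control(p):
    # u_i = (kd/4) A_i^{-1} b_i with A_i, b_i as in (3.11).
    u = np.zeros_like(p)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        A = np.vstack([p[j] - p[i], p[k] - p[i]])
        b = np.array([dsq(p, i, j) - dsq(p_star, i, j),
                      dsq(p, i, k) - dsq(p_star, i, k)])
        u[i] = (kd / 4.0) * np.linalg.solve(A, b)
    return u

dt = 0.01
for _ in range(3000):                  # forward-Euler integration
    p = p + dt * control(p)

err = max(abs(dsq(p, i, (i + 1) % 3) - dsq(p_star, i, (i + 1) % 3))
          for i in range(3))
assert err < 1e-2   # squared-distance errors have (numerically) vanished
```

Under the continuous-time law, each squared-distance error decays exactly as e^{-k_d t}, which the discrete simulation approximates.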
Since the control law (3.12) is singular if p is collinear, we need to investigate whether the control law preserves the non-collinearity of p, which is stated in the following lemma.

Lemma 3.2.1 For the three agents under the assumption of Problem 3.2.1, if (G, p^0) and (G, p^*) are infinitesimally rigid, then the control law (3.12) preserves the non-collinearity of p(t) for all t ≥ 0.
Proof: Since (G, p^0) and (G, p^*) are infinitesimally rigid, there exist positive constants \epsilon^0_{ik} and \epsilon^*_{ik} for distinct i, j and k such that

    \sqrt{d^0_{ik}} = \sqrt{d^0_{ij}} + \sqrt{d^0_{jk}} - \epsilon^0_{ik},   (3.13a)
    \sqrt{d^*_{ik}} = \sqrt{d^*_{ij}} + \sqrt{d^*_{jk}} - \epsilon^*_{ik},   (3.13b)

due to the triangle inequality. From (3.8), d_{ik}(t) can be written as

    d_{ik}(t) = \alpha(t) d^0_{ik} + (1 - \alpha(t)) d^*_{ik},  {i, k} ∈ E,   (3.14)

where \alpha(t) = e^{-k_d t}. From (3.13) and (3.14), d_{ik}(t) can be arranged as

    d_{ik}(t) = \alpha(t)\left(\sqrt{d^0_{ij}} + \sqrt{d^0_{jk}} - \epsilon^0_{ik}\right)^2 + (1 - \alpha(t))\left(\sqrt{d^*_{ij}} + \sqrt{d^*_{jk}} - \epsilon^*_{ik}\right)^2
              = d_{ij}(t) + d_{jk}(t)
                + 2\alpha(t)\sqrt{d^0_{ij} d^0_{jk}} + 2(1 - \alpha(t))\sqrt{d^*_{ij} d^*_{jk}}
                - 2\alpha(t)\left(\sqrt{d^0_{ij}} + \sqrt{d^0_{jk}}\right)\epsilon^0_{ik} - 2(1 - \alpha(t))\left(\sqrt{d^*_{ij}} + \sqrt{d^*_{jk}}\right)\epsilon^*_{ik}
                + \alpha(t)(\epsilon^0_{ik})^2 + (1 - \alpha(t))(\epsilon^*_{ik})^2.   (3.15)

By some algebra, we can obtain

    2\alpha(t)\sqrt{d^0_{ij} d^0_{jk}} + 2(1 - \alpha(t))\sqrt{d^*_{ij} d^*_{jk}} \le 2\sqrt{d_{ij}(t) d_{jk}(t)}

and

    2\alpha(t)\left(\sqrt{d^0_{ij}} + \sqrt{d^0_{jk}}\right)\epsilon^0_{ik} + 2(1 - \alpha(t))\left(\sqrt{d^*_{ij}} + \sqrt{d^*_{jk}}\right)\epsilon^*_{ik} \ge 2\left(\sqrt{d_{ij}(t)} + \sqrt{d_{jk}(t)}\right)\epsilon_{ik}(t),

where \epsilon_{ik}(t) = \alpha(t)\epsilon^0_{ik} + (1 - \alpha(t))\epsilon^*_{ik}. Thus, from (3.15), we obtain

    \sqrt{d_{ik}(t)} \le \sqrt{d_{ij}(t)} + \sqrt{d_{jk}(t)} - \epsilon_{ik}(t).

Since \epsilon_{ik}(t) ≥ min(\epsilon^0_{ik}, \epsilon^*_{ik}) for all t ≥ 0, every trajectory p(t) stays away from the set of collinear formations. Therefore, the control law (3.12) preserves the non-collinearity of p(t) for all t ≥ 0.
Then the following Theorem 3.2.2 confirms that the control law (3.12) achieves the
asymptotic stability of the desired formation.
Theorem 3.2.2 For the three agents under the assumption of Problem 3.2.1, the desired formation E_{p*} is locally asymptotically stable under the proposed control law (3.12). If (G, p^0) and (G, p^*) are infinitesimally rigid, p always converges to a realization in E_{p*} and each squared-distance error converges to zero exponentially and monotonically.
Proof: Since (G, p^0) and (G, p^*) are infinitesimally rigid, the proposed control law (3.12) is nonsingular by Lemma 3.2.1. From (3.8), \tilde{d}_{ij} converges to zero exponentially and monotonically. Therefore, the desired formation E_{p*} is asymptotically stable since a triangular formation is uniquely determined up to congruence by its inter-agent distances. Further, the exponential convergence of \tilde{d} implies the convergence of p to a point (Dörfler & Francis, 2009).
Due to this singularity, the control input (3.12) grows unbounded whenever p is arbitrarily close to the set of collinear formations. This drawback can be overcome by multiplying the control law (3.12) by |det(A_i)|, which yields

    u_i = \frac{k_d}{4} \left|\det(A_i)\right| A_i^{-1} b_i,  i ∈ V.   (3.16)
Furthermore, the control law (3.16) preserves the exponential convergence of each squared-distance error, as stated in the following corollary.
Corollary 3.2.1 For the three agents under the assumption of Problem 3.2.1, E_{p*} is locally asymptotically stable under the proposed control law (3.16). If (G, p^0) and (G, p^*) are infinitesimally rigid, p always converges to a realization in E_{p*} and each inter-agent squared-distance error converges to zero exponentially and monotonically.
Proof: First, |det(A_i(t))| = |det(A_j(t))| = S_p(t) for all i, j ∈ V, where S_p(t) is the area of the triangle formed by p(t). Since \dot{d}_{ij}(t) = -k_d |det(A_i(t))| (d_{ij}(t) - d^*_{ij}), we obtain

    d_{ij}(t) = \alpha'(t) d^0_{ij} + (1 - \alpha'(t)) d^*_{ij},  {i, j} ∈ E,

where \alpha'(t) = e^{-\int_0^t k_d |\det(A_i(\tau))| d\tau}. From Lemma 3.2.1, every trajectory p(t) stays away from the set of collinear formations, and thus there exists a positive constant S_min such that |det(A_i(t))| > S_min for all t ≥ 0. Based on the argument in the proof of Theorem 3.2.2, E_{p*} is then locally asymptotically stable. If (G, p^0) and (G, p^*) are infinitesimally rigid, each squared inter-agent distance error always converges to zero exponentially and monotonically. The exponential convergence of \tilde{d} implies the asymptotic convergence of p(t) to a point (Dörfler & Francis, 2009).
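A small numerical comparison on an assumed, nearly collinear configuration illustrates the point of the scaling in (3.16): |det(A_i)| cancels the growth of A_i^{-1}, so the input stays bounded where (3.12) blows up:

```python
import numpy as np

kd = 1.0
p_star = np.array([[0.0, np.sqrt(75.0)], [-5.0, 0.0], [5.0, 0.0]])  # desired

def inputs(p, scale_by_det):
    # scale_by_det=False gives law (3.12); True gives the det-scaled law (3.16).
    u = np.zeros_like(p)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        A = np.vstack([p[j] - p[i], p[k] - p[i]])
        b = np.array([np.sum((p[i] - p[j])**2) - np.sum((p_star[i] - p_star[j])**2),
                      np.sum((p[i] - p[k])**2) - np.sum((p_star[i] - p_star[k])**2)])
        ui = (kd / 4.0) * np.linalg.solve(A, b)
        u[i] = abs(np.linalg.det(A)) * ui if scale_by_det else ui
    return u

# Agent 1 is almost on the segment between agents 2 and 3 (assumed data).
p = np.array([[0.0, 1e-6], [-5.0, 0.0], [5.0, 0.0]])
u_plain = inputs(p, scale_by_det=False)
u_scaled = inputs(p, scale_by_det=True)
assert np.linalg.norm(u_plain) > 1e3    # (3.12) is nearly singular here
assert np.linalg.norm(u_scaled) < 1e3   # (3.16) remains bounded
```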
The stability analysis in this subsection is simpler and more direct than those in Cao et al. (2011) and Dörfler & Francis (2009). Further, the proposed control laws (3.12) and (3.16) achieve the exponential and monotonic convergence of the squared distance errors to zero.
3.2.3 N -agent case
We next consider N agents under the assumption of Problem 3.2.1. First of all, while the inter-agent distance control input (3.7) is effective for the three agents of the previous subsection, it generally raises a realizability problem in the N-agent case. Suppose that d_{ij}(t), {i, j} ∈ E, evolves in time according to (3.8). In this case, from Theorem 3.2.1, (G, d(t)) is realizable in at most four-dimensional space. That is, (G, d(t)) is generally not realizable in the plane, unlike in the three-agent case.

Figure 3.1: Projection of k_d b_i / 4 onto the column space of A_i.

Consequently, the realizability problem implies that the inter-agent distance control input (3.7) is generally not implementable in the plane for the N-agent case. That is, the constraints on the control law u_i, which are induced from (3.10), yield the following possibly over-determined system of linear equations:

    \underbrace{\begin{bmatrix} \vdots \\ (p_j - p_i)^T \\ \vdots \end{bmatrix}}_{A_i :=} u_i = \frac{k_d}{4} \underbrace{\begin{bmatrix} \vdots \\ \tilde{d}_{ij} \\ \vdots \end{bmatrix}}_{b_i :=}, \quad j \in N_i, \; i \in V.   (3.17)
According to Laman (1970), if (G, p) is rigid and N > 3, then there exist at least two agents whose control input constraints are given by over-determined systems of equations.
Since a solution u_i that exactly satisfies (3.17) does not exist in general, we derive the control law u_i by projection. That is, as depicted in Figure 3.1, we find the solution u_i such that A_i u_i becomes the projection of (k_d/4) b_i onto the column space of A_i. The control input u_i can then be obtained by solving the following least-squares problem:

    u_i = \arg\min_{u_i \in \mathbb{R}^2} \left\| A_i u_i - \frac{k_d}{4} b_i \right\|^2, \quad i \in V.   (3.18)
Since the solution of (3.18) satisfies the normal equation A_i^T A_i u_i = (k_d/4) A_i^T b_i, the control input u_i is designed as

    u_i = \frac{k_d}{4} (A_i^T A_i)^{-1} A_i^T b_i, \quad i \in V,   (3.19)

under the assumption that A_i^T A_i = \sum_{j \in N_i} (p_j - p_i)(p_j - p_i)^T is nonsingular.
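The projection-based law (3.19) can be sketched as below; the hexagonal desired formation, the edge set (a ring plus chords chosen so that the framework is infinitesimally rigid), the gain, and the perturbation are all assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
kd, N = 1.0, 6
ang = 2.0 * np.pi * np.arange(N) / N
p_star = 5.0 * np.column_stack([np.cos(ang), np.sin(ang)])   # regular hexagon
edges = [(i, (i + 1) % N) for i in range(N)] + [(0, 2), (0, 3), (1, 3), (2, 4), (3, 5)]
nbrs = {i: [] for i in range(N)}
for a, b in edges:
    nbrs[a].append(b); nbrs[b].append(a)

def control(p):
    # u_i = (kd/4)(A_i^T A_i)^{-1} A_i^T b_i, via a least-squares solve of (3.17).
    u = np.zeros_like(p)
    for i in range(N):
        A = np.vstack([p[j] - p[i] for j in nbrs[i]])
        b = np.array([np.sum((p[i] - p[j])**2) - np.sum((p_star[i] - p_star[j])**2)
                      for j in nbrs[i]])
        u[i] = np.linalg.lstsq(A, (kd / 4.0) * b, rcond=None)[0]
    return u

p = p_star + 0.3 * rng.standard_normal(p_star.shape)   # perturbed start
dt = 0.01
for _ in range(5000):
    p = p + dt * control(p)

errs = [abs(np.sum((p[a] - p[b])**2) - np.sum((p_star[a] - p_star[b])**2))
        for a, b in edges]
assert max(errs) < 0.05   # all squared-distance errors have converged
```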
For single-integrator modeled agents, the gradient law employed in Cao et al. (2011); Dörfler & Francis (2009, 2010); Krick et al. (2009) is as follows:

    u_g = -\nabla\phi(p) = -k_g \left(R_{g_G}(p)\right)^T \tilde{d},   (3.20)

where \tilde{d} = d - d^*, d = g_G(p), d^* = g_G(p^*), and

    \phi(p) := \frac{k_g}{4} \sum_{\{i,j\} \in E} \left(\|p_i - p_j\|^2 - d^*_{ij}\right)^2.   (3.21)

Although the control law (3.19) has been designed by a new design strategy, it is related to the gradient law (3.20) by multiplication with a matrix. To show this, let us define

    \Phi := \mathrm{diag}\left(\ldots, (A_i^T A_i)^{-1}, \ldots\right).

Then we obtain

    u = \frac{k_d}{4 k_g} \Phi u_g.   (3.22)

Further, the control law (3.12) can also be written in this form.
To analyze the stability of E_{p*} under the proposed control law (3.19), we first investigate the properties of A_i^T A_i, which is arranged as

    A_i^T A_i = \sum_{j \in N_i} \begin{bmatrix} (x_j - x_i)^2 & (x_j - x_i)(y_j - y_i) \\ (x_j - x_i)(y_j - y_i) & (y_j - y_i)^2 \end{bmatrix}, \quad i \in V.
The following lemma shows that A_i^T A_i and its inverse matrix are positive definite.

Lemma 3.2.2 For the N agents under the assumption of Problem 3.2.1, if (G, p) is infinitesimally rigid, then A_i^T A_i and its inverse matrix are positive definite for all i ∈ V.
Proof: Due to the infinitesimal rigidity of (G, p), the first leading principal minor of A_i^T A_i is positive: \sum_{j \in N_i} (x_j - x_i)^2 > 0 for all i ∈ V. Since N ≥ 3 and agent i has at least two neighboring agents due to the rigidity of (G, p), the second leading principal minor of A_i^T A_i is also positive by the Cauchy-Schwarz inequality:

    \sum_{j \in N_i} (x_j - x_i)^2 \sum_{j \in N_i} (y_j - y_i)^2 - \left( \sum_{j \in N_i} (x_j - x_i)(y_j - y_i) \right)^2 > 0, \quad i \in V.

Indeed, this minor is zero if and only if [\cdots x_j - x_i \cdots]^T and [\cdots y_j - y_i \cdots]^T, j ∈ N_i, are linearly dependent, which would imply that p_i and p_j, j ∈ N_i, are collinear. It then follows from Sylvester's criterion that A_i^T A_i is positive definite, and thus (A_i^T A_i)^{-1} is positive definite as well.
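A quick numerical check of Lemma 3.2.2 with assumed example points: the matrix A_i^T A_i = \sum_{j}(p_j - p_i)(p_j - p_i)^T is positive definite exactly when the relative positions of the neighbors are not all collinear:

```python
import numpy as np

def gram(pi, neighbors):
    # A_i^T A_i = sum over neighbors of (p_j - p_i)(p_j - p_i)^T.
    M = np.zeros((2, 2))
    for pj in neighbors:
        r = np.asarray(pj, float) - np.asarray(pi, float)
        M += np.outer(r, r)
    return M

M_good = gram([0, 0], [[1, 0], [0, 1]])   # non-collinear neighbors
M_bad = gram([0, 0], [[1, 1], [2, 2]])    # collinear neighbors
assert np.all(np.linalg.eigvalsh(M_good) > 0)   # positive definite
assert abs(np.linalg.det(M_bad)) < 1e-12        # singular when collinear
```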
The following result, known as Łojasiewicz's inequality (Absil & Kurdyka, 2006; Łojasiewicz, 1970), is useful for investigating the properties of the desired formations.

Theorem 3.2.3 (Łojasiewicz's inequality) Let f be a real analytic function on a neighborhood of z in R^n. Then there exist constants k > 0 and ρ ∈ [0, 1) such that

    \|\nabla f(x)\| \ge k \|f(x) - f(z)\|^{\rho}   (3.23)

in some neighborhood of z.
While E_{p*} is not compact, the desired formation parameterized in the link space is compact (Dörfler & Francis, 2009). To exploit this compactness, we analyze the stability of the desired formation in the link space. Defining e = [\cdots e_{ij}^T \cdots]^T \in \mathbb{R}^{|E|}, where e_{ij} = p_i - p_j for all {i, j} ∈ E with i < j, we obtain the desired formation parameterized by e as follows:

    E_{e*} := \{e : \|e_{ij}\|^2 = d^*_{ij}, \; \{i, j\} \in E, \; i < j\}.

The dynamics of the formation can be expressed in the link space by \dot{e} = (H \otimes I_2) u, where H is the incidence matrix and I_2 is the 2 × 2 identity matrix; the definition of H can be found in Dörfler & Francis (2009). Slightly abusing notation, we use R_{g_G}(e) to denote R_{g_G}(p). A potential function V(e) can be defined in the link space as follows:

    V(e) := \frac{k_d}{4} \sum_{\{i,j\} \in E} \left(\|e_{ij}\|^2 - d^*_{ij}\right)^2.   (3.24)
The following lemma, based on Theorem 3.2.3, establishes the existence of a level set of V(e), which plays a crucial role in the stability analysis of E_{e*}.

Lemma 3.2.3 For the N agents under the assumption of Problem 3.2.1, if (G, p^*) is infinitesimally rigid, then there exists a level set Ω_c = {e : V(e) ≤ c} such that Φ is positive definite for any e ∈ Ω_c, and (R_{g_G}(e))^T \tilde{d} ≠ 0 for any e ∈ Ω_c with e ∉ E_{e*}.
Proof: First, due to the infinitesimal rigidity of (G, p^*), if p is sufficiently close to E_{p*}, then (G, p) is infinitesimally rigid, which, together with Lemma 3.2.2, implies that Φ is positive definite. Thus there exists a positive constant ρ_max such that if ρ_max ≥ ρ > 0, then Φ is positive definite for any e ∈ Ω_ρ = {e : V(e) ≤ ρ}.

Second, since φ(p), defined in (3.21), is a real analytic function in some neighborhood of any p̄ ∈ E_{p*}, it follows from Theorem 3.2.3 that there exist a neighborhood U_{p̄} of p̄ and constants k_{p̄} > 0 and ρ_{p̄} ∈ [0, 1) such that

    \|\nabla\phi(p)\| = \left\|-k_g \left(R_{g_G}(p)\right)^T \tilde{d}\right\| \ge k_{\bar p} \|\phi(p) - \phi(\bar p)\|^{\rho_{\bar p}}

for all p ∈ U_{p̄}. Since φ(p) = 0 only if p ∈ E_{p*},

    k_g \left\|\left(R_{g_G}(p)\right)^T \tilde{d}\right\| \ge k_{\bar p} \|\phi(p)\|^{\rho_{\bar p}} > 0   (3.25)

for all p ∈ U_{p̄} with p ∉ E_{p*}. Then, for any ē ∈ E_{e*}, we can take a neighborhood U_{ē} of ē such that

    \left\|\left(R_{g_G}(e)\right)^T \tilde{d}\right\| > 0   (3.26)

for all e ∈ U_{ē} with e ∉ E_{e*}.

Third, due to the compactness of E_{e*}, there exists a finite open cover U_{E_{e*}} = \bigcup_{k=1}^{n_e} U_{\bar e_k} such that (3.26) holds for all e ∈ U_{E_{e*}} with e ∉ E_{e*}. That is, for any k ∈ {1, . . . , n_e}, if e ∈ U_{ē_k} and e ∉ E_{e*}, then (3.26) holds. Taking U_{E_{e*}} and c such that Ω_c ⊆ U_{E_{e*}} and c ≤ ρ_max ensures that Φ is positive definite for any e ∈ Ω_c, and (R_{g_G}(e))^T \tilde{d} ≠ 0 for any e ∈ Ω_c with e ∉ E_{e*}.
Based on Lemma 3.2.3, the following theorem states the main stability result for N-agent groups.

Theorem 3.2.4 For the N agents under the assumption of Problem 3.2.1, if (G, p^*) is infinitesimally rigid, then the control law (3.19) achieves the local asymptotic stability of E_{p*}.
Proof: Take V(e), defined in (3.24), as a Lyapunov function candidate. The time derivative of V(e) is arranged as

    \dot{V}(e) = -k_d \tilde{d}^T R_{g_G}(e) \Phi \left(R_{g_G}(e)\right)^T \tilde{d}.

From Lemma 3.2.3, there exists a level set Ω_c such that Φ is positive definite for any e ∈ Ω_c, and (R_{g_G}(e))^T \tilde{d} ≠ 0 for any e ∈ Ω_c with e ∉ E_{e*}. Since \dot{V}(e) is negative definite in Ω_c, it follows that E_{e*} is locally asymptotically stable, which in turn implies the local asymptotic stability of E_{p*}.
Although Theorem 3.2.4 confirms the local asymptotic stability of E_{p*}, it does not imply the convergence of p to a point in E_{p*}. While this convergence property is ensured by the control law (3.19) if (G, p^*) is minimally infinitesimally rigid, which can be verified based on the result in Dörfler & Francis (2009), it is not obvious whether it holds for general infinitesimally rigid formations. To show the convergence property, we need the following lemma, which is implied by Theorem 4 in Krick et al. (2009).

Lemma 3.2.4 Given an N-agent group, if (G, p^*) is infinitesimally rigid, then the gradient law (3.20) achieves the asymptotic convergence of p to a point in E_{p*}.
Then the convergence of p to a point in E_{p*} under the control law (3.19) can be shown as follows:
Theorem 3.2.5 Given an N-agent group, if (G, p^*) is infinitesimally rigid, then the control law (3.19) achieves the asymptotic convergence of p to a point in E_{p*}.

Proof: From Lemma 3.2.3, there exists a level set Ω_c such that Φ is positive definite for any e ∈ Ω_c, and (R_{g_G}(e))^T \tilde{d} ≠ 0 for any e ∈ Ω_c with e ∉ E_{e*}. It follows from Lemma 3.2.4 that the gradient input u_g(t) is Lebesgue integrable if e(0) ∈ Ω_c, i.e., u_g belongs to the L¹ space: \int_0^\infty \|u_g(t)\| dt < ∞ (Rudin, 1976). Since Φ is positive definite in Ω_c, there exists a constant M_R such that \|\Phi\|_1 ≤ M_R, where \|\cdot\|_1 denotes the induced 1-norm of a matrix. It follows that u(t) = (k_d/4k_g) \Phi u_g(t) also belongs to the L¹ space. Thus p asymptotically converges to a point in E_{p*}.
Based on Theorems 3.2.4 and 3.2.5, the relationship between the gradient law and the proposed control law (3.19) allows us to draw a more general result as follows:

Corollary 3.2.2 Given an N-agent group, if (G, p^*) is infinitesimally rigid, then the control law

    u = -P_{2N} \left(R_{g_G}(p)\right)^T \tilde{d},

where P_{2N} ∈ R^{2N×2N} is positive definite in Ω_c, achieves the local asymptotic stability of E_{p*} and the asymptotic convergence of p to a point in E_{p*}.
Remark 3.2.1 The contributions of the results developed in this subsection are summarized as follows:

• Based on an alternative design approach using the Euclidean distance matrix concept, a novel stability analysis method has been developed. While the result in Dörfler & Francis (2009) is valid for minimally rigid formations, Theorems 3.2.4 and 3.2.5 and Corollary 3.2.2 extend the existing result to general infinitesimally rigid formations.

• According to Corollary 3.2.2, local asymptotic stability is achieved even when the controller gains of the agents are not identical. This may be beneficial when assigning different control efforts to individual agents to account for their different actuation capabilities.

• As an extension of the existing results (Cao et al., 2011; Dörfler & Francis, 2009; Krick et al., 2009), it has been shown that any control law related to the gradient law by multiplication with a positive definite matrix ensures the local asymptotic stability of infinitesimally rigid formations and the asymptotic convergence of p to a point.
3.3 Simulation results

For a three-agent group, the initial and desired formations p^0 and p^* are given by ((2, 2), (−5, 0), (5, 0)) and ((0, √75), (−5, 0), (5, 0)), respectively. Figures 3.2a and 3.2b depict the formation trajectory and the squared inter-agent distance errors of the group. As expected, the squared inter-agent distance errors monotonically converge to zero, as depicted in Figure 3.2b.

Figure 3.3a shows the interaction graph of a ten-agent group. It is assumed that p^* = 10 × ((cos(2π/10), sin(2π/10)), . . . , (cos(10 · 2π/10), sin(10 · 2π/10))) and that p^0 is perturbed from p^* by a random variable uniformly distributed on [−2.5, 2.5]. As depicted in Figure 3.3b, the formation trajectory converges to the desired formation.
Figure 3.2: Simulation result of the three agents under (3.19). (a) The formation trajectory of the three agents (initial and desired formations marked; axes x [m], y [m]). (b) The squared inter-agent distance errors d_{ij} − d^*_{ij} [m²] versus time [s].
Figure 3.3: The interaction graph and the formation trajectory of the ten agents under (3.19). (a) The interaction graph of the ten-agent group. (b) The formation trajectory of the ten agents (initial and desired formations marked; axes x [m], y [m]).
3.4 Conclusion

We proposed a new strategy for formation controller design. The proposed control law achieves the local asymptotic stability of general infinitesimally rigid formations and the global asymptotic stability of triangular infinitesimally rigid formations. In the case of triangular infinitesimally rigid formations, each squared distance error converges to zero exponentially and monotonically. Further, we extended the existing Lyapunov-based stability analysis to general infinitesimally rigid formations and showed that any control law related to the gradient law by multiplication with a positive definite matrix achieves local asymptotic stability of infinitesimally rigid formations.

While we focused on the control of undirected formations, it may be shown that the proposed control law achieves the local asymptotic stability of infinitesimally rigid formations constructed by Henneberg insertion sequences, based on the result in Krick et al. (2009).
Chapter 4
Distance-based formation under the gradient control law
In this chapter, we study the stability of undirected formations of single- and double-integrator modeled agents based on inter-agent distance measurements. It is known that undirected formations of single-integrator modeled agents in two-dimensional space are locally asymptotically stable under a gradient control law. We present another proof of the local asymptotic stability of undirected formations of single-integrator modeled agents in n-dimensional space under general gradient control laws. Based on the topological equivalence of a dissipative Hamiltonian system to a gradient system, we also prove the local asymptotic stability of undirected formations of double-integrator modeled agents in n-dimensional space under the generalized gradient control laws. Simulation results support the validity of the stability analysis.
4.1 Introduction

In the literature, gradient control laws have mainly been employed for formation stabilization under the distance-based problem setup. Local asymptotic stability of undirected formations of single-integrator modeled agents has been addressed in Dörfler & Francis (2009); Krick et al. (2009); Oh & Ahn (2011d). Stability of acyclic persistent formations has been studied in Krick et al. (2009); Oh & Ahn (2011d). It has been shown that triangular formations are globally asymptotically stable (Cao et al., 2007, 2011; Dörfler & Francis, 2010; Oh & Ahn, 2011d). Global asymptotic stability of tree formations has been proved in Dimarogonas & Johansson (2010).

However, the existing results have mainly focused on formations of single-integrator modeled agents in two-dimensional space. In this chapter, we thus seek to extend the existing results to general n-dimensional space and to double-integrator modeled agents. Accordingly, the contributions of this chapter are as follows. First, we show that undirected formations of single-integrator modeled agents in n-dimensional space are locally asymptotically stable under general gradient control laws. This is an extension of the results in Dörfler & Francis (2009); Krick et al. (2009); Oh & Ahn (2011d). Second, we prove the local asymptotic stability of undirected formations of double-integrator modeled agents in n-dimensional space under the general gradient control laws, motivated by the result in Dörfler & Bullo (2011). The formation dynamics of double-integrator modeled agents under these control laws is given as a dissipative Hamiltonian system. Based on the topological equivalence of the dissipative Hamiltonian system to a gradient system, which has been revealed in Dörfler & Bullo (2011), we show the local asymptotic stability of the undirected formations. Though Olfati-Saber & Murray (2002) have addressed the stability of undirected formations of double-integrator modeled agents by means of the LaSalle invariance principle, as mentioned in Krick et al. (2009), it is not certain whether the invariance principle is applicable to a non-compact set.
4.2 Undirected formations of single-integrators

4.2.1 Problem statement

Consider the following N single-integrator modeled agents in n-dimensional space:

    \dot{p}_i = u_i,  i = 1, . . . , N,   (4.1)

where p_i ∈ R^n and u_i ∈ R^n denote the position and the control input, respectively, of agent i. We assume that the agents do not necessarily share a common sense of orientation. Due to the absence of a common sense of orientation, the agents maintain their own local reference frames, whose orientations are not aligned with each other.

The sensing topology among the agents is modeled by an undirected graph G = (V, E), which we refer to as the sensing graph of the agents. We then assume that each agent measures the relative positions of its neighboring agents. Thus the following measurements are available to agent i:

    p_{ji} := p_j - p_i,  ∀j ∈ N_i,   (4.2)

for all i ∈ V.

For a given realization p^* = [p_1^{*T} \cdots p_N^{*T}]^T ∈ R^{nN}, we define the desired formation E_{p*} of the agents as the set of formations that are congruent to p^*:

    E_{p*} := \{p \in \mathbb{R}^{nN} : \|p_j - p_i\| = \|p^*_j - p^*_i\|, \; \forall i, j \in V\}.   (4.3)

Then the formation control problem for the single-integrator modeled agents is stated as follows:

Problem 4.2.1 For N single-integrator modeled agents (4.1) in n-dimensional space, suppose that the sensing graph of the agents is given by an undirected graph G = (V, E). Given a realization p^* ∈ R^{nN}, design a decentralized control law based on the measurements (4.2) such that E_{p*} is asymptotically stable under the decentralized control law.
4.2.2 Gradient control law

Consider the single-integrator modeled agents under the assumptions of Problem 4.2.1. For each agent i, let us define a local potential φ_i as follows:

    \phi_i(p_i, \ldots, p_j, \ldots) := \frac{k_p}{2} \sum_{j \in N_i} \gamma\left(\|p_j - p_i\|^2 - \|p^*_j - p^*_i\|^2\right),   (4.4)

where k_p > 0 and γ : R → R̄₊ is a function that satisfies the following assumption:

Assumption 4.2.1 The function γ : R → R̄₊ satisfies the following conditions:

• Positive definiteness: γ(x) ≥ 0 for any x ∈ R, and γ(x) = 0 if and only if x = 0;
• Analyticity: γ is analytic in a neighborhood of 0.

In the literature, the following function, which satisfies the conditions in Assumption 4.2.1, has been popularly adopted as the local potential function (Baillieul & Suri, 2003; Cao et al., 2007, 2011; Dörfler & Francis, 2009; Krick et al., 2009):

    \phi_i = \frac{k_p}{2} \sum_{j \in N_i} \left(\|p_i - p_j\|^2 - \|p^*_j - p^*_i\|^2\right)^2.

For brevity, we often omit the arguments of the function φ_i if there is no ambiguity.
Based on the local potential function φ_i, a control law for agent i can be designed as

    u_i = -\left(\nabla_{p_i} \phi_i\right)^T \equiv -\left(\frac{\partial \phi_i}{\partial p_i}\right)^T
        = -\frac{k_p}{2} \sum_{j \in N_i} \left(\frac{\partial \gamma(\tilde{d}_{ji})}{\partial \tilde{d}_{ji}} \frac{\partial \tilde{d}_{ji}}{\partial p_i}\right)^T
        = k_p \sum_{j \in N_i} \frac{\partial \gamma(\tilde{d}_{ji})}{\partial \tilde{d}_{ji}} (p_j - p_i),   (4.5)

where \tilde{d}_{ij} = \|p_j - p_i\|^2 - \|p^*_j - p^*_i\|^2. It is worth noting that the gradient control law (4.5) can be implemented in the local reference frames of the agents by using the measurements (4.2).
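For concreteness, the sketch below implements (4.5) with the assumed choice γ(x) = x²/2 (so ∂γ/∂x̃ = x̃); the tetrahedral desired formation in R³, the complete sensing graph, and the gains are illustrative assumptions, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
kp = 0.1
# Regular tetrahedron with unit edge length (assumed desired realization).
p_star = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.5, np.sqrt(3.0) / 2.0, 0.0],
                   [0.5, np.sqrt(3.0) / 6.0, np.sqrt(2.0 / 3.0)]])
N = len(p_star)

def u(p):
    # u_i = kp * sum_j gamma'(d~_ji) (p_j - p_i), with gamma'(x) = x.
    out = np.zeros_like(p)
    for i in range(N):
        for j in range(N):
            if i != j:
                dtil = np.sum((p[j] - p[i])**2) - np.sum((p_star[j] - p_star[i])**2)
                out[i] += kp * dtil * (p[j] - p[i])
    return out

p = p_star + 0.1 * rng.standard_normal(p_star.shape)   # perturbed start
dt = 0.01
for _ in range(4000):                                  # forward-Euler integration
    p = p + dt * u(p)

errs = [abs(np.linalg.norm(p[i] - p[j]) - np.linalg.norm(p_star[i] - p_star[j]))
        for i in range(N) for j in range(i + 1, N)]
assert max(errs) < 1e-3   # all inter-agent distances recovered
```

Note that each term of u_i depends only on the relative position p_j − p_i, which is why (4.5) is implementable in each agent's local frame.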
4.2.3 Stability analysis

Let us define a global potential function φ for the agents as

    \phi(p) := \frac{k_p}{2} \sum_{(i,j) \in E_+} \gamma\left(\|p_j - p_i\|^2 - \|p^*_j - p^*_i\|^2\right).   (4.6)

Let e = [e_1^T \cdots e_M^T]^T := (H_+ \otimes I_n) p and e^* = [e_1^{*T} \cdots e_M^{*T}]^T := (H_+ \otimes I_n) p^*. Then the control law (4.5) can be arranged in vector form as the gradient of the potential function φ:

    u = -\nabla\phi(p) = -k_p (H_+ \otimes I_n)^T D(e) \Gamma(\tilde{d}),   (4.7)

where u = [u_1^T \cdots u_N^T]^T, \tilde{d} = [\|e_1\|^2 - \|e_1^*\|^2 \; \cdots \; \|e_M\|^2 - \|e_M^*\|^2]^T, and

    \Gamma(\tilde{d}) := \left[\frac{\partial \gamma(\tilde{d}_1)}{\partial \tilde{d}_1} \; \cdots \; \frac{\partial \gamma(\tilde{d}_M)}{\partial \tilde{d}_M}\right]^T.

Thus the formation dynamics of the agents can be described as

    \dot{p} = -\nabla\phi(p),   (4.8)

which is a gradient system.
We define the set E′_{p*} of realizations that are equivalent to (G, p^*) as follows:

    E'_{p*} := \{p \in \mathbb{R}^{nN} : \|p_j - p_i\| = \|p^*_j - p^*_i\|, \; \forall (i, j) \in E_+\}.   (4.9)

Obviously, E_{p*} ⊆ E′_{p*}. By the definition of global rigidity, E_{p*} = E′_{p*} if (G, p^*) is globally rigid. Further, if (G, p^*) is rigid, there exists a neighborhood U_{p*} of p^* such that E_{p*} ∩ U_{p*} = E′_{p*} ∩ U_{p*}. In the following, we show the local asymptotic stability of E′_{p*}; the local asymptotic stability of E_{p*} then follows from the rigidity of (G, p^*).

Meanwhile, the set E′_{p*} is invariant under translations and rotations of realizations, and thus it is not compact, which complicates the stability analysis. To avoid this complication, we define the link e ∈ R^{nM} of (G, p) as

    e = [e_1^T \cdots e_M^T]^T := (H_+ \otimes I_n) p,   (4.10)

which has been introduced in Dörfler & Francis (2009). From the definition (4.10), e belongs to the column space of H_+ ⊗ I_n, i.e., e ∈ Im(H_+ ⊗ I_n). The space Im(H_+ ⊗ I_n) is referred to as the link space associated with the framework (G, p). We then define a function v_G : Im(H_+ ⊗ I_n) → R^M as

    v_G(e) := \frac{1}{2}\left[\|e_1\|^2 \; \cdots \; \|e_M\|^2\right]^T,

which corresponds to the edge function g_G parameterized in the link space. That is, g_G(p) ≡ v_G((H_+ ⊗ I_n) p). Defining

    D(e) := \mathrm{diag}(e_1, \ldots, e_M),

we obtain

    \frac{\partial g_G(p)}{\partial p} = \frac{\partial v_G(e)}{\partial e} \frac{\partial e}{\partial p} = [D(e)]^T (H_+ \otimes I_n).

Thus the gradient system (4.8) can be described in the link space as follows:

    \dot{e} = (H_+ \otimes I_n)\dot{p} = -k_p (H_+ \otimes I_n)(H_+ \otimes I_n)^T D(e) \Gamma(\tilde{d}).   (4.11)

Further, the set E′_{p*} can be parameterized in the link space as follows:

    E'_{e*} := \{e \in \mathrm{Im}(H_+ \otimes I_n) : \|e_i\| = \|e_i^*\|, \; \forall i = 1, \ldots, M\}.

As remarked in Dörfler & Francis (2009), E′_{e*} is compact, whereas E′_{p*} is not. We exploit this compactness in the proof of Theorem 4.2.2.
Defining V : Im(H_+ ⊗ I_n) → R̄₊ as

    V(e) := \sum_{i=1}^{M} \frac{1}{2} \gamma\left(\|e_i\|^2 - \|e_i^*\|^2\right),

we take V as a Lyapunov function candidate for the link dynamics (4.11). The time derivative of V can be arranged as

    \dot{V}(e) = \frac{\partial V(e)}{\partial e}\dot{e}
               = -k_p [\nabla V(e)]^T (H_+ \otimes I_n)(H_+ \otimes I_n)^T D(e)\Gamma(\tilde{d})
               = -k_p \left\|(H_+ \otimes I_n)^T D(e)\Gamma(\tilde{d})\right\|^2
               = -\frac{1}{k_p}\|\nabla\phi(p)\|^2 \le 0,
which shows the local stability of E′_{e*}. The local asymptotic stability of E′_{e*} can then be ensured by showing the existence of a neighborhood U_{E′_{e*}} of E′_{e*} such that \dot{V}(e) < 0 for any e ∈ U_{E′_{e*}} with e ∉ E′_{e*}.

The following inequality, known as the Łojasiewicz inequality, is useful for the stability analysis of gradient systems.
Theorem 4.2.1 (Łojasiewicz, 1970) Suppose that f is a real analytic function in a neighborhood of a in R^{n_f}. Then there exist constants k_f > 0 and ρ_f ∈ [0, 1) such that

    \|\nabla f(x)\| \ge k_f \|f(x) - f(a)\|^{\rho_f}

in some neighborhood of a.
Based on Theorem 4.2.1, we obtain the following lemma:

Lemma 4.2.1 For any p̄ ∈ E′_{p*}, there exists a neighborhood U_{p̄} of p̄ such that \|\nabla\phi(\xi)\| > 0 for all ξ ∈ U_{p̄} with ξ ∉ E′_{p*}.
Proof: Since γ is analytic in some neighborhood of 0, for any p̄ ∈ E′_{p*} there exists a neighborhood of p̄ on which φ is analytic. Thus it follows from Theorem 4.2.1 that there exist k_φ > 0, ρ_φ ∈ [0, 1), and a neighborhood U_{p̄} such that

    \|\nabla\phi(\xi)\| \ge k_\phi \|\phi(\xi) - \phi(\bar p)\|^{\rho_\phi} = k_\phi \|\phi(\xi)\|^{\rho_\phi}

for all ξ ∈ U_{p̄}. Further, φ(ξ) = 0 if and only if ξ ∈ E′_{p*} by the positive definiteness of γ. Thus, for any ξ ∈ U_{p̄} with ξ ∉ E′_{p*}, \|\nabla\phi(\xi)\| > 0.
The local asymptotic stability of E′_{p*} is then ensured based on Lemma 4.2.1 as follows:

Theorem 4.2.2 For the single-integrator modeled agents under the assumptions of Problem 4.2.1, E′_{p*} is locally asymptotically stable under the gradient control law (4.7).
Proof: Let e ∈ Im(H_+ ⊗ I_n). Define f_e : E′_{e*} → R̄₊ as f_e(η) := \|e - η\|. Since E′_{e*} is compact and f_e is continuous (Rudin, 1976), there exists ē ∈ E′_{e*} such that

    \inf_{\eta \in E'_{e*}} \|e - \eta\| = \|e - \bar e\|.

Then there exists p ∈ R^{nN} such that p ∈ Im(H_+^T ⊗ I_n) and (H_+ ⊗ I_n)p = e. Similarly, there exists p̄ ∈ E′_{p*} such that p̄ ∈ Im(H_+^T ⊗ I_n) and (H_+ ⊗ I_n)p̄ = ē. Since p and p̄ belong to the row space of H_+ ⊗ I_n, we obtain

    \sigma_{\min}(H_+ \otimes I_n)\|p - \bar p\| \le \|e - \bar e\| = \|(H_+ \otimes I_n)(p - \bar p)\| \le \sigma_{\max}(H_+ \otimes I_n)\|p - \bar p\|,

where σ_min(H_+ ⊗ I_n) denotes the minimum non-zero singular value and σ_max(H_+ ⊗ I_n) the maximum singular value of H_+ ⊗ I_n.

It follows from Lemma 4.2.1 that, for any p̄ ∈ E′_{p*}, there exists a neighborhood U_{p̄} of p̄ such that \|\nabla\phi(\xi)\| > 0 for all ξ ∈ U_{p̄} with ξ ∉ E′_{p*}. We denote the neighborhood by

    U_{\bar p} = \{p \in \mathbb{R}^{nN} : \|p - \bar p\| < \epsilon^*_p\}.   (4.12)

Define

    U_{E'_{e*}}(\epsilon_e) := \{e \in \mathrm{Im}(H_+ \otimes I_n) : \inf_{\eta \in E'_{e*}} \|e - \eta\| < \epsilon_e\}.

Let \epsilon^*_e := \sigma_{\min}(H_+ \otimes I_n)\epsilon^*_p. Then, for any e ∈ U_{E′_{e*}}(ε^*_e), there exists p ∈ R^{nN} such that (H_+ ⊗ I_n)p = e and

    \left\|(H_+ \otimes I_n)^T \nabla V(e)\right\| = \frac{1}{k_p}\|\nabla\phi(p)\| > 0.

We then define

    \Omega(c) := \{e \in \mathrm{Im}(H_+ \otimes I_n) : V(e) \le c\}.

Then, for a sufficiently small c^* > 0, we have Ω(c^*) ⊂ U_{E′_{e*}}(ε^*_e). This implies that, for any e ∈ Ω(c^*) with e ∉ E′_{e*}, there exists p ∈ R^{nN} such that

    \dot{V}(e) = -\frac{1}{k_p}\|\nabla\phi(p)\|^2 < 0.

Since the time derivative of V is negative definite in the level set Ω(c^*), E′_{e*} is locally asymptotically stable. Thus E′_{p*} is locally asymptotically stable.
The local asymptotic stability of E_{p*} is then ensured if (G, p^*) is rigid.

Theorem 4.2.3 For the single-integrator modeled agents under the assumptions of Problem 4.2.1, if (G, p^*) is rigid, then E_{p*} is locally asymptotically stable under the gradient control law (4.7).

Proof: From Theorem 4.2.2, E′_{p*} is locally asymptotically stable. Since (G, p^*) is rigid, it follows from the definition of graph rigidity that, for any p̄ ∈ E_{p*}, there exists a neighborhood U_{p̄} of p̄ such that E_{p*} ∩ U_{p̄} = E′_{p*} ∩ U_{p̄}. This implies that E_{p*} is locally asymptotically stable.
While Theorem 4.2.3 confirms the local asymptotic stability of E_{p*} when (G, p^*) is rigid, it does not ensure the convergence of p to a finite realization in E_{p*}. The convergence property is ensured by the fact that the centroid of an undirected formation is stationary under the gradient control law (4.7) (Krick et al., 2009). That is, since E_{p*} is locally asymptotically stable and the centroid of p is stationary, p converges to a finite realization in E_{p*}.
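The stationarity of the centroid is easy to verify numerically: in the stacked control vector, each edge contributes a pair of terms that are exact negatives of each other, so the inputs sum to zero (a complete graph and the assumed choice γ(x) = x²/2 are used in this check):

```python
import numpy as np

rng = np.random.default_rng(2)
kp = 1.0
p = rng.standard_normal((5, 2))        # arbitrary positions (assumed data)
p_star = rng.standard_normal((5, 2))   # arbitrary desired realization

u = np.zeros_like(p)
for i in range(5):
    for j in range(5):
        if i != j:
            dtil = np.sum((p[j] - p[i])**2) - np.sum((p_star[j] - p_star[i])**2)
            u[i] += kp * dtil * (p[j] - p[i])   # gradient-law term of (4.5)

# Pairwise terms cancel, so the centroid velocity (1/N) sum_i u_i is zero.
assert np.linalg.norm(u.sum(axis=0)) < 1e-10
```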
The stability analysis approach is based on the fact that the control law (4.5) can be described in vector form as the gradient of the potential function φ, i.e., u = [(-\nabla_{p_1}\phi_1)^T \cdots (-\nabla_{p_N}\phi_N)^T]^T = -\nabla\phi. Note that only undirected formations have this property. Thus, in general, the approach in this section is not applicable to directed formations or, more specifically, to persistent formations, which have been studied in Hendrickx et al. (2007).
4.3 Undirected formations of double-integrators

4.3.1 Problem statement
We consider the following N double-integrator modeled agents in n-dimensional space:

ṗi = vi , v̇i = ui , i = 1, . . . , N,  (4.13)
where pi ∈ Rn , vi ∈ Rn , and ui ∈ Rn denote the position, the velocity, and the control input,
respectively, of agent i. We assume that each agent measures its own velocity and the relative positions of its neighbors. Further, given a realization p∗ = [p∗T1 · · · p∗TN ]T ∈ RnN , we define the desired formation Ep∗ ,v∗ of the agents as

Ep∗ ,v∗ := {[pT v T ]T ∈ R2nN : kpj − pi k = kp∗j − p∗i k, v = 0, ∀i, j ∈ V}.
Then the formation control problem for the double-integrator modeled agents is stated as
follows:
Problem 4.3.1 For N double-integrator modeled agents (4.13) in n-dimensional space, suppose that the sensing graph of the agents is given by an undirected graph G = (V, E). Given a realization p∗ ∈ RnN , design a decentralized control law based on the velocities of the agents and the relative position measurements such that Ep∗ ,v∗ is asymptotically stable under the decentralized control law.
4.3.2 Gradient-like law and stability analysis
Consider the double-integrator modeled agents under the assumptions of Problem 4.3.1.
Since each agent measures its own velocity and the relative positions of its neighboring
agents, we can design a formation control law for the agents as follows:
u = −kv v − kp (H+ ⊗ In )T D(e)Γ(d̃).  (4.14)
Then the overall dynamics of the agents can be described as a dissipative Hamiltonian system:

ṗ = v = ∇v ψ,  (4.15a)
v̇ = −kv v − kp (H+ ⊗ In )T D(e)Γ(d̃) = −kv ∇v ψ − ∇p ψ,  (4.15b)
where kp > 0, kv > 0, and

ψ(p, v) := (1/2) Σi∈V kvi k2 + (kp /2) Σ(i,j)∈E+ γ(kpj − pi k2 − d∗ij 2 ).
We now consider the following one-parameter family Hλ of dynamical systems, which combines the dissipative Hamiltonian system (4.15) and a gradient system by means of a convex combination:

[ṗT v̇ T ]T = ( (1 − λ) [ 0, InN ; −InN , 0 ] − [ λInN , 0 ; 0, kv InN ] ) [∇p ψ T ∇v ψ T ]T = −Wλ [∇p ψ T ∇v ψ T ]T ,  (4.16)

where λ ∈ [0, 1] and Wλ := [ λInN , −(1 − λ)InN ; (1 − λ)InN , kv InN ]. The parameterized system (4.16) continuously interpolates between the Hamiltonian system (4.15) and a gradient system. When λ = 0, (4.16) reduces to the Hamiltonian system (4.15). When λ = 1, (4.16) reduces to the following gradient system:
ṗ = −∇p ψ,
(4.17a)
v̇ = −kv ∇v ψ.
(4.17b)
It has been shown that parameterized systems of the form (4.16) have the same equilibrium set and the same local stability properties for all λ ∈ [0, 1] (Dörfler & Bullo, 2011), as stated in the following theorem:
Theorem 4.3.1 (Dörfler & Bullo, 2011) For the one-parameter family Hλ of dynamical
systems in (4.16), the following statements hold independently of the parameter λ ∈ [0, 1].
• Equilibrium set: For all λ ∈ [0, 1] the equilibrium set of Hλ is given by the critical
points of the potential function ψ, i.e. Ep,v = {[pT v T ]T : ∇ψ = 0}.
• Local stability: For any equilibrium [pT v T ]T ∈ Ep,v and for all λ ∈ [0, 1], the
numbers of the stable, neutral, and unstable eigenvalues of the Jacobian of Hλ are not
dependent on λ.
Theorem 4.3.1 allows us to study the local stability of the formation dynamics (4.15) by investigating the local stability of the gradient system (4.17). Since the subsystems (4.17a) and (4.17b) are decoupled, we investigate the local stability of each subsystem. First, the subsystem (4.17a) coincides with the gradient system (4.8), and thus both systems have the same stability properties. Thus the local stability of Ep0 ∗ and Ep∗ , which are defined in (4.9) and (4.3), respectively, follows from Theorems 4.2.2 and 4.2.3 for (4.17a). Second, the only
equilibrium point of (4.17b) is obviously the origin and it is globally exponentially stable.
We present the main result of this section, which confirms the local asymptotic stability
of the desired formation of the double-integrator modeled agents.
Theorem 4.3.2 For the double-integrator modeled agents under the assumptions of Problem
4.3.1, if (G, p∗ ) is rigid, then Ep∗ ,v∗ is locally asymptotically stable under the control law
(4.14).
The dynamics of the centroid of the agents is given by p̈o + kv ṗo = 0, which implies
that the centroid converges to a finite point. This ensures the convergence of p to a finite
realization.
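The double-integrator result can also be checked numerically. The sketch below applies a damped gradient-like law of the form u = −kv v − kp ∇φ(p), in the spirit of (4.14), to the same illustrative rigid triangle as before; all numerical values (gains, step size, seeds) are assumptions of the sketch.

```python
import numpy as np

# Hedged sketch of a gradient-like law for double integrators:
# u = -kv*v - kp*∇φ(p), i.e. damping plus the distance-error gradient.
def accel(p, v, edges, d_star, kp=1.0, kv=2.0):
    u = -kv * v
    for (i, j), d in zip(edges, d_star):
        e = np.dot(p[j] - p[i], p[j] - p[i]) - d ** 2  # squared-distance error
        u[i] += kp * e * (p[j] - p[i])
        u[j] -= kp * e * (p[j] - p[i])
    return u

edges = [(0, 1), (1, 2), (0, 2)]   # rigid triangle graph
d_star = [1.0, 1.0, 1.0]
rng = np.random.default_rng(1)
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = p + 0.1 * rng.standard_normal(p.shape)
v = 0.1 * rng.standard_normal(p.shape)     # nonzero initial velocities
dt = 0.005
for _ in range(20000):                      # Euler integration of (4.13)
    u = accel(p, v, edges, d_star)
    p, v = p + dt * v, v + dt * u
errs = [abs(np.linalg.norm(p[j] - p[i]) - d) for (i, j), d in zip(edges, d_star)]
print(max(errs))   # distance errors and velocities both decay
```

The centroid obeys p̈o + kv ṗo = 0, so it drifts to a finite point while the shape converges, matching the convergence discussion above.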
4.4 Simulation results

We present simulation results of formation control of five single- and double-integrator modeled agents in three-dimensional space. The sensing graph for both kinds of agents is depicted in Fig. 4.1. The function γ is defined as γ(x) = (1/2)x2 , which is widely adopted in the literature.
For the single-integrator modeled agents, we assume that p∗1 = [0 0 10√5 ]T , p∗2 = [0 20 0]T , p∗3 = [−10√3 −10 0]T , p∗4 = [10√3 −10 0]T , and p∗5 = [0 0 −10√5 ]T . Thus the desired inter-agent distances kp∗j − p∗i k for all (i, j) ∈ E are 30. The components of the initial positions pi (0) for all i ∈ V are randomly perturbed from those of p∗i by a random variable uniformly distributed on [−7.5, 7.5]. Fig. 4.2 shows the formation p and the inter-agent distance errors of the five single-integrators. The formation of the agents converges to the desired formation and the inter-agent distance errors converge to zero, as depicted in the figure.
For the double-integrator modeled agents, p∗ and the initial positions are given the same as those for the single-integrator modeled agents. The components of the initial velocities of the double-integrator group are randomly given by a random variable uniformly distributed on [−5, 5]. As depicted in Fig. 4.3, the formation p converges to a finite realization congruent to p∗ and
Figure 4.1: Sensing graph for the five agents.
Figure 4.2: Simulation result for five single-integrators. (a) The formation p, with the initial and desired formations indicated. (b) The inter-agent distance errors d̃ij for all (i, j) ∈ E+ .
the inter-agent distance errors converge to zero.

Figure 4.3: Simulation result for five double-integrators. (a) The formation p, with the initial and desired formations indicated. (b) The inter-agent distance errors d̃ij for all (i, j) ∈ E+ .

4.5 Conclusion

We have studied the local asymptotic stability of n-dimensional undirected formations of single- and double-integrator modeled agents in the distance-based formation setup. In contrast to existing results, which have focused on single-integrator modeled agents in the plane, this chapter addresses general n-dimensional formations and double-integrator modeled agents. While the stability of undirected formations has been satisfactorily addressed, the analysis in this chapter is not applicable to general directed formations, even in the single-integrator case. It is thus desirable to study the stability of general directed formations.
Chapter 5
Formation control based on orientation estimation
In this chapter, we propose a formation control strategy based on orientation estimation for
single-integrator modeled agents in the plane. Under the assumption that the orientations of
the local reference frames of the agents are not aligned due to the absence of a common sense
of orientation that is available to the agents, the proposed strategy consists of an estimation
law for the orientation angles of the local reference frames and a formation control law.
Under the proposed strategy, if the interaction graph has a spanning tree and all the initial
orientation angles belong to an interval with arc length less than π, then the formation of the
agents exponentially converges to the desired formation. An asymptotic convergence property is proved for the case that the interaction graph is uniformly connected. We also show
that the proposed strategy can be utilized for network localization. For a sensor network
equipped with range and angle measurements, the proposed strategy provides an effective
decentralized solution to localization.
5.1 Introduction
Depending on the local information available to agents, two kinds of formation control problem setups have been studied, i.e., displacement- and distance-based setups. In the distance-based setup, inter-agent distances are stabilized by the agents to achieve the desired formation. Thus graph rigidity, which formalizes the property that a formation is determined up to congruence by its edge lengths, has usually been assumed in this setup.
of formations by edge lengths, has usually been assumed in this setup. An early work on control of rigid formations is found in Baillieul & Suri (2003), where a formation control law
based on the gradient of a potential function has been proposed. Local asymptotic stability
of rigid formations has been proved in Krick et al. (2009); Oh & Ahn (2011a,c). Global
asymptotic stability of triangular formations has been addressed in Cao et al. (2007, 2011);
Dörfler & Francis (2010); Oh & Ahn (2011b,c).
While distance-based approaches require no global information for achieving a desired formation, they have some disadvantages, such as a strict graph condition (rigidity) and the difficulty of characterizing the region of attraction. In this chapter, we propose a formation control
strategy based on the estimation of the orientation angles of the local reference frames. The
proposed strategy can be straightforwardly applied to network localization.
Contributions of this chapter are summarized as follows. First, we propose a formation
control strategy based on the estimation of the orientation angles of the local reference frames
of the agents. The proposed strategy ensures the asymptotic convergence of the formation
of the agents to the desired formation. Further, the graph condition for the convergence is
simple. That is, if the interaction graph of the agents has a spanning tree, then the proposed
control strategy ensures the convergence. Second, besides the convergence of a formation to
the desired formation, the orientation angles of local reference frames are estimated under
the proposed formation control strategy. Once the orientation angles are estimated, a common sense of orientation is available to the agents, and thus the formation control problem in this chapter eventually reduces to a displacement-based problem. The proposed strategy therefore allows us to utilize the effectiveness of displacement-based formation controllers (Ren
& Atkins, 2007). Third, we propose a network localization strategy for a sensor network
equipped with relative distance and angle measurements. We show that the proposed formation control strategy can be applied to network localization. The proposed localization
strategy can be implemented in a decentralized way.
5.2 Preliminaries
A time-varying directed weighted graph is denoted by G(t) = (V, E(t), W(t)). Following the notion in Scardovi & Sepulchre (2009), we assume that wij (t) ∈ {0} ∪ [wmin , wmax ] for all i, j ∈ V and all t ≥ t0 , where 0 < wmin < wmax < ∞. Then (i, j) ∈ E(t) if and only if wmin ≤ wij (t) ≤ wmax . The graph G(t) = (V, E(t), W(t)) is uniformly connected if, for any t ≥ t0 , there exist a finite time T and a vertex i ∈ V such that i is the root of a spanning tree of the graph (V, ∪τ ∈[t,T ] E(τ ), ∫tT W(τ )dτ ).
Consider the following N single-integrator modeled agents over the weighted directed
graph G = (V, E, W):
ẋi = ui , i ∈ V,
(5.1)
where xi ∈ Rn and ui ∈ Rn . Under the assumption that agent i ∈ V has measurements
xj − xi for all j ∈ Ni , a consensus protocol for the agents has been proposed as follows
(Jadbabaie et al., 2003; Lin et al., 2004; Olfati-Saber & Murray, 2004; Ren et al., 2004):
ui = Σj∈Ni wij (xj − xi ).  (5.2)
The overall dynamic equation for the agents is written as
ẋ = −(L ⊗ In )x,  (5.3)
where x := [xT1 · · · xTN ]T and L is the graph Laplacian of G. The following result is due to
Ren et al. (2004).
Theorem 5.2.1 The equilibrium set EnN := {[xT1 · · · xTN ]T ∈ RnN : xi = xj ∀i, j =
1, . . . , N } of the system (5.3) is exponentially stable if G has a spanning tree. Further, x(t)
exponentially converges to a finite point in EnN .
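The statement of Theorem 5.2.1 can be checked numerically. The sketch below integrates ẋ = −Lx for a scalar state on a directed chain, which has a spanning tree rooted at the first vertex; the weights, horizon, and step size are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the consensus protocol (5.2)/(5.3) on a directed chain
# 0 -> 1 -> 2 -> 3, where A[i, j] = w_ij (agent i listens to agent j).
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian of the weighted digraph
x = np.array([3.0, -1.0, 4.0, 1.0])
dt = 0.01
for _ in range(5000):               # Euler integration of ẋ = -Lx
    x = x - dt * (L @ x)
print(x)   # states agree; here the root never moves, so the limit is x_0(0) = 3.0
```

Because the root has no in-edges it never moves, so the consensus value here is x0(0); for a general graph with a spanning tree, the theorem only guarantees convergence to some finite point in EnN.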
5.3 Formation control based on orientation estimation

5.3.1 Problem statement
Consider the following N single-integrator modeled agents in the plane:
ṗi = ui , i = 1, . . . , N,  (5.4)

where pi ∈ R2 and ui ∈ R2 denote the position and the control input, respectively, of agent i with respect to the global reference frame, denoted by Σg . We denote [pT1 · · · pTN ]T ∈ R2N by p. Due to the absence of a common sense of orientation available to the agents, each agent i maintains its own local reference frame, denoted by Σi , with the origin at its position and the orientation angle θi ∈ (−π, π] with respect to Σg , as illustrated in Figure 5.1. We denote [θ1 · · · θN ]T by θ. By adopting a notation in which superscripts denote local reference frames, the dynamic equation of the agents can be written as

ṗii = uii , i = 1, . . . , N,  (5.5)

where pii and uii denote the position and the control input, respectively, of agent i with respect to Σi .
Figure 5.1: Measurement of relative orientation angle.
For a weighted directed graph G = (V, E, W), where |V| = N , we assume that agent i measures the relative positions of its neighboring agents with respect to Σi :

piji := pij − pii , j ∈ Ni , i ∈ V.  (5.6)
We refer to G as the interaction graph for the agents. Defining θji as
θji := PV (θj − θi ) , j ∈ Ni ,
(5.7)
where PV (θj − θi ) := [(θj − θi + π) mod 2π] − π, we additionally assume that θji , j ∈ Ni ,
are available to agent i.
The relative orientation angles can be measured as follows. As depicted in Figure 5.1,
agent j senses the angle δij and transmits the sensed value to agent i by communication.
Then agent i senses the angle δji and calculates the relative orientations of its neighbors as
follows: PV (θj − θi ) = PV (δji − δij + π).
We further assume that the agents estimate their orientation angles with respect to Σg and that they receive the estimated values of their neighbors through communication. Thus θ̂ji and p̂cji are available to agent i:

θ̂ji := PV (θ̂j − θ̂i ), p̂cji := p̂cj − p̂ci , j ∈ Ni ,  (5.8)

where θ̂i is the estimated orientation angle of agent i with respect to Σg and p̂ci is the estimated position of agent i with respect to Σc .

Given p∗ := [p∗T1 · · · p∗TN ]T ∈ R2N , the formation control problem is stated as follows:
Problem 5.3.1 For the N agents modeled by (5.4) in the plane, let G = (V, E, W) be the
interaction graph of the agents. Given p∗ ∈ R2N , design an orientation angle estimation law
and a formation control law such that θ̂i → θi + θ̃∞ and kpi − pj k → kp∗i − p∗j k as t → ∞
under the control laws, based on (5.6), (5.7), and (5.8).
5.3.2 Control strategy and stability analysis: static graph case
In this subsection, we assume that the interaction graph G of the agents is static. For the
agents under the assumption of Problem 5.3.1, an orientation estimation law can be designed
as follows:

θ̂˙i = kθ̂ Σj∈Ni wij (θ̂ji − θji ),  (5.9)
where kθ̂ > 0 and θ̂i (0) = 0 for all i = 1, . . . , N . Suppose that maxi∈V θi (0)−mini∈V θi (0) <
π. Defining θ̃i := θ̂i − θi , we then have maxi∈V θ̃i (0) − mini∈V θ̃i (0) < π since θ̂i (0) = 0 for
all i = 1, . . . , N . Thus the orientation estimation error dynamics is written as

θ̃˙ = −kθ̂ Lθ̃,  (5.10)

where θ̃ := [θ̃1 · · · θ̃N ]T . From Theorem 5.2.1, we have the following result:
Lemma 5.3.1 If G has a spanning tree and maxi∈V θi (0) − mini∈V θi (0) < π, there exists a
finite constant θ̃∞ such that θ̃(t) exponentially converges to θ̃∞ 1N .
Lemma 5.3.1 confirms that the estimated orientation angles converge to the actual angles up to a common offset, i.e., θ̂i (t) → θi + θ̃∞ as t → ∞ for all i = 1, . . . , N . In the case that some agent knows its actual orientation angle, all the agents can estimate their actual orientation angles. Suppose that an agent is located at the root of a spanning tree of G and knows its orientation angle with respect to Σg . In this case, θ̃(t) → 0 as t → ∞, which means that θ̂i → θi as t → ∞.
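The estimator (5.9) can be exercised numerically. In the sketch below, the true angles, graph, and gains are assumptions; only relative measurements θji enter the update (the wrap is the identity here because the initial spread is below π), and the errors θ̂i − θi reach a common offset θ̃∞.

```python
import numpy as np

# Hedged sketch of the orientation estimation law (5.9):
# dθ̂_i/dt = k Σ_j w_ij ((θ̂_j - θ̂_i) - θ_ji), using relative angles only.
def wrap(a):                      # P_V: wrap an angle into (-π, π]
    return (a + np.pi) % (2 * np.pi) - np.pi

rng = np.random.default_rng(2)
theta = rng.uniform(-0.4 * np.pi, 0.4 * np.pi, 5)   # spread < π, as assumed
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = A[2, 3] = A[3, 4] = 1.0         # chain rooted at agent 4
theta_hat = np.zeros(5)            # θ̂_i(0) = 0 for all agents
k, dt = 1.0, 0.01
for _ in range(5000):
    rel_meas = wrap(theta[None, :] - theta[:, None])    # θ_ji measurements
    rel_est = theta_hat[None, :] - theta_hat[:, None]   # exchanged θ̂_ji values
    theta_hat = theta_hat + dt * k * (A * (rel_est - rel_meas)).sum(axis=1)
err = theta_hat - theta
print(err)   # all entries agree: θ̂_i → θ_i + θ̃_∞
```

Since the root agent here initializes θ̂ = 0 and never updates, the common offset is θ̃∞ = −θroot; a root that knows its own angle would drive the offset to zero, as discussed above.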
Based on Lemma 5.3.1, we consider a reference frame whose origin is located at the global origin and whose orientation angle is −θ̃∞ with respect to Σg . We refer to this reference frame as the estimated common reference frame of the agents and denote it by Σc (see Figure 5.1). In what follows, we analyze the stability of the formation of the agents with respect to Σc .
Referring to the consensus protocol (5.2), we design the formation control law of agent i as

uii = kp Σj∈Ni wij [(pij − pii ) − R(−θ̂i )(p∗j − p∗i )], i ∈ V,  (5.11)

where kp > 0 and R(−θ̂i ) is the two-dimensional rotation matrix by the angle −θ̂i .
Based on the existence of θ̃∞ , we define eθi := θ̃i − θ̃∞ for each i ∈ V. The position dynamics of agent i is then written as

ṗci = uci = R(θi + θ̃∞ )uii
= kp Σj∈Ni wij [R(θi + θ̃∞ )(pij − pii ) − R(θi + θ̃∞ )R(−θ̂i )(p∗j − p∗i )]
= kp Σj∈Ni wij [(pcj − pci ) − R(−eθi )(p∗j − p∗i )],  (5.12)

where uci denotes the control input of agent i with respect to Σc . Further, by defining epci := pci − p∗i for all i ∈ V, we obtain the position error dynamics as follows:

ėpci = kp Σj∈Ni wij (epcj − epci ) + kp Σj∈Ni wij [I2 − R(−eθi )](p∗j − p∗i ).  (5.13)
Defining epc := [eTpc1 · · · eTpcN ]T , eθ := [eθ1 · · · eθN ]T , and Γ(eθ ) := diag(R(eθ1 ), . . . , R(eθN )), we obtain the following overall error dynamics:

ėpc = −kp (L ⊗ I2 )epc + kp [I2N − Γ−1 (eθ )](L ⊗ I2 )p∗ ,  (5.14a)
ėθ = −kθ̂ Leθ .  (5.14b)
When all the agents know their orientation angles, θ̃ ≡ 0. In this case, the overall error dynamics (5.14) reduces to

ėpc = −kp (L ⊗ I2 )epc  (5.15)

because Γ−1 (eθ ) = I2N . Thus the overall error dynamics (5.14) is a cascade of the position error dynamics (5.14a) and the additional dynamics (5.14b) generated by the orientation estimation error.
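To make the cascade concrete, the following sketch runs the estimator (5.9) together with a control law of the form (5.11), with each agent measuring and actuating in its own rotated frame. The square formation, graph, gains, and the sign conventions for the frame rotations are assumptions of the sketch, not values from this chapter.

```python
import numpy as np

# Hedged sketch of the cascade (5.14): orientation estimation feeding a
# formation law of the form (5.11); agents work in misaligned local frames.
def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

rng = np.random.default_rng(3)
N = 4
theta = rng.uniform(-0.4 * np.pi, 0.4 * np.pi, N)  # unknown frame angles
A = np.zeros((N, N))
A[1, 0] = A[2, 1] = A[3, 2] = 1.0                  # spanning tree, root 0
p_star = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # unit square
p = p_star + 0.3 * rng.standard_normal((N, 2))
theta_hat = np.zeros(N)
k_th, k_p, dt = 5.0, 1.0, 0.002   # estimator deliberately faster than positions
for _ in range(20000):
    rel = theta[None, :] - theta[:, None]              # relative angles (spread < π)
    rel_hat = theta_hat[None, :] - theta_hat[:, None]
    theta_hat = theta_hat + dt * k_th * (A * (rel_hat - rel)).sum(axis=1)
    u = np.zeros((N, 2))
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                local = R(theta[i]).T @ (p[j] - p[i])  # measured p_ji^i
                u[i] += k_p * A[i, j] * (
                    local - R(-theta_hat[i]) @ (p_star[j] - p_star[i]))
    for i in range(N):
        p[i] += dt * (R(theta[i]) @ u[i])              # ṗ_i in the global frame
errs = [abs(np.linalg.norm(p[i] - p[j]) - np.linalg.norm(p_star[i] - p_star[j]))
        for i in range(N) for j in range(i)]
print(max(errs))   # inter-agent distances reach their desired values
```

The realized formation is the desired one rotated by the residual offset θ̃∞, so all inter-agent distances match those of p∗; the estimator gain is set larger than the position gain, in line with the gain-selection discussion below.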
We now analyze the stability of the overall error dynamics (5.14). Below, we first show that kkp [I2N − Γ−1 (eθ (t))](L ⊗ I2 )p∗ k exponentially converges to 0 as t → ∞ under a certain condition. Then we show that epc (t) exponentially converges to a consensus state.
The following lemma states the exponential convergence of kI2N − Γ(eθ (t))k to zero:
Lemma 5.3.2 If G has a spanning tree and maxi∈V θi (0) − mini∈V θi (0) < π, there exist
constants kγ > 0 and λγ > 0 such that kI2N − Γ(eθ (t))k ≤ kγ e−λγ t keθ (0)k and kI2N −
Γ−1 (eθ (t))k ≤ kγ e−λγ t keθ (0)k for all t ≥ 0.
Proof: From Lemma 5.3.1, there exist constants keθ > 0 and λeθ > 0 such that
keθ (t)k ≤ keθ e−λeθ t keθ (0)k
(5.16)
for all t ≥ 0. Due to the property of block diagonal matrices,
kI2N − Γ(eθ (t))k = max kI2 − R(eθi (t))k.
i=1,...,N
Since kI2 − R(eθi (t))k =
p
√
2 − 2 cos(eθi (t)), we have kI2 − R(eθi (t))k ≤ 2|eθi (t)| under
the condition that maxi∈V eθi (t) − mini∈V eθi (t) < π, which leads to
√
kI2N − Γ(eθ (t))k ≤ 2 max |eθi (t)|
i=1,...,N
√
≤ 2keθ (t)k.
(5.17)
Since maxi∈V θi (0) − mini∈V θi (0) < π, it follows that maxi∈V eθi (0) − mini∈V eθi (0) < π,
which leads to maxi∈V eθi (t) − mini∈V eθi (t) < π for all t ≥ 0 (Moreau, 2004).
From (5.16) and (5.17), there exist constants kγ > 0 and λγ > 0 such that kI2N −
Γ(eθ (t))k ≤ kγ e−λγ t keθ (0)k for all t ≥ 0.
From Lemma 5.3.2, it is obvious that kkp [I2N − Γ−1 (eθ (t))] (L ⊗ I2 )p∗ k exponentially converges to 0 as t → ∞. Based on this convergence property, we present the following theorem, which is the main result of this subsection.
Theorem 5.3.1 If G has a spanning tree and maxi∈V θi (0) − mini∈V θi (0) < π, then epc (t) exponentially converges to E2N := {[ξ1T · · · ξNT ]T ∈ R2N : ξi = ξj , ∀i, j = 1, . . . , N } and there exists a point e∞pc ∈ E2N such that epc (t) asymptotically converges to e∞pc as t → ∞.
Proof: Let w := kp [I2N − Γ−1 (eθ (t))] (L ⊗ I2 )p∗ . Then, the position error dynamics
(5.14a) can be written as
ėpc = −kp (L ⊗ I2 )epc + w.  (5.18)
It follows from Lemma 5.3.2 that there exist constants kw > 0 and λw > 0 such that
kw(t)k ≤ kw e−λw t .  (5.19)
From Theorem 5.2.1, the equilibrium set E2N of the system
e˙pc = −kp (L ⊗ I2 )epc
is exponentially stable when w ≡ 0. Define the distance from any x ∈ R2N to the set E2N as
dist(x, E2N ) := inf ξ∈E2N kx − ξk.
Since E2N is exponentially stable, there exist constants kE > 0 and λE > 0 such that, for any
x ∈ R2N ,
dist(e−kp (L⊗I2 )t x, E2N ) ≤ kE e−λE t dist(x, E2N ).  (5.20)
Since E2N is a subspace of R2N , which is a Hilbert space, there exists an orthogonal complement E⊥2N ⊂ R2N such that R2N = E2N ⊕ E⊥2N . Then, for any x ∈ R2N , there exist xk ∈ E2N and x⊥ ∈ E⊥2N such that x = xk + x⊥ and
dist(x, E2N ) = dist(x⊥ , E2N ) = kx⊥ k.
Further, for any x ∈ R2N and y ∈ R2N , we can obtain the following triangle inequality:
dist(x + y, E2N ) = kx⊥ + y ⊥ k
≤ kx⊥ k + ky ⊥ k
= dist(x, E2N ) + dist(y, E2N ).  (5.21)
Since the solution of (5.18) is given as

epc (t) = e−kp (L⊗I2 )t epc (0) + ∫0t e−kp (L⊗I2 )(t−τ ) w(τ )dτ,  (5.22)
we obtain

dist(epc (t), E2N ) ≤ dist(e−kp (L⊗I2 )t epc (0), E2N ) + dist(∫0t e−kp (L⊗I2 )(t−τ ) w(τ )dτ, E2N )
≤ kE e−λE t dist(epc (0), E2N ) + ∫0t dist(e−kp (L⊗I2 )(t−τ ) w(τ ), E2N )dτ,

where the last integral is denoted by η(t), based on (5.20) and (5.21). It follows from (5.20) that
η(t) ≤ kE e−λE t ∫0t eλE τ dist(w(τ ), E2N )dτ < (kE /λE ) sup0≤τ ≤t (dist(w(τ ), E2N )).  (5.23)
Note that dist(x, E2N ) ≤ kxk for any x ∈ R2N because inf ξ∈E2N kx − ξk ≤ kxk. Thus it follows from Lemma 5.3.2 that

sup0≤τ ≤t (dist(w(τ ), E2N )) ≤ sup0≤τ ≤t kw(τ )k ≤ sup0≤τ ≤t kγ Mw e−λγ τ keθ (0)k ≤ kγ Mw keθ (0)k.  (5.24)

From (5.23) and (5.24), we obtain

dist(epc (t), E2N ) ≤ kE e−λE t dist(epc (0), E2N ) + (kE kγ Mw /λE ) keθ (0)k.  (5.25)
By replacing the initial time 0 with t/2 in (5.25), we have

dist(epc (t), E2N ) ≤ kE e−λE t/2 dist(epc (t/2), E2N ) + (kE kγ Mw /λE ) keθ (t/2)k
≤ kE² e−λE t dist(epc (0), E2N ) + kE e−λE t/2 (kE kγ Mw /λE ) keθ (0)k + (kE kγ Mw keθ /λE ) e−λeθ t/2 keθ (0)k,

which implies that epc exponentially converges to E2N . Further, since kepc k is bounded by Lemma 5.3.2, there exists a point e∞pc ∈ E2N such that epc (t) asymptotically converges to e∞pc .
Theorem 5.3.1 shows several advantages of the proposed approach over distance-based
approaches (Baillieul & Suri, 2003; Cao et al., 2007, 2011; Dörfler & Francis, 2010; Krick
et al., 2009; Oh & Ahn, 2011a,d) as follows:
• The graph condition for the convergence of a formation to the desired formation is simple. In distance-based approaches, a stricter graph condition, i.e., rigidity, is required to ensure convergence.
• The region of attraction is explicitly characterized under the proposed strategy. Though rigid formations are locally asymptotically stable, it is difficult to investigate their region of attraction.
• By achieving orientation estimation, the proposed strategy allows us to utilize the effectiveness of displacement-based approaches (Ren & Atkins, 2007).
As mentioned previously, the overall error dynamics (5.14) is a cascade of the position error dynamics (5.14a) and the additional dynamics (5.14b) generated by the orientation estimation error. It is worth noting that the orientation estimation error dynamics is informational, whereas the position error dynamics is physical, and thus the convergence speed of the latter is limited by the capability of the actuators. Further, the position error dynamics reaches consensus only when the orientation estimation error converges to the origin. In this regard, it is desirable to design the orientation estimation error dynamics to be far faster than the position error dynamics, and the gain parameters kθ̂ and kp should be chosen based on this observation. While the orientation error dynamics is informational, the value of kθ̂ cannot be assigned arbitrarily in practice because the measurements might be noisy. The value of kp should be selected considering the capability of the actuators so as to avoid actuator saturation, which degrades the system performance.
5.3.3 Control strategy and stability analysis: switching graph case
In this subsection, we assume that the edge weights of G are switching. For the agents
under the assumption of Problem 5.3.1, suppose that maxi∈V θi (t0 ) − mini∈V θi (t0 ) < π.
Defining θ̃i := θ̂i − θi , we then have maxi∈V θ̃i (t0 ) − mini∈V θ̃i (t0 ) < π if θ̂i (t0 ) = 0 for all
i = 1, . . . , N . Thus the orientation estimation error dynamics is written as

θ̃˙ = −kθ̂ Lθ̃, t ≥ t0 ,  (5.26)

where θ̃ := [θ̃1 · · · θ̃N ]T . From the consensus results for uniformly connected graphs (Moreau, 2004; Scardovi & Sepulchre, 2009), we have the following result:
Lemma 5.3.3 If G is uniformly connected and maxi∈V θi (0)−mini∈V θi (0) < π, there exists
a finite constant θ̃∞ such that θ̃(t) asymptotically converges to θ̃∞ 1N .
Based on Lemma 5.3.3, it can be shown that kI2N − Γ(eθ (t))k asymptotically converges
to zero as t → ∞. Then we have the following theorem, which is the main result of this
subsection:
Theorem 5.3.2 If G is uniformly connected and maxi∈V θi (0) − mini∈V θi (0) < π, then
epc (t) asymptotically converges to E2N := {[ξ1T · · · ξNT ]T ∈ R2N : ξi = ξj , ∀i, j = 1, . . . , N }.
epc (t) asymptotically converges to E2N := {[ξ1T · · · ξN
Proof: Let φepc be the state transition matrix of the unforced position error dynamics. From the consensus results for uniformly connected graphs, the equilibrium set E2N of the system
e˙pc = −kp (L ⊗ I2 )epc
is uniformly exponentially stable when w ≡ 0. Since E2N is uniformly exponentially stable,
there exist constants kE > 0 and λE > 0 such that, for any x ∈ R2N ,
dist(φepc (t, t0 )x, E2N ) ≤ kE e−λE (t−t0 ) dist(x, E2N ).  (5.27)
Since the solution of the position error dynamics is given as

epc (t) = φepc (t, t0 )epc (t0 ) + ∫t0t φepc (t, τ )w(τ )dτ,  (5.28)
we obtain

dist(epc (t), E2N ) ≤ dist(φepc (t, t0 )epc (t0 ), E2N ) + dist(∫t0t φepc (t, τ )w(τ )dτ, E2N )
≤ kE e−λE (t−t0 ) dist(epc (t0 ), E2N ) + ∫t0t dist(φepc (t, τ )w(τ ), E2N )dτ,

where the last integral is denoted by η(t),
based on (5.27) and (5.21). It follows from (5.27) that
η(t) ≤ kE e−λE t ∫t0t eλE τ dist(w(τ ), E2N )dτ < (kE /λE ) supt0 ≤τ ≤t (dist(w(τ ), E2N )).  (5.29)
Note that dist(x, E2N ) ≤ kxk for any x ∈ R2N because inf ξ∈E2N kx − ξk ≤ kxk. Thus it follows that

supt0 ≤τ ≤t (dist(w(τ ), E2N )) ≤ supt0 ≤τ ≤t kw(τ )k,

which leads to

dist(epc (t), E2N ) ≤ kE e−λE (t−t0 ) dist(epc (t0 ), E2N ) + (kE /λE ) supt0 ≤τ ≤t kw(τ )k.  (5.30)
By replacing t0 with (t + t0 )/2 in (5.30), we have

dist(epc (t), E2N ) ≤ kE e−λE (t−t0 )/2 dist(epc ((t + t0 )/2), E2N ) + (kE /λE ) sup(t+t0 )/2≤τ ≤t kw(τ )k,

which implies that epc asymptotically converges to E2N , since both terms vanish as t → ∞.
5.4 Application to network localization

5.4.1 Problem statement
We formulate the localization problem. Consider N agents located in the plane. Suppose
that the interconnection topology of the agents is modeled by a weighted directed graph
G = (V, E, W). Each agent i ∈ V has measurements as follows:
piji := pij − pii ,
(5.31a)
θji := PV (θj − θi ) , ∀j ∈ Ni .
(5.31b)
We assume that each agent estimates its position p̂ci with respect to a common reference frame, denoted by Σc , and receives the estimated positions of its neighbors through communication. Further, we assume that each agent i estimates its orientation angle θ̂i and receives the estimated angles of its neighbors. Then, defining p̂cji and θ̂ji as
p̂cji := p̂cj − p̂ci ,
(5.32a)
θ̂ji := θ̂j − θ̂i , ∀j ∈ Ni ,
(5.32b)
we assume that p̂cji and θ̂ji for all j ∈ Ni are available to agent i.
The localization problem is then stated as follows:
Problem 5.4.1 For the N agents in the plane, let G = (V, E, W) be the interaction graph of the agents. Design a rule for estimating p̂c := [p̂cT1 · · · p̂cTN ]T such that kp̂cj (t) − p̂ci (t)k → kpj − pi k as t → ∞ for all i, j ∈ V, based on the measurements (5.31) and (5.32).
5.4.2 Network localization based on orientation estimation
Consider the agents under the assumptions of Problem 5.4.1. Suppose that maxi∈V θi (0) − mini∈V θi (0) < π. The orientation estimation law can be designed as follows:

θ̂˙i = kθ̂ Σj∈Ni wij [(θ̂j − θj ) − (θ̂i − θi )],  (5.33)
where kθ̂ > 0 and θ̂i (0) = 0 for all i ∈ V. The overall error dynamics for orientation angle estimation is written as

θ̃˙ = −kθ̂ Lθ̃,  (5.34)

where θ̃ = [θ̃1 · · · θ̃N ]T := [θ̂1 − θ1 · · · θ̂N − θN ]T . Then, from Lemma 5.3.1, θ̃ exponentially converges to θ̃∞ 1N for some finite constant θ̃∞ if G has a spanning tree. Consider a reference frame whose origin is located at the global origin and whose orientation angle is −θ̃∞ with respect to Σg . We refer to this reference frame as the estimated common reference frame of the agents and denote it by Σc (see Figure 5.1).
Next, we design a localization rule for the agents as follows:

p̂˙ci = kp̂ Σj∈Ni wij [(p̂cj − p̂ci ) − R(−θ̂i )piji ],  (5.35)
where kp̂ > 0. By some algebra, we can arrange the right-hand side of (5.35) as follows:
kp̂ Σj∈Ni wij [(p̂cj − p̂ci ) − R(−θ̂i )piji ] = kp̂ Σj∈Ni wij [(p̂cj − p̂ci ) − R(−θ̂i )R(θi + θ̃∞ )pcji ]
= kp̂ Σj∈Ni wij [(p̂cj − p̂ci ) − R(−eθi )pcji ],  (5.36)

where eθi := θ̃i − θ̃∞ . Further, defining p̃ci := p̂ci − pci for each i ∈ V, we obtain the position error dynamics as follows:

p̃˙ci = kp̂ Σj∈Ni wij (p̃cj − p̃ci ) + kp̂ Σj∈Ni wij [I2 − R(−eθi )](pcj − pci ),  (5.37)

where the second sum is denoted by ψi .
Thus the overall error dynamics for position estimation can be written as

p̃˙c = −kp̂ (L ⊗ I2 )p̃c + ψ,  (5.38)

where p̃c := [p̃cT1 · · · p̃cTN ]T and ψ := [ψ1T · · · ψNT ]T .
Notice that (5.34) and (5.38) correspond to (5.10) and (5.14), respectively. Based on this correspondence, we straightforwardly obtain the following corollary:
Corollary 5.4.1 If G has a spanning tree and maxi∈V θi (0) − mini∈V θi (0) < π, then p̃c (t) exponentially converges to E2N := {[ξ1T · · · ξNT ]T ∈ R2N : ξi = ξj , ∀i, j = 1, . . . , N } and there exists a point p̃c∞ ∈ E2N such that p̃c (t) asymptotically converges to p̃c∞ as t → ∞.
Advantages of the proposed strategy are as follows:
• The graph condition for localization is simple. In the common distance-based problem setup, rigidity is a necessary condition for localization (Fang et al., 2009; So & Ye, 2007).
• While most of distance-based localization algorithms are implemented in a centralized
way (Fang et al., 2009; So & Ye, 2007), the proposed strategy can be implemented in
a decentralized way.
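A numerical sketch of the localization rule along the lines of (5.33)/(5.35): the scheme is anchorless, so the estimates converge to the true positions up to a common rotation (the residual frame offset) and translation, which preserves all inter-agent distances. The graph, gains, and the sign conventions for the frame rotations are assumptions of the sketch.

```python
import numpy as np

# Hedged sketch of orientation-estimation-based network localization.
def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

rng = np.random.default_rng(4)
N = 4
p = rng.uniform(-1.0, 1.0, (N, 2))                  # true positions (unknown)
theta = rng.uniform(-0.4 * np.pi, 0.4 * np.pi, N)   # true frame angles
A = np.zeros((N, N))
A[1, 0] = A[2, 1] = A[3, 2] = 1.0                   # spanning tree rooted at 0
theta_hat = np.zeros(N)
p_hat = np.zeros((N, 2))                            # position estimates
k_th, k_p, dt = 5.0, 1.0, 0.002
for _ in range(20000):
    rel = theta[None, :] - theta[:, None]
    rel_hat = theta_hat[None, :] - theta_hat[:, None]
    theta_hat = theta_hat + dt * k_th * (A * (rel_hat - rel)).sum(axis=1)
    upd = np.zeros((N, 2))
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                local = R(theta[i]).T @ (p[j] - p[i])   # measured p_ji^i
                upd[i] += k_p * A[i, j] * (
                    (p_hat[j] - p_hat[i]) - R(theta_hat[i]) @ local)
    p_hat += dt * upd                                # localization update
errs = [abs(np.linalg.norm(p_hat[i] - p_hat[j]) - np.linalg.norm(p[i] - p[j]))
        for i in range(N) for j in range(i)]
print(max(errs))   # estimated inter-agent distances match the true ones
```

Each agent only uses locally measured relative positions, relative angles, and its neighbors' communicated estimates, so the rule runs in a decentralized way, as claimed above.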
(a) Static interaction graph. (b) Switching interaction graph, with directed edges (1): 2 → 1, (2): 3 → 2, (3): 4 → 2, (4): 5 → 3, (5): 6 → 5, and (6): 1 → 2. Only one edge has a positive weight at a time, in the order (1) → (2) → · · · → (6) → (1) → · · · ; the other edges have zero weights.
Figure 5.2: The interaction graph for the six single-integrators.
5.5 Simulation results
We present simulation results of formation control of six single-integrator modeled agents
under the proposed control strategy. The initial orientation angles are assigned randomly in
the range −π ≤ θi < π for all i ∈ V. The desired formation is changed at t = 10s from p∗
to 0.1 × p∗ .
In the first simulation, the interaction graph of a six-agent group is given as shown in
Figure 5.2a and each edge weight is set to 1 without switching. Figure 5.3 illustrates the result of the first simulation. The formation converges to the desired formation (Figure 5.3a),
the orientation angle estimation error keθ (t)k converges to the origin (Figure 5.3b), and the
formation error converges to a steady-state value (Figure 5.3b). Although the desired formation is changed at t = 10s, the orientation angle error does not change. That is, once
the orientation is aligned, the agents can utilize the common sense of orientation as in the
(a) The formation pa (t), with the initial formation and the first and second desired formations indicated. (b) The formation error kepc (t)k and the orientation estimation error keθ (t)k.
Figure 5.3: Simulation result of the six single-integrators under (5.34) and (5.38): static
interaction graph case.
displacement-based setup.
In the second simulation, the interaction graph switches every 10 ms as depicted in Figure 5.2b. Only one edge is activated in each 10 ms interval, with an edge weight of 6, in the order (1) → (2) → · · · → (6) → (1) → · · ·. Figure 5.4 depicts the result of formation control of the six-agent group under the switching interaction graph. As shown in the figure, ‖eθ(t)‖ converges to zero and ‖epc(t)‖ converges to a steady-state value.
Figure 5.4: Simulation result of the six single-integrators under (5.34) and (5.38): switching interaction graph case. (a) The formation pa(t). (b) The formation error ‖epc(t)‖ and the orientation alignment error ‖eθ(t)‖.

5.6 Conclusion

We have proposed a formation control strategy based on orientation alignment. The proposed strategy ensures asymptotic convergence of a formation of single-integrator modeled agents to the desired formation. The proposed control strategy can also be utilized for localization: for wireless sensor networks equipped with time-of-arrival and angle-of-arrival measurements, it provides an effective solution for network localization.
Chapter 6
Formation control based on position estimation
In this chapter, we propose a formation control strategy based on position estimation for
single-integrator modeled agents in n-dimensional space under the assumption that each
agent measures the relative positions of its neighboring agents. Subsequently, we show that
the formation of single-integrator modeled agents globally exponentially converges to the
desired formation with the estimated formation exponentially converging to the actual formation up to translation if and only if the interaction graph of the agents contains a spanning
tree. Additionally, a sufficient condition for the convergence is provided to the case that
the interaction graph is switching. We also apply the proposed formation control strategy to
double-integrator modeled agents. Simulation results support the effectiveness of the proposed strategy.
6.1 Introduction
While position-based formation control approaches ensure the best performance among the three kinds of approaches, they require position information, which can be considered over-restrictive. Since it is clearly desirable to achieve better control performance with less information available to the agents, we attempt to emulate position-based formation control within the displacement-based setup. That is, if each agent of a group estimates its position by some means in the displacement-based setup, the formation of the group can be stabilized by exploiting the estimated position information, as in the position-based setup. Based on this motivation, in this chapter we seek to establish a position estimation scheme for formation control by using partial relative displacement measurements.
Specifically, we design a formation control strategy based on position estimation for single- and double-integrator modeled agent groups by using partial relative displacement measurements in two- or three-dimensional space. Notably, the position estimation of the agents and the formation stabilization of the group are conducted simultaneously in a closed feedback loop, as illustrated in Figure 6.1. The contributions of this chapter are summarized as follows. First, it is shown that displacement-based formation control, when combined with position estimators, can behave similarly to position-based control. Consequently, the proposed strategy ensures that the position trajectories of the agents evolve in a more desirable way than under the usual displacement-based approaches. The position trajectories depend on the connectivity among the agents and thus are usually not predictable under existing displacement-based formation control laws. Under the proposed strategy, however, once the error dynamics of the position estimator reaches a steady state, the position trajectories become independent of the connectivity and evolve in a predictable way. Second, we show that localization problems can be readily solved in the displacement-based setup. That is, the proposed position estimators can be applied to the localization of a wireless sensor network under the assumption that the agents of the network are equipped with range- and angle-measuring capability and a common sense of direction is available to them. In the most common problem setup for localization, agent positions are estimated based on the relative distances of certain neighboring agents in centralized ways
Figure 6.1: Block diagram for formation control based on a position estimator (a Formation Control block and a Localization block in closed loop, with signals p∗, p, and p̂).
(Aspnes et al., 2006; Mao et al., 2007). For instance, centralized approaches based on convex optimization of relative distance errors are found in Alfakih et al. (1999); Biswas & Ye (2004); So & Ye (2007). Since decentralized localization problems are generally intractable in the distance-based setup, the proposed position estimators may provide effective solutions for localization. Third, we present necessary and sufficient conditions for the convergence of agent group formations to the desired formations up to translation. For a single-integrator modeled agent group, the formation globally exponentially converges to the desired formation if and only if the graph encoding the interaction topology among the agents has a spanning tree. For a double-integrator modeled agent group, the formation globally exponentially converges to the desired formation if the graph has a spanning tree. Further, we present stability analysis results for the case where the graph is switching. Lastly, we show that the proposed strategy can be combined with an existing formation control law. The proposed strategy, combined with the result in Lin et al. (2005), is successfully applied to single-integrator modeled nonholonomic agent groups.
6.2 Formation control based on position estimation: single-integrator case

6.2.1 Problem statement
Consider the following N single-integrator modeled agents in n-dimensional space:

ṗi = ui,  i = 1, . . . , N,   (6.1)

where pi ∈ Rn and ui ∈ Rn denote the position and the control input of agent i, respectively. The overall dynamic equation for the agents is then described by

ṗ = u,

where p = [p1T · · · pNT]T and u = [u1T · · · uNT]T.
We model the interaction topology of the agents by a weighted directed graph G = (V, E, W). Accordingly, agent i ∈ V measures the relative positions of its neighboring agents:

pji := pj − pi,  j ∈ Ni.   (6.2)

Further, we assume that agent i ∈ V estimates its position with respect to the global reference frame and receives the estimated positions of its neighbors through communication. Thus the following information is available to agent i:

p̂ji := p̂j − p̂i,  j ∈ Ni,   (6.3)

where p̂i and p̂j denote the estimated positions of agents i and j, respectively. We denote [p̂1T · · · p̂NT]T by p̂.
Given a formation p∗ = [p1∗T · · · pN∗T]T ∈ RnN, the desired formation Ep∗ of the agents is defined as

Ep∗ = {p ∈ RnN : pi − pj = pi∗ − pj∗, ∀i, j ∈ V}.   (6.4)
Then the formation problem for the single-integrator modeled agents is stated as follows:

Problem 6.2.1 Suppose that the interaction graph is given by G = (V, E, W) for the N single-integrator modeled agents (6.1). For given p∗, design a position estimator to drive p̂ such that p̂j(t) − p̂i(t) → pj(t) − pi(t) and a formation control law to stabilize p such that pj(t) − pi(t) → pj∗ − pi∗ as t → ∞ for all i, j ∈ V.
The formation control strategy based on a position estimator has several benefits while utilizing only the local information available in the displacement-based setup.
• While the consensus protocol (5.2) can be exploited for displacement-based formation control, the position trajectories of the agents are usually unpredictable because the protocol depends on the connectivity among the agents. Such dependency on the connectivity can be resolved by using a position estimator.
• If an agent group performs a distributed sensing task, position information can be attached to measurement data by using a position estimator, without any additional computation for localizing the data at a data center. Considering the scalability of agent groups, such a distributed localization capability is clearly desirable.
• A position estimator can be exploited as a distributed localization algorithm for a suitably equipped wireless sensor network. That is, if an agent group has a common sense of direction and the agents of the group are equipped with range and angle measurements, then a position estimator can be used as a localization algorithm for the group.
6.2.2 Control strategy and stability analysis
Consider the agents under the assumptions of Problem 6.2.1. A position estimator for the agents can be designed as

p̂̇i = ko Σj∈Ni wij [(p̂j − p̂i) − (pj − pi)],   (6.5)

where ko > 0. Defining p̃ := p − p̂, differentiating p̃, and substituting (6.1), (6.2), and (6.5), we obtain the following error dynamics:

ṗ̃ = −ko (L ⊗ In) p̃.   (6.6)

Based on Theorem 5.2.1, the following lemma provides a necessary and sufficient condition for the convergence of the position estimator (6.5).
Lemma 6.2.1 For the estimation error dynamics (6.6), there exists a finite vector p̃∞ ∈ Rn
such that p̃ exponentially converges to 1N ⊗ p̃∞ if and only if G has a spanning tree.
Thus, if G has a spanning tree, the estimated positions exponentially converge to the
actual positions up to translation, i.e., p̂(t) exponentially converges to p−1N ⊗ p̃∞ as t → ∞.
From Lemma 6.2.1, if an agent that is located at the root node of a spanning tree contained
in G knows its actual position, then the estimated positions converge to the actual positions,
i.e., p̂ converges to p.
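The convergence stated in Lemma 6.2.1 can be illustrated with a minimal numerical sketch of the estimator (6.5). The graph (a 4-agent directed path), the gain, and the use of static positions are all assumed here for illustration; the sum in (6.5) equals (L p̃)i, so the estimator reads p̂̇ = ko L(p − p̂):

```python
import numpy as np

# Hypothetical 4-agent directed graph containing a spanning tree:
# a path where agent i measures neighbor j for the listed pairs (i, j).
N, n = 4, 2
A = np.zeros((N, N))                    # adjacency: A[i, j] = w_ij
for i, j in [(1, 0), (2, 1), (3, 2)]:
    A[i, j] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

ko, dt, steps = 1.0, 0.01, 2000
rng = np.random.default_rng(0)
p = rng.uniform(-1.0, 1.0, (N, n))      # true positions (held static here)
p_hat = rng.uniform(-1.0, 1.0, (N, n))  # initial estimates

# Euler-integrate the estimator (6.5): p_hat_dot = ko * L @ (p - p_hat).
for _ in range(steps):
    p_hat += dt * ko * (L @ (p - p_hat))

p_tilde = p - p_hat                     # per-agent estimation errors
# The errors reach consensus: p_hat recovers p up to a common translation.
print(np.allclose(p_tilde, p_tilde[0], atol=1e-4))  # True
```

Because the root agent of the path has no neighbors, its error stays constant and the other errors converge to it, matching the "up to translation" statement.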
In the case that the edges of G are time-varying, based on Theorem ??, we have the following sufficient condition:
Lemma 6.2.2 For the estimation error dynamics (6.6), there exists a finite vector p̃∞ ∈ Rn
such that p̃ asymptotically converges to 1N ⊗ p̃∞ if G is uniformly connected.
We then design a formation control law based on the estimated positions as

ui = kc (pi∗ − p̂i),   (6.7)

where kc > 0. Defining ep := p − p∗ and differentiating ep, we obtain the following overall error dynamics:

ėp = −kc ep + kc p̃,   (6.8a)
ṗ̃ = −ko (L ⊗ In) p̃.   (6.8b)
When the interaction graph is static, due to the determinant property of block triangular matrices, the eigenvalues of the system matrix in (6.8) are given by the equation

det(λ InN + kc InN) det(λ InN + ko L ⊗ In) = 0.   (6.9)

Further, by the definition of the eigenvalues of a matrix, we have the following formulas for the eigenvalues:

det(λ InN + kc InN) = Πl=1..nN (λ + kc),   (6.10a)
det(λ InN + ko L ⊗ In) = Πl=1..nN (λ + ko μl),   (6.10b)

where μl is the l-th eigenvalue of L ⊗ In. Thus the eigenvalues of the system matrix in (6.8) are the union of −kc, with multiplicity nN, and the eigenvalues of −ko L ⊗ In. That is, the eigenvalues of the formation error dynamics and those of the position estimation error dynamics do not affect each other, owing to the separation property of linear systems.
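This spectral separation is easy to verify numerically. The sketch below assumes a 3-agent directed cycle and arbitrary gains, builds the block-triangular system matrix of (6.8), and checks that its spectrum is the union of −kc (with multiplicity nN) and the eigenvalues of −ko(L ⊗ In):

```python
import numpy as np

N, n, kc, ko = 3, 2, 0.8, 1.5
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])           # directed cycle (contains a spanning tree)
L = np.diag(A.sum(axis=1)) - A
Lk = np.kron(L, np.eye(n))             # L kron I_n, an nN x nN matrix
nN = n * N

# Block-triangular system matrix of the cascaded error dynamics (6.8).
S = np.block([[-kc * np.eye(nN), kc * np.eye(nN)],
              [np.zeros((nN, nN)), -ko * Lk]])

# Sort eigenvalues by rounded (real, imag) so multisets can be compared.
key = lambda z: (round(z.real, 6), round(z.imag, 6))
spec = sorted(np.linalg.eigvals(S), key=key)
expected = sorted(np.concatenate([-kc * np.ones(nN),
                                  np.linalg.eigvals(-ko * Lk)]), key=key)
print(np.allclose(spec, expected))     # True
```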
We present the following necessary and sufficient condition for the convergence of the overall error dynamics (6.8).
Theorem 6.2.1 For the overall error dynamics (6.8), there exists a finite vector p̃∞ ∈ Rn such that [epT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T (1N ⊗ p̃∞)T]T if and only if G has a spanning tree.
Proof: (Sufficiency) It follows from Lemma 6.2.1 that there exists a finite vector p̃∞ ∈ Rn such that p̃ exponentially converges to 1N ⊗ p̃∞ if G has a spanning tree. Define ξ and ζ as

ξ := ep − 1N ⊗ p̃∞,  ζ := p̃ − 1N ⊗ p̃∞.

Then we obtain

ξ̇ = −kc ξ + kc ζ,  ζ̇ = −ko (L ⊗ In) ζ.

Since kc > 0 and ko > 0, there exist positive constants kξ, λξ, kζ, and λζ such that

‖ξ(t)‖ ≤ kξ ‖ξ(0)‖ e^{−λξ t} + (kc kξ/λξ) sup_{0≤τ≤t} ‖ζ(τ)‖,
‖ζ(t)‖ ≤ kζ ‖ζ(0)‖ e^{−λζ t},

where ‖·‖ is the Euclidean norm. By some algebra, we then obtain

‖ξ(t)‖ ≤ kξ ‖ξ(t/2)‖ e^{−λξ t/2} + (kξ kc/λξ) sup_{t/2≤τ≤t} ‖ζ(τ)‖
       ≤ kξ (kξ ‖ξ(0)‖ e^{−λξ t/2} + (kc kξ kζ/λξ) ‖ζ(0)‖) e^{−λξ t/2} + (kc kξ kζ/λξ) ‖ζ(0)‖ e^{−λζ t/2}.

Since ζ globally exponentially converges to the origin by Lemma 6.2.1, it follows from ‖η‖ ≤ ‖ξ‖ + ‖ζ‖, where η := [ξT ζT]T, that there exist positive constants kη and λη such that

‖η(t)‖ ≤ kη ‖η(0)‖ e^{−λη t},

which implies that [epT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T (1N ⊗ p̃∞)T]T.
(Necessity) Suppose that G does not contain a spanning tree. Then, by Lemma 6.2.1, p̃ does not reach consensus, and thus [epT p̃T]T cannot converge to a vector of the stated form.
Based on Theorem 6.2.1, [pT p̂T]T exponentially converges to [(p∗ + 1N ⊗ p̃∞)T (p − 1N ⊗ p̃∞)T]T if and only if G has a spanning tree. Further, if an agent located at the root node of a spanning tree contained in G knows its actual position, then p converges to p∗.
Suppose that, given p∗, the overall dynamics of the agents is in a steady state, i.e., ep = 0 and p̃ = 0, and that the desired formation is then changed from p∗ to p∗′ at some instant. Since the position estimation error dynamics (6.8b) does not depend on p∗, p̃ remains at the origin under the proposed strategy. Moreover, since p̃ = 0, it follows from (6.8a) that ep exponentially and monotonically converges to the origin. Note that, under the existing formation control law in Ren & Atkins (2007), the trajectories of the agents depend on the connectivity among the agents, and thus it is usually difficult to predict the trajectories in such a situation. In this context, the proposed position estimator-based formation control strategy has an advantage in the sense that it allows us to emulate position-based formation control in the displacement-based setup.
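This switching-target scenario can be checked with a short simulation of the error dynamics (6.8) themselves (a sketch; the 3-agent directed-path graph, gains, and initial errors are assumed). After the estimator settles, redefining p∗ resets ep but leaves (6.8b) untouched, so ‖ep − p̃‖ decays monotonically:

```python
import numpy as np

N, n, kc, ko, dt = 3, 2, 1.0, 2.0, 0.001
A = np.zeros((N, N)); A[1, 0] = A[2, 1] = 1.0    # path: spanning tree rooted at 0
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(1)
e_p = rng.normal(size=(N, n))
p_tilde = rng.normal(size=(N, n))

# Phase 1: integrate (6.8) until the estimator error has settled.
for _ in range(20000):
    e_p, p_tilde = (e_p + dt * kc * (p_tilde - e_p),
                    p_tilde + dt * (-ko * (L @ p_tilde)))

offset = p_tilde - p_tilde[0]                     # p_tilde has reached consensus
print(np.allclose(offset, 0.0, atol=1e-6))        # True

# Phase 2: the desired formation is switched, so e_p jumps while (6.8b) is unaffected.
e_p += rng.normal(size=(N, n))
norms = []
for _ in range(5000):
    e_p, p_tilde = (e_p + dt * kc * (p_tilde - e_p),
                    p_tilde + dt * (-ko * (L @ p_tilde)))
    norms.append(np.linalg.norm(e_p - p_tilde))

print(all(b <= a for a, b in zip(norms, norms[1:])))  # True: monotone decay
```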
In the case that the edge weights of G are time-varying, we have the following result:
Theorem 6.2.2 For the overall error dynamics (6.8), there exists a finite vector p̃∞ ∈ Rn such that [epT p̃T]T asymptotically converges to [(1N ⊗ p̃∞)T (1N ⊗ p̃∞)T]T if G is uniformly connected.
Proof: It follows from Lemma 6.2.2 that there exists a finite vector p̃∞ ∈ Rn such that p̃ asymptotically converges to 1N ⊗ p̃∞ if G is uniformly connected. Define ξ and ζ as

ξ := ep − 1N ⊗ p̃∞,  ζ := p̃ − 1N ⊗ p̃∞.

Then we have ξ̇ = −kc ξ + kc ζ and lim_{t→∞} ‖ζ(t)‖ = 0. Since kc > 0, there exist positive constants kξ and λξ such that

‖ξ(t)‖ ≤ kξ ‖ξ(t0)‖ e^{−λξ (t−t0)} + (kc kξ/λξ) sup_{t0≤τ≤t} ‖ζ(τ)‖.

Replacing t0 with (t0 + t)/2 in the above inequality, we obtain

‖ξ(t)‖ ≤ kξ ‖ξ((t + t0)/2)‖ e^{−λξ (t−t0)/2} + (kξ kc/λξ) sup_{(t+t0)/2≤τ≤t} ‖ζ(τ)‖,

which implies that ‖ξ(t)‖ → 0 as t → ∞.
6.2.3 Application to unicycle-like mobile robots
In this subsection, we apply the proposed control strategy to formation control of the unicycle-like agents depicted in Figure 6.2. Consider N unicycle-like agents modeled as follows:

ṗi = ui,  i = 1, . . . , N,   (6.11)

where pi ∈ R2, and ui ∈ R2 is given by

ui = vi [cos θi  sin θi]T,  θ̇i = ωi.
Figure 6.2: Unicycle-like mobile robot.
Here θi is the heading angle of agent i, and vi and ωi are the translational and angular control
inputs of agent i, respectively. The unicycle-like agents are single-integrator modeled agents
in the plane under some input constraint. Due to the input constraint, we may not arbitrarily
design ui . Thus we design ui indirectly by designing vi and ωi . Then the formation control
problem for the unicycle-like agents is stated as follows. For the unicycle-like agents modeled by (6.11), suppose that the interaction graph is given by G = (V, E, W). For given p∗ ,
design a position estimator to drive p̂ such that p̂j (t) − p̂i (t) → pj (t) − pi (t) and a formation
control law to stabilize p such that pj (t) − pi (t) → p∗j − p∗i as t → ∞ for all i, j ∈ V.
The position estimation law (6.5) can be used for the unicycle-like agents. Based on the estimated position p̂i, vi and ωi can be designed as follows:

vi = kc (pi∗ − p̂i)T [cos θi  sin θi]T,   (6.12a)
ωi = cos(kt t),   (6.12b)

where kc > 0 and kt > 0, which is similar to the control law proposed in Lin et al. (2005).
Then ui is arranged as

ui = kc M(θi)(pi∗ − p̂i),   (6.13)

where

M(θi) = [ cos²θi          sinθi cosθi
          sinθi cosθi     sin²θi      ].
The overall error dynamics is then written as

ėp = −kc H(θ) ep + kc H(θ) p̃,   (6.14a)
θ̇ = cos(kt t) 1N,   (6.14b)
ṗ̃ = −ko (L ⊗ I2) p̃,   (6.14c)

where H(θ) = diag(M(θ1), . . . , M(θN)).
The following theorem provides a condition for the global exponential convergence of p and p̂ to p∗ and p, respectively, up to translation.

Theorem 6.2.3 For the overall error dynamics (6.14), if G has a spanning tree, then there exist a finite vector p̃∞ ∈ R2 and a constant kc∗ > 0 such that, for any kc ∈ (0, kc∗), [epT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T (1N ⊗ p̃∞)T]T.
Proof: It follows from Lemma 6.2.1 that there exists a finite vector p̃∞ ∈ R2 such that p̃ exponentially converges to 1N ⊗ p̃∞ if G has a spanning tree. Let ξ = ep − 1N ⊗ p̃∞ and ζ = p̃ − 1N ⊗ p̃∞. Then we have the following cascade system:

ξ̇ = −kc H(θ) ξ + kc H(θ) ζ,   (6.15a)
ζ̇ = −ko (L ⊗ I2) ζ.   (6.15b)

From Lemma 6.2.1, ζ exponentially converges to the origin. Note that H(θ) is 2π/kt-periodic in t, i.e.,

H(θ(t)) = H(θ(t + 2π/kt)).

Consider the following unforced average system:

ξ̇ = −kc Hav ξ,

where Hav = diag(M̄1, . . . , M̄N) and M̄i = (kt/2π) ∫₀^{2π/kt} M(θi(τ)) dτ. Due to the positive definiteness of Hav, there exists kc∗ > 0 such that the origin of ξ̇ = −kc H(θ)ξ is globally exponentially stable for any kc ∈ (0, kc∗) (Khalil, 2002; Lin et al., 2005). Furthermore, since the right-hand side of (6.15a) is globally Lipschitz in (ξ, ζ), uniformly in t, (6.15a) is input-to-state stable with ζ as input. Thus the origin of the cascade system (6.15) is globally exponentially stable for any kc ∈ (0, kc∗) (Khalil, 2002). Therefore there exist a finite vector p̃∞ ∈ R2 and a constant kc∗ > 0 such that, for any kc ∈ (0, kc∗), [epT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T (1N ⊗ p̃∞)T]T.
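The positive definiteness of the averaged matrix M̄i, which the averaging argument relies on, can be checked numerically. The sketch below uses assumed values of kt and the initial heading θ0; since ωi = cos(kt t), the heading is θi(t) = θ0 + sin(kt t)/kt, which is 2π/kt-periodic, so the average is taken over one period:

```python
import numpy as np

kt, theta0 = 1.0, 0.3
tau = np.linspace(0.0, 2 * np.pi / kt, 20000, endpoint=False)
theta = theta0 + np.sin(kt * tau) / kt     # heading along the periodic orbit

# M_bar = (kt / 2 pi) * integral over one period of M(theta(tau)), approximated
# by the mean over a uniform grid (valid since the integrand is periodic).
c, s = np.cos(theta), np.sin(theta)
M_bar = np.array([[np.mean(c * c), np.mean(s * c)],
                  [np.mean(s * c), np.mean(s * s)]])

# Each M(theta) is only a rank-1 projection, but the heading sweeps an arc,
# so the time average is positive definite.
print(np.linalg.eigvalsh(M_bar).min() > 0)   # True
```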
6.3 Formation control based on position estimation: double-integrator case

6.3.1 Problem statement
We consider the following N double-integrator modeled agents:

ṗi = vi,   (6.16a)
v̇i = ui,   (6.16b)

where pi ∈ Rn, vi ∈ Rn, and ui ∈ Rn denote the position, the velocity, and the control input of agent i, respectively, for all i = 1, . . . , N. The overall dynamic equation for the agents is described by

ṗ = v,   (6.17a)
v̇ = u,   (6.17b)

where p = [p1T · · · pNT]T, v = [v1T · · · vNT]T, and u = [u1T · · · uNT]T.
The interaction graph for the agents is given by a weighted directed graph G = (V, E, W). Thus agent i measures the following variables:

pji := pj − pi,   (6.18a)
vji := vj − vi.   (6.18b)

Additionally, we assume that each agent estimates its position and velocity and receives the estimated values of its neighbors through communication. Thus the following variables are assumed to be available to agent i:

p̂ji := p̂j − p̂i,   (6.19a)
v̂ji := v̂j − v̂i,   (6.19b)

where p̂i and v̂i denote the estimated position and velocity of agent i, respectively. We denote [p̂1T · · · p̂NT]T and [v̂1T · · · v̂NT]T by p̂ and v̂, respectively.
Given p∗ = [p1∗T · · · pN∗T]T and v∗ = [v1∗T · · · vN∗T]T, the formation problem for the double-integrator modeled agents is then stated as follows:

Problem 6.3.1 Suppose that the interaction graph is given by G = (V, E, W) for the N double-integrator modeled agents (6.16). Based on the measurements (6.18) and (6.19), design an estimation law for p̂ and v̂ such that p̂j(t) − p̂i(t) → pj(t) − pi(t) and v̂j(t) − v̂i(t) → vj(t) − vi(t), and a control law to stabilize p such that pj(t) − pi(t) → pj∗ − pi∗ and vj(t) − vi(t) → vj∗ − vi∗ as t → ∞ for all i, j ∈ V.
6.3.2 Control strategy and stability analysis
For the N double-integrator modeled agents under the assumptions of Problem 6.3.1, an estimation law can be designed as follows:

p̂̇i = v̂i,   (6.20a)
v̂̇i = kop Σj∈Ni wij [(p̂j − p̂i) − (pj − pi)] + kov Σj∈Ni wij [(v̂j − v̂i) − (vj − vi)] + ui,   (6.20b)

where kop > 0 and kov > 0. Defining p̃ := p − p̂ and ṽ := v − v̂, we obtain the estimation error dynamics as

ṗ̃ = ṽ,   (6.21a)
ṽ̇ = −kop (L ⊗ In) p̃ − kov (L ⊗ In) ṽ.   (6.21b)
Based on the result in Ren & Atkins (2007), we have the following lemma, which provides a necessary and sufficient condition for the convergence of the estimator (6.20).

Lemma 6.3.1 For the estimation error dynamics (6.21), there exist p̃∞ ∈ Rn and ṽ∞ ∈ Rn such that [p̃T ṽT]T exponentially converges to [(1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T]T if and only if G has a spanning tree and kov μl ± √(kov² μl² + 4 kop μl) < 0 for all μl ≠ 0, where μl, l = 1, . . . , N, are the eigenvalues of −L.
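The quantities kov μl ± √(kov² μl² + 4 kop μl) in Lemma 6.3.1 are complex in general (so the condition is read on their real parts), and they are, up to the factor 1/2, exactly the eigenvalues of the error system matrix in (6.21). A numerical sketch for an assumed 3-agent directed cycle and assumed gains, with n = 1 for compactness:

```python
import numpy as np

N, kop, kov = 3, 1.0, 2.0
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = np.diag(A.sum(axis=1)) - A
mu = np.linalg.eigvals(-L).astype(complex)   # eigenvalues of -L

# Closed-form roots (kov*mu +/- sqrt(kov^2 mu^2 + 4 kop mu)) / 2, per mode.
disc = np.sqrt(kov**2 * mu**2 + 4 * kop * mu)
roots = np.concatenate([(kov * mu + disc) / 2, (kov * mu - disc) / 2])

# Direct spectrum of the block system matrix of (6.21) (one coordinate, n = 1).
E = np.block([[np.zeros((N, N)), np.eye(N)],
              [-kop * L, -kov * L]])
direct = np.linalg.eigvals(E)

key = lambda z: (round(z.real, 6), round(z.imag, 6))
print(np.allclose(sorted(roots, key=key), sorted(direct, key=key), atol=1e-5))  # True
```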
Based on the estimator (6.20), a control law for the agents can be designed as

ui = −kcp (p̂i − pi∗) − kcv (v̂i − vi∗),   (6.22)

where kcp > 0 and kcv > 0. Defining ep := p − p∗ and ev := v − v∗, from (6.17), (6.18), (6.20), and (6.22), we obtain the overall error dynamics of the agents as follows:

ėp = ev,   (6.23a)
ėv = −kcp ep − kcv ev + kcp p̃ + kcv ṽ,   (6.23b)
ṗ̃ = ṽ,   (6.23c)
ṽ̇ = −kop (L ⊗ In) p̃ − kov (L ⊗ In) ṽ.   (6.23d)
The following theorem states a necessary and sufficient condition for the stability of the overall error dynamics (6.23):
Theorem 6.3.1 For the error dynamics (6.23), there exist p̃∞ ∈ Rn and ṽ∞ ∈ Rn such that [epT evT p̃T ṽT]T exponentially converges to [(1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T (1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T]T if and only if G has a spanning tree and kov μl ± √(kov² μl² + 4 kop μl) < 0 for all μl ≠ 0, where μl, l = 1, . . . , N, are the eigenvalues of −L.
Proof: From Lemma 6.3.1, there exist p̃∞ ∈ Rn and ṽ∞ ∈ Rn such that [p̃T ṽT]T exponentially converges to [(1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T]T if and only if G has a spanning tree and the eigenvalue condition of the theorem holds. Define ξ and ζ as

ξ := [ (ep − [1N ⊗ p̃∞ + t(1N ⊗ ṽ∞)])T  (ev − 1N ⊗ ṽ∞)T ]T,
ζ := [ (p̃ − [1N ⊗ p̃∞ + t(1N ⊗ ṽ∞)])T  (ṽ − 1N ⊗ ṽ∞)T ]T.

Then we have

ξ̇ = Aξ ξ + Bξ ζ,  ζ̇ = Aζ ζ,

where

Aξ := [ 0, InN ; −kcp InN, −kcv InN ],  Bξ := [ 0, 0 ; kcp InN, kcv InN ],

and Aζ is the system matrix of the estimation error dynamics (6.21). Since Aξ is Hurwitz and, from Lemma 6.3.1, ζ globally exponentially converges to the origin, there exist positive constants kξ, λξ, kζ, and λζ such that

‖ξ(t)‖ ≤ kξ ‖ξ(0)‖ e^{−λξ t} + (kξ ‖Bξ‖/λξ) sup_{0≤τ≤t} ‖ζ(τ)‖,
‖ζ(t)‖ ≤ kζ ‖ζ(0)‖ e^{−λζ t}.

It then follows that there exist positive constants kη and λη such that

‖η(t)‖ ≤ kη ‖η(0)‖ e^{−λη t},

where η = [ξT ζT]T. Therefore there exist p̃∞ ∈ Rn and ṽ∞ ∈ Rn such that [epT evT p̃T ṽT]T exponentially converges to [(1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T (1N ⊗ p̃∞ + t(1N ⊗ ṽ∞))T (1N ⊗ ṽ∞)T]T.
While Theorem 6.3.1 provides a condition under which p(t) asymptotically converges to p∗ + 1N ⊗ p̃∞ + t(1N ⊗ ṽ∞) as t → ∞, it does not ensure that p(t) converges to a finite vector. Indeed, p(t) grows linearly with time when ṽ∞ ≠ 0. According to Ren & Atkins (2007), ṽ∞ = 0 only when v̂(0) = v(0).
6.3.3 Reduced-order position estimation
In this subsection, we design a reduced-order position estimator under the assumption that each agent measures its own velocity. Since vi is available to agent i in this case, a reduced-order estimator can be designed as

p̂̇i = kop Σj∈Ni wij [(p̂j − p̂i) − (pj − pi)] + vi,   (6.24)

where kop > 0. Then we have the following estimation error dynamics:

ṗ̃ = −kop (L ⊗ In) p̃,   (6.25)

which has the same form as (6.6). Thus, based on Theorem 5.2.1, we present the following lemma regarding the convergence of the position estimator (6.24).
Lemma 6.3.2 For the error dynamics (6.25), there exists a constant p̃∞ ∈ Rn such that p̃ exponentially converges to 1N ⊗ p̃∞ if and only if G has a spanning tree.
Based on the position estimator (6.24), a position controller can be designed as

ui = −kcp (p̂i − pi∗) − kcv (vi − vi∗),   (6.26)

where kcp > 0 and kcv > 0. From (6.17), (6.19a), (6.24), and (6.26), the overall error dynamics is then arranged as

ėp = ev,   (6.27a)
ėv = −kcp ep − kcv ev + kcp p̃,   (6.27b)
ṗ̃ = −kop (L ⊗ In) p̃.   (6.27c)
The following theorem states a necessary and sufficient condition for the convergence of the overall error dynamics (6.27):

Theorem 6.3.2 For the error dynamics (6.27), there exists p̃∞ ∈ Rn such that [epT evT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T 0T (1N ⊗ p̃∞)T]T if and only if G has a spanning tree.
Proof: It follows from Lemma 6.3.2 that there exists p̃∞ ∈ Rn such that p̃ exponentially converges to 1N ⊗ p̃∞ if and only if G has a spanning tree. We define ξ and ζ as follows:

ξ := [ (ep − 1N ⊗ p̃∞)T  evT ]T,  ζ := p̃ − 1N ⊗ p̃∞.

Then we obtain

ξ̇ = Aξ ξ + Bξ ζ,  ζ̇ = −kop (L ⊗ In) ζ,

where

Aξ := [ 0, InN ; −kcp InN, −kcv InN ],  Bξ := [ 0 ; kcp InN ].

Since kcp > 0 and kcv > 0, Aξ is Hurwitz, and thus there exist positive constants kξ, λξ, kζ, and λζ such that

‖ξ(t)‖ ≤ kξ ‖ξ(0)‖ e^{−λξ t} + (kξ ‖Bξ‖/λξ) sup_{0≤τ≤t} ‖ζ(τ)‖,
‖ζ(t)‖ ≤ kζ ‖ζ(0)‖ e^{−λζ t}.

It then follows that there exist positive constants kη and λη such that

‖η(t)‖ ≤ kη ‖η(0)‖ e^{−λη t},

where η := [ξT ζT]T. Therefore [epT evT p̃T]T exponentially converges to [(1N ⊗ p̃∞)T 0T (1N ⊗ p̃∞)T]T if and only if G has a spanning tree.
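A direct simulation of the reduced-order error dynamics (6.27) illustrates Theorem 6.3.2. The sketch below assumes a 3-agent directed path in the plane with arbitrary gains and initial errors; [ep, ev, p̃] should converge to [1N ⊗ p̃∞, 0, 1N ⊗ p̃∞], where p̃∞ is the root agent's constant estimation error:

```python
import numpy as np

N, n = 3, 2
kcp, kcv, kop, dt, steps = 1.0, 2.0, 2.0, 0.001, 40000
A = np.zeros((N, N)); A[1, 0] = A[2, 1] = 1.0   # path: spanning tree rooted at 0
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(2)
e_p, e_v, p_t = (rng.normal(size=(N, n)) for _ in range(3))

# Forward-Euler integration of (6.27a)-(6.27c).
for _ in range(steps):
    e_p, e_v, p_t = (e_p + dt * e_v,
                     e_v + dt * (-kcp * e_p - kcv * e_v + kcp * p_t),
                     p_t + dt * (-kop * (L @ p_t)))

# p_t[0] never changes (the root has no neighbors), so it plays the role of p_tilde_inf.
ok = (np.allclose(p_t, p_t[0], atol=1e-4) and
      np.allclose(e_v, 0.0, atol=1e-4) and
      np.allclose(e_p, p_t[0], atol=1e-4))
print(ok)   # True
```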
6.4 Simulation results
In this section, we present simulation results of formation control of six single- and double-integrator modeled agents. The interaction graphs shown in Figure 6.3 are assumed throughout the simulations.

Figure 6.4 depicts the formation trajectory and the norms of the control and estimation errors for six single-integrator modeled agents under the observer (6.5) and the controller (6.7). The interaction graph shown in Figure 6.3b switches every 1 ms. The desired formation is rotated by π/4 at t = 10 s. As depicted in Figure 6.4b, the control and estimation errors converge to constants, as expected from Theorem 6.2.1.

Figure 6.5 shows the trajectory of p and the norm of the formation error for a six single-integrator modeled agent group with the time-varying interaction graph depicted in Figure 6.3b, under the displacement-based formation control law proposed in Ren & Atkins (2007). Comparing the results in Figures 6.4 and 6.5, the advantage of the proposed strategy is evident. While the trajectories of the agents depend on the connectivity among the agents under the existing control law (Ren & Atkins, 2007), they can be directly stabilized under the proposed strategy once the error dynamics of the position estimator has reached a steady state. That is, under the proposed strategy each agent moves from the first desired formation to the second desired formation along a straight line, as depicted in Figure 6.4.
Figure 6.3: The interaction graphs for the six agents. (a) Static interaction graph. (b) Switching interaction graph, with edges activated one at a time in the order (1): 2 → 1, (2): 3 → 2, (3): 4 → 2, (4): 5 → 3, (5): 6 → 5, (6): 1 → 2.
Figure 6.6 shows the result of controlling six double-integrator modeled agents under the full-order observer (6.20) and the controller (6.22). The fixed interaction graph depicted in Figure 6.3a is assumed. The estimation error p̃ grows with time, as shown in Figure 6.6a, which results from the initial velocity estimation error. Accordingly, ‖[epT evT]T‖ and ‖[p̃T ṽT]T‖ also increase with time, as depicted in Figure 6.6b.

The simulation result for six double-integrator modeled agents under the reduced-order observer (6.24) and the controller (6.26) is shown in Figure 6.7. The interaction graph of the agents is fixed as depicted in Figure 6.3a, and the desired formation is rotated by π/4 at t = 10 s. As depicted in Figure 6.7b, ‖[epT evT]T‖ converges to a constant value.

Figure 6.8 depicts the simulation result of six unicycle-like robots under the estimator (6.5) and the controller (6.12). The interaction graph of the agents is given as depicted in Figure 6.3a, and the desired formation is rotated by π/4 at t = 10 s. The desired formation is achieved as shown in Figure 6.8.
Figure 6.4: Simulation result of the six single-integrators under (6.5) and (6.7). (a) The trajectory of the position p. (b) The norm of the control error ‖ep‖ and the norm of the estimation error ‖p̃‖.
Figure 6.5: Simulation result of the six single-integrators under the existing displacement-based formation control law. (a) The trajectory of the position p. (b) The norm of the control error ‖ep‖.
Figure 6.6: Simulation result of the six double-integrators under (6.20) and (6.22). (a) The trajectory of the position p. (b) The norm of the control error ‖[epT evT]T‖ and the norm of the estimation error ‖[p̃T ṽT]T‖.
Figure 6.7: Simulation result of the six double-integrators under (6.24) and (6.26). (a) The trajectory of the position p. (b) The norm of the control error ‖[epT evT]T‖ and the norm of the estimation error ‖p̃‖.
Figure 6.8: Simulation result of the six unicycles under (6.5) and (6.12). (a) The trajectory of the position p. (b) The norm of the control error ‖ep‖ and the norm of the estimation error ‖p̃‖.
6.5 Conclusion
We proposed a formation control strategy based on position estimation for single- and double-integrator modeled agents in n-dimensional space. The key motivation underlying the proposed strategy is to enhance the performance of relative displacement-based formation control schemes so that they exhibit characteristics similar to those of position-based schemes. As demonstrated in the numerical simulations, the proposed strategy provides effective solutions to formation control of the agents. In particular, once the error dynamics of the position estimator has reached a steady state, formation control reduces to the point stabilization of individual agents.
Chapter 7
Formation control based on orientation and position estimation
In this chapter, we propose a formation control strategy based on orientation and position estimation for single-integrator modeled agents in two-dimensional space. Under the assumption that the orientations of the agents' local reference frames are not aligned, due to the absence of a common sense of orientation, the proposed strategy consists of an orientation estimation law, a position estimation law, and a position control law based on the estimated positions. Under the proposed strategy, if the interaction graph of the agents has a spanning tree and all initial orientation angles belong to an interval of arc length less than π, then the orientations are asymptotically estimated, the estimated positions asymptotically converge to the actual positions up to congruence, and the desired formation is asymptotically achieved. Simulation results support the effectiveness of the proposed control strategy.
7.1 Introduction
Formation control based on local information has been attracting a considerable amount
of research interest. Depending on what information is available to the agents, various formation control problems can be formulated. In the literature, two kinds of approaches, which might be called displacement-based and distance-based approaches, are predominantly studied. A key difference between the two arises from whether a common sense of orientation is available to the agents.
In displacement-based approaches (Dimarogonas & Kyriakopoulos, 2008; Lin et al.,
2004, 2005; Ren & Atkins, 2007), agents maintain their own local reference frames whose
orientations are aligned to each other and measure the relative displacements of their neighboring agents. Subsequently, the formation of the agents is stabilized by controlling the relative displacements. The control laws proposed in Dimarogonas & Kyriakopoulos (2008);
Lin et al. (2004, 2005); Ren & Atkins (2007) ensure global asymptotic convergence of the
formation to the desired formation up to translation.
The orientations of the local reference frames of agents are not aligned to each other in distance-based approaches (Cao et al., 2011; Dörfler & Francis, 2010; Krick et al., 2009; Oh & Ahn, 2011d). In these approaches, it is assumed that the local reference frames are not aligned because the agents do not share a common sense of orientation. Though each agent measures the relative displacements of its neighboring agents with respect to its local reference frame, the directional information contained in the relative displacements is meaningful only to that agent because the orientations are not aligned. Thus the formation of the agents is stabilized by controlling the Euclidean norms of the relative displacements rather than the displacements themselves. Formation control problems in distance-based approaches are generally intractable, and thus only local stability has been analyzed (Dörfler & Francis, 2010; Krick et al., 2009; Oh & Ahn, 2011d).
Though displacement-based approaches provide effective solutions to formation control, they require agents to carry a direction sensor such as a compass. Distance-based approaches are of interest because agents need not share global information such as a common sense of orientation, but the resulting formation control problems are generally intractable. Meanwhile, the formation of agents can be controlled more effectively if position information is available, whereas it is assumed in displacement- and distance-based approaches that position information is not available to the agents.
We propose a formation control strategy for single-integrator modeled agents in the plane based on orientation angle and position estimation. If the graph associated with the interaction topology of the agents has a spanning tree and all the orientation angles belong to an interval with arc length less than π, then the proposed strategy ensures the exponential convergence of the estimated orientation angles to the actual angles up to a common offset, the exponential convergence of the estimated relative positions to the actual positions up to congruence, and the exponential convergence of the formation to the desired formation up to congruence. Further, if there is an agent that has no neighboring agents and knows its absolute orientation and absolute position, then all the agents estimate their absolute positions. In this case, the formation of the agents asymptotically converges to the absolute desired formation.
7.2 Formation control based on orientation and position estimation
7.2.1 Problem statement
We consider N single-integrator modeled mobile agents in the plane:

ṗ_i = u_i, i = 1, . . . , N, (7.1)

where p_i ∈ R² and u_i ∈ R² denote the position and control input, respectively, of agent i with respect to the global reference frame, which is denoted by Σ_g.

We assume that agent i maintains its own local reference frame with its origin at p_i and orientation angle θ_i ∈ (−π, π] with respect to Σ_g, as illustrated in Fig. 7.1. We denote the local reference frame of agent i by Σ_i. By adopting a notation in which superscripts are used to denote reference frames, the agents are described as

ṗ_i^i = u_i^i, i = 1, . . . , N, (7.2)

where p_i^i ∈ R² and u_i^i ∈ R² denote the position and control input, respectively, of agent i with respect to Σ_i.
Given a graph G = (V, E, W) such that |V| = N, we assume that agent i measures the relative displacements of its neighboring agents with respect to Σ_i:

p_ji^i := p_j^i − p_i^i, j ∈ N_i, i ∈ V. (7.3)

We refer to G as the interaction graph for the agents. Defining θ_ji as

θ_ji := P_V(θ_j − θ_i), j ∈ N_i, (7.4)

where P_V(θ_j − θ_i) := [(θ_j − θ_i + π) mod 2π] − π, we additionally assume that θ_ji, j ∈ N_i, are available to agent i. The relative orientations can be measured as follows. As depicted in Fig. 7.1, agent j senses the angle δ_ij and transmits the sensed value to agent i by communication. Agent i then senses the angle δ_ji and calculates the relative orientation of its neighbor as P_V(θ_j − θ_i) = P_V(δ_ji − δ_ij + π).
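The projection P_V and this relative-orientation computation are easy to check numerically. The sketch below (a quick illustration, not part of the dissertation) implements P_V and verifies, for randomly generated poses, that P_V(δ_ji − δ_ij + π) recovers P_V(θ_j − θ_i); the bearing convention used here to generate δ_ij and δ_ji is an assumption made for the illustration.

```python
import numpy as np

def P_V(x):
    """Wrap an angle (rad) via P_V(x) = [(x + pi) mod 2*pi] - pi."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

rng = np.random.default_rng(1)
for _ in range(100):
    pi_, pj = rng.uniform(-5, 5, 2), rng.uniform(-5, 5, 2)  # positions of i and j
    ti, tj = rng.uniform(-np.pi, np.pi, 2)                  # orientation angles
    d = pj - pi_
    phi = np.arctan2(d[1], d[0])        # global bearing from i to j
    d_ji = P_V(phi - ti)                # bearing of j measured by i, in frame i
    d_ij = P_V(phi + np.pi - tj)        # bearing of i measured by j, in frame j
    # the communicated pair (d_ij, d_ji) recovers the relative orientation
    e = abs(P_V(d_ji - d_ij + np.pi) - P_V(tj - ti))
    assert min(e, 2.0 * np.pi - e) < 1e-9
```

The bearings δ cancel the common line-of-sight angle, which is why only the two sensed angles and one communicated value are needed.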
We further assume that the agents estimate their orientation angles with respect to Σ_g and their positions with respect to a common reference frame Σ_c, which will be defined below, and that they receive the estimated values of their neighbors through communication. Thus θ̂_ji and p̂_ji^c are available to agent i:

θ̂_ji := P_V(θ̂_j − θ̂_i), p̂_ji^c := p̂_j^c − p̂_i^c, j ∈ N_i, i ∈ V, (7.5)

where θ̂_i is the estimated orientation angle of agent i with respect to Σ_g and p̂_i^c is the estimated position of agent i with respect to Σ_c.

Figure 7.1: Measurement of relative orientation angle.
Denoting the position of agent i with respect to Σ_c by p_i^c, the formation control problem is then stated as follows:

Problem 7.2.1 For N agents modeled by (7.1), let G be the interaction graph of the agents. Given p^∗ = [p_1^{∗T} · · · p_N^{∗T}]^T ∈ R^{2N}, design a formation control strategy such that θ̂_i − θ̂_j → 0, p̂_i^c − p̂_j^c → p_i^c − p_j^c, and p_i^c − p_j^c → p_i^∗ − p_j^∗, based on the measurements (7.3), (7.4) and (7.5).
The orientations of Σ_g and Σ_c are not necessarily aligned, while their origins are located at the same position, as depicted in Fig. 7.1. Thus the objective of Problem 7.2.1 is to stabilize p to p^∗ only up to congruence.
7.2.2 Control strategy and stability analysis
For the agents under the assumptions of Problem 7.2.1, an orientation estimation law can be designed as follows:

θ̂˙_i = k_θ̂ Σ_{j∈N_i} w_ij (θ̂_ji − θ_ji), (7.6)

where k_θ̂ > 0 and θ̂_i(0) = 0 for all i = 1, . . . , N. Suppose that max_{i∈V} θ_i(0) − min_{i∈V} θ_i(0) < π. Defining θ̃_i := θ̂_i − θ_i, we then have max_{i∈V} θ̃_i(0) − min_{i∈V} θ̃_i(0) < π since θ̂_i(0) = 0 for all i = 1, . . . , N. Thus the orientation estimation error dynamics is written as

θ̃˙ = −k_θ̂ L θ̃, (7.7)

where θ̃ = [θ̃_1 · · · θ̃_N]^T. From Theorem 5.2.1, we have the following result:
Lemma 7.2.1 For the orientation estimation error dynamics (7.7), if max_{i∈V} θ_i(0) − min_{i∈V} θ_i(0) < π and G has a spanning tree, then there exists a finite constant θ̃_∞ such that θ̃(t) exponentially converges to θ̃_∞ 1_N.
Suppose that an agent is located at the root of a spanning tree of G and that this agent knows its orientation angle with respect to Σ_g. In this case, θ̃(t) → 0 as t → ∞.
Based on Lemma 7.2.1, we can consider a reference frame whose origin is located at the global origin and whose orientation angle is −θ̃_∞ with respect to Σ_g. We refer to this reference frame as the estimated common reference frame of the agents and denote it by Σ_c.
Each agent estimates its position with respect to Σ_c. A relative position estimation law for the agents can be designed as follows:

p̂˙_i^c = k_p̂ Σ_{j∈N_i} w_ij [(p̂_j^c − p̂_i^c) − R(−θ̂_i)(p_j^i − p_i^i)] + R(−θ̂_i) u_i^i, (7.8)

where k_p̂ > 0. Let e_θi := θ̃_∞ − θ̃_i. Then we can arrange (7.8) as

p̂˙_i^c = k_p̂ Σ_{j∈N_i} w_ij (p̂_j^c − p̂_i^c) − k_p̂ Σ_{j∈N_i} w_ij R(−θ̂_i) R(θ_i + θ̃_∞)(p_j^c − p_i^c) + R(−θ̂_i) R(θ_i + θ̃_∞) u_i^c
        = k_p̂ Σ_{j∈N_i} w_ij [(p̂_j^c − p̂_i^c) − (p_j^c − p_i^c)] + k_p̂ Σ_{j∈N_i} w_ij [I_2 − R(−e_θi)](p_j^c − p_i^c) + R^{−1}(e_θi) u_i^c. (7.9)
Based on the estimated position, a position control law for the agents can be designed as follows:

u_i^i = −k_p (p̂_i^c − p_i^∗), (7.10)

where k_p > 0.
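To see how the orientation estimation law (7.6), the position estimation law (7.8), and the control law (7.10) fit together, the following self-contained sketch simulates the closed loop for four agents. The graph, gains, desired formation, and the rotation convention v^i = R(θ_i)^T v^g for frame changes are assumptions made for this illustration, not taken from the dissertation; the check is that the achieved formation is congruent to the desired one, i.e., the inter-agent distances match.

```python
import numpy as np

def wrap(a):
    # P_V: wrap an angle into [-pi, pi)
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def Rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

N = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # connected undirected graph
nbrs = {i: [] for i in range(N)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

rng = np.random.default_rng(0)
theta = rng.uniform(-1.2, 1.2, N)        # fixed orientations, spread < pi
p = rng.uniform(-3.0, 3.0, (N, 2))       # actual positions, global frame
p_hat = np.zeros((N, 2))                 # position estimates, common frame
th_hat = np.zeros(N)                     # orientation estimates, th_hat(0) = 0
p_star = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)  # desired square

k_th, k_ph, k_p = 2.0, 2.0, 1.0
dt = 0.01
for _ in range(4000):                    # 40 s of the closed loop
    u_loc = -k_p * (p_hat - p_star)      # (7.10): control in each local frame
    dth = np.zeros(N)
    dph = np.zeros((N, 2))
    for i in range(N):
        Ri = Rot(th_hat[i])
        for j in nbrs[i]:
            # (7.6): orientation estimation from relative orientations
            dth[i] += k_th * ((th_hat[j] - th_hat[i]) - wrap(theta[j] - theta[i]))
            # relative displacement measured in frame i (convention v^i = R(theta_i)^T v^g)
            m_ij = Rot(theta[i]).T @ (p[j] - p[i])
            # (7.8): position estimation in the common frame
            dph[i] += k_ph * ((p_hat[j] - p_hat[i]) - Ri @ m_ij)
        dph[i] += Ri @ u_loc[i]          # feedthrough of the agent's own motion
    p += dt * np.stack([Rot(theta[i]) @ u_loc[i] for i in range(N)])
    th_hat += dt * dth
    p_hat += dt * dph

# congruence check: achieved inter-agent distances match the desired ones
err = max(abs(np.linalg.norm(p[i] - p[j]) - np.linalg.norm(p_star[i] - p_star[j]))
          for i, j in edges)
assert err < 1e-2
```

The achieved formation generally differs from p^∗ by a common rotation and translation, which is exactly the "up to congruence" objective of Problem 7.2.1, so only distances are compared.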
Define e_pci := p_i^c − p_i^∗ and p̃_i^c := p_i^c − p̂_i^c. Then the error dynamics of agent i can be arranged as

ė_pci = −k_p (e_pci − p̃_i^c) + k_p [I_2 − R(e_θi)](e_pci − p̃_i^c), (7.11a)
p̃˙_i^c = −k_p̂ Σ_{j∈N_i} w_ij (p̃_i^c − p̃_j^c) − k_p̂ Σ_{j∈N_i} w_ij [I_2 − R(−e_θi)](e_pcj − e_pci + p_j^∗ − p_i^∗) + k_p [I_2 − R(e_θi)](e_pci − p̃_i^c), (7.11b)
ė_θi = −k_θ̂ Σ_{j∈N_i} w_ij (e_θi − e_θj). (7.11c)
Defining Γ(e_θ) := diag(R(e_θ1), . . . , R(e_θN)), we then obtain the following overall error dynamics for the agents:

ė_pc = −k_p e_pc + k_p p̃^c + k_p [I_2N − Γ(e_θ)](e_pc − p̃^c), (7.12a)
p̃˙^c = −k_p̂ (L ⊗ I_2) p̃^c + k_p̂ [I_2N − Γ^{−1}(e_θ)](L ⊗ I_2)(e_pc + p^∗) + k_p [I_2N − Γ(e_θ)](e_pc − p̃^c), (7.12b)
ė_θ = −k_θ̂ L e_θ, (7.12c)

where e_pc := [e_pc1^T · · · e_pcN^T]^T, p̃^c := [p̃_1^{cT} · · · p̃_N^{cT}]^T, and e_θ := [e_θ1 · · · e_θN]^T.
If all the agents know their orientation angles, then we can assume that e_θ ≡ 0. In this case, the overall error dynamics (7.12) reduces to

ė_p = A e_p, (7.13)

where e_p := [e_pc^T p̃^{cT}]^T and

A := [ −k_p I_2N      k_p I_2N
        0             −k_p̂ (L ⊗ I_2) ].

Note that the error dynamics (7.12) can be viewed as the cascade of (7.13) and additional dynamics that result from the orientation estimation errors. To clarify this, let us describe (7.12) as

ė_p = A e_p + ∆A(e_θ) e_p + D(e_θ), (7.14a)
ė_θ = −k_θ̂ L e_θ, (7.14b)
where

∆A(e_θ) := [I_2N − Γ(e_θ)] ∆A_1 + [I_2N − Γ^{−1}(e_θ)] ∆A_2, (7.15)

with

∆A_1 := [ k_p I_2N    −k_p I_2N
          k_p I_2N    −k_p I_2N ],   ∆A_2 := [ 0                0
                                               k_p̂ (L ⊗ I_2)   0 ],

and

D(e_θ) := [ 0
            k_p̂ [I_2N − Γ^{−1}(e_θ)](L ⊗ I_2) p^∗ ].
From (7.14), it is obvious that the overall error dynamics is the cascade system of ėp = Aep
and some additional dynamics generated by eθ .
We analyze the stability of the overall error dynamics (7.14). We first show that the term ‖∆A(e_θ(t))e_p(t) + D(e_θ(t))‖ exponentially converges to zero as t → ∞ under a certain condition. Based on this exponential convergence, we then show that there exists a finite vector p̃_∞^c such that e_pc(t) and p̃^c(t) converge to p̃_∞^c as t → ∞ under the same condition.

The following lemma states the exponential convergence of ‖I_2N − Γ(e_θ(t))‖ and ‖I_2N − Γ^{−1}(e_θ(t))‖ to zero as t → ∞:

Lemma 7.2.2 If max_{i∈V} θ_i(0) − min_{i∈V} θ_i(0) < π and G has a spanning tree, there exist constants k_γ > 0 and λ_γ > 0 such that ‖I_2N − Γ(e_θ(t))‖ ≤ k_γ e^{−λ_γ t}‖e_θ(0)‖ and ‖I_2N − Γ^{−1}(e_θ(t))‖ ≤ k_γ e^{−λ_γ t}‖e_θ(0)‖ for all t ≥ 0.
Proof: From Lemma 7.2.1, there exist constants k_eθ > 0 and λ_eθ > 0 such that

‖e_θ(t)‖ ≤ k_eθ e^{−λ_eθ t}‖e_θ(0)‖ (7.16)

for all t ≥ 0. Due to the property of block diagonal matrices,

‖I_2N − Γ(e_θ(t))‖ = max_{i=1,...,N} ‖I_2 − R(e_θi(t))‖.

Since ‖I_2 − R(e_θi(t))‖ = √(2 − 2 cos(e_θi(t))), we have ‖I_2 − R(e_θi(t))‖ ≤ √2 |e_θi(t)| under the condition that max_{i∈V} e_θi(t) − min_{i∈V} e_θi(t) < π, which leads to

‖I_2N − Γ(e_θ(t))‖ ≤ √2 max_{i=1,...,N} |e_θi(t)| ≤ √2 ‖e_θ(t)‖. (7.17)

Since max_{i∈V} θ_i(0) − min_{i∈V} θ_i(0) < π, it follows that max_{i∈V} e_θi(0) − min_{i∈V} e_θi(0) < π, which leads to max_{i∈V} e_θi(t) − min_{i∈V} e_θi(t) < π for all t ≥ 0 (Moreau, 2004). Similarly, it can be shown that

‖I_2N − Γ^{−1}(e_θ(t))‖ ≤ √2 ‖e_θ(t)‖. (7.18)

From (7.16), (7.17) and (7.18), there exist constants k_γ > 0 and λ_γ > 0 such that ‖I_2N − Γ(e_θ(t))‖ ≤ k_γ e^{−λ_γ t}‖e_θ(0)‖ and ‖I_2N − Γ^{−1}(e_θ(t))‖ ≤ k_γ e^{−λ_γ t}‖e_θ(0)‖ for all t ≥ 0. ∎

From Lemma 7.2.2, it is obvious that ‖∆A(e_θ(t))‖ and ‖D(e_θ(t))‖ exponentially converge to zero as t → ∞. To confirm that ‖∆A(e_θ(t))e_p(t) + D(e_θ(t))‖ exponentially converges to zero as t → ∞, we need to show the boundedness of e_p(t) for all t ≥ 0. The following lemma is useful for the investigation of the boundedness:
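The identity ‖I_2 − R(θ)‖ = √(2 − 2 cos θ) and the bound by √2|θ| used in the proof above can be spot-checked numerically (a quick illustration, not part of the dissertation):

```python
import numpy as np

def Rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

for th in np.linspace(-3.0, 3.0, 61):
    n = np.linalg.norm(np.eye(2) - Rot(th), 2)          # spectral norm
    assert abs(n - np.sqrt(2.0 - 2.0 * np.cos(th))) < 1e-12
    assert n <= np.sqrt(2.0) * abs(th) + 1e-12          # bound from the proof
```

Both singular values of I_2 − R(θ) coincide, which is why the spectral norm has this closed form.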
Lemma 7.2.3 For the overall error dynamics (7.14), suppose that ∆A(eθ ) ≡ 0 and D(eθ ) ≡
0. Then there exists a finite vector p̃c∞ ∈ R2 such that epci (t) and p̃ci (t) exponentially converge
to p̃c∞ as t → ∞ for all i ∈ V if and only if G has a spanning tree.
Based on Lemma 7.2.3, the following lemma states a sufficient condition for the boundedness of ep (t) for all t ≥ 0:
Lemma 7.2.4 For the overall error dynamics (7.14), ep (t) is bounded for all t ≥ 0 if
maxi∈V θi (0) − mini∈V θi (0) < π and G has a spanning tree.
Proof: The solution of (7.14a) is given as

e_p(t) = e^{At} e_p(0) + ∫_0^t e^{A(t−τ)} [∆A(e_θ(τ)) e_p(τ) + D(e_θ(τ))] dτ

for all t ≥ 0. It then follows from Lemma 7.2.3 that there exists a constant M_A such that ‖e^{At}‖ ≤ M_A for all t ≥ 0. Further, from Lemma 7.2.2, there exist constants k_D > 0 and λ_D > 0 such that

‖D(e_θ(t))‖ ≤ k_D e^{−λ_D t} ‖D(e_θ(0))‖.

Thus we have

‖e_p(t)‖ ≤ ‖e^{At}‖‖e_p(0)‖ + ∫_0^t ‖e^{A(t−τ)}‖‖D(e_θ(τ))‖ dτ + ∫_0^t ‖e^{A(t−τ)}‖‖∆A(e_θ(τ))‖‖e_p(τ)‖ dτ
         ≤ M_A ‖e_p(0)‖ + (k_D/λ_D) M_A ‖D(e_θ(0))‖ + M_A ∫_0^t ‖∆A(e_θ(τ))‖‖e_p(τ)‖ dτ. (7.19)
Further, it follows from the Bellman-Gronwall lemma (Ioannou & Sun, 1996) that

‖e_p(t)‖ ≤ M_A ‖e_p(0)‖ e^{∫_0^t ‖∆A(e_θ(τ))‖dτ} + (k_D/λ_D) M_A ‖D(e_θ(0))‖ e^{∫_0^t ‖∆A(e_θ(τ))‖dτ}.

The boundedness of ‖e_p(t)‖ is then confirmed by showing the integrability of ‖∆A‖. The matrices ∆A_1 and ∆A_2 in (7.15) are bounded because L is constant. Then, from Lemma 7.2.2, we obtain

‖∆A(e_θ(t))‖ ≤ k_γ e^{−λ_γ t} ‖e_θ(0)‖ (‖∆A_1‖ + ‖∆A_2‖),

which implies that ∫_0^t ‖∆A(e_θ(τ))‖dτ is bounded for all t ≥ 0. ∎
We now present the following theorem, which is the main result of this chapter:

Theorem 7.2.1 If max_{i∈V} θ_i(0) − min_{i∈V} θ_i(0) < π and G has a spanning tree, then p̃^c(t) exponentially converges to the set E_2N := {[ξ_1^T · · · ξ_N^T]^T ∈ R^{2N} : ξ_i = ξ_j, ∀i, j = 1, . . . , N}, and there exists a point p̃_∞^c ∈ E_2N such that e_pc and p̃^c asymptotically converge to p̃_∞^c.
Proof: Let w := k_p̂ [I_2N − Γ^{−1}(e_θ)](L ⊗ I_2)(e_pc + p^∗) + k_p [I_2N − Γ(e_θ)](e_pc − p̃^c). Then (7.12b) can be written as

p̃˙^c = −k_p̂ (L ⊗ I_2) p̃^c + w. (7.20)

It follows from Lemma 7.2.4 that there exists a constant M_w such that M_w ≥ k_p̂ ‖L ⊗ I_2‖ ‖e_pc(t) + p^∗‖ + k_p ‖e_pc(t) − p̃^c(t)‖ for all t ≥ 0. Thus ‖w(t)‖ exponentially converges to zero, as shown in the following:

‖w(t)‖ ≤ k_p̂ ‖I_2N − Γ^{−1}(e_θ(t))‖ ‖L ⊗ I_2‖ ‖e_pc(t) + p^∗‖ + k_p ‖I_2N − Γ(e_θ(t))‖ ‖e_pc(t) − p̃^c(t)‖ ≤ k_γ e^{−λ_γ t} M_w ‖e_θ(0)‖. (7.21)
From Theorem 5.2.1, the equilibrium set E_2N of (7.20) is exponentially stable when w ≡ 0. Define the distance from any x ∈ R^{2N} to the set E_2N as

dist(x, E_2N) := inf_{ξ∈E_2N} ‖x − ξ‖.

Since E_2N is exponentially stable, there exist constants k_E > 0 and λ_E > 0 such that, for any x ∈ R^{2N},

dist(e^{−(L⊗I_2)t} x, E_2N) ≤ k_E e^{−λ_E t} dist(x, E_2N). (7.22)

Since E_2N is a subspace of R^{2N}, which is a Hilbert space, there exists an orthogonal complement E_2N^⊥ ⊂ R^{2N} such that R^{2N} = E_2N ⊕ E_2N^⊥. Then, for any x ∈ R^{2N}, there exist x_∥ ∈ E_2N and x_⊥ ∈ E_2N^⊥ such that x = x_∥ + x_⊥ and

dist(x, E_2N) = dist(x_⊥, E_2N) = ‖x_⊥‖.

Further, for any x, y ∈ R^{2N}, we obtain the following triangle inequality:

dist(x + y, E_2N) = ‖x_⊥ + y_⊥‖ ≤ ‖x_⊥‖ + ‖y_⊥‖ = dist(x, E_2N) + dist(y, E_2N).
Since the solution of (7.20) is given as

p̃^c(t) = e^{−(L⊗I_2)t} p̃^c(0) + ∫_0^t e^{−(L⊗I_2)(t−τ)} w(τ) dτ, (7.23)

we obtain

dist(p̃^c(t), E_2N) ≤ dist(e^{−(L⊗I_2)t} p̃^c(0), E_2N) + dist(∫_0^t e^{−(L⊗I_2)(t−τ)} w(τ) dτ, E_2N)
                   ≤ k_E e^{−λ_E t} dist(p̃^c(0), E_2N) + ∫_0^t dist(e^{−(L⊗I_2)(t−τ)} w(τ), E_2N) dτ =: k_E e^{−λ_E t} dist(p̃^c(0), E_2N) + η(t) (7.24)
based on (7.22) and (7.23). It follows from (7.22) that

η(t) ≤ k_E e^{−λ_E t} ∫_0^t e^{λ_E τ} dist(w(τ), E_2N) dτ
     ≤ k_E e^{−λ_E t} [e^{λ_E τ}/λ_E]_0^t sup_{0≤τ≤t} dist(w(τ), E_2N)
     < (k_E/λ_E) sup_{0≤τ≤t} dist(w(τ), E_2N). (7.25)
Note that dist(x, E_2N) ≤ ‖x‖ for any x ∈ R^{2N} because inf_{ξ∈E_2N} ‖x − ξ‖ ≤ ‖x − 0‖. Thus it follows from (7.21) that

sup_{0≤τ≤t} dist(w(τ), E_2N) ≤ sup_{0≤τ≤t} ‖w(τ)‖ ≤ sup_{0≤τ≤t} k_γ M_w e^{−λ_γ τ} ‖e_θ(0)‖ ≤ k_γ M_w ‖e_θ(0)‖. (7.26)
From (7.24), (7.25) and (7.26), we obtain

dist(p̃^c(t), E_2N) ≤ k_E e^{−λ_E t} dist(p̃^c(0), E_2N) + (k_E k_γ M_w/λ_E) ‖e_θ(0)‖. (7.27)

Applying the same estimates on the interval [t/2, t] instead of [0, t], and using ‖w(τ)‖ ≤ k_γ M_w e^{−λ_γ t/2} ‖e_θ(0)‖ for τ ≥ t/2, we have

dist(p̃^c(t), E_2N) ≤ k_E e^{−λ_E t/2} dist(p̃^c(t/2), E_2N) + (k_E k_γ M_w/λ_E) e^{−λ_γ t/2} ‖e_θ(0)‖
                   ≤ k_E² e^{−λ_E t} dist(p̃^c(0), E_2N) + (k_E² k_γ M_w/λ_E) e^{−λ_E t/2} ‖e_θ(0)‖ + (k_E k_γ M_w/λ_E) e^{−λ_γ t/2} ‖e_θ(0)‖,

which implies that p̃^c exponentially converges to E_2N. Further, since ‖e_p‖ is bounded by Lemma 7.2.4, there exists a point p̃_∞^c ∈ E_2N such that p̃^c(t) asymptotically converges to p̃_∞^c.
Next let us consider (7.12a). Let v := k_p [I_2N − Γ(e_θ)](e_pc − p̃^c). Then we have

e_pc(t) − p̃_∞^c = e^{−k_p t/2} (e_pc(t/2) − p̃_∞^c) + k_p ∫_{t/2}^t e^{−k_p(t−τ)} (p̃^c(τ) − p̃_∞^c) dτ + ∫_{t/2}^t e^{−k_p(t−τ)} v(τ) dτ.

From Lemma 7.2.4, there exists a constant M_v such that M_v ≥ k_p ‖e_pc(t) − p̃^c(t)‖ for all t ≥ 0. Thus we obtain

‖e_pc(t) − p̃_∞^c‖ ≤ e^{−k_p t/2} ‖e_pc(t/2) − p̃_∞^c‖ + sup_{t/2≤τ≤t} ‖p̃^c(τ) − p̃_∞^c‖ + (1/k_p) sup_{t/2≤τ≤t} k_γ M_v e^{−λ_γ τ} ‖e_θ(0)‖,

which implies that lim_{t→∞} ‖e_pc(t) − p̃_∞^c‖ = 0. ∎

Suppose that an agent is located at the root of a spanning tree of G and that this agent knows its orientation angle and its position with respect to Σ_g. In this case, e_pc(t) → 0 and p̃^c(t) → 0 as t → ∞.
7.3 Simulation results
We present the simulation results of formation control of six single-integrator modeled agents under the proposed strategy. The initial orientation angles are assigned randomly from the interval (−π/2, π/2]. The desired formation is changed at t = 10 s from p^∗ to 0.1 p^∗. The interaction graph for the agents is given as shown in Figure 7.2, and each edge weight is 1. In Figure 7.2, the direction of an arrow indicates the direction of information flow between the two corresponding agents.

Figure 7.2: The interaction graph for the simulation.

Figure 7.3: Simulation result of six single-integrator modeled agents having the static interaction graph. (a) The position p^c of the agents. (b) The position control error ‖e_pc‖, the position estimation error ‖p̃^c‖, and the orientation alignment error ‖e_θ‖.

Figure 7.3 illustrates the simulation result. The formation converges to the desired formation (Figure 7.3a), the norm of the orientation angle error converges to zero, and the norms of the position estimation error and the position control error converge to a steady-state value (Figure 7.3b). Although the desired formation is changed at t = 10 s, the orientation angle error remains at zero and the position estimation error remains at the steady-state value.
7.4 Conclusion
We proposed a formation control strategy based on an orientation alignment law and distributed position estimation. The proposed strategy ensures convergence of the formation of single-integrator modeled agents to the desired formation. It is of interest to extend the proposed strategy to agents having more practical dynamics and to three-dimensional space.
Chapter 8
Consensus of networks of a class of linear agents
In this chapter, we study conditions for the consensus of a network consisting of nonidentical positive real systems multiplied by a single-integrator. The individual systems of the network may have nonidentical system orders. The coupling among the individual systems is undirected and diffusive. Under these assumptions, sufficient conditions for the consensus of the network are provided. It is shown that, if the individual systems are weakly strictly positive real systems multiplied by a single-integrator, then consensus is achieved. For a network of positive real systems multiplied by a single-integrator, a Nyquist-plot-based criterion is provided. As examples of such networks, we address the consensus of first- and second-order systems and load frequency control of synchronous generators.
8.1 Introduction
There has been a significant amount of research interest in coordination of multi-agent
systems (see Olfati-Saber et al. (2007); Ren et al. (2007) and the references therein). Among
various problems, consensus has been primarily studied in the literature. In a consensus
problem, individual agents interact with each other to evolve their own variables to a common
value.
Recently, Wang & Elia (2010) have studied a single-integrator network interconnected by dynamic edges and provided a sufficient condition for consensus based on the diagonal dominance of the complex Laplacian matrix. Since the edges of the network are described as dynamical systems in their work, one may call the network a dynamic consensus network. Another work on dynamic consensus networks is found in Moore et al. (2011). Motivated by thermal processes in buildings, Moore et al. (2011) have proposed a general single-integrator dynamic consensus network model and provided a sufficient condition for consensus based on a diagonal dominance condition.
As shown below, a load frequency control (LFC) network of synchronous generators can
be described as a dynamic consensus network of single-integrators. In an LFC network, each
generator control system can be modeled as a node whose variable is the phase variation of its
voltage and the power exchange among the generators can be modeled as the interconnection
of the network. Since the generator control systems have nonidentical high-order dynamics
in general, it is difficult to analyze the stability of the LFC network. Meanwhile, the interconnection among the generator control systems is diffusive (Tuna, 2009), i.e., power exchange
between two nodes is proportional to the phase differences of the voltages of the nodes (Kundur et al., 1994). This allows us to model the LFC network as a dynamic consensus network
of single-integrators.
Motivated by the LFC network model, we investigate conditions for the consensus of nonidentical systems interconnected by diffusive coupling. We present sufficient conditions for the consensus of a network of positive real (PR) systems combined with a single-integrator. The individual systems of the network might have nonidentical dynamics and different system orders. Though existing results on consensus conditions for networks of identical systems are found in Fax & Murray (2004) and Gattami & Murray (2004), not much research interest has been focused on the nonidentical case. It is shown that a connected network of weakly strictly positive real (WSPR) systems combined with a single-integrator reaches consensus. Further, a condition for the consensus of a connected network of PR systems combined with a single-integrator is also provided. Based on these conditions, we can determine the consensus of an output diffusively coupled linear system network and of an LFC network of synchronous generators. Since many load frequency controllers have been designed in the literature without consideration of the stability of the overall network, the sufficient conditions presented in this chapter might be useful for the design of load frequency controllers.
The outline of the chapter is as follows. In Section 8.2, we review some mathematical background. In Section 8.3, we present sufficient conditions for the consensus of a network of PR or WSPR systems combined with a single-integrator. In Section 8.4.2, we provide some illustrative examples and apply the sufficient conditions to an LFC network. Concluding remarks are provided in Section 8.5.
8.2 Preliminaries
The set of complex (respectively, real) numbers is denoted by C (respectively, R). For any s ∈ C, Re[s] denotes the real part of s. For any z ∈ Cⁿ, the conjugate transpose of z is denoted by z^H, and the absolute value of z is denoted by |z|. For any A ∈ C^{n×n}, we denote the spectral norm of A by ‖A‖ and the spectrum of A by σ(A). The field of values of A, defined as the set {z^H A z ∈ C : z ∈ Cⁿ, ‖z‖ = 1}, is denoted by F(A). The column vector [1 · · · 1]^T ∈ Rⁿ is denoted by 1_n.
A positive real (PR) transfer function is defined as follows:

Definition 8.2.1 (Lozano et al., 2000) A real rational transfer function h(s) is said to be positive real if
(i) h(s) is analytic in Re[s] > 0;
(ii) h(s) is real for positive real s;
(iii) Re[h(s)] ≥ 0 for all Re[s] > 0.
The following theorem provides a necessary and sufficient condition for positive realness.

Theorem 8.2.1 (Lozano et al., 2000) A real rational transfer function h(s) is positive real if and only if
(i) h(s) has no poles in Re[s] > 0;
(ii) Re[h(jω)] ≥ 0 for all ω such that jω is not a pole of h(s);
(iii) if s = jω_0 is a pole of h(s), it is a simple pole; if ω_0 is finite, the residue lim_{s→jω_0} (s − jω_0)h(s) is real and positive, and if ω_0 is infinite, the limit lim_{ω→∞} h(jω)/(jω) is real and positive.
If there exists ε > 0 such that h(s − ε) is PR, then h(s) is said to be strictly positive real (SPR). The following property of weak strict positive realness is stronger than positive realness but weaker than strict positive realness.

Definition 8.2.2 (Lozano et al., 2000) A real rational transfer function h(s) is said to be weakly strictly positive real (WSPR) if
(i) h(s) is analytic in Re[s] ≥ 0;
(ii) Re[h(jω)] > 0 for all ω ∈ (−∞, ∞).

Based on Definition 8.2.2 and Theorem 8.2.1, we see that a WSPR function is PR.
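Condition (ii) of Definition 8.2.2 is easy to spot-check on a frequency grid. The sketch below uses an example transfer function chosen here (not one from the text), h(s) = (s + 2)/((s + 1)(s + 3)), for which Re[h(jω)] = (6 + 2ω²)/|(jω + 1)(jω + 3)|² > 0 and the poles s = −1, −3 lie in Re[s] < 0, so h is WSPR.

```python
import numpy as np

def h(s):
    # example transfer function (chosen here): h(s) = (s + 2) / ((s + 1)(s + 3))
    return (s + 2.0) / ((s + 1.0) * (s + 3.0))

w = np.linspace(-1e3, 1e3, 200001)            # frequency grid
re = np.real(h(1j * w))
assert np.all(re > 0.0)                        # Definition 8.2.2 (ii) on the grid

# closed form: Re[h(jw)] = (6 + 2 w^2) / |(jw + 1)(jw + 3)|^2
den = np.abs((1j * w + 1.0) * (1j * w + 3.0)) ** 2
assert np.allclose(re, (6.0 + 2.0 * w * w) / den)
```

A grid check is of course not a proof; the closed-form expression is what establishes positivity for all ω.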
8.3 Main result
We consider the following N dynamical nodes:

D_i(d/dt) y_i = N_i(d/dt) u_i, i = 1, . . . , N, (8.1)

where the y_i ∈ R are the node variables, the u_i ∈ R are the coupling variables defined below, and

N_i(d/dt) = Σ_{k=0}^{n_i} n_ik (d^k/dt^k), D_i(d/dt) = Σ_{k=0}^{m_i} d_ik (d^k/dt^k).

Note that the transfer function from u_i to y_i contains a pure single-integrator if n_i0 ≠ 0.
Assuming that the interconnections among the individual systems are modeled as a weighted graph G = (V, E, W), we consider the node input u_i given as

u_i = −Σ_{j∈N_i} a_ij (y_i − y_j), (8.2)

which can be called an output diffusive coupling (Tuna, 2009). Note that a_ij is an element of the adjacency matrix associated with G. The individual systems are then described as

D_i(d/dt) y_i = −N_i(d/dt) Σ_{j∈N_i} a_ij (y_i − y_j). (8.3)
By taking the Laplace transform of both sides of (8.3) with zero initial conditions, we have

s D_i(s) Y_i(s) = −N_i(s) Σ_{j∈N_i} a_ij (Y_i(s) − Y_j(s)),

where Y_i(s) and Y_j(s) are the Laplace transforms of y_i(t) and y_j(t). Defining

G_i(s) := N_i(s)/D_i(s)

and G(s) := diag(G_1(s), . . . , G_N(s)), the overall equation for the network is arranged as

s Y(s) = −G(s) L Y(s), (8.4)

where L is the Laplacian matrix of G and Y(s) = [Y_1(s) · · · Y_N(s)]^T. Note that equation (8.4) has the form of a dynamic consensus network of single-integrators (Moore et al., 2011; Wang & Elia, 2010).
Several comments on (8.4) are in order here. First, if the G_i(s) are identical, a consensus condition can be checked graphically based on the eigenvalues of L and the Nyquist plot of G_i(s) (Fax & Murray, 2004). A similar result for multi-input/multi-output systems is found in Gattami & Murray (2004). However, such results cannot be applied to a network consisting of nonidentical systems. We attempt to analyze the consensus of (8.4) by viewing it as a dynamic consensus network.

Second, we are primarily interested in investigating which properties of the individual systems ensure that the network reaches consensus. Wang & Elia (2010) have studied a single-integrator consensus network having dynamic edges and provided a necessary and sufficient condition for consensus based on properties of the overall transfer function matrix. However, considering the scalability of the network, it would be complicated to check such a condition on the overall transfer function. Thus, in this chapter, we seek conditions on the individual systems that ensure consensus.
Third, though it has been shown that a diagonal dominance condition is sufficient for the consensus of dynamic networks (Moore et al., 2011; Wang & Elia, 2010), the network (8.4) does not satisfy it. Moore et al. (2011) and Wang & Elia (2010) have shown that the consensus of the network sY(s) = −Γ(s)Y(s) is achieved if Γ(0) is a Laplacian matrix of a connected graph and Γ(s) = [γ_ij(s)] satisfies the following diagonal dominance condition:

Re[γ_ii(s)] > Σ_{j≠i, j=1}^N |γ_ij(s)|, Re[s] ≥ 0, s ≠ 0.

Note that (8.4) does not satisfy the diagonal dominance condition. Indeed, G(s)L satisfies the following equality:

|l_ii G_i(s)| = Σ_{j≠i, j=1}^N |l_ij G_i(s)|.
Consider the following characteristic equation of (8.4):

det[sI_N + G(s)L] = 0. (8.5)

It can be shown that a necessary and sufficient condition for the consensus of network (8.4) is that

• (graph connectedness) the graph G is connected;
• (pole location) equation (8.5) has a simple root at zero, and all the other roots lie in the open left half of the complex plane.

The necessity and sufficiency can be proved based on the idea in the third part of the proof of Theorem 8.3.2 below.
Assuming that G is connected, we now need to investigate the individual system properties ensuring the pole location condition. The following theorem is useful:

Theorem 8.3.1 (Horn & Johnson, 1991) Let A ∈ C^{n×n} and B ∈ R^{n×n}. If B is symmetric and positive semidefinite, then, for any λ ∈ σ(AB), there exist a ∈ F(A) and b ∈ F(B) such that λ = ab.
Then the following theorem provides a sufficient condition for the consensus of network (8.4):

Theorem 8.3.2 For network (8.4), assume that
(a) the graph associated with L is connected;
(b) for any i ∈ {1, . . . , N}, G_i(s) is WSPR.
Then,
(i) s = 0 is a simple root of (8.5);
(ii) all the nonzero roots of (8.5) are in the open left half of the complex plane;
(iii) for arbitrary initial conditions, there exists a constant c such that y_i(t) → c as t → ∞ for all i = 1, . . . , N.
Proof: (i) From condition (b), G(0) is positive definite because G_i(0) > 0 for all i ∈ {1, . . . , N}. Since G(0) is diagonal, it follows from condition (a) that G(0)L is the Laplacian matrix of a connected graph. Then it is obvious that s = 0 is a simple root of (8.5).

(ii) We first show that all the nonzero roots of (8.5) have non-positive real parts. Suppose that s^∗ is a root of (8.5) with positive real part, i.e., Re[s^∗] > 0. Then s^∗ ∈ σ(−G(s^∗)L). Since L is symmetric and positive semidefinite, it follows from Theorem 8.3.1 that there exist a ∈ F(−G(s^∗)) and b ∈ F(L) such that s^∗ = ab. Note that, for any z = [z_1 · · · z_N]^T ∈ C^N,

z^H G(s^∗) z = Σ_{i=1}^N z_i^H G_i(s^∗) z_i = Σ_{i=1}^N G_i(s^∗) |z_i|².

From condition (b), we have Re[G_i(s^∗)] ≥ 0 for all i ∈ {1, . . . , N} because Re[s^∗] > 0. It then follows that Re[−z^H G(s^∗) z] = −Σ_{i=1}^N Re[G_i(s^∗)] |z_i|² ≤ 0, which implies that Re[a] ≤ 0. Further, we have b ≥ 0 from the symmetry and positive semidefiniteness of L. Thus Re[s^∗] = Re[ab] = b Re[a] ≤ 0, which is a contradiction. This implies that all the nonzero roots of (8.5) have non-positive real parts.

To complete the proof of (ii), we next show that (8.5) has no nonzero pure imaginary roots. Suppose that jω^∗ ≠ 0 is a pure imaginary root of (8.5), so that jω^∗ ∈ σ(−G(jω^∗)L). Then there exist a ∈ F(−G(jω^∗)) and b ∈ F(L) such that jω^∗ = ab. From condition (b), we have Re[G_i(jω^∗)] > 0 for all i ∈ {1, . . . , N}. Hence, for any unit vector z ∈ C^N, Re[−z^H G(jω^∗) z] = −Σ_{i=1}^N Re[G_i(jω^∗)] |z_i|² < 0, which implies that Re[a] < 0. Further, b ≠ 0 because jω^∗ ≠ 0. Thus Re[jω^∗] = Re[ab] = b Re[a] < 0, which is a contradiction. Therefore (8.5) has no nonzero pure imaginary roots, which completes the proof of (ii).
(iii) This claim can be proved based on a similar argument to the proof of Theorem 4.1 in Moore et al. (2011). Since s = 0 is a distinct root of (8.5), this claim is proved by showing that no solutions of (8.4) exist with modes corresponding to values in S := {s ∈ C : Re(s) ≥ 0, s ≠ 0}. It can be shown (Polderman, 1998) that the solutions of (8.4) are given by the w(t) that satisfy

(sD(s) + N(s)L)|_{s = d/dt} w(t) = 0.
Further, it is known that all the allowable modes of w(t) are given by the roots of
det [sD(s) + N (s)L] = 0.
Since sD(s) has no zeros in S due to the weakly strictly positive realness condition of Gi (s),
the roots of det [sD(s) + N (s)L] = 0 in S are identical to those of (8.5). Since there are no
roots of (8.5) in S, it follows that no solutions of (8.4) have modes corresponding to values
in S. Therefore, for arbitrary initial conditions, there exists a constant c such that yi(t) → c as t → ∞.
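Claims (i) and (ii) can be spot-checked numerically on a small instance. The sketch below is an illustration added here (the two-agent network and the identical blocks Gi(s) = 1/(s + 1) are arbitrary choices, not from the text); for identical agents, the characteristic equation factors as s·Di(s) + Ni(s)µ = 0 over the eigenvalues µ of L:

```python
import numpy as np

# Two agents coupled through an undirected unit-weight edge.
L = np.array([[1.0, -1.0], [-1.0, 1.0]])

# Identical WSPR blocks G_i(s) = 1/(s + 1), i.e. N_i(s) = 1, D_i(s) = s + 1.
# For identical agents, det[sI + G(s)L] = 0 factors into
# s*D(s) + N(s)*mu = s*(s + 1) + mu = 0 for each eigenvalue mu of L.
mus = np.linalg.eigvalsh(L)

roots = []
for mu in mus:
    roots.extend(np.roots([1.0, 1.0, mu]))  # polynomial s^2 + s + mu
roots = np.array(roots)

zero_roots = roots[np.abs(roots) < 1e-9]
nonzero_roots = roots[np.abs(roots) >= 1e-9]

print(len(zero_roots))                 # s = 0 is a simple root
print(np.max(nonzero_roots.real) < 0)  # all other roots are strictly stable
```

The same factorization trick does not apply to nonidentical agents, but the root locations of det[sD(s) + N(s)L] can still be computed numerically in the same spirit.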
Next we consider the case that Gi(s) are PR with no poles on the imaginary axis. In contrast to the previous WSPR case, we need to consider two things. First, in the proof of Theorem 8.3.2, the condition that Re[Gi(jω)] > 0 for all ω ∈ (−∞, ∞) is required to show that (8.5) does not have pure imaginary roots. Note that we have only Re[Gi(jω)] ≥ 0 in this case, which is based on condition (ii) in Theorem 8.2.1. That is, there might be pure imaginary roots of (8.5). Second, to ensure that s = 0 is a distinct root of (8.5), we need an additional condition that Gi(0) > 0 for all i ∈ {1, . . . , N}. Because of this additional condition, the small gain theorem fails to provide a condition for the consensus of the network in general. According to the small gain theorem, if ‖−G(jω)L/(jω)‖ < 1 for all ω ∈ (−∞, ∞), the consensus of network (8.4) is ensured. However, note that lim_{ω→0} ‖−G(jω)L/(jω)‖ = ∞ when Gi(0) > 0 for all i ∈ {1, . . . , N}.

Meanwhile, for many physical systems, phases are bounded at low frequencies whereas magnitudes are bounded at high frequencies. Based on this observation, we suppose that there exists a constant ωs > 0 such that Re[Gi(jω)] > 0 for all 0 < ω ≤ ωs and max_{ω≥ωs} ‖G(jω)‖ is bounded. To show that (8.5) does not have pure imaginary roots, we then attempt to utilize the condition that Re[Gi(jω)] > 0 for the case 0 < ω ≤ ωs and apply the small gain theorem to the case ω ≥ ωs. Based on this idea, we have the following theorem:
Theorem 8.3.3 For network (8.4), assume that

(a) The graph associated with L is connected;

(b) For any i ∈ {1, . . . , N}, Gi(s) is PR and has no poles on the imaginary axis;

(c) For any i ∈ {1, . . . , N}, Gi(0) is positive, i.e., ni0/di0 > 0;

(d) There exists ωs > 0 such that Re[Gi(jω)] > 0 for all 0 < ω ≤ ωs and

max_{i∈{1,...,N}} max_{ω≥ωs} |Gi(jω)| < ωs/λmax(L),

where λmax(L) is the largest eigenvalue of L.

Then the statements in Theorem 8.3.2 are true.
Proof: (i) From condition (c), G(0) is positive definite. Then it follows from condition (a) that s = 0 is a distinct root of (8.5).

(ii) From condition (b), we have Re[Gi(s)] ≥ 0 whenever Re[s] > 0. Then it can be shown that there are no nonzero roots of (8.5) in the open right half of the complex plane in the same way as in the proof of Theorem 8.3.2.

The proof of this claim is completed by showing that (8.5) does not have pure imaginary roots. First, suppose that there exists a nonzero root jω∗ of (8.5) such that |ω∗| ≤ ωs. Based on condition (d), we can show that jω∗ cannot be a root of (8.5) in the same way as in the proof of Theorem 8.3.2.
Second, suppose that there exists a nonzero root jω∗ of (8.5) such that |ω∗| ≥ ωs. Then we have

det[jω∗ IN + G(jω∗)L] = (jω∗)^N det[IN + G(jω∗)L/(jω∗)] = 0,

which implies that det[IN + G(jω∗)L/(jω∗)] = 0. Due to condition (d), we have

‖G(jω∗)L/(jω∗)‖ ≤ λmax(L) max_{i∈{1,...,N}} max_{ω≥ωs} |Gi(jω)| / ωs < 1,

which is a contradiction, according to the small gain theorem. Therefore there are no nonzero pure imaginary roots of (8.5).

(iii) The proof of this claim is the same as that of Theorem 8.3.2.
8.4 Examples

8.4.1 Illustrative examples
Let us consider the following consensus network of first-order systems:

ẏi = −kpi Σ_{j∈Ni} aij (yi − yj), i = 1, . . . , N,

where yi ∈ R, kpi > 0, and aij is an edge weight of a graph. It is known that consensus is achieved if and only if the associated graph is connected (Olfati-Saber et al., 2007). Note that the consensus network of first-order systems satisfies conditions (b) and (c) in Theorem 8.3.2.
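As a quick numerical illustration (a sketch; the path graph, gains, and initial values below are arbitrary choices, not taken from the text), simulating this network over a connected graph shows the outputs converging to a common value:

```python
import numpy as np

# Path graph on 4 nodes, unit edge weights: connected, so consensus holds.
A_adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A_adj[i, j] = A_adj[j, i] = 1.0

kp = np.array([1.0, 2.0, 0.5, 1.5])   # arbitrary positive gains k_{pi}
y = np.array([4.0, -3.0, 1.0, 7.0])   # arbitrary initial outputs

dt = 0.001
for _ in range(40000):                # integrate to t = 40 (forward Euler)
    ydot = np.array([-kp[i] * np.sum(A_adj[i] * (y[i] - y)) for i in range(4)])
    y = y + dt * ydot

spread = np.max(y) - np.min(y)
print(spread)   # close to zero: the outputs agree
```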
The consensus of a second-order system network can also be checked based on Theorem 8.3.2. Consider the following network of second-order systems:

ÿi + kvi ẏi = −kpi Σ_{j∈Ni} aij (yi − yj), i = 1, . . . , N,

where yi ∈ R, kpi > 0, kvi > 0, and aij is an edge weight of a graph. Since G(s) is given as

G(s) = diag(kp1/(s + kv1), . . . , kpN/(s + kvN)),

which satisfies conditions (b) and (c) in Theorem 8.3.2, consensus follows if the graph is connected.
Now consider a network consisting of Ns first-order and Nd (= N − Ns) second-order systems. Without loss of generality, we can describe the dynamics of the network as

ẏi = ui, i = 1, . . . , Ns,
ÿi + kvi ẏi = ui, i = Ns + 1, . . . , N,

where yi ∈ R and ui ∈ R. Suppose that the input is given as follows:

ui = −kpi Σ_{j∈Ni} aij (yi − yj), i = 1, . . . , N,

where aij are the edge weights of a graph. After taking the Laplace transform with zero initial conditions, we obtain the overall equation for the network with

G(s) = diag(kp1, . . . , kpNs, kp(Ns+1)/(s + kv(Ns+1)), . . . , kpN/(s + kvN)),

which satisfies conditions (b) and (c) in Theorem 8.3.2. Consider a network of two first-order and two second-order systems. Fig. 8.1 shows the interconnection of the network. It is assumed that all the edge weights are 1, kp1 = 1.0, kp2 = 2.2, kp3 = 10, kv3 = 4, kp4 = 20, and kv4 = 2. As expected, all the output variables converge to a constant as depicted in Fig. 8.2.
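The stated convergence can be cross-checked by examining the spectrum of the closed-loop state matrix. The sketch below uses the gains from the text but assumes a ring interconnection for illustration, since the actual topology is the one drawn in Fig. 8.1; per Theorem 8.3.2, one eigenvalue is zero (the consensus mode) and the rest are strictly stable:

```python
import numpy as np

kp = [1.0, 2.2, 10.0, 20.0]
kv3, kv4 = 4.0, 2.0

# Ring interconnection assumed for illustration (the actual topology is in Fig. 8.1).
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])

# State ordering: [y1, y2, y3, v3, y4, v4]; u_i = -kp_i * (L y)_i.
M = np.zeros((6, 6))
y_idx = [0, 1, 2, 4]                      # positions of y1..y4 in the state
for col, j in enumerate(y_idx):
    M[0, j] = -kp[0] * L[0, col]          # ydot1 = u1
    M[1, j] = -kp[1] * L[1, col]          # ydot2 = u2
    M[3, j] = -kp[2] * L[2, col]          # vdot3 = u3 - kv3*v3
    M[5, j] = -kp[3] * L[3, col]          # vdot4 = u4 - kv4*v4
M[2, 3] = 1.0                             # ydot3 = v3
M[3, 3] = -kv3
M[4, 5] = 1.0                             # ydot4 = v4
M[5, 5] = -kv4

eigs = np.linalg.eigvals(M)
n_zero = int(np.sum(np.abs(eigs) < 1e-8))
print(n_zero)                                        # one zero eigenvalue
print(np.max(eigs[np.abs(eigs) >= 1e-8].real) < 0)   # all other modes decay
```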
In general, the following output diffusively coupled network can also be analyzed based on
Figure 8.1: Interconnection of two first-order and two second-order systems (Systems 1 and
2 are first-order and systems 3 and 4 are second-order).
Figure 8.2: Consensus of two first- and two second-order systems.
Theorem 8.3.2 or 8.3.3:
ẋi = Ai xi + Bi Σ_{j∈Ni} aij (yj − yi),  (8.6a)
yi = Ci xi, i = 1, . . . , N,  (8.6b)

where xi ∈ R^{ni}, yi ∈ R, Ai ∈ R^{ni×ni}, Bi ∈ R^{ni×1}, and Ci ∈ R^{1×ni}. The output coupling topology is modeled by a weighted undirected graph, and aij is the weight of the edge {i, j}. By taking the Laplace transform of equation (8.6), we have

Yi(s) = Ci(sI − Ai)^{-1}Bi Σ_{j∈Ni} aij [Yj(s) − Yi(s)].

Defining Gi(s) = sCi(sI − Ai)^{-1}Bi, the overall equation is arranged in the form (8.4). Then we can apply Theorem 8.3.2 or 8.3.3.
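As a sketch of this reduction (with a hypothetical scalar block chosen purely for illustration), Gi(jω) = jωCi(jωI − Ai)^{-1}Bi can be evaluated on a frequency grid and its real part inspected, which is what condition (b) of Theorem 8.3.2 asks about:

```python
import numpy as np

# Illustrative single block: A_i = -1, B_i = 1, C_i = 1, so that
# C_i (sI - A_i)^{-1} B_i = 1/(s + 1) and G_i(s) = s/(s + 1).
Ai = np.array([[-1.0]])
Bi = np.array([[1.0]])
Ci = np.array([[1.0]])

def G_i(w):
    """G_i(jw) = jw * C_i (jw I - A_i)^{-1} B_i."""
    s = 1j * w
    return (s * Ci @ np.linalg.solve(s * np.eye(1) - Ai, Bi))[0, 0]

ws = np.logspace(-3, 3, 200)
re = np.array([G_i(w).real for w in ws])
print(np.all(re > 0))   # Re[G_i(jw)] = w^2/(1 + w^2) > 0 for w != 0
```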
As an example for Theorem 8.3.3, consider a network of the following two systems:

G1(s) = (s² + 1000)/(s² + 250s + 100),  G2(s) = (s² + 100)/(s² + 2500s + 1000).

Here G1(s) and G2(s) are PR but not WSPR. Since Re[G1(jω)] > 0 and Re[G2(jω)] > 0 for any 0 < ω < 10, we have ωs = 10. From the Nyquist plots, we can find that max_{i∈{1,2}} max_{ω≥ωs} |Gi(jω)| = 0.0360. Let the weight between the two systems be l. Then, from Theorem 8.3.3, the two system outputs reach consensus if the following condition holds:

l < ωs / max_{i∈{1,2}} max_{ω≥ωs} |Gi(jω)|.
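The claim ωs = 10 can be verified on a frequency grid; the sketch below evaluates Re[Gi(jω)] for the two transfer functions above (the value 0.0360 read from the Nyquist plots is not recomputed here):

```python
import numpy as np

def G1(w):
    s = 1j * w
    return (s**2 + 1000) / (s**2 + 250 * s + 100)

def G2(w):
    s = 1j * w
    return (s**2 + 100) / (s**2 + 2500 * s + 1000)

# Re[G1(jw)] is proportional to (1000 - w^2)(100 - w^2) and Re[G2(jw)] to
# (100 - w^2)(1000 - w^2), so both are positive exactly on 0 < w < 10.
ws = np.linspace(0.1, 9.9, 500)
print(all(G1(w).real > 0 for w in ws))
print(all(G2(w).real > 0 for w in ws))
print(G1(10.0).real)   # ~0: the positivity boundary, i.e. w_s = 10
```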
8.4.2 Load frequency control network of synchronous generators
It is known that the flows of real and reactive power in a power grid are fairly independent of each other. Real power control is closely related to frequency control, whereas reactive power is related to voltage control. Thus it is often the case that they are studied separately (Kundur et al., 1994).

In a power grid, it is required that the frequency remain nearly constant to ensure the quality of the power supply, and the generation is controlled to maintain the scheduled power exchange among areas. The control of generation and frequency is commonly referred to as load frequency control (LFC).
A node model for an LFC network of synchronous generators can be represented as the block diagram in Figure 8.3 (Kundur et al., 1994). In Fig. 8.3, ∆Θi(s) and ∆Ωi(s) denote the phase and the frequency variations of the complex voltage of node i in the s-domain. Further, ∆Pi(s), which denotes the power transfer variation between node i and the other nodes, is given as

∆Pi(s) = Σ_{j∈Ni} Tij (∆Θi(s) − ∆Θj(s)),

where Tij is the synchronizing power coefficient between nodes i and j. Note that Tij = Tji in general. The local load variation is denoted by ∆PLi in Figure 8.3.
We consider an N-node LFC network, whose interconnection is modeled by a weighted undirected graph G. By taking ∆Θi(s) as the output variable of node i, we obtain the following equation for each node i:

s∆Θi(s) = Gi(s) Σ_{j∈Ni} Tij (∆Θj(s) − ∆Θi(s)) − GLi(s)∆PLi(s), i = 1, . . . , N,  (8.7)

where Gi(s) and GLi(s) are given as

Gi(s) = GP,i(s)[1 + Ki(s)GG,i(s)GT,i(s)GP,i(s)] / (1 + [1/Ri + Bi Ki(s)]GG,i(s)GT,i(s)GP,i(s)),

GLi(s) = GP,i(s) / (1 + [1/Ri + Bi Ki(s)]GG,i(s)GT,i(s)GP,i(s)).
Figure 8.3: Node model for LFC network of synchronous generators.
LFC network (8.7) can be regarded as a dynamic consensus network of single-integrators of the form (8.3). The Laplacian matrix L = [lij] of G is given as

lij = Σ_{k∈Ni} Tik if i = j, and lij = −Tij if i ≠ j.

Further, the objective of the LFC network is the regulation of ∆ωi and ∆Pi in the presence of ∆PLi. Note that, if there exists a constant c such that ∆θi → c for all i = 1, . . . , N, then ∆ωi → 0 and ∆Pi → 0. In other words, the control objective is to achieve the consensus of the phase variations.
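The Laplacian above is easy to assemble programmatically. A minimal sketch, with hypothetical synchronizing coefficients Tij chosen only for illustration:

```python
import numpy as np

# Hypothetical synchronizing coefficients T_ij for a 4-node LFC network;
# the values are illustrative only.
T = {(0, 1): 2.0, (1, 2): 1.5, (2, 3): 3.0, (0, 3): 1.0}

N = 4
L = np.zeros((N, N))
for (i, j), t in T.items():
    L[i, j] = L[j, i] = -t   # l_ij = -T_ij for i != j
    L[i, i] += t             # l_ii = sum of T_ik over the neighbors of i
    L[j, j] += t

eigs = np.sort(np.linalg.eigvalsh(L))
print(np.allclose(L @ np.ones(N), 0))   # rows sum to zero (L 1_N = 0)
print(eigs[0], eigs[1])                 # eigenvalue 0 is simple; lambda_2 > 0
```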
Assuming that G is connected and ∆PLi(s) ≡ 0, one may determine whether the network (8.7) reaches consensus by checking whether Gi(s) is WSPR. Figure 8.4 shows the Nyquist plots for Gi(s) of a typical steam turbine generator and a hydraulic generator, whose parameters are given in (Kundur et al., 1994, pp. 598–599). For both generators, Ki(s) = 0.01/s is adopted as suggested in Stankovic et al. (1998). From Figure 8.4, we can see that the Nyquist plots do not cross the imaginary axis. Since Gi(s) do not have poles in the closed right half of the complex plane, they are WSPR.
Note that GLi(s) has a zero at s = 0 in both cases. This implies that, for constant local load variations, the consensus of the LFC network (8.7) is achieved. The proof can be carried out based on the idea of the proof of the third claim of Theorem 8.3.2.
It is recognized that there exist many design constraints for an LFC network of practical synchronous generators. Even though an LFC network satisfies all the conditions in Theorem 8.3.2 or 8.3.3, the network might not be practical. However, it would be desirable if a practical load frequency network satisfied the conditions in Theorem 8.3.2, because the stability of the
Figure 8.4: Nyquist plots for Gi(s) of synchronous generators with typical parameters ((a) steam turbine generator case; (b) hydraulic generator case).
network is ensured at least in a local sense. Many load frequency controllers proposed in
existing results have been designed without consideration of stability of the overall network.
8.5 Conclusion
For a network of PR or WSPR systems combined with a single-integrator, we have provided sufficient conditions for the consensus of the network. Then we have shown that the consensus of an output diffusively coupled linear system network and an LFC network of synchronous generators can be determined based on the conditions.

There are several possible research directions. First, we have considered only undirected interconnections. One may consider the directed case. Second, the consensus conditions provided in this chapter are sufficient, not necessary. It is desirable to investigate necessary and sufficient conditions. Third, we have addressed only the single-input/single-output case. One may extend the results to the multi-input/multi-output case. State consensus of diffusively coupled nonidentical systems would also be interesting to consider.
Chapter 9
Disturbance attenuation in consensus networks
In this chapter, we study disturbance attenuation in undirected consensus networks of identical linear systems. Assuming that the individual linear systems are under exogenous disturbances, we take the H∞ norm of the transfer function matrix from the disturbance vector to the disagreement vector of the network as the metric for the disturbance attenuation performance. For a given consensus network, we show that the disturbance attenuation performance is enhanced by maximizing the second eigenvalue of the graph Laplacian under a certain condition, which can be checked by solving an LMI feasibility problem. In the case that the consensus network has physical interconnection, we provide algorithms for the design of decentralized and distributed controllers that ensure a given disturbance attenuation performance.
9.1 Introduction
For the past decade, consensus of multiple systems has attracted a significant amount of research interest due to its broad applications in various areas (see Olfati-Saber et al. (2007); Ren et al. (2007) and the references therein). Theoretical study on consensus has particularly focused on linear system networks. Olfati-Saber and Murray have provided a necessary and sufficient condition on the underlying graph topology for single-integrator modeled agents to achieve average consensus (Olfati-Saber & Murray, 2004). Ren and Atkins have shown that, for single-integrator modeled agents, the existence of a spanning tree in the underlying graph is a necessary and sufficient condition for consensus (Ren et al., 2004). Moreau has shown that the uniform connectivity of the underlying graph is sufficient to achieve consensus for single-integrator modeled agents (Moreau, 2004). Consensus of double-integrator modeled agents has been studied in Ren & Atkins (2007). According to Ren & Atkins (2007), the existence of a spanning tree in the interaction graph of the double-integrators is necessary but not sufficient for consensus. Consensus of identical linear time-invariant systems has been addressed in Fax & Murray (2004); Scardovi & Sepulchre (2009); Tuna (2009).
While the great majority of the existing results are based on the ideal assumption that there exist no external disturbances to the individual systems, physical systems are usually affected by disturbances in reality. Liu et al. have studied the design of a dynamic output feedback controller for an undirected network of identical linear time-invariant systems under exogenous disturbances (Liu & Jia, 2010; Liu et al., 2009). Considering the transfer function matrix from the disturbance vector to the disagreement vector of the network, they have formulated an H∞ suboptimal problem. Based on the symmetry of the Laplacian of the undirected graph, they have decomposed the overall equation for the network into independent systems of the same order as the individual systems, and then provided an LMI condition for the decomposed systems to solve the H∞ suboptimal problem. Li et al. have considered an undirected network of identical linear time-invariant systems under disturbances, assuming that some individual systems can measure their own states, and then provided LMI conditions to find state feedback controllers solving the H2 and H∞ suboptimal problems (Li et al., 2011).
Meanwhile, most of the existing results have mainly focused on the design of decentralized feedback control gain matrices to ensure the disturbance attenuation performance of a consensus network. Though the disturbance attenuation property depends not only on the feedback controller of the network but also on the graph associated with the network, less attention has been paid to the graph design.

In this chapter, we study two problems related to the disturbance attenuation of undirected consensus networks of identical linear systems that are under exogenous disturbances. Taking the H∞ norm of the transfer function matrix from the exogenous disturbance vector to the disagreement vector of the network as the metric for the disturbance attenuation performance, we study graph and controller design to enhance the performance. First, we address the H∞ suboptimal problem for a given identical linear system network under the assumption that the topology of the network graph is given but the edge weights are variables belonging to a convex set. We show that the H∞ suboptimal problem, which is the design of the edge weights of the graph, is solved by maximizing the second eigenvalue of the graph Laplacian under some condition, which can be readily checked by solving an LMI feasibility problem. Since the disturbance attenuation performance is a highly nonlinear function of the edge weights of the graph, it is generally intractable to solve the H∞ suboptimal problem directly. In this regard, the approach of this chapter might be useful in practice. Second, we consider an identical linear system network with existing interconnection, which might be regarded as physical interconnection. For this consensus network, we formulate two H∞ suboptimal problems based on decentralized and distributed controllers, respectively, and provide algorithms for the design of decentralized and distributed controllers that ensure a given disturbance attenuation performance. When the network has certain properties, the decentralized controller is readily designed by solving an LMI feasibility problem and maximizing the second eigenvalue of the graph Laplacian. The distributed controller is also designed by solving an LMI feasibility problem.
9.2 Preliminaries
The set of real (respectively, complex) numbers is denoted by R (respectively, C). The set of nonnegative (respectively, positive) real numbers is denoted by R̄+ (respectively, R+). By A ≻ 0 (respectively, A ⪰ 0), we denote the positive definiteness (respectively, positive semidefiniteness) of A. Further, by A ≺ 0 (respectively, A ⪯ 0), we denote the negative definiteness (respectively, negative semidefiniteness) of A. For any A ∈ C^{N×N}, A^H denotes the conjugate transpose of A. For any two matrices A and B, A ⊗ B denotes the Kronecker product of A and B. The transposition operation is distributive over the Kronecker product:

(A ⊗ B)^T = A^T ⊗ B^T.  (9.1)

Let A, B, C, and D be matrices such that AC and BD are well defined. Then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD.  (9.2)
Let P = P^T ≻ 0 be an m × m matrix with eigenvalues 0 < λ1 ≤ · · · ≤ λm and Q = Q^T ⪰ 0 be an n × n matrix with eigenvalues 0 ≤ µ1 ≤ · · · ≤ µn. Then the eigenvalues of (P ⊗ Q) = (P ⊗ Q)^T are λ1µ1, . . . , λ1µn, . . . , λmµ1, . . . , λmµn (Laub, 2005). Further, for any x = [x1^T · · · xm^T]^T ∈ R^{mn}, where xi ∈ R^n for all i = 1, . . . , m, we have x^T[(P − λ1Im) ⊗ Q]x ≥ 0 because the eigenvalues of the symmetric matrix (P − λ1Im) ⊗ Q are nonnegative. Thus we have

x^T(P ⊗ Q)x ≥ λ1 x^T(Im ⊗ Q)x = λ1 Σ_{i=1}^{m} xi^T Q xi.  (9.3)
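Both the eigenvalue product property and inequality (9.3) can be checked numerically; a minimal sketch with randomly generated P ≻ 0 and Q ⪰ 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# P symmetric positive definite (m x m), Q symmetric positive semidefinite (n x n).
m, n = 3, 2
Mp = rng.standard_normal((m, m))
P = Mp @ Mp.T + np.eye(m)
Mq = rng.standard_normal((n, n - 1))
Q = Mq @ Mq.T                            # rank deficient, hence only PSD

lam = np.sort(np.linalg.eigvalsh(P))     # 0 < lam_1 <= ... <= lam_m
mu = np.sort(np.linalg.eigvalsh(Q))      # 0 <= mu_1 <= ... <= mu_n

# Eigenvalues of P (x) Q are all products lam_i * mu_j.
kron_eigs = np.sort(np.linalg.eigvalsh(np.kron(P, Q)))
products = np.sort(np.outer(lam, mu).ravel())
print(np.allclose(kron_eigs, products))

# Inequality (9.3): x^T (P (x) Q) x >= lam_1 * sum_i x_i^T Q x_i.
x = rng.standard_normal(m * n)
lhs = x @ np.kron(P, Q) @ x
rhs = lam[0] * sum(x[i*n:(i+1)*n] @ Q @ x[i*n:(i+1)*n] for i in range(m))
print(lhs >= rhs - 1e-9)
```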
Let G = (V, E, W) have N nodes. The Laplacian matrix L = [lij] ∈ R^{N×N} of G is defined as

lij := Σ_{k∈Ni} wik if i = j; lij := −wij if {i, j} ∈ E; and lij := 0 otherwise,

where wij := W({i, j}) for any {i, j} ∈ E. Then the following are true for the Laplacian matrix L (Ren et al., 2004):

• L is symmetric and positive semidefinite;
• L has a zero eigenvalue associated with the eigenvector 1N;
• All nonzero eigenvalues of L are positive.
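These properties are easy to confirm numerically for a small weighted graph (the edge weights below are arbitrary):

```python
import numpy as np

# A small connected weighted graph: edges with weights w_ij.
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 0.5, (1, 3): 1.5}
N = 4
L = np.zeros((N, N))
for (i, j), w in edges.items():
    L[i, j] = L[j, i] = -w   # off-diagonal entries -w_ij
    L[i, i] += w             # diagonal entries: weighted degrees
    L[j, j] += w

eigs = np.sort(np.linalg.eigvalsh(L))
print(np.allclose(L, L.T), eigs[0] >= -1e-9)   # symmetric, positive semidefinite
print(np.allclose(L @ np.ones(N), 0))          # 1_N spans the kernel direction
print(eigs[1] > 0)                             # connected: zero eigenvalue simple
```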
If G is connected, the zero eigenvalue is distinct (Ren et al., 2004). Due to the symmetry of L, there exists an orthogonal matrix U ∈ R^{N×N} such that U^T LU is diagonal. Based on the properties of Laplacian matrices, we have the following lemmas:
Lemma 9.2.1 Let Lp ∈ R^{N×N} and Lc ∈ R^{N×N} be the Laplacian matrices of two connected graphs, respectively. Let Up = [up,1 · · · up,N] ∈ R^{N×N} be an orthogonal matrix such that Up^T Lp Up = diag(λp,1, . . . , λp,N), where 0 = λp,1 < λp,2 ≤ · · · ≤ λp,N are the eigenvalues of Lp. Then

Up^T Lc Up = [ 0  0^T ; 0  U'p^T Lc U'p ],  (9.4)

where U'p = [up,2 · · · up,N] ∈ R^{N×(N−1)}. Further, U'p^T Lc U'p is symmetric and its eigenvalues are the positive eigenvalues of Lc.

Proof: First, up,1 is the eigenvector of Lp associated with the zero eigenvalue, so we let up,1 = 1N/√N. Then we have equation (9.4). Second, U'p^T Lc U'p is symmetric due to the symmetry of Up^T Lc Up. Finally, the eigenvalues of Up^T Lc Up are the same as those of Lc, which, together with the connectedness of the graph associated with Lc, implies that the eigenvalues of U'p^T Lc U'p are the positive eigenvalues of Lc.
Lemma 9.2.2 (Lin et al., 2008) Let

L̄ := IN − 1N 1N^T / N.  (9.5)

Then L̄ has 0 and 1 as its eigenvalues with multiplicities 1 and N − 1, respectively.

Note that L̄ is the Laplacian matrix of the complete graph with 1/N as the weight of every edge.
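A one-line numerical check of Lemma 9.2.2 (N = 5 is an arbitrary choice):

```python
import numpy as np

N = 5
Lbar = np.eye(N) - np.ones((N, N)) / N

eigs = np.sort(np.linalg.eigvalsh(Lbar))
print(eigs)   # one eigenvalue 0, the remaining N - 1 eigenvalues equal to 1
```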
Consider a graph G = (V, E, W) with N nodes and M edges. Let λ1, . . . , λN be the eigenvalues of the Laplacian matrix L of G. Without loss of generality, we assume that 0 = λ1 ≤ λ2 ≤ · · · ≤ λN hereafter. By an ordering, we denote the edge set by E = {e1, . . . , eM}, where ek is a node pair {i, j} for all k = 1, . . . , M. Then we define

w := [W(e1), . . . , W(eM)]^T ∈ R^M.  (9.6)

We refer to w as the edge weight of G.
Suppose that f is a function defined on R^{N−1}. Since the λi are functions of the edge weight w, there exists a function g such that

f(λ2(w), . . . , λN(w)) ≡ g(w).

Then g is a convex function whenever f is a symmetric convex function (Boyd, 2006). This property allows us to formulate many graph design problems as convex optimization problems, which can be solved efficiently by numerical methods (Boyd, 2006). Suppose that V and E are given and w is a decision variable that belongs to a convex set. Then the minimization of a symmetric convex function f(λ2(w), . . . , λN(w)) is formulated as a convex optimization problem. Similarly, the maximization of a symmetric concave function h(λ2(w), . . . , λN(w)) is also a convex optimization problem in this case.
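In particular, λ2 itself is a concave function of the edge weight w (it is a minimum of functions linear in w), which is what makes its maximization tractable. A quick numerical illustration on an arbitrary 4-node topology:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]   # fixed topology, M = 5 edges

def laplacian(w):
    L = np.zeros((4, 4))
    for (i, j), wk in zip(edges, w):
        L[i, j] = L[j, i] = -wk
        L[i, i] += wk
        L[j, j] += wk
    return L

def lam2(w):
    return np.sort(np.linalg.eigvalsh(laplacian(w)))[1]

rng = np.random.default_rng(1)
wa = rng.uniform(0.1, 2.0, size=5)
wb = rng.uniform(0.1, 2.0, size=5)

# Concavity: lambda_2 at the midpoint dominates the average of the endpoints.
mid = lam2(0.5 * (wa + wb))
avg = 0.5 * (lam2(wa) + lam2(wb))
print(mid >= avg - 1e-9)
```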
9.3 Problem statement

9.3.1 Graph design problem
Let G = (V, E, W) be a graph with N nodes and M edges. Consider the following network of identical linear systems over the graph G:

ẋi = Axi + F Σ_{j∈Ni} wij (xj − xi) + Edi,  (9.7a)
zi = xi − (1/N) Σ_{j=1}^{N} xj,  (9.7b)

where xi ∈ R^n, zi ∈ R^n, and di ∈ R^l denote the state, the output, and the exogenous disturbance, respectively, of system i, for i = 1, . . . , N, and A ∈ R^{n×n}, E ∈ R^{n×l}, and F ∈ R^{n×n} are constant matrices. The coefficient wij is the weight assigned to {i, j} ∈ E.
The overall equation for the network (9.7) is written in a vectorial form as

ẋ = [(IN ⊗ A) − (L ⊗ F)]x + (IN ⊗ E)d,  (9.8a)
z = (L̄ ⊗ In)x,  (9.8b)

where x := [x1^T · · · xN^T]^T, z := [z1^T · · · zN^T]^T, d := [d1^T · · · dN^T]^T, L is the Laplacian matrix of G, and L̄ is defined in (9.5). Note that the output vector z is the disagreement vector of x, i.e., z = x − (1/N)(1N 1N^T ⊗ In)x.
We say that the network (9.8) asymptotically reaches consensus when xi − xj → 0 as t → ∞ for all i, j ∈ V. We assume that the objective of the network (9.8) is to reach consensus asymptotically. Meanwhile, the network (9.8) might not reach consensus in the presence of the exogenous disturbance, i.e., when d ≢ 0. Thus we seek to design w appropriately so that the network (9.8) is less sensitive to the exogenous disturbance d.
Let Tzd(s) be the transfer function matrix from d to z. Since z is the disagreement vector of x, we can regard z as a metric for consensus of the network (9.8). Thus we can quantify the disturbance attenuation performance of the network (9.8) by the H∞ norm of Tzd(s) (Li et al., 2011; Liu & Jia, 2010; Liu et al., 2009):

‖Tzd(s)‖∞ := sup_{Re(s)>0} σ̄(Tzd(s)) ≡ sup_{ω∈R} σ̄(Tzd(jω)),

where σ̄(·) denotes the maximum singular value.
From (9.8), it is obvious that ‖Tzd(s)‖∞ depends only on L when A, F, and E are given constant matrices. Thus, by appropriately designing the edge weight w ∈ R̄+^M of G, which is defined in (9.6), we can improve the disturbance attenuation performance of the network (9.8). More precisely, assuming that (V, E) is given, we attempt to design w to reduce ‖Tzd(s)‖∞. As shown in the following sections, the graph design problem reduces to the maximization of the second eigenvalue λ2 of L under certain conditions.
Since λ2 is a positive homogeneous function of w, to make the problem of maximizing λ2 sensible, it is required to impose some constraints on w (Sun et al., 2006). Such constraints are also reasonable in reality. Suppose that wij is the conductance between generators i and j in a power grid. Then wij is proportional to the cost required for the construction of the corresponding transmission line. Considering the geometric condition for edge {i, j}, we can represent the cost for wij as cij wij, where cij > 0. Based on such an idea, it is reasonable to impose a limitation constraint on the sum of the costs required for the edges:

c^T w = Σ_{k=1}^{M} ck wk ≤ bc,  (9.9)

where c ∈ R+^M.
Then, an H∞ suboptimal problem is stated as follows:

Problem 9.3.1 For a given γ > 0, design the edge weight w of G under constraint (9.9) such that the network (9.8) asymptotically reaches consensus when d ≡ 0 and ‖Tzd(s)‖∞ < γ when d ≢ 0.
9.3.2 Controller design problem
We next consider the following network of identical linear systems over a graph Gp = (V, Ep, Wp) with N nodes:

ẋi = Axi + Bui + F Σ_{j∈Np,i} wp,ij (xj − xi) + Edi,  (9.10a)
zi = xi − (1/N) Σ_{j=1}^{N} xj,  (9.10b)

where xi ∈ R^n, zi ∈ R^n, ui ∈ R^m, and di ∈ R^l denote the state, the output, the control input, and the exogenous disturbance, respectively, of system i, for i = 1, . . . , N, and A ∈ R^{n×n}, B ∈ R^{n×m}, E ∈ R^{n×l}, and F ∈ R^{n×n} are constant matrices. The coefficient wp,ij is the weight assigned to {i, j} ∈ Ep. We assume that Gp is given a priori. We can regard the graph Gp as the topology for the physical interconnection among the individual systems.
The overall equation for the network (9.10) is written as

ẋ = [(IN ⊗ A) − (Lp ⊗ F)]x + (IN ⊗ B)u + (IN ⊗ E)d,  (9.11a)
z = (L̄ ⊗ In)x,  (9.11b)

where x := [x1^T · · · xN^T]^T, u := [u1^T · · · uN^T]^T, z := [z1^T · · · zN^T]^T, d := [d1^T · · · dN^T]^T, Lp is the Laplacian matrix of Gp, and L̄ is defined in (9.5).
For the network (9.11), we consider two types of controllers. The first one is the decentralized controller. Let Gc = (V, Ec, Wc). Then the decentralized controller has the form ui = K Σ_{j∈Nc,i} wc,ij (xj − xi) for all i = 1, . . . , N, where K ∈ R^{m×n} is the control gain and wc,ij is the weight for {i, j} ∈ Ec. In a vectorial form, the decentralized controller is written as

u = −(Lc ⊗ K)x,  (9.12)

where Lc is the Laplacian matrix of Gc. The second one is the distributed controller of the form ui = −Kxi for all i = 1, . . . , N, where K ∈ R^{m×n} is the control gain, which is written in a vectorial form as

u = −(IN ⊗ K)x.  (9.13)
For the distributed controller (9.13), ‖K‖2 is the maximum gain for the measurement x. Since such a measurement contains noise in reality, it is not desirable to increase ‖K‖2 arbitrarily, because high gains usually cause high sensitivity to noise. Further, high gains may give rise to input saturation, which is also not desirable. Thus it is required to impose a limitation constraint on ‖K‖2 as

‖K‖2 ≤ bK.  (9.14)

Let wc ∈ R̄+^{Mc} be the edge weight of Gc, where Mc is the cardinality of Ec. Then, for the decentralized controller (9.12), we impose the following constraint:

wc,k ‖K‖2 ≤ bKw, k = 1, . . . , Mc.  (9.15)
Then, an H∞ suboptimal problem is stated as follows:

Problem 9.3.2 For a given γ > 0, design a decentralized (respectively, distributed) controller u of the form (9.12) (respectively, (9.13)) under the constraint (9.15) (respectively, (9.14)) such that the network (9.11) asymptotically reaches consensus when d ≡ 0 and ‖Tzd(s)‖∞ < γ when d ≢ 0.
9.4 Graph design for disturbance attenuation
In this section, we study Problem 9.3.1. First, we decompose the network (9.8) by means of coordinate transformations. The decomposition yields a set of independent systems that are equivalent to the overall network (9.8), thereby allowing us to deal with computationally tractable low-order decomposed systems instead of the overall network (9.8). Further, the decomposed systems give intuition for the design of G as shown below. Next we present a formula for the H∞ norm of Tzd(s), and then provide solutions to Problem 9.3.1.
9.4.1 Decomposition of the consensus network
Since it might not be tractable to compute and manipulate Tzd(s) directly when N is very large, we seek to decompose the network (9.8) into a set of low-order systems. Such decomposition is primarily based on the diagonalizability of the graph Laplacian matrices L and L̄ (Liu et al., 2009).

Let

δ := (L̄ ⊗ In)x.  (9.16)

Then we have

δ̇ = (L̄ ⊗ In)[(IN ⊗ A) − (L ⊗ F)]x + (L̄ ⊗ In)(IN ⊗ E)d
  = [(L̄ ⊗ A) − (L̄L ⊗ F)](δ + ((1N 1N^T/N) ⊗ In)x) + (L̄ ⊗ E)d
  = [(L̄ ⊗ A) − (L̄L ⊗ F)]δ + (L̄ ⊗ E)d,
z = δ,

where we use x = δ + ((1N 1N^T/N) ⊗ In)x and [(L̄ ⊗ A) − (L̄L ⊗ F)][(1N 1N^T/N) ⊗ In] = 0.
Since L̄ is the Laplacian of a connected graph, there exists an orthogonal matrix Ū ∈ R^{N×N} such that

Ū^T L̄ Ū = [ 0  0^T ; 0  IN−1 ] =: Λ̄,

which is evident from Lemma 9.2.2. Moreover, it follows from Lemma 9.2.1 that

Ū^T L Ū = [ 0  0^T ; 0  Ū'^T L Ū' ],

where Ū'^T L Ū' is symmetric and positive definite. Let

δ̂ := (Ū^T ⊗ In)δ,  d̂ := (Ū^T ⊗ Il)d.  (9.17)
Then we have

δ̂̇ = (Ū^T ⊗ In)[(L̄ ⊗ A) − (L̄L ⊗ F)](Ū ⊗ In)δ̂ + (Ū^T ⊗ In)(L̄ ⊗ E)(Ū ⊗ Il)d̂
  = [(Λ̄ ⊗ A) − (Λ̄Ū^T LŪ ⊗ F)]δ̂ + (Λ̄ ⊗ E)d̂.  (9.18)

Since the first rows of Λ̄ and Ū^T LŪ are zero vectors, d̂1 does not affect δ̂. Thus, by deleting the first row in (9.18), we have the following reduced-order network equation:

δ̂̇' = [(IN−1 ⊗ A) − (Ū'^T L Ū' ⊗ F)]δ̂' + (IN−1 ⊗ E)d̂',

where δ̂' = [δ̂2^T · · · δ̂N^T]^T and d̂' = [d̂2^T · · · d̂N^T]^T.
We finish the decomposition by diagonalizing the matrix Ū'^T L Ū' ∈ R^{(N−1)×(N−1)}. Since Ū'^T L Ū' is symmetric and positive definite, there exists an orthogonal matrix U' such that

U'^T Ū'^T L Ū' U' =: Λ',

where Λ' is diagonal. The diagonal elements of Λ' are the eigenvalues of L except λ1 = 0. Let

δ̄' := (U'^T ⊗ In)δ̂',  d̄' := (U'^T ⊗ Il)d̂'.

Then we have

δ̄̇' = [(IN−1 ⊗ A) − (Λ' ⊗ F)]δ̄' + (IN−1 ⊗ E)d̄',

which is decomposed into N − 1 low-order systems:

δ̄̇'i = (A − λi+1 F)δ̄'i + E d̄'i,  i = 1, . . . , N − 1,  (9.19)

where λi+1 is the (i + 1)th eigenvalue of L.
We then quantify the disturbance attenuation performance of the network (9.8) by the H∞ norms of the decomposed systems (9.19). The following lemma gives the relationship between ‖Tzd(s)‖∞ and the H∞ norms of the decomposed systems (9.19), which can be shown based on the results in Li et al. (2011); Liu et al. (2009):

Lemma 9.4.1 For the network (9.8),

‖Tzd(s)‖∞ = max_{i=1,...,N−1} ‖(sIn − A + λi+1 F)^{-1}E‖∞.  (9.20)
Proof: By simple algebra, we have

Tzd(s) = Tδd(s) = [sInN − (L̄ ⊗ A) + (L̄L ⊗ F)]^{-1}(L̄ ⊗ E)
       = (Ū ⊗ In)[sInN − (Λ̄ ⊗ A) + (Λ̄Ū^T LŪ ⊗ F)]^{-1}(Λ̄ ⊗ E)(Ū^T ⊗ Il)
       = (Ū ⊗ In)Tδ̂d̂(s)(Ū^T ⊗ Il),  (9.21)

which implies that ‖Tzd(s)‖∞ = ‖Tδd(s)‖∞ = ‖Tδ̂d̂(s)‖∞. Further, since

‖Tδ̂d̂(s)‖∞ = max(‖Tδ̂1d̂1(s)‖∞, ‖Tδ̂'d̂'(s)‖∞)

and ‖Tδ̂1d̂1(s)‖∞ = 0, it follows that ‖Tzd(s)‖∞ = ‖Tδ̂'d̂'(s)‖∞. Thus, we obtain

Tδ̂'d̂'(s) = [sIn(N−1) − (IN−1 ⊗ A) + (Ū'^T L Ū' ⊗ F)]^{-1}(IN−1 ⊗ E)
         = (U' ⊗ In)[sIn(N−1) − (IN−1 ⊗ A) + (Λ' ⊗ F)]^{-1}(IN−1 ⊗ E)(U'^T ⊗ Il)
         = (U' ⊗ In)Tδ̄'d̄'(s)(U'^T ⊗ Il),

which leads to (9.20).
9.4.2 Graph design
In the case that G is given, based on Lemma 9.4.1, we have the following lemma, which
is similar to Theorem 3 in Li et al. (2011):
Lemma 9.4.2 For a given γ > 0, the network (9.8) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if and only if A − λ_{i+1}F is Hurwitz and ‖(sI_n − A + λ_{i+1}F)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1.

Proof: (Necessity) Suppose that the network (9.8) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ. Since the network asymptotically reaches consensus, δ → 0 as t → ∞, which implies that δ̄′ → 0 as t → ∞. Thus A − λ_{i+1}F is Hurwitz for all i = 1, …, N − 1. Next, since ‖T_zd(s)‖∞ ≤ γ, it follows from (9.20) that ‖(sI_n − A + λ_{i+1}F)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1.

(Sufficiency) Suppose that A − λ_{i+1}F is Hurwitz and ‖(sI_n − A + λ_{i+1}F)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1. It then follows from (9.20) that ‖T_zd(s)‖∞ ≤ γ. Further, we can show that the network (9.8) asymptotically reaches consensus as follows. By the definition of δ in (9.16), (1_Nᵀ ⊗ I_n)δ = 0. Moreover, from the definition of δ̂ in (9.17), δ̂₁ = (1_Nᵀ/√N ⊗ I_n)δ = 0. Since A − λ_{i+1}F is Hurwitz for all i = 1, …, N − 1, it follows from (9.19) that δ̂′ → 0 as t → ∞, which implies that δ̂ = [δ̂₁ᵀ δ̂′ᵀ]ᵀ → 0 as t → ∞. Thus δ → 0 as t → ∞. ∎
While Lemma 9.4.2 provides a necessary and sufficient condition for checking the disturbance attenuation performance of the network (9.8) under the assumption that G is given, it might not be useful in practice. First, it does not provide any clue on how to design w. Since the eigenvalues of L are highly nonlinear functions of w, it is generally difficult to design w to satisfy ‖(sI_n − A + λ_{i+1}F)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1. Second, it might be tedious and computationally intractable to check the condition of Lemma 9.4.2 if N is very large.

Meanwhile, as shown below, if the network (9.8) satisfies a certain condition, we can solve Problem 9.3.1 by designing the graph such that the second smallest eigenvalue of the graph Laplacian is maximized. Loosely speaking, when the interconnection via the graph is beneficial to consensus, the graph design problem reduces to the maximization of the second smallest eigenvalue.
Suppose that there exists a constant σmin > 0 such that

• A − σF is Hurwitz for all σ ≥ σmin;
• ‖(sI_n − A + σF)⁻¹E‖∞ ≤ γ for all σ ≥ σmin.

Under this condition, we can solve Problem 9.3.1 by designing w such that λ₂ ≥ σmin. Since λ₂ is a concave function of w, we can achieve the condition λ₂ ≥ σmin by increasing the entries of w sufficiently if w does not have any constraints. This means that the interconnection is beneficial to consensus, thereby allowing us to solve Problem 9.3.1 by increasing the interconnection strengths. Based on this idea, we then have the following theorem:
Theorem 9.4.1 For a given γ > 0, the network (9.8) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if there exist 0 < σmin ≤ λ₂ and P = Pᵀ ≻ 0 such that

[ (A − σminF)ᵀP + P(A − σminF) + I   PE ; EᵀP   −γ²I ] ≺ 0,  (9.22a)
FᵀP + PF ⪰ 0.  (9.22b)
Proof: Suppose that there exist 0 < σmin ≤ λ₂ and P = Pᵀ ≻ 0 that satisfy LMI condition (9.22). Based on the Schur complement formula (Boyd et al., 1994), we have the following algebraic Riccati inequality:

(A − σminF)ᵀP + P(A − σminF) + I + (1/γ²)PEEᵀP − (σ − σmin)(FᵀP + PF) ≺ 0

for all σ ≥ σmin, which, since λ_{i+1} ≥ λ₂ ≥ σmin, implies that

(A − λ_{i+1}F)ᵀP + P(A − λ_{i+1}F) + I + (1/γ²)PEEᵀP ≺ 0,  i = 1, …, N − 1.

It then follows from the bounded real lemma (Gahinet & Apkarian, 1994) that A − λ_{i+1}F is Hurwitz and ‖(sI_n − A + λ_{i+1}F)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1. From Lemma 9.4.2, the network (9.8) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ. ∎
According to Theorem 9.4.1, λ₂ is required to be increased so that it is greater than or equal to σmin. The problem of maximizing λ₂ is posed as the following convex optimization problem:

maximize λ₂(w)  (9.23)
subject to cᵀw ≤ b_c.

Note that the optimization problem (9.23) can be transformed into a semidefinite programming problem and thus can be efficiently solved numerically (Sun et al., 2006).
Based on Theorem 9.4.1, we provide a procedure to solve Problem 9.3.1 as follows: (1) solve the convex optimization problem (9.23) to find the maximum second smallest eigenvalue λ₂* and the optimizer w*; (2) check the feasibility of LMI condition (9.22) with σmin = λ₂*; (3) if the LMI condition is feasible, the edge weight is designed as w = w*.
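The convexity fact underlying (9.23) — λ₂ is a concave function of the edge weights, being an infimum of functions linear in w — can be probed numerically. A minimal sketch (numpy; the 5-node ring and the random weight vectors are illustrative assumptions):

```python
import numpy as np

def laplacian(n, edges, w):
    """Weighted graph Laplacian from an edge list and edge weights."""
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L

def lambda2(n, edges, w):
    """Second smallest Laplacian eigenvalue (algebraic connectivity)."""
    return np.sort(np.linalg.eigvalsh(laplacian(n, edges, w)))[1]

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]    # 5-node ring
rng = np.random.default_rng(0)
w1 = rng.uniform(1.0, 10.0, len(edges))
w2 = rng.uniform(1.0, 10.0, len(edges))

# midpoint concavity: lambda2((w1+w2)/2) >= (lambda2(w1)+lambda2(w2))/2
mid = lambda2(5, edges, (w1 + w2) / 2)
assert mid >= (lambda2(5, edges, w1) + lambda2(5, edges, w2)) / 2 - 1e-9

# scaling all weights scales lambda2, so unconstrained weights can always reach sigma_min
assert np.isclose(lambda2(5, edges, 10 * w1), 10 * lambda2(5, edges, w1))
```

A full solution of (9.23) additionally enforces the budget cᵀw ≤ b_c, for which a semidefinite programming solver such as the one used in Section 9.6 is needed.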
In general, ‖T_zd(s)‖∞ is a nonlinear function of the edge weight w. Thus it is usually difficult to design w with consideration of the disturbance attenuation performance of the network. While Theorem 9.4.1 only provides a sufficient condition, we expect that many networks satisfy LMI condition (9.22) since interconnections among individual systems are usually beneficial to consensus of the network.
9.5 Controller design for disturbance attenuation
In this section, we study Problem 9.3.2. For the decentralized controller case, we show that the overall network cannot generally be decomposed into lower-order systems since G_p and G_c are usually not identical. Thus we consider two special cases for which the overall network can be decomposed in the manner of the previous section. We then provide a sufficient LMI condition for the general case. For the distributed controller case, through the coordinate transformations used in subsection 9.4.1, the overall network is readily decomposed into N − 1 low-order independent systems. Based on the decomposition, we provide an LMI condition to solve Problem 9.3.2.
9.5.1 Design of decentralized control networks
If each individual system (9.10) has a decentralized controller (9.12), the closed loop equation for the network is arranged as

ẋ = [(I_N ⊗ A) − (L_p ⊗ F) − (L_c ⊗ BK)]x + (I_N ⊗ E)d,  (9.24a)
z = (L̄ ⊗ I_n)x,  (9.24b)

where L_p and L_c denote the Laplacian matrices of G_p and G_c, respectively, and L̄ is defined in (9.5).
Based on the coordinate transformations used in subsection 9.4.1, we obtain the following reduced order equivalent system:

z̄′˙ = [(I_{N−1} ⊗ A) − (Λ_p′ ⊗ F) − (U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ BK)]z̄′ + (I_{N−1} ⊗ E)d̄′,  (9.25)

where Λ_p′ = diag(λ_{p,2}, …, λ_{p,N}) and U_p′ is an orthogonal matrix such that U_p′ᵀŪ′ᵀL_pŪ′U_p′ = Λ_p′. It can be shown that U_p′ᵀŪ′ᵀL_cŪ′U_p′ is symmetric and positive definite. Further, the eigenvalues of U_p′ᵀŪ′ᵀL_cŪ′U_p′ are 0 ≤ λ_{c,2} ≤ ⋯ ≤ λ_{c,N}, where λ_{c,i} is the ith eigenvalue of L_c for all i = 2, …, N.

Note that U_p′ᵀŪ′ᵀL_cŪ′U_p′ is not diagonal in general, and thus (9.25) is not decomposed.
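The obstruction is that L_p and L_c generally do not commute, so no single orthogonal basis diagonalizes both. A simplified full-space illustration (numpy; a 4-node path for G_p and a 4-node star for G_c, chosen arbitrarily):

```python
import numpy as np

# two Laplacians on 4 nodes: Lp a path, Lc a star — they do not commute
Lp = np.array([[ 1., -1.,  0.,  0.],
               [-1.,  2., -1.,  0.],
               [ 0., -1.,  2., -1.],
               [ 0.,  0., -1.,  1.]])
Lc = np.array([[ 3., -1., -1., -1.],
               [-1.,  1.,  0.,  0.],
               [-1.,  0.,  1.,  0.],
               [-1.,  0.,  0.,  1.]])

# orthogonal eigenbasis of Lp
_, Up = np.linalg.eigh(Lp)
M = Up.T @ Lc @ Up          # symmetric and positive semidefinite, but not diagonal
off = M - np.diag(np.diag(M))
assert np.linalg.norm(off) > 1e-6        # off-diagonal terms remain
assert np.allclose(M, M.T)               # symmetry is preserved
```

If M were diagonal, L_c = U_pMU_pᵀ would commute with L_p; since a path and a star Laplacian do not commute, nonzero off-diagonal couplings survive, which is exactly why (9.25) does not split into independent modes.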
As a special case, we consider the case that K can be designed such that BK = F. In this case, the network (9.24) is described by

ẋ = [(I_N ⊗ A) − (L ⊗ F)]x + (I_N ⊗ E)d,  (9.26a)
z = (L̄ ⊗ I_n)x,  (9.26b)
where L := L_p + L_c, which is of the same form as (9.8). Thus it follows from Theorem 9.4.1 that the network asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if there exists P = Pᵀ ≻ 0 that satisfies LMI condition (9.22). To solve Problem 9.3.2 in this case, we let w_c = k_{w_c}1_{M_c} for some k_{w_c} > 0 and find the second smallest eigenvalue of L. We then check the feasibility of LMI condition (9.22). Finally, if k_{w_c}‖K‖ ≤ b_{Kw}, then constraint (9.15) is satisfied.
Though we cannot decompose the network (9.24) in general, we have a sufficient condition based on the symmetry and the positive definiteness of U_p′ᵀŪ′ᵀL_cŪ′U_p′.
Theorem 9.5.1 For a given γ > 0, the network (9.24) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if there exist 0 < σmin ≤ λ_{c,2}, Y, and Q = Qᵀ ≻ 0 such that

[ Q(A − λ_{p,2}F)ᵀ + (A − λ_{p,2}F)Q − σmin(BY + YᵀBᵀ) + EEᵀ   Q ; Q   −γ²I ] ≺ 0,  (9.27a)
BY + YᵀBᵀ ⪰ 0,  (9.27b)
FQ + QFᵀ ⪰ 0.  (9.27c)
Proof: Suppose that there exist 0 < σmin ≤ λ_{c,2}, Y, and Q = Qᵀ ≻ 0 that satisfy LMI condition (9.27). Let P = γ²Q⁻¹ and K = YQ⁻¹. Then P and K satisfy

(A − λ_{p,i+1}F − λ_{c,2}BK)ᵀP + P(A − λ_{p,i+1}F − λ_{c,2}BK) + I + (1/γ²)PEEᵀP ≺ 0  (9.28)

for all i = 1, …, N − 1 and P(BK) + (BK)ᵀP ⪰ 0.
Let V(z̄′) := z̄′ᵀ(I_{N−1} ⊗ P)z̄′. Then the time-derivative of V along the trajectory of (9.25) is given as

V̇(z̄′) = Σ_{i=1}^{N−1} z̄_i′ᵀ[(A − λ_{p,i+1}F)ᵀP + P(A − λ_{p,i+1}F)]z̄_i′ + 2Σ_{i=1}^{N−1} z̄_i′ᵀPE d̄_i′ − W(z̄′),

where

W(z̄′) := z̄′ᵀ(I_{N−1} ⊗ P)(U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ BK)z̄′ + z̄′ᵀ(U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ BK)ᵀ(I_{N−1} ⊗ P)z̄′.

Further, by some algebra, we have

W(z̄′) = z̄′ᵀ(U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ PBK)z̄′ + z̄′ᵀ(U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ PBK)ᵀz̄′
 = z̄′ᵀ(U_p′ᵀŪ′ᵀL_cŪ′U_p′ ⊗ (PBK + (BK)ᵀP))z̄′
 ≥ λ_{c,2} Σ_{i=1}^{N−1} z̄_i′ᵀ[(BK)ᵀP + P(BK)]z̄_i′,  (9.29)

where we use (9.1), (9.2), and (9.3). Thus

V̇(z̄′) ≤ Σ_{i=1}^{N−1} z̄_i′ᵀ[(A − λ_{p,i+1}F − λ_{c,2}BK)ᵀP + P(A − λ_{p,i+1}F − λ_{c,2}BK)]z̄_i′ + 2Σ_{i=1}^{N−1} z̄_i′ᵀPE d̄_i′.

It then follows from (9.28) that

V̇(z̄′) ≤ Σ_{i=1}^{N−1} [z̄_i′ ; d̄_i′]ᵀ [ (A − λ_{p,i+1}F − λ_{c,2}BK)ᵀP + P(A − λ_{p,i+1}F − λ_{c,2}BK)   PE ; (PE)ᵀ   0 ] [z̄_i′ ; d̄_i′]
 < Σ_{i=1}^{N−1} [z̄_i′ ; d̄_i′]ᵀ [ −I   0 ; 0   γ²I ] [z̄_i′ ; d̄_i′],

which implies that ‖T_{z̄′d̄′}‖∞ = ‖T_zd‖∞ ≤ γ (Gahinet & Apkarian, 1994). Since V̇(z̄′) is negative definite when d̄′ ≡ 0, the network (9.24) asymptotically reaches consensus. This completes the proof. ∎
From the proof of Theorem 9.5.1, K can be designed as K = YQ⁻¹. To check constraint (9.15), we can consider the condition ‖YQ⁻¹‖₂ ≤ α with α = b_{Kw}/k_{w_c} in addition to (9.27), assuming that w_c = k_{w_c}1_{M_c}. Such a norm boundedness condition is satisfied if the following LMI holds:

[ 2Q − I_n   Yᵀ ; Y   α²I_n ] ≻ 0,  (9.30)

which is equivalent to Q ≻ (1/2)I_n and α²(2Q − I_n) − YᵀY ≻ 0. Since Q² ⪰ 2Q − I_n, it is obvious that α²Q² − YᵀY ≻ 0. It then follows from the congruence transformation (Boyd et al., 1994) that α²I_n − Q⁻¹YᵀYQ⁻¹ ≻ 0, which leads to ‖YQ⁻¹‖₂ ≤ α. In summary, by checking the feasibility of LMI conditions (9.27) and (9.30) while changing the value of k_{w_c}, we can solve Problem 9.3.2.
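The chain (9.30) ⇒ ‖YQ⁻¹‖₂ ≤ α can be traced on a concrete feasible point (a hand-picked Q and Y for illustration, not output of any LMI solver):

```python
import numpy as np

n = 3
alpha = 1.0
Q = 1.5 * np.eye(n)                  # satisfies Q > (1/2) I
Y = 0.2 * np.ones((n, n))            # small enough for feasibility

# LMI (9.30): [[2Q - I, Y^T], [Y, alpha^2 I]] > 0
M = np.block([[2 * Q - np.eye(n), Y.T],
              [Y, alpha**2 * np.eye(n)]])
assert np.all(np.linalg.eigvalsh(M) > 0)

# Schur complement form: alpha^2 (2Q - I) - Y^T Y > 0
S = alpha**2 * (2 * Q - np.eye(n)) - Y.T @ Y
assert np.all(np.linalg.eigvalsh(S) > 0)

# implied bound on the controller gain K = Y Q^{-1}
K = Y @ np.linalg.inv(Q)
assert np.linalg.norm(K, 2) <= alpha
```

Here the gain comes out as ‖K‖₂ = 0.4 ≤ α, illustrating why adding (9.30) to the design LMIs enforces the norm constraint on the synthesized controller.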
In the case that E_c = E_p, we can design G_c such that L_c = kL_p for some k > 0, and thus we have U_p′ᵀŪ′ᵀL_cŪ′U_p′ = kΛ_p′. Since U_p′ᵀŪ′ᵀL_cŪ′U_p′ is diagonal in this case, we obtain the following decomposed systems equivalent to the network (9.24):

z̄_i′˙ = [A − λ_{p,i+1}(F + kBK)]z̄_i′ + E d̄_i′,  i = 1, …, N − 1.  (9.31)
Based on the decomposed systems (9.31), we provide the following corollary:

Corollary 9.5.1 Suppose that L_c = kL_p for some k > 0. For a given γ > 0, the network (9.24) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if there exist Y and Q = Qᵀ ≻ 0 such that

[ Q(A − λ_{p,2}F)ᵀ + (A − λ_{p,2}F)Q − kλ_{p,2}(BY + YᵀBᵀ) + EEᵀ   Q ; Q   −γ²I ] ≺ 0,  (9.32a)
BY + YᵀBᵀ ⪰ 0,  (9.32b)
FQ + QFᵀ ⪰ 0.  (9.32c)
One advantage of this case is that the actual H∞ norm can be obtained based on the decomposed systems (9.31). That is, after designing K = YQ⁻¹ based on Corollary 9.5.1, the actual H∞ norm can be obtained as ‖T_zd(s)‖∞ = ‖T_{z̄₁′d̄₁′}(s)‖∞.
9.5.2 Design of distributed control networks
If each individual system (9.10) has a distributed controller (9.13), the overall closed loop equation for the network is arranged as

ẋ = [(I_N ⊗ A) − (L_p ⊗ F) − (I_N ⊗ BK)]x + (I_N ⊗ E)d,  (9.33a)
z = (L̄ ⊗ I_n)x,  (9.33b)

where L_p is the Laplacian matrix of G_p and L̄ is defined in (9.5).
Based on the coordinate transformations used in subsection 9.4.1, the overall network (9.33) is decomposed into the following N − 1 independent systems:

z̄_i′˙ = (A − λ_{p,i+1}F − BK)z̄_i′ + E d̄_i′,  i = 1, …, N − 1,  (9.34)

where λ_{p,i+1} are the eigenvalues of L_p for i = 1, …, N − 1. Further, based on an argument similar to the proof of Lemma 9.4.1, it can be shown that

‖T_zd(s)‖∞ = max_{i=1,…,N−1} ‖(sI_n − A + λ_{p,i+1}F + BK)⁻¹E‖∞.

Thus, similar to Lemma 9.4.2, it can be shown that the network (9.33) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if and only if A − BK − λ_{p,i+1}F is Hurwitz and ‖(sI_n − A + λ_{p,i+1}F + BK)⁻¹E‖∞ ≤ γ for all i = 1, …, N − 1.
The following theorem provides a sufficient condition:

Theorem 9.5.2 For a given γ > 0, the network (9.33) asymptotically reaches consensus when d ≡ 0 and ‖T_zd(s)‖∞ ≤ γ if there exist 0 < σmin ≤ λ_{p,2}, Y, and Q = Qᵀ ≻ 0 such that

[ Q(A − σminF)ᵀ + (A − σminF)Q − (BY + YᵀBᵀ) + EEᵀ   Q ; Q   −γ²I ] ≺ 0,  (9.35a)
FQ + QFᵀ ⪰ 0.  (9.35b)
Proof: Suppose that there exist 0 < σmin ≤ λ_{p,2}, Y, and Q = Qᵀ ≻ 0 that satisfy LMI condition (9.35). Let P = γ²Q⁻¹ and K = YQ⁻¹. Then P and K satisfy

(A − λ_{p,i+1}F − BK)ᵀP + P(A − λ_{p,i+1}F − BK) + I + (1/γ²)PEEᵀP ≺ 0,  i = 1, …, N − 1,

which, based on the bounded real lemma (Gahinet & Apkarian, 1994), implies that A − BK − λ_{p,i+1}F is Hurwitz and ‖T_{z̄_i′d̄_i′}(s)‖∞ ≤ γ for all i = 1, …, N − 1. Since A − BK − λ_{p,i+1}F is Hurwitz, the network (9.33) asymptotically reaches consensus when d ≡ 0. Further, since ‖T_zd(s)‖∞ = max_{i=1,…,N−1} ‖T_{z̄_i′d̄_i′}(s)‖∞, it follows that ‖T_zd(s)‖∞ ≤ γ. ∎
For the distributed control case, Problem 9.3.2 is solved if ‖K‖₂ = ‖YQ⁻¹‖₂ ≤ b_K. To satisfy this constraint, we can consider LMI condition (9.30) with α ≤ b_K in addition to (9.35).
9.6 Examples
In this section, we provide several design examples. To solve LMI problems, we use YALMIP (Löfberg, n.d.). To solve problem (9.23), we use CVX, a package for specifying and solving convex programs (Grant & Boyd, 2008; Grant et al., n.d.).
Figure 9.1: Four types of graphs: (a) complete graph, (b) ring graph, (c) star graph, (d) tree graph.
9.6.1 Graph design example
Consider the network of N linear coupled oscillators over G:

ẋ_i = Ax_i + F Σ_{j∈N_i} w_{ij}(x_j − x_i) + Ed_i,  i = 1, …, N,  (9.36)

where x_i ∈ R² and A, E, and F are defined as follows:

A = [ 0 1 ; −1 0 ],  E = [ 1 ; 1 ],  F = [ 1 0 ; 0 1 ].
For these linear coupled oscillators, if γ > 0 is given, we can find σmin that satisfies LMI condition (9.22). For instance, if γ = 0.05, then LMI condition (9.22) is feasible when σmin ≥ 23.9. Next we assume that N = 5 and b_c = 50. For the different graph types shown in Fig. 9.1, we then estimate the minimum H∞ norm for the network. First, we find the maximum second smallest eigenvalue for each case by solving the convex optimization problem (9.23). For simplicity, we assume that all the entries of c are one for each case. Second, after plugging λ₂* into σmin, we solve the following optimization problem to obtain the minimum value of γ:

minimize γ  (9.37)
subject to P = Pᵀ ≻ 0, (9.22).

Table 9.1: Result for the graph design example.

               Complete graph   Ring graph   Star graph   Tree graph
w*             5 × 1₁₀          10 × 1₅      12.5 × 1₄    [10 15 15 10]ᵀ
λ₂*            25.0             13.82        12.5         5.0
γmin           0.0565           0.1021       0.1128       0.2774
‖T_zd(s)‖∞     0.0565           0.1021       0.1128       0.2774
Table 9.1 shows the result. In Table 9.1, γmin is the solution of optimization problem (9.37) while ‖T_zd‖∞ is the actual H∞ norm of the network for each graph. From Table 9.1, we can see that a greater λ₂* yields a smaller γmin, which implies that more interconnection leads to better disturbance attenuation performance. This is because LMI condition (9.22) characterizes the network condition under which interconnections are beneficial to consensus. Further, according to Theorem 9.4.1, it is desirable for a network to have a greater λ₂* under this condition. In this sense, the best disturbance attenuation performance estimate is ensured when (V, E) is a complete graph. Note that the solution of the optimization problem (9.23) is greatest when (V, E) is a complete graph.
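The λ₂* column of Table 9.1 can be reproduced directly from the weighted Laplacians. A sketch (numpy; the edge lists encode the graphs of Fig. 9.1, with the tree taken as a path, an assumption consistent with its symmetric optimal weights; each weight vector spends the full budget 1ᵀw ≤ 50):

```python
import numpy as np

def laplacian(n, edges, w):
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
    return L

def lam2(n, edges, w):
    return np.sort(np.linalg.eigvalsh(laplacian(n, edges, w)))[1]

N = 5
complete = [(i, j) for i in range(N) for j in range(i + 1, N)]   # 10 edges
ring = [(i, (i + 1) % N) for i in range(N)]                      # 5 edges
star = [(0, i) for i in range(1, N)]                             # 4 edges
tree = [(0, 1), (1, 2), (2, 3), (3, 4)]                          # path, 4 edges

# optimal weights from Table 9.1 (each uses the full budget of 50)
assert np.isclose(lam2(N, complete, [5.0] * 10), 25.0)
assert np.isclose(lam2(N, ring, [10.0] * 5), 13.82, atol=0.01)
assert np.isclose(lam2(N, star, [12.5] * 4), 12.5)
assert np.isclose(lam2(N, tree, [10.0, 15.0, 15.0, 10.0]), 5.0)
```

For the complete graph with uniform weight w the nonzero Laplacian eigenvalues all equal Nw, so the budget-50 optimum 5 × 1₁₀ yields λ₂* = 25, the largest of the four topologies, consistent with the ordering in Table 9.1.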
9.6.2 Controller design example
We next consider the design of a controller for the network of five linear coupled oscillators over G_p:

ẋ_i = Ax_i + F Σ_{j∈N_{p,i}} w_{p,ij}(x_j − x_i) + Bu_i + Ed_i,  i = 1, …, 5,  (9.38)

where x_i ∈ R² and A, B, E, and F are defined as follows:

A = [ 0 1 ; −1 0 ],  B = [ 0 1 ; 1 0 ],  E = [ 1 ; 1 ],  F = [ 1 0 ; 0 1 ].
We assume that G_p is a ring graph with w_p = 1₅. Thus we have λ_{p,2} = 1.382.
For the design of a decentralized controller for the network (9.38), we assume that the information interconnection topologies are given as depicted in Fig. 9.1 and that b_{Kw} = 40. To estimate the best disturbance attenuation performance for each information interconnection topology based on Theorem 9.5.1, we find λ_{c,2} by setting w_c = k_{w_c}1_{M_c} with different values of k_{w_c}, plug λ_{c,2} into σmin, and then solve the following optimization problem with α = b_{Kw}/k_{w_c}:

minimize γ  (9.39)
subject to Y ∈ R^{n×n}, Q ∈ R^{n×n}, Q = Qᵀ ≻ 0, (9.27), (9.30).
Further, the controller gain matrix K is also obtained by solving (9.39). Table 9.2 shows the result. In Table 9.2, we let w_c = 1_{M_c} for each case because the solution of (9.39) does not depend on k_{w_c} in this example, though it might in general.
Table 9.2: Result for the decentralized controller design example.

               Complete graph       Ring graph           Star graph           Tree graph
w_c            1₁₀                  1₅                   1₄                   1₄
λ_{c,2}        5                    1.382                1                    0.382
K              [0.1908 34.5966;     [0.6425 34.4697;     [0.9244 34.4434;     [2.7760 33.8264;
                34.7841 −0.1740]     35.0695 −0.5626]     35.3193 −0.8597]     36.2309 −2.5066]
γmin           0.0439               0.0835               0.0982               0.1592
‖T_zd(s)‖∞     0.0095               0.0286               0.0440               0.0862

Meanwhile, the results based on Theorem 9.5.1 are conservative. The conservatism mainly arises from inequality (9.29). We can find the actual norm ‖T_zd(s)‖∞ based on the decomposed systems (9.31) when L_c = kL_p for some k > 0. Thus, in the case that the information interconnection graph is a ring graph, which has the same topology as the physical interconnection, we solve the following optimization problem to estimate the best disturbance attenuation performance with different values of k_{w_c}:

minimize γ
subject to Y ∈ R^{n×n}, Q ∈ R^{n×n}, Q = Qᵀ ≻ 0, (9.32), (9.30).
When w_c = 1₅, we have

K = [ 0.4558 34.5675 ; 35.0033 −0.3995 ],

and the actual H∞ norm is obtained as ‖T_zd(s)‖∞ = ‖T_{z̄₁′d̄₁′}(s)‖∞ = 0.0286.
To design a distributed controller for the network (9.38) based on Theorem 9.5.2, we assume that b_K = 40. To estimate the best disturbance attenuation performance, we solve the following optimization problem with σmin = λ_{p,2} and α = b_K:

minimize γ  (9.40)
subject to Y ∈ R^{n×n}, Q ∈ R^{n×n}, Q = Qᵀ ≻ 0, (9.35), (9.30).
As the solution of (9.40), we have γmin = 0.0982 with

K = [ 1.2870 34.1670 ; 35.2262 −0.9006 ].

Further, the actual H∞ norm is given by ‖T_zd(s)‖∞ = 0.0389.
Some comments are in order about the above simulation results. First, more information interconnection tends to ensure a better disturbance attenuation performance estimate in consensus networks. This is because the LMI conditions in Theorem 9.5.1 and Corollary 9.5.1 characterize the network condition under which interconnections are beneficial to consensus. That is, due to LMI conditions (9.32b) and (9.27b), γmin does not increase as σmin increases. In this case, a decentralized controller ensures the best disturbance attenuation performance estimate when the information interconnection is complete because the second smallest eigenvalue of a Laplacian matrix is a concave function of the edge weights (Boyd, 2006). Second, contrary to intuition, distributed controllers are not always better than decentralized controllers in terms of disturbance attenuation in consensus networks under our problem setup. In the above example, the decentralized controller shows a better disturbance attenuation performance estimate when the information interconnection is a complete or ring graph.
9.7 Conclusion
We have provided algorithms to solve H∞ suboptimal control problems to guarantee disturbance attenuation performance in undirected consensus networks of identical linear time-invariant systems. Our results included a graph design result, where the objective was to determine optimal edge weights to provide the best possible disturbance attenuation. We also showed how to formulate and solve two control problems, one using nearest neighbor data in the feedback and the other using local-only feedback data. The techniques all depended on maximization of the second smallest eigenvalue of the graph Laplacian under various constraints. Though only undirected networks have been addressed in this chapter, it is desirable to extend the results to directed networks. Another research direction is the extension to nonidentical system networks. One obstacle to such extensions is that in either case the overall network cannot be decomposed into lower-order independent systems.
Chapter 10
Conclusion

10.1 Summary of results
Under the distance-based problem setup, we proposed a new decentralized formation control law that considers the inter-agent distance dynamics. As opposed to most existing formation control laws under the distance-based setup, the proposed law is designed by focusing on the dynamics of the edges. The proposed law showed performance comparable to existing control laws. In particular, the proposed law exhibited an interesting property when applied to three-agent formations. We also presented a local asymptotic stability analysis for undirected and directed acyclic formations of single-integrator modeled agents, and showed that undirected formations of double-integrator modeled agents are also locally asymptotically stable.
Distance-based formation control problems are generally intractable due to the lack of information available to the agents. Considering such intractability, we proposed a displacement-based formation control strategy via orientation alignment under the distance-based setup. The orientation update law allows the agents to align their local reference frames and utilize their directional information to stabilize their formation. In short, the lack of available information is overcome by means of cooperation among the agents through communication. Under the proposed control strategy, the conditions on the interaction topology for achieving desired formations are relaxed and the region of attraction is clearly determined.
To enhance the performance of formation control, we proposed a position-based formation control strategy via a distributed observer under the displacement-based setup. The distributed observer ensures the estimation of the positions of the agents up to translation, and the agents stabilize their positions based on the estimated positions. The performance of formation control is enhanced under the proposed control strategy, especially once the distributed observer reaches steady state. Further, combining the proposed orientation alignment law and the distributed observer, we proposed a position-based control law under the distance-based setup. Based on cooperation among the agents, the agents obtain position information as well as directional information under the distance-based setup.
Motivated by the LFC network in a power grid, we derived sufficient conditions for consensus of a network of a class of heterogeneous linear agents. Though much research interest has been focused on networks of identical linear agents, consensus of heterogeneous agents has yet to be fully studied. The derived conditions can be used for checking the stability of the LFC network.
Considering that any physical system is affected by exogenous disturbances, we studied disturbance attenuation in the consensus network of identical linear agents. For a given linear agent network, we proposed a design method for the underlying graph. Further, we also proposed a procedure to design a decentralized controller.
10.2 Future works
The results in this dissertation could be extended in several directions. First, there are several open issues regarding the stability of formations under the distance-based setup. While undirected rigid formations are locally stable, global asymptotic stability has yet to be investigated. If such formations are not globally asymptotically stable, it is worth investigating the region of attraction of such formations. Stability of general persistent formations is still an open problem, whereas acyclic persistent formations and minimally persistent formations are locally asymptotically stable.
Second, global asymptotic stability of orientation angles under the alignment law in Chapters 5 and 7 deserves investigation. Simulation results seem to suggest that the orientation angles reach consensus in the global sense unless they start from certain bad initial angles. Global asymptotic stability of the orientation angles would enhance the applicability of the alignment law to real systems.
Third, focusing on the information types available to the agents, simple dynamic models such as single- and double-integrators were mainly addressed in this dissertation. What if agents have more complicated, higher-order, and/or nonidentical dynamics? Obviously, real systems contain some nonlinearity, which might deteriorate the stability and performance of formation control laws, and thus a significant amount of effort might be required for the stabilization of real systems. Such a study, which would be of interest from a theoretical viewpoint, would also be of great importance in putting the results of this dissertation into practice.
Fourth, though constraints on the interaction topology were not considered, it is more practical to allow the connectivity among the agents to evolve along with the states of the agents. For instance, a mobile agent generally has a limited sensing range in reality, and thus the neighbors of the agent are determined by the positions of all agents and the sensing range, which means that the set of neighbors of the agent varies along with the evolution of the formation.
Finally, though we focused on disturbance attenuation in the undirected consensus network of identical linear agents for simplicity, the results should be extended to directed networks and to nonidentical agents. One obstacle to the extension is that the overall equation cannot be decomposed into lower-order independent systems.
Bibliography
Absil, R.A., & Kurdyka, K. (2006). On the stable equilibrium points of gradient systems.
Systems & Control Letters 55(7), 573–577.
Alfakih, A.Y., Khandani, A., & Wolkowicz, H. (1999). Solving Euclidean distance matrix completion problems via semidefinite programming. Computational Optimization and Applications 12(1), 13–30.
Asimow, L., & Roth, B. (1979). The rigidity of graphs, II. Journal of Mathematical Analysis and Applications 68(1), 171–190.
Aspnes, J., Eren, T., Goldenberg, D.K., Morse, A.S., Whiteley, W., Yang, Y.R., Anderson,
B.D.O., & Belhumeur, P.N. (2006). A theory of network localization. IEEE Transactions
on Mobile Computing pp. 1663–1678.
Baillieul, J., & Suri, A. (2003). Information patterns and hedging Brockett's theorem in controlling vehicle formations. In the Proceedings of the 42nd IEEE Conference on Decision and Control (CDC). Vol. 1. pp. 556–563.
Biswas, P., & Ye, Y. (2004). Semidefinite programming for ad hoc wireless sensor network localization. In the Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks. pp. 46–54.
Boyd, S. (2006). Convex optimization of graph Laplacian eigenvalues. In the Proceedings of the International Congress of Mathematicians. pp. 1311–1320.
Boyd, S., El Ghaoui, L., Feron, E., & Balakrishnan, V. (1994). Linear matrix inequalities in
system and control theory. Vol. 15. Society for Industrial Mathematics.
Buck, J.B. (1935). Synchronous flashing of fireflies experimentally induced. Science
81(2101), 339.
Cao, M., Morse, A.S., Yu, C., Anderson, B.D.O., & Dasgupta, S. (2007). Controlling a
triangular formation of mobile autonomous agents. In the Proceedings of the 46th IEEE
Conference on Decision and Control (CDC). pp. 3603–3608.
Cao, M., Morse, A.S., Yu, C., Anderson, B.D.O., & Dasgupta, S. (2011). Maintaining a
directed, triangular formation of mobile autonomous agents. Communications in Information and Systems 11, 1–16.
Cao, M., Yu, C., Morse, A.S., Anderson, B.D.O., & Dasgupta, S. (2008). Generalized
controller for directed triangle formations. In the Proceedings of the 2008 IFAC World
Congress. pp. 6590–6595.
Dattorro, J. (2005). Convex optimization & Euclidean distance geometry. Meboo Publishing,
California, USA.
Dimarogonas, D.V., & Johansson, K.H. (2010). Stability analysis for multi-agent systems
using the incidence matrix: Quantized communication and formation control. Automatica
46(4), 695–700.
Dimarogonas, D.V., & Kyriakopoulos, K.J. (2008). A connection between formation infeasi-
bility and velocity alignment in kinematic multi-agent systems. Automatica 44(10), 2648–
2654.
Dong, W., & Farrell, J.A. (2008). Cooperative control of multiple nonholonomic mobile
agents. IEEE Transactions on Automatic Control 53(6), 1434–1448.
Dörfler, F., & Bullo, F. (2011). Topological equivalence of a structure-preserving power network model and a non-uniform Kuramoto model of coupled oscillators. In the Proceedings of the 50th IEEE Conference on Decision and Control (CDC) and European Control Conference (ECC). pp. 7099–7104.
Dörfler, F., & Francis, B.A. (2009). Formation control of autonomous robots based on cooperative behavior. In the Proceedings of the 2009 European Control Conference (ECC).
pp. 2432–2437.
Dörfler, F., & Francis, B.A. (2010). Geometric analysis of the formation problem for autonomous robots. IEEE Transactions on Automatic Control 55(10), 2379–2384.
Fang, J., Cao, M., Morse, A.S., & Anderson, B.D.O. (2009). Sequential localization of sensor
networks. SIAM Journal on Control and Optimization 48(1), 321–350.
Fax, J.A., & Murray, R.M. (2004). Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control 49(9), 1465–1476.
Gahinet, P., & Apkarian, P. (1994). A linear matrix inequality approach to H∞ control.
International Journal of Robust and Nonlinear Control 4(4), 421–448.
Gattami, A., & Murray, R. (2004). A frequency domain condition for stability of interconnected MIMO systems. In the Proceedings of the 2004 American Control Conference (ACC). Vol. 4. IEEE. pp. 3723–3728.
Godsil, C.D., & Royle, G. (2001). Algebraic graph theory. Springer, New York, USA.
Grant, M., & Boyd, S. (2008). Graph implementations for nonsmooth convex programs.
Recent advances in learning and control pp. 95–110.
Grant, M., Boyd, S., & Ye, Y. (n.d.). CVX: Matlab software for disciplined convex programming, 2008. Web page and software available at http://stanford.edu/~boyd/cvx.
Havel, T.F., Kuntz, I.D., & Crippen, G.M. (1983). The theory and practice of distance geometry. Bulletin of Mathematical Biology 45(5), 665–720.
Hendrickx, J.M., Anderson, B., Delvenne, J.C., & Blondel, V.D. (2007). Directed graphs
for the analysis of rigidity and persistence in autonomous agent systems. International
Journal of Robust and Nonlinear Control 17(10-11), 960–981.
Horn, R.A., & Johnson, C.R. (1991). Topics in matrix analysis. Cambridge University Press.
Ioannou, P.A., & Sun, J. (1996). Robust adaptive control. Prentice-Hall.
Jadbabaie, A., Lin, J., & Morse, A.S. (2003). Coordination of groups of mobile autonomous
agents using nearest neighbor rules. IEEE Transactions on Automatic Control 48(6), 988–
1001.
Khalil, H.K. (2002). Nonlinear systems. 3rd edition. Prentice-Hall, New Jersey, USA.
Krick, L., Broucke, M.E., & Francis, B.A. (2009). Stabilization of infinitesimally rigid formations of multi-robot networks. International Journal of Control 82(3), 423–439.
Kundur, P., Balu, N.J., & Lauby, M.G. (1994). Power system stability and control. Vol. 4. McGraw-Hill, New York, USA.
Laman, G. (1970). On graphs and rigidity of plane skeletal structures. Journal of Engineering Mathematics 4(4), 331–340.
Laub, A.J. (2005). Matrix analysis for scientists and engineers. Society for Industrial and Applied Mathematics.
Li, Z., Duan, Z., & Chen, G. (2011). On H∞ and H2 performance regions of multi-agent
systems. Automatica.
Lin, P., Jia, Y., & Li, L. (2008). Distributed robust H∞ consensus control in directed networks of agents with time-delay. Systems & Control Letters 57(8), 643–653.
Lin, Z., Broucke, M., & Francis, B.A. (2004). Local control strategies for groups of mobile
autonomous agents. IEEE Transactions on Automatic Control 49(4), 622–629.
Lin, Z., Francis, B.A., & Maggiore, M. (2005). Necessary and sufficient graphical conditions
for formation control of unicycles. IEEE Transactions on Automatic Control 50(1), 121–
127.
Liu, Y., & Jia, Y. (2010). Consensus problem of high-order multi-agent systems with external
disturbances: An H∞ analysis approach. International Journal of Robust and Nonlinear
Control 20(14), 1579–1593.
Liu, Y., Jia, Y., Du, J., & Yuan, S. (2009). Dynamic output feedback control for consensus
of multi-agent systems: an H∞ approach. In the Proceedings of 2009 American Control
Conference (ACC). IEEE. pp. 4470–4475.
Löfberg, J. (2004). YALMIP: a toolbox for modeling and optimization in MATLAB. URL: http://control.ee.ethz.ch/~joloef/yalmip.php.
Lojasiewicz, S. (1970). Sur les ensembles semi-analytiques. In Actes du Congrès International des Mathématiciens. Vol. 2. pp. 237–241.
Lozano, R., Brogliato, B., Egeland, O., & Maschke, B. (2000). Dissipative systems analysis
and control: theory and applications. Springer.
Mao, G., Fidan, B., & Anderson, B. (2007). Wireless sensor network localization techniques.
Computer Networks 51(10), 2529–2553.
Merris, R. (1994). Laplacian matrices of graphs: a survey. Linear Algebra and its Applications 197, 143–176.
Mesbahi, M., & Egerstedt, M. (2010). Graph theoretic methods in multiagent networks.
Princeton University Press, New Jersey, USA.
Moore, K.L., Vincent, T., Lashhab, F., & Liu, C. (2011). Dynamic consensus networks with application to the analysis of building thermal processes. In the Proceedings of the IFAC World Congress (IFAC WC). Vol. 18. pp. 3078–3083.
Moreau, L. (2004). Stability of continuous-time distributed consensus algorithms. In the Proceedings of the 43rd IEEE Conference on Decision and Control (CDC). Vol. 4. pp. 3998–4003.
Oh, K.K., & Ahn, H.S. (2011a). Distance-based formation control using Euclidean distance dynamics matrix: general cases. In the Proceedings of the 2011 American Control Conference (ACC). pp. 4816–4821.
Oh, K.K., & Ahn, H.S. (2011b). Distance-based formation control using Euclidean distance dynamics matrix: three-agent case. In the Proceedings of the 2011 American Control Conference (ACC). pp. 4810–4815.
Oh, K.K., & Ahn, H.S. (2011c). Formation control of mobile agent groups based on localization. In the Proceedings of the 2011 IEEE International Symposium on Intelligent Control
(ISIC). pp. 822–827.
Oh, K.K., & Ahn, H.S. (2011d). Formation control of mobile agents based on inter-agent
distance dynamics. Automatica 47(10), 2306–2312.
Olfati-Saber, R., & Murray, R.M. (2002). Distributed cooperative control of multiple vehicle
formations using structural potential functions. In the Proceedings of the 15th IFAC World
Congress (IFAC WC). pp. 346–352.
Olfati-Saber, R., & Murray, R.M. (2004). Consensus problems in networks of agents
with switching topology and time-delays. IEEE Transactions on Automatic Control
49(9), 1520–1533.
Olfati-Saber, R., Fax, J.A., & Murray, R.M. (2007). Consensus and cooperation in networked
multi-agent systems. Proceedings of the IEEE 95(1), 215–233.
Partridge, B.L. (1981). Internal dynamics and the interrelations of fish in schools. Journal of
Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology
144(3), 313–325.
Polderman, J.W., & Willems, J.C. (1998). Introduction to the mathematical theory of systems and control. Springer, New York, USA.
Ren, W., & Atkins, E. (2007). Distributed multi-vehicle coordinated control via local information exchange. International Journal of Robust and Nonlinear Control 17(10-11), 1002–1033.
Ren, W., & Cao, Y. (2011). Distributed coordination of multi-agent networks. Springer, London, UK.
Ren, W., Beard, R.W., & Atkins, E.M. (2007). Information consensus in multivehicle cooperative control. IEEE Control Systems Magazine 27(2), 71–82.
Ren, W., Beard, R.W., & McLain, T.W. (2004). Coordination variables and consensus building in multiple vehicle systems. Cooperative Control pp. 439–442.
Reynolds, C.W. (1987). Flocks, herds, and schools: a distributed behavioral model. Computer Graphics.
Rudin, W. (1976). Principles of mathematical analysis. 3rd edition. McGraw-Hill, Singapore.
Scardovi, L., & Sepulchre, R. (2009). Synchronization in networks of identical linear systems. Automatica 45(11), 2557–2562.
So, A.M.C., & Ye, Y. (2007). Theory of semidefinite programming for sensor network localization. Mathematical Programming 109(2), 367–384.
Stankovic, A.M., Tadmor, G., & Sakharuk, T.A. (1998). On robust control analysis and design for load frequency regulation. IEEE Transactions on Power Systems 13(2), 449–455.
Strogatz, S.H. (2003). Sync: The emerging science of spontaneous order. Hyperion, New
York, USA.
Summers, T., Yu, C., Dasgupta, S., & Anderson, B.D.O. (2011). Control of minimally persistent leader-remote-follower and coleader formations in the plane. IEEE Transactions on Automatic Control PP(99), 1–14.
Sun, J., Boyd, S., Xiao, L., & Diaconis, P. (2006). The fastest mixing Markov process on a graph and a connection to a maximum variance unfolding problem. SIAM Review, pp. 681–699.
Tuna, S.E. (2009). Conditions for synchronizability in arrays of coupled linear systems. IEEE Transactions on Automatic Control 54(10), 2416–2420.
Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I., & Shochet, O. (1995). Novel type of phase
transition in a system of self-driven particles. Physical Review Letters 75(6), 1226–1229.
Wang, J., & Elia, N. (2010). Consensus over network with dynamic channels. International
Journal of Systems, Control, and Communications 2(1), 275–297.
Winfree, A.T. (1967). Biological rhythms and the behavior of populations of coupled oscillators. Journal of Theoretical Biology 16(1), 15–42.
Wooldridge, M.J. (2002). An introduction to multiagent systems. John Wiley & Sons, West
Sussex, UK.
Yu, C., Anderson, B.D.O., Dasgupta, S., & Fidan, B. (2009). Control of minimally persistent
formations in the plane. SIAM Journal on Control and Optimization 48, 206–233.
– 196 –
Acknowledgments
As I bring my joyful and happy doctoral years to a close, I would like to express my gratitude to the many people who have helped me along the way.
First of all, I thank my advisor, Professor Hyo-Sung Ahn. He accepted me as his student when I was agonizing over whether to continue my studies, set an example for me both academically and personally throughout my doctoral program, and supported me in every possible way. I also thank Professors Heung-No Lee, Sung-Chan Jun, and Kwang-Hee Ko for reviewing my dissertation, as well as the other professors of the School of Information and Mechatronics for their teaching during my doctoral years.
I am grateful to Professors Kevin L. Moore and Tyrone L. Vincent of the Colorado School of Mines, who guided me through weekly meetings during my stay as a visiting student in the first half of 2013. Professor Kevin L. Moore also served on my dissertation committee.
I thank the members of the Distributed Control and Automation Laboratory with whom I have shared both hardships and joys: Hwan, Young-Cheol, Sang-Cheol, Ji-Hwan, Byeong-Yeon, Young-Hun, Jae-Young, Byeong-Hun, Seung-Ju, Yoon-Tae, Myong-Chul, and Sung-Mo. In particular, Hwan, who entered the doctoral program with me, has always been a steadfast companion, and Ji-Hwan has been a good colleague with whom I could share my innermost thoughts. I also thank the graduates Tae-Kyung, Sang-Hyuk, Han-Eol, and Son (Tong Duy Son).
I thank Professor Kwang-Woon Lee of Mokpo National Maritime University and Senior Researcher Sang-Taek Lee of the Korea Electronics Technology Institute for their words of encouragement when I was worrying about my career. I am also grateful to Jae-Hyuk, Jong-Deok, and Young-Chae of the SG group, who have always cheered me on, and to Seon-Gu, who made time for me whenever I reached out.
I thank my parents and my parents-in-law, who quietly accepted and supported my decision to become a student again late in life. I also thank my wife, Mi-Hye Hwang, for her understanding while I fell short in my role as head of our family. Finally, I close with a promise to my daughter, Seo-Hyun, to be a better father.
Curriculum vitae
• Name: Kwangkyo Oh (Kwang-Kyo Oh)
• Birth date: October 1, 1975
• Nationality: South Korea
Education
• Ph.D., Information and Mechatronics, Gwangju Institute of Science and Technology,
Gwangju, South Korea, 2013.
• M.S., Electrical and Computer Engineering, Seoul National University, Seoul, South
Korea, 2001.
• B.S., Mineral and Petroleum Engineering, Seoul National University, Seoul, South
Korea, 1998.
Work experience
• Engineer, Power Lab., LG Innotek, Gwangju, South Korea, Apr. 2008–Mar. 2009.
• Engineer, Digital Appliances Division, Samsung Electronics, Gwangju, South Korea, Jul. 2003–Apr. 2008.
• Engineer, Space Research and Development Center, Korea Aerospace Industries, Ltd., Daejeon, South Korea, Jan. 2001–Jun. 2003.
Publications
Journal papers
1. Kwang-Kyo Oh, Fadel Lashhab, Kevin L. Moore, Tyrone L. Vincent, and Hyo-Sung
Ahn, “Output consensus of positive real systems combined with a single-integrator,”
submitted to Systems and Control Letters.
2. Kwang-Kyo Oh, Kevin L. Moore, and Hyo-Sung Ahn, “Disturbance attenuation in
networks of identical linear systems: an H∞ approach,” submitted to IEEE Transactions on Automatic Control, 2012.
3. Myong-Chul Park, Kwang-Kyo Oh, and Hyo-Sung Ahn, “Distance-based control for
acyclic minimally persistent formations,” submitted to Systems and Control Letters,
2012.
4. Kwang-Kyo Oh and Hyo-Sung Ahn, “Distance-based undirected formations of single- and double-integrator modeled agents in n-dimensional space,” submitted to International Journal of Robust and Nonlinear Control, 2012.
5. Kwang-Kyo Oh and Hyo-Sung Ahn, “Formation control and network localization via orientation alignment,” conditionally accepted for publication in IEEE Transactions on Automatic Control, 2012.
6. Seung-Ju Lee, Kwang-Kyo Oh, and Hyo-Sung Ahn, “Passivity based output synchronization of port-controlled Hamiltonian and general linear interconnected systems,” accepted for publication in IET Control Theory and Applications, 2012.
7. Young-Hun Lim, Kwang-Kyo Oh, and Hyo-Sung Ahn, “Stability and stabilization of fractional-order linear systems subject to input saturation,” accepted for publication in IEEE Transactions on Automatic Control, 2012.
8. Kwang-Kyo Oh and Hyo-Sung Ahn, “Formation control of mobile agents based on distributed relative position estimation,” accepted for publication in IEEE Transactions on Automatic Control, 2012.
9. Kwang-Kyo Oh and Hyo-Sung Ahn, “Formation control of mobile agents based on
inter-agent distance dynamics,” Automatica, vol. 47, no. 10, pp. 2306-2312, 2011.
10. Seok Ho Jeon, Kwang Kyo Oh, and Jin Young Choi, “Flux observer with online
tuning of stator and rotor resistances for induction motors,” IEEE Transactions on
Industrial Electronics, vol. 49, no. 3, pp. 653-664, 2002.
Conference papers
1. Kwang-Kyo Oh and Hyo-Sung Ahn, “Formation control of mobile agents without an initial common sense of orientation,” to appear at the 51st IEEE Conference on Decision and Control (CDC), 2012.
2. Myoung-Chul Park, Kwang-Kyo Oh and Hyo-Sung Ahn, “Modified gradient control for acyclic minimally persistent formations to escape from collinear position,” to appear at the 51st IEEE Conference on Decision and Control (CDC), 2012.
3. Kwang-Kyo Oh and Hyo-Sung Ahn, “Local asymptotic convergence of a cycle-free persistent formation of double-integrators in three-dimensional space,” to appear at the 2012 IEEE Multi-Conference on Systems and Control (MSC), 2012.
4. Myoung-Chul Park, Byeong-Yeon Kim, Kwang-Kyo Oh and Hyo-Sung Ahn, “Control of inter-agent distances in cyclic polygon formations,” to appear at the 2012 IEEE Multi-Conference on Systems and Control (MSC), 2012.
5. Byeong-Yeon Kim, Kwang-Kyo Oh, and Hyo-Sung Ahn, “Power distribution with
consensus,” Proceedings of the 8th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Suzhou, China, pp. 52-56,
2012.
6. Kwang-Kyo Oh and Hyo-Sung Ahn, “Orientation alignment based formation control
in multi-agent systems,” Proceedings of the 8th IEEE/ASME International Conference
on Mechatronic and Embedded Systems and Applications (MESA), Suzhou, China, pp.
42-45, 2012.
7. Kwang-Kyo Oh and Hyo-Sung Ahn, “Formation control of mobile agent groups based
on localization,” Proceedings of the 2011 IEEE International Symposium on Intelligent
Control (ISIC), Denver, Colorado, USA, pp. 822-827, 2011.
8. Kwang-Kyo Oh and Hyo-Sung Ahn, “Distance-based formation control of leader/follower-structured agent groups,” Proceedings of the 2011 IEEE International Symposium on Intelligent Control (ISIC), Denver, Colorado, USA, pp. 816-821, 2011.
9. Kwang-Kyo Oh and Hyo-Sung Ahn, “Distance-based formation control using Euclidean distance dynamics matrix: general cases,” Proceedings of the 2011 American Control Conference (ACC), San Francisco, California, USA, pp. 4816-4821, 2011.
10. Kwang-Kyo Oh and Hyo-Sung Ahn, “Distance-based formation control using Euclidean distance dynamics matrix: three-agent case,” Proceedings of the 2011 American Control Conference (ACC), San Francisco, California, USA, pp. 4810-4815, 2011.
11. Hyo-Sung Ahn and Kwang-Kyo Oh, “Command coordination in multi-agent formation: Euclidean distance matrix approaches,” Proceedings of the 2010 International Conference on Control Automation and Systems (ICCAS), Goyang, Korea, pp. 1592–1597, 2010.
12. Kwang-Kyo Oh and Hyo-Sung Ahn, “Distance-based sequential formation control of mobile agents by using motion primitives,” Proceedings of the 2010 IEEE International Symposium on Intelligent Control (ISIC), Yokohama, Japan, pp. 1464–1469, 2010.
13. Kwang-Kyo Oh and Hyo-Sung Ahn, “A survey of formation of mobile agents,” Proceedings of the 2010 IEEE International Symposium on Intelligent Control (ISIC), Yokohama, Japan, pp. 1470–1475, 2010.