Advanced Topics in Control: Distributed Systems and Control
Homework set #4
Note: The homework is due Friday, May 8 before 10:15 am.
Exercise 1: Convergence for strongly-connected graphs point-wise in time [20%]
Let $t \mapsto L(t)$ be a time-varying Laplacian matrix with associated time-varying digraph $t \mapsto G(t)$, $t \in \mathbb{R}_{\geq 0}$, so that
• each non-zero edge weight $a_{ij}(t)$ is larger than a positive constant;
• each graph $G(t)$ is strongly connected point-wise in time; and
• there is a positive vector $w \in \mathbb{R}^n$ satisfying $\mathbb{1}_n^T w = 1$ and $w^T L(t) = 0_n^T$ for all $t \in \mathbb{R}_{\geq 0}$.
Without relying on Theorem 10.9, show that the solution to $\dot{x}(t) = -L(t)x(t)$ converges to $\lim_{t \to \infty} x(t) = \big(w^T x(0)\big)\mathbb{1}_n$.
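The following minimal simulation is an illustration only, not part of the exercise: it picks one specific family satisfying the assumptions (switching between the two orientations of a directed cycle, which are strongly connected and weight-balanced, so $w = \mathbb{1}_n/n$ is a common positive left null vector) and checks numerically that the trajectory approaches $(w^T x(0))\mathbb{1}_n$. The graphs, step size, and horizon are assumptions made for illustration.

# Illustrative sketch for Exercise 1 (not a proof): simulate x_dot = -L(t) x for a
# switching pair of strongly connected, weight-balanced digraphs (the two
# orientations of a directed cycle), which share the left null vector w = 1_n / n.
import numpy as np

n = 5
# Laplacian of the directed cycle 0 -> 1 -> ... -> n-1 -> 0 with unit weights
L_cw = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
# Laplacian of the reversed cycle
L_ccw = np.eye(n) - np.roll(np.eye(n), -1, axis=1)
w = np.ones(n) / n                      # common positive left null vector

rng = np.random.default_rng(0)
x = rng.normal(size=n)
target = (w @ x) * np.ones(n)           # predicted limit (w^T x(0)) * 1_n

dt, T = 1e-3, 20.0
for k in range(int(T / dt)):
    L = L_cw if (k * dt) % 2.0 < 1.0 else L_ccw   # switch graphs every second
    x = x + dt * (-L @ x)                         # forward-Euler step

print("final state:    ", x)
print("predicted limit:", target)
print("max deviation:  ", np.max(np.abs(x - target)))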
Exercise 2: H2 performance of consensus in continuous time [20%]
Consider the continuous-time distributed averaging dynamics
$\dot{x}(t) = -Lx(t) + w(t),$
where L is the Laplacian matrix of an underlying undirected graph and w(t) is an exogenous input.
Consider an output $y(t) = Qx(t)$ for some $Q \in \mathbb{R}^{p \times n}$ satisfying $Q\mathbb{1}_n = 0_p$. The square of the system H2 norm from $w$ to $y$ is given by
$\|H\|_2^2 = \operatorname{trace}\left( \int_0^{\infty} H(t)^T H(t)\, dt \right),$
where $H(t) = Q e^{-Lt}$ is the system impulse response matrix. The H2 norm has several interesting
interpretations, including the total output signal energy in response to a unit impulse input or the
root mean square of the output signal in response to a white noise input with identity covariance.
1. Show that $\|H\|_2^2$ is obtained from $\operatorname{trace}(P)$, where $P$ is the solution to the Lyapunov equation $LP + PL = Q^T Q$.
2. Derive a closed-form expression for $\|H\|_2^2$ in terms of $Q$ and $L$.
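Before attempting the derivation, it can help to evaluate the definition numerically on a small example. The sketch below is an illustration under assumed choices (a 4-node undirected path graph, an edge-disagreement output matrix with $Q\mathbb{1}_n = 0_p$, and a truncated quadrature horizon); it approximates the integral defining $\|H\|_2^2$ directly and can serve as a sanity check for the closed-form expression requested in part 2.

# Illustrative numerical evaluation of the H2-norm definition (assumed graph and Q):
# approximate trace( integral_0^inf H(t)^T H(t) dt ) with H(t) = Q expm(-L t)
# by quadrature on a truncated horizon.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

n = 4
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # path graph 0-1-2-3
L = np.diag(A.sum(axis=1)) - A                                 # its Laplacian

# Output measuring edge disagreements, so Q @ 1_n = 0
Q = np.array([[1., -1., 0., 0.],
              [0., 1., -1., 0.],
              [0., 0., 1., -1.]])

ts = np.linspace(0.0, 40.0, 4001)       # assumed horizon, long enough for decay
integrand = []
for t in ts:
    E = expm(-L * t)                    # impulse response factor e^{-Lt}
    integrand.append(np.trace(E.T @ Q.T @ Q @ E))

print("numerical approximation of ||H||_2^2:", trapezoid(integrand, ts))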
Exercise 3: Convergence of the gradient flow of a convex function [10%]
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a strictly convex and twice differentiable function. Show convergence of the negative gradient flow of $f$, i.e., $\dot{x} = -\frac{\partial}{\partial x} f(x)$, to the set of critical points of $f$ using the LaSalle invariance principle and the function $V(x) = (x - x^*)^T (x - x^*)$ with $x^*$ being a critical point.
Hint: Use the following global underestimate property of a strictly convex function: $f(x') - f(x) > \frac{\partial}{\partial x} f(x)\,(x' - x)$ for all distinct $x$ and $x'$ in the domain of $f$.
Exercise 4: Nonlinear distributed optimization using the Laplacian Flow [20%]
Consider the saddle point dynamics (7.5) that solve the optimization problem (7.4) in a fully distributed fashion. Assume that the objective functions are strictly convex and twice differentiable
and that the underlying communication graph among the distributed processors is connected and
undirected. By using the LaSalle invariance principle and extending the results from Exercise 3,
show that all solutions of the saddle point dynamics (7.5) converge to the set of saddle points.
Hint: Use the following global underestimate property of a strictly convex function: $f(x') - f(x) > \frac{\partial}{\partial x} f(x)\,(x' - x)$ for all distinct $x$ and $x'$ in the domain of $f$; and the following global overestimate property of a concave function: $g(x') - g(x) \leq \frac{\partial}{\partial x} g(x)\,(x' - x)$ for all $x$ and $x'$ in the domain of $g$.
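Equations (7.4) and (7.5) are not reproduced in this handout. Purely as an illustration, the sketch below assumes one common form of Laplacian-based saddle-point dynamics, $\dot{x} = -\nabla f(x) - L\nu$, $\dot{\nu} = Lx$, associated with minimizing $\sum_i f_i(x_i)$ subject to $Lx = 0_n$; if the course's (7.5) differs, adapt the right-hand sides accordingly. The quadratic objectives, graph, and step size are also assumptions.

# Illustrative sketch of assumed Laplacian-based saddle-point dynamics (not necessarily (7.5)).
import numpy as np

n = 4
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # undirected path graph
L = np.diag(A.sum(axis=1)) - A                                 # its Laplacian

b = np.array([1.0, -2.0, 0.5, 3.0])
def grad_f(x):
    # gradients of local objectives f_i(x_i) = 0.5 * (x_i - b_i)^2 (strictly convex)
    return x - b

x = np.zeros(n)
nu = np.zeros(n)
dt = 1e-3
for _ in range(200_000):                     # integrate to t = 200
    x_dot = -grad_f(x) - L @ nu              # assumed primal dynamics
    nu_dot = L @ x                           # assumed dual dynamics
    x, nu = x + dt * x_dot, nu + dt * nu_dot

print("x at t = 200:", x)
print("minimizer of sum_i f_i subject to consensus (mean of b):", b.mean())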
Exercise 5: Consensus with input constraints [30%]
Consider a set of $n$ agents, each with first-order dynamics $\dot{x}_i = u_i$. Assume that the agents can
communicate through an undirected and connected communication network.
1. Construct a consensus protocol that respects the input constraints $u_i(t) \in [-1, 1]$ for all $t \geq 0$, and prove that your protocol achieves average consensus.
2. Extend the protocol and proof to the case of second-order dynamics $\ddot{x}_i = u_i$ for consensus of the position states $x$ and convergence of the velocity states $\dot{x}$ to zero.
Hint: Use a smooth saturation function and Theorem 12.8.
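The sketch below simulates one candidate protocol consistent with the hint; it is an assumption offered for illustration, not the required construction or proof. With symmetric weights $a_{ij} = a_{ji}$ chosen so that $\sum_j a_{ij} \leq 1$, the input $u_i = \sum_j a_{ij} \tanh(x_j - x_i)$ satisfies $u_i(t) \in [-1, 1]$ and preserves the average of the states.

# Illustrative simulation of a candidate saturated consensus protocol (part 1 only).
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n - 1):                  # undirected path graph with symmetric weights
    A[i, i + 1] = A[i + 1, i] = 0.5     # row sums <= 1, so |u_i| < 1 below

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, size=n)
initial_average = x.mean()

dt = 1e-2
for _ in range(100_000):
    # candidate protocol: u_i = sum_j a_ij * tanh(x_j - x_i)
    u = np.array([A[i] @ np.tanh(x - x[i]) for i in range(n)])
    assert np.max(np.abs(u)) <= 1.0     # input constraint holds by construction
    x = x + dt * u                      # forward-Euler step of x_dot_i = u_i

print("final states:   ", x)
print("initial average:", initial_average, "(expected average-consensus value)")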