MED'16
Uncertainty Randomization
in Control Systems
Roberto Tempo
CNR-IEIIT
Politecnico di Torino, Italy
Athens, Greece, June 21-24, 2016
Overview

• Motivations: UAV
• Uncertain systems: deterministic/probabilistic methods
• Randomized algorithms
• Sampling-based methods for convex optimization
• Parallel networked algorithms
ITHACA Project

• ITHACA: Information Technology for Humanitarian Assistance, Cooperation and Action
• Workpackage: UAV for archaeological site monitoring
• Roman city Bene Vagienna (Augusta Bagiennorum), with the endorsement of the United Nations

E. Capello, F. Quagliotti, R. Tempo (2014)
SMILE Project

• SMILE: SisteMa a pIlotaggio remoto per il supporto all'agricoLtura di precisionE (remotely piloted system for precision-agriculture support)
• Workpackage: UAV in precision agriculture
• Castel Fragsburg, in collaboration with the Free University of Bolzano, Italy, and Fraunhofer Italia

E. Capello, G. Guglieri, R. Tempo (2016)
UAV and Uncertain Systems - 1

• Variations of flight conditions (turbulence): physical parameters (moment of inertia, load, speed, …)
• Variations of geometric parameters due to the manufacturing process: aerodynamic data

[diagram: the UAV viewed as an uncertain system]
UAV and Uncertain Systems - 2

• UAV: structured nonlinear parametric uncertainty affecting the plant, the flight conditions, and the aerodynamic database

ẋ(t) = A(q) x(t) + B(q) u(t)

• Uncertainty vector q = [q1, q2, q3]^T

a11(q) = (q2 cos(q3) + q2³) / (2 q1)
a12(q) = (1/4) cos(q2²)

• Deterministic or probabilistic uncertainty?

R. Tempo, G. Calafiore, F. Dabbene (2013)
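As a concrete aside, the uncertain entries above can be evaluated at randomly drawn values of q. A minimal Python sketch follows; the box [0.5, 1.5]³ for q and the "+" signs in a11 are assumptions made while reconstructing the slide, not values from the talk:

```python
import math
import random

def a11(q):
    # a11(q) = (q2 cos(q3) + q2^3) / (2 q1), signs assumed from the slide
    return (q[1] * math.cos(q[2]) + q[1] ** 3) / (2 * q[0])

def a12(q):
    # a12(q) = (1/4) cos(q2^2)
    return math.cos(q[1] ** 2) / 4

# Draw one random uncertainty vector q = [q1, q2, q3] in an illustrative box
rng = random.Random(0)
q = [rng.uniform(0.5, 1.5) for _ in range(3)]
print(a11(q), a12(q))
```

Each random draw of q yields one realization of the uncertain matrix A(q); this is the basic operation behind all the sampling-based methods discussed later.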
• Probabilistic uncertainty for the UAV: uniform pdf for the physical parameters, Gaussian pdf for the aerodynamic data
UAV and Uncertain Systems - 3

Nominal value and range:

q          nominal          range
q1 = kt    0 deg            50%
q2 = m     620 g            50%
q3 = V     15 m/s           50%
q4 = Jx    0.01 kg mm²      55%
q5 = Jy    0.0035 kg mm²    55%
q6 = Jz    0.015 kg mm²     55%
q7 = Jxz   0.0035 kg mm²    55%

Mean and standard deviation:

q          mean             st dev
q8 = CL    2 1/rad          0.225
q9 = CM    1.3 1/rad        0.260
q10 = CU   0 1/rad          0.0047
q11 = CT   -0.125 1/rad     0.0306
q12 = CF   0.035 1/rad      0.0014
q13 = CV   0.005 1/rad      0.0034
q14 = CH   -0.03 1/rad      0.0028
Deterministic/Probabilistic Uncertainty

• Linear system with nonlinear uncertainty

x(k+1) = A(q) x(k) + B(q) u(k)

• Deterministic uncertainty q = [q1, q2, q3]^T (hard bounds)

q1 ∈ [-0.1, 0.1],  q2, q3 ∈ [0.9, 1.1]

• Probabilistic uncertainty q with given distributions (soft bounds)

q1 = U[-0.1, 0.1] (uniform),  q2, q3 = G[0.9, 1.1] (Gaussian)
Overview

• Motivations: UAV
→ Uncertain systems: deterministic/probabilistic methods
• Randomized algorithms
• Sampling-based methods for convex optimization
• Parallel networked algorithms
Deterministic Uncertainty

• Deterministic uncertainty: pessimistic viewpoint

"If there is a fifty-fifty chance that something can go wrong, nine out of ten times it will"
Yogi Berra

[example: stability with parametric uncertainty]
Probabilistic Uncertainty

• Probabilistic uncertainty: optimistic viewpoint

"Don't assume the worst-case scenario. It's emotionally draining and probably won't happen anyway"
Anonymous

[example: LQG]
Random Uncertainty

• Random vector (matrix) q
• q bounded in support set Q ⊂ R^m
• Multivariate (uniform) pdf associated with q ∈ Q

U_Q(q) = 1/vol(Q) if q ∈ Q, 0 otherwise (multivariate uniform pdf)

[figure: uniform pdf in a circle]
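A uniform pdf over a nonrectangular support set such as the circle in the figure can be sampled by simple rejection from a bounding box. A short Python sketch (the function name and radius are illustrative, not from the talk):

```python
import random

def uniform_in_circle(n, radius=1.0, seed=0):
    """Draw n iid samples uniformly distributed in a disk of given radius
    by rejection sampling from the bounding square [-radius, radius]^2."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:  # accept only points in the disk
            samples.append((x, y))
    return samples

pts = uniform_in_circle(1000)
print(len(pts))  # 1000
```

Rejection sampling is one instance of the "polynomial-time sampling oracle" assumed later in the talk; for the disk, roughly 4/π candidate draws are needed per accepted sample.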
System Constraints

• Define the system constraints

f(q) : Q → R

[f(·) is a measurable function]

• Example: H∞
• G(s, q) is stable and f(q) = ||G(s, q)||∞, to be kept below a level γ

[figure: block diagram w → G(s, q) → z and magnitude plot of |G(jω, q)| versus ω with level γ]
Violation Probability

• Given a level γ, the probability of violation is

Vf = Prob{q ∈ Q : f(q) > γ}

• Example: H∞

[figure: block diagram w → G(s, q) → z and magnitude plot of |G(jω, q)| exceeding the level γ]
Small Violation and Reliability

• A sufficiently small violation (within α) may be acceptable

Vf = Prob{q ∈ Q : f(q) > γ} ≤ α

where α ∈ (0, 1) is a probabilistic parameter (level)

• Equivalently

Rf = 1 - Vf = Prob{q ∈ Q : f(q) ≤ γ} ≥ 1 - α

which indicates the system reliability
Computing Violation Probability

• For a uniform pdf we obtain

Prob{q ∈ Q : f(q) > γ} = ( ∫_{f(q) > γ} dq ) / vol(Q)

• This is a "vol-over-vol" problem
• For small m, or for very special constraints f, the computation is easy
• In general, it is a hard integration problem over a nonconvex domain
• Curse of dimensionality for large m
Overview

• Motivations: UAV
• Uncertain systems: deterministic/probabilistic methods
→ Randomized algorithms
• Sampling-based methods for convex optimization
• Parallel networked algorithms
Monte Carlo Randomized Algorithm

• Randomized algorithm: an algorithm that makes random choices during execution to produce a result
• Monte Carlo Randomized Algorithm (MCRA): a randomized algorithm that provides approximate results with bounded "probability of failure"
• An MCRA may fail to provide the exact result
• The probability of failure can be made arbitrarily small

R. Tempo, G. Calafiore, F. Dabbene (2013)
Monte Carlo History

• The Monte Carlo method was invented by Metropolis, Ulam, von Neumann, … in the forties during the Manhattan Project

[photos: Fermi, Metropolis, Ulam, von Neumann; group photo of Ulam, Feynman, von Neumann]
Monte Carlo Simulations

• Computation of the violation probability is based on N Monte Carlo simulations
• Draw N iid random samples of q ∈ Q according to a given probability measure (e.g. uniform)

q^(1), q^(2), …, q^(N) ∈ Q

• This is the multisample q^(1,…,N) = {q^(1), q^(2), …, q^(N)}

[figure: N iid samples in a circle]
Empirical Violation - 1

• Let K be the number of samples such that f(q^(i)) ≤ γ
• The empirical reliability, which approximates Rf, is given by

R̂_f^N = K / N

• Define the indicator function I(·)

I(f(q^(i))) = 1 if f(q^(i)) ≤ γ, 0 otherwise
Empirical Violation - 2

• The empirical reliability is defined as

R̂_f^N = (1/N) Σ_{i=1}^{N} I(f(q^(i)))

• No need to compute f(q^(i)): it suffices to check whether f(q^(i)) ≤ γ

[Lyapunov: checking positive definiteness of an n × n symmetric matrix requires n³ floating point operations]
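The empirical reliability estimator fits in a few lines of Python. In the sketch below, the constraint f and the sampler are illustrative toys (not the UAV model from the talk), chosen so that the true reliability is known to be π/4:

```python
import random

def empirical_reliability(f, sampler, gamma, N, seed=0):
    """Estimate R_f = Prob{f(q) <= gamma} from N iid samples of q:
    count K = #{i : f(q^(i)) <= gamma} and return K / N."""
    rng = random.Random(seed)
    K = sum(1 for _ in range(N) if f(sampler(rng)) <= gamma)
    return K / N

# Toy constraint: f(q) = q1^2 + q2^2 with q uniform on the square [-1, 1]^2;
# f(q) <= 1 holds exactly on the unit disk, so R_f = pi/4 ~ 0.785.
f = lambda q: q[0] ** 2 + q[1] ** 2
sampler = lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1))
r_hat = empirical_reliability(f, sampler, gamma=1.0, N=100_000)
print(r_hat)  # close to pi/4
```

As the slide notes, only the binary check f(q^(i)) ≤ γ is needed per sample, never the integral itself.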
Sample Complexity

• Probabilistic parameters ε ∈ (0, 1) and δ ∈ (0, 1), called accuracy and confidence
• Given accuracy ε and confidence δ, we need to compute the sample complexity (the smallest integer N) such that the probability inequality

Prob{|Rf - R̂_f^N| ≤ ε} ≥ 1 - δ

holds

• There are two probability levels: the accuracy ε bounds the estimation error (the event |Rf - R̂_f^N| ≤ ε), while 1 - δ bounds the probability, with respect to the multisample, that this event occurs
Two-Sided Hoeffding Inequality

• Recall that an MCRA may fail

probability of failure = Prob{q^(1,…,N) : |Rf - R̂_f^N| > ε}

• Given accuracy ε, the two-sided Hoeffding inequality states

Prob{q^(1,…,N) : |Rf - R̂_f^N| > ε} ≤ 2 e^(-2Nε²)

[e is the Euler number]

• The probability of failure is bounded and vanishes exponentially in N
• It can be made arbitrarily small by taking N sufficiently large
Hoeffding Inequality/Chernoff Bound

• Consider the Hoeffding inequality and a confidence δ

Prob{q^(1,…,N) : |Rf - R̂_f^N| > ε} ≤ 2 e^(-2Nε²)

• Compute the smallest N such that 2 e^(-2Nε²) ≤ δ holds
• Numerical computation of the integer N is immediate
Chernoff Bound

• "Inverting the bound" 2 e^(-2Nε²) ≤ δ is straightforward
• Obtain the (additive) Chernoff bound

N ≥ Nch = ⌈ log(2/δ) / (2ε²) ⌉
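The inversion is a one-line computation; a small Python helper (the function name is illustrative) reproduces the numbers in the table and the π example of the following slides:

```python
import math

def chernoff_bound(eps, delta):
    """Additive Chernoff bound: smallest N with 2*exp(-2*N*eps**2) <= delta,
    i.e. N_ch = ceil(log(2/delta) / (2*eps**2))."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(chernoff_bound(1e-3, 0.05))  # 1844440, i.e. 1.84 x 10^6
print(chernoff_bound(1e-2, 1e-6))  # 72544
```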
Sample Complexity (Revisited)

• The Chernoff bound provides a fundamental explicit formula (sample complexity) Nch = Nch(ε, δ)

Nch ∝ 1/ε²    Nch ∝ log(1/δ)

• Confidence δ is cheap
• Accuracy ε is more expensive
• The sample complexity can be computed a priori

ε        δ        Nch
0.001    0.050    1.84 × 10⁶
0.001    0.010    2.65 × 10⁶
0.001    0.005    3.00 × 10⁶
0.001    0.001    3.80 × 10⁶
Example: Monte Carlo Estimation of π

• Draw N = 75000 iid random samples in a square (blue) and count how many fall inside the circle (red)
• Obtain K = 58942

π ≈ 4K/N ≈ 3.1436

• This estimate is within accuracy ε = 10⁻² of the actual value with confidence δ = 10⁻⁶

[using the Chernoff bound N ≥ Nch = ⌈log(2/δ)/(2ε²)⌉, obtain Nch = 72544]
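The experiment is easy to reproduce. A minimal Python sketch follows; the sample count matches the slide, but the seed is arbitrary, so K (and the estimate) will differ slightly from 58942:

```python
import random

def estimate_pi(N, seed=1):
    """Monte Carlo estimate of pi: draw N iid uniform samples in the square
    [-1, 1]^2 and count the fraction K/N landing inside the unit circle;
    since the circle-to-square area ratio is pi/4, return 4*K/N."""
    rng = random.Random(seed)
    K = sum(1 for _ in range(N)
            if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1)
    return 4 * K / N

print(estimate_pi(75000))  # close to 3.14
```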
Curse of Dimensionality

• No curse of dimensionality: the sample complexity does not depend on the number of uncertain parameters
• It does not depend on the shape of Q
• It does not depend on the pdf

[assuming that a polynomial-time oracle exists to draw N iid random samples in Q]

R. Bellman (1957)

[figure: sampling the Schur region]
Large Deviation Theory: Rare Events

• Samples q^(1), q^(2), …, q^(N) are iid
• Parallel and distributed simulation algorithms
• MCMC (Markov Chain Monte Carlo) methods are sequential: samples are not iid
• Convergence of mixing processes
• Long and fat-tail distributions
• Theory of random matrices

A. Dembo, O. Zeitouni (1993) - M.L. Mehta (1991)
Overview

• Motivations: UAV
• Uncertain systems: deterministic/probabilistic methods
• Randomized algorithms
→ Sampling-based methods for convex optimization
→ Part 1: Scenario approach
• Parallel networked algorithms
Convex Semi-Infinite Optimization

• Robust viewpoint: semi-infinite optimization problem

min_θ c^T θ subject to f(θ, q) ≤ γ for all q ∈ Q

• f(θ, q) ≤ γ is convex in θ for any fixed q ∈ Q
• f(θ, q) is measurable in q for any fixed θ

[figure: convex function f(θ, ·)]
Scenario Approach

• Consider random uncertainty q
• Construct a scenario problem using random samples of q
• Draw N iid random samples q^(i) and construct the sampled constraints

f(θ, q^(i)) ≤ γ, i = 1, …, N

• Form the scenario optimization problem

θsce = arg min_θ c^T θ subject to f(θ, q^(i)) ≤ γ, i = 1, …, N

G. Calafiore, M.C. Campi (2005)
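The construction can be illustrated on a one-dimensional toy problem, where the sampled constraints reduce the arg min to a simple minimum. Everything below (the constraint qθ ≤ γ, the uniform range of q) is an assumed toy instance, not from the talk:

```python
import random

def scenario_solution(samples, gamma):
    """Scenario solution of the 1-D problem
        max theta  subject to  q * theta <= gamma  for each sampled q > 0,
    i.e. theta_sce = min_i gamma / q_i (the most restrictive sampled
    constraint is active at the optimum)."""
    return min(gamma / q for q in samples)

# Illustrative setup: q uniform on [0.5, 1.5]; the robust (worst-case)
# solution over the full range would be gamma / 1.5.
rng = random.Random(0)
qs = [rng.uniform(0.5, 1.5) for _ in range(500)]
theta_sce = scenario_solution(qs, gamma=1.0)
print(round(theta_sce, 3))  # close to, but slightly above, 1/1.5 ~ 0.667
```

With many samples the scenario solution approaches the robust one from above; the next slide quantifies the violation probability of the constraint left unprotected.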
Violation Probability of Scenario

• Suppose that N ≥ n and ε, δ ∈ (0, 1) satisfy the inequality

Σ_{i=0}^{n-1} C(N, i) ε^i (1 - ε)^(N-i) ≤ δ

where n = dim(θ)

• Then, with probability at least 1 - δ, it holds that

Vf(θsce) = Prob{q ∈ Q : f(θsce, q) > γ} ≤ ε

[minor technical feasibility and uniqueness assumptions are needed]

M.C. Campi, S. Garatti (2008)
Binomial Distribution

• "Inverting the bound" on the binomial distribution to compute the sample complexity is not easy

Σ_{i=0}^{n-1} C(N, i) ε^i (1 - ε)^(N-i) ≤ δ

• Use standard tables for confidence intervals

[figure: confidence-interval curves for δ = 0.002]
Sample Complexity

• An improved sample complexity bound is given by

N ≥ Nbin = ⌈ (e / (ε(e - 1))) (log(1/δ) + n - 1) ⌉

• The constant 2 appearing in previous bounds is reduced to e/(e - 1) ≈ 1.5820

Nbin ∝ 1/ε    Nbin ∝ log(1/δ)    Nbin ∝ n

T. Alamo, R. Tempo, A. Luque, D. Ramirez (2015)
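Both the exact binomial inversion and the explicit bound are straightforward to compute numerically. A Python sketch (the test values n = 5, ε = 0.05, δ = 0.01 are arbitrary):

```python
import math

def binomial_tail(N, n, eps):
    """Left binomial tail sum_{i=0}^{n-1} C(N, i) eps^i (1-eps)^(N-i)."""
    return sum(math.comb(N, i) * eps ** i * (1 - eps) ** (N - i)
               for i in range(n))

def exact_sample_complexity(n, eps, delta):
    """Smallest N with binomial_tail(N, n, eps) <= delta, found by doubling
    an upper bracket and then bisecting (the tail decreases in N)."""
    lo, hi = n, n
    while binomial_tail(hi, n, eps) > delta:
        lo, hi = hi, 2 * hi
    while lo < hi:
        mid = (lo + hi) // 2
        if binomial_tail(mid, n, eps) <= delta:
            hi = mid
        else:
            lo = mid + 1
    return lo

def n_bin(n, eps, delta):
    """Explicit bound N_bin = ceil(e/(eps(e-1)) * (log(1/delta) + n - 1))."""
    c = math.e / (math.e - 1)
    return math.ceil(c / eps * (math.log(1 / delta) + n - 1))

n, eps, delta = 5, 0.05, 0.01
N_exact = exact_sample_complexity(n, eps, delta)
print(N_exact, n_bin(n, eps, delta))  # explicit bound >= exact inversion
```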
Overview

• Motivations: UAV
• Uncertain systems: deterministic/probabilistic methods
• Randomized algorithms
→ Sampling-based methods for convex optimization
→ Part 2: Sequential methods
• Parallel networked algorithms
Motivations: Lyapunov

• Aᵀ P + P A ≤ 0
• Sample complexity is linear in the number of decision variables
• If P is r × r, the number of decision variables is r(r+1)/2
• Example: ε = δ = 10⁻⁶, r = 10, we have N = 6.57 × 10⁷
• Need to resort to other approaches
Iterative Methods

• Iterative methods are based on two steps:
  - solve a reduced-size scenario problem
  - check whether the solution is probabilistically feasible

Remarks:
• The first step is easy because the size is "reduced"
• The second step requires checking feasibility of a candidate solution (much easier than solving an optimization problem)
Sequential Probabilistic Validation

Sequential algorithms based on probabilistic validation:

[flowchart: Initialization (k = 0) → Reduced Scenario with N(k) samples → candidate θ̂_N(k) → Validation with M(k) samples → if validation succeeds (yes), set θseq = θ̂_N(k) and stop; otherwise (no), Update k = k + 1 and repeat]

T. Alamo, R. Tempo, A. Luque, D. Ramirez (2015)
Reduced Size Scenario Optimization

• At step k, solve a reduced-size scenario problem

θ̂_N(k) = arg min_θ c^T θ subject to f(θ, q^(i)) ≤ γ, i = 1, …, N(k)

where

N(k) = ⌈ (k/kd) Nbin ⌉

and kd is the desired number of iterations

• Obtain θ̂_N(k)
Binary Validation Oracle

• Draw M(k) iid random validation samples according to a given pdf

v^(1), v^(2), …, v^(M(k)) ∈ Q

• Check whether the solution θ̂_N(k) satisfies

f(θ̂_N(k), v^(i)) ≤ γ, i = 1, …, M(k)

If so, set θseq = θ̂_N(k) and stop

• Otherwise, update the iteration counter k
Convergence Properties

• Suppose that ε, δ ∈ (0, 1) and M(k) satisfy the inequality

M(k) ≥ (1/ε) log( π²k² / (6δ) )

• Then, with probability at least 1 - δ, it holds that

Vf(θseq) = Prob{q ∈ Q : f(θseq, q) > γ} ≤ ε

[more advanced bounds based on the Riemann zeta function have recently been derived]

Y. Oishi (2007) - T. Alamo, R. Tempo, A. Luque, D. Ramirez (2015)
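The whole sequential scheme fits in a short sketch. The 1-D instance below (constraint qθ ≤ γ, uniform q, and the chosen ε, δ, Nbin, kd) is an illustrative toy, not from the talk; note that the stopping iteration is random, as the slides emphasize:

```python
import math
import random

def sequential_validation(solve_scenario, f, sampler, gamma,
                          n_bin, eps, delta, k_d, seed=0):
    """Sketch of the sequential scheme: at iteration k, solve a scenario
    problem with N(k) = ceil(k/k_d * n_bin) samples, then validate the
    candidate on M(k) >= (1/eps) * log(pi^2 k^2 / (6 delta)) fresh samples."""
    rng = random.Random(seed)
    k = 0
    while True:
        k += 1
        N_k = math.ceil(k / k_d * n_bin)
        theta = solve_scenario([sampler(rng) for _ in range(N_k)])
        M_k = math.ceil(math.log(math.pi ** 2 * k ** 2 / (6 * delta)) / eps)
        if all(f(theta, sampler(rng)) <= gamma for _ in range(M_k)):
            return theta, k  # validated candidate

# Toy 1-D instance: maximize theta subject to q * theta <= 1 with
# q uniform on [0.5, 1.5]; the scenario solution is min_i 1/q_i.
f = lambda theta, q: q * theta
sampler = lambda rng: rng.uniform(0.5, 1.5)
solve = lambda qs: min(1.0 / q for q in qs)
theta_seq, iters = sequential_validation(solve, f, sampler, gamma=1.0,
                                         n_bin=1000, eps=0.05, delta=0.01,
                                         k_d=10)
print(theta_seq, iters)
```

Running this shows the main point of the approach: validating a candidate (a loop of cheap constraint checks) is far easier than solving the full-size scenario problem.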
Comments

• Main idea: validating a candidate solution is easier than computing a solution
• The sample complexity Nbin is computed a priori
• Sequential algorithms do not provide the sample complexity
• The number of iterations k is random
• N(k) and M(k) are established only a posteriori
R-RoMulOC

• R-RoMulOC: Randomized and Robust Multi-Objective Control Toolbox
• Joint effort to merge RoMulOC and RACT

http://projects.laas.fr/OLOCEP/romuloc/

M. Chamanbaz, F. Dabbene, D. Peaucelle, R. Tempo (2015)
Overview

• Motivations: UAV
• Uncertain systems: deterministic/probabilistic methods
• Randomized algorithms
• Sampling-based methods for convex optimization
→ Parallel networked algorithms
Parallel Networked Algorithms - 1

• Nj = number of constraints handled by node j, with Σj Nj ≥ Nbin
• Undirected communication links
• Primal-dual subgradient method based on consensus constraints to update the local solution θ_j^k at step k
• Convergence result: for all nodes j, we have

lim_{k→∞} c^T θ_j^k = c^T θsce

K. You, R. Tempo (2016)
Parallel Networked Algorithms - 2

• Nj = number of constraints handled by node j, with Σj Nj ≥ Nbin
• Directed communication links
• Random projection algorithm to update the local solution θ_j^k
• Convergence result: for all nodes j, we have

lim_{k→∞} c^T θ_j^k = c^T θsce  w.p.1

K. You, R. Tempo (2016)
Networked Uncertain Systems

• Distributed convex optimization (no uncertainty)
• Robust convex optimization (centralized approach)
• Robust distributed convex optimization

K. You, R. Tempo (2016)
Conclusions

From networked systems to networked uncertain systems

J. Baillieul, P. Antsaklis (2007)
Thanks to
T. Alamo, F. Allgower, E.W. Bai, B.R. Barmish, T. Basar, G.
Calafiore, E.F. Camacho, M.C. Campi, E. Capello, M. Chamanbaz,
P. Colaneri, F. Dabbene, S. Formentin, P. Frasca, Y. Fujisaki, S.
Garatti, H. Ishii, C. Lagoa, M. Lorenzen, Y. Oishi, S. Parsegov, D.
Peaucelle, B.T. Polyak, M. Prandini, A. Proskurnikov, L. Qiu, C.
Ravazzi, M. Sznaier, M. Vidyasagar, T. Wada, K. You, L. Zaccarian