Solving Optimization Problems with Diseconomies of Scale via Decoupling
Konstantin Makarychev – Microsoft Research
Maxim Sviridenko – Yahoo Labs
Simons Symposium on Approximation Algorithms, February 26, 2014
What is a diseconomy of scale?
[Figure: Cost of Resources as a function of the amount of resources used.]
What is a diseconomy of scale?
Cost of Energy used for Computing: energy consumption grows as $x^q$ as a function of the speed $x$.
[Figure: energy cost as a function of speed.]
Example: Energy Efficient Routing
Given:
• graph $G$
• set of demand pairs $(s_i, t_i, d_i)$
Goal:
• route $d_i$ units of unsplittable flow from $s_i$ to $t_i$
so as to minimize the energy
$\sum_{e \in E} c_e x_e^q$,
where $x_e$ is the total flow on edge $e$.
For the routing shown in the figure, the energy equals $5 \times 1^q + 2 \times 2^q$.
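As an illustration of the objective, here is a minimal Python sketch (the graph, demands, and chosen paths below are made-up toy values) that evaluates $\sum_{e\in E} c_e x_e^q$ once every demand has been assigned a path:

```python
from collections import defaultdict

def energy_cost(paths, demands, cost, q):
    """Evaluate sum_e c_e * x_e^q, where x_e is the total flow on edge e.

    paths[i]   -- list of edges (u, v) used by demand i
    demands[i] -- amount d_i routed (unsplittably) along paths[i]
    cost[e]    -- coefficient c_e of edge e
    """
    load = defaultdict(float)
    for path, d in zip(paths, demands):
        for e in path:
            load[e] += d          # x_e accumulates the demands crossing e
    return sum(cost[e] * x ** q for e, x in load.items())

# Toy instance: two unit demands sharing the edge (v, t).
cost = {("s", "v"): 1.0, ("v", "t"): 1.0}
paths = [[("s", "v"), ("v", "t")], [("v", "t")]]
demands = [1.0, 1.0]
print(energy_cost(paths, demands, cost, q=2))  # 1*1^2 + 1*2^2 = 5
```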
What's known?
• Arbitrary $d_i$: Andrews, Fernández Anta, Zhang, Zhao – $O(\#\text{demands} + \log d_{\max})$ approximation
• All $d_i$ are the same:
◦ Andrews, Fernández Anta, Zhang, Zhao – $O(1)$ approximation
◦ Bampis, Kononov, Letsios, Lucarelli, Sviridenko – $\|P\|_q^q$ approximation
• We give a $\|P\|_q^q$ approximation for the general case.
Graph of $\|P\|_q^q$
[Plot of $\|P\|_q^q$ as a function of $q \in [1, 2]$; it increases from $\|P\|_1^1 = 1$ to $\|P\|_2^2 = 2$.]
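The plotted constant can be computed directly from the Poisson series $\|P\|_q^q = \mathbb{E}[P^q] = \sum_{k\ge 0} k^q e^{-1}/k!$; a short Python sketch (the truncation point 100 is an arbitrary but more than sufficient choice for parameter 1):

```python
import math

def poisson_q_moment(q, lam=1.0, kmax=100):
    """||P||_q^q = E[P^q] for P ~ Poisson(lam), via a truncated series."""
    return sum(k ** q * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(1, kmax + 1))   # the k = 0 term vanishes for q >= 1

for q in [1.0, 1.25, 1.5, 1.75, 2.0]:
    print(q, round(poisson_q_moment(q), 4))   # grows from 1.0 at q = 1 to 2.0 at q = 2
```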
LP Relaxation
• $\mathcal{P}_i$ is the set of $s_i \to t_i$ paths
• variable $\lambda_p$ for every $p \in \mathcal{P}_i$
Constraints:
• $\sum_{p \in \mathcal{P}_i} \lambda_p = d_i$ for all $i$
• $\lambda_p \in [0, d_i]$
Let $\Lambda$ be the set of feasible $\lambda$'s.
LP Relaxation
The naive relaxation minimizes $\sum_e c_e \big(\sum_{p:\, e \in p} \lambda_p\big)^q$ subject to $\lambda \in \Lambda$.
This objective is too weak: in the example on the slide (with $q = 2$), splitting the demand as $\lambda_p = 1/n$ over $n$ paths gives LP cost $2n \times \frac{1}{n^2} = 2/n$, so this relaxation can drastically underestimate the cost of any integral routing.
LP Relaxation – Local Distributions of Paths
Minimize $\sum_e c_e F_{q,e}(\lambda)$ subject to $\lambda \in \Lambda$, where
$F_{q,e}(\lambda) = \min_{\mathcal{D}^e} \ \mathbb{E}_{Y^e \sim \mathcal{D}^e}\big[\big(\sum_i Y_i^e\big)^q\big],$
and the minimum is over distributions $\mathcal{D}^e$ such that
• $\mathcal{D}^e$ is a distribution over random variables $\{Y_i^e\}$;
• $Y_i^e$ is the amount of flow $s_i \to t_i$ going via $e$;
• $\mathbb{E}[Y_i^e] = \sum_{p \in \mathcal{P}_i:\, e \in p} \lambda_p$;
• $Y_i^e \in \{0, d_i\}$.
Rounding Algorithm
LP: minimize $\sum_e c_e F_{q,e}(\lambda)$ subject to $\lambda \in \Lambda$, where $F_{q,e}(\lambda) = \min_{\mathcal{D}^e} \mathbb{E}_{Y^e \sim \mathcal{D}^e}\big[\big(\sum_i Y_i^e\big)^q\big]$.
Algorithm A:
For each $i$, pick a random path $p: s_i \to t_i$ with probability $\lambda_p / d_i$.
Pick the paths $p$ independently for all $i$.
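A minimal Python sketch of Algorithm A (the representation of the fractional solution as a dictionary of (path, $\lambda_p$) pairs per demand is an assumption made for this illustration):

```python
import random

def round_paths(lp_solution, demands):
    """Algorithm A: for each demand i, choose one path p with probability
    lambda_p / d_i, independently across demands.

    lp_solution[i] -- list of (path, lambda_p) pairs with sum_p lambda_p = d_i
    demands[i]     -- the demand d_i
    """
    routing = {}
    for i, frac_paths in lp_solution.items():
        paths = [p for p, _ in frac_paths]
        probs = [lam / demands[i] for _, lam in frac_paths]  # sums to 1
        routing[i] = random.choices(paths, weights=probs, k=1)[0]
    return routing  # demand i is routed unsplittably along routing[i]
```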
Analysis
Algorithm A: for each $i$, pick a random path $p: s_i \to t_i$ with probability $\lambda_p / d_i$, independently for all $i$.
Compare LP and ALG edge by edge. For every edge $e$:
• Let $X_i^e$ = amount of flow from $s_i$ to $t_i$ routed via $e$ by A.
• All $X_i^e$ are independent.
• $\mathrm{alg.cost}(e) = c_e \big(\sum_i X_i^e\big)^q$.
Analysis
We need to compare
• $\mathbb{E}[\mathrm{alg.cost}(e)] = \mathbb{E}\big[c_e \big(\sum_i X_i^e\big)^q\big]$
• $\mathrm{LP.cost}(e) = \mathbb{E}\big[c_e \big(\sum_i Y_i^e\big)^q\big]$
Observe:
• Each $X_i^e$ has the same distribution as $Y_i^e$.
• All $X_i^e$ are independent.
• But the $Y_i^e$ are not independent.
Decoupling Inequality (de la Peña '90)
If
◦ $Y_1, \dots, Y_n$ are jointly distributed nonnegative r.v.'s;
◦ $X_1, \dots, X_n$ are independent nonnegative r.v.'s;
◦ each $X_i$ has the same distribution as $Y_i$;
then
$\|X_1 + \cdots + X_n\|_q \le C_q \cdot \|Y_1 + \cdots + Y_n\|_q$
for some absolute constant $C_q$.
Decoupling Inequality (de la Peña '90)
In the same setting, De la Peña, Ibragimov, Sharakhmetov '03 showed:
◦ $C_q^q \le 2$ for $q \in (1, 2]$;
◦ $C_q^q \le \|P\|_q^q$ for $q \ge 2$.
Decoupling Inequality (de la Peña '90)
We show that $C_q^q = \|P\|_q^q$, where $P$ is a Poisson r.v. with parameter 1; this is the exact constant.
What is the Poisson distribution?
• $P_\lambda$ is Poisson with parameter $\lambda$ $\Rightarrow$ $\mathbb{P}[P_\lambda = k] = \frac{\lambda^k e^{-\lambda}}{k!}$.
• $P_\lambda$ = number of “chosen” points when there are $N$ points and each is chosen independently with probability $\lambda/N$ (in the limit $N \to \infty$).
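A quick numerical illustration of this picture, a minimal sketch comparing the Binomial$(N, \lambda/N)$ and Poisson$(\lambda)$ probability mass functions:

```python
import math

def binom_pmf(N, p, k):
    return math.comb(N, k) * p ** k * (1 - p) ** (N - k)

def poisson_pmf(lam, k):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam, N = 1.0, 1000
for k in range(5):
    # the two columns are nearly identical for large N
    print(k, round(binom_pmf(N, lam / N, k), 5), round(poisson_pmf(lam, k), 5))
```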
Better Language?
Yes – Convex Stochastic Order
Convex Order
Def: $X \le_{cx} Y$ if for every convex $\varphi$: $\mathbb{E}\varphi(X) \le \mathbb{E}\varphi(Y)$.
Basic properties:
• $\le_{cx}$ is a property of distributions: if $X =_{st} Z$ and $X \le_{cx} Y$, then $Z \le_{cx} Y$.
• If $X \le_{cx} Y$ and $Y \le_{cx} Z$, then $X \le_{cx} Z$.
• If $X \le_{cx} Y$, then $\alpha X + \beta \le_{cx} \alpha Y + \beta$.
• If $X \le_{cx} Y$, then $\mathbb{E}X = \mathbb{E}Y$.
• To verify $X \le_{cx} Y$, it suffices to check convex $\varphi$ with $\varphi(0) = 0$.
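For finite distributions, convex order can be checked mechanically. The Python sketch below relies on the standard characterization that $X \le_{cx} Y$ iff $\mathbb{E}X = \mathbb{E}Y$ and $\mathbb{E}(X - t)_+ \le \mathbb{E}(Y - t)_+$ for all $t$ (for discrete distributions it suffices to check $t$ in the union of the supports); it is only a verification aid for the examples that follow.

```python
def stop_loss(dist, t):
    """E[(X - t)_+] for a finite distribution given as {value: probability}."""
    return sum(p * max(x - t, 0.0) for x, p in dist.items())

def convex_dominated(dist_x, dist_y, tol=1e-9):
    """Check X <=_cx Y for finite distributions: equal means and
    E[(X - t)_+] <= E[(Y - t)_+] at every support point."""
    mean_x = sum(x * p for x, p in dist_x.items())
    mean_y = sum(y * p for y, p in dist_y.items())
    if abs(mean_x - mean_y) > tol:
        return False
    points = set(dist_x) | set(dist_y)
    return all(stop_loss(dist_x, t) <= stop_loss(dist_y, t) + tol for t in points)

# Example: Bernoulli(1/2) vs. the mean-preserving spread {0: 3/4, 2: 1/4}.
print(convex_dominated({0: 0.5, 1: 0.5}, {0: 0.75, 2: 0.25}))   # True
```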
Example: Random Walk
Let $W$ be a random walk and let $\tau_1 \le \tau_2$ be stopping times. Then $X = W_{\tau_1} \le_{cx} W_{\tau_2} = Y$.
[Figure: a sample path of the random walk, with the two stopping times marked.]
Example: $B \le_{cx} P$
• $B$ – Bernoulli r.v. with parameter $p$;
• $P$ – Poisson r.v. with parameter $p$.
• Then $B \le_{cx} P$.
Proof: Let $\varphi$ be convex with $\varphi(0) = 0$, and let $\ell$ be the linear function with $\ell(0) = 0$ and $\ell(1) = \varphi(1)$. Since $P$ takes values in $\{0, 1, 2, \dots\}$ and $\varphi$ is convex, $\varphi(P) \ge \ell(P)$; since $B \in \{0, 1\}$, $\varphi(B) = \ell(B)$. Hence
$\mathbb{E}\varphi(P) \ge \mathbb{E}\ell(P) = \ell(\mathbb{E}P) = \ell(\mathbb{E}B) = \mathbb{E}\ell(B) = \mathbb{E}\varphi(B).$
[Figure: a convex $\varphi(t)$ and the chord $\ell(t)$ through $(0, 0)$ and $(1, \varphi(1))$.]
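A small numeric spot-check of this example with the convex test functions $\varphi(t) = t^q$, $q \ge 1$ (which satisfy $\varphi(0) = 0$); here $\mathbb{E}\varphi(B) = p$ and $\mathbb{E}\varphi(P)$ is evaluated by a truncated series:

```python
import math

def E_phi_bernoulli(p, q):
    return p                      # E[B^q] = p, since B takes values in {0, 1}

def E_phi_poisson(p, q, kmax=60):
    return sum(k ** q * math.exp(-p) * p ** k / math.factorial(k)
               for k in range(1, kmax + 1))

for p in [0.1, 0.5, 0.9]:
    for q in [1.0, 2.0, 3.0]:
        assert E_phi_bernoulli(p, q) <= E_phi_poisson(p, q) + 1e-12
print("E[phi(B)] <= E[phi(P)] on the whole grid")
```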
Lemma I
If $X, Y, Z$ are independent r.v.'s and $X \le_{cx} Y$, then $Z + X \le_{cx} Z + Y$.
Proof:
$\mathbb{E}\varphi(Z + X) = \mathbb{E}\big[\mathbb{E}[\varphi(Z + X) \mid Z]\big] \le \mathbb{E}\big[\mathbb{E}[\varphi(Z + Y) \mid Z]\big] = \mathbb{E}\varphi(Z + Y),$
since for any fixed $z$ the function $t \mapsto \varphi(z + t)$ is convex, so $\mathbb{E}\varphi(z + X) \le \mathbb{E}\varphi(z + Y)$.
Lemma II
• $X_1, \dots, X_n$ are independent r.v.'s; $Y_1, \dots, Y_n$ are independent r.v.'s;
• $X_i \le_{cx} Y_i$ for all $i$.
• Then $X_1 + \cdots + X_n \le_{cx} Y_1 + \cdots + Y_n$.
Proof (replace one variable at a time, using Lemma I):
$X_1 + X_2 + X_3 + X_4 \le_{cx} X_1 + X_2 + X_3 + Y_4 \le_{cx} X_1 + X_2 + Y_3 + Y_4 \le_{cx} X_1 + Y_2 + Y_3 + Y_4 \le_{cx} Y_1 + Y_2 + Y_3 + Y_4.$
Decoupling Inequality
If
◦ $Y_1, \dots, Y_n$ are jointly distributed nonnegative r.v.'s;
◦ $X_1, \dots, X_n$ are independent nonnegative r.v.'s;
◦ each $X_i$ has the same distribution as $Y_i$;
then
$X_1 + \cdots + X_n \le_{cx} P \cdot (Y_1 + \cdots + Y_n),$
where $P$ is a Poisson r.v. with parameter 1 independent of the $Y_i$'s.
Proof of de la Peña's Inequality
• $\|X_1 + \cdots + X_n\|_q^q = \mathbb{E}(X_1 + \cdots + X_n)^q$
• $\|Y_1 + \cdots + Y_n\|_q^q = \mathbb{E}(Y_1 + \cdots + Y_n)^q$
Applying the convex-order inequality to the convex function $t \mapsto t^q$ and using that $P$ is independent of the $Y_i$'s:
$\mathbb{E}(X_1 + \cdots + X_n)^q \le \mathbb{E}\big[P \cdot (Y_1 + \cdots + Y_n)\big]^q = \mathbb{E}[P^q]\ \mathbb{E}(Y_1 + \cdots + Y_n)^q,$
so $C_q \le \|P\|_q$ (the example below shows this is tight).
Why Poisson?
• $X_1, \dots, X_n$: independent Bernoulli r.v.'s with parameter $1/n$;
• $Y_1, \dots, Y_n$: pick a random $i \in [n]$ and let $Y_i = 1$, $Y_j = 0$ for $j \ne i$.
Then
$X_1 + \cdots + X_n \approx P$ (for large $n$), while $Y_1 + \cdots + Y_n = 1$.
So
$X_1 + \cdots + X_n \approx P \cdot (Y_1 + \cdots + Y_n),$
i.e., the factor $P$ in the decoupling inequality cannot be improved.
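A minimal simulation of this tight example (sample sizes are arbitrary): the sum of $n$ independent Bernoulli$(1/n)$ variables behaves like a Poisson$(1)$ variable, while $Y_1 + \cdots + Y_n$ is identically 1.

```python
import math, random

def sum_bernoulli(n):
    """One sample of X_1 + ... + X_n with X_i ~ Bernoulli(1/n), independent."""
    return sum(random.random() < 1.0 / n for _ in range(n))

def poisson_q_moment(q, lam=1.0, kmax=100):
    return sum(k ** q * math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(1, kmax + 1))

n, q, trials = 500, 2.5, 50_000
emp = sum(sum_bernoulli(n) ** q for _ in range(trials)) / trials
print(emp, poisson_q_moment(q))   # E[(X_1+...+X_n)^q] is close to E[P^q],
                                  # while (Y_1+...+Y_n)^q = 1 exactly
```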
Special Case of the Theorem
• $X_1, \dots, X_n$ are independent Bernoulli random variables with $\mathbb{E}X_i = 1/n$;
• $Y_1, \dots, Y_n$ are Bernoulli random variables s.t. $\sum_i Y_i = 1$ (always).
Then
$Y_1 + \cdots + Y_n \le_{cx} X_1 + \cdots + X_n \le_{cx} P\,(Y_1 + \cdots + Y_n),$
i.e., $1 \le_{cx} X_1 + \cdots + X_n \le_{cx} P$.
Special Case of the Theorem
• $Y_1, \dots, Y_n$ are Bernoulli random variables s.t. $\sum_i Y_i = 1$ (always);
• $X_1, \dots, X_n$ are independent Bernoulli random variables with $\mathbb{P}(X_i = 1) = \mathbb{P}(Y_i = 1)$;
• $\alpha_1, \dots, \alpha_n$ are non-negative numbers.
Then
$\alpha_1 Y_1 + \cdots + \alpha_n Y_n \le_{cx} \alpha_1 X_1 + \cdots + \alpha_n X_n \le_{cx} P\,(\alpha_1 Y_1 + \cdots + \alpha_n Y_n).$
General Case
We compare the sums
• $\sum_i Y_i = \sum_{y \in \mathcal{Y}} \chi(Y = y) \cdot \big(\sum_i y_i\big)$,
• $\sum_{y \in \mathcal{Y}} B_y \cdot \big(\sum_i y_i\big)$,
• $\sum_i X_i$,
where $\mathcal{Y}$ is the support of $Y = (Y_1, \dots, Y_n)$, $y = (y_1, \dots, y_n)$, and $B_y$ is a Bernoulli r.v. with $\mathbb{P}(B_y = 1) = \mathbb{P}(Y = y)$.
General Case
Let $\mathcal{Y}$ be the support of $Y = (Y_1, \dots, Y_n)$. Then
$X_i =_{st} Y_i = \sum_{y \in \mathcal{Y}} \chi(Y = y) \cdot y_i \ \le_{cx}\ \sum_{y \in \mathcal{Y}} B_y^i \cdot y_i,$
where $B_y^i$ is a Bernoulli r.v. with parameter $\mathbb{P}(Y = y)$. Thus,
$\sum_i X_i \ \le_{cx}\ \sum_i \sum_{y \in \mathcal{Y}} B_y^i \cdot y_i \ =\ \sum_{y \in \mathcal{Y}} \Big(\sum_i y_i B_y^i\Big) \ \le_{cx}\ \sum_{y \in \mathcal{Y}} \Big(\sum_i y_i\Big) B_y$
(the last step uses the Lemma below). Now,
$\sum_{y \in \mathcal{Y}} \Big(\sum_i y_i\Big) B_y \ \le_{cx}\ P \cdot \sum_{y \in \mathcal{Y}} \Big(\sum_i y_i\Big) \chi(Y = y) \ =\ P \cdot \sum_i Y_i.$
Lemma: If $B^1, \dots, B^n$ and $B$ are Bernoulli r.v.'s with the same parameter and $y_1, \dots, y_n \ge 0$, then
$\sum_i y_i B^i \ \le_{cx}\ \Big(\sum_i y_i\Big) B.$
Proof: Rescale $y$ so that $\sum_i y_i = 1$. Then, by convexity of $\varphi$,
$\mathbb{E}\,\varphi\Big(\sum_i y_i B^i\Big) \le \sum_i y_i\, \mathbb{E}\,\varphi(B^i) = \sum_i y_i\, \mathbb{E}\,\varphi(B) = \mathbb{E}\,\varphi(B).$
Special Case of the Theorem
• $Y_1, \dots, Y_n$ are Bernoulli random variables s.t. $\sum_i Y_i = 1$ (always);
• $X_1, \dots, X_n$ are independent Bernoulli random variables with $\mathbb{P}(X_i = 1) = \mathbb{P}(Y_i = 1)$;
• $\alpha_1, \dots, \alpha_n$ are non-negative numbers.
Then
$\alpha_1 Y_1 + \cdots + \alpha_n Y_n \le_{cx} \alpha_1 X_1 + \cdots + \alpha_n X_n \le_{cx} P\,(\alpha_1 Y_1 + \cdots + \alpha_n Y_n).$
Proof: $\alpha_1 Y_1 + \cdots + \alpha_n Y_n \le_{cx} \alpha_1 X_1 + \cdots + \alpha_n X_n$
• Consider a convex function $\varphi$ with $\varphi(0) = 0$.
• Such a $\varphi$ is superadditive on nonnegative arguments: $\varphi(a + b) \ge \varphi(a) + \varphi(b)$.
• Hence $\varphi(\alpha_1 X_1 + \cdots + \alpha_n X_n) \ge \varphi(\alpha_1 X_1) + \cdots + \varphi(\alpha_n X_n)$.
Therefore
$\mathbb{E}\,\varphi\Big(\sum_i \alpha_i X_i\Big) \ \ge\ \sum_i \mathbb{E}\,\varphi(\alpha_i X_i) \ =\ \sum_i \varphi(\alpha_i)\,\mathbb{P}(Y_i = 1) \ =\ \sum_i \mathbb{E}\,\varphi(\alpha_i Y_i) \ =\ \mathbb{E}\,\varphi\Big(\sum_i \alpha_i Y_i\Big),$
where the last equality uses that exactly one $Y_i$ equals 1 and $\varphi(0) = 0$.
Proof: $\alpha_1 X_1 + \cdots + \alpha_n X_n \le_{cx} P\,(\alpha_1 Y_1 + \cdots + \alpha_n Y_n)$
• Let $P_i$ be a Poisson r.v. with parameter $\mathbb{P}(X_i = 1)$, with $P_1, \dots, P_n$ independent. Then, by the example $B \le_{cx} P$ and Lemma II,
$\alpha_1 X_1 + \cdots + \alpha_n X_n \le_{cx} \alpha_1 P_1 + \cdots + \alpha_n P_n.$
• $P = P_1 + \cdots + P_n$ is a Poisson r.v. with parameter $\sum_i \mathbb{P}(X_i = 1) = \sum_i \mathbb{E}Y_i = 1$.
Proof: $\alpha_1 P_1 + \cdots + \alpha_n P_n \le_{cx} P\,(\alpha_1 Y_1 + \cdots + \alpha_n Y_n)$
• Let $P = P_1 + \cdots + P_n$ and condition on $P = k > 0$ (for $k = 0$ both sides equal $\varphi(0) = 0$):
$\mathbb{E}\Big[\varphi\Big(\sum_i \alpha_i P_i\Big) \,\Big|\, P = k\Big] = \mathbb{E}\Big[\varphi\Big(\sum_i \tfrac{P_i}{k}\,\alpha_i k\Big) \,\Big|\, P = k\Big] \le \mathbb{E}\Big[\sum_i \tfrac{P_i}{k}\,\varphi(\alpha_i k) \,\Big|\, P = k\Big]$
$= \sum_i \varphi(\alpha_i k)\, \mathbb{E}\Big[\tfrac{P_i}{k} \,\Big|\, P = k\Big] = \sum_i \varphi(\alpha_i k)\, \mathbb{E}[Y_i] = \mathbb{E}\Big[\varphi\Big(P \sum_i \alpha_i Y_i\Big) \,\Big|\, P = k\Big].$
Here the inequality is convexity of $\varphi$ (the weights $P_i/k$ sum to 1 given $P = k$); conditioned on $P = k$, $(P_1, \dots, P_n)$ is multinomial with probabilities $\mathbb{P}(X_i = 1)$, so $\mathbb{E}[P_i/k \mid P = k] = \mathbb{P}(X_i = 1) = \mathbb{E}[Y_i]$; and the last equality again uses that exactly one $Y_i$ equals 1.
Conclusion
• Introduced a new framework for problems with “diseconomies of scale”:
◦ energy minimization
◦ $L_p$-minimization
• Found the exact constant in de la Peña's inequality.
• Generalized it to other convex and concave functions.
$X_1 + \cdots + X_n \le_{cx} P \cdot (Y_1 + \cdots + Y_n)$
Thank you!