Recent Progress on Integer Programming and Lattice Problems


Lattice Sparsification and the Approximate Closest Vector Problem
Daniel Dadush
Centrum Wiskunde & Informatica
Joint work with Gabor Kun (Renyi Institute)
Outline
1. Norms, Lattices and Lattice Problems:
a) Shortest & Closest Vector Problems (SVP / CVP).
2. Overview of Algorithms for Lattice Problems:
a) Ajtai, Kumar, Sivakumar (AKS) randomized sieve.
3. Lattice Point Enumeration via Ellipsoid Covering (Older work):
a) Micciancio, Voulgaris algorithm for 𝑙2 CVP.
b) SVP under general norms.
4. Lattice Sparsification (New work):
a) Approximate CVP under general norms.
Lattices
A lattice L ⊆ ℝⁿ is the set of all integral combinations of a basis B = (b₁, …, bₙ).
Note: a lattice has many equivalent bases.
[Figure: two equivalent bases (b₁, b₂) of the same lattice L]
Norms and Convex Bodies
Symmetric convex body K ⊆ ℝⁿ (K = −K).
Gauge Function: ‖x‖_K ≝ inf{s ≥ 0 : x ∈ sK}
K = {x ∈ ℝⁿ : ‖x‖_K ≤ 1}, i.e. K is the unit ball of ‖·‖_K.
1. ‖x + y‖_K ≤ ‖x‖_K + ‖y‖_K  (triangle inequality)
2. ‖tx‖_K = t‖x‖_K, t ≥ 0  (homogeneity)
3. ‖x‖_K = ‖−x‖_K  (symmetry)
[Figure: x on the boundary of sK, together with −x and K]
Norms and Convex Bodies
Convex body K ⊆ ℝⁿ containing the origin in its interior.
Gauge Function: ‖x‖_K ≝ inf{s ≥ 0 : x ∈ sK}
K = {x ∈ ℝⁿ : ‖x‖_K ≤ 1}, i.e. K is the unit ball of ‖·‖_K.
The triangle inequality and homogeneity still hold; symmetry (‖x‖_K = ‖−x‖_K) holds only when K = −K.
[Figure: x on the boundary of sK]
Norms and Convex Bodies
ℓ_p norms, p ≥ 1:  ‖x‖_p = (∑ᵢ |xᵢ|^p)^{1/p}
[Figure: unit balls B₁ⁿ (ℓ₁, cross-polytope), B₂ⁿ (ℓ₂, ball), B∞ⁿ (ℓ∞, cube)]
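As a small aside (not from the talk), the gauge can be evaluated numerically from a membership oracle for K; the helper names `lp_norm`, `gauge`, and `in_K` below are illustrative:

```python
def lp_norm(x, p):
    """l_p norm for p >= 1; p = float('inf') gives the l_oo (max) norm."""
    if p == float('inf'):
        return max(abs(c) for c in x)
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

def gauge(x, in_K, hi=1e6, iters=60):
    """||x||_K = inf{s >= 0 : x in sK}, computed by bisection on s using a
    membership oracle in_K for the convex body K (which must contain 0)."""
    if all(c == 0 for c in x):
        return 0.0
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if in_K([c / mid for c in x]):   # x in mid*K  <=>  x/mid in K
            hi = mid
        else:
            lo = mid
    return hi

# For the l_2 unit ball the gauge is just the l_2 norm:
ball = lambda v: sum(c * c for c in v) <= 1.0
print(round(gauge([3.0, -4.0], ball), 6))   # 5.0
```

The bisection works because sK grows monotonically in s, so membership of x in sK is monotone.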
Shortest Vector Problem (SVP)
Given: lattice L, norm ‖·‖_K in ℝⁿ.
Goal: find y ∈ L ∖ {0} minimizing ‖y‖_K.
[Figure: shortest vectors ±y of L and the unit ball K]
Closest Vector Problem (CVP)
Given: lattice L, target x, norm ‖·‖_K in ℝⁿ.
Goal: find y ∈ L minimizing ‖y − x‖_K.
[Figure: target x and its closest lattice vector y]
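In tiny dimensions both problems can be solved by brute force over a coefficient box; a minimal sketch (not an algorithm from the talk, and the box radius `R` is an assumption that must be large enough to contain the optimum):

```python
from itertools import product

def lattice_vector(B, coeffs):
    """Integral combination sum_i coeffs[i] * B[i] of the basis rows."""
    return [sum(c * B[i][j] for i, c in enumerate(coeffs)) for j in range(len(B[0]))]

def svp_bruteforce(B, R, norm):
    """Shortest non-zero vector among combinations with coefficients in [-R, R]^n."""
    cands = (lattice_vector(B, cs) for cs in product(range(-R, R + 1), repeat=len(B))
             if any(cs))
    return min(cands, key=norm)

def cvp_bruteforce(B, x, R, norm):
    """Closest lattice vector to the target x over the same coefficient box."""
    cands = (lattice_vector(B, cs) for cs in product(range(-R, R + 1), repeat=len(B)))
    return min(cands, key=lambda y: norm([y[j] - x[j] for j in range(len(x))]))

l2 = lambda v: sum(c * c for c in v) ** 0.5
B = [[2.0, 0.0], [1.0, 2.0]]
print(svp_bruteforce(B, 3, l2))               # a shortest vector, of norm 2
print(cvp_bruteforce(B, [1.9, 0.1], 3, l2))   # [2.0, 0.0]
```

This scan is exponential in the dimension, which is exactly why the sophisticated enumeration machinery later in the talk is needed.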
Applications of SVP and CVP
Optimization: Integer and Linear Programming
Number Theory: Factoring Polynomials, Number Field Sieve
Communication Theory: Decoding Gaussian Channels
Cryptanalysis: RSA with Small Exponent, Knapsack Cryptosystems
Cryptography: Lattice-based cryptosystems (hardness of LWE / SIS)
Hardness
SVP: hard to approximate within any constant factor under ℓ_p norms [Ajtai 98, Cai-Nerurkar 98, Micciancio 98, Khot 03, …].
CVP: hard to approximate within n^{c/log log n} under ℓ_p norms [Arora-Babai-Stern-Sweedyk 93, Dinur-Kindler-Raz-Safra 98].
Outline
Question: How do we solve lattice problems under general norms?
1) Overview of known algorithms
Methods for 𝑙2 lattice problems, AKS Sieve for general norms
2) Lattice Point Enumeration via M-Ellipsoid Coverings (Older work)
SVP under general norms
3) Lattice Sparsification (New work)
Approximate CVP under general norms
Why General Norms?
1. Integer Programming Problem (IP):
Input: A ∈ ℚ^{m×n}, b ∈ ℚ^m
IP Problem: decide whether the system Ax ≤ b, x ∈ ℤⁿ has a solution.
A classic NP-hard problem.
[Figure: feasible regions K₁, K₂ over the lattice ℤⁿ]
Why General Norms?
1. Integer Programming Problem (IP):
Lattice Sub-Problems:
1. Heuristic rounding: try to find an integer point near the center of the feasible region (Approximate CVP).
2. Decompose IP: break up into lower dimensional slices (SVP).
[Figure: feasible regions K₁, K₂ over ℤⁿ]
Why General Norms?
2. Diophantine Approximation: ℓ∞ SVP
Find z ∈ ℤⁿ and an integer q ≤ N s.t. ‖z − qx‖_∞ ≤ N^{−1/n}. [FT 87]
3. Machine-efficient polynomial approximation: ℓ∞ CVP
Find the best polynomial approximation with small coefficients. [BMT 06]
f: [0,1] → ℝ
Goal: output a degree-d polynomial p(x) = Σ_{i=0}^d (aᵢ/2^p) xⁱ, aᵢ ∈ ℤ, minimizing max_{x ∈ [0,1]} |f(x) − p(x)|.
Why General Norms?
2. Diophantine Approximation: ℓ∞ SVP
Find z ∈ ℤⁿ and an integer q ≤ N s.t. ‖z − qx‖_∞ ≤ N^{−1/n}. [FT 87]
3. Machine-efficient polynomial approximation: ℓ∞ CVP
Find the best polynomial approximation with small coefficients. [BMT 06]
f: [0,1] → ℝ
Alg: discretize [0,1] to a finite set S = {x₁, …, xₙ}, then minimize over a₀, …, a_d ∈ ℤ the ℓ∞ norm of
V·(a/2^p) − (f(x₁), …, f(xₙ))ᵀ,
where V is the n × (d+1) Vandermonde matrix with rows (1, xᵢ, …, xᵢ^d).
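As a toy instance of this CVP view (hypothetical parameters, not the [BMT 06] algorithm): approximate f(x) = eˣ on [0,1] by a degree-1 polynomial whose coefficients are multiples of 1/2³, via exhaustive search over a small integer box:

```python
import math
from itertools import product

f = math.exp
S = [i / 10 for i in range(11)]     # discretize [0, 1] to 11 grid points
SCALE = 2 ** 3                      # coefficients are multiples of 1/8

def max_err(a0, a1):
    """l_oo error over S of p(x) = (a0 + a1*x) / 2^3 against f."""
    return max(abs(f(x) - (a0 + a1 * x) / SCALE) for x in S)

# Exhaustive search over the integer coefficient box [-32, 32]^2.
best = min(product(range(-32, 33), repeat=2), key=lambda a: max_err(*a))
print(best, round(max_err(*best), 4))   # (7, 14) 0.125
```

The optimal dyadic fit p(x) = (7 + 14x)/8 is only slightly worse than the real-coefficient minimax error (about 0.106), which is the kind of gap the lattice formulation controls.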
Ellipsoids
An ellipsoid E = T·B₂ⁿ is a linear transformation of a ball.
E = {x ∈ ℝⁿ : ‖Ax‖₂ ≤ 1} for A = T^{−1}.
‖x‖_E = ‖Ax‖₂ defines the ellipsoidal norm on ℝⁿ.
[Figure: T maps B₂ⁿ to E; x lies on the boundary of sE, so ‖x‖_E = s]
Ellipsoids
Theorem [John '48]: any symmetric convex body K ⊆ ℝⁿ can be approximated by an ellipsoid E satisfying
E ⊆ K ⊆ √n·E
[Figure: E ⊆ K ⊆ √n·E]
General norm problems become "interesting" when the approximation factor is much less than √n.
SVP & CVP under 𝑙2 norm
Method           | Prob      | Apx        | Time         | Space        | Rand         | Authors
Basis Reduction  | SVP & CVP | 2^{O(n)}   | poly(n)      | poly(n)      | 0            | LLL 83, Bab 86, Sch 87, …
Basis Reduction  | SVP & CVP | 1          | n^{n/2}      | poly(n)      | 0            | Kan 87, Hel 86, Blo 00, HS 08, …
Randomized Sieve | SVP       | 1          | 2^{O(n)}     | 2^{O(n)}     | 2^{O(n)}     | AKS 00, BN 07, MV 10a
Randomized Sieve | CVP       | 1+ε        | (1/ε)^{O(n)} | (1/ε)^{O(n)} | (1/ε)^{O(n)} | AKS 01, BN 07
Voronoi Cell     | SVP & CVP | 1          | 2^{O(n)}     | 2^{O(n)}     | 0            | MV 10b
SVP & CVP under ℓ₂ norm (table repeated)
Question: can the 2^{O(n)} space of the Voronoi cell algorithm [MV 10b] be reduced to poly(n)?
SVP & CVP under General Norms
Method                         | Prob      | Norm | Apx | Time                 | Space        | Rand         | Authors
Ellipsoidal norm approx.       | SVP & CVP | sym  | √n  | 2^{O(n)}             | 2^{O(n)}     | 0            | MV 10
Randomized Sieve               | SVP       | sym  | 1   | 2^{O(n)}             | 2^{O(n)}     | 2^{O(n)}     | BN 07, AJ 09
Randomized Sieve               | CVP       | ℓ_p  | 1+ε | (1/ε)^{O(n)}         | (1/ε)^{O(n)} | (1/ε)^{O(n)} | BN 07
Randomized Sieve               | CVP       | all* | 1+ε | (1/ε)^{O(n)}         | (1/ε)^{O(n)} | (1/ε)^{O(n)} | D 12
Randomized Sieve + Boosting    | CVP       | ℓ∞   | 1+ε | (1 + ln(1/ε))^{O(n)} | 2^{O(n)}     | 2^{O(n)}     | EHN 11
Voronoi Cell + Ellipsoid Cover | SVP       | all* | 1   | 2^{O(n)}             | 2^{O(n)}     | 0            | DPV 11, DV 12
* near-symmetric norms: vol(K) ≤ 2^{O(n)} vol(K ∩ −K)
SVP & CVP under General Norms (table repeated)
Missing: deterministic (1+ε)-CVP? (Focus of this talk.)
SVP & CVP under General Norms (table repeated)
Missing: exact CVP in 2^{O(n)} time? (Best current: n^n time.)
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
The main option for solving lattice problems under norms more general than ℓ₂.
[Figure: norm ball K centered at 0]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
How it works for SVP.
Algorithm Outline:
1. Sample 2^{O(n)} "perturbed" lattice points.
[Figure: perturbed points x₁, …, x₉ inside rK]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
How it works for SVP.
Algorithm Outline:
1. Sample 2^{O(n)} "perturbed" lattice points.
2. Iteratively cluster to get shorter vectors.
[Figure: points clustered within radius r/2 inside rK]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
How it works for SVP.
Algorithm Outline:
1. Sample 2^{O(n)} "perturbed" lattice points.
2. Iteratively cluster to get shorter vectors.
Lost vectors: only a bounded number are lost per round (packing argument).
[Figure: surviving points inside rK]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
How it works for SVP.
Algorithm Outline:
1. Sample 2^{O(n)} "perturbed" lattice points.
2. Iteratively cluster to get shorter vectors; repeat on (r/2)K.
[Figure: remaining points inside (r/2)K]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
How it works for SVP.
Algorithm Outline:
1. Sample 2^{O(n)} "perturbed" lattice points.
2. Iteratively cluster to get shorter vectors.
3. Unperturb and return the shortest non-zero difference.
[Figure: final short vectors around 0 in K]
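The clustering step of the outline can be sketched as follows (a toy, ℓ₂-flavored version; `sieve_step` is an illustrative name, not code from the papers):

```python
def sieve_step(points, r, norm):
    """One AKS-style clustering pass: each point within r/2 of an earlier
    chosen center is replaced by its difference with that center, so every
    surviving vector has norm <= r/2. Centers themselves are 'lost'; a
    packing argument bounds how many centers there can be."""
    centers, out = [], []
    for p in points:
        hit = None
        for c in centers:
            if norm([p[i] - c[i] for i in range(len(p))]) <= r / 2:
                hit = c
                break
        if hit is None:
            centers.append(p)                   # lost: becomes a new center
        else:
            out.append([p[i] - hit[i] for i in range(len(p))])
    return out, centers

l2 = lambda v: sum(c * c for c in v) ** 0.5
pts = [[3, 4], [1, 1], [3, 3], [-2, 4], [0, 2]]
out, lost = sieve_step(pts, 10, l2)
print(len(out), len(lost))   # 4 1
```

Note that differences of lattice points stay in the lattice, which is why the step can be iterated.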
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
Why add noise to the lattice points? (Noise ~ uniform(K))
y, v ∈ L, ‖y‖_K = O(1), v a shortest non-zero vector.
The sieve can't distinguish whether "unperturbing" results in y + v or y. (Information hiding)
[Figure: overlapping uncertainty regions y + K and (y + v) + K]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
Why add noise to the lattice points? (Noise ~ uniform(K))
y, v ∈ L, ‖y‖_K = O(1), v a shortest non-zero vector.
There is a bounded number of uncertainty regions with short y, so the pigeonhole principle puts many sieved vectors in the same region.
[Figure: sieved points x₁, x₂, x₃ inside one uncertainty region]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
Why add noise to the lattice points? (Noise ~ uniform(K))
y, v ∈ L, ‖y‖_K = O(1), v a shortest non-zero vector.
The uncertainty guarantees that the set of differences contains v with overwhelming probability.
[Figure: uncertainty region around y and y + v]
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
Why add noise to the lattice points? (Noise ~ uniform(K))
y, v ∈ L, ‖y‖_K = O(1), v a shortest non-zero vector.
What about CVP? The same approach works with a homogenization trick, but one can only guarantee that many sieved vectors land in a "union" of uncertainty regions (hence only an approximate solution).
Randomized Sieve [Ajtai-Kumar-Sivakumar `00]
Pros
1. The mechanics of the sieve are very easy to implement (practical for the ℓ₂ norm [Nguyen-Vidick 08]).
2. Can solve SVP and (1+ε)-CVP under general norms.
Cons
1. Sieving algorithms are Monte Carlo, i.e. one cannot verify the correctness of the output.
2. Requires an exponential number of uniform samples from the norm ball K (complicated to generate).
3. Requires exponential space (seems inherent).
Main Result
Closest Vector Problem [D.-Kun 12]:
For a lattice L, target x, and near-symmetric norm ‖·‖_K in ℝⁿ, computes y ∈ L minimizing ‖y − x‖_K to within 1+ε in deterministic 2^{O(n)}(1 + 1/ε)ⁿ time and 2ⁿ space.

           | AKS Based              | This Paper
Time       | 2^{O(n)}(1 + 1/ε)^{2n} | 2^{O(n)}(1 + 1/ε)ⁿ
Space      | 2^{O(n)}(1 + 1/ε)ⁿ     | 2ⁿ
Randomness | 2^{O(n)}(1 + 1/ε)ⁿ     | 0
Main Result
Closest Vector Problem [D.-Kun 12]:
For a lattice L, target x, and near-symmetric norm ‖·‖_K in ℝⁿ, computes y ∈ L minimizing ‖y − x‖_K to within 1+ε in deterministic 2^{O(n)}(1 + 1/ε)ⁿ time and poly(n) space,
if there exists a 2^{O(n)}-time and poly(n)-space algorithm for exact CVP in the ℓ₂ norm.

           | AKS Based              | This Paper
Time       | 2^{O(n)}(1 + 1/ε)^{2n} | 2^{O(n)}(1 + 1/ε)ⁿ
Space      | 2^{O(n)}(1 + 1/ε)ⁿ     | poly(n)
Randomness | 2^{O(n)}(1 + 1/ε)ⁿ     | 0
Main Result
Closest Vector Problem [D.-Kun 12]:
For a lattice L, target x, and near-symmetric norm ‖·‖_K in ℝⁿ, computes y ∈ L minimizing ‖y − x‖_K to within 1+ε in deterministic 2^{O(n)}(1 + 1/ε)ⁿ time and 2ⁿ space.
Builds upon the lattice point enumeration techniques of [Micciancio-Voulgaris 10] (ℓ₂ norm) and [D.-Peikert-Vempala 11] (general norms).
CVP under 𝑙∞
Corollary [Eisenbrand-Hähnle-Niemeier 11, D.-Kun 12]:
For a lattice L and target x in ℝⁿ, computes y ∈ L minimizing ‖y − x‖_∞ to within 1+ε in deterministic 2^{O(n)}(1 + ln(1/ε))ⁿ time and 2ⁿ space.
A direct application of the boosting technique of [Eisenbrand-Hähnle-Niemeier 11]; removes the randomization due to the AKS sieve.
Application to IP
Theorem [D. 12, D.-Kun 12]: For a convex body K ⊆ ℝⁿ, in expected 2^{O(n)} time and 2ⁿ space, one can either
1. return c ∈ K such that the 1/2-scaling of K about c is integer-free, or
2. compute an integer point y ∈ K.
[Figure: K, center c, the shrunk copy ½K, and the lattice ℤⁿ]
Application to IP
Theorem [D. 12, D.-Kun 12]: For a convex body K ⊆ ℝⁿ, in expected 2^{O(n)} time and 2ⁿ space, one can either
1. return c ∈ K such that the 1/2-scaling of K about c is integer-free, or
2. compute an integer point y ∈ K.
Previous best: a scaling factor of 1/n [HK 11].
Used to improve the complexity of Integer Programming [D. 12].
[Figure: K, c, ℤⁿ, integer point y]
Application to IP
Algorithm:
1. Compute c ∈ K such that K − c is near-symmetric (can use the center of mass of K).
2. Solve 2-approximate CVP with target c, norm ‖·‖_{K−c}, and lattice ℤⁿ.
3. If y ∈ K, return y; otherwise, return c.
[Figure: K, c, ℤⁿ, y]
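A minimal sketch of this routine in 2D, for an axis-aligned ellipse (a hypothetical instance; the exact CVP is done by brute force over a small box, so the certificate in the 'free' branch is valid whenever the optimum lies in the scanned box):

```python
def gauge_ellipse(v, axes):
    """||v||_{K-c} for the axis-aligned ellipse with the given semi-axes."""
    return sum((v[i] / axes[i]) ** 2 for i in range(len(v))) ** 0.5

def ip_feasibility(c, axes, R=5):
    """Either ('point', y) with y an integer point of K, or ('free', c),
    certifying that even the closest integer point has gauge > 1 about c
    (so in particular the 1/2-scaling of K about c is integer free)."""
    best, best_g = None, float('inf')
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            g = gauge_ellipse([i - c[0], j - c[1]], axes)
            if g < best_g:
                best, best_g = [i, j], g
    return ('point', best) if best_g <= 1.0 else ('free', c)

print(ip_feasibility([0.5, 0.5], [0.8, 0.8]))   # ('point', [0, 0])
print(ip_feasibility([0.5, 0.5], [0.5, 0.5]))   # ('free', [0.5, 0.5])
```

The actual theorem only needs a 2-approximate CVP oracle; using an exact one here just makes the toy certificate stronger.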
Lattice Point Enumerator
Theorem [D.-Peikert-Vempala 11, D.-Vempala 12, D. 12]:
For K and L in ℝⁿ, computes K ∩ L in deterministic 2^{O(n)} G(K, L) time and 2ⁿ space, where G(K, L) = max_{x ∈ ℝⁿ} |(K + x) ∩ L|.
The main workhorse for solving lattice problems.
Lattice Point Enumerator
Theorem [D.-Peikert-Vempala 11, D.-Vempala 12, D. 12]:
For K and L in ℝⁿ, computes K ∩ L in deterministic 2^{O(n)} G(K, L) time and poly(n) space, where G(K, L) = max_{x ∈ ℝⁿ} |(K + x) ∩ L|,
if there exists a 2^{O(n)}-time and poly(n)-space ℓ₂ CVP solver.
Lattice Point Enumerator
Theorem [D.-Peikert-Vempala 11, D.-Vempala 12, D. 12]:
For K and L in ℝⁿ, computes K ∩ L in deterministic 2^{O(n)} G(K, L) time and 2ⁿ space, where G(K, L) = max_{x ∈ ℝⁿ} |(K + x) ∩ L|.
Builds upon the enumeration techniques of [Micciancio-Voulgaris 10] for ellipsoids, plus an ellipsoid covering technique using M-ellipsoids [Milman 86].
Ellipsoid Enumeration
[Micciancio Voulgaris `10]
Goal: compute (E + x) ∩ L.
[Figure: ellipsoid E + x over the lattice L]
Ellipsoid Enumeration
[Micciancio Voulgaris `10]
Alg: find the closest vector y to x under ‖·‖_E.
[Figure: y, x, E + x, L]
Ellipsoid Enumeration
[Micciancio Voulgaris `10]
Alg: compute the "Voronoi relevant" vectors v₁, …, v_k of L with respect to ‖·‖_E.
[Figure: Voronoi relevant vectors v₁, …, v₆ around y]
Ellipsoid Enumeration
[Micciancio Voulgaris `10]
Alg: for z ∈ (E + x) ∩ L, let p(z) = z + vᵢ for the minimal index i such that p(z) is closer to x. Note p(y) = ∅ (the root has no parent).
[Figure: z, its parent p(z), and the root y in E + x]
Ellipsoid Enumeration
[Micciancio Voulgaris `10]
Alg: traverse the implicit tree, starting from the root y, to enumerate (E + x) ∩ L.
[Figure: tree traversal over (E + x) ∩ L]
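For L = ℤ² under ℓ₂, the Voronoi relevant vectors are just ±e₁ and ±e₂, which makes the traversal easy to sketch (a toy version of the [Micciancio-Voulgaris 10] idea; no visited set is needed because every non-root point has a unique parent):

```python
V = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # Voronoi relevant vectors of Z^2

def dist2(z, x):
    return (z[0] - x[0]) ** 2 + (z[1] - x[1]) ** 2

def parent(z, x):
    """p(z) = z + v_i for the minimal index i bringing z closer to x;
    returns None exactly at the root (the closest lattice point)."""
    for v in V:
        w = (z[0] + v[0], z[1] + v[1])
        if dist2(w, x) < dist2(z, x):
            return w
    return None

def enumerate_ball(x, r2):
    """All z in Z^2 with dist2(z, x) <= r2, by DFS over the implicit tree."""
    root = (round(x[0]), round(x[1]))
    stack, out = [root], []
    while stack:
        z = stack.pop()
        out.append(z)
        for v in V:
            c = (z[0] - v[0], z[1] - v[1])   # candidate child of z
            if dist2(c, x) <= r2 and parent(c, x) == z:
                stack.append(c)
    return out

print(sorted(enumerate_ball((0.3, 0.2), 2)))   # the 7 points within sqrt(2) of x
```

Since every point's parent is strictly closer to x, parent chains end at the root and each point is discovered exactly once, using only the stack for storage.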
General Enumeration
[D-Peikert-Vempala `10]
Goal: compute K ∩ L.
[Figure: convex body K over the lattice L]
General Enumeration
[D-Peikert-Vempala `10]
Alg: compute a "good covering" ellipsoid E for K (an M-ellipsoid).
[Figure: E and K over L]
General Enumeration
[D-Peikert-Vempala `10]
M-Ellipsoid [Milman 86]: 2^{O(n)} translates of E suffice to cover K, and 2^{O(n)} translates of K suffice to cover E.
[Figure: E and K over L]
General Enumeration
[D-Peikert-Vempala `10]
Alg: compute a covering of K by 2^{O(n)} translates E + tᵢ.
[Figure: translates E + t₁, …, E + t₆ covering K]
General Enumeration
[D-Peikert-Vempala `10]
Alg: compute (E + tᵢ) ∩ L for all i.
[Figure: lattice points collected from each translate E + tᵢ]
General Enumeration
[D-Peikert-Vempala `10]
Alg: keep only the points in K.
[Figure: K ∩ L]
General Enumeration
[D-Peikert-Vempala `10]
By the covering properties, we enumerate at most |cover of K by E| × (max # points in E).
General Enumeration
[D-Peikert-Vempala `10]
By the covering properties, we enumerate at most |cover of K by E| × |cover of E by K| × (max # points in K).
General Enumeration
[D-Peikert-Vempala `10]
By the covering properties, we enumerate at most 2^{O(n)} × G(K, L) points.
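The covering approach can be mimicked in 2D with disks standing in for the M-ellipsoid (all shapes and translates below are hypothetical choices):

```python
from math import ceil, floor

in_K = lambda x, y: abs(x) <= 4 and abs(y) <= 1                 # the body K
in_E = lambda cx, cy, x, y: (x - cx) ** 2 + (y - cy) ** 2 <= 2  # disk E + t

centers = [(t, 0) for t in range(-4, 5, 2)]   # translates E + t_i whose union
                                              # covers the integer points of K

def enum_disk(cx, cy):
    """Integer points in the disk E + (cx, cy) (radius sqrt(2))."""
    pts = set()
    for x in range(floor(cx) - 2, ceil(cx) + 3):
        for y in range(floor(cy) - 2, ceil(cy) + 3):
            if in_E(cx, cy, x, y):
                pts.add((x, y))
    return pts

found = set()
for cx, cy in centers:                # enumerate (E + t_i) ∩ Z^2 for each i,
    found |= {p for p in enum_disk(cx, cy) if in_K(*p)}   # keep points in K
print(len(found))   # 27 = all integer points of [-4, 4] x [-1, 1]
```

The total work is (# translates) × (points per disk), mirroring the |cover| × G(E, L) bound above.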
Lattice Point Enumerator
Theorem [D.-Peikert-Vempala 11, D.-Vempala 12, D. 12]:
For K and L in ℝⁿ, computes K ∩ L in deterministic 2^{O(n)} G(K, L) time and 2ⁿ space, where G(K, L) = max_{x ∈ ℝⁿ} |(K + x) ∩ L|.
Questions: How do we use the enumerator for SVP / CVP? In which situations is G(K, L) reasonable?
Simple bound on 𝐺(𝑡𝐾, 𝐿)
Let K be a symmetric convex body (K = −K), and let λ₁(K, L) denote the length of the shortest non-zero vector of L under ‖·‖_K.
Lemma: for t = α·λ₁(K, L), G(tK, L) ≤ (2α + 1)ⁿ.
We will use a standard packing argument. A similar bound holds for near-symmetric bodies, i.e. where vol(K) ≤ 2^{O(n)} vol(K ∩ −K).
Simple bound on 𝐺(𝑡𝐾, 𝐿)
Assume λ₁(K, L) = 1. Pack copies of ½K around the lattice points.
[Figure: tK + x with ½K-copies packed around lattice points]
Simple bound on 𝐺(𝑡𝐾, 𝐿)
Assume λ₁(K, L) = 1. Comparing volumes:
|(tK + x) ∩ L| ≤ vol((½ + t)K) / vol(½K) = (2t + 1)ⁿ
[Figure: (½ + t)K + x containing the packed ½K-copies]
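A quick numeric sanity check of the lemma for K the ℓ∞ unit ball and L = ℤ², where λ₁(K, ℤ²) = 1 (so t = α and the bound reads (2t + 1)²):

```python
import math

def count_points(t, x):
    """|(tK + x) ∩ Z^2| for K the l_oo unit ball: integer points of the
    square [x1 - t, x1 + t] x [x2 - t, x2 + t]."""
    cnt = 1
    for xi in x:
        cnt *= math.floor(xi + t) - math.ceil(xi - t) + 1
    return cnt

for t in [1, 2, 3.5]:
    worst = max(count_points(t, (a / 7, b / 7)) for a in range(7) for b in range(7))
    assert worst <= (2 * t + 1) ** 2     # the (2*alpha + 1)^n bound
    print(t, worst, (2 * t + 1) ** 2)
```

For the cube the bound is tight: a shift placing both square endpoints on integers achieves exactly (2t + 1) points per dimension.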
SVP Algorithm
Goal: find y ∈ L ∖ {0} minimizing ‖y‖_K.
[Figure: ±y and the unit ball K]
SVP Algorithm
Alg: scale K such that K ∩ L = {0}.
[Figure: K around 0]
SVP Algorithm
Alg: compute 2ⁱK ∩ L for i = 1, 2, … until 2ⁱK ∩ L ≠ {0}. Return all shortest lattice vectors.
[Figure: K, 2K, 4K and the shortest vectors ±y]
SVP Algorithm
Runtime Analysis:
It suffices to bound the time to compute 2ⁱK ∩ L for the last step i.
Since i is the last step, we have 2^{i−1}K ∩ L = {0}, which implies λ₁(K, L) ≥ 2^{i−1}.
By our lemma, G(2ⁱK, L) ≤ (2·2 + 1)ⁿ = 5ⁿ.
Hence the enumeration time is proportional to 5ⁿ, as needed.
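The doubling loop can be sketched in 2D with a brute-force coefficient scan standing in for the M-ellipsoid enumerator (helper names are illustrative):

```python
from itertools import product

def enum_ball(B, radius, R=20):
    """Lattice points of l2 norm <= radius, via a coefficient box scan
    (exact only while the true points have coefficients in [-R, R]^2)."""
    pts = []
    for a, b in product(range(-R, R + 1), repeat=2):
        v = (a * B[0][0] + b * B[1][0], a * B[0][1] + b * B[1][1])
        if v[0] ** 2 + v[1] ** 2 <= radius ** 2:
            pts.append(v)
    return pts

def svp_doubling(B, r0=0.5):
    """Scale so that r0*K ∩ L = {0}, then double until non-trivial points
    appear; return all shortest non-zero vectors found."""
    r = r0
    while True:
        pts = [p for p in enum_ball(B, r) if p != (0, 0)]
        if pts:
            best = min(v[0] ** 2 + v[1] ** 2 for v in pts)
            return [v for v in pts if v[0] ** 2 + v[1] ** 2 == best]
        r *= 2

print(svp_doubling([[2.0, 0.0], [1.0, 2.0]]))   # [(-2.0, 0.0), (2.0, 0.0)]
```

The final radius overshoots λ₁ by at most a factor 2, which is exactly what makes the 5ⁿ bound from the lemma apply.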
Approximate CVP
Goal: find y ∈ L minimizing ‖y − x‖_K to within 1 + ε.
[Figure: dK and (1+ε)dK around the target x]
Approximate CVP
Question: what goes wrong with the previous approach (successive doubling, for exact CVP)?
[Figure: K, 2K, 4K around x]
Approximate CVP
Problem: d may be much larger than λ₁(K, L). (d is the distance of x from L.)
[Figure: a shortest vector much shorter than the scaling dK]
Approximate CVP
Problem: no control on G(dK, L). (d is the distance of x from L.)
[Figure: many lattice points inside dK + x]
Approximate CVP
Idea: make the lattice "sparser" without changing the distance of x to L by much.
[Figure: dK around x over L]
Approximate CVP
Lattice Sparsifier: build a sublattice L′ ⊆ L satisfying
1) λ₁(K, L′) = Ω(εd);
2) the distance of x to L′ is at most (1 + ε)d.
[Figure: dK around x over L]
Approximate CVP
Lattice Sparsifier: build a sublattice L′ ⊆ L satisfying
1) λ₁(K, L′) = Ω(εd);
2) the distance of x to L′ is at most (1 + ε)d.
Runtime on L′: G((1 + ε)dK, L′) = 2^{O(n)}(1 + 1/ε)ⁿ.
[Figure: (1+ε)dK and dK around x over the sparser lattice L′]
Lattice Sparsifier
Let d_K(L, x) = min_{y ∈ L} ‖y − x‖_K (the distance of x to L).
Theorem [D.-Kun 12]: for any distance s > 0, a sublattice L′ ⊆ L satisfying
1. λ₁(K, L′) ≥ s,
2. ∀x ∈ ℝⁿ: d_K(L′, x) ≤ d_K(L, x) + O(s)
can be computed in deterministic 2^{O(n)} time and 2ⁿ space.
Lattice Sparsifier Existence
Assume L = ℤⁿ. Symmetric convex body K, s > 0.
1. Let N = |ℤⁿ ∩ sK|.
2. Choose a prime p with 2N < p < 4N.
3. Let L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)} for a₁, …, aₙ chosen uniformly from ℤ_p.
[Figure: sK around 0 containing the N lattice points]
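The random sublattice can be sketched directly (toy dimension n = 2; the prime is hard-coded rather than derived from N = |ℤⁿ ∩ sK|):

```python
import random
from itertools import product

def sample_sparsifier(p, rng):
    """Sample a in Z_p^2 defining L' = {y in Z^2 : <y, a> = 0 (mod p)};
    resample until a is non-zero mod p, so that L' has index exactly p."""
    while True:
        a = [rng.randrange(p) for _ in range(2)]
        if any(a):
            return a

p = 11
a = sample_sparsifier(p, random.Random(0))
in_sub = lambda y: (y[0] * a[0] + y[1] * a[1]) % p == 0

# L' contains exactly p of the p^2 residue classes of Z^2 / p*Z^2 -- it is
# an index-p sublattice, hence "sparser" by a factor p.
kernel = sum(in_sub(y) for y in product(range(p), repeat=2))
print(kernel)   # 11
```

The two properties of the theorem (large λ₁ and small added distance) are what the following probabilistic lemmas establish for this construction.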
Lattice Sparsifier Existence
Assume L = ℤⁿ. Symmetric convex body K, s > 0. (Construction as before.)
Theorem [D.-Kun 12]: the random sublattice L′ works with constant probability.
[Figure: the sparser lattice L′ and sK]
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
Lemma (Property 1): all non-zero vectors of L′ have length ≥ s with probability at least 1/2.
E[|L′ ∩ (S ∖ {0})|] = Σ_{y ∈ S∖{0}} Pr[⟨y, a⟩ ≡ 0 (mod p)] = |S ∖ {0}|/p = (N − 1)/p < 1/2
The lemma follows from Markov's inequality.
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
Property 2: ∀x ∈ ℝⁿ, d_K(L′, x) ≤ d_K(ℤⁿ, x) + O(s).
Claim: it suffices that every y ∈ ℤⁿ be at distance O(s) from L′.
[Figure: x at distance d from its closest y ∈ ℤⁿ, and z ∈ L′ within O(s) of y; the triangle inequality then gives d_K(L′, x) ≤ d + O(s)]
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
Lemma (Property 2): every y ∈ ℤⁿ is at distance O(s) from L′ with probability at least 1/2.
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
Claim: let A = {⟨y, a⟩ mod p : y ∈ S}; then |A| ≥ p/20 + 1 with probability at least 1/2.
# Collisions: E[Σ_{x,y ∈ S} I[⟨x, a⟩ ≡ ⟨y, a⟩ (mod p)]] = |S| + |S|(|S| − 1)/p
Let Sᵢ = {x ∈ S : ⟨x, a⟩ ≡ i (mod p)}. By Cauchy-Schwarz,
|S|² = (Σ_{i ∈ A} |Sᵢ|)² ≤ |A| · (Σ_{i ∈ A} |Sᵢ|²) = |A| · (# Collisions)
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
Claim: let A = {⟨y, a⟩ mod p : y ∈ S}; then |A| ≥ p/20 + 1 with probability at least 1/2.
# Collisions: E[Σ_{x,y ∈ S} I[⟨x, a⟩ ≡ ⟨y, a⟩ (mod p)]] = |S| + |S|(|S| − 1)/p
By Markov, with probability at least 1/2 the number of collisions is at most 2|S|(1 + |S|/p). Hence
|A| ≥ |S|² / (# Collisions) ≥ |S|² / (2|S|(1 + |S|/p)) = (|S|/p) / (2(1 + |S|/p)) · p > p/10,
since |S|/p > 1/4.
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
We will need the following:
Theorem (Cauchy-Davenport): for A₁, …, A_k ⊆ ℤ_p,
|A₁ + ⋯ + A_k| ≥ min{|A₁| + ⋯ + |A_k| − k + 1, p}
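An exhaustive check of the k = 2 case for a small prime (empirical only, of course; the theorem needs a proof):

```python
from itertools import combinations

p = 13

def sumset(A, B):
    """A + B in Z_p."""
    return {(a + b) % p for a in A for b in B}

# Check |A + B| >= min(|A| + |B| - 1, p) over all sets of sizes 3 and 4.
ok = all(
    len(sumset(A, B)) >= min(len(A) + len(B) - 1, p)
    for A in combinations(range(p), 3)
    for B in combinations(range(p), 4)
)
print(ok)   # True
```

Primality of p is essential: in ℤ₁₂, taking A = B = {0, 4, 8} gives |A + B| = 3 < |A| + |B| − 1.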
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
A = {⟨y, a⟩ mod p : y ∈ S}. Assume |A| ≥ p/20 + 1.
Lemma: every y ∈ ℤⁿ is at distance ≤ 20s from L′.
Let A_k = A + ⋯ + A (k times). By Cauchy-Davenport,
|A₂₀| ≥ min{p, 20(p/20 + 1) − 19} ≥ p.
Therefore A₂₀ = ℤ_p.
Lattice Sparsifier Existence
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}
A = {⟨y, a⟩ mod p : y ∈ S}; A satisfies A₂₀ = ℤ_p.
Take y ∈ ℤⁿ. By the above, we can choose y₁, …, y₂₀ ∈ S such that
⟨y, a⟩ ≡ ⟨y₁, a⟩ + ⋯ + ⟨y₂₀, a⟩ (mod p).
By definition, y − (y₁ + ⋯ + y₂₀) ∈ L′, and hence
d_K(L′, y) ≤ ‖y₁ + ⋯ + y₂₀‖_K ≤ 20s  (triangle inequality).
Random Lattices
Khot 03+04: NP-hard to distinguish lattices with many short vectors from lattices with few; uses random sublattices to establish hardness of GapSVP.
Goldstein-Mayer 02: as p → ∞, random sublattices become "equidistributed" in the space of lattices (after renormalization), filling out the space proportionally to volume. Used to construct "extremal" lattices.
Derandomizing Construction
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}, λ = λ₁(K, ℤⁿ)
Issue 1: N can be huge.
Idea: build the sparsifier iteratively. Let 20^{k−1}λ ≤ s ≤ 20^k λ.
Construct a sequence L′_k ⊆ ⋯ ⊆ L′₁ ⊆ L′₀ = ℤⁿ, where L′ᵢ sparsifies L′_{i−1} at distance 20ⁱλ.
Then Nᵢ = |L′ᵢ ∩ 20^{i+1}λK| = 2^{O(n)}, since λ₁(K, L′ᵢ) ≥ 20ⁱλ.
Derandomizing Construction
S = ℤⁿ ∩ sK, N = |S|, 2N < p < 4N, p prime
L′ = {y ∈ ℤⁿ : Σᵢ yᵢaᵢ ≡ 0 (mod p)}. We may assume p = 2^{O(n)}.
Issue 2: how to choose a ∈ ℤⁿ_p deterministically?
Idea: existence depends only on N and p (not on n).
Find an (n−1)-dimensional projection P with |PS (mod p)| = |S|.
Once n ≤ 2, use exhaustive search (2^{O(n)} time).
Summary
1. Gave the first deterministic single-exponential-time algorithm for (1 + ε)-CVP under general norms.
2. Introduced the Lattice Sparsifier concept and showed its utility for (1 + ε)-CVP.
Future Directions
1. Fast ℓ₂ CVP using polynomial (or subexponential) space?
2. Faster algorithms for Integer Programming?
3. Further applications of Lattice Sparsifiers?
THANK YOU!
Norms and Convex Bodies
Convex body K ⊆ ℝⁿ containing the origin in its interior.
Gauge Function: ‖x‖_K ≝ inf{s ≥ 0 : x ∈ sK}
K = {x ∈ ℝⁿ : ‖x‖_K ≤ 1}, i.e. K is the unit ball of ‖·‖_K.
1. ‖x + y‖_K ≤ ‖x‖_K + ‖y‖_K  (triangle inequality)
2. ‖tx‖_K = t‖x‖_K, t ≥ 0  (homogeneity)
3. vol(K) ≤ 2^{O(n)} vol(K ∩ −K)  (near-symmetry)
[Figure: K and its symmetrization K ∩ −K]
Why General Norms?
3. Integer Programming: general norm SVP & CVP.
Polytope P ⊆ ℝⁿ; decide if P ∩ ℤⁿ ≠ ∅. [Len 83, Kan 87]
4. Frobenius Problem: "CVP" under a simplicial norm.
a₁, …, aₙ ∈ ℕ with gcd 1; find the largest y ∈ ℕ that is not a nonnegative integer combination of the aᵢs. [Kan 89]
Rule of thumb: if a 2^{O(n)} approximation suffices, use ℓ₂ techniques (losing an additional factor of n to n^{3/2}). In these cases one usually has control over the choice of lattice.
Why General Norms?
Need tailored methods when coarse approximations
substantially degrade performance or
yield unusable results.
Why General Norms?
GOAL: Develop more efficient methods to
solve lattice problems under arbitrary norms
to near optimality.
Why General Norms?
Polynomial Approximation: intended for software libraries, where it is worthwhile to spend time computing near-exact solutions. [BMT 06]
Integer Programming: can reduce from examining O(n²) subproblems per dimension [HK 10] to O(n) [D 12] using general norm methods.
Frobenius Problem: need a (1 + 1/n)-approximation for simplicial CVP to retrieve anything meaningful. [Kan 89]
Simple bound on 𝐺(𝑡𝐾, 𝐿)
K a symmetric convex body with λ₁(K, L) = 1.
Claim: (½K + x) ∩ (½K + y) = ∅ for distinct x, y ∈ L.
Assume not; then there is a point c in the intersection, and y − x ∈ L is non-zero with
‖y − x‖_K ≤ ‖y − c‖_K + ‖c − x‖_K  (triangle inequality)
          = ‖y − c‖_K + ‖x − c‖_K  (symmetry)
          < ½ + ½ = 1.
This contradicts λ₁(K, L) = 1.
[Figure: disjoint copies ½K + x and ½K + y]