Randomness and Pseudorandomness

Omer Reingold, Microsoft Research and the Weizmann Institute
Randomness and Pseudorandomness



• When randomness is useful
• When randomness can be reduced or eliminated: derandomization
• Basic tool: pseudorandomness. An object is pseudorandom if it “looks random” (is indistinguishable from uniform), though it is not.
• Expander graphs
Randomness In Computation (1)
Can’t live without you:
• Distributed computing (breaking symmetry)
• Cryptography: secrets, semantic security, …
• Sampling, simulations, …
Randomness In Computation (2)
You change my world:
• Communication complexity (e.g., testing equality)
• Routing (on the cube [Valiant]): drastically reduces congestion
Randomness In Computation (3)
Do I really need you?
• In algorithms: a useful design tool, but often one can derandomize (e.g., PRIMES is in P). Is that always the case?
• BPP = P means that every randomized algorithm can be derandomized with only a polynomial increase in time.
• RL = L means that every randomized algorithm can be derandomized with only a constant-factor increase in memory.
In Distributed Computing
• Dining Philosophers: breaking symmetry (see the coin-flip sketch below).
• Byzantine Agreement (“attack now” vs. “don’t attack”):

                            Deterministic   Randomized
  Synchronous, t failures   t+1 rounds      O(1) rounds
  Asynchronous              impossible      possible
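
A minimal sketch of the symmetry-breaking idea behind these gaps, assuming a toy two-process setting (the function name and setup are illustrative, not a protocol from the talk): two identical deterministic processes can never elect a leader, while private coin flips succeed within two expected rounds.

```python
import random

def elect_leader():
    """Two identical processes flip private coins each round; the
    first round in which the coins differ elects a leader. The
    expected number of rounds is 2."""
    rounds = 0
    while True:
        rounds += 1
        a, b = random.randint(0, 1), random.randint(0, 1)
        if a != b:                      # symmetry broken
            return ("A" if a == 1 else "B"), rounds

print(elect_leader())  # e.g. ('B', 1)
```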
Randomness Saves Communication
Is a remote copy identical to the original file?
• Deterministic: need to send the entire file!
• Randomness in the sky (shared randomness): O(1) bits (or log(1/error) bits); see the sketch below.
• Private randomness: a logarithmic number of bits (via derandomization).
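
A sketch of the shared-randomness protocol, assuming a polynomial fingerprint over a fixed prime field (the prime P and the function names are illustrative choices, not the talk’s specific protocol): both parties evaluate their file at the same random point and exchange only the O(log P)-bit results.

```python
import random

P = (1 << 61) - 1  # a fixed Mersenne prime; error < len(file)/P

def fingerprint(data: bytes, r: int) -> int:
    """Evaluate the polynomial whose coefficients are the bytes of
    `data` at the shared random point r, modulo the prime P."""
    acc = 0
    for byte in data:
        acc = (acc * r + byte) % P
    return acc

def probably_equal(original: bytes, copy: bytes) -> bool:
    """Compare fingerprints instead of whole files. Distinct
    same-length files collide with probability < len/P."""
    r = random.randrange(1, P)  # the randomness "in the sky"
    return fingerprint(original, r) == fingerprint(copy, r)

print(probably_equal(b"original file", b"original file"))  # True
print(probably_equal(b"original file", b"original f1le"))  # False (w.h.p.)
```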
In Cryptography
• Private keys: no randomness means no secrets and no identities.
• Encryption: two encryptions of the same message with the same key need to be different.
• Randomized (interactive) proofs give rise to wonderful new notions: zero-knowledge, PCPs, …
Random Walks and Markov Chains
When in doubt, flip a coin:
• Explore a graph with minimal memory
• PageRank: the stationary distribution of a Markov chain (estimated below)
• Sampling vs. approximate counting
• Estimating the size of the Web
• Simulations of physical systems
• …
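
A toy illustration of the PageRank bullet, assuming the standard damping value 0.85 and a hypothetical three-page `web` graph: the rank of a page is estimated by the fraction of time a long random walk (with restarts) spends there.

```python
import random
from collections import Counter

def pagerank_by_walk(graph, steps=100_000, damping=0.85):
    """Estimate PageRank as the empirical distribution of a long
    random walk on the link graph, restarting with prob. 1-damping."""
    nodes = list(graph)
    v = random.choice(nodes)
    visits = Counter()
    for _ in range(steps):
        visits[v] += 1
        if random.random() < damping and graph[v]:
            v = random.choice(graph[v])   # follow a random out-link
        else:
            v = random.choice(nodes)      # teleport to a random page
    return {u: count / steps for u, count in visits.items()}

web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
print(pagerank_by_walk(web))  # "a" gets the largest share
```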
Shake Your Input
• Communication network: the n-dimensional cube. Every deterministic routing scheme will incur exponentially busy links (in the worst case).
• Valiant: to send a message from x to y, select a node z at random and route x → z → y. Now: O(1) expected load on every edge.
• Another example: randomized quicksort (see the sketch below).
• Smoothed analysis: small perturbations, big impact.
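
The randomized-quicksort example above, as a minimal sketch: because the pivot is chosen at random, the expected O(n log n) running time holds for every input, so no adversary can plant a bad case in advance.

```python
import random

def quicksort(xs):
    """Randomized quicksort: a random pivot 'shakes' the input."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```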
In Private Data Analysis
• Hide the presence/absence of any individual.
• How many people in the database have the BC1 gene? Add random noise to the true answer, distributed as Lap(Δ/ε), where Δ is the query’s sensitivity and ε the privacy parameter (sketched below).
• More questions? More privacy? Need more noise.
[Figure: two Laplace densities, centered at 0 and at Δ; the ratio between them is bounded at every point.]
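
A sketch of the noise-addition step, assuming a counting query (so the sensitivity is Δ = 1); `laplace_count` is a hypothetical helper, not a library function.

```python
import random

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Laplace mechanism: add noise drawn from Lap(sensitivity/epsilon).
    A counting query changes by at most 1 when one person joins or
    leaves the database, so its sensitivity is 1."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

print(laplace_count(42, epsilon=0.1))  # noisier for smaller epsilon
```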
Randomness and Pseudorandomness
• When randomness is useful
• When randomness can be reduced or eliminated: derandomization
• Basic tool: pseudorandomness. An object is pseudorandom if it “looks random” (is indistinguishable from uniform), though it is not.
• Expander graphs
Cryptography:
Good Pseudorandom Generators are Crucial
With them, we have the one-time pad (and more); without them, keys are bad and algorithms are worthless (in theory and in practice). A toy sketch follows below.

  short key K0: 110
  derived key K: 01100100
  plaintext data: 00001111
  encryption (E): ciphertext = plaintext ⊕ K = 01101011
  decryption (D): plaintext = ciphertext ⊕ K = 00001111
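
A toy version of the picture above, assuming a SHA-256 counter-mode stand-in for the pseudorandom generator (illustrative only; not a vetted cryptographic PRG): a short key is stretched into a derived key as long as the plaintext, then used exactly like a one-time pad.

```python
import hashlib

def derive_key(short_key: bytes, n: int) -> bytes:
    """Stretch a short key into n pseudorandom bytes by hashing
    (key, counter) blocks. A stand-in for a real PRG."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(short_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, short_key: bytes) -> bytes:
    """ciphertext = plaintext XOR K; the same call decrypts."""
    k = derive_key(short_key, len(data))
    return bytes(a ^ b for a, b in zip(data, k))

ct = xor_crypt(b"attack at dawn", b"110")
print(xor_crypt(ct, b"110"))  # b'attack at dawn'
```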
Data Structures & Hash Functions
Linear probing: store a key x at slot F(x) of a table; on collision, walk to the next free slot (sketched below).
• If F is random, then insertion time and query time are O(1) (in expectation). But where do you store a random function?!? Derandomize!
• Heuristic: use SHA1, MD4, …
• Recently (2007): 5-wise independent functions are sufficient*.
• Similar considerations all over: Bloom filters, cuckoo hashing, bit vectors, …
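
A minimal linear-probing table matching the description above; Python’s built-in `hash` stands in for the function F, and the sketch omits resizing and deletion.

```python
class LinearProbingTable:
    """Insert at slot F(key); on collision, walk to the next slot."""
    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # no resizing in this sketch

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def insert(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def lookup(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot else None

t = LinearProbingTable()
t.insert("Bob", 1); t.insert("Alice", 2)
print(t.lookup("Bob"))  # 1
```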
Weak Sources & Randomness Extractors
• Available random bits are biased and correlated.
• Von Neumann sources: b1 b2 … bi … are i.i.d. 0/1 variables with bi = 1 with some probability p < 1. Translate pairs: 01 → 1, 10 → 0, discarding 00 and 11 (see the sketch below).
• Randomness extractors produce randomness from general weak sources, and have many other applications.
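
The von Neumann translation above, as code: the pairs 01 and 10 are equally likely (each has probability p(1-p)), so the output is unbiased whatever the bias p is.

```python
import random

def von_neumann_extract(bits):
    """Read biased i.i.d. bits in pairs: 01 -> 1, 10 -> 0,
    and discard the pairs 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(1 if (a, b) == (0, 1) else 0)
    return out

biased = [1 if random.random() < 0.9 else 0 for _ in range(1000)]
print(von_neumann_extract(biased)[:10])  # unbiased, but few, bits
```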
Algorithms:
Can Randomness Save Time or Memory?
• Conjecture: no* (*moderate overheads may still apply).
• Examples of derandomization: primality testing in polynomial time; graph connectivity in logarithmic memory.
• Holdouts: identity testing (see the sketch below), approximation algorithms, …
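
A sketch of the identity-testing holdout in its simplest (univariate, black-box) form: evaluate both polynomials at random points; by the Schwartz-Zippel lemma, distinct polynomials of degree at most d agree at a random point with probability at most d/field_size. No comparably efficient deterministic algorithm is known.

```python
import random

def probably_identical(f, g, degree_bound, trials=20, field_size=2**61 - 1):
    """Randomized polynomial identity testing. Per trial, the error
    probability is at most degree_bound / field_size."""
    for _ in range(trials):
        x = random.randrange(field_size)
        if f(x) % field_size != g(x) % field_size:
            return False  # a witness: definitely different
    return True           # identical with high probability

# (x + 1)^2 and x^2 + 2x + 1 are the same polynomial:
print(probably_identical(lambda x: (x + 1) ** 2,
                         lambda x: x * x + 2 * x + 1,
                         degree_bound=2))  # True
```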
(Bipartite) Expander Graphs
A bipartite graph with N vertices on each side and left-degree D, such that every set S with |S| ≤ K satisfies |Γ(S)| ≥ A·|S| for some A > 1.
Important: every (not too large) set expands.
(Bipartite) Expander Graphs
Main goal: minimize D (i.e., constant D).
• Degree-3 random graphs are expanders! [Pin73]
(Bipartite) Expander Graphs
Also: maximize A.
• Trivial upper bound: A ≤ D; in fact, even A ≲ D − 1.
• Random graphs achieve A ≈ D − 1.
Applications of Expanders
These “innocent” looking objects are intimately related to various fundamental problems:
• Network design (fault tolerance)
• Sorting networks
• Complexity and proof theory
• Derandomization
• Error-correcting codes
• Cryptography
• Ramsey theory
• And more ...
Non-blocking Network with On-line Path Selection [ALM]
N inputs, N outputs; depth O(log N), size O(N log N), bounded degree.
Allows connections between input nodes and output nodes using vertex-disjoint paths.
Non-blocking Network with On-line Path Selection [ALM]
Every request for connection (or disconnection) is satisfied in O(log N) bit steps:
• On-line; handles many requests in parallel.
The Network
N inputs, N outputs; the network is built from slightly unbalanced “lossless” expanders.
Slightly Unbalanced, “Lossless” Expanders
A bipartite graph with N vertices on the left, M = αN on the right (0 < α ≤ 1 an arbitrary constant), and left-degree D, such that every set S with |S| ≤ K satisfies |Γ(S)| ≥ 0.9·D·|S|.
Here D is constant and K = Ω(M/D) = Ω(αN/D).
[CRVW 02]: such expanders exist (with D = polylog(1/α)).
Property 1: A Very Strong Unique Neighbor Property
For S with |S| ≤ K, |Γ(S)| ≥ 0.9·D·|S| implies that S has ≥ 0.8·D·|S| unique neighbors!
(Count the D·|S| edges leaving S: each non-unique neighbor absorbs at least two of them, so at most 0.1·D·|S| neighbors are non-unique.)
Using Unique Neighbors for Distributed Routing
Task: match S to its neighbors (|S| ≤ K).
Step I: match S to its unique neighbors.
Continue recursively with the set S′ of still-unmatched vertices.
Reminder: The Network
Adding new paths: think of vertices used
by previous paths as faulty.
Property 2: Incredibly Fault Tolerant
For S with |S| ≤ K, |Γ(S)| ≥ 0.9·D·|S|: the graph remains a lossless expander even if an adversary removes up to 0.7·D of the edges at each vertex.
Simple Expander Codes [G63, Z71, ZP76, T81, SS96]
N variables, M = αN parity checks; each check requires the XOR of its neighboring variables to be 0.
• Linear code; rate 1 − M/N = 1 − α.
• Minimum distance ≥ K.
• Relative distance ≥ K/N = Ω(α/D) = α/polylog(1/α).
• For small α this beats the Zyablov bound and is quite close to the Gilbert-Varshamov bound of α/log(1/α).
Simple Decoding Algorithm in Linear Time (& log n parallel phases) [SS 96]
Algorithm: in each phase, flip every variable that “sees” a majority of 1’s (i.e., of unsatisfied constraints); a sketch follows below.
Analysis: let B be the set of corrupted variables, with |B| ≤ K/2. Losslessness gives |Γ(B)| > 0.9·D·|B|, so fewer than 0.2·D·|B| of the constraints touching B are satisfied. It follows that the set Flip of flipped variables satisfies |Flip \ B| ≤ |B|/4 and |B \ Flip| ≤ |B|/4, hence |B_new| ≤ |B|/2: the error set at least halves in each phase.
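
A sketch of the flipping phase; the dictionary-based check graph and the tiny three-variable example are illustrative stand-ins (the halving guarantee needs a lossless expander as the check graph, which the toy example is not).

```python
def flip_decode(word, checks, var_to_checks, max_phases=50):
    """In each phase, flip every variable for which a majority of
    its parity checks are unsatisfied; stop when all checks hold."""
    word = list(word)
    for _ in range(max_phases):
        unsat = {c for c, vs in checks.items()
                 if sum(word[v] for v in vs) % 2 == 1}
        if not unsat:
            break
        to_flip = [v for v, cs in var_to_checks.items()
                   if sum(c in unsat for c in cs) > len(cs) / 2]
        if not to_flip:
            break
        for v in to_flip:
            word[v] ^= 1
    return word

checks = {"c0": [0, 1], "c1": [1, 2], "c2": [0, 2]}
var_to_checks = {0: ["c0", "c2"], 1: ["c0", "c1"], 2: ["c1", "c2"]}
print(flip_decode([0, 0, 1], checks, var_to_checks))  # [0, 0, 0]
```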
Random Walk on Expanders [AKS 87]
Take a random walk x0, x1, x2, … on an expander:
• xi converges to uniform fast (for an arbitrary start x0).
• For a random x0, the sequence x0, x1, x2, … has interesting “random-like” properties.
Thanks