
BosonSampling

Scott Aaronson (MIT) ICMP 2015, Santiago, Chile Based mostly on joint work with Alex Arkhipov

What This Talk Won’t Have

What It Will Have

A hierarchy of complexity classes, from smallest to largest:
- P: Efficiently Solvable Problems
- NP: Efficiently Checkable Problems
- PH: Constant Number of NP Quantifiers
- P^#P: Oracle for Counting Problems

The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine.

Shor’s Theorem:

QUANTUM SIMULATION has no efficient classical algorithm, unless FACTORING does also

So the ECT is false … what more evidence could anyone want?

Building a QC able to factor large numbers is hard! After 20 years, no fundamental obstacle has been found, but who knows?

Can’t we “meet the physicists halfway,” and show computational hardness for quantum systems closer to what they actually work with now?

FACTORING might have a fast classical algorithm! At any rate, it’s an extremely “special” problem. Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P = NP?

Our Starting Point

FERMIONS (in P):

$$\mathrm{Det}(A)=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}$$

BOSONS (#P-complete [Valiant]):

$$\mathrm{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)}$$

All I can say is, the bosons got the harder job.
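To make the contrast concrete, here is a minimal Python sketch (our own helper, not from the talk): the determinant falls to Gaussian elimination in polynomial time, while the obvious algorithm for the permanent sums over all n! permutations.

```python
import itertools
import numpy as np

def permanent(A):
    """Naive permanent: sum over all n! permutations, with no sign.
    Unlike the determinant, no polynomial-time algorithm is known."""
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

A = np.random.randn(4, 4)
print(np.linalg.det(A))  # determinant: O(n^3) via elimination
print(permanent(A))      # permanent: this brute force takes O(n * n!) time
```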

So if n-boson amplitudes correspond to permanents…

Can We Use Bosons to Calculate the Permanent?

That sounds way too good to be true—it would let us solve NP-complete problems and more using QC!

Explanation:

Amplitudes aren’t directly observable.

To get a reasonable estimate of Per(A), you might need to repeat the experiment exponentially many times

Basic Result:

Suppose there were a polynomial-time classical randomized algorithm that took as input a description of a noninteracting-boson experiment, and that output a sample from the correct final distribution over n-boson states.

Then P^#P = BPP^NP and the polynomial hierarchy collapses.

Motivation:

Compared to (say) Shor’s algorithm, we get “stronger” evidence that a “weaker” system can do interesting quantum computations

Related Work

- Valiant 2001, Terhal-DiVincenzo 2002, “folklore”: A QC built of noninteracting fermions can be efficiently simulated by a classical computer

- Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC

- Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(A) for nonnegative A

- Bremner-Jozsa-Shepherd 2011 (independent of us): Analogous hardness results for simulating “commuting Hamiltonian” quantum computers

The Quantum Optics Model

A rudimentary subset of quantum computing, involving only non-interacting bosons, and not based on qubits

Classical counterpart:

Galton’s Board, on display at many science museums

Using only pegs and noninteracting balls, you probably can’t build a universal computer—but you can do some interesting computations, like generating the binomial distribution!
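As a sanity check, a few lines of Python (with illustrative parameters) reproduce the Galton-board binomial:

```python
import numpy as np

# A minimal Galton board: each of 10,000 balls hits 12 pegs,
# bouncing left or right with probability 1/2 at each peg.
# The final bin counts trace out the binomial distribution.
rng = np.random.default_rng(0)
bins = rng.binomial(n=12, p=0.5, size=10_000)
counts = np.bincount(bins, minlength=13)
for k, c in enumerate(counts):
    print(f"bin {k:2d}: {'#' * (c // 50)}")
```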

The Quantum Version

Let’s replace the balls by identical single photons, and the pegs by beamsplitters.

Then we see strange things like the Hong-Ou-Mandel dip:

$$|1,1\rangle \;\longmapsto\; \frac{|2,0\rangle - |0,2\rangle}{\sqrt{2}}$$

The two photons are now correlated, even though they never interacted!

Explanation involves destructive interference of amplitudes: the final amplitude of non-collision is

$$\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} = 0$$
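Here is a small numerical check of the dip, computing the photonic amplitudes as permanents (the permanent correspondence is formalized on the next slide; the helper names are ours):

```python
import itertools
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

# 50/50 beamsplitter (one common convention: a real Hadamard matrix)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Amplitude for |1,1> -> |1,1> is Per(U) (each row and column used once)
print(permanent(U))                  # ~0: the Hong-Ou-Mandel dip

# Amplitude for |1,1> -> |2,0> is Per(U_{S,T}) / sqrt(2!), with the
# first column of U repeated twice
U_20 = U[:, [0, 0]]
print(permanent(U_20) / np.sqrt(2))  # ~0.707 = 1/sqrt(2)
```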

Getting Formal

The basis states have the form |S⟩ = |s_1,…,s_m⟩, where s_i is the number of photons in the i-th “mode.” We’ll never create or destroy photons, so s_1+…+s_m = n is constant.

For us, m = n^{O(1)}. Initial state: |I⟩ = |1,…,1,0,…,0⟩

You get to apply any m×m unitary matrix U—say, using a collection of 2-mode beamsplitters.

There are

$$M := \binom{m+n-1}{n}$$

ways to distribute n identical photons into m modes. U induces an M×M unitary φ(U) on the n-photon states, as follows:

$$\langle S|\varphi(U)|T\rangle = \frac{\operatorname{Per}(U_{S,T})}{\sqrt{s_1!\cdots s_m!\; t_1!\cdots t_m!}}$$

Here U_{S,T} is an n×n submatrix of U (possibly with repeated rows and columns), obtained by taking s_i copies of the i-th row of U and t_j copies of the j-th column, for all i, j.
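Putting the formula into code: a direct (and exponential-time) Python sketch of ⟨S|φ(U)|T⟩, with our own helper names:

```python
import itertools
import numpy as np
from math import factorial, sqrt

def permanent(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

def amplitude(U, S, T):
    """<S|phi(U)|T> for occupation lists S, T with sum(S) == sum(T) == n."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]  # s_i copies of row i
    cols = [j for j, t in enumerate(T) for _ in range(t)]  # t_j copies of col j
    norm = sqrt(np.prod([factorial(s) for s in S]) *
                np.prod([factorial(t) for t in T]))
    return permanent(U[np.ix_(rows, cols)]) / norm

# Hong-Ou-Mandel again, in this language:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude(H, (1, 1), (1, 1)))  # 0
print(amplitude(H, (2, 0), (1, 1)))  # 1/sqrt(2)
```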

Beautiful Alternate Perspective

The “state” of our computer, at any time, is a degree-n polynomial over the variables x = (x_1,…,x_m) (n ≪ m)

Initial state:

p(x) := x_1⋯x_n. We can apply any m×m unitary transformation U to x, to obtain a new degree-n polynomial

$$p'(x) = \sum_{S=(s_1,\ldots,s_m)} \alpha_S\, x_1^{s_1}\cdots x_m^{s_m}$$

Then on “measuring,” we see the monomial $x_1^{s_1}\cdots x_m^{s_m}$ with probability $|\alpha_S|^2\, s_1!\cdots s_m!$
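For the Hong-Ou-Mandel case (m = n = 2), this picture can be checked symbolically; a small sympy sketch:

```python
from sympy import symbols, expand, sqrt

x1, x2 = symbols('x1 x2')

# Initial state p(x) = x1*x2; apply the 50/50 beamsplitter to the variables
y1 = (x1 + x2) / sqrt(2)
y2 = (x1 - x2) / sqrt(2)
p_prime = expand(y1 * y2)
print(p_prime)  # x1**2/2 - x2**2/2: the x1*x2 (non-collision) term vanished

# Probability of the monomial x1**2: |1/2|^2 * 2! * 0! = 1/2
alpha = p_prime.coeff(x1, 2)
print(abs(alpha)**2 * 2)  # 1/2
```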

OK, so why is it hard to sample the distribution over photon numbers classically?

Given any matrix A ∈ C^{n×n}, we can construct an m×m unitary U (where m = 2n) containing a scaled copy εA as a block:

$$U = \begin{pmatrix} \varepsilon A & B \\ C & D \end{pmatrix}$$

Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply U, and measure.

Then the probability of observing |I⟩ again is

$$p := \left|\langle I|\varphi(U)|I\rangle\right|^2 = \varepsilon^{2n}\left|\operatorname{Per}(A)\right|^2$$
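A minimal sketch of one standard way to realize such an embedding (a Halmos-style unitary dilation; this particular block formula is a textbook construction, not necessarily the one used in the talk):

```python
import numpy as np
from scipy.linalg import sqrtm

def embed_in_unitary(A, eps=None):
    """Return a 2n x 2n unitary whose top-left n x n block is eps*A."""
    n = A.shape[0]
    if eps is None:
        eps = 0.9 / np.linalg.norm(A, 2)   # force operator norm below 1
    M = eps * A
    I = np.eye(n)
    # Dilation: [[M, (I-MM*)^(1/2)], [(I-M*M)^(1/2), -M*]]
    U = np.block([[M, sqrtm(I - M @ M.conj().T)],
                  [sqrtm(I - M.conj().T @ M), -M.conj().T]])
    return U, eps

A = np.random.randn(3, 3)
U, eps = embed_in_unitary(A)
print(np.allclose(U @ U.conj().T, np.eye(6)))   # True: U is unitary
```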

Claim 1: p is #P-complete to estimate (up to a constant factor)

Idea:

Valiant proved that the PERMANENT is #P-complete. Can use a classical reduction to go from a multiplicative approximation of |Per(A)|² to Per(A) itself.

Claim 2: Suppose we had a fast classical algorithm for BosonSampling. Then we could estimate

$$p := \left|\langle I|\varphi(U)|I\rangle\right|^2$$

in BPP^NP.

Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate

$$\Pr_r\left[M(r)\ \text{outputs}\ I\right]$$

Conclusion: Suppose we had a fast classical algorithm for BosonSampling. Then P^#P = BPP^NP.

The Elephant in the Room

The previous result hinged on the difficulty of estimating a single, exponentially-small probability p—but what about noise and error?

The “right” question: can a classical computer efficiently sample a distribution with 1/n^{O(1)} variation distance from the boson distribution?

Our Main Result:

Suppose it can. Then there’s a BPP^NP algorithm to estimate |Per(A)|², with high probability over a Gaussian matrix

$$A \sim \mathcal{N}(0,1)_{\mathbb{C}}^{n\times n}$$

Our Main Conjecture

Estimating |Per(A)|², with high probability over i.i.d. Gaussian A, is a #P-hard problem.

If this conjecture holds, then even a noisy n-photon experiment could falsify the Extended Church-Turing Thesis, assuming P^#P ≠ BPP^NP!

Much of our work is devoted to giving evidence for this conjecture. What makes the Gaussian ensemble special?

Theorem:

It arises by considering sufficiently small submatrices of Haar-random unitary matrices.

“Easier” problem: Just show that, if A is an i.i.d. Gaussian matrix, then |Per(A)|² is approximately a lognormal random variable (as numerics suggest), and not so concentrated around 0 as to preclude its being hard to estimate. Can prove this for the determinant in place of the permanent.

For the permanent, the best known anti-concentration results [Tao-Vu] are not yet strong enough for us. We can calculate E[|Per(A)|²] = n! and E[|Per(A)|⁴] = (n+1)(n!)², but these moments alone are not strong enough to imply an anti-concentration result.
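Those two moments are easy to check numerically; a Monte Carlo sketch (sample sizes are illustrative, and the estimates carry sampling noise):

```python
import itertools
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

rng = np.random.default_rng(1)
n, trials = 4, 10_000
# i.i.d. complex Gaussian entries: mean 0, total variance 1
samples = np.array([
    abs(permanent((rng.standard_normal((n, n)) +
                   1j * rng.standard_normal((n, n))) / np.sqrt(2)))**2
    for _ in range(trials)
])
print(samples.mean())       # should be near n! = 24
print((samples**2).mean())  # should be near (n+1)*(n!)^2 = 2880
```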

BosonSampling Experiments

In 2012, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3x3 permanents

# of experiments > # of photons!

Challenges for Scaling Up:

- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence

Goal (in our view):

Scale to 10-30 photons. Don’t want to scale much beyond that—both because (1) you probably can’t without fault-tolerance, and (2) a classical computer probably couldn’t even verify the results!

Scattershot BosonSampling

Exciting recent idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs

The idea:

Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance!

Complexity analysis turns out to go through essentially without change
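A toy sketch of the heralding step (the counts 100 and 10 follow the slide’s example; everything else here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scattershot run: 100 heralded sources, each firing with probability 0.1,
# so ~10 fire on average. We detect which sources succeeded; those modes
# define the BosonSampling instance for this run.
n_sources, p_fire = 100, 0.1
fired = rng.random(n_sources) < p_fire
input_modes = np.flatnonzero(fired)
print(f"{fired.sum()} photons heralded, in modes {input_modes}")
# The instance is the submatrix of the interferometer's unitary picked out
# by these (known) input modes and the observed output modes.
```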

Using Quantum Optics to Prove that the Permanent is #P-Complete

[A., Proc. Roy. Soc. 2011]

Valiant showed that the permanent is #P-complete—but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes

Can BosonSampling Solve Non-Sampling Problems?

(Could it even have cryptographic applications?)

Idea: What if we could “smuggle” a matrix A with a huge permanent, as a submatrix of a larger unitary matrix U? Finding A could be hard classically, but shooting photons into an interferometer network would easily reveal it

Pessimistic Conjecture:

If U is unitary and |Per(U)| ≥ 1/n^{O(1)}, then U is “close” to a permuted diagonal matrix—so it “sticks out like a sore thumb”

A.-Nguyen, Israel J. Math 2014:

Proof of a weaker version of the pessimistic conjecture, using inverse Littlewood-Offord theory

BosonSampling with Lost Photons

Suppose we have n+k photons in the initial state, but k are randomly lost. Then the probability of each output has the form

$$\frac{1}{\binom{n+k}{k}}\sum_{\substack{S\subseteq\{1,\ldots,n+k\} \\ |S|=n}}\left|\operatorname{Per}(A_S)\right|^2, \qquad A \sim \mathcal{N}(0,1)_{\mathbb{C}}^{n\times(n+k)}$$

What can we say about these quantities? Are they also (plausibly) #P-hard to approximate?
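For small n and k these quantities can be computed directly; an exponential-time Python sketch (helper names are ours, and we read A_S as the n×n submatrix on column subset S):

```python
import itertools
import numpy as np
from math import comb

def permanent(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

def lost_photon_quantity(A, n, k):
    """Average of |Per(A_S)|^2 over all n-column subsets S of an n x (n+k) A."""
    total = sum(
        abs(permanent(A[:, list(S)]))**2
        for S in itertools.combinations(range(n + k), n)
    )
    return total / comb(n + k, n)

rng = np.random.default_rng(3)
n, k = 3, 2
A = (rng.standard_normal((n, n + k)) +
     1j * rng.standard_normal((n, n + k))) / np.sqrt(2)
print(lost_photon_quantity(A, n, k))
```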

Work in progress with Daniel Brod

Summary

Intuition suggests that not merely quantum computers, but many natural quantum systems, should be intractable to simulate on classical computers, because of the exponentiality of the wavefunction. BosonSampling provides a clear example of how we can formalize this intuition—or at least, base it on “standard” conjectures in theoretical computer science.

It’s also brought QC theory into closer contact with experiment. And it’s highlighted the remarkable connection between bosons and the matrix permanent. Future progress may depend on solving hard open problems about the permanent.

Bonus: Rise and Fall of “Complexity”

Sean Carroll’s example: cream mixing into coffee. But how to quantify?

One simpleminded measure: apparent complexity. The Kolmogorov complexity (estimated, say, by GZIP file size) of a coarse-grained (de-noised) description of our thermodynamic mixing process. Does it rise and then fall?

The Coffee Automaton

A., Carroll, Mohan, Ouellette, Werness 2015:

A probabilistic n  n reversible system that starts half “coffee” and half “cream.” At each time step, we randomly “shear” half the coffee cup horizontally or vertically (assuming a toroidal cup) We prove that the apparent complexity of this image has a rising-falling pattern, with a maximum of at least ~n 1/6 500 450 400 350 300 250 200 150 100 50 -100 0 100 300 500

Time Steps

700 900
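A rough toy version of the experiment (the shearing rule below is a simplified stand-in for the paper’s, and gzip size proxies for Kolmogorov complexity as the slide suggests):

```python
import gzip
import numpy as np

rng = np.random.default_rng(4)
n = 64
grid = np.zeros((n, n), dtype=np.uint8)
grid[:, n // 2:] = 1            # left half "coffee", right half "cream"

def apparent_complexity(g, block=4):
    """Gzip size of a coarse-grained (block-averaged, thresholded) grid."""
    coarse = g.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    return len(gzip.compress((coarse > 0.5).astype(np.uint8).tobytes()))

for t in range(2001):
    if t % 400 == 0:
        print(t, apparent_complexity(grid))   # typically rises, then falls
    # Shear one random row or column by a random offset (reversible, toroidal)
    if rng.random() < 0.5:
        r = rng.integers(n)
        grid[r] = np.roll(grid[r], rng.integers(1, n))
    else:
        c = rng.integers(n)
        grid[:, c] = np.roll(grid[:, c], rng.integers(1, n))
```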