BosonSampling
Scott Aaronson (MIT)
Based on joint work with Alex Arkhipov
November 20, 2014
arXiv:1011.3245
The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine
Shor’s Theorem: QUANTUM SIMULATION has no efficient
classical algorithm, unless FACTORING does also
So the ECT is false … what more
evidence could anyone want?
Building a QC able to factor large numbers is damn
hard! After 20 years, no fundamental obstacle has
been found, but who knows?
Can’t we “meet the physicists halfway,” and show
computational hardness for quantum systems closer to
what they actually work with now?
FACTORING might have a fast classical algorithm! At any rate, it's an extremely “special” problem
Wouldn't it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving
only non-interacting photons
Classical counterpart:
Galton’s Board
Replacing the balls by photons leads to
famously counterintuitive phenomena,
like the Hong-Ou-Mandel dip
In general, we consider a network of
beamsplitters, with n input “modes” (locations)
and m>>n output modes
n identical photons enter, one per input mode
Assume for simplicity they all leave in different modes; there are $\binom{m}{n}$ possibilities
The beamsplitter network defines a column-orthonormal matrix $A \in \mathbb{C}^{m \times n}$, such that
$$\Pr[\text{outcome } S] = |\mathrm{Per}(A_S)|^2$$
where
$$\mathrm{Per}(X) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}$$
is the matrix permanent, and $A_S$ is the $n \times n$ submatrix of $A$ corresponding to $S$
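As a sanity check, the permanent in the formula above can be computed directly from its definition. This brute-force helper is an illustration (not from the talk) and is only feasible for small n, since it sums n! terms:

```python
import itertools

def permanent(X):
    """Compute Per(X) = sum over permutations sigma of prod_i X[i][sigma(i)].

    Brute force over all n! permutations, so only practical for small n.
    """
    n = len(X)
    total = 0
    for sigma in itertools.permutations(range(n)):
        term = 1
        for i in range(n):
            term *= X[i][sigma[i]]
        total += term
    return total

# The permanent of the identity is 1; of the all-ones n x n matrix, n!.
print(permanent([[1, 0], [0, 1]]))                      # 1
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))     # 6
```

Unlike the determinant, there are no sign flips, which is exactly why the cancellation structure of complex permanents (discussed below) is so delicate.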
Example
For the Hong-Ou-Mandel experiment,
$$\Pr[\text{output } (1,1)] = \left|\mathrm{Per}\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{pmatrix}\right|^2 = \left|\frac{1}{2} - \frac{1}{2}\right|^2 = 0$$
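The dip can be checked numerically in a couple of lines (the 2x2 permanent is just $ad + bc$):

```python
import math

# 50/50 beamsplitter matrix from the Hong-Ou-Mandel example
s = 1 / math.sqrt(2)
U = [[s, s],
     [s, -s]]

# Per of a 2x2 matrix [[a, b], [c, d]] is a*d + b*c
per = U[0][0] * U[1][1] + U[0][1] * U[1][0]
prob_coincidence = abs(per) ** 2
print(prob_coincidence)  # 0.0 -- the photons always bunch, never exit separately
```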
In general, an nn complex permanent is a sum of n!
terms, almost all of which cancel
How hard is it to estimate the “tiny residue” left over?
Answer (Valiant 1979): #P-complete (meaning: as hard
as any combinatorial counting problem)
Contrast with nonnegative permanents!
So, Can We Use Quantum Optics to
Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: If X is sub-unitary, then |Per(X)|^2 will usually be exponentially small. So to get a reasonable estimate of |Per(X)|^2 for a given X, we'd generally need to repeat the optical experiment exponentially many times
Better idea: Given $A \in \mathbb{C}^{m \times n}$ as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from, i.e., the one defined by Pr[S] = |Per(A_S)|^2
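For tiny instances, D_A can be sampled by brute force: enumerate every collision-free outcome S, weight it by |Per(A_S)|^2, and draw from that distribution. A sketch (not from the talk; a random column-orthonormal A is obtained via QR, and only collision-free outcomes are kept, matching the simplifying assumption above):

```python
import itertools
import random

import numpy as np

def permanent(X):
    """Per(X) by brute force (n! terms)."""
    n = X.shape[0]
    return sum(
        np.prod([X[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

def boson_sample(A, rng=random):
    """Sample one collision-free outcome S with Pr[S] proportional to |Per(A_S)|^2.

    A is m x n column-orthonormal; rows are indexed by output modes.
    Collision-free outcomes don't carry all the mass in general, so we
    renormalize over this restricted set (the simplifying assumption).
    """
    m, n = A.shape
    outcomes = list(itertools.combinations(range(m), n))
    weights = [abs(permanent(A[list(S), :])) ** 2 for S in outcomes]
    return rng.choices(outcomes, weights=weights, k=1)[0]

# Random m x n column-orthonormal A from the QR decomposition of a Gaussian matrix
m, n = 5, 2
rng_np = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng_np.normal(size=(m, m)) + 1j * rng_np.normal(size=(m, m)))
A = Q[:, :n]
print(boson_sample(A))  # a 2-subset of the 5 output modes
```

The point of the hardness results below is that nothing like this scales: the permanents in the weights are the whole difficulty.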
Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP
Upshot: Compared to (say) Shor's factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard
Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it's possible to estimate Per(X), with high probability over a Gaussian random matrix $X \sim \mathcal{N}(0,1)_{\mathbb{C}}^{n \times n}$
We conjecture that the above problem is already #P-complete. If it is, then even a fast classical algorithm for approximate BosonSampling would have the consequence that P^#P = BPP^NP
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”:
A QC built of noninteracting fermions can be efficiently
simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons
plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized
algorithm to approximate Per(X) for nonnegative X
Gurvits 2002: $O(n^2/\varepsilon^2)$ classical randomized algorithm to approximate an n-photon amplitude to $\pm\varepsilon$ additive error
(also, to compute the k-mode marginal distribution in $n^{O(k)}$ time)
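Gurvits's estimator admits a short sketch: for x drawn uniformly from $\{-1,+1\}^n$, the quantity $(\prod_j x_j)\prod_i (Ax)_i$ has expectation exactly Per(A), and for sub-unitary A its variance is bounded, so averaging $O(1/\varepsilon^2)$ samples gives $\pm\varepsilon$ additive error. The code below is an illustration of the standard Glynn/Gurvits identity, not code from the talk:

```python
import numpy as np

def gurvits_estimate(A, num_samples, rng=None):
    """Randomized additive-error estimate of Per(A).

    Uses the Glynn/Gurvits identity Per(A) = E_x[ prod_j x_j * prod_i (Ax)_i ]
    for x uniform over {-1, +1}^n.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=A.shape[0])
        total += np.prod(x) * np.prod(A @ x)
    return total / num_samples

def permanent_glynn(A):
    """Averaging over ALL 2^n sign vectors recovers Per(A) exactly."""
    n = A.shape[0]
    total = 0.0
    for bits in range(2 ** n):
        x = np.array([1.0 if (bits >> i) & 1 else -1.0 for i in range(n)])
        total += np.prod(x) * np.prod(A @ x)
    return total / 2 ** n

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(permanent_glynn(A))  # Per = 1*4 + 2*3 = 10
```

Note this gives additive error relative to the amplitude scale, which for sub-unitary matrices is exponentially smaller than what the #P-hardness results require.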
OK, so why is it hard to sample the
distribution over photon numbers classically?
Given any matrix XCnn, we can construct an mm
unitary U (where m2n) as follows:
 X Y 

U  
 Z W
Suppose we start with |I=|1,…,1,0,…,0 (one photon
in each of the first n modes), apply U, and measure.
Then the probability of observing |I again is
p 
2n
Per  X 
2
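The block embedding can be built explicitly: scale X so it is sub-unitary, then pad with matrix square roots (a standard unitary dilation). The helper names and the scaling choice $\varepsilon = 1/\|X\|$ below are illustrative, not from the talk:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian positive-semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def embed_in_unitary(X):
    """Return (U, eps): a 2n x 2n unitary whose top-left block is eps*X.

    Standard dilation: U = [[B, sqrt(I - B B*)], [sqrt(I - B* B), -B*]],
    which is unitary whenever ||B|| <= 1; here B = eps*X with eps = 1/||X||.
    """
    n = X.shape[0]
    eps = 1.0 / np.linalg.norm(X, 2)  # spectral norm makes eps*X sub-unitary
    B = eps * X
    I = np.eye(n)
    U = np.block([
        [B, psd_sqrt(I - B @ B.conj().T)],
        [psd_sqrt(I - B.conj().T @ B), -B.conj().T],
    ])
    return U, eps

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, eps = embed_in_unitary(X)
print(np.allclose(U @ U.conj().T, np.eye(6)))  # True: U is unitary
```

The off-diagonal blocks cancel in $UU^\dagger$ because $B(I - B^\dagger B)^{1/2} = (I - BB^\dagger)^{1/2}B$, as the SVD of B shows.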
Claim 1: p is #P-complete to
estimate (up to a constant factor)
This follows from Valiant’s
famous result.
Claim 2: Suppose we had a fast classical algorithm for BosonSampling. Then we could estimate p in BPP^NP, that is, using a randomized algorithm with an oracle for NP-complete problems
This follows from a classical
result of Goldwasser-Sipser
Conclusion: Suppose we had a fast classical algorithm for BosonSampling. Then P^#P = BPP^NP
Unfortunately, this argument hinged on the hardness of
estimating a single, exponentially-small probability p.
As such, it’s not robust to realistic experimental error.
Showing that a noisy BosonSampling device still
samples a classically-intractable distribution is a much
more complicated problem. As mentioned, we can do
it, but only under an additional assumption (that
estimating Gaussian permanents is #P-complete)
A first step toward proving that conjecture would simply be to understand the distribution of |Per(X)|^2 for Gaussian X. Is it (as we conjecture) approximately lognormal?
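One can at least probe the lognormality conjecture empirically: sample small complex Gaussian matrices, compute $\log |\mathrm{Per}(X)|^2$, and inspect the shape of the resulting histogram. A toy sketch (brute-force permanent, so n must stay small; nothing here says anything about the asymptotics):

```python
import itertools

import numpy as np

def permanent(X):
    n = X.shape[0]
    return sum(
        np.prod([X[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

rng = np.random.default_rng(0)
n, trials = 4, 500
samples = []
for _ in range(trials):
    # Complex Gaussian entries of total variance 1: N(0, 1/2) real + i N(0, 1/2) imag
    X = (rng.normal(scale=np.sqrt(0.5), size=(n, n))
         + 1j * rng.normal(scale=np.sqrt(0.5), size=(n, n)))
    samples.append(np.log(abs(permanent(X)) ** 2))

samples = np.array(samples)
print(f"mean of log|Per|^2: {samples.mean():.2f}, std: {samples.std():.2f}")
```

If the conjecture holds, these log-values should look approximately normal as n grows.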
BosonSampling Experiments
In 2012, groups in Brisbane,
Oxford, Rome, and Vienna
reported the first 3-photon
BosonSampling experiments,
confirming that the amplitudes
were given by 3x3 permanents
# of experiments > # of photons!
Obvious Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons
Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify
the results!
Scattershot BosonSampling
Exciting recent idea, proposed by Steve Kolthammer and
others, for sampling a hard distribution even with highly
unreliable (but heralded) photon sources, like SPDCs
The idea: Say you have 100 sources, of which only 10
(on average) generate a photon. Then just detect which
sources succeed, and use those to define your
BosonSampling instance!
Complexity analysis turns out to go through essentially
without change
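The scattershot protocol is simple enough to simulate directly: each heralded source either fires (and we know it) or doesn't, and whichever subset fires defines the input modes for that run. A toy sketch with the illustrative numbers from above (100 sources, 10% heralding probability):

```python
import random

def scattershot_run(num_sources=100, fire_prob=0.10, rng=random):
    """Simulate one scattershot run: return the list of sources that heralded
    a photon; these input modes define this run's BosonSampling instance."""
    return [s for s in range(num_sources) if rng.random() < fire_prob]

random.seed(7)
fired = scattershot_run()
print(f"{len(fired)} sources fired; first few input modes = {fired[:5]}")
```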
Polynomial-Time Verification of
BosonSampling Devices?
Idea 1: Let A_S be the n×n submatrix of A corresponding to output S. Let P_S be the product of squared 2-norms of A_S's rows. Check whether the observed distribution over P_S is consistent with BosonSampling
[Plot: distribution of P_S under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution]
Idea 2: Let the scattering matrix U be a discrete Fourier
transform. Then because of cancellations in the
permanent, a ~1/n fraction of outcomes S should have
probability 0. Check that these never occur.
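Idea 1's statistic is attractive because it is cheap: P_S needs only row norms, not permanents. A sketch (the helper name is hypothetical; assumes A is a numpy array and S a tuple of output-mode indices):

```python
import numpy as np

def row_norm_statistic(A, S):
    """P_S: product of squared 2-norms of the rows of the n x n submatrix A_S.

    Computable in O(n^2) time per outcome, unlike |Per(A_S)|^2.
    """
    A_S = A[list(S), :]
    return float(np.prod(np.sum(np.abs(A_S) ** 2, axis=1)))

# Example: column-orthonormal A from a QR decomposition, one outcome S
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
A = Q[:, :3]          # 6 modes, 3 photons
print(row_norm_statistic(A, (0, 2, 5)))
```

The test is then statistical: collect P_S over many experimental samples and compare its empirical distribution against the two reference curves.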
Using Quantum Optics to Prove
that the Permanent is #P-Complete
[A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but
his proof required strange, custom-made gadgets
We gave a new, arguably more transparent proof by
combining three facts:
(1) n-photon amplitudes correspond to n×n permanents
(2) Postselected quantum optics can simulate universal
quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete
quantities in their amplitudes
Open Problems
Prove that Gaussian permanent approximation is #P-hard
(first step: understand distribution of Gaussian permanents)
Can the BosonSampling model solve classically-hard
decision problems? With verifiable answers?
Can one efficiently sample a distribution that can’t be
efficiently distinguished from BosonSampling?
Similar hardness results for other natural quantum
systems (besides linear optics)?
Bremner, Jozsa, Shepherd 2010: Another system for which exact
classical simulation would collapse the polynomial hierarchy