The Learnability of Quantum States


BosonSampling
Scott Aaronson (MIT)
Talk at SITP, February 21, 2014
The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine
Shor’s Theorem: QUANTUM SIMULATION has no efficient
classical algorithm, unless FACTORING does also
So the ECT is false … what more
evidence could anyone want?
Building a QC able to factor large numbers is damn
hard! After 16 years, no fundamental obstacle has
been found, but who knows?
Can’t we “meet the physicists halfway,” and show
computational hardness for quantum systems closer to
what they actually work with now?
FACTORING might have a fast classical algorithm! At any rate, it’s an extremely “special” problem
Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving
only non-interacting photons
Classical counterpart:
Galton’s Board
Replacing the balls by photons leads to
famously counterintuitive phenomena,
like the Hong-Ou-Mandel dip
In general, we consider a network of
beamsplitters, with n input “modes” (locations)
and m>>n output modes
n identical photons enter, one per input mode
Assume for simplicity they all leave in different modes—there are $\binom{m}{n}$ possibilities

The beamsplitter network defines a column-orthonormal matrix A ∈ ℂ^{m×n}, such that

$$\Pr[\text{outcome } S] = |\operatorname{Per}(A_S)|^2$$

where

$$\operatorname{Per}(X) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}$$

is the matrix permanent, and A_S is the n×n submatrix of A corresponding to S

For simplicity, I’m ignoring outputs with ≥2 photons per mode
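For small n, the permanent in the formula above can be evaluated straight from its definition. A minimal brute-force sketch in Python (illustration only, not code from the talk):

```python
from itertools import permutations

def permanent(X):
    """Per(X) = sum over permutations sigma of prod_i X[i][sigma(i)].
    Brute force over all n! terms -- only sensible for small n."""
    n = len(X)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= X[i][sigma[i]]
        total += term
    return total

# Unlike the determinant, there are no minus signs:
# Per([[a, b], [c, d]]) = a*d + b*c
print(permanent([[1, 2], [3, 4]]))  # -> 10
```

The lack of sign cancellation in the definition is exactly what separates the permanent (#P-hard) from the determinant (polynomial time via Gaussian elimination).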
Example

For the Hong-Ou-Mandel experiment,

$$\Pr[\text{output } (1,1)] = \left| \operatorname{Per}\begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \end{pmatrix} \right|^2 = \left| \tfrac{1}{2} - \tfrac{1}{2} \right|^2 = 0$$
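The cancellation in this 2×2 permanent can be checked numerically; a small Python sketch (the brute-force permanent function is just for illustration):

```python
from itertools import permutations
from math import sqrt

def permanent(X):
    # Brute-force permanent; fine for a 2x2 matrix
    n = len(X)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for i in range(n):
            term *= X[i][sigma[i]]
        total += term
    return total

# 50/50 beamsplitter acting on two photons (Hong-Ou-Mandel)
s = 1 / sqrt(2)
A = [[s, s],
     [s, -s]]

# Pr[output (1,1)] = |Per(A)|^2; the two permanent terms cancel exactly
p = abs(permanent(A)) ** 2
print(p)  # -> 0.0: the Hong-Ou-Mandel dip
```

The two photons never exit in different modes: they always bunch, which is the counterintuitive phenomenon the slide refers to.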
In general, an n×n complex permanent is a sum of n!
terms, almost all of which cancel
How hard is it to estimate the “tiny residue” left over?
Answer: #P-complete, even for constant-factor approx
(Contrast with nonnegative permanents!)
So, Can We Use Quantum Optics to
Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: If X is sub-unitary, then |Per(X)|² will usually be exponentially small. So to get a reasonable estimate of |Per(X)|² for a given X, we’d generally need to repeat the optical experiment exponentially many times
Better idea: Given A ∈ ℂ^{m×n} as input, let BosonSampling be the problem of merely sampling from the same distribution D_A that the beamsplitter network samples from—the one defined by Pr[S] = |Per(A_S)|²
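For tiny instances, D_A can be sampled by brute force, which makes the problem statement concrete. A Python sketch (the Gram-Schmidt construction of a column-orthonormal A is an illustrative stand-in for a real beamsplitter network, and is real-valued for simplicity; real experiments use complex Haar-random unitaries):

```python
import random
from itertools import combinations, permutations

def permanent(X):
    # Brute-force permanent (n! terms); fine for tiny n
    n = len(X)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for i in range(n):
            term *= X[i][sigma[i]]
        total += term
    return total

def random_orthonormal_columns(m, n, seed=0):
    # Gram-Schmidt on random Gaussian vectors: an m x n real matrix
    # with orthonormal columns, standing in for a beamsplitter network
    rng = random.Random(seed)
    cols = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(m)]
        for u in cols:
            dot = sum(vi * ui for vi, ui in zip(v, u))
            v = [vi - dot * ui for vi, ui in zip(v, u)]
        norm = sum(vi * vi for vi in v) ** 0.5
        cols.append([vi / norm for vi in v])
    return [list(row) for row in zip(*cols)]  # m rows of length n

def boson_sample(A, n):
    """Brute-force BosonSampling: enumerate every collision-free
    outcome S (n of the m output modes), weight it by |Per(A_S)|^2,
    and draw one. Collision outcomes are ignored, as on the slide."""
    m = len(A)
    outcomes = list(combinations(range(m), n))
    weights = [abs(permanent([A[i] for i in S])) ** 2 for S in outcomes]
    return random.choices(outcomes, weights=weights)[0]

A = random_orthonormal_columns(m=6, n=2)
print(boson_sample(A, n=2))  # a pair of output modes, e.g. (0, 4)
```

This sampler takes time exponential in n, of course; the whole point of the theorem below is that no classical sampler can do fundamentally better unless the polynomial hierarchy collapses.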
Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then P^#P = BPP^NP

Upshot: Compared to (say) Shor’s factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard

Better Theorem: Suppose we can sample D_A even approximately in classical polynomial time. Then in BPP^NP, it’s possible to estimate Per(X), with high probability over a Gaussian random matrix X ~ N(0,1)_ℂ^{n×n}

We conjecture that the above problem is already #P-complete. If it is, then a fast classical algorithm for approximate BosonSampling would already have the consequence that P^#P = BPP^NP
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”:
A QC built of noninteracting fermions can be efficiently
simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons
plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized
algorithm to approximate Per(A) for nonnegative A
Gurvits 2002: Fast classical randomized algorithm to approximate n-photon amplitudes to ±ε additive error
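Gurvits’ algorithm can be sketched via the Glynn formula, which writes Per(A) as an expectation over random sign vectors; averaging polynomially many bounded samples then gives the additive-error estimate. A Python sketch (function names are mine, not from the paper):

```python
import random
from itertools import product

def glynn_sample(A, x):
    """One sample of the Glynn-formula estimator: for x uniform in
    {-1,+1}^n, the average of (prod_i x_i) * prod_j (sum_i x_i A[i][j])
    is exactly Per(A)."""
    n = len(A)
    sign = 1
    for xi in x:
        sign *= xi
    value = float(sign)
    for j in range(n):
        value *= sum(x[i] * A[i][j] for i in range(n))
    return value

def gurvits_estimate(A, samples=20000, seed=1):
    # Average over random sign vectors. For sub-unitary A every sample
    # is bounded by 1 in absolute value (by AM-GM), so O(1/eps^2)
    # samples give a +/- eps additive-error estimate of Per(A).
    rng = random.Random(seed)
    n = len(A)
    total = 0.0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        total += glynn_sample(A, x)
    return total / samples

# Sanity check: the exact average over all 2^n sign vectors
# recovers the permanent: Per([[1,2],[3,4]]) = 1*4 + 2*3 = 10
A = [[1, 2], [3, 4]]
print(sum(glynn_sample(A, x) for x in product((-1, 1), repeat=2)) / 4)  # -> 10.0
```

Additive error is the catch: since |Per(X)|² is typically exponentially small for sub-unitary X, this estimate says nothing useful about a single BosonSampling amplitude, which is why Gurvits’ algorithm doesn’t contradict the hardness results above.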
BosonSampling Experiments
Last year, groups in Brisbane,
Oxford, Rome, and Vienna
reported the first 3- and 4-photon BosonSampling
experiments, confirming that
the amplitudes were given by
3x3 and 4x4 permanents
# of experiments ≥ # of photons!
Obvious Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons
Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify
the results!
Theoretical Challenge: Argue that, even with photon
losses and messier initial states, you’re still solving a
classically-intractable sampling problem
Scattershot BosonSampling
Wonderful new idea, proposed by several
experimental groups, for sampling a hard distribution
even with highly unreliable (but heralded) photon
sources, like Spontaneous Parametric Downconversion
(SPDC) crystals
The idea: Say you have 100 sources, of which only 10
(on average) generate a photon. Then just detect
which sources succeeded, and use those to define
your BosonSampling instance!
Complexity analysis goes through essentially without
change
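The source-selection step can be sketched in a few lines of Python (the 100 sources and ~10% firing probability are just the slide’s illustrative numbers):

```python
import random

def scattershot_round(num_sources=100, p_fire=0.1, seed=None):
    """One round of (simulated) scattershot source selection: each
    heralded SPDC source fires independently with probability p_fire,
    and the herald detectors tell us exactly which ones did. Those
    modes define this round's BosonSampling instance."""
    rng = random.Random(seed)
    return [i for i in range(num_sources) if rng.random() < p_fire]

# With 100 sources at ~10% each, we expect ~10 input photons per
# round, entering known (but randomly chosen) input modes
fired = scattershot_round(seed=42)
print(len(fired), fired)
```

The key point is that randomness in *which* input modes get photons is harmless: a Haar-random network restricted to a random subset of input modes is still effectively Haar-random, so the hardness analysis carries over.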
Verifying BosonSampling Devices
As mentioned before, even verifying the output of a
claimed BosonSampling device would presumably take
exp(n) time, in general!
Recently underscored by [Gogolin et al. 2013]
(alongside specious claims…)
Our responses:
(1) Who cares? Take n=30
(2) If you do care, we can show how to distinguish the
output of a BosonSampling device from all sorts of
specific “null hypotheses”
Theorem (A. 2013): Let A ∈ ℂ^{m×n} be Haar-random, where m>>n. Then there’s a classical polytime algorithm C(A) that distinguishes the BosonSampling distribution D_A from the uniform distribution U (whp over A, and using only O(1) samples)

Strategy: Let A_S be the n×n submatrix of A corresponding to output S. Let P be the product of squared 2-norms of A_S’s rows. If P>E[P], then guess S was drawn from D_A; otherwise guess S was drawn from U
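The statistic P from the strategy above is easy to compute. A Python sketch (the threshold E[P] is passed in as a parameter here rather than derived from m and n, and the function names are mine):

```python
def row_norm_statistic(A_S):
    """P = product of the squared 2-norms of A_S's rows. D_A biases
    outcomes toward submatrices with large |Per(A_S)|^2, which
    correlates with large row norms, so P tends to be bigger under
    D_A than under the uniform distribution on outcomes."""
    P = 1.0
    for row in A_S:
        P *= sum(abs(x) ** 2 for x in row)  # abs() also handles complex entries
    return P

def classify_sample(A_S, mean_P):
    # Guess which distribution outcome S came from by comparing P
    # to its mean under uniform outcomes (mean_P supplied by caller)
    return "D_A" if row_norm_statistic(A_S) > mean_P else "uniform"

# Rows with large norms push P up -> evidence for D_A
print(classify_sample([[0.9, 0.4], [0.8, 0.5]], mean_P=0.5))  # -> D_A
```

Note this only rules out the *uniform* null hypothesis; it is not a verification that the device samples from D_A, which is exactly the caveat of the Gogolin et al. discussion above.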
Recent realization: You can also use the number of multi-photon collisions to distinguish D_A from D_A′, the same distribution but with classical distinguishable particles

[Figure: histograms of $P = \|v_1\|^2 \cdots \|v_n\|^2$ under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution]
Turning the Logic Around
[A., Proc. Roy. Soc. 2011]
Arkhipov and I used the #P-completeness of the
permanent—a great discovery of CS theory from the
1970s—to argue that bosonic sampling is hard for a
classical computer
Later, I realized that one can also go in the reverse
direction! Using the power of postselected linear-optical
quantum computing—shown by [Knill-Laflamme-Milburn
2001]—and the connection between LOQC and the
permanent, I gave a new, arguably-simpler proof that the
permanent is #P-complete
Open Problems
Prove that approximating the permanent of an i.i.d.
Gaussian matrix is #P-hard!
Can our linear-optics model solve a classically-intractable
problem for which a classical computer can efficiently
verify the answer?
Similar hardness results for other natural quantum
systems (besides linear optics)?
Bremner, Jozsa, Shepherd 2010: Another system for
which exact classical simulation would collapse PH