BosonSampling
Scott Aaronson (MIT)
Talk at BBN, October 30, 2013
The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine
Shor’s Theorem: QUANTUM SIMULATION has no efficient
classical algorithm, unless FACTORING does also
So the ECT is false … what more
evidence could anyone want?
Building a QC able to factor large numbers is damn
hard! After 16 years, no fundamental obstacle has
been found, but who knows?
Can’t we “meet the physicists halfway,” and show
computational hardness for quantum systems closer to
what they actually work with now?
FACTORING might have a fast classical algorithm! At any rate, it’s an extremely “special” problem
Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
BosonSampling (A.-Arkhipov 2011)
A rudimentary type of quantum computing, involving
only non-interacting photons
Classical counterpart: Galton’s Board
Replacing the balls by photons leads to
famously counterintuitive phenomena,
like the Hong-Ou-Mandel dip
In general, we consider a network of
beamsplitters, with n input “modes” (locations)
and m>>n output modes
n identical photons enter, one per input mode
Assume for simplicity they all leave in different modes; there are $\binom{m}{n}$ possibilities
The beamsplitter network defines a column-orthonormal matrix $A \in \mathbb{C}^{m \times n}$, such that
$$\Pr[\text{outcome } S] = \left| \operatorname{Per}(A_S) \right|^2,$$
where
$$\operatorname{Per}(X) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}$$
is the matrix permanent, and $A_S$ is the $n \times n$ submatrix of $A$ corresponding to $S$
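To make the definition concrete, here is a minimal Python sketch (our addition, not from the talk) of the permanent by its defining sum; the factorial-time loop is only practical for the small $n$ at issue in these experiments:

```python
from itertools import permutations
import numpy as np

def permanent(X: np.ndarray) -> complex:
    """Per(X) = sum over permutations sigma of prod_i X[i, sigma(i)].
    Naive O(n * n!) evaluation; fine for n up to ~10."""
    n = X.shape[0]
    return sum(
        np.prod([X[i, sigma[i]] for i in range(n)])
        for sigma in permutations(range(n))
    )
```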
Example
For the Hong-Ou-Mandel experiment,
$$\Pr[\text{output } (1,1)] = \left| \operatorname{Per} \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \end{pmatrix} \right|^2 = \left| \frac{1}{2} - \frac{1}{2} \right|^2 = 0$$
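As a quick sanity check (our addition, reusing the `permanent` sketch above), the Hong-Ou-Mandel zero comes out directly:

```python
import numpy as np

# 2x2 beamsplitter matrix from the example above
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(abs(permanent(B)) ** 2)  # -> 0.0: the photons never exit in separate modes
```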
In general, an nn complex permanent is a sum of n!
terms, almost all of which cancel
How hard is it to estimate the “tiny residue” left over?
Answer: #P-complete, even for constant-factor approx
(Contrast with nonnegative permanents!)
So, Can We Use Quantum Optics to
Solve a #P-Complete Problem?
That sounds way too good to be true…
Explanation: If X is sub-unitary, then |Per(X)|2
will usually be exponentially small. So to get a
reasonable estimate of |Per(X)|2 for a given X,
we’d generally need to repeat the optical
experiment exponentially many times
Better idea: Given ACmn as input, let BosonSampling
be the problem of merely sampling from the same
distribution DA that the beamsplitter network samples
from—the one defined by Pr[S]=|Per(AS)|2
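For intuition, here is a brute-force classical sampler for $D_A$ at toy sizes (our sketch, reusing the `permanent` function above, under the slide’s simplifying assumption that all photons exit in distinct modes). Its exponential cost is exactly what the hardness results below say you cannot avoid:

```python
from itertools import combinations
import numpy as np

def sample_boson(A: np.ndarray, num_samples: int = 1):
    """Enumerate collision-free outcomes S (n-subsets of the m output modes),
    weight each by |Per(A_S)|^2, and sample. Exponential time in n."""
    m, n = A.shape
    outcomes = list(combinations(range(m), n))
    probs = np.array([abs(permanent(A[list(S), :])) ** 2 for S in outcomes])
    probs /= probs.sum()  # renormalize over collision-free outcomes only
    picks = np.random.choice(len(outcomes), size=num_samples, p=probs)
    return [outcomes[i] for i in picks]
```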
Theorem (A.-Arkhipov 2011): Suppose BosonSampling is solvable in classical polynomial time. Then $P^{\#P} = BPP^{NP}$
Upshot: Compared to (say) Shor’s factoring algorithm, we get different/stronger evidence that a weaker system can do something classically hard
Better Theorem: Suppose we can sample $D_A$ even approximately in classical polynomial time. Then in $BPP^{NP}$, it’s possible to estimate $\operatorname{Per}(X)$, with high probability over a Gaussian random matrix $X \sim \mathcal{N}(0,1)_{\mathbb{C}}^{n \times n}$
We conjecture that the above problem is already #P-complete. If it is, then a fast classical algorithm for approximate BosonSampling would already have the consequence that $P^{\#P} = BPP^{NP}$
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”:
A QC built of noninteracting fermions can be efficiently
simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons
plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized
algorithm to approximate Per(A) for nonnegative A
Gurvits 2002: O(n2/2) classical randomized algorithm to
approximate an n-photon amplitude to ± additive error
(also, to compute k-mode marginal distribution in nO(k) time)
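For reference, Gurvits’s estimator has a very short implementation via Glynn’s formula, $\operatorname{Per}(A) = \mathbb{E}_x\left[\prod_j x_j \cdot \prod_i (Ax)_i\right]$ over uniform $x \in \{-1,+1\}^n$; for sub-unitary $A$ each sample is bounded, so $O(1/\varepsilon^2)$ samples give a $\pm\varepsilon$ additive estimate. A hedged sketch (function name ours):

```python
import numpy as np

def gurvits_estimate(A: np.ndarray, num_samples: int) -> complex:
    """Unbiased estimator of Per(A) via Glynn's formula; each sample costs O(n^2)."""
    n = A.shape[0]
    total = 0j
    for _ in range(num_samples):
        x = np.random.choice([-1.0, 1.0], size=n)   # uniform sign vector
        total += np.prod(x) * np.prod(A @ x)
    return total / num_samples
```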
BosonSampling Experiments
Last year, groups in Brisbane,
Oxford, Rome, and Vienna
reported the first 3-photon
BosonSampling experiments,
confirming that the amplitudes were given by $3 \times 3$ permanents
# of experiments > # of photons!
Obvious Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons
Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify
the results!
Scattershot BosonSampling
Exciting new idea, proposed by Steve Kolthammer, for
sampling a hard distribution even with highly unreliable
(but heralded) photon sources, like SPDCs
The idea: Say you have 100 sources, of which only 10
(on average) generate a photon. Then just detect which
sources succeed, and use those to define your
BosonSampling instance!
Complexity analysis goes through essentially without
change
Issues: Increases depth of optical network needed. Also,
if some sources generate ≥2 photons, need a new
hardness assumption
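A toy simulation of one scattershot round (our sketch; the parameters are illustrative, not from the talk): each heralded source fires independently with small probability, and whichever sources fire define that run’s BosonSampling instance:

```python
import numpy as np

def scattershot_round(num_sources: int = 100, p_fire: float = 0.1) -> np.ndarray:
    """Return the (heralded) indices of sources that fired this round;
    the corresponding input modes define the BosonSampling instance."""
    return np.flatnonzero(np.random.rand(num_sources) < p_fire)
```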
Recent Criticisms of Gogolin et al.
(arXiv:1306.3995)
Suppose you ignore which actual photodetectors light
up, and count only the number of times each output
configuration occurs. In that case, the BosonSampling
distribution DA is exponentially close to the uniform distribution U
Response: Why would you ignore which
detectors light up??
The output of almost any algorithm is also
gobbledygook if you ignore the order of the
output bits…
Recent Criticisms of Gogolin et al.
(arXiv:1306.3995)
OK, so maybe DA isn’t close to uniform. Still, the very
same arguments we gave for why polynomial-time
classical algorithms can’t sample DA, suggest that
they can’t even distinguish DA from U!
Response: That’s why we said to focus on 10-30
photons—a range where a classical computer can
verify a BosonSampling device’s output, but the
BosonSampling device might be “faster”!
(And 10-30 photons is probably the best you can do anyway,
without quantum fault-tolerance)
More Decisive Responses
(A.-Arkhipov, arXiv:1309.7460)
Theorem: Let ACmn be a Haar-random
BosonSampling matrix, where m≥n5.1/.
Then with 1-O() probability over A, the
BosonSampling distribution DA has Ω(1)
variation distance from the uniform
distribution U
[Figure: histogram of (normalized) probabilities under $D_A$ vs. under $U$]
Necessary, though not sufficient, for
approximately sampling DA to be hard
Theorem (A. 2013): Let ACmn be Haar-random, where
m>>n. Then there is a classical polynomial-time algorithm
C(A) that distinguishes DA from U (with high probability over A
and constant bias, and using only O(1) samples)
Strategy: Let AS be the nn submatrix of A corresponding to
output S. Let P be the product of squared 2-norms of AS’s
rows. If P>E[P], then guess S was drawn from DA; otherwise
guess S was drawn from U
$$P := \|v_1\|^2 \cdots \|v_n\|^2 \quad \text{for the rows } v_1, \ldots, v_n \text{ of } A_S$$
[Figure: distribution of $P$ under the uniform distribution (a lognormal random variable) vs. under a BosonSampling distribution, with $A_S$ an $n \times n$ submatrix of the $m \times n$ matrix $A$]
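Since the statistic is so simple, the discriminator fits in a few lines of Python (our sketch of the row-norm test described above; the threshold would be set near $\mathbb{E}[P]$ under $U$):

```python
import numpy as np

def row_norm_stat(A: np.ndarray, S) -> float:
    """P = product of squared 2-norms of the rows of A_S."""
    A_S = A[list(S), :]                      # n x n submatrix for outcome S
    return float(np.prod(np.sum(np.abs(A_S) ** 2, axis=1)))

def guess_from_DA(A: np.ndarray, S, threshold: float) -> bool:
    """True -> guess S was drawn from D_A; False -> guess uniform U."""
    return row_norm_stat(A, S) > threshold
```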
Using Quantum Optics to Prove
that the Permanent is #P-Complete
[A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but
his proof required strange, custom-made gadgets
We gave a new, arguably more transparent proof by
combining three facts:
(1) n-photon amplitudes correspond to $n \times n$ permanents
(2) Postselected quantum optics can simulate universal
quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete
quantities in their amplitudes
Open Problems
Prove that Gaussian permanent approximation is #P-hard
(first step: understand distribution of Gaussian permanents)
Can the BosonSampling model solve classically-hard
decision problems? With verifiable answers?
Can one efficiently sample a distribution that can’t be
efficiently distinguished from BosonSampling?
Are we still sampling a hard distribution with unheralded
photon losses, or Gaussian initial states?
Similar hardness results for other natural quantum
systems (besides linear optics)?
Bremner, Jozsa, Shepherd 2010: Another system for which exact
classical simulation would collapse PH