Quantum Computing and the Limits
of the Efficiently Computable
Scott Aaronson
MIT
Things we never see…
Warp drive
Perpetuum mobile
Übercomputer [pictured printing “GOLDBACH CONJECTURE: TRUE. NEXT QUESTION”]
The (seeming) impossibility of the first two machines
reflects fundamental principles of physics—Special
Relativity and the Second Law respectively
So what about the third one?
Some would say Mr. T(uring) & friends already
answered this question in the 1930s
But what about finding proofs of bounded length? Can
that be done in a way that avoids exhaustive search?
This sounds like (literally) a
$1,000,000 question:
P=NP?
If there actually were a machine with
[running time] ~K·n (or even only with ~K·n²),
this would have consequences of the
greatest magnitude.
—Gödel to von Neumann, 1956
However, an important presupposition underlying
P vs. NP is the...
Extended Church-Turing Thesis
“Any physically-realistic computing device can be
simulated by a deterministic or probabilistic
Turing machine, with at most polynomial
overhead in time and memory”
So how sure are we of this thesis?
Have there been serious challenges to it?
Old proposal: Dip two glass plates with pegs between them
into soapy water.
Let the soap bubbles form a minimum Steiner tree
connecting the pegs—thereby solving a known NP-hard
problem “instantaneously”
Ah, but what about
quantum computing?
(you knew it was coming)
Quantum mechanics: “Probability
theory with minus signs”
(Nature seems to prefer it that way)
Quantum Computing
A quantum state of n qubits takes 2^n complex
numbers to describe:

$|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\,|x\rangle$
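As a rough illustration (mine, not from the talk), here is a minimal numpy sketch of that scaling: a pure state of n qubits is just a unit vector of 2^n complex amplitudes, one per n-bit string.

```python
# Minimal sketch: an n-qubit state is a unit vector of 2^n complex amplitudes.
import numpy as np

n = 10                                  # number of qubits
dim = 2 ** n                            # 2^n basis states |x>, x in {0,1}^n

# Build a random normalized state vector (one alpha_x for each x).
rng = np.random.default_rng(0)
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)

print(f"{n} qubits -> {dim} complex amplitudes")   # 10 qubits -> 1024 amplitudes
print("norm check:", np.abs(np.vdot(amps, amps)))  # ~1.0
```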
Chemists and physicists knew that for decades, as a
practical problem!
In the 1980s, Feynman, Deutsch, and others had the
amazing idea of building a new type of computer that
could overcome the problem, by itself exploiting the
exponentiality inherent in QM
Actually building a QC: Damn hard, because of decoherence.
(But seems possible in principle!)
Popularizers Beware:
A quantum computer is NOT like a
massively-parallel classical computer!

$|\psi\rangle = \sum_{x \in \{1,\ldots,2^n\}} \alpha_x\,|x\rangle$
Exponentially-many basis
states, but you only get to
observe one of them
Any hope for a speedup
rides on the magic of
interference
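To make the interference point concrete, here is a toy single-qubit sketch (my illustration, using numpy and the standard Hadamard gate H, none of which appears in the talk): branching alone just looks like a coin flip, but a second application of H makes the two paths to |1⟩ cancel.

```python
# Toy illustration of interference with one qubit and the Hadamard gate H.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

one_H = H @ ket0          # amplitudes (1/sqrt2, 1/sqrt2): measuring gives 0 or 1, prob 1/2 each
two_H = H @ H @ ket0      # amplitudes (1, 0): the two paths to |1> cancel (minus signs!)

print("after one H:", np.abs(one_H) ** 2)   # [0.5 0.5]
print("after two H:", np.abs(two_H) ** 2)   # [1. 0.]

# A classical random process with the same "branching" stays 50/50 either way;
# only interference of amplitudes restores |0> with certainty.
```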
BQP (Bounded-Error Quantum Polynomial-Time): The class
of problems solvable efficiently by a quantum computer,
defined by Bernstein and Vazirani in 1993
Shor 1994: Factoring integers is in BQP
[Diagram: conjectured containments of complexity classes: P inside both NP and BQP; Factoring in BQP but believed outside P; NP-complete problems believed outside BQP]
But factoring is not believed to be NP-complete!
And today, we don’t believe BQP contains all of NP
(though not surprisingly, we can’t prove that it doesn’t)
Bennett et al. 1997: “Quantum magic” won’t be enough
If you throw away the problem structure, and just consider an
abstract “landscape” of 2^n possible solutions, then even a
quantum computer needs ~2^{n/2} steps to find the correct one
(That bound is actually achievable, using Grover’s algorithm!)
So, is there any quantum algorithm for NP-complete problems
that would exploit their structure?
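For concreteness, here is a small brute-force statevector simulation of Grover search (an illustrative sketch, not part of the talk; the solution index `marked` is arbitrary). It finds the marked item with high probability after ~√N iterations rather than ~N.

```python
# Sketch of Grover's algorithm on a brute-force statevector.
import numpy as np

n = 10
N = 2 ** n
marked = 837                        # the single "solution" among N = 1024 candidates

state = np.full(N, 1 / np.sqrt(N))  # uniform superposition over all candidates

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~sqrt(N) steps, not ~N
for _ in range(iterations):
    state[marked] *= -1                        # oracle: flip the sign of the solution
    state = 2 * state.mean() - state           # diffusion: inversion about the mean

print("iterations:", iterations)               # 25, i.e. ~2^{n/2}
print("P(success):", abs(state[marked]) ** 2)  # close to 1
```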
Quantum Adiabatic Algorithm
(Farhi et al. 2000)
H_i: Hamiltonian with easily-prepared ground state
H_f: Hamiltonian whose ground state encodes the solution
to an NP-complete problem
Problem: the “eigenvalue gap” between the ground state and
the first excited state can be exponentially small
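Here is a toy numerical sketch of the idea (my illustration, assuming numpy; the cost function is made up): interpolate H(s) = (1−s)·H_i + s·H_f between a transverse-field H_i, whose ground state |+…+⟩ is easy to prepare, and a diagonal H_f whose ground state is the minimum-cost bitstring, then track the eigenvalue gap along the path.

```python
# Toy sketch of the adiabatic interpolation H(s) = (1-s)*H_i + s*H_f
# and its spectral gap, for a tiny made-up cost function.
import numpy as np
from functools import reduce

n = 4
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def op_on(qubit, op):
    """Tensor `op` acting on one qubit of an n-qubit register."""
    return reduce(np.kron, [op if i == qubit else I for i in range(n)])

H_i = -sum(op_on(q, X) for q in range(n))            # ground state |+...+>, easy to prepare
costs = np.random.default_rng(1).permutation(2 ** n).astype(float)  # distinct toy costs
H_f = np.diag(costs)                                  # ground state = minimum-cost bitstring

gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_i + s * H_f)
    gaps.append(evals[1] - evals[0])

print("minimum eigenvalue gap along the path:", min(gaps))
# For hard instances this gap can shrink exponentially with n,
# forcing an exponentially slow sweep (the "problem" noted above).
```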
Nonlinear variants of the Schrödinger
Equation
Abrams & Lloyd 1998: If quantum mechanics were
nonlinear, one could exploit that to solve NP-complete problems in polynomial time
[Figure panels: “1 solution to NP-complete problem” vs. “No solutions”]
Relativity Computer
[Figure: computer screen reading “DONE”]
Zeno’s Computer
[Figure: STEP 1 through STEP 5 plotted against time (seconds), each step taking half as long as the one before]
Closed Timelike Curves (CTCs)
Here’s a polynomial-time algorithm to solve NP-complete
problems (only drawback is that it requires time travel):
Read an integer x ∈ {0,…,2^n − 1} from the future
If x encodes a valid solution, then output x
Otherwise, output (x+1) mod 2^n
If valid solutions exist, then the only fixed-points of the
above program are ones that input and output valid solutions
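For illustration (a sketch of my own, with a made-up `is_solution` predicate standing in for “x encodes a valid solution”), here is the map the program above computes and its fixed points:

```python
# The CTC "program" above as a classical map on n-bit integers.
# Its only fixed points are valid solutions, which is why a Deutschian CTC,
# forced by consistency to find a fixed point, would hand you one.

n = 4

def is_solution(x):
    # Hypothetical stand-in for "x encodes a valid solution"
    # (e.g., a satisfying assignment of a SAT instance).
    return x in {6, 11}

def ctc_step(x):
    return x if is_solution(x) else (x + 1) % (2 ** n)

fixed_points = [x for x in range(2 ** n) if ctc_step(x) == x]
print(fixed_points)   # [6, 11], i.e. exactly the solutions; with no solutions, no fixed point
```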
Building on work of Deutsch, [A.-Watrous 2008] defined a
formal model of CTC computation, and showed that in
both the classical and quantum cases, it has exactly the
power of PSPACE (believed to be even larger than NP)
“The No-SuperSearch Postulate”
There is no physical means to solve NP-complete
problems in polynomial time.
Includes PNP as a special case, but is stronger
No longer a purely mathematical conjecture, but also a
claim about the laws of physics
If true, would “explain” why adiabatic systems have
small spectral gaps, the Schrödinger equation is linear,
CTCs don’t exist...
Question: What exactly does it mean to “solve” an NP-complete problem?
Example: It’s been known for decades that, if you send n
identical photons through a network of beamsplitters, the
amplitude for the photons to reach some final state is given
by the permanent of an n×n matrix of complex numbers:

$\mathrm{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}$
But the permanent is #P-complete
(believed even harder than NP-complete)! So how can Nature do
such a thing?
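As a sanity check (my sketch, not from the talk), the permanent formula above can be evaluated directly by brute force; the sum runs over all n! permutations, which already hints at why exact evaluation is so costly.

```python
# Brute-force permanent, straight from the formula above (illustrative only).
import numpy as np
from itertools import permutations

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

A = np.array([[1, 2],
              [3, 4]], dtype=complex)
print(permanent(A))   # 1*4 + 2*3 = 10
```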
Resolution: Amplitudes aren’t directly observable, and require
exponentially-many probabilistic trials to estimate
Lesson: If you can’t observe the answer, it doesn’t count!
Recently, Alex Arkhipov and I gave evidence that even
the observed output distribution of such a linear-optical
network would be hard to simulate on a classical
computer—but the argument was necessarily subtler
Last year, groups in Brisbane,
Oxford, Rome, and Vienna
reported the first 3-photon
BosonSampling experiments,
confirming that the amplitudes
were given by 3×3 permanents
# of experiments > # of photons!
Obvious Challenges for Scaling Up:
- Reliable single-photon sources
- Minimizing losses
- Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons
Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify
the results!
Theoretical Challenge: Argue that, even with photon
losses and messier initial states, you’re still solving a
classically-intractable sampling problem
Suppose we believe certain computational
problems (e.g., NP-complete ones) are
intractable in the physical world. Does that
belief do any nontrivial work for physics?
Until this January, I thought the answer was yes—but all
my examples involved ruling out possibilities (like CTCs)
that most physicists consider crazy anyway!
But recently, Harlow and Hayden made
a striking connection between
computational intractability and the
black-hole firewall paradox [AMPS 2012]
What is this firewall paradox?
Firewalls: Black Hole Information Problem Redux
[Diagram: R = faraway Hawking radiation; H = just-emitted Hawking radiation;
B = interior of an “old” black hole. Near-maximal entanglement between R and H,
and also near-maximal entanglement between H and B]
Violates “monogamy of entanglement”! The same
qubit can’t be maximally entangled with 2 things
Harlow-Hayden 2013 (arXiv:1301.4504): Under plausible
assumptions about black-hole physics, for Alice to decode the
early Hawking radiation R and “see” that it’s entangled with
H, she’d need the ability to find “collisions” in a function of
the form f : {0,1}^n → {0,1}^{n-1}
Moreover, I proved in 2002 that, for a “generic” f, the above
problem takes exponential time even for a QC!
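As a rough classical illustration (my sketch, with a randomly built 2-to-1 function standing in for the f that arises in the Harlow-Hayden setup): birthday-style search finds a collision after roughly 2^{n/2} evaluations, and the point above is that even a quantum computer still needs time exponential in n for a generic f.

```python
# Sketch: finding a collision in a random 2-to-1 function f: {0,1}^n -> {0,1}^{n-1}.
import numpy as np

n = 16
rng = np.random.default_rng(0)
f = rng.permutation(2 ** n) // 2          # random 2-to-1 function on n-bit strings

seen = {}
queries = 0
for x in rng.permutation(2 ** n):          # query random inputs until two collide
    queries += 1
    y = f[x]
    if y in seen:
        print(f"collision after {queries} queries (roughly 2^(n/2) = {2 ** (n // 2)}): "
              f"f({seen[y]}) = f({x}) = {y}")
        break
    seen[y] = x
```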
Complexity theory to the rescue of quantum field theory??
Recently I improved the HH argument, to show that Alice’s
decoding task is at least as hard as inverting one-way
functions (“almost” as hard as NP-complete)
“Physical meaning” of these results? Surely a contradiction in
physics isn’t OK just because it takes exponential time to find?
Conclusions
[Diagram: in the 1990s, Quantum Mechanics fed into Computational Complexity via
Shor & Grover; today, the arrows between Computational Complexity and Quantum
Mechanics point in both directions]
Many other exciting connections between these areas are
currently being explored! (Condensed matter, quantum chemistry…)
NP Hardness Assumption: Candidate for a robust, fruitful,
falsifiable principle bridging complexity and physics?