The Future of Computer Science


Black Holes, Firewalls, and the Complexity of States and Unitaries

Scott Aaronson (MIT)
Papers and slides at www.scottaaronson.com

My Starting Point

[Diagram: quantum computing as a bridge between computer science and physics (and more!). Two physics applications shown: bosons & fermions (cf. my talk at IQC yesterday), and black holes / AdS/CFT (today's talk).]

Black Holes and Computational Complexity??

[Diagram: complexity classes BPP, BQP, SZK, QSZK, AM, QAM and their inclusions.]

YES!

Amazing connection made last year by Harlow & Hayden. But first, let's review 40 years of black hole history.

Bekenstein, Hawking 1970s:

Black holes have entropy and temperature! They emit radiation

The Information Loss Problem:

Calculations suggest that Hawking radiation is thermal—uncorrelated with whatever fell in. So, is infalling information lost forever? That would seem to violate the unitarity / reversibility of QM. OK then, assume the information somehow gets out!

The Xeroxing Problem:

How could the same qubit |ψ⟩ both fall inexorably toward the singularity and emerge in the Hawking radiation? That would violate the No-Cloning Theorem.

Black Hole Complementarity (Susskind, ‘t Hooft):

An external observer can describe everything unitarily without including the interior at all! Interior should be seen as “just a scrambled re-encoding” of the exterior degrees of freedom

The Firewall Paradox

(AMPS 2012)

R = Faraway Hawking Radiation
B = Just-Emitted Hawking Radiation
H = Interior of "Old" Black Hole

[Diagram: B is near-maximally entangled with R, and also near-maximally entangled with H.]

Violates monogamy of entanglement! The same qubit can't be maximally entangled with 2 things.

Harlow-Hayden 2013 (arXiv:1301.4504):

Striking argument that Alice’s first task, decoding the entanglement between R and B, would take exponential time—by which point, the black hole would’ve long ago evaporated anyway

Complexity theory to the rescue of quantum field theory?

Are we saying that an inconsistency in the laws of physics is OK, as long as it takes exponential time to discover it?

NO!

“Inconsistency” is only in low-energy effective field theories; question is in what regimes they break down

Caveats of Complexity Arguments

1. Asymptotic

E.g., 8×8 chess takes O(1) time! Only for n×n chess can we give evidence of hardness. But for black holes, n ≈ 10^70…

2. (Usually) Conjectural

Right now, we can't even prove P ≠ NP! To get where we want, we almost always need to make assumptions. Question is, which assumptions?

3. Worst-Case

We can argue that a natural formalization of Alice’s decoding task is “generically” hard. We can’t rule out that a future quantum gravity theory would make her task easy, for deep reasons not captured by our formalization.

Quantum Circuits

The HH Decoding Problem

Given a description of a quantum circuit C, such that

C |0⟩^⊗n = |ψ⟩_RBH

Promised that, by acting only on R (the "Hawking radiation part"), it's possible to distill an EPR pair (|00⟩ + |11⟩)/√2 between R and B.

Problem: Distill such an EPR pair, by applying a unitary transformation U_R to the qubits in R.

Isn’t the Decoding Task Trivial?

Just invert C!

Problem:

That would require waiting until the black hole was fully evaporated (⇒ no more firewall problem). When the BH is "merely" >50% evaporated, we know from a dimension-counting argument that "generically," there will exist a U_R that distills an EPR pair between R and B. But interestingly, this argument doesn't suggest any efficient procedure to find U_R or apply it!

The HH Hardness Result

Set Equality:

Given two efficiently-computable injective functions f,g: {0,1}^n → {0,1}^p(n). Promised that Range(f) and Range(g) are either equal or disjoint. Decide which.

In the "black-box" setting, this problem takes exp(n) time even with a quantum computer (a main result from my 2004 PhD thesis, the "collision lower bound"). Even in the non-black-box setting, solving it would let us solve e.g. Graph Isomorphism.
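As a purely classical illustration (my own sketch, not from the talk), here is the promise problem in code; the function name and the brute-force approach are hypothetical. The point is that the obvious method enumerates all 2^n inputs to f, and the collision lower bound says even quantum black-box algorithms need exp(n) queries.

```python
# Hypothetical illustration of the Set Equality promise problem (not from the talk).
# f and g are injective functions on n-bit strings, promised to have ranges that
# are either identical or disjoint.

def set_equality(f, g, n):
    """Return True iff Range(f) == Range(g), assuming the promise holds."""
    range_f = {f(x) for x in range(2 ** n)}   # brute force: 2^n evaluations of f
    return g(0) in range_f                    # under the promise, one probe of g then decides
```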

Theorem (Harlow-Hayden):

Suppose there’s a polynomial-time quantum algorithm for HH decoding. Then there’s also a polynomial-time quantum algorithm for Set Equality!

The HH Construction

|ψ⟩_RBH = (1/√(2^(n+1))) Σ_{x∈{0,1}^n} ( |x,0⟩_R |0⟩_B |f(x)⟩_H + |x,1⟩_R |1⟩_B |g(x)⟩_H )

(easy to prepare in poly(n) time given f,g)

Intuition:

If Range(f) and Range(g) are disjoint, then the H register decoheres all entanglement between R and B, leaving only classical correlation. If, on the other hand, Range(f) = Range(g), then there's some permutation of the |x,1⟩_R states that puts the last qubit of R into an EPR pair with B. Thus, if we had a reliable way to distill EPR pairs whenever possible, then we could also decide Set Equality.
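To make that intuition concrete, here is a small numerical sketch (my own toy example: n = 2 and made-up injective functions, not anything from Harlow and Hayden). It builds |ψ⟩_RBH, applies the permutation-on-R decoder, and prints the reduced state of (last qubit of R, B): with equal ranges an EPR pair comes out, with disjoint ranges only classical correlation survives.

```python
import numpy as np

n = 2
# Toy injective functions {0,1,2,3} -> {0,...,7} (my own choices, for illustration)
f = {0: 0, 1: 3, 2: 5, 3: 6}
perm = {0: 1, 1: 2, 2: 3, 3: 0}              # permutation pi of the inputs
g_equal = {x: f[perm[x]] for x in f}         # Range(g) = Range(f)
g_disjoint = {0: 1, 1: 2, 2: 4, 3: 7}        # Range(g) disjoint from Range(f)

def hh_state(f, g):
    """Amplitudes of |psi>_RBH = 2^{-(n+1)/2} sum_x (|x,0>_R|0>_B|f(x)>_H + |x,1>_R|1>_B|g(x)>_H)."""
    amp = {}
    for x in range(2 ** n):
        amp[(x, 0, 0, f[x])] = 1.0
        amp[(x, 1, 1, g[x])] = 1.0
    norm = np.sqrt(sum(a * a for a in amp.values()))
    return {k: a / norm for k, a in amp.items()}

def decode(amp, perm):
    """Alice's unitary on R alone: if the flag qubit of R is 1, permute the x-part of R."""
    return {((perm[x] if c == 1 else x), c, b, h): a for (x, c, b, h), a in amp.items()}

def reduced_flag_B(amp):
    """Reduced density matrix of (flag qubit of R, B), tracing out the x-part of R and H."""
    rho = np.zeros((4, 4))
    for (x1, c1, b1, h1), a1 in amp.items():
        for (x2, c2, b2, h2), a2 in amp.items():
            if x1 == x2 and h1 == h2:
                rho[2 * c1 + b1, 2 * c2 + b2] += a1 * a2
    return rho

for name, g in [("equal ranges", g_equal), ("disjoint ranges", g_disjoint)]:
    print(name)
    print(np.round(reduced_flag_B(decode(hh_state(f, g), perm)), 3))
# Equal ranges: rho is the EPR projector (|00>+|11>)(<00|+<11|)/2 -- decoding succeeded.
# Disjoint ranges: rho is the classical mixture (|00><00| + |11><11|)/2.
```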

My strengthening: Harlow-Hayden decoding is as hard as inverting an arbitrary one-way function

|ψ⟩_RBH = (1/√(2^(2n+1))) Σ_{x,s∈{0,1}^n, a∈{0,1}} |f(x), s, a⟩_R |(x·s)⊕a⟩_B |x, s⟩_H

R: "old" Hawking photons / B: photons just coming out / H: still in black hole

B is maximally entangled with the last qubit of R. But in order to see that B and R are even classically correlated, one would need to learn x·s (a "hardcore bit" of f), and therefore invert f.

Is computational intractability the only "armor" protecting the geometry of spacetime inside the black hole?
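Spelling out the classical core of that claim (my own sketch of what any decoder acting only on R must in effect compute; the function names are hypothetical): Alice holds (f(x), s, a), and to line her last qubit up with B = (x·s)⊕a she must XOR in the hardcore bit x·s, which means recovering x from f(x).

```python
def inner_product_bit(x, s):
    """The Goldreich-Levin hardcore bit x.s mod 2 (bitwise AND, then parity)."""
    return bin(x & s).count("1") % 2

def decode_last_bit(fx, s, a, invert_f):
    """What a successful decoder acting only on R must in effect do to its last bit:
    it needs x = f^{-1}(fx) -- the computationally hard step -- before it can cancel
    the hardcore bit and leave the bit maximally correlated with B."""
    x = invert_f(fx)
    return a ^ inner_product_bit(x, s)
```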

Quantum Circuit Complexity and Wormholes

[A.-Susskind, in progress]

The AdS/CFT correspondence relates anti-de Sitter quantum gravity in D spacetime dimensions to conformal field theories (without gravity) in D-1 dimensions. But the mapping is extremely nonlocal!

It was recently found that an expanding wormhole, on the AdS side, maps to a collection of qubits on the CFT side that just seems to get more and more “complex”: 

|ψ_t⟩ = (I ⊗ U^t) ((|00⟩ + |11⟩)/√2)^⊗n
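As a toy illustration (entirely my own sketch: a Haar-random U standing in for the scrambling CFT dynamics, n = 3 qubits per side), the state |ψ_t⟩ itself is easy to write down numerically; the hard question is how its circuit complexity grows with t.

```python
import numpy as np

n = 3                                     # qubits per CFT (toy size)
dim = 2 ** n
rng = np.random.default_rng(0)

# Haar-random unitary as a stand-in for the scrambling dynamics U (my assumption)
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(X)

# n Bell pairs shared between the two sides: sum_x |x>|x> / sqrt(2^n)
bell = np.eye(dim).reshape(dim * dim) / np.sqrt(dim)

def psi(t):
    """|psi_t> = (I (x) U^t) |Bell pairs>, as a length-4^n state vector."""
    return np.kron(np.eye(dim), np.linalg.matrix_power(U, t)) @ bell

# The vector is easy to compute; the open question is how the circuit
# complexity C(|psi_t>) of preparing it grows with t.
print(np.round(psi(5)[:4], 3))
```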

Question:

What function of |ψ_t⟩ can we point to on the CFT side that's "dual" to wormhole length on the AdS side?

Susskind’s Proposal:

The quantum circuit complexity C(|ψ_t⟩)—that is, the number of gates in the smallest circuit that prepares |ψ_t⟩ from |0⟩^⊗n

(Not clear if it’s right, but has survived some nontrivial tests)

[Plot: conjectured growth of C(|ψ_t⟩), rising from 0 toward ~2^n as the time t runs up to ~2^n.]

But does C(|ψ_t⟩) actually increase like this, for natural scrambling dynamics U?

Theorem:

Suppose U implements (say) a computationally universal, reversible cellular automaton. Then after t = exp(n) iterations, C(|ψ_t⟩) is superpolynomial in n, unless something very unlikely happens with complexity classes (PSPACE ⊆ PP/poly).
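For concreteness, here is one standard way to build a reversible cellular automaton (my own example, not necessarily the construction the theorem has in mind): a "second-order" CA whose update is next = F(current) XOR previous, which is reversible for any local rule F; computational universality then depends on choosing F suitably.

```python
import numpy as np

def step(prev, curr, rule=110):
    """Second-order reversible CA update: next = F(curr) XOR prev, with F an
    elementary CA rule (rule 110 chosen arbitrarily here). Reversible because
    prev = F(curr) XOR next recovers the past from (curr, next)."""
    left, right = np.roll(curr, 1), np.roll(curr, -1)
    f = (rule >> (4 * left + 2 * curr + right)) & 1
    return curr, (f ^ prev).astype(np.uint8)

prev = np.zeros(32, dtype=np.uint8)
curr = np.zeros(32, dtype=np.uint8)
curr[16] = 1                      # a single seed cell
for _ in range(10):
    prev, curr = step(prev, curr)
print(curr)
```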

Proof Sketch:

I proved in 2004 that PP = PostBQP. Note that

|ψ_t⟩ = (1/√(2^n)) Σ_{x∈{0,1}^n} |x⟩ |U^t x⟩

Suppose C(|ψ_t⟩) = n^O(1). Then we could give a description of a circuit preparing |ψ_t⟩ to a PP = PostBQP machine, and the machine could prepare |ψ_t⟩, postselect the first register on some |x⟩ of interest, then measure the second register to learn U^t|x⟩—thereby solving a PSPACE-complete problem!

A Favorite Research Direction

Understand, more systematically, the quantum circuit complexity of preparing n-qubit states and applying unitary transformations

("not just for quantum gravity! also for quantum algorithms, quantum money, and so much more")

Example question:

For every n-qubit unitary U, is there a Boolean function f such that U can be realized by a polynomial-time quantum algorithm with an oracle for f?

(I'm giving you any computational capability f you could possibly want—but it's still far from obvious how to get the physical capability U!)

Easy to show: For every n-qubit state |ψ⟩, there's a Boolean function f such that |ψ⟩ can be prepared by a polynomial-time quantum algorithm with an oracle for f

A Related Grand Challenge

Can we classify all possible sets of quantum gates acting on qubits, in terms of which unitary transformations they approximately generate?

“Quantum Computing’s Classification of Finite Simple Groups” Warmup:

Classify all the possible Hamiltonians / Lie algebras. Even just on 1 and 2 qubits!

A.-Bouland 2014:

Every nontrivial two-mode beamsplitter is universal

Baby case that already took lots of representation theory…

The Classical Case

A.-Grier-Schaeffer 2015:

Classified all sets of reversible gates in terms of which reversible transformations F: {0,1}^n → {0,1}^n they generate (assuming swaps and ancilla bits are free). Examples: CNOT, Toffoli, Fredkin.

Schaeffer 2014:

The first known "physically universal" cellular automaton (able to implement any transformation in any bounded region, by suitably initializing the complement of that region). Solved an open problem of Janzing 2010.

Bonus: Rise and Fall of Complexity in Closed Thermodynamic Systems

Unlike entropy, “interesting structure” seems to first increase and then decrease as systems mix to equilibrium

Sean Carroll's example: [images of cream mixing into coffee]. But how to quantify this pattern?

"Apparent Complexity": entropy (as measured, e.g., by compressed file size) of a coarse-grained version of the image

The Coffee Automaton

A., Carroll, Mohan, Ouellette, Werness 2015:

A probabilistic n×n reversible system that starts half "coffee" and half "cream." At each time step, we randomly "shear" half the coffee cup horizontally or vertically (assuming a toroidal cup). We prove that the apparent complexity of this image has a rising-falling pattern, with a maximum of at least ~n^(1/6).

[Plot: apparent complexity vs. number of time steps, rising and then falling.]
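Below is a rough toy in the same spirit (entirely my own simplification: the shear rule and the coarse-grain-then-compress estimator are loose readings of the slide, not the paper's exact automaton or proof). It tracks the compressed size of a block-averaged image of the cup as it mixes.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
n = 64                                        # toy grid size
cup = np.zeros((n, n), dtype=np.uint8)
cup[n // 2:, :] = 1                           # bottom half coffee, top half cream

def shear(cup):
    """One loose reading of a time step: apply a discrete shear to a random half
    of the toroidal cup, cyclically shifting each row (or column) in that half
    by an amount proportional to its index."""
    out = cup.copy()
    sign = 1 if rng.integers(2) else -1
    lines = range(0, n // 2) if rng.integers(2) else range(n // 2, n)
    if rng.integers(2):
        for i in lines:
            out[i, :] = np.roll(out[i, :], sign * i)    # horizontal shear
    else:
        for j in lines:
            out[:, j] = np.roll(out[:, j], sign * j)    # vertical shear
    return out

def apparent_complexity(cup, block=8):
    """Compressed file size of a coarse-grained (block-averaged) image of the cup."""
    coarse = cup.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    return len(zlib.compress(np.round(coarse * 255).astype(np.uint8).tobytes()))

for t in range(3001):
    if t % 500 == 0:
        print(t, apparent_complexity(cup))
    cup = shear(cup)
# Note: this naive estimator also picks up sampling noise in the block averages,
# so the late-time decline is only approximate in this toy.
```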

Summary

Quantum computing established a remarkable intellectual bridge between computer science and physics

That’s always been why I’ve cared! Actual devices would be a bonus

My research agenda: to see just how much weight this bridge can carry.

Rebuilding physics in the language of computation won't be nearly as easy as some people (e.g., Wolfram) have thought! Not only does it require engaging our actual understanding of physics (QM, QFT, AdS/CFT…); it requires hard mathematical work, often making new demands on theoretical computer science. But I think it's ultimately possible.