Foundations of Cryptography Lecture 7: Message Authentication in the Manual Channel Model Lecturer: Gil Segev.


Foundations of Cryptography
Lecture 7:
Message Authentication in the Manual Channel Model
Lecturer: Gil Segev
Diffie-Hellman Key Agreement

Alice and Bob wish to agree on a secret key
Alice sends g^x to Bob; Bob sends g^y to Alice
Both parties compute K_A,B = g^xy

DDH assumption: {(g, g^x, g^y, g^xy)} is computationally indistinguishable from {(g, g^x, g^y, g^c)} for random x, y, and c
2
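To make the exchange above concrete, here is a minimal Python sketch of the key agreement. The modulus and generator are toy, illustrative choices, not a recommended group.

```python
import secrets

p = 2**127 - 1        # a Mersenne prime, used here only as a toy modulus
g = 3                 # assumed generator, for illustration only

x = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
y = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, x, p)      # Alice -> Bob:  g^x
B = pow(g, y, p)      # Bob -> Alice:  g^y

K_alice = pow(B, x, p)   # Alice computes (g^y)^x = g^(xy)
K_bob   = pow(A, y, p)   # Bob computes   (g^x)^y = g^(xy)
assert K_alice == K_bob  # both hold the same key K_A,B
```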
Diffie-Hellman Key Agreement

Alice and Bob wish to agree on a secret key
Alice sends g^x to Bob; Bob sends g^y to Alice
Both parties compute K_A,B = g^xy


DDH assumption: K_A,B is as good as a random secret
Secure against passive adversaries


Eve is only allowed to read the sent messages
Can now use K_A,B as a one-time pad: Alice sends K_A,B ⊕ z to Bob
3
Diffie-Hellman Key Agreement

Suppose now that Eve is an active adversary

“man-in-the-middle” attacker
Alice and Eve exchange g^x and g^a, so Alice and Eve share K_A,E = g^xa
Eve and Bob exchange g^b and g^y, so Eve and Bob share K_E,B = g^by
Completely insecure: Eve can decrypt z and then re-encrypt it
Alice sends K_A,E ⊕ z to Eve; Eve sends K_E,B ⊕ z to Bob
4
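A sketch of this man-in-the-middle attack, reusing the toy parameters from the earlier sketch; the XOR "one-time pad" on integer keys is a simplification for illustration only.

```python
import secrets

p, g = 2**127 - 1, 3                      # same toy parameters as before

x = secrets.randbelow(p - 2) + 1          # Alice's exponent
y = secrets.randbelow(p - 2) + 1          # Bob's exponent
a = secrets.randbelow(p - 2) + 1          # Eve's exponent towards Alice
b = secrets.randbelow(p - 2) + 1          # Eve's exponent towards Bob

# Eve intercepts g^x and g^y and answers with her own g^a and g^b.
K_AE = pow(pow(g, x, p), a, p)            # key Alice (unknowingly) shares with Eve
K_EB = pow(pow(g, y, p), b, p)            # key Eve shares with Bob

z = 0x5EC12E7                             # Alice's plaintext, a toy value
ct_to_eve = K_AE ^ z                      # Alice "encrypts" with her key
z_seen_by_eve = ct_to_eve ^ K_AE          # Eve decrypts z ...
ct_to_bob = K_EB ^ z_seen_by_eve          # ... and re-encrypts it for Bob
assert ct_to_bob ^ K_EB == z              # Bob decrypts and obtains z, none the wiser
```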
Diffie-Hellman Key Agreement

Suppose now that Eve is an active adversary

“man-in-the-middle” attacker
Alice and Eve exchange g^x and g^a, so they share K_A,E = g^xa
Eve and Bob exchange g^b and g^y, so they share K_E,B = g^by
Solution - Message authentication: Alice and Bob authenticate g^x and g^y
Problem - Authentication requires setup, such as:
 Shared secret key
 Public key infrastructure
5
Practical Scenario
6
Pairing of Wireless Devices
Scenario:
 Buy a new wireless camera
 Want to establish a secure channel for the first time
 E.g., Diffie-Hellman key agreement: the devices exchange g^x and g^y
7
Pairing of Wireless Devices
Cable pairing



 Simple
 Cheap
 Authenticated channel
"I thought this was a wireless camera…"
8
Pairing of Wireless Devices
Wireless pairing
Problem: Active adversaries (“man-in-the-middle”)
9
Pairing of Wireless Devices
Wireless pairing
Eve replaces the devices' g^x and g^y with her own g^a and g^b
Problem: Active adversaries (“man-in-the-middle”)
10
Message Authentication

Assure the receiver of a message that it has not been
changed by an active adversary
Alice sends m; Eve may replace it with a forged message m̂; Bob receives m̂
11
Pairing of Wireless Devices
Eve replaces g^x and g^y with g^a and g^b
Alice's transcript: m = g^x || g^a;  Bob's transcript: m̂ = g^b || g^y
12
Message Authentication

Assure the receiver of a message that it has not been
changed by an active adversary
Alice sends m; Eve may replace it with m̂; Bob receives m̂
 Without additional setup: impossible!
 Public key (signatures)? Problem: no trusted PKI
 Solution: the manual channel
13
The Manual Channel
Eve replaces g^x and g^y with g^a and g^b
The user can compare two short strings (e.g., both devices display "141")
14
Manual Channel Model
Alice (with input m) and Bob communicate; Alice also sends a short string s over the manual channel

 Insecure communication channel: interactive
 Low-bandwidth auxiliary channel:
   Non-interactive
   Enables Alice to "manually" authenticate one short string s
 Adversarial power:
   Choose the input message m
   Insecure channel: full control
   Manual channel: read, delay delivery timing
15
Manual Channel Model
Alice (with input m) and Bob communicate; Alice also sends a short string s over the manual channel

 Insecure communication channel: interactive
 Low-bandwidth auxiliary channel: non-interactive; enables Alice to "manually" authenticate one short string s
 Goal: minimize the length of the manually authenticated string
16
Manual Channel Model
Alice (with input m) and Bob communicate; Alice also sends a short string s over the manual channel

 No trusted infrastructure, such as:
   Public key infrastructure
   Shared secret key
   Common reference string
   ...
 Suitable for ad hoc networks:
   Pairing of wireless devices (Wireless USB, Bluetooth)
   Secure phones (AT&T, PGP, Zfone)
   Many more...
17
Why Is This Model Reasonable?
Implementing the manual channel:
 Compare two strings displayed by the devices
 Type a string, displayed by one device, into the other device
 Visual hashing
 Voice channel
21
The Naive Solution
Alice sends m to Bob over the insecure channel and H(m) over the manual channel

H - collision resistant hash function (e.g., SHA-256)


No efficient algorithm can find m̂ ≠ m s.t. H(m̂) = H(m) with noticeable probability
Any adversary that forges a message can be used to find a collision
for H
Alice sends m (possibly replaced by Eve with m̂) over the insecure channel, and H(m) over the manual channel
22
The Naive Solution
Are we done?

No. The output length of SHA-256 is too long (256 bits)

Cannot be easily compared or typed by humans
23
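A small sketch of the naive solution using Python's standard hashlib; it makes the length problem visible: the manually compared digest is 256 bits (64 hex characters).

```python
import hashlib

m = b"g^x || g^y transcript of the key agreement"   # placeholder message
digest = hashlib.sha256(m).hexdigest()

# 64 hex characters = 256 bits: far too long for a human to compare or type.
print(len(digest) * 4, "bits to compare manually:", digest)
```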
Previous Work

 [Rivest & Shamir `84]: The "Interlock" protocol
   Mutual authentication of public keys
   No trusted infrastructure
   AT&T, PGP, ..., Zfone
 [Vaudenay `05]:
   Formal model
   Computationally secure protocol for arbitrarily long messages
   log(1/ε) manually authenticated bits (ε = forgery probability) - optimal!
 [LAN `05, DDN `00]: Can be based on any one-way function (non-malleable commitments)
 Efficient implementations:
   Rely on a random oracle, or
   Assume a common reference string [DIO `98, DKOS `01]
24
Previous Work

All of the above rely on computational assumptions!
Are those really necessary?
25
Our Results - Tight Bounds
Setting: an n-bit message m, an ℓ-bit manually authenticated string s, forgery probability ε
No setup or computational assumptions

 Upper bound: a log*n-round protocol in which ℓ = 2log(1/ε) + O(1) - only twice as many bits as [V05]
 Matching lower bound: n ≥ 2log(1/ε) ⟹ ℓ ≥ 2log(1/ε) - 2
 One-way functions are necessary (and sufficient) for breaking the lower bound in the computational setting
26
Unconditional Security
Some advantages over computational security:

 Security against unbounded adversaries
 Exact evaluation of error probabilities
 Protocols are often
   easier to compose (e.g., with key agreement protocols)
   more efficient
27
Our Results - Tight Bounds
ℓ = 2log(1/)
ℓ
ℓ = log(1/)
One-way
functions
Unconditional
security
Computational
security
Impossible
log(1/)
28
Outline

 Security definition
 Our results
   The protocol
   Lower bound
   One-way functions are necessary for breaking the lower bound
 Conclusions
29
Security Definition
n-bit message m, ℓ-bit manually authenticated string s

Unconditionally secure (n, ℓ, k, ε)-authentication protocol:
 n-bit input message
 ℓ manually authenticated bits
 k rounds

 Completeness: no interference ⟹ for every m, Bob accepts m (with high probability)
 Unforgeability: for every m, Pr[ Bob accepts some m̂ ≠ m ] ≤ ε
30
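The two requirements can be written out as follows (a sketch in LaTeX; the slide leaves the completeness threshold as "high probability", so the bound 1 - δ below is only a placeholder):

```latex
% Completeness and unforgeability of an (n, \ell, k, \varepsilon)-authentication protocol.
% The completeness threshold 1 - \delta is a placeholder for "with high probability".
\begin{align*}
\text{Completeness: } & \forall m \in \{0,1\}^n:\;
  \Pr[\text{Bob accepts } m \mid \text{no interference}] \ge 1 - \delta \\
\text{Unforgeability: } & \forall m \in \{0,1\}^n,\ \forall\,\text{(unbounded) Eve}:\;
  \Pr[\text{Bob accepts some } \hat{m} \ne m] \le \varepsilon
\end{align*}
```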
Outline

 Security definition
 Our results
   The protocol
   Lower bound
   One-way functions are necessary for breaking the lower bound
 Conclusions
31
The Protocol (simplified)


 Based on the [GN93] hashing technique
 In each round, the parties:
   cooperatively choose a hash function
   reduce to authenticating a shorter message
 A short message is manually authenticated
Preliminaries:
For m = m_1 ... m_k ∈ GF[Q]^k and x ∈ GF[Q], let m(x) = Σ_{i=1}^{k} m_i x^i
Then, for any m̂ ≠ m and for any c, ĉ ∈ GF[Q],
Pr_{x ∈R GF[Q]} [ m(x) + c = m̂(x) + ĉ ] ≤ k/Q
(since such an x must be a root of the nonzero polynomial (m - m̂)(X) + (c - ĉ), which has degree at most k)
32
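A minimal Python sketch of this polynomial hash, assuming a prime Q so that arithmetic modulo Q is a field; the particular prime and block count are toy choices.

```python
import secrets

def poly_eval(blocks, x, Q):
    """m(x) = sum_{i=1..k} m_i * x^i over GF(Q)."""
    return sum(m_i * pow(x, i, Q) for i, m_i in enumerate(blocks, start=1)) % Q

Q = 2**61 - 1                                   # a Mersenne prime; toy field size
m = [secrets.randbelow(Q) for _ in range(8)]    # k = 8 message blocks in GF(Q)

x = secrets.randbelow(Q)                        # one party's contribution
c = secrets.randbelow(Q)                        # the other party's contribution
tag = (poly_eval(m, x, Q) + c) % Q              # the hash value m(x) + c
print(x, tag)
```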
The Protocol (simplified)
We hash m to x || m(x) + c, where one party chooses x and the other party chooses c
33
The Protocol (simplified)
Alice has input m and chooses a1 ∈R GF[Q1], a2 ∈R GF[Q2]; Bob chooses b1 ∈R GF[Q1], b2 ∈R GF[Q2]
Alice sends a1; Bob replies with b1, b2; Alice then manually authenticates m2

Both parties set: m0 = m
m1 = b1 || m0(b1) + a1
m2 = a2 || m1(a2) + b2
Bob accepts iff the manually authenticated m2 is consistent with his own computation

Q1 ≈ n/ε, Q2 ≈ log(n)/ε
m2 consists of two GF[Q2] elements: 2log(1/ε) + 2loglog(n) + O(1) manually authenticated bits
With k rounds, the 2loglog(n) term is reduced to 2log^(k-1)(n) (the (k-1)-times iterated logarithm)
34
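A sketch of one honest (attack-free) execution of the simplified protocol, following the message flow above; the field sizes, the encoding of m1 as an integer before re-blocking, and the helper names are illustrative assumptions.

```python
import secrets

def poly_eval(blocks, x, Q):
    """m(x) = sum_{i>=1} m_i * x^i over GF(Q)."""
    return sum(b * pow(x, i, Q) for i, b in enumerate(blocks, start=1)) % Q

def to_blocks(value, Q):
    """Encode an integer as a list of base-Q digits (elements of GF(Q))."""
    blocks = []
    while value:
        value, r = divmod(value, Q)
        blocks.append(r)
    return blocks or [0]

Q1, Q2 = 2**61 - 1, 2**31 - 1        # toy primes standing in for Q1 ~ n/eps, Q2 ~ log(n)/eps
m0 = secrets.randbits(1024)          # the long input message m

a1, a2 = secrets.randbelow(Q1), secrets.randbelow(Q2)   # Alice's random elements
b1, b2 = secrets.randbelow(Q1), secrets.randbelow(Q2)   # Bob's random elements

# m1 = b1 || m0(b1) + a1 over GF(Q1); m2 = a2 || m1(a2) + b2 over GF(Q2)
m1 = (b1, (poly_eval(to_blocks(m0, Q1), b1, Q1) + a1) % Q1)
m1_as_int = m1[0] * Q1 + m1[1]       # pack the pair into one integer before re-blocking
m2 = (a2, (poly_eval(to_blocks(m1_as_int, Q2), a2, Q2) + b2) % Q2)

print("manually authenticated string m2 (two GF[Q2] elements):", m2)
```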
Security Analysis


 Must consider all generic man-in-the-middle attacks
 Three attacks in our case

Attack #1:
Alice sends (m, a1) to Eve; Eve sends (m̂, â1) to Bob; Eve sends (b̂1, b̂2) to Alice; Bob sends (b1, b2) to Eve; Alice manually authenticates m2
35
Security Analysis


 Must consider all generic man-in-the-middle attacks
 Three attacks in our case

Attack #2:
Eve sends (m̂, â1) to Bob; Bob sends (b1, b2) to Eve; Alice sends (m, a1) to Eve; Eve sends (b̂1, b̂2) to Alice; Alice manually authenticates m2
36
Security Analysis


 Must consider all generic man-in-the-middle attacks
 Three attacks in our case

Attack #3:
Alice sends (m, a1) to Eve; Eve sends (b̂1, b̂2) to Alice; Alice manually authenticates m2 (Eve delays its delivery); Eve sends (m̂, â1) to Bob; Bob sends (b1, b2) to Eve; the delayed m2 is then delivered to Bob
37
Security Analysis – Attack #1
Alice sends (m, a1) to Eve; Eve sends (m̂, â1) to Bob; Eve sends (b̂1, b̂2) to Alice; Bob sends (b1, b2) to Eve; Alice manually authenticates m2

m0,A = m                          m0,B = m̂
m1,A = b̂1 || m0,A(b̂1) + a1        m1,B = b1 || m0,B(b1) + â1
m2,A = a2 || m1,A(a2) + b̂2        m2,B = a2 || m1,B(a2) + b2

Forgery requires m0,A ≠ m0,B and m2,A = m2,B, so the forgery probability is at most
Pr[ m1,A = m1,B ] + Pr[ m1,A ≠ m1,B and m2,A = m2,B ] ≤ ε/2 + ε/2
38
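In LaTeX form, the case split behind the last line is as follows (each term is bounded by the polynomial-hash collision probability, using Q1 ≈ n/ε and Q2 ≈ log(n)/ε):

```latex
% Forgery in Attack #1 requires m_{0,A} \ne m_{0,B} yet m_{2,A} = m_{2,B};
% split according to whether the intermediate messages already collided.
\begin{align*}
\Pr[\, m_{0,A} \ne m_{0,B} \,\wedge\, m_{2,A} = m_{2,B} \,]
  &\le \Pr[\, m_{1,A} = m_{1,B} \,]
      + \Pr[\, m_{1,A} \ne m_{1,B} \,\wedge\, m_{2,A} = m_{2,B} \,] \\
  &\le \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} \;=\; \varepsilon .
\end{align*}
```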
Security Analysis – Attack #1
Alice sends (m, a1) to Eve; Eve sends (m̂, â1) to Bob; Eve sends b̂1 to Alice; Bob sends b1 to Eve

m0,A = m                          m0,B = m̂
m1,A = b̂1 || m0,A(b̂1) + a1        m1,B = b1 || m0,B(b1) + â1

Claim: Pr[ m1,A = m1,B ] ≤ ε/2
 If Eve chooses b̂1 ≠ b1, then m1,A ≠ m1,B
 If Eve chooses b̂1 = b1, then Pr[ m1,A = m1,B ] = Pr[ m0,A(b1) + a1 = m0,B(b1) + â1 ] ≤ ε/2
39
Outline

 Manual channel model
 Our results
   The protocol
   Lower bound
   One-way functions are necessary for breaking the lower bound
 Conclusions
40
Lower Bound
Alice (with input m) sends x1; Bob replies with x2; Alice manually authenticates s

m ∈R {0,1}^n ⟹ M, X1, X2, S are well-defined random variables
41
Lower Bound
Alice sends M and X1; Bob sends X2; Alice manually authenticates S

Goal: H(S) ≥ 2log(1/ε)
Basic information theory:
 Shannon entropy:            H(X) = - Σ_x p(x) log p(x)
 Conditional entropy:        H(X | Y) = E_y H(X | Y=y)
 Mutual information:         I(X ; Y) = H(X) - H(X | Y)
 Cond. mutual information:   I(X ; Y | Z) = H(X | Z) - H(X | Y, Z)
42
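A toy numeric check of these definitions in Python, for an arbitrary small joint distribution (the conditional entropy is computed via the equivalent chain-rule form H(X|Y) = H(X,Y) - H(Y)):

```python
from math import log2

# An arbitrary small joint distribution p(x, y); any normalized table works here.
pxy = {("a", 0): 0.25, ("a", 1): 0.25, ("b", 0): 0.40, ("b", 1): 0.10}

px, py = {}, {}
for (x, y), p in pxy.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

H_X  = -sum(p * log2(p) for p in px.values())     # H(X)  = -sum p(x) log p(x)
H_Y  = -sum(p * log2(p) for p in py.values())     # H(Y)
H_XY = -sum(p * log2(p) for p in pxy.values())    # H(X, Y)

H_X_given_Y = H_XY - H_Y        # equals E_y H(X | Y = y), by the chain rule
I_XY = H_X - H_X_given_Y        # I(X ; Y) = H(X) - H(X | Y)

print(round(H_X, 4), round(H_X_given_Y, 4), round(I_XY, 4))
```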
Lower Bound
Alice sends M and X1; Bob sends X2; Alice manually authenticates S

Goal: H(S) ≥ 2log(1/ε)
Evolving intuition:
 The parties must use at least log(1/ε) random bits
 Each party must use at least log(1/ε) random bits
 Each party must independently reduce H(S) by log(1/ε) bits

H(S) = [H(S) - H(S | M, X1)] + [H(S | M, X1) - H(S | M, X1, X2)] + H(S | M, X1, X2)
     = I(S ; M, X1) + I(S ; X2 | M, X1) + H(S | M, X1, X2)
43
Lower Bound
Alice sends M and X1; Bob sends X2; Alice manually authenticates S

Goal: H(S) ≥ 2log(1/ε)
Evolving intuition:
 The parties must use at least log(1/ε) random bits
 Each party must use at least log(1/ε) random bits
 Each party must independently reduce H(S) by log(1/ε) bits

H(S) = I(S ; M, X1)        [Alice's randomness]
     + I(S ; X2 | M, X1)   [Bob's randomness]
     + H(S | M, X1, X2)
44

Lower Bound
Alice sends M and X1; Bob sends X2; Alice manually authenticates S

Goal: H(S) ≥ 2log(1/ε)
Lemma 1: I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε)
Lemma 2: I(S ; X2 | M, X1) ≥ log(1/ε)

H(S) = I(S ; M, X1)        [Alice's randomness]
     + I(S ; X2 | M, X1)   [Bob's randomness]
     + H(S | M, X1, X2)
45
Proof of Lemma 1
Consider the following attack:
Eve runs one session with Alice (Eve supplies the input m, receives m and x1, and replies with x̂2)
and one session with Bob (Eve sends m̂ and x̂1, and receives x2)
Eve wants Alice to manually authenticate ŝ

Eve acts as follows:
 Chooses m̂ ∈R {0,1}^n
 Chooses m ∈R {0,1}^n
 Samples x̂2 from the distribution of X2 given m, x1 and ŝ (if Pr[ ŝ | m, x1 ] = 0, Eve quits)
 Forwards s and hopes that s = ŝ
46
Proof of Lemma 1
By the protocol requirements:
ε ≥ Pr[ s = ŝ and m ≠ m̂ ] ≥ Pr[ s = ŝ ] - 2^-n
Since n ≥ log(1/ε), we get 2ε ≥ Pr[ s = ŝ ]

Claim: Pr[ s = ŝ ] ≥ 2^-{ I(S ; M, X1) + H(S | M, X1, X2) }
which implies
I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε) - 1
47
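Combining the two displayed inequalities and taking base-2 logarithms gives the stated bound:

```latex
\begin{align*}
2\varepsilon \;\ge\; \Pr[\, s = \hat{s} \,] \;\ge\; 2^{-\,\left( I(S;\,M,X_1) + H(S \mid M,X_1,X_2) \right)}
\;\;\Longrightarrow\;\;
I(S;\,M,X_1) + H(S \mid M,X_1,X_2) \;\ge\; \log\tfrac{1}{\varepsilon} - 1 .
\end{align*}
```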
Lower Bound
Alice sends M and X1; Bob sends X2; Alice manually authenticates S

Goal: H(S) ≥ 2log(1/ε) - 2
Lemma 1: I(S ; M, X1) + H(S | M, X1, X2) ≥ log(1/ε) - 1
Lemma 2: I(S ; X2 | M, X1) ≥ log(1/ε) - 1

H(S) = I(S ; M, X1)        [Alice's randomness]
     + I(S ; X2 | M, X1)   [Bob's randomness]
     + H(S | M, X1, X2)
48
Outline

 Manual channel model
 Our results
   The protocol
   Lower bound
   One-way functions are necessary for breaking the lower bound
 Conclusions
49
One-Way Functions
Theorem: One-way functions are necessary for breaking the 2log(1/ε) lower bound in the computational setting

No one-way functions ⟹ the attacks of the lower bound can be carried out by a poly-time adversary
50
Recall: Proof of Lemma 1
Consider the following attack:
Eve runs one session with Bob (sending m̂ and x̂1, receiving x2) and one session with Alice (receiving m and x1, replying x̂2); Alice manually authenticates s

Eve acts as follows:
 Chooses m̂ ∈R {0,1}^n
 Chooses m ∈R {0,1}^n
 Samples x̂2 from the distribution of X2 given m, x1 and ŝ  ←  this step amounts to randomly inverting a function
 Forwards s
51
One-Way Functions

 One-way functions:
   Easy to compute
   Hard to invert given the image of a random input (hard to find even one inverse)
 Distributionally one-way functions [IL89]:
   Easy to compute
   Hard to randomly invert given the image of a random input (it may be easy to find some inverses)

 Any one-way function is also distributionally one-way
 [IL89]: The existence of both primitives is equivalent
52
One-Way Functions

 Eve has to sample X2 given m, x1 and s

f(m, rA, rB) = (m, x1, x2, s)   where m is the message, rA Alice's coins, rB Bob's coins, and (x1, x2, s) the transcript of the protocol
g(m, rA, rB) = (m, x1, s)
53
One-Way Functions

Eve has to sample X2 given m, x1 and s
f(m, rA, rB) = (m, x1, x2, s),  g(m, rA, rB) = (m, x1, s)

 g is not distributionally one-way ⟹ Eve can randomly invert g and apply f to compute x2
 The sampled preimage is ε-statistically close to uniform
 Bob cannot distinguish between the two executions with significant probability
54
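A toy Python illustration of this sampling step: over a tiny domain, a uniformly random preimage of g can be found by brute force, and applying f to it yields a plausible x2. The functions f and g below are made-up stand-ins, not the protocol's actual transcript functions.

```python
import random
from itertools import product

# Made-up stand-ins for the transcript functions (NOT the real protocol):
# f maps (m, rA, rB) to the full transcript, g to Eve's partial view.
def f(m, rA, rB):
    x1 = m ^ rA
    x2 = rB ^ x1
    s = (m + rA + rB) % 16
    return (m, x1, x2, s)

def g(m, rA, rB):
    m, x1, x2, s = f(m, rA, rB)
    return (m, x1, s)

def sample_preimage(image):
    """A uniformly random preimage of `image` under g, found by brute force."""
    candidates = [w for w in product(range(16), repeat=3) if g(*w) == image]
    return random.choice(candidates) if candidates else None

m, rA, rB = 3, 7, 12
view = g(m, rA, rB)                 # what Eve sees: (m, x1, s)
preimage = sample_preimage(view)    # randomly invert g ...
x2_sample = f(*preimage)[2]         # ... and apply f to read off a plausible x2
print(view, x2_sample)
```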
Conclusions


 Manual channel
   Computational assumptions are not necessary
   Protocol
   Matching lower bound
   Sharp threshold between unconditional and computational security

[Figure: ℓ versus log(1/ε) - below ℓ = log(1/ε): impossible; between log(1/ε) and 2log(1/ε): computational security (one-way functions); at and above 2log(1/ε): unconditional security]
55
Reference

Moni Naor, Gil Segev, and Adam Smith. Tight Bounds for Unconditionally Secure Authentication Protocols in the Manual Channel and Shared Key Models. Advances in Cryptology - CRYPTO 2006.
56