Slides
Protocol Security in the Presence of
Compromising Adversaries
David Basin
ETH Zurich
Joint work with Cas Cremers
Evolution of formal methods for security protocols
 1980s
 Early systems: Interrogator, NRL, Ina Jo
 Support for simulation and search
 Protocols as “fruit flies” of formal methods
 1990s
 Lowe's attack on NSPK (1996)
 Dozens of protocols analyzed, e.g., Clark-Jacob library.
 Proliferation of methods and verification tools
 FDR/Casper, Paulson's Inductive Method, Athena, ...
 2000-present
 Maturation of tools: ProVerif, OFMC, AVISPA, Scyther
 “Alice and Bob exchange a key” is “solved”
… at least for medium-sized protocols using basic crypto
Basis: Dolev Yao adversary model
 Models an active intruder with
full network control and perfect recall
 Idealized black-box cryptography
 Successful: interesting theory and powerful tools
Going beyond the Dolev Yao adversary
Problems and challenges
 Strong abstraction
 Terms like hash(m) rather than bit-strings like 1101
 Possibilistic definitions
→ cryptographically-faithful abstractions (great progress)
 Algebraic equations, e.g., (g^x)^y = (g^y)^x
→ Intruder deductions modulo equations (good progress)
This talk
 Corruptions: modeling compromise of keys, ...
→ Adversaries control more than just the network!
Why study corruption?
 Security is relative to the powers of an adversary
 Real adversaries might...
 Break into a machine and extract its disk drive
 Read out memory
 Cryptanalyze keys or attack side channels
 Bribe or bully other agents
 And all of this could happen at any time!
 Flip side: rings of protection in hardware/software
 TPMs, HSMs, smart cards and tokens vs. main memory, etc.
Formal foundations? Verification methods and tools?
An example
 Consider the following protocol
 Does it make a difference if:
 K is a symmetric key used for symmetric encryption, or
 K is a public key from a freshly generated public/private key pair, used
for asymmetric encryption?
 Yes: Perfect Forward Secrecy
PFS is the property that an attacker cannot decrypt a completed protocol session
even after later compromising the long-term cryptographic secrets of both sides.
N.B.: in practice, PFS is typically achieved with some variant of Diffie-Hellman.
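To make the difference concrete, here is a minimal symbolic sketch (my own illustration, not the slide's formal model): the adversary records the ciphertext carrying K and later learns the long-term private keys. If K was encrypted under Bob's long-term public key, the recorded session opens; if it was encrypted under a freshly generated and subsequently deleted key pair, it does not.

```python
# Toy symbolic sketch (illustrative only): which recorded ciphertexts open
# once the adversary later learns the long-term private keys?

def can_decrypt(ciphertext, known_keys):
    """A term ('enc', payload, needed_key) opens iff needed_key is known."""
    _, payload, needed_key = ciphertext
    return payload if needed_key in known_keys else None

# Variant 1: K sent under Bob's long-term public key (opens with sk(B)).
msg_longterm = ("enc", "K", "sk(B)")
# Variant 2: K sent under a fresh one-time public key; its private part
# ("esk") is deleted after the session and never becomes long-term data.
msg_fresh = ("enc", "K", "esk")

revealed_later = {"sk(A)", "sk(B)"}               # long-term key compromise
print(can_decrypt(msg_longterm, revealed_later))  # 'K'  -> no forward secrecy
print(can_decrypt(msg_fresh, revealed_later))     # None -> session stays secret
```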
Overview
 Formal symbolic model for protocols
 Modular semantics capturing different adversarial notions
 Tool support
 Results and a demo
 Conclusions and future work
This is a talk about models, not logic per se
Terms, roles, and protocols
 Terms: operators for constructing cryptographic messages
 Roles: sequences of agent events
 Example
Threads
 A thread is a role instance
 No limit to number of threads
 Each thread is assigned a unique identifier from the set TID.
 We instantiate names and syntactically bind fresh values and
variables to their owning thread, e.g., K#1, y#1
 For currently active threads, we store the remaining sequence
of steps in a thread pool th.
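As a concrete rendering (a sketch with my own naming, not the paper's definitions), a thread pairs a TID with the remaining steps of its role, and fresh values and variables are tagged with the owning TID:

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    """A role instance: a unique thread identifier plus the remaining role steps."""
    tid: int
    role: str                                   # e.g. "Initiator"
    steps: list = field(default_factory=list)   # remaining send/recv/claim events

def localize(name, tid):
    """Syntactically bind a fresh value or variable to its owning thread."""
    return f"{name}#{tid}"

# The thread pool th maps TIDs to (threads with) their remaining steps.
th = {1: Thread(1, "Initiator", ["send_1", "recv_2", "send_3"])}
print(localize("K", 1), localize("y", 1))       # K#1 y#1
```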
Core symbolic model
 State (tr, IK, th, σ)
 tr: trace of events that have occurred
 IK: “intruder knowledge” of adversary
 th: thread pool, mapping thread identifiers to remaining steps
 σ: substitution mapping variables to ground terms
 LTS modeling agents' threads and (outside) adversary
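A compact sketch of this state shape (illustrative code; the actual transition rules are in the paper and the backup slides). The one transition shown reflects that the adversary controls the network, so every sent message extends IK:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    """Symbolic execution state (tr, IK, th, sigma)."""
    tr: list = field(default_factory=list)     # trace of events so far
    IK: set = field(default_factory=set)       # adversary ("intruder") knowledge
    th: dict = field(default_factory=dict)     # thread pool: TID -> remaining steps
    sigma: dict = field(default_factory=dict)  # substitution: variables -> ground terms

def step_send(state, tid, msg):
    """Illustrative send transition: record the event, give the message to the
    adversary (who controls the network), and advance the sending thread."""
    state.tr.append(("send", tid, msg))
    state.IK.add(msg)
    state.th[tid] = state.th[tid][1:]
    return state

s = State(IK={"A", "B", "pk(A)", "pk(B)"}, th={1: ["send_1", "recv_2"]})
step_send(s, 1, "{A, na#1}pk(B)")
print(s.tr, s.IK)
```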
Reasoning about protocol semantics (LTS)
 General complexity
 Reachability properties are undecidable, e.g., secrecy
(Durgin, Lincoln, Mitchell, Scedrov 1999)
 NP-hard, even when the number of sessions is bounded
(Rusinowitch, Turuani, 1999)
 Scyther tool often successful in protocol analysis
Tool workflow: a description of the security protocol plus security properties
(reachability) is given to the tool, which reports either “Secure” or
“Insecure” together with an attack example.
Demo (NSPK, 1978)
Nonces should be shared secrets between authenticated parties.
Now let's look at this in Scyther
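For reference (a recap from the literature, not the demo script itself), the core of the Needham-Schroeder public-key protocol is:

A → B : {A, na}pk(B)
B → A : {na, nb}pk(A)
A → B : {nb}pk(B)

Lowe's man-in-the-middle attack exploits that message 2 does not name the responder; the fixed protocol NSL adds B's identity, B → A : {na, nb, B}pk(A).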
Why are these protocols so difficult to get right?
Honesty versus corruption
 Note the adversary's use of Charlie's key
 Corresponds to static corruption or
an inside adversary
 Adversary learns agent's long-term key before session starts
 Security guarantees only in terms of honest agents
 If the adversary compromises everyone, secrecy (or authentication)
cannot be achieved
In general, data may be compromised in many ways!
Compromise in cryptographic models (AKE)
 Security modeled as an adversary playing a game
 Start a new thread or ask a thread to process a message
 Compromise queries revealing long-term keys, session keys,
thread state, or randomness/ephemeral keys
 Choose a (test) session to perform the test query
 Can adversary distinguish the session key from a random key?
 Adversary wins if his advantage is non-negligible
 Various side conditions restricting test queries
 An AKE protocol is secure if no efficient (PPT) adversary
has more than a negligible advantage in winning this game
See: Bellare Rogaway 93,95; Bellare Pointcheval Rogaway 2000; Shoup;
Canetti Krawczyk 2001; Canetti (UC) 2001-... ; LaMacchia et al 2007; …
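A highly schematic sketch of the game's shape (illustrative names only; the cited models differ in exactly which reveal queries are allowed and in the freshness side conditions on the test session). Compromise queries for long-term keys, state, and randomness would be further oracles in the same style:

```python
import secrets

class AKEGame:
    """Schematic AKE indistinguishability game (a sketch, not any specific model)."""
    def __init__(self):
        self.session_keys = {}                  # session id -> established key

    def activate(self, sid):
        """Start/drive a session; here it simply installs a fresh session key."""
        self.session_keys[sid] = secrets.token_bytes(16)

    def reveal_session_key(self, sid):          # a compromise query
        return self.session_keys[sid]

    def test(self, sid):
        """Test query: return the real session key or a random one, per a hidden bit."""
        self.bit = secrets.randbelow(2)
        return self.session_keys[sid] if self.bit else secrets.token_bytes(16)

    def wins(self, guess):
        """The adversary wins by guessing the bit while respecting the model's
        side conditions (e.g. the test session must remain 'fresh')."""
        return guess == self.bit

game = AKEGame()
game.activate("s1")
challenge = game.test("s1")              # the adversary must distinguish this key
print(game.wins(secrets.randbelow(2)))   # random guessing: zero advantage on average
```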
Co-evolution of adversary models and protocols
BKE
A → B : A, {B, na}pk(B)
B → A : {B, H(na), nb, K}pk(A)
A → B : {H(nb)}K
session key: K

Signed DH
A → B : {B, g^na}sk(A)
B → A : {A, g^nb}sk(B)
session key (A): g^(nb·na)
session key (B): g^(na·nb)

HMQV (simplified)
A → B : g^na
B → A : g^nb
session key (A): H((g^nb · pk_B^e)^(na + d·sk_A))
session key (B): H((g^na · pk_A^d)^(nb + e·sk_B))
These key exchange protocols are all “correct” in symbolic models.
Finer distinctions possible using cryptographic models.
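As a toy numeric check of the Signed DH session key above (tiny parameters, no real security): both sides obtain the same key because exponentiation commutes, and since na and nb are ephemeral and deleted, a later reveal of the long-term signing keys does not expose the key.

```python
# Toy parameters only; a real instantiation uses a large prime-order group.
p, g = 23, 5               # public group parameters
na, nb = 6, 15             # ephemeral exponents chosen by A and B

msg_from_A = pow(g, na, p)         # g^na, signed with sk(A) in the protocol
msg_from_B = pow(g, nb, p)         # g^nb, signed with sk(B)

key_A = pow(msg_from_B, na, p)     # (g^nb)^na
key_B = pow(msg_from_A, nb, p)     # (g^na)^nb
assert key_A == key_B              # g^(nb*na) = g^(na*nb)
print(key_A)
```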
Problems with existing cryptographic models
 Many adversarial notions proposed for AKE
 Details embedded in monolithic security notions, specialized for AKE
 No agreement about the details and each model is a bit different
 But details influence protocol design and correctness!
 Protocol analysis is complex, tedious, and error prone
Our approach
 Extend symbolic models with modular definitions of
adversarial capabilities
 Extract common elements from cryptographic models
 Abstract and generalize where possible
 Provide tool support for analyzing and comparing
protocols with respect to entire model family
Cast of characters
We consider some thread executed by Alice (the actor) in which she
tries to communicate with Bob (the peer).
We refer to this as the test thread; Alice is the actor of this thread.
Bob is the peer of Alice and may be executing the (intended) partner thread.
Alice and Bob may execute other threads in parallel, e.g., communicating with
other agents such as Eve.
How much information can the adversary get without jeopardizing the
security of the test thread?
How much information can be compromised?
Figure: the test thread of Alice (actor) and the partner thread of Bob (peer),
alongside other threads such as Eve's; the data that may be compromised includes
random numbers (nonces or keys) and the local state of a thread.
Dimensions of compromise
 When: before, during, or after test session
 Whose data: actor, peers, or others
 Which data: reveal long-term keys, session keys, state (of
thread), or randomness
First distinction: long-term versus short-term data
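One way to record these dimensions as data (a sketch; the capability names loosely follow the LKR/SKR/SR/RNR abbreviations used later in the talk, and the exact rule set is the paper's):

```python
from enum import Enum, auto

class When(Enum):                 # when, relative to the test session
    BEFORE = auto(); DURING = auto(); AFTER = auto()

class Whose(Enum):                # whose data
    ACTOR = auto(); PEER = auto(); OTHERS = auto()

class Which(Enum):                # which data is revealed
    LONG_TERM_KEY = auto()        # LKR
    SESSION_KEY = auto()          # SKR
    STATE = auto()                # SR
    RANDOMNESS = auto()           # RNR

# An adversary model is then a set of allowed (which, whose, when) reveals.
static_corruption = {(Which.LONG_TERM_KEY, Whose.OTHERS, When.BEFORE)}
stronger_model = static_corruption | {(Which.SESSION_KEY, Whose.PEER, When.AFTER)}
print(static_corruption <= stronger_model)   # True: capability sets are ordered by inclusion
```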
Reveal long-term data: whose, when
Partnering
Partners(test thread) computes the intended partner threads in trace tr
Reveal short-term data: whose, which
Results in a hierarchy of adversary models
Different rule combinations yield 96 distinct adversary models
Recasting existing models
… plus dozens of new models
Factoring security properties
Split each security property into an adversary model and a pure security property.
Recombining the parts yields both known and new properties.
Tool demo — authentication and key exchange
 Needham-Schroeder: with and without inside attackers
 NSL: insiders, long-term key compromise
 Signed Diffie-Hellman: long-term key compromise
Applications of the tool
Many new (⁎) and rediscovered (√) attacks
 Nontrivial analysis
 Previously by hand: 1 attack = 1 publication
 Now tool-based: automatic, within seconds
 Can determine the strength of a protocol (with respect to 96 different models),
establishing or disproving relationships between protocols
Ordering protocols — an example
 Define P1 ≤ P2 iff for all adversary models where there is an
attack on P2, there is also an attack on P1
 Jeong, Katz, Lee [2004,2008]: Protocols TS1, TS2, and TS3
“We present three provably-secure protocols for two-party
authenticated key exchange. Our first, most efficient protocol
provides key independence but not forward secrecy. Our second
scheme additionally provides forward secrecy but requires some
additional computation. […] Our final protocol provides even
stronger security guarantees than our second protocol.”
 So, we conclude TS1 < TS2 < TS3
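This ordering is easy to automate once, for each protocol, the set of adversary models admitting an attack is known (the tool computes these; the data below is purely hypothetical):

```python
# Hypothetical per-protocol attack results over a family of adversary models.
attacks = {
    "P1": {"modelA", "modelB", "modelC"},
    "P2": {"modelB"},
}

def leq(p1, p2):
    """P1 <= P2 iff every adversary model with an attack on P2 also has one on P1."""
    return attacks[p2] <= attacks[p1]

print(leq("P1", "P2"), leq("P2", "P1"))   # True False: here P1 is strictly weaker
```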
State-of-the-art in comparing Key Exchange protocols
 Boyd, Cliff, Nieto, Paterson [2008]: BCNP1, BCNP2.
“The PKI versions of our protocols also compare favourably with the
existing ones of Jeong et al. [19] and Okamoto [30]. Protocol 2
(BCNP2) provides more security than Jeong's protocol (TS2) and
the same as Okamoto's. When instantiated with Kiltz' PKI-based
KEM [21] Protocol 2 is slightly less efficient than Okamoto's.”
 Conclusion: TS2 < BCNP2
(Protocol figures on the slide: BCNP1, BCNP2, TS2)
Security hierarchy
TS3: { LKRnotgroup, LKRafter, SKR, SR, RNR }
BCNP2: { LKRnotgroup, LKRactor, LKRaftercorrect, SKR, SR }
BCNP1: { LKRnotgroup, LKRactor, SKR, SR }
TS2: { LKRnotgroup, LKRaftercorrect, SKR, SR }
     { LKRnotgroup, SKR, SR, RNR }
TS1: { LKRnotgroup, SKR, SR, RNR }
Automatic analysis shows
 TS3 is incomparable to TS1 and TS2
 BCNP2 is incomparable to TS2
 Stronger properties could have been proven of TS3
Shouldn't we decide first what kind of security we want
before comparing exponentiations?
Conclusions
 Modular approach, generalizing existing notions
 Bridges important gap between crypto and formal models
 Tool support: first tool systematically supporting a wide
range of security notions from the cryptographic literature
 Analyze individual protocols
 Compare their relative strength
 Simple decomposition of security notions, separating
security properties from adversary model
 Paves the way for more detailed analysis of other (trace) properties
 Next step: computational variants of our models
 Aim for modular (computational) proofs with respect to capabilities
Questions?
Backup: agent rules
Security as reachability properties
 Secrecy of a nonce or key k (slightly simplified)
 Authentication (various forms, e.g., aliveness)
 If, in a reachable state, agent A has finished her role apparently
with B, then B was alive (i.e., executed some events).
 More complex notions can also be formalized such as
agreement. [Lowe 97, Blanchet 02, AVISPA tool]
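The precise formulas are on the original slide and in the paper; as a hedged reconstruction of the usual shape, secrecy can be checked as a predicate over reachable states (tr, IK, th, σ): whenever a thread claims k secret (and the relevant agents are not compromised under the chosen adversary model), the instantiated k must not be in IK.

```python
def secrecy_holds(reachable_states, k):
    """Sketch (simplified): secrecy of k as a reachability property; the real
    definition is additionally conditioned on the adversary model's reveals."""
    for (tr, IK, th, sigma) in reachable_states:
        claimed = any(ev[0] == "claim_secret" and ev[2] == k for ev in tr)
        if claimed and sigma.get(k, k) in IK:
            return False        # reachable state leaking k: an attack
    return True

# Tiny example: a single reachable state where k was claimed secret yet leaked.
bad_state = ([("claim_secret", 1, "nb#1")], {"nb#1"}, {}, {})
print(secrecy_holds([bad_state], "nb#1"))   # False
```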