# ANR Project TECAP

TECAP: Protocol Analysis - Combining Existing Tools

`ANR-17-CE39-0004-01`

## Kick-off meeting

The kick-off meeting of the ANR Project TECAP will be held on March 14, 2018 in Cachan.

### Venue

The meeting will be held in the Chemla auditorium of the ENS-Cachan campus. Directions are available here.

### Program

- 9:30 Welcome coffee
- 9:50 Vincent Cheval: Short welcome
- 10:00 Adrien Koutsos: Deciding Indistinguishability: A Decision Result for a Set of Cryptographic Game Transformations - slides
- 10:40 Bruno Blanchet: Composition Theorems for CryptoVerif and Application to TLS 1.3 - slides
- 11:20 Pierre-Yves Strub: Proving Expected Sensitivity of Probabilistic Programs
- 12:00 Lunch
- 13:15 Itsaka Rakotonirina: Verifying equivalences of finite processes - slides
- 13:55 Joseph Lallemand: Equivalence Properties by Typing in Cryptographic Branching Protocols - slides
- 14:35 Coffee break
- 15:00 Charlie Jacomme: An extensive formal analysis of multi factor authentication protocols
- 15:40 David Baelde: POR for Security Protocols Equivalences: Beyond Action-Determinism

### Abstracts

Computational indistinguishability is a key property in cryptography and in the verification of security protocols. Current tools for proving it rely on cryptographic game transformations.
We follow Bana and Comon's approach, axiomatizing what an adversary cannot do. We prove the decidability of a set of first-order axioms that are both computationally sound and expressive enough. This can be viewed as the decidability of a family of cryptographic game transformations. Our proof relies on term rewriting and automated deduction techniques.
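As an illustration, a representative axiom in the Bana–Comon style (a simplified rendering, not the exact axiom set of the talk) states that encryption under a fresh key hides the plaintext:

```latex
% Simplified IND-CPA-style axiom: if the key k is fresh (used only
% for encryption), no adversary can distinguish the encryption of m
% from the encryption of zeros of the same length, in any context u:
\vec{u},\; \{m\}_k \;\sim\; \vec{u},\; \{0^{|m|}\}_k
```

Decidability is then the question of whether a given indistinguishability goal follows from a set of such first-order axioms.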

We present composition theorems for security protocols, to compose a key exchange protocol and a symmetric-key protocol that uses the exchanged key. Our results rely on the computational model of cryptography and are stated in the framework of the tool CryptoVerif. They support key exchange protocols that guarantee injective or non-injective authentication. They also allow random oracles shared between the composed protocols. To our knowledge, they are the first composition theorems for key exchange stated for a computational protocol verification tool, and also the first to allow such flexibility.

As a case study, we apply our composition theorems to a proof of TLS 1.3 Draft-18. This work fills a gap in a previous paper that informally claims a compositional proof of TLS 1.3, without formally justifying it.

Program sensitivity, also known as Lipschitz continuity, describes how
small changes in a program's input lead to bounded changes in the
output. We propose an average notion of program sensitivity for
probabilistic programs---expected sensitivity---that averages a
distance function over a probabilistic coupling of two output
distributions from two similar inputs. By varying the distance,
expected sensitivity recovers useful notions of probabilistic function
sensitivity, including stability of machine learning algorithms and
convergence of Markov chains.
Furthermore, expected sensitivity satisfies clean compositional
properties and is amenable to formal verification.

In this talk, I'll present a relational program logic called 𝔼pRHL for proving expected sensitivity properties. The logic features two key ideas. First, relational pre-conditions and post-conditions are expressed using distances, a real-valued generalization of typical boolean-valued (relational) assertions. Second, judgments are interpreted in terms of expectation coupling, a novel, quantitative generalization of probabilistic couplings which supports compositional reasoning.
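One way to state the definition sketched above (my paraphrase of the standard formulation, with $L$ the sensitivity bound):

```latex
% A probabilistic program c is L-expectedly sensitive with respect to
% an input distance d_in and an output distance d_out if, for all
% inputs x, x', some coupling mu of the two output distributions has
% expected output distance bounded by the input distance:
\exists \mu \in \mathrm{Coupling}\big(c(x),\, c(x')\big):\quad
\mathbb{E}_{(y_1, y_2) \sim \mu}\big[\, d_{\mathrm{out}}(y_1, y_2) \,\big]
\;\le\; L \cdot d_{\mathrm{in}}(x, x')
```

Varying $d_{\mathrm{out}}$ is what recovers the different instances mentioned above, such as algorithmic stability and Markov chain convergence.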

Automated verification has become an essential part in the security evaluation of cryptographic protocols. Recently, there has been a considerable effort to lift the theory and tool support that existed for reachability properties to the more complex case of equivalence properties. In this paper we contribute both to the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence and labelled bisimilarity and provide a decision procedure for these equivalences in the case of a bounded number of sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives---those that can be represented by a subterm convergent destructor rewrite system. We implemented the procedure in a new tool, DeepSec. We showed through extensive experiments that it is significantly more efficient than other similar tools, while at the same time broadening the scope of the protocols that can be analysed.
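For intuition, a typical subterm convergent destructor rewrite system (standard textbook rules, not taken from the talk) models symmetric decryption and pairing; each right-hand side is a subterm of its left-hand side, which is the property the decision procedure exploits:

```latex
\mathrm{sdec}(\mathrm{senc}(x, y),\, y) \rightarrow x \\
\mathrm{fst}(\langle x, y \rangle) \rightarrow x \\
\mathrm{snd}(\langle x, y \rangle) \rightarrow y
```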

Recently, many tools have been proposed for automatically analysing, in symbolic models, equivalence of security protocols. Equivalence is a property needed to state privacy properties or game-based properties like strong secrecy. Tools for a bounded number of sessions can decide equivalence but typically suffer from efficiency issues. Tools for an unbounded number of sessions like Tamarin or ProVerif prove a stronger notion of equivalence (diff-equivalence) that does not properly handle protocols with else branches.

We explore a novel approach based on type systems and propose a type system for proving equivalence of protocols, for a bounded or an unbounded number of sessions, with extensions for reasoning about branching protocols and dynamic keys.

We prove our type system to entail equivalence, for all the standard primitives. Our type system has been implemented and shows a significant speedup compared to the tools for a bounded number of sessions, and compares similarly to ProVerif for an unbounded number of sessions. Moreover, we can also prove security of protocols that require a mix of bounded and unbounded number of sessions, which ProVerif cannot properly handle.

Passwords are still the most widespread means for authenticating
users, even though they have been shown to create huge security
problems. This motivated the use of additional authentication
mechanisms used in so-called multi-factor authentication
protocols. In this paper we define a detailed threat model for this
kind of protocols: while in classical protocol analysis attackers
control the communication network, we take into account that many
communications are performed over TLS channels, that computers may
be infected by different kinds of malware, that attackers could
perform phishing, and that humans may omit some actions. We
formalize this model in the applied pi calculus and perform an
extensive analysis and comparison of several widely used protocols
--- variants of Google 2 Step and FIDO's U2F. The analysis is completely
automated, generating systematically all combinations of threat
scenarios for each of the protocols and using the ProVerif tool for
automated protocol analysis. Our analysis highlights weaknesses and
strengths of the different protocols, and allows us to suggest
several small modifications of the existing protocols which are easy
to implement, yet improve their security in several threat scenarios.
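The systematic generation of threat scenarios can be pictured as a Cartesian product over independent threat dimensions. A minimal sketch, where the dimension names and values are illustrative placeholders rather than the paper's exact model:

```python
from itertools import product

# Hypothetical threat dimensions (illustrative names and values only;
# the actual threat model in the paper is more detailed).
THREAT_DIMENSIONS = {
    "network":  ["passive", "active"],
    "tls":      ["intact", "compromised"],
    "malware":  ["none", "keylogger", "display-spoofing"],
    "phishing": ["absent", "present"],
    "human":    ["careful", "omits-checks"],
}

def all_scenarios(dimensions):
    """Enumerate every combination of threat assumptions, one dict each."""
    names = list(dimensions)
    return [dict(zip(names, combo)) for combo in product(*dimensions.values())]

scenarios = all_scenarios(THREAT_DIMENSIONS)
print(len(scenarios))  # 2 * 2 * 3 * 2 * 2 = 48 scenarios to analyse
```

Each scenario would then be compiled into one protocol model for the verifier; the point is that the case analysis is exhaustive by construction.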

Over the past years, much research has focused on verifying trace
equivalence on protocols, which is notably used to model many
interesting privacy properties, e.g. anonymity or unlinkability.
Many tools for checking trace equivalence rely on a naive and expensive
exploration of all interleavings of concurrent actions, which calls for
partial-order reduction (POR) techniques. I will present the first POR technique for protocol
equivalences that does not rely on an action-determinism assumption: we
recast the trace equivalence problem as a reachability problem, to which
persistent and sleep set techniques can be applied, and we show how to
effectively and efficiently put these results to work through symbolic
analysis. We report on a prototype implementation, integrated in Apte
and DeepSec. This is joint work with Stéphanie Delaune and Lucca Hirschi.
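To make the sleep-set idea concrete, here is a toy sketch (my own illustration, not the Apte/DeepSec implementation). When exploring interleavings of actions, a sleep set carries the actions already explored at an earlier branching point; a branch starting with a slept action is redundant and gets pruned:

```python
def sleep_set_traces(actions, independent):
    """Enumerate maximal interleavings of `actions` with sleep-set pruning.

    `independent(a, b)` says whether actions a and b commute; here every
    action is always enabled until it has been executed once.
    """
    traces = []

    def dfs(done, sleep, prefix):
        enabled = [a for a in actions if a not in done]
        if not enabled:
            traces.append(tuple(prefix))  # a complete interleaving
            return
        frontier = [a for a in enabled if a not in sleep]
        explored = set()
        for a in frontier:
            # Keep sleeping only the actions that commute with `a`:
            # those alternatives were (or will be) covered elsewhere.
            new_sleep = {b for b in (sleep | explored) if independent(a, b)}
            dfs(done | {a}, new_sleep, prefix + [a])
            explored.add(a)

    dfs(frozenset(), frozenset(), [])
    return traces

# All actions pairwise dependent: every interleaving is explored (3! = 6).
full = sleep_set_traces("abc", lambda a, b: False)
# All actions pairwise independent: one representative per trace class.
reduced = sleep_set_traces("abc", lambda a, b: True)
print(len(full), len(reduced))  # 6 1
```

The real setting is much harder: symbolic states, cryptographic messages, and an equivalence rather than a reachability goal, which is exactly why recasting equivalence as reachability matters before persistent and sleep sets can be applied.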