Characteristic Formulae for Mechanized Program Verification

Transcription

Characteristic Formulae for Mechanized Program Verification
UNIVERSITÉ PARIS DIDEROT (Paris 7)
ÉCOLE DOCTORALE : Sciences Mathématiques de Paris Centre
Ph.D. in Computer Science
Characteristic Formulae for Mechanized Program Verification
Arthur Charguéraud
Advisor: François POTTIER
Defended on December 16, 2010
Jury
Claude Marché, Reviewer
Greg Morrisett, Reviewer
Roberto di Cosmo, Examiner
Yves Bertot, Examiner
Matthew Parkinson, Examiner
François Pottier, Advisor
Abstract

This dissertation describes a new approach to program verification, based on characteristic formulae. The characteristic formula of a program is a higher-order logic formula that describes the behavior of that program, in the sense that it is sound and complete with respect to the semantics. This formula can be exploited in an interactive theorem prover to establish that the program satisfies a specification expressed in the style of Separation Logic, with respect to total correctness.

The characteristic formula of a program is automatically generated from its source code alone. In particular, there is no need to annotate the source code with specifications or loop invariants, as such information can be given in interactive proof scripts. One key feature of characteristic formulae is that they are of linear size and that they can be pretty-printed in a way that closely resembles the source code they describe, even though they do not refer to the syntax of the programming language.

Characteristic formulae serve as a basis for a tool, called CFML, that supports the verification of Caml programs using the Coq proof assistant. CFML has been employed to verify about half of the content of Okasaki's book on purely functional data structures, and to verify several imperative data structures such as mutable lists, sparse arrays and union-find. CFML also supports reasoning on higher-order imperative functions, such as functions in CPS form and higher-order iterators.
Contents

1 Introduction
  1.1 Approaches to program verification
  1.2 Overview
  1.3 Implementation
  1.4 Contribution
  1.5 Research and publications
  1.6 Structure of the dissertation

2 Overview and specifications
  2.1 Characteristic formulae for pure programs
    2.1.1 Example of a characteristic formula
    2.1.2 Specification and verification
  2.2 Formalizing purely functional data structures
    2.2.1 Specification of the signature
    2.2.2 Verification of the implementation
    2.2.3 Statistics on the formalizations

3 Verification of imperative programs
  3.1 Examples of imperative functions
    3.1.1 Notation for heap predicates
    3.1.2 Specification of references
    3.1.3 Reasoning about for-loops
    3.1.4 Reasoning about while-loops
  3.2 Mutable data structures
    3.2.1 Recursive ownership
    3.2.2 Representation predicate for references
    3.2.3 Representation predicate for lists
    3.2.4 Focus operations for lists
    3.2.5 Example: length of a mutable list
    3.2.6 Aliased data structures
    3.2.7 Example: the swap function
  3.3 Reasoning on loops without loop invariants
    3.3.1 Recursive implementation of the length function
    3.3.2 Improved characteristic formulae for while-loops
    3.3.3 Improved characteristic formulae for for-loops
  3.4 Treatment of first-class functions
    3.4.1 Specification of a counter function
    3.4.2 Generic function combinators
    3.4.3 Functions in continuation passing-style
    3.4.4 Reasoning about list iterators

4 Characteristic formulae for pure programs
  4.1 Source language and normalization
  4.2 Characteristic formulae: informal presentation
    4.2.1 Characteristic formulae for the core language
    4.2.2 Definition of the specification predicate
    4.2.3 Specification of curried n-ary functions
    4.2.4 Characteristic formulae for curried functions
  4.3 Typing and translation of types and values
    4.3.1 Erasure of arrow types and recursive types
    4.3.2 Typed terms and typed values
    4.3.3 Typing rules of weak-ML
    4.3.4 Reflection of types in the logic
    4.3.5 Reflection of values in the logic
  4.4 Characteristic formulae: formal presentation
    4.4.1 Characteristic formulae for polymorphic definitions
    4.4.2 Evaluation predicate
    4.4.3 Characteristic formula generation with notation
  4.5 Generated axioms for top-level definitions
  4.6 Extensions
    4.6.1 Mutually-recursive functions
    4.6.2 Assertions
    4.6.3 Pattern matching
  4.7 Formal proofs with characteristic formulae
    4.7.1 Reasoning tactics: example with let-bindings
    4.7.2 Tactics for reasoning on function applications
    4.7.3 Tactics for reasoning on function definitions
    4.7.4 Overview of all tactics

5 Generalization to imperative programs
  5.1 Extension of the source language
    5.1.1 Extension of the syntax and semantics
    5.1.2 Extension of weak-ML
  5.2 Specification of locations and heaps
    5.2.1 Representation of heaps
    5.2.2 Predicates on heaps
  5.3 Local reasoning
    5.3.1 Rules to be supported by the local predicate
    5.3.2 Definition of the local predicate
    5.3.3 Properties of local formulae
    5.3.4 Extraction of invariants from pre-conditions
  5.4 Specification of imperative functions
    5.4.1 Definition of the predicates AppReturns and Spec
    5.4.2 Treatment of n-ary applications
    5.4.3 Specification of n-ary functions
  5.5 Characteristic formulae for imperative programs
    5.5.1 Construction of characteristic formulae
    5.5.2 Generated axioms for top-level definitions
  5.6 Extensions
    5.6.1 Assertions
    5.6.2 Null pointers and strong updates
  5.7 Additional tactics for the imperative setting
    5.7.1 Tactic for heap entailment
    5.7.2 Automated application of the frame rule
    5.7.3 Other tactics specific to the imperative setting

6 Soundness and completeness
  6.1 Additional definitions and lemmas
    6.1.1 Interpretation of Func
    6.1.2 Reciprocal of decoding: encoding
    6.1.3 Substitution lemmas for weak-ML
    6.1.4 Typed reductions
    6.1.5 Interpretation and properties of AppEval
    6.1.6 Substitution lemmas for characteristic formulae
    6.1.7 Weakening lemma
    6.1.8 Elimination of n-ary functions
  6.2 Soundness
    6.2.1 Soundness of characteristic formulae
    6.2.2 Soundness of generated axioms
  6.3 Completeness
    6.3.1 Labelling of function closures
    6.3.2 Most-general specifications
    6.3.3 Completeness theorem
    6.3.4 Completeness for integer results
  6.4 Quantification over Type
    6.4.1 Case study: the identity function
    6.4.2 Formalization of exotic values

7 Proofs for the imperative setting
  7.1 Additional definitions and lemmas
    7.1.1 Typing and translation of memory stores
    7.1.2 Typed reduction judgment
    7.1.3 Interpretation and properties of AppEval
    7.1.4 Elimination of n-ary functions
  7.2 Soundness
  7.3 Completeness
    7.3.1 Most-general specifications
    7.3.2 Completeness theorem
    7.3.3 Completeness for integer results

8 Related work
  8.1 Characteristic formulae
  8.2 Separation Logic
  8.3 Verification Condition Generators
  8.4 Shallow embeddings
  8.5 Deep embeddings

9 Conclusion
  9.1 Characteristic formulae in the design space
  9.2 Summary of the key ingredients
  9.3 Future work
  9.4 Final words
Chapter 1
Introduction
The starting point of this dissertation is the notion of correctness of a program. A program is said to be correct if it behaves according to its specification. There are applications for which it is desirable to establish program correctness with a very high degree of confidence. For large-scale programs, only mechanized proofs, i.e., proofs that are verified by a machine, can achieve this goal. While several decades of research on this topic have brought many useful ingredients for building verification systems, no tool has yet succeeded in making program verification a routine exercise. The matter of this thesis is the development of a new approach to mechanized program verification. In this introduction, I explain where my approach is located in the design space and give a high-level overview of the ingredients involved in my work.
1.1 Approaches to program verification

Motivation

Computer programs tend to become involved in a growing number of devices. Moreover, the complexity of those programs keeps increasing. Nowadays, even a cell phone typically relies on more than ten million lines of code. One major concern is whether programs behave as they are intended to. Writing a program that appears to work is not so difficult. Producing a fully-correct program has proven to be astonishingly hard. It suffices to get one single character wrong among millions of lines of code to end up with a buggy program. Worse, a bug may remain undetected for a long time before the particular conditions in which the bug shows up are gathered. If you are playing a game on your cell phone and a bug causes the game to crash, or even causes the entire cell phone to reboot, it will probably be upsetting but it will not have dramatic consequences. However, think of the consequences of a bug occurring in the cruise control of an airplane, a bug in the program running the stock exchange, or a bug in the control system of a nuclear plant.
Testing can be used to expose bugs, and a coverage analysis can even be used to check that all the lines from the source code of a program have been tested at least once. Yet, although testing might expose many bugs, there is no guarantee that it will expose all the bugs. Static analysis is another approach, which helps detect all the bugs of a certain form. For example, type-checking ensures that no absurd operation is being performed, like reading a value in memory at an address that has never been allocated by the program. Static analysis has also been successfully applied to array bound checking, and in particular to the detection of buffer overflow vulnerabilities (e.g., [84]). Such static analyses can be almost entirely automated and may produce a reasonably-small number of false positives.
Program verification

Neither testing nor static analysis can ensure the total absence of errors. More advanced tools are needed to prove that programs always behave as intended. In the context of program verification, a program is said to be correct if it satisfies its specification. The specification of a program is a description of what a program is intended to compute, regardless of how it computes it. The general problem of deciding whether a given program satisfies a given specification is undecidable. Thus, a general-purpose verification tool must include a way for the programmer to guide the verification process.

In traditional approaches based on a Verification Condition Generator (VCG), the transfer of information from the user to the verification system takes the form of invariants that annotate the source code. Thus, in addition to annotating every procedure with a precise description of what properties the procedure can expect of its input data and of what properties it guarantees about its output data, the user also needs to include intermediate specifications such as loop invariants. The VCG tool then extracts a set of proof obligations from such an annotated program. If all those proof obligations can be proved correct, then the program is guaranteed to satisfy its specification.

Another approach to transferring information from the user to the verification tool consists in using an interactive theorem prover. An interactive theorem prover, also called a proof assistant, allows one to state a theorem, for instance a mathematical result from group theory, and to prove it interactively. In an interactive proof, the user writes a sequence of tactics and statements describing what reasoning steps are involved in the proofs, while the proof assistant verifies in real time that each of those steps is legitimate. There is thus no way to fool the theorem prover: if the proof of a theorem is accepted by the system, then the theorem is certainly true (assuming, of course, that the proof system itself is sound and correctly implemented). Interactive theorem provers have been significantly improved in the last decade, and they have been successfully applied to formalize large bodies of mathematical theories.

The statement "this program admits that specification" can be viewed as a theorem. So, one could naturally attempt to prove statements of this form using a proof assistant. This high-level picture may be quite appealing, yet it is not so immediate to implement. On the one hand, the source code of a program is expressed in some programming language. On the other hand, the reasoning about the behavior of that program is carried out in terms of a mathematical logic. The challenge is to find a way of building a bridge between a programming language and a mathematical logic. For example, one of the main issues is the treatment of mutable state. In a program, a variable x may be assigned the value 5 at one point, and may later be updated to the value 7. In mathematics, on the contrary, when x has the value 5 at one point, it has the value 5 forever. Devising a way to describe program behaviors in terms of a mathematical logic is a central concern of my thesis. Several approaches have been investigated in the past. I shall recall them and explain how my approach differs.
Interactive verification

The deep embedding approach involves a direct axiomatization of the syntax and of the semantics of the programming language in the logic of the theorem prover. Through this construction, programs can be manipulated in the logic just as any other data type. In theory, this natural approach allows proving any correct specification that one may wish to establish about a given program. In practice, however, the approach is associated with a fairly high overhead, due to the clumsiness of explicit manipulation of program syntax. Nonetheless, this approach has enabled the verification of low-level assembly routines involving a small number of instructions that perform nontrivial tasks [65, 25].

Rather than relying on a representation of programs as a first-order data type, one may exploit the fact that the logic of a theorem prover is actually a sort of programming language itself. For example, the logic of Coq [18] includes functions, algebraic data structures, case analysis, and so on. The idea of the shallow embedding approach is to relate programs from the programming language with programs from the logic, despite the fact that the two kinds of programs are not entirely equivalent.

There are basically three ways to build on this idea. A first technique consists in writing the code in the logic and using an extraction mechanism (e.g., [49]) to translate the code into a conventional programming language. For example, Leroy's certified C compiler [47] is developed in this way. The second technique works in the other direction: a piece of conventional source code gets decompiled into a set of logical definitions. Myreen [58] has employed this technique to verify a machine-code implementation of a LISP interpreter. Finally, with the third approach, one writes the program twice: once in a deep embedding of the programming language, and once directly in the logic. A proof can then be conducted to establish that the two programs match. Although this approach requires a bigger investment than the former two, it also provides a lot of flexibility. This third approach has been employed in the verification of a microkernel, as part of the seL4 project [46].

All approaches based on shallow embedding share one big difficulty: the need to overcome the discrepancies between the programming language and the logical language, in particular with respect to the treatment of partial functions and of mutable state. One of the most developed techniques for handling those features in a shallow embedding has been developed in the Ynot project [17]. Ynot, which is based on Hoare Type Theory (HTT) [63], relies on a dependently-typed monad for describing impure computations.
The approach developed in this thesis is quite different: given a conventional source code, I generate a logical proposition that describes the behavior of that code. In other words, I generate a logical formula that, when applied to a specification, yields a sufficient condition for establishing that the source code satisfies that specification. This sufficient condition can then be proved interactively. By not representing program syntax explicitly, I avoid the technical difficulties related to the deep embedding approach. By not relying on logical functions to represent program functions, I avoid the difficulties faced by the shallow embedding approach.
Characteristic formulae

The formula that I generate for a given program is a sound and almost-complete description of the behavior of that program, hence it is called a characteristic formula. Remark: a characteristic formula is only almost-complete because it does not reveal the exact addresses of allocated memory cells and it does not allow specifying the exact source code of function closures. Instead, functions have to be specified extensionally. The notion of characteristic formulae dates back to the 80's and originates in work on process calculi, a model for parallel computations. There, characteristic formulae are propositions, expressed in a temporal logic, that are generated from the syntactic definition of a process. The fundamental result about those formulae is that two processes are behaviorally equivalent if and only if their characteristic formulae are logically equivalent [70, 57]. Hence, characteristic formulae can be employed to prove the equivalence or the disequivalence of two given processes.

More recently, Honda, Berger and Yoshida [40] adapted characteristic formulae from process logic to program logic, targeting PCF, the Programming language for Computable Functions. PCF can be seen as a simplified version of languages of the ML family, which includes languages such as Caml and SML. Honda et al give an algorithm that constructs the total characteristic assertion pair of a given PCF program, that is, a pair made of the weakest pre-condition and of the strongest post-condition of the program. (Note that the PCF program is not annotated with any specification nor any invariant, so the algorithm involved here is not the same as a weakest pre-condition calculus.) This notion of most-general specification of a program is actually much older, and originates in work from the 70's on the completeness of Hoare logic [31]. The interest of Honda et al's work is that it shows that the most general specification of a program can be expressed without referring to the programming language syntax.

Honda et al also suggested that characteristic formulae could be used to prove that a program satisfies a given specification, by establishing that the most-general specification entails the specification being targeted. This entailment relation can be proved entirely at the logical level, so the program verification process can be conducted without the burden of referring to program syntax. Yet, this appealing idea suffered from one major problem. The specification language in which total characteristic assertion pairs are expressed is an ad-hoc logic, where variables denote PCF values (including non-terminating functions) and where equality is interpreted as observational equality. As it was not immediate to encode this ad-hoc logic into a standard logic, and since the construction of a theorem prover dedicated to that new logic would have required a tremendous effort, Honda et al's work remained theoretical and did not result in an effective program verification tool.

I rediscovered characteristic formulae while looking for a way to enhance the deep embedding approach, in which program syntax is explicitly represented in the theorem prover. I observed that it was possible to build logical formulae capturing the reasoning that can be done through a deep embedding, yet without exposing program syntax at any time. In a sense, my approach may be viewed as building an abstract layer on top of a deep embedding, hiding the technical details while retaining its benefits. Contrary to Honda, Berger and Yoshida's work, the characteristic formulae that I build are expressed in terms of standard higher-order logic. I have therefore been able to build a practical program verification tool based on characteristic formulae.
1.2 Overview

The characteristic formula of a term t, written ⟦t⟧, relates a description of the input heap in which the term t is executed with a description of the output value and of the output heap produced by the execution of t. Characteristic formulae are closely related to Hoare triples [29, 36]. A total correctness Hoare triple {H} t {Q} asserts that, when executed in a heap satisfying the predicate H, the term t terminates and returns a value v in a heap satisfying Q v. Note that the post-condition Q is used to specify both the output value and the output heap. When t has type τ, the pre-condition H has type Heap → Prop and the post-condition Q has type ⟨τ⟩ → Heap → Prop, where Heap is the type of a heap and where ⟨τ⟩ is the Coq type that corresponds to the ML type τ.

The characteristic formula ⟦t⟧ is a predicate such that ⟦t⟧ H Q captures exactly the same proposition as the triple {H} t {Q}. There is however a fundamental difference between Hoare triples and characteristic formulae. A Hoare triple {H} t {Q} is a three-place relation, whose second argument is a representation of the syntax of the term t. On the contrary, ⟦t⟧ H Q is a logical proposition, expressed in terms of standard higher-order logic connectives, such as ∧, ∃, ∀ and ⇒, which does not refer to the syntax of the term t. Whereas Hoare triples need to be established by application of derivation rules specific to Hoare logic, characteristic formulae can be proved using only basic higher-order logic reasoning, without involving external derivation rules.

In the rest of this section, I present the key ideas involved in the construction of characteristic formulae, focusing on the treatment of let-bindings, of function applications, and of function definitions. I also explain how to handle the frame rule, which enables local reasoning.
Let-bindings

To evaluate a term of the form let x = t1 in t2, one first evaluates the subterm t1 and then computes the result of the evaluation of t2, in which x denotes the result produced by t1. To prove that the expression let x = t1 in t2 admits H as pre-condition and Q as post-condition, one needs to find a valid post-condition Q′ for t1. This post-condition, when applied to the result x produced by t1, describes the state of memory after the execution of t1 and before the execution of t2. So, Q′ x denotes the pre-condition for t2. The corresponding Hoare-logic rule for reasoning on let-bindings appears next.

      {H} t1 {Q′}        ∀x. {Q′ x} t2 {Q}
      -------------------------------------
           {H} (let x = t1 in t2) {Q}

The characteristic formula for a let-binding is built as follows.

      ⟦let x = t1 in t2⟧  ≡  λH. λQ. ∃Q′. ⟦t1⟧ H Q′ ∧ ∀x. ⟦t2⟧ (Q′ x) Q

This formula closely resembles the corresponding Hoare-logic rule. The only real difference is that, in the characteristic formula, the intermediate post-condition Q′ is explicitly introduced with an existential quantifier, whereas this quantification is implicit in the Hoare-logic derivation rule. The existential quantification of unknown specifications, which is made possible by the strength of higher-order logic, plays a central role in my work. This contrasts with traditional program verification approaches where intermediate specifications, including loop invariants, have to be included in the source code.

In order to make proof obligations more readable, I introduce a system of notation for characteristic formulae. For example, for let-bindings, I define:

      (let x = F1 in F2)  ≡  λH. λQ. ∃Q′. F1 H Q′ ∧ ∀x. F2 (Q′ x) Q

Note that bold keywords correspond to notation for logical formulae, whereas plain keywords correspond to constructors from the programming language syntax. The generation of characteristic formulae then boils down to a re-interpretation of the keywords from the programming language.

      ⟦let x = t1 in t2⟧  ≡  (let x = ⟦t1⟧ in ⟦t2⟧)

It follows that characteristic formulae may be pretty-printed exactly like source code. Hence, the statement asserting that a term t admits a pre-condition H and a post-condition Q, which takes the form ⟦t⟧ H Q, appears to the user as the source code of t followed with its pre-condition and its post-condition. Note that this convenient display applies not only to a top-level program definition t but also to all of the subterms of t involved during the proof of correctness of the term t.
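To give a concrete sense of how such a combinator can live inside Coq, the following is a minimal sketch of the let-binding formula above, written as a plain definition rather than as a notation. The names Heap, Hprop and cf_let are placeholders introduced for this illustration; they are not the actual CFML definitions, in which the bound variable x is kept as a binder by the notation layer.

    (* Minimal sketch, with placeholder names: a heap-predicate type and the
       let-binding combinator from the formula above. In the notation layer,
       the bound variable x remains a binder; here it becomes an explicit
       argument of the second formula F2. *)
    Parameter Heap : Type.
    Definition Hprop := Heap -> Prop.

    Definition cf_let (A B : Type)
        (F1 : Hprop -> (A -> Hprop) -> Prop)
        (F2 : A -> Hprop -> (B -> Hprop) -> Prop)
        : Hprop -> (B -> Hprop) -> Prop :=
      fun H Q => exists Q', F1 H Q' /\ forall x, F2 x (Q' x) Q.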
Frame rule

Local reasoning [67] refers to the ability of verifying a piece of code through a piece of reasoning concerned only with the memory cells that are involved in the execution of that code. With local reasoning, all the memory cells that are not explicitly mentioned are implicitly assumed to remain unchanged. The concept of local reasoning is very elegantly captured by the frame rule, which originates in Separation Logic [77]. The frame rule states that if a program expression transforms a heap described by a predicate H1 into a heap described by a predicate H1′, then, for any heap predicate H2, the same program expression also transforms a heap of the form H1 ∗ H2 into a state described by H1′ ∗ H2, where the star symbol, called separating conjunction, captures a disjoint union of two pieces of heap.

For example, consider the application of the function incr, which increments the contents of the memory cell, to a location l. If (l ↪ n) describes a singleton heap that binds the location l to the integer n, then an application of the function incr expects a heap of the form (l ↪ n) and produces the heap (l ↪ n + 1). By the frame rule, one can deduce that the application of the function incr to l also takes a heap of the form (l ↪ n) ∗ (l′ ↪ n′) towards a heap of the form (l ↪ n + 1) ∗ (l′ ↪ n′). Here, the separating conjunction asserts that l′ is a location distinct from l. The use of the separating conjunction gives us for free the property that the cell at location l′ is not modified when the contents of the cell at location l is incremented.

The frame rule can be formulated on Hoare triples as follows.

      {H1} t {Q1}
      ------------------------
      {H1 ∗ H2} t {Q1 ⋆ H2}

where the symbol (⋆) is like (∗) except that it extends a post-condition with a piece of heap. Technically, Q1 ⋆ H2 is defined as λx. (Q1 x) ∗ H2, where the variable x denotes the output value and Q1 x describes the output heap.

To integrate the frame rule in characteristic formulae, I rely on a predicate called frame. This predicate is defined in such a way that, to prove the proposition frame ⟦t⟧ H Q, it suffices to find a decomposition of H as H1 ∗ H2, a decomposition of Q as Q1 ⋆ H2, and to prove ⟦t⟧ H1 Q1. The formal definition of frame is thus as follows.

      frame F  ≡  λHQ. ∃H1 H2 Q1.  (H = H1 ∗ H2)  ∧  (F H1 Q1)  ∧  (Q = Q1 ⋆ H2)

The frame rule is not syntax-directed, meaning that one cannot guess from the shape of the term t when the frame rule needs to be applied. Yet, I want to generate characteristic formulae in a systematic manner from the syntax of the source code. Since I do not know where to insert applications of the predicate frame, I simply insert applications of frame at every node of a characteristic formula. For example, I update the previous definition for let-bindings to:

      (let x = F1 in F2)  ≡  frame (λH. λQ. ∃Q′. F1 H Q′ ∧ ∀x. F2 (Q′ x) Q)

This aggressive strategy allows applying the frame rule at any time in the reasoning. If there is no need to apply the frame rule, then the frame predicate may be simply ignored. Indeed, given a formula F, the proposition F H Q is always a sufficient condition for proving frame F H Q. It suffices to instantiate H1 as H, Q1 as Q and H2 as a specification of the empty heap.

The approach that I have described here to handle the frame rule is in fact generalized so as to also handle applications of the rule of consequence, for strengthening pre-conditions and weakening post-conditions, and to allow discarding memory cells, for simulating garbage collection.
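As an indication of how the frame predicate fits into Coq, here is a minimal sketch of the definition above. The placeholder declarations (repeated from the earlier sketch) and the name heap_star, which stands for the separating conjunction written ∗ in the text, are assumptions made for this illustration, not the actual CFML definitions.

    (* Minimal sketch: frame F holds of H and Q whenever H and Q can be split
       so that F accounts for one part while the other part H2 is carried over
       unchanged into the post-condition. *)
    Parameter Heap : Type.
    Definition Hprop := Heap -> Prop.
    Parameter heap_star : Hprop -> Hprop -> Hprop.  (* separating conjunction *)

    Definition frame (B : Type) (F : Hprop -> (B -> Hprop) -> Prop)
        : Hprop -> (B -> Hprop) -> Prop :=
      fun H Q => exists H1 H2 Q1,
           H = heap_star H1 H2
        /\ F H1 Q1
        /\ Q = (fun x => heap_star (Q1 x) H2).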
Translation of types

Higher-order logic can naturally be used to state properties about basic values such as purely-functional lists. Indeed, the list data structure can be defined in the logic in a way that perfectly matches the list data structure from the programming language. However, particular care is required for specifying and reasoning on program functions. Indeed, programming language functions cannot be directly represented as logical functions, because of a mismatch between the two: program functions may diverge or crash whereas logical functions must always terminate. To address this issue, I introduce a new data type, called Func, used to represent functions. The type Func is presented as an abstract data type to the user of characteristic formulae. In the proof of soundness, a value of type Func is interpreted as the syntax of the source code of a function.

Another particularity of the reflection of Caml values into Coq values is the treatment of pointers. When reasoning through characteristic formulae, the type and the contents of memory cells are described explicitly by heap predicates, so there is no need for pointers to carry the type of the memory cell they point to. All pointers are therefore described in the logic through an abstract data type called Loc. In the proof of soundness, a value of type Loc is interpreted as a store location.

The translation of Caml types [48] into Coq [18] types is formalized through an operator, written ⟨·⟩, that maps all arrow types towards the type Func and maps all reference types towards the type Loc. A Caml value of type τ is thus represented as a Coq value of type ⟨τ⟩. The definition of the operator ⟨·⟩ is as follows.

      ⟨int⟩       ≡  Int
      ⟨τ1 × τ2⟩   ≡  ⟨τ1⟩ × ⟨τ2⟩
      ⟨τ1 + τ2⟩   ≡  ⟨τ1⟩ + ⟨τ2⟩
      ⟨τ1 → τ2⟩   ≡  Func
      ⟨ref τ⟩     ≡  Loc
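To illustrate the flavor of this translation, the following Coq sketch phrases it as a recursive function over a toy syntax of source types. The syntax styp and the abstract types Func and Loc declared here are placeholders for this illustration only; the actual translation is defined over weak-ML types, which are richer.

    (* Hedged sketch: the type translation of the text, over a toy type syntax. *)
    Require Import ZArith.

    Parameter Func : Type.   (* abstract type of program functions *)
    Parameter Loc  : Type.   (* abstract type of memory locations *)

    Inductive styp : Type :=
      | styp_int   : styp
      | styp_prod  : styp -> styp -> styp
      | styp_sum   : styp -> styp -> styp
      | styp_arrow : styp -> styp -> styp
      | styp_ref   : styp -> styp.

    Fixpoint tr (T : styp) : Type :=
      match T with
      | styp_int        => Z                       (* <int>       = Int       *)
      | styp_prod T1 T2 => (tr T1 * tr T2)%type    (* <t1 x t2>   = <t1>x<t2> *)
      | styp_sum T1 T2  => (tr T1 + tr T2)%type    (* <t1 + t2>   = <t1>+<t2> *)
      | styp_arrow _ _  => Func                    (* <t1 -> t2>  = Func      *)
      | styp_ref _      => Loc                     (* <ref t>     = Loc       *)
      end.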
On the one hand, ML typing is very useful, as knowing the type of values allows one to reflect Caml values directly into the corresponding Coq values. On the other hand, typing is restrictive: there are numerous programs that are correct but cannot be type-checked in ML. In particular, the ML type system does not accommodate programs involving null pointers and strong updates. Yet, although those features are difficult to handle in a type system without compromising type safety, their correctness can be justified through proofs of program correctness. So, I extend Caml with null pointers and strong updates, and then use characteristic formulae to justify that null pointers are never dereferenced and that reads in memory always yield a value of the expected type.

The correctness of this approach is however not entirely straightforward to justify. On the one hand, characteristic formulae are generated from typed programs. On the other hand, the introduction of null pointers and strong updates may jeopardize the validity of types. To justify the soundness of characteristic formulae, I introduce a new type system called weak-ML. This type system does not enjoy type soundness. However, it carries all the type information and invariants needed to generate characteristic formulae and to prove them sound.

In short, weak-ML corresponds to a relaxed version of ML that does not keep track of the type of pointers or functions, and that does not impose any constraint on the typing of dereferencing and applications. The translation from Caml types to Coq types is in fact conducted in two steps: a Caml type is first translated into a weak-ML type, and this weak-ML type is then translated into a Coq type.
Functions

To specify the behavior of functions, I rely on a predicate called AppReturns. The proposition AppReturns f v H Q asserts that the application of the function f to v in a heap satisfying H terminates and returns a value v′ in a heap satisfying Q v′. The predicates H and Q correspond to the pre- and post-conditions of the application of the function f to the argument v. It follows that the characteristic formula for an application of a function f to a value v is simply built as the partial application of AppReturns to f and v.

      ⟦f v⟧  ≡  AppReturns f v

The function f is viewed in the logic as a value of type Func. If f takes as argument a value v described in Coq at type A and returns a value described in Coq at type B, then the pre-condition H has type Hprop, a shorthand for Heap → Prop, and the post-condition Q has type B → Hprop. So, the type of AppReturns is as follows.

      AppReturns : ∀A B. Func → A → Hprop → (B → Hprop) → Prop

For example, the function incr is specified as shown below, using a pre-condition of the form (l ↪ n) and a post-condition describing a heap of the form (l ↪ n + 1). Note: the abstraction λ_. is used to discard the unit value returned by the function incr.

      ∀l. ∀n. AppReturns incr l (l ↪ n) (λ_. l ↪ n + 1)

To establish a property about the behavior of an application, one needs to exploit an instance of AppReturns. Such instances can be derived from the characteristic formula associated with the definition of the function involved. If a function f is defined as the abstraction λx. t, then, given a particular argument x, one can derive an instance of AppReturns f x H Q simply by proving that the body t admits the pre-condition H and the post-condition Q for that particular argument x. The characteristic formula of a function definition (let rec f = λx. t in t′) is defined as follows.

      ⟦let rec f = λx. t in t′⟧  ≡
        λHQ. ∀f. (∀x H′ Q′. ⟦t⟧ H′ Q′ ⇒ AppReturns f x H′ Q′) ⇒ ⟦t′⟧ H Q

Observe that the formula does not involve a specific treatment of recursion. Indeed, to prove that a recursive function satisfies a given specification, it suffices to conduct a proof by induction that the function indeed satisfies that specification. The induction may be conducted on a measure or on a well-founded relation, using the induction facility from the theorem prover. So, there is no need to strengthen the characteristic formula in any way to support recursive functions. A similar observation was also made by Honda et al [40].
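To make the shape of this formula concrete, here is a minimal Coq sketch of the corresponding combinator. The declarations of Heap, Hprop, Func and AppReturns are placeholders assumed for the sketch, and the actual CFML definition also handles the n-ary and polymorphic functions described in Chapter 4.

    (* Minimal sketch: the combinator for "let rec f = fun x => t in t'".
       F stands for the formula of the body t (parameterized by f and x),
       and K for the formula of the continuation t'. *)
    Parameter Heap : Type.
    Definition Hprop := Heap -> Prop.
    Parameter Func : Type.
    Parameter AppReturns : forall {A B : Type},
      Func -> A -> Hprop -> (B -> Hprop) -> Prop.

    Definition cf_letrec (A B C : Type)
        (F : Func -> A -> Hprop -> (B -> Hprop) -> Prop)
        (K : Func -> Hprop -> (C -> Hprop) -> Prop)
        : Hprop -> (C -> Hprop) -> Prop :=
      fun H Q => forall f : Func,
        (forall (x : A) (H' : Hprop) (Q' : B -> Hprop),
            F f x H' Q' -> AppReturns f x H' Q') ->
        K f H Q.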
1.3 Implementation

I have implemented a tool, called CFML, short for Characteristic Formulae for ML, that targets the verification of Caml programs [48] using the Coq proof assistant [18]. This tool comprises two parts. The CFML generator is a program that translates Caml source code into a set of Coq definitions and axioms. The CFML library is a Coq library that contains notation, lemmas and tactics for manipulating characteristic formulae. In this section, I describe precisely the fragment of the OCaml language supported by CFML, as well as the exact logic in which the reasoning on characteristic formulae takes place. I also give an overview of the architecture of CFML, and comment on its trusted computing base.

Source language

I have focused on a subset of the OCaml programming language, which is a sequential, call-by-value, high-level programming language. The current implementation of CFML supports the core λ-calculus, including higher-order functions, recursion, mutual recursion and polymorphic recursion. It supports tuples, data constructors, pattern matching, reference cells, records and arrays. I provide an additional Caml library that adds support for null pointers and strong updates.

For the sake of simplicity, I model machine integers as mathematical integers, thus making the assumption that no arithmetic overflow ever occurs. Moreover, I do not include floating-point numbers, whose formalization is associated with numerous technical difficulties. Also, I assume the source language to be deterministic. This determinacy requirement might in fact be relaxed, but the formal developments were slightly simpler to conduct under this assumption.

Lazy expressions are supported under the condition that the code would terminate without any lazy annotation. Although this restriction does not enable reasoning on infinite data structures, it covers some uses of laziness, such as computation scheduling, which is exploited in particular in Okasaki's data structures [69]. In fact, CFML simply ignores any annotation relative to laziness. Indeed, if a program satisfies its specification when evaluated without any lazy annotation, then it also satisfies its specification when evaluated with lazy annotations. (The converse is not true.)

Moreover, CFML includes experimental support for Caml modules and functors, which are reflected as Coq modules and functors. This development is still considered experimental because of several limitations of the current module system of Coq.¹ Yet, the support for modules has proven useful in the verification of data structures implemented using Caml functors.

¹ A major limitation of the Coq module system is that the positivity requirement for inductive definitions is based on a syntactic check that does not appropriately support abstract type constructors. In other words, Coq does not implement a feature equivalent to the variance constraints of Caml. Another inconvenience is that Coq module signatures cannot be described on-the-fly like in Caml; they have to be named explicitly.

Thereafter, I refer to the subset of the OCaml language supported by CFML as Caml.
Target logic

When verifying programs through characteristic formulae, both the specification process and the verification process take place in the logic. The logic targeted by CFML is the logic of Coq. More precisely, I work in the Calculus of Inductive Constructions, strengthened with the following standard extensions: functional extensionality, propositional extensionality, classical logic, and indefinite description (also called Hilbert's epsilon operator). Note that the proof irrelevance property is derivable in this setting.

Those axioms are all compatible with the standard boolean model of type theory. They are not included by default in Coq but they are integrated in other higher-order logic proof assistants such as Isabelle/HOL and HOL4. Remark: although the CFML library is implemented in the Coq proof assistant, the developments it contains could presumably be reproduced in another general-purpose proof assistant based on higher-order logic.
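As an indication of what these extensions look like when stated in Coq, here is a minimal sketch. The exact formulations used by the CFML library may differ, and equivalent statements are also provided by Coq's standard Logic library.

    (* Hedged sketch: the four logical extensions mentioned above, stated as axioms. *)
    Axiom functional_extensionality : forall (A B : Type) (f g : A -> B),
      (forall x, f x = g x) -> f = g.

    Axiom propositional_extensionality : forall (P Q : Prop),
      (P <-> Q) -> P = Q.

    Axiom classical_logic : forall (P : Prop), P \/ ~ P.

    Axiom indefinite_description : forall (A : Type) (P : A -> Prop),
      (exists x, P x) -> { x : A | P x }.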
The CFML generator

The CFML generator starts by parsing a Caml source file. This source code need not be annotated with any specification nor invariant. The tool then normalizes the source code in order to assign names to intermediate expressions, that is, to make sequencing explicit. It then type-checks the code and generates a Coq file. This Coq file contains a set of declarations reflecting every top-level declaration from the source code.

For example, consider a top-level function definition let f x = t. The CFML generator produces an axiom named f, of type Func, as well as an axiom named f_cf, which allows deriving instances of the predicate AppReturns for the function f.

      Axiom f : Func.
      Axiom f_cf : ∀x H Q. ⟦t⟧ H Q ⇒ AppReturns f x H Q.

The specification and verification of the content of a file called demo.ml takes place in a proof script called demo_proof.v. The file demo_proof.v depends on the generated file, which is called demo_ml.v. When the source file demo.ml is modified, the CFML generator needs to be executed again, producing an updated demo_ml.v file. The proof script from demo_proof.v, which records the arguments explaining why the previous version of demo.ml was correct, may need to be updated. To find out where modifications are required, it suffices to run the Coq proof assistant on the proof script until the first point where a proof breaks. One can then fix the proof script so as to reflect the changes made in the source code.
The CFML library

The CFML library is made of three main parts. First, it includes a system of notation for pretty-printing characteristic formulae. Second, it includes a number of lemmas for reasoning on the specification of functions. Third, it includes the definition of high-level tactics that are used to efficiently manipulate characteristic formulae.

The CFML library is built upon three axioms. The first one asserts the existence of a type Func, which is used to represent functions. The second one asserts the existence of a predicate AppEval, a low-level predicate in terms of which the predicate AppReturns is defined. Intuitively, the proposition AppEval f v h v′ h′ corresponds to the big-step reduction judgment for functions: it is equivalent to (f v)/h ⇓ v′/h′. The third axiom asserts that AppEval is deterministic: in the judgment AppEval f v h v′ h′, the last two arguments are uniquely determined by the first three. The library also includes axioms for describing the specification of the primitive functions for manipulating references. Through the proof of soundness, I establish that all those axioms can be given a sound interpretation.
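The following Coq sketch indicates the shape these three axioms may take, together with one plausible, simplified way of defining AppReturns on top of AppEval. The Heap placeholder and the exact statements are assumptions made for this illustration and need not match the CFML library word for word; in particular, the simplified AppReturns below ignores the refinements (framing of unused heap, garbage collection) that the actual definition accounts for.

    (* Hedged sketch: the type Func, the low-level evaluation predicate
       AppEval, and its determinism. *)
    Parameter Heap : Type.

    Axiom Func : Type.
    Axiom AppEval : forall {A B : Type}, Func -> A -> Heap -> B -> Heap -> Prop.
    Axiom AppEval_deterministic : forall (A B : Type) (f : Func) (v : A)
        (h : Heap) (v1 v2 : B) (h1 h2 : Heap),
      AppEval f v h v1 h1 -> AppEval f v h v2 h2 -> v1 = v2 /\ h1 = h2.

    (* Simplified illustration of AppReturns in terms of AppEval. *)
    Definition AppReturns {A B : Type} (f : Func) (v : A)
        (H : Heap -> Prop) (Q : B -> Heap -> Prop) : Prop :=
      forall h, H h -> exists v' h', AppEval f v h v' h' /\ Q v' h'.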
Trusted computing base

The framework that I have developed aims at proving programs correct. To trust that CFML actually establishes the correctness of a program, one needs to trust that the characteristic formulae that I generate and that I take as axioms are accurate descriptions of the programs they correspond to, and that those axioms do not break the soundness of the target logic.

The trusted computing base of CFML also includes the implementation of the CFML generator, which I have programmed in OCaml, the implementation of the parser and of the type-checker of OCaml, and the implementation of the Coq proof assistant. To be exhaustive, I shall also mention that the correctness of the framework indirectly relies on the correctness of the complete OCaml compiler and on the correctness of the hardware.

The main priority of my thesis has been the development and implementation of a practical tool for program verification. For this reason, I have not yet invested time in trying to construct a mechanized proof of the soundness of characteristic formulae. In this dissertation, I only give a detailed paper proof justifying that all the axioms involved can be given a concrete interpretation in the logic. An interesting direction for future work consists in trying to replace the axioms with definitions and lemmas, which would be expressed in terms of a deep embedding of the source language.
1.4 Contribution
The thesis that I defend is summarized in the following statement.

      Generating the characteristic formula of a program and exploiting that formula in an interactive proof assistant provides an effective approach to formally verifying that the program satisfies a given specification.

A characteristic formula is a logical predicate that precisely describes the set of specifications admissible by a given program, without referring to the syntax of the programming language. I have not invented the general concept of characteristic formulae; however, I have turned that concept into a practical program verification technique. More precisely, the main contributions of my thesis may be summarized as follows.

I show that characteristic formulae can be expressed in terms of a standard higher-order logic. Moreover, I show that characteristic formulae can be pretty-printed just like the source code they describe. Hence, compared with Honda et al's work, the characteristic formulae that I generate are easy to read and can be manipulated within an off-the-shelf theorem prover. I also explain how characteristic formulae can support local reasoning. More precisely, I rely on separation-logic style predicates for specifying memory states, and the characteristic formulae that I generate take into account potential applications of the frame rule.

I demonstrate the effectiveness of characteristic formulae through the implementation of a tool called CFML that accepts Caml programs and produces Coq definitions. I have used CFML to verify a collection of purely-functional data structures taken from Okasaki's book on purely-functional data structures [69]. Some advanced structures, such as bootstrapped queues, had never been formally verified previously.

I have investigated the treatment of higher-order imperative functions. In particular, I have verified the following functions: a higher-order iterator for lists, a function that manipulates a list of counters in which each counter is a function that carries its own private state, the generic combinator compose, and Reynolds' CPS-append function [77]. More recently, I have verified three imperative algorithms: an implementation of Dijkstra's shortest path algorithm (the version of the algorithm that uses a priority queue), an implementation of Tarjan's Union-Find data structure (specified with respect to a partial equivalence relation), and an implementation of sparse arrays (as described in the first task from the Vacid-0 challenge [78]). Those developments, which are not described in this manuscript, can be found online.²

² Proof scripts: http://arthur.chargueraud.org/research/2010/thesis/
1.5 Research and publications
During my thesis, I have worked on three main projects. First, I have studied a type system based on linear capabilities for describing mutable state. This type system has been described in an ICFP'08 paper, and it is not included in my dissertation. Second, I have worked on a deep embedding of the pure fragment of Caml in Coq. This deep embedding is described in a research paper which has not been published. The work on the deep embedding led me to characteristic formulae for purely-functional Caml programs. This work has appeared as an ICFP'10 paper. Most of the content of that paper is included in my dissertation. I have recently extended characteristic formulae to imperative programs, reusing in particular ideas that had been developed for the type system based on capabilities. At the time of writing, the extension of characteristic formulae to the imperative setting has not yet been submitted as a conference paper.

Previous research papers:

− Functional Translation of a Calculus of Capabilities,
  Arthur Charguéraud and François Pottier,
  International Conference on Functional Programming (ICFP), 2008.

− Verification of Functional Programs Through a Deep Embedding,
  Arthur Charguéraud,
  Unpublished, 2009.

− Program Verification Through Characteristic Formulae,
  Arthur Charguéraud,
  International Conference on Functional Programming (ICFP), 2010.
1.6 Structure of the dissertation
The thesis is structured as follows. Examples of verification through characteristic formulae are described in Chapter 2 for pure programs, and in Chapter 3 for imperative programs. The construction of characteristic formulae for pure programs is developed in Chapter 4. This construction is then generalized to imperative programs in Chapter 5. The proof of soundness and completeness of characteristic formulae for pure programs is the matter of Chapter 6. Those proofs are then extended to an imperative setting in Chapter 7. Finally, related work is discussed in Chapter 8, and conclusions are given in Chapter 9.

Notice: several predicates are used with a different meaning in the context of purely-functional programs than in the context of imperative programs. For example, the big-step reduction judgment for pure programs takes the form t ⇓ v, whereas the judgment for imperative programs takes the form t/m ⇓ v′/m′. Similarly, reasoning on applications in a purely-functional setting involves the predicate AppReturns f v P, whereas in an imperative setting the same predicate takes the form AppReturns f v H Q. This reuse of predicate names, which makes the presentation lighter and more uniform, should not lead to confusion since a given predicate is always used in a consistent manner in each chapter.
Chapter 2
Overview and specifications

The purpose of this overview is to give some intuition on how to construct a characteristic formula and how to exploit it. To that end, I consider a small but illustrative purely-functional function as a running example. Another goal of the chapter is to explain the specification language upon which I rely. Specifications take the form of Coq lemmas, stated using special predicates as well as a layer of notation. The specification of modules and functors is illustrated through the presentation of a case study on purely-functional red-black trees.
2.1 Characteristic formulae for pure programs
In a purely-functional setting, the characteristic formula ⟦t⟧ associated with a term t is such that, for any given post-condition P, the proposition ⟦t⟧ P holds if and only if the term t terminates and returns a value satisfying the predicate P. Note that reasoning on the total correctness of pure programs can be conducted without pre-conditions. In terms of types, the characteristic formula associated with a term t of type τ applies to a post-condition P of type ⟨τ⟩ → Prop and produces a proposition, so ⟦t⟧ admits the type (⟨τ⟩ → Prop) → Prop. In terms of a denotational interpretation, ⟦t⟧ corresponds to the set of post-conditions that are valid for the term t. I next describe an example of a characteristic formula.
2.1.1 Example of a characteristic formula
Consider the following recursive function, which divides by two any non-negative even integer. For the interest of the example, the function diverges when called on a negative integer and crashes when called on an odd integer. Through this example, I present the construction of characteristic formulae and also illustrate the treatment of partial functions, of recursion, and of ghost variables.

      let rec half x =
        if x = 0 then 0
        else if x = 1 then crash
        else let y = half (x − 2) in
             y + 1

Given an argument x and a post-condition P of type int → Prop, the characteristic formula for half describes what needs to be proved in order to establish that the application of half to x terminates and returns a value satisfying the predicate P, written AppReturns half x P. The characteristic formula associated with the definition of the function half appears below and is explained next.

      ∀x. ∀P.
          (    (x = 0 ⇒ P 0)
            ∧  (x ≠ 0 ⇒
                    (x = 1 ⇒ False)
                 ∧  (x ≠ 1 ⇒
                         ∃P′. (AppReturns half (x − 2) P′)
                            ∧ (∀y. (P′ y) ⇒ P (y + 1)) )) )
          ⇒ AppReturns half x P
When x is equal to zero, the function half returns zero. So, if we want to show that the function half returns a value satisfying P, we have to prove P 0. When x is equal to one, half crashes, so we cannot prove that it returns any value. The only way to proceed is to show that the instruction crash cannot be reached. Hence the proof obligation False. Otherwise, we want to prove that let y = half (x − 2) in y + 1 returns a value satisfying P. To that end, we need to exhibit a post-condition P′ such that the recursive call to half on the argument x − 2 returns a value satisfying P′. Then, for any name y that stands for the result of this recursive call, assuming that y satisfies P′, we have to show that the output value y + 1 satisfies the post-condition P.

For program verification to be realistic, the proof obligation ⟦t⟧ P should be easy to read and manipulate. Fortunately, characteristic formulae can be pretty-printed in a way that closely resembles source code. For example, the characteristic formula associated with half is displayed as follows.

      Let half = fun x ↦
        if x = 0 then return 0
        else if x = 1 then crash
        else let y = app half (x − 2) in
             return (y + 1)
At first sight, it might appear that the characteristic formula is merely a rephrasing of the source code in some other syntax. To some extent, this is true. A characteristic formula is a sound and complete description of the behavior of a program. Thus, it carries no more and no less information than the source code of the program itself. However, characteristic formulae enable us to move away from program syntax and conduct program verification entirely at the logical level. Characteristic formulae thereby avoid all the technical difficulties associated with the manipulation of program syntax and make it possible to work directly in terms of higher-order logic values and formulae.
2.1.2 Specification and verification
One of the key ingredients involved in characteristic formulae is the abstract predicate AppReturns, which is used to specify functions. Because of the mismatch between program functions, which may fail or diverge, and logical functions, which must always be total, we cannot represent program functions using logical functions. For this reason, I introduce an abstract type, named Func, to represent program functions. Values of type Func are exclusively specified in terms of the predicate AppReturns. The proposition AppReturns f x P states that the application of the function f to an argument x terminates and returns a value satisfying P. Hence the type of AppReturns, shown below.

AppReturns : ∀A B. Func → A → (B → Prop) → Prop
Observe that a function f is described in Coq at the type Func, regardless of the Caml type of the function f. Having Func as a constant type and not a parametric type allows for a simple treatment of polymorphic functions and of functions that admit a recursive type.
The predicate AppReturns is used not only in the definition of characteristic formulae but also in the statement of specifications. One possible specification for half is the following: if x is the double of some non-negative integer n, then the application of half to x returns an integer equal to n. The corresponding higher-order logic statement appears next.

∀x. ∀n. n ≥ 0 ⇒ x = 2 ∗ n ⇒ AppReturns half x (= n)
Remark: the post-condition (= n) denotes a partial application of equality: it is short for λa. (a = n). Here, the value n corresponds to a ghost variable: it appears in the specification of the function but not in its source code. The specification that I have considered for half might not be the simplest one; however, it is useful to illustrate the treatment of ghost variables.
The next step consists in proving that the function half satisfies its specification. This is done by exploiting its characteristic formula. I first give the mathematical presentation of the proof and then show the corresponding Coq proof script. The specification is proved by induction on x. Let x and n be such that n ≥ 0 and x = 2 ∗ n. We apply the characteristic formula to prove AppReturns half x (= n). If x is equal to 0, we conclude by showing that n is equal to 0. If x is equal to 1, we show that x = 2 ∗ n is absurd. Otherwise, x ≥ 2. We instantiate P′ as (= n − 1), and prove AppReturns half (x − 2) P′ using the induction hypothesis. Finally, we show that, for any y such that y = n − 1, the proposition y + 1 = n holds. This completes the proof. Note that, through this proof by induction, we have proved that the function half terminates when it is applied to a non-negative even integer.
Formalizing the above piece of reasoning in a proof assistant is straightforward. In Coq, a proof script takes the form of a sequence of tactics, each tactic being used to make some progress in the proof. The verification of the function half could be done using only built-in Coq tactics. Yet, for the sake of conciseness, I rely on a few specialized tactics to factor out repeated proof patterns. For example, each time we reason on an if statement, we want to split the conjunction at the head of the goal and introduce one hypothesis in each subgoal. The tactics specific to my framework can be easily recognized: they start with the letter x. The verification proof script for half appears next.
xinduction (downto 0).
xcf. introv IH Pos Eq. xcase.
  xret. auto.          (* x = 0 *)
  xfail. auto.         (* x = 1 *)
  xlet.                (* otherwise *)
  xapp (n-1); auto.    (* half (x-2) *)
  xret. auto.          (* return y+1 *)
The interesting steps in that proof are: the setting up of the induction on the set of non-negative integers (xinduction), the application of the characteristic formula (xcf), the case analysis on the value of x (xcase), and the instantiation of the ghost variable n with the value n − 1 when reasoning on the recursive call to half (xapp). The tactic auto runs a goal-directed proof search and may also rely on a decision procedure for linear arithmetic. The tactic introv is used to assign names to hypotheses. Such explicit naming is not mandatory, but in general it greatly improves the readability of proof obligations and the robustness of proof scripts.
When working with characteristic formulae, proof obligations always remain very tidy. The Coq goal obtained when reaching the subterm let y = half (x − 2) in y + 1 is shown below. In the conclusion (stated below the line), the characteristic formula associated with that subterm is applied to the post-condition to be established, which is (= n). The context contains the two pre-conditions n ≥ 0 and x = 2 ∗ n, the negation of the conditionals that have been tested, x ≠ 0 and x ≠ 1, as well as the induction hypothesis, which asserts that the specification that we are trying to prove for half already holds for any non-negative argument x′ smaller than x.

x : int
IH : forall x', 0 <= x' -> x' < x ->
     forall n, n >= 0 -> x' = 2 * n ->
     AppReturns half x' (= n)
n : int
Pos : n >= 0
Eq : x = 2 * n
C1 : x <> 0
C2 : x <> 1
-----------------------------------------------
(Let y := App half (x-2) in Return (1+y)) (= n)
As illustrated through the example, a verification proof script typically interleaves applications of x-tactics with pieces of general Coq reasoning. In order to obtain shorter proof scripts, I set up an additional tactic that automates the invocation of x-tactics. This tactic, named xgo, simply looks at the head of the characteristic formula and applies the appropriate x-tactic. A single call to xgo may analyse an entire characteristic formula and leave a set of proof obligations, in a similar fashion as a Verification Condition Generator (VCG).
Of course, there are pieces of information that xgo cannot infer. Typically, the specification of local functions must be provided explicitly. Also, the instantiation of ghost variables cannot always be inferred. In our example, Coq automation is slightly too weak to infer that the ghost variable n should be instantiated as n − 1 in the recursive call to half. In practice, xgo stops running whenever it lacks too much information to go on. The user may also explicitly tell xgo to stop at a given point in the code. Moreover, xgo accepts hints to be exploited when some information cannot be inferred. For example, we can run xgo with the indication that the function application whose result is named y should use the value n − 1 to instantiate a ghost variable.¹ In this case, the verification proof script for the function half is reduced to:
xinduction (downto 0). xcf. intros.
xgo~ 'y (Xargs (n-1)).
Automation, denoted by the tilde symbol, is able to handle all the subgoals produced by xgo. For simple functions like half, a single call to xgo is usually sufficient. However, for more complex programs, the ability of xgo to be run only on given portions of code is crucial. In particular, it allows one to stop just before a branching point in the code in order to establish facts that are needed in several branches. Indeed, when a piece of reasoning needs to be carried out manually, it is extremely important to avoid duplicating the corresponding proof script across several branches.

To summarize, my approach allows for very concise proof scripts whenever verifying simple pieces of code, thanks to the automated processing done by xgo and to the good amount of automation available through the proof search mechanism and the decision procedures that can be called from Coq. At the same time, when verifying more complex code, my approach offers a very fine-grained control on the structure of the proofs, and it greatly benefits from the integration in a proof assistant for proving nontrivial facts interactively.

¹ The name y is a bound name in the characteristic formula. Since Coq tactics are not allowed to depend on bound names, the tactic xgo actually takes as argument a constant called 'y. The constant 'y, defined by the CFML generator, serves as an identifier used to tag the characteristic formula associated with the subterm let y = half (x − 2) in y + 1. This tagged formula is displayed in Coq as follows: Let 'y y := App half (x-2) in Return (1+y). If the tactic language did support ways of referring to bound variables, then the work-around described in this footnote would not be needed.
module type Fset = sig
  type elem
  type fset
  val empty: fset
  val insert: elem -> fset -> fset
  val member: elem -> fset -> bool
end

module type Ordered = sig
  type t
  val lt: t -> t -> bool
end

Figure 2.1: Module signatures for finite sets and ordered types
2.2 Formalizing purely functional data structures
Chris Okasaki's book Purely Functional Data Structures [69] contains a collection of efficient data structures, with concise implementations and nontrivial invariants. Its code appeared as an excellent benchmark for testing the usability of characteristic formulae for verifying pure programs. I have verified more than half of the contents of the book. Here, I focus on describing the formalization of red-black trees and give statistics on the other formalizations completed.

Red-black trees are binary search trees where each node is tagged with a color, either red or black. Those tags are used to maintain balance in the tree, ensuring a logarithmic asymptotic complexity. Okasaki's implementation appears in Figure 2.2. It consists of a functor that, given an ordered type, builds a module matching the signature of finite sets. The Caml signatures involved appear in Figure 2.1.

I specify each Caml module signature through a Coq module signature. I then verify each Caml module implementation through a Coq module implementation that contains lemmas establishing that the Caml code satisfies its specification. I rely on Coq's module system to ensure that the lemmas proved actually correspond to the expected specification. This strategy allows for modular verification of modular programs.
2.2.1 Specification of the signature
In order to specify functions manipulating red-black trees, I introduce a representation predicate called rep. Intuitively, every data structure admits a mathematical model. For example, the model of a red-black tree is a set of values. Similarly, the model of a priority queue is a multiset, and the model of a queue is a sequence (a list). Sometimes, the mathematical model is simply the value itself. For instance, the model of an integer or of a value of type color is just the value itself.
module RedBlackSet (Elem : Ordered) : Fset = struct
type elem = Elem.t
type color = Red | Black
type fset = Empty | Node of color * fset * elem * fset
let empty = Empty
let rec member x = function
| Empty -> false
| Node (_,a,y,b) ->
if Elem.lt x y then member x a
else if Elem.lt y x then member x b
else true
let balance = function
  | (Black, Node (Red, Node (Red, a, x, b), y, c), z, d)
  | (Black, Node (Red, a, x, Node (Red, b, y, c)), z, d)
  | (Black, a, x, Node (Red, Node (Red, b, y, c), z, d))
  | (Black, a, x, Node (Red, b, y, Node (Red, c, z, d)))
      -> Node (Red, Node(Black,a,x,b), y, Node(Black,c,z,d))
  | (col,a,y,b) -> Node(col,a,y,b)
let rec insert x s =
let rec ins = function
| Empty -> Node(Red,Empty,x,Empty)
| Node(col,a,y,b) as s ->
if Elem.lt x y then balance(col,ins a,y,b)
else if Elem.lt y x then balance(col,a,y,ins b)
else s in
match ins s with
| Empty -> raise BrokenInvariant
| Node(_,a,y,b) -> Node(Black,a,y,b)
end
Figure 2.2: Okasaki's implementation of Red-Black sets
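As a hypothetical usage example (not part of the dissertation), the functor of Figure 2.2 can be instantiated on integers as follows. The sketch assumes that an exception BrokenInvariant is declared and that the functor result carries the customary sharing constraint Fset with type elem = Elem.t, which the figure omits.

(* illustration only: instantiating the red-black set functor on integers *)
module IntOrdered = struct type t = int let lt x y = x < y end
module IntSet = RedBlackSet (IntOrdered)
let () =
  let s = IntSet.insert 3 (IntSet.insert 1 IntSet.empty) in
  assert (IntSet.member 1 s && IntSet.member 3 s && not (IntSet.member 2 s))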
I formalize models through instances of a typeclass named Rep. If values of a type a are modelled by values of type A, then I write Rep a A. For example, consider red-black trees that contain items of type t. If those items are modelled by values of type T (i.e., Rep t T), then trees of type fset are modelled by values of type set T (i.e., Rep fset (set T)), where set is the type constructor for mathematical sets in Coq.
The typeclass Rep contains two fields, as shown below. For an instance of type Rep a A, the first field, rep, is a binary relation that relates values of type a with their model, of type A. Note that not all values admit a model. For instance, given a red-black tree e, the proposition rep e E can only hold if e is a well-balanced, well-formed binary search tree. The second field of Rep, named rep_unique, is a lemma asserting that every value of type a admits at most one model. We sometimes need to exploit this fact in proofs.
Class Rep (a:Type) (A:Type) :=
{ rep : a -> A -> Prop;
rep_unique : forall x X Y,
rep x X -> rep x Y -> X = Y }.
Remark: although representation predicates have appeared previously in the context of interactive program verification (e.g. [63, 27, 56]), my work seems to be the first to use them in a systematic manner through a typeclass definition.
Figure 2.3 contains the specification for an abstract finite set module named F. Elements of the sets, of type elem, are expected to be modelled by some type T and to be related to their models by an instance of type Rep elem T. Moreover, the values implementing finite sets, of type fset, should be related to their model, of type set T, through an instance of type Rep fset (set T). The module signature then contains the specification of the values from the finite set module F. The first one asserts that the value empty should be a representation for the empty set. The specifications for insert and member rely on a special notation, explained next.

So far, I have relied on the predicate AppReturns to specify functions. Even though AppReturns works well for functions of one argument, it becomes impractical for curried functions of higher arity, in particular because one needs to specify the behavior of partial applications. So, I introduce the Spec notation, explaining its meaning informally and postponing its formal definition to §4.2.3. With the Spec notation, the specification of insert, shown below, reads like a prototype: insert takes two arguments, x of type elem and e of type fset. Then, for any model X of x and for any set E that models e, the function returns a finite set e' which admits a model E' equal to {X} ∪ E. Below, \{X} is a Coq notation for a singleton set and \u denotes the set union operator.
Parameter insert_spec :
  Spec insert (x:elem) (e:fset) |R>>
    forall X E, rep x X -> rep e E ->
    R (fun e' => exists E',
         rep e' E' /\ E' = \{X} \u E).
The conclusion, which takes the form R (fun e' => H), can be read as: the application of insert to the arguments x and e returns a value e' satisfying H, where R is bound in the notation |R>>. The notation will be explained later on (§4.2.2). As it is often the case that arguments and/or results are described through their rep predicate, I introduce the RepSpec notation. With this new layer of syntactic sugar, the specification becomes:
Parameter insert_spec :
RepSpec insert (X;elem) (E;fset) |R>>
R (fun E' => E' = \{X} \u E ; fset).
The specification is now stated entirely in terms of the models, and no longer refers to the names of the Caml input and output values. Only the types of those program values remain visible. Those type annotations are introduced by semicolons.
The specification for the function insert given in Figure 2.3 makes two further simplifications. First, it relies on the notation RepTotal, which avoids the introduction of a name R when it is immediately applied. Second, it makes use of a partial application of equality, of the form = {X} ∪ E, for the sake of conciseness. Overall, the interest of introducing several layers of notation is that the final specifications from Figure 2.3 are about the most concise formal specifications one could hope for.
Let me briefly describe the remaining specifications. The function member takes as argument a value x and a finite set e, and returns a boolean which is true if and only if the model X of x belongs to the model E of e. Figure 2.4 contains the specification of an abstract ordered type module named O. Elements of the ordered type t should be modelled by a type T. Values of type T should be ordered by a total order relation. The order relation and the proof that it is total are described through instances of the typeclasses Le and Le_total_order, respectively. An instance of the strict-order relation (LibOrder.lt) is automatically derived through the typeclass mechanism. This relation is used to specify the boolean comparison function lt, defined in the module O.
2.2.2 Verification of the implementation
It remains to specify and verify the implementation of red-black trees. Consider a module O that describes an ordered type. Assume the module O has been verified through a Coq module named OS of signature OrderedSigSpec. Our goal is then to prove correct the module obtained by applying the functor RedBlackSet to the module O, through the construction of a Coq module of signature FsetSigSpec. Thus, the verification of the Caml functor RedBlackSet is carried through the implementation of a Coq functor named RedBlackSetSpec, which depends both on the module O and on its specification OS. For the Coq experts, the first few lines of this Coq functor are shown below.
Module Type FsetSigSpec.
Declare Module F : MLFset. Import F.
Parameter T : Type.
Instance elem_rep : Rep elem T.
Instance fset_rep : Rep fset (set T).
Parameter empty_spec : rep empty \{}.
Parameter insert_spec :
RepTotal insert (X;elem) (E;fset) >> = \{X} \u E ; fset.
Parameter member_spec :
RepTotal member (X;elem) (E;fset) >> bool_of (X \in E).
End FsetSigSpec.
Figure 2.3: Specification of finite sets
Module Type OrderedSigSpec.
Declare Module O : MLOrdered. Import O.
Parameter T : Type.
Instance rep_t : Rep t T.
Instance le_inst : Le T.
Instance le_order : Le_total_order.
Parameter lt_spec :
RepTotal lt (X;t) (Y;t) >> bool_of (LibOrder.lt X Y).
End OrderedSigSpec.
Figure 2.4: Specification of ordered types
Module RedBlackSetSpec
(O:MLOrdered) (OS:OrderedSigSpec with Module O:=O)
<: FsetSigSpec with Definition F.elem := O.t.
Module Import F <: MLFset := MLRedBlackSet O.
The next step in the construction of this functor is the definition of an instance of the representation predicate for red-black trees. To start with, assume that our goal is simply to specify a binary search tree (not necessarily balanced). The rep predicate would be defined in terms of an inductive invariant called inv, as shown below. First, inv relates the empty tree to the empty set. Second, inv relates a node with root y and subtrees a and b to the set {Y} ∪ A ∪ B, where the uppercase variables are the models associated with their lowercase counterparts. Moreover, we need to ensure that all the elements of the left subtree A are smaller than the root Y, and that, symmetrically, elements from B are greater than Y. Those invariants are stated with the help of the predicate foreach. The proposition foreach P E asserts that all the elements in the set E satisfy the predicate P.
Inductive inv : fset -> set T -> Prop :=
| inv_empty :
inv Empty \{}
| inv_node : forall col a y b A Y B,
inv a A -> inv b B -> rep y Y ->
foreach (is_lt Y) A -> foreach (is_gt Y) B ->
inv (Node col a y b) (\{Y} \u A \u B).
A red-black tree is a binary search tree satisfying three additional invariants. First, every path from the root to a leaf should contain the same number of black nodes. Second, no red node can have a red child. Third, the root of the entire tree must be black. In order to capture the first invariant, I extend the predicate inv so that it depends on a natural number n representing the number of black nodes to be found in every path. For an empty tree, this number is zero. For a nonempty tree, this number is equal to the number m of black nodes that can be found in every path of each of the two subtrees, augmented by one if the node is black. The second invariant, asserting that a red node must have black children, can be enforced simply by testing colors. The rep predicate then relates a red-black tree e with a set E if there exists a value n such that inv n e E holds and such that the root of e is black (the third invariant). The resulting definition of inv appears in Figure 2.5.
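To make these three invariants concrete, the following small OCaml checker, written against the datatype of Figure 2.2, tests them dynamically; it is only an illustration and is not part of the Coq development.

(* returns a black count when the tree satisfies the path and color invariants,
   raises Failure otherwise *)
let rec black_height = function
  | Empty -> 1
  | Node (col, a, _, b) ->
      let ha = black_height a and hb = black_height b in
      if ha <> hb then failwith "unequal number of black nodes on some paths";
      (match col, a, b with
       | Red, Node (Red, _, _, _), _
       | Red, _, Node (Red, _, _, _) -> failwith "red node with a red child"
       | _ -> ());
      ha + (match col with Black -> 1 | Red -> 0)

(* the two structural invariants, plus the requirement that the root is black *)
let is_valid_red_black t =
  (match t with Node (Red, _, _, _) -> false | _ -> true)
  && (try ignore (black_height t); true with Failure _ -> false)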
In practice, I further extend the invariant with an extra boolean (this extended definition does not appear here; it can be found in the Coq development). When the boolean is true, the definition of inv is unchanged. However, when the boolean is false, the second invariant, which asserts that a red node cannot have red children, might be broken at the root of the tree. This relaxed version of the invariant is useful to specify the behavior of the auxiliary function balance. Indeed, this function takes as input a color, an item and two subtrees, and one of those two subtrees might have its root incorrectly colored.
Inductive inv : nat -> fset -> set T -> Prop :=
| inv_empty : forall,
inv 0 Empty \{}
| inv_node : forall n m col a y b A Y B,
inv m a A -> inv m b B -> rep y Y ->
foreach (is_lt Y) A -> foreach (is_gt Y) B ->
(n = match col with Black => m+1 | Red => m end) ->
(match col with | Black => True
| Red => root_color a = Black
/\ root_color b = Black end) ->
inv n (Node col a y b) (\{Y} \u A \u B).
Global Instance fset_rep : Rep fset (set T).
Proof. apply (Build_Rep
(fun e E => exists n, inv n e E /\ root_color e = Black)).
... (* the proof for the field rep_unique is not shown *)
Defined.
Figure 2.5: Representation predicate for red-black trees
Figure 2.6 shows the lemma corresponding to the verification of insert. Observe that the local recursive function ins is specified in the script. It is then verified with the help of the tactic xgo.
2.2.3 Statistics on the formalizations
I have specified and verified various implementations of queues, double-ended queues, priority queues (heaps), sets, as well as sortable lists, catenable lists and random-access lists. Caml implementations are directly adapted from Okasaki's SML code [69]. All code and proofs can be found online.² Figure 2.7 contains statistics on the number of non-empty lines in Caml source code and in Coq scripts. The programs considered are generally short, but note that Caml is a concise language and that Okasaki's code is particularly minimalist.
Details are given about Coq scripts. The column inv indicates the number of lines needed to state the invariant of each structure. The column facts gives the length of proof script needed to state and prove facts that are used several times in the verification scripts. The column spec indicates the number of lines of specification involved, including the specification of local and auxiliary functions. Finally, the last column describes the size of the actual verification proof scripts where characteristic formulae are manipulated. Note that Coq proof scripts also contain several lines for importing and instantiating modules, a few lines for setting up automation, as well as one line per function for registering its specification in a database of lemmas.
² http://arthur.chargueraud.org/research/2010/cfml/
Lemma insert_spec : RepTotal insert (X;elem) (E;fset) >>
= \{X} \u E ; fset.
Proof.
xcf. introv RepX (n&InvE&HeB).
xfun_induction_nointro_on size (Spec ins e |R>>
forall n E, inv true n e E -> R (fun e' =>
inv (is_black (root_color e)) n e' (\{X} \u E))).
clears s n E. intros e IH n E InvE. inverts InvE as.
xgo*. simpl. constructors*.
introv InvA InvB RepY GtY LtY Col Num. xgo~.
(* case insert left *)
destruct~ col; destruct (root_color a); tryifalse~.
ximpl as e. simpl. applys_eq* Hx 1 3.
(* case insert right *)
destruct~ col; destruct (root_color b); tryifalse~.
ximpl as e. simpl. applys_eq* Hx 1 3.
(* case no insertion *)
asserts_rewrite~ (X = Y). apply~ nlt_nslt_to_eq.
subst s. simpl. destruct col; constructors*.
xlet as r. xapp~. inverts Pr; xgo. fset_inv. exists*.
Qed.
Figure 2.6: Verification of the function insert
I evaluate the relative cost of a formal verification by comparing the number of lines specific to formal proofs (figures from columns facts and verif) against the number of lines required in a properly-documented source code (source code plus invariants and specifications). For particularly tricky data structures, such as bootstrapped queues, Hood-Melville queues and binomial heaps, this ratio is close to 2.0. In all other structures, the ratio does not exceed 1.25. For a user as fluent in Coq proofs as in Caml programming, it means that the formalization effort can be expected to be comparable to the implementation and documentation effort (at least in terms of lines of code).
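As a concrete reading of these figures, using the entries of Figure 2.7: for RedBlackSet the ratio is (43 + 53) / (35 + 20 + 22) ≈ 1.25, while for HoodMelvilleQueue it is (53 + 180) / (41 + 43 + 33) ≈ 2.0, matching the two bounds quoted above.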
Development            Caml    Coq    inv   facts   specif   verif
BatchedQueue             20     73      4       0       16      16
BankersQueue             19     95      6      20       15      16
PhysicistsQueue          28    109      8      10       19      32
RealTimeQueue            26    104      4      12       21      28
ImplicitQueue            35    149     25      21       14      50
BootstrappedQueue        38    212     22      54       29      77
HoodMelvilleQueue        41    363     43      53       33     180
BankersDeque             46    172      7      26       24      58
LeftistHeap              36    132     16      28       15      22
PairingHeap              33    137     13      17       16      35
LazyPairingHeap          34    132     12      24       14      32
SplayHeap                53    176     10      41       20      59
BinomialHeap             48    367     24     118       41     110
UnbalancedSet            21     85      9      11        5      22
RedBlackSet              35    183     20      43       22      53
BottomUpMergeSort        29    151     23      31        9      40
CatenableList            38    153      9      20       23      37
RandomAccessList         63    272     29      37       47      83
Total                   643   3065    284     566      383     950

Figure 2.7: Non-empty lines of source code and proof scripts
Chapter 3
Verification of imperative programs
This chapter contains an overview of the treatment of imperative programs. After introducing some notation for specifying heaps, I present the specification of the primitive functions for manipulating references, and I explain the treatment of for-loops and while-loops, using loop invariants. I then introduce representation predicates for mutable data structures, focusing on mutable lists to illustrate the definition and manipulation of such predicates. I study the example of a function that computes the length of a mutable list and explain why reasoning on loops using invariants does not take advantage of local reasoning. I then present a different treatment of loops. Finally, I explain the treatment of higher-order functions and that of functions with local state.
3.1 Examples of imperative functions
3.1.1 Notation for heap predicates
I start by describing informally the combinators for constructing heap predicates, which have type Hprop. Formal definitions are postponed to §5.2.1. The empty heap is written [] and the spatial conjunction of two heaps is written H1 ∗ H2. The predicate l ↪_T v describes a reference cell l whose contents is described in Coq as the value v of type T. Since the type T can be deduced from the value v, I often omit it and write l ↪ v. A more general predicate for describing mutable data structures, written l ⇝ T v, is explained further on. I lift propositions and existentials into heap predicates as follows. The predicate [P] holds of the empty heap when the proposition P is true. The predicate ∃∃x. H holds of a heap h if there exists a value x such that the predicate H holds of h (note that x is bound in H). The corresponding ASCII notations, which are used in Coq statements, appear below.
Heap predicate              LaTeX notation    ASCII notation
Empty heap                  []                []
Separating conjunction      H1 ∗ H2           C1 \* C2
Mutable data structure      l ⇝ T v           l > T v
Reference cell              l ↪ v             l > v
Lifted proposition          [P]               [P]
Lifted existential          ∃∃x. H            Hexists x, H
The post-condition for a term t describes both the output value and the output state returned by the evaluation of t. If t admits the type τ, then the post-condition for its evaluation is a predicate of type ⟨τ⟩ → Hprop. So, a post-condition generally takes the form λx. H. In the particular case where the return value x is of type unit, the name x is irrelevant, and I write # H as a shorthand for λ_ : unit. H. In the particular case where the evaluation of a term returns exactly a value v in the empty heap, the post-condition takes the form λx. [x = v]. To describe such a post-condition, I use the shorthand \= v. Finally, it is useful to extend a post-condition with a piece of heap. The predicate Q ⋆ H is a post-condition for a term that returns a value x and a heap h such that the heap h satisfies the predicate (Q x) ∗ H. The corresponding ASCII notations are given in the next table.
Post-condition predicate    LaTeX notation    ASCII notation
General post-condition      λx. H             fun x => H
Unit return value           # H               # H
Exact return value          \= v              \= v
Separating conjunction      Q ⋆ H             Q \*+ H
The symbol (▷) denotes entailment between heap predicates, so the proposition H1 ▷ H2 asserts that any heap satisfying H1 also satisfies H2. Similarly, Q1 ▶ Q2 asserts that a post-condition Q1 entails a post-condition Q2, in the sense that for any value x and any heap h, the proposition Q1 x h implies Q2 x h.

Entailment relation          LaTeX notation    ASCII notation
Between heap predicates      H1 ▷ H2           H1 ==> H2
Between post-conditions      Q1 ▶ Q2           Q1 ===> Q2
3.1.2 Specification of references
In this section, I present the specification of functions manipulating references. I start with the function incr, which increments the contents of a reference on an integer. Recall the specification of the function incr from the introduction. It asserts that the proposition AppReturns incr r (r ↪ n) (# r ↪ n + 1) holds for any location r and any integer n. The specification can be presented using the notation Spec, which is like that introduced in the previous chapter except that the variable R bound by the Spec notation now stands for a predicate that takes as argument both a pre-condition and a post-condition.
Lemma incr_spec :
Spec incr r |R>> forall n, R (r > n) (# r > n+1)
Observe how the variable R, which describes the behaviour of the application of incr to the argument r, plays the same role as the predicate AppReturns incr r. The primitive function ref takes as argument a value v and allocates a reference with contents v. Its pre-condition is the empty heap, and its post-condition asserts that the application of ref to v produces a location l in a heap of the form l ↪ v. The specification is polymorphic in the type A of the value v. Remark: since ref is a built-in function, its specification is not proved as a lemma but is taken as an axiom.
Axiom ref_spec : forall A,
Spec ref (v:A) |R>> R [] (fun l => l > v).
The function get applies to a location l and to a heap of the form l ↪ v, and returns exactly the value v, without changing the heap. Hence its specification, shown next.
Spec get (l:Loc) |R>> forall (v:A),
R (l > v) (\=v \*+ l > v).
The function set applies to a location l and to a value v. The heap must be of the form l ↪ v′ for some value v′, indicating that the location l is already allocated. The application of set produces the unit value and a heap of the form l ↪ v.
Spec set (l:Loc) (v:A) |R>> forall (v':A),
R (l > v') (# l > v).
One can prove the specification of incr with respect to the specifications of get and set. The verification is conducted on the administrative normal form of the definition of the function incr, where the side-effect associated with reading the contents of the reference is separated from the action of updating the contents of the reference. The normal form of incr, shown next, is automatically generated by CFML.
let incr r = (let x = get r in set r (x + 1))
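In plain OCaml syntax, where get and set correspond to the primitives (!) and (:=), this normal form reads as follows (shown only for illustration):

let incr r = (let x = !r in r := x + 1)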
The proof of the specification of incr is quite short: xgo. xsimpl. The tactic xgo follows the structure of the code and exploits the specifications of the functions for reading and updating references. The tactic xsimpl is used to discharge the proof obligation produced, which simply consists in checking that, under the assumption x = n, the heap predicate r ↪ x + 1 is equivalent to the heap predicate r ↪ n + 1.
3.1.3 Reasoning about for-loops
To illustrate the treatment of for-loops, I consider a function that takes as argument a non-negative integer k and a reference r, and then calls k times the function incr on the reference r.
let incr_for k r = (for i = 1 to k do (incr r) done)
The specification of this function asserts that if n is the initial contents of the reference r, then n + k is its final contents.
Spec incr_for k r |R>> forall n, k >= 0 ->
R (r > n) (# r > n + k).
The proof involves providing a loop invariant for reasoning on the loop. The tactic xfor_inv expects a predicate I, of type Int → Hprop, describing the state of the heap in terms of the loop counter. In the example, the appropriate loop invariant, called I, is defined as λi. (r ↪ n + i − 1). One may check that an execution of the loop body, which increments r, turns a heap satisfying I i into a heap satisfying I (i + 1). One may also check that, when i is equal to 1, the heap predicate I 1 is equivalent to the initial heap (r ↪ n). Moreover, when i is equal to k + 1, the heap predicate I (k + 1) is equivalent to the final heap (r ↪ n + k). Note that the post-condition of the loop is described by I (k + 1) and not by I k, because I k describes the heap before the final iteration of the loop, whereas I (k + 1) describes the heap after that final iteration.
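As a quick sanity check outside of Coq, one can run the function and test the post-condition directly in OCaml; the harness below is only an illustration and is not part of CFML.

let incr_for k r = (for i = 1 to k do (incr r) done)

let () =
  let n = 5 and k = 3 in
  let r = ref n in
  incr_for k r;
  assert (!r = n + k)   (* the final heap satisfies r ↪ n + k, i.e. I (k + 1) *)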
The characteristic formula of a for-loop of the form for i = a to b do t is shown next. The invariant I is quantified existentially at the head of the formula. A conjunction of three propositions follows. The first one asserts that the initial heap H entails the application of the invariant to the initial value a of the loop counter. The second one asserts that the execution of the body t of the loop, starting from a heap satisfying the invariant I i for a valid value of the loop counter i, terminates and produces a heap that satisfies the invariant I (i + 1). The third and last proposition checks that the predicate I (b + 1) entails the expected final heap description Q tt (where tt denotes the unit value). This third proposition is in fact generalized so as to correctly handle the case where a is greater than b, in which case the loop body is not executed at all. The heap predicate I (max a (b + 1)) concisely takes into account both the case a ≤ b and the case a > b.
⟦for i = a to b do t⟧  ≡  frame (λH Q.  ∃I.   (H ▷ I a)
                                            ∧ (∀i ∈ [a, b]. ⟦t⟧ (I i) (# I (i + 1)))
                                            ∧ (I (max a (b + 1)) ▷ Q tt) )

Remark: the function that increments k times a reference r may in fact be assigned a more general specification that also covers the case where k is a negative value. This specification, shown next, involves a post-condition asserting that the contents of r is equal to n plus the maximum of k and 0.
Spec incr_for k r |R>> forall n,
R (r > n) (# r > n + max k 0).
3.1.4 Reasoning about while-loops
I now adapt the previous example to a while-loop. The function shown below takes as argument two references, called s and r, and executes a while-loop. As long as the contents of s is positive, the contents of r is incremented and the contents of s is decremented.
let incr_while s r = (while (!s > 0) do (incr r; decr s) done)
If k denotes the initial contents of s (assume k to be non-negative for the sake of simplicity) and if n denotes the initial contents of r, then the final contents of s is equal to zero and the final contents of r is equal to n + k. The corresponding specification appears next.
Spec incr_while s r |R>> forall k n, k >= 0 ->
R (s > k \* r > n) (# s > 0 \* r > n+k).
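Again, a small OCaml test harness (not part of the dissertation) can exercise this specification on concrete values:

let incr_while s r = (while (!s > 0) do (incr r; decr s) done)

let () =
  let k = 4 and n = 10 in
  let s = ref k and r = ref n in
  incr_while s r;
  assert (!s = 0 && !r = n + k)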
The function is proved correct using a loop invariant. Contrary to for-loops, there is no loop counter to serve as index for the invariant. So, I artificially introduce some indices. Those indices are used to describe how the heap evolves through the execution of the loop, and to argue for termination, using a well-founded relation over indices.

In the example, we may choose to index the invariant with values of type Int. The invariant I is then defined as λi. (s ↪ i) ∗ (r ↪ n + k − i) ∗ [i ≥ 0]. The execution of the loop increments r and decrements s, so it turns a heap satisfying I i into a heap satisfying I (i − 1). Observe that the initial state of the loop is described by I k and that the final state of the loop is described by I 0. Since the value of the index i decreases from k to 0, it is a measure that justifies the termination of the loop.

The characteristic formula for a loop while t1 do t2, in its version based on a loop invariant, appears below. This formula quantifies existentially over a type A used to represent indices, over an invariant I of type A → Hprop describing heaps before evaluating the loop condition, over an invariant J of type A → bool → Hprop describing the post-condition of the loop condition (J is needed because the evaluation of t1 may modify the store), and over a binary relation (≺) of type A → A → Prop.

A conjunction of five propositions follows. The first one asserts that (≺), the relation used to argue for termination, is well-founded. The second one requires the existence of an index X0 such that the initial heap satisfies the heap predicate I X0. The third proposition states that, for any index X, the loop condition t1 should be executable in a heap of the form I X and produce a heap of the form J X. The two remaining propositions correspond to the cases where the loop condition returns true or false, respectively.

When the loop condition returns true, the loop body t2 is executed in a heap satisfying J X true. The execution of that body should produce a heap satisfying the invariant I Y for some index Y, which is existentially quantified in the formula. To ensure termination, Y has to be smaller than X, written Y ≺ X. When the loop condition returns false, the loop terminates in a heap satisfying J X false. This heap description should entail the expected heap description Q tt.
⟦while t1 do t2⟧  ≡  frame (λH Q.  ∃A. ∃I. ∃J. ∃(≺).
                                      well-founded (≺)
                                    ∧ ∃X0. H ▷ I X0
                                    ∧ ∀X. ⟦t1⟧ (I X) (J X)
                                    ∧ ∀X. ⟦t2⟧ (J X true) (# ∃∃Y. (I Y) ∗ [Y ≺ X])
                                    ∧ ∀X. J X false ▷ Q tt )
3.2 Mutable data structures
In this section, I introduce a family of predicates, written x ⇝ G X, for describing linear mutable data structures, such as mutable lists and mutable trees. More generally, a linear data structure is a structure that does not involve sharing of sub-structures. Those predicates are also used to describe purely-functional data structures that may contain mutable elements. The treatment of non-linear data structures, where pointer aliasing is involved, is discussed as well.
3.2.1 Recursive ownership
In the heap predicate
x
G X,
the value
x
denotes the entry point of a mutable
data structure, for example the location at the head of a mutable list. The value
describes the mathematical model of that data structure, for example
a Coq list. The predicate
G
X
captures the relation between the entry point
structure, the mathematical model
X
X
might be
x
of the
of the structure, and the piece of heap contain-
ing the representation of that structure. The predicate
G
is called a representation
predicate.
G X is dened as G X x. (More generally, x
P is dened
as P x.) So, if x has type a and X has type A, then G has type A → a → Hprop. The
tagging of the application of G X to x with an arrow symbol serves two purposes:
The predicate
x
improving the readability of specications, and simplifying the implementation of
the Coq tactics from the CFML library.
For example, the representation predicate
The heap predicate
l
Mlist G L
of a mutable list starting at location
list
L.
Mlist is used to describe mutable lists.
describes a heap that contain the representation
l
and whose mathematical model is the Coq
G here describes the relation between the items
G has type a →
predicate Mlist G has type Loc → List A → Hprop.
The representation predicate
found in the mutable list and their mathematical model. So, when
A → Hprop,
the representation
The representation predicate for pure values (e.g., values of type
Int)
simply
asserts that the mathematical model of a base value is the value itself. This representation predicate is called
Id,
and its formal denition is shown next.
Definition Id (A:Type) (X:A) (x:A) : Hprop := [x = X].
42
CHAPTER 3.
VERIFICATION OF IMPERATIVE PROGRAMS
IdA to denote the type application of Id to the type A, and simply
Id when the type A can be deduced from the context. Note that, for any type
A and any value x of type A, the predicate x
IdA x holds of the empty heap.
Combining the predicate Mlist and Id, we can build for instance a heap predicate
l
Mlist IdInt L, which describes a mutable list of integers. Here, the location l has
type Loc and L has type List Int. More interestingly, we can go one step further and
Thereafter, I write
write
construct a heap predicate describing a mutable list that contains mutable lists of
l
Mlist (Mlist IdInt ) L.
List (List Int). Intuitively,
integers as elements, through a heap predicate of the form
The mathematical model
L
the representation predicate
here is a list of lists, of type
Mlist (Mlist IdInt )
describes the ownership of a mutable
list and of all the mutable lists that it contains. There are, however, some situations
where we only want to describe the ownership of the main mutable list.
In this
case, we view the mutable list as a list of locations, through a predicate of the form
l
L
Mlist IdLoc L.
The list
L
now has type
List Loc,
and the locations contained in
are entry points for other mutable lists, which can be described using other heap
predicates.
To summarize, representation predicates for mutable data structures not only
relate pieces of data laid out in memory with their mathematical model, they also
indicate the extent of ownership. On the one hand, the application of a representation predicate to another nontrivial representation predicate describes recursive
ownership of data structures. On the other hand, the representation predicate
IdLoc
cuts recursive ownership. Although representation predicates are commonplace in
program verication, the denition of parametric representation predicates, where a
representation predicate for a container takes as argument the representation predicate for the elements, does not appear to have been exploited in a systematic manner
in other program verication tools. Remark: the polymorphic representation predicates that I use here are the heap-predicate counterpart of the capabilities involved
in the type system that I have developed during the rst year of my PhD, together
with François Pottier [16].
3.2.2 Representation predicate for references
In this section, I explain how to dene a representation predicate for references. I
also describe the focus and unfocus operations, which allow rearranging one's view
on memory.
So far, a heap containing a reference has been specied using the predicate
which asserts that the location
the situation where
x
r
points towards the value
Ref G X
some value
x
To concisely describe
corresponds to the entry point of a mutable data structure, I
introduce a representation predicate for references, called
r
x.
r ,→ x,
Ref.
The heap predicate
describes a heap made of a memory cell at address
r
that points to
on the one hand, and of a heap described by the predicate
x
GX
on the other hand.
Definition Ref (a:Type) (A:Type) (G:A->a->Hprop) (X:A) (r:Loc) :=
Hexists x:a, (r > x) \* (x > G X).
3.2.
MUTABLE DATA STRUCTURES
For example, a reference
heap predicate
r
whose contents is an integer
Ref IdInt n,
r
s,
n
is described by the
r ,→Int n.
which is logically equivalent to
interesting example is that of a reference
reference, call it
43
r
whose content is the location of another
whose contents is the integer
conjunction of two heap predicates:
also describe the same heap as:
r
A more
n.
A direct description involves a
Ref IdLoc s) ∗ (s
Ref IdInt n). We may
Ref (Ref IdInt ) n. This second predicate is more
(r
concise than the rst one, however it does not expose the location
s
of the inner
reference.
The heap predicate ∃
∃s. (r
cate Ref (Ref IdInt )
Ref IdLoc s) ∗ (s
Ref IdInt n)
and the heap predi-
are equivalent in the sense that one can convert from one to the
other by unfolding the denition of the representation predicate
unfolding the denition of
Ref
Ref.
The action of
is called focus, following a terminology introduced
by Fähndrich and DeLine [23] and reused in [16] for a closely-related operation. The
statement of the focus lemma for references is an implication between heap predicates.
It takes the ownership of a reference
a model
X
r
and of its contents represented by
with respect to some representation predicate
conjunction of the ownership of a reference
r
G,
and turns it into the
that contains a base value
hand, and of the ownership of a data structure whose entry point is
model with respect to
G
is the value
X
x on the one
x and whose
on the other hand.
Lemma focus_ref : forall a A (G:A->a->Hprop) (r:Loc) (X:A),
(r > Ref G X) ==> Hexists x, (r > x) \* (x > G X).
The reciprocal operation is called unfocus.
It consists in packing the ownership
of a reference with the ownership of its contents back into a single heap predicate.
The unfocus lemma could be formulated as the reciprocal of the focus lemma, yet
in practice it is more convenient to extract the existential quantiers as shown next.
Lemma unfocus_ref : forall a A G (r:Loc) (x:a) (X:A),
(r > x) \* (x > G X) ==> (r > Ref G X).
The lemmas
focus_ref
and
unfocus_ref
are involved in verication proof scripts
for changing from one view on memory to another. In the current implementation,
those lemmas need to be invoked explicitly by the user, yet it might be possible to
design specialized decision procedures that are able to automatically exploit focus
and unfocus lemmas.
3.2.3 Representation predicate for lists
In this section, I present the denition of the representation predicate
mutable lists, as well as the representation predicate
Mlist
for
Plist for purely-functional lists.
I then describe the focus and unfocus lemmas associated with list representation
predicates.
For the sake of type soundness, the Caml programming language does not feature null pointers. Nevertheless, a standard backdoor (called Obj.magic) can be exploited to augment the language with null pointers. In Caml extended with null pointers, one can implement C-style mutable lists. The head cell of a nonempty mutable list is represented by a reference on a pair of an item and of the tail of the list (a two-field record could also be used), and the empty list is represented by the null pointer. Hence the Caml type of mutable lists shown next.
type 'a mlist = ('a * 'a mlist) ref
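One possible encoding of this backdoor is sketched below; the helper names (null, is_null, cons, tail) are mine, null is written as a function to sidestep OCaml's value restriction, and dereferencing the null pointer is of course unsafe. This is only an illustration, not the exact encoding used by CFML.

(* the "null pointer" is the immediate integer 0, which is never
   physically equal to an allocated reference *)
let null () : 'a mlist = Obj.magic 0
let is_null (l : 'a mlist) : bool = (Obj.repr l == Obj.repr 0)
let cons x l : 'a mlist = ref (x, l)
let tail (l : 'a mlist) : 'a mlist = snd !l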
The Coq denition of the representation predicate
pointer
l
with a Coq list
structure of the list
L,
empty list, the pointer
L.
The predicate
Mlist
Mlist
appears next. It relates a
is dened inductively following the
as done in earlier work on Separation Logic. When
l
should be the null pointer, written
null.
When
L
L
is the
has the
X :: L0 , the pointer l should point to a reference cell that contains a pair (x, l0 ),
0
such that X is the model of the data structure of entry point x and such that L is
0
the model of the list starting at pointer l . Note that a heap predicate of the form
(l ,→ v) entails the fact that the location l is not null.
form
Fixpoint Mlist A a (G:A->a->Hprop) (L:list A) (l:Loc) : Hprop :=
match L with
| nil => [l = null]
| X::L' => Hexists x l',
(l > (x,l')) \* (x > G X) \* (l' > Mlist G L')
end.
Mlist using the representation
Ref and the representation predicate Pair, which is explained next. The
denition of Pair is such that a pair (X, Y ) is the mathematical model of a pair (x, y)
with respect to the representation predicate Pair G1 G2 when X is the model of x
One can devise a slightly more elegant denition of
predicate
with respect to
of
G1
and
Y
is the model of
y
with respect to
G2 .
The Coq denition
Pair appears next (for the sake of readability, I pretend that a Coq denition can
bind pairs of names directly).
Definition Pair A B a b (G1:A->a->Hprop) (G2:B->b->hprop)
((X,Y):A*B) ((x,y):a*b) : Hprop :=
(x > G1 X) \* (y > G2 Y).
Mlist
X :: L0 is the
model of the pointer l with respect to the representation predicate Mlist G if the pair
(X, L0 ) is the mathematical model of the pointer l with respect to the representation
predicate Ref (Pair G (Mlist G)). So, the updated denition no longer needs to involve
The denition of
can now be improved as follows.
The list
existential quantiers. Existential quantiers are only involved in the focus lemma
presented further on.
Fixpoint Mlist A a (G:A->a->Hprop) (L:list A) (l:Loc) : Hprop :=
match L with
| nil => [l = null]
| X::L' => l > Ref (Pair G (Mlist G)) (X,L')
end.
3.2.
MUTABLE DATA STRUCTURES
45
I now dene the representation predicate
trary to
Mlist,
the predicate
Plist
Plist
for purely-functional lists. Con-
relates a Coq list
l
of type
List a
with another
List A, where the representation predicate G associated with the
A → a → Prop. The denition of Plist is conducted by induction
on the list L and by case analysis on the list l. The Coq denition appears next.
Observe that the lists l and L must be either both empty or both non-empty.
Coq list
L
of type
elements has type
Fixpoint List A a (G:A->a->Hprop) (L:list A) (l:list a) : Hprop :=
match L,l with
| nil , nil => [True]
| X::L', x::l' => (x > G X) \* (l' > List G L')
| _
, _
=> [False]
end.
An example of use of the predicate
Plist
is given later on.
3.2.4 Focus operations for lists
In this section, I describe focus and unfocus lemmas for mutable lists. I rst present
the lemmas that are used in practice, and explain afterwards how those lemmas
can be derived as immediate corollaries of a few lower-level lemmas. Note that the
contents of this section could be directly applied to other linear data structures such
as mutable trees and purely-functional lists. Note also that the lemmas presented in
this section have to be stated and proved manually. Indeed, the current version of
CFML is only able to generate focus and unfocus lemmas for data structures that
do not involve null pointers.
I start with the unfocus operation for empty lists. There are two versions. The
rst one asserts that if a pointer
l
has for model the empty list then the pointer
l
must be the null pointer. The second one reciprocally asserts that if a null pointer
has for model a list
L
then the list
L
must be the empty list.
Lemma unfocus_nil : forall a A (G:A->a->Hprop) (l:Loc),
l > Mlist G nil ==> [l = null].
Lemma unfocus_nil' : forall a A (G:A->a->Hprop) (L:list A),
null > Mlist G L ==> [L = nil].
The focus operation for the empty list asserts that a null pointer has for model the
empty heap.
This statement does not involve any pre-condition, so the left-hand
side is the empty heap predicate.
Lemma focus_nil : forall a A (G:A->a->Hprop),
[] ==> null > Mlist G nil.
There are two focus operations for non-empty lists. The rst one asserts that
if a pointer
l
has for model the list
X :: L0 ,
then this pointer points to a reference
46
CHAPTER 3.
VERIFICATION OF IMPERATIVE PROGRAMS
cell that contains a pair of a value
0
as L . The values
x
x
modelled as
X
and of a pointer
l0
modelled
0
and l are existentially quantied. Henceforth, for the sake of
readability, I do not show the universal quantications at the head of statements.
Lemma focus_cons :
(l > Mlist G (X::L')) ==>
Hexists x l', (l > (x,l')) \* (x > G X) \* (l' > Mlist G L').
l has for model the list
X :: L0 , and l points to a reference cell that contains
0
0
as X and of a pointer l modelled as L .
The second focus operation asserts that if a non-null pointer
L,
then this list decomposes as
a pair of a value
x
modelled
Lemma focus_cons' :
[l <> null] \* (l > Mlist G L) ==> Hexists x l', Hexists X L',
[L = X::L'] \* (l > (x,l')) \* (x > G X) \* (l' > Mlist G L').
The unfocus lemma for non-empty lists states the reciprocal entailment of the focus
operations.
Lemma unfocus_cons :
(l > (x,l')) \* (x > G X) \* (l' > Mlist G L') ==>
(l > Mlist G (X::L')).
All the lemmas presented in this section can be derived from three core lemmas.
Those lemmas are stated with an equality symbol, which corresponds to logical
equivalence between heap predicates (recall that I work in Coq with predicate extensionality).
H2
and
H2
In other words, the equality
entails
denition of
Mlist
H1 .
H1 = H2
holds when both
H1
entails
Working with an equality reduces the number of times the
needs to be unfolded in proofs.
Lemma models_iff :
(l > Mlist G L) = [l = null <-> L = nil] \* (l > Mlist G L).
Lemma models_nil :
[] = (null > Mlist G nil).
Lemma models_cons :
l > Mlist G (X::L')) = Hexists x l',
(l > (x,l')) \* (x > G X) \* (l' > Mlist G L').
3.2.5 Example: length of a mutable list
The function
length
computes the length of a mutable list, using a while loop to
traverse the list. It involves a counter
been passed by, as well as a pointer
h
n
for counting the items that have already
for maintaining the current position in the
list.
let length (l : 'a mlist) : int =
let n = ref 0 in
let h = ref l in
while (!h) != null do
incr n;
h := tail !h;
done;
!n
The specication of the length function is stated in terms of the representation
predicate
Mlist.
If the pointer
list modelled by the Coq list
length of
L.
l given
L, then
as input to the function describes a mutable
the function returns an integer equal to the
The input list is not modied, so the post-condition also mentions a
copy of the pre-condition.
Note that the references
n
and
h
allocated during the
execution of the function are local; they do not appear in the specication.
Lemma length_spec : forall a,
Spec length (l:Loc) |R>> forall A (G:A->a->Hprop) L
R (l > Mlist G L) (\= length L \*+ l > Mlist G L).
I explain later how to prove this specication correct without involving a loop invariant.
In this section, I follow the standard proof technique, which is based on
a loop invariant and involves the manipulation of segments of mutable lists.
representation predicate
e.
It relates a pointer
l
(the head of the list segment) with a Coq list
contained between the pointer
l
and the pointer
a list segment heap predicate takes the form
MlistSeg
The
MlistSeg e G describes a list segment that ends on a pointer
directly extends that of
l
Mlist.
e
L
if the items
are modelled by the list
MlistSeg e G L.
L.
So,
The denition of
Fixpoint MlistSeg (e:Loc) A a (G:A->a->Hprop) (L:list A) (l:Loc) :=
match L with
| nil => [l = e]
| X::L' => l > Ref (Pair G (Mlist G)) (X,L')
end.
A mutable list is a segment of mutable list that ends on the null pointer. Hence,
Mlist
is equivalent to
MlistSeg null.
In fact, I use this result as a denition for
Mlist.
Definition Mlist := MlistSeg null.
The loop invariant for the while-loop involved in the length function decomposes
the list in two parts: the segment of list made of the cells that have already been
passed by, and the list made of the cells that remain ahead. So, the mathematical
model
L
is decomposed in two sub-lists
L1
and
L2 .
If
e
is the pointer contained
h, then the list L1 models the segment that starts at the pointer
l and ends at the pointer e, and the list L2 models the list that starts at pointer
e. The counter n is equal to the length of L1 . During the execution of the loop,
the pointer h traverses the list, so the length of L1 increases and the length of L2
decreases. I use the list L2 as index for the loop invariant (recall Ÿ3.1.4). The loop
in the reference
invariant is then dened as follows.
48
CHAPTER 3.
VERIFICATION OF IMPERATIVE PROGRAMS
Definition inv L2 := Hexists e L1,
[L = L1++L2] \* (n > length L1) \* (h > e)
\* (l > MlistSeg e G L1) \* (e > MList G L2).
One can check that the instantiation of
L2
as the list
L
(in which case
L1
must be
the empty list) describes the state before the beginning of the loop, and that the
instantiation of
L2
as the empty list (in which case
L1
must be the list
L)
describes
the state after the nal iteration of the loop.
At a given iteration, either the pointer e is null, in which case the loop terminates, or the pointer e is not null, in which case the loop body is executed. In this case, the focus lemma for non-empty mutable lists can be exploited (lemma focus_cons') to get the knowledge that the list L2 takes the form X :: L2' and that e points to a reference on a pair of the form (x, e') such that X models x and L2' models the list starting at e'. An unfocus operation that operates at the tail of a list segment can be used to extend the segment ranging from l to e into a segment ranging from l to e' (the statement of this unfocus lemma is not shown here). The other facts involved in checking that the execution of the loop body leaves a heap satisfying the invariant instantiated with the list L2' are straightforward to prove.
3.2.6 Aliased data structures
The heap predicate l ⇝ Mlist (Ref IdInt) L describes a mutable list that contains distinct references as elements. Some algorithms, however, involve manipulating a mutable list containing possibly-aliased pointers, in the sense that two pointers from the list may point to a same reference. In this case, one would use the predicate l ⇝ Mlist IdLoc L instead, in which L is a list of locations (of type List Loc). To describe the references being pointed to by pointers from the list L, we need a new heap predicate that describes a set of mutable data structures.
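To make the aliasing scenario concrete, here is a small OCaml illustration (a hypothetical example of mine, not taken from the dissertation): the same reference occurs twice in a list, so an update performed through one occurrence is visible through the other.

(* Two aliased pointers stored in the same list. *)
let () =
  let r = ref 0 in
  let l = [r; r] in      (* both elements point to the same cell *)
  List.iter incr l;      (* increments r twice *)
  assert (!r = 2)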
A value of type Heap describes a set of memory cells. It is isomorphic to a map from locations to values. Values of type Heap only offer a low-level view on memory, because they do not make use of representation predicates for describing the elements contained in the memory cells. So, I introduce a group predicate, written Group, to describe a piece of heap containing a set of mutable data structures of the same type. If G is a representation predicate of type A → Loc → Prop, and if M is a map from locations to values of type A, then the predicate Group G M describes a disjoint union of heaps of the form xi ⇝ G Xi, where xi denotes a key from the map M and where Xi denotes the value bound to xi in M. The group predicate is defined using a fold operation on the map M to iterate applications of the separating conjunction, as follows.

Definition Group A (G:A->loc->hprop) (M:map Loc A) : Hprop :=
  Map.fold (fun (acc:Hprop) (x:Loc) (X:A) => acc \* (x ~> G X))
    [] M.
The focus operation consists in taking one element out of a group. Reciprocally, the unfocus operation on a group consists in adding one element to a group. The formal statement of those operations involves manipulation of finite maps. Insertion or update of a value in a map is written M\[x:=X], reading is written M\[x], and removal of a binding is written M\|x. Moreover, the proposition index M x asserts that x is a key bound in M (reading at a key that is not bound in a map returns an unspecified value). The statements of the focus and unfocus operations on groups are then expressed as follows.
Lemma group_unfocus :
  forall A (G:A->loc->hprop) (M:map Loc A) (x:Loc) (X:A),
  (Group G M) \* (x ~> G X) ==> (Group G (M\[x:=X]))

Lemma group_focus :
  forall A (G:A->loc->hprop) (M:map Loc A) (x:Loc), index M x ->
  (Group G M) ==> (Group G (M\|x)) \* (x ~> G (M\[x]))
Most often, group predicates are not manipulated directly but through derived specifications. For example, consider a mutable list of possibly-aliased integer references. The corresponding heap is described by a mutable list modelled as L, by a group predicate modelled as M, and by a proposition asserting that values from L are keys in M.

l ⇝ Mlist IdLoc L ∗ Group (Ref IdInt) M ∗ [∀x. x ∈ L ⇒ x ∈ dom(M)]

To read at a pointer occurring in the list L, one can use a derived specification for get specialized for reading in groups of the form Group (Ref IdA) M. This specification asserts that reading at a location x that belongs to a group described by M returns the value M[x].

Spec get (x:Loc) |R>> forall A (M:map Loc A), index M x ->
  R (Group (Ref Id) M) (\= M\[x] \*+ Group (Ref Id) M).
3.2.7 Example: the swap function
A standard example for illustrating the reasoning on possibly-aliased pointers is the
swap function, which permutes the contents of two references.
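The dissertation's source code for swap is not reproduced in this section; a standard one-line implementation, together with a quick check of both the distinct and the aliased cases, might look as follows (a sketch of mine, not the verified source).

let swap r1 r2 =
  let t = !r1 in
  r1 := !r2;
  r2 := t

let () =
  let a = ref 1 and b = ref 2 in
  swap a b;
  assert (!a = 2 && !b = 1);   (* distinct pointers: contents exchanged *)
  let r = ref 5 in
  swap r r;
  assert (!r = 5)              (* aliased pointers: contents unchanged *)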
In the case where the two pointers r1 and r2 given as argument to the swap function are distinct, the following specification applies.

Spec swap (r1:Loc) (r2:Loc) |R>> forall G1 G2 X1 X2,
  R ((r1 ~> Ref G1 X1) \* (r2 ~> Ref G2 X2))
    (# (r2 ~> Ref G1 X1) \* (r1 ~> Ref G2 X2)).
This specification requires that r1 and r2 be two distinct pointers, otherwise it is not possible to realize the pre-condition. Nevertheless, the code of the function swap works correctly even when the two pointers r1 and r2 are equal. The swap function admits another specification covering this case.

Spec swap (r1:Loc) (r2:Loc) |R>> forall G X, r1 = r2 ->
  R (r1 ~> Ref G X) (# r1 ~> Ref G X).
It is straightforward to establish both specifications independently. For the swap function, which is very short, it suffices to verify twice the same piece of code. However, in general, we need a way to state and prove a single specification that handles both the case of non-aliased arguments and that of aliased arguments.

Such a general specification can be expressed using group predicates. The pre-condition describes a group predicate modelled by a map M that should contain both r1 and r2 in its domain. The post-condition describes a group predicate modelled by a map M', which corresponds to a copy of M where the contents of the bindings on r1 and r2 have been exchanged.

Spec swap (r1:Loc) (r2:Loc) |R>> forall G M,
  index M r1 -> index M r2 ->
  let M' := M \[r1:=M\[r2]] \[r2:=M\[r1]] in
  R (Group G M) (# Group G M').

This general specification can be used to derive the earlier two specifications given for the swap function, using the focus and unfocus operations on group predicates.
3.3 Reasoning on loops without loop invariants
Although reasoning on loops using loop invariants is sound, it does not take full advantage of local reasoning. In short, the frame rule can be applied to the entire execution of a loop, however it cannot be applied at a given iteration of the loop in order to frame out pieces of heap from the reasoning on the remaining iterations. In this section, I explain why this problem arises and how to fix it.

After writing this material, I learnt that the limitation of loop invariants in Separation Logic has also been recently pointed out by Tuerk [83]. Both Tuerk's solution and mine are based on the same idea: first generate a reasoning rule that corresponds to the encoding of a while-loop as a recursive function, and then simplify this reasoning rule. This approach is also closely related to Hehner's specified blocks [34], where, in particular, loops are described using pre- and post-conditions instead of loop invariants (even though Hehner was not using local reasoning). More detailed comparisons can be found in the related work section (§8.2).

The simplest example exhibiting the failure of loop invariants to take advantage of the frame rule is the function that computes the length of a mutable list. The proof of this function using loop invariants has been presented earlier on (§3.2.5). In this section, I consider a variation of the code where the while-loop is encoded as a local recursive procedure, and I show that this transformation allows us to apply the frame rule and to simplify the proof significantly, in particular by removing the need to involve list segments.
3.3.1 Recursive implementation of the length function
The code of the length function, where the while loop has been replaced by a local recursive procedure called aux, appears next.

let length (l : 'a mlist) =
  let n = ref 0 in
  let h = ref l in
  let rec aux () =
    if (!h) != null then begin
      incr n;
      h := tail !h;
      aux ()
    end in
  aux ();
  !n
The function aux admits a pre-condition that describes a reference n containing a value k and a reference h containing a location e such that e points to a mutable list modelled by L2. Its post-condition asserts that the reference n contains the value k augmented by the length of the list L2, that the reference h contains the null pointer, and that e still points to a mutable list modelled by L2. Note: n and h are free variables of the specification of aux shown next.

Spec aux () |R>> forall k e L2,
  R ((n ~> k) \* (h ~> e) \* (e ~> Mlist G L2))
    (# (n ~> k + length L2) \* (h ~> null) \* (e ~> Mlist G L2)).

Observe that, contrary to the loop invariant from the previous section, the specification involved here does not refer to the list segment L1 made of the cells that have already been passed by. The absence of reference to L1 is made possible by the fact that the frame rule can then be applied around the recursive call to the function aux. Intuitively, every time a list cell is being passed by, it is framed out of the reasoning.
Let me explain in more detail how the frame rule is applied. Assume that the location e contained in the reference h is not the null pointer. The list L2 can be decomposed as X :: L2', and e points towards a pair of the form (x, e'), where X models x and L2' models the list starting at e'. Just before making the recursive call to the function aux, the heap satisfies the predicate shown next.

(n ~> k+1) \* (h ~> e') \* (e' ~> Mlist G L2')
  \* (x ~> G X) \* (e ~> (x,e'))

The frame rule can be applied to exclude the two heap predicates that are mentioned in the second line of the above statement. The specification of the function aux can then be exploited on the predicates from the first line. Combining the post-condition of the recursive call to aux with the predicates that have been framed out, we obtain a predicate describing the heap that remains after the execution of aux.

(n ~> k+1+length L2') \* (h ~> null) \* (e' ~> Mlist G L2')
  \* (x ~> G X) \* (e ~> (x,e'))
It remains to verify that the above predicate entails the post-condition of aux. To that end, it suffices to apply the unfocus lemma to undo the focus operation performed earlier on, and to check that the length of L2 is equal to one plus the length of L2' (recall that L2' is the tail of L2).

Encoding the while-loop into a recursive function has led to a dramatic simplification of the proof of correctness. The gain would be even more significant in an algorithm that traverses a mutable tree, as the possibility to apply the frame rule typically saves the need to carry the description of a tree structure with exactly one hole inside it (i.e., the generalization of a list segment to a tree structure).
3.3.2 Improved characteristic formulae for while-loops
I now explain how to produce characteristic formulae for while-loops that directly
support local reasoning.
The term while t1 do t2 and the term if t1 then (t2 ; while t1 do t2) else tt admit exactly the same semantics (recall that tt denotes the unit value). So, the characteristic formulae of the two terms should be two logically-equivalent predicates, as stated below (recall that bold symbols correspond to notation for characteristic formulae).

⟦while t1 do t2⟧  =  if ⟦t1⟧ then (⟦t2⟧ ; ⟦while t1 do t2⟧) else ⟦tt⟧

Let R be a shorthand for the characteristic formula ⟦while t1 do t2⟧. The assertion stating that the term while t1 do t2 admits a pre-condition H' and a post-condition Q' is then captured by the proposition R H' Q'. According to the equality stated above, it suffices to prove the proposition (if ⟦t1⟧ then (⟦t2⟧ ; R) else ⟦tt⟧) H' Q' for establishing R H' Q'. For this reason, the characteristic formula for a while loop provides the following assumption.

∀H' Q'. (if ⟦t1⟧ then (⟦t2⟧ ; R) else ⟦tt⟧) H' Q'  ⇒  R H' Q'

Then, to prove that the evaluation of the term while t1 do t2 admits a particular pre-condition H and a particular post-condition Q, it suffices to establish R H Q under the above assumption. Notice the difference between H and Q, which specify the behavior of all the iterations of the loop, and H' and Q', which can be used to specify any subset of those iterations. In the definition of the characteristic formula for while t1 do t2, the variable R cannot explicitly refer to ⟦while t1 do t2⟧, which we are precisely trying to define. Instead, R is quantified universally and the only assumption about it is the fact that it supports application of the frame rule. This assumption is written is_local R (the predicate is_local is defined further on, in §5.3.3).

⟦while t1 do t2⟧ ≡ local (λH Q. ∀R. is_local R ∧ ℋ ⇒ R H Q)
  where ℋ ≡ ∀H' Q'. (if ⟦t1⟧ then (⟦t2⟧ ; R) else ⟦tt⟧) H' Q' ⇒ R H' Q'

Remark: the above characteristic formula is stronger than the one stated using a loop invariant and a well-founded relation (this result is proved in Coq and exploited by tactics that support reasoning on loops using loop invariants).
Let me illustrate the application of such a characteristic formula on the example of the while-loop implementation of the length function. When reasoning on the loop, a variable R is introduced from the characteristic formula and the goal is as follows.

R ((n ~> 0) \* (h ~> l) \* (l ~> Mlist G L))
  ((n ~> length L) \* (h ~> null) \* (l ~> Mlist G L))

This statement is first generalized for the induction, by universally quantifying over the list L, over the pointer l, and over the initial contents k of the counter n.

forall L l k,
  R ((n ~> k) \* (h ~> l) \* (l ~> Mlist G L))
    ((n ~> k + length L) \* (h ~> null) \* (l ~> Mlist G L))

The proof is then conducted by induction on the structure of L. In the body of the loop, in the particular case where the pointer l is not null, the counter n is incremented and the rest of the loop is executed to compute the length of the tail of the list. Let l' denote the pointer on the tail of the list, and let X :: L' be the decomposition of L. The proof obligation associated with the verification of the remaining executions of the loop body is shown next.

R ((n ~> k+1) \* (h ~> l') \* (l' ~> Mlist G L')
    \* (l ~> (x,l')) \* (x ~> G X))
  ((n ~> k+1+length L') \* (h ~> null) \* (l' ~> Mlist G L')
    \* (l ~> (x,l')) \* (x ~> G X))

At this point, the heap predicates from the second and the fourth line can be framed out, and the remaining statement follows directly from the induction hypothesis.
3.3.3 Improved characteristic formulae for for-loops
I construct a local-reasoning version of the characteristic formula for for-loops in a very similar manner as for while-loops. In the case of a while-loop, I quantified over some predicate R, where R H Q meant that, under the pre-condition H, the loop terminates and admits the post-condition Q. Contrary to while-loops, for-loops involve a counter whose value changes at every iteration. So, to handle a for-loop, I quantify instead over a predicate, called S, that depends on the initial value of the counter. The proposition S a H Q means that the execution of the term for i = a to b do t admits the pre-condition H and the post-condition Q.

The characteristic formula for for-loops is then stated as shown next. The main goal is S a H Q, and the main assumption relates the predicate S i, which describes the behavior of the loop starting from index i, with S (i + 1), which describes the behavior of the loop starting from index i + 1. The assumption is_local1 S asserts that the predicate S i supports the frame rule for any index i.

⟦for i = a to b do t⟧ ≡ local (λH Q. ∀S. is_local1 S ∧ ℋ ⇒ S a H Q)
  with ℋ ≡ ∀i H' Q'. (if i ≤ b then (⟦t⟧ ; S (i + 1)) else ⟦tt⟧) H' Q' ⇒ S i H' Q'
3.4 Treatment of first-class functions

In this section, I describe the specification of a counter function, which carries its own local piece of mutable state, and describe the treatment of higher-order functions, with list iterators, with generic combinators such as composition, with functions in continuation-passing style, and with a function manipulating a list of counter functions.

3.4.1 Specification of a counter function

The function make_counter allocates a reference r with the contents 0 and returns a function that, every time it is called, increments the contents of r and then returns the current contents of r.

let make_counter () =
  let r = ref 0 in
  (fun () -> incr r; !r)
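As a usage example (mine, not the dissertation's), each call to the returned closure increments and returns its private state, and distinct counters do not interfere.

let () =
  let c1 = make_counter () in
  let c2 = make_counter () in
  assert (c1 () = 1);
  assert (c1 () = 2);
  assert (c2 () = 1)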
I first describe a naive specification for make_counter that fully exposes the implementation of the local state of the function. This specification asserts that the function creates a reference r with contents 0 and returns a function f specified as follows: for every contents m of the reference r, a call to f returns the value m + 1 and sets the contents of r to the value m + 1.

Spec make_counter () |R>>
  R [] (fun f => Hexists r, (r ~> 0) \*
    [Spec f () |R>> forall m,
      R (r ~> m) (\= (m+1) \*+ r ~> (m+1)) ]).
A better specification can be devised for make_counter, hiding the fact that the counter is implemented using a reference cell. I define the representation predicate Counter, which relates a counter function f with its mathematical model n, where n is the current value of the counter. So, a heap predicate describing a counter takes the form f ⇝ Counter n. The Coq definition of Counter is shown next.

Definition Counter (n:int) (f:Func) : Hprop :=
  Hexists I:int->hprop,
  (I n) \* [Spec f () |R>> forall m,
    R (I m) (\= (m+1) \*+ I (m+1)) ].

The post-condition of make_counter simply states that the return value f is a counter whose model is the integer 0.

Spec make_counter () |R>>
  R [] (fun f => f ~> Counter 0)

So far, reasoning on a call to a counter function still requires the definition of the predicate Counter to be exposed. It is in fact possible to hide the definition of Counter by providing a lemma describing the behaviour of the application of a counter function. This lemma is stated using AppReturns, as shown below.

Lemma Counter_apply : forall f n,
  AppReturns f tt (f ~> Counter n)
    (\= (n+1) \*+ f ~> Counter (n+1)).
Note: the packing of local state in the representation predicate Counter is similar to Pierce and Turner's encoding of objects [73].

To summarize, a library implementing a counter function only needs to expose three things: an abstract predicate Counter, the specification of the function make_counter stated in terms of the abstract predicate Counter, and the lemma Counter_apply. This example illustrates how local state can be entirely packed into representation predicates and need not be mentioned in specifications.
3.4.2 Generic function combinators
In this section, I describe the specification of the combinators apply and compose. Their definition is recalled next.

let apply f x = f x
let compose g f x = g (f x)
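For instance (a hypothetical usage example, not part of the dissertation), with the two definitions above:

let () =
  assert (apply succ 41 = 42);
  assert (compose string_of_int succ 41 = "42")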
The specification of apply states that, for any pre-condition H and for any post-condition Q, if the application of f to the argument x admits H as pre-condition and Q as post-condition (written AppReturns f x H Q), then the application of apply to the arguments f and x admits the pre-condition H and the post-condition Q (written R H Q below).

Spec apply (f:Func) (x:A) |R>> forall (H:Hprop) (Q:B->hprop),
  AppReturns f x H Q -> R H Q.
The fact that AppReturns f x H Q is a sufficient condition for proving the proposition ⟦apply f x⟧ H Q should not be surprising. Indeed, AppReturns f x is the characteristic formula of f x, ⟦apply f x⟧ is the characteristic formula of apply f x, and apply f x has the same semantics as f x.

The quantification over a pre- and a post-condition in a specification is a typical pattern when reasoning on functions with unknown specifications. This technique is applied throughout this section, and also in the next one, which is concerned with reasoning on functions in continuation-passing style.
The specification of compose is shown next. It quantifies over an initial pre-condition H, over a final post-condition Q, and also over an intermediate post-condition Q', which describes the output of the application of the argument f to the argument x.

Spec compose (g:Func) (f:Func) (x:A) |R>> forall H Q Q',
  AppReturns f x H Q' ->
  (forall y, AppReturns g y (Q' y) Q) ->
  R H Q.

The hypotheses closely resemble the premises of the reasoning rule for let-bindings. Indeed, the term compose g f x has the same semantics as the term let y = f x in g y. In fact, we can use the notation for characteristic formulae to give a more concise specification to the function compose, as shown below. The bold keywords associated with notation for characteristic formulae are written in Coq using capitalized keywords. In particular, App is a notation for the predicate AppReturns.

Spec compose (g:Func) (f:Func) (x:A) |R>> forall H Q,
  (Let y = (App f x) In (App g y)) H Q -> R H Q.

With this specification, reasoning on an application of the function compose produces exactly the same proof obligation as if the source code of the function compose was inlined. In general, code inlining is not a satisfying approach to verifying higher-order functions because it is not modular. However, the source code of the function compose is so basic that its specification is not more abstract than its source code.
3.4.3 Functions in continuation-passing style
The CPS-append function has been proposed as a verification challenge by Reynolds [77]. This function takes three arguments: two lists x and y and a continuation k. Its purpose is to call the continuation k on the list obtained by concatenation of the lists x and y. The function is implemented recursively, so as to compute the concatenation of x and y using the function itself. I study first the purely-functional version of CPS-append, and then the corresponding imperative version that uses mutable lists.

The source code of the pure CPS-append function is shown next.

let rec cps_app (x:'a list) (y:'a list) (k:'a list->'b) : 'b =
  match x with
  | [] -> k y
  | v::x' -> cps_app x' y (fun z -> k (v::z))
An example where x is instantiated as a singleton list of the form v :: nil helps understand the working of the function.

cps_app (v :: nil) y k  =  cps_app nil y (λz. k(v :: z))  =  (λz. k(v :: z)) y  =  k(v :: y)
The specification of the function asserts that if the application of the continuation k to the concatenation of x and y admits a pre-condition H and a post-condition Q, then so does the application of the CPS-append function to the arguments x, y and k. (For the sake of simplicity, I here assume the lists to contain purely-functional values, thereby avoiding the need for representation predicates.)

Spec cps_app (x:list a) (y:list a) (k:Func) |R>>
  forall H Q, AppReturns k (x++y) H Q -> R H Q.

This specification is proved by induction on the structure of the list x.

The imperative version of the CPS-append function is shown next. The functions tail and set_tail are used to obtain and to update the tail of a mutable list, respectively.

let rec cps_app' (x:'a mlist) (y:'a mlist) (k:'a mlist->'b) : 'b =
  if x == null
  then k y
  else cps_app' (tail x) y (fun z -> set_tail x z; k x)
The specification is slightly more involved than in the purely-functional case because it involves representation predicates for the list and for the elements of that list. The pre-condition asserts that the pointers x and y are starting points of lists modelled as L and M, respectively. It also mentions a heap predicate H covering the rest of the heap. We need H because the frame rule does not apply when reasoning with CPS functions, as the entire heap usually needs to be passed on to the continuation. In the imperative CPS-append function, the continuation k is called on a pointer z that points to a list modelled as the concatenation of L and M. In addition to the representation predicate associated with z, the pre-condition of k also mentions the heap predicate H. The post-condition of the function, called Q, is then the same as the post-condition of the continuation. The complete specification appears next.

Spec cps_app' (x:Loc) (y:Loc) (k:Func) |R>>
  forall (G:A->a->hprop) (L M:list A) H Q,
  (forall z, AppReturns k z (z ~> Mlist G (L++M) \* H) Q) ->
  R (x ~> Mlist G L \* y ~> Mlist G M \* H) Q.

This specification is proved by induction on the structure of the list L.
3.4.4 Reasoning about list iterators
In this section, I show how to specify the iter function on purely-functional lists, and illustrate the use of this specification for reasoning on the manipulation of a list of counter functions.¹

¹ Due to lack of time, I do not describe the specification of the functions fold and map, nor the specification of iterators on imperative maps. I will add those later on.

The specification of the function iter shares a number of similarities with the treatment of loops. Again, I want to avoid the introduction of a loop invariant and use a presentation that takes advantage of local reasoning, because I want to be able to reason about the application of the function iter to a function f and to a list l without asserting that some prefix of the list l has been treated by the function f while the remaining segment of the list l has not yet been treated.

Recall the recursive implementation of the function iter.

let rec iter f l = match l with
  | [] -> ()
  | x::l -> f x; iter f l
The function iter is quite similar to the recursive encoding of a for-loop, in the sense that the argument changes at every recursive call. So, I quantify over a predicate, called S, that depends on the argument l. The proposition S l H Q asserts that the application of iter f to the list l admits the pre-condition H and the post-condition Q.

In the specification shown below, l0 denotes the initial list passed to iter, and l denotes a sublist of l0. The term written R H Q (where R is bound by the Spec notation) asserts that iter f l0 admits H and Q as pre- and post-conditions; it holds if the proposition S l0 H Q does hold. The hypothesis given about S is a slightly simplified version of the characteristic formula associated with the body of the function iter.

Spec iter (f:val) (l0:list a) |R>> forall H Q,
  (forall (S:(list a)->hprop->(unit->hprop)->Prop),
    is_local_1 S ->
    (forall l H' Q',
      match l with
      | nil => (H' ==> Q' tt)
      | x::l' => (App f x ;; S l') H' Q'
      end ->
      S l H' Q') ->
    S l0 H Q) ->
  R H Q.
To check the usability of the specification of the function iter, I consider an example. The function step_all, whose code appears below, takes as argument a list of distinct counter functions (recall §3.4.1), and makes a call to each of those counters for incrementing their local state. To keep the example simple, the results produced by the invocation of the counter functions are simply ignored.

let step_all (l : (unit -> int) list) =
  List.iter (fun f -> ignore (f ())) l
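Assuming the definitions of make_counter (§3.4.1) and step_all above, a quick sanity check of the intended behaviour (my example, not the dissertation's): after two rounds of step_all, the next direct call to each counter returns 3.

let () =
  let cs = [ make_counter (); make_counter (); make_counter () ] in
  step_all cs;
  step_all cs;
  assert (List.map (fun c -> c ()) cs = [3; 3; 3])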
A list of counters is modelled using the application of the representation predicate Plist to the representation predicate Counter. More precisely, if l is a list of functions and L is a list of integers, then the predicate l ⇝ Plist Counter L asserts that L contains the integer values describing the current state of each of the counters from the list l.

A call to the function step_all on the list l increments the state of every counter, so the model of l evolves from the list L to a list L'. The list L' is obtained by applying the successor function to the elements of L, written map (λn. n + 1) L, where map denotes the map function on Coq lists. Hence the specification shown next.

Spec step_all (l:list Func) |R>> forall (L:list int),
  R (l ~> List Counter L)
    (# l ~> List Counter (map (fun n => n+1) L)).
Chapter 4
Characteristic formulae for pure programs

This chapter describes the construction, pretty-printing and manipulation of characteristic formulae for purely-functional programs. I start with the presentation of a set of informal rules explaining, for each construct from the source programming language, what logical formula describes it. I then focus on the specification of n-ary functions and formally define the predicate Spec and the notation associated with it. The formal presentation of a characteristic formula generator follows. Compared with the informal presentation, it involves direct support for n-ary functions and n-ary applications, it describes the lifting of program values to the logical level, and it shows how polymorphism is handled. Finally, I explain how to build a set of tactics for reasoning on characteristic formulae.
4.1 Source language and normalization
Before generating the characteristic formula of a program, I apply a transformation on the source code so as to put the program in administrative normal form. Through this process, the program is arranged so that all intermediate results and all functions become bound by a let-definition. One notable exception is the application of simple total functions such as addition and subtraction; for example, the application f (v1 + v2) is considered to be in normal form although f (g v1 v2) is not in normal form in general. Without such a special treatment of basic operators, programs in normal form tend to be hard to read.
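As an illustration (a hypothetical example of mine, not one taken from the dissertation), the auxiliary functions f, g and h below stand for arbitrary user functions; the second version of the client code is the kind of normal form produced, where the result of every non-primitive application is let-bound while the application of the primitive operator (+) is left in place.

let f a = a + 10
let g x n = x * n
let h x = x - 1

(* before normalization: nested applications *)
let client x = f (g x 1) + h x

(* after normalization: intermediate results are let-bound *)
let client_normalized x =
  let a = g x 1 in
  let b = f a in
  let c = h x in
  b + c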
The normalization process, which is similar to A-normalization [28], preserves the semantics and greatly simplifies formal reasoning on programs. Moreover, it is straightforward to implement. Similar transformations have appeared in previous work on program verification (e.g., [40, 76]). I omit a formal description of the normalization process because it is relatively standard and because its presentation is of little interest.
The grammar of terms in administrative normal form includes values, applications of a value to another value, failure, conditionals, let-bindings for terms, and let-bindings for recursive function definitions. The grammar of terms is extended later on with curried n-ary functions and curried n-ary applications (§4.2.4), as well as pattern matching (§4.6.3). The grammar of values includes integers, algebraic values, and function closures. Algebraic data types correspond to an iso-recursive type made of a tagged union of tuples. In particular, the boolean values true and false are defined using algebraic data types. Note that source code may contain function definitions but no function closures. Function closures, written µf.λx.t, are only created at runtime, and they are always closed values.

x, f  :=  variables
n     :=  integers
D     :=  algebraic data constructors
v     :=  x | n | D(v, . . . , v) | µf.λx.t
t     :=  v | (v v) | crash | if v then t else t | let x = t in t | let rec f = λx. t in t
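To make the binding structure of this grammar explicit, here is a hypothetical OCaml rendering of it as an abstract syntax tree (an illustration of mine, not CFML's actual data type; all constructor names are invented).

type var = string

type value =
  | Var    of var
  | Int    of int
  | Data   of string * value list      (* algebraic data constructor D(v, ..., v) *)
  | Clo    of var * var * term         (* function closure  mu f. fun x -> t      *)

and term =
  | Val    of value
  | App    of value * value
  | Crash
  | If     of value * term * term
  | Let    of var * term * term                (* let x = t1 in t2              *)
  | LetRec of var * var * term * term          (* let rec f = fun x -> t1 in t2 *)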
In this chapter, I only consider programs that are well-typed in ML with recursive types (I explain in §4.3.1 what kind of recursive types is handled). The grammar of types and type schema is recalled below. Note: in several examples, I use binary pairs and binary sums; those type constructors can be encoded as algebraic data types.

A   :=  type variables
C   :=  algebraic type constructors
τ̄   :=  lists of types
τ   :=  A | int | C τ̄ | τ → τ | µA.τ
σ   :=  ∀Ā.τ

The associated typing rules are the standard typing rules of ML; I do not recall them here.

Characteristic formulae are proved sound and complete with respect to the semantics of the programming language. Since characteristic formulae describe the big-step behavior of programs, it is convenient to describe the source language semantics using big-step rules, as opposed to using a set of small-step reduction rules. The big-step reduction judgment, written t ⇓ v, is defined through the standard inductive rules shown in Figure 4.1.
4.2 Characteristic formulae: informal presentation
The key ideas involved in the construction of characteristic formulae are explained next, through an informal presentation. Compared with the formal definitions given afterwards, the informal presentation makes two simplifications: Caml values and Coq values are abusively identified, and typing annotations are omitted.

  v ⇓ v

  ([f → µf.λx.t1] [x → v2] t1) ⇓ v
  ─────────────────────────────────
        ((µf.λx.t1) v2) ⇓ v

  ([f → µf.λx.t1] t2) ⇓ v
  ─────────────────────────────────
  (let rec f = λx. t1 in t2) ⇓ v

  t1 ⇓ v1        ([x → v1] t2) ⇓ v
  ─────────────────────────────────
       (let x = t1 in t2) ⇓ v

  t1 ⇓ v
  ─────────────────────────────────
  (if true then t1 else t2) ⇓ v

  t2 ⇓ v
  ─────────────────────────────────
  (if false then t1 else t2) ⇓ v

Figure 4.1: Semantics of functional programs
⟦v⟧                          ≡  λP. (P v)
⟦f v⟧                        ≡  λP. AppReturns f v P
⟦crash⟧                      ≡  λP. False
⟦if v then t1 else t2⟧       ≡  λP. (v = true ⇒ ⟦t1⟧ P) ∧ (v = false ⇒ ⟦t2⟧ P)
⟦let x = t1 in t2⟧           ≡  λP. ∃P'. ⟦t1⟧ P' ∧ (∀x. P' x ⇒ ⟦t2⟧ P)
⟦let rec f = λx. t1 in t2⟧   ≡  λP. ∀f. (∀x. ∀P'. ⟦t1⟧ P' ⇒ AppReturns f x P') ⇒ ⟦t2⟧ P

Figure 4.2: Informal presentation of the characteristic formula generator
4.2.1 Characteristic formulae for the core language
The characteristic formula of a term t, written ⟦t⟧, is generated using an algorithm that follows the structure of t. Recall that, given a post-condition P, the characteristic formula is such that the proposition ⟦t⟧ P holds if and only if the term t terminates and returns a value that satisfies P. The definition of ⟦t⟧ for a particular term t always takes the form λP. H, where H expresses what needs to be proved in order to show that the term t returns a value satisfying the post-condition P. The definitions appear in Figure 4.2 and are explained next.

To show that a value v returns a value satisfying P, it suffices to prove that P v holds. So, ⟦v⟧ is defined as λP. (P v). Next, to prove that an application f v returns a value satisfying P, one must exhibit a proof of AppReturns f v P. So, ⟦f v⟧ is defined as λP. AppReturns f v P. To show that if v then t1 else t2 returns a value satisfying P, one must prove that t1 returns such a value when v is true and that t2 returns such a value when v is false. So, the proposition ⟦if v then t1 else t2⟧ P is equivalent to:

(v = true ⇒ ⟦t1⟧ P) ∧ (v = false ⇒ ⟦t2⟧ P)

To show that the term crash returns a value satisfying P, the only way to proceed is to show that this point of the program cannot be reached, by proving that the assumptions accumulated at that point are contradictory. Therefore, ⟦crash⟧ is defined as λP. False.
To show that a term let x = t1 in t2 returns a value satisfying P, one must prove that there exists a post-condition P' such that t1 returns a value satisfying P' and that, for any x satisfying P', t2 returns a value satisfying P. So, the proposition ⟦let x = t1 in t2⟧ P is equivalent to the proposition ∃P'. ⟦t1⟧ P' ∧ (∀x. P' x ⇒ ⟦t2⟧ P).

Consider the definition of a possibly-recursive function let rec f = λx. t1 in t2. The proposition ∀x. ∀P'. ⟦t1⟧ P' ⇒ AppReturns f x P' is called the body description associated with the function definition let rec f = λx. t1. It captures the fact that, in order to prove that the application of f to x returns a value satisfying a post-condition P', it suffices to prove that the body t1, instantiated with that particular argument x, terminates and returns a value satisfying P'. The characteristic formula for a function definition is built upon the body description of that function: to prove that the term let rec f = λx. t1 in t2 returns a value satisfying P, it suffices to prove that, for any name f, assuming the body description for let rec f = λx. t1 to hold, the term t2 returns a value satisfying P. Hence, ⟦let rec f = λx. t1 in t2⟧ P is equivalent to:

∀f. (∀x. ∀P'. ⟦t1⟧ P' ⇒ AppReturns f x P') ⇒ ⟦t2⟧ P
4.2.2 Definition of the specification predicate

In a language like Caml, functions of several arguments are typically presented in a curried fashion. A definition of the form let rec f = λx y. t describes a function that, when applied to its first argument x, returns a function that expects an argument y. In order to obtain a realistic tool for the verification of Caml programs, it is crucial to offer direct support for reasoning on the definition and application of n-ary curried functions. In this section, I give the formal definition of the predicate Spec used to specify unary functions. In the next section I describe the generalization of this predicate to higher arities.
Consider the specification of the function half, written in terms of the predicate AppReturns.

∀x. ∀n ≥ 0. x = 2 ∗ n ⇒ AppReturns half x (= n)

The same specification can be rewritten with the Spec notation as follows.

Spec half (x : int) | R >> ∀n ≥ 0. x = 2 ∗ n ⇒ R (= n)
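The code of half is not shown in this chapter; presumably it is simply the following one-liner, a guess used here only to make the example concrete.

let half x = x / 2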
The notation introduced by the keyword Spec is defined using a family of higher-order predicates called Specn, where n corresponds to the arity of the function. The definition of Spec1 appears next, and its generalization Specn, for arities n ≥ 2, is given afterwards.

The proposition Spec1 f K asserts that the function f admits the specification K. The predicate K takes both x and R as arguments, and specifies the result of the application of f to x. The predicate R is to be applied to the post-condition that holds of the result of f x. For example, the previous specification for half actually stands for:

Spec1 half (λx R. ∀n ≥ 0. x = 2 ∗ n ⇒ R (= n))
In first approximation, the predicate Spec1 is defined as follows:

Spec1 f K  ≡  ∀x. K x (AppReturns f x)

where K has type A → ((B → Prop) → Prop) → Prop, and where A and B correspond to the input and the output type of f.

For example, consider an application of half to the argument 8. The behavior of this application is described by the proposition K 8 (AppReturns1 half 8), where K stands for the specification of the function half, which is λx R. ∀n ≥ 0. x = 2 ∗ n ⇒ R (= n). Simplifying the application of K in the expression K 8 (AppReturns1 half 8) gives back the first property that was stated about that function:

∀n. n ≥ 0 ⇒ 8 = 2 ∗ n ⇒ AppReturns1 half 8 (= n)
The actual definition of Spec1 includes an extra side-condition, expressing that K is covariant in R. Covariance is formally captured by the predicate Weakenable, defined as follows:

Weakenable J  ≡  ∀P P'. J P → (∀x. P x → P' x) → J P'

where J has type (B → Prop) → Prop for some type B. The formal definition of Spec1 appears in the middle of Figure 4.3. Fortunately, thanks to appropriate lemmas and tactics, the predicate Weakenable never needs to be manipulated explicitly while verifying programs.
4.2.3 Specification of curried n-ary functions

Generalizing the definitions of AppReturns1 and Spec1 to higher arities is not entirely straightforward, due to the need to support reasoning on partial applications and over-applications. Intuitively, the specification of an n-ary curried function f should capture the property that the application of f to less than n arguments terminates and returns a function whose specification is an appropriate specialization of the specification of f.

Firstly, I define the predicate AppReturnsn in such a way that the proposition AppReturnsn f v1 . . . vn P states that the application of f to the n arguments v1 . . . vn returns a value satisfying P. The family of predicates AppReturnsn is defined by induction on n, ultimately in terms of the abstract predicate AppReturns, as shown at the top of Figure 4.3. For instance, AppReturns2 f x y P states that the application of f to x returns a function g such that the application of g to y returns a value satisfying P. More generally, if m is smaller than n, then applications at arities n and m are related as follows:

AppReturnsn f x1 . . . xn P
  ⇐⇒  AppReturnsm f x1 . . . xm (λg. AppReturnsn−m g xm+1 . . . xn P)
AppReturns1 f x P            ≡  AppReturns f x P
AppReturnsn f x1 . . . xn P  ≡  AppReturns f x1 (λg. AppReturnsn−1 g x2 . . . xn P)

is_spec1 K   ≡  ∀x. Weakenable (K x)
is_specn K   ≡  ∀x. is_specn−1 (K x)

Spec1 f K    ≡  is_spec1 K ∧ ∀x. K x (AppReturns f x)
Specn f K    ≡  is_specn K ∧ Spec1 f (λx R. R (λg. Specn−1 g (K x)))

In the figure, n > 1 and (f : Func) and (g : Func) and (xi : Ai) and (P : B → Prop)
and (K : A1 → . . . → An → ((B → Prop) → Prop) → Prop).

Figure 4.3: Formal definitions for AppReturnsn and Specn

Secondly, I define the predicate Specn, again by induction on n. For example, a curried function f of two arguments is a total function that, when applied to its first argument x, returns a unary function g that admits a certain specification which depends on that first argument. If K denotes the specification of f, then K x denotes the specification of the partial application of f to x. Intuitively, the definition of Spec2 is as follows.

Spec2 f K  ≡  Spec1 f (λx R. R (λg. Spec1 g (K x)))
where K has type A1 → A2 → ((B → Prop) → Prop) → Prop. Note that Spec2 is polymorphic in the types A1, A2 and B. The formal definition of Specn, which appears in Figure 4.3, also includes a side condition to ensure that K is covariant in its argument R, written is_specn K.

The high-level notation for specifications used in §2.2 can now be easily explained in terms of the family of predicates Specn.

Spec f (x1 : A1) . . . (xn : An) | R >> H   ≡   Specn f (λ(x1 : A1). . . . λ(xn : An). λR. H)

The predicate Specn is ultimately defined in terms of AppReturns. A direct elimination lemma can be proved for relating a specification for an n-ary function stated in terms of Specn with the behavior of an application of that function described with the predicate AppReturnsn. This elimination lemma takes the following form.

Specn f K  ⇒  K x1 . . . xn (AppReturnsn f x1 . . . xn)

where f has type Func, where each xi admits a type Ai, and where K is of type A1 → . . . → An → ((B → Prop) → Prop) → Prop, with B being the return type of the function f. This elimination lemma is not intended to be used directly; it serves as a basis for proving other lemmas upon which tactics for reasoning on applications are based (§4.7.2). Note that the elimination lemma admits a reciprocal, which is presented in detail in §4.7.3. This introduction lemma admits the following statement (two side-conditions are omitted).

(∀x1 . . . xn. K x1 . . . xn (AppReturnsn f x1 . . . xn))  ⇒  Specn f K

Remark: in the CFML library, the predicates AppReturnsn and Specn are implemented up to n = 4. It would not take very long to support arities up to 12, which should be sufficient for most functions used in practice. It might even be possible to define the n-ary predicates in an arity-generic way, however I have not had time to work on such definitions.
4.2.4 Characteristic formulae for curried functions
In this section, I update the generation of characteristic formulae to add direct support for reasoning on n-ary functions using AppReturnsn and Specn. Note that the grammar of terms in normal form is now extended with n-ary applications and n-ary abstractions.

The generation of the characteristic formula for an n-ary application is straightforward, as it directly relies on the predicate AppReturnsn. The definition is:

⟦f v1 . . . vn⟧  ≡  λP. AppReturnsn f v1 . . . vn P

The treatment of n-ary abstractions is slightly more complex. Recall that the body description associated with a function definition let rec f = λx. t is the proposition ∀x. ∀P. ⟦t⟧ P ⇒ AppReturns f x P. I could generalize this definition using the predicate AppReturnsn, but this would not capture the fact that partial applications of the n-ary function terminate. Instead, I rely on the predicate Specn. In this new presentation, the body description of an n-ary function let rec f = λx1 . . . xn. t, where n ≥ 1, is defined as follows.

∀K. (is_specn K ∧ ∀x1 . . . xn. K x1 . . . xn ⟦t⟧)  ⇒  Specn f K

It may be surprising to see the predicate K x1 . . . xn being applied to a characteristic formula ⟦t⟧. It is worth considering an example. Recall the definition of the function half. It takes the form let rec half = λx. t, where t stands for the body of half. Its specification takes the form Spec1 half K, where K is equal to λx R. ∀n ≥ 0. x = 2 ∗ n ⇒ R (= n). According to the new body description for functions, in order to prove that the function half satisfies its specification, we need to prove the proposition ∀x. K x ⟦t⟧. Unfolding the definition of K, we obtain: ∀x. ∀n ≥ 0. x = 2 ∗ n ⇒ ⟦t⟧ (= n). As expected, we are required to prove that the body of the function half, which is described by the characteristic formula ⟦t⟧, returns a value equal to n, under the assumption that n is a non-negative integer such that x is equal to 2 ∗ n.

To summarize, the characteristic formula of an n-ary application is expressed in terms of the predicate AppReturnsn and the characteristic formula of an n-ary function definition is expressed in terms of the predicate Specn. The elimination lemma, which relates Specn to AppReturnsn, then serves as a basis for reasoning on the application of n-ary functions.
4.3 Typing and translation of types and values
So far, I have abusively identified program values from the programming language with values from the logic. This section clarifies the translation from Caml types to Coq types and the translation from Caml values to Coq values.

4.3.1 Erasure of arrow types and recursive types

I map every Caml type to its corresponding Coq type, except for arrow types. As explained earlier on, due to the mismatch between the programming language arrow type and the logical arrow type, I represent Caml functions using Coq values of type Func.

To simplify the presentation and the proofs about characteristic formulae, it is convenient to consider an intermediate type system, called weak-ML. This type system is like ML except that the arrow type constructor is replaced with a constant type constructor, called func. During the generation of characteristic formulae, variables that have the type func in weak-ML become variables of type Func in Coq. Also, weak-ML does not include general recursive types but only algebraic data types, such as the type of lists. The grammar of weak-ML types is thus as follows.
T  :=  A | int | C T̄ | func
S  :=  ∀Ā.T

The translation of an ML type τ into the corresponding weak-ML type is written ⟨τ⟩. In short, ⟨τ⟩ is a copy of τ where all the arrow types are replaced with the type func. The erasure operator ⟨·⟩ also handles polymorphic types and general recursive types, as explained further on. The formal definition of the erasure operator ⟨·⟩ appears in Figure 4.4. Note that ⟨τ̄⟩ denotes the application of the erasure operator to all the elements of the list of types τ̄.

The translation of a Caml type scheme ∀Ā.τ is a weak-ML type scheme of the form ∀B̄.⟨τ⟩. The list of types B̄ might be strictly smaller than the list Ā because some type variables occurring in τ may no longer occur in ⟨τ⟩. For example, the ML type scheme ∀AB. A + (B → B) is translated in weak-ML as ∀A. A + func, which no longer involves the type variable B.

An ML program may involve general equi-recursive types, of the form µA.τ. Such recursive types are handled by CFML only if the translation of the type τ no longer refers to the variable A. This is often the case because any occurrence of the variable A under an arrow type gets erased. For example, the term λx. x x admits the general recursive type µA.(A → B). This type is simply translated in weak-ML as func, which no longer involves a recursive type. So, CFML supports reasoning on the value λx. x x. However, reasoning on a value that admits a type in which the recursivity does not traverse an arrow is not supported. For example, CFML does not support reasoning on a program involving values of type µA.(A × int).
In summary, CFML supports base types, algebraic data types and arrow types. Moreover, it can handle functions that admit an equi-recursive type, because all arrow types are mapped to the constant type func.

⟨A⟩         ≡  A
⟨int⟩       ≡  int
⟨C τ̄⟩       ≡  C ⟨τ̄⟩
⟨τ1 → τ2⟩   ≡  func
⟨∀Ā. τ⟩     ≡  ∀B̄. ⟨τ⟩             where B̄ = Ā ∩ fv(⟨τ⟩)
⟨µA.τ⟩      ≡  ⟨τ⟩                 if A ∉ ⟨τ⟩
               program rejected    otherwise

Figure 4.4: Translation from ML types to weak-ML types
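The erasure of Figure 4.4 can be phrased as a small recursive function. The OCaml sketch below is hypothetical: it operates on a toy type AST of my own rather than CFML's real one, and it ignores type schemes and recursive types, but it follows the same case analysis.

type ty =
  | TVar    of string
  | TInt
  | TConstr of string * ty list
  | TArrow  of ty * ty

type wty =                           (* weak-ML types *)
  | WVar    of string
  | WInt
  | WConstr of string * wty list
  | WFunc

let rec erase (t : ty) : wty =
  match t with
  | TVar a          -> WVar a
  | TInt            -> WInt
  | TConstr (c, ts) -> WConstr (c, List.map erase ts)
  | TArrow (_, _)   -> WFunc         (* every arrow type collapses to func *)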
4.3.2 Typed terms and typed values
The characteristic formula generator takes as input a typed weak-ML program, that is, a program in which all the terms and all the values are annotated with their weak-ML type. In the implementation, the CFML generator takes as input the type-carrying abstract syntax tree produced by the OCaml type-checker, and it applies the erasure operator to all the types carried by that data structure. I explain next the notation employed for referring to typed terms and typed values.

The meta-variable t̂ ranges over typed terms, v̂ ranges over typed values, and ŵ ranges over polymorphic typed values. A typed entity is a pair, whose first component is a weak-ML type, and whose second component is a construction from the programming language. Typed programs moreover carry explicit information about generalized type variables and type applications. A list of universally-quantified type variables, written ΛĀ., appears in polymorphic let-bindings, in polymorphic values, and in function closures. Note that a polymorphic function is not a polymorphic value, because all functions, including polymorphic functions, admit the type func. In the grammar of typed values, explicit type applications are mentioned for variables, written x T̄, and for polymorphic data constructors, written D T̄.

v̂  :=  x T̄ | n | D T̄ (v̂, . . . , v̂) | µf.ΛĀ.λx.t̂
ŵ  :=  ΛĀ. v̂
t̂  :=  v̂ | v̂ v̂ | crash | if v̂ then t̂ else t̂ | let x = ΛĀ. t̂ in t̂ | let rec f = ΛĀ.λx.t̂ in t̂
Substitution in typed terms involves polymorphic values. For example, during the evaluation of the term let x = ΛĀ. t̂1 in t̂2, the term t̂1 reduces to some value v̂, and then the variable x is substituted in t̂2 with the polymorphic value ΛĀ. v̂. Such polymorphic values are written ŵ.

I write t̂^T to indicate that T is the type that annotates the typed term t̂. A similar convention applies for values, written v̂^T. Moreover, I write µf.ΛĀ.λx^T0.t̂^T1 to denote the fact that the input type of the function is T0 and that its return type is T1. Finally, given a typed term t̂, one may strip all the types and obtain a corresponding raw term t, which is written strip_types t̂.

Remark: to simplify the soundness proof, I enforce the invariant that bound polymorphic variables must occur at least once in the type on which they scope. In particular, in a function closure of the form µf.ΛĀ.λx^T0.t̂^T1, a type variable from the list Ā should occur in the type T0 or in the type T1 (or in both).¹

¹ If the invariant is not satisfied, then one can replace the dangling free type variables with an arbitrary type, say int, without changing the semantics of the term nor its generality. For example, the function µf.ΛA.λx^int.((λy^(list A). x) nil)^int has type ∀A. int → int. The type variable A is used to type-check the empty list nil, which plays no role in the function. The typing of the function can be updated to µf.Λ.λx^int.((λy^(list int). x) nil)^int, which no longer involves a dangling type variable.
4.3.3 Typing rules of weak-ML
A typed term t̂ denotes a term in which every node is annotated with a weak-ML type. However, it is not guaranteed that those type annotations are coherent throughout the term. Since the construction of characteristic formulae only makes sense when type annotations are coherent, I introduce a typing judgment for weak-ML typed terms, written ∆ ⊢ t̂, or simply ⊢ t̂ when the typing context ∆ is empty.

The proposition ∆ ⊢ t̂ captures the fact that the type-annotated term t̂ is well-typed in a context ∆, where ∆ maps variables to weak-ML type schema. The typing rules defining this typing judgment appear in Figure 4.5. In those typing rules, typed terms are systematically annotated with the type they carry. Note that the typing rule for data constructors involves a premise describing the type of a data constructor: the proposition D : ∀Ā. (T1 × . . . × Tn) → C Ā asserts that the data constructor D is polymorphic in the types Ā, and that it expects arguments of type Ti and constructs a value of type C Ā.

Compared with the typing rules that one would have for terms annotated with ML types, the only difference is that arrow types are replaced with the type func. This results in the typing rule for applications, shown in the upper-left corner of the figure, being totally unconstrained: a function of type func can be applied to an argument of some arbitrary type T1, and pretend to produce a result of some arbitrary type T2.

Throughout the rest of this chapter, all the typed values and typed terms being manipulated are well-typed in weak-ML.
4.3.4 Reflection of types in the logic

In this section, I define the translation from weak-ML types to Coq types. This translation is almost the identity, since every type constructor from weak-ML is directly mapped to the corresponding Coq type constructor. Yet, it would be confusing to identify weak-ML types with Coq types. So, I introduce an explicit reflection operator: ⟪T⟫ denotes the Coq type that corresponds to the weak-ML type T.
  ∆ ⊢ f̂^func      ∆ ⊢ v̂^T1                      ∆ ⊢ v̂^bool      ∆ ⊢ t̂1^T      ∆ ⊢ t̂2^T
  ──────────────────────────                    ───────────────────────────────────────
     ∆ ⊢ (f̂^func v̂^T1)^T2                       ∆ ⊢ (if v̂^bool then t̂1^T else t̂2^T)^T

                                                ∆, Ā ⊢ t̂1^T1      ∆, x : ∀Ā.T1 ⊢ t̂2^T2
  ∆ ⊢ crash^T                                   ───────────────────────────────────────
                                                ∆ ⊢ (let x = ΛĀ. t̂1^T1 in t̂2^T2)^T2

  ∆, Ā, f : func, x : T0 ⊢ t̂1^T1      ∆, f : func ⊢ t̂2^T2
  ────────────────────────────────────────────────────────
       ∆ ⊢ (let rec f = ΛĀ.λx^T0.t̂1^T1 in t̂2^T2)^T2

  (x : ∀Ā.T) ∈ ∆                 D : ∀Ā. (T1 × . . . × Tn) → C Ā      ∀i. ∆ ⊢ v̂i^([Ā→T̄] Ti)
  ──────────────────────         ───────────────────────────────────────────────────────────
  ∆ ⊢ (x T̄)^([Ā→T̄] T)                        ∆ ⊢ (D T̄ (v̂1, . . . , v̂n))^(C T̄)

                   ∆, Ā, f : func, x : T0 ⊢ t̂1^T1            ∆, Ā ⊢ v̂^T
  ∆ ⊢ n^int        ───────────────────────────────           ─────────────────────────
                   ∆ ⊢ (µf.ΛĀ.λx^T0.t̂1^T1)^func              ∆ ⊢ (ΛĀ. v̂^T)^(∀Ā.T)

Figure 4.5: Typing rules for typed weak-ML terms
The formal definition of the reflection operator ⟪·⟫ appears below.

⟪A⟫       ≡  A
⟪int⟫     ≡  Int
⟪C T̄⟫     ≡  C ⟪T̄⟫
⟪func⟫    ≡  Func
⟪∀Ā.T⟫    ≡  ∀Ā. ⟪T⟫

Algebraic type definitions are translated into corresponding Coq inductive definitions. This translation is straightforward. For example, Caml lists are translated into Coq lists. Note that the positivity requirement associated with Coq inductive types is not a problem here, because all arrow types have been mapped to the type func, so the translation does not produce any negative occurrence of an inductive type in its definition. Thus, for every type constructor C from the source program, a corresponding constant C is defined in Coq. In particular, the translation of the type C T̄ is written C ⟪T̄⟫.

Henceforth, I let 𝐓 range over Coq types of the form ⟪T⟫, which are called reflected types. Similarly, I let 𝐒 range over Coq types of the form ⟪S⟫, where S is a weak-ML type scheme. Note that a reflected type scheme 𝐒 always takes the form ∀Ā.𝐓, for some reflected type 𝐓. To summarize, we start with an ML type, written τ, then turn it into a weak-ML type, written T, and finally translate it into a Coq type, written 𝐓.
4.3.5 Reflection of values in the logic

I now describe the translation from Caml values to Coq values. The decoding operator, written ⌈v̂⌉, transforms a value v̂ of type T into the corresponding Coq value of type ⟪T⟫. Since program values might contain free variables, the decoding operator in fact also takes as argument a context Γ that maps Caml variables to Coq variables. The decoding of a value v̂ in a context Γ is written ⌈v̂⌉_Γ. The Coq variables bound by Γ should be typed appropriately: if v̂ contains a free variable x of type S then the Coq variable Γ(x) should admit the type ⟪S⟫.

In the definition of ⌈v̂⌉_Γ shown next, values on the left-hand side are typed weak-ML values and values on the right-hand side are Coq values. Note that Coq values are inherently well-typed, because Coq is based on type theory. The only difficulty is the treatment of polymorphism. When the source program contains a polymorphic variable x applied to some types T̄, this occurrence is translated as the application of Γ(x) to the translation of the types T̄, written Γ(x) ⟪T̄⟫. Similarly, the type arguments of data constructors get translated. Function closures do not exist in source programs, so there is no need to translate them.

⌈x T̄⌉_Γ                    ≡  Γ(x) ⟪T̄⟫
⌈n⌉_Γ                      ≡  n
⌈D T̄ (v̂1, . . . , v̂n)⌉_Γ    ≡  D ⟪T̄⟫ (⌈v̂1⌉_Γ, . . . , ⌈v̂n⌉_Γ)
⌈µf.ΛĀ.λx.t⌉_Γ             ≡  not needed at this time

When decoding closed values, the context Γ is typically empty. Henceforth, I write ⌈v̂⌉ as a shorthand for the decoding in the empty context.

The decoding of polymorphic values is not needed for generating characteristic formulae, however it is involved in the proofs of soundness and completeness. Having in mind the definition of the decoding of polymorphic values helps understanding the generation of characteristic formulae for polymorphic let-bindings. The decoding of a closed polymorphic value ΛĀ. v̂ is a Coq function that expects some types Ā and returns the decoding of the value v̂.

⌈ΛĀ. v̂⌉  ≡  λĀ. ⌈v̂⌉

For example, the value (nil, nil) has type ∀A. ∀B. list A × list B. The Coq translation of this value is fun A B => (@nil A, @nil B), where the symbol @ indicates that type arguments are given explicitly.
4.4 Characteristic formulae: formal presentation

This section contains the formal definition of a characteristic formula generator, including the definition of the notation system for characteristic formulae. It starts with a description of the treatment of polymorphism, and it includes the definition of AppReturns.
4.4.1 Characteristic formulae for polymorphic definitions

Consider a polymorphic let-binding of the form let x = ΛA. t̂1 in t̂2, and let T1 denote the type of t̂1. The variable x has type ∀A.T1. I define the characteristic formula associated with that polymorphic let-binding as follows.

⟦let x = ΛA. t̂1 in t̂2⟧ ≡ λP. ∃P′. (∀A. ⟦t̂1⟧ (P′ A)) ∧ (∀x. (∀A. (P′ A) (x A)) ⇒ ⟦t̂2⟧ P)

where P′ has type ∀A.(⟪T1⟫ → Prop) and the quantified variable x has type ∀A.⟪T1⟫. The formula first asserts that, for any instantiation of the types A, the typed term t̂1, which depends on the variables A, should satisfy the post-condition (P′ A). Observe that the post-condition P′ describing x is a polymorphic predicate of type ∀A.(⟪T1⟫ → Prop). It is not a predicate of type (∀A.⟪T1⟫) → Prop, because we only care about specifying monomorphic instances of the polymorphic variable x. A particular monomorphic instance of x in t̂2 takes the form x T. It is expected to satisfy the predicate P′ T. Hence the assumption ∀A. (P′ A) (x A) provided for reasoning about x in the continuation t̂2.

It remains to see how polymorphism is handled in function definitions. Consider a function definition let rec f = ΛA.λx1 … xn.t̂1 in t̂2. The associated body description, shown below, quantifies over the types A. Those types are involved in the type of the specification K, in the type of the arguments xi, in the characteristic formula of the body t̂1, and in the implicit arguments of the predicate Specₙ.

∀A. ∀K. is_specₙ K ∧ (∀x1 … xn. K x1 … xn ⟦t̂1⟧) ⇒ Specₙ f K
4.4.2 Evaluation predicate

The predicate AppReturns is defined in the CFML library in terms of a lower-level predicate called AppEval, which is one of the three axioms upon which the library is built. The proposition AppEval F V V′ captures the fact that the application of a function whose decoding is F to a value whose decoding is V returns a value whose decoding is V′. The type of AppEval is as follows.

AppEval : ∀A B. Func → A → B → Prop

A formal definition that justifies the interpretation of the axiom AppEval is included in the soundness proof (§6.1.5).

Intuitively, the judgment AppReturns F V P asserts that the application of F to the argument V terminates and returns a value V′ that satisfies P. The predicate AppReturns can thus be easily defined in terms of AppEval, with a definition that need not refer to program values.

AppReturns F V P ≡ ∃V′. AppEval F V V′ ∧ P V′
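As a point of reference, the following Coq snippet is a minimal self-contained sketch of these two definitions for the pure setting; it mirrors the text but does not reproduce CFML's actual library code.

(* Sketch only: [Func] is an abstract type of reflected functions and
   [AppEval] is postulated, mirroring the axiom described above. *)
Parameter Func : Type.

Axiom AppEval : forall (A B : Type), Func -> A -> B -> Prop.

Definition AppReturns (A B : Type) (F : Func) (V : A) (P : B -> Prop) : Prop :=
  exists V' : B, AppEval A B F V V' /\ P V'.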
⟦ v̂ ⟧Γ ≡ return ⌈v̂⌉Γ
⟦ f̂ v̂1 … v̂n ⟧Γ ≡ app ⌈f̂⌉Γ ⌈v̂1⌉Γ … ⌈v̂n⌉Γ
⟦ crash ⟧Γ ≡ crash
⟦ if v̂ then t̂1 else t̂2 ⟧Γ ≡ if ⌈v̂⌉Γ then ⟦t̂1⟧Γ else ⟦t̂2⟧Γ
⟦ let x = ΛA. t̂1 in t̂2 ⟧Γ ≡ let_A X = ⟦t̂1⟧Γ in ⟦t̂2⟧(Γ, x↦X)
⟦ let rec f = ΛA.λx1 … xn.t̂1 in t̂2 ⟧Γ ≡ let rec_A F X1 … Xn = ⟦t̂1⟧(Γ, f↦F, xi↦Xi) in ⟦t̂2⟧(Γ, f↦F)

Figure 4.6: Characteristic formula generator (formal presentation)
4.4.3 Characteristic formula generation with notation

The characteristic formula generator can now be given a formal presentation in which types are made explicit and in which Caml values are reflected into Coq through calls to the decoding operator ⌈·⌉. In order to justify that characteristic formulae can be displayed like the source code, I proceed in two steps. First, I describe the characteristic formula generator in terms of an intermediate layer of notation (Figure 4.6). Then, I define the notation layer in terms of higher-order logic connectives and in terms of the predicate AppReturnsₙ (Figure 4.7). The contents of those figures simply refine the earlier informal presentation. From those definitions, it appears clearly that the size of a characteristic formula is linear in the size of the source code it describes.

Observe that the notation for function definitions relies on two auxiliary pieces of notation, introduced with the keywords let fun and body. Those auxiliary definitions will be helpful later on for pretty-printing top-level functions and mutually-recursive functions.

The CFML generator annotates characteristic formulae with tags in order to ease the work of the Coq pretty-printer. A tag is simply an identity function with a particular name. For example, tag_if is the tag used to label Coq expressions that correspond to the characteristic formula of a conditional expression. The Coq definition of the tag and the definition of the notation in which it is involved appear next.

Definition tag_if : forall A, A -> A := id.
Notation "'If' V 'Then' F1 'Else' F2" :=
  (tag_if (fun P => (V = true -> F1 P) /\ (V = false -> F2 P))).

The use of tags, which appear as head constants of characteristic formulae, significantly eases the task of the pretty-printer of Coq, because each tag corresponds to exactly one piece of notation.
return V ≡ λP. P V
app F V1 … Vn ≡ λP. AppReturnsₙ F V1 … Vn P
crash ≡ λP. False
if V then F else F′ ≡ λP. (V = true ⇒ F P) ∧ (V = false ⇒ F′ P)
let_A X = F in F′ ≡ λP. ∃P′. (∀A. F (P′ A)) ∧ (∀X. (∀A. P′ A (X A)) ⇒ F′ P)
let rec_A F X1 … Xn = F1 in F2 ≡ (let fun F = (body_A F X1 … Xn = F1) in F2)
let fun F = H in 𝓕 ≡ λP. ∀F. H ⇒ 𝓕 P
body_A F X1 … Xn = 𝓕 ≡ ∀A K. is_specₙ K ∧ (∀X1 … Xn. K X1 … Xn 𝓕) ⇒ Specₙ F K

Figure 4.7: Syntactic sugar to display characteristic formulae
4.5 Generated axioms for top-level definitions

Reasoning on complete programs    Consider a program let x = t̂. The CFML generator could produce a definition corresponding to the characteristic formula of the typed term t̂, and then the user could use this definition to specify the result value x of the program with respect to some post-condition P.

Definition X_cf := ⟦t̂⟧.
Lemma X_spec : X_cf P.

Yet, this approach is not really practical, because a Caml program is typically made of a sequence of top-level declarations, not just of a single declaration.

Instead, the CFML generator produces axioms to describe the result values of each top-level declaration, and produces an axiom to describe the characteristic formula associated with each definition. In what follows, I first present the axioms that are generated for a top-level value definition, and then present specialized axioms for the case of functions.

Generated axioms for values    Consider a top-level non-polymorphic definition let x = t̂^T that does not describe a function. The CFML tool generates two axioms. The first axiom, named X, has type ⟪T⟫, which is the Coq type that reflects the type T. This axiom represents the result of the evaluation of the declaration let x = t̂.

Axiom X : ⟪T⟫.

The second axiom, named X_cf, can be used to establish properties about the value X. Given a post-condition P of type ⟪T⟫ → Prop, this axiom describes what needs to be proved in order to deduce that the proposition P X holds.

Axiom X_cf : ∀P. ⟦ t̂ ⟧∅ P ⇒ P X.
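To give a concrete feel for the two generated axioms, here is a hypothetical instance for a top-level definition of Caml type int, reflected here as Coq's Z for the purpose of the sketch; the placeholder t_cf stands for the characteristic formula of the body, and none of these names reproduce CFML's literal output.

Require Import ZArith.

(* Illustrative sketch: for a top-level definition [let x = t] of Caml
   type int, two axioms of roughly this shape are generated. *)
Parameter t_cf : (Z -> Prop) -> Prop.   (* stands for the formula J t K *)

Axiom X : Z.
Axiom X_cf : forall P : Z -> Prop, t_cf P -> P X.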
When the definition x has a polymorphic type S, the Coq value X admits the type ⟪S⟫, which is of the form ∀A.⟪T⟫. In this case, the post-condition P has type ∀A.(⟪T⟫ → Prop) and the axiom X_cf is as follows.

Axiom X_cf : ∀A. ∀P. ⟦ t̂ ⟧∅ (P A) ⇒ (P A) (X A).
Proof that types are inhabited    A declaration of the form Axiom X : ⟪S⟫ would be incoherent if the type ⟪S⟫ were not inhabited. To avoid this issue, CFML generates a proof obligation to ensure that ⟪S⟫ is inhabited before producing the axiom X. The statement relies on the Coq predicate Inhab, which holds only of inhabited types. The lemma generated thus takes the form:

Lemma X_safe : Inhab ⟪S⟫.

This lemma can be discharged automatically by Coq whenever the type ⟪S⟫ is inhabited, through invocation of Coq's proof search tactic eauto. However, if the type ⟪S⟫ is not inhabited, then the lemma X_safe cannot be proved. In this case, the generated file does not compile, and soundness is not compromised.

Overall, CFML rejects definitions of values whose type is not inhabited. Fortunately, those values are typically uninteresting to specify. Indeed, a term can admit an uninhabited type only if it never returns a value, that is, if it always either crashes or diverges.
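For intuition, an inhabitation predicate in the spirit of Inhab can be sketched in Coq as follows; the actual CFML definition may differ (for instance, it may be stated as a type class).

(* Illustrative sketch of an inhabitation predicate. *)
Definition Inhab (A : Type) : Prop := exists x : A, True.

Lemma Inhab_bool : Inhab bool.
Proof. exists true. exact I. Qed.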
Axioms generated for functions    Consider a top-level function definition of the form let rec f = λx. t. Such a definition can be encoded as the definition let f = (let rec f = λx. t in f), which can be handled like other top-level value definitions. That said, it is more convenient in practice to directly produce an axiom corresponding to the body description of the function let rec f = λx. t.

Consider a function definition of the form let rec f = ΛA.λx1 … xn.t̂. An axiom named F represents the function. The type Func can be safely considered as being inhabited, so there is no concern about soundness here.

Axiom F : Func.

The axiom F_cf generated for F asserts the body description of the function f.

Axiom F_cf : body_A F X1 … Xn = ⟦ t̂ ⟧(f↦F, xi↦Xi).

In Chapter 6, I prove that all the generated axioms are sound.
4.6 Extensions

In this section, I explain how to extend the generation of characteristic formulae to handle mutually-recursive functions, assertions, and pattern matching. For the sake of presentation, I return to an informal presentation style, working with untyped terms.

4.6.1 Mutually-recursive functions

To reason on mutually-recursive functions, one needs to state the specification of each of the functions involved, and then to prove those functions correct together, because the termination of a function may depend on the termination of the other functions. Consider the definition of two mutually-recursive functions f1 and f2, of body t1 and t2. The corresponding characteristic formula is built as follows.

⟦let f1 = λx1. t1 and f2 = λx2. t2 in t⟧ ≡ λP. ∀f1. ∀f2. H1 ∧ H2 ⇒ ⟦t⟧ P

where
H1 ≡ ∀K. is_spec K ∧ (∀x1. K x1 ⟦t1⟧) ⇒ Spec f1 K
H2 ≡ ∀K. is_spec K ∧ (∀x2. K x2 ⟦t2⟧) ⇒ Spec f2 K
There might be more than two mutually-recursive functions; moreover, each function might expect several arguments. To avoid the need to define an exponential amount of notation, I rely directly on the notation body for pretty-printing the body description Hi associated with each function, and I rely directly on a generalized version of the notation let fun for pretty-printing the definition of mutually-recursive definitions. For example, for the arity two, I define:

(let fun F1 = H1 and F2 = H2 in 𝓕) ≡ λP. ∀F1. ∀F2. H1 ∧ H2 ⇒ 𝓕 P

Overall, I need one version of body for each possible number of arguments and one version of let fun for each possible number of mutually-recursive functions.

For top-level mutually-recursive functions, I first generate one axiom to represent each function, and then generate the body description of each of the functions. Those body descriptions may refer to the name of any of the mutually-recursive functions involved.

This approach to treating mutually-recursive functions works very well in practice, as I could experience through the verification of Okasaki's bootstrapped queues, whose implementation relies on five mutually-recursive functions. Those functions involve polymorphic recursion, and some of them expect several arguments.
4.6.2 Assertions

Programmers can use assertions to test at runtime whether a particular property or invariant of the data structures being manipulated is satisfied. If an assertion evaluates to the boolean true, then the program execution continues as if the assertion was not present. However, if the assertion evaluates to false, then the program halts by raising a fatal error. Assertions are thus a convenient tool for debugging, as they enable the programmer to detect precisely when and where the specifications are violated.

When formal methods are used to prove that the program satisfies its specification before the program is executed, the use of assertions appears to be of much less interest. Nevertheless, I want to be able to verify existing programs, whose code might already contain assertions. So, I wish to prove that all the assertions contained in a program always evaluate to true. In what follows, I recall the syntax and semantics of assertions and I explain how to build characteristic formulae for assertions.

The term assert t1 in t2 evaluates the assertion t1 and proceeds to the evaluation of t2 only when t1 returns true.

t1 ⇓ true        t2 ⇓ v
------------------------------
(assert t1 in t2) ⇓ v

The characteristic formula is very simple to build. Indeed, to show that the term assert t1 in t2 admits P as post-condition, it suffices to show that the characteristic formula of t1 holds of the post-condition (= true) and that t2 admits P as post-condition. Hence the following definition:

⟦assert t1 in t2⟧ ≡ λP. ⟦t1⟧ (= true) ∧ ⟦t2⟧ P

Remark: the expression assert t1 in t2 has exactly the same behavior as the expression let x = t1 in if x then t2 else crash. One can indeed prove that the characteristic formula of this latter term is equivalent to the formula given above.
4.6.3 Pattern matching

Syntax and semantics of pattern matching    The grammar of terms is extended with a construction match v with p1 ↦ t1 | … | pn ↦ tn, denoting that the value v is matched against the linear patterns pi. The terms ti are the continuations associated with each of the patterns. In the formal presentation, I let b range over pairs of a pattern and a continuation, of the form p ↦ t. Formally, the pattern matching construction takes the form match v with b.

t := … | match v with b
b := p ↦ t
p := x | n | D(p, …, p)

For generating characteristic formulae, it is simpler that patterns do not contain any wildcard. So, during the normalization process of the source code, I replace all wildcards by fresh identifiers. Note: the treatment of and-patterns, or-patterns, and when-clauses is not described.

The big-step reduction rules associated with patterns are given next. Below, the function fv is used to compute the set of free variables of a pattern.

x = fv p        v = [x → v′] p        ([x → v′] t) ⇓ v″
-----------------------------------------------------------
(match v with p ↦ t | b) ⇓ v″

x = fv p        (∀v′. v ≠ [x → v′] p)        (match v with b) ⇓ v″
---------------------------------------------------------------------
(match v with p ↦ t | b) ⇓ v″
Example    I first illustrate the generation of characteristic formulae for pattern matching through an example. Assume that v is a pair of integers being pattern-matched first against the pattern (3, x) and then against the pattern (x, 4). Assume that the two associated continuations are terms named t1 and t2. The corresponding characteristic formula is described by the following statement.

⟦match v with (3, x) ↦ t1 | (x, 4) ↦ t2⟧ ≡ λP.
      (∀x. v = (3, x) ⇒ ⟦t1⟧ P)
    ∧ ((∀x. v ≠ (3, x)) ⇒
          (∀x. v = (x, 4) ⇒ ⟦t2⟧ P)
        ∧ ((∀x. v ≠ (x, 4)) ⇒ False))

If the value v is equal to (3, x) for some x, then we have to show that t1 returns a value satisfying P, for all x. Otherwise, we can assume that the value v is different from (3, x) for all x. Then, if the value v is equal to (x, 4) for some x, we have to show that t2 returns a value satisfying P. Otherwise, we can assume that the value v is different from (x, 4). Since there are no remaining cases in the pattern matching, and since we want to show that the code does not crash, we have to show that all the possible cases have been already covered. Hence the proof obligation False.

Observe that the treatment of pattern matching in characteristic formulae allows for reasoning on the branches one by one, accumulating the negation of all the previous patterns that have not been matched.

In practice, I pretty-print the above characteristic formula as follows:

case v = (3, x) vars x then ⟦t1⟧ else
case v = (x, 4) vars x then ⟦t2⟧ else crash

The idea is to rely on a cascade of cases, introduced by the keyword case, corresponding to the various patterns being tested. The keyword vars introduces the list of pattern variables bound by each pattern. The then keyword introduces the characteristic formula associated with the continuation to be executed when the current pattern matches. The else keyword introduces the formula associated with the continuation to be followed otherwise. Note that variables bound by the keyword vars range over the then branch but not over the else branch. Those pieces of notation are formally defined soon afterwards.
Informal presentation    Consider a term match v with p ↦ t | b. The value v is matched against a pattern p, associated with a continuation t, and I let b range over the remaining branches of the pattern matching on v. Let x denote the set of pattern variables occurring in p. The generation of characteristic formulae for pattern matching is described through the following two rules.

⟦match v with p ↦ t | b⟧ ≡ λP. (∀x. v = p ⇒ ⟦t⟧ P) ∧ ((∀x. v ≠ p) ⇒ ⟦match v with b⟧ P)

⟦match v with ∅⟧ ≡ λP. False

Remark: the value v describing the argument of the pattern matching is duplicated in each branch. Thus, the size of the characteristic formula might not be linear in the size of the input program when the value v is not reduced to a variable. In practice, this does not appear to be a problem. However, from a theoretical perspective, it is straightforward to re-establish linearity of the characteristic formula. Indeed, it suffices to change the term match v with p ↦ t | b into let x = v in match x with p ↦ t | b, for some fresh name x, before computing the characteristic formula.
Decoding of patterns    In order to formalize the generation of characteristic formulae for patterns, I need to consider typed patterns, written p̂, and to define the decoding of a typed pattern. This function, written ⌈p̂⌉Γ, transforms a typed pattern p̂ of type T into a Coq value of type ⟪T⟫, by replacing all data constructors with their logical counterpart, and by replacing pattern variables with their associated Coq variable from the map Γ.

⌈x⌉Γ ≡ Γ(x)
⌈n⌉Γ ≡ n
⌈D T (p̂1, …, p̂n)⌉Γ ≡ D ⟪T⟫ (⌈p̂1⌉Γ, …, ⌈p̂n⌉Γ)

Remark: the variables bound in Γ always have a monomorphic type, since pattern variables are not generalized (I have not included the construction let p = t in t′ in the source language).
Formal presentation, with notation    I define the generation of characteristic formulae through an intermediate layer of syntactic sugar, relying on the notation case introduced earlier on in the example. Below, x denotes the list of variables bound by the pattern p̂, and X denotes a list of fresh Coq variables of the same length.

⟦match v̂ with p̂ ↦ t̂ | b⟧Γ ≡ case ⌈v̂⌉Γ = ⌈p̂⌉(x↦X) vars X then ⟦t̂⟧(Γ, x↦X) else ⟦match v̂ with b⟧Γ

⟦match v̂ with ∅⟧Γ ≡ crash

where the notation case is formally defined as follows:

(case V = V′ vars X then F else F′) ≡ λP. (∀X. V = V′ ⇒ F P) ∧ ((∀X. V ≠ V′) ⇒ F′ P)
Remark: in the characteristic formula, the value ⌈v̂⌉Γ appears under the scope of the pattern variables x, although this was not the case in the program source code. So, if the value v̂ contains a variable with the same name as a variable bound in the pattern p̂, then we need to alpha-rename the pattern variable in conflict. Alternatively, we may first bind the value v̂ to a fresh variable x, building the term let x = v̂ in match x with p̂ ↦ t̂ | b, in which the value being matched, namely x, is guaranteed to be distinct from all the pattern variables. The implementation of CFML performs this transformation whenever the value v̂ contains a free variable that clashes with one of the pattern variables involved in the pattern matching.
Exhaustive patterns    By default, the OCaml type-checker checks for exhaustiveness of coverage in pattern-matching constructions, through an analysis based on types. If there exists a value that might not match any of the patterns, then the Caml compiler issues a warning. There are cases where the invariants of the program guarantee that the given set of branches actually covers all the possible cases. In such a case, the user may ignore the warning produced by the compiler.

When the compiler's analysis recognizes a pattern matching as exhaustive, then it is safe to modify the characteristic formula so as to remove the proof obligation that corresponds to exhaustiveness. In such a case, I replace the formula crash, which is defined as λP. False, with the formula done, which is defined as λP. True. Through this change, a nontrivial proof obligation crash P is replaced with a trivial proof obligation done P. In practice, most patterns are exhaustive, so the amount of work required in interactive proofs is greatly reduced by this optimization, which exploits the strength of the exhaustiveness decision procedure.
Conclusion and extensions    In summary, characteristic formulae rely on equalities and disequalities to reason on pattern matching. The branches of a pattern can be studied one by one, following the execution flow of the program. The CFML library also includes a tactic that applies to the characteristic formula of a pattern with n branches and directly generates n subgoals, plus one additional subgoal when the pattern is not recognized as exhaustive. The treatment of alias patterns (of the form p as x) and of top-level or-patterns is not described here but is implemented in CFML. The treatment of when-clauses and general or-patterns is left to future work.
4.7 Formal proofs with characteristic formulae

In this section, I explain how to reason about programs through characteristic formulae. While it is possible to manipulate characteristic formulae directly, I have found it much more effective to design a set of high-level tactics. Moreover, the use of high-level tactics means that the user does not need to know about the details of the construction of characteristic formulae.
4.7.1 Reasoning tactics: example with let-bindings

Definition of the core tactic    To start with, assume we want to show that a term let x = t1 in t2 returns a value satisfying a predicate P. The goal to be established, ⟦let x = t1 in t2⟧ P, is equivalent to the proposition shown next.

∃P′. ⟦t1⟧ P′ ∧ (∀X. P′ X ⇒ ⟦t2⟧(x↦X) P)

On this goal, a call to the tactic xlet with an argument P1 instantiates P′ as P1 and produces two subgoals. The first subgoal, ⟦t1⟧ P1, corresponds to the body of the let-binding. The second subgoal consists in proving ⟦t2⟧(x↦X) P in a context extended with a free variable named X and a hypothesis P1 X.

The Coq implementation of the tactic xlet is straightforward. First, it provides its argument P1 as a witness, with a call to exists P1. Then, it splits the conjunction, with the tactic split. Finally, in the second subgoal, it introduces X, as well as the hypothesis about X, calling the tactic intro twice. The corresponding Coq tactic definition appears next.

Tactic Notation "xlet" constr(P1) :=
  exists P1; split; [ | intro; intro ].
Inference of specification    The tactic xlet described so far requires the intermediate specification P1 to be provided explicitly. Yet, it is very often the case that this post-condition can be automatically inferred when solving the goal ⟦t1⟧ P1. For example, if t1 is the application of a function, then the post-condition P1 gets unified with the post-condition of that function. So, I also define a tactic xlet that does not expect any argument, and that simply instantiates P1 with a fresh unification variable. As soon as this unification variable is instantiated in the first subgoal ⟦t1⟧ P1, the hypothesis P1 X from the second subgoal becomes fully determined. The implementation of this alternative tactic xlet relies on a call to the tactic exists _, where the underscore symbol indicates that a fresh unification variable should be created.

Tactic Notation "xlet" :=
  exists _; split; [ | intro; intro ].
Generation of names    Choosing appropriate names for variables and hypotheses is extremely important in practice for conducting interactive proofs. Indeed, a proof script that relies on generated names such as H16 is very brittle, because any change to the program or to the proof may shift the names and result in H16 being renamed, for example into H19. Moreover, proof obligations are much easier to read when they involve only meaningful names. I have invested particular care in developing two versions of every tactic: one version where the user explicitly provides a name for a variable or a hypothesis, and one version that tries to pick a clever name automatically. As a result, it is possible to obtain robust proof scripts and readable proof obligations at a reasonable cost, by providing only a few names explicitly.

Let me illustrate this with the tactic xlet. A call to the tactic intro moves a universally-quantified variable or a hypothesis from the head of the goal into the proof context. The name X that appears in the characteristic formula of a let-binding comes from the source code of the program, after it has been put in normal form. When the name X comes from the original source code, it makes sense to reuse that name. However, when X corresponds to a variable that has been introduced during the normalization process (for assigning names to intermediate expressions from the source code), it is sometimes better to use a more meaningful name than the one that has been generated automatically. A call of the form xlet as X enables one to specify the name that should be used for X.

The tactic xlet also needs to come up with a name for the hypothesis P1 X. By default, the tactic picks the name HX, which is obtained by placing the letter H in front of the name of the variable. However, it is also possible to specify the name to be used explicitly, by providing it as an extra argument to xlet. This possibility is particularly useful when the hypothesis P1 X needs to be decomposed in several conjuncts. For example, if the post-condition P1 of the term t1 takes the form (λX. ∃Y. H1 Y ∧ H2 Y), then a call to the tactic xlet as X (Y&M1&M2) leaves in the second subgoal a proof context containing a variable Y, a hypothesis M1 of type H1 Y and a hypothesis M2 of type H2 Y. Without such a convenient feature, one would have to write instead something of the form xlet as X. destruct HX as (Y&M1&M2), or to rely on a more evolved tactic language such as SSReflect [30].

Overall, there are five ways to call the tactic xlet, as summarized below.

xlet.
xlet P1 as X.
xlet P1 as X HX.
xlet as X.
xlet as X HX.

All of the tactics implemented in CFML offer a similar degree of flexibility.
4.7.2 Tactics for reasoning on function applications

The process of proving a specification for a function f and then exploiting that specification for reasoning on applications of that function is as follows. First, one states the specification of f as a lemma of the form Specₙ f K. Second, one proves this lemma using the characteristic formula describing the behavior of f. Third, one registers that lemma in a database of specification lemmas, so that the name of the lemma does not need to be mentioned explicitly in the proof scripts. Then, when reasoning on a piece of code that involves an application of the function f, a proof obligation of the form AppReturnsₘ f x1 … xm P is involved. The tactic xapp helps proving that goal. The implementation of the tactic depends on whether the function is applied to the exact number of arguments it expects (m = n), or whether a partial application is involved (m < n), or whether an over-application is involved (m > n). Note that all the lemmas about Specₙ mentioned thereafter are proved in Coq.

Normal applications    The elimination lemma for the predicate Specₙ (introduced in §4.2.3) can be used to deduce a proposition about AppReturnsₙ f from the specification Specₙ f K. However, it does not directly have a conclusion of the form AppReturnsₘ f x1 … xm P. So, the implementation of the tactic xapp relies on a corollary of the elimination lemma, shown next.

(Specₙ f K) ∧ (∀R. Weakenable R ⇒ K x1 … xn R ⇒ R P)  ⇒  AppReturnsₙ f x1 … xn P
Let me illustrate the working of this lemma through an example. Suppose that a value x is a non-negative multiple of 4 (i.e. x = 4 ∗ k for some k ≥ 0), and let me show that the application of the function half to x returns an even value, that is, AppReturns₁ half x even. The proof obligation is:

∀R. Weakenable R ⇒ (∀n ≥ 0. x = 2 ∗ n ⇒ R (= n)) ⇒ R even

By instantiating the hypothesis with n = 2 ∗ k and checking that x = 2 ∗ n, we derive the fact R (= 2 ∗ k). The conclusion, namely R even, follows: since R is compatible with weakening, it suffices to check that the proposition ∀y. y = 2 ∗ k ⇒ even y holds.
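The weakening step in this argument can be replayed as a small self-contained Coq lemma; the definitions of Weakenable and even below are illustrative stand-ins for CFML's own definitions.

Require Import ZArith.
Open Scope Z_scope.

(* [Weakenable R] captures compatibility with weakening, as used above. *)
Definition Weakenable {B : Type} (R : (B -> Prop) -> Prop) : Prop :=
  forall P P' : B -> Prop, (forall x, P x -> P' x) -> R P -> R P'.

Definition even (y : Z) : Prop := exists m, y = 2 * m.

(* If R is compatible with weakening and R (= 2*k) holds,
   then R even follows, because any value equal to 2*k is even. *)
Lemma weaken_example : forall (R : (Z -> Prop) -> Prop) (k : Z),
  Weakenable R -> R (fun y => y = 2 * k) -> R even.
Proof.
  intros R k W H. apply (W (fun y => y = 2 * k) even).
  - intros y Hy. unfold even. exists k. exact Hy.
  - exact H.
Qed.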
Partial applications    Partial application occurs when the function f is applied to fewer arguments than it normally expects. The result of such a partial application is another function, call it g, whose specification is an appropriate specialization of the specification of f. For example, if K is the specification of f, then the partial application of f to an argument x admits the specification K x. So, to show that the partial application of f to arguments x1 … xm satisfies a post-condition P, one needs to prove that the specification P is a consequence of the specification K x1 … xm. The following result is exploited by the tactic xapp for reasoning on partial applications (n > m).

(Specₙ f K) ∧ (∀g. Specₙ₋ₘ g (K x1 … xm) ⇒ P g)  ⇒  AppReturnsₘ f x1 … xm P

Over-applications    Over-application occurs when the function f is applied to more arguments than it normally expects. This situation typically occurs with higher-order combinators, such as compose, that return a function. The case of over-applications can be viewed as a particular case of normal applications. Indeed, an m-ary application can be viewed as an n-ary application that returns a function of m−n arguments. More precisely, when m > n, the goal AppReturnsₘ f x1 … xm P can be rewritten into the goal shown below.

AppReturnsₙ f x1 … xn (λg. AppReturnsₘ₋ₙ g xₙ₊₁ … xm P)

Hence, the reasoning on over-applications relies on the following lemma.

(Specₙ f K) ∧ (∀R. Weakenable R ⇒ K x1 … xn R ⇒ R (λg. AppReturnsₘ₋ₙ g xₙ₊₁ … xm P))  ⇒  AppReturnsₘ f x1 … xm P
4.7.3 Tactics for reasoning on function definitions

Reasoning by weakening    The tactic xweaken allows proving a specification Specₙ F K′ from an existing specification Specₙ F K. Using weakening, one may derive several specifications for a single function without verifying the code of that function more than once. The targeted specification K′ must be weaker than the existing specification K, in the sense that K x1 … xn R has to imply K′ x1 … xn R for any predicate R compatible with weakening. Notice that K′ has to be a valid specification, in the sense that it should satisfy the predicate is_specₙ.

(Specₙ f K) ∧ (∀x1 … xn R. Weakenable R ⇒ K x1 … xn R ⇒ K′ x1 … xn R) ∧ (is_specₙ K′)  ⇒  Specₙ f K′

For example, from the specification of the function half, we can deduce a weaker specification that captures the fact that when the input is of the form 4 ∗ m then the output is a non-negative integer.

Spec₁ half (λx R. ∀m. m ≥ 0 ⇒ x = 4 ∗ m ⇒ R (λy. y ≥ 0))

In this case, the proof obligation is:

∀x R. Weakenable R ⇒ (∀n. n ≥ 0 ⇒ x = 2 ∗ n ⇒ R (= n)) ⇒ (∀m. m ≥ 0 ⇒ x = 4 ∗ m ⇒ R (λy. y ≥ 0))

Consider a particular value of x, R and m. The goal is to show R (λy. y ≥ 0), and we have the assumptions Weakenable R and (∀n. n ≥ 0 ⇒ x = 2 ∗ n ⇒ R (= n)) and m ≥ 0 and x = 4 ∗ m. If we instantiate n with 2 ∗ m in the second assumption about R, we get R (= 2 ∗ m). Since R is compatible with weakening, we can prove the goal R (λy. y ≥ 0) simply by showing that any value y equal to 2 ∗ m is non-negative. This property holds because m is non-negative.
Reasoning by induction    The purpose of the tactic xinduction is to establish by induction that a function f admits a specification K. More precisely, it states that, to prove Specₙ f K, it suffices to show that K is a correct specification for the application of f to an argument x under the assumption that K is already a correct specification for applications of f to smaller arguments.

In the particular case of a function of arity one, calling the tactic xinduction on an argument (≺), which denotes a well-founded relation used to argue for termination, transforms a goal of the form Spec₁ f K into the goal:

Spec₁ f (λx R. Spec₁ f (λx′ R′. x′ ≺ x ⇒ K x′ R′) ⇒ K x R)

The tactic xinduction relies on the following lemma.

Specₙ f (λx1 … xn R.
    Specₙ f (λx′1 … x′n R′. (x′1, …, x′n) ≺ (x1, …, xn) ⇒ K x′1 … x′n R′)
    ⇒ K x1 … xn R)
⇒ Specₙ f K

where f has type Func, K has type A1 → … → An → ((B → Prop) → Prop) → Prop, and (≺) is a well-founded binary relation over values of type A1 × … × An.
Advanced reasoning by induction    The induction lemma can be used to establish the specification of a function by induction on a well-founded relation over the arguments of that function. However, it does not support induction over the structure of the proof of an inductive proposition. The purpose of the tactic xintros is to change a goal of the form Specₙ f K into a goal with a conclusion about AppReturnsₙ f, thereby making it possible to conduct advanced forms of induction.

The implementation of the tactic xintros relies on an introduction lemma for specifications, which is a form of reciprocal to the elimination lemma: it allows proving the specification of a function f in terms of a description of the behavior of applications of f to some arguments. This introduction lemma is formalized as follows.

(∀x1 … xn. K x1 … xn (AppReturnsₙ f x1 … xn)) ∧ (Specₙ f (λx1 … xn R. True)) ∧ (is_specₙ K)  ⇒  Specₙ f K

The lemma involves two side-conditions. The first one asserts that f is an n-ary curried function, meaning that the application to the n−1 first arguments always terminates. The second hypothesis asserts that K is a valid specification, in the sense that it is compatible with weakening.
4.7.4 Overview of all tactics

The CFML library includes one tactic for each construction of the language. I have already presented the tactic xlet for let-bindings and the tactic xapp for applications. Four other tactics are briefly described next.

The tactic xret is used to reason on a value. It simply unfolds the notation return, changing a goal (return V) P into P V. The tactic xif helps reasoning on a conditional. It applies to a goal of the form (if V then F else F′) P and produces two subgoals: first F P, with a hypothesis V = true, and second F′ P, with a hypothesis V = false. The syntax xif as H allows specifying how to name those new hypotheses. The tactic xfun S applies to a goal of the form (let fun F = H in 𝓕) P. It leaves two subgoals. The first goal requires proving the proposition S, which is typically of the form Spec F K, under the assumption H, which is the body description for F. The second goal corresponds to the continuation: it requires proving 𝓕 P under the assumption S, which can be used to reason about applications of the function F.

For pattern matching, I provide the tactic xmatch, which produces one goal for each branch of the pattern matching. In case the completeness of the pattern matching could not be established automatically, the tactic produces an additional goal asserting that the pattern matching is complete (recall §4.6.3). I also provide a tactic, called xcase, for reasoning on the branches one by one. This strategy allows exploiting the assumption that a pattern has not been matched for deriving a fact that needs to be exploited through the reasoning on the remaining branches. Thereby, the tactic xcase helps factorizing pieces of proof involved in the verification of a nontrivial pattern matching.

The remaining tactics have already been described; they are briefly summarized next. The tactic xweaken allows establishing a specification from another specification, xinduction is used for proving a specification by induction on a well-founded relation, and xintros is used for proving a specification by induction on the proof of an inductively-defined predicate. The tactic xcf applies the characteristic formula of a top-level definition. The tactic xgo automatically applies the appropriate x-tactic, and stops when a specification of a local function is needed or when a ghost variable involved in an application cannot be inferred.
Chapter 5

Generalization to imperative programs

In this chapter, I explain how to generalize the ingredients from the previous chapter in order to support imperative programs. First, I describe a data structure for representing heaps, and define Separation Logic connectives for concisely describing those heaps. Second, I define a predicate transformer called local that supports in particular applications of the frame rule. Third, I introduce the predicates used to specify imperative functions. I then explain how to build characteristic formulae for imperative programs using those ingredients. Finally, I describe the treatment of null pointers and strong updates.

5.1 Extension of the source language

5.1.1 Extension of the syntax and semantics

To start with, I briefly describe the syntax and the semantics of the source language extended with imperative features. Compared with the pure language from the previous chapter, the grammar of values is extended with memory locations, written l. The null pointer is just a particular location that never gets allocated. Values are also extended with primitive functions for allocating, reading and updating reference cells, and with a primitive function for comparing pointers. The unit value, written tt, is defined as a particular algebraic data type definition with a unique data constructor. The grammar of terms is extended with sequences, for-loops and while-loops. The grammar of ML types is extended with reference types.

l := locations
v := … | l | ref | get | set | cmp
t := … | t ; t | for x = v to v do t | while t do t
τ := … | ref τ
v/m ⇓ v/m

([f → µf.λx.t] [x → v] t)/m ⇓ v′/m′
------------------------------------------
((µf.λx.t) v)/m ⇓ v′/m′

t1/m1 ⇓ v/m2        ([x → v] t2)/m2 ⇓ v′/m3
----------------------------------------------
(let x = t1 in t2)/m1 ⇓ v′/m3

t1/m1 ⇓ tt/m2        t2/m2 ⇓ v′/m3
--------------------------------------
(t1 ; t2)/m1 ⇓ v′/m3

t1/m ⇓ v′/m′
------------------------------------------
(if true then t1 else t2)/m ⇓ v′/m′

t2/m ⇓ v′/m′
------------------------------------------
(if false then t1 else t2)/m ⇓ v′/m′

l = 1 + max(dom(m))
------------------------------------------
(ref v)/m ⇓ l/(m ⊎ [l ↦ v])

l ∈ dom(m)
------------------------------------------
(get l)/m ⇓ (m[l])/m

l ∈ dom(m)
------------------------------------------
(set l v)/m ⇓ tt/(m[l ↦ v])

l1 = l2
------------------------------------------
(cmp l1 l2)/m ⇓ true/m

l1 ≠ l2
------------------------------------------
(cmp l1 l2)/m ⇓ false/m

a ≤ b        ([i → a] t)/m1 ⇓ tt/m2        (for i = a + 1 to b do t)/m2 ⇓ tt/m3
-----------------------------------------------------------------------------------
(for i = a to b do t)/m1 ⇓ tt/m3

a > b
------------------------------------------
(for i = a to b do t)/m ⇓ tt/m

t1/m ⇓ false/m′
------------------------------------------
(while t1 do t2)/m ⇓ tt/m′

t1/m1 ⇓ true/m2        t2/m2 ⇓ tt/m3        (while t1 do t2)/m3 ⇓ tt/m4
---------------------------------------------------------------------------
(while t1 do t2)/m1 ⇓ tt/m4

Figure 5.1: Semantics of imperative programs

Remark: I write the application of the primitive function set in the form set l v, as if set was a function of two arguments. However, it is simpler for the proofs to view set as a function of one argument that expects a pair of values, as this avoids the need to consider partially-applied primitive functions.

A memory store, written m, maps locations to program values. The reduction judgment takes the form t/m ⇓ v′/m′, asserting that the evaluation of the term t in a store m terminates and returns a value v′ in a store m′. The definition of this big-step judgment is standard. In the rules shown in Figure 5.1, m[l] denotes the value stored in m at location l, m[l ↦ v] describes a memory update at location l, and m ⊎ [l ↦ v] describes a memory allocation at location l. More generally, m1 ⊎ m2 denotes the disjoint union of two stores m1 and m2.
5.1.2 Extension of weak-ML

In this section, I extend the definitions of the type system weak-ML to add support for imperative features. One of the key features of weak-ML is that all locations admit a constant type, called loc, instead of a type of the form ref τ that carries the type associated with the corresponding memory cell. The grammar of weak-ML types is thus extended as follows.

T := … | loc

The erasure operation from ML types into weak-ML types maps all types of the form ref τ to the constant type loc.

⟨ref τ⟩ ≡ loc

The notions of typed term and of typed value immediately extend to the imperative setting. Due to the value restriction, only variables immediately bound to a value may receive a polymorphic type. This restriction means that the grammar of typed terms includes bindings of the form let x = t̂1 in t̂2, as well as bindings of the form let x = ΛA. v̂1 in t̂2, but not the general form let x = ΛA. t̂1 in t̂2.

The typing judgment for imperative weak-ML programs takes the form ∆ ⊢ t̂. Contrary to the traditional ML typing judgment, the weak-ML typing judgment does not involve a store typing. Such an oracle is not needed because all locations admit the constant type loc. The typing judgment for imperative weak-ML programs thus directly extends the typing judgment for pure programs that was presented earlier on (§4.3.3), with the exception of the typing rule for let-bindings, which is replaced by two rules so as to take into account the value restriction. The new rules appear next.

-----------
∆ ⊢ l^loc

∆ ⊢ t̂1^unit        ∆ ⊢ t̂2^T
--------------------------------
∆ ⊢ (t̂1^unit ; t̂2^T)^T

∆ ⊢ t̂1^T1        ∆, x : T1 ⊢ t̂2^T2
----------------------------------------
∆ ⊢ (let x = t̂1^T1 in t̂2^T2)^T2

∆, A ⊢ v̂1^T1        ∆, x : ∀A.T1 ⊢ t̂2^T2
---------------------------------------------
∆ ⊢ (let x = ΛA. v̂1^T1 in t̂2^T2)^T2

∆ ⊢ t̂1^bool        ∆ ⊢ t̂2^unit
------------------------------------
∆ ⊢ (while t̂1^bool do t̂2^unit)^unit

∆ ⊢ v̂1^int        ∆ ⊢ v̂2^int        ∆, i : int ⊢ t̂3^unit
--------------------------------------------------------------
∆ ⊢ (for i = v̂1^int to v̂2^int do t̂3^unit)^unit

Two additional typing definitions are explained next. First, all primitive functions admit the type func, just like all other functions. Second, the value null, which is a particular location, admits the type loc. Observe that, since applications are unconstrained in weak-ML, applications of primitive functions for manipulating references are also unconstrained. This is how weak-ML accommodates strong updates.

In Coq, I introduce an abstract type called Loc for describing locations. The weak-ML type loc is reflected as the Coq type Loc. Note that a source program
does not contain any location, so decoding locations is not needed for computing the characteristic formula of a program.

⟪loc⟫ ≡ Loc

∅ ≡ ∅
l →T V ≡ [l := (T, V)]
h1 + h2 ≡ h1 ∪ h2
h1 ⊥ h2 ≡ (dom(h1) ∩ dom(h2) = ∅)

Figure 5.2: Construction of heaps in terms of operations on finite maps
5.2 Specification of locations and heaps

5.2.1 Representation of heaps

I now explain how memory stores are described in the logic as values of type Heap. Whereas a memory store m maps locations to Caml values, a heap h maps locations to Coq values. Moreover, whereas a store describes the entire memory state, a heap may describe only a fragment of a memory state. Intuitively, a heap h is a Coq value that describes a piece of a memory store m if, for every location l from the domain of h, the value stored in h at location l is the Coq value that corresponds to the Caml value stored in m at location l. The connection between heaps and memory states is formalized later on, in Chapter 7. At this point, I only discuss the representation and specification of heaps.

The data type Loc represents locations. It is isomorphic to natural numbers. The data type Heap represents heaps. Heaps are represented as finite maps from locations to values of type Dyn, where Dyn is the type of pairs whose first component is a type T and whose second component is a value V of type T.

Heap ≡ Fmap Loc Dyn
Dyn ≡ Σ(T : Type). T

Note: in Coq, finite maps can be represented using logical functions. More precisely, the datatype Heap is defined as the set of functions of type Loc → option Dyn that return a value different from None only for a finite number of arguments. Technically, Heap ≡ {f : (Loc → option Dyn) | ∃(L : list Loc). ∀(x : Loc). f x ≠ None ⇒ x ∈ L}.
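To make this representation concrete, the following self-contained Coq sketch spells out one possible definition of Loc, Dyn and of finitely-supported functions; all names are illustrative and need not match CFML's actual definitions.

Require Import List.

(* Illustrative sketch of the heap representation described above. *)
Definition Loc := nat.

(* A [Dyn] packages a type together with a value of that type. *)
Record Dyn : Type := dyn { dyn_type : Type; dyn_val : dyn_type }.

(* A heap is a function from locations to optional dynamic values
   whose support (locations mapped to [Some _]) is finite. *)
Definition Heap : Type :=
  { f : Loc -> option Dyn |
    exists L : list Loc, forall x : Loc, f x <> None -> In x L }.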
Operations on heaps are defined in Figure 5.2 and explained next. The empty heap, written ∅, is a heap built on the empty map. Similarly, a singleton heap, written l →T V, is a heap built on a singleton map binding the location l to the Coq value V of type T. The union of two heaps, written h1 + h2, returns the union of the two underlying finite maps. We are only concerned with disjoint unions, so it does not matter how the union operator is defined for maps with overlapping domains. Finally, two heaps are said to be disjoint, written h1 ⊥ h2, when their underlying maps have disjoint domains. Note that the definition of heaps and of all predicates on heaps is entirely formalized in Coq.

[ ]       ≡ λh. h = ∅
[P]       ≡ λh. h = ∅ ∧ P            (where P is any proposition)
l ↪T V    ≡ λh. h = (l →T V)
H1 ∗ H2   ≡ λh. ∃h1 h2. (h1 ⊥ h2) ∧ h = h1 + h2 ∧ H1 h1 ∧ H2 h2
∃∃x. H    ≡ λh. ∃x. H h              (where x is bound in H)

Figure 5.3: Combinators for heap descriptions
5.2.2 Predicates on heaps

I now describe predicates for specifying heaps in Separation Logic style, closely following the definitions used in Ynot [17]. Heap predicates are simply predicates over values of type Heap. I use the letter H to range over heap predicates. For convenience, the type of such predicates is abbreviated as Hprop.

Hprop ≡ Heap → Prop

One major contribution of Separation Logic [77] is the separating conjunction (also called spatial conjunction). H1 ∗ H2 describes a heap made of two disjoint parts such that the first one satisfies H1 and the second one satisfies H2. Compared with expressing properties on heaps directly in terms of heap representations, H1 ∗ H2 concisely captures the disjointedness of the two heaps involved. Apart from the setting up of the core definitions and lemmas of the CFML library, I always work in terms of heap predicates and never refer to heap representations directly.

Heap combinators, which are defined in Coq, appear in Figure 5.3. Empty heaps are characterized by the predicate [ ]. A singleton heap binding a location l to a value V of type T is characterized by the predicate l ↪T V. Since the type T can be deduced from the value V, I often drop the type and write l ↪ V. The predicate H1 ∗ H2 holds of a disjoint union of a heap satisfying H1 and of a heap satisfying H2. In order to describe local invariants of data structures, propositions are lifted as heap predicates. More precisely, the predicate [P] holds of an empty heap only when the proposition P is true. Similarly, existential quantifiers are lifted: ∃∃x. H holds of a heap h if there exists a value x such that H holds of that heap.¹

¹ The formal definition for existentials properly handles binders. It actually takes the form hexists J, where J is a predicate on the value x. Formally:

hexists (A : Type) (J : A → Hprop) ≡ λ(h : Heap). ∃(x : A). J x h
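For readers who prefer to see these combinators as Coq definitions, here is a minimal self-contained sketch; the heap operations are abstracted as parameters with hypothetical names, and the singleton predicate is omitted since it requires the Dyn type.

(* Illustrative sketch of the combinators of Figure 5.3. *)
Parameter Heap : Type.
Parameter heap_empty : Heap.
Parameter heap_union : Heap -> Heap -> Heap.
Parameter heap_disjoint : Heap -> Heap -> Prop.

Definition Hprop := Heap -> Prop.

Definition hempty : Hprop :=                        (* written [ ]      *)
  fun h => h = heap_empty.

Definition hpure (P : Prop) : Hprop :=              (* written [P]      *)
  fun h => h = heap_empty /\ P.

Definition hstar (H1 H2 : Hprop) : Hprop :=         (* written H1 * H2  *)
  fun h => exists h1 h2,
    heap_disjoint h1 h2 /\ h = heap_union h1 h2 /\ H1 h1 /\ H2 h2.

Definition hexists (A : Type) (J : A -> Hprop) : Hprop :=
  fun h => exists x : A, J x h.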
I recall next the syntactic sugar introduced for post-conditions (§3.1). The post-condition # H describes a term that returns the unit value tt and produces a heap satisfying H. So, # H is a shortcut for λ_ : unit. H. The spatial conjunction of a post-condition Q with a heap satisfying H is written Q ⋆ H, and is defined as λx. Q x ∗ H.

Finally, I rely on an entailment relation, written H1 ▷ H2, to capture that any heap satisfying H1 also satisfies H2.

H1 ▷ H2 ≡ ∀h. H1 h ⇒ H2 h

I also define a corresponding entailment relation for post-conditions. The proposition Q1 ▶ Q2 asserts that for any output value x and any output heap h, if Q1 x h holds then Q2 x h also holds. Entailment between post-conditions can be formally defined in terms of entailment on heap descriptions, as follows.

Q1 ▶ Q2 ≡ ∀x. Q1 x ▷ Q2 x
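Expressed in Coq, and assuming the Hprop sketch given earlier (re-declared here so that the snippet stands alone), these two entailment relations are simply pointwise implications; the names himpl and qimpl are illustrative.

Parameter Heap : Type.
Definition Hprop := Heap -> Prop.

Definition himpl (H1 H2 : Hprop) : Prop :=                 (* H1 entails H2 *)
  forall h, H1 h -> H2 h.

Definition qimpl (B : Type) (Q1 Q2 : B -> Hprop) : Prop := (* Q1 entails Q2 *)
  forall x, himpl (Q1 x) (Q2 x).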
5.3 Local reasoning

In the introduction, I have suggested how to define a predicate called frame that applies to a characteristic formula and allows for applications of the frame rule while reasoning on that formula. In this section, I explain how to generalize the predicate frame into a predicate called local that also supports the rule of consequence as well as garbage collection. I then present elimination rules establishing that local F is a formula in which the frame rule, the rule of consequence and the rule of garbage collection can be applied an arbitrary number of times, in any order, before reasoning on the formula F itself.
5.3.1 Rules to be supported by the local predicate

The predicate local aims at simulating possible applications of the three following reasoning rules, which are presented using Hoare triples. In the frame rule, H ∗ H′ extends the pre-condition H with a heap satisfying H′ and Q ⋆ H′ symmetrically extends the post-condition Q with a heap satisfying H′. The rule for garbage collection allows discarding a piece of heap H′ from the pre-condition, as well as discarding a piece of heap H″ from the post-condition. The rule of consequence involves the entailment relation on heap predicates (▷) for strengthening the pre-condition and involves the entailment relation on post-conditions (▶) for weakening the post-condition.

{H} t {Q}
-------------------------------- (frame)
{H ∗ H′} t {Q ⋆ H′}

{H} t {Q ⋆ H″}
-------------------------------- (gc)
{H ∗ H′} t {Q}

H ▷ H′        {H′} t {Q′}        Q′ ▶ Q
------------------------------------------ (consequence)
{H} t {Q}

The first step towards the construction of the predicate local consists in combining the three rules into one. This is achieved through the rule shown next. In this rule, H and Q describe the outer pre- and post-condition, Hi and Qf describe the inner pre- and post-condition, Hk corresponds to the piece of heap being framed out, and Hg corresponds to the piece of heap being discarded. Here and throughout the rest of the thesis, i stands for initial, f for final, k for kept aside, and g for garbage.

H ▷ Hi ∗ Hk        {Hi} t {Qf}        Qf ⋆ Hk ▶ Q ⋆ Hg
--------------------------------------------------------- (combined)
{H} t {Q}

One can check that this combined rule simulates the three previous rules.²
5.3.2 Definition of the local predicate

The predicate local corresponds to the predicate-transformer presentation of the combined reasoning rule, in the sense that the proposition local F H Q holds if one can find instantiations of Hi, Hk, Hg and Qf such that the premises of the combined rule are satisfied, with the premise {Hi} t {Qf} being replaced with F Hi Qf. A first unsuccessful attempt at defining the predicate local is shown next, under the name local₀.

local₀ F ≡ λH Q. ∃Hi Hk Hg Qf. (H ▷ Hi ∗ Hk) ∧ (F Hi Qf) ∧ (Qf ⋆ Hk ▶ Q ⋆ Hg)

I explain soon afterwards why this definition is not expressive enough: it does not allow extracting existentials and propositions out of pre-conditions. For the time being, let me focus on explaining how to patch the definition of local₀ to obtain the correct definition of local. The idea is that we need to quantify over the input heap h that satisfies the pre-condition H before quantifying existentially over the variables Hi, Hk, Hg and Qf. So, the proposition local F H Q holds if, for any input heap h that satisfies the pre-condition H, the three following properties hold:

1. There exists a decomposition of the heap h as the disjoint union of a heap satisfying a predicate Hi and of a heap satisfying a predicate Hk.

2. The formula F holds of the pre-condition Hi and of some post-condition Qf.

3. The post-condition Qf ⋆ Hk entails the post-condition Q ⋆ Hg, for some predicate Hg.

The formal definition of the predicate local appears next. It applies to a formula F with a type of the form Hprop → (B → Hprop) → Prop, for some type B.² Recall that Hi is the initial heap, Hk is the heap being kept aside, Qf describes the final post-condition, and Hg is the garbage heap.

local F ≡ λH Q. ∀h. H h ⇒ ∃Hi Hk Hg Qf. ((Hi ∗ Hk) h) ∧ (F Hi Qf) ∧ (Qf ⋆ Hk ▶ Q ⋆ Hg)

Notice that the definition of local refers to a heap representation h. This heap representation never needs to be manipulated directly in proofs, as all the work can be conducted through the high-level elimination lemmas that are explained in the rest of this section.

Remark: the definition of the predicate local shows some similarities with the definition of the STsep monad from Hoare Type Theory [61], in the sense that both aim at baking the Separation Logic frame condition into a system defined in terms of heaps describing the whole memory.

² For the frame rule, instantiate Hg as the empty heap predicate. For the rule of consequence, instantiate both Hk and Hg as the empty heap predicate. Finally, for the rule of garbage collection, instantiate Hi as H, Hk as H′, Qf as Q ⋆ H″, and Hg as H′ ∗ H″.
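A minimal self-contained Coq sketch of this transformer is shown below; the heap structure and the star are abstracted, and the names hstar, starpost and qimpl are illustrative rather than CFML's.

(* Sketch of the [local] predicate transformer defined above. *)
Parameter Heap : Type.
Definition Hprop := Heap -> Prop.
Parameter hstar : Hprop -> Hprop -> Hprop.      (* separating conjunction *)

Definition starpost (B : Type) (Q : B -> Hprop) (H : Hprop) : B -> Hprop :=
  fun x => hstar (Q x) H.

Definition qimpl (B : Type) (Q1 Q2 : B -> Hprop) : Prop :=
  forall x h, Q1 x h -> Q2 x h.

Definition local (B : Type) (F : Hprop -> (B -> Hprop) -> Prop)
    : Hprop -> (B -> Hprop) -> Prop :=
  fun H Q => forall h, H h ->
    exists Hi Hk Hg Qf,
      hstar Hi Hk h /\
      F Hi Qf /\
      qimpl B (starpost B Qf Hk) (starpost B Q Hg).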
5.3.3 Properties of local formulae

The first useful property about the predicate transformer local is that it may be ignored during reasoning. More precisely, if the goal is to prove local F H Q, then it suffices to prove F H Q. (To check this implication, instantiate Hi as H, Qf as Q, and Hk and Hg as the empty heap predicate.) Formally:

∀H Q. F H Q ⇒ local F H Q

Another key property of local is its idempotence: iterated applications of local are redundant. In other words, a proposition of the form local (local F) H Q is always equivalent to the proposition local F H Q. With predicate extensionality, the idempotence property can be stated very concisely, as follows.

∀F. local F = local (local F)

It remains to explain how to exploit the predicate local in reasoning on characteristic formulae. Let me start with the case of the frame rule. The following statement is a direct consequence of the definition of the predicate local.

F H1 Q1 ⇒ local F (H1 ∗ H2) (Q1 ⋆ H2)

Yet, applying this lemma would result in consuming the occurrence of local at the head of the formula, preventing us from subsequently applying other rules. Fortunately, thanks to the idempotence of the predicate local, it is possible to derive a lemma where the local modifier is preserved.

local F H1 Q1 ⇒ local F (H1 ∗ H2) (Q1 ⋆ H2)

In practice, I have found it more convenient with respect to tactics to reformulate the lemma in the following form.

is_local F ∧ F H1 Q1 ⇒ F (H1 ∗ H2) (Q1 ⋆ H2)

where the proposition is_local F captures the fact that the formula F is equivalent to local F. Formally,

is_local F ≡ (F = local F)

A formula that satisfies the predicate is_local is called a local formula. Observe that, due to the idempotence of local, any proposition of the form local F is a local formula. This result is formally stated as follows.

∀F. is_local (local F)

To summarize, I have just established that any local formula supports applications of the frame rule. Similarly, I have proved in Coq that local formulae support applications of the rule of consequence and of the rule of garbage collection. For the sake of readability, those results are presented as inference rules.
is_local F        F H Q
-------------------------------- (frame)
F (H ∗ H′) (Q ⋆ H′)

is_local F        F H (Q ⋆ H″)
-------------------------------- (gc)
F (H ∗ H′) Q

is_local F        H ▷ H′        F H′ Q′        Q′ ▶ Q
-------------------------------------------------------- (consequence)
F H Q

More generally, local formulae admit the following general elimination rule.

is_local F        H ▷ Hi ∗ Hk        F Hi Qf        Qf ⋆ Hk ▶ Q ⋆ Hg
----------------------------------------------------------------------- (combined)
F H Q
5.3.4 Extraction of invariants from pre-conditions

Another crucial property of local formulae is the ability to extract propositions and existentially-quantified variables from pre-conditions. To understand when such extractions are involved, consider the expression let a = ref 2 in get a, and assume we have specified the term ref 2 by saying that it returns a fresh location pointing towards an even integer, that is, through the post-condition λa. ∃∃n. a ↪ n ∗ [even n]. Now, to reason on the term get a, we need to prove the following proposition, where Q is some post-condition.

∀a. ⟦get a⟧ (∃∃n. a ↪ n ∗ [even n]) Q

In order to read at the location a, we need a pre-condition of the form a ↪ n. So, we need to extract the existential quantification on n and the invariant even n, so as to change the goal to:

∀a. ∀n. even n ⇒ ⟦get a⟧ (a ↪ n) Q

Extrusion of the existentially-quantified variables and of propositions is precisely the purpose of the two following extraction lemmas³, which are derivable from the definition of the predicate local. Similar extraction lemmas have appeared in previous work on Separation Logic (e.g., [2]).

is_local F        P ⇒ F H Q
------------------------------ (extract-prop)
F ([P] ∗ H) Q

is_local F        ∀x. F (H′ ∗ H) Q
------------------------------------ (extract-exists)
F ((∃∃x. H′) ∗ H) Q

The flawed predicate local₀ defined earlier on does not involve a quantification on the heap representation h that satisfies the pre-condition H. This predicate local₀ allows deriving the frame rule, the rule of consequence and the rule of garbage collection; however, it does not allow proving an extraction result such as extract-exists. Intuitively, there is a problem related to the commutation of an existential quantifier with a universal quantifier. In the definition of local₀, the universal quantification on the input heap h is implicitly contained in the proposition H ▷ Hi ∗ Hk, so the quantification on h comes after the existential quantification on Hi. On the contrary, in the definition of local, the universal quantification on h comes before the existential quantification on Hi.

³ The Coq statement of the rule extract-exists is as follows, where the definition of hexists is that given in a footnote in §5.2.2:
is_local F ⇒ (∀x. F ((J x) ∗ H) Q) ⇒ F ((hexists J) ∗ H) Q

5.4 Specification of imperative functions

I now focus on the specification of functions, defining the predicates AppReturns and Spec, and then generalizing those predicates to curried n-ary functions. Note: through the rest of this chapter, I write Coq values in lowercase, and no longer in uppercase, for the sake of readability.

5.4.1 Definition of the predicates AppReturns and Spec

In an imperative setting, the evaluation predicate takes the form AppEval f v h v′ h′, asserting that the application of a Caml function whose decoding is f to a value whose decoding is v, in a store represented as h, terminates and returns a value decoded as v′ in a store represented as h′. Note that the arguments h and h′ here describe the entire memory store in which the evaluation of the application of f to v takes place, and not just the piece of memory involved for reasoning on that application. Here again, AppEval is an axiom upon which the CFML library is built. Its type is as follows.

AppEval : ∀A B. Func → A → Heap → B → Heap → Prop

The definition of AppReturns₁ in terms of AppEval is slightly more involved for imperative programs than for purely-functional ones, mainly because of the need to quantify over the piece of heap that is framed out during the reasoning on a function application, and of the need to take into account the fact that pieces of heap might be discarded after the execution of a function. Recall the type of AppReturns₁, which is as follows.

AppReturns₁ : ∀A B. Func → A → Hprop → (B → Hprop) → Prop

The proposition AppReturns₁ f v H Q states that, if the input heap can be decomposed as the disjoint union of a heap hi that satisfies the pre-condition H and of another heap hk, then the application of f to v returns a value v′ in an output heap that can be decomposed into three disjoint parts hf, hk and hg such that Q v′ hf holds. The heap hk describes the piece of store that is being framed out and the heap hg describes the piece of heap being discarded. In the formal definition that appears next, hf ⊥ hk ⊥ hg denotes the pairwise disjointness of the three heaps hf, hk and hg.

AppReturns₁ f v H Q ≡
    ∀hi hk. (hi ⊥ hk) ∧ (H hi) ⇒
        ∃v′ hf hg. (hf ⊥ hk ⊥ hg) ∧ AppEval f v (hi + hk) v′ (hf + hk + hg) ∧ (Q v′ hf)

Note that this definition is formalized in Coq, since only the predicate AppEval is taken as an axiom in the CFML library.

A central result is that the predicate AppReturns₁ f v is a local formula, for any f and v.

∀f v. is_local (AppReturns₁ f v)

In particular, this result implies that AppReturns₁ is compatible with the frame rule, in the sense that the proposition AppReturns₁ f v H1 Q1 implies the proposition AppReturns₁ f v (H1 ∗ H2) (Q1 ⋆ H2).

The definition of Spec₁ in terms of AppReturns₁ is very similar to the one involved in a purely-functional setting.

Spec₁ f K ≡ is_spec₁ K ∧ ∀x. K x (AppReturns₁ f x)
The main dierence is the type of specications. Here, the specication
form
A → B → Prop
Ü
K
takes the
, where the shortand Ü is dened as follows.
B
Ü
B
≡
Hprop → (B → Hprop) → Prop
is_spec1 K asserts that, for any argument x, the predicate K x is coCovariance is captured by a generalized version of the predicate Weakenable,
The denition of
variant.
shown next.
Weakenable J
≡
∀R R0 . J R ⇒ (∀HQ. R H Q ⇒ R0 H Q) ⇒ J R0
98
CHAPTER 5.
GENERALIZATION TO IMPERATIVE PROGRAMS
5.4.2 Treatment of n-ary applications
The next step consists in dening
AppReturnsn
and
Specn .
As suggested earlier
on, the treatment of curried functions in an imperative setting is trickier than in a
purely-functional setting because every partial application might induce a side-eect.
Consider a program that contains an application of the form f
x y .
From
only looking at this piece of code, we do not know whether the evaluation of the
partial application f
do so.
x
modies the store.
So, we have to assume that it might
One way to deal with curried applications is to name every intermediate
result with a let-binding during the normalization process, e.g., changing f
let g = f x in g y .
x y
into
However, this approach would not be practical at all. Instead,
I wanted to preserve the nice and simple rule that was devised for pure programs,
where
Jf x1 . . . xn K
is dened as AppReturnsn f x1 . . . xn .
A simple and eective way to nd appropriate denition of
imperative setting is to exploit the fact that a term f
behavior as the term let g = f x in g y .
x y
AppReturnsn
in an
admits exactly the same
Indeed, since characteristic formulae are
sound and complete descriptions of program behaviors, the characteristic formulae
of two terms that admit the same behavior must be logically equivalent.
So, the
AppReturns2 f x y , should be logically
equivalent to the characteristic formula of let g = f x in g y .
The characteristic formula of let g = f x in g y , shown next, is a local formula
characteristic formula of f
x y ,
which is that states that one needs to nd an intermediate post-condition
tion of
f
to
x
Q0
for the applica-
0
such that, for any g , the predicate Q g is an appropriate pre-condition
for the application of
g
to
y.
‚
local
¨
0
λHQ. ∃Q .
Since a predicate of the form local (AppReturns1 f x) H Q0
∀g. local (AppReturns1 g y) (Q0 g) Q
AppReturns1 f v Œ
is already a local formula, it does not
change the meaning of the above formula to remove the applications of the predicate
local
AppReturns1 .
AppReturns2 f x y .
that occur in front of
for the predicate What then remains is a suitable denition
This analysis suggests the following formal denition for
AppReturns
f x1 . . . xn ¨
≡
‚n
local
λHQ.
∃Q0 .
AppReturnsn .
AppReturns1 f x1 H Q0
∀g. AppReturnsn−1 g x2 . . . xn (Q0 g) Q
Note that, by construction, a formula of the form
Œ
AppReturnsn f v1 . . . vn
is always
a local formula.
5.4.3 Specication of n-ary functions
The predicate
Specn
tically of the form
captures the specication of n-ary functions that are syntac-
λx1 . . . xn . t.
Such functions do not perform any side eects on
partial applications. Remark: a function of
n
arguments that is not syntactically
5.4.
SPECIFICATION OF IMPERATIVE FUNCTIONS
99
is_spec1 K
is_specn K
≡
≡
∀x. Weakenable (K x)
∀x. is_specn−1 (K x)
Spec1 f K
Specn f K
≡
≡
is_spec1 K ∧ ∀x. K x (AppReturns1 f x)
is_specn K ∧ ∀x. AppPure f x (λg. Specn−1 g (K x))
In the gure,
n>1
and
(f : Func)
and
Ü → Prop).
(K : A1 → . . . An → B
Figure 5.4: Formal denition of the imperative version of
of the form
λx1 . . . xn . t
can be specied in terms of
Specn .
specied using the predicate
Specn
AppReturnsn
b u t cannot be
f to an argument x is pure, we could
∀x. AppReturns1 f x [ ] (λg. [Spec1 g (K x)]). However,
To express that the application of a function
try to dene
Spec2 f K
as
this denition does not work out because it does not allow proving the introduction
lemma and the elimination lemma for
Specn , which capture the following equivalence
(side conditions are omitted).
Specn f K
€
Š
∀x1 . . . xn . K x1 . . . xn (AppReturnsn f x1 . . . xn )
⇐⇒
To prove this result, we would need to know that the partial application of a function
f
to an argument
x
always returns the same function
AppReturns1 f x [ ] (λg. [P g])
g.
Yet, a predicate form
does not capture this property, because the exact ad-
dresses of the locations allocated during the evaluation of the application of
f
to
x
may depend on the addresses allocated in the piece of heap that has been framed
out.
4 Intuitively, the problem is that the proposition
AppReturns1 f x [ ] (λg. [P g])
does not disallow side-eects, whereas the development of introduction and elimination lemmas for curried n-ary functions requires the knowledge that partial applications do not involve side-eects. To work around this problem, I introduce a more
precise predicate, called
AppPure,
for describing applications that are completely
pure.
The proposition
value
v0
AppPure f v P
satisfying the predicate
asserts that the application of
P,
f
to
v
returns a
without reading, writing, nor allocating in the
AppPure is dened in terms of AppEval, using a proposition of
AppEval f v h v 0 h, where the output heap h is exactly the same as the
store. The predicate
the form input heap
h.
AppPure f v P
4
∃v 0 . P v 0 ∧ (∀h. AppEval f v h v 0 h)
v0
produced by the evaluation of f v does not depend on the
h, which is universally-quantied after the existential quantication of v 0 .
Note that the result
input heap
≡
To illustrate the argument, consider the function
λx. let l =
ref 3 in λy. l,
which allocates a
memory cell and then returns a constant function that always returns the allocated location. This
function satises the property
AppReturns1 f x [ ] (λg. [Spec1 g (λy R. True)])
However, two successive applications of
f
to
x
for any argument
return two dierent functions.
x.
100
CHAPTER 5.
The predicate
Spec2
GENERALIZATION TO IMPERATIVE PROGRAMS
can now be dened in terms of
pared with the denition of
Spec2
ference is the replacement of the predicate
Spec2 f K
AppPure
and
Spec1 .
Com-
from the purely-functional setting, the only dif-
AppReturns
with the predicate
AppPure.
is_spec2 K ∧ ∀x. AppPure f x (λg. Spec1 g (K x))
≡
The general denitions of
Specn
and of
is_specn
appear in Figure 5.4.
5.5 Characteristic formulae for imperative programs
5.5.1 Construction of characteristic formulae
The algorithm for constructing characteristic formulae for imperative programs appears in Figure 5.5, and are explained next. For the sake of clarity, contexts and
decoding of values are left implicit. I have set up a notation layer for pretty-printing
characteristic formulae for imperative programs, in a very similar way as described
in the previous chapter for purely-functional programs. I omit the details here.
As explained in the introduction, an application of the predicate
duced at every node of a characteristic formula. A value
H
Q
and a post-condition
predicate
H,
v
local
is intro-
admits a pre-condition
if the current heap, which by assumption satises the
also satises the predicate
Q v.
This entailment is written
H B Q v.
AppReturnsn .
AppReturnsn is redundant, yet I leave it
treatment of crash, of conditionals and of function
The application of a n-ary function is described through the predicate
Here, the application of
local
in front of
for the sake of uniformity. The
denitions is quite similar to the treatment given for the purely-functional setting.
In short, it suces to replace occurrences of a post-condition
H
and a post-condition
Q.
P
with a pre-condition
In practice, it is useful to consider a direct construction
for the characteristic formulae of terms of the form if t0 then t1 else t2 .
Jif t0 then t1 else t2 K ≡
local (λHQ. ∃Q0 . Jt0 K H Q0 ∧ Jt1 K (Q0 true) Q ∧ Jt2 K (Q0 false) Q)
One can prove that this direct formula is logically equivalent to the characteristic
formula of the term let x = t0 in if x then t1 else t2 .
The treatment of let-bindings has already been described in the introduction.
let x = t1 in t2 ,
Q0
of t1 is quantied existentially,
0
and then Q x describes the pre-condition for t2 . Sequences are a particular case of
In a term the post-condition
let-bindings, where the result of
t1
is of type unit and thus need not be named.
For a polymorphic let-binding, the bound term must be a syntactic value, due to
the value restriction. Consider the typed term let x = ŵ1 in t̂2 , where ŵ1
stands for
a possibly-polymorphic value. The construction of the corresponding characteristic
formula is formally described as follows.
Jlet x = ŵ1 in t̂2 KΓ
If
S
VSW,
and
local (λHQ. ∀X. X = dŵ1 e ⇒ Jt̂2 K(Γ,x7→X) H Q)
ŵ1 , then the formula universally quanties over a value X of
provides the assumption X = dŵ1 e, which asserts that the logical
denotes the type of
type
≡
5.5.
CHARACTERISTIC FORMULAE FOR IMPERATIVE PROGRAMS
JvK ≡
local (λHQ. H B Q v)
Jf v1 . . . vn K ≡
local (λHQ. AppReturnsn f v1 . . . vn H Q)
JcrashK ≡
local (λHQ. False)
Jif v then t1 else t2 K ≡
local (λHQ. (v = true ⇒ Jt1 K H Q) ∧ (v = false ⇒ Jt2 K H Q))
Jlet x = t1 in t2 K ≡
local (λHQ. ∃Q0 . Jt1 K H Q0 ∧ ∀x. Jt2 K (Q0 x) Q)
Jt1 ; t2 K ≡
local (λHQ. ∃Q0 . Jt1 K H Q0 ∧ Jt2 K (Q0 tt) Q)
Jlet x = w1 in t2 K ≡
local (λHQ. ∀x. x = w1 ⇒ Jt2 K H Q)
Jlet rec f = λx1 . . . xn . t1 in t2 K ≡
local (λHQ. ∀f. H ⇒ Jt2 K H Q)
with
H ≡ ∀K. is_specn K ∧ (∀x1 . . . xn . K x1 . . . xn Jt1 K) ⇒ Specn f K
Jwhile t1 do t2 K ≡
local (λHQ. ∀R. is_local R ∧ H ⇒ R H Q)
with
H ≡ ∀H 0 Q0 . Jif t1 then (t2 ; |R|) else ttK H 0 Q0 ⇒ R H 0 Q0
Jfor i = a to b do tK ≡
local (λHQ. ∀S. is_local1 S ∧ H ⇒ S a H Q)
with
H ≡ ∀iH 0 Q0 . Jif i ≤ b then (t ; |S (i + 1)|) else ttK H 0 Q0 ⇒ S i H 0 Q0
Figure 5.5: Characteristic formula generator for imperative programs
101
102
CHAPTER 5.
variable
X
GENERALIZATION TO IMPERATIVE PROGRAMS
corresponds to the program value
the equality X
= dŵ1 e
ŵ1 .
When
S
is a polymorphic type,
relates two functions from the logic (recall that I assume
the target logic to include functional extensionality). For example, if
nil
(the empty Caml list), then
X
and
Nil
admit the Coq type
X is equal
∀A. listA.
to
Nil
ŵ1
is the value
(the empty Coq list), where both
The characteristic formulae of while-loops and of for-loops are stated using an
|R|, used to refer to Coq predicates inside the computation of the characteristic formula of a term. This operator is such that J |R| K
is equal to R. Recall that is_local R asserts that R is a local predicate and that
is_local1 S asserts that S i is a local predicate for every argument i.
anti-quotation operator, written
Primitive functions for manipulating references have been explained in Chapter 3. I recall their specication, and give the specication of the function
cmp that
compares two pointers.
∀A. Spec1 ref (λv R.
∀A. Spec1 get (λl R.
∀A. Spec2 set (λl v R.
Spec2 cmp (λl l0 R.
R [ ] (λl. l ,→A v))
∀v. R (l ,→A v) (\= v ? (l ,→A v)))
∀v 0 . R (l ,→A v 0 ) (# (l ,→A v)))
R [ ] (λb. [b = true ⇔ l = l0 ]))
5.5.2 Generated axioms for top-level denitions
The current implementation only supports top-level denitions of values and functions. It does not yet support general top-level declaration of the form let x = t,
but this should be added soon. For the time being, CFML can be used to verify
imperative functions and, in particular, imperative data structures.
5.6 Extensions
The treatment of mutually-recursive functions and of pattern-matching developed
in the previous chapter can be immediately adapted to an imperative setting.
suces to replace every application of a formula to a post-condition
application of the same formula to a pre-condition
H
P
It
with the
and to a post-condition
Q.
However, the treatment of assertions in an imperative setting is dierent because
assertions might perform side eects. Some systems require assertions not to perform
any side eects, nevertheless it can be useful to allow assertions to use local eects,
i.e. eects that are used to evaluate the assertion but that are not observable by
the rest of the program. In this section, I describe a treatment of assertions that
ensures that programs remain correct regardless of whether assertions are executed
or not. I then explain how to support null pointers and strong updates.
5.6.1 Assertions
In an imperative setting, an assertion expression takes the form the boolean value
true,
then the term assert t
assert t.
If
t returns
returns the unit value. Otherwise,
5.6.
if
t
EXTENSIONS
returns
false,
103
then the program crashes.
In general, assertions are allowed to
access and even modify the store, as asserted by the reduction rule for assertions.
t/m ⇓ true/m0
(assert t)/m ⇓ tt /m0
Let me rst give a characteristic formula that corresponds closely to this reduction rule.
assert t.
Let
H
be a pre-condition and
The term
t
Q
be a post-condition for the term
H . It must return a value
equal to the boolean true, that is, satisfying the predicate (= true). The output heap
should satisfy the predicate Q tt . So, the post-condition for t is \= true ? (Q tt). To
is evaluated in a heap satisfying
summarize, the characteristic formulae associated with an assertion may be built as
follows.
Jassert tK
≡
λHQ. JtK H (\= true ? (Q tt))
The above formula only asserts that the program is correct when assertions are
executed.
Yet, most programmers expect their code to be correct regardless of
whether assertions are executed or not. I next explain how to build a characteristic
formula for assertions that properly captures the irrelevance of assertions, while still
allowing assertions to access and modify the store, as this possibility can be useful
in general.
To that end, I impose that the execution of the assertion assert t
produces
H
t is \= true?H . The
post-condition Q of the term assert t describes the same heap as H , so the predicate
H should entail the predicate Q tt . The appropriate characteristic formula for
an output heap that satises the same invariant as the input heap it receives. If
denotes the pre-condition of the term t, then the post-condition of
assertions is thus as follows.
Jassert tK
≡
λHQ. JtK H (\= true ? H) ∧ H B Q tt
With such a characteristic formula, although an assertion cannot break the invariants satised by the heap, it may modify values stored in the heap. For example,
the expression assert (set x 4 ; true)
updates the heap at location
not break an invariant asserting that the location
∃
∃n. x
,→ n ∗ [even n].
x
x,
but it does
contains an even integer, e.g.
So, technically, the nal result computed by a program may
depend on whether assertions are executed or not. However, this result always satises the specication of the program, regardless of whether assertions are executed
or not.
5.6.2 Null pointers and strong updates
Motivation The ML type system guarantees type soundness:
a well-typed pro-
gram can never crash or get stuck. In particular, if a program involves a location
type
ref τ ,
then the store indeed contains a value of type
τ
l of
at location l. To achieve
this soundness result, ML gives up on at least two popular features from C-like languages: null pointers and strong updates. Null pointers are typically used to encode
104
CHAPTER 5.
GENERALIZATION TO IMPERATIVE PROGRAMS
empty data structures. In Caml, an explicit option type is generally used to translate C programs using null pointers, and this encoding has a small but measurable
cost in terms of execution speed and memory consumption. Strong updates allow
reusing a same memory location at several types, which is again a useful feature for
saving some memory space. Strong updates are also convenient for initializing cyclic
data structures. In this section, I explain how to recover null pointers and strong
update in Caml while still being able to prove programs correct using characteristic
formulae.
Null pointers
To support null pointers, I introduce a constant location
0.
programming language, implemented as the memory address
the logic a corresponding constant of type
for locations is such that
dnulle
is equal to
Loc, called Null.
Null.
A singleton heap predicate of the form l
,→T V null in the
I also introduce in
The decoding operation
should imply that
l
is not a
null pointer. To that end, I update the denition of the singleton heap predicate
with an assumption
l 6= Null,
as follows.
λh. h = (l →T V ) ∧ l 6= Null
≡
l ,→T V
With this new denition, the specication of
location
l
being returned by a call to
ref
ref,
recalled next, ensures that the
is distinct from the null location.
∀A. Spec1 ref (λv R. R [ ] (λl. l ,→A v))
The pointer comparison function
cmp
can be used to test at runtime whether a
given pointer is null. For example, one can dene a function
location
l
and returns a boolean
location. The specication of
b
is_null
that expects a
that is true if and only if the location
is_null
l
is the null
appears below.
Spec1 is_null (λl R. R [ ] (λb. [b = true ⇔ l = Null]))
Strong updates
Characteristic formulae accommodate strong updates quite nat-
urally because the type of memory cells is not carried by the type of pointers, since
all pointers admit the constant type
loc
in weak-ML. Instead, the type of the con-
tents of a memory cell appears in a heap predicate of the form
that the location
type
T.
l
l ,→T V , which asserts
V of
contains a Caml value that corresponds to the Coq value
Remark: earlier work on the logic of Bunched Implication [42], a precursor
of Separation Logic [77], has pointed out the fact that working with heap predicates
allows reasoning on strong updates [9]. This possibility was exploited in Hoare Type
Theory [61], which builds upon Separation Logic.
To support strong updates, it therefore suces to generalize the specication of
the primitive function
set,
allowing the type
A
of the argument to dier from the
0
type A of the previous contents of the location. This generalized specication is
formally stated as follows.
∀A. Spec2 set (λ l v R. ∀A0 . ∀(v 0 : A0 ). R (l ,→A0 v 0 ) (# (l ,→A v)))
5.7.
ADDITIONAL TACTICS FOR THE IMPERATIVE SETTING
105
When the type of memory cells is allowed to evolve through the execution of a
program, it is generally useful to be able to cast a pointer from a type to another. A
subtyping rule of the form ref τ ≤ ref τ 0 would here make sense. This subtyping rule
is obviously unsound in ML, but it does not break the soundness of characteristic
formulae because both the type
ref τ
and the type
ref τ 0
are reected in the logic
Loc. To encode the subtyping rule in Caml, I introduce a coercion
cast, of type ref τ → ref τ 0 , which behaves like the identity function.
Applications of the function cast are eliminated on-the-y by the CFML generator,
as the type
function called
immediately after type-checking.
Implementation in Caml
Null references and strong references can be imple-
mented in Caml with the help of the primitive function
'b.
Obj.magic,
of type
'a ->
This function allows fooling the type system by arbitrarily changing the type of
expressions. Remark: this encoding of strong references and null pointers in Caml
is a standard trick.
Figure 5.6 contains the signature and the implementation of a library for Cstyle manipulation of pointers.
sset
The module includes functions
sref
and
sget
and
for manipulating a Caml reference at a dierent type than that carried by the
pointer. It also includes the function
cmp
for comparing pointers of dierent types,
cast for changing the type of a pointer, the constant null which denotes
is_null which is a specialization of the comparison
cmp for testing whether a given pointer is null.
the function
the null reference, and the function
function
Remark: it is also possible to build a library where all pointers admit a constant
sref.
∀A. A → sref and
type called
For example, in this setting, the function
the function
get
admits the type
ref
∀A. sref → A.
admits the type
This alternative
presentation can be more convenient in some particular developments, however it
seems that, in general, the use of a constant type for pointers requires a greater
number of type annotation in source code than the use of type-carrying pointers.
To conclude, characteristic formulae allow for a safe and practical integration of
advanced pointer manipulations in a high-level programming language like Caml.
5.7 Additional tactics for the imperative setting
In this section, I describe the tactics for manipulating characteristic formulae that are
specic to the imperative setting. I start with a core tactic that helps proving heap
entailment relations. I then explain how the frame rule is automatically applied by
the tactic that handles reasoning about applications. Finally, I give a brief overview
of the other tactics involved.
5.7.1 Tactic for heap entailment
The tactic
hsimpl helps proving goals of the form H1 B H2 .
It works modulo associa-
tivity and commutativity of the separating conjunction, and it is able to instantiate
106
CHAPTER 5.
GENERALIZATION TO IMPERATIVE PROGRAMS
module
val
val
val
val
val
val
val
end
type PointerSig = sig
sref : 'b -> 'a ref
sget : 'a ref -> 'b
sset : 'a ref -> 'b -> unit
cmp : 'a ref -> 'b ref -> bool
cast : 'a ref -> 'b ref
null : 'a ref
is_null : 'a ref -> bool
module
let
let
let
let
let
let
let
end
Pointer : PointerSig = struct
sref x = magic (ref x)
sget p = !(magic p)
sset p x = (magic p) := x
cmp p1 p2 = ((magic p1) == p2)
cast p = magic p
null = magic (ref ())
is_null p = cmp (magic null) p
Figure 5.6: Signature and implementation of advanced pointer manipulations
the existential quantiers occurring in
H2
by exploiting information available in
For example, consider the following goal, in which
H1 .
?V and ?H denote Coq unication
variables.
(x > T X) \* (l > Mlist T L) \* (h > y)
==> (Hexists L', l > Mlist L') \* (h > ?V) \* [y = ?V] \* (?H)
hsimpl solves this goal as follows. First, it introduces a Coq unication
variable ?L' in place of the existentially-quantied variable L'. (Alternatively, one
may explicitly provide an explicit witness for L' as argument of hsimpl.) Second,
The tactic
the tactic tries to cancel out heap predicates from the left-hand side with those from
the right-hand side. This process unies
proposition
[y = ?V]
?L'
with
L
?V with y.
?V has been
and
is extracted as a subgoal. Since
The embedded
instantiated as
y, the subgoal is trivial to prove. After simplication, the remaining goal is x > T X
==> ?H. At this point, ?H is unied with the predicate x > T X, and the goal is solved.
The tactic hsimpl expects the right hand-side of the goal to be free of existentials
and of embedded propositions. The purpose of the tactic hextract is to set the goal
in that form, by pulling existential quantiers and embedded propositions out of the
goal and putting them in the context. Documentation and examples can be found
in the Coq development for additional details.
5.7.
ADDITIONAL TACTICS FOR THE IMPERATIVE SETTING
107
5.7.2 Automated application of the frame rule
xapp
The tactic
enables one to exploit the specication of a function for reasoning
on an application of that function. It automatically applies the frame rule on the
heap predicates that are not involved in the reasoning on the function application.
The implementation of the tactic relies on a corollary of the elimination lemma for
the predicate
¨
Spec,
shown next.
Spec f K
∀R. is_local R ⇒ K x R ⇒ R H Q
Let me illustrate the working of
the function
the value
3
incr
to a pointer
x,
xapp
AppReturns f x H Q
⇒
on an example. Consider an application of
under a pre-condition asserting that
in memory and that another pointer
y
x
is bound to
is bound to the value
that the post-condition is a unication variable called
7.
Assume
?Q. Note that post-conditions
are generally reduced to a unication variable because we try to infer as much
information as possible. The goal then takes the following form.
AppReturns incr x (x > 3 \* y > 7) ?Q
I am going to explain in detail how the tactic
the function
condition
?Q
incr
exploits the specication of
for discharging the goal and in the same time infer that the post-
should be instantiated as #
specication of
xapp
incr
(x > 4) \* (y > 7). Recall the
Spec. It is stated next (not
expressed in terms of the predicate
using the notation associated with
Spec).
spec_1 incr (fun r R => forall n, R (r > n) (# r > n+1))
Applying the lemma mentioned earlier on to that specication leaves:
forall R, is_local R ->
(forall n, R (x > n) (# x > n+1)) ->
R (x > 3 \* y > 7) ?Q
At this point, the tactic
(# x > n+1)
xapp
instantiates the hypothesis forall
with unication variables. Let
able that instantiaties
n.
?N
n, R (x > n)
denote the Coq unication vari-
The hypothesis becomes as follows.
R (x > ?N) (# x > ?N+1)
The tactic
xapp
then exploits the hypothesis is_local
R
by applying the frame
rule (technically, the frame rule combined with the rule of consequence) to the
conclusion R
1)
2)
3)
(x > 3 \* y > 7) ?Q.
This application leaves three subgoals.
R ?H1 ?Q1
x > 3 \* y > 7 ==> ?H1 \* ?H2
?Q1 \*+ ?H2 ==> ?Q
108
CHAPTER 5.
GENERALIZATION TO IMPERATIVE PROGRAMS
The rst goal is solved using the hypothesis R
stantiating
2)
3)
?H1
as
(x > ?N)
and
?Q1
(x > ?N) (# x > ?N+1), in(# x > ?N+1). Two subgoals remain.
as
(x > 3) \* (y > 7) ==> (x > ?N) \* ?H2
(# x > ?N+1) \*+ ?H2 ==> ?Q
hsimpl is invoked on goal (2). It unies ?N with the value 3 and ?H2
(y > 7). Calling hsimpl on goal (3) then unies the post-condition ?Q with
(# x > 3+1) \*+ (y > 7), which is equivalent to the expected instantiation
of ?Q.
To summarize, the tactic xapp is able to exploit a known specication as well
The tactic
with
as the information available in the pre-condition for inferring the post-condition of
a function application.
In particular, the application of the frame rule is entirely
automated. In the example presented above, the instantiation of the ghost variable
n
could be inferred.
explicitly provided.
However, there are cases where ghost variables need to be
So, the tactic
xapp
accepts a list of arguments that can be
exploited for instantiating ghost variables.
5.7.3 Other tactics specic to the imperative setting
F H Q, where F is
a predicate that satises the predicate is_local, and where H and Q denote a pre- and
a post-condition, respectively. The predicate F is typically a characteristic formula,
All the tactics described in this section apply to a goal of the form
but it may also be a universally-quantied predicate coming from the characteristic
iter.
xseq is a specialized version of xlet for reasoning on sequences.
The tactics xfor and xwhile are used to reason on loops using the characteristic
formulae that support local reasoning. The tactics xfor_inv and xwhile_inv apply
formula of a loop or from the specication of a function such as
The tactic
a lemma for replacing the general form of the characteristic formula for a loop with
a specialized characteristic formula based on an invariant. The loop invariant to be
used may be directly provided as argument to those tactics.
The tactic
xextract
pulls the existential quantiers and the embedded propo-
sitions out of a pre-condition.
The tactic
xframe
allows applying the frame rule
manually, which might be useful for example to reason on a local let-binding. The
tactic
xchange helps applying a focus or an unfocus lemma.
The tactic takes as argu-
ment a partially-instantiated lemma whose conclusion is a heap entailment relation,
and exploits it to rewrite the pre-condition. The tactic
similar task on the post-condition.
The tactic
performs a
xgc enables discarding some
xgc_post enables discarding
predicates from the pre-condition, and the tactic
predicates from the post-condition.
xchange_post
heap
heap
Chapter 6
Soundness and completeness
In this chapter, I prove that characteristic formulae for pure programs
are sound and complete. The soundness theorem states that if one can
t holds of a post-condition
t terminates and returns a value that satises P . Char-
prove that the characteristic formula of a term
P
then the term
acteristic formulae are built from the source code in a compositional way
and that they can even be displayed in a way that closely resemble source
code, thus, intuitively, proving the soundness of a characteristic formula
with respect to the source code it describes should be relatively direct.
The syntactic soundness proof that I give is indeed quite simple.
The completeness theorem states that if a term
v
then the characteristic formula for
ication for
v.
If the value
v
t
t
evaluates to a value
holds of the most-general spec-
does not contain any rst-class function,
then its most-general specication is simply the predicate being equal
to
v .
The case where
v
contains functions is slightly more subtle, since
characteristic formulae only allow specifying the extensional behavior of
functions and do not enable stating properties about the source code of
a function closure. In this chapter, I explain how to take that restriction
into account in the statement of the completeness theorem.
The last part of this chapter is concerned with justifying the soundness
of the treatment of polymorphism. In characteristic formulae, I quantify
type variables over the sort
Type, which corresponds to the set of all Coq
types, instead of restricting the quantication to the set of Coq types
that correspond to some weak-ML type. It is more convenient in practice
to quantify over
Type
because it is the default sort in Coq. I explain in
this chapter why the quantication over
109
Type
is correct.
110
CHAPTER 6.
SOUNDNESS AND COMPLETENESS
6.1 Additional denitions and lemmas
6.1.1 Interpretation of Func
To justify the soundness of characteristic formulae, I give a concrete interpretation
to the type
Func and to the predicate AppEval. I interpret Func as the set of all wellis_well_typed_closure be a predicate that characterizes
values of the form µf.ΛA.λx.t̂. The type Func is then constructed as
typed function closures. Let
well-typed
dependent pairs made of a value
v̂
and of a proof that
v̂
satises the predicate
is_well_typed_closure.
Func
Observe that
≡
Σv̂ (is_well_typed_closure v̂)
Func is a type built upon the syntax of source code from the programFunc would involve a deep embedding of the
ming language. So, a Coq realization of
source language. This indirection through syntax avoids the circularity traditionally
associated with models of higher-order stores, where heaps contain functions and
functions are interpreted in terms of heaps.
To prove interesting properties about characteristic formulae, I need a decoder
for function closures that are created at runtime.
Recall that decoders are only
applied to well-typed values. I dene the decoding of a well-typed function closure
as the function closure itself, viewed as a value of type
closure
µf.ΛA.λx.t̂
Func.
A well-typed function
can indeed be viewed as a value of the type
Func,
because
Func
denotes the set of all well-typed function closures. The formal denition is as follows.
≡ (µf.ΛA.λx.t̂, H) : Func
where H is a proof of is_well_typed_closure v̂ dµf.ΛA.λx.t̂eΓ
Note that the context
Γ
is ignored as function closures are always closed values.
6.1.2 Reciprocal of decoding: encoding
In the proofs, I rely on the fact that the decoding operator yields a bijection between
the set of all well-typed Caml values and the set of all Coq values that admit a type
of the form
VT W.
To show that decoding is bijective, I give the inverse translation,
called encoding. In this section, I describe the translation from (a subset of ) Coq
types into weak-ML types, written
TT U, and the translation of Coq values into typed
T denotes a Coq type of the form VT W.)
The denition of the inverse translation T·U is as simple as that of the translation
V·W. Note that T·U describes a partial function in the sense that not all Coq types
program values, written
bV c.
(Recall that
are the image of some weak-ML type.
TAU
TIntU
TC T U
TFuncU
T∀A.T U
≡
≡
≡
≡
≡
A
int
C TT U
func
∀A. TT U
6.1.
ADDITIONAL DEFINITIONS AND LEMMAS
111
The reciprocal of the decoding operation is called encoding. The encoding of a
Coq value
V
of type
T , written bV c, produces a typed program value v̂
of type
TT U.
In the denition of the encoding operator, shown below, values on the left-hand side
are closed Coq values, given with their type, and values on the right-hand side are
closed program values, which are annotated with weak-ML types.
bn
bD T (V1 , . . . , Vn )
b(µf.ΛA.λx.t̂, H)
bV
: Intc
:CTc
: Funcc
: ∀A. T c
≡
≡
≡
≡
nint
D TT U (bV1 c, . . . , bVn c)
µf.ΛA.λx.t̂
ΛA. bV Ac
The encoding operation is presented here as a meta-level operation, whose behavior depends on the type of its argument. If this operation were to be dened in
Coq, it would rather be presented as a family of operators
V .1
By construction, T·U is
the inverse function of d·e,
bV c,
indexed with the
type of the argument
the inverse function of
V·W,
dened in Ÿ4.3.4, and
b·c
is
dened in Ÿ4.3.5. Those results are formalized through
the following lemma.
Lemma 6.1.1 (Inverse functions)
T
V
b
d
Proof
VT W U
TT U W
dv̂e c
bV c e
=
=
=
=
T
T
v̂
V
where
where
where
where
T is a weak-ML type
T is a Coq type of the form VT W
v̂ is a well-typed weak-ML value
V is Coq value with a type of the
form
VT W
The treatment of functions and of polymorphism are not immediate.
First, consider a well-typed function closure
coding is a pair made of
v̂
and of a proof
H
v̂
of the form
asserting that
v̂
µf.ΛA.λx.t̂.
Its de-
is a well-typed function
closure. The encoding of that pair gives back the function closure. Reciprocally, let
Func.
H
asserting that v̂ is a well-typed function closure, so the encoding of V is the value v̂ ,
which is well-typed. The decoding of v̂ gives back a pair made of v̂ and of a proof
H0 asserting that v̂ is a well-typed function closure. This pair is equal to V because
0
the proof H is equal to the proof H. Indeed, by the proof-irrelevance property of
V
be a value of type
This value must be a pair of a value
v̂
and of a proof
the logic, two proofs of a same proposition are equal.
Second, consider a polymorphic value
that value is the Coq value
λA. dv̂e
ΛA. v̂ of type ∀A.T . The
∀A. VT W. The encoding
of type
decoding of
of that Coq
bλA. dv̂ec. By denition of the encoding operator, it is equal to
ΛA. b(λA. dv̂e) Ac. This value is equal to ΛA. bdv̂ec, and it therefore the same as
the value ΛA. v̂ , from which we started. Reciprocally, let V be a value of type ∀A.T .
value is written
1
My earlier work on a deep embedding of Caml in Coq [15] involves a tool that, for every type
constructor involved in a source Caml program, automatically generates the Coq denition of the
encoding operator associated with that type.
112
CHAPTER 6.
The encoding of
V
to
λA. dbV Ace,
V,
so it is equal to
is
ΛA. bV Ac.
The decoding of this value,
which is the same as
V,
SOUNDNESS AND COMPLETENESS
λA. (V A).
dΛA. bV Ace
is equal
The latter is an eta-expansion of
the value which we started from (recall that the target logic
is assumed to feature functional extensionality).
6.1.3 Substitution lemmas for weak-ML
Weak-ML does not enjoy type soundness, however the usual type-substitution and
term-substitution lemmas hold.
More precisely, typing derivations are preserved
through instantiation of a type variable by a particular type, and they are preserved
by substitution of a variable with a value of the appropriate type.
Lemma 6.1.2 (Type-substitution in weak-ML)
typed value), let
T
be a type, and let
T
a list of type variables, and let
∆, A, ∆0 ` t̂ T
Proof
∆
and
∆0
t̂
be a typed term (or a
T
be
∆, ([A → T ] ∆0 ) ` ([A → T ] t̂) ([A→T ] T )
⇒
Straightforward by induction on the typing derivation.
typed value), let
A
be a list of types.
Lemma 6.1.3 (Term-substitution in weak-ML)
∆
Let
be two typing contexts. Let
be a type, let
ŵ
Let
t̂
be a typed term (or a
be a polymorphic typed value of type
S,
and let
be a typing context.
∆, x : S ` t̂ T
Proof
∧
∆ ` ŵ S
⇒
∆ ` ([x → ŵ] t̂) T
t̂ is
T . Let ΛA. v̂ be the form of ŵ and let ∀A.T 0
S implies ∆, A ` v̂ T 0 . The assumption
be the form of S . The hypothesis ∆ ` ŵ
∆, x : ∀A.T 0 ` (x T ) T implies that T is equal to [A → T ] T 0 . The goal is to prove
0
∆ ` ((ΛA. v̂) T ) T , which is the same as ∆ ` ([A → T ] v̂) ([A→T ] T ) . This result
0
T
follows from the type-substitution lemma applied to ∆, A ` v̂ . By induction on the typing derivation. The interesting case occurs when
the variable
x
applied to some types
6.1.4 Typed reductions
In this section, I introduce a reduction judgment for typed terms, written
t̂ ⇓: v̂ .
Let me start by explaining why this judgment is needed.
Characteristic formulae are generated from typed values, and the strength of the
hypotheses provided by characteristic formulae depend on the types, especially for
let rec f = λx. x. If this function is typed
int
as a function from integers to integers, written let rec f = Λ.λx .x, then the body
functions. Consider the identity function description of that function is:
∀(X : int). ∀(P : int → Prop). P X ⇒ AppReturns F X P
6.1.
ADDITIONAL DEFINITIONS AND LEMMAS
113
However, it the function is typed as a polymorphic function, written ΛA.λxA .x,
let rec f =
then the body description is a strictly stronger assertion:
∀A. ∀(X : A). ∀(P : A → Prop). P X ⇒ AppReturns F X P
This example suggests that the soundness and the completeness of characteristic
formulae is strongly dependent on the types annotating the source code. The proofs
of soundness and completeness relate a characteristic formula with a reduction judgment. In order to keep track of types in the proofs, I introduce a typed version of
the reduction judgment, written
t̂ ⇓: v̂ .
A derivation tree for a typed reduction
t̂ ⇓: v̂
consists of a derivation tree describing not only the reduction steps involved but also
the type of all the values involved throughout the execution.
The inductive rules, shown below, directly extend the rules dening the untyped
reduction judgment t
⇓ v .
When reducing a let-binding of the form let x =
ΛA. t̂1 in t̂2 , the evaluation of t̂1 produces a typed value v̂1 , and then the value x is
replaced in t̂2 by the polymorphic value ∀A.v̂1 . When reducing a function denition
let rec f = ΛA.λx.t̂1 in t̂2 , the variable f is replaced in t̂2 with the function closure
µf.ΛA.λx.t̂1 . Finally, consider a beta-redex of the form (µf.ΛA.λx.t̂1 ) v̂2 . This
function reduces to the body t̂1 of the function in which the variable x is instantiated
as v̂2 , and the variable f is instantiated as a closure itself. Moreover, because
applications are unconstrained in weak-ML, the typed reduction rule for beta-redexes
includes hypotheses enforcing that the type of the argument
type
T0
of the argument
of the type
types
A
T1
x
and that the type
of the body of the function
t̂1 .
T
v̂2
is an instance of the
of the entire redex is an instance
The appropriate instantiation of the
is uniquely determined by the other types involved, as established further
on (Lemma 6.1.7). The reduction rules for conditionals are straightforward, so I do
not show them.
Denition 6.1.1 (Typed reduction judgment)
t̂1 ⇓: v̂1
([x → ΛA. v̂1 ] t̂2 ) ⇓: v̂
(let x = ΛA. t̂1
T2 = [A → T ] T0
in t̂2 ) ⇓: v̂
T = [A → T ] T1
([f → µf.ΛA.λx.t̂1 ] t̂2 ) ⇓: v̂
(let rec f = ΛA.λx.t̂1
in t̂2 ) ⇓: v̂
v̂ ⇓: v̂
([f → µf.ΛA.λx.t̂1 ] [x → v̂2 ] [A → T ] t̂1 ) ⇓: v̂
((µf.ΛA.λxT0 .t̂1T1 ) func (v̂2T2 )) T ⇓: v̂
The next two lemmas relate the untyped reduction judgment for ML programs
with the typed reduction judgment for weak-ML programs. A third lemma then establishes that the typed reduction of a well-typed term always produces a well-typed
value of the same type, and a fourth lemma establishes that the typed reduction
judgment is deterministic.
Lemma 6.1.4 (From typed reductions to untyped reductions)
typed term and
v̂
Let
t̂
be a
be a typed value, both carrying weak-ML type annotations. Let
t
114
and
CHAPTER 6.
v
SOUNDNESS AND COMPLETENESS
be the terms obtained by stripping types out of
t̂ ⇓: v̂
Proof
⇒
t̂
v̂ ,
and
respectively.
t⇓v
Removing the type annotation and the type substitutions from the rules
dening the typed reduction judgment give exactly the rules dening the untyped
reduction judgment.
Lemma 6.1.5 (From untyped reductions to typed reductions, in ML)
Consider a well-typed ML term, fully annotated with ML types. Let
t̂
be the corre-
sponding term where ML type annotations are turned into weak-ML type annotations,
by application of the operator
h·i,
and let
t
be the corresponding term in which all
type annotations are removed. Assume there exists a value
that is, such that
t ⇓ v.
Then, there exists a typed value
v and such that t̂ ⇓:
types, that corresponds to the value
Proof
Let
t̊
v
v̂ ,
v̂ .
such that
t reduces to v ,
annotated with weak-ML
denote a term annotated with ML types. Following the subject reduc-
tion proof for ML, one can show that the untyped reduction sequence
be turned into a typed reduction sequence
t̂ ⇓: v̂
t̊ ⇓ v̊ ,
t ⇓ v
can
which is a judgment dened like
except that it involves terms and values are annotated with ML types instead
h·i
v̂ . of weak-ML types. Then, applying the operator
produces exactly the required derivation
t̂
⇓:
to the entire derivation
Lemma 6.1.6 (Typed reductions preserve well-typedness)
term, let
v̂
be a typed value, let
T
be a type and let
B
Let
t̂
t̊ ⇓: v̊
be a typed
denote a set of free type
variables.
B ` t̂T
Proof
∧
t̂ ⇓: v̂
⇒
B ` v̂ T
By induction on the typed-reduction derivation. I only show the proof case
for let-bindings, to illustrate the kind of arguments involved, as well as the proof
case for the beta-reduction rule, which is the only one specic to weak-ML.
• Case t̂ is of the form (let x = ΛA. t̂1 T1 in t̂2 T2 ) T2 and reduces to v̂ . The premises
:
:
T
assert that t̂1 ⇓ v̂1 and ([x → ΛA. v̂1 ] t̂2 ) ⇓ v̂ hold. By hypothesis, t̂1 1 is wellT
typed in the context (B, A) and t̂2 2 is well-typed in the context (B, x : ∀A.T1 ).
T
By induction hypothesis, v̂1 1 is also well-typed in that context So, ΛA. v̂1 admits
the type ∀A.T1 in the context B . By the substitution lemma applied to the typing
T
T
assumption for t̂2 2 , the term ([x → ΛA. v̂1 ] t̂2 2 ) also admits the type T2 in the
context B . Therefore, by induction hypothesis, v̂ admits the type T2 in the context
B.
• Case t̂ is of the form (µf.ΛA.λxT0 .t̂1T1 (v̂2T2 )) T and reduces to v̂ . The premises
assert that the propositions T2 = [A → T ] T0 and T = [A → T ] T1 hold and that
[f → µf.ΛA.λx.t̂1 ] [x → v̂2 ] [A → T ] t̂1 reduces to v̂ . The goal is to prove that the
T is well-typed in the context
typed term ([f → µf.ΛA.λx.t̂1 ] [x → v̂2 ] [A → T ] t̂1 )
B . By inversion on the typing rule for applications, we obtain the fact that both
v̂2 T2 and (µf.ΛA.λxT0 .t̂1T1 ) func are well-typed. By inversion on the typing rule for
6.1.
ADDITIONAL DEFINITIONS AND LEMMAS
115
: func, x : T0 ` t̂1T1 .
By the type substitution lemma, this implies B, f : func, x : ([A → T ] T0 ) `
([A → T ] t̂1 )([A→T ] T1 ) . Exploiting the two assumptions T2 = [A → T ] T0 and T =
[A → T ] T1 , we can rewrite this proposition as B, f : func, x : T2 ` ([A → T ] t̂1 )T .
T
The conclusion, which is B ` ([f → µf.ΛA.λx.t̂1 ] [x → v̂2 ] [A → T ] t̂1 ) , can then
be deduced by applying the substitution lemma twice, once for x and once for f . function closures, we obtain the typing assumption B, A, f
Lemma 6.1.7 (Determinacy of typed reductions)
let
v̂1
and
v̂2
t̂ ⇓: v̂1
Proof
Let
t̂
be a typed term, and
be two typed values.
t̂ ⇓: v̂2
∧
⇒
v̂1 = v̂2
By induction on the rst hypothesis and case analysis on the second one.
The nontrivial case is that of applications, for which we need to establish that the
T is uniquely determined. Consider a list T such that T2 = [A → T ] T0
T = [A → T ] T1 , and another list T 0 such that T2 = [A → T 0 ] T0 and T = [A →
T 0 ] T1 . Then, [A → T ] T0 = [A → T 0 ] T0 and [A → T ] T1 = [A → T 0 ] T1 . Since the
free variables A are included in the union of the set of free type variables of T0 and
of that of T1 , the list T 0 must be equal to the list T . list of types
and
6.1.5 Interpretation and properties of AppEval
AppEval is the low-level predicate in terms of which AppReturns is dened.
AppEval in terms of the typed semantics
source language. The type of AppEval is recalled next.
Recall that
In this section, I give an interpretation of
of the
AppEval : ∀AB. Func → A → B → Prop
The proposition
reected as
F
AppEval F V V 0
asserts that the application of a Caml function
in the logic to a value reected as
a Caml value reected as
V
saying that the application of the encoding of
returns the encoding of
V 0.
T
and
V0
in the logic terminates and returns
F
to the encoding of
V
terminates and
Hence the following denition of the predicate
Denition 6.1.2 (Denition of AppEval)
value of type
V
0 in the logic. This can be reformulated using encoders,
is a value of type
AppEval F V V 0
≡
Let
F
be a value of type
T 0.
AppEval.
Func, V
is a
(bF c bV c) ⇓: bV 0 c
Remark: one can also present this denition as an inductive rule, as follows.
(fˆ v̂) ⇓: v̂ 0
AppEval dfˆe dv̂e dv̂ 0 e
The semantics of the source language is assumed to be deterministic.
AppEval
Since
lifts the semantics of function application to the level of Coq values, for
every function
the relation
F
and argument
AppEval F V V
following lemma.
V
there is at most one result value
V0
for which
0 holds. This property is formally captured through the
116
CHAPTER 6.
SOUNDNESS AND COMPLETENESS
Lemma 6.1.8 (Determinacy of AppEval) For any types T and T 0 , for any Coq
0
value F of type Func, any Coq value V of type T , and any two Coq values V1 and
V20
of type
T 0,
AppEval F V V10
Proof
AppEval F V V20
∧
V10 = V20
⇒
AppEval, the hypotheses are equivalent to (bF c bV c) ⇓: bV10 c
0
and (bF c bV c)
bV2 c. By determinacy of typed reductions (Lemma 6.1.7), bV10 c is
0
0
0
equal to bV2 c. The equality between V1 and V2 then follows from the injectivity of
By denition of
⇓:
encoders.
Remark: ideally, the soundness theorem for characteristic formulae should be
proved without exploiting the determinacy assumption on the source language. However, the proof appears to become slightly more complicated when this assumption
is not available. For this reason, I leave the generalization of the soundness proof to
a non-deterministic language for future work.
6.1.6 Substitution lemmas for characteristic formulae
The substitution lemma for characteristic formulae is used both in the proof of
soundness and in the proof of completeness.
It explains how a substitution of a
value for a variable in a term commutes with the computation of the characteristic
formula for that term. The proof of the substitution lemma involves a corresponding
substitution lemma for the decoding operator. It also relies on two type-substitution
lemmas, one for characteristic formulae and one for the decoding operator, presented
next.
Lemma 6.1.9 (Type-substitution lemmas)
Let
v̂
be a well-typed value and
t̂
(∆, A). Let Γ be a decoding context with types
T be a list of weak-ML types of the same arity as
be a well-typed term in a context
corresponding to those in
A.
∆.
Let
d[A → T ] v̂eΓ
Proof
= [A → VT W ] dv̂eΓ
J [A → T ] t̂ KΓ = [A → VT W ] J t̂ KΓ
By induction on the structure of
v̂
and
t̂. A useful corollary to the type-substitution lemma for decoders describes the application of the decoding of a polymorphic value. Recall that
values. I use the notation
if
ŵ
is of the form
ΛA. v̂ ,
ŵ T
ŵ
ranges over polymorphic
to denote the type application of
then
ŵ T
is equal to
[A → T ] v̂ .
ŵ
to the types
T:
Using this notation, the
commutation of decoding with type applications of polymorphic values is formalized
as follows.
Lemma 6.1.10 (Application of the decoding of polymorphic value)
Let
ŵ
be a closed polymorphic value of type
same arity as
∀A.T ,
and let
A.
dŵ T e
=
dŵe VT W
T
be a list of types of the
6.1.
ADDITIONAL DEFINITIONS AND LEMMAS
Proof
Let
ΛA. v̂
be the form of
ŵ.
117
VT W,
which is the same as
The conclusion then follows from the type-substitution lemma
Lemma 6.1.11 (Substitution lemma for decoding)
with a free variable
x
of type
S,
d[A → T ] v̂e. The
[A → VT W] dv̂e.
(Lemma 6.1.9). The left-hand side is equal to
right-hand side is equal to (λA. dv̂e)
let
ŵ
Let
v̂
be a value of type
be a closed value of type
S,
and let
Γ
T
be a
decoding context. Then,
d [x → ŵ] v̂ eΓ = d v̂ e(Γ,x7→dŵe)
Proof
By induction on the structure of
occurrence of the variable
xT.
x.
v̂ .
The interesting case is when
x
An occurrence of
v̂
in
v̂
is an
take the form of a type
x by ŵ gives ŵ T . This
dŵ T e. On the right-hand side, x T is directly decoded as
dŵe VT W, since x is mapped to dŵe is the decoding context (Γ, x 7→ dŵe). Using the
previous lemma, we can conclude that both sides yield equal values. .
application
On the left-hand side, the substitution of
value is then decoded as
Lemma 6.1.12 (Substitution lemma for characteristic formulae)
well-typed term with a free variable
type
S.
Proof
x
of type
S,
and let
ŵ
Let
t̂
be a
be a closed typed value of
Then,
J [x → ŵ] t̂ KΓ = J t̂ K(Γ,x7→dŵe)
By induction on the structure of t̂, invoking the previous lemma when reach-
ing values.
6.1.7 Weakening lemma
The following weakening lemma is involved in the proof of completeness.
Lemma 6.1.13 (Weakening for characteristic formulae)
For any well-typed term t̂, for any post-conditions
(∀X. P X ⇒ P 0 X)
Proof
By induction on
given a function
F
AppReturnsF V P 0 .
of a value
such that
V0
t̂.
∧
P
J t̂ KΓ P
and
P 0,
⇒
and for any context
Γ,
J t̂ KΓ P 0
The only nontrivial case is that for applications, where,
V , we must show that AppReturnsF V P implies
AppReturns, the assumption ensures the existence
and an argument
By denition of
such that
AppEval F V V
AppEval F V V 0
0 and
and
P V 0.
Therefore, the same value
P 0 V 0 . Thus the proposition
AppReturnsF V
V0
is
P 0 holds.
6.1.8 Elimination of n-ary functions
For the sake of verifying programs in practice, I have introduced a direct treatment of
n-ary functions, with the predicate
Spec.
However, for the sake of conducting proofs,
it is much simpler to consider only unary functions, with characteristic formulae
118
CHAPTER 6.
SOUNDNESS AND COMPLETENESS
AppReturns.
In this section, I formally justify that the
expressed directly in terms of
equivalence between the two presentations.
For functions of arity one, I show the equivalence between the body description
of a function let rec f = λx. t
Spec1 and the body description
AppReturns1 . In the corresponding lemma,
stated in terms of
of that same function stated in terms of
(G x) denotes the characteristic formula of the body t of the function
let rec f = λx. t (the characteristic formula of t indeed depends on the variable x).
shown below,
Lemma 6.1.14 (Spec1 -to-AppReturns1 )
(∀K. is_spec1 K ∧ (∀x. K x (G x)) ⇒
⇐⇒ (∀x P. (G x) P ⇒ AppReturns1 f x P )
Proof
Spec1 f K)
The proof is not entirely straightforward, so I have also carried it out in
Coq. To prove the proposition AppReturns1 f v P under the assumption (G v) P ,
= v ⇒ R P and we are left proving is_spec1 K ,
K is covariant in R, as well as ∀x. K x (G x), which is
equivalent to x = v ⇒ (G x) P . The latter directly follows from the assumption (G v) P . Reciprocally, we need to prove Spec1 f K under the assumptions
is_spec1 K and ∀x. K x (G x) and ∀x P. (G x) P ⇒ AppReturns1 f x P . By
denition of Spec1 , and given that is_spec1 K is true by assumption, it suces to
prove ∀x. K x (AppReturns1 f x). The hypothesis is_spec1 K asserts that K is
covariant in its second argument. So, exploiting the assumption ∀x. K x (G x), it
remains to prove ∀x P. (G x) P ⇒ AppReturns1 f x P , which corresponds exactly
to one of the assumptions. we instantiate
K
as λx R. x
which holds because
For a function of arity two, I show the equivalence between the body description
let rec f = λx y. t and the body description of the function let rec f =
λx. (let rec g = λy. t in y). This encoding of n-ary functions as unary function can be
of a function easily extended to higher arities. In the corresponding lemma, shown below,
denotes the characteristic formula of the body
(G x y)
t.
Lemma 6.1.15 (Spec2 -to-AppReturns1 )
(∀K. is_spec2 K ∧ (∀x y. K x y (G x y)) ⇒ Spec2 f K)
⇐⇒ (∀x P. (∀g. H ⇒ P g) ⇒ AppReturns1 f x P )
where
H ≡ (∀y P 0 . (G x y) P 0 ⇒
AppReturns1 g y P 0 )
The proof of this lemma is even more technical than the previous one because it
involves exploiting the determinacy of partial applications. So, I do not show the
details and refer the reader to the Coq proof for further details.
Note that the
treatment of arities higher than two does not involve further diculties. It simply
involves additional proof steps for reasoning on iterated partial applications.
6.2.
SOUNDNESS
119
6.2 Soundness
6.2.1 Soundness of characteristic formulae
The soundness theorem states that if the characteristic formula of a well-typed term
t̂
holds of a post-condition
P,
and such that the decoding of
then there exists a value
v̂
satises
P.
v̂
such that
t̂
evaluates to
v̂
Note that representation predicates are
dened in Coq and their properties are proved in Coq, so they are not involved at
any point in the soundness proof.
Theorem 6.2.1 (Soundness)   Let t̂ be a closed typed term, let T be the type of t̂, and let P be a predicate of type VT W → Prop.
    ⊢ t̂  ∧  J t̂ K∅ P    ⇒    ∃v̂.  t̂ ⇓: v̂  ∧  P dv̂e
Proof   The proof goes by induction on the size of t̂. The size of a term is defined as the number of nodes from the grammar of terms. Any value contained in a term is considered to have size one. In particular, a function closure has size one, even though its body is a term whose size may be greater than one. Note that this non-standard definition of the size of a term plays a crucial role in the proof. Indeed, the standard definition of size would not enable the induction principle to be applied, because β-reduction may cause terms to grow in size.
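The following Coq sketch illustrates this non-standard size measure on a simplified, hypothetical term grammar: size never recurses inside values, so a closure counts for a single node regardless of the size of its body.

    Definition var := nat.

    Inductive val : Type :=
      | val_int : nat -> val
      | val_clo : var -> var -> trm -> val   (* µf.λx.t: counts as one node *)
    with trm : Type :=
      | trm_val : val -> trm
      | trm_app : val -> val -> trm
      | trm_if  : val -> trm -> trm -> trm
      | trm_let : var -> trm -> trm -> trm.

    Fixpoint size (t : trm) : nat :=
      match t with
      | trm_val _        => 1            (* any value has size one *)
      | trm_app _ _      => 1
      | trm_if _ t1 t2   => 1 + size t1 + size t2
      | trm_let _ t1 t2  => 1 + size t1 + size t2
      end.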
• Case t̂ = v̂. The assumption J t̂ K∅ P is equivalent to P dv̂e. The conclusion follows immediately, since v̂ ⇓: v̂.
• Case t̂ = (((fˆ func) (v̂′ T′)) T). The assumption coming from the characteristic formula is AppReturns dfˆe dv̂′e P. The goal is to find a value v̂ such that (fˆ v̂′) ⇓: v̂ and P dv̂e. The assumption asserts the existence of a value V of type VT W such that P V holds and such that AppEval dfˆe dv̂′e V holds. By definition of the predicate AppEval, we have (fˆ v̂′) ⇓: bV c. To conclude, I instantiate v̂ as bV c. Since V is then equal to dv̂e, the proposition P dv̂e follows from P V.
• Case t̂ = crash. The assumption is False, so there is nothing to prove.
• Case t̂ = (if dv̂e then t̂1 else t̂2). The assumption is as follows.
    (dv̂e = true ⇒ J t̂1 K P)   ∧   (dv̂e = false ⇒ J t̂2 K P)
The value v̂ is of type bool, so it is either equal to true or to false. Assume that v̂ is equal to true. The first part of the assumption gives J t̂1 K P. The conclusion then directly follows from the induction hypothesis applied to that fact. The case where v̂ is equal to false is symmetrical.
• Case t̂ = (let rec f = ΛA.λx T0. t̂1 T1 in t̂2 T2). I start with the case where the function is not polymorphic, that is, when A is empty. The generalization to polymorphic functions is given afterwards. The assumption is the proposition ∀F. H ⇒ J t̂2 K(f ↦ F) P, where H stands for:
    ∀X. ∀P′. J t̂1 K(x ↦ X, f ↦ F) P′ ⇒ AppReturns F X P′
I instantiate the assumption with F as dµf.λx.t̂1e. According to the assumption, H implies J t̂2 K(f ↦ F) P. The plan is to prove that H holds, so as to later exploit the hypothesis J t̂2 K(f ↦ F) P.
To prove H, consider some arbitrary arguments X and P′. The hypothesis is (J t̂1 K(x ↦ X, f ↦ F) P′). The goal consists in proving AppReturns F X P′. Since X has type VT0 W, we can consider its encoding. Let v̂2 denote the value bXc, which has type T0. We know that X is equal to dv̂2e and that X has type VT0 W. So, the assumption becomes (J t̂1 K(x ↦ dv̂2e, f ↦ dµf.λx.t̂1e) P′), which, by the substitution lemma, gives (J [f → µf.λx.t̂1] [x → v̂2] t̂1 K∅ P′).
By induction hypothesis applied to the term [f → µf.λx.t̂1] [x → v̂2] t̂1, of type T1, there exists a value v̂′ such that the term considered reduces to v̂′ and such that P′ dv̂′e holds. We can deduce the typed reduction judgment ((µf.λx.t̂1) v̂2) ⇓: v̂′. Exploiting the definition of AppEval, we deduce AppEval dµf.λx.t̂1e dv̂2e dv̂′e. The conclusion of H, namely AppReturns F X P′, follows from the latter proposition and from the assumption P′ dv̂′e.
Finally, it remains to exploit the assumption J t̂2 K(f ↦ F) P in order to conclude. By the substitution lemma, this fact can be reformulated as J [f → µf.λx.t̂1] t̂2 K∅ P. The conclusion follows from the induction hypothesis applied to that fact.
• Case t̂ = (let x = ΛA. t̂1 in t̂2). Let T1 denote the type of t̂1. I start by giving the proof in the particular case where x has a monomorphic type, that is, when A is empty. The characteristic formula asserts that there exists a predicate P′ of type VT1 W → Prop such that Jt̂1K∅ P′ and such that, for any X of type VT1 W satisfying the predicate P′, the proposition Jt̂2K(x ↦ X) P holds. The goal is to find a value v̂1 and a value v̂ such that t̂1 reduces to v̂1 and [x → v̂1] t̂2 reduces to v̂, and such that P dv̂e holds.
By induction hypothesis applied to Jt̂1K∅ P′, there exists a value v̂1 such that t̂1 ⇓: v̂1 and P′ dv̂1e. Because typed reduction preserves typing, the value v̂1 has the same type as t̂1, namely T1. Instantiating the variable X from the assumption with the value dv̂1e gives Jt̂2K(x ↦ dv̂1e) P. By the substitution lemma, this proposition is equivalent to J[x → v̂1] t̂2K∅ P. Note that the term [x → v̂1] t̂2 is well-typed by the substitution lemma. The application of the induction hypothesis to the term [x → v̂1] t̂2 asserts the existence of a value v̂ of type T, which is the type of [x → v̂1] t̂2, such that ([x → v̂1] t̂2) ⇓: v̂ and such that P dv̂e, as required.
Generalization to polymorphic let-bindings   Let S be a shorthand for ∀A.T1, which is the type of x. The characteristic formula asserts that there exists a predicate P′ of type ∀A.(VT1 W → Prop) such that ∀A. Jt̂1K∅ (P′ A) holds, and such that, for any X of type VSW satisfying the proposition ∀A. (P′ A (X A)), the proposition Jt̂2K(x ↦ X) P holds. The goal is to find a value v̂1 and a value v̂ such that t̂1 reduces to v̂1 and [x → ΛA. v̂1] t̂2 reduces to v̂, and such that P dv̂e holds.
Let A be some type variables. The induction hypothesis for Jt̂1K∅ (P′ A) asserts that there exists a value v̂1 of type T1 such that t̂1 ⇓: v̂1 and (P′ A) dv̂1e. Now, I instantiate the variable X from the assumption with the polymorphic value dΛA. v̂1e, of type ∀A.VT1 W. Note that, by definition of the decoding of polymorphic values, X is also equal to λA. dv̂1e.
At this point, we need to prove the premise ∀A. (P′ A) (X A). Let T be some arbitrary types. The goal is to prove (P′ T) (X T). Substituting A with T in the proposition (P′ A) dv̂1e gives (P′ T) ([A → T] dv̂1e). To conclude, it suffices to prove that X T is equal to [A → T] dv̂1e. This fact holds because X is equal to λA. dv̂1e.
Now that we have proved the premise ∀A. (P′ A) (X A), we need to find the value v̂ to which [x → ΛA. v̂1] t̂2 reduces. Given the instantiation of X, the assumption Jt̂2K(x ↦ X) P is equivalent to Jt̂2K(x ↦ dΛA. v̂1e) P. By the substitution lemma, this proposition is equivalent to J[x → ΛA. v̂1] t̂2K∅ P. The conclusion then follows from the application of the induction hypothesis to the term [x → ΛA. v̂1] t̂2.
Generalization to polymorphic functions   This proof directly generalizes that given for non-polymorphic functions. As before, the term t̂ is of the form (let rec f = ΛA.λx T0. t̂1 T1 in t̂2 T2). The assumption is the proposition ∀F. H ⇒ Jt̂2K(f ↦ F) P, where H stands for:
    ∀A. ∀X. ∀P′. Jt̂1K(x ↦ X, f ↦ F) P′ ⇒ AppReturns F X P′
I instantiate the assumption with F as dµf.ΛA.λx.t̂1e. According to the assumption, H implies Jt̂2K(f ↦ F) P. The plan is to prove that H holds, so as to later exploit the hypothesis Jt̂2K(f ↦ F) P.
To prove H, consider some arbitrary arguments T, X and P′, and let T be a shorthand for TT U. The hypothesis is (J[A → T] t̂1K(x ↦ X, f ↦ F) P′). The goal consists in proving AppReturns F X P′, where X has type V[A → T] T0 W and P′ has type V[A → T] T1 W → Prop. Let v̂2 denote the value bXc. Since X has type V[A → T] T0 W, the value v̂2 has type [A → T] T0, and X is equal to dv̂2e. The assumption becomes (J[A → T] t̂1K(x ↦ dv̂2e, f ↦ dµf.ΛA.λx.t̂1e) P′), which, by the substitution lemma, gives (J[f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1K∅ P′).
By induction hypothesis applied to the term [f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1, of type [A → T] T1, there exists a value v̂′ such that the term considered reduces to v̂′ and such that P′ dv̂′e holds. We can deduce the typed reduction judgment ((µf.ΛA.λx.t̂1) v̂2) ⇓: v̂′, because the argument v̂2 has the type [A → T] T0 and the result v̂′ is of type [A → T] T1. Exploiting the definition of AppEval, we can deduce AppEval dµf.ΛA.λx.t̂1e dv̂2e dv̂′e. The conclusion of H, namely AppReturns F X P′, follows from the latter proposition and from the assumption P′ dv̂′e. The remainder of the proof is as in the case of non-polymorphic functions. □
Corollary 6.2.1 (Soundness for integer results)   Let t̂ be a closed typed term of type int and let n be an integer. Let t denote the term obtained by stripping type annotations out of t̂.
    J t̂ K∅ (= n)    ⇒    t ⇓ n
Proof   Apply the soundness theorem with P instantiated as the predicate (= n), and use the fact that typed reductions entail untyped reductions. □
6.2.2 Soundness of generated axioms
The proof of soundness of the generated axioms involves Hilbert's epsilon operator. Let me recall how it works. Given a predicate P of type A → Prop, where the type A is inhabited, Hilbert's epsilon operator returns a value of type A. This value satisfies the predicate P if there exists at least one value of type A satisfying P. Otherwise, if no value of type A satisfies P, then the value returned by the epsilon operator is unspecified.
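The sketch below shows how such a realization can be phrased with the epsilon operator from Coq's standard library (Coq.Logic.Epsilon). The names Val, Val_inhab and evaluates_to are placeholders for VT W, for the inhabitation witness obtained from the generated lemma X_safe, and for the predicate λV. t̂ ⇓: bV c; this is only an illustration, not the actual generated code.

    Require Import Coq.Logic.Epsilon.

    Section EpsilonRealization.
      Variable Val : Type.                  (* the reflected type VT W      *)
      Variable Val_inhab : inhabited Val.   (* obtained from lemma X_safe   *)
      Variable evaluates_to : Val -> Prop.  (* stands for λV. t̂ ⇓: bV c     *)

      (* Realization of the axiom X: some value whose encoding is the
         result of evaluating t̂, whenever such a value exists. *)
      Definition X : Val := epsilon Val_inhab evaluates_to.

      (* The key property used to justify the axiom X_cf. *)
      Lemma X_characterization :
        (exists V, evaluates_to V) -> evaluates_to X.
      Proof. unfold X. apply epsilon_spec. Qed.
    End EpsilonRealization.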
Lemma 6.2.1 (Soundness of the axioms generated for values)   Consider a top-level Caml definition let x = t̂ S. The generated axiom X and the axiom X_cf admit a sound interpretation.
Proof   To start with, assume S is a monomorphic type T. I realize the axiom X by picking a value V of type VT W such that the evaluation of t̂ terminates and returns the encoding of V.
    Definition X : VT W := ϵ (λV. t̂ ⇓: bV c)
Remark: the epsilon operator can be applied because the type VT W has been proved to be inhabited through the generated lemma X_safe. Note that if the term t does not terminate, then the value of X is unspecified.
It remains to justify the axiom X_cf. The goal is the proposition ∀P. Jt̂K∅ P ⇒ P X. To prove this implication, consider a particular post-condition P and assume Jt̂K∅ P. The goal is to prove P X. By the soundness theorem, Jt̂K∅ P implies the existence of a value v̂ such that P dv̂e and t̂ ⇓: v̂. Thus, there exists at least one value V, namely dv̂e, such that the proposition t̂ ⇓: bV c holds; indeed, for that value, bV c is equal to v̂. As a consequence, the epsilon operator returns a value X such that the proposition t̂ ⇓: bXc holds. By determinacy of typed reductions, bXc is equal to v̂. So, X is equal to dv̂e. The conclusion P X is therefore derivable from P dv̂e.
Generalization to polymorphic definitions   The proof in this case generalizes the previous proof by adding quantification over polymorphic type variables. Let ∀A.T be the form of S. The type scheme VSW takes the form ∀A.VT W.
    Definition X : VSW := λA. ϵ (λ(V : VT W). t̂ ⇓: bV c)
Recall that the value X is specified through monomorphic instances of a predicate P of type ∀A.(VT W → Prop). To justify the axiom X_cf, the goal is to prove:
    ∀P. ∀A. Jt̂K∅ (P A) ⇒ (P A) (X A)
Consider some arbitrary types T, and let T denote TT U. The goal is to prove that the assumption J[A → T] t̂K∅ (P T) implies the proposition (P T) (X T).
The soundness theorem applied to the term [A → T] t̂ of type [A → T] T gives the existence of a value v̂ of type [A → T] T such that P T dv̂e and [A → T] t̂ ⇓: v̂. Therefore, there exists at least one value V, namely dv̂e, such that the proposition [A → T] t̂ ⇓: bV c holds. As a consequence, the definition of X T, shown below, yields a value V′ such that the proposition [A → T] t̂ ⇓: bV′c holds.
    X T  =  ϵ (λ(V : V[A → T] T W). ([A → T] t̂) ⇓: bV c)
Hence, [A → T] t̂ ⇓: bX Tc. By determinacy of typed reductions, bX Tc is equal to v̂. So X T is equal to dv̂e. The conclusion P T (X T) thus follows from the fact P T dv̂e, which comes from the earlier application of the soundness theorem. □
Lemma 6.2.2 (Soundness of the axioms generated for functions)   Consider a top-level Caml function defined as let rec f = ΛA.λx1 . . . xn. t̂. The generated axiom F and the axiom F_cf admit a sound interpretation.
Proof   I show that the generated axioms are consequences of the axioms generated for the equivalent definition let f = (let rec f = ΛA.λx1 . . . xn. t̂ in f), to which the previous soundness lemma applies. The goal is to prove the proposition bodyA F X1 . . . Xn = J t̂ K(f ↦ F, xi ↦ Xi). To that end, I exploit the axiom generated for the declaration let f = (let rec f = λx1 . . . xn. t̂ in f), which is the proposition ∀P. Jlet rec f = λx1 . . . xn. t̂ in fK∅ P ⇒ P F. I apply this assumption with P instantiated as the predicate λG. (bodyA G X1 . . . Xn = J t̂ K(f ↦ G, xi ↦ Xi)).
It remains to prove the premise Jlet rec f = λx1 . . . xn. t̂ in fK∅ P. By definition of the characteristic formula of a function definition, this proposition unfolds to ∀F′. (bodyA F′ X1 . . . Xn = J t̂ K(f ↦ F′, xi ↦ Xi)) ⇒ P F′, which, given the definition of P, can be reformulated as the tautology ∀F′. P F′ ⇒ P F′. □
6.3 Completeness
From a high-level point of view, the completeness theorem asserts that any correct program can be proved correct using characteristic formulae. More precisely, the theorem asserts that characteristic formulae are always precise and expressive enough to formally establish the behavior of any Caml program. For example, if a given Caml program computes an integer n, then one can prove that the characteristic formula of that program holds of the post-condition (= n). More generally, the characteristic formula of a program always holds of the most-general specification of its output, a notion formalized in this section.
6.3.1 Labelling of function closures
The CFML library is built upon three axioms: the constant Func, the predicate AppEval, and a lemma asserting that AppEval is deterministic. In the proof of soundness, I have given a concrete interpretation of those axioms. However, to establish completeness, those axioms must remain abstract, because the end-user cannot exploit any assumption about them. Since the interpretation of the type Func can no longer be used to relate values of type Func with the function closures they describe, I rely on a notion of correct labelling for relating values of type Func with the function closures they correspond to. The notions of labelling and of correct labelling are explained in what follows.
Consider a function definition (let rec f = ΛA.λx.t̂ in t̂′). The body description associated with the definition of f is recalled next.
    ∀A. ∀X. ∀P. (J t̂ K(x ↦ X, f ↦ F) P) ⇒ AppReturns F X P
This proposition is the only available assumption for reasoning about applications of the function f in the term t̂′. The notion of labelling is used to relate this assumption, which is expressed in terms of a constant F of type Func, with the function closure µf.ΛA.λx.t̂, which gets substituted in the continuation t̂′ during the execution of the program. In short, I tag the function closure µf.ΛA.λx.t̂ with the label F, writing (µf.ΛA.λx.t̂){F} to denote this association, and I then define the decoding of the value (µf.ΛA.λx.t̂){F} as the constant F. This labelling technique makes it possible to exploit the substitution lemma when reasoning on function definitions, through the following equality.
    J t̂′ K(f ↦ F)   =   J [f → (µf.ΛA.λx.t̂){F}] t̂′ K∅
Labelled terms are like typed terms except that every function closure is labelled with a value of type Func. I let t̃ stand for a copy of a typed term t̂ in which all function closures are labelled. Similarly, I use the notation ṽ for labelled values. The formal grammars are shown next.
Definition 6.3.1 (Labelled terms and values)
    ṽ := x T | n | D T (ṽ, . . . , ṽ) | (µf.ΛA.λx.t̃){F}
    w̃ := ΛA. ṽ
    t̃ := ṽ | ṽ ṽ | crash | if ṽ then t̃ else t̃ | let x = ΛA. t̃ in t̃ | let rec f = ΛA.λx.t̃ in t̃
Given a labelled term t̃, one can recover the corresponding typed term t̂ by stripping the labels out of function closures. Formally, the stripping function is called strip_labels, so the typed term t̂ associated with t̃ is written strip_labels t̃.
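As an illustration, the following Coq sketch models the label on a closure as an optional Func field over a simplified, hypothetical grammar, so that labelled and unlabelled terms share one syntax and strip_labels simply erases every label; the actual development uses a dedicated grammar of labelled terms.

    Parameter Func : Type.
    Definition var := nat.

    Inductive lval : Type :=
      | lval_int : nat -> lval
      | lval_clo : option Func -> var -> var -> ltrm -> lval  (* optional label *)
    with ltrm : Type :=
      | ltrm_val : lval -> ltrm
      | ltrm_app : lval -> lval -> ltrm
      | ltrm_if  : lval -> ltrm -> ltrm -> ltrm
      | ltrm_let : var -> ltrm -> ltrm -> ltrm.

    Fixpoint strip_labels_val (v : lval) : lval :=
      match v with
      | lval_int n       => lval_int n
      | lval_clo _ f x t => lval_clo None f x (strip_labels t)
      end
    with strip_labels (t : ltrm) : ltrm :=
      match t with
      | ltrm_val v        => ltrm_val (strip_labels_val v)
      | ltrm_app v1 v2    => ltrm_app (strip_labels_val v1) (strip_labels_val v2)
      | ltrm_if v t1 t2   => ltrm_if (strip_labels_val v) (strip_labels t1) (strip_labels t2)
      | ltrm_let x t1 t2  => ltrm_let x (strip_labels t1) (strip_labels t2)
      end.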
The decoding of a labelled function closure is formally defined as shown below. Note that the decoding context Γ is ignored, since a function closure is always closed.
Definition 6.3.2 (Decoding of a labelled function)   Let (µf.ΛA.λx.t̃){F} be a function closure labelled with a constant F of type Func, and let Γ be a decoding context.
    d(µf.ΛA.λx.t̃){F}eΓ   ≡   F
The construction of characteristic formulae involves applications of the decoding operators. Since decoding now applies to labelled values, characteristic formulae need to be applied to labelled terms. So, in the proof of completeness, characteristic formulae take the form J t̃ K.
The central invariant of the proof of completeness is the notion of correct labelling. A function (µf.ΛA.λx.t̃){F} is said to be correctly labelled if the body description of that function, in which the function is being referred to as F, has been provided as an assumption at a previous point in the characteristic formula. The notion of correct labelling is formalized through the definition of the predicate called body, as shown next.
Definition 6.3.3 (Correct labelling of a function)
    body (µf.ΛA.λx.t̃){F}  ≡  (∀A. ∀X. ∀P. J t̃ K(x ↦ X, f ↦ F) P ⇒ AppReturns F X P)
A labelled term t̃ is then said to be correctly labelled if, for any function closure (µf.ΛA.λx.t̃1){F} that occurs in t̃, the proposition body (µf.ΛA.λx.t̃1){F} does hold. Moreover, the term t̃ is said to be correctly labelled with values from a set E when t̃ is correctly labelled and all the labels involved in t̃ belong to the set E.
The ability to quantify over all the labels occurring in a labelled term is needed for the proofs. The correct labelling of t̃ with values from the set E is captured by the proposition labels E t̃, defined next.²
Definition 6.3.4 (Correct labelling of a term)   Let t̃ be a labelled term and let E be a set of Coq values of type Func.
    labels E t̃  ≡  (for any labelled function (µf.ΛA.λx.t̃1){F} occurring in t̃:  F ∈ E  ∧  body (µf.ΛA.λx.t̃1){F})
The same definition may be applied to values. For example, labels E ṽ indicates that the value ṽ is correctly labelled with constants from the set E.
A few immediate properties about the predicate labels are used in proofs. First, a source program t̃ does not contain any closure, so labels ∅ t̃ holds. Second, labels E t̃ is preserved when the set E grows into a bigger set. Third, if labels E t̃ holds then, for any subterm t̃′ of t̃, labels E t̃′ holds as well. Finally, the notion of correct labelling is stable through substitution: if labels E t̃ and labels E w̃ hold, then labels E ([x → w̃] t̃) holds as well.
² The definition of the predicate labels is stated using a meta-level quantification of the form "for any labelled function occurring in t̃". Alternatively, this meta-level quantification could be replaced with an inductive definition that follows the structure of t̃.
6.3.2 Most-general specifications
The next step towards stating and proving the completeness theorem is the definition of the notion of most-general specification of a value. Characteristic formulae allow one to specify the result v̂ of the evaluation of a term t̂ with utmost accuracy, except for the functions that are contained in v̂. Indeed, the best knowledge that can be gathered about a function from a characteristic formula is the body description of that function. In particular, it is not possible to state properties about the source code of a function closure.
So, the most-general specification of a typed value v̂ of type T, written mgs v̂, is a predicate of type VT W → Prop that describes the shape of the value v̂, except for the functions it contains. Functions are instead described using the predicate labels introduced in the previous section. More precisely, the predicate mgs v̂ holds of a Coq value V if there exist a set E of Coq values of type Func and a labelling ṽ of v̂ such that ṽ is correctly labelled with constants from E and V is equal to the decoding dṽe of ṽ.
Definition 6.3.5 (Most-general specification of a value)   Let v̂ be a well-typed value.
    mgs v̂  ≡  λV. ∃E. ∃ṽ. v̂ = strip_labels ṽ  ∧  labels E ṽ  ∧  V = dṽe
For example, the most-general specification of the Caml value (3, id(int→int)), which is made of a pair of the integer 3 and of the identity function for integers (µf.Λ.λx int. x int), is defined as:
    λ(V : int × Func). ∃E. ∃ṽ. strip_labels ṽ = (3, id(int→int)) ∧ labels E ṽ ∧ V = dṽe
The labelled value ṽ must be of the form (3, id(int→int){F}) for some constant F that belongs to E. The assumption labels E ṽ is equivalent to body (µf.λx int. x int){F}. Since the set E need not contain any constant other than F, the most-general specification of the value (3, id(int→int)) can be simplified by quantifying directly on F, as shown next.
    λ(V : int × Func). ∃(F : Func). body (µf.λx int. x int){F}  ∧  V = (3, F)
Thus, the most-general specification of the Caml value (3, id) holds of a Coq value V if and only if V is of the form (3, F) for some constant F of type Func that admits the body description of the identity function for integers. Note that F is not specified as being equal to the identity function. In fact, the statement "F is equal to (µf.Λ.λx int. x int)" is not even well-formed, because Func is here viewed as an abstract data type. The value F is only specified extensionally, through the predicate AppReturns involved in the definition of the predicate body.
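The following Coq sketch shows the shape of the mgs predicate under hypothetical names: lval for labelled values, Coqval for the reflected type VT W, decode for d·e, labels_val for the correct-labelling predicate, and a list standing in for the set E. It is only an illustration of the structure of Definition 6.3.5.

    Parameter Func lval Coqval : Type.
    Parameter strip_labels_val : lval -> lval.
    Parameter decode : lval -> Coqval.
    Parameter labels_val : list Func -> lval -> Prop.

    (* mgs v holds of V when some correct labelling of v decodes to V. *)
    Definition mgs (v : lval) : Coqval -> Prop :=
      fun V => exists (E : list Func) (vl : lval),
        strip_labels_val vl = v /\ labels_val E vl /\ V = decode vl.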
6.3.3 Completeness theorem
Theorem 6.3.1 (Completeness in terms of most-general specifications)   Let t̂ be a closed, well-typed term of type T that does not contain any function closure, and let v̂ be a value.
    t̂ ⇓: v̂    ⇒    J t̂ K∅ (mgs v̂)
Proof   I prove by induction on the derivation of the typed reduction judgment t̂ ⇓: v̂ that, for any correct labelling t̃ of t̂ using constants from some set E, and under the assumption that t̂ is well-typed, the characteristic formula of t̃ holds of the most-general specification of v̂.
    t̂ ⇓: v̂    ⇒    ∀E. ∀t̃. t̂ = strip_labels t̃ ∧ labels E t̃ ∧ (⊢ t̂)  ⇒  J t̃ K∅ (mgs v̂)
There is one proof case for each possible reduction rule. In each proof case, I describe the form of the term t̂ and the premises of the assumption t̂ ⇓: v̂.
• Case t̂ = v̂. By hypothesis, there exists a labelling t̃ of the term t̂ such that labels E t̃ holds. Since t̂ is equal to v̂, there also exists a labelling ṽ of the value v̂ such that labels E ṽ holds. The goal is (mgs v̂) dṽe. By definition, this proposition is equivalent to the following proposition, which is true.
    ∃E. ∃ṽ. strip_labels ṽ = v̂  ∧  labels E ṽ  ∧  dṽe = dṽe
• Case t̂ = (let rec f = ΛA.λx.t̂1 in t̂2), with a premise asserting that the term [f → µf.ΛA.λx.t̂1] t̂2 reduces to v̂. By hypothesis, there exist a set E, a labelling t̃1 of t̂1 and a labelling t̃2 of t̂2 such that the propositions labels E t̃1 and labels E t̃2 hold. The proof obligation is ∀F. H ⇒ Jt̃2K(f ↦ F) (mgs v̂), where H, the body description of the function, corresponds exactly to the proposition body (µf.ΛA.λx.t̃1){F}. Given a particular variable F of type Func, the goal simplifies to
    body (µf.ΛA.λx.t̃1){F}    ⇒    Jt̃2K(f ↦ F) (mgs v̂)
Since the decoding of (µf.ΛA.λx.t̃1){F} is equal to F, the substitution lemma (adapted to labelled terms) allows us to rewrite the formula Jt̃2K(f ↦ F) into J[f → (µf.ΛA.λx.t̃1){F}] t̃2K∅.
It remains to invoke the induction hypothesis on the reduction sequence [f → µf.ΛA.λx.t̂1] t̂2 ⇓: v̂. To that end, it suffices to check that the term [f → (µf.ΛA.λx.t̃1){F}] t̃2 is correctly labelled with values from the set E ∪ {F}. Indeed, a label occurring in [f → (µf.ΛA.λx.t̃1){F}] t̃2 either occurs in t̃1, which is labelled with constants from E, or occurs in t̃2, which is also labelled with constants from E, or is equal to the label F.
• Case t̂ = ((µf.ΛA.λx T0. t̂1 T1) (v̂2 T2)) T, with a premise asserting that the term [f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1 reduces to v̂, and two premises relating types: T2 = [A → T] T0 and T = [A → T] T1. Let t̃ be the labelling of t̂. The hypothesis labels E t̃ implies that there exist a set E, a value F from that set, and two labellings t̃1 and ṽ2 such that the following facts are true: (µf.ΛA.λx.t̃1) is labelled with F in t̃, the proposition body (µf.ΛA.λx.t̃1){F} holds, labels E t̃1 holds, and labels E ṽ2 holds.
Since the decoding of the labelled value (µf.ΛA.λx.t̃1){F} is the constant F, the goal J t̃ K∅ (mgs v̂) is equivalent to AppReturns F dṽ2e (mgs v̂). To prove this goal, I apply the assumption body (µf.ΛA.λx.t̃1){F}, which asserts the following proposition.
    ∀A. ∀X. ∀P. (Jt̃1K(x ↦ X, f ↦ F) P) ⇒ AppReturns F X P
The types A are instantiated as VT W, X is instantiated as dṽ2e and P is instantiated as (mgs v̂). It remains to prove the premise, which is the proposition J[A → T] t̃1K(x ↦ dṽ2e, f ↦ F) (mgs v̂). Through the substitution lemma, it becomes J[f → (µf.ΛA.λx.t̃1){F}] [x → ṽ2] [A → T] t̃1K∅ (mgs v̂). This proposition follows from the induction hypothesis applied to the typed term [f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1. This term is well-typed and admits the type T because the type of v̂2 is equal to [A → T] T0 and because T is equal to [A → T] T1. Note that the set of labels occurring in [f → (µf.ΛA.λx.t̃1){F}] [x → ṽ2] [A → T] t̃1 is included in the set of labels occurring in t̃, which is of the form ((µf.ΛA.λx.t̃1){F} ṽ2).
• Case t̂ = (if v̂ then t̂1 else t̂2). Because t̂ is well-typed, the value v̂ has type bool, so it is either equal to true or to false. In both cases, the conclusion follows from the induction hypothesis.
• Case t̂ = (let x = ΛA. t̂1 in t̂2), with two premises asserting t̂1 ⇓: v̂1 and ([x → ΛA. v̂1] t̂2) ⇓: v̂. By hypothesis, there exist a set E, a labelling t̃1 of t̂1 and a labelling t̃2 of t̂2 such that the propositions labels E t̃1 and labels E t̃2 hold.
Let me start with the case where x admits a monomorphic type, that is, assuming A to be empty. In this case, the goal is:
    ∃P′. Jt̃1K∅ P′  ∧  (∀X. P′ X ⇒ Jt̃2K(x ↦ X) (mgs v̂))
To prove this goal, I instantiate P′ as the predicate mgs v̂1. The first subgoal, Jt̃1K∅ (mgs v̂1), follows from the induction hypothesis applied to the reduction t̂1 ⇓: v̂1 and to the set E.
It remains to prove the second subgoal. Let T1 be the type of the variable x, and let X be a value of type VT1 W such that mgs v̂1 X holds. By definition of mgs, there exist a set E′ and a labelling ṽ1 of v̂1 such that labels E′ ṽ1 and X = dṽ1e. The goal is to prove Jt̃2K(x ↦ X) (mgs v̂). By the substitution lemma, the formula Jt̃2K(x ↦ X) is equivalent to J[x → ṽ1] t̃2K∅. So it remains to prove J[x → ṽ1] t̃2K∅ (mgs v̂). This proposition follows from the induction hypothesis applied to the reduction derivation ([x → v̂1] t̂2) ⇓: v̂ and to the set E ∪ E′, which we know correctly labels the term [x → ṽ1] t̃2 thanks to the hypotheses labels E′ ṽ1 and labels E t̃2.
Generalization to polymorphic let-bindings   The variable x admits the polymorphic type ∀A. T1. The goal is:
    ∃P′.  ∀A. J t̃1 K∅ (P′ A)   ∧   ∀X. (∀A. (P′ A (X A))) ⇒ J t̃2 K(x ↦ X) (mgs v̂)
I instantiate P′ as λA. mgs v̂1, which admits the type ∀A.(VT1 W → Prop), as expected. For the first subgoal, let T be some arbitrary types and let T denote the list TT U. We have to prove J[A → T] t̃1K∅ (P′ T). The post-condition P′ T is equal to [A → T] (mgs v̂1), and it is straightforward to show that this proposition is equal to mgs ([A → T] v̂1). The proof obligation J[A → T] t̃1K∅ (mgs ([A → T] v̂1)) is obtained by applying the induction hypothesis to ([A → T] t̂1) ⇓: ([A → T] v̂1), which is a direct consequence of the hypothesis t̂1 ⇓: v̂1.
The second proof obligation is harder to establish. Let X be a Coq value of type ∀A. VT1 W that satisfies the proposition ∀A. (P′ A (X A)). The goal is to prove J t̃2 K(x ↦ X) (mgs v̂). The difficulty comes from the fact that we do not have an assumption about the polymorphic value X, but instead have different assumptions for every monomorphic instance of X. For this reason, the proof involves reasoning separately on each of the occurrences of the polymorphic variable x in the term t̃2. So, let t̃′2 be a copy of t̃2 in which the occurrences of x have been renamed using a finite set of names xi, such that each xi occurs exactly once in t̃′2. The goal becomes J t̃′2 K(x1 ↦ X, ..., xn ↦ X) (mgs v̂). The plan is to build a family of labellings (ṽ1ⁱ)i∈[1..n] for the value v̂1 such that the goal can be rewritten in the form J[xi → ṽ1ⁱ] t̃′2K∅ (mgs v̂).
Consider a particular occurrence of the variable xi in t̃′2. It takes the form of a type application xi T, for some types T. The decoding of this occurrence gives X T, where T stands for the list of Coq types VT W. Instantiating the hypothesis about X on the types T gives (P′ T) (X T). By definition of P′, the value X T thus satisfies the predicate [A → T] (mgs v̂1), which is the same as mgs ([A → T] v̂1). So, there exist a set Ei and a labelling ṽ1ⁱ of the value v̂1 such that labels Ei ṽ1ⁱ holds and such that X T is equal to dṽ1ⁱe.
To summarize the reasoning of the previous paragraph, the decoding of the occurrence of the variable xi in the context (x1 ↦ X, . . . , xn ↦ X) is the same as the decoding of the labelled value ṽ1ⁱ in the empty context. It follows that the characteristic formula J t̃′2 K(x1 ↦ X, ..., xn ↦ X) can be rewritten into J[xi → ṽ1ⁱ] t̃′2K∅. Furthermore, there exists a family of sets (Ei)i∈[1..n] such that labels Ei ṽ1ⁱ holds for every index i.
It remains to prove J[xi → ṽ1ⁱ] t̃′2K∅ (mgs v̂). This conclusion follows from the induction hypothesis applied to the term [x → ΛA. v̂1] t̂2 and to the set E ∪ (∪i Ei). Indeed, the term [xi → ΛA. ṽ1ⁱ] t̃′2 is a labelling of the term [x → ΛA. v̂1] t̂2. Also, a label that occurs in the labelled term [xi → ṽ1ⁱ] t̃′2 belongs to the set E ∪ (∪i Ei): such a label occurs either in t̃′2 (which contains the same labels as t̃2) or in one of the labelled values ṽ1ⁱ, and we have labels E t̃2 as well as labels Ei ṽ1ⁱ for every index i. □
6.3.4 Completeness for integer results
A simpler statement of the completeness theorem can be devised for a program that produces an integer. If a term t evaluates to an integer n that satisfies a predicate P, then the characteristic formula of t holds of P.
Corollary 6.3.1 (Completeness for integer results)   Consider a program well-typed in ML that admits the type int. Let t̂ be the term obtained by replacing ML type annotations with the corresponding weak-ML type annotations in that program, and let t denote the corresponding raw term. Let n be an integer and let P be a predicate on integers. Then,
    t ⇓ n  ∧  P n    ⇒    J t̂ K∅ P
Proof   The reduction t ⇓ n starts on a term well-typed in ML, so there exists a corresponding typed reduction t̂ ⇓: n. The completeness theorem applied to this assumption gives J t̂ K∅ (mgs n). To show J t̂ K∅ P, I apply the weakening lemma for characteristic formulae. It remains to show that, for any value V of type int, the proposition mgs n V implies P V. By definition of mgs, there exist a set E and a labelling ñ of n such that labels E ñ and V = dñe. Since n is an integer, the labelling can be ignored completely. So, V = dne and the conclusion P V then follows from the hypothesis P n. □
6.4 Quantification over Type
Polymorphism has been treated by quantifying over logical type variables. I have not yet discussed what exactly is the sort of these variables in the logic. A tempting solution would be to assign them the sort Type. In Coq, Type is the sort of all types from the logic. However, type variables used to represent polymorphism are only meant to range over reflected types, i.e. types of the form VT W. Thus, it seems that one ought to assign type variables the sort RType, defined as { A : Type | ∃T. A = VT W }. Of course, I would provide RType as an abstract definition, since the fact that types correspond to reflected types need not be exploited in proofs. A question naturally arises: since RType is an abstract type, would it remain sound and complete to use the sort Type instead of the sort RType as a sort for type variables? The answer is positive. The purpose of this section is to justify this claim.
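The RType idea can be sketched in Coq as a sigma type over Type, assuming a placeholder syntax typ of weak-ML types and a placeholder decode_typ for the operator VT W; this is only an illustration of the definition discussed above.

    Parameter typ : Type.                 (* syntax of weak-ML types      *)
    Parameter decode_typ : typ -> Type.   (* the reflection operator VT W *)

    (* Coq types that are reflections of some weak-ML type. *)
    Definition RType : Type :=
      { A : Type | exists T : typ, A = decode_typ T }.

    (* Every reflected type can be injected into RType. *)
    Definition reflect (T : typ) : RType :=
      exist (fun A : Type => exists T0 : typ, A = decode_typ T0)
            (decode_typ T)
            (ex_intro _ T eq_refl).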
6.4.1 Case study: the identity function
To start with, I consider an example suggesting how replacing the quantification over RType with a quantification over Type affects characteristic formulae. Consider the identity function id, defined as λx. x. This function is polymorphic: it admits the type ∀A. A → A in Caml. Hence, the assumption provided by a characteristic formula for this function involves a universal quantification over the type A.
    ∀(A : RType). ∀(X : A). ∀(P : A → Prop). P X ⇒ AppReturns id X P
For the sake of studying this formula, let me reformulate it into the following equivalent proposition³, which is expressed in terms of the predicate AppEval instead of the high-level predicate AppReturns.
    ∀(A : RType). ∀(X : A). AppEval id X X
Is this statement still sound if we change the sort of A from RType to Type?
Let me first recall why the statement with sort RType is sound. Let A be a type of sort RType and let X be a Coq value of type A. The goal is to prove AppEval id X X. By definition of AppEval, it is equivalent to the proposition (bidc bXc) ⇓: bXc. The encoding of X, written bXc, is a well-typed program value v̂; thus, bXc is equal to v̂. The goal then simplifies to ((λx. x) v̂) ⇓: v̂, which is correct according to the semantics.
Now, if A is of sort Type and X is an arbitrary Coq value of type A, then there does not necessarily exist a program value v̂ that corresponds to X. Intuitively, however, it would make sense to say that applying the identity function to some exotic object returns the exact same object. The object in question is not a real program value, but this is not a problem because it is never inspected in any way by the polymorphic identity function. The purpose of the next section is to formally justify this intuition.
6.4.2 Formalization of exotic values
I extend the grammar of program values with exotic values, in such a way as to obtain a bijection between the set of all Coq values and the set of all well-typed program values, for a suitably-defined notion of typing of exotic values. First, the grammar of values is extended with exotic values, written exo T V, where T stands for a Coq type and V is a Coq value of type T. Note that the Coq type T ranges over all Coq types, not only over reflected types of the form VT W. The grammar of weak-ML types is extended with exotic types, written Exotic T.
    v := . . . | exo T V
    T := . . . | Exotic T
A Coq type T is said to be exotic, written is_exotic T, if the head constructor of T does not correspond to a type constructor that exists in weak-ML. Let C denote the set of type constructors introduced through algebraic data type definitions. The definition of is_exotic is formalized as follows.⁴
    is_exotic T  ≡  T ≠ Int  ∧  T ≠ Func  ∧  (∀C ∈ C. ∀T. T ≠ C T)
³ Given the definition of AppReturns, the equivalence to be established is
    AppEval id X X ⇐⇒ (∀P. P X ⇒ (∃Y. AppEval id X Y ∧ P Y))
From left to right, instantiate Y as X. From right to left, instantiate P as λY. (Y = X).
⁴ Technically, one should also check that the type constructors are applied to lists of types of the appropriate arity. Partially-applied type constructors are also treated as exotic values.
An exotic value exo T V admits the type Exotic T if and only if the Coq type T is exotic.
        is_exotic T
    ──────────────────────────
    ∆ ⊢ (exo T V) (Exotic T)
Let T be an exotic Coq type. The programming language type Exotic T is reflected in the logic as the type T. The decoding of an exotic value exo T V is V, and, reciprocally, the encoding of a Coq value V of some exotic type T is the exotic value exo T V.
    VExotic TW  ≡  T
    TT U        ≡  Exotic T
    dexo T V e  ≡  V
    bV : Tc     ≡  exo T V        (when is_exotic T holds)
Observe that the side-condition is_exotic T ensures that the definitions of TT U and of bV : Tc do not overlap with the existing definitions. The soundness of the introduction of exotic values is captured through the following lemma.
Lemma 6.4.1 (Inverse functions with exotic values)   Under the definitions extended with exotic types and exotic values, the operator T·U is the inverse of V·W and b·c is the inverse of d·e.
Proof   Follows directly from the definitions of the operations on exotic types and exotic values. □
To summarize, the treatment of exotic values described in this section allows for a bijection to be established between the set of all Coq types and the set of all weak-ML types, and between the set of all Coq values and the set of all typed program values. This bijection justifies the quantification over the sort Type of the type variables that occur in characteristic formulae.
Chapter 7
Proofs for the imperative setting
Through the previous chapter, I have established the soundness and completeness of characteristic formulae for pure programs. The matter of this chapter is the generalization of those results to the case of imperative programs. To that end, the theorems need to be extended with heap descriptions. Two additional difficulties related to the treatment of mutable state arise. Firstly, the statement of the soundness theorem needs to quantify over the pieces of heap that have been framed out before reasoning on a given characteristic formula. Secondly, the statement of the completeness theorem needs to account for the fact that characteristic formulae do not reveal the actual address of the memory cells being allocated. The most general post-condition that can be proved from a characteristic formula is thus a specification of the output value and of the output heap up to renaming of locations.
7.1 Additional definitions and lemmas
7.1.1 Typing and translation of memory stores
Reasoning on characteristic formulae involves the notion of typed memory store, written m̂, and that of well-typed memory store. I define those notions next, and then describe the bijection between the set of well-typed memory stores and the set of values of type Heap.
A memory store m is a map from locations to closed values. A typed store m̂ is a map from locations to typed values. So, if l is a location in the domain of m̂, then m̂[l] denotes the typed value stored in m̂ at that location. A typed store m̂ is said to be well-typed in weak-ML, written ⊢ m̂, if all the values stored in m̂ are well-typed. Formally,
    (⊢ m̂)  ≡  ∀l ∈ dom(m̂). (⊢ m̂[l])
A well-typed memory store m̂ can be decoded into a value of type Heap. To that end, it suffices to decode each of the values stored in m̂. Reciprocally, a value of type Heap can be encoded into a well-typed memory store by encoding all the values that it contains. The formalization of this definition is slightly complicated by the fact that Heap is actually a map from locations to dependent pairs made of a Coq type and of a Coq value of that type. The formal definition is shown next. Note that the decoding of typed stores only applies to well-typed stores.
Definition 7.1.1 (Decoding and encoding of memory stores)
    dm̂e ≡ h    where  dom(h) = dom(m̂)  and  m̂[l] = v̂ T ⇒ h[l] = (VT W, dv̂ T e)
    bhc ≡ m̂    where  dom(m̂) = dom(h)  and  h[l] = (T, V : T) ⇒ m̂[l] = bV c
It is straightforward to check that bdm̂ec is equal to m̂ for any well-typed memory store m̂, and that dbhce is equal to h for any heap h.
Remark: for the presentation to be complete, the decoding and the encoding operations need to be extended to locations, as follows.
    VlocW ≡ Loc        TLocU ≡ loc
    dl loc e ≡ (l : Loc)       bl : Locc ≡ l loc
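The following Coq sketch illustrates one possible shape for the type Heap as a map from locations to dependent pairs of a Coq type and a value of that type. The concrete representation used by CFML differs, so this is only an illustration under hypothetical names.

    Definition Loc : Type := nat.

    (* A dependent pair packaging a type together with a value of that type. *)
    Record dyn : Type := Dyn { dyn_type : Type; dyn_value : dyn_type }.

    (* A heap maps finitely many locations to packaged values; here the map
       is modelled naively as a partial function. *)
    Definition Heap : Type := Loc -> option dyn.

    (* Example: a singleton heap storing the integer 3 at location 0. *)
    Definition example_heap : Heap :=
      fun l => if Nat.eqb l 0 then Some (Dyn nat 3) else None.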
7.1.2 Typed reduction judgment
The typed version of the reduction judgment is defined in a similar manner as in the purely functional setting, except that it now involves typed memory stores. The judgment takes the form t̂/m̂ ⇓: v̂′/m̂′, asserting that the evaluation of the typed term t̂ in the typed store m̂ terminates and returns the typed value v̂′ in the typed store m̂′. The inductive rules appear in Figure 7.1. Observe that the semantics of the primitive functions ref, get and set consists in reading and writing typed values in a typed store. The reduction rule for get includes a premise asserting that the value at location l in the store has the same type as the term get l.
As before, the typed reduction judgment is deterministic and preserves typing.
Lemma 7.1.1 (Determinacy of typed reductions)
    t̂/m̂ ⇓: v̂′1/m̂′1  ∧  t̂/m̂ ⇓: v̂′2/m̂′2    ⇒    v̂′1 = v̂′2  ∧  m̂′1 = m̂′2
Proof   Similar proof as in the purely-functional setting (Lemma 6.1.7). □
Lemma 7.1.2 (Typed reductions preserve typing)   Let t̂ be a typed term, let v̂ be a typed value, let m̂ and m̂′ be two typed stores, let T be a type and let B denote a list of free type variables.
    B ⊢ t̂ T  ∧  ⊢ m̂  ∧  t̂/m̂ ⇓: v̂/m̂′    ⇒    B ⊢ v̂ T  ∧  ⊢ m̂′
Figure 7.1: Typed reduction judgment for imperative programs.

    ─────────────────
     v̂/m̂ ⇓: v̂/m̂

     t̂1/m̂1 ⇓: v̂1/m̂2        ([x → v̂1] t̂2)/m̂2 ⇓: v̂/m̂3
    ───────────────────────────────────────────────────
     (let x = t̂1 in t̂2)/m̂1 ⇓: v̂/m̂3

     ([x → ŵ1] t̂2)/m̂ ⇓: v̂/m̂′
    ───────────────────────────────────
     (let x = ŵ1 in t̂2)/m̂ ⇓: v̂/m̂′

     ([f → µf.ΛA.λx.t̂1] t̂2)/m̂ ⇓: v̂/m̂′
    ─────────────────────────────────────────────
     (let rec f = ΛA.λx.t̂1 in t̂2)/m̂ ⇓: v̂/m̂′

     T2 = [A → T] T0      T = [A → T] T1      ([f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1)/m̂ ⇓: v̂/m̂′
    ─────────────────────────────────────────────────────────────────────────────────────────────
     ((µf.ΛA.λx T0. t̂1 T1) func (v̂2 T2)) T /m̂ ⇓: v̂/m̂′

     l = 1 + max(dom(m̂))
    ───────────────────────────────────────────
     (ref v̂) loc /m̂ ⇓: (l loc)/(m̂ ⊎ [l ↦ v̂])

     l ∈ dom(m̂)      v̂ T = m̂[l]
    ────────────────────────────────
     (get (l loc)) T /m̂ ⇓: v̂ T /m̂

     l ∈ dom(m̂)
    ───────────────────────────────────────────────
     (set (l loc) v̂) unit /m̂ ⇓: tt unit /(m̂[l ↦ v̂])
Proof   The proof is similar to that of the purely-functional setting (Lemma 6.1.6), except when the store is involved in the reduction.
• Case (ref v̂) loc /m̂ ⇓: (l loc)/(m̂ ⊎ [l ↦ v̂]). By assumption, v̂ and m̂ are well-typed, so the extended store m̂ ⊎ [l ↦ v̂] is also well-typed.
• Case (get (l loc)) T /m̂ ⇓: v̂ T /m̂, where v̂ T = m̂[l]. From the premise (⊢ m̂), we get (⊢ m̂[l]). Since v̂ T is equal to m̂[l], the value v̂ T is well-typed in the empty context.
• Case (set (l loc) v̂) unit /m̂ ⇓: tt unit /(m̂[l ↦ v̂]). By assumption, v̂ and m̂ are well-typed, so the updated store m̂[l ↦ v̂] is also well-typed. □
To state the soundness and the completeness theorems, I introduce the convenient notation t̂/h ⇓: v̂′/h′, which asserts that the term t̂, in a store equal to the encoding of the heap h, evaluates to a value v̂′ in a store equal to the encoding of the heap h′.
Definition 7.1.2 (Typed reduction judgment on heaps)
    (t̂/h ⇓: v̂′/h′)   ≡   (t̂/bhc ⇓: v̂′/bh′c)
(t̂/bhc ⇓: v̂ 0 /bh0 c )
7.1.3 Interpretation and properties of AppEval
The proposition
coding of
F
AppEval F V h V 0 h0
to the encoding of
and returns the encoding of
AppEval
V0
V
captures the fact that application of the en-
in the store described by the heap
in the store described by the heap
h
h0 .
is recalled next.
AppEval : ∀A B. Func → A → Heap → B → Heap → Prop
The axiom
AppEval
is interpreted as follows.
terminates
The type of
136
CHAPTER 7.
PROOFS FOR THE IMPERATIVE SETTING
Definition 7.1.3 (Definition of AppEval)   Let F be a value of type Func, V be a value of type T, V′ be a value of type T′, and h and h′ be two heaps.
    AppEvalT,T′ F V h V′ h′   ≡   (bF c bV c)/bhc ⇓: bV′c/bh′c
Remark: one can also present this definition as an inductive rule, as follows.
     (fˆ v̂)/m̂ ⇓: v̂′/m̂′
    ─────────────────────────────────
     AppEval dfˆe dv̂e dm̂e dv̂′e dm̂′e
The determinacy lemma associated with the definition of AppEval follows directly from the determinacy of typed reductions.
Lemma 7.1.3 (Determinacy)   For any types T and T′, for any Coq value F of type Func, any Coq value V of type T, any two Coq values V′1 and V′2 of type T′, any heap h and any two heaps h′1 and h′2,
    AppEval F V h V′1 h′1  ∧  AppEval F V h V′2 h′2    ⇒    V′1 = V′2  ∧  h′1 = h′2
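The type of the imperative AppEval axiom and its determinacy property can be sketched in Coq as follows, keeping Func and Heap abstract; this mirrors Lemma 7.1.3 and is only an illustration of the shape of these axioms.

    Parameter Func : Type.
    Parameter Heap : Type.
    Parameter AppEval :
      forall (A B : Type), Func -> A -> Heap -> B -> Heap -> Prop.

    Axiom AppEval_deterministic :
      forall (A B : Type) (F : Func) (V : A)
             (h : Heap) (V1' V2' : B) (h1' h2' : Heap),
        AppEval A B F V h V1' h1' ->
        AppEval A B F V h V2' h2' ->
        V1' = V2' /\ h1' = h2'.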
7.1.4 Elimination of n-ary functions
In the previous chapter, I have exploited the fact that the characteristic formula of an n-ary application is equivalent to the characteristic formula of the encoding of that application in terms of unary applications. I could thereby eliminate references to Specn and work entirely in terms of AppReturns1. This result is partially broken by the introduction of the predicate AppPure, which is used to handle partial applications. In short, when describing a function of two arguments, the predicate Spec2 from the imperative setting captures a stronger property than a proposition stated only in terms of AppReturns1, because Spec2 forbids any modification to the store on the application of the first argument.
The direct characteristic formula for imperative n-ary functions gives a stronger hypothesis about functions than a description in terms of unary functions, so it leads to a logically-weaker characteristic formula. So, some work is needed in the proof of the soundness theorem for justifying the correctness of the assumption given for n-ary functions. The following lemma, proved in Coq, explains how the body description for the function let rec f = λxy. t, expressed in terms of Spec2, can be related to a combination of AppPure and AppReturns1, which is quite similar to the body description associated with the function let rec f = λx. (let rec g = λy. t in g).
Lemma 7.1.4 (Spec2-to-AppPure-and-AppReturns1)
    (∀K. is_spec2 K ∧ (∀x y. K x y (G x y)) ⇒ Spec2 f K)
    ⇐⇒   (∀x P. (∀g. H ⇒ P g) ⇒ AppPure f x P)
where H ≡ (∀y H′ Q′. (G x y) H′ Q′ ⇒ AppReturns1 g y H′ Q′).
However, there is no further complication in the completeness theorem, because what can be proved with a given assumption about functions can also be proved with a stronger assumption about functions. The following lemma states that the body description for the function let rec f = λxy. t is strictly stronger than that of let rec f = λx. (let rec g = λy. t in g). Below, H stands for the same proposition as in the previous lemma.
Lemma 7.1.5 (Spec2-to-AppReturns1)
    (∀K. is_spec2 K ∧ (∀x y. K x y (G x y)) ⇒ Spec2 f K)
    ⇒   (∀x H Q. (∀g. H ⇒ H ▹ Q g) ⇒ AppPure f x H Q)
In summary, the proofs in this chapter need only be concerned with unary functions, except for one proof case in the soundness theorem, in which I prove the correctness of the body description ∀x P. (∀g. H ⇒ P g) ⇒ AppPure f x P, which describes a function of arity two. The treatment of higher arities does not raise any further difficulty.
7.2 Soundness
A simple statement of the soundness theorem can be given for complete executions. Assume that a term t̂ is executed in an empty memory store and that its characteristic formula satisfies a post-condition Q. Then, the term t̂ produces a value v̂ and a heap of the form h + h′ such that the proposition Q dv̂e h holds. The heap h′ corresponds to the data that has been allocated and then discarded during the reasoning.
Theorem 7.2.1 (Soundness for complete executions)   Let t̂ be a term carrying weak-ML type annotations, let T be the type of t̂, and let Q be a post-condition of type VT W → Hprop.
    ⊢ t̂  ∧  Jt̂K∅ [ ] Q    ⇒    ∃v̂ h h′.  t̂/∅ ⇓: v̂/(h+h′)  ∧  Q dv̂e h
Remark: the assumption ⊢ t̂ and the conclusion t̂/∅ ⇓: v̂/(h+h′) ensure that the output value v̂ is well-typed and admits the same type as t̂.
ensure that the
Here again, representation predicates are dened in Coq and their properties are
proved in Coq, so they are not involved at any point in the soundness proof. Indeed,
all heap predicates of the form
predicates of the form
l ,→ v .
x
SX
are ultimately dened in terms of heap
Only the latter appear in the proof of soundness,
which follows.
The general statement of soundness, upon which the induction is conducted, relies on an intermediate definition. The proposition sound t̂ F asserts that the formula F is a correct description of the behavior of the term t̂. The definition of the predicate sound involves a reduction sequence of the form t̂/hi+hk ⇓: v̂/hf+hk+hg, where hi and hf correspond to the initial and final heaps involved in the specification of the term t̂, where hk corresponds to the pieces of heap that have been framed out, and where hg corresponds to the piece of heap being discarded during the reasoning on the execution of the term t̂.
Definition 7.2.1 (Soundness of a formula)   Let t̂ be a well-typed term of type T, and let F be a predicate of type Hprop → (VT W → Hprop) → Prop.
    sound t̂ F  ≡  ∀H Q hi hk.
        F H Q  ∧  hi ⊥ hk  ∧  H hi
        ⇒  ∃v̂ hf hg.  hf ⊥ hk ⊥ hg  ∧  t̂/hi+hk ⇓: v̂/hf+hk+hg  ∧  Q dv̂e hf
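The following Coq sketch restates the shape of Definition 7.2.1 under placeholder names: holds for the application of a heap predicate to a heap, reduces for the typed reduction judgment on heaps, union and disjoint for heap composition. It is only an illustration of the structure of the definition.

    Parameter heap Hprop : Type.
    Parameter holds : Hprop -> heap -> Prop.
    Parameter disjoint : heap -> heap -> Prop.
    Parameter union : heap -> heap -> heap.
    Parameter trm val Coqval : Type.
    Parameter decode : val -> Coqval.
    Parameter reduces : trm -> heap -> val -> heap -> Prop.

    Definition sound (t : trm)
      (F : Hprop -> (Coqval -> Hprop) -> Prop) : Prop :=
      forall H Q hi hk,
        F H Q -> disjoint hi hk -> holds H hi ->
        exists (v : val) (hf hg : heap),
          disjoint hf hk /\ disjoint hf hg /\ disjoint hk hg /\
          reduces t (union hi hk) v (union hf (union hk hg)) /\
          holds (Q (decode v)) hf.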
The soundness theorem then asserts that the characteristic formula of a well-typed term is a correct description of the behavior of that term, in the sense that the proposition sound t̂ (J t̂ K∅) holds. Before proving the soundness theorem, I first establish a lemma explaining how to eliminate the application of the predicate local that appears at the head of the characteristic formula J t̂ K∅.
Lemma 7.2.1 (Elimination of local)   Let t̂ be a well-typed term of type T, and let F be a predicate of type Hprop → (VT W → Hprop) → Prop. Then,
    sound t̂ F    ⇒    sound t̂ (local F)
Proof   To prove the lemma, let H and Q be pre- and post-conditions such that local F H Q holds, and let hi and hk be two disjoint heaps such that H hi holds. The goal is to find the v̂, hf and hg.
Unfolding the definition of local in the assumption local F H Q and specializing the assumption to the heap hi, which satisfies H, we obtain some predicates Hi, Hk, Hg and Qf such that (Hi ∗ Hk) holds of the heap hi and such that the propositions F Hi Qf and Qf ∗ Hk ▹ Q ∗ Hg hold. By definition of the separating conjunction, the heap hi can be decomposed into two disjoint heaps h′i and h′k such that h′i satisfies Hi and h′k satisfies Hk and hi = h′i + h′k.
The application of the hypothesis sound t̂ F to the heap h′i satisfying Hi and to the heap (h′k + hk) gives the existence of a value v̂ and of two heaps h′f and h′g satisfying the following properties.
    t̂/h′i+hk+h′k ⇓: v̂/h′f+hk+h′k+h′g    ∧    Qf dv̂e h′f
The property Qf dv̂e h′f can be exploited with the hypothesis Qf ∗ Hk ▹ Q ∗ Hg. Since the heap (h′f + h′k) satisfies the heap predicate ((Qf dv̂e) ∗ Hk), the same heap also satisfies the predicate ((Q dv̂e) ∗ Hg). By definition of separating conjunction, there exist two heaps hf and h″g such that Q dv̂e hf holds and such that the fact Hg h″g and the equality h′f + h′k = hf + h″g are satisfied.
To conclude, I instantiate hg as h″g + h′g. It remains to prove t̂/hi+hk ⇓: v̂/hf+hk+hg. Given the typed reduction sequence obtained earlier on, it suffices to check the equalities hi = h′i + h′k and h′f + h′k + h′g = hf + hg. □
Theorem 7.2.2 (Soundness)   Let t̂ be a typed term.
    ⊢ t̂    ⇒    sound t̂ (J t̂ K∅)
Proof   The proof goes by induction on the size of the term t̂; here again, all values have size one. Given a typed term t̂ of type T, its characteristic formula takes the form local F. I have proved in the previous lemma that, to establish the soundness of local F, it suffices to establish the soundness of F. It therefore remains to study the formula F, whose shape depends on the term t̂.
Let H be a pre-condition and Q be a post-condition for t̂. For each form that t̂ might take, we consider two disjoint heaps hi and hk such that the proposition H hi holds. The assumption F H Q depends on the shape of t̂; the particular form of the assumption is stated in every proof case. The goal is to find a value v̂ and two heaps hf and hg satisfying t̂/hi+hk ⇓: v̂/hf+hk+hg and Q dv̂e hf.
• Case t̂ = v̂. The assumption is H ▹ Q dv̂e. Instantiate v̂ as v̂, hf as hi and hg as the empty heap. It is immediate to check the first conclusion v̂/hi+hk ⇓: v̂/hi+hk. The second conclusion, Q dv̂e hi, is obtained by applying the assumption H ▹ Q dv̂e to the assumption H hi.
• Case t̂ = ((fˆ func) (v̂′ T′)) T. The assumption coming from the characteristic formula is AppReturnsVT′W,VT W dfˆe dv̂′e H Q. Unfolding the definition of AppReturns and exploiting it on the heaps hi and hk gives the existence of a value V of type VT W and of two heaps hf and hg satisfying the next two properties.
    AppEvalVT′W,VT W dfˆe dv̂′e (hi + hk) V (hf + hk + hg)    ∧    Q V hf
Let v̂ denote bV c. The value V is equal to dv̂e, so Q dv̂e hf holds. Moreover, by definition of AppEval, the reduction (fˆ v̂′)/(hi+hk) ⇓: v̂/(hf+hk+hg) holds.
• Case t̂ = (let x = t̂1 in t̂2). The assumption coming from the characteristic formula is
    ∃Q′. Jt̂1K H Q′  ∧  ∀X. Jt̂2K(x ↦ X) (Q′ X) Q
Consider a particular post-condition Q′. The induction hypothesis applied to t̂1 asserts the existence of a value v̂1 and of two heaps h′f and h′g satisfying t̂1/hi+hk ⇓: v̂1/h′f+hk+h′g and Q′ dv̂1e h′f. I then exploit the second hypothesis by instantiating X as dv̂1e, obtaining the proposition Jt̂2K(x ↦ dv̂1e) (Q′ dv̂1e) Q. By the substitution lemma, this is equivalent to J[x → v̂1] t̂2K∅ (Q′ dv̂1e) Q. I apply the induction hypothesis on the heap h′f, which satisfies the precondition Q′ dv̂1e, and to the heap hk + h′g, obtaining the existence of a value v̂ and of two heaps hf and h″g satisfying t̂2/h′f+hk+h′g ⇓: v̂/hf+hk+h′g+h″g and Q dv̂e hf. Instantiating hg as h′g + h″g gives the derivation t̂/hi+hk ⇓: v̂/hf+hk+hg.
• Case t̂ = (t̂1 ; t̂2). A sequence can be viewed as a let-binding of the form let x = t̂1 in t̂2 for a fresh name x. The soundness of the treatment of sequences can therefore be deduced from the previous proof case.
• Case t̂ = (let x = ŵ1 in t̂2). The assumption coming from the characteristic formula is
    ∀X. X = dŵ1e  ⇒  Jt̂2K(x ↦ X) H Q
I instantiate X with dŵ1e. The premise is immediately verified and the conclusion gives Jt̂2K(x ↦ dŵ1e) H Q. By the substitution lemma, this proposition is equivalent to J[x → ŵ1] t̂2K∅ H Q. The conclusion then follows directly from the induction hypothesis.
• Case t̂ = crash. Like in the purely-functional setting.
• Case t̂ = if dv̂e then t̂1 else t̂2 . Similar proof as in the purely-functional setting.
• Case t̂ = (let rec f = ΛA.λx.t̂1 in t̂2). The assumption of the theorem is ∀F. H ⇒ J t̂2 K(f ↦ F) H Q, where H is
    ∀A X H′ Q′. J t̂1 K(x ↦ X, f ↦ F) H′ Q′ ⇒ AppReturns F X H′ Q′
I instantiate the assumption with F as dµf.ΛA.λx.t̂1e. So, H implies Jt̂2K(f ↦ F) H Q.
The next step consists in proving that H holds. Consider particular values for T, X, H′ and Q′, let T denote TT U, and assume J[A → T] t̂1K(x ↦ X, f ↦ F) H′ Q′. The goal is AppReturns F X H′ Q′. Let v̂1 denote bXc. So, dv̂1e is equal to X. By the substitution lemma, the assumption J t̂1 K(x ↦ X, f ↦ F) H′ Q′ is equivalent to J[f → µf.ΛA.λx.t̂1] [x → v̂1] [A → T] t̂1K∅ H′ Q′. Thereafter, let t̂′ denote the term [f → µf.ΛA.λx.t̂1] [x → v̂1] [A → T] t̂1. The assumption becomes Jt̂′K∅ H′ Q′.
By definition of AppReturns, the goal AppReturns F X H′ Q′ is equivalent to:
    ∀h′i h′k. (h′i ⊥ h′k) ∧ H′ h′i  ⇒  ∃V′ h′f h′g. AppEval F X (h′i + h′k) V′ (h′f + h′k + h′g)  ∧  Q′ V′ h′f
Given particular heaps h′i and h′k, I invoke the induction hypothesis on them and on the hypothesis Jt̂′K∅ H′ Q′. This gives the existence of a value v̂′ and of two heaps h′f and h′g satisfying t̂′/h′i+h′k ⇓: v̂′/h′f+h′k+h′g and Q′ dv̂′e h′f. I then instantiate V′ as dv̂′e. The proof obligation Q′ V′ h′f is immediately solved. There remains to establish the following fact.
    AppEval dµf.ΛA.λx.t̂1e dv̂1e (h′i + h′k) dv̂′e (h′f + h′k + h′g)
By definition of AppEval, this fact is ((µf.ΛA.λx.t̂1) v̂1)/h′i+h′k ⇓: v̂′/h′f+h′k+h′g. The reduction holds because the contraction of the redex ((µf.ΛA.λx.t̂1) v̂1) is equal to t̂′, which reduces towards v̂′.
Now that we have proved the premise H, we can conclude using the assumption J t̂2 K(f ↦ F) H Q. By the substitution lemma, we get J[f → µf.ΛA.λx.t̂1] t̂2K∅ H Q. The conclusion follows from the induction hypothesis applied to that fact.
• Case t̂ = (let rec f = ΛA.λxy.t̂1 in t̂2). As explained earlier on (§7.1.4), I have to prove the soundness of the body description generated for functions of two arguments. Let fˆ denote the function closure µf.ΛA.λxy.t̂1, which is the same as µf.ΛA.λx.(let rec g = λy. t̂1 in g). The body description of fˆ is equivalent to ∀A X P. (∀G. H ⇒ P G) ⇒ AppPure dfˆe X P, where
    H  ≡  (∀Y H′ Q′. Jt̂1K(x ↦ X, y ↦ Y) H′ Q′ ⇒ AppReturns1 G Y H′ Q′)
Let T be some types, and let T denote TT U. Let X be a given argument and let P be a given predicate of type Func → Prop such that ∀G. H ⇒ P G. The goal is to prove AppPure dfˆe X P. By definition of AppPure, the goal unfolds to ∃V. (∀h. AppEval dfˆe X h V h) ∧ P V. In what follows, let ĝ denote the function µg.Λ.λy.[x → v̂2] [A → T] t̂1, where v̂2 denotes bXc. I instantiate V as dĝe.
Let h be a heap and m̂ denote bhc. Since v̂2 is bXc, X is equal to dv̂2e. The proposition AppEval dfˆe X h V h is equivalent to AppEval dfˆe dv̂2e dm̂e dĝe dm̂e, and it follows from the typed reduction sequence:
    ((µf.ΛA.λx.(let rec g = Λ.λy.t̂1 in g)) v̂2)/m̂ ⇓: (µg.Λ.λy.[x → v̂2] [A → T] t̂1)/m̂
It remains to prove P dĝe. To that end, I apply the assumption ∀G. H ⇒ P G. The goal to be established is [G → dĝe] H. Let t̂′1 stand for the term [x → v̂2] [A → T] t̂1. Using the substitution lemma, the goal can be rewritten as:
    ∀Y H′ Q′. J t̂′1 K(y ↦ Y) H′ Q′ ⇒ AppReturns1 dµg.Λ.λy.t̂′1e Y H′ Q′
This proposition is exactly the body description of the unary function ĝ. The soundness of the body description for unary functions has already been established in the previous proof case.
• Case t̂ = (while t̂1 do t̂2). The assumption given by the characteristic formula is the proposition ∀R. is_local R ∧ H ⇒ R H Q, where

    H ≡ ∀H′ Q′. ⟦if t̂1 then (t̂2 ; |R|) else tt⟧ H′ Q′ ⇒ R H′ Q′

Let hi be a heap satisfying H and let hk be a disjoint heap. The goal is to find two heaps hf and hg such that t̂/hi+hk ⇓: tt/hf+hk+hg and Q tt hf.

The soundness of a while-loop relies on the soundness of its encoding as a recursive function, called t̂′. It is straightforward to check that the term t̂′, shown next, has exactly the same semantics as t̂.

    t̂′ ≡ (let rec f = λ_. (if t̂1 then (t̂2 ; f tt) else tt) in f tt)

So, the goal reformulates as t̂′/hi+hk ⇓: tt/hf+hk+hg and Q tt hf. The induction hypothesis applied to t̂′ asserts that the proposition ⟦t̂′⟧∅ H Q is a sufficient condition for proving this goal. It thus remains to establish that the characteristic formula for t̂ is stronger than the characteristic formula for t̂′.

A partial computation of the formula ⟦t̂′⟧∅ H Q gives the proposition ∀f. H0 ⇒ AppReturns f tt H Q, where

    H0 ≡ ∀H′ Q′. ⟦if t̂1 then (t̂2 ; f tt) else tt⟧∅ H′ Q′ ⇒ AppReturns f tt H′ Q′

Let f be a Coq value of type Func satisfying H0. To prove AppReturns f tt H Q, I apply the assumption ∀R. is_local R ∧ H ⇒ R H Q with R instantiated as AppReturns f tt. Note that R is then a local predicate, as required. There remains to prove the proposition [R → AppReturns f tt] H. This proposition is exactly the assumption H0, since ⟦|AppReturns f tt|⟧ is the same as AppReturns f tt. After all, the fact that [R → AppReturns f tt] H matches the assumption H0 follows from the design of the characteristic formula of a while loop as (a simplified version of) the characteristic formula of its recursive encoding.
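To make the encoding exploited in this proof case concrete, here is a minimal OCaml sketch, not taken from the dissertation; the names run_while, run_while_encoded, cond and body are illustrative placeholders. The two functions have the same observable behaviour, which is the property the proof relies on.

    (* Illustrative sketch: a while loop and its encoding as recursion. *)
    let run_while cond body =
      while cond () do body () done

    (* Encoding mirroring the term t' used in the proof:
       let rec f = fun _ -> if cond then (body; f tt) else tt in f tt *)
    let run_while_encoded cond body =
      let rec f () = if cond () then (body (); f ()) else () in
      f ()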
• Case t̂ = (for x = n to n′ do t̂1). The proof of soundness for for-loops is very similar to that given for while-loops, so I only give the main steps. The characteristic formula for t̂ is ∀S. is_local1 S ∧ H ⇒ S a H Q, where

    H ≡ ∀i H′ Q′. ⟦if i ≤ b then (t̂1 ; |S (i + 1)|) else tt⟧ H′ Q′ ⇒ S i H′ Q′

The term t̂′ that corresponds to the encoding of t̂ is defined as follows.

    t̂′ ≡ (let rec f = λi. (if i ≤ b then (t̂1 ; f (i + 1)) else tt) in f a)

The characteristic formula of t̂′ is ∀f. H0 ⇒ AppReturns f a H Q, where

    H0 ≡ ∀i H′ Q′. ⟦if i ≤ b then (t̂1 ; f (i + 1)) else tt⟧∅ H′ Q′ ⇒ AppReturns f i H′ Q′

To prove the characteristic formula of t̂′ using that of t̂, it suffices to prove that H0 implies H, which is obtained by instantiating S as AppReturns f.
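The encoding of for-loops used in this case can be illustrated in the same way. The following OCaml sketch is purely illustrative, with hypothetical names run_for and run_for_encoded; it mirrors the term t̂′ defined above, where the recursive function takes the current index as argument.

    (* Illustrative sketch: a for loop and its encoding as recursion. *)
    let run_for a b body =
      for i = a to b do body i done

    let run_for_encoded a b body =
      let rec f i = if i <= b then (body i; f (i + 1)) else () in
      f a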
There remains to prove the soundness of the specification of the primitive functions.

• Case ref. The specification is Spec ref (λV R. R [ ] (λL. L ↪T V)), where T is the type of the argument V. Considering the definition of Spec, this is equivalent to showing that the relation AppReturnsT,Loc ref V [ ] (λL. L ↪T V) holds for any value V of type T. By definition of AppReturns, the goal is to show

    ∀hi hk. (hi ⊥ hk) ∧ [ ] hi ⇒ ∃L hf hg.
        AppEvalT,Loc ref V (hi + hk) L (hf + hk + hg) ∧ (L ↪T V) hf

The hypothesis [ ] hi asserts that hi is empty, meaning that the evaluation of ref takes place in a heap hk. Let v̂ be equal to ⌊V⌋. So, V is equal to ⌈v̂⌉. The reduction rule for ref asserts that (ref v̂)^loc /hk ⇓: l^loc /(hk ⊎ [l ↦ ⌈v̂⌉]) holds for some location l fresh from the domain of hk. To conclude, it suffices to instantiate L as ⌈l⌉, hf as the singleton heap (L →T V) and hg as the empty heap.
• Case get. The specification is Spec get (λL R. ∀V. R (L ↪T V) (λV′. [V′ = V] ∗ (L ↪T V))), where T is the type of V. Considering the definition of Spec, it suffices to prove that AppReturnsLoc,T get L (L ↪T V) (λV′. [V′ = V] ∗ (L ↪T V)) holds for any location L and any value V. Let L be a particular location and V be a particular value of type T. By definition of AppReturns, the goal is:

    ∀hi hk. (hi ⊥ hk) ∧ (L ↪T V) hi ⇒
        ∃V′ hf hg.  AppEvalLoc,T get L (hi + hk) V′ (hf + hk + hg)
                    ∧ V′ = V
                    ∧ (L ↪T V) hf

To prove the goal, I instantiate V′ as V, hf as hi and hg as the empty heap. There remains to prove AppEval get L (hi + hk) V (hi + hk). The hypothesis (L ↪T V) hi asserts that hi is a singleton heap of the form (l →T V). Let l denote the location ⌊L⌋ and let v̂ denote the value ⌊V⌋. The value v̂ has type ⌊T⌋, which is written T in what follows. The goal can be reformulated as the following statement: AppEvalLoc,⌈T⌉ get ⌈l⌉ ((l →T V) + hk) ⌈v̂⌉ ((l →T V) + hk). By definition of AppEval, it suffices to prove (get l)^T /(l→⌈v̂⌉)+hk ⇓: v̂^T /(l→⌈v̂⌉)+hk. This proposition follows directly from the reduction rule for get.

• Case set. The specification is Spec set (λ(L, V) R. ∀V′. R (L ↪T′ V′) (# (L ↪T V))). The goal is to prove that AppReturns(Loc×T),Unit set (L, V) (L ↪T′ V′) (# (L ↪T V)) holds for any location L and for any values V and V′. Given such parameters, the goal is equivalent to:

    ∀hi hk. (hi ⊥ hk) ∧ (L ↪T′ V′) hi ⇒
        ∃V″ hf hg.  AppEval(Loc×T),unit set (L, V) (hi + hk) V″ (hf + hk + hg)
                    ∧ V″ = tt
                    ∧ (L ↪T V) hf

I instantiate V″ as ⌈tt⌉, hf as (L →T V) and hg as the empty heap. Let l denote ⌊L⌋, v̂ denote ⌊V⌋ and v̂′ denote ⌊V′⌋. The assumption (L ↪T′ V′) hi asserts that hi is of the form (l →T′ v̂′). There remains to prove the following proposition: AppEval set ⌈(l, v̂)⌉ ((l → v̂′) + hk) ⌈tt⌉ ((l → v̂) + hk). This proposition follows from the reduction rule for set, since (set l v̂)^unit /(l→v̂′)+hk ⇓: tt^unit /(l→v̂)+hk.

• Case cmp. The proof is straightforward since cmp does not involve any reasoning on the heap.
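As a reminder of what the primitive operations whose specifications are proved sound above look like at the programming-language level, here is a small OCaml sketch; it is illustrative only, and the names incr and demo are not taken from the dissertation.

    (* ref allocates a fresh cell, := (set) updates it, ! (get) reads it. *)
    let incr r = r := !r + 1

    let demo () =
      let r = ref 0 in   (* allocation: yields a fresh location storing 0 *)
      incr r;            (* update: the cell now stores 1 *)
      !r                 (* read: returns 1 *)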
7.3 Completeness
The proof of completeness of characteristic formulae for imperative programs involves two additional ingredients compared with the corresponding proof from the purely functional setting. First, it involves the notion of most-general heap specification, which asserts that, for every value stored in the heap, the most-general specification of that value can be assumed to hold. Second, the definition of most-general specifications needs to take into account the fact that the exact memory address of a location cannot be deduced from characteristic formulae. Indeed, the specification of the allocation function ref simply describes the creation of one fresh location, but without revealing its address. As a consequence, the heap can only be specified up to renaming. The practical implication is that all the predicates become parameterized by a finite map from program locations to program locations, written α, whose purpose is to rename all the locations occurring in values, terms and heaps.
7.3.1 Most-general specifications

The notion of labelling of terms and the definition of correct labelling with respect to a set E of values of type Func are the same as in the previous chapter. The definition of body description is also the same as before, except that it includes a pre-condition.

Definition 7.3.1 (Body description of a labelled function)

    body (µf.ΛA.λx.t̃){F} ≡ ∀A. ∀X. ∀H. ∀Q. (⟦t̃⟧(x↦X, f↦F) H Q) ⇒ AppReturns F X H Q

The most-general specification predicate from the previous chapter is recalled next. The predicate mgs v̂ holds of a value V if there exists a correct labelling ṽ of the value v̂ with constants from some set E, such that V is equal to the decoding of ṽ.

Definition 7.3.2 (Most-general specification of a value, old version) Let v̂ be a well-typed value.

    mgs v̂ ≡ λV. ∃E. ∃ṽ. v̂ = strip_labels ṽ ∧ labels E ṽ ∧ V = ⌈ṽ⌉
The definition used in this chapter extends it with a renaming map for locations, that is, a finite map binding locations to locations. Let locs(v̂) denote the set of locations occurring in the value v̂. Let α v̂ denote the renaming of the locations occurring in v̂ according to the map α. The new specification predicate, written mgv α v̂, corresponds to the most-general specification of the value (α v̂).

Definition 7.3.3 (Most-general specification of a value) Let v̂ be a well-typed value, and α be a map from locations to locations.

    mgv α v̂ ≡ λV. mgs (α v̂) V ∧ locs(v̂) ⊆ dom(α)

The definition of mgv also includes a side-condition to ensure that the domain of α covers all the locations occurring in the value v̂. The role of this side-condition is to guarantee that the meaning of the predicate mgv α v̂ is preserved through an extension of the map α. Indeed, when all the locations of v̂ are covered by α, then for any α′ that extends α, the value (α′ v̂) is equal to the value (α v̂).

The expression α m̂ denotes the application of the renaming α both to the domain of the map m̂ and to the values in the range of m̂. The predicate mgh α m̂ describes the most-general specification of a typed store m̂ with respect to a renaming map α. If h is a heap, then the proposition mgh α m̂ h holds under the following two conditions. First, the domain of h should correspond to the renaming of the domain of m̂ with respect to α. Second, for every value v̂ stored at a location l in m̂, the value V stored at the corresponding location α l in h should satisfy the most-general specification of v̂ with respect to α. Technically, the proposition mgv α v̂ V should hold. In the definition of mgh shown next, the value v̂ is referred to as m̂[l], and the value h[α l] contained in the heap h is a dependent pair made of the type T and of the value V of type T.

Definition 7.3.4 (Most-general specification of a store) Let m̂ be a well-typed store and let α be a renaming map for locations.

    mgh α m̂ ≡ λh.    dom(h) = dom(α m̂)
                   ∧ ∀l ∈ dom(m̂). mgv α (m̂[l]) V   where (T, V) = h[α l]
                   ∧ locs(m̂) ⊆ dom(α)

Again, a side-condition is involved to assert that all the locations occurring in the domain or in the range of m̂ are covered by the renaming α.
It remains to define the notion of most-general post-condition of a term. The proposition mgp α v̂ m̂ describes the most-general post-condition that can be assigned to a term whose evaluation returns the typed value v̂ in a typed memory state m̂. The predicate mgp α v̂ m̂ holds of a value V and of the heap h if there exists a map α′ that extends α such that the value V satisfies the most-general specification of the value v̂ modulo α′ and such that the heap h satisfies the most-general specification of the store m̂ modulo α′.

Definition 7.3.5 (Most-general post-condition) Let v̂ be a well-typed value, let m̂ be a well-typed store, and let α be a renaming map for locations.

    mgp α v̂ m̂ ≡ λV. λh. ∃α′. α′ ⊒ α ∧ mgv α′ v̂ V ∧ mgh α′ m̂ h

Remark: the predicate mgp α v̂ m̂ can also be formulated with heap predicates, as shown next.

    mgp α v̂ m̂ = λV. ∃∃α′. [α′ ⊒ α] ∗ [mgv α′ v̂ V] ∗ (mgh α′ m̂)

The renaming map α is extended every time a new location is being allocated. The following lemma explains that the predicates mgv and mgh are covariant in the argument α and that the predicate mgp is contravariant in its argument α.

Lemma 7.3.1 (Preservation through extension of the renaming) Let α and α′ be two renamings for locations.

    α′ ⊒ α ∧ mgv α v̂ V      ⇒   mgv α′ v̂ V
    α′ ⊒ α ∧ mgh α m̂ h      ⇒   mgh α′ m̂ h
    α ⊒ α′ ∧ mgp α v̂ m̂ V h  ⇒   mgp α′ v̂ m̂ V h

Proof   For mgv, the assumption locs(v̂) ⊆ dom(α) implies that (α′ v̂) is equal to (α v̂). For mgh, the assumption locs(m̂) ⊆ dom(α) implies that (α′ m̂) is equal to (α m̂) and that for any location l in the domain of m̂, (α′ l) is equal to (α l). For mgp, the hypothesis mgp α v̂ m̂ V h asserts the existence of some map α″ such that α″ ⊒ α and mgv α″ v̂ V and mgh α″ m̂ h. To prove the conclusion mgp α′ v̂ m̂ V h, it suffices to observe that the map α″ extends the map α and that the map α extends the map α′, so by transitivity α″ extends α′.

7.3.2 Completeness theorem
Theorem 7.3.1 (Completeness for full, well-typed executions) Let t̂ be a closed term that does not contain any location nor any function closure, let v̂ be a typed value, and let m̂ be a typed store.

    ⊢ t̂  ∧  t̂/∅ ⇓: v̂/m̂   ⇒   ⟦t̂⟧∅ [ ] (λV. ∃∃α. [mgv α v̂ V] ∗ (mgh α m̂))

Proof   I prove by induction on the derivation of the reduction judgment that if a term t̂ in a store m̂ reduces to a value v̂ in a store m̂′, then for any correct labelling t̃ of t̂ and for any renaming of locations α, the characteristic formula of (α t̃) holds of the pre-condition that corresponds to the most-general specification of the store m̂ and of the post-condition that corresponds to the most-general specification associated with the value v̂ and the store m̂′. The formal statement, which follows, includes side-conditions asserting that the domain of the renaming map for locations α should correspond exactly to the domain of the store m̂, and that the locations occurring in the term t̂ are actually allocated, in the sense that they belong to the domain of the store m̂.

    t̂/m̂ ⇓: v̂/m̂′  ⇒
    ∀t̃ α E.   ⊢ t̂  ∧  ⊢ m̂
              ∧  t̂ = strip_labels t̃  ∧  labels E (α t̃)
              ∧  dom(α) = dom(m̂)  ∧  locs(t̂) ⊆ dom(m̂)
        ⇒  ⟦α t̃⟧∅ (mgh α m̂) (mgp α v̂ m̂′)

There is one case per possible reduction rule. In each case, the characteristic formula starts with the predicate local, which I directly eliminate (recall that F H Q always implies local F H Q). Henceforth, I refer to the pre-condition mgh α m̂ as H and to the post-condition mgp α v̂ m̂′ as Q.

• Case v̂/m̂ ⇓: v̂/m̂. The term t̃ labels the value v̂. Let ṽ denote the labelling of the value v̂. The goal is to prove H ▷ Q ⌈α ṽ⌉, which unfolds to:

    ∀h. mgh α m̂ h ⇒ ∃α′. α′ ⊒ α ∧ mgv α′ v̂ ⌈α ṽ⌉ ∧ mgh α′ m̂ h

The conclusion is obtained by instantiating α′ as α. The proposition mgh α m̂ h holds by assumption, and the proposition mgv α v̂ ⌈α ṽ⌉ holds by definition of mgv, using the fact that labels E (α ṽ) holds.
• Case (let x = t̂1 in t̂2)/m̂ ⇓: v̂/m̂′ with t̂1/m̂ ⇓: v̂1/m̂″ and ([x → v̂1] t̂2)/m̂″ ⇓: v̂/m̂′. Let t̃1 and t̃2 be the labelled subterms associated with t̂1 and t̂2, respectively. Let T1 denote the type of x. The goal is:

    ∃Q′. ⟦α t̃1⟧∅ H Q′  ∧  ∀X. ⟦α t̃2⟧(x↦X) (Q′ X) Q

To prove this goal, I instantiate Q′ as the predicate mgp α v̂1 m̂″. The first subgoal is ⟦α t̃1⟧∅ H (mgp α v̂1 m̂″). It follows from the induction hypothesis.

For the second subgoal, let X be a value of type ⌈T1⌉. We need to establish the proposition ⟦α t̃2⟧(x↦X) (Q′ X) Q. By unfolding the definition of Q′ and applying the extraction rules for local predicates, the goal becomes:

    ∀α″. α″ ⊒ α ∧ mgv α″ v̂1 X ⇒ ⟦α t̃2⟧(x↦X) (mgh α″ m̂″) Q

Let α″ be a renaming map that extends α. By definition of mgv, there exists a set E′ and a labelling ṽ1 of v̂1 such that labels E′ (α″ ṽ1) and such that X is equal to ⌈α″ ṽ1⌉, moreover satisfying the inclusion locs(v̂1) ⊆ dom(α″). From the pre-condition (mgh α″ m̂″), we can extract the hypothesis that dom(α″) is equal to dom(m̂″). This fact is used later on.

By the substitution lemma, the formula ⟦α t̃2⟧(x↦X) is equivalent to ⟦[x → α″ ṽ1] (α t̃2)⟧∅. Since locs(t̂2) ⊆ dom(α) and α″ ⊒ α, the term (α t̃2) is equal to (α″ t̃2). So, it remains to prove the following proposition.

    ⟦α″ ([x → ṽ1] t̃2)⟧∅ (mgh α″ m̂″) (mgp α v̂ m̂′)

Since mgp is contravariant in the renaming map, and since the map α″ extends the map α, the post-condition mgp α v̂ m̂′ can be strengthened by the rule of consequence into the post-condition mgp α″ v̂ m̂′.

The strengthened goal then follows from the induction hypothesis. The premises can be checked as follows. First, the set E ∪ E′ correctly labels the term (α″ ([x → ṽ1] t̃2)) because both labels E′ (α″ ṽ1) and labels E (α t̃2) are true and because (α t̃2) is equal to (α″ t̃2). Second, the locations of the term [x → v̂1] t̂2 are included in the domain of m̂″ because the inclusions locs(v̂1) ⊆ dom(m̂″) and locs(t̂2) ⊆ dom(m̂) and dom(m̂) ⊆ dom(m̂″) hold. (The latter is a consequence of the fact that the store can only grow through time.)
• Case (t̂1 ; t̂2)/m̂ ⇓: v̂/m̂′. The treatment of sequences is a simplified version of the treatment of non-polymorphic let-bindings.
• Case (let x = ŵ1 in t̂2)/m̂ ⇓: v̂/m̂′ with ([x → ŵ1] t̂2)/m̂ ⇓: v̂/m̂′. Let S1 be the type of x. Let w̃1 and t̃2 be the subterms corresponding to ŵ1 and t̂2, respectively. The goal is:

    ∀X. X = ⌈α w̃1⌉ ⇒ ⟦α t̃2⟧(x↦X) H Q

Let X be a value of type ⌈S1⌉ such that X is equal to ⌈α w̃1⌉. The goal can be rewritten as ⟦α t̃2⟧(x↦⌈α w̃1⌉) H Q. By the substitution lemma, this proposition is equivalent to ⟦α ([x → w̃1] t̃2)⟧∅ H Q. The conclusion then follows directly from the induction hypothesis.
• Case (let rec f = ΛA.λx.t̂1 in t̂2)/m̂ ⇓: v̂/m̂′ with ([f → µf.ΛA.λx.t̂1] t̂2)/m̂ ⇓: v̂/m̂′. Let t̃1 and t̃2 be the subterms of t̃ corresponding to the labelling of the subterms t̂1 and t̂2. The proof is very similar to that of the purely-functional setting, so I give only the key steps. The goal is to prove:

    body (µf.ΛA.λx.(α t̃1)){F} ⇒ ⟦α t̃2⟧(f↦F) H Q

By the substitution lemma, the characteristic formula ⟦α t̃2⟧(f↦F) is equal to ⟦[f → (µf.ΛA.λx.(α t̃1)){F}] (α t̃2)⟧∅. So, the goal can be reformulated as ⟦α ([f → (µf.ΛA.λx.t̃1){F}] t̃2)⟧∅ H Q. This proposition follows from the induction hypothesis applied to the set E ∪ {F}.
• Case ((µf.ΛA.λx^T0. t̂1^T1) (v̂2^T2))^T /m̂ ⇓: v̂/m̂′, where ([f → µf.ΛA.λx.t̂1] [x → v̂2] [A → T] t̂1)/m̂ ⇓: v̂/m̂′, and the assumptions T2 = [A → T] T0 and T = [A → T] T1 hold. Again, the structure of the proof is similar to that of the purely-functional setting. Let t̃1 and ṽ2 be the labelled subterms associated with t̂1 and v̂2, respectively. By hypothesis, the function µf.ΛA.λx.α t̃1 is labelled with some constant F such that the proposition body (µf.ΛA.λx.α t̃1){F} holds. The goal is to prove AppReturns F ⌈α ṽ2⌉ H Q. This proposition is proved by application of the body hypothesis. The premise to be established is ⟦[A → T] (α t̃1)⟧(x↦⌈α ṽ2⌉, f↦F) H Q. By application of the substitution lemma, and by factorizing the renaming α, the proposition can be reformulated as ⟦α ([f → (µf.ΛA.λx.t̃1){F}] [x → ṽ2] [A → T] t̃1)⟧∅ H Q, which is provable directly from the induction hypothesis.
• Case (if true then t̂1 else t̂2)/m̂ ⇓: v̂/m̂′ with t̂1/m̂ ⇓: v̂/m̂′. The goal is:

    (true = true ⇒ ⟦α t̃1⟧∅ H Q) ∧ (true = false ⇒ ⟦α t̃2⟧∅ H Q)

Thus, it suffices to prove the proposition ⟦α t̃1⟧∅ H Q, which follows directly from the induction hypothesis. The case where the argument of the conditional is the value false is symmetrical.
• Case (for x = n to n′ do t̂1)/m̂ ⇓: tt/m̂′. There are two cases. Either n > n′, in which case the loop does not execute, the result is the value tt, and the goal is to prove H ▷ Q ⌈tt⌉. It suffices to instantiate α′ as α and to check that mgv α tt ⌈tt⌉ holds. Otherwise, n ≤ n′. In this case, I prove the characteristic formula for for-loops stated in terms of a loop invariant. This formula is indeed a sufficient condition for establishing the general characteristic formula that supports local reasoning.

The execution of the loop goes through several intermediate states, call them (m̂i)i∈[n,n′+1], such that m̂n = m̂ and m̂n′+1 = m̂′. The goal is to find an invariant I that satisfies the three following conjuncts, in which N stands for ⌈n⌉ and N′ stands for ⌈n′⌉.

    H ▷ I N
    I (N′ + 1) ▷ Q ⌈tt⌉
    ∀X ∈ [N, N′]. ⟦t̂1⟧(x↦X) (I X) (# I (X + 1))

Let I be the predicate λi. ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂i). For the first conjunct, the heap predicate H is equal to (mgh α m̂n), so H ▷ I N holds. For the second conjunct, the heap predicate I (N′ + 1) is equivalent to ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂′). It thus entails Q ⌈tt⌉, which is defined as mgp α tt m̂′ ⌈tt⌉.

For the third conjunct, let X be a Coq integer in the range [N, N′]. Let p be the corresponding program integer, in the sense that ⌈p⌉ is equal to X. By the substitution lemma, the goal ⟦t̂1⟧(x↦X) (I X) (# I (X + 1)) simplifies to ⟦[x → p] t̂1⟧∅ (I ⌈p⌉) (# I (X + 1)). Since the characteristic formula is a local predicate, we can extract from the pre-condition I ⌈p⌉ the assumption that there exists a map α″ that extends α, and it then suffices to prove that ⟦[x → p] t̂1⟧∅ (mgh α″ m̂p) (# I (X + 1)) does hold. The induction hypothesis applied to ([x → p] t̂1)/m̂p ⇓: tt/m̂p+1 gives ⟦[x → p] t̂1⟧∅ (mgh α″ m̂p) (mgp α″ tt m̂p+1).

It remains to invoke the rule of consequence and check that the post-condition mgp α″ tt m̂p+1 entails the post-condition # I (⌈p⌉ + 1). The former post-condition asserts the existence of a map α′ that extends α″ such that the heap satisfies mgh α′ m̂p+1. The latter asserts the existence of a map α‴ that extends α such that the heap satisfies mgh α‴ m̂p+1. To prove it, it suffices to instantiate α‴ as α′ and to check that α′ extends α. This fact can be obtained by transitivity, since α′ extends α″ and α″ extends α.
• Case (while t̂1 do t̂2)/m̂ ⇓: tt/m̂′. The proof of completeness of characteristic formulae for while-loops generalizes the proof given for for-loops. Again, I rely on a loop invariant. The evaluation of the loop (while t̂1 do t̂2) goes through a sequence of intermediate typed states:

    m̂ = m̂0,  m̂′0,  m̂1,  m̂′1,  m̂2,  m̂′2,  ...,  m̂n,  m̂′n = m̂′

where the evaluation of the loop condition t̂1 takes from a state m̂i to the state m̂′i and the evaluation of the loop body t̂2 takes from a state m̂′i to the state m̂i+1. So, m̂i describes a state before the evaluation of the loop condition t̂1 and m̂′i describes the corresponding state right after the evaluation of t̂1. Thanks to the bijection between well-typed memory stores and heaps, I can define a sequence of heaps hi and h′i that correspond to m̂i and m̂′i.

In the characteristic formula for while-loops expressed with loop invariants, I instantiate A as the type Heap, and I instantiate ≺ as a binary relation such that hi ≺ hj holds if and only if i is greater than j. Moreover, I instantiate the predicates I and J as follows.

    I ≡ λh. ∃∃i. [h = hi] ∗ ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂i)
    J ≡ λh. λb. ∃∃i. [h = hi] ∗ ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂′i) ∗ [b = true ⇔ i < n]

It remains to establish the following facts.

    (1) well-founded(≺)
    (2) ∃X0. H ▷ I X0
    (3) ∀X. ⟦t̂1⟧ (I X) (J X)
    (4) ∀X. ⟦t̂2⟧ (J X true) (# ∃∃Y. (I Y) ∗ [Y ≺ X])
    (5) ∀X. J X false ▷ Q ⌈tt⌉

(1) The relation ≺ is well-founded because no two memory states m̂i and m̂j can be equal when i ≠ j. Otherwise, if two states were equal, then the while-loop would run as an infinite loop, contradicting the assumption that the execution of the term t̂ does terminate.

(2) The value X0, of type Heap, can be instantiated as h0. By hypothesis, this heap satisfies mgh α m̂0, so it satisfies the heap predicate I h0 (take i = 0 and α′ = α in the definition of I).

(3) Let X be a heap. Consider the goal ⟦t̂1⟧ (I X) (J X). Let h be another name for X. Unfolding the definition of I and exploiting the fact that ⟦t̂1⟧ is a local predicate, it suffices to prove:

    ∀i. ∀α′. h = hi ∧ α′ ⊒ α ⇒ ⟦t̂1⟧ (mgh α′ m̂i) (J h)

By definition of the heaps m̂i and m̂′i, the reduction t̂1/m̂i ⇓: ⌊r⌋/m̂′i holds, where r is a boolean such that r = true ⇔ i < n. By induction hypothesis, t̂1 admits the post-condition mgp α′ ⌊r⌋ m̂′i, which is equivalent to λb. ∃∃α″. [α″ ⊒ α′] ∗ [b = r] ∗ (mgh α″ m̂′i). One can check that this post-condition entails J h (instantiate the existential index as i and the existential renaming as α″ in the definition of J).

(4) Let X be a heap. Consider the goal ⟦t̂2⟧ (J X true) (# ∃∃Y. (I Y) ∗ [Y ≺ X]). By definition of J, X is equal to a heap hi with i < n. Moreover, there exists a map α′ that extends α such that the input heap satisfies mgh α′ m̂′i. The induction hypothesis applied to the reduction sequence t̂2/m̂′i ⇓: tt/m̂i+1 gives the proposition ⟦t̂2⟧∅ (mgh α′ m̂′i) (mgp α′ tt m̂i+1). The post-condition mgp α′ tt m̂i+1 asserts the existence of a map α″ that extends α′ such that the output heap satisfies mgh α″ m̂i+1. So, this post-condition entails the target post-condition # ∃∃Y. (I Y) ∗ [Y ≺ X]. Indeed, it suffices to instantiate Y as hi+1, which is indeed smaller than hi with respect to ≺. The heap predicate I Y is then equivalent to ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂i+1), which follows from mgh α″ m̂i+1 by instantiating α′ as α″. The relation α″ ⊒ α is obtained by transitivity.

(5) It remains to prove J X false ▷ Q ⌈tt⌉ for any heap X. By definition of J, there exists an index i such that X is equal to hi and i is not smaller than n. Hence, X must be equal to hn. So, J X false is equivalent to ∃∃α′. [α′ ⊒ α] ∗ (mgh α′ m̂′n). This heap predicate is indeed equivalent to Q ⌈tt⌉, by definition of Q.
• Case (ref v̂)^loc /m̂ ⇓: l^loc /m̂′ where l is a location fresh from the domain of m̂ and m̂′ is the heap m̂ ⊎ [l ↦ v̂]. Let T denote the type of v̂, and let T denote the corresponding Coq type ⌈T⌉. Let ṽ be the labelled value associated with v̂, and let V be a shorthand for ⌈α ṽ⌉. The goal is to prove AppReturnsT,loc ref ⌈α ṽ⌉ H Q.

The specification of ref gives AppReturnsT,loc ref V [ ] (λL. L ↪T V). The frame rule then gives AppReturnsT,loc ref V H (λL. (L ↪T V) ∗ H). The goal follows from the rule of consequence applied to this fact. It remains to establish that, for any location L, the heap predicate (L ↪T V) ∗ H is stronger than Q L. The heap predicate Q L is equivalent to:

    ∃∃α′. [α′ ⊒ α] ∗ [mgv α′ l L] ∗ (mgh α′ m̂′)

Let L be a location and h′ be a heap satisfying the predicate (L ↪T V) ∗ H. The goal is to prove that the above heap predicate also applies to h′. To prove the goal, I instantiate α′ as α ⊎ [l ↦ ⌊L⌋]. The first subgoal α′ ⊒ α holds by definition of α′. The second subgoal mgv α′ l L simplifies to L = α′ l, which is also true by definition of α′. It remains to prove the third subgoal, which asserts that h′ satisfies the heap predicate (mgh α′ m̂′).

Because h′ satisfies (L ↪T V) ∗ H, the heap h′ decomposes as a singleton heap satisfying (L ↪T V) and as a heap h satisfying H, which was defined as mgh α m̂. The assumptions given by mgh α m̂ h are as follows.

    dom(h) = dom(α m̂)
    ∀l′ ∈ dom(m̂). mgv α (m̂[l′]) V′   where (T′, V′) = h[α l′]
    locs(m̂) ⊆ dom(α)

To prove (mgh α′ m̂′ h′), we need to establish the following facts.

    dom(h′) = dom(α′ m̂′)
    ∀l′ ∈ dom(m̂′). mgv α′ (m̂′[l′]) V′   where (T′, V′) = h′[α′ l′]
    locs(m̂′) ⊆ dom(α′)

The first and the third facts are easy to verify. For the second fact, let l′ be a location in the domain of m̂′. If l′ is equal to l, then (α′ l′) is equal to L and the goal is mgv α′ (m̂′[l]) V. The proposition simplifies to mgv α′ v̂ ⌈α ṽ⌉, which follows from the tautology mgv α v̂ ⌈α ṽ⌉, by covariance of mgv in the α-renaming map. Otherwise, if l′ is not equal to l, then l′ belongs to the domain of m̂ and the covariance of mgv in the α-renaming map can be used to derive mgv α′ (m̂[l′]) V′ from mgv α (m̂[l′]) V′.
• Case (get l)^T /m̂ ⇓: v̂^T /m̂ where v̂^T = m̂[l]. Let T stand for ⌈T⌉ and let L stand for ⌈α l⌉. The goal is AppReturnsloc,T get L (mgh α m̂) (mgp α v̂ m̂). We could use the frame rule as in the previous proof case to prove this goal; however, the proof is simpler when we directly unfold the definition of AppReturns and work with the predicate AppEval. The goal then reformulates as follows.

    ∀hi hk. mgh α m̂ hi ⇒ ∃V hf hg.  AppEval get L (hi + hk) V (hf + hk + hg)
                                     ∧ mgp α v̂ m̂ V hf

From the specification of get applied to the location L, we get:

    ∀V. AppReturnsloc,T get L (L ↪T V) (λV′. [V′ = V] ∗ (L ↪T V))

Reformulating this proposition in terms of AppEval gives:

    ∀V h′i h′k. (L ↪T V) h′i ⇒ ∃V′ h′f h′g.  AppEval get L (h′i + h′k) V′ (h′f + h′k + h′g)
                                             ∧ V′ = V
                                             ∧ (L ↪T V′) h′f

It remains to find the suitable instantiations for proving the goal by exploiting the above proposition. Since l belongs to the domain of m̂ and since the hypothesis mgh α m̂ hi holds, we know that L belongs to the domain of hi and that mgv α (m̂[l]) (hi[l]) holds. Let V stand for the value hi[l]. The assertion reformulates as mgv α v̂ V. Moreover, there exists a heap hr such that the heap hi decomposes as the disjoint union of the singleton heap L →T V and of hr. Let h′i be instantiated as L →T V, and let h′k be instantiated as hr + hk. The union h′i + h′k is equal to the union hi + hk.

We can check the premise (L ↪T V) h′i, and obtain the existence of V′, h′f and h′g satisfying the three conclusions derived from the specification of get. The second conclusion ensures that V′ is equal to V. The third conclusion, (L ↪T V′) h′f, asserts that h′f is a singleton heap of the form L →T V′, so it is the same as h′i. In particular, the union h′f + h′k is equal to the union hi + hk. The first conclusion, AppEval get L (h′i + h′k) V′ (h′f + h′k + h′g), is then equivalent to the proposition AppEval get L (hi + hk) V (hi + hk + h′g).

To conclude, it remains to instantiate V as V, hf as hi, hg as h′g, and to prove mgp α v̂ m̂ V hi. Instantiating α′ as α in this proposition, we have to prove mgv α v̂ V, which holds by assumption, and to prove mgh α m̂ hi, a fact which has been extracted earlier on from the assumption mgh α m̂ hi.
• Case (set l v̂)^unit /m̂ ⇓: tt^unit /m̂′ where l belongs to the domain of m̂ and m̂′ stands for m̂[l ↦ v̂]. Let T be the type of v̂ and let T stand for ⌈T⌉. By assumption, there exists a labelled value ṽ that corresponds to v̂, such that labels E ṽ. Let V stand for ⌈α ṽ⌉, and let L stand for ⌈α l⌉. The goal is:

    AppReturns(loc×T),unit set (L, V) (mgh α m̂) (mgp α tt m̂′)

As in the previous proof case, we can rewrite this goal in terms of AppEval. The statement shown next takes into account the fact that the return value is the unit value.

    ∀hi hk. mgh α m̂ hi ⇒ ∃hf hg.  AppEval set (L, V) (hi + hk) tt (hf + hk + hg)
                                   ∧ mgp α tt m̂′ ⌈tt⌉ hf

From the specification of set applied to the location L, we get:

    ∀V T′ V′. AppReturns(loc×T),unit set (L, V) (L ↪T′ V′) (# (L ↪T V))

Reformulating this proposition in terms of AppEval gives:

    ∀V T′ V′ h′i h′k. (L ↪T′ V′) h′i ⇒ ∃h′f h′g.  AppEval set (L, V) (h′i + h′k) tt (h′f + h′k + h′g)
                                                  ∧ (L ↪T V) h′f

It remains to find the suitable instantiations. Since l belongs to the domain of m̂ and since the hypothesis mgh α m̂ hi holds, we know that L belongs to the domain of hi and that mgv α (m̂[l]) (hi[l]) holds. Let V′ denote the value hi[l]. There exists a heap hr such that the heap hi decomposes as the disjoint union of the singleton heap L →T′ V′ and of hr. Let h′i be instantiated as L →T′ V′, and let h′k be instantiated as hr + hk. The union h′i + h′k is equal to the union hi + hk.

We can check the premise (L ↪T′ V′) (L →T′ V′), and obtain the existence of h′f and h′g satisfying the two conclusions derived from the specification of set. The second conclusion, (L ↪T V) h′f, asserts that h′f is a singleton heap of the form L →T V. Let hf be instantiated as h′f + hr. In particular, the union h′f + h′k is equal to the union hf + hk. Moreover, let hg be instantiated as h′g. The first conclusion, AppEval set (L, V) (h′i + h′k) tt (h′f + h′k + h′g), is then equivalent to AppEval set (L, V) (hi + hk) tt (hf + hk + hg).

It remains to prove mgp α tt m̂′ ⌈tt⌉ hf. Instantiating α′ as α in this predicate, it remains to prove mgv α tt ⌈tt⌉, which is true, and mgh α m̂′ hf. To prove the latter, let l′ be a location in the domain of m̂′, which is the same as the domain of m̂. We have to prove mgv α (m̂′[l′]) (hf[⌈α l′⌉]). There are two cases. If l′ is not the location l, then m̂′[l′] is equal to m̂[l′] and hf[⌈α l′⌉] is equal to hi[⌈α l′⌉], so the assumption mgh α m̂ hi can be used to conclude. Otherwise, if l′ is the location l, then m̂′[l] is equal to v̂ and hf[⌈α l⌉] is equal to V (because V stands for h′f[L] and L has been defined as ⌈α l⌉), so the goal is to prove mgv α v̂ V, which holds because V has been defined as ⌈α ṽ⌉ and labels E ṽ holds by assumption.

• Case (cmp l k′)/m̂ ⇓: b̂^bool /m̂. The proof is straightforward since cmp does not involve any reasoning on the heap.

7.3.3 Completeness for integer results
A simpler statement of the completeness theorem can be given in the case of a program that admits the type int in ML. To lighten the presentation, I identify Caml integers with Coq integers.

Theorem 7.3.2 (Completeness for ML programs with integer results) Consider a closed ML program of type int. Let t̂ denote the corresponding term typed in weak-ML, and let t denote the corresponding raw term. Let n be an integer, let P be a Coq predicate on integers, and let m be a memory store.

    t/∅ ⇓ n/m  ∧  P n   ⇒   ⟦t̂⟧∅ [ ] (λn. [P n])

Proof   Since the term t is well-typed in ML, the reduction derivation t/∅ ⇓ n/m can be turned into a typed reduction derivation t̂^int /∅ ⇓: n^int /m̂. By the completeness theorem applied to that derivation, there exists a renaming α such that:

    ⟦t̂⟧∅ [ ] (λV. ∃∃α. [mgv α n V] ∗ (mgh α m̂))

The characteristic formula is a local predicate, so we can apply the rule of garbage collection to ignore the post-condition about the final store m̂. It gives:

    ⟦t̂⟧∅ [ ] (λV. ∃∃α. [mgv α n V])

To prove the conclusion, we apply the rule of consequence, and there remains to establish an implication between the post-conditions:

    ∀V. (∃α. mgv α n V) ⇒ P V

The hypothesis mgv α n V asserts the existence of a correct labelling ñ of n such that V = ⌈α ñ⌉. Since the integer n does not contain any function nor any location, the hypothesis simplifies to V = n. The conclusion P V then follows directly from the assumption P n.
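As an illustration of the kind of program covered by Theorem 7.3.2, consider the following closed Caml program of type int; the example is hypothetical and not taken from the dissertation. Its evaluation allocates a reference, so the final store m is not empty, which is precisely the situation in which the rule of garbage collection is used in the proof to discard the description of the final store.

    (* A closed program of type int; it evaluates to 3 in a store that
       contains one allocated cell. *)
    let program =
      let r = ref 1 in
      r := !r + 2;
      !r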
Chapter 8
Related work
The discussion of related work is organized around five sections. The first one is concerned with the origins of characteristic formulae and the comparison with the program logics developed by Honda, Berger, and Yoshida. In a second part, I focus on Separation Logic. I then discuss approaches based on Verification Condition Generators (VCG), approaches based on shallow embeddings, and approaches based on deep embeddings.
8.1 Characteristic formulae
Characteristic formulae in process calculi

The notion of characteristic formula originates in process calculi. The characterization theorem [70, 57] states that two processes are bisimilar if and only if they satisfy the same set of formulae in Hennessy-Milner logic [35]. In this context, a formula F is said to be the characteristic formula of a process p if the set of processes that satisfy F matches exactly the set of processes that are bisimilar to p. Graf and Sifakis [32] described an algorithm for constructing the characteristic formula of a process from the syntactic definition of that process. More precisely, they explained how to build a modal logic formula that characterizes the equivalence class of a given CCS process (for processes that do not involve recursion). Aceto and Ingólfsdóttir [1] later described a similar generation algorithm for translating regular CCS processes into their characteristic formulae, which are there expressed in Hennessy-Milner logic extended with recursion.

The behavioral equivalence or dis-equivalence of two processes can be established by comparing their characteristic formulae. Such proofs can be conducted in a high-level temporal logic rather than through reasoning on the syntactic definition of the processes. Somewhat similarly, the characteristic formulae developed in this thesis allow reasoning on a program to be conducted on higher-order logic formulae, without referring to the source code of that program at any time.
Total characteristic assertion pairs

The first development of a program-logic counterpart to process-logic characteristic formulae is due to recent work by Honda, Berger and Yoshida [40]. This work originates in the study of a correspondence between process logics and program logics [38], where Honda showed how an encoding of a high-level programming language into the π-calculus can be used to turn a logic for the π-calculus with linear types into a compositional program logic for a higher-order functional programming language. This program logic, which is described in detail in [41], was later extended to an imperative programming language with global state [10], and then further extended to support reasoning on aliasing [39] and reasoning on local state [87].

In the logics for higher-order imperative programs, the specification judgment takes the form of a Hoare triple {C} t :x {C′}, where t is a term, C and C′ are assertions about the initial heap and the final heap, and x is a bound name used to refer to the result produced by t in the post-condition C′. So, the interpretation of the judgment {C} t :x {C′} is essentially the same as that of the triple {C} t {λx. C′}, which follows the presentation that I have used so far. The specification of functions and higher-order functions involves a ternary predicate, called evaluation formula and written v1 • v2 == v3, which asserts that the application of the value v1 to the value v2 returns the value v3. The specification thus takes place in a first-order logic extended with this ad-hoc evaluation predicate. Another specificity of the assertion logic is that its values are the values of the programming language (i.e., PCF values), including in particular non-terminating functions. Moreover, in the assertion logic, equality is interpreted as observational equivalence. All those constructions make the assertion logic nonstandard, making it difficult to reuse existing proof tools.

The series of work on program logics by Honda, Berger and Yoshida culminated in the development of total characteristic assertion pairs [40], abbreviated as TCAP, which corresponds to the notion of most-general Hoare triple. A most-general Hoare triple, also called most-general formula, is made of the weakest pre-condition, which corresponds to the minimal requirement for safe execution, and of the strongest post-condition, which precisely relates the output heap to the input heap. Most-general formulae were introduced by Gorelick in 1975 [31] for establishing a completeness result for Hoare logic. Yet, this theoretical result does not help verify programs in practice, because most-general formulae were there expressed in terms of the syntax and the semantics of the source program. The total characteristic assertion pairs (TCAPs) developed by Honda, Berger and Yoshida [40] precisely solve that problem, as TCAPs express the weakest pre-condition and the strongest post-condition in the assertion logic, without any reference to program syntax. Moreover, the TCAP of a program can be constructed automatically by application of a simple algorithm.

As those researchers point out, TCAPs suggest a new approach to proving that a program satisfies a given specification. Indeed, rather than building a derivation using the reasoning rules of the Hoare program logic, one may simply prove that the pre-condition of the specification implies the weakest pre-condition and that the post-condition of the specification is implied by the strongest post-condition. The verification of those two implications can be conducted entirely at the logical level. Yet, this appealing idea was never implemented, mainly because the treatment of first-class functions and of evaluation formulae in the assertion logic makes the logic nonstandard and precludes the use of a standard theorem prover. A central contribution of this thesis has been to show how to reconcile first-class functions and evaluation formulae with a standard logic, by representing functions in the logic through the deep embedding of their code (with the type Func) and by defining an evaluation formula (the predicate AppEval) in terms of the deep embedding of the semantics of the source programming language.

The presentation of characteristic formulae that I build also differs in several ways from TCAPs. First, characteristic formulae rely on existential quantification of intermediate specifications and loop invariants. This was not the case in TCAPs, which are expressed in first-order logic. Second, I have shown that characteristic formulae could be pretty-printed just like source code. This possibility was not pointed out in the work on TCAPs, even though, in principle, it should be possible to apply a similar notation system to TCAPs. Third, TCAPs rely on a reachability predicate for reasoning on dynamic allocation, whereas I have based my work upon standard techniques from Separation Logic, whose effectiveness has already been demonstrated. In particular, I introduced the predicate local to integrate the frame rule directly inside characteristic formulae.
8.2 Separation Logic
Tight specifications and the frame rule

Separation Logic has been developed initially by Reynolds [77] and by O'Hearn and Yang [67, 86], and subsequently by many others. It builds on earlier work by Burstall [14], on work by Bornat [12], and on the logic of Bunched Implications developed by O'Hearn, Ishtiaq and Pym [66, 42]. The starting point of Separation Logic is the tight interpretation of Hoare triples. The tight interpretation of a triple {H} t {Q} asserts that the execution of the term t is only allowed to modify memory cells that are described by the pre-condition H. This interpretation is the key to the soundness of the frame rule, which enables local reasoning.

Following the traditional presentation of Hoare logic, Separation Logic allows the local variables of a program to be modified, even though it does not require local variables to be described with an assertion of the form x ↪ v. In other words, in Separation Logic assertions, the name of a local variable is confused with the current value of that variable. For this reason, the frame rule is usually presented with a side-condition, as shown below. In this rule, c denotes a command, and p, q and r are assertions about the heap.

    {p} c {q}
    ─────────────────────   (where no variable occurring free in r is modified by c)
    {p ∗ r} c {q ∗ r}

Characteristic formulae require a frame rule without side condition because they cannot refer to program syntax. As observed by O'Hearn [68], the side-condition is not needed in a language such as Caml, where mutation is only allowed for heap-allocated data. More generally, it would suffice to distinguish the name of local variables from their contents in order to avoid the side-condition in the frame rule.
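The observation attributed to O'Hearn above can be made concrete with a short Caml sketch, which is illustrative only: a let-bound variable such as x below is immutable, and the only way to mutate state is through a heap-allocated reference such as r, so framing over an assertion that mentions x can never be invalidated by the command.

    let example () =
      let x = 1 in
      let r = ref 1 in
      r := !r + x;   (* only the heap cell behind r is modified; x cannot be *)
      !r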
Separation Logic tools

A number of verification tools have been built upon ideas from Separation Logic. For example, Smallfoot, developed by Berdine, Calcagno and O'Hearn [8], is an automated verification tool specialized for programs manipulating linked data structures. Also, Yang et al [85] have developed an automated tool for verifying dozens of thousands of lines of source code for device drivers, which mainly involve linked lists.

Because fully-automated tools are inherently limited in scope, other researchers have investigated the possibility to exploit Separation Logic inside interactive proof assistants. The first such embedding of Separation Logic appears to be the one by Marti and Affeldt [52, 53]. Yet, these researchers conducted their Coq proofs by unfolding the definition of the separating conjunction, which breaks the nice level of abstraction brought by this operator. Appel [2] has shown how to build reasoning tactics that do not break the abstraction layer, obtaining significantly shorter proof scripts. Further Separation Logic tactics were subsequently developed in many research projects, e.g., [24, 82, 62, 17, 55, 59].

Most of those tactics exploit a hypothesis of the form H h for proving a goal of the form H′ h. One exception is a recent version of Ynot [17], where the tactics directly handle goals of the form H ▷ H′, without involving a heap variable h in the proof. This approach allows working entirely at the level of heap predicates and it allows for the development of more efficient tactics. This is why I have followed that approach in the implementation of my tactic hsimpl (§5.7.1).

Local reasoning and while loops
The traditional Hoare logic reasoning rule for while loops is expressed using loop invariants. This rule can be used in the context of Separation Logic. However, it does not take full advantage of local reasoning (as explained in §3.3). I came across the limitation of loop invariants, found a way to address it, and then learnt that Tuerk had made the same observation and discussed it in a workshop paper [83]. I present his solution and then compare it with mine.

In Tuerk's presentation, the deduction rule that does involve a loop invariant is as shown below. In this partial-correctness reasoning rule, b is a boolean condition, c is a set of commands, and I is an invariant.

    {b ∧ I} c {I}
    ─────────────────────────────
    {I} (while b do c) {¬b ∧ I}

His generalized rule is shown next. There, P denotes a predicate (the invariant becomes parameterized by an index), and x and y denote indices for P. Observe that the premise involves a quantification over all the programs c′ that admit a given specification. The variable c′ intuitively denotes the remaining iterations of the loop. So, the sequence (c ; c′) corresponds to the execution of the first iteration followed with that of the remaining iterations.

    ∀x. ∀c′.  (∀y. {P y} c′ {¬b ∧ P y})  ⇒  {b ∧ P x} (c ; c′) {¬b ∧ P x}
    ──────────────────────────────────────────────────────────────────────
    ∀x. {P x} (while b do c) {¬b ∧ P x}

The rule involves a negative occurrence of a Hoare triple, so it cannot be used as an inductive definition. Instead, the reasoning rule is presented as a lemma proved correct with respect to the operational semantics of the source language.

The quantification over c′ plays a very similar role as the quantification over a variable R involved in the characteristic formulae for while loops. My formulation is slightly more concise because I quantify directly over a given behavior R instead of quantifying over all the programs c′ that admit a given behavior. It is not surprising that my solution shares strong similarities with Tuerk's solution, since we both used an encoding of while loops as recursive functions as a starting point for deriving a local-reasoning-friendly version of the reasoning rule for loops.
8.3 Verification Condition Generators

A VCG is a tool that, given a program annotated with its specification and its invariants, extracts a set of proof obligations that entails the correctness of the program. The generation of proof obligations typically relies on the computation of weakest pre-conditions. Following this approach, pioneered by Floyd [29], Hoare [36] and Dijkstra [22], a large number of VCGs targeting various programming languages have been implemented in the last decades, including VCGs for full-blown industrial programming languages. For example, Spec# [5] supports verification of properties about C# programs. Programs are translated into the Boogie [4] intermediate language, from which proof obligations are generated. The SMT solver Z3 [21] is then used to discharge those proof obligations. Most SMT solvers only cope with first-order logic, thus the specification language does not benefit from the expressiveness, modularity, and elegance of higher-order logic. Several researchers have investigated the possibility of extending the VCG approach to a specification language based on higher-order logic. Three notable lines of work are described next.
The Why platform

The tool Why, developed by Filliâtre [26], is an intermediate purely-functional language annotated with higher-order logic specifications and invariants, on which weakest pre-conditions can be computed for generating proof obligations. The tool is intended to be used in conjunction with a front-end, like Caduceus [27] for C programs or Krakatoa [51] for Java programs. Proof obligations that are not verified automatically by an SMT solver can be discharged using an interactive proof assistant such as Coq.

However, in practice, those proof obligations are often large and clumsy. This is in part due to the memory model, which does not take advantage of Separation Logic. Moreover, interactive proofs of verification conditions are generally quite brittle because the proof obligations are quite sensitive to changes in either the source code or its invariants. So, although interactive verification of programs with higher-order logic specifications is made possible by the tool Why, it involves a significant cost. As a consequence, the verification of large programs with Why appears to be well-suited only for programs that require a small number of interactive proofs.

With characteristic formulae, a slightly smaller degree of automation is achievable than with a VCG; however, proof obligations always remain tidy and can be easily related to the point of the program they arise from. Characteristic formulae offer some possibility for partially automating the reasoning, through calls to the tactic xgo and through the invocation of decision procedures from Coq. When reasoning on the code becomes more involved and requires a lot of intervention from the user, the verification of the code can be conducted step-by-step in a fully-interactive manner. Moreover, by investing a little extra effort in explicitly naming the hypotheses that need to be referred to in the proof scripts, one can build very robust proof scripts.
Jahob

The tool Jahob [88], mainly developed by Zee, Kuncak and Rinard, targets the verification of programs written in a subset of Java, and accommodates specifications expressed in higher-order logic. The tool has been successfully employed to verify mutable linked data structures, such as lists and trees. For discharging proof obligations, Jahob relies on a translation from (a subset of) higher-order logic into first-order logic, as well as on automated theorem provers extended with specialized decision procedures for reasoning on lists, trees, sets and maps.

A key feature of Jahob is its integrated proof language, which allows including proof hints directly inside the source code. Those hints are used to guide automated theorem provers, in particular by indicating how existential variables should be instantiated. Although it is not clear how this approach would extend beyond the verification of list and set data structures, for which powerful decision procedures can be devised, the programs verified in Jahob exhibit both high-level specifications and an impressive degree of automation.

Yet, the impressively-small number of hints written in a program often covers up the extensive amount of time required for coming up with the appropriate hints. The development process involves running the tool, waiting for the output of automated theorem provers, reading the proof obligations to try and understand why proof obligations could not be discharged automatically, and finally starting over with a slightly different proof hint. This process turns out to be quite time-consuming. Moreover, guessing what hints may work seems to require a good understanding of the working of the automated theorem provers and of the encoding of higher-order logic that is applied before the provers are called.

Carrying out interactive proofs using characteristic formulae certainly also requires some expertise with the interactive theorem prover and its proof-search features. However, interactive proofs give near-instantaneous feedback, saving overall a lot of time in the verification process. In the end, an interactive proof script might involve more lines than the integrated proof hints of Jahob; however, those lines might take a lot less time to come up with.
Pangolin

Pangolin, developed by Régis-Gianas and Pottier [76], is the first VCG tool that targets a higher-order, purely-functional programming language. The key innovation of this work is the treatment of first-class functions: a function gets reflected in the logic as the pair of its pre-condition and of its post-condition. More recently, the tool Who [44], by Kanig and Filliâtre, builds upon the same idea for extending Why [26] with imperative higher-order functions. I have followed a very different approach to lifting functions to the logical level. In my work, functions are represented as values of the abstract data type Func and specified using the abstract predicate AppReturns. This approach has three benefits compared with representing functions as pairs of a pre- and a post-condition.

Firstly, the predicate AppReturns allows assigning several specifications to the same function, whereas in Pangolin the specification of a function is hooked into the type of the function. Assigning multiple specifications to a same function turns out to be particularly useful for higher-order functions, which typically admit a complex most-general specification and a series of simpler, specialized specifications.

Secondly, the fact that functions are uniformly represented as a data type Func rather than as a pair of a pre- and a post-condition makes it simpler to reason about functions stored in data structures. For example, in my approach a list of functions is reflected as the type List Func, while in Pangolin the same list would have a type of the form List (∃A B. (A → Prop) × (B → Prop)), involving dependent types.

Thirdly, the specification of functions through the predicate AppReturns interacts better with ghost variables, as the reflection of a function f as a logical value of type (A → Prop) × (B → Prop) does not take ghost variables into account. Although Pangolin offers syntactic sugar for specifying functions with ghost variables, this syntactic sugar has to be ultimately eliminated, leading to duplication of hypotheses in pre- and post-conditions or in proof obligations.
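To illustrate the second point above, the following OCaml sketch, which is illustrative only, builds a list of first-class functions; in the approach of this dissertation each element is reflected in the logic as a value of the abstract type Func, so the whole list is reflected at type List Func, without any dependent pair of pre- and post-conditions.

    (* A list of first-class functions over integers. *)
    let operations : (int -> int) list =
      [ (fun x -> x + 1); (fun x -> 2 * x) ]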
8.4 Shallow embeddings

Program verification with a shallow embedding consists in relating the source code of the program that is executed with a program written in the logic of a theorem prover. This approach can take several forms.

Programming in type theory with ML types

Leroy's formally-verified C compiler [47] is, for the most part, programmed directly in Coq. The extraction mechanism of Coq is used to obtain OCaml source code out of the Coq definitions. This translation is mainly a matter of adapting the syntax. Because the logic of Coq includes only pure total functions, the compiler is implemented without any imperative data structure, and the termination of all the recursive functions involved is justified through a simple syntactic criterion. Although those restrictions appeared acceptable for writing a program such as a compiler, there are many programs that cannot accommodate such restrictions.

The source code of Leroy's compiler involves only basic ML types. Typically, a data structure is represented using algebraic types, and the properties of a value of type T are specified through a predicate of type T → Prop. There exists another style of programming, detailed next, which consists in using dependent types to enforce invariants of values, for example relying on the type list n to describe lists of length n.
Programming in type theory with dependent types

Programming with dependent types has been investigated in particular in Epigram [54], Agda [19] and Russell [81]. The latter is an extension to Coq, which behaves as a permissive source language that elaborates into Coq terms. In Russell, establishing invariants, justifying termination of recursive functions and proving the inaccessibility of certain branches of a pattern matching can be done through interactive Coq proofs. Those features make it easier in particular to write programs featuring nontrivial recursion (even though the proofs about such dependently-typed functions might turn out to be more technical).

When dependent types are used, Coq functions manipulate values and proofs at the same time. So, the work of the extraction mechanism is more involved than when programming with ML types, because the proof-specific entities need to be separated from the values with computational content. Recognizing proof-specific elements is far from trivial. With the current implementation of Coq, some ghost variables might fail to be recognized as computationally irrelevant. Such variables remain in the extracted code, leading to runtime inefficiencies and, possibly, to incorrect asymptotic complexity [81]. The Implicit Calculus of Constructions [6] could solve that problem, as it allows tagging ghost variables manually, however this system is not yet implemented.
Hoare Type Theory and Ynot
The two approaches described so far do not support imperative features. The purpose
of Hoare Type Theory (HTT) [61, 63], developed by Nanevski, Morrisett and Birkedal,
is precisely to extend the programming language from the logic with an axiomatic
monad for describing impure features such as side-effects and non-termination. HTT
is implemented in a tool called Ynot [17] and it has been recently re-implemented by
Nanevski et al. [64]. Both developments, implemented in Coq, allow for the extraction
of code that performs side effects. The type of the monad involved takes the form
ST P Q, which is a partial-correctness Hoare triple asserting that a term of that type
admits the pre-condition P and the post-condition Q. On top of the monadic type ST
is built a Separation Logic-style monad, of type STsep P Q, which supports local reasoning.
In HTT, verification proofs thus take the form of Coq typing derivations for
the source code. So, program verification is done at the same time as writing the
source code. This is a significant difference with characteristic formulae, which
allow verifying programs after they have been written, without requiring the source
code to be modified in any way. Moreover, characteristic formulae can target an
existing programming language, whereas the Ynot programming language has to fit
into the logic it is implemented in. For example, supporting handy features such as
alias-patterns and when-clauses would be a real challenge for Ynot, because pattern
matching is so deeply hard-wired in Coq that it is not straightforward to extend it.
Another technical difficulty faced by HTT is the treatment of ghost variables.
A specification of the form ST P Q does not naturally allow for ghost variables to
be used for sharing information between the pre- and the post-condition. Indeed, if
P and Q both refer to a ghost variable x quantified outside of the type ST P Q,
then x is considered as a computationally-relevant value and thus it appears in the
extracted code (indeed, x is typically not of type Prop). Ynot [17] relies on a hack for
simulating the Implicit Calculus of Constructions [6], which, as mentioned earlier on,
allows tagging ghost variables explicitly. A danger of this approach is that forgetting
to tag a variable as ghost does not produce any warning, yet results in the extracted
code being inefficient. HTT [63, 64] takes a different approach and implements
post-conditions as predicates over both the input heap and the output heap. This makes
it possible to eliminate ghost variables by duplicating the pre-condition inside the
post-condition. Typically, ∀x. ST P Q gets encoded as ST (∃x. P) (∀x. P ⇒ Q).
Tactics are then developed to avoid the duplication of proof obligations; however,
the duplication remains visible in specifications. It might be possible to also hide
the duplication in specifications with a layer of notation, but such a notation
system does not appear to have been implemented.
Translation of code into logical definitions
The extraction mechanism involved in the aforementioned approaches takes a program
described as a logical definition and translates it into a conventional programming
language. It is also possible to consider a translation that goes instead in the other
direction, from a piece of code towards a higher-order logic definition.
The LOOP compiler [43] takes Java programs and compiles them into PVS
definitions. It supports a fairly large fragment of the Java language and features
advanced PVS tactics for reasoning on the generated definitions. In particular, it includes
a weakest-precondition calculus mechanism that allows achieving a high degree of
automation, despite being limited in the size of the fragment of code that can be
automatically verified. However, interactive proofs require a lot of expertise from
the user: LOOP requires not only mastering the PVS tool but also understanding the
compilation scheme involved [43]. By contrast, the tactics manipulating characteristic
formulae appear to allow conducting interactive proofs of correctness without
detailed knowledge of the construction of those formulae.
Myreen and Gordon [60, 58] decompile machine code into HOL4 functions. The
lemmas proved interactively about the generated HOL4 functions can then be automatically transformed into lemmas about the behavior of the corresponding pieces
of machine code. This approach has been applied to verify a complete LISP interpreter
implemented in machine code. The translation into HOL4 is possible only
because the functional translation of a while loop is a tail-recursive function. Indeed,
tail-recursive functions can be accepted as logical definitions in HOL4 without
compromising the soundness of the logic, even when the function is non-terminating
[50]. Without exploiting this peculiarity of tail-recursive functions, the automated
translation of source code into HOL4 would not be possible. For this reason, it
seems hard to apply this decompilation-based approach to the verification of code
featuring general recursion and higher-order functions.
SeL4
The goal of the seL4 project [46, 45] is the machine-checked verification of
the seL4 microkernel. The microkernel is implemented in a subset of C, and also
includes lines of assembly code. The development involves two layers of specification.
First, abstract logical specifications, describing the high-level invariants that
the microkernel should satisfy, have been stated in Isabelle/HOL. Second, a model
of the microkernel has been implemented in a subset of Haskell. This executable
specification is automatically translated into Isabelle/HOL definitions, via a shallow
embedding.
The proof of correctness is two-step. First, it involves relating the translation of
the executable Haskell specification with a deep embedding of the C code. This task
can be partially automated with the help of specialized tactics. Second, the proof
involves relating the abstract specification with the translation of the executable
specification. This task is mostly carried out at the logical level, and thus does not
involve referring to the low-level representation of data structures such as doubly-linked
lists.
To summarize, this two-step approach involves a shallow embedding of some
source code. This code is not the low-level C code that is ultimately compiled, but
rather the source code of an abstract version of that code, expressed in a higher-level
programming language, namely Haskell. The shallow embedding approach applies
well in seL4 because the code of the executable specification was written in such
a way as to avoid any nontrivial recursion [45]. Overall, the approach is fairly
similar to that of Myreen and Gordon [60, 58], except that the low-level code is not
decompiled automatically but instead decompiled by hand, and this decompilation
is proved correct using semi-automated tactics.
The KeY system
The KeY system [7] is a fairly successful approach to the verification of Java programs.
Contrary to the aforementioned approaches, the KeY system does not target a standard
mathematical logic, but instead relies on a logic specialized for reasoning on imperative
programs, called Dynamic Logic. Dynamic Logic [33] is a particular kind of modal logic
in which program code appears inside modalities such as the mix-fix operator ⟨·⟩.
For example, H1 → ⟨t⟩ H2 is a Dynamic Logic formula asserting that, in any heap
satisfying H1, the sequence of commands t terminates and produces a heap satisfying H2.
This formula thus has the same interpretation as the proposition JtK H1 (# H2).
Reasoning rules of Dynamic Logic enable symbolic execution of the source code that
appears inside a modality. Reasoning on loops and recursive functions can be conducted
by induction. Local reasoning is supported in the KeY methodology thanks to the recent
addition of a feature called Dynamic Frames [79].
Despite being based on very different logics, the characteristic formulae approach
and the KeY approach to program verification show a lot of similarities regarding the
high-level interactions between the user and the system. Indeed, KeY features a GUI
interface where the user can view the current formula and make progress by writing
a proof script made of tactics (called taclets). It also relies on existential variables
for delaying the instantiation of particular intermediate invariants. Thanks to those
similarities, the approach based on characteristic formulae can presumably leverage
the experience acquired through the KeY project and take inspiration from the
nice features developed for the KeY system. At the same time, by being implemented
in terms of a mainstream theorem prover rather than a custom one, characteristic
formulae can leverage the fast development of that theorem prover and benefit
from the mathematical libraries that are developed for it.
8.5 Deep embeddings
A fourth approach to formally reasoning on programs consists in describing the
syntax and the semantics of a programming language in the logic of a proof assistant,
using inductive definitions. In theory, the deep embedding approach can be used to
verify programs written in any programming language, and without any restriction
in terms of expressiveness (apart from those of the proof assistant).
Proof of concept by Mehta and Nipkow
Mehta and Nipkow [56] have set up the first proof-of-concept of program verification
through a deep embedding in a general-purpose theorem prover. The authors axiomatized
the syntax and the semantics of a basic procedural language in Isabelle/HOL. Then,
they proved Hoare-style reasoning rules correct with respect to the semantics of that
language. Finally, they implemented in ML a VCG tactic for automatically applying
the reasoning rules. The VCG tactic, when applied to source code annotated with
specifications and invariants, produces a set of proof obligations that entails the
correctness of the source code. The approach was validated through the verification
of a short yet complex program, the Schorr-Waite graph-marking algorithm. Although
Mehta and Nipkow's work is conducted inside a theorem prover, it remains fairly
close to the traditional VCG-based approach.
To reason about data that are chained in the heap, Mehta and Nipkow adapted
a technique from Bornat [12]. They defined a predicate List m l L to express the
fact that, in a memory store m, there exists a path that goes from the location l
to the value null by traversing the locations mentioned in the list L. Mehta and
Nipkow observed that the definition of the predicate List allows for some form of
local reasoning, in the sense that the proposition List m l L remains true when
modifying a pointer that does not belong to the list L. However, the predicate List
refers to the entire heap m, so it does not support application of the frame rule.
The frameworks XCAP and SCAP
The frameworks XCAP [65] and SCAP [25], developed by Shao et al., rely on deep
embeddings for reasoning in Coq about assembly programs. They support reasoning
on advanced patterns such as strong updates, embedded code pointers, or
longjmp/setjmp. Those frameworks have been used to verify short but complex
assembly routines, whose proofs involve hundreds of lines per instruction.
One of the long-term goals of the Flint project headed by Shao is to develop the
techniques required to formally prove correct the kernel of an operating system. Such
a development involves reasoning at different abstraction levels. Typically, a thread
scheduler or a garbage collector needs to have a lower-level view of memory, whereas
threads that are executed by the scheduler and that use the garbage collection facility
might have a much higher-level view of memory. So, it makes sense to develop various
logics for reasoning at each abstraction layer. Yet, at some point, all those logics
must be related to each other.
As Shao et al. have argued, the key benefit of using a deep embedding is that
it allows for a foundational approach to program verification, in which all those
logics are ultimately defined in terms of the semantics of machine code. In this
thesis, I have focused on reasoning on programs written in a high-level programming
language, so the challenge is quite different. Applying such a foundational approach
to characteristic formulae would involve establishing a formal connection between a
characteristic formula generator and a formally-verified compiler.
From a deep embedding of pure-Caml to characteristic formulae
The development of the characteristic formulae presented in this thesis originates in an
investigation of the use of a deep embedding for proving purely-functional programs.
In the development of this deep embedding, I did formalize in Coq the syntax and the
semantics of the pure fragment of Caml. The specification predicate took the form
t ⇓ P, asserting that the execution of the term t terminates and returns a value
satisfying the predicate P. The reasoning rules, which take the form of lemmas
proved correct with respect to the axiomatized semantics, were used to establish
judgments of the form t ⇓ P.
I relied on tactics to help applying those reasoning rules. Those tactics behaved
in a similar way as the x-tactics developed for characteristic formulae, although
the implementation of those tactics was much more involved (I explain why further
on). With this deep embedding, I was able to verify the implementation of the
list library of OCaml and to prove correct a bytecode compiler and interpreter for
mini-ML. The resulting verification proof scripts were actually quite similar to those
involved when working with characteristic formulae. Further details can be found
in the corresponding technical report [15]. I next explain how the deep embedding
reasoning rules led to characteristic formulae, and what improvements were brought
by characteristic formulae.
When verifying a program through a deep embedding, the reasoning rules are
applied in a very systematic manner. For example, whenever reaching a let-node,
the reasoning rule for let-bindings, shown next, needs to be applied.

    (t1 ⇓ P') ∧ (∀x. P' x ⇒ t2 ⇓ P)   ⇒   ((let x = t1 in t2) ⇓ P)

The intermediate specification P', which does not appear in the goal, typically needs
to be provided at the time of applying the rule. The introduction of an existential
quantifier allows applying the rule by delaying the instantiation of P'. The updated
reasoning rule for let-bindings is as follows.

    (∃P'. (t1 ⇓ P') ∧ (∀x. P' x ⇒ t2 ⇓ P))   ⇒   ((let x = t1 in t2) ⇓ P)

This new reasoning rule is now entirely goal-directed, so it can be applied in a
systematic manner, following the source code of the program. This led to the
characteristic formula for let-bindings, recalled next. Observe that JtK P has exactly
the same interpretation as t ⇓ P, since both assert that the term t returns a value
satisfying P.

    Jlet x = t1 in t2K P   ≡   ∃P'. Jt1K P' ∧ (∀x. P' x ⇒ Jt2K P)
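As an illustration, here is a minimal Coq sketch of this construction in a toy, purely functional setting, where a formula is modeled as a predicate over post-conditions; the names formula and cf_let are illustrative and the definitions are simplified, not the actual CFML ones.

    (* A "formula" assigns a truth value to each candidate post-condition. *)
    Definition formula (A : Type) : Type := (A -> Prop) -> Prop.

    (* Sketch of the characteristic formula of [let x = t1 in t2], built from the
       formulae F1 and F2 of the two sub-terms: it quantifies existentially over
       the intermediate specification P', exactly as in the rule shown above. *)
    Definition cf_let (A B : Type) (F1 : formula A) (F2 : A -> formula B) : formula B :=
      fun P => exists (P' : A -> Prop), F1 P' /\ (forall x : A, P' x -> F2 x P).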
In addition to automating the application of reasoning rules, another major
improvement brought by characteristic formulae concerns the treatment of program
values. With characteristic formulae, a list of integers 3 :: 2 :: nil occurring in a
Caml program is described as the corresponding Coq list of integers. However, with
the deep embedding, the same value is represented as:

    vconstr2 cons (vint 3) (vconstr2 cons (vint 2) (vconstr0 nil))

where vconstr is the constructor from the grammar of values used to represent the
application of data constructors (the index that annotates vconstr indicates the arity
of the data constructor), and where vint is the constructor from the grammar of
values used to represent program integers. I did develop an effective technique for
avoiding polluting specifications and reasoning with such a low-level representation.
For every Coq type A, I generated the Coq definition of its associated encoder, which
is a total function of type A → Val, where Val denotes the type of Caml values in
the deep embedding.
The actual specification judgment in the deep embedding took the form t ⇓_A P,
where A is a type for which there exists an encoder of type A → Val, and where P
is a predicate of type A → Prop. The proposition t ⇓_A P asserts that the evaluation
of the term t returns the encoding of a value V of type A such that P V holds.
With this predicate, program verification through a deep embedding could be
conducted almost entirely at the logical level.
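For concreteness, here is a hedged Coq sketch of such an encoder for lists of integers, over a simplified grammar of deeply-embedded values; representing constructor names as strings is an assumption made only for this illustration and differs from the actual development.

    Require Import ZArith String List.
    Import ListNotations.
    Open Scope Z_scope.
    Open Scope string_scope.

    (* Simplified grammar of deeply-embedded Caml values. *)
    Inductive Val : Type :=
      | vint     : Z -> Val                      (* program integers *)
      | vconstr0 : string -> Val                 (* nullary data constructor *)
      | vconstr2 : string -> Val -> Val -> Val.  (* binary data constructor *)

    (* Encoder for the Coq type [list Z]: a total function of type [list Z -> Val]. *)
    Fixpoint encode_list_int (l : list Z) : Val :=
      match l with
      | [] => vconstr0 "nil"
      | x :: r => vconstr2 "cons" (vint x) (encode_list_int r)
      end.

    (* [encode_list_int [3; 2]] computes to
       [vconstr2 "cons" (vint 3) (vconstr2 "cons" (vint 2) (vconstr0 "nil"))]. *)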
From my experience of developing a framework both for a deep embedding and
for characteristic formulae, I conclude that moving to characteristic formulae brings
at least three major improvements.
First, characteristic formulae do not need to
represent and manipulate program syntax. Thus, they avoid many technical difficulties,
in particular those associated with the representation of binders. Moreover,
I observed that the repeated computations of substitutions that occur during the
verification of a deeply-embedded program can lead to the generation of proof terms
of quadratic size, which can be problematic for scaling up to larger programs.
Second, with characteristic formulae there is no need to apply the reasoning rules
of the program logic, because the application of those rules is automatically done
through the construction of the characteristic formulae. In particular, characteristic
formulae save the need to compute reduction contexts. With a deep embedding,
to prove a goal of the form t ⇓ P, one must first rewrite the goal in the form
C[t1] ⇓ P, where C is a reduction context and where t1 is the subterm of t in
evaluation position. More generally, by moving to characteristic formulae, tactics
could be given a much simpler and a much more efficient implementation.
Third and last, characteristic formulae, by lifting values to the logical level once
and for all at the time of their construction, completely avoid the need to refer to
the relationship between program values and logical values. Contrary to the deep
embedding approach, characteristic formulae do not require encoders to be involved
in specifications and proofs; encoders are only involved in the proof of soundness
of characteristic formulae. The removal of encoders leads to simpler specifications,
simpler reasoning rules, and simpler tactics.
The fact that characteristic formulae outperform deep embeddings is after all not
a surprise: characteristic formulae can be seen as an abstract layer built on top
of a deep embedding, so as to hide details related to the representation of values and
to retain only the essence of the reasoning rules supported by the deep embedding.
Chapter 9
Conclusion
This last chapter begins with a summary of how the various approaches
to program verification try to avoid referring to program syntax. I then
recall the main ingredients involved in constructing a practical verification
tool upon the idea of characteristic formulae. Finally, I discuss
directions for future research, and conclude.
9.1 Characteristic formulae in the design space
The correctness of a program ultimately refers to the semantics of the programming
language in which that program is written. The reduction judgment describing the
semantics, written t/m ⇓ v'/m' in the thesis, directly refers to the input memory
state m and to the output memory state m'. Hoare triples, here written {H} t {Q},
allow for abstraction in specifications. While a memory state m describes exactly
what values lie in memory, the heap descriptions involved in Hoare triples make it
possible to state properties about the values stored in memory, without necessarily
revealing the exact representation of those values. Separation Logic later suggested
an enhanced interpretation of Hoare triples, according to which any allocated memory
cell that is not mentioned in the pre-condition of a term is automatically guaranteed
to remain unchanged through the evaluation of that term. Separation Logic thereby
allows for local reasoning, in the sense that the verification of a program component
can be conducted independently of the pieces of state owned by other components.
By moving from a judgment of the form t/m ⇓ v'/m' to a judgment of the form
{H} t {Q}, Hoare logic and Separation Logic avoid a direct reference to the memory
states m and m'. However, the reference to the source code t still remains. In
the traditional VCG approach, this is not an issue: when the code is annotated
with sufficiently many specifications and invariants, it is possible to automatically
extract a set of verification conditions. Proving the verification conditions then
ensures that the program satisfies its specification. Although the VCG approach works
smoothly when the verification conditions can be discharged by fully-automated
theorem provers, verification conditions are not as well-suited for interactive proofs.
One alternative to building Hoare-logic derivations automatically consists in
building them by hand, through interactive proofs. To build the derivation of a
Hoare triple {H} t {Q} manually, one has to refer to the source code t explicitly.
The description of source code in the logic involves a deep embedding of the
programming language. Program verification through a deep embedding is possible;
however, it requires a significant effort, mainly because values from the programming
language need to be explicitly lifted to the logical level, and because reasoning on
reduction steps involves computing substitutions in the deep embedding of the source code.
The main motivation for verifying programs with a shallow embedding is precisely
to avoid the reference to programming language syntax. There are several
ways in which a shallow embedding can be exploited in program verification, but
they all rely on the same idea: building a logical definition that corresponds to the
source code of the program to be verified. These techniques rely on the fact that
the logic of a theorem prover actually contains a small programming language. Yet,
the shallow embedding approach suffers from restrictions related to the fact that
some features of the host logical language are carved in stone.
I have followed a different approach to removing the direct reference to the source
code of the program to be verified. It consists in generating a logical formula that
characterizes the behavior of the source code. This concept of characteristic formulae
is not new: it was first developed in process calculi [70, 57] and was later adapted
to PCF programs by Honda, Berger and Yoshida [40]. Yet, that work on characteristic
formulae for PCF programs did not lead to a practical verification system,
because the specification language relied on a nonstandard logic.
In this thesis, I have shown how to construct characteristic formulae expressed in
a standard higher-order logic. The characteristic formula of a term t is a predicate,
written JtK, such that JtK H Q implies that the term t admits H as pre-condition
and Q as post-condition. I have implemented a tool for generating Coq characteristic
formulae for Caml programs, and I have developed a Coq library that includes
definitions, lemmas and tactics for proving programs through their characteristic
formula. Finally, I have applied this tool for specifying and verifying nontrivial
programs. The key ingredients involved in turning the idea of characteristic formulae
into a practical verification system are summarized next.
9.2 Summary of the key ingredients
Characteristic formulae
The starting point of this thesis is the notion of characteristic formula, which is a
higher-order logic formula that characterizes the set of total-correctness specifications
that a given program admits. A characteristic formula is built compositionally from
the source code of the program it describes, and it has a size linear in that of the
program. The program does not need to be annotated with specifications and
invariants, as unknown specifications and invariants get quantified existentially in
characteristic formulae.
Notation layer
If F is the characteristic formula of a term t, the proof obligation takes the form
F H Q, where H is the pre-condition and Q is the post-condition. I have devised a
system of notation for pretty-printing characteristic formulae in a way that closely
resembles the source code that the formula describes. So, the proof obligation F H Q
reads like the term t being applied to the pre- and the post-conditions, in the form
t H Q. Since characteristic formulae are built compositionally, the ability to easily
read proof obligations applies not only to the initial term t but also to all of its
subterms.
Reflection of values
The values involved in the execution of a program are specified using values from the
logic. For example, if a Caml function expects a list of integers, then its pre-condition
is a predicate about Coq lists of integers. This reflection of Caml values as Coq values
works for all values except for functions, because Coq functions can only describe
total functions. I have introduced the abstract type Func to represent functions, as
well as the abstract predicate AppEval for specifying the behavior of function
applications. In the soundness proof, a value of type Func is interpreted as the deep
embedding of the source code of some function. This interpretation avoids the need
to rely on a nonstandard logic, as done in the work of Honda, Berger and Yoshida [40].
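A minimal Coq sketch of the shape of this interface in the purely functional setting is shown below; the actual definitions in the thesis also account for ghost variables and, in the imperative case, for heaps, so this is only an approximation.

    (* Abstract type of reflected Caml functions, and abstract big-step predicate:
       [AppEval A B f v v'] means that applying [f] to [v] evaluates to [v']. *)
    Parameter Func : Type.
    Parameter AppEval : forall (A B : Type), Func -> A -> B -> Prop.

    (* Derived specification predicate: the application of [f] to [v] terminates
       and returns a value satisfying the post-condition [P]. *)
    Definition AppReturns (A B : Type) (f : Func) (v : A) (P : B -> Prop) : Prop :=
      exists (v' : B), AppEval A B f v v' /\ P v'.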
Ghost variables
The specification of a function generally involves ghost variables, in particular for
sharing information between the pre-condition and the post-condition. In order to
state general lemmas about function specifications, such as the induction lemma
which inserts an induction hypothesis into a given specification, I have introduced
the predicate Spec. The proposition Spec f K asserts that the function f admits the
specification K, where K is a higher-order predicate able to capture the quantification
over an arbitrary number of ghost variables.
N-ary functions
Caml programs typically define functions of several arguments in a curried fashion.
Partial applications are commonplace, and over-applications are also possible. In
theory, a function of two arguments can always be specified as a function of one
argument that returns another function of one argument. Yet, a higher-level predicate
that captures the specification of a function of two arguments in a more direct way
is a must-have for practical verification. For that purpose, I have defined a family
of predicates AppReturns_n and Spec_n, which are all ultimately defined in terms of
the core predicate AppReturns.
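To illustrate the idea, here is a hedged Coq sketch of how a binary variant could be derived from the unary predicate, exploiting the fact that a partial application returns a value of type Func; this shows only the general shape and is not the actual CFML definition, which additionally handles ghost variables through Spec_n.

    Parameter Func : Type.
    Parameter AppReturns : forall (A B : Type), Func -> A -> (B -> Prop) -> Prop.

    (* A curried function of two arguments: applying it to [x1] returns a function
       [g] which, applied to [x2], returns a value satisfying [P]. *)
    Definition AppReturns2 (A1 A2 B : Type)
        (f : Func) (x1 : A1) (x2 : A2) (P : B -> Prop) : Prop :=
      AppReturns A1 Func f x1 (fun g : Func => AppReturns A2 B g x2 P).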
Heap predicates
Characteristic formulae, which were first set up for purely functional programs,
could easily be extended to an imperative setting by adding heap descriptions.
I exploited the techniques of Separation Logic for describing heaps in a concise way,
and followed the Coq definitions of Separation Logic connectives developed in
Ynot [17]. In particular, heap predicates of the form L ,→T V support reasoning on
strong updates. One novel ingredient for handling Separation Logic in characteristic
formulae is the definition of the predicate local, which is inserted at every node of a
characteristic formula, thereby allowing for application of the frame rule at any time.
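For reference, here is a minimal Coq sketch of the kind of definitions involved, over a toy heap model in which locations and values are both natural numbers; the names and the model are illustrative, and the definitions used in the thesis and in Ynot are more general.

    (* Toy heap model: partial maps from locations to values. *)
    Definition heap : Type := nat -> option nat.
    Definition Hprop : Type := heap -> Prop.

    Definition disjoint (h1 h2 : heap) : Prop :=
      forall l : nat, h1 l = None \/ h2 l = None.

    Definition union (h1 h2 : heap) : heap :=
      fun l => match h1 l with Some v => Some v | None => h2 l end.

    (* Separating conjunction: the heap splits into two disjoint parts, each
       satisfying one of the two predicates. *)
    Definition star (H1 H2 : Hprop) : Hprop :=
      fun h => exists h1 h2, disjoint h1 h2 /\ h = union h1 h2 /\ H1 h1 /\ H2 h2.

    (* Singleton heap predicate: location [l] stores exactly the value [v]. *)
    Definition points_to (l v : nat) : Hprop :=
      fun h => forall l' : nat, h l' = (if Nat.eqb l' l then Some v else None).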
Bounded recursive ownership
The use of representation predicates is a stan-
dard technique for relating imperative data structures with their mathematical
model. I have extended this technique to polymorphic representation predicates: a
representation predicate for a container takes as argument the representation predicate describing the elements stored in that container. Depending on the representation predicate given for the elements, one can describe either a situation where
the container owns its elements, or a situation where the container does not own its
elements and simply views them as base values.
Weak-ML
ML types greatly help in program verification because they allow reflecting basic
Caml values directly as their Coq counterparts. Yet, for the sake of type soundness,
the ML type system keeps track of the type of functions, and it enforces the invariant
that a location of type ref τ contains a value of type τ at any time. Since program
verification is a much more accurate analysis than ML type-checking, it need not
suffer from the restrictions imposed by ML. The introduction of the type system
weak-ML serves two purposes. First, it relaxes the restrictions associated with ML,
in particular through a totally-unconstrained typing rule for applications. Second,
it simplifies the proof of soundness of characteristic formulae, by making it possible
to establish a bijection between well-typed weak-ML values and a subset of Coq
values, and by avoiding the need to involve a store typing oracle.
High-level tactics
I have developed a set of tactics for reasoning on characteristic formulae in Coq.
Those tactics not only shorten proof scripts, they also make it possible to verify
programs without knowledge of the details of the construction of characteristic
formulae. The end-user only needs to learn the various syntaxes of the CFML tactics,
in particular the optional arguments for specifying how variables and hypotheses
should be named. Furthermore, in the verification of imperative programs, tactics
play a key role by supporting the application of lemmas modulo associativity and
commutativity of the separating conjunction, and by automating the application of
the frame rule on function calls.
Implementation
The CFML generator parses Caml code and produces Coq definitions. It reuses the
parser and the type-checker of the OCaml compiler, and involves about 3,000 lines
of OCaml for constructing characteristic formulae. The CFML library contains
notations, lemmas and tactics for manipulating characteristic formulae. It is made
of about 4,000 lines of Coq. Note that the CFML library would have been significantly
easier to implement if Coq featured a slightly more evolved tactic language.
9.3 Future work
Formal verification of characteristic formulae
Defenders of a foundational approach to program verification argue that we ought to
try and reduce the trusted computing base as much as possible. There are at least
two ways of proceeding. The first one involves building a verified characteristic
formula generator and then conducting a machine-checked proof of the soundness
theorem for characteristic formulae. An alternative approach consists in developing
a program that, instead of generating axioms to reflect every top-level declaration
from the source Caml program, directly generates lemmas and their proofs. Those
proofs would be carried out in terms of a deep embedding of the source program.
Note that the program generating the lemmas and their proofs would not need to
be itself proved correct, because its output would be validated by the Coq
proof-checker. I expect both of those approaches to involve a lot of effort if a
full-blown programming language is to be supported.
Non-determinism
In this thesis, I have assumed the source language to be deterministic. This
assumption makes definitions and proofs slightly simpler, in particular for the
treatment of partial functions and polymorphism; however, it does not seem to be
essential to the development. One could probably replace the predicate AppEval
with two predicates: a deterministic predicate describing only the evaluation of
partial applications of curried functions, and a non-deterministic predicate describing
the evaluation of general function applications.
Exceptions
It should be straightforward to add support for the throwing and catching of
exceptions. Currently, a post-condition has the type A → Hprop, describing both
the result value of type A and the heap produced. Support for exceptions can be
added by generalizing the result type to (A + exn) → Hprop, where exn denotes
the type of exceptions. Such a treatment of exceptions has been implemented for
example in Ynot [17]. Characteristic formulae would then involve a left injection to
describe a normal result and a right injection to describe the throwing of an
exception.
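A minimal sketch of that generalization in Coq, with Hprop and exn kept abstract and the name post chosen only for illustration:

    Parameter Hprop : Type.
    Parameter exn : Type.

    (* Post-conditions covering both normal results (left injection) and
       raised exceptions (right injection). *)
    Definition post (A : Type) : Type := (A + exn)%type -> Hprop.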
Arithmetic
So far, I have completely avoided the difficulties related to fixed-size representations
of numbers: I have assumed arbitrary-precision integers and I have not provided any
support for floating-point numbers. One way to obtain correct programs without
wasting time reasoning on modulo arithmetic consists in representing all integers
using the BigInt library. Alternatively, one could target fixed-size integers, yet at
the cost of proving side-conditions for every arithmetic operation involved in the
source program. For floating-point numbers, the problem appears to be harder.
Nevertheless, it should be possible to build upon the recent breakthroughs in formal
reasoning on numerical routines [11, 3] and achieve a high degree of automation.
Fields as arrays
The frame rule can be made even more useful if the ownership of an array of records
can be viewed as the ownership of a record of arrays. Such a re-organization of
memory makes it possible to invoke the frame rule on particular fields of a record.
One then obtains for free the property that a function that only needs to access one
field of a record does not modify the other fields. It should be possible to define
heap predicates that capture the field-as-array methodology.
Hidden state
Modules can be used to abstract the definition of the invariants of private pieces of
heap. One may want to go further and hide altogether the existence of such a private
piece of state. This is precisely the purpose of the anti-frame rule developed by
Pottier [75, 80]. For example, assume that a module exports a heap predicate I of
type Hprop and a function f that admits the pre-condition H ∗ I and the post-condition
Q ∗ I. Assume that the module also contains private initialization code that allocates
a piece of heap satisfying the invariant I. Then, the module can be viewed from the
outside world as simply exporting a function f that admits the pre-condition H and
the post-condition Q, removing any reference to the invariant I. Integrating the
anti-frame rule would allow exporting cleaner specifications for modules such as a
random-number generator, a hash-consing data structure, a memoization module
and, more generally, for any imperative data structure that exposes a purely-functional
interface.
Concurrency
The verification of concurrent programs is an obvious direction for future research.
A natural starting point would be to try and extend characteristic formulae with
the reasoning rules developed for Concurrent Separation Logic, which was initially
devised by O'Hearn and Brookes [13, 68], and later also developed by other
researchers [72, 37].
Module systems
In the experimental support for modules and functors, I have mapped Caml modules
to Coq modules. However, limitations in the current module system of Coq make it
difficult to verify nontrivial Caml functors. The enhancement of modules for Coq is
a largely-open problem. In fact, one may argue that even the module system of Caml
is not entirely satisfying. I hope that, in the near future, an expressive and practical
module system that fits both Caml and Coq will be proposed. Such a common
module system would be very helpful for carrying out modular verification of
modular programs without having to hack around several limitations.
Low-level languages
It would be very interesting to implement a characteristic formula generator for a
lower-level programming language such as C or assembly. Features such as pointer
arithmetic should not be too difficult to handle. The treatment of general goto
instructions may be tricky; however, the treatment of break and continue instructions
appears to be straightforward to integrate in the characteristic formulae for loops.
Characteristic formulae for low-level programs should also bring strong guarantees
about total collection, ensuring that all the memory allocated by a program is freed
at some point.
Object-oriented languages
It would also be worth investigating how characteristic formulae may be adapted to
an object-oriented programming language like Java or C#. The object presentation
naturally encourages ghost variables, which describe the mathematical model of an
object, to be laid out as ghost fields. This treatment may reduce the number of
existentially-quantified variables and thereby allow more goals to be solved by
automated theorem provers. Yet, a number of difficult problems arise when moving
to an object-oriented language, in particular with respect to the specification of
collaborating classes that need to share a common invariant, and with respect to the
inheritance of specifications. It would be interesting to see whether characteristic
formulae can integrate the notion of dynamic specification that has been developed
by Parkinson and Bierman [71] to handle inheritance in Java.
Partial correctness
The characteristic formulae that I have developed target total correctness, which
offers a stronger guarantee than partial correctness. Yet, some programs, such as
web servers, are not intended to terminate. A partial-correctness Hoare triple whose
post-condition is the predicate False asserts that a program diverges, because the
program is proved never to crash and never to terminate. Divergence cannot be
specified in a total correctness setting. So, one could be interested in trying to build
characteristic formulae that target partial correctness, possibly reusing ideas
developed by Honda et al. [40].
Lazy evaluation
The treatment of lazy evaluation that I relied on for verifying Okasaki's data
structures is quite limited. Indeed, verifying a program by removing all the lazy
keywords from the source code is sound but not complete. I have started to work on
an explicit treatment of suspended computations, aiming for a proper model of the
implementation of laziness in Caml. More challenging would be the reasoning on
Haskell programs, where laziness is implicit. The specification of Haskell programs
would probably involve the specification of partially-defined data structures, as
suggested by Danielsson et al. [20].
Complexity analysis
One could generalize characteristic formulae into predicates that, in addition to a
pre- and a post-condition, also apply to a bound on the number of reduction steps.
Proof obligations would then take the form F N H Q, where F is a characteristic
formula and N is a natural number. Another possible approach, suggested by Pottier,
consists in keeping proof obligations in the form F H Q and adding an artificial heap
predicate of the form credit N, which denotes the ownership of the right to perform
N reduction steps. Note that a credit of the form credit (N + M) can be split at any
time into a conjunction credit N ∗ credit M. This suggestion has been studied by
Pilkiewicz and Pottier [74] in a type system that extends the type system developed
by Pottier and myself [16]. It should not be difficult to modify characteristic formulae
so as to impose that atomic operations consume exactly one credit, in the sense that
they expect credit 1 in the pre-condition and do not give this credit back in the
post-condition. Combined with a treatment of laziness, this approach based on
credits would presumably enable the formalization of the advanced complexity
analyses involved in Okasaki's book [69].
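A minimal Coq sketch of this credit discipline, axiomatizing the heap predicates as an abstract interface (in a concrete model, credits would be carried by an extended notion of heap; all names here are illustrative):

    Parameter Hprop : Type.
    Parameter star : Hprop -> Hprop -> Hprop.   (* separating conjunction *)
    Parameter heap_eq : Hprop -> Hprop -> Prop. (* equivalence of heap predicates *)

    (* [credit n] denotes the right to perform [n] reduction steps. *)
    Parameter credit : nat -> Hprop.

    (* Credits can be split and joined at will. *)
    Axiom credit_split : forall n m,
      heap_eq (credit (n + m)) (star (credit n) (credit m)).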
9.4 Final words
I would like to conclude by speculating on the role of interactive theorem provers in
future developments of program verification. Research on artificial intelligence has
had limited success in automated theorem proving, and, in spite of the impressive
progress made by SMT solvers, the verification of nontrivial programs still requires
a good amount of user intervention in addition to the writing of specifications.
Given that some form of interaction between the user and the machine is required,
proof assistants appear as a fairly effective way for users to provide the high-level
arguments of a proof and for the system to give real-time feedback. My thesis has
focused on the development of characteristic formulae, which aim at smoothly
integrating program verification within proof assistants. Proof assistants have
undergone tremendous improvements in the last decade. Undoubtedly, they will
continue to be improved, making interactive proofs, and, in particular, program
verification, more widely applicable and more accessible.
Bibliography
[1] L. Aceto and A. Ingolfsdottir. Characteristic formulae: from automata to logic.
Bulletin of the European Association for Theoretical Computer Science, 91:57,
2007. Columns: Concurrency.
[2] Andrew W. Appel. Tactics for separation logic. Unpublished draft, http://www.cs.princeton.edu/appel/papers/septacs.pdf, 2006.
[3] Ali Ayad and Claude Marché. Multi-prover verification of floating-point programs. In Jürgen Giesl and Reiner Hähnle, editors, International Joint Conference on Automated Reasoning, Lecture Notes in Artificial Intelligence, Edinburgh, Scotland, 2010. Springer.
[4] Mike Barnett, Bor-Yuh Evan Chang, Robert DeLine, Bart Jacobs, and K. Rustan M. Leino. Boogie: A modular reusable verifier for object-oriented programs. In Formal Methods for Components and Objects (FMCO) 2005, Revised Lectures, volume 4111 of LNCS, pages 364–387, New York, NY, 2006. Springer-Verlag.
[5] Mike Barnett, Rob DeLine, Manuel Fähndrich, K. Rustan M. Leino, and Wolfram Schulte. Verification of object-oriented programs with invariants. Journal of Object Technology, 3(6), 2004.
[6] Bruno Barras and Bruno Bernardo. The implicit calculus of constructions as a programming language with dependent types. In Roberto M. Amadio, editor, FoSSaCS, volume 4962 of LNCS, pages 365–379. Springer, 2008.
[7] Bernhard Beckert, Reiner Hähnle, and Peter H. Schmitt. Verification of Object-Oriented Software: The KeY Approach, volume 4334 of LNCS. Springer-Verlag, Berlin, 2007.
[8] Josh Berdine, Cristiano Calcagno, and Peter W. O'Hearn. Smallfoot: Modular automatic assertion checking with separation logic. In International Symposium on Formal Methods for Components and Objects, volume 4111 of LNCS, pages 115–137. Springer, 2005.
[9] Josh Berdine and Peter W. O'Hearn. Strong update, disposal, and encapsulation in bunched typing. In Mathematical Foundations of Programming Semantics, volume 158 of Electronic Notes in Theoretical Computer Science, pages 81–98. Elsevier Science, 2006.
[10] Martin Berger, Kohei Honda, and Nobuko Yoshida. A logical analysis of aliasing in imperative higher-order functions. In ACM International Conference on Functional Programming (ICFP), pages 280–293, 2005.
[11] Sylvie Boldo and Jean-Christophe Filliâtre. Formal verification of floating-point programs. In IEEE Symposium on Computer Arithmetic, pages 187–194. IEEE Computer Society, 2007.
[12] Richard Bornat. Proving pointer programs in Hoare logic. In International Conference on Mathematics of Program Construction (MPC), volume 1837 of LNCS, pages 102–126. Springer, 2000.
[13] Stephen D. Brookes. A semantics for concurrent separation logic. In Philippa Gardner and Nobuko Yoshida, editors, CONCUR, volume 3170 of LNCS, pages 16–34. Springer, 2004.
[14] R. M. Burstall. Some techniques for proving correctness of programs which alter data structures. In B. Meltzer and D. Mitchie, editors, Machine Intelligence 7, pages 23–50. Edinburgh University Press, Edinburgh, Scotland, 1972.
[15] Arthur Charguéraud. Verification of call-by-value functional programs through a deep embedding. Unpublished, 2009. http://arthur.chargueraud.org/research/2009/deep/.
[16] Arthur Charguéraud and François Pottier. Functional translation of a calculus of capabilities. In ACM International Conference on Functional Programming (ICFP), 2008.
[17] Adam Chlipala, Gregory Malecha, Greg Morrisett, Avraham Shinnar, and Ryan Wisnesky. Effective interactive proofs for higher-order imperative programs. In ACM International Conference on Functional Programming (ICFP), 2009.
[18] The Coq Development Team. The Coq Proof Assistant Reference Manual, Version 8.2, 2009.
[19] Thierry Coquand. Alfa/Agda. In Freek Wiedijk, editor, The Seventeen Provers of the World, Foreword by Dana S. Scott, volume 3600 of LNCS, pages 50–54. Springer, 2006.
[20] Nils Anders Danielsson, John Hughes, Patrik Jansson, and Jeremy Gibbons. Fast and loose reasoning is morally correct. In ACM Symposium on Principles of Programming Languages (POPL), pages 206–217, 2006.
[21] Leonardo de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In Tools and Algorithms for the Construction and Analysis (TACAS), volume 4963 of LNCS, pages 337–340, Berlin, 2008. Springer-Verlag.
[22] Edsger W. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976.
[23] Manuel Fähndrich and Robert DeLine. Adoption and focus: practical linear types for imperative programming. In ACM Conference on Programming Language Design and Implementation (PLDI), pages 13–24, 2002.
[24] Xinyu Feng, Rodrigo Ferreira, and Zhong Shao. On the relationship between concurrent separation logic and assume-guarantee reasoning. In Rocco De Nicola, editor, ESOP, volume 4421 of LNCS, pages 173–188. Springer, 2007.
[25] Xinyu Feng, Zhong Shao, Alexander Vaynberg, Sen Xiang, and Zhaozhong Ni. Modular verification of assembly code with stack-based control abstractions. In ACM Conference on Programming Language Design and Implementation (PLDI), pages 401–414, 2006.
[26] Jean-Christophe Filliâtre. Verification of non-functional programs using interpretations in type theory. Journal of Functional Programming, 13(4):709–745, 2003.
[27] Jean-Christophe Filliâtre and Claude Marché. Multi-prover verification of C programs. In Formal Methods and Software Engineering, 6th ICFEM 2004, volume 3308 of LNCS, pages 15–29. Springer-Verlag, 2004.
[28] Cormac Flanagan, Amr Sabry, Bruce F. Duba, and Matthias Felleisen. The essence of compiling with continuations. In ACM Conference on Programming Language Design and Implementation (PLDI), pages 237–247, 1993.
[29] R. W. Floyd. Assigning meanings to programs. In Mathematical Aspects of Computer Science, volume 19 of Proceedings of Symposia in Applied Mathematics, pages 19–32. American Mathematical Society, 1967.
[30] Georges Gonthier and Assia Mahboubi. A small scale reflection extension for the Coq system. Research Report 6455, Institut National de Recherche en Informatique et en Automatique, Le Chesnay, France, 2008.
[31] G. A. Gorelick. A complete axiomatic system for proving assertions about recursive and non-recursive programs. Technical Report 75, University of Toronto, 1975.
[32] Susanne Graf and Joseph Sifakis. A modal characterization of observational congruence on finite terms of CCS. Information and Control, 68(1-3):125–145, 1986.
[33] David Harel, Dexter Kozen, and Jerzy Tiuryn. Dynamic Logic. The MIT Press, Cambridge, Massachusetts, 2000.
[34] Eric C. R. Hehner. Specified blocks. In Bertrand Meyer and Jim Woodcock, editors, VSTTE, volume 4171 of LNCS, pages 384–391. Springer, 2005.
[35] M. C. B. Hennessy and R. Milner. Algebraic laws for nondeterminism and concurrency. Journal of the ACM, 32(1):137–161, 1985.
[36] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 583, 1969.
[37] Aquinas Hobor, Andrew W. Appel, and Francesco Zappa Nardelli. Oracle semantics for concurrent separation logic. In Sophia Drossopoulou, editor, ESOP, volume 4960 of LNCS, pages 353–367. Springer, 2008.
[38] Kohei Honda. From process logic to program logic. In Chris Okasaki and Kathleen Fisher, editors, ICFP, pages 163–174. ACM, 2004.
[39] Kohei Honda, Martin Berger, and Nobuko Yoshida. An observationally complete program logic for imperative higher-order functions. In Proceedings of the Twentieth Annual IEEE Symposium on Logic in Computer Science (LICS), pages 280–293, 2005.
[40] Kohei Honda, Martin Berger, and Nobuko Yoshida. Descriptive and relative completeness of logics for higher-order functions. In M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, editors, Automata, Languages and Programming, 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10-14, 2006, Proceedings, Part II, volume 4052 of LNCS. Springer, 2006.
[41] Kohei Honda and Nobuko Yoshida. A compositional logic for polymorphic higher-order functions. In International ACM Conference on Principles and Practice of Declarative Programming (PPDP), pages 191–202, 2004.
[42] Samin Ishtiaq and Peter W. O'Hearn. BI as an assertion language for mutable data structures. In ACM Symposium on Principles of Programming Languages (POPL), pages 14–26, London, United Kingdom, 2001.
[43] Bart Jacobs and Erik Poll. Java program verification at Nijmegen: Developments and perspective. In Kokichi Futatsugi, Fumio Mizoguchi, and Naoki Yonezaki, editors, ISSS, volume 3233 of LNCS, pages 134–153. Springer, 2003.
[44] Johannes Kanig and Jean-Christophe Filliâtre. Who: a verifier for effectful higher-order programs. In ML'09: Proceedings of the 2009 ACM SIGPLAN workshop on ML, pages 39–48, New York, NY, USA, 2009. ACM.
[45] Gerwin Klein, Philip Derrin, and Kevin Elphinstone. Experience report: seL4: formally verifying a high-performance microkernel. In Graham Hutton and Andrew P. Tolmach, editors, ICFP, pages 91–96. ACM, 2009.
[46] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. seL4: Formal verification of an OS kernel. In Proceedings of the 22nd Symposium on Operating Systems Principles (22nd SOSP'09), Operating Systems Review (OSR), pages 207–220, Big Sky, MT, 2009. ACM SIGOPS.
[47] Xavier Leroy. Formal certification of a compiler back-end or: programming a compiler with a proof assistant. In ACM Symposium on Principles of Programming Languages (POPL), pages 42–54, 2006.
[48] Xavier Leroy, Damien Doligez, Jacques Garrigue, Didier Rémy, and Jérôme Vouillon. The Objective Caml system, 2005.
[49] Pierre Letouzey. Programmation fonctionnelle certifiée : l'extraction de programmes dans l'assistant Coq. PhD thesis, Université Paris 11, 2004.
[50] Panagiotis Manolios and J Strother Moore. Partial functions in ACL2. 2003.
[51] Claude Marché, Christine Paulin Mohring, and Xavier Urbain. The Krakatoa tool for certification of Java/JavaCard programs annotated in JML. JLAP, 58(1-2):89–106, 2004.
[52] Nicolas Marti, Reynald Affeldt, and Akinori Yonezawa. Towards formal verification of memory properties using separation logic, 2005.
[53] Nicolas Marti, Reynald Affeldt, and Akinori Yonezawa. Verification of the heap manager of an operating system using separation logic. In Proceedings of the Third SPACE, pages 61–72, Charleston, SC, USA, 2006.
[54] Conor McBride and James McKinna. The view from the left. Journal of Functional Programming, 14(1):69–111, 2004.
[55] Andrew McCreight. Practical tactics for separation logic. In Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel, editors, TPHOLs, volume 5674 of LNCS, pages 343–358. Springer, 2009.
[56] Farhad Mehta and Tobias Nipkow. Proving pointer programs in higher-order logic. In Franz Baader, editor, Automated Deduction - CADE-19, 19th International Conference on Automated Deduction, Miami Beach, FL, USA, July 28 - August 2, 2003, Proceedings, volume 2741 of LNCS, pages 121–135. Springer, 2003.
[57] R. Milner. Communication and Concurrency. Prentice-Hall, 1989.
[58] Magnus O. Myreen. Formal Verification of Machine-Code Programs. PhD thesis, University of Cambridge, 2008.
[59] Magnus O. Myreen. Separation logic adapted for proofs by rewriting. In Matt Kaufmann and Lawrence C. Paulson, editors, ITP, volume 6172 of LNCS, pages 485–489. Springer, 2010.
[60] Magnus O. Myreen and Michael J. C. Gordon. Verified LISP implementations on ARM, x86 and PowerPC. In Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel, editors, TPHOLs, volume 5674 of LNCS, pages 359–374. Springer, 2009.
[61] A. Nanevski and G. Morrisett. Dependent type theory of stateful higher-order functions. Technical report, Citeseer, 2005.
[62] Aleksandar Nanevski, Greg Morrisett, Avi Shinnar, Paul Govereau, and Lars Birkedal. Ynot: Reasoning with the awkward squad. In ACM International Conference on Functional Programming (ICFP), 2008.
[63] Aleksandar Nanevski, J. Gregory Morrisett, and Lars Birkedal. Hoare type theory, polymorphism and separation. Journal of Functional Programming, 18(5-6):865–911, 2008.
[64] Aleksandar Nanevski, Viktor Vafeiadis, and Josh Berdine. Structuring the verification of heap-manipulating programs. In Manuel V. Hermenegildo and Jens Palsberg, editors, Proceedings of the 37th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2010, Madrid, Spain, January 17-23, 2010, pages 261–274. ACM, 2010.
[65] Zhaozhong Ni and Zhong Shao. Certified assembly programming with embedded code pointers. In ACM Symposium on Principles of Programming Languages (POPL), 2006.
[66] Peter O'Hearn and David Pym. The logic of bunched implications. Bulletin of Symbolic Logic, 5(2):215–244, 1999.
[67] Peter O'Hearn, John Reynolds, and Hongseok Yang. Local reasoning about programs that alter data structures. In Proceedings of CSL'01, volume 2142 of LNCS, pages 1–19, Berlin, 2001. Springer-Verlag.
[68] Peter W. O'Hearn. Resources, concurrency and local reasoning. Theoretical Computer Science, 375(1-3):271–307, 2007.
[69] Chris Okasaki. Purely Functional Data Structures. Cambridge University Press, 1999.
[70] David Park. Concurrency and automata on infinite sequences. In Peter Deussen, editor, Theoretical Computer Science: 5th GI-Conference, Karlsruhe, volume 104 of LNCS, pages 167–183, Berlin, Heidelberg, and New York, 1981. Springer-Verlag.
[71] Matthew J. Parkinson and Gavin M. Bierman. Separation logic, abstraction and inheritance. In George C. Necula and Philip Wadler, editors, POPL, pages 75–86. ACM, 2008.
[72] Matthew J. Parkinson, Richard Bornat, and Peter W. O'Hearn. Modular verification of a non-blocking stack. In Martin Hofmann and Matthias Felleisen, editors, POPL, pages 297–302. ACM, 2007.
[73] Benjamin C. Pierce and David N. Turner. Simple type-theoretic foundations for object-oriented programming. Journal of Functional Programming, 4(2):207–247, 1994.
[74] Alexandre Pilkiewicz and François Pottier. The essence of monotonic state. In Proceedings of the Sixth ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI'11), Austin, Texas, 2011.
[75] François Pottier. Hiding local state in direct style: A higher-order anti-frame rule. In LICS, pages 331–340. IEEE Computer Society, 2008.
[76] Yann Régis-Gianas and François Pottier. A Hoare logic for call-by-value functional programs. In International Conference on Mathematics of Program Construction (MPC), 2008.
[77] John C. Reynolds. Separation logic: A logic for shared mutable data structures. In IEEE Symposium on Logic in Computer Science (LICS), pages 55–74, 2002.
[78] K. Rustan M. Leino and M. Moskal. VACID-0: Verification of Ample Correctness of Invariants of Data-structures, Edition 0. 2010.
[79] Peter H. Schmitt, Mattias Ulbrich, and Benjamin Weiß. Dynamic frames in Java dynamic logic. In Bernhard Beckert and Claude Marché, editors, Proceedings, International Conference on Formal Verification of Object-Oriented Software (FoVeOOS 2010), volume 6528 of LNCS, pages 138–152. Springer, 2011.
[80] Jan Schwinghammer, Hongseok Yang, Lars Birkedal, François Pottier, and Bernhard Reus. A semantic foundation for hidden state. In C.-H. L. Ong, editor, Proceedings of the 13th International Conference on Foundations of Software Science and Computational Structures (FOSSACS 2010), volume 6014 of LNCS, pages 2–17. Springer, 2010.
[81] Matthieu Sozeau. Program-ing finger trees in Coq. SIGPLAN Not., 42(9):13–24, 2007.
[82] Harvey Tuch, Gerwin Klein, and Michael Norrish. Types, bytes, and separation logic. In Martin Hofmann and Matthias Felleisen, editors, POPL, pages 97–108. ACM, 2007.
[83] Thomas Tuerk. Local reasoning about while-loops. In VSTTE, LNCS, 2010.
[84] David Wagner, Jeffrey S. Foster, Eric A. Brewer, and Alexander Aiken. A first step towards automated detection of buffer overrun vulnerabilities. In NDSS. The Internet Society, 2000.
[85] Hongseok Yang, Oukseh Lee, Josh Berdine, Cristiano Calcagno, Byron Cook, Dino Distefano, and Peter W. O'Hearn. Scalable shape analysis for systems code. In Aarti Gupta and Sharad Malik, editors, CAV, volume 5123 of LNCS, pages 385–398. Springer, 2008.
[86] Hongseok Yang and Peter W. O'Hearn. A semantic basis for local reasoning. In Mogens Nielsen and Uffe Engberg, editors, FoSSaCS, volume 2303 of LNCS, pages 402–416. Springer, 2002.
[87] Nobuko Yoshida, Kohei Honda, and Martin Berger. Logical reasoning for higher-order functions with local state. In International Conference on Foundations of Software Science and Computation Structures (FOSSACS), volume 4423 of LNCS, pages 361–377. Springer, 2007.
[88] Karen Zee, Viktor Kuncak, and Martin C. Rinard. An integrated proof language for imperative programs. In Michael Hind and Amer Diwan, editors, PLDI, pages 338–351. ACM, 2009.