Commit 4e48ffb8 authored by Adam Chlipala

Finished 2011 pass through Intro

parent 6174adf9
@@ -9,7 +9,7 @@ VS := $(MODULES:%=src/%.v)
VS_DOC := $(MODULES_DOC:%=%.v)
TEMPLATES := $(MODULES_CODE:%=templates/%.v)
.PHONY: coq clean doc html templates install cpdt.tgz pdf
coq: Makefile.coq
$(MAKE) -f Makefile.coq
@@ -60,4 +60,7 @@ install: cpdt.tgz latex/cpdt.pdf html
cp cpdt.tgz staging/
cp latex/cpdt.pdf staging/
cp -R html staging/
rsync -az --exclude '*~' staging/* chlipala.net:sites/chlipala/adam/cpdt/
pdf:
evince latex/cpdt.pdf&
@@ -46,17 +46,15 @@ The license text is available at:
\chapter{Introduction}% *)
(** * Whence This Book? *)
(**
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant. Almost every subject covered is also relevant to interactive computer theorem proving in general, such as for traditional mathematical theorems. In fact, I hope to demonstrate how verified programs are useful as building blocks in all sorts of formalizations.
Research into mechanized theorem proving began around the 1970s, and some of the earliest practical work involved Nqthm%~\cite{Nqthm}\index{Nqthm}%, the %``%#"#Boyer-Moore Theorem Prover,#"#%''% which was used to prove such theorems as correctness of a complete hardware and software stack%~\cite{Piton}%. ACL2%~\cite{CAR}\index{ACL2}%, Nqthm's successor, has seen significant industry adoption, for instance, by AMD to verify correctness of floating-point division units%~\cite{AMD}%.
Around the beginning of the 21st century, the pace of progress in practical applications of interactive theorem proving accelerated significantly. Several well-known formal developments have been carried out in Coq, the system that this book deals with. In the realm of pure mathematics, Georges Gonthier built a machine-checked proof of the four color theorem%~\cite{4C}%, a mathematical problem first posed more than a hundred years before, where the only previous proofs had required trusting ad-hoc software to do brute-force checking of key facts. In the realm of program verification, Xavier Leroy led the CompCert project to produce a verified C compiler back-end%~\cite{CompCert}% robust enough to use with real embedded software.
Many other recent projects have attracted attention by proving important theorems using computer proof assistant software. For instance, the L4.verified project%~\cite{seL4}% has given a mechanized proof of correctness for a realistic microkernel, using the Isabelle/HOL proof assistant%~\cite{Isabelle/HOL}\index{Isabelle/HOL}%. The amount of ongoing work in the area is so large that I cannot hope to list all the recent successes, so from this point I will assume that the reader is convinced both that we ought to want machine-checked proofs and that they seem to be feasible to produce. (To readers not yet convinced, I suggest a Web search for %``%#"#machine-checked proof#"#%''%!)
@@ -100,20 +98,20 @@ This book is going to be about certified programming using Coq, and I am convinc
(** ** Based on a Higher-Order Functional Programming Language *)
(**
%\index{higher-order vs. first-order languages}%There is no reason to give up the familiar comforts of functional programming when you start writing certified programs. All of the tools I listed are based on functional programming languages, which means you can use them without their proof-related aspects to write and run regular programs.
%\index{ACL2}%ACL2 is notable in this field for having only a %\textit{%#<i>#first-order#</i>#%}% language at its foundation. That is, you cannot work with functions over functions and all those other treats of functional programming. By giving up this facility, ACL2 can make broader assumptions about how well its proof automation will work, but we can generally recover the same advantages in other proof assistants when we happen to be programming in first-order fragments.
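
To be concrete about what first-order languages rule out, here is a tiny sketch of a higher-order function in Coq syntax. (The name [compose] is mine, for illustration only; nothing later depends on it.)

[[
(* A function whose arguments are themselves functions *)
Definition compose (A B C : Set) (g : B -> C) (f : A -> B)
  (x : A) : C := g (f x).
]]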
*)
(** ** Dependent Types *)
(**
A language with %\textit{%#<i>#dependent types#</i>#%}% may include references to programs inside of types. For instance, the type of an array might include a program expression giving the size of the array, making it possible to verify absence of out-of-bounds accesses statically. Dependent types can go even further than this, effectively capturing any correctness property in a type. For instance, later in this book, we will see how to give a Mini-ML compiler a type that guarantees that it maps well-typed source programs to well-typed target programs.
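
To make the array example concrete, here is a sketch of a length-indexed list type in Coq; the name [ilist] is just for illustration here, not a definition that later chapters rely on.

[[
(* Lists whose lengths appear in their types *)
Inductive ilist (A : Set) : nat -> Set :=
| Nil : ilist A O
| Cons : forall n, A -> ilist A n -> ilist A (S n).
]]

A head function can then be given the type [forall A n, ilist A (S n) -> A], so that applying it to an empty list is a static type error.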
%\index{ACL2}%ACL2 and %\index{HOL}%HOL lack dependent types outright. %\index{PVS}%PVS and %\index{Twelf}%Twelf each supports a different strict subset of Coq's dependent type language. Twelf's type language is restricted to a bare-bones, monomorphic lambda calculus, which places serious restrictions on how complicated %\textit{%#<i>#computations inside types#</i>#%}% can be. This restriction is important for the soundness argument behind Twelf's approach to representing and checking proofs.
In contrast, %\index{PVS}%PVS's dependent types are much more general, but they are squeezed inside the single mechanism of %\textit{%#<i>#subset types#</i>#%}%, where a normal type is refined by attaching a predicate over its elements. Each member of the subset type is an element of the base type that satisfies the predicate. Chapter 6 of this book introduces that style of programming in Coq, while the remaining chapters of Part II deal with features of dependent typing in Coq that go beyond what PVS supports.
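
For a taste of what subset types look like in Coq, consider this sketch, with illustrative names of my own choosing:

[[
(* The positive naturals, as a subset type *)
Definition pos := { n : nat | n > 0 }.

(* An element pairs a base value with a proof of the predicate. *)
Definition one_pos : pos := exist _ 1 (le_n 1).
]]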
Dependent types are not just useful because they help you express correctness properties in types. Dependent types also often let you write certified programs %\textit{%#<i>#without writing anything that looks like a proof#</i>#%}%. Even with subset types, which for many contexts can be used to express any relevant property with enough acrobatics, the human driving the proof assistant usually has to build some proofs explicitly. Writing formal proofs is hard, so we want to avoid it as much as possible, which makes dependent types invaluable.
@@ -122,27 +120,27 @@ Dependent types are not just useful because they help you express correctness pr
(** ** An Easy-to-Check Kernel Proof Language *)
(**
%\index{de Bruijn criterion}%Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure. Proof assistants satisfy the %``%#"#de Bruijn criterion#"#%''% when they produce %\textit{%#<i>#proof terms#</i>#%}% in small kernel languages, even when they use complicated and extensible procedures to seek out proofs in the first place. These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics (e.g., ZF set theory). To believe a proof, we can ignore the possibility of bugs during %\textit{%#<i>#search#</i>#%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%#<i>#result#</i>#%}% of the search.
Coq meets the de Bruijn criterion, while %\index{ACL2}%ACL2 and %\index{PVS}%PVS do not, as they employ fancy decision procedures that produce no %``%#"#evidence trails#"#%''% justifying their results.
*)
(** ** Convenient Programmable Proof Automation *)
(**
A commitment to a kernel proof language opens up wide possibilities for user extension of proof automation systems, without allowing user mistakes to trick the overall system into accepting invalid proofs. Almost any interesting verification problem is undecidable, so it is important to help users build their own procedures for solving the restricted problems that they encounter in particular theorems.
%\index{Twelf}%Twelf features no proof automation marked as a bona fide part of the latest release; there is some automation code included for testing purposes. The Twelf style is based on writing out all proofs in full detail. Because Twelf is specialized to the domain of syntactic metatheory proofs about programming languages and logics, it is feasible to use it to write those kinds of proofs manually. Outside that domain, the lack of automation can be a serious obstacle to productivity. Most kinds of program verification fall outside Twelf's forte.
Of the remaining tools, all can support user extension with new decision procedures by hacking directly in the tool's implementation language (such as OCaml for Coq). Since %\index{ACL2}%ACL2 and %\index{PVS}%PVS do not satisfy the de Bruijn criterion, overall correctness is at the mercy of the authors of new procedures.
%\index{Isabelle/HOL}%Isabelle/HOL and Coq both support coding new proof manipulations in ML in ways that cannot lead to the acceptance of invalid proofs. Additionally, Coq includes a domain-specific language for coding decision procedures in normal Coq source code, with no need to break out into ML. This language is called %\index{Ltac}%Ltac, and I think of it as the unsung hero of the proof assistant world. Not only does Ltac prevent you from making fatal mistakes, it also includes a number of novel programming constructs which combine to make a %``%#"#proof by decision procedure#"#%''% style very pleasant. We will meet these features in the chapters to come.
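
As a tiny taste of Ltac, here is a complete, if throwaway, tactic definition (not part of this book's tactic library) that repeatedly breaks apart conjunctive hypotheses, together with a theorem it dispatches:

[[
Ltac break_and :=
  repeat match goal with
           | [ H : _ /\ _ |- _ ] => destruct H
         end.

Theorem demo : forall P Q R : Prop, P /\ Q /\ R -> Q.
  intros; break_and; assumption.
Qed.
]]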
*)
(** ** Proof by Reflection *)
(**
%\index{reflection}\index{proof by reflection}%A surprising wealth of benefits follows from choosing a proof language that integrates a rich notion of computation. Coq includes programs and proof terms in the same syntactic class. This makes it easy to write programs that compute proofs. With rich enough dependent types, such programs are %\textit{%#<i>#certified decision procedures#</i>#%}%. In such cases, these certified procedures can be put to good use %\textit{%#<i>#without ever running them#</i>#%}%! Their types guarantee that, if we did bother to run them, we would receive proper %``%#"#ground#"#%''% proofs.
The critical ingredient for this technique, many of whose instances are referred to as %\textit{%#<i>#proof by reflection#</i>#%}%, is a way of inducing non-trivial computation inside of logical propositions during proof checking. Further, most of these instances require dependent types to make it possible to state the appropriate theorems. Of the proof assistants I listed, only Coq really provides this support.
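
To give just a flavor of the idea in a simplified sketch (the names are mine, and later chapters treat reflection properly), consider a boolean test for evenness, written as an ordinary program. Because proof checking reduces terms, the kernel actually runs the program while checking the proof below:

[[
Fixpoint check_even (n : nat) : bool :=
  match n with
    | O => true
    | S O => false
    | S (S n') => check_even n'
  end.

(* The kernel evaluates [check_even 1024] during proof checking. *)
Theorem even_1024 : check_even 1024 = true.
  reflexivity.
Qed.
]]

Pairing such a function with a once-and-for-all soundness proof yields a certified decision procedure in the sense above.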
*)
@@ -151,11 +149,11 @@ The critical ingredient for this technique, many of whose instances are referred
(** * Why Not a Different Dependently Typed Language? *)
(**
The logic and programming language behind Coq belongs to a type-theory ecosystem with a good number of other thriving members. %\index{Agda}Agda\footnote{\url{http://appserv.cs.chalmers.se/users/ulfn/wiki/agda.php}}%#<a href="http://appserv.cs.chalmers.se/users/ulfn/wiki/agda.php">Agda</a># and %\index{Epigram}Epigram\footnote{\url{http://www.e-pig.org/}}%#<a href="http://www.e-pig.org/">Epigram</a># are the most developed tools among the alternatives to Coq, and there are others that are earlier in their lifecycles. All of the languages in this family feel sort of like different historical offshoots of Latin. The hardest conceptual epiphanies are, for the most part, portable among all the languages. Given this, why choose Coq for certified programming?
I think the answer is simple. None of the competition has well-developed systems for tactic-based theorem proving. Agda and Epigram are designed and marketed more as programming languages than proof assistants. Dependent types are great, because they often help you prove deep theorems without doing anything that feels like proving. Nonetheless, almost any interesting certified programming project will benefit from some activity that deserves to be called proving, and many interesting projects absolutely require semi-automated proving, to protect the sanity of the programmer. Informally, proving is unavoidable when any correctness proof for a program has a structure that does not mirror the structure of the program itself. An example is a compiler correctness proof, which probably proceeds by induction on program execution traces, which have no simple relationship with the structure of the compiler or the structure of the programs it compiles. In building such proofs, a mature system for scripted proof automation is invaluable.
On the other hand, Agda, Epigram, and similar tools have less implementation baggage associated with them, and so they tend to be the default first homes of innovations in practical type theory. Some significant kinds of dependently typed programs are much easier to write in Agda and Epigram than in Coq. The former tools may very well be superior choices for projects that do not involve any %``%#"#proving.#"#%''% Anecdotally, I have gotten the impression that manual proving is orders of magnitude more costly than manual coping with Coq's lack of programming bells and whistles. In this book, I will devote significant space to patterns for programming with dependent types in Coq as it is today. We can hope that the type theory community is tending towards convergence on the right set of features for practical programming with dependent types, and that we will eventually have a single tool embodying those features.
*)
@@ -164,7 +162,7 @@ On the other hand, Agda, Epigram, and similar tools have less implementation bag
(**
In comparisons with its competitors, Coq is often derided for promoting unreadable proofs. It is very easy to write proof scripts that manipulate proof goals imperatively, with no structure to aid readers. Such developments are nightmares to maintain, and they certainly do not manage to convey %``%#"#why the theorem is true#"#%''% to anyone but the original author. One additional (and not insignificant) purpose of this book is to show why it is unfair and unproductive to dismiss Coq based on the existence of such developments.
I will go out on a limb and guess that the reader is a fan of some programming language, and that he may even have been involved in teaching that language to undergraduates. I want to propose an analogy between two attitudes: coming to a negative conclusion about Coq after reading common Coq developments in the wild, and coming to a negative conclusion about Your Favorite Language after looking at the programs undergraduates write in it in the first week of class. The pragmatics of mechanized proving and program verification have been under serious study for much less time than the pragmatics of programming have been. The computer theorem proving community is still developing the key insights that correspond to those that programming texts and instructors impart to their students, to help those students get over that critical hump where using the language stops being more trouble than it is worth. Most of the insights for Coq are barely even disseminated among the experts, let alone set down in a tutorial form. I hope to use this book to go a long way towards remedying that.
If I do that job well, then this book should be of interest even to people who have participated in classes or tutorials specifically about Coq. The book should even be useful to people who have been using Coq for years but who are mystified when their Coq developments prove impenetrable by colleagues. The crucial angle in this book is that there are %``%#"#design patterns#"#%''% for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently typed functions and custom Ltac decision procedures.
*)
@@ -173,7 +171,7 @@ If I do that job well, then this book should be of interest even to people who h
(** * Prerequisites *)
(**
I try to keep the required background knowledge to a minimum in this book. I will assume familiarity with the material from usual discrete math and logic courses taken by undergraduate computer science majors, and I will assume that readers have significant experience programming in one of the ML dialects, in Haskell, or in some other, closely related language. Experience with only dynamically typed functional languages might lead to befuddlement in some places, but a reader who has come to understand Scheme deeply will probably be fine.
Part IV of this manuscript is about formalizing programming languages and compilers. That part certainly depends on basic knowledge of formal type systems, operational semantics, and the theorems usually proved about such systems. As a reference on these topics, I recommend %\emph{%#<i>#Types and Programming Languages#</i>#%}~\cite{TAPL}%, by Benjamin C. Pierce. However, my current plan is to break Part IV into a separate, online-only document, since I expect the formalization interests of many readers of the book to lie outside of programming languages. I do often use examples from programming languages in the earlier parts of the book, but I have tried to design them to be comprehensible on the basis of ML or Haskell experience alone.
*)
@@ -193,11 +191,59 @@ There, you can find all of the code appearing in this book, with prose intersper
A traditional printed version of the book is slated to appear from MIT Press in the future. The online versions will remain available at no cost even after the printed book is released, and I intend to keep the source code up-to-date with bug fixes and compatibility changes to track new Coq releases.
%\index{graphical interfaces to Coq}%I believe that a good graphical interface to Coq is crucial for using it productively. I use the %\index{Proof General}Proof General\footnote{\url{http://proofgeneral.inf.ed.ac.uk/}}%#<a href="http://proofgeneral.inf.ed.ac.uk/">Proof General</a># mode for Emacs, which supports a number of other proof assistants besides Coq. There is also the standalone %\index{CoqIDE}%CoqIDE program developed by the Coq team. I like being able to combine certified programming and proving with other kinds of work inside the same full-featured editor, and CoqIDE has had a good number of crashes and other annoying bugs in recent history, though I hear that it is improving. In the initial part of this book, I will reference Proof General procedures explicitly, in introducing how to use Coq, but most of the book will be interface-agnostic, so feel free to use CoqIDE if you prefer it. The one issue with CoqIDE, regarding running through the book source, is that I will sometimes begin a proof attempt but cancel it with the Coq [Abort] or #<span class="inlinecode"><span class="id" type="keyword">#%\coqdockw{%Restart%}%#</span></span># commands, which CoqIDE does not support. It would be bad form to leave such commands lying around in a real, finished development, but I find these commands helpful in writing single source files that trace a user's thought process in designing a proof.
*)
(** ** Reading This Book *)
(**
For experts in functional programming or formal methods, learning to use Coq is not hard, in a sense. The Coq manual%~\cite{CoqManual}% and the textbook by Bertot and Cast%\'%eran%~\cite{CoqArt}% have helped many people become productive Coq users. However, I believe that the best ways to manage significant Coq developments are far from settled. In this book, I mean to propose my own techniques, and, rather than treating them as advanced material for a final chapter or two, I employ them from the very beginning. After a first chapter showing off what can be done with dependent types, I retreat into simpler programming styles for the first part of the book. The other main thrust of the book, Ltac proof automation, I adopt more or less from the very start of the technical exposition.
Some readers have suggested that I give multiple recommended reading orders in this introduction, targeted at people with different levels of Coq expertise. It is certainly true that Part I of the book devotes significant space to basic concepts that most Coq users already know quite well. However, as I am introducing these concepts, I am also developing my preferred automated proof style, so I think even the chapters on basics are worth reading for experienced Coq hackers.
Readers with no prior Coq experience can ignore the preceding discussion! I hope that my heavy reliance on proof automation early on will seem like the most natural way to go, such that you may wonder why others are spending so much time entering sequences of proof steps manually.
Coq is a very complex system, with many different commands driven more by pragmatic concerns than by any overarching aesthetic principle. When I use some construct for the first time, I try to give a one-sentence intuition for what it accomplishes, but I leave the details to the Coq reference manual%~\cite{CoqManual}%. I expect that readers interested in complete understandings will be consulting that manual frequently; in that sense, this book is not meant to be completely standalone.
*)
(** ** On the Tactic Library *)
(**
To make it possible to start from fancy proof automation, rather than working up to it, I have included with the book source a library of %\emph{%#<i>#tactics#</i>#%}%, or programs that find proofs, since the built-in Coq tactics do not support a high enough level of automation. I use these tactics even from the first chapter with code examples.
The next chapter will introduce Coq with some simple examples; the final section of this chapter gives step-by-step instructions on getting set up to run the book chapters interactively, assuming that you have Coq and Proof General installed.
Some readers have asked about the pragmatics of using this tactic library in their own developments. My position there is that this tactic library was designed with the specific examples of the book in mind; I do not recommend using it in other settings. Part III should impart the necessary skills to reimplement these tactics and beyond. One generally deals with undecidable problems in interactive theorem proving, so there can be no tactic that solves all goals, though the %\index{Tactics!crush@\texttt{crush}}%[crush] tactic that we will meet soon may sometimes feel like that! There are still very useful tricks found in the implementations of [crush] and its cousins, so it may be useful to examine the commented source file %\texttt{%#<tt>#Tactics.v#</tt>#%}%.%\footnote{It's not actually commented yet. \texttt{;-)}}% I implement a new tactic library for each new project, since each project involves a different mix of undecidable theories where a different set of heuristics turns out to work well; and that is what I recommend others do, too.
*)
(** ** Installation and Emacs Set-Up *)
(**
At the start of the next chapter, I assume that you have installed Coq and Proof General. The code in this book is tested with Coq version 8.3pl2, though parts may work with other versions.
%\index{Proof General|(}%To set up your Proof General environment to process the source to this chapter, a few simple steps are required.
%\begin{enumerate}%#<ol>#
%\item %#<li>#Get the book source from
%\begin{center}\url{http://adam.chlipala.net/cpdt/cpdt.tgz}\end{center}%#<blockquote><tt><a href="http://adam.chlipala.net/cpdt/cpdt.tgz">http://adam.chlipala.net/cpdt/cpdt.tgz</a></tt></blockquote></li>#
%\item %#<li>#Unpack the tarball to some directory %\texttt{%#<tt>#DIR#</tt>#%}%.#</li>#
%\item %#<li>#Run %\texttt{%#<tt>#make#</tt>#%}% in %\texttt{%#<tt>#DIR#</tt>#%}%.#</li>#
%\item %#<li>#There are some minor headaches associated with getting Proof General to pass the proper command line arguments to the %\texttt{%#<tt>#coqtop#</tt>#%}% program, which provides the interactive Coq toplevel. The best way to add settings that will be shared by many source files is to add a custom variable setting to your %\index{.emacs file@\texttt{.emacs} file}\texttt{%#<tt>#.emacs#</tt>#%}% file, like this:
%\begin{verbatim}%#<pre>#(custom-set-variables
...
'(coq-prog-args '("-I" "DIR/src"))
...
)#</pre>#%\end{verbatim}%
The extra arguments demonstrated here are the proper choices for working with the code for this book. The ellipses stand for other Emacs customization settings you may already have. It can be helpful to save several alternate sets of flags in your %\texttt{%#<tt>#.emacs#</tt>#%}% file, with all but one commented out within the %\texttt{%#<tt>#custom-set-variables#</tt>#%}% block at any given time.#</li>#
#</ol>#%\end{enumerate}%
Every chapter of this book is generated from a commented Coq source file. You can load these files and run through them step-by-step in Proof General. Be sure to run the Coq binary %\texttt{%#<tt>#coqtop#</tt>#%}% with the command-line argument %\texttt{%#<tt>#-I DIR/src#</tt>#%}%. If you have installed Proof General properly, the Coq mode should start automatically when you visit a %\texttt{%#<tt>#.v#</tt>#%}% buffer in Emacs, and the above advice on %\texttt{%#<tt>#.emacs#</tt>#%}% settings should ensure that the proper arguments are passed to %\texttt{%#<tt>#coqtop#</tt>#%}% by Emacs.
With Proof General, the portion of a buffer that Coq has processed is highlighted in some way, like being given a blue background. You step through Coq source files by positioning the point at the position you want Coq to run to and pressing C-C C-RET. This can be used both for normal step-by-step coding, by placing the point inside some command past the end of the highlighted region; and for undoing, by placing the point inside the highlighted region.
%\index{Proof General|)}% *)
(** %\section{Chapter Source Files}
@@ -21,32 +21,8 @@ Set Implicit Arguments.
(** I will start off by jumping right in to a fully-worked set of examples, building certified compilers from increasingly complicated source languages to stack machines. We will meet a few useful tactics and see how they can be used in manual proofs, and we will also see how easily these proofs can be automated instead. This chapter is not meant to give full explanations of the features that are employed. Rather, it is meant more as an advertisement of what is possible. Later chapters will introduce all of the concepts in bottom-up fashion.
As always, you can step through the source file %\texttt{%#<tt>#StackMachine.v#</tt>#%}% for this chapter interactively in Proof General. Alternatively, to get a feel for the whole lifecycle of creating a Coq development, you can enter the pieces of source code in this chapter in a new %\texttt{%#<tt>#.v#</tt>#%}% file in an Emacs buffer. If you do the latter, include two lines %\index{Vernacular commands!Require}%[Require Import] #<span class="inlinecode"><span class="id" type="var">#%\coqdocconstructor{%Arith%}%#</span></span># [Bool] #<span class="inlinecode"><span class="id" type="var">#%\coqdocconstructor{%List%}%#</span></span># [Tactics.] and %\index{Vernacular commands!Set Implicit Arguments}%[Set Implicit] #<span class="inlinecode"><span class="id" type="keyword">#%\coqdockw{%Arguments%}%#</span></span>#[.] at the start of the file, to match some code hidden in this rendering of the chapter source. In general, similar commands will be hidden in the book rendering of each chapter's source code, so you will need to insert them in from-scratch replayings of the code that is presented. To be more specific, every chapter begins with some imports of other modules, followed by [Set Implicit] #<span class="inlinecode"><span class="id" type="keyword">#%\coqdockw{%Arguments%}%#</span></span>#[.], where the latter affects the default behavior of definitions regarding type inference.
*)
(** * Arithmetic Expressions Over Natural Numbers *)
@@ -101,15 +77,15 @@ Definition binopDenote := fun b =>
]]
Languages like Haskell and ML have a convenient %\emph{%#<i>#principal typing#</i>#%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language.
This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\emph{%#<i>#Calculus of Inductive Constructions (CIC)#</i>#%}%, which is an extension of the older %\emph{%#<i>#Calculus of Constructions (CoC)#</i>#%}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\emph{%#<i>#strong normalization#</i>#%}%, meaning that every program (and, more importantly, every proof term) terminates; and %\emph{%#<i>#relative consistency#</i>#%}% with systems like versions of Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory.
Coq is actually based on an extension of CIC called %\emph{%#<i>#Gallina#</i>#%}%. The text after the [:=] and before the period in the last code example is a term of Gallina. Gallina adds many useful features that are not compiled internally to more primitive CIC features. The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission.
Next, there is %\emph{%#<i>#Ltac#</i>#%}%, Coq's domain-specific language for writing proofs and decision procedures. We will see some basic examples of Ltac later in this chapter, and much of this book is devoted to more involved Ltac examples.
Finally, commands like [Inductive] and [Definition] are part of %\emph{%#<i>#the vernacular#</i>#%}%, which includes all sorts of useful queries and requests to the Coq system. Every Coq source file is a series of vernacular commands, where many command forms take arguments that are Gallina or Ltac programs. (Actually, Coq source files are more like %\emph{%#<i>#trees#</i>#%}% of vernacular commands, thanks to various nested scoping constructs.)
%\medskip%
@@ -233,13 +209,13 @@ Abort.
(* end hide *)
(* begin thide *)
(** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\emph{%#<i>#strengthening the induction hypothesis#</i>#%}%. We do that by proving an auxiliary lemma:
*)
Lemma compile_correct' : forall e p s,
progDenote (compile e ++ p) s = progDenote p (expDenote e :: s).
(** After the period in the [Lemma] command, we are in %\emph{%#<i>#the interactive proof-editing mode#</i>#%}%. We find ourselves staring at this ominous screen of text:
[[
1 subgoal
@@ -254,7 +230,7 @@ Coq seems to be restating the lemma for us. What we are seeing is a limited cas
Next in the output, we see our single subgoal described in full detail. There is a double-dashed line, above which would be our free variables and hypotheses, if we had any. Below the line is the conclusion, which, in general, is to be proved from the hypotheses.
We manipulate the proof state by running commands called %\emph{%#<i>#tactics#</i>#%}%. Let us start out by running one of the most important tactics:
*)
induction e.
@@ -582,13 +558,13 @@ Inductive tbinop : type -> type -> type -> Set :=
| TEq : forall t, tbinop t t Bool
| TLt : tbinop Nat Nat Bool.
(** The definition of [tbinop] is different from [binop] in an important way. Where we declared that [binop] has type [Set], here we declare that [tbinop] has type [type -> type -> type -> Set]. We define [tbinop] as an %\emph{%#<i>#indexed type family#</i>#%}%. Indexed inductive types are at the heart of Coq's expressive power; almost everything else of interest is defined in terms of them.
ML and Haskell have indexed algebraic datatypes. For instance, their list types are indexed by the type of data that the list carries. However, compared to Coq, ML and Haskell 98 place two important restrictions on datatype definitions.
First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition. There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat]. %\emph{%#<i>#Generalized algebraic datatypes (GADTs)#</i>#%}% are a popular feature in GHC Haskell and other languages that removes this first restriction.
The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\emph{%#<i>#expressions#</i>#%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally.
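
To see the difference concretely, consider this sketch with throwaway names of my own, where a function type mentions the index [n + m], an arbitrary program expression that no ML or Haskell 98 datatype declaration could express:

[[
Inductive vec (A : Set) : nat -> Set :=
| VNil : vec A O
| VCons : forall n, A -> vec A n -> vec A (S n).

(* A concatenation function can be given this type, whose result
   index is a computed expression. *)
Check (forall (A : Set) n m, vec A n -> vec A m -> vec A (n + m)).
]]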
*)
(** We can define a similar type family for typed expressions. *)
@@ -631,9 +607,9 @@ Definition tbinopDenote arg1 arg2 res (b : tbinop arg1 arg2 res)
| TLt => lessThan
end.
(** This function has just a few differences from the denotation functions we saw earlier. First, [tbinop] is an indexed type, so its indices become additional arguments to [tbinopDenote]. Second, we need to perform a genuine %\emph{%#<i>#dependent pattern match#</i>#%}% to come up with a definition of this function that type-checks. In each branch of the [match], we need to use branch-specific information about the indices to [tbinop]. General type inference that takes such information into account is undecidable, so it is often necessary to write annotations, like we see above on the line with [match].
The [in] annotation restates the type of the term being case-analyzed. Though we use the same names for the indices as we use in the type of the original argument binder, these are actually fresh variables, and they are %\emph{%#<i>#binding occurrences#</i>#%}%. Their scope is the [return] clause. That is, [arg1], [arg2], and [res] are new bound variables bound only within the return clause [typeDenote arg1 -> typeDenote arg2 -> typeDenote res]. By being explicit about the functional relationship between the type indices and the match result, we regain decidable type inference.
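
The same idiom shows up in simpler settings. Reusing the illustrative [vec] type sketched in the last section, a safe head function can use [in] and [return] clauses to give the [VNil] branch, which can never apply to a vector of length [S n], the trivial result type [unit]:

[[
Definition vhd (A : Set) n (v : vec A (S n)) : A :=
  match v in vec _ n' return match n' with
                               | O => unit
                               | S _ => A
                             end with
    | VNil => tt
    | VCons _ x _ => x
  end.
]]

The same mechanism is at work in [tbinopDenote] above.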
In fact, recent Coq versions use some heuristics that can save us the trouble of writing [match] annotations, and those heuristics get the job done in this case. We can get away with writing just: *)
@@ -779,7 +755,7 @@ The term "(n, s)" has type "(nat * vstack ts)%type"
Recall from our earlier discussion of [match] annotations that we write the annotations to express to the type-checker the relationship between the type indices of the case object and the result type of the [match]. Coq chooses to assign to the wildcard [_] after [TINConst] the name [t], and the type error is telling us that the type checker cannot prove that [t] is the same as [ts]. By moving [s] out of the [match], we lose the ability to express, with [in] and [return] clauses, the relationship between the shared index [ts] of [s] and [i].
There %\emph{%#<i>#are#</i>#%}% reasonably general ways of getting around this problem without pushing binders inside [match]es. However, the alternatives are significantly more involved, and the technique we use here is almost certainly the best choice, whenever it applies.
*)
@@ -812,7 +788,7 @@ Fixpoint tcompile t (e : texp t) (ts : tstack) : tprog ts (t :: ts) :=
(tconcat (tcompile e1 _) (TCons (TIBinop _ b) (TNil _)))
end.
(** One interesting feature of the definition is the underscores appearing to the right of [=>] arrows. Haskell and ML programmers are quite familiar with compilers that infer type parameters to polymorphic values. In Coq, it is possible to go even further and ask the system to infer arbitrary terms, by writing underscores in place of specific values. You may have noticed that we have been calling functions without specifying all of their arguments. For instance, the recursive calls here to [tcompile] omit the [t] argument. Coq's %\emph{%#<i>#implicit argument#</i>#%}% mechanism automatically inserts underscores for arguments that it will probably be able to infer. Inference of such values is far from complete, though; generally, it only works in cases similar to those encountered with polymorphic type instantiation in Haskell and ML.
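
As a standalone sketch of the mechanism, with illustrative names: after [Set Implicit Arguments.], arguments that can be deduced from the types of later arguments are inserted automatically.

[[
Set Implicit Arguments.

Definition apply_twice A (f : A -> A) (x : A) : A := f (f x).

(* [A] is instantiated to [nat] automatically, as if we had
   written [apply_twice (A := nat) S O]. *)
Eval compute in apply_twice S O.
]]

Returning to [tcompile]: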
The underscores here are being filled in with stack types. That is, the Coq type inferencer is, in a sense, inferring something about the flow of control in the translated programs. We can take a look at exactly which values are filled in: *)