Commit 8390aa9b authored by Adam Chlipala

A pass of double-quotes and LaTeX operator beautification

parent 561db96c
@@ -33,7 +33,7 @@ This would leave us with [bad tt] as a proof of [P].
There are also algorithmic considerations that make universal termination very desirable. We have seen how tactics like [reflexivity] compare terms up to equivalence under computational rules. Calls to recursive, pattern-matching functions are simplified automatically, with no need for explicit proof steps. It would be very hard to hold onto that kind of benefit if it became possible to write non-terminating programs; we would be running smack into the halting problem.
One solution is to use types to contain the possibility of non-termination. For instance, we can create a "non-termination monad," inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible.
One solution is to use types to contain the possibility of non-termination. For instance, we can create a %``%#"#non-termination monad,#"#%''% inside which we must write all of our general-recursive programs. This is a heavyweight solution, and so we would like to avoid it whenever possible.
Luckily, Coq has special support for a class of lazy data structures that happens to contain most examples found in Haskell. That mechanism, %\textit{%#<i>#co-inductive types#</i>#%}%, is the subject of this chapter. *)
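(* As a first taste, here is a sketch (not the chapter's own development) of a co-inductive stream type and a guarded co-recursive value.  The co-recursive call appears directly beneath a constructor, satisfying the guardedness condition discussed below. *)

CoInductive stream (A : Type) : Type :=
| Cons : A -> stream A -> stream A.

CoFixpoint ones : stream nat := Cons nat 1 ones.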
@@ -328,7 +328,7 @@ Theorem ones_eq' : stream_eq ones ones'.
Abort.
(* end thide *)
(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with "hiding" the co-inductive hypothesis. *)
(** The standard [auto] machinery sees that our goal matches an assumption and so applies that assumption, even though this violates guardedness. One usually starts a proof like this by [destruct]ing some parameter and running a custom tactic to figure out the first proof rule to apply for each case. Alternatively, there are tricks that can be played with %``%#"#hiding#"#%''% the co-inductive hypothesis. *)
(** * Simple Modeling of Non-Terminating Programs *)
@@ -387,7 +387,7 @@ Section safe.
-> safe pc' r'
-> safe pc r.
(** Now we can prove that any starting address and register bank lead to safe infinite execution. Recall that proofs of existentially-quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to "open up" such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%#<i>#intro pattern#</i>#%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *)
(** Now we can prove that any starting address and register bank lead to safe infinite execution. Recall that proofs of existentially-quantified formulas are all built with a single constructor of the inductive type [ex]. This means that we can use [destruct] to %``%#"#open up#"#%''% such proofs. In the proof below, we want to perform this opening up on an appropriate use of the [exec_total] lemma. This lemma's conclusion begins with two existential quantifiers, so we want to tell [destruct] that it should not stop at the first quantifier. We accomplish our goal by using an %\textit{%#<i>#intro pattern#</i>#%}% with [destruct]. Consult the Coq manual for the details of intro patterns; the specific pattern [[? [? ?]]] that we use here accomplishes our goal of destructing both quantifiers at once. *)
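(* A self-contained sketch of that intro pattern on a toy goal: *)

Theorem destruct_both_quantifiers : (exists x : nat, exists y : nat, x + y = 2)
  -> 2 = 2.
  intro H.
  (* The pattern [[? [? ?]]] opens up both existential quantifiers at once. *)
  destruct H as [? [? ?]].
  reflexivity.
Qed.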
Theorem always_safe : forall pc rs,
safe pc rs.
@@ -412,7 +412,7 @@ Print always_safe.
%\item%#<li># Define a function [map] for building an output tree out of two input trees by traversing them in parallel and applying a two-argument function to their corresponding data values.#</li>#
%\item%#<li># Define a tree [falses] where every node has the value [false]. (A sketch of one possible shape appears after this list.)#</li>#
%\item%#<li># Define a tree [true_false] where the root node has value [true], its children have value [false], all nodes at the next level have value [true], and so on, alternating boolean values from level to level.#</li>#
%\item%#<li># Prove that [true_false] is equal to the result of mapping the boolean "or" function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#</li>#
%\item%#<li># Prove that [true_false] is equal to the result of mapping the boolean %``%#"#or#"#%''% function [orb] over [true_false] and [falses]. You can make [orb] available with [Require Import Bool.]. You may find the lemma [orb_false_r] from the same module helpful. Your proof here should not be about the standard equality [=], but rather about some new equality relation that you define.#</li>#
#</ol>#%\end{enumerate}% #</li>#
#</ol>#%\end{enumerate}% *)
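(* For the tree exercises above, a sketch of one possible shape for the type and for [falses]; the constructor name [Node] is only a suggestion. *)

CoInductive tree (A : Type) : Type :=
| Node : tree A -> A -> tree A -> tree A.

CoFixpoint falses : tree bool := Node bool falses false falses.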
@@ -23,7 +23,7 @@ Set Implicit Arguments.
(** * The Definitional Equality *)
(** We have seen many examples so far where proof goals follow "by computation." That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially. Exactly when this works and when it does not depends on the details of Coq's %\textit{%#<i>#definitional equality#</i>#%}%. This is an untyped binary relation appearing in the formal metatheory of CIC. CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal.
(** We have seen many examples so far where proof goals follow %``%#"#by computation.#"#%''% That is, we apply computational reduction rules to reduce the goal to a normal form, at which point it follows trivially. Exactly when this works and when it does not depends on the details of Coq's %\textit{%#<i>#definitional equality#</i>#%}%. This is an untyped binary relation appearing in the formal metatheory of CIC. CIC contains a typing rule allowing the conclusion $E : T$ from the premise $E : T'$ and a proof that $T$ and $T'$ are definitionally equal.
The [cbv] tactic will help us illustrate the rules of Coq's definitional equality. We redefine the natural number predecessor function in a somewhat convoluted way and construct a manual proof that it returns [0] when applied to [1]. *)
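(* One plausible rendering of such a definition and proof, as a sketch: *)

Definition pred' (x : nat) :=
  match x with
    | O => O
    | S n' => let y := n' in y
  end.

Theorem reduce_me : pred' 1 = 0.
  (* Apply one family of reduction rules at a time: [delta] unfolds [pred'],
     [beta] simplifies the function application, [iota] the [match], and
     [zeta] the [let]. *)
  cbv delta.
  cbv beta.
  cbv iota.
  cbv zeta.
  reflexivity.
Qed.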
@@ -95,12 +95,12 @@ Qed.
(** The standard [eq] relation is critically dependent on the definitional equality. [eq] is often called a %\textit{%#<i>#propositional equality#</i>#%}%, because it reifies definitional equality as a proposition that may or may not hold. Standard axiomatizations of an equality predicate in first-order logic define equality in terms of properties it has, like reflexivity, symmetry, and transitivity. In contrast, for [eq] in Coq, those properties are implicit in the properties of the definitional equality, which are built into CIC's metatheory and the implementation of Gallina. We could add new rules to the definitional equality, and [eq] would keep its definition and methods of use.
This all may make it sound like the choice of [eq]'s definition is unimportant. To the contrary, in this chapter, we will see examples where alternate definitions may simplify proofs. Before that point, we will introduce effective proof methods for goals that use proofs of the standard propositional equality "as data." *)
This all may make it sound like the choice of [eq]'s definition is unimportant. To the contrary, in this chapter, we will see examples where alternate definitions may simplify proofs. Before that point, we will introduce effective proof methods for goals that use proofs of the standard propositional equality %``%#"#as data.#"#%''% *)
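(* For reference, the definition in question is a single-constructor inductive family; [Print] shows (approximately, depending on Coq version): *)

Print eq.
(* Inductive eq (A : Type) (x : A) : A -> Prop := refl_equal : x = x *)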
(** * Heterogeneous Lists Revisited *)
(** One of our example dependent data structures from the last chapter was heterogeneous lists and their associated "cursor" type. The recursive version poses some special challenges related to equality proofs, since it uses such proofs in its definition of [member] types. *)
(** One of our example dependent data structures from the last chapter was heterogeneous lists and their associated %``%#"#cursor#"#%''% type. The recursive version poses some special challenges related to equality proofs, since it uses such proofs in its definition of [member] types. *)
Section fhlist.
Variable A : Type.
@@ -193,7 +193,7 @@ The term "refl_equal ?98" has type "?98 = ?98"
In retrospect, the problem is not so hard to see. Reflexivity proofs only show [x = x] for particular values of [x], whereas here we are thinking in terms of a proof of [a = elm], where the two sides of the equality are not equal syntactically. Thus, the essential lemma we need does not even type-check!
Is it time to throw in the towel? Luckily, the answer is "no." In this chapter, we will see several useful patterns for proving obligations like this.
Is it time to throw in the towel? Luckily, the answer is %``%#"#no.#"#%''% In this chapter, we will see several useful patterns for proving obligations like this.
For this particular example, the solution is surprisingly straightforward. [destruct] has a simpler sibling [case] which should behave identically for any inductive type with one constructor of no arguments. *)
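(* A toy demonstration of [case] on an inductive type with one constructor of no arguments, as a sketch: *)

Theorem unit_eq_tt : forall u : unit, u = tt.
  intro u.
  case u.  (* replaces [u] by the only possible constructor, [tt] *)
  reflexivity.
Qed.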
@@ -297,7 +297,7 @@ The term "refl_equal x'" has type "x' = x'" while it is expected to have type
]]
The type error comes from our [return] annotation. In that annotation, the [as]-bound variable [pf'] has type [x = x'], referring to the [in]-bound variable [x']. To do a dependent [match], we %\textit{%#<i>#must#</i>#%}% choose a fresh name for the second argument of [eq]. We are just as constrained to use the "real" value [x] for the first argument. Thus, within the [return] clause, the proof we are matching on %\textit{%#<i>#must#</i>#%}% equate two non-matching terms, which makes it impossible to equate that proof with reflexivity.
The type error comes from our [return] annotation. In that annotation, the [as]-bound variable [pf'] has type [x = x'], referring to the [in]-bound variable [x']. To do a dependent [match], we %\textit{%#<i>#must#</i>#%}% choose a fresh name for the second argument of [eq]. We are just as constrained to use the %``%#"#real#"#%''% value [x] for the first argument. Thus, within the [return] clause, the proof we are matching on %\textit{%#<i>#must#</i>#%}% equate two non-matching terms, which makes it impossible to equate that proof with reflexivity.
Nonetheless, it turns out that, with one catch, we %\textit{%#<i>#can#</i>#%}% prove this lemma. *)
@@ -323,9 +323,9 @@ fun U : Type => Eq_rect_eq.eq_rect_eq U
]]
[eq_rect_eq] states a "fact" that seems like common sense, once the notation is deciphered. [eq_rect] is the automatically-generated recursion principle for [eq]. Calling [eq_rect] is another way of [match]ing on an equality proof. The proof we match on is the argument [h], and [x] is the body of the [match]. [eq_rect_eq] just says that [match]es on proofs of [p = p], for any [p], are superfluous and may be removed.
[eq_rect_eq] states a %``%#"#fact#"#%''% that seems like common sense, once the notation is deciphered. [eq_rect] is the automatically-generated recursion principle for [eq]. Calling [eq_rect] is another way of [match]ing on an equality proof. The proof we match on is the argument [h], and [x] is the body of the [match]. [eq_rect_eq] just says that [match]es on proofs of [p = p], for any [p], are superfluous and may be removed.
Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq. This proposition is introduced as an axiom; that is, a proposition asserted as true without proof. We cannot assert just any statement without proof. Adding [False] as an axiom, for instance, would allow us to prove any proposition, defeating the point of using a proof assistant. In general, we need to be sure that we never assert %\textit{%#<i>#inconsistent#</i>#%}% sets of axioms. A set of axioms is inconsistent if its conjunction implies [False]. For the case of [eq_rect_eq], consistency has been verified outside of Coq via "informal" metatheory.
Perhaps surprisingly, we cannot prove [eq_rect_eq] from within Coq. This proposition is introduced as an axiom; that is, a proposition asserted as true without proof. We cannot assert just any statement without proof. Adding [False] as an axiom, for instance, would allow us to prove any proposition, defeating the point of using a proof assistant. In general, we need to be sure that we never assert %\textit{%#<i>#inconsistent#</i>#%}% sets of axioms. A set of axioms is inconsistent if its conjunction implies [False]. For the case of [eq_rect_eq], consistency has been verified outside of Coq via %``%#"#informal#"#%''% metatheory.
This axiom is equivalent to another that is more commonly known and mentioned in type theory circles. *)
@@ -339,7 +339,7 @@ fun U : Type => UIP_refl__Streicher_K U (UIP_refl U)
]]
This is the unfortunately-named "Streicher's axiom K," which says that a predicate on properly-typed equality proofs holds of all such proofs if it holds of reflexivity. *)
This is the unfortunately-named %``%#"#Streicher's axiom K,#"#%''% which says that a predicate on properly-typed equality proofs holds of all such proofs if it holds of reflexivity. *)
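(* A small consequence, provable here with [UIP_refl] from the same module (a sketch): any proof of [tt = tt] must be reflexivity. *)

Theorem tt_eq_unique : forall pf : tt = tt, pf = refl_equal tt.
  intros; apply UIP_refl.
Qed.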
End fhlist_map.
@@ -560,7 +560,7 @@ Inductive JMeq (A : Type) (x : A) : forall B : Type, B -> Prop :=
]]
[JMeq] stands for "John Major equality," a name coined by Conor McBride as a sort of pun about British politics. [JMeq] starts out looking a lot like [eq]. The crucial difference is that we may use [JMeq] %\textit{%#<i>#on arguments of different types#</i>#%}%. For instance, a lemma that we failed to establish before is trivial with [JMeq]. It makes for prettier theorem statements to define some syntactic shorthand first. *)
[JMeq] stands for %``%#"#John Major equality,#"#%''% a name coined by Conor McBride as a sort of pun about British politics. [JMeq] starts out looking a lot like [eq]. The crucial difference is that we may use [JMeq] %\textit{%#<i>#on arguments of different types#</i>#%}%. For instance, a lemma that we failed to establish before is trivial with [JMeq]. It makes for prettier theorem statements to define some syntactic shorthand first. *)
Infix "==" := JMeq (at level 70, no associativity).
@@ -734,7 +734,7 @@ Theorem JMeq_eq' : forall (A : Type) (x y : A),
rewrite (UIP_refl _ _ x0); reflexivity.
Qed.
(** We see that, in a very formal sense, we are free to switch back and forth between the two styles of proofs about equality proofs. One style may be more convenient than the other for some proofs, but we can always interconvert between our results. The style that does not use heterogeneous equality may be preferable in cases where many results do not require the tricks of this chapter, since then the use of axioms is avoided altogether for the simple cases, and a wider audience will be able to follow those "simple" proofs. On the other hand, heterogeneous equality often makes for shorter and more readable theorem statements.
(** We see that, in a very formal sense, we are free to switch back and forth between the two styles of proofs about equality proofs. One style may be more convenient than the other for some proofs, but we can always interconvert between our results. The style that does not use heterogeneous equality may be preferable in cases where many results do not require the tricks of this chapter, since then the use of axioms is avoided altogether for the simple cases, and a wider audience will be able to follow those %``%#"#simple#"#%''% proofs. On the other hand, heterogeneous equality often makes for shorter and more readable theorem statements.
It is worth remarking that it is possible to avoid axioms altogether for equalities on types with decidable equality. The [Eqdep_dec] module of the standard library contains a parametric proof of [UIP_refl] for such cases. *)
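(* For instance, a sketch for [nat], involving no axioms, via [UIP_dec] from [Eqdep_dec] and decidable equality on naturals: *)

Require Import Eqdep_dec Peano_dec.

Theorem nat_UIP_refl : forall (n : nat) (pf : n = n), pf = refl_equal n.
  intros; apply UIP_dec; apply eq_nat_dec.
Qed.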
(* end thide *)
@@ -763,7 +763,7 @@ Theorem S_eta : S = (fun n => S n).
Qed.
(* end thide *)
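(* Another small use of the same idea, assuming the [ext_eq] extensionality axiom that this chapter introduces (a sketch): *)

Theorem plus_0_eta : (fun n => n + 0) = (fun n => n).
  apply ext_eq; intro n; simpl.
  rewrite <- plus_n_O; reflexivity.
Qed.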
(** The same axiom can help us prove equality of types, where we need to "reason under quantifiers." *)
(** The same axiom can help us prove equality of types, where we need to %``%#"#reason under quantifiers.#"#%''% *)
Theorem forall_eq : (forall x : nat, match x with
| O => True
@@ -811,29 +811,29 @@ Qed.
%\item%#<li># Define a datatype [type] of lambda types, including just booleans and function types.#</li>#
%\item%#<li># Define a type family [exp : list type -> type -> Type] of lambda expressions, including boolean constants, variables, and function application and abstraction. (A sketch of one possible shape for [type] and [exp] appears after this exercise.)#</li>#
%\item%#<li># Implement a definitional interpreter for [exp]s, by way of a recursive function over expressions and substitutions for free variables, like in the related example from the last chapter.#</li>#
%\item%#<li># Implement a function [subst : forall t' ts t, exp (t' :: ts) t -> exp ts t' -> exp ts t]. The type of the first expression indicates that its most recently bound free variable has type [t']. The second expression also has type [t'], and the job of [subst] is to substitute the second expression for every occurrence of the "first" variable of the first expression.#</li>#
%\item%#<li># Implement a function [subst : forall t' ts t, exp (t' :: ts) t -> exp ts t' -> exp ts t]. The type of the first expression indicates that its most recently bound free variable has type [t']. The second expression also has type [t'], and the job of [subst] is to substitute the second expression for every occurrence of the %``%#"#first#"#%''% variable of the first expression.#</li>#
%\item%#<li># Prove that [subst] preserves program meanings. That is, prove
[[
forall t' ts t (e : exp (t' :: ts) t) (e' : exp ts t') (s : hlist typeDenote ts),
expDenote (subst e e') s = expDenote e (expDenote e' s ::: s)
]]
where [:::] is an infix operator for heterogeneous "cons" that is defined in the book's [DepList] module.#</li>#
where [:::] is an infix operator for heterogeneous %``%#"#cons#"#%''% that is defined in the book's [DepList] module.#</li>#
#</ol>#%\end{enumerate}%
The material presented up to this point should be sufficient to enable a good solution of this exercise, with enough ingenuity. If you get stuck, it may be helpful to use the following structure. None of these elements need to appear in your solution, but we can at least guarantee that there is a reasonable solution based on them.
%\begin{enumerate}%#<ol>#
%\item%#<li># The [DepList] module will be useful. You can get the standard dependent list definitions there, instead of copying-and-pasting from the last chapter. It is worth reading over the source for that module, since it defines some new helpful functions and notations that we did not use in the last chapter.#</li>#
%\item%#<li># Define a recursive function [liftVar : forall ts1 ts2 t t', member t (ts1 ++ ts2) -> member t (ts1 ++ t' :: ts2)]. This function should "lift" a de Bruijn variable so that its type refers to a new variable inserted somewhere in the index list.#</li>#
%\item%#<li># Define a recursive function [lift' : forall ts t (e : exp ts t) ts1 ts2 t', ts = ts1 ++ ts2 -> exp (ts1 ++ t' :: ts2) t] which performs a similar lifting on an [exp]. The convoluted type is to get around restrictions on [match] annotations. We delay "realizing" that the first index of [e] is built with list concatenation until after a dependent [match], and the new explicit proof argument must be used to cast some terms that come up in the [match] body.#</li>#
%\item%#<li># Define a recursive function [liftVar : forall ts1 ts2 t t', member t (ts1 ++ ts2) -> member t (ts1 ++ t' :: ts2)]. This function should %``%#"#lift#"#%''% a de Bruijn variable so that its type refers to a new variable inserted somewhere in the index list.#</li>#
%\item%#<li># Define a recursive function [lift' : forall ts t (e : exp ts t) ts1 ts2 t', ts = ts1 ++ ts2 -> exp (ts1 ++ t' :: ts2) t] which performs a similar lifting on an [exp]. The convoluted type is to get around restrictions on [match] annotations. We delay %``%#"#realizing#"#%''% that the first index of [e] is built with list concatenation until after a dependent [match], and the new explicit proof argument must be used to cast some terms that come up in the [match] body.#</li>#
%\item%#<li># Define a function [lift : forall ts t t', exp ts t -> exp (t' :: ts) t], which handles simpler top-level lifts. This should be an easy one-liner based on [lift'].#</li>#
%\item%#<li># Define a recursive function [substVar : forall ts1 ts2 t t', member t (ts1 ++ t' :: ts2) -> (t' = t) + member t (ts1 ++ ts2)]. This function is the workhorse behind substitution applied to a variable. It returns [inl] to indicate that the variable we pass to it is the variable that we are substituting for, and it returns [inr] to indicate that the variable we are examining is %\textit{%#<i>#not#</i>#%}% the one we are substituting for. In the first case, we get a proof that the necessary typing relationship holds, and, in the second case, we get the original variable modified to reflect the removal of the substitutee from the typing context.#</li>#
%\item%#<li># Define a recursive function [subst' : forall ts t (e : exp ts t) ts1 t' ts2, ts = ts1 ++ t' :: ts2 -> exp (ts1 ++ ts2) t' -> exp (ts1 ++ ts2) t]. This is the workhorse of substitution in expressions, employing the same proof-passing trick as for [lift']. You will probably want to use [lift] somewhere in the definition of [subst'].#</li>#
%\item%#<li># Now [subst] should be a one-liner, defined in terms of [subst'].#</li>#
%\item%#<li># Prove a correctness theorem for each auxiliary function, leading up to the proof of [subst] correctness.#</li>#
%\item%#<li># All of the reasoning about equality proofs in these theorems follows a regular pattern. If you have an equality proof that you want to replace with [refl_equal] somehow, run [generalize] on that proof variable. Your goal is to get to the point where you can [rewrite] with the original proof to change the type of the generalized version. To avoid type errors (the infamous "second-order unification" failure messages), it will be helpful to run [generalize] on other pieces of the proof context that mention the equality's lefthand side. You might also want to use [generalize dependent], which generalizes not just one variable but also all variables whose types depend on it. [generalize dependent] has the sometimes-helpful property of removing from the context all variables that it generalizes. Once you do manage the mind-bending trick of using the equality proof to rewrite its own type, you will be able to rewrite with [UIP_refl].#</li>#
%\item%#<li># All of the reasoning about equality proofs in these theorems follows a regular pattern. If you have an equality proof that you want to replace with [refl_equal] somehow, run [generalize] on that proof variable. Your goal is to get to the point where you can [rewrite] with the original proof to change the type of the generalized version. To avoid type errors (the infamous %``%#"#second-order unification#"#%''% failure messages), it will be helpful to run [generalize] on other pieces of the proof context that mention the equality's lefthand side. You might also want to use [generalize dependent], which generalizes not just one variable but also all variables whose types depend on it. [generalize dependent] has the sometimes-helpful property of removing from the context all variables that it generalizes. Once you do manage the mind-bending trick of using the equality proof to rewrite its own type, you will be able to rewrite with [UIP_refl].#</li>#
%\item%#<li># A variant of the [ext_eq] axiom from the end of this chapter is available in the book module [Axioms], and you will probably want to use it in the [lift'] and [subst'] correctness proofs.#</li>#
%\item%#<li># The [change] tactic should come in handy in the proofs about [lift] and [subst], where you want to introduce "extraneous" list concatenations with [nil] to match the forms of earlier theorems.#</li>#
%\item%#<li># Be careful about [destruct]ing a term "too early." You can use [generalize] on proof terms to bring into the proof context any important propositions about the term. Then, when you [destruct] the term, it is updated in the extra propositions, too. The [case_eq] tactic is another alternative to this approach, based on saving an equality between the original term and its new form.#</li>#
%\item%#<li># The [change] tactic should come in handy in the proofs about [lift] and [subst], where you want to introduce %``%#"#extraneous#"#%''% list concatenations with [nil] to match the forms of earlier theorems.#</li>#
%\item%#<li># Be careful about [destruct]ing a term %``%#"#too early.#"#%''% You can use [generalize] on proof terms to bring into the proof context any important propositions about the term. Then, when you [destruct] the term, it is updated in the extra propositions, too. The [case_eq] tactic is another alternative to this approach, based on saving an equality between the original term and its new form.#</li>#
#</ol>#%\end{enumerate}%
#</li>#
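To make the first items concrete, here is a sketch of one possible shape for [type] and [exp], assuming the book's [DepList] module for [member]; the constructor names are only suggestions.
[[
Inductive type : Type :=
| Bool : type
| Arrow : type -> type -> type.

Inductive exp : list type -> type -> Type :=
| Const : forall ts, bool -> exp ts Bool
| Var : forall ts t, member t ts -> exp ts t
| App : forall ts dom ran,
  exp ts (Arrow dom ran) -> exp ts dom -> exp ts ran
| Abs : forall ts dom ran,
  exp (dom :: ts) ran -> exp ts (Arrow dom ran).
]]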
(* Copyright (c) 2008-2009, Adam Chlipala
(* Copyright (c) 2008-2010, Adam Chlipala
*
* This work is licensed under a
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -37,6 +37,7 @@ Module Source.
(* end hide *)
Notation "'Nat'" := TNat : source_scope.
(** printing --> $\longrightarrow$ *)
Infix "-->" := Arrow (right associativity, at level 60) : source_scope.
Open Scope source_scope.
@@ -127,7 +128,7 @@ Module Source.
-> exp_equiv G (#v1) (#v2)
| EqConst : forall G n,
exp_equiv G (^n) (^n)
exp_equiv G (^ n) (^ n)
| EqPlus : forall G x1 y1 x2 y2,
exp_equiv G x1 x2
-> exp_equiv G y1 y2
@@ -158,6 +159,7 @@ Module CPS.
| Prod : type -> type -> type.
Notation "'Nat'" := TNat : cps_scope.
(** printing ---> $\Longrightarrow$ *)
Notation "t --->" := (Cont t) (at level 61) : cps_scope.
Infix "**" := Prod (right associativity, at level 60) : cps_scope.
@@ -277,11 +279,12 @@ Import Source CPS.
Fixpoint cpsType (t : Source.type) : CPS.type :=
match t with
| Nat => Nat%cps
| t1 --> t2 => (cpsType t1 ** (cpsType t2 --->) --->)%cps
| t1 --> t2 => (cpsType t1 ** (cpsType t2 ---> ) ---> )%cps
end%source.
(** Now we can define the expression translation. The notation [x <-- e1; e2] stands for translating source-level expression [e1], binding [x] to the CPS-level result of running the translated program, and then evaluating CPS-level expression [e2] in that context. *)
(** printing <-- $\longleftarrow$ *)
Reserved Notation "x <-- e1 ; e2" (right associativity, at level 76, e1 at next level).
Section cpsExp.
@@ -298,7 +301,7 @@ Section cpsExp.
| Var _ v => fun k => k v
| Const n => fun k =>
x <- ^n;
x <- ^ n;
k x
| Plus e1 e2 => fun k =>
x1 <-- e1;
@@ -309,7 +312,7 @@
| App _ _ e1 e2 => fun k =>
f <-- e1;
x <-- e2;
kf <- \r, k r;
kf <- \ r, k r;
p <- [x, kf];
f @@ p
| Abs _ _ e' => fun k =>
(* Copyright (c) 2008-2009, Adam Chlipala
(* Copyright (c) 2008-2010, Adam Chlipala
*
* This work is licensed under a
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -25,7 +25,7 @@ In this chapter, we will see that HOAS cannot be implemented directly in Coq. H
(** * Classic HOAS *)
(** The motto of HOAS is simple: represent object language binders using meta language binders. Here, "object language" refers to the language being formalized, while the meta language is the language in which the formalization is done. Our usual meta language, Coq's Gallina, contains the standard binding facilities of functional programming, making it a promising base for higher-order encodings.
(** The motto of HOAS is simple: represent object language binders using meta language binders. Here, %``%#"#object language#"#%''% refers to the language being formalized, while the meta language is the language in which the formalization is done. Our usual meta language, Coq's Gallina, contains the standard binding facilities of functional programming, making it a promising base for higher-order encodings.
Recall the concrete encoding of basic untyped lambda calculus expressions. *)
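(* One such concrete, first-order encoding, as a sketch with variables named by strings; the constructor names here are hypothetical. *)

Require Import String.

Inductive uexp : Set :=
| UVar : string -> uexp
| UApp : uexp -> uexp -> uexp
| UAbs : string -> uexp -> uexp.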
@@ -500,6 +500,7 @@ Infix "@" := plug (no associativity, at level 60).
(** Finally, we have the step relation itself, which combines our ingredients in the standard way. In the congruence rule, we introduce the extra variable [E1] and its associated equality to make the rule easier for [eauto] to apply. *)
(** printing ==> $\Rightarrow$ *)
Reserved Notation "E1 ==> E2" (no associativity, at level 90).
Inductive Step : forall t, Exp t -> Exp t -> Prop :=
@@ -579,6 +580,7 @@ Qed.
We must start by defining the big-step semantics itself. The definition is completely standard. *)
(** printing ===> $\Longrightarrow$ *)
Reserved Notation "E1 ===> E2" (no associativity, at level 90).
Inductive BigStep : forall t, Exp t -> Exp t -> Prop :=
@@ -606,6 +608,7 @@ Hint Constructors BigStep.
(** To prove a crucial intermediate lemma, we will want to name the transitive-reflexive closure of the small-step relation. *)
(* begin thide *)
(** printing ==>* $\Rightarrow^*$ *)
Reserved Notation "E1 ==>* E2" (no associativity, at level 90).
Inductive MultiStep : forall t, Exp t -> Exp t -> Prop :=
@@ -66,7 +66,7 @@ Qed.
]]
%\noindent%...which corresponds to "proof by case analysis" in classical math. For non-recursive inductive types, the two tactics will always have identical behavior. Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses.
%\noindent%...which corresponds to %``%#"#proof by case analysis#"#%''% in classical math. For non-recursive inductive types, the two tactics will always have identical behavior. Often case analysis is sufficient, even in proofs about recursive types, and it is nice to avoid introducing unneeded induction hypotheses.
What exactly %\textit{%#<i>#is#</i>#%}% the induction principle for [unit]? We can ask Coq: *)
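Check unit_ind.
(* The standard answer, for reference:
     unit_ind : forall P : unit -> Prop, P tt -> forall u : unit, P u *)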
@@ -329,7 +329,7 @@ Check nat_list_ind.
%\medskip%
In general, we can implement any "tree" types as inductive types. For example, here are binary trees of naturals. *)
In general, we can implement any %``%#"#tree#"#%''% types as inductive types. For example, here are binary trees of naturals. *)
Inductive nat_btree : Set :=
| NLeaf : nat_btree
@@ -579,7 +579,7 @@ Inductive formula : Set :=
| And : formula -> formula -> formula
| Forall : (nat -> formula) -> formula.
(** Our kinds of formulas are equalities between naturals, conjunction, and universal quantification over natural numbers. We avoid needing to include a notion of "variables" in our type, by using Coq functions to encode quantification. For instance, here is the encoding of [forall x : nat, x = x]: *)
(** Our kinds of formulas are equalities between naturals, conjunction, and universal quantification over natural numbers. We avoid needing to include a notion of %``%#"#variables#"#%''% in our type, by using Coq functions to encode quantification. For instance, here is the encoding of [forall x : nat, x = x]: *)
Example forall_refl : formula := Forall (fun x => Eq x x).
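(* The payoff is that a denotation function can interpret [Forall] with Gallina's own universal quantifier; a sketch: *)

Fixpoint formulaDenote (f : formula) : Prop :=
  match f with
    | Eq n1 n2 => n1 = n2
    | And f1 f2 => formulaDenote f1 /\ formulaDenote f2
    | Forall f' => forall n : nat, formulaDenote (f' n)
  end.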
@@ -936,7 +936,7 @@ Section nat_tree_ind'.
]]
Coq rejects this definition, saying "Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest." There is no deep theoretical reason why this program should be rejected; Coq applies incomplete termination-checking heuristics, and it is necessary to learn a few of the most important rules. The term "nested inductive type" hints at the solution to this particular problem. Just like true mutually-inductive types require mutually-recursive induction principles, nested types require nested recursion. *)
Coq rejects this definition, saying %``%#"#Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest.#"#%''% There is no deep theoretical reason why this program should be rejected; Coq applies incomplete termination-checking heuristics, and it is necessary to learn a few of the most important rules. The term %``%#"#nested inductive type#"#%''% hints at the solution to this particular problem. Just like true mutually-inductive types require mutually-recursive induction principles, nested types require nested recursion. *)
Fixpoint nat_tree_ind' (tr : nat_tree) : P tr :=
match tr with
@@ -1051,7 +1051,7 @@ The advantage of using the hint is not very clear here, because the original pro
Theorem true_neq_false : true <> false.
(* begin thide *)
(** We begin with the tactic [red], which is short for "one step of reduction," to unfold the definition of logical negation. *)
(** We begin with the tactic [red], which is short for %``%#"#one step of reduction,#"#%''% to unfold the definition of logical negation. *)
red.
(** [[
@@ -1128,11 +1128,11 @@ Qed.
(** %\begin{enumerate}%#<ol>#
%\item%#<li># Define an inductive type [truth] with three constructors, [Yes], [No], and [Maybe]. [Yes] stands for certain truth, [No] for certain falsehood, and [Maybe] for an unknown situation. Define "not," "and," and "or" for this replacement boolean algebra. Prove that your implementation of "and" is commutative and distributes over your implementation of "or."#</li>#
%\item%#<li># Define an inductive type [truth] with three constructors, [Yes], [No], and [Maybe]. [Yes] stands for certain truth, [No] for certain falsehood, and [Maybe] for an unknown situation. Define %``%#"#not,#"#%''% %``%#"#and,#"#%''% and %``%#"#or#"#%''% for this replacement boolean algebra. Prove that your implementation of %``%#"#and#"#%''% is commutative and distributes over your implementation of %``%#"#or.#"#%''%#</li>#
%\item%#<li># Modify the first example language of Chapter 2 to include variables, where variables are represented with [nat]. Extend the syntax and semantics of expressions to accommodate the change. Your new [expDenote] function should take as a new extra first argument a value of type [var -> nat], where [var] is a synonym for naturals-as-variables, and the function assigns a value to each variable. Define a constant folding function which does a bottom-up pass over an expression, at each stage replacing every binary operation on constants with an equivalent constant. Prove that constant folding preserves the meanings of expressions.#</li>#
%\item%#<li># Reimplement the second example language of Chapter 2 to use mutually-inductive types instead of dependent types. That is, define two separate (non-dependent) inductive types [nat_exp] and [bool_exp] for expressions of the two different types, rather than a single indexed type. To keep things simple, you may consider only the binary operators that take naturals as operands. Add natural number variables to the language, as in the last exercise, and add an "if" expression form taking as arguments one boolean expression and two natural number expressions. Define semantics and constant-folding functions for this new language. Your constant folding should simplify not just binary operations (returning naturals or booleans) with known arguments, but also "if" expressions with known values for their test expressions but possibly undetermined "then" and "else" cases. Prove that constant-folding a natural number expression preserves its meaning.#</li>#
%\item%#<li># Reimplement the second example language of Chapter 2 to use mutually-inductive types instead of dependent types. That is, define two separate (non-dependent) inductive types [nat_exp] and [bool_exp] for expressions of the two different types, rather than a single indexed type. To keep things simple, you may consider only the binary operators that take naturals as operands. Add natural number variables to the language, as in the last exercise, and add an %``%#"#if#"#%''% expression form taking as arguments one boolean expression and two natural number expressions. Define semantics and constant-folding functions for this new language. Your constant folding should simplify not just binary operations (returning naturals or booleans) with known arguments, but also %``%#"#if#"#%''% expressions with known values for their test expressions but possibly undetermined %``%#"#then#"#%''% and %``%#"#else#"#%''% cases. Prove that constant-folding a natural number expression preserves its meaning.#</li>#
%\item%#<li># Using a reflexive inductive definition, define a type [nat_tree] of infinitary trees, with natural numbers at their leaves and a countable infinity of new trees branching out of each internal node. Define a function [increment] that increments the number in every leaf of a [nat_tree]. Define a function [leapfrog] over a natural [i] and a tree [nt]. [leapfrog] should recurse into the [i]th child of [nt], the [i+1]st child of that node, the [i+2]nd child of the next node, and so on, until reaching a leaf, in which case [leapfrog] should return the number at that leaf. Prove that the result of any call to [leapfrog] is incremented by one by calling [increment] on the tree.#</li>#
(* Copyright (c) 2008-2009, Adam Chlipala
(* Copyright (c) 2008-2010, Adam Chlipala
*
* This work is licensed under a
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
@@ -64,7 +64,9 @@ Module STLC.
Notation "# v" := (Var v) (at level 70).
(** printing ^ $\dag$ *)
Notation "^ n" := (Const n) (at level 70).
(** printing +^ $\hat{+}$ *)
Infix "+^" := Plus (left associativity, at level 79).
Infix "@" := App (left associativity, at level 77).
......
@@ -49,7 +49,7 @@ The license text is available at:
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant.
There are a good number of (though definitely not "many") tools that are in wide use today for building machine-checked mathematical proofs and machine-certified programs. This is my attempt at an exhaustive list of interactive "proof assistants" satisfying a few criteria. First, the authors of each tool must intend for it to be put to use for software-related applications. Second, there must have been enough engineering effort put into the tool that someone not doing research on the tool itself would feel his time was well spent using it. A third criterion is more of an empirical validation of the second: the tool must have a significant user community outside of its own development team.
There are a good number of (though definitely not %``%#"#many#"#%''%) tools that are in wide use today for building machine-checked mathematical proofs and machine-certified programs. This is my attempt at an exhaustive list of interactive %``%#"#proof assistants#"#%''% satisfying a few criteria. First, the authors of each tool must intend for it to be put to use for software-related applications. Second, there must have been enough engineering effort put into the tool that someone not doing research on the tool itself would feel his time was well spent using it. A third criterion is more of an empirical validation of the second: the tool must have a significant user community outside of its own development team.
%
\medskip
@@ -74,7 +74,7 @@ There are a good number of (though definitely not "many") tools that are in wide
</table>
#
Isabelle/HOL, implemented with the "proof assistant development framework" Isabelle, is the most popular proof assistant for the HOL logic. The other implementations of HOL can be considered equivalent for purposes of the discussion here.
Isabelle/HOL, implemented with the %``%#"#proof assistant development framework#"#%''% Isabelle, is the most popular proof assistant for the HOL logic. The other implementations of HOL can be considered equivalent for purposes of the discussion here.
*)
@@ -111,9 +111,9 @@ Dependent types are not just useful because they help you express correctness pr
(** ** An Easy-to-Check Kernel Proof Language *)
(**
Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure. Proof assistants satisfying the "de Bruijn criterion" may use complicated and extensible procedures to seek out proofs, but in the end they produce %\textit{%#<i>#proof terms#</i>#%}% in kernel languages. These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics. To believe a proof, we can ignore the possibility of bugs during %\textit{%#<i>#search#</i>#%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%#<i>#result#</i>#%}% of the search.
Scores of automated decision procedures are useful in practical theorem proving, but it is unfortunate to have to trust in the correct implementation of each procedure. Proof assistants satisfying the %``%#"#de Bruijn criterion#"#%''% may use complicated and extensible procedures to seek out proofs, but in the end they produce %\textit{%#<i>#proof terms#</i>#%}% in kernel languages. These core languages have feature complexity on par with what you find in proposals for formal foundations for mathematics. To believe a proof, we can ignore the possibility of bugs during %\textit{%#<i>#search#</i>#%}% and just rely on a (relatively small) proof-checking kernel that we apply to the %\textit{%#<i>#result#</i>#%}% of the search.
ACL2 and PVS do not meet the de Bruijn criterion, employing fancy decision procedures that produce no "evidence trails" justifying their results.
ACL2 and PVS do not meet the de Bruijn criterion, employing fancy decision procedures that produce no %``%#"#evidence trails#"#%''% justifying their results.
*)
(** ** Convenient Programmable Proof Automation *)
@@ -125,13 +125,13 @@ Twelf features no proof automation marked as a bonafide part of the latest relea
Of the remaining tools, all can support user extension with new decision procedures by hacking directly in the tool's implementation language (such as OCaml for Coq). Since ACL2 and PVS do not satisfy the de Bruijn criterion, overall correctness is at the mercy of the authors of new procedures.
Isabelle/HOL and Coq both support coding new proof manipulations in ML in ways that cannot lead to the acceptance of invalid proofs. Additionally, Coq includes a domain-specific language for coding decision procedures in normal Coq source code, with no need to break out into ML. This language is called Ltac, and I think of it as the unsung hero of the proof assistant world. Not only does Ltac prevent you from making fatal mistakes, it also includes a number of novel programming constructs which combine to make a "proof by decision procedure" style very pleasant. We will meet these features in the chapters to come.
Isabelle/HOL and Coq both support coding new proof manipulations in ML in ways that cannot lead to the acceptance of invalid proofs. Additionally, Coq includes a domain-specific language for coding decision procedures in normal Coq source code, with no need to break out into ML. This language is called Ltac, and I think of it as the unsung hero of the proof assistant world. Not only does Ltac prevent you from making fatal mistakes, it also includes a number of novel programming constructs which combine to make a %``%#"#proof by decision procedure#"#%''% style very pleasant. We will meet these features in the chapters to come.
*)
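(* A tiny taste of Ltac, as a sketch: a tactic that finds an [if] in the goal and case-splits on its test, plus a demonstration goal. *)

Ltac find_if :=
  match goal with
    | [ |- if ?X then _ else _ ] => destruct X
  end.

Theorem find_if_demo : forall a b : bool,
  if a then True else if b then True else True.
  intros; repeat find_if; constructor.
Qed.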
(** ** Proof by Reflection *)
(**
A surprising wealth of benefits follows from choosing a proof language that integrates a rich notion of computation. Coq includes programs and proof terms in the same syntactic class. This makes it easy to write programs that compute proofs. With rich enough dependent types, such programs are %\textit{%#<i>#certified decision procedures#</i>#%}%. In such cases, these certified procedures can be put to good use %\textit{%#<i>#without ever running them#</i>#%}%! Their types guarantee that, if we did bother to run them, we would receive proper "ground" proofs.
A surprising wealth of benefits follows from choosing a proof language that integrates a rich notion of computation. Coq includes programs and proof terms in the same syntactic class. This makes it easy to write programs that compute proofs. With rich enough dependent types, such programs are %\textit{%#<i>#certified decision procedures#</i>#%}%. In such cases, these certified procedures can be put to good use %\textit{%#<i>#without ever running them#</i>#%}%! Their types guarantee that, if we did bother to run them, we would receive proper %``%#"#ground#"#%''% proofs.
The critical ingredient for this technique, many of whose instances are referred to as %\textit{%#<i>#proof by reflection#</i>#%}%, is a way of inducing non-trivial computation inside of logical propositions during proof checking. Further, most of these instances require dependent types to make it possible to state the appropriate theorems. Of the proof assistants I listed, only Coq really provides this support.
*)
@@ -144,18 +144,18 @@ The logic and programming language behind Coq belongs to a type-theory ecosystem
I think the answer is simple. None of the competition has well-developed systems for tactic-based theorem proving. Agda and Epigram are designed and marketed more as programming languages than proof assistants. Dependent types are great, because they often help you prove deep theorems without doing anything that feels like proving. Nonetheless, almost any interesting certified programming project will benefit from some activity that deserves to be called proving, and many interesting projects absolutely require semi-automated proving, if the sanity of the programmer is to be safeguarded. Informally, proving is unavoidable when any correctness proof for a program has a structure that does not mirror the structure of the program itself. An example is a compiler correctness proof, which probably proceeds by induction on program execution traces, which have no simple relationship with the structure of the compiler or the structure of the programs it compiles. In building such proofs, a mature system for scripted proof automation is invaluable.
On the other hand, Agda, Epigram, and similar tools have less implementation baggage associated with them, and so they tend to be the default first homes of innovations in practical type theory. Some significant kinds of dependently-typed programs are much easier to write in Agda and Epigram than in Coq. The former tools may very well be superior choices for projects that do not involve any "proving." Anecdotally, I have gotten the impression that manual proving is orders of magnitude more costly than manual coping with Coq's lack of programming bells and whistles. In this book, I will devote significant time to patterns for programming with dependent types in Coq as it is today. We can hope that the type theory community is tending towards convergence on the right set of features for practical programming with dependent types, and that we will eventually have a single tool embodying those features.
On the other hand, Agda, Epigram, and similar tools have less implementation baggage associated with them, and so they tend to be the default first homes of innovations in practical type theory. Some significant kinds of dependently-typed programs are much easier to write in Agda and Epigram than in Coq. The former tools may very well be superior choices for projects that do not involve any %``%#"#proving.#"#%''% Anecdotally, I have gotten the impression that manual proving is orders of magnitude more costly than manual coping with Coq's lack of programming bells and whistles. In this book, I will devote significant time to patterns for programming with dependent types in Coq as it is today. We can hope that the type theory community is tending towards convergence on the right set of features for practical programming with dependent types, and that we will eventually have a single tool embodying those features.
*)
(** * Engineering with a Proof Assistant *)
(**
In comparisons with its competitors, Coq is often derided for promoting unreadable proofs. It is very easy to write proof scripts that manipulate proof goals imperatively, with no structure to aid readers. Such developments are nightmares to maintain, and they certainly do not manage to convey "why the theorem is true" to anyone but the original author. One additional (and not insignificant) purpose of this book is to show why it is unfair and unproductive to dismiss Coq based on the existence of such developments.
In comparisons with its competitors, Coq is often derided for promoting unreadable proofs. It is very easy to write proof scripts that manipulate proof goals imperatively, with no structure to aid readers. Such developments are nightmares to maintain, and they certainly do not manage to convey %``%#"#why the theorem is true#"#%''% to anyone but the original author. One additional (and not insignificant) purpose of this book is to show why it is unfair and unproductive to dismiss Coq based on the existence of such developments.
I will go out on a limb and guess that the reader is a dedicated fan of some functional programming language or another, and that he may even have been involved in teaching that language to undergraduates. I want to propose an analogy between two attitudes: coming to a negative conclusion about Coq after reading common Coq developments in the wild, and coming to a negative conclusion about Your Favorite Language after looking at the programs undergraduates write in it in the first week of class. The pragmatics of mechanized proving and program verification have been under serious study for much less time than the pragmatics of programming have been. The computer theorem proving community is still developing the key insights that correspond to those that functional programming texts and instructors impart to their students, to help those students get over that critical hump where using the language stops being more trouble than it is worth. Most of the insights for Coq are barely even disseminated among the experts, let alone set down in a tutorial form. I hope to use this book to go a long way towards remedying that.
If I do that job well, then this book should be of interest even to people who have participated in classes or tutorials specifically about Coq. The book should even be useful to people who have been using Coq for years but who are mystified when their Coq developments prove impenetrable by colleagues. The crucial angle in this book is that there are "design patterns" for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently-typed functions and custom Ltac decision procedures.
If I do that job well, then this book should be of interest even to people who have participated in classes or tutorials specifically about Coq. The book should even be useful to people who have been using Coq for years but who are mystified when their Coq developments prove impenetrable by colleagues. The crucial angle in this book is that there are %``%#"#design patterns#"#%''% for reliably avoiding the really grungy parts of theorem proving, and consistent use of these patterns can get you over the hump to the point where it is worth your while to use Coq to prove your theorems and certify your programs, even if formal verification is not your main concern in a project. We will follow this theme by pursuing two main methods for replacing manual proofs with more understandable artifacts: dependently-typed functions and custom Ltac decision procedures.
*)
@@ -18,7 +18,7 @@ Set Implicit Arguments.
(** %\chapter{More Dependent Types}% *)
(** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up "free theorems" to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML.
(** Subset types and their relatives help us integrate verification with programming. Though they reorganize the certified programmer's workflow, they tend not to have deep effects on proofs. We write largely the same proofs as we would for classical verification, with some of the structure moved into the programs themselves. It turns out that, when we use dependent types to their full potential, we warp the development and proving process even more than that, picking up %``%#"#free theorems#"#%''% to the extent that often a certified program is hardly more complex than its uncertified counterpart in Haskell or ML.
In particular, we have only scratched the tip of the iceberg that is Coq's inductive definition mechanism. The inductive types we have seen so far have their counterparts in the other proof assistants that we surveyed in Chapter 1. This chapter explores the strange new world of dependent inductive datatypes (that is, dependent inductive types outside [Prop]), a possibility which sets Coq apart from all of the competition not based on type theory. *)
@@ -195,7 +195,7 @@ Fixpoint typeDenote (t : type) : Set :=
| Prod t1 t2 => typeDenote t1 * typeDenote t2
end%type.
(** [typeDenote] compiles types of our object language into "native" Coq types. It is deceptively easy to implement. The only new thing we see is the [%type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. [type] is one example of an identifier bound to a %\textit{%#<i>#notation scope#</i>#%}%. We will deal more explicitly with notations and notation scopes in later chapters.
(** [typeDenote] compiles types of our object language into %``%#"#native#"#%''% Coq types. It is deceptively easy to implement. The only new thing we see is the [%type] annotation, which tells Coq to parse the [match] expression using the notations associated with types. Without this annotation, the [*] would be interpreted as multiplication on naturals, rather than as the product type constructor. [type] is one example of an identifier bound to a %\textit{%#<i>#notation scope#</i>#%}%. We will deal more explicitly with notations and notation scopes in later chapters.
We can define a function [expDenote] that is typed in terms of [typeDenote]. *)
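(* For instance, assuming sibling constructors [Nat] and [Bool] in [type], a quick check (a sketch): *)

Eval simpl in typeDenote (Prod Nat Nat).
(* = (nat * nat)%type : Set *)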
@@ -548,9 +548,9 @@ Definition balance1 n (a : rtree n) (data : nat) c2 :=
(** We apply a trick that I call the %\textit{%#<i>#convoy pattern#</i>#%}%. Recall that [match] annotations only make it possible to describe a dependence of a [match] %\textit{%#<i>#result type#</i>#%}% on the discriminee. There is no automatic refinement of the types of free variables. However, it is possible to effect such a refinement by finding a way to encode free variable type dependencies in the [match] result type, so that a [return] clause can express the connection.
In particular, we can extend the [match] to return %\textit{%#<i>#functions over the free variables whose types we want to refine#</i>#%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the "old version" of the variable to be refined, and the type checker is happy.
In particular, we can extend the [match] to return %\textit{%#<i>#functions over the free variables whose types we want to refine#</i>#%}%. In the case of [balance1], we only find ourselves wanting to refine the type of one tree variable at a time. We match on one subtree of a node, and we want the type of the other subtree to be refined based on what we learn. We indicate this with a [return] clause starting like [rbtree _ n -> ...], where [n] is bound in an [in] pattern. Such a [match] expression is applied immediately to the %``%#"#old version#"#%''% of the variable to be refined, and the type checker is happy.
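To see the trick in isolation, consider this freestanding sketch (the names and the example are mine, far simpler than the balancing code): pointwise addition of length-indexed lists. Each [match] returns a function, and applying that function refines the type of the remaining argument with what the [match] just learned.

[[
Inductive ilist : nat -> Set :=
| INil : ilist O
| ICons : forall n, nat -> ilist n -> ilist (S n).

Fixpoint vadd n (ls1 : ilist n) : ilist n -> ilist n :=
  match ls1 in ilist n return ilist n -> ilist n with
    | INil => fun _ => INil
    | ICons _ x ls1' => fun ls2 =>
      match ls2 in ilist n return (ilist (pred n) -> ilist (pred n)) -> ilist n with
        | INil => fun _ => INil
        | ICons _ y ls2' => fun rec => @ICons _ (x + y) (rec ls2')
      end (@vadd _ ls1')
  end.
]]

The inner [match] returns a function over the recursive call on [ls1'], so that the type of that call is refined according to the length learned for [ls2].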
After writing this code, even I do not understand the precise details of how balancing works. I consulted Chris Okasaki's paper "Red-Black Trees in a Functional Setting" and transcribed the code to use dependent types. Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys.
After writing this code, even I do not understand the precise details of how balancing works. I consulted Chris Okasaki's paper %``%#"#Red-Black Trees in a Functional Setting#"#%''% and transcribed the code to use dependent types. Luckily, the details are not so important here; types alone will tell us that insertion preserves balanced-ness, and we will prove that insertion produces trees containing the right keys.
Here is the symmetric function [balance2], for cases where the possibly-invalid tree appears on the right rather than on the left. *)
......
(* Copyright (c) 2009, Adam Chlipala
(* Copyright (c) 2009-2010, Adam Chlipala
*
* This work is licensed under a
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
......@@ -20,7 +20,7 @@ Set Implicit Arguments.
(** The last few chapters have shown how PHOAS can make it relatively painless to reason about program transformations. Each of our example languages so far has had a semantics that is easy to implement with an interpreter in Gallina. Since Gallina is designed to rule out non-termination, we cannot hope to give interpreter-based semantics to Turing-complete programming languages. Falling back on standard operational semantics leaves us with the old bureaucratic concerns about capture-avoiding substitution. Can we encode Turing-complete, higher-order languages in Coq without sacrificing the advantages of higher-order encoding?
Any approach that applies to basic untyped lambda calculus is likely to extend to most object languages of interest. We can attempt the "obvious" way of equipping a PHOAS definition for use in an operational semantics, without mentioning substitution explicitly. Specifically, we try to work with expressions with [var] instantiated with a type of values. *)
Any approach that applies to basic untyped lambda calculus is likely to extend to most object languages of interest. We can attempt the %``%#"#obvious#"#%''% way of equipping a PHOAS definition for use in an operational semantics, without mentioning substitution explicitly. Specifically, we try to work with expressions with [var] instantiated with a type of values. *)
Section exp.
Variable var : Type.
......@@ -58,7 +58,7 @@ Fixpoint expV := exp (val expV).
]]
Of course, this kind of definition is not structurally recursive, so Coq will not allow it. Getting "substitution for free" seems to require some similar kind of self-reference.
Of course, this kind of definition is not structurally recursive, so Coq will not allow it. Getting %``%#"#substitution for free#"#%''% seems to require some similar kind of self-reference.
In this chapter, we will consider an alternate take on the problem. We add a level of indirection, introducing more explicit syntax to break the cycle in type definitions. Specifically, we represent function values as numbers that index into a %\emph{%#<i>#closure heap#</i>#%}% that our operational semantics maintains alongside the expression being evaluated. *)
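(** As a first approximation, the indirection can be pictured with a value type like this sketch of mine (the chapter's real definitions differ): a function value stores only a number, to be looked up in the closure heap that the semantics threads through evaluation. *)

Inductive val' : Set :=
| VUnit' : val'         (* a base value *)
| VFun' : nat -> val'.  (* an index into the closure heap *)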
......@@ -83,6 +83,7 @@ Section lookup.
(** The second of our two definitions expresses when one list extends another. We will write [ls1 ~> ls2] to indicate that [ls1] could evolve into [ls2]; that is, [ls1] is a suffix of [ls2]. *)
Definition extends (ls1 ls2 : list A) := exists ls, ls2 = ls ++ ls1.
(** printing ~> $\leadsto$ *)
Infix "~>" := extends (no associativity, at level 80).
(** We prove and add as hints a few basic theorems about [lookup] and [extends]. *)
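(** For a flavor of what those look like, here are two facts of the kind one might register (the statements are mine, not necessarily the development's own): *)

Theorem extends_refl' : forall ls, ls ~> ls.
  intros; red; exists nil; reflexivity.
Qed.

Theorem extends_cons' : forall x ls1 ls2, ls1 ~> ls2 -> ls1 ~> x :: ls2.
  destruct 1 as [l Heq]; subst; red; exists (x :: l); reflexivity.
Qed.

Hint Resolve extends_refl' extends_cons'.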
......@@ -292,6 +293,7 @@ Import Source CPS.
(** Finally, we define a CPS translation in the same way as in our previous example for simply-typed lambda calculus. *)
(** printing <-- $\longleftarrow$ *)
Reserved Notation "x <-- e1 ; e2" (right associativity, at level 76, e1 at next level).
Section cpsExp.
......@@ -364,6 +366,8 @@ Section cr.
-> cr (Source.VFun l1) (CPS.VFun l2).
End cr.
(** printing |-- $\vdash$ *)
(** printing ~~ $\sim$ *)
Notation "s1 & s2 |-- v1 ~~ v2" := (cr s1 s2 v1 v2) (no associativity, at level 70).
Hint Constructors cr.
......
......@@ -18,7 +18,7 @@ Set Implicit Arguments.
(** %\chapter{Inductive Predicates}% *)
(** The so-called "Curry-Howard Correspondence" states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *)
(** The so-called %``%#"#Curry-Howard Correspondence#"#%''% states a formal connection between functional programs and mathematical proofs. In the last chapter, we snuck in a first introduction to this subject in Coq. Witness the close similarity between the types [unit] and [True] from the standard library: *)
Print unit.
(** %\vspace{-.15in}% [[
......@@ -36,7 +36,7 @@ Print True.
The essence of the argument is roughly this: to an engineer, not all functions of type [A -> B] are created equal, but all proofs of a proposition [P -> Q] are. This idea is known as %\textit{%#<i>#proof irrelevance#</i>#%}%, and its formalizations in logics prevent us from distinguishing between alternate proofs of the same proposition. Proof irrelevance is compatible with, but not derivable in, Gallina. Apart from this theoretical concern, I will argue that it is most effective to do engineering with Coq by employing different techniques for programs versus proofs. Most of this book is organized around that distinction, describing how to program, by applying standard functional programming techniques in the presence of dependent types; and how to prove, by writing custom Ltac decision procedures.
With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus "datatypes." This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *)
With that perspective in mind, this chapter is sort of a mirror image of the last chapter, introducing how to define predicates with inductive definitions. We will point out similarities in places, but much of the effective Coq user's bag of tricks is disjoint for predicates versus %``%#"#datatypes.#"#%''% This chapter is also a covert introduction to dependent types, which are the foundation on which interesting inductive predicates are built, though we will rely on tactics to build dependently-typed proof terms for us for now. A future chapter introduces more manual application of dependent types. *)
(** * Propositional Logic *)
......@@ -266,7 +266,7 @@ subgoal 2 is:
(* end hide *)
(** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic. (More on what "constructive" means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *)
(** It would be a shame to have to plod manually through all proofs about propositional logic. Luckily, there is no need. One of the most basic Coq automation tactics is [tauto], which is a complete decision procedure for constructive propositional logic. (More on what %``%#"#constructive#"#%''% means in the next section.) We can use [tauto] to dispatch all of the purely propositional theorems we have proved so far. *)
Theorem or_comm' : P \/ Q -> Q \/ P.
(* begin thide *)
......@@ -330,9 +330,9 @@ End Propositional.
(** One potential point of confusion in the presentation so far is the distinction between [bool] and [Prop]. [bool] is a datatype whose two values are [true] and [false], while [Prop] is a more primitive type that includes among its members [True] and [False]. Why not collapse these two concepts into one, and why must there be more than two states of mathematical truth?
The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like "this particular Turing machine halts."
The answer comes from the fact that Coq implements %\textit{%#<i>#constructive#</i>#%}% or %\textit{%#<i>#intuitionistic#</i>#%}% logic, in contrast to the %\textit{%#<i>#classical#</i>#%}% logic that you may be more familiar with. In constructive logic, classical tautologies like [~ ~ P -> P] and [P \/ ~ P] do not always hold. In general, we can only prove these tautologies when [P] is %\textit{%#<i>#decidable#</i>#%}%, in the sense of computability theory. The Curry-Howard encoding that Coq uses for [or] allows us to extract either a proof of [P] or a proof of [~ P] from any proof of [P \/ ~ P]. Since our proofs are just functional programs which we can run, this would give us a decision procedure for the halting problem, where the instantiations of [P] would be formulas like %``%#"#this particular Turing machine halts.#"#%''%
Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply "run a [Prop] to determine its truth."
Hence the distinction between [bool] and [Prop]. Programs of type [bool] are computational by construction; we can always run them to determine their results. Many [Prop]s are undecidable, and so we can write more expressive formulas with [Prop]s than with [bool]s, but the inevitable consequence is that we cannot simply %``%#"#run a [Prop] to determine its truth.#"#%''%
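For a concrete contrast (an example of mine, assuming the [beq_nat] test from the standard [Arith] library), the [bool] version computes, while the [Prop] version calls for a proof:

[[
Require Import Arith.

Eval compute in beq_nat 2 2.
(* = true : bool *)

Theorem two_eq_two : 2 = 2.
  reflexivity.
Qed.
]]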
Constructive logic lets us define all of the logical connectives in an aesthetically-appealing way, with orthogonal inductive definitions. That is, each connective is defined independently using a simple, shared mechanism. Constructivity also enables a trick called %\textit{%#<i>#program extraction#</i>#%}%, where we write programs by phrasing them as theorems to be proved. Since our proofs are just functional programs, we can extract executable programs from our final proofs, which we could not do as naturally with classical proofs.
......@@ -341,7 +341,7 @@ We will see more about Coq's program extraction facility in a later chapter. Ho
(** * First-Order Logic *)
(** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the "real" type of the implication says "for every proof of [P], there exists a proof of [Q]."
(** The [forall] connective of first-order logic, which we have seen in many examples so far, is built into Coq. Getting ahead of ourselves a bit, we can see it as the dependent function type constructor. In fact, implication and universal quantification are just different syntactic shorthands for the same Coq mechanism. A formula [P -> Q] is equivalent to [forall x : P, Q], where [x] does not appear in [Q]. That is, the %``%#"#real#"#%''% type of the implication says %``%#"#for every proof of [P], there exists a proof of [Q].#"#%''%
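As a tiny check of that equivalence (the example is mine), the same term inhabits both spellings of the type:

[[
Definition imp_is_forall (P Q : Prop) (f : P -> Q) : forall _ : P, Q := f.
]]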
Existential quantification is defined in the standard library. *)
......@@ -357,7 +357,7 @@ Print ex.
Theorem exist1 : exists x : nat, x + 1 = 2.
(* begin thide *)
(** remove printing exists *)
(** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text "exists" in formulas.) *)
(** We can start this proof with a tactic [exists], which should not be confused with the formula constructor shorthand of the same name. (In the PDF version of this document, the reverse 'E' appears instead of the text %``%#"#exists#"#%''% in formulas.) *)
exists 1.
......@@ -479,7 +479,7 @@ Theorem isZero_contra : isZero 1 -> False.
It seems that case analysis has not helped us much at all! Our sole hypothesis disappears, leaving us, if anything, worse off than we were before. What went wrong? We have met an important restriction in tactics like [destruct] and [induction] when applied to types with arguments. If the arguments are not already free variables, they will be replaced by new free variables internally before doing the case analysis or induction. Since the argument [1] to [isZero] is replaced by a fresh variable, we lose the crucial fact that it is not equal to [0].
Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of "logically complete case analysis" is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *)
Why does Coq use this restriction? We will discuss the issue in detail in a future chapter, when we see the dependently-typed programming techniques that would allow us to write this proof term manually. For now, we just say that the algorithmic problem of %``%#"#logically complete case analysis#"#%''% is undecidable when phrased in Coq's logic. A few tactics and design patterns that we will present in this chapter suffice in almost all cases. For the current example, what we want is a tactic called [inversion], which corresponds to the concept of inversion that is frequently used with natural deduction proof systems. *)
Undo.
inversion 1.
......@@ -658,7 +658,7 @@ Theorem even_plus : forall n m, even n -> even m -> even (n + m).
]]
Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with "normal" induction is apparent. *)
Unfortunately, the goal mentions [n0] where it would need to mention [n] to match [IHn]. We could keep looking for a way to finish this proof from here, but it turns out that we can make our lives much easier by changing our basic strategy. Instead of inducting on the structure of [n], we should induct %\textit{%#<i>#on the structure of one of the [even] proofs#</i>#%}%. This technique is commonly called %\textit{%#<i>#rule induction#</i>#%}% in programming language semantics. In the setting of Coq, we have already seen how predicates are defined using the same inductive type mechanism as datatypes, so the fundamental unity of rule induction with %``%#"#normal#"#%''% induction is apparent. *)
Restart.
......@@ -775,7 +775,7 @@ Qed.
(** We write the proof in a way that avoids the use of local variable or hypothesis names, using the [match] tactic form to do pattern-matching on the goal. We use unification variables prefixed by question marks in the pattern, and we take advantage of the possibility to mention a unification variable twice in one pattern, to enforce equality between occurrences. The hint to rewrite with [plus_n_Sm] in a particular direction saves us from having to figure out the right place to apply that theorem, and we also take critical advantage of a new tactic, [eauto].
[crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. [eauto] relaxes this restriction, at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the "easy" subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search.
[crush] uses the tactic [intuition], which, when it runs out of tricks to try using only propositional logic, by default tries the tactic [auto], which we saw in an earlier example. [auto] attempts Prolog-style logic programming, searching through all proof trees up to a certain depth that are built only out of hints that have been registered with [Hint] commands. Compared to Prolog, [auto] places an important restriction: it never introduces new unification variables during search. That is, every time a rule is applied during proof search, all of its arguments must be deducible by studying the form of the goal. [eauto] relaxes this restriction, at the cost of possibly exponentially greater running time. In this particular case, we know that [eauto] has only a small space of proofs to search, so it makes sense to run it. It is common in effectively-automated Coq proofs to see a bag of standard tactics applied to pick off the %``%#"#easy#"#%''% subgoals, finishing with [eauto] to handle the tricky parts that can benefit from ad-hoc exhaustive search.
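To see the flavor of hint-driven search in isolation, consider this freestanding example of mine, which is not part of the present development:

[[
Inductive isEven : nat -> Prop :=
| IsEvenO : isEven O
| IsEvenSS : forall n, isEven n -> isEven (S (S n)).

Hint Constructors isEven.

Theorem isEven_4 : isEven 4.
  auto.
Qed.
]]

With the constructors registered as hints, [auto] closes the goal by backward chaining through [IsEvenSS] twice and finishing with [IsEvenO].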
The original theorem now follows trivially from our lemma. *)
......@@ -815,7 +815,7 @@ Abort.
(* begin hide *)
(* In-class exercises *)
(* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the "false" formula. Prove that every false-free [prop] is valid. *)
(* EX: Define a type [prop] of simple boolean formulas made up only of truth, falsehood, binary conjunction, and binary disjunction. Define an inductive predicate [holds] that captures when [prop]s are valid, and define a predicate [falseFree] that captures when a [prop] does not contain the %``%#"#false#"#%''% formula. Prove that every false-free [prop] is valid. *)
(* begin thide *)
Inductive prop : Set :=
......@@ -889,12 +889,12 @@ Qed.
%\item%#<li># [P /\ (Q \/ R) -> (P /\ Q) \/ (P /\ R)]#</li>#
#</ol> </li>#%\end{enumerate}%
%\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of "forward reasoning," in contrast to the "backward reasoning" that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol.
%\item%#<li># Prove the following tautology of first-order logic, using only the tactics [apply], [assert], [assumption], [destruct], [eapply], [eassumption], and %\textit{%#<tt>#exists#</tt>#%}%. You will probably find [assert] useful for stating and proving an intermediate lemma, enabling a kind of %``%#"#forward reasoning,#"#%''% in contrast to the %``%#"#backward reasoning#"#%''% that is the default for Coq tactics. [eassumption] is a version of [assumption] that will do matching of unification variables. Let some variable [T] of type [Set] be the set of individuals. [x] is a constant symbol, [p] is a unary predicate symbol, [q] is a binary predicate symbol, and [f] is a unary function symbol.
%\begin{enumerate}%#<ol>#
%\item%#<li># [p x -> (forall x, p x -> exists y, q x y) -> (forall x y, q x y -> q y (f y)) -> exists z, q z (f z)]#</li>#
#</ol> </li>#%\end{enumerate}%
%\item%#<li># Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by indicating "odd-ness" as equality to [2 * n + 1] for some [n].#</li>#
%\item%#<li># Define an inductive predicate capturing when a natural number is an integer multiple of either 6 or 10. Prove that 13 does not satisfy your predicate, and prove that any number satisfying the predicate is not odd. It is probably easiest to prove the second theorem by indicating %``%#"#odd-ness#"#%''% as equality to [2 * n + 1] for some [n].#</li>#
%\item%#<li># Define a simple programming language, its semantics, and its typing rules, and then prove that well-typed programs cannot go wrong. Specifically:
%\begin{enumerate}%#<ol>#
......@@ -903,7 +903,7 @@ Qed.
%\item%#<li># Define an inductive type [cmd] of commands, containing expressions and variable assignments. A variable assignment node should contain the variable being assigned, the expression being assigned to it, and the command to run afterward.#</li>#
%\item%#<li># Define an inductive type [val] of values, containing natural number constants and pairings of values.#</li>#
%\item%#<li># Define a type of variable assignments, which assign a value to each variable.#</li>#
%\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. "Big step" means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, "[1 + 1] evaluates to [2] under assignment [va]" should be derivable for any assignment [va].#</li>#
%\item%#<li># Define a big-step evaluation relation [eval], capturing what it means for an expression to evaluate to a value under a particular variable assignment. %``%#"#Big step#"#%''% means that the evaluation of every expression should be proved with a single instance of the inductive predicate you will define. For instance, %``%#"#[1 + 1] evaluates to [2] under assignment [va]#"#%''% should be derivable for any assignment [va].#</li>#
%\item%#<li># Define a big-step evaluation relation [run], capturing what it means for a command to run to a value under a particular variable assignment. The value of a command is the result of evaluating its final expression.#</li>#
%\item%#<li># Define a type of variable typings, which are like variable assignments, but map variables to types instead of values. You might use polymorphism to share some code with your variable assignments.#</li>#
%\item%#<li># Define typing judgments for expressions, values, and commands. The expression and command cases will be in terms of a typing assignment.#</li>#
......@@ -921,7 +921,7 @@ match goal with
end
]] #</li>#
%\item%#<li># You probably do not want to use an inductive definition for compatibility of variable assignments and typings.#</li>#
%\item%#<li># The [Tactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in "forward reasoning" style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#</li>#
%\item%#<li># The [Tactics] module from this book contains a variant [crush'] of [crush]. [crush'] takes two arguments. The first argument is a list of lemmas and other functions to be tried automatically in %``%#"#forward reasoning#"#%''% style, where we add new facts without being sure yet that they link into a proof of the conclusion. The second argument is a list of predicates on which inversion should be attempted automatically. For instance, running [crush' (lemma1, lemma2) pred] will search for chances to apply [lemma1] and [lemma2] to hypotheses that are already available, adding the new concluded fact if suitable hypotheses can be found. Inversion will be attempted on any hypothesis using [pred], but only those inversions that narrow the field of possibilities to one possible rule will be kept. The format of the list arguments to [crush'] is that you can pass an empty list as [tt], a singleton list as the unadorned single element, and a multiple-element list as a tuple of the elements.#</li>#
%\item%#<li># If you want [crush'] to apply polymorphic lemmas, you may have to do a little extra work, if the type parameter is not a free variable of your proof context (so that [crush'] does not know to try it). For instance, if you define a polymorphic map insert function [assign] of some type [forall T : Set, ...], and you want particular applications of [assign] added automatically with type parameter [U], you would need to include [assign] in the lemma list as [assign U] (if you have implicit arguments off) or [assign (T := U)] or [@assign U] (if you have implicit arguments on).#</li>#
#</ol> </li>#%\end{enumerate}%
......
......@@ -71,7 +71,7 @@ A note for readers following along in the PDF version: coqdoc supports pretty-pr
%\medskip%
Now we are ready to say what these programs mean. We will do this by writing an interpreter that can be thought of as a trivial operational or denotational semantics. (If you are not familiar with these semantic techniques, no need to worry; we will stick to "common sense" constructions.) *)
Now we are ready to say what these programs mean. We will do this by writing an interpreter that can be thought of as a trivial operational or denotational semantics. (If you are not familiar with these semantic techniques, no need to worry; we will stick to %``%#"#common sense#"#%''% constructions.) *)
Definition binopDenote (b : binop) : nat -> nat -> nat :=
match b with
......@@ -101,9 +101,9 @@ Definition binopDenote := fun b =>
]]
Languages like Haskell and ML have a convenient %\textit{%#<i>#principal typing#</i>#%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of "complete" type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language.
Languages like Haskell and ML have a convenient %\textit{%#<i>#principal typing#</i>#%}% property, which gives us strong guarantees about how effective type inference will be. Unfortunately, Coq's type system is so expressive that any kind of %``%#"#complete#"#%''% type inference is impossible, and the task even seems to be hard heuristically in practice. Nonetheless, Coq includes some very helpful heuristics, many of them copying the workings of Haskell and ML type-checkers for programs that fall in simple fragments of Coq's language.
This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\textit{%#<i>#Calculus of Inductive Constructions (CIC)#</i>#%}%, which is an extension of the older %\textit{%#<i>#Calculus of Constructions (CoC)#</i>#%}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\textit{%#<i>#strong normalization#</i>#%}%, meaning that every program (and, more importantly, every proof term) terminates; and %\textit{%#<i>#relative consistency#</i>#%}% with systems like versions of Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are "really true," if you believe in set theory.

This is as good a time as any to mention the preponderance of different languages associated with Coq. The theoretical foundation of Coq is a formal system called the %\textit{%#<i>#Calculus of Inductive Constructions (CIC)#</i>#%}%, which is an extension of the older %\textit{%#<i>#Calculus of Constructions (CoC)#</i>#%}%. CIC is quite a spartan foundation, which is helpful for proving metatheory but not so helpful for real development. Still, it is nice to know that it has been proved that CIC enjoys properties like %\textit{%#<i>#strong normalization#</i>#%}%, meaning that every program (and, more importantly, every proof term) terminates; and %\textit{%#<i>#relative consistency#</i>#%}% with systems like versions of Zermelo-Fraenkel set theory, which roughly means that you can believe that Coq proofs mean that the corresponding propositions are %``%#"#really true,#"#%''% if you believe in set theory.
Coq is actually based on an extension of CIC called %\textit{%#<i>#Gallina#</i>#%}%. The text after the [:=] and before the period in the last code example is a term of Gallina. Gallina adds many useful features that are not compiled internally to more primitive CIC features. The important metatheorems about CIC have not been extended to the full breadth of these features, but most Coq users do not seem to lose much sleep over this omission.
......@@ -147,7 +147,7 @@ Definition stack := list nat.
(** An instruction either pushes a constant onto the stack or pops two arguments, applies a binary operator to them, and pushes the result onto the stack. A program is a list of instructions, and a stack is a list of natural numbers.
We can give instructions meanings as functions from stacks to optional stacks, where running an instruction results in [None] in case of a stack underflow and results in [Some s'] when the result of execution is the new stack [s']. [::] is the "list cons" operator from the Coq standard library. *)
We can give instructions meanings as functions from stacks to optional stacks, where running an instruction results in [None] in case of a stack underflow and results in [Some s'] when the result of execution is the new stack [s']. [::] is the %``%#"#list cons#"#%''% operator from the Coq standard library. *)
Definition instrDenote (i : instr) (s : stack) : option stack :=
match i with
......@@ -233,7 +233,7 @@ Abort.
(* end hide *)
(* begin thide *)
(** Though a pencil-and-paper proof might clock out at this point, writing "by a routine induction on [e]," it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\textit{%#<i>#strengthening the induction hypothesis#</i>#%}%. We do that by proving an auxiliary lemma:
(** Though a pencil-and-paper proof might clock out at this point, writing %``%#"#by a routine induction on [e],#"#%''% it turns out not to make sense to attack this proof directly. We need to use the standard trick of %\textit{%#<i>#strengthening the induction hypothesis#</i>#%}%. We do that by proving an auxiliary lemma:
*)
Lemma compile_correct' : forall e p s,
......@@ -585,7 +585,7 @@ ML and Haskell have indexed algebraic datatypes. For instance, their list types
First, the indices of the range of each data constructor must be type variables bound at the top level of the datatype definition. There is no way to do what we did here, where we, for instance, say that [TPlus] is a constructor building a [tbinop] whose indices are all fixed at [Nat]. %\textit{%#<i>#Generalized algebraic datatypes (GADTs)#</i>#%}% are a popular feature in GHC Haskell and other languages that removes this first restriction.
The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\textit{%#<i>#expressions#</i>#%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to "real" functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally.
The second restriction is not lifted by GADTs. In ML and Haskell, indices of types must be types and may not be %\textit{%#<i>#expressions#</i>#%}%. In Coq, types may be indexed by arbitrary Gallina terms. Type indices can live in the same universe as programs, and we can compute with them just like regular programs. Haskell supports a hobbled form of computation in type indices based on multi-parameter type classes, and recent extensions like type functions bring Haskell programming even closer to %``%#"#real#"#%''% functional programming with types, but, without dependent typing, there must always be a gap between how one programs with types and how one programs normally.
*)
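(** For instance, here is a datatype (the example is mine) whose indices are the terms [O] and [S n], something no ML or Haskell datatype declaration can express directly: *)

Inductive vec (A : Set) : nat -> Set :=
| VNil : vec A O
| VCons : forall n, A -> vec A n -> vec A (S n).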
(** We can define a similar type family for typed expressions. *)
......@@ -605,7 +605,7 @@ Definition typeDenote (t : type) : Set :=
(** It can take a few moments to come to terms with the fact that [Set], the type of types of programs, is itself a first-class type, and that we can write functions that return [Set]s. Past that wrinkle, the definition of [typeDenote] is trivial, relying on the [nat] and [bool] types from the Coq standard library.
We need to define one auxiliary function, implementing a boolean binary "less-than" operator, which only appears in the standard library with a type fancier than what we are prepared to deal with here. The code is entirely standard and ML-like, with the one caveat being that the Coq [nat] type uses a unary representation, where [O] is zero and [S n] is the successor of [n].
We need to define one auxiliary function, implementing a boolean binary %``%#"#less-than#"#%''% operator, which only appears in the standard library with a type fancier than what we are prepared to deal with here. The code is entirely standard and ML-like, with the one caveat being that the Coq [nat] type uses a unary representation, where [O] is zero and [S n] is the successor of [n].
*)
Fixpoint lessThan (n1 n2 : nat) : bool :=
......@@ -678,7 +678,7 @@ Eval simpl in texpDenote (TBinop TLt (TBinop TPlus (TNConst 2) (TNConst 2)) (TNC
(** ** Target Language *)
(** Now we want to define a suitable stack machine target for compilation. In the example of the untyped language, stack machine programs could encounter stack underflows and "get stuck." This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free.
(** Now we want to define a suitable stack machine target for compilation. In the example of the untyped language, stack machine programs could encounter stack underflows and %``%#"#get stuck.#"#%''% This was unfortunate, since we had to deal with this complication even though we proved that our compiler never produced underflowing programs. We could have used dependent types to force all stack machine programs to be underflow-free.
For our new languages, besides underflow, we also have the problem of stack slots with naturals instead of bools or vice versa. This time, we will use indexed typed families to avoid the need to reason about potential failures.
......@@ -864,7 +864,7 @@ Abort.
Lemma tcompile_correct' : forall t (e : texp t) ts (s : vstack ts),
tprogDenote (tcompile e ts) s = (texpDenote e, s).
(** While lemma [compile_correct'] quantified over a program that is the "continuation" for the expression we are considering, here we avoid drawing in any extra syntactic elements. In addition to the source expression and its type, we also quantify over an initial stack type and a stack compatible with it. Running the compilation of the program starting from that stack, we should arrive at a stack that differs only in having the program's denotation pushed onto it.
(** While lemma [compile_correct'] quantified over a program that is the %``%#"#continuation#"#%''% for the expression we are considering, here we avoid drawing in any extra syntactic elements. In addition to the source expression and its type, we also quantify over an initial stack type and a stack compatible with it. Running the compilation of the program starting from that stack, we should arrive at a stack that differs only in having the program's denotation pushed onto it.
Let us try to prove this theorem in the same way that we settled on in the last section. *)
......
......@@ -20,7 +20,7 @@ Set Implicit Arguments.
\chapter{Subset Types and Variations}% *)
(** So far, we have seen many examples of what we might call "classical program verification." We write programs, write their specifications, and then prove that the programs satisfy their specifications. The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML. In this chapter, we start investigating uses of %\textit{%#<i>#dependent types#</i>#%}% to integrate programming, specification, and proving into a single phase. *)
(** So far, we have seen many examples of what we might call %``%#"#classical program verification.#"#%''% We write programs, write their specifications, and then prove that the programs satisfy their specifications. The programs that we have written in Coq have been normal functional programs that we could just as well have written in Haskell or ML. In this chapter, we start investigating uses of %\textit{%#<i>#dependent types#</i>#%}% to integrate programming, specification, and proving into a single phase. *)
(** * Introducing Subset Types *)
......@@ -234,7 +234,7 @@ Definition pred_strong4 (n : nat) : n > 0 -> {m : nat | n = S m}.
end).
(* begin thide *)
(** We build [pred_strong4] using tactic-based proving, beginning with a [Definition] command that ends in a period before a definition is given. Such a command enters the interactive proving mode, with the type given for the new identifier as our proof goal. We do most of the work with the [refine] tactic, to which we pass a partial "proof" of the type we are trying to prove. There may be some pieces left to fill in, indicated by underscores. Any underscore that Coq cannot reconstruct with type inference is added as a proof subgoal. In this case, we have two subgoals:
(** We build [pred_strong4] using tactic-based proving, beginning with a [Definition] command that ends in a period before a definition is given. Such a command enters the interactive proving mode, with the type given for the new identifier as our proof goal. We do most of the work with the [refine] tactic, to which we pass a partial %``%#"#proof#"#%''% of the type we are trying to prove. There may be some pieces left to fill in, indicated by underscores. Any underscore that Coq cannot reconstruct with type inference is added as a proof subgoal. In this case, we have two subgoals:
[[
2 subgoals
......@@ -260,7 +260,7 @@ We can see that the first subgoal comes from the second underscore passed to [Fa
(* end thide *)
Defined.
(** We end the "proof" with [Defined] instead of [Qed], so that the definition we constructed remains visible. This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward. Let us see what our proof script constructed. *)
(** We end the %``%#"#proof#"#%''% with [Defined] instead of [Qed], so that the definition we constructed remains visible. This contrasts to the case of ending a proof with [Qed], where the details of the proof are hidden afterward. Let us see what our proof script constructed. *)
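(** As a quick aside, the operational difference is easy to see on a tiny freestanding definition of mine: ended with [Defined], it reduces under [Eval]; had we ended it with [Qed], the term would be opaque, and the call below would not reduce to [4]. *)

Definition twice (n : nat) : nat.
  exact (n + n).
Defined.

Eval compute in twice 2.
(* = 4 : nat *)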
Print pred_strong4.
(** %\vspace{-.15in}% [[
......@@ -448,7 +448,7 @@ let rec eq_nat_dec' n m0 =
(** %\smallskip%
We can build "smart" versions of the usual boolean operators and put them to good use in certified programming. For instance, here is a [sumbool] version of boolean "or." *)
We can build %``%#"#smart#"#%''% versions of the usual boolean operators and put them to good use in certified programming. For instance, here is a [sumbool] version of boolean %``%#"#or.#"#%''% *)
(* begin thide *)
Notation "x || y" := (if x then Yes else Reduce y).
......@@ -592,7 +592,7 @@ Eval compute in pred_strong8 0.
(** * Monadic Notations *)
(** We can treat [maybe] like a monad, in the same way that the Haskell [Maybe] type is interpreted as a failure monad. Our [maybe] has the wrong type to be a literal monad, but a "bind"-like notation will still be helpful. *)
(** We can treat [maybe] like a monad, in the same way that the Haskell [Maybe] type is interpreted as a failure monad. Our [maybe] has the wrong type to be a literal monad, but a %``%#"#bind#"#%''%-like notation will still be helpful. *)
Notation "x <- e1 ; e2" := (match e1 with
| Unknown => ??
......@@ -611,7 +611,7 @@ Definition doublePred (n1 n2 : nat) : {{p | n1 = S (fst p) /\ n2 = S (snd p)}}.
[[(m1, m2)]]); tauto.
Defined.
(** We can build a [sumor] version of the "bind" notation and use it to write a similarly straightforward version of this function. *)
(** We can build a [sumor] version of the %``%#"#bind#"#%''% notation and use it to write a similarly straightforward version of this function. *)
(** printing <-- $\longleftarrow$ *)
......@@ -668,7 +668,7 @@ Definition eq_type_dec : forall t1 t2 : type, {t1 = t2} + {t1 <> t2}.
decide equality.
Defined.
(** Another notation complements the monadic notation for [maybe] that we defined earlier. Sometimes we want to include "assertions" in our procedures. That is, we want to run a decision procedure and fail if it fails; otherwise, we want to continue, with the proof that it produced made available to us. This infix notation captures that idea, for a procedure that returns an arbitrary two-constructor type. *)
(** Another notation complements the monadic notation for [maybe] that we defined earlier. Sometimes we want to include %``%#"#assertions#"#%''% in our procedures. That is, we want to run a decision procedure and fail if it fails; otherwise, we want to continue, with the proof that it produced made available to us. This infix notation captures that idea, for a procedure that returns an arbitrary two-constructor type. *)
Notation "e1 ;; e2" := (if e1 then e2 else ??)
(right associativity, at level 60).
......@@ -794,7 +794,7 @@ let rec typeCheck = function
(** %\smallskip%
We can adapt this implementation to use [sumor], so that we know our type-checker only fails on ill-typed inputs. First, we define an analogue to the "assertion" notation. *)
We can adapt this implementation to use [sumor], so that we know our type-checker only fails on ill-typed inputs. First, we define an analogue to the %``%#"#assertion#"#%''% notation. *)
(* begin thide *)
Notation "e1 ;;; e2" := (if e1 then e2 else !!)
......@@ -889,7 +889,7 @@ Eval simpl in typeCheck' (Plus (Nat 1) (Bool false)).
%\item%#<li># Define a function [negate] that returns a simplified version of the negation of a [prop]. That is, the function should have type [forall p : prop, {p' : prop | forall truth, propDenote truth p <-> ~ propDenote truth p'}]. To simplify a variable, just negate it. Simplify a negation by returning its argument. Simplify conjunctions and disjunctions using De Morgan's laws, negating the arguments recursively and switching the kind of connective. [decide] may be useful in some of the proof obligations, even if you do not use it in the computational part of [negate]'s definition. Lemmas like [decide] allow us to compensate for the lack of a general Law of the Excluded Middle in CIC.#</li>#
#</ol>#%\end{enumerate}% #</li>#
%\item%#<li># Implement the DPLL satisfiability decision procedure for boolean formulas in conjunctive normal form, with a dependent type that guarantees its correctness. An example of a reasonable type for this function would be [forall f : formula, {truth : tvals | formulaTrue truth f} + {forall truth, ~ formulaTrue truth f}]. Implement at least "the basic backtracking algorithm" as defined here:
%\item%#<li># Implement the DPLL satisfiability decision procedure for boolean formulas in conjunctive normal form, with a dependent type that guarantees its correctness. An example of a reasonable type for this function would be [forall f : formula, {truth : tvals | formulaTrue truth f} + {forall truth, ~ formulaTrue truth f}]. Implement at least %``%#"#the basic backtracking algorithm#"#%''% as defined here:
%\begin{center}\url{http://en.wikipedia.org/wiki/DPLL_algorithm}\end{center}%
#<blockquote><a href="http://en.wikipedia.org/wiki/DPLL_algorithm">http://en.wikipedia.org/wiki/DPLL_algorithm</a></blockquote>#
It might also be instructive to implement the unit propagation and pure literal elimination optimizations described there or some other optimizations that have been used in modern SAT solvers.#</li>#
......