Commit 501370fd authored by Adam Chlipala

Batch of changes based on proofreader feedback

parent 89b1d898
......@@ -8,6 +8,8 @@
\usepackage{url}
\usepackage{makeidx,hyperref}
\newcommand{\naive}[0]{na\"ive}
\title{Certified Programming with Dependent Types}
\author{Adam Chlipala}
......
......@@ -106,7 +106,7 @@ Eval compute in fun x => id' x.
= fun x : nat => (fix id' (n : nat) : nat := n) x
]]
By running [compute], we ask Coq to run reduction steps until no more apply, so why do we see an application of a known function, where clearly no beta reduction has been performed? The answer has to do with ensuring termination of all Gallina programs. One candidate rule would say that we apply recursive definitions wherever possible. However, this would clearly lead to nonterminating reduction sequences, since the function may appear fully applied within its own definition, and we would naively "simplify" such applications immediately. Instead, Coq only applies the beta rule for a recursive function when _the top-level structure of the recursive argument is known_. For [id'] above, we have only one argument [n], so clearly it is the recursive argument, and the top-level structure of [n] is known when the function is applied to [O] or to some [S e] term. The variable [x] is neither, so reduction is blocked.
By running [compute], we ask Coq to run reduction steps until no more apply, so why do we see an application of a known function, where clearly no beta reduction has been performed? The answer has to do with ensuring termination of all Gallina programs. One candidate rule would say that we apply recursive definitions wherever possible. However, this would clearly lead to nonterminating reduction sequences, since the function may appear fully applied within its own definition, and we would %\%naive%{}%ly "simplify" such applications immediately. Instead, Coq only applies the beta rule for a recursive function when _the top-level structure of the recursive argument is known_. For [id'] above, we have only one argument [n], so clearly it is the recursive argument, and the top-level structure of [n] is known when the function is applied to [O] or to some [S e] term. The variable [x] is neither, so reduction is blocked.
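For a hedged contrast using the same [id'], reduction does go through as soon as the argument's top-level structure is known:
[[
Eval compute in id' (S O).
     = 1 : nat
]]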
What are recursive arguments in general? Every recursive function is compiled by Coq to a %\index{Gallina terms!fix}%[fix] expression, for anonymous definition of recursive functions. Further, every [fix] with multiple arguments has one designated as the recursive argument via a [struct] annotation. The recursive argument is the one that must decrease across recursive calls, to appease Coq's termination checker. Coq will generally infer which argument is recursive, though we may also specify it manually, if we want to tweak reduction behavior. For instance, consider this definition of a function to add two lists of [nat]s elementwise: *)
......@@ -248,7 +248,7 @@ Section fhlist_map.
Variable elm : A.
Theorem get_imap : forall ls (mem : fmember elm ls) (hls : fhlist B ls),
Theorem fhget_fhmap : forall ls (mem : fmember elm ls) (hls : fhlist B ls),
fhget (fhmap hls) mem = f (fhget hls mem).
(* begin hide *)
induction ls; crush; case a0; reflexivity.
......@@ -503,7 +503,7 @@ Section fhapp.
(** We might like to prove that [fhapp] is associative.
[[
Theorem fhapp_ass : forall ls1 ls2 ls3
Theorem fhapp_assoc : forall ls1 ls2 ls3
(hls1 : fhlist B ls1) (hls2 : fhlist B ls2) (hls3 : fhlist B ls3),
fhapp hls1 (fhapp hls2 hls3) = fhapp (fhapp hls1 hls2) hls3.
]]
......@@ -517,7 +517,7 @@ The term
This first cut at the theorem statement does not even type-check. We know that the two [fhlist] types appearing in the error message are always equal, by associativity of normal list append, but this fact is not apparent to the type checker. This stems from the fact that Coq's equality is%\index{intensional type theory}% _intensional_, in the sense that type equality theorems can never be applied after the fact to get a term to type-check. Instead, we need to make use of equality explicitly in the theorem statement. *)
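(* As a hedged aside (the helper [cast] is not part of this chapter's
   development), "making use of equality explicitly" amounts to coercing a
   term along an equality proof by pattern-matching on that proof: *)

Definition cast (T U : Type) (pf : T = U) (x : T) : U :=
  match pf in (_ = U') return U' with
    | eq_refl => x
  end.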
Theorem fhapp_ass : forall ls1 ls2 ls3
Theorem fhapp_assoc : forall ls1 ls2 ls3
(pf : (ls1 ++ ls2) ++ ls3 = ls1 ++ (ls2 ++ ls3))
(hls1 : fhlist B ls1) (hls2 : fhlist B ls2) (hls3 : fhlist B ls3),
fhapp hls1 (fhapp hls2 hls3)
......@@ -651,7 +651,7 @@ The term "pf" has type "a :: (ls1 ++ ls2) ++ ls3 = a :: ls1 ++ ls2 ++ ls3"
However, now that we have generalized over the [fhapp] call, the type of the term being type-cast appears explicitly in the goal and _may be rewritten as well_. In particular, the final masterstroke is rewriting everywhere in our goal using associativity of list append. *)
rewrite app_ass.
rewrite app_assoc.
(** [[
============================
forall (pf0 : a :: ls1 ++ ls2 ++ ls3 = a :: ls1 ++ ls2 ++ ls3)
......@@ -730,12 +730,12 @@ Section fhapp'.
Variable A : Type.
Variable B : A -> Type.
(** This time, the naive theorem statement type-checks. *)
(** This time, the %\%naive%{}% theorem statement type-checks. *)
(* EX: Prove [fhapp] associativity using [JMeq]. *)
(* begin thide *)
Theorem fhapp_ass' : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)
Theorem fhapp_assoc' : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)
(hls3 : fhlist B ls3),
fhapp hls1 (fhapp hls2 hls3) == fhapp (fhapp hls1 hls2) hls3.
induction ls1; crush.
......@@ -773,7 +773,7 @@ Error: Impossible to unify "fhlist B ((ls1 ++ ?1572) ++ ?1573)" with
Now we can rewrite with append associativity, as before. *)
rewrite app_ass.
rewrite app_assoc.
(** %\vspace{-.15in}%[[
============================
forall f f0 : fhlist B (ls1 ++ ls2 ++ ls3), f == f0 -> (a0, f) == (a0, f0)
......@@ -791,7 +791,7 @@ End fhapp'.
The proof we have found relies on the [JMeq_eq] axiom, which we can verify with a command%\index{Vernacular commands!Print Assumptions}% that we will discuss more in two chapters. *)
Print Assumptions fhapp_ass'.
Print Assumptions fhapp_assoc'.
(** %\vspace{-.15in}%[[
Axioms:
JMeq_eq : forall (A : Type) (x y : A), x == y -> x = y
......@@ -812,19 +812,19 @@ Section fhapp''.
Variable A : Type.
Variable B : A -> Type.
Theorem fhapp_ass'' : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)
Theorem fhapp_assoc'' : forall ls1 ls2 ls3 (hls1 : fhlist B ls1) (hls2 : fhlist B ls2)
(hls3 : fhlist B ls3),
fhapp hls1 (fhapp hls2 hls3) == fhapp (fhapp hls1 hls2) hls3.
induction ls1; crush.
Qed.
End fhapp''.
Print Assumptions fhapp_ass''.
Print Assumptions fhapp_assoc''.
(** <<
Closed under the global context
>>
One might wonder exactly which elements of a proof involving [JMeq] imply that [JMeq_eq] must be used. For instance, above we noticed that [rewrite] had brought [JMeq_eq] into the proof of [fhapp_ass'], yet here we have also used [rewrite] with [JMeq] hypotheses while avoiding axioms! One illuminating exercise is comparing the types of the lemmas that [rewrite] uses under the hood to implement the rewrites. Here is the normal lemma for [eq] rewriting:%\index{Gallina terms!eq\_ind\_r}% *)
One might wonder exactly which elements of a proof involving [JMeq] imply that [JMeq_eq] must be used. For instance, above we noticed that [rewrite] had brought [JMeq_eq] into the proof of [fhapp_assoc'], yet here we have also used [rewrite] with [JMeq] hypotheses while avoiding axioms! One illuminating exercise is comparing the types of the lemmas that [rewrite] uses under the hood to implement the rewrites. Here is the normal lemma for [eq] rewriting:%\index{Gallina terms!eq\_ind\_r}% *)
Check eq_ind_r.
(** %\vspace{-.15in}%[[
......@@ -842,7 +842,7 @@ internal_JMeq_rew_r
(P : forall B0 : Type, B0 -> Type), P B b -> x == b -> P A x
]]
The key difference is that, where the [eq] lemma is parameterized on a predicate of type [A -> Prop], the [JMeq] lemma is parameterized on a predicate of type more like [forall A : Type, A -> Prop]. To apply [eq_ind_r] with a proof of [x = y], it is only necessary to rearrange the goal into an application of a [fun] abstraction to [y]. In contrast, to apply [internal_JMeq_rew_r], it is necessary to rearrange the goal to an application of a [fun] abstraction to both [y] and _its type_. In other words, the predicate must be _polymorphic_ in [y]'s type; any type must make sense, from a type-checking standpoint. There may be cases where the former rearrangement is easy to do in a type-correct way, but the second rearrangement done naively leads to a type error.
The key difference is that, where the [eq] lemma is parameterized on a predicate of type [A -> Prop], the [JMeq] lemma is parameterized on a predicate of type more like [forall A : Type, A -> Prop]. To apply [eq_ind_r] with a proof of [x = y], it is only necessary to rearrange the goal into an application of a [fun] abstraction to [y]. In contrast, to apply [internal_JMeq_rew_r], it is necessary to rearrange the goal to an application of a [fun] abstraction to both [y] and _its type_. In other words, the predicate must be _polymorphic_ in [y]'s type; any type must make sense, from a type-checking standpoint. There may be cases where the former rearrangement is easy to do in a type-correct way, but the second rearrangement done %\%naive%{}%ly leads to a type error.
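As a hedged illustration of the first kind of rearrangement, on a throwaway goal like [y + 0 = y], running [pattern y] leaves a goal of the form
[[
  (fun n : nat => n + 0 = n) y
]]
which is exactly the applied-[fun] shape that [eq_ind_r] consumes.  The [JMeq] rule needs the same kind of abstraction, but over [y] _and_ its type at once.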
When [rewrite] cannot figure out how to apply [internal_JMeq_rew_r] for [x == y] where [x] and [y] have the same type, the tactic can instead use an alternate theorem, which is easy to prove as a composition of [eq_ind_r] and [JMeq_eq]. *)
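(* A hedged sketch of that composition, under a hypothetical primed name so as
   not to clash with the library's own lemma: convert the heterogeneous
   hypothesis with [JMeq_eq], then finish by ordinary [eq] rewriting. *)

Lemma JMeq_ind_r' : forall (A : Type) (x : A) (P : A -> Prop),
  P x -> forall y : A, y == x -> P y.
  intros A x P H y Heq; apply JMeq_eq in Heq; rewrite Heq; assumption.
Qed.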
......@@ -853,7 +853,7 @@ JMeq_ind_r
P x -> forall y : A, y == x -> P y
]]
Ironically, where in the proof of [fhapp_ass'] we used [rewrite app_ass] to make it clear that a use of [JMeq] was actually homogeneously typed, we created a situation where [rewrite] applied the axiom-based [JMeq_ind_r] instead of the axiom-free [internal_JMeq_rew_r]!
Ironically, where in the proof of [fhapp_assoc'] we used [rewrite app_assoc] to make it clear that a use of [JMeq] was actually homogeneously typed, we created a situation where [rewrite] applied the axiom-based [JMeq_ind_r] instead of the axiom-free [internal_JMeq_rew_r]!
For another simple example, consider this theorem that applies a heterogeneous equality to prove a congruence fact. *)
......@@ -896,7 +896,7 @@ In other words, the successor function [S] is insufficiently polymorphic. If we
Abort.
(** Why did we not run into this problem in our proof of [fhapp_ass'']? The reason is that the pair constructor is polymorphic in the types of the pair components, while functions like [S] are not polymorphic at all. Use of such non-polymorphic functions with [JMeq] tends to push toward use of axioms. The example with [nat] here is a bit unrealistic; more likely cases would involve functions that have _some_ polymorphism, but not enough to allow abstractions of the sort we attempted above with [pattern]. For instance, we might have an equality between two lists, where the goal only type-checks when the terms involved really are lists, though everything is polymorphic in the types of list data elements. The {{http://www.mpi-sws.org/~gil/Heq/}Heq} library builds up a slightly different foundation to help avoid such problems. *)
(** Why did we not run into this problem in our proof of [fhapp_assoc'']? The reason is that the pair constructor is polymorphic in the types of the pair components, while functions like [S] are not polymorphic at all. Use of such non-polymorphic functions with [JMeq] tends to push toward use of axioms. The example with [nat] here is a bit unrealistic; more likely cases would involve functions that have _some_ polymorphism, but not enough to allow abstractions of the sort we attempted above with [pattern]. For instance, we might have an equality between two lists, where the goal only type-checks when the terms involved really are lists, though everything is polymorphic in the types of list data elements. The {{http://www.mpi-sws.org/~gil/Heq/}Heq} library builds up a slightly different foundation to help avoid such problems. *)
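(* A hedged way to see the contrast concretely, with throwaway [Check]s that
   are not part of the chapter's development.  The abstraction that
   [internal_JMeq_rew_r] needs is well typed when it goes through the
   polymorphic pair constructor: *)

Check (fun (T : Type) (x : T) => (x, 0) == (1, 0)).

(* The analogous abstraction through [S] is rejected, since [S] insists on a
   [nat] argument:
     Check (fun (T : Type) (x : T) => S x == 1).
   fails with a type error, which is what pushes the [S] example above toward
   the axiom. *)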
(** * Equivalence of Equality Axioms *)
......
......@@ -60,7 +60,7 @@ Section ilist.
We may use [in] clauses only to bind names for the arguments of an inductive type family. That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length. The positions for _parameters_ to the type family must all be underscores. Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition. They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them. It is those arguments defined in the type to the right of the colon that we may name with [in] clauses.
Our [app] function could be typed in so-called%\index{stratified type systems}% _stratified_ type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. Compile-time data may be _erased_ such that we can still execute a program. As an example where erasure would not work, consider an injection function from regular lists to length-indexed lists. Here the run-time computation actually depends on details of the compile-time argument, if we decide that the list to inject can be considered compile-time. More commonly, we think of lists as run-time data. Neither case will work with naive erasure. (It is not too important to grasp the details of this run-time/compile-time distinction, since Coq's expressive power comes from avoiding such restrictions.) *)
Our [app] function could be typed in so-called%\index{stratified type systems}% _stratified_ type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. Compile-time data may be _erased_ such that we can still execute a program. As an example where erasure would not work, consider an injection function from regular lists to length-indexed lists. Here the run-time computation actually depends on details of the compile-time argument, if we decide that the list to inject can be considered compile-time. More commonly, we think of lists as run-time data. Neither case will work with %\%naive%{}% erasure. (It is not too important to grasp the details of this run-time/compile-time distinction, since Coq's expressive power comes from avoiding such restrictions.) *)
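(* A hedged, self-contained illustration of the rule about [in] clauses above,
   with primed names so as not to clash with this chapter's definitions.  The
   type parameter [T] sits to the left of the colon, so it must be an
   underscore in the [in] clause, while the [nat] index may be bound as [n']
   and used in the [return] clause. *)

Inductive ilist' (T : Type) : nat -> Type :=
| INil' : ilist' T O
| ICons' : forall n, T -> ilist' T n -> ilist' T (S n).

Definition ihead' (T : Type) (n : nat) (ls : ilist' T (S n)) : T :=
  match ls in (ilist' _ n') return (match n' with O => unit | S _ => T end) with
    | INil' => tt
    | ICons' _ x _ => x
  end.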
(* EX: Implement injection from normal lists *)
......@@ -632,7 +632,7 @@ Section insert.
end (ins b)
end.
(** The one new trick is a variation of the convoy pattern. In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b]. We might naively apply the convoy pattern directly on [a] in the first [match] and on [b] in the second. This satisfies the type checker per se, but it does not satisfy the termination checker. Inside each [match], we would be calling [ins] recursively on a locally bound variable. The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument. We make this fact clearer by applying the convoy pattern on _the result of a recursive call_, rather than just on that call's argument.
(** The one new trick is a variation of the convoy pattern. In each of the last two pattern matches, we want to take advantage of the typing connection between the trees [a] and [b]. We might %\%naive%{}%ly apply the convoy pattern directly on [a] in the first [match] and on [b] in the second. This satisfies the type checker per se, but it does not satisfy the termination checker. Inside each [match], we would be calling [ins] recursively on a locally bound variable. The termination checker is not smart enough to trace the dataflow into that variable, so the checker does not know that this recursive argument is smaller than the original argument. We make this fact clearer by applying the convoy pattern on _the result of a recursive call_, rather than just on that call's argument.
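For a hedged reminder of the basic convoy pattern on a much simpler type than the trees here (a hypothetical definition, not tied to this file's development):
[[
Definition pred_strong_convoy (n : nat) (pf : n <> O) : nat :=
  match n return (n <> O -> nat) with
    | O => fun pf' => match pf' eq_refl with end
    | S n' => fun _ => n'
  end pf.
]]
The [match] builds a function that is applied immediately to [pf], so the [O] branch receives a proof of [O <> O].  The variation above applies the same idea to the result of a recursive call instead.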
Finally, we are in the home stretch of our effort to define [insert]. We just need a few more definitions of non-recursive functions. First, we need to give the final characterization of [insert]'s return type. Inserting into a red-rooted tree gives a black-rooted tree where black depth has increased, and inserting into a black-rooted tree gives a tree where black depth has stayed the same and where the root is an arbitrary color. *)
......
......@@ -789,7 +789,7 @@ Fixpoint termDenote t (e : term t) : typeDenote t :=
| Abs _ e1 => fun x => termDenote (e1 x)
end.
(** Here is a naive first attempt at a reification tactic. *)
(** Here is a %\%naive%{}% first attempt at a reification tactic. *)
Ltac refl' e :=
match e with
......@@ -805,7 +805,7 @@ Ltac refl' e :=
| _ => constr:(Const e)
end.
(** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
(** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our %\%naive%{}% implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
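As a hedged miniature of the restriction (a throwaway [match] over a term, independent of this file's development):
[[
  match constr:(fun x : nat => x + 1) with
    | fun _ => ?E => idtac "the body ignores its argument:" E
    | _ => idtac "the body mentions the bound variable"
  end.
]]
Run as a tactic, the first branch fails to match, because [E] would have to mention the abstraction's bound variable, so the catch-all branch fires instead.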
To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. For instance: *)
......
......@@ -132,7 +132,8 @@ Check id Set.
<<
Error: Illegal application (Type Error):
...
The 1st term has type "Type (* (Top.15)+1 *)" which should be coercible to "Set".
The 1st term has type "Type (* (Top.15)+1 *)"
which should be coercible to "Set".
>>
The parameter [T] of [id] must be instantiated with a [Set]. The type [nat] is a [Set], but [Set] is not. We can try fixing the problem by generalizing our definition of [id]. *)
......@@ -169,7 +170,7 @@ Check id id.
Error: Universe inconsistency (cannot enforce Top.16 < Top.16).
>>
%\index{universe inconsistency}%This error message reminds us that the universe variable for [T] still exists, even though it is usually hidden. To apply [id] to itself, that variable would need to be less than itself in the type hierarchy. Universe inconsistency error messages announce cases like this one where a term could only type-check by violating an implied constraint over universe variables. Such errors demonstrate that [Type] is _predicative_, where this word has a CIC meaning closely related to its usual mathematical meaning. A predicative system enforces the constraint that, for any object of quantified type, none of those quantifiers may ever be instantiated with the object itself. %\index{impredicativity}%Impredicativity is associated with popular paradoxes in set theory, involving inconsistent constructions like "the set of all sets that do not contain themselves" (%\index{Russell's paradox}%Russell's paradox). Similar paradoxes would result from uncontrolled impredicativity in Coq. *)
%\index{universe inconsistency}%This error message reminds us that the universe variable for [T] still exists, even though it is usually hidden. To apply [id] to itself, that variable would need to be less than itself in the type hierarchy. Universe inconsistency error messages announce cases like this one where a term could only type-check by violating an implied constraint over universe variables. Such errors demonstrate that [Type] is _predicative_, where this word has a CIC meaning closely related to its usual mathematical meaning. A predicative system enforces the constraint that, when an object is defined using some sort of quantifier, none of the quantifiers may ever be instantiated with the object itself. %\index{impredicativity}%Impredicativity is associated with popular paradoxes in set theory, involving inconsistent constructions like "the set of all sets that do not contain themselves" (%\index{Russell's paradox}%Russell's paradox). Similar paradoxes would result from uncontrolled impredicativity in Coq. *)
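(* A hedged contrast, as a pair of throwaway definitions: [Prop], unlike
   [Type], is impredicative in CIC, so the analogue of [id id] is accepted
   there. *)

Definition idProp (P : Prop) (x : P) := x.

Check idProp (forall P : Prop, P -> P) idProp.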
(** ** Inductive Definitions *)
......@@ -1109,7 +1110,7 @@ eq_rect_eq : forall (U : Type) (p : U) (Q : U -> Type) (x : Q p) (h : p = p),
It turns out that this familiar axiom about equality (or some other axiom) is required to deduce [p = p0] from the hypothesis [H3] above. The soundness of that proof step is neither provable nor disprovable in Gallina.
Hope is not lost, however. We can produce an even stranger looking lemma, which gives us the theorem without axioms. *)
Hope is not lost, however. We can produce an even stranger looking lemma, which gives us the theorem without axioms. As always when we want to do case analysis on a term with a tricky dependent type, the key is to refactor the theorem statement so that every term we [match] on has _variables_ as its type indices; so instead of talking about proofs of [And p q], we talk about proofs of an arbitrary [r], but we only conclude anything interesting when [r] is an [And]. *)
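(* A hedged miniature of the same refactoring, on a hypothetical [even']
   predicate rather than on [proof]: the index of the discriminee becomes a
   variable [m], and a [match] in the conclusion recovers the interesting case
   only when [m] has the right shape. *)

Inductive even' : nat -> Prop :=
| Even'O : even' O
| Even'SS : forall n, even' n -> even' (S (S n)).

Lemma even'_inv : forall m, even' m
  -> match m with
       | S (S n) => even' n
       | _ => True
     end.
  destruct 1; [ exact I | assumption ].
Qed.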
Lemma proj1_again'' : forall r, proof r
-> match r with
......