Inductive and (A : Prop) (B : Prop) : Prop := conj : A -> B -> A /\ B
For conj: Arguments A, B are implicit
For and: Argument scopes are [type_scope type_scope]
For conj: Argument scopes are [type_scope type_scope _ _]
]]
In addition to the definition of [and] itself, we get information on implicit arguments and parsing rules for [and] and its constructor [conj]. We will ignore the parsing information for now. The implicit argument information tells us that we build a proof of a conjunction by calling the constructor [conj] on proofs of the conjuncts, with no need to include the types of those proofs as explicit arguments.
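For instance, the term [conj I I] is accepted as a proof of [True /\ True]: we supply only the two proofs of [True], and both conjunct types are inferred.
[[
Check (conj I I).
]]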
%\medskip%
Now we create a section for our induction principle, following the same basic plan as in the previous section of this chapter. *)
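(** The statements of the section hypotheses below are the ones forced by how [P], [NLeaf'_case], [NNode'_case], and [All] are used in the principle that follows: one case per constructor, with the [NNode'] case receiving an [All]-lifted induction hypothesis for the child list. *)

Section nat_tree_ind'.
  Variable P : nat_tree -> Prop.

  (* One hypothesis per constructor of [nat_tree]. *)
  Hypothesis NLeaf'_case : P NLeaf'.
  Hypothesis NNode'_case : forall (n : nat) (ls : list nat_tree),
    All P ls -> P (NNode' n ls).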
(** A first attempt at writing the induction principle itself follows the intuition that nested inductive type definitions are expanded into mutual inductive definitions.
[[
Fixpoint nat_tree_ind' (tr : nat_tree) : P tr :=
match tr return (P tr) with
| NLeaf' => NLeaf'_case
| NNode' n ls => NNode'_case n ls (list_nat_tree_ind ls)
end
with list_nat_tree_ind (ls : list nat_tree) : All P ls :=
  match ls return (All P ls) with
    | Nil => I
    | Cons tr rest => conj (nat_tree_ind' tr) (list_nat_tree_ind rest)
  end
]]
Coq rejects this definition, saying "Recursive call to nat_tree_ind' has principal argument equal to "tr" instead of rest."  The term "nested inductive type" hints at the solution to the problem.  Just like true mutually-inductive types require mutually-recursive induction principles, nested types require nested recursion. *)
Fixpoint nat_tree_ind' (tr : nat_tree) : P tr :=
match tr return (P tr) with
| NLeaf' => NLeaf'_case
| NNode' n ls => NNode'_case n ls
      ((fix list_nat_tree_ind (ls : list nat_tree) : All P ls :=
        match ls return (All P ls) with
          | Nil => I
          | Cons tr rest => conj (nat_tree_ind' tr) (list_nat_tree_ind rest)
        end) ls)
  end.
(** We include an anonymous [fix] version of [list_nat_tree_ind] that is literally %\textit{%#<i>#nested#</i>#%}% inside the definition of the recursive function corresponding to the inductive definition that had the nested use of [list]. *)
End nat_tree_ind'.
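(** It is worth pausing to look at the type of the principle we just built; the [Check] below should report a statement of roughly this shape. *)

Check nat_tree_ind'.
(* Expected:
   nat_tree_ind'
     : forall P : nat_tree -> Prop,
       P NLeaf' ->
       (forall (n : nat) (ls : list nat_tree), All P ls -> P (NNode' n ls)) ->
       forall tr : nat_tree, P tr *)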
(** We can try our induction principle out by defining some recursive functions on [nat_tree]s and proving a theorem about them. First, we define some helper functions that operate on lists. *)
Section map.
Variables T T' : Set.
Variable f : T -> T'.
Fixpoint map (ls : list T) : list T' :=
match ls with
| Nil => Nil
| Cons h t => Cons (f h) (map t)
end.
End map.
Fixpoint sum (ls : list nat) : nat :=
match ls with
| Nil => O
| Cons h t => plus h (sum t)
end.
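(** As a quick sanity check, the two helpers compose in the way we are about to use them: mapping [S] over a two-element list and summing the result should yield three. *)

Eval simpl in sum (map S (Cons O (Cons (S O) Nil))).
(* Expected: S (S (S O)), i.e. 3 *)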
(** Now we can define a size function over our trees. *)
Fixpoint ntsize (tr : nat_tree) : nat :=
match tr with
| NLeaf' => S O
| NNode' _ trs => S (sum (map ntsize trs))
end.
(** Notice that Coq was smart enough to expand the definition of [map] to verify that we are using proper nested recursion, even through a use of a higher-order function. *)
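(** As another quick check, a tree with one inner node and two leaf children should have size three. *)

Eval simpl in ntsize (NNode' O (Cons NLeaf' (Cons NLeaf' Nil))).
(* Expected: S (S (S O)), i.e. 3 *)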
Fixpoint ntsplice (tr1 tr2 : nat_tree) : nat_tree :=
  match tr1 with
    | NLeaf' => NNode' O (Cons tr2 Nil)
    | NNode' n Nil => NNode' n (Cons tr2 Nil)
    | NNode' n (Cons tr trs) => NNode' n (Cons (ntsplice tr tr2) trs)
  end.
(** We have defined another arbitrary notion of tree splicing, similar to before, and we can prove an analogous theorem about its relationship with tree size. We start with a useful lemma about addition. *)
Lemma plus_S : forall n1 n2 : nat,
plus n1 (S n2) = S (plus n1 n2).
induction n1; crush.
Qed.
(** Now we begin the proof of the theorem, adding the lemma [plus_S] as a hint. *)
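(** The statement below is the one visible in the induction hypothesis displayed further on; we name the theorem [ntsize_ntsplice], and we register [plus_S] as a rewrite hint so that [crush] can apply it automatically. *)

Theorem ntsize_ntsplice : forall tr1 tr2 : nat_tree,
  ntsize (ntsplice tr1 tr2) = plus (ntsize tr2) (ntsize tr1).
  Hint Rewrite plus_S.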
(** We know that the standard induction principle is insufficient for the task, so we need to provide a [using] clause for the [induction] tactic to specify our alternate principle. *)
induction tr1 using nat_tree_ind'; crush.
(** One subgoal remains: [[
n : nat
ls : list nat_tree
H : All
(fun tr1 : nat_tree =>
forall tr2 : nat_tree,
ntsize (ntsplice tr1 tr2) = plus (ntsize tr2) (ntsize tr1)) ls
  tr2 : nat_tree
  ============================
   ntsize
     match ls with
     | Nil => NNode' n (Cons tr2 Nil)
     | Cons tr trs => NNode' n (Cons (ntsplice tr tr2) trs)
     end = S (plus (ntsize tr2) (sum (map ntsize ls)))
]]
After a few moments of squinting at this goal, it becomes apparent that we need to do a case analysis on the structure of [ls].  The rest is routine. *)
destruct ls; crush.
(** We can go further in automating the proof by exploiting the hint mechanism. *)
Restart.
Hint Extern 1 (ntsize (match ?LS with Nil => _ | Cons _ _ => _ end) = _) =>
destruct LS; crush.
induction tr1 using nat_tree_ind'; crush.
Qed.
(** We will go into great detail on hints in a later chapter, but the only important thing to note here is that we register a pattern that describes a conclusion we expect to encounter during the proof. The pattern may contain unification variables, whose names are prefixed with question marks, and we may refer to those bound variables in a tactic that we ask to have run whenever the pattern matches.
The advantage of using the hint is not very clear here, because the original proof was so short.  However, the hint has fundamentally improved the readability of our proof.  Before, the proof referred to the local variable [ls], which has an automatically-generated name.  To a human reading the proof script without stepping through it interactively, it was not clear where [ls] came from.  The hint explains to the reader the process for choosing which variables to case analyze on, and the hint can continue working even if the rest of the proof structure changes significantly. *)