research / cpdt · Commits

Commit 22363ec9, authored Jan 06, 2013 by Adam Chlipala
Last round of feedback from class at Penn

parent 00b5f4f8

Showing 5 changed files with 19 additions and 15 deletions (+19 -15)
src/Equality.v    +1 -1
src/LogicProg.v   +4 -4
src/Match.v       +5 -3
src/MoreDep.v     +1 -1
src/Reflection.v  +8 -6
src/Equality.v
@@ -729,7 +729,7 @@ Section fhapp'.
   Variable A : Type.
   Variable B : A -> Type.

-  (** This time, the %\%naive theorem statement type-checks. *)
+  (** This time, the %\%naive%{}% theorem statement type-checks. *)

   (* EX: Prove [fhapp] associativity using [JMeq]. *)
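The exercise above asks for an associativity proof using [JMeq]. As a minimal standalone sketch (not part of this commit), heterogeneous equality lets us state equations between terms whose types are provably but not syntactically equal, which is exactly what an [fhapp] associativity statement needs; the [==] infix below mirrors the notation the book introduces for [JMeq].

```coq
(* Hedged illustration of JMeq; this notation declaration is local to
   the sketch, mirroring the book's. *)
Require Import Coq.Logic.JMeq.
Infix "==" := JMeq (at level 70, no associativity).

(* [==] can relate terms after reduction brings their types in line. *)
Goal 1 + 1 == 2.
  simpl.
  apply JMeq_refl.
Qed.
```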
src/LogicProg.v
@@ -320,7 +320,7 @@ Section slow.
   (** The following fact is false, but that does not stop [eauto] from taking a very long time to search for proofs of it. We use the handy %\index{Vernacular commands!Time}%[Time] command to measure how long a proof step takes to run. None of the following steps make any progress. *)

-  Example three_minus_four_zero : exists x, 1 + x = 0.
+  Example zero_minus_one : exists x, 1 + x = 0.
    Time eauto 1.
    (** %\vspace{-.15in}%
    <<

@@ -488,7 +488,7 @@ Qed.
 Hint Resolve length_O length_S.
 (* end thide *)

-(** Let us apply these hints to prove that a [list nat] of length 2 exists. *)
+(** Let us apply these hints to prove that a [list nat] of length 2 exists. (Here we register [length_O] with [Hint Resolve] instead of [Hint Immediate] merely as a convenience to use the same command as for [length_S]; [Resolve] and [Immediate] have the same meaning for a premise-free hint.) *)

 Example length_is_2 : exists ls : list nat, length ls = 2.
 (* begin thide *)

@@ -660,7 +660,7 @@ Example eval1' : forall var, eval var (Plus Var (Plus (Const 8) Var)) (2 * var + 8).
 Abort.
 (* end thide *)

-(** To help prove [eval1'], we prove an alternate version of [EvalPlus] that inserts an extra equality premise. *)
+(** To help prove [eval1'], we prove an alternate version of [EvalPlus] that inserts an extra equality premise. This sort of staging is helpful to get around limitations of [eauto]'s unification: [EvalPlus] as a direct hint will only match goals whose results are already expressed as additions, rather than as constants. With the alternate version below, to prove the first two premises, [eauto] is given free rein in deciding the values of [n1] and [n2], while the third premise can then be proved by [reflexivity], no matter how each of its sides is decomposed as a tree of additions. *)

 (* begin thide *)
 Theorem EvalPlus' : forall var e1 e2 n1 n2 n,
   eval var e1 n1

@@ -815,7 +815,7 @@ Print linear.
   The basic hints for [auto] and [eauto] are: %\index{Vernacular commands!Hint Immediate}%[Hint Immediate lemma], asking to try solving a goal immediately by applying a lemma and discharging any hypotheses with a single proof step each; %\index{Vernacular commands!Hint Resolve}%[Resolve lemma], which does the same but may add new premises that are themselves to be subjects of nested proof search; %\index{Vernacular commands!Hint Constructors}%[Constructors type], which acts like [Resolve] applied to every constructor of an inductive type; and %\index{Vernacular commands!Hint Unfold}%[Unfold ident], which tries unfolding [ident] when it appears at the head of a proof goal. Each of these [Hint] commands may be used with a suffix, as in [Hint Resolve lemma : my_db]. This adds the hint only to the specified database, so that it would only be used by, for instance, [auto with my_db]. An additional argument to [auto] specifies the maximum depth of proof trees to search in depth-first order, as in [auto 8] or [auto 8 with my_db]. The default depth is 5.

-   All of these [Hint] commands can be issued alternatively with a more primitive hint kind, [Extern]. A few more examples of [Hint Extern] should illustrate more of the possibilities. *)
+   All of these [Hint] commands can be expressed with a more primitive hint kind, [Extern]. A few more examples of [Hint Extern] should illustrate more of the possibilities. *)

 Theorem bool_neq : true <> false.
 (* begin thide *)
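The hint-database and depth mechanisms described above can be exercised in a few lines. This is a hedged standalone sketch, not from the book; the database name [my_db] is illustrative.

```coq
(* Register a lemma in a custom hint database and use it explicitly. *)
Require Import List.

Theorem length_nil : length (@nil nat) = 0.
  reflexivity.
Qed.

Hint Resolve length_nil : my_db.

Goal length (@nil nat) = 0.
  auto with my_db.  (* the hint is consulted only when [my_db] is named *)
Qed.

(* An explicit depth bound, as in [auto 8]: the core hints [le_n] and
   [le_S] need enough depth to chain. *)
Goal 0 <= 5.
  auto 10.
Qed.
```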
src/Match.v
@@ -117,7 +117,7 @@ Ltac my_tauto :=
   It is also trivial to implement the introduction rules (in the sense of %\index{natural deduction}%natural deduction%~\cite{TAPLNatDed}%) for a few of the connectives. Implementing elimination rules is only a little more work, since we must give a name for a hypothesis to [destruct].

-   The last rule implements modus ponens, using a tactic %\index{tactics!specialize}%[specialize] which will replace a hypothesis with a version that is specialized to a provided set of arguments (for quantified variables or local hypotheses from implications). *)
+   The last rule implements modus ponens, using a tactic %\index{tactics!specialize}%[specialize] which will replace a hypothesis with a version that is specialized to a provided set of arguments (for quantified variables or local hypotheses from implications). By convention, when the argument to [specialize] is an application of a hypothesis [H] to a set of arguments, the result of the specialization replaces [H]. For other terms, the outcome is the same as with [generalize]. *)

 Section propositional.
   Variables P Q R : Prop.

@@ -196,7 +196,7 @@ Ltac extend pf :=
 (** We see the useful %\index{tactics!type of}%[type of] operator of Ltac. This operator could not be implemented in Gallina, but it is easy to support in Ltac. We end up with [t] bound to the type of [pf]. We check that [t] is not already present. If it is not, we use a [generalize]/[intro] combo to add a new hypothesis proved by [pf]. The tactic %\index{tactics!generalize}%[generalize] takes as input a term [t] (for instance, a proof of some proposition) and then changes the conclusion from [G] to [T -> G], where [T] is the type of [t] (for instance, the proposition proved by the proof [t]).

-   With these tactics defined, we can write a tactic [completer] for adding to the context all consequences of a set of simple first-order formulas. *)
+   With these tactics defined, we can write a tactic [completer] for, among other things, adding to the context all consequences of a set of simple first-order formulas. *)

 (* begin thide *)
 Ltac completer :=

@@ -317,7 +317,7 @@ Abort.
 (** The problem is that unification variables may not contain locally bound variables. In this case, [?P] would need to be bound to [x = x], which contains the local quantified variable [x]. By using a wildcard in the earlier version, we avoided this restriction. To understand why this applies to the [completer] tactics, recall that, in Coq, implication is shorthand for degenerate universal quantification where the quantified variable is not used. Nonetheless, in an Ltac pattern, Coq is happy to match a wildcard implication against a universal quantification.

-   The Coq 8.2 release includes a special pattern form for a unification variable with an explicit set of free variables. That unification variable is then bound to a function from the free variables to the "real" value. In Coq 8.1 and earlier, there is no such workaround. We will see an example of this fancier binding form in the next chapter.
+   The Coq 8.2 release includes a special pattern form for a unification variable with an explicit set of free variables. That unification variable is then bound to a function from the free variables to the "real" value. In Coq 8.1 and earlier, there is no such workaround. We will see an example of this fancier binding form in Section 15.5.

   No matter which Coq version you use, it is important to be aware of this restriction. As we have alluded to, the restriction is the culprit behind the surprising behavior of [completer']. We unintentionally match quantified facts with the modus ponens rule, circumventing the check that a suitably matching hypothesis is available and leading to different behavior, where wrong quantifier instantiations are chosen. Our earlier [completer] tactic uses a modus ponens rule that matches the implication conclusion with a variable, which blocks matching against non-trivial universal quantifiers. *)

@@ -479,6 +479,8 @@ Abort.
 (** Even basic [idtac] is an embedded imperative program, so we may not automatically mix it with purely functional code. In fact, a semicolon operator alone marks a span of Ltac code as an embedded tactic script. This makes some amount of sense, since pure functional languages have no need for sequencing: since they lack side effects, there is no reason to run an expression and then just throw away its value and move on to another expression.

+   An alternate explanation that avoids an analogy to Haskell monads (admittedly a tricky concept in its own right) is: An Ltac tactic program returns a function that, when run later, will perform the desired proof modification. These functions are distinct from other types of data, like numbers or Gallina terms. The prior, correctly working version of [length] computed solely with Gallina terms, but the new one is implicitly returning a tactic function, as indicated by the use of [idtac] and semicolon. However, the new version's recursive call to [length] is structured to expect a Gallina term, not a tactic function, as output. As a result, we have a basic dynamic type error, perhaps obscured by the involvement of first-class tactic scripts.
+
   The solution is like in Haskell: we must "monadify" our pure program to give it access to side effects. The trouble is that the embedded tactic language has no [return] construct. Proof scripts are about proving theorems, not calculating results. We can apply a somewhat awkward workaround that requires translating our program into %\index{continuation-passing style}%_continuation-passing style_%\cite{continuations}%, a program structuring idea popular in functional programming. *)

 Reset length.
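The [specialize] behavior described above (replacing a hypothesis by an instance of itself) is easy to see in a tiny standalone goal; this sketch is not part of the commit.

```coq
(* Minimal illustration of [specialize] replacing a hypothesis. *)
Goal (forall n : nat, n + 0 = n) -> 2 + 0 = 2.
  intro H.
  specialize (H 2).
  (* [H] has been replaced by the instance [2 + 0 = 2]. *)
  exact H.
Qed.
```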
src/MoreDep.v
@@ -60,7 +60,7 @@ Section ilist.
 We may use [in] clauses only to bind names for the arguments of an inductive type family. That is, each [in] clause must be an inductive type family name applied to a sequence of underscores and variable names of the proper length. The positions for _parameters_ to the type family must all be underscores. Parameters are those arguments declared with section variables or with entries to the left of the first colon in an inductive definition. They cannot vary depending on which constructor was used to build the discriminee, so Coq prohibits pointless matches on them. It is those arguments defined in the type to the right of the colon that we may name with [in] clauses.

-Our [app] function could be typed in so-called %\index{stratified type systems}%_stratified_ type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. Compile-time data may be _erased_ such that we can still execute a program. As an example where erasure would not work, consider an injection function from regular lists to length-indexed lists. Here the run-time computation actually depends on details of the compile-time argument, if we decide that the list to inject can be considered compile-time. More commonly, we think of lists as run-time data. Neither case will work with %\%naive erasure. (It is not too important to grasp the details of this run-time/compile-time distinction, since Coq's expressive power comes from avoiding such restrictions.) *)
+Our [app] function could be typed in so-called %\index{stratified type systems}%_stratified_ type systems, which avoid true dependency. That is, we could consider the length indices to lists to live in a separate, compile-time-only universe from the lists themselves. Compile-time data may be _erased_ such that we can still execute a program. As an example where erasure would not work, consider an injection function from regular lists to length-indexed lists. Here the run-time computation actually depends on details of the compile-time argument, if we decide that the list to inject can be considered compile-time. More commonly, we think of lists as run-time data. Neither case will work with %\%naive%{}% erasure. (It is not too important to grasp the details of this run-time/compile-time distinction, since Coq's expressive power comes from avoiding such restrictions.) *)

 (* EX: Implement injection from normal lists *)
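The injection function mentioned in the exercise above can be sketched in a few lines. This is a hedged sketch, not from the commit: it re-declares a length-indexed list type in the style of this chapter's [Section ilist] so the block stands alone.

```coq
(* Standalone sketch: injecting an ordinary [list] into a
   length-indexed list, with the length index computed from the
   list itself. *)
Set Implicit Arguments.
Require Import List.

Section ilist.
  Variable A : Set.

  Inductive ilist : nat -> Set :=
  | Nil : ilist O
  | Cons : forall n, A -> ilist n -> ilist (S n).

  Fixpoint inject (ls : list A) : ilist (length ls) :=
    match ls as ls' return ilist (length ls') with
      | nil => Nil
      | h :: t => Cons h (inject t)
    end.
End ilist.
```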
src/Reflection.v
@@ -401,7 +401,7 @@ Inductive formula : Set :=
 (** The type %\index{Gallina terms!index}%[index] comes from the [Quote] library and represents a countable variable type. The rest of [formula]'s definition should be old hat by now.

-   The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it interprets implications incorrectly, so we will need to declare a wrapper definition for implication, as we did in the last chapter. *)
+   The [quote] tactic will implement injection from [Prop] into [formula] for us, but it is not quite as smart as we might like. In particular, it wants to treat function types specially, so it gets confused if function types are part of the structure we want to encode syntactically. To trick [quote] into not noticing our uses of function types to express logical implication, we will need to declare a wrapper definition for implication, as we did in the last chapter. *)

 Definition imp (P1 P2 : Prop) := P1 -> P2.
 Infix "-->" := imp (no associativity, at level 95).

@@ -478,7 +478,9 @@ Section my_tauto.
 Local Open Scope partial_scope.

-(** Now we can write a function [forward] which implements deconstruction of hypotheses. It has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)
+(** Now we can write a function [forward] that implements deconstruction of hypotheses, expanding a compound formula into a set of sets of atomic formulas covering all possible cases introduced with use of [Or]. To handle consideration of multiple cases, the function takes in a continuation argument, which will be called once for each case.
+
+   The [forward] function has a dependent type, in the style of Chapter 6, guaranteeing correctness. The arguments to [forward] are a goal formula [f], a set [known] of atomic formulas that we may assume are true, a hypothesis formula [hyp], and a success continuation [cont] that we call when we have extended [known] to hold new truths implied by [hyp]. *)

 Definition forward : forall (f : formula) (known : set index) (hyp : formula)
   (cont : forall known', [allTrue known' -> formulaDenote atomics f]),

@@ -789,7 +791,7 @@ Fixpoint termDenote t (e : term t) : typeDenote t :=
   | Abs _ e1 => fun x => termDenote (e1 x)
  end.

-(** Here is a %\%naive first attempt at a reification tactic. *)
+(** Here is a %\%naive%{}% first attempt at a reification tactic. *)

 Ltac refl' e :=
   match e with

@@ -805,9 +807,9 @@ Ltac refl' e :=
   | _ => constr:(Const e)
  end.

-(** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our %\%naive implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.
+(** Recall that a regular Ltac pattern variable [?X] only matches terms that _do not mention new variables introduced within the pattern_. In our %\%naive%{}% implementation, the case for matching function abstractions matches the function body in a way that prevents it from mentioning the function argument! Our code above plays fast and loose with the function body in a way that leads to independent problems, but we could change the code so that it indeed handles function abstractions that ignore their arguments.

-   To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. For instance: *)
+   To handle functions in general, we will use the pattern variable form [@?X], which allows [X] to mention newly introduced variables that are declared explicitly. A use of [@?X] must be followed by a list of the local variables that may be mentioned. The variable [X] then comes to stand for a Gallina function over the values of those variables. For instance: *)

 Reset refl'.

 Ltac refl' e :=

@@ -868,4 +870,4 @@ Goal (fun (x y : nat) => x + y + 13) = (fun (_ z : nat) => z).
 Abort.

-(** Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that identifies variable references by detecting when term arguments are just compositions of [fst] and [snd]; from the order of the compositions we may read off the variable number. We leave the details as an exercise for the reader. *)
+(** Our encoding here uses Coq functions to represent binding within the terms we reify, which makes it difficult to implement certain functions over reified terms. An alternative would be to represent variables with numbers. This can be done by writing a slightly smarter reification function that identifies variable references by detecting when term arguments are just compositions of [fst] and [snd]; from the order of the compositions we may read off the variable number. We leave the details as an exercise (though not a trivial one!) for the reader. *)
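The [@?X] pattern form described in this diff can be demonstrated in isolation. This is a hedged sketch, not from the commit; the tactic name [plus_body] and the goal are illustrative.

```coq
(* [@?F x] binds [F] to a Gallina function over the bound variable [x],
   rather than failing to match a body that mentions [x]. *)
Ltac plus_body e :=
  match e with
    | (fun x : nat => @?F x + @?G x) =>
      idtac "left summand:" F "; right summand:" G
  end.

Goal True.
  plus_body (fun y : nat => y * 2 + (y + 1)).
  exact I.
Qed.
```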