
A FIXPOINT THEOREM IN LINEAR LOGIC



Date: Wed, 5 Feb 92 14:51:20 +0100
To: linear@cs.stanford.edu

			A FIXPOINT THEOREM IN LINEAR LOGIC
				J.-Y. Girard

	I will report on results that I obtained last August and that are
inspired by logic programming (basically negation as failure); the
results have consequences for the proof-theory of equality and for the
problem of the 1-step predecessor. The basic statement is that, in the
absence of exponentials, every operator of LL predicate logic has a
fixpoint.

1. FROM LOGIC PROGRAMMING
	Surely the most dubious thing in LP is negation as failure; it
is dubious from the logical standpoint only. There is a strong reason not
to reject it, since it vaguely states the following
	UNDER CERTAIN CIRCUMSTANCES MISSING INFORMATION IS FALSE
which I take as the only important message of NAF.

1.1. Background (see Proofs and Types)
	Resolution is nothing more than (optimised) proof-search in cut-free
systems; in fact the Hauptsatz enables one to remove all cuts but those
on atoms, provided one allows substitution in axioms; then, if the axioms
are Horn clauses, one can get rid of all structural rules: eventually only
cut and substitution (optimised as unification) remain.
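	As a concrete illustration, consider the usual Horn-clause program
for list membership, written here in Prolog syntax (the example itself is of
course not part of the results): each resolution step is a cut against an
instance of a clause head, the instance being computed by unification.

	% membership in a list, a typical Horn program
	mem(X, [X|_]).
	mem(X, [_|T]) :- mem(X, T).

	% the query ?- mem(b, [a,b]) is answered in two resolution steps:
	% unify the goal with the head of the second clause (mgu X = b,
	% T = [b]), then unify the remaining goal mem(b, [b]) with the
	% head of the first clause.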
	For negation there is nothing similar; moreover the fact that
proof-search ignores contraction renders the behaviour of negation by
failure absolutely non-classical.
1.2. What we are looking for
	Basically something like Clark's completion, but less naive; in
some sense we want to say that if p has been introduced as the head of the
clauses C_1,...,C_n (with bodies H_1,...,H_n) and of no others, then this
must be taken as the definition of a new logical primitive (a generalised
connective if one prefers), and that the elimination rules for p are
basically the negation by failure rules. For this it is absolutely essential
to know that no other clause with head p is around (i.e. the notion of END
of the list of clauses, which is essential in small details like failure).
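	To fix ideas, here is what Clark's completion does on a tiny program
(the predicate even is only an illustration, not part of the results):

	% the program: even is the head of exactly these two clauses
	even(0).
	even(s(s(X))) :- even(X).

	% Clark's completion replaces them by the single equivalence
	%   even(Y) <-> ( Y = 0  \/  exists X ( Y = s(s(X)) /\ even(X) ) )
	% what is proposed below keeps the idea of reading the clauses as
	% THE definition of even, but as introduction/elimination rules of
	% a sequent system rather than as an axiomatic equivalence.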
	Although seeing p as a generalised connective is the same as writing
logical equivalences, our logical formalisation cannot be a list of
equivalences; I insist on the fact that the completed system should be a
pure logical system and that the effect of negation should just be ordinary
proof-search in this system. If I write A iff B, then searching for B leads
to A, which leads back to B, etc.
	The completed system should enjoy cut-elimination (impossible to
restrict the search space otherwise); of course there is a possible joke,
namely to search for cut-free proofs in a system with no cut-elimination...
but then the declarative ideology is violated, since there is no way to
combine answers (typically, if A succeeds and A/\B fails, I want to infer
that B fails, and for this I need cut).
	Finally the completed system should also prove enough theorems, again
for declarative reasons.
1.3. What we keep from NAF
	Basically the ideology of missing information (under specific
circumstances, typically the presence of an END instruction).
	Also the idea of expressing this by a fixpoint; observe that all the
explanations in terms of smallest fixpoints (which would lead to inductive
definitions) are completely wrong. In fact it is not hard to figure out
how such surrealistic explanations were created: the confusion between proofs
and models (smallest Herbrand models and related junk). The consequence
relation is a smallest fixpoint, but this is independent of the logical
contents. By the way, if I define int by int(0) and int(x) /- int(Sx), and
succ by succ(S0,0) and succ(x,y) /- succ(Sx,Sy), then the query
int(x)/\succ(x,x) does not fail (but its negation is provable by induction,
which means that the smallest fixpoint interpretation of int makes the query
false). What actually happens is that one only looks for arbitrary fixpoints
in NAF, but since we are interested in provability, what holds is what is
true in all these fixpoints (hence in the smallest one, if any). The proposal
to add induction to proof-search is unrealistic (no control over the search
space: induction can hide cuts).
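	The int/succ example just mentioned can be run as an ordinary Prolog
program (with s/1 for the successor symbol; this only illustrates the
behaviour of the search, not the system proposed below):

	int(0).
	int(s(X)) :- int(X).

	succ(s(0), 0).
	succ(s(X), s(Y)) :- succ(X, Y).

	% ?- int(X), succ(X, X).
	% under the usual left-to-right strategy, int(X) enumerates
	% 0, s(0), s(s(0)), ... and succ(X, X) fails for each of these
	% candidates, so the search backtracks forever: the query never
	% fails, although no number is its own successor, i.e. the
	% smallest-fixpoint reading makes it false.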
1.4. Previous attempts
	Linear completion (Cerrito 1990) basically exploits the analogies
between SLDNF and LL; however what is obtained is an axiomatisation by
means of equivalences, and it is not possible to explain SLDNF by means
of proof-search in this system. Moreover, since the work sticks to actual
SLDNF, it has to use additives in places where multiplicatives would have
done better.
	After Kunen, several axiomatisations of SLDNF by means of 3-valued
logic were proposed; the interesting ones are those that relate it to the
three-valued interpretation of cut-free proofs (see chapter 3 of Proof-Theory
and Logical Complexity); Vauzeilles and, more recently, Staerk developed
axiomatisations along this line. For instance Staerk has cut-elimination, but
the calculus is too poor: A /- B reads as "if A is not false then B is true",
hence one does not have A /- A... hence it is impossible to prove statements
like A/\B /- B/\A; presumably one can do slightly better with the approach of
Vauzeilles, but one should not have too many illusions: 3-valued logic is
extremely unexpressive, unless we decide to ignore the third value.
	Hallnäs and Schroeder-Heister proposed the use of classical logic,
but without requiring cut-elimination. As we already said, we get terrible
problems here when we want to relate answers: no deductive power at all.
	Furthermore, all these attempts have to some extent contributed to the
propositional aspect, but were very disappointing wrt predicates: usually
a lot of axioms for equality are required...

2. The fixpoint theorem

2.1. Statement of the result
	Let H_1,...,H_n be formulas of linear logic without exponentials, and
let t_1,...,t_n be terms; then the fixpoint equation saying that p(x) holds
iff, for some i, x is t_i and H_i holds, has a solution.
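	(Written out with connectives, the equation presumably reads
p(x) iff ((x = t_1) times H_1) plus ... plus ((x = t_n) times H_n), with an
existential closure over the variables of each clause; in any case the
precise choice of connectives is the one dictated by the rules of 2.3 below.)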
2.2. Comments and remarks
	The existence of the fixpoint will follow from cut-elimination: a
sequent system in which the equivalence holds will be produced.
Cut-elimination in this system will just prove that the new primitive p is
as good as official ones like times and with.
	No assumption of positivity is made; in particular there is a solution
to "p iff notp"; by the way, such solutions cannot be unique.
	The assumption about exponentials is too violent; it suffices to
require that p is not nested inside a ? or a ! within the H_i.
	There are obvious generalisations, which are indeed reducible to the
theorem: p can be chosen n-ary (use a fresh n-ary function to change arity),
or we can solve simultaneous fixpoints, typically, s being given:
p iff (q times r) -o s ; q iff (r times p) -o s ; r iff (p times q) -o s .
By the way, the interpretation of an LP program is a simultaneous fixpoint.
	Technically speaking, it will simplify our life to assume that
any variable x free in H_i occurs in t_i (otherwise replace H_i by ExH_i);
the formulation without this restriction is much more pedantic.
2.3. The system
	Introduction rules: from /- gamma, H_i(theta) derive
/- gamma, p(t_i(theta)), where theta is any substitution.
	Elimination rules: in order to derive /- gamma, notp(u), prove
(/- gamma, notH_i)theta_i for each i such that u is unifiable with t_i, in
which case theta_i is their mgu. These rules are closed under substitution.
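	For instance, for the propositional fixpoint "p iff notp" mentioned
in 2.2 (one clause, no terms, body notp), the rules specialise to: from
/- gamma, notp derive /- gamma, p (introduction), and from /- gamma, p derive
/- gamma, notp (elimination). The equivalence between p and notp is then
provable (apply each rule to the identity axiom /- p, notp), whereas neither
/- p nor /- notp is provable: any cut-free proof of one would have to go
through a strictly shorter cut-free proof of the other.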
2.4. The cut-elimination theorem
	Basically a cut between an introduction and an elimination can be
replaced by another one, usually of greater degree; however, due to the
absence of structural rules, the proof shrinks, as in the small normalisation
theorem of Linear Logic, TCS 50. Technically speaking, this can be obtained
as a corollary of recent unpublished work of Vauzeilles about LU: the fact
that formulas are not well-founded does not matter for cut-elimination as
long as no infinite sequence of exponentials is created... in other terms,
the actual cut-degree is the maximum number of exponentials that can be
found in the subformula tree, and as long as this number remains finite,
i.e. as long as no loop contains a ! or a ?, the degree remains finite.
2.5. More comments
	Hilbert proposed consistency (existence of a model) as a test for
existence; however this is fishy: nobody would accept a model making the
integers non-standard as proving the existence of some object. But a stronger
form of consistency is conservativity: p exists because it does not alter
the body of existing knowledge. Here this is achieved by cut-elimination and
the subformula property (of course the H_i are declared to be subformulas of
p). If A is provable and p is not a subformula of A, then A is provable
without the p-rules. Philosophically speaking this is a bit like Pascal's
wager.
	Logical systems are now considered as open systems in which one can
freely add new logical constants, the only test to pass being
cut-elimination... instead of writing a new logic, just add new connectives!

3. Examples
	
3.1. Equality
	If I define equality by 1 /- x = x, I get
axiom: /- t = t
elim rules: /- gamma, not(t = u) when t and u are not unifiable
from /- gamma(theta) derive /- gamma, not(t = u) if theta = mgu(t,u)
	Typically this will prove /- gamma, not(Sx = 0) and
/- not(Sx = Sy), x = y.
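	Note that the elimination rules only ever ask two questions about the
terms t and u: are they unifiable, and if so what is their mgu? In Prolog
this is just unification with the occurs check (again purely an
illustration, the syntax X+0 etc. is not part of the system):

	% ?- unify_with_occurs_check(s(X), 0).    % fails: not(Sx = 0) is an axiom
	% ?- unify_with_occurs_check(s(X), s(Y)). % succeeds with X = Y
	% ?- unify_with_occurs_check(X+0, X).     % fails: x+0 is distinct from x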
	This induces a proof-theory of equality with cut-elimination, which
did not exist before. But remark that
	the improvement is independent of LL (because H_1 does not mention =);
	it applies only to free structures (it is a theory of unification):
it also proves that x+0 is distinct from x. For arithmetic this is not a big
deal: by fixpoint, all partial recursive functions are definable, including
plus and times, by means of predicates. In general one can argue that
equality need not be free equality, but one can counter-argue that if logic
has two ways of expressing functions, it might mean that we can restrict the
use of function symbols to free structures and use predicates for the rest.

3.2. Integers
	If I define integers by 1 /- int(0) and int(x) /- int(Sx), I get
introductions: axiom /- int(0)
from /- gamma, int(t) derive /- gamma, int(St)
eliminations: axiom /- gamma, notint(ft_1...t_n) for f distinct from 0, S
from /- gamma derive /- gamma, notint(0)
from /- gamma, notint(t) derive /- gamma, notint(St)
from /- gamma[0/x] and /- gamma[Sy/x], notint(y) derive /- gamma, notint(x)
the last rule states that an integer is either 0 or the successor of an
integer; this rule expresses the 1-step predecessor. The usual second-order
definitions do not achieve this, since they do not express fixpoints. In fact
the usual definition states that int is the smallest closed set, and one
must PROVE that it is actually a fixpoint; this proof is the linear-time
algorithm for the predecessor.
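	To see what "linear time" means here: from iteration alone, the
predecessor is usually recovered by the well-known pairing trick, which runs
through the whole number. A small Prolog sketch of that trick, only meant to
exhibit the algorithm in question:

	% iterate Step N times starting from X
	iter(0, _Step, X, X).
	iter(s(N), Step, X, Y) :- call(Step, X, Z), iter(N, Step, Z, Y).

	% one step of the pairing trick: (a, _) |-> (s(a), a)
	step(pair(A, _), pair(s(A), A)).

	% predecessor of N: iterate the step N times from (0, 0) and take
	% the second component; this visits every S in N, hence linear time.
	pred(N, P) :- iter(N, step, pair(0, 0), pair(_, P)).

	% ?- pred(s(s(s(0))), P).   % P = s(s(0))

Against this, the last elimination rule above gives the case analysis
"x is 0 or x is Sy" in a single step.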

4. Open questions

4.1. In logic programming
	The status of NAF is now logically very sound, but there might be
implementation problems. For instance, instead of establishing a negation by
making a query fail, we rather try to prove the negation directly... but
the main advantages of NAF are preserved. The major problem could be how
to answer existential queries, since our elim rules involve unification,
and one should find a way (anti-unification?) to optimise unification.

4.2. In typed lambda-calculi
	Under specific positivity assumptions, one can add to the fixpoint
the stronger requirement "smallest fixpoint"; LU seems the right formalism
to do this. Observe that we would get both primitive recursion and the
one-step predecessor. Observe too that all "petty" axioms are now logically
provable and that all partial functions can be defined... hence the major
problem is the following: CAN WE ALSO MAKE MAJOR PROGRESS ON INDUCTION?