Date: Fri, 29 Nov 91 13:49:31 +0100
To: linear@cs.stanford.edu


		Jean-Yves Girard



	The discovery of polarities occurred during the study of classical
logic that I started last December [1]; roughly speaking, the idea is to
distinguish simple from double negations in the Gödel-Kolmogorov
translation, so as to get a set of remarkable isomorphisms. The resulting
sequent calculus LC one gets for classical logic is much better behaved
than LK, and has some typically intuitionistic features, like a
distinguished formula: these features enable one to get certain unexpected
forms of existence and disjunction properties.
	This suggested the possibility of a unique logical system, LU,
of which classical, intuitionistic and linear logics could be fragments. Such
a system, proposed in [2], enables various logics to communicate freely,
without mistake: a classical lemma can be used in an intuitionistic proof,
but after cut-elimination a "subsequent property" forces the proof to be
intuitionistic. This kind of unity has therefore nothing to do with
logical frameworks in which systems are put together but are forbidden
to communicate. Another measure of the relative success of this attempt
at unity is that the fragments corresponding to the usual logics are better
than the original systems: for classical logic, because the fragment is LC
and not LK; in the intuitionistic case (for the negative fragment:
forall, implies, and), because LU forces the declaration of head variables,
if any (linear dependencies must be declared in LU).
	LU was established on the basis of a third polarity, zero, basically
for pure linear behaviour in the absence of ? and !; but soon afterwards
other polarities (splitting zero into many subpolarities) started to knock
at the door. There was no way to refuse them: as long as the subsequent
property is preserved, one should be free to extend LU by adding new
connectives, new rules, etc. This is much better than writing a fresh
system every morning... For that reason it was necessary to set the
unifying pattern on a more generous basis, which leads to polarities
and behaviours.


	First of all, the idea of polarity must (at least) be co-attributed
to Andreoli and Pareschi, who arrived at quite similar conclusions just from
considerations of logic programming. The triadic calculus that they use
is more than a system of control for linear proof-search: it also embodies
essential features of a unified sequent calculus.
	There are two polarities:
POSITIVE (or introductions, or synchronous) formulas correspond to the case
in which the outermost symbol may contain an implicit piece of information;
intuitionistically, existence and disjunction; in LL, times (the information
is the splitting of the context), plus, there is, bang, zero and one. Here
phase semantics is slightly helpful: the facts associated with such formulas
are double negations (the negative ones will be single negations). In terms
of logic programming this means that we cannot be sure that the last rule
applied leads to this formula... we may have to backtrack.
NEGATIVE (or eliminations, or asynchronous) formulas are, in LL, the negations
of positive formulas; it is quite surprising that, between a formula and its
negation, exactly one of them conveys some information, for negative formulas
are defined as those which contain no immediate implicit information. An
abstract explanation of this fact will be needed at some point. In
intuitionistic logic the negative formulas are forall, implies and conjunction;
anybody with some experience of intuitionistic proof-theory knows that this
distinction is a major borderline: typically, reducibility is defined using
introduction or elimination rules depending on the polarity. In terms of
proof-search, negative formulas are asynchronous (or invertible): this means
that we can force the last rule to introduce A, and that this last rule is
then unique.
	Polarities must be attributed to atoms as well; the only way to
do so is to have positive and negative atoms. Polarity is a piece of syntactic
information, and not a semi-information like the style.
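	As a rough illustration of the classification above, here is a minimal
sketch in modern notation (the encoding of formulas and all the names are
mine, not part of the system): linear negation flips the polarity, so between
a formula and its negation exactly one is positive.

```python
# Sketch: polarity of a linear-logic formula, following the classification
# above. Positive: times, plus, there-is, bang, zero, one (and positive
# atoms); negative: their duals. Formulas are nested tuples; names are mine.

POSITIVE = {"times", "plus", "exists", "bang", "zero", "one", "atom+"}
NEGATIVE = {"par", "with", "forall", "whynot", "top", "bot", "atom-"}

def polarity(formula):
    """formula: nested tuple, e.g. ("times", ("atom+", "p"), ("atom+", "q"))."""
    head = formula[0]
    if head in POSITIVE:
        return "+"
    if head in NEGATIVE:
        return "-"
    if head == "neg":  # linear negation flips the polarity
        return "-" if polarity(formula[1]) == "+" else "+"
    raise ValueError(f"unknown connective: {head}")

# The negation of a positive formula is negative, as stated above:
assert polarity(("neg", ("times", ("atom+", "p"), ("atom+", "q")))) == "-"
```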


	The style of a formula means the kind of structural maintenance
it accepts; there can be many styles, since many structural combinations
are possible; for the moment I see room for
l: linear style, used in LL and LI: exchange only
c: classical style, used in LL, LI, LC: the three usual rules
a: affine style, which can be used to build a system of affine logic:
exchange and weakening
p: pertinent style: exchange and contraction; this one is added to
complement 'a' rather than for real logical needs; we avoided the expression
"relevant", since any attempt to build yet another "relevant logic" out of
style p cannot lead to one of the existing systems.
	  The big guy absent from this list is the full non-commutative
system; I am not saying that adding this style is impossible, I am saying
that this is premature (too many complications).
	The style is a semi-information: this means that I may decide to
weaken the style, i.e. to ignore part of it: typically I can treat a style
c like a style l. Syntactically speaking this has the following consequence:
there are judgements of style; this small detail will later settle a lingering
question, namely it will force contraction to be written as a single n-ary
rule (a sequence of contractions would involve cuts).
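	The idea that a style is just the set of structural rules it
licenses, and that weakening a style means dropping to a subset of that set,
can be sketched as follows (the encoding is mine):

```python
# Each style licenses a set of structural rules; weakening a style means
# passing to a style that licenses no more rules (e.g. c treated as l).
STYLES = {
    "l": {"exchange"},                              # linear
    "c": {"exchange", "weakening", "contraction"},  # classical
    "a": {"exchange", "weakening"},                 # affine
    "p": {"exchange", "contraction"},               # pertinent
}

def may_weaken_to(s, t):
    """Style s may be treated as style t iff t licenses a subset of s's rules."""
    return STYLES[t] <= STYLES[s]

assert may_weaken_to("c", "l")      # a style c can be treated as a style l
assert not may_weaken_to("l", "a")  # l cannot pretend to allow weakening
```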


	The behaviour is simply the style together with the polarity; the
polarity says on which side the rules listed in the style apply:
positive : left
negative : right
(exchange applies everywhere, as long as all styles involve exchange)

+c occurs in LC, LI, LL
-c occurs in LC, LL
+l occurs in LL
-l occurs in LI, LL

	If one sticks to right-handed formulations, only negative formulas
allow structural rules. It might be of utmost interest to handle a formula
of a given behaviour as if it were of behaviour -s, which we can write
(s)A. This prefix can be ignored or simplified when it brings nothing or
little; typically
(l)A is A when A is negative
(c)A is (p)A when A is of behaviour -a
	From this one builds a sequent calculus by means of sequents
/- (s1)A1,...,(sn)An
that I will not develop here.
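	The prefix simplifications above can be stated mechanically; here is
a sketch (a behaviour is encoded as a polarity/style pair; the encoding is
mine, not part of the calculus):

```python
# Sketch of the simplifications of the prefix (s)A stated above:
# (l)A is just A when A is negative; (c)A reduces to (p)A when A has
# behaviour -a. A behaviour is a (polarity, style) pair.

def simplify_prefix(style, behaviour):
    pol, _ = behaviour
    if style == "l" and pol == "-":
        return None                  # (l)A is A when A is negative
    if style == "c" and behaviour == ("-", "a"):
        return "p"                   # (c)A is (p)A when A is of behaviour -a
    return style                     # otherwise the prefix stays

assert simplify_prefix("l", ("-", "l")) is None
assert simplify_prefix("c", ("-", "a")) == "p"
```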


	Just a word about connectives; the main connectives will be
connectives of change of behaviour, namely the internalisation of the
notation (s)A. These connectives are therefore analogues of ? (? internalises
(c)A); generalisations of ! naturally occur by symmetry. In particular
there is a bang putting everything negative to +l (notation A+) and a question
mark putting everything positive to -l (notation A-).
	This is why there is no need for the ultimate connectives to be
defined everywhere. In fact connectives will very often prefer to be defined
only on some polarity (A times B is positive, and it is enough, and simpler
syntactically, to assume that A and B were already positive, since we
can otherwise use A+ times B, etc.). A connective may be defined only for
certain behaviours; for instance for behaviour +a (it is therefore
also defined for +c). The new styles a and p will, by the way, naturally
lead to new conjunctions distinct from times and with.
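	Read as coercions on behaviours, the generalised bang and question
mark above look as follows (A+ and A- are the notations from the text; the
encoding is mine):

```python
# The generalised bang sends any negative formula to behaviour +l; the
# generalised question mark sends any positive formula to behaviour -l.
# A behaviour is a (polarity, style) pair.

def bang(behaviour):
    """A+ : defined on negative formulas only, lands in +l."""
    pol, _style = behaviour
    assert pol == "-", "A+ is defined on negative formulas"
    return ("+", "l")

def whynot(behaviour):
    """A- : defined on positive formulas only, lands in -l."""
    pol, _style = behaviour
    assert pol == "+", "A- is defined on positive formulas"
    return ("-", "l")

assert bang(("-", "c")) == ("+", "l")
assert whynot(("+", "a")) == ("-", "l")
```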


	One of the most spectacular advances that I expect is the following:
a clean and simple approach to the following problems of proof-nets

i) handling of bottom (i.e. the problem of weakening in behaviour -a)
ii) handling of contraction: an n-ary rule with n non-negative that must
be done only once
iii) handling of with: the polarities reintroduce some asymmetry, which
can be used to overcome the familiar intricacies of additives.
	All this is based on a precise description of the sequent calculus
that I postpone to some further mail. Yet another piece of positive evidence
is the fact that the geometry of interaction for additives now seems possible.
Again this is because polarities enable one to make assumptions that would
not survive under absolute symmetry.

[1] A new constructive logic: classical logic,
MSCS (should already have appeared)

[2] On the unity of logic,
preprint, 1991