Re: Linear logic semantics
Date: Sun, 22 Dec 91 18:52:01 +0200
Comments: If you have trouble reaching this host as math.tau.ac.il,
please use the old address: email@example.com
To: rags@triples.Math.McGill.CA, firstname.lastname@example.org
Some notes about Seely's message. First, two specific ones:
1) In a previous reply I already noted that the models I discussed are not
complete. Thus (p->p)->(q->q) (where -> is the linear implication) is valid in
them, although not a theorem of multiplicative Linear Logic. What can really
be interesting is to find another concrete, easy-to-use model (or set of models)
for which we do have completeness. All the completeness results I know are
relative to very abstract classes of models.
2) I did not discuss the structure with the two points of infinity because my
main proof-theoretical results are not valid in the presence of the additive
propositional constants, and the main semantical result concerning instances
of multiplicative sentences is not valid in this structure. That this
structure is a model of Linear Logic is, however, a direct corollary of
my results in my paper: "the semantics and proof-theory of linear logic"
(TCS, vol. 57 (1988), pp. 161-184).
Now to the main issue of semantics. Categories might be a natural framework
for semantical investigations, but I am not sure that they really give more
insight than the algebraic tradition which was used by relevance logicians
like Dunn, Urquhart and Meyer. To be specific, let me review the algebraic
semantics I gave to Linear Logic in the paper mentioned above (and which was
not included in Seely's list). The basic structures I used were quintuples
(D,<,~,+,0) such that:
1) (D,<) is a poset (the order < is not meant here to be strict)
2) ~ is an involution on (D,<)
3) + is an associative, commutative, order-preserving operation on (D,<) with
an identity element 0 (i.e., (D,+) is a commutative monoid with identity and
+ is order-preserving).
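These axioms are easy to check mechanically on small instances. Here is a
toy sketch (my own example, not from my paper) that verifies them by brute
force for Z/3 with ~a = -a mod 3 and the trivial order (equality); I also
check that ~ reverses the order, which is what I intend by an involution on
a poset (trivial here, since the order is equality):

```python
# Brute-force check of the basic-structure axioms on a toy instance:
# D = Z/3, order = equality, ~a = -a mod 3, + = addition mod 3, identity 0.
D = range(3)
leq = lambda a, b: a == b          # 1) the (here trivial) partial order
neg = lambda a: (-a) % 3           # 2) candidate involution ~
add = lambda a, b: (a + b) % 3     # 3) candidate monoid operation +

# 2) ~ is an involution, and it reverses the order
assert all(neg(neg(a)) == a for a in D)
assert all(leq(a, b) == leq(neg(b), neg(a)) for a in D for b in D)

# 3) + is associative, commutative, order-preserving, with identity 0
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in D for b in D for c in D)
assert all(add(a, b) == add(b, a) for a in D for b in D)
assert all(add(a, 0) == a for a in D)
assert all(leq(a, b) <= leq(add(a, c), add(b, c))   # bool <= is implication
           for a in D for b in D for c in D)
print("all axioms hold")
```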
I proved in my paper that a sentence B is provable in the multiplicative
fragment of Linear Logic iff v(B) > 1 (where 1=~0, and > is, like <, not
strict) for every assignment v in such a structure (I proved in fact STRONG
completeness, but for simplicity I shall ignore this here).
Now the structure I used for the results about the multiplicative
fragment (in my original message) is the special case of
these basic structures which results if we take D to be the integers (Z), + and
0 as the usual + and 0 (note that "par" is interpreted as usual addition!) and
the identity relation (=) as <. Nothing can be simpler than this.
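As a sketch of how simple this instance is, here is a few lines of code
evaluating multiplicative formulas in it. I take ~a = -a (an assumption on my
part; it gives 1 = ~0 = 0, and any involution compatible with + would do), so
validity means that v(B) equals 0 under every assignment:

```python
import random

# The integer model of the multiplicatives: D = Z, order = equality,
# par = ordinary addition, 0 the identity. Assumption: ~a = -a, so 1 = ~0 = 0.
neg = lambda a: -a
par = lambda a, b: a + b
imp = lambda a, b: par(neg(a), b)   # linear implication A -o B = ~A par B

# B is valid iff v(B) = 1 = 0 for every assignment v; spot-check on
# random assignments to the atoms p and q.
for _ in range(1000):
    p, q = random.randint(-50, 50), random.randint(-50, 50)
    # (p -o p) -o (q -o q) is valid here, although not a theorem
    assert imp(imp(p, p), imp(q, q)) == 0
    # p -o q evaluates to q - p, which need not be 0, so it is not valid
print("checks passed")
```

The first assertion illustrates the incompleteness noted in point 1) above:
the formula holds in this model without being a theorem.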
When the additive operations are added, one should demand that (D,<) be a
lattice, and the additives are then interpreted as the lattice operations. In
order to give an interpretation to the additive constants, this lattice should
also have a maximum and a minimum element. In my paper I proved soundness and
completeness results for the additive fragment(s) relative to this semantics.
Again, for the results I reported in my message I use a special instance: the
integers with the usual +, 0 and <=.
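Continuing the sketch above (still assuming ~a = -a, so 1 = ~0 = 0): with the
usual order <= on Z, the lattice operations are min and max, and B is valid
iff v(B) >= 0 under every assignment. For instance:

```python
import random

# The additive instance: D = Z with the usual order <=, so the lattice
# operations "with" and "plus" are min and max. Assumption: ~a = -a.
neg = lambda a: -a
par = lambda a, b: a + b
imp = lambda a, b: par(neg(a), b)   # A -o B = ~A par B

# B is valid iff v(B) >= 1 = 0 for every assignment v.
for _ in range(1000):
    p, q = random.randint(-50, 50), random.randint(-50, 50)
    assert imp(min(p, q), p) >= 0    # (p with q) -o p is valid
    assert imp(p, max(p, q)) >= 0    # p -o (p plus q) is valid
print("checks passed")
```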
A particularly important case of the lattice structures is when the lattices
are complete. I called these structures Girard's structures, since I showed
that every "phase space" is such a structure and, conversely, every Girard's
structure is isomorphic to some "phase space". (The completeness condition is
important
for giving semantics to the first-order quantifiers).
One of the main results of my paper was that by standard constructions
of lattice theory, every basic structure (in
which D is only a poset) can be embedded in a Girard's structure so that
existing suprema and infima are preserved (the addition of the two points of
infinity to (Z,<,+,0) mentioned by Seely is just a special instance of this).
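For this special instance one only has to decide how + treats the two new
points. One convention that works (my assumption here; the general
construction in my paper fixes this systematically) is to let -infinity
absorb, which keeps + associative, commutative and order-preserving:

```python
from itertools import product

# (Z, <, +, 0) extended with two points of infinity, as a sketch.
# Convention (assumption): -inf + a = -inf for every a, including +inf.
INF = float('inf')

def add(a, b):
    if -INF in (a, b):
        return -INF         # -inf absorbs
    return a + b            # +inf + finite = +inf as usual

neg = lambda a: -a          # ~ swaps the two infinities

# spot-check the monoid and order axioms on a finite sample of D
sample = [-INF, -2.0, -1.0, 0.0, 1.0, 2.0, INF]
for a, b, c in product(sample, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))
    assert add(a, b) == add(b, a)
    if a <= b:
        assert add(a, c) <= add(b, c)
assert all(add(a, 0.0) == a for a in sample)
assert all(neg(neg(a)) == a for a in sample)
print("extended-integer axioms hold on the sample")
```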
From this immediately follow both the strong completeness of Girard's "phase
semantics" and the fact that the various fragments of Linear Logic are
(strongly) conservative extensions of each other. Of course, the last result
follows easily also from cut-elimination, but, as far as I know, this was the
first proof-theoretical result to be established using semantical tools. In my
original message I provided other examples of this nature and asked for more.
(For completeness, let me note that my paper also gave algebraic semantics
to the "exponentials", but this is not relevant to the present results.)
At least to me, this presentation (from 1987) of the general structures and
their special instances is more perspicuous than the categorial one with its
heavy terminology. But maybe I am wrong. So let me return to my
original question and ask for an insight concerning Linear Logic that can be
gained (at least far more easily) by using categorical methods but is hard
to see using the algebraic structures I used (I mean, of course, results that
can be formulated and understood outside the categorial framework).