
Why Prolog and CBV?



Date: Mon, 18 Nov 91 09:36:46 CST
Cc: wand@cheech.lcs.mit.edu

  It gets more than confusing:  it gets non-CR.  Consider

  ((begin (set! x 3) (lambda (u) x))
   (begin (set! x 4) 5))
  
  ...

No, this is not true. What is lost is the apparent freedom to evaluate
from left to right or from right to left. But notice that this is lost
for all practical programming languages because they tend to have
ERROR elements. For example, 
   
   ((lambda (x) 13) (/ 1 0)) 

yields ERROR when evaluated from right to left and 13 when evaluated
from left to right. (As a matter of fact, one could make this argument
for bottom, too (consider ((lambda (x) 13) bottom)), except that
people prefer visible results.)
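
To make the loss concrete, here is a sketch of the two evaluations of
the quoted example, assuming x names a mutable global cell that is
dereferenced when the procedure body finally runs:

   ((begin (set! x 3) (lambda (u) x))
    (begin (set! x 4) 5))

   ; left to right: the operator sets x to 3, the operand then sets x
   ;                to 4, and the application returns the current x, 4
   ; right to left: the operand sets x to 4, the operator then sets x
   ;                to 3, and the application returns 3

Both orders produce a value; they simply disagree on which one.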

The point is that the addition of a facility to a language requires
the addition of reduction rules that specify the meaning.  CRness is a
(technical) property of this reduction system; even if it didn't hold
you could still define a simple, reasonable programming language. BUT,
in this case it actually does hold. Crank (an MS student) and I had a
paper at POPL 91 that showed how to construct simple extensions of
lambda calculi with assignments for almost every parameter-passing
technique. For the interested reader, I enclose the CBN case at the
end of the message.  

Wadler is correct in maintaining that in theory, cbn vs cbv makes no
difference for predicting the effect of f(e) where e contains
assignments. However, when we compared the reduction rules of the
various calculi, we concluded that CBV w/ ref cells (a la ML) gives
rise to the simplest calculus and has the nicest relationship to the
operational theory. BUT, this was a feeling, not a well-formalized
conclusion. CBN w/ assignment still satisfies FULL BETA (which
verifies old Algol 60 folklore), but that is the only good thing we
could say about the calculus. I would again agree w/ Wadler here that
call-by-need is the most difficult to reason about: its rules are some
of the most complex I have seen. (This result is only in Crank's
thesis, not the paper.)
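
To sketch why full beta survives in the CBN setting (this is the
standard intuition, not an excerpt from the paper), consider an
application whose operand contains an assignment:

   ((lambda (u) (+ u u)) (begin (set! y (+ y 1)) y))

   ; full beta substitutes the unevaluated operand for u, so the
   ; assignment to y happens once per use of u, which is exactly
   ; what a call-by-name evaluator does anyway
   ; under CBV, beta is restricted to value operands: the assignment
   ; happens exactly once, before the call, so full beta would change
   ; the observable behavior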

In the upcoming POPL 92, Corky Cartwright and I have a paper on SPCF,
a sequential extension of PCF w/ a downward version of call/cc.
Contrary to Wadler's claims, this language has a fully functional
direct model (no continuation-passing style, no tricks) and moreover
the model is fully abstract. Its equational theory is a simple
extension of the typed lambda calculus that fixes (non-trivial)
equational specs for additional constants. I found programming in this
language as natural as programming in its call-by-value counterpart,
but again this is a feeling. In particular, the presence of downward
call/cc is quite helpful in many cases. My personal opinion is that
continuations (or other control operations) mix with call-by-name as
easily as they mix w/ call-by-value. In the case of SPCF, it is also
possible to define a simple call-by-NEED version, again contrary to
Wadler's opinion.
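
For readers who have not programmed with an escape-only call/cc, the
typical pattern (written here in plain Scheme rather than SPCF) is an
early exit whose continuation is invoked only while the call/cc
application is still active:

   (define (product lst)
     (call/cc
       (lambda (exit)                       ; exit is used only "downward"
         (let loop ((l lst))
           (cond ((null? l) 1)
                 ((zero? (car l)) (exit 0)) ; abort: the answer is 0
                 (else (* (car l) (loop (cdr l)))))))))

The captured continuation never escapes upward, which is precisely the
restriction a downward call/cc imposes.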

Regards -- Matthias Felleisen


The language: 
  M ::= constants | x | (\lambda x.M) | (set! M M) | (M M) | (letrec ( T ) M)
  T ::= (x,M) ... (x,M) 
	  (finite functions mapping variables to expressions,
	   represented via finite sequences, called declarations;
           T @ T' "appends" two finite sequences, and hygienically
	   renames variables when/where necessary;
	   T @ (x,M) same as above)
  V ::= constants | (\lambda x.M)
 
Eval Contexts
  E ::= [ ] | (f E) | (E e) | (set! x E) | (set! E e)

The Axioms:
  (f V)                              --> delta(f,V)                  (delta)
  ((\lambda x.M) M')                 --> M[x / M']                   (beta)
  (letrec (T @ (x,M)) E[x])          --> (letrec (T @ (x,M)) E[M])   (deref)
					  if E =/= E'[(set! [ ] M')] 
  (letrec (T @ (x,M)) E[(set! x V)]) --> (letrec (T @ (x,V)) E[V])   (sigma)
  (letrec ( T ) E[(letrec (T') M)])  --> (letrec (T @ T') E[M])      (rho_u)

Theorem: The above axiom system is CR. 
	(See Crank and Felleisen, POPL 1991.)

Comments:
  delta:  functional constants, like +, -, etc.
  beta:   full beta, no restrictions!
  deref:  when you may dereference an identifier bound in a letrec
  sigma:  when you may assign to an identifier bound in a letrec
  rho_u:  sometimes you must merge declarations to advance the computation
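
As a small worked example (my own unfolding of the rules above, not an
excerpt from the paper), take a letrec with a single declaration:

  (letrec ((x,0)) ((\lambda u.x) (set! x 5)))
    --> (letrec ((x,0)) x)            by full beta: u is not free in the
                                      body, so the unevaluated assignment
                                      is discarded
    --> (letrec ((x,0)) 0)            by deref

  (letrec ((x,0)) ((\lambda u.u) (set! x 5)))
    --> (letrec ((x,0)) (set! x 5))   by full beta
    --> (letrec ((x,5)) 5)            by sigma

Note that, reading (f E) as covering only functional constants, the
operand of a \lambda-application is not in an evaluation context, so
sigma cannot fire on (set! x 5) before beta in the first example.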