CS 641 Lecture   -*- Outline -*-

* Programming language for Computable Functions (PCF)

adding recursion to typed lambda calculus

See Plotkin's 1977 paper "LCF Considered as a Programming Language"
for another treatment of this material.

** the language

*** grammar

-------------------------
t ::= num | bool | t -> t

e ::= 0 | succ(e) | pred(e) | true | false | zero?(e)
    | x | \x:t.e | e e | \mu x:t.e
    | if e then e else e
-------------------------

*** typing rules

-------------------------
H |- 0 : num
...
H |- e : num  ==>  H |- succ(e) : num
...
H, x:t |- e : t  ==>  H |- (\mu x:t . e) : t
-------------------------

why not just have H |- succ : num -> num?

why does the rule for \mu differ from the rule for lambda?

e.g., write the factorial function as:
    \mu fact: num -> num . \n: num.
        if zero?(n) then succ(0) else times(n, fact(pred(n)))

*** standard fixedpoint interpretation

D^num = Nat_{\bot}, with
    s : Nat_{\bot} -> Nat_{\bot} = \_n. plus(n,one)    (\_ is the strict lambda)
    p : Nat_{\bot} -> Nat_{\bot} = \_n. minus(n,one)   (0 if the argument is 0)
    z : Nat_{\bot} -> T          = \_n. equal(n,zero)

D^bool = T = {true, false}_{\bot} = Tr_{\bot} (Schmidt)

use the continuous type frame over these base sets for the semantics of
the \-calculus part, i.e., D^{s -> t} = [D^s -> D^t], the set of all
*continuous functions*; A^(s,t) is function application.

semantic clauses for the non-\ part:

    C[[succ(e)]]r  = s(C[[e]]r)
    C[[pred(e)]]r  = p(C[[e]]r)
    C[[zero?(e)]]r = z(C[[e]]r)
    C[[if e then e' else e'']]r
        = (let b = C[[e]]r in b -> C[[e']]r [] C[[e'']]r)
    C[[H |> \mu x:t. e]]r = fix(d |-> C[[H, x:t |> e]](r[d/x]))
        i.e., the least fixed point of the map d |-> C[[e]](r[d/x])

*** operational semantics

call by name; reduction to head-normal form

**** Small step version (Plotkin's)

-------------------------
pred(0) --> 0
pred(succ(e)) --> e
zero?(0) --> true
zero?(succ(e)) --> false
((\x:s.e) e') --> [e'/x]e
(if true then e2 else e3) --> e2
(if false then e2 else e3) --> e3
(\mu x:t. e) --> [(\mu x:t . e)/x]e

e1 --> e1'  ==>  (e1 e2) --> (e1' e2)
e --> e'    ==>  (if e then e2 else e3) --> (if e' then e2 else e3)
e --> e'    ==>  succ(e) --> succ(e')
e --> e'    ==>  pred(e) --> pred(e')
e --> e'    ==>  zero?(e) --> zero?(e')
-------------------------

def: a canonical form is an e that is terminal, i.e., an element of
     T = {e | there is no e' such that e --> e'}

exercise: show pred(succ(succ(0))) -->* succ(0)

**** Big step version (Gunter's)

-------------------------
0 ~ 0
...
pred(0) ~ 0
e ~ succ(c)  ==>  pred(e) ~ c
e ~ c        ==>  succ(e) ~ succ(c)
e ~ 0        ==>  zero?(e) ~ true
e ~ succ(c)  ==>  zero?(e) ~ false
\x:s.e ~ \x:s.e
e ~ \x:s.e'', [e'/x]e'' ~ c  ==>  e(e') ~ c
e1 ~ true,  e2 ~ c  ==>  if e1 then e2 else e3 ~ c
e1 ~ false, e3 ~ c  ==>  if e1 then e2 else e3 ~ c
[(\mu x:t . e)/x]e ~ c  ==>  \mu x:t. e ~ c
-------------------------

prop: the relation ~ is a partial function on closed expressions,
      i.e., each closed e evaluates to at most one canonical form

def: a canonical form of e' is an e such that e' ~ e

e.g., true, false, 0, succ(0), ..., succ^n(0), ..., \x:s.e

not canonical are forms with applications of \-abstractions, and forms
built with pred, zero?, or \mu

exercise: show that pred(succ(succ(0))) ~ succ(0)
          and that succ(0) ~ succ(0)

lemma: if c is a canonical form of type num or bool, then for all
       environments r, C[[c]]r <> \bot

** soundness

theorem: if e ~ c, then C[[e]] = C[[c]].

Pf: by induction on the structure of the proof that e ~ c.

Suppose e = \mu x:s.e' and the last step was the rule instance
    [(\mu x:s . e')/x]e' ~ c  ==>  \mu x:s. e' ~ c.
The induction hypothesis is that C[[ [e/x]e' ]] = C[[c]].

    C[[c]]r = C[[ [e/x]e' ]]r
            = C[[(\x:s.e')(e)]]r
            = C[[e']](r[ C[[e]]r / x ])

but C[[e]]r is a fixed point of \d. C[[e']](r[d/x]) by definition, so

    C[[e']](r[ C[[e]]r / x ]) = C[[e]]r

hence C[[e]]r = C[[c]]r.
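
To make the evaluation relation ~ in the soundness statement concrete, here is
a minimal sketch (not from the text): the big-step rules above transcribed into
a Haskell evaluator on closed terms. The Term datatype, the subst helper, and
the decision to drop type annotations (evaluation never consults them) are all
choices made for this illustration only.

-------------------------
-- A sketch of the big-step relation ~ for PCF, as a Haskell evaluator.
module PCF where

data Term
  = Zero | Succ Term | Pred Term | IsZero Term
  | TTrue | TFalse | If Term Term Term
  | Var String | Lam String Term | App Term Term
  | Mu String Term
  deriving Show

-- [e/x]body; we only substitute closed e, so capture cannot occur.
subst :: String -> Term -> Term -> Term
subst x e body = go body
  where
    go Zero          = Zero
    go (Succ b)      = Succ (go b)
    go (Pred b)      = Pred (go b)
    go (IsZero b)    = IsZero (go b)
    go TTrue         = TTrue
    go TFalse        = TFalse
    go (If b1 b2 b3) = If (go b1) (go b2) (go b3)
    go v@(Var y)     = if y == x then e else v
    go l@(Lam y b)   = if y == x then l else Lam y (go b)
    go (App b1 b2)   = App (go b1) (go b2)
    go m@(Mu y b)    = if y == x then m else Mu y (go b)

-- eval implements e ~ c; it is partial, mirroring the partiality of ~.
-- The error cases cannot arise on well-typed closed terms.
eval :: Term -> Term
eval Zero          = Zero
eval (Succ e)      = Succ (eval e)              -- e ~ c  ==>  succ(e) ~ succ(c)
eval (Pred e)      = case eval e of
                       Zero   -> Zero           -- pred(0) ~ 0
                       Succ c -> c              -- e ~ succ(c) ==> pred(e) ~ c
                       _      -> error "pred of a non-number"
eval (IsZero e)    = case eval e of
                       Zero   -> TTrue
                       Succ _ -> TFalse
                       _      -> error "zero? of a non-number"
eval TTrue         = TTrue
eval TFalse        = TFalse
eval (If e1 e2 e3) = case eval e1 of
                       TTrue  -> eval e2
                       TFalse -> eval e3
                       _      -> error "if on a non-boolean"
eval l@(Lam _ _)   = l                          -- \x:s.e ~ \x:s.e
eval (App e e')    = case eval e of
                       Lam x body -> eval (subst x e' body)   -- call by name
                       _          -> error "application of a non-function"
eval m@(Mu x e)    = eval (subst x m e)         -- unroll the fixed point
eval (Var x)       = error ("free variable " ++ x)
-------------------------

e.g., eval (Pred (Succ (Succ Zero))) returns Succ Zero, matching the exercise
above. Since eval follows the rules for ~ one-for-one, soundness says that on
ground-type terms its answer always has the same denotation as the input term.
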
** computational adequacy

completeness?

want to show (for canonical c):

    (*) if C[[e]] = C[[c]], then e ~ c.

but (*) doesn't hold for functions: \x:num.0 and \x:num.pred(succ(0)) have the
same denotation (the constant-zero function), yet \x:num.pred(succ(0)) is not
~ to \x:num.0, since each \-abstraction is its own canonical form.

solutions:
  - could strengthen ~ by adding more relationships (e.g., a \-theory),
    but this would be unfaithful to real interpreters
  - could look at a weaker property: (*) for terms of ground type
    (what Gunter does)

*** Computable terms

A proof by induction on the structure of expressions fails, because functions
are not of ground type.

A term e of PCF is computable iff
  1. |- e : t, where t is a ground type, and C[[e]] = C[[c]] implies e ~ c,
  2. |- e : s -> t and for all closed, computable |- e' : s,
     e(e') is computable, or
  3. x1:s1, ..., xn:sn |- e : s and for all closed computable terms |- ei : si,
     [e1,...,en/x1,...,xn]e is computable.

Lemma: e is computable iff for all substitutions s whose range consists of
closed computable terms and for all closed computable terms e1, ..., en,
if |- (s(e)) e1 ... en : t where t is a ground type, then
C[[(s(e)) e1 ... en]] = C[[c]] implies (s(e)) e1 ... en ~ c.

Lemma: every term of PCF is computable.

Pf: structural induction on expressions. Let s be a substitution of closed
computable terms; we must show that s(e) is computable for every e.
Suppose e == \mu x:t.e'. If s(e) ~ c, then [s(e)/x](s(e')) ~ c (by the rules
for ~). By the induction hypothesis, s(e') is computable. But what about s(e)?

*** Syntactic approximations

idea: find some syntactic approximation to \mu x:t.e' that doesn't involve
\mu, and use it in the argument above to show that \mu x:t.e' is computable.
(A concrete rendering of these approximations appears in the sketch after
the summary below.)

**** Divergent program at type t

\Omega_t == \mu x:t. x

C[[\Omega_t]]r = \bot_{C[[t]]}

**** Sequence of syntactic approximations

\mu^0 x:t . e'     == \Omega_t
\mu^{n+1} x:t . e' == [(\mu^n x:t . e') / x]e'

C[[\mu x:t . e']] = lub_n C[[\mu^n x:t . e']]

**** Operational theory of approximations

H |- e <~ e' : s means that e is more partial than e'; i.e., if e terminates,
then so does e': e ~ c implies (e' ~ c' and c <~ c').

-------------------------------
H |- 0 <~ 0 : num
...
H |- \Omega_s <~ e : s
H, x:s |- e <~ e' : s  ==>  H |- (\mu^n x:s . e) <~ \mu x:s . e' : s
-------------------------------

corollary: if t is a ground type, |- e <~ e' : t, and e ~ c, then e' ~ c.

The rest of the proof is found in the text.

** Summary

The operational semantics is sound and computationally adequate with respect
to the standard fixpoint semantics.

Remaining questions:
  - fixedpoint theory of approximations
  - operational equivalence (is ~ adequate?)
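
As promised in the syntactic-approximations section, here is a concrete
rendering of \Omega and the unrollings \mu^n, continuing the Haskell sketch
from the soundness section (the Term type, subst, and eval are the same
assumed definitions; omega, approx, dBody, and double are names made up for
this illustration).

-------------------------
-- The divergent term Omega: eval omega never terminates,
-- matching C[[Omega_t]]r = bottom.
omega :: Term
omega = Mu "x" (Var "x")

-- Syntactic approximations:
--   mu^0     x. e = Omega
--   mu^{n+1} x. e = [mu^n x. e / x] e
approx :: Int -> String -> Term -> Term
approx 0 _ _ = omega
approx n x e = subst x (approx (n - 1) x e) e

-- Body of a doubling function (definable in pure PCF, unlike factorial,
-- which uses times): d n = if zero?(n) then 0 else succ(succ(d(pred(n))))
dBody :: Term
dBody = Lam "n" (If (IsZero (Var "n"))
                    Zero
                    (Succ (Succ (App (Var "d") (Pred (Var "n"))))))

double :: Term
double = Mu "d" dBody

-- eval (App double               (Succ (Succ Zero)))  ~>  succ^4(0)
-- eval (App (approx 3 "d" dBody) (Succ (Succ Zero)))  ~>  succ^4(0)
-- eval (App (approx 2 "d" dBody) (Succ (Succ Zero)))  diverges, like omega
-------------------------

On the argument succ(succ(0)), three unrollings suffice and agree with \mu
itself, while two unrollings diverge just as \Omega does; this is the
operational content of \mu^n x:t . e <~ \mu x:t . e.
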