Com S 541 Lecture                                   -*- Outline -*-

* Declarative Programming Techniques (3.2-3.5)

** iterative computation (3.2)

Q: What is an iterative computation?

*** general schema (3.2.1, 3.3.3, 3.4.3)

------------------------------------------
ITERATIVE COMPUTATION (3.2)

Is this iterative?

   fun {FromTo X Y}
      if X > Y then nil
      else X | {FromTo X+1 Y}
      end
   end

How about this?

   fun {SumFromTo I J}
      if I > J then 0
      else I + {SumFromTo I+1 J}
      end
   end
------------------------------------------
        ... yes (but only because we can move the list construction
            in front of the recursive call)
        ... no (we can't move around (strict) function calls, like +);
            it has pending computations

        define a "pending computation" as:
           a procedure/function call that must be executed
           after a recursive call.

        Note: data structure constructions can often be permuted with
        calls, so they don't generally make something non-iterative.

        A computation is not iterative if it has pending computations.

        Draw the stack of execution if necessary.

Q: How to make it iterative?

        To make it iterative, design a sequence of state transformations
        (see section 3.4.3):

        - introduce the state as an extra argument (or arguments):

          fun {SumFromTo I J}
             fun {SumFromToIter I J R}
             end
          in {SumFromToIter I J __}
          end

        - design the sequence of transformations of the arguments
          (including the state)

          e.g., for the call {SumFromTo 1 3}

           I: 1    1    1    1
           J: 3    2    1    0
           R: 0 -> 3 -> 5 -> 6

        - write code to make that happen

          Where to get the initial value?  What's one step?  How to stop?

          fun {SumFromTo I J}
             fun {SumFromToIter I J R}
                if I > J then R
                else {SumFromToIter I J-1 R+J}
                end
             end
          in {SumFromToIter I J 0}
          end

        - optionally, avoid passing parts that don't change (like I)

          fun {SumFromTo I J}
             fun {SumFromToIter J R}
                if I > J then R
                else {SumFromToIter J-1 R+J}
                end
             end
          in {SumFromToIter J 0}
          end

Q: Does this pattern work for FromTo?

------------------------------------------
FOR YOU TO DO

Make the following iterative:

   fun {Product Ls}
      case Ls of E|Es then E*{Product Es}
      else 1
      end
   end
------------------------------------------

*** When to use iteration

------------------------------------------
WHEN TO USE ITERATION

0. When you need efficiency

1. When the data doesn't maintain "place"

2. When you need to return directly to the caller
------------------------------------------
Q: What are some examples of this that we have seen?

        It's not always best to try to do this,
        e.g., working with binary trees on an exam.

*** Control Abstraction (3.2.4)

------------------------------------------
ABSTRACTION OF ITERATION (3.2.4)

Consider

   fun {SumFromToIter JnR}
      J#R=JnR
   in
      if I > J then R
      else {SumFromToIter J-1#R+J}
      end
   end

   fun {SqrtIter Guess}
      if {GoodEnough Guess} then Guess
      else {SqrtIter {Improve Guess}}
      end
   end

What do they have in common?

How do they differ?

Can we make the differences arguments?
------------------------------------------
        ... the general outline: returning part of the state (R or Guess),
            passing a new value of the state in the recursive call
        ... the test for being done (I > J vs. GoodEnough),
            how to extract the result from the state,
            how to transform the state
        ... yes

   % compare to page 123
   fun {Iterate IsDone Extract Transform S}
      if {IsDone S} then {Extract S}
      else {Iterate IsDone Extract Transform {Transform S}}
      end
   end

   % avoiding passing unchanging arguments
   fun {Iterate2 IsDone Extract Transform S}
      fun {Loop S}
         if {IsDone S} then {Extract S}
         else {Loop {Transform S}}
         end
      end
   in {Loop S}
   end

   % curried
   fun {Iterate2c IsDone Extract Transform}
      fun {Loop S}
         if {IsDone S} then {Extract S}
         else {Loop {Transform S}}
         end
      end
   in Loop
   end

How would you write SqrtIter using Iterate?  SumFromToIter?
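        One possible answer, as a sketch (the helper names IsDone,
        Extract, and Transform are local choices made here, and
        GoodEnough/Improve are assumed to exist as in the slide above):

   % SumFromTo via Iterate: the state is the pair J#R
   fun {SumFromTo2 I J}
      fun {IsDone S} Jv#_ = S in Jv < I end
      fun {Extract S} _#R = S in R end
      fun {Transform S} Jv#R = S in (Jv-1)#(R+Jv) end
   in
      {Iterate IsDone Extract Transform J#0}
   end

   % Square root via Iterate: the state is just the current guess,
   % so Extract is the identity function
   fun {SqrtIter2 Guess}
      {Iterate GoodEnough fun {$ G} G end Improve Guess}
   end

        For {SumFromTo2 1 3} the state goes 3#0 -> 2#3 -> 1#5 -> 0#6,
        matching the table of transformations above.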
** Data-driven recursion (3.4)

*** Type notation (3.4.1)

------------------------------------------
TYPE NOTATION (GRAMMARS)

<Color>   ::= red | blue | green

<Nat>     ::= zero | succ( <Nat> )

<List T>  ::= nil | T '|' <List T>

<BTree T> ::= leaf
            | tree( key: <Literal> value: T
                    left: <BTree T> right: <BTree T> )
------------------------------------------
Q: How does the type definition resemble a grammar?

*** Natural Numbers

------------------------------------------
RECURSIVE NATURAL NUMBERS

<Nat> ::= zero | succ( <Nat> )

% representation creation
fun {FromInteger I}
   if I =< 0 then zero
   else succ({FromInteger I-1})
   end
end

What is
   {FromInteger 0}
   {FromInteger 2}
------------------------------------------

------------------------------------------
NATURAL NUMBER EXAMPLES

<Nat> ::= zero | succ( <Nat> )

To define a function F: <fun {$ <Nat>}: T> recursively, write

fun {F N}
   case N
   of zero then ...      % basis
   [] succ(P) then ...   % inductive case
   end
end

How to write Plus: <fun {$ <Nat> <Nat>}: <Nat>>?
------------------------------------------
        develop these using examples:

        base case:       {ToInteger zero} = 0
        inductive case:  want  {ToInteger succ(succ(succ(zero)))} = 3
                         given {ToInteger succ(succ(zero))} = 2
                         how do we get 3 from 2?

        generalizing from this example,
           {ToInteger succ(P)} = 1 + {ToInteger P}

        so we get...

        fun {Plus N1 N2}
           case N1
           of zero then N2
           [] succ(P) then succ({Plus P N2})
           end
        end

Q: How does the structure of the program resemble the type definition?

        note that the recursion occurs where the type definition
        is recursive

------------------------------------------
FOR YOU TO DO

Write Mult: <fun {$ <Nat> <Nat>}: <Nat>>
that multiplies two <Nat>s.

Write Equals: <fun {$ <Nat> <Nat>}: <Bool>>
without using Oz's == on its arguments.
------------------------------------------

*** Working with lists (3.4.2.6)

------------------------------------------
RECURSION OVER FLAT LISTS

<List T> ::= nil | T '|' <List T>

Write Add1Each: <fun {$ <List Int>}: <List Int>> such that

   {Add1Each nil} = nil
   {Add1Each 1|3|5|nil} = 2|4|6|nil
   {Add1Each 3|5|nil} = 4|6|nil
------------------------------------------
        how do we get [2 4 6] from 1 and [4 6]?

        now generalize

------------------------------------------
FOR YOU TO DO

Write DeleteFirst: <fun {$ T <List T>}: <List T>> such that
{DeleteFirst Sought Ls} returns a new list that is like Ls,
but without the first occurrence of Sought in Ls (if any).

   {Test {DeleteFirst b nil} nil}
   {Test {DeleteFirst b a|b|c|nil} a|c|nil}
------------------------------------------
        Other potential examples if more are needed:
           Map
           Subst
           Reverse
           Lookup in association lists

*** structure of data determines structure of code

**** non-empty lists

------------------------------------------
GENERALIZING HOW TO WRITE RECURSIONS

<NEList T> ::= sing(T) | cons(T <NEList T>)

Write MaxNEL: <fun {$ <NEList T>}: T> such that

   {MaxNEL sing(3)} = 3
   {MaxNEL cons(5 sing(3))} = 5
   {MaxNEL cons(3 cons(5 sing(3)))} = 5

Write Nth: <fun {$ <NEList T> <Nat>}: T> such that

   {Nth sing(3) zero} = 3
   {Nth cons(8 sing(3)) succ(zero)} = 3
   {Nth cons(0 cons(1 cons(2 sing(3)))) {FromInteger 2}} = 2
------------------------------------------

**** More programming-language-like grammars

------------------------------------------
RECURSION OVER GRAMMARS

<Exp> ::= boolLit( <Bool> )
        | intLit( <Int> )
        | subExp( <Exp> <Exp> )
        | equalExp( <Exp> <Exp> )
        | ifExp( <Exp> <Exp> <Exp> )

Write the following

   Eval: <fun {$ <Exp>}: <Exp>>

such that

   {Eval subExp(intLit(5) intLit(4))} = intLit(1)
------------------------------------------
Q: What are the base cases?

Q: Where should there be a recursion?

Q: Examples for each recursive case?

        Moral: in general you can think of all recursions as
        recursions over grammars.

*** Difference Lists (3.4.4)

**** Basics of Difference Lists

A difference list is an "incomplete data structure",
in that it usually contains unbound (logical) variables.
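        For example (a small illustration added here; the variable
        names are arbitrary):

   declare L X in
   L = 1|2|3|X   % X is unbound, so the list L is still "incomplete"
   X = nil       % binding the tail later completes L to [1 2 3]

        A difference list packages such an incomplete list together
        with its unbound tail, as the next slide shows.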
------------------------------------------
DIFFERENCE LISTS (3.4.4)

Idea: L1 # L2 represents the list L1 minus the elements of L2

Example: (1|2|3|X) # X means (1|2|3|nil)

Main advantage:
   lists of the form (a|b|...|X) # X can be appended in constant time

Example: To append (1|2|3|X) # X and (4|5|Y) # Y,
         bind X to (4|5|Y) to get (1|2|3|4|5|Y) # Y
------------------------------------------
        Think of a difference list as a subtraction problem
        that hasn't been solved yet.

Q: Why not just use (1|2|3|X) instead of (1|2|3|X) # X ?

        because then we couldn't get to the tail in constant time

------------------------------------------
FOR YOU TO DO

Write AppendD: <fun {$ <DiffList T> <DiffList T>}: <DiffList T>>

Examples:
   {AppendD (2|3|X)#X (3|5|Y)#Y} = (2|3|3|5|Y)#Y
   {AppendD X#X (3|5|Y)#Y} = (3|5|Y)#Y
------------------------------------------
        Solution idea: (X-Y) + (Y-Z) = X-Z

        ... see page 142

        fun {AppendD D1 D2}
           S1#V1 = D1
           S2#V2 = D2
        in
           V1=S2
           S1#V2
        end

        Conversions:

        fun {ListToDiffList Ls}
           case Ls
           of E|Es then
              Es1#Es2 = {ListToDiffList Es}
           in
              (E|Es1)#Es2
           else X in X#X
           end
        end

        fun {DiffListToList DL}
           LS#V = DL
        in
           % Can't pattern match against LS unless it's determined,
           % as we would suspend if we pattern matched against an
           % unbound variable.
           if {IsDet LS} then
              case LS
              of E|Es then E|{DiffListToList Es#V}
              else nil
              end
           else nil
           end
        end

Q: What are the limitations of difference lists?

        We can only unify the tail variable once,
        so a difference list can only be appended to once.

**** Applications

        See the Flatten discussion on pages 143-145,
        where difference lists are used to make things faster.

        See the list-reversal discussion on page 145,
        for use in program derivation.

**** Queues and Performance (3.4.5)

Q: What's the efficiency issue in implementing FIFO queues
   in the declarative model?

        How to achieve constant-time performance for insert and delete.

        Differences between the strict functional model
        and the declarative model:

           strict functional:   can get constant amortized time
           dataflow extension:  can get (worst-case) constant time

Q: What's the difference between ephemeral and persistent data?

        Ephemeral data can only be used as input to one operation,
        not repeatedly used as input to many operations.
        Persistent data doesn't have this limit,
        and can be used concurrently.

Q: How could we get amortized constant-time queues?

        Keep 2 lists, to-remove and added, and move the added list
        to the to-remove list (reversing it) when the to-remove list
        is empty.

Q: How could we get constant-time queues with dataflow variables?

        Use a difference list to add elements.

Q: Can we delete elements from a queue that aren't present?

        Yes; we get an unbound variable, so a consumer can block
        until the element shows up...

**** Trees (3.4.6-7)

        Leave this for them to read and homework.

**** Parsing (3.4.8)

        Leave this for them to read and homework.

** Time and space efficiency (3.5)

        This is known as "pragmatics".

Q: What's the recommended general approach for calculating resource usage?

        Translate to the kernel language, get a set of equations
        for the time, and solve them.

*** Time (3.5.1)

------------------------------------------
EXECUTION TIMES OF KERNEL INSTRUCTIONS

S                        T(S)

skip                     k
X = Y                    k
X = V                    k
S1 S2                    T(S1) + T(S2)
local X in S end         k + T(S)
proc {$ ...} S end       k
if X then S1             k + max(T(S1), T(S2))
   else S2 end
case X of P then S1      k + max(T(S1), T(S2))
   else S2 end
{F Y1 ... Yn}            T_F(size_F(I_F({Y1,...,Yn})))

where I_F is the subset of used arguments,
size_F is a measure function,
and T_F is a function specific to F
------------------------------------------
Q: What's the time needed for skip?

        They say some constant (k); why?

Q: How can unification be constant time?

        It's not, as the book notes; in the worst case it depends
        on the size of the data being unified.
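        As a worked illustration of "translate to kernel, get
        equations, solve" (the constants c1, c2 below are unspecified;
        this example is added here, not from the book), consider the
        recursive SumFromTo from the start of these notes.

        Let n = J - I + 1, the number of values still to add (n >= 0):

           T(0) = c1                    % the test I > J, plus returning 0
           T(n) = c2 + T(n-1)  (n > 0)  % test, addition, one recursive call

        Solving the recurrence gives T(n) = c1 + c2*n, i.e., time
        linear in n.  The iterative SumFromToIter satisfies the same
        equations, so it is also linear in time; the difference shows
        up in stack space, which is constant for the iterative version
        but linear in n for the recursive one.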
Q: What's the time to do closure formation?

        It could be constant, if the closure code contained a
        precomputed environment skeleton giving the offsets in the
        parent environment.  But the implementation does not do this,
        so it's linear.

        Why do we need to store the smallest possible environment
        with a closure?  To save space; otherwise we could just use
        the whole environment.

Q: How can pattern matching in case be constant time?

*** Memory usage (3.5.2)

Q: What needs to be measured for space?

        - the high-water mark of used memory
        - the rate of memory use
          (proportional to how hard the gc has to work)

Q: Which is more important?

        The high-water mark, since that determines where the program
        can run.

------------------------------------------
MEMORY CONSUMPTION OF KERNEL INSTRUCTIONS

S                        M(S), in words

skip                     0
X = Y                    0
X = V                    memsize(V)
S1 S2                    M(S1) + M(S2)
local X in S end         1 + M(S)
proc {$ ...} S end       k + n   (see below)
if X then S1             max(M(S1), M(S2))
   else S2 end
case X of P then S1      max(M(S1), M(S2))
   else S2 end
{F Y1 ... Yn}            M_F(size_F(I_F({Y1,...,Yn})))

where I_F is the subset of used arguments,
size_F is a measure function,
and M_F is a function specific to F
------------------------------------------
        memsize(i: Int)   = let bits = log_2(abs(i)) in
                              if bits < 28 then 0 else ceiling(bits/32)
        memsize(f: Float) = 2
        memsize(p: ListPair) = 2
        memsize(t: L(F_1: V_1, ..., F_N: V_N)) = 1 + N

Q: Does the value always need to be completely created in X=V?

        No, the VM can tell from the value's form what instructions
        will be needed to unify it.

Q: What's the size of a closure?

        They say k + n, where n is the number of free identifiers
        in the body and k is a constant.

*** Amortized complexity (3.5.3)

Q: What's amortized complexity?

Q: What are the techniques used to compute it?

*** Does performance still matter? (3.5.4)

        Sure, but see the text...
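        As a concrete follow-up to the amortized-complexity questions
        and the two-list queue idea from 3.4.5, here is a sketch of an
        amortized constant-time declarative queue (close in spirit to
        the book's version, but the representation q(Front Back) and
        the helpers below are spelled out here as a sketch; see the
        text for the exact code):

   % A queue is q(Front Back): elements are removed from Front
   % and added to Back.
   fun {NewQueue} q(nil nil) end

   % Maintain the invariant: Front is empty only if the queue is empty.
   fun {Check Q}
      case Q of q(nil Back) then q({Reverse Back} nil)
      else Q
      end
   end

   fun {Insert Q X}
      case Q of q(Front Back) then {Check q(Front X|Back)} end
   end

   % Binds X to the removed element and returns the new queue.
   fun {Delete Q X}
      case Q of q(F|Front Back) then X=F {Check q(Front Back)} end
   end

        Each element is moved from Back to Front (by Reverse) at most
        once, so any sequence of n operations does O(n) total work:
        amortized constant time per operation, even though a single
        Delete may take time proportional to the length of Back.
        The techniques of 3.5.3 (e.g., the banker's method) make this
        argument precise.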