COP 4020 Lecture  -*- Outline -*-

* Basic thread programming techniques (4.2)

** creating threads (skip, 4.2.1)

------------------------------------------
CREATING THREADS

use
   thread <Statement> end
also expression sugar
   thread <Expression> end
------------------------------------------

** Browser and threads (skip, 4.2.2)

examples of how the browser works

** dataflow computation with threads (4.2.3)

------------------------------------------
DATAFLOW COMPUTATION (4.2.3)

declare X0 X1 in
thread
   Y0 Y1 in
   {Browse [Y0 Y1]}
   Y0 = X0+1
   Y1 = X1+Y0
   {Browse completed}
end
{Browse [X0 X1]}
------------------------------------------

See dataflow.oz in this directory

Q: What does this show?
Q: What happens if we then feed X0=0 ?  Then X1=1 ?

Q: How could we parallelize the Map function? (See MapP.oz)

If you try running this you might want to use the "Oz Panel"
(from "Open Panel" in the Mozart environment)
that shows the threads running

Q: How could we parallelize the generation of the numbers from 1 to N?

------------------------------------------
CREATING AN EXPONENTIAL NUMBER OF THREADS

fun {Fib X}
   if X =< 2 then 1
   else thread {Fib X-1} end + {Fib X-2}
   end
end
------------------------------------------

See Fib.oz in this directory

** Thread details (skip)

*** thread scheduling (4.2.4)

round-robin scheduling; Mozart uses a hardware timer

3 priority levels: high, medium, low
3 queues; processor time is by default divided in the ratio 100:10:1,
so high-priority threads get about 90% of the CPU,
medium about 9%, and low about 1%

Q: Are child threads given the same priority as the parent (creating) thread?
   Yes

*** Cooperative and Competitive Concurrency (4.2.5)

Q: Are threads supposed to work together cooperatively or compete?
   work together

Q: Are threads supposed to trust each other?
   Yes, since they share an address space (the store)

   Use processes for competition;
   supported in Mozart by the distributed computation model
   and the Remote module

*** Thread operations (4.2.6)

Thread module's operations:
   this (name), state, suspend, resume, preempt, terminate
To set priorities use setPriority
See table 4.2 for details

** Streams (4.3)

"The most useful technique ... in the declarative concurrent model"

------------------------------------------
STREAMS (4.3)

def: a *stream* is ...

Grammar:

   <Stream> ::=

   <InfStream> ::=
------------------------------------------
... a potentially unbounded list.
Think of the elements as "messages".

Q: What's the grammar for <Stream>?

   <Stream> ::= T '|' <Stream> | nil

   The nil at the end is optional; if the stream is actually unbounded
   it won't have that case. The program will have a convention for
   whether it uses nil or not.

   We'll say "infinite stream" for the grammar without nil:

   <InfStream> ::= T '|' <InfStream>

   Until we get laziness, we'll have to use nil to keep things finite...
   (unless we live with partial termination)

Q: How many cases are there in this grammar?
Q: Is it recursive?
Q: What would a program that processes it look like?

Q: How would you implement a stream in Oz?
   Use a list whose tail is an undetermined dataflow variable

Q: How would you implement a stream in Java?
   As an Iterator object (or an Enumeration object)
   Why is that similar?

*** Producer/Consumer (4.3.1)

------------------------------------------
PRODUCER/CONSUMER PIPELINE WITH STREAMS

 |-----------| 3|5|7|9|...  |-----------|
 | Consumer  |<-------------| Producer  |
 |-----------|              |-----------|

Example:

 |-----------|              |-----------|
 |  Player   |<-------------|MP3 Encoder|
 |-----------|              |-----------|

Idea:
------------------------------------------
Note this diagram is backwards from the book;
the data flows right to left!

... The Producer creates stream elements.
The Consumer waits for the next item and then processes it.

Q: What kinds of things would the consumer do?
   searching, filtering, mapping, ...
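For comparison, the producer/consumer pipeline above can be sketched in Python using a thread per stage and a queue as the stream (a rough analogue only; Python has no dataflow variables, so a `None` sentinel plays the role of nil):

```python
import threading
import queue

def producer(buf, items):
    for x in items:
        buf.put(x)        # send the next "message" down the stream
    buf.put(None)         # end-of-stream marker, playing the role of nil

def consumer(buf, out):
    # wait for the next element, then process it (here: a simple map)
    while (x := buf.get()) is not None:
        out.append(x * 10)

buf = queue.Queue()
out = []
p = threading.Thread(target=producer, args=(buf, [3, 5, 7, 9]))
c = threading.Thread(target=consumer, args=(buf, out))
p.start(); c.start()
p.join(); c.join()
print(out)  # [30, 50, 70, 90]
```

As in Oz, the consumer blocks until the producer has put the next element; unlike an Oz stream, the queue is consumed destructively.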
**** filtering

------------------------------------------
FILTERING

 |-------------| 1|2|3|4|...  |-----------|
 | Filter IsOdd|<-------------| Count 20  |
 |-------------|              |-----------|

declare
% Return the list [1 2 3 ... N]
fun {Count N}
   fun {Help I}
      if I == N then I|nil else I|{Help I+1} end
   end
in {Help 1} end

{Browse {Filter thread {Count 20} end IsOdd}}
------------------------------------------

See Count.oz and CountTest.oz

**** extended example (cf. 4.3.2)

------------------------------------------
HAILSTONE (3x+1) EXAMPLE

declare
fun {Hailstone N}
   if {IsOdd N} then (3*N+1) div 2 else N div 2 end
end

fun {IterHailstone N}
   if N == 1 then 1|nil
   else N|{IterHailstone {Hailstone N}}
   end
end
------------------------------------------

See Hailstone.oz

Try:
   {Browse {IterHailstone 26}}
   {Browse {IterHailstone 27}}

Q: Where's the undetermined variable in IterHailstone?
   Look at a partial desugaring:

   % This is partially desugared to show the undetermined variable (Z)
   proc {IterHailstone N ?R}
      if N == 1 then R=1|nil
      else local Z in
              R=N|Z
              {IterHailstone {Hailstone N} Z}
           end
      end
   end

   Note that Z is put in the return value (R) before the recursive call.
   This allows another thread (imagine it waiting for R to be determined)
   to run a bit farther (in particular, to use N).

Q: How would you find the maximum reached by a number?
   use FoldL with Max

Q: How would you find, for each number, what its maximum is?
   set up a stream

------------------------------------------
PEAK ARGUMENTS

def: Suppose the arguments and results of a function F
     are totally ordered. Then N is a *peak argument* for F
     iff {F M} < {F N} whenever M < N.

PROBLEM

Consider
   fun {HailstoneMax N} {FoldL {IterHailstone N} Max 0} end

Find all peak arguments for HailstoneMax from 1 to N.
------------------------------------------

Show the architecture:

      |-------|          |--------------|          |-------|
 <--- | Peaks | <------- | Graph        | <------- | Count |
      |-------|          | HailstoneMax |          |-------|
                         |--------------|

Work this on the computer (See the file HailstonePeaks.oz)

\insert 'Hailstone.oz'
\insert 'HailstoneMax.oz'
declare
fun {Count N}
   fun {Help I}
      if I == N then I | nil else I | {Help I+1} end
   end
in {Help 1} end

fun {Graph List F}
   {Map List fun {$ E} E#{F E} end}
end

fun {Peaks List}
   fun {PeakIter List MaxSoFar}
      case List
      of nil then nil
      [] (Arg#Val)|Ps then
         if Val > MaxSoFar
         then (Arg#Val) | {PeakIter Ps Val}
         else {PeakIter Ps MaxSoFar}
         end
      end
   end
in {PeakIter List 0} end

fun {GraphMaxPeaks SearchLimit}
   Numbers InStream OutStream
in
   thread Numbers = {Count SearchLimit} end
   thread InStream = {Graph Numbers HailstoneMax} end
   thread OutStream = {Peaks InStream} end
   OutStream
end

Q: What's the producer and consumer in this setup?
   The producer is Count, Graph is a filter/transformer,
   and Peaks is the consumer (or another transformer).

Q: Why is this incremental?
   Because Count, Graph, and Peaks each produce a stream:
   they bind their result to a list that becomes more defined
   as they go along (see above), and each runs in its own thread.

*** Managing resources and improving throughput (4.3.3)

Q: What happens if the producer or consumer runs faster than the other?
   Extra elements pile up and use up system resources.

   How to fix it? Flow control.
   This is a key idea in concurrent programming
   that has several manifestations.

**** flow control with demand-driven concurrency (4.3.3.1)

supply-driven: producer generates eagerly
demand-driven: consumer asks for elements

------------------------------------------
DEMAND-DRIVEN CONCURRENCY

Consumer signals producer
by binding a cons (X|Xs) to a stream
   X and Xs are unbound
   Xs will be the next signal

Example (p.
262)

%% The following is based on DGenerate from page 262 of CTM
%% It puts its outputs on X|Xr
proc {DDCount N X|Xr}
   X=N
   {DDCount N+1 Xr}
end

%% DDTake returns Limit items from Pairs, where it puts its demands
fun {DDTake ?Pairs Limit}
   if 0 < Limit
   then P|Pr = Pairs in
        P | {DDTake Pr Limit-1}
   else nil
   end
end

%% Return a list of the numbers from 1 to NumberSought
fun {UpTo NumberSought}
   Numbers InStream OutStream
in
   thread {DDCount 1 Numbers} end
   thread OutStream = {DDTake Numbers NumberSought} end
   OutStream
end
------------------------------------------

See HailstoneDD.oz

Q: Which is the producer? Which is the consumer?

Q: In DDCount, how is Xr used?
   DDCount gets demands (cons cells with unbound variables) from it,
   and puts outputs on it.

Q: In DDTake, how is Pairs used?

Q: What does UpTo do?

Q: How would you write a demand-driven version of Graph? Peaks?

%% DDGraph puts demands on List and puts outputs on P|Pr
proc {DDGraph ?List P|Pr F}
   Arg|Args = List
in
   P=Arg#{F Arg}
   {DDGraph Args Pr F}
end

%% DDPeaks puts demands on List and puts outputs on P|Pr
proc {DDPeaks ?List P|Pr MaxSoFar}
   (Arg#Val)|Lr = List
in
   if Val > MaxSoFar
   then P = (Arg#Val)
        {DDPeaks Lr Pr Val}
   else {DDPeaks Lr P|Pr MaxSoFar}
   end
end

fun {DDGraphMaxPeaks NumberSought}
   Numbers InStream PeakStream OutStream
in
   thread {DDCount 1 Numbers} end
   thread {DDGraph Numbers InStream HailstoneMax} end
   thread {DDPeaks InStream PeakStream 0} end
   thread OutStream = {DDTake PeakStream NumberSought} end
   OutStream
end

Q: What are the advantages of using demand-driven streams
   over the first version?
   Better flow control

**** Flow control with a bounded buffer (4.3.3.2)

Q: What are the efficiency problems of demand-driven execution?
   Throughput suffers

Q: How can we avoid overusing resources without reducing throughput?
   Use a bounded buffer...
(see fig 4.14, below)

Combines lazy and eager features

------------------------------------------
DEMAND-DRIVEN BOUNDED BUFFER (FIG 4.14)

declare
%% Buffer puts its demands on Xs
%% and outputs results to Ys
proc {BufferDD N ?Xs Ys}
   fun {Startup N ?Xs}
      if N == 0 then Xs
      else Xr in Xs=_|Xr {Startup N-1 Xr} end
   end
   %% AskLoop gets demands from Ys and
   %% transfers them to demands on Xs
   proc {AskLoop Ys ?Xs ?End}
      case Ys of Y|Yr then Xr End2 in
         Xs = Y|Xr      % Get from buffer
         End=_|End2     % Replenish buffer
         {AskLoop Yr Xr End2}
      end
   end
   End = {Startup N Xs}
in
   {AskLoop Ys Xs End}
end
------------------------------------------

See BufferDD.oz

Q: What does Startup do? Where is it used?
   asks for N elements from the producer by extending Xs
   with N unbound elements (makes N demands)

Q: What does AskLoop do?
   manages the buffer, by:
   - getting a demand from the consumer (pattern match on Ys)
   - extracting a value to satisfy it from the buffer
     (pattern match on Xs, which binds Y)
   - giving another demand to the producer (by binding End)

Q: What does End do? What is it used for?
   It's the end of the list going to the producer;
   it's bound to give the producer new demands (data driven).

Q: How could you implement a bounded buffer in Java?
   As a monitor (class with synchronized methods).

Q: How would you use bounded buffers in Oz on the hailstone problem?

% In HailstoneBB.oz
fun {DDGraphMaxPeaks NumberSought}
   NumbersIn NumbersOut InStreamIn InStreamOut
   PeakStreamIn PeakStreamOut OutStream
in
   thread {DDCount 1 NumbersIn} end
   thread {BufferDD 4 NumbersIn NumbersOut} end
   thread {DDGraph NumbersOut InStreamIn HailstoneMax} end
   thread {BufferDD 4 InStreamIn InStreamOut} end
   thread {DDPeaks InStreamOut PeakStreamIn 0} end
   thread {BufferDD 4 PeakStreamIn PeakStreamOut} end
   thread OutStream = {DDTake PeakStreamOut NumberSought} end
   OutStream
end

Q: How much space does {BufferDD N Xs Ys} use?
   proportional to N (even if nothing is stored in it)

Q: In what sense is a bounded buffer a blend of eager and lazy?
   It is lazy with size 0, eager with infinite size

**** flow control with priorities (4.3.3.3) (skip)

This doesn't work well in many cases; it depends on speed ratios.
Should only be used as a performance optimization.

*** Stream objects (4.3.4)

Now we abstract these ideas, making a procedural abstraction.
Stream objects have a "state" that's an accumulator.

------------------------------------------
STREAM OBJECTS (4.3.4)

%% Based on page 266 of CTM (but with different names)
declare
fun {MakeStreamObject NextState}
   proc {StreamObject InStream1 State1 ?OutStrm1}
      case InStream1
      of InVal|InStream2 then
         local OutVal State2 OutStrm2 in
            {NextState InVal State1 OutVal State2}
            OutStrm1 = OutVal|OutStrm2
            {StreamObject InStream2 State2 OutStrm2}
         end
      [] nil then OutStrm1=nil
      end
   end
in
   %% receive on InStream0, output on OutStrm0
   proc {$ InStream0 State0 ?OutStrm0}
      {StreamObject InStream0 State0 OutStrm0}
   end
end
------------------------------------------

See MakeStreamObject.oz and MakeStreamObjectTest.oz

Q: How does this work?
   See MakeStreamObjectTest.oz for an example.

*** Digital Logic Simulation example (4.3.5) (skip)

example that is not a pipeline, because it folds back on itself.

**** Combinatorial logic (4.3.5.1)

------------------------------------------
COMBINATORIAL LOGIC (4.3.5.1)

local
   fun {NotLoop Xs}
      case Xs of X|Xr then (1-X)|{NotLoop Xr} end
   end
in
   fun {NotG Xs}
      thread {NotLoop Xs} end
   end
end
------------------------------------------

See CombinatorialLogic.oz

Q: Why have the thread?

------------------------------------------
fun {GateMaker F}
   fun {$ Xs Ys}
      fun {GateLoop Xs Ys}
         case Xs#Ys of (X|Xr)#(Y|Yr) then
            {F X Y}|{GateLoop Xr Yr}
         end
      end
   in thread {GateLoop Xs Ys} end
   end
end

OrG = {GateMaker fun {$ X Y} X+Y-X*Y end}
------------------------------------------

Q: How would you define an "and" gate?
   AndG = {GateMaker fun {$ X Y} X*Y end}

**** Sequential Logic (4.3.5.2)

Q: What if you feed the output of an "and" gate back to its input?
   Get a deadlock, because there is a cyclic dependency

   How to prevent that? use a delay...

------------------------------------------
fun {DelayG Xs} 0|Xs end
------------------------------------------

See SequentialLogic.oz

**** Clocking (4.3.5.3)

------------------------------------------
% N is in milliseconds
fun {ClockMaker N}
   fun {Loop B}
      {Delay N} B|{Loop B}
   end
in thread {Loop 1} end
end
------------------------------------------

See SequentialLogic.oz

**** Linguistic abstraction (4.3.5.4) (skip)

** Using the declarative model directly (4.4)

Don't have to use stream abstractions...

*** Order-determining concurrency (4.4.1)

Q: How can you fix subtle order dependencies in calculations?
   put each calculation in a different thread
   See Figure 4.19
   This passes the work to the language

*** coroutines (4.4.2)

def: a coroutine is a non-preemptive thread
3 operations: Spawn, Yield, and Resume

Q: Why are coroutines hard to use?
   programming is intricate; each coroutine depends on the others,
   so it's not modular
   It's easy to cause starvation
   In summary, it's *very* difficult to program with them!

*** concurrent composition (4.4.3)

Q: How can a thread wait for a forked thread to finish?
   use dataflow variables

------------------------------------------
WAITING FOR FORKED THREADS TO DIE

Wait primitive:

   {Wait X}

suspends until X is determined

Similar Java code:

   public synchronized void Wait()
         throws InterruptedException {
      while (!isDetermined()) {
         wait();
      }
   }
------------------------------------------

Q: How could you implement Wait as a procedure in Oz?
   proc {Wait X} _={IsRecord X} end

Q: How would you program "wait" in Java?
   use a monitor, with synchronized methods that use wait and notify

------------------------------------------
BARRIER SYNCHRONIZATION (Fig 4.22)

proc {Barrier Ps}
   fun {BarrierLoop Ps L}
      case Ps
      of P|Pr then M in
         thread {P} M=L end
         {BarrierLoop Pr M}
      [] nil then L
      end
   end
   S={BarrierLoop Ps unit}
in
   {Wait S}
end
------------------------------------------

See Barrier.oz

Q: What is the type of Ps?

Q: How does this wait for all created threads to execute?

Q: How would you do that in Java?

** Lazy Execution (4.5)

The idea behind this is to:
- avoid recomputation, and
- allow easier computation with potentially infinite data

More general than lazy evaluation, due to concurrency.
(Lazy evaluation does coroutining.)

*** Examples

------------------------------------------
LAZY EXECUTION (4.5)

idea: only execute when result needed

fun lazy {From N} N|{From N+1} end

Nats = {From 0}

fun lazy {AddLists L1 L2}
   case L1#L2 of (X|Xs)#(Y|Ys) then
      X+Y|{AddLists Xs Ys}
   else nil
   end
end

% From Abelson and Sussman, p. 326-7
fun lazy {FibGen A B} A|{FibGen B A+B} end

FibsGenerated = {FibGen 0 1}

Ones = 1 | {fun lazy {$} Ones end}
------------------------------------------

See LazyLists.oz

Q: What features of Oz does this rely on?
   threads and dataflow variables

Q: How would you simulate this in C++?
   In C++ you could have a copy constructor that triggers
   evaluation of an object that has been created as a suspension.

Q: How is laziness used in the singleton pattern in Java?
   The singleton class checks (in a synchronized block!)
   in getInstance() to see if the instance has been created
   and put in the static field.

Q: What happens if we do

   {Browse 'Nats...'}
   {Browse Nats}
   {Browse {Nth Nats 2}}
   {Browse {List.take Nats 6}}
   {Browse {List.take {From 0} 25}}

   in Oz?

Q: How would you see how AddLists works? FibsGenerated?

Q: What does Ones do?
   When you do {Browse Ones}, it shows 1|_, but when you ask
   for the second element, it forces the by-need suspension,
   which returns Ones, making the list circular
   (just like the definition of Twos).
   So it's not really incremental like the lists generated by From.

*** Importance

------------------------------------------
USES OF LAZY EXECUTION

- Efficient algorithms
  avoid computation
  amortized algorithms

- Helps modularization
  easier flow control
------------------------------------------

For modularization, see the homework...

*** demand-driven concurrent model (4.5.1)

Q: What's the default: data-driven or demand-driven concurrency?
   data-driven

Q: Why?
   Easier to reason about the efficiency of data-driven programs.
   Recommend structuring an application as:
   a data-driven computation around a demand-driven core

**** formal model (cover quickly, skipping the formal semantics)

------------------------------------------
DEMAND-DRIVEN CONCURRENT MODEL (4.5.1)

LAZY IS DESUGARED TO BYNEED

fun lazy {$ ... } <Expr> end
==>
fun {$ ... } {ByNeed fun {$} <Expr> end} end
==>
proc {$ ... ?R}
   {ByNeed proc {$ X} X=<Expr> end R}
end
------------------------------------------

... might say that the 0-argument function is a "thunk"

ByNeed is also known as Value.byNeed

------------------------------------------
EXAMPLES

local Q in
   {ByNeed proc {$ ?R} R=3 end Q}
   {Browse Q}
end

local Q in
   {ByNeed proc {$ ?R} R=3 end Q}
   thread {Browse Q} end
   {Delay 2000}
   {Browse Q+7}
end
------------------------------------------

See ByNeedTest.oz

Q: What do these do?
   The first just shows _.
   The second shows _, then (after 2 seconds) the need from Q+7
   changes the _ to 3 and shows 10.

Q: How would you program something like ByNeed in Java?
   Make an abstract class named ByNeed, with a field to hold a Cell,
   and a method "eval":
   To create a ByNeed object, you use an (anonymous) subclass
   that implements the eval method, and call it on demand.
   (But how do we make sure the eval method's body runs in a thread?)
To make demands implicit we have to model Oz's dataflow variables,
have those bound by the eval method,
and have the lookup trigger calls to the eval method...

------------------------------------------
BY-NEED TRIGGERS

Assume Env is {P-->p, Y-->y, Z-->z}:

Thread 1                 Store
{ByNeed P Y}             {p=pv, y, z}
...                      {p=pv, y, z, trig(p,y)}
Z=Y+7                    {p=pv, y, z, trig(p,y)}
(suspends)

Thread 2      Thread 1
{P Y}         Z=Y+7      {p=pv, y, z}
              Z=Y+7      {p=pv, y=3, z}
                         {p=pv, y=3, z=10}
------------------------------------------

trig(p,y) is a trigger that waits for y to be needed

------------------------------------------
MODEL'S SEMANTICS

Syntax:

   <Statement> ::= ... | {ByNeed <x> <y>}

Semantics (transitions):

[trigger creation]
({({ByNeed X Y},E)|Rest} + MST, s)
--> ({Rest} + MST, s')
   where unbound(s, E(Y))
   and s' = addTrigger(s, trig(E(X),E(Y)))

[ByNeed done]
({({ByNeed X Y},E)|Rest} + MST, s)
--> ({({X Y},E)|nil} + {Rest} + MST, s)
   where determined(s, E(Y))

[trigger activation]
({ST} + MST, s)
--> ({({X Y},{X->x,Y->y})|nil} + {ST} + MST, s')
   where needs(s)(ST,y)
   and hasTrigger(s, trig(x,y))
   and s' = remTrigger(s, trig(x,y))
------------------------------------------

The [trigger creation] and [ByNeed done] could also be -d-> transitions

Recall that configurations are (MST,s) in

   State = MultiSet(Stack) x Store + Message
   Stack = (<Statement> x Environment)*
   T = Message + { (MST,s) | s in Store, each ST in MST is nil}
   Message = String

The Store has to be able to handle triggers, e.g.,

   Store = s:(Variable -> Value) x PowerSet(Trigger(s))
   Trigger(s) = {trig(x,y) | x, y in Variable, x, y in dom(s)}

   hasTrigger((s,t), trig(x,y)) = trig(x,y) in t
   remTrigger((s,t), trig(x,y)) = (s, t \ {trig(x,y)})
   addTrigger((s,t), trig(x,y)) = (s, t U {trig(x,y)})

The following is covered informally later

------------------------------------------
NEEDS

needs: Store -> Stack x Variable -> Bool

needs(s)(ST,y) = suspendedon(s)(ST,y) or bindingof(s)(ST,y)

suspendedon(s)(ST,y) =
   suspended(ST,s) and unbound(s,y)
   and (Exists v:: let s' = bind(s(y),v) in not(suspended(ST,s')))
------------------------------------------

Q: How would you formalize suspended? bindingof?

   suspended(ST,s) =
      not (Exists ST',s':: (ST,s) -d-> (ST',s'))

   bindingof(s)(ST,y) =
      unbound(s,y)
      and (Exists ST',s' :: (ST,s) -d-> (ST',s') and determined(s',y))

Q: Do we have to be sure that the trigger activation rule runs
   before the rules that bind variables in the store?

Q: If so, how can we do that in the TTS?
   modify the run rule!

Q: Can we also change the rules for memory management?
   yes, the def of reachable must use triggers;
   can reclaim trig(x,y) if y is unreachable

Q: What's the relation between {ByNeed P Y} and thread {P Y} end ?
   essentially the same effect, but the ByNeed may never execute
   {P Y} unless there is a need

**** sugars

------------------------------------------
EXPRESSION SUGARS

X = {ByNeed F}
==>
{ByNeed F X}

Recall:

fun {$} E end
==>
proc {$ ?R} R=E end

So:

X = {ByNeed fun {$} 3 end}
==>
------------------------------------------
...
   {ByNeed proc {$ ?R} R=3 end X}

Q: What does X={ByNeed fun {$} 3 end} do?

Compare:
   {ByNeed proc {$ ?R} R=3 end X}
vs.
   thread X=3 end

------------------------------------------
LAZY FUNCTION SUGAR

fun lazy {$ ... } <Expr> end
==>
fun {$ ... } {ByNeed fun {$} <Expr> end} end
------------------------------------------

Q: How would you desugar fun lazy {$} 3 end ?

   fun {$} {ByNeed fun {$} 3 end} end

   Do you understand the difference between the above
   and {ByNeed fun {$} 3 end} ?

Q: Why doesn't the "lazy" sugar work for procedures?
   They don't have an expression to delay.
   They can use ByNeed directly if necessary.

**** semantic example (skip)

------------------------------------------
SEMANTIC EXAMPLE

Desugar

   local Answer in
      fun lazy {From N} N|{From N+1} end
      Answer = {List.take 2 {From 1}}
   end

What is its semantics?
------------------------------------------

See LazySemanticsExample.oz

...
local From Answer
      UnnestApply1 UnnestApply2 UnnestApply3 UnnestApply4 UnnestApply5
in
   proc {From Result1 Result2}
      local Fun1 in
         proc {Fun1 Result3}
            case Result1 of N then
               local RecordArg1 UnnestApply6 UnnestApply7 in
                  Result3 = '|'(N RecordArg1)
                  UnnestApply7 = 1
                  UnnestApply6 = N + UnnestApply7
                  {From UnnestApply6 RecordArg1}
               end
            end
         end
         {`Value.byNeed` Fun1 Result2}
      end
   end
   UnnestApply2 = take
   UnnestApply1 = List.UnnestApply2
   UnnestApply3 = 2
   UnnestApply5 = 1
   {From UnnestApply5 UnnestApply4}
   {UnnestApply1 UnnestApply3 UnnestApply4 Answer}
end

**** concept of need, with examples

------------------------------------------
NEED EXAMPLES

Needs from strict operations (like +):

   declare X Y Z
   thread X = {ByNeed fun {$} 3 end} end
   thread Y = {ByNeed fun {$} 4 end} end
   thread Z = X+Y end
   {Browse Z}

Needs by determining a variable:

   declare X Z
   thread X = {ByNeed fun {$} 3 end} end
   thread X = 2 end
   thread Z = X+4 end
   {Browse Z}

Determined by == ?

   declare X Y Z
   thread X = {ByNeed fun {$} 3 end} end
   thread X = Y end
   thread if X==Y then Z=10 end end
   {Browse output(x: X  y: Y  z: Z)}
------------------------------------------

See ByNeedTest.oz

Q: What happens in these?
   The first shows 7; the second fails (always).
   The third doesn't determine X;
   this shows that == doesn't make a determination
   if X and Y are in the same equivalence class.

Q: Why is it important that when a failure occurs, all orders would fail?
   Preserves the declarative property (deterministic outcome)

Q: How could you use ByNeed in implementing dynamic linking?

*** declarative computational models

Q: Is laziness independent of concurrency?
   Yes!

------------------------------------------
DECLARATIVE COMPUTATION MODELS (4.5.2)

                 eager execution     lazy execution
==========================================
sequential
+ values

sequential
+ values
+ dataflow vars

concurrent
+ values
+ dataflow vars
------------------------------------------

Q: What combinations does this give us?
...
                 eager execution     lazy execution
==========================================
sequential       strict              lazy
+ values         functional          functional
                 (ML, Scheme)        (Haskell)

sequential       declarative         lazy FP
+ values         model               with dataflow
+ dataflow       (Prolog)            vars
  vars

concurrent       data-driven         demand-driven
+ values         concurrent          concurrent
+ dataflow       (CLP)               (Oz)
  vars

------------------------------------------
3 MOMENTS IN A VARIABLE'S LIFETIME

1. Creation
2. Specification of the call that will yield the value of the variable
3. Evaluation and binding
------------------------------------------

Q: Can these be done separately? At the same time?

Q: Which goes with what model?

                 eager execution     lazy execution
==========================================
sequential       strict              lazy
+ values         functional          functional
                 (ML, Scheme)        (Haskell)
                 1&2&3               1&2, 3

sequential       declarative         lazy FP
+ values         model               with dataflow
+ dataflow       (Prolog)            vars
  vars           1, 2&3              1, 2, 3

concurrent       data-driven         demand-driven
+ values         concurrent          concurrent
+ dataflow       (CLP)               (Oz)
  vars           1, 2&3              1, 2, 3

Q: Does the combination of laziness and dataflow require concurrency?
   Yes; you can get deadlocks with dataflow and laziness.
   Consider

   % file LazyConcurrencyNeed.oz
   local
      Z
      fun lazy {F1 X} X+Z end
      fun lazy {F2 Y} Z=1 Y+Z end
   in
      {Browse {F1 1} + {F2 2}}
   end

   This would deadlock if not concurrent, since X+Z blocks
   and Z=1 would never be reached.

Q: Is the semantics for lazy in Oz concurrent?
   Yes

*** Lazy Streams (4.5.3)

Can use either programmed triggers or implicit ones.
We saw explicit (programmed) ones in 4.3.3.
Implicit triggers are better, since you don't have to do
the bookkeeping when writing the function.

Q: Can we write our hailstone peaks search using lazy?
   Sure, see HailstoneLazy.oz

Q: Which is easier to program, the demand-driven or the lazy version?
   The lazy one!

Q: Why isn't lazy the default for functions?
   Because eager evaluation is much more efficient,
   so one needs to decide when one can get by with eager
   evaluation (strictness analysis), which is complex.
   Also, an eager language fits together better with imperative
   features: you don't want to mix laziness (essentially
   unpredictable execution order) with imperative features
   (where execution order matters).

*** Bounded buffer (4.5.4) (skip if short on time)

------------------------------------------
BOUNDED BUFFER

{Buffer In N}
- fills itself with N elements from In
- When an element is taken off the output, it gets another input

How to write this?
------------------------------------------

Idea: make the inner loop lazy...

% file fig4_26.oz
declare
fun {Buffer1 In N}
   End={List.drop In N}
   fun lazy {Loop In End}
      case In of I|In2 then
         I|{Loop In2 End.2}
      end
   end
in
   {Loop In End}
end

The above works in lockstep! (Why?)
- the call to List.drop (see Drop.oz) waits for the producer
  to generate N elements
- it blocks on End.2 waiting for the producer to produce something

To fix this, use a thread around the expressions
that generate producer requests:

% file fig4_27.oz
declare
fun {Buffer2 In N}
   End=thread {List.drop In N} end
   fun lazy {Loop In End}
      case In of I|In2 then
         I|{Loop In2 thread End.2 end}
      end
   end
in
   {Loop In End}
end

Q: How would you test a buffer implementation?

% fig4_27Test.oz
\insert 'fig4_27.oz'
declare
fun lazy {Ints N}
   {Delay 1000}
   N|{Ints N+1}
end
In = {Ints 1}
Out = {Buffer2 In 5}
{Browse Out}
{Browse Out.1}
{Browse Out.2.2.2.2.2.2.2.2.2}

Q: In what sense is using a bounded buffer a compromise between
   the data-driven and demand-driven models?
   If the size is 0, it's demand-driven;
   if the size is infinite, it's data-driven;
   in between it's a mix.

*** Lazy I/O (4.5.5)

Q: Why would lazy evaluation be useful for file I/O?
   Allows reading a file as a list,
   but doesn't require the whole thing to be in memory

*** Hamming problem (4.5.6) (skip)

See the text; we did something like this already.
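For another point of comparison, the lazy stream functions From, AddLists, and FibGen from the lazy execution examples map naturally onto Python generators, which are demand-driven by construction (a sketch; the names `from_`, `add_lists`, and `fib_gen` are our own, and Python generators are single-use, unlike Oz's sharable lazy lists):

```python
from itertools import islice

def from_(n):               # cf. fun lazy {From N} N|{From N+1} end
    while True:
        yield n
        n += 1

def add_lists(l1, l2):      # cf. the lazy AddLists
    for x, y in zip(l1, l2):
        yield x + y

def fib_gen(a, b):          # cf. fun lazy {FibGen A B} A|{FibGen B A+B} end
    while True:
        yield a
        a, b = b, a + b

print(list(islice(from_(0), 6)))       # [0, 1, 2, 3, 4, 5]
print(list(islice(fib_gen(0, 1), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
print(list(islice(add_lists(from_(0), fib_gen(0, 1)), 4)))  # [0, 2, 3, 5]
```

Here `islice` plays the role of List.take: nothing in the infinite streams is computed until an element is demanded.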
*** Lazy list operations (4.5.7) (skip)

**** lazy append

% file LAppend.oz
------------------------------------------
LAZY APPEND

fun lazy {LAppend As Bs}
   case As
   of nil then Bs
   [] A|Ar then A|{LAppend Ar Bs}
   end
end
------------------------------------------

Q: What happens if we do

   L={LAppend "foo" "bar"}
   {Browse L}

?  We see the first 3 chars lazily, then all of "bar".
   Why? There is no more work to delay after the last element of As.

**** lazy map

------------------------------------------
LAZY MAP

declare
fun lazy {LMap Xs F}
   case Xs
   of nil then nil
   [] X|Xr then {F X}|{LMap Xr F}
   end
end
------------------------------------------

file LMap.oz

Q: Is this incremental?
   yes

Q: Would reverse be incremental if we wrote it using lazy?
   no

**** lazy filter

------------------------------------------
FOR YOU TO DO

Write a lazy version of Filter
------------------------------------------

See LFilter.oz

*** persistent queues and algorithm design (4.5.8) (skip)

Q: Can we use laziness to build queues with constant-time
   insert and delete?
   Yes, even persistent ones! See the errata!
   The trick is to do a reverse together with an append.

**** lessons for algorithm design (4.5.8.3)

1. Start with an algorithm A that has an amortized bound O(f(n))
   for ephemeral data.
2. Laziness can make this persistent and still have the O(f(n)) bound;
   the bound will be worst case if the computations can be spread out.

*** List comprehensions (4.5.9)

mathematical set comprehensions:

   {x*y | 1 <= x <= 10, 1 <= y <= x}

specify lazy streams, as in Haskell

general form:

   [f(x) | x <- generator(a1, ..., an), guard(x, a1, ..., an)]

the generator calculates a stream
the guard filters

Can write these using LMap and LFilter (and LAppendMap)
(Also similar to for loops in Oz)
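A lazy Filter (the "for you to do" above) and the comprehension form both have direct Python counterparts: a generator function for LFilter, and a generator expression for the generator-plus-guard form (a sketch; `lfilter` is our own name, not a standard function):

```python
from itertools import count, islice

def lfilter(xs, pred):
    # lazy Filter: examines each element only when it is demanded
    for x in xs:
        if pred(x):
            yield x

# demand 5 odd numbers from the infinite stream 1, 2, 3, ...
odds = lfilter(count(1), lambda x: x % 2 == 1)
print(list(islice(odds, 5)))  # [1, 3, 5, 7, 9]

# the set comprehension {x*y | 1 <= x <= 10, 1 <= y <= x}
# as a generator expression: generator parts (the two for-clauses)
# produce the stream; a guard would be an if-clause
pairs = ((x, y) for x in range(1, 11) for y in range(1, x + 1))
print(max(x * y for x, y in pairs))  # 100
```

As in the Haskell-style form described above, the generator expression computes its stream lazily; `max` here plays the role of a consumer that drives the demand.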