I. The Data-Driven Concurrent (really parallel) Model (Ch 4)

A. motivation

------------------------------------------
WHY IS CONCURRENCY USEFUL?
------------------------------------------

B. advantages

------------------------------------------
DECLARATIVE CONCURRENCY (4)

Extension of declarative model
 - threads
 - by-need triggers (lazy evaluation)

Advantages:
 - preserves declarative reasoning
 - only changes performance
 - results calculated incrementally
 - can improve throughput (answers faster)

Why is incremental calculation of results useful?
------------------------------------------

C. the data-driven concurrent model (section 4.1)

------------------------------------------
DATA-DRIVEN CONCURRENT MODEL (4.1)

<s> ::= ... | thread <s> end    "thread execution"

New expression sugars:

   thread <e> end ==> local Z in thread Z=<e> end Z end
------------------------------------------

What does a thread do?

------------------------------------------
IN OTHER LANGUAGES

How would you create new threads in Java? In C++? C?
------------------------------------------

What kind of mechanisms are used in Java to deal with mutable
state and concurrency?

What's the difference between running a thread and a process?

------------------------------------------
BENEFITS OF DECLARATIVE CONCURRENCY

Why is declarative concurrency, without mutable state (cells),
simple to use?
------------------------------------------

1. Basic concepts (4.1.1)

------------------------------------------
BASIC CONCEPTS (4.1.1)

Interleaving:

  Thread 1  -->     -->      -->     -->
  Thread 2      -->      -->     -->     -->

def: Steps of a computation follow the *causal order* iff

------------------------------------------
------------------------------------------
EXAMPLES

local X in
   thread {Browse X+1} end
   thread {Delay 5000} X=3 end
end

FOR YOU TO DO

What orders of steps follow the causal order?
local A B in
   thread {Browse A} end
   thread {Browse B} end
   thread {Delay 5000} A=3 end
   thread B=A+2 end
end

------------------------------------------

How might the interleaving semantics be implemented?

------------------------------------------
NONDETERMINISM

def: An execution is *nondeterministic* iff

def: *observable nondeterminism* is nondeterminism that can be
     seen in some output.
------------------------------------------

How would we formalize "visible to the programmer"?

What states can a thread be in?

What is fairness in scheduling?

D. Semantics of threads (4.1.2)

How could we formalize the semantics of threads?

1. Operational semantics (skip, or go over quickly in favor of examples)

------------------------------------------
CONFIGURATIONS

(MST,s) in State = MultiSet(Stack) x Store + Message
Stack = (<s> x Environment)*
T = Message + { (MST,s) | s in Store, each ST in MST is nil}
Message = String

input[[S]] = ({[(S,{})]}, {})
output((MST,s)) = s
output(Msg) = Msg
------------------------------------------

When is a semantic stack runnable in a store s?

------------------------------------------
TRANSITIONS (-->)

Let -d-> be the transitions of the declarative model.
Let Td be terminal configurations of the declarative model.

          (ST,s) -d-> (ST', s')
[run] ------------------------------------------
      ({ST} + MST, s) --> ({ST'} + MST, s')

           (ST,s) -d-> Msg
[fail] -------------------------
       ({ST} + MST, s) --> Msg

[thread-creation]
  ({(thread S end,E)|Rest} + MST, s)
     --> ({(S,E)|nil} + {Rest} + MST, s)

[thread-finish]
  ({nil} + MST, s) --> (MST, s)
------------------------------------------

In the run rule, do we need a side condition saying the stack is
runnable in the store?

Could [thread-creation] be a -d-> rule?

According to the book "A blocked semantic stack can be reclaimed
if its activation condition depends on an unreachable variable";
how would you formalize that?

What's the [fail] rule for?

2.
example (4.1.3)

------------------------------------------
EXAMPLES

What is the final value of Answer? Why?

% (a)
local Answer in
   local B in
      thread B = true end
      if B then Answer=yes else Answer=no end
   end
   {Browse Answer}
end

% (b)
local Answer in
   local B in
      thread
         if B then Answer=yes else Answer=no end
      end
      thread B = true end
   end
   {Browse Answer}
end

% (c)
local Answer in
   local N in
      thread N=0 end
      thread N=11 end
      thread N=222 end
      thread N=3333 end
      Answer=N
   end
   {Browse Answer}
end

% (d)
local Answer Square in
   fun {Square I} thread I*I end end
   Answer={Square 9}
   {Browse Answer}
end
------------------------------------------

E. what is declarative concurrency (4.1.4)

What does it mean to be a declarative function?

What problems might make it hard to define "referential
transparency" or "declarative" in the data-driven concurrent model?

1. partial termination

------------------------------------------
PARTIAL TERMINATION

def: A function *partially terminates* if all threads are blocked
and further binding of inputs would cause more computation.

EXAMPLE

fun {Double X|Xr}
   2*X | thread {Double Xr} end
end
------------------------------------------

What happens if Xr grows in the example?

2. logical equivalence

What does it mean for two partial values to be equivalent?

E.g., is 1|2|3|Xr equivalent to 1|2|3|Yr ?

------------------------------------------
EQUIVALENT STORES?

% Example 1
local X Y in X=1 Y=X end
% vs.
local X Y in Y=X X=1 end

% Example 2
local X Y Z in X=foo(Y W) Y=Z end
% vs.
local X Y Z in X=foo(Z W) Y=Z end
------------------------------------------

Are the stores produced the same? If so, what changes could you
make to produce a difference?

------------------------------------------
CONSTRAINTS ON THE STORE

def: a *constraint* is

def: Let c be a constraint. Then values(x,c) is

------------------------------------------

What is values(x, x=foo(y w) /\ y=z /\ y=1 /\ w=3)?

What is values(x, x=foo(z w) /\ y=z /\ y=1 /\ w=3)?

3.
declarative concurrency

------------------------------------------
DEFINING "DECLARATIVE"

Q: How would you use the notion of logical equivalence to define
when a function is declarative?
------------------------------------------

Is it possible to observe different outputs from a declarative
program?

How would you prove that the data-driven concurrent model is
declarative?

4. failure

Are failures declarative?

What would happen if we just let the exception propagate out of
the threads?

Can you write a program that would be non-declarative by using
exceptions that propagate out of threads?

II. Basic thread programming techniques (4.2)

A. creating threads (skip, 4.2.1)

------------------------------------------
CREATING THREADS

use thread <s> end

also expression sugar thread <e> end
------------------------------------------

B. Browser and threads (skip, 4.2.2)

C. dataflow computation with threads (4.2.3)

------------------------------------------
DATAFLOW COMPUTATION (4.2.3)

declare X0 X1 in
thread Y0 Y1 in
   {Browse [Y0 Y1]}
   Y0 = X0+1
   Y1 = X1+Y0
   {Browse completed}
end
{Browse [X0 X1]}
------------------------------------------

What does this show?

What happens if we then feed X0=0 ? Then X1=1 ?

How could we parallelize the Map function?

How could we parallelize the generation of the numbers from 1 to N?

------------------------------------------
CREATING AN EXPONENTIAL NUMBER OF THREADS

fun {Fib X}
   if X =< 2 then 1
   else thread {Fib X-1} end + {Fib X-2}
   end
end
------------------------------------------

D. Thread details (skip)

1. thread scheduling (4.2.4)

Are child threads given the same priority as the parent (creating)
thread?

2. Cooperative and Competitive Concurrency (4.2.5)

Are threads supposed to work together cooperatively or compete?
Are threads supposed to trust each other?

3. Thread operations (4.2.6)

E.
Streams (4.3)

------------------------------------------
STREAMS (4.3)

def: a *stream* is

Grammar

    ::=

    ::=
------------------------------------------

What's the grammar for a stream? How many cases are there in this
grammar? Is it recursive? What would a program that processes it
look like?

How would you implement a stream in Oz? How would you implement a
stream in Java?

1. Producer/Consumer (4.3.1)

------------------------------------------
PRODUCER/CONSUMER PIPELINE WITH STREAMS

|-----------|  3|5|7|9|...  |-----------|
| Consumer  |<--------------| Producer  |
|-----------|               |-----------|

Example:

|-----------|               |-----------|
|  Player   |<--------------|MP3 Encoder|
|-----------|               |-----------|

Idea:
------------------------------------------

What kinds of things would the consumer do?

a. filtering

------------------------------------------
FILTERING

|-------------|  1|2|3|4|...  |-----------|
| Filter IsOdd|<--------------| Count 20  |
|-------------|               |-----------|

declare
% Return the list [1 2 3 ... N]
fun {Count N}
   fun {Help I}
      if I == N then I|nil else I|{Help I+1} end
   end
in {Help 1} end

{Browse {Filter thread {Count 20} end IsOdd}}
------------------------------------------

b. extended example (cf. 4.3.2)

------------------------------------------
HAILSTONE (3x+1) EXAMPLE

declare
fun {Hailstone N}
   if {IsOdd N}
   then (3*N+1) div 2
   else N div 2
   end
end

fun {IterHailstone N}
   if N == 1
   then 1|nil
   else N|{IterHailstone {Hailstone N}}
   end
end
------------------------------------------

Where's the undetermined variable in IterHailstone?

How would you find the maximum reached by a number?

How would you find for each number what its maximum is?

------------------------------------------
PEAK ARGUMENTS

def: Suppose the arguments and results of a function F are totally
ordered. Then N is a *peak argument* for F iff {F M} < {F N}
whenever M < N.

PROBLEM

Consider

fun {HailstoneMax N} {FoldL {IterHailstone N} Max 0} end

Find all peak arguments for HailstoneMax from 1 to N.
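The question above about finding the maximum reached by a number can be sketched outside Oz as well. Below is a hypothetical Java translation of Hailstone/IterHailstone (the class and method names are ours, not from CTM); note that it is eager and sequential, so it loses the incremental, concurrent character of the Oz stream version:

```java
import java.util.ArrayList;
import java.util.List;

public class Hailstone {
    // One step of the "shortcut" hailstone function from the notes:
    // odd N -> (3N+1)/2, even N -> N/2
    static long step(long n) {
        return (n % 2 == 1) ? (3 * n + 1) / 2 : n / 2;
    }

    // Like IterHailstone: the whole sequence from n down to 1,
    // but built eagerly as a list instead of lazily as a stream.
    static List<Long> sequence(long n) {
        List<Long> xs = new ArrayList<>();
        for (long i = n; i != 1; i = step(i)) xs.add(i);
        xs.add(1L);
        return xs;
    }

    // "How would you find the maximum reached by a number?"
    static long maxReached(long n) {
        return sequence(n).stream().max(Long::compare).get();
    }

    public static void main(String[] args) {
        System.out.println(sequence(7));   // [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
        System.out.println(maxReached(7)); // 26
    }
}
```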
------------------------------------------

What's the producer and consumer in this setup?

Why is this incremental?

2. Managing resources and improving throughput (4.3.3)

What happens if the producer or consumer runs faster than the other?

a. flow control with demand-driven concurrency (4.3.3.1)

------------------------------------------
DEMAND-DRIVEN CONCURRENCY

Consumer signals producer by binding a cons (X|Xs) to a stream
   X and Xs are undetermined
   Xs will be the next signal

Example (p. 262)

%% The following is based on DGenerate from page 262 of CTM
%% It puts its outputs on X|Xr
proc {DDCount N X|Xr}
   X=N
   {DDCount N+1 Xr}
end

%% DDTake returns Limit items from Pairs, where it puts its demands
fun {DDTake ?Pairs Limit}
   if 0 < Limit
   then P|Pr = Pairs in
        P | {DDTake Pr Limit-1}
   else nil
   end
end

%% Return a list of the numbers from 1 to NumberSought
fun {UpTo NumberSought}
   Numbers InStream OutStream
in
   thread {DDCount 1 Numbers} end
   thread OutStream = {DDTake Numbers NumberSought} end
   OutStream
end
------------------------------------------

Which is the producer? Which is the consumer?

In DDCount, how is Xr used? In DDTake, how is Pairs used?

What does UpTo do?

How would you write a demand-driven version of Graph? Peaks?

What are the advantages of using demand-driven streams over the
first version?

b. Flow control with a bounded buffer (4.3.3.2)

What are the efficiency problems of demand-driven execution?

How can we avoid overusing resources without reducing throughput?
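For how demand-driven flow control might look in other languages: a pull-based Java Iterator gives a rough single-threaded analogue of DDCount/DDTake/UpTo, where each next() call plays the role of the consumer binding the next cons cell. This is a sketch under our own names; there are no dataflow variables or threads here, only the pull discipline:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class DemandDriven {
    // Like DDCount: an unbounded source that only produces when asked.
    // The consumer's call to next() is the "demand" signal.
    static Iterator<Integer> count(int from) {
        return new Iterator<Integer>() {
            int n = from;
            public boolean hasNext() { return true; }
            public Integer next() { return n++; } // one element per demand
        };
    }

    // Like DDTake/UpTo: demand exactly `limit` elements from the source.
    static List<Integer> upTo(int limit) {
        Iterator<Integer> src = count(1);
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < limit; i++) out.add(src.next());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(upTo(5)); // [1, 2, 3, 4, 5]
    }
}
```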
------------------------------------------
DEMAND-DRIVEN BOUNDED BUFFER (FIG 4.14)

declare
%% Buffer puts its demands on Xs
%% and outputs results to Ys
proc {BufferDD N ?Xs Ys}
   fun {Startup N ?Xs}
      if N == 0 then Xs
      else Xr in Xs=_|Xr {Startup N-1 Xr} end
   end
   %% AskLoop gets demands from Ys and
   %% transfers them to demands on Xs
   proc {AskLoop Ys ?Xs ?End}
      case Ys of Y|Yr then Xr End2 in
         Xs = Y|Xr      % Get from buffer
         End=_|End2     % Replenish buffer
         {AskLoop Yr Xr End2}
      end
   end
   End = {Startup N Xs}
in
   {AskLoop Ys Xs End}
end
------------------------------------------

What does Startup do? Where is it used?

What does AskLoop do?

What does End do? What is it used for?

How could you implement a bounded buffer in Java?

How would you use bounded buffers in Oz on the hailstone problem?

How much space does {BufferDD N Xs Ys} use?

In what sense is a bounded buffer a blend of eager and lazy?

c. flow control with priorities (4.3.3.3) (skip)

3. Stream objects (4.3.4)

------------------------------------------
STREAM OBJECTS (4.3.4)

%% Based on page 266 of CTM (but with different names)
declare
fun {MakeStreamObject NextState}
   proc {StreamObject InStream1 State1 ?OutStrm1}
      case InStream1
      of InVal|InStream2 then
         local OutVal State2 OutStrm2 in
            {NextState InVal State1 OutVal State2}
            OutStrm1 = OutVal|OutStrm2
            {StreamObject InStream2 State2 OutStrm2}
         end
      [] nil then OutStrm1=nil
      end
   end
in
   %% receive on InStream0, output on OutStrm0
   proc {$ InStream0 State0 ?OutStrm0}
      {StreamObject InStream0 State0 OutStrm0}
   end
end
------------------------------------------

How does this work?

4. Digital Logic Simulation example (4.3.5) (skip)

a. Combinatorial logic (4.3.5.1)

------------------------------------------
COMBINATORIAL LOGIC (4.3.5.1)

local
   fun {NotLoop Xs}
      case Xs of X|Xr then (1-X)|{NotLoop Xr} end
   end
in
   fun {NotG Xs}
      thread {NotLoop Xs} end
   end
end
------------------------------------------

Why have the thread?
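One possible answer to the earlier question "How could you implement a bounded buffer in Java?": java.util.concurrent's ArrayBlockingQueue already provides the blocking put and take that BufferDD encodes with dataflow variables. A sketch (class and method names other than the JDK's are ours):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBufferDemo {
    // Produce squares through a buffer of the given capacity and
    // consume the first `howMany` of them.
    public static List<Integer> firstSquares(int howMany, int capacity) {
        BlockingQueue<Integer> buf = new ArrayBlockingQueue<>(capacity);
        Thread producer = new Thread(() -> {
            try {
                // put blocks whenever the buffer holds `capacity` items,
                // which is the flow control BufferDD achieves with End.
                for (int n = 1; ; n++) buf.put(n * n);
            } catch (InterruptedException e) { /* consumer is done */ }
        });
        producer.start();
        List<Integer> out = new ArrayList<>();
        try {
            // take blocks whenever the buffer is empty
            for (int i = 0; i < howMany; i++) out.add(buf.take());
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        producer.interrupt(); // stop the unbounded producer
        return out;
    }

    public static void main(String[] args) {
        System.out.println(firstSquares(5, 2)); // [1, 4, 9, 16, 25]
    }
}
```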
------------------------------------------
fun {GateMaker F}
   fun {$ Xs Ys}
      fun {GateLoop Xs Ys}
         case Xs#Ys of (X|Xr)#(Y|Yr) then
            {F X Y}|{GateLoop Xr Yr}
         end
      end
   in thread {GateLoop Xs Ys} end
   end
end

OrG = {GateMaker fun {$ X Y} X+Y-X*Y end}
------------------------------------------

How would you define an "and" gate?

b. Sequential Logic (4.3.5.2)

What if you feed output of an "and" gate back to its input?

------------------------------------------
fun {DelayG Xs} 0|Xs end
------------------------------------------

c. Clocking (4.3.5.3)

------------------------------------------
% N is in milliseconds
fun {ClockMaker N}
   fun {Loop B}
      {Delay N} B|{Loop B}
   end
in thread {Loop 1} end
end
------------------------------------------

d. Linguistic abstraction (4.3.5.4) (skip)

F. Using the declarative model directly (4.4)

1. Order-determining concurrency (4.4.1)

How can you fix subtle order dependencies in calculations?

2. coroutines (4.4.2)

Why are coroutines hard to use?

3. concurrent composition (4.4.3)

How can a thread wait for the forked thread to finish?

------------------------------------------
WAITING FOR FORKED THREADS TO DIE

Wait primitive:

   {Wait X} suspends until X is determined

Similar Java code:

    public synchronized void Wait() {
        while (true) {
            if (isDetermined()) {
                return;
            } else {
                wait();
            }
        }
    }
------------------------------------------

How could you implement Wait as a procedure in Oz?

How would you program "wait" in Java?

------------------------------------------
BARRIER SYNCHRONIZATION (Fig 4.22)

proc {Barrier Ps}
   fun {BarrierLoop Ps L}
      case Ps
      of P|Pr then M in
         thread {P} M=L end
         {BarrierLoop Pr M}
      [] nil then L
      end
   end
   S={BarrierLoop Ps unit}
in
   {Wait S}
end
------------------------------------------

What is the type of Ps?

How does this wait for all created threads to execute?

How would you do that in Java?

G. Lazy Execution (4.5)

1.
Examples

------------------------------------------
LAZY EXECUTION (4.5)

idea: only execute when result needed

fun lazy {From N} N|{From N+1} end

Nats = {From 0}

fun lazy {AddLists L1 L2}
   case L1#L2 of (X|Xs)#(Y|Ys) then
      X+Y|{AddLists Xs Ys}
   else nil
   end
end

% From Abelson and Sussman, p. 326-7
fun lazy {FibGen A B} A|{FibGen B A+B} end

FibsGenerated = {FibGen 0 1}

Ones = 1 | {fun lazy {$} Ones end}
------------------------------------------

What features of Oz does this rely on?

How would you simulate this in C++?

How is laziness used in the singleton pattern in Java?

What happens if we do

{Browse 'Nats...'}
{Browse Nats}
{Browse {Nth Nats 2}}
{Browse {List.take Nats 6}}
{Browse {List.take {From 0} 25}}

in Oz?

How would you see how AddLists works? FibsGenerated?

What does Ones do?

2. Importance

------------------------------------------
USES OF LAZY EXECUTION

- Efficient algorithms
    avoid computation
    amortized algorithms
- Helps modularization
    easier flow control
------------------------------------------

3. demand-driven concurrent model (4.5.1)

What's the default: data-driven or demand-driven concurrency? Why?

a. formal model (cover quickly, skipping the formal semantics)

------------------------------------------
DEMAND-DRIVEN CONCURRENT MODEL (4.5.1)

LAZY IS DESUGARED TO BYNEED

fun lazy {$ ...} <e> end
==> fun {$ ...} {ByNeed fun {$} <e> end} end
==> proc {$ ... ?R} {ByNeed proc {$ X} X=<e> end R} end
------------------------------------------
------------------------------------------
EXAMPLES

local Q in
   {ByNeed proc {$ ?R} R=3 end Q}
   {Browse Q}
end

local Q in
   {ByNeed proc {$ ?R} R=3 end Q}
   thread {Browse Q} end
   {Delay 2000}
   {Browse Q+7}
end
------------------------------------------

What do these do?

How would you program something like ByNeed in Java?

------------------------------------------
BY-NEED TRIGGERS

Assume Env is {P-->p, Y-->y, Z-->z}:

Thread 1            Store
{ByNeed P Y}        {p=pv,y,z}
                    {p=pv,y,z,trig(p,y)}
Z=Y+7               {p=pv,y,z,trig(p,y)}
(suspends)

Thread 2            Thread 1            Store
{P Y}               Z=Y+7               {p=pv,y,z}
                    Z=Y+7               {p=pv,y=3,z}
                                        {p=pv,y=3,z=10}
------------------------------------------
------------------------------------------
MODEL'S SEMANTICS

Syntax:

<s> ::= ... | {ByNeed <x> <y>}

Semantics (transitions):

[trigger creation]
  ({({ByNeed X Y},E)|Rest} + MST, s) --> ({Rest} + MST, s')
  where undetermined(s, E(Y))
  and s' = addTrigger(s,trig(E(X),E(Y)))

[ByNeed done]
  ({({ByNeed X Y},E)|Rest} + MST, s)
     --> ({({X Y},E)|nil} + {Rest} + MST, s')
  where determined(s, E(Y))

[trigger activation]
  ({ST} + MST, s)
     --> ({({X Y},{X->x,Y->y})|nil} + {ST} + MST, s')
  where needs(s)(ST,y) and hasTrigger(s,trig(x,y))
  and s' = remTrigger(s,trig(x,y))
------------------------------------------
------------------------------------------
NEEDS

needs: Store -> Stack x Variable -> Bool

needs(s)(ST,y) = suspendedon(s)(ST,y) or bindingof(s)(ST,y)

suspendedon(s)(ST,y) =
   suspended(ST,s) and undetermined(s,y)
   and (Exists v:: let s' = bind(s(y),v) in not(suspended(ST,s')))
------------------------------------------

How would you formalize suspended? bindingof?

Do we have to be sure that the trigger activation rule runs before
the rules that bind variables in the store? If so, how can we do
that in the TTS?

Can we also change the rules for memory management?

What's the relation between {ByNeed P Y} and thread {P Y} end ?

b. sugars

------------------------------------------
EXPRESSION SUGARS

X = {ByNeed F} ==> {ByNeed F X}

Recall: fun {$} E end ==> proc {$ ?R} R=E end

So: X = {ByNeed fun {$} 3 end} ==>
------------------------------------------

What does X={ByNeed fun {$} 3 end} do?

------------------------------------------
LAZY FUNCTION SUGAR

fun lazy {$ ...} <e> end
==> fun {$ ...} {ByNeed fun {$} <e> end} end
------------------------------------------

How would you desugar fun lazy {$} 3 end ?

Why doesn't the "lazy" sugar work for procedures?

c.
semantic example (skip)

------------------------------------------
SEMANTIC EXAMPLE

Desugar

local Answer in
   fun lazy {From N} N|{From N+1} end
   Answer = {List.take 2 {From 1}}
end

What is its semantics?
------------------------------------------

d. concept of need, with examples

------------------------------------------
NEED EXAMPLES

Needs from strict operations (like +):

declare X Y Z
thread X = {ByNeed fun {$} 3 end} end
thread Y = {ByNeed fun {$} 4 end} end
thread Z = X+Y end
{Browse Z}

Needs by determining a variable:

declare X Z
thread X = {ByNeed fun {$} 3 end} end
thread X = 2 end
thread Z = X+4 end
{Browse Z}

Determined by == ?

declare X Y Z
thread X = {ByNeed fun {$} 3 end} end
thread X = Y end
thread if X==Y then Z=10 end end
{Browse output(x: X y: Y z: Z)}
------------------------------------------

What happens in these?

Why is it important that when a failure occurs, all orders fail?

How could you use ByNeed in implementing dynamic linking?

4. declarative computational models

Is laziness independent of concurrency?

------------------------------------------
DECLARATIVE COMPUTATION MODELS (4.5.2)

eager execution                     lazy execution
==========================================
sequential + values
sequential + values + dataflow vars
concurrent + values + dataflow vars
------------------------------------------

What combinations does this give us?

------------------------------------------
3 MOMENTS IN A VARIABLE'S LIFETIME

1. Creation
2. Specification of the call that will yield the value of the variable
3. Evaluation and binding
------------------------------------------

Can these be done separately? At the same time?

Which goes with what model?

Does the combination of laziness and dataflow require concurrency?

Is the semantics for lazy in Oz concurrent?

5. Lazy Streams (4.5.3)

Can we write our hailstone peaks search using lazy?

Which is easier to program, the demand driven or lazy version?

Why isn't lazy the default for functions?

6.
Bounded buffer (4.5.4) (skip if short on time)

------------------------------------------
BOUNDED BUFFER

{Buffer In N}
 - fills itself with N elements from In
 - When an element is taken off the output, get another input

How to write this?
------------------------------------------

How would you test a buffer implementation?

In what sense is using a bounded buffer a compromise between the
data-driven and demand-driven models?

7. Lazy I/O (4.5.5)

Why would lazy evaluation be useful for file I/O?

8. Hamming problem (4.5.6) (skip)

9. Lazy list operations (4.5.7) (skip)

a. lazy append

------------------------------------------
LAZY APPEND

fun lazy {LAppend As Bs}
   case As
   of nil then Bs
   [] A|Ar then A|{LAppend Ar Bs}
   end
end
------------------------------------------

What happens if we do

   L={LAppend "foo" "bar"}
   {Browse L}

?

b. lazy map

------------------------------------------
LAZY MAP

declare
fun lazy {LMap Xs F}
   case Xs
   of nil then nil
   [] X|Xr then {F X}|{LMap Xr F}
   end
end
------------------------------------------

Is this incremental?

Would reverse be incremental if we wrote it using lazy?

c. lazy filter

------------------------------------------
FOR YOU TO DO

Write a lazy version of Filter
------------------------------------------

10. persistent queues and algorithm design (4.5.8) (skip)

Can we use laziness to build queues with constant-time insert and
delete?

a. lessons for algorithm design (4.5.8.3)

11. List comprehensions (4.5.9)

III. Soft real-time programming (4.6) (skip)

------------------------------------------
SOFT REAL-TIME PROGRAMMING (4.6)

def: A soft real-time program has deadlines that should be
respected most of the time.
------------------------------------------

A.
Basic operations (Time module) (4.6.1)

------------------------------------------
SOFT REAL-TIME OPERATIONS

{Delay I}    - waits at least I ms
{Alarm I U}  - new thread that binds U to unit after at least I ms
{Time.time}  - number of seconds since the start of the current year
------------------------------------------

How would you write Alarm using Delay?

Are there similar things available in Java?

1. ticking (4.6.2)

------------------------------------------
TICKING

Write NewTicker such that {NewTicker} returns a stream that grows
by 1 element per second
------------------------------------------

IV. The Haskell Language (4.7) (skip)

A. features

------------------------------------------
HASKELL (See haskell.org)

nonstrict
strongly typed
functional
implicit currying
monads

Example:

> t :: Integer -> Integer
> t n = if odd n then (3*n+1) `div` 2
>        else n `div` 2

> hailstones :: Integer -> [Integer]
> hailstones n = n : hailstones (t n)
------------------------------------------

B. computation model (4.7.1)

------------------------------------------
COMPUTATION MODEL (4.7.1)

Don't evaluate an expression unless needed.

Consider an expression as a tree. When needed, Haskell
 - evaluates leftmost subexpression
     if it's a constructor, then done
     if it's a function, and no args, then done
     if it's a function with args, then
        apply to first argument by substituting argument in body
        then reevaluate

Needs:
 - built-in functions (+, -, *)
 - pattern matching
------------------------------------------

C. Lazy evaluation (4.7.2)

------------------------------------------
LAZY EVALUATION (4.7.2)

Laziness: functions evaluate expressions at most once

Example:

> h27 = hailstones 27
------------------------------------------

D. Currying (4.7.3)

------------------------------------------
IMPLICIT CURRYING (4.7.3)

Functions are implicitly curried

> add x y = x + y
> add3 = add 3
> result = add3 7

> doubleList = map (2*)
------------------------------------------

E.
polymorphic types (4.7.4)

------------------------------------------
TYPE NOTATION

Type operators

operator      meaning
=============================
_ -> _        function type
(_ , _)       product type
[ _ ]         list type

Associativity

b f g x       means (((b f) g) x)
a -> b -> c   means a -> (b -> c)
------------------------------------------

Why do you think the associativity is different for applications
and for function types?

F. polymorphic types (Thompson 9.2, Davie 4.2)

------------------------------------------
POLYMORPHIC TYPES

Monomorphic examples:

    Integer
    [Bool] -> Bool
    [(Integer, Integer) -> Bool]

Polymorphic examples:

    [a]
    [b] -> b
    [(c,c) -> Bool]
------------------------------------------

What are some expressions that have these types?

What are some other instances of these types?

------------------------------------------
ALGEBRAIC TYPES

Can simulate enumerations

> data Font = Roman | Italic | Bold

data Color =

data Boolean =

Can also be used to define recursive types, including

data HaskellType =
------------------------------------------
------------------------------------------
OVERVIEW OF TYPE INFERENCE

Type checking: you declare type, compiler infers type,
compiler compares

Type inference: compiler infers type

In Haskell don't need to declare types (usually)

Example

> mymap f [] = []
> mymap f (x:xs) = (f x):(mymap f xs)
------------------------------------------

G. Type classes (Thompson 12, Davie 4.8) (CTM 4.7.5)

------------------------------------------
AD HOC POLYMORPHISM

parametric polymorphism:

    map :: (a -> b) -> [a] -> [b]

ad hoc polymorphism

> square :: Num a => a -> a
> square x = x * x
------------------------------------------

Why not require that you actually pass the multiplication yourself?

What's done in OO programming?

1.
type classes (Thompson 12.4)

------------------------------------------
TYPE CLASSES IN HASKELL

-- abbreviated Eq type class
class Eq a where
    (==), (/=) :: a -> a -> Bool
    x /= y = not (x==y)

-- abbreviated Ord type class
class (Eq a) => Ord a where
    compare :: a -> a -> Ordering
    (<), (<=), (>=), (>) :: a -> a -> Bool
    max, min :: a -> a -> a
------------------------------------------

2. type class instances (Thompson 12.3)

------------------------------------------
DECLARING TYPE CLASS INSTANCES

> data Prod a b = a :* b

> instance (Eq a, Eq b)
>           => Eq (Prod a b) where
>     (x :* y) == (x' :* y') =
>         (x == x' && y == y')

or you can write:

> data Cartesian a b = a :** b deriving Eq
------------------------------------------

V. Discussion (4.8)

What are the advantages of declarative programming?

------------------------------------------
ADVANTAGES OF DECLARATIVE PROGRAMMING
------------------------------------------

Can we do everything efficiently with the declarative model?

A. efficiency (4.8.1)

------------------------------------------
EFFICIENCY OF DECLARATIVE MODEL

CPUs optimized for manipulating data in place
   so hard to implement...

May have to rewrite the program...
 - incremental modifications of large data
     have to be careful to "single thread" it
     (never access old state)
 - memoization requires change in interface
 - may be more complex, due to lack of expressiveness
     see transitive closure algorithm in 6.8.1
------------------------------------------

B. modularity (4.8.2)

What does it mean to be modular?

------------------------------------------
MODULARITY
------------------------------------------

What problems are hard to modularize in the declarative model?

------------------------------------------
MODULARITY PROBLEMS OF DECLARATIVE MODEL

- memoization, accumulators
    change interface, affect clients
- instrumenting programs
    (counters for performance, etc.)

Why not use a compiler/preprocessor to translate stateful model
code to the declarative model?
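The point above that memoization changes the interface in the declarative model can be seen by contrast in a stateful language: with encapsulated mutable state, a cache can be hidden behind an unchanged function interface, so clients are unaffected. A hypothetical Java sketch (names are ours):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MemoDemo {
    // Wrap any Integer function with a cache. The client-visible
    // interface (Function<Integer,Integer>) is unchanged; in the
    // declarative model the cache would instead have to be threaded
    // through every caller as an extra argument and result.
    static Function<Integer, Integer> memoize(Function<Integer, Integer> f) {
        Map<Integer, Integer> cache = new HashMap<>(); // hidden state
        return n -> cache.computeIfAbsent(n, f::apply);
    }

    public static void main(String[] args) {
        final int[] calls = {0}; // instrumentation: count real evaluations
        Function<Integer, Integer> square =
            memoize(n -> { calls[0]++; return n * n; });
        System.out.println(square.apply(9)); // 81 (computed)
        System.out.println(square.apply(9)); // 81 (cached)
        System.out.println(calls[0]);        // 1
    }
}
```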
------------------------------------------

C. nondeterminism (4.8.3)

------------------------------------------
NONDETERMINISM (4.8.3)

Is the declarative model always deterministic?

Might we sometimes want nondeterminism? Why?
------------------------------------------

Can two clients send to the same server without being coordinated?

D. real world (4.8.4)

E. picking the right model (4.8.5)

------------------------------------------
RULE OF LEAST EXPRESSIVENESS

For each component, use the least expressive computation model
that results in a "natural" program.
------------------------------------------

What is "natural"?

F. extended models (4.8.6)

------------------------------------------
EXTENDED MODELS

Declarative sequential
 - functional programming, partial values
 - algebraic equational reasoning

Declarative concurrent
 - declarative + threads + by-need
 - algebraic equational reasoning + reasoning about flow control

Declarative with exceptions
 - no longer declarative
 - reasoning by operational semantics

Message passing concurrency:
 - declarative concurrency + ports
 - allows nondeterminism
 - can restrict nondeterminism to small parts of program

Stateful:
 - normal sequential programming
 - reasoning based on histories (states)

Shared-State concurrency:
 - stateful + threads
 - reasoning is hard

Relational:
 - declarative model + search
 - reasoning based on logic(?)
------------------------------------------

G. Using different models together, impedance matching (4.8.7)

------------------------------------------
IMPEDANCE MATCHING EXAMPLES (4.8.7)

Putting together modules with different computation/programming
models. Typically, a less expressive model is used inside a more
expressive one needed for a larger component.

Serializer (monitor):
   To use a sequential component inside a concurrent component.

Storage Manager:
   To use a declarative component inside a stateful component.
   The storage manager passes context to the declarative module,
   and keeps the result for next time.
Collector:
   To use a centralized component inside a distributed program.
   The collector accepts requests and passes them to the
   centralized component.

Protector:
   To use a component intended for a secure model inside an
   insecure model. The protector insulates it, verifying requests.
------------------------------------------

H. Advanced topics (4.9)

1. declarative concurrent model with exceptions (4.9.1) (skip)

What situations raise exceptions?

Is the declarative model with exceptions still declarative?

What happens if the execution of a by-need trigger cannot complete
normally?

------------------------------------------
FAILED VALUES

Syntax

<s> ::= ... | {FailedValue <x> <y>}

Sugars

Y = {FailedValue X} ==> {FailedValue X Y}

Recall: Operational Semantics of exceptions

[trigger creation]
  ({({ByNeed X Y},E)|Rest} + MST, s) --> ({Rest} + MST, s')
  where undetermined(s, E(Y))
  and s' = addTrigger(s,trig(E(X),E(Y)))

[ByNeed done]
  ({({ByNeed X Y},E)|Rest} + MST, s)
     --> ({({X Y},E)|nil} + {Rest} + MST, s')
  where determined(s, E(Y))

[trigger activation]
  ({ST} + MST, s)
     --> ({({X Y},{X->x,Y->y})|nil} + {ST} + MST, s')
  where needs(s)(ST,y) and hasTrigger(s,trig(x,y))
  and s' = remTrigger(s,trig(x,y))
------------------------------------------

How would we make the operational semantics use failed values?

How do we get the rules to work with these failed values?

2. lazy execution (4.9.2)

a. language design

------------------------------------------
LAZINESS AND LANGUAGE DESIGN

Lazy by default?
------------------------------------------

Should an imperative language like Java or C++ be lazy by default?

Should a declarative language be lazy by default?

b.
reduction orders

------------------------------------------
REDUCTION ORDERS

def: normal order reduction = evaluating arguments only when
needed, in leftmost outermost order

def: applicative order reduction = evaluate all arguments before
application

Example:

fun {OhNo} {OhNo} end  % loops forever
fun {Three X} 3 end
{Three {OhNo}}

def: nonstrict evaluation order = any evaluation order that
terminates when normal order does

def: lazy order = an evaluation order that does the minimum number
of reduction steps.

local X={F 4} in X+X end
------------------------------------------

Why is a nonstrict language hard to extend with explicit state?

c. parallelism

What is speculative execution?

Why can we do that with the declarative model?

3. dataflow variables as communication channels (4.9.3)

How is a dataflow variable like a communication channel?

Can you do asynchronous receive and synchronous send?

What does "nonblocking" mean?

4. synchronization (4.9.4)

------------------------------------------
SYNCHRONIZATION (4.9.4)

def: a synchronization point links two steps B and A iff in every
interleaving B occurs after A
------------------------------------------

5. utility of dataflow variables (4.9.5)

What are dataflow variables good for?

a. other technical terms (skip)

------------------------------------------
FUTURES AND I-STRUCTURES

def: A future is a placeholder for a computation.

fun {Future E}
   X in
   thread X = {E} end
   !!X
end

def: an I-structure is an array of single-assignment variables.
------------------------------------------
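Java's CompletableFuture is a close relative of the future defined above: a placeholder created first and bound ("completed") later, possibly by another thread, with join() playing the role of {Wait X}. A sketch (class and method names other than the JDK's are ours); note that unlike a dataflow variable, a second complete on an already-completed future is silently ignored rather than failing as an inconsistent binding:

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    static int boundLater() {
        // Create the placeholder first, like declaring a dataflow variable.
        CompletableFuture<Integer> x = new CompletableFuture<>();
        // Another thread binds it later, like thread X=3 end.
        new Thread(() -> x.complete(3)).start();
        // join() blocks until x is determined, like {Wait X}.
        return x.join() + 7;
    }

    public static void main(String[] args) {
        System.out.println(boundLater()); // 10
    }
}
```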