Com S 541 Lecture -*- Outline -*-
* Introduction
** terminology
applicative, imperative, higher order languages,
and functional programming languages
draw a diagram and give examples
why?
referential transparency, so can have lazy evaluation,
so can easily make new combining forms (like ifFalse).
One definition is the following (W. v O. Quine, "Word and
Object", MIT Press, 1960, as quoted in, A. J. T. Davie, "An
Introduction to Functional Programming Systems Using Haskell",
Cambridge, 1992, p. 5): "A language in which the value of expressions
only depends on the values of their well-formed sub expressions is
called referentially transparent." Davie continues, "another
equivalent way of thinking about this is to say that any well formed
sub-expression must be able to be replaced by another with the same
value without affecting the value of the whole."
So, for example, in an expression such as
f(g(a[i])-g(a[j]))
for the case when i == j, a compiler might choose to compute
g(a[i]) once, to verify that it has a value, and then compute f(0).
However, in a language that does not have referential
transparency, this optimization may change the meaning of the
overall expression.
Similarly, in an expression such as
(a[i]*h(x) == 0 ? q(x) : r(x))
a compiler does not need to evaluate h(x) if it knows from the
value of i that a[i] is zero, and if it knows that h(x) will
have a value.
This is important for optimization.
It's also important for Haskell's lazy evaluation: it means we do
not have to repeatedly evaluate an expression whose value is
needed more than once, and we need not evaluate an expression at
all if its value is never needed.
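To make the laziness point concrete, a combining form like the ifFalse
mentioned above can be written as an ordinary function. This is a
sketch; the exact name and argument order are assumptions:

```haskell
-- A user-defined combining form: possible because Haskell only
-- evaluates an argument when (and if) its value is needed.
ifFalse :: Bool -> a -> a -> a
ifFalse c onFalse onTrue = if c then onTrue else onFalse

-- The branch that is not taken is never evaluated, so a
-- diverging expression there is harmless:
safe :: Integer
safe = ifFalse False 1 (error "never evaluated")
```

In a strict language, both branches would be evaluated before the
call, and safe would crash.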
Q: random numbers in Haskell?
done as a stream
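As a sketch of the stream idea, here is a hand-rolled linear
congruential generator producing an infinite lazy list (the constants
and names are mine, for illustration; this is not the actual library
generator):

```haskell
-- An infinite lazy list of pseudo-random numbers.  Libraries such
-- as System.Random package the same idea behind a generator type;
-- the LCG constants below (from Numerical Recipes) are only
-- illustrative.
randomStream :: Integer -> [Integer]
randomStream seed = tail (iterate step seed)
  where step x = (1664525 * x + 1013904223) `mod` (2 ^ 32)

-- Consumers take only as many numbers as they need:
firstThree :: Integer -> [Integer]
firstThree seed = take 3 (randomStream seed)
```

Because the stream is an ordinary value, the same seed always yields
the same list, preserving referential transparency.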
** goals and techniques
------------------------------------------
FUNCTIONAL PROGRAMMING
GOALS AND TECHNIQUES
Goals:
Programming Techniques:
------------------------------------------
the goals are to solve these problems, what are they?
... - program at higher level of abstraction
- abstract computations to allow better reuse
- allow programs to be manipulated, proved correct,
and be derived simply
... - Operate on entire data structures at once
e.g., map, reduce
- abstract computations by using functionals
use functions as data representations
allows treatment of infinite data structures (e.g. inf. sets)
- equational reasoning
explicitly represent state if needed
possibility of aliasing will be explicit, so less problematic
referential transparency
(expression always denotes same value)
for all f, x: (f x) = (f x)
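The techniques above can be sketched in a few lines of Haskell (the
names are mine, for illustration):

```haskell
-- Operate on an entire structure at once:
doubled :: [Int]
doubled = map (* 2) [1, 2, 3]

total :: Int
total = foldr (+) 0 [1, 2, 3]

-- Use a function as the representation of an (infinite) set:
type Set a = a -> Bool

evens :: Set Integer
evens n = n `mod` 2 == 0

-- Laziness lets us name a whole infinite structure, too;
-- consumers take only the finite prefix they need:
squares :: [Integer]
squares = map (^ 2) [1 ..]
```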
*** the Von Neumann Bottleneck (Davie 1.1, 1.3) (omit)
See Backus's Turing award lecture (1978)
------------------------------------------
THE VON NEUMANN BOTTLENECK
!---------! !-----------!
! ! ! !
! CPU ================= Memory !
! ! ! (RAM) !
!---------! ! !
!-----------!
------------------------------------------
standard computer organization (simplified), due to Von Neumann
think of I/O as accessed through memory (as is typical)
the bottleneck is the bus connecting CPU and memory (store).
Q: what's the purpose of a program?
to change memory
typically one word at a time
Q: How is this reflected in programming languages?
:=
Q: what helps alleviate this?
pipelining, but that makes the bottleneck shorter, not wider!
parallelism allows replacement of 1 CPU by many,
permitting several concurrent accesses to memory
** interest and applications (omit most of this)
- can you still program?
- core notation/ideas used in semantics
close to math notations
thus learning how to program in functional style
is part of learning how to specify programming langs.
- easy to get parallelism (eval args in parallel)
- applications:
prototyping
semantics
AI
used for telephone switching systems by Ericsson (Erlang)
formal specification
*** programming
- pipelining and data transformations
- recursion and tail-recursion
- lazy evaluation to better structure programs
and to represent infinite data
- higher-order abstractions to allow reuse of patterns
- continuations and monads to insulate from change
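Two of these techniques can be sketched directly (illustrative names
are mine):

```haskell
-- Pipelining as function composition: each stage transforms
-- the whole list before handing it to the next.
pipeline :: [Int] -> Int
pipeline = sum . map (* 2) . filter even

-- Tail recursion: the recursive call is the last thing done,
-- with an accumulator carrying the running state.
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs
```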
*** reuse
- through abstractions (e.g., Haskell prelude and library)
- through monadic interpreters
- through domain-specific languages
*** language design
- pattern matching and algebraic data types
- type inference and parametric polymorphism
- type classes and ad hoc polymorphism
- novel syntactic conventions (layout, infix operators, constructors)
- limiting side effects and I/O
- built-in types: lists, etc.
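Pattern matching over an algebraic data type looks like this (a
hypothetical example, not from the notes):

```haskell
-- An algebraic data type with two constructors:
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

-- Each defining equation matches one constructor's shape:
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```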
*** semantics
- very simple semantics (rewriting)
just lambda calculus and pattern matching
- the concepts are those from denotational semantics
(and the notation is similar)
hence it's useful for prototyping semantic definitions
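For instance, evaluating an application of a simple definition is just
rewriting with its defining equation (the function name is mine, for
illustration):

```haskell
double :: Integer -> Integer
double x = x + x

-- Pure rewriting with the equation for double:
--   double (3 + 4)
-- = (3 + 4) + (3 + 4)   -- unfold double
-- = 7 + 7
-- = 14
-- (In practice Haskell uses graph reduction, so the shared
-- subexpression 3 + 4 is computed only once.)
```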
*** types
- type inference
- combination of parametric and ad hoc polymorphism
- classes and constructor classes
- using types to limit side-effects
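A small sketch of how the two kinds of polymorphism combine (the class
and its methods are hypothetical, for illustration):

```haskell
-- Ad hoc polymorphism: one name, a different method per type.
class Describable a where
  describe :: a -> String

instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n

-- Parametric polymorphism layered on top: one definition works
-- for any Describable element type.  The signature below could
-- also be inferred automatically if omitted.
describeAll :: Describable a => [a] -> [String]
describeAll = map describe
```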
*** reasoning
- equational reasoning and referential transparency
- using types to contain side effects
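A classic example of such reasoning, sketched in comments with a
checkable instance below (the proof is the standard map-fusion
argument, not specific to these notes):

```haskell
-- Referential transparency licenses equational reasoning: any
-- expression may be replaced by one of equal value.  From the
-- Prelude's equations
--   map f []       = []
--   map f (x : xs) = f x : map f xs
-- induction on xs proves the fusion law
--   map f (map g xs) = map (f . g) xs
-- Inductive step:
--   map f (map g (x : xs))
-- = map f (g x : map g xs)
-- = f (g x) : map f (map g xs)
-- = (f . g) x : map (f . g) xs   -- induction hypothesis
-- = map (f . g) (x : xs)
fusionHolds :: Eq c => (b -> c) -> (a -> b) -> [a] -> Bool
fusionHolds f g xs = map f (map g xs) == map (f . g) xs
```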
** History (omit)
1930s Alonzo Church invents the lambda-calculus
1962 LISP 1.5
1975 Turner's SASL: lazy language with graph reduction
1977 Meta Language of LCF (proof system), type inference algorithm
1985 Miranda (lazy, typed language)
but this costs money, so some academics started...
1988-92 development of Haskell
1992 Haskell formal definition version 1.2 published
1996 Haskell 1.3 definition includes monads
1998 Haskell 98 standard (tries for portability)