##### More Minkowski for Free

Due to technical problems uploading to this site, I posted a new article over here: http://lorentzframe.blogspot.com/

Posted Saturday, September 9, 2006 9:15 AM by brianbec | 1 comment(s)


Filed under: Physics, simulation, mathematics

Consider the following snapshot from a Geometry Expressions (GX) (http://www.geometryexpressions.com) file:

The blue coordinate-frame lines represent our inertial frame, at rest with respect to us. The red coordinate-frame lines represent another inertial frame, one moving at speed v with respect to us. The red x-axis has slope v (actually v / c^2, but c = 1 here) and the red y-axis has slope 1/v, reflecting the space-time symmetry of the Special Theory of Relativity. B is an event in space-time, that is, a single, fixed point in space and time whose coordinates depend on the frame of reference in which the coordinates are measured. The cyan lines, parallel to the blue axes, measure the coordinates in our frame. The magenta lines, parallel to the red axes, measure the coordinates in the moving frame. Geometry Expressions calculates the Lorentz transformation for us. Minkowski geometry for free. Is that cool, or what? Here's the GX source file: http://home.comcast.net/~brianbec/Minkowski1.gx . (Speed of light = 1, and every place you see v * x, mentally substitute v * x / c^2.)
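If you'd like to check the arithmetic behind the picture, here is a quick numerical sketch (Python rather than Geometry Expressions, and the function name `lorentz` is my own) of the Lorentz transformation with c = 1; the space-time interval t² − x² comes out the same in both frames:

```python
import math

def lorentz(t, x, v):
    """Coordinates of event (t, x) in a frame moving at speed v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 1.0, 0.5
tp, xp = lorentz(t, x, 0.6)        # gamma = 1.25
# The interval t^2 - x^2 is frame-independent.
assert abs((t * t - x * x) - (tp * tp - xp * xp)) < 1e-12
```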

Posted Thursday, September 7, 2006 9:34 PM by brianbec | with no comments

Filed under: Physics, simulation, relativity

Should have said last time that space-time "curvature" is relativity code-speak for "acceleration" or "gravitation." It's often something one can feel. Here's an imaginary example. Put a very precise atomic clock, call it A, on board a rocket. Put another, identical clock, call it B, in a lab on the ground. Start the clocks ticking. Now, let the rocket take off (ACCELERATION! Curvature difference! We can feel it!). Once the rocket stops accelerating, but while it's moving more-or-less in a straight line, let's do some experiments, say, measuring the decay rates of radioactive samples. A and B get the same answers, since at this time there is no curvature difference. Now, let A and B compare notes: let the rocket clock A send some of its data to B. B sees A's clock running slow! Vice versa: if the ground crew sends some of B's data to the rocket, A will reckon B's clock running slow. Can they both be right? YES. As long as the rocket keeps flying, they will both reckon each other's clocks running slow, but they're getting further and further apart and they can't compare notes forever unless the rocket turns around. ACCELERATION! Curvature difference! We can feel it! The rocket crew is actually, objectively, measurably different from the ground crew, and when they get back together, they will both agree that the clock on the rocket actually ran slower, over all, than the one on the ground.

Such experiments have been done many times with atomic clocks on planes, satellites, the space shuttle, and rockets. The details are complicated by the fact that clocks on the ground run slower because they're deeper in the Earth's gravitational field (CURVATURE! We can feel the difference!) and the fact that the projectiles travel non-straight-line paths (codespeak: "non-inertial" paths). In fact, the Earth-grav effect is easier to measure: put two clocks on different floors of a building and compare notes: they get different results because their curvature is different (grav field is stronger on lower floors, clocks actually run slower, things weigh more, we can feel the difference!). GPS satellites run at slightly different altitudes because of lumps (mountains, valleys) in the grav field, and it's necessary to correct for their mutual clock drifts to maintain accuracy.
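To get a feel for the sizes involved, here is a back-of-the-envelope sketch (Python; the height of 10 metres is my illustrative choice, not a figure from the experiments above) of the two-floors effect, using the weak-field rate difference gΔh/c²:

```python
g = 9.81        # m/s^2, surface gravity
c = 2.998e8     # m/s, speed of light
dh = 10.0       # metres between the two floors

frac = g * dh / c ** 2           # fractional clock-rate difference
drift_per_day = frac * 86400.0   # seconds the upper clock gains per day
# About one part in 10^15: tiny, but modern atomic clocks can see it.
assert 1e-15 < frac < 2e-15
```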

This is really weird, but it's actually, objectively, measurably true. The theory, which is conceptually simple, though mathematically intricate, is very very accurate. No one has ever seen anything to refute the theory.

Weird as this is, it's NOTHING compared to the weirdness of quantum physics.

Posted Saturday, September 2, 2006 7:10 AM by brianbec | 3 comment(s)

Filed under: CLR, Math, Physics, .NET, simulation, relativity

One of the groups that I have lunch with likes to discuss physics and astronomy. During one of these lunches, there was a bit of confusion about the Theory of Relativity (ToR), with people not being quite sure whether it means that measurements depend upon one's frame of reference. I was fortunately able to clarify this absolutely (pun intended). The axioms of the ToR are that

* if you do an experiment and I do the same experiment, and

* if the regions of space and time enclosing the experiment are small enough to appear un-curved (or if they have the same curvature within our measurement precision)

you and I will get the same answers. These axioms are *really* a statement of the absolute and permanent nature of the laws of physics! For "small" experiments, we get the same answers, period, now and forever, with "small" meaning the above.

So why is it called Theory of Relativity and not the Theory of Absolutes? Because if I measure YOUR experiment from a distance, I might get a different answer than you get, depending on our mutual (relative) velocity and curvature differences. I will ALWAYS get the same answer on my copy of the experiment as you get on your copy of the experiment, but I might get a different answer looking at your experiment than you get on your experiment. The two ToR's tell us exactly how to account for the different answers. The Special ToR accounts for our mutual velocities, and the General ToR accounts for differences in spacetime curvature.

Posted Friday, September 1, 2006 10:31 AM by brianbec | 1 comment(s)

Filed under: CLR, Math, Physics, .NET, simulation, relativity

For some time, it was popular to experiment with VMs (virtual machines) that accepted programs represented as trees or graphs or related algebraic structures. The most famous examples are Haskell's G-Machine (standing for "graph" machine) and CAML's Categorical Abstract Machine, and there are many more. The whole topic of Intentional Programming is devoted to manipulation of programs as trees, even to the point of their surface syntax. We can see contemporary efforts to promote programs-as-trees in XML -- today's tree-syntax-of-choice -- for all kinds of VMs from query processors to rendering and simulation engines.

However, most workers have completely abandoned this approach and gone back to instruction sets as the preferred representation at the VM boundary. Java and CLR are obvious examples, but even Haskell and CAML have gone back to instructions and abandoned trees. Why? There is one big answer. The consensus -- after trying it -- is that it's the compiler's job to reduce recursion to looping, NOT the VM's job. That's the division of labor that promotes good software development by letting the VM concentrate on optimization, JITting, and efficient execution. Since trees and their relatives are an inherently recursive data structure, feeding them to the VM means that the VM needs to do the pedestrian step of boiling off recursive definitions before it can get to its real job.

Recursion is great for describing things, but lousy for executing things. Describing things is the compiler's job, and executing things is the VM's job.
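To make that division of labor concrete, here is a small illustration (Python, my own sketch): the first function is the recursive *description*, written tail-recursively; the second is the loop a tail-call-eliminating compiler effectively hands to the VM for *execution*:

```python
def fact_rec(n, acc=1):
    # The description: the recursive call is the last thing done (a tail call).
    return acc if n == 0 else fact_rec(n - 1, acc * n)

def fact_loop(n):
    # The execution: the same computation with the recursion boiled off.
    acc = 1
    while n != 0:
        acc, n = acc * n, n - 1
    return acc

assert fact_rec(10) == fact_loop(10) == 3628800
```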

Posted Tuesday, August 29, 2006 8:10 AM by brianbec | with no comments

Reference: “Feynman’s Lost Lecture, The Motion of Planets Around the Sun,” by David L. Goodstein and Judith R. Goodstein, W. W. Norton & Company, New York, 1996.

Richard Feynman was one of the most important physicists of all time. He had a peculiar way of taking ideas out of thin air and bringing them to bear on problems that seemed unrelated, thereby busting them wide open. He is perhaps best remembered for the path-integral formulation of relativistic quantum mechanics, which not only showed that the otherwise stupefyingly mysterious quantum mechanics is just another application of the old familiar principle of least action, but also revealed space-time symmetries that resolved conundrums about antiparticles and particles going backwards in time. Every word he ever spoke or wrote is worthy of study. I had the privilege of meeting with him several times over the years as a child, an older student, and as a professional physicist, and witnessing the magic at work.

The book referenced at the top records a wonderful demonstration by Feynman that Kepler’s laws follow from Newton’s laws plus ordinary, high-school plane geometry: **no calculus required**. No differential equations, no angular momentum, no constants of integration. This is Feynman at his best: reducing something seemingly big, complicated, and difficult to something small, simple, and easy. In this PDF file (http://home.comcast.net/~brianbec/Feynman.zip), I present an original reconstruction of this argument, accompanied by several documents for the application.

My presentation is very short, on purpose, but omits no detail. To get a more lengthy presentation that delectates on every precious morsel, PLUS an audio CD recording of the man himself delivering the lost lecture, see the book referenced at the top. You won’t regret it.

Here are Kepler’s three laws, in the order that we prove them:

**K1**: A planet orbiting a star sweeps out equal areas in equal times

**K2**: The square of the period of a closed orbit is proportional to the cube of its longest axis

**K3**: A closed orbit is an ellipse (open orbits are parabolic or hyperbolic)

Now, here are Newton’s laws (specialized to the fixed-star assumption and numbered as we need them):

**N1**: A body (planet) in motion continues in straight-line motion unless acted on by a force

**N2**: The change in velocity of a planet over a time interval is proportional to the force applied (N1 is really a special case of N2 with force equal to zero)

**N3**: The force between a planet and a star acts along the line connecting them (central-force law)

**N4**: The force between a planet and a star is proportional to the inverse square of the distance between them (inverse-square law)
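As a quick numerical sanity check (Python, my own sketch, with units chosen so the gravitational constant times the star's mass is 1), K1 follows from the central-force law N3 alone: the areal velocity |r × v| / 2 stays constant along the orbit, so equal times sweep equal areas:

```python
def step(r, v, dt=1e-4):
    """One semi-implicit Euler step under an inverse-square central force."""
    x, y = r
    d3 = (x * x + y * y) ** 1.5
    vx = v[0] - x / d3 * dt
    vy = v[1] - y / d3 * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

r, v = (1.0, 0.0), (0.0, 1.2)      # a bound, elliptical orbit
L0 = r[0] * v[1] - r[1] * v[0]     # twice the areal velocity
for _ in range(50000):
    r, v = step(r, v)
L1 = r[0] * v[1] - r[1] * v[0]
assert abs(L1 - L0) < 1e-6         # equal areas in equal times
```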

Now, get the downloads to get the rest of the story :)

Posted Thursday, August 3, 2006 9:22 PM by brianbec | 1 comment(s)

Filed under: Math, Physics, General Software Development

Posted Thursday, August 3, 2006 3:44 PM by brianbec | 2 comment(s)

Filed under: Visual Basic, CLR, .NET

I wasn’t sure I was going to report a version of Y using Object rather than explicit recursive types, but it turned out to be so easy and pretty that I couldn’t resist. It’s rather a dead-end, evolutionarily speaking, since future development will use the template of the explicit Universal type presented last time, but it’s amusing enough for a blog entry:

```vb
REM (λ a -> z z a)
Class OfA
    Private Z As Object

    Sub New(ByVal Z As Object)
        Me.Z = Z
    End Sub

    Function [Of](ByVal a As Object) As Object
        Return Z.[Of](Z).[Of](a)
    End Function
End Class

REM [λ z -> h(λ a -> z z a)]
Class OfZ
    Private H As Object

    Sub New(ByVal H As Object)
        Me.H = H
    End Sub

    Function [Of](ByVal z As Object) As Object
        Return H.[Of](New OfA(z))
    End Function
End Class

REM y h = [λ z -> h(λ a -> z z a)] [λ z -> h(λ a -> z z a)]
Class Y
    Function [Of](ByVal h As Object) As Object
        Dim z As New OfZ(h)
        Return z.[Of](z)
    End Function
End Class

REM λ n -> if n = 0 return 1 else return n * fact(n - 1)
Class OfN
    Private fact As Object

    Sub New(ByVal h As Object) REM inflate the closure
        Me.fact = h
    End Sub

    Function [Of](ByVal N As Object) As Object
        If N < 0 Then
            Return N
        End If
        If N = 0 Then
            Return 1
        Else
            Return N * fact.[Of](N - 1)
        End If
    End Function
End Class

REM Currying: λ g -> (λ n -> ...). This is our target for y.
Class f
    Function [Of](ByVal fact As Object) As Object
        Return New OfN(fact)
    End Function
End Class

Module Module1
    Sub Main()
        Dim y = New Y()
        Dim fact = y.[Of](New f)
        Dim result As Object = 1
        While result > 0
            Console.Write("Enter an Integer Number (negative to stop): ")
            result = fact.[Of](Console.ReadLine())
            Console.WriteLine("{0}", result)
        End While
    End Sub
End Module
```

Posted Tuesday, June 6, 2006 7:38 AM by brianbec | 1 comment(s)

Filed under: Visual Basic, CLR, .NET

Ok, we have almost everything on hand to write a clean and honest recursion-free factorial in VB, albeit with recursive types. I actually struggled with this for some time, the problem being that we have functions of *Long* to *Long*—call those of type *df*, then functions from *df* to *df*—call those of type *dh*, then functions from *dh* to *df*, and functions from *dh* to *dh*—mmm, ok this situation isn’t converging. If we needed two flashes and one rolling thunderstorm of genius to get this far, it was looking like we might need a cat-3 hurricane of genius to get past it. Fortunately, my colleague, Mark Shields, showed me the way out. I’m not sure I would ever have thought of this myself, but it’s a lovely technique and it’s now added permanently to my bag of tricks. The technique is to model the type system itself—in miniature, of course—with a Universal type *U* that can model either a *Long* or a function from *U* to *U*. Now, all the variety of function types are swept up under one, large, inclusive rug. To model a *Long*, we’ll have a *MustOverride ReadOnly Property* that returns the *Long* value, and to model a function we’ll model function application itself through a *MustOverride* function named *[Of]* that takes a U and returns a U (square brackets let us reuse a keyword for our own purposes, and *Of* is the keyword for generic types, which we’re not using here for simplicity. We could have called the function *appliedTo*, but *[Of]* turns out to be definitely prettier, as we shall see):

```vb
MustInherit Class U
    MustOverride ReadOnly Property V() As Long
    MustOverride Function [Of](ByVal u As U) As U
End Class
```

Of course, we COULD use generics, replacing *Long* with a type variable *T*, but that’s a tweak at best. Let’s move on. For the subtype of *U* representing *Long*, we just need *[Of]* to throw:

```vb
Class I
    Inherits U

    Private i0 As Long

    Sub New(ByVal i As Long)
        i0 = i
    End Sub

    Overrides ReadOnly Property V() As Long
        Get
            Return i0
        End Get
    End Property

    Overrides Function [Of](ByVal x As U) As U
        Throw New ApplicationException( _
            "Sorry, can't apply a Long to an argument")
    End Function
End Class
```

The subtype of *U* that represents functions should be *MustInherit* itself, and, of course, should throw on attempted access of the *Long* property, which just doesn’t obtain for functions:

```vb
MustInherit Class λ
    Inherits U

    Overrides ReadOnly Property V() As Long
        Get
            Throw New ApplicationException( _
                "Sorry, can't get the value of a function")
        End Get
    End Property
End Class
```

Let’s start with the ultimate target for application of the y combinator. We called this *f* in the previous blogs on this theme. This is a function of two arguments, the first of which is the factorial function itself, the second of which is a *Long* number to which to apply *factorial*. Since we’re currying, we represent this as a function that takes a function and returns a function of a *Long*, represented, in turn, by two subclasses of λ, remembering to make closures over free variables:

```vb
REM Currying: λ g -> (λ n -> ...). This is our target for y.
Class f
    Inherits λ

    Overrides Function [Of](ByVal fact As U) As U
        Return New OfN(fact)
    End Function
End Class

REM λ n -> if n = 0 return 1 else return n * fact(n - 1)
Class OfN
    Inherits λ

    Private fact As λ

    Sub New(ByVal h As λ) REM inflate the closure
        Me.fact = h
    End Sub

    Overrides Function [Of](ByVal N As U) As U
        Dim i = N.V
        If i = 0 Then
            Return New I(1)
        Else
            Return New I(i * fact.[Of](New I(i - 1)).V)
        End If
    End Function
End Class
```

The money line is the final **Return** in *OfN*, the one that multiplies *i* by the result of the recursive application. Even though there is lots of machinery lying around, this is almost readable. Ok, now y itself, which we derived last time, with two auxiliary classes with closures, one for each of the lambda forms it uses internally:

```vb
REM y h = [λ z -> h(λ a -> z z a)] [λ z -> h(λ a -> z z a)]
Class Y
    Inherits λ

    Overrides Function [Of](ByVal h As U) As U
        Dim z As New OfZ_hFree(h)
        Return z.[Of](z)
    End Function
End Class
```

```vb
REM [λ z -> h(λ a -> z z a)]
Class OfZ_hFree
    Inherits λ

    Private H As U

    Sub New(ByVal H As U)
        Me.H = H
    End Sub

    Overrides Function [Of](ByVal z As U) As U
        Return H.[Of](New OfA_zFree(z))
    End Function
End Class
```

```vb
REM (λ a -> z z a)
Class OfA_zFree
    Inherits λ

    Private Z As U

    Sub New(ByVal Z As U)
        Me.Z = Z
    End Sub

    Overrides Function [Of](ByVal a As U) As U
        Return Z.[Of](Z).[Of](a)
    End Function
End Class
```

That’s it! We’re done, and we really just had to read off the mathematics. Wasn’t too hard once Mark showed me the road. By the way, if you change y so that it uses the following, divergent form, you can generate stack-overflows and check that we really do need the one with delayed application:

```vb
REM [λ z -> h(z z)] : Loops forever
Class OfZ_hFree_diverges
    Inherits λ

    Private H As U

    Sub New(ByVal H As U)
        Me.H = H
    End Sub

    Overrides Function [Of](ByVal z As U) As U
        Return H.[Of](z.[Of](z))
    End Function
End Class
```

Ok, let’s call it and call it a day:

```vb
Module Module1
    Sub Main()
        Dim y = New Y()
        Dim fact = y.[Of](New f)
        Dim result = fact.[Of](New I(20))
        Console.WriteLine("{0}", result.V)
        Console.Write("Press any key to end the program...")
        Console.ReadKey()
    End Sub
End Module
```
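For comparison (my own transcription, not part of the original posts), here is the same construction in Python, where every value already behaves like *U*, so the I/λ scaffolding melts away while the class structure stays recognizably the same:

```python
class F:                      # Currying: λ fact -> (λ n -> ...)
    def Of(self, fact):
        return OfN(fact)

class OfN:                    # λ n -> if n = 0 then 1 else n * fact(n - 1)
    def __init__(self, fact):
        self.fact = fact      # inflate the closure
    def Of(self, n):
        return 1 if n == 0 else n * self.fact.Of(n - 1)

class OfA:                    # λ a -> z z a
    def __init__(self, z):
        self.z = z
    def Of(self, a):
        return self.z.Of(self.z).Of(a)

class OfZ:                    # λ z -> h (λ a -> z z a)
    def __init__(self, h):
        self.h = h
    def Of(self, z):
        return self.h.Of(OfA(z))

class Y:                      # y h = [λ z -> h(λ a -> z z a)] applied to itself
    def Of(self, h):
        z = OfZ(h)
        return z.Of(z)

fact = Y().Of(F())
assert fact.Of(20) == 2432902008176640000
```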

Posted Monday, June 5, 2006 7:55 AM by brianbec | 1 comment(s)

Filed under: Visual Basic, CLR, .NET

Let’s review the bookends of our so-far successful foray into recursion elimination. The general theme has been **replacement of recursive calls by self-application of functions adhering to recursive types**. (Now, I think we can even eliminate the recursive types, but we still have a way to go before that.) The first, successful recursive type was

```vb
Delegate Function dg(ByVal g As dg, ByVal n As Long) As Long
```

But we didn’t have closures and currying, so we had to keep an ugly argument slot open for the self-applied function. ‘Inventing’ closures and currying led us to invent a pair of function types:

```vb
Delegate Function df(ByVal n As Long) As Long
Delegate Function dh(ByVal h As dh) As df
```

Along the way, we gained something, but hardly noticed it, and then we lost it, again hardly noticing it. In this installment, we are going to get it back permanently. But what was it? Remember, at the very beginning, we wanted to write

```vb
Function fact(ByVal f As df, ByVal n As Long) As Long
    If n = 0 Then : Return 1
    Else : Return n * f(n - 1)
    End If
End Function
```

but we were forced to write the following because we couldn’t get the declarations and the types uncoiled:

```vb
Function preFac(ByVal g As dg, ByVal n As Long) As Long
    If n = 0 Then : Return 1
    Else : Return n * g(g, n - 1)
    End If
End Function
```

So, we channeled closures and currying to get rid of the open argument slot for **g**, but lost track of a primary wish: to have the function call in the final position NOT exhibit self application so blatantly. Brings to mind gagging on lukewarm bouillabaisse. While building up closures, we drove right by this little roadside poissonnerie:

```vb
Function lambdaF(ByVal n As Long) As Long
    If n = 0 Then : Return 1
    Else : Return n * f(n - 1)
    End If
End Function
```

because this was only half of a real closure, of course. While it got the location of **f** right, its value didn’t come from the ambient environment. Instead, it was illicitly loaded at creation time, using secret knowledge, the **AddressOf lambdaF** itself. So, we got the *hiding-of-recursion-through-one-level-of-indirection* we wanted, but at the cost of defining **lambdaF** with reference to its own definition. We were trying to escape exactly that with the whole exercise! So we moved down the road to

```vb
Delegate Function dh(ByVal h As dh) As df

Private h As dh

Private Function lambdaF(ByVal n As Long) As Long
    If n = 0 Then : Return 1
    Else : Return n * h(h)(n - 1)
    End If
End Function

Public Function lambdaH(ByVal adh As dh) As df
    Me.h = adh
    Return AddressOf lambdaF
End Function
```

Now, this new **lambdaF** doesn’t refer to itself in any way. One more level of indirection fixed it: the creator is responsible for loading the value of **h** from the ambient environment, where it can OPTIONALLY refer to **h** itself. The restoration of that option legitimizes the self-application:

```vb
Function fact3(ByVal n As Long) As Long
    Dim acf3 = New cf3
    Return acf3.lambdaH(AddressOf acf3.lambdaH)(n)
End Function
```

We made it! A place that fixes bouillabaisse, but, alas, it’s lukewarm again. **LambdaF** is ugly because it must blatantly repeat the self-application trick again, via **h(h)**. At least there is enough curry in the soup to prevent gagging. Or is there?

Notching up the abstraction of the discourse a tiny bit, we keep trying to write something like this

fact = f(f)

and getting our knickers in a twist because **f** has to do something unnatural for **fact**. **fact** wants to talk about numbers and **f** has to keep talking about *functions*, sometimes functions that take functions as arguments, and sometimes functions that take numbers as arguments. Let’s just back off and suppose we could pass **fact** itself to **f**:

fact = f(fact)

Well, this would do the trick, wouldn’t it? Now, **fact** could just call itself in the final position, legitimately divorced from self-definition, without carrying around the ugly divorce decree in plain sight as blatant self-application of a free variable.

This is a **fixed-point equation**: it says that **fact** is left unchanged by the application of **f**. Compare the familiar situation with numbers: the linear function

f(y) = m y + b

gives the fixed-point equation

y = f(y) = m y + b

which has the fixed-point solution

y = b / (1 – m)

In other words, there is a unique fixed point unless m = 1, in which case either no points are fixed points, if b ≠ 0; or all points are fixed points, if b = 0.
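To watch the fixed point emerge concretely (a Python sketch of my own, with illustrative m and b), just iterate y → m·y + b and see it settle on b / (1 − m):

```python
m, b = 0.5, 3.0               # |m| < 1, so the iteration converges
y = 0.0
for _ in range(60):
    y = m * y + b             # apply the function over and over
assert abs(y - b / (1.0 - m)) < 1e-12   # the fixed point: 6.0
```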

But in our world of functions, we’re definitely feeling queasy. We’ve left the secure, dry land of functions on numbers and are pitching along the rolling seas of functions of functions. But there is much adventure and treasure ahead, even if only a properly cooked bouillabaisse.

We have only **f** to work with. Think, think, think. Ok (flash of genius), we're already on the high seas of higher-order functions, so maybe we can imagineer yet-another-function, call it **y**, that can convert **f** into **fact**. We would get the mind-twisting equation:

fact = y(f) = f(y(f)) = f(fact)

Does this have a solution for **y** (we already know what **f** is)? Think, think, think. Ok (flash of genius), maybe self-application could help. Think, think, think. Maybe y(f) equals some other function applied to itself. Ok (rolling thunderstorm of genius), how about:

let g(x) = f(x(x)) *for any x (notice the free variable, **f**)*

let y(f) = g(g), *which equals* f(g(g)) = f(y(f))

By gum, we’ve got it! A **y** that converts any function **f** into its fixed point, assuming that the fixed point exists. The condition for existence seems to be just that **f** take a function as its first argument, and we can curry off the rest of the arguments. Of course, this is the famous Y combinator and all that genius is borrowed from Haskell Curry himself.

There is one more little thing we must do before we boil this down into concrete realization in VB. If you actually called **y(f)** above, it would call **g(g)**, which would call **f(g(g))**, which would call **g(g)**, and the whole thing would diverge till we get a stack overflow. We must make sure that the **g(g)** in **f(g(g))** is not called **yet**, so we replace the argument of **f**, namely **g(g)**, with a form whose evaluation is delayed until it is actually applied to an argument. We will see the exact form shortly, as the η conversion.

The current notation for discussing the mathematical meaning of all this function hacking is very clumsy, however, and gets in the way of understanding, and it will make it REALLY difficult to understand the upcoming VB code. It forces us to name every function, and the whole point of the project-at-large is to avoid naming functions by writing lambdas instead. We’ve got to graduate to the real thing. So here goes:

λ x → Q

Read this as “lambda of x returning Q.” It means *the function of **x** that does (or returns) **Q***, where Q stands for anything that may depend on x and upon other variables. This is **abstraction of x**. If x were a free variable in Q, then it is bound in λ x → Q. (Reminder: a free variable is any variable that is not a function argument at any level of nesting in an expression.) We can give this function a name as follows:

P = λ x → Q

This associates to the right, so that

Q → R → S = (Q → (R → S))

(if we think about it, it really only can be read that way). Abstraction relates back to the traditional notation as follows:

P(x) = Q

In the lambda notation, we do NOT need to write the parentheses around the function arguments and this is very helpful in more involved expressions. So, the above is the same as this:

P x = Q

This is **function application**, and it associates to the left, so

f g h = ((f g) h) *(function **f** applied to **g**, and the result applied to **h**)*

To evaluate a function (called β reduction), we substitute an actual argument for a variable,

(λ x → Q) R ~> Q[x ← R] *(if all free variables in R remain free)*

Read this as “lambda of x returning Q applied to R becomes Q with R replacing x everywhere.” We can avoid ‘capturing’ free variables in R by just renaming any bound variables in Q that collide with them. See the wiki entry for more.

In the new notation, write the first version of y as

y f = [λ x → f (x x)] [λ x → f (x x)]

Read this as “y of f equals the function of x that returns f applied to x of x, applied to (a copy of) that same function.” Notice that we explicitly overrode the left-associativity in f (x x); we don’t want (f x) x — “f of x applied to x” — which would be the interpretation of just f x x. To evaluate y f, just follow β reduction, replacing x in f (x x) with λ x → f (x x), resulting in

f ( [λ x → f (x x)] [λ x → f (x x)] ) = f (y f)

just as required. Doing the η conversion to delay evaluation changes nothing:

y f = [λ x → f (λ a → x x a)] [λ x → f (λ a → x x a)]

because (λ a → x x a) = x x, mathematically.
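To convince yourself that the η-converted form really does terminate under strict (applicative-order) evaluation, here it is transcribed literally into Python (my own transcription; Python spells λ as lambda):

```python
# y f = [λ x -> f (λ a -> x x a)] [λ x -> f (λ a -> x x a)]
y = lambda f: (lambda x: f(lambda a: x(x)(a)))(lambda x: f(lambda a: x(x)(a)))

# The target f: a function of fact returning a function of n.
f = lambda fact: (lambda n: 1 if n == 0 else n * fact(n - 1))

fact = y(f)
assert fact(10) == 3628800   # without the η expansion, this would overflow the stack
```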

I’m just warming you up with this notation, because I’m going to put it in the comments when I write the VB version of this, next installment.

Posted Sunday, June 4, 2006 9:08 PM by brianbec | 1 comment(s)

Filed under: Visual Basic, CLR, .NET