teaching machines

CS 330 Lecture 30 – Call By Name

Dear students,

When you call a function, what happens? In particular, what happens in this code?

#include <iostream>
using std::cout;
using std::endl;

int m() {
  cout << "m" << endl;
  return 5;
}

int r() {
  cout << "r" << endl;
  return 7;
}

int multiply(int a, int b) {
  cout << "*" << endl;
  return a * b;
}

int main(int argc, char **argv) {
  cout << multiply(m(), r());
}

With most compilers, we see the following output (strictly speaking, C++ leaves the order in which a call's arguments are evaluated unspecified):

m
r
*

Our main effectively becomes this:

int a = m();
int b = r();
cout << multiply(a, b);

At the (ARM-like) machine code level, our code looks something like this:

call m
r6 = r0         // don't let m's return value get clobbered
call r
r1 = r0         // put r into the second parameter slot
r0 = r6         // put m into the first parameter slot
call multiply

It’s worth noting that if we were using a language without side effects, we could safely rearrange the order of execution: the worlds of m and r would be identical either way, so their order of execution doesn’t matter:

call r
r1 = r0         // put r into the second parameter slot
call m          // m is automatically in r0
call multiply

Could there be any other way for this code to have been executed? If you’ve grown up on Java and C, you probably have a hard time imagining any other possible way. But yes, there are other ways that this might have played out.

What’s happening here is that the parameters to multiply are being evaluated eagerly. Before multiply ever sees the light of day, its values have been readied. Eager evaluation tends to be a feature of languages that use call-by-value parameter passing. In call-by-value languages, a function gets a copy of the actual parameters it is given. In fact, a better term would be pass-a-copy. That’s why code like this has no lasting effect:

public static void swap(int a, int b) {
  int tmp = a;
  a = b;
  b = tmp;
}
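The same pass-a-copy behavior shows up in most mainstream languages. Here's a minimal Ruby sketch of the idea: reassigning a parameter inside a method rebinds only the method's local copy, never the caller's variables.

```ruby
# swap rebinds its local parameters a and b; the caller's
# variables hold copies of the references, so they are unaffected.
def swap(a, b)
  tmp = a
  a = b
  b = tmp
end

x = 1
y = 2
swap(x, y)
puts x  # still 1
puts y  # still 2
```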

Consider the cost of eager evaluation. If a parameter is never referenced in a function, how many times will we have evaluated it? Once. If it’s referenced once? Still once. If it’s referenced many times? Still once. Let’s record this in a table.

scheme          0 refs   1 ref   n refs
call-by-value   1        1       1

Sometimes it would be nice to delay the evaluation. For example, suppose we are writing a testing framework and we tried this definition for assert:

#include <stdio.h>

void assert(int expected, int actual) {
  if (expected != actual) {
    printf("[%s:%s:%d] Expected: %d | Actual: %d\n", __FILE__, __func__, __LINE__, expected, actual);
  }
}

Let’s test this:

int main(int argc, char **argv) {
  assert(4, 3 * 2);
  assert(4, 1 + 1 + 1 + 1 + 1);
  return 0;
}

Here’s the output:

[foo.c:assert:6] Expected: 4 | Actual: 6
[foo.c:assert:6] Expected: 4 | Actual: 5

Our definition of assert fails us. For one, the reference to the file is not helpful at all. For two, we don’t get much information about the code that produced the wrong result. That’s because by the time assert gets control, the actual value has been eagerly evaluated. But what if we could delay its evaluation? Well, we can’t naturally do that in C, but we can strip out the function mechanism altogether. We’ll use the preprocessor to change this code from a function that introduces a new isolated context into a macro that essentially gets copied and pasted into the calling context, where the actual value has yet to be evaluated.

On top of that, we can use another trick of the preprocessor to print the raw source of the expression that generates the actual value. We can turn the unevaluated expression into a string with the # operator:

#define ASSERT(expected, actual) \
  if (expected != actual) \
    printf("[%s:%s:%d] Expected: %d | Actual: %s -> %d\n", __FILE__, __func__, __LINE__, expected, #actual, actual)

Here’s our new output:

[foo.c:main:15] Expected: 4 | Actual: 3 * 2 -> 6
[foo.c:main:16] Expected: 4 | Actual: 1 + 1 + 1 + 1 + 1 -> 5

Much better! What’s actually going on under the hood? Let’s compile with gcc -E to preprocess:

int main(int argc, char **argv) {
  if (4 != 3 * 2) printf("[%s:%s:%d] Expected: %d | Actual: %s -> %d\n", "foo.c", __func__, 15, 4, "3 * 2", 3 * 2);
  if (4 != 1 + 1 + 1 + 1 + 1) printf("[%s:%s:%d] Expected: %d | Actual: %s -> %d\n", "foo.c", __func__, 16, 4, "1 + 1 + 1 + 1 + 1", 1 + 1 + 1 + 1 + 1);
  return 0;
}

So, macros in C provide one way of “delaying” execution. A macro’s parameters enter into the macro without first having been evaluated. They are evaluated as needed.
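In a language with first-class closures, we can get a similar delay without a preprocessor by wrapping the actual parameter in a lambda. Here's a sketch in Ruby (the name assert_lazy is my own); the expression isn't evaluated until the function decides to call the thunk:

```ruby
# assert_lazy receives the actual value as a thunk; the expression
# isn't evaluated until assert_lazy calls it. It returns a failure
# message, or nil if the assertion passes.
def assert_lazy(expected, actual_thunk)
  actual = actual_thunk.call
  return nil if expected == actual
  "Expected: #{expected} | Actual: #{actual}"
end

puts assert_lazy(4, -> { 3 * 2 })  # Expected: 4 | Actual: 6
```

Unlike the macro, this has no stringification trick, but it also has none of the macro's copy-and-paste dangers.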

We should note that if your expression issues a side effect, macros are particularly dangerous. In the code above, our actual expression gets evaluated twice. Any side effect would also be triggered twice. Sometimes this is bad. Consider this code:

#define PRINTDIGIT(c) if (isdigit(c)) printf("%c", c)
PRINTDIGIT(getchar());
PRINTDIGIT(getchar());
PRINTDIGIT(getchar());

Let’s run this on this input:

1a2b3c4d5f6e

Our output is this:

abc

That’s not quite what we wanted. What’s the issue? gcc -E reveals all:

int main(int argc, char **argv) {

  if (isdigit(getchar())) printf("%c", getchar());
  if (isdigit(getchar())) printf("%c", getchar());
  if (isdigit(getchar())) printf("%c", getchar());
  return 0;
}

Because getchar() is not eagerly evaluated, it is duplicated in its raw form in both the condition and the printf call.

That’s one danger of macros. Here’s another. Consider this wrapper intended to make malloc easier to call:

#define EZMALLOC(n, type) (type *) malloc(sizeof(type) * n)
int *p = EZMALLOC(5 + 1, int); // we want six ints!

But what do we actually get? gcc -E says this:

int *p = (int *) malloc(sizeof(int) * 5 + 1);

Whoops. Because the preprocessor lives outside the language and is really just a textual substituter, it knows nothing about precedence. When we write macros, we must be careful. Here we can use parentheses to ensure the proper association:

#define EZMALLOC(n, type) (type *) malloc(sizeof(type) * (n))

Okay, so macros have at least two issues: repeated execution of the “parameters” and association troubles. (They also have scope issues, no type checking, and lots of other problems, probably.) But the repeated execution issue is not always bad. It leads us to one of the beauties of delaying evaluation. Delayed evaluation allows us to write our own control structures. Like loops and conditionals. Suppose we wanted to write our own repeat loop. Let’s try to do this in Ruby:

def repeat n, body
  i = 0
  while i < n
    body
    i += 1
  end
end

repeat(10, puts("Foobag!"))

What’s the output here?

Foobag!

Not exactly what we wanted. Control structures must not execute their operands until they are needed! Imagine an if function:

if(condition, then, else);

In an eager evaluation scheme, both then and else will be executed before if is even called. That’s very not right. It defies the semantics of conditional statements. We want to delay parameter evaluation. A few modern languages support “real” delaying. R and Scala, for example. But not many.
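We can see the fix for a conditional by delaying the branches explicitly. Here's a Ruby sketch of a user-defined if (the name my_if is my own) that takes its branches as lambdas, so only the chosen branch ever runs:

```ruby
# my_if evaluates only the branch selected by the condition.
def my_if(condition, then_thunk, else_thunk)
  condition ? then_thunk.call : else_thunk.call
end

log = []
result = my_if(1 < 2,
               -> { log << :then; "yes" },
               -> { log << :else; "no" })
puts result        # yes
puts log.inspect   # [:then] -- the else branch never ran
```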

Let’s try writing an until control structure in Scala. Here’s our first attempt:

object Main {
  def until(condition: Boolean, body: Unit) {
    while (!condition) {
      body
    }
  }

  def main(args: Array[String]) {
    var i = 0
    until(i == 10, {
      println(i)
      i += 1
    })
  }
}

We see 0 printed and then we enter an infinite loop. Why? The condition and body are eagerly evaluated. The condition enters until as false, so we enter the while which does nothing to change the already evaluated condition. But Scala gives us the power to delay:

object Main {
  def until(condition: => Boolean, body: => Unit) {
    while (!condition) {
      body
    }
  }

  def main(args: Array[String]) {
    var i = 0
    until(i == 10, {
      println(i)
      i += 1
    })
  }
}

Now we see our countup! Lazily evaluating a function’s parameters like this is called call-by-name. It’s not the greatest term in the world. A better term would be pass-as-function or pass-as-closure.
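As an aside, in a language without by-name parameters we can simulate the same until by passing both the condition and the body as explicit lambdas. Here's a Ruby sketch (until is a Ruby keyword, so I've named the method until_loop):

```ruby
# until_loop re-evaluates the condition thunk on every iteration,
# mimicking Scala's by-name parameters with explicit lambdas.
def until_loop(condition, body)
  body.call until condition.call
end

i = 0
until_loop(-> { i == 10 }, -> { i += 1 })
puts i  # 10
```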

But we can do one better. It’d be awesome if we could make our own control structures look like the builtin ones:

until (i == 10) {
  println(i)
  i += 1
}

We can do this if we make until behave as all Haskell functions do. If we give it just one parameter, it yields a new function awaiting the remaining parameters. We’ll feed it the condition first, and we’ll feed the body to the partially applied function that it returns. In reality, it’ll look something like this:

var until10 = until(i == 10)
until10({
  println(i)
  i += 1
})

Scala doesn’t automatically let us do this partial application. But we can explicitly allow it by chunking up the parameter list:

def until(condition: => Boolean)(body: => Unit) {
  while (!condition) {
    body
  }
}

Freeing up the parameter list like this to allow partial application is called currying. Haskell does it automatically.
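Ruby offers currying too, via Proc#curry. A small sketch:

```ruby
# Proc#curry turns a multi-parameter lambda into a chain of
# single-parameter functions, enabling partial application.
multiply = ->(a, b) { a * b }.curry
double = multiply.(2)   # partially applied: awaits b
puts double.(21)        # 42
puts multiply.(6).(7)   # 42
```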

You are probably thinking, “Call-by-name. Big deal. How many new control structures am I going to need in my life?” Probably not many, but I think the bigger deal is that just as Stroustrup made our new types indistinguishable from the builtin types, call-by-name lets our code get the same special treatment that builtin structures get.

Let’s extend our table. Suppose a call-by-name function never references its lazily evaluated parameter. How many times will it have been evaluated? Zero. That’s a win compared to eager evaluation, especially if evaluating it is expensive. What if it’s referenced once? It will be evaluated once. What if it’s referenced many times? It will be evaluated many times.

scheme          0 refs   1 ref   n refs
call-by-value   1        1       1
call-by-name    0        1       n

Note that we don’t explicitly pass functions as parameters to a call-by-name function. The language’s tooling wraps up our actual parameters in functions for us. Since most languages don’t support call-by-name, our only recourse to achieve something like it is to explicitly pass functions or lambdas. In Ruby we can use blocks for this. Let’s revisit repeat with blocks:

def repeat n
  i = 0
  while i < n
    yield
    i += 1
  end
end

repeat 5 do
  puts "Foobag!"
end

How about a loop-and-a-half control structure? Sadly, Ruby doesn’t let us pass more than one block to a function. We’ll use a lambda instead:

def loophalf n, outer, inner
  if n > 0
    outer.call
    n.times do
      inner.call
      outer.call
    end 
  end
end

loophalf 3, lambda { print 'x' }, lambda { print 'o' }

Even if call-by-name doesn’t exist in most of our languages, we can get close!


See you next time, when we discuss one last scheme for passing parameters: call-by-need.

Sincerely,
