Here’s the slide presentation I gave at the Silicon Valley JavaFX Users Group this evening, June 4th 2014:

Java 8 Lambda and Streams Overview Slides (PDF)

This is an approximately one-hour talk that covers lambdas, default methods, method references, and the streams API. It necessarily leaves out a lot of details, but it serves as a reasonably brief overview of the new features we’ve added in Java 8. (This is the same presentation I gave at the Japan JUG a couple weeks ago.)

The Meetup page for the SVJUGFX event is here:

Meetup Page

The video replay isn’t available as of this writing, but I imagine that a link will be placed there once the replay is available.

Finally, the source code for the “Bug Dates” charting application I showed in the talk is here:

Bug Dates demo (github)

We discovered another bug in a shell script test the other day. It’s incredibly simple and stupid but it’s been covering up errors for years. As usual, there’s a story to be told.

About a month ago, my colleague Tristan Yan fixed some tests in RMI to remove the use of fixed ports. The bug for this is JDK-7190106 and this is the changeset. Using a fixed port causes intermittent failures when there happens to be something else already running that’s using the same port. Over the past year or so we’ve been changing the RMI tests to use system-assigned ports in order to avoid such collisions. Tristan’s fix was another step in ridding the RMI test suite of its use of fixed ports. Using system-assigned ports requires using a Java-based test library, and these tests were shell scripts, so part of Tristan’s fix was also converting them to Java.
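The mechanism underlying system-assigned ports is simply that binding to port 0 asks the OS for any free port. This isn’t the RMI test library’s code, just a minimal sketch of the idea:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class SystemAssignedPort {
    public static void main(String[] args) throws IOException {
        // Binding to port 0 lets the OS pick a free port, so concurrent
        // test runs can't collide on a hard-coded port number.
        try (ServerSocket ss = new ServerSocket(0)) {
            int port = ss.getLocalPort();
            System.out.println("listening on port " + port);
        }
    }
}
```

A test can then pass the assigned port number along to whatever client needs it, instead of both sides agreeing on a fixed number in advance.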

The converted tests worked mostly fine, except that over the following few weeks an occasional test failure would occur. The serialization benchmark, newly rewritten into Java, would occasionally fail with a StackOverflowError. Clearly, something must have changed, since we had never seen this error occur with the shell script version of the serialization benchmark. Could the conversion from shell script to Java have changed something that caused excessive stack usage?

It turns out that the serialization benchmark does use a larger-than-ordinary amount of stack space. One of the tests loads a class that is fifty levels deep in the class inheritance hierarchy. This wasn’t a case of infinite recursion. Class loading uses a lot of stack space, and this test sometimes caused a StackOverflowError. Why didn’t the stack overflow occur with the shell script test?

The answer is … it did! It turns out that the shell script version of the serialization test was throwing StackOverflowError occasionally all along, but never reported failure. It’s pretty easy to force the test to overflow the stack by specifying a very small stack size (e.g., -Xss200k). Even when it threw a StackOverflowError, the test would still indicate that it passed. Why did this happen?

After the test preamble, the last line of the shell script invoked the JVM to run the serialization benchmark like this:

$TESTJAVA/bin/java \
    -cp $TESTCLASSES \
    bench.serial.Main \
    -c $TESTSRC/bench/serial/jtreg-config &

Do you see the bug?

The pass/fail status of a shell script test is the exit status of the script. By usual UNIX conventions, a zero exit status means pass and a nonzero exit status means failure. In turn, the exit status of the script is the exit status of the last command the script executes. The problem here is that the last command is executed in the background, and the “exit status” of a command run in the background is always zero. This is true regardless of whether the background command is still running or whether it has exited with a nonzero status. Thus, no matter what happens in the actual test, this shell script will always indicate that it passed!
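You can demonstrate this behavior outside of any test harness. Here’s a small, self-contained illustration (my own, not from the test suite) that runs a one-line script whose only command fails in the background; the script’s exit status is still 0:

```java
import java.io.IOException;

public class BackgroundStatus {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        // "false" exits with status 1, but because it is run in the
        // background ("&"), the script's exit status is the status of
        // *launching* the background job, which is always 0.
        Process p = new ProcessBuilder("sh", "-c", "false &").start();
        System.out.println("script exit status: " + p.waitFor());
    }
}
```

Drop the `&` and the same script exits with status 1, which is exactly the one-character difference that broke the serialization test.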

It’s a simple matter to experiment with the -Xss parameter in both the shell script and Java versions of the test to verify they both use comparable amounts of stack space. And given that the test workload sometimes overflowed the stack, the fix is to specify a sufficiently large stack size to ensure that this doesn’t happen. See JDK-8030284 and the changeset that fixes it.
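A simple way to run that kind of experiment is a stack-depth probe (again, my own illustration, not the benchmark itself): run it with different -Xss values and compare the depths reported.

```java
public class StackDepthProbe {
    static int depth = 0;

    // Recurse until the stack overflows, counting frames as we go.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // Try e.g. -Xss200k versus the default and compare.
            System.out.println("overflowed at depth " + depth);
        }
    }
}
```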

How did this happen in the first place? I’m not entirely sure, but this serialization benchmark test was probably derived from another shell script test right nearby, an RMI benchmark. Tristan also rewrote the RMI benchmark into a Java test, but it’s a bit more complicated. The RMI benchmark needs to run a server and a client in separate JVMs. Simplified, the RMI benchmark shell script looked something like this:

echo "Starting RMI benchmark server "
java bench.rmi.Main -server &

# wait for the server to start
sleep 10 

echo "Starting RMI benchmark client "
java bench.rmi.Main -client

When the serialization benchmark script was derived from the RMI benchmark script, the original author simply deleted the invocation of the RMI client side and modified the server-side invocation command line to run the serialization benchmark instead of the RMI benchmark, and left it running in the background.

(This test also exhibits another pathology, that of sleeping for a fixed amount of time in order to wait for a server to start. If the server is slow to start, this can result in an intermittent failure. If the server starts quickly, the test must still wait the full ten seconds. The Java rewrite fixes this as well, by starting the server in the first JVM, and forking the client JVM only after the server has been initialized.)
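The handshake idea behind that fix can be sketched roughly like this. This is an in-process illustration with invented names, not the actual benchmark code: the server signals readiness, and only then does the client start.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.CountDownLatch;

public class StartClientAfterServer {
    public static void main(String[] args) throws Exception {
        CountDownLatch ready = new CountDownLatch(1);
        ServerSocket ss = new ServerSocket(0);   // system-assigned port

        Thread server = new Thread(() -> {
            ready.countDown();                   // initialized and listening
            try (Socket s = ss.accept()) {
                s.getOutputStream().write('!');
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        server.start();

        // The "client" (in the real test, a forked JVM) starts only after
        // the server reports ready -- no fixed-length sleep needed.
        ready.await();
        try (Socket client = new Socket("localhost", ss.getLocalPort())) {
            System.out.println("client got: "
                + (char) client.getInputStream().read());
        }
        server.join();
        ss.close();
    }
}
```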

This is clearly another example of the fragility of shell script tests. A single character editing error in the script destroyed this test’s results!

Further Reading

  1. Testing Pitfalls (pdf). I gave this presentation at the TestFest around the time of Devoxx UK in March, 2013. This presentation has a lot of material on the problems that can arise with shell script tests in OpenJDK that use the jtreg test harness.
  2. Jon Gibbons (maintainer of jtreg) has an article entitled Shelling Tests that also explains some of the issues with shell script tests. It also describes the progress being made converting shell tests to Java in the langtools repository of OpenJDK.

JavaOne 2013 Has Begun!

This year’s JavaOne has begun! The keynotes were today, in Moscone for the first time since Oracle acquired Sun. It was a bit strange, having a little bit of JavaOne in the midst of Oracle OpenWorld. Red was everywhere. The rest of the week, JavaOne is in The Zone in the hotels by Union Square.

As usual, I’m involved in several activities at JavaOne. Oddly enough I’m not on any normal technical sessions. But I have one of almost everything else: a tutorial, a BOF, and a Hands-on lab.

Jump-starting Lambda Programming [TUT3877] – 10:00am-12:00pm Monday.

A gentle introduction to lambdas. This is early in the schedule, so you should start here, and then progress to some of the other, more advanced lambda sessions later in the conference.

[update] Slides available here: TUT3877_Marks-JumpStartingLambda-v6.pdf

Ten Things You Should Know When Writing Good Unit Test Cases in Java [BOF4255] – 6:30pm-7:15pm Monday.

Paul Thwaite (IBM) submitted this and invited me to contribute. I think we have some good ideas to share. Ideally a BOF should be a conversation among the audience members and the speakers. This might be difficult, as it looks like over 250 people have signed up so far! It’s great that there’s so much interest in testing.

[update] Paul has posted his slides.

Lambda Programming Laboratory [HOL3970] – 12:30pm-2:30pm Wednesday.

Try your hand at solving a dozen lambda-based exercises. They start off simple but they can get quite challenging. You’ll also have a chance to play with a JavaFX application that illustrates how some Streams library features work.

[update] I’ve uploaded the lab exercises in the form of a NetBeans project (zip format). Use the JDK 8 Developer Preview build or newer and use NetBeans 7.4 RC1 or newer.

Java DEMOgrounds in the Exhibition Hall – 9:30am-5:00pm Monday through Wednesday.

The Java SE booth in the DEMOgrounds has a small lambda demo running in NetBeans. I wrote it (plug, plug). I plan to be here from 2:00pm-3:00pm on Monday (the dedicated exhibition hours, when no sessions are running) so drop by to chat, ask questions, or to play around with the demo code.

Enjoy the show!

No, not that fixed point.

In the current sex-scandal-of-the-week, New York mayoral candidate Anthony Weiner has basically admitted to sending lewd messages under the pseudonym “Carlos Danger.” Where the heck did that name come from?

Clearly, there is a function that maps from one’s ordinary name to one’s “Carlos Danger” name. Slate has helpfully provided an implementation of the Carlos Danger name generator function. Using this tool, for example, one can determine that the Carlos Danger name for me (Stuart Marks) is Ricardo Distress. Hm, not too interesting. Of course, the Carlos Danger name for Anthony Weiner is Carlos Danger.

Now, what is the Carlos Danger name for Carlos Danger? It must be Carlos Danger, right? Apparently not, as the generator reveals that it is Felipe Menace.

Inspecting the source code of the web page reveals that the generator function basically hashes the input names a couple times and uses those values to index into predefined tables of Carlos-Danger-style first and last names. So, unlike Anthony Weiner, which is special-cased in the code, there’s nothing special about Carlos Danger. It’ll just map into some apparently-random pair of entries from the tables.

If the Carlos Danger name for Carlos Danger isn’t Carlos Danger, is there some other name whose Carlos Danger name is itself? Since there is a fairly small, fixed set of names, this is pretty easy to find out by searching the entire name space, as it were. A quick transliteration of the function into Java later (including a small wrestling match with character encodings), I have the answer:

  • The Carlos Danger name for Mariano Dynamite is Mariano Dynamite.
  • The Carlos Danger name for Miguel Ángel Distress is Miguel Ángel Distress.
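For the curious, the brute-force search looked roughly like this. The hash function and the (heavily abbreviated) name tables here are stand-ins I made up for illustration, not the ones from the Slate page:

```java
public class FixedPointSearch {
    // Stand-in tables; the real ones on the Slate page are much longer.
    static String[] FIRST = { "Carlos", "Felipe", "Ricardo", "Mariano" };
    static String[] LAST  = { "Danger", "Menace", "Distress", "Dynamite" };

    // Hypothetical stand-in for the page's function, which hashes the
    // input a couple of times to index into the two tables.
    static String dangerName(String name) {
        int h = name.hashCode();
        return FIRST[Math.floorMod(h, FIRST.length)] + " "
             + LAST[Math.floorMod(h / 31, LAST.length)];
    }

    public static void main(String[] args) {
        // The output space is small and fixed, so just try every
        // possible Carlos-Danger-style name and see which map to themselves.
        for (String f : FIRST) {
            for (String l : LAST) {
                String name = f + " " + l;
                if (dangerName(name).equals(name)) {
                    System.out.println("fixed point: " + name);
                }
            }
        }
    }
}
```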

You heard it here first, folks.

Finally, if you ever run into Ricardo Distress, tell him I said hi.

I gave a talk at Devoxx UK 2013 entitled Accelerated Lambda Programming. Here is the slide presentation from that talk.

There are just a few introductory slides in the slide deck, after which most of the talk consisted of live programming demos in NetBeans. Below is the sample code from the demo, cleaned up, merged into a single file, and updated for Lambda build b88. The conference was several weeks ago, and I did the demos using build b82. The APIs have changed a little bit, but not that much, certainly much less than the amount they changed in the few weeks leading up to b82.

(Here is a link to JDK 8 early access builds with lambda support. The lambda support is at this writing still being integrated into the JDK 8 mainline, so it may still be a few weeks before you can run this code on the mainline JDK 8 builds. Also, here is a link to NetBeans builds with lambda support. Most recent builds should work fine.)

I’ve included extensive annotations along with the code that attempt to capture the commentary I had made verbally while giving the talk. At some point the video is supposed to be posted, but it isn’t yet (and in any case subscription might be required to view the video).

The APIs are still in a state of flux and they may still change. Please let me know if you have trouble getting this stuff to work. When all the lambda APIs are integrated into the JDK 8 mainline, I’ll update the sample code here if necessary.


package com.example;

import java.io.*;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.*;

/**
 * Sample code from my "Accelerated Lambda Programming" talk at Devoxx UK,
 * March 2013. I went through most of the examples in the talk, but I think
 * I missed a couple. At least, I had intended to present all of the examples
 * shown here. (-:
 *
 * @author smarks
 */

public class AcceleratedLambda {
    static List<String> strings = Arrays.asList(
        "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine", "ten");

    // ========== SOURCES AND OPERATIONS ==========

    static void sources_and_operations() throws IOException {

        // Use a List as a stream source.
        // This just prints out each string.
        strings.stream()
               .forEach(x -> System.out.println(x));

        // Use an int range as a stream source. Note that parameter x is now
        // inferred to be an int whereas above it was a String.
        IntStream.range(0, 10)
                 .forEach(x -> System.out.println(x));

        // Use a string as a stream of chars
        // This prints integers 97, 98, 99, ... huh? The chars() method
        // provides an IntStream so println prints numbers.
        "abcdef".chars()
                .forEach(x -> System.out.println(x));

        // Cast values to char so that it prints 'a' through 'f'.
        "abcdef".chars()
                .forEach(x -> System.out.println((char)x));

        // Reads and prints each line of a text file.
        try (FileReader in = new FileReader("input.txt");
             BufferedReader br = new BufferedReader(in)) {
            br.lines()
              .forEach(x -> System.out.println(x));
        }

        // Change the terminal operation to collect the values
        // into a List.
        List<String> result =
            strings.stream()
                   .collect(Collectors.toList());
        System.out.println("result = " + result);

        // Collect the values into a Set instead of a List.
        // This will probably print the values in a different order,
        // since the set's iteration order is probably different from
        // the order of insertion.
        Set<String> result2 =
            strings.stream()
                   .collect(Collectors.toSet());
        System.out.println("result2 = " + result2);

        // Counts the number of values in the stream.
        long result3 =
            strings.stream()
                   .count();
        System.out.println("result3 = " + result3);

        // Check that every value matches a predicate.
        boolean result4 =
            strings.stream()
                   .allMatch(s -> s.length() < 10);
        System.out.println("result4 = " + result4);

        // Check that at least one value matches a predicate.
        boolean result5 =
            strings.stream()
                   .anyMatch(s -> s.length() > 5);
        System.out.println("result5 = " + result5);

        // Now add an intermediate operation: filter the stream
        // through a predicate, which passes through only the values
        // that match the predicate.
        List<String> result6 =
            strings.stream()
                   .filter(s -> s.length() > 3)
                   .collect(Collectors.toList());
        System.out.println("result6 = " + result6);

        // Map (transform) a value into a different value.
        List<String> result7 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .collect(Collectors.toList());
        System.out.println("result7 = " + result7);

        // Take a slice of a stream.
        // Also try substream(n) and limit(n).
        List<String> result8 =
            strings.stream()
                   .substream(2, 5)
                   .collect(Collectors.toList());
        System.out.println("result8 = " + result8);

        // Add intermediate operations to form a longer pipeline.
        // The peek method calls the lambda with each value as it passes by.
        List<String> result9 =
            strings.stream()
                   .filter(s -> s.length() > 4)
                   .peek(s -> System.out.println("  peeking at " + s))
                   .map(s -> s.substring(0,1))
                   .collect(Collectors.toList());
        System.out.println("result9 = " + result9);

        // Most operations are stateless in that they operate on each
        // value as it passes by. Some operations are "stateful". The
        // distinct() operation builds up a set internally and passes
        // through only the values it hasn't seen yet.
        List<String> result10 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .distinct()
                   .collect(Collectors.toList());
        System.out.println("result10 = " + result10);

        // The sorted() operation is also stateful, but it has to buffer
        // up all the incoming values and sort them before it can emit
        // the first value downstream.
        List<String> result11 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .sorted()
                   .collect(Collectors.toList());
        System.out.println("result11 = " + result11);
    }

    // ========== SEQUENTIAL, LAZY, AND PARALLEL PROCESSING ==========

    // A fairly stupid primality tester, useful for consuming a lot
    // of CPU given a small amount of input. Don't use this code for
    // anything that really needs prime numbers.
    static boolean isPrime(long num) {
        if (num <= 1)
            return false;

        if (num == 2)
            return true;

        if (num % 2 == 0)
            return false;

        long limit = (long)Math.sqrt(num);
        for (long i = 3L; i <= limit; i += 2L) {
            if (num % i == 0)
                return false;
        }
        return true;
    }

    public static void lazy_and_parallel() {
        // Adjust these parameters to change the amount of CPU time
        // consumed by the prime-checking routine.
        long start = 1_000_000_000_000_000_001L; // 10^18 (quintillion) + 1
        long count = 100L;

        // Sequential check. Takes about 30 seconds on a 2009 MacBook Pro
        // (2.66GHz Core2Duo). There should be four primes found.
        long time0 = System.currentTimeMillis();
        LongStream.range(start, start + count, 2L)
                  .filter(n -> isPrime(n))
                  .forEach(n -> System.out.println(n));
        long time1 = System.currentTimeMillis();
        System.out.printf("sequential: %.1f seconds%n", (time1-time0)/1000.0);

        // Truncate the stream after three results. The full range need not
        // be checked, so this completes more quickly (about 23 seconds).
        LongStream.range(start, start + count, 2L)
                  .filter(n -> isPrime(n))
                  .limit(3)
                  .forEach(n -> System.out.println(n));
        long time2 = System.currentTimeMillis();
        System.out.printf("limited: %.1f seconds%n", (time2-time1)/1000.0);

        // Run the full range in parallel. With two cores, this takes about
        // half the time of the first run (plus some overhead) completing
        // typically in 16 seconds. Note that results are probably returned
        // in a different order.

        // Where are the threads? A parallel stream is split into tasks
        // which are executed by the "common fork-join thread pool," new in
        // Java SE 8. See java.util.concurrent.ForkJoinPool.
        LongStream.range(start, start + count, 2L)
                  .parallel()
                  .filter(n -> isPrime(n))
                  .forEach(n -> System.out.println(n));
        long time3 = System.currentTimeMillis();
        System.out.printf("parallel: %.1f seconds%n", (time3-time2)/1000.0);
    }

    // ========== REDUCTION ==========

    static List<String> words = Arrays.asList(
        "Experience", "is", "simply", "the", "name",
        "we", "give", "our", "mistakes."); // Oscar Wilde

    // Compute the sum of the lengths of the words.
    // The non-streamy approach, using a for-loop.
    static void length0() {
        int total = 0;
        for (String s : words)
            total += s.length();
        System.out.println(total);
    }

    // An attempt at a streamy approach, mutating a captured local variable.
    // DOES NOT WORK. Captured locals cannot be mutated (they must be
    // effectively final).

    // If captured locals could be mutated, they would need to outlive
    // their enclosing scope, thus they'd have to reside on the heap. This
    // implies (a) they'd be visible from multiple threads and be subject
    // to race conditions; and (b) they'd be susceptible to memory leaks.
    // See: http://www.lambdafaq.org/
    //          what-are-the-reasons-for-the-restriction-to-effective-immutability/

//    static void length1() {
//        int total = 0;
//        words.stream()
//             .map(s -> s.length())
//             .forEach(len -> total += len);
//        System.out.println(total);
//    }

    // Work around the inability to mutate a captured local by using a
    // single-element array. DO NOT DO THIS. THIS IS BAD STYLE. This basically
    // buys into all the disadvantages local variables would have if they
    // were moved to the heap. Array elements cannot be synchronized, and
    // they cannot be volatile either, so it is essentially impossible to
    // write race-free algorithms with them. AGAIN, DO NOT DO THIS. YOU WILL
    // GET BURNED.

    static void length2() {
        int[] total = new int[1];
        words.stream()
             .map(s -> s.length())
             .forEach(len -> total[0] += len);
        System.out.println(total[0]);
    }

    // Work around the inability to mutate a captured local variable by
    // mutating an AtomicInteger. This is allowed, since the reference to
    // the AtomicInteger is final, but the AtomicInteger itself can be
    // mutated. What's more, it can be mutated safely from multiple threads.
    // This works, but is poor style, as it results in contention among
    // the threads all attempting to mutate the same variable. See
    // slides 8-9.

    static void length3() {
        AtomicInteger total = new AtomicInteger(0);
        words.stream()
             .map(s -> s.length())
             .forEach(n -> total.addAndGet(n));
        System.out.println(total.get());
    }

    // Summation using reduction. See slides 10-14.
    static void length4() {
        int total =
            words.stream()
                 .map(s -> s.length())
                 .reduce(0, (i, j) -> i + j);
        System.out.println(total);
    }

    // Use method reference instead of a lambda for addition.
    static void length5() {
        int total =
            words.stream()
                 .map(s -> s.length())
                 .reduce(0, Integer::sum);
        System.out.println(total);
    }

    // Use convenience method sum() instead of explicit reduce() method.
    // Note that we've switched from map() to mapToInt() here. The
    // plain map() results in Stream<Integer>, which works fine above,
    // but it does add boxing and unboxing overhead. Using mapToInt()
    // results in an IntStream which not only is more efficient, it also
    // has the convenience sum() method on it.
    static void length6() {
        int total =
            words.stream()
                 .mapToInt(s -> s.length())
                 .sum();
        System.out.println(total);
    }

    // ========== GROUPING ==========

    // These couple grouping examples illustrate "mutable reduction"
    // operations. (See the java.util.stream package documentation.)
    // Many kinds of reductions, such as summation, combine values to
    // create new values. Sometimes we want to build up a data structure
    // like a map. We could combine maps to create new maps, but this would
    // result in excessive copying. Instead, we perform careful mutation in
    // a collect() call at the end of a pipeline.

    // A "Collector" is an object that represents a set of functions that can
    // handle parallel mutation and combining of intermediate results. Of
    // course, the intermediate and final results must be thread-safe data
    // structures, if the reduction is done in parallel.

    // A full explanation of a Collector is beyond the scope of this example.
    // A set of prepared Collector objects can be obtained from the Collectors
    // utility class. We will show a particular kind of Collector that does
    // grouping. The idea is, for each value in the stream, a "classifier"
    // function is run over it. Typically, multiple values in the stream will
    // produce the same result from the classifier function. The values from
    // the stream are then gathered into a Map, whose keys are the results
    // of the classifier function, and whose values are lists of values that
    // correspond to the classifier results.

    // This example groups words by their first letter. Thus, given a stream
    // of strings
    //     one two three four five six seven eight nine ten
    // the resulting map would have key-value pairs
    //     "t" => ["two", "three", "ten"]
    //     "f" => ["four", "five"]
    //     "o" => ["one"]
    // and so forth.

    static void grouping1() {
        Map<String, List<String>> grouping1 =
            strings.stream()
                   .collect(Collectors.groupingBy(s -> s.substring(0,1)));
        System.out.println("grouping1 = " + grouping1);
    }

    // The example above has a hard-coded policy of accumulating
    // the grouped stream values into a list. What if we didn't want
    // a list, but instead we wanted to combine the grouped values
    // together?

    // The groupingBy() method has an overload that takes a "downstream"
    // collector that takes each grouped value and combines it with
    // other values in the same grouping.

    // In this example we don't want to combine all the grouped values into
    // a list, but instead we want to get the sum of their lengths. To
    // do this, we use a similar groupingBy() call, but add a second
    // argument Collectors.reducing() to which we specify how to combine
    // (reduce) the values. For a reduction we have to provide an initial
    // value of zero; the second argument is how to get the length of a
    // single string, and the third argument is how to combine the length
    // of the current string with the running total so far. Note, this
    // reduction occurs *within* each group.

    // Thus, the result is a Map whose values aren't lists, but instead are
    // integers representing the sums of the lengths of the strings in
    // that group:
    //     "t" => 11
    //     "f" => 8
    //     "o" => 3
    // and so forth.

    static void grouping2() {
        Map<String, Integer> grouping2 =
            strings.stream()
                   .collect(Collectors.groupingBy(s -> s.substring(0,1),
                            Collectors.reducing(0, s -> s.length(), Integer::sum)));
        System.out.println("grouping2 = " + grouping2);
    }

    // Comment or uncomment the statements below to control
    // what you want to run.
    public static void main(String[] args) throws IOException {
        // sources_and_operations();
        // lazy_and_parallel();
        // length2();
        // length3();
        // length4();
        // length5();
        // length6();
        // grouping1();
        // grouping2();
    }
}

I’m finally catching up with my backlog of items dating back to Devoxx UK 2013, which was in March!

There were a couple of testing-related events I participated in. The first was an OpenJDK TestFest, sponsored by the London Java Community, IBM, and Oracle. This wasn’t officially part of Devoxx. It was held at the Oracle offices in London the Saturday prior to Devoxx itself. There were several presentations; I gave a brief talk on OpenJDK Testing Pitfalls (slides). People spent some time hacking on actual tests, but I thought the presentations and the ensuing discussion were very helpful as well.

The other testing-related event I participated in was an OpenJDK BOF with Martijn Verburg. Hm, it’s entitled “OpenJDK Hack Session” but there wasn’t that much hacking there, though Martijn did demonstrate the new OpenJDK build system. I presented some additional material on Testing OpenJDK (slides). This presentation was less about writing tests for OpenJDK than about the difficulties we have testing OpenJDK. The biggest problem, I think, is with unreliable tests. One would hope that a failing test means that there is a bug in the system being tested. Unfortunately in OpenJDK we have a lot of tests that are only 99% reliable. If you run the test suite regularly, especially on all the platforms, that makes it very unlikely that you’ll get a test run with 100% of tests passing, even if there are no bugs in the system. Worse, there are bugs in the system, so test failures caused by actual bugs are mixed with spurious test failures.
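A bit of arithmetic shows why clean runs become so unlikely. Suppose, purely for illustration, that 20 of the tests each fail spuriously 1% of the time:

```java
import java.util.Locale;

public class FlakyTestMath {
    public static void main(String[] args) {
        // Hypothetical numbers: 20 tests, each "99% reliable", i.e., a 1%
        // spurious failure rate, independent of real bugs.
        int flakyTests = 20;
        double passRate = 0.99;
        double cleanRun = Math.pow(passRate, flakyTests);
        System.out.printf(Locale.ROOT, "P(all pass) = %.2f%n", cleanRun);

        // Running the suite on several platforms compounds the odds further.
        int platforms = 5;
        System.out.printf(Locale.ROOT, "P(all pass on all platforms) = %.2f%n",
                          Math.pow(cleanRun, platforms));
    }
}
```

Even with these modest assumptions, most full test runs report at least one spurious failure, which is why real failures get lost in the noise.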

You can see this in the test results that Balchandra Vaidya has been posting to the OpenJDK quality-discuss mailing list. See the jdk8 b88 test results posting, for example. If you click through some links to find the Results Archive page, you’ll see that there have been 12-17 failures out of 4,000 or so tests in the JDK test suite, for the past twenty or so JDK 8 builds. Worse, they haven’t been the same failures every time, since the code and tests are constantly being modified, and the set of tests being run is shifting around as well.

This is clearly a serious problem, one that I and others at Oracle hope to make progress on in the coming months.

Last night, I attended a “class reunion” of sorts, a reunion of the first Stanford CS108A/B class, Fundamentals of Computer Science, that Brian Reid taught in 1982-83. I was one of the teaching assistants. It was intended to be the beginning of an undergraduate Computer Science curriculum at Stanford. There still is a CS108 at Stanford, but it’s now called Object Oriented System Design. (Hmmm.)

Obviously a lot has changed in 30 years: personal computing, the internet, immense increases in computing power and storage, and so forth. But one thing that struck me about how much has changed is programming languages. I learned to program in BASIC in the 1970s. It was Wang 2200 BASIC, not one of the microcomputer-based BASICs of the time. Probably it was gratuitously different from them but fundamentally the same. From what I recall it had the following characteristics:

  • The only types were numbers and strings and arrays of them.
  • Variables were all globals.
  • Variable names were A, A0 through A9, …, Z9 and corresponding string variables A$, A0$ through A9$, … Z9$. At least one could have both a numeric variable Q7 and a string variable Q7$. (I seem to recall some BASICs prohibiting that.) But I don’t think you could have both scalar and array variables with the same name.
  • The control structures were IF, GOTO, GOSUB, and ON ERROR. You couldn’t put a statement in the IF statement, just a line number. There was no ELSE, so you had to GOTO around the then-block.
  • Oh yeah, line numbers.
  • GOSUB took line numbers. Wang BASIC had a variant of GOSUB that would pass parameters, but there were no return values from subroutines.

Despite all these limitations, we all had a lot of fun programming BASIC, didn’t we?

When I arrived at Stanford in the early 1980s, the programming language was Pascal. After BASIC, Pascal was a breath of fresh air. It had:

  • The ability to give names for things, and for names to reside within scopes.
  • The ability to put a group of statements into a begin/end block and use them in an if-then-else statement.
  • Structured programming constructs like while, repeat, and case.
  • It did have goto, but it was rarely used, and in fact I didn’t actually know how it worked. (The obvious cases worked as expected, of course, but what happens if you goto a label located in a nested scope? Or goto a label that’s in an outer scope but that’s not on your call stack?)
  • Data structures!
  • Local variables.
  • User-defined types.
  • A real boolean type.
  • Dynamic memory allocation and pointers.

Compared to BASIC, Pascal was a huge step forward, freed from restrictive variable names and line numbers. Dynamic allocation of instances of user-defined types enabled one to create data structures like linked lists and trees, which were almost impossible in BASIC.

All the CS108 programming assignments were in Pascal. We were writing really big programs. They must have been hundreds or even thousands of lines long. :-) And as one of the TAs, I had to read a bunch of them.

People have dumped on Pascal a lot, unfairly so in my opinion. It had its share of problems, but it had huge advantages over BASIC, and I was also able to write some useful programs in Turbo Pascal on the PC in the mid 1980s.

Some of its problems did prove quite irritating though and possibly fatal. Among them are:

  • A string is simply an array of char, and the length of an array is part of its type. This made string handling incredibly cumbersome. BASIC’s substring and appending operations were fluid by comparison. I always wrestled with Pascal strings and never understood why they were so difficult until, many years later, I read a commentary by one of the C guys (Kernighan or Ritchie) that pointed out Pascal’s mistake was including the length of an array in its type. (I can’t find a reference to this though.) (See update below.)
  • Inability to terminate a loop in the middle. The typical practice was to use boolean flags to handle loop termination (with an embedded if-test) but this was cumbersome and error-prone. Of course, you could goto out of a loop, but that was frowned upon.
  • No short-circuit expression evaluation. This made loop termination even harder.
  • Weird lookahead-based I/O.
  • Lexical nesting.

UPDATE: In the comments, Joe Darcy pointed me to Kernighan’s article Why Pascal is Not My Favorite Programming Language. It seems to be available in a bunch of places around the web. One particularly convenient place is here. Kernighan explains quite well not only the issue of array length, but also the problems with loop termination and lack of short-circuit evaluation. Money quote: “Early exits are a pain, almost always requiring the invention of a boolean variable and a certain amount of cunning.”

That point on lexical nesting deserves some further discussion. Lexical nesting is of course very useful: it allows certain details of internal data structures and code to be hidden from the outside. The problem with Pascal is that lexical nesting is the only means for creating large-scale program structures. (Of course some Pascal systems had separate compilation and libraries, but those were extensions.) Some of the toughest problems I helped students with were nesting problems. In one particular case I was trying to diagnose a compiler error complaining that a variable wasn’t declared. Those were usually pretty easy: just look up towards the top of the program to find where the missing declaration needed to go. In this particular case there was a declaration that seemed to be in the right place. I was mystified.

After a lot of analysis, we discovered that the nesting was off. The begins and ends were balanced, but they were shifted in a way that caused an entire section of the program to be nested a level deeper than it seemed it was. Thus the declaration was in a nested scope and wasn’t visible where it was needed. The problem was that this was probably a 20- or 30-page program. The declaration was at one point, the compiler error (undeclared variable) was at a different point very far away, and the actual error (misplaced begin) was in still another location, also far away from the other two. Thus, to diagnose the problem, one had to read and understand the structure of the entire program.

Not long after I TA’d CS108, I started working for Brian in the Stanford Computer Systems Lab. This was a Unix/C shop, so I quickly switched from Pascal to C. Reminiscing last night, Brian said Pascal was like a nanny who was always saying “No, you can’t do that!” Programming in C was like Pascal with the training wheels taken off. (I’m sure I stole that line from somewhere.) Sure, there were weirdnesses (the * pointer dereference is a prefix operator? What’s with this weird array/pointer duality?). But most of the hassles of everyday programming in Pascal were gone:

  • The break statement.
  • Short-circuit expression evaluation.
  • Reasonable library-based I/O.
  • Reasonable library-based memory allocation.
  • Files-as-modules program structure.
  • Unchecked array length (length not part of array type).

This last is of course a blessing and a curse. As in Pascal, a C string is (mostly) just an array of char, but of variable length. This made string processing much easier. Actual string length was determined by a trailing zero byte. Of course, since the language didn’t keep track of the actual array capacity, programs would have to keep track of it themselves. Or not. But the programs worked anyway. Usually. This gave rise to a bunch of sloppy programming practices, leading to memory smashes, buffer overruns, security breaches, and so forth.

Nevertheless, C has proven to be incredibly useful and is still one of the most popular programming languages today. It’s important to understand, though, that the C we used in the 1980s (“K&R C”) isn’t the same C we program in today. I’ll have more to say about that in a separate article.
