
JavaOne 2014 Schedule

I have a jam-packed schedule for JavaOne 2014. My sessions are as follows:


TUT3371 Jump-Starting Lambda (Tue 30 Sep 0830 Hilton Yosemite B/C)

This is my gentle introduction to lambdas. Download Presentation (PDF).

CON3374 Lambda Q&A Panel (Tue 30 Sep 1230 Hilton Yosemite B/C)

This panel session will explore the impact of Java 8 Lambdas on the Java ecosystem.

BOF6244 You’ve Got Your Streams On My Collections! (Tue 30 Sep 1900 Hilton Yosemite A)

Community discussion of collections and the new streams APIs.

IGN12431 Ignite Session (Tue 30 Sep 1900 Hilton Imperial A)

This is at the same time as the BOF, so my session will be later on, perhaps 2000 or so. Make sure to come; I have a little surprise planned!

CON3372 Parallel Streams Workshop (Wed 1 Oct 1000 Hilton Yosemite A)

Writing parallel streams code can be easy and effective, but you have to avoid some pitfalls. Download Presentation (PDF)

CON6377 Debt and Deprecation (Wed 1 Oct 1500 Hilton Yosemite A)

Given by my alter ego, Dr. Deprecator, this talk explores the principles and prescriptions of deprecation. Download Presentation (PDF)

HOL3373 Lambda Programming Laboratory (Thu 2 Oct 1200-1400 Hilton Franciscan A/B)

This is your chance to try out some lambda expressions and stream APIs introduced in Java 8, in order to solve a couple dozen challenging exercises. View Introductory Slide Presentation. Download Exercises (NetBeans Project).


See you in San Francisco!

Here’s the slide presentation I gave at the Silicon Valley JavaFX Users Group this evening, June 4th 2014:

Java 8 Lambda and Streams Overview Slides (PDF)

This is an approximately one-hour talk that covers lambdas, default methods, method references, and the streams API. It necessarily leaves out a lot of details, but it serves as a reasonably brief overview of the new features we’ve added in Java 8. (This is the same presentation I gave at the Japan JUG a couple weeks ago.)
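To give a taste of what the talk covers, here’s a minimal example (my own illustration, not taken from the slides) that combines a lambda expression, a default method, a method reference, and a small stream pipeline:

```java
import java.util.*;

public class Java8Features {
    interface Greeter {
        String name();
        // Default method: an interface method with a body.
        default String greet() { return "Hello, " + name() + "!"; }
    }

    public static void main(String[] args) {
        // Lambda expression implementing the one abstract method of Greeter.
        Greeter g = () -> "Duke";
        System.out.println(g.greet());

        // Method references feeding a stream pipeline.
        List<String> names = Arrays.asList("amy", "bob");
        names.stream()
             .map(String::toUpperCase)
             .forEach(System.out::println);
    }
}
```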

The Meetup page for the SVJUGFX event is here:

Meetup Page

The video replay isn’t available as of this writing, but I imagine that a link will be placed there once the replay is available.

Finally, the source code for the “Bug Dates” charting application I showed in the talk is here:

Bug Dates demo (github)

We discovered another bug in a shell script test the other day. It’s incredibly simple and stupid but it’s been covering up errors for years. As usual, there’s a story to be told.

About a month ago, my colleague Tristan Yan fixed some tests in RMI to remove the use of fixed ports. The bug for this is JDK-7190106 and this is the changeset. Using a fixed port causes intermittent failures when something else happens to be running that’s already using the same port. Over the past year or so we’ve been changing the RMI tests to use system-assigned ports in order to avoid such collisions. Tristan’s fix was another step toward ridding the RMI test suite of its use of fixed ports. Using system-assigned ports requires using a Java-based test library, and these tests were shell scripts, so part of Tristan’s fix was also converting them to Java.

The converted tests worked mostly fine, except that over the following few weeks an occasional test failure would occur. The serialization benchmark, newly rewritten into Java, would occasionally fail with a StackOverflowError. Clearly, something must have changed, since we had never seen this error occur with the shell script version of the serialization benchmark. Could the conversion from shell script to Java have changed something that caused excessive stack usage?

It turns out that the serialization benchmark does use a larger-than-ordinary amount of stack space. One of the tests loads a class that is fifty levels deep in the class inheritance hierarchy. This wasn’t a case of infinite recursion. Class loading uses a lot of stack space, and this test sometimes caused a StackOverflowError. Why didn’t the stack overflow occur with the shell script test?

The answer is … it did! It turns out that the shell script version of the serialization test was throwing StackOverflowError occasionally all along, but never reported failure. It’s pretty easy to force the test to overflow the stack by specifying a very small stack size (e.g., -Xss200k). Even when it threw a StackOverflowError, the test would still indicate that it passed. Why did this happen?
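The effect is easy to reproduce in isolation. This sketch (my own illustration, not the benchmark’s fifty-level class hierarchy) recurses until the stack overflows and reports the depth it reached; running it with different -Xss values changes the depth, but not the outcome:

```java
public class DeepRecursion {
    static int depth = 0;

    // Unbounded recursion: consumes one stack frame per call.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // By the time we get here the stack has unwound, so printing is safe.
            System.out.println("StackOverflowError at depth " + depth);
        }
    }
}
```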

After the test preamble, the last line of the shell script invoked the JVM to run the serialization benchmark like this:

$TESTJAVA/bin/java \
    ${TESTVMOPTS} \
    -cp $TESTCLASSES \
    bench.serial.Main \
    -c $TESTSRC/bench/serial/jtreg-config &

Do you see the bug?

The pass/fail status of a shell script test is the exit status of the script. By usual UNIX conventions, a zero exit status means pass and a nonzero exit status means failure. In turn, the exit status of the script is the exit status of the last command the script executes. The problem here is that the last command is executed in the background, and the “exit status” of a command run in the background is always zero. This is true regardless of whether the background command is still running or whether it has exited with a nonzero status. Thus, no matter what happens in the actual test, this shell script will always indicate that it passed!
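The behavior is easy to demonstrate. Here’s a little driver (my own sketch; it assumes a POSIX sh is available) that runs the same failing command in the foreground and in the background and prints the script’s exit status in each case:

```java
import java.io.IOException;

public class ExitStatusDemo {
    static int run(String script) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("sh", "-c", script).start();
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // In the foreground, the script's exit status is the command's (3).
        System.out.println("foreground: " + run("exit 3"));

        // In the background with '&', launching the job always "succeeds",
        // so the script exits with status 0 no matter what the command did.
        System.out.println("background: " + run("exit 3 &"));
    }
}
```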

It’s a simple matter to experiment with the -Xss parameter in both the shell script and Java versions of the test to verify they both use comparable amounts of stack space. And given that the test workload sometimes overflowed the stack, the fix is to specify a sufficiently large stack size to ensure that this doesn’t happen. See JDK-8030284 and the changeset that fixes it.

How did this happen in the first place? I’m not entirely sure, but this serialization benchmark test was probably derived from another shell script test right nearby, an RMI benchmark. Tristan also rewrote the RMI benchmark into a Java test, but it’s a bit more complicated. The RMI benchmark needs to run a server and a client in separate JVMs. Simplified, the RMI benchmark shell script looked something like this:

echo "Starting RMI benchmark server "
java bench.rmi.Main -server &

# wait for the server to start
sleep 10 

echo "Starting RMI benchmark client "
java bench.rmi.Main -client

When the serialization benchmark script was derived from the RMI benchmark script, the original author simply deleted the client-side invocation, modified the server-side command line to run the serialization benchmark instead of the RMI benchmark, and left it running in the background.

(This test also exhibits another pathology, that of sleeping for a fixed amount of time in order to wait for a server to start. If the server is slow to start, this can result in an intermittent failure. If the server starts quickly, the test must still wait the full ten seconds. The Java rewrite fixes this as well, by starting the server in the first JVM, and forking the client JVM only after the server has been initialized.)
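A sketch of that handoff (my own illustration, not the actual benchmark code): bind the server to a system-assigned port first, and only start the client once the bind has succeeded, so there is no race and no fixed sleep:

```java
import java.io.*;
import java.net.*;

public class NoSleepHandoff {
    public static void main(String[] args) throws Exception {
        // Bind to port 0: the system assigns a free port, avoiding collisions.
        ServerSocket server = new ServerSocket(0);
        int port = server.getLocalPort(); // server is listening once bind returns

        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept();
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("hello");
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        serverThread.start();

        // The "client" connects only after the server is known to be
        // listening; no fixed sleep is needed.
        try (Socket s = new Socket("localhost", port);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            System.out.println("client got: " + in.readLine());
        }
        serverThread.join();
        server.close();
    }
}
```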

This is clearly another example of the fragility of shell script tests. A single-character editing error in the script destroyed this test’s results!

Further Reading

  1. Testing Pitfalls (PDF). I gave this presentation at the TestFest around the time of Devoxx UK in March 2013. This presentation has a lot of material on the problems that can arise with shell script tests in OpenJDK that use the jtreg test harness.
  2. Jon Gibbons (maintainer of jtreg) has an article entitled Shelling Tests that also explains some of the issues with shell script tests. It also describes the progress being made converting shell tests to Java in the langtools repository of OpenJDK.

JavaOne 2013 Has Begun!

This year’s JavaOne has begun! The keynotes were today, in Moscone for the first time since Oracle acquired Sun. It was a bit strange, having a little bit of JavaOne in the midst of Oracle OpenWorld. Red was everywhere. The rest of the week, JavaOne is in The Zone in the hotels by Union Square.

As usual, I’m involved in several activities at JavaOne. Oddly enough, I’m not giving any regular technical sessions this year. But I have one of almost everything else: a tutorial, a BOF, and a hands-on lab.

Jump-starting Lambda Programming [TUT3877] – 10:00am-12:00pm Monday.

A gentle introduction to lambdas. This is early in the schedule, so you should start here, and then progress to some of the other, more advanced lambda sessions later in the conference.

[update] Slides available here: TUT3877_Marks-JumpStartingLambda-v6.pdf

Ten Things You Should Know When Writing Good Unit Test Cases in Java [BOF4255] – 6:30pm-7:15pm Monday.

Paul Thwaite (IBM) submitted this and invited me to contribute. I think we have some good ideas to share. Ideally a BOF should be a conversation among the audience members and the speakers. This might be difficult, as it looks like over 250 people have signed up so far! It’s great that there’s so much interest in testing.

[update] Paul has posted his slides.

Lambda Programming Laboratory [HOL3970] – 12:30pm-2:30pm Wednesday.

Try your hand at solving a dozen lambda-based exercises. They start off simple but they can get quite challenging. You’ll also have a chance to play with a JavaFX application that illustrates how some Streams library features work.

[update] I’ve uploaded the lab exercises in the form of a NetBeans project (zip format). Use the JDK 8 Developer Preview build or newer and use NetBeans 7.4 RC1 or newer.

Java DEMOgrounds in the Exhibition Hall – 9:30am-5:00pm Monday through Wednesday.

The Java SE booth in the DEMOgrounds has a small lambda demo running in NetBeans. I wrote it (plug, plug). I plan to be here from 2:00pm-3:00pm on Monday (the dedicated exhibition hours, when no sessions are running) so drop by to chat, ask questions, or to play around with the demo code.

Enjoy the show!

No, not that fixed point.

In the current sex-scandal-of-the-week, New York mayoral candidate Anthony Weiner has basically admitted to sending lewd messages under the pseudonym “Carlos Danger.” Where the heck did that name come from?

Clearly, there is a function that maps from one’s ordinary name to one’s “Carlos Danger” name. Slate has helpfully provided an implementation of the Carlos Danger name generator function. Using this tool, for example, one can determine that the Carlos Danger name for me (Stuart Marks) is Ricardo Distress. Hm, not too interesting. Of course, the Carlos Danger name for Anthony Weiner is Carlos Danger.

Now, what is the Carlos Danger name for Carlos Danger? It must be Carlos Danger, right? Apparently not, as the generator reveals that it is Felipe Menace.

Inspecting the source code of the web page reveals that the generator function basically hashes the input names a couple times and uses those values to index into predefined tables of Carlos-Danger-style first and last names. So, unlike Anthony Weiner, which is special-cased in the code, there’s nothing special about Carlos Danger. It’ll just map into some apparently-random pair of entries from the tables.

If the Carlos Danger name for Carlos Danger isn’t Carlos Danger, is there some other name whose Carlos Danger name is itself? Since there is a fairly small, fixed set of names, this is pretty easy to find out by searching the entire name space, as it were. A quick transliteration of the function into Java later (including a small wrestling match with character encodings), I have the answer:
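The search itself is simple once the function is in hand. Here’s a sketch of the brute-force approach; the tables and hash below are made-up stand-ins for illustration, not Slate’s actual data or code:

```java
import java.util.*;
import java.util.function.UnaryOperator;

public class FixedPointSearch {
    // Made-up stand-ins for the generator's name tables.
    static List<String> FIRST = Arrays.asList(
        "Carlos", "Felipe", "Ricardo", "Mariano", "Miguel");
    static List<String> LAST = Arrays.asList(
        "Danger", "Menace", "Distress", "Dynamite", "Peril");

    // Made-up stand-in for the generator: hash the input name and index
    // into the tables, as the real page reportedly does.
    static UnaryOperator<String> generator = name -> {
        int h = name.hashCode();
        return FIRST.get(Math.floorMod(h, FIRST.size())) + " "
             + LAST.get(Math.floorMod(h >>> 16, LAST.size()));
    };

    public static void main(String[] args) {
        int count = 0;
        // The name space is small and fixed, so just try every name.
        for (String f : FIRST) {
            for (String l : LAST) {
                String name = f + " " + l;
                if (generator.apply(name).equals(name)) {
                    System.out.println("fixed point: " + name);
                    count++;
                }
            }
        }
        System.out.println("found " + count + " fixed point(s) out of "
            + (FIRST.size() * LAST.size()) + " names");
    }
}
```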

  • The Carlos Danger name for Mariano Dynamite is Mariano Dynamite.
  • The Carlos Danger name for Miguel Ángel Distress is Miguel Ángel Distress.

You heard it here first, folks.

Finally, if you ever run into Ricardo Distress, tell him I said hi.

I gave a talk at Devoxx UK 2013 entitled Accelerated Lambda Programming. Here is the slide presentation from that talk.

There are just a few introductory slides in the slide deck, after which most of the talk consisted of live programming demos in NetBeans. Below is the sample code from the demo, cleaned up, merged into a single file, and updated for Lambda build b88. The conference was several weeks ago, and I did the demos using build b82. The APIs have changed a little bit, but not that much, certainly much less than the amount they changed in the few weeks leading up to b82.

(Here is a link to JDK 8 early access builds with lambda support. The lambda support is at this writing still being integrated into the JDK 8 mainline, so it may still be a few weeks before you can run this code on the mainline JDK 8 builds. Also, here is a link to NetBeans builds with lambda support. Most recent builds should work fine.)

I’ve included extensive annotations along with the code that attempt to capture the commentary I had made verbally while giving the talk. At some point the video is supposed to be posted, but it isn’t yet (and in any case subscription might be required to view the video).

The APIs are still in a state of flux and they may still change. Please let me know if you have trouble getting this stuff to work. When all the lambda APIs are integrated into the JDK 8 mainline, I’ll update the sample code here if necessary.

Enjoy!

package com.example;

import java.io.*;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.*;

/**
 * Sample code from my "Accelerated Lambda Programming" talk at Devoxx UK,
 * March 2013. I went through most of the examples in the talk, but I think
 * I missed a couple. At least, I had intended to present all of the examples
 * shown here. (-:
 *
 * @author smarks
 */

public class AcceleratedLambda {
    static List<String> strings = Arrays.asList(
        "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine", "ten");

    // ========== SOURCES AND OPERATIONS ==========

    static void sources_and_operations() throws IOException {

        // Use a List as a stream source.
        // This just prints out each string.
        strings.stream()
               .forEach(x -> System.out.println(x));

        // Use an int range as a stream source. Note that parameter x is now
        // inferred to be an int whereas above it was a String.
        IntStream.range(0, 10)
                 .forEach(x -> System.out.println(x));

        // Use a string as a stream of chars
        // This prints integers 97, 98, 99, ... huh? The chars() method
        // provides an IntStream so println prints numbers.
        "abcdef".chars()
                .forEach(x -> System.out.println(x));

        // Cast values to char so that it prints 'a' through 'f'.
        "abcdef".chars()
                .forEach(x -> System.out.println((char)x));

        // Reads and prints each line of a text file.
        try (FileReader in = new FileReader("input.txt");
             BufferedReader br = new BufferedReader(in)) {
            br.lines()
              .forEach(x -> System.out.println(x));
        }

        // Change the terminal operation to collect the values
        // into a List.
        List<String> result =
            strings.stream()
                   .collect(Collectors.toList());
        System.out.println("result = " + result);

        // Collect the values into a Set instead of a List.
        // This will probably print the values in a different order,
        // since the set's iteration order is probably different from
        // the order of insertion.
        Set<String> result2 =
            strings.stream()
                   .collect(Collectors.toSet());
        System.out.println("result2 = " + result2);

        // Counts the number of values in the stream.
        long result3 =
            strings.stream()
                   .count();
        System.out.println("result3 = " + result3);

        // Check that every value matches a predicate.
        boolean result4 =
            strings.stream()
                   .allMatch(s -> s.length() < 10);
        System.out.println("result4 = " + result4);

        // Check that at least one value matches a predicate.
        boolean result5 =
            strings.stream()
                   .anyMatch(s -> s.length() > 5);
        System.out.println("result5 = " + result5);

        // Now add an intermediate operation: filter the stream
        // through a predicate, which passes through only the values
        // that match the predicate.
        List<String> result6 =
            strings.stream()
                   .filter(s -> s.length() > 3)
                   .collect(Collectors.toList());
        System.out.println("result6 = " + result6);

        // Map (transform) a value into a different value.
        List<String> result7 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .collect(Collectors.toList());
        System.out.println("result7 = " + result7);

        // Take a slice of a stream.
        // Also try substream(n) and limit(n).
        List<String> result8 =
            strings.stream()
                   .substream(2, 5)
                   .collect(Collectors.toList());
        System.out.println("result8 = " + result8);

        // Add intermediate operations to form a longer pipeline.
        // The peek method calls the lambda with each value as it passes by.
        List<String> result9 =
            strings.stream()
                   .filter(s -> s.length() > 4)
                   .peek(s -> System.out.println("  peeking at " + s))
                   .map(s -> s.substring(0,1))
                   .collect(Collectors.toList());
        System.out.println("result9 = " + result9);

        // Most operations are stateless in that they operate on each
        // value as it passes by. Some operations are "stateful". The
        // distinct() operation builds up a set internally and passes
        // through only the values it hasn't seen yet.
        List<String> result10 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .distinct()
                   .collect(Collectors.toList());
        System.out.println("result10 = " + result10);

        // The sorted() operation is also stateful, but it has to buffer
        // up all the incoming values and sort them before it can emit
        // the first value downstream.
        List<String> result11 =
            strings.stream()
                   .map(s -> s.substring(0,1))
                   .sorted()
                   .collect(Collectors.toList());
        System.out.println("result11 = " + result11);
    }

    // ========== SEQUENTIAL, LAZY, AND PARALLEL PROCESSING ==========

    // A fairly stupid primality tester, useful for consuming a lot
    // of CPU given a small amount of input. Don't use this code for
    // anything that really needs prime numbers.
    static boolean isPrime(long num) {
        if (num <= 1)
            return false;

        if (num == 2)
            return true;

        long limit = (long)Math.sqrt(num);
        for (long i = 3L; i <= limit; i += 2L) {
            if (num % i == 0)
                return false;
        }
        return true;
    }

    public static void lazy_and_parallel() {
        // Adjust these parameters to change the amount of CPU time
        // consumed by the prime-checking routine.
        long start = 1_000_000_000_000_000_001L; // 10^18 (one quintillion) + 1
        long count = 100L;

        // Sequential check. Takes about 30 seconds on a 2009 MacBook Pro
        // (2.66GHz Core2Duo). There should be four primes found.
        long time0 = System.currentTimeMillis();
        LongStream.range(start, start + count, 2L)
                  .filter(n -> isPrime(n))
                  .forEach(n -> System.out.println(n));
        long time1 = System.currentTimeMillis();
        System.out.printf("sequential: %.1f seconds%n", (time1-time0)/1000.0);

        // Truncate the stream after three results. The full range need not
        // be checked, so this completes more quickly (about 23 seconds).
        LongStream.range(start, start + count, 2L)
                  .filter(n -> isPrime(n))
                  .limit(3)
                  .forEach(n -> System.out.println(n));
        long time2 = System.currentTimeMillis();
        System.out.printf("limited: %.1f seconds%n", (time2-time1)/1000.0);

        // Run the full range in parallel. With two cores, this takes about
        // half the time of the first run (plus some overhead) completing
        // typically in 16 seconds. Note that results are probably returned
        // in a different order.

        // Where are the threads? A parallel stream is split into tasks
        // which are executed by the "common fork-join thread pool," new in
        // Java SE 8. See java.util.concurrent.ForkJoinPool.
        LongStream.range(start, start + count, 2L)
                  .parallel()
                  .filter(n -> isPrime(n))
                  .forEach(n -> System.out.println(n));
        long time3 = System.currentTimeMillis();
        System.out.printf("parallel: %.1f seconds%n", (time3-time2)/1000.0);
    }

    // ========== REDUCTION ==========

    static List<String> words = Arrays.asList(
        "Experience", "is", "simply", "the", "name",
        "we", "give", "our", "mistakes."); // Oscar Wilde

    // Compute the sum of the lengths of the words.
    // The non-streamy approach, using a for-loop.
    static void length0() {
        int total = 0;
        for (String s : words)
            total += s.length();
        System.out.println(total);
    }

    // An attempt at a streamy approach, mutating a captured local variable.
    // DOES NOT WORK. Captured locals cannot be mutated (they must be
    // effectively final).

    // If captured locals could be mutated, they would need to outlive
    // their enclosing scope, thus they'd have to reside on the heap. This
    // implies (a) they'd be visible from multiple threads and be subject
    // to race conditions; and (b) they'd be susceptible to memory leaks.
    // See: http://www.lambdafaq.org/
    //          what-are-the-reasons-for-the-restriction-to-effective-immutability/

//    static void length1() {
//        int total = 0;
//        words.stream()
//             .map(s -> s.length())
//             .forEach(len -> total += len);
//        System.out.println(total);
//    }

    // Work around the inability to mutate a captured local by using a
    // single-element array. DO NOT DO THIS. THIS IS BAD STYLE. This basically
    // buys into all the disadvantages local variables would have if they
    // were moved to the heap. Array elements cannot be synchronized, and
    // they cannot be volatile either, so it is essentially impossible to
    // write race-free algorithms with them. AGAIN, DO NOT DO THIS. YOU WILL
    // GET BURNED.

    static void length2() {
        int[] total = new int[1];
        words.stream()
             .map(s -> s.length())
             .forEach(len -> total[0] += len);
        System.out.println(total[0]);
    }

    // Work around the inability to mutate a captured local variable by
    // mutating an AtomicInteger. This is allowed, since the reference to
    // the AtomicInteger is final, but the AtomicInteger itself can be
    // mutated. What's more, it can be mutated safely from multiple threads.
    // This works, but is poor style, as it results in contention among
    // the threads all attempting to mutate the same variable. See
    // slides 8-9.

    static void length3() {
        AtomicInteger total = new AtomicInteger(0);
        words.stream()
             .map(s -> s.length())
             .forEach(n -> total.addAndGet(n));
        System.out.println(total.get());
    }

    // Summation using reduction. See slides 10-14.
    static void length4() {
        int total =
            words.stream()
                 .map(s -> s.length())
                 .reduce(0, (i, j) -> i + j);
        System.out.println(total);
    }

    // Use method reference instead of a lambda for addition.
    static void length5() {
        int total =
            words.stream()
                 .map(s -> s.length())
                 .reduce(0, Integer::sum);
        System.out.println(total);
    }

    // Use convenience method sum() instead of explicit reduce() method.
    // Note that we've switched from map() to mapToInt() here. The
    // plain map() results in Stream<Integer>, which works fine above,
    // but it does add boxing and unboxing overhead. Using mapToInt()
    // results in an IntStream which not only is more efficient, it also
    // has the convenience sum() method on it.
    static void length6() {
        int total =
            words.stream()
                 .mapToInt(s -> s.length())
                 .sum();
        System.out.println(total);
    }

    // ========== GROUPING ==========

    // These couple grouping examples illustrate "mutable reduction"
    // operations. (See the java.util.stream package documentation.)
    // Many kinds of reductions, such as summation, combine values to
    // create new values. Sometimes we want to build up a data structure
    // like a map. We could combine maps to create new maps, but this would
    // result in excessive copying. Instead, we perform careful mutation in
    // a collect() call at the end of a pipeline.

    // A "Collector" is an object that represents a set of functions that can
    // handle parallel mutation and combining of intermediate results. Of
    // course, the intermediate and final results must be thread-safe data
    // structures, if the reduction is done in parallel.

    // A full explanation of a Collector is beyond the scope of this example.
    // A set of prepared Collector objects can be obtained from the Collectors
    // utility class. We will show a particular kind of Collector that does
    // grouping. The idea is, for each value in the stream, a "classifier"
    // function is run over it. Typically, multiple values in the stream will
    // produce the same result from the classifier function. The values from
    // the stream are then gathered into a Map, whose keys are the results
    // of the classifier function, and whose values are lists of values that
    // correspond to the classifier results.

    // This example groups words by their first letter. Thus, given a stream
    // of strings
    //
    //     one two three four five six seven eight nine ten
    //
    // the resulting map would have key-value pairs
    //
    //     "t" => ["two", "three", "ten"]
    //     "f" => ["four", "five"]
    //     "o" => ["one"]
    //
    // and so forth.

    static void grouping1() {
        Map<String, List<String>> grouping1 =
        strings.stream()
               .collect(Collectors.groupingBy(s -> s.substring(0,1)));
        System.out.println("grouping1 = " + grouping1);
    }

    // The example above has a hard-coded policy of accumulating
    // the grouped stream values into a list. What if we didn't want
    // a list, but instead we wanted to combine the grouped values
    // together?

    // The groupingBy() method has an overload that takes a "downstream"
    // collector that takes each grouped value and combines it with
    // other values in the same grouping.

    // In this example we don't want to combine all the grouped values into
    // a list, but instead we want to get the sum of their lengths. To
    // do this, we use a similar groupingBy() call, but add a second
    // argument Collectors.reducing() to which we specify how to combine
    // (reduce) the values. For a reduction we have to provide an initial
    // value of zero; the second argument is how to get the length of a
    // single string, and the third argument is how to combine the length
    // of the current string with the running total so far. Note, this
    // reduction occurs *within* each group.

    // Thus, the result is a Map whose values aren’t lists, but instead are
    // integers representing the sums of the lengths of the strings in
    // that group:
    //
    //     "t" => 11
    //     "f" => 8
    //     "o" => 3
    //
    // and so forth.

    static void grouping2() {
        Map<String, Integer> grouping2 =
        strings.stream()
               .collect(Collectors.groupingBy(s -> s.substring(0,1),
                        Collectors.reducing(0, s -> s.length(), Integer::sum)));
        System.out.println("grouping2 = " + grouping2);
    }

    // Comment or uncomment the statements below to control
    // what you want to run.
    public static void main(String[] args) throws IOException {
        // sources_and_operations();
        // lazy_and_parallel();
        // length2();
        // length3();
        // length4();
        // length5();
        // length6();
        // grouping1();
        grouping2();
    }
}

I’m finally catching up with my backlog of items dating back to Devoxx UK 2013, which was in March!

There were a couple of testing-related events I participated in. The first was an OpenJDK TestFest, sponsored by the London Java Community, IBM, and Oracle. This wasn’t officially part of Devoxx. It was held at the Oracle offices in London the Saturday prior to Devoxx itself. There were several presentations; I gave a brief talk on OpenJDK Testing Pitfalls (slides). People spent some time hacking on actual tests, but I thought the presentations and the ensuing discussion were very helpful as well.

The other testing-related event I participated in was an OpenJDK BOF with Martijn Verburg. Hm, it’s entitled “OpenJDK Hack Session” but there wasn’t that much hacking there, though Martijn did demonstrate the new OpenJDK build system. I presented some additional material on Testing OpenJDK (slides). This presentation was less about writing tests for OpenJDK than about the difficulties we have testing OpenJDK. The biggest problem, I think, is with unreliable tests. One would hope that a failing test means that there is a bug in the system being tested. Unfortunately, in OpenJDK we have a lot of tests that are only 99% reliable. If you run the test suite regularly, especially on all the platforms, it’s very unlikely that you’ll ever get a test run with 100% of tests passing, even if there are no bugs in the system. Worse, there are bugs in the system, so test failures caused by actual bugs are mixed in with spurious test failures.
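Some quick arithmetic shows why. Assuming (purely for illustration) that each test independently passes 99.9% of the time, a suite the size of the JDK’s almost never comes up all green:

```java
public class FlakyMath {
    public static void main(String[] args) {
        int tests = 4000;        // roughly the size of the JDK test suite
        double perTest = 0.999;  // assume each test is 99.9% reliable

        // Probability that every single test passes in one run.
        double allPass = Math.pow(perTest, tests);
        System.out.printf("P(all %d tests pass) = %.4f%n", tests, allPass);
    }
}
```

This prints a probability of about 0.0183, so even with no real bugs at all, fewer than one run in fifty would be fully green; at 99% per-test reliability the probability is effectively zero.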

You can see this in the test results that Balchandra Vaidya has been posting to the OpenJDK quality-discuss mailing list. See the jdk8 b88 test results posting, for example. If you click through some links to find the Results Archive page, you’ll see that there have been 12-17 failures out of 4,000 or so tests in the JDK test suite, for the past twenty or so JDK 8 builds. Worse, they haven’t been the same failures every time, since the code and tests are constantly being modified, and the set of tests being run is shifting around as well.

This is clearly a serious problem, one that I and others at Oracle hope to make progress on in the coming months.
