
Archive for the ‘Java’ Category

This evening, I presented Collections Refueled at the Silicon Valley JUG. Thanks to the JUG for having me, and to the attendees for all the interesting questions!

Here are the slides for my presentation: CollectionsRefueled.pdf

 

Read Full Post »

The first segment of Episode 23 of the Java Off-Heap podcast covered the deprecation of Object.finalize in Java 9 and deprecation and finalization in general. Deprecation is a subject near and dear to my heart. The hosts even mentioned me by name. Thanks for the shout-out, guys!

I wanted to clarify a few points and to answer some of the questions that weren’t resolved in that segment of the show.

Java Finalizers vs. C++ Destructors

The role of Java’s finalizers differs from that of C++ destructors. In C++ (prior to the introduction of mechanisms like shared_ptr), any time you created something with new in a constructor, you were required to call delete on it in the destructor. People mistakenly carried this thinking over to Java and thought it was necessary to write finalize methods to null out references to other objects. (This was never necessary, and fortunately the practice seems to have died out long ago.) In Java, the garbage collector cleans up anything that resides on the heap, so it’s rarely necessary to write a finalizer.

Finalizers are useful if an object creates resources that aren’t managed by the garbage collector. Examples of this are things like file descriptors or natively allocated (“off-heap”) memory. The garbage collector doesn’t clean these up, so something else has to. In the early days of Java, finalization was the only mechanism available for cleaning up non-heap resources.

Phantom References

The point of finalization is that it allows one last chance at cleanup after an object becomes unreachable, but before it’s actually collected. One of the problems with finalization is that it allows “resurrection” of an object. When an object’s finalize method is called, it has a reference to this — the object about to be collected. It can hook the this reference back into the object graph, preventing the object from being collected. As a result, the object can’t simply be collected after the finalize method returns. Instead, the garbage collector must run again in order to determine whether the object is truly unreachable and can therefore be collected.
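
For illustration, here’s a contrived sketch of resurrection (overriding finalize is exactly what’s being discouraged, so don’t do this in real code):

public class Resurrector {
    static Resurrector lastDitch;            // strongly reachable again after finalization

    @Override
    protected void finalize() throws Throwable {
        lastDitch = this;                    // hooks 'this' back into the object graph
    }
}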

The reference package java.lang.ref was introduced all the way back in JDK 1.2. This package includes several different reference types, including PhantomReference. The salient feature of PhantomReference is that it doesn’t allow the object to be “resurrected.” It does this by making the contained reference inaccessible. A holder of a phantom reference gets notified that the referent has become unreachable (strictly speaking, phantom-reachable), but there’s no way to get the referent out and hook it back into the object graph. This makes the garbage collector’s job easier.

Another advantage of a PhantomReference is that, like the other reference types, it can be cleared explicitly. Suppose there’s an object that holds some external resource, like a file descriptor. Typically, such objects have a close method the application should call in order to release the descriptor. Prior to the introduction of the reference types, such objects also needed a finalize method in order to clean up if the application failed to call close. The problem is that even if the application has called close, the collector needs to do finalization processing and then run again, as described above, in order to collect the object.

PhantomReference and the other reference types have a clear method that explicitly clears the contained reference. An object that has released its native resources via an explicit call to a close method would call PhantomReference.clear. This avoids a subsequent reference processing step, allowing the object to be collected immediately when it becomes unreachable.
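
As a rough sketch of that pattern (the NativeBuffer class and its native calls are hypothetical, invented for this example):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NativeBuffer implements AutoCloseable {
    private static final ReferenceQueue<NativeBuffer> QUEUE = new ReferenceQueue<>();
    // The phantom references must remain strongly reachable somewhere other than
    // the referent itself, so keep them (and the cleanup state) in a static map.
    private static final Map<PhantomReference<NativeBuffer>, Long> PENDING = new ConcurrentHashMap<>();

    private final long address;                        // hypothetical off-heap allocation
    private final PhantomReference<NativeBuffer> ref;

    public NativeBuffer(long address) {
        this.address = address;
        this.ref = new PhantomReference<>(this, QUEUE);
        PENDING.put(ref, address);
    }

    @Override
    public void close() {
        PENDING.remove(ref);
        ref.clear();               // explicit clear: no reference processing needed later
        freeNative(address);
    }

    private static void freeNative(long address) { /* release the off-heap memory */ }

    // A background thread would remove() references from QUEUE and free the memory
    // of any buffers that were never closed explicitly.
}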

Why Deprecate Object.finalize Now?

A couple of things have changed. First, JEP 277 has clarified the meaning of deprecation in Java 9 so that it doesn’t imply that the API will be removed unless forRemoval=true is specified. The deprecation of Object.finalize is an “ordinary” deprecation in that it’s not being deprecated for removal. (At least, not yet.)

A second thing that’s changed in Java 9 is the introduction of the class java.lang.ref.Cleaner. Reference processing is often fairly subtle, and there’s a lot of work to be done to create a reference queue and a thread to process references from that queue. Cleaner is basically a wrapper around ReferenceQueue and PhantomReference that makes reference handling easier.
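
Here’s a minimal sketch of the Cleaner pattern, following the style of the java.lang.ref.Cleaner class documentation (the native handle is hypothetical):

import java.lang.ref.Cleaner;

public class Resource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup state must not reference the Resource itself, or the Resource
    // could never become phantom-reachable.
    private static final class State implements Runnable {
        private final long handle;                   // hypothetical native handle
        State(long handle) { this.handle = handle; }
        @Override public void run() { /* release the native handle */ }
    }

    private final Cleaner.Cleanable cleanable;

    public Resource(long handle) {
        this.cleanable = CLEANER.register(this, new State(handle));
    }

    @Override
    public void close() {
        cleanable.clean();         // runs the cleanup action at most once, right away
    }
}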

What hasn’t changed is that for years, it’s been part of Java lore that using finalization is discouraged. It’s time to make a formal declaration, and the way to do this is to deprecate it.

Has Anything Ever Been Removed from Java SE?

The podcast episode mentioned a Quora answer by Cameron Purdy written in 2014, where he said that nothing had ever been removed from Java. When he wrote it, the statement was correct. Various features of the JDK had been removed (such as apt, the annotation processing tool), but public APIs had never been removed.

However, the following six APIs were deprecated in Java SE 8, and they have been removed from Java SE 9:

  1. java.util.jar.Pack200.Packer.addPropertyChangeListener
  2. java.util.jar.Pack200.Unpacker.addPropertyChangeListener
  3. java.util.logging.LogManager.addPropertyChangeListener
  4. java.util.jar.Pack200.Packer.removePropertyChangeListener
  5. java.util.jar.Pack200.Unpacker.removePropertyChangeListener
  6. java.util.logging.LogManager.removePropertyChangeListener

In addition, in Java SE 9, about 20 methods and six modules have been deprecated with forRemoval=true, indicating our intent to remove them from the next major Java SE release. Some of the classes and methods to be removed include:

  • java.lang.Compiler
  • Thread.destroy
  • System.runFinalizersOnExit
  • Thread.stop(Throwable)

The modules deprecated for removal are the following:

  1. java.activation
  2. java.corba
  3. java.transaction
  4. java.xml.bind
  5. java.xml.ws
  6. java.xml.ws.annotation

So yes, we are getting serious about removing stuff!

Will Finalization Be Removed?

As mentioned earlier, Object.finalize is not being deprecated for removal at this time. As such, its deprecation is merely a recommendation that developers consider migrating to alternative cleanup mechanisms. The recommended replacements are PhantomReference and the new Cleaner class.

That said, we do eventually want to get rid of finalization. It adds extra complexity to the garbage collector, and there are recurring cases where it causes performance problems.

Before we can get rid of it, though, we need to remove uses of it from the JDK. That’s more than just removing the overrides of finalize and rewriting the code to use Cleaner instead. The problem is that there are some public API classes in the JDK that override finalize and specify its behavior. In turn, their subclasses might override finalize and rely on the existing behavior of super.finalize(). Removing the finalize method would expose these subclasses to a potentially incompatible behavior change. This will need to be investigated carefully.
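
Here’s an illustrative sketch of that kind of dependency (both class names are made up):

class ManagedResource {
    // A public API class that documents cleanup behavior in its finalize method.
    @Override
    protected void finalize() throws Throwable {
        // release resources owned by ManagedResource
    }
}

class CustomResource extends ManagedResource {
    @Override
    protected void finalize() throws Throwable {
        try {
            // subclass-specific cleanup
        } finally {
            super.finalize();      // relies on the superclass's specified finalize behavior
        }
    }
}

If ManagedResource.finalize were simply removed, the call to super.finalize() would resolve up to Object.finalize, which does nothing, and the superclass’s cleanup would silently stop happening.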

There might also be a transition period where calling of the finalize method is controlled by a command-line option. This would allow testing of applications to see if they can cope without finalization. Only after a transition period would we consider removing the finalization mechanism entirely. We might even leave the finalize method declaration around for binary compatibility purposes, even after the mechanism for calling it has been removed.

As you can see, removing finalization would require a long transition period spanning several JDK releases, taking several years. That’s all the more reason to start with deprecation now.

Read Full Post »

Now that Devoxx US is imminent, it’s about time for me to post about Devoxx BE 2016, which took place in November 2016 in Antwerp. That was several months ago, which was ages in conference time, so this post is mainly a placeholder to host slides and links to the videos.

Array Linked to a List, the Full Story! – José Paumard (video)

I was surprised to find that I was mentioned by name in the abstract for this university session. José Paumard took a tweet of mine from a year earlier (actually one by my alter ego, Dr Deprecator) and turned it into an entire university session. José was happy to have me attend the session, and he was gracious enough to invite me on stage for a few comments and questions.

Streams in JDK 8: The Good, The Bad and the Ugly – Simon Ritter (BOF)

This was another of my impromptu appearances. Simon had submitted this session, and he asked me to join him in presenting it. I said that I wasn’t sure what I would speak about. He said to me, “I’ll put up a slide and say a few words about it. I’m sure you’ll have an opinion.” (He was right.) This was a BOF, so it was pretty informal, but Simon came up with some really interesting examples, and we had a good discussion and covered a lot of issues.

Simon and I will be repeating this BOF at Devoxx US this coming week.

Ask the JDK Architects – panel session (video)

This was a panel session featuring Mark Reinhold and Brian Goetz (the actual JDK architects) along with Alan Bateman and myself (JDK Core Libraries engineers). This session consisted entirely of answering questions from the audience.

Optional: The Mother of All Bikesheds – conference session (slides, video)

This was a conference session about a single Java 8 API, java.util.Optional. Some were skeptical that I could talk for an entire hour about a single API. I proved them wrong. Credit for the title goes to my übermanager at Oracle, Jeannette Hung. It refers to the many protracted mailing list discussions (“centithreads”) about the design of Optional.

Thinking in Parallel – joint conference session with Brian Goetz (slides, video)

This was an amazing experience because the auditorium was so full that people were sitting on the steps. Brian Goetz was the big draw here, but I also think it was packed because there were fewer sessions running at the same time.

* * *

I was pleased to learn that both of my conference sessions were in the top 20 talks for the conference. Thanks for your support!

Read Full Post »

I just finished a vJUG24 session entitled Optional: The Mother of All Bikesheds.

Video: YouTube

Slide deck: PDF

This slide deck has a few minor updates relative to what I presented in the vJUG24 session:

  • slides 21-22: clarify problem statement (the before and after code is correct)
  • slide 26: mention flatMap() for completeness
  • slide 31: add link to Stack Overflow question
  • slide 36: clarify reason for not deprecating Optional.get()
  • slide 42: new slide describing new methods in Java 9

For convenience, here are the seven style rules: the first six proposed in the session, and a seventh added afterward. A brief example applying a couple of them follows the list.

  1. Never, ever, use null for an Optional variable or return value.
  2. Never use Optional.get() unless you can prove that the Optional is present.
  3. Prefer alternative APIs over Optional.isPresent() and Optional.get().
  4. It’s generally a bad idea to create an Optional for the specific purpose of chaining methods from it to get a value.
  5. If an Optional chain has a nested Optional chain, or has an intermediate result of Optional, it’s probably too complex.
  6. Avoid using Optional in fields, method parameters, and collections.

On a related note, I thought of another rule after I presented the session:

  7. Don’t use an Optional to wrap any collection type (List, Set, Map). Instead, use an empty collection to represent the absence of values.
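
Here’s a small example applying rules 1 and 3, using a hypothetical findNickname lookup (the method name is invented for the example):

import java.util.Optional;

public class OptionalStyleDemo {
    // Rule 1: return an empty Optional, never null.
    static Optional<String> findNickname(String user) {
        return "duke".equals(user) ? Optional.of("The Duke") : Optional.empty();
    }

    public static void main(String[] args) {
        // Rule 3: prefer orElse/ifPresent over isPresent() followed by get().
        String display = findNickname("guest").orElse("(no nickname)");
        System.out.println(display);

        findNickname("duke").ifPresent(n -> System.out.println("Nickname: " + n));
    }
}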

Read Full Post »

JavaOne 2016 Sessions

Here are links to the JavaOne pages, slide decks, and videos for the sessions I presented at JavaOne 2016.

Collections Refueled

Enhanced Deprecation

Thinking in Parallel (with Brian Goetz)

Read Full Post »

Occasionally I see mention of a “fail-safe” Iterator in Java. What does this mean?

The context here is what happens when a collection is modified while it’s being iterated, or “concurrent modification” for short. Each collection defines its own policy for handling concurrent modification. Note that the issue is when the collection is modified directly, from “outside” the iterator. Modifying the collection via the iterator is generally permitted, with some specific exceptions.

The term “fail-safe” is not used anywhere in the Java SE specification. Instead, the specification describes four different policies for concurrent modification: fail-fast, weakly consistent, snapshot, and undefined.

1. fail-fast

Most of the non-concurrent collections in java.util have fail-fast iterators. See the ArrayList specification for example:

The iterators returned by this class’s iterator and listIterator methods are fail-fast: if the list is structurally modified at any time after the iterator is created, in any way except through the iterator’s own remove or add methods, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.
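
Here’s a tiny example that triggers the fail-fast behavior (the exception comes from the iterator used implicitly by the for-each loop):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        for (String s : list) {
            if (s.equals("a")) {
                list.remove(s);    // structural modification from "outside" the iterator
            }
        }                           // next call to next() throws ConcurrentModificationException
    }
}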

2. weakly consistent

Most of the concurrent collections in java.util.concurrent, such as ConcurrentHashMap, are weakly consistent. The definition from the java.util.concurrent package specification is:

Most concurrent Collection implementations (including most Queues) also differ from the usual java.util conventions in that their Iterators and Spliterators provide weakly consistent rather than fast-fail traversal:

  • they may proceed concurrently with other operations
  • they will never throw ConcurrentModificationException
  • they are guaranteed to traverse elements as they existed upon construction exactly once, and may (but are not guaranteed to) reflect any modifications subsequent to construction.
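
For example, here’s a minimal sketch using ConcurrentHashMap; whether the traversal observes the entry added during iteration is not guaranteed, but no exception is thrown:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        for (String key : map.keySet()) {
            map.put("c", 3);       // concurrent modification: never throws
            System.out.println(key);
        }
    }
}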

3. snapshot

A third policy is snapshot semantics. The main example of this is CopyOnWriteArrayList:

All mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array.

The “snapshot” style iterator method uses a reference to the state of the array at the point that the iterator was created. This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes to the list since the iterator was created.
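
A short sketch of snapshot semantics:

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        Iterator<String> it = list.iterator();
        list.add("b");                         // not visible to the already-created iterator
        while (it.hasNext()) {
            System.out.println(it.next());     // prints only "a"
        }
    }
}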

4. undefined

The fourth policy, if it can be called a policy, is that nothing is specified at all. The results of modifying a collection during iteration are undefined and may result in inconsistencies. The examples here are the legacy collections Vector and Hashtable and their methods that return Enumeration, including Vector.elements, Hashtable.elements, and Hashtable.keys.

If you modify a Vector while iterating over an Enumeration obtained from its elements method, it’s not difficult to get odd behaviors such as elements being skipped, elements appearing twice in the iteration, or getting an exception unexpectedly.

As an aside, when the Collections Framework was introduced in JDK 1.2, Vector and Hashtable were retrofitted to implement the new interfaces. For example, if you have a Vector, you can get an Enumeration using the Vector.elements method, or you can get an Iterator using the Vector.iterator method. The Iterator has a fail-fast policy, while the Enumeration does not. Hashtable also provides both an Enumeration and an Iterator, and only the latter has the fail-fast policy. What’s interesting is that Hashtable‘s Iterator and Enumeration are implemented by the same class which has a flag that determines whether it should behave as an Iterator or Enumeration.
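
Here’s a small sketch of the difference: the Enumeration simply keeps going after the modification, while the Iterator detects it.

import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

public class VectorTraversalDemo {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("a");
        v.add("b");

        Enumeration<String> en = v.elements();   // no concurrent modification checks
        Iterator<String> it = v.iterator();      // fail-fast

        v.add("c");                              // modify after both were created

        System.out.println(en.nextElement());    // proceeds; prints "a"
        it.next();                               // throws ConcurrentModificationException
    }
}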

After all this, where does “fail-safe” come into the picture? Answer: it doesn’t. The words “fail-safe” are never used in the Java SE specifications that describe the concurrent modification policy of a collection. As such, there is no reliable, consistent definition of “fail-safe” for an Iterator. One can attempt to apply the general concept of “fail-safety” to an Iterator, but this is open to varying, misleading, and even contradictory interpretations.

Don’t use “fail-safe” to describe a Java Iterator. Instead, use one of the documented policies listed above.

 

Read Full Post »

Some Java List Benchmarks

This is mainly a reply to a Twitter conversation I’ve been having with Ken Fogel, regarding the performance of various List implementations. But it’s of general interest to Java programmers, and probably also to others participating in the conversation, so I’m posting it publicly here.

Ken Fogel (Twitter: @omniprof) has posted some Java Collections Performance Benchmarks and is using the results to help guide what he teaches his students about data structures. Unfortunately, there are some issues with the benchmarks, which probably lead to misleading results.

Our conversation has mainly been about the java.util.LinkedList class, particularly compared to ArrayList. I’ll therefore focus on those, leaving aside the other classes benchmarked there.

Here’s a screenshot from the benchmark application:

[Screenshot: results from the collection performance benchmark application]

Ken’s instructions were to click the buttons several times in order to warm up the JIT. The numbers bounced around a bit but I think the ones displayed here are representative. At first glance the numbers seem reasonable: element access in an ArrayList is uniform, and access to the ends of a LinkedList is fast whereas access to the middle is slow. The insert/erase numbers for LinkedList are similar.

Something strange is going on with the insert/erase numbers for ArrayList. (The benchmark actually just does insertion.) Inserting at the beginning is the worst case, since the entire array has to be shifted down one position. Inserting in the middle should take about half that time. Inserting at the end should be cheap, since usually no elements need to be shifted. What’s going on?

Here’s the code for that benchmark:

// Insert at end
runningTotal = 0;
for (int x = 0; x < REPETITIONS; ++x) {
    arrayListX = new ArrayList<String>(arrayList);
    startTime = System.nanoTime();
    arrayListX.add("Dawson College");
    endTime = System.nanoTime() - startTime;
    runningTotal += endTime;
}

There are three issues that I want to discuss here.

The first issue is in regard to general issues with Java benchmarking. The JVM’s JIT can easily invalidate benchmark results if you’re not careful. One of the most common problems is dead code elimination. A full discussion of these issues is provided in this article. In general, I strongly recommend writing benchmarks using JMH, the Java (Micro)benchmark Harness. It tries very hard to prevent the JIT from invalidating your benchmark, and it’s also quite rigorous at measuring and collecting data.

The second issue is about System.nanoTime(). The code here is careful to try to measure only the actual work, setting aside loop overhead and the setup time of creating the ArrayList that’s to be modified. Unfortunately, System.nanoTime() isn’t sufficiently precise to be used this way. While it reports its results in units of nanoseconds, it’s not necessarily precise enough to measure things with a granularity of nanoseconds. For a full explanation of the issues with nanoTime(), see Nanotrusting the Nanotime, an article by Oracle performance guru Alexey Shipilëv (who is also the author of JMH). The upshot is that you can’t use nanoTime() to measure an operation that runs very quickly. For example, for the quickest operations, this benchmark reports results of around 36-38ns, whereas the JMH-reported results are around 3ns. (See below.)

The third issue is with the workload the code presents. The list is to be modified, so the pre-populated arrayList is first copied to arrayListX. Then, an element is added to arrayListX; that’s the operation that’s measured. This seems OK, but there’s more going on under the covers.

The ArrayList class stores its elements in an internal array. If the array fills up, it’s grown by 50%. Thus, the typical case is for an ArrayList to have excess capacity at the tail end of the array. Arrays in Java cannot be resized, so “growing” an array really means allocating a new array and copying the elements into it. Naturally, this can be expensive. The benchmark populates arrayList by creating it with the default length of 10 and then adding 10,000 elements. The internal array ends up being reallocated a bunch of times, and it ends up with a length of 14,053. But all of this allocation and copying happens outside the benchmark timing.

When an ArrayList is created using its copy constructor, the length of its internal array is exactly the same as the number of elements. So the new list arrayListX has an internal array length of 10,000 and it’s fully populated. When an element is added, an array of length 15,000 is created and the 10,000 elements are copied into it, along with the one added element. But arrayListX is then thrown away and a new instance is created, again with length 10,000. And again, the array is reallocated and copied when the element is added. This happens every time through the benchmark loop.

This is the worst possible case for ArrayList. The 50% growth policy is intended to allow amortization of the allocate-and-copy cost over a large number of operations, but the way this benchmark is written, no such amortization occurs. This doesn’t seem representative of ArrayList‘s performance. This also explains why insertion at the beginning and middle have about the same time as insertion at the end. I don’t know whether Ken intended for the benchmark to exercise the worst case instead of the average case. At least, this should be clarified, and an alternative benchmark provided. (For example, I changed the benchmark to remove an element instead of inserting one, and the numbers were very different.)

Benchmarks Recast into JMH

JMH-based benchmark code is here: gist

I’ve taken some of Ken’s benchmarks and have recast them into JMH. I benchmarked ArrayDeque (AD), ArrayList (AL), and LinkedList (LL). I modified Ken’s insertion benchmark so that the same list is modified every time. This avoids the copying issue I described above. However, since we’re operating on the same list, we have to remove the element that was inserted. I’m calling the result an “edit” operation (although it’s really two operations). I also copied his “access” benchmark. Finally, like Ken’s benchmarks, I also did operations on the first, middle, and last elements, with the exception of ArrayDeque which doesn’t support operations in the middle.

Ken’s benchmark also populated the lists using random words from a dictionary. I found that this caused a large amount of variability in some of the benchmarks, probably because of interference from garbage collection. I removed the dictionary and instead populated the lists with empty strings. This didn’t make any difference in the average results, but it reduced the variability considerably.

I’ve created a gist with the code for my JMH-based benchmark. It’s pretty straightforward as JMH benchmarks go. One thing of note is that I’ve arranged all of the benchmark methods to return a value. This isn’t strictly necessary, but it’s good benchmarking practice. If you don’t return a value, the JIT has the potential to declare the entire method to be dead code (if, for example, it can determine the method has no side effects) and optimize it away entirely. JMH consumes method return values in such a way as to prevent the JIT from doing that.
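
To make the structure concrete, here’s a minimal sketch of what one of these benchmarks looks like (the class and field names are illustrative rather than copied from the gist):

import java.util.ArrayList;
import java.util.List;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ListBenchSketch {
    @Param("10000")
    public int size;

    private List<String> arrayList;

    @Setup
    public void setup() {
        arrayList = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            arrayList.add("");     // empty strings, to reduce GC-induced variability
        }
    }

    // Returning the element lets JMH consume it, preventing dead-code elimination.
    @Benchmark
    public String AL_accessMiddle() {
        return arrayList.get(size / 2);
    }
}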

My system is a 2014 MacBook Pro (MacBookPro11,1) with a dual-core Intel i7 running at 3 GHz. I’m running JDK 8u65. The benchmark results are as follows:

Benchmark            (SIZE)  Mode  Samples      Score  Score error  Units
AD_accessFirst        10000  avgt        5      2.923        0.078  ns/op
AD_accessLast         10000  avgt        5      2.979        0.176  ns/op
AD_editFirst          10000  avgt        5      8.995        0.150  ns/op
AD_editLast           10000  avgt        5      5.223        0.295  ns/op
AL_accessFirst        10000  avgt        5      2.945        0.159  ns/op
AL_accessLast         10000  avgt        5      2.964        0.255  ns/op
AL_accessMiddle       10000  avgt        5      2.932        0.114  ns/op
AL_editFirst          10000  avgt        5   1898.086      100.413  ns/op
AL_editLast           10000  avgt        5     20.434        2.096  ns/op
AL_editMiddle         10000  avgt        5    893.348       31.074  ns/op
LL_accessFirst        10000  avgt        5      2.901        0.150  ns/op
LL_accessLast         10000  avgt        5      3.001        0.312  ns/op
LL_accessMiddle       10000  avgt        5   8645.194      358.769  ns/op
LL_editFirst          10000  avgt        5      8.261        0.373  ns/op
LL_editLast           10000  avgt        5     10.703        2.873  ns/op
LL_editMiddleIndx     10000  avgt        5  17075.179      730.738  ns/op
LL_editMiddleIter     10000  avgt        5   8273.984      345.155  ns/op

We can see that many of the results are as expected. Access to the ends of ArrayDeque and LinkedList is quite fast, and access to any ArrayList element is uniformly fast. Access to the middle of the LinkedList is very slow. Editing the ends of the ArrayDeque and LinkedList is also quite fast. Editing the last element of the ArrayList is pretty fast. Editing the middle is somewhat slow, and editing the front is twice as slow. This makes sense, because inserting or deleting elements requires a lot of copying. The closer to the front of the list the edit occurs, the more copying is required, so the slower it gets.

Something funny is going on with editing in the middle of the LinkedList. One benchmark, LL_editMiddleIndx, is exceptionally slow. What does it do? The benchmark has a variable “mid” which is the middle index in the list; it’s SIZE / 2. The obvious code to edit the middle of a list is this:

@Benchmark
public String LL_editMiddleIndx() {
    linkedList.add(mid, "Dawson College");
    return linkedList.remove(mid);
}

This is a terrible way to edit a linked list, because it has to traverse to the middle of the list to do the insertion, and it has to traverse to the middle again in order to do the deletion. It turns out there’s a way to keep a cursor to a location in the middle of a list, using the ListIterator class. The LL_editMiddleIter benchmark does this:

@Benchmark
public String LL_editMiddleIter() {
    ListIterator<String> iter = linkedList.listIterator(mid);
    iter.add("Dawson College");
    String r = iter.previous();
    iter.remove();
    return r;
}

It’s slightly odd because we have to call previous() to cue up the element for removal. However, as you can see from the benchmark results, this performs only one traversal instead of two, making it twice as fast as the LL_editMiddleIndx method, and it’s in line with LL_accessMiddle.

Summary & Observations

I and a few other folks on Twitter have been making the case against LinkedList. For example, at one point I said that “LinkedList is much less useful than people think.” (tweet) I think the benchmark results I’ve presented here bear this out.

Looking at just these operations on ArrayList and LinkedList, there are clear tradeoffs. Element access in ArrayList is faster, but editing potentially involves copying O(n) elements, which can be expensive. Many people claim that LinkedList has the advantage for editing operations because they are O(1). This is true, but only at the ends of the list, or if you already have the location in the list where you want to do the editing. (Such as with a ListIterator.) If you have to search for the location, or if the index is somewhere in the middle, you have to pay the traversal cost to get to that location.

The kicker here is that traversing through a LinkedList is considerably more expensive than copying the elements of an ArrayList. If we assume that the edit location is uniformly distributed within the list, the average cost of element copying in an ArrayList is around 0.9µs. For a LinkedList, however, the average cost of traversing to a location is over 4µs. There are clearly workloads where a LinkedList will outperform an ArrayList. However, in many cases, the traversal cost of the LinkedList is so much more expensive than the copying cost of the ArrayList, it more than offsets LinkedList‘s O(1) advantage in editing operations.

That’s why I claim that ArrayList is usually preferable, and LinkedList should (almost) never be used.

 

Read Full Post »

I’m presenting one session at the Devoxx conference in Antwerp, at 15:10 on Wednesday 11 November 2015, in room 5. Here is the slide deck:

Marks-CollectionsYoungPups-v2 (PDF)

Read Full Post »

Hi everybody, JavaOne 2015 is already underway. For some reason my talks are all concentrated toward the end of the conference this year; in fact, three are on Wednesday! Here’s my talk schedule:

♦ API Design with Java 8 Lambdas and Streams [CON6851]
(with Brian Goetz)
Wed 2015-10-28 – 8:30am
Hilton Continental Ballroom 5
Slides: CON6851-API-Design-v2 (PDF)
Video: https://youtu.be/o10ETyiNIsM?t=24m (61 minutes)

♦ New Tricks for Old Dogs: Collections Enhancements in Java 8 [CON7432]
(with Mike Duigou)
Wed 2015-10-28 – 11:30am
Hilton Continental Ballroom 1/2/3
Slides: CON7432-Marks-CollectionsNewTricks-v3 (PDF)
See also JEP 269, “Convenience Factory Methods for Collections” (JDK 9 work-in-progress)

♦ Saving the Future from the Past: Innovations in Deprecation [CON6856]
(presented by Dr Deprecator)
Wed 2015-10-28 – 3:00pm
Hilton Continental Ballroom 5
Slides: CON6856-Marks-Deprecation-v3 (PDF)
Video: https://youtu.be/o10ETyiNIsM?t=6h54m41s (61 minutes)
News flash! JEP 277 “Enhanced Deprecation” has been posted.

♦ 20 Years of APIs: A Retrospective [CON6891]
Thu 2015-10-29 – 9:00am
Hilton Continental Ballroom 5
Slides: CON6891-Marks-API-Retrospective-v2 (PDF)
Video: https://youtu.be/0KlJSNb8GZU?t=26m25s (61 minutes)

Sorry, there’s no lambda tutorial (“Jump-Starting Lambda”) this year, nor is there a Lambda Hands on Lab. This is most unfortunate. I was planning to work with Simon Ritter (Twitter: @speakjava) on those sessions this year, with Simon taking the lead. Unfortunately, Simon was laid off from Oracle just a few weeks ago, leaving no time to rearrange the program or to find someone else to work on them. There are a number of Lambda and Streams talks that I can recommend, however:

♦ Programming with Lambdas [CON8366]
Venkat Subramaniam
Mon 2015-10-26 – 4:00pm
Hilton Continental Ballroom 5
Video: https://youtu.be/8RhwmJlZQgs?t=7h54m50s

♦ Journey’s End: Collection and Reduction in the Stream API [TUT5906]
Maurice Naftalin
Tue 2015-10-27 – 8:30am
Hilton Continental Ballroom 4

♦ Streams: the Real Powerhouse in Java 8 [CON8367]
Venkat Subramaniam
Tue 2015-10-27 – 11:00am
Hilton Continental Ballroom 4

♦ Effective Java Streams [CON7066]
Paul Sandoz
Tue 2015-10-27 – 2:30pm
Hilton Continental Ballroom 5
Video: https://youtu.be/iHHSa39p48I?t=6h15m55s

♦ Shooting the Rapids: Maximizing Performance of Java 8 Streams [CON5931]
Maurice Naftalin & Kirk Pepperdine
Wed 2015-10-28 – 3:00pm
Hilton Continental Ballroom 4

Enjoy the conference!

Read Full Post »

Since this year’s Java Day Tokyo 2015 is about to happen, I figure I should post my article about last year’s event. Unfortunately I won’t be able to attend this year. But last year I traveled to Japan for Java Day Tokyo 2014 and for a Japan Java User Group event. The trip was packed with events. I brought my family along with me, and fortunately we did have a couple of days to travel around Tokyo to relax and do some sightseeing.

JJUG CCC 2014 Spring, 18 May

The first event was the JJUG CCC Spring 2014 (Japan Java Users Group, Cross-Community Conference). This is a twice-per-year gathering of several JUGs from around Japan where they stage a full-day conference. It turned out that I was one of the keynote speakers! I was told there were over 300 people attending, making it one of the biggest JJUG events ever. Wow, I’m honored.

My presentation was Overview of Java 8 Lambda and Streams, which covered not only those topics but also default methods and method references. That’s a lot to cover, and I couldn’t go very fast because I had to pause after every sentence for consecutive translation. Still, people said they enjoyed the presentation and that they found it helpful.

Here are some pictures Yuichi Sakuraba took at the event. (He seems to be the designated conference photographer in Japan, when he’s not busy taking pictures of food.)

[Photo: Yuichi Sakuraba, 2014-05-18, CC BY-NC 2.0, original on Flickr]

[Photo: Yuichi Sakuraba, 2014-05-18, CC BY-NC 2.0, original on Flickr]

Yuichi has posted a Flickr photo set of the entire event, including a few more of me.

Java Day Tokyo, 22 May 2014

This was the main event. It was jam packed with sessions, including a set of keynotes in the morning, and five tracks in parallel in the afternoon. Here’s the agenda, and here are slides and videos from the subset of sessions that were recorded. I had two sessions in the afternoon: the first on Java 8 Lambdas, and the second on Java 8’s new Streams API. Here are some pictures I took during the keynotes.

Nandini Ramani (former VP, Oracle Java Platform Group) and Shin Ishiguro (NEC) showing off the NEC PaPeRo robot featuring Embedded Java SE:

 

[Photos]

Stephen Chin and Cameron Purdy demonstrating the Lego Duke balancing on two wheels:

[Photos]

That evening after a full day of sessions, there was a two hour “Ask the Experts” panel and I was on the panel. David Buck (Oracle JVM Sustaining) was pressed into service doing consecutive translation in both directions between the audience and the panelists. I think he did quite well considering he’s not a professional translator.

Not surprisingly (as Java 8 had just been released) most of the questions were about Lambdas and Streams. There were some pretty good questions. One question asked about some details of how lambdas are implemented. I replied that I’d try to be brief and hold my remarks to under half an hour. That got a laugh out of the audience (a Japanese audience — a first for me!). David did pretty well translating my answer, until I got to the part about the “lambda metafactory.” I’m not the real expert at this, though. Brian Goetz is, and he’s given a talk called Lambda: A Peek Under The Hood that explains the lambda implementation in great detail.

The following day, (not officially part of the conference) we had a hands-on lab in the Oracle offices where we let participants try their hand at a set of exercises that can be solved using Java 8 Lambdas and Streams.  This is similar to labs we’ve had at JavaOne and Devoxx and other conferences:

[Photo]
Like most labs, after a brief introduction, most of the participants went heads-down and worked steadily on the problems. They must have been pretty good problems, since most people were still working on them when we ran out of time!


I’m sad to be missing this year’s Japan event. Make sure you go if you get a chance. It looks like it’ll be as good if not better than last year’s!

Read Full Post »
