JCrete 2018

Every year, intrepid Java enthusiasts converge upon the island of Crete to brave questionable air carriers, the intense heat, the allure of beaches, the temptation of tropical drinks, and the astounding disorganization of the conference; all in the quest for knowledge. Here is my trip report.


JCrete is the brainchild of Heinz Kabutz (to whose “Java Specialists” newsletter you should subscribe right now if you haven't already). It's an unconference. No big conference fee. No keynotes. Self-organizing events. It's crazy that it works, but it really does. I've learned more this year at JCrete than at last year's Java One. And had way more fun.

Ultimately, the idea is simple. What do you really get out of a conference? The keynotes? Get real. The talks? You can watch them on YouTube. The true value is to meet people, ask questions, exchange ideas. And what better venue for that than a Mediterranean island?


Somewhat to my surprise, monads seem to have entered the general consciousness of Java programmers. So much for Java being a “blue collar” language :-) One speaker gave a presentation about monads in Java that was down-to-earth and not annoying (somewhat along the lines of these slides). I liked it. A basic understanding of monads makes you understand why CompletableFuture is more awkward to use than it should be, and how to fix that.
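To make that concrete: thenApply plays the role of map and thenCompose the role of flatMap (the monadic bind), which is what makes CompletableFuture a monad in disguise. A minimal illustration (the method name doubledAsync is mine):

```java
import java.util.concurrent.CompletableFuture;

public class FutureMonad {
    // thenCompose is the monadic "bind": it flattens what would otherwise be
    // a CompletableFuture<CompletableFuture<Integer>>
    static CompletableFuture<Integer> doubledAsync(int n) {
        return CompletableFuture.completedFuture(n)
            .thenCompose(x -> CompletableFuture.completedFuture(x * 2));
    }

    public static void main(String[] args) {
        System.out.println(doubledAsync(21).join()); // prints 42
    }
}
```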

Several people recommended the Vavr library that provides immutable collections and a cleaner way of working with streams.

We had a discussion about the evils of exception handling. Like, programmers catching exceptions instead of letting them percolate to a competent handler. One person suggested the Try monad. On the other side of the spectrum, someone cited Go as a language that “gets exceptions right” (and he isn't the only one with that belief). In Go, you can call panic and your program dies. What if you don't want that? Each function can return an error code in addition to a result. Kind of like a poor man's Either monad. But then the caller needs to check for the error and pass it on to its caller, and so on. That's so tedious that a long time ago, some clever folks invented a better mechanism—exception handling.
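Go's (value, error) return style can be sketched in Java. Here is a hypothetical Result record standing in for that poor man's Either; the sketch makes the tedium visible, since every caller must check the error and manually pass it on:

```java
public class Results {
    // Hypothetical Result type: exactly one of value/error is meaningful, Go-style
    record Result<T>(T value, Exception error) {
        static <T> Result<T> ok(T value) { return new Result<>(value, null); }
        static <T> Result<T> err(Exception e) { return new Result<>(null, e); }
        boolean isOk() { return error == null; }
    }

    static Result<Integer> parse(String s) {
        try { return Result.ok(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Result.err(e); }
    }

    // The caller must check and re-propagate by hand -- exactly the
    // boilerplate that exception handling was invented to remove
    static Result<Integer> doubled(String s) {
        Result<Integer> r = parse(s);
        if (!r.isOk()) return Result.err(r.error());
        return Result.ok(2 * r.value());
    }

    public static void main(String[] args) {
        System.out.println(doubled("21").value());  // prints 42
        System.out.println(doubled("oops").isOk()); // prints false
    }
}
```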

The problem with exception handling in Java isn't exception handling—it's checked exceptions. Suppose you have a stream of Path objects and want to do something with their contents.
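For instance, given a Stream&lt;Path&gt; named paths, the natural first attempt would be:

```java
List<String> contents = paths
    .map(Files::readString) // error: unreported exception IOException
    .collect(Collectors.toList());
```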


That doesn't work, of course. The readString method (new to JDK 11, does what you think it does) throws a checked exception, so you must catch it. And then what? Yield an empty string? Turn it into an UncheckedIOException? At least that does the right thing and chains the original exception. What if it's a different exception and you have to do the chaining by hand?
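The usual workaround is to catch inside the lambda and rethrow an UncheckedIOException, which at least chains the original exception. A self-contained sketch (the helper name readUnchecked is mine):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class ReadAll {
    // Wraps the checked IOException so the method reference fits Function's signature
    static String readUnchecked(Path p) {
        try {
            return Files.readString(p);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // chains the original exception
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("jcrete", ".txt");
        Files.writeString(tmp, "hello");
        List<String> contents = Stream.of(tmp).map(ReadAll::readUnchecked).toList();
        System.out.println(contents); // prints [hello]
        Files.delete(tmp);
    }
}
```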

Why didn't the designers of the stream library do all that? They could have made the signatures of all the functional interface methods include

throws Exception

like the Callable interface does. Then the stream operations would have to catch and wrap and chain those exceptions and produce an aggregate exception (preferably unchecked :-)). That's what the Vavr library does.

Perhaps one day, checked exceptions will be abolished in Java. In the meantime, if you write a library that asks users to provide lambda expressions, don't follow the java.util.stream model of rejecting lambdas that throw checked exceptions. Why force your library users to do tediously and poorly what the library can do well?
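Concretely, a library can declare its functional interface with throws Exception, Callable-style, and do the wrapping once, centrally. A sketch under that assumption (the interface and helper names are hypothetical):

```java
import java.util.function.Function;

public class Throwing {
    // Like Callable: the user's lambda may throw any checked exception
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // The library wraps once, so users never have to
    static <T, R> Function<T, R> unchecked(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (RuntimeException e) {
                throw e; // already unchecked: pass through
            } catch (Exception e) {
                throw new RuntimeException(e); // wrap and chain the checked one
            }
        };
    }

    public static void main(String[] args) {
        Function<String, Integer> len = unchecked(s -> {
            if (s == null) throw new Exception("null input");
            return s.length();
        });
        System.out.println(len.apply("hello")); // prints 5
    }
}
```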

Desktop Clients and Jakarta EE (So Sad)

Many participants had one or more legacy Java desktop applications. Birds of a feather flocked together to bemoan the sad state of JavaFX and Java Web Start. An unkind soul suggested Electron as a replacement.

I attended a session on the future of Jakarta EE. Members of the working group explained the challenges of getting the project off the ground. As an example, Oracle's lawyers couldn't figure out whether the copyright of the current Java EE specs could be transferred. As we know from the NetBeans transition, open sourcing and ownership transfer can take a good long time.

To me, the scary part was that nobody could crisply define the scope of Jakarta EE. Everyone agreed it's a standard, but a standard for what? Is it a collection of technologies (servlets, JAX-RS, etc.)? A standard and reference implementation for an application server? An emerging architecture for microservices? All of the above?

JCrete participants reported using Spring, Play, or (gasp) Node.js. Spring was the most popular choice, and someone recommended the German book http://springbootbuch.de/, which I promptly bought. It's a good read and got me up to speed with the latest.

Other JVM Languages

Many participants had experience with Scala and Kotlin. Scala was admired but considered too complex for most developers. Several active Kotlin users were very enthusiastic about their choice, but many others didn't think it was worth the trouble to make the switch. There was a smattering of Groovy and JRuby users. It seems that Java is in no danger of becoming a second-tier language on the JVM.

What about compiling to JavaScript? I was intrigued by projects such as ScalaJS and KotlinJS, so that one can once again program browser user interfaces in a JVM language. But I couldn't find anyone who thought that this was worthwhile.

Project Amber

Project Amber incubates smaller features for upcoming versions of Java. We were very fortunate to have Rémi Forax, a member of the expert group, as a participant.

An important part of these features is pattern matching. Consider the case of a list. It can be empty or nonempty. We can model this with inheritance:

public abstract class List {
   public abstract Object head(); // not generic for simplicity
   public abstract List tail(); 
   public abstract int size();
   // more methods
}

class EmptyList extends List {
   public Object head() { throw new UnsupportedOperationException(); } 
   public List tail() { throw new UnsupportedOperationException(); } 
   public int size() { return 0; }
   // more methods
}

class NonEmptyList extends List {
   private Object h;
   private List t;

   public NonEmptyList(Object head, List tail) { h = head; t = tail; }
   public Object head() { return h; } 
   public List tail() { return t; } 
   public int size() { return 1 + t.size(); }
   // more methods
}
That's all very object-oriented, but it's also tedious. Any method must be added in three places.

If we had pattern matching, we could define a size function somewhat like this:

int size(List l) {
   return match(l)
      of EmptyList is 0
      of NonEmptyList(h, t) is 1 + size(t)
}

Here, I write match, of, and is in italics to indicate that some suitable syntax can at some point be invented.

Pattern matching is nice because it keeps all pieces in one place. (We could equally well put it as a method in the List superclass, with match(this) ...). With an open-ended inheritance hierarchy, the “polymorphic” approach where each subclass defines its own method is better. But here, we have a fixed hierarchy that will never grow.

There will be some way to describe that the List class is “sealed”, with only the two given subclasses. Then the compiler can ensure that the match is exhaustive.

How will the match case NonEmptyList(h, t) work? It has to somehow extract the head h and tail t from the List object l. In Scala, a special unapply method does this, but that's not very efficient. The current thinking is to make this work efficiently for record types.

A more general match is also contemplated that replaces a series of instanceof tests, like this:

var str = match(obj)
  of Double d is String.format("%10.2f", d.doubleValue())
  of Number n is String.format("%10d", n.intValue())
  default is String.format("%-10s", obj.toString())

So far, so good. Now, what existing programming construct is most similar to this matching operation? Either if/else if/else or switch. And switch seems a better analog because it is possible to compute an efficient “jump” to the correct branch without having to try all matches. See this video for more.
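For comparison, here is the instanceof chain that the match above would replace, wrapped in a method (the class and method names are mine):

```java
public class Formats {
    static String format(Object obj) {
        if (obj instanceof Double) {
            Double d = (Double) obj;
            return String.format("%10.2f", d.doubleValue());
        } else if (obj instanceof Number) {
            Number n = (Number) obj;
            return String.format("%10d", n.intValue());
        } else {
            return String.format("%-10s", obj.toString());
        }
    }

    public static void main(String[] args) {
        System.out.println(format(Math.PI)); // right-aligned, two decimals
        System.out.println(format(42));      // "        42"
        System.out.println(format("hi"));    // "hi        "
    }
}
```

Note that the order of the tests matters: Double must come before the more general Number, just as in the match version.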

Then again, it's not really like switch. For one, switch is a statement, and matching yields a value—it is an expression. It's the switchy analog to the ? : operator.

And switch has fallthrough.

Personally, I hate fallthrough. But Rémi said that the expert group found a number of code examples of if instanceof/else if instanceof statements that would benefit from fallthrough if they were recoded with pattern matching. I am sure I would hate seeing those examples.

That's where another part of Project Amber comes in: switch expressions. We'll have switch expressions with and without fallthrough, and a switch statement without fallthrough, together with the classic switch statement with fallthrough.

In Java 12.

The plan is logically consistent, in a brutalist way, and really, for a book author, what's not to like? Complexity is good for business.

Still, that's not how I imagined the “frequent release cycle” to work. I would have expected a fully developed pattern matching feature to be released when, and only when, it is ready, instead of getting incremental stages that are of low value by themselves (other than for book authors). Rémi says that the expert group's thinking is different. They have thought through switch expressions for a long time, and now they feel it's ready. So why wait? The latest spec is here.

Project Valhalla

Project Valhalla is all about value types—immutable objects that don't (necessarily) live on the heap. For example, consider a value type Point with an x- and y-component. When you make an array of a million points, you get one array with two million numbers, not an array of a million references to objects, each with an object header and two numbers each.

Use cases include Optional and primitive type wrappers, where the added level of indirection is particularly annoying.

In the first iteration, support for generic value types (such as ArrayList<value type>) will be very limited. The value objects are “boxed” behind the scenes, just like primitive types are now. Longer term, this limitation will be transparently removed. At that point, the wrappers Integer, Double, and so on, can hopefully become value types.

What's not to like? There is some controversy whether value type instances can be null. Ideally, the answer should be no. Value types are like primitive types, and those can't be null. But that makes it hard to migrate an existing type such as Optional into a value type. After all, some clown somewhere might have had a variable

Optional<String> password = null; // Huh? Why not Optional.empty()

It's valid Java, so if the existing Optional is reimplemented as a value type, that code can't just fail. Even though it should. There will probably be an option to make value types nullable to deal with this.

Also, the == operator. The current thinking is that it is not a valid operation for values. After all, values are not references, so there is no reference identity to compare. In this regard, they are different from primitive types.

Why not do memberwise comparison of value objects? If the objects are wide, it's potentially expensive. Suppose that a class such as ArrayList eventually works with unboxed value types. Further suppose that some method makes a quick check for equal references before making a slower check for equality, like this:

if (val == element[i] || val.equals(element[i])) return i;

Now that “quick” check may no longer be quick.

I am not particularly convinced by this argument. No, ArrayList doesn't do this anywhere. And if it did, so what? It would have to do the expensive test sooner or later.

So, there will be three different stories for == and equals: for primitive types, reference types, and value types. More complexity—good times for book authors.

On “hack day”, we got to play a bit with value types. You can too: download the reference implementation from http://jdk.java.net/valhalla/. Also clone Rémi's repo https://github.com/forax/valuetype-lworld with examples. (Yes, he is on two expert groups.)

Compile and run with

/opt/jdk-valhalla/bin/javac -XDallowGenericsOverValues ...
/opt/jdk-valhalla/bin/java -XX:+EnableValhalla ...

Here is a simple Point value class. Note the prototype syntax—there is no official syntax yet:

public final __ByValue class Point { // A value class
  public final int x;
  public final int y;

  private Point() {
     x = 0;
     y = 0;
  }

  public static Point of(int x, int y) {
    var p = __MakeDefault Point(); // Constructs default value
    p = __WithField(p.x, x); // Yields a new Point whose x field has been updated
    p = __WithField(p.y, y); // Ditto with y
    return p;
  }

  public static void main(String[] args) {
    var p = Point.of(2, 3);
    System.out.println(p); // toString is automatically generated
    System.out.println(p.hashCode()); // So is hashCode
    System.out.println(p.equals(Point.of(2, 3))); // And equals
    // System.out.println(p == Point.of(2, 3)); // Won't compile
  }
}

Now you can allocate an array of Point values and, for comparison, an old-fashioned array of java.awt.Point objects:

long freeMem = Runtime.getRuntime().freeMemory();
var a = new Point[1_000_000];
for (int i = 0; i < a.length; i++)
   a[i] = __WithField(a[i].x, i);
long freeMem2 = Runtime.getRuntime().freeMemory();
System.out.println(freeMem - freeMem2);
var b = new java.awt.Point[1_000_000];
for (int i = 0; i < b.length; i++) {
   b[i] = new java.awt.Point();
   b[i].x = i;
}
long freeMem3 = Runtime.getRuntime().freeMemory();
System.out.println(freeMem2 - freeMem3);

When I ran this, the first allocation consumed less than 10 million bytes (you'd expect 8 million), and the second over 25 million. I have never really thought about how much memory an object consumes, but some sources claim 12 bytes per object and 4 bytes per reference. Together with the 8 bytes for the point data, the second array and the objects should then take 24 million bytes.
