Last week, Paige Niedringhaus posted an article Using Java to Read Really, Really Large Files. While one can quibble about whether the file to be processed is indeed “really, really large,” it’s large enough to expose some interesting concerns and to present some interesting opportunities for optimizations and use of newer APIs. There was some discussion on Reddit /r/java and /r/programming and a PR with an alternative implementation. Earlier today, Morgen Peschke posted an analysis and comparison to a Scala version of the program. (This is posted as a comment at the bottom of the original article.) This article is my contribution to the discussion.
When I ran Niedringhaus’ program on my machine using JDK 8, I ran into the same memory issues as did Peschke; the program consumed so much memory that it spent all its time in garbage collection. Increasing the heap size worked around this problem. Interestingly, using JDK 11, I was able to run Niedringhaus’ version successfully without increasing the heap size. (I suspect the reason is that JDK 11 uses G1GC as the default collector, and its different collection scheme avoids the pathological behavior of the Parallel GC, which is the default collector in JDK 8.)
The approach I’ll take is to retain the large lists accumulated by the original program. My presumption is that the lists are loaded into memory in order to do further analysis that isn’t part of the original program. Instead of reducing memory consumption, I focus on changing aspects of the computation to improve runtime performance. After establishing the program’s baseline performance, I proceed to show several variations on the code that successively improve its performance, along with some discussion describing the reasons for the improvement. I present a diff for each variation. Each variation, along with my final version, is also available in a gist.
I downloaded indiv18.zip on 2019-01-04 and extracted the itcont.txt file. It’s about 3.3GB in size and has 18,245,416 lines. I started with the 2019-01-05 version of Niedringhaus’ test program:
ReadFileJavaApplicationBufferedReader.java
For comparing execution times, I’m using the last time reported by the program, the “Most common name time.” The benchmark times all showed a fair amount of variability. In most cases I reran the program a few times and chose the fastest time. This isn’t very rigorous, but it should at least give an idea of the relative speeds of execution. I’ve rounded to whole seconds, because the high variability makes milliseconds superfluous.
Niedringhaus’ article reported a runtime of about 150 seconds for this version of the program. I ran the program on my laptop (MacBook Pro, mid 2014, 3GHz Intel Core i7, 2 cores, 16GB) and the execution time was about 108 seconds. I’ll use that figure as the baseline against which subsequent optimizations are compared.
Variation 1
--- ReadFileJavaApplicationBufferedReader0.java
+++ ReadFileJavaApplicationBufferedReader1.java
@@ -57,8 +57,8 @@
                 // System.out.println(readLine);
                 // get all the names
-                String array1[] = readLine.split("\\s*\\|\\s*");
-                String name = array1[7];
+                String array1[] = readLine.split("\\|", 9);
+                String name = array1[7].strip();
                 names.add(name);
                 if(indexes.contains(lines)){
                     System.out.println("Name: " + names.get(lines - 1) + " at index: " + (lines - 1));
@@ -80,7 +80,7 @@
                     }
                 }
-                String rawDate = array1[4];
+                String rawDate = array1[4].strip();
                 String month = rawDate.substring(4,6);
                 String year = rawDate.substring(0,4);
                 String formattedDate = month + "-" + year;
Applying this patch reduced the execution time from 108 seconds to about 44 seconds.
This change is actually two optimizations. String splitting is quite expensive, and it’s done once for each of the 18 million lines in the file. It’s thus quite beneficial to remove work from the program’s main loop. The String.split() call uses a regex that splits the line into fields, where the separator is a vertical bar including any adjacent whitespace. The regex pattern is compiled each time through the loop. It would save some time to compile the regex once before the loop and to reuse it. But it turns out that using a regex here is unnecessary. We can instead split on a vertical bar alone. The split() method has a fast path for single-character split patterns which avoids regexes entirely. (Since the vertical bar is a regex metacharacter, it still counts as a single character even with the backslash escapes.) Thus we don’t need to worry about pre-compiling the split pattern.
Changing the split pattern can leave unwanted whitespace in some of the fields we’re interested in. To deal with this, we call the String.strip() method to remove it from those fields. The strip() method is new in Java 11. It removes whitespace from both ends of a string, where whitespace is defined using Unicode semantics. This differs from the older String.trim() method, which uses an anachronistic definition of whitespace based on ASCII control characters.
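A tiny illustration of the difference (the em space U+2003 is used here only for demonstration; it is not a character that appears in this data set):

String s = "\u2003JOHN\u2003";          // U+2003 (em space) is Unicode whitespace
System.out.println(s.trim().length());  // 6 -- trim() removes only characters <= U+0020
System.out.println(s.strip().length()); // 4 -- strip() uses Character.isWhitespace()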
The second optimization applies a limit to the number of splits performed. Each line of the file has 21 fields. Without the limit parameter, the split() method will split the entire line into 21 fields and create string objects for them. However, the program is only interested in data from the 5th and 8th fields (array indexes 4 and 7). It’s a lot of extra work to split the remaining fields and then just to throw them away. Supplying a limit argument of 9 will stop splitting after the eighth field, leaving the remainder of the line unsplit in the last array element (at index 8). This reduces the amount of splitting work considerably.
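Here is a small illustration of the limit parameter’s effect (the record content is invented; the real file has 21 pipe-separated fields per line):

String line = "C001|N|M3|P|201701150012345|15|IND|SMITH, JOHN A|SPRINGFIELD|XX|...";
String[] fields = line.split("\\|", 9);
System.out.println(fields.length);   // 9
System.out.println(fields[4]);       // 201701150012345    (raw date field)
System.out.println(fields[7]);       // SMITH, JOHN A      (name field)
System.out.println(fields[8]);       // SPRINGFIELD|XX|... (the unsplit remainder)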
Variation 2
--- ReadFileJavaApplicationBufferedReader1.java
+++ ReadFileJavaApplicationBufferedReader2.java
@@ -29,17 +29,12 @@
         // get total line count
         Instant lineCountStart = Instant.now();
-        int lines = 0;
         Instant namesStart = Instant.now();
         ArrayList<String> names = new ArrayList<>();
         // get the 432nd and 43243 names
-        ArrayList<Integer> indexes = new ArrayList<>();
-
-        indexes.add(1);
-        indexes.add(433);
-        indexes.add(43244);
+        int[] indexes = { 0, 432, 43243 };
         // count the number of donations by month
         Instant donationsStart = Instant.now();
@@ -53,16 +48,12 @@
         System.out.println("Reading file using " + Caller.getName());
         while ((readLine = b.readLine()) != null) {
-            lines++;
             // System.out.println(readLine);
             // get all the names
             String array1[] = readLine.split("\\|", 9);
             String name = array1[7].strip();
             names.add(name);
-            if(indexes.contains(lines)){
-                System.out.println("Name: " + names.get(lines - 1) + " at index: " + (lines - 1));
-            }
             if(name.contains(", ")) {
@@ -88,11 +79,15 @@
             }
+        for (int i : indexes) {
+            System.out.println("Name: " + names.get(i) + " at index: " + (i));
+        }
+
         Instant namesEnd = Instant.now();
         long timeElapsedNames = Duration.between(namesStart, namesEnd).toMillis();
         System.out.println("Name time: " + timeElapsedNames + "ms");
-        System.out.println("Total file line count: " + lines);
+        System.out.println("Total file line count: " + names.size());
         Instant lineCountEnd = Instant.now();
         long timeElapsedLineCount = Duration.between(lineCountStart, lineCountEnd).toMillis();
         System.out.println("Line count time: " + timeElapsedLineCount + "ms");
This patch reduces the execution time from 44 seconds to about 40 seconds.
This is perhaps a bit of a cheat, but it’s another example of removing work from the inner loop. The original code maintained a list of indexes (line numbers) for which names are to be printed out. During the loop, a counter would keep track of the current line, and the current line would be queried against the list of indexes to determine if the name is to be printed out. The list is short, with only 3 items, so searching it is pretty quick. There are 18,245,416 lines in the file and only 3 indexes in the list, so searching the list for the current line number will fail 18,245,413 times. Since we’re storing all the names in a list, we can just print out the names we’re interested in after we’ve loaded them all. This avoids having to check the list within the inner loop.
The patch also stores the indexes in an array since the syntax for initializing an array is a bit more concise. It also avoids boxing overhead. Boxing of three elements isn’t a significant overhead, so it’s unlikely this makes any measurable difference in the performance. In general, I prefer to avoid boxing unless it’s necessary.
Variation 3
--- ReadFileJavaApplicationBufferedReader2.java
+++ ReadFileJavaApplicationBufferedReader3.java
@@ -44,6 +45,7 @@
         Instant commonNameStart = Instant.now();
         ArrayList<String> firstNames = new ArrayList<>();
+        var namePat = Pattern.compile(", \\s*(([^ ]*), |([^ ]+))");
         System.out.println("Reading file using " + Caller.getName());
@@ -55,20 +57,13 @@
             String name = array1[7].strip();
             names.add(name);
-            if(name.contains(", ")) {
-
-                String array2[] = (name.split(", "));
-                String firstHalfOfName = array2[1].trim();
-
-                if (!firstHalfOfName.isEmpty()) {
-                    if (firstHalfOfName.contains(" ")) {
-                        String array3[] = firstHalfOfName.split(" ");
-                        String firstName = array3[0].trim();
-                        firstNames.add(firstName);
-                    } else {
-                        firstNames.add(firstHalfOfName);
-                    }
+            var matcher = namePat.matcher(name);
+            if (matcher.find()) {
+                String s = matcher.group(2);
+                if (s == null) {
+                    s = matcher.group(3);
                 }
+                firstNames.add(s);
             }
             String rawDate = array1[4].strip();
This patch reduces the execution time from 40 to about 38 seconds.
Whereas in variation 1 we saw that reducing a regex to a single character split pattern helped provide a large speedup, in this case we’re replacing some fairly involved string splitting logic with a regex. Note that this code compiles the regex outside the loop and uses it repeatedly within the loop. In this patch I’m attempting to provide similar semantics to the splitting logic, but I’m sure there are cases where it doesn’t produce the same result. (For the input data in this file, the regex produces the same result as the splitting logic.) Unfortunately the complexity is moved out of the logic and into the regex. I’m not going to explain the regex in great detail, since it’s actually fairly ad hoc itself. One problem is that extracting a “first name” from a name field relies on European name conventions, and those conventions don’t apply to all names in this file. A second problem is that the data itself isn’t well-formed. For example, one name in the file is “FOWLER II, COL. RICHARD”. Both the splitting logic and the regex extract the first name as “COL.” which is clearly a title, not a name. It’s unclear what can be done in this case. Nevertheless, the vast majority of records in the file are well-formed, and applying European name conventions works for them. For a name record such as “SMITH, JOHN A” both the splitting logic and the regex extract “JOHN” as the first name, which is the intended behavior.
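To make the behavior concrete, here is how the Variation 3 regex treats a few representative name strings (the second input is invented for illustration; java.util.regex imports assumed):

var namePat = Pattern.compile(", \\s*(([^ ]*), |([^ ]+))");
for (String name : new String[] { "SMITH, JOHN A", "SMITH, JOHN, JR", "FOWLER II, COL. RICHARD" }) {
    var matcher = namePat.matcher(name);
    if (matcher.find()) {
        String s = matcher.group(2);
        if (s == null) {
            s = matcher.group(3);
        }
        System.out.println(name + " -> " + s);
    }
}
// SMITH, JOHN A           -> JOHN
// SMITH, JOHN, JR         -> JOHN
// FOWLER II, COL. RICHARD -> COL.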
Variation 4
--- ReadFileJavaApplicationBufferedReader3.java
+++ ReadFileJavaApplicationBufferedReader4.java
@@ -45,7 +45,7 @@
         Instant commonNameStart = Instant.now();
         ArrayList<String> firstNames = new ArrayList<>();
-        var namePat = Pattern.compile(", \\s*(([^ ]*), |([^ ]+))");
+        var namePat = Pattern.compile(", \\s*([^, ]+)");
         System.out.println("Reading file using " + Caller.getName());
@@ -59,11 +59,7 @@
             var matcher = namePat.matcher(name);
             if (matcher.find()) {
-                String s = matcher.group(2);
-                if (s == null) {
-                    s = matcher.group(3);
-                }
-                firstNames.add(s);
+                firstNames.add(matcher.group(1));
             }
             String rawDate = array1[4].strip();
This patch reduces the runtime from 38 seconds to about 35 seconds.
For reasons discussed previously, it’s difficult in general to extract the correct “first name” from a name field. Since most of the data in this file is well-formed, I took the liberty of making some simplifying assumptions. Instead of trying to replicate the original splitting logic, here I’m using a simplified regex that extracts the first non-comma, non-space sequence of characters that follows a comma-space separator. In most cases this will extract the same first name from the name field, but there are some edge cases where it returns a different result. Assuming this is acceptable, it allows a simplification of the regex and also of the logic to extract the desired substring from the match. The result is another small speedup.
Variation 5
--- ReadFileJavaApplicationBufferedReader4.java
+++ ReadFileJavaApplicationBufferedReader5.java
@@ -46,6 +46,8 @@
         ArrayList<String> firstNames = new ArrayList<>();
         var namePat = Pattern.compile(", \\s*([^, ]+)");
+        char[] chars = new char[6];
+        StringBuilder sb = new StringBuilder(7);
         System.out.println("Reading file using " + Caller.getName());
@@ -63,11 +65,12 @@
             }
             String rawDate = array1[4].strip();
-            String month = rawDate.substring(4,6);
-            String year = rawDate.substring(0,4);
-            String formattedDate = month + "-" + year;
-            dates.add(formattedDate);
-
+            rawDate.getChars(0, 6, chars, 0);
+            sb.setLength(0);
+            sb.append(chars, 0, 4)
+              .append('-')
+              .append(chars, 4, 2);
+            dates.add(sb.toString());
         }
         for (int i : indexes) {
This patch reduces the runtime from 35 seconds to about 33 seconds.
This change is primarily to reduce the amount of memory allocation within the inner loop. The previous code extracts two substrings from the raw date, creating two objects. It then appends the strings with a “-” separator, which requires creation of a temporary StringBuilder object. (This is likely still true even with JEP 280 – Indify String Concatenation in place.) Finally, the StringBuilder is converted to a String, allocating a fourth object. This last object is stored in a collection, but the first three objects are garbage.
To reduce object allocation, the patch code creates a char array and a StringBuilder outside the loop and reuses them. The character data is extracted into the char array, pieces of which are appended to the StringBuilder along with the “-” separator. The StringBuilder’s contents are then converted to a String, which is then stored into the collection. This String object is the only allocation that occurs in this step, so the patch code avoids creating any garbage.
I’m of two minds about this optimization. It does provide a few percentage points of optimization. On the other hand, it’s decidedly non-idiomatic Java: it’s rare to reuse objects this way. However, this code doesn’t introduce much additional complexity, and it does provide a measurable speedup, so I decided to keep it in. It does illustrate some techniques for dealing with character data that can reduce memory allocation, which can become expensive if done within an inner loop.
Variation 6
--- ReadFileJavaApplicationBufferedReader5.java
+++ ReadFileJavaApplicationBufferedReader6.java
@@ -115,16 +115,9 @@
             }
         }
-        LinkedList<Entry<String, Integer>> list = new LinkedList<>(map.entrySet());
+        Entry<String, Integer> common = Collections.max(map.entrySet(), Entry.comparingByValue());
-        Collections.sort(list, new Comparator<Map.Entry<String, Integer> >() {
-            public int compare(Map.Entry<String, Integer> o1,
-                               Map.Entry<String, Integer> o2)
-            {
-                return (o2.getValue()).compareTo(o1.getValue());
-            }
-        });
-        System.out.println("The most common first name is: " + list.get(0).getKey() + " and it occurs: " + list.get(0).getValue() + " times.");
+        System.out.println("The most common first name is: " + common.getKey() + " and it occurs: " + common.getValue() + " times.");
         Instant commonNameEnd = Instant.now();
         long timeElapsedCommonName = Duration.between(commonNameStart, commonNameEnd).toMillis();
         System.out.println("Most common name time: " + timeElapsedCommonName + "ms");
This patch reduces the runtime from 33 seconds to about 32 seconds.
The task here is to find the most frequently occurring first name. Instead of sorting a list of map entries, we can simply use Collections.max() to find the maximum entry according to some criterion. Also, instead of having to write out a comparator that compares the values of two map entries, we can use the Entry.comparingByValue() method to obtain such a comparator. This doesn’t result in much of a speedup. The reason is that, despite there being 18 million names in the file, there are only about 65,000 unique first names, and thus only that many entries in the map. Computing the maximum entry saves a little bit of time compared to doing a full sort, but not that much.
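As a small self-contained illustration (the map contents here are invented; assumes imports of java.util.Collections, java.util.Map, and java.util.Map.Entry):

Map<String, Integer> map = Map.of("JOHN", 42, "MARY", 17, "ROBERT", 9);
Entry<String, Integer> common = Collections.max(map.entrySet(), Entry.comparingByValue());
System.out.println(common.getKey() + ": " + common.getValue());   // prints JOHN: 42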
Variation 7
This isn’t a patch, but instead I did a general cleanup and refactoring pass. I’ll describe the changes here. The revised source file is in this gist:
ReadFileJavaApplicationBufferedReader7.java
The changes didn’t significantly affect the runtime, which remained at about 32 seconds.
There are a couple places in the original code where a frequency table is generated. The general algorithm is to create a map of items to counts (typically Integer) to hold the results. Then, for each item, if there’s no entry for it in the map, insert it with the value 1, otherwise add 1 to the value that’s already there. Several commenters have suggested using Map.merge() to make the put-or-update logic within the loop more concise. This will indeed work, but there’s a better way to do this using streams. For example, there is a list firstNames containing all the first names extracted from the file. To generate a frequency table of these names, one can use this code:
Map<String, Long> nameMap = firstNames.stream()
                                      .collect(groupingBy(name -> name, counting()));
(This assumes a static import of java.util.stream.Collectors.* or of the individual names.) See the JDK Collectors documentation for more information. Note that the count value is a Long, not an Integer. Note also that we must use boxed values instead of primitives here, because we’re storing the values into collections.
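For reference, here is what the Map.merge() approach suggested by the commenters might look like (a sketch; it produces the same counts as the collector above):

Map<String, Long> nameMap = new HashMap<>();
for (String name : firstNames) {
    nameMap.merge(name, 1L, Long::sum);   // insert 1 on first sight, otherwise add 1 to the existing count
}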
I use the same groupingBy/counting technique to generate the frequency table for dates:
Map<String, Long> dateMap = dates.stream()
                                 .collect(groupingBy(date -> date, counting()));
The typical technique for looping over a map involves iterating over the map’s entry set and extracting the key and value from each entry using the getKey() and getValue() methods. Often, a more convenient way to loop over the entries of a Map is to use the Map.forEach() method. I used this to print out the map entries from the date map:
dateMap.forEach((date, count) ->
    System.out.println("Donations per month and year: " + date + " and donation count: " + count));
What makes this quite convenient is that the key and value are provided as individual arguments to the lambda expression, avoiding the need to call methods to extract them from an Entry.
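For comparison, here is the conventional entrySet() loop that the forEach() call replaces (same output):

for (Map.Entry<String, Long> entry : dateMap.entrySet()) {
    System.out.println("Donations per month and year: " + entry.getKey()
        + " and donation count: " + entry.getValue());
}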
Instead of creating a File object, opening a FileReader on it, and then wrapping it in a BufferedReader, I used the NIO newBufferedReader() method:
BufferedReader b = Files.newBufferedReader(Path.of(FILENAME))
It’s a bit more convenient than the wrapping approach.
Other changes I made include the following:
- Unified the start time into a single Instant variable, and refactored the elapsed time reporting into a separate between() method (one possible shape of that helper is sketched after this list).
- Removed the outermost try statement whose catch block does nothing other than printing a stack trace. I see this a lot; I suspect it exists in some code template somewhere. It’s completely superfluous, because simply letting the exception propagate will cause the default exception handler to print the stack trace anyway. The only thing you might need to do is to add a throws IOException to the main() method, which is what I did in this case.
- Used interface types instead of implementation types. I used List and Map in variable declarations instead of ArrayList and HashMap. This is an example of programming to an interface, not an implementation. This is not of great consequence in a small program, but it’s a good habit to get into, especially when defining fields and methods. I could also have used var in more places, but I wanted to be explicit when I changed type arguments, e.g., from Integer to Long.
- Reindented the code. The JDK style is to use spaces for indentation, in multiples of 4. This avoids lines that are indented halfway off the right edge of the screen, but mainly I’m more comfortable with it.
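For reference, here is one plausible shape for that timing helper (hypothetical; the actual code is in the gist, the java.time imports are already present in the program, and startTime is a placeholder name):

static long between(Instant start) {
    return Duration.between(start, Instant.now()).toMillis();
}

// used like this:
System.out.println("Name time: " + between(startTime) + "ms");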
Performance Recap
Version       Time (sec)  Description
-------       ----------  -----------
Original      108         baseline
Variation 1   44          optimize line splitting
Variation 2   40          rearrange printing lines by index
Variation 3   38          use regex for extracting first name
Variation 4   35          simplified first name regex
Variation 5   33          reuse StringBuilder/char[] for date extraction
Variation 6   32          use max() instead of sort()
Variation 7   32          cleanup
Summary & Comment
The first several optimizations involved removing work from the inner loop of the program. This is fairly obvious. Since the loop is executed a lot (18 million times) even a small reduction in the amount of work can affect the program’s runtime significantly.
What’s less obvious is the effect of reducing the amount of garbage generated within a loop. When more garbage is generated, it fills up the heap more quickly, causing GC to run more frequently. The more GC runs, the less time the program can spend getting work done. Thus, reducing the amount of garbage generated can also speed up a program.
I didn’t do any profiling of this program. Normally when you want to optimize a program, profiling is one of the first things you should do. This program is small enough, and I think I have a good enough eye for spotting potential improvements, that I was able to find some significant speedups. However, if somebody were to profile my revised version, they might be able to find more things to optimize.
Typically it’s a bad idea to do ad hoc benchmarking by finding the difference between before-and-after times. This is often the case with microbenchmarking. In such cases it’s preferable to use a benchmarking framework such as JMH. I didn’t think it was strictly necessary to use a framework to benchmark this program, though, since it runs long enough to avoid the usual benchmarking pitfalls. However, the differences in the runtimes between the later optimizations are getting smaller and smaller, and it’s possible that I was misled by my informal timing techniques.
Several commenters have suggested using the Files.lines() method to get a stream of lines, and then running this stream in parallel. I’ve made a few attempts to do this but I haven’t shown any here. One issue is with program organization. As it stands, this program’s main loop extracts data into three lists. Doing this using streams involves operations with side effects (which are not recommended for parallel streams) or creating an aggregate object that can be used to accumulate the results. These are certainly reasonable approaches, but I wasn’t able to get any speedup from using parallel streams, at least on my 2-core system. The additional overhead of aggregation seemed to more than offset the benefit gained from running on two cores. It’s quite possible that with more work, or by running the program on a system with more cores, one could realize a benefit from running in parallel.
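For the curious, here is roughly the shape of the aggregate-object approach (the Results class and its method names are hypothetical, not code from any of the variations above; the merging of the large per-substream lists is where the aggregation overhead comes from):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ParallelSketch {
    static final String FILENAME = "itcont.txt";   // placeholder path

    // Each parallel substream accumulates into its own Results instance.
    static class Results {
        final List<String> names = new ArrayList<>();
        final List<String> firstNames = new ArrayList<>();
        final List<String> dates = new ArrayList<>();

        void accumulate(String line) {
            String[] fields = line.split("\\|", 9);
            names.add(fields[7].strip());
            // ... extract the first name and the date here, as in the sequential loop ...
        }

        void combine(Results other) {
            names.addAll(other.names);
            firstNames.addAll(other.firstNames);
            dates.addAll(other.dates);
        }
    }

    public static void main(String[] args) throws IOException {
        try (Stream<String> lines = Files.lines(Path.of(FILENAME))) {
            Results results = lines.parallel()
                                   .collect(Results::new, Results::accumulate, Results::combine);
            System.out.println("Names loaded: " + results.names.size());
        }
    }
}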
I believe the changes I’ve shown improve the quality of the code as well as improving its performance. But it’s possible to optimize this program even further. I have some additional changes that get the runtime consistently down to about 26 seconds. These changes involve replacing some library calls with hand-written, special-purpose Java code. I don’t usually recommend making such changes, as they result in programs that are more complicated, less maintainable, and more error-prone. That’s why I’m not showing them. The last variation shows, I think, the “sweet spot” that represents the best tradeoff between code quality and performance. It is often possible, though, to make programs go faster at the expense of making them more complicated.
With this article, I hope that I’ve been able to illustrate several programming techniques and APIs that anybody can use to speed up and improve the quality of their code, and to help people improve their Java development skills.
Sorry if I use this to shill my own shit, I don’t mean to turn this into a pissing contest.
I have my own “solution” up on https://github.com/marschall/read-file-java. My approach focuses on three things:
1. Use my own libraries for parsing CSVs. They use a MappedByteBuffer to read the file. This has the advantage that it gets rid of a lot of intermediate allocations, especially in cases like this where you only read a small part of the line. Additionally, when the input is ASCII it also saves the decoding. So this represents a best case scenario. The disadvantage is that you have to work with CharSequence, which is not that well supported by core JDK and third party libraries, so you end up rolling your own methods more frequently. (A rough illustration of the memory-mapping idea appears after this list.)
2. Use Eclipse Collections Bag to track the occurrences. Eclipse Collections are much more efficient and rich than core JDK collections and Bag is a perfect fit for the use case. Unfortunately both Eclipse Collections JARs combined are about 10 MB.
3. Use java.time.YearMonth instead of a formatted String to track the occurrences. YearMonth is semantically the correct data type, and while the shallow size (padded) is the same as that of a String, the retained size of YearMonth is smaller as it does not reference an array. Also we can get rid of the intermediate allocations for creating the instance. Unfortunately DateTimeFormatter is very inefficient and accounted for about half the allocations in the program, so I had to resort to manually parsing the date. This is not as bad as it sounds, as Integer.parseInt was updated in JDK 9 to support a CharSequence and an offset. (This is also sketched after this list.)
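Two small sketches of the techniques described in points 1 and 3 (illustrative only, not the actual library code). First, the memory-mapping idea; note that a single mapping is limited to about 2 GB, so a real reader maps the 3.3 GB file in pieces (the usual java.nio imports are assumed):

try (FileChannel ch = FileChannel.open(Path.of("itcont.txt"), StandardOpenOption.READ)) {
    long size = Math.min(ch.size(), Integer.MAX_VALUE);   // one piece is enough to show the idea
    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
    int fieldStart = 0;
    for (int i = 0; i < buf.limit(); i++) {
        byte b = buf.get(i);
        if (b == '|' || b == '\n') {
            // the bytes in [fieldStart, i) form one field; decode only the fields that are needed
            fieldStart = i + 1;
        }
    }
}

And the manual date parsing from point 3, using the JDK 9+ overload Integer.parseInt(CharSequence, beginIndex, endIndex, radix); again a rough sketch rather than the actual code:

static YearMonth parseYearMonth(CharSequence rawDate) {
    int year  = Integer.parseInt(rawDate, 0, 4, 10);   // the raw field starts with YYYYMM
    int month = Integer.parseInt(rawDate, 4, 6, 10);
    return YearMonth.of(year, month);                  // no substrings, no DateTimeFormatter
}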
The most compiled method is the one extracting the first name, but as you said it also has the most complicated logic. Otherwise I don’t feel much was sacrificed in the name of performance, and I feel using YearMonth and Bag adds to the readability and maintainability while offering better performance.
Using all these things together I could get the initial execution down to about 15 seconds on an Intel Core i7-7700T (desktop).
Now on to playing with JVM flags:
1. The live set of the program is very small. The program can be run with a 32 MB heap (-Xmx32m) without increasing the execution time.
2. As we cut back the allocations so much, the program can actually be run without a GC. It is possible to run with EpsilonGC: -Xmx6g -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC drops the execution from about 15 seconds to about 10 seconds.
3. Running with Graal also drops the execution from about 15 seconds to about 10 seconds: -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler. Unfortunately at this moment it is not possible to combine Graal and EpsilonGC.
A side note on Files.lines().parallel(): it does not close file descriptors and leaves the file undeletable on Windows until finalization occurs. See https://bugs.openjdk.java.net/browse/JDK-8129776
Sure, there are a bunch of tradeoffs everywhere. I’m not surprised that bringing in extra libraries can provide better performance. Note, I don’t claim that my code is the fastest; indeed, there are ways to speed it up that I haven’t (yet) shown. And I’m sure others can do better. Re your benchmark results, for better comparison, it would be helpful for you to run the original version and my final version to establish baselines against which to compare your modified version. After all, if your computer is twice as fast as mine, then you’ve hardly sped anything up at all. 🙂
That’s a fair point. I ran everything again and your final solution clocked in at about 18 to 19 seconds. Looks like you were right on the money and I didn’t improve the execution time that much. I guess I should feel happy about the performance of my computer.
Some observations:
Switching to ParallelOld increased the execution time of your solution to about 22 to 23 seconds. Using Graal with G1 was also around that time. Graal with ParallelOld was at about 26 seconds.
I find it quite interesting that Graal slows down your version by about the same amount as it speeds up mine.
As for the original versions:
ReadFileJavaApplicationBufferedReader and ReadFileJavaApplicationLineIterator ran in about 70 seconds. ReadFileJavaApplicationFileInputStream used about 90 seconds to finish.
Thanks for your update. I’m a bit surprised that your computer is as fast as it is. In raw CPU speed, something like Geekbench shows only a 15% or so difference between my computer’s processor (i7-4578U) and yours (i7-7700T). However, the memory subsystem and caches of your desktop are likely much better than my laptop’s, so I suspect that accounts for most of the difference. Nonetheless, comparing different versions on your system, going from 19 seconds to 15 seconds is still a pretty good speedup; it’s 20% after all.
I don’t have a full explanation of the differences caused by GC. If you’re using Graal you’re probably on JDK 8, right? It could be that the region-based collection of G1GC reduces pause times under heavy allocation conditions. I did see that, on JDK 8, Parallel GC was driven into full GC thrashing until I raised the heap size. I have no idea what’s going on with Graal. I don’t have a good idea of how to model its performance at this point.
I have some additional optimizations up my sleeve; watch this space. It would be interesting to do some followup performance comparisons.
Thanks for this interesting post. I had some fun over the weekend implementing this in another language (Go), iterating from your “Variation 7” down to 6.4 seconds on my 2017 MBP (i7-7920HQ). In terms of correctness, my goal was to produce the same output as “Variation 7” while still using idiomatic Go code.
As a baseline, “Variation 7” in Java (JDK 11) on my Mac runs in 23s, vs. the 18s of P. Marschall and 32s of yours.
During my iterations I came to the conclusion that Go’s regex implementation performs somewhat poorly compared to Java’s, as seen both on Go’s issue tracker and on the benchmarks-game site (https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/go.html), where Go’s implementation takes 2.8x the time of Java’s. This might also depend on the specific regular expression used, but I’m not that much of an expert in this field.
Anyway, my fastest iteration that still uses a regex for the first-name parsing came in at 8.3s on its best run.
I documented the runs and the different steps over 8 iterations of my implementation of Variation 7 in Go. I haven’t yet played with the GC configuration, nor optimized the file reading itself much; I just use Go’s bufio.Scanner. I hope to write down the findings of my iterations in more detail.
I’m interested to hear whether you both clear the file caches (like ‘purge’ on macOS) before your runs?
Interesting. Looking forward to your detailed results. No, I don’t do any file cache purging between runs. Since my system has sufficient RAM, I suspect the entire file is cached, and the cost is only transferring the data through the kernel. Of course, it depends on what you want to measure, but I think this is reasonable, as it exposes the performance issues in the Java application itself — which is the subject matter of the article.
As promised, a detailed look at how to port Variation 7 to Go and then 9 revisions from there, down to a 3.88 second runtime:
“Processing Large Files – Java, Go and ‘hitting the wall'”, https://marcellanz.com/post/file-read-challenge/
I also referenced other implementations I found, all interesting too in their own solutions and findings. I wrote about “how useful” it is to go down to that level of optimisation, but it was fun and interesting to do it. I left some exercises at the end too, as there is still room for improvement, both in quality of code as well as performance, IMO.
And now my attempt inspired by Marcel’s: I got rid of all regular allocations (irregular ones are still there; they are nearly impossible to avoid, unfortunately), use a tool I developed specifically for large data processing (https://github.com/sirkon/ldetool), and use a pool of workers instead of raw goroutines:
https://github.com/sirkon/ldetool.
Marcel’s Rev.9 took 4.11 seconds to complete on my machine, my version finishes in 2.2 seconds.
Made a mistake, actual link to my attempt is https://github.com/sirkon/mineislarger
@ForkOfEmac nice! thanks for sharing, today I learned something new.
I ran it with the file in the OS cache with a runtime of
“revision: mineislarger, runtime: 1.729246485s”
and then, purging the cache on macOS, consistently at ~2.7s, which is probably a more valid use case since it doesn’t make sense to read a file more than once.
Awesome contribution!
I do have the most minor of all possible corrections: my initial comment only covered the performance issues with the Java side of things; the Scala comparison wasn’t ready at that time.
I’m happy to say that it now is, and grew out of a single post of unusual length into a series of much shorter posts, the first of which can be found here: https://techblog.livongo.com/using-scala-to-read-really-really-large-files-part-0-introduction/
I ended up having to pull in help from a more Java savvy member of our team to figure out how to get the Java implementation faster, and we ran into some of the same optimizations you did, though not all of them. I was quite impressed while reading through this post by the methodical way you went about it.
Thanks, and thanks for your followup article! It’s a bit long for me to go through right at the moment but it looks very interesting. I also have some ideas for a followup article with additional optimizations. One of these days….
[…] Processing Large Files in Java – an interesting exercise in optimisation […]