The other day I thought about whether it would take a “while” for a computer to solve a sudoku puzzle using a naive brute-force algorithm. I set out to find out.

In this article I use my bread-and-butter programming language Java to create such a solver in a test-driven way and also explore some simple optimizations.

Implementation idea:

  • use a backtracking algorithm, that is: recursively go from cell to cell on the puzzle board, fill in the numbers 1 to 9 and check whether all rules are satisfied. For example:
    1. Start at the top left, fill in “1”. All rules satisfied – go to the next cell.
    2. Fill in “1” – two 1s in a row – try “2”; all rules satisfied, go to the next cell. And so on.
    3. If no number satisfies the rules in a cell, go back to the previous cell and try the next number there.
  • The puzzle board is represented as a 2-dimensional array.
  • The number “0” represents an empty cell.

Recap of the sudoku rules: each horizontal and each vertical line contains the numbers 1 to 9 exactly once, and each 3×3 “subsquare” / “subboard” contains the numbers 1 to 9 exactly once.

Step 1: The Solver accepts an already completed board

When the board is already filled out, the Solver returns the board. It does not check if the board is correctly filled out. The following test checks this:

It creates a board (I call this “matrix” here) and fills it with “ones” except for the very first cell, which gets a 5. It feeds the board to the solver and checks whether it comes back solved.

Here is the code that accomplishes it:
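The listing itself is missing from this copy of the article. Based on the description that follows, a sketch of this first step might look like this (the class name, the 9×9 bounds and the left-to-right, top-to-bottom traversal are my assumptions):

```java
import java.util.Optional;

public class Solver {

    // Step 1 sketch: return the board once the coordinates run past the
    // last row, recurse over already-filled cells, give up on empty ones.
    public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
        if (y == 9) {
            return Optional.of(matrix);              // just outside the board: filled out
        }
        int nextX = (x + 1) % 9;
        int nextY = nextX == 0 ? y + 1 : y;          // move right, wrap to the next row
        if (matrix[y][x] != 0) {
            return nextField(nextX, nextY, matrix);  // cell already filled: go on
        }
        return Optional.empty();                     // can't fill in the cell yet
    }

    public static void main(String[] args) {
        int[][] matrix = new int[9][9];
        for (int[] row : matrix) {
            java.util.Arrays.fill(row, 1);           // board filled with ones...
        }
        matrix[0][0] = 5;                            // ...except the very first cell
        Optional<int[][]> solved = new Solver().nextField(0, 0, matrix);
        System.out.println(solved.isPresent());      // prints true: the board comes back
    }
}
```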

The method “nextField” takes the current coordinates x and y and the matrix aka the board. It first checks whether it is just outside the board, which means the board has been filled out. If so, it returns the board. Otherwise, if the current cell is already filled in, it recursively calls the next cell. If the current cell is not filled in, it returns an empty Optional, indicating it can’t fill in the cell.

Step 2: Adding the “horizontal rule”

Next we want to actually fill numbers into an empty cell and check them against the rule that each row contains pairwise distinct numbers.

First, here is the test:

It creates a board whose rows each count incrementally from one to nine and then “blanks” four cells. The solver should fill these cells with the correct numbers. Here is how it’s done (note: I introduce a “solve” method):
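The listing is missing from this copy; here is a sketch of what this step may have looked like (my reconstruction; note that it returns immediately when the recursion fails, which is exactly the missing backtracking step the article comes back to in Step 4):

```java
import java.util.Optional;

public class Solver {

    public Optional<int[][]> solve(int[][] matrix) {
        return nextField(0, 0, matrix);
    }

    private Optional<int[][]> nextField(int x, int y, int[][] matrix) {
        if (y == 9) {
            return Optional.of(matrix);              // past the last row: done
        }
        int nextX = (x + 1) % 9;
        int nextY = nextX == 0 ? y + 1 : y;
        if (matrix[y][x] != 0) {
            return nextField(nextX, nextY, matrix);  // pre-filled cell: skip it
        }
        for (int candidate = 1; candidate <= 9; candidate++) {
            matrix[y][x] = candidate;
            if (isPotentialLegal(matrix[y])) {       // check the current row
                // no backtracking yet: the recursion's result is returned as-is
                return nextField(nextX, nextY, matrix);
            }
        }
        return Optional.empty();
    }

    // counts occurrences; zeros (empty cells) are ignored,
    // every other number may occur at most once
    private boolean isPotentialLegal(int[] numbers) {
        int[] occurrences = new int[10];
        for (int number : numbers) {
            occurrences[number]++;
        }
        for (int number = 1; number <= 9; number++) {
            if (occurrences[number] > 1) {
                return false;
            }
        }
        return true;
    }
}
```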

“isPotentialLegal” checks for distinct numbers by counting their occurrences. It is called with all numbers of the current row. Zeros are “ignored”. If the rule is not satisfied, the next number is tried.

Step 3: Adding the “vertical rule”

Now I add the rule for columns. To create a test, I use a solved sudoku puzzle and clear some cells:

and later check for the correct solution.
The implementation is straightforward and analogous to the “horizontal rule”:
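The listing is missing here; a sketch of such a column check (this version inlines the distinct-number test instead of reusing “isPotentialLegal”; names are mine):

```java
public class ColumnRule {

    // analogous to the row check: walk the column instead of the row;
    // zeros (empty cells) are ignored, other numbers may occur only once
    static boolean columnLegal(int x, int[][] matrix) {
        boolean[] seen = new boolean[10];
        for (int y = 0; y < 9; y++) {
            int number = matrix[y][x];
            if (number != 0) {
                if (seen[number]) {
                    return false;    // a number occurs twice in this column
                }
                seen[number] = true;
            }
        }
        return true;
    }
}
```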

Step 4: Adding the “subquadrant rule”

I wondered a bit about how to create a puzzle that would not be solvable without the subquadrant rule, but the original puzzle from Step 3 already does the job. This time it has far more empty cells:

So here is the subquadrant rule. The key is to get the coordinates of the “subquadrant” right: integer division does the job, i.e. “(x/3)*3”. For example, x=4 gets us “3” because x=4 lies in the middle subquadrant, which starts at x=3. I use an extra method here because of the computation of the subquadrant start:
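The listing is missing here; a sketch of the rule with the extra start-coordinate method (names are mine):

```java
public class SubquadrantRule {

    // integer division maps a coordinate to the start of its subquadrant:
    // 0,1,2 -> 0; 3,4,5 -> 3; 6,7,8 -> 6
    static int subquadrantStart(int coordinate) {
        return (coordinate / 3) * 3;
    }

    static boolean subquadrantLegal(int x, int y, int[][] matrix) {
        int startX = subquadrantStart(x);
        int startY = subquadrantStart(y);
        boolean[] seen = new boolean[10];
        for (int dy = 0; dy < 3; dy++) {
            for (int dx = 0; dx < 3; dx++) {
                int number = matrix[startY + dy][startX + dx];
                if (number != 0) {
                    if (seen[number]) {
                        return false;    // a number occurs twice in the 3x3 block
                    }
                    seen[number] = true;
                }
            }
        }
        return true;
    }
}
```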

That did not make the test pass, though! It turned out I had missed the backtracking step, i.e. what happens when the recursion does not return a valid result – try the next number (line 29):

Moreover, line 31 “empties” the cell so that we leave it in its starting state.
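Since the listings are missing from this copy of the article, here is a self-contained sketch of the complete solver, including the backtracking step and the cell reset. All names and the exact layout are my reconstruction, so the line numbers mentioned above do not apply to it:

```java
import java.util.Optional;

public class SudokuSolver {

    public Optional<int[][]> solve(int[][] matrix) {
        return nextField(0, 0, matrix);
    }

    private Optional<int[][]> nextField(int x, int y, int[][] matrix) {
        if (y == 9) {
            return Optional.of(matrix);              // past the last row: solved
        }
        int nextX = (x + 1) % 9;
        int nextY = nextX == 0 ? y + 1 : y;
        if (matrix[y][x] != 0) {
            return nextField(nextX, nextY, matrix);  // pre-filled cell: skip it
        }
        for (int candidate = 1; candidate <= 9; candidate++) {
            matrix[y][x] = candidate;
            if (rowLegal(y, matrix) && columnLegal(x, matrix) && subquadrantLegal(x, y, matrix)) {
                Optional<int[][]> result = nextField(nextX, nextY, matrix);
                if (result.isPresent()) {
                    return result;
                }
                // backtracking step: the recursion failed, try the next number
            }
        }
        matrix[y][x] = 0;                            // leave the cell in its starting state
        return Optional.empty();
    }

    private boolean rowLegal(int y, int[][] matrix) {
        return isPotentialLegal(matrix[y]);
    }

    private boolean columnLegal(int x, int[][] matrix) {
        int[] column = new int[9];
        for (int y = 0; y < 9; y++) {
            column[y] = matrix[y][x];
        }
        return isPotentialLegal(column);
    }

    private boolean subquadrantLegal(int x, int y, int[][] matrix) {
        int startX = (x / 3) * 3;                    // integer division finds the
        int startY = (y / 3) * 3;                    // subquadrant's upper left corner
        int[] numbers = new int[9];
        int i = 0;
        for (int dy = 0; dy < 3; dy++) {
            for (int dx = 0; dx < 3; dx++) {
                numbers[i++] = matrix[startY + dy][startX + dx];
            }
        }
        return isPotentialLegal(numbers);
    }

    // zeros (empty cells) are ignored; every other number may occur at most once
    private boolean isPotentialLegal(int[] numbers) {
        int[] occurrences = new int[10];
        for (int number : numbers) {
            occurrences[number]++;
        }
        for (int number = 1; number <= 9; number++) {
            if (occurrences[number] > 1) {
                return false;
            }
        }
        return true;
    }
}
```

Clearing a few cells of any valid, completed board and calling solve should restore exactly those values.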

Conclusion

That’s it, I implemented a sudoku solver guided by tests. To answer my initial question: it’s fast. Well under one second! I will write a follow-up discussing some optimizations.

Many software projects use 3rd party libraries aka “dependencies”. You often want to use the most recent version of these dependencies, but how do you know when a new release of a dependency is published? The more dependencies your project has, the more tiresome a manual approach to “tracking dependency updates” becomes.

In this post I explore some solutions that track dependency updates for you. I cover broad solutions (libraries.io and dependabot) and Java-only solutions (“artifact listener” and a Gradle/Maven plugin).

Why update?

But why do we want to update dependencies at all?

A new version of a dependency

  • may fix bugs that affect your project
  • may introduce new features that you could use
  • may fix a security issue that affects your project
  • may have other optimizations to the code

Of course there is a risk as well: a new version may introduce a bug that affects your project. Plus, there might be API changes that require changes in your code.

Tracking solutions

Renovate

(update) I now use renovatebot because it integrates nicely with Gitlab CI. Much like dependabot (see below), it scans “dependency files” like “build.gradle”, “pom.xml” or “package.json” and creates merge requests for dependency updates.

Libraries.io

In their own words:

Libraries.io can automatically keep track of all of the packages that your repositories depend upon across many different package managers.

Once synced, Libraries.io will email you about new versions of your dependencies, if you add or remove a new dependency it will change the notifications settings for that package as soon as you push to your repositories.

Repositories on GitHub, GitLab and Bitbucket are supported. Plus, you can subscribe to dependencies manually, i.e. without a repository on any of these platforms.

Besides email notifications, you can also subscribe to an RSS feed of your dependency updates.

Libraries.io is an open source project.

artifact listener

Artifact Listener is a small service and only available for Java / Maven Central. You can search for libraries and “follow” them. Alternatively, you can upload a POM and then choose which dependencies to follow. Updates of libraries you follow are emailed to you.

You can provide additional email addresses to notify, e.g. the email addresses of other team members. This is a small but lovely feature for me.

The service is an open source project.

Dependabot

Dependabot checks the “dependency files” (where your dependencies are defined) in your GitHub repos for updates. If there is an update, it creates a PR for it. The PR may contain links, release notes, a list of commits etc.

So this service not only notifies you about an update but even creates a PR that applies it. You just have to merge it (at least if your project is on GitHub).

Dependabot has been acquired by GitHub and is free of charge.

Gradle plugin

If you are using Gradle (a Java build system) to declare dependencies and build your project you can use the Gradle versions plugin to detect dependency updates and report them. It is easy to use. You just need to execute it on a regular basis.

Maven plugin

Of course, there is a similar plugin for Maven (another Java build system).

In the last post I reviewed Java lambda expressions. They represent a concise syntax to implement functional interfaces.

Enter Java method references. They represent a concise syntax to implement functional interfaces using existing methods. As with lambda expressions, referenced methods are not allowed to throw checked exceptions (unless the functional interface declares them).

Syntax

It’s simply “class-or-instance name” “::” “method name”, like
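The article’s example is missing in this copy; a typical one (my own) is a reference to the static method Integer.parseInt:

```java
import java.util.function.Function;

public class MethodRefSyntax {
    public static void main(String[] args) {
        // "class name" :: "method name" implements Function<String, Integer>
        Function<String, Integer> parse = Integer::parseInt;
        System.out.println(parse.apply("42"));   // prints 42
    }
}
```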

Types of method references

Reference to a static method

Static methods are referenced using the class name like in the example above.

Reference to an instance method of a particular object

Methods of a particular object are referenced using the variable name of that object:
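The example is missing here; my own sketch, binding a reference to one particular list object:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BoundMethodRef {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        // the reference is bound to the particular object "names"
        Consumer<String> addName = names::add;
        addName.accept("Alice");
        System.out.println(names);   // prints [Alice]
    }
}
```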

Reference to an instance method of an arbitrary object of a particular type

Instead of using an already existing object, you can just state the class and a non-static method. The instance then becomes an additional parameter. In the following example, toURI is a method of File with no arguments that returns a URI. The function of this method reference takes a File (the object) and returns a URI:
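The example is missing here; my own sketch of it:

```java
import java.io.File;
import java.net.URI;
import java.util.function.Function;

public class UnboundMethodRef {
    public static void main(String[] args) {
        // File::toURI has no receiver object yet, so the File becomes
        // the function's parameter: Function<File, URI>
        Function<File, URI> toUri = File::toURI;
        System.out.println(toUri.apply(new File("example.txt")));
    }
}
```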

Reference to a constructor

Constructors are referenced using their type and “new”:
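The example is missing here; a sketch of what it likely showed:

```java
import java.util.function.Function;

public class ConstructorRef {
    public static void main(String[] args) {
        // references the StringBuffer(String) constructor:
        // parameter String, return type StringBuffer
        Function<String, StringBuffer> create = StringBuffer::new;
        StringBuffer buffer = create.apply("hello");
        System.out.println(buffer);   // prints hello
    }
}
```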

Here the constructor of StringBuffer with a String parameter is referenced. The return type of the function is the constructed type; the parameters of the function are the parameters of the constructor.


Lambda expressions in Java represent “functions”: something that takes a number of parameters and produces at most one return value.

This could be expressed with anonymous classes, but lambda expressions offer a more concise syntax.

Syntax

Lambda expressions consist of a parameter list, an “arrow” and a body.

The parameter list is enclosed in round brackets. Types are optional. When the expression has exactly one parameter (with no explicit type), the brackets can be omitted.

The body can either be an expression (that returns a value) or a block. A block is a sequence of statements, enclosed in curly braces.
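The article’s examples are missing here; a few typical forms (my own):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class LambdaSyntax {
    public static void main(String[] args) {
        // parameter list with explicit types, expression body
        BiFunction<Integer, Integer, Integer> add = (Integer a, Integer b) -> a + b;

        // types are optional
        BiFunction<Integer, Integer, Integer> mul = (a, b) -> a * b;

        // exactly one parameter: the brackets can be omitted
        Function<String, String> shout = s -> s.toUpperCase();

        // block body: statements in curly braces with an explicit return
        Function<String, String> greet = name -> {
            String greeting = "Hello, " + name;
            return greeting;
        };

        System.out.println(add.apply(1, 2));      // prints 3
        System.out.println(mul.apply(3, 4));      // prints 12
        System.out.println(shout.apply("hi"));    // prints HI
        System.out.println(greet.apply("Bob"));   // prints Hello, Bob
    }
}
```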

Lambda expressions and types

In the Java type system, lambda expressions are instances of “functional interfaces”. A functional interface is an interface with exactly one abstract method.

Functional interfaces in java.util.function

The package java.util.function in the JDK contains a number of functional interfaces:

  • Function<T,U>  represents a function with one parameter of type T and return type U
  • Consumer<T>  represents a function with one parameter of type T and return type void
  • Supplier<T>  represents a function with no parameter and return type T
  • Predicate<T>  represents a function with one parameter of type T and return type boolean

Plus, variants with the “Bi” prefix exist that take two parameters, like BiPredicate. More variants exist for primitive types, like DoubleToIntFunction.
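One small example per interface (my own):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfaces {
    public static void main(String[] args) {
        Function<String, Integer> length = s -> s.length();   // String -> Integer
        Consumer<String> print = s -> System.out.println(s);  // String -> void
        Supplier<String> hello = () -> "hello";               // () -> String
        Predicate<String> isEmpty = s -> s.isEmpty();         // String -> boolean

        System.out.println(length.apply("abc"));   // prints 3
        print.accept("hi");                        // prints hi
        System.out.println(hello.get());           // prints hello
        System.out.println(isEmpty.test(""));      // prints true
    }
}
```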

User-defined functional interfaces

Any interface with exactly one abstract method can be used as the type of a lambda expression. You can mark such an interface with @FunctionalInterface.
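For example (my own interface):

```java
// A user-defined functional interface: exactly one abstract method.
// The annotation makes the compiler verify that property.
@FunctionalInterface
interface StringTransformer {
    String transform(String input);
}

public class CustomInterfaceDemo {
    public static void main(String[] args) {
        StringTransformer reverse = s -> new StringBuilder(s).reverse().toString();
        System.out.println(reverse.transform("abc"));   // prints cba
    }
}
```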

Benefits

For me, the benefits of lambda expressions are

  • concise syntax for anonymous classes that represent functional code
  • improved readability
  • encouragement of a more functional programming style

Answer: not static at all. A static inner class behaves like a normal class except that it is in the namespace of the outer class (“for packaging convenience”, as the official Java tutorial puts it).

So as an example:
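The example code is missing in this copy; a minimal sketch of what it may have looked like (names are my own):

```java
public class Outer {

    // A static nested ("static inner") class: it lives in Outer's
    // namespace but is otherwise an ordinary class.
    public static class Inner {
        private final String name;

        public Inner(String name) {
            this.name = name;
        }

        public String name() {
            return name;
        }
    }

    public static void main(String[] args) {
        // no Outer instance needed, just the qualified name:
        Outer.Inner inner = new Outer.Inner("example");
        System.out.println(inner.name());   // prints example
    }
}
```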

As opposed to a true inner (nested) class, you do not need an instance of Outer to create an instance of Inner, and Inner instances have no special knowledge about Outer instances. The Inner class behaves just like a top-level class; it just has to be qualified as “Outer.Inner”.

Why am I writing about this?

Because I was quite shocked that two of my colleagues (both seasoned Java developers) were not sure if a static inner class was about static members and therefore global state.

Maybe they do not use static inner classes.

When do I use static inner classes?

I use a static inner class

  1. when it is only of use for the outer class and independent of the (private) members of the outer class,
  2. when it is conceptually tied to the outer class (e.g. a Builder class),
  3. for packaging convenience.

Often, the visibility of the static inner class is not public. In this case there is no big difference whether I create a static inner class or a top-level class in the same source file. An alternative for the first code example therefore is:
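For example, with package-private visibility, the nested class can simply become a second top-level class in the same source file (a sketch with my own names):

```java
// Outer.java: one source file, two top-level classes
public class Outer {
    Inner makeInner(String name) {
        return new Inner(name);
    }
}

// package-private top-level class: only visible inside the same
// package, and no longer qualified as Outer.Inner
class Inner {
    private final String name;

    Inner(String name) {
        this.name = name;
    }

    String name() {
        return name;
    }
}
```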

An example for (2) is a Builder class:
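A sketch of such a Builder (a common shape of the pattern; the names are my own):

```java
public class Person {
    private final String name;
    private final int age;

    private Person(Builder builder) {
        this.name = builder.name;
        this.age = builder.age;
    }

    public String describe() {
        return name + " (" + age + ")";
    }

    // conceptually tied to Person, but independent of any Person
    // instance: a classic use of a static inner class
    public static class Builder {
        private String name;
        private int age;

        public Builder name(String name) {
            this.name = name;
            return this;
        }

        public Builder age(int age) {
            this.age = age;
            return this;
        }

        public Person build() {
            return new Person(this);
        }
    }
}
```

Usage: new Person.Builder().name("Alice").age(30).build() creates a Person without ever touching a Person instance before build().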

If the Inner instance needs access to (private) members of the Outer instance then Inner needs to be non-static.

Brian Goetz, Java Language Architect at Oracle, gave an interesting presentation “From Concurrent to Parallel” (available on InfoQ) on the subject back in 2009. Here are the most important points of his talk.

Java 8 introduces the Stream library (which was, in Brian’s words, developed as a showcase for the new Java 8 language features). Just calling “.parallel()” advises the library to process the stream “in parallel”, i.e. in multiple threads.
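For example (my own snippet, summing a range of numbers in parallel):

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long sum = LongStream.rangeClosed(1, 1_000_000)
                             .parallel()   // process the stream in multiple threads
                             .sum();
        System.out.println(sum);           // prints 500000500000
    }
}
```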

Do you really need it?

Parallel processing is an optimization and therefore the general questions regarding optimizations have to be answered:

  • Do you have any performance requirements at all (otherwise you are already fast enough)?
  • Do you have any means to measure the performance?
  • Does the performance you measure violate the requirements?

Only if the answer to all these questions is “yes” should you take the time to investigate whether “.parallel()” will increase performance.

Why might the performance not increase?

Compared to processing a stream sequentially, processing it in parallel has overhead costs: splitting the data, managing the threads that process the data and combining the results. So you may not see as much speedup as you might hope for.

Brian talks about several factors that undermine speedup.

NQ is insufficiently high

He talks about a simple model called the “NQ model”: N*Q should be greater than 10,000. N is the number of data items; Q is a factor that expresses how CPU-expensive the processing step is. “Summing numbers” or “finding the max of integers” is very inexpensive, and Q would be near 1. More complex tasks have higher values of Q. So if all you want is to add up numbers, you need a lot of numbers to see a performance gain.

Cache-miss ratio too high

If the data to be processed is stored next to each other in RAM, it is transferred into the CPU caches together. Accessing cache memory instead of main memory is very fast, so data in an array is processed much faster than data in a linked list that is spread all over main memory. Data access is also slow when an array contains just pointers. As a demonstration, Brian shows that summing up an array of (native) integers in parallel scales well with the number of CPU cores. Summing up an array of Integer objects scales very poorly because the array contains just pointers.

The more indirections (pointers) you work with, the more cache misses make the CPU wait for memory access.

The source is expensive to split

In order to process data in parallel, the source of the data has to be split so that parts of the data can be handed to different CPUs. Splitting arrays is simple; splitting linked lists is hard (Brian said linked lists are split into “first element, rest”).

Result combination cost is too high

When the combined result is the sum of numbers, combining is easy to calculate. If the result is a set, and the combination step merges the resulting sets, it is expensive. It might be faster to sequentially add each result into one set.

Order-sensitive operations

Operations like “limit()”, “skip()” and “findFirst()” depend on the order of the data in the stream. This makes pipelines that use them “less exploitable” regarding parallelism. If the order is not meaningful to you, you can call “unordered()” and the JVM will optimize those operations.

Mutating shared state is a big NO!

Of course you will lose performance if you mutate shared state and have to guard access to it with locks etc.

Parallel streams are for CPU-heavy processing steps

Brian mentions that parallel streams were built for CPU-heavy tasks. If your processing steps mainly do IO, use thread pools and executors instead.

What degree of parallelism does the Stream API use?

As stated in the Streams API documentation, the API uses one JVM-wide fork-join pool with a number of threads that defaults to the number of processors on the system. Why? Because it is expected that a processing step will utilize the CPU as much as possible, so there is no sense in having more threads than CPU cores.
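You can inspect both numbers yourself (my own snippet; note that the common pool typically reports a parallelism of one less than the core count, because the thread submitting the work participates as well):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSize {
    public static void main(String[] args) {
        System.out.println(Runtime.getRuntime().availableProcessors());
        System.out.println(ForkJoinPool.commonPool().getParallelism());
    }
}
```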

In contrast, for IO-heavy tasks you want to have many more threads than CPU cores because the threads will wait for IO most of the time and therefore not utilize the CPU.

In Java, a synchronized method is not thread-safe if it reads from and writes to one or more static member variables.

Consider:
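The snippet is missing in this copy; a reconstruction of its shape (the class, method and field names come from the text; the println and the accessor are my additions):

```java
public class SomeClass {

    private static int someCounter = 0;   // static: ONE counter shared by all instances

    public synchronized void doSomething() {
        // synchronized locks "this", a different monitor per instance,
        // so it does not guard the shared static field
        someCounter++;
        System.out.println(someCounter);
    }

    // accessor added for demonstration purposes only
    static int counterValue() {
        return someCounter;
    }
}
```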

and assume that access to someCounter is somehow thread-safe because of the synchronized keyword on doSomething.
As soon as you call doSomething concurrently on multiple SomeClass instances, it will not print unique numbers. This is because all instances share the same static member variable, while synchronized only locks the individual instance. Between the increment of someCounter and printing it, its value might already have been changed by another instance.

That particular bug was a bit hidden because a “SomeClass” instance was “cached” in a JEE stateless session bean. Of course the JEE container creates multiple instances of the session bean and hence multiple instances of SomeClass.