The other day I wondered whether it would take a “while” for a computer to solve a sudoku puzzle using a naive brute-force algorithm. I set out to find out.

In this article I use my bread-and-butter programming language, Java, to create such a solver in a test-driven way, and I also explore some simple optimizations.

Implementation idea:

  • Use a backtracking algorithm: recursively go from cell to cell on the puzzle board, fill in numbers from 1 to 9, and check whether all rules are satisfied. For example:
    1. Start at the top left cell, fill in “1”. All rules satisfied – go to the next cell.
    2. Fill in “1” – two 1s in a row – so try “2”; all rules satisfied, go to the next cell. And so on.
    3. If no number satisfies the rules in a cell, go back to the previous cell and try the next number there.
  • The puzzle board is represented as a 2-dimensional array.
  • The number “0” represents an empty cell.

Recap of the sudoku rules: in every horizontal and every vertical line the numbers 1 to 9 each appear exactly once, and in each 3×3 “subsquare” / “subboard” the numbers 1 to 9 each appear exactly once as well.

Step 1: The Solver accepts an already completed board

When the board is already filled out, the Solver returns the board. It does not check if the board is correctly filled out. The following test checks this:

	@Test
	public void fineWithFilledMatrix() {
		final int[][] matrix = new int[9][9];
		for(int i = 0; i < matrix.length; i++) {
			for (int j = 0; j < matrix[i].length; j++) {
				matrix[i][j] = 1;
			}
		}
		matrix[0][0] = 5;
		System.out.println(Solver.matrixToString(matrix));
		final var result = new Solver().nextField(0, 0, matrix);
		System.out.println(Solver.matrixToString(matrix));
		final int[][] expected = new int[9][9];
		for(int i = 0; i < expected.length; i++) {
			for (int j = 0; j < expected[i].length; j++) {
				expected[i][j] = 1;
			}
		}
		expected[0][0] = 5;
		Assert.assertFalse(result.isEmpty());
		Assert.assertArrayEquals(expected, result.get());
	}

The test creates a board (I call it a “matrix” here) and fills it with ones, except for the very first cell, which gets a 5. It feeds the matrix to the solver and checks that it gets it back as solved.

Here is the code that accomplishes it:

package de.epischel.hello.sudoku;

import java.util.Optional;

public class Solver {

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		return Optional.empty();
	}
	
	public static String matrixToString(int[][] matrix) {
		StringBuilder sb = new StringBuilder();
		for(int y = 0; y < matrix.length; y++) {
			for(int x=0; x < matrix[y].length; x++) {
				sb.append(" ").append(matrix[y][x]).append(" ");
			}
			sb.append("\n");
		}
		return sb.toString();
	}
}

The method “nextField” takes the current coordinates x and y and the matrix, aka the board. It first checks whether the position is just outside the board, which means the board has been filled out completely. If so, it returns the board. Otherwise, if the current cell is already filled in, it recursively calls itself for the next cell. If the current cell is not filled in, it returns an empty Optional, indicating it can’t fill in the cell.

Step 2: Adding the “horizontal rule”

Next we want to actually fill numbers into an empty cell and check them against the rule that each row contains pairwise distinct numbers.

First, here is the test:

	@Test
	public void followRuleHorizontal() {
		final int[][] matrix = new int[9][9];
		for(int i = 0; i < matrix.length; i++) {
			for (int j = 0; j < matrix[i].length; j++) {
				matrix[i][j] = j+1;
			}
		}
		matrix[0][3] = 0;
		matrix[0][4] = 0;
		matrix[5][5] = 0;
		matrix[5][7] = 0;
		System.out.println(Solver.matrixToString(matrix));
		final var result = new Solver().solve(matrix);
		System.out.println(Solver.matrixToString(matrix));
		final int[][] expected = new int[9][9];
		for(int i = 0; i < expected.length; i++) {
			for (int j = 0; j < expected[i].length; j++) {
				expected[i][j] = j+1;
			}
		}
		Assert.assertFalse(result.isEmpty());
		Assert.assertArrayEquals(expected, result.get());
	}

It creates a board whose rows each contain the numbers one to nine in order and then “blanks” four cells. The solver should fill these cells with the correct numbers again. Here is how it’s done (note: I introduce a “solve” method):

	public Optional<int[][]> solve(int[][] matrix) {
		return nextField(0,0,matrix);
	}

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		for(int i = 1; i<=9; i++) {
			matrix[y][x] = i;
			// check horizontal rule
			if (!isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])) {
				continue;
			}
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
			
		}
		return Optional.empty();
	}
	
	private static boolean isPotentialLegal(int... numbers) {
		final int[] counts = new int[10];
		for(int i = 0; i < numbers.length; i++) {
			counts[numbers[i]]++;
		}
		for(int i = 1; i < counts.length; i++) {
			if (counts[i]>1) return false;
		}
		return true;
	}

“isPotentialLegal” checks for distinct numbers by counting their occurrences. It is called with all nine numbers of the current row. Zeros (empty cells) are ignored. If the rule is not satisfied, the next number is tried.

Step 3: Adding the “vertical rule”

Now I add the rule for columns. To create a test, I use a solved sudoku puzzle and clear some cells:

		final int[][] matrix = new int[][] {
			{7,9,0,3,5,4,6,0,8},
			{8,0,4,1,2,6,3,0,7},
			{3,0,1,9,8,7,5,2,4},
			//
			{9,4,5,6,0,8,1,7,2},
			{2,7,8,5,4,1,9,3,6},
			{6,1,3,0,9,2,8,4,5},
			//
			{4,2,9,8,1,5,7,6,3},
			{1,8,7,2,6,3,4,5,9},
			{5,3,6,4,7,9,2,0,0},
		};

and later check for the correct solution.
The implementation is straightforward and sits right next to the “horizontal rule”:

			if (!isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])
			  ||
			    !isPotentialLegal(
			    	matrix[0][x],matrix[1][x],matrix[2][x],
			    	matrix[3][x],matrix[4][x],matrix[5][x],
			    	matrix[6][x],matrix[7][x],matrix[8][x])
					) {
				continue;
			}

Step 4: Adding the “subquadrant rule”

I wondered a bit about how to create a puzzle that would not be solvable without the subquadrant rule, but the original puzzle behind Step 3 already does the job. It has far more empty cells:

		final int[][] matrix = new int[][] {
			{0,9,0, 0,0,0, 0,1,0},
			{8,0,4, 0,2,0, 3,0,7},
			{0,6,0, 9,0,7, 0,2,0},
			//
			{0,0,5, 0,3,0, 1,0,0},
			{0,7,0, 5,0,1, 0,3,0},
			{0,0,3, 0,9,0, 8,0,0},
			//
			{0,2,0, 8,0,5, 0,6,0},
			{1,0,7, 0,6,0, 4,0,9},
			{0,3,0, 0,0,0, 0,8,0},
		};

So here is the subquadrant rule. The key is to get the coordinates of the “subquadrant” right: integer division does the job, i.e. “(x/3)*3”. For example, x=4 yields 3, because x=4 lies in the middle subquadrant, which starts at x=3. I use an extra method here because of the computation of the subquadrant start:

	private boolean isSubquadratPotentialLegal(int x, int y, int[][] matrix) {
		final int xx = (x/3)*3;
		final int yy = (y/3)*3;
		return isPotentialLegal(
			matrix[yy][xx],matrix[yy][xx+1],matrix[yy][xx+2],
			matrix[yy+1][xx],matrix[yy+1][xx+1],matrix[yy+1][xx+2],
			matrix[yy+2][xx],matrix[yy+2][xx+1],matrix[yy+2][xx+2]);
	}

That did not make the test pass, though! It turned out I had missed the backtracking step, i.e. what happens when the recursion does not return a valid result: try the next number (the “if (result.isPresent())” line below):

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		for(int i = 1; i<=9; i++) {
			matrix[y][x] = i;
			// check horizontal rule
			if (!(isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])
			  &&
			  // check vertical rule
			    isPotentialLegal(
			    	matrix[0][x],matrix[1][x],matrix[2][x],
			    	matrix[3][x],matrix[4][x],matrix[5][x],
			    	matrix[6][x],matrix[7][x],matrix[8][x])
			  && isSubquadratPotentialLegal(x, y, matrix))) {
				continue;
			}
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			final var result = nextField(nextX, nextY, matrix);
			if (result.isPresent()) return result;
		}
		matrix[y][x] = 0;
		return Optional.empty();
	}

Moreover, the “matrix[y][x] = 0;” line near the end “empties” the cell so that we leave it in its starting state when backtracking.

Conclusion

That’s it, I implemented a sudoku solver guided by tests. To answer my initial question: it’s fast. Well under one second! I will write a follow-up discussing some optimizations.
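For reference, here is a minimal sketch of how one might time the solver. The Solver class is the one above; the timing harness itself is my own illustration, not a proper benchmark:

	public class SolverTiming {
		public static void main(String[] args) {
			// fill in a real puzzle here; 0 represents an empty cell
			final int[][] puzzle = new int[9][9];
			final long start = System.nanoTime();
			final var result = new Solver().solve(puzzle);
			final long millis = (System.nanoTime() - start) / 1_000_000;
			System.out.println("solved=" + result.isPresent() + " in " + millis + " ms");
		}
	}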

I was confused about package.json and package-lock.json and which is used when, but now that I have been “enlightened” I’ll record my new knowledge in this article.

package.json and package-lock.json

package.json lists, among other things, the dependencies you need for your JavaScript project (if you use npm). You’d edit this file manually. In contrast, package-lock.json is generated by npm.

Example package.json:

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "lodash": "^4.17.20"
  },
  "devDependencies": {
    "nodemon": "^2.0.7"
  }
}


Often, the versions listed in package.json are given as ranges. npm uses SemVer, a version scheme with three parts like a.b.c, where a, b and c are numbers (also called “major.minor.patch”). “~a.b.c” means a and b are fixed and the last part can be c or any greater number: “~4.17.1” means “4.17.x for x>=1”. “^a.b.c” means a is fixed while the minor and patch versions are variable: “^4.17.20” means “4.x.y for either (x=17 and y>=20) or x>=18”.
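You can check which concrete versions a range accepts with the CLI of the semver package (assuming npx is available; the version numbers here are made up):

$ npx semver 4.17.5 4.17.25 4.18.2 5.0.0 -r "^4.17.20"
4.17.25
4.18.2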

In contrast, package-lock.json contains the exact versions of the project’s dependencies (and their transitive dependencies and so on). When package-lock.json is generated or updated, the version range in package.json is resolved to the latest “allowed” version.
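An abbreviated example of what an entry in package-lock.json looks like (lockfile format v3; the resolved URL and integrity hash are shortened, the version is illustrative):

{
  "name": "my-project",
  "lockfileVersion": 3,
  "packages": {
    "node_modules/express": {
      "version": "4.17.3",
      "resolved": "https://registry.npmjs.org/express/-/express-4.17.3.tgz",
      "integrity": "sha512-..."
    }
  }
}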

Generating and updating package-lock.json

How do you create the lock file? “npm install” will do it.

How do you update the lock file? “npm update” will do it, usually. Say package.json specifies module A with version “^3.4.5” and the existing lock file states module A at version “3.4.20”. Then you run “npm update A” (or just “npm update”), and if there is a version “3.5.2” of A out there, npm will update the lock file to version “3.5.2” of module A.

If package.json and the lock file are out of sync (the version in package-lock.json is out of the range specified in package.json), “npm install” will correct the package-lock.json file.

Why commit the lock file?

The general advice is to commit package-lock.json to your repository. That way, every developer will be using the same versions: the ones listed in the lock file (via “npm install”).

How to upgrade dependencies?

“npm outdated” shows outdated dependencies and “npm update <pkg> --save” updates a package. Commit both files.

Another way is to use tools like dependabot or renovate which check for new versions. If a new version of a module is detected, these tools will create a branch using the new version. CI pipelines are run and pull/merge requests are created or even merged automatically.

CI pipelines

There is a special command for CI pipelines: “npm ci”. It will fail if the lock file is missing or out of sync with package.json. So the build will fail if “npm install” would change the lock file.

“npm ci” ensures that your build is always based on a consistent set of dependencies, which is important for reproducibility and stability. It also helps avoid problems that can arise from using different versions of the same package across different stages of the pipeline.
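As a sketch, a CI job using it might look like this (GitLab CI syntax; the image name and test script are assumptions):

test:
  image: node:18
  script:
    - npm ci
    - npm test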

Pinning dependencies

Pinning a dependency means using an exact version in package.json.
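For example, a pinned version instead of a range (illustrative):

{
  "dependencies": {
    "express": "4.17.1"
  }
}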

GitLab has a built-in CI system. A GitLab CI pipeline consists of a number of stages. In each stage, a number of jobs are executed in parallel. Jobs are executed as scripts within a Docker container.

In the last days I set out to expand my basic knowledge of GitLab CI pipelines (e.g. compiling a Java project with Gradle) in order to write a pipeline that compiles, tests and deploys a project to a CloudFoundry instance. Here are 3 things I learnt.

Reusing jobs with “extends”

“extends” lets us reuse a job. It is a kind of “extract method” refactoring: suppose I write a job to publish to CloudFoundry. It consists of three steps:

  1. Install the CloudFoundry CLI
  2. Log in to CloudFoundry
  3. Publish the app to CloudFoundry

A job that publishes an app to the “dev” stage differs from a job that publishes the app to the “test” stage only in step 3. So we write a job that publishes an app to CloudFoundry based on some variables (e.g. “STAGE”):

.deploy-to-cf:
  script:
    - install cf cli   # placeholder for the actual CLI install commands
    - cf login -u "$CF_USER" -p "$CF_PASSWORD" -a "$CF_URL"
    - cf push -f manifest.yml --var STAGE=$STAGE

and then our job “deploy-to-dev” just extends this job and redefines STAGE:

deploy-to-dev:
  extends: .deploy-to-cf
  variables:
    STAGE: "dev"
  only:
    - develop

We can “refactor” our CI pipeline further by “extracting” the install & login part, because we have two CF foundations and therefore two different values for “CF_USER” and “CF_PASSWORD”; see the sketch below.
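One way to do this is to put install & login into a base job’s before_script: with plain “extends”, a child’s “script” would overwrite the parent’s, so splitting the steps across “before_script” and “script” keeps both. Job names, variables and values are my own illustration:

.cf-login:
  before_script:
    - install cf cli   # placeholder for the actual CLI install commands
    - cf login -u "$CF_USER" -p "$CF_PASSWORD" -a "$CF_URL"

.deploy-to-foundation1:
  extends: .cf-login
  variables:
    CF_USER: "$FOUNDATION1_USER"
    CF_PASSWORD: "$FOUNDATION1_PASSWORD"
  script:
    - cf push -f manifest.yml --var STAGE=$STAGE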

Test results

GitLab CI understands JUnit test result XML; we just need to provide it as an artifact report:

test:
  script:
    - ./gradlew test
  artifacts:
    reports:
      junit: build/reports/**/TEST-*.xml

You can see the test results in the pipeline and in the merge request screen.

Speed up caches

Our pipeline uses the cache to avoid downloading Gradle (our build tool) and all dependencies every time a job builds or tests the code. But it is painfully slow, especially collecting the files and pushing the cache.
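For context, a Gradle cache configuration typically looks something like this (key and paths are illustrative):

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .gradle/wrapper
    - .gradle/caches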

In my case it takes several minutes (!) to collect the 3000+ files in the .gradle dir. Things sped up significantly after adding:

variables:
  FF_USE_FASTZIP: "true"
  CACHE_COMPRESSION_LEVEL: "fast"

Here is the documentation.

Are you using Github as your remote git repository? This article explains how to use an access token to authenticate yourself instead of username+password.

Create an Access Token

On Github.com, navigate to the settings menu of your Github profile. From there, choose “Developer settings” and then “Personal Access Tokens”.

Here you are able to create a personal access token:

create new token

Select scope “repo”.

The token you created is presented to you. Save it to your credentials store (like KeePass) – you cannot display it again later, though you can generate a new one.

Use the token

The token becomes part of the remote URL of the repository. The URL has this form:

https://userid:token@github.com/projectid/repo.git

e.g. for a repo of mine, and supposing the token is “token1234567890”:

https://epischel:token1234567890@github.com/epischel/gensources.git

When you use the token in the remote URL, you won’t be prompted for your password.
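One way to switch an existing clone to the token URL (using the placeholder token from above):

git remote set-url origin https://epischel:token1234567890@github.com/epischel/gensources.git

After that, git push and git pull use the token and you won’t be asked for a password.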

Good luck with your repo on Github!


Many software projects use 3rd-party libraries aka “dependencies”. You often want to use the most recent version of these dependencies, but how do you know when a new release of a dependency is published? The more dependencies your project has, the more tiresome a manual approach to “tracking dependency updates” becomes.

In this post I explore some solutions that track dependency updates for you. I cover broad solutions (libraries.io and dependabot) and Java-only solutions (“artifact listener” and a Gradle/Maven plugin).

Why update?

But why do we want to update dependencies at all?

A new version of a dependency

  • may fix bugs that affect your project
  • may introduce new features that you could use
  • may fix a security issue that affects your project
  • may have other optimizations to the code

Of course there is a risk as well: a new version may introduce a bug that affects your project. Plus, there might be API changes that require changes in your code.

Tracking solutions

Renovate

(update) I now use renovatebot because it integrates nicely with Gitlab CI. Much like dependabot (see below), it scans “dependency files” like “build.gradle”, “pom.xml” or “package.json” and creates merge requests for dependency updates.

Libraries.io

In their own words:

Libraries.io can automatically keep track of all of the packages that your repositories depend upon across many different package managers.

Once synced, Libraries.io will email you about new versions of your dependencies, if you add or remove a new dependency it will change the notifications settings for that package as soon as you push to your repositories.

Repositories on Github, Gitlab and Bitbucket are supported. Plus, you can subscribe to dependencies manually, i.e. without a repository on any of these platforms.

Besides email notifications, you can also subscribe to an RSS feed of your dependency updates.

Libraries.io is an open source project.

artifact listener

Artifact Listener is a small service and only available for Java / Maven Central. You can search for libraries and “follow” them. Alternatively, you can upload a POM and then choose which dependencies to follow. Updates of libraries you follow are emailed to you.

You can provide additional email addresses to notify, e.g. the addresses of other team members. This is a small but lovely feature for me.

The service is an open source project.

Dependabot

Dependabot checks the “dependency files” (where your dependencies are defined) in your Github repos for updates. If there is an update, it creates a PR for it. The PR may contain links, release notes, a list of commits, etc.

So this service not only notifies you about an update but even creates a PR that applies it. You just have to merge it (at least if your project is on Github).

Dependabot has been acquired by Github.com and is free of charge.

Gradle plugin

If you are using Gradle (a Java build system) to declare dependencies and build your project, you can use the Gradle versions plugin to detect and report dependency updates. It is easy to use; you just need to execute it on a regular basis.
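A minimal sketch of wiring it up in build.gradle (the plugin version is an assumption; check for the current one):

plugins {
	// adds the "dependencyUpdates" task that reports newer dependency versions
	id "com.github.ben-manes.versions" version "0.51.0"
}

Then run “./gradlew dependencyUpdates” to get the report.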

Maven plugin

Of course, there is a similar plugin for Maven (another Java build system).
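For a quick check without touching the POM, the versions-maven-plugin can be invoked directly:

mvn versions:display-dependency-updates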

Ron Jeffries published a wonderful post titled “working software”. He quotes the agile manifesto: “working software is the primary measure of progress”.

Now, he explains that what exactly “working software” means depends on the context (as always). But in general:

Working software is real. It works. It can be tested. It can be verified. Questions about whether it does X are readily answered with “Yes” or “No”. The less ambiguity there is in our answers—our true answers—the closer to “working software” we are.

And he shares his opinion on “best development style”:

The best development style we know today, whether working solo, in pairs, or in a mob, is to pick a very small thing that the system doesn’t as yet do, verify that it doesn’t do it, then make it do that thing, verifying again, and keeping the code at or above our standard of internal quality.

How small should that thing be? Tiny. Smaller than that. No, smaller still.

I repeat that:

  1. pick a very small thing that the system doesn’t do yet
  2. verify it doesn’t do it (the “red” step in the TDD cycle)
  3. make it do that thing
  4. verify it does it (the “green” step in TDD)
  5. keep the code at or above our quality standard (the “refactor” step in TDD).

I really like that. It’s not always that easy – but more often than not it is.

I know of TDD, but I need to get better at remembering this (I’ll put it in my Anki deck) – including the “very small” part – and then practice it.

Recently on our team chat: “I removed the remote git branch and pushed again.” Removing was not necessary; he could have used “git push --force”, or better, “git push --force-with-lease” instead.

Why

A normal “git push” only works when the remote branch is contained in your local branch, or in other words, if your local branch is the same as or ahead of the remote branch. When you still want to “overwrite” the remote branch with your local branch, use the “--force” option or the “--force-with-lease” option.

Caution

When you collaborate with other team members on a remote branch, a git push with the force option may overwrite their work. “--force-with-lease” makes git check that the remote branch is in the state we expect it to be in before pushing the local branch, so you won’t destroy work that you don’t know of.
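A typical sequence (the branch names are illustrative):

git rebase -i origin/main                        # rewrite local history
git push --force-with-lease origin my-feature    # refuses if the remote moved unexpectedly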

When

We need to force push when

  • we changed our local branch history by rebasing or amending
  • we need to “reset” the remote branch to our local branch

Summary

If you need to “overwrite” a remote branch with your local branch, use the “--force-with-lease” option.

Our latest production issue: one morning our app kept crashing. Restart. Crash after a few minutes. Out of memory.

It turned out to be a CLOB that was almost 2 GB in size and got read whenever a user triggered a certain action. The 2 GB CLOB ended up in an almost 4 GB char[] (because a char is 16 bits in Java), and that was too much even for 8 GB of heap space.
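A defensive sketch (not what we shipped; the method and limit are illustrative) that streams the CLOB in bounded chunks instead of materializing it all at once, failing fast above a sanity limit:

	private String readClobBounded(java.sql.Clob clob, long maxChars) throws Exception {
		// fail fast instead of running out of memory on absurdly large values
		if (clob.length() > maxChars) {
			throw new IllegalStateException("CLOB too large: " + clob.length() + " chars");
		}
		final StringBuilder sb = new StringBuilder();
		// read in small chunks via the character stream instead of one giant String
		try (java.io.Reader reader = clob.getCharacterStream()) {
			final char[] buffer = new char[8192];
			int n;
			while ((n = reader.read(buffer)) != -1) {
				sb.append(buffer, 0, n);
			}
		}
		return sb.toString();
	}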

Of course that CLOB was not supposed to be that big!

It took some time to identify the root cause.