The other day I thought about whether it would take a “while” for a computer to solve a sudoku puzzle using a naive brute-force algorithm. I set out to find out.

In this article I use my bread-and-butter programming language Java to create such a solver in a kind of test driven way and also explore some simple optimizations.

Implementation idea:

  • Use a backtracking algorithm, that is: recursively go from cell to cell on the puzzle board, fill in numbers from 1 to 9 and check whether all rules are satisfied. For example:
    1. Start at the top left cell and fill in “1”. All rules satisfied – go to the next cell.
    2. Fill in “1” – that makes two 1s in a row – so try “2”: all rules satisfied, go to the next cell. And so on.
    3. If no number satisfies the rules in a cell, go back to the previous cell and try the next number there.
  • The puzzle board is represented as a 2-dimensional array.
  • The number “0” represents an empty cell.

Recap of the sudoku rules: in every horizontal and every vertical line the numbers 1 to 9 appear exactly once, and in each 3×3 “subsquare” / “subboard” the numbers 1 to 9 appear exactly once as well.

Step 1: The Solver accepts an already completed board

When the board is already filled out, the Solver returns the board. It does not check if the board is correctly filled out. The following test checks this:

It creates a board (I call this “matrix” here) and fills it with “ones” except for the very first cell, which gets a 5. The test feeds it to the solver and checks whether it gets it back as solved.
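Such a test could look roughly like this (assuming JUnit 5; the class and method names “Solver” and “nextField” are the ones used below):

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.Arrays;
import java.util.Optional;
import org.junit.jupiter.api.Test;

class SolverTest {

    @Test
    void returnsAnAlreadyCompletedBoard() {
        int[][] matrix = new int[9][9];
        for (int[] row : matrix) {
            Arrays.fill(row, 1); // not a legal sudoku – the solver does not check that here
        }
        matrix[0][0] = 5;

        Optional<int[][]> solved = Solver.nextField(0, 0, matrix);

        assertTrue(solved.isPresent());
        assertEquals(5, solved.get()[0][0]);
    }
}
```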

Here is the code that accomplishes it:
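Sketched (assuming a 9×9 int array, Optional<int[][]> as the return type, and a row-by-row traversal):

```java
// inside the Solver class; 0 represents an empty cell
static Optional<int[][]> nextField(int x, int y, int[][] matrix) {
    if (y == 9) {
        // we are just outside the board: every cell has been visited
        return Optional.of(matrix);
    }
    int nextX = (x + 1) % 9;
    int nextY = x == 8 ? y + 1 : y;
    if (matrix[y][x] != 0) {
        // the cell is already filled in: continue with the next cell
        return nextField(nextX, nextY, matrix);
    }
    // an empty cell cannot be filled in yet
    return Optional.empty();
}
```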

The method “nextField” takes the current coordinates x and y and the matrix aka the board. It first checks whether the current position is just outside the board, which means the board has been filled out. If so, it returns the board. Otherwise, if the current cell is already filled in, it recursively calls the next cell. If the current cell is not filled in, it returns an empty Optional, indicating it can’t fill in the cell.

Step 2: Adding the “horizontal rule”

Next we want to actually fill in numbers into an empty cell and check against the rule that each row contains pairwise distinct numbers.

First, here is the test:

It creates a board in which each row contains the numbers one to nine in increasing order and then “blanks” four cells. The solver should fill these cells with the correct numbers. Here is how it’s done (note: I introduce a “solve” method):
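Sketched, the new pieces look like this: a “solve” entry point that starts the recursion at the top-left cell, and the row check (inside nextField, an empty cell is now filled with 1 to 9 in turn and the current row matrix[y] is passed to the check):

```java
public static Optional<int[][]> solve(int[][] matrix) {
    return nextField(0, 0, matrix);
}

// true if the given numbers are pairwise distinct; zeros (empty cells) are ignored
static boolean isPotentialLegal(int[] numbers) {
    int[] counts = new int[10];
    for (int number : numbers) {
        if (number != 0 && ++counts[number] > 1) {
            return false;
        }
    }
    return true;
}
```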

“isPotentialLegal” checks for distinct numbers by counting their occurrences. It is called with all numbers of the current row; zeros are “ignored”. If the rule is not satisfied, the next number is tried.

Step 3: Adding the “vertical rule”

Now I add the rule for columns. To create a test, I use a solved sudoku puzzle and clear some cells:
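For example (any valid solved board will do – this one is just an illustration, not necessarily the puzzle from the original test):

```java
int[][] matrix = {
    {1, 2, 3, 4, 5, 6, 7, 8, 9},
    {4, 5, 6, 7, 8, 9, 1, 2, 3},
    {7, 8, 9, 1, 2, 3, 4, 5, 6},
    {2, 3, 4, 5, 6, 7, 8, 9, 1},
    {5, 6, 7, 8, 9, 1, 2, 3, 4},
    {8, 9, 1, 2, 3, 4, 5, 6, 7},
    {3, 4, 5, 6, 7, 8, 9, 1, 2},
    {6, 7, 8, 9, 1, 2, 3, 4, 5},
    {9, 1, 2, 3, 4, 5, 6, 7, 8},
};
// clear a few cells so that the column rule is actually needed
matrix[3][0] = 0;
matrix[3][8] = 0;
```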

and later check for the correct solution.
The implementation is straightforward and sits right next to the “horizontal rule”:
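Sketched (the helper name is my choice – it collects the current column and reuses the same distinctness check):

```java
static boolean isColumnLegal(int x, int[][] matrix) {
    int[] column = new int[9];
    for (int y = 0; y < 9; y++) {
        column[y] = matrix[y][x];
    }
    return isPotentialLegal(column);
}
```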

Step 4: Adding the “subquadrant rule”

I wondered a bit about how to create a puzzle that would not be solvable without the subquadrant rule, but the original puzzle from Step 3 already did the job – this time with far more empty cells.

So here is the subquadrant rule. The key is to get the coordinates of the “subquadrant” right: integer division does the job, i.e. “(x/3)*3”. For example, x=4 gives us “3” because the cell lies in the middle subquadrant, which starts at x=3. I use an extra method here because of the computation of the subquadrant start:
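Sketched (again, the helper name is my choice):

```java
static boolean isSubquadrantLegal(int x, int y, int[][] matrix) {
    int startX = (x / 3) * 3; // e.g. x=4 -> startX=3, the middle subquadrant
    int startY = (y / 3) * 3;
    int[] numbers = new int[9];
    int i = 0;
    for (int yy = startY; yy < startY + 3; yy++) {
        for (int xx = startX; xx < startX + 3; xx++) {
            numbers[i++] = matrix[yy][xx];
        }
    }
    return isPotentialLegal(numbers);
}
```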

That did not make the test pass, though! It turned out I had missed the backtracking step, i.e. what happens when the recursion does not return a valid result: try the next number.

Moreover, the cell is “emptied” again (set back to 0) afterwards so that we leave it in its starting state.
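Putting it together, the loop inside nextField then looks roughly like this (nextX and nextY are the coordinates of the following cell, helpers as sketched above):

```java
for (int number = 1; number <= 9; number++) {
    matrix[y][x] = number;
    if (isPotentialLegal(matrix[y])            // horizontal rule
            && isColumnLegal(x, matrix)        // vertical rule
            && isSubquadrantLegal(x, y, matrix)) {
        Optional<int[][]> result = nextField(nextX, nextY, matrix);
        if (result.isPresent()) {
            return result;
        }
        // the recursion hit a dead end: fall through and try the next number
    }
}
matrix[y][x] = 0; // leave the cell in its starting state before backtracking
return Optional.empty();
```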

Conclusion

That’s it, I implemented a sudoku solver guided by tests. To answer my initial question: it’s fast. Well under one second! I will write a follow-up discussing some optimizations.

I had some confusion about package.json and package-lock.json and what is used when, but now that I have been “enlightened” I’ll record my new knowledge in this article.

package.json and package-lock.json

package.json lists, among other things, the dependencies you need for your JavaScript project (if you use npm). You’d edit this file manually. In contrast package-lock.json is generated by npm.

Example package.json:
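(The package names and versions below are just an illustration.)

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "~4.17.1",
    "lodash": "^4.17.20"
  }
}
```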

 

Often, the versions listed in package.json are given as ranges. npm uses SemVer, i.e. a version scheme with three parts like a.b.c where a, b and c are numbers (also called “major.minor.patch”). “~a.b.c” means a and b are fixed and the last part can be c or any greater number: “~4.17.1” means “4.17.x for x>=1”. “^a.b.c” means a is fixed and the minor and patch versions are variable: “^4.17.20” means “4.x.y for either (x=17 and y>=20) or x>=18”.

In contrast, package-lock.json contains the exact versions of the project’s dependencies (and their transitive dependencies and so on). When package-lock.json is generated or updated, the version range in package.json is resolved to the latest “allowed” version.

Generating and updating package-lock.json

How do you create the lock file? “npm install” will do it.

How do you update the lock file? “npm update” will do it, usually. Say package.json states module A in version “^3.4.5” and the existing lock file states module A in version “3.4.20”. Then you run “npm update A” or just “npm update”, and if there is a version “3.5.2” of A out there, npm will update the lock file to version “3.5.2” of module A.
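In terms of commands (with the placeholder module name “A” from above):

```sh
npm install      # creates package-lock.json if it does not exist yet
npm update       # updates all dependencies within their allowed ranges
npm update A     # updates only module A (and its entry in the lock file)
```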

If package.json and the lock file are out of sync (the version in package-lock.json is out of the range specified in package.json), “npm install” will correct the package-lock.json file.

Why commit the lock file?

The general advice is to commit package-lock.json to your repository. That way, every developer will be using the same versions: the ones listed in the lock-file (using “npm install“).

How to upgrade dependencies?

“npm outdated” shows outdated dependencies and “npm update <pkg> --save” updates a package. Commit both files.

Another way is to use tools like dependabot or renovate which check for new versions. If a new version of a module is detected, these tools will create a branch using the new version. CI pipelines are run and pull/merge requests are created or even merged automatically.

CI pipelines

There is a special command for CI pipelines: “npm ci”. It will fail if the lock file is missing or out of sync with package.json. So the build will fail if “npm install” would change the lock file.

“npm ci” ensures that your build is always based on a consistent set of dependencies, which is important for reproducibility and stability. It also helps avoid problems that can arise from using different versions of the same package across different stages of the pipeline.

Pinning dependencies

Pinning a dependency means using an exact version in package.json.
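For example, instead of a range you state the exact version (package name and version are just an illustration):

```json
{
  "dependencies": {
    "lodash": "4.17.21"
  }
}
```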

GitLab has a built-in CI system. A GitLab CI pipeline consists of a number of stages. In a stage, a number of jobs are executed in parallel. Jobs are executed as scripts within a docker container.

Over the last days I set out to expand my basic knowledge of GitLab CI pipelines (e.g. compiling a Java project with Gradle) in order to write a pipeline that compiles, tests and deploys a project to a CloudFoundry instance. Here are 3 things I learnt about.

Reusing jobs with “extends”

“extends” lets us reuse a job. It’s a kind of “extract method” refactoring: suppose I write a job that publishes to CloudFoundry. It consists of three steps:

  1. Install the CloudFoundry CLI
  2. Log in to CloudFoundry
  3. Publish the app to CloudFoundry

A job that publishes an app to the “dev” stage differs only in step 3 from a job that publishes the app to the “test” stage. So we write a job that publishes an app to CloudFoundry based on some variables (e.g. “STAGE”)

and then our job “deploy-to-dev” just extends this job and redefines STAGE
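A sketch of how that can look (job names, variables and the exact cf commands are my assumptions):

```yaml
# hidden base job (the leading "." keeps it from running on its own)
.deploy-to-cf:
  image: governmentpaas/cf-cli   # any image that contains the CloudFoundry CLI
  script:
    - cf login -a "$CF_API" -u "$CF_USER" -p "$CF_PASSWORD" -o "$CF_ORG" -s "$STAGE"
    - cf push "$APP_NAME"

deploy-to-dev:
  extends: .deploy-to-cf
  variables:
    STAGE: dev
```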

We can further “refactor” our CI pipeline by “extracting” the install & login part because we have two CF foundations and therefore two different values for “CF_USER” and “CF_PASSWORD”.

Test results

GitLab CI understands JUnit test result XML; we just need to provide it as an artifact report:
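For a Gradle project this looks roughly like this (the result path depends on your build setup):

```yaml
test:
  stage: test
  script:
    - ./gradlew test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/*.xml
```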

You can see the test results in the pipeline and in the merge request screen.

Speed up caches

Our pipeline uses the cache to avoid downloading Gradle (our build tool) and all dependencies every time a job builds or tests the code. But it is painfully slow, especially collecting the files and pushing the cache.

In my case it took several minutes (!) to collect the 3000+ files in the .gradle directory. Things sped up significantly when adding:
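A tweak along these lines – GitLab Runner’s fastzip archiver plus a faster compression level – is one example of what can make this faster (take it as a sketch, not necessarily the exact snippet):

```yaml
variables:
  # use the fastzip archiver for caches and artifacts ...
  FF_USE_FASTZIP: "true"
  # ... and trade a slightly larger cache for much faster compression
  CACHE_COMPRESSION_LEVEL: "fastest"
```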

Here is the documentation.

You are using Github as your remote git repository? This article explains how to use an access token to authenticate yourself instead of username+password.

Create an Access Token

On Github.com, navigate to the settings menu of your Github profile. From there, choose “Developer settings” and then “Personal Access Tokens”.

Here you are able to create a personal access token:

create new token

Select scope “repo”.

The token you created is presented to you. Save it to your credentials store (like KeePass).

Use the token

The token is part of the remote URL of the repository. The URL has this form:
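(with placeholders for the token, the user and the repository name)

```
https://<TOKEN>@github.com/<USER>/<REPOSITORY>.git
```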

e.g. for a repo of mine and suppose the token is “token1234567890”:
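(user and repository are placeholders below – substitute your own)

```sh
git remote set-url origin https://token1234567890@github.com/<user>/<repository>.git
```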

When you are using the token in the remote URL you won’t be prompted for your password.

Good luck with your repo on Github!

You are using Github as a remote repository? This article describes how to use Github access tokens to authenticate yourself instead of username/password.

Create an Access Token

To do that, go to the “Settings” menu of your Github profile, then (in the link list on the left) to “Developer settings” and from there to “Personal Access Tokens”.

There you create a new token:

Create a new token

Select the scope “repo”.

The generated token is displayed. Make sure to store it somewhere safe – you cannot display it again later (but you can generate a new one).

Use the token

The token is now used in the remote URL of the repository. It has the form

e.g. for one of my projects and a token “token1234567890”:

If you use this format, you will not be asked for a password.

Good luck with Github!

Many software projects use 3rd party libraries aka “dependencies”. You often want to use the most recent version of these dependencies, but how do you know when a new release of a dependency is published? The more dependencies your project has, the more tiresome a manual approach to “tracking dependency updates” is.

In this post I explore some solutions that track dependency updates for you. I cover broad solutions (libraries.io and dependabot) and Java-only solutions (“artifact listener” and a Gradle/Maven plugin).

Why update?

But why do we want to update dependencies at all?

A new version of a dependency

  • may fix bugs that affect your project
  • may introduce new features that you could use
  • may fix a security issue that affects your project
  • may have other optimizations to the code

Of course there is a risk as well: a new version may introduce a bug that affects your project. Plus, there might be API changes that require changes in your code.

Tracking solutions

Renovate

(update) I now use renovatebot because it integrates nicely with Gitlab CI. Much like dependabot (see below), it scans “dependency files” like “build.gradle”, “pom.xml” or “package.json” and creates merge requests for dependency updates.

Libraries.io

In their own words:

Libraries.io can automatically keep track of all of the packages that your repositories depend upon across many different package managers.

Once synced, Libraries.io will email you about new versions of your dependencies, if you add or remove a new dependency it will change the notifications settings for that package as soon as you push to your repositories.

Repositories on Github, Gitlab and Bitbucket are supported. Plus, you can subscribe to dependencies manually, i.e. without a repository on any of these platforms.

Besides email notifications, you can also subscribe to an RSS feed of your dependency updates.

Libraries.io is an open source project.

artifact listener

Artifact Listener is a small service and only available for Java / Maven Central. You can search for libraries and “follow” them. Alternatively you can upload a POM and then choose which dependencies to follow. Updates of libraries you follow are emailed to you.

You can provide additional email addresses to notify, e.g. email addresses of other team members. This is a small but lovely feature for me.

The service is an open source project.

Dependabot

Dependabot checks the “dependency files” (where your dependencies are defined) in your Github repos for updates. If there is an update, it creates a PR for it. The PR may contain links, release notes, a list of commits etc.

So this service not only notifies you about an update but even creates a PR that applies it. You just have to merge it (at least if your project is on Github).

Dependabot has been acquired by Github.com and is free of charge.

Gradle plugin

If you are using Gradle (a Java build system) to declare dependencies and build your project, you can use the Gradle versions plugin to detect dependency updates and report them. It is easy to use; you just need to execute it on a regular basis.
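A minimal setup could look like this (Kotlin DSL; the plugin version is just one that was current when I looked – check for newer ones):

```kotlin
// build.gradle.kts
plugins {
    java
    id("com.github.ben-manes.versions") version "0.51.0"
}
```

Running “./gradlew dependencyUpdates” then prints a report of the dependencies that have newer versions.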

Maven plugin

Of course, there is a similar plugin for Maven (another Java build system).

Ron Jeffries published a wonderful post titled “working software”. He quotes the Agile Manifesto: “working software is the primary measure of progress”.

Now, he explains that what exactly “working software” means depends (as always). But in general,

Working software is real. It works. It can be tested. It can be verified. Questions about whether it does X are readily answered with “Yes” or “No”. The less ambiguity there is in our answers—our true answers—the closer to “working software” we are.

And he shares his opinion on “best development style”:

The best development style we know today, whether working solo, in pairs, or in a mob, is to pick a very small thing that the system doesn’t as yet do, verify that it doesn’t do it, then make it do that thing, verifying again, and keeping the code at or above our standard of internal quality.

How small should that thing be? Tiny. Smaller than that. No, smaller still.

I repeat that:

  1. pick a very small thing that the system doesn’t do yet
  2. verify it doesn’t do it (the “red” step in the TDD cycle)
  3. make it do that thing
  4. verify it does it (the “green” step in TDD)
  5. keep the code at or above our quality standard (the “refactor” step in TDD).

I really like that. It’s not always that easy – but more often than not it is.

I know of TDD but I need to get better at remembering this (I’ll put that in my Anki deck) – including the very small part – and then practice it.

Recently on our team chat: “I removed the remote git branch and pushed again”. “Remove” was not necessary; he could have used “git push --force” or, better, “git push --force-with-lease” instead.

Why

A normal “git push” only works when the remote branch is contained in your local branch, in other words if your local branch is the same as or ahead of the remote branch. When you still want to “overwrite” the remote branch with your local branch, use the “--force” option or, better, “--force-with-lease”.

Caution

When you collaborate with other team members on a remote branch, git push with the force option may overwrite their work. “--force-with-lease” makes git check that the remote branch is in the state we expect it to be in before pushing the local branch, so you won’t destroy work that you don’t know of.
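For example (branch names are placeholders):

```sh
git rebase -i origin/main                           # rewrite the local branch history
git push --force-with-lease origin my-feature-branch
# the push is rejected if someone else updated the remote branch in the meantime
```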

When

We need to force push when

  • we changed our local branch history by rebasing or amending
  • we need to “reset” the remote branch to our local branch

Summary

If you need to “overwrite” a remote branch with your local branch, use the “--force-with-lease” option.

Our latest production issue: one morning our app kept crashing. Restart. Crash after some minutes. Out of memory.

It turned out to be a CLOB that was almost 2 GB in size and got read when a user triggered a certain action. The 2 GB CLOB ended up in an almost 4 GB char[] (because a char is 16 bits in Java), and this was too much even for 8 GB of heap space.

Of course that CLOB was not supposed to be that big!

It took some time to identify the root cause.