The other day I wondered whether it would take a “while” for a computer to solve a sudoku puzzle using a naive brute-force algorithm. I set out to find out.

In this article I use my bread-and-butter programming language, Java, to create such a solver in a kind of test-driven way, and I also explore some simple optimizations.

Implementation idea:

  • Use a backtracking algorithm, that is: recursively go from cell to cell on the puzzle board, fill in numbers from 1 to 9 and check whether all rules are satisfied. For example:
    1. Start at the top left and fill in “1”. All rules satisfied – go to the next cell.
    2. Fill in “1” – two 1s in a row – so try “2”; all rules satisfied, go to the next cell. And so on.
    3. If no number satisfies the rules in a cell, go back to the previous cell and try the next number there.
  • The puzzle board is represented as a 2-dimensional array.
  • The number “0” represents an empty cell.

Recap of the sudoku rules: in every horizontal and every vertical line the numbers 1 to 9 appear exactly once, and in each 3×3 “subsquare” / “subboard” the numbers 1 to 9 appear exactly once as well.

Step 1: The Solver accepts an already completed board

When the board is already filled out, the solver simply returns it; it does not check whether the board is filled out correctly. The following test checks this:

	@Test
	public void fineWithFilledMatrix() {
		final int[][] matrix = new int[9][9];
		for(int i = 0; i < matrix.length; i++) {
			for (int j = 0; j < matrix[i].length; j++) {
				matrix[i][j] = 1;
			}
		}
		matrix[0][0] = 5;
		System.out.println(Solver.matrixToString(matrix));
		final var result = new Solver().nextField(0, 0, matrix);
		System.out.println(Solver.matrixToString(matrix));
		final int[][] expected = new int[9][9];
		for(int i = 0; i < expected.length; i++) {
			for (int j = 0; j < expected[i].length; j++) {
				expected[i][j] = 1;
			}
		}
		expected[0][0] = 5;
		Assert.assertFalse(result.isEmpty());
		Assert.assertArrayEquals(expected, result.get());
	}

It creates a board (which I call “matrix” here) and fills it with ones, except for the very first cell, which gets a 5. It feeds the board to the solver and checks that it gets it back as the “solution”.

Here is the code that accomplishes it:

package de.epischel.hello.sudoku;

import java.util.Optional;

public class Solver {

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		return Optional.empty();
	}
	
	public static String matrixToString(int[][] matrix) {
		StringBuilder sb = new StringBuilder();
		for(int y = 0; y < matrix.length; y++) {
			for(int x=0; x < matrix[y].length; x++) {
				sb.append(" ").append(matrix[y][x]).append(" ");
			}
			sb.append("\n");
		}
		return sb.toString();
	}
}

The method “nextField” takes the current coordinates x and y plus the matrix aka the board. It first checks whether the position is just outside the board, which means the board has been filled out completely. If so, it returns the board. Otherwise, if the current cell is already filled in, it recursively calls itself for the next cell. If the current cell is not filled in, it returns an empty Optional, indicating that it cannot fill in the cell yet.

Step 2: Adding the “horizontal rule”

Next we want to actually fill numbers into empty cells and check them against the rule that each row contains pairwise distinct numbers.

First, here is the test:

	@Test
	public void followRuleHorizontal() {
		final int[][] matrix = new int[9][9];
		for(int i = 0; i < matrix.length; i++) {
			for (int j = 0; j < matrix[i].length; j++) {
				matrix[i][j] = j+1;
			}
		}
		matrix[0][3] = 0;
		matrix[0][4] = 0;
		matrix[5][5] = 0;
		matrix[5][7] = 0;
		System.out.println(Solver.matrixToString(matrix));
		final var result = new Solver().solve(matrix);
		System.out.println(Solver.matrixToString(matrix));
		final int[][] expected = new int[9][9];
		for(int i = 0; i < expected.length; i++) {
			for (int j = 0; j < expected[i].length; j++) {
				expected[i][j] = j+1;
			}
		}
		Assert.assertFalse(result.isEmpty());
		Assert.assertArrayEquals(expected, result.get());
	}

It creates a board in which each row counts up from one to nine, then “blanks” four cells. The solver should fill these cells with the correct numbers. Here is how it’s done (note: I introduce a “solve” method):

	public Optional<int[][]> solve(int[][] matrix) {
		return nextField(0,0,matrix);
	}

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		for(int i = 1; i<=9; i++) {
			matrix[y][x] = i;
			// check horizontal rule
			if (!isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])) {
				continue;
			}
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
			
		}
		return Optional.empty();
	}
	
	private static boolean isPotentialLegal(int... numbers) {
		final int[] counts = new int[10];
		for(int i = 0; i < numbers.length; i++) {
			counts[numbers[i]]++;
		}
		for(int i = 1; i < counts.length; i++) {
			if (counts[i]>1) return false;
		}
		return true;
	}

“isPotentialLegal” checks for distinct numbers by counting occurrences. It is called with all numbers of the current row; zeros (empty cells) are ignored. If the rule is not satisfied, the next number is tried.
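For illustration, two example calls, written as a hedged test sketch (it assumes isPotentialLegal were made package-visible; in the code above it is private):

	@Test
	public void potentialLegalExamples() {
		// zeros may repeat freely - only the counts of 1..9 are checked
		Assert.assertTrue(Solver.isPotentialLegal(1, 2, 3, 0, 0, 0, 0, 0, 0));
		// two 2s violate the "pairwise distinct" rule
		Assert.assertFalse(Solver.isPotentialLegal(1, 2, 0, 0, 2, 0, 0, 0, 0));
	}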

Step 3: Adding the “vertical rule”

Now I add the rule for columns. To create a test, I use a solved sudoku puzzle and clear some cells:

		final int[][] matrix = new int[][] {
			{7,9,0,3,5,4,6,0,8},
			{8,0,4,1,2,6,3,0,7},
			{3,0,1,9,8,7,5,2,4},
			//
			{9,4,5,6,0,8,1,7,2},
			{2,7,8,5,4,1,9,3,6},
			{6,1,3,0,9,2,8,4,5},
			//
			{4,2,9,8,1,5,7,6,3},
			{1,8,7,2,6,3,4,5,9},
			{5,3,6,4,7,9,2,0,0},
		};

and later checks for the correct solution.
The implementation is straightforward, right next to the “horizontal rule”:

			if (!isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])
			  ||
			    !isPotentialLegal(
			    	matrix[0][x],matrix[1][x],matrix[2][x],
			    	matrix[3][x],matrix[4][x],matrix[5][x],
			    	matrix[6][x],matrix[7][x],matrix[8][x])
					) {
				continue;
			}

Step 4: Adding the “subquadrant rule”

I wondered a bit about how to create a puzzle that would not be solvable without the subquadrant rule, but the original puzzle from Step 3 already does the job – this time with far more empty cells:

		final int[][] matrix = new int[][] {
			{0,9,0, 0,0,0, 0,1,0},
			{8,0,4, 0,2,0, 3,0,7},
			{0,6,0, 9,0,7, 0,2,0},
			//
			{0,0,5, 0,3,0, 1,0,0},
			{0,7,0, 5,0,1, 0,3,0},
			{0,0,3, 0,9,0, 8,0,0},
			//
			{0,2,0, 8,0,5, 0,6,0},
			{1,0,7, 0,6,0, 4,0,9},
			{0,3,0, 0,0,0, 0,8,0},
		};

So here is the subquadrant rule. The key is to get the coordinates of the “subquadrant” right: integer division does the job, i.e. “(x/3)*3”. For example, x=4 gets us “3” because x=4 lies in the middle subquadrant, which starts at x=3. I use an extra method here because of the computation of the subquadrant start:

	private boolean isSubquadratPotentialLegal(int x, int y, int[][] matrix) {
		final int xx = (x/3)*3;
		final int yy = (y/3)*3;
		return isPotentialLegal(
			matrix[yy][xx],matrix[yy][xx+1],matrix[yy][xx+2],
			matrix[yy+1][xx],matrix[yy+1][xx+1],matrix[yy+1][xx+2],
			matrix[yy+2][xx],matrix[yy+2][xx+1],matrix[yy+2][xx+2]);
	}

That did not make the test pass, though! It turned out I had missed the backtracking step, i.e. what happens when the recursion does not return a valid result – try the next number (see the recursive call near the end of the loop):

	public Optional<int[][]> nextField(int x, int y, int[][] matrix) {
		if (y==9 && x == 0) {
			return Optional.of(matrix);
		}
		if (matrix[y][x]>0) {
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			return nextField(nextX, nextY, matrix);
		}
		for(int i = 1; i<=9; i++) {
			matrix[y][x] = i;
			// check horizontal rule
			if (!(isPotentialLegal(
					matrix[y][0],matrix[y][1],matrix[y][2],
					matrix[y][3],matrix[y][4],matrix[y][5],
					matrix[y][6],matrix[y][7],matrix[y][8])
			  &&
			  // check vertical rule
			    isPotentialLegal(
			    	matrix[0][x],matrix[1][x],matrix[2][x],
			    	matrix[3][x],matrix[4][x],matrix[5][x],
			    	matrix[6][x],matrix[7][x],matrix[8][x])
			  && isSubquadratPotentialLegal(x, y, matrix))) {
				continue;
			}
			int nextX = x<8?x+1:0;
			int nextY = x<8?y:y+1;
			final var result = nextField(nextX, nextY, matrix);
			if (result.isPresent()) return result;
		}
		matrix[y][x] = 0;
		return Optional.empty();
	}

Moreover, the statement “matrix[y][x] = 0;” after the loop “empties” the cell so that we leave it in its starting state.
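To run the solver end to end, here is a small, hypothetical main class (my addition, not part of the original code) that feeds the Step 4 puzzle to the solver and prints the result:

package de.epischel.hello.sudoku;

public class Main {
	public static void main(String[] args) {
		final int[][] puzzle = new int[][] {
			{0,9,0, 0,0,0, 0,1,0},
			{8,0,4, 0,2,0, 3,0,7},
			{0,6,0, 9,0,7, 0,2,0},
			{0,0,5, 0,3,0, 1,0,0},
			{0,7,0, 5,0,1, 0,3,0},
			{0,0,3, 0,9,0, 8,0,0},
			{0,2,0, 8,0,5, 0,6,0},
			{1,0,7, 0,6,0, 4,0,9},
			{0,3,0, 0,0,0, 0,8,0},
		};
		// solve returns Optional<int[][]>; print the board if a solution was found
		new Solver().solve(puzzle)
				.map(Solver::matrixToString)
				.ifPresent(System.out::println);
	}
}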

Conclusion

That’s it, I implemented a sudoku solver guided by tests. To answer my initial question: it’s fast – well under one second! I will write a follow-up discussing some optimizations.

I had some confusion about package.json and package-lock.json and which is used when, but now that I have been “enlightened” I’ll record my new knowledge in this article.

package.json and package-lock.json

package.json lists, among other things, the dependencies you need for your JavaScript project (if you use npm). You edit this file manually. In contrast, package-lock.json is generated by npm.

Example package.json:

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "lodash": "^4.17.20"
  },
  "devDependencies": {
    "nodemon": "^2.0.7"
  }
}

Often, the versions listed in package.json are given as ranges. npm uses SemVer, a version scheme with three parts like a.b.c where a, b and c are numbers (also called “major.minor.patch”). “~a.b.c” means a and b are fixed and the last part can be c or any greater number: “~4.17.1” means “4.17.x for x>=1”. “^a.b.c” means a is fixed while minor and patch version are variable: “^4.17.20” means “4.x.y for either (x=17 and y>=20) or x>=18”.

In contrast, package-lock.json contains the exact versions of the project’s dependencies (and their transitive dependencies, and so on). When package-lock.json is generated or updated, each version range in package.json is resolved to the latest “allowed” version.
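For illustration, here is a trimmed, hypothetical excerpt of a matching package-lock.json (old lock-file format; the integrity hash is shortened):

{
  "name": "my-project",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "dependencies": {
    "express": {
      "version": "4.17.1",
      "resolved": "https://registry.npmjs.org/express/-/express-4.17.1.tgz",
      "integrity": "sha512-…"
    }
  }
}

Note how the version range “^4.17.1” from the package.json example above has been resolved to one exact version.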

Generating and updating package-lock.json

How do you create the lock file? “npm install” will do it.

How do you update the lock file? “npm update” will do it, usually. Say package.json states module A at version “^3.4.5” and the lock file states version “3.4.20”. Then you run “npm update A” (or just “npm update”), and if there is a version “3.5.2” of A out there, npm will update the lock file to version “3.5.2” of module A.

If package.json and the lock file are out of sync (the version in package-lock.json is outside the range specified in package.json), “npm install” will correct the package-lock.json file.

Why commit the lock file?

The general advice is to commit package-lock.json to your repository. That way, every developer will be using the same versions: the ones listed in the lock file (installed via “npm install”).

How to upgrade dependencies?

“npm outdated” shows outdated dependencies and “npm update <pkg> --save” updates a package. Commit both files.

Another way is to use tools like Dependabot or Renovate, which check for new versions. If a new version of a module is detected, these tools create a branch using the new version. CI pipelines are run, and pull/merge requests are created or even merged automatically.

CI pipelines

There is a special command for CI pipelines: “npm ci”. It will fail if the lock file is missing or out of sync with package.json. So the build will fail if “npm install” would change the lock file.

“npm ci” ensures that your build is always based on a consistent set of dependencies, which is important for reproducibility and stability. It also helps avoid problems that can arise from using different versions of the same package across different stages of the pipeline.

Pinning dependencies

Pinning a dependency means using an exact version in package.json, e.g. "express": "4.17.1" with no “^” or “~” prefix.

You are using Github as your remote git repository? This article explains how to use an access token to authenticate yourself instead of username+password.

Create an Access Token

On Github.com, navigate to the settings menu of your Github profile. From there, choose “Developer settings” and then “Personal Access Tokens”.

Here you are able to create a personal access token:

(Screenshot: create new token)

Select scope “repo”.

The token you created is presented to you exactly once – you cannot display it again later, only generate a new one. Save it to your credentials store (like KeePass).

Use the token

The token is part of the remote url of the repository. The URL has this form:

https://userid:token@github.com/projectid/repo.git

e.g. for a repo of mine and suppose the token is “token1234567890”:

https://epischel:token1234567890@github.com/epischel/gensources.git

When you are using the token in the remote URL you won’t be prompted for your password.

Good luck with your repo on Github!

Many software projects use 3rd-party libraries aka “dependencies”. You often want to use the most recent version of these dependencies, but how do you know when a new release of a dependency is published? The more dependencies your project has, the more tiresome a manual approach to “tracking dependency updates” becomes.

In this post I explore some solutions that track dependency updates for you. I cover broad solutions (Libraries.io and Dependabot) and Java-only solutions (“Artifact Listener” and a Gradle/Maven plugin).

Why update?

But why do we want to update dependencies at all?

A new version of a dependency

  • may fix bugs that affect your project
  • may introduce new features that you could use
  • may fix a security issue that affects your project
  • may have other optimizations to the code

Of course there is a risk as well: a new version may introduce a bug that affects your project. Plus, there might be API changes that require changes in your code.

Tracking solutions

Renovate

(Update) I now use Renovate because it integrates nicely with Gitlab CI. Much like Dependabot (see below), it scans “dependency files” like “build.gradle”, “pom.xml” or “package.json” and creates merge requests for dependency updates.

Libraries.io

In their own words:

Libraries.io can automatically keep track of all of the packages that your repositories depend upon across many different package managers.

Once synced, Libraries.io will email you about new versions of your dependencies, if you add or remove a new dependency it will change the notifications settings for that package as soon as you push to your repositories.

Repositories on Github, Gitlab and Bitbucket are supported. Plus, you can subscribe to dependencies manually, i.e. without a repository on any of these platforms.

Besides email notifications, you can also subscribe to an RSS feed of your dependency updates.

Libraries.io is an open source project.

artifact listener

Artifact Listener is a small service, available only for Java / Maven Central. You can search for libraries and “follow” them. Alternatively, you can upload a POM and then choose which dependencies to follow. Updates of libraries you follow are emailed to you.

You can provide additional email addresses to notify, e.g. the addresses of other team members. For me this is a small but lovely feature.

The service is an open source project.

Dependabot

Dependabot checks the “dependency files” (where your dependencies are defined) in your Github repos for updates. If there is an update, it creates a PR for it. The PR may contain links, release notes, a list of commits, etc.

So this service not only notifies you about an update but even creates a PR that applies it. You just have to merge it (at least if your project is on Github).

Dependabot has been acquired by Github and is free of charge.

Gradle plugin

If you are using Gradle (a Java build system) to declare dependencies and build your project you can use the Gradle versions plugin to detect dependency updates and report them. It is easy to use. You just need to execute it on a regular basis.

Maven plugin

Of course, there is a similar plugin for Maven (another Java build system).

Recently on our team chat: “I removed the remote git branch and pushed again.” Removing was not necessary – he could have used “git push --force”, or better, “git push --force-with-lease” instead.

Why

A normal “git push” only works when the remote branch is contained in your local branch, in other words when your local branch is the same as or ahead of the remote branch. When you still want to “overwrite” the remote branch with your local branch, use the “--force” option or “--force-with-lease”.

Caution

When you collaborate with other team members on a remote branch, a git push with the force option may overwrite their work. “--force-with-lease” makes git check that the remote branch is in the state we expect it to be in before pushing the local branch, so you won’t destroy work that you don’t know of.

When

We need to force push when

  • we changed our local branch history by rebasing or amending
  • we need to “reset” the remote branch to our local branch

Summary

If you need to “overwrite” a remote branch with your local branch, use the “--force-with-lease” option.

In the last post I reviewed Java lambda expressions. They represent a concise syntax to implement functional interfaces.

Enter Java method references. They represent a concise syntax to implement functional interfaces using existing methods. As with lambda expressions, referenced methods are not allowed to throw checked exceptions (unless the functional interface declares them).

Syntax

It’s simply “class-or-instance name” “::” “method name”, like

Function<String, Integer> string2Int = Integer::valueOf;

Types of method references

Reference to a static method

Static methods are referenced using the class name like in the example above.

Reference to an instance method of a particular object

Methods of a particular object are referenced using the variable name of that object:

Map<Integer, String> aMap = new HashMap<>();
Function<Integer, String> getRef = aMap::get;
// call it
String s = getRef.apply(42);

Reference to an instance method of an arbitrary object of a particular type

Instead of using an already existing object, you can just state the class and a non-static method. The instance then becomes an additional parameter. In the following example, toURI is a method with no arguments that returns a URI. The function of this method reference takes a File (the receiver object) and returns a URI:

Function<File, URI> file2Uri = File::toURI;

Reference to a constructor

Constructors are referenced using the type and “new”:

Function<String, StringBuffer> bufferFromString = StringBuffer::new;

Here the constructor of StringBuffer with a String parameter is referenced. The return type of the function is the constructed type; the parameters of the function are the parameters of the constructor.
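To see all four kinds side by side, here is a small, hypothetical demo class (the class and variable names are mine):

import java.io.File;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MethodRefDemo {

	public static void main(String[] args) {
		// 1. static method
		Function<String, Integer> string2Int = Integer::valueOf;
		// 2. instance method of a particular object
		Map<Integer, String> aMap = new HashMap<>();
		aMap.put(42, "answer");
		Function<Integer, String> getRef = aMap::get;
		// 3. instance method of an arbitrary object of a particular type
		Function<File, URI> file2Uri = File::toURI;
		// 4. constructor
		Function<String, StringBuffer> bufferFromString = StringBuffer::new;

		System.out.println(string2Int.apply("7") + 1);                     // 8
		System.out.println(getRef.apply(42));                              // answer
		System.out.println(file2Uri.apply(new File("/tmp")).getScheme());  // file
		System.out.println(bufferFromString.apply("hi").reverse());        // ih
	}
}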

Lambda expressions in Java represent “functions”: something that takes a number of parameters and produces at most one return value.

This could be expressed with anonymous classes but lambda expressions offer a more concise syntax.

Syntax

Lambda expressions consist of a parameter list, an “arrow” and a body.

(String s1, String s2) -> s1 + "|" + s2

The parameter list is enclosed in round brackets. Types are optional. When the expression has exactly one parameter, the brackets can be omitted.

s -> s != null && s.length() > 0

The body can either be an expression (that returns a value) or a block. A block is a sequence of statements, enclosed in curly braces.

n -> { if (n<10) System.out.println(n); }

Lambda expressions and types

In the Java type system, lambda expressions are instances of “functional interfaces”. A functional interface is an interface with exactly one abstract method.

Functional interfaces in java.util.function

The package java.util.function in the JDK contains a number of functional interfaces:

  • Function<T,U>  represents a function with one parameter of type T and return type U
  • Consumer<T>  represents a function with one parameter of type T and return type void
  • Supplier<T>  represents a function with no parameter and return type T
  • Predicate<T>  represents a function with one parameter of type T and return type boolean

Plus, variants with a “Bi” prefix exist that take two parameters, like BiPredicate. More variants exist for primitive types, like DoubleToIntFunction.
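As a quick, hypothetical sketch (my example, not from the JDK docs), here are the four interfaces in action:

import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalDemo {

	public static void main(String[] args) {
		Function<String, Integer> length = s -> s.length();   // String -> Integer
		Consumer<String> print = s -> System.out.println(s);  // String -> void
		Supplier<String> hello = () -> "hello";               // () -> String
		Predicate<String> nonEmpty = s -> !s.isEmpty();       // String -> boolean

		print.accept(hello.get());                 // hello
		System.out.println(length.apply("hello")); // 5
		System.out.println(nonEmpty.test(""));     // false
	}
}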

User defined function interfaces

Any interface with exactly one abstract method can be used as the type of a lambda expression. You can mark such an interface with @FunctionalInterface .

@FunctionalInterface
interface SomeInterface {
  int someBehaviour(String a, String b);
}

SomeInterface geo = (x, y) -> x.length() + y.length();
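Invoking the lambda is then an ordinary method call on the interface (continuing the snippet above):

int n = geo.someBehaviour("ab", "cde"); // 5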

Benefits

For me, the benefits of lambda expressions are

  • concise syntax for anonymous classes that represent functional code
  • improved readability
  • encouragement of a more functional programming style

Answer: not static at all. A static inner class behaves like a normal class except that it is in the namespace of the outer class (“for packaging convenience”, as the official Java tutorial puts it).

So as an example:

public class Outer {
  private int x = 0;
  public int y = 1;
  
  static class Inner {
    //...
  }
}

As opposed to a true inner (i.e. non-static nested) class, you do not need an instance of Outer to create an instance of Inner:

Outer.Inner inner = new Outer.Inner();

and Inner instances have no special knowledge about Outer instances. The Inner class behaves just like a top-level class; it just has to be qualified as “Outer.Inner”.

Why am I writing about this?

Because I was quite shocked that two of my colleagues (both seasoned Java developers) were not sure if a static inner class was about static members and therefore global state.

Maybe they do not use static inner classes.

When do I use static inner classes?

I use a static inner class

  1. when it is only of use for the outer class and is independent of the (private) members of the outer class,
  2. when it is conceptually tied to the outer class (e.g. a Builder class),
  3. for packaging convenience.

Often, the visibility of the static inner class is not public. In that case there is no big difference between creating a static inner class and a top-level class in the same source file. An alternative to the first code example therefore is:

public class Outer {
  // ...
}
// not really inner any more
class Inner {
  // ... 
}

An example for (2) is a Builder class:

public class Thing {
  //...
  public static class Builder {
     // ... many withXXX methods
     public Thing make() // ...
  }
}

If the Inner instance needs access to (private) members of the Outer instance, then Inner needs to be non-static.
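For contrast, a minimal sketch of the non-static case (my example): a non-static inner class can read the outer instance’s private members, and creating an instance requires an Outer instance:

public class Outer {
  private int x = 0;

  class Inner { // non-static
    int readX() { return x; } // may access Outer's private members
  }
}

// elsewhere: an Inner is always created relative to an Outer instance
Outer outer = new Outer();
Outer.Inner inner = outer.new Inner();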

Sometimes I do a code kata at codewars.com. That is a fun way to solve computer-science-related problems, to learn while solving them, and especially to learn from the solutions of others.

Today I completed the kata “Make a spanning tree” using JavaScript. I occasionally use JavaScript to write an event handler or so, but I don’t have much experience with “modern” JavaScript. Here is what I learnt from looking at the solutions of others.

Destructuring

I know this from my Scala class and Clojure.

You can assign array elements to variables:

   
   var a, b, rest;
   [a, b] = [10, 20];
   console.log(a);
   // expected output: 10

   console.log(b);
   // expected output: 20

   [a, b, ...rest] = [10, 20, 30, 40, 50];

   console.log(rest);
   // expected output: [30,40,50]

so “…rest” is assigned the rest of the array.

This is nice syntactic sugar, also when working with nested arrays, e.g. when “edges” is an array of pairs:

   
   // sort edges by weight
   edges.sort(([edge_a, a], [edge_b, b]) => a - b);

There is object destructuring:

   
var o = {p: 42, q: true};
var {p, q} = o;

console.log(p); // 42
console.log(q); // true

and even assigning to new variable names

   
var o = {p: 42, q: true};
var {p: foo, q: bar} = o;
 
console.log(foo); // 42 
console.log(bar); // true   

See MDN web docs for more.

Spread operator to create an array using an array literal

Using an array literal to create an array from two other arrays:

  
   const sets = {};
   //...
   // new array with sets[a] elements and sets[b] elements
   const set = [...sets[a], ...sets[b]];

Objects are associative arrays (aka Maps)

Although I already knew this, kind of, it refreshes my JS knowledge.

First, you can add properties to objects without declaring them in the first place:

  
   let obj = {}; // anonymous object
   obj.height=2; // create new property "height" and assign value
   console.log(obj.height); // 2
   

Second, instead of the dot-notation you can use array index notation, using the property name as the index:

  
   let obj = {};
   obj['height'] = 2;
   console.log(obj['height']); // 2
   

One solution uses this to save the weighted edges in a plain object, much like I did with a proper Map object:

  
   let set = {};
   edges.filter(e => e[0][1] !== e[0][0]).forEach(e => {
    if (!set[e[0]] || minOrMaxFunc(set[e[0]], e[1]) > 0) { set[e[0]] = e[1]; }
   });
   

Third, methods are kind of properties, too. In the same solution, “minOrMaxFunc” is cleverly chosen (the “minOrMax” argument is either “min” or “max”):

  
   function makeSpanningTree(edges, minOrMax) {
     let minOrMaxFunc = { min: (a, b) => a - b, max: (a, b) => b - a }[minOrMax];
     // ...
   }
   

It creates an object with two methods, “min” and “max”, and then accesses the one given in the argument. If “minOrMax” is “min”, a reference to the “min” function is returned.

Strings are arrays

Destructuring works with strings:

  
   let [a,b] = 'ABC';
   console.log(a); // "A"
   console.log(b); // "B"

and you can index strings:

  
   const s = "ABC";
   s[1]; // "B"

“var” vs. “let”

Of course, the solutions written in “modern” JS use “let” and “const” all over the place. I just reassured myself about the difference between let and var:

First, variables declared in a block using “var” are visible outside that block and are “known” before being declared:

  
   function f() {
    console.log(v); // undefined
    { var v = 3; }
    console.log(v); // 3
   }
   

A block might be a for-loop, for example.

Second, variables declared using let are not visible outside the block and are not “known” before being declared:

  
   function f() {
    console.log(v); // Reference error
    { let v = 3; }
    console.log(v); // Reference error
   }   
   

Third, you may not redeclare a variable using let:

  
   var a = 0;
   var a = 1; // OK
   let b = 0;
   let b = 1; // not OK
   

So basically, “let” is a sane way to declare variables.