Andrew Jones

Colourising old All Blacks photos using Deep Learning

This article was originally published on and in their affiliated regional newspapers throughout New Zealand.

-- Don Clarke

It's the All Blacks as you have never seen them before. Historic photos of New Zealand rugby players have been shown in a completely new light, literally, thanks to the work of Andrew Jones - a London-based Kiwi working in the field of artificial intelligence.

Using an algorithm to colourise old black and white photos, Jones put some pictures of former All Blacks to the test - with the results rather striking.

Originally from Matamata, Jones studied at Victoria University in Wellington and has been in London for the past decade, freelancing in analytics and data science, recently working with companies like Amazon and Sony PlayStation.

He hadn't looked into image colourisation much before this. After reading an article on the topic, he thought it looked interesting and decided to try applying it to old All Blacks photos, as he hadn't seen anyone specifically attempt that before.


Jones explained that, to a computer, an image is essentially just a bunch of pixels. In a black and white image, the computer registers each pixel as a single number representing its brightness. That number ranges from zero (completely black) to 255 (completely white); anything in between those two numbers is some shade of grey.
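The idea can be illustrated with a tiny hand-made "image" (a made-up grid of values, not one of the actual photos):

```python
# A black and white image is just a grid of brightness values:
# 0 = black, 255 = white, anything in between a shade of grey.
tiny_image = [
    [0,   64,  128],
    [128, 192, 255],
]

# every pixel is a single number in the 0-255 range
assert all(0 <= px <= 255 for row in tiny_image for px in row)

print(tiny_image[0][0], tiny_image[1][2])  # darkest and brightest pixels: 0 255
```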

-- Duncan McGregor

A black and white image has only a single colour channel - grey. A colour image, by contrast, has three colour channels for every pixel - red, green and blue. Each of the three channels has its own value from zero to 255, and the combination of the three values represents any one of around 16.7 million possible colours.
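The arithmetic behind the 16.7 million figure is simply 256 possible values per channel, across three channels:

```python
# three channels (red, green, blue), each with 256 possible values
levels_per_channel = 256
channels = 3

possible_colours = levels_per_channel ** channels
print(possible_colours)  # 16777216, i.e. roughly 16.7 million

# one pixel in a colour image is a triple of channel values
pixel = (30, 144, 255)  # an arbitrary example colour
assert all(0 <= value <= 255 for value in pixel)
```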

-- Wilson Whineray

The problem to solve, Jones said, is this: for each pixel in an image, how can the computer learn what the red, green and blue values should be, knowing only the value of the grey channel? If the computer can successfully learn that relationship, it can be given any black and white image, apply what it has learned, and output its best guess at the colour version.
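Part of what makes this hard is that the grey-to-colour mapping is one-to-many: very different colours can collapse to the same grey value, so a simple per-pixel lookup cannot work and the model needs context from the surrounding image. A small demonstration, using the common ITU-R BT.601 luminance weights (the article doesn't say which conversion Jones used, and the example colours are chosen purely to collide):

```python
def to_grey(r, g, b):
    # a standard luminance formula (ITU-R BT.601 weights);
    # an assumption here, not necessarily what Jones used
    return round(0.299 * r + 0.587 * g + 0.114 * b)

bright_red = (255, 30, 54)   # hypothetical colour
teal_green = (0, 150, 105)   # a very different hypothetical colour

# both collapse to the same grey value, 100
print(to_grey(*bright_red), to_grey(*teal_green))
```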

To do this in practice, Jones explained, he used a type of model that is very popular in artificial intelligence, called a neural network - a mathematical model built to mimic how our own brains learn.

In this case, a particular type of neural network is used, called a 'convolutional neural network', which is well suited to learning about images: it scans different parts of the image to identify features and 'understand' the various parts of the picture. While we might see an eye, nose or chin, for instance, the neural network deals in very abstract features that we humans wouldn't necessarily recognise.
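The "scanning" a convolutional network does can be sketched in miniature: a small filter slides across the image, and its response is strongest wherever the pattern it encodes appears. The filter below is hand-written to detect a vertical edge; in a real network, the filter weights are learned during training, not chosen by hand.

```python
def convolve2d(image, kernel):
    # slide the kernel over every position in the image and record
    # how strongly the local patch matches the kernel's pattern
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# a toy image with a hard vertical edge (dark left, bright right)
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

# a hand-written vertical-edge filter
edge_filter = [
    [-1, 1],
    [-1, 1],
]

response = convolve2d(image, edge_filter)
print(response)  # strongest response (510) exactly where the edge sits
```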

For the neural network to learn this relationship, a large number of colour images are gathered, and for each one a black and white version is created using any basic photo software.
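Creating those black and white versions is straightforward to do programmatically as well. A sketch using the BT.601 luminance weights (the article only says "any basic photo software", so the exact conversion is an assumption):

```python
def to_grey_image(colour_image):
    # collapse each (r, g, b) pixel to one brightness value
    # using the common BT.601 luminance weights (an assumption)
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in colour_image
    ]

# a tiny made-up colour image: red, green / blue, white
colour = [[(255, 0, 0), (0, 255, 0)],
          [(0, 0, 255), (255, 255, 255)]]

grey = to_grey_image(colour)
training_pair = (grey, colour)  # the input and its 'correct answer'
print(grey)
```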

-- Lui Paewai

The neural network is then asked to work through every picture and learn, as best it can, how to map the black and white version to its respective colour version based on the types of features it has discovered. This phase, Jones said, is called 'training', because for every image he has given the neural network not only the black and white version but also the 'correct answer'.
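The training loop itself follows a simple pattern: predict, compare against the correct answer, adjust, repeat. The sketch below shows that pattern on a drastically simplified stand-in - a linear model predicting a single red value from a grey value, fitted by gradient descent on made-up data - rather than the deep convolutional network the real system uses.

```python
# Toy training data: (grey, red) pairs deliberately generated so that
# red equals grey, giving the model a learnable relationship.
data = [(g, g) for g in range(0, 256, 32)]

w, b = 0.0, 0.0   # the stand-in model: red = w * grey + b
lr = 0.00001      # learning rate

for epoch in range(2000):
    for grey, red in data:
        pred = w * grey + b        # predict
        err = pred - red           # compare with the correct answer
        w -= lr * err * grey       # adjust (gradient step on squared error)
        b -= lr * err

print(round(w, 2), round(b, 2))  # w should approach 1, b should stay near 0
```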

-- Dave Gallaher

Once the neural network has learned these relationships as best it can, it can be asked to apply them to a never-before-seen black and white image (e.g. one of George Nepia from 1924). It looks through the new image for the features it discovered during training and, for each pixel, provides its best guess for the red, green and blue values - the output being a colour image.
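Structurally, this inference step maps every grey pixel to an (r, g, b) guess. In the sketch below, `guess_colour` is a hypothetical placeholder (a crude sepia-style mapping) standing in for the trained network, which in reality would also draw on features from the surrounding pixels:

```python
def guess_colour(grey):
    # placeholder mapping, NOT the real learned model: the trained
    # network's guess would depend on surrounding image features too
    return (min(255, int(grey * 1.07)),
            min(255, int(grey * 0.74)),
            min(255, int(grey * 0.43)))

def colourise(grey_image):
    # replace each single grey value with an (r, g, b) triple
    return [[guess_colour(px) for px in row] for row in grey_image]

old_photo = [[40, 120], [200, 255]]  # a tiny made-up black and white scan
print(colourise(old_photo))
```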

-- Colin Meads

Jones was pretty happy with how the photos turned out. The vast majority of these projects come down to trial and error, he said - essentially fine-tuning a wide range of parameters to get things working the way you want. In this case he felt he got a bit lucky, as it worked well early on.

There's always room for improvement, though, he acknowledged, as in some cases the model doesn't quite understand what colour to apply, which at one stage saw Nepia sporting a head of bright orange hair.

-- Ron Jarden

The solution is to train the model on more data - instead of giving the neural network around 1,200 images to learn from, which he said is quite low, you would look to train it on tens or even hundreds of thousands of images.

While Jones doesn't have any further projects specifically lined up, he said if he finds the time he would like to play around with the idea of 'enhancing' old photos as well.

-- George Nepia

He said that would involve obtaining high definition pictures and artificially 'ageing' them, then getting a model to learn the relationship between the aged versions and the HD versions. That could mean taking the colourised All Blacks photos and making them look as if they were taken on a modern camera.
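The artificial 'ageing' step could take many forms; the article doesn't specify one, so the sketch below uses a crude blur-plus-noise degradation purely as an illustration of how (aged, sharp) training pairs might be produced:

```python
import random

def age_image(image, noise=20, seed=0):
    # crudely 'age' a sharp greyscale image: 3x3 box blur plus random
    # noise, a hypothetical stand-in for old-camera degradation
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    aged = []
    for i in range(h):
        row = []
        for j in range(w):
            neighbours = [
                image[y][x]
                for y in range(max(0, i - 1), min(h, i + 2))
                for x in range(max(0, j - 1), min(w, j + 2))
            ]
            blurred = sum(neighbours) / len(neighbours)
            noisy = blurred + rng.uniform(-noise, noise)
            row.append(max(0, min(255, round(noisy))))
        aged.append(row)
    return aged

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]  # a made-up sharp image
aged = age_image(sharp)
# a model would then be trained on (aged, sharp) pairs to learn
# how to reverse the degradation
print(aged)
```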

He has also made an initial attempt at colourising video clips, something he said could certainly bring old sporting footage to life for newer generations to enjoy and appreciate.
