In the previous post I explained how PCA can be used to create a convenient basis for representing real-world reflectivities. The procedure uses a set of measured spectra to compute an orthonormal basis, sorted by variance or “statistical significance.” All but the three most significant vectors are then truncated. I called the difference between the actual spectrum and the reconstructed one the error term $\varepsilon$.
Naturally, the smaller the error, the better. Because $\varepsilon$ is a vector, the authors chose the Euclidean norm (2-norm or $L^2$) to represent the (scalar) error:

$$\|\varepsilon\|_2 = \sqrt{\sum_k \varepsilon_k^2}$$
To get an idea of how well the method works in the general case, the authors would sum up the reconstruction errors of all spectra in the dataset. I call this the total error $E$:

$$E = \sum_i \|\varepsilon_i\|_2$$

where $\varepsilon_i$ denotes the reconstruction error of $S_i$, the $i$-th measured spectrum.
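The whole pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the Colour implementation: the random matrix stands in for a real set of measured spectra, and the SVD of the mean-centred data is equivalent to PCA.

```python
import numpy as np

# Hypothetical dataset: rows are measured spectra sampled at 36 wavelengths.
rng = np.random.default_rng(0)
spectra = rng.random((24, 36))

# Centre the data on the dataset mean, then compute an orthonormal basis
# sorted by variance via the SVD (equivalent to PCA).
mean = spectra.mean(axis=0)
centered = spectra - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Keep only the three most significant basis vectors.
basis = vt[:3]                   # shape (3, 36)
weights = centered @ basis.T     # project each spectrum onto the basis
reconstructed = weights @ basis + mean

# Per-spectrum reconstruction error (Euclidean norm) and the total error E.
errors = np.linalg.norm(spectra - reconstructed, axis=1)
total_error = errors.sum()
```

With the full basis the reconstruction would be exact; truncating to three vectors is what introduces the error term.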
Using just one basis for everything isn't ideal. The picture below shows reconstructions of a few ColorChecker spectra. While by definition the XYZ tristimulus values (and perceived colors) match perfectly, the spectra themselves aren't that close.
The previous post mentions three basis vectors and a “dataset mean.” Here I explain how to find a good basis and what “good” means in this context.
The idea is to find a basis $B_i$ so that any spectral reflectivity $S$ can be expressed in the form:

$$S(\lambda) = \sum_i w_i B_i(\lambda) + c$$

where $i$ runs over all dimensions of the space (36 using Otsu et al.'s spectral data) and $c$ is a constant term you can ignore for now. This is a very common problem and there are many general solutions. By far the most common one is the discrete Fourier transform, which turns arbitrary signals into weighted sums of sines and cosines.
Unfortunately, recalling that the goal is to upsample a color triplet (e.g. XYZ tristimulus values or RGB data), there are only three degrees of freedom to work with, so at most three weights can be determined. Reformulating the problem,

$$S(\lambda) = w_1 B_1(\lambda) + w_2 B_2(\lambda) + w_3 B_3(\lambda) + c + \varepsilon$$

where all but three terms are lumped together into an error term $\varepsilon$. I think it's clear that three sines or cosines (and a constant term) added together won't accurately reconstruct most spectra. There are far better purely mathematical models, such as Jakob and Hanika (2019).
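A quick numerical sketch makes the point. The toy signal below stands in for a measured reflectance; keeping only the constant term and the three lowest-frequency Fourier coefficients leaves a clearly nonzero residual, which is exactly the lumped-together error term:

```python
import numpy as np

# A toy "spectrum" sampled at 36 wavelengths.
rng = np.random.default_rng(1)
spectrum = rng.random(36)

# Keep only the constant term and the three lowest-frequency harmonics;
# zero out everything else.
coefficients = np.fft.rfft(spectrum)
truncated = np.zeros_like(coefficients)
truncated[:4] = coefficients[:4]
approximation = np.fft.irfft(truncated, n=36)

# The residual plays the role of the error term; it is far from zero.
residual = np.linalg.norm(spectrum - approximation)
```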
Otsu et al. (2018) is the second spectral upsampling method I'm adding to Colour. It's a data-driven method, which not only produces reflectances matching the given color exactly, but also ones that are natural and resemble real, measured reflectances. This is in contrast with, for example, Jakob and Hanika (2019), which I worked on in June and which uses a mathematical model.
At the moment the code is split into two parts. The feature branch in my fork implements the method using the dataset published with the article. Creating new datasets is done by this program and will be covered in depth in a later article. Work is mostly done; I just need to integrate the program into Colour and complete this series of blog posts.
I'm happy to announce the code I wrote in the first month of the Summer of Code was merged upstream. You can see the pull request here.
This post describes how to compute the derivatives mentioned in an earlier post using Maxima, a computer algebra system. Code is listed in a monospaced font and answers are written directly below, in LaTeX. Ending lines with dollar signs in Maxima (instead of semicolons) suppresses the output, which is very useful for definitions.
Start by defining the reflectance model.
U: c_0 * lambda^2 + c_1 * lambda + c_2$
R: 1 / 2 + U / (2 * sqrt(1 + U^2))$
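Before differentiating, the model is easy to sanity-check numerically. Here is a minimal Python sketch of the same sigmoid reflectance model (the coefficients below are arbitrary, chosen only for illustration); the sigmoid squashes $U$ into $(0, 1)$, so the model always yields a valid reflectance:

```python
import numpy as np

def reflectance(wavelength, c_0, c_1, c_2):
    # U is a quadratic in the wavelength; R maps it through a sigmoid.
    u = c_0 * wavelength**2 + c_1 * wavelength + c_2
    return 0.5 + u / (2.0 * np.sqrt(1.0 + u**2))

lam = np.linspace(380.0, 780.0, 36)
r = reflectance(lam, 1e-5, -1e-2, 2.0)
# Every value of r lies strictly between 0 and 1.
```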
Differentiate R with respect to c_0. The expansion of U appears a few times in the result, so substitute U_ for it to simplify. (The underscore is there so it doesn't expand back to the full expression.)
diff(R, c_0)$
subst(U_, U, %);
This can be further simplified by substituting for $\sqrt{1 + U^2}$ as well. Unfortunately, subst won't recognize $(1 + U^2)^{3/2}$ as $\left(\sqrt{1 + U^2}\right)^3$, so I recommend doing this manually.
Repeating the process for the other coefficients reveals a pattern. Let $t = \frac{1}{2\left(1 + U^2\right)^{3/2}}$; then $\partial R / \partial c_0 = \lambda^2 t$, $\partial R / \partial c_1 = \lambda t$ and $\partial R / \partial c_2 = t$.
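The pattern is easy to verify numerically. The sketch below (coefficient values are arbitrary) compares the analytic derivatives $\lambda^2 t$, $\lambda t$ and $t$ against central finite differences:

```python
import numpy as np

def reflectance(lam, c_0, c_1, c_2):
    u = c_0 * lam**2 + c_1 * lam + c_2
    return 0.5 + u / (2.0 * np.sqrt(1.0 + u**2))

lam, c_0, c_1, c_2 = 550.0, 1e-6, -1e-3, 0.5
u = c_0 * lam**2 + c_1 * lam + c_2
t = 1.0 / (2.0 * (1.0 + u**2) ** 1.5)
analytic = np.array([lam**2 * t, lam * t, t])

# Central finite differences as an independent check; the step sizes
# are scaled to each coefficient's magnitude.
steps = (1e-12, 1e-9, 1e-6)
base = np.array([c_0, c_1, c_2])
numeric = np.empty(3)
for i, h in enumerate(steps):
    hi, lo = base.copy(), base.copy()
    hi[i] += h
    lo[i] -= h
    numeric[i] = (reflectance(lam, *hi) - reflectance(lam, *lo)) / (2.0 * h)
```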
After skipping the trivial derivatives, which can easily be done by hand, you will find yourself having to differentiate the final expression for $\Delta E$. Normally an expression like diff(L, c_0) would be treated as a derivative of a constant in Maxima and reduced to zero. One can use depends to tell the program that the L*a*b* coordinates are functions of the model parameters. The result will then contain derivatives of the coordinates themselves.
An alternative approach is typing out the coordinates as functions explicitly: instead of L, use L(c_0, c_1, c_2) (and so on for a and b) in the code below. The downside is that the function arguments will appear in the end result, cluttering it and making a mess.
depends([L, a, b], [c_0, c_1, c_2])$
delta_E: sqrt((L - L_target)^2 + (a - a_target)^2 + (b - b_target)^2)$
Use the same subst trick as before to simplify.
diff(delta_E, c_0)$
subst(delta_E_, delta_E, %);
The other derivatives look the same. Simply substitute $c_1$ or $c_2$ for $c_0$ in the expression above to obtain them.
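The chain-rule structure of the result can be checked numerically. In the sketch below the L*a*b* coordinate functions are toy stand-ins (the real ones go through the full spectral pipeline), and all constants are made up for illustration:

```python
import numpy as np

# Toy stand-ins for the Lab coordinates as functions of the coefficients.
def L(c): return 50.0 + 10.0 * c[0] + c[1] ** 2
def a(c): return 5.0 * c[0] * c[1] + c[2]
def b(c): return c[0] - 3.0 * c[2]

target = np.array([55.0, 2.0, -1.0])

def delta_E(c):
    diff = np.array([L(c), a(c), b(c)]) - target
    return np.sqrt(np.sum(diff ** 2))

c = np.array([0.3, -0.2, 0.7])

# Chain rule, matching the shape of the Maxima output: d(delta_E)/dc_0
# is the Lab difference dotted with the coordinate derivatives, over delta_E.
dLab = np.array([10.0, 5.0 * c[1], 1.0])   # dL/dc_0, da/dc_0, db/dc_0
diff = np.array([L(c), a(c), b(c)]) - target
analytic = diff @ dLab / delta_E(c)

# Central finite-difference check.
h = 1e-7
numeric = (delta_E(c + [h, 0, 0]) - delta_E(c - [h, 0, 0])) / (2.0 * h)
```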