# GSoC Blog

## Function value threshold in SciPy's minimize

I was minimizing a color difference function using `scipy.optimize.minimize`. The problem I had was that the solver wouldn't stop until the gradient (the rate of change) was small enough. This is sensible in most applications, but for me it meant wasting time refining the solution until the color difference was orders of magnitude smaller than any perceptible error. In practice most iterations were unnecessary, and avoiding them would save a lot of time.

So how does one tell SciPy to finish as soon as the function value is small enough? Here's what I've come up with:

```python
class StopMinimizationEarly(Exception):
    def __init__(self, x, fun):
        self.x = x
        self.fun = fun

def error_function(x, ...):
    ...
    if fun <= acceptable_fun:
        raise StopMinimizationEarly(x, fun)
    return fun
```

Simply raise an exception inside the minimized function, then catch it like this:

```python
try:
    result = scipy.optimize.minimize(error_function, ...)
    x, fun = result.x, result.fun
except StopMinimizationEarly as error:
    x, fun = error.x, error.fun
```

As far as I know, you can't directly tell SciPy to stop once a function value threshold is reached. This could be a decent idea for a feature request, but I'm not sure how many people would actually need it. For the time being, this workaround is the best approach I've found.
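Putting it all together, here's a minimal, self-contained sketch of the pattern. A toy quadratic stands in for the actual color difference function, and the threshold `ACCEPTABLE_FUN` and the choice of Nelder-Mead are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

class StopMinimizationEarly(Exception):
    """Raised by the objective once the value is good enough."""
    def __init__(self, x, fun):
        self.x = x
        self.fun = fun

ACCEPTABLE_FUN = 1e-3  # hypothetical "good enough" threshold

def error_function(x):
    # Toy objective standing in for the color difference function.
    fun = np.sum((x - 1.0) ** 2)
    if fun <= ACCEPTABLE_FUN:
        # Abort the minimization as soon as the value crosses the threshold.
        raise StopMinimizationEarly(x, fun)
    return fun

try:
    result = minimize(error_function, x0=np.zeros(3), method="Nelder-Mead")
    x, fun = result.x, result.fun
except StopMinimizationEarly as error:
    # The exception propagates straight out of minimize(), carrying the
    # last evaluated point with it.
    x, fun = error.x, error.fun

print(fun <= ACCEPTABLE_FUN)
```

This works because SciPy doesn't swallow exceptions raised by the objective, so the minimization stops at the first evaluation that satisfies the threshold.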

2020/07/13 19:15 ·

## Differentiating code

One step in the spectral upsampling method described by Jakob and Hanika (2019) is computing an error function, which is used to find the model parameters. I wanted to provide derivatives of the error function with respect to its arguments to the optimization algorithm, in the hope of faster and more reliable convergence.

How does one differentiate Python code, though? Every program, no matter how complicated, is ultimately a series of simple operations, so the chain rule can be applied to each step in turn. Computer algebra systems (the one I use is Maxima: http://maxima.sourceforge.net/) can help with the process, making this a trivial task.
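As a small illustration of the idea (with a toy composite function, not the actual Jakob and Hanika error function), here's the chain rule applied by hand and checked against a central finite difference:

```python
import numpy as np

# Toy composite function standing in for one step of an error function:
# f(x) = sin(x^2)^2.
def f(x):
    return np.sin(x ** 2) ** 2

# Derivative obtained by applying the chain rule to each simple operation:
# f'(x) = 2 sin(x^2) * cos(x^2) * 2x
def df(x):
    return 2.0 * np.sin(x ** 2) * np.cos(x ** 2) * 2.0 * x

x = 0.7
h = 1e-6
finite_difference = (f(x + h) - f(x - h)) / (2 * h)
print(abs(df(x) - finite_difference))  # the two derivatives agree closely
```

The finite-difference check is a cheap way to catch mistakes in a hand-derived (or CAS-derived) gradient before handing it to an optimizer.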

2020/07/13 18:28 ·

## June ends

The first coding period has ended and I'm happy to say my pull request is nearly complete. I'm waiting for my mentors to review the code, then I'll fix any remaining issues.

Here's a quick summary of the new code:

• A new sub-module called `jakob2019` was created as part of `colour.recovery`.
• The main interface is `RGB_to_sd_Jakob2019`, which converts colors (from any RGB space supported by Colour) into spectral distributions.
• A low-level interface, `find_coefficients`, can be used if only the model parameters are needed. It's where the entire optimization takes place.
• Also worth mentioning is `error_function`, the function minimized during optimization. It returns its own gradient (the derivatives of the function value with respect to its inputs) to aid numerical algorithms. This required analytically differentiating all the intermediate steps.
• The `Jakob2019Interpolator` class can be used to work with precomputed tables. It can read and write the article authors' `.coeff` files and also generate new ones.
• All functions and classes are documented.
• Test coverage is close to 100%, though the unit tests still need some improvement.
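The value-plus-gradient pattern mentioned above can be sketched as follows. This uses a hypothetical toy objective, not the actual error function from the sub-module; `scipy.optimize.minimize` accepts `jac=True` to mean the callable returns both the function value and its gradient:

```python
import numpy as np
from scipy.optimize import minimize

def objective_with_gradient(x):
    # Toy objective: f(x) = sum((x - 2)^2); its exact gradient is 2 (x - 2).
    residual = x - 2.0
    fun = np.sum(residual ** 2)
    grad = 2.0 * residual
    return fun, grad

# jac=True tells SciPy the callable returns (value, gradient), so no
# finite-difference approximation of the gradient is needed.
result = minimize(objective_with_gradient, x0=np.zeros(3), jac=True,
                  method="L-BFGS-B")
print(result.x)  # converges to the minimum at x = 2
```

Supplying an exact gradient typically saves function evaluations and improves convergence for gradient-based methods like L-BFGS-B.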
2020/07/01 15:35 ·

## Late June progress update

The first evaluation is just a few days away. Here's a quick summary of what I've done so far:

• I wrote and tested a decent prototype of the implementation, in my personal repository.
• I forked the main repository and began working on integrating my code into the codebase.
• I learned the basics of Sphinx and the NumPy docstring style, used in Colour.
• I learned how to use Poetry to run unit tests (using Nosetests), and how to write my own tests.
• I learned how to ensure a consistent code style using Flake8.
• Code is now well integrated into the Colour codebase, including relevant tests. Not everything is done yet, though.
• I opened a pull request, and I'm hoping to get it merged into the development branch (and eventually into master) before moving on to working on the next major goal.

What's left to do?

• Improve the optimization process. It still has convergence problems and fails for many colors, despite the fact that I've written code computing the exact gradient (using analytic differentiation). An investigation into alternative algorithms is needed.
• The code for looking up pre-computed tables isn't documented and has no unit tests. Because the tables are rather large (too big for inclusion in the main repository), testing can be challenging. An idea is to generate and use smaller tables for this purpose.
• The code for generating new tables is almost complete.
• There are still some minor stylistic issues to be discussed and fixed.

At the moment I'm still working on the solver. Hopefully the next blog entry will be titled “no convergence problems.”

2020/06/25 17:55 ·