Abstract
We analyse the effectiveness of Differential Evolution (DE) hyperparameters in large-scale search problems, i.e. those with very many variables or vector elements, using a novel objective function that is easily calculated from the vector/string itself. The objective function is simply the sum of the absolute differences between adjacent elements. For both binary and real-valued elements whose smallest and largest values are min and max in a vector of length N, the value of the objective function ranges between 0 and (N-1) × (max-min), and can thus easily be normalised if desired. This provides a conveniently rugged landscape. Using this objective, we assess how the effectiveness of the search varies with both the string length and the values of fixed DE hyperparameters; specifically, we study string length, population size and the number of generations (computational iterations). Finally, we train a neural network by systematically varying three hyperparameters, viz. population size (NP), mutation factor (F) and crossover rate (CR), and collecting two output target variables: (a) the median and (b) the maximum cost-function values from 10-trial experiments. This neural system is then tested on an extended set of data points, generated by varying the three hyperparameters on a finer scale, to predict both the median and the maximum function costs. The predictions of the machine learning model are validated against actual runs using Pearson's correlation coefficient, establishing their reliability and motivating the use of machine learning techniques over grid search for hyperparameter search in numerical optimisation algorithms. Performance is also compared with SMAC3 and Optuna, in addition to grid search and random search.
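To make the objective concrete, the sketch below implements the adjacent-difference cost described above, f(x) = Σ_{i=1}^{N-1} |x_{i+1} - x_i|, in Python. The function name adjacent_diff_cost, the optional bounds arguments and the normalisation step are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def adjacent_diff_cost(x, lo=None, hi=None, normalise=False):
    """Sum of absolute differences between adjacent elements of x.

    For a vector of length N with elements in [lo, hi], the raw value
    ranges from 0 (all elements equal) to (N - 1) * (hi - lo), attained
    by a vector that alternates between lo and hi.
    """
    x = np.asarray(x, dtype=float)
    raw = np.abs(np.diff(x)).sum()
    if not normalise:
        return raw
    if lo is None or hi is None:
        lo, hi = x.min(), x.max()  # fall back to the observed extremes
    span = (len(x) - 1) * (hi - lo)
    return raw / span if span > 0 else 0.0
```

As one possible usage, SciPy's DE implementation (a minimiser) could search this landscape by negating the objective. Note that in SciPy popsize is a population-size multiplier rather than NP itself, while mutation and recombination correspond loosely to F and CR; the specific values shown here are arbitrary examples, not the settings studied in the paper.

```python
from scipy.optimize import differential_evolution

# Maximise the normalised cost over 50 real-valued elements in [0, 1]
result = differential_evolution(
    lambda x: -adjacent_diff_cost(x, lo=0.0, hi=1.0, normalise=True),
    bounds=[(0.0, 1.0)] * 50,
    popsize=20,          # population-size multiplier (NP-like control)
    mutation=0.5,        # F
    recombination=0.7,   # CR
    seed=0,
)
print(-result.fun)       # best normalised cost found (maximum possible is 1.0)
```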
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
Douglas.Kell@liverpool.ac.uk