Genetic Algorithms

Applying the ga function from the GA package to our gasoline data is quite easy. We can use the same evaluation function as in the SA optimization, pls.cvfun2, where a small penalty is applied to solutions with more variables. Since ga performs maximization only, we multiply the result by −1:

> fitnessfun <- function(...) -pls.cvfun2(...)
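For readers who skipped the SA section, the following is a minimal sketch of what an evaluation function along the lines of pls.cvfun2 could look like; the actual definition is given earlier in the chapter, and the argument names and the use of plsr and RMSEP from the pls package are assumptions here, not the author's code.

> library(pls)
> ## sketch only: cross-validated RMSEP of a PLS model on the selected
> ## variables, plus a small penalty per selected variable
> pls.cvfun2 <- function(selection, x, response, ncomp, penalty, ...) {
+   selection <- selection > 0.5
+   if (sum(selection) < ncomp) return(Inf)  # too few variables: no model
+   dat <- data.frame(response = response,
+                     x = I(x[, selection, drop = FALSE]))
+   mod <- plsr(response ~ x, data = dat, ncomp = ncomp,
+               validation = "CV")
+   c(RMSEP(mod, estimate = "CV", ncomp = ncomp,
+           intercept = FALSE)$val) + penalty * sum(selection)
+ }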

Now we are ready to go. The simplest approach would be to apply standard procedures and hope for the best:

> GAoptimNIR1 <-
+   ga(type = "binary", fitness = fitnessfun,
+      x = gasoline$NIR, response = gasoline$octane,
+      ncomp = 2, penalty = penalty,
+      nBits = ncol(gasoline$NIR), monitor = FALSE, maxiter = 100)

The result, as we may have expected, still contains many variables, and has a high crossvalidation error:

> (nvarGA1 <- sum(GAoptimNIR1@solution))
[1] 149
> -GAoptimNIR1@fitnessValue + penalty*nvarGA1
[1] 3.2732

Ouch, that does not look too good. Of course, we have not been fair: the random initialization of the GA will lead to a population with approximately 50% selected variables, whereas the initial SA solution had only 2%. In addition, the default mutation function is biased towards this 50% ratio as well: in sparse solutions it is much more likely to add a variable than to remove one. Similar to the adaptation of the step function in SA, we define the mutation function in such a way that setting bits to zero is (much) more likely than setting bits to one, a behavior that can be controlled by the value of the bias argument:

> myMutate <- function(object, parent, bias = 0.01) {
+   mutate <- parent <- as.vector(object@population[parent, ])
+   n <- length(parent)
+   probs <- abs(mutate - bias)
+   j <- sample(1:n, size = 1, prob = probs)
+   mutate[j] <- abs(mutate[j] - 1)
+   mutate
+ }
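To see why this works: with bias = 0.01, bits equal to 1 get sampling weight 0.99 and bits equal to 0 get weight 0.01, so the position chosen for flipping is usually an already-selected variable, which is then switched off. A quick standalone check on a toy vector (not using ga itself; the dimensions merely mimic the NIR data) illustrates this:

> parent <- c(rep(1, 8), rep(0, 393))  # sparse toy solution
> probs <- abs(parent - 0.01)
> ## fraction of mutations that would remove a variable; with these
> ## weights this should be around 8*0.99 / (8*0.99 + 393*0.01)
> mean(replicate(1000, parent[sample(401, 1, prob = probs)] == 1))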

In the GA package these settings are controlled by the gaControl function, and changes remain in effect for the rest of the session (or until changed again). Including the new mutation function and using a more reasonable initial state is easily done:

> gaControl("binary" = list(mutation = "myMutate"))
> popSize <- 50 # default
> initmat <- matrix(0, popSize, nNIR)
> initmat[sample(1:(popSize*nNIR), nNIR)] <- 1
> GAoptimNIR2 <-
+   ga(type = "binary", fitness = fitnessfun,
+      x = gasoline$NIR, response = gasoline$octane,
+      popSize = popSize, nBits = ncol(gasoline$NIR),
+      ncomp = 2, suggestions = initmat, penalty = penalty,
+      monitor = FALSE, maxiter = 100)
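Each row of initmat is one suggested starting solution: setting nNIR of the popSize * nNIR cells to 1 means a candidate contains nNIR/popSize selected variables on average (about 8, assuming the 401 wavelengths of the gasoline NIR data), so the search now starts from sparse solutions rather than from roughly 50% selected variables. This is easily verified:

> summary(rowSums(initmat))  # on average nNIR/popSize ones per row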

This leads to the following result:

> (nvarGA2 <- sum(GAoptimNIR2@solution))
[1] 4
> -GAoptimNIR2@fitnessValue + penalty*nvarGA2
[1] 0.26728

Clearly, this constitutes a substantial improvement over the first optimization result, getting close to the SA solution presented earlier. Of course, more experimentation can easily lead to further improvements (as is the case with SA as well).

Fig. 10.5 GA optimization results for the NIR data. Left panel: naive application; right panel: application with a specific initialization matrix and a dedicated mutation function. Note the different y scales

One of the nice features of the GA package is that the results of calling ga can be plotted as well. Figure 10.5 shows the optimization trajectories of both GA runs. First, note the difference in the y-axes: the dedicated GA leads to much better fitness values. The plots show the best result (green dots/line) as well as the average and the median fitness values of the population at each iteration. If the latter two are very close to the best value, there is too little variation in the population and the result is likely to be quite bad. Especially with the dedicated mutation operator, one sees quite sudden jumps when “worse” solutions are introduced into the population (too few variables even lead to Inf values, in which case no mean is displayed), but still these solutions may contain kernels of good information.
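The trajectories in Fig. 10.5 can be reproduced with the plot method for ga objects (the exact axis annotation in the figure may differ):

> plot(GAoptimNIR1)  # left panel of Fig. 10.5
> plot(GAoptimNIR2)  # right panel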

The curves also give an idea of whether it is useful to put in additional effort: the left panel of Fig. 10.5 clearly gives the impression that further improvements are possible. In most cases, playing around with search parameters or tweaking the fitness function has a better chance of reaching good results than simply increasing the number of iterations.

In more complicated problems, speed is a big issue. Some simple tricks can be employed to speed up the optimization. Typically, several candidate solutions survive unchanged for a couple of generations. Rigorous bookkeeping may prevent unnecessary quality assessments, in almost all cases the most computer-intensive part of a GA; a sketch of such bookkeeping is shown below. An implementation trick that is also very often applied is to let the best few solutions enter the next generation unchanged; this process, called elitism, makes sure that no valuable information is thrown away and removes the need to keep track of the best solution. Provisions can be taken to prevent premature convergence: if the population is too homogeneous, the power of the crossover operator decreases drastically, and the optimization usually will not lead to a useful answer. One strategy is to disallow offspring that are equal to other candidate solutions; a second strategy is to penalize the fitness of candidate solutions that are too similar; the latter strategy is sometimes called sharing.
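As an illustration of the bookkeeping idea, one could wrap the fitness function in a cache so that candidate solutions already evaluated in an earlier generation are looked up rather than recomputed. The helper below (makeCachedFitness) is a hypothetical sketch, not part of the GA package; elitism, on the other hand, is built into ga itself via its elitism argument, which by default lets the best 5% of the population survive unchanged.

> ## hypothetical caching wrapper: stores fitness values keyed on the
> ## binary string, so repeated candidates are evaluated only once
> makeCachedFitness <- function(fitness) {
+   cache <- new.env(hash = TRUE)
+   function(bits, ...) {
+     key <- paste(bits, collapse = "")
+     if (!exists(key, envir = cache, inherits = FALSE))
+       assign(key, fitness(bits, ...), envir = cache)
+     get(key, envir = cache, inherits = FALSE)
+   }
+ }
> cachedfun <- makeCachedFitness(fitnessfun)
> ## then pass fitness = cachedfun to ga instead of fitnessfun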
