# Overview

The main goal of the black box optimization toolkit (bbotk) is to provide a common framework for optimization for other packages. Therefore bbotk includes the following R6 classes that can be used in a variety of optimization scenarios.

• Optimizer: Objects of this class allow you to optimize an object of the class OptimInstance.
• OptimInstance: Defines the optimization problem, consisting of an Objective, the search_space and a Terminator. All evaluations on the OptimInstance will be automatically stored in its own Archive.
• Objective: Objects of this class contain the objective function. The class ensures that the objective function is called in the right way and defines, whether the function should be minimized or maximized.
• Terminator: Objects of this class control the termination of the optimization independent of the optimizer.

bbotk also includes some basic optimizers and can therefore be used on its own. The registered optimizers can be queried as follows:

library(bbotk)
opts()
#> <DictionaryOptimizer> with 6 stored values
#> Keys: cmaes, design_points, gensa, grid_search, nloptr, random_search

This vignette will show you how to use the bbotk classes to solve a simple optimization problem. Furthermore, you will learn how to

• construct your Objective,
• define your optimization problem in an OptimInstance,
• define a restricted search_space,
• define the logging threshold,
• access the Archive of evaluated function calls.

## Use bbotk to optimize a function

In the following we will use bbotk to minimize this function:

fun = function(xs) {
- (xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10
}
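Before wrapping the function, we can sanity-check it in plain R (this check is our addition, not part of the vignette's original code): it is a downward-opening paraboloid whose maximum value of 10 is attained at x1 = 2, x2 = -3, so any other point evaluates to something smaller.

```r
fun = function(xs) {
  - (xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10
}
fun(list(2, -3)) # 10, the maximum
fun(list(0, 0))  # -4 - 9 + 10 = -3, any other point is smaller
```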

First we need to wrap fun inside an Objective object. For functions that expect a list as input we can use the ObjectiveRFun class. Additionally, we need to specify the domain, i.e. the space of x-values that the function accepts as an input. Optionally, we can define the co-domain, i.e. the output space of our objective function. This is only necessary if we want to deviate from the default which would define the output to be named y and be minimized. Such spaces are defined using the package paradox.

library(paradox)

domain = ParamSet$new(list(
  ParamDbl$new("x1", -10, 10),
  ParamDbl$new("x2", -5, 5)
))
codomain = ParamSet$new(list(
  ParamDbl$new("y", tags = "maximize")
))
obfun = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  properties = "deterministic" # i.e. the function always returns the same result for the same input
)

In the next step we decide when the optimization should stop. We can list all available terminators as follows:

trms()
#> <DictionaryTerminator> with 8 stored values
#> Keys: clock_time, combo, evals, none, perf_reached, run_time,
#>   stagnation, stagnation_batch

The optimization should stop when it runs longer than 10 seconds or when 20 evaluations are reached.

terminators = list(
  evals = trm("evals", n_evals = 20),
  run_time = trm("run_time")
)
terminators
#> $evals
#> <TerminatorEvals>
#> * Parameters: n_evals=20
#>
#> $run_time
#> <TerminatorRunTime>
#> * Parameters: secs=30

We have to change the default of secs=30 by setting the value in the param_set of the terminator.

terminators$run_time$param_set$values$secs = 10

We have created Terminator objects for both of our criteria. To combine them we use the combo Terminator.

term_combo = TerminatorCombo$new(terminators = terminators)

Before we finally start the optimization, we have to create an OptimInstance that also contains the Objective and the Terminator.

instance = OptimInstanceSingleCrit$new(objective = obfun, terminator = term_combo)
instance
#> <OptimInstanceSingleCrit>
#> * State:  Not optimized
#> * Objective: <ObjectiveRFun:function>
#> * Search Space:
#> <ParamSet>
#>    id    class lower upper levels        default value
#> 1: x1 ParamDbl   -10    10        <NoDefault[3]>
#> 2: x2 ParamDbl    -5     5        <NoDefault[3]>
#> * Terminator: <TerminatorCombo>
#> * Terminated: FALSE
#> * Archive:
#> <Archive>
#> Null data.table (0 rows and 0 cols)

Note that OptimInstance(SingleCrit/MultiCrit)$new() also has an optional search_space argument. It can be used if the search_space is only a subset of obfun$domain or if you want to apply transformations. More on that later.

Finally, we have to define an Optimizer. As we have seen above, we can call opts() to list all available optimizers. We opt for the generalized simulated annealing optimizer from the GenSA package.

optimizer = opt("gensa")
optimizer
#> <OptimizerGenSA>
#> * Parameters: list()
#> * Parameter classes: ParamDbl
#> * Properties: single-crit
#> * Packages: GenSA

To start the optimization we have to call the Optimizer on the OptimInstance.

optimizer$optimize(instance)
#> INFO  [11:24:57.048] Starting to optimize 2 parameter(s) with '<OptimizerGenSA>' and '<TerminatorCombo>'
#> INFO  [11:24:57.078] Evaluating 1 configuration(s)
#> INFO  [11:24:57.089] Result of batch 1:
#> INFO  [11:24:57.090]         x1        x2         y
#> INFO  [11:24:57.090]  -4.689827 -1.278761 -37.71645
#> INFO  [11:24:57.092] Evaluating 1 configuration(s)
#> INFO  [11:24:57.103] Result of batch 2:
#> INFO  [11:24:57.104]         x1        x2       y
#> INFO  [11:24:57.104]  -5.930364 -4.400474 -54.852
#> INFO  [11:24:57.106] Evaluating 1 configuration(s)
#> INFO  [11:24:57.109] Result of batch 3:
#> INFO  [11:24:57.110]        x1        x2         y
#> INFO  [11:24:57.110]  7.170817 -1.519948 -18.92791
#> INFO  [11:24:57.112] Evaluating 1 configuration(s)
#> INFO  [11:24:57.115] Result of batch 4:
#> INFO  [11:24:57.117]      x1        x2        y
#> INFO  [11:24:57.117]  2.0452 -1.519948 7.807403
#> INFO  [11:24:57.118] Evaluating 1 configuration(s)
#> INFO  [11:24:57.122] Result of batch 5:
#> INFO  [11:24:57.123]      x1        x2       y
#> INFO  [11:24:57.123]  2.0452 -2.064742 9.12325
#> INFO  [11:24:57.125] Evaluating 1 configuration(s)
#> INFO  [11:24:57.128] Result of batch 6:
#> INFO  [11:24:57.129]      x1        x2       y
#> INFO  [11:24:57.129]  2.0452 -2.064742 9.12325
#> INFO  [11:24:57.133] Evaluating 1 configuration(s)
#> INFO  [11:24:57.137] Result of batch 7:
#> INFO  [11:24:57.138]        x1        x2       y
#> INFO  [11:24:57.138]  2.045201 -2.064742 9.12325
#> INFO  [11:24:57.140] Evaluating 1 configuration(s)
#> INFO  [11:24:57.143] Result of batch 8:
#> INFO  [11:24:57.144]        x1        x2       y
#> INFO  [11:24:57.144]  2.045199 -2.064742 9.12325
#> INFO  [11:24:57.146] Evaluating 1 configuration(s)
#> INFO  [11:24:57.149] Result of batch 9:
#> INFO  [11:24:57.150]      x1        x2        y
#> INFO  [11:24:57.150]  2.0452 -2.064741 9.123248
#> INFO  [11:24:57.152] Evaluating 1 configuration(s)
#> INFO  [11:24:57.155] Result of batch 10:
#> INFO  [11:24:57.156]      x1        x2        y
#> INFO  [11:24:57.156]  2.0452 -2.064743 9.123252
#> INFO  [11:24:57.158] Evaluating 1 configuration(s)
#> INFO  [11:24:57.161] Result of batch 11:
#> INFO  [11:24:57.162]      x1        x2       y
#> INFO  [11:24:57.162]  1.9548 -3.935258 9.12325
#> INFO  [11:24:57.164] Evaluating 1 configuration(s)
#> INFO  [11:24:57.167] Result of batch 12:
#> INFO  [11:24:57.169]        x1        x2       y
#> INFO  [11:24:57.169]  1.954801 -3.935258 9.12325
#> INFO  [11:24:57.170] Evaluating 1 configuration(s)
#> INFO  [11:24:57.174] Result of batch 13:
#> INFO  [11:24:57.175]        x1        x2       y
#> INFO  [11:24:57.175]  1.954799 -3.935258 9.12325
#> INFO  [11:24:57.176] Evaluating 1 configuration(s)
#> INFO  [11:24:57.180] Result of batch 14:
#> INFO  [11:24:57.181]      x1        x2        y
#> INFO  [11:24:57.181]  1.9548 -3.935257 9.123252
#> INFO  [11:24:57.183] Evaluating 1 configuration(s)
#> INFO  [11:24:57.186] Result of batch 15:
#> INFO  [11:24:57.187]      x1        x2        y
#> INFO  [11:24:57.187]  1.9548 -3.935259 9.123248
#> INFO  [11:24:57.189] Evaluating 1 configuration(s)
#> INFO  [11:24:57.192] Result of batch 16:
#> INFO  [11:24:57.193]  x1 x2  y
#> INFO  [11:24:57.193]   2 -3 10
#> INFO  [11:24:57.195] Evaluating 1 configuration(s)
#> INFO  [11:24:57.198] Result of batch 17:
#> INFO  [11:24:57.199]        x1 x2  y
#> INFO  [11:24:57.199]  2.000001 -3 10
#> INFO  [11:24:57.201] Evaluating 1 configuration(s)
#> INFO  [11:24:57.204] Result of batch 18:
#> INFO  [11:24:57.205]        x1 x2  y
#> INFO  [11:24:57.205]  1.999999 -3 10
#> INFO  [11:24:57.207] Evaluating 1 configuration(s)
#> INFO  [11:24:57.210] Result of batch 19:
#> INFO  [11:24:57.214]  x1        x2  y
#> INFO  [11:24:57.214]   2 -2.999999 10
#> INFO  [11:24:57.216] Evaluating 1 configuration(s)
#> INFO  [11:24:57.219] Result of batch 20:
#> INFO  [11:24:57.220]  x1        x2  y
#> INFO  [11:24:57.220]   2 -3.000001 10
#> INFO  [11:24:57.227] Finished optimizing after 20 evaluation(s)
#> INFO  [11:24:57.228] Result:
#> INFO  [11:24:57.229]  x1 x2  x_domain  y
#> INFO  [11:24:57.229]   2 -3 <list[2]> 10
#>    x1 x2  x_domain  y
#> 1:  2 -3 <list[2]> 10

Note that we did not specify the termination inside the optimizer. bbotk generally sets the termination of the optimizers to never terminate and instead stops the evaluation internally as soon as a termination criterion is fulfilled. The results can be queried from the OptimInstance.

# result as a data.table
instance$result
#>    x1 x2  x_domain  y
#> 1:  2 -3 <list[2]> 10
# result as a list that can be passed to the Objective
instance$result_x_domain
#> $x1
#> [1] 2
#>
#> $x2
#> [1] -3

# result outcome
instance$result_y
#>  y
#> 10

You can also access the whole history of evaluated points.

instance$archive$data()
#>            x1        x2          y  x_domain           timestamp batch_nr
#>  1: -4.689827 -1.278761 -37.716445 <list[2]> 2020-10-07 11:24:57        1
#>  2: -5.930364 -4.400474 -54.851999 <list[2]> 2020-10-07 11:24:57        2
#>  3:  7.170817 -1.519948 -18.927907 <list[2]> 2020-10-07 11:24:57        3
#>  4:  2.045200 -1.519948   7.807403 <list[2]> 2020-10-07 11:24:57        4
#>  5:  2.045200 -2.064742   9.123250 <list[2]> 2020-10-07 11:24:57        5
#>  6:  2.045200 -2.064742   9.123250 <list[2]> 2020-10-07 11:24:57        6
#>  7:  2.045201 -2.064742   9.123250 <list[2]> 2020-10-07 11:24:57        7
#>  8:  2.045199 -2.064742   9.123250 <list[2]> 2020-10-07 11:24:57        8
#>  9:  2.045200 -2.064741   9.123248 <list[2]> 2020-10-07 11:24:57        9
#> 10:  2.045200 -2.064743   9.123252 <list[2]> 2020-10-07 11:24:57       10
#> 11:  1.954800 -3.935258   9.123250 <list[2]> 2020-10-07 11:24:57       11
#> 12:  1.954801 -3.935258   9.123250 <list[2]> 2020-10-07 11:24:57       12
#> 13:  1.954799 -3.935258   9.123250 <list[2]> 2020-10-07 11:24:57       13
#> 14:  1.954800 -3.935257   9.123252 <list[2]> 2020-10-07 11:24:57       14
#> 15:  1.954800 -3.935259   9.123248 <list[2]> 2020-10-07 11:24:57       15
#> 16:  2.000000 -3.000000  10.000000 <list[2]> 2020-10-07 11:24:57       16
#> 17:  2.000001 -3.000000  10.000000 <list[2]> 2020-10-07 11:24:57       17
#> 18:  1.999999 -3.000000  10.000000 <list[2]> 2020-10-07 11:24:57       18
#> 19:  2.000000 -2.999999  10.000000 <list[2]> 2020-10-07 11:24:57       19
#> 20:  2.000000 -3.000001  10.000000 <list[2]> 2020-10-07 11:24:57       20
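Since the archive data is an ordinary data.table, you can also inspect it with base R tools. As a sketch (using a plain data.frame stand-in built from a few rows of the output above, since this snippet is our addition and not part of the vignette), picking the row with the highest y value yourself:

```r
# stand-in for a few rows of the archive (values copied from the output above)
archive_df = data.frame(
  x1 = c(-4.689827, 2.045200, 2.000000),
  x2 = c(-1.278761, -2.064742, -3.000000),
  y  = c(-37.716445, 9.123250, 10.000000)
)
best = archive_df[which.max(archive_df$y), ] # row with the largest outcome
best$y # 10
```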

### Search Space Transformations

If the domain of the Objective is complex, it is often necessary to define a simpler search space that can be handled by the Optimizer and to define a transformation that transforms the value suggested by the optimizer to a value of the domain of the Objective.

Reasons for transformations can be:

1. The objective is more sensitive to changes of small values than to changes of bigger values of a certain parameter. E.g. we could suspect that for a parameter x3 the change from 0.1 to 0.2 has a similar effect as the change of 100 to 200.
2. Certain restrictions are known, e.g. the sum of three parameters is supposed to be 1.
3. many more…

In the following we will look at an example for reason 2.

We want to construct a box with the maximal volume, with the restriction that h+w+d = 1. For simplicity we define our problem as a minimization problem.

fun_volume = function(xs) {
  - (xs$h * xs$w * xs$d)
}
domain = ParamSet$new(list(
  ParamDbl$new("h", lower = 0),
  ParamDbl$new("w", lower = 0),
  ParamDbl$new("d", lower = 0)
))
obj = ObjectiveRFun$new(
  fun = fun_volume,
  domain = domain
)
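As a plain-R sanity check (our addition, independent of bbotk): under the constraint h + w + d = 1 the volume h * w * d is maximized at the symmetric point h = w = d = 1/3, so the smallest attainable objective value is -(1/3)^3 = -1/27 ≈ -0.03703704, which matches the optimization result shown below.

```r
fun_volume = function(xs) {
  - (xs$h * xs$w * xs$d)
}
# the symmetric point attains the optimum under h + w + d = 1
fun_volume(list(h = 1/3, w = 1/3, d = 1/3)) # -0.03703704
```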

We notice that our optimization problem has three parameters, but due to the restriction it only has two degrees of freedom. Therefore we only need to optimize two parameters and calculate h, w, d accordingly.

search_space = ParamSet$new(list(
  ParamDbl$new("h", lower = 0, upper = 1),
  ParamDbl$new("w", lower = 0, upper = 1)
))
search_space$trafo = function(x, param_set) {
  x = unlist(x)
  x["d"] = 2 - sum(x) # model d in dependence of h, w
  x = x / sum(x)      # ensure that h + w + d = 1
  as.list(x)
}
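The trafo body is plain R, so it can be checked in isolation before handing it to the optimizer (this standalone check is our addition; the param_set argument is dropped here since the body does not use it): for any h, w in [0, 1], the transformed values sum to 1.

```r
trafo = function(x) {
  x = unlist(x)
  x["d"] = 2 - sum(x) # model d in dependence of h, w
  x = x / sum(x)      # ensure that h + w + d = 1
  as.list(x)
}
xs = trafo(list(h = 0.6648, w = 0.6671))
sum(unlist(xs)) # 1
```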

For the OptimInstance we now use the constructed search_space, which includes the trafo, instead of the domain of the Objective.

inst = OptimInstanceSingleCrit$new(
  objective = obj,
  search_space = search_space,
  terminator = trm("evals", n_evals = 30)
)
optimizer = opt("gensa")
lg = lgr::get_logger("bbotk")$set_threshold("warn") # turn off console output
optimizer$optimize(inst)
#>            h         w  x_domain          y
#> 1: 0.6647984 0.6671394 <list[3]> -0.0370368
lg = lgr::get_logger("bbotk")$set_threshold("info") # turn on console output

The optimizer only saw the search space during optimization and returns the following result:

inst$result_x_search_space
#>            h         w
#> 1: 0.6647984 0.6671394

Internally, the OptimInstance transformed these values to the domain so that the result for the Objective looks as follows:

inst$result_x_domain
#> $h
#> [1] 0.3323992
#>
#> $w
#> [1] 0.3335697
#>
#> $d
#> [1] 0.3340311

obj$eval(inst$result_x_domain)
#> [1] -0.0370368

### Notes on the optimization archive

The following is just meant for advanced readers. If you want to evaluate the function outside of the optimization but have the result stored in the Archive, you can do so by resetting the termination and calling $eval_batch().

library(data.table)

inst$is_terminated = FALSE
inst$terminator = trm("none")
xvals = data.table(h = c(0.6666, 0.6667), w = c(0.6666, 0.6667))
inst$eval_batch(xdt = xvals)
#> INFO  [11:24:57.488] Evaluating 2 configuration(s)
#> INFO  [11:24:57.492] Result of batch 31:
#> INFO  [11:24:57.493]       h      w           y
#> INFO  [11:24:57.493]  0.6666 0.6666 -0.03703704
#> INFO  [11:24:57.493]  0.6667 0.6667 -0.03703704

tail(inst$archive$data())
#>            h         w           y  x_domain           timestamp batch_nr
#> 1: 0.6647984 0.6671394 -0.03703680 <list[3]> 2020-10-07 11:24:57       27
#> 2: 0.6647964 0.6671394 -0.03703680 <list[3]> 2020-10-07 11:24:57       28
#> 3: 0.6647974 0.6671404 -0.03703680 <list[3]> 2020-10-07 11:24:57       29
#> 4: 0.6647974 0.6671384 -0.03703680 <list[3]> 2020-10-07 11:24:57       30
#> 5: 0.6666000 0.6666000 -0.03703704 <list[3]> 2020-10-07 11:24:57       31
#> 6: 0.6667000 0.6667000 -0.03703704 <list[3]> 2020-10-07 11:24:57       31

However, this does not update the result. You could set the result by calling inst$assign_result(), but this should be handled by the optimizer. Another way to get the best point is the following:

inst$archive$best()
#>         h      w           y  x_domain           timestamp batch_nr
#> 1: 0.6667 0.6667 -0.03703704 <list[3]> 2020-10-07 11:24:57       31

Note that this method just looks for the best outcome and returns the corresponding row from the archive. For stochastic optimization problems this is overly optimistic and leads to biased results. For this reason the optimizer can use advanced methods to determine the result and set it itself.
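Why picking the best archive row is optimistic for stochastic objectives can be illustrated in plain R (a hypothetical noisy objective of our own, not part of bbotk): when the same point is evaluated several times with noise, the maximum of the noisy values systematically overestimates the true value, while the mean stays close to it.

```r
set.seed(1)
true_y = 10                  # true objective value at one fixed point
noisy  = true_y + rnorm(20)  # 20 noisy evaluations of that same point
mean(noisy)                  # close to the true value
max(noisy)                   # systematically above it
```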