mlr_optimizers_adbo {mlr3mbo} | R Documentation
Asynchronous Decentralized Bayesian Optimization
Description
OptimizerADBO class that implements Asynchronous Decentralized Bayesian Optimization (ADBO).
ADBO is a variant of Asynchronous Model Based Optimization (AMBO) that uses AcqFunctionStochasticCB with exponential lambda decay.
Currently, only single-objective optimization is supported. OptimizerADBO is an experimental feature, and its API may be subject to change.
Parameters
lambda
numeric(1)
Value used for sampling the lambda for each worker from an exponential distribution.
rate
numeric(1)
Rate of the exponential decay.
period
integer(1)
Period of the exponential decay.
initial_design
data.table::data.table()
Initial design of the optimization. If NULL, a design of size design_size is generated with the specified design_function. Default is NULL.
design_size
integer(1)
Size of the initial design if it is to be generated. Default is 100.
design_function
character(1)
Sampling function to generate the initial design. Can be random (paradox::generate_design_random), lhs (paradox::generate_design_lhs), or sobol (paradox::generate_design_sobol). Default is sobol.
n_workers
integer(1)
Number of parallel workers. If NULL, all rush workers specified via rush::rush_plan() are used. Default is NULL.
Super classes
bbotk::Optimizer
-> bbotk::OptimizerAsync
-> mlr3mbo::OptimizerAsyncMbo
-> OptimizerADBO
Methods
Public methods
Inherited methods
Method new()
Creates a new instance of this R6 class.
Usage
OptimizerADBO$new()
Method optimize()
Performs the optimization on a bbotk::OptimInstanceAsyncSingleCrit until termination. The individual evaluations are written into the bbotk::ArchiveAsync. The result is written into the instance object.
Usage
OptimizerADBO$optimize(inst)
Arguments
inst
(bbotk::OptimInstanceAsyncSingleCrit)
The instance to optimize.
Returns
data.table::data.table()
Method clone()
The objects of this class are cloneable with this method.
Usage
OptimizerADBO$clone(deep = FALSE)
Arguments
deep
Whether to make a deep clone.
Note
The lambda parameter of the confidence bound acquisition function controls the trade-off between exploration and exploitation.
A large lambda value leads to more exploration, while a small lambda value leads to more exploitation.
The initial lambda value of the acquisition function used on each worker is drawn from an exponential distribution with rate 1 / lambda.
ADBO can apply periodic exponential decay to reduce lambda over time: at time step t, the decayed value is lambda * exp(-rate * (t %% period)).
The SurrogateLearner is configured to use a random forest and the AcqOptimizer is a random search with a batch size of 1000 and a budget of 10000 evaluations.
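The sketch below makes this default configuration concrete using mlr3mbo's sugar functions. It assumes the ranger-based random forest learner from mlr3learners; OptimizerADBO sets this up internally, so this is for illustration only.

library(mlr3mbo)
library(mlr3learners)  # provides lrn("regr.ranger"), a random forest
library(bbotk)

# surrogate: random forest regression learner
surrogate = srlrn(lrn("regr.ranger"))

# acquisition function optimizer: random search with
# batch size 1000 and a budget of 10000 evaluations
acq_optimizer = acqo(
  opt("random_search", batch_size = 1000L),
  terminator = trm("evals", n_evals = 10000L)
)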
References
Egelé, Romain, Guyon, Isabelle, Vishwanath, Venkatram, Balaprakash, Prasanna (2023). “Asynchronous Decentralized Bayesian Optimization for Large Scale Hyperparameter Optimization.” In 2023 IEEE 19th International Conference on e-Science (e-Science), 1–10.
Examples
if (requireNamespace("rush") &
requireNamespace("mlr3learners") &
requireNamespace("DiceKriging") &
requireNamespace("rgenoud")) {
if (redis_available()) {
library(bbotk)
library(paradox)
library(mlr3learners)
fun = function(xs) {
list(y = xs$x ^ 2)
}
domain = ps(x = p_dbl(lower = -10, upper = 10))
codomain = ps(y = p_dbl(tags = "minimize"))
objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)
instance = OptimInstanceAsyncSingleCrit$new(
objective = objective,
terminator = trm("evals", n_evals = 10))
rush::rush_plan(n_workers=2)
optimizer = opt("adbo", design_size = 4, n_workers = 2)
optimizer$optimize(instance)
} else {
message("Redis server is not available.\nPlease set up Redis prior to running the example.")
}
}