gradstep {pnd}                                                R Documentation

Automatic step selection for numerical derivatives

Description

Automatic step selection for numerical derivatives

Usage

gradstep(
  FUN,
  x,
  h0 = NULL,
  method = c("plugin", "SW", "CR", "CRm", "DV", "M", "K"),
  control = NULL,
  cores = 1,
  preschedule = getOption("pnd.preschedule", TRUE),
  cl = NULL,
  ...
)

## S3 method for class 'stepsize'
print(x, ...)

## S3 method for class 'gradstep'
print(x, ...)

Arguments

FUN

Function for which the optimal numerical derivative step size is needed.

x

Numeric vector or scalar: the point at which the derivative is computed and the optimal step size is estimated.

h0

Numeric vector or scalar: the initial step size, defaulting to a relative step slightly greater than .Machine$double.eps^(1/3) (or an absolute step of the same size if x == 0).
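For illustration, a default of this kind could be computed as follows (a sketch only; the package's exact constant is slightly larger):

x <- c(0, 1, 100)                            # example evaluation points
eps13 <- .Machine$double.eps^(1/3)           # approx. 6.06e-06
h0 <- ifelse(x == 0, eps13, abs(x) * eps13)  # relative step; absolute at zero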

method

Character indicating the method: "CR" for Curtis and Reid (1974), "CRm" for modified Curtis–Reid, "DV" for Dumontet and Vignes (1977), "SW" for Stepleman and Winarsky (1979), "M" for Mathur (2012), "K" for Kostyrka (2026, experimental), and "plugin" (the default) for the single-step plug-in estimator.

control

A named list of tuning parameters for the chosen method. If NULL, default values are used. See the documentation for the respective methods. Note that the full iteration history, including all function evaluations, is returned, but different methods have slightly different diagnostic outputs.
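For instance, assuming the chosen method accepts a maxit entry capping the number of search iterations (an illustrative name; consult the respective step.*() help page for the actual parameters):

gradstep(x = 1, FUN = sin, method = "SW", control = list(maxit = 10))  # 'maxit' assumed here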

cores

Integer specifying the number of CPU cores used for parallel computation. Recommended to be set to the number of physical cores on the machine minus one.
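One way to follow this recommendation with the base parallel package:

nc <- max(1L, parallel::detectCores(logical = FALSE) - 1L, na.rm = TRUE)  # physical cores minus one
gradstep(x = 1:4, FUN = function(x) sum(sin(x)), cores = nc)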

preschedule

Logical: if TRUE, disables pre-scheduling for mclapply() or enables load balancing with parLapplyLB(). Recommended for functions that take less than 0.1 s per evaluation.
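Since the default is taken from the pnd.preschedule option (see Usage), it can also be changed once per session:

options(pnd.preschedule = FALSE)  # sets the session-wide default for subsequent calls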

cl

An optional user-supplied cluster object (created by makeCluster() or similar functions). If not NULL, the code uses parLapply() (if preschedule is TRUE) or parLapplyLB() on that cluster on Windows, and mclapply() (fork cluster) on everything else.
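A minimal sketch of supplying one's own cluster:

cl <- parallel::makeCluster(2)       # two PSOCK workers
gradstep(x = 1, FUN = sin, cl = cl)
parallel::stopCluster(cl)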

...

Passed to FUN.

Details

We recommend using the Stepleman–Winarsky algorithm because it does not suffer from the over-estimation of the truncation error seen in the Curtis–Reid approach or from the sensitivity to near-zero third derivatives of the Dumontet–Vignes approach. It tries multiple step sizes and robustly handles missing values arising from function evaluations at inadequate step sizes.
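To see this on a simple function, the step sizes chosen by several algorithms can be compared via the par element described under Value below:

sapply(c("CR", "DV", "SW"), function(m) gradstep(x = 1, FUN = sin, method = m)$par)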

Value

A list similar to the one returned by optim(), with the coordinate-wise results concatenated element by element:

par – the optimal step sizes found;
value – the estimated numerical gradient;
counts – the number of iterations for each coordinate;
abs.error – an estimate of the total approximation error (sum of the truncation and rounding errors);
exitcode – an integer code indicating the termination status: 0 – optimal termination within tolerance; 1 – the truncation error (CR method) or the third derivative (DV method) is zero, so a large step size is preferred; 2 – no change in the step size within tolerance; 3 – the solution is at the boundary of the allowed value range; 4 – the maximum number of iterations was reached;
message – summary messages of the exit status;
iterations – a list of lists containing the full step-size search path, argument grids, function values on those grids, estimated error ratios, and estimated derivative values for each coordinate.
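For instance, the components can be inspected directly (the true derivative of sin at x = 1 is cos(1) ≈ 0.5403):

s <- gradstep(x = 1, FUN = sin, method = "SW")
s$par       # optimal step size found
s$value     # gradient estimate, close to cos(1)
s$exitcode  # 0 means optimal termination within tolerance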

References

Curtis AR, Reid JK (1974). “The Choice of Step Lengths When Using Differences to Approximate Jacobian Matrices.” IMA Journal of Applied Mathematics, 13(1), 121–126. doi:10.1093/imamat/13.1.121.

Dumontet J, Vignes J (1977). “Détermination du pas optimal dans le calcul des dérivées sur ordinateur.” RAIRO. Analyse numérique, 11(1), 13–25. doi:10.1051/m2an/1977110100131.

Mathur R (2012). An Analytical Approach to Computing Step Sizes for Finite-Difference Derivatives. Ph.D. thesis, University of Texas at Austin. http://hdl.handle.net/2152/ETD-UT-2012-05-5275.

Stepleman RS, Winarsky ND (1979). “Adaptive Numerical Differentiation.” Mathematics of Computation, 33(148), 1257–1264. doi:10.1090/S0025-5718-1979-0537969-8.

See Also

step.CR() for Curtis–Reid (1974) and its modification, step.plugin() for the one-step plug-in solution, step.DV() for Dumontet–Vignes (1977), step.SW() for Stepleman–Winarsky (1979), step.M() for Mathur (2012), and step.K() for Kostyrka (2026).

Examples

gradstep(x = 1, FUN = sin, method = "CR")
gradstep(x = 1, FUN = sin, method = "CRm")
gradstep(x = 1, FUN = sin, method = "DV")
gradstep(x = 1, FUN = sin, method = "SW")
gradstep(x = 1, FUN = sin, method = "M")
# Works for gradients
gradstep(x = 1:4, FUN = function(x) sum(sin(x)))
print(step.CR(x = 1, sin))
print(step.DV(x = 1, sin))
print(step.plugin(x = 1, sin))
print(step.SW(x = 1, sin))
print(step.M(x = 1, sin))
print(step.K(x = 1, sin))
f <- function(x) x[1]^3 + sin(x[2])*exp(x[3])
print(gradstep(x = c(2, pi/4, 0.5), f))
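# Extra arguments are passed through '...' to FUN
# (the two-argument function and the name 'k' below are purely illustrative)
gradstep(x = 1, FUN = function(x, k) sin(k * x), k = 2)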
