print.Hessian {pnd}    R Documentation

Numerical Hessians

Description

Computes the second derivatives of a function with respect to all combinations of its input coordinates. Arbitrary accuracies and sides for different coordinates of the argument vector are supported.

Usage

## S3 method for class 'Hessian'
print(
  x,
  digits = 4,
  shave.spaces = TRUE,
  begin = "",
  sep = "  ",
  end = "",
  ...
)

Hessian(
  FUN,
  x,
  side = 0,
  acc.order = 2,
  h = NULL,
  h0 = NULL,
  control = list(),
  f0 = NULL,
  cores = 1,
  preschedule = TRUE,
  cl = NULL,
  func = NULL,
  ...
)

Arguments

x

Numeric vector or scalar: point at which the derivative is estimated. FUN(x) must return a finite value.

digits

Positive integer: the number of digits after the decimal point to round to (i.e. one less than the number of significant digits).

shave.spaces

Logical: if TRUE, removes extra spaces to produce compact output; if FALSE, pads entries to produce nearly fixed-width output.

begin

A character to put at the beginning of each line, usually "", "(", or "c(" (the latter is useful if console output is used in calculations).

sep

The column delimiter, usually " ", "|", "&" (for LaTeX), or ", ".

end

A character to put at the end of each line, usually "" or ")".

...

Additional arguments passed to FUN.

FUN

A function returning a numeric scalar. If the function returns a vector, the output will be a Jacobian. If func is passed instead of FUN, as in numDeriv::grad(), it is reassigned to FUN with a warning.

side

Integer scalar or vector indicating difference type: 0 for central, 1 for forward, and -1 for backward differences. Central differences are recommended unless computational cost is prohibitive.
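
As a base-R illustration (not pnd internals) of why central differences are recommended, compare the error of the two difference types for a smooth function whose derivative is known; the function, point, and step size below are arbitrary choices:

```r
# Forward differences (side = 1) have error O(h); central differences
# (side = 0) have error O(h^2). Demonstrate for f(x) = sin(x) at x = 1,
# where the true derivative is cos(1).
f <- function(x) sin(x)
x <- 1
h <- 1e-5
d.forward <- (f(x + h) - f(x)) / h            # one-sided, error O(h)
d.central <- (f(x + h) - f(x - h)) / (2 * h)  # two-sided, error O(h^2)
err.forward <- abs(d.forward - cos(x))  # on the order of 1e-6
err.central <- abs(d.central - cos(x))  # several orders of magnitude smaller
```

The forward difference costs one fewer evaluation when f(x) is already known, which is why one-sided differences remain attractive when evaluations are expensive.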

acc.order

Integer specifying the desired accuracy order. The error typically scales as O(h^{\mathrm{acc.order}}).

h

Numeric scalar, vector, or character specifying the step size for the numerical difference. If character ("CR", "CRm", "DV", or "SW"), calls gradstep() with the appropriate step-selection method. Must be length 1 or match length(x). Matrices of step sizes are not supported. Suggestions on how to handle all pairs of coordinates are welcome.

h0

Numeric scalar or vector: initial step size for the automatic search with gradstep().

control

A named list of tuning parameters passed to gradstep().

f0

Optional numeric scalar or vector: if provided and applicable, used where the stencil contains zero (i.e. where FUN(x) is part of the weighted sum) to save time. Currently ignored.

cores

Integer specifying the number of CPU cores used for parallel computation. Recommended to be set to the number of physical cores on the machine minus one.

preschedule

Logical: if TRUE, enables pre-scheduling for mclapply() and static scheduling via parLapply(); if FALSE, uses dynamic load balancing via parLapplyLB(). Pre-scheduling is recommended for functions that take less than 0.1 s per evaluation.

cl

An optional user-supplied cluster object (created by makeCluster() or a similar function). If not NULL, evaluations are dispatched to that cluster via parLapply() (if preschedule is TRUE) or parLapplyLB(); clusters are the only option on Windows. On other systems, if no cluster is supplied, forking via mclapply() is used.
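
The scheduling choice described above can be sketched with the base-R parallel package alone (this is an illustration of the two dispatch modes, not pnd's internal code):

```r
library(parallel)

cl <- makeCluster(2)  # a small user-supplied PSOCK cluster
# Static scheduling: tasks are split among workers up front
# (corresponds to preschedule = TRUE).
res.static <- parLapply(cl, 1:4, function(i) i^2)
# Load balancing: tasks are handed out as workers become free
# (corresponds to preschedule = FALSE; better for uneven task times).
res.balanced <- parLapplyLB(cl, 1:4, function(i) i^2)
stopCluster(cl)

unlist(res.static)  # both modes return the same values
```

Load balancing pays off when individual evaluations of FUN take noticeably different amounts of time; for many fast, uniform evaluations, the per-task communication overhead makes static scheduling faster.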

func

Deprecated; for numDeriv::grad() compatibility only.

Details

The optimal step size for second-order-accurate central-difference Hessians is of the order Mach.eps^(1/4) because it balances the Taylor-series truncation error against the rounding error. However, selecting the best step size typically requires knowledge of higher-order cross-derivatives and is technically involved. Future releases will allow character arguments to invoke automatic data-driven step-size selection.
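
A base-R sketch of this trade-off (the function and evaluation point are arbitrary choices, not pnd defaults): for the central second difference, the truncation error is O(h^2) while the rounding error is O(eps / h^2), so the two balance at h of order eps^(1/4).

```r
# Central second difference f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2,
# applied to f = exp at x = 1, where the true second derivative is e.
f  <- exp
x  <- 1
d2 <- function(h) (f(x + h) - 2 * f(x) + f(x - h)) / h^2

h.opt   <- .Machine$double.eps^(1/4)  # about 1.2e-4: balanced errors
h.small <- .Machine$double.eps^(1/2)  # about 1.5e-8: far too small here

err.opt   <- abs(d2(h.opt)   - exp(1))  # small
err.small <- abs(d2(h.small) - exp(1))  # rounding error dominates, O(1)
```

With h = eps^(1/2), the numerator is of the same magnitude as the rounding noise in the function values, so the estimate is essentially garbage; eps^(1/4) keeps both error sources comparable.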

The use of f0 can reduce computation time similar to the use of f.lower and f.upper in uniroot().

Some numerical packages offer the option (or even the default behaviour) of computing not only the i < j cross-partials for the Hessian but all pairs of i and j: both triangular matrices are filled, and the matrix is averaged with its transpose to obtain a Hessian. This is the behaviour of optimHess(). However, it can be shown that H[i, j] and H[j, i] use the same evaluation grid, so with a single parallelisable evaluation of the function on that grid, no symmetrisation is necessary: the result is mathematically and computationally identical. In pnd, only the upper triangular matrix is computed, saving time and ensuring unambiguous results owing to the interchangeability of the summation terms (ignoring the numerical error in summation, about which nothing can be done apart from compensated summation, e.g. via Kahan's algorithm).
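
The symmetry argument can be demonstrated in base R (a sketch of the idea, not pnd's implementation): the four-point central-difference stencil for a cross-partial depends only on the unordered pair of coordinates, so evaluating it once per pair gives bit-for-bit identical H[i, j] and H[j, i].

```r
f <- function(x) prod(sin(x))

# Cross-partial d^2 f / (dx_i dx_j) by central differences; sorting the
# indices makes the grid and summation order canonical for each pair.
cross <- function(f, x, i, j, h = 1e-4) {
  ij <- sort(c(i, j))
  total <- 0
  for (si in c(-1, 1)) for (sj in c(-1, 1)) {
    xx <- x
    xx[ij[1]] <- xx[ij[1]] + si * h
    xx[ij[2]] <- xx[ij[2]] + sj * h
    total <- total + si * sj * f(xx)
  }
  total / (4 * h^2)
}

x0  <- c(1, 2, 3)
h12 <- cross(f, x0, 1, 2)
h21 <- cross(f, x0, 2, 1)
identical(h12, h21)  # TRUE: same grid, same sums, no symmetrisation needed
```

For this f, the true cross-partial at x0 is cos(1) * cos(2) * sin(3), which the stencil reproduces up to the O(h^2) truncation error.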

Value

A matrix with as many rows and columns as length(x). Unlike the output of numDeriv::hessian(), this output preserves the names of x.

See Also

Grad() for gradients, GenD() for generalised numerical differences.

Examples

f <- function(x) prod(sin(x))
Hessian(f, 1:4)

# Large matrices
system.time(Hessian(f, 1:100))


[Package pnd version 0.1.0 Index]