
Local sensitivity analysis explores the change of a quantity of interest $y$ due to small variations of the input parameters around a reference point $\mathbf{x}_0$ (e.g. the mean value of the vector of input parameters). The local sensitivity with respect to the $i$-th input parameter is the partial derivative $s_i=\frac{\partial y(\mathbf{x})}{\partial x_i}.$

The quantity $s_i$ can be approximated using finite differences.

If finite differences are used to calculate the sensitivities, $s_i$ is approximated by the slope of a secant line through the reference point $(\mathbf{x}_0,y(\mathbf{x}_0))$ and the point where the variable $x_i$ is perturbed by a small amount $h$, which yields $s_i=\frac{\partial y(\mathbf{x})}{\partial x_i} \approx \frac{y(x_{0,1}, \ldots, x_{0,i-1},\, x_{0,i}+h,\, x_{0,i+1}, \ldots, x_{0,n})-y(\mathbf{x}_0)}{h}.$
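The forward-difference approximation above can be sketched as follows; the function `forward_difference_sensitivities` and its arguments are illustrative names, not from the source.

```python
import numpy as np

def forward_difference_sensitivities(y, x0, h=1e-6):
    """Approximate s_i = dy/dx_i at x0 by forward differences.

    y  : callable mapping an n-vector to a scalar
    x0 : reference point, shape (n,)
    h  : perturbation size (see the discussion of choosing h below)
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = y(x0)                 # value at the reference point
    s = np.empty_like(x0)
    for i in range(x0.size):
        x = x0.copy()
        x[i] += h              # perturb only the i-th input
        s[i] = (y(x) - y0) / h # slope of the secant line
    return s

# Example with a simple quadratic: y(x) = x1^2 + 3*x2,
# whose exact gradient at (2, 1) is (4, 3).
f = lambda x: x[0]**2 + 3.0*x[1]
print(forward_difference_sensitivities(f, [2.0, 1.0]))
```

Note that this requires $n+1$ evaluations of $y$: one at $\mathbf{x}_0$ and one per perturbed input.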

An important consideration in numerical differentiation is the choice of the step size $h$. If $h$ is too small, rounding errors can lead to erroneous results; if $h$ is too large, the slope of the secant may differ considerably from the derivative at $\mathbf{x}_0$.
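A minimal sketch of this trade-off, using a forward difference of $\exp$ at $x=1$ (whose exact derivative is $e$); the step sizes shown are illustrative. For forward differences in double precision, a common rule of thumb is $h$ on the order of the square root of machine epsilon (about $10^{-8}$).

```python
import numpy as np

f, x, exact = np.exp, 1.0, np.exp(1.0)

def fd_error(h):
    """Absolute error of the forward-difference slope versus the exact derivative."""
    return abs((f(x + h) - f(x)) / h - exact)

# Too large: truncation error dominates. Too small: rounding error dominates.
for h in (1e-1, 1e-8, 1e-16):
    print(f"h = {h:g}: error = {fd_error(h):.3e}")
```

The moderate step size gives the smallest error; at $h=10^{-16}$ the perturbation is lost entirely to floating-point rounding.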

For a large number of input parameters $n$, the computational effort of determining all $s_i,\, i=1,\ldots, n$ can reach the limits of practical applicability, since each sensitivity requires additional evaluations of $y$. Efficient Monte Carlo gradient estimation procedures are therefore available for the computation of gradients.
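One such Monte Carlo approach is simultaneous perturbation (SPSA-style) gradient estimation: instead of perturbing one input at a time, each sample perturbs all inputs at once with a random $\pm 1$ vector, so the cost per sample is two evaluations of $y$ regardless of $n$. The sketch below is one possible estimator of this kind; the function name and parameters are assumptions for illustration, not from the source.

```python
import numpy as np

def spsa_gradient(y, x0, h=1e-4, n_samples=200, rng=None):
    """Simultaneous-perturbation Monte Carlo estimate of the gradient of y at x0.

    Each sample draws a random +/-1 perturbation direction delta and uses a
    central difference along it; averaging over samples converges to the
    gradient, at 2 evaluations of y per sample independent of the dimension n.
    """
    rng = np.random.default_rng(rng)
    x0 = np.asarray(x0, dtype=float)
    g = np.zeros_like(x0)
    for _ in range(n_samples):
        delta = rng.choice([-1.0, 1.0], size=x0.size)
        slope = (y(x0 + h*delta) - y(x0 - h*delta)) / (2.0*h)
        g += slope / delta     # component-wise SPSA update
    return g / n_samples

# Same quadratic as before: exact gradient at (2, 1) is (4, 3).
f = lambda x: x[0]**2 + 3.0*x[1]
print(spsa_gradient(f, [2.0, 1.0], rng=0))  # noisy estimate near [4, 3]
```

The estimate is unbiased for smooth $y$ up to $O(h^2)$ terms, but carries Monte Carlo noise that shrinks with the number of samples.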