Many statistical procedures involve the calculation of integrals or the optimization (minimization or maximization) of some objective function. In practical implementations, the user often faces specific problems such as apparent numerical instability of the integral calculation, the choice of grid points, or the appearance of several local minima or maxima. In this paper we provide insights into these problems (why and when do they happen?) and give guidelines on how to deal with them. Such problems are not new, nor are the ways of dealing with them, but it is worthwhile to give them serious consideration. For a transparent and clear discussion of these issues, we focus on a particular statistical problem: nonparametric estimation of a density from a sample that contains measurement errors. The discussion and guidelines remain valid in other contexts, however. In the density deconvolution setting, a kernel density estimator has been studied in detail in the literature. The estimator is consistent, and fully data-driven procedures have been proposed. When implemented in practice, however, the estimator can turn out to be very inaccurate if no adequate numerical procedures are used. We review the steps leading to the calculation of the estimator and to the selection of the parameters of the method, and discuss the various problems encountered in doing so.
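To make the numerical issues concrete, the following is a minimal sketch (not the authors' implementation) of the deconvolution kernel density estimator computed via its Fourier inversion integral. It assumes a known Laplace measurement-error distribution (characteristic function 1/(1 + σ²t²)) and a kernel whose Fourier transform is (1 − t²)³ on [−1, 1], a common choice in the deconvolution literature because its compact support keeps the integrand from dividing by vanishingly small error characteristic-function values; the grid sizes and bandwidth are illustrative, not recommendations.

```python
import numpy as np

def deconv_kde(x_grid, data, h, sigma_err):
    """Deconvolution kernel density estimate on x_grid.

    Assumptions (illustrative only):
      - measurement errors are Laplace(0, sigma_err), with
        characteristic function 1 / (1 + sigma_err**2 * t**2);
      - the kernel has Fourier transform (1 - t**2)**3 on [-1, 1],
        so the t-integral is restricted to [-1/h, 1/h].
    """
    # Grid for the Fourier-inversion integral; outside [-1/h, 1/h]
    # the kernel's Fourier transform vanishes.
    t = np.linspace(-1.0 / h, 1.0 / h, 1001)

    # Empirical characteristic function of the contaminated sample.
    ecf = np.exp(1j * np.outer(t, data)).mean(axis=1)

    # Fourier transform of the kernel, evaluated at h*t.
    kernel_ft = (1.0 - (h * t) ** 2) ** 3

    # Characteristic function of the (assumed known) Laplace error.
    err_cf = 1.0 / (1.0 + sigma_err ** 2 * t ** 2)

    # Inversion integral, approximated by the trapezoidal rule;
    # the estimate is real in theory, so keep only the real part.
    integrand = np.exp(-1j * np.outer(x_grid, t)) * (ecf * kernel_ft / err_cf)
    return np.real(np.trapz(integrand, t, axis=1)) / (2.0 * np.pi)

# Illustrative usage: N(0,1) signal contaminated by Laplace noise.
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500) + rng.laplace(0.0, 0.3, 500)
grid = np.linspace(-5.0, 5.0, 201)
fhat = deconv_kde(grid, sample, h=0.4, sigma_err=0.3)
```

Note that the estimate can take small negative values (a known feature of deconvolution kernel estimators), and its quality hinges on the very choices the paper discusses: the resolution and range of the `t` grid, and the bandwidth `h`.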
| Translated title of the contribution | Frequent problems in calculating integrals and optimizing objective functions: a case study in density deconvolution |
| Pages (from-to) | 349-355 |
| Number of pages | 7 |
| Journal | Statistics and Computing |
| Publication status | Published - Dec 2007 |