It is common, in errors-in-variables problems in regression, to assume that the errors are incurred 'after the experiment', in that the observed value of the explanatory variable is an independent perturbation of its true value. However, if the errors are incurred 'before the experiment' then the true value of the explanatory variable equals a perturbation of its observed value. This is the context of the Berkson model, which is increasingly attracting attention in parametric and semiparametric settings. We introduce and discuss nonparametric techniques for analysing data that are generated by the Berkson model. Our approach permits both random and regularly spaced values of the target doses. In the absence of data on dosage error it is necessary to propose a distribution for the latter, but we show numerically that our method is robust against that assumption. The case of dosage error data is also discussed. A practical method for smoothing parameter choice is suggested. Our techniques for errors-in-variables regression are shown to achieve theoretically optimal convergence rates.
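The distinction between the two error mechanisms described above can be illustrated with a small simulation. This is only an illustrative sketch, not the paper's estimator: in the classical model the observed value is the true value plus noise, whereas in the Berkson model the true value is the observed (target) value plus noise, so the error is independent of the observed dose rather than the true one. All variable names and distributional choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sigma_u = 0.5  # assumed dosage-error standard deviation

# Target doses W set by the experimenter (observed without error).
W = rng.uniform(0, 10, n)

# Classical errors-in-variables: X is the true value and the observation
# W_classical = X + U is an independent perturbation of it ('after the experiment').
X_classical = rng.uniform(0, 10, n)
W_classical = X_classical + rng.normal(0, sigma_u, n)

# Berkson model: the true dose X = W + U is a perturbation of the observed
# target dose W ('before the experiment').
X_berkson = W + rng.normal(0, sigma_u, n)

# In the Berkson model the error X - W is independent of the OBSERVED value W,
# so their sample covariance should be close to zero.
print(np.cov(W, X_berkson - W)[0, 1])
```

Note that in the classical model the analogous covariance, between the observation and the error, is zero with respect to the true value instead; this asymmetry is what makes the two problems require different nonparametric treatments.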
Translated title of the contribution: Nonparametric methods for solving the Berkson errors-in-variables problem
Pages (from-to): 201–220
Number of pages: 20
Journal: Journal of the Royal Statistical Society: Series B, Statistical Methodology
Publication status: Published - Apr 2006