Wood et al. (2017) developed methods for fitting penalized regression spline based generalized additive models, with of the order of 10^4 coefficients, to up to 10^8 data. The methods offered a two to three order of magnitude reduction in computational cost relative to the most efficient previous methods. Part of the gain resulted from a set of methods for efficiently computing model matrix products when each model covariate takes only a discrete set of values substantially smaller than the sample size (generalizing an idea that first appeared in Lang et al., 2014). Covariates can always be rounded to achieve such discretization, and it should be noted that the discretization is marginal: we do not rely on discretizing covariates jointly, which would typically require very coarse discretization. The most expensive computation in model estimation is the formation of the matrix cross product X^T W X, where X is a model matrix and W a diagonal or tri-diagonal matrix. The purpose of this paper is to present a simple, novel and substantially more efficient approach to the computation of this cross product. The new method offers, for example, a 30-fold reduction in cross product computation time for the Black Smoke model dataset motivating Wood et al. (2017). Given this reduction in computational cost, the subsequent Cholesky decomposition of X^T W X and the follow-on computation of (X^T W X)^{-1} become a more significant part of the computational burden, and we also discuss the choice of methods for improving their speed.
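To make the discretization idea concrete, the following is a minimal sketch of the simplest single-term case in the spirit of Lang et al. (2014), not the method of this paper: when every row of X is one of m unique marginal basis rows indexed by k, the weights of W can be accumulated per unique value so that X^T W X is formed from an m x p matrix rather than the full n x p matrix. All names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 100_000, 50, 5   # n observations, m unique covariate values, p basis functions

# Marginal model matrix: one basis row per unique (discretized) covariate value.
Xbar = rng.standard_normal((m, p))
k = rng.integers(0, m, size=n)   # which unique value each observation takes
w = rng.random(n)                # diagonal of W

# Naive approach: expand to the full n x p model matrix and form X^T W X directly,
# at O(n p^2) cost.
X = Xbar[k]
naive = X.T @ (w[:, None] * X)

# Discretized approach: first accumulate the weights over observations sharing
# each unique value, then work with the small marginal matrix, at O(n + m p^2) cost.
wbar = np.bincount(k, weights=w, minlength=m)
fast = Xbar.T @ (wbar[:, None] * Xbar)

assert np.allclose(naive, fast)
```

With multiple smooth terms the cross product involves pairs of marginal matrices and the bookkeeping is more involved; the sketch only shows why marginal discretization removes the sample-size factor from the dominant cost.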
- Generalized additive model
- Fast regression