Prediction Consistency of Lasso Regression Does Not Need Normal Errors

Abstract

Sourav Chatterjee in 2014 proved the consistency of any estimator combining ordinary least squares (OLS) with a Lasso penalty, under the conditions that the observations are bounded and the errors are normally distributed, independent of the observations, with zero mean and finite variance. Reviewing his elegant proof, we come to the conclusion that the prediction consistency of OLS with Lasso can be proven under even fewer assumptions, i.e., without assuming normality of the errors, knowing only that they have zero mean and finite variance. We give an upper bound on the convergence rate of the OLS-Lasso estimator for these errors. This upper bound is not asymptotic and depends both on the number of regressors and on the size of the data set. Knowing the number of regressors in a regression problem, one can therefore estimate how large a data set is needed to achieve a prediction error below a given value, and this, in contrast to the cited work, without solving the parameter estimation problem of fitting the errors to a normal distribution. The result can encourage practitioners to use OLS-Lasso as a convergent algorithm for prediction with errors other than normal that satisfy these milder conditions.
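The flavor of the result can be illustrated with a small simulation. The sketch below is illustrative only and not the paper's proof or experiment: it assumes a hypothetical sparse linear model with bounded regressors, uses Laplace-distributed errors as one example of non-normal, zero-mean, finite-variance noise, and takes a penalty of the rate-matching form sqrt(2 log(2p)/n) as an assumption. The in-sample mean squared prediction error of a plain coordinate-descent Lasso is then compared at two sample sizes:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: minimize (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-column scaling factors
    r = y - X @ b                        # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            # soft-thresholding update for coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(0)
p, s = 20, 3                             # hypothetical: 20 regressors, 3 active
beta = np.zeros(p)
beta[:s] = 1.0

errs = {}
for n in (100, 2000):
    X = rng.uniform(-1.0, 1.0, size=(n, p))   # bounded observations
    eps = rng.laplace(0.0, 1.0, size=n)       # non-normal errors: zero mean, finite variance
    y = X @ beta + eps
    lam = np.sqrt(2.0 * np.log(2.0 * p) / n)  # assumed rate-matching penalty level
    b_hat = lasso_cd(X, y, lam)
    errs[n] = np.mean((X @ (b_hat - beta)) ** 2)  # mean squared prediction error

print(errs)
```

Under these assumed settings, the prediction error at the larger sample size should come out smaller than at the smaller one, consistent with a non-asymptotic bound that shrinks with the size of the data set for a fixed number of regressors.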

Authors
  • Schindlerova (Hlavackova-Schindler), Katerina
Shortfacts
Category
Journal Paper
Divisions
Data Mining
Journal or Publication Title
British Journal of Mathematics and Computer Science
ISSN
2231-0851
Publisher
SCIENCEDOMAIN international
Date
October 2016