Milan Hladík's Publications:

EIV regression with bounded errors in data: total 'least squares' with Chebyshev norm

Milan Hladík, Michal Černý, and Jaromír Antoch. EIV regression with bounded errors in data: total 'least squares' with Chebyshev norm. Stat. Papers, 61(1):279–301, 2020.


Abstract

We consider the linear regression model with stochastic regressors and stochastic errors in both the regressors and the dependent variable (a “structural EIV model”), where the regressors and errors are assumed to satisfy fairly general conditions that differ from the traditional assumptions on EIV models (such as Deming regression). Notably, we need neither independence of errors, nor identical distributions, nor zero means. The first main result is that the TLS estimator, with the traditional Frobenius norm replaced by the Chebyshev norm, yields a consistent estimator of the regression parameters under the assumptions summarized below. The second main result is an algorithm for computing the estimator, which reduces the computation to a family of generalized linear-fractional programming problems (easily solvable by interior point methods). Roughly speaking, the conditions under which our estimator works are: it is known which regressors are affected by random errors and which are observed exactly; the regressors satisfy a certain asymptotic regularity condition; all error distributions, both in the regressors and in the endogenous variable, are bounded in absolute value by a common bound (which is unknown and estimated); and, with high probability, we observe a family of data points whose errors are close to the bound. We also generalize the method to the case where the bounds on the errors in the dependent variable and in the regressors differ, but their ratios are known or estimable. The assumptions under which our estimator works cover many settings in which the traditional TLS is inconsistent.
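To make the estimator concrete, here is a minimal illustrative sketch (not the paper's algorithm). When every regressor is error-affected, minimizing the Chebyshev norm of the perturbation amounts, by an Oettli–Prager-type argument, to minimizing eps(b) = max_i |y_i - x_i^T b| / (1 + ||b||_1) over b, a generalized linear-fractional objective. The sketch below enumerates the sign orthants of b (exponential in the number of regressors, so only viable for small dimension; the paper instead solves a family of generalized linear-fractional programs by interior point methods) and, within each orthant, bisects on eps with an LP feasibility test. The function name `chebyshev_tls` and this bisection-plus-LP scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Chebyshev-norm TLS estimator: minimize over b the quantity
#   eps(b) = max_i |y_i - x_i^T b| / (1 + ||b||_1).
# Within a fixed sign orthant (b = s * u, u >= 0) the condition
#   |y_i - x_i^T b| <= eps * (1 + 1^T u)
# is linear in u for fixed eps, so we bisect on eps and test feasibility
# with a linear program.  Illustrative only; not the paper's algorithm.
import itertools

import numpy as np
from scipy.optimize import linprog


def chebyshev_tls(X, y, tol=1e-6):
    n, p = X.shape
    eps_hi = float(np.max(np.abs(y))) + 1.0  # b = 0 is feasible once eps >= max|y_i|
    best_eps, best_b = np.inf, None
    for signs in itertools.product([1.0, -1.0], repeat=p):
        s = np.array(signs)
        Xs = X * s  # substitute b = s * u with u >= 0 (linprog's default bounds)
        lo, hi, b_here = 0.0, eps_hi, None
        while hi - lo > tol:
            eps = 0.5 * (lo + hi)
            # Rewrite |y - Xs u| <= eps * (1 + 1^T u) as A_ub @ u <= b_ub.
            ones = np.ones((n, p))
            A_ub = np.vstack([-Xs - eps * ones, Xs - eps * ones])
            b_ub = np.concatenate([eps - y, eps + y])
            res = linprog(c=np.zeros(p), A_ub=A_ub, b_ub=b_ub, method="highs")
            if res.success:
                hi, b_here = eps, s * res.x  # feasible: shrink eps
            else:
                lo = eps                     # infeasible: grow eps
        if b_here is not None and hi < best_eps:
            best_eps, best_b = hi, b_here
    return best_b, best_eps
```

On exactly consistent data (y = Xb with no noise), the minimal eps is zero and the true parameter vector is recovered, which gives a quick sanity check of the reduction.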

BibTeX

@article{HlaCer2020a,
 author = "Milan Hlad\'{\i}k and Michal {\v{C}}ern\'{y} and Jarom\'{\i}r Antoch",
 title = "{EIV} regression with bounded errors in data: total `least squares' with {Chebyshev} norm",
 webtitle = "{EIV} regression with bounded errors in data: total 'least squares' with Chebyshev norm",
 sjournal = "Stat. Pap.",
 journal = "Stat. Papers",
 fjournal = "Statistical Papers",
 volume = "61",
 number = "1",
 pages = "279--301",
 year = "2020",
 doi = "10.1007/s00362-017-0939-z",
 issn = "1613-9798",
 url = "https://link.springer.com/article/10.1007/s00362-017-0939-z",
 bib2html_dl_html = "https://doi.org/10.1007/s00362-017-0939-z",
 bib2html_dl_pdf = "http://rdcu.be/uEAI",
 abstract = "We consider the linear regression model with stochastic regressors and stochastic errors in both the regressors and the dependent variable (a ``structural EIV model''), where the regressors and errors are assumed to satisfy fairly general conditions that differ from the traditional assumptions on EIV models (such as Deming regression). Notably, we need neither independence of errors, nor identical distributions, nor zero means. The first main result is that the TLS estimator, with the traditional Frobenius norm replaced by the Chebyshev norm, yields a consistent estimator of the regression parameters under the assumptions summarized below. The second main result is an algorithm for computing the estimator, which reduces the computation to a family of generalized linear-fractional programming problems (easily solvable by interior point methods). Roughly speaking, the conditions under which our estimator works are: it is known which regressors are affected by random errors and which are observed exactly; the regressors satisfy a certain asymptotic regularity condition; all error distributions, both in the regressors and in the endogenous variable, are bounded in absolute value by a common bound (which is unknown and estimated); and, with high probability, we observe a family of data points whose errors are close to the bound. We also generalize the method to the case where the bounds on the errors in the dependent variable and in the regressors differ, but their ratios are known or estimable. The assumptions under which our estimator works cover many settings in which the traditional TLS is inconsistent.",
 keywords = "Errors-in-variables; Measurement error models; Total least squares; Chebyshev matrix norm; Bounded error distributions; Generalized linear-fractional programming",
}

Generated by bib2html.pl (written by Patrick Riley) on Wed Oct 23, 2024 08:16:44