Statistical Estimation: Asymptotic Theory

…when certain parameters in the problem tend to limiting values (for example, when the sample size increases indefinitely, the intensity of the noise approaches zero, etc.). To address the problem of asymptotically optimal estimators, consider the following important case. Let $X_1, X_2, \ldots, X_n$ be independent observations with the common probability density $f(x, \theta)$ (with respect to the Lebesgue measure on the real line), which depends on the unknown parameter $\theta \in \Theta \subset \mathbb{R}^1$. It is required to derive the best (asymptotically) estimator $\theta_n^*(X_1, \ldots, X_n)$ of the parameter $\theta$. The first question which arises in connection with this problem is how to compare different estimators or, equivalently, how to assess their quality: in terms of the mean square deviation from the parameter, or perhaps in some other way. The presently accepted approach to this problem, resulting from A. Wald's contributions, is as follows: introduce a nonnegative function $w_n(\theta_1, \theta_2)$, $\theta_1, \theta_2 \in \Theta$ (the loss function); given two estimators $\theta_n^{*1}$ and $\theta_n^{*2}$, the estimator for which the expected loss (risk) $E_\theta\, w_n(\theta_n^{*j}, \theta)$, $j = 1$ or $2$, is smaller is called the better with respect to $w_n$ at the point $\theta$ (here $E_\theta$ is the expectation evaluated under the assumption that the true value of the parameter is $\theta$). Obviously, such a method of comparison is not without its defects.
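To make the comparison of estimators by risk concrete, here is a minimal Monte Carlo sketch, not from the book: the normal location model $N(\theta, 1)$, the squared-error loss $w(\theta^*, \theta) = (\theta^* - \theta)^2$, and the sample mean and sample median as the two competing estimators are all illustrative assumptions. The sketch approximates the risk $E_\theta\, w_n(\theta_n^*, \theta)$ for each estimator at a fixed point $\theta$ by averaging the loss over repeated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(estimator, theta, n=50, reps=20_000,
         loss=lambda t, th: (t - th) ** 2):
    """Monte Carlo approximation of the risk E_theta w(T(X_1,...,X_n), theta).

    Assumes (for illustration only) that the observations are N(theta, 1).
    """
    samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    estimates = estimator(samples)          # one estimate per replication
    return loss(estimates, theta).mean()    # average loss = estimated risk

# Two competing estimators of the location parameter theta:
mean_risk = risk(lambda x: x.mean(axis=1), theta=0.0)
median_risk = risk(lambda x: np.median(x, axis=1), theta=0.0)
print(f"risk(mean)   ~ {mean_risk:.4f}")    # theory: 1/n = 0.0200
print(f"risk(median) ~ {median_risk:.4f}")  # theory: ~ pi/(2n) = 0.0314
```

In this particular model the sample mean has the smaller risk at every $\theta$ under squared-error loss, so it is the better estimator with respect to $w_n$ at each point; in general, however, which estimator is better may depend on $\theta$, which is one of the defects of pointwise comparison mentioned above.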