Pitman's Measure of Closeness: A Comparison of Statistical Estimators
Pitman's Measure of Closeness (PMC) is simply an idea whose time has come. There are certainly many different ways to estimate unknown parameters, but which method should you use? Posed as an alternative to the concept of mean squared error, PMC is based on the probabilities of the closeness of competing estimators to an unknown parameter. Renewed interest in PMC over the last 20 years has motivated the authors to produce this book, which explores this method of comparison and its usefulness. Written with research-oriented statisticians and mathematicians in mind, but also considering the needs of graduate students in statistics courses, this book provides a thorough introduction to the methods and known results associated with PMC. Following a foreword by C. R. Rao, the first three chapters focus on the basic concepts, history, controversies, paradoxes, and examples associated with the PMC criterion.
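The comparison the blurb describes can be illustrated with a short simulation. Under the Pitman closeness criterion, estimator T1 is preferred to T2 for a parameter θ when P(|T1 − θ| < |T2 − θ|) > 1/2. The sketch below (a hypothetical illustration, not code from the book) estimates this probability by Monte Carlo for the classic pairwise comparison of the sample mean versus the sample median from normal data:

```python
import numpy as np

def pitman_closeness(est1, est2, theta, sampler, n, reps=10_000, seed=None):
    """Monte Carlo estimate of P(|est1(X) - theta| < |est2(X) - theta|),
    the Pitman closeness of est1 relative to est2."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(reps):
        x = sampler(rng, n)  # draw one random sample of size n
        if abs(est1(x) - theta) < abs(est2(x) - theta):
            wins += 1
    return wins / reps

theta = 0.0
# Normal(theta, 1) samples of size 11; compare mean vs. median.
p = pitman_closeness(np.mean, np.median, theta,
                     lambda rng, n: rng.normal(theta, 1.0, n),
                     n=11, reps=5_000, seed=0)
print(p)  # for normal data the sample mean is Pitman-closer, so p > 1/2
```

A value of `p` above one half means the sample mean is Pitman-closer than the median in this setting, which agrees with the mean's efficiency advantage under normality; for heavy-tailed parents such as the Cauchy distribution the preference can reverse.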