Introduction to the Theory of Statistics
McGraw-Hill, 1973 - Mathematics - 564 pages
Contents: Probability; Random variables, distribution functions, and expectation; Special parametric families of univariate distributions; Joint and conditional distributions, stochastic independence, more expectation; Distributions of functions of random variables; Sampling and sampling distributions; Parametric point estimation; Parametric interval estimation; Tests of hypotheses; Linear models; Nonparametric methods.