Density Ratio Estimation in Machine Learning
Cambridge University Press, Feb 20, 2012 - Computers - 329 pages
Machine learning is an interdisciplinary field of science and engineering that studies mathematical theories and practical applications of systems that learn. This book introduces the theories, methods, and applications of density ratio estimation, a newly emerging paradigm in the machine learning community. Various machine learning problems, such as non-stationarity adaptation, outlier detection, dimensionality reduction, independent component analysis, clustering, classification, and conditional density estimation, can be systematically solved via the estimation of probability density ratios. The authors offer a comprehensive introduction to various density ratio estimators, including methods based on density estimation, moment matching, probabilistic classification, density fitting, and density-ratio fitting, and describe how these can be applied to machine learning problems. The book also provides mathematical theory for density ratio estimation, including parametric and non-parametric convergence analysis and numerical stability analysis, to complete the first definitive treatment of the entire framework of density ratio estimation in machine learning.
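To make the density-ratio-fitting idea concrete, the sketch below implements a minimal version of unconstrained least-squares importance fitting (uLSIF), one of the estimators covered in the book: the ratio r(x) = p_nu(x)/p_de(x) is modeled as a linear combination of Gaussian kernels, and the squared-error fit has a closed-form regularized solution. All parameter choices here (kernel width, regularization strength, number of basis functions) are illustrative assumptions, not the book's recommended settings; in practice they would be selected by cross-validation.

```python
import numpy as np

def ulsif(x_nu, x_de, sigma=1.0, lam=0.1, n_basis=100):
    """Minimal uLSIF sketch: estimate r(x) = p_nu(x) / p_de(x).

    Models the ratio as theta^T phi(x) with Gaussian kernels phi
    centered on (a subset of) numerator samples, and solves the
    regularized least-squares problem in closed form:
        theta = (H + lam I)^{-1} h,
    where H is the second moment of phi under p_de and h the first
    moment of phi under p_nu.
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(x_nu), size=min(n_basis, len(x_nu)), replace=False)
    centers = x_nu[idx]

    def phi(x):
        # (n, b) design matrix of Gaussian kernel values
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    phi_de = phi(x_de)
    H = phi_de.T @ phi_de / len(x_de)   # moment under denominator density
    h = phi(x_nu).mean(axis=0)          # moment under numerator density
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    # Clip negative values: a density ratio is non-negative.
    return lambda x: np.maximum(phi(x) @ theta, 0.0)

# Usage on synthetic 1-D data: numerator N(0, 1), denominator N(0.5, 1.2).
rng = np.random.default_rng(1)
x_nu = rng.normal(0.0, 1.0, size=(500, 1))
x_de = rng.normal(0.5, 1.2, size=(500, 1))
r = ulsif(x_nu, x_de)
print(r(np.array([[0.0]])))  # estimated ratio near x = 0
```

Note that the two densities themselves are never estimated; the ratio is fitted directly, which is the central point of the direct density-ratio estimation framework.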
Part II Methods of Density Ratio Estimation
Part III Applications of Density Ratios in Machine Learning
Part IV Theoretical Analysis of Density Ratio Estimation
Part V Conclusions