Neural networks for conditional probability estimation: forecasting beyond point predictions
This volume presents a neural network architecture for the prediction of conditional probability densities, which is vital when modelling variables that are strongly skewed or multimodal and a single point prediction would be misleading. Two alternative approaches are discussed: the GM network, in which all parameters are adapted during training, and the GM-RVFL model, which draws on the random vector functional link (RVFL) net approach. Points of particular interest:

- it examines the modifications to standard approaches needed for conditional probability prediction;
- it provides the first real-world test results for recent theoretical findings on the relationship between the generalisation performance of committees and the over-flexibility of their members.

This volume will be of interest to all researchers, practitioners and postgraduate / advanced undergraduate students working on applications of neural networks, especially those related to finance and pattern recognition.
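The GM network outlined above maps each input to the parameters of a Gaussian mixture over the target, so the full conditional density p(y|x) is predicted rather than a single point. A minimal sketch of this idea, assuming a one-hidden-layer tanh network and softmax/exponential parameterisations for the priors and widths (illustrative choices, not the book's exact implementation):

```python
import numpy as np

def gaussian(y, mu, sigma):
    """Gaussian density N(y; mu, sigma^2), evaluated per mixture kernel."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def gm_conditional_density(x, y, params):
    """p(y|x) as a K-kernel Gaussian mixture whose prior probabilities,
    kernel centres and kernel widths are all functions of x, computed by
    a one-hidden-layer network (params = W1, b1, W2, b2)."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ x + b1)            # hidden layer
    z = W2 @ h + b2                     # three raw outputs per kernel
    K = z.size // 3
    pi = np.exp(z[:K] - z[:K].max())
    pi /= pi.sum()                      # prior probabilities via softmax
    mu = z[K:2 * K]                     # kernel centres
    sigma = np.exp(z[2 * K:])           # kernel widths, kept positive
    return float(np.sum(pi * gaussian(y, mu, sigma)))
```

Because the output is a full density, skewness and multimodality in the target are captured directly; in the GM-RVFL variant the hidden-layer weights (W1, b1) would be drawn at random and left fixed, with only the output weights trained.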
The Bayesian Evidence Scheme for Regularisation
A Universal Approximator Network for Predicting Condi
A Maximum Likelihood Training Scheme
14 other sections not shown