## Artificial Neural Networks: An Introduction

This tutorial text provides the reader with an understanding of artificial neural networks (ANNs) and their applications, beginning with the biological systems that inspired them, moving through the learning methods that have been developed and the processes of data collection, and ending with the many ways ANNs are being used today. The material is presented with a minimum of mathematics (the mathematical details are included in appendices for interested readers) and a maximum of hands-on experience. All specialized terms are included in a glossary. The result is a highly readable text that teaches the engineer the guiding principles necessary to use and apply artificial neural networks.

### What people are saying

User review: "Nice book for understanding ANNs for beginners. Beginners should read from the first chapter."

User review: "Nice book."

### Contents

| Chapter | Page |
| --- | --- |
| Learning Methods | 13 |
| Data Collection, Preparation, Labeling, and Input Coding | 21 |
| Output Coding | 31 |
| Unsupervised Training Methods | 49 |
| Recurrent Neural Networks | 61 |
| A Plethora of Applications | 71 |
| Dealing with Limited Amounts of Data | 101 |
| Appendix A: The Feedforward Neural Network | 107 |
| Appendix B: Feature Saliency | 125 |
| Matlab Code for Various Neural Networks | 131 |
| Glossary of Terms | 143 |


### Common terms and phrases

activation function adapt adjusted applications approximation artificial neural networks backpropagation backpropagation algorithm Black Spruce bootstrapping cell classifier computed cross-validation decision regions desired output distance metric encoded error with respect estimator example feature space feedforward network feedforward neural network Figure Fisher iris data GLNN gradient descent Hessian matrix hidden layer hidden neurons Hopfield network Hopfield neural network hyperplanes input layer input pattern input vector iterations learning system linear neighborhood network designer network trained nodes number of hidden Optical optical character recognizer optimal output error output neuron parameters PCNN perceptron performance pixels Post-processing presented Priddy reader represent result self-organizing map sensor shown in Fig sigmoid solve speckle squared error statistical Step stimulus supervised learning Table target output techniques test set total number training data training feedforward training set transfer function unsupervised validation set values Wcols weight changes weight update Western Hemlock Western Larch Z-score normalization