Advances in Pattern Recognition Systems using Neural Network Technologies, volume 7.

I. Guyon and P.S.P. Wang, editors.
World Scientific, Series in Machine Perception and Artificial Intelligence, Singapore, 1994 (also appeared as a special issue of IJPRAI, volume 7(4), August 1993).



``Neural Networks'' are learning systems inspired by very simplified models of the brain. Often implemented in software, they are used for Signal Processing and Artificial Intelligence tasks. In the past few years, Neural Network techniques have proved very successful in Pattern Recognition. Yet, although recognizing patterns is a seemingly simple task that even young children perform with ease, it remains challenging for machines, Neural Networks included.

The commonplace rationale behind using Neural Networks is that a machine whose architecture imitates that of the brain should inherit its remarkable intelligence. This logic usually contrasts with the reality of Neural Network performance. In this book, however, the authors have kept some distance from the biological foundations of Neural Networks. The success of their applications relies, to a large extent, on careful engineering. For instance, many novel aspects of the work presented here are concerned with combining Neural Networks with other ``non-neural'' modules.

Few papers in this book are introductory. Perhaps the closest to introductions are [1, 5] and [14], but the reader unfamiliar with Neural Networks will probably find it necessary to first read an introductory paper or a textbook cited therein.

The papers cover a wide variety of applications in Pattern Recognition, including Speech [1, 7, 8, 14, 17], Optical Character Recognition and Signature Verification [2, 3, 4, 5, 6, 11], Vision [5, 9, 10, 15, 16], and Language [12, 13]. Feed-forward networks trained with the Back-Propagation algorithm are by far the most popular, but some papers also use Radial Basis Functions or related methods [5, 10], and others use Recurrent Networks [1, 9, 12, 15].
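
For readers wondering what ``training with Back-Propagation'' amounts to in practice, the sketch below shows the idea in its simplest form: a forward pass through a small feed-forward network, followed by gradient-descent weight updates computed with the chain rule. This is a minimal illustration only; the layer sizes, learning rate, and toy data are hypothetical and are not taken from any paper in this volume.

    import numpy as np

    # Minimal feed-forward network trained with back-propagation.
    # Layer sizes, learning rate, and the toy data are hypothetical.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 8))                 # 100 samples, 8 inputs
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

    W1 = 0.1 * rng.standard_normal((8, 16)); b1 = np.zeros(16)
    W2 = 0.1 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(500):
        h = sigmoid(X @ W1 + b1)                      # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)                    # forward pass, output layer
        d_out = (out - y) * out * (1 - out)           # chain rule at the output
        d_h = (d_out @ W2.T) * h * (1 - h)            # chain rule at the hidden layer
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

    print("final mean squared error:", float(np.mean((out - y) ** 2)))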

Two key words run through all the papers: structure and prior knowledge. The idea of using a fully connected ``back-prop net'' on top of the raw data and hoping for the best is no longer fashionable. Instead, different ways of improving performance by making efficient use of the designer's prior knowledge are investigated. The authors generally use pre- and post-processor modules which incorporate structural knowledge about the task. Of particular interest are pre-processors which enforce known invariances of the data, such as translation, rotation, etc. [4, 6, 10, 16], and Graph Algorithmic post-processors, including HMMs (Hidden Markov Models) [1, 2, 5, 11, 14, 17], which permit addressing the segmentation problem. A complementary way of incorporating prior knowledge is to constrain the structure of the Neural Network itself. The most widely used constrained networks are convolutional networks [2, 3, 4, 5, 11], whose one-dimensional version is known as the TDNN (Time Delay Neural Network) and whose two-dimensional version is derived from the structure of the Neocognitron. The TWN (Time Warping Network) [8] is another kind of constrained network, built from elastic matching units, which turns out to be a powerful extension of classical HMMs. Finally, super-structures are introduced in the form of multiple expert systems [10, 14, 15], whereby several Neural Networks, each specialized in a subtask, are used jointly to make the final decision.
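
As a concrete illustration of the weight-sharing constraint behind convolutional networks and the TDNN, the following sketch applies one small weight kernel at every offset of a time sequence, so that the number of free parameters is independent of the input length. All sizes here are hypothetical; this illustrates the general idea only, not any specific architecture in this book.

    import numpy as np

    # Weight sharing in a one-dimensional convolutional network (the TDNN idea):
    # the *same* kernel W is applied at every time offset, so the parameter
    # count does not grow with the input length. All sizes are hypothetical.
    rng = np.random.default_rng(0)
    T, n_features = 50, 12          # e.g. 50 frames of 12 spectral coefficients
    window, n_units = 5, 8          # kernel spans 5 frames; 8 feature detectors

    x = rng.standard_normal((T, n_features))          # input sequence
    W = 0.1 * rng.standard_normal((window, n_features, n_units))
    b = np.zeros(n_units)

    # Slide the shared kernel over the sequence (the weight-sharing constraint).
    out = np.empty((T - window + 1, n_units))
    for t in range(T - window + 1):
        out[t] = np.tanh(np.einsum('wf,wfu->u', x[t:t + window], W) + b)

    print(out.shape)   # (46, 8): one vector of learned features per time window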

We thank the authors and the reviewers for their efforts in contributing high-quality papers and many useful references.

[1] Y. Bengio. A Connectionist Approach to Speech Recognition.

[2] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, L. Jackel, Y. Le Cun, C. Moore, E. Sackinger, and R. Shah. Signature Verification with a Siamese TDNN.

[3] C. Burges, J. Ben, Y. Le Cun, J. Denker, and C. Nohl. Off-line Recognition of Handwritten Postal Words using Neural Networks.

[4] H. Drucker, R. Schapire, and P. Simard. Boosting Performance in Neural Networks.

[5] F. Fogelman, B. Lamy, and E. Viennet. Multi-Modular Neural Network Architectures for Pattern Recognition: Applications in Optical Character Recognition and Human Face Recognition.

[6] A. Gupta, M. V. Nagendraprasad, A. Liu, P. S. P. Wang, and S. Ayyadurai. An Integrated Architecture for Recognition of Totally Unconstrained Handwritten Numerals.

[7] E. K. Kim, J. T. Wu, S. Tamura, R. Close, H. Taketani, H. Kawai, M. Inoue, and K. Ono. Comparison of Neural Network and K-NN Classification Methods in Vowel and Patellar Subluxation Image Recognitions.

[8] E. Levin, R. Pieraccini, and E. Bocchieri. Time-Warping Network: A Neural Approach to Hidden Markov Model based Speech Recognition.

[9] H. Li and J. Wang. Computing Optical Flow with a Recurrent Neural Network.

[10] W. Li and N. Nasrabadi. Invariant Object Recognition Based on Neural Network of Cascaded RCE Nets.

[11] G. Martin, M. Rashid and J. Pittman. Integrated Segmentation and Recognition Through Exhaustive Scans or Learned Saccadic Jumps.

[12] C. B. Miller and C. L. Giles. Experimental Comparison of the Effect of Order in Recurrent Neural Networks.

[13] L. Miller and A. Gorin. Structured Networks for Adaptive Language Acquisition.

[14] N. Morgan, H. Bourlard, S. Renals, M. Cohen, and H. Franco. Hybrid Neural Network/Hidden Markov Model Systems for Continuous Speech Recognition.

[15] K. Peleg and U. Ben-Hanan. Adaptive Classification by Neural Net Based Prototype Populations.

[16] L. Wiskott and C. von der Malsburg. A Neural System for the Recognition of Partially Occluded Objects in Cluttered Scenes: A Pilot Study.

[17] G. Zavaliagkos, S. Austin, J. Makhoul, and R. Schwartz. A Hybrid Continuous Speech Recognition System Using Segmental Neural Nets with Hidden Markov Models.
