
Motivation

The fields of artificial neural networks and theoretical computer science have been linked since their inception. As Perrin (1990) says:

The first historical reference to finite automata is a paper of S.C. Kleene of 1956 in which the basic theorem, now known as Kleene's theorem, is already proved [...]. Kleene's paper was actually a mathematical reworking of the ideas of two researchers from MIT, W. McCulloch and W. Pitts, [...] who had presented as early as 1943 a logical model for the behaviour of nervous systems that turned out to be the model of a finite-state machine [...].
In the paper by McCulloch and Pitts, the first model of a finite automaton was indeed a network of idealized neuron-like elements.

The objective of this document is to comment on a series of papers showing the intellectual strands leading from this early work to current theoretical developments and potential applications in various areas. Neural networks are becoming increasingly popular as tools in time-series prediction, pattern recognition, process control, signal and image processing, etc. One of the exciting new and growing directions in neural networks is the study of models that process temporal signals and have feedback. Fully understanding these models means understanding their relationships and similarities to other models from mathematics, logic, and computer science. It also means understanding what neural network models are capable of computing and representing, and what computational insights can be gained from new models of neural networks. Given the increasing importance of computational models in all fields, interest in this area of neural networks and computer science will continue to grow.

In particular, the selected papers explore the aforementioned relationship between neural networks and the foundations of computer science: automata (from finite-state machines to Turing machines), the classes of formal languages they define (from regular to unrestricted), and the underlying models of computation. Neural networks have been used both to represent these models and to learn them from examples; this growing field of research has produced an interesting corpus of scientific work which, in addition to strengthening the view of neural networks as implementations of theoretical computational devices, has found numerous practical applications. I have tried to survey in a single volume the most important work relating neural networks to formal language and computation models, work which, like much other research on neural networks, is scattered throughout a wide variety of journals and technical reports.

The hardest part of compiling a document like this is selecting the papers. Some papers have not been included because of their length. Sometimes I have decided not to include the earliest account of a new idea or concept and have instead included a later but clearer paper; in these cases, the earlier works are cited and briefly discussed in the introductions.

To my knowledge, this is the first document covering this relationship; its orientation, however, is very similar to that of books exploring related subjects, such as Discrete Neural Computation by Siu et al. (1995) and Circuit Complexity and Neural Networks by Parberry (1994). What makes this document different is its emphasis on dynamic models of computation (i.e., automata) and dynamic neural networks (i.e., recurrent neural networks).

The featured papers are grouped into chapters. Each group of papers shares an introduction that allows the reader to grasp the relationships among the different approaches and the main emerging results.

Chapter introductions have been written with two (apparently conflicting) goals in mind:

In this way, the two main intended uses of the document are covered: as a textbook in a graduate course, or as a reference book for researchers active in the fields involved. Chapter introductions provide the reader with a unified vision of the whole field, from its inception up to the most recent work. For this purpose:

The intended readership of this document includes active researchers of all levels, both in academia and in industry (mainly in computer science, computer engineering, electrical engineering, and applied mathematics departments, or in R&D laboratories involved in computing, control, signal processing, etc.), coming both from the fields of formal languages and computation theory and from the broader neural networks community; graduate students in these fields; and research planners or evaluators. Readers are expected to hold (or be about to obtain) a bachelor's degree in computer science, computer engineering, electrical engineering, or applied mathematics, and to be interested in fields such as formal languages and models of computation, pattern recognition, machine learning, control, signal processing, and complex systems.


