
Organization of the document

The document is organized in four chapters in addition to this Introduction. Each chapter opens with extensive introductory material that allows the reader to understand the relationships among the different approaches and the main emerging results; these introductions also provide the tutorial background needed to put the collected papers in context.

Chapter 2 illustrates the beginning of the relationship between neural networks, automata, and formal models of computation by presenting two pioneering works: one is the seminal paper "A logical calculus of the ideas immanent in nervous activity" (McCulloch and Pitts, 1943), and the other is a book chapter by Minsky (1967), which discusses in detail the implementation of finite-state automata in terms of McCulloch-Pitts neurons. The introduction of the chapter discusses the computational capabilities of a single neuron and gives an introduction to finite-state machines.
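
As a pointer to the kind of device discussed there, the following sketch implements a McCulloch-Pitts-style binary threshold unit in Python; the weights and thresholds are illustrative choices, not values taken from the 1943 paper.

# A minimal sketch of a McCulloch-Pitts-style threshold unit: the
# output is 1 when the weighted sum of binary inputs reaches the
# threshold, and 0 otherwise. Weights and thresholds below are
# illustrative choices, not taken from McCulloch and Pitts (1943).

def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fires iff the weighted input sum
    reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logical AND and OR realized as single threshold units.
AND = lambda x1, x2: mcculloch_pitts([x1, x2], [1, 1], threshold=2)
OR  = lambda x1, x2: mcculloch_pitts([x1, x2], [1, 1], threshold=1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2))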

Chapter 3 addresses the use of neural networks as sequence processors and collects four papers (Elman, 1990; Jordan, 1986; Pollack, 1990; Bengio et al., 1994) in which discrete-time recurrent neural networks are trained to process temporal sequences. The introduction describes discrete-time recurrent neural network architectures in detail, viewing them as neural automata, and briefly discusses the algorithms used to train them and the problems that may be encountered during learning.
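
To fix ideas about the architectures described there, the sketch below implements the basic state update of a first-order discrete-time recurrent network, h(t) = g(W h(t-1) + U x(t) + b), viewed as a state machine; the sizes, random weights, and logistic activation are illustrative assumptions rather than details from the collected papers.

import numpy as np

# A minimal sketch of a first-order discrete-time recurrent network:
# the hidden state at time t is a function of the previous state and
# the current input, h[t] = g(W h[t-1] + U x[t] + b). All parameters
# here are illustrative, not taken from the collected papers.

rng = np.random.default_rng(0)
n_hidden, n_input = 3, 2
W = rng.standard_normal((n_hidden, n_hidden))  # state-to-state weights
U = rng.standard_normal((n_hidden, n_input))   # input-to-state weights
b = np.zeros(n_hidden)                         # biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run(sequence, h0=None):
    """Process a sequence of input vectors, returning the state trajectory."""
    h = np.zeros(n_hidden) if h0 is None else h0
    states = []
    for x in sequence:
        h = sigmoid(W @ h + U @ x + b)  # next state from state and input
        states.append(h)
    return states

# Example: drive the network with a short binary input sequence.
seq = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
for t, h in enumerate(run(seq)):
    print(t, h)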

Chapter 4 deals with theoretical work on the computational (symbolic sequence processing) capabilities of discrete-time recurrent neural networks and collects a number of representative papers (Goudreau et al., 1994; Siegelmann, 1995; Omlin and Giles, 1996b; Alquézar and Sanfeliu, 1995; Horne and Hush, 1996; Kremer, 1995; Siegelmann and Sontag, 1991; Alon et al., 1991). The introduction contains a tutorial describing formal grammars, the Chomsky hierarchy, Turing machines, and super-Turing computation.
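
As a small illustration of the distinction between levels of the Chomsky hierarchy discussed in that tutorial, the sketch below contrasts a finite-state recognizer for the regular language (ab)* with a counter-based recognizer (a counter standing in for a pushdown stack) for the context-free language a^n b^n; both examples are illustrative and are not drawn from the collected papers.

# A regular language such as (ab)* can be recognized with finitely
# many states, whereas a^n b^n requires unbounded memory (here a
# counter standing in for a pushdown stack).

def accepts_ab_star(s):
    """Finite-state recognizer for the regular language (ab)*."""
    state = 0                      # 0: expect 'a', 1: expect 'b'
    for c in s:
        if state == 0 and c == 'a':
            state = 1
        elif state == 1 and c == 'b':
            state = 0
        else:
            return False
    return state == 0

def accepts_anbn(s):
    """Counter-based recognizer for the context-free language a^n b^n."""
    count, seen_b = 0, False
    for c in s:
        if c == 'a' and not seen_b:
            count += 1
        elif c == 'b' and count > 0:
            seen_b, count = True, count - 1
        else:
            return False
    return count == 0

print(accepts_ab_star("abab"), accepts_anbn("aaabbb"))  # True True
print(accepts_ab_star("aabb"), accepts_anbn("abab"))    # False False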

The field of grammatical inference (that is, learning a language from examples) is the main theme of the papers collected in Chapter 5, which deals with the inference of regular languages and finite-state automata (Manolios and Fanelli, 1994; Tino and Sajda, 1995; Giles et al., 1992; Pollack, 1991; Cleeremans et al., 1989) and with the inference of context-free languages and pushdown automata (Mozer and Das, 1993; Zeng et al., 1994; Giles et al., 1990). The chapter starts with a general introduction to grammatical inference, with particular attention to grammatical inference using neural networks.

