When discrete-time recurrent neural networks (DTRNN) are used for grammatical inference, and in particular to infer a finite-state machine, the task is not complete until a true symbolic representation of the rules defining the language (grammar rules, finite-state machine transitions) is extracted from the dynamics of the DTRNN. Since the dynamics of the DTRNN takes place in a real vector space, all of these methods try to discover a partition of the state space such that the dynamics of the DTRNN may be described in terms of a finite number of states.
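The partition idea can be illustrated with a minimal sketch. The code below assumes a quantization-based scheme in which each coordinate of the network's state vector (taken to lie in [0,1]^n) is split into q equal intervals, and the quantized cells reached from a start state are explored breadth-first to build a transition table. The names (quantize, extract_fsm, step) and the fixed-grid partition are illustrative assumptions, not a method prescribed by the text; actual extraction algorithms differ in how the partition is chosen.

```python
# Hedged sketch of quantization-based FSM extraction from a DTRNN.
# Assumption: states are vectors in [0,1]^n and `step(state, symbol)`
# is the network's next-state function; these names are illustrative.
from collections import deque
import numpy as np

def quantize(state, q):
    """Map a continuous state in [0,1]^n to a discrete cell by
    splitting each coordinate into q equal intervals."""
    return tuple(np.minimum((state * q).astype(int), q - 1))

def extract_fsm(step, x0, input_symbols, q=2, max_states=1000):
    """Breadth-first exploration of the quantized state space,
    recording one transition per (cell, symbol) pair."""
    start = quantize(x0, q)
    reps = {start: x0}          # one continuous representative per cell
    transitions = {}            # (cell, symbol) -> next cell
    queue = deque([start])
    while queue and len(reps) < max_states:
        s = queue.popleft()
        for a in input_symbols:
            nxt = step(reps[s], a)
            t = quantize(nxt, q)
            transitions[(s, a)] = t
            if t not in reps:   # newly discovered discrete state
                reps[t] = nxt
                queue.append(t)
    return reps, transitions
```

For example, a one-dimensional "network" whose state is flipped by symbol 1 and preserved by symbol 0 yields the two-state parity automaton under this extraction.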
Kolen (1994) has strongly criticised these methods on the grounds of their sensitivity to initial conditions and of the behavioral changes in the extracted description that may be induced simply by changing the way the continuous-state dynamics of the DTRNN is observed: his main point is that looking for a finite-state description of the dynamics of a DTRNN may be incorrect precisely because of the continuous-state nature of the DTRNN. However, other authors such as Casey (1996) have proved that, under certain conditions, the behavior of a DTRNN may actually be described as that of a finite-state machine (see section 4.2), and others have shown that a DTRNN may actually be induced to behave as an FSM by suitably programming its weights (Omlin and Giles, 1996a; Carrasco et al., 2000; see also Omlin and Giles, 1996b). These results do not completely invalidate the criticisms by Kolen (1994), which may still be applicable because most extraction methods assume, but cannot test, that the DTRNN is actually behaving as an FSM, and then proceed to extract it.
Three main types of automaton extraction algorithms will be discussed in this document: state-space partition methods, clustering methods, and methods based on Kohonen's self-organizing maps.