<p><b>Abstract</b>—This article explores the use of Simple Synchrony Networks (SSNs) for learning to parse English sentences drawn from a corpus of naturally occurring text. Parsing a natural language sentence requires taking a sequence of words and outputting a hierarchical structure representing how those words fit together to form constituents. Feed-forward and Simple Recurrent Networks have had great difficulty with this task, in part because the number of relationships required to specify a structure exceeds the number of unit outputs they have available. SSNs have the representational power to output the necessary <tmath>$O(n^2)$</tmath> possible structural relationships because they extend the <tmath>$O(n)$</tmath> incremental outputs of Simple Recurrent Networks with the <tmath>$O(n)$</tmath> entity outputs provided by Temporal Synchrony Variable Binding. This article presents an incremental representation of constituent structures that allows SSNs to make effective use of both these dimensions. Experiments on learning to parse naturally occurring text show that this output format supports both effective representation and effective generalization in SSNs. To underscore the importance of this generalization ability, the article also proposes a short-term memory mechanism that retains only a bounded number of constituents during parsing. This mechanism improves the <tmath>$O(n^2)$</tmath> running time of the basic SSN architecture to linear time, but experiments confirm that the generalization ability of SSNs is maintained.</p>