ERIC Number: ED286930
Record Type: Non-Journal
Publication Date: 1986-Jun
Pages: 54
Abstractor: N/A
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Experiments on Learning by Back Propagation.
Plaut, David C.; And Others
This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate formant-like patterns in the presence of noise. The speed of learning strongly depends on the shape of the surface formed by the error measure in "weight space." Examples show the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. Some preliminary scaling results show how the magnitude of the optimal weight changes depends on the fan-in of the units. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, the relationship between these iterative networks and the "analog" networks described by Hopfield and Tank is discussed. (Author/LPG)
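The abstract's core ideas, back-propagating error gradients through a layered network of sigmoid units and using an acceleration (momentum) method to speed descent in weight space, can be sketched as follows. This is a minimal illustration, not the authors' code; the network size, learning rate, momentum coefficient, and the XOR task are all illustrative assumptions.

```python
import math
import random

random.seed(0)
NH = 4               # hidden units (illustrative choice)
LR, MOM = 0.3, 0.9   # learning rate and momentum ("acceleration") coefficients

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2-NH-1 network with small random initial weights
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(NH)]
b_h = [0.0] * NH
w_o = [random.uniform(-1, 1) for _ in range(NH)]
b_o = 0.0

# velocity terms: a decaying average of past weight changes (momentum)
v_wh = [[0.0, 0.0] for _ in range(NH)]
v_bh = [0.0] * NH
v_wo = [0.0] * NH
v_bo = 0.0

xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w_h[j][0]*x[0] + w_h[j][1]*x[1] + b_h[j]) for j in range(NH)]
    y = sigmoid(sum(w_o[j]*h[j] for j in range(NH)) + b_o)
    return h, y

def total_error():
    # squared-error measure whose surface in "weight space" we descend
    return sum((forward(x)[1] - t) ** 2 for x, t in xor)

initial_error = total_error()

for _ in range(5000):
    for x, t in xor:
        h, y = forward(x)
        # back-propagate the error gradient through both layers
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(NH)]
        # momentum update: v <- MOM*v - LR*grad, then w <- w + v
        v_bo = MOM * v_bo - LR * d_y
        b_o += v_bo
        for j in range(NH):
            v_wo[j] = MOM * v_wo[j] - LR * d_y * h[j]
            w_o[j] += v_wo[j]
            v_bh[j] = MOM * v_bh[j] - LR * d_h[j]
            b_h[j] += v_bh[j]
            for i in range(2):
                v_wh[j][i] = MOM * v_wh[j][i] - LR * d_h[j] * x[i]
                w_h[j][i] += v_wh[j][i]

final_error = total_error()
print(final_error < initial_error)  # error should drop substantially
```

The velocity terms accumulate consistent gradient directions while damping oscillation across steep ravines of the error surface, which is the effect the abstract's acceleration method exploits.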
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Office of Naval Research, Washington, DC. Personnel and Training Branch.
Authoring Institution: Carnegie-Mellon Univ., Pittsburgh, PA. Dept. of Computer Science.
Grant or Contract Numbers: N/A
Author Affiliations: N/A