Interactions of neural networks: models for distraction and concentration.
AUTHOR(S)
Wang, L P
ABSTRACT
We present a model of neural group interactions: projections from one neural network (network B) of McCulloch-Pitts neurons connected via a Hebbian rule to another network (network A) of the same structure. We first consider the case in which the projecting network B holds a pattern different from the initial attracting state of network A. A critical projection strength λ_c is found; for each λ < λ_c there exists a corresponding noise threshold σ_λ. For λ < λ_c and noise level σ < σ_λ, there are two possible retrievals, with different probabilities: the initial attracting state of network A and the projecting pattern. If λ < λ_c and σ > σ_λ, the stable states of network A disappear. For λ > λ_c, network A is pulled out of its initial basin of attraction and into that of the projecting pattern. This analysis provides a model for distraction; second-order interactions reduce the distraction. When the projecting network B holds the same pattern as the initial attracting state of network A, the projection acts as an external reinforcement, which enables network A to retrieve under highly noisy conditions. Sharp noise thresholds for nonzero retrievals are shown to be eliminated by the projection, and higher-order connectivity improves the retrieval ability of the network. This second case serves as a model of concentration. We discuss the model of distraction and concentration (i) in connection with the common experience of expectation aiding recognition and (ii) in connection with recent T-maze experiments on infant rats; finally, we suggest a refined version of the Bruner-Potter experiment to test our prediction of the disappearance of hysteresis.
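The distraction mechanism described above can be illustrated with a minimal numerical sketch: a Hopfield-style network A with Hebbian couplings, receiving a projection of strength λ from a network B held in a different stored pattern. This is an illustrative toy model, not the paper's mean-field analysis; the network size, number of patterns, and parameter values below are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # neurons in network A (illustrative size)

# Two random +/-1 patterns: xi_A is A's initial attracting state,
# xi_B is the pattern held by the projecting network B.
xi_A = rng.choice([-1, 1], size=N)
xi_B = rng.choice([-1, 1], size=N)

# Hebbian couplings within network A, storing both patterns.
J = (np.outer(xi_A, xi_A) + np.outer(xi_B, xi_B)) / N
np.fill_diagonal(J, 0)

def relax(lam, sigma, steps=30):
    """Iterate A's McCulloch-Pitts dynamics under a projection of
    strength lam from B and Gaussian noise of level sigma."""
    s = xi_A.copy()  # A starts in its own attractor
    for _ in range(steps):
        h = J @ s + lam * xi_B + sigma * rng.standard_normal(N)
        s = np.where(h >= 0, 1, -1)
    return s

def overlap(s, xi):
    """Overlap (normalized dot product) between state s and pattern xi."""
    return float(s @ xi) / N

# Weak projection: A remains near its initial attractor xi_A.
weak = relax(lam=0.1, sigma=0.0)
# Strong projection (above the critical strength in this toy setting):
# A is pulled into the basin of the projecting pattern xi_B.
strong = relax(lam=2.0, sigma=0.0)
print(overlap(weak, xi_A), overlap(strong, xi_B))
```

Sweeping `lam` and `sigma` in this sketch reproduces the qualitative picture from the abstract: below a critical projection strength both retrievals are possible depending on noise, while above it the projecting pattern wins.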
ARTICLE ACCESS
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=54693
RELATED DOCUMENTS
- Interactions between Depression and Facilitation within Neural Networks: Updating the Dual-Process Theory of Plasticity
- Oscillations and chaos in neural networks: an exactly solvable model.
- Noise in neural networks: thresholds, hysteresis, and neuromodulation of signal-to-noise.
- Noise injection for training artificial neural networks: A comparison with weight decay and early stopping
- Descending interactions with spinal cord networks: a time to build?