Are binary synapses superior to graded weight representations in stochastic attractor networks?
Synaptic plasticity is an underlying mechanism of learning and memory in neural systems, but it remains controversial whether synaptic efficacy is modulated in a graded or a binary manner. It has been argued that binary synaptic weights would be less susceptible to noise than graded weights, which has impelled some theoretical neuroscientists to shift from graded to binary weights in their models. We compare the retrieval performance of models using binary and graded weight representations through numerical simulations of stochastic attractor networks. We also investigate stochastic attractor models with multiple discrete weight levels, and then determine the optimal threshold for dilution of binary weight representations. Our results show that a binary weight representation is not less susceptible to noise than a graded one in stochastic attractor models, and that as the number of discrete weight states increases, the load capacity rapidly approaches that achieved with graded weights. The optimal threshold for dilution of binary weight representations under stochastic conditions occurs when approximately 50% of the smallest weights are set to zero.
ARTICLE ACCESS: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2727164
- How biologically relevant are interaction-based modules in protein networks?
- Positive feedback in eukaryotic gene networks: cell differentiation by graded to binary response conversion
- A review of particle transport theory in a binary stochastic medium
- Cell signaling can direct either binary or graded transcriptional responses
- Outlier = gross error? Do only gross errors cause outliers in geodetic networks? Addressing these and other questions