### Abstract

We consider certain simple neural network models of associative memory with N binary neurons and symmetric lth order synaptic connections, in which m randomly chosen N-bit patterns are to be stored and retrieved with a small fraction δ of bit errors allowed. Rigorous proofs of the following are presented both for l = 2 and l ≥ 3: (1) m can grow as fast as αN^{l-1}; (2) α can be as large as B_{l}/ln(1/δ) as δ → 0; (3) retrieved memories overlapping with several initial patterns persist for (very) small α. These phenomena were previously supported only by numerical simulations or nonrigorous calculations. The quantity (l!) · α represents the number of stored bits per distinct synapse. The constant (l!) · B_{l} is determined explicitly; it decreases monotonically with l and tends to zero exponentially fast as l → ∞. We obtain rigorous lower bounds for the threshold value (l!) · α_{c} (the maximum possible value of (l!) · α with δ unconstrained): 0.11 for l = 2 (compared with the actual value between 0.28 and 0.30, as estimated by Hopfield and by Amit, Gutfreund, and Sompolinsky), 0.22 for l = 3, and 0.16 for l = 4; as l → ∞, the bound tends to zero as fast as (l!) · B_{l}.
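The capacity scaling described above can be illustrated numerically for the standard l = 2 (Hopfield) case. The sketch below (not part of the paper; the Hebbian storage rule, the single synchronous update, and the parameter values are illustrative choices) stores m = αN random ±1 patterns and measures the fraction of bits that flip when each stored pattern is updated once; the error fraction is tiny for small α and grows markedly as α increases.

```python
import numpy as np

def hopfield_error_fraction(N, m, rng):
    """Store m random ±1 patterns of length N with the l = 2 Hebbian rule,
    apply one synchronous update to each stored pattern, and return the
    average fraction of flipped bits."""
    patterns = rng.choice([-1, 1], size=(m, N))
    # Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, zero diagonal.
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    # One parallel update of every stored pattern (J is symmetric).
    updated = np.sign(patterns @ J)
    updated[updated == 0] = 1  # break ties deterministically
    return np.mean(updated != patterns)

rng = np.random.default_rng(0)
N = 500
low = hopfield_error_fraction(N, m=int(0.05 * N), rng=rng)   # alpha = 0.05
high = hopfield_error_fraction(N, m=int(0.5 * N), rng=rng)   # alpha = 0.5
print(low, high)
```

For α well below the threshold the one-step error fraction is nearly zero, while for α = 0.5 a visible fraction of bits flip, consistent with the existence of a capacity threshold α_{c}.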

Original language | English (US)
---|---
Pages (from-to) | 223-238
Number of pages | 16
Journal | Neural Networks
Volume | 1
Issue number | 3
DOIs | https://doi.org/10.1016/0893-6080(88)90028-7
State | Published - 1988


### ASJC Scopus subject areas

- Artificial Intelligence
- Neuroscience (all)

### Cite this

**Memory capacity in neural network models: Rigorous lower bounds.** / Newman, Charles.

Research output: Contribution to journal › Article

*Neural Networks*, vol. 1, no. 3, pp. 223-238. https://doi.org/10.1016/0893-6080(88)90028-7


TY - JOUR

T1 - Memory capacity in neural network models

T2 - Rigorous lower bounds

AU - Newman, Charles

PY - 1988

Y1 - 1988

UR - http://www.scopus.com/inward/record.url?scp=0023869164&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0023869164&partnerID=8YFLogxK

U2 - 10.1016/0893-6080(88)90028-7

DO - 10.1016/0893-6080(88)90028-7

M3 - Article

VL - 1

SP - 223

EP - 238

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

IS - 3

ER -