From the time you read the Wi-Fi password from the cafe menu board to the time you can go back to your laptop and type it in, you have to remember it. If you’re wondering how your brain does this, you’re asking a question about working memory, which researchers have been trying to explain for decades. Now neuroscientists at MIT have published an important new insight into how it works.
In a study published in PLOS Computational Biology, scientists at the Picower Institute for Learning and Memory compared measurements of brain cell activity in an animal performing a working memory task with the output of various computer models representing two theories of the fundamental mechanism by which the brain retains information. The results strongly support the newer idea that networks of neurons store information by making transient changes to the pattern of their connections, or synapses, and contradict the traditional alternative that memory is maintained by neurons that remain persistently active (like an idling engine).
While both kinds of models allowed information to be held in mind, only the versions that allowed synapses to transiently change their connections ("short-term synaptic plasticity") produced patterns of neural activity that mimicked what is actually observed in real brains at work. Senior author Earl K. Miller acknowledges that the idea that brain cells maintain memories by staying constantly "on" may be simpler, but it doesn't represent what nature is doing, and it can't produce the intermittent neural activity, or the flexibility of complex thought, that short-term synaptic plasticity supports.
"You need these mechanisms to give working memory activity the freedom it needs to be flexible," said Miller, a professor of neuroscience in MIT's Department of Brain and Cognitive Sciences (BCS). Persistent activity alone, he said, would be as simple as a light switch, "but working memory is as complex and dynamic as our thoughts."
Co-lead author Leo Kozachkov, who received his Ph.D. at MIT in November for theoretical modeling work that included this study, said matching computer models to real-world data was critical.
"Most people think that working memory 'happens' in neurons: that persistent neural activity gives rise to persistent thoughts. However, this idea has recently come under scrutiny because it doesn't quite line up with the data," said Kozachkov, who was co-supervised by co-senior author Jean-Jacques Slotine, a professor in BCS and mechanical engineering. "Using artificial neural networks with short-term synaptic plasticity, we show that synaptic activity, rather than neural activity, can underlie working memory. An important takeaway from our paper is that these 'plastic' neural network models are more brain-like, in a quantitative sense, and also carry an additional functional benefit in terms of robustness."
Models match nature
Together with co-lead author John Tauber, an MIT graduate student, Kozachkov's goal was not just to determine how working memory information is retained, but to elucidate how nature actually does it. That meant starting with "ground truth" measurements of the electrical "spiking" activity of hundreds of neurons in an animal's prefrontal cortex as it plays a working memory game. In each of many rounds, the animal is shown an image, which then disappears. A second later, it sees two images, including the original, and has to look at the original to earn a small reward. The critical moment is that middle second, known as the "delay period," during which the image must be kept in mind before the test.
The team consistently observed what Miller's lab had observed many times before: the neurons spike profusely when the original image is seen, spike only intermittently during the delay, and then spike again when the image must be recalled during the test (these dynamics are governed by an interplay of beta- and gamma-frequency brain rhythms). In other words, spiking is strong when information must initially be stored and when it must be recalled, but during the period when it merely has to be maintained, spiking is not continuous.
In addition, the team trained a software "decoder" to read out working memory information from measurements of the spiking activity. The decoder was very accurate when spiking was high, but not when it was low, as during the delay period. This suggests that spiking does not represent the information during the delay. But that raises a key question: if spiking doesn't hold the information, what does?
Researchers, including Oxford's Mark Stokes, have proposed that changes in the relative strength, or "weight," of synapses could store the information instead. The MIT team tested this idea by computationally modeling neural networks embodying two versions of each major theory. The machine-learning networks were trained to perform the same working memory task as the real animals and to output neural activity that could likewise be read by a decoder.
The upshot was that the computational networks that used short-term synaptic plasticity to encode information spiked when the actual brain spiked and were quiet when it was quiet. The networks that relied on continuous spiking to maintain memory spiked all the time, including when the natural brain was not spiking. And the decoder results diverged accordingly: in the synaptic plasticity models, decoding accuracy dropped during the delay, but in the sustained-spiking models it remained unrealistically high.
In another layer of analysis, the team created a decoder to read information from synaptic weights. They found that during delays, synapses represented working memory information, whereas spikes did not.
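The idea that a memory can sit "activity-silent" in transient synaptic weights can be sketched with a toy Hebbian fast-weights model. This is an illustration of the general concept, not the paper's PS-Hebb model, and every parameter is an assumption chosen for clarity. Activity decays to near zero during the delay, yet the stored pattern can be recovered from the weights by a nonspecific probe input:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
pattern = (rng.random(n) < 0.5).astype(float)  # stimulus to remember

eta, w_decay, r_decay = 0.1, 0.99, 0.2
W = np.zeros((n, n))   # fast synaptic weights (the memory substrate)
r = np.zeros(n)        # neural activity

# Encoding: the stimulus drives activity, and a Hebbian rule
# imprints the activity pattern onto the fast weights
for _ in range(10):
    r = pattern
    W = w_decay * W + eta * np.outer(r, r)

# Delay: no input; activity dies away while the weights decay only slowly
for _ in range(20):
    r = r_decay * r
    W = w_decay * W

activity_during_delay = r.max()   # essentially zero: "activity-silent"

# Retrieval: a nonspecific probe drives all neurons equally; the
# fast weights shape the response, reinstating the stored pattern
probe = np.ones(n)
response = W @ probe
recalled = (response > response.mean()).astype(float)
match = (recalled == pattern).mean()  # fraction of neurons recovered
```

The point of the sketch is the dissociation the study reports: during the delay a decoder looking at activity (`r`) sees nothing, while one looking at the synaptic weights (`W`) can still recover the remembered item.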
Of the two versions of the model with short-term synaptic plasticity, the most realistic one, called “PS-Hebb,” has a negative feedback loop that keeps the neural network stable and robust, Kozachkov said.
The role of working memory
In addition to better matching nature, the synaptic plasticity models confer other benefits that likely matter to real brains. One is robustness: the plasticity models retained information in their synaptic weights even after as many as half of the artificial neurons had been "ablated." The persistent-activity models broke down after losing just 10-20 percent of their synapses. And, Miller adds, spiking occasionally consumes far less energy than spiking constantly.
Also, Miller said, brief rather than sustained spiking leaves room in time to store more than one item in memory. Research has shown that people can hold up to four different things in working memory. Miller's lab plans new experiments to determine whether models with intermittent spiking and synaptic-weight-based storage also match real neural data better when animals must hold several things in mind rather than a single image.
In addition to Miller, Kozachkov, Tauber and Slotine, other authors on the paper are Mikael Lundqvist and Scott Brincat.
The Office of Naval Research, JPB Foundation, ERC, and a VR Startup Grant funded this research.