How a Memory Quirk of the Human Brain Can Galvanize AI

Even as toddlers we’re good at inferences. Take a two-year-old who first learns to recognize a dog and a cat at home, then a horse and a sheep at a petting zoo. The kid will then also be able to tell a dog from a sheep, even if he can’t yet articulate their differences.

This ability comes so naturally to us that it belies the complexity of the brain’s data-crunching processes under the hood. To make the logical leap, the child first needs to remember the distinctions between his family pets. When confronted with new categories—farm animals—his neural circuits call upon those past remembrances and seamlessly integrate them with new learnings to update his mental model of the world.

Not so simple, eh?

It’s perhaps not surprising that even state-of-the-art machine learning algorithms struggle with this type of continuous learning. Part of the reason is how these algorithms are set up and trained. An artificial neural network learns by adjusting synaptic weights—how strongly one artificial neuron connects to another—which in turn embeds a sort of “memory” of its learnings into those weights. Because retraining the network on another task disrupts those weights, the AI is essentially forced to “forget” its previous knowledge as a prerequisite for learning something new. Imagine gluing together a bridge out of toothpicks, only to rip apart the glue to build a skyscraper from the same material. The hardware is the same, but the memory of the bridge is lost.
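To see the failure in miniature, here’s a toy sketch in PyTorch (all data synthetic and all names invented for illustration, not code from any study): a small network masters one classification task, is retrained on a second, and promptly loses the first.

```python
# Toy demonstration of catastrophic forgetting with synthetic data.
# Task A labels points by the sign of x0; task B by the sign of x1.
# Retraining on task B alone overwrites the weights that solved task A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(direction):
    """A linearly separable toy task along the given direction."""
    x = torch.randn(400, 2)
    y = (x @ direction > 0).long()
    return x, y

def train(model, x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(torch.tensor([1.0, 0.0]))  # "task A"
xb, yb = make_task(torch.tensor([0.0, 1.0]))  # "task B"

train(model, xa, ya)
print("task A, after learning A:", accuracy(model, xa, ya))  # ~1.0

train(model, xb, yb)  # retrain on B only -- no access to task A data
print("task A, after learning B:", accuracy(model, xa, ya))  # near chance
print("task B, after learning B:", accuracy(model, xb, yb))  # ~1.0
```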

This Achilles’ heel is so detrimental it’s dubbed “catastrophic forgetting.” An algorithm that isn’t capable of retaining its previous memories is severely kneecapped in its ability to infer or generalize. It’s hardly what we consider intelligent.

But here’s the thing: if the human brain can do it, nature has already figured out a solution. Why not try it on AI?

A recent study by researchers at the University of Massachusetts Amherst and the Baylor College of Medicine did just that. Drawing inspiration from the mechanics of human memory, the team turbo-charged their algorithm with a powerful capability called “memory replay”—a sort of “rehearsal” of experiences in the brain that cements new learnings into long-lived memories.

What came as a surprise to the authors wasn’t that adding replay to an algorithm boosted its ability to retain its previous training. Rather, it was that replay didn’t require exact memories to be stored and revisited. A bastardized version of the memory, generated by the network itself based on past experiences, was sufficient to give the algorithm a hefty memory boost.

Playing With Replay

In the 1990s, while listening in on the brain’s electrical chatter in sleeping mice, memory researchers stumbled across a perplexing finding. The hippocampus, a brain region critical for spatial navigation and memory, sparked with ripples of electrical waves during sleep. The ripples weren’t random—rather, they recapitulated, in time and space, the same neural activity the team had observed earlier, while the mice were learning to navigate a new maze.

Somehow, the brain was revisiting the electrical pattern encoding the mice’s new experiences during sleep—but compressed and distorted, as if rewinding and playing a fraying tape in fast-forward.

Scientists subsequently found that memory replay is fundamental to strengthening memories in mice and men. In a way, replay provides additional simulated learning trials to practice what we’ve learned and stabilize it into a library of memories that new experiences can build on rather than destroy.

It’s perhaps not surprising, then, that deep neural networks equipped with replay stabilize their memories—with the caveat that the algorithm needs to perfectly “remember” all previous experiences as input for replay. The problem with this approach, the team said, is that it’s not scalable. The need to store every prior experience rapidly drives data storage demands to untenable levels.
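In code, that exact-replay scheme boils down to hoarding raw examples and mixing them into every batch. A hypothetical sketch, continuing the toy PyTorch setup above:

```python
# Exact replay: keep a buffer of raw past examples and interleave them
# with each new task's data. It works, but note the final lines: the
# buffer (and the storage bill) grows with every task learned.
import torch
import torch.nn as nn

buffer = []  # list of (x, y) tensors from earlier tasks

def train_with_exact_replay(model, x_new, y_new, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        # Mix stored memories with the new task's data in one batch.
        x = torch.cat([x_new] + [x for x, _ in buffer])
        y = torch.cat([y_new] + [y for _, y in buffer])
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    buffer.append((x_new, y_new))  # storage demand grows per task
```

Rerun the earlier toy experiment with `train_with_exact_replay` in place of `train` and task A survives, at the cost of keeping its raw data around forever.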

But what if, just like the brain, we don’t actually need a perfect, total recall of memories during replay?

Memory Remix

The team’s lightbulb moment came when digging into the weeds of replay: rather than playing a perfectly accurate video tape of memories, perhaps the brain is “reimagining,” or generating its past experiences for playback. Here, replay doesn’t rely on faithfully stored memories. Instead, it’s more similar to our actual experience of memory: something reconstructed from reality, but tainted by our previous history and worldviews.

To test out their idea, the team coded an algorithm that reflects “brain-inspired replay.” It doesn’t store exact memories to be used for playback. Instead, it uses what the network has already learned to automatically reconstruct memories for replay.

As an analogy to the brain, say you’re learning a visual task, such as recognizing different animals. Your main processor is the cortex, which begins to parse out patterns that correspond to a dog or cat or sheep. In previous “replay” algorithms, these data are then transferred to the hippocampus, which stores the memory and uses it for playback. The electrical activity from the hippocampus washes across the cortex, strengthening what you just learned.

The new setup melds the two components—the artificial cortex and hippocampus—together in a more biologically plausible way. Here, the hippocampus uses data from the cortex, the processor, to basically “dream up” or “imagine” its replay patterns. These patterns lack pixel-by-pixel fidelity, much as our own memories aren’t photographic in nature. However, the patterns capture something more abstract about the memory—what makes a sheep a sheep versus a dog—even when the two animals are learned in separate sessions.
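For the technically curious, here is a heavily simplified sketch of a generative-replay loop of this flavor, again in PyTorch. Everything here (the `Generator` class, its interface, the training loop) is invented for illustration and is not the authors’ architecture; as described above, the paper goes further and melds the generator into the classifier itself.

```python
# Simplified generative replay: a small generator (the "hippocampus")
# learns to reconstruct inputs; when a new task arrives, it "imagines"
# samples of past data, which the previous classifier then labels.
import copy
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy autoencoder standing in for the generative model. A faithful
    version would be a proper VAE (with a KL term) so that latent draws
    z ~ N(0, 1) decode into plausible inputs."""
    def __init__(self, dim=2, latent=8):
        super().__init__()
        self.latent = latent
        self.enc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, dim))

    def reconstruction_loss(self, x):
        return ((self.dec(self.enc(x)) - x) ** 2).mean()

def train_task(classifier, generator, x_new, y_new, steps=200):
    # Frozen copies of the previous networks act as the "memory";
    # no raw data from earlier tasks is stored anywhere.
    old_clf = copy.deepcopy(classifier).eval()
    old_gen = copy.deepcopy(generator).eval()
    opt = torch.optim.Adam(list(classifier.parameters()) +
                           list(generator.parameters()), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        with torch.no_grad():
            # "Imagine" past inputs -- no pixel-perfect recall required.
            z = torch.randn(len(x_new), old_gen.latent)
            x_replay = old_gen.dec(z)
            # Hard labels keep the sketch short; the literature typically
            # replays the old model's soft predictions instead.
            y_replay = old_clf(x_replay).argmax(dim=1)
        x = torch.cat([x_new, x_replay])
        y = torch.cat([y_new, y_replay])
        opt.zero_grad()
        loss = ce(classifier(x), y) + generator.reconstruction_loss(x)
        loss.backward()
        opt.step()
    # (On the very first task you would skip the replay terms.)
```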

When pitted against other deep learning algorithms for continuous learning, the newbie trounced its competitors at preventing catastrophic forgetting. In a visual memory challenge spanning 100 tasks, the algorithm kept its previous memories while deciphering new images. Impressively, the harder and more “real world” the problem, the more the algorithm outperformed its peers.

“If our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” said study author Dr. Gido van de Ven.

Meeting of Minds

These results are hardly the first to tap into the brain’s memory prowess.

Previously, AI researchers have also drawn on a separate memory process called metaplasticity, which alters how resistant a synaptic connection is to change. Because memories are stored across a neural network’s connections, the more malleable those connections are, the more easily a memory can be altered or forgotten. Google DeepMind, for example, has used an artificial version of this brain quirk (a technique called elastic weight consolidation) to help “protect” the artificial synapses that are key to preserving a previous memory while the next one is encoded.
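The gist of elastic weight consolidation (EWC), in a rough hypothetical sketch (the importance estimate below is a cruder stand-in for the Fisher information used in the original DeepMind paper):

```python
# Metaplasticity in miniature, EWC-style: estimate how much each weight
# mattered for the old task, then penalize moving important weights
# while the new task is learned.
import torch
import torch.nn as nn

def importance(model, x_old, y_old):
    """Per-parameter importance via squared gradients of the old task's
    loss -- a rough diagonal-Fisher approximation."""
    model.zero_grad()
    nn.CrossEntropyLoss()(model(x_old), y_old).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, old_params, imp, strength=1000.0):
    """Quadratic penalty anchoring important weights near old values."""
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (imp[n] * (p - old_params[n]) ** 2).sum()
    return strength * loss

# Usage: after finishing task A,
#   params_a = {n: p.detach().clone() for n, p in model.named_parameters()}
#   imp_a = importance(model, x_a, y_a)
# then, while training task B, add the anchor to the loss:
#   loss = nn.CrossEntropyLoss()(model(x_b), y_b) \
#          + ewc_penalty(model, params_a, imp_a)
```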

That’s not to say one approach bests the other. What’s likely, the authors said, is that these strategies go hand in hand to protect the brain’s memories. An algorithm that incorporates both may be even more flexible and resilient to catastrophic forgetting, operating like a toddler trying to untangle a complex world one memory at a time.

Clearly the brain has a lot more inspiration for AI up its sleeve. Although the new algorithm is closer to biological plausibility, it can’t yet capture a fundamental component of our own memories—the experience of time—into its replay mechanism. On the other hand, machine learning also has more to give back to neuroscience. The results here could help unravel the neural processes behind replay that explain why some of our memories degrade, whereas others last a lifetime.

“Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions,” said van de Ven.

This article was originally written by Shelly Xuelai Fan and appeared on Singularity Hub on September 28, 2020. You can read the full article on Singularity Hub.
