It’s always fascinating to me when, despite our best technological efforts to “optimize” how things are done, the way humans already do something turns out to be the optimal model.
Case in point: this article from one of my favorite Swipefile sources, Science Daily:
Artificial neural networks learn better when they spend time not learning at all
It’s the summary of a new study by sleep researchers at the University of California San Diego. The researchers wanted to understand whether human sleep might help solve a problem with artificial neural networks: a phenomenon known as “catastrophic forgetting.” What is that? Read on!
- The article opens with the fact that “humans need 7 to 13 hours of sleep per 24 hours.” During that time, not much happens in our bodies. But in our brains, the story is very different.
- The study’s lead author, Maxim Bazhenov, Ph.D., explains that the brain is busy “repeating what we have learned during the day,” so that it can “reorganize memories and present them in the most efficient way.”
- All of that is part of the process of building what’s known as “rational memory,” which is “the ability to remember arbitrary or indirect associations between objects, people, or events.” Building it up, through sleep, helps us protect against “forgetting old memories.”
- That’s pretty important for humans, I’d say, but the ability to remember associations is also pretty important for the systems we’ve built to mimic the human brain, namely artificial neural networks.
- We then get a slightly deeper dive into artificial neural networks, and how they’ve helped us “improve numerous technologies and systems, from basic science and medicine to finance and social media.” In fact, “in some ways, they have achieved superhuman performance, such as computational speed.”
- Except there’s a problem. The whole “catastrophic forgetting” thing, which is [useful concept alert!] what happens when “new information overwrites previous information.” That happens when artificial neural networks learn sequentially – one thing right after another.
- We humans don’t (typically!) have that problem because, as the lead researcher notes, “the human brain learns continuously and incorporates new data into existing knowledge.” Not only that, “it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”
- So, what do you do when two things do the same thing (learn), but one of them (in this case, artificial neural networks) has a problem the other doesn’t (catastrophic forgetting)? You see if what prevents the problem in one (sleep!) can solve the problem in the other.
- Quick answer: it does.
- How did they find out? Step one: they set up the artificial neural network to learn more like a human brain: “instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points,” something known as a “spiking neural network.”
- The result: “when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated,” because these “sleep” periods allowed the networks to “replay old memories without explicitly using old training data.”
- The replaying of old memories is important in humans because “memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons.” Our brains build that connection as we learn new information and strengthen that connection as we sleep, when “the spiking patterns learned during our awake state are repeated spontaneously.” (A process called “reactivation or replay,” for those playing at home.)
- Generally, the newer information is, the more changeable or “plastic” the connections are, even in sleep. That’s why sleep in humans can “further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”
- The same thing appeared to happen with the artificial neural networks, enabling them to learn continuously, without overwriting old information. Instead, the networks could incorporate the new information into what they had learned previously.
- The article wraps up with a window into the team’s next line of research: how to “develop optimal strategies to apply stimulation during sleep, such as auditory tones” to see if it’s possible to enhance sleep rhythms and improve learning.
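If the “discrete events (spikes) at certain time points” idea feels abstract, here’s a minimal sketch of the classic leaky integrate-and-fire neuron, the simplest building block of spiking networks. This is my own generic illustration, not the specific model from the study; all the parameter names and values (`tau`, `v_thresh`, etc.) are illustrative assumptions:

```python
import numpy as np

# Leaky integrate-and-fire neuron (illustrative, not the study's model):
# the membrane potential v leaks toward its resting value, accumulates
# input current, and emits a discrete spike whenever it crosses threshold.
def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []                      # time steps at which the neuron fired
    for t, i_in in enumerate(current):
        # leak toward rest, plus input drive
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:            # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset              # reset the membrane after the spike
    return spikes

# A steady input current produces a train of discrete spikes;
# no input produces no spikes at all.
spikes = simulate_lif([0.08] * 200)
print(spikes)
```

The point of the exercise: unlike a standard artificial neuron that outputs a continuous value on every pass, this one communicates only at the discrete moments it fires, which is what lets “sleep” replay look like spontaneous spiking activity.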
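To make catastrophic forgetting concrete, here’s a toy NumPy experiment of my own (not the study’s code): a tiny network learns task A, then task B, and its performance on A collapses; interleaving “replay” of task A during task B training prevents the collapse. Note one simplification: the study’s sleep phases replayed old memories *without* the old training data, whereas this sketch uses the simpler trick of explicitly rehearsing stored task-A examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer MLP regressor, trained by full-batch gradient descent.
class MLP:
    def __init__(self, hidden=32, lr=0.05):
        self.W1 = rng.normal(0, 0.5, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def step(self, x, y):
        err = self.forward(x) - y            # gradient of 0.5 * MSE
        n = len(x)
        gW2 = self.h.T @ err / n
        gb2 = err.mean(0)
        dh = (err @ self.W2.T) * (1 - self.h ** 2)   # backprop through tanh
        gW1 = x.T @ dh / n
        gb1 = dh.mean(0)
        for p, g in ((self.W1, gW1), (self.b1, gb1), (self.W2, gW2), (self.b2, gb2)):
            p -= self.lr * g

def mse(model, x, y):
    return float(np.mean((model.forward(x) - y) ** 2))

# Task A: fit sin(x) on [-pi, 0].  Task B: fit sin(x) on [0, pi].
xa = rng.uniform(-np.pi, 0, (200, 1)); ya = np.sin(xa)
xb = rng.uniform(0, np.pi, (200, 1));  yb = np.sin(xb)

# Sequential training (A, then B alone): new information overwrites old.
seq = MLP()
for _ in range(3000): seq.step(xa, ya)
err_a_before = mse(seq, xa, ya)              # good fit on A
for _ in range(3000): seq.step(xb, yb)
err_a_after = mse(seq, xa, ya)               # A is largely forgotten

# Interleaved "replay": rehearse stored task-A samples while learning B.
rep = MLP()
for _ in range(3000): rep.step(xa, ya)
for _ in range(3000):
    rep.step(xb, yb)
    rep.step(xa, ya)                         # replay old memories
err_a_replay = mse(rep, xa, ya)              # A is preserved

print(err_a_before, err_a_after, err_a_replay)
```

Running it shows the error on task A jumping after sequential training on task B, and staying low when the old task is interleaved, which is the forgetting-versus-replay contrast the researchers are exploiting.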
How you could use it…
So, yeah, you could use this to justify to your boss, or even just yourself, why naps are necessary during the workday. That would be a pretty literal interpretation of the article.
But I think you could also use it in a more figurative way: to reinforce the benefits of pausing, rest, and distance from a task.
I also think having a name for what can happen when we don’t take that break—catastrophic forgetting!—is super useful, either literally or figuratively.
What ideas does the article activate for you? Email me and let me know!