Imagine a living room. Not yours or your friend’s or one you saw in a home makeover show, but one purely from your imagination—perhaps your ideal living room. You should have no trouble doing it: We take this kind of imagination for granted. Rarely do we find ourselves wondering how the mind chooses what objects to put into these novel scenes and which ones to exclude. But it’s worth reflecting on, perhaps especially for creative types, because our visual imagination appears to be constrained by regularities in visual memories. Diversifying what you see may mean enriching what you can imagine.
In a recent study, Irish neuroscientist Eleanor Maguire, of University College London, had people imagine novel scenes and compared this to people imagining single objects against a white background. She found that different parts of the brain were implicated when imagining a rich scene compared to imagining a single object. One is the hippocampus, which, according to scene construction theory, is important for both the consolidation of long-term memory and imagination. Spatial navigation and planning both rely on the hippocampus and depend on the ability to create coherent spatial scenes. One hypothesis of the theory is that the closer you perceive things to be to each other (a couch and a coffee table, for instance), the more likely you are to later retrieve them together and place them in an imagined scene. The theory also suggests that, as you create a scene, your brain starts a feedback loop between the hippocampus and the visual cortex.
The other is the ventromedial prefrontal cortex (vmPFC), which plays a role in, among other things, deliberation and self-control. Maguire found that the vmPFC was quite active while subjects imagined scenes, but less so while they imagined objects. She also found that it was activated before the hippocampus. This suggests that information flows from the vmPFC to the hippocampus, rather than the other way around. It looks as though the hippocampus takes instructions from the vmPFC for how a scene should look, and the same thing happens when you recall a memory. As Maguire and her colleagues concluded, episodic memory and scene imagination “share fundamental neural dynamics and the process of constructing vivid, spatially coherent, contextually appropriate scene imagery is strongly modulated by vmPFC.”
In my laboratory, we try to model this kind of imagination in software. My graduate students and I built an imagination engine to imagine scenes the way people do. We would give it a prompt word, like “mouse,” and it would find objects in its database, or rather the words for them, that tend to appear in photographs with mice. The engine would then put five things associated with a mouse into an imagined scene. We quickly encountered a problem that humans easily avoid, though: Given the word “mouse,” the engine would generate both a cat and a computer keyboard. People don’t do this; they pick objects associated with one meaning of the word rather than mixing and matching objects from both.
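To make the retrieval step concrete, here is a minimal sketch of how such an engine might work, assuming a toy table of co-occurrence counts; the numbers, the object names, and the helper function are invented for illustration and are not our engine’s actual code or data.

```python
# Toy sketch of the retrieval step: rank objects by how often they appear
# in photographs alongside the prompt word, then keep the top five.
# The counts below are invented for illustration.

# Number of photographs (in a hypothetical tagged corpus) containing both
# the prompt word "mouse" and each candidate object.
cooccurrence_with_mouse = {
    "cat": 140,
    "computer keyboard": 130,
    "monitor": 120,
    "cheese": 95,
    "mousetrap": 80,
    "couch": 10,
    "birthday cake": 2,
}

def imagine_scene(counts, n_objects=5):
    """Return the n objects that most often co-occur with the prompt word."""
    ranked = sorted(counts, key=counts.get, reverse=True)
    return ranked[:n_objects]

print(imagine_scene(cooccurrence_with_mouse))
# -> ['cat', 'computer keyboard', 'monitor', 'cheese', 'mousetrap']
# The scene mixes both senses of "mouse": exactly the problem described above.
```

Picking objects purely by their association with the prompt word, a system like this has no way of noticing that a cat and a computer keyboard belong to different senses of “mouse.”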
My graduate student Michael Vertolli sought to solve this problem. In a 2017 study, he used the concept of co-occurrence, which, in the context of a spatial scene, is similar to proximity. He had our imagination engine look not only at the correlation between the prompt word and a retrieved object, but also at all of the correlations among the retrieved objects themselves. “Computer keyboard” co-occurs with “mouse,” but it doesn’t appear in the same photographs as mousetraps. So one of them would be swapped out for a different word. This process would repeat until all of the objects in the imagined scene correlated with one another above a certain coherence threshold. Different regions of the hippocampus appear to have different functions: the CA3 subregion receives input for memory and feeds it into the CA1 subregion, which scientists suspect detects coherent patterns. This might be where the thresholding mechanism takes place.
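A rough sketch of that swap-and-recheck loop is below, again with invented co-occurrence scores and a deliberately simple coherence rule (drop the worst-fitting object, try the next candidate, repeat); it illustrates the idea rather than reproducing Vertolli’s actual model.

```python
from itertools import combinations

# Toy pairwise co-occurrence scores (0 to 1) among candidate objects.
# Invented numbers: objects tied to the same sense of "mouse" co-occur
# strongly; objects from different senses barely co-occur at all.
PAIRS = {
    ("cat", "cheese"): 0.6,
    ("cat", "mousetrap"): 0.5,
    ("cheese", "mousetrap"): 0.7,
    ("computer keyboard", "monitor"): 0.9,
    ("cat", "computer keyboard"): 0.1,
    ("cat", "monitor"): 0.1,
    ("cheese", "computer keyboard"): 0.1,
    ("cheese", "monitor"): 0.1,
    ("mousetrap", "computer keyboard"): 0.05,
    ("mousetrap", "monitor"): 0.05,
}
SCORE = {frozenset(pair): value for pair, value in PAIRS.items()}

def coherence(a, b):
    return SCORE.get(frozenset((a, b)), 0.0)

def make_coherent(scene, backups, threshold=0.3):
    """Swap out the worst-fitting object until every pair clears the threshold."""
    while any(coherence(a, b) < threshold for a, b in combinations(scene, 2)):
        if not backups:
            break  # no candidates left to try
        # Drop the object that co-occurs least with the rest of the scene...
        worst = min(scene, key=lambda o: sum(coherence(o, x) for x in scene if x != o))
        scene.remove(worst)
        # ...and retrieve the next-best candidate in its place.
        scene.append(backups.pop(0))
    return scene

# A three-object scene for brevity (the real engine used five).
scene = ["cat", "computer keyboard", "cheese"]
backups = ["mousetrap", "monitor"]
print(make_coherent(scene, backups))  # -> ['cat', 'cheese', 'mousetrap']
```

Run on a mixed scene like the one above, the loop settles on a single sense of the word, here the animal one, because swapping out the keyboard removes the only pull toward the computer sense.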
What might seem unintuitive about this is that, when you imagine a visual scene, it doesn’t feel like things are being replaced one by one. Apparently we are not conscious of the candidates our minds reject for placement in a scene; we are only privy to the things that end up there. If you imagine a birthday party, for example, the cake, candles, balloons, and drinks might seem to spring into your mind all at once. But if you look around to see what’s there, you might find that the scene wasn’t quite as fleshed out as you thought. The image in your mind, if you can create it at all, gets filled in only where you cast your mind’s eye. You can even notice that, in other parts of the image, things fade and disappear until you attend to them again.
It’s long been common knowledge that imaginings are re-combinations of bits from memory. But now we’re seeing that the act of recalling something that happened to you looks very much like what happens when you imagine something new.
Jim Davies is a professor at the Institute of Cognitive Science at Carleton University in Ottawa, and author of Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One with the Universe, and co-hosts the “Minding the Brain” podcast. His sister is novelist JD Spero.