Johns Hopkins University cognitive psychologists are the first to link humans' long-term visual memory with how things move. Image credit: Flickr/amy leonard
As Superman flies over the city, people on the ground famously suppose they see a bird, then a plane, and then finally realize it’s a superhero. But they haven’t just spotted the Man of Steel — they’ve experienced the ideal conditions to create a very strong memory of him.
Johns Hopkins University cognitive psychologists are the first to link humans' long-term visual memory with how things move. The key, they found, lies in whether we can visually track an object. When people see Superman, they don't think they're seeing a bird, a plane and a superhero. They know it's just one thing — even though the distance, lighting and angle change how he looks.
People’s memory improves significantly with rich details about how an object’s appearance changes as it moves through space and time, the researchers concluded. The findings, which shed light on long-term memory and could advance machine learning technology, appear in this month’s Journal of Experimental Psychology: General.
“The way I look is only a small part of how you know who I am,” said co-author Jonathan Flombaum, an assistant professor in the Department of Psychological and Brain Sciences. “If you see me move across a room, you’re getting data about how I look from different distances and in different lighting and from different angles. Will this help you recognize me later? No one has ever asked that question. We find that the answer is yes.”
Humans have a remarkable memory for objects, says co-author Mark Schurgin, a graduate student in Flombaum’s Visual Thinking Lab. We recognize things we haven’t seen in decades — like eight-track tapes and subway tokens. We know the faces of neighbors we’ve never even met. And very small children will often point to a toy in a store after seeing it just once on TV.
Though we almost never encounter an object the exact same way twice, we recognize it anyway.
Schurgin and Flombaum wondered if people’s vast ability for recall, a skill machines and computers cannot come close to matching, had something to do with our “core knowledge” of the world, the innate understanding of basic physics that all humans, and many animals, are born with. Specifically, everyone knows something can’t be in two places at once. So if we see one thing moving from place to place, our brain has a chance to see it in varying circumstances — and a chance to form a stronger memory of it.
Likewise, if something is behaving erratically and we can’t be sure we’re seeing just one thing, those memories won’t form.
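The continuity rule described above can be sketched in code. This is purely an illustration, not the researchers' model: a minimal check, with a hypothetical `max_speed` threshold, of whether a sequence of sightings is consistent with a single object that never "teleports" — i.e., is never in two places at once.

```python
# A minimal sketch of spatiotemporal continuity (illustrative only, not
# the study's model): a sequence of sightings counts as one trackable
# object only if it never implies an implausibly fast jump between
# positions -- the "can't be in two places at once" rule.

def is_trackable(positions, times, max_speed=50.0):
    """positions: list of (x, y) points; times: matching timestamps.
    max_speed is a hypothetical plausibility threshold (distance per
    unit time); any implied speed above it breaks the single-object
    interpretation."""
    for i in range(1, len(positions)):
        (x0, y0), (x1, y1) = positions[i - 1], positions[i]
        dt = times[i] - times[i - 1]
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if dt <= 0 or dist / dt > max_speed:
            return False  # implied speed is implausible for one object
    return True

# Smooth left-to-right motion reads as a single object...
print(is_trackable([(0, 0), (10, 0), (20, 0)], [0, 1, 2]))  # True
# ...but "popping" from one side of the screen to the other does not.
print(is_trackable([(0, 0), (500, 0)], [0, 1]))             # False
```

On this toy account, the second sequence is exactly the kind of erratic behavior the article describes: the brain cannot bind the sightings into one object, so the stronger memory never forms.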
“With visual memory, what matters to our brain is that an object is the same,” Flombaum said. “People are more likely to recognize an object if they see it at least twice, moving in the same path.”
The researchers tested the theory in a series of experiments where people were shown very short video clips of moving objects, then given memory tests. Sometimes the objects appeared to move across the screen as a single object would. Other times they moved in ways we wouldn’t expect a single object to move, such as popping out from one side of the screen and then the other.
In every experiment, subjects had significantly better memories — as much as nearly 20 percent better — of trackable objects that moved according to our expectations, the researchers found.
“Your brain has certain automatic rules for how it expects things in the world to behave,” Schurgin said. “It turns out, these rules affect your memory for what you see.”
The researchers expect the findings to help computer scientists build smarter machines that can recognize objects. Learning more about how humans manage it, Flombaum said, will help us build systems that can do the same.
Source: Johns Hopkins University
Mark W. Schurgin, Jonathan I. Flombaum. Exploiting core knowledge for visual object recognition. Journal of Experimental Psychology: General, 2017; 146(3): 362. DOI: 10.1037/xge0000270