Scientists have deciphered visual images from a dog’s brain, offering a first glimpse of how the canine mind reconstructs what it sees. The research, conducted at Emory University, was published in the Journal of Visualized Experiments.
The results suggest that dogs are more attuned to actions in their environment than to who or what is performing the action.
The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched video for a total of 90 minutes over three 30-minute sessions. They then used a machine learning algorithm to analyze the patterns in the neural data.
“We have shown that we can monitor activity in a dog’s brain while it is watching a video and reconstruct, at least to a limited extent, what it is watching,” says Gregory Berns, Emory professor of psychology and corresponding author of the paper. “It’s remarkable that we’re able to do that.”
The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, providing new insights into the nature of perception. Outside of humans, the technique has only been applied to a handful of other species, including some primates.
“Although our work is based on just two dogs, it provides a proof of concept that these methods work in dogs,” says Erin Phillips, first author of the paper, who conducted the work as a research specialist at Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper will help pave the way for other researchers to apply these methods to dogs and other species, so that we can get more data and deeper insights into how the brains of different animals work.”
Phillips, a native of Scotland, came to Emory as a Bobby Jones Scholar through an exchange program between Emory and the University of St Andrews. She is currently a PhD student in Ecology and Evolutionary Biology at Princeton University.
Berns and his colleagues pioneered training techniques to get dogs to walk into an fMRI scanner and remain completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls The Dog Project – a series of experiments exploring the minds of the oldest domesticated species.
Over the years, his lab has published research on how the dog’s brain processes sight, words, smell, and rewards like praise or food.
Meanwhile, machine learning algorithms continued to improve, allowing scientists to decode some human brain-activity patterns. These techniques “read minds” by recognizing, within patterns of brain data, the various objects or actions a person sees while watching a video.
“I started to wonder, ‘Can we use similar techniques on dogs?'” Berns recalls.
The first challenge was to develop video content that a dog might find interesting enough to watch over a long period of time. The Emory research team attached a video recorder to a gimbal and a selfie stick, which allowed them to take steady shots from a dog’s perspective, at about human waist height or slightly below.
They used the device to create a half-hour video of scenes related to most dogs’ lives. Activities included dogs being petted by humans and receiving treats from humans. They also showed scenes with dogs sniffing, playing, eating or walking on a leash. Other scenes showed cars, bicycles, or a scooter driving by on a street; a cat walking in a house; a deer crossing a path; people sitting; people hugging or kissing; people offering the camera a rubber bone or a ball; and people eating.
The video data was segmented by timestamps into different classifiers, including object-based classifiers (e.g. dog, car, human, cat) and action-based classifiers (e.g. sniff, play or eat).
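To make the timestamp-based labeling concrete, here is a minimal sketch, not the authors’ actual code, of how video segments might be tagged with object- and action-based labels and looked up by time. All column names, label values, and segment times below are illustrative assumptions.

```python
# Sketch: tag labeled segments of the stimulus video by timestamp.
import pandas as pd

# Each row marks one labeled stretch of the 30-minute video (times assumed).
segments = pd.DataFrame([
    {"start_s": 0.0,  "end_s": 12.5, "object": "dog",   "action": "sniffing"},
    {"start_s": 12.5, "end_s": 30.0, "object": "human", "action": "petting"},
    {"start_s": 30.0, "end_s": 41.0, "object": "car",   "action": "driving"},
])

def labels_at(t_seconds: float) -> dict:
    """Return the object/action labels active at a given video time."""
    row = segments[(segments.start_s <= t_seconds) & (segments.end_s > t_seconds)]
    if row.empty:
        return {"object": None, "action": None}
    return row.iloc[0][["object", "action"]].to_dict()

# Example: labels for the frame shown 15 seconds into the session.
print(labels_at(15.0))  # {'object': 'human', 'action': 'petting'}
```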
Only two of the dogs trained for experiments in an fMRI had the concentration and temperament to lie perfectly still and watch the 30-minute video without a break, across three sessions totaling 90 minutes. These two “superstar” canines were Daisy, a mixed breed that may be part Boston Terrier, and Bhubo, a mixed breed that may be part Boxer.
“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking the video. “It was amusing because it’s serious science and a lot of time and effort was put into it, but it ended up with these dogs watching videos of other dogs and people being kind of silly.”
Two people also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in an fMRI.
The brain data could be mapped to the video classifiers using timestamps.
A machine learning algorithm, a neural network called Ivis, was applied to the data. A neural network is a machine learning method in which a computer learns patterns from training examples. In this case, the neural network was trained to classify the content of the brain data.
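The decoding step can be illustrated with a simplified sketch that uses scikit-learn’s small feed-forward network as a stand-in for the Ivis neural network described in the paper. The array shapes, random data, label set, and classifier settings are all assumptions for illustration, not the study’s actual pipeline.

```python
# Sketch: train a classifier to map fMRI data to action labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Pretend data: one row of voxel activations per fMRI volume, with an
# action label assigned to each volume via the video timestamps.
n_volumes, n_voxels = 600, 2000
X = rng.normal(size=(n_volumes, n_voxels))                     # brain data
y = rng.choice(["sniffing", "eating", "playing"], n_volumes)   # action labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A small feed-forward neural network trained to classify brain data.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on held-out volumes indicates how well the labels can be decoded.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real (non-random) data, the same held-out accuracy measure is what distinguishes decodable categories, as in the roughly 75% to 88% action-decoding accuracy reported for the dogs, from those that cannot be decoded.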
The results for the two human subjects revealed that the model developed with the neural network showed 99% accuracy in mapping the brain data to both the object- and action-based classifiers.
In the case of decoding video content from the dogs, the model did not work for the object classifiers. However, it was 75% to 88% accurate in deciphering the action classifications for the dogs.
The results point to major differences in how the brains of humans and dogs work.
“We humans are very object-oriented,” says Berns. “There are ten times as many nouns as verbs in the English language because we have a particular obsession with naming objects. Dogs seem to care less about who or what they see and more about the action itself.”
Dogs and humans also have major differences in their visual systems, Berns notes. Dogs only see in blues and yellows, but have a slightly higher density of visual receptors designed to detect movement.
“It makes perfect sense that dogs’ brains are very heavily attuned to action in the first place,” he says. “Animals have to be very concerned about the things that happen in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”
For Phillips, understanding how different animals perceive the world is important for her current field research on how the reintroduction of predators in Mozambique may affect ecosystems. “In the past, there was hardly any overlap between computer science and ecology,” she says. “But machine learning is a growing field that is beginning to find broader applications, including in ecology.”
Additional authors of the article are Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an Emory undergraduate majoring in neuroscience and behavioral biology. Gillette has since graduated and is now in a post-baccalaureate program at the University of North Carolina.
Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human experiments in the study were supported by a grant from the National Eye Institute.