By Linda Conlin, Pro to Pro Managing Editor

Scientists have decoded visual images from a dog’s brain, offering a first look at how the canine mind reconstructs what it sees. The Journal of Visualized Experiments published the research, which was done at Emory University. The results suggest that dogs are more attuned to actions in their environment than to who or what is performing them. That makes sense when you consider that dogs, with a higher density of vision receptors designed to detect motion, are adapted to activities such as hunting and herding; for them, attention to movement is more advantageous than fixating on stationary objects. By contrast, humans are object oriented.

According to the Emory University News article “Decoding Canine Cognition,” the researchers recorded functional MRI (fMRI) neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. (You read that right – the dogs watched videos!) They then used a machine-learning algorithm to analyze the patterns in the neural data. The Emory Canine Cognitive Neuroscience Lab had trained dogs for fMRI experiments, but only two had the temperament to lie perfectly still and watch the videos. The videos showed scenes common to most dogs’ lives, such as sniffing, playing, and eating, as well as vehicles, cats, deer, and people, all without sound (or squirrels?).
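For readers curious about the mechanics, here is a minimal sketch of one piece of this kind of pipeline: tagging each moment of the video with a label so it can later be matched to the brain volume recorded at the same time. The labels, timings, and scan rate below are illustrative assumptions, not values from the study.

```python
# A minimal sketch (not the study's code): tag each fMRI volume with the
# video event on screen when that volume was acquired. Labels, timings,
# and the repetition time (TR) are illustrative assumptions.

# Hypothetical annotations: (start_sec, end_sec, label) for each video event
annotations = [
    (0.0, 12.5, "sniffing"),   # action label
    (12.5, 30.0, "playing"),   # action label
    (30.0, 41.0, "car"),       # object label
    (41.0, 55.0, "person"),    # object label
]

TR = 2.0          # assumed seconds per brain volume
N_VOLUMES = 900   # e.g., one 30-minute session at TR = 2 s

def label_for_volume(vol_index, events, tr=TR):
    """Return the event label on screen when a given volume was acquired."""
    t = vol_index * tr
    for start, end, label in events:
        if start <= t < end:
            return label
    return None  # no annotated event at that moment

labels = [label_for_volume(i, annotations) for i in range(N_VOLUMES)]
```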

Two humans also underwent the same experiment, watching the same videos in three 30-minute sessions while lying in an fMRI scanner. The brain data was then mapped onto object-based and action-based video classifiers using the time stamps, and a machine-learning algorithm was applied to the data. For the two human subjects, the model was 99% accurate in mapping the brain data onto both the object- and action-based classifiers. For the dogs, the model did not work for the object classifiers, but it was 75% to 88% accurate at decoding the action classifications.
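To illustrate the kind of analysis described above, here is a minimal sketch of decoding labels from brain data with an off-the-shelf classifier, using random placeholder data. The data sizes, features, and classifier choice are assumptions for illustration; the study used its own machine-learning pipeline. The cross-validated accuracy printed at the end is the kind of figure comparable to the 99% and 75% to 88% results reported above.

```python
# A minimal decoding sketch, assuming preprocessed fMRI data: predict
# event categories from brain activity with a standard classifier.
# X (voxel activity) and y (event labels) are random placeholders here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_volumes, n_voxels = 900, 5000                # assumed sizes
X = rng.normal(size=(n_volumes, n_voxels))     # stand-in for voxel activity per volume
y = rng.integers(0, 3, size=n_volumes)         # stand-in action labels (e.g., sniff/play/eat)

# Cross-validated decoding accuracy over the time-stamped labels
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean decoding accuracy: {scores.mean():.2%}")
```

With random placeholder data the printed accuracy hovers near chance; real decoding studies compare their results against exactly that chance level, a control this small sketch omits.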

Certainly, only two dogs and two humans constitute a very small study, but researchers suggest that the results open the way for learning about animals’ visual perceptions. This may also contribute to our understanding of the human visual system. “We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” says Gregory Berns, Emory professor of psychology and corresponding author of the paper. “The fact that we are able to do that is remarkable.”