Putting visual information into context

By Nathalie L. Rochefort and Lukas Fischer

The cerebral cortex is the seat of advanced human faculties such as language, long-term planning, and complex thought. Research over the last 60 years has shown that some parts of the cortex have very specific roles, such as vision (visual cortex) or control of our limbs (motor cortex).

While categorizing different parts of the cortex according to their most apparent roles is convenient, it is also an oversimplification. Brain areas are highly interconnected, and studies over the past decade have shown that incoming sensory information is rapidly integrated with other variables associated with a person's or an animal's behavior. Our internal state, such as current goals and motivations, as well as our previous experience with a given stimulus, shapes how sensory information is processed. This can be referred to as the "context" within which a sensory input is perceived. For example, we perceive a small, brightly colored object differently when we see it in a candy store (where we might assume it is a candy) than in the jungle (where it might be a poisonous animal). In our recent article published in Cell Reports, we investigated the factors beyond visual inputs that affect the activity of cells in the visual cortex as animals learned to locate a reward in a virtual reality environment.

Researchers have known since the 1960s that cells in the primary visual cortex respond strikingly to specific features of the visual environment, such as bars moving across our visual field or edges of a given orientation. In the traditional, hierarchical view of the visual system, the primary visual cortex encodes elementary visual features of our environment and forwards this information to higher cortical areas, which, in turn, combine these individual elements to represent objects and scenes. Our understanding of how an animal's current behavior influences this processing, however, was limited. Recent studies have begun to address this question directly and have found that neurons in the primary visual cortex show more complex responses than expected. We set out to use cutting-edge technological advances in systems neuroscience to understand what type of information is processed in the primary visual cortex of awake animals as they learn to find rewards.

A window into the brain, literally

We combined two recent technologies to record from a large number of individual neurons in the primary visual cortex while mice were awake and free to run. An advanced microscopy technique, two-photon calcium imaging, allowed us to visualize the mouse brain and record the activity of hundreds of neurons through a small implanted window.
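
For readers curious what "recording the activity" means in practice, a common first step in analyzing calcium imaging data is converting each neuron's raw fluorescence trace into a relative change, ΔF/F. The sketch below shows a generic version of this computation; the window length and percentile are illustrative choices, not the parameters of our published pipeline.

```python
import numpy as np

def delta_f_over_f(trace, window=600, percentile=20):
    """Convert a raw fluorescence trace from one neuron into dF/F,
    a standard proxy for neural activity in calcium imaging.
    `window` (in samples) and `percentile` are illustrative values."""
    f = np.asarray(trace, dtype=float)
    # A running low percentile approximates the baseline fluorescence F0
    # while tolerating slow drift in the recording.
    f0 = np.array([
        np.percentile(f[max(0, i - window):i + 1], percentile)
        for i in range(len(f))
    ])
    return (f - f0) / f0
```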

A key challenge with this technique, however, is that the mouse's head has to remain fixed. We therefore constructed a virtual reality system in which an animal was placed atop a treadmill, surrounded by computer screens displaying a virtual environment within which it could move freely while its head remained in place. The virtual environment consisted of a linear corridor with a black-and-white striped pattern on the walls and a black wall section (the visual cue) at which the mouse could trigger a reward (sugar water) by licking a spout. This allowed us to train animals to find rewards at a specific location within the virtual environment while simultaneously recording the activity of neurons in the visual cortex.
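
To make the task concrete, here is a minimal sketch of the trial logic just described. The corridor length, reward-zone boundaries, and function names are illustrative placeholders, not the values used in the study.

```python
CORRIDOR_LENGTH_CM = 200.0
REWARD_ZONE_CM = (140.0, 160.0)  # the black wall section (visual cue)

def update_trial(position_cm, treadmill_step_cm, licked):
    """Advance the animal's position along the virtual corridor and
    check whether a lick should trigger the reward spout."""
    position_cm += treadmill_step_cm
    in_zone = REWARD_ZONE_CM[0] <= position_cm <= REWARD_ZONE_CM[1]
    give_reward = licked and in_zone
    if position_cm >= CORRIDOR_LENGTH_CM:
        position_cm = 0.0  # return to the start for the next trial
    return position_cm, give_reward
```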

More dedicated neurons, more reward

Our first, unexpected finding was that after learning the task, a large proportion of neurons (~80%) in the primary visual cortex responded to task-specific elements, with many cells becoming more active specifically as the animal approached the reward area. Interestingly, the number of these "task-responsive" cells strongly correlated with how well the animals performed the task. In other words, the more precisely animals were able to locate the reward, the more cells in the visual cortex we found to be active around that location of the virtual corridor. This was surprising, as neurons in the visual cortex seemed to be as interested in where the animal could get a drop of sugar water as in the visual features of its surroundings. To test the impact of the visual reward cue (the black wall section) itself, we removed this cue and found that some neurons still responded at the rewarded location. This suggests that these neurons in the visual cortex no longer depended solely on visual information to elicit their responses.

Visual inputs matter, but sometimes motor-related inputs matter more

These results raised an interesting question: what drives these responses, given that it is clearly not visual inputs alone? Mice could use two strategies to locate the reward when no visual cue marked the reward point. The first relies on an internal sense of distance based on feedback from the motor system (motor feedback): an estimate of how far the mouse has traveled based on how many steps it has taken since the beginning of the corridor. The second relies on an estimate of position based on the way the visual world moves past the animal, known as "optic flow."

We took advantage of the unique opportunities in experimental design afforded by a virtual reality system to test which type of information drives these reward-location-specific responses. By creating a mismatch between the animal's own movement on the treadmill and the visual movement of the virtual environment, we could test whether motor feedback or visual flow determines where the animal thinks it is along the corridor and, correspondingly, where the neurons representing the reward location become active. The results showed that animals estimated the reward location primarily from motor feedback. This means that some neurons in the primary visual cortex encode information related to the location of a reward based on the animal's motor behavior rather than on purely visual information.
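
In practice, such a mismatch can be produced by changing the gain between treadmill movement and movement through the virtual corridor. The sketch below illustrates the idea with an arbitrary gain; the actual manipulation and parameter values in the study may differ.

```python
VISUAL_GAIN = 0.5  # corridor moves at half the treadmill speed (illustrative)

def run_mismatch_trial(treadmill_steps_cm, learned_distance_cm=150.0):
    """Track where motor feedback versus optic flow says the animal is.
    If reward-location neurons follow motor feedback, they should become
    active once the motor estimate reaches the learned distance, even
    though the visual corridor position lags behind."""
    motor_estimate = 0.0   # distance implied by the animal's own steps
    visual_position = 0.0  # position of the animal in the virtual corridor
    for step in treadmill_steps_cm:
        motor_estimate += step
        visual_position += VISUAL_GAIN * step
        if motor_estimate >= learned_distance_cm:
            break  # predicted point of reward-related activity
    return motor_estimate, visual_position
```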

However, in our final experiment, the importance of visual inputs became clear again: when we reinstated the visual cue indicating the reward location, while maintaining the mismatch between treadmill and virtual movement, the animals' behavior, as well as the neuronal responses, snapped back to the visual cue, disregarding the number of steps taken. This suggests that motor feedback is available to and used by the primary visual cortex but that, in a conflict situation, visual cues indicating a specific location override other types of information to correctly locate a reward.

Conclusion

These results demonstrate the importance of behavioral context for sensory processing in the brain. The primary visual cortex, a region once thought primarily to represent our visual world by detecting elementary visual features such as edges, is also influenced by prior experience, learning, and interactions with our environment. A prominent model of sensory cortical function posits that the cerebral cortex creates a representation of what we expect, based on current sensory inputs and previous experience.

Our results are congruent with this model while emphasizing the large role of contextual factors, such as motor feedback and prior knowledge of a location. Future studies are necessary to determine how different types of inputs to sensory regions of the brain influence the activity of individual neurons and how they shape our perception of the world.

These findings are described in the article entitled "The Impact of Visual Cues, Reward, and Motor Feedback on the Representation of Behaviorally Relevant Spatial Locations in Primary Visual Cortex," recently published in the journal Cell Reports. This work was conducted by Janelle M.P. Pakan (University of Edinburgh, Otto-von-Guericke University, and the German Center for Neurodegenerative Diseases), Stephen P. Currie and Nathalie L. Rochefort (University of Edinburgh), and Lukas Fischer (University of Edinburgh and the Massachusetts Institute of Technology).

This blog originally appeared on the website Science Trends.

Nathalie L. Rochefort has been awarded the 2018 R Jean Banister Prize Lecture. The prize is awarded to early-career physiologists in the late stages of a PhD, in a postdoctoral position, or in an early faculty position. It was established in 2016 in memory of a former Member of The Physiological Society, (Rachel) Jean Banister, who left a legacy to The Society when she passed away in 2013.

Seeing Depth in the Brain – Part II

By Andrew Parker, Oxford University

Stereo vision may be one of the glories of nature, but what happens when it goes wrong? The loss of stereo vision typically occurs when there has been a problem with eye coordination in early childhood. Developmental amblyopia, or lazy eye, can persist into adulthood. If untreated, it often leads to a squint, a permanent misalignment of the left and right eyes. One eye's input to the brain weakens, and that eye may even lose its ability to excite the visual cortex. If the brain grows up with uncoordinated input from the two eyes during a critical developmental period, binocular depth perception becomes impossible.

My lab's work supports a growing tide of opinion that careful binocular training may prove to be the best form of orthoptics for improving the binocular coordination of the eyes. Recent treatment of amblyopia has tended to concentrate on covering the stronger eye with a patch to give the weaker eye more visual experience, with the aim of strengthening the weaker eye's connections to the visual cortex. Now we understand, from basic research like ours, that there is more to stereoscopic vision than the initial connection of the left and right eyes into the primary visual cortex.

Figure 2: Ramon y Cajal’s secret stereo writing, see Text Box

Secret Stereo Writing. The great Spanish neuroanatomist Ramon y Cajal developed this technique for photographically sending a message in code. The method uses a stereo camera, shown on a tripod at the left, with two lenses and two photographic plates. Each lens focuses a slightly different image of the scene in front of the camera. The secret message is on plate B, whereas plate A contains a scrambled pattern of visual contours, which we term visual noise. The message is unreadable in either photograph taken separately because of the interfering visual noise, and each photograph would be sent with a different courier. When the photographs arrive, viewing the pair with a stereograph device (as in Figure 1) reveals the message, which stands out in stereo depth, distinct from the noisy background. Cajal did not take this seriously enough to write a proper publication on his idea: "my little game…is a puerile invention unworthy of publishing". He could not have guessed that this technique would form the basis of a major research tool in modern visual neuroscience.
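
That research tool is the random-dot stereogram, the direct descendant of Cajal's trick. As a rough illustration (the construction below is a standard textbook simplification with made-up parameters, not Cajal's photographic method), a hidden region can be shifted horizontally between two fields of random noise so that it is invisible in either image alone but pops out in depth when the pair is fused:

```python
import numpy as np

def random_dot_stereogram(height=200, width=200, box=(60, 140, 60, 140),
                          disparity=6, seed=0):
    """Generate a left/right image pair hiding a square in depth.
    `box` gives (top, bottom, left, right) of the hidden region;
    all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    left = rng.integers(0, 2, size=(height, width))  # binary visual noise
    right = left.copy()
    top, bottom, lo, hi = box
    # Shift the hidden square horizontally in the right-eye image; the
    # brain reads this horizontal offset (binocular disparity) as depth.
    right[top:bottom, lo - disparity:hi - disparity] = left[top:bottom, lo:hi]
    # Fill the strip uncovered by the shift with fresh, uncorrelated noise.
    right[top:bottom, hi - disparity:hi] = rng.integers(
        0, 2, size=(bottom - top, disparity))
    return left, right
```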

Our lab is investigating the fundamental structure of stereoscopic vision by recording signals from nerve cells in the brain’s visual cortex. One of the significant technical developments we use is the ability to record from lots of nerve cells simultaneously. Using this technique, I am excited to be starting a new phase of work that aims to identify exactly how the visual features that are imaged into the left eye are matched with similar features present in the right eye.

The neural pathways of the brain first bring this information together in the primary visual cortex. Remarkably, there are some 30 additional cortical areas beyond the primary visual cortex, all in some way concerned with vision, and most of them containing a topographic map of the 2-D images arriving on the retinas of the eyes. The discovery of these visual areas started with Semir Zeki's work in the macaque monkey's visual cortex. Our work follows that line by recording electrical signals from the visual cortex of these animals. To achieve this, we are using brain implants in the macaques very similar to those being trialed for human neurosurgical use (where implants bypass broken nerves in the spinal cord to restore mobility).

Figure 3: A very odd form of binocular vision in the animal kingdom. The young turbot grows up with one eye on each side of the head like any other fish, but as adulthood is reached one eye migrates anatomically to join the other on the same side of the head. It is doubtful whether the adult turbot also acquires stereo vision. Human evolutionary history has brought our two eyes forward-facing, rather than lateral as in many mammals, enabling stereo vision.

My lab is currently interested in how information passes from one visual cortical area to another. Nerve cells in the brain communicate with a temporal code, which uses the rate and timing of impulse-like events to signal the presence of different objects in our sensory world. When information passes from one area to another, the signals about depth get rearranged: they successively lose features that are unrelated to our perceptual experience and acquire new properties that correspond more closely to it. These transformations eventually come to shape our perceptual experience.

In this phase of work, we are identifying previously unobserved properties of this transformation from one cortical area to another. We are examining how populations of nerve cells use coordinated signals to allow us to discriminate objects at different depths. We are testing the hypothesis that the variability of neural signals is a fundamental limit on how well the population of nerve cells can transmit reliable information about depth.

To be specific, we are currently following a line of enquiry inspired by theoretical analysis that identifies the shared variability between pairs of neurons (that is, the covariance of neural signals) as a critical limit on sensory discrimination. Pursuing this line is giving us new insights into why the brain has so many different visual areas and how these areas work together.
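
To make this concrete, the shared variability between two neurons is typically quantified as a "noise correlation": the correlation of their trial-to-trial fluctuations around the mean response to a repeated, identical stimulus. The sketch below shows a generic version of this computation; it illustrates the concept rather than describing our actual analysis pipeline.

```python
import numpy as np

def noise_correlations(responses):
    """`responses`: array of shape (n_trials, n_neurons) of spike counts
    to repeated presentations of the same stimulus. Returns the pairwise
    noise-correlation matrix -- the shared trial-to-trial variability
    that theory identifies as a limit on population coding of depth."""
    r = np.asarray(responses, dtype=float)
    r = r - r.mean(axis=0)             # remove the stimulus-driven mean
    cov = r.T @ r / (r.shape[0] - 1)   # covariance across trials
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)      # normalize to correlations
```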

It is an exciting time. We still need to determine whether our perceptual experiences are in any sense localised to certain regions of the brain or represent the activity in particular groups of neurons. What are the differences between types of neural tissue in their ability to deliver conscious perception?

There are many opportunities created by the newer technologies of multiple, parallel recording of neural signals and the ability to intervene in neural signaling brought by the use of focal electrical stimulation and optogenetics. By tracking signals related to specific perceptual decisions through the myriad of cortical areas, we can begin to answer these questions. The prospect of applying these methods to core problems in the neuroscience of our perceptual experience is something to look forward to in the forthcoming years.

Seeing Depth in the Brain – Part I

By Andrew Parker, Oxford University

The Physiological Society set up the annual, travelling GL Brown Prize Lecture to stimulate an interest in the experimental aspects of physiology. With predecessors such as Colin Blakemore and Semir Zeki, following in their footsteps is a tall order. They are not only at the very top scientifically but also superb communicators.

My lecture series on stereo vision has already taken me around the UK, including London, Cardiff, and Sheffield. I'll be at the University of Edinburgh on 15 November and Oxford University on 23 November. It's a nice touch that GL Brown's career took him around the country too, including Cambridge, Manchester, Mill Hill, and central London, before he became Waynflete Professor in my own department in Oxford. The other pleasurable coincidence of giving lectures on stereo vision this year is that it marks the 50th anniversary of fundamental discoveries about how the brain combines the information from the two eyes to provide us with a sense of depth.

In his 1997 book How the Mind Works, Steven Pinker wrote, "Stereo vision is one of the glories of nature and a paradigm of how other parts of the mind might work." I can't claim to have written this inspiring sentence myself, but I can at least claim to have chosen stereo vision as my field well before Steven Pinker wrote his sentence.

Stereo vision is, in a nutshell, three-dimensional visual perception. It is the use of two eyes in coordination to give us a sense of depth: a pattern of 3-D relief or 3-D form that emerges out of the 2-D images arriving at the left and right eyes. These images are captured by the light-sensitive surface of the eye called the retina. Stereo vision gives us the ability to work out how far away objects are based solely on the small difference between the positions of an object's image in the two eyes, known as binocular disparity.
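
The geometry behind this can be made concrete with a back-of-envelope calculation. Under the standard small-angle approximation, an object offset in depth by dZ from a fixation distance Z produces a disparity of roughly I × dZ / Z² radians, where I is the separation between the eyes. The sketch below simply inverts that relation; the numbers are illustrative textbook values, not figures from the lecture.

```python
def depth_from_disparity(disparity_rad, interocular_cm=6.3,
                         fixation_cm=50.0):
    """Invert the small-angle disparity relation d = I * dZ / Z**2
    to recover the depth offset dZ (in cm) from a disparity in radians.
    Default eye separation and fixation distance are illustrative."""
    return disparity_rad * fixation_cm**2 / interocular_cm
```

At a fixation distance of 50 cm, for example, a disparity of 0.001 radians (about 3.4 arcminutes) corresponds to a depth offset of roughly 4 mm, which gives a feel for how sensitive the system has to be.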

Figure 1: “The stereograph as an educator,” illustrating the virtual reality technology of the Victorian era.

The Victorians amused themselves with stereo vision (see Figure 1). Virtual reality is our modern-day version of this, but what comes next? The next generation will probably enjoy "augmented reality" rather than virtual reality. With augmented reality, extra computer-generated imagery is projected onto objects in the real world, with the aim of creating a perceptual fusion of real objects with virtual imagery. For example, in one prototype I have seen, surgeons perform their operations with virtual imagery (acquired with diagnostic imaging devices) superimposed upon the surgical field in the operating theatre. Needless to say, this places much higher demands on the quality and stability of the virtual imaging systems.

What causes people like Pinker, who are outside the field, to get so excited about stereo vision? Partly it's the experience itself. If you've been to a 3-D movie or put on a virtual reality headset, you will have experienced stereoscopic depth: it is vivid and immediate. The other thing that excites Pinker is the way in which the brain creates a sense of a third dimension in space out of what are fundamentally two flat images. As a scientific problem, this is fascinating.

We also see parallels between stereo vision and how other important functions of the brain are realised. One straightforward example of that is visual memory. Gaining a sense of stereoscopic depth from two images (left and right) requires matching of visual features from one image to another. Remembering whether or not we have seen something before requires matching of a present image to a memory trace of a previously seen image. Both processes require the nervous system to match visual information from one source to another.

Another aspect that Pinker highlights is the way in which the two flat images in stereo are fused to form a new perceptual quality, binocular depth. A great deal of spatial perception works this way. One obvious example is our ability to use the two ears in combination to form an impression of sound localised in space, based just on the vibrations received by the left and right ear canals.

What is the world like without stereo vision? While you can partly experience this by placing an eyepatch over one eye (try playing a racket sport or carefully making a cup of tea), the difference is most strongly highlighted by the very rare cases when stereo vision appears to have been lost but is then recovered. Susan Barry, professor of neurobiology at Mount Holyoke College, was stereoblind in early life but eventually gained stereo vision with optometric vision therapy.

In a New Yorker article by Oliver Sacks ("Stereo Sue," A Neurologist's Notebook, June 19, 2006), Barry describes her newly acquired perception of the world: "Every leaf seemed to stand out in its own little 3-D space. The leaves didn't just overlap with each other as I used to see them. I could see the SPACE between the leaves. The same is true for twigs on trees, pebbles on the road, stones in a stone wall. Everything has more texture."

Check back next Wednesday for Part II of Andrew Parker’s blog on stereo vision.