
Seeing Depth in the Brain – Part II

By Andrew Parker, Oxford University

Stereo vision may be one of the glories of nature, but what happens when it goes wrong? The loss of stereo vision typically occurs in cases where there has been a problem with eye coordination in early childhood. Developmental amblyopia, or lazy eye, can persist into adulthood. If untreated, this often leads to a squint, or a permanent misalignment of the left and right eyes. One eye’s input to the brain weakens and that eye may even lose its ability to excite the visual cortex. If the brain grows up during a critical, developmental period with uncoordinated input from the two eyes, binocular depth perception becomes impossible.

My lab’s work supports a growing tide of opinion that careful binocular training may prove to be the best form of orthoptics for improving the binocular coordination of the eyes. Recent treatment of amblyopia has tended to concentrate on covering the stronger eye with a patch, giving the weaker eye more visual experience with the aim of strengthening its connections to the visual cortex. Now we understand from basic research like ours that there is more to stereoscopic vision than the initial connection of the left and right eyes into the primary visual cortex.


Figure 2: Ramon y Cajal’s secret stereo writing, see Text Box

Secret Stereo Writing. The great Spanish neuroanatomist Ramon y Cajal developed this technique for photographically sending a message in code. The method uses a stereo camera with two lenses and two photographic plates on a tripod at the left. Each lens focuses a slightly different image of the scene in front of the camera. The secret message is on plate B, whereas plate A contains a scrambled pattern of visual contours, which we term visual noise. The message is unreadable in each of the two photographic images taken separately because of the interfering visual noise. Each photograph would be sent with a different courier. When both photographs arrive, viewing the pair with a stereograph device, as in Figure 1, reveals the message: it stands out in stereo depth, distinct from the noisy background. Cajal did not take this seriously enough to write a proper publication on his idea: “my little game…is a puerile invention unworthy of publishing”. He could not have guessed that this technique would form the basis of a major research tool in modern visual neuroscience.
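
Cajal’s trick is essentially what modern visual neuroscience calls a random-dot stereogram: two fields of noise that are identical except for a hidden region displaced horizontally in one of them. As a rough illustration of the principle (not of Cajal’s photographic method), here is a minimal NumPy sketch in which the hidden “message” is simply a central square; the image size and shift are arbitrary, assumed values.

```python
# Minimal sketch of the "secret stereo writing" principle: two fields of
# identical noise, except that a hidden region is displaced horizontally in
# one of them. Neither image alone shows the message; binocular fusion makes
# the displaced region stand out in depth against the noisy background.
import numpy as np

rng = np.random.default_rng(0)

def make_stereo_pair(size=200, shift=4):
    """Return a (left, right) pair of binary noise images hiding a central square."""
    left = rng.integers(0, 2, size=(size, size))          # plate A: pure visual noise
    right = left.copy()
    r0, r1 = size // 4, 3 * size // 4                     # the hidden "message" region
    # Shift the hidden region sideways in the right image (a binocular disparity)
    right[r0:r1, r0 + shift:r1 + shift] = left[r0:r1, r0:r1]
    # Refill the strip uncovered by the shift so neither image betrays the message
    right[r0:r1, r0:r0 + shift] = rng.integers(0, 2, size=(r1 - r0, shift))
    return left, right

left_img, right_img = make_stereo_pair()
```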

Our lab is investigating the fundamental structure of stereoscopic vision by recording signals from nerve cells in the brain’s visual cortex. One of the significant technical developments we use is the ability to record from lots of nerve cells simultaneously. Using this technique, I am excited to be starting a new phase of work that aims to identify exactly how the visual features that are imaged into the left eye are matched with similar features present in the right eye.
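
The matching problem described here is usually called the stereo correspondence problem. The sketch below is not our experimental method, only an illustration of the simplest textbook solution: take a small patch of the left image, slide along the same row of the right image, and report the horizontal offset (the disparity) that minimises the sum of absolute differences. Patch size and search range are arbitrary, assumed values.

```python
import numpy as np

def block_match_disparity(left, right, y, x, patch=5, max_disp=16):
    """Toy stereo correspondence by block matching: return the horizontal offset
    (disparity, in pixels) of the right-image patch that best matches the
    left-image patch centred on (y, x), scored by sum of absolute differences."""
    h = patch // 2
    template = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):                  # candidate disparities along the same row
        if x - h - d < 0:                          # stop at the image border
            break
        candidate = right[y - h:y + h + 1, x - h - d:x + h + 1 - d].astype(float)
        cost = np.abs(template - candidate).sum()  # matching cost (SAD)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```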

The neural pathways of the brain first bring this information together in the primary visual cortex. Remarkably, there are some 30 additional cortical areas beyond the primary visual cortex, all concerned in some way with vision and most of them containing a topographic map of the 2-D images arriving on the retinas of the eyes. The discovery of these visual areas started with Semir Zeki’s work in the macaque monkey’s visual cortex. Our work follows that line by recording electrical signals from the visual cortex of these animals. To achieve this, we are using brain implants in the macaques very similar to those being trialled for human neurosurgical use (where implants bypass broken nerves in the spinal cord to restore mobility).


Figure 3: A very odd form of binocular vision in the animal kingdom. The young turbot grows up with one eye on each side of the head like any other fish, but as adulthood is reached one eye migrates anatomically to join the other on the same side of the head. It is doubtful whether the adult turbot also acquires stereo vision. Human evolutionary history has brought our two eyes forward-facing, rather than lateral as in many mammals, enabling stereo vision.

My lab is currently interested in how information passes from one visual cortical area to another. Nerve cells in the brain communicate with a temporal code, which uses the rate and timing of impulse-like events to signal the presence of different objects in our sensory world. When information passes from one area to another, the signals about depth get rearranged. The signals successively lose features that are unrelated to our perceptual experience and acquire new properties that correspond more closely to it. So these transformations eventually come to shape our perceptual experience.
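
To make “rate and timing of impulse-like events” concrete, here is a small illustrative sketch (not a model of any recorded cell): a hypothetical Gaussian disparity tuning curve sets a firing rate, and spike times are then drawn from a Poisson process at that rate, a common first approximation to cortical spiking. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def tuned_rate(disparity, preferred=0.1, width=0.2, peak=50.0, baseline=5.0):
    """Hypothetical Gaussian tuning curve: firing rate (spikes/s) as a function
    of binocular disparity (degrees). All parameter values are illustrative."""
    return baseline + (peak - baseline) * np.exp(-0.5 * ((disparity - preferred) / width) ** 2)

def poisson_spike_times(rate_hz, duration_s=1.0):
    """Draw spike times from a homogeneous Poisson process at the given rate,
    a simple stand-in for the impulse-like events (action potentials)."""
    n_spikes = rng.poisson(rate_hz * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, size=n_spikes))

spikes = poisson_spike_times(tuned_rate(disparity=0.1))   # roughly 50 spikes in one second
```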

In this phase of work, we are identifying previously unobserved properties of this transformation from one cortical area to another. We are examining how populations of nerve cells use coordinated signals to allow us to discriminate objects at different depths. We are testing the hypothesis that the variability of neural signals is a fundamental limit on how well the population of nerve cells can transmit reliable information about depth.

To be specific, we are currently following a line of enquiry inspired by theoretical analysis that identifies the shared variability between pairs of neurons (that is, the covariance of neural signals) as a critical limit on sensory discrimination. Pursuing this line is giving us new insights into why the brain has so many different visual areas and how these areas work together.
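
One standard way to express this idea quantitatively (a textbook formulation, not our specific analysis) is the linear Fisher information J = f′ᵀ Σ⁻¹ f′, where f′ holds the slopes of the neurons’ disparity tuning curves and Σ is their noise covariance. The toy sketch below shows the qualitative point: with independent noise the information grows with the number of neurons, whereas shared (correlated) variability of this simple uniform form makes it saturate. All numbers are invented for illustration.

```python
import numpy as np

def linear_fisher_information(tuning_slopes, covariance):
    """Linear Fisher information J = f'^T Sigma^{-1} f': a standard bound on how
    precisely a neural population can discriminate small changes in a stimulus."""
    return float(tuning_slopes @ np.linalg.solve(covariance, tuning_slopes))

n_cells = 50
slopes = np.ones(n_cells)                 # hypothetical: identical tuning slopes
variances = np.full(n_cells, 4.0)         # hypothetical single-cell spike-count variance

independent = np.diag(variances)          # no shared variability
c = 0.2                                   # hypothetical uniform noise correlation
shared = c * np.sqrt(np.outer(variances, variances))
np.fill_diagonal(shared, variances)

print(linear_fisher_information(slopes, independent))  # grows in proportion to n_cells
print(linear_fisher_information(slopes, shared))       # saturates: covariance limits discrimination
```

In this toy case the information with correlated noise can never exceed 1/(c × variance), no matter how many neurons are added, which is the sense in which shared variability acts as a fundamental limit.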

It is an exciting time. We still need to determine whether our perceptual experiences are in any sense localised to certain regions of the brain, or to the activity of particular groups of neurons. What are the differences between types of neural tissue in their ability to deliver conscious perception?

There are many opportunities created by the newer technologies of multiple, parallel recording of neural signals, and by the ability to intervene in neural signalling with focal electrical stimulation and optogenetics. By tracking signals related to specific perceptual decisions through the myriad of cortical areas, we can begin to answer these questions. The prospect of applying these methods to core problems in the neuroscience of our perceptual experience is something to look forward to in the coming years.

Seeing Depth in the Brain – Part I

By Andrew Parker, Oxford University

The Physiological Society set up the annual, travelling GL Brown Prize Lecture to stimulate an interest in the experimental aspects of physiology. With predecessors such as Colin Blakemore and Semir Zeki, who are not only at the very top scientifically but also superb communicators, following in their footsteps is a tall order.

My lecture series on stereo vision has already taken me around the UK, including London, Cardiff, and Sheffield. I’ll be at the University of Edinburgh on 15 November and Oxford University on 23 November. It’s a nice touch that GL Brown’s career took him around the country too, including Cambridge, Manchester, Mill Hill and central London, before he became Waynflete Professor in my own department in Oxford. The other pleasurable coincidence of giving lectures on stereo vision this year is that it marks the 50th anniversary of the fundamental discoveries about how the brain combines the information from the two eyes to provide us with a sense of depth.

In his 1997 book How the Mind Works Steven Pinker wrote, “Stereo vision is one of the glories of nature and a paradigm of how other parts of the mind might work.” I can’t claim to have written this inspiring sentence myself, but I can at least claim to have chosen stereo vision as my field well before Steven Pinker wrote his sentence.

Stereo vision is, in a nutshell, three-dimensional visual perception. It is the use of two eyes in coordination to give us a sense of depth: a pattern of 3-D relief or 3-D form that emerges out of the 2-D images arriving at the left and right eyes. These images are captured by the light-sensitive surface of the eye called the retina. Stereo vision gives us the ability to derive information about how far away objects are, based solely on the relative positions of an object’s images in the two eyes.
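
The “relative positions” referred to here are quantified as binocular disparity. As a back-of-the-envelope illustration (standard small-angle geometry with an assumed 6.5 cm interocular separation, not a measurement), the sketch below estimates the angular disparity produced by a depth difference relative to a fixated point.

```python
import math

def angular_disparity_deg(distance_m, fixation_m, interocular_m=0.065):
    """Small-angle approximation: the angular disparity of a point at distance_m,
    relative to a fixated point at fixation_m, is roughly
    interocular separation * (1/distance - 1/fixation) radians.
    The 6.5 cm interocular separation is an assumed, illustrative value."""
    return math.degrees(interocular_m * (1.0 / distance_m - 1.0 / fixation_m))

# A point 5 cm nearer than a fixated object one metre away yields roughly 0.2 degrees
# of disparity, comfortably above typical human stereo thresholds.
print(angular_disparity_deg(0.95, 1.0))
```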


Figure 1: “The stereograph as an educator,” illustrating the virtual reality technology of the Victorian era.

The Victorians amused themselves with stereo vision (see Figure 1). Virtual reality is our modern-day version of this, but what comes next? The next generation will probably enjoy “augmented reality” rather than virtual reality. With augmented reality, extra computer-generated imagery is projected onto objects in the real world. The aim is to create a perceptual fusion of real objects with virtual imagery. For example, in one prototype I have seen, surgeons perform their operations with virtual imagery (acquired with diagnostic imaging devices) superimposed upon the surgical field in the operating theatre. Needless to say, this places much higher demands on the quality and stability of the virtual imaging systems.

What causes people like Pinker, who are outside the field, to get so excited about stereo vision? Partly it’s just the experience itself. If you’ve been to a 3-D movie or put on a virtual reality headset, you will have experienced the sense of stereoscopic depth. It is vivid and immediate. The other thing that excites Pinker is the way in which the brain is able to create a sense of a third dimension in space out of what are fundamentally two flat images. As a scientific problem, this is fascinating.

We also see parallels between stereo vision and how other important functions of the brain are realised. One straightforward example of that is visual memory. Gaining a sense of stereoscopic depth from two images (left and right) requires matching of visual features from one image to another. Remembering whether or not we have seen something before requires matching of a present image to a memory trace of a previously seen image. Both processes require the nervous system to match visual information from one source to another.

Another aspect that Pinker is highlighting is the way in which the two flat images in stereo are fused to create a new perceptual quality, binocular depth. A great deal of spatial perception works this way. One obvious example is our ability to use the two ears in combination to form an impression of sound localised in space, based just on the vibrations received by the left and right ear canals.
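
For the binaural analogy, the cue that corresponds most closely to binocular disparity is the interaural time difference. The sketch below uses the simplest textbook approximation (an assumed head width of about 18 cm and a straight-line path); it is only meant to show that a source off to one side reaches the far ear a fraction of a millisecond later.

```python
import math

def interaural_time_difference_s(azimuth_deg, head_width_m=0.18, speed_of_sound_m_s=343.0):
    """Simplest approximation to the interaural time difference: the extra path to
    the far ear is roughly head_width * sin(azimuth). Head width is an assumed value."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / speed_of_sound_m_s

print(interaural_time_difference_s(90.0))   # about 0.5 ms for a source directly to one side
```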

What is the world like without stereo vision? While you can partly experience this by placing an eyepatch over one eye (try playing a racket sport or carefully making a cup of tea), the difference is most strongly highlighted by the very rare cases when stereo vision appears to have been lost but is then recovered. Susan Barry, professor of neurobiology at Mount Holyoke College, was stereoblind in early life but eventually gained stereo vision with optometric vision therapy.

In a New Yorker article by Oliver Sacks (Stereo Sue, A Neurologist’s Notebook, June 19, 2006) Barry describes her newly acquired perception of the world. “Every leaf seemed to stand out in its own little 3-D space. The leaves didn’t just overlap with each other as I used to see them. I could see the SPACE between the leaves. The same is true for twigs on trees, pebbles on the road, stones in a stone wall. Everything has more texture.”

Check back next Wednesday for Part II of Andrew Parker’s blog on stereo vision.