Brain studies are a mass of contradictions. When you leave your job, your home and your technology behind for a vacation, you 'disconnect,' or so some claim.
"Actually, you've just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at of the Salk Institute and an expert on how the visual system works. "You may think you're resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment-what objects are in it, are they moving, and if so, how fast are they moving?
The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk's Vision Center Laboratory. "It's a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?"
Albright, Gepshtein and Luis A. Lesmes, a specialist in measuring human performance at the Schepens Eye Research Institute, believe they have some answers, which they present in a paper in the Proceedings of the National Academy of Sciences (PNAS).
Some neuroscientists believe that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on the ocean during your holiday. Yet experiments produced contradictory results. "Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus," says Albright.
The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from the perspective of the system as a whole?
It turns out something's got to give.
"It's as if the brain's on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you're attuned to high speeds, you'll be able to see faster moving things that you couldn't see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."
Summing up, Albright says, "Simply put, it's a tradeoff: The price of getting better at one thing is getting worse at another."
Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician's point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to a method of signal processing known as the Gabor transform, which is used to extract features in both the spatial and temporal domains.
Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn't his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.
"Gabor proved that measurements of two fundamental properties of a signal-its location and frequency content-are not independent of one another," says Gepshtein.
The location of a signal is simply that: where the signal is at a given point in time. The content, the "what" of a signal, is "written" in the language of frequencies and measures the amount of variation, such as the different shades of gray in a photograph.
The challenge comes when you try to measure both location and frequency, because location is determined more accurately in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).
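In signal-processing terms, this compromise is usually written as an uncertainty relation. The inequality below is the standard textbook form, not a formula quoted from the PNAS paper: if $\Delta t$ is the effective duration over which a signal is measured and $\Delta f$ the effective width of its frequency content, then

$$\Delta t \, \Delta f \;\ge\; \frac{1}{4\pi},$$

with equality achieved only by a sinusoid under a Gaussian envelope, the function now called a Gabor function.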
The obvious answer is that you're stuck with a compromise: you can get a precise measurement of one or the other, but not both. But how can you be sure you've found the best possible compromise? Gabor's answer was what has become known as a "Gabor filter," which obtains the most precise joint measurement possible of both quantities. Our brains employ a similar strategy, says Gepshtein.
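As a concrete illustration, here is a minimal sketch of a one-dimensional Gabor filter in Python; all the parameter values are invented for the example, and this is a toy, not code from the study. The Gaussian envelope sets *where* the filter looks, and the cosine carrier sets *which frequency* it looks for:

```python
import numpy as np

def gabor_filter(t, center_freq, sigma):
    """1-D Gabor filter: a sinusoid localized by a Gaussian window.

    The Gaussian envelope fixes *where* the filter listens (location);
    the cosine carrier fixes *what* it listens for (frequency content).
    """
    envelope = np.exp(-t**2 / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * center_freq * t)
    return envelope * carrier

# Toy usage: how strongly does a signal contain a 5 Hz component
# near t = 0, judged through a ~0.2 s window?
t = np.linspace(-1.0, 1.0, 2001)          # time axis, in seconds
signal = np.cos(2 * np.pi * 5.0 * t)      # a pure 5 Hz "stimulus"
g = gabor_filter(t, center_freq=5.0, sigma=0.2)
print(np.dot(signal, g))                  # large response: frequencies match
print(np.dot(signal, gabor_filter(t, center_freq=12.0, sigma=0.2)))  # near zero
```

One caveat on the sketch: a single cosine filter is phase sensitive; in practice one often uses a complex-valued (quadrature) version and takes the magnitude of the response, as in the next sketch.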
"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."
In essence, the first stages of the visual process act like a filter. "It describes which stimuli get in, and which do not," Gepshtein says. "When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in."
"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces - the gains and losses - add up to a coherent pattern."
From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically: all of this processing takes place whether or not you consciously 'pay attention' to the change in scene.
Yet while the adaptation happens automatically, it does not appear to happen instantaneously. The team's current experiments take approximately thirty minutes to conduct, but the scientists believe the adaptation may take less time in natural settings.