A brain-damaged man who can’t remember faces has nosed into a scientific debate about how people learn to recognize other complex objects. Deaf users of sign language also have a hand in this dispute.
The brain-damaged man’s facial failures are one symptom of a general inability to perceive configurations of object parts, suggests a new investigation led by psychologist Cindy Bukach of the University of Richmond in Virginia. The man thus stumbles at identifying not only people’s faces but also computer-generated, three-part objects called Greebles, even after extensive training, Bukach’s team reports online December 8 in Neuropsychologia.
Bukach and her colleagues studied LR, a man who fails to recognize his daughter when shown a picture of her but remembers distinctive facial features, such as Elvis’ sideburns. Damage to a brain area just under the right temple, sustained in a car accident, caused this condition, called prosopagnosia.
“There are many ways in which face recognition can be disrupted, but our evidence shows that LR’s type of prosopagnosia impairs recognition of objects with multiple parts, with faces as the most obvious example,” Bukach says. Relative positions of the eyes, nose and mouth, as well as their shapes, contribute to perceiving a face as a single entity.
In a 2006 report, her team designed a collection of eight faces using different combinations of two sets of eyes, noses and mouths. After briefly viewing a face, LR correctly selected it from all eight faces 25 percent of the time — about what would be expected if he based choices on a single facial feature, Bukach says. Further testing showed that LR homed in on the mouth.
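A quick back-of-the-envelope check shows where that 25 percent figure comes from: two versions of each of three features yield 2 × 2 × 2 = 8 faces, and knowing a single feature narrows the lineup to four equally likely candidates. The short Python sketch below is purely illustrative arithmetic, not anything from the study; the uniform-guessing strategy and the function name are assumptions made for the example.

```python
# Illustrative sketch: expected accuracy in an eight-item lineup
# (two versions each of eyes, nose and mouth) for a viewer who
# encodes only ONE feature of the target and guesses uniformly
# among lineup faces that match it. These assumptions are the
# example's, not the study's.
from itertools import product
from fractions import Fraction

faces = list(product((0, 1), repeat=3))  # (eyes, nose, mouth) variants

def single_feature_accuracy(feature_index: int) -> Fraction:
    total = Fraction(0)
    for target in faces:
        # Faces indistinguishable from the target on the encoded feature
        matches = [f for f in faces if f[feature_index] == target[feature_index]]
        total += Fraction(1, len(matches))  # chance the uniform guess is right
    return total / len(faces)

print(single_feature_accuracy(2))  # -> 1/4, i.e. 25 percent
```

The same arithmetic applies to the eight Greebles built from two versions of three appendages, which is the basis for reading LR’s roughly chance-level scores as single-part matching.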
In the new study, the researchers designed eight Greebles, using different combinations of two versions of three distinctive appendages. LR recognized Greebles he had just seen 31 percent of the time, improving little after several one-hour, weekly training sessions. Four healthy volunteers struggled at discerning Greebles at first but recognized most of them after training.
Bukach opposes an influential view that the brain evolved systems for dealing with key types of knowledge, including face recognition (SN: 7/7/01, p. 10). A proponent of that view, psychologist Bradley Duchaine of Dartmouth College, previously reported that a prosopagnosia patient named Edward — who cited lifelong problems recognizing faces — learned to discriminate Greebles but not human faces.
If face recognition depends on a general capacity for learning to recognize multi-part objects, Duchaine holds, healthy volunteers should recognize novel Greebles as poorly as prosopagnosia patients do at first but perform better than patients after seeing lots of Greebles. LR’s Greeble difficulties exceeded those of healthy volunteers from the start, a sign of fundamental object-recognition problems that make the results hard to interpret, Duchaine contends. “These new results don’t help us understand mechanisms used for face processing,” he says.
LR’s poor Greeble-recognition accuracy before and after training indicates that he focused on only one Greeble appendage when trying to tell the funny-looking objects apart, Bukach responds.
Support for the idea that brains use a general mechanism to recognize complex objects comes from deaf people who communicate with American Sign Language. Just as upside-down faces look weird and often unrecognizable to healthy volunteers, so do upside-down signs shown to fluent ASL users, say psychologists David Corina of the University of California, Davis, and Michael Grosvald of the University of California, Irvine.
Because healthy individuals perceive faces as whole entities, topsy-turvy faces look bizarre, Corina says. Likewise, ASL users learn to see signs as integrated sets of movements that look peculiar when inverted, the researchers propose in a paper published online December 6 in Cognition.
By contrast, many researchers have assumed that people understand sign language by breaking each sign down into hand shapes, arm movements and other elements.
Corina and Grosvald also find that deaf ASL users are faster than hearing nonsigners at recognizing videos of head scratching and other common grooming actions. Sign languages exploit brain areas devoted to detecting human actions in general, they propose.
Psycholinguist Karen Emmorey of San Diego State University calls new evidence that fluent signers perceive signs as whole entities “a key insight.” Further work is needed to confirm that learning a sign language modifies action-related brain areas, she adds.