
Web Posted on: November 18, 1998


AUGMENTATIVE COMMUNICATION AND EMOTIONAL EXPRESSION

Donald B. Egolf, Ph.D.
Department of Communication
1117 Cathedral of Learning
University of Pittsburgh
Pittsburgh, PA 15260
(412) 624-6763 FAX (412) 624-1878
ratchet@vms.cis.pitt.edu

One common way of categorizing the messages of human discourse is to place them in either the verbal or the nonverbal category. In the field of augmentative communication, the verbal category has received the most attention, an emphasis that is understandable given the immediate educational, interpersonal, and vocational needs of nonspeaking persons. At the same time, however, there is the other side of the communication coin, the nonverbal side, and it is this side that is the focus of the current research. Specifically, the research focused on facial expressions and sought to answer several basic questions with specific reference to nonspeakers who use assistive communication devices. Two studies were conducted: the first dealt with perception, the second with expression.


Study 1

The purpose of Study 1 was to find answers to the following two questions:

  • (1) How accurate are nonspeakers who use assistive communication devices in perceiving or identifying facially expressed emotions, in comparison with the accuracy of other populations?
  • (2) How do the nonspeakers' perceptual confusions compare with the confusions of other populations? (A confusion is an erroneous identification.)

Four groups, comprising 95 subjects in all, participated. Group 1, the target group, consisted of ten adult, cerebral palsied nonspeakers who used assistive devices; nine of these subjects were wheelchair users, and all were enrolled in an independent living program. Group 2, the first control group, consisted of ten adult, cerebral palsied speakers. These individuals came from the same program as the members of the target group, and seven were wheelchair users. Members of Groups 1 and 2 passed vision and hearing screenings and were literate. Group 3, the second control group, consisted of fourteen adult foreign nationals. Group 4, the third control group, consisted of 61 college undergraduates, included to represent native, nondisabled communicators.

Each subject was shown thirty shuffled photographs taken from "Unmasking the Human Face" (Ekman and Friesen, 1975). The photographs depicted six basic facially expressed emotions: anger, disgust, fear, happiness, sadness, and surprise. From cross-cultural studies, Ekman and Friesen determined that these six emotions were essentially universal in meaning, and from other studies they determined that the same six emotions were the most reliably identified. Subjects were told that each picture shown to them depicted one of the six emotions; no additional expressions were introduced during the course of the study. There were five photographs for each of the six emotions. Subjects had as much time as they needed to decide upon the emotion being expressed but could choose only one of the six emotions. After each subject had identified all thirty photographs, the pictures were reshuffled to control for ordering and expectancy effects. Arrangements were made to have subjects respond by a method that was physically possible for them.
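As a summary of the procedure, the following sketch (in Python, purely illustrative; no such software is described in the study) restates the design: thirty photographs, five per emotion, reshuffled for each subject, with one forced choice among the six emotions per trial.

    import random

    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    # Thirty stimulus photographs: five per emotion, as in the study.
    photographs = [(emotion, n) for emotion in EMOTIONS for n in range(5)]

    def run_session(choose):
        """Present the photographs in shuffled order and collect one
        forced-choice label (one of the six emotions) per photograph."""
        random.shuffle(photographs)        # reshuffled before every subject
        responses = []
        for photo in photographs:
            true_emotion = photo[0]
            responses.append((true_emotion, choose(photo)))
        return responses

    # Demonstration with a hypothetical subject who guesses at random.
    trials = run_session(lambda photo: random.choice(EMOTIONS))
    accuracy = sum(true == chosen for true, chosen in trials)   # 0 through 30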

For each subject an accuracy score was computed. The highest possible score was 30, equivalent to correctly identifying each of the 30 photographs presented. Group means were computed, and the data showed that each successive group had a higher mean: cerebral palsied nonspeakers who used assistive communication devices scored lowest (M=17.10), followed by cerebral palsied speakers (M=19.40), then foreign nationals (M=21.86), and then native, nondisabled communicators (M=23.85). The accuracy data were analyzed using a one-way ANOVA, yielding F(3,91) = 18.57 (p<.0001); the groups therefore differed significantly from one another. When the data were examined to see whether the groups were similar with respect to which emotions they found most and least difficult to identify, the groups were found to be very much alike: all found happiness the easiest to identify and disgust the most difficult.
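For readers who wish to reproduce this style of analysis, a minimal sketch follows. The per-subject scores shown are placeholders, since only the group means are reported here; the call to scipy's f_oneway is one standard way to compute a one-way ANOVA.

    from scipy.stats import f_oneway

    # Placeholder accuracy scores (0-30) for illustration only; the paper
    # reports group means (17.10, 19.40, 21.86, 23.85), not raw data.
    nonspeakers       = [16, 17, 18, 17, 18, 16, 17, 18, 17, 17]
    cp_speakers       = [19, 20, 19, 20, 19, 19, 20, 20, 19, 19]
    foreign_nationals = [21, 22, 22, 21, 23, 22, 21, 23, 22, 22, 21, 22, 23, 21]
    undergraduates    = [24] * 61          # uniform placeholder scores

    # One-way ANOVA across the four groups, df = (4 - 1, 95 - 4) = (3, 91).
    f_stat, p_value = f_oneway(nonspeakers, cp_speakers,
                               foreign_nationals, undergraduates)
    print(f"F(3, 91) = {f_stat:.2f}, p = {p_value:.4f}")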

In answer to Question 2 of Study 1, it was found that the nonspeakers' confusions were similar to those of the control groups. For example, for two emotions, fear and disgust, the most common confusions across all groups were surprise and anger, respectively.


Study 2

Study 2 posed two questions:

  • (1) Can cerebral palsied nonspeakers who use assistive communication devices facially express emotions as accurately as natural speakers?
  • (2) When nonspeakers' expressions are misidentified, what are the most common confusions?

Three groups participated; two were expressers and the third served as judges. Group 1 consisted of six cerebral palsied, nonspeaking adults who used assistive communication devices, recruited from the same independent living program mentioned in Study 1. Group 2 consisted of six college undergraduates who were natural speakers. Group 3, the judges, consisted of 77 undergraduates.

All subjects were asked to portray facially the emotions of happiness, sadness, fear, anger, surprise, and disgust. The portrayals were photographed and shown to the judges, who recorded the emotion they thought was being portrayed.

For each subject a portrayal accuracy score was computed. It was an average score across judges, ranging from 0 through 6, one point for each of the six emotions. Group means and variances were computed: nonspeakers (M=2.26; V=0.2) and speakers (M=4.01; V=2.44). A t-test for independent groups (df=10) showed significance at the .05 level; the two groups differed significantly from one another.
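A comparable sketch for this analysis, again with placeholder scores rather than the study's data, uses scipy's independent-groups t-test:

    from scipy.stats import ttest_ind

    # Placeholder portrayal scores (range 0-6), six subjects per group;
    # the paper reports only M=2.26, V=0.2 and M=4.01, V=2.44.
    nonspeakers = [2.2, 2.5, 1.9, 2.3, 2.4, 2.3]
    speakers    = [4.0, 5.5, 2.5, 4.5, 3.0, 4.6]

    # Independent-groups t-test, df = 6 + 6 - 2 = 10.
    t_stat, p_value = ttest_ind(nonspeakers, speakers)
    print(f"t(10) = {t_stat:.2f}, p = {p_value:.4f}")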

To answer Question 2 of Study 2, confusion matrices were created showing the emotions portrayed by each subject group versus the emotions perceived by the judges. Analysis of these matrices showed the nonspeakers performing extremely well in portraying happiness (83.8%); moderately well in portraying sadness (52.3%) and surprise (45.6%); and poorly on fear, anger, and disgust (each below 23%). The natural speakers portrayed every emotion more accurately, by an average of 29.8%.
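A confusion matrix of this kind can be built directly from (portrayed, perceived) pairs. The sketch below is illustrative only, with hypothetical judgment pairs rather than the study's records.

    from collections import Counter

    EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

    def confusion_matrix(judgments):
        """Tally (portrayed, perceived) pairs into a 6 x 6 count matrix;
        rows are portrayed emotions, columns are judges' perceptions."""
        counts = Counter(judgments)
        return [[counts[(row, col)] for col in EMOTIONS] for row in EMOTIONS]

    # Hypothetical pairs: one correct happiness judgment and one fear
    # portrayal confused with surprise.
    matrix = confusion_matrix([("happiness", "happiness"), ("fear", "surprise")])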


Discussion

Study 1 showed that nonspeakers who use assistive communication devices scored lowest as a group in identifying emotions expressed by the face. One observation made of the nonspeaking group in free interaction was that its members spent much of their time looking at their assistive communication devices and, in doing so, were not observing the faces of their communication partners. Focus and concentration on the device are of course necessary, but focusing almost exclusively on the device and not on the communication partner may impede the development of perceptual skills or dull existing ones.

Therefore one suggestion emerging from this study is that users of assistive communication devices should learn to shift their gaze between their devices and their discourse partners. Another finding from Study 1 is that the nonspeakers' ability to perceive nonverbal emotional expressions differs from that of other groups in degree but not in kind; in other words, the nonspeakers' confusions are, more often than not, similar to those of the control groups. The implication of this finding is that nonspeakers can indeed improve their "face-reading" skills.

Study 2 sought to determine whether nonspeakers who use assistive communication devices can facially express emotions as accurately as a control group, and whether judges' confusions of the nonspeakers' expressions were similar to their confusions of the speakers' expressions. Results showed that the nonspeakers expressed emotions significantly less accurately than did the controls. The types of confusions made by the judges in perceiving the two groups' expressions were in moderate agreement.

Because of neuromuscular problems, many nonspeakers will have difficulty facially expressing emotions with accuracy, and compensatory strategies have to be devised. It is here that the assistive communication device should be recruited for compensatory purposes. Devices can be programmed to generate sounds (both words and non-words) reflecting not only the six emotions listed by Ekman and Friesen but other emotions as well. When the face cannot produce an emotional expression, the auditory output of the communication device must "step in." And just as the face can quickly produce an array of fast-changing expressions or frames, so must the device be programmed accordingly. One user, for example, has programmed into his repertoire of emotional expressions the currently popular "yada, yada, yada ..." to express boredom; this can be quickly changed to an expression of interest with a preprogrammed "OK-now we're getting somewhere." Programming assistive communication devices with the structures of language is very important, but at the same time, the nonverbal side of the communication coin must not be forgotten.
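A minimal sketch of such programming follows. The "boredom" and "interest" phrases come from the user example above; the table structure, the remaining phrases, and the express() helper are assumptions for illustration, with print() standing in for the device's speech synthesizer.

    # Emotion-to-utterance table: a sketch of the compensatory idea, mapping
    # emotion labels to preprogrammed utterances so that the device's speech
    # output can "step in" for the face.
    EMOTION_UTTERANCES = {
        "boredom":   "yada, yada, yada ...",
        "interest":  "OK-now we're getting somewhere.",
        "happiness": "That's wonderful!",       # assumed filler phrases for
        "anger":     "That really upsets me.",  # two of the basic emotions
    }

    def express(emotion):
        """Emit the preprogrammed utterance for a one-key emotional switch."""
        print(EMOTION_UTTERANCES[emotion])      # stand-in for speech output

    express("boredom")     # the user signals boredom ...
    express("interest")    # ... then shifts quickly to interest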


Reference

Ekman, P., & Friesen, W. (1975). UNMASKING THE HUMAN FACE: A GUIDE TO RECOGNIZING EMOTIONS FROM FACIAL EXPRESSIONS. Englewood Cliffs, NJ: Prentice-Hall.