
Web Posted on: February 16, 1998


SEMANTIC COMPACTION IN BOTH STATIC AND DYNAMIC ENVIRONMENTS: A NEW SYNTHESIS

Russell Thomas Cross, B.Sc.(Hons), MRCLST, Prentke Romich Company

Bruce R. Baker, A.M., President, Semantic Compaction Systems

Linda Valot Klotz, M.A., CCC-SLP, Prentke Romich Company

Arlene Luberoff Badman, M.A., CCC-SLP, Prentke Romich Company

In a recent article, Cross, Baker, Klotz, and Badman (1997) outlined the issues involved in combining the Semantic Compaction encoding technique (Baker, 1982) with dynamic display technology. They argued that dynamic and static display technologies have inherent strengths and weaknesses, and that they are based on different underlying assumptions. The challenge was to develop a language representation system that would combine the best features of both.

They summarized by saying that semantic compaction in a dynamic environment would:

  • be guessable, where possible,
  • be learnable, where not guessable,
  • allow pairing of words and images,
  • allow for a small set of icons,
  • be rule-based,
  • have pages access for specific activities, and
  • promote automaticity as much as possible.

The development of the Vanguard(TM) communication aid has provided the opportunity to realize these features. Furthermore, the way in which it was envisioned that Semantic Compaction would operate helped to shape the software of the device.


GUESSABILITY AND LEARNABILITY

One design issue relates to the 'guessability' and 'learnability' of a system. These two notions have been current in the field of Human-Computer Interaction for some time, and have relatively recently been applied to AAC (Demasco, 1994; Cross, Freeman, and Blades, 1996a, 1996b). Specifically, they are used in relation to the concept of usability, defined by Preece (1990) as a '...measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and the attitude of its users towards it.' Jordan, Draper, MacFarlane, and McNulty (1991) suggest three components to usability:

  • 1. Guessability: a measure of the time and effort required to get going with a system,
  • 2. Learnability: the amount of time and effort required to reach a user's peak level of performance with a system,
  • 3. Experienced User Performance: the asymptotic level of a user's performance over time.

Both dynamic and static display devices aim at getting people to the 'Experienced User Performance' level, but the former rely heavily on guessability, the latter on learnability. The vocabulary items that are most guessable are called 'Picture Producers' (Schank, 1972). These words are typically nouns, such as 'cup,' 'shoe,' 'pencil,' 'radio,' and so on. The problem becomes more acute when considering those words that are not so easy to draw, such as 'should,' 'came,' 'myself,' 'and,' and many others. In the sentence 'He dropped his new cup,' the only word that can be represented directly by a picture is 'cup.' Furthermore, in a sentence such as 'I wish I'd been there when he dropped it,' there are no words that lend themselves to being represented by an obvious picture.

Current dynamic screen technologies use a Direct Representation model (cf. Burkhart, 1994; McIntyre, Taylor, and Wilkerson, 1994; Sinteff, 1994; Harrington, 1996; Lawrence, 1996), which is based on the notion that each word can be represented by a discrete picture and aims at providing guessable images.

With the Vanguard device, guessable images are provided for Picture Producers wherever possible. For example, when the icon APPLE is selected, a number of sub-categories are predicted, each using a concrete image, such as a plate of fruit for FRUITS; selecting FRUITS then brings up individual fruits.

With non-Picture Producers, icons are used that are teachable and share some common visual element. Since it is difficult to design a guessable icon to represent a non-Picture Producer, using a teaching metaphor is advantageous. An example is the class of determiners, which includes such common words as 'that' and 'these.' Using a static display and a particular implementation of semantic compaction called Unity(TM) (Badman, Baker, Banajee, Cross, Lehr, Maro, and Zucco, 1995), all determiners are stored as 2-symbol sequences, with the first icon being a picture of a wizard with a wand. This makes the concept teachable as 'pointing words': use the wand to point at 'this,' 'that,' 'those,' and 'the' table, and so on.

On the Vanguard screen, once the picture of the wizard is selected, the icons change to become different manifestations of the wizard. For 'that,' the wizard points at 'that' hat on the floor; for 'those,' the wizard points at two hats. Remember that the aim here is not to be guessable, but learnable; it is the ability to change pictures to become more concrete that is being exploited.
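
To make the mechanism concrete, the following is a minimal sketch (in Python) of how a semantic compaction layer of this kind could be modelled: icon sequences map to stored words, and once the first icon is selected only its more concrete variants are offered. The icon names and the selection of determiners are purely illustrative and are not the actual Vanguard data.

    # Hypothetical icon-sequence table: 2-symbol sequences for determiners,
    # all beginning with the wizard icon (illustrative names only).
    SEQUENCES = {
        ("WIZARD", "WIZARD_POINTS_AT_HAT"): "that",
        ("WIZARD", "WIZARD_POINTS_AT_TWO_HATS"): "those",
        ("WIZARD", "WIZARD_POINTS_NEARBY"): "this",
        ("WIZARD", "WIZARD_POINTS_FAR"): "the",
    }

    def next_icons(selected):
        """Icons that can legally follow the icons selected so far."""
        n = len(selected)
        return sorted({seq[n] for seq in SEQUENCES
                       if len(seq) > n and seq[:n] == tuple(selected)})

    def lookup(selected):
        """Return the stored word if the selected sequence is complete, else None."""
        return SEQUENCES.get(tuple(selected))

    print(next_icons(["WIZARD"]))                           # only wizard variants are offered
    print(lookup(["WIZARD", "WIZARD_POINTS_AT_TWO_HATS"]))  # -> 'those'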


WORDS AND IMAGES

The facility to pair an image with a word is a positive feature of dynamic display technology. This can facilitate literacy learning and also act as feedback for the teacher. Velche (1992) found that when concrete images were shown to people who were 'mentally impaired,' those who had some literacy did better at identification, which suggests that pairing the word and the image is beneficial.

This pairing has been implemented in the Vanguard Unity software, but the word appears only when necessary: in a sequence, no words are visible until the terminal icon is reached.
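
As a rough illustration of this rule (again a sketch with hypothetical icon names, not the shipped Unity data), a key's caption can show only the picture until that key would terminate a sequence, at which point the word is paired with the image:

    # Hypothetical sequences ending in Picture Producer nouns.
    FRUIT_SEQUENCES = {
        ("APPLE", "FRUIT_PLATE", "BANANA"): "banana",
        ("APPLE", "FRUIT_PLATE", "ORANGE"): "orange",
    }

    def caption(selected_so_far, icon):
        """Image name alone, or image plus word when the key is terminal."""
        word = FRUIT_SEQUENCES.get(tuple(selected_so_far) + (icon,))
        return f"{icon} [{word}]" if word else icon

    print(caption(["APPLE"], "FRUIT_PLATE"))            # FRUIT_PLATE (no word yet)
    print(caption(["APPLE", "FRUIT_PLATE"], "BANANA"))  # BANANA [banana]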


SMALL ICON SET

In a system where a separate picture is used for each word in the lexicon, the need to navigate through the system grows as the vocabulary size increases, and this navigation process can become a significant problem. This is one of the drawbacks of a page-based system. In contrast, by using sequences, a small icon set can be used to encode very large vocabularies. For example, the Unity program for the DeltaTalker(TM) static display device uses fewer than 100 icons, yet over 6,000 words are available with no more than three key selections per word.
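
The arithmetic behind this capacity is straightforward. With roughly 100 icons and sequences of one to three selections, the number of distinct codes is 100 + 100^2 + 100^3, which dwarfs a 6,000-word vocabulary, whereas a one-picture-per-word scheme needs a separate key location (and the navigation to reach it) for every word. A short calculation illustrates the point:

    def capacity(n_icons, max_len):
        """Distinct codes available from n_icons using sequences up to max_len selections long."""
        return sum(n_icons ** k for k in range(1, max_len + 1))

    print(capacity(100, 3))   # 1,010,100 possible codes from about 100 icons
    print(capacity(100, 1))   # only 100 codes if each key maps directly to a single word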

Within the Vanguard Unity program, there are 32 'core' keys which use sequencing. The vocabulary accessed by these core keys consists of high-frequency items that can be used across situations. These sequences also remain the same no matter what topic is being discussed, and this can lead to automaticity. The core also generates less frequent noun pictures, thus enhancing the overall vocabulary of the device.


ACTIVITY-BASED PAGES

One feature of the dynamic display is that it allows for the development of single-hit, activity-based pages. Thus, the individual who regularly visits the mall to shop for clothes could have a special page set up that includes words, phrases, and sentences specific to the activity. Storing these items as single activations makes access quick.

The limitation of this approach is that any deviation from the activity can necessitate a change of page. So, if the individual is in the process of buying clothes and the store assistant asks about the weather, changing pages is needed unless appropriate vocabulary is available. If the topic continues to change, so might the pages.

Semantic compaction addresses this in two ways. The first is to have a display with the 32 core keys mentioned above, along with a row of 9 keys that can contain specific activity-based vocabulary. This 'Activity Row,' as it is called, can be changed based on the needs of the situation, but the core remains the same. For example, the Activity Row may contain vocabulary for going to the movies, with words such as 'popcorn,' 'ticket,' 'movie,' and so on. The individual can access 'I want to go to the...' from the core and then add 'movies' from the Activity Row. When the activity changes to being at a restaurant, with words like 'burger,' 'ketchup,' and 'fries,' the core can be used for 'I want some...' and the word 'fries' added. There is thus simultaneous access to high-frequency core words and activity-specific words.
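
The layout described above can be sketched as a fixed core plus a swappable row; the key contents below are illustrative stand-ins rather than the actual Unity vocabulary:

    CORE = ["I", "want", "some", "go", "to", "the"]    # stands in for the 32 sequenced core keys

    ACTIVITY_ROWS = {                                  # 9-key rows in the real layout
        "movies": ["popcorn", "ticket", "movie"],
        "restaurant": ["burger", "ketchup", "fries"],
    }

    def display(activity):
        """Current screen: the core never changes; only the Activity Row is swapped."""
        return {"core": CORE, "activity_row": ACTIVITY_ROWS[activity]}

    print(display("movies"))        # 'I want to go to the' from the core, 'movies' from the row
    print(display("restaurant"))    # core motor patterns stay identical across activities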

The second method is simply to call up a specific page from the core. Then, once the activity is completed, a single key press takes the individual back to the core overlay.
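
A minimal model of this second method (again hypothetical, not the device's internal design) is an overlay that can open a specific page and return to the core with one key press:

    class Overlay:
        def __init__(self):
            self.current = "CORE"

        def open_page(self, page):
            self.current = page       # e.g. a single-hit clothes-shopping page

        def core_key(self):
            self.current = "CORE"     # one key press restores the core overlay

    o = Overlay()
    o.open_page("CLOTHES_SHOPPING")
    o.core_key()
    print(o.current)                  # CORE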


AUTOMATICITY

A possible problem with the dynamic environment relates to the development of automatic motor patterns for generating speech. When a motor pattern is learned to the point that the individual no longer has to think about it, more time can be spent on learning the use of language rather than on the act of selecting keys. On a dynamic display, the same word may be found on different pages and in different locations, which means that the individual using the device will find it difficult to develop a consistent motor pattern to access a word.

The Vanguard software minimizes this by having the high-frequency items accessed through the core 32 icons. Unless the sequences are changed by the individual using the device, the motor patterns required to produce words remain consistent.


SUMMARY

For some time a false dichotomy has been perceived between semantic compaction on the one hand and dynamic displays on the other. The real difference is between an encoding technique that systematically exploits secondary iconicity (semantic compaction) and encoding that uses primary iconicity (pages or levels) (Cross, Jones, and Morris, 1994). The semantic compaction paradigm can be used with dynamic display technology, and the technology in turn can be used to enhance the way in which semantic compaction operates.


ACKNOWLEDGMENTS

Research leading to this publication was supported by Grant 1 R43 DC03523-01 from the National Institute on Deafness and Other Communication Disorders, as part of the Small Business Innovation Research Program.

DeltaTalker(TM) and Vanguard(TM) are communication aids manufactured and distributed by the Prentke Romich Company, 1022 Heyl Road, Wooster, OH 44691


REFERENCES

Badman, A.L., Baker, B.R., Banajee, M., Cross, R.T., Lehr, J.S., Maro, J. and Zucco, M. (1995). Unity: A Minspeak Application Program. Wooster, OH: Prentke Romich.

Baker, B. (1982). Minspeak. Byte, 9, 186-202.

Burkhart, L.J. (1994). Organizing vocabulary on Dynamic Display devices: practical ideas and strategies. Proceedings of the 6th Biennial Conference of the International Society for Augmentative and Alternative Communication, Maastricht, 145-146. Hoensbroek: IRV.

Cross, R.T., Badman, A.L., Baker, B.R., Jones, A.P., Lehr, J.S. and Zucco, M. (1996). Unity/AT: A Minspeak Application Program. Wooster, OH: Prentke Romich.

Cross, R.T., Baker, B.R., Klotz, L.V. and Badman, A.L. (1997). Static and Dynamic Keyboards: Semantic Compaction in Both Worlds. Proceedings of the 18th Annual Southeast Augmentative Communication Conference, 9-17. Birmingham: SEAC Publications.

Cross, R.T., Freeman, M. and Blades, M. (1996a). Use of symbols with technology in augmentative communication: 1. British Journal of Therapy and Rehabilitation, 2, 3, 120-125.

Cross, R.T., Freeman, M. and Blades, M. (1996b). Use of symbols with technology in augmentative communication: 2. British Journal of Therapy and Rehabilitation, 2, 4, 174-178.

Cross, R.T., Jones, A.P. & Morris, D.W.H. (1994). Reply to Woltosz: Re-reading the literature. Communication Matters, 8, 1, 10-15.

Demasco, P. (1994). Human factors considerations in the design of language interfaces in AAC. Assistive Technology, 6, 10-25.

Harrington, N. (1996). Organization of a Dynamic Display System for Language and Learning Literacy. Proceedings of the 7th Biennial Conference of the International Society for Augmentative and Alternative Communication, Vancouver, 535-536. Toronto: ISAAC.

Jordan, P.W., Draper, S.W., MacFarlane, K.K. and McNulty, S. (1991). Guessability, learnability, and experienced user performance. In D. Diaper and H. Hammond (Eds.) People and Computers VI. Cambridge: Cambridge University Press. 237-245.

Lawrence, P.R. (1996). Dynamic Display, Pictographic AAC: Tips, Tricks and Techniques. Proceedings of the 17th Annual Southeast Augmentative Communication Conference, 87-92. Birmingham: SEAC Publications.

McIntyre, M., Taylor, D. and Wilkerson, B. (1994). Use of the Words+ System 2000 Versa Laptop Communication Device. Proceedings of the 15th Annual Southeast Augmentative Communication Conference, 79-85. Birmingham: SEAC Publications.

Preece, J. (1990). A Guide to Usability. Milton Keynes: Open University Press.

Schank, R. (1972). Conceptual Dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552-631.

Sinteff, B. (1994). A Notebook of Ideas for the DynaVox. Proceedings of the 15th Annual Southeast Augmentative Communication Conference, 123-132. Birmingham: SEAC Publications.

Velche, D. (1992). Access to signage information and use of transportation systems by mentally disabled people. In M. Dejeammes and J.P. Medevielle (Eds.) Mobility and Transport for Elderly and Disabled Persons: Proceedings of the 6th International Conference. Institut National de Recherche sur les Transports et leur Sécurité.