
Web Posted on: August 24, 1998


EZ Access Strategies for Cross-Disability Access to Kiosks, Telephones, and VCRs

Gregg C. Vanderheiden
Chris M. Law

Trace R&D Center
University of Wisconsin-Madison


1.0 Background

We stand on the threshold of a revolution in interface design. Advances in technology will soon allow us to design interfaces in much more flexible ways than we ever had in the past. As a result, it will soon be possible to create interfaces that a user can easily adapt to meet their abilities or constraints. In some cases, this will allow individuals who have disabilities to be able to operate devices that they previously could not operate. In other cases, it will allow people who do not have disabilities to operate devices in places where they ordinarily wouldn't be able to -- for example, checking information, reading e-mail, browsing the web while driving the car, or accessing audio information (in visual form) while in meetings, libraries, or environments where it is too noisy to hear, etc.

In addition to the trend toward flexibility, there is another force at play to make technology simpler. As technology permeates further into every aspect of society, including education, employment, community services, and even our homes, it becomes more and more a requirement that people be able to access and use these technologies. While this creates a greater need on the part of people with disabilities to be able to access technologies, it also reflects a similar need by the half or more of the population who do not have disabilities but simply find current technologies difficult or impossible to comprehend. Many have tried to avoid these technologies, while others operate them in a superstitious manner. (Basically, they learn which buttons to press to operate the couple of functions they need on a product. If one day the product doesn't respond as they expect, they go off looking for the family "wizard" to make it work. They don't understand how the product works or how the various controls work with each other to operate the product. They don't want to, nor feel they can, figure it out or learn it, due to its complexity.) This emerging trend toward simplicity is important for all users, including users with cognitive disabilities. It generally has even more applicability for individuals with disabilities (any disability) than for those without, since the interfaces or interface variations provided for people with disabilities are usually more complicated than a product's default interface.

Two further critical elements enabling this interface revolution are the advances being made in electronics and voice technologies. The advances in electronics are allowing us to build intelligence into the simplest of devices, and the dropping cost of memory is allowing us to put additional code into a product with almost no impact on price. (For example, Sun Microsystems recently made reference to a program written in Java as part of a battery saver circuit on an electric shaver.) Advances in voice technology, combined with the advances in electronics, are allowing us to add voice to products, a key factor in creating cross-disability accessible products, especially for individuals with reading problems, low vision, or blindness.

There are other trends in interfaces, too -- the trend toward graphics and animation, the trend toward more interactive interfaces, the trend toward interfaces that let us produce as much as control, etc. Here, however, we would like to focus on the four elements cited above: flexibility, simplicity, computational power, and voice technologies. Together, they are the key to developing a new generation of products that can adapt to the needs and constraints of users in a wide range of situations, whether those constraints arise from the environment of the moment or from variations in personal abilities. In this paper, we will briefly examine each of these factors, look at the requirements that must be met to create truly cross-disability accessible interfaces, and then look at EZ Access, a first-generation cross-disability interface strategy for standard products.

2.0 Four enabling factors

2.1 Flexibility

A key factor in creating cross-disability accessible interfaces is flexibility. Simply put, there is no interface that is accessible to and usable by everyone, unless the user is able to adjust certain parameters. Just looking at individuals who are deaf and individuals who are blind demonstrates the need to have an interface that presents information in different ways. The interfaces on most of the products we have in our environment, however, are fixed. Information is presented in only one fashion, and controls can be operated in only one way. Probably the first major place that we found alternate interfaces was the Apple Macintosh computer. Opening a file, for example, could be accomplished by double-clicking on an icon, selecting an icon and then selecting "Open" from the menu, or tabbing to an icon and typing "command-O." While these interfaces all required vision, it did begin to show that it was possible to provide multiple strategies for accomplishing the same task -- different strategies that might appeal to different individuals, or to the same individual in different situations. It also illustrates a key enabler for this type of input flexibility -- an interface that is microprocessor controlled.

Later, the Macintosh added the ability to operate the keyboard with one finger or a headstick, as well as the ability to control the mouse from the keyboard and the ability to enlarge the image on the screen. These additional enhancements to the flexibility of the interface also illustrate a second important point -- that microprocessor-mediated interfaces can have their flexibility enhanced with little or no increase in manufacturing cost. As flexibility is added, the number of users who can successfully operate the product increases. It was also noted that users who did not have disabilities found this additional interface flexibility useful -- a finding that has been observed in almost every situation where interfaces have been made more flexible. It should be noted that microprocessor mediation is not necessary for an interface to be flexible. However, it tremendously enhances the ability to create flexible interfaces that provide wide variation and accommodation for little or no cost.

2.2 Simplicity

The biggest problem in creating interfaces for the full range of users is in figuring out how to create a single interface that works as well for power users as for novices; that works as well for individuals who are properly seated and have all of their cognitive, sensory, and motor abilities at optimum as for individuals who are mobile, distracted, or have some of their senses or abilities limited due to disability, circumstance, or because they are engaged in some other activity (such as driving a car). Again, flexibility is a key element in enabling a single product to have an interface that can function at different levels. With processor-mediated interfaces, it's possible to present different interfaces for different individuals. In a separate presentation at this conference, a universal remote console communication protocol is discussed that shows how a remote control for home appliances can be designed that shows a very simple and easy-to-use interface for all users most of the time. When more advanced features are required, they can be called up. Calling up the advanced features presents a much more complex display, but one that allows much more complex and exact control of the product. This "layering" technique, where the controls most needed and most used by most people are presented as a default, but where more complicated or advanced controls can be easily called up, is an example of a very powerful simplifying technique that does not reduce the functionality of the product. Nesting of controls, where there are only a few choices provided but additional choices are offered sequentially in a nested hierarchical fashion, can also be a powerful "layering" technique if done properly.
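As a rough illustration, the layering technique can be sketched in a few lines of code. All names and control labels here are illustrative assumptions, not part of any actual product:

```python
# Sketch of the "layering" technique: a product exposes only its most-used
# controls by default, while advanced controls can be called up on demand.
# Layering defers functionality; it never removes it.

class LayeredPanel:
    def __init__(self):
        # Default layer: the few controls most people need most of the time.
        self.basic = ["play", "stop", "volume up", "volume down"]
        # Advanced layer: hidden by default, available when called up.
        self.advanced = ["program timer", "input select", "tracking"]
        self.show_advanced = False

    def toggle_advanced(self):
        self.show_advanced = not self.show_advanced

    def visible_controls(self):
        if self.show_advanced:
            return self.basic + self.advanced
        return self.basic

panel = LayeredPanel()
print(panel.visible_controls())   # only the simple default layer
panel.toggle_advanced()
print(panel.visible_controls())   # full functionality, on request
```

The essential design point is that the advanced layer adds to, rather than replaces, the default layer, so calling it up never takes away the controls the user already knows.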

2.3 Electronic advances

Advances in micro-electronics are enabling us to build programmability and flexibility into ever smaller and ever less expensive products. The cost to add a microprocessor, and thus programmable control, to a product is dropping precipitously. At one time, a microcomputer had one processor in it. Now, there is a separate microprocessor in every piece of the computer: a separate microprocessor for the keyboard, a separate microprocessor for the mouse, a separate microprocessor for the USB connector, etc. There are automobile designs that have microprocessors just to run the lights on the back of the car, because it's easier and cheaper to give them their own microprocessor than to run separate wires from the front of the car to the back in order to operate them. There are microprocessors in coffeepots and toasters. They're even putting microprocessors in electric shavers to control the battery charging circuits.
Where there's a processor, there's a program. Where there's a program controlling the behavior of a product, there's the opportunity to introduce flexibility into the program and therefore into the behavior of the interface for the product. With the cost and size of both processors and memory dropping precipitously (by as much as a factor of 10 every 4-5 years) it will soon be difficult to find any type of electric product that is not microprocessor-controlled.

2.4 Voice technologies

Riding close on the heels of the continued rapid increase in processing power has been the development of effective voice technologies. We now have usable continuous speech recognition, and processors that are small and inexpensive enough to allow it to be built into ever-smaller products. There is currently a chip sold for $3.00 U.S. in die form ($4.00 U.S. packaged) that can act as the controller for a product and do a limited amount of speech recognition and digital speech reconstruction. With this chip, it is possible to make simple products that are voice controllable and have speech feedback. It is already possible to have voice capability added to ATMs, kiosks, and other large information appliances for less than 1/2 percent of the cost of the device. It will soon be possible to have voice output as part of appliances at almost any price, with voice input following speech output down the product size/price curve by less than five years. The advance of these voice technologies, and the ability to incorporate them into standard products as alternate forms of display and control, will be a key factor in providing more cross-disability accessibility.

3.0 The challenge of providing cross-disability access

3.1 Variance in disability

A key problem in considering access to products by people with disabilities is the tendency to think of "people with disabilities" as a homogeneous group, or as a small number of homogeneous groups (e.g., "the blind," "the deaf," etc.). In fact, a major problem in looking at cross-disability access is the tremendous diversity that people with disabilities represent. There are people whose visual abilities range from 20/20 through multiple types of low vision to total blindness; people whose hearing ranges from standard or normal ability through different types of hearing loss to total deafness; people whose physical ability ranges from average through missing limbs, weakness, interference with control, and paresis, to complete paralysis; and people whose cognitive and language capabilities range from normal through different types of perceptual, reading, cognitive processing, sequencing, and memory problems to severe dysfunction. There are also individuals who are born with disabilities or acquire them when young and can adapt quickly, through to people who acquire disabilities later in life and who may only be able to handle very obvious and straightforward access techniques.

3.2 Variance in the technologies

We also need to create access strategies that work across the full range of technologies with which we are being confronted. This includes everything from automated banking machines and electronic kiosks and building directories, through computers, workstations, electronic appliances, business phones, office automation, and home entertainment products, and down to pocket and even wearable information systems. In some cases, these electronic products are conveniences, and people can get along without them. Increasingly, however, they are an integral part of the education or business information systems, and are not optional.

3.3 Creating cross-disability accessible products

Creating products that can be used across all (or as many as commercially practical) disabilities is going to require that we develop strategies that are seamless extensions of the standard or default interface, and that are as useful as possible to the broadest range of users. The strategies should be as simple as possible, add little or no cost, and add little or no complexity to the product for users who do not require the strategies.


4.0 Proof of concept: The EZ Access approach

EZ Access is a flexible but standard set of interface strategies for allowing people to access and use electronic devices even when they are operating under constrained conditions. The constrained conditions might result from their having a disability or from environmental factors.

For example, not being able to see a cellular phone might arise from having your eyes occupied while driving a car, or from being blind. Not being able to hear a multimedia information kiosk might be from using it in a mall during Christmas shopping, or from being deaf. Not being able to touch individual keys on a security keypad might be from having gloves on in winter, or from having a disability which affected your hand movement.
The EZ Access approach provides the user with a means to adjust the way things work, so that the person can use the senses and abilities they do have (or have available) to augment the ones they don't have. So, if they can't see a visual display, they make it audible; and if they can't hear an audible display, they make it visual; if they can't touch individual keys, they change the way the keys are activated.

4.1. Populations addressed

The EZ Access techniques allow direct access by individuals with a wide range of abilities/disabilities, including individuals:

  • Who have low vision;
  • Who are blind;
  • Who are hard of hearing;
  • Who are deaf;
  • Who have poor reach;
  • Who have poor motor control;
  • Who have difficulty reading;
  • Who have difficulty remembering;


and (via the infrared link to assistive technologies) to individuals:

  • Who are paralyzed;
  • Who are deaf-blind.

4.2 Technologies addressed

The first application of the EZ Access techniques was on touchscreen kiosks. This application has been commercially transferred and is now available in over 30 kiosks, including two Jobs kiosks in the Mall of America, used by the Mall as well as by Knight-Ridder newspapers.
Since that time, the techniques have been extended and generalized so that they can be applied across a much broader range of products. Designs and/or prototypes have now been developed for incorporating the techniques in:

  • Touchscreen kiosks;
  • "8-button" screen-based ATMs;
  • Cellular phones;
  • Business phones.

Designs are being worked on for:

  • Stereos;
  • Videocassette recorders (VCRs);
  • Microwave ovens.

4.3 Implementing EZ Access

EZ Access is not necessarily complex or expensive to implement, but does provide a standard way for people with disabilities to use all manner of electronic devices, from microwave ovens, to cellular phones, to interactive multimedia kiosks, to coffee vending machines.

The process of implementing EZ Access will vary depending on the product and the company that makes it. In many cases, adding the techniques will entail adding functionality such as speech output or audio system interoperability. Changes may have to be made to adjust (or add to) existing software code in the application, and some hardware may need to be added if it does not already exist on the device. However, in virtually all cases, the standard means of operation for a device (for users who do not have disabilities) does not change.

4.4. How users interact with EZ Access devices

Users can adjust the way the device operates (using EZ Access) via menus, shortcuts, or by having their preferred means of interaction stored on a personal card (for devices that accept cards, such as ATMs).
The EZ Access extensions provide a small number of powerful, flexible interface enhancements that together can provide great flexibility in how a user interacts with the product. The basic EZ Access components include:

  • The ability to have the name of any button that is touched (on a touchscreen or a physical button) spoken aloud, as well as the ability to have any displayed text spoken.
  • The ability to have all of the functions and displayed information presented as a list that can be navigated by sliding one's finger up and down the list, rotating a wheel, or operating up and down arrow keys.
  • A mechanism that allows buttons to be highlighted but not activated when pressed, and only activated when a second "confirm" button is operated (or after a delay). Highlighting can take the form of a visual highlight or the auditory announcement or large text display of the name of the button.
  • A built-in "Quick Help" feature that is activated if a button is pressed and held in various modes.
  • The ability to have any auditory information presented on the visual display.
  • The ability to have individual items in the above-mentioned list highlighted (visually or auditorially) in sequence, allowing an individual with severe movement limitations to operate a system from a single button in a scanning or step-scanning mode.
  • An infrared port that can be used (in addition to other product functions) to allow a user to operate the device and/or display information on a separate assistive technology (such as an alternate keyboard or dynamic braille display).

These techniques can be used in combination to address the needs of a wide range of users.
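The core select-and-confirm behavior listed above can be sketched as a small state machine. This is a hypothetical illustration, not the actual EZ Access implementation:

```python
# Sketch of "select and confirm": touching a button highlights and announces
# it but does NOT activate it; a separate confirm action activates whatever
# was last selected. The announce callback stands in for speech output or a
# large-print display.

class SelectAndConfirm:
    def __init__(self, announce):
        self.announce = announce
        self.selected = None

    def touch(self, button):
        # Safe exploration: announce without activating.
        self.selected = button
        self.announce(button)

    def confirm(self):
        # Activate the last-selected button (if any) and clear the selection.
        button, self.selected = self.selected, None
        return button

spoken = []
ui = SelectAndConfirm(spoken.append)
ui.touch("Line 1: Busy")      # announced only
ui.touch("Line 3: Empty")     # user keeps exploring; nothing activated yet
line = ui.confirm()           # now "Line 3: Empty" is activated
```

Because touching alone never activates anything, the same mechanism supports both safe exploration by users who cannot see the controls and error-free operation by users who cannot point accurately.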

4.5 Application of the EZ Access approach on three example devices

The following scenarios show how these techniques can be used, alone and in combination, to address different disabilities across appliances. The techniques, with their variations, are discussed for a public touchscreen kiosk, a PBX business telephone, and a home VCR.

NOTE: The following explanations are given as if the individuals were experienced users. EZ Access-based devices also have a tutorial module built in to allow new users to discover and master the techniques. However, the tutorial is not described here.

4.5.1. For users who have difficulty seeing

On the kiosk, the user with low vision might see blurred shapes for the buttons or icons. They may be able to read, with difficulty, some of the larger fonts that have been used for screen titles, but not smaller text. To use the touchscreen, these users would activate the Talking Select and Confirm mode. With this turned on, any words, phrases, on-screen buttons, etc., that the user touches are highlighted and read aloud, but not activated. To activate any of the buttons on screen, the user would first touch the button, hear its name read, and then press the green diamond-shaped button below the screen to activate the button. In this fashion, the user can quickly activate items that they are familiar with, or can feel around on the screen to explore or read any of the text or buttons until they have obtained the information they are seeking or have located the button or function they are interested in activating.

The mode on the telephone works in a similar fashion. Here, the user activates the Talking Select and Confirm mode. They are then able to press any of the 30 or 40 buttons on the telephone to explore and locate the button they're interested in. For example, when looking for an open line to make a phone call, they might start at the top, and hear "Line 1: Busy," "Line 2: On hold," "Line 3: Empty." They could then press the green diamond-shaped button to activate this last button to get a dial tone and to proceed with dialing. A variation of this mode could also be available that would allow the individual to press and release the buttons quickly to have the buttons announced, and then to press the desired button down and hold it for a half-second to have it activated.

When using the videocassette recorder (VCR), the same technique can be used to allow the individual to easily step through the various buttons and dials on the VCR or its remote, to locate and then activate the desired controls.
[Note that this function can also be very helpful to many users who may have left their reading glasses at home or in another room and cannot read the printing on a control if it is small, or if the device is in a dim corner of the room.]

4.5.2 For users who have difficulty reading or who cannot read

The same techniques that work for people who have low vision can be used by individuals who have difficulty reading or who cannot read. In addition, there is a useful mode for people with mild reading (or visual) problems: the Quick Read mode. In this mode, the user would generally operate the device in the standard fashion. If they encountered a particular word or button label they had trouble reading, they would simply hold down the green diamond-shaped button and touch the word or button in question to have it read aloud. When they released the green diamond-shaped button, the device would return to its standard operation, and they would be able to operate everything by simply touching the buttons in the traditional fashion. [This Quick Read mode would also probably be the technique most used by individuals without disabilities who have difficulty reading the small print on a VCR in a dark corner. This mode does, however, require use of two hands simultaneously.]
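The Quick Read behavior amounts to a simple modifier on normal touch handling. A minimal sketch, with illustrative callback names standing in for the device's speech and activation functions:

```python
# Sketch of the Quick Read mode: while the read key (the green diamond
# button) is held, touching any word or button reads it aloud instead of
# activating it; releasing the key restores normal one-touch operation.

class QuickRead:
    def __init__(self, speak, activate):
        self.speak = speak          # callback standing in for speech output
        self.activate = activate    # callback standing in for normal action
        self.read_key_held = False

    def touch(self, label):
        if self.read_key_held:
            self.speak(label)       # read aloud; nothing is activated
        else:
            self.activate(label)    # standard operation

spoken, activated = [], []
ui = QuickRead(spoken.append, activated.append)
ui.touch("Rewind")              # normal touch: activates the control
ui.read_key_held = True
ui.touch("Rewind")              # read key held: only spoken, not activated
ui.read_key_held = False
```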

4.5.3 Users who cannot see

To use the telephone or the VCR, the individual who was blind could use the same Talking Select and Confirm techniques described above, for people who have difficulty seeing. Because these devices have tactile buttons, people who are blind can explore the face, locate buttons, and use the speech output to identify their function. They may use the techniques consistently, or, as they become familiar with the device, only invoke the feature when they forget where a particular control is located or when they want to use a control that they have not memorized the location of.

This approach, however, will not work on touchscreen-based systems, or on devices with membrane keyboards, where it is not possible to feel the location of individual keys or buttons. For these devices, the Auditory List mode would be used. In the Auditory List mode, all of the items available at any time (for example, all of the buttons, text fields, and labels on the touchscreen) are provided in a list that the individual can move up and down through and have the items read aloud. On the touchscreen, the list would be positioned along the left edge, so that the individual could place their finger on the screen up against the cowl on the left edge of the screen. Running their finger up and down the screen along the edge would cause the items in the list to be read aloud to them. All of the text as well as the buttons on the screen are included in the list, so they would have access to all of the information and functions on the screen. To activate a particular item, the user would move their finger until the desired item was read, and then press the diamond-shaped button below the screen. Using this technique, the individual can "read" all of the information presented and operate all of the controls of the kiosk without ever removing their finger from the left edge of the screen (except to press the diamond-shaped confirmation button).

The same list mode can also be used on devices that do not have a touchscreen. In this case, once the mode is invoked, any two keys on the device (typically keys that are associated with up/down functions of some type) can be used to move through the list, and the diamond shaped button (or its equivalent) used to confirm selections.
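The list mode described above can be sketched as a cursor over the items currently on screen: moving the cursor announces each item, and a separate confirm action activates the current one. Names and sample items are illustrative assumptions:

```python
# Sketch of the Auditory List mode: everything currently on the screen
# (text and buttons) is placed in one list. Moving the cursor announces
# each item in turn; a separate confirm action activates the current item.

class AuditoryList:
    def __init__(self, items, announce):
        self.items = items
        self.announce = announce
        self.index = 0

    def move(self, step):
        # step is +1 (down) or -1 (up); the cursor stops at the list ends.
        self.index = max(0, min(len(self.items) - 1, self.index + step))
        self.announce(self.items[self.index])

    def confirm(self):
        return self.items[self.index]

spoken = []
ui = AuditoryList(["Welcome", "Jobs", "Directory", "Help"], spoken.append)
ui.move(+1)           # announces "Jobs"
ui.move(+1)           # announces "Directory"
choice = ui.confirm() # activates "Directory"
```

The same structure serves a finger sliding along the screen edge, a pair of up/down keys, or a single switch stepping through the list automatically, since only the source of the `move` events differs.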

[This mode can also be used in other situations, where users who do not have disabilities are operating a device while their eyes are busy, such as operating advanced features on their cell phone while driving.]

4.5.4. Users who cannot hear

Many modern kiosks use short video/movie sequences or voice-overs to present information or provide help and guidance. Users who cannot hear (due to disability or a noisy environment) can turn on the ShowSounds/captions mode. With the mode turned on, any information presented in an auditory fashion is also presented visually on the screen. This may be done by simply integrating the information into the layout of the screen, or by providing some type of caption or appropriate visual event.
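In essence, ShowSounds routes every auditory presentation through a point where it can be mirrored visually. A minimal sketch, with names that are assumptions rather than any real API:

```python
# Sketch of a ShowSounds/captions mode: when enabled, any information that
# would be presented audibly is also recorded as a visual caption. In a
# real kiosk the caption would be drawn into the screen layout.

class ShowSounds:
    def __init__(self, speak):
        self.speak = speak      # callback standing in for the audio channel
        self.enabled = False
        self.captions = []      # stand-in for the visual caption display

    def present(self, text):
        self.speak(text)                 # normal auditory presentation
        if self.enabled:
            self.captions.append(text)   # mirrored visually as a caption

audio = []
kiosk = ShowSounds(audio.append)
kiosk.present("Touch the screen to begin")   # audio only
kiosk.enabled = True
kiosk.present("Touch the screen to begin")   # audio plus caption
```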

VCRs don't typically have auditory cues as part of their operation, but those that do would provide a suitable visual event to accompany each cue. This feature would also activate any closed captioning display functionality in the VCR or VCR/television combination.

Today, we do not have a good mechanism for allowing people who are completely deaf to use telephones without connecting an assistive technology. However, we are rapidly approaching the time when automatic speech recognition will be viable. Activating the ShowSounds feature on a telephone could cause such a speech recognition and display feature of either the business phone system or, more likely, the telephone company system, to be activated, causing speech to be displayed visually on the business phone display. Today, activating the ShowSounds feature might automatically connect the person with a speak-through relay service to carry out the same function, if the deaf individual is able to speak.

4.5.5 Users who have difficulty hearing

The ShowSounds/caption mode described above can also be used to assist people who have difficulty hearing. EZ Access devices would also have a wide range of volume control and a headphone jack. If a telephone handset is provided, it would be t-coil compatible, and have an option for muting the microphone if it is active.

4.5.6 Users who have cognitive impairments

The auditory feedback features discussed above for people with low vision or reading difficulties can also be used to assist individuals with mild or moderate cognitive impairments, by having text spoken aloud to them instead of their having to read it. The EZ Access package also has a Help feature that is built into the Quick Read and Talking Select and Confirm functions. When the individual touches a button, its name is read aloud. Then, after a one-second pause, the function of the button, along with quick instructions, is read. For example, pressing the "Conf" button on a phone might result in the system saying "Conference button. [Pause] Press, then dial a number. Repeat until up to six parties are connected, then press again to join them." [Such a feature would also be useful to many other users who have difficulty remembering the functions of all of the buttons on their products, or finding their instruction books.]
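The press-and-hold help timing can be sketched as follows. The one-second delay comes from the description above; the button name and help string are invented examples:

```python
# Sketch of the Quick Help timing: pressing a button announces its name at
# once; continuing to hold it past a short pause reads its function and
# brief instructions as well.

HELP_DELAY = 1.0  # seconds of continued holding before help is read

def announce_button(name, help_text, hold_seconds, speak):
    speak(name)                 # the name is read immediately on touch
    if hold_seconds >= HELP_DELAY:
        speak(help_text)        # extended help after the pause

spoken = []
announce_button("Conference button",
                "Press, then dial a number to add a party.",
                hold_seconds=1.5, speak=spoken.append)
```

A quick tap thus behaves exactly as it always has, while users who need more information get it simply by not letting go.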

4.5.7 Users who have difficulty reaching and touching accurately

The Select and Confirm mode discussed above can be used with or without voice support, highlighting, text display, or a combination of these. It allows individuals who have difficulty accurately touching the screen to successfully operate touchscreens or other devices without error. They simply try to touch the desired button. If they hit erroneous buttons, the buttons are announced, but nothing else happens. When they hit the desired button, they press the green diamond-shaped button to confirm and activate it. Even buttons that are quite close together can be operated without error in this mode. Also, individuals who know that bumping the wrong button will not cause an error often exhibit better control and make fewer errors to begin with. The technique where the individual presses and holds a key down to activate it could also be used. This technique, called SlowKeys, is already a standard part of all major computer operating systems today. These techniques work in a similar manner for both touchscreen devices and button-based devices such as the phone and VCR.
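A SlowKeys-style acceptance delay can be sketched as a simple filter over key events. The threshold value here is an assumption for illustration; real implementations make it user-adjustable:

```python
# Sketch of a SlowKeys acceptance delay: a key press only registers if the
# key is held longer than a threshold, so brief accidental bumps of nearby
# keys are ignored.

ACCEPTANCE_DELAY = 0.5  # seconds a key must be held before it registers

def accepted_keys(presses, delay=ACCEPTANCE_DELAY):
    """presses: list of (key, seconds_held) pairs observed by the device."""
    return [key for key, held in presses if held >= delay]

# A bumped "4" (held 0.1 s) is ignored; the intended "5" (held 0.8 s) counts.
print(accepted_keys([("4", 0.1), ("5", 0.8)]))
```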

4.5.8 Users with complete quadriplegia

A person with complete quadriplegia would typically not be able to access any of the three devices (kiosk, VCR, or telephone) directly. However, if they used a sip-and-puff or other special assistive device for control of their wheelchair or laptop computer, they could control other devices (including the three types discussed here) from their laptop or wheelchair control system. A prototype protocol, currently called "Universal Remote Console Communication" (URCC) (Vanderheiden et al., presented elsewhere in these proceedings), has been developed that would allow a person using assistive technology to access all of the information and control all of the functionality of the device over an infrared (or RF) link.

4.5.9 Users who have low vision and are hard of hearing

Many individuals have combinations of disabilities. The EZ Access features are designed to work together, facilitating access in these instances. For example, if an individual has low vision, but is hard of hearing, the voice enunciation technique may not work. If they have sufficient vision to read large print, the ShowSounds feature can be used in conjunction with the Touch and Confirm (or Quick Read) feature to allow them to directly access the device. When these modes are turned on and the individual touches a button on screen, its name is read aloud, and presented in large print across the top of the screen (or the bottom, if they are pointing to a button at the top of the screen).

If the person has sufficient visual and hearing impairments to be functionally deaf-blind, the same remote access URCC protocol discussed above could be used. If the individual had a personal braille display such as the Braille Lite, they would be able to use it to access and control all of the information on the kiosk, for example. This would function in essentially the same fashion as the list mode discussed above for individuals who are blind. In this case, however, the information presented auditorially would be presented to the individual on the braille display. They would be able to access all of the information and operate all of the functions just using their braille device.


4.6 Use of speech input

As speech input becomes a viable and inexpensive control option, it can be integrated directly into the above package. It would be of particular value to individuals with physical disabilities but with sufficiently clear speech, such as people with quadriplegia. It could also be useful in conjunction with speech output for individuals who are blind. Finally, speech input combined with natural language processing opens the possibility of direct command interfaces, making products much easier to access. For example, it could allow a person to say "Please show me the information on how to apply for a hunting license" rather than having to wander through nested menus of government services to locate the screen for applying for a hunting license. Speech input techniques, however, should always be an option, not the sole interface, since there are many people for whom speech may not be a viable interface technique. There are also many environments where noise may prevent its being functional.

5.0 Conclusion

Although the EZ Access package of interfaces is not the Holy Grail of cross-disability strategies, it can serve as a proof-of-concept for this type of approach. It does demonstrate that it is possible to create an interface that:

  • Is directly usable by individuals with a very wide range of physical, sensory, and cognitive disabilities;
  • Adds less than 1% to the cost of the product;
  • Does not change the interface "look and feel" for individuals who do not have disabilities;
  • Is useful to individuals who do not have disabilities but are in adverse circumstances (e.g., a very noisy mall, forgot their reading glasses);
  • Has been successfully commercially deployed.


Hopefully, the existence and successful deployment of these technologies can act both as a proof-of-concept and a proof of commercial feasibility for the creation of flexible cross-disability accessible "universally designed" products and can lead to the further refinement and advancement of techniques in this area.

As we move in these directions, we also need to remember that the world is changing. Things that seem like fantasy today will be reality tomorrow. Who would have thought, a couple of hundred years ago, that we could have any of the technologies we have today? And the rate of change is increasing sharply. Some day, we will all have a personal "communication and control system" that we carry about with us (or in us) and use to interface with all of the technologies around us. The same underlying architectures needed to create cross-disability accessible systems today will be required to allow people to interface with the wide variety of systems we are apt to meet in the future. As this occurs, the implications of cross-disability access for standard product design will also become more apparent.

6.0 Acknowledgements

This is a publication of the Trace R&D Center, which is funded by the National Institute on Disability and Rehabilitation Research of the U.S. Department of Education under Grants #H133E30012 and #H133E5002. The opinions contained in this publication are those of the grantee and do not necessarily reflect those of the Department of Education.

More information on EZ Access is available at:
http://tracecenter.org/world


