
Accessible Interface Design: Adaptive Multimedia Information System (AMIS)

Japanese Society for Rehabilitation of Persons with Disabilities (JSRPD), Tokyo, Japan

Marisa DeMeglio
Accessible_marisa@dinf.ne.jp
Markku T. Hakkinen
hakkinen@dinf.ne.jp
Hiroshi Kawamura
hkawa@attglobal.net

Abstract

This paper provides an overview of the concepts and design of the Adaptive Multimedia Information System (AMIS). AMIS is a software application for the playback of natively accessible content (DAISY and DAISY/NISO books) and of accessibly authored HTML pages. DAISY playback includes support for SMIL elements and the Navigation Control Center. AMIS retrofits HTML documents with a navigation overlay that enables accessible presentation through synthesized speech, large print, and Braille renderings. AMIS employs a flexible XML-based architecture that allows the standard interface to be adapted to the needs of both users and assistive technologies. The mutable application interface can adapt to user preference, content delivery mode, and assistive device capabilities. Users may customize font size, color contrast, spacing, volume, playback speed, and the presence or absence of interface regions. Core interface features are derived from both the DAISY playback model and the W3C User Agent Accessibility Guidelines. AMIS XML documents describe application components (controls, content layout regions, dialogs, content renderings) aurally, textually, and visually. The default application interface allows for visual and aural output, and for touchscreen, mouse, and keyboard input. Through the AMIS Plug-in SDK, developers can write interfaces to a variety of assistive devices, which gain access to the same application functionality and content as the native interface. Localization is also easily achieved in AMIS, because every aspect of the interface is customizable: labels on buttons and regions are all imported directly from the system's interface markup documents. The plug-in architecture allows for the addition of new input methods, accommodating techniques such as IME (the Input Method Editor generally used for East Asian languages) and on-screen keyboards. The adaptable interface framework and the content rendering capabilities, coupled with the use of open standards, enable the customization or addition of features to meet a broad range of user requirements.

1. Introduction

Information technology has brought about significant advances that can enhance interactions between persons with disabilities and society. Digital content, such as that found on the world wide web, can now quite easily be made accessible. Blind users may use synthesized speech and a Braille display rather than rely on volunteer readers and large Braille books. Deaf users may talk in real time, without the assistance of a sign language interpreter, using text-based chat. People with dyslexia can use reading software that highlights words in reading order. Those with cognitive disabilities can adjust the content playback speed to suit their preferences and needs.

1.1 Accessible Interfaces at Present

However, when we examine not only the content but also the interface (the means through which a user obtains the content), we see that basic accessibility may serve one disability group well, while failing to meet the needs of others. Certainly there has been great progress made in developing interfaces for the visually impaired, but even these are inadequate at times. Interfaces for users with cognitive disabilities are few in number, and are not yet incorporated into mainstream interface design. When considering persons with multiple disabilities, we see tremendous room for progress.

Often, operating systems will include a suite of tools aimed at improving ease of use for disabled users. While these tools are adequate for getting around a desktop, the user experience they provide does not match that of a non-disabled user. Adaptive devices also provide a layer of accessibility, but here flexibility is limited to the user requirements predicted by the device manufacturers. In reality, user requirements far exceed those predictions.

1.2 Universal Design

Universal interface design aims to solve this problem by being "all things to all disabilities"; this is what an interface must achieve before it can claim to be completely accessible. There are two major aspects to universal design: the features of a device's native interface, and the means of communicating with other devices' interfaces.

There have been a variety of approaches taken to address the challenges of universal design. In all cases, the motivation is to create an interface that will result in an application being usable by the broadest set of users and abilities. EZ Access [6] defined a cross-disability hardware and software interface that could be incorporated in information kiosks, automated teller machines, and hand-held devices such as mobile phones. Stephanidis & Savidis [7] describe their work on the Avanti project, a set of tools and interface features that allows a web browser to be adapted to a range of user abilities. The concepts developed by Stephanidis [8] are applied to the broader goal of the Unified User Interface Platform, an architectural framework for designing accessible interfaces. In addition to these research efforts, operating system platform vendors have incorporated accessibility-enabling Application Program Interfaces (APIs) (e.g., Microsoft Active Accessibility and Sun's Java Accessibility), which permit assistive technology vendors to provide unique interface solutions. Standards for the physical interface between systems and assistive devices are also in development, such as INCITS V2 [4].

In the examples above, we see hard-coded solutions (EZ Access), automatic adaptation (Avanti), and low-level APIs that require significant software engineering expertise to implement. Markup languages such as XML can offer flexibility in designing interfaces, and open the design and implementation process to individuals with specific expertise in a given disability (rather than a software engineering background). The success of HTML as a design tool for the masses is a case in point. Projects such as the User Interface Markup Language [1] further demonstrate that XML can play a role in standardizing the definition of application user interfaces.

1.3 Combined Approach: DAISY and AMIS

User requirements are so varied that they preclude us from making design assumptions, beyond the requirement that we provide access to interfaces in an open, multimodal, and highly navigable fashion. This is the aim of the Adaptive Multimedia Information System (AMIS), which is being developed as open source DAISY (Digital Accessible Information SYstem) playback software. The interface of AMIS can be fully customized by a non-technical user. Communication with external devices is made possible by the AMIS plug-in architecture. Like DAISY content, the interface of AMIS is defined by multiple synchronized media types (text, audio, graphics). By creating an interface that operates in many modes, or combinations of modes, and combining that interface with DAISY's native accessibility, we can hope to reach as many ability levels as possible.

2. Accessible Content Needs an Accessible Interface

2.1 Accessible Content: DAISY

DAISY is the newest advancement in accessible content. It is the new generation of digital talking books, but its capabilities are not limited to digital versions of print books. A DAISY publication may be text, text and audio, or audio only. Publications can be further enhanced by graphics, video, and other Synchronized Multimedia Integration Language (SMIL, pronounced "smile") elements. The content is built by synchronizing modalities (for example, text and audio) using SMIL files, and then defining navigation points. The navigation points are extracted and listed in a file called the Navigation Control Center (NCC or NCX). This is a very powerful feature of DAISY publications because it allows quick access to chapters, pages, footnotes, references, or other publication features [3]. We can see it as a flexible, accessible interface to multimodal content.
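To make this concrete, in the DAISY 2.02 format the NCC is itself an XHTML document whose headings and page markers point into the SMIL files. A simplified sketch of an NCC fragment might look like the following (file names and identifiers are illustrative):

<!-- Simplified NCC sketch; ids and file names are illustrative -->
<h1 id="ch1"><a href="chapter1.smil#txt001">Chapter 1: Introduction</a></h1>
<span class="page-normal" id="pg5"><a href="chapter1.smil#txt012">5</a></span>
<h2 id="sec1_1"><a href="chapter1.smil#txt015">1.1 Background</a></h2>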

Libraries for the blind around the world are producing DAISY publications. International training seminars on creating DAISY content have generated great interest in countries such as Japan, India, Singapore, and Thailand.

2.2 Accessible Interface: AMIS

The concepts of the AMIS interface come from the DAISY playback model and the World Wide Web Consortium's User Agent Accessibility Guidelines. The underlying design is based on a very simple set of elements: frames, controls, and multimedia rendering viewports. Each interface element is externally defined in multiple modes: textual, aural, and graphical. Frames contain controls and viewports. Frames have titles, represented in textual, aural, and graphical ways, and controls are labeled in the same way. Controls do not contain other elements, but they have an associated command. Creating your own customized version of AMIS is simple: you define what frames look like, what multimedia viewports they contain, and what user controls exist, as sketched below. You may wish to "tile" the background of a frame with your own picture, or use a default color scheme. You can add sound clips to the controls, as an audio label or as confirmation of a user's action.
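As a rough illustration, an interface markup fragment defining a frame with one viewport and one control might look like the following (the element and attribute names here are our assumptions based on the frame/control/viewport model described above; the region id sysHTMLContent is taken from the command examples later in this paper):

<!-- Hypothetical interface markup sketch; element names are assumed -->
<frame id="mainFrame">
  <viewport id="sysHTMLContent" type="html"/>
  <control id="btnPlay" commandRef="Play"/>
</frame>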

DAISY content is presented with many rendering options, and the user may adjust, among other things, font size, color contrast, line spacing, volume, and layout of viewports. This meets guidelines 4 and 5 of the W3C User Agent Accessibility Guidelines (UAAG). They are, respectively: "Ensure that the user can select preferred styles (colors, size of rendered text, synthesized speech characteristics, etc.) from choices offered by the user agent..." and "Ensure that the user can control the behavior of viewports and other user interface controls..." [5]. Rendering via an external assistive device is possible by making use of the AMIS plug-in architecture. This satisfies the UAAG recommendation that applications "Implement interoperable interfaces to communicate with other software (e.g., assistive technologies, the operating environment, plug-ins, etc.)" [5].

By making the entire application (interface, rendering, and interaction with other devices) customizable, and providing several ready-made customizations, we can enable both disability specialists and novices to find what works best for their target user group.

3. Flexibility

Next we will look at the architecture of AMIS. We will see how DAISY implementation requirements, use of XML and other configurable files, and the plug-in API define the system's flexibility.

3.1 Implementing DAISY

DAISY playback software typically has very few dependencies on external components. The recommendation for developers is to make use of an XML parser, media playback, HTML/XML rendering, and, optionally, a SMIL engine [2].

The AMIS application is built from an XML parser, an HTML parser, and multimedia rendering components. SMIL playback is handled by AMIS itself; no external engine is used. User interface elements are purposely limited to frames, short text displays, and control buttons. The use of these common interface elements enables AMIS implementations to exist in many operating environments.

3.2 Use of eXtensible Markup Language and Configurable Files

AMIS relies upon a set of external files to specify user preferences about the interface style and available options. The AMIS interface is generated from a series of XML documents, called the interface markup files. These files give the placement and order of screen elements exposed to the user, such as button-style controls and multimedia viewports. The properties of these elements, such as label and associated command, are also defined in the interface markup files.

The screen element labeling system consists of <xLabel> XML elements. An <xLabel> has an associated type ('title', 'select', 'toggle', 'hover') and three representations of a label: text, audio, and graphic. Depending on the parent element (frame, control, prompt, or other descriptive context), there could be several xLabels in series, each with a different type attribute. For example, a control might have a 'normal' type for when it is idle; a 'select' type for when it is chosen; and a 'toggle' type for when its function changes (mute to unmute, perhaps). Visual rendering consists of text, text and graphic, or graphic only; it is adjusted by turning the 'show' attribute of the text field on or off, and by supplying or omitting the graphic's filename. Aural rendering occurs as the user navigates the screen elements using a control, keyboard key, or external device.
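A sketch of how a single control's labels might be marked up follows (the <text>, <audio>, and <graphic> child elements and their attributes are our illustrative assumptions based on the description above; the actual AMIS schema may differ):

<!-- Hypothetical xLabel markup for a mute button; child element names are assumed -->
<control id="btnMute" commandRef="ToggleMute">
  <xLabel type="normal">
    <text show="true">Mute</text>
    <audio src="mute.wav"/>
    <graphic src="mute.png"/>
  </xLabel>
  <xLabel type="toggle">
    <text show="true">Unmute</text>
    <audio src="unmute.wav"/>
    <graphic src="unmute.png"/>
  </xLabel>
</control>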

Controls with associated commands must specify the name of a command to use, as defined in the user command subset document. AMIS has a set of system commands, many of which are customizable. For example, consider the system command "applyStyle", which applies a stylesheet to an HTML region. This command must be customized to specify which stylesheet and which HTML region. The corresponding line in the user command subset file might look like:

<implementCMD id="Style1_main " paramValue="sysHTMLContent:oversize"refID="applyStyle"/>

where paramValue gives the required parameters, in the form of REGION:STYLESHEET_ID. When a control wishes to implement the command, it simply references Style1_main. Other ways of customizing commands are to use a system variable or to use a dialog to get a parameter value. Examples of system variables are CURRENT_FRAME, CURRENT_CONTROL, and CURRENT_REGION (used in place of a specific region such as sysHTMLContent). An example of a dialog interaction command is:

<implementCMD id="Exit" paramValue="AMIS_DIALOG:ExitApp"refID="exitApplication"/>

where the system command is 'exitApplication', the command to be referenced by controls is 'Exit', and the parameter (a yes or no value) is the value returned by the 'ExitApp' dialog. The prefix 'AMIS_DIALOG' is required to tell the system to launch a dialog window.
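A system variable can be used in the same way. For example, a command that applies the oversize stylesheet to whichever region is currently active might be defined as follows (the id 'StyleCurrent' is our illustrative choice; the syntax mirrors the examples above):

<implementCMD id="StyleCurrent" paramValue="CURRENT_REGION:oversize" refID="applyStyle"/>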

Another XML-based feature is the HTML rendering configuration file. HTML rendering is called for when the content is to be read using text-to-speech (when no prerecorded audio is available). This configuration file specifies the list of tag types to be rendered, the associated audio effect, the descriptive phrase to speak before the tag is read, and where to get the tag's contents. For example, if the HTML tag looks like this:

<h1>Introduction</h1>

The rendering could be:

{audible sound} 'ding.wav'

{text-to-speech} "Heading one"

{text-to-speech} "Introduction"

Two other configuration options are important for AMIS: cascading style sheets (CSS) and internationalization. Because DAISY text is in HTML format, we are able to offer the user a great deal of customization in appearance by using CSS.

By editing profile settings, a user can specify CSS documents as well as sets, or groups, of CSS documents. For example, suppose someone wishes to have control over the contrast settings of the HTML display. They can define a group of CSS documents called "contrast" and then customize the "cycleNextCSS" command to use that CSS group by giving it a parameter of "CURRENT_REGION:contrast" (or whatever their target region is). When the user chooses the control associated with this command, they will see the first (or next) style of the "contrast" CSS group. CSS gives AMIS great flexibility in font size, color, and spacing, all of which are important for various types of print impairment.
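In the user command subset file, this customization might be expressed as follows (the id 'CycleContrast' is our illustrative choice; the syntax follows the implementCMD examples above):

<implementCMD id="CycleContrast" paramValue="CURRENT_REGION:contrast" refID="cycleNextCSS"/>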

Internationalization is easily achieved in AMIS. Simply by editing the interface markup documents and changing the text and font associated with interface elements, the keyboard commands, and any pre-recorded audio or locale-specific graphics, AMIS can be customized to any language supported by the operating environment. The text element of a label can reference an external file, so it is possible to change all the interface text by swapping one text string file for another. It is also important that all HTML and XML documents read by AMIS specify the appropriate character encoding.
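For an interface markup file localized to Japanese, for instance, the document might begin with a standard XML declaration such as the following (the choice of Shift_JIS is illustrative; any encoding supported by the parser will do):

<?xml version="1.0" encoding="Shift_JIS"?>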

3.3 Input and Output Devices

AMIS natively supports keyboard, mouse, and touchscreen as input devices, and audio and monitor display as output devices. While its behavior is programmatically similar to that of a mouse, a touchscreen is a much more intuitive interface for those who do not routinely use a computer. Keyboard support is strong, with an XML document defining the mapping from keys to commands. The keyboard interface has access to every command available to an on-screen control. These three native interfaces help us to reach a variety of users, but the user experience is not limited to these three devices.
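A key-to-command mapping entry might look like the following sketch (the element and attribute names are our assumptions; the commandRef values reference ids from the user command subset file, such as the 'Exit' and 'CycleContrast' commands shown earlier):

<!-- Hypothetical keyboard mapping; element names are assumed -->
<keymap>
  <key code="F2" commandRef="CycleContrast"/>
  <key code="ESC" commandRef="Exit"/>
</keymap>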

AMIS is designed to communicate with input and output plug-ins by using a simple API. An engineer may wish to interface his human interface device (HID; e.g., a gamepad) to AMIS. He simply writes the appropriate code to interface his hardware with the computer, and incorporates this code into his AMIS input plug-in. When there is HID activity, the engineer's code finds the command he has associated with that particular button or switch, and sends the command to the AMIS system by using the specified event messaging method. All commands are available for use in the plug-in API.

An output plug-in is constructed in a similar fashion: an engineer interfaces her new refreshable Braille display invention to the computer. Then she writes an output plug-in so AMIS can communicate with the Braille display. Whenever there is new interface or content data, AMIS notifies the output plug-in and sends the data in XML format. The format for interface data is the same as defined in the interface XML documents: frames, controls, and descriptive xLabels for each.
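As an illustration, a notification to an output plug-in might carry a fragment such as the following (the outer envelope element is our assumption; the frame/control/xLabel structure is that of the interface markup documents described above):

<!-- Hypothetical plug-in notification; the outer element name is assumed -->
<notify type="interface">
  <frame id="mainFrame">
    <xLabel type="title"><text show="true">Main</text></xLabel>
    <control id="btnPlay" commandRef="Play">
      <xLabel type="normal"><text show="true">Play</text></xLabel>
    </control>
  </frame>
</notify>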

4. Future of AMIS

The first step in the future of AMIS will be to launch an open source project; information about this will be available at http://www.amisproject.org/. It is our hope to get input from users, engineers, and interface design experts, so that AMIS becomes a more powerful and easier-to-use system. The license will be the GNU General Public License or a similar license, and the project will be managed by the software development team at JSRPD. The target programming language is Java, and releases are expected to be platform-independent.

Another area of collaboration will be the advancement of the plug-in architecture. It is very important that we develop a robust text input plug-in system for use with such technologies as the Input Method Editor (IME, common in East Asian languages). For this we would like to work with engineers in those regions to develop the best strategies for meeting the needs of users in those communities.

The next important step in the AMIS project is the creation of the Interface Generator application. Such a tool is very important for the fast creation of interfaces. With a drag-and-drop or hierarchical approach, we can provide users with an easy way to "skin" their copy of AMIS. We would like to provide an online library of AMIS skins, free for use, so that people may access them and contribute their own creations.

The ultimate goal of the AMIS project is to provide users of all ability levels with DAISY playback software that is built from the ideas and contributions of users worldwide.

References

  1. Ali, M.F., & Abrams, M.: Simplifying Construction of Multi-Platform User Interfaces Using UIML. UIML Europe 2001 Conference (2001).
  2. Hakkinen, M., Gylling, M., & DeMeglio, M.: Advancing DAISY Technology: What AT Vendors Need To Know About DAISY Implementation. CSUN 2002, California State University, Northridge. (2002)
  3. Hakkinen, M.T., & Kerscher, G.: Applying a Navigation Layer to Digital Talking Books: SMIL, XML and NCX. Multimedia on the Web Workshop, WWW9, Amsterdam (2000).
  4. INCITS: INCITS V2 - Standards Development Committee on Information Technology Access Interfaces. At http://www.ncits.org/tc_home/v2.htm.
  5. Jacobs, I., Gunderson, J., & Hansen, E. (eds.): User Agent Accessibility Guidelines 1.0. W3C Candidate Recommendation. http://www.w3.org/TR/UAAG10/
  6. Law, C.M., & Vanderheiden, G.C.: The Development of a Simple, Low-Cost Set of Universal Access Features for Electronic Devices. ACM Conference on Universal Usability (CUU 2000), Washington, DC (2000).
  7. Stephanidis, C., & Savidis, A.: Universal Access in the Information Society: Methods, Tools and Interaction Technologies. Universal Access in the Information Society (2001) 1 (1), 40-55.
  8. Stephanidis, C.: Designing for all in the Information Society: Challenges towards universal access in the information age. ERCIM ICST Research Report. (1999).