
SENSOR-BASED SHARED CONTROL OF A REHABILITATION MANIPULATION SYSTEM

Wendy A. McEachern, John L. Dallaway
Department of Engineering, University of Cambridge, UK

ABSTRACT

This paper describes the implementation and evaluation of sensor-based shared control algorithms in a rehabilitation manipulation system. Quantitative and qualitative results of user trials with the system are presented, and the implications of these results are discussed.

BACKGROUND

The goal of this research was to investigate strategies which would enable a person with a severe physical disability to operate a general-purpose robotic manipulation aid in an undefined environment. Currently available robotic manipulation aids operate in direct or supervisory control modes. In a supervisory control mode, the environment must either be well defined or it must be possible to obtain information about the environment autonomously. In a direct control mode, the operator is responsible for all knowledge and perception of the environment, as well as all motion of the manipulator. Manipulation aids designed for use in undefined environments generally operate under direct control. Such aids have largely been rejected by potential operators because of the difficulty of executing a complex motion under direct control [1].

In this research, sensor-based shared control strategies which provide a level of autonomy intermediate between direct and supervisory control have been identified. This approach is desirable both because it allows task execution in undefined environments by taking advantage of the cognitive and perceptive skills of the operator, and because it gives the operator the sense that the manipulator can be used as an extension of himself or herself. A behaviour-based architecture was used to implement the shared control strategy [2]. Sensor-based algorithms modify the manipulator trajectory according to the current state of the environment and the task being executed. The algorithms are highly configurable to the hardware system in use and to the tasks which will be undertaken.

The integration of the shared control algorithms with the Cambridge University Robot Language (CURL) [3], and their subsequent evaluation, is discussed in the following sections. CURL is robot-independent and operates in the Windows environment. Kinematics and communication sub-systems for specific robotic devices are implemented in a CURL Device Driver (CDD), which is linked to CURL as a Dynamic Link Library (DLL). When using the CURL direct control mode, the operator has continuous control of a single axis of the manipulation system using any Windows-compatible input device. Because the goal of a motion is not explicitly known, the shared control strategy requires the operator to define interactively whether a transport or a grasping motion is being undertaken. This area has much room for development, particularly with regard to which axes are controlled and how the choices are presented in CURL.
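As an illustration of this architecture, a CDD can be pictured as a small set of kinematics and communication routines exported from a DLL and called by CURL. The sketch below was written for this description only; the function and type names are hypothetical and do not reproduce the actual CURL interface.

    /* Hypothetical sketch of a CURL Device Driver (CDD) interface.
     * All identifiers are illustrative; the real CURL API is not
     * reproduced here.                                             */
    #include <windows.h>

    typedef struct {
        double joint[6];              /* joint-space position       */
    } CddJointVector;

    typedef struct {
        double x, y, z;               /* Cartesian position         */
        double pitch, yaw, roll;      /* orientation                */
    } CddPose;

    /* Exported by the device-specific DLL and linked to CURL at run time. */
    __declspec(dllexport) BOOL CddInitialise(void);
    __declspec(dllexport) BOOL CddForwardKinematics(const CddJointVector *q,
                                                    CddPose *pose);
    __declspec(dllexport) BOOL CddInverseKinematics(const CddPose *pose,
                                                    CddJointVector *q);
    __declspec(dllexport) BOOL CddSetAxisVelocity(int axis, double speed);
    __declspec(dllexport) void CddShutdown(void);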

IMPLEMENTATION OF SHARED CONTROL

The shared control algorithms are contained in a device-independent module called the trajectory modification function (TMF). Extensive initialisation of parameters defining the number, orientation and significance of the sensors mounted on the robot is necessary before the TMF can be used with a specific hardware system. Once configured, the module interacts with the primary trajectory generator to determine how the manipulator trajectory should be modified. For the purpose of integrating the shared control algorithms with CURL, the TMF has been implemented in the form of a DLL.

Figure 1 depicts the way in which the shared control algorithms are integrated with the standard CURL direct control mode. The operator chooses the movement type (transport or grasping) and the movement axis. The TMF is then initialised with the appropriate task-oriented parameter set. Robot-specific kinematics and coordinate transformations to the TMF sensor co-ordinate frame are calculated in the CDD. The desired movement trajectory is then modified by the TMF using the sensor-based shared control algorithms, and control is returned to the CDD.

Figure 1. Integration of the TMF with the CURL direct control mode

The modified trajectory is transformed to the desired robot co-ordinate frame, and the robot velocity is updated. The operator-specified speed is then updated in CURL, and this cycle continues until the operator chooses to stop the motion by discontinuing the input. Motion can subsequently be continued on the same axis, or a new axis and/or movement type can be selected.

The hardware set-up used for the system implementation consisted of an RTX robot with an IMP8 transputer-based control board, eight infrared sensors placed on the standard RTX gripper to provide object proximity information, and a 386DX PC to run the robot control software. An analogue joystick was used as a pointing device for computer interfacing purposes and to control the velocity of the robot along a single axis, as specified in the CURL direct control window.

In the CURL direct control window, nine possible movement axes were provided: Cartesian motion on the world frame x, y and z axes; Cartesian motion on the tool frame x, y and z axes; pitch; gripper width; and yaw. Using the analogue joystick, the operator specified the speed of the manipulator along the chosen movement axis. In the shared control mode, the primary trajectory specified by the operator using the CURL direct control modes was modified by the sensor-based trajectory modification algorithms. In addition to the direct control modes, a number of CURL procedures were provided which caused the robot gripper to point from a home position to the approximate location of objects in the environment.
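The control cycle of Figure 1 can be summarised in code. The sketch below is a simplified illustration written for this description; the TMF, CDD and helper routines named here are hypothetical stand-ins for the actual implementations.

    /* Illustrative sketch of the shared control cycle of Figure 1.
     * All identifiers are hypothetical stand-ins for the TMF and CDD
     * routines described in the text.                                */
    #define NUM_SENSORS 8             /* infrared sensors on the RTX gripper */

    typedef enum {                    /* the nine direct control axes        */
        AXIS_WORLD_X, AXIS_WORLD_Y, AXIS_WORLD_Z,
        AXIS_TOOL_X,  AXIS_TOOL_Y,  AXIS_TOOL_Z,
        AXIS_PITCH,   AXIS_GRIPPER_WIDTH, AXIS_YAW
    } MovementAxis;

    typedef enum { MOVE_TRANSPORT, MOVE_GRASPING } MovementType;

    typedef struct {
        double orientation[NUM_SENSORS][3];   /* sensor axes, tool frame   */
        double significance[NUM_SENSORS];     /* task-dependent weighting  */
    } TmfConfig;

    /* TMF: initialised once per movement with a task-oriented parameter
     * set, then called every cycle to modify the desired trajectory.    */
    void TmfInitialise(const TmfConfig *cfg, MovementType type);
    void TmfModifyTrajectory(const double proximity[NUM_SENSORS],
                             const double desired_vel[6],
                             double modified_vel[6]);

    /* CDD: robot-specific sensing, kinematics and velocity output. */
    void CddReadInfraredSensors(double proximity[NUM_SENSORS]);
    void CddAxisSpeedToSensorFrame(MovementAxis axis, double speed,
                                   double desired_vel[6]);
    void CddSensorFrameToRobotFrame(const double modified_vel[6],
                                    double robot_vel[6]);
    void CddUpdateRobotVelocity(const double robot_vel[6]);

    /* One pass of the cycle: repeated until the operator stops the input. */
    void ControlCycle(MovementAxis axis, double operator_speed)
    {
        double proximity[NUM_SENSORS];
        double desired_vel[6], modified_vel[6], robot_vel[6];

        CddReadInfraredSensors(proximity);
        CddAxisSpeedToSensorFrame(axis, operator_speed, desired_vel);
        TmfModifyTrajectory(proximity, desired_vel, modified_vel);
        CddSensorFrameToRobotFrame(modified_vel, robot_vel);
        CddUpdateRobotVelocity(robot_vel);
    }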

EVALUATION

For the purpose of evaluating the shared control algorithms, ten able-bodied people were asked to perform a simple task once with the trajectory modification algorithms (shared control), and once without (direct control). The order in which the shared and direct control modes were used was varied in order to minimise the effects of learning in the evaluation results. An evaluation task which was relevant to the daily lives of the operators was desired in order to assess the performance of the system in a realistic environment. In addition, it was felt that the subjects would be better able to plan the actions of the manipulator when performing a task which was familiar to them. According to surveys of user task priorities [4], one task which operators would consider performing with a robotic manipulation aid is the preparation of a hot drink. Thus, the task of making a cup of tea was chosen for these evaluations. Due to practical considerations, real water and teabags were not used in the task; a number of wooden blocks were used as substitute objects.

RESULTS

CURL generates a log file which allows each control action of the operator to be recorded. In these evaluations, the time at which each procedure was invoked, and the times at which movement along each of the direct control axes was initiated and terminated, were recorded. This information was used to calculate: the total time taken to complete the task; the total number of modes required to complete the task (the sum of the number of manipulator-level control modes selected in the CURL direct control window and the number of procedures used); the total time spent in manipulator-level control modes; the number of manipulator-level control modes used; and the average time spent per manipulator-level control mode. Manipulator-level control modes refer to those modes in which the operator generated the primary trajectory of the manipulator using the joystick.

The average results for the ten subjects are shown in Figure 2. The results of these trials indicate that the performance of this task was improved through use of the sensor-based trajectory modification algorithms. On average, the time taken to complete the task was reduced, as were the total number of modes used and the number of manipulator-level control modes used. The average time spent per manipulator-level control mode was increased, indicating that fewer fine motions were used, and the total time spent in manipulator-level control modes was decreased.

Figure 2. Results of the evaluation of the trajectory modification algorithms.
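As an illustration of how these measures can be derived from the log, the sketch below assumes a simplified event record (a time stamp plus an event type); the actual CURL log format is not reproduced here.

    /* Illustrative computation of the evaluation measures from a
     * CURL-style log.  The record layout is assumed, not taken
     * from CURL itself.                                           */
    #include <stdio.h>

    typedef enum { EV_PROCEDURE, EV_MODE_START, EV_MODE_STOP } EventType;

    typedef struct {
        double    time;   /* seconds from start of task */
        EventType type;
    } LogEvent;

    void ComputeMeasures(const LogEvent *log, int n)
    {
        double task_time   = (n > 0) ? log[n - 1].time - log[0].time : 0.0;
        double manip_time  = 0.0;   /* total time in manipulator-level modes */
        int    manip_modes = 0;     /* manipulator-level modes selected      */
        int    procedures  = 0;     /* pointing procedures invoked           */
        double start       = 0.0;

        for (int i = 0; i < n; i++) {
            switch (log[i].type) {
            case EV_PROCEDURE:  procedures++;                       break;
            case EV_MODE_START: manip_modes++; start = log[i].time; break;
            case EV_MODE_STOP:  manip_time += log[i].time - start;  break;
            }
        }

        printf("Task time:                 %.1f s\n", task_time);
        printf("Total modes:               %d\n",     manip_modes + procedures);
        printf("Manipulator modes:         %d\n",     manip_modes);
        printf("Time in manipulator modes: %.1f s\n", manip_time);
        if (manip_modes > 0)
            printf("Time per manipulator mode: %.1f s\n",
                   manip_time / manip_modes);
    }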

DISCUSSION

Some differences in task performance and the strategies adopted to complete the task are expected for operators with disabilities. In particular, operators with disabilities may not be capable of fine control motions, and mode changes may require more effort and time. Therefore, in these evaluations, the use of fewer fine control actions, as measured by the average amount of time spent per manipulator mode, and the use of fewer manipulator modes to complete a task are significant. In addition, the use of fewer total modes is important, but reflects the quality of the interface as well as that of the trajectory modification algorithms. In the evaluation task, the shared control algorithms were perceived by the operators to be most useful in aligning objects in the robot gripper prior to grasping. The transport portions of the task (e.g. moving the water to the cup) were not significantly altered by the use of the shared control algorithms because the environment used for the trials was static, and very little object avoidance was necessary.

REFERENCES

[1] Van der Loos HFM, (1994), VA/Stanford Rehabilitation Robotics Research and Development Program: Lessons Learned in the Application of Robotics Technology to the Field of Rehabilitation, IEEE Trans. Rehabilitation Engineering, 3 (1), 46-55.

[2] McEachern WA, Jackson, RD, (1995), A Sensor-based Manipulation Strategy for Applications in Rehabilitation Robotics, RESNA '95, 484-486.

[3] Dallaway JL, Mahoney RM, Jackson RD, Gosine RG, (1993), An Interactive Robot Control Environment for Rehabilitation Applications, Robotica, 11, 541-551.

[4] Stanger CA, Anglin C, Harwin WS, Romilly DP, (1994), Devices for Assisting Manipulation: A Summary of User Task Priorities, IEEE Trans. Rehabilitation Engineering, 2 (4), 256-265.

ACKNOWLEDGEMENTS

Much of this work was performed under the supervision of the late Robin Jackson.

Wendy McEachern
Department of Engineering
University of Cambridge
Trumpington Street
Cambridge, UK, CB2 1PZ
wam@eng.cam.ac.uk
