
Toward a Multimodal Environment for Learning Robot Manipulation by People with Severe Disabilities

Mounir Mokhtari [1], Agnes Roby-Brami [2]

Abstract

The aim of this project is to develop a method for analyzing the motor capabilities of disabled people. We are working on a 3D method for measuring motor capabilities when acting on different man/machine interfaces, based on a spatial tracking system (Polhemus Fastrak). We have developed a software program that records the 3D movement and displays the trajectory of each sensor on-line in 3D; an off-line analysis is also provided. This paper describes a method under development which consists of using Polhemus sensors while manipulating a robot arm and displaying the trajectory of the working point on-line during the movement. 3D audioguided movement of the end effector when reaching a target in space is also provided. The ultimate aim of this project is to improve the visual and audio feedback when applying robotics technology to assist people with severe disabilities in the task of learning to operate a robot arm.

Keywords: Multimodal Learning Environment, Audioguided Movement, Motor Capabilities Analysis.

Introduction

Technological assistance for people with reduced motor abilities is available in four main areas: mobility, through powered wheelchairs; home environment control systems; manipulation, where many robotic systems have appeared over the last decade; and alternative communication. The problem is that a great number of different items of electronic equipment, made by different companies, are offered to an individual, and each one comes with its own specific input device. The user with severe motor disabilities is therefore confronted with multiple man/machine interfaces. Some integrated systems giving access to the areas described above have been developed [2]. But the main problem is the lack of a method to analyze the needs of each individual in terms of technological assistance and to provide general guidelines that the different companies could take into account in the development of future products.

Manipulation by Motor Command Analysis

We are using movement recording methods to analyze the way a disabled individual acts on the input device of a technological assistance system. An expert user is capable of performing a programmed movement to reach the target [6]; we can also analyze the errors that occur when the movement is performed by a novice user. This type of method should make it possible to analyze the origin of the problems faced by disabled users of direct manipulation interfaces. Consequently, it is necessary to compare the command gesture performed by the user on the input device with the displacement of the robot end effector.

Hardware Configuration

The measurement tool is a spatial tracking system (STS) developed by Polhemus (Fastrak), based on up to four electromagnetic sensors. It measures the position (X, Y, Z) and the orientation (yaw, pitch and roll) of each sensor relative to a transmitter, which defines the fixed Cartesian coordinate system, at a maximum sampling frequency of 120 Hz. A high-speed serial link connects the computer to the STS. The software is developed on a personal computer using Borland C++ under Windows. A first version was developed under DOS with Turbo C; we are now rewriting the software for Windows to improve the graphical interface and make it more user-friendly for inexperienced users. The sounds used to audioguide movements are generated by a standard SoundBlaster audio card.
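For illustration, the following C++ sketch (C++ being the language used for the software) shows how one sample per sensor might be represented and parsed from an ASCII record received over the serial link. The record layout, field order and function names are assumptions for the sake of the example, not the actual Fastrak output format.

    #include <cstdio>

    // One STS sample: position (cm) and orientation (degrees) of one sensor
    // relative to the transmitter frame.
    struct StsSample {
        int    station;                    // sensor number (1..4)
        double x, y, z;                    // position in the transmitter frame
        double azimuth, elevation, roll;   // Euler angles given by the STS
    };

    // Parse one ASCII record line into a sample; the field layout here is an
    // assumption for illustration only.
    bool parseRecord(const char* line, StsSample& s) {
        return std::sscanf(line, "%d %lf %lf %lf %lf %lf %lf",
                           &s.station, &s.x, &s.y, &s.z,
                           &s.azimuth, &s.elevation, &s.roll) == 7;
    }

    int main() {
        StsSample s;
        if (parseRecord("1  12.3  -4.5  30.1  15.0  -8.2  90.0", s))
            std::printf("sensor %d at (%.1f, %.1f, %.1f)\n", s.station, s.x, s.y, s.z);
        return 0;
    }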

Virtual Camera Implementation

The objective is to represent on the screen the 3D position of the STS sensors, or of the objects on which the sensors are fixed, for example the human hand and the end effector. The solution adopted is to create a virtual camera which converts the 3D position of a point into its projection on a screen [4].

The direct consequence of this method is that the trajectory of the sensor can be visualized from different viewing angles by adjusting the position and the orientation of the virtual camera relative to the fixed Cartesian coordinate system of the transmitter. The geometrical model of the camera is defined by a set of parameters: the intrinsic parameters, which define the internal model, and the extrinsic parameters, which define the external model. The camera projects a 3D point of the workspace onto a 2D image, in other words it performs the transformation from metric coordinates to image coordinates (i.e. pixels). The internal model is defined as follows.

Let P(Xc, Yc, Zc) be the position, at a given time, of the sensor in the camera coordinate system Rc. The coordinate system Ri is attached to the image plane and Rs to the projection screen. The transformation from Rc to Rs is defined as follows:

\[
\begin{pmatrix} Z_c U \\ Z_c V \\ Z_c \end{pmatrix}
=
\begin{pmatrix} -f\,e_u & 0 & g_u & 0 \\ 0 & -f\,e_v & g_v & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix}
\]

f: focal length of the camera

eu and ev: scale factors in pixels/m

gu and gv: translation of the origin along the U and V axes
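As a concrete illustration of this internal model, the C++ sketch below applies the projection above to a point expressed in the camera frame Rc; the structure, function name and numerical values are ours, chosen for illustration only.

    #include <cstdio>

    struct Pixel { double u, v; };

    // Intrinsic (internal) camera parameters, as defined above.
    struct Intrinsics {
        double f;        // focal length (m)
        double eu, ev;   // scale factors (pixels/m)
        double gu, gv;   // translation of the origin along U and V (pixels)
    };

    // Project a point (Xc, Yc, Zc), expressed in the camera frame Rc, onto the
    // image: U = gu - f*eu*Xc/Zc, V = gv - f*ev*Yc/Zc (from the matrix above).
    Pixel projectToPixel(const Intrinsics& k, double Xc, double Yc, double Zc) {
        Pixel p;
        p.u = k.gu - k.f * k.eu * Xc / Zc;
        p.v = k.gv - k.f * k.ev * Yc / Zc;
        return p;
    }

    int main() {
        Intrinsics k = {0.05, 20000.0, 20000.0, 320.0, 240.0};  // illustrative values
        Pixel p = projectToPixel(k, 0.10, 0.05, 1.0);
        std::printf("u = %.1f  v = %.1f\n", p.u, p.v);
        return 0;
    }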

Before applying the internal model to display the different positions on the screen, it is necessary to perform a coordinate transformation from the fixed coordinate system of the transmitter (Ro) to the camera coordinate system (Rc).

\[
\begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix}
=
\begin{pmatrix}
R_{11} & R_{12} & R_{13} & O_x \\
R_{21} & R_{22} & R_{23} & O_y \\
R_{31} & R_{32} & R_{33} & O_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{pmatrix}
\]

where Rij, defined below, are the elements of the rotation matrix obtained by composing the rotations about each axis: Rot(3,3) = Rot(z, ψ) · Rot(y, θ) · Rot(x, φ), where ψ, θ and φ are the Euler angles (azimuth, elevation and roll) given by the STS.

R11 = cos(ψ)·cos(θ)

R12 = sin(ψ)·cos(θ)

R13 = -sin(θ)

R21 = cos(ψ)·sin(θ)·sin(φ) - sin(ψ)·cos(φ)

R22 = sin(ψ)·sin(θ)·sin(φ) + cos(ψ)·cos(φ)

R23 = cos(θ)·sin(φ)

R31 = cos(ψ)·sin(θ)·cos(φ) + sin(ψ)·sin(φ)

R32 = sin(ψ)·sin(θ)·cos(φ) - cos(ψ)·sin(φ)

R33 = cos(θ)·cos(φ)

The position of the camera relative to the fixed Cartesian coordinate system is defined by the vector (lx, ly, lz). The translation vector (Ox, Oy, Oz), which corresponds to the position of the origin O of Ro expressed in Rc, is defined by:

\[
\begin{pmatrix} O_x \\ O_y \\ O_z \end{pmatrix}
=
\begin{pmatrix}
R_{11} & R_{12} & R_{13} \\
R_{21} & R_{22} & R_{23} \\
R_{31} & R_{32} & R_{33}
\end{pmatrix}
\begin{pmatrix} l_x \\ l_y \\ l_z \end{pmatrix}
\]

lx, ly, lz, ψ, θ and φ are the external parameters of the camera.
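The external model can be sketched in C++ as follows: the rotation matrix is built from the three Euler angles returned by the STS using the Rij expressions above, and a transmitter-frame point is expressed in the camera frame as a standard rigid-body change of frame. Function names and numerical values are illustrative assumptions, not part of the original software.

    #include <cmath>
    #include <cstdio>

    // Build the 3x3 rotation matrix from the Euler angles (psi = azimuth,
    // theta = elevation, phi = roll), using the Rij expressions above.
    void buildRotation(double psi, double theta, double phi, double R[3][3]) {
        double cps = std::cos(psi),   sps = std::sin(psi);
        double cth = std::cos(theta), sth = std::sin(theta);
        double cph = std::cos(phi),   sph = std::sin(phi);
        R[0][0] = cps * cth;                   R[0][1] = sps * cth;                   R[0][2] = -sth;
        R[1][0] = cps * sth * sph - sps * cph; R[1][1] = sps * sth * sph + cps * cph; R[1][2] = cth * sph;
        R[2][0] = cps * sth * cph + sps * sph; R[2][1] = sps * sth * cph - cps * sph; R[2][2] = cth * cph;
    }

    // Express a point Po, given in the transmitter frame Ro, in the camera frame Rc,
    // where l = (lx, ly, lz) is the camera position in Ro (rigid-body change of frame).
    void toCameraFrame(double R[3][3], double l[3], double Po[3], double Pc[3]) {
        double d[3] = {Po[0] - l[0], Po[1] - l[1], Po[2] - l[2]};
        for (int i = 0; i < 3; ++i)
            Pc[i] = R[i][0] * d[0] + R[i][1] * d[1] + R[i][2] * d[2];
    }

    int main() {
        double R[3][3], l[3] = {0.5, 0.0, 1.0}, Po[3] = {0.2, 0.1, 0.3}, Pc[3];
        buildRotation(0.1, -0.2, 0.05, R);   // illustrative camera angles (radians)
        toCameraFrame(R, l, Po, Pc);
        std::printf("Pc = (%.3f, %.3f, %.3f)\n", Pc[0], Pc[1], Pc[2]);
        // (Xc, Yc, Zc) = Pc is then fed to the internal-model projection above.
        return 0;
    }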

Audioguiding method using STS

The purpose of audioguiding is to help the learning of complex motor commands in the field of assistive robotic devices. One sensor was fixed on the back of the hand and another one represented the target. A third sensor was placed on the forehead to record the movement of the head during the experiment. We tested two methods on blindfolded able-bodied subjects [5]: in the first, the sound frequency varied as a function of the distance between the hand and the target; in the second, we added a variation of the amplitude of the sound as a function of the discrepancy angle between the actual movement direction and the target direction. The movement of the three sensors is displayed on-line on the screen. To obtain a linear variation of the sound, we have to use a non-linear frequency function. The position of the target relative to the hand defines a distance, and the frequency is given by an exponential function of this distance.

The two parameters of this exponential function are determined by the frequency bandwidth, which was limited to 30-1000 Hz, and by the range of distance to the target, up to 1.5 m. A radius of 5 cm around the target was defined as the successful reaching area.
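The exact expression used in the experiments is not reproduced here; the sketch below shows one possible exponential distance-to-frequency mapping satisfying the stated constraints (30-1000 Hz over 1.5 m), assuming the pitch rises as the hand approaches the target. Constants and names are illustrative.

    #include <cmath>
    #include <cstdio>

    // One possible exponential distance-to-frequency mapping: 1000 Hz at the target,
    // decaying to 30 Hz at 1.5 m. Illustrative choice, not the exact function used
    // in the experiments.
    double frequencyFromDistance(double d_m) {
        const double fMax = 1000.0, fMin = 30.0, dMax = 1.5;
        if (d_m <= 0.0)  return fMax;
        if (d_m >= dMax) return fMin;
        double k = std::log(fMax / fMin) / dMax;   // decay constant (1/m)
        return fMax * std::exp(-k * d_m);
    }

    int main() {
        for (int i = 0; i <= 6; ++i) {
            double d = 0.25 * i;
            std::printf("d = %.2f m -> f = %.0f Hz\n", d, frequencyFromDistance(d));
        }
        return 0;
    }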

In the second method, we added an amplitude parameter to the variation of the frequency with the target distance. When the movement is directed toward the target, the amplitude of the sound is maximal; otherwise it is attenuated according to the discrepancy angle. As for the frequency function, we used an exponential variation of the amplitude.

The instantaneous direction vector of the movement is continuously compared with the direction vector toward the target.
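As an illustration, the discrepancy angle can be computed from the dot product of the two direction vectors and used to attenuate the amplitude exponentially; the attenuation constant below is an assumption, not the value used in the study.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

    // Discrepancy angle (radians) between the instantaneous movement direction
    // and the hand-to-target direction.
    double discrepancyAngle(const Vec3& move, const Vec3& toTarget) {
        double c = dot(move, toTarget) / (norm(move) * norm(toTarget));
        if (c > 1.0) c = 1.0;
        if (c < -1.0) c = -1.0;
        return std::acos(c);
    }

    // Exponential attenuation of the amplitude with the discrepancy angle:
    // full amplitude when moving straight toward the target, attenuated otherwise.
    double amplitudeGain(double angleRad) {
        const double k = 2.0;   // attenuation constant (1/rad), an assumption
        return std::exp(-k * angleRad);
    }

    int main() {
        Vec3 move = {1.0, 0.2, 0.0}, toTarget = {1.0, 0.0, 0.0};
        double a = discrepancyAngle(move, toTarget);
        std::printf("angle = %.2f rad, gain = %.2f\n", a, amplitudeGain(a));
        return 0;
    }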

The sound is not permanent: it is emitted as feedback only when the subject is moving the hand, so we set a velocity threshold of 8 cm/s.
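A minimal sketch of this velocity gating, assuming the speed is estimated from two successive position samples; the sampling period used here is an illustrative value.

    #include <cmath>
    #include <cstdio>

    // Estimate hand speed (cm/s) from two successive position samples (in cm)
    // separated by dt seconds.
    double speedCmPerS(const double p0[3], const double p1[3], double dt) {
        double dx = p1[0] - p0[0], dy = p1[1] - p0[1], dz = p1[2] - p0[2];
        return std::sqrt(dx*dx + dy*dy + dz*dz) / dt;
    }

    // The guiding sound plays only above the 8 cm/s threshold given in the text.
    bool soundEnabled(double speed) {
        const double threshold = 8.0;   // cm/s
        return speed > threshold;
    }

    int main() {
        double p0[3] = {10.0, 5.0, 20.0}, p1[3] = {10.4, 5.1, 20.2};
        double v = speedCmPerS(p0, p1, 1.0 / 30.0);   // 30 Hz sampling assumed here
        std::printf("v = %.1f cm/s, sound %s\n", v, soundEnabled(v) ? "on" : "off");
        return 0;
    }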

First Stage Evaluation

Five non-disabled individuals participated in the evaluation of the audioguiding methods [5]. The subjects were instructed to reach the target guided by the sound, following the higher-pitched notes. Ten targets were chosen randomly for each method. Both methods allowed the subjects to reach the target without visual control, usually before the time-out (1.30 minutes). Contrary to our initial hypothesis, the first method, varying the frequency according to the distance, appears to be more efficient than the second one.

Conclusion

Analyzing the interaction between the disabled user and man/machine interfaces will make it possible to point out the problems encountered when using technological assistive devices. This paper describes a study, under development, which consists of creating a learning environment using both visual and audio feedback when acting on a robotic system. The results obtained with the two audioguided movement methods do not show a clear benefit for the second one, which adds an amplitude variation to the frequency variation. We plan to improve this method by redefining on-line the sounded workspace using a geometric cone attached to the target. We are also investigating the use of the M3S communication bus [1], which allows different input/output devices to be interconnected on the same wires. This will help us to compare the data emitted by an input device, for example a joystick, with the resulting movement of the end effector of a robot arm, for example the Manus II robot.

Acknowledgments

The authors would like to thank C. Ammi from the Institut National des Télécommunications (INT), Pr. B. Bussel from the Raymond Poincaré rehabilitation hospital in Garches, the Institut Garches, Chevalier from the MAIF foundation and J. C. Cunin from the Association Française contre les Myopathies (AFM). M. Mokhtari holds a grant from INT and AFM. The project is supported by grant convention INSERM-CNAMTS No. 3AM074.

References

[1] P.M. Allemand, "Multiple-Masters-Multiple-Slaves (M3S) Bus", Master's Thesis, Institut National des Télécommunications, November 95.

[2] M. Hamley, P. Cudd, and A. Cherry, "Systems for Integrated Access to Alternative Communication, Mobility, Computers and the Home Environment", Communication Outlook, Vol. 15 No. 4, Winter 94.

[3] J. Hammel, K. Hall, D. Lees, L. Leifer, M. V. der Loos, I. Perkash, and R. Crigler, "Clinical Evaluation of Desktop Robotic Assistant", Journal of Rehabilitation Research and Development, vol. 26 No. 3, Summer 89.

[4] A. Loukil, "Interface Homme-Machine de Contrôle-Commande en Robotique Téléopérée", PhD Thesis, Univ. Evry-Val d'Essonne, December 93.

[5] M. Mokhtari, A. Roby-Brami, S. Fuchs, M. Tremblay, O. Rémy-Néris, "Mesure 3D du mouvement et audioguidage. Perspectives pour l'apprentissage de la télémanipulation par des personnes présentant un handicap moteur", 4th International Conference: Interface to Real & Virtual Worlds, June 95, Montpellier, France.

[6] A. Roby-Brami, "Pointing gestures in man-machine interface, analysis of the mouse cursor trajectory", Proceedings of the 14th International Conference of the IEEE Engineering in Medicine and Biology Society, Paris, 1992, 1656-1657.

[7] A. Roby-Brami, S. Fuchs, M. Mokhtari, B. Bussel, "3D method of recording reaching movements in normal and hemiplegic humans", 17th Meeting of the European Neuroscience Association, Vienna, September 94. Eur. J. Neurosci., Suppl. 7, p. 14.

[8] C. A. Stranger, C. Anglin, W. S. Harwin, and D. P. Romilly, "Devices for Assisting Manipulation: A Summary of User Task Priorities", IEEE Transactions on Rehabilitation Engineering vol. 2 No. 4, December 94.

Address: INSERM-CREARE, University Pierre & Marie Curie, 9 quai St-Bernard, Bat. C30, 75005 Paris, France.

Endnotes

[1] M. Mokhtari, M.S., is currently pursuing the Ph.D. in rehabilitation engineering with the INSERM-CREARE at the University Pierre & Marie Curie, Paris, and the Institut National des Télécommunications (INT), Evry, France. e-mail: Mounir.Mokhtari@snv.jussieu.fr

[2] A. Roby-Brami, M.D. Ph.D., is research assistant at the INSERM-CREARE. e-mail: Agnes.Roby-Brami@snv.jussieu.fr