“INCLUSIVE INTERFACES”
Prof. Paula Escudeiro, Institute of Engineering of Porto, Polytechnic Institute of Porto, Portugal
Bio
Paula Maria de Sá Oliveira Escudeiro holds a PhD in Computer Science/Systems and Information Technology in Education (2009), an MSc in Information Systems and Technology and a BSc in Applied Mathematics/Informatics. She has been a Professor at IPP-ISEP since 1992, with vast experience in project supervision and evaluation accumulated over the past 30 years.
A researcher in the field of Learning Technology, she has been the Director of the research unit GILT (Games, Interaction & Learning Technologies) since 2014 and coordinator of its Learning Technology thematic line. She is President of the Board of the Serious Games Association, Director and Founder of the Multimedia Laboratory of the Computer Engineering Department, Member of the National Committee for European Projects and External Evaluator of European Projects.
She has organized several events to promote Learning Technology, edited several journals and books, and served as conference chair and member of the scientific committee of several national and international conferences. She is the author of more than 100 scientific articles in the field of Information and Education Technology.
Director of the Postgraduate Program in Information and Communication Technology at ISEP.
Subdirector of the Computer Engineering Department, Director of the Institute of Information Systems for Technical Development, Vice-President of the Pedagogical Council, Director of the Center for Multimedia Products Development for the National Institute Administration, and Member of the Subcommittee on Quality Assessment at the Institute of Engineering of Porto.
Author of Patent No. 20091000045179, Code 0198, related process 104557 K (Software Quality Model); of Mark No. 20151000046313, Code 059 (Virtual Sign); and of Patent No. 20151000065572, Code 0198, related process (Bidirectional Sign Language Translator). She has won several research awards, including the Inclusion and Digital Literacy Prize.
Abstract
Promoting equal opportunities and the social inclusion of disabled people is a concern of modern society and a key issue in European education. The evolution of science and emerging new technologies promote social inclusion and simplify communication with disabled people. The current state of affairs can, however, still evolve. When deaf or blind people access public services, for instance, communication can be quite complex. Education is another critical area and a more serious problem: education impacts citizens' lives to a great extent, and barriers to education limit future opportunities. Any contribution to tearing these barriers down promotes equity. Deaf people experience significant obstacles in reading, and blind people have no easy access to digital content. These facts severely compromise the development of creative, emotional and social skills in deaf or blind students. The main goal of this research project is to promote the access of deaf or blind people to education and citizenship.
The project addresses research issues raised by Virtual Sign (VS), which investigated the translation between Portuguese Sign Language and written Portuguese using desktop devices. It became clear that mobile devices, and their ability to provide disabled people with effective communication facilities that do not depend on desktop hardware, need to be investigated. In addition, the number of external devices required to collect the inputs used to model gestures should be reduced, making the solution more suitable for daily use outside the lab or the traditional classroom.
The project investigates the integration of multiple channels (MC) of communication (gesture, speech, text, digital arts) and multiple languages (ML), making them available through desktop and mobile devices in order to promote communication with and between deaf and blind students.
The purpose of the Multi-Channel/Multi-Language Communication (MCMLC) Model and Methodology is the analysis and design of a model synthesizing the features required to implement a bi-directional converter between gestures and text, a voice-to-gesture converter and a gesture-to-voice converter. All of these converters will embed language translators. The model should address the communication difficulties and the concrete needs of blind and deaf people in their daily lives.
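As a rough illustration of the converter-with-embedded-translator idea, the architecture might be sketched along these lines. All names (`Translator`, `GestureTextConverter`) and the dictionary-based translation are hypothetical assumptions for the sketch, not the project's actual design:

```python
class Translator:
    """Embedded language translator; here a toy word-for-word lookup."""

    def __init__(self, lexicon):
        self.lexicon = lexicon  # e.g. a Portuguese -> English word map

    def translate(self, text):
        return " ".join(self.lexicon.get(word, word) for word in text.split())


class GestureTextConverter:
    """Bi-directional converter between gesture identifiers and text,
    with an embedded language translator on the text side."""

    def __init__(self, gesture_to_word, translator):
        self.gesture_to_word = gesture_to_word
        self.word_to_gesture = {w: g for g, w in gesture_to_word.items()}
        self.translator = translator

    def gestures_to_text(self, gestures):
        # Map each recognized gesture to a word, then translate the sentence.
        words = " ".join(self.gesture_to_word[g] for g in gestures)
        return self.translator.translate(words)

    def text_to_gestures(self, text):
        # Inverse direction: map each word back to a gesture identifier.
        return [self.word_to_gesture[w] for w in text.split()]
```

A voice-to-gesture converter would follow the same pattern with a speech-recognition front end in place of the text input.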
The user interface plays a very important role, of particular relevance when it is directed at people with disabilities. User Interface Design will define the protocols, characteristics and requirements for interacting via audio, video, gestures and text on desktop and mobile devices. We will create the user interface architecture and define the modes of operation.
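One way to picture the modes of operation is as an intersection of what a device offers and what a user can use. The `Channel` enum and the capability map below are illustrative assumptions only:

```python
from enum import Enum, auto


class Channel(Enum):
    """Interaction channels the interface must support."""
    AUDIO = auto()
    VIDEO = auto()
    GESTURE = auto()
    TEXT = auto()


# Hypothetical capability map: which channels each device class offers.
DEVICE_CHANNELS = {
    "desktop": {Channel.AUDIO, Channel.VIDEO, Channel.GESTURE, Channel.TEXT},
    "mobile": {Channel.AUDIO, Channel.VIDEO, Channel.TEXT},
}


def usable_channels(device, user_channels):
    """Channels both the device and the user can actually use."""
    return DEVICE_CHANNELS[device] & user_channels
```

For a deaf user on a mobile device, for instance, the interface would fall back to the video and text channels.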
The automatic conversion between gestures, text and voice will be based on machine learning processes that learn from examples. In addition, the reproduction of gestures by a 3D avatar is needed when converting text or voice to gestures.
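The learning-from-examples step can be sketched minimally with a nearest-neighbour classifier over gesture features. The feature vectors (imagined here as flattened, normalized hand-keypoint coordinates) and labels are hypothetical; a real system would use far richer features and models:

```python
import math

# Hypothetical labelled examples: each gesture is a flattened vector of
# normalized hand-keypoint coordinates paired with its text meaning.
TRAINING_EXAMPLES = [
    ([0.0, 0.0, 1.0, 0.0], "hello"),
    ([0.1, 0.1, 0.9, 0.1], "hello"),
    ([1.0, 1.0, 0.0, 1.0], "thanks"),
    ([0.9, 0.9, 0.1, 0.9], "thanks"),
]


def classify_gesture(features):
    """Label a new gesture with the meaning of its nearest training example."""
    nearest = min(TRAINING_EXAMPLES,
                  key=lambda example: math.dist(features, example[0]))
    return nearest[1]
```

The inverse direction (text or voice to gestures) then drives the 3D avatar: each recovered gesture label is played back as a pre-modelled avatar animation.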