It is well known that older people enjoy story-telling and being read to. However, the aging process and its accompanying changes make reading a difficult activity for this segment of the population. This project proposes to study the cognitive processes involved in reading for older adults, and to develop an ambient-assisted reading framework targeted at elderly people and communities. Elderly people are among the last groups to benefit from access to computers. Technology is difficult for older adults to use because they often suffer from aging-related motor impairments and cognitive disabilities, such as age-degenerative processes, short-term memory problems, and reduced visual and auditory capabilities. Moreover, the older adult population is unique in that age-related declines in abilities are not homogeneous, and individuals tend to use other abilities to compensate for those in deterioration.

To better assist the reading activity, the cognitive processes involved in reading will be considered, as well as a characterization of the individual’s cognitive, emotional and perceptual states and their evolution during the reading process. Consequently, the uniqueness of each individual will be taken into account. For this characterization to be possible, a cognitive model of the user and a modelling process must be defined. The user characterization will start from a set of different physiological inputs, allowing the inference of several indicators (e.g. stress, tiredness, alertness). These inputs will be acquired through a series of sensors, including biometric sensors (e.g. heart rate), video cameras (e.g. facial processing), and microphones (e.g. user utterances). The information gathered from the various sensing devices will then be combined to generate a representation of the individual’s cognitive, emotional and perceptual profile, and to allow analysis of how these states evolve.
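The fusion step described above could be sketched as follows. This is a minimal illustration only: the class names, the particular sensor readings, and the heuristic thresholds mapping raw readings to indicators are all assumptions for the example, not the project's actual model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class UserProfile:
    """Rolling cognitive/emotional indicators inferred from sensor inputs."""
    history: dict = field(default_factory=lambda: {
        "stress": [], "tiredness": [], "alertness": []})

    def update(self, heart_rate: float, blink_rate: float,
               speech_rate: float) -> dict:
        # Toy heuristics mapping raw physiological readings to [0, 1] scores.
        stress = min(1.0, max(0.0, (heart_rate - 60) / 60))    # elevated HR
        tiredness = min(1.0, max(0.0, (blink_rate - 10) / 20)) # frequent blinks
        alertness = min(1.0, max(0.0, speech_rate / 4))        # brisk speech
        for key, value in (("stress", stress), ("tiredness", tiredness),
                           ("alertness", alertness)):
            self.history[key].append(value)
        # Smooth over recent history so momentary spikes do not dominate.
        return {k: mean(v[-5:]) for k, v in self.history.items()}

profile = UserProfile()
state = profile.update(heart_rate=85, blink_rate=18, speech_rate=2.0)
```

Keeping a short history per indicator, rather than reacting to single readings, reflects the goal of tracking the evolution of the user's state during a reading session.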

In parallel with this physiological information gathering, the framework allows multimodal input, relieving the user from depending on keyboard and mouse commands and relying instead on input modalities more familiar to older adults, with particular relevance to speech, the most natural way to issue commands. This can be particularly important for the target population: it is an audience with less familiarity with information technology, and one that may have motor impairments limiting their use of technology (e.g. tremors), and so benefits more from an input mode (speaking) that they are used to.
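Once an utterance has been recognized, it must be resolved to a reader action. The tiny dispatcher below is a hypothetical sketch of that mapping; the command vocabulary and action names are illustrative assumptions, not the framework's real interface.

```python
# Hypothetical mapping from recognized speech commands to reader actions.
COMMANDS = {
    "next page": "page_forward",
    "previous page": "page_back",
    "louder": "volume_up",
    "bigger letters": "font_up",
}

def dispatch(utterance: str) -> str:
    """Normalize a recognized utterance and resolve it to an action."""
    return COMMANDS.get(utterance.strip().lower(), "unrecognized")
```

A fixed, small vocabulary like this suits the target population well: a handful of short, memorable phrases is easier to learn than free-form commands, and easier for a recognizer to handle robustly.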

The commands issued, together with the evaluation of the individual’s cognitive and physiological state, govern the multimodal output presentation. Books can be presented to the user in two modalities, individually or combined: audio and visual. The best way to present a book is not (always) dependent on its contents, but mainly on the characteristics of the user: different degrees of visual or auditory impairment will determine the modalities used and decide factors such as font size and audio volume.
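The adaptation rule above could look like the sketch below. The field names, thresholds, and scaling factors are assumptions chosen for illustration; the actual framework's parameters may differ.

```python
from dataclasses import dataclass

@dataclass
class Presentation:
    use_visual: bool
    use_audio: bool
    font_size: int   # points
    volume: float    # 0.0 - 1.0

def choose_presentation(visual_impairment: float,
                        hearing_impairment: float) -> Presentation:
    """Both impairments are rated from 0 (none) to 1 (severe)."""
    use_visual = visual_impairment < 0.8   # drop text only for very poor vision
    use_audio = hearing_impairment < 0.8   # drop audio only for very poor hearing
    font_size = 14 + int(visual_impairment * 18)  # enlarge text as vision declines
    volume = min(1.0, 0.5 + hearing_impairment * 0.5)  # raise volume with hearing loss
    return Presentation(use_visual, use_audio, font_size, volume)

p = choose_presentation(visual_impairment=0.5, hearing_impairment=0.1)
```

Note that both modalities stay enabled across most of the range, matching the idea that audio and visual presentation can be combined rather than treated as mutually exclusive.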

A framework considering all these aspects will be developed and implemented in a living-room style set-up, contributing to better integration into an environment older adults are used to and thus allowing more realistic evaluations to be conducted. A second framework, developed on top of the first, will incorporate several features to foster the sense of reading as a community activity. Users will be able to annotate books, and these annotations will then be available for sharing with other users. In this fashion, we hope to make it easier for older adults to stay in touch with relatives and other acquaintances, fostering a reading community.
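The annotation-sharing feature can be pictured with a toy data model. Everything here (the classes, the book title, the sample annotations) is an assumption for illustration only, not the second framework's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str
    page: int
    text: str

@dataclass
class SharedBook:
    title: str
    annotations: list = field(default_factory=list)

    def annotate(self, author: str, page: int, text: str) -> None:
        """Record an annotation, making it visible to the whole community."""
        self.annotations.append(Annotation(author, page, text))

    def annotations_for(self, page: int) -> list:
        """All community annotations attached to a given page."""
        return [a for a in self.annotations if a.page == page]

book = SharedBook("Os Maias")
book.annotate("Maria", 12, "My favourite passage.")
book.annotate("João", 30, "Reminds me of Lisbon.")
```

Attaching annotations to pages of a shared book, rather than to private copies, is what turns individual reading into the community activity the project aims for.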

Participating Research Groups

LASIGE, Human Computer Interaction and Multimedia group; INESC ID, L2F Spoken Language Systems Lab; and CITI, Research Center for Informatics and Information Technologies.

Time Line

The project officially started on 01-01-2010, with a duration of 36 months. Due to the timing of the funding acceptance, actual execution was set to start on 01-03-2010.

Funding Source

ARIA is funded by Fundação da Ciência e Tecnologia (PT) (Contract PTDC/EIA-EIA/105305/2008).

Project Publications

International Scientific Meetings

Diogo Delgado, João Magalhães, Nuno Correia (2010), Automated Illustration of News Stories, IEEE 4th International Conference on Semantic Computing, CMU, USA.

National Scientific Meetings

Technical Reports

Master's Theses

Diogo Delgado (2010), Automated Illustration of Multimedia Stories, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa.

Artemisa Moreno (2012), Ferramenta para análise de modos e experiências de leitura, Faculdade de Ciências da Universidade de Lisboa.

Badjinca Baticam (2012), Avaliação de Leitura com Multimodalidades e Suporte para a acessibilidade, Faculdade de Ciências da Universidade de Lisboa.

Gonçalo Graças (2013), Facebook 3G – Social Networks for the Elderly, Faculdade de Ciências da Universidade de Lisboa.