
EmotiphAI

A novel platform for mental health assessment relying on physiological signals, for both individual and group settings. EmotiphAI comprises three main components: a data collector with real-time visualisation; a retrospective emotion annotation tool; and an AI Engine for emotion knowledge extraction.

Motivation

Emotions are powerful phenomena that greatly influence our health and behaviour. When we are in high spirits, it shows in our overall wellbeing and productivity. On the other hand, long periods of negative emotions can result in burnout and depression.


According to the WHO [1], in 2019 nearly 1 billion people worldwide were living with a mental health problem, a number that rose by 25% due to COVID-19. Considering that around 60% of the world's population is in the workforce, mental health issues account for up to 12 billion working days lost per year and 1 in 5 years lived with disability, and drain a total of $1 trillion per year from the global economy.


Mental health issues are highly stigmatised, which leads to late diagnosis, often through self-reported questionnaires that can be manipulated by the subject and do not allow monitoring over time.


To tackle this issue, we propose a new approach driven by physiological data: the EmotiphAI platform.


PROTOTYPING

EmotiphAI is a novel platform for mental health assessment relying on physiological signals. It consists of an end-to-end framework for physiological data collection, emotion data annotation, and knowledge extraction. For data collection, EmotiphAI relies on the ScientISST device, a wearable system with integrated physiological sensors. In contrast to images or self-reports, physiological signals offer an unbiased window into the subject's autonomic nervous system for emotion recognition. Moreover, they can be collected non-intrusively during daily living, either with a bracelet or with sensors integrated into everyday objects in the form of “invisibles”.
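
As a concrete illustration of this step, the sketch below shows what a minimal single-device acquisition could look like with the ScientISST Sense Python API (listed under Technologies Used below). The device address, channels, and sampling rate are placeholders, and the method names reflect our reading of the public API; treat the exact calls as assumptions rather than the EmotiphAI Recorder's actual implementation.

```python
# Minimal single-device acquisition sketch.
# Assumptions: method names follow the ScientISST Sense Python API;
# the device address and channel numbers below are placeholders.
from scientisst import ScientISST

dev = ScientISST("XX:XX:XX:XX:XX:XX")  # placeholder device address
dev.start(100, [1, 2])                 # 100 Hz on two analogue channels
try:
    for _ in range(100):               # ~10 s of data in batches of 10 frames
        frames = dev.read(10)          # one frame = one sample per channel
        for frame in frames:
            print(frame)
finally:
    dev.stop()
    dev.disconnect()
```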


Through the ScientISST devices, EmotiphAI supports both single-person and group data collection, e.g. from multiple individuals in a shared space such as a workplace or a cinema, or from the same individual at multiple measurement points, e.g. a chair and a computer keyboard in the “invisibles” paradigm. The ScientISST devices transmit the data via WiFi over a local network created by a Hub, which hosts a software platform with three main components:


  1. Recorder: handles communication with the devices, stores the data, and provides real-time visualisation of the physiological signals;

  2. Annotator: a novel retrospective emotion annotation tool for long videos; and

  3. AI Engine: visualises the collected data and extracts more than 400 emotion-related features that can feed emotion classification algorithms in post-processing (a brief illustration follows this list).
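
To make the AI Engine more tangible, here is a small, hedged illustration of the kind of feature extraction it performs, using the BioSPPy toolbox (see Technologies Used below) on an electrodermal activity (EDA) signal. The four features shown are a simplified stand-in for the 400+ features mentioned above, not the platform's actual feature set.

```python
# Illustrative EDA feature extraction with BioSPPy; a sketch, not
# EmotiphAI's actual pipeline or feature set.
import numpy as np
from biosppy.signals import eda

def eda_features(signal, sampling_rate=100.0):
    """Compute a few simple emotion-related features from a raw EDA signal."""
    out = eda.eda(signal=signal, sampling_rate=sampling_rate, show=False)
    duration = len(signal) / sampling_rate
    amplitudes = np.asarray(out["amplitudes"])
    return {
        "scr_rate": len(out["onsets"]) / duration,  # skin conductance responses per second
        "mean_scr_amplitude": float(amplitudes.mean()) if amplitudes.size else 0.0,
        "tonic_mean": float(np.mean(out["filtered"])),  # proxy for tonic EDA level
        "tonic_std": float(np.std(out["filtered"])),
    }
```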


The EmotiphAI Annotator was developed for emotion annotation of long videos, allowing undisturbed viewing of the content and performing the annotation only afterwards, i.e. retrospectively. Additionally, our annotation tool reduces annotation time and complexity through an intelligent content segmentation algorithm that selects meaningful segments of the entire content for retrospective emotion annotation, as illustrated below.
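
The published segmentation method is described in [3]; purely as an illustration of the idea, the sketch below ranks fixed-length windows of a session by the variability of an EDA signal and keeps the top-scoring windows for retrospective annotation. The window length and scoring function are placeholder choices, not the algorithm from [3].

```python
import numpy as np

def select_segments(eda, fs, window_s=30, n_segments=5):
    """Return (start, end) times in seconds of the most physiologically
    'active' non-overlapping windows, scored by EDA standard deviation."""
    win = int(window_s * fs)
    n_windows = len(eda) // win
    # Score each disjoint window by the spread of the EDA signal within it.
    scores = [np.std(eda[i * win:(i + 1) * win]) for i in range(n_windows)]
    top = np.argsort(scores)[::-1][:n_segments]  # highest-variability windows
    return sorted((int(i) * window_s, (int(i) + 1) * window_s) for i in top)
```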


VALIDATION

The different EmotiphAI components were validated in separate tests. The EmotiphAI Recorder was validated through systematic tests exploring data loss at different sampling rates, WiFi routers, and host computers running the software. At the moment, we collect data from two physiological sensors per device, for up to 10 devices at a 100 Hz sampling rate, using a Surface computer and a TP-Link TL-MR3020 router [2]. For the EmotiphAI Annotator, we compared three content segmentation methods: selection based on physiological data (emotion-based), on scenes (time-based), and at random (control). Additionally, we analysed the platform's usability and user experience with the System Usability Scale (SUS), mental workload with the NASA Task Load Index (NASA-TLX), and annotation accuracy by comparison against gold-standard annotations. The experimental results showed that EmotiphAI provides an above-average user experience with low mental workload, and that the emotion-based segmentation method returns annotations reliable enough for building emotion recognition systems [3].
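
For readers unfamiliar with the SUS, its standard scoring rule maps ten 1-5 Likert responses to a 0-100 score, where roughly 68 is considered average: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses (item 1 first)."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9: r-1; items 2,4,6,8,10: 5-r
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0, above the ~68 average
```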


PARTICIPATE IN THIS STUDY

Every other Tuesday, EmotiphAI takes part in the Diferencial cinema sessions at IST (Pavilhão de Mecânica II | Anfiteatro AM), where you can explore your physiological reactions to a movie and learn more about our work.




REFERENCES

[1] World mental health report: transforming mental health for all. Executive summary. Geneva: World Health Organization; 2022. Licence: CC BY-NC-SA 3.0 IGO.


[2] Bota, P., Flety, E., Silva, H. P. d., et al. EmotiphAI: a biocybernetic engine for real-time biosignals acquisition in a collective setting. Neural Computing and Applications (2022). https://doi.org/10.1007/s00521-022-07191-8


[3] Bota, P., Cesar, P., Fred, A., P. da Silva, H. Exploring Retrospective Annotation in Long Videos for Emotion Recognition. In major revision at IEEE Transactions on Affective Computing (2023).

Technologies Used

ScientISST EmotiphAI
Sense Python API
ScientISST BioSPPy
ScientISST CORE

