
The aim of this research project was to verify whether human emotions can be recognised from physiological reactions. Three body reactions were chosen as sources of information:

  • heart rate,
  • galvanic skin response,
  • eye movement.

Emotions were stimulated via VR goggles displaying videos designed to provoke the expected emotion. The project investigated five key states: fear, sadness, dread, joy and a neutral state.

The process of emotion recognition system development

The development of the emotion recognition technology can be divided into five key steps:

  1. Data pre-processing

This stage consists of combining the data obtained from the various sensors and processing it. Here the data is synchronised, merged and cleaned of incorrect readings, and missing values are filled in. Additionally, the data is divided into time windows, on the basis of which emotions will be recognised in the subsequent steps. Pre-processing yields the data in a form acceptable to the recognition model.
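The pre-processing step can be sketched as follows: a synchronised sample stream is cleaned (missing values forward-filled) and split into fixed-length time windows. The window size and heart-rate values below are illustrative assumptions, not values from the project.

```python
# Sketch of pre-processing: fill gaps in a sensor stream, then segment it
# into non-overlapping time windows for the later recognition steps.

def forward_fill(samples):
    """Replace None readings with the most recent valid reading."""
    filled, last = [], None
    for s in samples:
        if s is None:
            s = last
        filled.append(s)
        last = s
    return filled

def make_windows(samples, window_size):
    """Split a sample stream into non-overlapping time windows."""
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, window_size)]

# Hypothetical heart-rate stream with one missing reading.
heart_rate = [72, 74, None, 75, 77, 76, 78, 80]
windows = make_windows(forward_fill(heart_rate), window_size=4)
print(windows)  # [[72, 74, 74, 75], [77, 76, 78, 80]]
```

In a real pipeline the same windowing would be applied in parallel to the heart-rate, galvanic-skin-response and eye-movement streams after synchronising their timestamps.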

  2. Feature extraction

The goal of this stage is to extract relevant information from the raw data. The sources of information are:

  • Extractors dedicated to each information source (i.e. adjusted to the characteristics of heart rate, galvanic skin response and eye movement)
  • Statistical extractors that apply statistical operations to the time windows.

Extractors not only speed up the learning process, but also increase the accuracy of emotion recognition.
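A statistical extractor of the kind described above can be sketched in a few lines: each time window of raw samples is reduced to a small feature vector. The particular statistics chosen here are illustrative assumptions, not the project's actual feature set.

```python
import statistics

def extract_features(window):
    """Reduce one time window of raw samples to statistical features."""
    return {
        "mean": statistics.mean(window),
        "stdev": statistics.stdev(window),
        "min": min(window),
        "max": max(window),
        "range": max(window) - min(window),
    }

window = [72, 74, 74, 75]          # one hypothetical heart-rate window
features = extract_features(window)
print(features["mean"])            # 73.75
```

A dedicated extractor for, say, eye movement would add domain-specific features (fixation counts, saccade lengths) alongside these generic statistics.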

  3. Feature selection

This is an important step for increasing the efficiency of the emotion recognition technology. With the help of dedicated selectors, a subset of features with the greatest influence on the recognised emotion is determined. Reducing the number of features reduces noise in the data and lowers the complexity of the recognition model.
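One simple selector, shown here as a minimal sketch, drops features whose values barely vary across samples and therefore carry little information. Real selectors (e.g. ranking features by their relationship to the emotion label) are more elaborate; the feature names and threshold below are assumptions for illustration.

```python
import statistics

def select_features(feature_rows, threshold):
    """Return names of features whose variance exceeds the threshold."""
    selected = []
    for name in feature_rows[0]:
        values = [row[name] for row in feature_rows]
        if statistics.pvariance(values) > threshold:
            selected.append(name)
    return selected

# Hypothetical feature vectors from three time windows.
rows = [
    {"hr_mean": 72.0, "gsr_mean": 0.31, "blink_rate": 0.2},
    {"hr_mean": 95.0, "gsr_mean": 0.30, "blink_rate": 0.6},
    {"hr_mean": 64.0, "gsr_mean": 0.31, "blink_rate": 0.1},
]
print(select_features(rows, threshold=0.01))  # ['hr_mean', 'blink_rate']
```

Here `gsr_mean` is discarded because it is nearly constant, shrinking the input the recognition model has to process.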

  4. Development of the recognition model

This stage is the heart of the whole system. Here we define the model architecture that will be used to recognise emotions. Various decision-making models are verified, and one of them is selected for final use.
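The model-selection idea can be sketched as follows: several candidate models are trained, evaluated on held-out data, and the most accurate one is kept. The two toy candidates below (a majority-class baseline and a nearest-centroid classifier on a single made-up feature) are stand-ins for illustration; the project's final model was a deep neural network.

```python
# Compare candidate decision models on a validation set and keep the best.

def majority_model(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def centroid_model(train):
    """Predict the label whose feature centroid is closest to the input."""
    groups = {}
    for x, y in train:
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(v) / len(v) for y, v in groups.items()}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Hypothetical 1-D feature (e.g. mean heart rate) with emotion labels.
train = [(62, "neutral"), (65, "neutral"), (95, "fear"), (99, "fear")]
valid = [(60, "neutral"), (97, "fear"), (64, "neutral")]

candidates = {"majority": majority_model(train),
              "centroid": centroid_model(train)}
best = max(candidates, key=lambda name: accuracy(candidates[name], valid))
print(best)  # centroid
```

The same verify-then-select loop applies unchanged when the candidates are full neural-network architectures rather than toy classifiers.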

  5. Tuning the recognition model

Tuning consists of finding the optimal hyperparameters of the recognition model, i.e. the hyperparameters for which the model achieves the highest accuracy. Tuning is carried out with the help of a dedicated optimiser, which achieves high accuracy within a limited number of model evaluations.
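A budget-limited search of this kind can be sketched with random search: the optimiser proposes hyperparameter settings, scores each one, and keeps the best within a fixed number of trials. The scoring function below is a made-up stand-in for a real train-and-validate run, and the hyperparameter names and ranges are assumptions.

```python
import random

def validation_score(learning_rate, hidden_units):
    """Hypothetical stand-in for training a model and measuring accuracy."""
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(hidden_units - 64) / 256

def random_search(trials, seed=0):
    """Try `trials` random settings and return the best one found."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform sample
            "hidden_units": rng.choice([16, 32, 64, 128]),
        }
        score = validation_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = random_search(trials=30)
print(params, round(score, 3))
```

Dedicated optimisers (e.g. Bayesian ones) improve on this by using earlier trial results to choose the next setting, which is what keeps the number of verification attempts low.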

The final emotion recognition system is based on deep neural networks and achieves over 90% accuracy on the test group. This research and development project confirmed that emotions can be recognised from data produced by sensors monitoring selected body reactions. We have also developed an application that processes the sensor data and visualises the emotional state.



Copyright © 2024 Whiteaster
