The aim of the following research project was to verify whether human emotions can be recognised from bodily reactions. Three body reactions were chosen as sources of information:
Emotions were stimulated via VR goggles displaying videos designed to provoke the expected emotion. The project investigated 5 key emotional states: fear, sadness, dread, joy and a neutral state.
The development of the emotion recognition technology can be divided into 5 key steps:
This stage consists of combining the data obtained by the various meters and processing them: the data streams are synchronised, merged, cleaned of incorrect readings, and missing values are filled in. The data is also divided into time windows, on the basis of which emotions are recognised in the subsequent steps. Preliminary processing thus yields data in a form accepted by the recognition model.
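The preprocessing described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it assumes NumPy, a single one-dimensional sensor signal, linear interpolation for gap filling, and made-up window parameters.

```python
import numpy as np

def fill_missing(signal):
    """Fill missing samples (NaNs) by linear interpolation (assumed strategy)."""
    signal = np.asarray(signal, dtype=float)
    nans = np.isnan(signal)
    if nans.any():
        idx = np.arange(len(signal))
        signal[nans] = np.interp(idx[nans], idx[~nans], signal[~nans])
    return signal

def sliding_windows(signal, window_size, step):
    """Split a 1-D signal into (possibly overlapping) time windows."""
    return np.array([signal[i:i + window_size]
                     for i in range(0, len(signal) - window_size + 1, step)])

# Hypothetical sensor stream with one missing sample
raw = [0.1, 0.2, np.nan, 0.4, 0.5, 0.6, 0.7, 0.8]
clean = fill_missing(raw)
windows = sliding_windows(clean, window_size=4, step=2)
print(windows.shape)  # (3, 4)
```

In practice each sensor would first be resampled onto a common time base before windowing, so that the windows from different meters stay aligned.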
The goal of this stage is to extract relevant information from the raw data. The sources of information are:
Feature extractors not only speed up the learning process but also improve the accuracy of emotion recognition.
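As an illustration of what an extractor might compute, the sketch below derives a few statistical features from a single time window. The feature set is an assumption for demonstration purposes, not the project's actual extractors.

```python
import numpy as np

def extract_features(window):
    """Hand-crafted statistical features for one time window
    (illustrative choices only)."""
    window = np.asarray(window, dtype=float)
    return {
        "mean": window.mean(),            # average signal level
        "std": window.std(),              # variability within the window
        "min": window.min(),
        "max": window.max(),
        "range": window.max() - window.min(),  # peak-to-peak amplitude
    }

features = extract_features([0.1, 0.3, 0.2, 0.5])
print(features["range"])
```

Each window is thereby reduced from many raw samples to a short feature vector, which is what makes learning faster and more robust.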
This step is important for increasing the efficiency of the emotion recognition technology. With the help of dedicated selectors, the subset of features with the greatest influence on the recognised emotion is determined. Reducing the number of features reduces noise in the data as well as the complexity of the recognition model.
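One common way to implement such a selector is univariate filtering, sketched here with scikit-learn's `SelectKBest` on synthetic data where only one feature actually carries signal. The data, the scoring function and `k` are illustrative assumptions, not the project's selectors.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))        # 6 candidate features
y = (X[:, 2] > 0).astype(int)        # only feature 2 determines the label

# Keep the k features with the highest ANOVA F-score against the labels
selector = SelectKBest(score_func=f_classif, k=2)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                       # (100, 2)
print(selector.get_support(indices=True))    # indices of the kept features
```

The informative feature (index 2) dominates the F-scores and survives the selection, while most of the noise features are discarded.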
This stage is the heart of the whole system: here we define the model architecture that will be used to recognise emotions. During this stage, various decision-making models are verified, and one of them is selected for final use.
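A minimal sketch of that model comparison, assuming scikit-learn, a synthetic stand-in for the windowed sensor features, and two example candidates (a linear model and a small neural network — the project's actual candidate set is not specified):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for extracted features; 5 classes mirror the 5 emotions
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0),
}
# Verify each candidate on held-out data, then keep the best one
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In a real setting the comparison would use cross-validation on the windowed sensor features rather than a single split on synthetic data.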
Tuning consists of finding optimal hyperparameters for the recognition model, i.e. the hyperparameter values for which the model achieves the highest accuracy. Tuning is carried out with a dedicated optimizer, which achieves high accuracy within a limited number of model verification attempts.
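One simple optimizer of this kind is randomised search with a fixed trial budget, sketched below with scikit-learn's `RandomizedSearchCV`. The search space, the budget of 5 trials and the synthetic data are assumptions for illustration; the project's actual optimizer is not specified here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

# Hypothetical hyperparameter space for a small neural network
param_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],
    "learning_rate_init": [1e-3, 1e-2],
}
search = RandomizedSearchCV(
    MLPClassifier(max_iter=200, random_state=0),
    param_space,
    n_iter=5,      # limited number of verification attempts
    cv=3,          # each attempt is scored by 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Smarter optimizers (e.g. Bayesian ones) spend the same limited budget more efficiently by modelling which hyperparameter regions look promising.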
The final emotion recognition system is based on deep neural networks and achieves over 90% accuracy on the test group. Within this research and development project, the possibility of recognising emotions from sensors monitoring selected body reactions was confirmed. Moreover, we developed an application that processes the sensor data and visualises the recognised emotional state.