Urban Indoor/Outdoor Sentiment Analysis Dataset
The Indoor/Outdoor Sentiment Analysis Dataset contains images captured at human eye level, as proposed by architects, with the goal of estimating how urban architectural places trigger human emotion. To produce robust sentiment predictions, emotion is examined based on the Self-Assessment Manikin (SAM), yielding a two-fold dataset for classification purposes. Based on the SAM, we adopted a three-level sentiment polarity for valence and arousal respectively, since dominance has little effect on shaping sentiment. The resulting classes are Positive, Neutral, and Negative for valence, and Excited, Neutral, and Calm for arousal.
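The two-fold label scheme above can be sketched as a pair of enumerations. This is a minimal illustration only; the class names follow the text, but the integer codes are our assumption, not the dataset's official encoding.

```python
# Illustrative sketch of the valence/arousal label scheme described above.
# The integer codes are assumptions; only the class names come from the text.
from enum import IntEnum

class Valence(IntEnum):
    NEGATIVE = 0
    NEUTRAL = 1
    POSITIVE = 2

class Arousal(IntEnum):
    CALM = 0
    NEUTRAL = 1
    EXCITED = 2

# Each image receives one valence label and one arousal label,
# so a single annotation can be stored as a (Valence, Arousal) pair.
label = (Valence.POSITIVE, Arousal.CALM)
```

Keeping valence and arousal as two separate three-class targets matches the two-fold classification setup described above, rather than collapsing them into a single nine-class problem.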
The dataset contains approximately 450 images, some derived from the Places dataset, chosen for its scene-centric character, and a percentage from Unity environments, in an effort to create a synthetic dataset that can be used in both virtual and real-life environments for sentiment analysis of indoor/outdoor architectural places. For the annotation process, the collected images were divided into batches of 30 and integrated into Google Forms questionnaires, which prevented duplicate answers on each form and made the process less exhausting and less error-prone; the interface described the objectives in Spanish, Catalan, English, and French. Every image was annotated by at least five individuals, and participants with relevant backgrounds, studying or working in architecture and design, were asked to provide their emotional response via the questionnaires.
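Since every image is annotated by at least five individuals, the per-image responses must be reduced to a single label. The text does not specify the aggregation rule, so the majority vote below is purely our assumption, shown only to make the multi-annotator setup concrete.

```python
# Hedged sketch: aggregating five or more annotator responses into one label.
# Majority voting and the Neutral tie-break are assumptions on our part;
# the dataset's actual aggregation rule is not stated in the text.
from collections import Counter

def aggregate_votes(votes):
    """Return the most common label; break ties by falling back to 'Neutral'."""
    top = Counter(votes).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "Neutral"  # tie-break assumption
    return top[0][0]

valence_votes = ["Positive", "Positive", "Neutral", "Positive", "Negative"]
print(aggregate_votes(valence_votes))  # prints "Positive"
```

The same function would be applied independently to the valence and arousal responses for each image.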
If you are interested in obtaining the dataset, please contact us.
Konstantinos Chatzistavros : firstname.lastname@example.org
Theodora Pistola : email@example.com
Konstantinos Ioannidis : firstname.lastname@example.org
If you use the Urban Indoor/Outdoor Sentiment Analysis Dataset or our work is useful to your research activity, please cite the following publication:
K. Chatzistavros, T. Pistola, S. Diplaris, K. Ioannidis, S. Vrochidis & I. Kompatsiaris, “Sentiment analysis on 2D images of urban and indoor spaces using deep learning architectures”, In 19th International Conference on Content-based Multimedia Indexing (CBMI 2022) (accepted for publication)
References. B. Zhou, A. Lapedriza, A. Khosla, A. Oliva and A. Torralba, "Places: A 10 Million Image Database for Scene Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452-1464, June 2018, doi: 10.1109/TPAMI.2017.2723009.