This one seems like a showstopper. A total of 38 suicidal participants were scanned, but those who did not show the desired semantic effects were excluded due to “poor data quality”….
This logic seems circular to me, despite the claim that inclusion wasn’t based on group classification accuracy. Seriously, if you throw out over half of your subjects, how can your method ever be useful? Nonetheless, the 21 “poor data quality” ideators (those with excessive head motion and weak semantic signatures) were still used in an out-of-sample analysis, which also yielded relatively high classification accuracy (87%) against data from the same 17 “good” controls (the data from the 24 “bad” controls were apparently excluded).
How Making Art Helps Teens Better Understand Their Mental Health [Juli Fraga | Mind/Shift]
While the creative process in Wardrip’s group is an open canvas, each self-expression exercise teaches the students emotional skills, such as self-awareness, social skills, and self-acceptance.
For example, students may create “mood mandalas” by drawing and coloring symbols to convey their inner worlds. They can also paint their worries on small “comfort” boxes and fill the containers with personal items that bring solace. All group members receive “place book” journals where they privately record their thoughts and feelings; some list their insecurities there alongside healing words, like “Learn to accept your flaws and learn to accept beauty.”
We have released the EMOTIC (EMOTions In Context) dataset [ronakosti | Reddit Machine Learning]
Looks like a great dataset for doing emotion detection in photos. Uses an “extended list of 26 emotion categories combined with the three common continuous dimensions Valence, Arousal and Dominance.” Here’s the abstract from the related research paper (Kosti et al., 2017):
Understanding what a person is experiencing from her frame of reference is essential in our everyday life. For this reason, one can think that machines with this type of ability would interact better with people. However, there are no current systems capable of understanding in detail people’s emotional states. Previous research on computer vision to recognize emotions has mainly focused on analyzing the facial expression, usually classifying it into the 6 basic emotions. However, the context plays an important role in emotion perception, and when the context is incorporated, we can infer more emotional states. In this paper we present the “Emotions in Context Database” (EMOTIC), a dataset of images containing people in context in non-controlled environments. In these images, people are annotated with 26 emotional categories and also with the continuous dimensions valence, arousal, and dominance. With the EMOTIC dataset, we trained a Convolutional Neural Network model that jointly analyses the person and the whole scene to recognize rich information about emotional states. With this, we show the importance of considering the context for recognizing people’s emotions in images, and provide a benchmark in the task of emotion recognition in visual context.
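The “jointly analyses the person and the whole scene” idea from the abstract can be sketched roughly as follows. This is a minimal toy illustration, not the paper’s actual architecture: the CNN feature extractors are replaced by random numpy vectors, and all dimensions, weights, and function names here are my own assumptions. The essential structure is two feature branches (person crop and full scene) concatenated into one fused vector, which feeds two output heads: sigmoid scores for the 26 discrete categories (a multi-label problem) and raw regression outputs for valence, arousal, and dominance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes (stand-ins for CNN branch outputs, not from the paper).
PERSON_DIM, SCENE_DIM = 512, 512
N_CATEGORIES = 26   # EMOTIC's discrete emotion categories
N_CONTINUOUS = 3    # valence, arousal, dominance

# Randomly initialized head weights, purely illustrative.
W_cat = rng.normal(0.0, 0.01, (PERSON_DIM + SCENE_DIM, N_CATEGORIES))
W_vad = rng.normal(0.0, 0.01, (PERSON_DIM + SCENE_DIM, N_CONTINUOUS))

def predict(person_feat, scene_feat):
    """Fuse person and whole-scene features, then apply two heads:
    sigmoid scores per category (multi-label) and a 3-d VAD regression."""
    fused = np.concatenate([person_feat, scene_feat])
    cat_scores = 1.0 / (1.0 + np.exp(-(fused @ W_cat)))  # each in (0, 1)
    vad = fused @ W_vad                                  # unbounded regression
    return cat_scores, vad

# Pretend features extracted from a person crop and from the full image.
person = rng.normal(size=PERSON_DIM)
scene = rng.normal(size=SCENE_DIM)
cats, vad = predict(person, scene)
print(cats.shape, vad.shape)  # (26,) (3,)
```

The design choice worth noting is that the category head is multi-label (independent sigmoids rather than a softmax), since a person in an image can plausibly be annotated with several of the 26 categories at once, while the continuous dimensions are a separate regression head sharing the same fused representation.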