Links for 2017-10-30

Carving new paths where there are none: Considerations in eHealth research [Stephen Schueller | PsyberGuide]

In the end we did arrive at a meaningful sample of apps by limiting the number of mental health issues we were interested in, so we could manage the numbers. But how to rate their quality? App store ratings were, of course, meaningless – some can be paid for and boosted by companies, others the result of an angry mob of users, vengeful after the newest operating system (OS) rendered the app unusable. So this left us sifting through website evaluation criteria, user experience (UX) requirements and IT benchmarks. After numerous iterations, the MARS (Mobile App Rating Scale) was born!

Today the MARS has over 170 citations since its publication in 2015 and has been translated into eight languages. Our team has received hundreds of requests for advice, information, support or collaboration, including its use on the PsyberGuide website. eHealth is blooming!

Mobile App Rating Scale [PsyberGuide]

The Mobile App Rating Scale (MARS) was designed by a research team involved in the development and validation of eHealth and mHealth interventions, or ‘eTools’. The scale aims to provide researchers, clinicians and developers with a list of evaluation criteria, and a gradient response scale for their objective evaluation.

There are three main MARS factors:

  1. The MARS mean
    This is the mean of four objective subscales (a minimal scoring sketch follows this list):

    • Engagement
    • Functionality
    • Aesthetics
    • Information
  2. Subjective Quality
  3. Perceived Impact
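
For concreteness, here is a small sketch (in Python) of how a MARS mean could be computed from the four objective subscales. MARS items are rated on a 5-point scale and each subscale score is the mean of its items, but the item counts and ratings below are made up for illustration only.

```python
from statistics import mean

# Hypothetical item ratings (1-5) for the four objective MARS subscales.
# The published scale defines the actual items and anchors.
subscales = {
    "engagement":    [4, 3, 4, 5, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [4, 4, 3],
    "information":   [3, 4, 2, 4, 3, 4, 3],
}

# Each subscale score is the mean of its item ratings;
# the MARS mean (overall app quality) is the mean of the four subscale scores.
subscale_scores = {name: mean(items) for name, items in subscales.items()}
mars_mean = mean(subscale_scores.values())

print(subscale_scores)
print(f"MARS mean: {mars_mean:.2f}")
```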

Interestingly, it apparently doesn’t include any scale for the tool’s objective evidence base. The article detailing the scale’s development says, “Researchers are yet to test the impact of the mental health apps included in this study. As a result, the MARS item evidence base was not rated for any of the apps in the current study and its performance has not been tested. It is hoped that as the evidence base for health apps develops, the applicability of this MARS item will be tested.” So there is a desire to evaluate the evidence base, but this work hasn’t started yet.

A Universal ‘Language’ of Arousal Connects Humans and the Animal Kingdom [Jen Viegas | Seeker]

New research not only supports Darwin’s views, but also identifies a universal “language” of arousal emitted and understood by amphibians, reptiles, birds, and mammals. The findings, published in the journal Proceedings of the Royal Society B, suggest that we are at least somewhat like the famous fictional character Doctor Dolittle, who could decipher animal communications with ease.

“Our study shows that humans are naturally able to recognize emotional arousal across all classes of vocalizing animals,” said lead author Piera Filippi, a postdoctoral researcher at the University of Aix-Marseille and the Max Planck Institute for Psycholinguistics. “This outcome may find an important application in animal welfare, suggesting that humans may rely on their intuition to assess when animals are stressed.”

Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals [Filippi et al., 2017]

In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
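
The two acoustic parameters the abstract highlights, fundamental frequency and spectral centre of gravity, can be illustrated with a short sketch. This is not the paper’s actual extraction pipeline (which isn’t detailed here); it just applies the textbook definitions, an autocorrelation-based F0 estimate and a magnitude-weighted mean frequency, using NumPy.

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centre of gravity: magnitude-weighted mean of the spectrum's frequencies."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def fundamental_frequency(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Crude F0 estimate: strongest autocorrelation peak within a plausible pitch range."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Toy check: a 220 Hz tone should give F0 close to 220 and a centroid near it.
sr = 16000
t = np.arange(0, 0.5, 1.0 / sr)
tone = np.sin(2 * np.pi * 220 * t)
print(fundamental_frequency(tone, sr), spectral_centroid(tone, sr))
```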

The Chatbot Will See You Now [Nick Romeo | The New Yorker]

Where human therapists rely on body language and vocal tone to make inferences about a patient’s mood, the X2AI bots detect patterns in how phrasing, diction, typing speed, sentence length, grammatical voice (active versus passive), and other parameters correlate with different emotional states. In principle, this imbues the system with the capacity to notice latent emotions, just as human therapists do. “When you say something in a certain way, a good friend will know how you actually feel,” Bann said. “It’s the same thing with our A.I.s.” Although he and Rauws declined to describe exactly how their bots’ core algorithms work, he did say that they rely on both manual coding of emotions and self-directed learning. X2AI psychologists script the conversational flows—the abstract schemas that the bots follow—but algorithms generate the wording of statements and detect user-specific emotional patterns. The system is essentially modular, so that new treatment paradigms and different languages can easily be added. Bann claimed, for instance, that the company could create a chatbot capable of performing Freudian dream analysis “in a week or two.”
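
X2AI declined to describe its core algorithms, so the following is purely a toy illustration of the kind of per-message signals the article lists (sentence length, typing speed, passive versus active voice); every name and heuristic here is a placeholder of mine, not the company’s method.

```python
import re
from dataclasses import dataclass

@dataclass
class MessageFeatures:
    """Toy per-message features of the sort the article mentions; not X2AI's actual inputs."""
    word_count: int
    mean_sentence_length: float
    typing_speed_wpm: float
    passive_hint: bool  # crude heuristic; real voice detection needs a parser

def extract_features(text: str, seconds_to_type: float) -> MessageFeatures:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    # "be-verb + past participle" as a very rough passive-voice cue.
    passive_hint = bool(re.search(r"\b(was|were|been|is|are|being)\s+\w+(ed|en)\b", text.lower()))
    return MessageFeatures(
        word_count=len(words),
        mean_sentence_length=len(words) / max(len(sentences), 1),
        typing_speed_wpm=len(words) / max(seconds_to_type / 60.0, 1e-6),
        passive_hint=passive_hint,
    )

print(extract_features("I was overwhelmed by everything today. It felt endless.", seconds_to_type=42))
```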