Designing our friendly robot companions isn’t about the AI [Ben Brown | O’Reilly Radar]
Building bots is not about the AI; it’s mostly “smoke and mirrors and parlor tricks, crafting your words and using your words really effectively.” Building bots is “design by writers.”
Uses the example of killing a troll in Zork, where the game builds up a command over several turns:
What do you want to kill the troll with?
A mighty blow, but it misses the troll by a mile. The troll’s axe barely misses your ear.
Kind of amazing we are back to this style of interaction with computers!
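That two-step, ask-for-the-missing-piece exchange is what dialogue designers now call slot filling. A minimal sketch of the idea (the command names, slots, and prompts here are invented for illustration, not from the talk):

```python
# Minimal slot-filling sketch: if the user's command is missing a
# required piece, ask a follow-up question instead of failing.
# (Command/slot names and prompts are hypothetical, for illustration.)

REQUIRED_SLOTS = {"kill": ["target", "weapon"]}

PROMPTS = {
    "target": "What do you want to kill?",
    "weapon": "What do you want to kill the {target} with?",
}

def next_prompt(verb, slots):
    """Return the follow-up question for the first missing slot, or None."""
    for slot in REQUIRED_SLOTS.get(verb, []):
        if slot not in slots:
            return PROMPTS[slot].format(**slots)
    return None

# A conversation fills the slots one turn at a time:
slots = {"target": "troll"}          # user typed "kill troll"
print(next_prompt("kill", slots))    # asks what to kill the troll with
slots["weapon"] = "sword"            # user answered "with the sword"
print(next_prompt("kill", slots))    # None: the command is complete
```

The "AI" here is nil; the craft is in writing the prompts, which is Brown's point.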
inklewriter is shutting down [inklestudios]
Brown mentioned in the talk linked above that interactive story editor inklewriter is a great way to develop chatbot interactions, as it provides support for many of the features a good chatbot needs (branching, ability to store variables/context of a conversation, keeping track of chunks of text).
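The underlying data model such a tool gives writers is small: nodes of text, choices that branch to other nodes, and variables stored per conversation. A toy sketch of that structure (node names and fields are invented, not inklewriter's actual format):

```python
# Sketch of a branching-dialogue graph with per-conversation state --
# the kind of structure inklewriter-style tools let writers author.
# (Node names and fields are invented for illustration.)

story = {
    "start": {
        "text": "A troll blocks the bridge. Fight or flee?",
        "choices": {"fight": "fight", "flee": "end"},
        "set": {"met_troll": True},          # variables stored on entry
    },
    "fight": {
        "text": "You draw your sword.",
        "choices": {"swing": "end"},
        "set": {"drew_sword": True},
    },
    "end": {"text": "The story ends.", "choices": {}, "set": {}},
}

def step(node_id, context):
    """Enter a node: update the conversation's context, return its text."""
    node = story[node_id]
    context.update(node["set"])
    return node["text"]

context = {}
print(step("start", context))
chosen = story["start"]["choices"]["fight"]   # user picked "fight"
print(step(chosen, context))
print(context)   # accumulated conversation state
```

A chatbot needs exactly this: branching on user input, and context that persists across turns so later text can react to earlier choices.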
A radical new theory proposes that facial expressions are not emotional displays, but “tools for social influence” [Emma Young | The British Psychological Society]
To replace basic emotions theory, which holds that facial expressions are hard-wired displays of inner states, researchers have proposed the Behavioural Ecology View (BECV) of facial displays: expressions are "flexible tools for influencing the behaviour of other people."
- Smile – (Happiness) – Influence interactant to play or affiliate
- Pouting – (Sadness) – Recruit interactant’s succour or protection
- Scowling – (Anger) – Influence interactant to submit
- Gasping – (Fear) – Deflect interactant’s attack via one’s own submission or incipient retreat
- Nose scrunching – (Disgust) – Reject current interaction trajectory
- Neutral – (“Suppressed emotion” or no emotion) – Lead the interactant nowhere in interaction trajectory
Should you trust mental health apps? [Stephen Schueller | The Neuroethics Blog]
Even though an app may have direct research evidence, that does not mean it has a good user experience. In fact, in our ratings we find that credibility and user experience have a very low correlation (Neary & Schueller, 2018). It is not surprising, then, that many mental health apps experience low levels of real-world engagement. Clinical experts such as academic teams rarely have the expertise, funding, or incentives to build an engaging mental health app. Commercial app developers rarely have the expertise, interest, or incentives to conduct rigorous scientific evaluations.
A further issue is that even if a randomized controlled trial shows a particular app is efficacious (producing a good treatment effect under idealized conditions), that doesn't mean it will prove effective, i.e., actually help people in the wild.
We need new ways of showing that apps are both engaging and effective, ways that don’t require academic studies with IRB approval. App developers are in a better position than academics to develop these methods and to show that their apps actually help people feel and function better. Many academics are stuck with outdated statistical methods that can’t detect incremental changes that play out over time.
Data scientists in business may be key to figuring this out, and they won’t be building AI. They’ll be developing new methods to quantify and optimize engagement and effectiveness through time.
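One simple version of such a method, offered here as a toy illustration rather than anything from the post: track each user's engagement trend over time, e.g., the slope of their weekly session counts (the data and the `slope` helper are made up):

```python
# Toy sketch: quantify a per-user engagement trend as the ordinary
# least-squares slope of weekly session counts over time.
# (Data is invented for illustration.)

def slope(ys):
    """OLS slope of ys against weeks 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

weekly_sessions = {
    "user_a": [7, 6, 6, 5, 4, 3],   # disengaging
    "user_b": [2, 3, 3, 4, 5, 6],   # engagement growing
}

for user, ys in weekly_sessions.items():
    print(f"{user}: trend {slope(ys):+.2f} sessions/week")
```

A negative slope flags disengagement early, without waiting for an endpoint measure; the same per-user trend idea extends from engagement counts to symptom scores, which is the kind of incremental, over-time change the post says outdated methods miss.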