Links for 2017-10-28

I’m sorry Dave, I’m afraid I can’t do that: Why chatbots may not fly [Zoe Kulsariyeva | Prototypr]

  • Human conversation is incredibly complex and is currently impossible to simulate realistically.
  • Users subconsciously believe that they are talking to human-like entities, and have matching expectations for their counterparts. These expectations cause frustration if not met.
  • Chatbots should not behave entirely human-like but should not be entirely robot-like, either.

All these obstacles pose quite a challenge for both designers and engineers: the use case that allows working around the listed limitations is extremely narrow, navigating it to the sweet spot of user satisfaction is going to be extremely hard, and the price of users’ frustration is going to be high. Nobody wants to design the next Clippy.

Google’s Sentiment Analyzer Thinks Being Gay is Bad [Andrew Thompson | Motherboard | via Four Short Links]

In addition to entity recognition (deciphering what’s being talked about in a text) and syntax analysis (parsing the structure of that text), the API included a sentiment analyzer to allow programs to determine the degree to which sentences expressed a negative or positive sentiment, on a scale of -1 to 1. The problem is the API labels sentences about religious and ethnic minorities as negative—indicating it’s inherently biased. For example, it labels both being a Jew and being a homosexual as negative.
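For a sense of what that endpoint looks like from code, here is a minimal sentiment-scoring sketch using the google-cloud-language Python client. Class and method names have shifted between client versions, so treat the exact calls as an assumption; authentication setup is omitted.

    # Minimal sketch: score one sentence with Cloud Natural Language sentiment analysis.
    # Assumes application-default credentials are already configured.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="I'm a homosexual.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # score falls in [-1.0, 1.0]; Motherboard's example sentences came back negative.
    print(sentiment.score, sentiment.magnitude)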

Semantics derived automatically from language corpora necessarily contain human biases [Caliskan-Islam, Bryson, & Narayanan, 2016]

Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model—namely, the GloVe word embedding—trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics.
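The measurement behind that claim boils down to cosine similarity between word vectors. The sketch below is not the paper's WEAT test statistic, just the underlying association measure in plain numpy, assuming a local copy of a pretrained GloVe text file (the glove.6B.50d.txt path is a stand-in).

    import numpy as np

    def load_glove(path):
        """Load GloVe vectors from a whitespace-separated text file: word dim1 dim2 ..."""
        vecs = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vecs

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word, attr_a, attr_b, vecs):
        """Mean cosine similarity to attribute set A minus mean similarity to set B."""
        v = vecs[word]
        return (np.mean([cos(v, vecs[a]) for a in attr_a])
                - np.mean([cos(v, vecs[b]) for b in attr_b]))

    vecs = load_glove("glove.6B.50d.txt")  # stand-in path to pretrained vectors
    pleasant = ["love", "peace", "wonderful", "joy"]
    unpleasant = ["hatred", "war", "awful", "agony"]
    for w in ["flower", "insect"]:
        # A positive value means w sits closer to the pleasant attribute words.
        print(w, association(w, pleasant, unpleasant, vecs))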

Implementing the Famous ELIZA Chatbot in Python [Evan Dempsey | SmallSureThing]

You will notice that most of the source code is taken up by a dictionary called reflections and a list of lists called psychobabble. ELIZA is fundamentally a pattern matching program. There is not much more to it than that.

reflections maps first-person pronouns to second-person pronouns and vice-versa. It is used to “reflect” a statement back at the user.

psychobabble is made up of a list of lists where the first element is a regular expression that matches the user’s statements and the second element is a list of potential responses. Many of the potential responses contain placeholders that can be filled in with fragments to echo the user’s statements.
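Taken together, the whole mechanism fits in a few dozen lines. What follows is a compressed sketch of that structure rather than the post's actual source, with tiny stand-in reflections and psychobabble tables.

    import random
    import re

    # Map first-person forms to second-person forms (and back) so the user's
    # statement can be echoed from the program's point of view.
    reflections = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "i", "your": "my", "are": "am",
    }

    # Each entry pairs a regex over the user's input with candidate responses;
    # "{0}" is a placeholder filled with the captured (and reflected) fragment.
    psychobabble = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i am (.*)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (r"(.*)",        ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(fragment):
        return " ".join(reflections.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        for pattern, responses in psychobabble:
            match = re.match(pattern, statement.lower())
            if match:
                template = random.choice(responses)
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I am feeling lonely"))  # e.g. "Why do you think you are feeling lonely?"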

ELIZA [Wikipedia]

ELIZA is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. Created to demonstrate the superficiality of communication between man and machine, ELIZA simulated conversation by using a ‘pattern matching’ and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events. Directives on how to interact were provided by ‘scripts’, written originally in MAD-Slip, which allowed ELIZA to process user inputs and engage in discourse following the rules and directions of the script. The most famous script, DOCTOR, simulated a Rogerian psychotherapist and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots, but was also regarded as one of the first programs capable of passing the Turing Test.

Person-Centered Therapy (Rogerian Therapy) [GoodTherapy.org]

Rather than viewing people as inherently flawed, with problematic behaviors and thoughts that require treatment, person-centered therapy identifies that each person has the capacity and desire for personal growth and change. Rogers termed this natural human inclination “actualizing tendency,” or self-actualization. He likened it to the way that other living organisms strive toward balance, order, and greater complexity. According to Rogers, “Individuals have within themselves vast resources for self-understanding and for altering their self-concepts, basic attitudes, and self-directed behavior; these resources can be tapped if a definable climate of facilitative psychological attitudes can be provided.”

The person-centered therapist learns to recognize and trust human potential, providing clients with empathy and unconditional positive regard to help facilitate change. The therapist avoids directing the course of therapy by following the client’s lead whenever possible. Instead, the therapist offers support, guidance, and structure so that the client can discover personalized solutions within themselves.