Anthropomorphizing AI: Humans Looking for Empathy in All the Wrong Places


Should we halt the race to pass the Turing Test? Has Data Quality caught up with LLMs? Are we designed to be deceived?

Alison Doucette

Towards Data Science

[Image: a black and white poster reading "Mind the Gap." Photo by Author (not an MIT employee), taken at the MIT Museum in July 2023. If you live in the Boston area, you should check out this exhibit!]

For those of you not familiar with the "Turing test," it is a test of a machine's ability to exhibit intelligent behavior indistinguishable, to a human judge, from that of another human. As some experts posit that AI technology is now only a decade or two from reaching this goal, experts and non-experts alike have begun to voice concerns about whether technology is now moving faster than our societies and human nature can handle.

Intelligent behavior to one person may mean verbal intelligence (language, reasoning, and problem solving), while to others full human intelligence also includes emotional intelligence (perceiving, understanding, and managing emotions). Robust "Algorithmic Intelligence" or "Generative AI" can mimic verbal intelligence. However, these intelligent machines will also tell you clearly (if you ask them, as I did) that they do not have emotional intelligence:

[Image: GPT-4's text response to the question "Explain how generative AI cannot have emotional intelligence?"]
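If you would like to put the same question to GPT-4 yourself, the sketch below shows one way to do it with OpenAI's Python client. It is a minimal illustration, not the exact setup behind the screenshot above; the model name and prompt wording are assumptions, and you will need your own API key.

# Minimal sketch (assumed setup): asking GPT-4 about emotional intelligence
# via OpenAI's Python client. Requires `pip install openai` and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": "Explain how generative AI cannot have emotional intelligence.",
        }
    ],
)

# Print the model's reply, analogous to the screenshotted response above
print(response.choices[0].message.content)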

People have been giving their tools human names and attributing human characteristics to inanimate objects for centuries. This behavior is termed anthropomorphism. We name our boats or cars after our spouses, or call our pool vacuums "Jaws" and our floor robots "Pacman." Giving nicknames to objects can bring some fun to our interactions with tools. In this style of interaction, people consciously and mindfully anthropomorphize tools without truly interacting with the object or tool as they might with a human. As social animals, we humans quickly try to determine whether someone is a "friend or foe," where they stand in the hierarchy, and whether they might make a suitable mate.

The challenge comes when technology companies, in an effort to improve interaction with their software tools, aim for "unconscious or mindless" anthropomorphic attribution by users: adding a human name, a human facial image, dialogue cues, informal language, and so on. As humans, we feel the greatest "social presence" from chatbots when the interaction is more than text but


