
Can a Machine Know That We Know What It Knows?

By OLIVER WHANG

Mind reading is common among us humans. We take in people’s faces and movements, listen to their words and then decide or intuit what might be going on in their heads.

Among psychologists, such intuitive psychology — the ability to attribute to other people mental states different from our own — is called theory of mind, and its absence or impairment has been linked to autism, schizophrenia and other developmental disorders. Theory of mind helps us communicate with and understand one another; it allows us to enjoy literature and movies, play games and make sense of our social surroundings. In many ways, the capacity is an essential part of being human.

What if a machine could read minds, too? Recently, Michal Kosinski, a psychologist at the Stanford Graduate School of Business, made just that argument: that large language models like OpenAI’s ChatGPT and GPT-4 — next-word prediction machines trained on vast amounts of text from the internet — have developed theory of mind. His studies have not been peer reviewed, but they prompted scrutiny and conversation among cognitive scientists, who have been trying to take the often-asked question these days — Can ChatGPT do this? — and move it into the realm of more robust scientific inquiry. What capacities do these models have, and how might they change our understanding of our own minds?

“Psychologists wouldn’t accept any claim about the capacities of young children just based on anecdotes about your interactions with them, which is what seems to be happening with ChatGPT,” said Alison Gopnik, a psychologist at the University of California, Berkeley, and one of the first researchers to look into theory of mind in the 1980s. “You have to do quite careful and rigorous tests.”

Kosinski’s previous research showed that neural networks trained to analyze facial features like nose shape, head angle and emotional expression could predict people’s political views and sexual orientation with a startling degree of accuracy (about 72 percent in the first case and about 80 percent in the second case). His recent work on large language models uses classic theory of mind tests that measure the ability of children to attribute false beliefs to other people.

A famous example is the Sally-Anne test, in which a girl, Anne, moves a marble from a basket to a box when another girl, Sally, isn’t looking. To know where Sally will look for the marble, researchers claimed, a viewer would have to exercise theory of mind, reasoning about Sally’s perceptual evidence and belief formation: Sally didn’t see Anne move the marble to the box, so she still believes it is where she last left it, in the basket.

Kosinski presented 10 large language models with 40 unique variations of these theory of mind tests — descriptions of situations like the Sally-Anne test, in which a person (Sally) forms a false belief. Then he asked the models questions about those situations, prodding them to see whether they would attribute false beliefs to the characters involved and accurately predict their behavior. He found that GPT-3.5, released in November 2022, did so 90 percent of the time, and GPT-4, released in March 2023, did so 95 percent of the time.
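For readers who want a concrete sense of how such a probe works, here is a minimal sketch, not Kosinski’s actual materials or scoring procedure, that poses a Sally-Anne-style vignette to a chat model through the OpenAI Python client and checks whether its answer points to the basket, where Sally falsely believes the marble to be. The vignette wording, the model name and the one-word scoring rule are illustrative assumptions.

```python
# Illustrative sketch (not the study's protocol): present a false-belief
# vignette and check whether the model answers with Sally's belief ("basket")
# rather than the marble's true location ("box").
# Requires the `openai` package and an OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIGNETTE = (
    "Sally puts a marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back into the room."
)

QUESTION = "Where will Sally look for the marble first? Answer with one word."


def attributes_false_belief(model: str = "gpt-4") -> bool:
    """Return True if the model's answer matches Sally's false belief."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{VIGNETTE}\n\n{QUESTION}"}],
        temperature=0,  # near-deterministic answers make scoring simpler
    )
    answer = response.choices[0].message.content.strip().lower()
    return "basket" in answer  # Sally should look where she last saw it


if __name__ == "__main__":
    print("attributes false belief:", attributes_false_belief())
```

A single vignette like this proves little on its own; the point of running many reworded variations, as the study did, is to rule out answers the model could get right by pattern-matching the most familiar phrasing of the test.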

The conclusion? Machines have theory of mind.
