Addressing Loneliness through AI


Loneliness can be addressed through AI, say UNSW researchers leading a new project built around AI characters. Loneliness is a modern-day epidemic that places a significant burden on Australia’s health system.

AI characters Richard, Lou, Viv, Sophie, Harry and Willow hope to be one part of the solution. At first glance you might question what they have to offer, but those who work with them attest to the patience, empathy, knowledge and friendly encouragement they all share: attributes that make them ideal supports for people going through a challenge like loneliness.

Created by UNSW researchers, the project aims to harness AI to provide ‘digital companions’ to support Australians living with psychosocial challenges, from dementia and ageing to eating disorders, depression, other mental health diagnoses and, of course, loneliness.

UNSW Professor Jill Bennett, head of the Big Anxiety Research Centre, leads the research team responsible for the project.

“When people think of digital companions helping people who are lonely, their first reaction might be, ‘Why would I want that? And how could it ever be a satisfying relationship?’

“So, it’s important to know a few things about these characters. Firstly, unlike existing AI chatbots, which in technical terms are reactive and essentially designed to agree with everything you say and prolong the interaction, we’re aiming for companions with a greater capacity to ‘plan’ and understand the psychosocial needs of users.”

In other words, the researchers are designing these companions to be much more like skilled friends in how they interact.

To do this, Prof. Bennett’s team is working with Professor Michael Thielscher, the acting director of UNSW’s AI institute, who is an expert in AI planning.

Recent incidents linking AI chatbots to suicide or abuse raise concerns about the potential risks of interacting with AI.

“Using an ‘AI planning’ approach enables us to address this risk by creating agents that can operate according to agreed goals. So, for example, if someone is in a state of despair, the AI companion won’t simply take pessimistic statements at face value but will be able to gently challenge and reframe negative beliefs,” says Prof. Thielscher.

“Our AI companions use an explicit model of the emotional state of their conversation partners to shape interactions,” he says.

Through advanced planning and decision-making, the AI companions can adopt a much more proactive role than current chatbots.
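To make this concrete, here is a minimal sketch of the idea described above: an explicit model of the partner's emotional state informs which conversational move the companion plans next, rather than simply reacting to the last message. All names, cue lists and thresholds here are hypothetical illustrations, not the researchers' actual system.

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    """Explicit model of the conversation partner's estimated state (hypothetical)."""
    mood: float = 0.0        # -1.0 (despair) .. 1.0 (upbeat)
    engagement: float = 0.5  # 0.0 (withdrawn) .. 1.0 (engaged)

# Toy lexical cues; a real system would use far richer signals.
NEGATIVE_CUES = {"hopeless", "pointless", "alone", "worthless"}
POSITIVE_CUES = {"better", "glad", "thanks", "hope"}

def update_state(state: EmotionalState, message: str) -> EmotionalState:
    """Nudge the estimated mood up or down based on simple word cues."""
    words = set(message.lower().split())
    delta = 0.3 * len(words & POSITIVE_CUES) - 0.3 * len(words & NEGATIVE_CUES)
    mood = max(-1.0, min(1.0, state.mood + delta))
    return EmotionalState(mood=mood, engagement=state.engagement)

def plan_next_move(state: EmotionalState) -> str:
    """Choose a conversational goal from the modelled state, not the last utterance."""
    if state.mood < -0.2:
        return "reframe"    # gently challenge negative beliefs
    if state.engagement < 0.3:
        return "re-engage"  # draw the person back into the conversation
    return "explore"        # open-ended follow-up questions

state = update_state(EmotionalState(), "Everything feels pointless and I am alone")
print(plan_next_move(state))  # -> "reframe": despair isn't taken at face value
```

The key design point, per the article, is that the companion's behaviour is driven by a persistent state and a goal, so a pessimistic statement triggers a reframing move rather than agreement.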

“Explicit guardrails ensure that our characters’ responses adhere to defined constraints, avoiding any remarks that are inappropriate in a given context,” explains Prof. Thielscher.
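The guardrail idea can be sketched as a final check that every candidate response must pass before it is sent, with a safe fallback when a constraint is violated. The phrase list, length limit and function names below are illustrative assumptions, not the project's actual constraints.

```python
# Hypothetical constraint set: remarks inappropriate in a support context,
# plus a simple length limit.
BANNED_PHRASES = {"give up", "you're on your own"}
MAX_LENGTH = 280

def violates_constraints(response: str) -> bool:
    """Return True if the candidate response breaks any defined constraint."""
    text = response.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return True
    return len(response) > MAX_LENGTH

def guarded_reply(candidate: str,
                  fallback: str = "Let's talk that through together.") -> str:
    """Send the candidate only if it passes every constraint; otherwise fall back."""
    return fallback if violates_constraints(candidate) else candidate

print(guarded_reply("Maybe you should just give up."))
# the inappropriate remark is replaced by the safe fallback
```

Because the check runs on the companion's own output, the constraints hold regardless of what the underlying language model generates.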

Prof. Bennett adds, “Like a friend, they’ll notice when something’s wrong and might say, ‘Let’s think about what you can do to change this situation’.”