Researchers from Penn State and the University of California, Santa Barbara (UCSB) found that people are less likely to take health advice from an AI doctor when the chatbot knows their name and medical history. Patients, on the other hand, like to be on a first-name basis with their human doctors.
When the AI doctor used a patient’s first name and referred to their medical history in conversation, study participants were more inclined to regard the AI health chatbot as intrusive and less inclined to follow its medical advice, the researchers found. By contrast, participants expected their human doctors to distinguish them from other patients, and were less likely to heed a human doctor’s advice when that doctor failed to remember their information.
The findings suggest that machines walk a fine line in serving as doctors, according to S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.
‘Machines don’t have the ability to feel and experience, so when they ask patients how they are feeling, it’s really just data to them,’ said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS). ‘That may be one reason why people have been resistant to medical AI in the past.’
Machines do have advantages as medical providers, according to Joseph B. Walther, Distinguished Professor in Communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. Like a family doctor who has treated a patient for a long time, a computer system could, hypothetically, know a patient’s complete medical history. By comparison, seeing a new doctor who knows only your latest lab tests may be more common today, said Walther, who is also director of the Center for Information Technology and Society at UCSB.
‘This presented us with the question: Who really knows us better, a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us? And what do we value in a relationship with a medical expert?’ said Walther. ‘So this research asks who knows us better, and who do we like more?’
The research team designed five chatbots for the two-phase study, which drew 295 participants in the first phase, 223 of whom returned for the second phase. In the first phase, participants were randomly assigned to interact with either a human doctor, an AI doctor or an AI-assisted doctor.
In the second phase, participants were assigned to interact with the same doctor a second time. But when the doctor initiated the conversation this time, it either identified the participant by first name and recalled information from their previous conversation, or asked again how the patient preferred to be addressed and repeated questions about their medical history.
In both phases, the chatbots were programmed to ask eight questions about COVID-19 symptoms and to offer a diagnosis and recommendations, said Jin Chen, a doctoral student in mass communications at Penn State and first author of the paper.
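To make the protocol concrete, the sketch below simulates one such consultation. The paper does not publish its chatbot code, so the doctor labels, the wording of the eight symptom questions and the yes/no patient responses are all hypothetical stand-ins; this is a minimal reconstruction of the design described above, not the authors’ implementation.

```python
# Illustrative sketch only: every name and question below is a
# hypothetical stand-in for the study's protocol (three doctor
# identities x individuation manipulation, eight symptom questions,
# then a recommendation). Not the researchers' actual chatbot code.
import random

DOCTOR_IDENTITIES = ["human doctor", "AI doctor", "AI-assisted doctor"]

# Hypothetical versions of the study's eight COVID-19 symptom questions.
SYMPTOM_QUESTIONS = [
    "Do you have a fever?",
    "Do you have a dry cough?",
    "Are you experiencing fatigue?",
    "Have you lost your sense of taste or smell?",
    "Do you have shortness of breath?",
    "Do you have muscle aches?",
    "Do you have a sore throat?",
    "Have you had close contact with a confirmed COVID-19 case?",
]

def run_session(identity: str, name: str, individuate: bool) -> None:
    """Simulate one phase-two consultation.

    If `individuate` is True, the doctor greets the patient by first
    name and claims to recall the previous conversation; otherwise it
    re-asks for a preferred name and repeats the intake questions.
    """
    if individuate:
        print(f"[{identity}] Hello {name}, good to see you again. "
              "I still have your answers from our last conversation.")
    else:
        print(f"[{identity}] Hello. What name would you like me to use?")
    for question in SYMPTOM_QUESTIONS:
        print(f"[{identity}] {question}")
        # Real participants typed responses; here we simulate yes/no answers.
        print(f"[patient] {random.choice(['yes', 'no'])}")
    print(f"[{identity}] Based on your answers, here is my recommendation...")

if __name__ == "__main__":
    identity = random.choice(DOCTOR_IDENTITIES)   # phase-one random assignment
    individuate = random.choice([True, False])    # phase-two manipulation
    run_session(identity, "Alex", individuate)
```

Randomizing both the doctor identity and the individuation cue mirrors the two manipulations the study crossed across its phases.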
‘We chose to focus on COVID-19 because it was a salient health issue during the study period,’ said Chen.
As medical providers look for ways to reduce costs while still providing quality care, AI medical services may offer a cost-effective alternative. However, AI doctors must provide the kind of care and advice that patients are willing to accept, said Cheng Chen, doctoral student in mass communications at Penn State.
‘One of the reasons we conducted this study was that we read in the literature a lot of accounts of how people are reluctant to accept AI as a doctor,’ said Chen. ‘They just don’t feel comfortable with the technology and they don’t feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation, and solve this uniqueness problem.’
The findings suggest that this technique may backfire. ‘When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,’ said Sundar.
They found that about 78% of the participants in the experimental condition featuring a human doctor believed they were interacting with an AI doctor. A possible explanation, Sundar added, is that people may have grown accustomed to online health platforms during the pandemic and may have expected a richer interaction.
Looking ahead, the researchers believe that more research into the authenticity of machine interactions, and into machines’ ability to engage in genuine back-and-forth exchanges, may help machines develop better relationships with patients.
The research was presented at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems, the premier international conference for research on human-computer interaction.
By Marvellous Iwendi.
Source: Penn State News