Virtual psychiatrist: could Artificial Intelligence replace your psychiatrist?

Artificial intelligence intended for psychiatric use already exists today. But could these algorithms, as neutral and tireless as they may be, one day replace the human psychiatrist?

More than twenty artificial intelligence systems have now been validated by studies for psychiatric use. So far, however, these machines have been used to collect data or to support medical staff in decision-making. Despite their undeniable potential, these technologies still have several limitations. The question of whether they could ever match the skills of a real psychiatrist is therefore a legitimate one. To answer it, we first need to look at what a psychiatrist actually does.

What is a psychiatrist?

Like a cardiologist or a surgeon, the psychiatrist is a consulting physician. After completing a full course of training, he or she can diagnose patients and prescribe treatments. That training combines general medical studies with around six years of specialization in mental illness.

To reach a diagnosis, the psychiatrist can carry out mental and physical examinations, order laboratory tests and medical imaging, and study a detailed psychosocial history. To treat patients, he or she can then draw on several approaches that can be combined: psychotherapy, a range of medications, neurostimulation techniques and social interventions.

A psychiatrist taking notes

More and more robot therapists after Covid-19

The idea of integrating artificial intelligence into the psychiatrist's toolkit dates back to 1966, the year Joseph Weizenbaum finished coding his chatbot (dialogue program) ELIZA. Applied to psychiatry, the AI simulated a Rogerian (person-centered) psychotherapist by reformulating most of the "patient's" statements as questions and asking them back.
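To get a feel for how that reformulation works, here is a minimal, hypothetical Python sketch of the idea: swap first-person pronouns for second-person ones and turn the statement back into a question. It illustrates the principle only and is not Weizenbaum's original pattern script.

```python
import re

# Pronoun swaps used to "reflect" the patient's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

def reflect(fragment: str) -> str:
    """Swap first-person pronouns for second-person ones, word by word."""
    words = re.findall(r"[\w']+", fragment.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Reformulate a patient's statement as a Rogerian-style question."""
    match = re.match(r".*\bi (?:feel|am|think) (.+)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    # Fallback: turn the whole statement back into an open question.
    return f"Can you tell me more about why {reflect(statement)}?"

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

The real ELIZA relied on a much richer set of keyword patterns and ranked scripts, but the reflection principle was essentially this.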

The concept has re-emerged in recent decades, notably in the context of the health crisis. More and more "robot therapists" and systems that assess a person's state of health from their voice have appeared. Some researchers believe that patients may feel more at ease with robots than with doctors, and could therefore develop genuine therapeutic relationships with these machines.

A robot psychiatrist

The illusion of an objective virtual psychiatrist

In theory, besides being neutral and free of judgment, digital psychiatrists would have several other advantages. Having no emotions would allow them to make objective, replicable decisions. And unlike their human counterparts, they would never make errors due to fatigue or be unavailable.

Nevertheless, it should not be forgotten that these machines are coded by engineers, who rely on their own internal models to collect data and train the artificial intelligence. In many cases, neural networks have been accused of discrimination or bias because of the data they were fed. The machine's processing and decisions also depend on the subjectivity of the coder and his or her technical choices: experience, quality of training, salary or even the time available to write the code can all influence the machine.

An AI psychiatrist

No more subjective than the human psychiatrist?

For the time being, artificial intelligence is not entirely free of subjectivity… or at least no more so than its human counterpart. In their modus operandi, the psychiatrist and the algorithm are not so different.

Data collection and processing

During a psychiatric interview, the psychiatrist and the artificial intelligence work in roughly the same way. They first collect data exhaustively and fairly indiscriminately, drawing on the medical file, the patient's gestures and reactions, and so on. They then sort and process this data, keeping what they judge relevant and discarding the rest. By organizing the data in this way, the psychiatrist, just like the machine, matches it against pre-existing profiles that share common traits and arrives at a diagnosis.
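As a rough illustration of that "select, then match against profiles" step, here is a hypothetical Python sketch. The feature names and diagnostic profiles are invented placeholders, not clinical criteria.

```python
# Hypothetical sketch of the "select relevant data, match to profiles" idea.
PROFILES = {
    "depressive episode": {"low mood", "sleep disturbance", "loss of interest"},
    "anxiety disorder": {"excessive worry", "restlessness", "sleep disturbance"},
}

# The features the model "deems relevant" are those appearing in its profiles.
RELEVANT = set().union(*PROFILES.values())

def diagnose(observations: set) -> str:
    """Keep relevant observations, then pick the profile sharing the most traits."""
    selected = observations & RELEVANT  # discard what is deemed insignificant
    return max(PROFILES, key=lambda label: len(PROFILES[label] & selected))

# Raw interview data: exhaustive, including details that turn out to be irrelevant.
interview = {"low mood", "sleep disturbance", "loss of interest", "wears glasses"}
print(diagnose(interview))  # -> depressive episode
```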

The construction of the model

Each psychiatrist develops what is called an "internal model": a set of mental processes, explicit or implicit, that allows him or her to reach a diagnosis. The psychiatrist builds this model throughout training and across a career, in particular through clinical experience, reading case reports and so on, and it sharpens and strengthens with practice. Similarly, the algorithm develops its own internal model during its initial training and learning phase. Note, however, that the machine has one advantage here: it can process far more cases than a psychiatrist ever will.
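To make the analogy concrete, here is a hedged sketch of an algorithm's "internal model" being built from past cases and then applied to a new one, using a simple scikit-learn text classifier as a stand-in. The toy case notes and labels are invented for illustration; a real system would be trained on large, validated clinical datasets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy "case reports" and their labels.
past_cases = [
    "low mood, poor sleep, loss of interest in activities",
    "constant worry, restlessness, muscle tension",
    "sadness, fatigue, difficulty concentrating",
]
diagnoses = ["depression", "anxiety", "depression"]

# Fitting on past cases builds the internal model; each new case refines it,
# much as clinical experience sharpens the psychiatrist's own model.
internal_model = make_pipeline(CountVectorizer(), MultinomialNB())
internal_model.fit(past_cases, diagnoses)

# Applying the model to a new patient is the "use" step described below.
new_patient = "reports low mood and poor sleep for several weeks"
print(internal_model.predict([new_patient])[0])  # -> depression
```

Every additional case added to the training data shifts the model's decision boundaries, which is the sense in which the machine's internal model "sharpens" with exposure.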

The use of the model

The internal model matters because it is on this basis that the psychiatrist or the machine makes decisions: the doctor or the neural network refers to it when treating new patients. Other factors will certainly come into play for the human, such as salary or workload. Likewise, hardware costs and the time required for training or use have to be considered for the artificial intelligence.

An interview with a psychiatrist

So is the human shrink more reliable than the machine?

Endowed with empathy, the psychiatrist is better able to pick up the gestures that betray what patients leave unsaid. This matters all the more for suicidal patients or victims of domestic violence, for example. The psychiatrist is also better placed to identify the patient's real problems and to draw on information from very different points in time. He or she is more flexible than the AI, in the sense that the exchange can be redirected in real time according to the answers obtained.

The lack of physical presence is another important limitation of AI in psychiatry, even though it is crucial to the management and treatment of mental illness. The clinical interview is characterized above all by doctor-patient dialogue, in which speech and non-verbal communication are both essential.

For the time being, the psychiatrist clearly remains more reliable for caring for and treating mental illness. Artificial intelligence nevertheless serves as a valuable tool that helps speed up data collection and diagnosis and improve the quality of treatment.

SEE ALSO: Top 10 Things Artificial Intelligence Does Better Than Humans

Should we forget the concept of the virtual shrink?

As with most AI projects in other sectors, the feats recorded recently do not erase the limiting factors. So far, artificial intelligence remains a tool rather than a substitute for a role held by humans. But the AI we know today is what is called "weak" AI, and no one yet knows what the next breakthroughs in the field will bring.

Julia: the virtual shrink capable of diagnosing severe depressive disorders

Earlier this month, the team at the Sanpsy laboratory announced that it had designed a groundbreaking virtual psychiatrist. According to their description, it is the first virtual human capable of conducting a psychiatric interview to diagnose depression. Their work has been published in the journal Scientific Reports.

Julia, the psychiatrist robot that detects depression

The AI is called Julia. She has a fairly conventional face and voice. For added realism, she was trained to produce gestures and facial expressions that follow the conversation, using motion-capture techniques.

“The interview between this virtual human and the patient is constructed from a validated medical reference (editor’s note: based on the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders produced by the American Psychiatric Association), enriched with turns of phrase and gestural and facial interactions that reinforce the patient’s engagement in the exchange.”

Pierre Philip, hospital practitioner at the Bordeaux University Hospital and director of the Sanpsy unit (sleep – addiction – neuropsychiatry) at the CNRS

Pierre Philip and his team found that the AI's performance increases with the severity of the depressive symptoms. The results it has posted so far are promising, but they still do not match those of a real doctor, who remains more reliable at diagnosing from mild symptoms.

“The challenge is not to replace the doctor, but to help them diagnose more quickly patients not yet identified as depressive and, possibly, in the future, to ensure quality medical follow-up in the patient’s home.”

According to its designers, Julia is meant to provide quality medical care in the patient's home. So far, the algorithm has achieved "an acceptability score of 25.4/30 from patients".

An AI that does better than several human radiologists

In 2020, a study involving 25,000 patients in the US and UK was conducted to test an AI on breast cancer. Scott McKinney of Google Health in Palo Alto, California, and colleagues evaluated a deep-learning algorithm on its ability to recognize breast cancer on mammograms.

In this study, the machine had no preconceived idea of what breast cancer is; it simply learned from millions of iterations over images. In the end, it far exceeded the performance of several human radiologists.

“In this study, the AI was simply better at doing this task than the human comparators, and I think that really shows how powerful this technology already is. And keeping in mind the ongoing self-improvement process that these algorithms go through, you can project 10 years from now… where we might think these breast cancer algorithms will be. So I think that really sets the stage to start looking more specifically at cases that might have more relevance to psychiatry.”

Richard Cockerill, MD, assistant professor of psychiatry at Northwestern University Feinberg School of Medicine in Chicago
