Consider a modern twist on the Trolley Problem, a series of thought experiments first introduced in the 1960s. You see a baby in the path of a runaway trolley rushing down a slope. You can pull a lever to divert the trolley so that it hits a robot instead. The catch is that the baby was manufactured in a lab through artificially programmed DNA, while the robot is a state-of-the-art sentient companion able to ‘feel’ the entire spectrum of human emotions. Whom would you save?
Contemporary advances in the natural sciences, aided by technology, have enabled us to envision a world where such beings not only exist but are (hopefully) well integrated into human society. These advances result from the heightened interdisciplinarity of the sciences in question. AI-enabled therapy is one such field: it embodies a multifaceted perspective not only of the world but also of the kinds of species we can be.
Cyborg, for instance, is no longer just a fantastical member of the Justice League; it is now an augmented-human species category. Neil Harbisson was born with achromatic vision (complete color blindness) and created a sensory system called the Cyborg Antenna that allows him to ‘hear’ colors through audible vibrations in his head. The device, which is permanently attached to his skull, earned him the label of the world’s first officially registered cyborg.
AI-enabled therapy is a major stakeholder in this changing world and holds many examples that push us to broaden our understanding of what makes us human.
Introducing AI-enabled therapy
AI as a tool in healthcare is used on multiple levels. It can be used to create better diagnostic tools, improve medicines, and assist research (Jabbar et al., 2018). Lately, it has also begun to be used as a semi-autonomous direct contact link with the patient, which is what this article refers to as AI-enabled therapy. Since words that were once limited to technical jargon are now in widespread use, they can refer to multiple things. Homing in on what they mean for the scope of this article:
- AI here means artificially developed algorithms that can emulate, to some extent, one or more human cognitive capabilities. Examples include understanding language (voice-detection software, which many of us have experienced with ‘Siri’) and recognizing pictures (classification algorithms). AI falls under the wing of computer science but has broad applications in other fields as well.
- Enabled, as mostly used in computing, is defined by Oxford as ‘adapted for use with the specified application or system.’
- Therapy is simply defined as the remediation of a health problem.
Therefore, AI-enabled therapy means “artificially developed algorithms that emulate natural human abilities in order to remediate health problems, whether medical, social, cognitive, or otherwise.” Here we will begin with a broad overview of the many kinds of systems this can describe and then delve deeper into social support bots.
A classification system for AI-enabled therapy
A rudimentary way to classify the kinds of AI in therapy that operate at the front end, so to speak, producing human-machine interactions, is to differentiate between systems based on the kind of embodiment they use and the autonomy they are allowed to exert. The ‘Embodiment axis’ distinguishes digital systems from physical ones. Autonomy, on the other hand, refers to the degree of self-governance or self-operation a bot can exercise without external or hard-coded influence.
It is important to note that these categories are neither discrete nor absolute. Autonomy, for instance, is expected to remain partial, or limited to the immediate environment, in almost all AI bots for at least a few years to come; in practice, a system is usually an amalgamation of scripts and self-generated responses, rarely one or the other. Thus, this system is not meant to provide perfect, distinctive categorization; rather, it is expected to make it easier to observe the differences between the myriad AI-enabled systems currently operating. The negative sign indicates zero flexibility in the system in question and a very limited range of highly predictable, pre-coded responses. The positive sign indicates a relatively higher range of responses and a broader area of functionality.
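The two axes can be sketched as a simple lookup. This is only an illustrative model of the classification just described; the function and the example placements are this sketch's own, not part of any cited system.

```python
# Illustrative sketch of the two-axis classification described above.
# The boolean encoding and example placements are this sketch's own.

def category(physical: bool, autonomous: bool) -> str:
    """Map (embodiment, autonomy) onto the article's four example types.

    physical:   True = physical embodiment, False = digital
    autonomous: True = exerts some autonomy (+), False = scripted (-)
    """
    if physical and not autonomous:
        return "powered exoskeletons"
    if not physical and not autonomous:
        return "diagnostic apps / avatar therapy"
    if not physical and autonomous:
        return "chatbots / virtual therapists"
    return "social support robots"

for name, phys, auto in [
    ("symptom-checker app", False, False),
    ("Ellie", False, True),
    ("rehabilitation exoskeleton", True, False),
    ("Paro", True, True),
]:
    print(f"{name}: {category(phys, auto)}")
```

As the surrounding text notes, real systems blend scripting and self-operation, so the hard boolean split here is a simplification for readability.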
The various kinds of AI-enabled therapies
Beginning with the fourth quadrant and going counter-clockwise, each of these types will be discussed with a few examples, which are also illustrated in Figure 1.
I) Powered exoskeletons are wearable machines that employ electrical, hydraulic, pneumatic, or a combination of these and other techniques to augment, assist, or rehabilitate limb movement. They have been used extensively for people with movement or motor disorders, paraplegia, and paralysis, to name a few.
II) Diagnosis is a major goal of any therapy, and AI in diagnosis has made leaps and bounds in making the system more efficient. It has also been deployed in app form as a direct user contact. Better known as diagnostic apps, these exercise no autonomy; they operate on pre-fed information or a crowdsourced database. Such applications, accessible through smartphones or over the internet, typically respond to symptoms observed by the user with a possible diagnosis of the medical condition that may be present. They were initially introduced to relieve medical personnel of repeated case inquiries and workload, and they have become even more popular during the recent pandemic.
The AVATAR project, which also falls in this quadrant, is a VR technique whereby people seeking therapeutic relief from auditory hallucinations are asked to create an avatar of the voices they hear. The therapist, from another room, then uses the avatar to communicate with the person and conducts sessions in this way.
III) Chatbots are online, application-based software systems able to hold conversations with users. They are increasingly being used in mental health to alleviate, or simply manage, mental health symptoms.
Ellie is an example of a sophisticated AI therapist that is digital and semi-autonomous. It is designed to build rapport with the person by analyzing and responding empathically to facial micro-expressions, verbal cues, and body language. Lucas et al. (2017) reported a major breakthrough with patients suffering from PTSD: Ellie proved the best method for discussing and divulging symptoms, since the blend of effective rapport formation and assured anonymity created an environment free of stigma, enabling high disclosure.
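To make the scripted end of the autonomy axis concrete, here is a toy keyword-matching chatbot. This is a deliberately minimal sketch: real systems such as Ellie rely on learned models of speech, micro-expressions, and body language, none of which appear here, and every rule and reply below is invented for illustration.

```python
import re

# Toy rule-based chatbot illustrating the scripted (low-autonomy) end of the
# spectrum: every response is pre-coded, triggered by simple keyword matches.
RULES = [
    (re.compile(r"\b(sad|down|lonely)\b", re.I),
     "I'm sorry to hear that. Would you like to talk about what happened?"),
    (re.compile(r"\b(anxious|worried|nervous)\b", re.I),
     "That sounds stressful. What do you think is making you feel this way?"),
    (re.compile(r"\b(sleep|tired|exhausted)\b", re.I),
     "How has your sleep been over the past week?"),
]
DEFAULT_REPLY = "I see. Can you tell me more?"

def reply(message: str) -> str:
    """Return the first matching scripted response, or a generic fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return DEFAULT_REPLY

print(reply("I have been feeling sad lately"))
```

Nothing here is autonomous: the bot cannot respond to anything its author did not anticipate, which is precisely the limitation that semi-autonomous systems like Ellie try to overcome.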
IV) Social Support Robots
Lastly, there are social support bots. These are physical, three-dimensional robots that exert some autonomy over their responses, which are not just scripted but arise, to an extent, from contextual understanding of the situation around them. A more descriptive nomenclature is provided in a study by Čaić et al. (2019).
In a fast-paced world of increasing social distance, partly due to the virus but largely due to a global shift towards digital technology, social support robots may be invaluable both for task assistance and for providing a much-needed respite from loneliness and social isolation. This particular category thus offers a wide scope of help that one can draw from AI-based therapists.
- Healthcare for the elderly uses these support systems to provide physical assistance, medication reminders, and physical companionship. They were seen to reduce loneliness, agitation, and depressive symptoms (Broadbent, 2017). Some preliminary findings also suggest that Paro, an advanced therapeutic robot that looks like a fluffy seal, reduces signs of aggression in patients suffering from dementia (Scoglio et al., 2019).
- Healthcare for children makes use of social support robots as emotional companions during hospital stays, distractors during medical procedures, and guides promoting certain behaviors. Most studies report positive influences in all of these contexts (Moerman et al., 2019).
Children with autism spectrum disorder are also hoped to benefit from the latest developments, as the robots provide a toned-down version of what interactions with humans may look like. This is expected to help the children develop better social skills by giving them a practice field that is responsive, encouraging, and provides appropriate feedback (Broadbent, 2017).
The relationship we share with social support robots
The integration of artificial beings into our lives is clearly increasing and is expanding into even intimate contexts such as therapy. Ethicists argue over whether such integration should be allowed at all. Concerns have been raised about the human-human interaction it might discourage, as well as the dysfunctional relationships people could form with their AIs. However, considering that the integration is already underway, at least in part, another disconcerting debate is how these ‘artificial’ systems should be integrated into and treated by human society.
In August 2013, SAM, a soil-sample analysis unit of the Mars rover Curiosity, was made to sing ‘Happy Birthday’ to itself through the whirs and vibrations of its machinery, to commemorate its first successful year on the planet. From being termed one of the world’s ‘loneliest birthdays’ to explanations for the lack of singing thereafter (to conserve power) framed as an early ‘demise’, the anthropomorphization is reminiscent of WALL-E. It is one of many examples from around the world of humans attaching emotions to inanimate objects.
The question, then, is no longer whether humans are, to paraphrase Elvis, “simply fools in love” with the devices, tools, or systems they use, but whether the latest technological developments are making way for a greater reciprocation of this unrequited affection. David Gunkel, in his book Robot Rights, lays out the important philosophical perspectives to consider when debating this topic. For instance, the question of whether robots can have rights is still different from whether they should be granted them. The advancement of social support robots, however, seems to be fast bridging the divide between the two.
The field of healthcare provides inherently emotional and vulnerable social contexts for the application of intelligent systems. In an endeavor to meet both the physical and psychological requirements of caregiving, there have been giant leaps toward higher awareness and more human-like responses. Researchers at Osaka University in Japan recently revealed new developments on a life-like robot child that can ‘feel’ pain. Affetto, an AI-powered robot equipped with newly developed tactile sensors, responds with a wince when the input passes a certain threshold. The robot makes use of 116 distinct facial points, each controlled by underlying deformation units that shape part of the android skin to reliably create facial expressions. This allows for greater expressiveness and would be useful for further integrating AI-enabled agents into healthcare and therapy.
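The threshold behavior described for Affetto can be sketched as a simple mapping from a tactile reading to an expression state. Everything below, including the threshold value, the sensor scale, and the two "deformation unit" names, is invented for illustration and is not Affetto's actual control system.

```python
# Hypothetical sketch of a threshold-based tactile response, loosely inspired
# by the Affetto description above. The threshold, sensor scale, and the two
# "deformation unit" names are invented for illustration only.

WINCE_THRESHOLD = 0.7  # normalized tactile intensity above which to wince

def facial_response(tactile_intensity: float) -> dict:
    """Map a normalized tactile reading in [0, 1] to a crude expression state."""
    intensity = max(0.0, min(1.0, tactile_intensity))
    if intensity > WINCE_THRESHOLD:
        # Drive the (hypothetical) deformation units toward a wince,
        # scaling with how far the reading exceeds the threshold.
        strength = (intensity - WINCE_THRESHOLD) / (1.0 - WINCE_THRESHOLD)
        return {"label": "wince", "brow_furrow": strength, "eye_squeeze": strength}
    return {"label": "neutral", "brow_furrow": 0.0, "eye_squeeze": 0.0}

print(facial_response(0.9)["label"])  # strong stimulus
print(facial_response(0.2)["label"])  # gentle stimulus
```

The real system coordinates 116 facial points rather than two named units, but the basic idea of gating an expression on a sensor threshold is the same.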
If Curiosity, a science laboratory millions of miles away, was made to hum ‘Happy Birthday’ to itself, the integration and attachment with caregiving agents can be expected to be much higher. An anecdotal incident from Japan illustrates the point further: AIBO robot dog companions were given proper Buddhist funerals after the company canceled their production. The endeavor to enable better responses to, and understanding of, a human’s mental state and emotions may lead toward robot sentience; this is not a far-fetched reality.
How we treat robots and the space that we create to allow their possible sentience to manifest in is a comment on human society and its willingness to integrate other forms of beings.
Many instances of physical violence targeted at social support bots have been logged around the world. Such abuse may currently not go beyond physical damage, but it reveals deeper truths about human nature. It also serves as a warning about the kind of precedent we may be setting for acceptable actions towards robots.
Social companions in the form of bots offer the possibility of a unique relationship between themselves and the humans they accompany. However, is this a companionship that can be hoped to be mutually beneficial? Would we allow for that possibility? And is the human race ready to respond favorably to trans-species beings and their rights?
This article was guest-written by Nehchal Kaur, an M.Sc. Cognitive Systems student at Ulm University, Germany. This article was edited by Kyle Giffin, a UC Berkeley Alumnus, and Abraham Niu, a junior undergraduate student at UC Berkeley studying Cognitive Science and Data Science.
- “Personen: Neil Harbisson.” Quadriga Hochschule Berlin, www.quadriga-university.com/en/persons/neil-harbisson-cyborg-foundation-222082.
- Marinov, Bobby, et al. “42 Medical Exoskeletons into 6 Categories.” Exoskeleton Report, 6 Feb. 2020, exoskeletonreport.com/2016/06/medical-exoskeletons/.
- Jutel, Annemarie, and Deborah Lupton. “Digitizing Diagnosis: A Review of Mobile Applications in the Diagnostic Process.” Diagnosis, vol. 2, no. 2, 2015, pp. 89–96, doi:10.1515/dx-2014-0068.
- “King’s College London — Homepage.” King’s College London — AVATAR Therapy, www.kcl.ac.uk/ioppn/avatar-project/therapy.
- Michael Rucker, PhD. “Using AI for Mental Health.” Verywell Health, 17 Sept. 2020, www.verywellhealth.com/using-artificial-intelligence-for-mental-health-4144239.
- “Virtual Reality Avatar Therapy for People Hearing Voices — Full Text View.” Full Text View — ClinicalTrials.gov, clinicaltrials.gov/ct2/show/NCT04099940.
- “Why Give AI A Face? Exploring Digital Humans & Their Impact in 2020.” Digital Humans, 9 Feb. 2021, digitalhumans.com/blog/why-give-ai-a-face/.
- Jabbar, M. A., Samreen, S., & Aluvalu, R. (2018). The future of health care: Machine learning. International Journal of Engineering & Technology, 7(4.6), 23–25. https://doi.org/10.14419/ijet.v7i4.6.20226
- Broadbent, E. (2017). Interactions With Robots: The Truths We Reveal About Ourselves. Annual Review of Psychology, 68(1), 627–652.
- Čaić, M., Mahr, D., & Oderkerken-Schröder, G. (2019). Value of social robots in services: Social cognition perspective. Journal of Services Marketing, 33(4), 463–478. https://doi.org/10.1108/JSM-02-2018-0080
- Lucas, G. M., Rizzo, A., Gratch, J., Scherer, S., Stratou, G., Boberg, J., & Morency, L.-P. (2017). Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Frontiers in Robotics and AI, 4. https://doi.org/10.3389/frobt.2017.00051
- Moerman, C. J., van der Heide, L., & Heerink, M. (2019). Social robots to support children’s well-being under medical treatment: A systematic state-of-the-art review. Journal of Child Health Care, 23(4), 596–612. https://doi.org/10.1177/1367493518803031
- Scoglio, A. A., Reilly, E. D., Gorman, J. A., & Drebing, C. E. (2019). Use of Social Robots in Mental Health and Well-Being Research: Systematic Review. Journal of Medical Internet Research, 21(7), e13322. https://doi.org/10.2196/13322