Seniorlink's Chief Product Officer, George "GK" Kassabgi, joined a panel at the Digital Health Summit earlier this month to talk about "How Messaging Will Emerge as the Backbone of Healthcare".
In the course of the panel, the conversation inevitably turned toward Artificial Intelligence (AI), which has been touted as a major area of focus in the effort to make messaging scale as a solution within the highly regulated, complex healthcare industry.
Read an adapted version of George's thoughts on AI below, and click the video at the bottom of the post to see the segment. (The video is set to start at the beginning of George's comments on AI, but if you have the time we encourage you to rewind to hear the full panel – all of the guests were excellent!)
You can also find George's extensive writing on the topic of AI and messaging at his Medium page.
"AI" is overhyped - Instead, start with the conversation
AI is a very important topic, but it's completely overhyped; the label is used in ways that are really confusing, and that takes away from the true opportunity of an automatically responding, "chatbot" type of technology.
What's happening with messaging AI is you're having a machine automatically respond. There's no real intelligence there. It's looking for patterns; there's no understanding of language. It does no good to overhype these terms.
Where the attention needs to be is on really studying the conversation. What does that conversation look like, between the patient and the care manager, for example? This is a unique conversation, not like the conversation between you and [Amazon's voice app] Alexa. What is the nature of that conversation?
We've studied this type of conversation extensively and found that it is very stateful; it carries a lot of rich context over time; and it leverages a huge amount of information about the patient. So we believe in, and we are working on, an auto-response – you can call it a chatbot – that blends the human expert with the machine response. And in that blending you can have the best of both worlds.
The "blend" of human and machine
You're going to have a relatively lackluster, or a very poor, user experience if you rely on a chatbot-only approach to carry a conversation in a healthcare setting. Let me repeat that: if you use a chatbot by itself, a machine only, to carry a healthcare conversation – I don't care if it's this year or next year – you will have a pretty poor user experience. Period.
One way to approach this is to blend it. What does that mean? Well, some of the responses should absolutely come from a human expert. There's a personal touch there, a nuance that, because there's no real intelligence in AI, cannot be handled by a machine. Not this year, not next year. That human touch will be necessary.
But for some of the conversation, especially with clinical pathways, much of what a care manager is doing is assessment – carrying out a clinical assessment – and there the machine is actually better than a person. The machine is more consistent; the machine is going to execute that assessment 100% accurately 100% of the time. So if you can achieve the blending of the human and the machine, and understand how the user (who is completely oblivious to this) is taking part in the conversation, we think there's a tremendous opportunity there in the sector.
So instead of throwing the term "AI" around – assuming the chatbot is going to solve things, or a chatbot is going to replace the doctor – really study the conversation and understand that it is bimodal. There is a very highly structured, institutional mode that is very conducive to machine response, and there is the personal, empathetic human touch that is not at all conducive to a machine. And in that blending is the answer.
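To make the bimodal idea concrete, here is a minimal, purely illustrative sketch of a "blended" router: structured clinical-assessment steps go to the machine, while anything open-ended or emotionally loaded is escalated to the human care manager. All names and heuristics here are hypothetical assumptions for illustration – this is not Seniorlink's implementation, and a real system would use far richer signals than keyword matching.

```python
# Hypothetical sketch of blended human/machine conversation routing.
# Not a real product's implementation; the keyword heuristic is a stand-in
# for whatever classifier a production system would actually use.

def route_message(message: str, in_assessment: bool) -> str:
    """Decide whether the machine or a human expert should respond.

    Structured assessment steps go to the machine, which asks the same
    questions the same way every time (consistency is its strength).
    Open-ended or emotionally loaded messages go to the human expert.
    """
    # Illustrative signals that a reply needs a human's empathy and nuance.
    open_ended_signals = ("worried", "scared", "why", "help me understand")
    if in_assessment and not any(s in message.lower() for s in open_ended_signals):
        return "machine"  # scripted, consistent assessment step
    return "human"        # personal touch required

# The user never sees which side answered; the blend is invisible to them.
print(route_message("Yes, I took it this morning", in_assessment=True))
print(route_message("I'm worried about these side effects", in_assessment=True))
```

The design point the panel remarks suggest is that the split is mode-based, not turn-based: the machine owns the structured assessment flow end to end, and control passes to the human whenever the conversation leaves that mode.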