An interview with Gabriel Scali
Partner, Reckon Digital - Healthware Group
AI/ML and autonomous agents have the potential to create a highly automated, highly effective future for healthcare, but to do so they must overcome big challenges that generally relate to trust and accountability. Most worthwhile healthcare applications are high stakes and therefore require transparency, auditability and the careful management of risks to both health outcomes and social outcomes, such as privacy. These issues must be addressed first if we are to fully realise the potential of these technologies. The latest research in the field is starting to indicate ways of transitioning from a "frontier" mentality to one where genuinely useful, considered applications can make a difference for citizens. Coincidentally, that is one of the great contributions a conference like "Frontiers" makes.
It will be absolutely central to all of its facets: prevention, diagnosis, choice of treatment and delivery. AI alone, though, is nothing but technology, while healthcare is an incredibly complex, dynamic sociotechnical system. The hardest challenges we are facing are not with the technical aspects of AI, but rather with the complexity of integrating it with how humans think and act.
Naturally the balance will shift. There will be an increasing number of tasks that autonomous agents can handle, relieving humans of them. Think of the many useful applications robots can then have in hospitals. We must not forget, though, the so-called Substitution Myth, well known in Human Factors: when human roles are entirely reassigned to a machine, additional supervision work often becomes necessary. For this and many other reasons, the most exciting results will come from instances of Human-Autonomy Teaming, in which human-machine collaboration delivers results greater than the sum of its parts.
The important thing will be to keep the focus on the end game of delivering better healthcare to more people. The risks in these times of great advances are those of land-grabbing, short-term speculation, and the proliferation of half-baked solutions that play on market excitability but have little substance beyond the hype.
Firstly, better education of both professionals and the public. This is not only possible but crucial to generating the maximum benefit from these technologies. The past year has been dominated by hyped expectations on the one hand and AI doomsday scenarios on the other. There is still a lot of work to do with AI and autonomous agents – work that will probably last for decades – but I would be content if this were the year in which the necessary phase of reality checks and sensible thinking started. When that happens, citizens can begin reaping the first real benefits of these new advances.
This interview was originally published in the special issue of CoFounder.