Ophthalmology is a Harsh Reminder of the Current Limitations of AI

By Dhruv Syam

The early 2010s were a time of unimaginable hype around the term “artificial intelligence”. Every company seemed to work the word into everything, from voice assistants to cloud services “optimised” by AI. More recently, however, we seem to have entered an AI innovation winter, with people taking the term with a pinch of salt. We have yet to see self-driving cars for consumers, or the AI productivity systems once billed as the next industrial revolution. Discussing medical uses of AI in 2019, Eric Topol of the Scripps Research Translational Institute said that “the state of AI hype has far exceeded the state of AI science”.

One prime example is the field of ophthalmology: the branch of medicine and surgery that deals with the diagnosis and treatment of eye disorders. Many predictions and models suggested that, given a good retinal scan, AI would one day make the correct diagnosis 93% of the time. That dream has yet to materialise: accuracy rates in practice vary from 30% to 80%, which is nowhere near accurate enough to diagnose patients.

Three main issues arise with AI in ophthalmology, and they are indicative of AI in general. The first is the difficulty of getting data into a clear, usable form. AI is completely dependent on data, yet medical data is often poorly collected and scattered across incompatible formats. AI systems have also been shown to absorb human stereotypes when trained on skewed or misleading data, and the consequences of a system that fails to diagnose patients of certain ethnicities accurately could be severe.

Then there are the challenges of privacy and regulation. Medical records are closely guarded by individuals and regulators alike, so policy is needed that allows people’s data to be used in a safe and anonymous fashion.

Finally, there is the age-old black box problem: we understand a system’s inputs and outputs but often don’t know how it arrived at its result. AI algorithms are highly complex and closely guarded by the firms that develop them; at times even the programmers who built a system cannot explain how it reached a particular decision. This characteristic of AI is both an advantage and a weakness, and the lack of transparency is one reason governments have been very cautious about permitting new AI. You can imagine a scenario in which an AI system diagnoses someone incorrectly and the mistake leads to a person’s death. There would be no precedent, and the liability would be questionable. Whose fault is it: the firm that developed the AI, the hospital, or would there be no accountability at all? Such dilemmas can only be resolved with government intervention, or people will never trust an AI system.

There is also the obvious problem that patients want the human interaction and empathy that AI systems can never bring to the table. It’s all very well asking Siri for the weather, but a robot gynaecologist is a completely different matter!