Lecture: Autonomous AI: Finding a Safe, Efficacious and Ethical Path to Increasing Healthcare Productivity

Technology is revolutionizing the world around us including creating new opportunities to deliver medical education and to provide clinical care globally. Artificial Intelligence (AI) has been amongst the most exciting advances in the last few years and now such work is being deployed globally.

This lecture will review how AI has been developed and approved specifically for diabetic retinopathy. This interactive webinar will also highlight the critical importance of the demonstrated safety, efficacy, and critical ethical reviews of such technology.

Lecturer: Dr. Michael Abramoff, MD, PhD, FARVO, Professor of Ophthalmology and Visual Sciences, University of Iowa, USA

Transcript

DR ABRAMOFF: I’m very excited to be doing this. It’s an awesome intro that Orbis does here. We’ll be talking about artificial intelligence. What it means for the diabetic eye exam, for people with diabetes, for the developing world, and especially for health care productivity. Because it’s a big interest of mine. We’ll be talking about the ethical background, and how we know whether or not AI is safe, efficient, ethical, equitable. A few conflicts of interest. I’m founder and executive Chairman of what is now Digital Diagnostics. We’re changing our name from IDX. And I’m also professor at the University of Iowa. It’s an enormous honor to be here presenting to you as an audience. Very exciting. Before we start, and I think we can show the question now, I was wondering whether there are any concerns you have, right now, before we start, using autonomous AI, which is AI, artificial intelligence, that makes a medical decision by itself, without human oversight. For doing the diabetic retinopathy exam, rather than a human specialist, like you or me, doing the eye exam. Are you concerned about unproven effects on patient outcome, inappropriate patient data use, lack of clarity on how to get paid for this, risk of racial and other biases, liability for you or someone else for errors, job losses for clinicians, or maybe you don’t have any concerns, and you’re excited for its potential. Let us know. And we’ll continue while you answer this. So like I mentioned, you may have heard about IDX. We’re going through a name change to cover things we do outside of the eye, so we’re now Digital Diagnostics. There’s a lot of green in the slides. AI the right way is the subtitle. So let’s talk about how we can use AI to increase productivity and thereby save sight, because that’s what we’re all here for. I probably am repeating a lot you already know, but I want to make sure we all start from the same knowledge. You know that diabetic retinopathy is a very important cause of blindness, affecting the US and many other countries as the primary cause of blindness in the population. There are 30 million people in the US, 450 million worldwide, at least, that have diabetes, and it’s rapidly growing. The most important fear in diabetes in the US is visual loss and blindness, more than dying from it or amputation. It’s important to realize there’s no good predictor of who will develop diabetic retinopathy. A1C only partially helps you with predicting who is going to develop DR or not. Metabolic control is not a full explanation, and that makes the diabetic eye exam so important. There’s really no replacement for it. We also know that eye exams for diabetic retinopathy improve visual outcome, and I love to show the evidence for that. Because it’s so, so overwhelming, which is one of the reasons autonomous AI, the first one, was in — for the diabetic retinopathy exam. Regular eye examinations are necessary to diagnose diabetic retinopathy at an early stage. When we can still treat it with the best prognosis and best outcome. We also know, for example, from the UK, that if you have compliance with diabetic eye exam of over 70%, you can reduce the number of people going blind from diabetes in effect make it not a primary cause of blindness anymore, but more like number 11 on the list. Diabetic eye exams are the most cost effective intervention for any diabetes complication. But compliance corrected sensitivity for diabetic retinopathy testing is very low. And what do I mean by that? 
We have, for example, clinicians, telemedicine programs, even AI now, that are very sensitive for picking up diabetic retinopathy. But that doesn’t matter as much, if you have a highly sensitive process, if you’re not reaching the majority of patients. And we know that from 15% to 50% in the US and many other countries of people with diabetes are not getting diabetic eye exams. So that lowers your effective sensitivity or compliance corrected sensitivity. And what is exciting is that — the fourth bullet item here — that’s shifting from lab referral based to instantaneous point of care A1C testing. This is not about the diabetic eye exam, but A1C, showing control. If you have a desktop point of care diabetes management, compliance went from 50% to 95%. That shows that access is a very important problem for the diabetic eye exam. Also very important is that health inequities are very visible in diabetic retinopathy. Diabetes but also diabetic retinopathy affects different groups. Income, genetics, other reasons, in different ways. Incidence, prevalence, diabetic retinopathy incidence, compliance, and visual loss from diabetic retinopathy. And several studies are cited here that show that. So we came up with a solution many years ago. Let’s create an autonomous AI, autonomous meaning it makes a medical decision by itself. As an integrated system for doing this diabetic eye exam. And what you can see in the middle is what it looks like today. There’s a patient, there’s a robotic camera, that is almost fully self-operated, there is an operator who needs no more than high school graduation, they help the robotic camera go through the eye exam, two images per eye, retake images of insufficient quality or the wrong area of the retina. So it’s almost an entirely automated process for taking the images, and it takes a few seconds for the diagnosis to be ready. Point of care, realtime, you can put it anywhere where the patients are. No specialist oversight, and of course, typically integrated with electronic health records. Wherever the clinic is, primary care, typically. Very important is that this is FDA cleared, or De novo authorized. I used “approved” here, but it’s not correct. It’s De novo authorized, because we had to go through a clinical trial. And also important is that we as a company take liability for the accuracy of the AI. We do not give that to the operator or the physician ordering the test. That is on us. So that’s cool, and we have that, but that is just one of the many, many steps to actually bring this to patients. Because the fact that there is a system and there is FDA clearance doesn’t mean that patients can actually use it. There are many other things that need to be solved. Because autonomous was the first — it is still the first, but I know that there are many other autonomous AIs for various diseases, and organ systems, going through FDA approval process right now. So they will be coming out. But that was not — it’s one step of many. And I’ll show you here. I’ve been doing this for 30 years. My first publication on neural networks, using machine learning to mimic the brain, is from ’89, and so it’s a steady process of one by one overcoming the hurdles that existed where the health care system was really not prepared for having a computer rather than a human making diagnoses. 
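To make the "compliance-corrected sensitivity" idea above concrete, here is a minimal sketch; it is not from the lecture, the function name is invented, and the numbers are purely illustrative of the ranges mentioned in the talk. The point is that a very sensitive exam that reaches only half of patients detects fewer cases overall than a somewhat less sensitive exam that nearly everyone receives.

```python
# Illustrative sketch only: "compliance-corrected" (effective) sensitivity,
# i.e. the fraction of all diseased people in a population who are actually
# detected, once you account for how many people get the exam at all.

def compliance_corrected_sensitivity(test_sensitivity: float, compliance: float) -> float:
    """Assumes disease prevalence is the same in tested and untested groups."""
    return test_sensitivity * compliance

# Hypothetical numbers: a 95%-sensitive exam that only 50% of people
# with diabetes actually receive...
print(compliance_corrected_sensitivity(0.95, 0.50))   # 0.475
# ...detects fewer cases overall than an 87%-sensitive exam with 95% uptake.
print(compliance_corrected_sensitivity(0.87, 0.95))   # ~0.83
```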
And I was sort of at a choice years ago, where on the one hand, I could be the Uber of health care, essentially breaking the system for taxis, or we could work within the system, and we decided to work within the system, work with FDA, the stakeholders, the physicians, our colleagues, et cetera. But then again, we needed to solve for many things. And I will show you a few. We needed to design, together with FDA, a clinical trial for this de novo authorization. What do you compare it to? We decided to compare it to outcome, rather than comparing AI to other physicians, like is typically done. I’ll explain why. We wanted to do it in primary care clinics, which is really the environment in which it’s being used now, because that mimics how it will be used in practice, with existing staff, rather than specialized operators who look at the images. We had setpoints in terms of specificity, sensitivity, and a new one, called diagnosability. We did trials, with reproducibility studies. So that I think went really well, and so that was also the first published study, and the design of the study wasn’t intended to diagnose. Just to screen. Some people say: Why didn’t you do a randomized clinical trial? That would be unethical, because essentially it meant that a negative exam from the AI, meaning no diabetic retinopathy, would have to be left untreated for maybe years, and given that we already know how to treat these people with macular edema with laser and other treatments — that would be unethical. So that’s why we decided to have only one arm, but rather, in that arm, compare the diagnostic accuracy of the AI to predictor of outcome. So we had that clearance, and FDA did a lot of press around it, again, emphasizing that it is a computer making a diagnosis, not a specialist. What is really exciting for me, when we’re implemented all over the US — in New Orleans, after the Katrina hurricane, there was very little eyecare left, but there was a diabetes clinic there, and we came in a year ago, where the wait time for a diabetic eye exam for someone with diabetes was over four months, and now with the AI there, it’s a same day referral. Meaning if you have a positive exam from the AI, you can be seen the same day by an eyecare provider, typically an optometrist in this area. So it really improved access there, and that was the first data point on why we’re doing this. So we improve access, lower cost, improve quality. Another — there was a lot of press when we went into grocery stores. You see the tweet on the right there. Hey, we’re now gonna get an eye exam in the grocery store. And there was actually some pushback from physicians, because they said it shouldn’t be in a grocery store just like that. Of course, it’s a little bit different. There are — in the back of the grocery stores — this is in Delaware, in Safeways in the US — there are clinics, and you can be seen by a physician, a provider, and they can order diabetic eye exam. So indeed, it can happen there. The equipment is all there. But it involves a little bit more than just walking up to the machine after you buy your groceries. But indeed, yes, it is available in grocery stores, provided there’s someone confirming you have diabetes and need this. But of course, again, improving access to where the patients are, rather than having the patients come to the eye exam. 
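The three trial endpoints named above — sensitivity, specificity, and diagnosability — can be related to raw counts as in the hypothetical sketch below. This is not the study's analysis code, and all counts are made up; diagnosability is taken, as described later in the talk, as the fraction of subjects for whom the AI returns any valid output, with sensitivity and specificity computed over the subjects who had a valid output.

```python
# Hypothetical sketch of the three endpoints described in the talk:
# sensitivity, specificity, and diagnosability. Not the actual trial code.

def trial_endpoints(tp: int, fp: int, tn: int, fn: int, ungradable: int):
    """tp/fp/tn/fn are counts among subjects with a valid AI output;
    `ungradable` is the number of subjects with insufficient image quality."""
    graded = tp + fp + tn + fn
    total = graded + ungradable
    sensitivity = tp / (tp + fn)      # diseased subjects correctly called positive
    specificity = tn / (tn + fp)      # non-diseased subjects correctly called negative
    diagnosability = graded / total   # subjects for whom the AI produced any valid output
    return sensitivity, specificity, diagnosability

# Example with made-up counts (not the published data); the resulting
# percentages land near the 87% / 90% / 96% figures quoted in the talk.
sens, spec, diag = trial_endpoints(tp=170, fp=60, tn=570, fn=25, ungradable=35)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, diagnosability={diag:.2f}")
```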
Very exciting was that the American Diabetes Association, one of the biggest patient organizations in the world, which annually creates the standards of diabetes care in a number of publications, included AI in the standard of care. And you can see some quotes here, on the screen in front of you. And that’s really quick for new technology to become a part of the standard of care. And it again shows how important the diabetic eye exam is for people treating diabetes. The American Telemedicine Association put out guidelines this year for diabetic retinopathy, and both telemedicine and AI were included. Conflict of interest — I’m the chair of this working group within ATA, but there are a lot of other independent people on the panel, to ensure independence. Very important moment for US health care. What you have here in the US is so-called value-based care, meaning you get paid for doing the right processes. And that includes the diabetic eye exam as part of a measure, which rewards physicians and health care systems for doing diabetic eye exams on their patients. Previously it said that only an optometrist or ophthalmologist could do the eye exam to qualify for the measure, as we call it in the US — and that’s called HEDIS — and as of a few weeks ago, for the first time ever, you can now also do this care, close the care gap, with AI, as you can see in this quote. So many hurdles to be taken before this can actually benefit patients. You need to design it, develop it, validate it, go to FDA, and there’s a host of other things that need to be addressed before we can bring it to patients. But I want to shift gears a little bit and talk about the more general problem of: Can autonomous AI solve these problems, and are we doing it the right way? What is the right way of introducing AI into health care? And so the general problems in my view that we’re tackling are two: cost and productivity. Essentially the same, and you may remember productivity is output per hour, and that can be any work, including doctors. And if you see the green line on this slide, that’s the productivity in every sector except health care in the US. For example, we are more efficient at building cars, at banking, at farming, at shopping, at making clothes, et cetera. Productivity has been rising steadily, and that always implies that you have lower cost, and that’s why, for example, more people can afford a car now than 50 years ago. That is not the case in health care, as you can see in the red line. We’re actually getting less productive, meaning we’re seeing fewer patients per hour. With equal quality, we’re doing less, and that drives up prices, everything else being equal. The demand is not staying equal, because there’s more demand on health care. But even if the demand were steady, the price would still go up. That’s not sustainable. So productivity needs to be addressed, and I think autonomous AI is one of the best ways of addressing this, because you literally have a computer doing what previously the doctor had to do, so the doctors can focus on the things they’re trained for, the more complicated things. Another giant problem I already alluded to a few times is access. And there are many ways you can show it, but this is very visual to me. On the left you see the US; darker blue is where ophthalmologists are, by county, and darker blue means more ophthalmologists per county. You can see they’re really concentrated on the coasts. In the middle image you see where people with diabetes are.
Darker red is more people with diabetes in an area. And you can see that’s mostly in the Southeast. Just from geography, there’s a big mismatch between where the diabetic eye exam is available and where it needs to be, where it’s needed. There are other aspects of access, including inner cities — a famous example — and income differences, but this is very visual, and very clear to me. So there are many advantages that autonomous AI can offer — for the diabetic eye exam, but also for other diseases. But we should also have concerns. We should never take something at face value, just because I say something… You shouldn’t believe it just like that. So there are concerns that people have. The public, patients, physicians, people who make our laws. Is it safe? Does it actually benefit patients? Is there racial or ethnic bias? Will doctors lose their jobs? Are we using patient data appropriately? Is it integrated into the health care system? And who is liable for errors? And there are a few examples on this page of how this can go wrong. Where data is used inappropriately, where there was a bias that was not recognized, how we assign health care resources — so lots of problems; many times it can go wrong. So we need to make sure we address these concerns up front, and don’t ignore them. I’ve personally gone through this. Some of you may know that my nickname is the Retinator. The origin of that nickname is an editorial in Ophthalmology Times a few years ago, by my good friend Peter McDonnell, from Hopkins, and he talked about the fear of ophthalmologists losing their jobs, years ago, when I was doing research into AI for the ophthalmological exam. So by going public with both the risks and benefits of AI, we have been able to turn it around, saying that we’re doing AI the right way. Here’s the American Medical Association very recently. The headline is: This ophthalmologist is doing health care AI the right way. Showing how you can turn around the concern of your colleagues, and there has been tremendous support from many physician organizations for doing AI the right way. It’s absolutely possible to address these fears and show, for example, that what AI does for ophthalmologists and people taking care of patients with eye problems is it allows you to take care of patients with more complicated problems. One of the things I think we shared when we started working with the FDA over ten years ago was a sort of ethical foundation or background that was not fully made explicit until a recently published paper, now two months ago — you can find it online — where I tried to lay out how we looked at autonomous AI and ethical principles. And there are a few. These terms — you may say you’ve never heard of these, but you are absolutely familiar with them. Non-maleficence is described as first do no harm. Very well known principle in medicine. Make sure you don’t make it worse. Do no harm first. Justice, for example, means things like all men and women are created equal. The point being that we should treat people equally. So these are very basic bioethical principles you will find in any textbook about ethics in medicine or health care. There’s also a legal principle called accountability. And from these combined — these four or five really foundational concepts — you can develop some ways of looking at AI. And I list them here. AI should improve patient outcome, and you need to show that by evidence. You cannot just say that. You need to design AI ideally so it mimics what clinicians do.
You need to maximize your understanding of, and accountability for, how you treat patient data, both in the training phase and when you use it. You need validation that is the best there is. That means typically comparing to outcome in the actual workflow. We’ll get to all of these. And in some ways, you need to put liability where it belongs, which is in our case at the company. Meaning if the AI is not accurate, that’s on the company, not on the user. And the American Medical Association has seen this as well. In their policy that came out last year, they also described these principles: safety, efficiency, and equity — we’ll get to that. Outcome and liability. These are becoming sort of the standard for how we look at AI. And let me give you a few examples. Improving patient outcome and not doing glamour AI, which is AI that is technologically really exciting — and I love technology, you know, like many others, probably. But if it doesn’t improve outcomes, we should definitely not be paying for it, from taxpayer money, for example. So glamour AI is AI that doesn’t benefit the patient, even though it’s cool. And so you need to show that it’s improving the patient outcome. How do you show that? For example, in our case, we linked the output of the AI directly to disease outcome. And this is a complicated table, but if you know a little bit about diabetic retinopathy, it’s probably simpler to look over here. Where we show that if you have no diabetic retinopathy, or level 12 to 20 diabetic retinopathy, which is called mild or questionable, and no diabetic macular edema of any form, the output is negative. And we know that is associated with a 1.7% chance, very low, of developing proliferative diabetic retinopathy after three years. On the other hand, if you have center-involved macular edema or clinically significant macular edema and nothing else, that’s enough to have a positive output, or if you have level 35 or higher diabetic retinopathy, hemorrhage or more, then it gives a positive output, meaning you have the disease, and that is associated with a 90% chance of developing proliferative diabetic retinopathy after three years. So you can see very nicely how we mapped the output of the AI to the actual outcome of the patient. Another factor is how you design the AI. And I already mentioned that. You make it as close as you can to how we think clinicians work. There was some brain research that I was involved in, where we showed how there were overlapping detectors in the brain, in the visual cortex, and we used that knowledge as much as we could when developing the AI. On the top you see how the AI is developed. You take a bunch of images — of the retina, in this case — and associate them with a label, let’s say disease yes or no, diabetic retinopathy yes or no. And these obviously all do not have diabetic retinopathy. The problem with that is that in your training you had better represent all the different varieties — skin colors, retinal colors — that can occur. And it’s essentially impossible to prove that this is right. Instead, what you can also do is design detectors, with machine learning, for the different lesions that we are all aware of as clinicians in diabetic retinopathy. Including hemorrhages over here and here, exudates, many other lesions, and what we did was actually develop overlapping detectors for each of these.
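As a rough illustration of the lesion-detector design just described — several machine-learning detectors, one per lesion type, whose outputs are combined into a single patient-level result — here is a heavily simplified, hypothetical sketch. It is not Digital Diagnostics' actual architecture or code; every name, threshold, and the combination rule are invented for illustration. The next paragraph of the talk continues the detector description.

```python
# Hypothetical, heavily simplified sketch of a lesion-detector design.
# Each detector returns a confidence that its lesion type is present in an image;
# per-image evidence is combined into one patient-level disease decision.
# All names, thresholds, and the combination rule are illustrative only.

from typing import Callable, Dict, List

Image = object  # stand-in for an actual fundus image type
Detector = Callable[[Image], float]  # returns a confidence in [0, 1]

def patient_level_output(
    images: List[Image],                  # e.g. two images per eye, four in total
    detectors: Dict[str, Detector],       # "hemorrhage", "microaneurysm", "exudate", ...
    lesion_threshold: float = 0.5,
    required_lesion_types: int = 1,
) -> str:
    """Return 'positive' if enough lesion types are confidently detected
    in any image, else 'negative'. A real system would use a validated,
    far more nuanced combination rule."""
    found = set()
    for img in images:
        for name, detect in detectors.items():
            if detect(img) >= lesion_threshold:
                found.add(name)
    return "positive" if len(found) >= required_lesion_types else "negative"

# Toy usage with dummy detectors that return fixed confidences:
dummy = {
    "hemorrhage": lambda img: 0.8,
    "exudate": lambda img: 0.2,
    "microaneurysm": lambda img: 0.4,
}
print(patient_level_output(images=[None] * 4, detectors=dummy))  # "positive"
```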
We have small hemorrhages, large hemorrhages, even larger hemorrhages, microaneurysms, et cetera — so multiple detectors — and then you combine the output of these detectors, as in the visual cortex in the brain, into a disease-level, patient-level output for both eyes. So again, mimicking the cortical processing of clinicians as much as possible. And interestingly enough — and that was anticipated, as a sort of more natural way of doing things — it turns out that you’re more robust against catastrophic failure from noise, and also more robust against racial and ethnic bias, because it’s built in: a hemorrhage is a hemorrhage, whether someone is from Kenya, Iceland, or Japan. If they have hemorrhages, it is enough to say this person has diabetic retinopathy, all else being equal, and it doesn’t matter what their color or racial background is. Here’s an example of this robustness. On the left you see an image of the retina with diabetic retinopathy. The exudates, hemorrhage. And any AI can detect it and any clinician can detect this as diabetic retinopathy. The numbers are really high. 99% for any form of AI, including more clinician-mimicking ones, like ours. You can put a little bit of noise into this image. And you can see that here — well, you can actually not see that here. The point is that it’s invisible for you and me, if you look at this image here in the center. It is minimally changed. You have to believe me, if you subtract the two images, you’ll see the difference, but the hemorrhages and exudates obviously are still there. It turns out that an AI that is based on lesions will still diagnose this as diabetic retinopathy. However, an AI trained only on images, without understanding the higher-level content of the image, will fail pretty badly, and that is what we call catastrophic failure. AIs trained on the images alone, without understanding hemorrhages, et cetera — these do very poorly, and we have published that and others have published that. On the right is a little bit of explanation of why that is, but we don’t have time to go into that. So we already — I already mentioned that we need to validate it rigorously, in preregistered clinical trials. In drug studies, that’s now the standard, and I want to explain it a little bit. Here’s an example of why we want to validate it in the workflow. AI is not always making things better, and here’s an example that really illustrates that. Years ago, there was an assistive AI, meaning it was designed to assist the radiologist in diagnosing breast cancer in mammograms. And the FDA approved that, based on the comparison with the radiologist in 200 women. It did really well. But in practice, it was used together with the radiologist. And the radiologist was warned by the AI — look better at this calcification. Don’t miss this. And what someone — Fenton — said in 2006 was: Well, let’s study this, because we actually do not know how this affects outcome. In 200,000 women, they used the mammograms and compared the radiologist with the AI to the radiologist alone, and looked at outcomes, and actually the outcome for the radiologist alone was better than for the radiologist with the AI. So this showed very clearly, and surprisingly to many, that the AI doesn’t always make it better. It can make it worse. It needs to be validated in how it’s going to be used in practice. It’s not always the same as the laboratory setting. You need to test it in the same setting as it will be used in, in primary care. So many people will say: Why did you go for one camera?
There are many cameras out there, and you see the evolution from iPhone cameras to now more portable boxes, and on the bottom you see fundus cameras that are tabletop — the joystick has disappeared and they’re now essentially robotic — and these are almost converging to a common design. Clearly a handheld camera like that is hard for inexperienced operators who use it maybe two times a week, but you also have the price point and many other factors, so the combination shows that what we went for was the camera that worked best in the hands of inexperienced staff who never do this, on real patients. And so we went through many, many experiments with that, and that’s why we ended up with a specific camera. And so you need to test it with that camera. You can approve a system with a specific camera, because the experience of the staff using it has so much impact on the image quality there, and on the diagnostic performance of the AI. I already mentioned we validated against surrogate outcome, not against clinicians, and here’s why. There are two studies comparing clinicians to outcome, from years ago, two independent studies, Lin et al. and Pugh et al., and they showed that clinicians like me have poor sensitivity compared to outcome. Only 33% and 34% sensitivity compared to surrogate outcome. Let’s say you compare an AI to doctors like this. If the doctor and the AI disagree, how do you know that the AI is wrong and the doctor is right? It’s impossible to know, and you can see that it may be that the doctor is more wrong than right in this case. That’s why we use surrogate clinical outcome. And in fact, the reason why diabetic retinopathy was the first autonomous AI, and why we in retina and ophthalmology are very proud, paving the way, is because we have so many standards and so much knowledge about surrogate outcome in this chronic disease. On the right you can see there’s this image, and you can give it an ETDRS level, and if you had graded it 30 years ago and today, it would have the same number. It is very robust over decades, and we also know, from studies done long ago that would be unethical to repeat now, how it is associated with outcome in terms of proliferative diabetic retinopathy, retinal detachment, et cetera. That’s why surrogate outcome is superior to comparing AI to physicians, where you can use it. It may not always be available, and sometimes you don’t need it to be, because you have outcome available in acute disease. Another important thing. Once you go to surrogate outcome, you need to realize that you image much more of the retina than the AI sees. AIs are typically compared based on the same data going to the AI and to the clinician they’re compared to, but it’s not really the same for an outcome-based standard. Outcome is based on four wide-field images, traditional center-field stereo, and we also use OCT for macular edema, and if you look at what the AI gets, it’s only two images per eye. Otherwise patients just walk away. You actually need to pay patients and subjects to undergo these many, many images. But anyway, in real life, the AI only sees the two parts of the retina circled in green. Any time there’s a lesion — a hemorrhage indicating ETDRS level 35 or moderate diabetic retinopathy — that is not visible in the green circles, the AI is automatically wrong, by definition. Because it missed that. It missed that part of the outcome. So you have to realize that when looking at the numbers. And here are the numbers.
So the study was now published two years ago, so I want to go over it a little bit. We had three endpoints. Sensitivity, specificity, and diagnosability. Meaning do you have diagnostic results for every patient. We had endpoints we published before we started the study. We had 87%, 90%, and 96%, meaning only 4% of subjects in the study, a representative sample for all people in the US, had insufficient image quality, and 96% had valid output. If you compare to what I mentioned in the papers — board certified ophthalmologists, the numbers are of course much lower. I already mentioned 96% diagnosability was also very important, because we now have 88% diagnosability without dilation. Meaning you hardly need pupil dilation with drops anymore. So about outcome, improving outcome, validation, and comparing it in the workflow. Very important I think is how you use the data. We use data to diagnose patients. We use training data to build better systems. And we make sure we can trace the data back to the patient. So we know we’re accountable for that. I think that’s very important, because the patient typically considers — whether rightly or wrongly, according to legal standards — that data derives from them. They should have some control over it. A lot of lawsuits right now. That have that as their basis. Liability — I already mentioned it, but I’ll read it out to you. This is actually from the AMA, American Medical Association policy, on AI, creating autonomous AI products. The creators of products should obtain medical malpractice insurance or otherwise assume liability. This is assuming that the user is using it according to an FDA label. If you use this to diagnose melanoma, which it’s not designed for at all, it doesn’t diagnose melanoma, that is not something you can hold me liable for. When it’s not accurate in diagnosing diabetic retinopathy. It’s very different for autonomous than assistive AI. In assistive AI, the physician is still ultimately responsible, and therefore the liability lies with the user of the AI. In this case, the physician. Because we consider a primary care physician using AI — the reason they use the AI is because they do not feel confident enough to do this eye exam themselves, and so it would not be appropriate for them to be liable for something they have no control of. It was very interesting when I started speaking about this now years ago, that there was so much focus on this liability issue. I really think addressing this the right way helped in the sense of the hurdles we cleared to get this to patients. Autonomous AI the right way has been a long and very enjoyable journey. And some of the nuggets I discovered are here. Nothing in the health care system was ready for autonomous AI. We can democratize health care by helping those with various diagnoses. Ethics are really important. There’s an underpinning… Many other nuggets I put on here — I mentioned productivity early on. It’s very exciting that we’re starting to do studies, together with Orbis, on health care productivity, because I think we really need to establish that. Right now I have great hopes, but as a scientist, I have to say I need to prove this or disprove this with an open mind. I think we have shown that when access increases, we definitely can drive down costs, and we can make life better for people with diabetes, which is ultimately what we’re all hoping to achieve. So I’ll stop here, and hopefully there’s questions and discussion now. So if I may, can I… Let me see. 
I need to tap out of this.

>> So we have a few questions, if you open up the Q and A.

DR ABRAMOFF: I thought to see it, and then it went away. Let me see. It’s not chat. It’s Q and A. There we go. I will read them to you. That’s probably the way to do this, correct?

>> Yes, please.

DR ABRAMOFF: Okay. Greetings, sir. I wanted to know if there’s any randomized controlled trial on AI and its results. And can I answer them speaking? Or do I have to type it?

>> You can speak, please.

DR ABRAMOFF: Oh, okay. Right. So yeah. That’s a great question. And I mentioned it. The pivotal trial where we compared AI to outcome. To surrogate outcome, meaning chance of developing proliferative diabetic retinopathy three years later. If we had done a randomized clinical trial, we would have assigned people to either diagnose through the AI, or maybe to clinicians, and compare outcomes three to five years later. That would mean potentially that some people were diagnosed by the AI — we didn’t at the time know the AI was safe, and you would unethically have to leave people untreated that you would know would benefit from treatment. That would be unethical, so we decided not to do that with the FDA, and rather have an arm where everybody gets the diagnosis from the reading center, and are referred and treated and managed based on that. Hopefully that answers your question. Sir, can the AI be made available for all the eye diseases? Is it a job loss concern? I hopefully addressed the job loss concern. Again, the whole point of this is to get people where they should be, which is with eyecare providers from the primary care and front line care providers, where currently they are not being diagnosed with the disease at a level where they benefit from us, from eyecare providers. You make a great point. Why is this only for diabetic retinopathy? There’s a few reasons for that. A, I already showed you — there’s so much knowledge about diabetic retinopathy outcomes. How we can tie what we see in the retina to outcome. That is, for example, much harder with glaucoma. I’ve been working very hard on an AI for glaucoma. And actually, we’re presenting data in September of this year for the first time, a consensus of glaucoma experts, what glaucoma is actually diagnosing in primary care — and there was no consensus, so if you don’t know that, it’s impossible to say AI is safe, because you don’t know what it should be doing. Same for other diseases. It’s not as easy as you think to make an AI. The AI technologically is probably the easiest part. How you validate it, show it’s safe, when you don’t know what to compare it to, is the biggest problem. And any time you compare it in a clinical trial, there’s a lot of cost to that. And so the more diseases you compare to, the more you drive up the cost. We had to do a 900 patient trial which cost us over $10 million. If you do additional diseases, every time you will have to have additional subjects to compare that disease, the AI, to the truth. So I think for now, it will be one disease at a time. I don’t see for the foreseeable future, even two decades away, a general AI that does what a doctor does, or what the eyecare provider does, in terms of diagnosing many diseases at the same time. That is not also what we need. I think what we need is very scalable solutions for the most important problems in eyecare, and glaucoma, diabetes, HIV, those are the diseases, malarial retinopathy, those are the diseases we need to be diagnosing massively, and I think that’s where AI can have the most benefit, and that’s also where it’s possible to do trials that show safety and efficacy. And accuracy. What about the price factor, the cost? That’s indeed a big issue. As you can see, there’s a lot of cost involved to develop this AI and do it the right way. And in some way that needs to be recouped. I already explained in the US how we’re going about it, and how many hurdles needed to be taken. We’re seeing that similarly in other countries where it’s been deployed. 
Poland, many other countries in Europe, and it’s very exciting there to work with Orbis, to find a way to bring this to even more countries. I find the solution for this absolutely — we’re very eager to find solutions to make this available to everyone in the world with diabetes. Because that’s the whole point, why we’re doing this. How about AI in electronic health records? I’m not sure actually what you mean. So there’s two ways — you see this is an autonomous AI that essentially makes a diagnosis, and should be integrated with the — essentially what happens, how these are deployed, is that the patient is recognized when they come in as having diabetes, needing an AI exam, they get their vitals taken for diabetes, maybe a blood draw for A1C, immediately get within a few minutes the IDX exam, and then go to the person managing their diabetes and talk about their diet and metabolic control. And because it only takes a few minutes, they already have the diagnosis right then and there in front of them, while they’re speaking to the patient, and it’s coming from the electronic health record. In that way, it’s integrated. If you mean using AI to analyze data that is in electronic health record patient data, there is a lot of interest in that, I think it’s noisy data, but what is nice about the AI the way we use it is it is based on very objective images, which are not interpretations by the patient of their symptoms. But literally, you can measure it. That makes it easier to get a high performance, because you have such reliable, not noisy data. Electronic health record data, which is typed in from patient symptoms, is typically much more noisy, hard to get a good performance. Very exciting subject, but too far away from what I’m focused on, which is image-based AI. Another question: Even after AI detection, do you think there is need for fundus picture evaluation by an expert? No, actually, it would make it worse, as I showed. This is autonomous AI, meaning the AI makes a diagnosis by itself, and that is then discussed with the patient. The provider, let’s say, nurse practitioner, or a physician, a primary care physician, and acknowledges discussing the diabetes with the patient, and also discussing what it means for the patient that they have or do not have diabetic retinopathy because the AI diagnosed that. So the diagnosis is made by the AI itself. There’s no supervision by a clinician. And I already tried to explain that since the sensitivity of the clinician is only 33% to 34%, the fact that the clinician didn’t agree with the AI does not mean that the AI was wrong. It’s probably more right than wrong, in those comparisons. So no, it would actually be off-label use, and the FDA doesn’t like that. So no. Wonderful presentation. That is obviously very nice to hear. Thank you. From Mohammed. As per sensitivity and specificity mentioned, does that mean that AI is better in sensitivity and specificity compared to an optometrist or ophthalmologist in diabetic retinopathy only? Yes, this is a very specific task. I’m a clinician myself. I love what I do. I’m doing a benefit to my patients. It doesn’t mean that we’re bad. It just means for this very precise level of disease, where it’s all about counting hemorrhages and microaneurysms and exudates et cetera in very specific locations that that is something humans are not as good at. 
We are really good at recognizing when a patient needs to be treated or not treated, prognosis, et cetera, but what we’re not so good at is this very precise level that machines just seem to be better at. So yes, the answer to your question is yes. It’s more sensitive, it’s more specific, and it also has higher diagnosability. It’s harder to do an indirect on some of these patients. Is there a large study which provides an algorithm for DR screening in the African population? Several studies exist, including one by Hanson with me as senior author. You can look it over. We did a pretty large study in Kenya, and also, we looked at African Americans, who are African-derived, but a mixture; in our sample, 23% were African Americans, and we found no significant effect of race on accuracy of the AI, which is very important. We measured accuracy both by diagnosability, as already mentioned, and also by ensuring that the sensitivity, specificity, and diagnosability were not different for different ethnicities and races. Anything else would be ethically unacceptable. So I think there are some studies there. However, we have not done a big study with the current version of the AI in Africa — we’ll have to do that. Dr. Abramoff, thank you very much for such a thorough talk. We are aware that the code for autonomous DR screening will be implemented in 2021, 9225x. To what extent were you able to advocate for, influence reimbursement rates for this code? Dear to my heart. We spent a lot of effort last year working with the editorial panel to create for the first time a CPT code, 9225x, and now CMS Medicare in the US — we’ve been having many meetings with them as well, to see whether they should cover this code, and how it should be paid for. So a lot of work with Congress, with other stakeholders in health care, with strong support from the American Medical Association and the American Academy of Ophthalmology. We’re working on that. We expect the outcome any day now. Very soon we’ll know more about — for the first time ever — will they decide to cover this? And there’s great hope there. So yeah. That’s been top of mind for me, advocating for that. Will AI integration for health care productivity pose a threat to provider payments, and what is the opinion of the health care/Pharma lobbies? These are politically very interesting questions. I think there are so many underserved populations that increasing productivity — payers want that. So for provider payments, it’s not like you’re replacing something; you’re adding. Only 15%, in my view, in the US, by the best measurements, get the eye exams they deserve. 85% are not getting them. Even if you look at the numbers that the CDC is quoting, it’s like 59% are getting them, and therefore 41% are not getting them. We need to do something about that, and I think the proposal coming out from CMS will support using AI for that, but it still means many of these patients will need treatment and management by eyecare providers — eyecare providers, which is probably what you’re referring to. So no, I don’t see it as a threat. And I’m the retina expert. So I’ve been going through this for a while. What is the opinion of the health care/Pharma lobbies? The health care lobbies — the American Medical Association, other professional organizations — are very supportive. Pharma lobbies I have no knowledge of, and no opinion about. Sir, how do we meet the requirements of HIPAA compliance? What are the various things they look into? Please tell us briefly to prepare. There’s a whole host of — I didn’t go into this.
Data security, privacy laws in the US, GDPR in Europe, HITRUST — these are all standards. I can talk about certification, quality systems — software development under FDA and HIPAA is very rigorous — and so what it means is complying with all these standards, and then you can be sure that you have HIPAA compliance. Hopefully that’s an answer to your question. Sir, is AI available in India? I know other groups are doing it. We are currently not available in India. We want to be, and we continue to have very exciting talks. I hope working together with Orbis will accelerate that. Have regulators on state boards been accepting AI by optometrists and ophthalmologists… I’m assuming you mean state medical boards, and how they feel about AI, and that’s interesting. AI is a medical device. It’s supervised by FDA, and by the Federal Trade Commission. State boards obviously oversee what physicians do. We have been presenting to state medical boards, to explain to them what liability means, where liability lies, how the interaction between AI and providers goes. And so far, it has been very mutually beneficial. That’s all I can say about that. With the advent of AI, overlay like augmented reality, is AI autonomous surgery plausible? Well, we’re discussing two things here. We’re discussing augmented reality, which means a human is still looking at, I assume, the surgical field, and doing their procedure. You could have assistive AI, where the AI tells the surgeon — hey, your rhexis is not looking good. You know, pull harder to this side. That is assistive AI doing surgery. As I already showed with the Fenton example for mammography, you really have to validate that you’re actually making the surgery safer and not worse, with this type of assistive AI. Fully autonomous AI — and you may disagree, and many disagree with me — I think, given that validation is done for a single diagnosis, and given what surgery really is as a surgeon — continuously, very rapidly, continuing to make diagnoses while you do the procedure, with new information coming from what you see, and maybe what you feel and hear — depending on what type of surgery, I think validating these continuous diagnoses is a little bit farther off. So I don’t see autonomous surgery coming very soon. But I may be wrong. And then it will be in very, very… strict fields, you know — cornea surgery, maybe lens surgery — where it’s almost entirely predictable. At first. How does AI differentiate between hard exudate and drusen? You build machine learning detectors for these. And of course, it’s very important for AMD. We have AI ready to go into trial for that. And others do as well. And so it’s just training on the right type of images for us, because we build detectors for these different lesions. And then make sure the truth is good, because it’s not always easy for a physician to differentiate between exudate and drusen. What is the cost of AI? Send an email to inquire. In terms of payment in the US, you pay $34 for the diagnosis. AI for diabetic macular edema? Absolutely. I already showed that just based on the fundus image, if you compare it to centrally involved macular edema on OCT, it’s really safe and does really well in diagnosing these people. It is however not specific for DME, because the FDA considered — we just need to refer these people if they have any form of bad diabetic retinopathy. Rather than telling a primary care physician: Oh, they have center-involved macular edema of 50 microns. Not of interest to the primary care provider.
Of high interest to the optometrist or ophthalmologist. So that is not what we’re focusing on there. Would you like to talk to regulators like ARBO? I’m not sure what you mean there. Maybe rephrase the question. What happens to the image data after the picture is taken. We’re not a data… Some companies want to use the data for other purposes, like what you can see — if it’s free, you’re the product, as many like to say. And so we didn’t do anything with it, because we diagnosed from the image. We have a requirement by FDA to sample the output, to ensure that accuracy is consistent. Continuous efficacy monitoring. We can talk about that. So that’s what we use some of the images for. But we only use it for diagnostic purposes. We do not sell it or otherwise abuse it. What kind of fundus camera is required for AI? You can do AI with any fundus camera. Hopefully — I tried to explain — that there is interaction between the quality of the image, how good the operator is. If you work with high school graduates only, and that’s the only requirement for training, which is typically in primary care, where the staff is not highly trained in taking retinal images, you need a camera and an environment where you can take high quality images on the vast majority of patients. There’s no value to a sensitive and specific AI that only diagnoses 10% of the patients and says in 90% of cases I don’t know. That is low diagnosability. It was an endpoint in our study. We needed to meet three endpoints. So that’s why the fundus camera is important. Let’s say you have a hand held and it only gets you good images in 30% of cases. You cannot achieve high diagnosability and you have to wonder how useful it is. That’s an excellent presentation. Is there any study going on with AI and glaucoma diagnosis? Thank you very much. Yes, we and others have developed glaucoma AIs. But the biggest hurdle was: What are you diagnosing? How do you know if the AI is right or wrong? You need truth, and the truth was not agreed upon by the glaucoma community. So that was the big hurdle. I think we have solved it. We’ll see in September. Have schools of optometry been using IDX equipment? Academic centers where optometrists are being trained? Yes. What is the background knowledge that is required for a physician to develop AI? Well, develop AI is very broad. If you want to work on truth in images, that’s one way. Clinical knowledge. On the other hand, if you want to code, neural networks, multilayer neural networks, you probably need to start learning Python and C++ during your residency or earlier. So it depends on what you want. Some coding experience is always welcome. Good evening everyone and thanks Cybersight for this webinar. Thank you very much. UK has a robust retinal screening service annually for diabetes so monitoring is well established. AI role will more likely be to reduce patient load referred to clinics with shorter waiting times. Potential problem will be adding funding for this service and insurance implications. Absolutely. You should be proud of the screening program. It’s important to validate against outcome. There’s very exciting work to be done. And again, access is important. So where you do it is really important. And of course, AI allows you to put it anywhere. In a grocery store, which may be convenient for some patients. But absolutely high efficiency is important. Can I get a copy of this program? The plan is to distribute this online by Orbis. 
Would AI lead to a loss of jobs for diagnosis will be made by AI and there will be no need of clinicians? Again, I’m the Retinator. That fear existed, but what you actually see is retina specialists and ophthalmologists embracing this. We’re not finding the patients currently and with AI we’re finding more patients that need our treatment and our expertise, so I don’t see that happening any time soon. AI could contribute to close gap between low and high income country in access of care but cost of equipment would be another barrier. We should think of low cost equipment and offline. Absolutely. Cost of equipment is big. How do we solve this? That’s why I’m so excited to work together with Orbis on this, and there will be some publicity about it in the near future. Could be integrated with laboratory test results for overview of diabetes? That’s the point. Absolutely. It needs to be integrated in diabetes management. Metabolic control, diet recommendations, and then also discuss this output that is indeed the point. Yes. We have one minute left. And they keep coming in. What are the most available reasonable treatment for DME… That is a treatment… That is not AI, so let’s skip that. I’ll be taking a course at Harvard next week, leading digital transformation… I would like to share information with our classmates. Reach out to… Ken Lawenda. Sure. Hi. You can also send me an email. There’s other treatments. Let’s not go there. Do we have plans to introduce AI solution in Canada, absolutely. Send me email if you want to discuss now, but absolutely we’re working on that. In India in the future, absolutely. Will AI supported telemedicine enhance cost effective operations for Orbis on global scale. That’s a question for Orbis, but I would love to. OCT analysis — we already have that with glaucoma and other diseases. It’s how do you do the clinical trial is the question. Thank you sir. In case of media haze, immature cataract, is there any problem with DR grading? Great question. We did not see any effect of having cataract or not on sensitivity, specificity, or diagnosability in clinical trial. There were a lot of older patients with cataract, and there was no difference in safety. Association of Regulatory Boards for Optometry. I forget the question. But yes, we’re talking to them. Requirement of staff operating the AI. Thank you. Only high school graduation in the US. How AI predicts affected areas… Which countries are using AI now… This AI? It’s like 15 or so. Thank you, thank you. Okay. There’s more. Wait. I think we need to close. Because it’s past time. Thank you. That’s a nice answer. Thank you. Well, thank you so much, audience. This was very exciting. And I hope to see you soon. Bye-bye.


July 24, 2020

