Lecture: Using Artificial Intelligence to Support Your Clinical Decision-Making

The latest advancements in Artificial Intelligence (AI) have great potential to improve the accuracy and delivery of eye care. Visulytix’s Pegasus software brings specialist-level accuracy to the interpretation of OCT and fundus imaging, aiding clinicians in the detection and diagnosis of serious eye diseases.

Orbis and Visulytix have teamed up to add the Pegasus AI system to Cybersight. By supporting AI grading of colour fundus images attached to consultation cases, the system can detect and visualise glaucoma, macular disease and diabetic retinopathy in less than a minute. This gives eye doctors access to the latest advancements in medical AI to help detect, diagnose and treat patients with blinding eye diseases. The service is available free of charge to eye health professionals in developing countries.

This 45-minute video provides a step-by-step demo of the service along with audience Q&A.

Facilitators: Dr. Daniel Neely, Pediatric Ophthalmologist and Professor of Ophthalmology | Indiana University School of Medicine, Indianapolis, USA; and Mr. Jay Lakhani, Co-Founder/CEO of Visulytix, London, UK


DR NEELY: Well, greetings to everyone out there. Welcome to our next webinar. Today it’s a real pleasure to introduce to you Jay Lakhani, who is joining us. Jay is the CEO of Visulytix, which provides an artificial intelligence automated interpretation platform. And earlier this year, Visulytix partnered with Orbis to provide an AI feature for the Cybersight consult. So today we’d like to give you some information about that. What I’ll be doing with Jay is I’ll show you the features, as I see them, from my clinical standpoint. I’ll show you how they’ve been integrated into the Cybersight platform, and I’ll show you how to use it. I’ll show you just a couple of examples that I have submitted previously. And then hopefully we’ll do one live together. After that, I’ll turn it over to Jay and let him say a few things, and then we will take questions and answers orally. And again, this is regarding the artificial intelligence, or AI, feature which has been integrated into Cybersight. I’m based in Indianapolis, Indiana, where I’m a medical consultant to Cybersight. Jay is based out of London, England. And again, his company is called Visulytix. Visulytix does a lot of different things, but the specific platform that we’re dealing with today, as part of Cybersight, is called Pegasus. Pegasus can utilize both fundus color images as well as OCT images, and it is designed to analyze these images for hallmarks of diabetic retinopathy, optic nerve anomalies, macular anomalies, macular edema, and age-related macular degeneration. It scans these images and then highlights the abnormalities. In the process of analyzing the images for abnormalities, it will also provide an estimate on the diabetic retinopathy severity scale. And we’ll touch on some of the international classifications for that.
And in the process of giving you this estimate of the severity of the diabetic retinopathy, it will also highlight where those features are, so that you can see what that estimate is being based upon. And here’s just an overview of how the process works. You can see there’s an image on the far left side, which has been submitted, and it obviously has some extensive exudates and a few hemorrhages visible. It gets submitted through the Pegasus software, which gives you this diabetic retinopathy rating and then breaks it down for you. So you’ll see that it shows us specifically where the hemorrhages are that it’s analyzing, as well as the microaneurysms and exudates. Now, how do you use this feature? Let’s go through this. All right. You just log into Cybersight as you would normally. And this will be found within the patient case section. So you have to be submitting a new patient case. It will not appear if you are using a general question. But once you select patient case, then you have your typical dropdown menu of subspecialties. Now, I’d like to point out that the AI feature is not available on all of the templates. Because it’s specific to retina, diabetic retinopathy, and optic nerve findings, it will only appear when you’re within one of three templates, currently. Those are retina-vitreous, pediatric retina, and glaucoma. So if you want the AI feature, you have to be within one of those, at least for the time being. Okay. So once you’ve selected one of those three templates that has the AI feature, then you’re going about business normally here at the beginning. You don’t need to select a specific mentor. You just need to put in the required patient data. We need age, sex, and a little bit of medical history, so we know what’s going on.
And then of course visual acuity is always required with the consults. We need that information. And then you add whatever other information is appropriate to your case. Glaucoma cases, obviously intraocular pressures would be appropriate and required. Then you get down to the posterior segment. And it’s great — we need photographs for this, but it’s also nice to have your description of what you’re seeing on those photographs. Not only does that help the consultant who’s also going to be looking at the case — it helps them understand the images. It also helps them understand your level of training, so that they can respond appropriately. So whatever information you can provide is helpful. Following the exam, the history and examination, then we’re moving on to where we enter the diagnosis, treatment, and any specific questions you have about this patient or the treatment plan. As normal, you see that we have attached a fundus image for this particular diabetic patient. This would also be where you’re attaching other images. So say you’re attaching a fluorescein angiogram or an OCT or a visual field. You can attach strabismus photographs. Extraocular — external photographs for plastics. And of course you can even attach videos. So keep in mind that this section for images and attachments can be any kind of file attachment. It can even be a PDF of your chart. All right? But once you have that attached, then you’re going to see that just below that, there’s the AI selection section. So it says run automated interpretation. And you don’t have to do this, but if you select yes, it becomes highlighted, and then it allows you to choose your files. All right? When you select yes, it opens up your file browser. And you can go to the fundus image or fundus images that you want. Now, usually it’s gonna be a left and a right color fundus photograph. Okay? So you would select those. Upload those. And now you see that that is attached in the AI section. 
Input images for automated interpretation. That’s the image that’s going to be analyzed. Now, there’s a little information button there, the question mark. This is simply explaining — I’ll read this, because the print’s rather small — please use this option to run automated interpretation of the fundus. The automated interpretation is powered by Orbis’s partner, Visulytix, and they will have temporary access to the patient images. Note that automated interpretation is an add-on feature and is not a replacement for the expert consultation. The volunteer faculty assigned to your case will still be responding to your questions. If you select the option, you will be notified of the results via email. The results can be accessed from within the case. And then we have image guidelines here. All right? What do we mean by image guidelines? There’s a sample image of what we would like to see. By image guidelines, we mean the file formats. This works best if it’s in one of these four formats: JPEG, BMP, PNG, or GIF. You can attach up to eight images. So while left and right color fundus photos might be most common, you may put up to eight images if you want multiple interpretations, or if you feel like some views are better than others. The minimum resolution is 2 megapixels. All right. So once you see that the image has been uploaded here, from there it’s just submit as normal. Now, in this example, I’ve selected “keep case private”, just because it’s a demo case. I didn’t really want it going out there and being in our general collection. But under normal circumstances, if you can leave the cases not private, so that everyone can learn from these once they get closed out, that’s one of the long-term benefits of building up this library of interesting cases. I myself was able to review our library of diabetic retinopathy cases as I prepared for this webinar. It’s really a great tool to search and find cases with the diagnosis you’re interested in. Okay?
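The image guidelines just described (JPEG, BMP, PNG, or GIF; up to eight images; minimum 2-megapixel resolution) amount to a simple pre-upload check. Here is a minimal sketch of that check; the file names and the helper below are illustrative assumptions, not part of Cybersight or Pegasus:

```python
from pathlib import Path

# Guidelines from the talk: four accepted formats, at most 8 images,
# and a minimum resolution of 2 megapixels.
ALLOWED_FORMATS = {".jpg", ".jpeg", ".bmp", ".png", ".gif"}
MAX_IMAGES = 8
MIN_MEGAPIXELS = 2.0

def check_upload(files):
    """files: list of (filename, width_px, height_px). Returns problems found."""
    problems = []
    if len(files) > MAX_IMAGES:
        problems.append("too many images (%d > %d)" % (len(files), MAX_IMAGES))
    for name, width, height in files:
        if Path(name).suffix.lower() not in ALLOWED_FORMATS:
            problems.append("%s: unsupported format" % name)
        if width * height < MIN_MEGAPIXELS * 1_000_000:
            problems.append("%s: below the 2 MP minimum" % name)
    return problems

# A 2048x1536 JPEG (about 3.1 MP) passes; an undersized TIFF fails twice.
print(check_upload([("od_fundus.jpg", 2048, 1536)]))
print(check_upload([("os_fundus.tiff", 1024, 768)]))
```

A clean left/right pair of fundus photographs returns an empty list of problems; anything flagged here would be worth re-exporting before submission.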
So then you submit your case. And the normal close-out screen. Now, once you submit that case, within honestly just a matter of one to two minutes you’re going to receive the response. So there is a mentor who’s going to look at it, and their time frame of responding is within 24 hours. But with this AI system from Visulytix, you’re going to have the preliminary interpretation within seconds to a minute or two. So I had just kind of clicked back over to my cases, and by then, I had a notification coming in that I had an email regarding this. I open my email almost instantly, and within my email there was a link for the case I had just submitted. You click on that link, it kicks you back into the consult system, and your information is going to be there. So one to two minutes after you submit this AI case, you get this confirmation. This case has been assigned to a Retina-Vitreous specialist, Dr. Ciulla, and as you scroll down, back down towards the images section, you will now see that there is a file attached here. This PDF on the top right corner. So that’s the automated interpretation from Visulytix. So what you do to review that is just simply click on that. It will open the PDF. And then it will scroll out to be a few pages. And let’s look at what that looks like, once you open that PDF. All right. Very first thing you’re going to see is that there’s a summary page of all the things that are being graded over here, under the results column. And then there are some scorings that happen where it says image number one. Now, we only have one image, so that’s all we’re seeing right here. But we’ll go through these, and let me explain what each of these summary grades are. The very first thing you’re going to see is a verification score. Verification is just verifying that this is a typical fundus photograph, and that it’s suitable for analysis. It’s the right file. It’s the right view. It meets the parameters for the kind of images that need to be scanned. 
So we’re looking for this to be above 0.75. Now, this image was 1. It’s a perfect image. Gradability. This particular image is kind of borderline on gradability. Now, why would that be? Well, gradability is going to come down to how clear the image might be, and what’s in the field that’s included. And later on, Jay can speak to that, if there are other things that he’d like to point out. But it’s basically: Is this a good image that we feel the AI software can process appropriately? Then the next thing is the diabetic retinopathy score. Now, this was a really ugly-looking image. It had lots of exudates and hemorrhages. Basically you can see this neon green area being highlighted — the whole area is abnormal, and that’s why it has this kind of neon color to it, with all the abnormalities being encompassed. In the estimate of the Pegasus software, this fundus image meets the criteria, based on the international grading scales, to have severe diabetic retinopathy. All right? So that involves several areas within that. And we’ll just touch on this briefly. Here’s our image here on the right side again. This is the unprocessed image. What are the different levels of diabetic retinopathy? Well, normal would be normal. No abnormalities. Mild non-proliferative diabetic retinopathy would be finding a few microaneurysms. And you’ll see an example later on (it’s actually a glaucoma example) where we found a few microaneurysms, and so it got a classification of mild non-proliferative diabetic retinopathy. All right. So moderate. Moderate is less defined here on the scale. It’s worse than mild, but not as bad as severe. Severe is really where all the criteria come into play.
So severe — we’ll just look at this international definition — severe is more than 20 intraretinal hemorrhages in each of four quadrants, definite venous beading in two or more quadrants, or prominent intraretinal microvascular abnormalities in one or more quadrants. And then of course proliferative is neovascularization and hemorrhages. And I’ll let Jay talk a little bit later on, if he would like to, about what the process really is that’s going on here. All right. But in your report, it will then break it down. So you’ve got this severity score, and now we’re going to have it broken down by subcategory. I have inserted the original image on the right side. That does not appear in the report. What you see in the report is the image on the left side, with the highlighting and the text. So exudates were present, and the software has highlighted all the areas of exudates that were identified. Okay? After exudates, we have the hemorrhages. And again, the little box areas, the highlighting, are identifying all areas of intraretinal and preretinal hemorrhages. And after that, we have the microaneurysms. For me, this is a nice feature, because personally I find that it picks up a lot of microaneurysms that I don’t necessarily see when I just look at the image. Of course, we can all see the hemorrhages and the exudates; they show up quite easily, right? But when you get down to the subtler hemorrhages and microaneurysms, I think that’s really where the system shines, and I really like how it highlights all the microaneurysms. And it’s not just the hemorrhages and exudates. We’re also looking at the optic nerve. All right? So there are a couple of scores that go into the optic nerve. One is going to be the disc anomaly score. So the nerve is being analyzed, and we’re being told: Does this nerve seem to be relatively normal? Or does this nerve have some unusual characteristics to it?
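The “4-2-1” criteria for severe non-proliferative disease quoted above can be written out as a tiny classifier over per-quadrant findings. This is only an illustration of the international grading rule, not the Pegasus algorithm itself:

```python
def is_severe_npdr(hemorrhages_per_quadrant, venous_beading_quadrants, irma_quadrants):
    """International ("4-2-1") rule for severe non-proliferative diabetic
    retinopathy: ANY of the following is sufficient:
      - more than 20 intraretinal hemorrhages in each of the 4 quadrants,
      - definite venous beading in 2 or more quadrants,
      - prominent intraretinal microvascular abnormalities (IRMA) in 1+ quadrant.
    """
    return (
        len(hemorrhages_per_quadrant) == 4
        and all(count > 20 for count in hemorrhages_per_quadrant)
        or venous_beading_quadrants >= 2
        or irma_quadrants >= 1
    )

# Heavy hemorrhages in all four quadrants alone are enough for "severe".
print(is_severe_npdr([25, 30, 22, 21], 0, 0))   # True
# A few hemorrhages with beading in only one quadrant do not meet the criteria.
print(is_severe_npdr([5, 3, 0, 2], 1, 0))       # False
```

The key point, which the rule makes explicit, is that the three criteria are alternatives: meeting any one of them is enough to grade the eye as severe.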
This one passed. Okay? But it does, in every case, go ahead and give you the vertical cup to disc ratio. And I think that this is really nice. I’ve been monitoring this feature, and I feel like it does a very good job of estimating the cup to disc ratio. So here I’ve enlarged the graded image down here in the bottom right this time. And it’s still a little bit small to see, but it’s estimating the vertical cup to disc ratio to be 0.43, and then there are the heavy dotted black lines, which are identifying the disc borders. And then there are some more subtle white dotted lines, which are estimating the cup size of the optic nerve. And I think this is really a cool feature, because I expect it’s going to be consistent over time. So to me, this is like a poor man’s OCT, maybe, or a nice way to follow your patients over time. Because you now have this much more precise estimate of the cup to disc ratio than I could do clinically. Right? I might write down: Oh, it’s a 0.4. It’s a 0.6. But this is going to break it down even into 100ths. So you’re going to be able to follow this patient over time with multiple AI analyses, and you’re gonna say: Oh, jeez. This was a 0.43, June of 2019. A year from now, if it’s a 0.48 or a 0.5, then you’re gonna want to look at that patient really carefully, because maybe that cup has truly progressed. And that’s probably below the resolution of what you would normally pick up clinically, I think. All right? So if the cup to disc ratio is more than 0.6, this definitely gets flagged. But I think just being able to follow this number over time is going to be nice. Then finally there’s this macula anomaly score. So the Pegasus system is now highlighting those areas within the macula, which are significantly abnormal. And you get a grade on this. So if that score is higher than a 0.75, then that’s highly indicative of a macular anomaly, and here you can see this one was 1.0. So this was maxed out. This is clearly an anomalous-looking macula. 
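The follow-up idea described above (flag any vertical cup to disc ratio over 0.6, and watch the hundredths-level estimates for change between visits) can be sketched as below. The 0.05 change threshold is an assumption for illustration, based on the 0.43-to-0.48 example in the talk; it is not a clinical recommendation:

```python
def flag_cdr(history, absolute_limit=0.6, change_limit=0.05):
    """history: chronological list of (visit_label, vertical_cdr) pairs.
    Returns human-readable reasons to look at the patient more carefully."""
    flags = []
    latest_label, latest = history[-1]
    if latest > absolute_limit:
        flags.append("%s: CDR %.2f exceeds %.1f" % (latest_label, latest, absolute_limit))
    if len(history) >= 2:
        baseline = history[0][1]
        # Round to hundredths, matching the precision of the AI's estimate.
        if round(latest - baseline, 2) >= change_limit:
            flags.append("%s: CDR increased from %.2f to %.2f"
                         % (latest_label, baseline, latest))
    return flags

# The webinar's example: 0.43 in June 2019, 0.48 a year later.
print(flag_cdr([("2019-06", 0.43), ("2020-06", 0.48)]))
```

Run on the webinar’s numbers, this flags the 0.05 increase even though both readings are below the 0.6 absolute limit, which is exactly the kind of progression Dr. Neely suggests would be hard to catch clinically.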
So this is an example that we chose because it has some very prominent findings. But I also want to show you some that I think are a little more subtle, because that’s also going to be very useful to us. All right. So glaucoma suspects. This is an actual case, submitted from Trinidad and Tobago. We’ve got these kind of funny-looking optic nerves; neither one of them is totally round. You have kind of some unusual disc contours, perhaps, and so if you’re looking at this, trying to estimate what the cup to disc ratio is, I might look at this one on the left and say: Oh, I don’t know. Maybe that one’s a 0.5 or a 0.6. And this one on the right… Maybe that’s a 0.4. So I’m kind of going by color contour. We don’t have stereopsis here. But I’m making these estimates. And then we get a report back for it. And very interesting. I’ll enlarge this in a second. But the vertical cup to disc ratio: 0.52 in both eyes. So even though I thought these two discs looked different, they actually don’t analyze that differently. And then we also see this little mild diabetic retinopathy pop up. Like… Where is that from? I didn’t see anything before. So then I go back, and I say… Oh yeah, maybe I see something here. So let me switch to the PDF for this particular file that I ran earlier today. So here’s that same case. Here’s the PDF for that. Glaucoma suspect case. And let me zoom this a little bit. There we go. All right. So image number one was the right eye. Pretty much normal. Image number two was the left eye. Mild diabetic retinopathy. Equal cup to disc ratios. Verification score, 1. Perfect. Gradability score, 1. So these were really good, nice, clear images. No diabetic retinopathy in the right eye: exudates absent, hemorrhages absent, microaneurysms absent. Then the disc anomaly score. So here’s our cup to disc ratio.
Let me just mag that up for a minute, so you can see these little white lines a little easier. Okay. There we go. So this is a more close-up view of the optic nerve grading. So we have the heavy lines marking the borders of the optic nerve. And then the white lines marking the vertical cup to disc ratio. And there’s your estimate. 0.52. Okay? All right. Let’s go to the other eye. The other eye also had a high verification and gradability score. And then this estimate of mild diabetic retinopathy. All right. Well, why does this have an estimate of mild diabetic retinopathy? And I don’t know if this patient is diabetic or not. Right? I don’t have that information offhand. But what do we find? Right here. Here we are highlighting the microaneurysms. So there are two nice little vascular abnormalities. So that’s nice information to be aware of. Disc anomaly score was low. This actually turned out to be a pretty normal disc, with a cup to disc ratio of 0.52. Identical to the other eye. So I think that’s really an interesting, cool feature for glaucoma suspects. If you were wondering if there’s an abnormal disc, I suppose you could use it for drusen too. And see if you get a disc anomaly score. It won’t tell you that the discs have drusen, but it’ll pick up those irregular contours. Also, I think from my standpoint being able to follow these patients over time, these glaucoma patients over time, is going to be pretty cool. All right. So we’ve looked at diabetic retinopathy. And we’ve looked at glaucoma. Both of those… So every patient that gets submitted gets analyzed for both of those features. Right? You don’t have to choose glaucoma or choose diabetes. The Pegasus software will automatically give you the report that includes both features. Now, let’s try to do one live. Fingers crossed, right? Live doesn’t always work out. But this should work. So these are the images I’m going to submit for automated interpretation. Realtime. We’ll see how this works. 
And this is a diabetic patient. You can see some abnormalities there already. All right? So I’m going to now stop sharing the PowerPoint, and I’m going to start sharing my desktop again. Straight in to… Cybersight. So I have my homepage open. And I have a new request. So submit new request. Patient case. Selected. And now we’re opening up a new case. Subspecialty. Okay. So I need to pick one of the three subspecialties. Retina, pediatric retina, or glaucoma. I will do… Retina. And I’m gonna send this case to myself. So that it doesn’t go anywhere else. And I’m just gonna put in… Just made up stuff. So that we can speed through this. I have to have a visual acuity. I will put hand motion. Light perception. And I’m just gonna… I’m not gonna add all this other stuff. I’m just gonna get down to business. We need a diagnosis. So I’m gonna say… Diabetic retinopathy. Very general. And I’m gonna say… Laser or Avastin. Let me put that down here. That’s gonna be my question. Laser or Avastin. But I need to treat this. Now… Of course, the AI software is not gonna tell you whether or not you need to treat. I would wait for that from the mentor response. AI is gonna give you a little bit of heads-up. So up here, I’m just gonna put “none yet”. Untreated patient. All right. So normally you’re like… Okay. I’m gonna add some files here. I’ve got my color fundus photographs. If you hold down your command button, you can get all this stuff at the same time. And I’m gonna add this OCT that I have. So that the faculty member, my mentor, can get a good idea of what’s going on with this patient. All right. So then run AI. Run the automated interpretation. Yeah. Let’s do that. That sounds cool. So I hit that. It opens up my files again. Now, I’m not gonna add all the other stuff. The OCTs and visual field. I’m only going to do the color fundus photographs now. So I am picking out just those two images for AI. All right? Those are being uploaded. And there they are. 
So these are the two images we’re going to run the AI analysis on. And again, I’m gonna keep this private, since this is just a demo case. And let’s submit. All right? So that case is being submitted. And once those images are uploaded, they’re still uploading to Cybersight, they’re immediately then kicked over to Visulytix, and the Pegasus software is receiving those right now. And it’s running its automated interpretation. So I’m done with my cases. I’m just gonna go back to my… We’ve submitted our case. I’m just going back to my home screen. And normal little lag here. Now I’m gonna go to my email. All right? So I’ve just received a notification here that I have a new case submitted. And shortly after that, here you can see I’ve just received notification that I have AI interpretation available. All right? So that was maybe 30 seconds. That was super fast. So I can click on my link, and that’s gonna take us back to our case. There’s my case. Amazingly, I got it assigned to myself, so I will have to give myself some good advice. And there’s our findings. There’s what we uploaded, and here is our AI result. All right? So let’s open that. We have our AI result. We have image gradability. And verification scores, which are good. And then we have our diabetic retinopathy score. Right eye scored severe. The left eye, moderate. And an anomalous macula on that right eye. So let’s look at the outputs for this. All right? So we passed our gradability. We get a score of severe for this right eye. Now, why might we get a score of severe? Well, I think it has to do with the fact that we’ve got these exudates, and we’ve got exudates right in the macula. Right in the fovea, basically, right here, and then some stuff temporal. We’ve got hemorrhages being outlined in all the quadrants. We’ve got microaneurysms pretty much in all the quadrants. And we’ve got the vessel characteristics on top of all that. Disc anomaly. We passed our disc scoring, our disc is fine. 
Vertical cup to disc ratio is 0.41. It’s normal. Here we’ve got this macular anomaly score. Our macula did not pass analysis. The macula definitely has some threatening areas right there in the middle. Unlike image two, which was graded as moderate. You can see that there are some exudates out here, temporal. We’ve got a few hemorrhages, but they’re not in every quadrant. We do have quite a few microaneurysms. Disc anomaly was normal. Vertical cup to disc ratio was excellent. And our macular anomaly score — really, the macula was looking pretty clean on this side. So therefore this does not get graded as severe as the other. Okay? So that’s a live demonstration of how the AI software works. I think at this time I will turn things over to Jay. And let’s give Mr. Lakhani the microphone here. So I’ve got the questions open right now. Let me tell you what the first one is. The first question here is: Can I submit a case and fundus images for AI grading from my smartphone, or must I use a laptop? Is there any cost to us? Well, there’s absolutely zero cost. This is a free service, just like the consultation. You can use your phone, a laptop, or a desktop. Jay, is there any difference in the quality of those images, as far as you’re concerned, as to how they’re submitted?

MR LAKHANI: No. I think the image acquisition can be done either with a desktop fundus camera or with a handheld smartphone camera, and it needs to be at a certain level of image quality, as we assess on our system. However, the way you then take that image and upload it onto our system doesn’t impact performance at all.

DR NEELY: Right. So we need a good image of sufficient resolution, and a number of pixels. But otherwise the mechanism for submitting it doesn’t matter, as long as you have a good quality image.

MR LAKHANI: Exactly.

DR NEELY: We have a question. Next question is: Is the cup to disc ratio calculated (inaudible)? Can you give us any insight into the algorithms there?

MR LAKHANI: I guess to some degree it’s difficult to say exactly what the AI is doing underneath. But what I can say is that the algorithms have been trained with specialists giving their interpretation of the vertical cup to disc ratio, and the system is doing its best to mimic their measurements. So it’s going to reflect the training that was given to it by the specialists.

DR NEELY: Okay. So these have been quantified based on expert interpretation, so that the two match. Okay. How do I access this system? Well, you access this system by going to Cybersight consult and submitting your case, just like we did. Next question has to do with sensitivity and specificity. I don’t know if that’s information that we have available.

MR LAKHANI: So I think the answer here is that there are quite a few different algorithms running underneath the system. So we have a sort of spec sheet that we can share, which shows the algorithm performance in the detection of the different diseases. Some of that is published literature and some of that is in-house studies or validation done by private institutions.

DR NEELY: Right. And Jay, I have another question here that has to do with the fundus camera image magnification. And the question specifically is: Does it matter if someone is using a 30-degree field of view or 45-degree? I would assume that the more periphery you have probably the better, but maybe not.

MR LAKHANI: Yeah, actually, the first thing the AI is doing is assessing what type of image it’s looking at. So if it can see the optic nerve, then it can make a decision about the health of the optic nerve. If it can’t, it still works, but it then obviously restricts the output that it can give. We’ve actually had the software tested on different fields of view, across the seven-field space. We’ve also had it tested on 45- versus 60-degree fields of view, and so on, and actually the performance is broadly similar. But the more information you give it, the more those annotations or heat maps are useful for your interpretation.

DR NEELY: Excellent. The next question has to do with: Any concerns with patient confidentiality when sharing these images and including patients’ bio data? All right. So certainly within the Cybersight system, all of our users are vetted. All of our users have passwords and usernames. So that system is secure. Visulytix does have temporary access to the images. Beyond the images, Jay, are there any security concerns with Visulytix?

MR LAKHANI: No. So as you said, the images that are sent to the Visulytix system are sent to our deployment, which is held on Microsoft Azure, which is obviously a grade A cloud provider. We don’t keep any of that information. In our agreement with Orbis, we’ve actually decided not to keep any patient information at all. So it’s deleted on our side as soon as it’s processed. Our intentions here are to provide this service to lower-middle income countries, and not to harvest data.

DR NEELY: Right. Yeah. And I think that’s a good point. Next question is: If an image is taken with a smartphone camera, is it possible to have AI analysis? Well, I think the answer to that is yes. As long as you can get a good picture, you can have the AI analysis. I think the main handicap here is that, in my experience with smartphone cameras, it is very difficult to get really good images that are clear in all areas. But if you’re skilled at it and you have a nice device, then of course, yes, you can do that. Here’s one for you, Jay. Is it possible to purchase this analysis for use in private clinics? If people want your service outside of Orbis, can they get that?

MR LAKHANI: Absolutely, yes. We have a couple of ways to get in touch, if you’d like that. One is through our website, where we have a contact form for new customers. And the other would be to email myself directly, or one of our accounts team. I can share a link for that, if that’s helpful.

DR NEELY: Okay, perfect. If a case has both glaucoma and diabetic retinopathy, in which category should I submit it? Well, I think it doesn’t really matter, in terms of the AI analysis. You’ll get it with either one. My advice is: Submit it to whichever category you want the mentor who’s supervising the case to be from. So if you are more worried about having a glaucoma specialist follow up on this case, then you want to choose glaucoma. If you’re more worried about this being a medical or surgical retina case, then choose the retina subspecialty category. But you will get the same report from Visulytix either way. Next question: Is the model sensitive to regional differences from different countries? And is information on the demographics of the training dataset available? Yeah. Go ahead. I don’t think there’s anything with the demographics, though.

MR LAKHANI: No. We’ve had this platform running live with patients around the world for some time, and we’re not seeing any particular differences in performance. There isn’t one specific region the data was sourced from; we got data from around the world. And that definitely helped with the robustness of the algorithm performance: first to different cameras and different light settings, but also to different ethnicities within the images themselves.

DR NEELY: Right. So this system is based on a diverse population to begin with. So it should be applicable to everyone around the world. Next question is: Any effect, Jay, of media opacities like cataract? How does that affect things?

MR LAKHANI: Yes, absolutely. So this is an ongoing area of research for us. Media opacities like cataracts do cause the image gradability to fall. Not alarmingly so, and it doesn’t stop the AI from trying to find areas of interest, but in severe cases, it does make it very, very difficult. And there is currently no obvious route around this. The best we do is to analyze the images as well as we can, and provide all that information, all those heat maps, to the best the algorithms can manage, but with the added caveat that the image quality score will appear to be very low. So there could be more mistakes.

DR NEELY: Right. So clearly if you have a total cataract, there’s no way the system can work. You have to be able to see it to grade it. In these more moderate opacities, I think that’s why you’re getting this kind of feedback score on: How gradable is the image? And if it’s a poor quality image, you need to interpret the results in clinical context. And I think that’s where the end user has some responsibility as well. Yeah. So you have to use your judgment and take all this information together. Because just like our opinions are going to be limited based on what we can see, so will the AI software. All right. Here’s an interesting question. If the patient has other fundus findings like macular degeneration, ARMD, then what happens?

MR LAKHANI: So in addition to all the algorithms that we specifically have to look for signs of disease, we also have an anomaly detector that looks for abnormalities beyond the scope of the specific things that we’re looking for. In the case of AMD, if it’s noticeable in the fundus image, it will be detected by the anomaly detection, which is the green neon halo that we showed earlier. It won’t specifically be identified as AMD, but the area of interest will be highlighted and then passed back for review.

DR NEELY: All right. So the anomaly will be highlighted. That’s an interesting question, though. We haven’t incorporated AMD into the Cybersight system. But is that something you have separately? Do you have AMD grading, just like you have the diabetic retinopathy grading? Or is that something that might be an add-on in the future?

MR LAKHANI: So for fundus images, it’s certainly something we’re looking to add on in the future. But we currently have an entirely developed and market-ready product for OCT images. So we analyze macular OCT images and look for signs of wet and dry AMD, diabetic macular edema, and also other potential anomalies in the macula as well. And that’s, at this moment in time, not available on Cybersight, but it will be in due course.

DR NEELY: Yeah. I think that’s really interesting. And I think everyone needs to keep in mind that this is a technology that’s come a long way, but it’s really, I think, in its infancy, for what it’s going to be as it rapidly progresses. I think it’s gonna be exciting to see what you guys come up with. Another question here. If there is an imaging artifact in the fundus image, how does it affect analysis? So imaging artifacts, reflections, those kinds of things — what’s the impact of that?

MR LAKHANI: Sure. So I think it always depends on the type of artifact or the severity of the artifact. So if it’s a huge reflective area, it can sometimes be problematic and will flag up in the gradability part of our algorithms as potentially ungradable. What I would say is, because we’ve trained on quite poor quality data, as well as well curated data, artifacts tend to actually be incorporated pretty well into our system. So if you compare dust versus microaneurysms, it performs really well in picking out the difference between those. What I would say is that if there are artifacts out there, generally, as I think you mentioned with regards to the technology, it is very difficult, beyond telling you that the image is poor quality and needs to be retaken, for AI algorithms to specifically pick out artifacts versus areas of interest. So we definitely approach with caution, if you have a camera that’s generating artifacts every single time. However, they should be spotted by our system. It may come up as a macular anomaly, for example, where it might be dust.

DR NEELY: Perfect. There are two more questions. I will answer these at the same time. We’re at 45 minutes now. And so we’ll draw this to an end. The two questions that I’d like to answer here are, one: Is the system available now? Can I submit a case image today? Yes. It’s actually been live for more than a month. But this is our big promotion of it. So we’ve had our soft opening, and we want you guys out there using it now. So today, start submitting images, and we’ll get right on it. The other question was: I joined late. Is it possible to see this presentation as a recording? And yes. Like all of our webinars, this has been recorded, and it will be available on the Cybersight website, in the library section. So you can see that at any time. And the final question was — I’ll just throw this in there — is it possible to run Visulytix models in areas with poor internet connectivity? And I would say the answer is yes. If you have enough internet to submit a Cybersight case, that will get you into the Visulytix system. It takes nothing more complicated than what you’ve already been doing. Jay, thank you for your time, and thank you for the efforts of you and your company in working with Orbis and bringing this really fascinating technology to Cybersight and to our users around the world. We are appreciative and we’re looking forward to seeing how this evolves over time. Thank you, Jay.

June 24, 2019

Last Updated: October 31, 2022