Lecture: Intro to AI in Ophthalmology

Join Dr. Daniel Neely and Dr. Nicolas Jaccard as they examine Artificial Intelligence (AI), its applications in ophthalmology, and related tools available on Cybersight. We begin with a non-technical overview of AI, including a brief history, current state, and considerations such as bias and ethics. We then demonstrate Cybersight AI and how its open-access tools can detect and visualize glaucoma, macular disease, and diabetic retinopathy.

Lecturers: Dr. Daniel Neely, Pediatric Ophthalmologist & Professor of Ophthalmology, Indiana University School of Medicine, USA; and Dr. Nicolas Jaccard, Principal Architect, Artificial Intelligence, Orbis International

Transcript

DR JACCARD: Hi, everyone. Welcome to this Cybersight webinar about AI and ophthalmology. My name is Nicolas Jaccard. I'm principal AI architect at Orbis, and I'm joined by Dr. Daniel Neely, professor of ophthalmology, medical advisor at Cybersight, and also a long-time volunteer at Orbis. So hopefully this will work. Okay. Perfect. So what can you expect today from the webinar? We're going to start with a non-technical introduction to artificial intelligence. We will then look at how AI has been and will be applied in ophthalmology and eyecare. We will do an overview of ethical concerns around the use of AI in general and health in particular. And then we're going to do a live demonstration of our AI-enabled Cybersight Consult features, and at the end we'll have a live Q and A for any questions you may have about all of this. So let's start with a brief introduction to artificial intelligence and machine learning in general. Artificial intelligence is defined as the ability of a computer program or a machine to think like humans do, and this remains science fiction for now. But we have a really good approximation of it, and this is called machine learning. Machine learning is the ability of a computer program to learn from examples without being explicitly programmed to do so. And I'll give some examples of this in a few slides. I also want to introduce you to Deep Learning, which is a subset of machine learning. It's a technique that's seen a lot of success in recent years, and nowadays, when we talk about machine learning, and AI to a certain extent, we typically refer to a variation of Deep Learning. I'll explain what that is in a few slides. So I would like you to think of machine learning as a tool set to solve really hard problems. Some of them are here on the left-hand side: language understanding, computer vision, speech recognition, robotics. Let's say you want to solve computer vision, which is a really hard problem. It's basically giving machines or computers the ability to see, like humans or animals do. If you want to tackle this problem, you may want to start, for example, with physics. You want to understand how light refracts and works, and, for example, how the optics of a camera work, and try to replicate this in your computer program. Or you may want to look at neuroscience, inspired by how biology does it or how human vision actually works. But another tool you can use, in addition to physics and neuroscience, would be machine learning. And in turn, machine learning has multiple methods associated with it, and the one we're going to talk about today is called Deep Learning. As you will see, it's a method that's been very successful at enabling computer vision and making it a big success in our day-to-day lives. So I'm going to start by explaining what exactly makes machine learning different from traditional computer programs. The example we're going to use is a program, shown by this box here, that will classify images of cats and dogs. You start with an image of a cat coming in, and the computer will say: Yeah, this is a cat in that image. And similarly for dog images: you give that image to the program and the program can say: Yep, this is a dog.
And this is something that will happen nowadays on phones, for example: if you take a photo, sometimes it will tell you what the content of that photo is. So it's a similar program to those. The way you would traditionally do it, with a regular computer program, is: You would sit down and think really hard about the differences between cats and dogs. And once you have a reasonable list of differences, you would formalize that list in the form of computer code. So basically you would explicitly program your computer to recognize cats and dogs, based on the rules that you just came up with. And at the end you get a program that is able to classify cats and dogs in a photograph. When you use machine learning, it's a very different approach. What do you start with? You start by collecting a bunch of data. Luckily we're using cats and dogs, so finding data on the internet shouldn't be too difficult. So you have a bunch of pictures of cats and a bunch of pictures of dogs. The next step is what we call labeling. You tell the computer that all the images here are images of cats and all the images here are images of dogs. Taken together, this is a dataset: a collection of data with the corresponding labels, in our case cats and dogs. The next step is to give all that data to what we call a machine learning algorithm, and this process is called training. During that process, the machine learning algorithm will learn the best possible rules to differentiate between cats and dogs. It doesn't know anything about cats and dogs. It was never programmed to know anything about cats and dogs. But because it's able to learn from examples, it will learn to do the differentiation by itself and find the best possible rules to do it. And then you end up with a program whose output might be very similar, but the way we got to it was very different. In one case, you had to explicitly tell the computer what cats and dogs are. In the other, the computer was able to learn by itself, based on examples. And this is really what makes machine learning so different from traditional computer programs. So before we proceed, I'm going to take a little detour and talk about benchmarks in computer vision. I was talking about datasets, which are collections of data. When a group or a company comes up with a new AI algorithm, typically what it wants to do is evaluate it on public datasets, so that you know whether you're better than the state of the art you're trying to beat. And when a dataset becomes very popular, it's typically called a benchmark. And every year, challenges are organized around benchmarks. Sometimes they're part of conferences, and sometimes they're just organized online. There's this particular challenge called the ImageNet Large Scale Visual Recognition Challenge, which consists of classifying 1.2 million images, a very large dataset, into one of a thousand categories. Here you have examples of these categories: you have a red fox, you have a hamster, or you have a stingray. Basically the computer is given one of those images and should come up with the right category for that image. This challenge used to be thought of as extremely difficult. We thought it would take decades before a computer could actually solve it at near-human performance, which is about 95% accuracy. And you can see here on the left a plot of the error rates: 26% error in 2011, which is about 74% accuracy.
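Before moving on, here is a minimal illustration of the machine-learning workflow described above (collect data, label it, train, evaluate). It is a sketch only: the data is random stand-in data, not a real cat/dog dataset, and the simple classifier is chosen for brevity rather than realism.

```python
# Minimal sketch: a classifier learns its own rules from labeled examples,
# instead of being explicitly programmed with hand-written cat/dog rules.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for a real dataset: 200 flattened "images" with labels
# 0 = cat, 1 = dog. In practice you would load real labeled photos here.
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))        # each row: one 64x64 grayscale image
y = rng.integers(0, 2, size=200)      # the labels we attached during labeling

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training": the algorithm finds the best rules it can from the examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The resulting program classifies images it has never seen before.
print("held-out accuracy:", model.score(X_test, y_test))
```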
So in 2011, we were very far from human performance. In 2012, however, machine learning was almost reinvented. What happened is a group from the University of Toronto came up with a new version of an existing method called artificial neural networks. They made a few tweaks to that method and used it in a very smart way. They came up with something called convolutional neural networks and Deep Learning, and we're going to discuss on the next slide exactly what those terms mean. But it's important to see that there was a huge gain in performance as soon as this technique was introduced. You can see here we jumped from 26% error to 16% error. And every single one of these data points following 2012 was using Deep Learning methods as well, or variants of Deep Learning. And the interesting point is that after 2014 (you can see that this horizontal line here is human error), we actually achieved superhuman performance. So the algorithms were actually better at this task than your average human would be. This has become a theme with convolutional neural networks: they achieve superhuman performance on a variety of tasks. So what exactly is Deep Learning, and what are convolutional neural networks, or CNNs for short? Here is an example for computer vision, but similar methods are applicable to natural language processing, sound processing, any kind of problem in this category; you would be able to apply Deep Learning to it. In this case, we want our computer to be able to tell us what breed of dog is in a photograph. We start with this photo here. The machine sees it slightly differently than we do: it sees it as three channels, a red, green, and blue channel. And then we have our neural network, which is represented here by these layers. It's not a coincidence that this is very similar to how human vision actually works; we also use stacks of neurons to be able to see, and this is inspired by that. Similar to human vision, the complexity and level of abstraction go up the deeper you go into your network. So how it works is: The first layer will look at your image and detect very simple image features, for example, edges. Then the second layer will go up in complexity, and instead of just looking at edges, it looks at shapes, which are groups of edges. Then maybe the third layer will look at textures, which are very helpful for looking at, for example, the fur coat of a dog. By the time you get to the fourth layer, it might be very good at detecting dogs, so doing a dog versus non-dog kind of classification. And by the time you get to layer five, you get a very good dog breed detector, and it's able to tell you: Okay, I have 83% confidence that this is a husky. The important bit is that it's called Deep Learning because there are multiple layers in the architecture. In this case, it's five, but nowadays you have networks that are 150, 200, 300 layers deep. And as I mentioned, the deeper you go, the more abstraction you have and the more specialized your neurons will be. So there's this notion that the deeper you go, the better your algorithm will perform. They're called convolutional neural networks because the type of connection between these neurons is called a convolution, which is a mathematical operation that is very well suited for image applications.
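For readers curious what such a layered architecture looks like in code, here is a small hedged sketch of a convolutional network in PyTorch. The layer sizes and the stage-by-stage structure are illustrative only, chosen to mirror the edges-to-shapes-to-textures-to-breeds progression described above; this is not any specific published model.

```python
# Sketch of a small convolutional neural network (CNN): earlier layers pick
# up simple features (edges), deeper layers increasingly abstract ones.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_breeds: int = 120):  # e.g. 120 dog breeds
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # shapes
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # textures
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),                  # dog parts
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_breeds)  # per-breed scores

    def forward(self, x):            # x: (batch, 3, H, W), the RGB channels
        return self.classifier(self.features(x).flatten(1))

# One random RGB image in, per-breed confidences out (e.g. "husky, 83%").
logits = SmallCNN()(torch.randn(1, 3, 224, 224))
probs = torch.softmax(logits, dim=1)
print(probs.argmax(dim=1).item(), probs.max().item())
```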
Convolutions make all of this very efficient, and some things that ten years ago would take weeks to run, we can now run in hours or minutes. So this was a huge revolution in the field of machine learning. And it was quickly followed by big headlines, all of them related to Deep Learning. The first one I wanted to mention: In 2016, an AI beat a Go grandmaster, one of the best players in the world. And it was a big thing, because Go, unlike chess, was thought to be a game where it was almost impossible for a machine to reach human performance. But it turns out that this AI beat the best player in the world in 2016. And then in 2019, a similar result, where an AI won at a video game called Dota 2, a 5-versus-5 strategy game. What is important here is that it's not only a one-on-one game, like Go or chess. It's 5 on 5. So the AI beat 5 human players, and it had to coordinate 5 versions of itself playing together to beat all of them. This was the first time that was achieved, and it was a big deal. More recently, at the end of 2020, there was a big breakthrough of AI being used to solve protein structures. This is very interesting, because protein structures underlie most of our understanding of biology, and it's also a big thing in drug discovery. So this might possibly lead to more and better drugs being discovered through these breakthroughs. So you can see that AI has already had a huge impact, and Deep Learning especially was this huge revolution in machine learning that just took off in a matter of years. Now I'm going to discuss how these advances are currently being used in ophthalmology and how they will be used in the future. The first thing to know is that most of the applications in ophthalmology nowadays are imaging based. They will look at fundus photographs, such as the top one here, or OCT scans, like the one here at the bottom. There are other areas, such as patient management, disease risk prediction, progression analysis, and automated interpretation of non-imaging modalities, such as visual field tests. But those are relatively rare, and most of the applications will be diagnostics based on these two modalities, 2D and 3D data. And very early on, ophthalmology became a very big area of interest in AI and health care. These are four reasons why that might be. First of all, there's plentiful data. We can acquire an image of the eye very non-intrusively, non-invasively. When you compare that to an MRI or a PET scan, acquiring a fundus photograph or OCT scan is much, much easier and faster and less intrusive for the patient. So there is a lot of data to work with, which is a requirement for using these more advanced techniques, such as Deep Learning. There's also a big familiarity with technology. Ophthalmologists and health care professionals working in ophthalmology and eyecare are very familiar with remote diagnosis, using telemedicine, and various decision tools; for example, modern fundus cameras have tools that help you make better decisions, through visualizations or other aids. And ophthalmology, and fundus photography in particular, is used at all levels of health care systems.
So whether you do community screening, or you are a general practitioner in primary care, or a secondary care hospital, or a highly specialized tertiary care institution, fundus photography is used across all of these levels. That means there's a very big incentive to solve it and to apply AI, because then it will be usable at various levels of care, versus, for example, MRI scanners, which will probably only be useful in secondary and tertiary care. And the last one is: There are existing use cases for AI in ophthalmology. There are a lot of workflows, such as multigrader workflows for screening programs, or triaging workflows, that could already use AI to replace, for example, one of the graders, without needing to rethink the entire workflow. We could use exactly the same workflow that we use today, but replace one of the graders with an AI or a machine learning algorithm. So I would like to present three landmark moments in AI and ophthalmology and eyecare. The first was in 2016. I was talking earlier about the importance of benchmarks, and benchmarks tend to spur research and create a lot of interest in a given field of machine learning. In 2016, a dataset that was large for the time, about 100,000 images, was released for diabetic retinopathy. That created a big sense of excitement for AI and machine learning in ophthalmology, and you saw many, many academic groups and commercial organizations trying to use this dataset to create diabetic retinopathy grading algorithms. Even to this day, five years later, you still see papers coming out almost monthly that use this dataset as the basis for their work. Another big landmark moment was also in 2016, when DeepMind, which is a subsidiary of Google, announced a research partnership with Moorfields Eye Hospital in London. And this was a big moment, because Google is not a small company. This was Big Tech entering AI in eyecare, and it showed a lot of interest in that field. It's not just academic anymore; there's really an interest from a commercialization standpoint. And then in 2018, the FDA approved an AI-based medical device for the detection of diabetic retinopathy. This was a big deal, because it was the first AI-enabled medical device approved by the FDA, and it was in eyecare, in ophthalmology. This was when the field went from really being a research or academic field to something that can be used to screen and diagnose patients day-to-day, as part of a clinical workflow. So this was a really big moment in AI and ophthalmology. Since then, there's been this race to match expert performance. The same way I was talking about ImageNet and how we got superhuman performance, this is something that is happening in ophthalmology as well. The two main areas of application for this race are diabetic retinopathy and glaucoma, and you can see on this plot that there's a lot of interest in both of those. There's interest in AMD as well, and actually less for, for example, cataract or ROP. And across all of those, most of the studies you're going to see will claim they have superhuman performance, meaning they are better than human clinical experts at a particular task. While that may be true, it's actually extremely difficult to prove, because, as with everything in AI in health care, there's a lot of variability between clinicians; clinicians tend not to agree on a given diagnosis.
And that makes it really difficult to demonstrate that AI is actually superhuman in ophthalmology, for example. I invite all of you to be very critical of papers that claim superhuman performance, because most of the time, while it might be true, they haven't shown sufficient data to demonstrate that it's actually the case. And as always, the proof is in the pudding. I personally say that we should only consider an AI system a success if it demonstrates actual benefits to patients in the field, on the ground, in real-world clinical settings. And this is extremely rare. There are very, very few examples to this day that have shown that it's actually beneficial to use AI as part of a real-world clinical setting. I also encourage education of clinicians in this area, so that clinicians can critically evaluate new AI systems and make sure they really fully understand what a system is capable of before thinking of deploying it. Deploying AI is a very hard problem. Google experienced it when they took an algorithm that was performing really well in a laboratory context and deployed it on the ground, in Thailand. It turned out to perform very poorly. And the reason is: As soon as you go outside the ideal conditions of the lab, things become very difficult, and results are extremely unpredictable. This is very important to think about, because we typically only have one chance to get it right. If we deploy AI and it does terribly, and the outcome is not an improvement for the patient, or even worse, it's detrimental to the patient, you lose all the trust in these systems, from both clinicians and patients. So you really have to get it right, so that you gain that trust and do things correctly, and everyone buys into the idea of having AI deployed in their clinical workflows. And I think we should dream bigger than diagnosis. Here are two potential applications that have been demonstrated. One is prediction from fundus photographs: for example, being able to tell whether someone is hypertensive, or likely to suffer a major cardiac event in the next five years, just based on a fundus photograph. The other is progression analysis; this is an example where AI was used to predict when dry AMD will convert to wet AMD. I'm not an ophthalmologist, but my understanding is that this is something that is difficult to do, and having an AI be able to do it would be extremely useful. This is the kind of area where I think AI can outperform humans: not just diagnostics, but coming up with new ways to use the data that we already have available. Before I finish this part, I will leave you with the top four ways I think AI could impact eyecare in the medium term. First of all is decision support: AI will be there to make sure that clinicians and health care professionals never miss anything, so they can provide the best possible care to their patients. Second, AI will be used to discover new biomarkers that will allow for earlier and more accurate diagnosis of eye conditions. Third, AI will be extremely helpful when it comes to screening or triaging of patients, especially in low-resource areas. And finally, something that is not discussed as much: I think that AI will be very helpful and useful when it comes to training and mentoring as well. Before we continue, I will run a poll so you can share your opinion on this.
Which option best describes your feelings regarding AI and ophthalmology? You have a few choices here: excitement, fear, indifference, confusion, and nothing in particular. We are very interested to know what you think about these different terms and how you feel about AI and ophthalmology in general, because this is something we are very interested in at Cybersight, and knowing what our audience thinks about it will be very helpful in the future. And hopefully it will not be fear. It's excitement. So that's good. 82% excitement. And then confusion as the next follow-up. So hopefully some of this today will help clear the confusion. But I'm very happy to see that by a large majority, it's excitement. This is what we want to see. Okay. So very briefly, before we go on to the demonstration, I wanted to quickly touch on the ethical considerations around AI in general. This is a really big topic, and it's something that underlies everything we do in AI at Orbis. And I want to make it very clear that, unlike what you might think, AI tends to amplify prejudices and biases. So if you count on AI to, for example, minimize those, I think you will be mistaken. Two big examples came out recently. In one, Amazon was using an AI to screen candidates for jobs they were advertising, and the AI tended to favor men; it was biased against women applying for jobs. So they had to scrap this AI recruiting tool. And maybe far more concerning, a risk algorithm used in the US health care system was heavily biased against Black patients. Again, those are prejudices and biases that exist outside of AI, but AI basically amplifies them, so we have to be very, very careful about that. I think the take-home message is that all AI algorithms are biased in one way or another, and it's on us, developers and promoters of AI, to make sure that we minimize that bias as much as possible. Very quickly, there are two types of bias. The first is data bias: machine learning algorithms are extremely good at finding any pattern in your data, and if any of those patterns reflect biases or prejudices against a given demographic, the machine learning will find them and potentially amplify them. So you have to be very careful about what is in your data, and your data should be as free of biases as possible. The second is more of an algorithmic bias: the methods used to train the algorithms are designed to extract as much performance as possible, potentially at the cost of other factors, such as ethical considerations. So it's not always best to try to achieve, for example, superhuman performance if it means you're going to be biased against a given demographic. (A short illustrative sketch of a simple per-group bias check follows below.) Keeping that in mind, Orbis, when it comes to AI, has a few values, and here are examples of them. We value fairness over performance. We build for everyone. Orbis has a very, very large audience, and we have to make sure that what we produce works for everyone. And we try to build fairness right into what we're doing, through better education, having diverse teams, and following and suggesting industry-wide standards, to make sure that all of this becomes standard practice across the industry. So before we go into the hands-on, let me quickly introduce what you're going to see.
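Here is that sketch: one simple first check against data bias is to report performance separately per demographic group instead of a single aggregate number. All data and group labels below are invented for illustration; this is not Orbis or Cybersight code.

```python
# Sketch: auditing a model for group-level performance gaps.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Invented example: a decent-looking overall accuracy hides a gap.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.4}
```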
So at Orbis, our motto when it comes to AI is that we want to leave no one behind in the AI revolution. And the way we're doing this is by democratizing access to AI, with the aim of benefiting our patients and partners around the world. We have this initiative called Cybersight AI, and what we're going to show you today is how it can be used for clinical decision support. But we are also working on applications related to teaching and mentoring using AI. The way it works is: AI is built into Cybersight Consult, and it can be used today, free of charge, by any of our Cybersight Consult users. You may have a Cybersight account that only gives access to our learning libraries. If that's the case, you will need to be upgraded by our support staff to a full Cybersight Consult account. So if you try to access Cybersight Consult and the AI features and you can't, please contact support and we'll work through the process to upgrade your account and make it eligible to access those features. Once you have the account, the AI is accessible free of charge. But it's always optional: we never send data to the AI unless you explicitly want us to do so. And the way it works is: As a mentee, you submit a patient case on Cybersight Consult that will go to your mentor, and then you will get a recommendation and have this kind of conversation going with your mentor. If you opt in, you can also send the data to the AI, and you will receive within a few minutes an AI report that contains highlights of the AI's interpretation. You can look at that report right away, discuss it with your mentor, and use it as a basis for your conversation with your mentor. The second option is to use the newly introduced AI-only cases. Here, as a mentee, you submit an AI case that goes to Cybersight Consult, and you get the same AI report back, but it doesn't involve your mentor at this stage. So if you have any questions, or you are confused about the AI report, or you're not sure what to do next, you can always create a patient case from that page. It's very important for us that a human can always be in the loop, in case you need one to help you. And we're going to say more about this in the hands-on in a minute. The AI report content includes information about disc anomalies, vertical cup-to-disc ratio, macula anomalies, and information about DR grading as well, following the international grading scheme. You'll see more about this in a second. But just to give you an idea of what to expect in terms of data, this is a fundus photograph that is a perfect example of the input data for the Cybersight Consult AI features. Very briefly, what we want to see as input is a 45 to 60 degree fundus photograph; this is a typical field of view for a fundus photograph, such as this one. It should be macula-centered; decentered also works, but it's not optimal. It has to be a high-quality export directly from the camera software, with one fundus image per file, such as the one I just showed you, and nothing else in the file, such as text or manual annotations. It has to be in a supported file format: we support JPG, PNG, and TIFF. We're not compatible with SLO or widefield images, such as this image here. We do not support uploads of OCT scans, or camera reports, which are typically generated by your camera software directly.
We want the fundus exported from the camera software instead of the report itself. We do not support screen photographs; this one is an example of taking a photo of the screen. This typically leads to quality that is too low for AI grading. We do not support visual field reports. And we do not support fundus montages, like this one, where multiple fundus photographs have been put together to form a mosaic. That being said, you're welcome to try other data, and the software will let you know if the data is not suitable for upload. So on that note, this is it from me. I will come back for the Q and A. I'm handing it over to Dan for the rest.
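As an aside before the demonstration, the input rules Dr. Jaccard just listed lend themselves to a simple pre-upload sanity check. The sketch below is hypothetical: the thresholds, helper names, and heuristics are invented for illustration and are not Cybersight's actual validation logic.

```python
# Hypothetical pre-upload check mirroring the stated input rules: a single
# high-quality fundus photo in a supported format, no montages or reports.
from pathlib import Path

SUPPORTED_FORMATS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}
MIN_SIDE_PIXELS = 500   # assumed quality floor, not an official number

def basic_upload_checks(path: str, width: int, height: int) -> list[str]:
    """Return human-readable problems; an empty list means 'looks OK'."""
    problems = []
    if Path(path).suffix.lower() not in SUPPORTED_FORMATS:
        problems.append("unsupported format (use JPG, PNG, or TIFF)")
    if min(width, height) < MIN_SIDE_PIXELS:
        problems.append("resolution too low; export directly from camera software")
    if width / height > 1.6:  # very wide images suggest montages or reports
        problems.append("unusual aspect ratio; montages/reports are unsupported")
    return problems

print(basic_upload_checks("fundus_od.jpg", 1600, 1600))  # []
print(basic_upload_checks("report.pdf", 800, 300))       # three problems
```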

DR NEELY: Thank you, Nicolas. So Nicolas is an AI engineer, or AI architect, and I'm talking to you as a clinician, as an ophthalmologist. And I think it's important that we always keep that loop intact, because simply having a tool is different than being able to apply a tool correctly. As you can see, AI is just in its infancy. This is not the birth of AI, but in terms of AI and ophthalmology, we're talking about the last five years. So this really is the ground floor, and it's something that is going to change very rapidly. Just like a young child growing from infancy, this is gonna be exponential change, every six months to a year. And here we are, offering this to you free, which is, I think, just an amazing benefit of Orbis and Cybersight. What I'm going to do is demonstrate to you in real time submitting a couple of AI consultations, and I just want you to keep in mind, as you look at all of this AI, and as you use the AI consultation feature: it's not going to be perfect. This is a supplemental tool. It is not designed to replace being a physician; it is there to guide you. And I see the potential of AI here to be most beneficial in a couple of areas. One is screening: mass screening of fundus photographs, or other images, to determine who needs to see a physician. And the other group are the people you're already seeing: using AI to help guide your diagnosis, based on the information that's input. And then once you have been helped with your diagnosis, guiding your treatment plan. So things like preferred practice patterns. All right? If you now have a diagnosis of diabetic retinopathy, what are the preferred practice patterns for that? What should you be doing now? And again, always with the understanding that you're the doctor. You need to make the decision, or you need to ask your mentor to help you decipher this information if it's not clear. What are we doing right now? Well, currently we offer AI services in just two categories. One is glaucoma: optic nerve head analysis and glaucoma screening. The other category is diabetic retinopathy, for both adults and children. And so when you are submitting cases, you will see that those are the only areas you're able to submit a case from. I'll share my screen and we'll just go to the home screen and share this. As we look at this home screen: this is the home screen if you have a Cybersight consultation account. If you just have library and course access, you will need to change your account to a full-access consultation account. And when you do, this is what will show up. You'll see a list of ongoing consultations, you'll see a list of cases you can search by category, and you'll see your own cases up here. So you need to be able to see this screen to submit a request. When you want to submit a request, you can either use the red "submit a new request" button, or, down here on the sidebar, you have the ability to go through this. When you open this up, there will be a couple of options. General question doesn't apply to this; just patient cases. We have an AI-only case, where it is not automatically submitted to a teaching mentor. So this would be if you're screening fundus photographs for glaucoma or diabetic retinopathy. Or you can use AI interpretation as part of a case that you are submitting to a mentor. So you have a diabetic patient, and you're submitting it to a mentor for advice.
While you're waiting for that result to come back, you will be getting an AI report, generated almost instantly. When I say instantly, I'm talking less than a minute, perhaps. So let's start with the AI-only case. This is a new feature, by the way. We've offered AI interpretation for maybe more than a year now, but the AI-only feature is just from the last few months. The AI-only case... Again, on the subspecialty dropdown box, you're only going to see these categories: it's retina or it's glaucoma. So there's glaucoma, and then I have the option of choosing files. I'm going to choose a normal one right here, and I'm going to choose one that is not normal. Upload those. So those are uploaded, and I have my preview. Once I have my preview, I submit. It's already in the works. I'll go back to my home screen, and this will appear in my in-progress cases. And while we're waiting for that, let's launch the first poll question here, Lawrence. This is something important for us to know, because here we're talking about AI grading of fundus photographs. So step number one: I would just like to hear from the audience: Do you have access to a fundus camera that can take a decent photograph? Yes or no? Obviously this is important, because if you have a bad image or no image, then the AI is not going to be of much use to you. The old saying: garbage in, garbage out. Most of us have access to a decent fundus camera. And the quality of the picture is important as you submit these things. Here's that AI-only case that we just submitted. Got it opened up here, and I'm just hitting my screen refresh. Here's our report. In the span of about a minute, we have received our AI report. This is an AI-only case, so all I'm receiving is the AI report here. We have image 1 and image 2. And again, as Nicolas showed in that preview, the very first thing you get on the summary is: Was your image verified, and was it gradable? You have to have good images; otherwise the system can't interpret them for you. And it's giving you a summary: disc anomalies and diabetic retinopathy. So I think that's another important point here. We're selecting categories of glaucoma or diabetic retinopathy, but the system is performing both screenings on every image that you submit. All right? So you don't have to be accurate on which of those dropdowns you use. But those are the categories we can use. It will run the analysis on all images, both for DR and for glaucoma. And if you see that everything is green, you don't even need to look at the images. But if there are abnormalities, like we have here... I'm gonna scroll down to the second image, which is the normal image, first. So here we have a normal image, the disc is highlighted, and you will get, and I think this is a really nice feature, a vertical cup-to-disc ratio estimated for you. So here, this one is being estimated at a 0.55 vertical cup-to-disc ratio. And I think that's a nice tool, not just for screening; this is something you can use in managing your glaucoma patients. You know, that's limited information, obviously. But it is one more tool, where you can have a consistent, objective AI cup-to-disc ratio to monitor in your patients over time. All right? Macula: no macular abnormalities were detected. No diabetic retinopathy was detected. No microaneurysms, exudates, or hemorrhages. Okay? So that's a normal report. I'll just go back to that. So why is that yellow? Well, this one is yellow because the cup-to-disc ratio is 0.55.
In this case, we know it's a physiological cup. But because it's greater than what we typically expect to see, this cautionary yellow flag is appearing. Again, you're the physician; you need to take all the information into account. This is simply highlighting something that maybe we should pay attention to, because it's a little bit outside normal limits. The other image that we submitted is one that we know is abnormal, and you'll get that same kind of assessment here. Disc anomalies: the machine is highlighting in purple some areas that it detected as outside of normal limits. And then we have again that vertical cup-to-disc ratio, this time much larger, 0.75. So this is well outside normal limits, and we've got a red flag on that. If you scroll back, you can see the areas of the disc in particular that the AI program is highlighting. All right? So this is a glaucoma suspect at this point, one that would need further evaluation. The macula is normal. No evidence of diabetic retinopathy. You can see that the machine picked up on a couple of microaneurysms, and these are quite small. But when magnified, you can see that there's a microaneurysm right here, and this one is a little more difficult for me to find. So here you go. You can see how even subtle things, things that at a cursory glance I would probably miss clinically, are highlighted and brought to your attention. So really a super tool. And you can just imagine how this could be utilized for glaucoma screening administered by non-ophthalmic personnel, or even a camera kiosk set up in a pharmacy or grocery store. So that's an example of an AI-only glaucoma case. Let's do another one. I'll go to the sidebar this time. I will start it as an AI-only case, but I'll show you how that can convert to a full consult. In this case, we're going to go retina-vitreous. I'll choose my files, and I've got a couple here on my desktop: this diabetic and this diabetic. These are not the same patient. All right? Normally, if we were submitting photos, we would use the same patient. But for example purposes here, I've got two separate photos, even though they're right and left eyes, and they're taken on different camera systems as well. So that's been submitted. Back to my cases, my in-progress cases here. My report... I can see it confirmed that it's submitted and it's pending. Let's launch our second poll question. Our images are there, and we will be waiting for our report to come up. All right. So this is just to follow up on the fundus camera question, because I'm curious as to what's out there. For the 60-some percent of you that have a fundus camera: if you could, in the Q and A box, please go ahead and type in the type of camera, or the brand of camera, or the name of the camera. And go ahead and answer the poll here: if you don't have one, of course, answer that you don't have one; if you have one but don't know the name, go ahead and respond to that, so we just have that information. And again, type in the kind of camera you have in the Q and A. We'll take a look at that. It's not critical for what we're doing here, but I would like to look at it. All right. So we have... I need to move my bar here; it's hiding my refresh. The screen is refreshed, and I have my AI report. All right. So now we can see that the images again are verified and gradable. We put in decent images. We are getting an error on the disc for one side.
And then we're getting a nice normal report on the other disc. And both of these images are showing up as abnormal for diabetic retinopathy, so we're gonna want to take a look at those images. I'll just go to this first image. And showing our image here: you can see the system is highlighting these macular abnormalities. And then we're getting a diagnosis of severe diabetic retinopathy, based on the amount of retinopathy changes and their locations. Here we're highlighting some, again, microaneurysms, and the boxes around those are where the machine has identified the microaneurysms, in addition to the exudates and cotton wool spots. There's highlighting of the exudates, and highlighting of the hemorrhages. Now let's go to our second image. Image one... and then image two. So we've got... Our disc is coming out normal. And this would also pick up neovascularization of the disc, if we had that. We have a nice normal disc. We also have a nice normal cup-to-disc ratio here, being measured as 0.17, so totally within normal limits. But once again, on our macular anomaly score, we're finding changes which are significantly anomalous. Those are being highlighted. And then the machine algorithm goes into the grading, and based on the number and extent of exudates and hemorrhages, we're getting a report of severe non-proliferative diabetic retinopathy. Now, I think one could look at these images and findings and ask: Was this moderate or severe? And that's where the physician's input needs to come in. The machine is highlighting the changes and, based on that information, doing the best it can to grade it. But ultimately, the physician is the authority there. And when we look at correspondence in grading diabetic retinopathy between this program and human graders, when the severity is at the moderate-and-above level, the correspondence with the human graders is about 90%, so that's not bad. And Nicolas can speak more to that if we have questions during the Q and A. Again, the exudates... and the hemorrhages... Here we have just one small dot hemorrhage. All right. So I looked at that and I'm like: Okay. Well, those seem like moderate diabetic retinopathy to me. I'm not sure if it's severe. I'm not sure if I need to treat this patient. Now that I've done that AI analysis, I think I'd like to get an opinion from one of my mentors at Orbis. So at the bottom... let me show you the original images that we used. Here's the original right eye. I'll just open it. If you go to large, I can just zoom in. So there's the original right eye. And here again is the original left eye. You can open them full resolution or smaller, and zoom in. So these are our cotton wool spots, our hard exudates, and a few scattered hemorrhages. No neovascularization was highlighted; that's another part of the report if it's present, and it will show you where it's located. So here you're like: I'm not sure if I need to treat this patient or just watch them. Down at the bottom: resubmit case for human consult. Okay? So I'm clicking on that. It's informing me that I'm now going to submit it for human review. But it's also telling me that I still have access to my report. Okay? So in my case files, I'll still be able to pull this up. I'm like: Yeah, cool. I want some feedback from one of my retina colleagues. Now, the one thing it doesn't do: this now looks just like a blank, empty new case, just as if I was starting a case all over. It doesn't prepopulate anything at this time.
So I'm back to a new patient case. I'll just go back again to retina-vitreous. I'm gonna send this to myself, so it doesn't go out live into the system. And when you submit consults, there's a lot of stuff you can put in here, right? But what you have to put in are the fields with the red asterisk. So we need a case category. We need an age. Pretty basic stuff. I'll just put in male, and put in insulin-dependent diabetic for the past 20 years. Right? You have to have some history to work on. So I'm gonna put that in. All right. This is ophthalmology; we need a visual acuity. So let's put in a visual acuity. And you can put it in any form you want, anywhere from logMAR to 20-foot to decimal. Many of you use decimal, so I'll put in two random decimal acuities. But you see we have other options up here, if you can't do that. So that's really all I have to put in to submit a consult. Other than... Diabetic... My typing is amazing. Diabetic retinopathy. Treatment: None. But... does this warrant treatment? All right. So I have moderate to severe diabetic retinopathy. Do I need to be doing Avastin? Do I need to be doing focal laser? We don't see RP in this case. So: choose files. In this first one, I'm going to go back to our same diabetic images, there and there. So I'm attaching these to the case. Keep in mind that if you want to run AI interpretation, you have to manually select that. In this case, we definitely do, because this is a follow-up to our AI-only consult. I'm gonna click yes and go back to my images. You then have to choose which images you want the AI to run on. Right? So there are my two DR images, and now those are appearing there. Why do I have to do this twice? Well, because some of the other stuff you put in might not be suitable for AI. I don't know if I have any other images here, but let's just say I had accessory images. If this image was an OCT, or a photograph of the patient's chart or something, I don't want AI analysis running on that. So you're only gonna include the ones that are appropriate for AI, as Nicolas outlined when he was showing you those good sample images. And this is what we want: fundus images like this, either centered on the macula or centered on the disc. At that point, you can also submit. I just saved a draft, and here goes our submission. And now that is submitted... And what's happening now? Well, not only are you right now getting another version of that AI report (which we didn't have to run again, but I did), but now we also have a case that's being sent to the mentor. And this case will then come back to you with feedback. So you'll get a retina expert's opinion as to whether or not that was truly moderate or severe, and whether you should be doing anti-VEGF treatment or laser. And we'll kind of close the loop that way. So I think that's really the exciting point where we are right now. I do have one more poll question, which we're going to launch. This is where we are now; this is what we can do right now. But what do you want to see next? All of this is going to be fine-tuned, of course: the glaucoma screening and mapping the contour of the nerve, and maybe some more education in with the diabetic retinopathy grading scale. That's all going to be fine-tuned. But what do you want to see us do next? Now, click on a couple of these. Try to prioritize what you find to be the most exciting. Macular degeneration?
Dry forms or wet forms, and guidance on treatment for ARMD? ERG interpretation: if you have an ERG and you're not comfortable with generating reports, would you like to see that? This is an interesting one: glaucoma optic nerve monitoring over time. I talked about being able to look at a nerve and get a grading. What about being able to scroll through those nerves over time, or having an ongoing record where you can highlight changes over time? Pediatric refraction prescribing: you have a refraction; now what do you give, based on the strabismus or absence of strabismus? ROP screening: I think this is an exciting one. We'd have the ability maybe to screen for plus disease, so a non-physician can take photographs, and patients in outlying areas can be triaged and seen by an ophthalmologist if they have changes. Changes in strabismus motility: you can take a grid and the AI can come up with a diagnosis. And then visual field interpretation. All right. So a little bit all over the board. Yeah. All right. So that'll give us some guidance as to where we go in the future. At this point, I'm going to bring Nicolas back in, and both of us will handle the Q and A session here.

DR JACCARD: Yep, I’m back. Thanks, Dan.

DR NEELY: Nicolas is back. Let me open our Q and A. We'll go through these. Any AI system or tool to address dry eyes? I'm not aware of any, Nicolas. Are you?

DR JACCARD: There is research. I've certainly seen some papers on it, but I've never seen any kind of commercial product that does it. So my answer is: there's probably some research on it, but I'm not aware of any systems you can buy and use. It's certainly something that needs more work as well.

DR NEELY: All right. Is there any AI available other than Pegasus for glaucoma diagnosis?

DR JACCARD: So for context, Pegasus is the name of some of the AI algorithms we're using. And the answer is yes, there are other companies that offer such services. There are two algorithms that are FDA approved, and maybe three or four that are approved for use in the European Union. I'm not going to name names, because I'm sure I'm going to forget some. But there are products you can license; as I showed during the presentation, mostly for DR. There are countless products for DR out there, and they're getting better all the time. The performance for glaucoma detection is nowhere near as good as DR grading, but it's certainly something else that exists, both for fundus photographs and also for disc OCTs.

DR NEELY: So just for clarity, our system is the Pegasus system, formerly from Visulytix, now part of the Orbis family. The Pegasus system we have for you is free, which is always a nice selling point, and you can use it to your heart's content. Next question... We have: Is there any AI available... I'm sorry. Wrong one. When clinicians are critically evaluating AI systems, what are the key questions to achieve this scrutiny? So if we're looking at an AI system that's available to us, what do we need to be looking for?

DR JACCARD: It's very similar to, for example, reading papers about a new drug that comes out. I don't know... to treat AMD. I'm not an ophthalmologist, so I don't know. But let's say a drug to treat AMD or glaucoma. First of all, if it's too good to be true, it probably is. There are many papers, especially papers written by non-medical groups, let's say a machine learning group that uses ophthalmic data as input for their work, where non-ophthalmologists tend to overestimate the actual performance of the algorithms, just because they look at a perfect dataset and say: Oh yeah, we got perfect performance. This is what I mentioned with the Google case. When it was deployed in Thailand, it was terrible, even though when they tested it in the lab, it was outperforming every single ophthalmologist they could get their hands on. So I think: be critical, as in, don't believe the hype too much. Which holds true for any academic paper or any product out there. But also always look at what was used as the baseline for evaluation. Make sure you're happy that what they used as the ground truth, the gold standard against which the AI is compared, is reasonable. It shouldn't be just one random person. Usually you have a panel of expert ophthalmologists, or you use additional tests. For example, with glaucoma: let's say your application is detection of glaucoma on fundus photographs. You may want to use disc OCTs and visual fields and everything you can to ascertain the diagnosis, and then you compare the AI against that. So it's about making sure that the data is right and that the evaluation seems reasonable. But also, as I mentioned in my talk, I would only really trust a system that's been deployed in the wild. For example, as part of a multicenter study, where the company or organization that created the system gave it to various institutions, and they used it independently, and then came to a consensus as to how it performed on the ground with real patients. And I think this is what you are after. There are tens of thousands of papers out there about how an algorithm is better than everyone else at diagnosing DR, for example. But as long as they are not deployed on the ground and used with real patients... It's interesting from an academic standpoint. Let's say you do 1% better than the previous state of the art: that's interesting if you're pushing for machine learning improvements, for example. But when it comes to patient benefits and satisfaction, and so on, only once a system has been deployed and tested on the ground should you trust it.
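To make the "reasonable ground truth" point concrete, here is a hedged sketch of scoring an AI's outputs against the majority vote of an expert panel. The data is invented and tiny; real validation studies are far more involved (pre-registered protocols, adjudication, confidence intervals).

```python
# Sketch: comparing AI grades against a panel's majority-vote ground truth.
from collections import Counter

def majority_vote(grades):
    """Consensus label from a panel of expert graders."""
    return Counter(grades).most_common(1)[0][0]

def sensitivity_specificity(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Invented data: three experts grade five images (1 = referable DR).
panel = [[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1]]
ai = [1, 0, 1, 1, 0]
truth = [majority_vote(g) for g in panel]
sens, spec = sensitivity_specificity(truth, ai)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.67, 0.50
```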

DR NEELY: That's important. A couple of things. One, you have to know what population the machine learning was based on: a very small population versus a diverse population. Certainly that's something that our users can help us with, as we deploy this over time; we should be able to build a diverse collection of real-world photographs that we can analyze. So I think that's a good point: you need to know what it's based on. And you mentioned earlier something functioning in a laboratory setting versus functioning in the real world. I think that's an important point, because so often we find: All right, we've got the perfect image, and now we get these great reports. But when you just start taking the normal images that we get from users, then we find that there's a lot of difficulty. And that's just how this works. You have to have a good image, and the machine learning has to be able to interpret that image. So those are key parameters. There's another question about ophthalmology AI books. Are there any good AI books in ophthalmology? It's such a recent field. Has anything come out yet that you're aware of?

DR JACCARD: I think there's a useful answer somewhere from someone, recommending a couple of general books. When it comes to AI and ophthalmology, as I said, I don't know if there are many books out there. I know there are a couple in press, so there are a couple that will come out soon about the subject. When it comes to AI and machine learning, you have a few resources. We're in the process of updating our artificial intelligence page on Cybersight, and we're planning to put a lot of resources there. So be sure to check out Cybersight in maybe a month's time; there will be many more resources. But AI and ophthalmology, as you said, is probably a bit too specialized, as it is now, to have books that are non-technical and for beginners. Every year, when you have ARVO and other conferences, there are always a couple of tracks about AI, and there are a few papers that came out of those. Again, we'll reference those in a future Cybersight update. They're very good explainers for newcomers when it comes to AI, going over what we just talked about: how to be critical about AI, but also giving you the tools to understand AI and maybe do a bit of AI as well. And you have many, many tutorials and courses out there. There are courses for AI and ophthalmology, but as far as I understand, there are no free ones, so I don't know if I should recommend them, because they tend to be very expensive. This is certainly something we are going to look at in the future as well; if there is interest in it, we can do it. But what I would recommend, if you're interested in machine learning in general: there are a million tutorials out there on how to get started with machine learning. If you go on Coursera, which is a massive online course platform, there is a free machine learning course on there that is very good for getting started with machine learning. I would not start learning machine learning by doing ophthalmology stuff, because you're going to struggle finding the data, and it's not going to be right. I would start with, as I showed, photos of cats and dogs, which you can find everywhere, and datasets that already exist, which you can very easily download and start experimenting with. So again, all of these resources will be on the Cybersight page at some point, when we get time to update it.

DR NEELY: Yeah, I think in terms of the medical practice of AI in ophthalmology, it's going to change so fast that by the time a book was published, it would already be out of date. So I think Nicolas is right. If you want to learn about machine learning and get a basic grasp of the concepts, that's pretty reasonable to get a book for. But to say "I'm gonna learn about AI and ophthalmology from a book": it's gonna be outdated. This is gonna change way faster than that. All right. Next question: Considering how difficult and challenging it can be to make a diagnosis of early glaucoma, to what extent is AI currently useful in this regard? I'll answer this one. From a clinician's standpoint, I would not rely on AI to make your diagnosis of glaucoma. Right? What's this going to do? It's going to be a screening or a monitoring tool. All right? A diagnosis of early glaucoma is an elevated intraocular pressure with changes to the eye; you need those two things. So you can't make that diagnosis with AI. You can highlight disc abnormalities that bring the patient to your attention, but you can't make the diagnosis of glaucoma. So I think that's where the use is at this point: largely as a screening tool for glaucoma. In the next phase, as a monitoring tool for glaucomatous changes to the optic nerve, or monitoring OCT changes over time, et cetera, or visual field changes over time. That's the next evolution of a glaucoma AI package. All right? Next question: OCT images poor at detecting progression in glaucoma and AMD? This is more of a statement. I'm going to defer that one. Let's see. What else do we have here? Can AI replace a technician or an optometrist in the future? No, I don't think so. I don't think AI is gonna replace anything. You know, this is a supplement. There's been such an explosion of information in medicine that we all have information overload. The purpose of AI is, one, to bring patients to our attention, with screening, and two, to make your job easier. But I don't think anyone in the near future expects AI to do anything in place of a technician or an optometrist. You need hands-on care. This is just a supplement, closing the loop.

DR JACCARD: Yes, and certainly at Cybersight and Orbis in general, I think we’re looking at AI, as I said, as a decision support tool rather than something that will replace humans at doing what they do. So we are making sure you have the best possible information when you have to make a decision regarding a patient. This is where AI shines, really.

DR NEELY: Here’s a question. How about AI for cataract staging?

DR JACCARD: Again, I'm not aware of a commercial product, but I've definitely seen papers on that. And I think a dataset has just been released as part of one of the benchmark challenges I was talking about; I believe there is a challenge on cataract detection now. I'm not sure about staging, but certainly detection. The typical trend is that once you start seeing academic papers on something, a few years later you see products and features on platforms such as ours. The answer to any of these questions really comes down to: is there data out there for this particular condition in a sufficient amount? If it's relatively doable to collect, say, 10,000 examples, then I can guarantee you that somewhere, someone is working on an algorithm to solve that problem.

DR NEELY: Right. And here's another question related to glaucoma screening. It says: "I would like to see how it performs in small and large discs, where, if accurate, it would be helpful." So if the optic nerve is larger or smaller than normal, does the machine learning take that into account? Or is it limited to the cup-to-disc ratio at this point?

DR JACCARD: The way we detect an abnormal disc is twofold. As you say, we have the VCDR computation, which is very explicit: it shows you how the ratio was computed, so you can verify or reject it very easily. Then you have the second bit, which is a classification algorithm that is much more black box-y. You get some kind of visualization, but quite often it's not that useful. That black box-y algorithm will, to some extent, take disc size into account because of how it was trained: when you train an algorithm on a large amount of data, it learns implicitly to account for the size of the disc. But it's not trivial, because how do you evaluate disc size? With non-calibrated cameras, we don't know what a pixel corresponds to in absolute measurements, so how do you even start? These types of Deep Learning algorithms will typically find a way, perhaps using the vessels as a reference to estimate some kind of scale. That being said, there is ongoing research, in academia, in commercial organizations, and in what we do, on how to improve this: making it much more granular and explicitly taking into account the size of the disc. So right now it's probably implicit. The model does it somehow, but we can't really verify how. Eventually, we want something we can look at, where the algorithm tells us exactly how it came to its conclusion.
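
To make the explicit VCDR computation above concrete, here is a minimal sketch in Python, assuming binary cup and disc segmentation masks as input; the function and mask format are illustrative assumptions, not Cybersight's actual implementation:

```python
import numpy as np

def vertical_cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Estimate the VCDR from binary segmentation masks (illustrative sketch).

    Both masks are 2D boolean arrays over a fundus photograph, where True
    marks pixels belonging to the optic disc or the optic cup respectively.
    """
    # Vertical extent = number of image rows that the structure spans.
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    if disc_rows.size == 0:
        raise ValueError("no optic disc found in mask")
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = 0 if cup_rows.size == 0 else cup_rows.max() - cup_rows.min() + 1
    return cup_height / disc_height
```

Note that because the VCDR is a ratio of pixel heights, no camera calibration is needed, which is exactly why absolute disc size is the harder problem discussed above.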

DR NEELY: Right. And I think that's one of the near-future evolutions: not only does an AI system give you a report, but there's an educational component that goes along with it, saying: okay, this is what we're reporting, and here's the reason why this report is abnormal, not just highlighting it. So that's a great point and a great evolution that's in the near future. Someone asked in this question: "I see you submitted both a normal and a suspected abnormal image for AI analysis. Is this required? Could the normal image be supplied as a baseline by Cybersight?" No, it's absolutely not required. I simply submitted a normal so that you would see what a normal report looks like. And again, the normal and the glaucoma suspect were not the same patient; I just took two images to show you what the report is like, without having to submit multiple reports for the sake of time during our presentation. So no, you don't need to have a normal in there. All right, scrolling through more questions here. It says: what is the success rate of Cybersight AI when an image is of optimal quality?

DR JACCARD: That will depend on which bit we're looking at: optic disc, DR grading, or just abnormality detection. This is something we want to be much more transparent about in the future. In the update I mentioned, of the artificial intelligence page of Cybersight, we want to be transparent and provide figures. Generally speaking, the grading performance for DR is more or less similar to that of an expert ophthalmologist or grader. That being said, don't just take my word for it, and don't trust people who claim their systems exceed human performance; in a very controlled environment, this is what we see. For glaucoma it's much more difficult to ascertain, because of the variability between experts. But we are typically within the range of experts: when we ask multiple experts, Cybersight AI usually falls somewhere in between them. So it's within reason, but again, it's very difficult to say exactly how we're doing in comparison, because there's so much variability. We are pretty good at abnormality detection, for example macular abnormalities. As soon as something deviates from what we expect a normal macula to look like, the system is really good at highlighting it, even from very subtle cues: a macular hole, for instance, even a tiny one, will tend to be picked up by the AI. So all in all, we are pretty much similar to what human experts would do. Though again, take that with a grain of salt. It varies greatly with image quality, and performance decreases quickly as quality drops.
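
Claims like "similar to an expert grader" are typically quantified with an agreement statistic; for DR grading a common choice is quadratic-weighted kappa between the model and human graders, benchmarked against the kappa between the graders themselves. A minimal sketch with made-up grades (the numbers are purely illustrative, not Cybersight figures):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical DR grades (0 = no DR ... 4 = proliferative) for ten images.
model_grades = [0, 1, 2, 2, 3, 0, 4, 1, 2, 0]
grader_a     = [0, 1, 2, 3, 3, 0, 4, 1, 1, 0]
grader_b     = [0, 2, 2, 2, 3, 1, 4, 1, 2, 0]

# Quadratic weighting penalizes large disagreements more than near-misses.
print(cohen_kappa_score(model_grades, grader_a, weights="quadratic"))
# Inter-expert agreement gives the baseline the model should sit within.
print(cohen_kappa_score(grader_a, grader_b, weights="quadratic"))
```

If the model-versus-grader kappa falls within the range of the grader-versus-grader kappas, the model is "within the range of experts" in the sense described above.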

DR NEELY: That's our next question, actually: how much does image quality affect the reliability of the results? Well, it affects it greatly. If you put in a bad picture, you're going to get bad results; that's pretty much automatic. If you give me a bad patient history, I'm probably going to give you bad feedback, a bad diagnosis, and bad information as a teacher, and this is no different. You just have to have as good a quality of images as you can. And here's what's important: if you put in a bad image, we're not going to send you a report with bad information. That bad image is going to be flagged, and you're going to be told: this is a bad image, and this information is not reliable to our level of satisfaction. So keep that in mind, and don't over-interpret. We'll give you the report, but we're going to tell you it's not a reliable report, because the image is not sufficient, and you can resubmit if you have the ability to get a better image. So it definitely affects things; it has to. All right, this one is about artifact: humans need two images to identify a camera artifact; can AI identify artifacts with only one image? So it will identify the finding, right? But it's not going to tell you it's an artifact. Correct, Nicolas?

DR JACCARD: Yes. Some artifacts are quite obvious. Take lighting artifacts, where a region of your image is oversaturated: those are very easily identified. The edge cases come with things like microaneurysm detection. Let's say you have dust on your lens. The only way for a human to make sure it's dust and not a microaneurysm is to look at two images from two different patients: if the spot is in the exact same location in both, you say, yes, this must be dust rather than a microaneurysm. We have some clever ways around it, but the answer is that you cannot be 100% resilient to it. Dust can in some cases be very, very similar in visual features to a microaneurysm, and even though we have a system in place to guard against this, if it looks that similar at the pixel level, there's almost nothing you can do except use two images. Which we can do, but not in the context of Cybersight Consult.
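
The two-image dust check lends itself to a simple sketch: if a lesion candidate sits at (nearly) the same pixel coordinates in images of two different patients, dust on the lens is the likelier explanation. A minimal illustration, where the candidate representation and threshold are assumptions rather than Cybersight's actual pipeline:

```python
import numpy as np

def likely_dust(candidates_a, candidates_b, tolerance_px: float = 5.0):
    """Flag lesion candidates recurring at the same spot in two images
    of *different* patients, a strong hint of dust rather than pathology.

    candidates_a, candidates_b: lists of (x, y) centroids of detected
    microaneurysm candidates in each image.
    """
    if not candidates_a or not candidates_b:
        return []
    other = np.asarray(candidates_b, dtype=float)
    flagged = []
    for point in candidates_a:
        # Distance from this candidate to every candidate in the other image.
        dists = np.linalg.norm(other - np.asarray(point, dtype=float), axis=1)
        if dists.min() < tolerance_px:
            flagged.append(tuple(point))  # same spot in both images: probably dust
    return flagged
```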

DR NEELY: Right. And of course, again, this is the ground floor of this future capability. You're going to see the machine learning get refined: the ability to accept a wider variety of images, or to filter out things like artifact. That's going to come with time. Here's another question: can AI help in the differential diagnosis of retinal and optic nerve pathologies? Well, I think this is what I would like to see. Right now we have a great screening tool. As a clinician, one of my goals for future development is this: okay, we've screened, we've found this anomaly, and we've given you some baseline grading information. Now, Mr. AI, how about giving me a list of differential diagnoses I need to consider? And then, once I've picked a diagnosis based on those suggestions, what is the best way for me to treat it? If any of you use the Wills Eye Manual, that's exactly what it does: you input a finding, it suggests possible diagnoses and ways to rule each of them in or out, and once you've narrowed it down, you get a treatment algorithm. That's where I see us going in the future: assisting your care of the patient once the diagnosis is made. I think that's going to be terribly exciting. All right. You touched on this, Nicolas: who can use this service, and how can I get access? So we have different kinds of accounts.

DR JACCARD: Yep. First of all, you need a Cybersight account. When you sign up, you have the choice between different account types. The base level is basically just access to the learning material, which doesn't require any further validation steps; I think the account is created there and then. Then you can choose online courses plus consultation, which gives you access to Cybersight Consult, the telemedicine platform, and that requires an approval process. For various reasons, we need to check that you are a clinician or health care professional, or someone with the background to make the most of that information and to use it in a way that will not be detrimental. So if you don't have a Cybersight account yet, create one and take the option that includes access to the Consult application. If you already have a Cybersight account that is only for learning materials, but you also want access to Consult and the AI functionality, just contact support, which I think is [email protected], and we'll walk you through the different steps.

DR NEELY: Right. So we can help you get converted to a Consult user. Now, keep in mind that the consultation service Orbis provides is restricted to certain countries. The purpose of the Consult service is to assist physicians without mentors in low- to middle-income countries. So if you're in the United States, you have access to all the learning materials, but you don't have access to the Consult system. All right, here's a question from the Philippines: will it be possible to use the AI-only feature by uploading sample photographs from the internet, so that I can practice? That's kind of a mixed bag. That doesn't usually work, does it, Nicolas?

DR JACCARD: Well, it can work if you find images of high enough resolution. For example, there is this one image that everyone uses: the Wikipedia article on the fundus, literally, has a fundus photograph of a normal eye, and then I think a glaucomatous eye or a DR eye. Everyone uses those two images when they test the AI system, and they are very high resolution and good quality. The system is mostly designed for you to upload your own data, so that you get a report that informs your decisions about your own patients. But because there's no mentor involved in the loop by default, I think it's acceptable to upload example data if you want to get used to the system. That said, to avoid overloading the system with data that isn't real patient data, you can also go to cybersight.org, and I think under Consult and Artificial Intelligence it will show you how to use the system, at least for patient cases, and how to interpret the AI report as well. So if it's one or two images, go for it and get a feel for the system. But please don't upload a thousand random images.

DR NEELY: Yeah, don’t crash our system. But feel free — I think it’s perfectly reasonable to play around with it, and do a couple samples, so you can see how it works. Depending on the images, it may or may not work. Depending on your resolution, et cetera. So…

DR JACCARD: Just to clarify, please don't do this on a Consult patient case. Do it only on AI-only cases.

DR NEELY: That’s a good point, yeah. We don’t want it going to one of our mentors, and sending them distracting things. All right. Next question. Oh, and I just undid it. Okay. The question was: Is your AI system able to do OCT images or follow OCT images over time?

DR JACCARD: So we have OCT capabilities. We can work with a single slice, but typically it will be a whole OCT cube, macula-centered rather than disc-centered. We can detect things like AMD, wet and dry, and we can quantify the thickness of retinal layers. The caveat is that this is not available on Consult yet, simply because Consult is really, as Dan mentioned, for low- to middle-income countries, where OCT is not as prevalent. So we want to focus on fundus photography for now and get the system working as well as possible with 2D images before we roll out the full thing with OCT, because that adds a lot of complexity. But the answer is yes, it is possible, and there is a lot of work going on with OCT in addition to fundus. For now, though, we're only supporting fundus photography.
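
For the layer-thickness quantification mentioned above, a common approach is to segment two layer boundaries in each B-scan and take their per-A-scan difference. A minimal sketch, where the array layout and axial resolution value are assumptions for illustration:

```python
import numpy as np

def layer_thickness_map(top_boundary, bottom_boundary, axial_res_um=3.9):
    """Thickness map (micrometres) between two segmented OCT layer boundaries.

    top_boundary, bottom_boundary: arrays of shape (n_bscans, n_ascans)
    holding the pixel row of each boundary in every A-scan of the cube.
    axial_res_um: micrometres per pixel along the depth axis (device-specific;
    the default here is illustrative).
    """
    thickness_px = np.asarray(bottom_boundary) - np.asarray(top_boundary)
    # E.g. ILM-to-RPE boundaries would give a total retinal thickness map.
    return thickness_px * axial_res_um
```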

DR NEELY: Right. So that's to be determined, but obviously we all have that goal in mind, and we all think it would be useful. So yes, expect changes constantly, and that will be one of them, I'm sure. Here's another question/comment: "I think that combining smartphone fundoscopy with Cybersight AI would be a game changer for ophthalmology, and a cheap one too," along with a thank-you for helping out. Well, we totally agree with you, and trust me, we realize that's what needs to happen. You see it in the fundus camera poll questions: only about half of us have access to a good fundus camera. What we really need, and there are several variations out there, is the ability to take an easy, fast, accurate smartphone photograph, either of the fundus or at the slit lamp. Once we get a product that's really good, consistent, and accessible to all of us, that is going to be a game changer for ophthalmology. Right now, I think that's the major hurdle for a fair amount of telemedicine in ophthalmology: we need something with a smartphone to image the eye. We're getting there, but I would say a lot of it is not quite ready for prime time yet. It's coming. Let's see: how do you think this will shape ophthalmic training for the next ten years? That's an interesting question. You see it already with the simulation tools that are out there. Nicolas, do you have any thoughts on how it's going to shape training?

DR JACCARD: That's something we are working on and thinking very hard about. As I mentioned in my slides, diagnostics and prediction is one area where AI is useful, and it's the low-hanging fruit, which is why it's probably the first thing people think about, productize, and sell. But I do think there's a lot of potential for AI in mentoring. As we mentioned, if you have an algorithm that's granular enough to show you exactly why it came to a conclusion (going back to disc size in glaucoma detection, for example), one that tells you step by step how it reached its conclusion rather than just yes or no, that could be a very powerful training tool. You could imagine anyone, in their spare time, uploading a bunch of images and learning from them. And even that is still the low-hanging fruit. There's much more that can be done, from personalized courses that adapt to your needs and experience, to creating course material on the fly based on exactly what you need to learn. There's also a lot of AI work these days on image generation. Say you want to see what progression from moderate to severe to proliferative DR looks like: you'll have a hard time, because finding those images in the wild is very difficult. But I'm sure there will be AI algorithms that let you generate them very easily, in a manner that is very realistic. So there's a long way to go. I think we haven't even started to explore what can be done for training, but I'm sure it will be a huge thing in the future, not just diagnostics and clinical decision support.

DR NEELY: Right. And I think a lot of the AI training is going to be clinically oriented. Say you're submitting a consult on Purtscher's retinopathy: the AI system recognizes that you typed the word "Purtscher's" and brings up a nice summary review article on Purtscher's retinopathy. There are lots of ways to link educational material into the medical diagnosis process, and I think that's what AI is going to do: pull up educational material and present it to you based on what you're currently doing. This next one is a general background question about how AI was developed. Nicolas, can you give a short summary of when this started, and how long it took to come up with a viable system?

DR JACCARD: Very briefly, and skipping a lot of important steps: machine learning has been around for a very long time. I mentioned Deep Learning and the convolutional neural networks used in modern computer vision and AI systems; these artificial neural networks were described back in the '80s and '90s. It just turns out that you need a lot of data and computational power to make good use of them, and we didn't have that back then. That's why machine learning progressed only slowly until about 2010: there was a lot of progress, but it was slow, chipping maybe 1% off the benchmarks every year. Then Deep Learning and convolutional neural networks came around in 2012, and that's when everything changed. Even though it's not true AI, it's a good approximation of it. All the key landmark moments I described in my slides trace back to that point in time, in 2012, when Deep Learning was really introduced to the world. Nowadays everything uses Deep Learning, every single AI system, and it's part of our lives: when you use Google Assistant or Siri, or do a Google search, it's all Deep Learning-based. So it's been a huge change. If we do get true AI at some point, 2012 will probably be the date in the history books as the day it all started, for better or worse. I hope the outcome with AI will be good in the long term. But that will probably be when it all started, yeah.

DR NEELY: Right. All right. I’ve scrolled through to the end of the questions. And I think at this point, we’ve answered most of people’s questions directly. Or at least something related to it. So I may start to bring us to a close here. I will end on this last question, though. Can we depend fully on this? Or do we need to recheck? Well… Nicolas, do we just take it at its word?

DR JACCARD: I would say no, and this is why, for example, to get access to it, you need some kind of clinical background. Because we want to make sure that you have the capability to be critical about it. No AI system is perfect. Certainly ours isn’t. So there’s always a chance of false positives, false negatives. So having this as an aid — this is not autonomous decision making.

DR NEELY: Right. You're still the doctor. You still have to take the information and make a decision. This is another tool, and like any tool, it's not going to be perfect. You have to take it for what it is and, ultimately, weigh all the information, along with the fact that the patient is there with you, and make your decision. Okay? And when you're not sure, that's where the mentorship side comes in: the ability to pass the case along to someone with more experience. So we do want everyone to use this feature. We want you to have realistic expectations. We want you to take good fundus images, so that we can help you as much as possible. And we want you to stay tuned, because this is going to change month by month. Don't be frustrated that this is all we're offering now, because it's going to keep changing, and it's going to be part of what we do. So, Nicolas, thank you for your great presentation at the beginning, and for all the hard work you're doing for Orbis and Cybersight. This is going to be an amazing adventure. I appreciate it.

DR JACCARD: Thank you.

DR NEELY: With that, we’ll close out our webinar and I thank everyone for their attendance today.
