Lecturer: Dr. G Chandra Sekhar, MD, FRCS
DR SEKHAR: Hi, everyone. Good day to all of you, wherever you are. I would like to start by thanking Cybersight for giving me this opportunity to interact with all of you on what I consider an important topic: evidence-based medicine. I have been working with the L V Prasad Eye Institute for the past 37 years, and it’s a pleasure to interact with you. So by way of interaction, the question I would like to ask is what your background is — whether you’re an ophthalmologist, an ophthalmologist in training, or an allied professional. If you can click on the answer, then we’ll know who you are. Okay. I see that 62% are ophthalmologists, and some are ophthalmic technicians. One medical student, one ophthalmologist in training. Thank you. So the outline for this talk is as follows. I’ll talk about the need for evidence-based medicine and its definition. We’ll talk about the fallacies in published literature, the hierarchy of evidence and its relevance to clinical practice, the importance of looking at the data, and how to evaluate it and look for hidden information. We need to know the concepts in statistics; the mathematics is optional. I’ll touch upon three different concepts: confidence intervals, clinical versus statistical significance, and relative risk. We’ll get started with a question for you ophthalmologists and ophthalmologists in training. I’m putting four discs here for you, and I’ll give you the clue that only one of them is glaucoma. So your job is to pick the one that is glaucoma. The options are A, B, C, D. The relevance of this to the topics we’re covering will become clear towards the end of the talk. Okay. I see the majority have chosen either B or C — 38% for B and 50% for C. We will see how that pans out. So starting with the need for evidence-based medicine and its definition: around 1987, a health policy article, “The quality of medical evidence: implications for quality of care,” was published.
The authors touched predominantly upon systemic conditions — angioplasty versus bypass surgery, screening for colorectal cancer, and screening for breast cancer. They looked for evidence to support those practices, and concluded that there is not good-quality evidence behind much of the care being funded in the profession. And in passing, they also talk about ocular hypertension and glaucoma and say there is virtually no usable evidence about the effectiveness of medical treatment for glaucoma. That was a blow for all glaucoma specialists at the time, and all the randomized controlled trials we now keep talking about in glaucoma management came up subsequently. I think the need for randomized controlled trials was driven home very hard by a publication as old as 1959. All of us realize now that coronary artery bypass or stenting is what is required for ischemic heart disease, but that was not always the standard of practice. It used to be ligating the internal mammary artery through a thoracotomy, and both patients and surgeons felt better after that surgery. It took a controlled trial, in which patients and assessors were blinded — the thoracotomy was done in everyone, and the artery was either ligated or left alone — to show that there was no significant difference between the two groups. So establishing some of our practices requires evidence-based medicine. Based on those concepts, the definition given at that time for evidence-based medicine was that it was going to be a shifting paradigm: intuition and unsystematic clinical experience are insufficient grounds for clinical decision making, and they also explicitly said there would be a lower value placed on authority. The evidence being gathered was graded as level 1, 2, or 3 — level 1 being the highest, systematic reviews; level 2 being randomized controlled trials and good-quality cohort or case-control studies; and level 3 being expert opinion.
I think that is wrong. There is a serious danger of viewing statistics as hard realities applicable to a given patient, and individual clinical experience, which is actually crucial, gets missed out. This article by an obstetrician — he’s talking about level 4 evidence — says: “I have come to appreciate that the influence of a randomized, controlled trial, no matter how well conducted or generalizable, pales in comparison with that of the audible bleeding of a profound postpartum hemorrhage.” So the clinical clues you get with patients like that are, I think, sometimes more valuable than the randomized controlled trials themselves. So then they went on to say that evidence alone is never sufficient. We need to weigh the risks, benefits, and inconveniences, and when possible we have to take the patient’s values into consideration. The hierarchy of evidence, which I will talk about subsequently, is the other thing we need to consider. Keeping all of that in mind, the current definition of evidence-based medicine is probably this: the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence. So that’s what the current definition of evidence-based medicine would be. To talk about certain fallacies in published literature: there is a revealing article published in 1994. Some, perhaps most, of the published articles belong in the bin, and certainly should not be used to inform practice. Why do they make such a drastic statement? They explain. What should we think about a doctor who uses the wrong treatment, either willfully or through ignorance, or uses the right treatment wrongly, such as giving the wrong dose of a drug? Most people would agree that such behavior is unprofessional, arguably unethical, and certainly not acceptable. How does that translate to the publications that we have?
What should we think about researchers who use the wrong techniques, either willfully or in ignorance; use the right techniques wrongly; misinterpret the results; report their results selectively; cite the literature selectively; and draw unsupported conclusions? We should be appalled. Both general and specialist journals have shown that this is not uncommon. This should be a scandal. Closer to home — if you consider that the Journal of Ophthalmology and Ophthalmology are reputed journals — we looked at several articles from these journals and at how many of them conformed to the reporting guidelines. The proportion of the guideline items actually followed in their randomized controlled trial publications was only close to 50%. And, similar to other specialties, the authors said that room for improvement exists in the reporting of key methodological items of RCTs. That’s why, while RCTs are the best instruments we have, they need to be done well. As medicine leans increasingly on mathematics, no clinician can afford to leave the statistical aspects of a paper to the statisticians. We need to be in the driving seat and understand what the statistics actually mean, and I think that is the basis for this series. Shifting gears and talking briefly about the hierarchy of evidence and its relevance: the pyramid here represents probably the best hierarchy of evidence, the highest level being computerized decision support systems. If we have EMR systems, and we’re diagnosing a condition like glaucoma, and there are standardized, evidence-based guidelines for glaucoma management, then the computer can tell us that for this patient, this is the kind of treatment we need to follow. That’s the highest level of evidence, which is not yet available in general ophthalmology. Below that, we have evidence-based journals and Cochrane reviews, of which we have some, and then individual published articles.
Among those published articles, we have systematic reviews of RCTs, single RCTs and — depending on the type of RCT — observational studies, and case reports. And this is the sliding scale of evidence that one could have. So there are some caveats about evidence-based medicine: the methodological limitations of RCTs, the execution limitations of RCTs, and the research-versus-clinical aspect of medicine. One very important thing said about RCTs is this: the paradox of clinical trials is that RCTs are the best way to assess whether a given intervention works. If you’re evaluating a therapeutic modality, an RCT is the best way to decide between treatments A and B. But arguably it is the worst way to assess which patient benefits from it. So the question an RCT answers is: does it work for most patients? And what we as clinicians sitting in front of a patient need to ask is: does the result of that RCT apply to the patient in front of me? Expanding further, this is an article from the American Journal of Ophthalmology. If the word homogeneity describes the randomization in a clinical trial — homogeneity between the treatment and control groups is fundamental for a clinical trial — then the word heterogeneity describes the population seen in clinical practice. So there is a basic dichotomy between the clinical trial and clinical practice. In a perfect world, every clinician would practice only evidence-based medicine, but most real-world medicine is practiced in areas not covered by clinical trials or meta-analyses. That’s a limitation we have to live with. This is a powerful, loaded slide, taken from an article in the Lancet on personal significance in medical practice. The author talks about statistical significance, clinical significance, and finally personal significance.
One of the important concepts the author talks about is how doctors actually internalize the evidence we get and use it; it’s important to spend a minute here. The author argues that doctors conduct an inner consultation with the biomedical evidence before deciding how to apply it. The doctor’s “organizer” responds in an analytical and logical way, but each of us also has what the author calls the doctor’s “responder,” which acts in a more intuitive manner. The responder is sensitive to internal messages, led by the doctor’s feelings and emotions, and this will affect the interpretation of information in a way that recognizes context, experience, apprehensions, failures, and so on. So evidence is not followed verbatim; we color it with our biases to some extent. With that background, if you want to look at how we publish or practice evidence-based medicine, I will touch on two examples. This is an article published in Ophthalmology in 2001, titled “Diode laser transscleral cyclophotocoagulation as a primary surgical treatment for primary open-angle glaucoma.” The conclusion in the abstract says the treatment, as used in this study, is free from serious complications, though a new complication of atonic pupil is reported, and that it is a rapid and easy-to-learn surgical procedure for primary open-angle glaucoma. So let us actually look at the data in that article. The reported success rate was defined as a 20% IOP reduction, along with medications — note that the definition allows continued medical treatment. Yet the IOP actually increased from baseline in about 33%. If you look at the scatterplot they have provided, this line down the middle represents no change with treatment — no effect of treatment. All the dots below that line are the cases where the intraocular pressure actually reduced, from perhaps 40 to 20.
And these dots indicate eyes where the pressure actually increased — from a preoperative level of around 20 or 22 to more than 50, for that particular dot. So there is a significant increase. One out of 19 — about 5% — of the patients with vision better than 20/60 preoperatively actually had decreased vision. And the authors went on to describe a new complication, called atonic pupil, in 30% of the patients. How, then, can results like that justify a conclusion that the treatment is free from serious complications, and actually effective? I think that’s the lesson we need to learn: look at the data, rather than going by the conclusion given in the abstract. Now, just to talk about the three randomized controlled trials that are quoted verbatim in almost all meetings and conferences — the normal tension glaucoma study, the ocular hypertension treatment study, and EMGT. Here is the rate of progression without treatment, and here is the rate of progression with treatment. There is a significant decrease in the rate of progression of disease with treatment, and that’s why we all want to treat our patients with glaucoma. But if you flip the coin and ask how many patients without treatment actually did not progress over the duration of the randomized controlled trial: 65% in normal tension glaucoma, 90% in ocular hypertension, and 95% in EMGT did not progress despite having no treatment. What is the biology behind that? How do we identify those patients who do not get worse in the natural course of the disease? That is something we need to think about. Here is a publication looking at the impact of glaucoma on visual function in Indians. This scatterplot again — I’m very fond of scatterplots, because they give all the information that the authors may not want to talk about. In this scatterplot, what we have done is look at the mean deviation of the better eye.
Here you have normal vision, where the mean deviation is close to zero, and at this end of the spectrum you have severe disease, where the mean deviation is close to −30. The activity limitation is very marked at this end of the scale, and at the other end there is no limitation of activity whatsoever. So that’s the data you’re looking at. If you look at this particular group of patients, whose disease ranges from normal vision to a lot of damage in the better eye, their limitation of normal activity is almost zero. If you look at this group of patients, where the vision is near normal, the activity limitation varies from none to high. It is actually only in this group of patients that the activity limitation and the severity are correlated. But put those dots into a computer and do a so-called correlation analysis, and you will come up with a correlation coefficient of 0.40 and a p-value of 0.001. So this is about relating the data to the individual patient, rather than the global summary and p-value that the authors have given. Now we’ll shift gears and talk about some of the concepts we need to understand as far as statistics is concerned. We don’t need to learn the mathematics ourselves, but we need to understand the concepts behind it. So the first concept I’m going to talk about is confidence intervals. This is the question that was given to you: the rate of serious complications for a new surgical procedure is 3.3% (1 out of 30), compared to the standard of care, where it is 13.3% (4 out of 30). You have your options on the screen; you need to decide which procedure you will adopt, and why. Okay. One third say: I will adopt the new procedure, but will consider the cost. One third say: maybe the competence of the surgeons is not the same. Three of those who answered feel that the differences are not statistically significant. And two of those who answered say: I will adopt the new procedure, as complications are fewer. Okay.
So we’ll see how this pans out once I talk about confidence intervals. The whole concept of statistics is about measuring uncertainty. One thing we need to realize is that we can never be absolutely certain about any statement we make. What is the level of uncertainty in what we are saying, and can we quantify that uncertainty using statistical methods? That is the fundamental question. And based on how much uncertainty there is, and on the question being asked about the clinical utility of the procedure or treatment, we need to decide whether we can live with that uncertainty and use the procedure in clinical practice or not. From that perspective, we need to understand confidence intervals. As an example, suppose I say that the mean intraocular pressure in the Japanese population is lower than the mean intraocular pressure in the Caucasian population — a finding that has been published. How many Japanese and how many Caucasians do we need to study to establish that the mean intraocular pressure is actually different between the two populations? The other, more common way of looking at confidence intervals would be the success of a new surgical procedure: what is the proportion of success, and what is the variability around it? Typically, in statistical terms, we keep talking about 95% confidence intervals, and most computers can do that calculation for us nowadays. But what we need to do is understand their importance and look at the values. So if you’re looking at the mean of a given sample, the mean is the central value. Most biological parameters, in a large sample, fall into what is called a bell-shaped curve. Once they fall into the bell-shaped curve, we get the mathematical mean, with the standard deviation on either side.
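[A minimal Python sketch of this bell-curve coverage idea, using the IOP example from the talk — a mean of 16 mmHg and a standard deviation of 2.5 mmHg; the normal model here is an illustrative assumption, not patient data:]

```python
from statistics import NormalDist

# Hypothetical normally distributed IOP: mean 16 mmHg, SD 2.5 mmHg
iop = NormalDist(mu=16, sigma=2.5)

# Fraction of the population within mean +/- 1 SD and mean +/- 2 SD
within_1sd = iop.cdf(16 + 2.5) - iop.cdf(16 - 2.5)
within_2sd = iop.cdf(16 + 5.0) - iop.cdf(16 - 5.0)

print(f"within 1 SD (13.5 to 18.5 mmHg): {within_1sd:.1%}")  # about 68%
print(f"within 2 SD (11 to 21 mmHg):     {within_2sd:.1%}")  # about 95%
```

This is just the standard 68–95 coverage rule of the normal distribution, computed rather than memorized.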
What that means is this: if you have a mean of 16 and a standard deviation of 2.5, then adding two times the standard deviation, 16 plus 2 times 2.5, comes to 21, and 16 minus that comes to 11. About 95% of the population will have an intraocular pressure between 11 and 21 — the mean plus or minus two standard deviations — and about 68% will fall within one standard deviation of the mean, between 13.5 and 18.5. That’s a concept we need to understand. Now consider how we extrapolate data in a given situation. Suppose I say I have a new surgical procedure, the complication rate is zero, and I have tried the procedure on 10 patients. The percentage of complications is zero out of 10 — 0%. But if you look at the confidence intervals, the complication rate could be as high as 26%. If you increase the sample size to 25 and still have no complications, the complication rate could still be as high as 11%. If you increase your sample size to 100 with zero complications, the complication rate could still be as high as 3%. So that’s what we need to understand. And if the true complication rate is 1 in n, you need a sample size of about three times n to be reasonably confident of encountering at least one complication; so if the true rate is 1 in 100, you need about 300 people to expect to see that one complication. Flipping the coin, if you want to talk about success rates with the same denominator: I have done a new procedure, and its success rate is 100% in those 10 patients. If you calculate the confidence intervals, the success rate could actually be as low as 74%. In the real world, the success rate could be as low as 74%, or the complication rate as high as 26%, because the sample size is only 10. If your sample size keeps increasing, on the other hand, the confidence that you are close to your 100% success rate keeps increasing; at a sample size of 100, an observed success rate of 100% could still be as low as 97% — close to 100%.
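[The upper bounds quoted here — 26%, 11%, and 3% — correspond to the exact one-sided 95% confidence bound for zero observed events, which the familiar “rule of three” (3/n) approximates. A short illustrative sketch:]

```python
# One-sided 95% upper confidence bound on the true complication rate
# when 0 complications are observed in n patients:
# solve (1 - p)^n = 0.05  =>  p = 1 - 0.05**(1/n)
def upper_bound_zero_events(n: int) -> float:
    return 1 - 0.05 ** (1 / n)

for n in (10, 25, 100):
    ub = upper_bound_zero_events(n)
    print(f"0/{n} complications: true rate could be up to {ub:.1%} "
          f"(rule of three: {3 / n:.0%}); success could be as low as {1 - ub:.1%}")
```

The same bound, read the other way, gives the 74% and 97% lower limits on an observed 100% success rate.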
So that’s what we need to understand as far as the success or complications of a given procedure are concerned. Essentially what I’m saying is: if you have a proportion, 0/10 or 10/10, you need to calculate the confidence intervals, and the smaller the sample size, the larger the confidence intervals around that estimate. Flipping the coin now: if the complication rate is 1 in 10, the percentage of complications is 10%, but the 95% confidence interval could run from 1.7% to 40%. If the complication rate is 1 in 15, the percentage of complications is 6.6%, but the confidence interval could run from about 1% to 30%. Going down that scale, in the last row the proportion is 1 out of 30, and the reported complication rate is 3.3%; but the true complication rate could be as low as 0.5% or as high as about 15%. So what was the question we had? The rate of serious complications for a new surgical procedure is 3.3%, 1 out of 30, compared to 13.3%, 4 out of 30. Our sample size is 30, and the confidence intervals around the proportions 1/30 and 4/30 overlap heavily, so they are actually not significantly different. So the answer to the question would be that complication rates of 3.3% versus 13.3% are not different if your sample size is only 30. That’s what we need to understand. We’ll go on and talk about the second concept: clinical versus statistical significance. And this is the question I will ask. The final IOP with medication A is statistically significantly lower — p-value equal to 0.001 — than that with medication B. Which of the following will you agree with? I will use the new medication. I will use the new drops in my practice but consider the increased cost. I need to worry about the side effects. I need more information on the amount of pressure reduction. Okay. We are making progress.
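[One standard way to put such intervals around an observed proportion is the Wilson score interval; the exact figures quoted in the talk may come from a slightly different method, so treat this sketch as illustrative:]

```python
from math import sqrt

def wilson_ci(events: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion events/n."""
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

new_proc = wilson_ci(1, 30)   # observed 3.3% complications
standard = wilson_ci(4, 30)   # observed 13.3% complications
print(f"1/30: {new_proc[0]:.1%} to {new_proc[1]:.1%}")
print(f"4/30: {standard[0]:.1%} to {standard[1]:.1%}")
# The two intervals overlap heavily, so a sample of 30 per group
# cannot distinguish 3.3% from 13.3%.
```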
The majority are saying: I need more information on the amount of pressure reduction. Good. So let’s understand this concept now. As an example, the study hypothesis is: drug A lowers the intraocular pressure more than drug B, which is the standard of care, timolol. When you’re doing a study like this, the null hypothesis obviously would be that the IOP reduction produced by drug A and drug B is the same. You go on collecting data, and if your data shows that drug A is actually reducing the pressure more than drug B, you will reject the null hypothesis, with whatever confidence you can get, and accept the alternate hypothesis — the study hypothesis. So in experiment one, the final IOP with drug A is 14 millimeters of mercury, and that with drug B is 17 millimeters of mercury. What does that mean? Because p is 0.01, there is a 1 in 100 chance that the 3 millimeter greater reduction by A is due to chance, so we will accept that difference. In experiment two, the final IOP with drug A is again 14 millimeters and with B is 17 millimeters, but the p-value is 0.1. That means there is a one in ten chance that the greater reduction is by chance alone, so we may not accept it. Now here is a tricky situation. What it says is: the final IOP with drug A is statistically significantly lower — p equals 0.001 — than that with drug B. What I am not giving you here is the amount of pressure reduction. The final IOP with drug A is 14.5, and that with drug B is 15.25. So we need to see the quantum of pressure reduction. What it essentially means is that there is a one in 1,000 chance that the 0.75 millimeter greater reduction by A compared to B is by chance. So we are very, very confident that the pressure reduction by A is more than by B — but we don’t know by how much, if we look only at the p-value.
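[To see how a clinically trivial 0.75 mmHg difference can still reach a very small p-value, here is a sketch of a two-sample z-test; the per-group sample sizes and the pooled SD of 2.5 mmHg are my assumptions for illustration, not figures from any study:]

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(diff: float, sd: float, n_per_group: int) -> float:
    """Two-sided p-value for a difference in means (equal-variance z-test)."""
    se = sd * sqrt(2 / n_per_group)       # standard error of the difference
    z = diff / se
    return 2 * (1 - NormalDist().cdf(z))  # two-sided tail probability

# A 0.75 mmHg difference (14.5 vs 15.25) with an assumed SD of 2.5 mmHg:
print(f"n =  50 per arm: p = {two_sample_p(0.75, 2.5, 50):.3f}")
print(f"n = 300 per arm: p = {two_sample_p(0.75, 2.5, 300):.4f}")
# With enough patients, even a tiny difference becomes "highly
# significant" -- the p-value says nothing about the magnitude.
```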
You actually need to look at the magnitude of the pressure reduction, see whether that magnitude is clinically useful and worth the increased cost, and only then ask whether it could have occurred by chance alone. So the fundamental thing we need to learn is that we should not look at the p-value without looking at the magnitude of the treatment effect. How clinically important is a reduction of 3 millimeters, or of 0.75 millimeters? How much money or value can you put on that? How many side effects can you accept with it? That is what you need to decide first — the clinical significance. Only then do you ask: is the result occurring by chance, or would it hold if I extrapolated to a larger population? So these are two different things: one, the magnitude of the clinical effect — whether it is clinically significant or not — and two, whether that difference is occurring by chance or is real. To rephrase it: the p-value measures the uncertainty in the observation being reported. We need to judge the significance of an observation by its magnitude as well as by the chance, or uncertainty, involved in its measurement. So we’ll go on to the third concept: absolute versus relative risk. That is why I gave you this question. The Ocular Hypertension Treatment Study reported a 50% risk reduction with medical treatment. The Early Manifest Glaucoma Trial reported a 17% risk reduction with medical treatment. Which of the following would you agree with? OHTS results show better protection, as the risk reduction is greater? EMGT results show better protection, as the subjects included had glaucoma? Cannot compare, as the inclusion criteria are different? EMGT results are better, as the NNT is lower? The majority say we cannot compare, as the inclusion criteria are different. Okay. Let’s see what it means.
So the Ocular Hypertension Treatment Study, just for a little background, randomized 1636 newly diagnosed ocular hypertension patients to either ocular hypotensive treatment or close observation, and followed them for five years. With a 20% pressure reduction in the treated group, conversion to primary open-angle glaucoma occurred in 9.5% of the control group and in 4.4% of the treatment group. So the absolute risk reduction is 5.1%. The Early Manifest Glaucoma Trial randomized 255 newly diagnosed glaucoma patients with established visual field loss — 129 into the treatment group and 126 into the control group, where they were not treated but were followed closely. There was 62% progression in the control group and 45% in the treated group, so the absolute risk reduction is 17%. But what those numbers do not show is the difference between absolute and relative risk. The 5.1% we calculated is the absolute risk reduction; relative to the roughly 10% of people who were at risk of converting, that same reduction is about 50%. A 5% absolute risk reduction is not very attractive, whereas a 50% relative risk reduction is very attractive — so reports often give you only the relative risk reduction, without talking about the absolute risk reduction. The number needed to treat is another concept we need to understand: if the absolute risk reduction is 5%, then 100 divided by 5 gives the number needed to treat, 20. Meaning we need to treat 20 patients with ocular hypertension over a 5-year period, achieving a 20% pressure reduction, to prevent one of them developing early glaucoma. So if you treat 20 patients, you are benefiting only 1. That is the number needed to treat, and it is a concept that can be used to compare different treatments. As opposed to that, in the Early Manifest Glaucoma Trial, the absolute risk reduction is 17%. We can calculate the 95% confidence intervals around that — indeed, we can calculate confidence intervals around any of the parameters we are talking about.
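[The arithmetic behind these trial figures — absolute risk reduction, relative risk reduction, and number needed to treat — can be sketched as follows, using the rounded rates quoted in the talk:]

```python
def risk_summary(control_risk: float, treated_risk: float):
    """Absolute risk reduction, relative risk reduction, and NNT."""
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    nnt = 1 / arr                       # patients treated to prevent one event
    return arr, rrr, nnt

# OHTS: 9.5% of controls vs 4.4% of treated converted to glaucoma
# EMGT: 62% of controls vs 45% of treated progressed
for name, c, t in [("OHTS", 0.095, 0.044), ("EMGT", 0.62, 0.45)]:
    arr, rrr, nnt = risk_summary(c, t)
    print(f"{name}: ARR {arr:.1%}, RRR {rrr:.0%}, NNT {nnt:.0f}")
```

With these inputs, OHTS gives an ARR of about 5.1% (RRR around 50%, NNT around 20), and EMGT an ARR of 17% (NNT around 6) — the comparison the talk draws.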
The absolute risk reduction is 17%, and the number needed to treat comes to 6. That means we need to treat 6 proven glaucoma patients over a 6-year time frame to prevent one of them progressing. So if you treat 20 ocular hypertensives, you get one benefit; if you treat 6 early manifest glaucomas, you get one benefit. The lower the number needed to treat, the better the effect. But we need to understand the relationship between the absolute and the relative risk. To put that in context, let us say that with treatment A the mortality rate is 1%, and with treatment B the mortality rate reduces to 0.5%. The absolute risk reduction would be 1 minus 0.5, that is, 0.5%. And the relative risk reduction would be 0.5 divided by 1, that is, 50%. The relative risk reduction would be 50% whether the benefit is coming down from 100 to 50, from 50 to 25, or from 25 to 12; whether you are sparing 50 patients or 5, the relative risk reduction reads the same. So we need to see what the actual risk to the population is — the absolute risk — from which the risk is being reduced. This concept I told you already: 1 divided by the absolute risk reduction is the number needed to treat. It gives us valuable information and lets us easily compare different treatment options as to which one is better. So that brings me back to the first slide that I gave you. As glaucoma specialists, one thing we keep teaching repeatedly is that we cannot estimate the cup-disc ratio in isolation. An increased cup-disc ratio is a very important sign of glaucoma, but we cannot interpret the cup-disc ratio without knowing the disc size. These three discs are very large discs with a large cup — physiological cupping, with a normal visual field followed over time. This disc is small in size, the cup is minimal, the inferior rim is smaller than the superior rim, and this is proven glaucoma.
So looking at the cup-disc ratio without the disc size is not acceptable in glaucoma practice. If you extrapolate that to the statistical concepts we talked about: a point estimate of a mean or proportion without knowing the sample size — whether it comes from 10, 100, or 1,000 patients — has no meaning whatsoever. So if anyone tells you that the success rate is 100% or the complication rate is 10%, you necessarily need to ask: out of how many patients did you get that result, so that I know what the spread is — what the confidence intervals are? Similarly, if somebody tells you this is the amount of risk reduction with a given treatment, you need to ask: what is the absolute risk for the population from which it has been reduced? So for me, a cup-disc ratio without disc size is meaningless, a point estimate without sample size is meaningless, and a relative risk reduction without the absolute risk is meaningless. So, what have we done? We talked about the need for evidence-based medicine — otherwise we are practicing witchcraft; whatever treatment we are giving, we need to validate that it is actually beneficial to patients. We looked at the definition of evidence-based medicine and said we need not only evidence about the utility of the treatment, but also about how useful it is to the given patient sitting in front of us. We need to understand the patient’s particular clinical condition and values, and how the randomized controlled trial result — which comes from the general population included in the RCT — can be extrapolated to our patient. In published literature, authors and reviewers are both biased. You need to know what is best for your patient, so you need to develop the capability of looking at results and objectively assessing what they mean, rather than going by the conclusions given in the abstract alone. That is the object of this series. There is a hierarchy of evidence.
All evidence has a value based on where it is coming from and how robust the study is, and you have to decide how to extrapolate the results of that study to a given patient. The most important lesson is that we need to look at the data and at the hidden information, not just the information presented in the abstract. These concepts are not difficult to learn. We should not be intimidated by the formulas the mathematicians give us; we need to understand what they are trying to achieve with the mathematics. Thank you for your attention, and I will go on to the questions.
>> So if anyone has questions, we have some time left. Dr. GC, if you want to stop sharing your screen… And we’ll wait a bit and see if any questions come in.
DR SEKHAR: Even if you want me to repeat a concept, if it was very fast, and you want me to repeat it, we can go over that.
>> So Dr. GC, we had some questions at the time of registration. Oh, we had one come in right now.
DR SEKHAR: Can you explain absolute and relative risk reduction? Take the example from the Ocular Hypertension Treatment Study: of ocular hypertension patients followed for the duration of the study, five years, only 10%, or 1 in 10, are actually converting to primary open-angle glaucoma. And with treatment, we are reducing that to 5%. So if you did not treat, 10% of people would progress; that 10% is the absolute risk. By treatment, you are reducing it to 5%. The absolute risk reduction is therefore 10% minus 5%, which is 5%. What people usually do, because this 5% is not very attractive to look at, is express that 5% as a fraction of the original 10%, so it becomes 50%. So relative risk reduction asks: of the people who are at risk of progressing, by how much are we reducing that risk? Whenever you are given a relative risk reduction, what you need to see is the absolute risk. In the Early Manifest Glaucoma Trial, if you did not treat, 60% progressed; by treating, you reduced that to 40%. That is the benefit you are getting. If the absolute risk is low, putting too many patients on treatment is not a good idea. What is the ideal patient number for a study to be considered statistically significant? Sample size, I think, is a concept that will hopefully be discussed subsequently. There is no such thing as an ideal number. It essentially depends on what your treatment benefit is going to be: whether, if you treat, 50% of the patients are going to get better, or only 10% of the patients are going to get better.
So consider the difference between the treated and untreated groups: giving a new medication that will reduce the pressure by 40%, as opposed to giving a new medication and expecting the pressure to reduce by only 10%. That is the magnitude of effect. If it is large, you will require a small sample size; if the magnitude of effect is small, you will require a large sample size. That is the fundamental thing you need to remember. Can you explain how disc size is necessary in cup-disc ratio? What happens is this: the average number of ganglion cells anybody has is about 1.2 million. Those 1.2 million fibers pass through the hole in the sclera, the scleral opening, which is the optic disc. The space left behind after those 1.2 million axons have passed through is what you see as the cup. If something is damaging those 1.2 million axons, the cup will increase in size; that is what we call glaucoma. But disc size in the normal population can vary anywhere from 1 to 5 square millimeters. It is the only biological parameter that varies so much. If somebody starts off with a very large disc, then to begin with, the space left behind after the axons have passed through, beyond the neuroretinal rim, is very large. That is why the cup-disc ratio in large physiological discs can be large, while in small glaucomatous discs even a small cup can be abnormal. So in a small disc, even a small cup is abnormal; in a large disc, even a large cup is normal. I hope that explains it.
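The arithmetic in these two answers can be sketched in a few lines. The risk figures are the rounded OHTS and EMGT percentages used in the talk; the sample-size function is a standard two-proportion approximation (two-sided 5% significance, 80% power), which is not from the lecture but illustrates its point that a smaller effect demands a larger study:

```python
import math

def risk_reductions(risk_untreated, risk_treated):
    """Absolute and relative risk reduction from two event rates."""
    arr = risk_untreated - risk_treated   # percentage-point drop
    rrr = arr / risk_untreated            # drop as a share of the baseline risk
    return arr, rrr

# OHTS-style figures: 10% convert untreated, 5% treated.
arr, rrr = risk_reductions(0.10, 0.05)    # arr = 0.05, rrr = 0.50
# EMGT-style figures: 60% progress untreated, 40% treated.
arr2, rrr2 = risk_reductions(0.60, 0.40)  # arr2 = 0.20, rrr2 = 0.33...

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate patients per arm to compare two proportions
    (normal approximation; defaults: alpha = 0.05 two-sided, 80% power)."""
    effect = p1 - p2
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

big_effect = n_per_group(0.50, 0.10)    # 50% vs 10% better: ~17 per arm
small_effect = n_per_group(0.15, 0.10)  # 15% vs 10% better: ~683 per arm
```

Note how the same 5-point absolute reduction in OHTS becomes a headline "50% relative reduction," and how shrinking the effect from 40 points to 5 points inflates the required sample roughly forty-fold.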
>> Dr. GC, we had some questions asked at the time of registration. I’m gonna share those with you. Since we have some time. Can you see my screen? Do you want to run through a couple of these?
DR SEKHAR: So I need to see how I close the…
>> The Q and A window?
DR SEKHAR: Okay. Can we use EBP in teaching the clinical curriculum? If yes, how? I think it is possible, but it depends on the clinical condition we are considering and what evidence exists for it, and based on that, we will have to teach. For some clinical conditions, evidence is very difficult to obtain, because they are relatively rare conditions. For conditions that are common, we can have very robust evidence. For conditions that are rare, we will have to go by individual practice and small series, and then decide what is the best thing to be done. How do we use findings whose p-value is not significant? I think if the p-value is not significant, what is necessary is to increase the sample size. For rare conditions, what we can do is have multiple centers collaborate and increase the sample size. If there are multiple studies that have been done with similar inclusion/exclusion criteria, the data from all those studies can be combined to give you what is called a meta-analysis; based on that, we can increase the sample size. If the p-value is not significant, it essentially means that whatever result you have got could be by chance, so we cannot use that data alone. We need to increase the sample size, either by collaborating with others or by doing a larger study. What are the implications of EBM for day-to-day ophthalmology clinical practice for postgraduate trainees? I think the important thing is that, as we train in ophthalmology, we need to be asking questions about the cases we are seeing and the way we are treating them. At that level I think it is most important for us to look for evidence. There is a huge amount of evidence out there that is not being utilized. So my recommendation in that situation is to actually search the literature and see what evidence is available, and what questions are coming out of the case.
And then they will probably become future researchers, if they develop that inquisitiveness about whether the evidence given by the data is actually appropriate or not. They should be able to ask those questions. Good question. Thank you. Is there any compiled repository of evidence-based medicine? Can we consider the Cochrane literature one? I think the Cochrane literature is an excellent source for collating the evidence we have and seeing the current state of practice. When we look at Cochrane, the reviews also keep getting updated periodically, so we should check when a review was done and whether there is any new evidence since. But Cochrane is a great source for seeing what evidence is available. Where the evidence is not great, we also have the consensus practice meetings and guidelines that are developed by different societies; those are another good source. What are the types of paper publications is the next question. I think the types of publications essentially follow the types of studies you can have. If it is a relatively rare case and you want to share your experience about that case, how it ended up and what you did, it is usually a case report. Multiple cases of the same type would be a case series. Descriptive studies of a population, looking at the disease pattern or the features of a particular disease in that population, would be descriptive epidemiology. Studies that are interventional, where you intervene, compare against something, and ask whether the result is better, are randomized controlled trials, as compared to case series. Quite often we compare our current practice with our past practice. It is not the greatest thing to do, but in some situations it is probably the best we can do, though there are scientific limitations to that kind of methodology.
The important thing to realize is that if we are talking about a treatment or an intervention, we always need a control group; otherwise the claims you make about the treatment will not be valid. Sorry, the next question is: when writing a research paper, do you have to put both the p-values and the relative risk, or will either one do? If you are comparing treatments, you look at the relative risks and risk reductions. If you are comparing biological parameters in two treatment groups to see whether they are equal or not (the severity of glaucoma somebody had, or the macular thickness in the two treatment arms, where you are giving anti-VEGF in one and photocoagulation plus anti-VEGF in the other), then in those comparisons, of intraocular pressure or visual field, the statistical test will give you a p-value to see whether the difference you are finding is occurring by chance or can be extrapolated to real-life situations.
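The kind of two-arm comparison described here can be sketched as follows. The IOP values are made up for illustration, and the p-value uses a normal approximation to Welch's t statistic, which is only reasonable for larger samples (a real analysis would use the t distribution, e.g. scipy's `ttest_ind`):

```python
import math
from statistics import NormalDist, mean, variance

def welch_test(a, b):
    """Welch's t statistic for two independent samples, with a two-sided
    p-value from the normal approximation (adequate only for larger n)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    t = (mean(a) - mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Hypothetical intraocular pressures (mmHg) in two treatment arms:
arm_a = [14, 15, 16, 15, 14, 16, 15, 14, 15, 16]
arm_b = [18, 19, 18, 17, 19, 18, 18, 17, 19, 18]
t, p = welch_test(arm_a, arm_b)  # large |t|, tiny p: unlikely to be chance
```

A small p-value says the observed difference between the arms would be very unlikely if the treatments were truly equivalent; it does not, by itself, say the difference is clinically important.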
>> So it looks like we have a few more minutes. If no more questions come in, Dr. GC, then we can stop.
DR SEKHAR: Thank you very much. Thanks, everyone, for having participated.