The Liver Meeting 2021
Part II - Basics of Navigating Artificial Intelligence in Liver Research
Video Transcription
I'm very happy and honored to share the topic, Radiomics: Applying AI to Imaging, at the AASLD Liver Meeting 2021. My name is Xiaolong Qi, from the First Hospital of Lanzhou University. I'm the founder and chair of the Chinese Portal Hypertension Alliance, and I also chair the Professional Committee of Portal Hypertension of the China Association for Promotion of Health Science and Technology. My team's research interest and clinical practice focus on patient-oriented clinical and translational research on the diagnosis and management of portal hypertension, and we have led 21 multi-center clinical trials in China. Today I would like to share some emerging data from the field and from our group on this topic, radiomics, applying AI to imaging. About six years ago, a landmark paper in Radiology summarized the idea that radiomics images are more than pictures, they are data. Radiomics originated in oncology studies several years ago, but it is potentially applicable to all diseases, including liver disease. What we do is convert images, including CT, MRI, and PET images, into high-dimensional data: from these images we extract texture features and non-texture features, which together constitute the high-dimensional data. Radiomics is becoming increasingly important in medical imaging, especially in clinical oncology. A landmark review in Nature Reviews Clinical Oncology about five years ago highlighted that radiomics should be the bridge between imaging and personalized medicine, and that the growth of medical imaging data creates an environment ideal for machine learning and data science. Beyond cross-sectional radiomics, there is also delta radiomics, meaning radiomics features measured pre-treatment, at follow-up, and post-treatment, which are very useful for predicting prognosis. So we say delta radiomics is especially useful for the prediction of prognosis.
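To make the delta radiomics idea concrete, here is a minimal, illustrative sketch (not the pipeline from any of the cited studies): a toy feature extractor computes a few intensity features from a 2-D image, and the delta is simply the per-feature change between the pre- and post-treatment scans. Real radiomics pipelines extract hundreds of texture and shape features; everything here is a simplified stand-in.

```python
import math

def radiomics_features(image):
    """Compute a few simple intensity features from a 2-D image (list of rows).
    Toy stand-in for a real radiomics extractor: mean, standard deviation,
    and an 8-bin intensity-histogram entropy."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / 8 or 1.0          # guard against a constant image
    counts = [0] * 8
    for p in pixels:
        counts[min(int((p - lo) / width), 7)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "std": std, "entropy": entropy}

def delta_radiomics(pre_image, post_image):
    """Delta radiomics: change in each feature between pre- and post-treatment
    scans; these deltas, not the raw features, feed the prognostic model."""
    pre, post = radiomics_features(pre_image), radiomics_features(post_image)
    return {name: post[name] - pre[name] for name in pre}
```

In a real study the deltas for each patient would then enter a prognostic model alongside clinical variables.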
A radiomics-based decision support system for precision diagnosis and treatment can be a powerful tool. In the same Nature Reviews paper, they summarize a workflow of key steps, including data selection, medical imaging, feature extraction, exploratory analysis, and modeling. Each step is assigned a score, and the total, the radiomics quality score, is very helpful for quality control of a radiomics analysis. Lastly, large-scale data sharing is necessary for validation and for realizing the full potential that radiomics represents. The review highlights the CAT distributed learning system, in which many participating hospitals each have a learning connector, and a central data warehouse performs the radiomics analysis to validate the models. So that is the big picture of radiomics in the field. Now I would like to share some previous data from our group across three major parts: radiological imaging, endoscopic imaging, and ultrasound imaging, because radiomics can be useful in all three types of images. Starting with radiological imaging, a landmark paper in the radiomics field about six years ago was published by our collaborator, Professor Liu Zaiyi in Guangzhou, China. They developed and validated a radiomics nomogram for preoperative prediction of lymph node metastasis in colorectal cancer. In this Journal of Clinical Oncology paper, the radiomics nomogram combined the radiomics signature, CT-reported lymph node status, and clinical risk factors for the prediction of lymph node metastasis. We then developed a radiomics signature for clinically significant portal hypertension (CSPH) in patients with cirrhosis. This was a prospective multi-center study from our group in 2017, including 222 patients in the training cohort and another 163 patients in the validation cohort. This is the flow diagram of the study, and this is the baseline.
As you know, in China much of the cirrhosis is HBV-related, which is a little different from the United States or European countries. We then performed radiomics feature selection from about 10,000 features, including texture and non-texture features, and constructed a radiomics model we call rHVPG, where the "r" stands for radiomics. In the left panel, you can see that rHVPG shows the best performance for predicting CSPH, clinically significant portal hypertension, better than liver stiffness by FibroScan alone, the previously published HVPG CT score, portal vein diameter, or serum markers. In the right panel, the rHVPG model shows robust performance across the different cohorts. Because that study included only data from China and the sample size was limited, one year later we initiated another study in the CHESS network, CHESS1802, which started in 2018. That study was on deep convolutional neural network-aided detection (CNN-aided detection) of portal hypertension in patients with cirrhosis. It was a multi-center study including not only data from China but also a cohort from Ankara, Turkey. A total of 679 patients with CT imaging, with invasive HVPG as the reference standard, were included. There was also an MRI cohort of 271 patients from eight centers in China and one center in Turkey. Both cohorts were divided into three data sets: training, validation, and test. This is the baseline of the CT and MRI cohorts, and this is the scheme of the deep CNN models: we perform patch extraction, then feature extraction, and then produce the output. That is the deep CNN process. The training procedure includes liver and spleen channels for both CT and MRI. Panel A shows the classification plot, with CSPH meaning HVPG of 10 or more.
Non-CSPH, meaning HVPG less than 10, is shown as the blue dots, so the classification plot reflects the liver and spleen predictions. Panel A shows the performance of the deep CNN model in the training, validation, and test data sets; the left panel is the CT CNN model and the right panel is the MRI CNN model. You can see the performance is quite good: in the training, validation, and test data sets alike, the AUC was higher than 0.9, which indicates very good models. The model performed better than all the imaging parameters, including liver stiffness measured by elastography, portal venous velocity, the HVPG CT score, and portal vein diameter. The deep CNN models also performed better than serum biomarkers, including the fibrosis index, the CSPH risk score, the Lok score, FIB-4, et cetera. In clinical practice, we always believe that a technique and a paper are important, but a device and a product are even more important, so our group tries to translate novel techniques into products. In this work we developed what we call the HVPG Intelligent Workstation, which takes raw imaging data in DICOM format from the hospital PACS system and performs image analysis. This shows what the workstation looks like. We use this workstation to analyze CT or MRI images, with a formula based on our previous cohorts, so the analysis requires only the patient's CT or MRI images and returns a result indicating whether the patient has CSPH or not. This is currently under review by the Chinese FDA; it is on the way, and we believe it will be approved in the near future. One of our previous studies is on virtual HVPG, which was published in Radiology in 2019. In that study, we used CT imaging for 3D reconstruction and performed computational fluid dynamics.
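The AUC figures quoted above are the standard way such classifiers are compared. As a reference for how an AUC is computed, here is a small self-contained sketch using the rank-sum (Mann-Whitney) formulation; the labels and scores are made-up examples, not data from the study.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum formulation: the probability
    that a randomly chosen positive case scores higher than a randomly chosen
    negative case, with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical CSPH labels (1 = HVPG >= 10) and model scores for one split
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))
```

In practice the same function would be evaluated separately on the training, validation, and test splits, which is what the panels in the figure report.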
This novel technique has obtained several invention patents from China and the United States, and we also received an editorial from our collaborators at the NIH, previously of Brigham and Women's Hospital, Harvard Medical School. Building on this topic, we developed vascular imaging quantification models. This is unpublished data: we used our previous MRI cohort to build vascular quantification models, and you can see patients in different statuses, from healthy controls to patients with HVPG above 10; for all of them, the vascular imaging extraction was successful. Another study, also not yet published, uses what we call an automatic machine learning CT radiomics modeler. As you can see in panel C, the first step is to use the CT images for 3D reconstruction, calculate the volumes of the liver and spleen, and extract artificial intelligence features in the image processing stage. We then perform feature extraction and automatic machine learning and test the performance. This figure shows the performance of what we call the radiomics HVPG. Panel B shows the correlation of the radiomics HVPG with invasive HVPG, which was measured by interventional radiologists, demonstrating good correlation. Panel C shows the performance of the radiomics HVPG in predicting different HVPG statuses, and you can see that the radiomics HVPG performs better than the regular features currently used in clinical practice. Okay. The second part is endoscopic imaging. Two to three years ago, a very important paper in Gastroenterology reported gastroenterologist-level identification of small-bowel diseases and normal variants by capsule endoscopy using a deep learning model. They developed a CNN-based algorithm to identify abnormalities in small-bowel capsule endoscopy images.
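The correlation reported in panel B is a standard Pearson correlation between the model's estimate and the invasive measurement. As a reference, here is a minimal implementation; the paired values below are invented for illustration, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. a model's radiomics-HVPG estimates versus invasive HVPG readings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired readings: predicted vs. invasive HVPG (mmHg)
predicted = [6.0, 9.5, 12.0, 16.0]
invasive = [5.5, 10.0, 13.0, 15.0]
print(pearson_r(predicted, invasive))
```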
That was very important work using AI on endoscopy images. Our own work, published just this year in The Lancet Regional Health, studied a specific endoscope we call the detachable string magnetically controlled capsule endoscopy for detecting high-risk varices. This is a novel device in China, already used in routine practice in many large hospitals and university hospitals. With magnetically controlled capsule endoscopy, a capsule captures the images while a magnetic control station outside the patient steers it. The problem is that in patients with cirrhosis we want the capsule to stay in the esophagus for a long time to see more clearly. So we developed a detachable string that attaches to the top of the capsule; with this technique, the operator outside can use the string to control the capsule's position in the esophagus. We ran a pilot study to test its feasibility and its concordance with regular endoscopy, and now we use these endoscopy images for AI image analysis to detect high-risk varices. Okay. A very recent paper in Gastrointestinal Endoscopy also described the ENDOANGEL-based system, built on AI techniques, to recognize the normal esophagus and stomach and diseased areas. The last part is ultrasound imaging. Two years ago, our collaborator Professor Kun Wang from Beijing, China published a paper in Gut developing deep learning radiomics based on shear wave elastography for assessing liver fibrosis in hepatitis B patients. It was a prospective multi-center study, and they developed a deep learning radiomics model for predicting the different stages of liver fibrosis.
This year, we initiated another international multi-center trial in collaboration with investigators in China, Germany, and Italy. In this trial we use the novel spleen-dedicated stiffness measurement of the new FibroScan 630 for the analysis. Okay. We already have some preliminary results from this trial showing that the diagnostic value of spleen stiffness measurement for CSPH and for varices needing treatment is better than that of liver stiffness measurement. Okay. A previous study also used regular color ultrasound images for image analysis to predict cancer, rather than liver disease; the workflow is the same, with feature extraction followed by model development and validation. Finally, an important review summarizing emerging non-invasive approaches for the diagnosis and monitoring of portal hypertension was published by our group in collaboration with Annalisa Berzigotti from Switzerland, Andres Cardenas from Spain, and Dr. Shiv Kumar Sarin from India. In that review, we summarize some emerging techniques and the potential of these novel techniques in the near future, including radiomics and deep learning in this area. Okay. So, dear ladies and gentlemen, I was very happy and honored to share the emerging data from this field and from our group on radiomics in imaging, especially liver imaging. To date, most studies evaluating AI applications have not been validated for reproducibility and generalizability, but the results do highlight increasingly concerted efforts to push AI technology toward clinical use and impact on future care. Thank you very much for your time.

I would like to thank the AASLD and the organizers, Dr. Rogal and Dr. Bajaj, for the kind invitation to talk about applying artificial intelligence to liver pathology data. I'm Mazen Noureddin from Cedars-Sinai, and those are my disclosures.
The objectives of my talk are to discuss why and where we need AI, artificial intelligence, in liver histology; how we apply AI in liver histology; and what we have achieved in AI liver histology to date. First, when do we need liver biopsy in the liver clinic? If you have unexplained liver enzyme elevation, we do a workup, and if we don't find a reason, we do a liver biopsy. From a fatty liver disease standpoint, in NAFLD and NASH we do liver biopsy; well, mostly in clinic we do non-invasive testing such as transient elastography, but in clinical trials in NAFLD and NASH we do liver biopsy, especially in phase IIb and III. For autoimmune hepatitis, many of us still do liver biopsy. For PBC, not so much; a lot of us have moved away, relying on AMA and alkaline phosphatase, though some still do liver biopsy. For overlap between autoimmune hepatitis and PBC, we do liver biopsy. For alcoholic hepatitis we have moved away from liver biopsy, and for DILI we have mostly moved away; other reasons are rare. For viral hepatitis, we don't do liver biopsy anymore, because we have a lot of non-invasive testing. So it's really NAFLD and NASH, mostly in the setting of clinical trials, and rarely other situations. My talk today will focus mostly on non-alcoholic fatty liver disease and NASH trials. Let's talk about the current state. We have the Brunt criteria to diagnose steatohepatitis. Dr. Kleiner, in 2005, out of the NIH, came up with a manual, semi-quantitative grading of NAFLD lesions, which was a tremendous help in NASH clinical trials. You all know the NAFLD activity score, the NAS score, which combines steatosis, lobular inflammation, and ballooning; it is a score from zero to eight. In addition to the NAFLD activity score, we assess fibrosis staging. There is also the SAF score, which was created in Europe.
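The NAS arithmetic just described can be written down directly; here is a minimal reference sketch of the Kleiner scoring ranges (the function name and structure are mine, not from any trial software).

```python
def nas_score(steatosis, lobular_inflammation, ballooning):
    """NAFLD activity score (NAS): steatosis (0-3) + lobular inflammation (0-3)
    + hepatocellular ballooning (0-2), for a total of 0-8. Fibrosis is staged
    separately (0-4) and is not part of the NAS."""
    if not (0 <= steatosis <= 3 and 0 <= lobular_inflammation <= 3
            and 0 <= ballooning <= 2):
        raise ValueError("component grade out of range")
    return steatosis + lobular_inflammation + ballooning

# Example: steatosis grade 2, lobular inflammation 1, ballooning 1
print(nas_score(2, 1, 1))  # -> 4
```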
It has also been used in NASH clinical trials. If you look at the FDA efficacy endpoints in phase three registrational clinical trials, there are two: NASH resolution, which requires resolution of steatohepatitis without worsening of fibrosis, or fibrosis improvement by one stage without worsening of steatohepatitis. In the most recent FDA document published in Hepatology, they also mention that achieving both can be an endpoint, which makes a lot of sense. So why do we need artificial intelligence, especially for histology? Multiple reasons have been brought out in publications and evidence-based medicine. The issues that have been raised are inter- and intra-observer variability, improving reading granularity, the number of hepatopathologists needed to complete clinical trials, and of course overcoming placebo effects. Let's tackle these problems one by one. First, inter- and intra-observer variability, which has been a huge issue. At least a decade ago, in NASH clinical trials, it was shown that inter- and intra-observer agreement was good for steatosis and fibrosis lesion readings. However, for lobular and portal inflammation, ballooning, and the overall diagnosis, it is not as good. This is a list of studies that looked into inter- and intra-observer agreement: agreement is good in steatosis grading and fibrosis staging, while it struggles in lobular inflammation, portal inflammation, hepatocellular ballooning, and diagnosis. The Davison study that came out in the Journal of Hepatology in 2020 was a landmark study and caused controversy. It examined 339 patients from the EMMINENCE study, a NASH clinical trial. An excellent pathologist, reader A, scored the biopsies; however, when he went back and re-read his own biopsies, there was a low intra-reader kappa coefficient.
He was able to reproduce 77 percent of his reads, while reader B reproduced 69 percent and reader C 77.3 percent, so there was low intra- and inter-reader kappa. All three pathologists agreed on all of the criteria only 53 or 54 percent of the time, meaning that 46 percent of the time, or 46 percent of the patients, did not meet the histological inclusion criteria according to at least one of them. That's a big number. Indeed, when they looked at the first efficacy endpoint, NASH resolution, they achieved an unweighted kappa of 0.396, about 0.4; that's low. For the other efficacy endpoint, fibrosis regression, the unweighted kappa was 0.366. Both are low, and that's a problem; it was raised as a big issue. Next, improving reading granularity. When you talk about improving fibrosis by one stage, going from stage two to one is different from going from four to three, because improving stage four even a little is a huge step. Two to one versus four to three are totally different; it's like comparing travel from Washington, D.C. to New York with travel from Washington, D.C. to Los Angeles. So we have to be mindful, and we probably need more than just a one-stage improvement; we need to look at more detail, such as the collagen content. Another granularity concept that was proposed concerns ballooning, whose scale is zero to two. There has been talk of making it more granular by extending the scale, but that has not been done yet. Then there is the placebo effect and overcoming it. Let me give an example from well-known recent trials: semaglutide, published in the New England Journal of Medicine, and lanifibranor, a pan-PPAR agonist, also coming in the New England Journal. As you know, semaglutide achieved significant NASH resolution, shown at the top left, but, at the bottom, it did not achieve fibrosis improvement.
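For reference, the unweighted kappa values quoted above are Cohen's kappa, agreement corrected for chance. Here is a minimal implementation; the two reader sequences below are invented toy data, not reads from the Davison study.

```python
def cohens_kappa(reads_a, reads_b):
    """Unweighted Cohen's kappa between two raters:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from each rater's own marginal
    label frequencies."""
    n = len(reads_a)
    categories = set(reads_a) | set(reads_b)
    p_o = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    p_e = sum((reads_a.count(c) / n) * (reads_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two pathologists labelling biopsies 1 = endpoint met, 0 = not
reader_a = [1, 1, 0, 0, 1, 0]
reader_b = [1, 0, 0, 0, 1, 1]
print(cohens_kappa(reader_a, reader_b))
```

Values near 0.4, like those in the study, mean the raters agree only moderately more often than chance alone would produce.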
One of the big debates, which I'm not going to get into, is that the placebo response was quite high, at 33%. If you look at the lanifibranor study at the bottom, the placebo response was lower, 24%. The question is: can we avoid such differences in placebo response between clinical trials? Next, the number of expert hepatopathologists. I trained with Dr. David Kleiner; I sat with him at the microscope during my fellowship at the NIH. We have multiple other excellent pathologists, Dr. Brunt, Dr. Goodman, Dr. Bedossa, and many others; I don't want to miss any. They are excellent, but with all the NASH clinical trials happening, we need many, many of them. Let me use a quote from Dr. Brunt, from an editorial she wrote in response to the Davison paper: pathologist A, who is actually outstanding, performed the qualifying and end-of-treatment reads for the EMMINENCE study, with the time and results pressures that trial qualification and completion inherently involve. There is pressure: the slides need to be prepared and shipped, preparation and shipment take time, and then everyone suddenly wants the results; not to mention the sample size, as biopsies are not always the two centimeters the pathologists want. All of this puts pressure on the pathologists, their performance, and their reading outcomes. Also, look at what is happening in the field. The resmetirom phase three trial is ongoing, with about 2,000 patients between baseline and end of treatment. The belapectin trial has about 1,000 patients at baseline whose biopsies need to be read. For GLP-1, semaglutide has started its phase three study, and the Gilead phase two combination study, which includes semaglutide, their FXR agonist, and the ACC inhibitor, has also started; they have baseline as well as end-of-treatment biopsies that need to be read, and more phase three studies are coming.
You have the Pfizer studies that are ongoing, and many other studies in planning, Akero and others. So you need multiple full-time, excellent pathologists to read these studies. Not to mention that one suggestion that was brought up is to have more than one pathologist reach agreement on each biopsy read; one proposal was to require three pathologists to agree. Do the math based on these studies. So how do we apply artificial intelligence to liver histology? This is a review paper we published in Hepatology; I'm not going to go into details, and I invite you to read it. In general, within artificial intelligence you have machine learning and deep learning, and there is supervised and unsupervised learning. An example in histology is deep learning with a convolutional neural network, the CNN, which PathAI, whose work I'm going to present, has used. Let me give you an example of how it can be done in pathology. There is a team of computer scientists, pathologists, and hepatologists. They make digital images of, for instance, liver biopsies from NAFLD patients. The pathologist makes annotations, and the machine builds its model. Then you validate the lesions that the ML model, the machine learning model, identified against the annotations of the pathologist you chose, or a panel of pathologists. Once you're comfortable with that, you do a further round of validation. Here is an example from second harmonic generation imaging, which uses unstained slides: in this study published in Hepatology in 2020, the qSteatosis and qFibrosis scores performed excellently, at 0.8 and 0.77, compared with weaker performance for inflammation and ballooning. Again, this is a machine learning approach that uses unstained slides.
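The supervised workflow just described, a pathologist annotates examples and a machine learns from them, can be sketched with a deliberately tiny classifier. This nearest-centroid toy (my illustration, not PathAI's architecture; a real system would be a CNN over image patches) shows the shape of the train-then-validate loop.

```python
def train_nearest_centroid(features, labels):
    """Fit a toy supervised model on pathologist-annotated examples by
    averaging the feature vectors of each label (nearest-centroid)."""
    grouped = {}
    for x, y in zip(features, labels):
        grouped.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in grouped.items()}

def predict(centroids, x):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Hypothetical 2-D features from annotated patches (e.g. cell size, density)
train_x = [[0, 0], [0, 1], [5, 5], [6, 5]]
train_y = ["normal", "normal", "ballooning", "ballooning"]
model = train_nearest_centroid(train_x, train_y)
print(predict(model, [5, 6]))  # -> ballooning
```

Validation then means running `predict` on held-out patches and comparing against the panel's annotations, exactly the correlation step the talk describes.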
This is PathAI's study. A central pathologist created the annotations; again, the pathologist teaches the machine. The annotations were for steatosis, ballooning, and inflammation, as well as fibrosis, not shown on this slide. The machine created its own model. On the left, you see the correlation of the degree of steatosis, inflammation, and fibrosis with the central pathologist; the ML model produces its own read, which is then correlated with the reads of three pathologists to get the final correlation. So what have we achieved in AI liver histology to date? Let's take a look. This is the AI model, looking at inter-observer reproducibility of the NAS components. In terms of steatosis, I first want to point out the three pathologists on the right: when three pathologists were actually used, they agreed excellently among each other. So yes, that is one solution to mitigate the inter- and intra-observer problems, but it requires far more pathologists, which will not solve our problem. The ML model correlated excellently with the three pathologist readers, and you see here the correlations with the three readers for steatosis, lobular inflammation, and ballooning. Fibrosis also correlated very well with the three pathologists. Another problem AI could solve is granularity, and I want to give you an example with two patients, patient number two and patient number four, to show that not all cirrhotics are created equal. Patient number two has 42% cirrhosis (F4) and 41% F3 in his biopsy, compared with patient number four, who has 96% cirrhosis and about 3% F3. Patients two and four will respond totally differently, for instance, to an antifibrotic drug, and we know such differences exist among our patients with F3 and F4.
This will probably enable a better assessment of histological change. For instance, here on the left is the baseline assessment of a patient with 73% F4; at week 42 you can measure his or her response, how much F4 and F3 dropped and how much turned into F2 and F1. Indeed, PathAI created a delta-fibrosis score and tracked how it changed over time. It is possible that this also mitigates the placebo effect. How so? In the figure on the left, with the conventional read on the right side, the placebo response was 14% versus 23% in the drug arm, a cilofexor-containing combination, and that was not statistically significant. However, with the machine learning read, the placebo response was only 5% while the combination arm was higher, at 27%, and the difference was statistically significant. Now you have a trial with a statistically significant difference. On the right, I show you their delta-fibrosis score: on the drug, you see a meaningful difference between baseline and end of treatment, whereas on placebo you do not, showing that ML can lead to different results and is something to consider in the future. Of course, whatever we do has to correlate with clinical liver outcomes, and in the same study the machine learning model they created correlated with clinical liver events in the STELLAR-3 and STELLAR-4 studies. Moving on to a subsequent study, from the simtuzumab trials, which had the advantage of hepatic venous pressure gradient measurements, Professor Bosch and many esteemed colleagues published this paper. They created a machine learning model to predict the HVPG, the so-called ML HVPG score, and in the training and validation sets, the ML HVPG score correlated very well with the measured HVPG.
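The per-stage percentages behind the delta-fibrosis idea can be expressed as a simple composition delta. This sketch is my illustration of the concept, not PathAI's actual score: given the fraction of biopsy area assigned to each stage at two timepoints, it reports the per-stage change.

```python
def fibrosis_composition_delta(baseline, end_of_treatment):
    """Continuous fibrosis assessment: given percent of biopsy area at each
    stage (e.g. {'F4': 73.0, 'F3': 27.0}), report the per-stage change from
    baseline, a more granular read than a single ordinal stage."""
    stages = set(baseline) | set(end_of_treatment)
    return {s: end_of_treatment.get(s, 0.0) - baseline.get(s, 0.0)
            for s in stages}

# Hypothetical patient: some F4 area remodels toward F3 and F2 on treatment
delta = fibrosis_composition_delta({"F4": 73.0, "F3": 27.0},
                                   {"F4": 60.0, "F3": 30.0, "F2": 10.0})
print(delta)
```

A trial-level delta-fibrosis score could then aggregate such per-patient dictionaries, which is the kind of readout the placebo comparison above uses.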
One caveat about HVPG in NASH cirrhotics: there has been discussion that it is not as accurate in NASH patients as in other liver diseases, for several reasons I won't cover in this talk, probably weight and other variations; Juan Abraldes and Professor Garcia-Tsao have written about this, and I direct you to their papers. They also showed in this paper that the machine learning HVPG score was able to predict changes in HVPG, in this case a 20 percent decrease, as well as fibrosis improvement. What is also important here: the ML HVPG score at baseline did not predict clinical liver outcomes. Again, the baseline ML HVPG score did not predict clinical liver outcomes, but the change in the ML HVPG score did. That was an observation of that study. On the other hand, at this meeting we present work using second harmonic generation imaging from HistoIndex, building on a concept Professor Garcia-Tsao came up with about 10 years ago from manual assessment of cirrhosis. At that time, she wrote papers about the architecture of cirrhosis and how we should consider the septa, the thickness of the septa, and other important architectural changes such as nodule size, number of nodules, and fibrosis amount and area. I invite you to read the poster for more details. With HistoIndex's involvement and Dr. Goodman, we did this analysis looking at many, many parameters for this concept, and we combined them in a score we call SNF: septa, nodules, and fibrosis. Here you see the SNF score correlating nicely with HVPG in a training as well as a validation set. Again, that makes sense, because it represents the architecture of cirrhosis very nicely. Here it shows each component by itself; combined, they did better in the training and validation sets, and it performed nicely.
It correlated with clinically significant portal hypertension, though as I said, portal hypertension measurement has its own issues in NASH. We also looked at the incidence of varices; there was another score, SNF-varices, which also predicted it nicely. That makes a lot of sense because, again, septa, nodules, and fibrosis are important architectural features in cirrhosis patients, and there were many such parameters included in the study, which I invite you to read. Again, this used unstained slides with a machine learning methodology. In another important machine learning analysis, presented by Professor Sanyal from the tropifexor program, they found that in NASH patients with F3, looking at finer features such as septa thickness, you can detect further improvement in fibrosis; the septa thickness in particular can be meaningful, so we should consider such parameters in these patients. Machine learning thus gives you further dimensions and further analysis of important histological features beyond conventional liver histology. In summary, artificial intelligence and machine learning histology is a new, innovative, promising technology. It reduces variability in histological reading; it's like having three pathologists in one machine, though we should not forget that pathologists teach this machine. It gives more detailed, automated, granular reading. It may decrease the placebo response, but we need more data on that. We now have data that it correlates with HVPG, and, since HVPG has its issues in NASH, it has also been correlated with clinical liver events. It might show treatment effects better, and we might be able to go beyond these failing agents and detect the treatment effects we are probably missing. With that, I would like to thank you. I'll take any questions. Thank you very much.
I would like to thank the organizers for giving me this opportunity to present in this forum; I'm really delighted to be here. These are my disclosures, and this is the outline of my talk. I know some of these facets have been covered by previous speakers; my job is to tie it all together and present how we can use this in our research today and how it may impact our clinical practice in the coming years. First, AI to improve the diagnosis of a disease: where do we stand? I'll give you an example of something that is now in clinical practice. A group of AI investigators at Google worked together with academicians to diagnose diabetic retinopathy. They took a large set of retinal fundus photographs and trained an AI-based algorithm to diagnose diabetic retinopathy. They then compared a high-specificity operating point and looked at the diagnostic test characteristics, and they were able to improve upon them. The individual points here are clinicians detecting disease individually; you can see that you get a really robust model once you keep training your AI-based model, better than a single clinician. When these models were compared with expert physicians, in general the AI-based models performed better, and they could then be applied to new cases where it was unclear whether the patient had diabetic retinopathy, and the model was able to pick it up. One really important question is how many cases, how many fundus photographs, the AI model needed for training. This was assessed, on the right, as the number of images needed to optimize the model: it was somewhere around 60,000 images, at which point the AI model essentially achieved its maximum optimization. So you need a large dataset, but you can definitely assemble that dataset, train on it, and then apply it clinically.
This is now clinically available. What about diagnosing the disease of a patient who comes to us as clinicians? This is also happening in real time. This was a paper published in the New England Journal of Medicine in 2018 from the Undiagnosed Diseases Network. Here is a simple example: they had about 1,500 patients, and in about 35% of these they were able to reach a diagnosis. Others remained undiagnosed, but 35% received a key genetic diagnosis, and they discovered 31 new syndromes in this program. One caveat is that it's not just exome sequencing, because 32% of patients who came to this network had exome sequencing done already. It's really about how we utilize that data and really pinpoint which genes are causing these abnormalities in our patients; understanding them is really important. Therefore, an AI-based model will also require human help to optimize that model. Here is an example from a recent review article that we wrote with Silvia Vilarinho, Veeral Ajmera, and Nalini Jain. The idea came about when I was giving a visiting professor GI grand rounds, I think at Penn, and one of the faculty asked me: what is the algorithm? Can you define an algorithm for assessment of lean NAFLD? I said, well, I think this is a really good idea, and we really need to assess when to do genetic testing in our patients with NASH. It started from that discussion, and then we wanted to involve Silvia, because Silvia has been looking at undiagnosed liver problems, doing exome sequencing, and trying to match things up. Together we came up with this idea: if you have lean NAFLD and you really don't have features of insulin resistance, then there might be some genetic problem that is not explained by the common genetic risk variants of NAFLD, such as PNPLA3 or TM6SF2. Something else might be going on, and so we want to identify those patients, in whom there could be rare monogenic variants.
It could be that a patient has hypobetalipoproteinemia, and so you want to look at their lipid profile; there might be abnormal triglyceride and LDL levels. There might be some sort of transport defect, or lysosomal acid lipase deficiency. We might see features of familial partial lipodystrophy syndrome. So depending on the clinical presentation, you might consider doing whole exome sequencing in some of these patients and find new diagnoses, or new manifestations of known monogenic disorders, and this may eventually lead to therapeutic interventions once we make that diagnosis. Some of those are listed here, including Wilson's disease. So I think this can be applied in clinical practice even today, and the way we can then utilize an AI-based model is to codify this: the patient comes in, we do exome sequencing, results are X, Y, and Z, this would be the next step. That could be framed into a model and run through a computer, so you don't actually have to take the patient to UCSD or Yale; the testing could be done anywhere in the world. Those things might be coming in the future. Here I'm giving you an example of using just clinical data: no genetic testing, no whole exome sequencing, just the routine clinical data that is available to you. These investigators looked at NASH CRN data that was publicly available, came up with a 14-feature model to diagnose NASH and fibrosis, and then validated it in the Optum real-world data. The diagnostic accuracy was about 0.74, which is what you will get with clinical data, but it's still helpful at a population level to detect who may have NASH. The next example is about looking at a liver image or a liver lesion. This is a proof-of-concept study: how can we develop an AI-based system where radiologists and clinicians can upload their scans and get a probability for malignancy or hepatocellular carcinoma?
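The "diagnostic accuracy about 0.74" quoted above is an AUROC. As a quick aside, AUROC can be computed directly as the probability that a randomly chosen case outranks a randomly chosen control; a minimal sketch with invented scores (not the NASH CRN model):

```python
# Illustrative sketch: AUROC as the Mann-Whitney rank probability.
# The scores below are made up, standing in for a clinical-feature model.

def auroc(scores, labels):
    """Probability that a random positive scores above a random negative
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores from a hypothetical clinical model (4 controls, 4 cases).
scores = [0.2, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   0,   1,   1,   1,   1]
print(auroc(scores, labels))  # 0.9375 (15 of 16 case-control pairs ranked correctly)
```

An AUROC of 0.74, as in the study above, means a random patient with NASH outranks a random patient without it about three times out of four.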
So, we already know that there's the LI-RADS system. Instead of a radiologist or a clinician giving you a LI-RADS score, could we train a computer to run the same algorithm that is running in a radiologist's mind, but also provide clinical data along with radiologic features, such as steatosis, cirrhosis, features of portal hypertension, and their labs? We can provide a lot of input, and based upon that input and the features of the lesion on the MRI or CT or ultrasound, we can really differentiate whether it is a benign lesion or a malignant lesion, or whether it's HCC or adenoma. They were able to do that with 94% accuracy here between a benign and a malignant lesion. So, I think these types of AI-based tests, especially for detection of hepatocellular carcinoma in a cirrhotic, should be coming in the next few years. There's great promise here. Now, what about assessment of liver fat? A simple problem, we would say. Conventional ultrasound lacks sensitivity, specifically in picking up liver fat when it is between 5% and 15%. You could, of course, do MRI-PDFF, but it's expensive, requires scanner time, and you may not have availability of an MRI. CAP, the controlled attenuation parameter, could be done at the point of care, but it lacks accuracy in providing an exact liver fat content. The solution: could you develop an AI-based model using the raw data from ultrasound waves, looking at those frequencies, and develop an ultrasound fat fraction? Is this fact or fiction? Can we do it now? Yes. We've been working with engineers at the University of Illinois Urbana-Champaign, together with Claude Sirlin and me; we have an R01 on this. One of the products is that we can now, using a deep learning method on raw ultrasound data, provide an ultrasound-based fat fraction. This estimate correlates very highly with the PDFF that we measure on MRI, and this deep learning approach reached up to 98% diagnostic accuracy using the ultrasound fat fraction.
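As an illustration of how such an ultrasound-derived fat fraction would be evaluated against the MRI-PDFF reference, here is a minimal sketch; all values are invented for the example, and the variable name `usff` (ultrasound fat fraction) is an assumption, not a published identifier:

```python
# Illustrative sketch: comparing a hypothetical ultrasound fat fraction
# (usff) against the MRI-PDFF reference. All numbers are invented.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

mri_pdff = [2.0, 4.5, 6.0, 9.5, 14.0, 22.0]   # % liver fat by MRI-PDFF
usff     = [2.5, 4.0, 6.5, 10.0, 13.0, 21.0]  # % liver fat by ultrasound model
r = pearson_r(mri_pdff, usff)

# Agreement on the >=5% steatosis cutoff commonly used to define NAFLD:
agree = sum((a >= 5) == (b >= 5) for a, b in zip(mri_pdff, usff)) / len(usff)
```

On this toy data the correlation is near 1 and the two methods agree on every patient at the 5% cutoff; a real validation would, of course, use held-out patients.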
So, this is definitely possible, and scaling it up for clinical utility is the next phase. There is quite a lot of activity in this domain; I think there will be an ultrasound-based fat fraction coming to the clinic in the next two to three years. What about the treatment landscape? Can we make a dent there? So, I work on NASH and NASH therapeutics. We are seeing a lot of drugs fail as they go from phase one to phase two to phase three, and this is common throughout drug development. Could we make a more efficient model for drug development using AI-based approaches? The answer is yes. The key unmet need in drug discovery is that a lot of time and precious resources of patients are spent on a drug that doesn't really have potential, and when we look back, it never had the potential to succeed in getting to the clinic. Most therapies do not work because they're not derived from data that replicate the human disease state. Now we have examples of drug discovery based on precision approaches, such as relying on genetics: siRNA targeting PNPLA3 is an example. We are currently testing this, and it would really be a paradigm shift in the treatment of NASH if we are able to establish the safety and efficacy of such an approach. Then there are siRNA-based programs in alpha-1-antitrypsin deficiency targeting Z-protein accumulation in the hepatocyte. So, I do believe that some of these therapies will see the light in the coming years, and there's a lot of activity in this domain. Then there is work integrating genetic data, transcriptomic data, and metabolomic data to discover new targets for therapy, and one of the targets that people are looking at now based upon these data is HSD17B13; HSD17B13-based therapies are emerging as well. So, this is actually happening in the background while we are working on conventional drug development.
And there are several life science companies that are working in this domain to aid drug discovery using AI-based approaches. Now, consider the endpoints for approval of new drugs for NASH. Previously we discussed pathology being important, and how there is imprecision and lack of reproducibility in liver pathology assessment. For a drug to be approved in NASH under Subpart H, it needs to show either of these endpoints: NASH resolution without worsening of fibrosis, or one-stage improvement in fibrosis without worsening of NASH. And then for full approval, you need to hit clinical endpoints as listed here. So, let's look at caveats related to NASH resolution versus fibrosis regression. How do we define NASH resolution first? We define it as lobular inflammation that is minimal, zero to one, and ballooning that is zero. So, what's the kappa for lobular inflammation? Somewhere between 0.45 and 0.6. And the kappa agreement for ballooning is somewhere between 0.55 and 0.66. So, when you're looking at NASH resolution, you're really multiplying these kappas for both lobular inflammation and ballooning, and so the composite agreement might be really low. For fibrosis improvement, the one-stage agreement between readers is somewhere around 0.8. Most of these imprecisions are then carried over, so that the treatment effect delta is not robust or precise enough to show whether a therapy is working or not. It may also be that the therapies are not potent enough, but it could also be that the reading could be improved, in which case we could start seeing benefit from therapies that are currently not showing it but actually are beneficial. So, traditional histologic assessment may lead to type II error. How can we improve that?
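The multiplication point above can be made concrete with a small worked example. Under the simplifying assumption that reader disagreements on the two components are independent (an assumption for illustration, not a claim from the talk), agreement on the composite is the product of the component agreements:

```python
# Worked arithmetic: why a composite endpoint like NASH resolution is
# noisier than its components. Assumes (for illustration only) that
# disagreements on the two components are independent.

p_inf = 0.75  # hypothetical observed reader agreement on lobular inflammation
p_bal = 0.80  # hypothetical observed reader agreement on ballooning

# Probability both readers agree on BOTH components simultaneously:
p_composite = p_inf * p_bal  # ~0.60, lower than either component alone
```

So even with reasonably good per-feature agreement, the composite read that drives the trial endpoint is noticeably less reproducible, which is exactly the imprecision that gets carried into the treatment-effect delta.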
One of the ways: this is current data on where we stand in terms of the diagnosis of steatohepatitis. About 66% of the time there is agreement between readers, but 34% of the time there is disagreement, so we definitely want to improve upon that diagnosis. People are looking into machine learning approaches, and this is a study conducted using a large dataset of pathology slides, annotating them first by expert pathologists and then by the computer. This is one of the approaches using, again, AI-based methodologies: you identify what steatohepatitis looks like and keep training the computer to learn that this is steatohepatitis, and then eventually it can quantify for you and provide a prediction for steatohepatitis, ballooning, and inflammation based upon an H&E slide. Similarly, you could do this with a trichrome-stained slide and get an idea of total fibrosis, which can vary within the same biopsy: what proportion of the biopsy looks like cirrhosis, what proportion looks like stage three, what proportion looks like stage two, and what proportion looks like stage one. Within the same biopsy, you might see different features and, depending on the stage of fibrosis, a predominant stage. That's why you can sometimes see the same liver biopsy read by one pathologist as one stage and by another as stage four; both might be right, depending on what part of the liver they're looking at under the microscope. So this is also coming, and we might see an AI-based approach to diagnose steatohepatitis, especially in the setting of a clinical trial, in the next few years. I also reviewed FDA data: as of December 2020, there were 29 AI-based medical technologies that had received FDA approval.
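The per-proportion reporting described above can be sketched as a simple aggregation step: a hypothetical patch-level classifier assigns a fibrosis stage to each tile of the digitized slide, and the tile calls are rolled up into within-biopsy proportions plus a predominant stage. The patch stages below are invented:

```python
# Illustrative sketch: aggregating per-patch fibrosis-stage calls from a
# (hypothetical) tile classifier into within-biopsy proportions and a
# predominant stage. The patch predictions are made-up values.
from collections import Counter

# Stage predicted for each tile of one digitized biopsy.
patch_stages = [4, 4, 4, 3, 3, 3, 3, 2, 2, 1]

counts = Counter(patch_stages)
total = len(patch_stages)
proportions = {stage: counts[stage] / total for stage in sorted(counts)}
predominant = counts.most_common(1)[0][0]

print(proportions)   # {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.3}
print(predominant)   # 3
```

This toy biopsy is 30% stage-four-looking tissue but predominantly stage three, which mirrors why two pathologists sampling different fields can both be "right" with different stage calls.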
The majority of them were in imaging, and there are several pathways that in future meetings such as this we could discuss: there's the 510(k) clearance pathway, there's pre-market approval, and then there's the de novo pathway. The idea is that you could get an AI-based test through these. For example, an ultrasound-based fat fraction would be a 510(k) clearance, because another test already exists, like CAP, or just ultrasound grading of mild, moderate, or severe fat in the liver; you then show that your test provides at least as much information as what's already there, and that's pretty good. Or you could come up with a totally new claim and say, I can diagnose NASH on ultrasound. You could do that; that would be amazing, but it would then fall in the de novo pathway category. So it's a really exciting area, watching how AI-based technologies will be coming out in the future, and liver disease is going to be right there. Right now, the majority of this is on CT scans for lung lesions, chest x-rays, those sorts of things. Now, how do we initiate a biomarker discovery program to develop an AI-based model in hepatology? I'm giving an example of a stool microbiome or metabolite-based biomarker discovery program that's underway at UCSD and that we're working on. I'm listing different types of biomarkers: diagnostic, prognostic, pharmacodynamic, pharmacokinetic, treatment response indicator, disease monitoring. All of these could then be utilized, but the study design would be different, and the type of patients that we would need would be different as well. So, if you want to diagnose NAFLD versus normal, that is an upfront diagnostic question of whether the patient has NAFLD or not. If you want to know whether a patient has NASH and whether they are going to develop cirrhosis or not, then you need patients who have NASH, who are being followed longitudinally, of whom 20 to 30% then progress to advanced fibrosis and others do not. How are they different?
Then you longitudinally sample their stool and, you know, look at the metabolome. We could also look at chemopreventive strategies; say we want to see if metformin would prevent future risk of HCC, or statins would prevent future risk of HCC or decompensation in cirrhotics. So we start with a patient who has early disease or early cirrhosis and no decompensation, no HCC, screen them first, and then start the chemopreventive strategy versus placebo, collect stool specimens, and see the changes that may be predictive of future outcomes. So I think these could then be linked to future outcomes, but you have to really plan the intent and the context of use of what you want to achieve in the end. It's really important to do that. Here are some key considerations. What is the exact clinical question you want to answer? You want to write it down. Many times it's vague, it's in the mind, but it's not articulated, and it's really important to articulate that. It should be as specific as possible: the more specific the aim, the greater the clarity, and the more likely you are to achieve it. It may be very clear, but what is novel about your clinical question? Is it linked to a biologically plausible and testable hypothesis? What are your assets in terms of the team? Do you have prior training? Can you do the bioinformatics yourself, or do you need other people to do it for you? What about your expertise in clinical phenotyping? What I find is that clinical phenotyping expertise is underestimated: a lot of people are focused on sequencing expertise and not really on clinical phenotyping. At the end of the day, the phenotype is king. So you want to really establish that you understand who this patient is, where they're going, and how we are going to define when they have reached that destination, and those should be really firm diagnoses. Institutional expertise is critical, as is putting together a team.
Sometimes this is cross-institutional as well: putting together a team that is complementary and synergistic and can help you get to the bottom of the questions that you want to explore. Collaborations, existing or new, are critical as well. Align the aim with the appropriate cohort; this is also very important. Which disease do you want to study, and, to be more specific, which stage of disease? Do you have access to such patients? How will I phenotype them? What is novel about what I'm going to do, as opposed to what others have done so far? Can I dig in deeper? What is the highest level of study design that is possible, irrespective of funding restrictions? These questions are really important, because many times we start cutting corners, and sometimes we don't have to. First write the ideal study plan, then figure out what's possible for you. Also, what is the most ethical study design? Will my patients be served well by the study? That is critical; for me, it is paramount before I design a study. Write down upfront the definitions of the disease that you're interested in and how it will be defined. You would think that this is a given, but it's not. What is NAFLD? How do you define it? By which criteria? Is it the biopsy? Is it the imaging, and is that ultrasound, CAP, or MRI-PDFF? Then, what are the key inclusion and exclusion criteria? Again, simple things, but writing them down in great detail really helps. Then ask your harshest critics: what do you think about this? What are some caveats that I may be missing? The context of cohort enrollment is most critical for a biomarker study: where is the patient coming from? Typically, most of the biomarker studies that I'm working on originate in our clinic. A patient sees a liver doctor, has a clinical indication for a biopsy, and I'm developing a biomarker to replace the biopsy.
It becomes simple. Prospective or retrospective study? Write down detailed inclusion and exclusion criteria linked to the clinical question of interest; excluding confounding diseases is important. Collect data on key effect mediators and covariates; metadata is important. Coexisting metabolic diseases: if you're interested in the microbiome, then you want to look at other diseases that may impact the microbiome. Concomitant medications: medications known to alter the microbiome, medications known to alter the disease state. Diet, alcohol (quantified), smoking, weight status, physical exam findings, fasting lipids, insulin, hemoglobin, pregnancy status, and liver disease-specific labs, being a liver doctor. All of these would be critical as you develop your study. I'm going to give you an example of the Goldman Consortium that Susan Sharpton and I, along with several investigators from all over the world, are now working on. It is a longitudinal study looking at long-term changes in MRI and MR elastography and linking those to outcomes. But that's not all we're looking at: we're also collecting serum, plasma, urine, stool, and DNA at baseline and every two years, and we're also doing FibroScan and CAP at baseline and every two years. Could the delta changes in these biomarkers predict worsening or improvement in survival, so that we can potentially replace the need for biopsy? Why am I telling you about this in this AI-based talk? The idea is that we're setting up this study now; you may be setting up another study. Are you thinking about integrating AI into your study, and how do you do that? Well, for example, this is the first international study in liver disease where we are storing raw MRI images.
We've done many studies in the past where we had data like PDFF values, but here we are actually keeping all the images, de-identified, because we want to develop future models that predict who is developing cirrhosis or future HCC using those images in our imaging biorepository. Collection of relevant integrated metadata is linked to that as well. We have several automated reads, and we have local reads to compare. We are setting up an image library for pathology slides, so we have H&E and trichrome slides; we can scan all of them and have them available for any AI-based assessment. This could be done quickly on all 300 patients that we have so far. Setting up collaborations upfront on genomic sequencing, metabolomics, and microbiome sequencing, and deciding where this data will be stored when it comes, is also needed. You can imagine that when you're setting up a big institute, you have a physical building; likewise, the infrastructure for these AI-based approaches needs to be planned upfront, and there should be investment there as well. A caveat with machine learning is that it learns from existing data and the variables collected, so there's a potential bias in that the model learns only from existing data. Garbage in, garbage out still holds true. Models can be biased as well. One example: Amazon had a talent recruitment AI model that was trying to pick the best people coming out of college. It was giving them results, but it was so biased that it was really picking men over women. So they tried to change the model and said, we won't look at gender now; it won't be put in the model. But the model then started keying on certain words in the CV, like "performed" or "executed", which were actually proxies for men, because men were more likely to use these words, and it started picking on those words.
So eventually they stopped using that model, and they're developing another one. The idea is that these models can be flawed. There are also issues in that the majority of genetic data, as you may know, is based on Caucasian populations, so we may have very limited data to draw inferences on genes affecting the disease state in African-Americans or other races. We need to be mindful of that. How do we reduce bias? Use more inclusive, more generalizable datasets; be mindful that the findings may not apply to all races and ethnicities; and be open to criticism. It's fine that you might be wrong, because that allows you to improve upon your current model. And does the AI study population match the population we intend to use the AI solution on? That is critical. AI-based applications are already here in research and in precision medicine; for example, undiagnosed liver disease assessment can be applied today. AI-based methods will be routinely utilized in clinical care in the coming years in radiology, pathology, and omics-based diagnostics, and new classes of drugs and novel treatments for liver disease may be discovered by harnessing the power of AI. With that, I thank you for your attention and look forward to the panel discussion. Thank you. Thank you so much for sticking around with us for the second part of this session. On behalf of myself and Dr. Rogal, I want to thank Dr. Noureddin, Dr. Qi, and Dr. Loomba for these excellent presentations, which have complemented and enhanced the presentations in the first part of this session. Now I turn it over to Dr. Rogal. There are a lot of questions that have come through, and audience, please send in as many questions as you want; if we are not able to get to them, you can engage with the speakers later through the TLMdX. Absolutely. Yeah. I just want to echo what Dr. Bajaj said. Thank you so much for your amazing talks. I definitely learned a lot. Dr.
Qi, I wanted to start with you and ask you a bit about some caveats or cautions that you would offer to people who are starting to do AI work with radiological data, and about any experiences that you've had with overfitting or bias in the models that you've worked with. Yeah. So now it's my turn for the first question. There was a question about radiomics features for liver disease, and also whether genomics or other molecular data is important for the pathology diagnosis. Regarding cirrhosis and portal hypertension, I'm not sure if molecular data is available in routine practice in most countries, but in Asian-Pacific countries, in China, we don't test molecular data for cirrhotic and portal hypertension patients. So I think it will not be a very common diagnostic approach in the near future. Also, HVPG or endoscopic measurements are already easily accessible and not very expensive. If we used a molecular test for this, I'm not sure of the cost, but it is definitely expensive, so for regular testing or routine screening, I think it's not cost-effective. Lastly, on radiogenomics, which may have been mentioned in the first part: this is a multi-omics approach combining radiomics and genomics, and it is available in cancer research. I know of large research efforts initiated by the National Cancer Center of China; they have a very large database combining radiomics, genomics, and clinical data to construct models. That is because, in cancer diagnosis, genomics testing is routine for some kinds of cancer, so there it is cost-effective and accessible to patients. That's my comment. Okay. So without further ado, I think there is a question for Dr. Noureddin. Thank you. Yeah. So Mazen, thank you for this wonderful talk. There are a lot of questions for you in this chat.
So could you first talk about, and this goes along with the "is AI a magic bullet" question: if a biopsy is too small for a pathologist to grade, do you think AI can magically grade it for you? Well, before I start answering any question, let me emphasize that I want to humble myself and mention again that I'm not a pathologist, and remind ourselves that the pathologists are the people who are training these machines. By no means are we taking any credit from them. For instance, I think the PathAI reading method all comes from Dr. Goodman, so whatever PathAI eventually produces is Dr. Goodman's reading. I also see an ongoing role for our pathologists to keep assessing these AI machines and improving them. For instance, for the model we just presented at this meeting, Dr. Goodman sat down and delineated the septa, nodule, and fibrosis areas, which was very important. So, just for disclosure, we're not taking anything away from them. To answer the question: I think sample size is as important for the AI machine as it is for the pathologist, so it's going to continue to be important. With time, the machine might overcome it as it learns the slides better, but at this initial stage it's going to be as important as it is for the pathologist. What about the myofibroblast question, adding that staining? Yeah, I mean... Right now, you're dealing with fibrosis or not, right? Yeah, I know. There's a question about correlating with HVPG, and that's a question also best directed to a pathologist. I do think, when we sat down, I showed you a poster about correlating with HVPG. It's a good idea, I think. As I told you, when we tried to improve the correlation with portal pressures, we went back to Garcia-Tsao's concept from 10 years ago, and Scott Friedman wrote an editorial with her.
They did it in hepatitis C and hepatitis B, and they looked at multiple features of the septa as well as the nodules, nodule size, number of nodules, and fibrosis area, which makes sense in the cirrhosis context. We don't usually report those; we report just bridging fibrosis or no bridging, but you want things like this, especially the septa thickness. It makes a lot of sense, and Dr. Sanyal presented earlier data from the tropifexor trial. So, in the work at this meeting, with the help of Dr. Goodman, we included multiple features; for the septa alone, we had multiple, multiple features. Still, a lot of things that would correlate with HVPG are not yet included, and the myofibroblast marker alpha-SMA is one of them. So, yes, there are a lot of features we're missing, and AI has a large role in including them, and that can be improved with time. Again, our pathologists play the role in quantifying them, drawing them, and including them in the AI. Thank you for that clarification. Rohit, thank you for that wonderful talk that put it all together. There have been many questions, both from this session and the session before, and you answered one of them anyway, but is there any granting agency or something? A pathologist asked this question, but it could easily be anyone in the audience who's starting out in AI: how do they get started as far as funding is concerned? Yeah, I think junior trainees can pretty much work with any of the currently funded PIs in their institution. For example, at VCU, they could work with you on cirrhosis and develop a prediction model with some preliminary data. I think the PIs at any of the digestive disease research centers or liver centers across the country would be able to support something like that; then just pick a disease, a phenotype, and correlate with it.
I just gave an example of an RFA that came out on abdominal cancers and AI-based diagnostic tools; that makes a lot of sense. There were RFAs on COVID diagnosis earlier, and those could be applied for as well. I think one good area would be HCC prediction, or liver tumor or liver lesion prediction; those would be really good. Another could even be rounding on the transplant unit or in the ICU: if somehow we could feed in the data well before rounds start at 8 o'clock, we could predict which patient needs to transfer to the ICU. Those things have been done in AKI and could potentially be done in liver disease, so I agree. And Jas, the work that you do on infections: I think predicting infections using all the data collected from patients on telemetry could be utilized as well. Really cool. We had a lot of questions. Sorry, Shari's going to ask the question. Oh, no, no, no. I was just going to ask a question from session one, so hopefully everyone's still here so they can hear their question answered. One question was: is it possible to use AI algorithms to generate practice guidelines and update online textbooks, for example, based on sourcing PubMed and Web of Science? There's such a high volume of new literature coming out; how could you use AI to improve something like UpToDate? Well, I think AI is good for repetitive tasks. It's not, I think, good at picking out the one odd thing, but you could build an algorithm so that if something has been reported 200 times, then we need to put that in. It could be that sometimes we are lazy and there are suddenly, you know, 20 publications in one domain, and they may not be from the top groups in that area, and they may be coming from one part of the world. Then maybe AI could be used so that these are brought to the meeting agenda to be discussed.
And this is an important thing that needs to be tackled. I think it could be used in that way. So, that sort of approach, I think, would be valuable, rather than developing guidelines purely on an AI-based approach, because you still need an expert viewpoint on whether something is relevant to our patients. Awesome. That makes perfect sense. One other question for Mazen, and I should say everyone was complimenting the speakers on excellent talks: what are your thoughts on the applicability of AI in pathology for clinical practice outside of clinical trials? Where do you see this first coming into our everyday practice? We would love to see it there, and not just histology, also algorithms. For instance, from this meeting, there were algorithms for predicting patients with NASH and F2 or higher. They have multiple variables, between seven and 16, so you cannot just go in and plug all of them in by hand, but they have high predictive value, and hopefully, in the future, machines will start spitting the results out. So, I think they eventually will be there. It's going to take time, though, because they need to be validated and FDA approved, and the steps that you saw in the presentation, such as correlating with liver outcomes, are part of that. So, it's going to be there, hopefully, one day, but we need further steps. I think this is very important, that we all keep in mind that garbage in is garbage out, which was the mantra of the first session as well. AI is not going to magically solve problems if you don't have enough samples, don't have enough data, don't ask the right question, and don't have enough resources to carry it through. It is only going to be as good as the patients, the bias, and the setting that you're in. So, I think these things are very, very important for us to realize. This is the first set of our workshops; the second workshop will be done next year.
And I really am very, very thankful to everyone for helping us understand, from the basics all the way through, if nothing else, how to start thinking about what AI is and, more importantly, what AI is not, so that we have realistic expectations of where we are going. Things can be as sophisticated as the radiomics that Dr. Qi brought in, as relevant and as clinically applicable without specialized technology as what Dr. Loomba and Dr. Bansal talked about, and as complicated as what Dr. Bhatt and Dr. Noureddin talked about. So, in the end, this workshop has put a lot of emphasis on what AI is, what it is not, and what it could be useful for. I'm incredibly thankful to the Clinical Research Committee for allowing me to co-chair this with my wonderful co-chair, Dr. Rogal, and I thank everyone else, the speakers as well as the audience members, for their enthusiastic participation. And I know the Twittersphere is going crazy, as viral as you can get. But Sherry, any last comments before we...

No, I just wanted to thank Jas and also all of the speakers. I learned a lot from all of your work. And just a reminder: if you want to connect with a presenter or watch online, just go into the TLMDX. We hope that you enjoyed this. It was nice to hear from everybody. Thank you. Thank you.
Video Summary
Radiomics involves using AI to analyze medical images such as CT, MRI, and PET scans, extracting high-dimensional data for improved disease diagnosis and management. It bridges imaging and personalized medicine, enhancing decision-making through machine learning. Collaborative efforts and large-scale data sharing help validate radiomics models for accurate predictions in conditions such as liver disease and diabetic retinopathy. AI applied to histology helps improve diagnostic precision and reduce inter-observer discrepancies. The future of AI in medical imaging is transformative, promising greater precision and efficiency in patient care by leveraging genetic, radiological, and clinical data. The discussion highlights the significance of AI in healthcare, emphasizing its potential to optimize diagnostic and treatment strategies, improve pathology practices, and shape clinical guidelines based on current research. It underscores the importance of understanding both the capabilities and the limitations of AI for its effective integration into medical practice.
Keywords
Radiomics
AI analysis
Medical images
CT scans
MRI scans
PET scans
Disease diagnosis
Machine learning
Personalized medicine
Histology
Diagnostic precision
Clinical guidelines