The Liver Meeting 2023
Public Health SIG - Applied Health Informatics in Action
Video Transcription
Good morning, everybody. I think we need to get started so that we have some time for discussion at the end. I want to welcome you to the symposium of the Public Health Special Interest Group. My name is Michael Fuchs; I'm the chair of that SIG, and I have Dr. Fix with me as my co-moderator. Dr. Fix is professor of medicine and the medical director of liver transplantation at UNC.

First, I want to give you a very brief introduction to our speakers. Our first speaker will be Ashley Spann. She's an assistant professor of medicine and director of clinical research informatics at Vanderbilt University in Nashville, with a very strong interest in informatics and electronic health record intervention design using machine learning, natural language processing, and big data analytics. Ashley's efforts aim to enhance equitable care for patients with chronic liver disease. Then we have Heather McCurdy, who is an advanced practice provider in the liver program at the VA Ann Arbor Healthcare System. For the last several years,
she has played a key role in the Hepatic Innovation Team, a VA-wide learning collaborative dedicated to improving cirrhosis care using lean methodology and process-improvement techniques. Heather has focused her efforts on clinical decision support tools, including reminders and dashboards. Our third speaker is Jeremy Louissaint, assistant professor of medicine in the Division of Digestive and Liver Diseases at the University of Texas Southwestern in Dallas. Jeremy's research interests focus on improving outpatient care of patients with cirrhosis by leveraging effective patient-provider communication, particularly in the digital space. The symposium will be rounded out by Chanda Ho, a consultant and transplant hepatologist in the Department of Gastroenterology and Hepatology at Singapore General Hospital. She holds a faculty appointment at Duke-NUS Medical School in Singapore, where she is the director of digital health transformation. Before moving to Singapore, she was the medical director for clinical systems innovations at California Pacific Medical Center in San Francisco. So you see, we have quite some experts in the field, and I hope for a stimulating symposium. With that, I want to ask Ashley to come to the podium and give the first talk.

Thank you, Michael. All right, so today I will be talking about EMR-embedded tools for the management of MASLD. These are my disclosures. Today we're going to talk about four things: some key informatics principles and how they pertain to the management of patients with liver disease; the five rights of MASLD, in relation to one of those informatics principles; expanding the toolbox of tools we have available to manage these patients; and then the next phase of how we can use the EHR a little better to improve care.

For the first part, we're going to talk about a few informatics principles. I like to start with the people-process-technology (PPT) framework. This involves thinking about the people who are involved in your decision support intervention. You have to understand the human element. You need to know who your stakeholders are, who your clinical champions are going to be, and who's going to be affected by this intervention: what are the major effects on those people, and what about the minor effects on the people around them? You need to involve these people early.
You need to involve them often. You also need to think about the processes currently in place so that the decision support intervention you create can be implemented successfully. You need to understand the current process and the desired future state for that process. You need to know how you're going to measure the success of your intervention, and how you're going to monitor it once you implement it. The last piece, which I think we probably spend too much time on at the expense of the others, is the technology: how can we automate a process? How can we automate our solution? Will the people involved actually use what we develop? Will it fit into the workflow? You can create the best tool out there, but if no one uses it, it's not going to be effective.

The next key informatics principle I want to talk about, particularly as it pertains to clinical decision support, is the five rights of clinical decision support. These are important in making sure that what you're creating will be an effective solution. You need to ensure that the right information is given to the right person, in the right intervention format, through the right channel, and at the right time in the workflow. We're going to step through each of these as it pertains to MASLD.

So, for the right information: how can we electronically define MASLD within the EHR? AASLD recently released this decision tree, and I've broken it up into three areas here. The first block on this diagram is: does the patient have elevated liver enzymes?
Specifically, that's an ALT over 33 for males and over 25 for females. This is structured data; we can track it within the electronic health record. An ALT value at Vanderbilt is the same as an ALT value at any other institution, so we can easily map to it and create rule-based logic to pull this out and identify who meets these criteria.

The next piece is a little more difficult: does the patient have hepatic steatosis, as identified by imaging or biopsy? This tends to fall into the unstructured-data realm, where the findings live inside pathology and radiology reports. As a structured data element, we may just see "radiology impression" when we look at the actual data, but what we need is what's inside that radiology impression to understand whether hepatic steatosis is present. We can use diagnostic codes for hepatic steatosis and MASLD to identify these patients, but that's insufficient. It's not enough. We need to figure out ways to target the specific documentation of hepatic steatosis within unstructured data elements.

The last piece: does the patient have features suggestive of cirrhosis? We can use diagnostic codes for that as well, but the features specifically can also fall into multimodal data. We've got diagnostic codes, and we've got information in radiology reports suggesting portal hypertension and cirrhosis, so we can think about using machine learning to give us a probability that someone has cirrhosis based on what we find in their clinical records. But we also need to understand that diagnostic codes are not enough, as we found in a study of almost 30,000 patients.
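The two rule types just described, a structured-data threshold on ALT and keyword detection of steatosis in unstructured report text, can be sketched as follows. This is a minimal illustration, not a production phenotyping tool: the sex-specific ALT cutoffs come from the decision tree above, while the phrase list and function names are assumptions.

```python
import re

# Structured-data rule: sex-specific ALT thresholds from the decision tree
# (>33 U/L for males, >25 U/L for females).
def alt_elevated(alt_u_per_l: float, sex: str) -> bool:
    threshold = 33 if sex.lower() == "male" else 25
    return alt_u_per_l > threshold

# Unstructured-data rule: naive keyword match over a radiology impression.
# The phrase list is illustrative; real pipelines map matches to structured
# concepts rather than relying on raw regexes.
STEATOSIS_RE = re.compile(
    r"hepatic\s+steatosis|fatty\s+liver|fatty\s+infiltration|steatotic\s+liver",
    re.IGNORECASE,
)

def impression_mentions_steatosis(impression: str) -> bool:
    return bool(STEATOSIS_RE.search(impression))
```

An ALT of 28 U/L would flag a female patient but not a male one, which is exactly why the rule has to carry sex-specific cutoffs rather than a single threshold.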
We have a tool that takes unstructured data elements within the electronic health record and maps them to structured codes. This particular case is a patient I saw who has steatotic liver disease, and you can see that he also has diabetes, hypertension, obesity, tobacco use disorder, and hepatic steatosis. That particular phrase maps to a finding in one of his radiology reports showing he has steatosis. So we were able to take that unstructured data element and map it to a structured element that we can then track longitudinally.

We did this over an entire year, 2021, in almost 30,000 patients seen in primary care, endocrinology, and general GI visits. What we found is that approximately one in three of these patients had radiographic steatosis based on the word cloud we searched for, so almost a third of the population in this setting. But we also found that only one in four of them had a documented diagnosis, despite having steatosis on imaging. So we know that diagnostic codes are going to be insufficient to detect this. And among patients with radiographic evidence of steatosis, those who were Black, those who were older, women, and those seen in endocrinology clinics were less likely to have a documented diagnosis. These are all patients who had this finding on imaging, but we're not capturing it with diagnostic codes. So we have to think about how this could perpetuate disparities if we rely solely on structured data elements like diagnostic codes.

Shifting now to the cardiometabolic criteria within the algorithm: each of these five elements is something we can think about mapping within the electronic health record. They all lend themselves to rule-based criteria, but
we have to think about how we define each of these elements. We need consensus on value sets for each of these components. The first is a little easy: BMI over 25, or an ethnicity-adjusted waist circumference. But when we start thinking about diabetes, how do we define that within the electronic health record? How do we define hypertension, or hyperlipidemia? How do we determine whether someone is actually on a lipid-lowering agent so they can be included? We have to think about how we define all of that. If we can come up with a consensus definition, we can then scale it across institutions. But there are limitations here. If it's not measured (we often don't measure waist circumference, and when we do, it may not land in a structured data element we can capture), that's an issue. And if it's not captured, think about all of your new referrals coming from external sites who have no data in your electronic health record to begin with; you won't have that information available to drive an intervention. So we have to think about that as well.

What about alcohol exposure and quantification? This diagram breaks down the percentage of alcohol in different beverages, but if we don't ask about it, we don't know. If we don't document it, we can't capture it. And if we don't make it routine practice to document this within the electronic health record, we can't scale it, and we won't be able to help differentiate between MetALD and MASLD. So this is another thing we need to think about.

How about delivering this to the right person? This is the second of the five rights of MASLD as I've described them. Our algorithm focuses specifically on primary care, or non-GI, non-hepatology care, and we can think about how we target populations. Dr. Fox and Dr. Brandman did this at UCSF, where they looked at a panel of primary care patients, screened the diabetic patients specifically using the electronic health record to find those who were not using alcohol and did not have viral hepatitis, and implemented an intervention within the electronic health record to prompt hepatology referral. In doing so, they were able to diagnose a significant portion of these patients with fatty liver disease and to identify a significant portion with advanced fibrosis. These are things we can do with the electronic health record at a population health level.

We can also do this at the provider level, at the time providers are seeing these patients. In a prospective ten-month before-and-after study we did at Vanderbilt, a small pilot with primary care providers, we created an interruptive alert that then opens a dynamic order set. What this looks like is: it shows you exactly what the patient's FIB-4 score is, it prompts you to open a dynamic order set with specific recommendations based on the patient's clinical characteristics, and it provides templated patient instructions so you can explain to the patient why you're doing all of this. Everything is pre-templated, so if you need to order MR elastography as we recommended, it's already linked to the appropriate diagnostic codes that can help get approval for that study. When we did this, we had over 500 encounters (263 before implementation, 240 after), representing a little over 300 patients, and we found that the tool increased appropriate actions from 30% before implementation to 46% after. A lot of these patients were missing labs, and that was where we saw the biggest improvement.

What about the right intervention format?
Not everything that is clinical decision support is an interruptive alert; the interruptive alert is probably the most common thing people see, but there are many other options. For example, we can create a report that looks for a specific item of interest. In this case, I created a report looking for patients who had an imaging-based elastography study, because I wanted to identify all of those patients; we can do that within our electronic health records. We can create rule-based logic to identify patients; here's an example of a different rule that looks at particular diagnoses in different settings, whether on the problem list or in an encounter with a billing diagnosis. We can use that to capture patients as well. We can create a dynamic order set, as I showed previously, and we can create the interruptive alert. But we can also provide inline decision support for providers: here we have an elastography result, and alongside the score it shows our recommendations for the different values. That's decision support as well.

We also need to think about whether we're delivering it through the right channel. Everything discussed so far has been provider-facing, but what about patient-facing tools? What can we do with a patient portal? To enter that avenue, I think we have to be really mindful and thoughtful about what our patients are able to understand and do. I joke about this, but my father works for the CDC in cybersecurity, and he cannot manage a patient portal.
I have to do that for him. So we need to be really aware of our patients' digital health literacy, which varies broadly, and we need a way to understand it. This is a tool created at Vanderbilt that helps us understand a patient's digital health literacy, so we can make sure that interventions delivered through a patient portal are targeted in a manner that can be successful for them.

We also have to think about delivering at the right time in the workflow. The presentation of the data matters, and we need to think about timing. I've mentioned interruptive alerts; the reason we use them so often is that we know they work when they are effective and created in situations where they are actionable. One study that looked at this found that interruptive alerts were 7.7 times more likely to be acted upon than passive ones, which is why we create so many of them, but it does create a lot of alert fatigue. Another study looked at four different ways to present a risk score (a hospital deterioration risk score) to inpatient nurses, to see whether the way the data was presented changed how people responded to it and how they made decisions. What they found is that when it was presented in a probability format (the one on the left), providers were much more likely to take similar actions across the group, but when it was presented in the other ways, there was a lot more discordance. So again, you can have the best model possible, but if you don't present the data in a way that's easily digestible and understandable, it may not be effective for reasons that have nothing to do with the model itself.

Now I'm going to shift briefly to talking about how we can expand the toolbox of decision support tools, and think about artificial intelligence in decision support. There are ways we can bring predictive models into our electronic health records and act on those results. They can be brought directly into the EHR and created as an actual structured data element, which can then be used as a trigger to prompt subsequent decision support. We can also think about the implementation of large language models, which have come out in this past year, and how they could be used for data summarization, especially for unstructured data elements. They're intuitive, so they could also be useful for interventions directed at patients. But there are a lot of caveats: several concerns and limitations about their use in clinical practice. Whether you can create the linkages to import them into your electronic health record can be highly dependent on institutional resources; you may not have the capabilities within your EHR to do this, and that has to be investigated. There is also a lot of concern, particularly with large language models, around health IT privacy, and we have to be careful with that. And with artificial intelligence and large language models alike, there is a non-negligible risk of propagating bias, and we need to make sure we are not using these models in ways that propagate those risks further.

For my final piece, I'm going to talk about the next phase of what we can do with the EHR in MASLD specifically. We need to start thinking about how we can better study these interventions longitudinally. We can use the EHR to facilitate randomized controlled trials.
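One building block of such EHR-facilitated trials is electronic randomization. A minimal permuted-block sketch follows; the block size, function name, and fixed seed are illustrative assumptions, not a description of any specific trial's randomization engine.

```python
import random

def permuted_block_assignments(n_patients: int, block_size: int = 4, seed: int = 42):
    """Generate 1:1 arm assignments in shuffled blocks so accrual stays balanced."""
    assert block_size % 2 == 0, "1:1 allocation needs an even block size"
    rng = random.Random(seed)  # fixed seed only so this sketch is reproducible
    assignments = []
    while len(assignments) < n_patients:
        # Each block holds equal numbers of both arms, then is shuffled.
        block = ["intervention"] * (block_size // 2) + ["usual care"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]
```

Because every completed block contains equal numbers of each arm, the group sizes can never drift apart by more than half a block, which matters if a trial is stopped or analyzed mid-accrual.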
It's something we can do fairly well. We can use it for cohort detection, using rule-based logic and value-set definitions to identify the patient population of interest. We can use it to screen inclusion and exclusion criteria. We can make sure that we're delivering our intervention only in the locations we want, and only to certain providers, whether that's residents and fellows, just attendings, or just nurse practitioners; we can do all of that within the electronic health record and create those strict criteria. We can also randomize studies electronically within the electronic health record, whether that's simple randomization or block randomization. If we want to monitor studies longitudinally, we can do that as well, with custom reporting to track accrued patients. We can create dashboards (you'll hear more about dashboards in a little bit) for tracking patient outcomes. And we can do longitudinal data collection: we can create data queries on the database behind the electronic health record, and these queries, particularly if you're on a similar or the same electronic health record, can be portable across institutions. This also helps minimize or eliminate the need for manual chart review, which I'm sure all of us love.

An example of how this has been effective is the recent ACORN trial published out of Vanderbilt, comparing cefepime versus piperacillin-tazobactam in adults hospitalized with acute infection. This was a pragmatic, open-label, parallel-group randomized trial of the safety of these two drugs. They screened patients exclusively within the electronic health record for suspected infection in the emergency room or the MICU, looking for an order initiated within 12 hours.
This was their trigger. They excluded patients who had a documented allergy, those who had previously received an anti-pseudomonal antibiotic, those who were incarcerated, and cases where the treating clinician determined that another drug was better. They were able to do all of this in the electronic health record. They even conducted randomization, without stratification, within the electronic health record, and all randomization assignments were concealed until the point of enrollment. All of this happened within the electronic health record, so this is very possible for us to do in our space in hepatology.

And in fact, we are actually going to be conducting an RCT of this for MASLD. The intervention I showed you previously will be conducted entirely within the electronic health record. We're going to identify our study cohort with diagnostic codes and findings of hepatic steatosis on imaging: patients who represent an increased risk and demonstrate a screening need. We're going to randomize them within the electronic health record in one-to-one fashion. Patients in the usual-care arm will receive no decision support (their providers won't see anything for those patients), while providers will see the alert for patients in the intervention arm. We're going to automate FIB-4 calculations for these patients, so we won't have to do any of this manually, and we'll monitor outcomes longitudinally, capturing what happens to these patients over time, all within the electronic health record. This is where I think we need to move: effectively evaluating clinical decision support in real time, and automating the way we do it so we can do it more efficiently and faster.

So, in conclusion: there are many tools in the toolbox to manage MASLD. We need more structured data, especially to better discern ALD from MASLD.
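The FIB-4 calculation that the trial automates uses four structured elements already in the record. The standard formula is FIB-4 = (age × AST) / (platelets [10⁹/L] × √ALT); a minimal sketch, with parameter names as assumptions:

```python
import math

def fib4(age_years: float, ast_u_per_l: float, alt_u_per_l: float,
         platelets_10e9_per_l: float) -> float:
    """Standard FIB-4 index: (age * AST) / (platelets * sqrt(ALT))."""
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * math.sqrt(alt_u_per_l))

# Example: 60 years old, AST 40 U/L, ALT 36 U/L, platelets 150 x 10^9/L
# -> (60 * 40) / (150 * 6) = 2.67
```

Since every input is a routine structured lab or demographic field, the score can be recomputed automatically whenever a new lab result lands, which is what makes it a natural trigger for the alert described earlier.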
And our EHR tools, when we create value sets and comparable items between sites, lend themselves to scalability across clinics, hospitals, and centers; we can do all of that if we create standardization around this. These tools are not one-size-fits-all: we need more rigorous study of them in various settings and intervention formats, and we can do all of that through the electronic health record. With that, I will close. Thank you very much. We'll have time for questions, hopefully, after all the talks are done.

The next talk is by Heather McCurdy, APP at the VA Ann Arbor Healthcare System, on making data smart: the utility of clinical dashboards.

Good morning. There it goes. Okay, our presentations are nicely following each other today, because I think you'll see a lot of the themes carry through. Today we'll talk about the utility of clinical dashboards. My name is Heather McCurdy, as you heard earlier. I've been at the VA healthcare system for twenty-plus years, and the entirety of my nursing career has been in liver clinic. I don't speak for the VA, but it's really all I can speak about.

I thought we'd start with a case study, just to ground us in the kind of patients we see in an outpatient liver clinic. This is a patient who had been discharged back to primary care after successful treatment of his hep C several years ago. His pre-treatment assessment included a FibroScan of 14.8 kilopascals, a platelet count at the lower end of normal, and an ultrasound at that time without focal liver lesions. Biannual ultrasound surveillance was recommended; however, that was only done periodically, and not at all in the last three years. No EGD or beta blocker use.
We won't necessarily come back to this case, but I wanted to give you a sense of the kind of patient we're looking for with dashboards.

A reminder that we've had quality measures for cirrhosis for several years now. We know what high-quality cirrhosis care looks like; we know how to define it. In fact, we often use these measures to proclaim how well we're doing, or perhaps how well we're not doing. But the question I pose is whether we use that information to prospectively improve the care we're offering our patients. There are many measures, 46 in total in this paper, 26 of them process measures. Lots of those are acute care measures, but I've selected a couple here that lend themselves to the outpatient setting. There are also 20 outcome measures, things like patient survival and hospitalization, and again a couple of them lend themselves to the outpatient setting and are nicely captured in dashboards.

There are many approaches to improving the delivery of cirrhosis care. Healthcare systems have used things like patient navigators, similar to cancer care navigators. During COVID we had a rapid increase in the uptake of virtual care, which I think still continues. Smartphone apps have been developed. The VA has used a comprehensive approach: you heard earlier about the Hepatic Innovation Team that I'm a part of. This was stood up during our really big push in the VA to treat hepatitis C infection, many years ago now, and it is a learning collaborative of teams all across the VA, across the country, taking care of liver disease. Once we successfully more or less eradicated hep C in the VA, we turned that infrastructure toward cirrhosis care several years ago. We continue as a learning collaborative to share best practices and commiserate over our common barriers, and we have educational sessions every month to hasten the diffusion of knowledge.
We use didactic sessions, journal clubs, reviews of practice guidelines, things like that. Within that, we had a dashboard development team with expertise in data analysis, population health management, and liver disease. We've developed a number of dashboards, including one for hepatitis B, where we can track labs and antivirals, and the same for hepatitis C. And lastly, we have an advanced liver disease, or cirrhosis, dashboard, and that's what we'll focus on today.

The dashboard allows users to find patients who might be slipping through the cracks and need to be linked back to care to get the recommended surveillance. We know there are several evidence-based practices that reduce morbidity and mortality; we'll focus on HCC surveillance, and on variceal surveillance and treatment for that smaller cohort of patients with a lower platelet count or a higher FibroScan. We know that only a third of patients in the U.S., across veteran and non-veteran populations alike, receive that guideline-concordant care. Because those evidence-based practices are not reaching the populations they're intended to help, we look to implementation strategies to enhance adoption of those practices and improve the outcomes of our patients. The implementation strategies we'll talk about work best in combination and when they fit the local context. So, to your point earlier, it is not one-size-fits-all; we have to tailor it to the strengths and barriers of the local facilities.

If you work in the VA in liver disease, you've been surveyed annually for many, many years (thank you, Dr. Rogal, back in the audience) about the use of different strategies, 72 strategies in all, trying to determine which strategies are associated with higher surveillance rates for HCC and variceal surveillance.
In fact, there were in-depth interviews at about 12 different facilities to dive deep into how they were using particular strategies and why those facilities were doing very well in terms of higher surveillance rates. It turns out there is a core subset of eight strategies and strategy combinations that were incorporated into cirrhosis care improvement. You can see right there, number one: working with the HIT collaborative. That's something we encourage across the VA. But I'm going to focus more on the data tools, using the advanced liver disease dashboard, and then we'll talk very briefly about the HCC clinical reminder.

Reminders and dashboards are not exactly the same. The clinical reminder is a point-of-care decision support tool embedded in the medical record that lets the provider seeing a patient at a particular time say: oh, this patient is due for X, let's go ahead and order it. A dashboard is a little different. You can use a patient-level view of the dashboard to get that same information, but you can also jump back to a 10,000-foot view and look at all the patients in a particular cohort, and you can slice that different ways: all patients with a high MELD, all patients with a particular primary care provider. You can use the dashboard to really dive into a particular cohort.

So we'll talk more about the dashboard as a data management tool. In the VA, we specifically use it to identify those who are overdue for HCC surveillance imaging or for variceal surveillance, whether endoscopy or beta blocker use. A pretty nifty thing about the dashboard is that we can export it into Excel, turn it into a table, and sort it by whatever data field we want; there are dozens of different fields we can sort by. It also allows us to remove patients who have been erroneously coded for cirrhosis, and we'll talk more about that later in terms of limitations. That's the patient-level view.
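The export-and-sort workflow described here can be sketched with pandas. The column names, the toy data, and the 183-day rendering of "six months" are all assumptions for illustration; the real dashboard exports to Excel rather than building a frame in memory.

```python
import pandas as pd

# Hypothetical patient-level extract; field names are assumptions.
cohort = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "days_since_hcc_imaging": [45, 400, 210],
    "on_beta_blocker": [True, False, False],
})

# Flag patients with no HCC surveillance imaging in the last six months
# (modeled as 183 days) and sort the worklist so the most overdue come first.
cohort["hcc_overdue"] = cohort["days_since_hcc_imaging"] > 183
worklist = cohort[cohort["hcc_overdue"]].sort_values(
    "days_since_hcc_imaging", ascending=False
)
```

Sorting the flagged subset rather than the whole cohort mirrors how the dashboard is actually used: start with the patients most likely to be slipping through the cracks and work downward.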
We have summary data that rolls up: how are we doing on these surveillance measures at a particular facility? How are we doing compared to facilities in our region? How are we doing compared to all the VAs in the country; what is our national surveillance data? We have all of that. We capture that data every month, collate it every quarter, and put it into snapshots so we can really look at how we're doing on these quality measures.

Getting into the nitty-gritty: who's included in this cohort? It seems like it should be so easy: everybody with cirrhosis. It turns out it's not that easy, and we spent a lot of time trying to define the cohort. This is what it looks like now; it's been enhanced over the years. We're looking for two ICD-9 or ICD-10 codes, going back to, I think, 1999, from any VA. The VA is big (we have 130 to 140 facilities across the country), so you may be seen in Florida for 10 years and then find yourself in Virginia, and those codes will follow you on the dashboard. We need a combination of two codes, from inpatient or outpatient encounters, or the addition of a code to the active problem list. We found certain things we had to account for. Ascites is not included as a stand-alone code; it must be combined with another cirrhosis code. There is potential for erroneous inclusion: people who are coded for cirrhosis when in fact they have something else, things like that. We've also found inadvertent mistakes. We have a couple of examples where a provider was coding their patients with essential hypertension for portal hypertension; we wondered, why does this particular provider have so many patients with cirrhosis? Oops, it turns out it was just a coding issue, and we actually found that through review of the dashboard.

So this is a mock-up; I apologize if it's a little blurry.
This is not a real patient, or most of it is not a real patient, but this is what it looks like at the patient-level view. When you're opening the dashboard in the VA, this is what you see. A lot of demographic information on the left, and then some free-text fields on the bottom left. The upper right is the two metrics we're looking at: HCC surveillance imaging, and then either EGD or beta blocker usage, with some pertinent labs in the lower right. As we teach users to look at the dashboard, we always encourage people to say, before you do any deep dive, make sure you trust that this patient really has cirrhosis. And you might wonder, well, how do we find that? Well, it turns out we actually have a hyperlink right to all the cirrhosis codes, so we encourage that this is where you start. So if I go to that, I can pull up a list of every single time that a code was used in this patient's chart, and you can see different sources, meaning outpatient visit, inpatient visit, problem-list addition, and different iterations of the cirrhosis code. It's not all the same code. You can see that it's been used a couple of different times. For location, you can see there are a couple of times the patient was on an inpatient service, a couple of times the patient was getting a FibroScan. I see, actually, this must be Dr. Sue's patient, because I can see an outpatient visit, and all the different dates, and so, in fact, because it's Dr. Sue's patient, I've got pretty good confidence that this is a patient who truly has cirrhosis. However, if you're looking at the dashboard for the first time, it's not necessarily a patient you know. Perhaps you pull up this list, and there are only two codes, and they're 15 years old. That maybe gives you a little bit of pause, that maybe that was over-coded or miscoded. Well, the nice thing is that you've got a date and a location.
You can go exactly to that point in the medical record and really dive in. Why did this provider think that this person had cirrhosis? Why did they code it that way? And it allows the user to have some trust and some confidence in the dashboard, or to determine that perhaps that patient needs to be re-evaluated: let's determine if that diagnosis code is really correct. That upper right-hand corner, which includes the measures we're really looking at, has had some enhancements based on user feedback. So it's color-coded. It's nice, you can see here, it's red, which shows this is overdue, though overdue is maybe not the right word; not done in the last six months is really what it means. And it turns yellow if it's somebody who is up to date but coming due in the next 60 days, so that allows the user to jump right in and make sure they have an order. This, you can see here, is a very recent addition where the dashboard now links to our radiology scheduling package, so it will go looking for an order, and it will even go looking for an appointment. So while this patient is overdue, I can see he has an upcoming appointment, so I don't have to worry about calling him to make sure that he calls radiology to get that ultrasound scheduled. You can see the bottom line here: this patient, if I'm not mistaken, is green. He's up to date with his endoscopy surveillance. He actually had an endoscopy in 2018, and he happens to have a current prescription for carvedilol. So he's up to date. I don't have to spend a lot of time on this patient. I can move right on to somebody who's all red and doesn't have any upcoming appointments, and I can really spend my time there. Another nice thing about this dashboard is that it really allows us to categorize this patient. We can put this in a bucket. We can say, why is this patient not getting their surveillance? Sometimes there are perfectly legitimate reasons.
We might not like it, but we might have patient refusal. We might have a patient who we simply can't contact, or a frequent no-show. We spend a lot of time on those kinds of patients, and we don't want to keep going over them over and over again, so we can put them in particular categories. Perhaps there's a reason that surveillance is not clinically indicated. A provider has indicated that now is not a good time, or that this person doesn't need HCC surveillance for whatever reason, maybe an advanced medical comorbidity or limited life expectancy. Perhaps a patient is enrolled in hospice for metastatic lung cancer, and we've decided to forgo HCC surveillance. The dashboard user can categorize patients this way and set them aside, and focus on the patients that we can link back to care. And then perhaps at some future time, say, you know what, I've got a couple of hours. I'm going to go into the dashboard and pull up the patients that I said had limited life expectancy. Here we are six, eight, nine months later. Let's see if they're still alive. Let's see if imaging is in fact indicated now, or if they are still in that category. Scheduling and radiology capacity is really something that we pay attention to, and sometimes we have to take a systems redesign approach. And this is really where working with your colleagues instead of against them is a good idea. We have certainly found in the VA that the facilities with the greatest improvement in HCC surveillance rates were those that used these data tools, dashboards, and clinical reminders. So we thought we'd share different ways that we use dashboards. Certainly when you're first using dashboards, you spend an awful lot of time identifying patients who are overdue for a particular thing. And for some reason, searching for patients who are overdue for surveillance imaging is easier than finding those patients who are overdue for endoscopy or beta blocker use.
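The color-coding described a moment ago amounts to a small date comparison. A minimal sketch, assuming the 180-day surveillance interval and 60-day warning window mentioned in the talk (the production dashboard's exact rules may differ):

```python
from datetime import date, timedelta
from typing import Optional

# Intervals mirror the talk's description; treat them as assumptions.
DUE_INTERVAL = timedelta(days=180)  # "six-month" HCC surveillance interval
WARN_WINDOW = timedelta(days=60)    # yellow warning window before due date

def surveillance_status(last_imaging: Optional[date], today: date) -> str:
    """Red: no imaging in the last six months. Yellow: up to date but
    coming due within 60 days. Green: up to date."""
    if last_imaging is None or today - last_imaging > DUE_INTERVAL:
        return "red"     # "not done in the last six months"
    if (last_imaging + DUE_INTERVAL) - today <= WARN_WINDOW:
        return "yellow"  # prompt the user to make sure an order exists
    return "green"       # no action needed yet
```

The point of the yellow state is exactly what the speaker describes: it gives the reviewer a head start to place the order before the patient actually falls overdue.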
So we spend an awful lot of time on this first column, and certainly there's nothing wrong with that. But what I pose is that there are actually ways to use this in a broader sense to find patients that we need to link and bring into care, perhaps for the first time. Certainly we can find patients who have a previous GI or liver clinic appointment but maybe haven't been seen in a year. We can identify that in our dashboard. So we can go search and just print out a list. Let's go find those patients. Let's bring them back into care. We can find patients who are on a beta blocker, but perhaps their prescriptions have expired. Let's go make sure that they're taking those prescriptions. One of the things we're trying to focus on in this coming year is building a list of patients who have a MELD score greater than 15. We heard from those quality measures that these are patients who should be evaluated for transplant, but are we in fact reviewing them, evaluating them, sending them to our transplant centers? In the VA, we're not. And so perhaps we can develop a workflow where we just pull up our dashboard, sort patients by MELD score, double-check that they're not on warfarin (that's a data field in our dashboard), and make sure that those patients with a MELD score of 15 or higher are at least being evaluated in our liver clinic so that we can document whether transplant evaluation is appropriate. Lastly, as we build relationships with our primary care colleagues, we can create a list of their patients. Their gestalt is to say, oh, too many patients, I don't have time for this. But perhaps you can show them that of their panel of 1,800 patients, 25 have cirrhosis, and maybe we could actually try to make sure that they are up to date with their HCC surveillance, make sure their labs are up to date, or get them into liver clinic if they need to be there. We can do that with our dashboard.
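The MELD worklist workflow described above (sort by MELD, screen out patients on warfarin whose INR inflates the score, review the rest) might look something like this; the record fields are hypothetical stand-ins for the dashboard's data fields:

```python
# Sketch of the transplant-review worklist described in the talk.
# Patient records are plain dicts with hypothetical field names.
def transplant_review_worklist(patients):
    """Keep patients with MELD >= 15 who are not on warfarin (warfarin
    raises INR and therefore artificially inflates MELD), sorted with
    the highest MELD first for review."""
    eligible = [p for p in patients
                if p["meld"] >= 15 and not p["on_warfarin"]]
    return sorted(eligible, key=lambda p: p["meld"], reverse=True)
```

In practice this is just an Excel sort and filter on the exported dashboard, but expressing it as code makes the two screening criteria explicit.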
We can say whichever primary care team, down to the individual provider, and we can provide those lists. So there are lots of really interesting ways to use the dashboard, not just at the patient level, but taking a broader view and looking at the population as a whole. I thought we would share a couple of actual patients in terms of what we consider close saves: patients who were found on the dashboard and may not have been found otherwise. So these are real patients. This is a veteran who was identified as being due for his HCC surveillance imaging. The user looked at the health record, and it turned out that the last time that guy had an ultrasound was in 2020. He had reportedly been getting care in the community, but nothing had been scanned into our record, so it was unclear whether that was still happening. So the dashboard reviewer contacted the patient at home and said, hey, what's going on? Are you still being seen in the community? Turns out no. So we offered up an ultrasound and an AFP. That ultrasound was fine, but look at that AFP, it was elevated. So we went ahead and ordered the triple-phase CT and got him back into clinic. But he had been out there, you know, and everybody thought he was being taken care of in the community, and in fact he wasn't. Here's another one, a little bit similar to our opening case. We had a 69-year-old male who eradicated his hep C and was discharged back to clinic. His HCC surveillance was up to date, but lo and behold, he shows up on the dashboard because his platelet count is low. So we were able to contact primary care and say, hey, we maybe need to see that patient in our clinic to evaluate for clinically significant portal hypertension. What I have found is that while we tend to tell our primary care colleagues about HCC imaging and the need to do that every six months, I think we're much less good at telling them about the need to pay attention to portal hypertension.
And lastly, this is a patient of mine, actually, a 54-year-old who had never been seen in our clinic, identified on the dashboard due to a low platelet count. We alerted primary care and said, hey, what do you think? Can you bring him to the liver clinic? He's out here, and I think he should come into clinic. He came into clinic, established care, and had that first ultrasound. Turns out it was abnormal. Got him an MRI. Turns out he had a solitary HCC. Got him ablated. Then he developed refractory ascites requiring serial paracenteses and is undergoing transplant evaluation. In fact, the day before I left for the liver meeting, we got him placed on the transplant list at Pittsburgh. So these are real patients, patients who had just been out there, sort of unknown to liver clinic, found specifically because somebody had been reviewing the dashboard. There are limitations. Of course, we talked about this before: the coding remains the bane of our existence. It's a garbage-in, garbage-out situation. Sometimes when you're in clinic, you click those codes on the checkboxes and just click them on, not really thinking about the downstream effects of what that code does. And for us, it turns on all kinds of things. It turns on alerts, it turns on reminders, and it includes people in the dashboard. It is incredibly time-intensive. When you first review our dashboard, you might have 600, 800, 1,200, 1,800 patients in the cohort. That's a lot of patients to go through: number one, to make sure they really have cirrhosis; number two, to try to get them all back into care. And so a lot of our VA facilities are using nurses for this, but it's a little bit outside of a nurse's scope to determine if somebody actually has cirrhosis. So it's important that the workflow ensures there's a provider on the other end to double-check or to recommend that that patient come into clinic for an evaluation.
Certainly the thing that we hear over and over and over: nobody has time for this. Nobody has time to review dashboards. We're seeing patients in clinic. When am I possibly supposed to do this? And I don't disagree. At the VA, we try to get people to spend two to four hours a week at the most. We're not asking you to spend 40 hours a week, but sometimes to just chip away at it. What I pose, though, is that you can chip away at it if you're looking at the individual patient, but you really need to dedicate some time if you want to find entire cohorts of patients who need to be linked back to care. Certainly primary care resistance, we've talked about that. Reminder fatigue and alert fatigue are real things. Radiology and endoscopy capacity is also a real thing. I don't want to find a hundred patients who need an ultrasound and find out radiology can't accommodate them. So it's important here to really engage with your colleagues. Patient factors: certainly we have patients who simply refuse, and there's not a whole lot we can do about that. And lastly, dashboards really don't do nuance well. It's very much a black-and-white situation. Especially around clinically significant portal hypertension, whether somebody needs an EGD, what the timing of that EGD is, whether they can have a beta blocker, the dashboard isn't really good at teasing that out. So it is important to have subject matter experts. I did want to end on a positive note, sort of a plug for the VA. Of course we know that there are established measures for cirrhosis quality of care. The VA officially selected two of those measures and set national targets, national benchmarks, for what we consider successful. We have a dashboard that's built around those two metrics, and we can follow them. We track that data. We track it for individual facilities. We share that data with leadership. We offer services. How can we help?
How can we work through the barriers that you're having? And we have clinical decision support tools for each one of those measures. So boom, boom, boom. We track it. We measure it. We alert the providers, and we're doing really well. And so if you're able, that's the plug for the VA: make sure your patients are seen at a VA. Thank you. Thank you. We'll go on to the next talk, which is entitled Decision Support Tools to Reduce Hospital Admissions for Complications of Liver Cirrhosis, and it's given by Jeremy Louissaint. So first, thank you for the opportunity to be here. It was a pretty exciting talk to put together. I think a lot of people in the audience have actually published some of this work, so it's great to have you guys here. So my talk today is going to be on decision support tools to reduce hospital admissions for complications of cirrhosis. I have no relevant disclosures. So first, just to orient all of us, I think we have all experienced that our patients with cirrhosis have a very high burden of readmissions. Many studies are out there, but the readmission rate is as high as 53% at 90 days for our patients with cirrhosis. While many of these hospitalizations are not preventable, there is a significant subset of hospitalizations that can be prevented through guideline-directed management. The top reasons for unplanned readmissions include hepatic encephalopathy, azotemia, metabolic derangement, infection, and gastrointestinal bleeding. So you can imagine your patient leaves the hospital, they encounter several pitfalls on the way to clinic, and they may actually fall through the cracks, experience a readmission, and then repeat the cycle over and over again. So our patients definitely need our help, and I hope this talk can really cover ways that we can use clinical decision support tools to assist them.
And when it comes to reviewing these top reasons for unplanned readmissions, it's important to look at what is causing some of these failures. There are failures in medication use and misuse. There is inappropriate linkage to care post-discharge: follow-up appointments and follow-up procedures are often missed, and sometimes we lack linkage to appropriate consultations, whether inpatient GI consultations or outpatient social worker visits. These can all result in unplanned readmissions for our patients with cirrhosis. So before I jump into things, I think we should quickly review what a clinical decision support tool is. The Office of the National Coordinator for Health Information Technology defines CDS as something that provides clinicians, staff, patients, and other individuals with knowledge and person-specific information at appropriate times to enhance health and healthcare. Ultimately, these tools are meant to combine knowledge and data to generate and present helpful information as care is being delivered in real time. So as you can see, the main focus is knowledge and information provided to patients, providers, or staff at the right time so that it can facilitate appropriate management. CDS tools can exist in many different forms. They can be alerts, reminders, guidelines, order sets, focused data reports, documentation templates, diagnostic support tools, and much more. The definition is pretty broad; therefore the tools are also broad. As Ashley has already talked about, there are five rights that should be considered in the development of any CDS tool: delivering the right information at the right time in clinical care via the right channels in the right format to the right stakeholders. If you can achieve those things, then you can often achieve a successful CDS implementation.
If you have any pitfalls in these key rights, your CDS may be unsuccessful for the goal that you have set forth. All right, so let's jump into some really great studies that have looked at CDS use in cirrhosis for the purpose of preventing hospitalizations. Again, I'm trying to highlight what type of CDS tool each study used. So the first tool was meant to reduce all-cause readmissions in cirrhosis, and it involved clinical checklists and order sets. In this study by Tapper et al., the authors sought to modify the risk of rehospitalization through the use of checklists and order sets, and this was conducted in three phases. You had the control phase, or pre-implementation phase. You had the handheld checklist phase. This handheld checklist was basically a document given to inpatient providers treating patients with cirrhosis, and it was really meant to provide guideline-directed instructions for treating common cirrhosis complications, things like hepatic encephalopathy and spontaneous bacterial peritonitis. In the electronic phase, this handheld checklist was incorporated into the electronic health record: into order sets for admissions and also into documentation templates. So now people admitting patients with cirrhosis had access to the order set with the checklist, as well as preset templates in their notes that could guide them through guideline-directed care for cirrhosis complications. What they found was that the rate of 30-day readmissions was significantly reduced after the electronic phase was put into motion. Prior, the risk of readmission was 37.9%, which dropped to 26.6% after the electronic phase intervention. Specifically, when looking at patients with hepatic encephalopathy, there was a 40% reduction in the odds of readmission in the electronic phase compared to the pre-implementation, or control, phase.
A deeper dive revealed that a lot of this was attributed to increased rifaximin prescription for hepatic encephalopathy. As we all know, rifaximin is recommended for the prevention of recurrent hepatic encephalopathy. So what they found was that rifaximin use was independently associated with decreased odds of readmission for cirrhosis. So a simple intervention had a pretty big impact on patient care. We took this further. In the study that we conducted, we recognized that rifaximin works to reduce the risk of readmissions in patients with hepatic encephalopathy, but we also recognized that it's frequently not prescribed, and even when we do prescribe it, patients often have difficulty obtaining rifaximin as outpatients. So we performed a study that aimed to solve these issues through the deployment of a best practice alert. We sort of called the alerts interruptive, but I wanted to amend that and call them supportive. These alerts were delivered at two distinct times during the patient's admission. The first was at the time when you'd open a chart and the patient was ordered for lactulose but not for rifaximin. The alert would pop up and provide information, education essentially, that rifaximin is effective at preventing readmissions, and that you should consider it if appropriate. The second alert was mainly tied to discharge planning: if a patient was ordered for rifaximin at the time of discharge, this alert would pop up. Both alerts contained the contact number for the transitional care pharmacy team so that providers could reach out early during the admission, talk with the pharmacist, and see how we could best facilitate the patient receiving rifaximin as an outpatient, whether that's getting started early on the prior authorizations or figuring out the cost for the patient before they go home.
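The two trigger conditions just described reduce to simple rules over the active order list. A minimal sketch, with hypothetical order names and alert text paraphrased from the talk (real best-practice alerts fire on EHR events like chart open and discharge planning, which a plain function only approximates):

```python
def rifaximin_alerts(active_orders, at_discharge=False):
    """Sketch of the two alert triggers described: one fires when
    lactulose is ordered without rifaximin, the other fires at discharge
    when rifaximin is ordered, to loop in transitional care pharmacy."""
    alerts = []
    if "lactulose" in active_orders and "rifaximin" not in active_orders:
        alerts.append("Rifaximin prevents HE readmissions; "
                      "consider ordering if appropriate.")
    if at_discharge and "rifaximin" in active_orders:
        alerts.append("Contact transitional care pharmacy to arrange "
                      "outpatient rifaximin access.")
    return alerts
```

Note the two rules are complementary by design: the first closes the prescribing gap, the second closes the access gap once the prescription exists.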
What we found was that for patients with hepatic encephalopathy, there was a 37% decreased risk of readmission during the follow-up period. So again, you can take these simple measures and really squeeze out a lot of benefit for our patients. Next, I want to switch back to reducing the risk of all-cause readmissions. In this study by Burke et al., they developed a guideline-based order set to improve inpatient cirrhosis care. They targeted distinct gaps in quality measures such as timely paracentesis, ascites management, management of upper GI bleeding in cirrhosis, and hepatic encephalopathy, and I think really importantly, they focused on facilitating social work consultation for patients with substance use disorder. So the creators had a pre-implementation phase and a pilot phase, where they created these order sets incorporating these quality metrics and encouraged providers to use them. In the implementation phase, they went a step further and added an alert system. The alert would pop up for any patient who had ICD codes concerning for cirrhosis, directing providers to the order set for usage. What they found was that although there was not a significant decrease in the risk of 30-day readmissions, there were several key performance improvements. The hospital length of stay was significantly reduced, rates of early paracentesis increased, patients were more commonly on a low-salt diet, and I think really importantly, there was high linkage to social work consultation for patients with substance use disorders. I think that's very valuable in terms of helping patients who may be struggling with alcohol use disorder get the right linkage to outpatient care, to hopefully lead to abstinence and prevent readmissions.
In this really cool study by Moon et al., they focused on post-variceal-bleeding surveillance, and the study used clinical order sets to help facilitate that goal. As we all know, patients who experience a variceal hemorrhage have a high risk of recurrent variceal hemorrhage, and we manage that with medications as well as timely endoscopic surveillance. Recognizing that at their local facility the rate of timely surveillance EGD was low, this team decided to do two things. Weekly, they created two additional endoscopy slots specifically reserved for patients after in-hospital variceal hemorrhage, and they created an order set that focused on three key components. The first was to highlight a focus on urgency. As you can see in this order set, there are many areas where they really want people to understand this needs to happen fast: two weeks, or two to four weeks. That's actually in three parts of this order set. The next part reminds the schedulers how to facilitate timely endoscopic surveillance. You can see in the text, I apologize it's a little bit small, but it highlights, you know, doctors X, Y, and Z on X, Y, and Z days use these slots for patients who need urgent outpatient endoscopy. And then I really love the idea of this closed-loop communication. If scheduling is successful, let someone know. If you have difficulty scheduling this patient for an outpatient surveillance endoscopy, then reach out to the ordering provider so that he or she can start to facilitate a workaround to get this patient scheduled early. So vast improvements were seen, with timely surveillance endoscopy increasing to 43 percent from 16 percent pre-intervention. Now we can switch to ascites management, and here we see the CDS tool used as a focused data report. So this study was for patients with ascites.
These enrolled patients were given a Bluetooth-enabled scale that transmitted their weights to a smartphone application and then subsequently to the electronic health record at that hospital. Focused data reports of the patients' weight trends were generated and reviewed by staff members for any significant changes, either increases or decreases in weight. Any significant changes were relayed to the patients' providers for their input on any actions that needed to be taken. What the authors found was that 57 percent of alerts relayed to providers led to the provider taking an action, so actually making an intervention. These interventions included scheduling a clinic appointment, scheduling a paracentesis, lab work, and medication adjustments, as you can see. Although they did not look at the rate of readmissions after this intervention, I think we can all imagine that timely monitoring of patients with ascites will likely be effective for preventing readmissions. Just to comment, I think a way to take this even further is to automate this whole process. They used focused data reports that were reviewed by staff members, but as you can imagine from Ashley's talk, these things can easily be integrated into the EHR and delivered to providers automatically. So we've talked a lot about clinical decision support tools from a provider perspective, but you can also have clinical decision support tools that are patient-facing. And the key study showing the potential of patient-facing clinical decision support tools is the evaluation of the PatientBuddy app. In this study, patients and their caregivers were given a tablet or smartphone pre-installed with the PatientBuddy app. The goal of the PatientBuddy app was to really focus on cirrhosis monitoring and self-care.
So key parts of the app included medication adherence and dietary adherence, and most importantly, it included cognition assessments. These assessments included the caregiver giving the patient orientation exams, having the patient complete the EncephalApp Stroop test, and also documenting bowel movements achieved on a daily basis. What the study showed was that there was potentially avoidance of unplanned admissions for hepatic encephalopathy. Most of the alerts that occurred were for changes in mentation that were relayed to the staff, either through the app directly or by patients being prompted to call the center due to abnormal entries. Because of the actions taken by the staff members, whether calling the patient, scheduling an appointment, or titrating medications, they felt that eight patients out of 40 likely avoided a readmission for hepatic encephalopathy. So I think that's a huge win for HE management, and again, it empowers patients and uses a clinical decision support tool to do so. The last thing that I'll highlight is a recent study in the alcohol use disorder sphere. In terms of the CDS, they used a lot of knowledge and data to link patients to appropriate alcohol use disorder care. So in this study by Mellinger et al., patients with alcohol-related liver disease and access to a smartphone were randomized to either receive an AUD-focused smartphone application or not. In this application, patients first completed a knowledge module to assess for misconceptions regarding alcohol use. The focus here was really trying to give patients the correct information about alcohol use disorder and the harms of different drinking patterns. The next part I thought was even more exceptional: patients would then complete a 17-item questionnaire to assess their treatment preferences.
Preferences such as, for alcohol use disorder treatment, would you prefer one-on-one counseling or group-based therapy? Do you prefer the involvement of your family or not? Do you prefer groups led by people with certain expertise, or does that not matter to you? As patients complete these preference-related questions, the output is three different types of AUD treatment models for them. So here you have a patient entering information, and the output from this algorithm gives the patient back information to help them make a decision about their AUD treatment. So I thought this was a pretty fantastic way to empower patients to seek appropriate care. What they found was that the retention rate was 65 percent at six months, which may seem modest at first, but to anybody who has worked with smartphone applications and patients dropping out of these kinds of studies, that's actually excellent, in my opinion. They also found that patients engaged with the app had 2.3 times the odds of seeking and engaging in AUD treatment, and although it was not significant, there was certainly a trend toward a decrease in WHO drinking levels in the patients who engaged with the app versus those who did not. So as has been talked about before, clinical decision support tools are fantastic. However, they are not perfect, and I think we need to be mindful of several different things. As talked about before, alert fatigue is very real, and I always say it can lead to alert anger. So we should remember that we are all human; we cannot handle a thousand alerts a day. We should be mindful of how we target the use of these CDS tools in clinical care. Next, I think we should really start focusing on cost analyses.
I think that would help us gain more support from institutions and other stakeholders when we can show that these simple interventions lead to real cost savings for patients and hospital systems. The other point is that we can't create these things and forget about them. Guidelines change. We need to update these tools to reflect the changing landscape of cirrhosis management. We should also focus on interoperability, and I always say sharing is caring. So for a great tool developed at Vanderbilt, we should work on efforts to expand it out to other centers. Creating an ascites management tool for one center is great, but think about if we could expand it to a thousand hospitals; the benefit gets even larger. We should also focus on digital literacy. I won't go too much into that because Ashley has already talked about it, but we should be mindful that not all of our patients are tech-savvy. So we should really focus on human-centeredness in how we make these applications or tools and how we deliver them. And lastly, disruptions of workflow. We can't have a provider going through to admit a patient and getting an alert to do something, then another alert to do something else. That leads to fragmented workflows, and fragmented workflows can inadvertently lead to errors. So in closing, CDS tools are effective in reducing readmissions in cirrhosis. They are most effective when they emerge from in-depth analyses of what's causing readmissions in the first place, what's preventable, what's intervenable, and how. Pitfalls do exist, and again, we should really strive to expand these tools to other sites to really use our collective resources to improve cirrhosis care. I just want to highlight this article that was recently published in Hepatology; it's an excellent overview of clinical decision support tools in chronic liver disease.
So with that, thank you. Thank you very much. So I'll bring up the last speaker for this session. It's Chanda Ho. She's going to be talking about exploring the clinical informatics career in hepatology. Chanda and I have known each other for a long time and have recently bonded over our mutual interest in hepatology informatics, and we're both board certified in both transplant hepatology and clinical informatics. Potentially the first two, maybe the only two, with those designations, and really it was Chanda whose idea spurred this symposium. So looking forward to your talk. Thanks. Thanks, Oren, for the kind introduction, and thank you to the SIG for the invitation to speak today on exploring a career in clinical informatics. As Dr. Fuchs mentioned, I'm currently in Singapore by way of San Francisco, St. Louis, Memphis, and Boston. So the joke is that I keep moving westward in my career, but today I wanted to share my experience in navigating a career in clinical informatics and hepatology. These are my disclosures. So I wanted to start off first with the definition of clinical informatics as defined by the American Medical Informatics Association, referred to as AMIA, as there are a lot of assumptions and misconceptions as to what clinical informatics means. Simply put, it is the science of how to use data, information, and knowledge to improve human health and the delivery of health care services. So with that definition, clinical informatics is not synonymous with AI and big data. AI and big data for sure are used in clinical informatics, but more so as tools and methodologies. So my goal today is to share what it means to be a clinical informaticist and to think about how this can be applied to a career in hepatology. So I'll attempt to answer the question of how do I become an informaticist and to outline some career examples as well as some next steps. Clinical informatics is still quite a nascent field.
It was first recognized as a subspecialty in 2011, with the first board certifications granted in 2013. AMIA has played a historical and pivotal role in getting CI recognized as a subspecialty. It's currently recognized as a subspecialty under the American Board of Medical Specialties, and it's housed under the American Board of Preventive Medicine. Those who practice informatics are referred to as informaticists or informaticians. So again, clinical informatics is not just about the data; it's truly the intersection of data, people, and the system. How do we use, leverage, and process the data that we have and apply it to how we care for patients and to the clinical system that we have? Ultimately, the overall goal is to advance individual and population health. In the following slide I'll go more into the domains of clinical informatics. So clinical informatics curricula are often broken down into four main domains. One, fundamentals. Two, clinical decision making and care process improvement. Three, health information systems. And four, leading and managing change. Fundamentals is essential to understanding the health system context that you're working in. For example, working in the U.S. is much different than working in a country with a nationalized health care system. And it's also understanding the context of the individual institution that you're at. Clinical decision making and care process improvement involves understanding how medicine is actually being practiced on the ground so that appropriate clinical decision support, as we've heard from our other speakers, can be put into place and workflows can be improved upon. It's about understanding what clinical data you have to work with, its advantages and its limitations, and how it can inform data-driven approaches. Health information systems refers more to the techie, sciency part, you know, how do databases work, what's a data warehouse.
It is helpful to understand some programming, but it's not a requirement. And this is where you're learning about the different health care data standards, think HL7, nomenclatures like SNOMED. And finally, leading and managing change. As we all know and as we've heard, we can create these amazing systems and solutions, but if they're not adopted they're really of no use. So therefore change management, leadership models, and organizational behavior are actually a really huge part of being an informaticist. All of that said, many of us are actually already practicing informatics in our day-to-day work as it is. So the question becomes, do I need to have additional training? For those who want to pursue additional training and subspecialty certification in clinical informatics, there are now formalized and defined pathways. For those of you in the audience who are still in training, this may help you decide if you want to pursue a separate fellowship in clinical informatics through an ACGME-accredited program. For those of us who have long exited training, you could still become board certified through what's called the practice pathway that's offered by the ABPM. That's going to expire in 2025, but in previous years they have kept pushing back the deadline, so it may be 2025 and it may be pushed back later. It's unclear right now. But to go through the practice pathway you have to detail what's referred to as time in practice, what you've done in terms of clinical informatics work, which they define as three years of practice. The practice time must be at least 25 percent FTE, which is about 10 hours per week, and it has to occur over a five-year period, so you can't say, oh, I did this, but spread over the last decade. So these are the requirements for the practice pathway. You have to fill out an application to the ABPM. You have to be board certified already in another specialty. You have to have an active medical license.
Again, the bulk of the application asks you to detail your time in practice to ensure that you've met this three years of practice at 25 percent FTE over a five-year period. You have to submit a CV or resume and a letter of recommendation. The application is then reviewed, and if it's accepted, and this is after you've paid the application fee, you're eligible to sit for the board exam, which is offered once a year. So for those of you who are interested, I encourage you to go to the ABPM website to look up the details. There's a detailed timeline with important dates on the website. AMIA does offer a board review course for those who want help preparing for the exam. So I'm going to switch gears a little bit in terms of my thoughts on how to negotiate a career in informatics. In terms of a career in informatics, we really automatically think of, you know, C-suite titles: CIO, CMIO, the chief medical information or informatics officer. However, there is quite a large spectrum of career opportunities and multiple career paths for those who do varying percentages of informatics work alongside their clinical work. But these paths are not well delineated. You won't see job postings yet for a hepatologist informaticist. As many career mentors will tell you, they ended up in their career role by way of a mixture of luck and opportunity. So as I reflect back on my own path, I'll share with you what I found useful. One of the most important things that I've learned is that it's so important to understand your hospital leadership and stakeholders. Most of us are in GI, and a common structure is that GI is often housed within a department of medicine, which is part of a larger academic medical center. And on top of that, you have your hospital leadership and the C-suite, again with the CMIOs, CMOs, CIOs, and so on and so forth.
So I would encourage you to get a hold of your institution's org chart, as every institution is structured differently with different reporting structures. To illustrate this, I just pulled a couple off the internet from institutions that I've been a part of, just to show how vastly different institutions are from one another. This is the executive org chart from Sutter Health, a non-profit integrated health system in northern California. And in this hospital leadership, there are many domains, I think, of interest to an informaticist. You have a chief innovation officer, a chief digital patient experience officer, a chief digital officer, chief strategy, chief enterprise transformation officer, and all of these roles have different responsibilities and they run different programs within their own reporting structure. So for example, an informaticist can become involved with the digital patient experience office running telemedicine, but if you're interested in AI-powered chatbots, that might fall under the innovation office. So it's important to understand how the organization is structured. This one is from UCSF, and this is just the leadership of the department of medicine. So within medicine, there are some really interesting roles such as the vice chair for clinical affairs and value improvement, associate chair for clinical research, associate chair for ambulatory care and population health. I would bet that informatics is used heavily in each of these roles. So knowing how your organization is structured will help you understand where and how you can add your informatics expertise. As you can also see, a lot of informatics opportunities might be outside of the GI department and division, either within the department of medicine or within the medical center.
Within GI and hepatology, however, I put together a clinical informatics menu, and you can see the possibility for work in clinical informatics relevant to hepatology is pretty endless. Popular topics are on the left, including but not limited to what you see listed, and on the right, you have processes, tools, and infrastructure. Coupled with the clinical gap and need, there's so much work to be done, whether it's through EHR optimization, mobile apps, wearables, using AI or machine learning, creating registries, databases, and dashboards like we've heard about, working on patient-centered solutions, preventative health, population health, personalized medicine. There are multiple possibilities. To highlight additional ones, one can get involved in tele-hepatology efforts, consulting on clinical research ideas for the department or division, or EHR-related work. A quick note on EHR optimization. I want to stress that it's not simply being your physician super-user or being an Epic dot-phrase guru. That's certainly a good skill to have, but in terms of informatics, EHR optimization refers more to implementation, identifying care gaps and how the EHR can help, clinical decision support, physician support, not only in the technical aspect but also representing the physician stakeholder, as we're also end users, and development of care pathways and care linkages. So how to put it all together. Thus far, I've discussed what clinical informatics is, how to potentially pursue subspecialty training through the practice pathway, and what a menu of options looks like in hepatology. So I want to put it all together with what an actual job scope could look like. I'll go through three examples: one where it's 80 percent clinical with 20 percent informatics, one that's 50-50, and one where it's 80 percent informatics with 20 percent clinical. So an 80 percent clinical workload in a five-day workweek could look something like this.
During your 80 percent, which is depicted in orange, you might have a scope day, two and a half days of clinic, maybe a pre- and post-transplant clinic, and your salary would be supported by GI hepatology, most commonly under the Department of Medicine. You could negotiate your 20% informatics time being a physician builder, and in many institutions, the physician builders come out of the informatics department. If your department is active in research or quality work, you could imagine a role doing data governance for your research department. This could also be supported by gastrohep, but perhaps through a different funding source within the Department of Medicine. So what about a 50-50 role? In orange would be your clinical responsibilities, which, again, is a combination of the usual service weeks, subspecialty clinics, and scope block time. The other 50% could come from a combination, maybe the transplant department, Department of Medicine, and informatics. I was really fortunate to have something similar to this, where I was an innovation lead for the transplant service line, which included not only liver but also kidney and heart transplant. But within hepatology, I could help design new care pathways and processes for hepatology patients. And under informatics, I was an EHR champion for our transplant physicians. And then finally, what could an 80% informatics job potentially look like? Here, the clinical load is much reduced, perhaps just one day of clinic per week and limited time on wards. The informatics portion in green would be supported by both the informatics department and the gastro department. Informatics could support your salary as a consultant on various projects or for EHR-related work, and this could cover work outside of GI, for example, working on a diabetes or cardiovascular project.
And then the gastro-hepatology department could support one's salary for specific GI physician builder work, as well as consulting on GI-related clinical research issues. As we talk about building and navigating a career in clinical informatics, it's interesting to note that such a track has yet to be defined in academic medical centers. We're all familiar with what's needed to be a clinician scientist or a clinician educator, and even a clinician administrator. But what about a clinician informaticist? That still remains largely undefined. So if you're a clinical informaticist, what are the implications for career advancement and promotion? What is required? Is it hospital service? And if so, how is that defined? Will you have specific deliverables? Will they be project-based? What are the expectations with regard to publications and teaching? Is it based on what leadership positions you've had or what committees you've been on? I think now is a really good time to reach out to your hospital and department leadership, as this could really be the time to define this at your own institution. We should also take a step back and look at how to nurture the next generation of informaticists. One could argue that there's too little training in med school; should there be an informatics module or rotation in med school, residency, or subspecialty training? One example might be to have students collaborate on or complete a capstone project. As for GI fellowship, how can we support those who are interested in informatics? These are some of the professional organizations and meetings you can check out if you want to learn more about informatics. AMIA is the main organization. They have an annual symposium, but unfortunately, that often overlaps with AASLD. But they do have a separate clinical informatics conference. It'll be in Minneapolis next May. And there are also AMDIS and HIMSS as well.
So as I'm concluding, many of us are already practicing informaticians. Carving out your career pathway in informatics involves partnering outside of the traditional academic division and department. I encourage you to reach out to people and to let them know your background and interests. You never know what can happen, but nothing will happen if you don't say anything. My key takeaways are as follows. Hopefully, you have come away from this talk with a better understanding of what clinical informatics is and is not. It's more than just big data. As technology advances, we have an increased need for informaticists, especially in this environment where we need to uphold quality standards while containing costs and ensuring a patient-centered experience. This is a very tall order and will take an entire medical village. There is no single formula for how to be a clinical informaticist. On this slide, as Oren mentioned, are other gastroenterologists, transplant hepatologists, and clinical informaticists. As you know, Oren is at UNC. He was certified via the practice pathway. He's the lead informatics physician at UNC with a lot of experience in Epic building and also the medical director of liver transplantation. Ashley, whom we've heard from, is one of our panelists. She went through an ACGME-accredited clinical informatics fellowship and has appointments through both gastro as well as the biomedical informatics department. And I also became board certified through the practice pathway, with experience in Epic and Phoenix implementation, and I'm currently working in digital transformation in Singapore. In terms of next steps, it would be great to keep the conversation going. For those of you who are interested in mentorship, collaboration, or just joining a tribe of like-minded people, we have created a WhatsApp chat group. You can scan the QR code, which I'll have again in the final slide, and if it doesn't work, feel free to come find me.
I'd like to acknowledge CPMC Medical Center in San Francisco, Sutter Health leadership, and the Department of Transplant for helping me as I was trying to navigate my career path and letting me do cool things. I'm thankful to the SGH gastro department and transplant center for supporting me as well. My email address is listed below if anyone has any questions. And thank you so much. Thank you very much. Well, let's open this up for discussion. We've got some time to ask some questions of the speakers or talk about anything that might come to mind relevant to informatics and hepatology. Hey, everyone. Michael Volk. These were great presentations. I learned a lot. Thank you. So much of quality improvement leads to trying to get people to do more, and there are definitely huge care gaps. But if you look at the bigger picture of American health care, the main problem is not that we do too little; it's that we do too much. So are any of you familiar with any work in informatics that is looking to optimize utilization or prevent unnecessary care? I can think of one very simple example with some EHR alerts where if you try to order a test that's been done before, like HFE mutation analysis, it will show you the past result and deter you from ordering it again. That's just a really simple one. I will say, I think Elliot has had some work, too, looking at ordering ceruloplasmin and trying to give people real-time data that maybe this isn't appropriate for this patient. So I think there have been, like you said, some lab-based interventions. How can we do less? So kind of on this, Sajid Jadartan from hepatology at Rush in Chicago. On the same kind of thought process, I think all four of you mentioned the idea of alert fatigue, but not really a clear solution of what's the threshold. Where are alerts helpful, and what's the threshold at which you're saying, hey, this is too much, or this is causing disjointed workflows?
The other question I had was the VA system is awesome, right? Everybody loves the VA. How easy is it in other systems to set up the dashboard? I know it's possible, but at least at my institution, I know it's a monumental task to get a new build in Epic and things like that. And then I just wanted to compliment the fact that you have a transitional care pharmacy. I just think that's awesome. We would kill to get rifaximin approved before people leave the hospital. I can speak a little bit about the alert fatigue. I think we do a very good job of building things and releasing them, but we don't do a very good job of maintenance. One thing that we've done at Vanderbilt is that we've created a dashboard, actually, that looks at how people are interacting with these alerts. We can see the acceptance rates. We can see how abysmal many of them are. And we've done a lot of work on trying to identify the ones that are giving people the most concerns, optimizing them, reaching back out to the end users who are providing feedback on those, and making adjustments as needed. We've been able to show that we've reduced the number of clicks that it takes to get through these alerts, and we've received a lot of good feedback on that. So I think we need more processes like that to improve the management of those alerts. And just building on that, if you see some alerts that really annoy you, instead of just ignoring them or clicking the X, there's usually a way to give feedback about why you think it's not useful for you. And there are people looking at those reports, so use that feedback. I will also say that feedback is looked at, so be careful what you write in there. Yeah, Nizar Talat. So it's interesting to see that there are a lot of tools that are coming out or available to help patients, but are we thinking about how much burden we're already going to be adding to the already burned out physician?
I know with alert fatigue, the acceptance rate might be great in some of the studies, but if you look at the acceptance rates for all alerts that come across the EMR, it's probably really low, or the silencing rate is really high. So what are we going to do to mitigate this, or at least not contribute to the already burned out providers out there? Yeah, so one thing that I think is really interesting is just coming out more in the generative AI space. I'm a board member on the specialty steering board for Epic, and one of the things that they're looking at is how they can use generative AI to reduce alert fatigue and some of the administrative tasks that we see as clinicians. So they're looking at how we can better summarize the messages that we're receiving in the patient portal, reducing in-basket burden, things like that. So I think it can be helpful in that perspective, but we also need to think about ways that we can reduce what's ineffective and create systems around monitoring that. We also need to keep in mind that when we're creating these things, we need to think about what can be sustainable and doesn't rely on a single person to manage, because that can be a system that sets you up for failure as well. Also, when it comes to trying to come up with EHR solutions, a lot of people think about active interventions: I want to tell somebody to do something or not do something and stop them in their tracks before they go on. And I think there's a lot of pushback about using active BPA alerts as a solution. There's also, at least at UNC, an effort to look at all the existing BPA alerts, see how they're being used, and try to get rid of a lot of them, because there's huge variability between institutions in the number of active alerts that show up. And that really is contributing a lot to fatigue and burnout.
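The alert-monitoring dashboard mentioned above is conceptually simple: count how often each alert fires and how often its suggestion is accepted. As a rough sketch of that idea (the two-field event shape and the "accepted"/"dismissed" labels here are hypothetical stand-ins, not any vendor's actual log schema):

```python
from collections import defaultdict

def alert_acceptance_rates(events):
    """Per-alert acceptance rate from (alert_id, action) firing logs.

    'events' is an iterable of (alert_id, action) pairs, where action
    is "accepted" or "dismissed" -- an illustrative schema only.
    """
    fired = defaultdict(int)
    accepted = defaultdict(int)
    for alert_id, action in events:
        fired[alert_id] += 1
        if action == "accepted":
            accepted[alert_id] += 1
    # Rate per alert; alerts near 0.0 are candidates for retirement.
    return {aid: accepted[aid] / fired[aid] for aid in fired}

log = [
    ("fib4_followup", "accepted"),
    ("fib4_followup", "dismissed"),
    ("duplicate_lab", "dismissed"),
    ("duplicate_lab", "dismissed"),
]
print(alert_acceptance_rates(log))  # fib4_followup: 0.5, duplicate_lab: 0.0
```

In practice this kind of tally is what surfaces the "abysmal" alerts described above, so they can be optimized or retired rather than left to accumulate.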
I'll also add that at Vanderbilt, we have a clinical decision support working group that's comprised of health IT professionals and clinicians. Whenever someone puts in a request, usually for an alert, the entire purpose of that meeting is to leave with them not building an alert but creating some other form of decision support to answer their need. But not all places have that, and those are the types of systems we need. We have an effort in the VA right now as well to reduce the huge number of alerts, to eliminate old alerts and duplicative alerts and things like that, just to combat alert fatigue and reminder fatigue. Hi, I'm Catherine Mezzacappa, I'm a fellow at Yale. Thanks for the talks, they were really informative. I have a question that spans Dr. Spann's and Dr. Louissaint's talks about the use of AI to identify the people to put into a pipeline and to sort of track people. In a lot of informatics papers I've read, these algorithms will work really well in the institution where they're developed, but not necessarily replicate. So how do we get to the sharing-is-caring point? Because the patient population, the workflow, the availability of certain tests may be different at one institution than another. So if it's ease of FibroScan or whatever else that's generating your model, how do we get over this non-replicability of AI-generated tools to make them useful everywhere? I'll let you take the AI point if you want to start with that. Yeah, so for artificial intelligence, there are a lot of concerns there, I would say. One, and we're going to talk about this in the next hour in the community conversation, but the FDA is starting to regulate these as medical devices, particularly when we're talking about risk predictions and things like that.
So from a research perspective, we can use them, but when we start thinking about how we scale them, there can be a lot of issues with that, particularly with perpetuation of bias. And you can see that because you're dealing with different populations in different areas. A big issue I think related to that is interoperability between sites. We have a lot of difficulty even between sites that are on the same EMR. Things can be mapped completely differently, like you mentioned with elastography. They can be in completely different areas, and we need to go through processes to make sure that we have structured data around that. But it can be pretty difficult, and I think there's a long road ahead, and there's probably going to be some coming regulation that's going to make that even more strict. I think your question is very important, because with a lot of these traditional algorithms, we all know the variables that go into them. We can easily assess and understand them. But when we start going into AI and machine learning, I think most of us are a little bit scared by that and don't understand it. Therefore, how do we say, well, I want that here? So I think there will be a greater need for more people who are trained in AI and machine learning who are able to be at different sites and understand, okay, I see how this was designed, I understand how it works at a high level, I can implement it here or not. That's, I think, a future challenge. I'll just give you a real simple example of how tough this problem is. Epic put out a FIB-4 calculator, and it was in the foundation system so each individual institution didn't have to create it, but it didn't work at UNC. And I think if it weren't for a hepatologist informaticist who wanted to fix it, it probably would never have been fixed. It turned out the platelet count was just not mapped correctly, and that was a simple fix, but it might have taken a year to fix if we had just put in a ticket. Thank you.
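For reference, the FIB-4 score that the calculator computes is a simple ratio of routine labs, which is exactly why a single mis-mapped input field can silently distort every result. A minimal sketch of the standard formula (the function name and unit conventions here are illustrative, not Epic's build):

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_1e9_l):
    """FIB-4 index = (age x AST) / (platelet count x sqrt(ALT)).

    Platelets are expected in 10^9/L; a mis-mapped platelet field,
    as in the anecdote above, skews every score it produces.
    """
    if platelets_1e9_l <= 0 or alt_u_l <= 0:
        raise ValueError("platelet count and ALT must be positive")
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

# Example: age 61, AST 40 U/L, ALT 25 U/L, platelets 150 x 10^9/L
# (61 * 40) / (150 * sqrt(25)) = 2440 / 750, about 3.25
print(round(fib4(61, 40, 25, 150), 2))
```

The input validation matters here: a platelet value mapped from the wrong source would pass through arithmetic without error, which is why the broken foundation-system calculator looked fine until someone checked its outputs.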
Do we have time for more questions? Yeah, go ahead. George Ioannou, University of Washington. Thank you very much. It's interesting, and you were saying at Vanderbilt we do this, at UNC we do that. But the data exists at the EHR level, and I wonder, maybe that was addressed a little bit before, but in the big picture, at the EHR level there are a couple of big players that provide EHRs for all the institutions in the United States. To what extent are they invested in creating AI solutions that could be applicable across the country? Because you can't expect every institution, and every medical center within every institution, to validate and operationalize all these algorithms one by one. It has to happen at a high level. Can it happen, will it be applicable, and are the major EHR manufacturers even interested in these applications? I will definitely say that they are, and they've already built some of these models out. I can speak for Epic specifically in the sense that they already have several predictive models that they're utilizing in many different areas, and they're also already building things for large language models and generative AI, because from a business perspective it's a selling point for other institutions to purchase Epic and buy in. So we are seeing that happen, but we still remain with issues of interoperability, testing the algorithms on local data, and monitoring drift as the algorithms are implemented to make sure that they continue to be successful after implementation. Before we take one last question, I just want to remind people who are interested in these topics that there are multiple sessions at this meeting, particularly today, about AI and the EHR. There's one right now at 10 o'clock in room 309. There's another AI talk at 2:30 today.
There's an EHR hands-on workshop today at four o'clock, and then on Sunday at 4:30 there's something called Implementation Heroes, Masters of Healthcare Delivery in Hepatology. And then finally the Hepatology Informatics Interest Group: if you're interested, contact us through this WhatsApp link; we're trying to start an informal group that will meet on Monday. All right, one last question, thank you. Thank you, this was a very interesting session. Puneet Puri from Richmond VA Medical Center, and great talks. Part of my question was also asked by George, so I'll focus more on the VA system and the dashboard specifically, very nicely presented. One of the things that I often struggle with is that dashboards try to identify gaps at a population level, but for the providers who are actually providing service, if these dashboard data could just pop up for that provider, just like clinical reminders, that is the best opportunity to fill those gaps. With simple solutions, we at our center, through Ho Chung and Dr. Fuchs, made a big implementation of gap reminders for things like colonoscopy and pathology reporting. So that addresses three or four issues with the click of a button and also feeds into the data analytic piece that is nationally assessed as quality metrics. So any thoughts on whether these dashboards can become like a patient reminder kind of thing? It's a great idea, thank you for the question. As you know, the dashboard is sort of a standalone right now. And if you're in the VA working with reminders at all, we are sort of hamstrung by the potential looming implementation of Cerner. The only reason we're able to do any reminders at all is if they already exist in Cerner. Like our recently released varices reminder, the only reason we were able to do that in this effort of trying to reduce reminders is because it's coming in Cerner, and the medical records have to match across the enterprise.
So I don't actually see any effort to do pop-up reminders that come from the dashboard; the dashboard will more likely remain a population health tool rather than a point-of-care tool. Whether that's good or bad, I don't know. Well, thank you very much, everybody, for being here. I want to thank the SIG chair and my co-moderator, Michael Fuchs, and all the wonderful speakers. I think this is a great start to informatics programming here at the liver meeting, and hopefully we'll continue it in the future. Thanks for being here. Thank you.
Video Summary
The video transcript from a symposium discusses the use of clinical decision support tools in enhancing care for patients with liver disease, specifically cirrhosis. Speakers emphasized utilizing tools like dashboards to identify overdue surveillance and reduce hospital admissions for cirrhosis complications. Challenges such as coding inaccuracies, provider time constraints, and patient non-compliance were highlighted. Real patient scenarios showed how dashboard reviews led to interventions preventing readmissions, stressing guideline-directed management and post-discharge care. Strategies were discussed to improve care coordination, medication management, and patient follow-up to reduce readmission rates and enhance outcomes for cirrhosis patients.

The speakers also covered the role of clinical informatics in hepatology, detailing clinical decision support tools as providing knowledge to improve healthcare outcomes. Tools such as alerts, reminders, and guidelines were discussed, emphasizing timely information delivery. Challenges like alert fatigue and AI usage for predictive models were addressed, with emphasis on interoperability and data sharing for broader AI implementation. Lastly, the importance of mentorship, collaboration, and training in AI and informatics for advancing patient care in hepatology was highlighted.
Keywords
clinical decision support tools
cirrhosis
dashboards
surveillance
hospital admissions
coding inaccuracies
provider time constraints
patient non-compliance
guideline-directed management
care coordination
medication management
readmission rates
clinical informatics