The Liver Meeting 2022
Public Health/Health Care Delivery SIG Program: Dissemination and Implementation Research: Conceptual Framework and Applications in Hepatology
Video Transcription
All right. Good afternoon, everyone. We'll get started on this important meeting on dissemination and implementation science research. I would like to take the opportunity to thank AASLD for letting our Public Health special interest group put together this program. As you all know, there has been a well-documented gap between research and practice. The dissemination and implementation science framework has emerged to address this research-to-practice gap and accelerate the speed with which translation and real-world uptake in practice can be improved. We have four honored speakers who are going to discuss their dissemination and implementation practices with us. I'm honored to introduce our first speaker, Dr. Shari Rogal. She's an associate professor of medicine at the University of Pittsburgh, where she leads two implementation science CTSI courses. She also co-directs the dissemination and implementation (D&I) core of the VA Pittsburgh Center for Health Equity Research and Promotion. Her work focuses on advancing the science of implementation to promote equitable, high-quality health care and practice across a number of clinical domains. So, Shari, we welcome you to start our discussion. Thank you.

Thank you. Thanks, Manisha, and thanks for having me. We're going to have so much fun that people are just going to be coming in from the hallway. It's going to be amazing. Okay. I don't have any disclosures related to this talk. I'm going to talk a little bit about why dissemination and implementation science exists; Manisha started to touch on that. Then I'm going to talk about the building blocks of dissemination and implementation science work. I was tasked with telling people the things everyone should know about dissemination and implementation science, so this is going to be a big overview of the discipline, and then I'll give a little example from some of the work our group has done. Okay. So the question of why we have dissemination and implementation as a discipline arose out of a need, because we know the gap between research and practice takes about 17 years, and within that gap there's also a huge voltage drop, meaning that we generate all this great evidence and very little of it, if any, reaches actual patients or people in the community to improve public health. I used to give examples about smallpox and penicillin and how long it took for the smallpox vaccine and penicillin as a medication to be adopted in the real world, but now I just put up this sign, and everybody realizes why we have a problem with dissemination and implementation. We can create even a perfect innovation in the lab, and if no one uses it, or if there are issues in the community with acceptance, it may never get used. So implementation science is the scientific study of methods to promote systematic uptake of research findings and other evidence-based practices into routine practice. This started back in the 1950s, when Everett Rogers noticed that some farmers were taking up innovations, using new crops and new seeds in their farming, and others weren't, and so he wrote this really cool book called Diffusion of Innovations, which is where that S-curve of early adopters and late adopters comes from.
So this is not a new field, but it is a very interdisciplinary field, and more recently it has been applied in the medical arena. Dissemination and implementation science sits at the far end of the translational continuum. If you define basic science research as T0, and then translation to humans or efficacy studies as T1, followed by effectiveness, implementation sits in the T3 to T4 space. So once you know that something is effective, this is when you're talking about how to get it used. Now we're going to talk about some building blocks. Okay. In the ideal world, you would have evidence-based practices that come from clinical expertise, that incorporate patient values and preferences and the best research evidence, and everyone would magically use them. However, the real world is much more complicated, and implementation science embraces this mess; that's one of the reasons I personally was drawn to it. You should see the inside of my car. So yes, we're going to talk about the mess now. Okay. This is Enola Proctor's model of what implementation science research is, and you'll notice I'm not talking a lot about dissemination science. Dissemination science is getting messages out there, educating people about innovations, and it's one part of implementation science, but implementation embraces that as well as a bunch of other stuff that we're going to be talking about. Okay. So very typically, and this is what I was doing when I was doing pure health services research, I was looking at how different practices or interventions led to changes in patient outcomes. This is the usual: we deliver an intervention, and we look at how it changes outcomes. However, there's a lot of stuff in the middle of that that we often don't think about, but it's very important. The how, how you get a practice or intervention to people, the delivery components, the things that you do to help that along, are called implementation strategies. And then, before you get to patient outcomes, you have implementation outcomes. So if you have an innovation and you deliver it, but you don't deliver it with fidelity, or it's not acceptable to clients, or it costs too much, that can all get in the way of implementation, and it'll look like your intervention doesn't work, when in fact it just wasn't implemented well. There are also service outcomes in the middle, defined by the IOM, and I want to point out that these include equity, patient-centeredness, and timeliness, really important stuff. And around everything is the context of implementation: the things in the environment that can help or hinder implementation. So this is where implementation science sits, and you're going to be hearing a lot about this from the other speakers as well. This is Geoff Curran. He's an investigator in the Arkansas VA, and he's awesome. When he tries to explain implementation science to people, he uses language that's really accessible, so I wanted to share it with you in case it's helpful. When we're talking about implementing, what we're implementing is a thing, an evidence-based intervention, practice, or innovation. So he calls it the thing. This is the thing that you're trying to do. Effectiveness research looks at whether the thing works. Implementation research looks at how best to help people or places do the thing. Implementation strategies are the stuff we do to try to help people do the thing. And the main implementation outcomes focus on how much or how well they do the thing.
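To make those two levels concrete, here is a minimal, purely illustrative sketch in Python. The class and field names are invented for this transcript, not any standard implementation science API; the point is just that the thing (the intervention) is modeled and judged separately from the stuff done around it (strategies and implementation outcomes).

```python
from dataclasses import dataclass, field

@dataclass
class Thing:
    """The evidence-based intervention itself ("the thing")."""
    name: str
    effectiveness_outcome: str  # does the thing work? (effectiveness research)

@dataclass
class ImplementationEffort:
    """The stuff we do to help people do the thing."""
    thing: Thing
    strategies: list[str] = field(default_factory=list)               # e.g., audit and feedback
    implementation_outcomes: list[str] = field(default_factory=list)  # e.g., adoption, fidelity

effort = ImplementationEffort(
    thing=Thing("HCV direct-acting antivirals", "sustained virologic response"),
    strategies=["educate providers", "change infrastructure"],
    implementation_outcomes=["adoption", "fidelity", "cost", "sustainment"],
)
# Two levels, judged separately: the thing's effectiveness vs. how well we do the thing.
print(effort.thing.effectiveness_outcome, "|", effort.implementation_outcomes)
```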
So whenever we're talking about implementation, we're talking about two levels: the evidence-based practice or intervention that you're trying to deliver, the thing, and then all the stuff you build around that to get it into practice. I do a lot of consults helping people figure this out, and the first thing we ask is: what is your evidence-based intervention, and how strong is the evidence? Okay. The next thing we talk about in implementation science, which we might not talk about in tightly controlled efficacy trials, is the context. There are a lot of things that can help and hurt implementation, and again, this is all about embracing the messiness of reality. We're going to talk more about the implementation process, but the framework I'm showing here is the Consolidated Framework for Implementation Research (CFIR), which was pioneered by Laura Damschroder in Michigan. It divides implementation determinants into five categories. They recently updated it, and it's all available on the web; people in this field are really into making things freely available and open, like interview guides, so it's a great resource. What it says is that the implementation process is important, like we said, but the characteristics of the intervention itself also matter. These are things we think about, but maybe not enough, when we're developing interventions. For example, how complicated is the intervention? If something's simple, it's a lot easier to get people to use it and to get it into the real world, and the same if it's not too costly. Those are some examples of intervention characteristics that make it easier or harder to implement. Other things that are important are the individuals involved: the people delivering the thing. Say this is a behavioral intervention: do clinicians believe in it? Do they know about it? Do they feel confident that they can deliver it? And of course the patients are also involved, and how they feel about the intervention matters. Then there are the inner setting and the outer setting. The inner setting is the clinic environment you work in: do you have enough staffing? What's the culture? What are the structural characteristics? Do you have the equipment you need? The outer setting focuses on things outside the direct clinic, and maybe outside some people's control: the bigger social environment, policies, and things that can either help or hinder implementation. I was recently working on an HIV-related project, and they were talking with some people in Mississippi who said that there are really strict rules about consent for HIV testing in some states but not others, and that this was a major barrier to their public health efforts. This is another framework that shows how implementation can be impacted by many domains of influence, as well as levels of influence, that ultimately translate into individual, family, community, and population health. And so I really like this framework.
It's the NIH's Health Disparities Research Framework, and it shows that there are biological, behavioral, physical, sociocultural, and environmental factors, and healthcare system factors, as well as all the complexities of the interactions of the individuals living in these contexts. So when we're talking about the context and all these complex factors, we're thinking about how we measure these things and how we account for them, but also about what strategies we need to pick to leverage the facilitators and overcome the barriers. And this is where implementation strategies come in. They are the methods or techniques used to enhance the adoption, implementation, or sustainment of an evidence-based practice or program. This is my personal research interest: how do you pick implementation strategies? Implementation strategies can be challenging to define and pick. They've been likened to the Tower of Babel (this is just a Google image), until about 2015, when some implementation scientists who I know and love got together and defined a taxonomy, the Expert Recommendations for Implementing Change, the ERIC group. They came up with 73 implementation strategies, named them, and defined them. Then they used concept mapping to locate them in nine conceptual clusters. So if you think about the things we do to get people to use interventions, and I notice this a lot when I do consults, people jump to something that they like. People think, we need to educate the providers, and if we do that, that'll be good. Or, we just need a really effective intervention: if we build it, they will come. I call that the Kevin Costner fallacy. But in fact, we have to do a lot of things to get people to want to use innovations, and this just shows the depth and breadth of the different kinds of things we can do. For example, the non-VA healthcare system I work in focuses a lot on financial incentives and financial strategies, whereas I think there are other things that are important, like changing the infrastructure, so that your clinic is set up in a way that makes doing the right thing the easy thing to do; working with consumers, actually working with patients and designing things the way that they like; and evaluative and iterative strategies. So this is a lot of different stuff, and I would encourage everyone not to jump to a single implementation strategy, like a decision aid or a clinical reminder, but to think more broadly about what the barriers are and what strategies we need to overcome them. The other thing that's important is considering implementation outcomes. I alluded to this before, but the reason it matters is that you could have a very effective intervention, but if it's not adopted, if it's implemented with poor fidelity, or if it's not sustained, it's going to look like it's not that good. And then you're going to throw away the intervention, throwing the baby out with the bathwater, when in fact the intervention is good; we just didn't do enough to implement it well. And again, I want to say that equity is a key implementation outcome.
This is a framework that I helped a little bit with, and essentially it lays out all of the factors that can influence whether interventions are implemented equitably. I just want to say that to implement something well, it has to be done equitably; successful implementation requires equity. So inherent in implementation science is a focus on health equity. Implementation is multi-level, and it's also multi-phasic: the things that work or are needed early in implementation, when you're exploring or preparing, are different from the things you need in the sustainment phase. This is Greg Aarons' EPIS framework, also a really nice framework to be aware of. Okay, so now I'm going to show you a very quick example. The reason I put this picture up, and some of you may be in it, is that implementation science is a team sport, and that's what I like about it. I was a basketball player a long time ago, before I was a parent and lost all skills and coordination. This is the Hepatic Innovation Team Collaborative in the VA, a bunch of providers who were interested in making sure that all veterans were treated for their hep C. This is pre-COVID data, but what it shows is that the VA has treated over 90% of veterans with hep C at this point. And lest you think that's just because they pay for the treatment, we can look at the NHS in England or the Australian healthcare system and see that the numbers are not nearly this good. So the question is, how did the VA do this? Back in 2014, I finished my fellowship, and I was really interested in trying to sort some of this out. So we started measuring implementation strategies across the VA, who was using what, using that 73-item taxonomy of implementation strategies I talked about. We mapped what different sites were doing in each year, and then we looked at what was working: what worked, when, for whom. And what we found was that effective implementation strategies actually change over time. Initially, in year one of the DAA rollout, the things associated with higher treatment rates were the ones highlighted here with yellow dots. A lot of the things that mattered were developing stakeholder interrelationships, doing some initial training, getting some initial interactive assistance, changing the infrastructure (typically, buying a FibroScan machine), and reaching out directly to veterans. At my VA, for example, we sent letters to all the veterans who had a history of hep C to let them know about the DAAs. But what's interesting is that the things that worked in year two of implementation were a little bit different. It was no longer enough to develop interrelationships and train and educate; those things were necessary but not sufficient. In the later years, the things that mattered were doing more tailoring and evaluative and iterative strategies, meaning addressing harder-to-reach populations. Sites did a lot of specific outreach to homeless or unstably housed veterans, things like that. So by measuring these things, we can see that the needs may change over time.
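For readers who want the shape of that site-by-year analysis, here is a minimal sketch with invented data. One row per site-year records which strategies a site used (1/0) and its treatment rate, and within each year we compare treatment rates between sites that did and did not use each strategy. The real study used the full 73-strategy taxonomy and formal statistics; the site names, strategy labels, and numbers below are illustrative assumptions only.

```python
import pandas as pd

# One row per site-year: strategy use (1/0) and the site's HCV treatment
# rate that year. All values are invented for illustration.
df = pd.DataFrame({
    "site":           ["A", "B", "C", "D"] * 2,
    "year":           [1] * 4 + [2] * 4,
    "train_educate":  [1, 0, 1, 0, 1, 1, 1, 0],
    "change_infra":   [1, 0, 0, 1, 1, 0, 1, 1],
    "tailor_iterate": [0, 0, 1, 0, 1, 0, 1, 1],
    "treatment_rate": [0.42, 0.18, 0.39, 0.35, 0.55, 0.41, 0.70, 0.66],
})

strategies = ["train_educate", "change_infra", "tailor_iterate"]
for year, grp in df.groupby("year"):
    for s in strategies:
        users, non_users = grp[grp[s] == 1], grp[grp[s] == 0]
        if len(users) and len(non_users):
            # Mean treatment rate among sites using the strategy vs. not.
            diff = users["treatment_rate"].mean() - non_users["treatment_rate"].mean()
            print(f"year {year} | {s}: rate difference {diff:+.2f}")
```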
We're currently just finishing up another study, and thank you to those of you who were part of it. This is a study where we packaged the implementation strategies that were working within the VA, and we delivered them to 12 lower-performing VA hospitals to try to improve cirrhosis care. This is the HCC surveillance data from that study. The yellow sites you don't see yet, but the red sites and the blue sites significantly improved with the intervention. So by measuring what works, we can also figure out how to deliver what works in a precise and timely way to the sites that are struggling. Okay. I'm going to get done in less than a minute and be on time; there's this really scary buzzer up here, and I'm not sure what's going to happen when it finishes ticking. So, key takeaways: D&I offers tools to bridge evidence-to-practice gaps; implementation strategies are the interventions in D&I; and implementation is not considered successful unless it's done equitably. All right. So thank you. Like I said, team sport; these are just some of the people I wanted to thank. And thank you guys.

Thank you, Shari, for an excellent talk. We will have a discussion after all four speakers. I want to introduce the next speaker, who really doesn't need an introduction: Dr. Fasiha Kanwal. She's a professor of medicine at Baylor College of Medicine and well known for her impact on how we care for patients with cirrhosis today. She's also the editor-in-chief of Clinical Gastroenterology and Hepatology. Welcome, Fasiha.

Thank you very much for inviting me, for putting together this wonderful session, and to everyone who's here. I'm Fasiha Kanwal, and I'm going to be talking about some examples of using principles from implementation science to improve cirrhosis care in the next 20 minutes or so. The way I will handle it is I'll quickly talk about why it is important to think about implementation science for cirrhosis specifically: what is the state of quality of cirrhosis care? I will not spend a lot of time on implementation science; Dr. Rogal did a wonderful, wonderful job with that. I will touch on a few concepts in a very simple way, reemphasizing some of the aspects she discussed and tying them back to our strategies and our studies. Then I'll give you an example of how we used these principles to improve the quality of cirrhosis care in our local health care system, and I'll also introduce you to a broader intervention or effort that is underway right now. So why is it important to talk about cirrhosis and quality of care in cirrhosis? These data are not new, and they show many shortfalls in how we deliver care to patients with cirrhosis. This is an older study, the graph on your left-hand side. Let me orient you a bit: the graph shows the percentage of patients who were eligible to receive guideline-indicated care who actually received that care. If you look at the very first one, these are patients who had ascites, were admitted to the hospital, and had a paracentesis done during hospitalization. So regardless of how long the hospitalization was, they had it done at least once. It's a flip of a coin: 50% of patients received that. We were pretty good on one measure in this study, which covered over 800 patients from three different VA health care systems: when we found patients who had spontaneous bacterial peritonitis, most of them received antibiotics. But look what happened to them after they were discharged from the hospital.
Now, we know there are lots of studies and clinical practice guidelines recommending that these patients should be discharged on antibiotics, not new information, but it happened less than a third of the time. Only 30% of patients left the hospital with antibiotics prescribed to them. In a more recent study, we looked at over 30,000 patients in the national VA health care system, all of whom had indications for liver transplantation, and we followed their journey over time to see what happens as they move through the long process from being eligible for liver transplantation all the way to transplantation. Only 4.5% of these patients were referred for liver transplantation, 3% were listed, and fewer, less than 2%, were transplanted. So again, there are shortfalls and gaps that impede patients' progress through a critical step that is potentially life-saving for them. We don't really have time to go over the details of what we found in this study, but I'm pointing out key gaps and shortfalls in quality of care, and also linking them back to the fact that this is why we're not seeing any improvement in outcomes for patients with cirrhosis. These are again data from the national VA health care system. We followed over 100,000 patients with cirrhosis who were admitted for their cirrhosis at any VA health care system, and we looked at their mortality in two different ways. First, we looked at in-hospital mortality: did they die during the hospital stay? And we found this nice downward trend in mortality, meaning we were getting better at discharging patients alive over time. But when we shifted this a little and looked at a fixed time window, 30 days after admission to the hospital, how were patients doing? There was no difference over this long period of time. So unlike the major improvements we see in patients with congestive heart failure using the same metric, we're not really moving the needle for patients with cirrhosis; the outcomes are fairly flat over this period. It's really indicating that we need to do something about it, and that's where the role of implementation science comes into play. Shari very nicely presented that chasm between effective care and how it really gets implemented, the true effectiveness of that care in our clinical practice. And unfortunately, we do see that chasm. There is no shortage of new innovations; we're seeing innovations in our field all the time. There is no shortage of guidelines either; we are keeping up with making sure that evidence is compiled into clinical practice guidelines. But the problem is that passive efforts, just sharing those guidelines with people, are not an effective way of ensuring uptake of those evidence-based interventions into clinical practice. That is what we heard earlier, and that is what I will be emphasizing here. Effective uptake requires more active and systematic approaches, and that is what implementation science does. You saw this definition of implementation science in Dr. Rogal's slides. It's really the application of efforts to make sure that research findings and other evidence-based practices are implemented and used in routine care, with an eye towards equity, to improve the quality and effectiveness of health services. So that's implementation science, and this is also implementation science.
Dr. Rogal described all the wonderful work, and I have to say it's a new field, but it's also a very dynamic field. All these enthusiastic implementation scientists are continuously developing new frameworks, which are critical, and I rely on them a lot to shape how we think about implementation science. They're coming up with all these factors that we have to consider; I listed some, and you heard about them in Dr. Rogal's slides as well. And there are these implementation strategies that we heard about, very critical, but it really is a lot. It could be overwhelming. I looked at it: there are 61 implementation frameworks out there. In just one framework, the CFIR that Dr. Rogal presented, which I've also used, there are 39 factors to consider. And if you look at the implementation strategies, which collapse nicely into those nine clusters, there are 73 of them. So I really appreciate all the work, Shari, that you're doing, because we need that help to figure out which of these 73 to pick, but it is overwhelming to think about. That's why I want to go back to the concept you heard in the previous slides. It might be overwhelming, but it's also very logical. If you think about it, you're doing some of this in your daily practice, and as you try to implement these things in your clinical practice, it's useful to go back to those frameworks, because you'll find things you might have missed that make a lot of sense; you just weren't really thinking about them, and that's where these frameworks help. So I'm going to make it very, very simple, going back to the same concept of the thing, because that is really how I understood implementation science. There is a thing, the intervention, the practice: you're trying to improve HCC screening, or you have this cool app that you want to use to make sure patients don't get readmitted. That's your intervention, the thing you're trying to implement. Of course you will do your effectiveness research; you want to know whether that thing really works in the ideal scenario, and that's where your efficacy trials, randomized controlled trials, and quasi-experimental studies come into play, to see whether it results in any positive outcomes. That's effectiveness research, and we're all very familiar with it. But implementation science focuses on the next question: you know the thing works, but how best to do that thing in real clinical practice? And the implementation strategies are all the stuff we do to make sure we do the thing right. So it's the thing, and the stuff we do to make the thing go well. Thinking along those lines helps; we sort of think about this anyway, and it boils implementation science down to these simple terms. So with that, I'm going to go to one of our examples, where we used principles from implementation science in a pilot study within our healthcare system. We focused on a very simple, basic step; it seemed almost too basic to start with, but we learned all kinds of lessons as we tried to implement it. In one of our earlier studies, we found, to no one's surprise, that seeing a specialist was strongly associated with the receipt of high-quality cirrhosis care.
So that was one of the strongest factors in our studies, and when we looked at our own healthcare system, we found that actually less than 50% of patients diagnosed with cirrhosis were linked to liver clinics. We were not even seeing half of the patients we could help take care of. So our idea, along with Tamar here, was to create a system, our thing, to link patients to liver clinics and specialty care. We called our thing P-CIMS, a population-based cirrhosis identification and management system. That was the intervention we wanted to implement, with our main outcome being linkage to care. It was implemented in one healthcare system and built on an IT backbone that runs on automated data in the healthcare system and identifies the patients with cirrhosis we wanted to get to: they hadn't seen us in the last three years, they didn't have any major comorbidity that we could identify, and they had a documented diagnosis of cirrhosis in the system. We also built a system to allow for diagnosis, evaluation, and treatment, so we could track them along the different steps in the care pathway. And then we built care coordination around it, with systems, protocols, and templates to ensure timely follow-up. How this really worked was that our PAs and NPs, our APPs, would get these reminders, these alerts, about those patients; they would review and triage using the medical records, and then notify the primary care providers, requesting them to arrange a liver clinic visit by placing a consult. They would also follow up to make sure the consults were placed and, once they were placed, make sure there was active outreach to bring patients in. There was also a group of patients we reached out to directly to try to link them into care through the system. So that's the front end of what the program, our thing, was. But we also did a lot of stuff to make sure it really worked. In the pre-implementation phase, before we even launched the system, we met with all the stakeholders. Remember that big chunk in Dr. Rogal's slide about stakeholder engagement and involvement? We met with a lot of stakeholders to identify the barriers to implementation they could foresee. We went out of our way to obtain leadership support at the department level and the institution level. We engaged clinician stakeholders up front and made this their program. And it took a long time to develop a local implementation plan. While we were implementing, we kept at it. The main strategy we used is called facilitation: we had a team of facilitators, we had local champions who in a way emerged from the program, and we had technical support people. Our other strategies continued to be leadership engagement; we also spent a lot of time on educational outreach, we had an audit-and-feedback system built in, and we did problem-solving to the extent that we could, which also included modification of workflows. With this system, we identified over 2,000 patients with cirrhosis who were not being seen in our liver clinics and could potentially be the population we wanted to reach. And based on review of their records, over 1,500 really did need to come and see us.
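As a hedged illustration of the case-finding step just described, here is a minimal sketch of that kind of rule-based logic. The field names, the comorbidity example, and the way the three-year cutoff is computed are assumptions made for this sketch, not the actual P-CIMS code, which ran on automated VA data.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Patient:
    patient_id: str
    has_cirrhosis_dx: bool               # documented cirrhosis diagnosis in the system
    last_liver_clinic_visit: Optional[date]
    major_comorbidity: bool              # exclusion flag, e.g., hospice enrollment (assumed)

def needs_outreach(p: Patient, today: date) -> bool:
    """Flag patients for the linkage-to-care outreach list."""
    if not p.has_cirrhosis_dx or p.major_comorbidity:
        return False
    cutoff = today - timedelta(days=3 * 365)  # "not seen in the last three years"
    return p.last_liver_clinic_visit is None or p.last_liver_clinic_visit < cutoff

patients = [
    Patient("001", True, None, False),               # never seen in liver clinic -> flag
    Patient("002", True, date(2022, 1, 15), False),  # seen recently -> skip
    Patient("003", True, date(2017, 6, 1), True),    # excluded by comorbidity -> skip
]
print([p.patient_id for p in patients if needs_outreach(p, date(2022, 11, 5))])  # ['001']
```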
It's not that they were being cared for elsewhere; they really needed to come and see us. The intervention ran for about 18 months, and through multiple outreach strategies we achieved successful outreach to about 75 percent of those patients. Of the ones we were able to reach, more than half had a scheduled visit, and of those, a proportion were seen in our liver clinics. The numbers of course fall as you follow the graph, but if you look at just the very last bar, the patients who were seen were a little over a third of the patients we started with. And many things happened. We identified patients who were not on hepatitis C treatment, over 120 patients whom we started on HCV treatment, and there were also patients who were diagnosed with liver cancer, just because we were able to bring them in and link them to care. In total, 30% of patients without ongoing linkage to liver care were seen in liver specialty clinics because of this program. Now, you can look at that number and say 30% doesn't seem very good, but I want you to think of it in the context of the patient population and the setting. These were veterans, patients who had no linkage to care prior to the implementation of our intervention. They're also a disproportionately vulnerable patient population that really has difficulty staying in care. In that context, 30% is higher than what we would have achieved without the program. But this program was not a randomized study; there was no control arm, which is a key limitation, and I'll come back to it in a minute. There are many, many lessons that we learned, and I think that's one of the important points in these efforts: you do the work, but then you need to pay careful attention to what is happening, because that will guide the next step or steps along this path. Things that worked for us align really nicely with what we heard in the earlier slides. Local buy-in, engagement, and ownership of the intervention were important. The intervention characteristics were such that people believed in it; we didn't have to convince anyone that seeing a specialist is a good thing for patients with liver disease. We also relied on the rise of natural champions, which came out of that early stakeholder engagement. Changing infrastructure and workflows, and integrating the program into the workflow, was important. And there were a few things at that higher level of context that were very, very critical. We had commitment to quality improvement at the institutional level; the institution is forward-thinking, and there is commitment. There was also the larger context of what was happening with the treatments for hepatitis C, a big desire at the VA to treat patients with hepatitis C virus infection. They were listening to us and giving us the resources we needed for this program. Those outer-context elements were critical to the program's success. But we also had a lot of challenges. The outreach that went through primary care providers did not work very well, meaning we really need to reach out to patients directly. Many patients were given appointments but were unable to keep them, suggesting we need to figure out another strategy to see them.
Virtual care becomes important here. We also had difficulty justifying why clinicians were spending their time reviewing records, because they were not meeting productivity benchmarks; that is a major issue one has to think about. I've talked a little about the control group: I'm not so sure what 30% means in the absence of a control group, though I do think it represents improvement. And also COVID. COVID came at about the same time, and then everything had to be put on hold. The important reason I'm bringing this up is that these things actually align with the larger framework we just talked about, and here they are. Though I made a little joke about it earlier, we did rely on CFIR. We looked at the outer setting and the inner setting, the issues and drivers for care that helped us, and we focused a lot on individuals. And yes, we tried different implementation strategies; many of them are listed here. We are relaunching the intervention, again learning from the lessons we have. It will have a similar dashboard to the one I talked about, but now we're going to do outreach through virtual care, so we bypass the movement through primary care providers. We're also going to try to integrate curative and palliative care up front and link the patients who really need to see a specialist to specialists. So again, the lessons we learned are changing how we modify and tailor this intervention. It's also going to go from one site to four sites, we're thinking of a hybrid implementation-effectiveness design, and we are also going to have a control group. The things we learned really helped shape the next step in this journey. One thing I'll mention in the last few seconds is a larger implementation effort that is underway, where implementation principles are being used: the Cirrhosis Quality Collaborative of the AASLD. Here we are using a learning health network and the principles associated with it, which again fit into implementation strategies, to try to implement this program at ten different health care systems. So just to end: yes, we all know there are shortfalls in how we deliver care to patients with cirrhosis, and we know there are many evidence-based guidelines and practices that could help improve that care, but they have to be implemented well, and that is where implementation science comes into play. I gave you an example of P-CIMS, and I shared a little about the Cirrhosis Quality Collaborative. There are many other examples; we heard earlier about the success of the VA hepatitis C treatment program. So I think it is important for us to at least understand the basic principles of implementation science. Not all of us can know this field to the breadth and depth that is necessary, but that's where team science comes into play. At your institutions, if you have people who are interested in implementation science, team up with them, because in the end it is team science again, and that is how we could potentially bridge this gap and deliver better care to our patients. Thank you so much.

Thank you so much, Dr. Rogal and Dr. Kanwal, for such an excellent initiation into the implementation science framework. I'm honored to introduce our next speaker, Dr. Tamar Taddei. It's a real pleasure and honor to introduce my friend.
She is the chief of GI at VA Connecticut. She is also a professor of medicine. She co-leads and is the associate program director for the MD-PhD program at Yale, and her research interests are predominantly in cirrhosis, implementation, and improving outcomes for liver cancer. Thank you.

Thank you. It's a pleasure to be here, and it's a pleasure to be among friends, because I've learned a lot from everyone at this table, and I will be talking a little bit about the mess that Shari brought up at the beginning. I have no disclosures other than to say that I'm not an implementation scientist, but I do engage in a lot of team science, and I often look to people like Shari and Fasiha for help. By way of background, my practice is in the Veterans Affairs Health System, and cirrhosis, as you know, is a common illness in the VA: there is a five-fold higher prevalence of hep C, now mostly cured, as you saw from Dr. Rogal's slides; alcohol use disorder is highly prevalent, and metabolic comorbidities are as well. In addition, the VA is the largest health care provider for liver disease in the nation, and we have this fantastic electronic health record, which has really been curated for research through the Corporate Data Warehouse. Because of this, we have operational population health dashboards that identify patients in need of surveillance for HCC, and we have national clinical reminders to alert primary care providers to perform HCC surveillance. We've also had innovations in patient tracking through the continuum of care, some of which Dr. Kanwal mentioned. So we really are in a system that facilitates implementation. A lot of what I want to talk to you about is practical: how implementation evolves, oftentimes through serendipity and other forces. The VA's national leadership framework for hepatitis C treatment was developed at breakneck speed to pivot the nation to treat almost all of our veterans, and that was done by hepatic innovation teams spread throughout the VA's regions to deliver care effectively. Once we treated hep C, we had to turn to advanced liver disease, and in doing so we set up an implementation arm, but also a policy and leadership arm: we developed a hepatology field advisory board. That field advisory board has subcommittees focused on the pressing issues in advanced liver disease, to set policy and practice. So we've really developed an infrastructure to implement, where we have leadership that actually considers the field and the VA's priorities. Now, HCC surveillance is a national metric, and we're supposed to be meeting 65 percent, or a 10 percent per year increase in HCC surveillance from baseline. That's very important, and I'll come to it in a moment. Why is this important? Because liver cancer surveillance saves lives: if you can't survey, you can't detect HCC, and you can't treat it. Performing surveillance in people who need it is the very beginning of the cascade. And we know that liver cancer surveillance saves lives; there have been innumerable studies. I'm just putting up a nice infographic from one of Dr. Singal's latest studies, looking at about 150,000 patients and showing the benefit of early detection, receipt of curative therapy, and overall survival. We don't know everything.
There may be harms in screening that we don't know about yet, but the fact of the matter is that there is enough science here to say that we should be performing surveillance. Now, linkage to liver cancer care obviously starts with identifying cirrhosis and starting surveillance, and I know that years from now we're going to have much more technology and innovation to determine who needs surveillance, with cirrhosis or without, but we're not there yet. So, very practically, how is this done? An ultrasound every six months. There was a publication that came out in 2012, by Dr. Kanwal's group actually, that showed less than 18% surveillance rates in the private sector. I have a more recent study showing about a 24% surveillance rate in a hep C cohort in the private sector. This is still abysmal. In the VA, we're actually very proud to say that we reached 44%. This is pre-pandemic published work by Dr. Rogal, but the point is we're at about 44% even recovering from the pandemic. So it's pretty amazing that we've doubled what the private sector can do. Once you detect HCC, though, you need multidisciplinary management, and that's key. So we have set up in the VA a number of center, regional, and SCAN-ECHO tumor boards so that we can care for patients with HCC in a multidisciplinary way. But even with that, there are problems. So what are the problems? And they're pretty big problems, okay? The implementation of both surveillance and multidisciplinary management for HCC remains pretty elusive. There are a lot of technical issues, starting from getting sonographers, who are poorly paid in the VA, to getting scheduling to work, show rates, and the logistics and policy issues. There's distance: the VA has a big rural population, and it's hard to get them in for a scan. The MISSION Act has also allowed us to send veterans outside, and then we can't recoup their imaging; we don't know if they were really in a surveillance program. We may lose them; we may lose them to cancer. And there are personnel issues. We need personnel to use the population health tools and personnel to see and treat patients. There's a lot of resource variability, especially for multidisciplinary management and treatment, and you also need availability of expertise. You need a clinical champion, but you never want your thing, whatever you're implementing, to be dependent on a single clinical champion, because that's not sustainable; it's a common model which is doomed to failure. So I believe that disparities in resources equal disparities in care, and this is why resource allocation needs to be looked at very carefully. This graphic, courtesy of Heather Patton, who's in the audience, shows that we have pretty good uptake of liver tumor boards: 41% of stations have a tumor board. But we have very variable accessibility to things like Y90 and transarterial therapies like TACE, for example. Really good palliative care, really good oncology care, but not everybody with HCC starts there. In fact, we actually need to be resource-heavy on locoregional therapies if we're going to detect these cancers early. This paper, which we published in 2017, shows that some people with BCLC 0 were getting sorafenib, which we really don't want to see happening. And part of that is related to resource-poor areas that may only have an oncologist and may only be able to treat the patient that way. So that is not what we want to see.
If you're really going to implement best practices, you have to do it uniformly and equitably. So let's take an implementation science approach. I'm not going to spend too much time on this; when you think about study features, we're all pretty used to clinical research, but what's the difference with implementation research? I think Dr. Rogal covered that very well. There are a lot of caveats, though. If you're doing research, you want a priori selection of your aims, you want a carefully planned and protocoled intervention, you want your outcomes to be measurable, and you want willing stakeholders. And oftentimes we fly by the seat of our pants, with a sort of if-you-build-it-they-will-come attitude, right? The challenges to even the best-planned study are disruptive innovations, shifting priorities, the need for training and consistency, and oftentimes changes in personnel. Before I get to a little case study, I like this quote by Garrison Wynn: knowledge is not power; implementation is power. And Shari beat me to the punch with "implementation is messy," so I will attribute that to Dr. Rogal. So I'm going to talk to you a little bit about this case study that we're living through right now. VA Connecticut is a level 1A tertiary care facility in the VA, and we work closely with Central Western Massachusetts, which is a level three, ambulatory-only facility. The distance between the two centers is about two hours. Central Western Massachusetts was actually one of the underperformers in the study Dr. Rogal showed: they underperformed in both HCC and variceal surveillance, the two national metrics she was looking at, compared to VA Connecticut. The innovation team had already brought us together, so we knew that center very well. And then, as Shari's work was taking off, so was the clinical resource hub, which was the VA's way to get specialists out into more remote or rural areas. The clinical resource hubs are VISN-owned. I have a VISN map up here so you can see; I'm in VISN 1, which is basically all of New England. The VISN is the stakeholder, and the VISN gives the budgets to each of the centers. This really made the stakeholders care, because they held the budget; it wasn't a central office budget. And the goal was to increase access to clinical services where local facilities had gaps in care or service capabilities. All 18 VISNs have an established CRH supported by a multidisciplinary leadership team, and the CRH provides care to veterans at local VA health facilities mostly through telehealth technology, anything you would consider virtual, see-the-patient technology. We call it clinical video telehealth when the patient is in their station with a nurse and we're seeing them from our station; that's really like an in-person appointment for them. But we can also do this on what's called VA Video Connect, where we have a virtual face-to-face through a computer, like Zoom, for example. Veterans connect with distant primary care, mental health, and specialty care teams to improve access to care. This was spearheaded in mental health; the VA has a long track record in virtual mental health.
So really we're establishing this hub and spoke, and we received funding for on-the-ground nurse practitioner and MD effort at our hub, and funding for the spoke RN to do care coordination but also to come to tumor board and make sure these patients had a cascade of care once they were in a surveillance program. There was buy-in from Central Western stakeholders, from their central leadership to primary care, with many, many meetings, and I can't stress how many. Then we had to iron out the flow, which was many more meetings. We used an RN telepresenter who could palpate the liver and spleen and was trained by us to do so; there was a kind of mini-sabbatical where that RN came to our center to learn about liver disease and became confident doing the exam. I think the logistics are key and really require a lot of planning. The most important thing is support at the highest level. We had a financial incentive; this was a VA incentive. Every VISN got money, and every VISN had to spend that money, so we had support right up front. We had to have regular meetings, but thankfully, through the work of the national program and Dr. Rogal's earlier work, we already knew this center very well. We had to have protected time to find veterans on our dashboards. And then we needed iterative improvement in documentation and communication, because every time we thought we were ready to go, there was some other hiccup, like: what if the person actually needs to go to the hospital, and there's no hospital in Central Western Massachusetts? So we had to make a flow for every contingency. And then, obviously, ensuring patients don't fall through the cracks requires follow-up and a tracking system, which thankfully we have. Our one-year data shows, pretty impressively, that we saw 316 unique patients during this time, with 441 encounters. The standard episode-of-care cost for a liver appointment in our community is $2,311 per patient, so we've actually ended up saving close to $750,000. That's money right back in the leadership's pocket, and I can't stress this enough: implementation should provide some really tangible benefit to whoever your stakeholder is. Play to your stakeholder, okay? We found five new HCCs. Four were treated at VA Connecticut; the patients had established such a rapport with us that driving two hours was okay, and their travel benefits allow them to come through VA travel. One went to the community. But that's pretty good, if you think about it. So what are some lessons learned? I'm sorry the yellow doesn't project very well. You can't expand this type of program without further developing time and training for pre- and post-clinic care: the dashboard, the tracking, interpreting a FibroScan from afar. You need to see the images, and you need a workflow for that. As an example, in our own facility we have an RN who preps clinic and tracks post-care for safety and timely, high-quality care. And I have to say that the CRH is now hiring remote nurse coordinators, which I think will revolutionize the way we can deliver care. Some more lessons learned: you really have to establish clear expectations with your funding source prior to developing anything. The thing has to have some expectations, and yes, they should be scientific, in the sense that you're thinking a priori about how you're going to launch this and what it's going to look like. But the thing also needs resources.
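On the one-year savings figure quoted above, here is a quick back-of-the-envelope check. The assumption that the avoided community-care cost is counted once per unique patient, rather than per encounter, is ours, made because that basis matches the quoted number:

```python
# Rough check of the one-year savings figure from the talk.
unique_patients = 316
encounters = 441
episode_cost_usd = 2311  # standard episode-of-care cost for a community liver appointment

print(f"per unique patient: ${unique_patients * episode_cost_usd:,}")  # $730,276
print(f"per encounter:      ${encounters * episode_cost_usd:,}")       # $1,019,151
```

The per-unique-patient figure of $730,276 lines up with the "close to $750,000" quoted in the talk.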
Whoever is providing your resources has to have clear expectations, understanding that implementation is messy and doesn't always turn out the way you anticipated. You need excellent, continuing communication with your entire team, and if you're implementing something with another team, with that team as well. You really want to develop enduring, living materials that can be updated, so that other providers can smoothly transition into whatever thing you're delivering to them and can teach other people how to take it on. The VA operates on Microsoft Teams; we can IM each other anywhere in the country anytime we want. I could say to Dr. Kanwal, hey, are you at your desk? Yep, at my desk. And then we can just go on a video call with each other. So we can do that very easily with this team. We have a SharePoint site where we can update any kind of flow in real time, so all of our workflows are up on the SharePoint site. And again, instant messaging is commonly used in a crisis: I really need to get in touch with you. We've also asked for patient feedback throughout the process, from the beginning through the end, just to ask: are you even enjoying this? You're sitting there in Central Western Massachusetts, I'm here in Connecticut, and you've never seen me in the flesh. And they say, thank you so much, thanks for explaining transplant. We've had three transplant referrals; it's been quite nice. And collect meaningful data from the very start. I know that the financial bottom line is really important to the VISN leadership, but that's not so important to me. So we are collecting data: how many of these people will remain in surveillance? How many will come to our tumor board and get treatment at our facility? How many get sent out? What does their care look like? Is it the care we would have given them? For these kinds of data points, you have to think a priori about what you want to collect. So, with the retrospectoscope, in the last few minutes, I just want to say that this is retrospective, right? I didn't set out to implement this thing; I was told that we needed to do this thing because there was money here. But still, we learned a ton. In terms of aims, we really didn't have an a priori selection of our aims; our disruptive innovation was a financial pressure, and we really had to get it done. In terms of implementing, did we have a framework? I think we probably thought about it with the RE-AIM framework, which I'll show; it's essentially thinking through how you're going to reach the person and what you're going to study. There are many frameworks, and you can get dizzied by all of them, which is why you actually need to talk it through with an expert. If I had to start this all over again, I probably would have sat down with somebody like Dr. Rogal and said, okay, what are the data? How are we really going to do this? What's our strategy? What are we going to measure? Specialty care expansion to a spoke site really was the intervention; it wasn't really linkage to HCC care. That was sort of a byproduct, and it had to be done very iteratively. The challenge is that it's really specialty care rather than HCC care per se, but sometimes you have to take a small win and a bigger win, right? And obviously, caring for these patients is a big win. The primary outcomes are enhancing linkage to care, really for cirrhosis and HCC.
The dollar savings are measurable, but I don't know that they're meaningful, at least not to me. And the need for training and consistency is a constant struggle, as it will be in any clinical arena. We had a facility-level unit of analysis, and we had very willing stakeholders, but I think the changes in personnel are an issue, because there are ongoing training needs, and you have to build that into your model. I want to make a plug for virtual specialty care, because I really do think it is the future. This was a recent paper in the Harvard Business Review by a gastroenterologist who's vice chair of medicine at his institution. The quote is that virtual care is unlocking opportunities for forward-thinking specialists to deliver unbundled consults, e.g., e-consults, which we do a lot of in the VA, co-management, and even principal care across geographic regions; in doing so, they may develop deep experience and expertise in serving specific patient segments and their referring practices. We really have built trust with Central Western, so much so that the primary care docs now seek our counsel. So we need to consider how to approach this prospectively, really through an implementation science lens. I really want to thank the Central Western teams and the Connecticut teams, and really Dr. Rogal, for putting everything in place for us to serendipitously take advantage of the CRH. Thank you.

Thank you for the work that you're doing in the VA; I think it's phenomenal. So we have heard from two speakers, mainly located in the VA, about cirrhosis care and HCC, and the next logical step is to talk about liver transplantation. The best speaker to do this is Dr. Michael Charlton, a professor of medicine at the University of Chicago, well known in the field. He will give us his insight with regard to liver transplantation, and we'll probably also hear some data from outside the VA.

I'd like to really thank the organizers and the chairs for putting this together. I've learned a lot today. I'm obviously not an implementation scientist, but like every transplant physician in the field, I'm deeply affected by it, and I've learned today, and in preparing for this, just how deeply affected we are. I'm going to speak specifically to dissemination and implementation science as it pertains to transplantation. The dissemination part is really about understanding pieces of knowledge, and in our field, of course, our feet are held to the fire: you have to take your boards, you have to take your CAQ if you're going to do transplant hepatology, and you have to repeat it every so often. And then, for the parts that are pertinent to transplant, you have UNOS coming to audit you at least every three years. So I think the dissemination part, within the confines of this Washington Convention Center, is done quite well. The implementation part, I think, is where we fall short, and that's the adoption and integration of evidence-based health interventions. In preparing for this, I wanted to see how good we are as a country at implementing the things that are really not controversial. Take, for example, breast cancer. This timeline is an image that kept recurring when I was looking into the background for this presentation: the 17-year lag between your great idea and it affecting patients.
So you have, let's say, not just an NIH grant, you have a U01, a multicenter U01, and you have this idea that you will improve mortality and morbidity for some important thing. You do your grant, you publish it, it gets into PubMed databases, makes it into reviews and guidelines, and it's implemented. It takes 17 years for that to happen. I was very skeptical of this until I looked at breast cancer. And breast cancer, again, is really not that controversial. There's always something you can find controversial in anything, but this is really not. The first evidence that mammography could help to diagnose breast cancer early was in 1966, at the far left of any of these panels for eight different countries in the West. The implementation didn't happen until 1990 in the United States, so 24 years later. And in every country where it was implemented, it was easy to see the bending of the curve for breast cancer mortality. Now, of course, it didn't bend all the way down to zero. Mammography doesn't treat breast cancer; it just helps you to diagnose it. But this is as good as it can be. So how do we perform as a country? We perform terribly. There's a three- to fourfold variance between states for something this simple and covered by almost every plan (almost every plan will cover mammography), with performance ranging from 66% to 87%. And it's not really predictable in ways that I would have guessed. Louisiana, when you look at any sort of health outcomes metric, does terribly. For this one, it does very well. So there's sort of random variance in performance for something which could be life-saving. Now, the issue in transplant: I was at an investigator meeting with 50 centers representing three quarters of transplant in the United States, and I asked the group at the beginning, how many of you have a more difficult case mix than average? Everyone put their hands up. Statistically impossible. And I think that this is how we feel in general when we look at these things like HCC screening, et cetera: oh, that's terrible, but my center's much better than that. It turns out that we're really not. So the transplant pathway for a patient starts when they're referred to us. We just heard earlier how you can go and find some of these patients through virtual care, et cetera. But eventually the patient arrives in your office, virtually or in person, with a diagnosis of cirrhosis, and you'll do your treatments. They have hep B or C, whatever it is. They'll get their six- to 12-monthly cross-sectional imaging, and they will or they won't have a decompensating event or a liver cancer. Then they go through these various stages of the evaluation process, and these bars at the bottom here really represent new providers coming into the mix. So at the referral stage, you suddenly get a nurse coordinator, maybe a transplant surgeon, a cardiologist, et cetera. And then you have that vanishingly small part of the care, the liver transplant procedure itself, which has a deep impact: in those few hours, so much is determined that only 40% of outcomes from liver transplantation are predictable by any pre-transplant parameter, which tells you that 60% is affected by individual and center-specific skill sets. So all these things run into outcomes.
So what is our performance for, say, something that's not controversial, like referring for evaluation somebody with a MELD score of 15, spontaneous bacterial peritonitis, any of the liver-related events that are widely recognized and nobody would disagree with? You heard earlier some of this. Turns out the number is 87% who either don't get referred or aren't evaluated, which means only 13% of patients, and this is a VA study, only 13% of patients, in a system that is as geared to referral as any system in the United States, were referred and evaluated. Concerningly, in their multivariable analysis of this, the strongest single predictor of not being referred and evaluated was ethnicity. Not everything could be teased out to the point of statistical significance; Hispanic ethnicity came close, and I suspect with a larger sample size it probably would have been significant too. And for things where it's obvious that you should refer someone, something like spontaneous bacterial peritonitis, we do quite well: a tenfold increase in the odds of referral if you had SBP, with encephalopathy following close behind. But we're missing most patients. And when I say most, nearly 90% of patients are neither referred nor evaluated. So I ask, what is this like: Frodo Baggins or Usain Bolt? You would like this to be as predictable as Usain Bolt in a race: if he starts it, he's always gonna finish, and almost always finish first. Whereas Frodo Baggins, you don't know if he's gonna survive five minutes into the scene. He somehow makes it through various installments of the films. But for patients, it's much more like Frodo Baggins. They're not actually likely to make it to the end of the scene. And for HCC, another not-controversial thing, as you just heard, the number is the same: 87% in a safety-net hospital in Dallas. And this is Amit Singal, the first author of the AASLD's soon-to-be-republished HCC guidance, showing that annual screening, once every 12 months, good enough to not get sued but really not very good, that's 13%. Every six months, which is what a patient should have: 2%. Terrible. And now, just recently published in JAMA, online only so far, performance in a multicenter cohort is nearly as abysmal: only 15% of patients get adequate screening for HCC in the prior 12 months. So there's so much room for improvement. I was really totally unaware of this state of affairs prior to this presentation. So that brings us to our guidelines. Can anyone guess how many AASLD guideline recommendations we have specifically related to liver transplantation? Any guesses? There's no prize or penalty. Nope. Well, there are 148 freestanding recommendations. Most guesses would be in the single digits, six or seven, but it's 148. There are 55 recommendations pre-transplant and 93 recommendations post-transplant. I read through all of them. And of those that are level 1A, so the highest-quality, strongest recommendations, things we should know and implement, there are only seven that are actionable. By that I mean that of those 27 level 1As, the great majority are things like, say, decompensated cirrhosis is an acceptable indication, hemochromatosis, et cetera. That's the great majority of the level 1As that we have: not really actionable. I mean, no one would argue with any of those, and they're pretty much acted on when the patient's in front of us. And there are seven that are actionable for dissemination and implementation changes, and four are high impact.
By high impact, I mean the great majority of patients are affected, over 80% for each of these four. So let's look at what these are. These are the four out of those 148. The first is that once a patient with cirrhosis has experienced an index complication, they should be referred, and we just saw in that first part that only 13% of patients are ever referred or evaluated. So this is a recommendation, but it's not being implemented, and it could clearly affect a large number of patients. The number of patients who die of liver disease per year in the United States is 50,000 to 60,000. We transplant fewer than 10,000 per year. Early referral for alcohol-related liver disease, and we're going to come back to that, is something that's clearly not happening: less than 2% of transplantation historically is for acute alcohol-related liver disease. Patients with acute liver failure require immediate referral, something which is transparently not happening. And liver transplant is an effective therapy for HCC. Well, that's certainly known, but as we've seen and heard earlier, that's also not transpiring. This is Philippe Mathurin's impact. He published in the New England Journal the French and Belgian multinational multicenter study of people essentially being assigned to get liver transplantation for acute alcoholic liver failure. And when that article was published, the number of sites in the U.S. that were performing, or at least admitting to performing, transplantation for alcohol-related acute liver failure went from 5 to 73, a tremendous jump. It's increasing more than any other indication. These are data that we published last year looking at the rate of increase of various indications. Alcohol-related acute liver failure: a 207% increase, more than NASH. NASH gets a lot of attention; the rooms with NASH presentations right now are standing room only, not even that, spilling into the hallways. But this is where the action is. I should point out that the outcomes, interestingly, for acute alcohol-related liver failure are better than for any other indication. So we're withholding access to this for patients, even though the outcomes, including graft survival, the thing you would worry about with recurrence of alcohol consumption, are better than for any other indication. So we wondered, is there variance between geographic regions? These are the donor service areas and the UNOS regions in the United States for organ allocation. And interestingly, it ranged from over 4% to 0%. There are whole UNOS regions where not a single patient, not "less than 1%" but zero patients, was transplanted for this indication in 2019. So if you look at, say, Maine, which has one of the highest rates of referral, and Iowa, which has not a single patient: is it because no one in Iowa drinks and people in Maine drink a lot? Well, it turns out, absolutely not true. Iowa has one of the heaviest alcohol consumptions in the United States, and Maine relatively small amounts. So the need is totally dissociated from the implementation of the procedure. And it's getting much more dramatic. These are data from the JAMA Network, looking at liver transplantation for alcohol use disorder, acute and chronic liver failure. It's quadrupled in a two-year period. The pandemic has been hard on everyone. Alcohol sales increased 25%, 30%. It looks like it's a real number. It's not just an ascertainment bias. This is a real thing.
It's gone from 2% for the last 30 years to 7% in just a couple of years' span. So I'm going to conclude a little bit early. For me, what I learned is that guidelines are a little bit like the Beatles' haircuts. Somebody did this analysis of the Beatles' haircuts according to their style of music between 1963 and 1970, and you can see the results here. It was basically the same entertaining group producing wonderful music from 1963 until they disbanded in 1970, but they changed. Their appearance changed. Their nuances changed. Guidelines are very similar. Look at, say, hepatocellular carcinoma for a good example. We started off not knowing who to transplant. The first liver transplant in the world, among adults at least, was done for somebody with hepatocellular carcinoma. Then we had the Milan criteria. Then we had the UCSF criteria. Then we could downstage to within UCSF. Now we're trying to figure out how to incorporate checkpoint inhibitors. It's a rapid evolution, much like the Beatles' haircuts. But the most important evolution that I've come away with is that we need to have dissemination and implementation science at the very beginning of our guidelines and recommendations. You cannot have 148, 150-something recommendations or guidances where none of them tells you how to get patients into your office. This has to be an absolute prerequisite going forward, so that AASLD can lead the field, or lead the fields, in making this a mandatory part of guidelines going forward. Some of the things and the tools that we've heard today need to find their way into guidances. I took part in the hep C and the NASH guidances, and I can tell you, we never even discussed implementation strategies. I wish I could have that time back and have some role in those guidelines again. I hope the next set will incorporate them. So I'm actually going to finish early, and I'm going to conclude at this point. Thank you. I want to thank all the speakers for such excellent presentations and open it up for Q&A. There are mics here. We'll wait for the audience to think. I have one question to ask all the speakers to consider. When you are designing a dissemination and implementation science research project, what are the key disciplines you want to consider up front? If you're given a chance to run a meeting with six people, who are the key players you want to include in those meetings, in the initial setup? I can go. So I think that's a good question. I think about it in buckets, maybe. So when I'm designing for implementation, I will have the patients, or the people affected, their caregivers, and providers. So not just MDs, but the providers that are actually doing the work and interacting the most with patients. And then operational stakeholders, so people in the health care system, payers, things like that. So I'm part of a study where I'm the implementation person, even though I'm a hepatologist, focusing on implementing OUD (opioid use disorder) treatment in different settings. And the first thing that we did in this big network was we actually asked all those groups of people: which outcomes should we measure? What would be most useful to you? What would change how you do things? And so that's how I think about it. I was at Intermountain in Utah for a while, and it has 40 hospitals, thousands of providers. They own the hospitals. The providers are salaried.
It's a little bit like the VA in some ways. They even have the insurance group, called Select Health, owned by them as well. And we were trying to come up with screening strategies for HCC and NASH. NASH was sort of easy, because you can build the FIB-4 criteria into the EMR. And when we reached out to the PCPs, they asked us not to do it. In fact, they were unanimous in saying, please don't put this in the EMR. We don't want to be held to it at this point. There are so many things that populate their EMR that they already feel overwhelmed. Did you encounter any of that? Or have you seen this as a phenomenon, where providers agree that there's illness out there, but they don't have the capacity or the bandwidth to be responsible for it? Yeah, we encounter that all the time. So there's a hierarchy in clinical reminders: some are mandatory, others are not. And the logic behind each clinical reminder that the VA implements on a national level has a set of metrics behind it, and you have to get it through a national group, which is heavily populated with primary care. So I do think it's a big issue. And I feel for primary care providers. I think if we could just recognize who has advanced liver disease, that would be a great start. And if we could make that easier for primary care providers, I think they might be easier on us when we ask them for things like, we would like these patients to be screened. But it's a big barrier. Part of that, I think, is an iterative issue. Part of that is also related to understanding or appreciating that the intervention actually works. There's so much debate about HCC screening. That's why we think it's important, but at the same time, we know that the evidence still needs to evolve. And when there are so many competing demands on primary care providers' time, this falls down the priority list. So I think some of that is an iterative process too. Now, with better treatments, hopefully we can change that. And I think the great example of that is hepatitis C treatment. I remember, not too long ago, we would sit down and talk about how to get hepatitis C patients into your clinics to treat them, including in the VA. With effective treatments, it was a much easier sell, or buy-in, because the treatments were effective, they were safe, and no one thought that there would be challenges. So I think once we have better treatments and more evidence, some of this will shift also. Sounds good. Thank you. Thank you so much. I think the other question which I can pose is: you were fortunate to work in a VA system, where things are a little bit more accessible than at other academic centers. How do you design a project between VA and non-VA together? What are the inner and outer settings and the messiness concept? I mean, what would you consider up front and disclose to your funder, to say, this is what we are going to test, but this is how we are going to operate? I know it's tough. Maybe you should answer that. I was just going to say, you're the person who's doing that. You're like the flagship trial, Manisha. Yeah, we are doing it. I think we are doing it the way everybody is doing it, and learning from individual strategies. I think now the challenge would be to step back and learn what VA versus non-VA sites did to operationalize the different models of palliative care.
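To make the FIB-4 flag mentioned in that exchange concrete: below is a minimal sketch, assuming the standard FIB-4 formula (age times AST, divided by platelets times the square root of ALT) and the commonly cited cut-offs of roughly 1.3 and 2.67. The function names, thresholds, and example values are illustrative only, not any EMR vendor's actual rule logic.

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """Standard FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fibrosis_flag(score):
    """Map a FIB-4 score to the commonly cited risk bands.
    Cut-offs of ~1.3 and ~2.67 are typical, but vary by guideline and age.
    """
    if score < 1.3:
        return "low risk of advanced fibrosis"
    if score <= 2.67:
        return "indeterminate; consider further testing (e.g., elastography)"
    return "high risk of advanced fibrosis; consider hepatology referral"

# Hypothetical example: a 62-year-old with AST 58, ALT 45, platelets 110
score = fib4(62, 58, 45, 110)
print(f"FIB-4 = {score:.2f}: {fibrosis_flag(score)}")
```

The appeal, as the speaker notes, is that all four inputs already live in the EMR, so the flag costs providers nothing to compute; the pushback was about who becomes accountable once it fires.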
Just for the audience, we are discussing a little bit the PAL liver study, which is funded by PCORI, where we have VA centers and non-VA centers, and they are randomized, stratified for VA versus non-VA, into two models of care. It's still unclear to us what the individual centers did, but some of them are working well, and some of them are just operationally okay, so there have been individual struggles, but anything from your insight would be welcome. One clear difference: when we looked at what strategies were in play, and we included all 73 in the world of possibilities, in the VA no one used financial strategies, because they are highly constrained. When I work outside of the VA, it's one of the most important factors. And so the VA is a nice sort of learning health care system, in the sense that you at least have one thing that's standardized and eliminated from the mess, and if we ever have a nationalized health care system in the US, that would be the case as well. But right now we don't, and so we have a lot of extra measurement to do outside the VA. Sounds good. We have a question from the audience. Yeah. Hi, I'm Archita Desai from Indianapolis. A corollary to your question, Manisha: in non-VA settings, where liver disease is such a small portion of the health care system's utilization, how have you engaged stakeholders to care for this high-risk population? Am I the only non-VA person here? I have some input. Go ahead. So we're on the south side of Chicago. We do NIH-funded studies and liver transplantation, and renal and lung and heart. We have the highest proportion of African-American recipients in the nation, and I can tell you that we struggle to engage patients. They're not coming to clinics, so even with all the best guidelines in the world and all the stakeholder engagement, the patients aren't there sometimes. The patients with illness aren't even going to their primary care doctors. We found that to be a real barrier, and now we have, for example, 11 kitchens around the city where we go and help with nutrition literacy. For virtual clinics, we found uptake was poor initially, so now we go physically, even if it's a long way away, we go physically to establish those relationships, and then we can retreat to a more virtual practice. But we found that there are underappreciated, I think, barriers to delivering care, including just turning up and being engaged in the health care system itself, for many patients. I would say that it depends on the payment model and the insurance model and the system that you work in, and it depends who you're talking to. But just to Tamar's point, you can show the return on investment of focusing on this population via the transplant money brought in, and I would also argue that a third of the population has NAFLD, so I try to talk about that a lot, too. Thank you. Thank you. I have a follow-up question related to the research aspect of implementation science. There are 73 strategies, and every strategy can have its own individual positive or variable effect in an individual health system. How do we account for all of this when you do the analysis? And how do you account for the fact that effective strategies change over time? How do you account for those impacts in your overall analysis, even in palliative care interventions or behavioral health interventions? People get trained over time. I mean, year one is different versus year three.
Because the studies are running for five years, how do you account for those things from the research perspective? Yeah. No, it's really challenging. I mean, I don't want to be the only one talking. We've done a lot of configurational comparative methods, which are based on Boolean algebra, because you have such wide data sets and you're trying to look for pathways to success. So thinking about it almost like an electrical circuit: you have all these potential combinations; what is the simplest way from point A to point B? So we've done that a bit, but I would say the science is still evolving. We recently proposed to use some machine learning to help us look at strategy-to-barrier matching, because the field right now is just at the point of expert opinion. And so I think there's a lot more room for the science to grow. But you're right, it's really complicated. And it is also very challenging. Every strategy has its own impact. How are you measuring outcomes at that particular time point? It's like asking how many breaths you took in this duration of time when you were speaking versus when you were sitting. Every time point is so continuous that I feel implementation science has to have more of a narrative aspect than a numerical aspect. There are also a lot of unique trial designs in implementation science. We use a lot of optimization designs, like MOST, the multiphase optimization strategy, and also a lot of multi-time-point studies, like the stepped-wedge study I just finished, where you get multiple time points and measures. It's a pain to do, but it tries to account for some of the complexity over time. But the point that you're also bringing up, and that's at least the way I think about it, I shouldn't say limitation, but the issue with implementation science, is that it is so dependent on the context, both institutional and temporal, that it's very hard to generalize the exact strategy from one institution at one time to another, even within the same institution, from one time to another time. So that's basically what you're pointing out, and I'm not so sure there's a way to make it standardized. I think the nature of the field is that it will always be dynamic, and the key is recognizing that and making sure it's measured in a way that one can explain to others. But really, it's very hard to transport a strategy that worked in a VA in 2020 to another institution across the country in a different system in 2022. That's, I think, the main issue with implementation science: it's very, very context dependent. Yeah, I agree, but a point to add: probably with any kind of intervention, having some framework, or having the key elements written up front, would help others learn, these are the key elements I need as a baseline. I mean, I need to have a palliative care person, I need to have a nurse coordinator, to really deliver this intervention at all, and then moving forward, all those things can become more complicated. Yeah, and I think this is the whole field of precision implementation science: what context-specific characteristics can we use to predict which strategies would work best? Even just to narrow down from 73 to, say, 10, and not be picking at random. But the science isn't there yet, of course. Thank you. Anyone else from the audience? Michael.
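A deliberately tiny illustration of the configurational idea described in that answer: enumerate strategy combinations, smallest first, and keep the ones that line up perfectly with success across sites. All site data and strategy names here are hypothetical, and real configurational comparative methods (e.g., qualitative comparative analysis or coincidence analysis) handle noise, multiple pathways, and formal minimization far more carefully than this brute-force sketch.

```python
from itertools import combinations

# Toy site-level data: which strategies each site used (1/0) and whether
# implementation succeeded. All names and values are hypothetical.
sites = [
    {"audit_feedback": 1, "champion": 1, "training": 0, "success": 1},
    {"audit_feedback": 1, "champion": 0, "training": 1, "success": 0},
    {"audit_feedback": 0, "champion": 1, "training": 1, "success": 1},
    {"audit_feedback": 0, "champion": 0, "training": 1, "success": 0},
]
strategies = ["audit_feedback", "champion", "training"]

def consistent(combo):
    """A combo 'explains' the data if every site using all of its
    strategies succeeded, and every successful site used all of them."""
    for site in sites:
        used_all = all(site[s] for s in combo)
        if used_all != bool(site["success"]):
            return False
    return True

# Search smallest combinations first: the "simplest path" idea.
for size in range(1, len(strategies) + 1):
    hits = [c for c in combinations(strategies, size) if consistent(c)]
    if hits:
        print(f"Minimal consistent strategy combos: {hits}")
        break
```

Here the search stops at size one because "champion" alone separates the successful from the unsuccessful sites in the toy data; real data rarely cooperate so neatly, which is why the panelists describe the field as still evolving.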
I have one question for the panelists up here, and that is: you have all identified the problem, specifically for HCC screening every six months, that patients don't show up for their appointments. So what do you think a health care system could do to improve that? I mean, if we think out of the box, is that something that we need to bring to the patient's home, that we need to, you know, go to the patient's home with a kind of mobile service that we offer? And if so, is that possible even outside of the VA? So I think that we should take a page out of our European friends' books and do point-of-care ultrasound. And I think we should do it in clinic, for our patients, in the US, which is a revolutionary statement. But you can screen for HCC in your clinic if you know how to do a liver ultrasound, and it's a limited liver ultrasound. It shouldn't take any more than 15 minutes. And most medical students are learning point-of-care ultrasound. I think it would be a very easy way to do it. You could scratch your head a little bit and say, well, you know, it's a really different model. It is. Also, we have a very obese population, so, you know, would it be the same in terms of quality? So I think it's a very tough question. But, I mean, listen, mobile mammography units really revolutionized screening for breast cancer. So even just having screening days. One of the things the VA does, and they don't pay well, but they have this fee-basis service where you can contract and pay somebody much better. So for our sonography problems in the VA, why not just have screening days, and pay sonographers per diem to come sit in the cafeteria with their ultrasound machines, you know, set up some bays, and just screen people? You could do it on a weekend, so people don't have to take time off of work. But it takes a lot of logistical effort to do it, and also, you have to have buy-in. So yeah, I think there are a lot of different ways that we could do it. It's just, you know, what battles do we want to pick to get it done? How many are no-shows? I mean, for me, this is a population enriched with... Our no-show rate for ultrasound is approaching 40%. I know we are running out of time, but thank you so much for being here and sharing your expertise. We are really honored to have had such a rich discussion on implementation science, learning the thing and how to do the thing. I think it was a really cool take-home that the thing itself is something you need to design, and it's definitely messy. It has so many components to consider that teasing apart the pieces is hard, but it is a thesis in itself. I mean, writing the PCORI grant was not easy. I was telling Shari as well, if you want to design and do a PhD on your own, this is the best way, and we can award the PhD certificate for that in implementation. Thank you so much for being here, and I hope you enjoy the rest of the day. Thank you.
Video Summary
The video focuses on the importance of implementation science in improving healthcare outcomes and bridging the gap between research and practice. Dr. Rogal and Dr. Khanval discuss the building blocks of dissemination and implementation science work, including evidence-based practices, implementation strategies, implementation outcomes, service outcomes, and the context of implementation. They provide an example of how implementation science principles were used to improve cirrhosis care, highlighting the challenges and lessons learned from the implementation effort. The panel also discusses the challenges and complexities of implementing surveillance and multidisciplinary management for hepatocellular carcinoma (HCC), addressing issues such as technical problems, resource variability, disparities in resource allocation, and the importance of clear expectations and communication with funding sources. They emphasize the need for ongoing training, data collection, stakeholder engagement, and context-specific approaches to implementation. Overall, the speakers emphasize the importance of implementation science in improving healthcare outcomes and the ongoing need for research and innovation in the field.
Asset Caption
Dissemination and Implementation Science (D&I) is an emerging field focused on the science of improving the adoption and sustainment of evidence-based practices, programs, and interventions. The existing time-lag between research generation and actual bedside implementation results in low-value, inequitable healthcare. This session aims to introduce D&I as a discipline with tools and concepts that can accelerate the translation of hepatology research into practice.
Keywords
implementation science
healthcare outcomes
research and practice
dissemination
evidence-based practices
implementation strategies
implementation outcomes
cirrhosis care
challenges
surveillance
hepatocellular carcinoma
HCC
stakeholder engagement