


Medical professionals reviewing documentation on a tablet
Rethinking clinical documentation integrity.

3M CDI Innovation Webinar Series


For more than 20 years, clinical documentation integrity (CDI) experts have played a key role in the health care industry. As the industry evolves at a record pace, their work has never been more important, or more challenging. 3M is here to support that crucial work through a new 3M CDI Innovation Webinar Series.


On-demand webinars

  • 2023

    • The power of automation on your CDI program

      • December 2023

      • Utilizing automation technology in your CDI program comes with many benefits. Automation can increase efficiency by prioritizing reviews, accurately assigning DRGs and providing continuous financial impact tracking. These advantages can lead to optimized revenue, reduced errors and a streamlined documentation process that positively impacts both patient care and health care organization profitability.
      • Recording coming soon.
    • (DESCRIPTION) Four people appear on a video call at the top left-hand corner of a slideshow. Slide, ON24 for a better webinar experience! A screenshot of the webinar screen, with labels pointing to the different functions and features. Text, Copyright 3M, 2023. All Rights Reserved. Slide, 3M C.D.I. Innovation Webinar Series. November 9, 2023. 3M. Science. Applied to Life. (SPEECH) Good afternoon, and welcome to today's webinar for our CDI Innovation Series, where we're going to be exploring the strategic approaches for risk adjustment and HCCs. Before (DESCRIPTION) Slide, Housekeeping. A bullet point list and a disclaimer. Disclaimer, The information, guidance, and other statements provided by 3M are based upon experience and information 3M believes to be reliable, but the accuracy, completeness, and representative nature of such information is not guaranteed. Such information is intended for people with knowledge and skills sufficient to assess and apply their own informed judgment to the information and is not a substitute for the user's own analysis. The participant and/or participant's organization are solely responsible for any compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in this presentation. 3M disclaims all responsibility for any use made of such information. No license under any 3M or third-party intellectual property rights is granted or implied with this information. 3M and its authorized third parties will use your personal information according to 3M's privacy policy. This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) we get started, I'm going to just go over a couple of housekeeping items. 
We are utilizing the ON24 platform. It is a web-based platform, so we recommend that you utilize Google Chrome and close out of any VPN or multiple tabs to help with bandwidth. There is no dial-in number, so if you are having any audio issues, check your speaker settings and do a quick refresh of your browser. That usually will clear anything out. We have several engagement sections. You can see we have the media player. You can make that section larger. You can make our presentation area larger. And in that media player, if you do need the closed captioning, we do have that available as well. We have a Q&A section. We encourage you to ask as many questions as you want. And we'll get to as many as we can at the end, but please make sure you put that in the Q&A section. We do have an attendee chat, so feel free to say hi to your friends. But the Q&A will just be easier for our moderators to see where those questions are at. And then, we do have a resources section. Make sure you download the certificate of attendance. Those are not CEUs, but they can be submitted to an accredited association to request CEUs. And we also have a couple other resources in there. And then lastly, we always appreciate you letting us know how we did. So at the close of the webinar, a survey will launch, and we would appreciate you letting us know how we did. So let's go ahead and get started. I'm going to pass it over to Carrie, who is going to introduce our speakers and go over our topic for today. (DESCRIPTION) Slide, Meet our panel. Pictures of two women. Names and titles of three people. (SPEECH) Carrie, Thanks, Lisa. I'm very excited to introduce today's topic as well as our great panelists that we have to discuss the strategic approaches to risk adjustment and HCCs. We're very fortunate to have three experts within their areas. And I'd like to go ahead and introduce them, but I'd like them to tell a little bit about themselves and their experience if they would. 
And Deann, I'll start with you, please. Thanks, Carrie. I'm Deann Tate. I'm the system director of risk or accuracy for Bon Secours Mercy Health. As you can see on the screen there, we're one of the top-five Catholic health care systems in the United States, with around 48 hospitals and 3,000 providers here in the United States. So happy to be here. Thank you. Thanks, Deann. Kristen? Hi, everyone. My name is Kristen Viviano. I am the product manager for HCC Management and Outpatient CDI here at 3M. And last, but not least, Wayne, if you'd tell a little bit about yourself? Thank you, Carrie. Hi, I'm Wayne Morris. I am the manager of the ambulatory CDI program here at CaroMont Health, just outside of Charlotte, North Carolina. I've worked as a nurse for 18 years, but I've been in clinical documentation integrity for about 10 of those. Very happy to be here. Thank you. Thanks, Wayne. Throughout this conference-- or through this session, if you do have questions, please just add them into the chat. And at the end of this discussion, we will be happy to answer as many as possible. Let's go ahead, though, and set up our discussion. Kristen, I'm going to go ahead and turn this over to you to talk about the criticality of HCC capture at the moment. (DESCRIPTION) Slide, Criticality of HCC capture. Screenshots of articles, and a graphic displaying a list of healthcare expenditures. (SPEECH) Thanks, Carrie. So I'm sure everyone has seen all of the news lately. There's been a lot of announcements recently from CMS back in the beginning of 2023. They did finalize their RADV rule, allowing them to extrapolate and recoup money from health plans who are receiving those payments for covering claims for their Medicare Advantage patients. There have been quite a few OIG audits that I've seen coming through, and various lawsuits surrounding risk adjustment. 
And then one stat that I saw that was particularly interesting from one of the OIG studies was this top right graphic here, the $5 billion of conditions among 20 companies that had no evidence of treatment. So I think that's one of the key areas that's becoming more known within risk adjustment. It's always kind of been a focus on closing those gaps, capturing those HCCs. But really, the focus is turning to the necessity of ensuring that the codes are captured in a compliant manner, and making sure that documentation supports those conditions. It's a little difficult to tell on the graphic, but one of the top conditions on there is vascular disease, and then number 2 is major depression and paranoid disorders. So there's $533 million of payments for major depression with no evidence of treatment. So it's something that we really want to make sure that there's a focus on, and to bring to light and bring to the forefront the importance of getting that documentation captured for your HCC conditions. You can go to the next slide, Carrie. (DESCRIPTION) Slide, V 28 Updates. A bullet point list, and a list of codes titled V 24 to V 28. (SPEECH) Another big change from CMS has been the announcement of the switch from version 24 to version 28. It's intentional that it's not in numerical order. I didn't forget anything. But CMS will be phasing out version 24 over the next three years, and slowly rolling out version 28. Version 28 is intended to allow the HCC categories to be remapped so that they're more in line with the changes that were made back in 2015, when we made the switch to ICD-10. So previously, the categories were very high-level and comprised of conditions that didn't always make sense based on the category description. So this is taking into account the additional specificity that's allowed with ICD-10 diagnosis coding and then doing some changes to the scoring, the category weights, and pretty much every condition. 
If it's still in, it's mapped to a different HCC category. So for those of us who have a lot of the categories memorized, it's a whole new ballgame, and it'll be a fun one to delve into. So the way that they're going to be rolling this out from CMS to a health plan standpoint is: for payment year 2024, which is comprised of 2023 dates of service, CMS is going to use 67% of the calculation on the V24 model and 33% on the V28 model. In payment year 2025, which will be 2024 dates of service, it'll flip, and 33% will be based on the old version 24, and then 67% will be using version 28. And then finally, for 2025 dates of service, which will impact 2026 payments, the expectation right now from CMS is that it will be 100% based on version 28 scoring. Some numbers with version 28: on version 24, there are about 80-- there are 86 HCCs. Version 28 is going to 115 HCC categories. So they're increasing the number of categories significantly, again, to allow for more specificity with the category description. And from a diagnosis code standpoint, they're going from just under 10,000 diagnosis codes to just under 8,000 diagnosis codes. So there are quite a few conditions that have been removed from the risk adjustment mapping. So it'll be an interesting switch. Something we're definitely keeping our eyes on. Everything I've read has said that it's expected that risk scores will decrease slightly because of the fewer codes that are risk-adjusting. CMS has also made changes to some of the weights. So it's even more important that organizations are ready for this change and have a good handle on capturing those conditions, closing gaps that they may not know about, and doing so in a compliant manner. Because on the previous slide, we did see that the documentation piece is critically important. (DESCRIPTION) Slide, Discussion with panel members. (SPEECH) Thanks, Kristen. Thanks for setting that up. I think that everybody knows that this is a very complicated process. 
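The phase-in schedule described above is simple weighted arithmetic. As a rough sketch (the blend percentages come from the talk; the function name and the example risk scores are hypothetical):

```python
# CMS V24 -> V28 phase-in blend, per the schedule described above.
BLEND_WEIGHTS = {
    2024: (0.67, 0.33),  # payment year 2024 (2023 dates of service)
    2025: (0.33, 0.67),  # payment year 2025 (2024 dates of service)
    2026: (0.00, 1.00),  # payment year 2026 (2025 dates of service)
}

def blended_raf(payment_year: int, raf_v24: float, raf_v28: float) -> float:
    """Blend the two model scores per the CMS phase-in schedule."""
    w24, w28 = BLEND_WEIGHTS[payment_year]
    return w24 * raf_v24 + w28 * raf_v28

# A hypothetical member scoring 1.10 under V24 but 1.00 under V28:
print(round(blended_raf(2024, 1.10, 1.00), 4))  # 0.67 * 1.10 + 0.33 * 1.00 = 1.067
```

Note how the same member's blended score drifts toward the V28 value each payment year, which is why the speakers expect overall risk scores to dip slightly as V28 takes full effect.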
And I think you did a nice job of setting up how it's going to really evolve in the next couple of years in complexity and in accepting new ways of doing this process altogether. So before we move on, I wanted to just ask you what your thoughts are about how this has evolved over the past couple of years and why you think this is going to continue to be such an important topic. Yeah, so one thing that I've seen over the last few years-- so my background prior to joining 3M has been risk adjustment, coding, and operations on the payer side. So in my experience over the last couple of years, we've seen a lot more provider-payer contracts switch to value-based care. When I first started on the payer side, all of the contracts I had experience with were fee-for-service. So it was really difficult to get into those providers and get the buy-in from them to capture the diagnosis codes, and then also to make sure that they were documenting those diagnosis codes. From their perspective, they weren't getting paid on diagnosis codes, so why should they take the extra time to make sure that all of that information was there? Health plans have been paid on a risk adjustment model using HCCs for years. So I think one thing that I've found very interesting, and a really positive change, is that more and more contracts between the payers and providers are moving to that value-based care setting, which means that there's even more of a focus on diagnosis codes now from both sides to help with that compliance aspect. Thanks, Kristen. Deann, do you have anything to add to that from your experience? Yeah, I agree with what Kristen has said. We started our journey back around 2015, 2016. And it was difficult at first to obtain that provider engagement because we were so heavily involved in fee-for-service, and our physicians wanted to talk about their levels, their E&M levels. 
But as the value-based contracts grew and our population health team spearheaded our initiative, it's been easier. Definitely, having clinical leadership involved has paved the way for us. But it's been a long journey. Thanks. Wayne, any other comments you would like to add? I agree with both Deann and Kristen regarding the very similar journeys that we've had. I do think that our focus here has shifted from risk adjustment just in the ambulatory setting to really encompass everyone who's part of our health system, so including the inpatient providers and even some of the ambulatory specialties. Getting them to really focus on the fact that risk adjustment affects everyone, that's certainly been something that I've noticed in our journey. And the provider buy-in comes when they actually see we're comparing apples to apples and oranges to oranges, and that the only way we can do it is through HCC documentation. Thank you, Wayne. Wayne, I'm going to ask this next question of you to begin with. And just ask, you've all been involved with this for a very long time, but what triggered your organization to say, hey, we really need to focus on this and apply resources, education, whatever the case may be? What was your organization's trigger for this? So in 2018, we were looking at some data that was shared with the organization from Medicare regarding our ACO, specifically our risk adjustment score. So when we looked at each of the separate cohorts, whether the patient had Medicare with end-stage renal disease, or whether they had Medicare because they aged in, et cetera, we noticed that every single risk score was less than 1. Well, here in North Carolina we have 100 counties. And when we look at the DHHS, how we rank as far as healthy members, we're 77 out of 100. So it just didn't seem to jibe that we would have very healthy patients that were taken care of in each of those cohorts. 
So we knew the problem had to lie in documentation and specifically the HCC documentation from which those scores were being obtained. So the leadership here graciously said, hey, we think we need a team to look at this. And it started with myself. Then one other person came three months later. And now we're a team of eight. That's great. Deann, same question. Yes, and as I was thinking back, I remember that when annual wellness visits became a hot topic, that helped us, that and the transition to ICD-10. We were really pushing an emphasis on telling the patient's story through the diagnosis codes. And that sort of evolved into what we talk about now around risk adjustment, which is accurate, precise, and compliant documentation of illness severity. So I think that ICD-10 change really helped us a lot, as well as the introduction of the annual wellness visits, and how that's one place that you can talk to the patients about their chronic conditions and how they're doing. And then, of course, the shift to more of the value-based care rather than the volume of fee for service. Thank you, Deann. Kristen, along those lines, I mean, you're working with several organizations-- many organizations, I should say, that all have different triggers or initiatives that they're trying to approach. What have you seen as one of the most common challenges or barriers for organizations that are trying to initiate or improve their HCC capture program? Yeah, so from our experience, the main challenge that I've seen has been with resources. A lot of the providers on the outpatient side, which is where 85% of HCCs are captured, a lot of them just don't have the coding support that inpatient encounters have. On the inpatient side, it's traditionally been reimbursed on a heavy focus with diagnosis coding. So that's been a common area where coders have gravitated to. 
On the outpatient side, the offices that do have coders are typically more experienced with that E&M coding on the fee-for-service side. So there's not a huge familiarity with the ICD-10 coding guidelines, which can cause some confusion about which diagnosis codes can go together, which ones can be billed in an outpatient setting, and so on and so forth. So I think, from a resource standpoint, the challenge is not having as many coders. And then also, the communication between that pre-visit side and prepping those visits, and then the providers and getting the provider buy-in with documentation, has been a challenge as well. We're trying to work with organizations to open those lines of communication up and make sure that their coders are prepared and able to communicate with the providers on a regular basis to help them out with documentation improvement. Thanks, Kristen. Deann, what about you? Have you seen barriers? Or what types of barriers have you had to overcome as you've gone through this process? Well, definitely the ones that Kristen mentioned, the amount of resources. Also, the time constraints that providers have. So the sheer volume of visits and encounters each day is pretty staggering. And they are, for the most part, doing their own coding. So our providers are assigning the diagnosis codes there in the EHR, typically right when the patient's there. And so we have a team of 11 at the moment, hoping to add more, that do some concurrent reviews. We do some pre-visits. But just the number of encounters is really prohibitive to looking at 100%. And in the inpatient setting, of course, you usually have coders doing that coding. So that's one of the constraints as well. But there's so much on providers' plates now, the administrative burden's pretty high, and they have a lot of priorities, so that's a constraint as well. Thanks. Wayne, I know that you have a team of eight people looking at this-- or part of your team. What did you do to invest in those people? 
And what did you have to prove would be the outcomes if you had this team to really focus on HCCs? So specifically looking at the key performance indicators and the HCC recapture that the team was able to do is how we were able to grow that team. As I said, it started with just myself. And then we had another member come a few months later. But as we would see from the data, from our-- specifically our Part C payers, because it was a much quicker turnaround, we were able to prove, hey, we can start at a 0.9 with this cohort, and we can get them up to 1.1, or really, as Deann was saying, paint the picture for this patient. That's how we were able to grow and get that C-suite buy-in. Thank you, Wayne. You both kind of mentioned your inpatient settings. But have you seen a difference in education or in working with the different care settings, such as specialty, primary care, or whatever? Wayne, I'll start with you since you mentioned that a little earlier as well. I think the inpatient setting was absolutely much easier, if I can be frank. I can say to you, if you're a provider taking care of that patient in the hospital, hey, your patient meets all these criteria for acute respiratory failure. And if you document that, then the reimbursement will change to this. And then this DRG will change. They get it. They absolutely, completely get it. They've heard it for years. But when we were working in the community or with our ambulatory providers, it's a very different-- And I think, maybe, Deann, you may have hit on that as well, that they're more focused on what is my E&M going to be for this visit. How can I change this from a 99213 to a 99214? Well, if you add a couple HCC diagnoses, it doesn't matter at all. So really getting them to understand what does it mean in the big picture. And it's very predictive, of course. Theoretically we could get here, but they want to see things that are very tangible right then and there. 
So that's, in my opinion, a huge challenge, a huge difference. So, Wayne, if I can just expand on your answer just a little further. How are you reaching those ambulatory people? What type of education or training? Or how are you getting these people to make that change? So any new provider that comes to CaroMont is onboarded with us one-on-one specifically, and they get an hour-long class with me. But they get continued follow-up. So like I said, to me, risk adjustment can oftentimes be pretty esoteric. But we have nice graphics, we have nice continued education that we go through with them to really allow them to see, or put into something tangible, how risk adjustment affects every part of their patient's care. And if you want to-- if you say you have the sickest patients, then look, I've been a nurse a long time; every provider I've ever worked with says, I take care of the sickest patients. Well, then, I say to them, prove it. And this is the only way that you're going to be able to prove that and get those resources for those patients. So the younger ones really get it. It's a little bit more of a challenge with those who have been doing it a certain way for some time, but the continued education and also being able to give them metrics, this is where you were, and this is where you are now, goes absolutely far and away. Thanks. Deann, do you have any comments on that as well? I know you also mentioned inpatient in a different-- and you're looking at wellness as well. So how are you making sure that these physicians are engaged and capturing this? Lots of education. But my team works specifically with ambulatory, so we have partners in the inpatient setting, but we are specifically ambulatory. We have the good fortune that, for a lot of our providers, a certain amount of education is mandatory. So we have general HCC courses that we offer throughout the year for our primary care physicians. And next year, we're also adding an advanced offering as well. 
And then we have some shorter specialty modules. Those are not mandatory at this time for the specialists, but we have been looping them in. Our primary care providers really want our specialists to be engaged and to be closing a lot of those HCC gaps for what we know is wrong with the patient. But also, we do have metrics that we share via a scorecard. And peer pressure and competition are alive and well among our physicians. And we share with them their recapture of those known HCC conditions. So we set a pretty high target. My goal is 100%, but we say 90% of what you've previously identified we need to recapture by the end of the year. We also put suspected, clinically-inferred conditions before them. But because we don't know for a fact if it's accurate, we're not going to hold them to that target as strictly. But I think the educational offerings help, and then our team is located in the population health team of our ministry rather than revenue cycle. So we work closely with a lot of clinical leaders who are medical directors in different markets and other members of the population health team that deal with quality outcomes, et cetera. And so we combine our messages so that we're not each taking up, say, an hour of a provider's time every month or every few months. We try to put all of our messages together and work closely in collaboration, and specifically with those medical directors. Because a lot of our providers want some peer-to-peer education. Kristen, do you have any other comments you'd like to add? No, I think Wayne and Deann did a great job articulating. I think there's such a difference when it comes to coding from the various places of service and even the workflow. So having those lines of communication open and having the training with providers beforehand is a great tactic to help with that documentation. Because you have to keep in mind that the providers went to school to learn how to practice medicine, not coding. 
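The 90% recapture target mentioned above maps to a simple scorecard metric. A minimal sketch of how it might be computed; the function name and the HCC category labels are illustrative, not part of any actual 3M or EHR tooling:

```python
def recapture_rate(prior_year_hccs: set, captured_this_year: set) -> float:
    """Share of previously documented chronic HCC categories
    that have been re-documented (recaptured) this year."""
    if not prior_year_hccs:
        return 1.0  # nothing to recapture
    recaptured = prior_year_hccs & captured_this_year
    return len(recaptured) / len(prior_year_hccs)

# Hypothetical provider scorecard: 2 of 3 known chronic HCCs recaptured so far.
prior = {"HCC19", "HCC85", "HCC111"}
current = {"HCC19", "HCC85"}
print(f"Recapture rate: {recapture_rate(prior, current):.0%} (target: 90%)")
```

Tracking the metric per provider is what enables the peer comparison and "this is where you were, this is where you are now" conversations the panelists describe.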
And a lot of physicians end up doing their own coding in the outpatient space. So I think giving them the support in order to feel confident in what they're doing is really important. So the fact that both organizations are doing that right as the new physicians come in, I think, is great to show their commitment to capturing those conditions compliantly. Thanks, Kristen. I know we've talked a little bit about this already, about the need for accurate documentation, and about how you're training and educating. But can you tell us what your current process looks like for capturing HCCs? Is it done within your EHR? Do you have-- And I think, Deann, you actually mentioned some post-reviews, as well as some pre-reviews. Where does that start? And how do you ensure that is being-- going through your workflows correctly? The most stable, ingrained part is the EHR alert at the point of care, and it's a pop-up. We do not force our physicians to add any codes. We give them the ability to say, no, this isn't relevant at this time, or it's not accurate. But we let them know what's on the problem list, for instance, or encounter diagnoses from the past few years that they have not captured yet this year. We call them concurrent reviews, but they're really after the encounter is closed. But before a claim has been generated, we check their documentation because we want to make sure, as much as we can given the number of encounters, that the documentation matches what they picked. We have started doing pre-visits over the past year. Our physicians, even though in the best practice advisory there's frequently a hyperlink to where a diagnosis was used before, they're in such a rush and only have a few minutes with the patient, so we look ahead of time and can put a clinical documentation note in to let that provider know where a certain diagnosis was generated. So that really is our approach, is some pre-visit reviews and some concurrent reviews after the fact. 
But we do rely heavily on the alerts in our EHR, both for known HCCs that should be recaptured, those chronic or combined ones. We don't put the acute ones before them. And then, also, for the ones that are suspected due to labs, imaging, medications, et cetera. And they're very used to that now. In fact, I was surprised that the majority wanted it to be a pop-up and all year-round. Because they said they really relied on it. And so that's the biggest part. And I think that another reason we're successful is we have an internal team of informaticists and EHR experts that can make changes. We have customized the out-of-the-box HCC tools from our EHR vendor. And we can make changes to them. We have a physician advisory council that meets monthly, practicing physicians from each market. And they let us know how those are working. And they're very frank about it, great discussions. And so, we can change the criteria for the suspects, if they feel that they're getting too many false positives. So I think it's really a group effort with a lot of physician input. Thank you, Deann. Wayne, can you tell us a little bit about your process as well, please? Absolutely. So what my team does are pre-visit reviews. And we luckily have 3M M*Modal HCC Collaborate that the nurses use to review the cases prior to the patients, usually the day before. And it helps them to focus in on the members because we can see what the visit's going to be. Of course, we don't want to review acute visits. So we focus in on annual wellness visits, physicals, or any chronic care follow-up. And what Collaborate does is basically mine the data that's been sent to it through the EHR, look at old claims, and have the diagnoses queued up and ready to go for the nurses to review, as well as any conditions that may be suspected through clinical inference, as Deann was saying. So then they're able to go through and vet those conditions. 
What we find, unfortunately, over the few years we've been here, is that some things get stuck in the documentation that the patient just does not have. And of course, in the inpatient setting, we can bill suspected or probable diagnoses, which we do not do in the ambulatory setting. So we do a lot of vetting of those diagnoses, and we also ask our providers to remove those or make some commentary when they are not clinically valid. And that integrates with 3M's Fluency Direct with Engage for the providers. So it looks like a little toolbar that overlays the EHR that they use. And when they open a patient for the visit, there's a little notification for them that has the conditions queued up that we vetted and think are possibly clinically appropriate. They're able to, with one click, insert the diagnosis into their visit diagnoses and then give us some MEAT criteria, telling us that it was either monitored, evaluated, assessed or addressed, or treated at that visit, so that it can make its way to a claim, which we then verify at the end of the process before we close the visit out. Thank you, Wayne. Kristen, I know you work with a lot of different organizations. Are there other processes that you've seen that you thought would be helpful for the audience to hear? Yeah, I think one thing I've learned is that there are so many different workflows within even the same organization. Different practices can have different workflows. So one thing that we've really tried to take into consideration-- Wayne was talking about how we surface conditions within HCC Management. We try to take that into consideration with a variety of ways that those notifications can be surfaced to the providers. And then also just helping to make the whole process more efficient. I think having somebody look through on the pre-visit before the physician is in the room with the patient makes a huge difference. 
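The MEAT gate described above (a diagnosis flows to the claim only with documented Monitor, Evaluate, Assess or Address, or Treat evidence) can be sketched roughly as follows. The class and field names are hypothetical, not the actual Fluency Direct or HCC Collaborate interface:

```python
from dataclasses import dataclass, field

# The four MEAT elements: Monitored, Evaluated, Assessed (or Addressed), Treated.
MEAT_ELEMENTS = {"monitored", "evaluated", "assessed", "treated"}

@dataclass
class VisitDiagnosis:
    icd10: str
    meat_evidence: set = field(default_factory=set)

    def claim_ready(self) -> bool:
        # Only diagnoses with at least one documented MEAT element
        # should flow through to the claim.
        return bool(self.meat_evidence & MEAT_ELEMENTS)

visit = [
    VisitDiagnosis("E11.22", {"treated"}),  # condition with treatment documented
    VisitDiagnosis("I73.9", set()),         # suspected condition, no MEAT evidence yet
]
claim_codes = [dx.icd10 for dx in visit if dx.claim_ready()]
print(claim_codes)  # ['E11.22']
```

The point of the gate is the one the speakers keep returning to: a code without supporting documentation should never reach the claim, regardless of whether it closes an HCC gap.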
Because it's taking some of that work off of the physician; it's having someone else take a peek and say, here are the things that you should consider evaluating when the patient's in to see you today. But then, also, we do have a workflow that allows the post-visit review to take place to make sure that the codes that are going to be billed on the claim are supported in the documentation. So we're really trying to close the gap on both ends and make sure that all of the conditions are presented to the physicians, whether or not they've been billed in the past. We also, like Wayne mentioned, look for the suspected conditions that haven't ever come in on a claim to capture those new gaps. But also, on the post-visit side, making sure that those conditions are satisfied. Like Deann mentioned on her side, post-visit review is critically important, so making sure that documentation supports the codes is just as important as getting those gaps closed to begin with. Thanks, Kristen. I think we've talked a little bit about each of your organizations, and Kristen's talked a little bit about the different organizations she's worked with. It seems like everyone has a slightly different role, different people involved. I know, Deann, you said you work with Pop Health. Can you talk about some of the roles that are involved in helping to create or to drive an HCC process? Who are you pulling into these conversations to make sure that all the right stakeholders are engaged? And Deann, I'll start with you, if I didn't already mention that. Yeah, so specifically on our team, we have seasoned coders that have a lot of experience in education. We were generalists before we got into the niche that is risk adjustment, and so we focus on education as well as reviews and interactions with our payer partners. Everyone is a CRC, and we're adding some more coders to help do strictly chart reviews.
We partner within our clinical integration team with quality outcomes managers, who are usually clinical in background. And what I didn't mention also, and I meant to, is that not only is our team composed of CRCs, but we have several nurse coders as well on our team that have joined us over the last two years. So we do have that diversity now in backgrounds on our team. But we work with the quality outcomes team and with folks that are clinical outcomes managers, who are really our liaisons with operations and practice staff. Medical directors, as I told you. And we work very closely with group chief clinical officers. So across all of our different locations, we work primarily with those in Virginia, South Carolina, Ohio and Kentucky. We have clinical leadership in each of those areas that supports us and helps deliver our messages, but we also work with our financial leadership, and then also with our revenue cycle team for coding discussions and questions about billing and reimbursement. We do have a lot of calls with payers, typically our MA payers that we have large contracts with. And they give us a lot of valuable information, but they also suggest that we close lots of suspected HCCs. And so we vet those before they go to our providers. Because one source of abrasion was when we brought lots of external diagnoses right into our alerts; we had already captured 90% to 95% of those with our suspecting algorithm, so it was a lot of extra noise to the providers. So we have a lot of folks, in clinical and coding roles, that contribute to this. And as Wayne was talking about how they're utilizing the HCC management tool, I think having something of that nature, which we're moving toward, to help you prioritize your pre-reviews and your post-reviews is really critical. Our EHR has some reporting functionality, but we're looking to really expand that. We need robust, actionable data for our providers. And that's our goal over the next year.
So, Deann, just a follow-up question. You just mentioned a lot of different stakeholders that are involved in this. How do you get everybody on the same page? I mean, either you've got a million meetings, or else you have a lot of efficiency already built into this. How do you get agreement and collaboration with all these people? I have a lot of partners. And I'll tell you that our compliance department and our legal department are great friends. And so-- they are. Everybody's probably laughing. But they're very helpful. And I'll tell you that our team leader, who is our chief medical officer and vice president of clinical integration and population health, is wonderful, very supportive, and understands our role, as does our chief ambulatory medical officer. So I've got leadership that supports our mission and our approach, and agrees with us. And we're pretty risk-averse. We're a bit conservative. We want to be compliant with our documentation, and we want to be very cautious not to generate an atmosphere of competition for risk scores among our providers, because we don't feel that would be appropriate. So we have that good support from direct leadership, as well as compliance and legal, and that has really helped tremendously. And I have a zillion meetings, yes, and a fantastic team of 10 that are so supportive with that. So it's a constant juggling, absolutely. Thanks, Deann. Wayne, do you have that same amount of meetings and stakeholders? Absolutely. I think you hit the nail on the head, Deann. And I have to giggle because, yes, I think we have to make sure that our payer partners feel the most important. So we certainly make sure that they do. But they're very instrumental in our program, and especially in the data that we receive, and I really feel as if it's benefited their members as well and the care that they receive.
So that makes us feel good as a team when we're looking at, hey, what do we get out of this job, and our job satisfaction. But I can't agree with you more, Deann, about our partnership with compliance either, especially at the inception of this program, making sure that the metrics we share keep us all on the same page. And we also partner with pharmacy here and with the quality staff. Some of their documentation is absolutely instrumental in the services that the patients receive or in risk-adjusted diagnoses. Take, for example, the quality measure of blood glucose control. Well, if a patient has Alzheimer's and that's appropriately documented in the record and is submitted on a claim, then that patient shouldn't be in that measure. So we are able to partner with a lot of different specialties here. But yes, it does add meeting after meeting. [LAUGHS] Yeah, yeah. It sounds like it-- well, we know this, right? It's complicated. It's always complicated. But I do have a follow-up question to that: do you have a physician advisor that brings your physician group into the discussion, or do you use your CMO? How do you make sure that the physicians are also always engaged in this process? And Wayne, I'll start with you. We absolutely do. We have a physician advisor, and he has some assistance; we have one for each specialty, actually. So he really champions our message. We put the message together, and he trusts us and backs us up 100%. And also for the specialties, we have those physician advisors and physician champions for each one. And we work incredibly well and collaboratively. Thank you, Wayne. Deann, same question: a physician advisor or champion of some sort? Not just one, I have several. And not only are they still practicing physicians who are also administrators, but some of our practicing physicians that carry the role of population health medical director are very tuned in.
And then a couple of the other members of that physician advisory council that I talked about before are major influencers among their peers, and most of them have more than a working knowledge of the EHR that we're in. And so, we're able to share our message with them, and it cascades. Communication is always a challenge, trying to get to every provider, boots on the ground. But we do have that partnership with several physicians, which is very critical. And we'll probably talk about this later, but that partnership that we both, Wayne and I, have with compliance is critical as well, because they do audits. And so, we have input into those. And the communication around those is also helpful for us. I was in compliance for a while. I've been coding for 23 years now. And I was a compliance specialist and audited E&Ms for many years. And so I understand the struggle with that, but also how valuable that can be. So we try to leverage all of our collaborative partnerships and the data that we're each gathering, and share that. Thank you, Deann. Kristen, for somebody that's just starting out with an HCC program, or somebody that wants to improve upon their HCC program, I know that we've heard from both Deann and Wayne about the need for a physician advisor. What have you seen, and what has been your advice to new organizations looking to start this program? Yeah, I think Wayne and Deann really highlighted how many different areas need to come together in order to have a successful risk adjustment program. It's not something that can be done just through working with the coders, just through capturing those conditions on the claim. You have to have buy-in from all of the different areas, from financial to quality. Wayne made a great point with the partnership between risk adjustment and quality. There's a lot of overlap between the metrics, and ways that each area can help the other.
And then having a physician champion is so important because, at the end of the day, you are asking the physicians to change something about what they're doing. I think education is crucially important to show them that even though you're asking for more documentation, you're not suggesting that they stay up until 1:00 or 2:00 in the morning. For the most part, conditions can be captured with just one or two sentences showing that the condition is active. So it's about having that relationship between physicians and coders, allowing the coders to really explain what's needed from a documentation standpoint, and having a physician champion who sees the importance of it to help ease the other physicians, who might be a little more reluctant to change, into agreeing to make these changes and add that little bit of extra documentation when it's needed. Thanks, Kristen. I want to go back to something that you mentioned as well, Deann, about compliance. So how are you ensuring compliance? Given that legal and compliance are your best friends, how are you utilizing them, and what are you doing to make sure that your compliance is always in check? Yeah, we do a lot of monitoring of documentation at our organization, which is really helpful to us. So they'll conduct some chart reviews throughout the year. My team does those chart reviews that I mentioned previously, the post-visit ones before claims are generated. And we always make sure that we look at things from that bi-directional aspect. We can add a code if the support is there, but we're certainly not going to let a code go out if the MEAT is not there and it is not supported by documentation. And that's something that I think others have maybe overlooked. But we're very conscious of that. But yes, our compliance partners do a good bit of monitoring. They share those results with us.
And we sit down and we discuss them, because everybody that's a coder knows there are gray areas. So we talk those through, make sure that we're on the same page with anything that they might have noticed as an opportunity for documentation improvement, that we all agree on it, we understand it, and present that united voice for here's how you can improve. So we partner with them. And then, also, in my spare time, I read a lot of OIG audits and DOJ lawsuits, and learn what not to do from those. Another thing is our technical team; we call them the HCC build team, and that doesn't really accurately describe everything they do. But we built BPAs in Epic, so we call them the build team and the informaticists. And we understand that the OIG is looking at high-risk diagnoses, or, as I call them, high-value targets, right? So they're looking at CVAs in the office. They're looking at acute MI being billed in the office and after 28 days, things like that. Even though acute MI might not be something on our radar, we partner to look at those diagnoses. And then we also try to find ways in our EHR that we can do some automation to help with problem list updates for those types of codes. So we utilize the technical and the human aspects there. And we find that it's working; it can always be improved and we can always do more. But at least that's a winning strategy, given the resources that we have. And we will expand it and extrapolate it as we can. I want to take note that you and Kristen both have the shared hobby of reading OIG reports. And Wayne, you might as well. [LAUGHTER] Yeah, that's a fun airport read, I guess. Anyway. Wayne, same question about compliance. It doesn't show on my camera, but yep, I have the compliance guide right here. When I can't go to sleep, that's my best friend.
But [LAUGHS] I have to echo exactly what Deann was saying; that is absolutely how we partner with compliance from the very beginning, [INAUDIBLE] what our messaging is. Our compliance coding staff do audits of all our specialties and all our primary care providers. And if there's something that they're seeing in the documentation that may be problematic, they come to us and we partner on our messaging as well. So absolutely everything that Deann said, we do as well. Well, I'm glad you are also getting some late-night reading. I guess I'm going to have to take up a new hobby. Oh, you'll love it! Anyway, I want to leave a few minutes for questions at the end, so just a couple more questions. I'd like to know, and we'll start with you, Wayne, how has the implementation of an HCC program positively affected your organization? What are some of the benefits that you have seen? So the most obvious would be an increase in the risk scores, which we set out to do. Because, as we said, we found it hard to believe that our members were healthier than the average Medicare patient. So we certainly saw an improvement in those risk scores. We saw an improvement in our risk adjustment for the inpatient value-based purchasing measures. I'm specifically speaking to our readmissions and our mortality rates; we saw vast improvement in those because we knew people just weren't looking at these diagnoses previously. And then when we speak to our [INAUDIBLE] partners, not only did we see metrics rise there as far as the risk score, but we saw a rise in HCC conditions, which they count and look at. In one of our contracts, we doubled the number of HCC conditions that we had, with 900 new members. So we knew that the increase in members really didn't contribute to that as much as the HCC program did. And then, as I said, it really helps the patients, because we saw changes in those plans. So huge impacts across the board. Deann?
Yeah, I would echo what Wayne has said. We've seen an improvement in the recapture rate of those known chronic HCCs among our primary care and our specialty providers, which has led to a more accurate risk score and accurate, appropriate reimbursement. We've seen identification of patients that can be included in a care initiative. So focusing on some of the HCCs, such as CHF, we've got a more accurate registry of those patients and that population, and we can pilot some care management programs with them. And we've seen improvement in quality outcomes. Because I have coder after my name, I get to lead the CPT II work group, which is highly technical in nature, making sure that the right CPT II codes go out on A1Cs and other quality measures. Our partnership with them and the focus on those diagnoses has improved that. And you mentioned pharmacy earlier, Wayne; those diagnoses that are exclusions from some of those requirements are helpful. So I think we've seen both clinical and financial improvements, which are appropriate, based on what we're doing with our patients and how we're treating them. Thank you, Deann. Kristen, a question for you. As you have worked with a lot of organizations, how have you seen this affect some of the quality programs? Or where are HCCs leveraged for some of the quality improvement programs or care coordination? Yeah. I think the tie-in from HCCs to quality programs is really important. We mentioned it a little bit earlier. But you can't have accurate quality metrics if you don't have an accurate idea of what conditions your patients have. So if you have diabetes erroneously captured for a patient, then they're going to be put into quality metrics that aren't applicable to them. Likewise, if you're missing a condition, like Wayne mentioned with the Alzheimer's example, then they're going to be in metrics, again, that they shouldn't be in.
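The Alzheimer's example that keeps coming up can be made concrete with a small sketch of filtering a measure denominator by claim-submitted exclusion diagnoses. The measure name and exclusion list here are hypothetical assumptions for illustration; real quality measures define exclusions through precise code sets, not condition names.

```python
# Hypothetical measure-to-exclusions mapping (illustrative only).
MEASURE_EXCLUSIONS = {
    "glycemic_control": {"alzheimers", "advanced dementia", "hospice"},
}

def measure_denominator(patients, measure):
    """Drop patients whose claim diagnoses include an exclusion.

    Mirrors the point above: if Alzheimer's is appropriately documented
    and submitted on a claim, the patient shouldn't be in the
    glucose-control measure.
    """
    exclusions = MEASURE_EXCLUSIONS[measure]
    return [
        p for p in patients
        if not exclusions & {dx.lower() for dx in p.get("claim_diagnoses", [])}
    ]
```

This also shows why missing a condition cuts both ways: a patient with undocumented Alzheimer's stays in a measure they should be excluded from.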
So there's a really big tie-in between quality programs and risk adjustment coding, from capturing those conditions. In my experience, I've worked with quality programs where they've helped identify ostomies or amputations, because those are the types of things that they might see in a chart that are an easy win from a risk adjustment standpoint. We need to capture those, but they're often missed when it comes to being assessed in documentation. And then the risk adjustment coders will keep an eye out for diabetic eye exams or mammograms and colonoscopies. So it's really a good partnership, I think, between the two different areas within an organization, to work together so that they're not duplicating efforts. And I know I have the health plan background, so I'm the one who has the experience sending out all those record requests. But with the [INAUDIBLE] record pulls and the risk adjustment record pulls, less duplication, especially with chart review requests, is always a good thing. So, as much as possible, trying to overlay those two efforts is really beneficial. Thanks, Kristen. Again, I want to give a couple of minutes for questions, so I'd just like to end by asking both Deann and Wayne to give your advice to somebody that's just starting a new program. What are some of the key things that you can recommend they start with, or what are some of the things that you would say, let's do this first? So, Deann, if you don't mind, I'll just start with you. I think I would find a friendly physician partner that's interested in this, number one. That's key. Start with a partner, a clinical partner, that is an influencer among their peers. And a very, very experienced coder or two to start with, because you really need to understand diagnosis coding. That's the heart of it. Somebody that's really familiar with ICD-10 guidelines. And I would start with those two.
And then try to cultivate some clinical leadership that is supportive for the clinical reasons that we do this, not just the financial reasons. What is important to physicians? The care of their patients. So make sure that you can tie this work to that and show how it benefits not only the patient, but the physician and then the organization as a whole. Thank you, Deann. Wayne? I completely agree with you, Deann. You have to start with a physician champion. Hopefully, they're in the C-suite, but if not, partner with one who is. I think that messaging is very important. Because, as I said before, a lot of the stuff that we do is prospective, and it's not very tangible to say, hey, if you add peripheral vascular disease to this record, you get this amount of X, Y and Z. But not all providers care about reimbursement, specifically. So that physician champion who really understands the people who may report to him, or his colleagues, and knows the specific messaging that works for them, I think is integral. And then, I think, really narrow your focus at first, because ambulatory CDI can be huge. Do we want to look at these specific HCCs? We know in version 28 we're going to have 115. So you probably don't want to tackle all 115 at first. Take those small wins. And then as your team grows, I think your scope can also grow. Thank you. Thank you to both of you, for your time and your efforts. Kristen, I'm going to just go ahead and ask Lisa to bring out any questions, unless you have one final thought. No. I think Lisa going with questions would be great. I see that there's a couple in there. I've been trying to keep an eye on them throughout the presentation. Yeah, absolutely. The first question I'm going to ask is: what is the HCC management tool that we have been referencing? Is that a software that is offered by 3M? It is. Oh, go ahead, Wayne. Oh, I'm sorry. You go. Oh, no, no.
I thought you were-- because I know I keep referencing it. I'm sorry. But, yes, we do use 3M's HCC Collaborate. So that's the software program that the coders and nurses can use to aggregate those HCCs, either ones it suspects or ones that have been billed in prior years. And it will mine the data that's connected to our EHR, which here is Epic. And then it's integrated into that system. And the providers use what's called Fluency Direct Engage. So that allows the work that we do in Collaborate to communicate with Engage for those notifications, or queries, as we probably colloquially know them. And we use Epic as well. And we are in the process of implementing HCC Management now. So, yeah. And I think that HCC Management ties in with, I saw that there was a question about, one of the resources. Because of the connectivity between the Collaborate part of HCC Management and the Engage part, it helps with that real-time communication that was mentioned in that second bullet point. And then, also, it allows you to operationalize and really work off of actionable data. And if you could, just as a kind of follow-up, I know we've also mentioned our CDI solution. For ambulatory, can you go into what our CDI solution is? Yeah. So we do have inpatient CDI programs. Right now, we have CDI Collaborate and CDI Engage One. That helps with the query side on the inpatient. And we are working on, in 2024, expanding the CDI components and capability into the outpatient space, both within HCC Management and then also, potentially, in other ways as well. So I think that outpatient CDI is really something that's growing. I think it was Wayne who mentioned ambulatory CDI. So it's something that's important and becoming more common in the outpatient space. I think that that's a great opportunity as well. Great. Well, since we are right at time, we want to give everybody a chance to get on to their next meeting. (DESCRIPTION) Slide, That's a wrap!
(SPEECH) And so with that, we really appreciate, again, our speakers today. It was great to have those two perspectives from different organizations. So thank you to you both. If you are interested in learning more about our solutions, there is a "let us know" section in the middle, right below the presentation area. So please feel free to complete that form. Or, if you want to, let us know in the feedback area of the survey, just so we know who to contact. Again, we really appreciate your time today. (DESCRIPTION) Slide, Thank you. (SPEECH) Make sure you download that certificate of attendance. Once you close out, you won't be able to download that, or the e-guide that we've provided. And, again, we always appreciate you letting us know how we did in the survey. So with that, Carrie or Kristen, do you want to say anything to wrap our session today? I'll just jump in before Kristen and tell our panelists: thank you so much for an incredible exchange of information. We really appreciate your expertise and your time committed to this and helping to educate us on HCCs. What Carrie said. Thank you for having us. We appreciate it. Ditto. Thank you. Yep, agreed. One last thing: we did record today's session. So make sure you check out our website, where you will be able to access the recording in the next couple of weeks. Again, we appreciate your time today, and have a good rest of your Thursday. Thank you. Thank you.

      Intro presentation slide and presenter thumbnails on the left.

      Explore strategic approaches to risk adjustment and HCCs

      • November 2023
      • Incorporating risk adjustment and hierarchical condition categories (HCCs) within value-based care can help health care organizations to better manage patient populations. Accurate risk adjustment helps capture a patient’s chronic conditions and the complexity of a patient population. An integrated approach to HCCs can help health care organizations capture and document chronic conditions that reflect each patient’s burden of illness, provide insight into care utilization and support appropriate reimbursement for the care provided. Watch our panel discussion with Wayne Morris, CaroMont Health; Deann Tate, Bon Secours Mercy Health; and Kristen Viviano, 3M Health Information Systems, as they discuss their strategic approach to risk adjustment and HCCs.
    • (DESCRIPTION) A video conference call and slideshow begins. In a display in the corner, Tami McMasters Gomez smiles, wearing a blazer. The slideshow title screen is labeled with a 3M logo and text, Science, applied to life. An image depicts two business professionals smiling as they regard a document. (SPEECH) Good afternoon. And welcome to our August CDI Innovation webinar series, where we have a really great speaker with us today from UC Davis, where we will be talking about engaging physicians proactively with AI-powered CAPD and improving documentation integrity with AI-powered CDI tools. (DESCRIPTION) Slide change. A bulleted list labeled Housekeeping. (SPEECH) Before I pass it over to our speaker, I'm just going to go over a couple of housekeeping items. We are utilizing the ON24 platform, and this is a web-based platform, so we encourage you to use Chrome and close out of VPNs or multiple tabs. That'll just help with the bandwidth. If you are experiencing any audio issues, do a quick refresh of your browser. That usually fixes any problems. And check your speaker settings. Within the ON24 platform, we have several engagement tools. There is the Menu bar at the bottom, so you can minimize the different sections if you want to. You can also make them larger. Within those engagement tools, we have the Q&A section. So please ask as many questions as you want. We'll get to as many as we can at the end in that Q&A section. In the Resources section, we have the presentation for download, as well as the certificate of attendance that we give during the live session, which you can use to request CEUs from an association like AHIMA. Within the media player, if you do need closed captioning, you can turn that on as well. And at the end of the webinar today, we really appreciate you completing the survey to let us know how we did. (DESCRIPTION) Smaller text at the bottom of the slide reads as follows.
The information, guidance, and other statements provided by 3M are based upon experience and information 3M believes to be reliable, but the accuracy, completeness, and representative nature of such information is not guaranteed. Such information is intended for people with knowledge and skills sufficient to assess and apply their own informed judgment to the information and is not a substitute for the user's own analysis. The participant and/or participant's organization are solely responsible for any compliance and reimbursement decisions, including those that may arise in whole or in part from the participant's use of or reliance upon information contained in this presentation. 3M disclaims all responsibility for any use made of such information. No license under any 3M or third-party intellectual property rights is granted or implied with this information. 3M and its authorized third parties will use your personal information according to 3M's privacy policy. This meeting may be recorded. If you do not consent to being recorded, please exit the meeting before the recording begins. (SPEECH) So again, we have a really great speaker today. (DESCRIPTION) Slide change, Meet Our Speaker. A headshot of Tami McMasters Gomez, MHL, BS, CCDS, CDIP, CCS-P, AHIMA-Approved ICD-10-CM/PCS Trainer. Director, Coding and CDI Programs, UC Davis Health, Sacramento, CA. I started my journey in healthcare 30 years ago as a file clerk in medical records at a very small rural hospital. It was there where I learned several important skills, including coding. In 1997, I accepted a position at UC Davis Health, and during my 26 years here, I have worked in a variety of roles: Coder, Auditor, Supervisor, Manager, Director. I have really enjoyed the growth and challenges that have come with my various positions at UC Davis. ACDIS 2021 award for most diverse CDI program in the nation, UC Davis. ACDIS 2022 CDI Professional Achievement Award.
(SPEECH) Tami McMasters Gomez has been on several of our webinars. You might have seen her on the stage at ACDIS presenting their great story. And we're really excited to have her here with us today. And as you can see, she has a lot of credentials behind her name. She's an approved trainer, and she is the director of coding and CDI programs at UC Davis. She's been a part of the medical community for quite some time. And her expertise and skills are really going to be showcased here today with the story that they have at their hospital. If you want to learn a little bit more about her, her biography is in the Speaker section as well. So I am going to go ahead and turn things over to Tami, who's going to go over the agenda for today and get things started. Tami. Yes. Thank you so much. And thank you for having me today. I look forward to this presentation. (DESCRIPTION) Slide change, Agenda, with eight numbered bullets. (SPEECH) So today, we're going to talk about: engaging physicians proactively with some of the artificial intelligence and AI-powered tools that we're using; how we use data to provide feedback and show the return on our investment; how we use our physician advocate team, which is a team of physician trainers that are Epic builders and engage with our providers on the voice recognition side of things; understanding the impact automation has on some of your key performance indicators; improving documentation integrity with those tools and how you can leverage that to improve those KPIs; assigning initial reviews for inpatient CDI and how we've automated in that space in the UC Davis inpatient CDI department; and how we're leveraging multiple applications, including prioritization tools, for our concurrent reviews. (DESCRIPTION) Slide, Who We Are. A few highlights about the UC Davis Health Medical Center, with nine bulleted items. (SPEECH) Just a little bit about who we are. We are a large academic medical center.
We currently have 625 multidisciplinary specialty beds. We are in the process of building a new California tower, which will add an additional 75 beds. We are also in the process of building a new state-of-the-art community surgery center. We are expanding significantly in the ambulatory space. We serve over 33 counties, covering 65,000 square miles, north to the Oregon border and east to Nevada. We've been consistently named the "Most Wired" hospital in the USA. We are ranked Sacramento's top hospital by US News & World Report and among the nation's best in 15 medical specialties. We have been recognized as the best hospital five years in a row in the greater Sacramento area. We currently have 15, actually the number is now 16, inpatient CDI staff and 9 outpatient CDI staff. And our program started around 2008. (DESCRIPTION) Slide. Organizational chart, Health Information Management. A flow chart that lists six levels below Tami Gomez, director. (SPEECH) A little bit about our org chart. I am the director of our programs, including the physician advocates, which are those physician trainers and educators that go out and talk to physicians about documentation improvement. I have the inpatient, ED, and H coding units under me. I have our inpatient and outpatient CDI programs. And we also have a team that monitors our quality outcomes and our data governance around coding and some of those coding initiatives that drive quality outcomes and drive our scoring in US News & World Report, Leapfrog, you name it. (DESCRIPTION) Slide, Disclosure. A bulleted list of organizations. (SPEECH) So this next slide is really just a slide to indicate that we have no affiliation with the following organizations, applications, and so on. (DESCRIPTION) Slide. A poll question appears. (SPEECH) OK. So before we get started, I'd like to start with a couple of poll questions. The first question is, does your organization have a CAPD in use?
If you could go ahead and submit your vote and let me know, yes or no, or if you're in the process of implementing. Let's give it another 20 seconds or so. We've got about 45% of the attendees who have responded. (DESCRIPTION) Slide. The results of the poll question appear. (SPEECH) OK. The next slide indicates about 45% say yes, 42% no, and about 13% are in the process of implementing. So hopefully, for those of you that have already implemented this, this is information that will be insightful moving forward with how we have continued to grow in that space. (DESCRIPTION) Slide. A second question appears. (SPEECH) The next poll question is, in your organization, are your CDI assignments automated? If you could go ahead and answer that question for me at this time, it'll give me a perspective of how and what others are doing organizationally. (DESCRIPTION) The options are A, yes. B, no. C, Hybrid. The results of the question appear. (SPEECH) OK. Let's see. So about 34% say yes, 54% say no, and about 11% have some type of hybrid method. (DESCRIPTION) Tami's video screen disappears. Title Slide, Inpatient/Outpatient CDI. (SPEECH) So before we get started, I'm going to go ahead and jump off camera so that I can focus on the content of this presentation. But today, we're going to talk about how, on both the inpatient and outpatient CDI side, we've used strategies, and what our strategies are, to engage physicians around improving our key performance indicators specific to CDI operations. (DESCRIPTION) Slide, Artificial Intelligence and CDI, with a bulleted list of phrases. (SPEECH) Just a brief disclaimer here. So AI-powered tools provide real-time insight for physicians and CDI teams to drive clinical and revenue integrity using NLU and NLP, which is Natural Language Understanding and Processing. Many of you are probably familiar with those terms. AI aids in our prioritization, the cases that really matter.
AI aids in the managing of workflows and the capture of risk model diagnoses. (DESCRIPTION) Slide, Journey to digitization via implementation of 3M products. An image of a paved road labeled with years. An arrow extends from the end of the road past the year 2022. Several phases are indicated along the road. A bulleted list shows key customers served. (SPEECH) So a little bit about our journey at UC Davis. So we embarked on our journey in 2019 to implement the 3M 360 encoder, including the inpatient and ED coding as well as our CDI applications. In 2020, we looked at how we could implement the pro-fee application with 3M in our clinics. And then phase 3 was introduction of our CAPD, which included our NLU and NLP. And phase 4, which we're currently in and still working towards implementation, is our HCC management, which is our outpatient application that we're using to help basically enhance the workflow in the outpatient CDI space. Some of our customers are providers-- obviously our coders, our CDI, our quality team, our compliance, and our billing offices. We also work very closely with IT in some of this. (DESCRIPTION) Slide, Physician benefits of A.I.. Five boxes appear, separated by arrows from one to the next. (SPEECH) So the benefits of AI really are the care and documentation gaps that are closed before the note is saved in the electronic health record, with implementation of in-workflow nudges that are powered at the point of care by advanced AI. So what that means is our physicians are actually being nudged at the point of care while they're documenting the patient record to say, hey, provider, we see that there's an indication here that the patient is encephalopathic. Can you further specify the type of encephalopathy? So it's not a retrospective query. It's not something that they have to leave their current workflow to go address, which has been meaningful for our providers, or so the feedback indicates.
Retrospective queries can be minimized through using this, and proactive clinical awareness is delivered through that EHR real-time nudge. Concurrent clinical summaries of information for providers reduce the rework, as I mentioned, going out of Epic into the inbox or into a co-sign notes folder or whatever your workflows are. It mitigates that. It mitigates that administrative burden and increases their ability to spend more time with patients. (DESCRIPTION) Slide, H.I.M. physician advocate program supports CDI. A bulleted list is labeled clinical documentation content training. (SPEECH) So our HIM and our physician advocate program support CDI and clinical documentation content and training. So the inpatient CDI team builds and supports query templates. They do that both for the inpatient and outpatient, and they help support the clinical content and build behind the CAPD nudges. At UC Davis, we've taken a number of the queries that are in the content guide that's provided by the 3M encoder. But we also have used a lot of customization because that's available to us as an organization. And then they also perform special projects. So this team again is a team of Epic trainers and builders that have coding backgrounds, clinical backgrounds, but also understand how to navigate Epic and use those tools efficiently, but also train our providers on how to document better through enhancement of, you name it, smart tools, smart lists, smart phrases, and then help us with training the providers on the use of the CAPD. (DESCRIPTION) Slide, HIM physician advocate program supports physicians. Three boxes appear, labeled goal, benefit, and physician training. (SPEECH) The HIM and physician advocate program supports physicians by making sure physician training is ongoing starting day 1. So anytime we have a new physician onboarding, we provide them with an overview of what the CDI program is and all the tools that they can use to enhance documentation, reduce note bloat.
You name it. Ongoing training enables our providers to be more proficient in CDI, but in documentation specifically, and effective in their job. And it allows great clinical documentation to impact better coding, billing, and reporting. Our physician training includes all clinical providers that are facing Epic or EHR applications, and training in those applications in and of themselves, so how they can navigate better documentation through using fewer clicks, fewer buttons, bringing in smart phrases, [? top ?] phrases. And then with that efficiency and functionality, they have more time to spend at home. And we do have a wellness officer at UC Davis who has a large promotional wellness program going on where we're trying to get physicians home in time for dinner to be with family. (DESCRIPTION) Slide, phase one, creating partnerships. A bulleted list of five items. (SPEECH) So what we did is when we began our journey with CAPD, we started by creating partnerships. We partnered with our clinic managers and our physicians. We attended physician staff meetings to socialize the project. We piloted and validated before any go-live. We had almost a year-long pilot before we were willing to go live with any product. We explored virtual options to meet the physicians' needs if they couldn't meet face to face, because we did go live during a pandemic. We worked really quickly to make things available virtually. And when we educate providers, we limit those presentations to 15 minutes, because we do know that their time is valuable, and we try to do as much as we can within a 15-minute time window. (DESCRIPTION) Slide, phase two, engage physicians by providing resources. A bulleted list of five items. (SPEECH) We provided tip sheets, training videos, one-on-one education. We also have a physician champion that does that peer-to-peer education.
So if we find that one physician is not as engaged, maybe not responsive, we do have our physician champions work with those providers, so they have a better understanding from a peer of the importance behind clinical documentation. And then we create newsletters that are catered to our providers. (DESCRIPTION) Slide, phase three, constant partnerships. A bulleted list of five items. (SPEECH) And those constant partnerships have really led to the success of what we've seen at UC Davis: identifying those key stakeholders and designing workflows that can help with automation, using your data for continuous process improvement. And that is one of my big, big, big-- this is something that I promote daily. If I don't have data to back anything up, I usually don't use any kind of education or training unless I have data to share with the teams. So we leverage that data to facilitate engagement, engage vendors for partnerships on product development, and then also be open to change and flexibility. Data may change. The workflows can change and evolve. But if you keep working the plan, you will see success. (DESCRIPTION) Slide, provider nudge. A bulleted list of three items. (SPEECH) So when we're developing a provider nudge, or when we're looking at turning on a provider nudge, you need a basic rule: the clinical evidence and documentation we want the tool to evaluate before firing. So a nudge usually links laboratory data and our documentation to a diagnosis, and it's going to request specificity of a diagnosis or suggest a missing clinical condition. (DESCRIPTION) Slide, nudge example, with three boxes of text. (SPEECH) But what we've found is that sometimes, we need to customize those nudges to meet our needs at UC Davis. So a clinical note may say the sodium is 128. We didn't want a nudge to fire anytime we had one abnormal lab value.
And so for the program that was firing the nudge, we made it so that we needed two abnormal lab findings in order to fire. So the evidence of hyponatremia without explicit mention is the rule. And a physician message will go out that says, we've identified electrolyte imbalances. If appropriate, please document the associated diagnosis. And in this case, the diagnosis is hyponatremia. However, again, we don't fire that with just one abnormal sodium value. We know that's not really a true clinical indicator of hyponatremia, so we're looking for two abnormal lab values there. And that's something I would recommend. If you're looking at turning on nudges, make sure you curate them to your organizational needs. (DESCRIPTION) Slide, C.A.P.D. focus and nudge clinical criteria evaluation, with six bulleted items. (SPEECH) So the focus of the nudge and the clinical criteria evaluation was to focus on clinical conditions and procedures. Always review the data. Provide an overview of all nudges, the rules, and physician messages, and get your physician buy-in. We did a lot of customization around our pediatric population, because there are specific criteria that we've adopted regarding malnutrition, regarding sepsis, that maybe aren't in line with what the vendor is offering you. So make sure you think about how you can customize this again, because your physician buy-in will only be improved. And then we measured our physician engagement. And when we found that there were fewer physicians engaged in one nudge over another, we would come back to that nudge and have very candid conversations with our providers to say, what can we do better? Why aren't you engaging with this? And they would provide us feedback, and then we'd go back to the drawing board and make some tweaks to make sure that it met the needs of our physicians and that it was making sense to them and firing appropriately. (DESCRIPTION) Slide, custom nudge content.
Two labeled icons appear, a check mark and a bar graph. (SPEECH) So the custom nudge content is very service line specific. At UC Davis, provider nudges are customized based on service line or organization and clinical criteria. And the nudge content will be constantly modified over time based on provider feedback and engagement and as our Vizient risk modeling or Elixhauser risk modeling changes. (DESCRIPTION) Slide, what the NLU or NLP would use to fire the nudge. Four columns of text are labeled clinical diagnosis, clinical rule, physician message, supporting clinical evidence. (SPEECH) So this is just a slide on what the NLU or the NLP would use to fire the nudge. And this is really just more a reference for you. I won't go into the details, but you can see the clinical diagnosis is anemia. The clinical rule is here, the physician's message, what the physician sees, and then the supporting clinical evidence. So these are just a couple examples for you to have as reference. (DESCRIPTION) Slide, K.P.I. improvement. A list of five bulleted items. (SPEECH) So what did we do in terms of trying to improve our key performance indicators with our query responses? So again, we did that physician buy-in and education. We designated a physician champion, both on the inpatient and the outpatient side. And we aligned. We customized the data and analysis so that it was actionable for various service lines. And again, we leveraged that data to provide them with how they're doing, the enhancements it's made overall at UC Davis, at the service line level or at the institutional level, and the importance that it has on documentation integrity across the board. (DESCRIPTION) Slide, Computer Assisted Physician Documentation, C.A.P.D., in action. Two different screenshots appear of the same screen full of text. On the left image is a bell alert icon in the corner. On the right image, a checkmark occupies the same spot.
(SPEECH) So this is a snapshot of what the Computer Assisted Physician Documentation, the CAPD, in action looks like. So on the left, you can see this little red bell that indicates that there is a notification for the provider. And this has fired because there is evidence of a blood disorder, acute blood loss anemia, but they haven't documented it. So you see here the physician has documented the patient has encephalopathy. There's some lab values, and the patient has blood loss. And so it's asking the provider to be more specific on the blood loss. And it's also asking the provider to please mention the acuity and type of encephalopathy. So over on the right, you can see the physician documents the patient has metabolic encephalopathy, and the patient has acute blood loss anemia. And you can see here that red bell icon that indicates there's a message to resolve a nudge turns to a gray check mark when you satisfy the nudge or the clinical criteria. So this is just a snapshot of an unresolved and resolved nudge for your reference, and that's what our physicians actually see. (DESCRIPTION) Title slide, Leveraging technology in C.D.I.. (SPEECH) So moving on, I'm going to talk about leveraging technology in CDI and some of the things we've done at UC Davis. (DESCRIPTION) Slide, leveraging tool, concurrent review prioritization. Six labeled hexagons are arranged around the center, labeled C.D.I. prioritization. (SPEECH) Specifically, we have lots of tools regarding how we prioritize our reviews. We're looking at surgical cases without a CC or an MCC. We're looking at all questionable admissions. We're looking at symptom DRGs. We're looking at our evidence sheets that help us determine where we have opportunities. We're looking at those medical cases without a CC or MCC. And we're focusing on specific DRGs, so sepsis DRGs. We're looking at malnutrition diagnoses. We're looking at all mortalities. These are things that we've prioritized institutionally.
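The prioritization criteria described above can be sketched as a simple weighted scoring rule. This is only an illustrative assumption: the flag names and weights below are invented for the example and are not UC Davis's or 3M's actual prioritization logic.

```python
# Hypothetical sketch of concurrent-review prioritization.
# Flag names and weights are illustrative assumptions only.

PRIORITY_WEIGHTS = {
    "mortality": 100,            # all mortalities get reviewed
    "surgical_no_cc_mcc": 80,    # surgical case without a CC or MCC
    "focus_drg": 70,             # e.g. sepsis or malnutrition DRGs
    "medical_no_cc_mcc": 60,     # medical case without a CC or MCC
    "symptom_drg": 50,
    "questionable_admission": 40,
}

def priority_score(case_flags):
    """Sum the weights of every prioritization flag set on a case."""
    return sum(PRIORITY_WEIGHTS[f] for f in case_flags if f in PRIORITY_WEIGHTS)

def rank_worklist(cases):
    """Order a worklist so the highest-priority cases are reviewed first."""
    return sorted(cases, key=lambda c: priority_score(c["flags"]), reverse=True)

worklist = [
    {"account": "A1", "flags": ["medical_no_cc_mcc"]},
    {"account": "A2", "flags": ["mortality", "focus_drg"]},
    {"account": "A3", "flags": ["surgical_no_cc_mcc", "symptom_drg"]},
]
ranked = rank_worklist(worklist)  # A2 (mortality + focus DRG) sorts first
```

In a tool like the one described, each flag would come from the NLU engine or the encoder rather than being set by hand; the point is only that a few additive rules are enough to surface the cases that really matter.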
(DESCRIPTION) Slide, Elimination of C.D.I. reconciliation. A list of six bulleted items. (SPEECH) We've also, at UC Davis, eliminated DRG reconciliation, which I think is in some cases a controversial decision. Most people still have what I call DRG reconciliation in place. That's when you're looking at the DRG mismatch between the coder and the CDI. And I took a look at this because I really felt that it was an adversarial process. We have coders who are trained coders that have a really high accuracy rate at UC Davis, and that's where this starts. They had a 99.9% coding accuracy rate. We had several external audits, and I was holding CDIs to the same standard that I was holding coders to. Why would I expect my CDI team to be as accurate as our coders? That's why we have coders in place. And so the only benefit that I really did find from this was that our CDIs were receiving education and feedback. So by discontinuing the reconciliation process, we increased our CDI productivity by 25% to 33%. It really helped the morale of the team. But what it did is it allowed our coders to code, and CDI now has more time to review more complex cases, because they were taking sometimes an hour to two hours trying to figure out why their PCS code was incorrect. And it could have been some obscure Coding Clinic that they really weren't aware of. And so what we've done is we've moved this process to the back end. So I have a report that comes out daily of all DRG mismatches. And we have one person reviewing that and providing direct feedback to either the coder or the CDI, and also providing supporting documentation or evidence or references to say, here's why we agree with the DRG by the coder or by the CDI. And if we see any trends, we actually provide education to the larger groups. But it really has improved that team morale. And it's allowed our CDI team to be more productive. (DESCRIPTION) Slide, What We did to improve K.P.I.. A list of seven bulleted items.
(SPEECH) So some of the things we did to improve our key performance indicators, besides the CAPD and besides eliminating our DRG mismatch reconciliation, is we've expanded our CDI program. We perform ongoing coding audits. We've hired a second-level reviewer. Again, we established those back-end reviews and controls to ensure integrity. We invested in that technology, the CAPD and the HCC management tools and CDI prioritization tools. We are doing ongoing data analysis. And we're also working to decrease one-day stays, which are considered inpatient admissions, because they're typically low-weighted MS-DRGs. And they impact things like CMI and CC/MCC capture rate. And because they're here one day and go home, the CDI team rarely can get to them in a timely fashion. We also have worked to build templates within Epic that physician groups use so they can utilize smart phrases and smart lists. (DESCRIPTION) Slide, C.D.I. case auto assignment. A list of five bulleted items. (SPEECH) And we also have what's called CDI auto assignment. So what we've done is we've created internal controls to auto assign cases every day to our teams. So what it does is we have an algorithm where it looks to-- it's actually connected to our calendar. So it knows if someone's on PTO. It knows if it's a Monday or a Tuesday. As you know, we have more of a caseload on Monday, because we have discharges from Friday, Saturday, and Sunday that we need to assign. But our auto assignment has no direct integration with the encoder. It's a facility-developed program. It's eliminated that manual process. We've developed a database to track all assignments. Historical data was pulled to identify the average number of new and reviewed cases that needed to be assigned. And then we prioritize those assignments based on payer, surgeries, trauma, and service lines. (DESCRIPTION) Slide, Concurrent review prioritization. A list of four bulleted items.
(SPEECH) So our concurrent review prioritization addresses any challenges with auto assignment. If we need to adjust that, we will. We review all accounts with a single CC or an MCC, or if there is a mortality. Again, we have caseload prioritization accounts in our-- basically, our team has anywhere between 20 to 40 total reviews per week. I'm sorry, that's actually per day. And those are new and rereviews as well. And then we focus on, and we make these a priority, specific hospital DRGs that are either OIG targets or where we're seeing denials in the industry, like sepsis and malnutrition. (DESCRIPTION) Slide. Step 1, Setting the foundation for auto-assignment. A list of two bulleted items. (SPEECH) So what we did when we started to look at how we can automate the assignment for our team was we connected with a system analyst. And we had to ask the right questions. Our system has a direct feed of patient information from an HL7 interface, which is your ADT, from our hospital systems within our EMR. And then we determined what databases are available. And we created a database that is able to extract, transform, and load that data. That's your ETL. And then we built a system visualization tool to auto assign these cases. (DESCRIPTION) Slide, Setting max, new patient assigned, accounts. Increase production by eliminating reconciliation. A table with four columns, labeled day of week, P.T.O., regular assignment, P.T.O. or holiday caps. (SPEECH) And so this is what it looks like. So on any day of the week, on a Monday, we may have one person out on PTO. The regular assignment will be 10, but because there's PTO or holidays, we cap it out at 12. And then Monday, Tuesday, you can see, and so on. It goes down, but this is what the logic looks at. (DESCRIPTION) Slide. Step 2, Creating the build for auto-assignment.
One bulleted item with seven database items, account I.D., patient M.R.N., admit date, discharge date, financial class, hospital service, and hospital department. (SPEECH) So our system analysts started to create the build and the logic in our analytic platform by creating a script to pull in the following information from the database. So it's looking for the account number, the MRN, and all of that information that we need. And we use that to make sure that everybody is getting an equal assignment every day based on the number of discharges that they had, and that their caseload stays around 40 per person a day on average. And that's initial new reviews, and it is also your rereviews. (DESCRIPTION) Slide, Dashboards to measure K.P.I.'s. Two tables appear with tiny text, labeled Peds and Adult. (SPEECH) This is just a snapshot of some of our key performance indicators and some of the things we're measuring in CDI. And this is always evolving. I know case mix index isn't always a good indicator for CDI, so this is just at a high level for our executive suite what we're measuring. So we're looking at case mix index on the Peds side. We're looking at mortality index. We're looking at expected mortality. We're looking at sepsis expected mortality, and we're looking at the expected length of stay. On the adult side, we're looking at case mix index, mortality index, oncology mortality index, sepsis mortality index for the non-present-on-admission population, and length of stay index. And we also measure our RAF score on the outpatient side. Now, you may wonder, why are we looking at some of these indexes that we don't have a whole lot of control over? What we found was that we don't have a whole lot of control over the observed outcomes. But when we're talking about expected outcomes, it is strictly the documentation that drives some of those metrics. And so we're tracking those at the expected outcome level to see where we have opportunity for reviewing.
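The calendar-aware auto-assignment described in steps 1 and 2 can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the day-of-week caps follow the Monday example from the talk (regular 10, PTO/holiday cap 12), but the other numbers, the reviewer names, and the round-robin distribution are invented for illustration and are not the actual UC Davis build.

```python
# Hypothetical sketch of PTO-aware CDI case auto-assignment.
# Caps, names, and the round-robin rule are illustrative assumptions.
from datetime import date

WEEKDAY_CAPS = {          # weekday -> (regular cap, cap when PTO/holiday shrinks team)
    0: (10, 12),          # Monday carries Fri-Sun discharges, per the slide
    1: (8, 10),           # Tue-Fri caps are assumed values
    2: (8, 10),
    3: (8, 10),
    4: (8, 10),
}

def daily_cap(day, team_short):
    """Look up the per-reviewer cap for a weekday (weekdays only in this sketch)."""
    regular, pto_cap = WEEKDAY_CAPS[day.weekday()]
    return pto_cap if team_short else regular

def auto_assign(cases, reviewers, day, on_pto=()):
    """Round-robin new cases to available reviewers, respecting the daily cap."""
    available = [r for r in reviewers if r not in on_pto]
    cap = daily_cap(day, team_short=bool(on_pto))
    assignments = {r: [] for r in available}
    for i, case in enumerate(cases):
        reviewer = available[i % len(available)]
        if len(assignments[reviewer]) < cap:
            assignments[reviewer].append(case["account_id"])
    return assignments

# A Monday with one reviewer on PTO: the remaining two split the work evenly.
monday = date(2024, 1, 1)
result = auto_assign(
    [{"account_id": f"C{i}"} for i in range(6)],
    ["cdi1", "cdi2", "cdi3"],
    monday,
    on_pto=("cdi3",),
)
```

A real build would also weight by payer, surgery, trauma, and service line as the talk describes, and pull the seven fields from the ADT-fed database rather than from an in-memory list.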
(DESCRIPTION) Slide, C.D.I., Prioritizing concurrent reviews. A table appears with columns labeled C.D.I. priority, description, potential conditions, and indicators. (SPEECH) I'm jumping around a little bit, but we also prioritize concurrent reviews. And here's what that prioritization looks like. So if you're a CDI and you jump into your encoder, like your 3M encoder, this is what you're going to see. And it gives it a score. (DESCRIPTION) Slide, Doing More with Less. One bulleted item with five dashed items. (SPEECH) So really, we're trying to do more with less. Due to increased CDI productivity, we've been able to reclassify one of our full-time CDIs to a CDI educator, because again, we saved some time with eliminating that reconciliation process, automating assignments, leveraging that CAPD. We're using tools to become more efficient and doing more with less. So we've increased our CDI education. We've added additional staff resources for questions. We've added a subject matter expert, and they're performing audits. And it has little or no impact on the CDI workload. So think about how you can leverage taking more away from the staff through automations and technologies, and how you can repurpose some of those positions to add more value to your program, like adding a second-level reviewer or a CDI educator. (DESCRIPTION) Slide, Flexibility! Report logic and scheduling is key. A box of tasks is labeled Assignment Administration. (SPEECH) This is just a snapshot of our-- this is our auto assignment database. And so basically, we've allowed some flexibility. Because we've created the report logic, and scheduling is key, if somebody calls in sick last minute, or they have an emergency and they have to go pick up their kid or whatever it might be and they're not going to be here that day, we have a CDI lead that can go in really quickly and manipulate the auto assignment to take those patients away from that individual and redistribute them.
So this allows some flexibility in that auto assignment when we have those last-minute unplanned absences. (DESCRIPTION) Slide, Other Incentives I.P. C.D.I. evidence sheets provide. Two tables appear with data listed in columns. (SPEECH) So some of the other incentives in inpatient CDI are evidence sheets. So what our evidence sheets do is provide notifications to the team. This is a CDI-facing tool that lets them know that there's evidence in the chart that there may be presence of a diagnosis, like heart failure without explicit mention of the type, and it may be a query opportunity. So this is something that the CDI team looks at and uses regularly and then places a query on the chart. And this is helpful, I think, especially if you have just started a CDI program. Most of the time, I will say, our team already has this on their radar, but it is a good second-level check or validation check against what your team could have potentially missed. (DESCRIPTION) Slide, Custom SmartLists, E.N.T. slash Malnutrition. Three boxes of prompts appear. A column reads "evidenced by." "Malnutrition" is highlighted, followed by a series of subcategories. The category "Severe Malnutrition" is highlighted, prompting similar prompts. (SPEECH) This is an example of some of the custom smart lists we've implemented for various service lines. One of the things that we found was that our ENT providers, especially those treating head, neck, throat, or mouth cancers, had a high prevalence of malnutrition in their clinics. And it was something that was being missed regularly. And so what we created was some smart lists, so that when there was evidence that the patient was malnourished, they prompted them to further document the specificity. So we also embedded some of our criteria. The clinical criteria that we've adopted at UC Davis is a modified version of ASPEN and AND. So we also included some of that criteria into the smart lists that you're seeing here.
(DESCRIPTION) Slide, Case management and leveraging C.D.I. technology. An option is highlighted from a table of listed cases, "Major Male Pelvic Procedures." (SPEECH) We also are working and partnering with our case management team to leverage CDI technology. So we have an encoder, and we have an auto-suggested-- AI is providing an auto-suggested DRG that can be leveraged to engage with case management and the hospital-specific goals around length of stay. So anytime a CDI touches a case, there's a working DRG that gets assigned. And that working DRG then gets interfaced over to our case management so that they can see the changes as the working DRG evolves, but also so they can see the geometric mean length of stay. And they can do a better job at gauging the appropriate length of stay for this patient. Now, we don't touch every patient. There are several cases, maybe those one-day stays, or cases that we just don't get to because we don't have the bandwidth to touch everything. So we have what's called the auto-suggested, which is AI that reads and reasons over documentation and auto-suggests a working DRG. And that also gets interfaced over to our case management team to use to help with gauging appropriateness of length of stay. (DESCRIPTION) Slide, Case management and leveraging C.D.I. tools and E.H.R. views. A screenshot of a launch screen. A tab, Review, is selected, followed by a large table of data. (SPEECH) And this is just a view of what they see. So as our CDI is working in our encoder and they're assigning that working DRG, once they hit Complete, it interfaces over into Epic, into a field down here where they see what those MS-DRGs are with the diagnosis coding. (DESCRIPTION) Slide, Case Mix Index, K.P.I.. A line graph sits atop two bar graphs, labeled Adult C.M.I. and Pediatric C.M.I.. All graphs trend from lower left to upper right. (SPEECH) So here is the fruit of our labor.
We saw a significant improvement in our case mix index, both for the adult and for the pediatric population, over the last five fiscal years, or six fiscal years actually, from fiscal year 16 to fiscal year 21. We started to see this normalize in fiscal year 2022 and 2023. But for the most part, we haven't seen a significant decline except for in our pediatric population. And we're attributing that to higher one-day stays. We have almost 35%-- about 35% of our population in pediatrics is here one day and goes home, both on the surgical and on the medical side. (DESCRIPTION) Slide, C.C. slash M.C.C. capture rates, K.P.I.. Four additional graphs appear, all trending from lower left to upper right. A vertical whisker plot has a dark dot toward the upper end. (SPEECH) We saw the same trend with our CC/MCC capture rate. You can see off here to the right, in this scatter plot or whisker plot, the gray dots are all other academic medical centers. And that dark blue dot there you see is UC Davis Health's performance in terms of where we are. And we're in the 75th percentile of all academic medical centers. So we're doing a really good job of optimizing every patient that we touch to demonstrate the acuity through capturing those CCs and MCCs. (DESCRIPTION) Slide, Historical data on 1 day stays. Four bar charts appear, labeled high percentage of 1 day I.P. Admissions Dilutes A.M.C. Metrics. All charts trend from lower left to upper right. A vertical whisker plot has a darkened dot toward the upper end. (SPEECH) As I mentioned, we had a lot of opportunity with our one-day stay population. So if you're an organization that is looking at your case mix index and wondering what you can do differently, take a look at your one-day stay population. Typically, these groups have a very low case mix index. And you can see off to the left here in this whisker plot that UC Davis is that dark yellow dot there. We were an extreme outlier.
25% to 35% in the Peds population were here one day and went home. And when you break it down by CMI and you look at your one-day stays, which is this gold bar here, you can see that the CMI is very low. As you start moving to 2 to 7 days, your CMI goes up; 8 to 15 days, your CMI goes up. So it really is, when you look at it, when you have a quarter of your patients essentially here one day with really low-weighted MS-DRGs, those relative weights dilute your entire case mix index. We saw the same thing with CC/MCC capture, the same trend with one-day stays, and then down here your expected length of stay in terms of your O to E ratio again. We don't have a whole lot that we can impact to demonstrate that they needed to be in the hospital as long as they were, because they're typically not very complex. The same trend we also saw with expected mortality. So if you're somebody who's trying to figure out how you can improve in those areas, take a look at your one-day stay population. (DESCRIPTION) Slide, Service line data analysis, L.O.S. outliers. Two tables appear with similar lists of data. C-section is highlighted on both tables, along with relevant statistics. (SPEECH) We've even targeted it down and got very specific as to which service lines we have opportunities in, particularly with the length of stay outliers. So what you're looking at here is we took a look at our OB population. And what we found was that we were an outlier with our C-section population. So about 407 cases, which is about 30% of the total OB population, had a mean observed length of stay of 4.41. Our expected, which is driven off of documentation, was 4.16, with an overall index of 1.06. When you look off to the right, this is all other academic medical centers. You can see the same percentage. About 29% of their cases are C-sections, but they have a mean observed length of stay of 3.67 days and an expected a little bit higher. So their overall index is at 0.94.
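The observed-to-expected (O/E) index quoted for the C-section population is just the ratio of the observed mean length of stay to the documentation-driven expected mean, as a quick check shows:

```python
# Minimal sketch of the observed-to-expected (O/E) length-of-stay index.
# The figures are the C-section numbers quoted in the talk.

def oe_index(observed_mean_los, expected_mean_los):
    """Index > 1.0 means patients stay longer than documented acuity predicts."""
    return observed_mean_los / expected_mean_los

# UC Davis C-sections: observed 4.41 days vs. expected 4.16 days.
uc_davis = round(oe_index(4.41, 4.16), 2)   # 1.06, an outlier above 1.0

# Peer academic medical centers report an index of 0.94 on an observed
# mean of 3.67 days, so their expected mean must exceed the observed.
peer_index = 0.94
```

This is why the talk stresses that CDI affects the denominator: more complete documentation raises the expected value, which pulls the index down even when observed length of stay is unchanged.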
So this tells us that we are keeping patients almost a day longer than the national average. And what are we doing differently at UC Davis? We don't really make any operational changes on the care side of the house, but we take this data back to those service lines, and then they make tangible changes to improve length of stay outcomes. (DESCRIPTION) Slide, Service line data analysis, L.O.S. outliers. Three line graphs appear, labeled index, observed, and expected. (SPEECH) We saw a similar trend with our hysterectomy population. As you can see here, our index was at 1.36 overall for length of stay. If you go down below here, you can see our length of stay index in terms of ranking. UC Davis is 98 out of 99. We're one of the worst in terms of length of stay index for our hysterectomy population. Our observed length of stay, again at 3.74, puts us at 99 out of 99. So we are not performing well. Our expected, though, is 2.74. This is documentation; the CDI team has direct impact on the overall expected outcomes. And we are in the top performance here, at 89 out of 99. In this particular metric, higher is better. So we knew that it wasn't the documentation. There was something on the hospital side that needed to be looked at closer to see how we could reduce the length of stay for this particular population. (DESCRIPTION) Slide, Service line data analysis, L.O.S. outliers. Three similar bar graphs appear with similar trends. (SPEECH) So obviously, when you take information back to these service lines, their immediate response is, oh, our patients are sicker. This isn't a good representation. We need to look at California hospitals. California hospitals, we know, are sicker. So we ran the data the same way. We looked at all the California hospitals, and we saw the very same trend.
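The length of stay index quoted throughout this section is simply the mean observed length of stay divided by the documentation-driven expected mean. A quick sketch using the figures above (the helper function name is ours, not 3M's):

```python
def los_index(observed_mean, expected_mean):
    """O/E length of stay index: above 1.0 means patients stay longer
    than documentation-driven expectations predict."""
    return round(observed_mean / expected_mean, 2)

print(los_index(4.41, 4.16))  # UC Davis C-section population: 1.06
print(los_index(3.74, 2.74))  # UC Davis hysterectomy population: 1.36
```

Because the expected value is driven by documentation, a high index paired with a strong expected-LOS ranking points at care-side causes rather than a CDI documentation gap, which is the argument made to the service lines here.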
So once we presented that to the physicians and that service line, they were more inclined to look at what they could do differently at UC Davis to improve our observed length of stay for our hysterectomy population. (DESCRIPTION) Title slide, Outcome. Slide, Outcome of Implementation of A.I. for physicians and C.D.I. at U.C. Davis. Six boxes appear, labeled with text. (SPEECH) So the next few slides are going to talk about outcomes. The outcome of implementation of AI for physicians and CDI at UC Davis was bringing together workflows, enabling better engagement, efficiency, and collaboration. We improved case mix index by 7.4%. We increased capture of CCs and MCCs by 5%. We positively drove query agreement rates above 80%. We identified outliers for expected length of stay and expected mortality. And we established back-end reviews and controls to improve accuracy and compliance. (DESCRIPTION) Slide, Learning outcomes. A numbered list of eight bullet points. (SPEECH) So some of the learning outcomes here: we really wanted the ability to engage our physicians proactively with AI-powered CDI, and that's what we aimed to do. So if you're doing that, ask yourself what your goal is, what you're aiming to accomplish with implementation of a CAPD. Understanding how the data provides meaningful feedback to your providers is going to be key, because they're going to be data-driven. If you can't show that there's a return on investment, you will lose physician buy-in. Understand the importance of the physician trainer or physician advocate interaction, how your physicians interact with your trainers, making sure that you have a team facing those providers that can help them with note bloat reduction, with enhancements in Epic, with fewer clicks. And then recognize the impact automation has on your key performance indicators. Support how documentation integrity is improved with the AI-powered tools.
Again, that's data. Understand how automation of initial reviews for inpatient CDI improves your productivity, and also look at automation in that space. If you can automate your assignments, or if you can discontinue your reconciliation process, you will see improvements in productivity there. Reflect upon inpatient CDI departments' leverage of CDI applications, and examine prioritization of concurrent reviews. (DESCRIPTION) A title slide appears with text. Thank you. Questions? T Gomez at U.C. Davis dot E.D.U.. (SPEECH) So that's all I have now. I really do want to thank you. And if you have any questions, please put them in the chat now. (DESCRIPTION) Slide, Questions. (SPEECH) Awesome. Thank you so much, Tami, for all that great information. We do have some great questions that have come in, so we'll get to as many as we can in the next 15 minutes. The first question that came in is from Mary. Can the response options on the nudges be edited? Query compliance requires only offering choices supported by clinical indicators. (DESCRIPTION) Tami's video screen reappears. (SPEECH) So when you talk about editing, the physician documentation is something that we can't edit. We don't come behind and change that. But I will tell you the nudges only fire when those clinical indicators and risk factors meet criteria. So, for example, we've been very conservative with what we've turned on at UC Davis. We're only turning on things that look for further specificity. For example, a physician has documented that the patient has CHF or encephalopathy. That nudge is going to fire to say, can you further specify the acuity and type? And then the physician is going to come behind and document that acuity and type.
And that was really intentional and deliberate on our part, so that we could focus on those more complex quality outcomes or safety indicators and do more in-depth clinical reviews, and allow the technology, the AI, to do some of those more repetitive queries that we're sending for acuity and specificity. Great. So Jennifer asked, you mentioned your CDI team reviews questionable admits. Can you share more about this? Yeah. So we work very closely with our physician advisors. And our physician advisors work very closely with case management and utilization review. And so if there's an admission for a patient who has weak clinical evidence to support an inpatient admission, we're looking at those admissions to see if there are documentation opportunities to support querying the provider for additional diagnoses to support an inpatient admission in those instances. All right. The next question is from Mariana. What is the amount of review you think is reasonable when CDI staff codes the chart as well? Also, a follow-up to that: when you talk about 40 reviews per day, do you mean these are daily reviews? Yeah, so-- I'll let you answer that first, and then there's a little bit more to it. So we'll do two parts. Yeah, the first one is, we don't expect our CDIs to be coders. So we ask them to get the principal diagnosis, the principal procedure, and any additional CCs or MCCs correct. But in terms of coding accuracy, the expectation is not that they perform at the level of coders. They are coding to get that working DRG, so they know where their optimization lies, but they are not expected to code with a coder's accuracy. So we hold our team to about 40, no more than 40; on average, about 35 to 40 a day. And those are case reviews. They don't have to review them all that day. If they're initial reviews, they may review them that day, or they may get to them the next day. If they're rereviews, they may not have to review them that day.
So they spread this out throughout the week. But their caseload is a total of 40, no greater than 40. Occasionally, during the holidays when we have people off, we may go 42 to 45, but no more than that. And again, that's a total caseload with initial and rereviews that are spread out through the week. OK. Great. So this is a good follow-up to that. If a patient is in the ICU versus a med-surg case, that does make a difference. How do you make sure it is all reasonably fair? So again, our team gets a fairly distributed caseload. Everybody gets a little bit of everything. Everybody gets trauma. Everybody gets OB. Everybody gets your hospice. Everybody gets your ICU and your surgical population. So everybody gets an equal distribution. Maybe one week somebody has more ICU cases than their peer, but eventually it evens out, because we've set the logic and the algorithm around auto assignment to make sure that that occurs. Great. Holly asked, do you feel your CMI increase and capture rates were due to prioritization of DRGs without CC/MCCs? Or were there other areas you identified? I think it's definitely a factor, for sure. But one of the other areas I identified was reducing one-day stays; they impact our CMI significantly. So prioritization is key. You want to make sure you look at those cases that are greater than three to four days that don't have a CC or MCC, to see why they needed to be in the hospital as long as they did. In some instances they should have a comorbidity, unless they're a placement issue. So we are taking a look at those to see where we can optimize. And then simultaneously, we're working on other initiatives around reduction of one-day stays and things like that. All right. We have one more question. So if there are any other questions, or you're thinking about asking a question, go ahead and put that into the Q&A now. We do have one more question that we're going to get to.
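The auto-assignment logic described in that answer, where every reviewer gets a balanced mix of service lines under a caseload cap, could be sketched as a per-service-line round robin. The function, reviewer names, and cap below are hypothetical illustrations, not 3M's actual assignment algorithm:

```python
from collections import defaultdict
from itertools import cycle

def auto_assign(cases, reviewers, cap=40):
    """Round-robin cases within each service line so every reviewer gets a
    similar mix of case types, skipping reviewers already at their cap."""
    caseloads = defaultdict(list)
    by_service = defaultdict(list)
    for case_id, service_line in cases:
        by_service[service_line].append(case_id)
    order = cycle(reviewers)
    for service_line, ids in by_service.items():
        for case_id in ids:
            for _ in range(len(reviewers)):  # find next reviewer under cap
                reviewer = next(order)
                if len(caseloads[reviewer]) < cap:
                    caseloads[reviewer].append((case_id, service_line))
                    break
    return caseloads

cases = [(1, "trauma"), (2, "OB"), (3, "ICU"),
         (4, "trauma"), (5, "OB"), (6, "ICU")]
print(auto_assign(cases, ["cdi_a", "cdi_b"]))
```

Distributing within each service line, rather than over the whole worklist at once, is what makes one reviewer's ICU-heavy week even out over time.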
And so, can you expand more on measuring physician engagement with the provider nudges? Do you have a high level of physician engagement? We do. So we monitor all of that. And when we don't have physician engagement, we run that data and do an analysis to see if maybe it's the way the nudge is firing. We look to make sure that it's firing appropriately, first and foremost. And then if we see that the physician is just ignoring it, or maybe the physician is naturally compliant and documenting later in the stay, we don't really take any action unless we see the physician is just blatantly ignoring the nudges. And then we engage our physician champion, who has those peer-to-peer conversations: do you need training around the CAPD? How to engage with the CAPD, why are you not responding, or why are you not documenting? And our team also can see when the physicians aren't engaging and can put that query on the chart to make sure that we don't have any missed opportunities. And this is a good follow-up, or expansion even, to that. You just mentioned you have that physician champion. So are they doing phone calls, face to face? And is there a frequency to your one-to-one provider education? So that physician peer-to-peer education happens either virtually or in person. They may get on a phone call. They may get on a Teams chat, whatever; we work with their availability. So that happens ad hoc, as needed. And then my team meets with our physicians on a quarterly basis, down to the service line level. And we present data at the service line level to our physicians quarterly. Now, we also monitor that data. And if we start to see a dip in anything, that's something that we may ask to come back sooner and discuss with them, especially if we see a metric going in the wrong direction all of a sudden. We might do some investigation to find out, did they lose a provider?
Or is there a shortage of anesthesiologists this week? Why are we seeing these trends? So we do a deeper dive, but we meet with all service lines quarterly to present their metrics to them. That's great. And thank you too, Danielle, for that question and the follow-up question as well. And so that is all of the questions that we have for today. (DESCRIPTION) Slide, That's a Wrap! (SPEECH) And so, Tami, is there anything else you would like to say before I do a quick wrap-up? Hang on. We did just get one more. Where are you pulling service line metrics data from? So at UC Davis, we're using Vizient service lines. And when the Vizient service lines don't align, we pull from the Vizient CDB database, and we pull by PI numbers, physician numbers. All right. This is starting to generate a couple more questions. So how do you deliver your provider queries? Our provider queries are an actual note type in Epic, and they're electronically delivered to the providers and are a legal part of the medical record in Epic. They're electronically delivered the same way we would deliver an H&P that needs a cosignature, or a progress note or discharge summary that needs a cosignature. It goes into the provider's Epic cosign notes folder, and they address the queries that way. Fantastic. All right. I do think we are out of questions. And so again, we really appreciate your time today. It's been a great presentation. We do have this presentation available in the Resources section, as well as the certificate of attendance. Before you close out and complete your survey, do download those both ahead of time. Once you close out, you're unable to get back in. And that certificate of attendance is something you can use to request CEUs at an association like AHIMA. We do have other resources in that section as well.
There's a case study for UC Davis that would be really great for more information on all the great work that they're doing. (DESCRIPTION) Slide, Thank you. (SPEECH) So again, Tami, we can't thank you enough for your time today. Do you have any-- and I feel like you do. You have been super busy this past year with some of your speaking engagements. Do you have anything coming up? I will be at Bueckers in October. Awesome. That's what I thought. I know that [INAUDIBLE]. I'm actually speaking next week at the AHIMA Executive Roundtable. That will be a live webinar as well. Fantastic. Well, they are definitely lucky to have you. So if you are ever at a show where Tami is speaking, we certainly encourage you to go. We will have our next CDI Innovation webinar in October. It's hard to believe we're already talking about October. So we look forward to hosting our next session and hope that you can join that one as well. So, Tami, thank you again for your time today. Thank you. I appreciate your time. Have a great afternoon, all.

      Webinar presentation slide.

      Engaging physicians proactively and improving documentation integrity with AI-powered CDI technology

      • August 2023
      • UC Davis Health is on a journey to build a gold standard CDI program by streamlining workflows and empowering physicians to focus on patient centered care. As part of that initiative, UC Davis Health is expanding 3M™ 360 Encompass™ System with 3M™ M*Modal CDI Engage One™ to automatically embed clinical intelligence into normal physician and CDI workflows. Join Tami McMasters Gomez, director of CDI and coding services, UC Davis Health, to learn how her team has leveled up their ability to engage physicians, tailor AI for their clinical practice and improve CDI outcomes.
    • Advance your CDI program with 3M’s CDI technology

      Learn about our advanced CDI technology. CDI programs can capitalize on opportunities to capture the additional documentation necessary to accurately reflect the acuity of patients and complexity of care provided. Learn from experts about a new cloud-based workflow experience and advanced prioritization that automatically tracks query impact as well as a new dashboard for quickly identifying areas of opportunity. In this webinar, attendees will learn about new tools for improving productivity, quality and efficiency to support program success.

• (DESCRIPTION) Text, On24 for a better webinar experience! On a user interface, at the top left is the media player. At the center left is resources. At the bottom left is text, have a question? Let us know here! An arrow points to a Q and A box. At the center is slides. At the top right is speaker bio. At the bottom left is text, we want to hear from you survey! An arrow points to a survey box with questions and dropdowns. Text, 3M C D I Innovation Webinar Series. Improve your C D I documentation by leveraging comprehensive A I technology. (SPEECH) Welcome to our May CDI Innovation webinar, where we're going to be talking about improving your CDI documentation by leveraging comprehensive AI technology. Before we get started, I do just want to go over a couple of housekeeping items. (DESCRIPTION) Slide title, Housekeeping. Bullet points, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of V P N/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Attendee chat. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. The information, guidance, and other statements provided by 3M are based upon experience and information 3M believes to be reliable, but the accuracy, completeness, and representative nature of such information is not guaranteed. Such information is intended for people with knowledge and skills sufficient to assess and apply their own informed judgment to the information and is not a substitute for the user's own analysis. 
The participant and/or participant's organization are solely responsible for any compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in the presentation. 3M disclaims all responsibility for any use made of such information. No license under any 3M or third-party intellectual property rights is granted or implied with this information. 3M and its authorized third parties will use your personal information according to 3M's privacy policy. This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) We are using the ON24 platform. This is a web-based platform, so there is no dial-in number. If you are having any issues, do a quick refresh of your browser, and that typically clears up any tech issues. We do recommend using Google Chrome and closing out of any VPN or multiple tabs. That will help with the bandwidth. Check your speaker settings. Again, there is no dial-in number, so if you are having any technical issues, doing a quick refresh of your browser will typically take care of that. In ON24, we have several engagement sections. Within the media player, we do have closed captioning available. You can make both the slide area and the media player larger or smaller, and you can minimize those. We have turned on the attendee chat, so feel free to use that to talk amongst yourselves. And if you hear something interesting and you want to make a comment on that, please feel free to do so in the attendee chat. We will be monitoring questions in the Q&A box. So if you do have any questions about the content, please put them in the Q&A, and we'll get to as many as we can at the end. 
We also have a resources section available to you where you can download the certificate of attendance, which you can use to submit to an accredited association for CEUs, as well as some other resources and some of our other on-demand webinars, if you'd like to listen to ones that we've done in the past. If you do have any questions, again, please put those in the Q&A. We'll get to as many as we can. And then at the end, we always appreciate you completing the survey to let us know how we did. So once the webinar is over, we'd love to hear from you on that. So let's go ahead and get started. I'm going to pass it off to our moderator today, Adriana Harris, who will introduce our speakers and get things started. Thanks, Lisa. And thanks, everyone, for joining us today. (DESCRIPTION) Slide title, Meet our 3M speakers. Slide text, Garri Garrison, R N, President. Diana Ortiz, R N, J D, C C D S, C C D S-O Senior Manager of Global Content. Kaitlyn Crowther, R H I A, Chief Product Owner. Julie Salomon, B S N, R N, Director of Revenue Cycle Strategy. (SPEECH) First, I just want to quickly introduce our panel. We have Garri Garrison, who is the president of 3M HIS; Diana Ortiz, who is the senior manager of Global Content; Kaitlyn Crowther, who is our chief product owner; and Julie Salomon, who I think will be rejoining us shortly, who is the director of revenue cycle strategy here at 3M. So first, I'm going to ask a question to you, Garri, so you can give your perspective. How do you see innovative technology factoring into the health care market today, and how is 3M supporting that initiative? Thank you. Glad to be here today. And welcome to our guests. What I'd like to say is that, just coming out of HIMSS, obviously, the interest in innovative technology is very high. 
With the recent announcements around ChatGPT and how it's being used, whether good or bad, there was a lot of discussion at HIMSS, and a lot of discussion, if you see the publications in health care, about the use of technology going forward. One of the clear messages we heard at HIMSS was really about, how do we use AI to get more productivity, more automation, and be able to increase the capacity of the employees we have today in this labor challenge that we're seeing in the market? From a 3M perspective, what I would say is, we're using AI to build increased automation in three areas: coding, CDI, and speech, as we move to ambient. There was a lot of interest at HIMSS about our announced partnership with AWS to advance our capabilities in that ambient space and be able to scale and go to market. One of the things we're doing here at 3M, though, is whatever we do with AI, we're focusing on being credible, being responsible, and then actually building in controls so that you, the end user, have the ability to validate and minimize any risk that might be created through continued automation. Great. And I think along those lines, we're going to do our first poll to the audience. How open is your team, your organization, your physicians to using AI technology, including things like chat, to help facilitate behavior change? (DESCRIPTION) Answer options. A, Very open and with unlimited use. B, Open but with built in controls. C, Not open at all. (SPEECH) So we'll leave this open for a minute, and-- all right, we've got a pretty decent number of folks who've submitted so far. We'll just wait one more minute, and we'll see what the results look like. All right. I think we've given that enough time. So we'll see whether our folks are very open with unlimited use, open but with some built-in controls, or not open at all. (DESCRIPTION) Results. Open but with built in controls, 73 percent. 
Not open at all, 18 percent. Very open and with unlimited use, 10 percent. (SPEECH) It looks like a lot of people are, like you said, doing it with some built-in controls. So I think for our next question, Julie, if-- I don't see you on camera anymore, but I think you said you're here. What are we hearing from the industry and our clients about things like nudges and using responsible AI in the CDI workflow? Thanks, [? Aj. ?] Can you hear me? Yeah. (DESCRIPTION) Text, 3M Responsible Artificial Intelligence. Heading, Artificial Intelligence, N L U, N L P, Machine Learning, Deep Neural Networks. Training and A I content led to heading, industry intelligence. Clinical Intelligence, Coding Compliance, Quality Compliance, Regulations, Coding Rules, N L P, Machine Learning, Deep Neural Networks. Industry intelligence leads to provider nudges, A S-codes, A S-D R G, clinical opportunities, A I-driven prioritization, code confidence, chart confidence, autonomous coding, A C S. (SPEECH) OK. So, as Garri stated, we've heard a lot of buzz in the market about all the different types of AI that are out there and how they're going to somehow solve every problem in the world. And as we've listened to customers and the industry, we know that they need to be able to trust their technology and their AI. There are flavors of AI, like probability, where we're driving based off of relationships between different clinical scenarios. I've heard the example of heart failure and fever being associated. Is fever the cause of heart failure? From a clinical perspective, we would say no. But there is an association, a probability connection, there. Also, with ChatGPT, those models learn really well. Almost too well, because they can fill in the gaps and make up data based on the other pieces of the modeling that may or may not be true or accurate. 
So when we look at building models at 3M, we're really looking at, how can we build that artificial intelligence responsibly? How can we make sure the models are informed not only by the data they are learning and building on, but also wrap in the other pieces of industry intelligence, like coding compliance, quality compliance, and regulations? What is that clinical intelligence, and how do I use it so that it makes sense in health care? So when we're building across our platform, all of our features and functionality are based on that training and that AI content, but then relearning and re-informing those models based on the industry, as well as the vast data we have with our customer base. So you can see on this slide below, all the features and functionality across our platform can really inform each other and build on that modeling. Great. Thanks, Julie. For our next question, I'll look to you, Diana. What are some of the top questions 3M gets about compliance, and how do we make sure our technology has the right solution? Yeah. So we definitely get a lot of questions as we work with customers who are adopting technology, specifically in the CDI space, as well as with their physicians. I think Kaitlyn will speak around champions later. But the real physician component brings a lot of questions around compliance, as well as how to very effectively deploy intelligence into workflows. So some of the questions we get are around consistency. How can we ensure consistency? Because any time you're bringing together a coder workflow, a CDI workflow, and a physician workflow, you really want that compliance across the board. You want consistency in how you're applying things. The other question that comes up a lot is around risk tolerance. 
I think the responsible and credible things that we bring to the industry — we have been working in the coding space for a number of years, and different organizations have different thresholds for risk tolerance. (DESCRIPTION) Slide title, Content Governance. Arrows turn clockwise around a pie chart with wedges, end user experience, industry leading content, customer outcomes. Surrounding content methodology is evidence-based practice, quality improvement, government regulatory, provider feedback, impact analysis, industry regulations. Heading, Detail. Text, UpToDate. Data-Driven Analytics. A H I M A and A C D I S Guidance. C M S, Coding Clinic. Feedback Loops. Define Value Impact. (SPEECH) So our unique ability to customize content around artificial intelligence, I think, helps bring those thresholds that are most meaningful within an organization to different workflows. So risk tolerance comes up a lot. The last thing is really around efficiency. I mean, everyone's looking for ways to make their workflows more efficient. We often speak around creating time to care here, and we really mean that around the physician workflow. We're trying to find ways to reduce their burden around the queries that are delivered to them today, but we also want our CDI teams to have those really actionable insights within their workflows that speak to the efficiencies. When I think back to ICD-10 and the transition that came for our coders, it really came down to: we want accurate codes to be surfaced to our coders, but we also want the efficiency that would come with intelligence in their workflows. So when you think about that, our CDI programs and our physicians are challenged with lots of different initiatives, around quality, within their spaces. I mean, everybody is trying to achieve different things within a program. It's no longer just a traditional DRG focus, if you will. There are so many things that they're challenged with. 
And in order to accommodate that scope change and put that responsibility on your CDI teams and your physicians, you have to find ways to make it more efficient. The intelligence that comes into those workflows is meant to help the organizations achieve the goals they're trying to get to in a much more efficient manner. So I think consistency, risk tolerance, and efficiencies are the things that probably leap out ahead. With content governance, what I wanted to focus on here, and I just have a visual, is that when we speak about that consistency, we're looking to create content that is meaningful for all of these workflows, and a lot goes into that. So are there Coding Clinics that change? We no longer want to surface a nudge or an insight to a CDI team member that really isn't of any value anymore. A Coding Clinic can come forward and negate the need to pose the question. So we're constantly looking at ways to say, OK, this is a new Coding Clinic, and this is what that means to a coder. We've historically always done that, but we also carry that through and say, we don't need to bring an insight to a physician for this anymore. Or maybe there's a new Coding Clinic that poses new questions that we need to ask. And so we're really good about bringing that forward to a physician, so we're only bringing forward the things that are most meaningful to them. Coding Clinics are just one example; UpToDate is another clinical resource that we use very consistently. Clinical guidelines are changing, thresholds are changing, standards of practice are changing. We want to update all of our content to be consistent with that. We're also looking at feedback loops, and then things that are really going to drive value in the industry too. 
So as new and interesting use cases come up (I can think specifically right now of social determinants of health), there are a lot more codes, a lot more ability to capture that in the documentation. We want to be at the forefront to bring those insights into the physician workflow, as well as carry it through all the way to coding. (DESCRIPTION) Slide title, Heart Failure Rule. Text, 10,000. Documentation of heart failure (plus or minus clinical, lab, medication, and/or echo evidence of heart failure) without documentation of the type and acuity of heart failure. At the left, section, defining logic. Two boxes, conditions and requirements. At the right, section, default messages. Two boxes, rule satisfied, rule not satisfied, C D I (alert) notification, physician title, message. (SPEECH) So I do have an example here on this next slide, just to dig a little bit further in. Heart failure is the thing that resonates with everyone. We've been talking about heart failure forever, even though a lot of those changes have come with Coding Clinics, and we recognize different variations of heart failure. I wanted to take you down into the intelligence, if you will. So this is just one example. There are multitudes of ways that you could harness the concept of where you might need to clarify a heart failure condition. But this is where the customization and the risk thresholds come in. We have logic around what the conditions are that we find, whether it be in discrete data points or in the narrative, and then we have default messages. A lot of compliance goes into making sure that the things we're looking for are clinically relevant to the patient at hand, as well as how we surface that information to our end users. But again, every organization might have a little bit different variation of what they want their messages to look like to their CDIs or their physicians.
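The rule structure on this slide, defining logic (conditions and requirements) plus default messages for the rule-satisfied and rule-not-satisfied cases, can be sketched roughly like this. The function, phrase lists, and message text are hypothetical simplifications, not the actual 3M NLU logic:

```python
import re

NUDGE = "Heart failure: please specify acuity and type."

def heart_failure_nudge(note_text):
    """Return a nudge message when heart failure is documented without
    both acuity (acute/chronic) and type (systolic/diastolic)."""
    text = note_text.lower()
    if "heart failure" not in text:
        return None  # condition not met: the rule never evaluates
    has_acuity = re.search(r"\b(acute|chronic)\b", text) is not None
    has_type = re.search(r"\b(systolic|diastolic)\b", text) is not None
    if has_acuity and has_type:
        return None  # rule satisfied: documentation is already specific
    return NUDGE     # rule not satisfied: surface the nudge to the physician

print(heart_failure_nudge("Patient admitted with heart failure, on furosemide."))
print(heart_failure_nudge("Acute systolic heart failure, EF 30 percent."))
```

Customization then amounts to changing the conditions (for example, adding lab, medication, or echo evidence) or the default message, which is what the organization-specific thresholds described above would adjust.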
And so at the bottom, you can see there's a lot of customization that we do, and customers really value this because they know their physicians. They've been working with them. They know what kinds of things will be meaningful to help them answer the question in a way that makes sense and actually translates to something at the end of the day. And so that customization around the message is really, really helpful. (DESCRIPTION) Slide title, Heart Failure Evidence. At the left, a user interface, Dr. Henry Willard. Text, Address messages with your voice. Heart failure, please specify acuity and type. Heading, Evidence for heart failure. Text, Acuity and type of heart failure were properly documented. Heading, Required Evidence Not Found. Exes are next to text, Explicit mention of N Y H A class. Explicit mention of grade or N Y H A classification. Heading, Evidence Found. Check marks are next to text, Explicit mention of acute or chronic heart failure. Explicit mention of systolic or diastolic heart failure. Evidence of diastolic heart failure (disorders). Heading, Evidence Not Found. Exes are next to items in a list. (SPEECH) And this is just a view of how it would come forward. When I spoke about efficiency for the CDI team, in that bottom right you'll see, hey, not only are these the things that we found, in the screenshots that you see there, this is what we found. We've gone through the record and know that these things are pertinent to this condition. And so this makes it much more efficient, but we've also surfaced the things that we looked for that would be valuable but didn't find. And so these are just a couple of examples of how, when we create the intelligence and insights for both physician and CDI, we're looking at it always to be pertinent and relevant to the patient at hand as well as within industry guidance, and again, driving that efficiency within the workflows. (DESCRIPTION) Slide title, Heart failure concept customization.
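An evidence sheet like the one on this slide boils down to a simple grouping: for one concept, what the NLU found in the chart versus what it looked for but did not find. Here is a minimal sketch with an invented schema; the product's real data model is not shown here:

```python
# Hypothetical sketch of an evidence sheet: for one clinical concept,
# group what NLU found in the chart versus what it looked for but did
# not find. Field names are illustrative, not the product's schema.

def build_evidence_sheet(concept, findings):
    """findings: dict mapping an evidence description -> bool (found?)."""
    return {
        "concept": concept,
        "evidence_found": sorted(k for k, v in findings.items() if v),
        "evidence_not_found": sorted(k for k, v in findings.items() if not v),
    }

sheet = build_evidence_sheet("Heart failure", {
    "Explicit mention of acute or chronic heart failure": True,
    "Explicit mention of systolic or diastolic heart failure": True,
    "Explicit mention of NYHA class": False,
})
print(sheet["evidence_not_found"])
```

Surfacing both lists is what drives the efficiency described above: the CDI specialist sees the supporting evidence already gathered and, just as importantly, what is still missing.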
Slide text, Notice customization of ejection fraction for customer-specific values. On the left is a list of logic rules. On the right is a flow of steps for a technical process. (SPEECH) And the last thing I just wanted to highlight here is there's a lot that goes into this. This visual on the left gives you just a little bit of behind the scenes in terms of all of the concepts and things that would go into an intelligent notification, if you will, within a workflow. But as I have in the triangle there, there's so much customization that occurs. Not every organization is looking at it exactly the same way. I remember back when I worked in CDI in the hospital, trying to get physicians to agree on what their definition would be for acute tubular necrosis versus an AKI and where the thresholds sit. And it's hard to get multiple internists and nephrologists to reach consistency on something. And so we recognize that we have customers with different thresholds that are meaningful for them. And we work regularly with them to make sure that it meets their needs, reaches their risk tolerance, and is, again, within their compliance measures too. (DESCRIPTION) Slide title, Unique closed-loop workflow. Slide text, Heading, Physicians. Bullet points, Nudges appear inline in the E H R workflow in real time, generated by N L U reasoning over the encounter. Common documentation gaps are resolved proactively, before the note is saved. Heading, C D I teams. Bullet points, N L U insights prioritize the work list, uncovering top opportunities. Visibility to physician nudge interaction provides real-time feedback on physician engagement. N L U automatically generates evidence sheets to support queries, replacing manual effort. Queries can be delivered inline, in physicians' workflow, avoiding the inbox. On the left, arrows turn clockwise around labels, physicians, patient care, C D I and H I M teams, revenue integrity. (SPEECH) Great. Thanks, Diana.
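The ejection fraction customization shown on the concept customization slide, with each customer supplying its own cutoff values, can be sketched as config-driven logic. The 40% default and the names below are purely illustrative, not clinical guidance or 3M's actual configuration:

```python
# Hypothetical sketch of per-customer threshold customization: the same
# concept logic, parameterized by an organization's own cutoff, such as
# the ejection fraction below which reduced EF is flagged. The default
# value is illustrative only, not clinical guidance.

DEFAULTS = {"ef_reduced_below": 40}  # percent; invented default

def classify_ef(ef_percent, overrides=None):
    """Classify an ejection fraction using customer overrides if given."""
    cfg = {**DEFAULTS, **(overrides or {})}
    return "reduced" if ef_percent < cfg["ef_reduced_below"] else "not reduced"

print(classify_ef(38))                                      # default threshold
print(classify_ef(38, overrides={"ef_reduced_below": 35}))  # customer-specific
```

The same EF of 38% is flagged under the default cutoff but not under a customer's stricter one, which is exactly the kind of organization-specific risk tolerance described above.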
And when you were talking, I heard you mention bridging the gap that CDI has in working with coding and quality. So Kaitlyn, I'll ask my next question to you: how can organizations make sure to use compliant technology to bridge the gap between CDI and others in the organization, like coding, quality, and the physician advisors? Yeah. So 3M HIS has a really unique value proposition where we are in front of many physicians through our Fluency Direct application and the many applications delivered through that Fluency Direct control bar, and then we're also really working with the coding and CDI teams. So we feel that we have this opportunity to create a really unique closed-loop workflow between these groups. It is integral that CDI and the physicians and coding all stay on the same page, that they're operating from the same playbook and have the same definitions. And so for us, it's extremely important that we present this information both to the physicians in real time and to the CDI and HIM teams. So really, a unique value proposition that we have with our CDI Engage One offering is that as the physician is creating their documentation in real time, we are able to use our natural language understanding and our AI technologies, like Julie and Diana explained, to understand what the physician is talking about and see if there are opportunities to ask them for the pertinent information in that golden second when the physician has their focus on that patient. Something we talked about when we kicked this off is really about creating time to care for the physician. And we understand from working with them that they have so many drags on their resources; they need to be doing so many things. If we can ask them to capture these pertinent documentation concerns right at the time that they are working on that patient, it helps them so much with the accuracy and the speed with which they can add those things. That same feedback goes to the CDI team.
We might know too that right now is the appropriate time to get this documentation from the physician. Maybe we want to wait for the echo to come in for the heart failure, the BNP, the ejection fraction, to get that documentation. The physician can also defer that nudge in real time, and that same evidence is presented over to the CDI team. The CDI can see where we've looked across the entire patient chart to prioritize this information. They can see what evidence we're still waiting for that is in line with Diana's information models, and then they can make that clinical decision: am I going to now manually query the physician, or do I want to give them a chance, so that maybe next time they go into the documentation they can answer that nudge again? What is the right threshold? So I think it's extremely important that this information is presented to both the physician and the CDI and HIM teams. Another big thing that Garri kicked off with is how we can leverage technology to make up for our workforce shortages. Nothing ever replaces the way a human thinks and human decision making, but can we cover cases that a human would never be able to get to? By using our artificial intelligence technology to understand what the physician is documenting while they're documenting, that really helps with short-stay cases, weekend cases, things like that, where we know our CDI teams may eventually get to it, but these are the ones that are a little bit harder to reach, especially with workforce coverage. So we really do have a unique value proposition in being able to cover and keep in concert both of those sides. (DESCRIPTION) Slide title, The 3M M Modal C D I Engage One implementation strategy. Slide text, Holistic solutions lead to staged implementation, which leads to 3M M Modal Adoption Approach. Heading, Staged implementation. 1. Evidence sheets. Find and leverage opportunity using N L U-enabled clinical intelligence. 2. Query delivery.
Improve physician workflow and reduce E H R inbox burden. 3. Nudges. Move from reactive and burdensome to proactive and fluid. Heading, 3M M Modal Adoption Approach. Heading, Pre and Post Contract. Organizational discovery & alignment on expectations and success factors. Heading, Stage 1. Assessment of organizational readiness; augment C D I teamwork. Stage 2. Leverage Evidence Sheets to strengthen manual queries. Stage 3. C D I Team Adoption to strengthen and target high-yield opportunities for the organization; layer automated clinician-facing nudges. Set up for sustainment post go-live. (SPEECH) We're able to do this through a three-step implementation approach. This is something we learned through developing best practices with a lot of our customers. Over the last couple of years, we had this wealth of technology, and we really had to learn the best ways to apply it, both for the CDI and coding teams and for the physician teams. And what we learned is to start with the CDI and coding teams by rolling out what we call our evidence sheets. We've very much learned that it's important to get the CDI and coding teams on board and have them understand exactly how the AI is working, so there's that level of buy-in, and when we go to the physician, there's that trust there. These evidence sheets show up natively in the applications that the users are already working with. They don't have to navigate somewhere new. And we make sure to do a clinical review with the group to say, OK, what rules do we want to be configuring, what are the key things that your organization is looking at, and are these then laid out in a way that makes sense to you all? We've learned from some organizations to include additional variations of heart failure because that was something that was important to them. So we do customizations on top of that as well.
And doing this, starting with the evidence sheets that are seen just by the CDI and coding teams, gets us that organizational buy-in, and it helps augment that CDI team work. We can actually start measuring our impact with the evidence sheets, and we call it finding the needle in the haystack. How many times were people going into cases that were potentially not resulting in queries? Can we bubble up the cases that have opportunities to the top of the list? This goes hand in hand with the prioritization: expose the clinical methodologies through the evidence sheets, and then, starting with step one, they can send a manual query. For step two, we want to improve that query delivery process for the physician. We know that a lot of physicians today have a very laborious process for responding to their queries. It can be up to 10 clicks. It may be outside their workflow. It might be co-mingled with other tasks that they have to do. So something that we've been able to roll out into the physician experience is giving them a one-click way to answer a query. And we've gotten tremendous feedback on that. That now becomes the carrot for the physicians to interact with the assistant that we have on the screen in the Fluency Direct application. They're able to see exactly what queries are assigned to them. We have configurations so that they can see what's assigned to them and their peers. If they navigate to a case that has a query, they can opt to see it. So it really helps them be able to answer the query, and we've built it in a more intelligent way by leveraging those evidence sheets, so they can just use one click to respond to it. The query response is sent into the EHR, and that takes care of the workflow. We've also seen really great outcomes just by adding that second step. We see reduced query turnaround times and increased physician satisfaction.
Because we took something that was previously maybe a 10-step process for them and moved it down to one or two steps. So again, it gets back to creating time to care. We want to give physicians and clinicians a motivation to interact with our assistant technology, or our CDI Engage One nudge. Let's present a more user-friendly way to answer a query and get that physician satisfaction. Finally, in step three, we really pump up the artificial intelligence on the physician-facing side. And that's by rolling out our nudges across very specific physician groups on items that are pertinent to them. So when we get into our best practices, which we'll speak about in a little bit, we want to make sure that we're identifying things that can be identified by the AI and that are pertinent to the physicians we're showing them to. So we might break out a cardiology group, a nephrology group, an OB-GYN group. Right now, we're only recommending enabling four to five clinical concepts for each of those groups, and really having those things that have strong clinician buy-in, that the clinicians feel confident about responding to and answering in real time, and that they feel empowered, with the information that they have, to answer these CDI topics. And that's where we get the breadth, because this is now going across all the documentation that they're doing, and we can detect it in real time. Again, this is married back with the evidence sheet. So on the evidence sheets, the CDI teams can see when a physician received a nudge and how they responded to it. We really want to set this up to be sustainable. So we measure how many times we did not need to show a physician a nudge because they got the documentation right to begin with. That's huge. So we're saying the best nudge is the one we don't need to give.
So we're hoping to set up these sustainable learnings, so that people become better documenters naturally over time, and then we can consider enabling a different rule for that group, or more rules for that group. So this is a holistic implementation strategy where we can bring together the physician side and the CDI and coder side by giving them the same information, but then putting it in the place they most commonly interact with that's useful for them. Diana, do you have anything to add about these processes? No, I think you covered it well. Like you said, we've seen really good success where we are not bombarding physicians or CDIs with tons of concepts, and are certainly serving up the things that are most meaningful to the organization in terms of priority. And like you said, it really is important to have that physician buy-in, especially-- my team works most closely with the content itself. So for those thresholds that are really important and will really drive success, we want to be very mindful and customize the content to be most meaningful for an organization. I don't want to put you back on the spot, Kaitlyn, but you mentioned the physician a lot. What can be done to make sure that this technology isn't adding to an already overburdened physician workload? (DESCRIPTION) Slide title, Implementation & adoption critical success factors. Heading, Defined goals. Slide text, Hospitals use 3M C D I Engage One to achieve quality, financial, clinical goals, or all of the above - defining your goals is critical. Heading, Incremental strategy. Slide text, Flexible and comprehensive, 3M C D I Engage One offers a staged approach to realize enterprise opportunity - start with what's most important and then build. Heading, Data-driven improvement. Slide text, 3M gives you the data to measure, the flexibility to adjust, and the support you need to continuously improve. Heading, Leadership and governance.
Slide text, Clear and well-aligned leadership and governance structure -- which includes a physician champion -- sets you up for success. This ensures strategic enablement of clinical content for both C D I staff and clinicians. (SPEECH) Yeah. Again, this is a place where we have a lot more lessons learned as well. I think when we started rolling out, everybody was very excited about this potential technology, that we could cover all this documentation that's going on, and we didn't have a lot of balance about how frequently we should have nudges firing, the amount of content we should have them firing on, or the number of physicians they should fire for. So we've gotten a lot more pragmatic in defining our best practices in order not to overburden the physician. We talk about that in our implementation and adoption process, and we defined these four categories as our true north on how this technology should be rolled out for the best physician satisfaction. Number one is defining your organizational goals. We notice that when the physicians, alongside CDI and the C-suite, are aligned on what they want to improve across the health system, we get a lot better outcomes. We've seen groups rally behind their US News and World Report quality indexes and rankings, or maybe a specific diagnosis that the hospital is really hoping to improve. When we see that alignment between C-suite, CDI, and physicians, that really helps in defining our goals. So what I recommend is, if we see that we really want to take a strong quality focus, we know this is the content that is going to move our quality needle, and we stick to it. That really helps the outcomes. I think when we see things enabled across the board with less focus, that's when people lose their true north on why we are working on this. So having defined goals is just so important.
And alongside that, having that shared understanding between CDI and physicians of why I'm asking you this and the importance of documenting it, I think, is integral. I mean, that's basic CDI. But I think we've seen over the past couple of years that when that shared understanding doesn't match up, we don't have a strong basis to stand on. So it's so important to have that great relationship between those groups. The next is having an incremental rollout strategy. As I talked about previously, we have all these elements of the application that could be rolled out at once, or they could be rolled out in different orders. But we took a moment to step back and say, OK, what if we first start with the evidence sheets, then the queries, then the nudges? Maybe queries don't make sense for your organization because you have very strict governance that says they need to be in a specific place. What is the incremental strategy for how you want to roll this out for your organization? And that can be done as well with the content that you choose to enable. So maybe you have a very engaged group of cardiologists and you know, I have a strong group here; this is where we want to start. We're going to start with our cardiology evidence sheets on the CDI side, have the CDI specialists really get a high level of comfort with those and understand how they're working, then expose the physicians, and have the physicians and the CDI in dialogue: when you start to see the nudges, they're going to be around this concept, and this is the documentation they're looking for to resolve it. That resolution documentation is built on best practices; it isn't just arbitrary, and it can be trusted. So what is your incremental rollout strategy for these components, so that we get them out successfully and don't overwhelm the group with the amount of data we're putting out?
So incremental on the content we're releasing, and incremental on the components of the application that we're releasing. The next is data-driven improvement. This is something we've also spent a lot of time on in the last year: how do we measure how people are interacting with what the application is putting forward? We're exposing these evidence sheets to the CDI group and the nudges to the physician group. So a KPI that we look at with the nudges is, are physicians resolving the nudges? That's the best thing possible. That means the nudge showed up for them, they provided the documentation, and it went away. That's a resolution. That's really what we're trying to capture through those processes: having that intervention and that outcome. But we also look at, like I talked about earlier, the natural satisfaction rate. How many times did we not need to show the nudge because the acute systolic heart failure was already present on the chart, because that heart failure was already optimally documented? That's a great indicator of the health of your documentation at your organization, and maybe that would make you think, oh, maybe we shouldn't concentrate on this nudge. For some organizations, we've seen really high natural satisfaction rates for chronic kidney disease stage. So we thought, maybe you're already doing really well at that; that's maybe not the best nudge to enable, because we don't have the opportunity there. We also see how many times nudges are being dismissed. That's a really strong indicator to us that the physician did not like that feedback, because it means they physically clicked the X and said, I am not responding to this. More often than not, they could just ignore the nudge, and we capture that statistic as well. So those are the four nudge interactions that we look at. We look across the different provider groups, and we look across the different locations this is rolled out to.
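The four nudge interactions just listed, resolved, naturally satisfied, dismissed, and ignored, amount to simple rates over nudge opportunities. A hypothetical sketch of the counting, not 3M's actual reporting code:

```python
# Hypothetical sketch of the four nudge-interaction KPIs described above:
# resolved, naturally satisfied (never needed to fire), dismissed, and
# ignored. The counting logic is illustrative only.

from collections import Counter

OUTCOMES = ("resolved", "naturally_satisfied", "dismissed", "ignored")

def nudge_kpis(events):
    """events: list of outcome strings, one per nudge opportunity.

    Returns each outcome's share of all opportunities.
    """
    counts = Counter(events)
    total = len(events) or 1  # avoid division by zero on an empty list
    return {outcome: counts[outcome] / total for outcome in OUTCOMES}

events = ["resolved", "resolved", "naturally_satisfied", "dismissed", "ignored"]
print(nudge_kpis(events))
```

Sliced by provider group, location, or medical condition, the same rates support the comparisons described next, such as better adoption at one site than another.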
So are we seeing better adoption at one location than another? Was training different at one or the other? We look across the different medical conditions. Do we see really good feedback for heart failure but poor feedback for AKI? Maybe the physicians aren't educated on exactly how this AI rule is working. So we use our data-driven improvement to really look at how the nudges or the content we rolled out are being received, and then that helps us adjust how we move forward. And then finally, leadership and governance. Leaders are key here. We always want to define a physician champion, our CDI champions, our IT champions. This is somebody who can talk to the different types of end users, help moderate the system, and really set us up for success. So I'll pause here if anybody else on our panel has any feedback about this question of overburdening the physician. Not sure if we still have Julie. We do. So I think you covered it well, Kaitlyn. And I would add that as we look at utilizing the same technology, the same AI, the same automation, we definitely provide a way for the physician to not have as big a manual lift, and we are doing that subtle education that you spoke to, to relieve their burden. And then we also create the same collaboration with the CDI, coding, and quality teams to leverage that as well. And as we provide this AI in real time to each of those personas, we then support that provider and release more of that burden from them. Yeah. I think just having the physician in the room is so important during these discussions. When these decisions are made unilaterally-- I think we've seen it both ways. When just the physicians are doing the decision making, or just CDI is doing the decision making, we get lopsided outcomes. We see good adoption on one side, but we don't see the outcomes that we need to be driving to. So it's super integral that we have both sides.
And I'm so excited, too, to see the role of the physician advisor taking more shape, with more organizations hiring full time into that role. I think whenever we refer references, a lot of our reference customers talk about how much the adoption was driven by the physician advisors. So I'm really excited to see that role taking off more and more in organizations across the US. Great. So, [? Aj, ?] I have a question for our panel, if I can ask. Sure. Maybe each of you can answer it a little differently from the viewpoint in which you work: what do you recommend as best practice for really representing the clinical status in our patient documentation? So maybe, Julie, we'll start with you. So I think for sure, Garri, that as we look at each view of our workflow and each person that's part of that particular part of the workflow, we really have to think about the continuity across the data, the AI modeling, and the technology at the point in time that they're interacting with that patient care. It may be before we even get into the hospital, in that ambulatory space, where we're trying to collect the appropriate physician information at that point in time. So how do we make it seamless yet use the same data models and the same information, so that we're not mixing up what the goal is? Sometimes we can have disparate goals across the organization. Say I'm working in CDI and maybe I'm in a program that really is just focused on MCC/CC capture, a basic beginning program. I could be bumping up against my quality folks because all of a sudden, I'm communicating with the provider to capture a major complication and comorbidity, and that major complication and comorbidity just rang the bell for a PSI. Or I am not really thinking about POA in any of those aspects.
So I'm now creating a problem downstream, where that quality group is going to decide that they have some KPIs and go to the provider with something that maybe doesn't line up with the same data or technology my CDI uses. So it can really create a problem with our best practice. To me, it's streamlining across the platform and the users, making sure that we're using the same data and the same technology, and that the provider is really getting the same messaging, so he or she isn't just playing the pinball machine of which KPI am I looking at now. Kaitlyn, anything you would add to that? Yeah. Julie, I think you brought up really great points about highlighting value-based care. There are all sorts of things that ring the bell: CC/MCC capture, PSIs. But ultimately, we're trying to get an accurate reflection in this documentation to show the resources the hospital used on these patients. And that isn't just to reimburse the hospital; it's so the next care provider down the line can read that documentation and see what's going on with this patient. When we look at things like HCC capture, that's looking across the patient year. We're using previous years' documentation to inform what's going on with the patient. It's just so important to capture what's going on in that episode, while we have this opportunity with this patient in front of us. So a lot of times, like I talked about earlier, it's getting CDI and physicians on the same page. I think everybody understands that critical mission: it wasn't done if it wasn't documented, and how do we show through that documentation the level of care that the patient should receive? And then ultimately, driving to value-based care, I think that's how we see things evolving in the future: talking about what we are monitoring, evaluating, assessing, and treating. That's not just in one arena; it's holistically what's going on with the patient.
So I think using this technology, like Diana went through, we have these countless information models where we've built out a lot of those components. We don't just think about heart failure. We think about the vital signs that go with heart failure, the structured labs that we receive about heart failure, the medications for the treatment of heart failure that the patient is being put on. And we continue to build those out all the time. And that's not our only information model; we have hundreds of those that we maintain. So I think we have this really great opportunity where we're running through this information in real time, identifying it, and presenting it to the physician, and that is all in the effort to capture a complete clinical picture of the patient. Interestingly, I've been getting a lot of feedback lately asking, can we tweak different nudges based on the payer that the patient has? And I go back to ACDIS and AHIMA, and that's not really our true north. Dr. Smith shouldn't be treating and documenting a patient differently because they have Blue Cross or they have Medicare. He should be rendering the best treatment possible for that patient and documenting what he did. So it's been interesting to have that discussion with a lot of customers, because I think we do get put into a corner sometimes with denials. But if we can accurately represent what the health system is doing to treat that patient, that's going to flow into so many downstream sources to be used to accurately reflect future treatments of that patient, on top of those other KPIs we talked about. Diana, anything to add to that? Yeah, not much to add. I think you did a great job describing that. Mainly, I think what's really exciting is that it really is pulling together all of the things that artificial intelligence can find, which can really supplement the workflow for the physician and for CDI. And documentation may not always sound like the most exciting thing.
But when you pull all this intelligence together and look at gaps in terms of how that patient's story will be told when it gets coded, it is really important that the documentation truly reflect everything that's going on with the patient. So as much as you can think you're doing a really great job with your manual efforts, supplementing them with the things that you can bring technology to scale for is really going to help drive the best coded picture for every single patient. Kaitlyn mentioned short stays. They may never get a CDI touch, and the documentation really may never translate to what brought them into the hospital and what the true outcome is. And as it ties to value-based care, like you said, Kaitlyn, it's beyond this episode of care. We want the truth about the patient across the board, and that's really where the industry is headed. And really, probably the most effective way to get there is to do it leveraging technology. So I think it's an exciting time around this, and it really will translate to better patient care. At the end of the day, we're moving beyond documentation for its own sake; it's really the documentation that will translate to that. Can I add one last thing? We look at all this data, and it's going to pay us, it's going to reflect our quality, and we want the most accurate, true information about the patient. And the other thing that I think is often forgotten is that a lot of this data then goes back to organizations like CMS and payers and all the raters and rankers out there, and it then informs everything that happens. So if we have patient data and it's not accurate, then it's difficult to get to the next clinical evolution and the thresholds that we really need. The data that we put into the industry informs the industry.
And if we aren't accurate and correct, it can keep us uneven in our footing, so that maybe we don't just fall short of documentation standards, but clinical and treatment standards are impacted by that. Yeah. I have one more thing to add. Sorry, it made me think of it-- Julie and Diana, the way you were talking: I think previously, there was this perception that if something was not captured in structured data, it really couldn't be used to measure. So that's things that were ending up on the problem list, the medication list, the allergy list. We now have this technology that opens up all the unstructured data, so we can leverage that. And a lot of times, the ICD-10 code that gets assigned to that unstructured data is what tells the story. So if we can capture even more of that unstructured data, that's not just lost data. That is now going to be captured in a more accurate ICD-10 code, a more accurate transition of care document. And I think that's where a lot of our vendor partners see us coming in as well. We just have this really precious kernel with the natural language understanding. And even when we work with outside systems, that's something that we end up integrating a lot and giving the value at the end. So I just want to leave that thought as well: everything can be reasoned over now, and it's extremely powerful. And Kaitlyn, that's really important. As I work with a lot of our customer base and their executives, organizations that have entered into value-based care risk agreements that are not performing well realize that they really did not understand the true complexity of the population that was going to be in that risk agreement. Being able to capture all of that data, whether it's in structured or unstructured components, is really helpful to have. So the quality of your data also helps your own organization make decisions appropriately, based on the true complexity of the population that you're treating.
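Kaitlyn's point about unstructured narrative flowing into a more specific ICD-10 code can be illustrated with a toy lookup. I50.9 and I50.21 are real ICD-10-CM codes (heart failure, unspecified; acute systolic heart failure), but the matching logic here is a deliberately naive stand-in for NLU:

```python
# Toy illustration: terms recovered from unstructured narrative map to a
# more specific ICD-10-CM code than an unspecified one. The tiny lookup
# table and matching logic are illustrative; real NLU reasons over far
# richer context than keyword sets.

CODE_MAP = {
    ("heart failure",): "I50.9",                       # unspecified
    ("acute", "heart failure", "systolic"): "I50.21",  # acute systolic
}

def code_for(terms):
    """Return the code for the most specific term set fully present."""
    terms = set(terms)
    best_len, best_code = -1, None
    for phrase_terms, code in CODE_MAP.items():
        if set(phrase_terms) <= terms and len(phrase_terms) > best_len:
            best_len, best_code = len(phrase_terms), code
    return best_code

print(code_for(["acute", "systolic", "heart failure"]))  # I50.21
```

The point is the delta: capturing "acute" and "systolic" in the narrative moves the record from an unspecified code to a specific one, which is exactly the story-telling value of opening up unstructured data.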
So with that, Adriana, I'm assuming we're ready to go to some questions maybe? Yeah, we have seen some questions come in. So I'll ask the first one to the group. I think, Diana, this came during your slides. Would this customization include things like national registries-- like cardiac, such as STS-- and renal classification within two weeks? Yeah. So it's a good question. It's interesting because we have a lot of users of the technology who are really in an advanced stage. And so they've really adopted it and want to, within their organization, leverage it for different use cases. So while it's not necessarily a registry abstraction tool, there are certainly evidence sheets that we have created that have helped organizations solve different problems outside of that. So there are, like Kaitlyn mentioned, numerous information models that can be leveraged within an organization for a particular use case. I can't speak directly to that registry. That's something that we have used historically, but we're seeing a lot of the organizations that are pretty far into their journey get really creative and ask for things that have not been much of a stretch to provide additional value to their organization. Yeah. Diana, I was just going to add, I know that we worked with another customer around the registry asks when they learned about the power of this technology. And we actually changed our delivery mechanism. So rather than it just going to CDI, what we took from unstructured data we made into structured data, and we fed that to their enterprise data warehouse so that they could use it in their own abstracting workflows. So that's also a possibility. So I know today we've talked specifically about using this within the CDI and physician workflows, but a lot of this data can be operationalized and sent directly to the customer data warehouse for help with these more future use cases as well. Great, thanks. We got another question. 
Could you please show an example of your evidence sheets, perhaps the CHF topic? Sure. We can go back to that slide. (DESCRIPTION) Slide title, Heart Failure Evidence. (SPEECH) That is just one example-- yeah, this one here is just one example of heart failure. So within the heart failure group, if you will, there are numerous variations of what an evidence sheet could look like. We've blacked out a lot of things here to just keep it as an illustrative example, but this is just one example: an organization may take the standard variation of this rule, or they may ask for some customization if their CDI teams really want to see everything we have within this particular concept. So it's just one example and it's really illustrative, but it is something that makes it much easier for the CDI within their workflow. These are the things that they would be out looking for around this concept, and we're surfacing them up to them very easily. So I don't know, Kaitlyn or Julie, if you want to add around the workflow but-- I was just going to add, Diana, you talked about this being one example of a heart failure evidence sheet. Others we've done have been around clinical validation, where acute heart failure has been documented but we do not find the supporting evidence for that heart failure to be documented as acute. So those are also part of the content that would be available to enable. All right. We've got another question here. What do nudges to physicians look like? Are they regulated by FDA in any way? If I can take that one. So you do have an example over here of a nudge card. Nudges are completely customizable by your organization. So we want to make sure that your organization is comfortable with the language, the content that is driving it. 
Our out of the box language, which Diana's team works on through that clinical governance process, is ensured to be 100% compliant with best practices and not leading, but you can change that documentation any way that makes sense for your organization. We have some customers that put links to their internal knowledge bases in the nudge cards themselves. So if they get nudged for heart failure, there's a link to their knowledge base for heart failure if the physician wants to click on that and learn more about it. We can enable things like that. One direction we've moved a little bit away from, in order to reflect more compliance, is what you see here, what we called an actionable message, where you can click on an option and submit it into your documentation. With the latest ACDIS/AHIMA documentation that came out in December, we're moving more towards open ended questions, because when you do provide options to the customer, they should only be those that are clinically supported. So we felt a lot more comfortable with the open ended ones. A lot of our early adopter customers agree with keeping the questions open ended so as not to be leading, and our best practice is to have all open ended questions in those nudge cards. But we do constantly make those updates based on coding clinics, best practices, things that come out. Interestingly, with the question about FDA compliance, we handle that a little bit differently. Because all of our information models are exposed to customers and we show exactly what the approach is, it doesn't fall under that purview. It's not a black box. FDA goes more after devices where it's proprietary how they function. Here, we're exposing to you that this heart failure nudge is firing because you said heart failure in a positive context somewhere, and there was at least one piece of evidence that supported heart failure across that encounter. So we are very open with our clients on exactly how those rules are firing. 
That way, they can sign off on it, and that also avoids some of the FDA compliance concerns, because in reading the legislation, it does not fall into how they define something as a device, because it is open in that way. Great. And I heard you mention the ACDIS/AHIMA brief. And one question is, how is the nudge process shown to be compliant with the latest practice brief? Yep. So there are things that we've taken from the latest practice brief and built into the application. Like I mentioned, we're keeping the nudges open ended in order to reflect that. We're keeping a more stringent record of the nudges. So in our reporting out, we can capture exactly which nudge fired on which record, what the language said, and how the physician reacted to it. The CDI can also see how the physician reacted to the nudge natively on their evidence sheet. So a lot of that ACDIS/AHIMA practice brief was around setting organizational policies on how to proceed with asking physicians questions. So we're giving the CDI all the information they need for that decision tree. But we found from working with a lot of our clients, who consider themselves conservative organizations when it comes to measures like that, that they felt the application was meeting their needs. I don't know, Diana, Julie, or Garri, if you want to add anything? No. I would just say, to the point Kaitlyn made about policy and procedure, that definitely, the customer would set the policy and procedure that is appropriate for them, and then we provide the ability with the technology to do what you need to based on that approach. Great. Thanks. There were a few questions about the recording. The recording will be on our website and available for folks. (DESCRIPTION) Slide title, Let's continue the conversation at ACDIS 2023. Heading, Schedule a 1 to 1. Want to meet with a 3M representative? 
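To make the "not a black box" point concrete, the firing condition described by the panel--a condition documented in a positive context plus at least one supporting piece of evidence on the encounter--can be sketched in a few lines of Python. All names here are hypothetical illustrations, not 3M's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    concept: str        # e.g., "heart_failure"
    positive: bool      # asserted in a positive context (not negated or hypothetical)

@dataclass
class Evidence:
    concept: str        # condition the finding supports
    finding: str        # e.g., "elevated BNP", "IV diuretic given"

def nudge_fires(concept: str, mentions: list[Mention], evidence: list[Evidence]) -> bool:
    """Fire a nudge when the concept is documented in a positive context
    and at least one supporting piece of evidence exists on the encounter."""
    mentioned = any(m.concept == concept and m.positive for m in mentions)
    supported = any(e.concept == concept for e in evidence)
    return mentioned and supported
```

Because the rule is this transparent in form, a client can review and sign off on exactly why a given nudge fired, which is the openness the panel describes.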
Let us know and we can schedule a one-on-one meeting that fits into your busy ACDIS conference schedule. Heading, Stop by our booth! Booth #131. Join us at the 3M booth to explore how a comprehensive solution can transform your C D I program. And get a new caricature portrait! Heading, Hear presentations on achieving success. 5 sessions. Role of an Advanced C D I Program in a Clinical Validation Denial Appeals Process. Joy Bombay, M S N, M H A, C C D S; Joseph Cristiano; and Tamara Hicks, R N, B S N, M H A, C C S, C C D S, C C D S-O, Atrium Health Wake Forest. Finding Hidden Jewels in Priority and Impact Despite C O V I D-19 Waves. Gail B. Higle, B S, B S M, R N, C C D S, Piedmont Healthcare. Leveraging Data to Improve C D I Outcomes Beyond $ $ $. Kory Anderson, M D; Kearstin Jorgenson, M S, C P C, C O C; Laura Ogaard, R N, M S N; Sathya Vijayakumar, M S, M B A, Intermountain Health. Engaging Physicians Proactively With A I-Powered C A P D and Improving Documentation Integrity With A I-Powered C D I Tools. Penny M. Jefferson, M S N, R N, C C D S, C D I S, C C S, C R C, C D I P, C H D A, C R C R, and Tami L. McMasters-Gomez, M H L, B S-H I M, C C D S, C D I P. Shoot for the Stars: How C D I Can Assist in Obtaining a C M S 5-Star Rating. Cheryl Manchenton, R N, B S N, 3M H I S. (SPEECH) After the fact. And just a quick mention, since we've been talking about ACDIS: 3M will be at ACDIS. If you are going and have any questions, please stop by Booth 131. We will have a team there. If you're interested in meeting with any of our 3M representatives, feel free to contact us. And we do have lots of clients in a few 3M sessions that you can see on the screen. We'd love to have you join us for any of those. I do want to thank everyone in the audience for attending. And our panelists, it's been a great discussion. I found it super helpful. I hope everyone else did. And a reminder. 
At the close of this, there will be a poll for your feedback, and we'd love to hear what you thought about the session. Lisa, is there anything I'm missing as we wrap up? No. I just want to echo what Adriana said about thanking our panel today. A lot of great information and questions from the audience. (DESCRIPTION) Text, That's a wrap! (SPEECH) If you do have any questions or would like more information about what you heard today, within the portal there is a Let Us Know button. So please feel free to complete that information, and we'd be happy to reach out to you to have more discussions about the solutions that you heard about today. And like Adriana mentioned, we will have the presentation on our website in the next couple of weeks if you do want to listen again. (DESCRIPTION) Slide title, 2023 3M Client Experience Summit. Slide text, The future is now. Let's go. May 22 to 25, 2023. Atlanta, Georgia. For the 3M Client Experience Summit 2023, we are excited to be at a new location and venue, the iconic Westin Peachtree Plaza Hotel in the heart of downtown Atlanta, Georgia. Mark your calendars for May 22 to 25, 2023. Each year, 3M brings together our most valued clients and colleagues to discuss evolving trends in the health care industry, learn about new opportunities in care and technology, share best practices and celebrate successes in innovation. And each year, we keep expanding and adding new and exciting opportunities! Button, Learn more here. (SPEECH) And we are excited for that in the next couple of weeks. Registration closed yesterday for our client experience summit, so if you did register, we're really looking forward to seeing you in the next few weeks. We have a lot of great presentations, a lot of client presentations, that are really going to be fantastic for our other clients to listen to. So if you'd like more information, visit our website, and you can get some more information about the upcoming client experience summit. 
And then we will also have our next session of the CDI innovation webinar in June. So be on the lookout for a link in the next couple of weeks to register for that as well. So again, thank you to our panelists, thank you to our moderator, and we look forward to having you at the next one. Thank you. Thanks, everyone. Have a good afternoon. (DESCRIPTION) Text, Thank you!


      Improve your CDI documentation by leveraging comprehensive AI technology

      • May 2023
      • Learn about practical applications and best practices for how 3M comprehensive clinical documentation integrity (CDI) technology can easily identify and flag potential documentation clarification opportunities in patient health records. This can help improve quality of care, support accurate coding, medical necessity and billing processes, and decrease the risk of regulatory violations or reimbursement denials. In addition, learn from our experts on how the application of compliant technology can help reduce the time required to resolve documentation issues, promoting greater efficiency and productivity within your organization.
      • Download the handout (PDF, 2.1 MB)
• (DESCRIPTION) Logo, 3M Health Information Systems. Text, March 2023 3M C D I Innovation Webinar: N L U, clinical content, and documentation integrity. (SPEECH) Good afternoon and welcome to our March CDI Innovation Webinar. Before we get started, and before we get into our discussion around NLU, clinical content, and documentation integrity, I'm just going to go over a couple of housekeeping items before we go over to our speakers. We are using the ON24 webinar platform. It is a great experience with a lot of different engagement tools. This is a web based platform, so if you are having any issues, close out of VPN and make sure you're using Google Chrome. That'll help with the actual platform as well as bandwidth; closing out of multiple tabs will help, too. There is no dial in number, so if you are having any audio issues, do a quick refresh, and that typically solves any issues you might be having. Again, we have multiple engagement sections for a better experience for you. We have our Q&A section, so definitely ask questions throughout, and we'll get to the questions at the end. So put your questions into that feature there. We do have a certificate of attendance available in the resources section. We also have the presentation handout if you'd like to download that and follow along, as well as a couple of other resources, in the resources section of your dashboard. We also have closed captioning. If you do need that feature, it is in the media player and runs in real time. And at the end, we always appreciate feedback, so please let us know how we did within that survey. All right. So our speakers today are Dannie Greenlee and Josh Arman. 
If you'd like to learn more about their experience and a little bit more about them, you can look at the meet the speakers section of the dashboard as well. And so I'm going to go ahead and turn things over to Josh to tell us about the agenda and get things started. Thanks, Lisa. Good afternoon, everyone. So for today's agenda, I'm going to jump into some primary goals here in a minute that Dannie and I are hoping to cover. We're going to talk about AI technology and documentation. Yes, we're going to focus a little bit on the CDI lens, but we're also going to talk about other ways in which AI can be used in your provider documentation. We're going to talk a little bit about 3M's content governance approach to our technology and content, and review a use case related to heart failure. The primary goals that we're looking to cover are achieving compliant documentation through artificial intelligence. We want to decrease the documentation burden on providers as it relates to CDI. We want to increase efficiency, accuracy, and consistency across different workflows. We want to have the ability to expand the CDI team's encounter coverage; we know that in the industry that is a large focus. As well as intelligent prioritization driven by artificial intelligence: looking at clinical factors, patient and business focused factors, as well as event driven factors. So in some of the slides coming up, as we look into the use case, you'll see how those factors can help drive some of the prioritization. (DESCRIPTION) Slide title, Game-changing cloud-based AI technology. (SPEECH) Jumping right into AI technology. It is applying systemic reasoning and contextual understanding to data aggregated from your electronic medical record. So our AI is using the data from the EHR; we partner with multiple different EHR vendors in the industry to capture that data. 
The data that we are using is primarily your provider documentation and your discrete laboratory data. So we use that data across an HL7 interface. We're not looking at just what the provider has documented into their documentation; we're pulling the laboratory data across an HL7 interface, as well as the radiology results. Those two come across an interface. So those are our three sources of data today as it stands, but we are looking to expand our data sources throughout this year to capture additional data sources beyond just the provider documentation. Continuously and automatically reviewing, analyzing, monitoring, and improving all the documentation, all the time, driving consistency and efficiency in real time. So on the provider end, the provider does not really need to click anything in their workflow to see if something has been identified by the AI; we're able to push that to the provider in real time while they're working on their documentation. In our AI, we also use standard ontologies as well as clinical concepts and value sets from across the medical record, using those data sources that I mentioned to really help identify those clinical conditions. And Dannie will jump into that in the use case when we look at heart failure. What you'll see on the right hand side of your screen is basically heart failure, and how we identify that from an AI standpoint. So we're looking at temporalities. We're looking at the children concepts or the parent concepts. We're looking at evidence of heart failure. So we're not just looking at whether a provider said heart failure; we're looking at all the different ways to capture that, and Dannie is going to jump through those as it relates to the clinical concepts. And then we're able to identify the type of heart failure. We know we need the type, we know we need the acuity, so we're able to identify those concepts as well in the provider documentation. 
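As a rough sketch of what "discrete laboratory data across an HL7 interface" looks like in practice, a toy Python parser for a single HL7 v2 OBX (observation result) segment is shown below. Real interfaces use full HL7 parsing libraries and site-specific mappings; the message content, field handling, and code values here are simplified illustrations only:

```python
def parse_obx(segment: str) -> dict:
    """Split a pipe-delimited OBX segment into the fields a CDI engine
    might reason over: test identifier, value, units, abnormal flag."""
    fields = segment.split("|")
    # OBX-3 (observation identifier) often carries code^text^coding-system;
    # keep the human-readable text component when present.
    test_id = fields[3].split("^")[1] if "^" in fields[3] else fields[3]
    return {
        "test": test_id,
        "value": fields[5],          # OBX-5, observation value
        "units": fields[6],          # OBX-6, units
        "abnormal_flag": fields[8],  # OBX-8, e.g., H = high
    }

# Hypothetical BNP result segment (code and values are illustrative):
obx = "OBX|1|NM|12345-6^BNP^LN|1|1250|pg/mL|<100|H|||F"
result = parse_obx(obx)
```

A markedly elevated BNP like this is exactly the kind of discrete evidence that, combined with provider narrative, lets the AI support a heart failure concept rather than relying on documentation alone.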
So at this point, I'm going to turn it over to Dannie to begin to discuss some of the AI capabilities that 3M has. (DESCRIPTION) Slide title, Artificial Intelligence. Two branches, deep learning and predictive analytics, feed into the machine learning branch. Three branches, translation, classification and clustering, and information extraction, feed into the natural language processing (N L P) branch. Two branches, speech to text and text to speech, feed into the speech branch. Two branches, image recognition and machine vision, feed into the vision branch. The branches, along with the expert systems and planning, scheduling, and optimization branch, ultimately feed into artificial intelligence (A I). (SPEECH) Thank you very much, Josh. So, this slide is very interesting to me, because oftentimes when we think AI, we think of Arnold Schwarzenegger fighting robots, Will Smith fighting robots, or maybe Keanu Reeves fighting robots. Maybe that's just me. But in truth, AI is made up of many parts that create a whole artificial intelligence. And we can see NLP and machine learning here driving artificial intelligence, as well as expert systems and speech to text and other things that make up the whole underpinning of our NLU and AI. And so as we go on, I want to talk in depth about the expert systems that support our clinical solutions products as we think about AI. (DESCRIPTION) Slide title, N L U engines. Slide text, Acuity Engine. Grammar-based engine that assigns acuity to findings. Acute onset, acute to subacute, acute on chronic, chronic, sudden onset. (SPEECH) So as Josh mentioned, we look at the whole concept in our natural language understanding, and he highlighted heart failure specifically. 
So it goes beyond NLP, or natural language processing. Our NLP is phrase based, sort of small in scope and just at the sentence level, while our NLU is concept based and surrounded by information models, engines, and more, to really capture the context around that single piece of information. So if we quickly go back and take a look at Josh's slide there about heart failure--he mentioned it perfectly--by capturing the children concepts or the parent concepts, the temporality, or any of these other surrounding pieces of information, we can really drive value by providing more context around that single piece of clinical information. Highlighted on this slide: we have over 20 engines in our NLU, and I just want to talk briefly about acuity. As we know from that heart failure example, you really need to capture whether that's an acute or chronic heart failure. And based on our NLU engine, we can run our documentation through the NLU, and part of it is to piece out whether that is acute or chronic. And there are many other engines that make up that, displayed here: the lab engine that reasons over labs, the clinical finding engines--you can see them all here--but this really continues to drive that context surrounding clinical information. (DESCRIPTION) Slide title, M*Modal Information Models (MIMs). Surrounding MIMs 15+ are nodes labeled findings, substance administration, action course, procedures, labs, and allergies. (SPEECH) So this slide is our MIMs, or our medical information models, which we also call M*Modal information models--it's fun to be able to use that acronym both ways. Our NLU has over 15 MIMs, and they're pretty spectacular. How I like to envision a MIM is as an empty shell that has slots that need to be filled for the NLU to reason over and generate some sort of output. So you can see in the square on the left, we've highlighted the medication administration model. 
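To make the acuity engine idea concrete, here is a minimal, hypothetical sketch of a grammar-style acuity tagger in Python. The handful of patterns below are illustrative only; the actual 3M engine is a far richer, context-aware grammar:

```python
import re

# Ordered patterns: more specific acuity phrases are checked first,
# so "acute on chronic" wins over a bare "acute" or "chronic".
ACUITY_PATTERNS = [
    (r"\bacute on chronic\b", "acute on chronic"),
    (r"\bacute to subacute\b", "acute to subacute"),
    (r"\b(sudden onset|acute onset|acute)\b", "acute"),
    (r"\bchronic\b", "chronic"),
]

def assign_acuity(finding_text: str) -> str:
    """Assign an acuity label to a documented finding, defaulting to
    'unspecified' when no acuity language is present."""
    text = finding_text.lower()
    for pattern, label in ACUITY_PATTERNS:
        if re.search(pattern, text):
            return label
    return "unspecified"
```

The "unspecified" default mirrors why this matters for CDI: documentation with no acuity language is exactly where a clarification opportunity exists.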
And it has the slots that need to be filled for this to happen. You can see a substance needs to be mentioned, a start time, a stop time that it was given. So for example, in our heart failure use case, if a provider were to document, we've given IV [INAUDIBLE] at this time, we can then begin to fill those slots. So we've got the substance [INAUDIBLE], we've got the time, and that it was given IV, in order to really reason over this information. We have to have all of these pieces of information, which are important and are driven by the value sets that we maintain. So value sets are groups of things that provide the clinical indicators we need to fulfill each slot. For heart failure, for example, if we need to understand all of the medications that may be used to treat heart failure, we maintain a value set of those medications that would be commonly found. And these MIMs really, again, drive that AI, drive that context around these individual pieces of information that we get when we reason over the clinical documentation. (DESCRIPTION) Slide title, MIMs in N L U. (SPEECH) So this is a really high level visualization of how the MIMs are working with the NLU. And you can see that we have documents that come through that are either narrative documentation or structured data. They're transitioned into the NLU, and then we semantically or syntactically process those documents to make them readable by the NLU. We reason over those with our MIMs, with our engines--all of that happens--and those produce serialized objects. Those serialized objects are then fed into the applications; that's what the application understands. And the application does a translation in order to display information that an end user would understand. So I like this slide just to think about how data flows from one place, the EHR, all the way through to the applications to really provide value for our end users. 
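The "empty shell with slots" picture, the value set lookup, and the serialized object handed to downstream applications can be sketched together in Python. Every name here (the class, the value set contents, the required slots) is a hypothetical illustration of the shape of the idea, not 3M's proprietary models:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Illustrative value set: medications commonly used to treat heart failure.
HEART_FAILURE_MEDS = {"furosemide", "bumetanide", "metoprolol", "lisinopril"}

@dataclass
class MedicationAdministrationMIM:
    """An 'empty shell' whose slots the NLU fills before reasoning over it."""
    substance: Optional[str] = None
    route: Optional[str] = None       # e.g., "IV"
    start_time: Optional[str] = None
    stop_time: Optional[str] = None

    def is_complete(self) -> bool:
        # For this sketch, only substance and route are required slots.
        return self.substance is not None and self.route is not None

    def serialize(self) -> dict:
        """Produce the 'serialized object' handed to downstream applications."""
        return asdict(self)

# Slots filled from a documented phrase like "IV furosemide given at 08:00":
mim = MedicationAdministrationMIM(substance="furosemide", route="IV", start_time="08:00")
supports_hf = mim.is_complete() and mim.substance in HEART_FAILURE_MEDS
```

The serialized dictionary plays the role of the objects in the slide's data flow: it is what the application layer consumes and then translates into something an end user, such as a CDI specialist, can read.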
And on the next slide, we're going to jump back to Josh, and he's going to talk about our content governance and our content creation processes around this information. (DESCRIPTION) Slide title, Content governance. At the left is a circle with three slices: patient outcomes, end user experience, and industry leading content. (SPEECH) OK. So if we look at this slide, on the left hand side we have the smaller wheel with the buckets: end user experience, industry leading content, and patient outcomes. When we're talking about the end user experience, that could be the provider, who is receiving the information, as I mentioned earlier, in real time while they're doing their documentation, for that increased specificity that's required for that clinical concept. On the CDI side, it's sort of that same framework, but using the tools to help identify those clinical concepts as well as prioritize the encounters in which those clinical concepts are contained, so they can be captured. And then there's the content--I'm going to jump into the content here in a second. And then really, our hope is that we're driving better patient outcomes: that we're getting the specificity in real time and we're helping with the long term continuum of care. (DESCRIPTION) At the center, content methodology is surrounded by six triangles labeled evidence-based practice, quality improvement, government regulatory, provider feedback, impact analysis, and industry regulations. (SPEECH) So if we jump into the center of the slide, with the symbols that correlate to the detail on the right hand side: let's start at the top with evidence based practice. We're really using up to date information to help develop our clinical concepts and nudges that will be presented either to the provider or to the CDI in their workflow. 
We're looking at quality improvement, using data driven analytics to really help drive the quality of that documentation. We're looking at government regulatory: looking at CMS, looking at coding clinic, we're using that information to make sure that our clinical concepts and nudges remain compliant. Provider feedback--this is very important. Our team spends a lot of time on site with our customers to really get that provider feedback, because we know today that alert fatigue is real. Administrative burden is real. We want the technology to aid in real time to save any backend workflows for the providers. Impact analysis: we want you to be able to gain value from using our technology, and every customer defines what their value is a little bit differently. But we are able to provide the impact analysis from the use of the technology. And then, as I mentioned, industry regulations: the [INAUDIBLE] guidelines we really do follow as we develop out our clinical concepts that are used in our applications. (DESCRIPTION) Slide title, Customization request process. Slide text, Adoption specialist assigned for life of project. Bullet points, Customer meetings with adoption, weekly, bi-weekly, daily as needed. On site visits as needed. Works with customer to determine nudges for go-lives and specialties. Submits customer requests (enhancements, issues, bugs). Tests with customer in product. (SPEECH) Moving on to our customization request process. So we have our out of the box content that can be used, but really our differentiator is that we do take customization from our customers. And there really is no limit there. You can bring new concept requests to us and we will develop them out, or maybe you have found a clinical concept that you're interested in, but it doesn't quite meet your organization's needs. That is completely fine. 
We're able to develop that out so that it does meet your organization's needs. For the life of the contract, we do assign an adoption specialist. This is a subject matter expert who is primarily focused on the use of the technology. The adoption specialist arranges regular meetings with the customers. This is a resource that is assigned at the beginning of the implementation and is with the customer through the life of the contract. So it's not someone who is coming in midway through your use of the technology; it really starts at the beginning of the implementation and stays with you. This is a person that we don't traditionally change out. We really want to focus on developing that vendor and customer relationship, so they're basically part of your team, helping ensure that you are using the technology to its fullest potential. As I mentioned, on site visits are as needed, and this is really at our customer's request or expectation. It's not like we're going to be coming out every week or every month, but I think we're able to develop a cadence that would meet your needs, whether that be quarterly or maybe twice a year; whatever the need is, we want to be sure that we're there to support you. The adoption specialist also works with the customer to determine what nudges to activate at go live and the specialties that we want to focus on. We're going to talk a little bit more about our best practice in future slides, and I can cover that a little further. The adoption specialist is also sort of the customer's voice. The adoption specialist is the resource that submits requests to our internal content team to triage and then develop out. And they're also available to test alongside you as you go through your process. Another resource that is part of the team is the content coach. 
The content coach is there to support both adoption and the customer. The content coach is a subject matter expert as it relates to the NLU and the clinical concepts that Dannie has mentioned. And Dannie is going to discuss her team and some of the content coaches' backgrounds, and the makeup of that team. They're also there to triage your requests and make sure that we are developing out as you expect. We want to get it right the first time, but maybe we develop it one way and it wasn't the customer's expectation, or we determine that we're capturing too many false positives and need to tweak it. Again, as I mentioned, our NLU is very nimble in the sense of customization. So the content coach is able to help answer those questions, or help guide the customer as to how we think that clinical concept really should be created or used, as well as discussing the content needs with adoption to best support the customer. The content coach is not necessarily someone who is coming on site to do the necessary work--that is where the adoption specialist comes in--but the content coach is there in the background to help support adoption as it relates to how the content is being used. So at this point, I'm going to turn it back over to Dannie for her to discuss the clinical content team and its makeup. (DESCRIPTION) Slide title, Clinical content team. Above a bridge is text, medical providers document in clinical terms. Coding and compliance need specificity in diagnosis terms. Below the bridge reads, A C D I program creates a bridge between this gap. Who builds the bridge? The clinical content team. 20+ variety of credentials (M D, Ph D, Pharm D, M S/M S N, M L S, M S W, R N, B S N, C P C, C C S, C C D S, R H I A). Years experience, from 4 to 47. (SPEECH) Thank you, Josh. 
I love this slide because I get to talk about what is nearest and dearest to my heart, and that is my team and our expertise. As you can see represented at the top of this slide, we see providers on the left-hand side in purple who document in clinical terms. And then, we know that coding and compliance need specificity in diagnosis terms. So a CDI program within facilities bridges that gap, and our application helps also to bridge that gap. And who builds this bridge on the NLU side is the clinical content team. We have over 20 varieties of credentials, from MDs to PharmDs to nurses, CCDSs, lab techs, informaticists. And we use our clinical background to create and curate this content. We also have a wide range of years of experience, which I think is really fantastic as well, from less than 1 year to greater than 15 years. So we have innovation and new ideas mixed with the wisdom of the people who've been working in these fields for a long time to really drive and maintain the value of that content governance that Josh was representing earlier. (DESCRIPTION) Slide title, Content workflow diagram. Arrows travel clockwise around a circle, which has slices labeled new nudge request; research guidelines, create nudge; review encounters; customize nudge; update engines grammars; Q A content gov; test N L U and repeat. (SPEECH) I'm going to move on now and talk about the process for the content workflow, what we do on our team. And this begins here with this green pie piece where we get a new nudge request. This may be a new nudge request, or it may be an enhancement. And this often comes, as Josh mentioned, from the adoption team, who works closely with the customers to come up with new use cases or enhance existing use cases to really drive value in their individual areas. So we get that request, we research the guidelines, and we create the nudge based on those guidelines that Josh mentioned [INAUDIBLE] UpToDate.
Whatever the clinical guidelines and CDI guidelines, we use them to verify the value of the request. We review encounters, customize the nudges, and update the engines and grammars in the NLU. We, of course, go through content QA, quality assurance, and this ties back to that content governance, ensuring that each request is reviewed by at least two CDI content team experts, SMEs. We also test this locally. And we repeat this process as needed. And highlighted on the right is that each of these requests has to come through with a release, and we've highlighted our release cycle over on the right-hand side. (DESCRIPTION) Slide title, Quantity Recommendations. Slide text, To avoid burnout for providers and C D I specialists, 3M has established the following best practices. Heading, Evidence sheets. Text, At Go-Live, 10 to 11 potential conditions from the approved conditions list. 30 Days Post Go-Live, Add 3 to 5 additional conditions. 60 Days Post Go-Live, Add 3 to 5 additional conditions. Heading, Nudges. At Go-Live: 3 to 5 specialty-specific groups. 10 to 11 nudges per group. 3 Months Post Go-Live, Up to 10 specialty-specific groups. 10 to 12 nudges per group. For each nudge, the same condition should be enacted for Evidence Sheets. (SPEECH) So I'm going to hand it back to Josh, who's going to talk about some of the best practices and recommendations. So as I mentioned earlier, this is a question that we get all the time: basically, how many conditions do we start with? As I previously discussed, we know that alert fatigue is real and the administrative burden is there. So we want to be very pragmatic in how we roll out the technology. And every customer and every organization is a little bit different in how they want to approach it. And maybe we approach it one way because we think it's going to work the best.
And then, we find out that, hey, maybe the rollout that we did wasn't ideal and we need to take a step back and re-approach it. That's completely fine. Our technology is rolled out on an end-user basis, so we don't need to do a Big Bang. For many of our customers, we don't do a Big Bang; we actually do phases. So when we look at evidence sheets, this is a workflow that is used in 360 Encompass for the CDI team. At go-live, we really want to focus on 10 to 11 potential conditions from the approved list that we recommend. We've identified that the NLU functions well with these; they drive value. So this is what we suggest you start with. 30 days post go-live, we can look to add in three to five additional conditions, and then 60 days post go-live another three to five. Again, every customer is going to be a little bit different; there's not necessarily a cookie-cutter response here. Some teams react to evidence sheets a lot better and find them easier to use than other teams, so we're able to go at a faster pace. As well, I can't stress it enough: it is very important that we have interaction and management from our customers as we roll out this technology. This is not a sort of install-and-drop piece of content. We really need active engagement; where we've seen the best success is with those customers that are actively engaged with us. So the evidence sheets are truly focused on the CDI workflow, helping capture those clinical concepts from encounters so the team doesn't have to go review them and jot them down on a piece of paper. The AI is doing that heavy lift for them and pushing that information to them in their workflow. As it relates to nudges on the provider side, this is where, while we find that the CDI team maybe has a little bit more tolerance for volume, we know providers are a little bit more, I guess, boisterous as it relates to how they perceive technology.
So from a nudge standpoint, we really want to start with about 10 to 11 nudges per group; that's maybe even a little bit on the high end. We tend to start smaller and add in. But we want to focus on three to five specialty-specific groups. So what that means is maybe we're not going to do a Big Bang approach. But maybe for your organization, you've determined that a Big Bang, focusing on the same conditions for every specialty, is the way to go. We have the ability to carve those out. So maybe you want to focus on hospitalists, then pulmonologists, and then nephrologists. You can define all those groups and then define what content or what nudges are to be enabled for those groups. So not every group has to have the same nudges enabled. Of course, they can. Maybe you've determined, from your organization, that there's an initiative and you want to enable malnutrition for every provider no matter what their specialty is. You can do that. But we've also found that if you are focusing on specialties, most likely those providers that are answering those nudges are going to be able to add that specificity in, and it's much more pertinent to the patient population they're treating. We have heard from providers: I don't know why I'm receiving this nudge type. This isn't a nudge that I would traditionally respond to or even be queried for. So that's why we're able to break this out based off of specialties. I always urge customers: don't find yourself going down rabbit holes as you define specialties, because you can find yourself in the weeds. We really want to focus on your high-hitting specialties, or your costly service lines, or your highly queried service lines as we develop out these nudges. Then, three months post go-live, maybe we increase the number of specialties as well as the number of nudges per group.
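The per-specialty grouping just described can be sketched as a small configuration check: each provider group carries its own set of enabled nudges, so not every group has to surface the same content. This is only an illustrative sketch; the group and nudge names below are hypothetical, not actual 3M content.

```python
# Hypothetical per-specialty nudge enablement: each provider group gets
# its own set of enabled nudges, so groups need not share content.
NUDGE_GROUPS = {
    "hospitalist": {"heart_failure_acuity", "malnutrition", "sepsis_specificity"},
    "pulmonology": {"respiratory_failure_acuity", "malnutrition"},
    "nephrology": {"ckd_stage", "malnutrition"},
}

def enabled_nudges(specialty: str) -> set[str]:
    """Return the nudges enabled for a provider's specialty group."""
    return NUDGE_GROUPS.get(specialty, set())

def is_nudge_enabled(specialty: str, nudge: str) -> bool:
    """True if this nudge should surface for providers in this group."""
    return nudge in enabled_nudges(specialty)
```

A group that was never defined simply gets no nudges, which mirrors the phased, opt-in rollout described above.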
Again, this is rolled out on an end-user basis, so you don't have to roll it out all at once. You can focus on certain specialties; once you get that under your belt, add in additional providers. It is completely at your pace. We can go as fast or as slow as you need to. So at this point, I'm going to turn it back over to Dannie to complete our areas of coverage from the content standpoint and then jump into that use case a little bit further related to heart failure, to give you a real understanding of how our AI functions. (DESCRIPTION) Slide title, Areas of coverage. On the X axis of a bar graph are labels, such as neuro, eye, E N T, and respiratory. Each label has two bars, conditions and nudges. The Y axis ranges from 0 to 80. Every Nudges bar is significantly higher than the Conditions bars, which don't exceed 20. Some Nudges bars reach over 70. (SPEECH) Great. Thanks, Josh. We are going to dive deep here in just a moment. So this slide is a representation of our library and our areas of coverage. Organized along the bottom, you can see MDCs, and you can see where we have conditions and nudges available. Light teal is conditions and dark teal is the number of nudges for each condition. So you can see where we have lots of high areas of impact with rules created, and you can see areas where we can expand our content. And as Josh mentioned, while we do have this focus on CDI, we appreciate any use case that this NLU and AI may be used for, really to drive whatever the facility's needs are. We're always looking to expand our coverage as we create and curate this content. (DESCRIPTION) Slide title, Heart failure overview. Heading, M D C 05 Circulatory System. Text, Condition: Heart failure. Nudge count: 9. Heading, C D I guidelines. Bullet points, Code to specific type and acuity. Specify stage of H F if possible. A C C/A H A classification used as reference.
3M coding and reimbursement references, Coding Clinics, A C D I S/A H I M A references. (SPEECH) So this is the heart failure condition overview. Under MDC 05, the circulatory system, we have a condition of heart failure. And within this condition, we have nine nudges that are created to capture different bits of information depending on the different use cases. And this may be a CDI workflow, or it may be a nudge workflow, or quality, or other areas where we've had rules created. We've highlighted where our CDI guidelines and our clinical guidelines come from, specific to this condition. This is not an all-inclusive list; it's just an example of some of the information that we look for as we go to curate these rules at this level. So UpToDate and the Merck Manual are some of the clinical references that we use for building our clinical guidelines. And then, as mentioned earlier, we use the 3M coding and reimbursement references, Coding Clinics [INAUDIBLE] as our CDI guidelines to know how to capture content in those specific areas. We've selected a single nudge to review, to go down into the details of what it takes to build and maintain one of these rules. (DESCRIPTION) Heading, Nudge details. Subheading, Condition: bullet points, Documentation of H F, (+/-) evidence of diastolic H F, (+/-) evidence of acuity. Subheading, Requirement: Bullet points, Documentation of systolic/diastolic H F. Documentation of acute/chronic. (+/-) grade. Heading, C D I messages. Subheading, Rule Satisfied Message: Bullet point, Acuity and type of heart failure were properly documented. Subheading, Unsatisfied Message: Bullet point, There is documentation and evidence of heart failure but type and acuity were not documented.
(SPEECH) So this is a heart failure nudge, and you can see from the title alone we're looking for the documentation of heart failure, plus or minus evidence of heart failure, without documentation of the type and acuity of heart failure. So at first glance, what this is looking for is that somewhere in the medical record [INAUDIBLE]-- in the encounter, we've got the words heart failure. And we also have some evidence of heart failure, but we don't know the type or acuity of that heart failure. So in the nudge details, what it takes to trigger this nudge is that documentation of heart failure plus some pieces of evidence of heart failure. And then, what it takes to resolve this nudge is the very specific documentation of the type of heart failure and whether it was acute or chronic. We also have some information called out in the middle and on the right-hand side of this slide with the messaging. And these messages can be displayed depending upon where in the workflow this is. As Josh mentioned, we have provider messaging, and so if we're nudging the provider in real time to capture the specificity, we have that message there, and those messages can be customized per facility and per whatever the hospital is trying to capture, because of those relationships the CDI specialists have with their providers; they know the types of messages to put in front of them. The CDI messages are a little more general. And that's because, as Josh mentioned, we can put more information in front of them, and they can use that to decide whether to link a query or prioritize their workflow based on these evidence sheets that are put in front of them. So now, we're going to move on to a really deep dive into how these rules are built. (DESCRIPTION) Three concentric circles. The innermost circle reads, condition. The middle circle reads, requirement. The outer circle reads, provider message.
(SPEECH) So you can see in this diagram we have CDI notifications, CDI opportunities, and provider nudges, and what it takes to create each one of those separately. If we just have a condition, represented there with the green box, that would be a CDI notification. This is when there's been some clinical piece of information that we want to get in front of the CDI workflow because it may drive value for how they're working on that chart. So, for example, during the pandemic, we created some notifications regarding COVID. This would be like: this patient appears to have all these signs and symptoms of COVID, and we just thought that you would like to know; perhaps you need to investigate a little bit further. To add a layer of depth to that, we also have requirements. So a condition alone triggers the rule, and a condition plus a requirement gives us a CDI opportunity. These are rules that fire and then would be either fully documented or still an opportunity for documentation improvement based upon what information is found in the chart. And lastly, the next layer is the physician message; turning it on within the nudge workflow to get that in real time is sort of what differentiates a physician nudge from a CDI opportunity. (DESCRIPTION) Below the circles are three labels: value sets, concepts, parameters. (SPEECH) So here's the heart failure rule. And this is-- I've got it. One moment. What I talked about earlier was that we have to have that mention of heart failure, and you can see the presence of heart failure as the last bullet point. We absolutely have to have that within the encounter for this rule to fire. We also need some specific clinical indicators that heart failure may be present in this patient. So represented here, we have the less than or equal to 40% ejection fraction, and we also have evidence of BNP or proBNP greater than 500.
And we also have some evidence that maybe some heart failure medications were given. And it's interesting when you look at this condition, because you can see that we have, as Josh called it, our out-of-the-box parameter of less than or equal to 40% on the ejection fraction. But we also have customer customizations where some people wanted less than or equal to 50%, or even less than or equal to 55%. On this one, we're also looking for some temporalities, which is part of the NLU AI. We're trying to see if it's a past or present history of heart failure. And we have a customer constraint there to exclude past sometimes on these rules. So that's part of just getting this rule to fire in front of the CDI specialist or the provider; we have to have these pieces of information. To fulfill it and make it a fully documented opportunity, we need documentation of the type of heart failure and we also need documentation of the acuity of heart failure. We also include some temporality constraints, and we can do customization constraints like we've done with this document type underneath the requirement. (DESCRIPTION) Heading, Value Set. Subheading, Heart failure. Bullet point, Snomed C T, heart failure disorder. Under the bullet point is a subset of bullet points, acute heart failure, chronic heart failure, right ventricular failure, left ventricular failure. (SPEECH) Each of these pieces of this rule-- this nudge-- is maintained with the curation of value sets. And SNOMED CT is one of the ontologies that Josh mentioned that we use to capture the clinical content for these rules. So you can see heart failure and all of its descendants that would help resolve this rule. (DESCRIPTION) Heading, Concept. Subheading, Systolic heart failure. Bullet points, Snomed C T, systolic dysfunction; Snomed C T, heart failure, systolic failure.
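The trigger-and-resolve logic just described can be sketched roughly as follows: a mention of heart failure plus at least one piece of clinical evidence makes the rule fire, and the documented type and acuity resolve it. This is a simplified illustration with hypothetical field names; the real NLU reasons over much richer structures.

```python
# Simplified sketch: a rule fires when its condition (mention + evidence)
# is met and resolves when its requirement (type + acuity documented) is
# also met. Field names here are hypothetical.
def evaluate_hf_rule(encounter: dict, ef_threshold: float = 40.0) -> str:
    """Return 'not_fired', 'opportunity', or 'satisfied'.

    ef_threshold is the customizable ejection-fraction cutoff:
    40% out of the box, with some customers preferring 50% or 55%.
    """
    has_mention = encounter.get("mentions_heart_failure", False)
    has_evidence = (
        encounter.get("ejection_fraction", 100.0) <= ef_threshold
        or encounter.get("bnp", 0.0) > 500.0
        or encounter.get("hf_medication_given", False)
    )
    if not (has_mention and has_evidence):
        return "not_fired"          # condition not met; nothing surfaces
    documented = (
        encounter.get("hf_type_documented", False)
        and encounter.get("hf_acuity_documented", False)
    )
    return "satisfied" if documented else "opportunity"
```

Note how raising `ef_threshold` to 50 lets the same encounter fire for a customer with the looser customization while staying silent out of the box.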
(SPEECH) We also talked about concepts. We had a value set of the different types of heart failure that we were showing before, and now we have this specific type of heart failure, systolic heart failure, which may be further modeled to capture more information. So systolic heart failure is made up of concepts of systolic dysfunction, as well as heart failure, plus many synonyms: SHF, systolic failure, and even a German version of systolic heart failure. This type of work, where we add synonyms or model synonyms, is part of the daily work that the content team does to constantly curate and maintain these value sets to drive more value. (DESCRIPTION) Heading, Parameters. Bullet points, N L U temporality, past, present, future; N L U experiencer, family, patient, other; N L U certainty, certain, hedged, hypothetical, maybe, remote, ruled out, negative, undefined; N L U document type, clinical, lab, radiology, medication administration; evidence, ejection fraction less than or equal to 40%; customer less than or equal to 50%; documentation, acuity of heart failure; customer, O R right ventricular failure. (SPEECH) So now we're down to parameters. We're going to talk about parameters and all of the ways that we tailor the NLU to capture that context we were talking about earlier. We can look at temporality and experiencer and certainty and document type, and we reason over the encounter with all of this information to really provide that context. And again, here's another example of the ejection fraction less than or equal to 40, and customers who want to be a little bit tighter; they want that ejection fraction to be less than or equal to 50. And it's the same thing with the acuity of heart failure: we have a customer who preferred that documentation of right ventricular failure was enough to capture that.
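The parameters named on the slide (temporality, experiencer, certainty, document type) can be sketched as a filter over extracted mentions. The constraint values and mention fields below are illustrative assumptions, not the product's actual schema.

```python
# Sketch of filtering NLU mentions by parameter constraints. A mention
# only counts toward a rule when every parameter falls in its allowed set.
DEFAULT_CONSTRAINTS = {
    "temporality": {"present"},              # e.g. exclude "past" history
    "experiencer": {"patient"},              # ignore family history
    "certainty": {"certain"},                # drop hedged/ruled-out mentions
    "document_type": {"clinical", "lab"},    # reason only over these docs
}

def mention_passes(mention: dict, constraints: dict = DEFAULT_CONSTRAINTS) -> bool:
    """True if every parameter on the mention is within its allowed set."""
    return all(mention.get(param) in allowed
               for param, allowed in constraints.items())
```

A customer constraint, like excluding past temporality on a given rule, would just be a different `constraints` dictionary passed for that rule.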
So we have this ability to really curate and customize these use cases at many different levels. (DESCRIPTION) Slide title, Encounter reviews. Slide text, Review encounters. Bullet points, Per nudge, per customer and across customers. Search for grammar, language and N L U patterns/issues. Disambiguation - acronyms most common issues. A table with columns category, cause, and comments. Row, category, circulatory. Cause, incorrect evidence. Comments, template issue: O2 triggering instead of flow rate. Row, category, respiratory. Cause, other. Comments, False Positive. Disambiguation: 'pe' 'immature granulocytes' - (pulmonary edema). Row, category, circulatory. Cause, context. Comments, grammar: D V T unlikely, suspected versus ruled out. Row, category, neuro. Cause, context. Comments, Temporality: History of the following complications - stroke, not picking up historical. Row, category, Respiratory. Cause, Language. Comments, Disambiguation: Possible P E, Pulmonary embolism, versus Pulmonary edema. Row, category, Kidney and Urinary. Cause, Language. Comments, Disambiguation: C K D client using as C C/K G/Day at ped hospital. (SPEECH) So I mentioned, as a part of our process, we do encounter reviews. And this is where we really look at how the NLU is functioning within a specific set of encounters and a specific organization. We know that each facility and each provider may document a little bit differently, and that's really where doing these encounter reviews provides value. So while we do these encounter reviews, we look at the NLU and how it fires, and we find lots of different things. And we do this per nudge, per customer, and across customers. We're searching for grammar, language, and NLU patterns and issues. And we're also looking at disambiguation. Acronyms are a really common thing that we find and that we add to the NLU to really provide more context for individual customers.
One of my favorite examples of something that we found during a review was down there at the bottom with kidney and urinary. There was a facility, a children's hospital, that had CKD documented all over their medical records. What all of us know is that CKD means chronic kidney disease, except at this facility, it was most often used as cc per kilogram per day, because it was a pediatric hospital and that's how they did their fluid restrictions for each of their pediatric patients. And so we had to create some disambiguation tickets and enhance the NLU to not fire any chronic kidney disease rules or nudges that may have also been turned on by this facility. We do things like that. We have to look at that and then enhance the NLU and drive that value, not just for this single customer but for all customers. So these encounter reviews are an invaluable part of our content curation and maintenance. (DESCRIPTION) The slide with the three concentric circles of condition, requirement, and provider message. Heading, Primary Care Exam Summary 01/01/2023. Slide text, patient has pulmonary edema, heart failure with ejection fraction less than 40% and B N P 912. Currently taking 40mg furosemide. Heading, Primary Care Exam Summary 01/02/2023. Slide text, patient has been diagnosed with acute on chronic systolic heart failure. (SPEECH) So this probably looks familiar. But I wanted to talk about some clinical examples and how we do some testing to check that this rule is firing as we expect it to. So here on the bottom, we have an example of maybe a note from a primary care exam. And you can see that it says, patient has pulmonary edema, heart failure with ejection fraction less than 40%, and a BNP of 912, currently taking 40 milligrams of furosemide IV, twice a day. (DESCRIPTION) Lines from parts of the condition point to key phrases in a primary care exam summary.
(SPEECH) So if this piece of text were to be run through the NLU, we can see which individual pieces here would be captured for these different pieces of information. So we have heart failure, which I said was required for this rule to fire. We have our ejection fraction less than 40%, we have our BNP greater than 500, and we also have the 40 milligrams furosemide IV BID. Interestingly, it's not represented on this slide, but in order to get that evidence of heart failure medications, we've had to fulfill that substance administration MIM, and we had to fill all of those slots in order for that to fire for this certain piece of evidence. And you can see, we have our dose, we have our substance, we have how it was given, and we have when it was given, fulfilling all of that in the medical information model in order to trigger that piece of evidence for this nudge. (DESCRIPTION) The lines disappear. (SPEECH) So if all of those are met, we can then move on to what it would take to resolve the rule or make the nudge go away. So if the doctor documented, patient has been diagnosed with acute on chronic systolic heart failure-- we have now nudged them and we've said, hey, you said heart failure, and you said that they had an EF less than 40, and also they're on some medications, and their BNP is high. We've given them that provider nudge message that's like, can you please document the type and acuity of the heart failure that you've said this patient has. (DESCRIPTION) Lines from parts of the requirement point to a phrase in a primary care exam summary. (SPEECH) And when we look at this, we can see which portions of this rule are now resolved. So we have the acute on chronic systolic heart failure: acute on chronic, of course, matching that acuity piece that we need to capture, and the systolic matching that type of heart failure. So we do this testing locally, where we test our rules at the local level.
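The substance administration MIM described above, where every slot must be filled before a medication counts as evidence, can be sketched as a slot-filling check. The slot names come from the talk (dose, substance, how given, when given); the structure itself is an illustrative assumption, not 3M's actual medical information model.

```python
# Sketch of a slot-filling check for a medication-administration model:
# the medication only counts as rule evidence once every slot is filled
# from the text. Slot names are illustrative.
REQUIRED_SLOTS = ("substance", "dose", "route", "frequency")

def medication_evidence(mention: dict) -> bool:
    """True only when all required slots were extracted from the note."""
    return all(mention.get(slot) for slot in REQUIRED_SLOTS)

# "40 mg furosemide IV BID" fills every slot:
furosemide = {"substance": "furosemide", "dose": "40 mg",
              "route": "IV", "frequency": "BID"}
```

A note that mentions furosemide without a dose or route would leave slots empty and contribute no evidence, which is the behavior the speaker describes.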
And we also look at these within our encounter reviews to verify that things are triggering and resolving as we expect them to. So I think-- Let's see. (DESCRIPTION) Text, Q and A. (SPEECH) I think that was all that I had to cover today, and we can open it up to questions. Great. Thank you both so much. There was so much information that you got through, so thank you so much. We do have a couple of questions. The first one I have is: do the MIMs review the MAR or scanned information of drug administration, or does it need to be written in a provider's note? So from the NLU standpoint today-- and I'm talking about the functionality as it stands today-- we would capture the medications as they relate to the provider documentation. This year, and hopefully here in the next quarter or two, we will be adding in medication administration records to be able to capture that information outside of just provider documentation. So today, if you're using the technology or if you're implementing the technology, we would be capturing it from the provider documentation. But here in the near future, we will be looking to capture it outside of the provider documentation. The other piece I want to hit on, not directly related to this question, is the data sources: again, it is your provider documentation, and it is discrete laboratory data-- and I stress the discrete part because we are looking across an HL7 interface, so we're not looking just at the provider documentation. Actually, we are making a lot of changes as it relates to lab results, where we're looking not at provider documentation; we want to solely use the lab as the source of truth. So the discrete laboratory data, and the radiology results. We know our customers have a lot more requests out there, such as vital signs and flow sheets, and we want to get there, but today we are truly just looking at the provider documentation, discrete labs, and radiology results. OK, great.
Next question we have is: does this integrate with the Epic EMR? And let's just expand that to other EMRs. Yeah, so I don't want to focus on the word integration, because we are not in the Epic UI, or in any EMR's UI, today. When we think of how we nudge providers, we sit on top of the EMR. So there is a control bar that's sitting on top of the EMR, not in the actual EMR's UI. We are partnering with our EMR vendors to see if we're able to enhance that, but today, we sit on top. But we do take the information from Epic. This is for both Hyperdrive and Hyperspace. So if you are moving to Hyperdrive this year or even in the future, we are doing testing with some of our early adopters today as it relates to Hyperdrive, and we'll be able to support that. I don't want to limit ourselves to a specific EMR. If you are interested to know if we support your EMR, I would suggest just reaching out to our 3M sales team and we can help with that. But obviously, with those larger EMRs in the industry today-- Epic, Cerner, Meditech Expanse-- we do work alongside all of those today. Great. All right, we have another question about the actual nudges. Does the HF scenario generate both a provider nudge and a CDI notification? Yes, it does. If it's turned on in both scenarios, like Josh was talking about in the best practices, it will surface either a fully documented opportunity or just an opportunity to capture heart failure for the CDI specialist. And it will generate a provider nudge if we don't have that type and acuity already documented. All right, great. Another question we have is: can you create content outside of CDI? Yes. Let me take that. You want me to, or you? Sorry, Josh. Yep, you can. Go ahead. OK. Yeah, absolutely, we're always looking for areas to expand our content internally as well as with partners in the community.
We recently were given an example of an article that had a really interesting physician template that they use to do cancer screening. And that was brought to us as a way that we could maybe reason over the NLU to provide some more information about these cancer patients and capture something up front. And we're always looking for use cases like that to expand our content beyond just the CDI workflow. And Josh may have some good examples where we've done that in the past. Yeah, so I think an example where we have helped to identify patients beyond just CDI is, if you're a current 3M customer-- or I should say, if you're currently using our Engage One platform-- you do have access to our content that is available. And you may see some of the conditions that are there, such as identifying patients that may require hospice care a little bit sooner in their plan of care. So we do have different conditions available beyond just CDI. I always tell customers, don't assume that we're not able to do something; please give us your use case, because most likely we're able to develop it out. It just may depend on what application we find would be best suited for it. All right. I'm going to go back to the nudges. Can you set it up to nudge the provider based on specific sepsis criteria? For example, the different types of sepsis and those types of criteria. We can create customized rules based on any criteria that are very clearly defined. So if we have a very clear use case of the Sepsis-3 criteria and what would be expected to trigger and resolve, and the pieces of evidence, we can work within our parameters to build a rule to do that. All right, great. Next question is: does this encourage doctors to move these diagnoses to their discharge summary?
So I think the important part here is obviously capturing the specificity in real time while they're doing their documentation. We have heard this request from customers in the past: hey, is there a way to nudge a provider when a condition gets dropped? So let's say that you have a patient with a very long length of stay; they've been there for a couple of months. Something was documented early on in their stay and it didn't get carried forward in their documentation. Unfortunately, we're not able to capture that today. We're not able to say, hey, you've documented this specificity somewhere within the encounter and it didn't make it to this specific piece of documentation. Our hope is that the provider is adding the required specificity to their problem list and really using the problem list in a way that helps drive the patient care-- which, I know, we all know the problem list is a disaster and it doesn't always reflect the patient's current conditions. But whether they update their problem list or they're pulling some type of list forward in each of their notes, once the specificity is contained within the encounter, our hope is that it would get captured in a discharge summary. But unfortunately, today there's no way for us to nudge that provider and say, hey, you need to add this to your discharge summary. All right. Another question. Does your program utilize APR DRGs or mirror the VA's Elixhauser Comorbidity Index? I can never say that word. [LAUGHS] So the important thing to keep in mind as it relates to our AI is that we are looking at documentation quality; we're not looking at any type of financial impact. And again, this is feedback that we sometimes hear, especially from the CDI teams: this encounter is fully maximized, or, I wouldn't necessarily have queried for this condition based off of what they're seeing. The NLU doesn't understand financial impact.
We're really looking at the clinical concept and making sure that specificity is being captured. So we're not necessarily opening up a DRG workbook and seeing what they map back to. We're really taking from our customers and from our SMEs what the use case is to capture this specificity, and then it just works its way through the process: OK, if you capture this specificity, then you'll capture the DRG that you may be looking for. We're not necessarily developing out content as it relates to a specific financial model.

All right, great. And let's go ahead with one final question, so everybody has time to get on to their next meeting. Our last question for today: you mentioned customer requests, what's an example of some of those that you've gotten?

So as it relates to customer requests, I will really say they're all over the board as to what we could be looking to capture or what a customer may be looking to capture. You could be bringing to us a specific use case as it relates to maybe a program or initiative that your hospital is focused on, or you are looking at some type of data point that you're not capturing appropriately today. So let's say that you-- this may be a bad example or a low-value example. But you're looking to capture any time that a patient has any type of bleed. Maybe you're not focusing on a specific type of bleed like a GI bleed or a head bleed, but you want any type of bleed to bubble up to your CDI team to have that encounter reviewed. We're able to take that information and develop that out. If you think of the information that we need-- we need A plus B equals C as it relates to developing out the clinical concepts.
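To make the "A plus B equals C" framing concrete, here is a minimal sketch of how a trigger-and-resolve clinical content rule might be represented. Everything in it (the class, the concept names, the Sepsis-3-style criteria) is hypothetical and for illustration only; it is not 3M's actual rule engine or content:

```python
from dataclasses import dataclass

@dataclass
class NudgeRule:
    """A clinical-content rule: trigger evidence raises a nudge;
    resolving documentation clears it."""
    name: str
    trigger_evidence: set   # concepts that must ALL be present to fire (A + B)
    resolving_terms: set    # documentation that resolves the nudge (C)

    def status(self, documented: set) -> str:
        # Resolution wins: if the specific diagnosis is documented, no nudge.
        if self.resolving_terms & documented:
            return "resolved"
        if self.trigger_evidence <= documented:
            return "triggered"
        return "inactive"

# Hypothetical Sepsis-3-style rule: suspected infection plus organ
# dysfunction triggers; an explicit sepsis diagnosis resolves.
sepsis_rule = NudgeRule(
    name="sepsis-specificity",
    trigger_evidence={"suspected_infection", "organ_dysfunction"},
    resolving_terms={"sepsis", "septic_shock"},
)

print(sepsis_rule.status({"suspected_infection", "organ_dysfunction"}))  # triggered
print(sepsis_rule.status({"suspected_infection", "sepsis"}))             # resolved
```

The idea is simply that clearly defined pieces of evidence (A plus B) raise a nudge for a more specific diagnosis (C), and documenting that diagnosis resolves the nudge.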
So if you can give us what you're looking to capture, either as the actual use case or you can just tell us what you're hoping to capture, our team, with the different areas of subject matter expertise that we have, is really able to put something together and present that back to you to make sure that it's meeting your need.

And Josh, we're going to ask you to circle back to the prior question, just some clarification about the comorbidities: they don't currently use APR DRG. So they're not really asking a financial impact question, more of a quality impact. Does that make sense?

Yes. As it relates to quality, or even if you aren't using APRs, I would say you could use our content in really any environment to help capture the needed specificity. And if there's something that you're looking to capture that we're not focused on or need to focus on, we can definitely have those conversations and see what we can do in a way of partnering with our customers to really leverage the content to capture what is needed.

OK, great. (DESCRIPTION) That's a wrap! (SPEECH) Like I said, this has been a lot of great information and we had a lot of great questions come in. So we really appreciate your time today. A couple of questions did come in about the recording, or whether there is a recording. There will be. After today's session, it'll take us a little bit of time, but we will get this posted onto our website in the next couple of weeks. If you would like more information about these solutions, there is a Learn More button within the portal. Let us know if you would like some more information and we can certainly contact you. (DESCRIPTION) Slide title, 2023 3M Client Experience Summit. Slide text, The future is now. Let's go. May 22 to 25, 2023, Atlanta, Georgia. A description includes a venue at the Westin Peachtree Plaza Hotel in downtown Atlanta from May 22 to 25, 2023. Button, Learn more here.
(SPEECH) And we also encourage you, if you are a customer, we would love to have you join us at our Client Experience Summit in May in Atlanta. There are going to be a lot of sessions. And Josh, I don't know if you wanted to talk about some of the sessions that we would have at CES, but if you are a customer, we definitely encourage you to join us.

Yeah, so at CES this year, we do have a lot of current customers speaking on behalf of their use and experience of our AI in their different workflows, whether it be provider or CDI workflows. As well, this year, if you have any physician leaders that are interested in learning about our clinician solutions, this is the first year at CES that we are going to have a provider-focused track related to the different clinician solutions applications, and the AI technology is included in that track as well. So if you have physician leaders or any physician liaisons involved with your program that you think would benefit from attending CES, please reach out to the team and let us know, as we are looking to have an interactive physician group. For the CDI and quality teams, there are different topics as it relates to AI being covered, not only by 3M teams but also customers.

Awesome. Thank you so much. And just a couple of last questions that came in around the certificate of attendance: you can download that out of the resources section. Once the session ends and you complete the survey, you can't go back and redownload it, so take a minute to download the certificate of attendance. You can utilize it to request CEUs. These are not pre-approved CEUs, but you can utilize that certificate of attendance to request them from an accredited association. And again, we will have this posted on our website in the next couple of weeks. So again, we really thank you for joining us today. (DESCRIPTION) Text, Thank you. (SPEECH) Please fill out that survey.
We'd love to hear how we did. And we will be having another CDI innovation webinar, I believe the first week of May. So be on the lookout for that registration, and we'd love to have you join us again. So thank you both to Dannie and Josh. And we hope you all have a great day. Thank you.


      NLU, clinical content and documentation integrity: A closer look

      • March 2023
      • Join experts from 3M’s clinical content team for a closer look at the engine that powers solutions like 3M M*Modal CDI Engage One. Geared for a non-technical audience, this session will include an overview of Natural Language Understanding, including a review of clinical content rules and how they’re built. In addition, the session will explain the role clinical content plays in helping health systems take significant leaps forward in clinical documentation integrity—while simultaneously improving the physician-patient experience.
      • Download the handout (PDF, 1.8 MB)

    • (DESCRIPTION) Information slide. On24 Platform. Media player, livestream, 320x240. Slides, 640x360. Resources. Have a question? Let us know here in the Q&A window. Want to know more about our products? Ask an expert. Meet our speaker in Speaker Bio. We want to hear from you in the survey window! Copyright 3M 2022, all rights reserved. Title, 3M logo. 3M CDI Innovation Webinar Series. Boost your CDI program by leveraging impactful, quality-based prioritization. December 2022. (SPEECH) Good afternoon and welcome to our final CDI innovation webinar of the year. It's hard to believe that we are heading into 2023. With us today, we have Stanford Health, who will be talking about their CDI program. (DESCRIPTION) On24 Webinar Platform, for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. The information presented herein contains the views of the presenters and does not imply a formal endorsement for consultation engagement on the part of 3M. Participants are cautioned that information contained in this presentation is not a substitute for informed judgement. The participant and/or participant's organization are solely responsible for compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in the presentation. 3M and the presenters disclaim all responsibility for any use made of such information. 
The content of this webinar has been produced by the-- 3M and its authorized third parties will use your personal information according to 3M's privacy policy (see Legal link). This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) Before we get started and I pass things over to our moderator today, I just wanted to go over a couple of housekeeping items about the On24 webinar platform. It is a web-based platform, so we do not have a dial-in number. We recommend using Google Chrome and closing out of VPN and multiple tabs; that will help with bandwidth. If you are having any issues, check your speaker settings. You can also do a browser refresh; that usually takes care of any glitches you might be having. There are several engagement tools within the platform. In the media player, we do offer closed captioning, so if you need that function, it's in the media player. We also have a Q&A section and we encourage questions throughout, so go ahead and put those in there and we'll get to as many as we can at the end. In the bottom left-hand corner is our resources section. There is the certificate of attendance for today. You can download that certificate and submit it to either AHIMA or ACDIS to obtain CEUs. The presentation for today is also in that section. And at the end, we always appreciate you letting us know how we did. There is a survey as well, and we'd love to hear your feedback. We are recording today's session, so in the next couple of weeks we will have that available on our website. So if you would like to go back and listen in again, that will be on our website soon. (DESCRIPTION) Boost your CDI program by leveraging impactful, quality-based prioritization. Mark LeBlanc, CDI Manager, Stanford Health Care. Michelle McCormack, CDI Director, Stanford Health Care. (SPEECH) So let's go ahead and get started. Adriana Harris from 3M is going to be moderating today. 
And she's going to welcome our speakers. Adriana? Yes. Thanks, Lisa, and thanks, everyone, for joining today for our webinar entitled, Boost Your CDI Program by Leveraging Impactful Quality-based Prioritization. (DESCRIPTION) Slide, About our presenters. (SPEECH) Our speakers today are Mark LeBlanc and Michelle McCormack. Mark has 40 years of health care and 17 years of CDI experience. His MBA in healthcare administration and his vast clinical experience as a registered nurse assist him in supporting the CDI team to meet their personal, professional, and organizational goals. (DESCRIPTION) He has extensive experience in change management and holds a Healthcare Lean Certificate. He is active in A C D I S, H F M A and A H I M A. (SPEECH) Michelle has been the CDI director at Stanford Health Care since 2013. She earned her associate's in nursing, her bachelor's in nursing, and her MBA in healthcare management. She has experience in CDI dating back to 2005, following clinical nursing experience in a variety of specialties and settings. (DESCRIPTION) She has led successful CDI departments in academic medical centers, community hospitals, and multi-hospital systems. Michelle is also a former National A C D I S Advisory Board Member and a current National A C D I S Leadership Council Member. She holds certifications in CDI, Coding and Revenue Cycle. (SPEECH) So with that introduction, I'll turn it over to you two to give us some background on Stanford, and then we'll get into our questions. Great. Thank you so much for having us today. Like you said, we'll jump right in, right, Mark, with talking a little bit about Stanford. (DESCRIPTION) Infographic, Stanford Health Care. Stanford Health Care seeks to heal humanity through science and compassion, one patient at a time, through its commitment to care, education and discovery. 
Stanford Health Care delivers clinical innovation across its inpatient services, specialty health centers, physician offices, virtual care offerings and health plan programs. The only Level I trauma center between San Francisco and San Jose. Life Flight transports 500 patients annually. 49 operating rooms, 613 licensed beds, 67 licensed ICU beds. 371 solid organ transplants in 2017. Kidney transplant patients, 100% 1-year survival rate in the last 2 years. 1,970 heart transplants performed with a 92.7 percent 1-year survival rate. Admissions, emergency room visits 77,425, Discharges 27,167. 1.8 million outpatient visits systemwide in 2018. Mission, to care, to educate, to discover. Vision, healing humanity through science and compassion, one patient at a time. Stanford Hospital, 500 Pasteur Drive, opened for patient care in 2019 with 824,000 square feet of space. Our people, 14,143 employees, 2,902 medical staff, 3,194 nurses, 1,412 residents and fellows. 98.4% of SHC physicians have a star rating of 4.5 or higher. 93.4% of SHC nurses have a BSN, MSN or Doctorate degree. Translators & Interpreters. We offer Spanish, Mandarin, Cantonese, Burmese, Russian, Vietnamese and American Sign Language, and access to as many as 200 languages through phone interpretation. 8 all-time Stanford Medicine Nobel laureates. 28 dogs in the pet-assisted wellness PAWS program. Over 1,000 volunteers provided 62,800 hours of service. Awards & Recognition. Stanford Health Care was first designated as a Magnet hospital in 2007 and was re-designated in 2012 & 2016, submitting documents this year, 2020. Magnet recognition is a prestigious award developed by the American Nurses Credentialing Center, A N C C, to recognize health care organizations that provide nursing excellence. Fewer than 7% of US health care organizations achieve this honor. Vizient Quality Leadership Award 2019 Winner, ranked in the top ten percent for both inpatient and ambulatory care. 
The Stanford Stroke Center is designated as a comprehensive stroke center, providing the most advanced and rapid stroke care for patients nationwide. Best Hospitals US News & World Report Honor Roll 2019-2020. Leapfrog Top Teaching Hospitals 2019. Named one of the nation’s best teaching hospitals by the Leapfrog Group, a top health care watchdog organization that evaluates providers based on rigorous quality and patient safety standards. Stanford Health Care is part of Stanford Medicine, a leading academic health system that includes the Stanford University School of Medicine, Stanford Health Care, and Stanford Children’s Health, Lucile Packard Children’s Hospital. Stanford Medicine is renowned for breakthroughs in treating cancer, heart disease, brain disorders and surgical and medical conditions. (SPEECH) So Stanford is a major academic medical center, a Level I trauma center. We do all transplants. The only service, I think, we do not offer is a burn unit. So that is the one area, I think, that we don't have. We have a lot of residents. You can see lots of residents and fellows. Lots of research. We have eight all-time Stanford Medicine Nobel Laureates, and I think that might actually be nine now, because I think we had one this year. And then we're always looking at how we compare to other organizations. So we've listed some of the awards and recognition. That's something that's really paramount at our organization as well. Mark, anything I forgot about us? Yeah. As part of our Stanford Health Care group, we do have a facility we call Tri-Valley over in the East Bay, which is a community-based hospital, and they do a variety of-- they have OB. They have PEDs and Neo, as well as the standard community-based hospital services. So we have a little of both. (DESCRIPTION) Org chart summary. Nine CDI Specialists at the top of the chart break down to six CDI line leads below that. 
Mark LeBlanc is the Manager, with two CDI quality and outcomes leads under him. Next on the chart is Michelle McCormack, who is the Director. (SPEECH) Yes, good point. So this next slide is a little bit of an eye chart. But I know it's near and dear to Mark's heart, so I'm going to let you walk through this org chart. Mark? Yeah. I think, for me, over my career and especially in the last 10 years or so, it's become really obvious to me that for change management, for innovation, and for setting a team up for success, setting a culture is really important. And so Michelle and I have similar visions, and so we were able to create this upside-down org chart. And it's one that we're very proud of. We are at the bottom. And I can proudly say I enjoy being down there, because I feel like we are there to support all the team that's doing the work. And if they're all successful, then we become very successful as a whole and as a leadership group. So we have a couple of provider champions, one at the academic medical center and one at our community-based facility, who help support us. And we work very closely with our coding department. And we all report up through revenue integrity, which is part of the revenue cycle family at Stanford. We have a couple of quality and outcomes leads that we utilize for a lot of the work around mortality, PSIs, and HACs, and other projects to improve our outcomes. We have two education leads who help support providing education for the team, as well as individual staff members, as well as bigger educational initiatives across the organization. I think one of the keys that we've learned over the years is, we have service line leads, and these are people who have specific service lines that they support and provide education directly to providers. They help provide data. They help partner on improvements in documentation directly with those service lines. 
And then finally, we have our CDI specialist team, which is a phenomenal group of frontline specialists that do all of our prioritization reviews. They do the majority of our queries. And they are definitely advanced in the type of work that they do. And all of that makes up our CDI program here at Stanford. We also have a career ladder, which goes along with this as well. Michelle? Yeah, great. Yeah, I think you covered everything. The only other thing I would add is that our two physician leads are also supported by our volunteer physician champions. So we have at least one volunteer physician champion from every service line who also partners with those two providers. So I think that support has been really helpful too. Want to talk a little about our Encompass journey, Mark? (DESCRIPTION) Slide, 3M 360 Encompass System Journey. 360 Encompass Go-Live November 2018. Passed on prioritization, One work list, Specialists with unit assignments, Sorted by units, Shared accountability, Specialized reviewers, Only covered specialized units, PTO coverage by team, Final reviewer, Completed final CDI review, Validated impact of all queries on case. Logo, Journey, a path to success. (SPEECH) Sure, I spoke at the summit this year, and I think I introduced myself as the person who said no way to prioritization in the very beginning. So back in 2018, we went live with 360 Encompass. And prioritization was there as a function, and they explained it to me and really tried hard to get me to be an early adopter. And I will say that I passed. I just felt like we needed to be able to-- it was a big change for the team to go on 360. And we were really trying to hone in on our work. And at the time, they were all unit-based assignments. The team was very siloed in the type of work that they did. And so we were trying to get moved to a more team approach. And I was struggling with the live prioritization offered at that time. 
And I think we all can see, over time, as they improved the product, it definitely became a much better thing for teams to use. And we really wanted to promote shared accountability. So we were trying to figure out how to get the team to be more involved together and own the work as one, and not own an individual part of the work and feel like they weren't part of the whole team. Anything on this? Yeah, I really just wanted to let you tell on yourself about how critical we were of prioritization; you really were the person who was the most cautious about it. But I think it was very beneficial. I think you worked a lot with 3M on your concerns. And you did a lot of investigation beforehand. And I think that's one of the reasons why you are probably a really good reference for that system, because you're a convert. You were not really on board at the beginning. So I appreciate you telling on yourself about that. (DESCRIPTION) Text, What are you leveraging to maintain productivity while asking your teams to do more with the same amount of staffing? (SPEECH) [CHUCKLES] Yeah, and I think, when 3M listened, and then they went back, and when they came back and tried to offer it again, all the concerns I had were gone. They had actually gone back and worked on all of those and more. Yeah, great point. That's great. So we'll start with our first question. What are you leveraging to maintain productivity while asking your teams to do more with the same amount of staffing? (DESCRIPTION) Slide, Setting the stage. Culture. Quality focus, Continuous quality improvement, Collaborative focus, Trust in the systems, Transparency metrics, Accountability focus, Continuous feedback, Utilize notifications. Change management. Trust, Goal setting, Education, Support, Open Communication. (SPEECH) That's a big one, right? I think that [CHUCKLES] is something that's going to echo across every organization, every company for that matter: to do more with less. 
And I think we have a great team who is really good at being creative and sharing their thoughts on how we can improve our processes and our technology. And I think Mark has done a really great job of creating that culture and enhancing it with his leadership. And so I would love for you, Mark, to talk about your strategy working directly with those teams. Yeah, I think, in the beginning, it was, as we said, that we're all there to help support each other, but each level would then provide support in its own way. The people that we want to make sure are the most successful are our frontline staff. And so I think they realize now that that's what we're all doing behind the scenes and on the org chart, the way we've developed it. And so we're always listening to feedback from them. What do they think could help improve their daily work? We're very transparent with the teams about the metrics and where we are, and where we, as leaders, see potential areas that we are concerned about, and we get their feedback on what they think might be some of the issues or barriers they're facing when we're not hitting those metrics. And that's really been a big thing, the transparency and developing that trust. We do set goals. And I think most people enjoy having some goal to be able to test what they're doing against and see where they are. Education is a big part of our program. In fact, we're missing our education session this morning, which is collaborative with our coding partners. And we provide education for the staff. We encourage them. And we've been very lucky that we've had the same staff for five-plus years. And so that's a huge-- I have to admit, it makes doing all of these things and asking for more easier, because people have been around and they want to always be challenged and learn. So I think that's been a really good thing. Yeah, I agree. 
And I think, especially as we moved to prioritization, reinforcing this and being really thoughtful about how you include everybody really led to our success story with it. (DESCRIPTION) Graphic chart, Start. 3M 360 Encompass System Standard to Prioritization. Organizational initiatives ensure that the system was being used to the maximum capacity. Team decision to move from specialized to generalized specialists. Leadership desire to make sure that resources were being utilized efficiently. Organizational focus on quality. Organizational focus on provider satisfaction. (SPEECH) So let's talk a little bit in detail about our journey with prioritization initially and how we took our vision, as you said, we have this big vision, and tried to tweak the prioritization to meet those needs. I think you're right that we started with those organizational goals and the initiatives that were in place to test the system, and those were the first areas that we focused on. But I think you were pretty strong in your opinion that we start with the 3M settings and not do a lot of customization right off the bat. But you had to set up our specialists to look at their work a little differently to make this successful. And I know that move from being specialized on a unit or with a certain service line was something you and I were careful about. But in the end, it was the team that said, we want to move to that anyway, right? That was something they had wanted to do. Yeah, I think it was really key. Well, first of all, 3M said-- I said, I want to talk to some of your customers who've gone on prioritization and hear what-- I don't want to make the same mistakes. And the person that I spoke to was awesome. We spent a lot of time on the phone with her team. And then she said, one thing I will tell you is, go out of the box. Take the settings out of the box. 
Try not to make too many tweaks or personalized settings for your organization unless there's something really big and pressing. And then continue to work on honing in on where you want to make those changes. Because, she said, we did too much in the beginning and we didn't really know how what we did impacted the outcomes. So we really took that advice. And I can tell you, over the last few years, that's really helped us. When we make a change in prioritization, it's obvious. We see the difference in how it's impacting our work. And yeah, I was really surprised. I have always liked generalized. I always enjoyed having different types of patients and different types of reviews. But it was the team, as we talked about the functionality of prioritization, that actually saw, well, there's no way we could have one work list, and use prioritization points, and still keep it separated. But we did that. And we have provided them an avenue where the specialized people-- so if I had a case on a neuro floor, I knew who the neuro person was from the past. And I could reach out to that person and say, hey, I need some help understanding how to really do a good review on a neuro case. And I think giving them that freedom and encouraging them to use each other really built them into a really strong team. And they are on, what we call-- well, they are on chat all day long, helping each other, asking questions, posing things. And it's really amazing to watch them do that work. I agree. Yeah, it's been really fun. And I think it's also-- just using that prioritization and having those discussions about what we're going to change and what we're going to add has really helped the frontline understand the impact of all of those different quality indicators and measures. That was a big focus for our program from the very beginning. 
But I think it was hard to get that to really filter down to the frontline staff and to learn all the details. And that tool helped us nudge them a little bit more down that path. (DESCRIPTION) 3M 360 Encompass System Standard to Prioritization project kick-off 2020. Aggressive timeline. September kick-off, November 1st soft go-live, Mid-November team training, December 1st team go-live date. 3M standard (out of the box), Minimal setting changes. Super users, Proficiency in 3M 360 Encompass, Understand the "big picture". Start. (SPEECH) Oh, yeah, for sure. I mean, they are already doing PSI and HAC reviews concurrently when something fires. Because the frontline staff do assign codes. And they do assign a DRG. And they do know what's firing and what's not firing, and how to work on it. They'll send queries if needed. And so that's been really fun, to watch them grow and do even broader reviews. Yeah, definitely. I just want to point out on this slide, we talked a little bit about how we took the 3M standard out of the box. But I wanted to address the fact that once you decided to go live, you didn't waste any time, right? We're going to get this here. We're going to put this in. And we're going to start to use it. I think the other really important aspect of this, and really any technology that you're going to integrate into your process, is to identify some super users. And you went with the frontline staff. And you said, hey, we need some super users. Who's interested? Who wants to do it? And those super users are really the people who drive these changes. And they're on the calls with 3M. And they're making decisions or recommendations to 3M about what we need to do and then bringing it back to you. And that, I think, is really empowering for a team who understands their goal and how they connect to the organization. (DESCRIPTION) 3M 360 Encompass System Prioritization Journey Begins. October 2020, System build, Workflows documented. 
November 1, 2020, Soft go-live with super user group, Daily check-in, Weekly 3M/IT meeting, Utilized old worklist and new worklist. Mid-November 2020, Team education, Super user led. December 1, 2020, Team go-live, Command center, Daily check-in. Quote, The journey of a thousand miles begins with one step, Lao Tzu. (SPEECH) Yeah, they're constantly helping us with prioritization. And they've become stronger. And we meet monthly and review requests that come in around issues that people are having and see if we can change prioritization scores to move cases around, and move them up, and change initiatives, and stuff. So yeah, they're-- and it's amazing because I don't say a whole lot on the 3M calls anymore. They really lead the conversations with the technical people, and ask the questions, and they come prepared. It's great. Yeah. And I know we added this slide here just to give a little bit more detail in case folks were thinking about how to go live with prioritization. What are the steps, the specific steps we took, and the timeline. I think, the command center was a really interesting thing we did at go live but I don't think we really needed it. I think we had it there but nobody joined. And we were just sitting there chatting with each other all day, which is fine. I mean, that's the go live you want, right? One where you don't need to intervene. And I think the soft go live with the super users was key, as well, because those people, they looked at the old work list and the new work list, and they validated that the new work list was working. And then when we turned off the old work list, they could tell the team, nope, we need to do it. It works. We've been doing it for a month. And we see it. We know it works. So that was another key, was giving them the opportunity to validate that we weren't going to be missing things in the new world. Yeah, great point. All right. 
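As a rough illustration of the prioritization approach discussed above, where cases earn points and the team periodically tunes scores to move cases up or down the work list, here is a small sketch. The factor names and weights are entirely hypothetical, for illustration only, and do not reflect 3M's actual 360 Encompass prioritization model:

```python
# Illustrative worklist prioritization: score each encounter from
# weighted factors and review the highest-scoring cases first.
WEIGHTS = {
    "query_opportunity": 40,    # e.g., NLU found undocumented specificity
    "psi_or_hac_risk": 30,      # potential patient safety indicator / HAC
    "mortality_review": 20,     # expected-vs-observed mortality concern
    "long_stay_no_review": 10,  # encounter not reviewed recently
}

def priority_score(factors: set) -> int:
    """Sum the weights of the factors present on an encounter."""
    return sum(WEIGHTS[f] for f in factors)

def build_worklist(encounters: dict) -> list:
    """Return encounter IDs sorted highest priority first."""
    return sorted(encounters,
                  key=lambda e: priority_score(encounters[e]),
                  reverse=True)

encounters = {
    "E1": {"long_stay_no_review"},
    "E2": {"query_opportunity", "psi_or_hac_risk"},
    "E3": {"mortality_review"},
}
print(build_worklist(encounters))  # ['E2', 'E3', 'E1']
```

In this framing, the monthly tuning the speakers describe amounts to adjusting the weights: raising a factor's weight moves every case carrying that factor higher in the list.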
And our next question is, how did the CDI and coding departments get a seat at the table for the discussion regarding quality outcomes? (DESCRIPTION) Slide, Engagement Strategy. Meet them where they are, unmask the models & algorithms, analyze for all opportunities. (SPEECH) Oh, yes. We're still trying to get seats at the table. No, it's a challenge. I think the bigger your organization is, the tougher it is to understand all of the places you need to have a seat at the table. There's lots going on, and a lot of overlapping efforts. And so I think one of the big things we did was really-- we had already been working with our quality team on specific reviews. So it was really working with them to understand where their leaders were at. What did they understand about the quality outcomes we were focusing on, and how did CDI, coding, and documentation all fit into that? Once we figured out where they were at, we did a lot of work to unmask the models and the algorithms. We were very open with that. We showed them how the different areas and different codes impacted the models and our scores. And then we analyzed the data, of course, but we analyzed it for all opportunities. So in our organization, as a CDI department, we are focused on complete and accurate medical records. So when we touch a case, even if we're touching it because we want to look at a specific outcome, we are looking for all opportunities. We see every medical record as an educational opportunity and learning opportunity. And I think that approach has been eye-opening for some of our quality leadership, who may have a focus on a few different quality outcomes or the meetings about a specific outcome. But we're bringing up other opportunities as well. I think that's been really helpful for us in terms of engagement. And we have to repeat this a lot. We are years into this now. 
And I think, we're still reminding them about how the models work, and what are the barriers that we have as an organization outside of the documentation. (DESCRIPTION) Steps to Success. Data analysis and validation. Need to agree on the performance measurement outcomes, Need to trust the data and have a process for validation. Goal and Messaging Alignment. Alignment of goals, SMART - Do we agree? Mutually beneficial - W I F M, Balance everyone's needs - patient-centered and mission-minded. Performance Transparency, Dashboards, Distribution and Access. Advocacy, Think big! Inset box, Patients over Paperwork. Reduce unnecessary regulatory burden to allow providers to concentrate on their primary mission: improving patient health outcomes. (SPEECH) So steps to success for all of this engagement with our quality partners. I think, Mark mentioned our transparency. That is one of our mantras in CDI, as well as, just, we're going to analyze the data. We're going to be really open and transparent with it. And we're going to validate it. We're going to validate the impact that we have. One of the biggest, and I think most challenging, aspects was for all of us to get on the same page about goals. What should our goal be for different aspects? And really talking about who is responsible for those different elements. So obviously, documentation impacts expected outcomes and exclusions, right? But we didn't want to leave out the fact that we do probably have opportunities to improve some of our care quality. Or there's other data we need to look at to see why certain things are happening from a clinical perspective. And we needed to start to trust that data so that we could have that conversation. We could align those goals. And then, advocacy is another-- oh, it's such a-- been such a challenge with COVID and the pandemic. 
But we really started to make some big steps in terms of advocacy with CMS around some bigger coding changes, some challenges that providers have from a documentation perspective. And a little bit of progress there, but now, it's been a bit on hold due to all of the pandemic. And all of the focus on the clinical work, which absolutely needed to happen. That was the right thing to do. (DESCRIPTION) Going Beyond Traditional CDI Efforts. RCC Improvements and Reporting, Integrated documentation tools and strategy, Ongoing since 2017 (RCC - managed by CDI). Use has become a largely consistent and standard practice for providers, Meaningful use is strong and has influenced improved capture and performance and reduction in queries. Admission Status, Process, standardization and governance, Project in progress with updated completion goal of 4/30/21. Provider Experience, Technology optimization, Reduce provider burden. CMS Advocacy, Pathology report and pressure ulcer code capture guidelines - Potentially Industry Impacting (SPEECH) Yeah, and I would say also, with our quality partners, that partnership really has grown over the last couple of years. And just getting them to understand all the work that's been done by CDI and coding around reviews, and queries, and capturing, and multiple second level reviews in certain instances. I don't think people understood the amount of touches that sometimes some of these cases get. And it was eye-opening for them. And I think it helped them to understand the work we do and how it is supportive of what they're trying to accomplish as well. Yeah, absolutely agree with you. I think we also wanted to call out some of the things that we did that were a little bit outside of what we would maybe term traditional CDI. And taking those steps with the quality teams and the quality leadership really helped to build that trust. So one of the areas-- so we benchmark with Vizient. We're a Vizient member. 
And so one of the aspects that was a big concern was the accuracy of admission status. And so we led the efforts to create a process standardization, a governance for admission status. And that was a big lift for us. And really, we had to engage a lot of other people that were not CDI. They were not coding. We were pulling leaders from the clinical side, from PSS to create this governance. But very successful. I think the other thing we've done a lot of work on is our documentation tool, the .RCC. That tool takes a lot of our service line leads' time; it is a preemptive documentation tool to help the providers select conditions they might not think about documenting on their own. And then the other aspect was just our partnership with the providers. The more we work on all of these tools, the more we partner with other departments and on other initiatives, the more burden there is to potentially add to the provider. So we also rolled out a provider experience metric. And we have a big effort and a big focus on technology optimization for the provider. So I think those efforts really went a long way to partnering with that quality team. All right. And our next question, how do you get your frontline staff to incorporate quality into their daily assigned tasks and workflow, thinking beyond the basics like financials, SOI, and ROM? (DESCRIPTION) Circular Graphic, PSI/HAC Review Integration. Multidisciplinary pre-bill review of PSI/HAC cases, A E S edit workflow, Focused education regarding PSI/HAC and exclusions, Staggered rollout of concurrent PSI and HAC reviews by CDI staff, 360E tools, Ongoing feedback and accuracy scores for CDI staff, notification processes in 360E, Organization focus on quality. (SPEECH) It's easy for us here. But the way we did it was, we made a project out of wanting to incorporate PSI and HAC reviews, and moving it from a retrospective type effort to a frontline concurrent effort. And so we created the workflows. 
We had input from the specialists. We had go-live dates. We had lots of education. We rolled it out one or two PSIs at a time so that they could learn how to-- The tool actually tells us when it's firing and alerts us. So it was just a matter of people learning how to incorporate that. Just like we're reviewing edits. Also looking at when PSI and HACs are firing and being able to look at that work. And yeah, we give lots of feedback. Yeah, I think, for me, having been a little bit more removed from the group, it's just seeing their interest in this. And just how they wanted to learn more. They wanted to learn more about what are other things we should be looking at outside of just the PSIs and HACs. And what else are we looking at? And how does Healthgrades work? And how does-- they're really curious now. And that curiosity is fun. It's also tough for us, as leaders, because they push us. They push us really hard to take them to the next level and to provide them with these new areas to explore. So really fun for us to watch that happen. And also to acknowledge all the work that the quality and outcome leads were doing prebill. There was a lot of discussion about how-- wow, you review all of these. And you must have sent a lot of queries on some of these because I didn't realize that this was an exclusion. I think that was fun as well. And how has effective collaboration been created between those teams, CDI, coding, and quality in establishing aligned goals and criteria? (DESCRIPTION) Graphic, Multidisciplinary Collaboration and Goal Setting. Accuracy is in the middle circle surrounded by record, performance outcomes, code capture, reimbursement. (SPEECH) I think it's a good question. I think it's something we keep trying to grow and enhance. I think the way we were able to get everybody focused is to put the accuracy of the medical record at the center, at the heart. 
And then, talk about how we are all coming at it from different perspectives. But the key is still the accuracy. The accuracy of the medical record. The accuracy of the code capture. Making sure that the patient has an accurate medical record for continued care. And then, everything else flows from that accuracy. So our performance outcomes should fall where they're meant to fall if everything is accurate in the medical record. And I really think we still have times where we are challenged and we disagree about things. And then we just come back to that accuracy. And if we have a question, I don't know what you think, Mark, but I feel like we have a really low threshold for queries. And I feel like, as leaders, we are often like, yeah, just send a query. I don't know. We shouldn't argue about it, just send a query, right? [CHUCKLES] Yeah, I mean, I think that has become the mantra a lot of times. It's like, after enough people have talked about it, if we're all having this long of a discussion, then somebody else is-- another set of eyes is going to have the same question. So just clarify it, right? Just get the query out there. And I think we had a luxury in that we sat on the same floor as our quality folks so they were in cubes not far from us. And so there was a lot of shared discussions just because of where we sat. But I think that going live in 2018 with 360, we included them because we wanted to have a multidisciplinary process within the system. And so they were there with us during training. They were there as we developed what that workflow would look like, what their part was in doing that workflow. And so, especially since the pandemic when we all went home, it really has made that collaboration a lot stronger and easier. Yes, that's true. I'd forgotten about that. It's been a while since we've been in the office. But yes, that was really helpful for us at that time. 
And I think, also, it was interesting to me to learn that even folks who have really focused on quality for a lot of their career didn't understand the details of the coding behind it. Because they're not coders, right? They're looking at the clinical quality perspective. But they are very interested in learning about that. And they have become experts about that as well. And so that's been really fun for us to share with them as well. And what are some of the shared KPIs (DESCRIPTION) CMS, mortality, etc (SPEECH) amongst the specialties, like CDI, HIM, and quality that you're using? (DESCRIPTION) Operational Metrics. Review Rate, Review Timeframe, Meaningful Reviews. Query Rate, Concurrent vs Retro, Meaningful Responses. Match Rate, Final CDI Review Impact, Reason Code Definitions. Accuracy Rate, Code set definition, Clear, aligned expectations. Query Response Turnaround Time, Defined escalation process, Shared Accountability. Multidisciplinary Reviews, P S I/HAC, mortality, AWOP and other internal. (SPEECH) Yeah, so we broke this up into one side for operational and one side for more outcomes focused. I would say that all of these are really shared goals between all of our departments. Although CDI may have more accountability and responsibility for different metrics, and coding may be more focused on different metrics, or it may be a direct result of their work, I think we are all keeping an eye on all of these. We have very open communication. Our dashboards are published. Everybody can see them in the organization. They can go to the website and look at them. And I think just being transparent about that has helped people adopt some of these metrics as shared metrics for them. I think, one of the really important efforts that I just want to call out again, Mark, I'm going to put you on the spot here is, the code set definition in our accuracy rate. 
So I think one of the areas where we found our opportunity was in our matches or match rate with coding. And one of the things that we started talking about pretty early on was the fact that our expectation was deeper than the DRG. And how do we establish or change that, I guess, from a DRG match rate to an overall match rate. An overall match, and what does that mean, and how do we define that. Do you want to share a little bit about that journey? I know we're not quite done with that journey yet but you want to share your thoughts on that? Yeah, I totally agree. I mean the system will just take us out of the DRG level. But the final CDI review process they do, the CDI specialist is accountable for looking at the entire final code set as compared to what their code set was. And if they see discrepancies in POA status, or they see codes missing that they feel meet the definition, then they are encouraged to raise that question with the coder and have a discussion to see if they can get some of those things resolved. And we have an escalation process that goes on up through all the different levels if need be. And we're starting to see more and more where the coders are actually sometimes initiating some of these. I'm going to-- I think this should be POA, or no. Or I don't think this code meets the definition. So we're continually working on both sides. And it is a process and something that we'll continue to work on. The other one is the escalation process for our query response. So the organization has a two-day expectation for responses. And in CDI, the service line leads do all of the escalations of queries that aren't answered in that two-day period. Because they have the relationships with all the providers. And so right now, they do all of that. And then they notify whoever the query author is when it's responded to. And so it helps provide some efficiency around that as well. 
And we look at it every day in our huddle. How many of our cases are holding up the final coding process so that we make sure that we're not losing sight of keeping those moving forward. So yeah, a lot of shared things that go on. Yeah, that's a great point to add that we have a DNB goal because we didn't put that on here. But we do report out-- excuse me, we report out all of the cases with open queries or that are in a mismatch review, CDI is responsible for that. And so, I think, that has been really helpful as well to build that partnership with coding to recognize and acknowledge the importance of that DNB metric as well. I did, also, want to just touch on the fact that we have these multidisciplinary reviews. Our quality reviews, PSIs and HACs, and other quality outcome reviews include our quality and outcome leads, our coding quality specialists, and our quality improvement analysts. So our quality team as well, as you mentioned. We do coding and CDI prebill mortality reviews on 100% of our mortalities. And then, we have an AWOP, which we will talk about a little bit more in one of the slides coming up. But that is our internal analysis, where we find potential areas of opportunity and we do some internal multidisciplinary reviews. And those reviews include our medical directors as well, which, I think, has been really interesting for us. (DESCRIPTION) Performance Outcomes. Benchmark and Comparison. Expected Mortality, Expected Length of Stay, CC/MCC Capture, Other Risk Adjustment, Complication Rates. Internal/Year-over-Year, Financial Impact, Case Mix Index, Query Rate, CC/MCC Capture, S O I/ROM (SPEECH) And then from a performance outcome perspective, you see a lot of the traditional case mix index, CC/MCC capture. But we're looking at them both internally year-over-year, as well as through many different comparisons and benchmarks. We don't focus in CDI and coding on one performance metric or one benchmark. We use 3M's benchmarking. 
We do a lot of work with the PBM report. We look at Vizient benchmarking. We look at the CMS compare. We look at all of these different comparison groups. Because that helps us identify where there's an area that's at higher risk of being a concern. If we're just looking at one, we found that we were having a lot of false positives, I guess, is what we would call it. Whereas when we took the bigger global perspective, we were able to weed those out by looking at it from those different perspectives. Anything here that I'm forgetting because you did think about the DNB goal on the last slide? No, I think you covered it all. All right. What continuous improvement practices and tools have you incorporated to maximize outcomes while maintaining appropriate staffing levels? (DESCRIPTION) Optimization and Ongoing Process Improvement Efforts. Advanced Sequencing, Utilizing Code Comments to identify impactful codes, Care Quality (Mortality) Review Process, Query Reconciliation Process, Automated Query Impact, Other Query Impact utilization, Utilization of Organizational Outcomes, Internal Review Process (A W O P, PDM, CDI Accuracy, Other), Prioritization point recommendations (SPEECH) Oh, this again. This is Mark's-- this is his baby, again. All of the performance improvement and process improvement in QA. So I'm just really proud of all of this that has been developed. And I'm going to let you brag a little bit, Mark, about it. Yeah, I think, as we went with prioritization, we continue to make sure that we're utilizing the system to its maximum capacity. I think that's the advanced sequencing functionality that we've been an early adopter of and continue to use. Coding was wanting to know more about some of our Vizient outcomes. And so we were able to create some code comments around potential impactful codes that have a note in the system so that they can start to see, oh, I wouldn't have thought that would be impactful, right? 
And also making sure that people are looking at all the other drivers as well. We instituted the Care Quality review process. So that's been a big help because we had created this very convoluted and robust notification process that just seemed to be overwhelming at times. And now, since 3M has built this into the system, it seems to be much more efficient and works very well and quickly. The query reconciliation process and this automated query impact is awesome. We still have other query impact. But we actually-- we do an audit every month. And the staff are really great at making sure they get all of these impacts set right in the system. So we're enjoying the automated piece as well. And then we do a lot of stuff around, like she talked about, AWOP, PBM, CDI accuracy. And we continue to get feedback around where other people, through some of these internal reviews, think that we should be looking at these cases. And then we look to see how do we get that DRG higher up into our work list, when identified. Yeah, a lot of great work. A lot of effort by our super users. They've been really vital partners for us in all of these efforts. So yeah, lots of great work, especially, in the last year or two. Well, on those same lines, what's on the horizon or the next thing to be on the lookout for within your CDI program? (DESCRIPTION) Post-it note, What's Next? Text, The Journey Continues. Enterprise Workflow, New SSR Prioritization Reports, Continuous Analysis of Prioritization, Super-user/IT bi-weekly meetings, Real-time provider facing A I, Query delivery and response optimization, Diagnosis Auto-population project (SPEECH) Oh, well, what's not on the list, right? The more we do, the more the organization sees it, and the more we're asked to do. So we have a lot of efforts, actually, partnering really closely with 3M. We do, now that we have our super users who are consistently asking for enhancements, and changes, and adaptations. 
And I think that has been really fun. And I think the enterprise workflow beta that we're doing now is going to be really fun. It's not live yet. We're in the midst of it. But that's going to be exciting. And I am a data-- I'm a data bug. Everybody knows I like to look at reports. And so we're waiting to see those new prioritization reports and the new reports for the care quality mortality reviews. And we are, of course, looking at provider facing AI. A lot of people ask us about that, have we integrated that into our program. And are we going to-- we are looking at it currently. We're also looking at our query delivery and response processes to see if we can optimize that or make that easier for providers. And then we have a lot of diagnosis auto-population that we're doing internally with our electronic medical record. So we're building in our medical record some criteria that would auto-populate some diagnoses for validation from the provider. So we're looking at all of these tools to work together, right? That provider facing AI, the real time nudges, our auto-population, as well as our other documentation tools. Mark, anything else that you think we may want to share about what's coming? No, I think that's a lot of it and the biggest stuff that we're working on. And I think we're very lucky that we have a lot of providers that are researchers, as well. And they're very tech savvy. And they actually think we might be moving too slow. So they push us really hard, so. [CHUCKLES] That's true. [CHUCKLES] I think that's going to be our big thing this year, how to meet their needs around the AI and stuff, so. It's always a balance, right? We're either moving too fast or we're not moving fast enough. So yeah, that's interesting. (DESCRIPTION) Save the date for 3M Client Experience Summit. When: May 22-25, 2023. Where: Atlanta, Georgia. What: 3M CES is the premier event for clients of 3M H I S. Go to our website for updates and to subscribe for more info! 
Interested in speaking? The call for proposals is now open until Jan. 13! 3M CDI Innovation Webinar Series. Copyright 3M 2022. All rights reserved. (SPEECH) Well, that was the last of our questions for you. Mark and Michelle, I want to thank you for your time today. If the audience has any questions, they can throw those into the Q&A. But I appreciate your time and the audience's time today. And I just want to turn it over to Lisa to do a quick update on the 3M Client Experience Summit. Yeah, thank you all for joining today. That was a great presentation. And like Adriana said, thank you to all of our attendees today. So if you are a current 3M customer, you may have gotten information-- hopefully, you got the information about saving the date for our upcoming Client Experience Summit. It will be taking place in May this year, May 22 through the 25th, in Atlanta, Georgia. We are going to be moving out of where we normally go in Salt Lake City. We're going to try Atlanta. And hopefully, that'll be easier for people to get to. I know it is for me being on the East Coast. So I definitely can give a thumbs up to that. And again, that is for our clients that are interested in attending. So if you are interested, there is a link in the resources section to go to our website, where if you didn't get information, you can certainly subscribe to get updates. And if you are a thought leader and you'd like to talk about the experiences that you've had at your facility, we'd love to hear if you'd like to speak at CES. So the call for proposals is open now, from now until January 13. So take a look at our website for more information. Again, today's webinar was recorded. It will be on our website in the next couple of weeks. The October webinar, I believe, Adriana, wasn't that Piedmont? I believe that was-- Piedmont, yeah. Yep. So if you would like to go back and listen to our October webinar, please also take a look within the resources section. 
Go ahead and download your certificate of attendance before you close out or complete the survey because you won't be able to get back in to download that if you complete the survey. So, yes, download the certificate of attendance, the presentation, and be on the lookout. We are busy planning our 2023 CDI Innovation Series. So hopefully, in the next couple of weeks, we'll be sending out some information so you can start to get registered for the webinars that we have next year. So again, we cannot thank Stanford enough for joining us today. It was a great presentation. And again, thank you to those that joined and have a great holiday season. And it's crazy to say, but we'll see you next year. So thank you all for joining today. Yeah, thank you so much. Yeah, thank you so much. (DESCRIPTION) Text, That's a wrap!

      Webinar title slide

      Boost your CDI program by leveraging an impactful AI-based prioritization

      • December 2022
      • It is no secret that health care is undergoing a drastic transformation impacting the CDI profession. A disconnect often exists between quality, CDI and coding teams over what can be controlled through documentation. By leveraging AI-based CDI prioritization technology, Stanford Health Care ensures it utilizes prioritized worklists that focus on the most impactful cases down to the DRG level. This focus can be customized and adjusted as the health care environment transforms. Learn how Stanford Health Care tackled this challenging landscape through proactive, connected tools and a culture driven by quality.
(DESCRIPTION) Slide presentation. Logo, 3M, Science, Applied to Life. Text, Taking Piedmont CDI to the Next Level for the Win! 3M CDI Innovation Webinar Series. October 2022. A man in a white coat and a woman in blue scrubs sit together at a table looking at a tablet. (SPEECH) Good afternoon and welcome to our October CDI innovation webinar. Before we get started, I am going to go over a couple of housekeeping items. And what we'll be talking about today is taking Piedmont Healthcare CDI to the next level for the win. We have a couple of great speakers here today. So we're really excited to have them. Before (DESCRIPTION) New slide. Text, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. (SPEECH) we get started, I just want to go over a couple of things. This is a web-based platform, so if you are having any technical issues, make sure you're in Chrome, and close out of VPN or out of multiple tabs. That'll help with bandwidth. And a lot of times, if you just do a quick refresh, that will help with any problems that you might be having. Because this is a web-based platform, we do not have a dial-in number. So you will want to use your computer audio. So again, if you are having any issues, make sure you check those settings. Because this is a new platform, I just want to also go over some of the engagement tools that you have. So in the top area, you have a Q&A box. So if you have any questions, we encourage questions, please put that into the Q&A box. We'll get to as many as we can at the end. Down at the bottom left, you should see Resources. So that is where the certificate of attendance is for download. 
You can also download the presentation from today, as well as a couple other resources. If you missed our August CDI webinar, a link to that recording is in there as well. In the middle, you can see an area that if you would like some more information, if you click on that, you can let us know there. And then, if you are interested in learning more about our speakers, there's a speaker bio section. And then, we always appreciate you completing the survey at the end to let us know how we did. So also, one final thing, if you do need closed captioning, that is available in the media section of your dashboard as well. (DESCRIPTION) New slide titled Meet our speakers. Headshot photos of each speaker. Text, Gail Higle, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. Niki Spear, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. (SPEECH) So again, like I mentioned, we have some great speakers today, Gail and Niki, from Piedmont Healthcare. So I'm going to go ahead and turn it over to Gail to get things started. Gail? (DESCRIPTION) New slide. Text, Objectives. (SPEECH) Good afternoon. This is Gail Higle. Our talk today is about how Piedmont, in 2020, took our CDI department to the next level for the win. And as most of you know, our Georgia Bulldogs are number one, and they won the national championship last year. And CDI has a lot in common with working together as a team. And so, we want to tell you how we, as a CDI team, became successful using Priority and Impact ROI, our wonderful 3M technology. 
The objectives of today's talk are to focus-- our whole reason that we started using this was to focus CDI reviews on the most needed cases, to maximize the use of worklists, prioritize needed follow-up reviews, benefit from the AI auto-suggested codes and queries, easily reconcile cases with coders' final codes for accurate financial impact, educate on inaccurate reconciliation and missed opportunities, and report vital CDI impacts to administration and each individual CDI. The Impact tab was started after Active was in Atlanta several years ago. At their national Active convention, CDIs asked to see their individual impact: they want to know what impact each query they send makes, and this wonderful Impact ROI, with the reports you can build out of SSR, does just that. (DESCRIPTION) A new slide with the Piedmont logo in the corner shows a photo of a tall, long building that curves slightly. (SPEECH) This is Piedmont Healthcare. This is our newest building in downtown Atlanta. The building was opened August 2020. It is a 408-bed facility, 16 stories in the heart of Atlanta's Historic District. There are 16 ORs, eight cath labs, four cardio-physiology labs. It also has an urban plaza with a Starbucks, a 300-car garage, and it's very high tech, with our world-renowned cardiovascular surgeons. If you would like to, you can go on YouTube and take a tour of our wonderful facility in downtown Atlanta. (DESCRIPTION) New slide. Text, Piedmont. Real Change Lives Here. Piedmont has more than 31,000 employees caring for 3.4 million patients across 1,400 locations and serving communities that comprise 80% of Georgia's population. Piedmont has provided $1.4 billion in uncompensated care and community benefit programming to the communities we serve over the past 5 years. (SPEECH) I'll tell you a little bit about Piedmont. It is the largest health care provider in the state of Georgia. 
We currently have 22 hospitals, 55 Urgent Care centers, 25 Quick Care locations, 1,875 Clinic physician practices, and more than 2,800 Piedmont Clinic members. And just this last year, in 2022, Piedmont was ranked 166th as one of the Best Large Employers in the US by Forbes. And we are very proud of that. (DESCRIPTION) A new slide shows photos and years built of different buildings: Atlanta 1905, (1957 location), Fayette 1997, Mountainside 2004, Newnan 2006, Henry 2012, Newton 2015, Athens Regional 2016, Rockdale 2017, Walton 2018, Columbus Midtown 2018, Columbus Northside 2018. (SPEECH) Now, this-- our study that we did began in 2020. And in 2020, these are the 11 facilities that Piedmont had. What is interesting about this is we had 11 facilities with seven integrations in six years. So we are consistently changing and growing and growing. (DESCRIPTION) New slide shows more building photos: Macon Coliseum 2012, Macon North 2021, Cartersville 2021, Eastside 2021, Eastside Loganville 2021, Augusta 2022, Augusta Summerville 2022. (SPEECH) And in 2021, we acquired five more facilities. These five facilities joined 3M in June of 2022 and are now part of our CDI program. And then, also in 2022, we integrated with Augusta and Augusta Summerville, which is the University Hospital for the Bulldogs. And we will integrate with them in 3M and Epic next year, in November 2023. So we look forward to that. So Piedmont continues to grow. (DESCRIPTION) New slide titled Piedmont CDI shows a photo of the University of Georgia Bulldogs football team together on the field. (SPEECH) A little bit about Piedmont CDI. We are celebrating 10 years of being together. It started in 2012. We began with two facilities and four CDIs reviewing only Medicare. By July of 2019, we grew to 11 facilities, a director, four managers, an educator, more than 35 CDIs reviewing cases using the length of stay, working DRG priority that is in 3M 360. 
So all payers, no self-pay or charity, no OB, no peds, and no NICU. In October of 2019, before COVID, we went 100% remote, and this is very, very important because that asset helped us get through the rough times of COVID. Starting April of 2020, CDIs began reviewing using the long length of stay and working DRG priorities for all admissions except OB, peds, and NICU. (DESCRIPTION) New slide. Text, Piedmont Case Selection July 2019 to October 2020. CDIs assigning cases by LOS and 3M Working DRG Priority. Filter by LOS, choosing cases greater than or equal to 3 days. Next select cases by working DRG priority in the following order: 1, symptom Dx/DRG, 2, medical cases without CC/MCC, 3, surgical cases without CC/MCC, 4, surgical cases with CC without MCC, 5, sepsis DRGs review, 6, review DRG, consider alternate DRG, 7, questionable admits, 8, medical cases over GMLOS, 9, elective surgery over GMLOS, 10, low priority cases, minimal change impact, 11, optimal DRG, no need for review/re-review. A section of a chart shows Active Priority Factors and Working DRG information. (SPEECH) And if you are not familiar with what 3M working DRG priority looked like prior to the priority list, this is what the hierarchy of the working DRG priority looks like. (DESCRIPTION) New slide. Text, Welcome to the Game. A photo from a UGA football game with the opposing team about to snap the football. Text, March 19, 2020, First Piedmont COVID-19 admission. (SPEECH) Then, lo and behold, COVID hit. March 19, 2020, the first Piedmont COVID admission happened. Our team was stable at that point; we had no integration that year. And this gave us an opportunity to use 3M technology to focus CDI reviews on the cases that most needed review. Piedmont administration gave CDI the goal of reviewing 80% of admissions, because you cannot review 100% of admissions.
So at that point, our director, Lori Dixon, who was very instrumental in leading our team using technology, brought us together and had Niki and me (she will talk about Priority) start using Priority, the Worklist, and the Impact ROI right in the middle of the five COVID waves. Our peaks were April 2020, July 2020, January 2021, August 2021, and then January 2022. And right in the midst of that, we began using both at the same time. (DESCRIPTION) New slide. Text, Taking Piedmont CDI to the Next Level amidst COVID-19 Waves. October 2020, Priority and Impact ROI launched together. Two side by side screenshots, the left a table labeled North Priority Worklist with a long list of illegible items. The right screenshot shows a dashboard with tabs along the top and a lot of information with a popup box on top. (SPEECH) To get started, this is what our Priority looks like, on the left, and on the right, this is what Impact ROI looks like. And Niki is going to tell you now about her beginnings with the Priority Worklist. (DESCRIPTION) New slide. Text, Priority Worklist Launch. Practice makes perfect! Prior to system launch: Set up the game plan. 3M defaults for prioritization points, established regional superusers, trialed different functionality, modified CDI workflow for the Priority Worklist, priority superuser team chose the layout of the worklist columns, added focus DRG priority for sepsis. Priority factor weights: New documents: OP Note, DC Summary, Queries. Financial class. (SPEECH) Thanks, Gail. So with the increasing challenges of staffing, we adopted prioritization as a tool to improve case review efficiency. With the goal of reviewing 80% of all adult inpatient admissions except mother-baby, the 20% that we could not review should have the least likelihood of query opportunities. Prior to implementing prioritization, we attempted to do this manually, as Gail said earlier, with the list that she had shown.
However, old workflows and habits are hard to break, and many staff would gravitate to cases that they preferred, such as reviewing by service line or length of stay, or, as some staff would call it, cherry-picking. This results in inefficiencies in the review process. Using the prioritization worklist as a tool and customizing it to our needs helped sequence the cases from the highest priority for query potential to the lowest without having to manually sort through the worklist. CDIs could just take the next case in line and review it. To implement the Priority Worklist, we decided to practice with a soft launch. If something did not produce the end result we had in mind, we could adjust and make changes and improvements that would help prepare for the overall system launch. We established regional superusers. We used prioritization as a tool within 3M that allowed for a lot of customization. We started with the 3M defaults and adjusted from there. I would recommend assessing what works best for your facilities. For example, sepsis has a high potential for denial, so we set up a focus DRG for sepsis cases to review for clinical validity and associated organ damage. We also found that adding priority factor weights for new document types was a great tool for CDI. (DESCRIPTION) New slide titled Priority Worklist Launch: Game Time. A photo of the UGA football team in a huddle on the field. Text, At October 2020 department meeting, priority worklist manager and priority superuser team presented priority worklists to staff and educated staff on prioritization and new features. Second workgroup came together to create a daily workflow job aid. Additional education using 3M tools and filtering. Region priority worklists were implemented following meeting. Priority worklist manager continues to validate worklist and educate staff. (SPEECH) Game time. Priority Worklist launch.
So at the October 2020 department meeting, we presented the new priority worklist and educated staff on prioritization and new features. Regional priority worklists were implemented following the meeting. We had a second education session, on using 3M tools and filtering, about a month after the initial launch. We also had another workgroup that came together to create a daily workflow job aid to assist the staff. We continue to work to validate the worklist while offering ongoing support and education to the staff. (DESCRIPTION) New slide titled Piedmont CDI Regional Priority Swim Lanes. A screenshot of the 3M CDI Dashboard for Gail Higle. It shows a color-coded key, purple for prioritized, green for ready, gray for scheduled for today, red for queries pending, blue for scheduled for later and orange for discharged and pending. Below is a horizontal bar chart labeled Visits. The bars are labeled with priority worklists for various locations. Each bar is divided into colors, each with a number on it to correspond to its length. (SPEECH) This is the CDI dashboard work queue divided into four regional swim lanes. There is one manager per region with 9 to 10 CDIs reviewing. This does not include Augusta, which is to be integrated into Piedmont, Epic, and 3M next year. (DESCRIPTION) New slide titled Piedmont Priority Worklists. A screenshot shows a chart titled North Priority Worklist with blurred out information. The columns are: Visit ID, Patient name, Score, Case status, last review date, assigned to, last access, available documents, pending queries, provider queries, follow-up, notification, priority and working DRG, Wt/GLOS/SOI/ROM. (SPEECH) So we sort the priority worklist by unreviewed cases and start reviewing from the top. We customized our worklist to have the priority score, then case status, last review date, assigned to, and the last access to the chart. We customized which available documents to show.
We included a column for the number of pending queries, the names of the queries we sent to the providers, any follow-up date if one was assigned, notifications to coding, any additional priority that we may assign individually to a CDI, and then, at the end, the auto-suggested working DRG. These columns were customized by our superuser teams, and that was very helpful for them. (DESCRIPTION) New slide titled New Features. Text, At a glance, see how many queries need followup. A screenshot shows a closeup of the Pending Queries column from the worklist. Above the chart it says 6 pending queries. The choice CDI Query Status Pending has been chosen from the Priority Factor dropdown menu. Text, Who last accessed account. A screenshot of the Last Access column shows different names in each row. Text, Case status and last review date. A screenshot of the Case Status and Last Review Date columns. The Case Statuses shown are Discharge and Concurrent. (SPEECH) These are some of the new features we shared with the staff. The first column shows the ability to sort by priority factor to quickly see how many queries are pending, which is also helpful if we have staff out and are covering for each other. In the middle, you can see who last accessed the account. This is useful in determining if coding has had a chance to review the case. And in the last column, there is a case status that shows new, concurrent, or discharged, and also the last review date. It shows in green if it was reviewed today. (DESCRIPTION) New slide titled Priority Scoring showing two screengrabs from the Home tab of the dashboard, the left one labeled Ability to dismiss factor. It shows the Priority Score, 310, and the Visit State: New. A blue box appears around Possible Sepsis, 30. Below the written statistics is a line chart labeled Priority Score Progression showing the Findings and Priority Score from 3 PM to 3 AM. The screengrab on the right side is labeled Action Items.
It shows the same information, but the Action Items section at the top reads, 1 Open, 1 Total. A blue box appears around the heading and the text, Open. Actual Result Codes not found in Final Codeset, Immediate action is required. (SPEECH) Priority scoring. To dive a little bit deeper, you can see additional priority scoring tools within the encounter. On the left is the ability to dismiss a resolved factor. When CDI reviews for possible sepsis and then decides whether or not to query, they can dismiss the factor, which will remove that priority factor from the scoring of the case. The priority score pertains to just the new information, documentation, or status changes, to give the most up-to-date priority score and assist with which case to review next. The open action item on the right side of the page shows a missing query response in the final code set. This creates an alert to the CDI for the missing code and helps prevent re-billing. Before, this was a manual process of comparing codes, but now the CDI is notified of the missing query response in the final coding. This improves efficiency with the time spent in the chart and helps reduce errors. (DESCRIPTION) New slide titled Workflow Changes. Text, Assign and complete initial case review one at a time by priority score. No longer assigning 10 to 12 cases when you sign on, only assign the one you are working on. No longer required to assign followups for all cases. Only assign followups as needed, for specific reasons and not for routine scheduled followup. Worklist will move the cases with the highest priority to the top of your list. (SPEECH) The two biggest workflow changes that we had were assigning cases one at a time by priority score and not scheduling routine follow-ups for all patients. The process now is to assign and complete initial cases one at a time by priority score, no longer assigning 10 to 12 cases when you sign on in the morning.
Cases are continuously updated in real time, so the cases at the top of the list are the most likely to need clarification. In the past, when new documentation came in, it would go unnoticed until the CDI would manually review. Using prioritization as a tool gets the CDI to the case most likely needing a review without having to manually check each case for changes, and this reduces unnecessary or non-value-added reviews. We no longer assign follow-ups on all cases. The CDI only assigns a follow-up for potential clarification and uses prioritization as a tool to alert them when a new review is needed. With continuous updating of the prioritization scoring, we don't need to spend the time following up on all cases looking for changes. We now use a combination of technology and CDI expertise to improve reviewing efficiency. (DESCRIPTION) New slide titled Priority Ongoing Improvement. Text, Obstacles. Questioning change: Increased autonomy in setting reviews using clinical expertise and 3M tools to enhance review efficiency. Only creating followups as needed. Choosing cases by priority score and not picking preferred service line, DRG, short stays. Regional differences: surgery hubs, sepsis cases. CDIs working different hours caused differing case loads. CDIs working from 4 AM to 10 PM, live across the US in different time zones. Of note: Asked staff to escalate cases that appear not to have a correct score. Validated all cases as having correct scoring by priority settings. (SPEECH) Some of the obstacles we encountered were questioning change and needing to reinforce the new workflow, and there were regional differences: we have surgery hubs and sepsis cases. There was ongoing improvement to the priority. CDIs can now review between 30 and 40 total cases per workday, with some staff taking upwards of 16 to 17 initial cases. This is a success we really owe to the staff and the workgroups that worked on this. They were instrumental in getting priority going.
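The mechanics Niki describes, weighted priority factors summed into a continuously updated score, with dismissed factors dropped so a reviewed case sinks back down the worklist, could be sketched roughly like this. This is an illustrative sketch only; the factor names and point values are invented, not 3M 360 Encompass settings.

```python
# Hypothetical sketch of a weighted-factor priority worklist.
# Factor names and weights are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Case:
    visit_id: str
    factors: dict = field(default_factory=dict)  # factor name -> weight
    dismissed: set = field(default_factory=set)  # factors the CDI dismissed

    def priority_score(self) -> int:
        # Dismissed factors (e.g. possible sepsis already reviewed,
        # no query needed) no longer contribute to the score.
        return sum(w for f, w in self.factors.items() if f not in self.dismissed)

worklist = [
    Case("V1", {"possible_sepsis": 30, "new_dc_summary": 40}),
    Case("V2", {"medical_no_cc_mcc": 55}),
]
worklist[0].dismissed.add("possible_sepsis")  # CDI reviewed, chose not to query
# The highest remaining score rises to the top, as the worklist does in real time.
worklist.sort(key=lambda c: c.priority_score(), reverse=True)
print([c.visit_id for c in worklist])  # V2 (55) now outranks V1 (40)
```

The point of the sketch is the workflow change above: the CDI takes whatever is on top after each re-sort, instead of pre-assigning a batch of cases each morning.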
It's important to encourage the staff not to revert to old workflows and assign follow-ups for all cases, where they will be buried in a sea of red, overdue follow-ups. We found that with routine scheduling of follow-ups, many would not get reviewed before discharge, and the act of scheduling follow-ups was inefficient, resulting in many clicks to set up and then resolve the follow-ups upon reconciliation. The workgroup also noticed that a potential query opportunity the CDI recognized would, when all follow-ups were scheduled, be just one among many follow-ups and would likely be missed. So when the CDI wonders, "Will I miss something?", they are now using their clinical expertise to assign follow-ups only for a particular reason, allowing them to get to the cases that really need to be reviewed. This gives the CDI increased autonomy in setting reviews. Of note, to help create buy-in and support, we also asked staff to escalate any cases that did not appear to have the correct score, and we were able to validate all of those cases as having the correct score by the priority settings that we chose. So that is what I have for prioritization. One last thing: I would encourage you to play with prioritization. It's a little bit of a tinkering tool; it's customizable to whatever comes up. If you want to do reviews for sepsis, you can. We're doing a new travel project, and we were able to set up worklists based off of that. It's an incredible tool that you can tinker with. That is what I have. So back to you, Gail. Thank you, Niki. After Niki's part of the meeting was done, at that same department meeting where we launched this together as a department, we were all remote, so this was one large department meeting held online, through Webex at the time. We now use Teams.
And after she presented the priority worklist that the team put together, right after she spoke, I spoke about the Impact ROI launch. (DESCRIPTION) New slide titled Impact ROI Launch. Text, Impact ROI manager presented at October 2020 department meeting; Impact ROI education highlighted. Query scenarios for Missing Diagnosis, New Principal Diagnosis, Clinical Validation and POA. Impact ROI reconciliation steps, including open action item for uncoded query responses. CDI scorecards to display individual CDIs' information: PDX, MCC, CC, Procedure, SOI and ROM impacts and accurate financial impact. Impact ROI tab implemented after department meeting. Additional Benefits and Support: Regional manager validation worklists save managers time validating impactful cases concurrently. CDIs' case reconciliation is concurrent, before the bill drops, not at the end of the month. Impact ROI manager provides ongoing education at department meetings and after feature updates. Ability to submit 3M enhancements to improve reporting of impacts to administration and CDI scorecards. Managers continue to troubleshoot cases with errors, including missing and incorrect impacts, and escalate the unresolved issues to 3M. (SPEECH) And of note, right after this meeting, it was turned on in 3M. So when everybody went back to work after this meeting, they had the worklist, and they also had their cases going to Impact. And to tell you about our launch: when we started, the way I did it was I went to 3M. In 2020, they had updates 20.7 and 20.8, and I used those updates to create a PowerPoint to educate the CDIs at that meeting. After the meeting, each CDI got a copy of that PowerPoint so that, when they were reconciling their cases, they understood how to do it step by step, along with some different query scenarios on how to reconcile cases using the Impact tab. We did missing diagnoses, new principal diagnoses, clinical validation, and POA.
Those examples are all in the original 20.7 and 20.8 updates that 3M did. Then we went through the Impact ROI reconciliation steps, including the open action item and the uncoded query responses. I will also show you how we created a CDI scorecard that gave each CDI all of their credits for their queries, for PDX, MCC, CC, procedures, SOI, and ROM, and the accurate financial impact. I'll talk about that in a minute. The Impact ROI tab was implemented right after the meeting, right as we started the worklist also. The big benefit of the Impact ROI tab is that our regional managers can validate the worklists and the cases as they are completed instead of at the end of every month. This saved us a lot of time; for regional managers, it was in real time. And because of that, CDI cases were concurrently reconciled with the coder before the bill dropped. This also saves Piedmont a lot of time getting those bills out the door, instead of waiting until the end of the month and then raising red flags when bills are held. The Impact ROI manager, which was myself, provides ongoing education at department meetings. Any time there's a feature update, the quarterly 3M feature updates, we do further education. If it is big education, it will be part of our department meeting; if it is just small education, it will go out in an email the morning after the update. Lists of the updates are sent directly to the CDIs so they can see the cosmetic changes or whatever changes 3M has in that feature update. The ability to submit 3M enhancements, this was huge. As we started building more reports to send to administration or add to scorecards, there were more fields that we wanted. Geometric length of stay was one of those, and that enhancement was put in. Within a couple of months, the ability to put geometric length of stay on our administrative KPI was given.
And so, that is also very helpful. Managers, this is the big part: managers continue to troubleshoot cases, even today. We were working on one, a POA query that a CDI couldn't get to impact, and the managers worked together and looked at it. The biggest ones are the missing baselines and the incorrect impacts, and we escalate any problems that we find to 3M. As we find issues, they get right back to us. It has been a wonderful collaboration. (DESCRIPTION) New slide. Text, Steps for Successful Impact Tab Reconciliation. Before checking CDI Final Review Complete. The left side shows a screenshot from the Impact ROI tab on the dashboard. An orange arrow points to the word Codesets in the upper right corner. The tab is labeled Final Cumulative Impact. There are statistics across the top such as Estimated financial impact, weight, SOI and ROM. An orange arrow, labeled Coder's Final Codes, points to the Baseline row under DRG Type. The right side of the screen shows the Codefinder page. An orange arrow, labeled CDI's Codes, points to the two codes and their info listed under the Medicare DRG and MDC information, 177, Respiratory Infections and Inflammations with MCC, and 004, Diseases and Disorders of the Respiratory System. (SPEECH) Here is what the steps of successful Impact tab reconciliation look like. You have the coder's codes on the left, and the CDI's codes are on the right. The CDI can see their DRG and then the coder's DRG, and of course, the coder's DRG should be the baseline. And then, here are all the codes that the coder coded, in order, and the query links are the little RD query templates. This next part is the steps of how you would do a reconciliation of a case. (DESCRIPTION) New slide. Text, #1 Queries are linked to Coder's Codes. A screenshot from the Impact ROI tab showing a chart titled Final Diagnosis Codes.
It shows each code, its description, and POA, Affect, MCC, CC, SOI, ROM, HCC, HAC, PPC, Elix and Baseline. Under the Query column, Sepsis with Criteria PHC for one item and CHF PHC for another item is circled. (SPEECH) Number one, the CDI looks at their query. First of all, is it linked? Is it linked to the coder's code? The slide before was an RD query. Here, we have sepsis; it's linked to the sepsis. The CHF is linked to the CHF. And you can see the baseline DRG and the final DRG, with all of the impacts going across the top. (DESCRIPTION) New slide. Text, #2, Home Tab: Query Green Check Mark, Except Clinical Validity Queries. A screenshot shows the Home tab, including headings for Action Items, Priority Score, Activity, Findings, Followups and Queries. The status Finalized is circled along with the green checkmark and Pulmonary Embolism PHC next to it. (SPEECH) Now, the second thing that they look for is in the Home tab. Is there a link? If they did not find their query placed in the Impact tab, they go and look for a link in the Home tab, and see if that green check mark is missing. Except for clinical validity queries: clinical validity queries will not have a green check mark. (DESCRIPTION) New slide titled #3, Home Tab: No open action items. It shows another screenshot from the Home tab. Under the Findings heading, a chart of codes is shown for a patient. The Elix, Baseline and Query columns are circled for one of the codes. A checkmark appears in Elix and the baseline and query are blank. In the Queries section, the status reads Finalized and the query reads Malnutrition PHC. Text, CDI needs to check for correct query response diagnosis code. If not coded, send coder notification to code the query response. (SPEECH) And third, if they still haven't figured out why it isn't linked, they go to the Home tab, and there may be an open action item.
This occurs if the coder did not code the exact codes from the query box. This one was malnutrition, and immediate action is required: the malnutrition code was not added by the coder, so the CDI would add a notification to the coder and let them know that their code was not coded. (DESCRIPTION) New slide titled Impact ROI Ongoing Improvement. A photo of the Bulldogs preparing to snap the football. Text, Obstacles: Accurate financial impact: collaboration with Epic and 3M team to correctly interface coder's Epic estimated reimbursements to 3M 360 Encompass. CDIs continuing to use final DRG comparison tab and not Impact tab for reconciliation. CDIs missing Final Cumulative Header for agreed queries. CDIs' logic for clinical validity "Was the diagnosis documented and truly supported?" cases should have zero-dollar impact. Incorrect negative and positive financial impact mostly due to incorrect baseline diagnosis codes. (SPEECH) Ongoing improvement. One of the first hurdles we had was with our 3M team, our wonderful Wendy, Barry, and Orlando. I worked with Barry, and we worked on our finances: the estimated financial impact in Epic was not coming over to 3M in the Impact tab for all of our then 11 facilities. At that point, we knew we had a problem; the figures weren't accurate with what was in Epic. So Barry worked with the Epic team, and they came up with a crosswalk table so that all of our finances now match, facility by facility, DRG by DRG, between Epic and 3M. And we check that every once in a while just to double-check. If you have any of those problems, work with your EMR team and see if you can figure it out. Our CDIs also continued to use the final DRG comparison tab instead of opening up the Impact tab for reconciliation. It took some time to teach everybody to click that Impact tab first. Next, the CDIs were missing their final cumulative header on agreed queries; that was another learning.
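The three reconciliation checks Gail walked through (is the query linked to a coder's code; if not, is the green check mark there in the Home tab; if not, is there an open action item because the response was never coded) amount to a small decision procedure. Here is a rough sketch under assumed field names; these are not the actual 3M 360 Encompass data model:

```python
# Hypothetical sketch of the three-step Impact tab reconciliation check.
# Function and field names are assumptions for illustration only.
def reconcile_query(query_name: str, response_code: str,
                    linked_queries: set, final_codes: set) -> str:
    if query_name in linked_queries:
        return "linked to coder's code"           # step 1: impact captured
    if response_code in final_codes:
        return "check Home tab green check mark"  # step 2: code present, verify link
    # step 3: response missing from final code set -> open action item
    return "open action item: notify coder to code the query response"

# Example: a malnutrition query whose response code was never coded.
result = reconcile_query("Malnutrition PHC", "E43",
                         linked_queries=set(),
                         final_codes={"A41.9", "I50.23"})
print(result)  # open action item: notify coder to code the query response
```

The value of ordering the checks this way, as in the talk, is that the cheap positive case (already linked) exits first, and only a truly uncoded response generates coder work.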
If you got an agreed query, what was in the header? Did you get your CC? Did you get your MCC? Was there a check mark for clinical validity? And then, a hurdle we still have sometimes is the CDI logic for clinical validity: was the diagnosis documented and truly supported? If it was documented, those cases should have a $0 impact. And then finally, incorrect negative and positive financial impact, mostly due to incorrect baseline diagnosis codes. We still struggle with some of those, but our team is really doing a great job. It's been two years, and we are getting so much better at this. CDIs will escalate anything that does not make sense to their manager, and then the managers work one-on-one with the CDIs to resolve those. We've even talked as a management team about not validating those anymore, because the team has been doing such an excellent job of correctly validating queries and reconciling them on their own. So we have come a long way in two years. (DESCRIPTION) New slide titled Key Secrets to Winning: Team Support. Text, #1, Successful Remote Team Support. By the time COVID-19 high admissions and continuous coding changes hit, Piedmont remote CDI had established: fairly reliable remote technology with Webex (now Teams), CDI scorecards with productivity expectations, weekly leadership team meetings, monthly department meetings and regional manager-led team meetings, monthly manager/CDI scorecard meetings with annual performance evaluations, addition of CDI educator, facility physician advisors with bi-weekly query reports, remote nationwide hiring and orientation, administrative support, CDI coding collaboration and buddy system, supportive Piedmont 3M team, CDI leadership approach of education, guidance and trust. (SPEECH) So what are some secrets that we have? One, I would say, is team support, and I think all of our managers would agree.
And I believe that our CDI team would be proud to say that we really do have excellent support in the way we work together as a group. Going 100% remote in 2019 was very key, because by the time COVID hit, we already had Webex; now we use Teams meetings. And fairly reliable technology: we still have our technology issues, but we have good processes, and we work consistently to get those fixed. We already had CDI scorecards with productivity expectations. We had weekly leadership meetings and monthly department meetings, and then each of the four regional managers has their own team meeting once a month. We have monthly one-on-one manager calls with our CDIs, and we also do annual performance evaluations. During this time, an educator was added, which was very key to continuing the support of education across the board, not only for CDIs; at that time, we did some physician education as well. The facility physician advisors were added: at that point, we had a physician advisor at almost every facility who gets a bi-weekly outstanding query report, and we continue that to this day. Remote nationwide hiring and orientation, that was a learning curve, but we had already gone through it. And administrative support: administration has been very supportive of us over the years despite the setbacks of COVID. And then, CDI-coding collaboration: we have a buddy system, where each CDI is connected with a buddy who also codes for their region, and that is very key. With that, we also have our second-level review team, C2E, that handles our second-level reviews where CDI and coding might not see eye to eye, and we use our notification systems very closely as a team. And then, we have a wonderful, supportive Piedmont 3M team; I can't thank them enough. And finally, our CDI leadership approach is education, guidance, and trust. Ongoing education, but trust is so, so important.
Some of our CDIs have up to 40 years of nursing experience, and they have a lot of knowledge they can give. We just have to trust each other as a team to grow and learn. There are going to be mistakes, but we work together, we get through the mistakes, and we continue to grow. (DESCRIPTION) New slide titled Key Secrets to Winning: Coaching. Text, #2, Team Building, Goal-Driven Leadership. Director with vision for change, willing to take risk in new technology and provides direction to managers, bi-monthly leadership meetings. Weekly one-on-one calls with managers and educators for development, support, projects and goals. Managers and educator meet weekly to update processes through job aids, analyze tough reconciliation, escalate potential errors to 3M, submit needed enhancements to 3M, daily ongoing support of CDI team members through priority and reconciliation education, monthly scorecard calls to build relationships, review progress and goals, and inspire growth. Key factor: to trust CDIs with education provided to work autonomously. CDIs daily work to follow job aid process and to meet and exceed CDI scorecard goals, promptly escalate priority or impact reconciliation problems, consistently collaborate with coders through notifications to complete reconciliation for timely billing. Department meets monthly led by director, supported by managers, educator, and CDI, coding, priority and impact education, updates and team building. (SPEECH) And that was our team support. Our next winning secret is our coaching. Like I said in the beginning, we have a wonderful director, Lori, who seized these kinds of opportunities 3M offers and instituted these changes. To this day, she's still supporting us to add further technology and grow as a team, whether it be education or technology. And like I said, she has a vision, and we do all of those calls that I was talking about. And then, our managers and educator, we still meet weekly.
We go through job aids and talk through tough reconciliations, and we escalate any potential errors to 3M right away. We keep track on a spreadsheet of what our errors are and the success of resolving them. And then, we escalate any needed enhancements; they might come out of this team or from our CDIs themselves. Several of them have had ideas for enhancements. Again, our key factor is to trust the CDIs, with education provided, to work autonomously. That's something we all try to build in each other, and Lori also does in us as managers. And then, the CDIs' daily workflow follows the job aid process, to meet and exceed scorecard goals. I think Niki brought up that we have a goal of 11 to 12 initials a day, and some of our CDIs have actually made a goal of 14 or 15 initials a day. We often see on our dashboard CDIs doing 45 to 50 case reviews a day. Through the priority worklist and the Impact tab, and all of our winning coaching and education, we're able to do this. And then, the department meets monthly, led by Lori, our director, and the managers, educator, CDIs, and coding all work together to keep learning and moving on. (DESCRIPTION) New slide titled CDI Scorecard showing a chart titled CDI Query Impacts FY22. It shows information for each month of the year, as well as yearly totals, for number of queries, PDX impact, MCC impact, CC impact, procedures impact, SOI impact, ROM impact, number of clinical validation queries, and financial impact, in dollars. A graphic in the corner of the slide shows the UGA bulldog standing in front of a college football national championship trophy. (SPEECH) And this is what one of our scorecards looks like, and you can see we built the impacts in. When I was out at CEF this summer, I understood that some departments have even added the HCCs on here and other impacts that are available. You can see we have the number of queries and all of the PDXs, MCCs, CCs, and procedures.
And then, the financial impact is at the end. And this is one of our team members who has been a CDI for 16 years. So we are very fortunate to have a very rich history of CDI on our team. And there is our winning little bulldog. (DESCRIPTION) New slide titled CDI KPI Dashboard. A chart showing PHC CDI KPI, All Admissions (No OB, Peds, NICU), for July of '21 through June '22. The information included is total admissions, total admissions reviewed, percent admissions reviewed, total reviews: initial, continued stays, retrospective, CDI average chart reviews per day, query rate, query agreement rate, provider query response time/days, financial impact, increased GMLOS days by queries, CMI balance scorecard. (SPEECH) And then, this is our KPI dashboard that goes out to the administration at each of the facilities. And as you can see, one of the enhancements that we had added on here-- one of the key goals for Piedmont, as in many hospitals, of course, is to decrease the length of stay. And our case management teams are always looking for opportunities to increase our geometric length of stay on cases. So with our queries, we can tell our case management groups, these are the number of days that our queries have added. And then, we have our case mix index from our balance scorecard, and our query rates, and our agreement rates. And one of the questions at CEF-- somebody asked me, "Well, how many queries do you not get answered?" Well, very, very, very few. Our department sends an average of about 2,000 queries a month. And we have very few unanswered. It's not acceptable to not answer a query. So even with our new integrations, we do go through a learning curve. But with the support of the administration and of the physician leaders, we have been able to have very, very few no responses so-- (DESCRIPTION) New slide. Text, Real Change for the Win! Estimated Financial Impact, up 15%. A photo of a UGA football player kissing the trophy.
(SPEECH) and this is where it all comes down to: what did the priority worklist bring? (DESCRIPTION) Text, CDI impacts by working D R G Priority FY20, July 2019 to May 2020. Admissions reviewed, 71,406, query rate, 26%, agreement rate, 97%, physician response, 1.4 days. (SPEECH) And over that two-year period-- looking back before October 2019 and then forward through 2020 and 2021-- we were able to increase our estimated impact by 15%. And in the period that we looked at, we reviewed 71,000 admissions. Our query rates remained stable around 25% to 26%. And our physician response days are about 1.4 days. (DESCRIPTION) CDI impacts by priority and impact ROI reports FY22, July 2021 through May 2022. (SPEECH) And then, we looked at July 2021 to May 2022, and we reviewed 73,000 admissions. And remember, this is the same number of CDIs reviewing. Again, query rate, 25%, agreement rate stayed stable at 97%, physician response, 1.5 days. (DESCRIPTION) Principal diagnosis impacts 3,134, MCCs added 6,637, CCs added 4,505, procedures added 193, GMLOS days increased, about 4,900. (SPEECH) And now, we are able to say how many principal diagnoses we've impacted, how many MCCs, CCs, and procedures we added. Geometric length of stay-- we increased our geometric length of stay almost 5,000 days. And our estimated impact, about 15%. And I just reran some numbers yesterday and it continues to climb. In my region, we are up over 20% from last year alone. So we are doing excellent with the impact. And (DESCRIPTION) A new slide shows photos of W. Edwards Deming and Nelson Mandela next to quotes. (SPEECH) if I was to say anything, I would say that under our leadership-- Lori, the four of us, plus Pam, our educator-- one of the big things we try to instill in each other and in our team is education. As Nelson Mandela said, "Education is the most powerful weapon which you can use to change the world." And one of my favorite people is W. Edwards Deming.
And he said that "85% of the reasons for failure are deficiencies in the systems and processes rather than the employee." It's usually not the employee, it's our processes. And "The role of management is to change the process rather than badgering individuals." And (DESCRIPTION) To do better. (SPEECH) what a great tool we have here in 3M, with the impact ROI and with the priority worklist, to bring new processes that improve and take away some of the frustrations and the lack of ability to grow. And we have grown. And to sum the whole thing up: Piedmont, real change lives here. (DESCRIPTION) New slide with the Piedmont logo. Text, Real change lives here. (SPEECH) So we are ready for some questions. Awesome! Thank you so much. I love how you tied that in at the end about processes and people. There are a lot of things within our everyday lives where that really rings true about where some of the frustration comes from. And I love that Piedmont is really looking at that head on. So I applaud that. So before we get started, we have a few great questions that have come in. I just want to remind everyone that the certificate of attendance is available in the Resources section for download. Also, on the bottom menu bar, you might also see a little graduation cap. You can also access the certificate of attendance there. And then, also, I did see Linda just put in that question about the slides-- those are also in the Resources section that you can download as well. So please make sure you do. We will be offering this webinar as an on-demand on our website in the next few weeks. Once we get all of that wrapped up, if you do want to listen in again, you certainly can on our website. So (DESCRIPTION) New slide. Text, Q&A. (SPEECH) let's get into the questions that we have. Angela asked, "Do you encounter issues with accounts becoming a priority but now have a longer LOS?
Did that cause any concerns with the CDI specialists?" I can take that one, this is Niki. Absolutely, that's a great question. We encouraged our staff to escalate cases if they had any concerns. We did get cases escalated because they were unreviewed with a long length of stay. Before prioritization, we used to review with length of stay as a sorting feature and then go by the other auto-suggested DRGs. So with this process, it was a big change. We're not able to review all cases-- we just can't review 100% of cases. So there are going to be about 20% or more of cases that we can't get to; we're not going to be able to review those cases. And we want the ones that we're not going to review to have the lowest probability of a query opportunity. So we would take a look and validate, and see if those cases that had a longer length of stay should have been higher up on the worklist. And we would see that those cases typically would have a low priority to review. And when you're looking at cases: is that a medical or surgical case, [INAUDIBLE] or MCC, or is that an optimized DRG with little query opportunity? And what we found is those cases really had very little potential for query opportunities. And those are the ones that we're willing not to have CDI review. We just cannot review all of the cases. So that was a great question. Thank you. Great! Next question is, "Who trained the coders on Impact ROI?" The coders don't use it but the CDIs do. And like I said, in that October meeting where we launched this and turned it on afterwards, I created a PowerPoint using the feature updates from the 20.7 and 20.8 3M documents. I created a PowerPoint, presented that, and then each CDI was emailed that PowerPoint presentation. And they used that education. So that's how they were all educated on how to do those reconciliation steps and continue. Like I said, we continue to do education, ongoing.
But it took probably, I would say, a good three months for people to really get it down. All right, the next question we have is, "What is the time frame for CDS to perform the reconcilia-- excuse me, reconciliation processes?" They have their discharged and ready-for-final list. And it is expected that each CDI has 10 or fewer cases on that. And as managers, we go out and watch to make sure that people are doing their ready-for-final. They reconcile-- I saw another question out there about this-- they reconcile 100% of the cases they review. Of course, the impact tab-- the header only comes up on the agreed and documented queries. But they do reconcile 100% of their cases with coding. And they use the notification process. So at the end of every month, we expect that the previous month's cases will be reconciled, except for a couple of queries that may be pending out there, by the 8th of the following month. For example, I went out on the system's list and looked this morning. And there were only two September cases left. One of them had the query answered just yesterday. So as of right now, there is only one query left that has not been answered from September. But it is expected that they keep them ongoing because this keeps bills going out the door without delays so-- And they do a great job at it. Can you talk about some of the issues or challenges for the CDIs when starting the impact ROI? Well, first of all, just knowing: does my query make an impact? OK, if it did, was it positive or was it negative? And if they misunderstood, they emailed us: was the baseline missing or was it present-- did you start with a UTI and go to a sepsis, and was the UTI coded in the baseline? That kind of thing. And that was probably the first hurdle. And then, once the CDIs realized, hey, this did make an impact-- I'm getting my tab, I'm getting a positive or negative impact.
Then, it just grew from there, and very successfully. And yes, we still troubleshoot problems. What percentage of your population is billed on APR? Also, do your CDIs code the record concurrently? We don't-- we totally bill by DRG. So we do not use the APR DRG. And what was the second part? Do they code concurrently? No, our coders do not code concurrently. Our CDIs do-- they code the cases by using the priority list. But our coders code after discharge. I think they're pretty much at one to two days post-discharge when the coders are coding. Do you-- [CLEARS THROAT] Excuse me, do you use standardized query templates? No, we don't. Our management team, starting somewhere around 2019 or even before, maybe 2018, started developing our own query templates. And every fall-- as it is right now, fall again-- we go through those. We now have 84 templates that we have written. And with every update in the fall, we go through those again, quickly look at them, and make sure that they meet all of the new coding standards. And we will rework words or whatever we need. But like I said, we have written our own. Another question. We probably have time for about two more. Let's go with: can you elaborate on how the CDI queries increased GMLOS? Yes, with the baseline DRG, when you look at that in the header, it will tell you what the geometric length of stay is for that DRG. And when you move from pneumonia to sepsis, it'll show you that sepsis has a longer geometric length of stay. And in SSR, there is a report that you can run on impact. There is a field for the baseline-to-final geometric length of stay change. And that can be pulled into your report. And that is how that is reported. All right, are your-- are Piedmont-- my goodness! Are Piedmont CDIs only Georgia-based? No, we have CDIs all over the nation. We have two in California. Hello, John. Hello, Abby.
And we have-- Diana's up in Iowa, and we've got Aaron and Carrie up North-- we have CDIs in the Midwest. And our director, Lori, is in Florida, where it's nice and warm. So we are all over the nation. The rest of us are mostly in Georgia, but we are all over. Got to love the ability to have remote workers. I don't know what we would do if we didn't have that ability. So I applaud that. Let's go ahead with one last question. Do you have any issue with retrospective queries needing to be sent because accounts weren't re-reviewed? Yes, and we keep an eye on those. We also use the SSR reports-- we look at concurrent versus retrospective queries. And as managers, every month or so, we will run a report to track and trend that. Because if people are not doing their follow-ups, they might end up with a lot of retrospective queries. And we try to keep a handle on that so that those bills aren't delayed and we aren't impeding Piedmont's ability to balance our budget. So yes, we do keep an eye on that. And coding watches that very, very closely as well. (DESCRIPTION) New slide. Text, That's a wrap! (SPEECH) Great! Well, thank you to everyone that did submit a question that we weren't able to get to. I do want to thank our speakers today. One of the questions that was asked was about submitting for CEUs. So you are able to take that certificate of attendance and submit it to one of the accrediting associations to get those. So if you do have any questions about that, there is an email-- you have the ability to email us within that menu bar as well. But you should be able to download that certificate and submit. Again, great presentation from Piedmont Healthcare-- we truly appreciate it. We will have this recording available in the next few weeks on our website if you do want to go ahead and listen in again. If you could, please complete that survey. We always love to hear how we did.
And also, be on the lookout for the final webinar. Gosh, I can't even believe that we're already talking about the end of the year. Our final CDI innovation webinar will be in December. So be on the lookout for that so you can register. So again, thank you, Piedmont, and we look forward to hosting you all again. Thank you so much. (DESCRIPTION) New slide. Text, Thank you. (SPEECH) Thank you. Thank you.

      Webinar title slide

      Piedmont Healthcare: Taking the CDI game to the next level with priority and impact ROI

      • October 2022
      • Starting in October 2020, Piedmont’s clinical documentation integrity (CDI) team implemented 3M™ 360 Encompass™ System’s prioritization and impact ROI features. This allowed the organization to review the most impactful cases, improve documentation and simplify the reconciliation process. By July 2022, Piedmont’s CDI team began a second phase to investigate additional opportunities to improve priority worklists and refine impact ROI.
      • Learn how Piedmont was able to capture an impressive 15 percent increase in impact. In addition, hear from the team that successfully enabled CDI leadership to report increased comprehensive CDI impacts to administration with individual CDI scorecards.
    • (DESCRIPTION) Videos of speakers appear on the left. Slides are to the right. The slides show a webinar template. Text, New year, new webinar platform! A great company is showing what interesting applications a fantastic product can bring for motivated users. 3M CDI Innovation Webinar Series. (SPEECH) Good afternoon and welcome to our August-- I almost said quality. This is the CDI Innovation Series. I'm getting my months mixed up. Welcome, everybody, and thank you for joining. The summer is starting to wind down, and we have kids going back to school. So hopefully everything is good in your world. And we appreciate you joining us today. Just a couple of things before we get started. (DESCRIPTION) Text, 3M Science. Applied to Life. 3M CDI Innovation Webinar Series. Applying compliant guidelines and (SPEECH) We have a great panel today. Just to make sure you know all of the functionalities of the On24 webinar platform: (DESCRIPTION) Text, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources sections. Complete the survey. The information presented herein contains the views of the presenters and does not imply a formal endorsement for consultation engagement on the part of 3M. Participants are cautioned that information contained in this presentation is not a substitute for informed judgement. The participant and/or participant's organization are solely responsible for compliance and reimbursement decisions, including those that may arise in whole or in part from participant's use of or reliance upon information contained in the presentation. 3M and the presenters disclaim all responsibility for any use made of such information. 
The content of this webinar has been produced by 3M. 3M and its authorized third parties will use your personal information according to 3M's privacy policy (see Legal link). This meeting may be recorded. If you do not consent to being recorded, please exit the meeting when the recording begins. (SPEECH) This is a web-based platform. So if you are having any audio issues or issues with any of the engagement tools, make sure you're off of VPN and close out of multiple tabs just to help with the bandwidth on your end. And you can always do a quick refresh of the browser. Chrome is the recommended browser. So if you're in Edge or Explorer, switch on over to Chrome and that could help as well. There are some engagement tools. Please ask questions in the Q&A box. We'll get to as many as we can at the end. There is a resources section. We do provide a certificate of attendance that you can submit for CEUs. We also have the presentation handout there, as well as some other resources about our solutions. If you do need closed captioning, you can turn that on in the media player. Again, that resources section has multiple resources that are available to you. And then at the end, we always appreciate you completing the survey just to let us know how we did. (DESCRIPTION) Text, Meet our panelists. Below are photos of the five presenters. Text, Chris Berg, R H I A, CCS, CCDS-O, CHC. Colleen Deighan, R H I A, CCS, CCDS-O. Audrey Howard, R H I A. Sue Bailey, M.Ed., R H I A, CPHQ. Bobbie Starkey, R H I T, CCS-P, A H I M A. (SPEECH) Another section that we have in that dashboard is our Meet the Speakers section. So if you are interested in learning more about our speakers today, their bios are in that section. So feel free to peruse that as well. So let's go ahead and get things started. I'm going to pass today's session over to Colleen Deighan, who's going to go over the agenda and just what to expect today. 
Again, please feel free to ask questions in the Q&A section of your dashboard, and we'll get to as many as we can at the end. Colleen? (DESCRIPTION) Text, Agenda. Provide an overview of Hierarchical Condition Categories (HCCs). Introduce the Risk Adjustment Data Validation (RAD V) program. Identify the two official coding sources used by RAD V. ICD-10-CM Official Guidelines for Coding and Reporting. American Hospital Association (AHA) Coding Clinic. Discuss the MEAT criteria. Listen to discussion among 3M panel participants. Participate in Q/A with our listeners. (SPEECH) Yeah. Thanks, Lisa. And hello to all the listeners. We appreciate you dialing in today to listen to this topic, and hopefully dialoguing with us on it-- something we talk about internally a lot. So, our agenda today. We're going to provide just a brief overview of HCCs, known as Hierarchical Condition Categories; introduce, maybe for some of you, the Risk Adjustment Data Validation program, or what's called the RADV audit program; identify the two official sources that are used by RADV; discuss the MEAT criteria; and then we're going to have a discussion internally, amongst those of us on the panel. And then, again, we hope all of you will participate in some question and answer toward the third section of our session today. (DESCRIPTION) Text, HCC Model overview. There are two columns below. The left column is labeled, text, CMS HCCs. Under this column, text, Developed by Centers for Medicare and Medicaid Services (CMS). For risk adjustment of the Medicare Advantage Program. CMS also developed a CMS RX HCC model for risk adjustment of Medicare Part D population. Based on aged population (65 and over). Current year data predictive of future year risk. The right column is labeled, text, HHS HCCs. Under this column, text, Developed by the Department of Health and Human Services (HHS). For risk adjustment within the commercial payer population. 
HHS-HCCs predict the sum of medical and drug spending. Includes all ages. Current year data used to predict current year risk. (SPEECH) So let's start with a brief overview before we begin really talking about this. We want to talk about the guidelines and the criteria, but we wanted also to provide just a brief overview, sort of, to set the table. So I wanted to point out that there are two models for HCCs, or hierarchical condition categories, and briefly point out the differences. The original model is the CMS model. It was developed by CMS for use with the Medicare Advantage program. They do have what we call the Part D, or prescription drug, model that can be added as part of this program. As you know, I use the comparison to DRGs: the HHS model-- I'm sorry, the CMS model-- is based on aged population, similar to the DRG model. And this is a prospective payment model. So it's using the current year's data to predict next year or future year risk around disease burden and cost of care. In comparison, the Department of Health and Human Services, known as HHS, did develop a model. It was developed around the Affordable Care Act, actually. And we see it used with commercial payer populations, including state Medicaid and Medicaid HMO programs. That program has a combined medical and drug spending model. And the two big differences between this model and the CMS model: it does include all ages-- so, again, I use the APR DRG comparison to this; all aged patients are included in the HHS model-- and, knowing that this population of patients typically moves, changes jobs, and goes on and off different health plans, the current year's data is used to predict current year risk. So we have a prospective, aged model and an all-age, current-year model. (DESCRIPTION) Text, Risk Adjustment Data Validation program. HHS-RAD V - risk adjustment data validation. Department of Health and Human Services (HHS) operates. 
CMS, on behalf of HHS, performs risk adjustment data validation. Purpose, Ensure the integrity of risk adjustment program. Validate the accuracy of data submitted. Two types of RAD V audits are conducted. Annual national level audits - conducted to estimate the national Medicare Advantage (MA) improper payment rate. Contract-level RAD V audits - conducted to identify and recover improper payments. RAD V recognizes two resources for validating coded data. ICD-10-CM Official Guidelines for Coding and Reporting. Coding Clinic for ICD-10-CM published by the American Hospital Association. (SPEECH) If any of you have been involved in diagnosis capture for HCCs, I think, no doubt, you've heard about the Risk Adjustment Data Validation audit, known also as the RADV program. CMS, the Centers for Medicare and Medicaid Services, conducts various RADV audits to ensure the accuracy and integrity of risk adjustment data that's been submitted by the Medicare Advantage payment programs. RADV is a process to verify that diagnoses submitted for payment by Medicare Advantage organizations are supported in the medical record documentation. So we do have two types of RADV audits that I just wanted to touch on briefly. There are the annual national level audits and the contract level audits. The national audits are meant to estimate the national Medicare Advantage improper payment rate, while the contract level audits are conducted to identify and recover improper payments. So as part of the RADV, one of the really important components that we wanted to talk about today is that CMS, as part of RADV, states that the medical records must meet, at a minimum, five requirements to avoid what they call discrepant findings. One of those five requirements is listed here at the bottom of this page. 
So according to the RADV, when an audit is performed, the claims have to be coded according to the official conventions and instructions provided within the ICD-10-CM code book, along with the official guidelines for coding and reporting and guidance that's provided by the American Hospital Association, known as the AHA Coding Clinic for ICD-10-CM, which is published quarterly by the American Hospital Association. And Bobbie, I'm going to turn it over to you so you can discuss these guidelines in a little more detail. Thank you so much, Colleen. (DESCRIPTION) Text, Accurate coding and reporting of outpatient services. ICD-10-CM Official Guidelines for Coding and Reporting. Section 1 Conventions, general coding guidelines and chapter specific guidelines. Section IV Diagnostic Coding and Reporting Guidelines for Outpatient Services. J. Code all documented conditions that coexist. Code all documented conditions that coexist at the time of the encounter/visit that require or affect patient care, treatment or management. Do not code conditions that were previously treated and no longer exist. However, history codes (categories Z80-Z87) may be used as secondary codes if the historical condition or family history has an impact on current care or influences treatment. A H A Coding Clinic Central Office. Serves as the official coding clearinghouse on the proper use of the ICD-10-CM, ICD-10-P C S, and HCPCS Level II classification systems. Provides coding advice regarding the proper application of these systems using the Alphabetic Index, Tabular List, Official Coding Guidelines, and A H A Coding Clinic advice. (SPEECH) So as Colleen mentioned, there are two resources that RADV utilizes for coding validation. The first one is the ICD-10-CM Official Guidelines for Coding and Reporting. And the sections of these guidelines that focus on outpatient coding would be section I, which is the conventions. 
So the symbols, how to use the index-- those would be listed under conventions. It includes the general coding guidelines, for example, coding to the greatest level of specificity, coding manifestation or complication codes. And then your chapter specific guidelines, which would be, like, diagnosis guidelines specific to COVID in chapter 1, or guidelines specific to pregnancy coding in chapter 15. And then the other section that refers to outpatient is section IV. And in this section, they talk about outpatient services, and they state that this includes hospital outpatient services, as well as physician outpatient services. So for physicians, their office visits would be considered outpatient services, and for the hospitals, it could be same-day surgeries, the emergency department. It could be your ancillary tests, wound clinic, outpatient hemodialysis centers. So all of these fit under the outpatient services umbrella. They don't differentiate between HCCs and non-HCCs; all outpatient services fall under these guidelines. And the one guideline that I want to specifically address is the letter J guideline under Section IV, and it's very important that you understand this guideline. The guideline states to code all documented conditions that coexist at the time of the encounter or visit that require or affect patient care, treatment or management. So not just coexist at the time, but they also have to require or affect patient care, treatment or management. It goes on to say that personal history and family history conditions can be reported if they have an impact on current care or influence current treatment. So it's kind of vague to me. It tells me they have to affect or require care, treatment or management on this visit, but it doesn't specify what that looks like. So that leads us to the second resource that RADV utilizes, and that would be the AHA Coding Clinic advice. (DESCRIPTION) Text, Coding Clinic advice. 
Coding chronic conditions for outpatient encounters. ICD-10-CM/P C S Coding Clinic, Third Quarter ICD-10 2019 Pages: 5-6 Effective with discharges: October 1, 2019. Question, A patient presents as an outpatient for hernia repair surgery. The provider notes "Crohn's disease," in the past medical history and indicates the patient is taking an immune modulating drug for the condition. Per the Official Guidelines for Coding and Reporting, Section IV.I: Chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition(s). Additionally, section IV.J states: Code all documented conditions that coexist at the time of the encounter/visit and require or affect patient care, treatment or management. Although the patient did not receive treatment during the current encounter, is it appropriate to report the Crohn's disease as an additional diagnosis? Answer, In the outpatient setting, chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition(s). Based on the documentation submitted, the provider has specifically stated that the patient is receiving treatment for the Crohn's disease. Although the patient is not receiving treatment during the current encounter, the patient is receiving interval treatment; therefore, Crohn's disease should be coded and reported. The ongoing treatment does not need to occur during this encounter. The fact that the patient is undergoing treatment for Crohn's disease affects patient care and management. (SPEECH) And there are a couple of Coding Clinic advices that I want to specifically go over with you today. The first one is in regard to coding chronic conditions for outpatient encounters. This advice was published third quarter 2019. It's in regard to a patient that comes in for a same-day surgery visit for a hernia repair. 
And the question is, the patient has Crohn's disease listed in their past medical history. The physician indicates that the patient is taking an immune modulating drug for the Crohn's. Can this Crohn's disease be reported for this same-day surgery visit for hernia repair? And the answer that Coding Clinic gives is, in the outpatient setting, chronic diseases treated on an ongoing basis may be coded and reported as many times as the patient receives treatment and care for the condition or conditions. That is actually letter I in section IV of the outpatient coding guidelines, which we didn't discuss. But they go on to say, based on the documentation submitted, the provider has specifically stated the patient is receiving treatment for the Crohn's disease, and therefore it impacts care and management for this same-day surgery visit. So it would be reportable even though the patient's not getting that treatment on this visit. Unfortunately, we aren't privy to the documentation that Coding Clinic reviewed to come to this determination. So it's unclear how that physician specifically stated the patient was undergoing treatment for the Crohn's disease. So now we're going to move on to another Coding Clinic that is in regard to reporting additional diagnoses for outpatient. It came out third quarter 2020. And in this one, the question was, a patient presents to the emergency department, another outpatient setting. And they were there for a symptom. But in the past medical history, the provider also documented some behavioral health conditions, and the patient had antipsychotic drugs listed in their medication list. So they're asking the question, can these conditions be reported because the patient's currently on antipsychotic medications? In the previous Coding Clinic, the physician specifically stated that the patient was taking the immune modulating drugs and they impacted care. 
In this case, the provider has listed antipsychotic medications in the medication list and the chronic condition in the past medical history. Coding Clinic's response is no. Those mental disorders were not treated during this encounter, nor was there any documentation that these conditions affected patient care, management or treatment. The provider has to indicate that these conditions or any other conditions-- so this doesn't just apply to behavioral health conditions-- affected the management of the patient during the current visit. Those disorders would not be coded and reported. So this response is telling me, OK-- it doesn't tell me what support does look like, but it tells me it doesn't look like a condition only listed in a past medical history and a medication for that condition listed in the medication list. (DESCRIPTION) Coding Clinic advice. Reporting additional diagnoses in outpatient setting- clarification. ICD-10-CM/P C S Coding Clinic, Third Quarter ICD-10 2021 Pages: 32-33 Effective with discharges: September 20, 2021. Reporting Additional Diagnoses in Outpatient Setting. Question, Coding Clinic, Third Quarter 2020, page 33, advised against assigning a code for the patient's mental health conditions since the provider did not document that the conditions affected patient care and management. It was also noted the patient was currently on antipsychotic medications for their chronic mental health conditions. This advice seems contradictory to Coding Clinic, Third Quarter 2019, pages 5-6, where a code for Crohn's disease, a chronic autoimmune disorder, was assigned for a patient on interval immune modulating drugs to treat the condition. Coding Clinic established that ongoing treatment did not need to occur during the encounter; the fact that the patient was undergoing treatment affected patient care and management. 
It seems as though ongoing treatment with antipsychotic medications constitutes affecting patient care and management. Would the advice in Coding Clinic, Third Quarter 2020, change for conditions that have potential to exacerbate during care, such as autism or schizophrenia? Please provide clarification on the coding of chronic conditions in the outpatient setting. Answer, Coding professionals should not assign codes based solely on diagnoses noted in the history, problem list and/or a medication list. It is the provider's responsibility to document that the chronic condition affected care and management of the patient for that encounter. In the case published in Coding Clinic, Third Quarter 2019, pages 5-6, the provider specifically stated that the patient was receiving treatment for Crohn's disease. When the provider documents that a patient's condition or treatment thereof affects care and management for the current encounter, the condition should be reported even if treatment did not occur during the encounter. In the case published in Coding Clinic, Third Quarter 2020, page 33, codes were not assigned for the mental health conditions, since there was no provider documentation that the mental health conditions or their treatment affected patient care and management for the current encounter. If the medical record is unclear or ambiguous regarding which condition(s) affected patient care and/or management of the patient, query the provider for clarification. (SPEECH) So then, I'm still unclear on what it looks like, and I'm guessing I'm not the only one because third quarter 2021, Coding Clinic puts out a clarification for the two prior coding clinics. Someone wrote in and said, can you clarify what you talked about in these previous two coding clinics? What constitutes support for these conditions? 
And the answer the Coding Clinic gave was coding professionals should not assign codes based solely on diagnoses noted in the history, problem list, and/or a medication list. It's the provider's responsibility to document that the chronic condition affected care and management of the patient for that encounter. So again, they're really stressing what does not support coding those chronic conditions. (DESCRIPTION) Text, Coding Clinic advice. Hierarchical Condition Category (HCC) Coding- clarification. ICD-10-CM/P C S Coding Clinic, Second Quarter ICD-10 2022 Page: 30. Effective with discharges: June 3, 2022. Hierarchical Condition Category (HCC) Coding. Question, Is the advice published in Coding Clinic Third Quarter 2021, pp. 32-33, related to reporting additional diagnoses in the outpatient setting only if the chronic condition affected care and management of the patient for that encounter applicable to coding for hierarchical condition categories (HCC) for risk adjustment? Answer, The Coding Clinic advice that additional diagnoses in the outpatient setting must affect care and management of the patient was related to the coding for a single specific encounter in time. Coding for risk adjustment, such as for HCCs, involves the collection of known current chronic conditions over the course of a year. A patient's chronic condition would be captured for HCC coding from other visits, encounters, or hospitalizations when the chronic condition affected care or needed management. (SPEECH) And then in June of this year, we had two new Coding Clinics come out. The first one is actually a clarification for HCC coding, but it refers back to those same Coding Clinics that we just looked at. And the question asks about that previous Coding Clinic clarification regarding coding secondary diagnoses when they're documented in the problem list or the medical history and the medication list. 
Does that apply also to encounters for hierarchical condition category or risk adjustment coding? And the answer that Coding Clinic gives is that the advice that additional diagnoses in the outpatient setting must affect care and management of the patient is related to the coding for a single specific encounter in time. They say coding for risk adjustment such as HCCs is over the course of a year. So if that patient has a visit every month for that year, so 12 visits, they can take an HCC for RAF reporting calculation from any one of those visits. But the visit that it comes from needs to be supported. On that visit, the chronic condition needed to affect care or needed to impact management or treatment. (DESCRIPTION) Text, Coding Clinic advice. Reporting additional diagnoses in outpatient setting- clarification. ICD-10-CM/P C S Coding Clinic, Second Quarter ICD-10 2022 Page: 30 Effective with discharges: June 3, 2022. Reporting Additional Diagnoses in Outpatient Setting. Question, We disagree with advice published in Coding Clinic Third Quarter 2020, page 33, regarding not coding a mental disorder during an emergency department (E D) visit for an unrelated condition because the mental disorder was not treated during the current encounter, nor was there any documentation that the condition affected patient care or management. We are requesting clarification of this advice as it appears to conflict with existing outpatient guidelines. Answer, The advice published in Third Quarter 2020 does not conflict with Official Guidelines for Coding and Reporting (Section IV.J) as it utilized the same verbiage as the guideline that states "Code all documented conditions that coexist at the time of the encounter/visit and require or affect patient care, treatment or management." (SPEECH) And then again in June 2022, another Coding Clinic question. 
Someone wrote in that they disagree with the advice regarding the ED visit with the mental disorders, and they say that they felt that by not coding those they were going against coding guidelines. And in the answer that Coding Clinic gives, they refer that question back to the Official Guidelines for Coding and Reporting, Section IV.J, as it utilizes the same verbiage as the coding guideline-- code all documented conditions that coexist at the time of the encounter or visit and require or affect patient care, treatment or management. So we're seeing this consistency from Coding Clinic even though they haven't specifically told us what the documentation needs to look like to support this. (DESCRIPTION) Text, Documentation examples. Assessment/Plan: Patient was seen today for annual exam. Diagnoses and all orders for this visit: 1. Essential hypertension. 2. Hyperlipidemia, mixed. 3. CKD. 4. Acquired hypothyroidism, unspecified. Current Outpatient Medications Ordered in Epic. 1. aspirin 81 MG EC tablet TAKE 1 TABLET BY MOUTH ONCE DAILY 90 tablet. 2. atorvastatin (LIPITOR) 40 MG tablet TAKE 1 TABLET (40 MG) BY MOUTH EVERY DAY. 3. cholecalciferol (CHOLECALCIFEROL) 1000 unit tablet Take by mouth. 4. gabapentin (NEURONTIN) 100 MG capsule Take 100 mg by mouth 3 (three) times daily. 5. hydrochlorothiazide (HYDRODIURIL) 25 MG tablet TAKE 1 TABLET (25 MG) BY MOUTH EVERY DAY. 6. levothyroxine (SYNTHROID) 112 M C G tablet Take 1 tablet (112 mcg total) by mouth once daily Take on an empty stomach with a glass of water at least 30-60 minutes before breakfast. 7. lisinopril (ZESTRIL) 40 MG tablet Take 1 tablet (40 mg total) by mouth once daily. Assessment/Plan: Patient was seen today for annual exam. Diagnoses and all orders for this visit: 1. Essential hypertension - controlled. Continue Lisinopril. Comprehensive Metabolic Panel (CMP). 2. Hyperlipidemia, mixed. Lipid Panel W/Reflex Direct Low Density Lipoprotein (LDL) Cholesterol. 3. CKD - following with Nephrology every 6 months. 4. 
Acquired hypothyroidism, unspecified - stable, normal lab a month ago. Hair loss has stopped. levothyroxine (SYNTHROID, LEVOTHROID) 112 M C G tablet; Take on an empty stomach with a glass of water at least 30-60 minutes before breakfast. (SPEECH) So I'm going to look at two documentation examples with you. These are both clinic office visit notes. And I'm going to preface this because these are examples, and so I put what I thought was pertinent on the screen. We don't have the whole note. So on the left hand side, this patient was seen for an annual exam. We're going to pretend that all the doctor put in the rest of his note was making sure the patient is up to date on all her annual screenings. So did she have her mammogram? Did she have her colonoscopy? Is she at risk for osteoporosis? Those types of questions and [INAUDIBLE] were what was on the rest of the note. But in his assessment and plan, this is what he documented, and then the med list, this is what is documented. So we have to pretend that's it. On the right hand side, I want you to know that the support for those chronic conditions doesn't have to be in the assessment and plan. It can be anywhere in that record or in that visit note. So for example, an ED note. A lot of times the ED providers will put one diagnosis in their final assessment, and then in their-- sometimes they call it an ED course note, sometimes they call it an ED rationale note-- whatever they call it, they summarize other conditions in that note, and they document we're going to give her IV fluids for dehydration or things like that. So additional information can be pulled from anywhere in that visit note; that specific visit documentation can be used. It doesn't have to be in the assessment and plan. These are just what we're using for examples. So on the left hand side, you can see that there are four diagnoses listed, chronic conditions. 
And the medication list is there, and there are medications for those conditions. So Coding Clinic has made it clear in their advice that it's not acceptable to code from a past medical history and a med list. Now, is it acceptable to code from an assessment and plan and the med list when that's all that is documented, when there's nothing else documented here? They say that the physician has to specifically document how that condition impacted the current stay and treatment, and I don't see that on the left hand side. On the right hand side, this is excellent documentation. We have the same four chronic conditions. Essential hypertension, he notes that it's controlled. He wants the patient to continue their antihypertensive medication, and he's ordering a CMP for the hypertension. The mixed hyperlipidemia. He's ordering a lipid panel. And if you look above essential hypertension, it says diagnoses and all orders for this visit. So those tests were ordered on this visit. He's assessing that condition. Number three, CKD. He notes, following with nephrology every six months. He's not treating the CKD, he's treating the hypertension. But he's monitoring the CKD, stating someone is following the CKD. For him to follow the hypertension, he needs to know that that CKD is also being addressed. So that would support the CKD. Number four, acquired hypothyroidism. He notes that it's stable, the patient had a normal lab a month ago, the patient's hair loss has stopped, and he links the medication to the hypothyroidism. So to me, this supports what Coding Clinic advice and coding guidelines are saying: the physician has to specifically show how those chronic conditions impacted the stay. Now, we still don't have any official guidance. But I'm going to turn this over to Chris now, and she's going to talk about a tool that might help us determine what constitutes support that a condition was assessed or required treatment during the specific encounter. Chris. Well. 
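The reporting rule discussed above can be summarized as a small decision sketch. This is only an illustration of the Coding Clinic logic, not an official tool; the function and field names here are hypothetical.

```python
# Toy screen for the Coding Clinic rule: a chronic condition is NOT
# reportable for an outpatient encounter when it appears only in the
# past medical history, problem list, and/or medication list; the
# provider must document that the condition (or its treatment) affected
# care and management of THIS encounter.

# Sections that, on their own, never support reporting per the
# Third Quarter 2021 clarification.
INSUFFICIENT_ALONE = {"past_medical_history", "problem_list", "medication_list"}

def is_reportable(documented_in: set[str], provider_states_impact: bool) -> str:
    """Return 'report', 'query', or 'do not report' for one encounter."""
    if provider_states_impact:
        # e.g., "patient is receiving treatment for Crohn's disease"
        return "report"
    if documented_in and documented_in <= INSUFFICIENT_ALONE:
        # Condition appears only in history/problem/med lists.
        return "do not report"
    # Documentation is ambiguous about impact on care/management.
    return "query"

# The 2019 Crohn's case: provider stated ongoing treatment.
print(is_reportable({"past_medical_history", "medication_list"}, True))   # prints report
# The 2020 ED mental-health case: lists only, no stated impact.
print(is_reportable({"past_medical_history", "medication_list"}, False))  # prints do not report
```

The "query" branch mirrors the advice to query the provider when the record is unclear about which conditions affected patient care or management.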
Thank you, Bobbie. So moving on to the MEAT criteria. (DESCRIPTION) Text, MEAT criteria. Where does MEAT criteria come into the picture? Monitor - signs, symptoms, disease progression, disease regression. Evaluate - test results, medication effectiveness, response to treatment. Assess - ordering tests, discussion, review records, counseling. Treat - medication, therapies, other modalities. (SPEECH) And MEAT is an acronym that I use when I am educating both physicians and coders in CDI. The letters in MEAT are monitor, evaluate, assess, and treat, and there are examples of each within this slide. I use this as a reference tool when educating physicians on what they need to document to support a diagnosis in the note, but I also use it as an education tool for coders in CDI, and auditors too, for what to look for within the documentation to support the application of official ICD-10-CM coding guidelines. So going back to what RADV looks at, the auditors in RADV do use the ICD-10 official coding guidelines for coding and reporting HCC diagnoses. So at this time, I will give it over to Sue, I believe. OK. Thank you, Chris. (DESCRIPTION) Text, Question, Can you talk about the hospital inpatient guidelines for coding and reporting and how they differ from outpatient guidelines? (SPEECH) So we're going to spend the next few minutes talking with our panel here about what we feel are some pertinent questions and discussion topics. This first one I'm going to direct toward Audrey. Audrey, can you talk about hospital inpatient coding guidelines and how those differ from the outpatient guidelines that we've talked about today? Yes. Thank you very much. The main part of it is very similar: the guidelines for coding secondary diagnoses, those chronic conditions, or anything that's not the principal diagnosis, are basically on the same thinking for the inpatient and the outpatient. 
In other words, we need to make sure that it's a reportable diagnosis. And a reportable diagnosis means that it had a clinical evaluation, therapeutic treatment, that there was a diagnostic procedure performed toward it, that it increased nursing care, or that it increased length of stay. And it's not that all five of those reporting criteria are necessary. It's that at least one of them was done for that condition, which would make it a reportable secondary diagnosis. Where it differs a little bit from the outpatient setting is that, where Bobbie was expanding on it saying it's not just that a medication is prescribed for that condition, in the inpatient setting, it's did they receive the medication? There's obviously a longer length of stay, or there's more time in the inpatient setting, and they will be getting that medication, maybe for the congestive heart failure, for the atrial fib. And that supports the utilization of resources toward that condition, so we can pick it up then as a secondary diagnosis. In the inpatient setting, we know that we can pick up possible or probable diagnoses if documented as an uncertain diagnosis at the time of discharge. In the outpatient setting, however, you can only code to the highest degree of certainty. So if they're saying chest pain possible-- why am I blanking on possible causes of chest pain? But if they're saying possible heart attack, then that could be coded in the inpatient setting as long as it's documented at the time of discharge. However, in the outpatient setting, you would just be coding the chest pain as the final diagnosis. Also, one other big difference between the inpatient and outpatient setting is that in the outpatient setting you can code from a diagnostic study. The impression. The findings from a diagnostic study. However, in the inpatient setting, you need to get that information confirmed by a hands-on physician, if you will. 
So the pathology report needs to have the diagnosis from the pathology in the body of the record. The discharge summary. What were the results from the pathology? What were the results from any of the radiological tests that were done? The physician in the inpatient setting needs to confirm that information in the body of the record. Bobbie, anything else to add on this question? No. I think you touched almost everything. The only thing I might add is just to note that for outpatient visits, sometimes symptoms are going to be appropriate instead of an actual condition, because the patient may not have a confirmed diagnosis at the time they're discharged from an outpatient visit. But other than that, yes, you touched it. Yes. OK. Thank you, ladies. Moving on. Our next question. This one-- I'm sorry. (DESCRIPTION) Text, Question, How would you contrast and compare ICD-10-CM Official Coding Guidelines and MEAT criteria? (SPEECH) I think it didn't advance. So this one, I'm going to direct toward Colleen. Colleen, how would you contrast and compare ICD-10-CM official coding guidelines and MEAT criteria? Thanks, Sue. So Chris touched on this a little bit just a few slides ago. And what I would add for starters is that there's CMS and there's the National Center for Health Statistics. These are two departments within the federal government's Department of Health and Human Services. And they are the ones that provide the ICD-10 guidelines as a set of rules developed to accompany and complement the conventions and instructions that we talked about that are within ICD-10 itself. I also want to point out that adherence to these guidelines is required under HIPAA, and that these guidelines have been adopted under HIPAA for all health care settings. So as coding professionals and documentation integrity professionals, we live and breathe these guidelines: the four sections, plus the POA guidance that's added if you're on the inpatient side. 
And we reference them often as the official rules or regulations around coding and reporting. So MEAT criteria to me then-- as Chris mentioned, it's really how I apply these guidelines. Bobbie gave you some of the examples where they talk about a condition having to be monitored or having to impact the treatment. But they don't always give you the specifics. So the MEAT criteria is a good tool for applying those guidelines, as Chris mentioned, when you're educating providers on what needs to be in the documentation to support a condition. We always tell them, whatever you're thinking, write it down. And they don't always do a great job of that. And it's really teaching them, whatever you're thinking about the patient, write it down. And that helps to support HCC capture compliantly, as Chris mentioned as well. It also supports education to coding staff and CDI staff around the HCC methodology, as well as when they are validating conditions that have been reported. Our job as coding and documentation professionals, and the CMS requirement, is that we send out an accurate claim. So it's really important that we follow the guidelines, and then utilize the MEAT criteria to apply those guidelines. Chris, anything you'd add to that? The only thing that I would add is to reiterate the importance of using the official coding guidelines, and then the references available to us, such as Coding Clinic, that guide us in reporting diagnoses in the outpatient setting. Thank you. OK. Thank you. (DESCRIPTION) Text, Question, When is it appropriate to query the provider for clarification of an HCC diagnosis? (SPEECH) So Chris, we'll pick up with you again. When is it appropriate to query the provider for clarification of an HCC diagnosis? OK. So in the outpatient setting, specific to office clinic visits, queries can be sent prospectively, concurrently, or retrospectively. 
So many outpatient CDI professionals are reviewing that record before the patient comes into the office for their appointment. And they may send a prospective query, or what is sometimes called a nudge-- you may have heard it as a nudge or a notification-- to the provider for clarification. A prospective query, or any query in the outpatient setting, or a nudge, should be compliant as well as non-leading, just as on the inpatient side. A great reference for queries is the 2019 position paper from ACDIS and AHIMA titled "Guidelines for Achieving a Compliant Query Practice." This is a great tool that both outpatient and inpatient CDI programs can use when developing query policies and procedures. So moving on to the question at hand, there are several reasons to query a provider regarding an HCC diagnosis, and I have just a few here. One is when there are clinical indicators of an HCC diagnosis, but no documentation of the condition in the notes. An example of this is when you're reviewing the note and you see that a BMI over 40 is documented. You see that within the physical exam the provider notes the patient is obese. And then he says in the note that he has discussed dietary lifestyle changes and increasing exercise. But he does not document morbid obesity. So there is an opportunity there to send a query for clarification. Another reason would be when clinical evidence is found for a higher specificity of an HCC diagnosis, avoiding that unspecified ICD-10 code. An example of this would be our diabetic patients that are coming into the office. They may have labs done before coming into the office, and that lab value, the blood glucose, is over 125. So do we have an opportunity there, if it was not documented by the physician, for diabetes with hyperglycemia? The third reason that I have is when there's a question of cause and effect. That relationship between two conditions that are documented within the note. 
We need to get clarification on that. And then there's when treatment is documented in the notes, but there's no documentation of the diagnosis associated with that treatment. So if a physician is adjusting medications for a specific diagnosis, but doesn't link the treatment to the diagnosis, we have an opportunity there. And again, these are just a few reasons for wanting to query or nudge a provider in the outpatient office setting. And I wanted to pull in Audrey for her take on queries. Do you have anything to add from an inpatient perspective? Thank you, Chris. It's really similar from the inpatient perspective. It's really when you need to get that clarification. When there is an indication that a diagnosis is present or that a condition was being treated, but there's no actual diagnosis documented by the licensed provider, we need to get that clarification. Or even on the other side of it, sometimes the licensed provider will document a diagnosis but you are not seeing the evidence that it was clinically significant for the current encounter. So you may need to get that clarification just to say, you've documented this diagnosis, but please provide additional documentation to confirm the diagnosis as evidenced by. In the inpatient setting, there are two requirements for a diagnosis to be added as a secondary diagnosis. One, that it's documented by the licensed provider. There are some exceptions to that, but the majority of all of your diagnoses need to be documented by the licensed provider. And two, that you can verify that it meets the reporting criteria of meeting one of those five criteria: evaluated, monitored, treated, increased nursing care, or increased length of stay. If either one of those two requirements is not met, then that's your good query opportunity to not just say, oh, I'm not going to code it. 
But to say, hey, I need to go back and get either the diagnosis documented, or get the evidence that the condition was clinically significant or clinically valid. You want to get that documentation. Sue, are you on mute? We might have lost you. Sorry, I was. There you are. Yes. Here. I'm here. This last question is for Bobbie. Bobbie, for outpatient services, are chronic conditions able to be coded from an anesthesiology pre-procedural assessment? (DESCRIPTION) Text, Question, For outpatient services are chronic conditions able to be coded from an Anesthesiology pre-procedural assessment? (SPEECH) OK. I'm not going to say yes and I'm not going to say no. What I will say is, we know that on an anesthesiology note, some of those chronic conditions definitely would impact the treatment by the anesthesiologist. So patients with COPD or sleep apnea-- it could impact what type of airway they use. It could impact their ASA score. It could impact the type or amount of anesthesia provided. The time that the patient needs to be monitored both during and after anesthesia. So we know that those things impact what the anesthesiologist is going to do. But we need to make sure that our documentation in that pre-procedural assessment supports that. So when you look at your pre-procedure assessment from anesthesiology, is he documenting a list of past medical history conditions and a list of medications? Or is he specifically documenting that those conditions are impacting care? What is he doing differently for the COPD? So when you look at your documentation, if it mirrors coding guidelines, if you apply MEAT to it, and you say, yes, I do have support here, then I would say report the chronic condition. If it's not mirroring coding guidelines, or if you're saying, well, it's listed in a history list, and the patient's on medication, they're on home oxygen. 
That does not meet the guidance that we've been given and the coding guidelines for reporting those additional chronic conditions. So in that case, I would be hesitant to report those. Sue, anything to add? Or since you've been asking us all the questions, do you have anything you want to talk about as far as outpatient coding guidelines and reporting for HCC diagnoses? Well, thank you for your response about anesthesiology. I've been thinking about the fact that we're on the eve of the 40th anniversary of the inception of inpatient prospective payment and the associated DRGs. And that's such a long time. Because of that, over the last almost 40 years, we've seen this continual expansion of the official coding guidelines every October 1st, and during the pandemic, in between as well. And we've seen volumes of Coding Clinic advice, as well as the advent of clinical documentation improvement, to help us all achieve accurate coding and reporting, which in turn supports accurate DRG assignment and reimbursement. I think we haven't seen as much growth with regard to coding guidelines and advice in the outpatient setting over the same time period until much more recently. And I think this is organically occurring now. We're starting to see more guidelines and more conversation about proper and accurate reporting of outpatient diagnoses because, one, we're seeing more care delivered in the outpatient setting. Just think about the fact that joint replacements can now be performed in the outpatient setting. Who would have ever thought that? And two, we're starting to see more reimbursement being tied to diagnosis coding in the outpatient setting. For example, the HCCs we've been talking about today. Compared to the inpatient setting, payment may have been more linked to or driven by procedure in the outpatient setting. So with the linkage to diagnoses, I think we're going to continue to see growth in coding guidelines. 
And advice on how to support outpatient coding, as well as the best approaches to achieve documentation that supports these initiatives as the industry figures out outpatient clinical documentation improvement. So I think the growth in this area will be analogous to what we've seen in the past 40 years with our inpatient prospective payment system. People may wonder why we are talking about this today, and it's because of these situations. In our consulting practice, it's the kinds of questions that we get asked by our customers and the types of reviews, audits, and outpatient CDI programs they ask us to help with, because they are really attuned and focused now on that outpatient setting. So we need this infrastructure of guidelines and advice to help us achieve all of the benefits we have with inpatient prospective payment. So that's what I was thinking about, Bobbie. Colleen or anyone else, do you have any more thoughts that you'd like to bring up today before we move into Q&A with the audience? Yeah. This is Colleen. Just building on what you were saying. So 40 years ago, the inpatient prospective payment system. And then we saw ambulatory payment classifications 22 years ago, in the year 2000, which was also a prospective payment method for hospital outpatient services. The physician side has really remained fee for service. So we see again that shift from inpatient to outpatient, and the shift from fee for service into these prospective payment models. When you think about HCCs, it is a prospective payment model. And when you think about population health-- the continuum of care, wherever that patient might receive care in a given year-- our role, the provider's role, documentation integrity's role, are all around telling the accurate story of the patient's encounter. So when we think about population health, we used to just take care of sick people. Right. And now we're being asked to manage populations of patients. 
So this disease burden, these chronic diseases these patients are expected to have throughout the course of the year or until their death, is a big focus around getting this right and getting proper payment prospectively-- predicting the cost of care for this population. And remember, disease progression is expected in this population. So in the example of COPD, or diabetes, or CKD, those diseases will progress. And even with something like COPD or heart failure, when does chronic respiratory failure come into play as this patient's disease progresses? So all those elements of population health are really important to think about. And I think I'll close with the Department of Justice. There's public information around this: you can search the Department of Justice website and see allegations and settlements. Very large settlements, including corporate integrity agreements, are happening in the fraud and abuse arena of HCCs. So again, that's the purpose of RADV-- to uncover this. So always keep that idea of compliance. It's interesting if you go out and read some of the fraud and abuse elements of the Medicare Trust Fund related to Medicare Advantage and this prospective model. Thank you, Sue, for asking. (DESCRIPTION) Text, References. CMS Medical Record Reviewer Guidance (available on CMS website). Hyperlink, text, https://www.cms.gov/Research-Statistics-Data-and-Systems/Monitoring-Programs/Medicare-Risk-Assessment-Data-Validation-Program/Other-Content-Types/RADV-Docs/Medical-Record-Reviewer-Guidance.pdf. 2019 HHS-RADV White Paper (available on CMS website). Hyperlink, text, https://www.cms.gov/files/document/2019-hhs-risk-adjustment-data-validation-hhs-radv-white-paper.pdf. Hyperlink, text, ICD-10-CM Official Guidelines for Coding and Reporting FY 2022 (available on CMS website). 
FY2022 April 1 update ICD-10-CM Guidelines (cms.gov). Text, AHA Coding Clinic (available on the AHA website). Hyperlink, text, AHA Central Office AHA Coding Clinic (codingclinicadvisor.com). (SPEECH) Well, thank you everyone for your thoughts today. So we have time to take some questions that have come in from our audience. So Bobbie, I think I'm going to ask you to tackle this. And everyone else, chime in if you have additional thoughts. But this listener asks the following question. (DESCRIPTION) Text, That's a wrap! (SPEECH) If a provider lists a diagnosis in the assessment but does not document treatment of the condition, how is a coder supposed to know the documentation that the physician added in the assessment doesn't affect patient care? The physician puts it in the assessment for a reason. (DESCRIPTION) Text, Q&A. (SPEECH) So I would have to say, how do you know that it does affect care? He put it in his assessment. But Coding Clinic advice tells us that he specifically needs to state that. So just documenting that condition in the final assessment or final impression doesn't tell us what he did. Did he order tests? Did he address the patient's medications? Make a change? Send in a prescription for refills? What did he do to support that chronic condition? So I would say, if you apply MEAT criteria and you can't see that, then you would need to work with your physicians to get that clarified. And in the outpatient setting, that's not going to be easy to do. Anybody have anything to add to that? Yeah, I think-- this is Sue. Another way to look at it is, if it's an HCC diagnosis, the provider, the group, et cetera, is going to receive some payment-- future payment-- to take care of these patients if they're providing care for these conditions. So the physician, in my mind, has to earn that right. He or she has to document what they are doing with regard to this condition and the patient. 
And if that isn't evident, or it's just mentioned casually and there's no evidence, it shouldn't be reported. Or if, as Bobbie said, you really think it's making an impact, then the physician would need to be queried to update their documentation to support that. OK. So let's move on to another question. This question-- and I'll just open this up to the group-- says, I understand about the diabetes and similar diagnoses. Blindness and low vision are frequently listed in past medical history. A patient may have trouble reading a prescription bottle or being able to get to an appointment. Would you code this? So it kind of goes-- this is Colleen-- along the social determinants of health kind of logic, too. So the fact that the patient has some form of reading disability-- it's sort of like a catalog sometimes of conditions. But we really need our physician or advanced practice provider to indicate how that affected the care and treatment. We can't assume that it affected the care and the treatment. I'll use my mother as an example-- she's an elderly person, and I direct her care, because she couldn't do a lot of the things that are sometimes asked of her. So I think it just depends on the situation. And we really need our provider to help tell that story of how it impacted the care. OK. Thank you. I think this next question I'm going to direct toward you, Bobbie. In terms of coding from areas other than the assessment and plan, is it appropriate to code from the narrative in the report? I have been given to understand that it's not. And I think, Bobbie, you touched a bit on this in some of your earlier remarks. Yes. So I know-- and I can't remember the specific reference-- but for HCC outpatient coding, you can use that entire note, as long as it's documented by the provider. 
Now we need to be careful, because with our EHRs, sometimes things get pulled in from nursing or from other screens where it's not documented by the physician. So I would be careful with those types of things-- problem lists, medication lists. But especially if a doctor is putting in a narrative-- and the ED was an example. A lot of times they will type a note with their medical decision making-- why they're doing what they're doing, what tests they're running-- ordering, why they're doing a CT scan. And that's a great place to support coding additional diagnoses, whether they're chronic or not. So yes, you can use the entire note as long as it's documented by the provider. OK. Thank you. This question I'm going to open up to the group. If a patient has diabetes mellitus, hypertension, and chronic kidney disease, and the physician documents that the chronic kidney disease is due to hypertension, how would this be coded? And I know internally we've all kind of talked about this situation. I think what this question may be asking is, what if the physician says the CKD is due to the hypertension, but they don't treat, or they're not addressing, the CKD? What do you do? That's what I think this questioner was trying to get at. I will say this is one of the things that we have questions about, and we are hoping that Coding Clinic will address it. They're saying that you cannot report chronic conditions if they're not specifically documented as impacting care or requiring treatment. So what if, in those cases, you cannot report the CKD? It's not being evaluated. Is the coding guideline that says they both exist, so the "with" criteria is met, and you can report both-- is that enough to report both? Or do we now need to look at that differently as well? The same with some of the other conditions. So like a wound clinic visit where the patient has paraplegia-- does the doctor need to get specific about how that paraplegia is impacting this visit? 
Or is it just enough to say that the patient is a paraplegic? So these are things that we're hoping that, as Coding Clinic starts looking at the outpatient guidelines, they will start expanding on, because now these are questions in our heads as well. Yes. And we urge people to submit these kinds of conundrums to Coding Clinic. Audrey is sort of our resident liaison for Coding Clinic. And she submits a lot of our questions to them. So we've tried to be very proactive in submitting these questions and waiting for a response. So, next question: if an ED provider treating a patient for an ankle sprain documents that the patient has asthma, and he performs a brief respiratory exam, can the asthma diagnosis be reported? The facility follows MEAT criteria. So if I can take that one too. Sure. Sure, Chris. I'll pass it back to Audrey. So did the physician treat the asthma at any time during the ED visit? Did he provide any medication? Anything like that? If they do a full physical, they're going to look at the cardiovascular system, the respiratory system, GI, neuro, and put down their findings. And there may be findings. But do they treat those findings? And do they document that they treat those findings? If there's no documentation from the physician that he is treating that condition while the patient is in the ED, I would not code it. And I think-- this is Colleen. If you go back to MEAT-- monitor, evaluate, assess, or treat-- first of all, a lot of doctors do a full physical exam because, as an ED doctor, they're looking for underlying conditions. But if the patient's coming in with an ankle sprain, I would have expected the HPI to indicate some shortness of breath or some comment about respiratory status. The fact that a respiratory exam was completed without a tie-in to the assessment-- again, I can't perform a physical exam, I'm not a provider-- but by the comments in the physical exam, what's the ramp-up to that? Is it stable? 
Is there a need, again as Chris was mentioning, for additional treatment? So the fact that an exam alone is being done is still not showing me the monitor, evaluate, assess, or treat, without a comment that that asthma is stable, that the asthma is well controlled. Or why are you performing a respiratory exam on a patient who's coming in with an ankle sprain? There has to have been something in the HPI that would have triggered that. Or they just do a full physical exam, which may be a standard of care-- and not necessarily reportable. OK. Thank you, ladies. I think we have time for one more question. And Lisa will keep us on track with that. This question is, I am confused every time I read or hear about HCC and RADV. Is this just something that is related to MA plans? Or does it affect and apply to traditional Medicare? So this is Colleen-- or Chris. You can go ahead, Chris. So I was going to just go back to what RADV stands for. It is risk adjustment data validation. And these are audits performed to ensure that there is documentation supporting those HCC diagnoses that are used in the Medicare Advantage plans. So it does tie into the outpatient setting, professional services, and HCC diagnoses and supporting documentation. And I would add to that-- the RADV that we addressed today is specific, as Chris says, to the Medicare Advantage plans. There are other audit processes that are done by HHS and CMS for other programs, such as how the RACs are utilized or how the MACs are utilized. So there are other methodologies of audit under that umbrella. But RADV specifically is related to Medicare Advantage. And I would add to this: so yes, we talk about this with regard to MA plans. But overall, the outpatient coding and reporting guidelines apply to all outpatient encounters, be it facility-- the emergency room, or observation-- as well as professional settings such as clinic visits. 
Or the ED doc in the emergency department-- those visits. So you need to follow those guidelines for all outpatient coding, whether HCCs are involved or not. So just to make that clarification. So Lisa, I think you're probably going to tell me that was the last question we could take for today. Unfortunately, I am, since we are right about at the two-minute warning. So I am going to say let's go ahead and wrap for today. So thank you to all of our panelists. Such a wealth of information-- it just shows how truly spectacular our consulting services team is to listen to. So thank you all. Just a couple of things before we wrap up. (DESCRIPTION) Text, 3M Science. Applied to Life. 2022 3M Client Experience Summit. Reflect, Reconnect, Reinvigorate. July 18-21, Salt Lake City. If you already registered to attend the in-person event: 1. Go to the 2022 3M CES Virtual Event login page. 2. Enter your First Name, Last Name, and the Email Address that you used to register for the event. 3. Click the Next button. 4. The page will prompt you to enter a 6-digit verification code. You can find your verification code in a text message on your mobile phone and in your email. Enter the code. 5. Click the Log In button. After entering the verification code, you'll be logged in and taken to the event's Home page. If you have not yet registered to attend the event: 1. Click on the Register Now button at the top or bottom of the page. 2. Enter your registration information through the system. 3. You may need to wait for your registration to be approved after submitting your request to register. 4. After you receive a Registration Confirmation number and your registration is approved, you will be able to follow the steps above to access the virtual event. You will also receive a confirmation email that includes these instructions. (SPEECH) If you did attend our client experience summit in July-- we did host that. That is for our customers who join us. 
It's held once a year, and it was exciting to be back in person. We did record those sessions. And so if you already registered for that, you can log back into the virtual event site to see those sessions. That link is in the resources section. If you are interested, and you did not attend, and you are a customer, you are able to log in and sign up for it. But we will be checking registration, because again, that is a customer-specific event. But that link is in the resources section. (DESCRIPTION) Text, 3M educational boot camps for advanced CDI, pediatrics and quality training. 3M has successfully educated and trained thousands of CDI specialists and coders since the early 1990s, and we created the industry's first formal CDI program. With more than two decades of industry experience, our 3M consultants are not only experienced educators, CDI specialists and coding professionals, but they are also on the front lines working hand in hand with clients optimizing their CDI and quality programs. They take that expertise directly from the field and into the classroom, so you have the most up-to-date content to succeed in your role. Advanced CDI training. 3M's advanced CDI training, normally offered as part of the 3M Advanced CDI transformation program, will be offered in an engaging weeklong course. Available exclusively to 3M clients, this course will help address knowledge gaps and fundamental CDI skills and dive into the clinical and coding concepts. Upcoming training sessions: August 1-5, 2022. Advanced Quality training. The new advanced quality training takes a look at how CDI, coding and quality efforts can help or hinder an accurate reflection of the quality of care. Available exclusively to 3M clients, this course is perfect for seasoned CDI, coding and quality professionals and multidisciplinary CDI teams looking to take their quality programs to the next level. Upcoming training sessions: Nov. 14-18, 2022. Advanced CDI Pediatric training. 
3M's advanced pediatric training can prepare your CDI team to recognize the unique problems and challenges in a pediatric population, leading to more accurate documentation. Available exclusively to 3M clients, this week of training has been tailor-made for experienced CDI, coding and quality teams and individuals looking to develop their pediatric knowledge. Upcoming training session: Sept. 12-16, 2022. Click to learn more. (SPEECH) Other resources that we have, again, are our outpatient and inpatient resources. We also have our boot camps-- we have several coming up in August, September, and November. So if you are interested in learning more about those, you can. And also, in the portal, if you are interested in learning more about our solutions, products and services, in that middle section where it says ask an expert, if you click on that, you can certainly let us know there if you'd like to learn more. (DESCRIPTION) Text, Notices. Incorporating the International Statistical Classification of Diseases and Related Health Problems - Tenth Revision (ICD-10), Copyright World Health Organization, Geneva, Switzerland. ICD-10-CM (Clinical Modification) is the United States' clinical modification of the WHO ICD-10. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) was developed for the Centers for Medicare and Medicaid Services (CMS). CMS is the US governmental agency responsible for overseeing all changes and modifications to ICD-10-PCS. If this presentation includes CPT or CPT Assistant: CPT is a registered trademark of the American Medical Association. This product includes CPT and/or CPT Assistant, which is commercial technical data and/or computer databases and/or commercial computer software and/or commercial computer software documentation, as applicable, which were developed exclusively at private expense by the American Medical Association. 
The responsibility for the content of any "National Correct Coding Policy" included in this product is with the Centers for Medicare and Medicaid Services, and no endorsement by the AMA is intended or should be implied. The AMA disclaims responsibility for any consequences or liability attributable to or related to any use, nonuse, or interpretation of information contained in this product. If this presentation includes Coding Clinic: Coding Clinic is the official publication for ICD-10-CM/PCS coding guidelines and advice as designated by the four cooperating parties. The cooperating parties listed below have final approval of the coding advice provided in this publication: American Hospital Association, American Health Information Management Association, Centers for Medicare & Medicaid Services (formerly HCFA), National Center for Health Statistics. Copyright 2020 by the American Hospital Association. All Rights Reserved. If this presentation includes UB-04 information: Copyright 2019, American Hospital Association ("AHA"), Chicago, Illinois. Reproduced with permission. No portion of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior express, written consent of AHA. (SPEECH) And in this presentation, if you did download the handout, there's some information about some of the items that were presented today. (DESCRIPTION) Text, Thank you. (SPEECH) Lastly, we certainly appreciate you completing the survey just to let us know how we did. We greatly appreciate it. And with that, I'm going to go ahead and thank you all again for attending. Our next CDI webinar will be in October. And we also have next month's quality webinars. So we have our series continuing. And we appreciate you joining us today. So thank you all for joining, and thank you to our panelists again. So have a great rest of the day.

      Webinar title slide

      Applying compliant guidelines and M.E.A.T. criteria for appropriate Hierarchical Condition Categories (HCC) diagnoses

      • August 2022
         
      • Applying compliant guidelines and M.E.A.T. (Monitoring, Evaluating, Assessing and Treatment) criteria based on medical record documentation is a key requirement for supporting HCC coding. Join our panel of experts discussing the importance of applying guidelines and M.E.A.T. criteria as part of standard practices to ensure accurate documentation, quality patient care and improved data integrity.
    • (DESCRIPTION) A video conference. Two women sit in adjacent chat windows, wearing headphones. A text bar indicates Lisa Paulenich is present on the phone. A very small window on the screen shows a slideshow title slide. 3M C.D.I. Innovation Webinar Series. Data as a Catalyst to C.D.I. Program Performance and Physician Engagement, a Four-Step Approach. A photo shows two business people smiling in a conference room. (SPEECH) Good afternoon, and thank you everyone for joining us in our June CDI innovation webinar. Before we get started, I am going to go over a couple of housekeeping items. (DESCRIPTION) The slide changes. New year, New Platform. The additional text on the slide is too small to read. (SPEECH) If you joined us last year, and this is your first time joining us in 2022, you might notice that we have a new webinar platform. That is really here for a better experience for attendees. If you're joining today, definitely make sure you're using Google Chrome, closing out of any VPN, multiple tabs, that will help with your bandwidth. If you are having any issues with your audio, check your speaker settings and do a quick refresh. Because this is a web-based platform, there is no dial-in number. Everything is through the actual portal. We do offer closed captioning. So in the media section, if you do need closed captioning, that is available for you to start, as well. And because again, this is much more interactive, you can make the sections of the platform bigger, smaller, just so if you want to make the presentation bigger, you can see it that way. We do encourage questions. So in the Q&A section of the portal, please ask questions throughout. We'll get to as many as we can at the end. We do also have a resources section-- that is where you can download the certificate of attendance. If you want to submit those to obtain CEUs after this webinar, you can download that certificate there. 
The handout is also in that resources section for download, as well as a couple other items just for more information if you're interested. We do also have a survey that we would ask at the end if you can complete-- we like to know how we do. And so before any more time passes, I'm going to go ahead and pass it over to Kaycie, who will introduce our speaker and kick things off. So Kaycie, go ahead. (DESCRIPTION) The woman in the right video chat speaks. The slide changes. Title, Learning Objectives. Additional bulleted text is too small to read. (SPEECH) Thank you, Lisa. Good afternoon, everyone. My name is Kaycie LaSage; I am a Performance Outcomes manager with 3M. Today, I will be presenting with Carrie Wilmer, who is the CDI director for Intermountain Healthcare, formerly SCL Health. And I'll let Carrie introduce herself. (DESCRIPTION) The woman on the left smiles. The slide changes. Title, Meet Our Speaker. Two small photos of the women appear with biographical text, too small to read. (SPEECH) Good morning, good afternoon, everyone. I'm Carrie Wilmer. As Kaycie said, I am the director of the CDI program for Legacy SCL Health. We have since merged with Intermountain Healthcare, and have been newly named as the Peaks Region. So looking forward to our time together today. Yes, and I had the pleasure of working with Carrie and her team in my previous role here at 3M as a performance advisor-- I worked with Carrie and her team for about two years while I was their data coach. (DESCRIPTION) A title slide. Clinical Documentation Integrity, Legacy S.C.L. Health. (SPEECH) So we'll go ahead and get started with our first polling question. (DESCRIPTION) Slide change. A question and two answers appear. (SPEECH) What measures do you track to indicate physician documentation opportunity or success? (DESCRIPTION) The two answers. MS-DRG and Case Mix Index (CMI). Quality data: length of stay, Patient Safety Indicators, Hospital Acquired Conditions. 
(SPEECH) Another minute-- seeing what the results are looking like. OK. (DESCRIPTION) Slide Change. Legacy S.C.L. Health. A map with colored sections and text too small to read. (SPEECH) Take it away, Carrie. OK, I didn't see those results come through. So we'll go ahead and just get ourselves started. So we already did our introductions, but just to give you an idea of our footprint and get the story started here today. We consist of seven acute care facilities in Montana and Colorado. And this graphic here is our original SCL Health footprint. Of course, with that Intermountain merger, we extend much more broadly into the Western region of the United States. So for our CDI program, we have 41 FTEs total, broken out into several different roles. We have 28 CDI specialists, and then 13 advanced roles as listed there, as far as the makeup of our team. So (DESCRIPTION) Slide Change. S.C.L. Health C.D.I. Program History. Three boxes of text appear, getting progressively higher on a line graph, with time as the x-axis. (SPEECH) to start with, we're going to go back in time a little bit, a few years, and give you how we got to where we are today in our agreement, or our relationship with 3M on these reports, and what we have done to move our program forward. So back in 2013 is when we first started our centralization effort. At that point in time, we had CDI programs-- teams at each of the sites, except they kind of reported up differently, with different training, different tools. And we brought all of that together as a system approach, to centralize that process and build it as one team, one system for SCL Health. So at that point in time, our response rate was 85%, agreement rate 86%-- not too bad. But we definitely were able to do more and move that needle. So at that point in time, at the beginnings of our measuring, we were at about $875,000 monthly on our DRG shift approximation with our Medicare blended rates. 
So 2015-2017, in that time frame, we were able to expand. We were very fortunate to have pretty significant investment into the program. However, that came about as a result of external consulting assessments, and the messaging that identified there was opportunity being left on the table. And so our executive leaders received that message, heard that message, and decided, we're going to invest. And we expect that CDI is going to rise to the occasion. And fortunately, we did. So we were able to get new FTEs-- that was the beginning of our advanced CDI roles. And through the course of those couple of years, we really rose to the occasion, increasing to about a $1.8 million monthly average, with our record high being there in 2017. From there, 2018-2020, as would be expected with many established CDI programs, you get to a point where you plateau and you don't see as much improvement any longer. And so through that time period, we experienced some leadership turnover. We were still having very successful response rates and agreement rates, as far as our physician engagement across all of our sites. But our financials dipped a little bit there, to a $1.4 million monthly average. (DESCRIPTION) Slide, Data as a Catalyst, Breaking through the Plateau. Two text boxes on a similar graph, time on the x-axis. (SPEECH) So we knew that we either needed to be able to validate that our plateau was true to form, and that there wasn't any more opportunity to be had-- which was suspect-- or we were going to need to find a new strategy, another way to revitalize and boost momentum, and identify what opportunity still remained for the program. So at that time, our physician education strategy was very much built on our CDI query metrics-- what were our top questions, et cetera. 
But we came to the conclusion, we can't boil the ocean-- we were not being effective trying to, I guess, disseminate all education to all specialties and expect that that was going to move the needle any longer. So we needed data. And that was our challenge. We did not have line of sight to identify very easily where that opportunity would be found, how much was there, and really who we would need to partner with first, as far as best utilization of resources. Which physicians, which groups, could catapult and serve as a catalyst to move the program forward? So we decided to engage with 3M and begin leveraging the performance data monitoring reports to be able to focus in and use that data to restrategize, and continue to push the messaging forward. So I'll turn it over to Kaycie. (DESCRIPTION) Slide, Performance Data Monitoring. Bulleted text beside a graphic of bright lights points connected in a web in front of a cityscape. (SPEECH) Thanks, Carrie. So the reports that Carrie and her team used are in 3M's online cloud-based tool, called Performance Data Monitoring, or PDM. The reports in PDM are based on submitted inpatient claims data, looking at the total inpatient population, not just what CDI reviewed. And this allows for a more holistic view of the inpatients, helps to understand the effectiveness of CDI education of providers, and helps to identify gaps. (DESCRIPTION) Slide. Two bulleted text paragraphs appear, labeled Physician Reports and C.D.I. Performance Reports. (SPEECH) The goal of utilizing the PDM reports is to gain insight into key metrics, and performance improvement opportunities against baseline and best practices. The physician reports get down to the cases attributed to a particular physician-- for example, Dr. Smith, orthopedic surgeon. What's the MCC/CC capture on Dr. Smith's cases? What SOI/ROM subclasses do Dr. Smith's cases typically fall into? And then, compare that to the other providers in Dr. 
Smith's practice group, the other orthopedic surgeons at the facility, and compare to the national norm for ortho. The CDI report section has both financial and quality reports. The compare report is a financial report that's based on MS-DRG. And for example, you could compare the facility's performance in Q1 2021 to the performance in Q1 2022, and also against national norms, including the metrics that are bulleted here on the slide. Severity and mortality reports are based on APR-DRG, and those compare the facility's performance against baseline and state peer groups. (DESCRIPTION) Slide, Role of the Performance Advisor. A bulleted list of text labeled Coaching with performance data advisor. Six button graphics appear beside. (SPEECH) In my role as a performance advisor, working with Carrie and her team, like I mentioned before, I was there to be their data coach. I worked extensively with them to help them understand their data in PDM, and how to effectively utilize PDM as a tool. There is a ton of data available in PDM. And like Carrie mentioned before, you can't boil the ocean. So they needed to understand where to focus their efforts for improvement. And I helped them identify focus areas that they then investigated further. (DESCRIPTION) Title slide, The Data-Driven, Physician-Focused, Four-Step Approach. (SPEECH) And now, we're on to our next polling question. (DESCRIPTION) A question appears with three answers. (SPEECH) How do you identify opportunities for physicians' CDI education? And I'll hang out here a minute before we go to the result. (DESCRIPTION) The three answers. Use C.D.I. query trends. Use claims level data. General C.D.I. industry trends. A fourth option appears to be below the bottom of the screen. (SPEECH) Let's see-- and it's submitting. Oh, there's some people submitting. Oh, 2004, I apologize. In the interest of time, we'll go over-- so a combination of all seems to be the trend. 
(DESCRIPTION) Slide, Data Analysis and Opportunity Identification. Step One, with a numbered list of text. A graphic of two people standing before an oversized computer monitor, twice as big as they are. (SPEECH) All right, turn it back over to you, Carrie. Yep, and that result really isn't too surprising, as far as the number of different data metrics that can come together to tell the CDI story. So we have four steps to go through, as far as how we were able to slice and dice, pull this together, and present. And we're hoping that this will be helpful to simplify the process as you work through the data, whichever form you may be using. So our first step is, obviously, we've got to analyze the data. We've got to identify that opportunity. So being a multiple-site system, we had to tackle this from a couple of different angles. We first went through and looked at each of the individual care sites to be able to look at their patient mix, the population, the services offered, and be able to see what the top opportunity truly was. From there, though, we wanted to be systematic again, as best we could, to drive that education. So we looked for and identified the common themes across all of the care sites from a system perspective. From there, we took all of those data results, those top DRGs, took a sampling, and we knew that we needed to make sure that we validated the data. Some of the DRGs that rose to the top, as far as the financial opportunity based upon MCC/CC capture, maybe didn't truly manifest into the opportunity that we would expect. So we needed to pair both together in order to first bring forward and identify what those opportunities were. I would also say that we were able to then partner these findings with our prioritization tool within the 3M 360 Encompass, as well, and I'll be touching on that a bit more in the presentation. So Kaycie, why don't you talk through some of these screenshots of the data? 
(DESCRIPTION) Slide, Service Line and D.R.G. Level Opportunity. Two bar graphs are placed above a table full of text data. (SPEECH) So this is a screenshot out of PDM for one of Carrie's facilities. The graphs at the top are broken out by the MS-DRG service line. So we've got the medical opportunity and the surgical opportunity. And the estimated, or the potential, revenue here is based on MCC/CC capture opportunity. So down below, we get a little bit more granular. Here, we're looking at the triplet DRGs, and looking at the full triplet. So for that very first row, the major small and large bowel procedures, we're comparing how many times the facility was in either 329 or 330, compared to how many times they were in 331. So the 63 number, the little hyperlink, is the count of cases in 329 or 330. Then, you've got the 33 cases in the DRG 331. The total cases, the actual capture rate, and then the benchmark performance, which for this particular example happens to be MEDPAR. Then we've got the capture rate variance. And that reimbursement differential is saying that, if the facility was to capture the MCC or CC, just for that DRG cluster, at the MEDPAR 80th percentile performance, it could be a potential additional $286,000. So from here, what Carrie and her team would do is drill into those 33 cases that are in the DRG 331, and get to that encounter-level detail to then pull up the case in the EHR or in 360 to see what was really going on with those cases that did not have an MCC or a CC. (DESCRIPTION) Slide change. A large table of itemized text data in 10 columns. (SPEECH) Another screenshot from PDM-- this is now looking at the mortality data. So now, we're looking at the APR-DRG service line. And we've got the information from MEDPAR for the state of Colorado-- the facilities in Colorado. So we've got all the information on the total cases from MEDPAR, the actual deaths, and the mortality rate. And then we get into the facility-specific information. 
So we've got the total cases and the actual deaths in each one of the APR service lines, and then we've got our mortality rate information. So the service lines that are in the red font in the two right-hand columns with the little asterisk, those are ones that have an unfavorable mortality variance. When we get further down in the list, you can see the service lines that did have a favorable mortality variance. So looking at orthopedics, they had nine deaths when they were only expected to have 5.3. (DESCRIPTION) Slide. A.P.R. D.R.G. Level Mortality Opportunity. A large table of text data in several columns. (SPEECH) On our next slide, we're getting into the individual APR-DRGs in that mortality data to then get to your drill-down. So here, we're looking at the APR service line of medicine, and we've got APR-DRG 53 and 242. So again, we've got our breakdown of the cases for the MEDPAR data in the different ROM subclasses, and then we get into the facility-specific information. So you can see here, for both of these APRs, the one death in each APR occurred in subclass four. So each of these APRs had an unfavorable mortality variance, but the actual death occurred in the subclass where we would hope that it would occur. So what Carrie and her team would do from here is actually go into the cases that discharged alive to see if any of those cases in a one, two, or three could have moved to a higher subclass. (DESCRIPTION) Slide, Data Analysis Summary. Step 1. Bulleted text beside a graphic of hands pointing at a pad of paper. (SPEECH) Turn it back over to you, Carrie. Great, so in summary, as we looked at the opportunity, we really focused on it from two angles-- from that financial opportunity, and then the mortality, as we just showed in those screenshots.
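The mortality variance described here reduces to an observed-versus-expected comparison. A minimal sketch, with the flag convention assumed from the slide's red-font/asterisk column:

```python
# Minimal sketch of the observed-vs-expected mortality check described above.
# An index above 1.0 means more deaths occurred than expected for the case mix.

def mortality_index(actual_deaths, expected_deaths):
    """Observed/expected mortality index for a service line."""
    return actual_deaths / expected_deaths

def variance_flag(actual_deaths, expected_deaths):
    """Label the variance: more deaths than expected is unfavorable."""
    return "unfavorable" if actual_deaths > expected_deaths else "favorable"

# e.g. a service line with 9 actual deaths against 5.3 expected
index = mortality_index(9, 5.3)  # roughly 1.70
```

The twofold CDI response the speakers outline follows from this: for the cases that expired, confirm the documentation supports the highest appropriate ROM subclass; for the cases discharged alive in subclasses one through three, check whether documentation would move them higher.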
So we were able to look at it from that care site level, from a service line level, and then to be able to have the power to drill down deeply into the detail of the DRGs specifically, and monitor what is happening, and to pull out those case examples for review to validate the data. All very valuable insights to be able to set us up for our second step. (DESCRIPTION) The slide changes. (SPEECH) We went too far, apologies. (DESCRIPTION) Another slide change, then it returns to the previous slide. Identify the Right Audience. Four graphic boxes arranged in a square. A dot in the center of them with arrows pointing to each space between the boxes. A paragraph of text in a list beside. (SPEECH) Oh, we just have it flip-flopped. So the second step, my apologies, is identifying the right audience. And so this here is the prioritization matrix that we used to be able to bring the opportunity forward to our care site leadership, our CMOs, who have been identified as really serving in place of a physician advisor program. So what we did was take each of the opportunities, sized and scaled for each site, but also map it, think it through, in terms of a continuum of engagement-- who would be most likely and most successful to meet with, to be able to have adoption, to be able to have an engaged conversation about the opportunities, and what we may be able to do to partner with that particular group to move the needle on the data. So in this particular example from one of the sites, cardiovascular surgery and neurosurgery, of course, had a very high opportunity, being surgical DRGs. And general surgery, also, very high opportunity. However, due to the contracted relationship and some of the potential politics behind the scenes, it was determined that we weren't going to spend any time there. It's going to be an uphill battle. We need to just leave that be-- let's focus where we're going to be successful.
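The prioritization matrix described here is essentially a two-axis quadrant sort: opportunity on one axis, likely engagement on the other. A hypothetical sketch-- the scores, thresholds, and quadrant labels below are illustrative, not the speakers' actual criteria:

```python
# Hypothetical sketch of an opportunity-vs-engagement prioritization matrix.
# Scores (0-1 scale), thresholds, and quadrant labels are all illustrative.

def prioritize(groups, opp_threshold=0.5, eng_threshold=0.5):
    """Bucket (name, opportunity, engagement) tuples into matrix quadrants."""
    buckets = {}
    for name, opportunity, engagement in groups:
        if opportunity >= opp_threshold and engagement >= eng_threshold:
            quadrant = "pursue first"      # high opportunity, high engagement
        elif opportunity >= opp_threshold:
            quadrant = "leave be for now"  # high opportunity, low engagement
        elif engagement >= eng_threshold:
            quadrant = "low-effort wins"   # low opportunity, high engagement
        else:
            quadrant = "deprioritize"      # low opportunity, low engagement
        buckets.setdefault(quadrant, []).append(name)
    return buckets

buckets = prioritize([
    ("cardiovascular surgery", 0.9, 0.8),  # engaged surgical group
    ("general surgery", 0.9, 0.2),         # contracted, uphill battle
    ("orthopedics", 0.2, 0.3),             # low on both axes at this site
])
```

The point of the sort is the one the speakers make: a group can carry a large dollar opportunity and still be the wrong place to spend education time if engagement is unlikely.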
And so we were able to really maximize, then, the opportunity with the cardiovascular and neurosurgery groups, and the hospitalists. Orthopedics is up there in this particular example, though. They were low engagement, and low opportunity for this site. So we really had no conversation or need to explore that angle. So knowing that we all are so stretched with resources and bandwidth, and even being respectful to physician workflow, workload, and all of the demands currently, we really wanted to make sure that we were prioritizing who and which audiences we were seeking out. So some of the criteria to consider here would be to think about the group size, think about their leadership structure, whether they're on an employed model or contracted, whether they're private physicians or surgeons providing services in the hospital. We also had conversations about the mid-levels, and in some cases, we did a sequenced meeting where we met with group leadership, then we met with the group, then we met with the mid-levels, because that group really expected that their PAs and NPs would be carrying much of the load of the documentation. We also incorporated our CDI query data into the mix in all of these decisions. So (DESCRIPTION) Slide Change. Audience Identification and Customization. Step Two. A bulleted list of text. (SPEECH) we took that prioritization matrix-- that was really a driver of the conversations that I had with the CMOs. And so with our care site leaders, I needed their expertise and their guidance to know the personalities of whom I would be meeting with. I also needed their backup to join each of these conversations, and be able to continue to support the importance of why we were going to need to be having these conversations. It was also very eye-opening to be able to talk through specific initiatives for each of the care sites that may be different than the data that I had available to bring my CDI opportunity forward.
But length of stay was one of those measures that is also directly impacted by the documentation, and so we were able to marry those conversations together and join the initiatives and get further bang for the buck, as far as the engagement into that documentation. So we talked through the data, we talked through the challenges of the groups, we talked through which groups would be most optimal. We talked about even the structure and makeup of the content of the presentation itself, which I'll get into more with step three. But it became abundantly clear, as we set out on mapping the initial logistics of these physician education meetings, that we needed to remain mindful of culture and engagement and ensure we were doing everything we could to support as much buy-in as possible, and make it an effective use of everyone's time for the meetings to come. (DESCRIPTION) Slide Change. Effective Communication, Step 2. Bulleted text beside a graphic of a person holding a tablet with a pie chart on it. (SPEECH) So step two, know your audience. Tailor your messaging. Focus that messaging on what is going to be most valuable for that particular group. It was interesting, too, as far as even the feedback from the CMOs on how much data to include in a presentation or not. So although we built a templated presentation to be able to deliver readily for any of these meetings that would come up, we definitely customized and altered each one. And I had some meetings where I had no data whatsoever-- it really emphasized case examples and more of the qualitative aspects of the documentation and what we found. And I had other presentations where the CMO wanted full data, unblinded, to make sure that those physicians could see where they fell against their peer group as transparently as possible. So there is such importance, though, to be sensitive and ensure you know the audience you're going to be coming in to speak with. So we have another polling question.
(DESCRIPTION) Slide change. Who primarily delivers C.D.I. education to physicians at your organization. The options. C.D.I. Specialists. C.D.I. Managers or Directors. C.D.I. Educators. A fourth option is cut off the bottom of the screen. (SPEECH) Excuse me, but I'm going to see how our results look. CDI specialists-- OK. (DESCRIPTION) Percentages appear for each option. Specialists at 49.4%. Managers or Directors, as well as Educators, each at 17.4%. (SPEECH) Not too surprising with the results there, especially as there are so many different makeups of CDI programs, in terms of the amount of bandwidth that any individuals may have across the team. I'm a little surprised to see that the physician advisor score was a little lower, as I know that is one very successful strategy in being able to disseminate, and peer to peer, have these discussions and education. But also good and validating to show that really, any of us can be delivering these messages. (DESCRIPTION) Slide change. Presentation Development and Delivery, Step 3. A list of bulleted text. (SPEECH) So for step three, this is the actual presentation itself. So I have a number of screenshots, just as samples, to show how we did it, how we communicated the messaging. And a few different screenshots, too, again, back to the PDM data that was driving much of the content here. So first and foremost, we leveraged the SBAR framework. And I'll touch on that more in this next slide to come, but throughout the course of the content, I'm sure we all have versions of very similar presentations. But we've got to make sure we're outlining what CDI is, why does CDI matter, what is that opportunity, and where are you going to find it? What do we need from you, as a result? We've got to prove it. And so that's where the case examples come back into play.
We validated the data by doing those specific case reviews, so we had an abundance of examples right there at our fingertips to be able to pull into these presentations, to be able to show: this is a case, this is what we saw, this is what we found, and this is where and how the final DRG could have potentially been different. And then at the end, we of course need to always be clear on what the ask is, and what we need from each of these providers engaging with us. (DESCRIPTION) Slide, S.B.A.R. Framework. A colored box of bulleted text beside a list of text labeled C.D.I. opportunities for St. Mary's. (SPEECH) So back to that SBAR-- it really is setting up the framework for the need and why we're having the conversation. So it may be widely known across the audience here today, but it is definitely a key tool from a clinical bedside nursing perspective, being able to succinctly communicate with the physician about changes with the patient at hand and needing to potentially report vital signs, report that lab value, and change the course of treatment as a result. So we literally wrote out an SBAR statement to start each of these conversations, to be able to highlight what the situation was-- we have data showing we have opportunity-- to give that background, as far as making sure that they know that they have a CDI team of registered nurses that are reviewing this documentation, that we have done that assessment to identify the opportunity, and to share what the conclusions have been and provide those recommendations. So out of the gate, give it all to them in a very, very brief skeleton of what we are here to talk about today. And then, get into the nuts and bolts of the details. (DESCRIPTION) Slide. Slide Example, Physician Education. Two boxes appear, each with a different colored graphic in it, displaying data. (SPEECH) So when it came to what is CDI? Why does CDI matter?
I've seen many different depictions of CDI at the center of the wheel, and how documentation and the query effort support so many initiatives. So there are obviously so many more than even what we have listed in our slide, but this was the graphic that was most liked, in terms of really just listing most simply that, with a bit of attention and effort on that documentation, you're really killing two birds with one stone-- you can have multifactorial effects as a result. Then we would get into making sure that they knew that it needed to be their documentation, that the diagnoses made needed to meet the four criteria to be captured in the final code set-- the treatment, the monitoring, the evaluation, et cetera-- so that we then get that group of codes. We get the DRG, and it is those DRGs that are then driving all of the data measures included here. (DESCRIPTION) Slide. The graphics are replaced with tables and charts full of text. (SPEECH) So we would also give an example, just a high-level example, not with supporting documentation yet at this point, but just to be able to show a DRG shift. And as I had stated, one of those themes that really came out across many of our sites was the emphasis on length of stay. So we were able to highlight the shifts to the DRG, and how the documentation would buy them more time to be able to take care of their patients. So out of the gate, just in setting up the groundwork: this is what we're looking at, this is what we're asking for. We need that specificity. We need to be able to capture the codes appropriate for that patient. We would then shift it into some mortality conversation. And although it's a bit complicated, as Kaycie already talked us through with the APR-DRG, we would be able to really focus that message to be able to show the twofold CDI approach to mortality. We want to make sure that those cases that expire are as high as possible with the risk of mortality.
But it also is equally valuable to look at the generalized population within that APR that are falling to the lower levels-- one, two, three, and four-- to be able to explain our context, our approach, and thought process to the documentation. (DESCRIPTION) Slide. The charts are replaced with additional charts, formatted in a similar fashion. (SPEECH) Through the PDM tool, we would have the opportunity to get a physician's listing of DRGs and their MCC/CC capture variance. So just like one of those initial screenshots that Kaycie spoke through at the care site level, to be able to drill down at the physician level and see all of their claims, whether queried or not, CDI reviewed or not-- just to be able to see what their capture rate was and what their variance was compared to the benchmark. And then, to be able to have those projected financial dollars to further quantify the opportunity for the physicians. So in some cases, we included this. In some cases, we did not. We really leaned on the guidance and advice from the CMOs in knowing the personalities, again, as I said earlier. So the second piece of data here is actually a newer data point that the PDM tool was able to provide. But it was fascinating how many times I was able to pull this out to further prove the point that we may have some opportunity to move the needle. So it's very small, I realize, and blinded, blanked out with the physician names, but what that small table is really depicting are four different surgeons with their case volume. And then, it has how their volume broke out for each of the severity levels-- one, two, three, and four-- with what their average length of stay was for one, two, three, and four. So for the one that's circled-- and hopefully you can zoom in on it, or look more closely when you get the slide deck-- what we saw was that the length of stay was much higher for SOI three compared to four.
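That severity/length-of-stay table lends itself to a mechanical check: flag any severity level whose average length of stay exceeds the next-higher level's. A sketch with placeholder values shaped like the circled surgeon's row (the slide's actual numbers weren't legible):

```python
# Illustrative check for severity-of-illness length-of-stay inversions:
# flag SOI levels whose average LOS exceeds the next-higher severity level.
# The sample values below are placeholders, not the slide's actual numbers.

def los_inversions(avg_los_by_soi):
    """Return SOI levels that out-stay the next-higher severity level."""
    flags = []
    for soi in sorted(avg_los_by_soi):
        higher = soi + 1
        if higher in avg_los_by_soi and avg_los_by_soi[soi] > avg_los_by_soi[higher]:
            flags.append(soi)
    return flags

# e.g. SOI 3 patients staying longer on average than SOI 4 patients
flags = los_inversions({1: 2.1, 2: 3.4, 3: 9.8, 4: 7.2})  # flags SOI 3
```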
So that is kind of counterintuitive to what we would think, as far as the amount of resources there. We wouldn't want patients staying longer at lower levels. We want to be able to maximize that. (DESCRIPTION) Slide. A bar graph on the left, and on the right, several smaller bar graphs with data. (SPEECH) So additionally, this is yet one more potential data representation, and we would again not use all data points for all presentations. We picked which ones were most fitting and most convincing for each of the conversations that were held. So in this one, what you're seeing is a graphic comparing the physician's breakout, again, of that severity of illness-- one, two, three, and four progressing-- in the larger graph on the left side. And then, how their percentage of SOI capture compared to their specialty, to their physician group, and to the national norm-- the three graphs in the middle there. So in this example, this was a urologist at a small site; he was his own specialty and he was his own group. So both of those did not really bring us much value, because it's the same data set. But what was fascinating was how that graph looks for his performance compared to that national norm. So you can see the lightest peach color there is very low at that national norm. His is very high. And so he does not mimic the national trend in terms of the severity and the amount of secondary diagnoses being captured on his cases. So this was a very compelling graphic to be able to share. (DESCRIPTION) A paragraph of bulleted text beside a table of several smaller bulleted paragraphs. (SPEECH) So I mentioned before, all of those case reviews bringing in those examples-- I would recommend, don't have a conversation without examples at the ready to be able to talk through and to give more of the context of how the data applies to a real-life example.
I would also recommend that any examples you have are as timely as possible. As we all know, we probably get that pushback: oh, it's not my case, or oh, that was six months ago-- I've changed my template already, I've already fixed this problem. So we heard all sorts of different responses and rebuttals to what we were seeing. We tried to just stay on track to be able to get the concepts across. But have real examples pertinent to the audience that you are presenting to, and as real-time as possible. That last graphic there is a very oversimplified listing of many of the general themes that, of course, are red flags for CDI specialists as we review our charts. But this was actually very helpful to have as a kind of synopsis, a one-pager that focused the talk on what some of those key diagnoses are. We know that the physicians aren't going to remember-- they're not going to be able to keep this front of mind at all times-- but this actually was a well-appreciated summary that we were able to provide. (DESCRIPTION) Slide. Initial Recommendations to Physicians, with an outline of bulleted text. (SPEECH) And then, last but not least, we make sure that we include those recommendations-- what is the ask? What is it that we need from them? So I think the slide may still be a little wordy, but ultimately it's summarizing the same message. And CDI talking points are often very similar, but we all know we need that comprehensive H&P, we need the progression of the documentation through the progress notes. And we need that final statement and discharge summary to wrap up that case with all of the details therein, and make sure that we are solid for each of the codes captured. And then we really emphasized the CDI query as a tool, and if necessary, that we are there to support and to help be a layer and a safety net to help them get the documentation that they need. So a lot of different angles there-- slide decks, again, similar, I'm sure, to many that you have out there already.
(DESCRIPTION) Slide. Monitor Performance and Communicate Progress. Two lists of text. On the left, bulleted list of steps. On the right, a text bubble, What Does Success Look Like, with text below. (SPEECH) But this was our approach; this is how we were able to incorporate that data. So as we've been moving through these four steps, we've made it to that last one. But just to recap, that first step was, we've got to be able to get our data analyzed and identify that opportunity. Then, we prioritized the message and the audiences that we selected to be presenting to. And then, built the customized presentation for them. So in order to then wrap up this process, we needed to make sure that we evaluated the effectiveness of the approach and of the education. We needed to identify KPIs to define what success was going to look like for us. And this may look different across organizations and across sites, but some to consider there on the slide would be fewer queries issued, potentially increasing your CMI, increasing the severity of illness, increasing the risk of mortality, or decreasing that mortality index. So it's important to be able to track and monitor the effectiveness, but also to be able to give the feedback. So one of those phrases that I had heard candidly was that CDI education seemed so random from our previous approaches. And that it was just a flavor of the month, something that was in front of their minds, and then they never heard anything more about it again until that flavor came up again. So these meetings really opened the door to be able to have ongoing dialogue, and to be able to continue to keep that discussion and commentary going. So to be able to feed back the progress, or lack thereof-- if nothing was really changing, then we needed to regroup and be able to further emphasize or re-educate, revisit any of the themes that we were continuing to find in the data, to continue to move that needle.
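The KPI tracking described here can be framed as a baseline-versus-current comparison, where each metric has a desired direction of movement. A sketch-- the metric names and target directions below are assumptions for illustration, drawn loosely from the slide's list:

```python
# Illustrative KPI tracker for the monitoring step described above.
# Metric names and desired directions are assumptions based on the slide's
# list (fewer queries, higher CMI/SOI, lower mortality index).

TARGET_DIRECTION = {
    "query_rate": -1,       # fewer queries issued
    "cmi": +1,              # higher case mix index
    "soi": +1,              # higher severity of illness capture
    "mortality_index": -1,  # lower observed/expected mortality
}

def kpi_progress(baseline, current):
    """Return {kpi: True/False} for whether each metric moved the desired way."""
    return {
        kpi: (current[kpi] - baseline[kpi]) * direction > 0
        for kpi, direction in TARGET_DIRECTION.items()
        if kpi in baseline and kpi in current
    }
```

Tracking per period, per physician group, is what supports the ongoing feedback loop the speakers describe, rather than the one-off "flavor of the month" education they were moving away from.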
So we wanted to maintain visibility of an ongoing initiative-- not just a meeting, and then they weren't going to hear from us for a while. (DESCRIPTION) Slide. The Four Step Approach. A graphic of a circle cut into quadrants, with labeled text. (SPEECH) So those are the four steps. It's really very simple. Again, you've got to be able to analyze the data, select your audience, deliver that presentation, and then track the outcomes and monitor that performance. (DESCRIPTION) Title Slide. Key Outcomes and Lessons Learned. (SPEECH) So for key outcomes and lessons learned, I'll go through these quickly, just to demonstrate how and why we were able to use the data and validate this approach. (DESCRIPTION) Slide. A pinwheel. Text in a circle wheel surrounded by six circles on spokes, labeled with graphics and text. (SPEECH) So the next slides really break out each of these areas of the circle. It's kind of a more holistic view of the various angles where we were able to see improvements, and tremendous steps forward with the CDI program this past year. (DESCRIPTION) Slide. A Deeper Look, Vascular Surgery. Two tables of text data, labeled At a Single Facility and Across the System. (SPEECH) So first, we're going to take a look through a vascular surgery lens. So at one of our facilities, we met with the leadership of the vascular team and established weekly rounding-- which we had not had in place before. Through his advocacy, support, and understanding of the importance of the data, and the documentation driving that data, he was able to get us embedded into their weekly rounding to be able to talk through live cases. So we saw, after one quarter, a real quick win on our severity of illness and risk of mortality scores, as indicated there on the slide. And then, we also saw year-over-year improvement as we continued that engagement with that group-- their severity index improved, and their CMI variance improved.
And the opportunity per case, per physician, in that group on average decreased by about $2,000 for each case. So although those negatives are still there on those percentages-- meaning that we are still under the national norm-- some significant headway was made through that engagement. From a system perspective, we were able to incorporate the data again into the prioritization functionality in our 3M product, 3M's 360 Encompass. And so for DRGs 219-221, taking the average increased CMI shift that occurred within just that DRG grouping, applied to the volumes within that grouping, and with our Medicare blended rates, we approximated that we had about $388,000 gained due to increased CC and MCC capture within just that one DRG grouping for the year. (DESCRIPTION) Slide. A Deeper Look, Orthopedic Surgery. Two additional tables, similar to the previous slide, but with different data. (SPEECH) The second example here was from an orthopedic surgery lens. So this was an interesting engagement, in that I was so impressed by the level of engagement and championing that this one particular surgeon had in acknowledging and validating the importance of our data and all of the information that I had brought forward to him. It appeared that he had a potential opportunity of greater than $1.5 million just for his spine cases. Those of us across the audience that do the chart reviews know that those spine cases can be very difficult to get opportunity captured on, as such. It also is very difficult for an orthopedic surgeon to feel confident and aware of all of the medical criteria that go into making many of those medical diagnoses for their patients.
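The $388,000 figure comes from the kind of back-of-the-envelope estimate described above: a year-over-year CMI shift within one DRG grouping, applied to its case volume at a Medicare blended rate. A sketch with placeholder inputs-- the actual DRG 219-221 figures weren't shared in the talk:

```python
# Rough sketch of the revenue-gained estimate described above. All inputs
# here are illustrative placeholders, not the actual DRG 219-221 figures.

def revenue_gained(cmi_before, cmi_after, case_volume, blended_rate):
    """Estimated dollars from increased CC/MCC capture within a DRG grouping."""
    return (cmi_after - cmi_before) * case_volume * blended_rate

# e.g. a 0.12 CMI lift across 450 cases at a $7,000 blended rate
estimate = revenue_gained(cmi_before=1.90, cmi_after=2.02, case_volume=450, blended_rate=7000)
```

As the speakers caution later in the session, estimates like this are directional rather than actualized dollars.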
So due to the size of the opportunity and the level of engagement, there was a business case brought forward that ended up being approved to hire a nurse practitioner who would be more medically trained, to be able to support this group in their documentation and in covering those patients, so that he could spend more of his time in the OR where he needed to be, but we could also ensure that we had the expertise that we needed to be able to look at the medical diagnoses for those cases. So additionally, beyond that, some more of the qualitative conversations that were spurred from many of these meetings across the system have been to further our own internal evaluation of the value of potential documentation assistance tools-- the computer-assisted physician documentation. Specifically, more engagement and collaboration than we have ever had before, as far as invitations to be at the table, bringing our data in conjunction with much of the data sets that our quality team is using, from care management, et cetera, and being able to really collaborate and move the needle going forward. (DESCRIPTION) Slide. A Deeper Look, C.D.I. Program. Four small columns of bulleted text. (SPEECH) So more of our outcomes-- you probably saw it on that wheel slide a few slides ago, but we did achieve a 36% increase in our financial totals for the year. This was in conjunction with, again, the prioritization functionality: being able to identify what DRGs had the biggest room for movement with that CC/MCC capture, and making sure that we were getting CDI coverage on those DRGs. So I have a couple of them listed there, again, with those approximations, year over year, of what was achieved due to increased CC/MCC capture for those DRG groupings.
In a time where we all are needing to be good stewards of our resources across all of our organizations in health care, it was a tremendous investment that we were able to achieve, as well, that it was decided we would further expand the program this past year. And we had five new FTEs approved for the CDI program in the fall of last year. So initially, those roles were slated to be an additional educator, an auditor lead, and two new CDS positions. We did shift that educator into an additional auditor, but we were able to get those posted, filled, and trained, and we are working-- we're working hard. From a CDI performance perspective, the query rate increased from 31% to 37% on average. So all of these data insights have really helped us be able to build our internal CDI education in partnership with each of these physician education engagements. So we are able to know what those opportunities are, to be able to adjust, to make sure that we're covering the right cases, that we're asking the right questions, and that we're providing the tools and resources needed to keep the success of the CDI program moving with the opportunities for the claims. (DESCRIPTION) Slide, Final Thoughts. Two columns of bulleted text. (SPEECH) So that brings us to the end here and some of our final thoughts. So some of the challenges and our lessons learned are important to call out-- providing sustained feedback was impacted by reporting cadence. But I will own that that was our own internal decision. It's very difficult to know what that sweet spot is going to be, because there is such an abundance of data through these reports. So if you are getting the reports too frequently, you may not be able to maximize each of the shifts in opportunity. But at the same time, we really have been needing to get that feedback as timely as possible to keep the engagement, and keep that conversation going, as I said.
So that was just one internal finding that we have experienced and had many conversations around. Be careful with the data projections. So these are not actualized dollars-- these are not actualized results where you should expect to get the whole amount. The methodology is good, and it's what we have to be able to benchmark against our peers. But every patient, and every patient population at every site, is different. And there are those nuances where you've got to be able to do that case-level review and validate that the opportunity the data is telling you about is or is not there, and be able to adjust accordingly. As a trend, as a kind of compass to point us in the right direction-- invaluable. But just not actualized to the exact dollars. Be careful with data getting into the wrong hands, potentially. Some of these data sets are complicated, and it takes some time to explain what the audience might be receiving. So be careful to ensure that you're either able to explain it, or to simplify that message as best you can, and not have incorrect interpretations and assumptions made, especially when it comes to some of those actualized-dollars perceptions that might be out there. Limitations with physician attribution-- we definitely ran into some of this, as far as being able to identify, for each provider, which group they need to fall within in the data mix. And I'm going to pause and shift it over to Kaycie-- I think she has some more to say on that particular point. Thanks, Carrie. Yes, within PDM, there is a limitation with physician attribution. The physician attributed to a case is the attending of record at discharge. So we're not able to say that a case is attributed to a particular surgeon if they were not listed as the attending. We know that can be problematic, especially in the situation where the hospitalist is the attending-- we know that they didn't perform the surgery on a surgical DRG, but it is a limitation within the system.
So with the last bullet, the impact of COVID-19 on benchmarking-- we are all well aware of what COVID did to us from a benchmarking perspective. And when we look back to our 2020 data, obviously, 2019 didn't have COVID in there. So from a CMI perspective, a lot of facilities saw an increase, comparing year over year, when we had 2020 data with all the COVID in it, because we had this high rate of medical DRGs, and the surgeries that could be performed were emergent, so they were super high weighted. And then we get to 2021, where we may have had a little bit more of a normal year, and from a CMI perspective, comparing back to 2020, things didn't look so great. Risk of mortality was another one where we saw COVID have a huge impact, in that there was no COVID data in the benchmarking information. So all of the additional unexpected deaths in the pulmonary population really hurt a lot of facilities. So one of the things that we've done at 3M, in the PDM data-- we know that MEDPAR and HPOP are a handful of years behind. But one of the things that we came up with is an internal benchmark called CCB, which stands for client comparative benchmarks. And those are based on participating 3M clients that are in this pool of data, so that our PDM customers can-- working with their performance advisor-- select which CCB benchmark works best for them. Are they an academic facility? Are they a smaller, rural facility? They pick the CCB that fits their facility, and it gives us a more real-time benchmark to look at: how are other 3M customers or clients that look like your facility doing? And we're able to do that in a more real-time, more current fashion. Turn it back over to you, Carrie. Great. So I feel a bit like a broken record at this point. But our criteria for success emphasize much of what we focused on here today. So do ensure accurate physician demographic data.
So Kaycie talked about attribution, as far as that case and who it's assigned to. But beyond that, too, there is the ability to identify and make sure that you are correct on which physician group and which specialty those physicians are aligned with, because that, in turn, impacts the data that you're able to see from that internal comparison-- some of those graphics shared earlier in the presentation. Always leverage your case examples, the real examples; show those documentation opportunities and make those as real time as possible, and as applicable as possible to your audience. Garner physician and care site leadership support and participation. So whether that is through your physician advisor team, through your leadership at your site specifically, or even being able to gain that leadership with that specific physician group-- invaluable, to be able to have that backing and even just one more angle of perspective to share with the group, and to help answer the questions that undoubtedly will be raised through each of these education sessions. Partner your data with your prioritization functionality for your CDI team, if you have it, to make sure you're getting coverage onto the DRGs that have the opportunity. Do our due diligence to make sure that we are present, we are reviewing, and we are able to catch that opportunity, especially as those physicians are trying so diligently, so hard, to get the documentation in there. Tailor your data and presentation to each audience-- not every data point is necessarily going to be of interest or even be that compelling. So pick and choose; make sure that what you're pulling together is going to be well received and will provide the appropriate "what's in it for me" type strategy and hook, to get their buy-in through that conversation. And then, track your results. So that is it in a nutshell.
And I know there have been a few questions coming through, but I'll turn it back over. (DESCRIPTION) Title Slide. Q and A. (SPEECH) Great, thank you both so much. I mean, just a wealth of information, and such a wonderful program that you all have set up. We don't have a ton of time, so I am going to ask one question for you. How often do you use the PDM data to update your prioritization? Great question. We are using it on a quarterly basis. It's difficult, again, to get that feedback in real time, given the cadence of reports, like I was talking about. But in terms of prioritization especially, quarterly gives us enough time to potentially show any shifts-- if there have been changes to the DRGs, if maybe a different grouping has risen to the top as new opportunity that we need to focus in on, or if others that we have been focused in on have actually dropped and don't need to be a focused DRG any longer. We want to be very judicious about how many DRGs we're identifying as focused DRGs. You can't have all of them be focused, or it kind of wipes out the point of being able to flag those as higher priority. So great question, and we are reevaluating and assessing on a quarterly basis, in conjunction with these PDM reports. So I think a good follow-up question actually would be, who is reviewing that PDM information and then developing the action plan during those quarterly reviews? So it's a combination, a collaborative effort. As I mentioned at the beginning, we have a number of advanced CDI roles-- we're very fortunate to have built this education support team that we have with leads, CDI auditors, managers, myself, and educator roles. So we actually all share the wealth a bit. I do have a CDI auditor who is designated for much of the data and has become an expert in terms of the reports and getting in there and being able to navigate most efficiently. So she and I partner together in terms of pulling out that PDM opportunity and data.
But then when we get into the case level reviews, we really assess to see who has bandwidth at what point in time, and be able to get those cases looked at to validate the data together. In terms of the action plans, it has been a combination of myself and our managers bringing those findings to our care site leadership meetings with our CMOs, to be able to then further prioritize and determine whom we would want to be meeting with. So I hope that answers the question sufficiently. (DESCRIPTION) Title Slide. That's a wrap! (SPEECH) Yeah, I think that was perfect. Again, cannot thank you both enough for the great information that you provided today. (DESCRIPTION) Slide. Consulting and Outsourced Services Content. Three graphics of photos and text, too small to read. (SPEECH) If you are interested in learning more, that link that is in the resources section didn't work, so I posted it in the Q&A section, where you can get some of the services that we do provide with performance data monitoring. We'll be putting this recording on our website soon, so if you do want to listen in again, that will be available. The handout is available in the resources section, as well as the ability to register for our next webinar that's coming up in August. So if you do want to learn more about HCCs, you can register for that in the resources section. And again, we always appreciate your feedback. So if you have the opportunity to complete the survey, we certainly appreciate that, as well. Again, Carrie and Kaycie, thank you both so much for the information today. And we really appreciate your time, as well as everybody joining today. So thank you all again, and we look forward to seeing you in August. (DESCRIPTION) Slide. Thank you.

      Webinar title slide

      Data as a catalyst to CDI program performance and physician engagement: A four-step approach

      • June 2022

        In this presentation, attendees will hear how legacy SCL Health, now the Peaks Region of Intermountain Healthcare, leveraged claims data to conduct an in-depth CDI performance reporting and analysis. Participants will learn how legacy SCL Health created a targeted strategy to engage and educate physicians in a four-step data-driven approach focused on key outcomes, early wins, expansion to all payers and increased commitment from leadership.

      • Download the handout (PDF, 2.8 MB)

    • (DESCRIPTION) A slideshow. Slide, New year, new webinar platform! A woman appears on a video call in the top left corner of the slide (SPEECH) Well hello and good afternoon, and thank you for joining the first CDI innovation webinar of 2022. (DESCRIPTION) Slide, Housekeeping, a bullet point list (SPEECH) We are excited to have Tami Gomez here with us today. Before we get started, I just wanted to go over some housekeeping items. If you were with us last year, you may notice that we are using a new webinar platform. We are excited for this new enhanced user experience, so before we kick things off, I just wanted to go over some of the new features and layout. There is an engagement toolbar at the bottom of your screen that you can use for the different sections of the portal. You also have the ability to move and minimize those different sections. Because this is a web-based platform, there is not a dial-in number to participate by phone. If you are having audio issues, please check your speaker settings, clear your cache, and refresh your browser. If you do need closed captioning, we offer that within the live stream section-- you can click there to enable that feature. As always, we encourage questions throughout the webinar. We have a lot to get through today, so we will personally follow up after, but please add all of those questions to the Q&A box below. We do provide a certificate of attendance that you can submit to obtain credits, as well as the handouts for the webinar. Those can both be found in the resources section for download. If you would like to learn more about our products and solutions, you can click on the Learn More button under the slides. And as always, we appreciate your feedback, so during the webinar there is the ability to complete the survey in the portal, or it will launch at the end of the webinar.
But if you do ever have a question, again, with those engagement tools at the bottom of your screen, there is the ability to contact us. (DESCRIPTION) Slide, 3 M C D I Innovation Webinar Series, February 2022 (SPEECH) All right, so before we get started, I do just want to introduce today's speaker. Again, we have Tami Gomez, as she goes over a global approach to engaging physicians and CDI operations with an AI-powered CDI workflow. Tami is an AHIMA-approved ICD-10 trainer and the director of coding and CDI services at UC Davis. UC Davis has been named a coding and CDI gold standard program for data analytics by Vizient and was awarded for their diversity in 2021 by ACDIS. And so Tami, I am going to pass things over to you so you can go ahead and get started. (DESCRIPTION) Slide, Meet our speaker. New slide, Agenda, a bullet point list. Tami appears on the video call (SPEECH) Thank you. Thanks for having me today. So today we're going to talk about how to understand and prepare tactics, and how we actually leverage the 3M M*Modal CDI Engage One for our inpatient team. I'm going to talk about the impact automation has on some of your key performance indicators, strategies to engage our physicians, how to leverage your data and focus the work through stabilization, and our lessons learned in implementation. (DESCRIPTION) Slide, Why are we doing this? (SPEECH) So first we asked, why are we doing this? Leveraging technology to make CDI operations efficient, easy to manage, and to partner across departments with ease-- technology in many ways is really doing more with less, as we are now empowered by artificial intelligence. So that was really the goal here. (DESCRIPTION) Slide, Who we are, a bullet point list and a picture of a hospital (SPEECH) So just a little bit about who we are: UC Davis is a 625-bed multidisciplinary academic medical center. We are a burn institute and a children's hospital as well.
We are in the process of building a new California tower, which will add 75 additional beds. We serve 33 counties covering about 65,000 square miles, an area stretching north to the Oregon border and east to the Nevada border. We're recognized as one of the most wired hospitals in the US. We are ranked Sacramento's top hospital by U.S. News & World Report, among the nation's best in 13 medical specialties, and we've been recognized as the best hospital four years in a row in the greater Sacramento area. (DESCRIPTION) Slide, Organizational Chart: Health Information Management (Patient Revenue Cycle) (SPEECH) I just want to give you a little bit of background about the organizational chart. CDI and coding report up through the revenue cycle. There is the CFO, and then an executive director, and then I am the director over coding and CDI services. I also have a team of physician advocates, and those individuals are actually physician trainers. They help with documentation integrity by building templates and smart lists and dot phrases. They have a big role in helping to ensure documentation throughout the record. There's a coding manager on both the inpatient and outpatient side, there's an outpatient CDI supervisor and an inpatient CDI manager, and then we have a whole data quality integrity program as well that supports all of the analytics to drive KPIs and performance improvement. (DESCRIPTION) Slide, Homegrown auto-assignment & leveraging 3 M (SPEECH) So I'm going to start off by talking about how we were able to create homegrown auto-assignment leveraging 3M. (DESCRIPTION) Slide, Birth of Auto Assignment - No direct integration with 3 M, a list (SPEECH) The starting point to making the most of AI was how we could develop some type of automation for assigning CDI daily cases. As you know, every morning this was a very manual process for us.
We would look at our admissions for that day, and we'd have to manually distribute them and prioritize which ones we could review, factoring in how many people we had off, so it was a lot of manual work. It took about three to four hours to complete on a good day, and on Mondays it was much worse, as you can probably imagine. We had admissions from Friday, Saturday, and Sunday that we had to consider. It was our goal to fine-tune this process. So as we approached going live with CDI Engage One, we also talked about how we could automate assignment for CDI. We did initial reviews, and then we looked at Tableau assignment, and that was the approach that we took. We used historical data to identify the average number of new reviews, and then we used prioritization from a hierarchical viewpoint. (DESCRIPTION) Slide, Creating the Logic: How to Start, a list (SPEECH) So how we started is we created logic. We worked with some very talented report writers who created logic where we started with the hospital service, and we changed that to the hospital division. We looked back three days, with logic to not duplicate. We exclude patients who are discharged-- if a patient has been discharged, they're excluded. We also excluded newborns: basically the logic looked for any newborn admission type and/or a baby girl or baby boy within the name. (DESCRIPTION) Slide, Setting Max Accounts: Eliminating Reconciliation, a table (SPEECH) We also set max accounts. Eliminating reconciliation has allowed us to assign more cases, and I'll talk briefly about what we did. At UC Davis, we had a really high coding accuracy rate. We had two independent audits done on our coding. Our coding accuracy rates are around 99.96%, and I really felt that the time spent trying to determine why there was a DRG mismatch wasn't the best use of that time, and that there could be a better process in place.
And so we eliminated the DRG reconciliation process for the CDIs on the front end. They're not doing any DRG reconciliation, but I do have a back-end reviewer who takes a look at all the DRG mismatches every day and provides individual feedback with any references, whether it's a coding clinic or something that was documented after the last review, and provides daily feedback to the staff. That enabled them to spend about 33% more of their day doing concurrent clinical reviews. What we did is we looked at each day of the week and decided how we wanted to create the logic to assign cases, and this has been tweaked multiple times. So you may start out with saying, OK, on a Monday, if we have one person on PTO we're going to assign 10 cases to every CDI, but if we don't have anybody out on PTO, maybe we'll do 11. We programmed holidays into the system. We've connected this to an actual Teams calendar where employees put in their time off, so the logic recognizes when somebody is off and doesn't assign a case to them. If we have two or more people out on a Monday, then 11 get assigned, and so on and so on. You get the gist: Tuesday is 8, Wednesday through Friday is 7, and if we have people working on the weekends, it's 7. However, we've decided to tweak these numbers a bit, so on Monday it's 11 or 12 depending upon the circumstances, on Tuesday it's 8 or 9 depending on the circumstances, and then Wednesday through Friday it's 7 or 8 depending on the circumstances. (DESCRIPTION) Slide, One-Size will not work, Program Flexibility and Triggers are key, a list (SPEECH) So one size fits all will not work. You have to be flexible, and triggers are the key. We created a database to check schedules and check when there are holidays or when staff is off, and we created all of these checkpoints to make sure the logic recognized when not to assign a case.
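The assignment logic described above can be sketched roughly as follows. This is a minimal illustration only: the specific caps, PTO tiers, and field names are assumptions for the example, not UC Davis's production rules, and the real system pulls PTO from a Teams calendar rather than a function argument.

```python
from datetime import date

# Hypothetical per-CDI assignment caps, keyed by weekday (0 = Monday),
# then by how many staff are out on PTO that day. Values are illustrative.
DAILY_CAPS = {
    0: {0: 12, 1: 11, 2: 11},   # Monday: heavier after the weekend backlog
    1: {0: 9, 1: 8, 2: 8},      # Tuesday
    2: {0: 8, 1: 7, 2: 7},      # Wednesday
    3: {0: 8, 1: 7, 2: 7},      # Thursday
    4: {0: 8, 1: 7, 2: 7},      # Friday
    5: {0: 7, 1: 7, 2: 7},      # Saturday
    6: {0: 7, 1: 7, 2: 7},      # Sunday
}

def max_cases_per_cdi(day: date, staff_on_pto: int) -> int:
    """Return the per-CDI assignment cap for a given day."""
    caps = DAILY_CAPS[day.weekday()]
    # Two or more people out falls into the highest PTO tier defined.
    return caps[min(staff_on_pto, max(caps))]

def is_assignable(patient: dict) -> bool:
    """Apply the exclusions described in the talk: discharged patients and newborns."""
    if patient.get("discharged"):
        return False
    name = patient.get("name", "").lower()
    if patient.get("admission_type") == "newborn" or "baby girl" in name or "baby boy" in name:
        return False
    return True
```

In practice the same shape of table lets you retune the numbers repeatedly, as the speaker describes, without touching the assignment code itself.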
(DESCRIPTION) Slide, Auto-assignment & concurrent reviews: Prioritization, a bullet point list (SPEECH) So, auto-assignment and concurrent reviews-- the prioritization within CDI Engage One made this a little bit easier, so I'll go over that. Our challenge with auto-assignment was managing and organizing concurrent reviews. The good news is that we had 3M with the key prioritization factor to assist with managing concurrent patients. What we did is we customized that prioritization list to look at all accounts that had just a single CC or a single MCC. We were looking at all mortalities; we were looking at accounts with pending queries, which are reviewed daily. We're looking at malnutrition cases because there's an organizational goal associated with that, and we're looking at certain sepsis cases because of the high clinical validation denial rates that we're starting to see. This has been ongoing, and prioritization will be ongoing as our KPIs organizationally change. So we're digging deeper and prioritizing accounts to maintain 20-40 total reviews per week right now. Our CDIs have anywhere between 36-40, not to exceed 40 total cases that they're reviewing, and that includes initial and re-reviews. We also said we want to remove cases from the priority list if they have two CCs or two MCCs, if they're fully optimized from the SOI and ROM perspective, and so on. So you can really customize that prioritization list to your needs and your organizational challenges and make changes to align with what you need. (DESCRIPTION) Slide, Leveraging 3 M: Concurrent review prioritization, a list (SPEECH) On concurrent review prioritization: priority scoring for concurrent reviews can help surface CDI opportunity. So anytime there is a PSI, it falls on that priority list.
Medical or surgical cases without a CC or an MCC, cases where a symptom diagnosis is driving the DRG, and then the 3M prioritization and scoring-- we can also customize that scoring. If we want to focus on and make certain things a priority, we can do that organizationally. (DESCRIPTION) Slide, Scoring & Priority Factors, a screenshot (SPEECH) This is just a screenshot of scoring and the priority factors; I just wanted to share that with you. There's a lot on this slide, so I won't go into it, but we've created some customization around this so that we can make prioritization effective for our organizational needs. (DESCRIPTION) Slide, C D I Teams: Prioritizing concurrent reviews, a screenshot (SPEECH) And this is just another snapshot of the CDI team's prioritization of concurrent reviews and how they look on the screen. (DESCRIPTION) Slide, C D I: Evidence sheets - heavy lifting by tool, a screenshot (SPEECH) We also have evidence sheets as part of the CDI Engage One tool, and they actually do a lot of the heavy lifting. What this does is alert the CDI if there's a potential query opportunity. In some cases, this may be something your CDI already has on their radar and is following, so it's just confirmation that they're on the right track. And sometimes it may be something that they had overlooked or missed, and this pops up to let them know that they should either keep it on the radar and follow it, or that there's a query opportunity. So we use the evidence sheets as well. (DESCRIPTION) Slide, Other incentives I P C D I evidence sheets provide, two screenshots (SPEECH) There are other incentives that inpatient CDI evidence sheets provide as well, and this is what that looks like. So this is just another screenshot of what the evidence sheets and prioritization look like together. (DESCRIPTION) Slide, F Y 2021: Auto Assignment Data.
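The priority-factor idea above can be illustrated as a simple additive score. The factor names and point values below are assumptions made up for this sketch; in the product this scoring is configured in the 3M tool itself, not written in code.

```python
# Illustrative sketch of a concurrent-review priority score built from the
# factors described in the talk. Names and weights are hypothetical.
PRIORITY_FACTORS = {
    "single_cc_or_mcc": 30,        # only one CC or MCC on the account
    "mortality": 40,
    "pending_query": 50,           # pending queries are reviewed daily
    "malnutrition": 25,            # tied to an organizational goal
    "sepsis": 25,                  # high clinical-validation denial rates
    "psi": 45,                     # patient safety indicator present
    "symptom_dx_driving_drg": 35,
    "no_cc_mcc": 30,
}

# Conditions that remove a case from the priority list entirely.
REMOVE_FROM_LIST = {"two_plus_cc_mcc", "soi_rom_fully_optimized"}

def priority_score(flags: set) -> int:
    """Score an account from its flags; 0 means it drops off the worklist."""
    if flags & REMOVE_FROM_LIST:
        return 0
    return sum(PRIORITY_FACTORS.get(f, 0) for f in flags)
```

For example, an account flagged with both a mortality and a pending query would score 90 under these weights, while a fully optimized account scores 0 and falls off the list, which mirrors the "remove cases with two CCs or two MCCs" rule the speaker describes.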
A bar graph comparing 2020 and 2021 shows the numbers for 2021 higher in all categories (SPEECH) We also did a comparison for fiscal year 2021, and you can see the impact we had as we changed to auto-assignment-- how many more cases we were able to get to when we compare 2020 to 2021. So this is just a slide to show that by eliminating your DRG mismatch reconciliation and also using your prioritization tools, auto-assignment, and evidence sheets, all of that automation can help increase the number of reviews and the number of cases that your team can touch. (DESCRIPTION) Slide, Query Rate: 2020 Compared to 2021, a line graph (SPEECH) This is also a query rate comparing 2020 to 2021, and our query rates went up as well. So what we did is we used the CDI Engage One evidence sheets, we turned on the auto-assignment, and we also used our data to drive some of our improvement metrics, to continue to tweak and refine some of the processes that we put in place. Again, that's going to be ongoing. I think no matter what you're doing, there's always going to be an opportunity to continue to enhance and improve on automation, or the processes that you've put in place, or how you prioritize your reviews. (DESCRIPTION) Slide, K P I Improvement Journey: Coding and Clinical Documentation Integrity (SPEECH) I'm going to go over a little bit of our key performance improvement and the journey we had with seeing improvement. (DESCRIPTION) Slide, What we did to improve K P I's, a bullet point list (SPEECH) So what we did to improve our KPIs: we expanded our CDI program, we discontinued the reconciliation process, which I've mentioned, we perform ongoing audits both on the coding and the CDI program, and we established back-end reviews and controls to ensure integrity. We've invested in technology-- the CAPD, the HCC Management, CDI Engage One, which includes those prioritization tools. And we do data analysis; we're big on data.
And we've done a lot of work around decreasing one-day stays. We found that as an organization, we were an outlier in that area, and it did create some opportunities. And then template builds, utilization of dot phrases, smart lists, et cetera. (DESCRIPTION) Slide, bullet point list continued (SPEECH) Physician buy-in and education was also key. We also had to designate physician champions on both the inpatient and the outpatient side of the house for CDI. We aligned with our physician advisors, our case management team, our quality and safety, our patient financial services, and population health, and then we customized data and did analysis that was actionable for various service lines. So we leverage data and analytics to drive improvement in documentation and operationally. (DESCRIPTION) Slide, Case management and leveraging 3 M, a screenshot (SPEECH) These next couple of slides will show some of what we've done with case management. If you're familiar with the working DRG, we basically send all cases over to case management via an interface when there is a working DRG assigned by the CDI, so that they have that geometric mean length of stay to help improve our outcomes with hospital length of stay. But we also realized that, hey, they're not touching 100% of cases, and asked what we could do to get them a working DRG on every case. Well, there is also an auto-suggested DRG. So if the CDI doesn't touch the case, the CAC will come in, review the record, and auto-assign an MS-DRG, and that will also interface over to the case management team so they have that geometric mean length of stay. We did educate them on the fact that this is not a human being touching this-- this is all AI-- and that things could change by discharge.
So they understand that this is just a preliminary look based on documentation in the record, but it has really helped that team understand the geometric mean length of stay and how our patients should be managed in terms of trying to discharge them in a timely way. (DESCRIPTION) Slide, Epic View, a screenshot (SPEECH) This is just a view of where they can see that in Epic. So again, there's an interface that goes out of 3M into our EHR, and that's where they find that information in the chart. (DESCRIPTION) Slide, Case Mix Index, a line graph and two bar graphs. All three graphs show a steady increase over time (SPEECH) So this is just a snapshot of case mix index. While case mix index isn't a great indicator of CDI work, it is something that we have tracked as a KPI for CDI, because we do have some impact, especially when we talk about capturing CCs and MCCs to drive that case mix up. You can see right around here is where we implemented our artificial intelligence, and you can see the impact it's had both on our adult population and our pediatric population. (DESCRIPTION) Slide, C C slash M C C Capture rates, two line graphs, two bar graphs, and a scatter plot. All graphs show a steady increase over time (SPEECH) Now, while I just mentioned CMI is not always a great key performance indicator for CDI, in my humble opinion CC/MCC capture rates are. And as you can see here, the same trend is happening with our adult CC and MCC capture and our pediatric CC/MCC capture. Not only that, but when you come over here on this slide to the right, you can see the trend from fiscal year 2020 to fiscal year 2021. And you can see over here where it says AMC distribution-- basically these gray dots are all academic medical centers and where they fall with regard to their CC and MCC capture rate.
And we're this dark blue dot here, so we're technically in the top 10% of all academic medical centers within our benchmark group, and there are 180 or so academic medical centers. And this tells me this is really a direct reflection of CDI work. In fact, I can take this data and quantify, using some of the data that we have within 3M, whether the CC or the MCC was a direct reflection of querying, or CAPD, or one of the metrics that we're actually using to touch cases. (DESCRIPTION) Slide, Strategies to engage physicians. New slide, Phase 1: Kicking Off the Project (Initiation of Partnerships), a bullet point list (SPEECH) The next couple of slides will be strategies on how to engage your physicians. It's not always easy kicking off the project; we really had a large group of individuals. We partnered with our system administrator, our service line medical directors, and our physicians-- they're obviously key. So depending on your environment: we partnered with attending physicians to meet and kick off the project, and began to establish partnerships with clinic managers and physician specialties, to leverage physician connections with medical assistant and nursing teams as well. This does work virtually if executed correctly, because we had to do it that way due to COVID, so I can say without a doubt that it can be done. Again, when presenting, keep it to 15 minutes and always be ready to do a demo that works perfectly. So when we were meeting with them to talk about CAPD, and why it's important, and why we were rolling this product out, there were a lot of questions about, why are we doing this? This is one more thing that we have to do. And really the education was focused on why CAPD capture is important and on leveraging any data available-- RAF scores, MIPS, risk adjustment. So we talked about how this product actually engages with the physician in real time at the point of care.
Instead of receiving a query two or three days later, this really is something that will ping you in real time for you to enhance your documentation. And so you've got to keep at it. You're going to get physicians who are going to be naysayers, honestly, or who are just not interested in hearing what you have to say. So what we tried to do is get some champions behind us, get physicians to see the importance behind this product, and we kept at it. We kept customizing, and tweaking, and turning things on and off, and doing what we could to make this as meaningful as possible for them, because if it's not meaningful for the providers, they're not going to engage with it. My one takeaway here is that it was not immediately accepted-- physicians weren't readily receptive to this-- but we kept at it. We kept working with them, we kept enhancing things, we kept customizing things, and that's where we really got physician buy-in and engagement. (DESCRIPTION) Slide, Phase 2: How to Engage Physicians (Resources), a bullet point list (SPEECH) Resources are essential. So, tools for physicians: tip sheets, videos-- we actually sent out a video-- and we have an EMR newsletter, and we sent out some information in that. So wherever we could create tools or ways or enhancements, we did. Again, we kept it to five minutes. Our last video was eight minutes, when we recently launched HCC Engage with our providers, and the feedback was that it was too long, so we condensed it. Focus on showing physicians how to answer and engage with the tool in these videos. And then you need to have physician educators and trainers-- people who can be shoulder to shoulder with the providers if they have questions, who can train them how to use this, or walk them through every little nuance. It may be something like, how do I dock this and get it out of the way while I'm doing my charting?
And so that's what we did: we made sure that we had somebody available for these physicians whenever they had a question or a concern. (DESCRIPTION) Slide, Phase 3: Continuous Partnership, a bullet point list (SPEECH) Again, continuing to partner. We believe in continuing partnerships with key stakeholders to leverage technology to ensure success. We identify key stakeholders and design workflows for automation, and we leverage data to facilitate engagement. Using data, going through the meaning behind the nudge, and inviting physicians to the table has been extremely helpful. So when you're creating a nudge, especially a custom nudge with the CAPD, you want to look at that clinical content to make sure that the nudge is firing and is meaningful to the providers. For example, there were some out-of-the-box nudges within the content guide that 3M provided; one of them was on sodium and hyponatremia, and it fired when there was just one abnormal lab value. Our physicians said, no, we don't want that. This is what we want: we want there to be two abnormal lab findings, and we also want to know this, this, and this. And so what we did is we worked with the content team at 3M. And we said, we'd like to revise the current nudge that you have on hyponatremia, and we want to customize it to something that is a little more meaningful to our physicians. Getting their buy-in on all of that, especially on the pediatric side with the children's hospital and different things like that, has really been key. So having a physician who's willing to go over the clinical content that's going to fire that nudge will be key for your organization. Again, I can't stress it enough: be flexible. Data may change, workflows will change, but keep working the plan, and keep on making this something that is meaningful for the providers. How can we help? How can we change things? What would make this better?
And getting that feedback and making those tangible changes will have impact. (DESCRIPTION) Slide, C A P D (Computer Assisted Provider slash Physician documentation) (SPEECH) So, data focus and insights on rollout with physicians-- I'll go over some of that on the next slide. (DESCRIPTION) Slide, Define C A P D focus and nudge definition: Ongoing, a bullet point list (SPEECH) So, focus on the clinical conditions and the procedures turned on. Define what a nudge means to your provider community: a clinical diagnosis or procedure that has clinical evidence and a physician message. Always review the data, and always provide an overview of all nudges-- the rule and the physician message. And then the customization, as I talked about, is really the key for us, especially with the children's hospital. There is not a whole lot of clinical content in the clinical content guide that 3M offers on the pediatric side of the house, and so we have really been successful with customizing those nudges to make them meaningful for that population of patients. (DESCRIPTION) Slide, C A P D - The Why on Streamlining Physician Engagement, a list of goals (SPEECH) So, the why on streamlining physician engagement: physician documentation guidance using evidence-based clinical definitions, having a virtual conversation to add the critical details that impact treatment and outcomes, engaging physicians at the point of care to reduce queries, and then overall quality improvement in patient care outcomes. That's really your clinical decision arm, so those were the goals. But on engaging physicians at the point of care to reduce queries: what we found is that some of these nudges, for things like CHF acuity or acute blood loss anemia, are things that physicians have been queried on routinely at our organization and have done a really good job of addressing. And so we don't have a whole lot of opportunity there.
But what we found was that there was opportunity with certain things. We ran a lot of data, we looked at what our number one query was organizationally and by service line, and got really granular, and we were very specific and deliberate about what we turned on, where we turned it on, and for whom. (DESCRIPTION) Slide, What is required for a nudge to fire? (Repeat slash Rewind), a picture of a fire in a fireplace (SPEECH) And then using the data that we have has really been key, so that we can go back to providers and say, here is your capture rate on this diagnosis for this patient population, and here is the rate your peers within an academic medical setting are capturing it. And when they can see something tangible, like, hey, I'm only capturing this diagnosis 5% of the time compared to my peers who are capturing it 25% of the time, they're very engaged and interested in what they can do to be better at documenting this specific condition or whatever it might be. (DESCRIPTION) Slide, A Nudge Requires, a bullet point list (SPEECH) So this next slide will be basically what's required for a nudge to fire, and then we're really just going to be on repeat and rewind from here on out. So a nudge requires specific criteria: a rule that points to the clinical evidence or documentation we want the tool to reason over before firing. For example, the clinical note says sodium is 128. The program fires the nudge for the clinical diagnosis as it relates to that clinical evidence (evidence of hyponatremia, the sodium level), and a physician message will populate in the Fluency Direct pill, which is part of the CAPD. And it will say something like, we have identified electrolyte imbalances; if appropriate, please document the associated diagnosis. The diagnosis is hyponatremia, and the clinician can then address the sodium.
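The anatomy described above (a diagnosis, a rule that reasons over clinical evidence, and a physician message surfaced in the pill) can be sketched as a small data structure. This is a hypothetical illustration; the class, field names, and `evaluate` helper are assumptions, not 3M's API.

```python
# Illustrative sketch of a nudge's anatomy as described above: a rule that
# reasons over clinical evidence, plus a physician message shown at the
# point of care. Names are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Nudge:
    diagnosis: str                     # e.g. "hyponatremia"
    rule: Callable[[dict], bool]       # evidence -> should the nudge fire?
    message: str                       # text shown to the physician

def evaluate(nudge: Nudge, evidence: dict) -> Optional[str]:
    """Return the physician message only when the rule matches the evidence."""
    return nudge.message if nudge.rule(evidence) else None

hyponatremia = Nudge(
    diagnosis="hyponatremia",
    rule=lambda ev: ev.get("sodium", 999) < 135,
    message=("We have identified electrolyte imbalances; if appropriate, "
             "please document the associated diagnosis."),
)

print(evaluate(hyponatremia, {"sodium": 128}))  # fires: prints the message
print(evaluate(hyponatremia, {"sodium": 140}))  # does not fire: None
```

Keeping rule and message separate mirrors the review process the speaker describes: physicians vet the clinical rule, while the message wording is tuned so providers know exactly what is being asked of them.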
There really is, again, a content guide that's provided to you out of the gate from 3M, and you'll have to take a look at that clinical content to see if it's actually something that you would query a provider on. And if it's not, you're going to want to tweak it and customize it to your organizational needs. (DESCRIPTION) Slide, July 2021 C A P D Data: Top 4 clinical conditions reviewed for accuracy and review. Data source: 3 M C A P D utilization reports. New slide, line graphs for five conditions (SPEECH) So this next slide is just the data. This is where we're at today with the UC Davis CAPD utilization, and I just looked at the top five nudges that we have turned on, which are diabetes, respiratory failure, a-fib, kidney disease, and cardiovascular congenital conditions. So you can see the overall compliance rate for those right now is 77%, but mind you, when we first went live, we were in the 25% to 30% range, so this is significant improvement in less than a year. And I think if you stick to the program, you'll start to see compliance rates up there in the 80% to 90% range, which is where you ideally would like to be. (DESCRIPTION) Slide, a table showing diagnosis, rule, message, and evidence (SPEECH) This is just a snapshot of what the nudge rule looks like. So for anemia specificity, you're going to look at the clinical diagnosis, you're going to look at the clinical rule, and what the physician message looks like. We updated this for surgery because it would show up as a blood disorder, and so the physicians were kind of confused: what do you want from me? A blood disorder could mean something like pancytopenia, it could mean something like leukemia, so what is it that you want from me? So we worked to address that issue and created a custom nudge that actually said anemia. And you can see the same thing for hyponatremia and acute respiratory failure: what's actually being used in terms of the clinical rule, physician message, and the supporting evidence.
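The overall compliance figure quoted above (77% across the top five conditions) is a simple weighted roll-up of fired nudges versus addressed nudges. A minimal sketch, with invented counts that are not UC Davis data:

```python
# Illustrative sketch of an overall compliance-rate roll-up like the one
# described above. The counts below are made up for illustration only.
nudge_stats = {  # condition -> (nudges fired, nudges addressed)
    "diabetes":                  (200, 155),
    "respiratory failure":       (150, 116),
    "atrial fibrillation":       (120,  92),
    "kidney disease":            (110,  84),
    "cardiovascular congenital": ( 80,  61),
}

def overall_compliance(stats):
    """Aggregate compliance: addressed nudges over fired nudges, all conditions."""
    fired = sum(f for f, _ in stats.values())
    addressed = sum(a for _, a in stats.values())
    return addressed / fired

print(f"{overall_compliance(nudge_stats):.0%}")  # → 77%
```

Aggregating by raw counts (rather than averaging per-condition rates) keeps high-volume conditions like diabetes from being drowned out by low-volume ones.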
And these are all things that can be customized. If the content guide being offered to you through the vendor, 3M, is not applicable to you, you can customize those nudges, which, again, has been key for us, because we found a lot of things that really made a difference for us when we customized them, and that's where we started to see better compliance rates. (DESCRIPTION) Slide, an excerpt from the table for Heart Failure Specificity (SPEECH) This is just another snapshot of what heart failure looks like: the clinical rule, the physician message, and some of the supporting clinical evidence for the nudge to fire. (DESCRIPTION) Slide, Lessons Learned, a hierarchical data tree (SPEECH) When we talk about lessons learned, I think I've gone over some of those already. But focus on which physician groups you want to start with. We were very deliberate about that; we actually piloted a group of physicians. We had one surgeon, we had one pediatric physician, we had a hospitalist, and I think we had maybe a specialty physician as well. And we looked at all of the data that we had on our current queries, the percentage of queries we were sending, and what the top queries were, and we turned those nudges on. And we piloted it, and we got a lot of feedback, and we got a lot of information that we were able to take back to improve things, and tweak things, and customize things. And before we went live, we made sure that all of that feedback was taken into consideration to improve outcomes. CAPD can work, but be patient and don't give up. I mean, that was our thing. We have about 3,000 physicians turned on now, and of those 3,000, I think five were absolutely adamant that they wanted it turned off. They were great documenters already, they didn't feel they needed this, it was just one more thing that they didn't want to deal with. And so we think that's successful in our eyes.
We worked with them to try to convince them about the value of this tool, but to no avail, so I think you have to really work with physicians and make this meaningful to them and customize it to their needs. Always acknowledge a physician when they're providing feedback, especially if they're complaining. What I like to do is say, hey, all your points are valid; what can I do to make this better? How can I help you document better? What can we do? And then we take their feedback and we work with them individually. I think when they are involved, or feel like they have a voice, they are a lot more open to working with you and to engaging with the tools. (DESCRIPTION) Slide, a screenshot of a diagnostic form (SPEECH) Again: customization, know the documentation, keep things in perspective. Remember, this is a computer, but you can make it work. Again, customization for us, I can't say it enough, has been key. We're going to continue to customize; we are just basically scratching the surface with customization, and I think we're at a point in time where we can be very deliberate and very meaningful about what we turn on for providers to ensure engagement continues to go up, the product continues to be meaningful, and we continue to see impact on our overall key performance indicators. And I think that is my last slide. (DESCRIPTION) Slide, Q & A (SPEECH) It is. Thank you so much, Tami. The information, and what your team was able to stand up, is just incredible. We do have a couple of questions that I think would be good for you to address. We do have a little bit of extra time; I just want to be cognizant of the time for everyone. But the first one: thank you, Deanne, who said that they really enjoyed your presentation and then also asked, is there any work you have done on the day one stays? Yes. For the one day stays, excuse me, I flip-flopped that. Yeah, so I'm glad you asked that question.
So as you know, CDIs really can't even sometimes get to the one day stays, and so we've excluded them from the reviews that are being done by the CDI team. But what we did is we went to leadership, and we acknowledged that when we ran data we found we were an outlier for one day stays, in terms of the percentage of patients who were here one day, went home, and were counted as an inpatient admission; we were an outlier compared to our peers. And I think it was 25% of our patients who were here one day. We noted that it diluted our case mix index and diluted our CC/MCC capture. It diluted our mortality, and it artificially inflated our length of stay metrics. And so we went to leadership and said it also impacts throughput. We took these numbers to our case management team and asked, could these patients be better served in an observation status or an outpatient bed, to determine appropriateness of admissions? And the other thing we said is, we can't get to them from a CDI perspective to try to optimize them. And so we took a different approach with how we were going to address one day stays, and we ran data on the top DRGs for the one day stays, both on peds and adults. We found on peds it was asthma, and we found that there was a best practice at NYU where they created a clinical pathway in the ED for pediatric patients and observed them at 2, 4, and 6 hours; if they had improved after six hours they were put in observation, and if they hadn't, they were admitted. And then we found on the adult population it was something like gastroenteritis and seizures, and we created a similar clinical pathway for those. And so the CDI team really took that information back to the clinical teams and said, here's what it looks like, here are your top DRGs; could there be something like what NYU did here at UC Davis? And what we saw was immediately a correlation with an increase in CC/MCC capture and an increase in our CMI, as well as our mortality metrics.
So we don't look at one day stays, because of the documentation delay; obviously, with physicians getting documentation on the record, there's not a whole lot of opportunity for the CDI to review them. And as you know, it's almost meaningless to review the case on day one without that documentation in there, and then they go home the next day. So the approach we took was operational: could these patients be removed from our review process and from the observed outcomes, and what could we do better organizationally? I hope that answered your question. Yes, absolutely. And we do have a bunch actually coming in that kind of stirred some questions, so we will get to a few more here. How many nudges do you have active for each service line, and how did you select which nudges to turn on? So we were very deliberate about that; again, we pulled data. So, for example, we pulled the hospitalist group data and looked at the top five queries for that group, and then we turned those on. We were very specific about not turning on a ton of nudges; we were very deliberate about making sure that it was meaningful. So about five to seven per service line, and it was driven off of the data we pulled to see which queries we had already been sending from a CDI perspective. But I would urge everybody to keep it to no more than seven and be deliberate about how you turn them on. So look at your current data, look at your current query patterns, look at your service lines. There are things that aren't going to be meaningful to surgeons that are meaningful to hospitalists, and so that's how I would approach it. Fantastic. Before we get to the next one, I just want to answer one question quickly from Jessica, who asked if CDI Engage One is available, and it is.
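The selection process described above (rank each service line's historical query topics by volume, then turn on only the top five to seven) can be sketched with a frequency count. The query counts below are invented for illustration; only the cap of seven and the ranking-by-query-volume idea come from the talk.

```python
# Illustrative sketch of nudge selection per service line: rank historical
# query topics by volume and keep only the top few. Counts are invented.
from collections import Counter

MAX_NUDGES_PER_SERVICE_LINE = 7  # the speaker's suggested cap

def select_nudges(query_counts, limit=MAX_NUDGES_PER_SERVICE_LINE):
    """Pick the highest-volume query topics as nudge candidates."""
    return [topic for topic, _ in Counter(query_counts).most_common(limit)]

hospitalist_queries = {
    "CHF acuity": 42, "acute blood loss anemia": 37, "sepsis": 30,
    "malnutrition": 25, "respiratory failure": 21, "encephalopathy": 12,
    "AKI": 9, "hyponatremia": 5,
}
print(select_nudges(hospitalist_queries, limit=5))
# → ['CHF acuity', 'acute blood loss anemia', 'sepsis', 'malnutrition',
#    'respiratory failure']
```

Running this per service line, rather than organization-wide, is what keeps surgeon-relevant nudges from being pushed to hospitalists and vice versa.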
And so if you would like someone from our team to contact you, click that middle button in the portal; that will take you to a form to complete, and we can follow up with you to talk about it. Let's go ahead, we do have time for a couple more. How will this process evolve to help with prior authorization and denials? So, I'm not sure yet, but I do believe that there is an opportunity for us to work on making sure that we get the documentation on the record, especially with sepsis, specifically the core criteria that we're seeing denials on now. My goal is to eventually try to use this in a way where we can get documentation on the record to demonstrate medical necessity, and also the clinical evidence to avoid some of those clinical validation denials that we're seeing now for things like sepsis and malnutrition. Great. We have a question that said, how long after admit do you do your first review? So initial reviews are done two to three days after the admission, and then our re-reviews are done every two to three days as well, depending upon the complexity of the case and what it is they're looking at. So we give our CDIs a choice. So yes, that's our current state. All right. I think this kind of goes along with it: how long should a chart be on hold for a query reply? So we have processes in place for query escalation. Concurrently, if CDI has sent a query and there's not an answer within 48 hours and the patient's still in the house, there's a query escalation process where we escalate to our physician advisors through a portal we created on Microsoft Teams. If it's a retrospective query, and it's something that's being held for a CC or an MCC or procedures that will drive or change your DRG, we hold up to 10 days retrospectively, and only in the event where it's maybe a reportable outcome for quality, like I said, or a procedure question.
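The escalation timing described above (concurrent queries escalate to physician advisors after 48 hours unanswered while the patient is in house; retrospective DRG-impacting queries are held up to 10 days) can be sketched as a small decision function. The function name and the returned action strings are assumptions for illustration, not UC Davis's actual workflow tooling.

```python
# Illustrative sketch of the query escalation timing described above.
# Thresholds (48 hours concurrent, 10 days retrospective) come from the
# talk; names and action strings are invented.
from datetime import timedelta

def query_action(kind, age, answered, patient_in_house):
    """Decide what to do with an open query given its type and age."""
    if answered:
        return "close"
    if kind == "concurrent" and patient_in_house and age >= timedelta(hours=48):
        return "escalate to physician advisor"
    if kind == "retrospective" and age >= timedelta(days=10):
        return "release hold"
    return "wait"

print(query_action("concurrent", timedelta(hours=60), False, True))
# → escalate to physician advisor
print(query_action("retrospective", timedelta(days=3), False, False))
# → wait
```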
But we typically don't ever have to hold for 10 days; I will say our physicians are pretty good at getting back to us within 72 hours, retrospectively anyway. Perfect. With the nudges, how often do you evaluate the response improvement to documentation and adjust nudges to continue to target the top diagnoses? So we look at this monthly. Yeah. Wow, that's a lot, and probably a lot of work for your team. Rhonda asked, we cannot lead to a diagnosis in a query; isn't providing a diagnosis in a nudge leading, and are nudges visible to others and a part of the permanent record? So we don't lead either; those are the rules that we were talking about. There must be that clinical evidence, those risk factors, and that treatment, and you build the nudge to make sure that it has those things in place so that you don't lead. And what it does is it tells the provider that there is a diagnosis based on this treatment, this lab value, this X-ray finding, whatever it might be, and they document that in the record. And so we're very careful about not leading the providers, and having that clinical evidence to ensure the accuracy and not leading. So we are compliant with that. This is a product that only will nudge when the clinical evidence, risk factors, and treatment exist. And that's one of the things I was talking about: sometimes the clinical evidence may be just one, I'm sorry, one abnormal lab finding, like the sodium that we talked about earlier. And in my opinion, that could be dilution from surgery, or that could be something completely unrelated to a true diagnosis of hyponatremia. So we weren't comfortable turning that on, and we made deliberate changes to the clinical evidence for this to fire. So it will require some work on your end to make sure that you are not leading the provider.
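The compliance guardrail described above (a nudge only fires when clinical evidence, risk factors, and treatment all support the diagnosis, so it does not lead the provider) can be sketched as a gating check. This is a hypothetical illustration; the category names and function are assumptions.

```python
# Illustrative sketch of the non-leading guardrail described above: require
# findings in all three categories before a nudge is allowed to fire.
REQUIRED_CATEGORIES = {"clinical_evidence", "risk_factors", "treatment"}

def may_fire(supporting_findings):
    """supporting_findings maps category -> list of findings on the record."""
    present = {cat for cat, findings in supporting_findings.items() if findings}
    return REQUIRED_CATEGORIES <= present  # all three must be non-empty

# One abnormal lab with no documented risk factors is not enough:
print(may_fire({"clinical_evidence": ["sodium 128"],
                "risk_factors": [],
                "treatment": ["fluid restriction"]}))  # False
```

This mirrors the speaker's hyponatremia example: a single abnormal sodium (possibly dilution from surgery) fails the gate, while evidence plus risk factors plus treatment passes it.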
So our queries are a permanent part of the record; our nudges fire for the physicians, basically, and they see them as they're firing and document in the record. Fantastic. Well, what I'm going to do is, we do have a couple more questions that we will follow up on afterward. And so, Tami, I do want to thank you for your time today. (DESCRIPTION) Slide, That's a wrap! (SPEECH) We've had a lot of comments coming in just to say how great the information was and how great the presentation was, so we greatly thank you for that. Just a reminder to attendees today: the certificate of attendance can be downloaded. If you do want to submit that for credits to an association to obtain CEUs, you can, and we did provide the handout in the resources section; those are both there. If you are interested in learning more about the CDI Engage One, excuse me, that was discussed today, you can click that button in the middle and let us know, and we'll follow up with you. The archive of this recording will be on our website in the next couple of weeks, so if you do want to go back and listen, you can. And lastly, we will be here again. We're doing these every other month, and I can't believe that it's already almost March, so in April we will be back with another CDI innovation webinar; be on the lookout to register. And we appreciate your feedback, so please complete that survey at the end. So again, Tami, we cannot thank you enough for your time today, and we welcome you back anytime. Have a great rest of the day, and to everybody else who joined, we thank you.

      Webinar title slide

      Leveraging technology to engage physicians and improve CDI operations with AI-powered CDI

      • February 2022

        UC Davis Director of Coding and CDI Services Tami Gomez and her team have a mission: Build a gold-standard CDI program, with streamlined workflows that allow physicians to focus on patient-centered care. To support this goal, UC Davis implemented 3M’s advanced AI and NLU technologies, automatically embedding clinical intelligence into normal physician and CDI workflows.

        Join Tami for an inside look at UC Davis’ operations and transformation strategy. You’ll learn how the team laid the groundwork for new technology, how they’re using automation to drive key performance indicators, and how they approach physician engagement. Tami will also cover lessons learned to date, along with how the organization is using data to continually improve and optimize.
