For more than 20 years, clinical documentation integrity (CDI) experts have played a key role in the health care industry. As the industry evolves at a record pace, their work has never been more important, or more challenging. 3M is here to support that crucial work through a new 3M CDI Innovation Webinar Series.
Learn about our advanced CDI technology. CDI programs can capitalize on opportunities to capture the additional documentation necessary to accurately reflect the acuity of patients and complexity of care provided. Learn from experts about a new cloud-based workflow experience and advanced prioritization that automatically tracks query impact as well as a new dashboard for quickly identifying areas of opportunity. In this webinar, attendees will learn about new tools for improving productivity, quality and efficiency to support program success.
(DESCRIPTION) Logo, 3M Health Information Systems. Text, March 2023 3M C D I Innovation Webinar: N L U, clinical content, and documentation integrity. (SPEECH) Good afternoon and welcome to our March CDI Innovation Webinar. Before we get started and before we get into our discussion around NLU and the clinical content and documentation integrity, I'm just going to go over a couple of housekeeping items before we go over to our speakers. We are using the ON24 webinar platform. It is a great experience with a lot of different engagement tools. This is a web based platform, so if you are having any issues, close out of VPN and make sure you're using Google Chrome. That will help with the platform as well as bandwidth; closing out of multiple tabs will help too. There is no dial in number, so if you are having any audio issues, do a quick refresh and that typically solves any issues you might be having. Again, we have multiple engagement sections for a better experience for you. We have our Q&A section, so definitely ask questions throughout; put your questions into that feature and we'll get to them at the end. We do have a certificate of attendance available in the resources section. We also have the presentation handout if you'd like to download that and follow along, as well as a couple of other resources, and that is in the resources section of your dashboard. We also have closed captioning; if you do need that feature, that is in the media player and it is in real time. And at the end, we always appreciate feedback, so please let us know how we did within that survey. All right. So our speakers today are Dannie Greenlee and Josh Arman.
If you'd like to learn more about their experience and a little bit more about them, you can look at that in the meet the speakers section of the dashboard as well. And so I'm going to go ahead and turn things over to Josh to tell us about the agenda and get things started. Thanks, Lisa. Good afternoon, everyone. So for today's agenda, I'm going to jump into some primary goals here in a minute that Dannie and I are hoping to cover. We're going to talk about AI technology and documentation. Yes, we're going to focus a little bit on the CDI lens, but we're also going to talk about other ways in which AI can be used in your provider documentation. We're going to talk a little bit about 3M's content governance approach to our technology and content, and review a use case related to heart failure. The primary goals that we're looking to cover are achieving compliant documentation through artificial intelligence. We want to decrease the documentation burden on providers as it relates to CDI. We want to increase efficiency, accuracy, and consistency across different workflows. We want to have the ability to expand the CDI team's encounter coverage; we know that in the industry that is a large focus. As well as intelligent prioritization driven by artificial intelligence, looking at clinical factors, patient and business focused factors, as well as event driven factors. So in some of the slides coming up as we look into the use case, you'll see how those factors can help drive some of the prioritization. (DESCRIPTION) Slide title, Game-changing cloud-based AI technology. (SPEECH) Jumping right into AI technology. It is applying systemic reasoning and contextual understanding to data aggregated from your electronic medical record. So our AI is using the data from the EHR; we partner with multiple different EHR vendors in the industry to capture that data.
The data that we are using is primarily your provider documentation, your laboratory data, and discrete laboratory data. So we use that data across an HL7 interface. We're not looking at just what the provider has documented into their documentation; we're pulling data across an HL7 interface as well as the radiology results, those two across an interface. So those are our three sources of data today as it stands, but we are looking to expand our data sources throughout this year to help capture additional data sources beyond just a provider's documentation. Continuously and automatically reviewing, analyzing, monitoring, and improving all the documentation, all the time, driving consistency and efficiency in real time. So on the provider end, the provider does not need to really click anything in their workflow to see if there's something that's been identified by the AI; we're able to push that to the provider in real time while they're working on their documentation. In our AI, we also use standard ontologies as well as clinical concepts and value sets from across the medical record, using those data sources that I mentioned to really help identify those clinical conditions. And Dannie will jump into that in the use case when we look at heart failure. The right hand side of your screen is basically looking at heart failure, but then how we identify that from an AI standpoint. So we're looking at temporalities. We're looking at the children concepts or the parent concepts. We're looking at evidence of heart failure. So we're not just looking at whether a provider said heart failure; we're looking at all the different ways to capture that, and Dannie is going to jump through those as it relates to the clinical concepts. And then, we're able to identify the type of heart failure. We know we need the type, we know we need the acuity, so we're able to identify those concepts as well in the provider documentation.
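To make the idea of pulling type and acuity concepts out of provider documentation concrete, here is a minimal toy sketch. This is purely illustrative and assumes simple keyword patterns; it is not 3M's actual NLU, which reasons over full concepts, ontologies, and value sets rather than surface phrases.

```python
import re
from typing import Optional

# Illustrative only: toy patterns for heart failure type and acuity.
ACUITY_PATTERNS = [
    (re.compile(r"\bacute on chronic\b", re.I), "acute on chronic"),
    (re.compile(r"\bacute\b", re.I), "acute"),
    (re.compile(r"\bchronic\b", re.I), "chronic"),
]
TYPE_PATTERNS = [
    (re.compile(r"\bsystolic\b", re.I), "systolic"),
    (re.compile(r"\bdiastolic\b", re.I), "diastolic"),
]

def first_match(patterns, text: str) -> Optional[str]:
    # Return the label of the first pattern that matches, or None.
    for pattern, label in patterns:
        if pattern.search(text):
            return label
    return None

def classify_heart_failure(text: str) -> dict:
    """Return the type and acuity concepts found in the text, if any."""
    return {
        "type": first_match(TYPE_PATTERNS, text),
        "acuity": first_match(ACUITY_PATTERNS, text),
    }
```

A phrase like "acute on chronic diastolic heart failure" would fill both slots, while bare "heart failure" fills neither, which is exactly the gap a nudge would flag.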
So at this point, I'm going to turn it over to Dannie to begin to discuss some of the AI capabilities that 3M has. (DESCRIPTION) Slide title, Artificial Intelligence. Two branches, deep learning and predictive analytics, feed into the machine learning branch. Three branches, translation, classification and clustering, and information extraction, feed into the natural language processing (N L P) branch. Two branches, speech to text and text to speech, feed into the speech branch. Two branches, image recognition and machine vision, feed into the vision branch. The branches, along with the expert systems and planning, scheduling, and optimization branch, ultimately feed into artificial intelligence (A I). (SPEECH) Thank you very much, Josh. So this slide is very interesting to me, because oftentimes when we think AI, we think of Arnold Schwarzenegger fighting robots, Will Smith fighting robots, or maybe Keanu Reeves fighting robots; maybe that's just me. But in truth AI is made up of many parts to create a whole artificial intelligence. And we can see NLP and machine learning here driving artificial intelligence, as well as expert systems and speech to text and other things that make up the whole underpinning of our NLU and AI. And so as we go on, I want to talk in depth about the expert systems that support our clinical solutions products as we think about AI. (DESCRIPTION) Slide title, N L U engines. Slide text, Acuity Engine. Grammar-based engine that assigns acuity to findings. Acute onset, acute to subacute, acute on chronic, chronic, sudden onset. (SPEECH) So as Josh mentioned, we look at the whole concept in our natural language understanding, and he highlighted heart failure specifically.
So it goes beyond NLP, or natural language processing. Our NLP is phrase based, sort of small in scope and just at the sentence level, while our NLU is concept based and surrounded by information models, engines, and more to really capture the context around that single piece of information. If we quickly go back and take a look at Josh's slide there about heart failure, he mentioned it perfectly: by capturing the children concepts or the parent concepts, the temporality, or any of these other surrounding pieces of information, we can really drive value by providing more context around that single piece of clinical information. So highlighted on this slide, we have over 20 engines in our NLU, and I just want to talk briefly about the acuity engine. As we know from that heart failure example, you really need to capture whether that's an acute or chronic heart failure. And based on our NLU engine, we can run our documentation through the NLU and part of it is to piece out whether that is acute or chronic. And there are many other engines that make up that, which are displayed here: the lab engine that reasons over labs, the clinical finding engines, you can see them all here, but this really continues to drive that context surrounding clinical information. (DESCRIPTION) Slide title, M*Modal Information Models (Mims). Surrounding Mims 15+ are nodes labeled findings, substance administration, action course, procedures, labs, and allergies. (SPEECH) So this slide is our MIMs, or our M*Modal information models, and it's fun to be able to use that acronym. Our NLU has over 15 MIMs and they're pretty spectacular. The way I envision a MIM is as an empty shell that has slots that need to be filled for the NLU to reason over and generate some sort of output. So you can see in the square on the left, we've highlighted the medication administration model.
And it has the slots that need to be filled for this to happen. You can see a substance needs to be mentioned, a start time, a stop time that it was given. So for example in our heart failure use case, if a provider were to document, we've given IV [INAUDIBLE] at this time, we can then begin to fulfill those slots. So we've got the substance [INAUDIBLE], we've got the time, and that it was given IV, in order to really reason over this information. We have to have all of these pieces of information, which are important and are driven by the value sets that we maintain. So value sets are groups of things that provide the clinical indicators that we need to fulfill each slot. For heart failure, for example, if we need to understand all of the medications that may be used to treat heart failure, we maintain a value set of those medications that would commonly be found. And these MIMs really again drive that AI, drive that context around these individual pieces of information that we get when we reason over the clinical documentation. (DESCRIPTION) Slide title, Mims in N L U. (SPEECH) So this is a really high level visualization of how the MIMs are working with the NLU. And you can see that we have documents that come through that are either narrative documentation or structured data. They're transitioned into the NLU, and then we semantically or syntactically process those documents to make them readable by the NLU. We reason over those with our MIMs, with our engines; all of that happens and produces serialized objects. Those serialized objects are then fed into the applications, so that's what the application understands. And the application does a translation in order to display information that an end user would understand. So I like this slide just to think about how data flows from one place, the EHR, all the way through to the applications to really provide value for our end users.
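The "empty shell with slots" picture of a MIM can be sketched in a few lines. This is a hypothetical toy model, assuming made-up slot names and an illustrative medication value set; it only mirrors the shape of the idea, not the actual M*Modal information models.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MedicationAdministration:
    """Toy information model: slots extracted concepts must fill."""
    substance: Optional[str] = None   # e.g. a drug name from documentation
    route: Optional[str] = None       # e.g. "IV"
    start_time: Optional[str] = None
    stop_time: Optional[str] = None

    def is_complete(self) -> bool:
        # The NLU can only reason over the event once the key slots are filled.
        return None not in (self.substance, self.route, self.start_time)

# Illustrative value set: medications that count as heart failure treatment.
HF_MEDICATIONS = {"furosemide", "bumetanide", "metoprolol"}

def treats_heart_failure(mim: MedicationAdministration) -> bool:
    """Reason over a filled model using the value set."""
    return mim.is_complete() and mim.substance in HF_MEDICATIONS
```

So documenting "we've given IV furosemide at 14:00" fills the substance, route, and start time slots, and the value set lets the model conclude the patient is being treated for heart failure.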
And on the next slide, we're going to jump back to Josh, and he's going to talk about our content governance and our content creation processes around this information. (DESCRIPTION) Slide title, Content governance. At the left is a circle with three slices: patient outcomes, end user experience, and industry leading content. (SPEECH) OK. So if we look at this slide that I'm presenting now, on the left hand side we're looking at the smaller wheel and the buckets that we have here: end user experience, industry leading content, and then patient outcomes. So when we're talking about the end user experience, that could be either the provider, who is receiving the information as I mentioned earlier in real time while they're doing their documentation, for that increased specificity that's required for that clinical concept, or on the CDI side, using sort of that same framework but using the tools for helping identify those clinical concepts as well as prioritizing the encounters in which those clinical concepts are contained, for them to be able to be captured. And then looking at the content, I'm going to jump into the content here in a second. And then really, our hope is that we're driving better patient outcomes, that we're getting the specificity in real time and we're helping with the long term continuum of care. (DESCRIPTION) At the center, content methodology is surrounded by six triangles labeled, evidence-based practice, quality improvement, government regulatory, provider feedback, impact analysis, and industry regulations. (SPEECH) So if we jump into the center of the slide there with the symbols, they sort of correlate on the right hand side with the detail. Let's start up at the top with the evidence based practice. So we're really using up to date information to help develop out our clinical concepts and nudges that will be presented either again to the provider or to the CDI in their workflow.
We're looking at the quality improvement, so we're using data driven analytics to really help drive the quality of that documentation. We're looking at the government regulatory. So looking at CMS, looking at coding clinic, we're using that information to make sure that our clinical concepts and nudges are remaining compliant. Provider feedback, this is very important. Our team spends a lot of time on site with our customers to really get that provider feedback, because we know today that alert fatigue is real. Administrative burden is real. We want to be able to have the technology aid in real time to save any backend workflows for the providers. Impact analysis. We want you to be able to gain value from using our technology, and every customer defines what their value is maybe a little bit differently. But we are able to provide the impact analysis from the use of the technology. And then as I mentioned, industry regulations, so the [INAUDIBLE] guidelines, we really do follow as we develop out our clinical concepts that are used in our applications. (DESCRIPTION) Slide title, Customization request process. Slide text, Adoption specialist assigned for life of project. Bullet points, Customer meetings with adoption, weekly, bi-weekly, daily as needed. On site visits as needed. Works with customer to determine nudges for go-lives and specialties. Submits customer requests (enhancements, issues, bugs). Tests with customer in product. (SPEECH) As we move on to sort of our customization request process. So we have our out of the box content that can be used, but really our differentiator is that we do take customization from our customers. And there really is no limit there, so you can bring new concept requests to us and we will develop them out. Or maybe you have found that there is a clinical concept that you're interested in, but maybe it doesn't meet your organization's needs. That is completely fine.
We're able to develop that out so that it does meet your organization's needs. So for the life of the contract, we do assign an adoption specialist. And this is a subject matter expert that is primarily focused on the use of the technology. So the adoption specialist does arrange regular meetings with the customers. And this is a resource that is actually assigned at the beginning of the implementation and is really with the customer again through the life of the contract. So it's not someone that is coming in midway into your use of the technology; it's really started at the beginning of the implementation in there with you. This is a person that we don't traditionally change out. We really want to focus on developing that vendor and customer relationship. So they're really basically part of your team in helping ensure that you are using the technology to its fullest potential. As I mentioned, on site visits as needed, and this is really at our customer's sort of request or expectation. It's not like we're going to be coming out every week or every month, but I think we're able to develop a cadence that would meet your needs, whether that be quarterly or maybe twice a year; whatever the need is, we want to be sure that we're there to support you. The adoption specialist also works with the customer to determine what the nudges are at go live and the specialties that we want to focus on. We're going to talk a little bit more about our best practice in future slides and I can cover that a little bit further. The adoption specialist is also sort of the customer voice. So the adoption specialist is the resource that submits that request to our internal content team to begin to triage, and then to develop it out. And then, they're also available to test alongside of you as you're going through your process. Another resource that is part of the team is the content coach.
And the content coach is there to support basically the adoption specialist and the customer. The content coach is a subject matter expert as it relates to really the NLU and the clinical concepts that Dannie has mentioned. And Dannie is going to discuss really her team and some of the content coaches' backgrounds and really the makeup of that team. They're also there to triage your request and make sure that we are developing out as you expect. We want to get it right the first time, but maybe we develop it one way and it wasn't the customer's expectation, or we determine that, hey, we're capturing too many false positives, or maybe we need to tweak it. Again as I mentioned, our NLU is very nimble in the sense of the customization. So the content coach is able to really help answer those questions or help guide the customer as to how we think that clinical concept really should be created or used, as well as discussing the content needs with adoption to basically best support the customer. The content coach is not necessarily someone that is coming on site to do the necessary work, that is where the adoption specialist comes in, but the content coach is there in the background to help support adoption as it relates to how the content is being used. So at this point, I'm going to turn it back over to Dannie for her to discuss the clinical content team and the makeup. (DESCRIPTION) Slide title, Clinical content team. Above a bridge is text, medical providers document in clinical terms. Coding and compliance need specificity in diagnosis terms. Below the bridge reads, A C D I program creates a bridge between this gap. Who builds the bridge? The clinical content team. 20+ variety of credentials (M D, Ph D, Pharm D, M S/M S N, M L S, M S W, R N, B S, C P C, C C S, C C D S, R H I A. Years experience, from 4 to 47. (SPEECH) Thank you, Josh.
I love this slide because I get to talk about what is nearest and dearest to my heart, and that is my team and our expertise. As you can see represented at the top of this slide, we see providers on the left hand side in purple who document in clinical terms. And then, we know that coding and compliance needs specificity in diagnosis terms. So a CDI program within facilities bridges that gap. And our application helps also to bridge that gap. And who builds this bridge on the NLU side is the clinical content team. And we have over 20 varieties of credentials, from doctors to PharmDs to nurses, CCDS, lab techs, informaticists. And we use our clinical background to create and curate this content. We also have a wide range of years of experience, which I think is really fantastic as well, from less than one year of experience to greater than 15 years. So we have innovation and new ideas mixed with the wisdom of the people who've been working in these fields for a long time to really drive and maintain the value of that content governance that Josh was representing earlier. (DESCRIPTION) Slide title, Content workflow diagram. Arrows travel clockwise around a circle, which has slices labeled new nudge request; research guidelines, create nudge; review encounters; customize nudge; update engines grammars; Q A content gov; test N L U and repeat. (SPEECH) I'm going to move on now and talk about the process for the content workflow, what we do on our team. And this begins here with this green pie piece where we get a new nudge request. This may be a new nudge request, or it may be an enhancement. And this often comes, as Josh mentioned, from the adoption team, who works closely with the customers to come up with new use cases or enhance existing use cases to really drive value in their individual areas. So we get that request, we research the guidelines, create the nudge based on those guidelines that Josh mentioned, [INAUDIBLE] up to date.
Whatever clinical guidelines and CDI guidelines to verify the value of the request. We review encounters, customize the nudges, update the engines and grammars in the NLU. We of course go through a content QA, a quality assurance, and this drives back to that content governance, just to ensure that each request is reviewed by at least two CDI content team experts, SMEs. And we also test this locally. And we repeat this process as needed. And highlighted on the right is that each of these requests has to come through with a release, and we've highlighted our release cycle over on the right hand side. (DESCRIPTION) Slide title, Quantity Recommendations. Slide text, To avoid burnout for providers and C D I specialists, 3M has established the following best practices. Heading, Evidence sheets. Text, At Go-Live, 10 to 11 potential conditions from the approved conditions list. 30 Days Post Go-Live, Add 3 to 5 additional conditions. 60 Days Post Go-Live, Add 3 to 5 additional conditions. Heading, Nudges. At Go-Live: 3 to 5 specialty-specific groups. 10 to 11 nudges per group. 3 Months Post Go-Live, Up to 10 specialty-specific groups, 10 to 12 nudges per group. For each nudge, the same condition should be enacted for Evidence Sheets. (SPEECH) So I'm going to hand it back to Josh, who's going to talk about some of the best practices and recommendations. So as I mentioned earlier, this is a question that we get all the time: basically, how many conditions do we start with? As I previously discussed, we know that the alert fatigue is real and administrative burden is there. So we want to be very pragmatic in how we roll out the technology. And every customer and every organization is a little bit different in how they want to approach it. And maybe we approach it one way because we think it's going to work the best way.
And then, we find out that, hey, maybe the rollout that we did wasn't ideal and we need to take a step back and re approach it. That's completely fine. The way that our technology is rolled out is on an end user basis, so we don't need to do Big Bang. For many of our customers, we don't do Big Bang; we actually do phases. So when we look at evidence sheets, this is a workflow that is used in 360 Encompass for the CDI team. At go live, we really want to focus on 10 to 11 potential conditions that are sort of approved from our list that we recommend. We've identified that the NLU functions well with these; they drive value. So this is what we suggest you start with. 30 days post go live, we can look to add in three to five additional conditions, and then 60 days post go live, another three to five. Again, every customer is going to be a little bit different. There's not necessarily a cookie cutter response here. Some teams react to evidence sheets a lot better and find them easier to use than other teams, so we're able to go at a faster pace. As well, I can't stress it enough: it is very important that we have some interaction and management from our customers as we roll out this technology. This is not a sort of install and drop piece of content. We really need active engagement; where we've seen the best success is with those customers that are actively engaged with us. So the evidence sheets are truly focusing on the CDI workflow to help capture those clinical concepts from encounters that they don't necessarily have to go review and sort of jot down on a piece of paper. The AI is doing that heavy lift for them and pushing that information to them in their workflow. As it relates to nudges on the provider side, this is where, again, while we find that the CDI maybe has a little bit more tolerance for volume, we know providers really are a little bit more, I guess, boisterous as it relates to how they perceive technology.
So from a nudge standpoint, we really want to start with about 10 to 11 nudges per group, and that's maybe even a little bit on the high end. We tend to like to start smaller and add in. But then we want to focus on three to five specialty specific groups. So what that means is maybe we're not going to do a Big Bang approach. But maybe for your organization, you've determined that a Big Bang, or focusing on the same conditions for every specialty, is the way to go. We have the ability to sort of cut those out. So maybe you want to focus with hospitalists, you want to focus with pulmonologists, and then nephrologists. You can define all those groups and then define what content or what nudges are to be enabled for those groups. So not every group has to have the same nudge enabled. Of course, they can. Maybe you've determined from your organization there's an initiative and you want to enable malnutrition for every provider no matter what their specialty is. You can do that. But we've also found that if you are focusing on specialties, most likely those providers that are answering those nudges are going to be able to add that specificity in, and it's much more pertinent to the patient population in which they're treating. We have heard from providers, I don't know why I'm receiving this nudge type; this isn't a nudge that I would traditionally respond to or even be queried for. So that's why we're able to break this out based off of specialties. I always urge customers, don't find yourself going down rabbit holes as you define specialties, because you can find yourself in the weeds. So we ideally want to focus on your high hitting specialties or your costly service lines or your highly queried service lines as we develop out these nudges. Then, three months post go live, maybe we increase the number of specialties as well as the number of nudges per group.
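The per-specialty rollout just described is essentially a configuration mapping specialty groups to their enabled nudges. Here is a minimal sketch of that idea; the group and nudge names are hypothetical examples, not actual product configuration.

```python
# Illustrative rollout configuration: each specialty group gets its own
# set of enabled nudges, so not every group sees the same content.
NUDGE_GROUPS = {
    "hospitalist": {"heart_failure_acuity", "malnutrition", "sepsis"},
    "pulmonologist": {"respiratory_failure", "malnutrition"},
    "nephrologist": {"acute_kidney_injury", "malnutrition"},
}

def enabled_nudges(specialty: str) -> set:
    """Nudges enabled for a provider's specialty group (empty if not yet rolled out)."""
    return NUDGE_GROUPS.get(specialty, set())
```

Note how an organization-wide initiative (malnutrition here) can simply be enabled in every group, while specialty-specific nudges stay scoped to the providers most likely to act on them.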
Again, this is rolled out on an end user basis. So you don't have to roll it out all at once. You can focus on certain specialties, and once you get that under your belt, add in additional providers. It is completely at your pace. We can go as fast or as slow as you need to. So at this point, I'm going to turn it back over to Dannie to sort of complete out our areas of coverage from the content standpoint and then jump into that use case a little bit further related to heart failure, to give you a real understanding of how our AI functions. (DESCRIPTION) Slide title, Areas of coverage. On the X axis of a bar graph are labels, such as neuro, eye, E N T, and respiratory. Each label has two bars, conditions and nudges. The Y axis ranges from 0 to 80. Every Nudges bar is significantly higher than the Conditions bars, which don't exceed 20. Some Nudges bars reach over 70. (SPEECH) Great. Thanks, Josh. We are going to dive deep here in just a moment. So this slide is a representation of our library and our areas of coverage. Organized along the bottom, you can see MDCs, and you can see where we have conditions and nudges available. Light teal is conditions and dark teal is the number of nudges for that condition. So you can see where we have lots of high areas of impact with rules created, and you can see areas where we can expand our content. And as Josh mentioned, while we do have this focus in CDI, we appreciate any use case that this NLU and AI may be used for, really to drive whatever the facility's needs are. We're always looking to expand our coverage as we create and curate this content. (DESCRIPTION) Slide title, Heart failure overview. Heading, M D C 05 Circulatory System. Text, Condition: Heart failure. Nudge count: 9. Heading, C D I guidelines. Bullet points, Code to specific type and acuity. Specify stage of H F if possible. A C C/A H A classification used as reference.
3M coding and reimbursement references, coding clinics, A C D I S/A H I M A references. (SPEECH) So this is the heart failure condition overview. So under MDC 05, the circulatory system, we have a condition of heart failure. And within this condition, we have nine nudges that are created to capture different bits of information depending on the different use cases. And this may be a CDI workflow, or it may be a nudge workflow, or quality, or other sorts of areas that we've had rules created for. We've highlighted where our CDI guidelines and our clinical guidelines come from specific to this condition. This is not an all inclusive list, it's just an example of some of the information that we look for as we go to curate these rules. So UpToDate and the Merck Manual are some of the clinical references that we use for building our clinical guidelines. And then as mentioned earlier, we use the 3M coding and reimbursement references, coding clinics, [INAUDIBLE] as our CDI guidelines to know really how to capture content in those specific areas. We've selected a single nudge to review, to go really down into the details of what it takes to build and maintain one of these rules. (DESCRIPTION) Heading, Nudge details. Subheading, Condition: bullet points, Documentation of HF, (+/-) evidence of diastolic H F, (+/-) evidence of acuity. Subheading, Requirement: Bullet points, Documentation of systolic/diastolic H F. Documentation of acute/chronic. (+/-) grade. Heading, C D I messages. Subheading, Rule Satisfied Message: Bullet point, Acuity and type of heart failure were properly documented. Subheading, Unsatisfied Message: Bullet point, There is documentation and evidence of heart failure but type and acuity were not documented.
(SPEECH) So this is a heart failure nudge, and you can see from the title alone that we're looking for documentation of heart failure, plus or minus evidence of heart failure, without documentation of the type and acuity of heart failure. At first glance, what this is looking for is that somewhere in the medical record [INAUDIBLE]-- in the encounter-- we've got the words heart failure, and we also have some evidence of heart failure, but we don't know the type or acuity of that heart failure. So in the nudge details, what it takes to trigger this nudge is that documentation of heart failure plus some pieces of evidence of heart failure. And what it takes to resolve this nudge is the very specific documentation of the type of heart failure and whether it was acute or chronic. We also have some information called out in the middle and on the right-hand side of this slide with the messaging. These messages can be displayed depending on where in the workflow this is. As Josh mentioned, we have provider messaging, so if we're nudging the provider in real time to capture the specificity, we have that message there. Those messages can be customized per facility and per whatever the hospital is trying to capture, because the CDI specialists have relationships with their providers and know the type of messages to put in front of them. The CDI messages are a little more general, and that's because, as Josh mentioned, we can put more information in front of the CDI specialists, and they can use that to decide whether to link a query or prioritize their workflow based on the evidence sheets put in front of them. So now we're going to move on to a really deep dive into how these rules are built. (DESCRIPTION) Three concentric circles. The innermost circle reads, condition. The middle circle reads, requirement. The outer circle reads, provider message.
(SPEECH) So in this diagram, we have CDI notifications, CDI opportunities, and provider nudges, and what it takes to create each one of those. If we just have a condition, represented there with the green box, that is a CDI notification. This is when there's some clinical piece of information that we want to get in front of the CDI workflow, because it may drive value for how they work that chart. For example, during the pandemic, we created some notifications regarding COVID. This would be something like: this patient appears to have all these signs and symptoms of COVID, and we thought you would like to know; perhaps you need to investigate a little further. To add a layer of depth to that, we also have requirements. So a condition alone triggers the rule, and a condition plus a requirement gives us a CDI opportunity. These are rules that fire and then would be either fully documented or still an opportunity for documentation improvement, based on what information is found in the chart. And lastly, the next layer is the physician message: turning it on within the nudge workflow, to get it in front of the provider in real time, is what differentiates a physician nudge from a CDI opportunity. (DESCRIPTION) Below the circles are three labels: value sets, concepts, parameters. (SPEECH) So here's the heart failure rule. What I talked about earlier was that we have to have that mention of heart failure, and you can see the presence of heart failure as the last bullet point. We absolutely have to have that within the encounter for this rule to fire. We also need some specific clinical indicators that heart failure may be present for this patient. So represented here, we have the ejection fraction less than or equal to 40%, and we also have some evidence of a BNP or proBNP greater than 500.
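To make the layering just described concrete, here is a rough sketch of the notification / opportunity / nudge tiers in Python. The names and structure are hypothetical illustrations of the logic described in the webinar, not 3M's actual rule engine:

```python
# Sketch only: condition alone -> CDI notification; condition + unmet
# requirement -> CDI opportunity (or a provider nudge if provider
# messaging is enabled); condition + met requirement -> fully documented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    condition_met: bool                      # clinical evidence found in the encounter
    requirement_met: Optional[bool] = None   # None if the rule has no requirement
    provider_message: Optional[str] = None   # None if provider nudging is off

def classify(rule: Rule) -> str:
    if not rule.condition_met:
        return "no action"
    if rule.requirement_met is None:
        return "CDI notification"            # condition only (e.g. the COVID example)
    if rule.requirement_met:
        return "fully documented"            # the opportunity is already satisfied
    if rule.provider_message:
        return "provider nudge"              # real-time message to the physician
    return "CDI opportunity"                 # queue for the CDI specialist

print(classify(Rule(condition_met=True)))    # CDI notification
print(classify(Rule(True, requirement_met=False,
                    provider_message="Please specify type and acuity")))  # provider nudge
```

The point of the sketch is only the precedence: evidence gates everything, and the provider message is what upgrades an open opportunity into a real-time nudge.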
And we also have some evidence that heart failure medications were given. It's interesting when you look at this condition, because you can see that we have, as Josh called it, our out-of-the-box parameter of an ejection fraction less than or equal to 40%, but we also have customer customizations where some wanted less than or equal to 50%, or even less than or equal to 55%. On this one, we're also looking at temporality, which is part of the NLU AI: we're trying to see if it's a past or present history of heart failure, and we have a customer constraint there to exclude past mentions on these rules. So that's part of getting this rule to fire in front of the CDI specialist or the provider; we have to have these pieces of information. To fulfill it and make it a fully documented opportunity, we need documentation of the type of heart failure, and we also need documentation of the acuity of heart failure. We also include some temporality constraints, and we can do customization constraints, like we've done with the document type underneath the requirement. (DESCRIPTION) Heading, Value Set. Subheading, Heart failure. Bullet point, Snomed C T, heart failure disorder. Under the bullet point is a subset of bullet points: acute heart failure, chronic heart failure, right ventricular failure, left ventricular failure. (SPEECH) Each of the pieces of this rule-- this nudge-- is maintained with the curation of value sets, and SNOMED CT is one of the ontologies that Josh mentioned we use to capture the clinical content for these rules. So you can see heart failure and all of its descendants that would help resolve this rule. (DESCRIPTION) Heading, Concept. Subheading, Systolic heart failure. Bullet points, Snomed C T, systolic dysfunction; Snomed C T, heart failure, systolic failure.
(SPEECH) We also talked about concepts. Where we had a value set of the different types of heart failure before, we now have this specific type, systolic heart failure, which may be further modeled to capture more information. So systolic heart failure is made up of the concepts of systolic dysfunction, as well as heart failure, plus many synonyms: SHF, systolic failure, and even a German version of systolic heart failure. This type of work, where we add or model synonyms, is part of the daily work the content team does to constantly curate and maintain these value sets and drive more value. (DESCRIPTION) Heading, Parameters. Bullet points, N L U temporality: past, present, future; N L U experiencer: family, patient, other; N L U certainty: certain, hedged, hypothetical, maybe, remote, ruled out, negative, undefined; N L U document type: clinical, lab, radiology, medication administration; evidence: ejection fraction less than or equal to 40%, customer less than or equal to 50%; documentation: acuity of heart failure, customer OR right ventricular failure. (SPEECH) So now we're down to parameters, and all of the ways we tailor the NLU to capture the context we were talking about earlier. We can look at temporality, experiencer, certainty, and document type, and we reason over the encounter with all of this information to really provide that context. And again, here's another example of the ejection fraction less than or equal to 40, and customers who want to be a little bit tighter: they want that ejection fraction to be less than or equal to 50. It's the same thing with the acuity of heart failure; we have a customer who preferred that documentation of right ventricular failure was enough to capture that.
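As an illustration of how those context parameters might gate a single piece of evidence, here is a hedged sketch. All of the names are hypothetical, and a real NLU reasons over far richer structures than a flat dictionary; this only shows the shape of the filtering described above:

```python
# Illustrative only: temporality, experiencer, and certainty filters plus a
# customizable ejection-fraction threshold (40% out of the box, 50% or 55%
# as customer overrides, per the slide).
def ef_evidence_fires(mention, ef_threshold=40):
    """mention: dict with keys temporality, experiencer, certainty, ef_value."""
    if mention["temporality"] != "present":        # exclude 'history of HF'
        return False
    if mention["experiencer"] != "patient":        # exclude family history
        return False
    if mention["certainty"] != "certain":          # exclude 'ruled out', 'maybe', etc.
        return False
    return mention["ef_value"] <= ef_threshold

m = {"temporality": "present", "experiencer": "patient",
     "certainty": "certain", "ef_value": 45}
print(ef_evidence_fires(m))                   # False with the default threshold of 40
print(ef_evidence_fires(m, ef_threshold=50))  # True with a customer override
```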
So we have this ability to really curate and customize these use cases at many different levels. (DESCRIPTION) Slide title, Encounter reviews. Slide text, Review encounters. Bullet points, Per nudge, per customer, and across customers. Search for grammar, language, and N L U patterns/issues. Disambiguation - acronyms most common issues. A table with columns category, cause, and comments. Row: category, circulatory; cause, incorrect evidence; comments, template issue: O2 triggering instead of flow rate. Row: category, respiratory; cause, other; comments, false positive, disambiguation: 'P E', 'immature granulocytes' (pulmonary edema). Row: category, circulatory; cause, context; comments, grammar: D V T unlikely, suspected versus ruled out. Row: category, neuro; cause, context; comments, temporality: history of the following complications - stroke, not picking up historical. Row: category, respiratory; cause, language; comments, disambiguation: possible P E, pulmonary embolism versus pulmonary edema. Row: category, kidney and urinary; cause, language; comments, disambiguation: C K D, client using as c c/k g/day at a pediatric hospital. (SPEECH) So I mentioned that as part of our process, we do encounter reviews. This is where we really look at how the NLU is functioning within a specific set of encounters at a specific organization. We know that each facility and each provider may document a little bit differently, and that's where doing these encounter reviews provides value. While we do these encounter reviews, we look at the NLU and how it fires, and we find lots of different things. We do this per nudge, per customer, and across customers. We're searching for grammar, language, and NLU patterns and issues, and we're also looking at disambiguation; acronyms are a really common thing that we find and add to the NLU, to really provide more context for individual customers.
One of my favorite examples of something we found during a review is down there at the bottom, with kidney and urinary. There was a facility, a children's hospital, that had CKD documented all over their medical records. What all of us know is that CKD means chronic kidney disease-- except at this facility, where it was most often used as cc per kilogram per day, because it was a pediatric hospital and that's how they did the fluid restrictions for their pediatric patients. So we had to create some disambiguation tickets and enhance the NLU to not fire any chronic kidney disease rules or nudges that may have been turned on by this facility. We do things like that: we look at it, then enhance the NLU and drive that value, not just for the single customer but for all customers. These encounter reviews are an invaluable part of our content curation and maintenance. (DESCRIPTION) The slide with the three concentric circles of condition, requirement, and provider message. Heading, Primary Care Exam Summary 01/01/2023. Slide text, patient has pulmonary edema, heart failure with ejection fraction less than 40% and B N P 912. Currently taking 40 mg furosemide. Heading, Primary Care Exam Summary 01/02/2023. Slide text, patient has been diagnosed with acute on chronic systolic heart failure. (SPEECH) So this probably looks familiar, but I wanted to talk through some clinical examples and how we do some testing to check that this rule is firing as we expect. Here on the bottom, we have an example of a note from a primary care exam, and you can see that it says: patient has pulmonary edema, heart failure with ejection fraction less than 40%, and a BNP of 912, currently taking 40 milligrams of furosemide IV, twice a day. (DESCRIPTION) Lines from parts of the condition point to key phrases in a primary care exam summary.
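The CKD disambiguation described above can be sketched, very loosely, as a context check on the text around the acronym. This is only an illustration of the idea; a production NLU would use far richer context than a regular expression, and the pattern here is an assumption:

```python
# Illustrative only: decide whether "CKD" means chronic kidney disease or,
# as at the pediatric hospital in the example, cc/kg/day in a fluid order.
import re

def expand_ckd(text: str) -> str:
    # If CKD is immediately followed by a number, read it as a fluid rate
    # (cc/kg/day); otherwise read it as chronic kidney disease.
    if re.search(r"\bCKD\b\s*[:=]?\s*\d+", text):
        return "cc/kg/day"
    return "chronic kidney disease"

print(expand_ckd("Fluid restriction: CKD 100"))       # cc/kg/day
print(expand_ckd("PMH significant for CKD stage 3"))  # chronic kidney disease
```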
(SPEECH) So if this piece of text were run through the NLU, we can see which individual pieces would be captured for these different pieces of information. We have heart failure, which I said was required for this rule to fire; we have our ejection fraction less than 40%; we have our BNP greater than 500; and we also have the 40 milligrams furosemide IV BID. Interestingly, it's not represented on this slide, but in order to get that evidence of heart failure medications, we had to fulfill the substance administration MIM; we had to fill all of those slots in order for that piece of evidence to fire. You can see we have the dose, we have the substance, we have how it was given, and we have when it was given, fulfilling all of the medical information model in order to trigger that piece of evidence for this nudge. (DESCRIPTION) The lines disappear. (SPEECH) So if all of those are met, we can then move on to what it would take to resolve the rule, or make the nudge go away. If the doctor documented, patient has been diagnosed with acute on chronic systolic heart failure: we have now nudged them and said, hey, you said heart failure, you said they had an EF less than 40, they're on some medications, and their BNP is high; we've given them that provider nudge message that says, can you please document the type and acuity of the heart failure you've said this patient has. (DESCRIPTION) Lines from parts of the requirement point to a phrase in a primary care exam summary. (SPEECH) And when we look at this, we can see which portions of this rule are now resolved. We have the acute on chronic systolic heart failure: acute on chronic, of course, managing that acuity piece we need to capture, and systolic managing the type of heart failure. So we do this testing locally, where we test our rules at the local level.
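The substance administration MIM slot-filling mentioned above-- dose, substance, route, and timing all extracted before the medication counts as evidence-- could be sketched like this. The field names are assumptions for illustration, not 3M's actual schema:

```python
# Sketch of the slot-filled "medical information model" for a medication:
# the furosemide evidence only fires once every slot is populated.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubstanceAdministration:
    substance: Optional[str] = None   # e.g. "furosemide"
    dose: Optional[str] = None        # e.g. "40 mg"
    route: Optional[str] = None       # e.g. "IV" (how it was given)
    frequency: Optional[str] = None   # e.g. "BID" (when it was given)

    def is_complete(self) -> bool:
        # Evidence triggers only when every slot is filled.
        return None not in (self.substance, self.dose, self.route, self.frequency)

mim = SubstanceAdministration(substance="furosemide", dose="40 mg",
                              route="IV", frequency="BID")
print(mim.is_complete())                                          # True
print(SubstanceAdministration(substance="furosemide").is_complete())  # False
```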
And we also look at these within our encounter reviews, to verify that things are triggering and resolving as we expect them to. (DESCRIPTION) Text, Q and A. (SPEECH) I think that was all I had to cover today, and we can open it up to questions. Great. Thank you both so much. There was so much information that you got through, so thank you. We do have a couple of questions. The first one is: do the MIMs review the MAR or scanned information of drug administration, or does it need to be written in a provider's note? So from the NLU standpoint today-- and I'm talking about the functionality as it stands today-- we capture the medications from the provider documentation. This year, hopefully in the next quarter or two, we will be adding in medication administration records, to be able to capture that information outside of just the provider documentation. So today, if you're using or implementing the technology, we would be capturing it from the provider documentation, but in the near future we will be looking to capture it outside of the provider documentation. The other piece I want to hit on, not directly tied to this question: as we look at the data sources, again, it is your provider documentation, it is discrete laboratory data-- and I stress the discrete part because we are looking across an HL7 interface, so we're not looking just at the provider documentation. Actually, we are looking at making a lot of changes as it relates to lab results, where we're looking not at provider documentation; we want to solely use the lab as the source of truth. So: the discrete laboratory data and the radiology results. We know our customers have a lot more requests out there, such as vital signs and flow sheets, and we want to get there, but today we are truly just looking at those sources. OK, great.
The next question we have is: does this integrate with Epic, and let's expand that to other EMRs? Yeah, so I don't want to focus on the word integration, because we are not in the Epic UI, or any EMR's UI, today. When we think of how we nudge providers, we sit on top of the EMR: there is a control bar sitting on top of the EMR, not inside the actual EMR's UI. We are partnering with our EMR vendors to see if we're able to enhance that, but today we sit on top. We do take the information from Epic, and this is for both Hyperdrive and Hyperspace. So if you are moving to Hyperdrive this year, or even in the future, we are doing testing with some of our early adopters today as it relates to Hyperdrive, and we'll be able to support that. I don't want to limit ourselves to a specific EMR; if you are interested to know whether we support your EMR, I would suggest reaching out to our 3M sales team and we can help with that. But with the larger EMRs in the industry today-- Epic, Cerner, Meditech Expanse-- we do work alongside all of those. Great. All right, we have another question about the actual nudges: does the HF scenario generate both a provider nudge and a CDI notification? Yes, it does. If it's turned on in both scenarios, like Josh was talking about in the best practices, it will surface either a fully documented opportunity or an opportunity to capture heart failure for the CDI specialist, and it will generate a provider nudge if we don't have that type and acuity already documented. All right, great. Another question we have is: can you create content outside of CDI? Yes. Let me take that. You want me to, or you? Sorry, Josh. Yep, you can. Go ahead. OK. Yeah, absolutely, we're always looking for areas to expand our content, internally as well as with partners in the community.
We recently were given an example of an article that had a really interesting physician template for cancer screening, and that was brought to us as a way we could maybe reason with the NLU to provide more information about these cancer patients and capture something up front. We're always looking for use cases like that to expand our content beyond just the CDI workflow, and Josh may have some good examples where we've done that in the past. Yeah, so an example where we have helped to identify patients beyond just CDI: if you're currently using our Engage One platform, you do have access to the content that is available, and you may see some of the conditions that are there, such as identifying patients that may require hospice care a little bit sooner in their plan of care. So we do have different conditions available beyond just CDI. I always tell customers, don't assume that we're not able to do something; please give us your use case, because most likely we're able to develop it out. It just may depend on which application we find would be best suited for it. All right. I'm going to go back to the nudges. Can you set it up to nudge the provider based on specific sepsis criteria-- for example, the different types of sepsis and those types of criteria? We can create customized rules based on any criteria that are very clearly defined. So if we have a very clear use case of the Sepsis-3 criteria-- what would be expected to trigger and resolve, and the pieces of evidence-- we can work within our parameters to build a rule to do that. All right, great. The next question is: does this encourage doctors to move these diagnoses to their discharge summary?
So I think the important part here is capturing the specificity in real time, while they're doing their documentation. We have heard this request from customers in the past: hey, is there a way to nudge a provider when a condition gets dropped? Let's say you have a patient with a very long length of stay-- they've been there for a couple of months-- and something was documented early on in their stay and didn't get carried forward in their documentation. Unfortunately, we're not able to capture that today; we're not able to say, hey, you've documented this specificity somewhere within the encounter and it didn't make it to this specific piece of documentation. Our hope is that the provider is adding the required specificity to their problem list and really using the problem list in a way that helps drive patient care-- and I know we all know the problem list is a disaster and doesn't always reflect the patient's current conditions. But whether they update their problem list or pull some type of list forward in each of their notes, once the specificity is contained within the encounter, our hope is that it would get captured in the discharge summary. But unfortunately, today there's no way for us to nudge the provider and say, hey, you need to add this to your discharge summary. All right. Another question: does your program utilize APR DRGs or mirror the Elixhauser Comorbidity Index-- I can never say that word. [LAUGHS] So the important thing to keep in mind as it relates to our AI is that we are looking at documentation quality; we're not looking at any type of financial impact. And again, this is feedback we sometimes hear, especially from the CDI teams: this encounter is fully maximized, or, I wouldn't necessarily have queried for this condition based on what they're seeing. The NLU doesn't understand financial impact.
We're really looking at the clinical concept and making sure that specificity is being captured. We're not necessarily opening up a DRG workbook and seeing what the conditions map back to. We're really taking from our customers and from our SMEs: what is the use case to capture this specificity? And then it works its way through the process-- OK, if you capture this specificity, then you'll capture the DRG you may be looking for. We're not necessarily developing content against a specific financial model. All right, great. And let's go ahead with one final question, so everybody has time to get to their next meeting. Our last question for today: you mentioned customer requests-- what's an example of some of those that you've gotten? So as it relates to customer requests, I will say they're really all over the board as to what a customer may be looking to capture. You could bring us a specific use case related to a program or initiative your hospital is focused on, or some type of data point that you're not capturing appropriately today. So let's say-- and this may be a low-value example-- you're looking to capture any time a patient has any type of bleed. Maybe you're not focusing on a specific type of bleed, like a GI bleed or a head bleed, but you want any type of bleed to bubble up to your CDI team to have that encounter reviewed. We're able to take that information and develop it out. Think of the information that we need as A plus B equals C, as it relates to developing out the clinical concepts.
So if you can give us what you're looking to capture, either as the actual use case or just by telling us what you're hoping to capture, our team, with the different subject matter experts we have, is really able to put something together and present it back to you to make sure it's meeting your need. And Josh, we're going to ask you to circle back to the prior question, just for some clarification about the comorbidities: they don't currently use APR DRGs, so they're not really asking a financial impact question, more of a quality impact question. Does that make sense? Yes. As it relates to quality, or if you aren't using APRs: I would say yes, you could use our content in really any environment to help capture the needed specificity. And if there's something you're looking to capture that we're not focused on, or that we need to focus on, we can definitely have those conversations and see what we can do in the way of partnering with our customers to really leverage the content to capture what is needed. OK, great. (DESCRIPTION) That's a wrap! (SPEECH) Like I said, this has been a lot of great information and we had a lot of great questions come in, so we really appreciate your time today. A couple of questions did come in about whether there is a recording: there will be. After today's session, it'll take us a little bit of time, but we will get this posted onto our website in the next couple of weeks. If you would like more information about these solutions, within the portal there is a Learn More button; let us know and we can certainly contact you. (DESCRIPTION) Slide title, 2023 3M Client Experience Summit. Slide text, The future is now. Let's go. May 22 to 25, 2023, Atlanta, Georgia. A description includes the venue, the Westin Peachtree Plaza Hotel in downtown Atlanta, from May 22 to 25, 2023. Button, Learn more here.
(SPEECH) And we also encourage you, if you are a customer, to join us at our Client Experience Summit in May in Atlanta. There are going to be a lot of sessions, and Josh, I don't know if you wanted to talk about some of the sessions we'll have at CES, but if you are a customer, we definitely encourage you to join us. Yeah, so at CES this year, we do have a lot of current customers speaking about their experience using our AI in their different workflows, whether provider or CDI workflows. As well, if you have any physician leaders interested in our clinician solutions track: this is the first year at CES that we are going to have a provider-focused track related to the different clinician solutions applications, and the AI technology is included in that track as well. So if you have physician leaders or physician liaisons involved with your program that you think would benefit from attending CES, please reach out to the team and let us know, as we are looking to have an interactive physician group. And for the CDI and quality teams, there are different topics related to AI being covered, not only by 3M teams but also by customers. Awesome. Thank you so much. And just a couple of last questions that came in: the certificate of attendance-- you can download that from the resources section. Once the session ends and you complete the survey, you can't go back and redownload it, so take a minute to download the certificate of attendance now. You can utilize it to request CEUs; these are not pre-approved CEUs, but you can utilize that certificate of attendance to request them from an accredited association. And again, we will have this posted on our website in the next couple of weeks. So again, we really thank you for joining us today. (DESCRIPTION) Text, Thank you. (SPEECH) Please fill out that survey.
We'd love to hear how we did. And we will be having another CDI Innovation Webinar-- I believe it's the first week of May-- so be on the lookout for that registration; we'd love to have you join us again. So thank you both, Dannie and Josh, and we hope you all have a great day. Thank you.
(DESCRIPTION) Slide presentation. Logo, 3M, Science. Applied to Life. Text, Taking Piedmont CDI to the Next Level for the Win! 3M CDI Innovation Webinar Series. October 2022. A man in a white coat and a woman in blue scrubs sit together at a table looking at a tablet. (SPEECH) Good afternoon and welcome to our October CDI Innovation Webinar. Before we get started, I am going to go over a couple of housekeeping items. What we'll be talking about today is taking Piedmont Healthcare CDI to the next level for the win. We have a couple of great speakers here today, so we're really excited to have them. Before (DESCRIPTION) New slide. Text, On24 Webinar Platform for a better user experience! Use Google Chrome and close out of VPN/multiple tabs. Check speaker settings and refresh if you are having audio issues. Ability to move engagement sections. Ask questions! Certificate of Attendance available to download for live webinar sessions. Engagement tools and CC available. Check the resources section. Complete the survey. (SPEECH) we get started, I just want to go over a couple of things. This is a web-based platform, so if you are having any technical issues, make sure you're in Chrome and close out of VPN or multiple tabs; that'll help with bandwidth. A lot of times, a quick refresh will resolve any problems you might be having. Because this is a web-based platform, we do not have a dial-in number, so you will want to use your computer audio; if you are having any issues, check those settings. Because this is a new platform, I also want to go over some of the engagement tools you have. In the top area, you have a Q&A box. We encourage questions, so please put them into the Q&A box, and we'll get to as many as we can at the end. Down at the bottom left, you should see Resources; that is where the certificate of attendance is available for download.
You can also download today's presentation, as well as a couple of other resources. If you missed our August CDI webinar, a link to that recording is in there as well. In the middle, you can see an area where, if you would like more information, you can click and let us know. If you are interested in learning more about our speakers, there's a speaker bio section. And we always appreciate you completing the survey at the end to let us know how we did. One final thing: if you do need closed captioning, that is available in the media section of your dashboard as well. (DESCRIPTION) New slide titled Meet our speakers. Headshot photos of each speaker. Text, Gail Higle, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. Niki Spear, BSN, RN, CCDS, Manager of Clinical Documentation Improvement, Piedmont Healthcare. (SPEECH) So again, like I mentioned, we have some great speakers today, Gail and Niki from Piedmont Healthcare. I'm going to go ahead and turn it over to Gail to get things started. Gail? (DESCRIPTION) New slide. Text, Objectives. (SPEECH) Good afternoon. This is Gail Higle. Our talk today is about how Piedmont, in 2020, took our CDI department to the next level for the win. As most of you know, our Georgia Bulldogs are number one, and they won the national championship last year. CDI has a lot in common with working together as a team, and so we want to tell you how we, as a CDI team, became successful using Priority and Impact ROI, our wonderful 3M technology.
The objectives of today's talk-- our whole reason for starting to use this-- were to focus CDI reviews on the most needed cases, to maximize the use of worklists, prioritize needed follow-up reviews, benefit from the AI auto-suggested codes and queries, easily reconcile cases with coders' final codes for accurate financial impact, educate on inaccurate reconciliation and missed opportunities, and report vital CDI impacts to administration and to each individual CDI. The Impact tab was started after ACDIS was in Atlanta several years ago. At the national ACDIS convention, CDIs asked to have their impact: they want to know what impact each query they make has, and this wonderful Impact ROI, with the reports you can build out of SSR, does just that. (DESCRIPTION) A new slide with the Piedmont logo in the corner shows a photo of a tall building that curves slightly. (SPEECH) This is Piedmont Healthcare's newest building in downtown Atlanta. The building was opened in August 2020. It is a 408-bed facility, 16 stories, in the heart of Atlanta's historic district. There are 16 ORs, eight cath labs, and four cardio-physiology labs. It also has an urban plaza with a Starbucks and a 300-car garage, and it's very high tech, with our world-renowned cardiovascular surgeons. If you would like, you can go on YouTube and take a tour of our wonderful facility in downtown Atlanta. (DESCRIPTION) New slide. Text, Piedmont. Real Change Lives Here. Piedmont has more than 31,000 employees caring for 3.4 million patients across 1,400 locations and serving communities that comprise 80% of Georgia's population. Piedmont has provided $1.4 billion in uncompensated care and community benefit programming to the communities we serve over the past 5 years. (SPEECH) I'll tell you a little bit about Piedmont. It is the largest health care provider in the state of Georgia.
We currently have 22 hospitals, 55 urgent care centers, 25 QuickCare locations, 1,875 Piedmont Clinic physician practices, and more than 2,800 Piedmont Clinic members. Just last year, in 2022, Piedmont was ranked 166th among the Best Large Employers in the US by Forbes, and we are very proud of that. (DESCRIPTION) A new slide shows photos and years built of different buildings: Atlanta 1905, (1957 location), Fayette 1997, Mountainside 2004, Newnan 2006, Henry 2012, Newton 2015, Athens Regional 2016, Rockdale 2017, Walton 2018, Columbus Midtown 2018, Columbus Northside 2018. (SPEECH) Now, the study that we did began in 2020, and in 2020, these are the 11 facilities that Piedmont had. What is interesting is that we had 11 facilities with seven integrations in six years, so we are constantly changing and growing. (DESCRIPTION) New slide shows more building photos: Macon Coliseum 2012, Macon North 2021, Cartersville 2021, Eastside 2021, Eastside Loganville 2021, Augusta 2022, Augusta Summerville 2022. (SPEECH) In 2021, we acquired five more facilities. These five facilities joined 3M in June of 2022 and are now part of our CDI program. Also in 2022, we integrated with Augusta and Augusta Summerville, which is the university hospital for the Bulldogs. We will integrate them into 3M and Epic next year, in November 2023, so we look forward to that. Piedmont continues to grow. (DESCRIPTION) New slide titled Piedmont CDI shows a photo of the University of Georgia Bulldogs football team together on the field. (SPEECH) A little bit about Piedmont CDI: we are celebrating 10 years of being together. It started in 2012. We began with two facilities and four CDIs reviewing only Medicare. By July 2019, we had grown to 11 facilities, a director, four managers, an educator, and more than 35 CDIs reviewing cases using the length of stay and working DRG priority that is in 3M 360.
So all payers: no self-pay or charity, no OB, no peds, and no NICU. In October of 2019, before COVID, we went 100% remote. This is very, very important, because that helped us get through the rough times of COVID. Starting April of 2020, CDIs began reviewing using the length of stay and working DRG priorities for all admissions except OB, peds, and NICU. (DESCRIPTION) New slide. Text, Piedmont Case Selection July 2019 to October 2020. CDIs assigning cases by LOS and 3M Working DRG Priority. Filter by LOS, choosing cases greater than or equal to 3 days. Next, select cases by working DRG priority in the following order: 1, symptom Dx/DRG; 2, medical cases without CC/MCC; 3, surgical cases without CC/MCC; 4, surgical cases with CC without MCC; 5, sepsis DRGs review; 6, review DRG, consider alternate DRG; 7, questionable admits; 8, medical cases over GMLOS; 9, elective surgery over GMLOS; 10, low priority cases, minimal change impact; 11, optimal DRG, no need for review/re-review. A section of a chart shows Active Priority Factors and Working DRG information. (SPEECH) If you are not familiar with what 3M working DRG priority looked like prior to the priority list, this is what the hierarchy of the working DRG priority looks like. (DESCRIPTION) New slide. Text, Welcome to the Game. A photo from a UGA football game with the opposing team about to snap the football. Text, March 19, 2020, First Piedmont COVID-19 admission. (SPEECH) Then, lo and behold, COVID hit. March 19, 2020, the first Piedmont COVID admission happened. Our team was stable at that point; we had no integrations that year. This gave us an opportunity to use 3M technology to focus CDI reviews on the cases that most needed review. Piedmont administration gave CDI the goal of reviewing 80% of admissions, because you cannot review 100% of admissions.
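To make the manual case-selection process described above concrete, here is a minimal sketch, not the 3M 360 Encompass implementation: the LOS filter plus the ordered working-DRG hierarchy expressed as a simple sort. All field names, category labels, and weights are hypothetical and for illustration only.

```python
# Illustrative sketch of the pre-Priority manual workflow: filter by LOS >= 3
# days, then work cases in working-DRG hierarchy order (lower rank first).
# Category names mirror the slide; the data structure itself is invented.
HIERARCHY = {
    "symptom_dx_drg": 1,
    "medical_no_cc_mcc": 2,
    "surgical_no_cc_mcc": 3,
    "surgical_cc_no_mcc": 4,
    "sepsis_drg": 5,
    "review_drg_consider_alternate": 6,
    "questionable_admit": 7,
    "medical_over_gmlos": 8,
    "elective_surgery_over_gmlos": 9,
    "low_priority": 10,
    "optimal_drg": 11,
}

def build_review_queue(cases):
    """Keep cases with LOS >= 3 days, then order by working-DRG hierarchy rank."""
    eligible = [c for c in cases if c["los_days"] >= 3]
    return sorted(eligible, key=lambda c: HIERARCHY[c["category"]])

cases = [
    {"id": "A", "los_days": 4, "category": "optimal_drg"},
    {"id": "B", "los_days": 5, "category": "medical_no_cc_mcc"},
    {"id": "C", "los_days": 2, "category": "symptom_dx_drg"},  # filtered out: LOS < 3
]
queue = build_review_queue(cases)
print([c["id"] for c in queue])  # ['B', 'A'] -- B outranks A; C excluded by LOS
```

The point of the sketch is the weakness the speakers go on to describe: a static, once-a-day sort like this cannot react to new documentation, which is what the Priority Worklist later addresses.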
So at that point, our director, Lori Dixon, who was very instrumental in leading our team using technology, brought us together and had Niki and me start using Priority, the Worklist, and the Impact ROI right in the middle of the five COVID waves. Our peaks were April 2020, July 2020, January 2021, August 2021, and January 2022, and right in the midst of that, we began using both at the same time. Niki will talk about Priority. (DESCRIPTION) New slide. Text, Taking Piedmont CDI to the Next Level amidst COVID-19 Waves. October 2020, Priority and Impact ROI launched together. Two side-by-side screenshots, the left a table labeled North Priority Worklist with a long list of illegible items. The right screenshot shows a dashboard with tabs along the top and a lot of information with a popup box on top. (SPEECH) To get started, this is what our Priority looks like on the left, and on the right, this is what Impact ROI looks like. Niki is going to tell you now about her beginnings with the Priority Worklist. (DESCRIPTION) New slide. Text, Priority Worklist Launch. Practice makes perfect! Prior to system launch: set up the game plan, 3M defaults for prioritization points, established regional superusers, trialed different functionality, modified CDI workflow for the Priority Worklist, priority superuser team chose the layout of the worklist columns, added focus DRG priority for sepsis. Priority factor weights: new documents: OP Note, DC Summary, Queries. Financial class. (SPEECH) Thanks, Gail. So with the increasing challenges of staffing, we adopted prioritization as a tool to improve case review efficiency. With the goal of reviewing 80% of all adult inpatient admissions except mother-baby, the 20% that we could not review should have the least likelihood of query opportunities. Prior to implementing prioritization, we attempted to do this manually, as Gail said earlier, with the list that she had shown.
However, old workflows and habits are hard to break, and many staff would gravitate to cases that they preferred, reviewing by service line or length of stay, or, as some staff would call it, cherry-picking. This results in inefficiencies in the review process. Using the prioritization worklist as a tool and customizing it to our needs helped sequence the cases from the highest priority for query potential to the lowest without having to manually sort through the worklist. CDIs could just take the next case in line and review it. To implement the Priority Worklist, we decided to practice with a soft launch. If something did not produce the end result we had in mind, we could adjust and make changes and improvements that would help prepare for the overall system launch. We established regional superusers. Prioritization, as a tool within 3M, allows for a lot of customization. We started with the 3M defaults and adjusted from there. I would recommend assessing what works best for your facilities. For example, sepsis has a high potential for denial, so we set up a focus DRG for sepsis cases to review for clinical validity and associated organ damage. We also found that adding priority factor weights for new document types was a great tool for CDI. (DESCRIPTION) New slide titled Priority Worklist Launch: Game Time. A photo of the UGA football team in a huddle on the field. Text, At October 2020 department meeting, priority worklist manager and priority superuser team presented priority worklists to staff and educated staff on prioritization and new features. Second workgroup came together to create a daily workflow job aid. Additional education using 3M tools and filtering. Region priority worklists were implemented following meeting. Priority worklist manager continues to validate worklist and educate staff. (SPEECH) Game time: Priority Worklist launch.
So at the October 2020 department meeting, we presented the new priority worklist and educated staff on prioritization and new features. Regional priority worklists were implemented following the meeting. We had a second education session on 3M tools and filtering about a month after the initial launch. Another workgroup came together to create a daily workflow job aid to assist the staff. We continue to validate the worklist while offering ongoing support and education to the staff. (DESCRIPTION) New slide titled Piedmont CDI Regional Priority Swim Lanes. A screenshot of the 3M CDI Dashboard for Gail Higle. It shows a color-coded key: purple for prioritized, green for ready, gray for scheduled for today, red for queries pending, blue for scheduled for later, and orange for discharged and pending. Below is a horizontal bar chart labeled Visits. The bars are labeled with priority worklists for various locations. Each bar is divided into colors, each with a number on it to correspond to its length. (SPEECH) This is the CDI dashboard work queue divided into four regional swim lanes. There is one manager per region with 9 to 10 CDIs reviewing. This does not include Augusta, which is to be integrated into Piedmont, Epic, and 3M next year. (DESCRIPTION) New slide titled Piedmont Priority Worklists. A screenshot shows a chart titled North Priority Worklist with blurred-out information. The columns are: Visit ID, Patient name, Score, Case status, Last review date, Assigned to, Last access, Available documents, Pending queries, Provider queries, Follow-up, Notification, Priority, and Working DRG, Wt/GLOS/SOI/ROM. (SPEECH) We sort the priority worklist by unreviewed cases and start reviewing from the top. We customized our worklist to show the priority score, then case status, last review date, assigned to, and the last access to the chart. We customized which available documents to show.
We included columns for the number of pending queries, the names of the queries we sent to the providers, any follow-up that was assigned, notifications to coding, and any additional priority that we may assign individually to a CDI. And then, at the end, the auto-suggested working DRG. These columns were customized by our superuser teams, and that was very helpful for them. (DESCRIPTION) New slide titled New Features. Text, At a glance, see how many queries need follow-up. A screenshot shows a closeup of the Pending Queries column from the worklist. Above the chart it says 6 pending queries. The choice CDI Query Status Pending has been chosen from the Priority Factor dropdown menu. Text, Who last accessed account. A screenshot of the Last Access column shows different names in each row. Text, Case status and last review date. A screenshot of the Case Status and Last Review Date columns. The case statuses shown are Discharge and Concurrent. (SPEECH) These are some of the new features we shared with the staff. The first column shows the ability to sort by priority factor to quickly see how many queries are pending, which is also helpful if we have staff out and are covering for each other. In the middle, you can see who last accessed the account. This is useful in determining whether coding has had a chance to review the case. And in the last column on the end, there is a case status that shows new, concurrent, or discharged, and also the last review date, which shows in green if the case was reviewed today. (DESCRIPTION) New slide titled Priority Scoring showing two screengrabs from the Home tab of the dashboard, the left one labeled Ability to dismiss factor. It shows the Priority Score, 310, and the Visit State: New. A blue box appears around Possible Sepsis, 30. Below the written statistics is a line chart labeled Priority Score Progression showing the Findings and Priority Score from 3 PM to 3 AM. The screengrab on the right side is labeled Action Items.
It shows the same information, but the Action Items section at the top reads, 1 Open, 1 Total. A blue box appears around the heading and the text, Open. Actual Result: Codes not found in Final Codeset. Immediate action is required. (SPEECH) Priority scoring. To dive a little deeper, you can see additional priority scoring tools within the encounter. On the left is the ability to dismiss or resolve a factor. When CDI reviews for possible sepsis and then decides not to query, they can dismiss the factor, which removes that priority factor from the scoring of the case. The priority score pertains to just the new information, documentation, or status changes, to give the most up-to-date priority score and assist with which case to review next. The open action item on the right side of the page shows a missing query response in the final code set. This creates an alert to the CDI for the missing code and helps prevent re-billing. Before, this was a manual process of comparing codes, but now the CDI is notified of the missing query response in the final coding. This improves efficiency with the time spent in the chart and helps reduce errors. (DESCRIPTION) New slide titled Workflow Changes. Text, Assign and complete initial case review one at a time by priority score. No longer assigning 10 to 12 cases when you sign on, only assign the one you are working on. No longer required to assign follow-ups for all cases. Only assign follow-ups as needed, for specific reasons and not for routine scheduled follow-up. Worklist will move the cases with the highest priority to the top of your list. (SPEECH) The two biggest workflow changes that we had were assigning cases one at a time by priority score and not scheduling routine follow-ups for all patients. The process now is to assign and complete initial cases one at a time by priority score, no longer assigning 10 to 12 cases when you sign on in the morning.
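The dismiss-a-factor behavior described above can be pictured as a score that is simply the sum of its active factor weights, where dismissing a factor removes its contribution. This is a hypothetical sketch for illustration; the factor names and weights (other than the Possible Sepsis, 30 example from the slide) are invented, not 3M's actual scoring model.

```python
# Sketch: a priority score as a sum of factor weights. Dismissing a reviewed
# factor (e.g. possible sepsis where CDI decided not to query) removes its
# weight, so the case stops competing for attention on that basis.
class PriorityScore:
    def __init__(self, factors):
        self.factors = dict(factors)  # factor name -> weight (hypothetical)

    @property
    def score(self):
        return sum(self.factors.values())

    def dismiss(self, name):
        """CDI reviewed this factor and chose not to query; drop its weight."""
        self.factors.pop(name, None)

case = PriorityScore({"possible_sepsis": 30, "new_op_note": 20, "pending_query": 15})
print(case.score)               # 65
case.dismiss("possible_sepsis")
print(case.score)               # 35 -- the dismissed factor no longer inflates the score
```

Because scores recompute as factors arrive or are dismissed, sorting the worklist by score naturally surfaces the next case worth reviewing, which is the workflow change the speakers describe next.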
Cases are continuously updated in real time, so the cases at the top of the list are the most likely to need clarification. In the past, when new documentation came in, it would go unnoticed until the CDI manually reviewed it. Using prioritization as a tool gets the CDI to the case most likely to need a review without having to manually recheck each case for changes, and this reduces unnecessary or non-value-added reviews. We no longer assign follow-ups on all cases. The CDI only assigns a follow-up for a potential clarification and uses prioritization as a tool to alert them when a new review is needed. With continuous updating of prioritization scoring, we don't need to spend the time following up on all cases looking for changes. We now use a combination of technology and CDI expertise to improve review efficiency. (DESCRIPTION) New slide titled Priority Ongoing Improvement. Text, Obstacles. Questioning change: increased autonomy in setting reviews using clinical expertise and 3M tools to enhance review efficiency. Only creating follow-ups as needed. Choosing cases by priority score and not picking preferred service line, DRG, short stays. Regional differences: surgery hubs, sepsis cases. CDIs working different hours caused differing case loads. CDIs working from 4 AM to 10 PM, live across the US in different time zones. Of note: asked staff to escalate cases that appear not to have a correct score. Validated all cases as having correct scoring by priority settings. (SPEECH) Some of the obstacles we encountered were questioning change and needing to reinforce the new workflow, and there were regional differences, such as surgery hubs and sepsis cases. There has been ongoing improvement to the priority. CDIs can now touch between 30 and 40 total cases per workday, with some staff taking upwards of 16 to 17 initial cases. This is a success we really owe to the staff and the workgroups that worked on this. They were instrumental in getting Priority going.
It's important to encourage the staff not to revert to old workflows and assign follow-ups for all cases, where they will be buried in a sea of red, overdue follow-ups. We found that with routine scheduling of follow-ups, many would not get reviewed before discharge, and the act of scheduling follow-ups was inefficient, resulting in many clicks to set up and then resolve the follow-ups upon reconciliation. The workgroup also noticed that when follow-ups were scheduled on every case, a potential query opportunity the CDI recognized would be one among many follow-ups and would likely be missed. So while a CDI may wonder, "Will I miss something?", they are now using their clinical expertise to assign follow-ups only for a particular reason, allowing them to get to the cases that really need to be reviewed. This gives the CDI increased autonomy in setting reviews. Of note, to help create buy-in and support, we also asked staff to escalate any cases that did not appear to have the correct score, and we were able to validate all of those cases as having the correct score by the priority settings that we chose. So that is what I have for prioritization. One last thing: I would encourage you to play with prioritization. It's a bit of a tinkering tool, customizable to whatever comes up. If you want to do reviews for sepsis, you can; we're doing a new travel project, and we were able to set up worklists based off of that. It's an incredible tool that you can tinker with. That is what I have. So back to you, Gail. Thank you, Niki. We launched this together as the department in one department meeting. At that point, we were all remote, so it was one large department meeting online, through Webex at that time. We now use Teams.
And after she presented the priority worklist that the team put together, right after she spoke, I spoke about the Impact ROI launch. (DESCRIPTION) New slide titled Impact ROI Launch. Text, Impact ROI manager presented at October 2020 department meeting; Impact ROI education highlighted. Query scenarios for missing diagnosis, new principal diagnosis, clinical validation, and POA. Impact ROI reconciliation steps, including open action item for uncoded query responses. CDI scorecards to display individual CDI information: PDX, MCC, CC, procedure, SOI, and ROM impacts and accurate financial impact. Impact ROI tab implemented after department meeting. Additional benefits and support: regional manager validation worklists save managers time validating impactful cases concurrently. CDI case reconciliation is concurrent, before the bill drops, not at the end of the month. Impact ROI manager provides ongoing education at department meetings and after feature updates. Ability to submit 3M enhancements to improve reporting of impacts to administration and CDI scorecards. Managers continue to troubleshoot cases with errors, including missing and incorrect impacts, and escalate unresolved issues to 3M. (SPEECH) And of note, right after this meeting, it was turned on in 3M. So when everybody went back to work after this meeting, they had the worklist, and their cases were also going to Impact. To tell you about our launch: when we started, I went to 3M. In 2020, they had updates 20.7 and 20.8, and I used those updates to create a PowerPoint to educate the CDIs at that meeting. After the meeting, each CDI got a copy of that PowerPoint, so when they were reconciling their cases, they understood how to do it step by step, with some different query scenarios on how to reconcile cases using the Impact tab. We did missing diagnoses, new principal diagnoses, clinical validation, and POA.
Those examples are all in the original 20.7 and 20.8 updates that 3M did. We also went through the Impact ROI reconciliation steps, including the open action item for uncoded query responses. And I will show you how we created a CDI scorecard that gives each CDI all of their credit for their queries, for PDX, MCC, CC, procedures, SOI, and ROM, and the accurate financial impact. I'll talk about that in a minute. The Impact ROI tab was implemented right after the meeting, right as we started the worklist also. The big benefit of the Impact ROI tab is that our regional managers can validate the worklists and the cases as they are completed instead of at the end of every month. This saved us a lot of time; for regional managers, it was in real time. And because of that, CDI cases were concurrently reconciled with the coder before the bill dropped. This also saves Piedmont a lot of time getting those bills out the door, instead of waiting until the end of the month and raising red flags when bills are held. The Impact ROI manager, which is myself, provides ongoing education at department meetings. Any time there's a feature update, like the quarterly 3M feature updates, we do further education. If it is big, it becomes part of our department meeting; if it is small, it goes out in an email the morning after the update. Lists of the updates are sent directly to the CDIs so they can see the cosmetic changes or whatever other changes 3M has in that feature update. The ability to submit 3M enhancements was huge. As we started building more reports to send to administration or add to scorecards, there were more fields that we wanted. Geometric length of stay was one of those. That enhancement was put in, and within a couple of months, the ability to put geometric length of stay on our administrative KPI was added.
And so, that is also very helpful. Managers, and this is the big part, continue to troubleshoot cases, even today. We were recently working on a POA query that a CDI couldn't get to impact, and the managers worked together and looked at it. The biggest issues are missing baselines and incorrect impacts, and we escalate any problems we find to 3M. As we find issues, they get right back to us. It has been a wonderful collaboration. (DESCRIPTION) New slide. Text, Steps for Successful Impact Tab Reconciliation. Before checking CDI Final Review Complete. The left side shows a screenshot from the Impact ROI tab on the dashboard. An orange arrow points to the word Codesets in the upper right corner. The tab is labeled Final Cumulative Impact. There are statistics across the top such as estimated financial impact, weight, SOI, and ROM. An orange arrow, labeled Coder's Final Codes, points to the Baseline row under DRG Type. The right side of the screen shows the Codefinder page. An orange arrow, labeled CDI's Codes, points to the two codes and their info listed under the Medicare DRG and MDC information, 177, Respiratory Infections and Inflammations with MCC, and 004, Diseases and Disorders of the Respiratory System. (SPEECH) Here is what the steps of successful Impact tab reconciliation look like. You have the coder's codes on the left and the CDI's codes on the right. The CDI can see their DRG and the coder's final DRG, which serves as the baseline. Here are all the codes that the coder coded, in order, and the query links are the RD query templates. This next part covers the steps of how you would reconcile a case. (DESCRIPTION) New slide. Text, #1 Queries are linked to Coder's Codes. A screenshot from the Impact ROI tab showing a chart titled Final Diagnosis Codes.
It shows each code, its description, and POA, Affect, MCC, CC, SOI, ROM, HCC, HAC, PPC, Elix, and Baseline. Under the Query column, Sepsis with Criteria PHC for one item and CHF PHC for another item is circled. (SPEECH) Number one, the CDI looks at their query. First of all, is it linked to the coder's code? The slide before was an RD query. Here, we have sepsis: the query is linked to the sepsis code, and the CHF query is linked to the CHF code. You can see the baseline DRG and the final DRG, and all of the impacts going across the top. (DESCRIPTION) New slide. Text, #2, Home Tab: Query Green Check Mark, Except Clinical Validity Queries. A screenshot shows the Home tab, including headings for Action Items, Priority Score, Activity, Findings, Follow-ups, and Queries. The status Finalized is circled along with the green check mark and Pulmonary Embolism PHC next to it. (SPEECH) The second thing they look for is in the Home tab. Is there a link? If they did not find their query placed in the Impact tab, they go and look for the link in the Home tab and check whether the green check mark is missing. The exception is clinical validity queries: clinical validity queries will not have a green check mark. (DESCRIPTION) New slide titled #3, Home Tab: No open action items. It shows another screenshot from the Home tab. Under the Findings heading, a chart of codes is shown for a patient. The Elix, Baseline, and Query columns are circled for one of the codes. A check mark appears in Elix, and the Baseline and Query columns are blank. In the Queries section, the status reads Finalized and the query reads Malnutrition PHC. Text, CDI needs to check for correct query response diagnosis code. If not coded, send coder notification to code the query response. (SPEECH) And third, if it still isn't linked and they haven't figured out why, they go to the Home tab, where there may be an open action item.
This occurs when the coder did not code the exact codes from the query box. In this case it was malnutrition, and immediate action is required: the malnutrition code was not added by the coder. The CDI would add a notification to the coder to let them know that the code was not coded. (DESCRIPTION) New slide titled Impact ROI Ongoing Improvement. A photo of the Bulldogs preparing to snap the football. Text, Obstacles: Accurate financial impact: collaboration with Epic and 3M team to correctly interface coder's Epic estimated reimbursements to 3M 360 Encompass. CDIs continuing to use final DRG comparison tab and not Impact tab for reconciliation. CDIs missing Final Cumulative header for agreed queries. CDI logic for clinical validity: "Was the diagnosis documented and truly supported?" Cases should have zero-dollar impact. Incorrect negative and positive financial impact, mostly due to incorrect baseline diagnosis codes. (SPEECH) Ongoing improvement. The very first hurdle we had, our 3M team, our wonderful Wendy, Barry, and Orlando, helped with. I worked with Barry on the fact that the estimated financial impact in Epic was not coming over correctly to 3M's Impact tab for all of our then 11 facilities. At that point, we knew we had a problem: the figures weren't accurate with what was in Epic. So Barry worked with the Epic team, and they came up with a crosswalk table so that all of our finances now match, facility by facility and DRG by DRG, between Epic and 3M. We still check that every once in a while just to double-check. If you have any of those problems, work with your EMR team and see if you can figure it out. Our CDIs also continued to use the final DRG comparison tab instead of opening the Impact tab for reconciliation. It took some time to teach everybody to click the Impact tab first. Next, the CDIs were missing their Final Cumulative header on agreed queries; that was another learning.
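The three reconciliation checks above boil down to comparing the agreed query responses against the coder's final code set and flagging anything missing. Here is a minimal sketch of that check, purely illustrative: the function, field names, and ICD-10-CM example codes are hypothetical, not the 3M 360 Encompass open-action-item mechanism itself.

```python
# Sketch: find agreed query responses whose diagnosis codes never made it into
# the coder's final code set. Each hit corresponds to an "open action item"
# where the CDI would send the coder a notification.
def open_action_items(query_responses, final_codes):
    """Return agreed query responses missing from the final code set."""
    final = set(final_codes)
    return [q for q in query_responses if q["code"] not in final]

final_codes = ["A41.9", "I50.23"]  # coder's final code set (example values)
queries = [
    {"query": "Sepsis with criteria", "code": "A41.9"},  # linked: in final codes
    {"query": "Malnutrition", "code": "E43"},            # not coded by the coder
]
missing = open_action_items(queries, final_codes)
print([q["query"] for q in missing])  # ['Malnutrition'] -- notify the coder
```

Automating this comparison is what replaces the manual code-by-code check the speakers describe, and catching it before the bill drops is what prevents re-billing.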
If you got an agreed query, what was in the header? Did you get your CC? Did you get your MCC? Was there a check mark for clinical validity? A hurdle we still have at times is the CDI logic for clinical validity: was the diagnosis documented and truly supported? If it was documented, those cases should have a $0 impact. And finally, incorrect negative and positive financial impacts, mostly due to incorrect baseline diagnosis codes. We still struggle with some of those, but our team is really doing a great job. It's been two years, and we are getting so much better at this. CDIs will escalate anything that does not make sense to a manager, and the managers then work one-on-one with the CDIs to resolve those. We've even talked as a management team about not validating those anymore, because the team has been doing such an excellent job of correctly validating queries and reconciling them on their own. So we have come a long way in two years. (DESCRIPTION) New slide titled Key Secrets to Winning: Team Support. Text, #1, Successful Remote Team Support. By the time COVID-19 high admissions and continuous coding changes hit, Piedmont remote CDI had established: fairly reliable remote technology with Webex (now Teams), CDI scorecards with productivity expectations, weekly leadership team meetings, monthly department meetings and regional manager-led team meetings, monthly manager/CDI scorecard meetings with annual performance evaluations, addition of CDI educator, facility physician advisors with bi-weekly query reports, remote nationwide hiring and orientation, administrative support, CDI coding collaboration and buddy system, supportive Piedmont 3M team, CDI leadership approach of education, guidance, and trust. (SPEECH) So what are some of our secrets? One, I would say, is team support, and I think all of our managers would agree.
And I believe our CDI team would be proud to say that we really do have excellent support in the way we work together as a group. Going 100% remote in 2019 was very key, because by the time COVID hit, we already had Webex; now we use Teams meetings. And fairly reliable technology: we still have our technology issues, but we have good processes, and we work consistently to get them fixed. We already had CDI scorecards with productivity expectations. We had weekly leadership meetings and monthly department meetings, and each of the four regional managers has their own team meeting once a month. Then we have monthly one-on-one manager calls with each CDI, and we also do annual performance evaluations. During this time, an educator was added, which was very key to continuing education support across the board, not only for CDIs; at that time, we did some physician education as well. Facility physician advisors were added, and at this point, we have a physician advisor at almost every facility who gets a bi-weekly outstanding query report. We continue that to this day. Remote nationwide hiring and orientation was a learning curve, but we had already gone through it. And administrative support: administration has been very supportive of us over the years despite the setbacks of COVID. Then, CDI-coding collaboration: we have a buddy system. Each CDI is connected with a buddy in their region who also codes for that region, and that is very key. With that, we also have our second-level review team, C2E, that handles second-level reviews where CDI and coding might not see eye to eye, and we use our notification system very closely as a team. And we have a wonderful, supportive Piedmont 3M team; I can't thank them enough. And then, our CDI leadership approach: education, guidance, and trust. Ongoing education, but trust is so, so important.
Our CDIs, some with up to 40 years of nursing experience, have a lot of knowledge to give. We just have to trust each other as a team to grow and learn. There are going to be mistakes, but we work together, we get through the mistakes, and we continue to grow. (DESCRIPTION) New slide titled Key Secrets to Winning: Coaching. Text, #2, Team Building, Goal-Driven Leadership. Director with vision for change, willing to take risks in new technology, and provides direction to managers; bi-monthly leadership meetings. Weekly one-on-one calls with managers and educators for development, support, projects, and goals. Managers and educator meet weekly to update processes through job aids, analyze tough reconciliations, escalate potential errors to 3M, submit needed enhancements to 3M; daily ongoing support of CDI team members through priority and reconciliation education; monthly scorecard calls to build relationships, review progress and goals, and inspire growth. Key factor: to trust CDIs, with education provided, to work autonomously. CDIs work daily to follow the job aid process and to meet and exceed CDI scorecard goals, promptly escalate priority or impact reconciliation problems, consistently collaborate with coders through notifications to complete reconciliation for timely billing. Department meets monthly, led by director, supported by managers, educator, and CDI; coding, priority, and impact education, updates, and team building. (SPEECH) And that was our team support. Our next winning secret is our coaching. Like I said in the beginning, we have a wonderful director, Lori, who seized these kinds of opportunities 3M offers and instituted these changes. To this day, she's still supporting us to add further technology and grow as a team, whether through education or technology. She has a vision, and we do all of the calls that I was talking about. And our managers and educator still meet weekly.
And we go through job aids and talk reconciliation, and we escalate any potential errors to 3M right away. We keep track on a spreadsheet of what our errors are and the success of resolving them. And then, we escalate any needed enhancements. Those might come out of this team or from our CDIs themselves; several of them have had ideas for enhancements. And then, again, our key factor is to trust the CDIs, with education provided, to work autonomously. That's something we all try to build into each other, and Lori also builds into us as managers. And then, the CDIs' daily workflow follows the job aid process to meet and exceed scorecard goals. I think Niki brought up that we have a goal of 11 to 12 initials a day, and some of our CDIs have actually made a goal of 14 or 15 initials a day. And we often see on our dashboard CDIs doing 45, 50 case reviews a day. Through the priority worklist and the impact tab, and all of our winning coaching and education, we're able to do this. And then, the department meets monthly, led by Lori, our director. And the managers, educators, CDIs, coding, we all work together to keep learning and moving on. (DESCRIPTION) New slide titled CDI Scorecard showing a chart titled CDI Query Impacts FY22. It shows information for each month of the year, as well as yearly totals, for number of queries, PDX impact, MCC impact, CC impact, procedures impact, SOI impact, ROM impact, number of clinical validation queries, and financial impact, in dollars. A graphic in the corner of the slide shows the UGA bulldog standing in front of a college football national championship trophy. (SPEECH) And this is what one of our scorecards looks like. You can see we built the impacts in. And when I was out at CEF this summer, I understand some departments have even added the HCCs on here and different other impacts that are available. You can see we have the number of queries, all of the PDXs, MCCs, CCs, procedures. 
And then, the financial impact is at the end. This is one of our team members who has been a CDI for 16 years, so we are very fortunate to have a very rich history of CDI on our team. And there is our winning little bulldog. (DESCRIPTION) New slide titled CDI KPI Dashboard. A chart showing PHC CDI KPI, All Admissions (No OB, Peds, NICU), for July of '21 through June '22. The information included is total admissions, total admissions reviewed, percent admissions reviewed, total reviews: initial, continued stays, retrospective, CDI average chart reviews per day, query rate, query agreement rate, provider query response time in days, financial impact, increased GMLOS days by queries, CMI balanced scorecard. (SPEECH) And then, this is our KPI dashboard that goes out to the administration at each of the facilities. As you can see, with one of the enhancements we added on here-- one of the key goals for Piedmont, of course, as in many hospitals, is to decrease the length of stay. And our case management teams are always looking for opportunities to increase our geometric length of stay on cases. So with our queries, we can tell our case management groups, these are the number of days that our queries have added. And then, we have our case mix index from our balanced scorecard, and our query rates, and our agreement rates. And one of the questions somebody asked me at CEF was, "Well, how many queries do you not get answered?" Well, very, very, very few. Our department sends an average of about 2,000 queries a month, and we have very few unanswered. It's not acceptable to not answer a query. So even with our new integrations, we do go through a learning curve. But with the support of the administration and of the physician leaders, we have been able to have very, very low no-responses so-- (DESCRIPTION) New slide. Text, Real Change for the Win! Estimated Financial Impact, up 15%. A photo of a UGA football player kissing the trophy. 
(SPEECH) and this is where it all comes down to: what did the priority worklist bring? (DESCRIPTION) Text, CDI impacts by working D R G Priority FY20, July 2019 to May 2020. Admissions reviewed, 71,406, query rate, 26%, agreement rate, 97%, physician response, 1.4 days. (SPEECH) And over that two-year period, looking back at 2019 and then forward through 2020 and 2021, we were able to increase our estimated impact by 15%. And this is the six-month period that we looked at-- we reviewed 71,000 admissions. Our query rates remained stable around 25% to 26%, and our physician response is about 1.4 days. (DESCRIPTION) CDI impacts by priority and impact ROI reports FY22, July 2021 through May 2022. (SPEECH) And then, we looked at July 2021 to May 2022, and we reviewed 73,000 admissions. And remember, this is the same number of CDIs reviewing. Again, query rate, 25%, agreement rate stayed stable at 97%, physician response, 1.5 days. (DESCRIPTION) Principal diagnosis impacts, 3,134, MCCs added, 6,637, CCs added, 4,505, procedures added, 193, GMLOS days increased, about 4,900. (SPEECH) And now, we are able to say how many principal diagnoses we've impacted, how many MCCs, CCs, and procedures we added. Geometric length of stay: we increased our geometric length of stay by almost 5,000 days. And our estimated impact, about 15%. I just reran some numbers yesterday and it continues to climb. In my region, we are up over 20% from last year alone. So we are doing excellent with the impact. (DESCRIPTION) A new slide shows photos of W. Edwards Deming and Nelson Mandela next to quotes. (SPEECH) And if I was to say anything, I would say that under our leadership with Lori, the four of us, plus Pam, our educator, one of the big things we try to instill in each other and in our team is education. As Nelson Mandela said, "Education is the most powerful weapon which you can use to change the world." And one of my favorite people is Edwards Deming. 
And he said that "85% of the reasons for failure are deficiencies in the systems and processes rather than the employee." It's usually not the employee, it's our processes. And "The role of management is to change the process rather than badgering individuals." (DESCRIPTION) To do better. (SPEECH) And what a great tool we have here in 3M, with the impact ROI and with the priority worklist, to bring new processes that improve and take away some of the frustrations and the lack of ability to grow. And we have grown. And to sum the whole thing up: Piedmont, real change lives here. (DESCRIPTION) New slide with the Piedmont logo. Text, Real change lives here. (SPEECH) So we are ready for some questions. Awesome! Thank you so much. I love how you tied that in at the end about processes and people. There are a lot of things within our everyday lives where that really rings true, where some of the frustration comes from. And I love that Piedmont is really looking at that head on. So I applaud that. Before we get started, we have a few great questions that have come in. I just want to remind everyone that the certificate of attendance is available in the Resources section for download. Also, on the bottom menu bar, you might see a little graduation cap; you can also access the certificate of attendance there. And I did see Linda just put in that question about the slides; those are also in the Resources section that you can download, so please make sure you do. We will be offering this webinar on-demand on our website in the next few weeks once we get all of that wrapped up, so if you do want to listen in again, you certainly can on our website. So (DESCRIPTION) New slide. Text, Q&A. (SPEECH) let's get into the questions that we have. Angela asked, "Do you encounter issues with accounts becoming a priority but now have a longer LOS? 
Did that cause any concerns with the CDI specialists?" I can take that one, this is Niki. Absolutely, that's a great question. We encouraged our staff to escalate cases if they had any concerns, and we did get cases escalated because they were unreviewed with a long length of stay. Before prioritization, we used to review with length of stay as a sorting feature and then go by the other auto-suggested DRGs. So with this process, it was a big change. We're not able to review all cases; we just can't review 100% of cases. So there are going to be about 20% or more of cases that we can't get to, that we're not going to be able to review. And we want the ones that we're not going to review to have the lowest probability of a query opportunity. So we would take a look and validate whether those cases with a longer length of stay should have been higher up on the worklist. And we would see that those cases typically had a low priority to review. When you're looking at cases: is that a medical or surgical case, [INAUDIBLE] or MCC, or is that an optimized DRG with little query opportunity? And what we found is those cases really had very little potential for query opportunities, and those are the ones that we're willing not to have CDI review. We just cannot review all of the cases. So that was a great question. Thank you. Great! Next question is, "Who trained the coders on Impact ROI?" The coders don't use it, but the CDIs do. And like I said, at that October meeting where we launched this and turned it on, I created a PowerPoint afterwards using the feature updates from the 20.7 and 20.8 3M documents. I presented that PowerPoint, and then each CDI was emailed the presentation, and they used that education. So that's how they were all educated on how to do those reconciliation steps and continue. Like I said, we continue to do ongoing education. 
But it took probably, I would say, a good three months for people to really get it down. All right, the next question we have is, "What is the time frame for the CDS to perform the reconciliation process?" They have their discharged and ready-for-final list, and it is expected that each CDI has 10 or fewer cases on that. And as managers, we pretty much go out and watch to make sure that people are doing their ready-for-final. They reconcile-- I saw another question out there that also asks this-- they reconcile 100% of the cases they review. Of course, the impact tab header only comes up on the agreed and documented queries, but they do reconcile 100% of their cases with coding, and they use the notification process. So at the end of every month, we expect that the previous month's cases will be reconciled by the 8th of the following month, except for a couple of queries that may be pending out there. For example, I went out on the system's list and looked this morning, and there were only two September cases left. One of them had the query answered just yesterday. So as of right now, there is only one query left that has not been answered from September. But it is expected that they keep them ongoing, because this keeps bills going out the door without delays. And they do a great job at it. Can you talk about some of the issues or challenges for the CDIs when starting the impact ROI? Well, first of all, just knowing: does my query make an impact? OK, if it did, was it positive, was it negative? And if they misunderstood, they emailed us: was the baseline missing, or was it present-- did you start with a UTI and go to a sepsis, was the UTI code in the baseline? That kind of thing. And that was probably the first hurdle. And then, the CDIs started to realize, hey, this did make an impact-- I'm getting my tab, I'm getting a positive or negative impact. 
Then, it just grew from there, and very successfully. And yes, we still troubleshoot problems. What percentage of your population is billed on APR? Also, do your CDIs code the record concurrently? We totally bill by DRG; we do not use the APR-DRG. And what was the second part? Do they code concurrently? No, our coders do not code concurrently. Our CDIs do-- they code the cases by using the priority list-- but our coders code after discharge. I think they're pretty much at one to two days post-discharge when the coders are coding. Do you-- [CLEARS THROAT] Excuse me, do you use standardized query templates? No, we don't. Our management team, starting somewhere around 2019 or even before, maybe 2018, started developing our own query templates. And every fall-- as it is right now, fall again-- we go through those. We now have 84 templates that we have written. With every update in the fall, we go through those again, quickly look at them, and make sure that they meet all of the new coding standards, and we will rework words or whatever we need. But like I said, we have written our own. Another question-- we probably have time for about two more. Let's go with: can you elaborate on how the CDI queries increased GMLOS? Yes, with the baseline DRG, when you look at that in the header, it will tell you what the geometric length of stay is for that DRG. And when you move from pneumonia to sepsis, it'll show you that sepsis has a longer geometric length of stay. And in SSR, there is a report that you can run on the impact. There is a field for final baseline to final geometric length of stay change, and that can be pulled into your report. And that is how that is reported. All right, are your-- are Piedmont-- my goodness! Are Piedmont CDIs only Georgia-based? No, we have CDIs all over the nation. We have two in California. Hello, John. Hello, Abby. 
And we have-- Diana's up in Iowa, we've got Aaron and Carrie up in the Midwest-- or North-- we have CDIs in the Midwest. And our director, Lori, is in Florida, where it's nice and warm. So we are all over the nation. The rest of us are mostly in Georgia, but we are all over. Got to love the ability to have remote workers; I don't know what we would do if we didn't have that ability. So I applaud that. Let's go ahead with one last question. Do you have any issues with retrospective queries needing to be sent because accounts weren't re-reviewed? Yes, and we keep an eye on those. We also use the SSR reports to look at concurrent versus retrospective queries. As managers, every month or so, we will run a report to track and trend that, because if people are not doing their follow-ups, they might end up with a lot of retrospective queries. And we try to keep a handle on that so that those bills aren't delayed and we aren't impeding Piedmont's ability to balance our budget. So yes, we do keep an eye on that, and coding watches that very, very closely as well. (DESCRIPTION) New slide. Text, That's a wrap! (SPEECH) Great! Well, thank you to everyone that submitted a question that we weren't able to get to. I do want to thank our speakers today. One of the questions asked was about submitting for CEUs. You are able to take that certificate of attendance and submit it to one of the accredited associations to get those. So if you do have any questions about that, you have the ability to email us within that menu bar as well. But you should be able to download that certificate and submit it. Again, a great presentation from Piedmont Health, we truly appreciate it. We will have this recording available in the next few weeks on our website if you do want to listen in again. If you could, please complete that survey; we always love to hear how we did. 
And also, be on the lookout for the final webinar. Gosh, I can't even believe that we're already talking about the end of the year. Our final CDI Innovation Webinar will be in December, so be on the lookout for that so you can register. So again, thank you, Piedmont, and we look forward to hosting you all again. Thank you so much. (DESCRIPTION) New slide. Text, Thank you. (SPEECH) Thank you. Thank you.
(DESCRIPTION) A video conference. Two women sit in adjacent chat windows, wearing headphones. A text bar indicates Lisa Paulenich is present on the phone. A very small window on the screen shows a slideshow title slide. 3M C.D.I. Innovation Webinar Series. Data as a Catalyst to C.D.I. Program Performance and Physician Engagement, a Four-Step Approach. A photo shows two business people smiling in a conference room. (SPEECH) Good afternoon, and thank you everyone for joining us in our June CDI innovation webinar. Before we get started, I am going to go over a couple of housekeeping items. (DESCRIPTION) The slide changes. New year, New Platform. The additional text on the slide is too small to read. (SPEECH) If you joined us last year, and this is your first time joining us in 2022, you might notice that we have a new webinar platform. That is really here for a better experience for attendees. If you're joining today, definitely make sure you're using Google Chrome, closing out of any VPN, multiple tabs, that will help with your bandwidth. If you are having any issues with your audio, check your speaker settings and do a quick refresh. Because this is a web-based platform, there is no dial-in number. Everything is through the actual portal. We do offer closed captioning. So in the media section, if you do need closed captioning, that is available for you to start, as well. And because again, this is much more interactive, you can make the sections of the platform bigger, smaller, just so if you want to make the presentation bigger, you can see it that way. We do encourage questions. So in the Q&A section of the portal, please ask questions throughout. We'll get to as many as we can at the end. We do also have a resources section-- that is where you can download the certificate of attendance. If you want to submit those to obtain CEUs after this webinar, you can download that certificate there. 
The handout is also in that resources section for download, as well as a couple of other items for more information if you're interested. We also have a survey that we would ask you to complete at the end; we like to know how we do. And so before any more time passes, I'm going to go ahead and pass it over to Kaycie, who will introduce our speaker and kick things off. So Kaycie, go ahead. (DESCRIPTION) The woman in the right video chat speaks. The slide changes. Title, Learning Objectives. Additional bulleted text is too small to read. (SPEECH) Thank you, Lisa. Good afternoon, everyone. My name is Kaycie LaSage, I am a Performance Outcomes manager with 3M. Today, I will be presenting with Carrie Wilmer, who is the CDI director for Intermountain Healthcare, formerly SCL Health. And I'll let Carrie introduce herself. (DESCRIPTION) The woman on the left smiles. The slide changes. Title, Meet Our Speaker. Two small photos of the women appear with biographical text, too small to read. (SPEECH) Good morning, good afternoon, everyone. I'm Carrie Wilmer. As Kaycie said, I am the director of the CDI program for legacy SCL Health. We have since merged with Intermountain Healthcare and have been newly named the Peaks Region. So looking forward to our time together today. Yes, and I had the pleasure of working with Carrie and her team in my previous role here at 3M as a performance advisor; I worked with them for about two years as their data coach. (DESCRIPTION) A title slide. Clinical Documentation Integrity, Legacy S.C.L. Health. (SPEECH) So we'll go ahead and get started with our first polling question. (DESCRIPTION) Slide change. A question and two answers appear. (SPEECH) What measures do you track to indicate physician documentation opportunity or success? (DESCRIPTION) The two answers. M.S. dash D.R.G. and Case Mix Index, C.M.I.. Quality Data, Length of stay, Patient Safety Indicators, Hospital Acquired Conditions. 
(SPEECH) Another minute-- seeing what the results are looking like. OK. (DESCRIPTION) Slide Change. Legacy S.C.L. Health. A map with colored sections and text too small to read. (SPEECH) Take it away, Carrie. OK, I didn't see those results come through, so we'll go ahead and just get ourselves started. We already did our introductions, but just to give you an idea of our footprint and get the story started here today: we consist of seven acute care facilities in Montana and Colorado. This graphic here is our original SCL Health footprint; of course, with the Intermountain merger, we extend much broader into the Western region of the United States. For our CDI program, we have 41 FTEs total, broken out into several different roles. We have 28 CDI specialists and 13 advanced roles, as listed there, as far as the makeup of our team. So (DESCRIPTION) Slide Change. S.C.L. Health C.D.I. Program History. Three boxes of text appear, getting progressively higher on a line graph, with time as the x-axis. (SPEECH) to start with, we're going to go back in time a few years and tell you how we got to where we are today in our agreement, or our relationship, with 3M on these reports, and what we have done to move our program forward. Back in 2013 is when we first started our centralization effort. At that point in time, we had CDI teams at each of the sites, except they reported up differently, with different training and different tools. And we brought all of that together with a system approach, to centralize that process and build it as one team, one system for SCL Health. At that point in time, our response rate was 85% and our agreement rate 86%-- not too bad, but we definitely were able to do more and move that needle. So at that point in time, at the beginning of our measuring, we were at about $875,000 monthly on our DRG shift approximation with our Medicare blended rates. 
So in the 2015-2017 time frame, we were able to expand. We were very fortunate to have pretty significant investment in the program. However, that came about as a result of external consulting assessments, and the messaging identified that there was opportunity being left on the table. Our executive leaders received that message, heard that message, and decided: we're going to invest, and we expect that CDI is going to rise to the occasion. And fortunately, we did. We were able to add new FTEs-- that was the beginning of our advanced CDI roles. Through the course of those couple of years, we really rose to the occasion, increasing to about a $1.8 million monthly average, with our record high being there in 2017. From there, 2018-2020, as would be expected with many established CDI programs, you get to a point where you plateau and don't see as much improvement any longer. Through that time period, we experienced some leadership turnover. We were still having very successful response rates and agreement rates, as far as our physician engagement across all of our sites, but our financials dipped a little bit there to a $1.4 million monthly average. (DESCRIPTION) Slide, Data as a Catalyst, Breaking through the Plateau. Two text boxes on a similar graph, time on the x-axis. (SPEECH) So we knew that we either needed to validate that our plateau was true to form and that there wasn't any more opportunity to be had-- which was suspect-- or we were going to need to find a new strategy, another way to revitalize and boost momentum, and identify what opportunity still remained for the program. At that time, our physician education strategy was very much built on our CDI query metrics: what were our top questions, et cetera. 
But we came to the conclusion that we can't boil the ocean-- we were not being effective trying to disseminate all education to all specialties and expecting that that was going to move the needle any longer. So we needed data, and that was our challenge. We did not have line of sight to identify very easily where that opportunity would be found, how much was there, and, as far as best utilization of resources, who we would need to partner with first. Which physicians, which groups, could catapult and serve as a catalyst to move the program forward? So we decided to engage with 3M and begin leveraging the performance data monitoring reports to focus in, use that data to restrategize, and continue to push the messaging forward. So I'll turn it over to Kaycie. (DESCRIPTION) Slide, Performance Data Monitoring. Bulleted text beside a graphic of bright light points connected in a web in front of a cityscape. (SPEECH) Thanks, Carrie. So the reports that Carrie and her team used are in 3M's online cloud-based tool, called Performance Data Monitoring, or PDM. The reports in PDM are based on submitted inpatient claims data, looking at the total inpatient population, not just what CDI reviewed. This allows for a more holistic view of the inpatients, helps to understand the effectiveness of CDI education of providers, and helps to identify gaps. (DESCRIPTION) Slide. Two bulleted text paragraphs appear, labeled Physician Reports and C.D.I. Performance Reports. (SPEECH) The goal of utilizing the PDM reports is to gain insight into key metrics and performance improvement opportunities against baseline and best practices. The physician reports get down to the cases attributed to a particular physician-- for example, Dr. Smith, orthopedic surgeon: what's the MCC/CC capture on Dr. Smith's cases? What SOI/ROM subclasses do Dr. Smith's cases typically fall into? And then, compare that to the other providers in Dr. 
Smith's practice group, the other orthopedic surgeons at the facility, and to the national norm for ortho. The CDI report section has both financial and quality reports. The compare report is a financial report that's based on MS-DRG. For example, you could compare the facility's performance in Q1 2021 to the performance in Q1 2022, and also against national norms, including the metrics that are bulleted here on the slide. Severity and mortality reports are based on APR-DRG, and those compare the facility's performance against baseline and state peer groups. (DESCRIPTION) Slide, Role of the Performance Advisor. A bulleted list of text labeled Coaching with performance data advisor. Six button graphics appear beside. (SPEECH) In my role as a performance advisor, working with Carrie and her team, like I mentioned before, I was there to be their data coach. I worked extensively with them to help them understand their data in PDM and how to effectively utilize PDM as a tool. There is a ton of data available in PDM, and like Carrie mentioned before, you can't boil the ocean. So they needed to understand where to focus their efforts for improvement, and I helped them identify focus areas that they then investigated further. (DESCRIPTION) Title slide, The Data-Driven, Physician-Focused, Four-Step Approach. (SPEECH) And now, we're on to our next polling question. (DESCRIPTION) A question appears with three answers. (SPEECH) How do you identify opportunities for physicians' CDI education? And I'll hang out here a minute before we go to the results. (DESCRIPTION) The three answers. Use C.D.I. query trends. Use claims level data. General C.D.I. industry trends. A fourth option appears to be below the bottom of the screen. (SPEECH) Let's see-- and it's submitting. Oh, there are some people submitting. Oh, 2004, I apologize. In the interest of time, we'll go over-- so a combination of all seems to be the trend. 
(DESCRIPTION) Slide, Data Analysis and Opportunity Identification. Step One, with a numbered list of text. A graphic of two people standing before an oversized computer monitor, twice as big as they are. (SPEECH) All right, turn it back over to you, Carrie. Yep, and that result really isn't too surprising, as far as the number of different data metrics that can come together to tell the CDI story. So we have four steps to go through, as far as how we were able to slice and dice, pull this together, and present. We're hoping that this will be helpful to simplify the process as you work through the data, whichever form you may be using. Our first step is, obviously, we've got to analyze the data; we've got to identify that opportunity. Being a multiple-site system, we had to tackle this from a couple of different angles. We first went through and looked at each of the individual care sites, to look at their patient mix, the population, the services offered, and see what the top opportunity truly was. From there, though, we wanted to be systematic again, as best we could, to drive that education. So we looked for and identified the common themes across all of the care sites from a system perspective. From there, we took all of those data results, those top DRGs, took a sampling, and we knew that we needed to make sure we validated the data. Some of the DRGs that rose to the top, as far as the financial opportunity based upon MCC/CC capture, maybe didn't truly manifest into the opportunity that we would expect. So we needed to partner together in order to first bring forward and identify what those opportunities were. I would also say that we were able to then pair these findings with our prioritization tool within the 3M 360 Encompass as well, and I'll be touching on that a bit more in the presentation. So Kaycie, why don't you talk through some of these screenshots of the data? 
(DESCRIPTION) Slide, Service Line and D.R.G. Level Opportunity. Two bar graphs are placed above a table full of text data. (SPEECH) So this is a screenshot out of PDM for one of Carrie's facilities. The graphs at the top are broken out by the MS-DRG service line, so we've got the medical opportunity and the surgical opportunity. And the estimated, or potential, revenue here is based on MCC/CC capture opportunity. Down below, we get a little bit more granular. Here, we're looking at the triplet DRGs, and looking at the full triplet. So for that very first row, the major small and large bowel procedures, we're comparing how many times the facility was in either 329 or 330, compared to how many times they were in 331. The 63 number, the little hyperlink, is the count of cases in 329 or 330. Then, you've got the 33 cases in DRG 331, the total cases, and the actual capture rate. The benchmark performance for this particular example happens to be MEDPAR. Then we've got the capture rate variance. And that reimbursement differential is saying that, if the facility were to capture the MCC or CC, just for this DRG cluster, at the MEDPAR 80th percentile performance, it could be a potential additional $286,000. So from here, what Carrie and her team would do is drill into those 33 cases that are in DRG 331 and get to that encounter-level detail, to then pull up the case in the EHR or in 360 to see what was really going on with those cases that did not have an MCC or a CC. (DESCRIPTION) Slide change. A large table of itemized text data in 10 columns. (SPEECH) Another screenshot from PDM-- this is now looking at the mortality data. So now, we're looking at the APR-DRG service line, and we've got the information from MEDPAR for the state of Colorado, the facilities in Colorado. So we've got all the information on the total cases from MEDPAR, and the actual deaths, and the mortality rate. And then we get into the facility-specific information. 
So we've got the total cases and the actual deaths in each one of the APR service lines, and then we've got our mortality rate information. The service lines in red font in the two right-hand columns, with the little asterisk, are the ones that have an unfavorable mortality variance. Further down the list, you can see the service lines that had a favorable mortality variance. So looking at orthopedics, they had nine deaths when they were only expected to have 5.3. (DESCRIPTION) Slide. A.P.R. D.R.G. Level Mortality Opportunity. A large table of text data in several columns. (SPEECH) On our next slide, we get into the individual APR-DRGs in that mortality data to then get to your drill-down. Here, we're looking at the APR service line of medicine, and we've got APR-DRG 53 and 242. Again, we've got our breakdown of the cases from the MEDPAR data in the different ROM subclasses, and then we get into the facility-specific information. You can see here, for both of these APRs, the one death in each APR occurred in subclass four. So each of these APRs had an unfavorable mortality variance, but the actual death expired where we would hope that they expire. So what Carrie and her team would do from here is actually go into the cases that discharged alive to see if any of those cases in a one, two, or three could have moved to a higher subclass. (DESCRIPTION) Slide, Data Analysis Summary. Step 1. Bulleted text beside a graphic of hands pointing at a pad of paper. (SPEECH) Turn it back over to you, Carrie. Great, so in summary, as we looked at the opportunity, we really focused on it from two angles: the financial opportunity, and then the mortality, as we just showed in those screenshots. 
So we were able to look at it from that care site level, from a service line level, and then to be able to have the power to drill down deeply into the detail of the DRGs specifically, and monitor what is happening, and to pull out those case examples for review to validate the data. All very valuable insights to be able to set us up for our second step. (DESCRIPTION) The slide changes. (SPEECH) We went too far, apologies. (DESCRIPTION) Another slide change, then it returns to the previous slide. Identify the Right Audience. Four graphic boxes arranged in a square. A dot in the center of them with arrows pointing to each space between the boxes. A paragraph of text in a list beside. (SPEECH) Oh, we just have it flip-flopped. So the second step, my apologies, is identifying the right audience. And so this here is the prioritization matrix that we used to be able to bring the opportunity forward to our care site leadership, our CMOs, who have really been identified as serving in place of a physician advisor program. So what we did was take each of the opportunities, sized and scaled for each site, but also kind of map it, think it through, in terms of a continuum of engagement-- who would be most likely and most successful to meet with to be able to have adoption, to be able to have an engaged conversation about the opportunities, and what we may be able to do to partner with that particular group to move the needle on the data. So in this particular example, from one of the sites-- cardiovascular surgery and neurosurgery, of course, had a very high opportunity, being surgical DRGs. And general surgery, also, very high opportunity. However, due to the contracted relationship and some of the potential politics behind the scenes, it was determined that we weren't going to spend any time there. It's going to be an uphill battle. We need to just leave that be, let's focus where we're going to be successful.
And so we were able to really maximize, then, the opportunity with the cardiovascular and neurosurgery groups, and the hospitalists. Orthopedics is up there in this particular example, though. They were low engagement, and low opportunity for this site. So we really had no conversation or need to explore that angle. So knowing that we all are so stretched with resources, bandwidth, and even being respectful to physician workflow, workload, and all of the demands currently, we really wanted to make sure that we were prioritizing which audiences we were seeking out. So some of the criteria to consider here would be to think about the group size, think about their leadership structure, the employed model, or contracted-- are they private physicians, surgeons, providing services in the hospital? We also had conversations about the mid-levels, and in some cases, we did a sequenced meeting where we met with group leadership, then we met with the group, then we met with the mid-levels, because that group really expected that their PAs and NPs would be carrying much of the load of the documentation. We also incorporated our CDI query data into the mix in all of these decisions. So (DESCRIPTION) Slide Change. Audience Identification and Customization. Step Two. A bulleted list of text. (SPEECH) we took that prioritization matrix, and that was really a driver of the conversations that I had with the CMOs. And so with our care site leaders, I needed their expertise and their guidance to know the personalities of whom I would be meeting with. I also needed their backup to join each of these conversations, and be able to continue to support the importance of why we were going to need to be having these conversations. It was also very eye-opening to be able to talk through specific initiatives for each of the care sites that may be different than the data that I had available, to be able to bring my CDI opportunity forward.
But length of stay was one of those measures that is also directly impacted by the documentation, and we were able to then marry those conversations together, join the initiatives, and get further bang for the buck, as far as the engagement with that documentation. So we talked through the data, we talked through the challenges of the groups, we talked through which groups would be most optimal. We talked about even the structure and makeup of the content of the presentation itself, which I'll get into more with step three. But it became abundantly clear, as we set out on mapping the initial logistics of these physician education meetings, that we needed to remain mindful of culture and engagement and ensure we were doing everything we could to support as much buy-in as possible, and make it an effective use of everyone's time for the meetings to come. (DESCRIPTION) Slide Change. Effective Communication, Step 2. Bulleted text beside a graphic of a person holding a tablet with a pie chart on it. (SPEECH) So step two: know your audience. Tailor your messaging. Focus that messaging on what is going to be most valuable for that particular group. It was interesting, too, the feedback from the CMOs on how much data to include in a presentation or not. So although we built a templated presentation to be able to deliver readily for any of these meetings that would come up, we definitely customized and altered each one. And I had some meetings where I had no data whatsoever-- we really emphasized case examples and more of the qualitative aspects of the documentation and what we found. And I had other presentations where the CMO wanted full data, unblinded, to make sure that those physicians could see where they fell against their peer group as transparently as possible. So there is such importance, though, in being sensitive and ensuring you know the audience you're going to be coming in to speak with. So we have another polling question.
(DESCRIPTION) Slide change. Who primarily delivers C.D.I. education to physicians at your organization. The options. C.D.I. Specialists. C.D.I. Managers or Directors. C.D.I. Educators. A fourth option is cut off the bottom of the screen. (SPEECH) Excuse me, I'm going to see how our results look. CDI specialists-- OK. (DESCRIPTION) Percentages appear for each option. Specialists at 49.4%. Managers or Directors, as well as Educators, each at 17.4%. (SPEECH) Not too surprising with the results there, especially as there are so many different makeups of CDI programs, in terms of the amount of bandwidth that any individuals may have across the team. I'm a little surprised to see that the physician advisor score was a little lower, as I know that is one very successful strategy for being able to disseminate, and peer to peer, have these discussions and education. But also good and validating to show that really, any of us can be delivering these messages. (DESCRIPTION) Slide change. Presentation Development and Delivery, Step 3. A list of bulleted text. (SPEECH) So for step three, this is the actual presentation, itself. So I have a number of screenshots, just as samples, to show how we did it, how we communicated the messaging. And a few different screenshots, too, again, back to the PDM data that was driving much of the content here. So first and foremost, we leveraged the SBAR framework. And I'll touch on that more in this next slide to come, but throughout the course of the content, I'm sure we all have versions of very similar presentations. But we've got to make sure we're outlining what CDI is, why does CDI matter, what is that opportunity, and where are you going to find it? What do we need from you, as a result? We've got to prove it. And so that's where the case examples come back into play.
We validated the data by doing those specific case reviews, so we had an abundance of examples right there at our fingertips to be able to pull into these presentations, to be able to show this is a case, this is what we saw, this is what we found, and this is where and how it could have potentially been a different data set at the end with that final DRG. And then at the end, we of course need to always be clear on what the ask is, and what we need from each of these providers engaging with us. (DESCRIPTION) Slide, S.B.A.R. Framework. A colored box of bulleted text beside a list of text labeled C.D.I. opportunities for St. Mary's. (SPEECH) So back to that SBAR-- it really is setting up the framework for the need and why we're having the conversation. It may be widely known across the audience here today, but it is definitely a key tool from a clinical bedside nursing perspective for being able to succinctly communicate with the physician about changes with the patient at hand, and needing to potentially report vital signs, report that lab value, and change course of treatment as a result. So we literally wrote out an SBAR statement to start each of these conversations, to be able to highlight what the situation was: we have data showing we have opportunity. Then the background: making sure that they know that they have a CDI team of registered nurses that are reviewing this documentation. That we have done that assessment to identify the opportunity, and to share what the conclusions have been and provide those recommendations. So out of the gate, give it all to them in a very, very brief skeleton of what we are here to talk about today. And then, get into the nuts and bolts of the details. (DESCRIPTION) Slide. Slide Example, Physician Education. Two boxes appear, each with a different colored graphic in it, displaying data. (SPEECH) So when it came to what is CDI? Why does CDI matter?
I've seen many different depictions of CDI kind of at the center of the wheel, and how documentation and the query effort support so many initiatives. So there are obviously so many more than even what we have listed in our slide, but this was the graphic that was most liked, in terms of really just showing most simply that, with a bit of attention and effort on that documentation, it really kind of kills two birds with one stone-- you can have multifactor effects as a result. Then we would get into making sure that they knew that it needed to be their documentation, that the diagnoses made needed to meet four criteria to be captured in the final code set-- the treatment, the monitoring, and evaluation, et cetera-- and that we then get that group of codes. We get the DRG, and it is those DRGs that are then driving all of the data measures included here. (DESCRIPTION) Slide. The graphics are replaced with tables and charts full of text. (SPEECH) So we would also give an example, just a high level example, not with supporting documentation yet at this point, but just to be able to show a DRG shift. And as I had stated, one of those themes that really came out across many of our sites was the emphasis on length of stay. So we were able to highlight the shifts to DRG, and how the documentation would buy them more time to be able to take care of their patients. So out of the gates, just in setting up the groundwork: this is what we're looking at, this is what we're asking for. We need that specificity. We need to be able to capture the codes appropriate for that patient. We would then shift into some mortality conversation. And although it's a bit complicated, as Kaycie already talked us through, the APR-DRG, we would be able to really focus that message to be able to show the twofold CDI approach to mortality. We want to make sure that those cases that expire are as high as possible with the risk of mortality.
But it also is equally valuable to look at the generalized population within that APR that are falling to the lower levels-- one, two, three, and four-- to be able to explain our context, our approach, and thought process to the documentation. (DESCRIPTION) Slide. The charts are replaced with additional charts, formatted in a similar fashion. (SPEECH) Through the PDM tool, we would have the opportunity to be able to get a physician's listing of DRGs, and their MCC/CC capture variance. So just like one of those initial screenshots that Kaycie spoke through at the care site level, to be able to drill down at the physician level and see all of their claims, whether queried or not, CDI reviewed or not, but just to be able to see what their capture rate was, and what their variance was compared to the benchmark. And then, to be able to have those projected financial dollars to further quantify the opportunity for the physicians. So in some cases, we included this. In some cases, we did not. We really leaned on the guidance and advice from the CMOs in knowing the personalities, again, as I said earlier. So the second piece of data here is actually a newer data point that the PDM tool was able to provide. But it was fascinating how many times I was able to pull this out to further prove the point that we may have some opportunity to move the needle. So it's very small, I realize, and blinded, blanked out with the physician names, but what that small table is really depicting are four different surgeons with their case volume. And then, it has how their volume broke out for each of the severity levels-- one, two, three, and four-- with what their average length of stay was for one, two, three, and four. So for the one that's circled-- and hopefully you can zoom in on it, or when you get the slide deck, you can look more closely-- what we saw was that the length of stay was much higher for SOI three compared to four.
So that kind of is counter-intuitive to what we would think as far as the amount of resources there. We wouldn't want patients staying longer at lower levels. We want to be able to maximize that. (DESCRIPTION) Slide. A bar graph on the left, and on the right, several smaller bar graphs with data. (SPEECH) So additionally, this is yet one more potential data representation, and we would again not use all data points for all presentations. We picked which ones were most fitting and most convincing for each of the conversations that were held. So in this one, what you're seeing is a graphic to be able to compare the physician's breakout, again, of that severity of illness-- one, two, three, and four-- that's the larger graph on the left side. And then, to be able to compare how their percentage of SOI capture compared to their specialty, compared to their physician group, and then compared to the national norm-- those are the three graphs in the middle there. So in this example, this was a urologist at a small site; he was his own specialty and he was his own group. So both of those did not really bring us much value, because it's the same data sets. But what was fascinating was how that graph looks for his performance compared to that national norm. So you can see the lightest peach color there is very low at that national norm. His is very high. And so he does not mimic the national trend in terms of the severity and the amount of secondary diagnoses being captured on his cases. So this was a very compelling graphic to be able to share. (DESCRIPTION) A paragraph of bulleted text beside a table of several smaller bulleted paragraphs. (SPEECH) So I mentioned before, all of those case reviews bringing in those examples-- I would recommend you don't have a conversation without examples at the ready, to be able to talk through and give more of the context of how the data applies in a real life example.
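The length-of-stay-by-severity check described a couple of paragraphs back-- average LOS that is higher at SOI three than at SOI four-- can be sketched as a simple screen over physician-level data. The surgeon names and LOS values below are hypothetical.

```python
# Sketch: flag surgeons whose average LOS does not increase with severity
# of illness. All names and numbers are hypothetical illustrations.

def flag_los_inversions(avg_los_by_soi):
    """Return SOI levels whose average LOS exceeds the next-higher level's."""
    return [soi for soi in range(1, len(avg_los_by_soi))
            if avg_los_by_soi[soi - 1] > avg_los_by_soi[soi]]

surgeons = {
    "Surgeon A": [2.1, 3.4, 5.0, 8.2],   # average LOS in days for SOI 1..4
    "Surgeon B": [2.3, 3.1, 9.6, 6.4],   # SOI 3 stays longer than SOI 4
}

for name, los in surgeons.items():
    inversions = flag_los_inversions(los)
    if inversions:
        # e.g. [3] means SOI level 3 has a longer average stay than SOI 4
        print(f"{name}: review SOI level(s) {inversions}")
```

A flagged inversion is exactly the pattern called out on the slide: patients at a lower severity level staying longer than sicker ones, which suggests under-documented severity rather than extra resource use.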
I would also recommend that any examples you have are as timely as possible. As we all know, we probably get that pushback: oh, it's not my case, or oh, that was six months ago-- I've changed my template already. I've already fixed this problem. So we heard all sorts of different responses and rebuttals to what we were seeing. We tried to just stay on track to be able to get the concepts across. But have real examples pertinent to the audience that you are presenting to, and as real time as possible. That last graphic there is a very oversimplified listing of many of the general themes that, of course, are red flags for CDI specialists as we review our charts. But this was actually very helpful to have as a kind of synopsis, a one-pager that focused the talk on what some of those key diagnoses are. We know that the physicians aren't going to remember, they're not going to be able to keep this front of mind at all times, but this actually was a well-appreciated summary that we were able to provide. (DESCRIPTION) Slide. Initial Recommendations to Physicians, with an outline of bulleted text. (SPEECH) And then, last but not least, we make sure that we include those recommendations-- what is the ask? What is it that we need from them? So I think the slide may still be a little wordy, but it's ultimately summarizing the same message. And CDI talking points are often very similar, but we all know we need that comprehensive H&P, we need the progression of the documentation through the progress notes. And we need that final statement and discharge summary to wrap up that case with all of the details therein, and make sure that we are solid for each of the codes captured. And then we really emphasized the CDI query as a tool, and if necessary, that we are there to support and to be a layer and a safety net to help them get the documentation that they need. So a lot of different angles there-- slide decks similar, I'm sure, to many that you have out there already.
(DESCRIPTION) Slide. Monitor Performance and Communicate Progress. Two lists of text. On the left, bulleted list of steps. On the right, a text bubble, What Does Success Look Like, with text below. (SPEECH) But this was our approach, this is how we were able to incorporate that data. So as we've been moving through these four steps, we've made it to that last one. But just to recap: that first step was, we've got to be able to get our data analyzed and identify that opportunity. Then, we prioritized the message and the audiences that we selected to be presenting to. And then, built the customized presentation for them. So in order to then wrap up this process, we needed to make sure that we evaluated the effectiveness of the approach, and of the education. We needed to identify KPIs to define what success was going to look like for us. And this may look different across organizations and across sites, but some to consider there on the slide would be fewer queries issued, potentially increasing your CMI, increasing the severity of illness, increasing the risk of mortality, or decreasing that mortality index. So it's important to be able to track and monitor the effectiveness, but also to be able to give the feedback. So one of those phrases that I had heard candidly was that CDI education seemed so random from our previous approaches. And that it was just a flavor of the month, and something that was in front of their minds, and then they never heard anything more about it again until that flavor came up again. So these meetings really opened the door to be able to have ongoing dialogue, and to be able to continue to keep that discussion and commentary going. So to be able to feed back the progress, or lack thereof-- if nothing was really changing, then we needed to regroup and be able to further emphasize or re-educate, revisit any of the themes that we were continuing to find in the data, to continue to move that needle.
So we wanted to maintain visibility of an ongoing initiative, not just a meeting where they weren't going to hear from us for a while. (DESCRIPTION) Slide. The Four Step Approach. A graphic of a circle cut into quadrants, with labeled text. (SPEECH) So those are the four steps. It's really very simple. Again, you've got to be able to analyze the data, select your audience, deliver that presentation, and then track the outcomes and monitor that performance. (DESCRIPTION) Title Slide. Key Outcomes and Lessons Learned. (SPEECH) So for key outcomes and lessons learned, I'll go through these quickly, just to demonstrate how and why we were able to use the data and validate this approach. (DESCRIPTION) Slide. A pinwheel. Text in a circle wheel surrounded by six circles on spokes, labeled with graphics and text. (SPEECH) So the next slides really break out each of these areas of the circle. It's kind of a more holistic view of the various angles where we were able to see improvements, and tremendous steps forward with the CDI program this past year. (DESCRIPTION) Slide. A Deeper Look, Vascular Surgery. Two tables of text data, labeled At a Single Facility and Across the System. (SPEECH) So first, we're going to take a look through a vascular surgery lens. So at one of our facilities, we met with the leadership of the vascular team and established weekly rounding-- which we had not had in place before. Through that leader's advocacy and support and understanding of the importance of the data, and the documentation driving that data, he was able to get us embedded into their weekly rounding to be able to talk through live cases. So after one quarter, we saw a real quick win on our severity of illness and risk of mortality scores, as indicated there on the slide. And then, we also saw year over year improvement as we continued that engagement with that group-- their severity index improved, CMI variance improved.
And the opportunity per case, per physician, in that group decreased by about $2,000 on average for each case. So although those negatives are still there on those percentages-- meaning that we are still under the national norm-- some significant headway was made through that engagement. From a system perspective, we were able to incorporate the data again into the prioritization functionality in our 3M product, 3M 360 Encompass. So for DRGs 219-221, taking the average CMI shift that occurred within just that grouping, applying it to the volumes within that grouping, and using our Medicare blended rates, we approximated that we gained about $388,000 due to increased CC and MCC capture within just that one DRG grouping for the year. (DESCRIPTION) Slide. A Deeper Look, Orthopedic Surgery. Two additional tables, similar to the previous slide, but with different data. (SPEECH) The second example here was from an orthopedic surgery lens. So this was an interesting engagement, in that I was so impressed by the level of engagement and championing that this one particular surgeon had in acknowledging and validating the importance of our data and all of the information that I had brought forward to him. It appeared that he had a potential opportunity of greater than $1.5 million just for his spine cases. So those of us across the audience that do the chart reviews know that those spine cases can be very difficult when it comes to getting that opportunity captured. It also is very difficult for an orthopedic surgeon to feel confident and aware of all of the medical criteria that go into making many of those medical diagnoses for their patients.
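The revenue approximation described above-- the average CMI shift within a DRG grouping, times case volume, times the Medicare blended rate-- can be sketched as follows. The shift, volume, and rate below are hypothetical placeholders, not the actual figures behind the $388,000 estimate.

```python
# Sketch of a CMI-shift revenue estimate for one DRG grouping.
# All input values are hypothetical illustrations.

def cmi_revenue_impact(cmi_shift, case_volume, blended_rate):
    """Approximate dollars gained from a CMI shift across a DRG grouping."""
    return cmi_shift * case_volume * blended_rate

# Hypothetical inputs for a grouping like DRGs 219-221:
gained = cmi_revenue_impact(cmi_shift=0.5, case_volume=120, blended_rate=6_500)
print(f"approximate gain: ${gained:,.0f}")
```

As the speakers caution later, this kind of projection is a directional estimate, not actualized dollars.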
So due to the size of the opportunity and the level of engagement, there was a business case brought forward-- and ultimately approved-- to bring on a nurse practitioner who would be more medically trained, to be able to support this group in their documentation and in covering those patients, so that he could spend more of his time in the OR where he needed to be. But we could also ensure that we had the expertise that we needed to be able to look at the medical diagnoses for those cases. So additionally, beyond that, some of the more qualitative conversations that were spurred from many of these meetings across the system have furthered our own internal evaluation of the value of potential documentation assistance tools-- computer-assisted physician documentation. Specifically, we've seen more engagement and collaboration than we have ever had before, as far as invitations to be at the table, bringing our data in conjunction with many of the data sets that our quality team is using, from care management, et cetera, and being able to really collaborate and move the needle moving forward. (DESCRIPTION) Slide. A Deeper Look, C.D.I. Program. Four small columns of bulleted text. (SPEECH) So more of our outcomes-- you probably saw it on that wheel slide, a few slides ago. But we did achieve a 36% increase in our financial totals for the year. This was in conjunction with, again, the prioritization functionality: being able to identify which DRGs had the biggest room for movement with that CC/MCC capture, we needed to make sure that we were getting CDI coverage on those DRGs. So I have a couple of them listed there, again, with those approximations of what was achieved year over year due to increased CC/MCC capture for those DRG groupings.
In a time where we all need to be good stewards of our resources across all of our organizations in health care, it was a tremendous investment that we were able to achieve, as well, when it was decided that we would further expand the program this past year. We had five new FTEs approved for the CDI program in the fall of last year. Initially, those roles were slated to be an additional educator, an auditor lead, and two new CDS positions. We did shift that educator into an additional auditor, but we were able to get those posted and filled and trained, and we are working hard. From a CDI performance perspective, the query rate increased from 31% to 37% on average. So all of these data insights have really helped us be able to build our internal CDI education in partnership with each of these physician education engagements. So we are able to know what those opportunities are, to be able to adjust, to make sure that we're covering the right cases, that we're asking the right questions, and that we're providing the tools and resources needed to keep the success of the CDI program moving with the opportunities for the claims. (DESCRIPTION) Slide, Final Thoughts. Two columns of bulleted text. (SPEECH) So that brings us to the end here and some of our final thoughts. Some of the challenges and lessons learned are important to call out-- providing sustained feedback was impacted by reporting cadence. But I will own that that was our own internal decision. It's very difficult to know what that sweet spot is going to be, because there is such an abundance of data through these reports. So if you are getting the reports too frequently, you may not be able to maximize each of the shifts in opportunity. But at the same time, we really have been needing to get that feedback as timely as possible to keep the engagement, and keep that conversation going, as I said.
So that was just one internal finding that we have experienced and had many conversations around. Be careful with the data projections. These are not actualized dollars; these are not actualized results where you should expect to get the whole amount. The methodology is good, and it's what we have to be able to benchmark against our peers. But every patient, and every patient population at every site, is different. And there are those nuances where you've got to be able to do that case level review and validate that what the data is telling you about opportunity is or is not there, and be able to adjust accordingly. As a trend, a kind of compass to point us in the right direction-- invaluable. But just not actualized to the exact dollars. Be careful with data getting into the wrong hands, potentially. Some of these data sets are complicated, and it takes some time to explain what the audience might be receiving. So be careful to ensure that you're either able to explain it, or to simplify that message as best you can, and not have incorrect interpretations and assumptions made, especially when it comes to some of those actualized dollars perceptions that might be out there. Limitations with physician attribution-- we definitely ran into some of this, as far as being able to identify, for each provider, which group they need to fall under within the data mix. And I'm going to pause and shift it over to Kaycie; I think she has some more to say on that particular point. Thanks, Carrie. Yes, within PDM, there is a limitation with physician attribution. The physician attributed to a case is the attending of record at discharge. So we're not able to say that a case is attributed to a particular surgeon if they were not listed as the attending. We know that can be problematic, especially in the situation where the hospitalist is the attending-- we know that they didn't perform the surgery on a surgical DRG-- but it is a limitation within the system.
So with the last bullet, the impact of COVID-19 on benchmarking-- we are all well aware of what COVID did to us from a benchmarking perspective. When we look back to our 2020 data, obviously, 2019 didn't have COVID in there. So from a CMI perspective, a lot of facilities saw an increase, comparing year over year, when we had 2020 data with all the COVID in it, because we had this high rate of medical DRGs, and the only surgeries that could be performed were emergent, so they were super high weighted. And then we get to 2021, where we may have had a little bit more of a normal year. And from a CMI perspective, comparing back to 2020, things didn't look so great. Risk of mortality was another one where we saw COVID have a huge impact, in that there was no COVID data in the benchmarking information. So all of the additional unexpected deaths in the pulmonary population really hurt a lot of facilities. So one of the things that we've done at 3M, in the PDM data-- we know that MEDPAR and HPOP are a handful of years behind. But one of the things that we came up with is an internal benchmark called CCB, which stands for client comparative benchmarks. And those are based on participating 3M clients that are in this pool of data, so that our PDM customers can-- working with their performance advisor-- select which CCB benchmark works best for them. Are they an academic facility? Are they a smaller, rural facility? Pick the CCB that fits with their facility, and it gives us a more real-time benchmark to look at: how are other 3M customers or clients that look like your facility doing? And we're able to do that in a more real-time, more current fashion. I'll turn it back over to you, Carrie. Great. So I feel a bit like a broken record at this point, but our criteria for success emphasize much of what we focused on here today. So do ensure accurate physician demographic data.
So Kaycie talked about attribution, as far as the case and who it's assigned to. But beyond that, too, there is the ability to identify and make sure that you are correct about which physician group and which specialty those physicians are aligned with, because that then, in turn, impacts the data that you're able to see from that internal comparison-- some of those graphics shared earlier in the presentation. Always leverage your case examples, the real examples; show those documentation opportunities and make those as real time as possible, and as applicable as possible to your audience. Garner physician and care site leadership support and participation. So whether that is through your physician advisor team, through your leadership at your site specifically, or even being able to gain the leadership of that specific physician group-- invaluable, to be able to have that backing and even just one more angle of perspective to share with the group, and to be able to help answer the questions that undoubtedly would be raised through each of these education sessions. Partner your data with your prioritization functionality for your CDI team, if you have it, to be able to make sure you're getting the coverage onto the DRGs that have the opportunity. Do your due diligence to make sure that you are present and reviewing and able to catch that opportunity, especially as those physicians are trying so diligently, so hard, to get the documentation in there. Tailor your data and presentation to each audience-- not every data point is going to necessarily be of interest or even be that compelling. So pick and choose, and make sure that what you're pulling together is going to be well received and will provide the appropriate what's-in-it-for-me type strategy and hook, to be able to get their buy-in through that conversation. And then, track your results. So that is it in a nutshell.
And I know there have been a few questions coming through, but I'll turn it back over. (DESCRIPTION) Title Slide. Q and A. (SPEECH) Great, thank you both so much. I mean, just a wealth of information, and such a wonderful program that you all have set up. We don't have a ton of time, so I am going to ask one question for you. How often do you use the PDM data to update your prioritization? Great question. We are using it on a quarterly basis. So it's, again, difficult to get that feedback in real time, to the cadence of reports, like I was talking about. But in terms of prioritization, especially, being on a quarterly basis gives us enough time to potentially show any shifts-- if there have been changes to the DRGs, if maybe a different grouping has risen to the top as a new opportunity that we need to focus in on, or if others that we have been focused in on have actually dropped and don't need to be a focused DRG any longer. We want to be very judicious about how many DRGs we're identifying to be focused DRGs. You can't have all of them be focused, or it wipes out the point of being able to flag those as higher. So great question, and we are reevaluating and assessing on a quarterly basis, in conjunction with these PDM reports. So I think a good follow-up question actually would be, who is reviewing that PDM information and then developing the action plan during those quarterly reviews? So it's a combination, a collaborative effort. As I mentioned at the beginning, we have a number of advanced CDI roles-- we're very fortunate to have built this education support team that we have, with leads, CDI auditors, managers, myself, educator roles. So we actually all share the wealth a bit. I do have a CDI auditor who is designated for much of the data and has become expert in terms of the reports and getting in there and being able to navigate most efficiently. So she and I partner together in terms of pulling out that PDM opportunity and data.
But then when we get into the case level reviews, we really assess to see who has bandwidth at what point in time, and be able to get those cases looked at to validate the data together. In terms of the action plans, it has been a combination of myself and our managers bringing those findings to our care site leadership meetings with our CMOs, to be able to then further prioritize and determine whom we would want to be meeting with. So I hope that answers the question sufficiently. (DESCRIPTION) Title Slide. That's a wrap! (SPEECH) Yeah, I think that was perfect. Again, cannot thank you both enough for the great information that you provided today. (DESCRIPTION) Slide. Consulting and Outsourced Services Content. Three graphics of photos and text, too small to read. (SPEECH) If you are interested in learning more, that link that is in the resources section didn't work, so I posted it in the Q&A section, where you can get some of the services that we do provide with performance data monitoring. We'll be putting this recording on our website soon, so if you do want to listen in again, that will be available. The handout is available in the resources section, as well as the ability to register for our next webinar that's coming up in August. So if you do want to learn more about HCCs, you can register for that in the resources section. And again, we always appreciate your feedback. So if you have the opportunity to complete the survey, we certainly appreciate that, as well. Again, Carrie and Kaycie, thank you both so much for the information today. And we really appreciate your time, as well as everybody joining today. So thank you all again, and we look forward to seeing you in August. (DESCRIPTION) Slide. Thank you.
In this presentation, attendees will hear how legacy SCL Health, now the Peaks Region of Intermountain Healthcare, leveraged claims data to conduct in-depth CDI performance reporting and analysis. Participants will learn how legacy SCL Health created a targeted strategy to engage and educate physicians in a four-step, data-driven approach focused on key outcomes, early wins, expansion to all payers and increased commitment from leadership.
(DESCRIPTION) A slideshow. Slide, New year, new webinar platform! A woman appears on a video call in the top left corner of the slide (SPEECH) Well hello and good afternoon and thank you for joining the first CDI innovation webinar of 2022. (DESCRIPTION) Slide, Housekeeping, a bullet point list (SPEECH) We are excited to have Tami Gomez here with us today. Before we get started, I just wanted to go ahead and go over some housekeeping items. If you were with us last year you may notice that we are using a new webinar platform. We are excited for this new enhanced user experience so before we kick things off, I just wanted to go over some of the new features and layout. There is an engagement toolbar at the bottom of your screen that you can use for the different sections of the portal. You also have the ability to move and minimize those different sections. Because this is a web-based platform, there is not a dial-in number to participate by phone. If you are having audio issues, please check your speaker settings, clear your cache, and refresh your browser. If you do need closed captioning, we do offer that within the live stream section; you can click on that to enable the feature. As always, we encourage questions throughout the webinar. We have a lot to get through today, so we will personally follow up after, but please add all of those questions to the Q&A box below. We do provide a certificate of attendance that you can submit to obtain credits, as well as the handouts for the webinar. Those can both be found in the resources section for download. If you would like to learn more about our products and solutions, you can click on the Learn More button under the slides. And as always, we appreciate your feedback, so during the webinar, there is the ability to complete the survey in the portal, or it will launch at the end of the webinar.
But if you do ever have a question, again, with those engagement tools at the bottom of your screen, there is the ability to contact us. (DESCRIPTION) Slide, 3 M C D I Innovation Webinar Series, February 2022 (SPEECH) All right, so before we get started, I do just want to introduce today's speaker. Again, we have Tami Gomez as she goes over a global approach to engaging physicians and CDI operations with an AI-powered CDI workflow. Tami is an AHIMA-approved ICD-10 trainer and the director of coding and CDI services at UC Davis. UC Davis has been named a coding and CDI gold standard program for data analytics by Vizient and was awarded for their diversity in 2021 with Actis. And so Tami, I am going to pass things over to you so you can go ahead and get started. (DESCRIPTION) Slide, Meet our speaker. New slide, Agenda, a bullet point list. Tami appears on the video call (SPEECH) Thank you. Thanks for having me today. So today we're going to talk about how to understand or prepare tactics and how we actually leverage the 3M M Modal CDI Engage One for our inpatient team. I'm going to talk about the impact automation has on some of your key performance indicators, understanding strategies to engage our physicians, how to leverage your data and focus the work through stabilization, and understanding our lessons learned in implementation. (DESCRIPTION) Slide, Why are we doing this? (SPEECH) So first we asked, why are we doing this? Leveraging technology to make CDI operations efficient, easy to manage, and to partner across departments with ease-- technology in many ways is really doing more with less as we are now empowered by artificial intelligence, so that was really the goal here. (DESCRIPTION) Slide, Who we are, a bullet point list and a picture of a hospital (SPEECH) So here's just a little bit about who we are. UC Davis is a 625-bed multidisciplinary academic medical center. We are a burn institute and a children's hospital as well.
We are in the process of building a new California tower which will add 75 additional beds. We serve 33 counties covering about 65,000 square miles, which is an area north to the Oregon border and east to the Nevada border. We're recognized as one of the most wired hospitals in the US. We are ranked Sacramento's top hospital by U.S. News & World Report, among the nation's best in 13 medical specialties, and we've been recognized as the best hospital four years in a row in the greater Sacramento area. (DESCRIPTION) Slide, Organizational Chart: Health Information Management (Patient Revenue Cycle) (SPEECH) I just want to give you a little bit of background about the organizational chart. CDI and coding report up through the revenue cycle. There is the CFO, and then an executive director, and then I am the director over coding and CDI services, but I also have a team of physician advocates, and those individuals actually are physician trainers. They help with documentation integrity by building templates and smart lists and dot phrases. They have a big role in helping to ensure documentation throughout the record. There's a coding manager both on the inpatient and outpatient side, there's an outpatient CDI supervisor and an inpatient CDI manager, and then we have a whole data quality integrity program as well that supports all of the analytics to drive KPIs and performance improvement. (DESCRIPTION) Slide, Homegrown auto-assignment & leveraging 3 M (SPEECH) So I'm going to start off by talking about how we were able to create homegrown auto assignment leveraging 3M. (DESCRIPTION) Slide, Birth of Auto Assignment - No direct integration with 3 M, a list (SPEECH) The starting point to making the most of AI was how we could develop some type of automation for assigning CDI daily cases. As you know, every morning this was a very manual process for us.
We would look at our admissions for that day and we'd have to manually distribute them and prioritize which ones we could review, factoring in how many people we had off, so it was a lot of manual work. It took about three to four hours to complete on a good day, and on Mondays it was much worse, as you can probably imagine. We had admissions from Friday, Saturday, and Sunday that we had to consider. It was our goal to fine-tune this process. So as we approached going live with CDI Engage One, we also talked about how we could automate assignment for CDI. We did initial reviews, and then we looked at Tableau assignment, and that was the approach that we took. We used historical data to identify the average number of new reviews, and then we used prioritization as a form of what we would do from a hierarchical viewpoint. (DESCRIPTION) Slide, Creating the Logic: How to Start, a list (SPEECH) So how we started is we created logic. We worked with some very talented report writers who created logic where we started with the hospital service and we changed that to the hospital division. We looked back three days, with logic to not duplicate. We exclude patients who are discharged, so if a patient's been discharged, they're excluded. We also excluded newborns, and basically the logic looked for any baby or any newborn admission type and/or a baby girl or baby boy within their name. (DESCRIPTION) Slide, Setting Max Accounts: Eliminating Reconciliation, a table (SPEECH) We also set max accounts. Eliminating reconciliation has allowed us to assign more cases, and I'll talk briefly about what we did. At UC Davis, we had a really high coding accuracy rate. We had two independent audits done on our coding. Our coding accuracy rates are around 99.96%, and I really felt that the time spent trying to determine why there was a DRG mismatch wasn't the best use of time and that there could be a better process in place.
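The exclusion rules just described for the auto-assignment logic (a three-day lookback, no duplicates, no discharged patients, no newborns) can be sketched roughly as follows. This is only an illustration of the rules as stated; the field names and the `eligible_for_auto_assignment` function are hypothetical, not the actual report-writer logic:

```python
from datetime import date, timedelta

def eligible_for_auto_assignment(case: dict, already_assigned: set, today: date) -> bool:
    """Hypothetical filter mirroring the described logic: look back three
    days, skip duplicates, discharged patients, and newborns."""
    # Only consider admissions from the last three days
    if case["admit_date"] < today - timedelta(days=3):
        return False
    # Logic to not duplicate: skip cases already assigned
    if case["account_id"] in already_assigned:
        return False
    # Exclude patients who have been discharged
    if case.get("discharge_date") is not None:
        return False
    # Exclude newborns: a newborn admission type, or "baby girl"/"baby boy" in the name
    name = case["patient_name"].lower()
    if case["admit_type"] == "newborn" or "baby girl" in name or "baby boy" in name:
        return False
    return True
```

In practice this kind of filter would run over the daily admissions feed before any cases are distributed to the CDI team.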
And so we eliminated the DRG reconciliation process for the CDIs on the front end. They're not doing any DRG reconciliation, but I do have a back-end reviewer who takes a look at all the DRG mismatches every day and provides individual feedback with any references, whether it's a coding clinic or something that was documented after their last review, and provides daily feedback to that staff, which enabled them to spend about 33% more of their day doing concurrent clinical reviews. What we did is we looked at each day of the week and we decided how we wanted to create the logic to assign cases, and this has been tweaked multiple times. So you may start out with saying, OK, on a Monday, if we have one person on PTO we're going to assign 10 cases to every CDI, but if we don't have anybody out on PTO maybe we'll do 11. So we programmed holidays into the system. We've connected this to an actual Teams calendar where employees put in their time off, so the logic recognizes when somebody is off and it doesn't assign a case to them. If we have two or more people out on a Monday, then 11 get assigned, and so on and so on. You get the gist: Tuesday's 8, and then Wednesday through Friday is 7, and if we have people working on the weekends it's 7. However, we've decided to tweak these numbers a bit, and so on Monday it's 11 or 12 depending upon the circumstances, on Tuesday it's 8 or 9 depending on the circumstances, and then Wednesday through Friday it's 7 or 8 depending on the circumstances. (DESCRIPTION) Slide, One-Size will not work, Program Flexibility and Triggers are key, a list (SPEECH) So one size fits all will not work. You have to be flexible, and triggers are the key. We created a database to check schedules, check when there are holidays or when staff is off, and so we created all of these checkpoints to make sure that the system recognized, per the logic we created, when not to assign a case.
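Those day-of-week caps amount to a small lookup table plus a staffing adjustment. The sketch below uses the tweaked numbers quoted above (Monday 11 or 12, Tuesday 8 or 9, Wednesday through Friday 7 or 8); modeling "the circumstances" as the number of staff out is a simplifying assumption, and `daily_case_cap` is a hypothetical name:

```python
def daily_case_cap(weekday: str, staff_out: int) -> int:
    """Hypothetical per-CDI assignment cap: more cases per remaining
    CDI when more people are out (PTO, holiday, etc.)."""
    base = {"Mon": 11, "Tue": 8, "Wed": 7, "Thu": 7, "Fri": 7}
    # "Depending upon the circumstances": bump the cap by one when
    # two or more CDIs are out that day.
    return base[weekday] + (1 if staff_out >= 2 else 0)
```

With this toy version, a Monday with two people out yields 12 cases per remaining CDI, while a fully staffed Wednesday yields 7; a real implementation would read the Teams time-off calendar and holiday database described above.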
(DESCRIPTION) Slide, Auto-assignment & concurrent reviews: Prioritization, a bullet point list (SPEECH) So, auto-assignment and concurrent reviews-- the prioritization within CDI Engage One made this a little bit easier, so I'll go over that. Our challenge with auto assignment was managing and organizing our concurrent reviews. The good news is that we had 3M with the key prioritization factor to assist with managing concurrent patients. And so what we did is we customized that prioritization list to look at all accounts that had just a single CC or a single MCC. We were looking at all mortalities, and we're looking at accounts with pending queries-- those are reviewed daily. We're looking at malnutrition cases because there's an organizational goal associated with that, and we're looking at certain sepsis cases because of the high clinical validation denial rates that we're starting to see. This has been ongoing, and prioritization will be ongoing as our KPIs organizationally change. So we're digging deeper and prioritizing accounts to maintain 20-40 total reviews per week right now. Our CDIs have anywhere between 36-40, not to exceed 40 total cases that they're reviewing, and that includes initial and re-reviews. We also said we want to remove cases from the priority list if they have two CCs or two MCCs, if they're optimized fully from the SOI and ROM perspective, and so on. So you can really customize that prioritization list to your needs and your organizational challenges and make changes to align with what you need. (DESCRIPTION) Slide, Leveraging 3 M: Concurrent review prioritization, a list (SPEECH) Concurrent review prioritization-- so priority scoring for concurrent reviews can provide and assist with CDI opportunity. So anytime there is a PSI, it falls on that priority list.
Medical or surgical cases without a CC or an MCC, if there's a symptom diagnosis that's driving the DRG, and then the 3M prioritization and scoring-- we can also customize that scoring as well. If we want to focus on and make certain things a priority, we can do that organizationally. (DESCRIPTION) Slide, Scoring & Priority Factors, a screenshot (SPEECH) This is just a screenshot of scoring and the priority factors; I just wanted to share that with you. There's kind of a lot on this slide so I won't go into it, but we've created some customization around this so that we can make prioritization effective for what our organizational needs are. (DESCRIPTION) Slide, C D I Teams: Prioritizing concurrent reviews, a screenshot (SPEECH) And this is just another snapshot of the CDI team's prioritization of concurrent reviews and how they look on the screen. (DESCRIPTION) Slide, C D I: Evidence sheets - heavy lifting by tool, a screenshot (SPEECH) We also have evidence sheets as part of the CDI Engage One tool, and they actually do a lot of the heavy lifting. What this does is alert the CDI if there's a potential query opportunity. In some cases, this may be something your CDI already has on their radar and is following, and so it's just confirmation that they're on the right track. And sometimes it may be something that they had overlooked or missed, and this is popping up to let them know that they should either keep it on the radar and follow it or that there's a query opportunity. So we use the evidence sheets as well. (DESCRIPTION) Slide, Other incentives I P C D I evidence sheets provide, two screenshots (SPEECH) There are other incentives that inpatient CDI evidence sheets provide as well, and this is what that looks like. So this is just another screenshot of what the evidence sheets and prioritization look like together. (DESCRIPTION) Slide, F Y 2021: Auto Assignment Data.
A bar graph comparing 2020 and 2021 shows the numbers for 2021 higher in all categories (SPEECH) We also did a comparison for fiscal year 2021, and you can see the impact we had as we changed to auto assignment and how many more cases we were able to get to when we compare 2020 to 2021. So this is just a slide to show that by eliminating your DRG mismatch reconciliation and then also using your prioritization tools, auto assigning, and evidence sheets, all of that automation can help with increasing the number of reviews and the number of cases that your team can touch. (DESCRIPTION) Slide, Query Rate: 2020 Compared to 2021, a line graph (SPEECH) This is also our query rate comparing 2020 to 2021, so our query rates went up as well. So what we did is we used the CDI Engage One evidence sheets, we turned on the auto assignment, and we also used our data to drive some of our improvement metrics to continue to tweak and refine some of the processes that we put in place. Again, that's going to be ongoing. I think no matter what you're doing, there's always going to be an opportunity to continue to enhance and improve on automation or processes that you've put in place, or how you prioritize your reviews. (DESCRIPTION) Slide, K P I Improvement Journey: Coding and Clinical Documentation Integrity (SPEECH) I'm going to go over a little bit of our key performance improvement and the journey we had with seeing improvement. (DESCRIPTION) Slide, What we did to improve K P I's, a bullet point list (SPEECH) So what we did to improve our KPIs: we expanded our CDI program, we discontinued the reconciliation process, which I've mentioned, we perform ongoing audits both on the coding and the CDI program, and we established back-end reviews and controls to ensure integrity. We've invested in technology-- the CAPD, the HCC Management, CDI Engage One, which includes those prioritization tools. And we do data analysis; we're big on data.
And we've done a lot of work around decreasing one-day stays. We found that as an organization, we were an outlier in that area and it did create some opportunities. And then template builds, utilization of dot phrases, smart lists, et cetera. (DESCRIPTION) Slide, bullet point list continued (SPEECH) Physician buy-in and education were also key. We also had to designate physician champions both on the inpatient and the outpatient side of the house for CDI. We aligned with our physician advisors, our case management team, our quality and safety, our patient financial services, and population health, and then we customized data and did analysis that was actionable for various service lines. So we leverage data analytics to drive improvement in documentation and operationally. (DESCRIPTION) Slide, Case management and leveraging 3 M, a screenshot (SPEECH) These next couple of slides will show some of what we've done with case management. If you're familiar with the working DRG, we basically send all cases over to case management via an interface when there is a working DRG assigned by the CDI, so that they have that geometric mean length of stay to help improve our outcomes with hospital length of stay. But we also realized that, hey, they're not touching 100% of every case, and what could we do to get them a working DRG on every case? Well, there is also an auto-suggested DRG. So if the CDI doesn't touch the case, the CAC will come in, review the record, auto assign an MS-DRG, and that will also interface over to the case management team so they have that geometric mean length of stay. We did basically educate them on the fact that this is not a human being touching this, this is all AI, and that things could change by discharge.
So they understand that this is just a preliminary look based on documentation in the record, but it has really helped that team understand the geometric mean length of stay and how our patients should be managed in terms of trying to discharge them in a timely manner. (DESCRIPTION) Slide, Epic View, a screenshot (SPEECH) This is just a view of where they can see that in Epic. So again, there's an interface that goes out of 3M into our EHR, and that's where they find that information in the chart. (DESCRIPTION) Slide, Case Mix Index, a line graph and two bar graphs. All three graphs show a steady increase over time (SPEECH) So this is just a snapshot of case mix index. While case mix index isn't a great indicator of CDI work, it is something that we have tracked as a KPI for CDI because we do have some impact, especially when we talk about capturing CCs and MCCs to drive that case mix up. But you can see right around here is where we implemented our artificial intelligence, and you can see the impact it's had both on our adult population and our pediatric population. (DESCRIPTION) Slide, C C slash M C C Capture rates, two line graphs, two bar graphs, and a scatter plot. All graphs show a steady increase over time (SPEECH) Now, while I just mentioned CMI is not always a great key performance indicator for CDI, in my humble opinion CC/MCC capture rates are. And as you can see here, the same trend that happened with our adult case mix index is happening with our adult CC and MCC capture and our pediatric CC/MCC capture. Not only that, but when you come over here on this slide to the right, you can see the trend from fiscal year 2020 to fiscal year 2021. And you can see over here where it says AMC distribution-- basically these gray dots are all academic medical centers and where they fall with regard to their CC and MCC capture rate.
And we're this dark blue dot here, so we're technically in the top 10% of all academic medical centers within our benchmark group, and there are 180 or so academic medical centers. And this tells me this is really a direct reflection of CDI work. In fact, I can take this data and quantify, using some of the data that we have within 3M, that the CC or the MCC was a direct reflection of either querying, or CAPD, or one of the metrics that we're actually using to touch cases. (DESCRIPTION) Slide, Strategies to engage physicians. New slide, Phase 1: Kicking Off the Project (Initiation of Partnerships), a bullet point list (SPEECH) The next couple of slides will be strategies on how to engage your physicians. It's not always easy kicking off the project; we really had a large group of individuals. We partnered with our system administrator, our service line medical directors, and our physicians-- they're obviously key. So depending on your environment, we partnered with attending physicians to meet and kick off the project, and began to establish partnerships with clinic managers and physician specialties to leverage physician connections with medical assistants and nursing teams as well. This does work virtually if executed correctly, because we had to do it that way due to COVID, so I can say without a doubt that it can be done. Again, when presenting, keep it to 15 minutes and always be ready to do a demo that works perfectly. So when we were meeting with them to talk about CAPD, and why it's important, and why we were rolling this product out, there were a lot of questions about, why are we doing this? This is one more thing that we have to do. And really the education was focused on why CAPD capture is important and on leveraging any data available-- RAF scores, MIPS, risk adjustment. So we talked about how this product actually engages with the physician in real time at the point of care.
Instead of receiving a query two or three days later, this really is something that will ping you in real time for you to enhance your documentation. And so you've got to keep at it; you're going to get physicians who are going to be naysayers, honestly, or are just not interested in hearing what you have to say. And so what we tried to do is get some champions behind us, get physicians to see the importance behind this product, and we kept at it. We kept customizing, and tweaking, and turning things on and off, and doing what we could to make this as meaningful as possible for them, because if it's not meaningful for the providers, they're not going to engage with it. My one takeaway here is that it was not immediately accepted-- physicians weren't readily receptive to this-- but we kept at it, we kept working with them, we kept enhancing things, we kept customizing things, and that's where we really got physician buy-in and engagement. (DESCRIPTION) Slide, Phase 2: How to Engage Physicians (Resources), a bullet point list (SPEECH) Resources are essential. So, tools for physicians: tip sheets, videos-- we actually sent out a video-- and we actually have an EMR newsletter, and we sent out some information in that. So wherever we could create tools or ways or enhancements, we did. Again, we kept it to five minutes. Our last video was eight minutes when we recently launched HCC Engage with our providers, and the feedback was that it was too long, so we condensed it. Focus on showing physicians how to answer and engage with the tool in these videos. And then, for your physicians, you need to have educators and trainers and people who can be shoulder to shoulder with the providers if they have questions, who can train them how to use this or walk them through every little nuance. It may be something like, how do I dock this and get it out of the way while I'm doing my charting?
And so that's what we did-- we made sure that we had somebody available for these physicians whenever they had a question or a concern. (DESCRIPTION) Slide, Phase 3: Continuous Partnership, a bullet point list (SPEECH) Again, continuing to partner. We believe in continuing partnerships with key stakeholders to leverage technology to ensure successes. We identify key stakeholders and design workflows for automation, and we leverage data to facilitate engagement. Using data, going through the meaning behind the nudge, and inviting physicians to the table has been extremely helpful. So when you're creating a nudge, especially a custom nudge with the CAPD, you want to look at that clinical content to make sure that the nudge is firing and is meaningful to the providers. For example, there were some ad hoc, out-of-the-box nudges within the content guide that 3M provided. One of them was on sodium and hyponatremia, and it fired when there was just one abnormal lab value, and our physicians said, no, we don't want that. This is what we want: we want there to be two abnormal lab findings, and we also want to know this, this, and this. And so what we did is we worked with the content team at 3M. And we said, we'd like to revise the current nudge that you have on hyponatremia and customize it to something that is a little more meaningful to our physicians. And getting their buy-in on all of that, especially on the pediatric side and the children's hospital and different things like that, has really been key. So having a physician who's willing to go over the clinical content that's going to fire that nudge will be key for your organization. Again, I can't stress it enough: be flexible. Data may change, workflows will change, but keep working the plan, and keep on making this something that is meaningful for the providers. How can we help? How can we change things? What would make this better?
And getting that feedback and making those tangible changes will have impact. (DESCRIPTION) Slide, C A P D (Computer Assisted Provider slash Physician documentation) (SPEECH) So, data focus and insights on the rollout with physicians-- I'll go over some of that on the next slide. (DESCRIPTION) Slide, Define C A P D focus and nudge definition: Ongoing, a bullet point list (SPEECH) So focus on the clinical conditions and procedures turned on. Define what a nudge means to your provider community: a clinical diagnosis or procedure that has clinical evidence and a physician message. Always review the data and always provide an overview of all nudges-- the rule and the physician message. And then the customization, as I talked about, is really the key for us, especially with the children's hospital. There is not a whole lot of clinical content in the clinical content guide that 3M offers on the pediatric side of the house, and so we really have been successful with customizing those nudges to make them meaningful for that population of patients. (DESCRIPTION) Slide, C A P D - The Why on Streamlining Physician Engagement, a list of goals (SPEECH) So, the why on streamlining physician engagement: physician documentation guidance using evidence-based clinical definitions, having a virtual conversation to add the critical details that impact treatment and outcomes, engaging physicians at the point of care to reduce queries, and then overall quality improvement in patient care outcomes. That's really your clinical decision arm, so those were the goals. But also, on engaging physicians at the point of care to reduce queries, what we found is that some of these nudges we could turn on-- things like CHF acuity or acute blood loss anemia-- are things that physicians have been queried on routinely at our organization and have done a really good job at addressing. And so we don't have a whole lot of opportunity there.
But what we found was that there was opportunity with certain things. We ran a lot of data, we looked at what our number one query was organizationally and by service line, and got really granular, and we were very specific and deliberate about what we turned on, and where we turned it on, and for whom. (DESCRIPTION) Slide, What is required for a nudge to fire? (Repeat slash Rewind), a picture of a fire in a fireplace (SPEECH) And then using the data that we have has really, really been key, so that we can go back to providers and say, here is your capture rate on this diagnosis for this patient population, and here is where the rest of your peers within an academic medical setting are capturing this. And when they can see something tangible-- like, hey, I'm only capturing this diagnosis 5% of the time compared to my peers who are capturing it 25% of the time-- they're very engaged and interested in what they can do to be better at documenting this specific condition or whatever it might be. (DESCRIPTION) Slide, A Nudge Requires, a bullet point list (SPEECH) So this next slide will be basically what's required for a nudge to fire, and then it's really just going to be repeat and rewind from here on out. So a nudge requires one specific criterion: a rule that points to the clinical evidence or documentation we want the tool to reason over before firing. For example, the clinical note says sodium is 128. The program fires this nudge for a clinical diagnosis as it relates to clinical evidence-- evidence of hyponatremia, a sodium level, without explicit physician mention-- and a physician message will populate in the Fluency Direct pill, and that's part of the CAPD. And it will say something like, we have identified electrolyte imbalances; if appropriate, please document the associated diagnosis. The diagnosis is hyponatremia. A clinician can then replace the sodium.
There really is, again, a content guide that's provided to you out of the gate from 3M, and you'll have to take a look at that clinical content to see if it's actually something that you would query a provider on. And if it's not, you're going to want to tweak it and customize it to your organizational needs.

(DESCRIPTION) Slide, July 2021 C A P D Data: Top 4 clinical conditions reviewed for accuracy and review. Data source: 3 M C A P D utilization reports. New slide, line graphs for five conditions (SPEECH) So this next slide is just a data source. This is where we're at today with the UC Davis CAPD utilization, and I just looked at the top five nudges that we have turned on, which are diabetes, respiratory failure, a-fib, kidney disease, and cardiovascular congenital conditions. You can see the overall compliance rate for those right now is 77%, but mind you, when we first went live we were in the 25% to 30% range, so this is significant improvement in less than a year. And I think if you stick to the program, you'll start to see compliance rates up in the 80% to 90% range, which is where you ideally would like to be.

(DESCRIPTION) Slide, a table showing diagnosis, rule, message, and evidence (SPEECH) This is just a snapshot of what the nudge rule looks like. So for anemia specificity, you're going to look at the clinical diagnosis, the clinical rule, and what the physician message looks like. We updated this for surgery because it would show up as a blood disorder, and so the physicians were kind of confused: what do you want from me? A blood disorder could mean something like pancytopenia, it could mean something like leukemia, so what is it that you want from me? So we worked to address that issue and created a custom nudge that actually said anemia. You can see the same thing for hyponatremia and acute respiratory failure: what's actually being used in terms of the clinical rule, the physician message, and the supporting evidence.
And these are all things that can be customized. If something in the content guide offered by the vendor, 3M, is not applicable to you, you can customize it, which again has been key for us, because a lot of things really made a difference for us when we customized them, and that's where we started to see higher compliance rates.

(DESCRIPTION) Slide, an excerpt from the table for Heart Failure Specificity (SPEECH) This is just another snapshot of what heart failure looks like: the clinical rule, the physician message, and some of the supporting clinical evidence for the nudge to fire.

(DESCRIPTION) Slide, Lessons Learned, a hierarchical data tree (SPEECH) When we talk about lessons learned, I think I've gone over some of those already. But focus on which physician groups you want to start with. We were very deliberate about that; we actually piloted a group of physicians. We had one surgeon, we had one pediatric physician, we had a hospitalist, and I think we had maybe a specialty physician as well. And we looked at all of the data that we had on our current queries, the percentage of queries we were sending, and what the top queries were, and we turned those nudges on. We piloted it and we got a lot of feedback, and we got a lot of information that we were able to take back to improve things, tweak things, and customize things. And before we went live, we made sure that all of that feedback was taken into consideration to improve outcomes. CAPD can work, but be patient and don't give up. I mean, that was our thing. We have about 3,000 physicians turned on now, and of those 3,000, I think five were absolutely adamant that they wanted it turned off. They were great documenters already, they didn't feel they needed this; it was just one more thing that they didn't want to deal with. And so we think that's successful in our eyes.
We worked with them to try to convince them of the value of this tool, but to no avail, so I think you have to really work with physicians, make this meaningful to them, and customize it to their needs. Always acknowledge a physician when they're providing feedback, especially if they're complaining. What I like to do is say, hey, all your points are valid; what can I do to make this better? How can I help you document better? What can we do? And then we take their feedback and we work with them individually. I think when they are involved, or feel like they have a voice, they are a lot more open to working with you and engaging with the tools.

(DESCRIPTION) Slide, a screenshot of a diagnostic form (SPEECH) Again: customization, know the documentation, keep things in perspective. Remember, this is a computer, but you can make it work. Customization for us, I can't say it enough, has been key. We're going to continue to customize; we are basically just scratching the surface with customization, and I think we're at a point in time where we can be very deliberate and very meaningful about what we turn on for providers, to ensure engagement continues to go up, the product continues to be meaningful, and we continue to see impact on our overall key performance indicators. And I think that is my last slide.

(DESCRIPTION) Slide, Q & A (SPEECH) It is. Thank you so much, Tami. The information is just incredible, as is what your team was able to stand up. We do have a couple of questions that I think would be good for you to address. We do have a little bit of extra time, though I just want to be cognizant of the time for everyone. The first one: thank you to Deanne, who said that they really enjoyed your presentation and also asked, is there any work you have done on the day one stays? Yes, for the one day stays, excuse me, I flip-flopped that. Yeah, so I'm glad you asked that question.
So as you know, CDIs sometimes really can't even get to the one day stays, and so we've excluded them from the reviews that are being done by the CDI team. But what we did is we went to leadership, and we acknowledged that when we ran the data, we found we were an outlier for one day stays, in terms of the percentage of patients who were here one day, went home, and were counted as an inpatient admission; we were an outlier compared to our peers. I think it was 25% of our patients who were here one day. We noted that it diluted our case mix index and diluted our CC/MCC capture. It diluted our mortality and it artificially inflated our length of stay metrics. And so we went to leadership and said, it also impacts throughput. We took these numbers to our case management team and asked, could these patients be better served in an observation status or an outpatient bed, to determine appropriateness of admissions? And the other thing we said is, we can't get to them from a CDI perspective to try to optimize them. So we took a different approach with how we were going to address one day stays, and we ran data on the top DRGs for the one day stays, both on peds and adults. We found on peds it was asthma, and we found there was a best practice at NYU where they created a clinical pathway in the ED for pediatric patients and observed them at 2, 4, and 6 hours; if they had improved after six hours they were put in observation, and if they hadn't, they were admitted. On the adult population it was things like gastroenteritis and seizures, and we created a similar clinical pathway for those. And so the CDI team really took that information back to the clinical teams and said, here's what it looks like: here are your top DRGs; could there be something like NYU did, here at UC Davis? And what we saw immediately was a correlated increase in CC/MCC capture and an increase in our CMI, as well as our mortality metrics.
So we don't look at one day stays because there's a delay, obviously, with physicians getting documentation on the record, and there's not a whole lot of opportunity for the CDI to review them. And as you know, it's almost meaningless to review the case on day one without that documentation in there, and then the patient goes home the next day. So the approach we took was operational: could these patients be removed from our review process and from the observed outcomes, and what could we do better organizationally? I hope that answered your question. Yes, absolutely. And we do have a bunch coming in; that kind of stirred some questions, so we will get to a few more here. How many nudges do you have active for each service line, and how did you select which nudges to turn on? So we were very deliberate about that; again, we pulled data. For example, we pulled the hospitalist group data and looked at the top five queries for that group, and then we turned those on. We were very specific about not turning on a ton of nudges; we were very deliberate about making sure it was meaningful. So about five to seven per service line, and it was driven off of the data we pulled to see which queries we had already been sending from a CDI perspective. But I would urge everybody to keep it to no more than seven and be deliberate about how you turn them on. Look at your current data, look at your current query patterns, look at your service lines. There are things that aren't going to be meaningful to surgeons that are meaningful to hospitalists, and so that's how I would approach it. Fantastic. Before we get to the next one, I just want to answer one question quickly from Jessica, who asked if CDI Engage One is available, and it is.
And so if you would like someone from our team to contact you, click on that middle button in the portal; that will take you to a form to complete, and we can follow up with you to talk about it. Let's go ahead; we do have time for a couple more. How will this process evolve to help with prior authorization and denials? So I'm not sure yet, but I do believe there is an opportunity for us to work on making sure that we get the documentation on the record, especially with sepsis, specifically the core criteria that we're seeing denials on now. My goal is to eventually use this in a way where we can get documentation on the record to demonstrate medical necessity, and also the clinical evidence, to avoid some of those clinical validation denials that we're seeing now for things like sepsis and malnutrition. Great. We have a question that asked, how long after admit do you do your first review? So initial reviews are done two to three days after the admission, and then our re-reviews are done every two to three days as well, depending upon the complexity of the case and what it is they're looking at. So we give our CDIs a choice. So yes, that's our current state. All right. I think this kind of goes along with it: how long should a chart be on hold for a query reply? So we have processes in place for query escalation. Concurrently, if CDI has sent a query and there's not an answer within 48 hours and the patient's still in the house, there's a query escalation process where we escalate to our physician advisors through a portal we created on Microsoft Teams. If it's a retrospective query and it's something that's being held for a CC or an MCC, or for procedures that will drive or change your DRG, we hold up to 10 days retrospectively, and only in the event where it's maybe a reportable outcome for quality, like I said, or a procedure question.
But we typically don't ever have to hold for 10 days; I will say our physicians are pretty good at getting back to us within 72 hours, retrospectively anyway. Perfect. With the nudges, how often do you evaluate the response improvement to documentation and adjust nudges to continue to target the top diagnoses? So we look at this monthly. Yeah. Wow, that's a lot, and probably a lot of work for your team. Rhonda asked, we cannot lead to a diagnosis in a query; isn't providing a diagnosis in a nudge leading? And are nudges visible to others and a part of the permanent record? So we don't lead either; those are the rules we were talking about. There must be that clinical evidence, those risk factors, and that treatment, and you build the nudge to make sure it has those things in place so that you don't lead. What it does is tell the provider that there is a diagnosis based on this treatment, this lab value, this X-ray finding, whatever it might be, and they document that in the record. And so we're very careful about not leading the providers, and about having that clinical evidence to ensure accuracy. So we are compliant with that. This is a product that will only nudge when clinical evidence, risk factors, and treatment exist. And that's one of the things I was talking about: sometimes the clinical evidence may be just one nudge, and I wasn't-- I'm sorry, one abnormal lab finding, like the sodium we talked about earlier. And in my opinion, that could be dilution from surgery, or it could be something completely unrelated to a true diagnosis of hyponatremia. So we weren't comfortable turning that on, and we made deliberate changes to the clinical evidence required for this to fire. So it will require some work on your end to make sure that you are not leading the provider.
So our queries are a permanent part of the record. Our nudges fire for the physicians, basically; they see each one as it fires and they document it in the record. Fantastic. Well, we do have a couple more questions that we will follow up on afterward. And so, Tami, I do want to thank you for your time today.

(DESCRIPTION) Slide, That's a wrap! (SPEECH) We've had a lot of comments just to say how great the information and the presentation were, so we greatly thank you for that. Just a reminder to attendees: the certificate of attendance can be downloaded, and if you do want to submit it for credits to an association to obtain CEUs, you can. We also provided the handout in the resources section; those are both there. If you are interested in learning more about CDI Engage One, which was discussed today, you can click that button in the middle and let us know, and we'll follow up with you. The archive of this recording will be on our website in the next couple of weeks, so if you do want to go back and listen, you can. And lastly, we will be here again. We're doing these every other month, and I can't believe it's already almost March, so in April we will be back with another CDI Innovation Webinar; be on the lookout to register. And we appreciate your feedback, so please complete the survey at the end. So again, Tami, we cannot thank you enough for your time today, and we welcome you back anytime. Have a great rest of the day, and to everybody else who joined, we thank you.
UC Davis Director of Coding and CDI Services Tami Gomez and her team have a mission: Build a gold-standard CDI program, with streamlined workflows that allow physicians to focus on patient-centered care. To support this goal, UC Davis implemented 3M’s advanced AI and NLU technologies, automatically embedding clinical intelligence into normal physician and CDI workflows.
Join Tami for an inside look at UC Davis’ operations and transformation strategy. You’ll learn how the team laid the groundwork for new technology, how they’re using automation to drive key performance indicators, and how they approach physician engagement. Tami will also cover lessons learned to date, along with how the organization is using data to continually improve and optimize.
The 3M CDI Innovation Webinar Series offers in-depth sessions with 3M experts and clients on a wide variety of emerging CDI challenges and opportunities such as shifting care settings, evolving payment models, advancing technology, rising consumerism and much more. Subscribe here to stay in the loop.